Dataset schema (column name: dtype, observed range or string-length range):
Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, -10 to 5.87k
3,000
60,417,835
Using TensorFlow to extract meta data containing numbers
<p>I am new to ML/DL but I am looking for a way to extract meta data from text and I figured ML could be a good solution.</p> <p><strong>Objective</strong></p> <p>Input: Sentence containing a field descriptor and a value/values e.g:</p> <blockquote> <p>"Non-current assets 5 675 5 512 4 789 4 586"</p> <p>"Cash and cash equivalents 909 861 912 630"</p> <p>"Inventories, trade and other receivables and other current assets 3 756 2 998 2 864 2 834"</p> <p>"Total assets 10 340 9 372 8 565 8 051"</p> <p>"Equity 5 649 4 560 2 365 1 969"</p> <p>"Non-current liabilities 2 438 2 403 3 270 2 407"</p> <p>"Current liabilities 2 253 2 409 2 931 3 675"</p> </blockquote> <p>I have done some research and know that the words need to be embedded (Using Word2Vec or something similar). But how are the numbers handled?</p> <p>Output: Tuple {field: value}</p> <blockquote> <p>{non_current_assets: 5675}</p> <p>{cash_and_cash_equivalents: 909}</p> <p>{total_assets: 10340}</p> <p>{equity: 5649}</p> <p>{non_current_liabilities: 2438}</p> <p>{current_liabilities: 3756}</p> <p>{inventories: 3756}</p> </blockquote> <p><strong>Questions</strong></p> <ol> <li>Is it possible to solve using ML? If so: <ol> <li>How should I format the input data?</li> <li>What algorithm is best suited for this task?</li> </ol></li> </ol>
<p>Your question is not completely clear.</p> <p>If you just have strings consisting of some text followed by numbers and you want {text: number}, you should simply split at the first digit rather than use ML. If your strings sit inside a larger text document it is a different problem, but a full example would make that easier to answer.</p> <p>If your sentence is within a larger text, for example:</p> <pre><code>text = " If your import is failing due to a missing package, you can use pip. Non-current assets 5 675 5 512 4 789 4 586. We also expect cash equivalents 909 861 912 630 in toal" </code></pre> <p>You could use part-of-speech tagging and chunking to detect nominal groups that come before numbers:</p> <p>like in this: <a href="https://medium.com/@acrosson/extracting-names-emails-and-phone-numbers-5d576354baa" rel="nofollow noreferrer">https://medium.com/@acrosson/extracting-names-emails-and-phone-numbers-5d576354baa</a></p> <pre><code>import nltk from nltk.corpus import stopwords nltk.download('stopwords') nltk.download('punkt') stop = stopwords.words('english') document = ' '.join([i for i in text.split() if i not in stop]) sentences = nltk.sent_tokenize(document) sentences = [nltk.word_tokenize(sent) for sent in sentences] nltk.download('averaged_perceptron_tagger') sentences = [nltk.pos_tag(sent) for sent in sentences] </code></pre> <p>then <code>sentences</code> is:</p> <pre><code>[[('If', 'IN'), ('import', 'NN'), ('failing', 'VBG'), ('due', 'JJ'), ('missing', 'VBG'), ('package', 'NN'), (',', ','), ('use', 'NN'), ('pip', 'NN'), ('.', '.')], [('Non-current', 'JJ'), ('assets', 'NNS'), ('5', 'CD'), ('675', 'CD'), ('5', 'CD'), ('512', 'CD'), ('4', 'CD'), ('789', 'CD'), ('4', 'CD'), ('586', 'CD'), ('.', '.')], [('We', 'PRP'), ('also', 'RB'), ('expect', 'VBP'), ('cash', 'NN'), ('equivalents', 'NNS'), ('909', 'CD'), ('861', 'CD'), ('912', 'CD'), ('630', 'CD'), ('toal', 'NN')]] </code></pre> <p>You can then define a regex-based grammar over the tags to detect a nominal group followed by a number, like:</p> <pre><code>grammar = """MATCH:{&lt;JJ&gt;&lt;NNS&gt;&lt;CD&gt;}""" #grammar would need to be completed cp = nltk.RegexpParser(grammar) for sentence in sentences: print(cp.parse(sentence)) </code></pre> <p>which returns:</p> <pre><code>(S If/IN import/NN failing/VBG due/JJ missing/VBG package/NN ,/, use/NN pip/NN ./.) (S (MATCH Non-current/JJ assets/NNS 5/CD) 675/CD 5/CD 512/CD 4/CD 789/CD 4/CD 586/CD ./.) (S We/PRP also/RB expect/VBP cash/NN equivalents/NNS 909/CD 861/CD 912/CD 630/CD toal/NN) </code></pre> <p>It would be much more difficult to do this from scratch with TensorFlow if you're not an expert, I think.</p>
python|tensorflow|machine-learning
0
3,001
60,403,585
Merging Pandas DataFrames with keys in different columns
<p>I'm trying to merge two Pandas DataFrames which are as follows:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'PAIR': ['140-120', '200-280', '350-310', '410-480', '500-570'], 'SCORE': [99, 70, 14, 84, 50]}) print(df1) PAIR SCORE 0 140-120 99 1 200-280 70 2 350-310 14 3 410-480 84 4 500-570 50 df2 = pd.DataFrame({'PAIR1': ['140-120', '280-200', '350-310', '480-410', '500-570'], 'PAIR2': ['120-140', '200-280', '310-350', '410-480', '570-500'], 'BRAND' : ['A', 'V', 'P', 'V', 'P']}) print(df2) PAIR1 PAIR2 BRAND 0 140-120 120-140 A 1 280-200 200-280 V 2 350-310 310-350 P 3 480-410 410-480 V 4 500-570 570-500 P </code></pre> <p>If you take a closer look, you will notice that each value in the <code>PAIR</code> column of <code>df1</code> match either the value in <code>PAIR1</code> or <code>PAIR2</code> of <code>df2</code>. In <code>df2</code>, the keys are present in both ways (e.g. <strong>140-120</strong> and <strong>120-140)</strong>.</p> <p>My goal is to merge the two DataFrames to obtain the following result: </p> <pre><code> PAIR SCORE BRAND 0 140-120 99 A 1 200-280 70 V 2 350-310 14 P 3 410-480 84 V 4 500-570 50 P </code></pre> <p>I tried to first merge <code>df1</code> with <code>df2</code> the following way:</p> <pre><code>df3 = pd.merge(left = df1, right = df2, how = 'left', left_on = 'PAIR', right_on = 'PAIR1') </code></pre> <p>Then, taking the resulting DataFrame <code>df3</code> and merge it back with <code>df2</code>: </p> <pre><code>df4 = pd.merge(left = df3, right = df2, how = 'left', left_on = 'PAIR', right_on = 'PAIR2') print(df4) PAIR SCORE PAIR1_x PAIR2_x BRAND_x PAIR1_y PAIR2_y BRAND_y 0 140-120 99 140-120 120-140 A NaN NaN NaN 1 200-280 70 NaN NaN NaN 280-200 200-280 V 2 350-310 14 350-310 310-350 P NaN NaN NaN 3 410-480 84 NaN NaN NaN 480-410 410-480 V 4 500-570 50 500-570 570-500 P NaN NaN NaN </code></pre> <p>This is not my desired result. I don't how else I can account for the fact that the correct key might be either in <code>PAIR1</code> or <code>PAIR2</code>. Any help would be appreciated.</p>
<p>Somewhat clumsy solution: build a Series that maps each pair in <code>df2</code> to its corresponding brand, then pass this mapping to <code>df1['PAIR'].map()</code>.</p> <pre><code># Build a series whose index maps pairs to values mapper = df2.melt(id_vars='BRAND').set_index('value')['BRAND'] mapper value 140-120 A 280-200 V 350-310 P 480-410 V 500-570 P 120-140 A 200-280 V 310-350 P 410-480 V 570-500 P Name: BRAND, dtype: object # Use the mapper on df1['PAIR'] df1['BRAND'] = df1['PAIR'].map(mapper) df1 PAIR SCORE BRAND 0 140-120 99 A 1 200-280 70 V 2 350-310 14 P 3 410-480 84 V 4 500-570 50 P </code></pre>
python|pandas|dataframe
3
3,002
60,603,500
How to merge two datetime columns into one? Pandas Python
<p>I would like two transform two columns begin and end:</p> <pre><code> begin end 0 NaN 2019-10-21 07:48:28.272688 1 NaN 2019-10-21 07:48:28.449916 2 2019-10-21 07:48:26.740378 NaN 3 2019-10-21 07:48:26.923764 NaN 4 NaN 2019-10-21 07:48:41.689466 5 2019-10-21 07:48:37.306045 NaN 6 NaN 2019-10-21 07:58:00.774449 7 2019-10-21 07:57:59.223986 NaN 8 NaN 2019-10-21 08:32:37.004455 9 2019-10-21 08:32:35.755252 NaN </code></pre> <p>into one column timestamp with an other column flag : </p> <pre><code> Timestamp Flag 0 2019-10-21 07:48:28.272688 end 1 2019-10-21 07:48:28.449916 end 2 2019-10-21 07:48:26.740378 begin 3 2019-10-21 07:48:26.923764 begin 4 2019-10-21 07:48:41.689466 end 5 2019-10-21 07:48:37.306045 begin 6 2019-10-21 07:58:00.774449 end 7 2019-10-21 07:57:59.223986 begin 8 2019-10-21 08:32:37.004455 end 9 2019-10-21 08:32:35.755252 begin </code></pre> <p>But at the moment I can't find a solution to merge the two column begin and end into one.</p> <p>Thank you for your time !</p>
<p>Use <code>numpy.where()</code>:</p> <pre><code>df['Timestamp'] = np.where(df['begin'].isna(), df['end'], df['begin']) df['flag'] = np.where(df['begin'].isna(), ['end'],['begin']) </code></pre> <p>If your null value <code>NaN</code> is a string instead, use it as your condition.</p>
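<p>A minimal end-to-end sketch of this approach, assuming the <code>begin</code>/<code>end</code> columns are already parsed as datetimes (the two-row sample below is hypothetical and only mirrors the question's layout; it is not from the original answer):</p> <pre><code>import numpy as np
import pandas as pd

# Hypothetical sample with one missing value in each column
df = pd.DataFrame({
    'begin': pd.to_datetime([None, '2019-10-21 07:48:26.740378']),
    'end':   pd.to_datetime(['2019-10-21 07:48:28.272688', None]),
})

# Where 'begin' is missing take 'end', otherwise keep 'begin'
df['Timestamp'] = np.where(df['begin'].isna(), df['end'], df['begin'])
df['Flag'] = np.where(df['begin'].isna(), 'end', 'begin')

print(df[['Timestamp', 'Flag']])
</code></pre>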
python|pandas|dataframe|datetime
2
3,003
59,722,983
How to calculate geometric mean in a differentiable way?
<p>How to calculate the geometric mean along a dimension using PyTorch? Some numbers can be negative. The function must be differentiable.</p>
<p>A known (reasonably) numerically-stable version of the geometric mean is:</p> <pre class="lang-py prettyprint-override"><code>import torch def gmean(input_x, dim): log_x = torch.log(input_x) return torch.exp(torch.mean(log_x, dim=dim)) x = torch.Tensor([2.0] * 1000).requires_grad_(True) print(gmean(x, dim=0)) # tensor(2.0000, grad_fn=&lt;ExpBackward&gt;) </code></pre> <p>This kind of implementation can be found, for example, in SciPy (<a href="https://github.com/scipy/scipy/blob/1cc8beed5362ed290f5a8ddf4e99db49b4de6286/scipy/stats/mstats_basic.py#L268-L271" rel="nofollow noreferrer">see here</a>), which is a quite stable lib.</p> <hr /> <p>The implementation above does not handle zeros and negative numbers. Some will argue that the geometric mean with negative numbers is not well-defined, at least when not all of them are negative.</p>
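<p>If zeros can occur in the input, one possible workaround (not part of the original answer, and only a sketch under the assumption that clamping to a small epsilon is acceptable for the application) is:</p> <pre><code>import torch

def gmean_clamped(input_x, dim, eps=1e-8):
    # Clamp to a small positive value so log() stays finite.
    # Note: this biases the result for entries at or below eps,
    # and gradients are zero for those clamped entries.
    clamped = torch.clamp(input_x, min=eps)
    return torch.exp(torch.mean(torch.log(clamped), dim=dim))

x = torch.tensor([0.0, 2.0, 8.0], requires_grad=True)
print(gmean_clamped(x, dim=0))
</code></pre>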
pytorch
9
3,004
61,867,441
split dataframe column into multiple columns
<p>I have been sent a file. I have read it in as a dataframe which contains only one column and over 1,000,000 rows. Each row is a mixture of numbers and text.</p> <p>I tried the line below.</p> <blockquote> <p>data = data.str.split('/t',expand=True)</p> </blockquote> <p>However, I get the error below:</p> <blockquote> <p>AttributeError: 'DataFrame' object has no attribute 'str'</p> </blockquote> <p>I thought maybe it was because it's of type object and not string, so I tried the line below; however, that seems to have no effect.</p> <blockquote> <p>data.astype('str')</p> </blockquote> <p>How can I split this column?</p>
<p>I think you have a one-column <code>DataFrame</code>, so you can select the first column by position with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p> <pre><code>data = data.iloc[:, 0].str.split('/t',expand=True) </code></pre> <p>Or, if possible, select the first column by name:</p> <pre><code>data = data['col'].str.split('/t',expand=True) </code></pre>
python|pandas
1
3,005
61,965,890
Pandas DataFrames If else condition on multiple columns
<p>I have a Data frame as shown below </p> <pre><code>import pandas as pd df = pd.DataFrame({ "name": ["john","peter","john","alex"], "height": [6,5,4,4], "shape": ["null","null","null","null"] }) </code></pre> <p>I want to apply this--- If name == john and height == 6 return shape = good else if height == 4 return shape = bad else change the shape to middle so the final Dataframe should look like this</p> <pre><code> df = ({ "name": ["john","peter","john","alex"], "height": [6,5,4,4], "shape": ["good","middle","bad","bad"] }) </code></pre> <p>The only library I want to use is 'Pandas' and I do NOT want to use 'lambda' or 'NumPy'. Thanks in advance for your time. I will upvote your answers. </p>
<p>Let us do <code>np.select</code></p> <pre><code>import numpy as np cond1=df.name.eq('john')&amp;df.height.eq(6) cond2=df.height.eq(4) df['shape']=np.select([cond1,cond2],['good','bad'],'middle') df name height shape 0 john 6 good 1 peter 5 middle 2 john 4 bad 3 alex 4 bad </code></pre>
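<p>Since the question asks to avoid NumPy and lambdas, a pandas-only alternative with boolean masks and <code>.loc</code> would look like the sketch below (this variant is not from the original answer):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    "name": ["john", "peter", "john", "alex"],
    "height": [6, 5, 4, 4],
    "shape": ["null", "null", "null", "null"],
})

# Start everyone at the fallback value, then overwrite the special cases
df["shape"] = "middle"
df.loc[df["height"].eq(4), "shape"] = "bad"
df.loc[df["name"].eq("john") & df["height"].eq(6), "shape"] = "good"

print(df)
</code></pre>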
python|pandas|dataframe|if-statement|conditional-statements
1
3,006
61,920,569
Tensorflow Image Generator passing Tensor with dtype=string instead of Tensor with dtype=float32 to loss function
<p>I am following the YOLO v1 paper to create an object detector from scratch with Tensorflow and python. My dataset is a set of images and a 7x7x12 Tensor that represents the label for the image. I import the image names and labels (as a string) into a dataframe from a CSV using pandas, and then do some operations on the labels to turn them into Tensors. I then create a generator using ImageGenerator.flow_from_dataframe(), and the feed that generator as the input for my model. I end up getting the following error when the model tries to call the custom loss function that I created:</p> <pre><code>File "/home/michael/Desktop/YOLO_Detector/src/main.py", line 61, in &lt;module&gt; epochs=10) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit use_multiprocessing=use_multiprocessing) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 342, in fit total_epochs=epochs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 128, in run_one_epoch batch_outs = execution_function(iterator) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 98, in execution_function distributed_function(input_fn)) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__ result = self._call(*args, **kwds) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 615, in _call self._initialize(args, kwds, add_initializers_to=initializers) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 497, in _initialize *args, **kwds)) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected graph_function, _, _ = self._maybe_define_function(args, kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function capture_by_value=self._capture_by_value), File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn return weak_wrapped_fn().__wrapped__(*args, **kwds) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 85, in distributed_function per_replica_function, args=args) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 763, in experimental_run_v2 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 1819, in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 2164, in 
_call_for_each_replica return fn(*args, **kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 292, in wrapper return func(*args, **kwargs) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 433, in train_on_batch output_loss_metrics=model._output_loss_metrics) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 312, in train_on_batch output_loss_metrics=output_loss_metrics)) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 253, in _process_single_batch training=training)) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 167, in _model_loss per_sample_losses = loss_fn.call(targets[i], outs[i]) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/losses.py", line 221, in call return self.fn(y_true, y_pred, **self._fn_kwargs) File "/home/michael/Desktop/YOLO_Detector/src/utils.py", line 25, in yolo_loss_function pos = kb.mean(y_true-y_pred) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py", line 902, in binary_op_wrapper return func(x, y, name=name) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 10104, in sub "Sub", x=x, y=y, name=name) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 576, in _apply_op_helper param_name=input_name) File "/home/michael/.local/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 61, in _SatisfiesTypeConstraint ", ".join(dtypes.as_dtype(x).name for x in allowed_list))) TypeError: Value passed to parameter 'x' has DataType string not in list of allowed values: bfloat16, float16, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128 </code></pre> <p>When I use the python debugger to check the y_true that is being passed to the loss function, I see the following tensor:</p> <pre><code>Tensor("IteratorGetNext:1", shape=(None, 1), dtype=string) </code></pre> <p>however when I manually check the label of the image by calling the following python code, I get a tensor with the correct shape and values:</p> <pre><code>img, label = next(train_gen) print(type(label[0])) print(label[0].shape) print(label[0].dtype) -------------Output-------------- &lt;class 'tensorflow.python.framework.ops.EagerTensor'&gt; (7, 7, 12) &lt;dtype: 'float32'&gt; </code></pre> <p>Below is the code that I am using to create the dataset and train the model:</p> <pre><code>import tensorflow as tf from tensorflow import keras import pandas as pd import matplotlib.pyplot as plt import cv2 import numpy as np import os from src import utils, model path = "/home/michael/Desktop/YOLO_Detector/dataset/labels.csv" train_df = pd.read_csv(path, delim_whitespace=True, header=None) train_df.columns = ['filename', 'output_tensor'] train_df["output_tensor"] = train_df["output_tensor"].apply(lambda x: utils.string_to_tensor(x)) # train_df["output_tensor"] = train_df["output_tensor"].apply(lambda x: tf.expand_dims(x, 3)) image_generator = keras.preprocessing.image.ImageDataGenerator(rescale=1. 
/ 255., validation_split=0.2) train_gen = image_generator.flow_from_dataframe( dataframe=train_df, directory="/home/michael/Desktop/YOLO_Detector/dataset", x_col='filename', y_col='output_tensor', class_mode='raw', batch_size=64, target_size=(448, 448), shuffle=False, subset="training" ) validation_gen = image_generator.flow_from_dataframe( dataframe=train_df, directory="/home/michael/Desktop/YOLO_Detector/dataset", x_col='filename', y_col='output_tensor', class_mode="raw", batch_size=64, target_size=(448, 448), shuffle=False, subset="validation" ) img, label = next(train_gen) print(type(label[0])) print(label[0].shape) print(label[0].dtype) model = model.create_model() lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=1e-2, decay_rate=0.0005, decay_steps=100000 ) sgd = keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9) model.compile(optimizer=sgd, loss=utils.yolo_loss_function, metrics=['accuracy']) model.fit(x=train_gen, epochs=10) </code></pre> <p>I am using Tensorflow 2 with ROCm, and Eager Execution is on. How do I get the y_true tensor to be the correct output label (7x7x12 Tensor with dtype=float32) instead of it being a string?</p>
<p>I figured out the issue. The issue is that you cannot correctly store a Tensor or a Numpy array in a Pandas Dataframe. I ended up having to manually create the image/tensor pairs by doing the following:</p> <pre><code>img_list = [] labels_list = [] for i in range(64): labels_list.append(utils.string_to_numpy(train_df["output_tensor"][i])) image = tf.keras.preprocessing.image.load_img(f"/home/michael/Desktop/YOLO_Detector/dataset/{train_df['filename'][i]}", target_size=(448, 448)) image_arr = keras.preprocessing.image.img_to_array(image) / 255.0 img_list.append(image_arr) img = np.asarray(img_list) label = np.asarray(labels_list) </code></pre> <p>and then calling the img as x and label as y in model.fit()</p>
python|tensorflow|keras|tensorflow2.0
0
3,007
57,916,147
Getting difference between datetime
<p>I am trying to get the difference between two datetime columns in my dataframe, which looks like this:</p> <pre><code> Time 1 Time 2 Hours 2019-02-24 09:35:49 2019-02-24 09:18:47 0 2019-02-24 09:43:45 2019-02-24 09:18:47 0 2019-02-24 21:52:25 2019-02-24 09:18:47 12 2019-02-25 22:04:11 2019-02-24 21:52:25 0 2019-02-25 22:49:53 2019-02-24 21:52:25 0 2019-02-25 12:52:32 2019-02-24 21:52:25 15 2019-02-25 00:53:57 2019-02-25 12:52:32 12 2019-02-25 02:47:47 2019-02-25 00:53:57 1 </code></pre> <p>I am using the following code:</p> <pre class="lang-py prettyprint-override"><code>delta = Time 2 - Time 1 totalSeconds = delta.seconds Hours = divmod(totalSeconds, 3600)[0] df.loc[index,'Hours'] = Hours </code></pre> <p>But the problem is that this code is not taking the change of date into account. In line 4 of my data, the date-time values are a day apart but it still shows a difference of 0 hours, because I reckon it is only subtracting the time and not considering the change of date.</p> <p>What should I change in my code? Kindly suggest.</p>
<p>This small demo uses row 4 of your data.</p> <p>IN:</p> <pre><code>df = pd.DataFrame({'Time1':['2019-02-25 22:04:11'], 'Time2':['2019-02-24 21:52:25']}) df = df.applymap(pd.to_datetime) df['Hours'] = (df.Time1 - df.Time2).dt.total_seconds() // 3600 </code></pre> <p>OUT:</p> <pre><code>| Time1 | Time2 | Hours | |---------------------|---------------------|-------| | 2019-02-25 22:04:11 | 2019-02-24 21:52:25 | 24.0 | </code></pre>
python|pandas|datetime
0
3,008
58,057,794
What's the difference when we use and when we don't use () after a method, a function, or an operator in Python?
<p>For example,</p> <pre><code>import pandas as pd weather = pd.read_csv(r'D:\weather.csv') weather.count weather.count() </code></pre> <p>weather is a dataframe with multiple columns and rows. Then what's the difference between asking for weather.count and weather.count()?</p>
<p>It depends. In general this question has nothing to do with pandas. The answer is relevant to how Python is designed.</p> <p>In this case, <code>.count</code> is a method. Particularly, a method of <code>pandas.DataFrame</code>, and it will confirm it:</p> <pre><code>df = pd.DataFrame({'a': []}) print(df.count) </code></pre> <p>Outputs</p> <pre><code>&lt;bound method DataFrame.count of Empty DataFrame Columns: [a] Index: []&gt; </code></pre> <p>Adding <code>()</code> will call this method:</p> <pre><code>print(df.count()) </code></pre> <p>Outputs</p> <pre><code>a 0 dtype: int64 </code></pre> <p><br> However that is not always the case. <code>.count</code> could have been a non-callable attribute (ie a string, an int, etc) or a property.</p> <p>In this case it's a non-callable attribute:</p> <pre><code>class Foo: def __init__(self, c): self.count = c obj = Foo(42) print(obj.count) </code></pre> <p>Will output</p> <pre><code>42 </code></pre> <p>Adding <code>()</code> in this case will raise an exception because it makes no sense to call an integer:</p> <pre><code>print(obj.count()) TypeError: 'int' object is not callable </code></pre>
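<p>For completeness, a <code>property</code> behaves like the non-callable case even though it is defined with a function; a small illustrative sketch (not from the original answer):</p> <pre><code>class Bar:
    def __init__(self, c):
        self._c = c

    @property
    def count(self):
        # Accessed without parentheses; Bar(7).count() would raise TypeError
        return self._c

print(Bar(7).count)  # 7
</code></pre>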
python|pandas
2
3,009
58,055,501
Pandas reindex converts all values to NaN
<p>I have a dataframe of the following:</p> <pre><code>&gt;&gt;&gt; a = pd.DataFrame({'values':[random.randint(-10,10) for i in range(10)]}) &gt;&gt;&gt; a values 0 -3 1 -8 2 -2 3 3 4 8 5 6 6 -5 7 0 8 8 9 -4 </code></pre> <p>And would like to reindex it so the index is entirely date time. I am doing that with the following code:</p> <pre><code>&gt;&gt;&gt; times = [datetime.datetime(2018,1,2,12,40,0) + datetime.timedelta(seconds=i) for i in range(10)] &gt;&gt;&gt; times [datetime.datetime(2018, 1, 2, 12, 40), datetime.datetime(2018, 1, 2, 12, 40, 1), datetime.datetime(2018, 1, 2, 12, 40, 2), datetime.datetime(2018, 1, 2, 12, 40, 3), datetime.datetime(2018, 1, 2, 12, 40, 4), datetime.datetime(2018, 1, 2, 12, 40, 5), datetime.datetime(2018, 1, 2, 12, 40, 6), datetime.datetime(2018, 1, 2, 12, 40, 7), datetime.datetime(2018, 1, 2, 12, 40, 8), datetime.datetime(2018, 1, 2, 12, 40, 9)] &gt;&gt;&gt; a.reindex(times) values 2018-01-02 12:40:00 NaN 2018-01-02 12:40:01 NaN 2018-01-02 12:40:02 NaN 2018-01-02 12:40:03 NaN 2018-01-02 12:40:04 NaN 2018-01-02 12:40:05 NaN 2018-01-02 12:40:06 NaN 2018-01-02 12:40:07 NaN 2018-01-02 12:40:08 NaN 2018-01-02 12:40:09 NaN </code></pre> <p>As you can see, it instead deletes the values I just had and just puts NaN's in their place. How would I reindex this dataframe to look something like this:</p> <pre><code> values 2018-01-02 12:40:00 -3 2018-01-02 12:40:01 -8 2018-01-02 12:40:02 -2 2018-01-02 12:40:03 3 2018-01-02 12:40:04 8 2018-01-02 12:40:05 6 2018-01-02 12:40:06 -5 2018-01-02 12:40:07 0 2018-01-02 12:40:08 8 2018-01-02 12:40:09 -4 </code></pre>
<p>as long as you have size of <code>times</code> the same as <code>df.size</code>, you may pass it to <code>set_index</code></p> <pre><code>df = df.set_index([times]) Out[64]: values 2018-01-02 12:40:00 -3 2018-01-02 12:40:01 -8 2018-01-02 12:40:02 -2 2018-01-02 12:40:03 3 2018-01-02 12:40:04 8 2018-01-02 12:40:05 6 2018-01-02 12:40:06 -5 2018-01-02 12:40:07 0 2018-01-02 12:40:08 8 2018-01-02 12:40:09 -4 </code></pre> <hr> <p>Or you assign it directly to <code>index</code></p> <pre><code>In [67]: df.index = times In [68]: df Out[68]: values 2018-01-02 12:40:00 -3 2018-01-02 12:40:01 -8 2018-01-02 12:40:02 -2 2018-01-02 12:40:03 3 2018-01-02 12:40:04 8 2018-01-02 12:40:05 6 2018-01-02 12:40:06 -5 2018-01-02 12:40:07 0 2018-01-02 12:40:08 8 2018-01-02 12:40:09 -4 </code></pre>
python|pandas|dataframe|datetime|reindex
2
3,010
54,983,485
Add default value to merge in pandas
<p>Similar to this topic : <a href="https://stackoverflow.com/questions/47696262/add-default-values-while-merging-tables-in-pandas">Add default values while merging tables in pandas</a></p> <p>The answer to this topic fills all <code>NaN</code> in the resulting DataFrame and that's not what I want to do.</p> <p>Let's imagine the following situation : I have two dataframes <code>df1</code> and <code>df2</code>. Each of this DataFrame might contains some <code>Nan</code>, the columns of <code>df1</code> are <code>'a'</code> and <code>col1</code>, the columns of <code>df2</code> are <code>'a'</code> and <code>col2</code> where col1 and col2 are disjoints list of columns name (For example df1 and df2 could have respectively <code>'a', 'b', 'c'</code> and <code>'a', 'd', 'e'</code> as columns names). I want to perform a left merge on <code>df1</code> and <code>df2</code> and fill all the missing values of that merge(any row of <code>df1</code> with a value of the column <code>'a'</code> that is not a value of column <code>'a'</code> in df2) with a default value. We can imagine that I have a dict <code>default_values</code> that match any element of <code>col2</code> to a default values.</p> <p>To give you a concrete example : </p> <pre><code>df1 a b c 0 0 0.038108 0.961687 1 1 0.107457 0.616689 2 2 0.661485 0.240353 3 3 0.457169 0.560912 4 5 5.000000 5.000000 df2 a d e 0 0 0.405170 0.934776 1 1 0.684532 0.168738 2 2 0.729693 0.967310 3 3 0.844770 NaN 4 4 0.842673 0.941324 default_values = {'d':42, 'e':43} </code></pre> <p>Expected Output : </p> <pre><code> a b c d e 0 0 0.038108 0.961687 0.405170 0.934776 1 1 0.107457 0.616689 0.684532 0.168738 2 2 0.661485 0.240353 0.729693 0.967310 3 3 0.457169 0.560912 0.844770 NaN 4 5 5.000000 5.000000 42 43 </code></pre>
<p>While writing this question, I found a working solution. I still think it's an interesting question. Here's a solution to get the expected output :</p> <pre><code>df3 = pd.DataFrame(default_values, index = df1.set_index('a').index.difference(df2.a)) df3['a'] = df3.index df1.merge(pd.concat((df2, df3), sort=False)) </code></pre> <p>This solution works for a left/right merge, and it can be extended to work for an outer merge (by completing the first dataframe as well).</p> <p>Edit : The <code>how='left'</code> argument is not specified in my merge because the DataFrame I'm merging with is constructed to have all the value of the column 'a' in df1 in its own column 'a'. We could add an <code>how='left'</code> to this call of merge, and it would give the same output.</p>
python|pandas|dataframe
2
3,011
54,997,426
How to convert built-in method values of dict object to dictionary
<pre><code>dic = {} dic[3] = [1,2,3] dic[1] = [4,5,6] dic[6] = [7,8] S = np.sum(dic.values, axis=0) </code></pre> <p><code>dic</code> is a dictionary, <code>{3: [1, 2, 3], 1: [4, 5, 6], 6: [7, 8]}</code>. <code>S</code> should also be a dictionary, right? </p> <pre><code>print(S) # &lt;built-in method values of dict object at 0x7f36df660b88&gt; print(type(S)) # &lt;class 'builtin_function_or_method'&gt; </code></pre> <p>Is it possible to convert <code>S</code> to a dictionary, like <code>{3: 6, 1: 15, 6: 15}</code>?</p>
<p>The problem you have with your code is in this line:</p> <pre><code>S = np.sum(dic.values, axis=0) </code></pre> <p><code>dic.values</code> is a method. You have to call it! Nonetheless, the result you get is not what you expect:</p> <pre><code>np.sum(np.array(dic.values()), axis=0) [4, 5, 6, 1, 2, 3, 7, 8] </code></pre> <p>This is because <code>axis=0</code> means: <em>sum everything you find in the array I'm giving</em>. Which translates to:</p> <pre><code>[4, 5, 6] + [1, 2, 3] + [7, 8] = [4, 5, 6, 1, 2, 3, 7, 8] </code></pre> <p>Thus, the best solution is a dict comprehension:</p> <pre><code>{k: sum(v) for k, v in dic.items()} </code></pre> <p>Output:</p> <pre><code>{1: 15, 3: 6, 6: 15} </code></pre>
python|numpy
3
3,012
73,203,960
Pandas groupby and shift spilling between groups
<p>I'm having an issue with Pandas where the combination of groupby and shift seems to have data spilled between the groups.</p> <p>Here's a reproducible example:</p> <pre><code>from pandas import Timestamp sample = {'start': {0: Timestamp('2022-08-02 07:20:00'), 1: Timestamp('2022-08-02 07:25:00'), 2: Timestamp('2022-08-02 07:26:00'), 3: Timestamp('2022-08-02 07:35:00'), 4: Timestamp('2022-08-02 08:20:00'), 5: Timestamp('2022-08-02 08:25:00'), 6: Timestamp('2022-08-02 08:26:00'), 7: Timestamp('2022-08-02 08:35:00')}, 'end': {0: Timestamp('2022-08-02 07:30:00'), 1: Timestamp('2022-08-02 07:35:00'), 2: Timestamp('2022-08-02 12:34:00'), 3: Timestamp('2022-08-02 07:40:00'), 4: Timestamp('2022-08-02 08:30:00'), 5: Timestamp('2022-08-02 08:55:00'), 6: Timestamp('2022-08-02 08:34:00'), 7: Timestamp('2022-08-02 08:40:00')}, 'group': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G1', 4: 'G2', 5: 'G2', 6: 'G2', 7: 'G2'}} df = pd.DataFrame(sample) df = df.sort_values('start') df['notworking'] = df.groupby('group')['end'].shift().cummax() </code></pre> <p>This gives the following output</p> <pre><code> start end group notworking 0 2022-08-02 07:20:00 2022-08-02 07:30:00 G1 1 2022-08-02 07:25:00 2022-08-02 07:35:00 G1 2022-08-02 07:30:00 2 2022-08-02 07:26:00 2022-08-02 12:34:00 G1 2022-08-02 07:35:00 3 2022-08-02 07:35:00 2022-08-02 07:40:00 G1 2022-08-02 12:34:00 4 2022-08-02 08:20:00 2022-08-02 08:30:00 G2 5 2022-08-02 08:25:00 2022-08-02 08:55:00 G2 2022-08-02 12:34:00 6 2022-08-02 08:26:00 2022-08-02 08:34:00 G2 2022-08-02 12:34:00 7 2022-08-02 08:35:00 2022-08-02 08:40:00 G2 2022-08-02 12:34:00 </code></pre> <p>The <code>'end'</code> at index 2 is correctly assigned to <code>'notworking'</code> at index 3, but this value persists over in the next group.</p> <p>My desired outcome is for cummax() to start fresh for each group, like this:</p> <pre><code> start end group notworking 0 2022-08-02 07:20:00 2022-08-02 07:30:00 G1 1 2022-08-02 07:25:00 2022-08-02 07:35:00 G1 2022-08-02 07:30:00 2 2022-08-02 07:26:00 2022-08-02 12:34:00 G1 2022-08-02 07:35:00 3 2022-08-02 07:35:00 2022-08-02 07:40:00 G1 2022-08-02 12:34:00 4 2022-08-02 08:20:00 2022-08-02 08:30:00 G2 5 2022-08-02 08:25:00 2022-08-02 08:55:00 G2 2022-08-02 08:30:00 6 2022-08-02 08:26:00 2022-08-02 08:34:00 G2 2022-08-02 08:55:00 7 2022-08-02 08:35:00 2022-08-02 08:40:00 G2 2022-08-02 08:55:00 </code></pre> <p>I guess this is simple user error. Does anyone know a fix for this?</p>
<p><code>groupby.shift</code> returns a Series so your <code>cummax</code> is operated on the Series not your desired SeriesGroupBy. You can try <code>groupby.transform</code></p> <pre class="lang-py prettyprint-override"><code>df['notworking'] = df.groupby('group')['end'].transform(lambda col: col.shift().cummax()) </code></pre> <pre><code>print(df) start end group notworking 0 2022-08-02 07:20:00 2022-08-02 07:30:00 G1 NaT 1 2022-08-02 07:25:00 2022-08-02 07:35:00 G1 2022-08-02 07:30:00 2 2022-08-02 07:26:00 2022-08-02 12:34:00 G1 2022-08-02 07:35:00 3 2022-08-02 07:35:00 2022-08-02 07:40:00 G1 2022-08-02 12:34:00 4 2022-08-02 08:20:00 2022-08-02 08:30:00 G2 NaT 5 2022-08-02 08:25:00 2022-08-02 08:55:00 G2 2022-08-02 08:30:00 6 2022-08-02 08:26:00 2022-08-02 08:34:00 G2 2022-08-02 08:55:00 7 2022-08-02 08:35:00 2022-08-02 08:40:00 G2 2022-08-02 08:55:00 </code></pre>
python|pandas|group-by
2
3,013
73,253,758
A dataset with Int64, Float64 and datetime64[ns] gets converted to object after applying Pandas fillna method
<p>I am using Kaggle's dataset (<a href="https://www.kaggle.com/datasets/claytonmiller/lbnl-automated-fault-detection-for-buildings-data" rel="nofollow noreferrer">https://www.kaggle.com/datasets/claytonmiller/lbnl-automated-fault-detection-for-buildings-data</a>)</p> <p>I have A dataset with Int64, Float64, and datetime64[ns] datatypes; after using the pandas fillna method, however, all of my data type changes to object datatype.</p> <p>Could anyone assist me with what I need to do to retain the original data types after the Pandas conversion?</p> <p>The following is the code I use:</p> <pre><code>import pandas as pd import datetime as dt %matplotlib inline df = pd.read_csv('RTU.csv') df['Timestamp'] = pd.to_datetime(df['Timestamp']) </code></pre> <p><a href="https://i.stack.imgur.com/7OGIu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7OGIu.png" alt="The data types initally" /></a></p> <p>If I do a <code>df.dtypes</code> I can see the correct datatypes however, after the following lines of code, it changes to object datatype.</p> <pre><code>df['Timestamp'] = pd.to_datetime(df['Timestamp']) def fault_mapper_FD(faultDate): if pd.Timestamp(2017, 8, 27, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 8, 28, 0): return 0 if pd.Timestamp(2017, 8, 29, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 8, 29, 23, 59): return 0 if pd.Timestamp(2017, 12, 1, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 1, 23, 59): return 0 if pd.Timestamp(2017, 12, 3, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 3, 23, 59): return 0 if pd.Timestamp(2017, 12, 7, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 8, 0): return 0 if pd.Timestamp(2017, 12, 14, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 14, 23, 59): return 0 if pd.Timestamp(2018, 2, 7, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 7, 23, 59): return 0 if pd.Timestamp(2018, 2, 9, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 9, 23, 59): return 0 if pd.Timestamp(2017, 12, 20, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 20, 23, 59): return 0 if pd.Timestamp(2018, 2, 18, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 18, 23, 59): return 0 if pd.Timestamp(2018, 2, 1, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 1, 23, 59): return 0 if pd.Timestamp(2018, 1, 31, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 1, 31, 23, 59): return 0 if pd.Timestamp(2018, 1, 28, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 1, 28, 23, 59): return 0 if pd.Timestamp(2018, 1, 27, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 1, 27, 23, 59): return 0 if (pd.Timestamp(2017, 9, 1, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 9, 1, 23, 59) or pd.Timestamp(2017, 11, 30, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 11, 30, 23, 59) or pd.Timestamp(2017, 12, 9, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 9, 23, 59) or pd.Timestamp(2017, 12, 10, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 11, 0) or pd.Timestamp(2017, 12, 24, 0) &lt;= faultDate &lt;= pd.Timestamp(2017, 12, 24, 23, 59) or pd.Timestamp(2018, 2, 4, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 4, 23, 59) or pd.Timestamp(2018, 2, 5, 0) &lt;= faultDate &lt;= pd.Timestamp(2018, 2, 6, 0)): return 1 df['FD'] = df['Timestamp'].apply(lambda fault_date: fault_mapper_FD(fault_date)) cond = (df.Timestamp.dt.time &gt; dt.time(22,0)) | ((df.Timestamp.dt.time &lt; dt.time(7,0))) df[cond] = df[cond].fillna(0,axis=1) </code></pre> <p>Now the <code>df.dtypes</code> gives all of my columns as objects/</p> <p><a href="https://i.stack.imgur.com/D3aUY.png" rel="nofollow noreferrer"><img 
src="https://i.stack.imgur.com/D3aUY.png" alt="The data types after the Pandas fillna methos" /></a></p>
<p>I think you have a small typo. You just need to call</p> <pre><code>df = df[cond].fillna(0,axis=0) </code></pre> <p>which indeed doesn't change datatypes</p> <pre><code>Timestamp datetime64[ns] RTU: Supply Air Temperature float64 RTU: Return Air Temperature float64 RTU: Supply Air Fan Status int64 RTU: Circuit 1 Discharge Temperature float64 ... VAV Box: Room 203 Air Temperature float64 VAV Box: Room 204 Air Temperature float64 VAV Box: Room 205 Air Temperature float64 VAV Box: Room 206 Air Temperature float64 Fault Detection Ground Truth int64 Length: 69, dtype: object </code></pre>
python|pandas|fillna
1
3,014
67,322,305
RuntimeWarning: invalid value encountered in double_scalars h[i]=(delta_tau*((sigma*x[i])**2))/(s*h[i-1]-delta_tau*r*h[i-1])
<pre><code>import numpy as np import scipy.stats as si import sympy as sy import math H=1.0 K=100 T=1 s=0.95 r=0.04 sigma=0.025 delta_tau_temp=(s*(H**2))/((r*(H**2))+((sigma**2)*((106-H)**2))) #N_tau-no of time steps, N_x- no of space points N_tau = math.floor(T/delta_tau_temp) +1 #delta_tau -time step delta_tau = T/(N_tau) N_x =int((106/H) + N_tau) h=np.zeros((N_x )) #increment for space points x=np.zeros((N_x)) #space points u=np.zeros((N_x,N_tau)) #2-D array m_1 =100/H -1 m_2 =100/H +1 for i in range(N_x): if i &lt;=(106/H-1): h[i]=H x[i]=H*i if i == 106/H : x[i]= H*i h[i]=(delta_tau*((sigma*x[i])**2))/(s*h[i-1]-delta_tau*r*h[i-1]) else: x[i]=x[i-1]+h[i-1] h[i]=(delta_tau*((sigma*x[i])**2))/(s*h[i-1]-delta_tau*r*h[i-1]) for j in range(N_tau): if j==0: for i in range(N_x): u[i][0]=max(x[i]-K,0) else: if j&lt;=(N_tau -m_1 -2): for i in range(N_x -j-1): u[i][j]=((delta_tau*(((sigma*x[i])**2)-r*x[i]*h[i-1]))/(h[i-1]*(h[i-1]+h[i])))*u[i-1][j-1] +(1-(r*delta_t)-((delta_t*((sigma*x[i])**2))/(h[i-1]*h[i])))*u[i][j-1]+((delta_tau*(((sigma*x[i])**2)+r*x[i]*h[i]))/(h[i]*(h[i-1]+h[i])))*u[i+1][j-1] print(u[i][j]) else: if i in range(j-N_tau +m_1 +1, N_x -n-1 ): u[i][j]=((delta_tau*(((sigma*x[i])**2)-r*x[i]*h[i-1]))/(h[i-1]*(h[i-1]+h[i])))*u[i-1][j-1] +(1-(r*delta_t)-((delta_t*((sigma*x[i])**2))/(h[i-1]*h[i])))*u[i][j-1]+((delta_tau*(((sigma*x[i])**2)+r*x[i]*h[i]))/(h[i]*(h[i-1]+h[i])))*u[i+1][j-1] print(u[i][j]) </code></pre> <p>I plan to obtain the values of u[i][j] only for a certain number of grid points under some constraint on i as mentioned in the range of i. I do not understand where the mistake is. please could someone help</p>
<p>You seem to be dividing by zero at <code>if i &lt;=(106/H-1):</code> since H=1.0. The error goes away if you change H to something &gt; 1.</p>
python|numpy
0
3,015
67,224,619
Repeatedly creating tensorflow custom model instance and training inside loop gives error
<p>I have created custom model class using example from tensorflow <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models" rel="nofollow noreferrer">here</a>. Then I tried to get custom model summary so I searched <a href="https://stackoverflow.com/questions/55235212/model-summary-cant-print-output-shape-while-using-subclass-model">here</a>. But there was a problem when I tried to train custom model inside a for loop. First iteration always succeed but following iteration crashes with following error message (Input tensors to a <code>CustomModel</code> must come from <code>tf.keras.Input</code>Received: None)</p> <p>During debugging I found that in first iteration, <code>super(CustomModel, self).__init__()</code> calls <code>Model.__init__()</code> and doesn't call <code>Functional.__init__()</code>. After that <code>super(CustomModel, self).__init__(inputs=self.input_layer, outputs=self.out)</code> calls <code>Functional.__init__()</code>.</p> <p>But in second iteration <code>super(CustomModel, self).__init__()</code> calls <code>Functional.__init__()</code> right away.</p> <p>How can I train this custom model in second iteration? Here is my code:</p> <pre><code>import os import numpy as np os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorflow as tf # noqa from tensorflow import keras # noqa from enum import Enum, auto # noqa BATCH_SIZE = 20 class Algo(Enum): linearReg = auto() class CustomModel(keras.Model): def __init__(self, input_shape, algo=Algo.linearReg.value, **kwargs): super(CustomModel, self).__init__() self.in_shape = input_shape self.algo = algo # create input layer for model summary self.input_layer = keras.layers.Input((self.in_shape[1],)) self.custom_layer = [keras.layers.experimental.preprocessing.Normalization()] if algo == Algo.linearReg.value: self.custom_layer.append(keras.layers.Dense(units=1, activation='linear')) # get output layer with call method for model summary self.out = self.call(self.input_layer) # reinitialize with input layer and output super(CustomModel, self).__init__( inputs=self.input_layer, outputs=self.out) def call(self, input_tensor, **kwargs): x = input_tensor for layer in self.custom_layer: x = layer(x) return x def predict(self, x, **kwargs): for layer in self.custom_layer: x = layer.call(x) return x def from_config(self, config, custom_objects=None): super(CustomModel, self).__init__() def get_config(self): config = super(CustomModel, self).get_config() return config features, labels = (np.random.sample((100, 2, BATCH_SIZE)), np.random.sample((100, 1, BATCH_SIZE))) dataset = tf.data.Dataset.from_tensor_slices((features, labels)) x, y = next(iter(dataset)) for i in range(4): model = CustomModel(x.shape, Algo.linearReg.value) model.summary() model.compile(optimizer=keras.optimizers.Ftrl(learning_rate=0.01), loss='mse', metrics=['mae']) model.fit(dataset, epochs=int(2), verbose=2, validation_data=dataset.shuffle(2).take(1)) y_predict = model.predict(x) </code></pre>
<p>Well analyzing further gives me headache. However the problem seems occur when I reinitialize with input and output. So I figured out alternative solution although I don't like model.model() part.</p> <pre><code>class CustomModel(keras.Model): def __init__(self, input_shape, algo=Algo.linearReg.value, **kwargs): super(CustomModel, self).__init__() self.in_shape = input_shape self.algo = algo # create input layer for model summary self.input_layer = keras.layers.Input((self.in_shape[1],)) self.custom_layer = [keras.layers.experimental.preprocessing.Normalization()] if algo == Algo.linearReg.value: self.custom_layer.append(keras.layers.Dense(units=1, activation='linear')) # get output layer with call method for model summary self.out = self.call(self.input_layer) def model(self): return keras.Model(inputs=self.input_layer, outputs=self.call(self.input_layer)) def call(self, input_tensor, **kwargs): for layer in self.custom_layer: input_tensor = layer(input_tensor) return input_tensor def predict(self, input_tensor, **kwargs): for layer in self.custom_layer: input_tensor = layer.call(input_tensor) return input_tensor def from_config(self, config, custom_objects=None): super(CustomModel, self).__init__() def get_config(self): config = super(CustomModel, self).get_config() return config features, labels = (np.random.sample((100, 2, BATCH_SIZE)), np.random.sample((100, 1, BATCH_SIZE))) dataset = tf.data.Dataset.from_tensor_slices((features, labels)) x, y = next(iter(dataset)) for i in range(4): model = CustomModel(x.shape, Algo.linearReg.value) model.model().summary() model.compile(optimizer=keras.optimizers.Ftrl(learning_rate=0.01), loss='mse', metrics=['mae']) model.fit(dataset, epochs=int(2), verbose=2, validation_data=dataset.shuffle(2).take(1)) y_predict = model.predict(x) </code></pre>
python-3.x|keras|tensorflow2.0
0
3,016
67,559,279
Columns with mixed datatype to be saved as str and jsonb in postgres with python
<p>I need your advice but don't be horrified with the code below, please.</p> <p>Situation: I call an API to retrieve the sales information. The response looks like the following:</p> <pre><code>[{'Id': 123, 'Currency': 'USD', 'SalesOrder': [{'Price': 2, 'Subitem': 1, 'Discount': 0.0, 'OrderQuantity': 1.0}, {'Price': 3, 'Subitem': 2, 'Discount': 0.0, 'OrderQuantity': 2.0}], 'Tax': 18}, {'Id': 124, 'Currency': 'USD', 'SalesOrder': [{'Price': 2, 'Subitem': 1, 'Discount': 0.0, 'OrderQuantity': 1.0}, {'Price': 3, 'Subitem': 2, 'Discount': 0.0, 'OrderQuantity': 2.0}], 'Tax': 18}] </code></pre> <p>Expected outcome: 1. 'Id' is a stand-alone column; 'Currency' is a stand-alone column. 2. As there could be a different number of 'Subitems', I thought of adding 'SalesOrder' as a json blob in postgres and then, query the json column. Thus, the end result is a postgres table with three columns.</p> <pre><code>id =[] currency = [] salesOrder = [] #extracting values for item in df: id.append(item.get(&quot;Id&quot;) currency.append(item.get(&quot;Currency&quot;)) salesOrders.append(item.get(&quot;SalesOrder&quot;)) #converting to a pandas df df_id = pd.DataFrame(id) df_currency = pd.DataFrame(currency) df_sales_order = pd.DataFrame(salesOrder) #concatenating cols df_row = pd.concat([df_id, df_currency, df_sales_order], axis = 1) #outputting results to a table engine = create_engine('postgresql+psycopg2://username:password@endpoint/db') with engine.connect() as conn, conn.begin(): df_row.to_sql('tbl', con=conn, schema='schema', if_exists='append', index = False) </code></pre> <p>Doubts: 1. If I try to implement the code above, the 'SalesOrder' list gets split into an X number of columns. Why so? How can I avoid it and keep it together? 2. I am not sure how to proceed with the mixture of data types (str + jsonb). Shall I load 'non-json' columns and then, update the table with the json column?</p>
<p>Instead of doing <code>df_sales_order = pd.DataFrame(salesOrder)</code>, just create a column in <code>df_currency</code>, e.g. <code>df_currency[&quot;sales_order&quot;]</code>, and fill it with <code>item.get(&quot;SalesOrder&quot;)</code> so each row keeps its whole list in a single cell. This should solve the issue.</p>
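<p>A hedged sketch of how the combined frame could then be written so the nested list lands in a JSONB column. The <code>JSONB</code> dtype mapping, the column names, and the single sample record are assumptions for illustration, not part of the original answer:</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.dialects.postgresql import JSONB

# Hypothetical API response with one record
rows = [{"Id": 123, "Currency": "USD",
         "SalesOrder": [{"Price": 2, "Subitem": 1, "Discount": 0.0, "OrderQuantity": 1.0}]}]

df = pd.DataFrame({
    "id": [r.get("Id") for r in rows],
    "currency": [r.get("Currency") for r in rows],
    "sales_order": [r.get("SalesOrder") for r in rows],  # whole list kept in one cell
})

engine = create_engine("postgresql+psycopg2://username:password@endpoint/db")
with engine.connect() as conn, conn.begin():
    # SQLAlchemy's JSONB type serializes the Python lists/dicts on insert
    df.to_sql("tbl", con=conn, schema="schema", if_exists="append",
              index=False, dtype={"sales_order": JSONB})
</code></pre>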
python|pandas
1
3,017
59,928,576
Efficiently extracting information from a pandas dataframe based on two column values
<p>I am trying to extract information from a data frame which is indexed by productId and customerId. I have a large number (millions) of (productId, customerId) pairs and am interested in finding the most efficient way possible to do this.</p> <p>I have two data frames, df1 containing the customerId, productId pairs I'm interested in, and a second frame df2 containing information of interest which is indexed by customerId, productId pairs.</p> <p>So far I have tried something like:</p> <pre><code>def f(x, y): return(df2.col[(df2.customerId == x) &amp; (df2.productId == y)].sum()) values = df1.apply(lambda x: f(x.customerId, x.productId), axis = 1) </code></pre> <p>which works fine but is very slow.</p> <p>Any suggestions on improvements?</p>
<p>You could try a list comprehension:</p> <pre><code>values = [df2.loc[df2[['customerId', 'productId']].eq(i).all(), 'col'].sum() for i in df1.values] </code></pre>
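<p>If speed over millions of pairs is the main concern, a different sketch (an assumption about the intent, not part of the original answer) is to aggregate <code>df2</code> once and merge, so <code>df2</code> is not scanned for every row of <code>df1</code>:</p> <pre><code>import pandas as pd

# Sum 'col' once per (customerId, productId) pair, then attach it to df1
agg = df2.groupby(['customerId', 'productId'], as_index=False)['col'].sum()

# Pairs absent from df2 come back as NaN; fillna(0) matches the empty-sum behaviour
values = (df1.merge(agg, on=['customerId', 'productId'], how='left')['col']
             .fillna(0))
</code></pre>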
python|pandas
0
3,018
60,038,811
replace 0.01 with the row maximum value from other columns
<p>There is such dataframe df</p> <pre><code> article price1 price2 price3 0 A9911652 0.01 0.01 2980.31 1 A9911653 7041.33 0.01 2869.40 2 A9911654 0.01 9324.63 0.01 3 A9911659 4785.74 0.01 1622.78 4 A9911661 6067.27 6673.99 0.01 </code></pre> <p>I'd like to replace the 0.01 values with the maximum value of the row, so it should look this way:</p> <pre><code> article price1 price2 price3 0 A9911652 2980.31 2980.31 2980.31 1 A9911653 7041.33 7041.33 2869.40 2 A9911654 9324.63 9324.63 9324.63 3 A9911659 4785.74 4785.74 1622.78 4 A9911661 6067.27 6673.99 6673.99 </code></pre> <p>I tried the following:</p> <pre><code> df.replace(0.01,df[['price3','price2','price1']].max(axis=1),inplace=True) </code></pre> <p>But it doesn't change anything. What would be the right way to do this?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a> with <code>axis=0</code> parameter:</p> <pre><code>df = df.mask(df == 0.01, df[['price3','price2','price1']].max(axis=1), axis=0) print (df) article price1 price2 price3 0 A9911652 2980.31 2980.31 2980.31 1 A9911653 7041.33 7041.33 2869.40 2 A9911654 9324.63 9324.63 9324.63 3 A9911659 4785.74 4785.74 1622.78 4 A9911661 6067.27 6673.99 6673.99 </code></pre> <p>If want specify columns for set new values:</p> <pre><code>c =['price3','price2','price1'] df[c] = df[c].mask(df[c] == 0.01, df[c].max(axis=1), axis=0) print (df) article price1 price2 price3 0 A9911652 2980.31 2980.31 2980.31 1 A9911653 7041.33 7041.33 2869.40 2 A9911654 9324.63 9324.63 9324.63 3 A9911659 4785.74 4785.74 1622.78 4 A9911661 6067.27 6673.99 6673.99 </code></pre>
python|pandas
2
3,019
60,172,160
Scatter plot not rendering in dash plotly
<p>I am building a dashboard on dash plotly that will have multiple x features in a single scatter plot, with each feature either being presented as a line or a line with markers. </p> <p>I have built a scatter plot according to the requirements I have specified however, when I run my dashboard locally I do not actually see the scatter plot</p> <p>This is the code I have written</p> <pre><code>import dash import dash_table import plotly.graph_objs as go import dash_html_components as html import dash_core_components as dcc from dash.dependencies import Input,Output import pandas as pd import os import numpy as np app = dash.Dash() app.layout = html.Div(children=[ dcc.Graph( id='supervisor' ) ]) @app.callback(dash.dependencies.Output('supervisor','figure')) def scattertable(): trace0 = go.Scatter( x=supervisor['Características (D)'], y=supervisor['Mean Team Performance'], mode='lines', name='Caracteristicas (D)' ) trace1 = go.Scatter( x=supervisor['Características (I)'], y=supervisor['Mean Team Performance'], mode='lines+markers', name='Características (I)' ) trace2 = go.Scatter( x=supervisor['Características (S)'], y=supervisor['Mean Team Performance'], mode='lines', name='Características (S)' ) trace3 = go.Scatter( x=supervisor['Características (C)'], y=supervisor['Mean Team Performance'], mode='lines+markers', name='Características (C)' ) data = [trace0,trace1,trace2,trace3] return {"data": data, "layout": go.Layout(title="Relationship", yaxis={"title":'Mean', "range":[0, max(supervisor['Mean Team Performance'])+1]}, xaxis={"title":'Characteristics', "tickangle":45}, )} if __name__ == '__main__': app.run_server(debug=True) </code></pre> <p>This is a sample of my data</p> <pre><code>{'Características (D)': {2373: nan, 2361: 67.0, 2349: 65.0}, 'Características (I)': {2373: nan, 2361: 20.0, 2349: 55.0}, 'Características (S)': {2373: nan, 2361: 48.0, 2349: 30.0}, 'Características (C)': {2373: nan, 2361: 90.0, 2349: 85.0}, 'Motivación (D)': {2373: nan, 2361: 69.0, 2349: 59.0}, 'Motivación (I)': {2373: nan, 2361: 25.0, 2349: 58.0}, 'Motivación (S)': {2373: nan, 2361: 65.0, 2349: 30.0}, 'Motivación (C)': {2373: nan, 2361: 84.0, 2349: 93.0}, 'Bajo Stress (D)': {2373: nan, 2361: 69.0, 2349: 69.0}, 'Bajo Stress (I)': {2373: nan, 2361: 30.0, 2349: 60.0}, 'Bajo Stress (S)': {2373: nan, 2361: 40.0, 2349: 40.0}, 'Bajo Stress (C)': {2373: nan, 2361: 92.0, 2349: 74.0}, 'Cost to Company': {2373: 1908.33, 2361: 1908.33, 2349: 1908.33}, 'MonthsofEmploymentRounded': {2373: 1.0, 2361: 4.0, 2349: 4.0}, 'Compensation': {2373: 1200.0, 2361: 1200.0, 2349: 1200.0}, 'span': {2373: 37.0, 2361: 58.0, 2349: 86.0}, 'Mean Team Performance': {2373: 0.40544395205206984, 2361: 0.5936947689016717, 2349: 0.5403025332663768}, 'Mean Team Employment in Months': {2373: 8.675675675675675, 2361: 5.396551724137931, 2349: 6.174418604651163}, 'employment span': {2373: 43, 2361: 128, 2349: 128} </code></pre> <p>}</p>
<p>Dash requires an input for a successful callback to occur. If you just want to generate the Plotly scatter plot, you do not need a callback and can just put your code into the app layout. I also converted your dictionary into a Pandas dataframe for creation of the plot.</p> <p>Updated code below:</p> <pre><code>import dash import dash_table import plotly.graph_objs as go import dash_html_components as html import dash_core_components as dcc from dash.dependencies import Input,Output import pandas as pd import os import numpy as np supervisor_df = pd.DataFrame.from_dict(supervisor) fig = go.Figure() category_dict = {'Características (D)':'lines', 'Características (I)':'lines+markers', 'Características (S)':'lines', 'Características (C)':'lines+markers'} for category in category_dict.keys(): fig.add_trace(go.Scatter( x=supervisor_df[category], y=supervisor_df['Mean Team Performance'], mode=category_dict[category], name=category )) fig.update_layout(title="Relationship", yaxis={"title":'Mean', "range":[0, max(supervisor_df['Mean Team Performance'])+1]}, xaxis={"title":'Characteristics', "tickangle":45}, ) app = dash.Dash() app.layout = html.Div(children=[ dcc.Graph( id='supervisor', figure=fig.to_dict() ) ]) if __name__ == '__main__': app.run_server(debug=True) </code></pre>
python|pandas|plotly-dash|plotly-python
0
3,020
60,316,673
Calculations involving mixed type elements in columns - np.nan, float, string elements
<p>I have a dataframe where columns include mixed type elements and I need to do some calculations among them. Assume this dataframe:</p> <pre><code>A=[20, np.nan, 10, 'give', np.nan, np.nan] B=[10, np.nan, np.nan, np.nan, 10, 'given'] frame=pd.DataFrame(zip(A,B)) frame.columns=['A', 'B'] </code></pre> <p>I want to populate the difference of B from A. If I do <code>frame['diff']=frame['A']-frame['B']</code> it does not give the result I need. Instead, the result I would want is in the 'desired diff' column.</p> <p>Basically, if A or B has a number, then B or A should be 0. If a string is in A, while B is NaN, then it should write "positive" and, in the vice versa case, it should write "negative". See below:</p> <pre><code>frame A B diff desired diff 0 20 10 10 10 1 NaN NaN NaN NaN 2 10 NaN NaN 10 3 give NaN NaN positive 4 NaN 10 NaN -10 5 NaN given NaN negative </code></pre> <p>Just for the record, I have tried to implement <code>np.where</code> and <code>np.select</code> and some conditions such as <code>np.logical_and(frame['A'].apply(lambda x: isinstance(x, float)), frame['B'].isna())</code> to achieve the desired output, but without success.</p> <p>Thanks in advance for your suggestions!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html" rel="nofollow noreferrer"><code>to_numeric</code></a> with <code>errors='coerce'</code> for check non numeric and no missing values and set new values by <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer"><code>numpy.select</code></a> and subtract values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sub.html" rel="nofollow noreferrer"><code>Series.sub</code></a> with <code>fill_value=0</code> parameter:</p> <pre><code>a = pd.to_numeric(frame['A'], errors='coerce') m1 = frame['A'].notna() m2 = a.isna() b = pd.to_numeric(frame['B'], errors='coerce') m3 = frame['B'].notna() m4 = b.isna() frame['new'] = np.select([m1 &amp; m2, m3 &amp; m4], ['positive', 'negative'], default = a.sub(b, fill_value=0)) print (frame) A B new 0 20 10 10.0 1 NaN NaN nan 2 10 NaN 10.0 3 give NaN positive 4 NaN 10 -10.0 5 NaN given negative </code></pre>
python|string|pandas|dataframe|nan
2
3,021
65,079,318
AttributeError: 'str' object has no attribute 'dim' in pytorch
<p>I got the following error output in the PyTorch when sent model predictions into the model. Does anyone know what's going on?</p> <p>Following are the architecture model that I created, in the error output, it shows the issue exists in the x = self.fc1(cls_hs) line.</p> <pre><code>class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() self.bert = bert # dropout layer self.dropout = nn.Dropout(0.1) # relu activation function self.relu = nn.ReLU() # dense layer 1 self.fc1 = nn.Linear(768,512) # dense layer 2 (Output layer) self.fc2 = nn.Linear(512,2) #softmax activation function self.softmax = nn.LogSoftmax(dim=1) #define the forward pass def forward(self, sent_id, mask): #pass the inputs to the model _, cls_hs = self.bert(sent_id, attention_mask=mask) print(mask) print(type(mask)) x = self.fc1(cls_hs) x = self.relu(x) x = self.dropout(x) # output layer x = self.fc2(x) # apply softmax activation x = self.softmax(x) return x </code></pre> <pre><code>/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1686 if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): 1687 return handle_torch_function(linear, tens_ops, input, weight, bias=bias) -&gt; 1688 if input == 2 and bias is not None: 1689 print(input) 1690 # fused op is marginally faster AttributeError: 'str' object has no attribute 'dim' </code></pre>
<p><strong>If you work with transformers==3.0.0, everything should work fine!</strong></p> <p>There were some breaking updates in transformers==4.0.0.</p> <p>To get transformers==3.0.0, the following command can be used:</p> <pre><code>!pip install transformers==3.0.0 </code></pre>
python|python-3.x|tensorflow|machine-learning|bert-language-model
5
3,022
49,827,646
how to draw multiple distribution
<pre><code>fig_dspl, axes_dspl = plt.subplots(nrows=1, ncols=2, figsize=(9, 4)) sns.distplot(df_08['displ'], ax = axes_dspl[0]) _ = axes_dspl[0].set_title('08') sns.distplot(df_18['displ'], ax = axes_dspl[1]) _ = axes_dspl[1].set_title('18') </code></pre> <p>can anyone explain the detail of this code above? especially the first line, is this for multiple graphs? i understand how to draw a single plot (sis.displot), don't clearly understand the <code>ax = axes_dspl[0]</code>) ... and what the <code>_ = axes_dspl[0]</code></p>
<p>Create a figure with two side-by-side axes, and unpack the figure and the array of axes:</p> <pre><code>fig_dspl, axes_dspl = plt.subplots(nrows=1, ncols=2, figsize=(9, 4)) </code></pre> <p>In the first axes, draw a <code>seaborn.distplot</code>.</p> <pre><code>sns.distplot(df_08['displ'], ax = axes_dspl[0]) </code></pre> <p>Set <code>'08'</code> as the title of this plot. Assign the result to <code>_</code> and ignore it (you may as well write <code>axes_dsp...</code> instead of <code>_ = axes_dsp...</code> in this case).</p> <pre><code>_ = axes_dspl[0].set_title('08') </code></pre> <p>Similarly do so for the second axes.</p> <pre><code>sns.distplot(df_18['displ'], ax = axes_dspl[1]) _ = axes_dspl[1].set_title('18') </code></pre> <p>In conclusion:</p> <ol> <li><p>The first assignment (to the outcome of <code>plt.subplots</code>) allows greater control of the result, in this case by setting the titles later on.</p></li> <li><p>The later assignments (<code>_ = axes_dspl...</code>) are just noise and are better omitted.</p></li> </ol>
pandas
1
3,023
49,909,538
mapping a multi-index to existing pandas dataframe columns using separate dataframe
<p>I have an existing data frame in the following format (let's call it <code>df</code>):</p> <pre><code> A B C D 0 1 2 1 4 1 3 0 2 2 2 1 5 3 1 </code></pre> <p>The column names were extracted from a spreadsheet that has the following form (let's call it <code>cat_df</code>):</p> <pre><code> current category broader category X A Y B Y C Z D </code></pre> <p>First I'd like to prepend a higher level index to make <code>df</code> look like so:</p> <pre><code> X Y Z A B C D 0 1 2 1 4 1 3 0 2 2 2 1 5 3 1 </code></pre> <p>Lastly i'd like to 'roll-up' the data into the meta-index by summing over subindices, to generate a new dataframe like so:</p> <pre><code> X Y Z 0 1 3 4 1 3 2 2 2 1 8 1 </code></pre> <p>Using <code>concat</code> from <a href="https://stackoverflow.com/questions/14744068/prepend-a-level-to-a-pandas-multiindex">this answer</a> has gotten me close, but it seems like it'd be a very manual process picking out each subset. My true dataset is has a more complex mapping, so I'd like to refer to it directly as I build my meta-index. I think once I get the meta-index settled, a simple <code>groupby</code> should get me to the summation, but I'm still stuck on the first step. </p>
<pre><code>d = dict(zip(cat_df['current category'], cat_df.index)) cols = pd.MultiIndex.from_arrays([df.columns.map(d.get), df.columns]) df.set_axis(cols, axis=1, inplace=False) X Y Z A B C D 0 1 2 1 4 1 3 0 2 2 2 1 5 3 1 </code></pre> <hr> <pre><code>df_new = df.set_axis(cols, axis=1, inplace=False) df_new.groupby(axis=1, level=0).sum() X Y Z 0 1 3 4 1 3 2 2 2 1 8 1 </code></pre>
python|pandas|indexing
4
3,024
63,917,912
CNN image to image regression output has very high loss 99.3%
<p>I have a deep learning model that takes 4d image as input and predicts 1D image. But my loss is very high. Could anyone help me find out why.</p> <p>sample input images: [1st dimension[][1]][1]+[2nd dimension][2]+[3rd dimension][3]+[4th dimension][4]==== output [desired output image][5]</p> <p>information contain output image is very less.</p> <p>I used RMSE for loss calculation form tf.keras. it seems to be not converging.</p> <p>Here is how my loss looks like:</p> <p>Epoch 1/5 25/27 [==========================&gt;...] - ETA: 1:16 - loss: 99.7717 - acc: 0.0000e+00</p> <p>Model architecture and model fitting code is as follows:</p> <pre><code>def unet(pretrained_weights = None,input_size = (512,512,4)): inputs = Input(input_size) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs) #conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1) #conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) #conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) #conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) drop4 = Dropout(0.5)(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) #conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) drop5 = Dropout(0.5)(conv5) up6 = Conv2DTranspose(512, (2,2), strides=(2,2), padding='same')(drop5) #merge6 = concatenate([drop4,up6], axis = 3) merge6=up6 conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) #conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 =Conv2DTranspose(256, (2,2), strides=(2,2), padding='same')(conv6) #merge7 = concatenate([conv3,up7], axis = 3) merge7=up7 conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) #conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) #up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(Conv2DTranspose(1, (3,3), strides=(2,2), padding='same')(conv7)) up8 = Conv2DTranspose(128, (2,2), strides=(2,2), padding='same')(conv7) #merge8 = concatenate([conv2,up8], axis = 3) merge8=up8 Conv2DTranspose(1, (3,3), strides=(2,2), padding='same') conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) #conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) #up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(Conv2DTranspose(1, (3,3), strides=(2,2), padding='same')(conv8)) up9 = Conv2DTranspose(64, (2,2), strides=(2,2), padding='same')(conv8) #merge9 = concatenate([conv1,up9], axis = 3) merge9=up9 conv9 = Conv2D(64, 3, 
activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) #conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) #conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9) conv11 = tf.keras.layers.Reshape((512, 512))(conv10) model = tf.keras.Model(inputs,conv11) model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) #model.summary() if(pretrained_weights): model.load_weights(pretrained_weights) return model model=unet() model.compile(loss=tf.keras.losses.MeanAbsolutePercentageError(),optimizer=tf.keras.optimizers.Adadelta(), metrics = ['accuracy']) model.fit(train_ds,epochs=5,verbose=1,validation_data=validation_ds) [1]: https://i.stack.imgur.com/h9x3C.png [2]: https://i.stack.imgur.com/0LjZ4.png [3]: https://i.stack.imgur.com/qu0cm.png [4]: https://i.stack.imgur.com/ZiKlg.png [5]: https://i.stack.imgur.com/f0izQ.png </code></pre>
<p>I think you can perform a few of the following checks to figure out the problem:</p> <ul> <li>Check that the target image is being loaded correctly</li> <li>Change the optimizer and play around with the learning rate</li> <li>Train for a few more epochs to see if the loss is converging</li> <li>Sometimes the loss function itself has poor convergence. I see that you are using mean_absolute_percentage_error; it is not a smooth loss function and is likely to have convergence issues. Maybe train the model with a smoother loss such as MSE for a few epochs and then switch back, as sketched below</li> </ul>
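<p>A rough sketch of that last point, reusing the <code>model</code>, <code>train_ds</code> and <code>validation_ds</code> from your code (note that re-compiling resets the optimizer state):</p> <pre><code># warm up with a smoother loss for a few epochs
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=tf.keras.losses.MeanSquaredError())
model.fit(train_ds, epochs=3, validation_data=validation_ds)

# then switch back to the percentage error if you still want to optimize it
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=tf.keras.losses.MeanAbsolutePercentageError())
model.fit(train_ds, epochs=5, validation_data=validation_ds)
</code></pre>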
python|tensorflow|keras|conv-neural-network
0
3,025
64,118,032
TypeError: Value passed to parameter 'input' has DataType bool not in list of allowed values: float32, float64, int32, uint8, int16, int8
<p>I have a dataset with 5 labels</p> <pre><code>def get_label(file_path): # convert the path to a list of path components parts = tf.strings.split(file_path, os.path.sep) class_names = ['daisy' 'dandelion' 'roses' 'sunflowers' 'tulips'] # The second to last is the class-directory one_hot = parts[-2] == class_names # Integer encode the label return tf.argmax(one_hot) def decode_img(img): # convert the compressed string to a 3D uint8 tensor img = tf.image.decode_jpeg(img, channels=3) # resize the image to the desired size return tf.image.resize(img, [img_height, img_width]) def process_path(file_path): label = get_label(file_path) # load the raw data from the file as a string img = tf.io.read_file(file_path) img = decode_img(img) return img, label train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE) </code></pre> <p>If I change this code with other dataset having 2 labels, <code>class_names = ['dog', 'cat']</code> I find this error <code> TypeError: Value passed to parameter 'input' has DataType bool not in list of allowed values: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, float16, uint32, uint64</code> So how I can update <code> def get_label(file_path)</code></p>
<p>I was having the same problem. Following the idea of the last post, cast the boolean comparison to an integer dtype before it reaches <code>tf.argmax</code>:</p> <pre><code>one_hot = tf.dtypes.cast(parts[-2] == class_names, dtype = tf.int16) </code></pre>
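<p>In the context of the <code>get_label</code> function from the question (using the same imports), this would look like:</p> <pre><code>def get_label(file_path):
    parts = tf.strings.split(file_path, os.path.sep)
    class_names = ['dog', 'cat']
    # casting the boolean match to int avoids the &quot;DataType bool not in list of allowed values&quot; error
    one_hot = tf.dtypes.cast(parts[-2] == class_names, dtype=tf.int16)
    return tf.argmax(one_hot)
</code></pre>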
python|tensorflow2.x
2
3,026
64,049,466
How to avoid df.apply running twice top row
<p>I am aware that the method <code>apply</code> run twice the top row to see if it can optimize my code.</p> <p>Is there a way to use <code>apply</code> without running twice on the first row?</p> <p>Here is my code:</p> <pre><code>df[[&quot;A&quot;, &quot;B&quot;]] = df.apply(lambda x: pd.Series(create_store(x[&quot;C&quot;], x[&quot;D&quot;]),axis=1)) </code></pre>
<p>You can pass a <code>tuple</code> of those two columns to the function instead, for example:</p> <pre><code>def create_store(pair): ... pair[0] # instead of x[&quot;C&quot;] pair[1] # instead of x[&quot;D&quot;] </code></pre>
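<p>A minimal, self-contained sketch of one way to wire this up so the function is called exactly once per row (the toy <code>create_store</code> here is hypothetical and just returns two derived values):</p> <pre><code>import pandas as pd

# hypothetical stand-in for your create_store, returning the (A, B) pair for one row
def create_store(pair):
    c, d = pair
    return c * 2, d * 2

df = pd.DataFrame({'C': [1, 2, 3], 'D': [10, 20, 30]})

# one call per row, so nothing is evaluated twice
df[['A', 'B']] = [create_store(t) for t in zip(df['C'], df['D'])]
print(df)
</code></pre>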
pandas|dataframe
0
3,027
63,789,152
How to append a value in a new column to Null rows in Pandas
<p>Hello stackoverflow community</p> <p>This is my sample data</p> <pre><code>Code Sales HeadQuarter CC 1000 XYZ AA NaN YYZ BB 2000 NaN DD NaN NaN </code></pre> <p>As I have to append a new column to respective rows that contains atleast one NaN value. The new column can contain any value. I'm using 1 as the value</p> <p>The new table should look like this</p> <pre><code>Code Sales HeadQuarter New_column CC 1000 XYZ AA NaN YYZ 1 BB 2000 NaN 1 DD NaN NaN 1 </code></pre> <p>Can anyone help me with this?</p> <p>Thanks in Advance!</p>
<p>Test for missing values and get at least one <code>True</code> per row to build a mask, then create the new column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p> <pre><code>df.loc[df.isna().any(axis=1), 'New_column'] = 1 print (df) Code Sales HeadQuarter New_column 0 CC 1000.0 XYZ NaN 1 AA NaN YYZ 1.0 2 BB 2000.0 NaN 1.0 3 DD NaN NaN 1.0 </code></pre> <p>If you need to set two different values for match and no match, use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p> <pre><code>df['New_column'] = np.where(df.isna().any(axis=1), 1, '') print (df) Code Sales HeadQuarter New_column 0 CC 1000.0 XYZ 1 AA NaN YYZ 1 2 BB 2000.0 NaN 1 3 DD NaN NaN 1 </code></pre>
pandas|dataframe
4
3,028
63,916,699
Delete row if the front row is the same in pandas
<p>I have a dataframe and want to delete row if the front row is the same.</p> <p>My current code:</p> <pre><code>df = pd.read_csv(&quot;MyCSV.csv&quot;, &quot;;&quot;) df_2 = df.loc[:,['A', 'B', 'C', 'D']] for i in df_2.itertuples(): if df_2[i] == df_2[i+1]: print(df) </code></pre> <p>I have this input dataframe:</p> <p><img src="https://i.stack.imgur.com/gvYKK.png" alt="I have this dataframe" /></p> <p>And this is the output that I want:</p> <p><img src="https://i.stack.imgur.com/itJ6P.png" alt="dataframe i want this dataframe" /></p> <p>Code for recreating the input dataframe:</p> <pre><code>df = pd.DataFrame({'time': [0,1,2,3,4,5,9,10,11,12,13,14,15], 'A': [0.0] * 13, 'B': [0.0,0.0,0.0,0.0,0.00000813,0.00000813,0.0, 0.0,0.0,0.0,0.0,0.00000813,0.00000813], 'C': [0.0000109,0.0000109,0.0,0.0,0.0,0.0,0.0,0.0,0.0000109,0.0000109,0.0,0.0,0.0], 'D': [0.00000222,0.00000222,0.0,0.0,0.0,0.0,0.0,0.0,0.00000222,0.00000222,0.0,0.0,0.0] }) </code></pre>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shift</code></a> to do this as follows:</p> <pre><code>cols = ['A', 'B', 'C', 'D'] df.loc[(df[cols].shift(-1) == df[cols]).all(1)] </code></pre> <p>Resulting dataframe:</p> <pre><code>time A B C D 0 0.0 0.000000 0.000011 0.000002 2 0.0 0.000000 0.000000 0.000000 4 0.0 0.000008 0.000000 0.000000 9 0.0 0.000000 0.000000 0.000000 11 0.0 0.000000 0.000011 0.000002 14 0.0 0.000008 0.000000 0.000000 </code></pre>
python|pandas|numpy
1
3,029
64,013,033
How to run function for multitude of arrays
<p>so I need to analyse the peak number &amp; width of a signal (in my case Calcium signal from epidermis cells) that I have stored in an excelsheet. Each column has all the values for one Cell (600 values)</p> <p>To analyse the peaks, which I will be duing with the <code>scipy.signal.find_peaks()</code> and <code>scipy.signal.peak_widths()</code> function, I put the individual columns in an 1D numpy array containing all the 601 values from that column.</p> <p>I did this by saving all the individual columns (Columns are named A, B, C, D, etc in Excelsheet) into their own dataframes (df_A, df_B) then putting them in an array :</p> <pre><code>import numpy as np import pandas as pd df = pd.read_excel('test.xlsx') df_A = df.loc[:,'A'] df_B = df.loc[:,'B'] arrA = np.array(df_A) arrB = np.array(df_B) </code></pre> <p>To calculate the the peak number&amp;width i used the following lines :</p> <pre><code>from scipy.signal import find_peaks, peak_widths peaks_A, _ = find_peaks(x,height=7000, prominence= 1) results_peakwidth_A = peak_widths(x, peaks, rel_height=0.5) </code></pre> <p>Now since I have not only one but &gt; 100 cells/signals to analyse, is there an simple way to do this for all the cells/arrays ? This exceeds my capabilities so I would gladly welcome any help.</p>
<p>The proposal would be as follows. In essence, you first select the required columns (however many there are). Then you create a function that will take in a column (no need to turn it into arrays, unless <code>scipy</code> disagrees; in that case add <code>column = column.values</code> at the top of the <code>process</code> function).</p> <p>Afterwards, use <code>apply</code>, which will loop through each column in the dataframe and pass it into the function that you defined.</p> <pre><code>import pandas as pd from scipy.signal import find_peaks, peak_widths df = pd.read_excel('test.xlsx') df = ... # select all columns from A-Z into a single dataframe with the columns required. # The shape here would be # A B C # 1 4 4.1 # 2 3 4.0 # ... # define the function you want to apply to each column def process(column): peaks, _ = find_peaks(column,height=7000, prominence= 1) return peak_widths(column, peaks, rel_height=0.5) new_columns = df.apply(process) </code></pre> <p>As I'm unsure of what the actual output should look like, you might want to keep both the <code>peaks</code> and the <code>width</code>, in which case you could alter the process function slightly:</p> <pre class="lang-py prettyprint-override"><code>def process(column): peaks, _ = find_peaks(column,height=7000, prominence= 1) width = peak_widths(column, peaks, rel_height=0.5) return pd.Series({&quot;width&quot;: width, &quot;peaks&quot;: peaks}) </code></pre>
python|arrays|excel|numpy|scipy
0
3,030
64,093,423
Efficient Way of subset a dataframe into small ones based on unique values and simultaneously write out to csv file
<p>What is the most efficient way to subset a large dataframe <code>df</code> into small subsets based on a unique/ filter condition? For example, I have a dataset with a dimension of 22050 rows with 5 columns, something like this</p> <pre><code>id, nationality, age, gender, income 10001, France, 20, M, 45007 13328, UK, 52, F, 72308 11654, USA, 57, F, 95645 11765, UK, 39, M, 77343 10081, UAE, 41,M, 83117 10503, France, 22, F, 25665 </code></pre> <p>There are 15 unique nationalities in the entire dataset, I want to subset the dataset into 15 dataframes based on the 15 unique countries and simultaneously write out the 15 dataframes in 15 <code>csv</code> output files.</p> <p>Desired output should look like this</p> <p>dataframe-one in a <code>csv</code> file</p> <pre><code>id, nationality, age, gender, income 10001, France, 20, M, 45007 10503, France, 22, F, 25665 </code></pre> <p>dataframe-two in a <code>csv</code> file</p> <pre><code>13328, UK, 52, F, 72308 11765, UK, 39, M, 77343 </code></pre> <p>likewise for dataframes 3 to 15</p> <p>Here is my attempt:</p> <pre><code>fran = df[df.nationality == 'France'] fran.to_csv(file_name, sep=',') uk = df[df.nationality =='UK'] uk.to_csv(file_name, sep=',') USA = df[df.nationality == 'USA'] usa.to_csv(file_name, sep=',') </code></pre> <p>I want a more efficient way, <code>apply | lambda</code> or a <code>loop</code> approach</p>
<p>In <code>base R</code>, we can split the data by the 'nationality' column into a <code>list</code> of <code>data.frame</code>s:</p> <pre><code>lst1 &lt;- split(df, df$nationality) </code></pre> <p>Then loop over the <code>list</code> and write each element to a different file:</p> <pre><code>lapply(names(lst1), function(nm) write.csv(lst1[[nm]], paste0(nm, &quot;.csv&quot;), row.names = FALSE, quote = FALSE)) </code></pre> <p>NOTE: The <code>split</code> approach would be much faster than the <code>==</code> based subsetting.</p>
python|r|pandas|dataframe|csv
1
3,031
46,933,324
Data doesn't match to original data after converting to TFRecord
<blockquote> <p><strong>TL: DR;</strong></p> </blockquote> <p>After converting my data to TFRecord it doesn't match with original data </p> <blockquote> <p><strong>Description:</strong></p> </blockquote> <p>I made a <a href="https://gist.github.com/niyamatalmass/b4e8f14d0a853b109035cc0973c03189" rel="nofollow noreferrer">method</a> that will read TFRecord data. And I want to test that method that the method is working as I expected. So I create a <a href="https://gist.github.com/niyamatalmass/160207d687cd07d20bede12d8092f429" rel="nofollow noreferrer">Tensorflow test case</a> where I create a fake TFRecord file given some random input data. And pass that data to that method. The results get from that method and the original data, I pass them to the assertAllEqual() method for unit testing. But the test failed. </p> <p>Here is test error</p> <pre><code>AssertionError: Arrays are not equal (mismatch 66.66666666666666%) x: array([[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.],... y: array([[[ 0, 0, 0], [ 0, 0, 0], [ 0, 0, 0],... not equal where = (array([10, 10, 10, ..., 31, 31, 31]), array([21, 21, 22, ..., 31, 31, 31]), array([1, 2, 0, ..., 0, 1, 2])) not equal lhs = [128 128 128 ..., 255 255 255] not equal rhs = [ 0. 0. 0. ..., 0. 0. 0.] </code></pre> <p><em>Shape of the fake data (3, 32, 32, 3)</em> </p> <blockquote> <p><strong>Tried Solutions:</strong></p> </blockquote> <ol> <li>Tried different input</li> <li>Tried RandomSuffleQueue instead of FIFOQueue </li> <li><strong>Check if test on labels is pass or not. Yes! test on labels passed!</strong></li> </ol> <blockquote> <p><strong>My Question:</strong></p> </blockquote> <ol> <li>What's going wrong?</li> <li>Is there any problem in the first method that is used for reading TFRecord file?</li> <li>Am I doing anything wrong in my test case?</li> </ol>
<blockquote> <p>I found the solution. The problem was in my test case, where I made my fake data for testing purposes. <strong>The generation of the fake data was wrong.</strong></p> </blockquote> <p>I changed my code from this </p> <pre><code>records = [self._record(0, 128, 255), self._record(255, 0, 1), self._record(254, 255, 0)] </code></pre> <p>to this </p> <pre><code> image_array_1 = np.random.random((32, 32, 3)) image_array_2 = np.random.random((32, 32, 3)) image_array_3 = np.random.random((32, 32, 3)) formatted1 = (image_array_1 * 255 / np.max(image_array_1)).astype('uint8') formatted2 = (image_array_2 * 255 / np.max(image_array_2)).astype('uint8') formatted3 = (image_array_3 * 255 / np.max(image_array_3)).astype('uint8') images_data_set = [formatted1, formatted2, formatted3] </code></pre> <p>And the test passed perfectly!</p>
tensorflow
0
3,032
63,069,555
Input 0 of layer sequential is incompatible with the layer
<p>I created a model and then loaded it in another script and try to perform a prediction from it however I can not understand why the shape being passed to the function is incorrect.</p> <p>This is how the model is created:</p> <pre><code>batch_size = 1232 epochs = 5 IMG_HEIGHT = 400 IMG_WIDTH = 400 model1 = np.load(&quot;training_data.npy&quot;, allow_pickle=True) model2 = np.load(&quot;training_data_1.npy&quot;, allow_pickle=True) data = np.asarray(np.concatenate((model1, model2), axis=0)) # 1232 train_data = data[:-100] X_train = np.asarray(np.array([i[0] for i in train_data])) Y_train = np.asarray([i[1] for i in train_data]) validation_data = data[-100:] X_val = np.asarray(np.array([i[0] for i in validation_data])) Y_val = np.asarray([i[1] for i in validation_data]) model = Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)), MaxPooling2D(), Conv2D(32, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Flatten(), Dense(512, activation='relu'), Dense(1) ]) model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(X_train, Y_train, steps_per_epoch=batch_size, epochs=epochs, validation_data=(X_val, Y_val), validation_steps=batch_size) model.save(&quot;test&quot;) </code></pre> <p>And this is how I'm trying to make a prediction:</p> <pre><code>batch_size = 1232 epochs = 5 IMG_HEIGHT = 400 IMG_WIDTH = 400 model = tf.keras.models.load_model('test') test_1 = cv2.imread('./Data/Images/test_no.jpg') test_1 = cv2.resize(test_1, (IMG_HEIGHT, IMG_WIDTH)) prediction = model.predict([test_1])[0] print(prediction) </code></pre> <p>When printing the shape of the test image the output is: (400, 400, 3)</p> <p>I also tried using the numpy operation reshape when passing the test image to predict. However the error is always:</p> <pre><code>ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 400, 3] </code></pre>
<p>Add an extra batch dimension to the image you pass to <code>predict</code>, so its shape becomes [n_items, 400, 400, 3]:</p> <pre><code>import tensorflow as tf test_1 = tf.expand_dims(test_1, axis=0) prediction = model.predict(test_1)[0] </code></pre>
python|numpy|tensorflow|machine-learning|keras
0
3,033
68,007,737
pandas merge with multiindex columns but single index index
<p>I am using Python 3.8.6 with pandas version 1.2.4</p> <p>I want to do a self join on previous rows with this dataframe:</p> <pre><code> bar one index 0 0.238307 1 0.610819 </code></pre> <p>so i prepare the dataframe before doing a pandas merge</p> <p>the &quot;left&quot; merge data looks like this:</p> <pre><code> bar one 0 0.238307 1 0.610819 </code></pre> <p>the &quot;right&quot; merge data looks like this:</p> <pre><code> bar index1 one 0 0.238307 1 1 0.610819 2 </code></pre> <p>now i try this merge:</p> <pre><code>pd.merge(left, right, left_index=True, right_on=('index1',''), suffixes=('_n','_p')) </code></pre> <p>It throws a <strong>ValueError: len(right_on) must equal the number of levels in the index of &quot;left&quot;</strong></p> <p>To me, this makes no sense. What counts is that the values of ('index1,'') are comparable to left.index</p> <p>What am i missing?</p> <p>i have also tried the following: left</p> <pre><code> index bar one 0 0 0.972453 1 1 0.278209 </code></pre> <p>right</p> <pre><code> bar index1 one 0 0.972453 1 1 0.278209 2 </code></pre> <p>merge expression</p> <pre><code>pd.merge(left,right,left_on=('index',''),right_on=('index1',''),suffixes=('_n','_p')) </code></pre> <p>error</p> <pre><code> raise KeyError(key) KeyError: '' </code></pre> <p>NB</p> <pre><code>left.loc[:,('index','')] 0 0 1 1 2 2 right.loc[:,('index1','')] 0 1 1 2 2 3 </code></pre> <p>So again, some problem i don't understand</p> <p>Thanks Martin</p>
<p>The answer I discovered is:</p> <pre><code>left=left.set_index(('index','')) right=right.set_index(('index1','')) dfJoined=pd.merge(left,right,left_index=True,right_index=True,suffixes=('_n','_p')) </code></pre> <p>produces</p> <pre><code> bar_n bar_p one one 1 1.719833 -0.540152 #different random numbers from question above </code></pre>
python|pandas|dataframe|multi-index
0
3,034
67,730,008
How to More Efficiently Pivot Semi Colon Separated Columns into a 0/1/2 Indicator Matrix?
<p>I am starting with a pandas Dataframe that is generated by the following code:</p> <pre><code>import pandas as pd data = {'basket_1':['apple;banana;orange', 'apple;banana;mango', 'mango;orange;grapefruit'], 'basket_2':['pineapple;strawberry;peach', 'peach;lemon;guava', 'strawberry;peach;guava']} # Create DataFrame df = pd.DataFrame(data) basket_1 basket_2 0 apple;banana;orange pineapple;strawberry;peach 1 apple;banana;mango peach;lemon;guava 2 mango;orange;grapefruit strawberry;peach;guava </code></pre> <p>I also start with a numpy array of all of the fruits present in either basket column (as well as some additional fruits), generated as follows for the purpose of this example:</p> <pre><code>import numpy as np fruits = np.array(['apple', 'banana', 'orange', 'mango', 'grapefruit', 'kiwi', 'pineapple', 'strawberry', 'peach', 'lemon', 'guava', 'lime']) </code></pre> <p>From this initial Dataframe and array, I am looking to generate the resulting Dataframe in the most efficient way possible:</p> <p><a href="https://i.stack.imgur.com/51wUY.png" rel="nofollow noreferrer">Final Dataframe</a></p> <pre><code> basket_1 basket_2 apple banana orange mango grapefruit kiwi pineapple strawberry peach lemon guava lime 0 apple;banana;orange pineapple;strawberry;peach 1 1 1 0 0 0 2 2 2 0 0 0 1 apple;banana;mango peach;lemon;guava 1 1 0 1 0 0 0 0 2 2 2 0 2 mango;orange;grapefruit strawberry;peach;guava 0 0 1 1 1 0 0 2 2 0 2 0 </code></pre> <p>The final result has added a column for each of the elements present in the fruits array along with a '1' in a given row if the given fruit is present in the 'basket_1' column, a '2' in a given row if the given fruit is present in the 'basket_2' column, and a '0' otherwise.</p> <p>At the moment, I am using the following code to transform the initial Dataframe into the desired format:</p> <pre><code>def whichBasket(b1, b2, fruit): if fruit in b1: val = 1 elif fruit in b2: val = 2 else: val = 0 return val for f in fruits: df[f] = df.apply(lambda x: whichBasket(x.basket_1, x.basket_2, f), 1) </code></pre> <p>This solution calls an apply function iterating through each row of the Dataframe nested inside a for-loop iterating through each fruit, which works fine for a small example such as this one. However, I am attempting to scale this up to a Dataframe with over 1000 fruits and over 80000 rows, and this solution is far too slow to complete this job in a reasonable amount of time.</p> <p>Any ideas for ways to improve this code's performance in terms of shortening running time? Thanks in advance for your help.</p>
<p>Here is one option:</p> <ol> <li><p>Create a dataframe by appending all the columns from the fruits list to the current df</p> </li> <li><p>Use <code>str.get_dummies</code> to generate dummy columns and assign them to the original df</p> </li> <li><p>For basket 2, multiply the dummies by 2 so the indicator value is 2 instead of 1</p> </li> <li><p>Replace NaN in the missing cells with 0</p> <pre><code> df = pd.concat([df, pd.DataFrame(columns = fruits)], axis = 1) df = df.assign(**df['basket_1'].str.get_dummies(';')) df = df.assign(**df['basket_2'].str.get_dummies(';') * 2) df = df.fillna(0) </code></pre> </li> </ol> <p>Edit: addressing @Matt's question on a fruit being present in either basket but not at the same time. Slightly different approach: concat df with the get_dummies results from baskets 1 &amp; 2, then use reindex to include the rest of the columns from the fruits list.</p> <pre><code>data = {'basket_1':['apple;banana;orange', 'apple;banana;mango', 'mango;orange;grapefruit','mango;peach;grapefruit'], 'basket_2':['pineapple;strawberry;peach', 'peach;lemon;guava', 'strawberry;peach;guava', 'apple;lemon;guava']} df = pd.DataFrame(data) fruits = np.array(['apple', 'banana', 'orange', 'mango', 'grapefruit', 'kiwi', 'pineapple', 'strawberry', 'peach', 'lemon', 'guava', 'lime']) req_cols = df.columns.tolist() + fruits.tolist() df = pd.concat([df, df['basket_1'].str.get_dummies(';'), df['basket_2'].str.get_dummies(';')*2], axis = 1).groupby(level=0, axis=1).sum() df = df.reindex(req_cols, axis = 1, fill_value = 0) </code></pre>
python|pandas|performance|lambda|pivot
2
3,035
67,823,698
Python Pandas count values and create summary dataframe
<p>I have a DF that contains multiple events for multiple customers. The important columns are:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Customer</th> <th>Result</th> </tr> </thead> <tbody> <tr> <td>cust1</td> <td>OK</td> </tr> <tr> <td>cust1</td> <td>OK</td> </tr> <tr> <td>cust1</td> <td>FAIL</td> </tr> <tr> <td>cust2</td> <td>OK</td> </tr> <tr> <td>cust2</td> <td>FAIL</td> </tr> <tr> <td>cust2</td> <td>FAIL</td> </tr> <tr> <td>cust3</td> <td>OK</td> </tr> <tr> <td>cust3</td> <td>OK</td> </tr> <tr> <td>cust3</td> <td>OK</td> </tr> </tbody> </table> </div> <p>I need to convert this to a summary dataframe like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Customer</th> <th>FAIL</th> <th>OK</th> <th>SUCCESS_RATE</th> </tr> </thead> <tbody> <tr> <td>cust1</td> <td>1</td> <td>2</td> <td>66.6</td> </tr> <tr> <td>cust2</td> <td>2</td> <td>1</td> <td>33.3</td> </tr> <tr> <td>cust2</td> <td>0</td> <td>3</td> <td>100</td> </tr> </tbody> </table> </div> <p>Looks simple enough, but can't find the right approach. Thank you very much.</p>
<p>Try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html#pandas-crosstab" rel="nofollow noreferrer"><code>pd.crosstab</code></a>:</p> <pre><code>new_df = pd.crosstab(df['Customer'], df['Result']) new_df['SUCCESS_RATE'] = new_df['OK'] / new_df.sum(axis=1) * 100 new_df = new_df.rename_axis(None, axis=1).reset_index() </code></pre> <p><code>df</code>:</p> <pre><code> Customer FAIL OK SUCCESS_RATE 0 cust1 1 2 66.666667 1 cust2 2 1 33.333333 2 cust3 0 3 100.000000 </code></pre>
python|pandas|dataframe
2
3,036
67,838,047
Delete model from GPU/CPU in Pytorch
<p>I have a big issue with memory. I am developing a big application with GUI for testing and optimizing neural networks. The main program is showing the GUI, but training is done in thread. In my app I need to train many models with different parameters one after one. To do this I need to create a model for each attempt. When I train one I want to delete it and train new one, but I cannot delete old model. I am trying to do something like this:</p> <pre><code>del model torch.cuda.empty_cache() </code></pre> <p>but GPU memory doesn't change,</p> <p>then i tried to do this:</p> <pre><code>model.cpu() del model </code></pre> <p>When I move model to CPU, GPU memory is freed but CPU memory increase. In each attempt of training, memory is increasing all the time. Only when I close my app and run it again the all memory is freed.</p> <p>Is there a way to delete model permanently from GPU or CPU?</p> <p>Edit: Code:</p> <p>Thread, where the procces of training take pleace:</p> <pre><code>class uczeniegridsearcch(QObject): endofoneloop = pyqtSignal() endofonesample = pyqtSignal() finished = pyqtSignal() def __init__(self, train_loader, test_loader, epoch, optimizer, lenoftd, lossfun, numberofsamples, optimparams, listoflabels, model_name, num_of_class, pret): super(uczeniegridsearcch, self).__init__() self.train_loaderup = train_loader self.test_loaderup = test_loader self.epochup = epoch self.optimizername = optimizer self.lenofdt = lenoftd self.lossfun = lossfun self.numberofsamples = numberofsamples self.acc = 0 self.train_loss = 0 self.sendloss = 0 self.optimparams = optimparams self.listoflabels = listoflabels self.sel_Net = model_name self.num_of_class = num_of_class self.sel_Pret = pret self.modelforsend = [] def setuptrainmodel(self): if self.sel_Net == &quot;AlexNet&quot;: model = models.alexnet(pretrained=self.sel_Pret) model.classifier[6] = torch.nn.Linear(4096, self.num_of_class) elif self.sel_Net == &quot;ResNet50&quot;: model = models.resnet50(pretrained=self.sel_Pret) model.fc = torch.nn.Linear(model.fc.in_features, self.num_of_class) elif self.sel_Net == &quot;VGG13&quot;: model = models.vgg13(pretrained=self.sel_Pret) model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, self.num_of_class) elif self.sel_Net == &quot;DenseNet201&quot;: model = models.densenet201(pretrained=self.sel_Pret) model.classifier = torch.nn.Linear(model.classifier.in_features, self.num_of_class) elif self.sel_Net == &quot;MNASnet&quot;: model = models.mnasnet1_0(pretrained=self.sel_Pret) model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, self.num_of_class) elif self.sel_Net == &quot;ShuffleNet v2&quot;: model = models.shufflenet_v2_x1_0(pretrained=self.sel_Pret) model.fc = torch.nn.Linear(model.fc.in_features, self.num_of_class) elif self.sel_Net == &quot;SqueezeNet&quot;: model = models.squeezenet1_0(pretrained=self.sel_Pret) model.classifier[1] = torch.nn.Conv2d(512, self.num_of_class, kernel_size=(1, 1), stride=(1, 1)) model.num_classes = self.num_of_class elif self.sel_Net == &quot;GoogleNet&quot;: model = models.googlenet(pretrained=self.sel_Pret) model.fc = torch.nn.Linear(model.fc.in_features, self.num_of_class) return model def train(self): for x in range(self.numberofsamples): torch.cuda.empty_cache() modelup = self.setuptrainmodel() device = torch.device('cuda') optimizerup = TableWidget.setupotimfun(self, modelup, self.optimizername, self.optimparams[(x, 0)], self.optimparams[(x, 1)], self.optimparams[(x, 2)], self.optimparams[(x, 3)], self.optimparams[(x, 4)], 
self.optimparams[(x, 5)]) modelup = modelup.to(device) best_accuracy = 0.0 train_error_count = 0 for epoch in range(self.epochup): for images, labels in iter(self.train_loaderup): images = images.to(device) labels = labels.to(device) optimizerup.zero_grad() outputs = modelup(images) loss = TableWidget.setuplossfun(self, lossfun=self.lossfun, outputs=outputs, labels=labels) self.train_loss += loss loss.backward() optimizerup.step() train_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1)))) self.train_loss /= len(self.train_loaderup) test_error_count = 0.0 for images, labels in iter(self.test_loaderup): images = images.to(device) labels = labels.to(device) outputs = modelup(images) test_error_count += float(torch.sum(torch.abs(labels - outputs.argmax(1)))) test_accuracy = 1.0 - float(test_error_count) / float(self.lenofdt) print('%s, %d,%d: %f %f' % (&quot;Próba nr:&quot;, x+1, epoch, test_accuracy, self.train_loss), &quot;Parametry: &quot;, self.optimparams[x,:]) self.acc = test_accuracy self.sendloss = self.train_loss.item() self.endofoneloop.emit() self.endofonesample.emit() modelup.cpu() del modelup,optimizerup,device,test_accuracy,test_error_count,train_error_count,loss,labels,images,outputs torch.cuda.empty_cache() self.finished.emit() </code></pre> <p>How I call thread in main block:</p> <pre><code> self.qtest = uczeniegridsearcch(self.train_loader,self.test_loader, int(self.InputEpoch.text()), self.sel_Optim,len(self.test_dataset), self.sel_Loss, int(self.numberofsamples.text()), self.params, self.listoflabels, self.sel_Net,len(self.sel_ImgClasses),self.sel_Pret) self.qtest.endofoneloop.connect(self.inkofprogress) self.qtest.endofonesample.connect(self.inksamples) self.qtest.finished.connect(self.prints) testtret = threading.Thread(target=self.qtest.train) testtret.start() </code></pre>
<p>Assuming that the model creation code is run iteratively inside a loop, I suggest the following:</p> <ol> <li>Put the code for model creation, training, evaluation and model deletion inside a separate function and call that function from the loop body.</li> <li>Call <code>gc.collect()</code> after the function call.</li> </ol> <p>The rationale for the first point is that the model creation, deletion and cache clearing then happen in a separate stack frame, so the references are released (and the GPU memory becomes reclaimable) when the function returns.</p>
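<p>A minimal, self-contained sketch of that structure (the toy model and training step here are purely illustrative, not your actual code):</p> <pre><code>import gc
import torch
import torch.nn as nn

def run_one_trial(hidden_size, device):
    # model, optimizer and intermediate tensors live only inside this stack frame
    model = nn.Sequential(nn.Linear(10, hidden_size), nn.ReLU(), nn.Linear(hidden_size, 2)).to(device)
    optimizer = torch.optim.Adam(model.parameters())
    x = torch.randn(32, 10, device=device)
    y = torch.randint(0, 2, (32,), device=device)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # everything created above becomes unreachable when the function returns

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
for hidden_size in (64, 128, 256):
    run_one_trial(hidden_size, device)
    gc.collect()                  # drop the now-unreachable Python objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached blocks back to the GPU driver
</code></pre>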
python|memory|pytorch|gpu|cpu
0
3,037
68,597,317
I am unable to run streamlit on pycharm and spyder. I am running the latest python version on window.When I try what the code ,it says invalid syntax
<p>#This code is used to open streamlit in browser</p> <p>import streamlit import streamlit as st import pandas as pd from FPL import predict_team, get_overview_data, extract_player_roster, <br /> extract_teams_data, extract_player_types import matplotlib.pyplot as plt import numpy as np pd.options.display.float_format = &quot;{:,.2f}&quot;.format</p> <pre><code>def get_team_limit(max_players_from_team): max_players_from_team['ARS'] = int(st.text_input('ARS:', 3)) max_players_from_team['AVL'] = int(st.text_input('AVL:', 3)) max_players_from_team['BHA'] = int(st.text_input('BHA:', 3)) max_players_from_team['BUR'] = int(st.text_input('BUR:', 3)) max_players_from_team['CHE'] = int(st.text_input('CHE:', 3)) max_players_from_team['CRY'] = int(st.text_input('CRY:', 3)) max_players_from_team['EVE'] = int(st.text_input('EVE:', 3)) max_players_from_team['FUL'] = int(st.text_input('FUL:', 3)) max_players_from_team['LEE'] = int(st.text_input('LEE:', 3)) max_players_from_team['LEI'] = int(st.text_input('LEI:', 3)) max_players_from_team['LIV'] = int(st.text_input('LIV:', 3)) max_players_from_team['MCI'] = int(st.text_input('MCI:', 3)) max_players_from_team['MUN'] = int(st.text_input('MUN:', 3)) max_players_from_team['NEW'] = int(st.text_input('NEW:', 3)) max_players_from_team['SHU'] = int(st.text_input('SHU:', 3)) max_players_from_team['SOU'] = int(st.text_input('SOU:', 3)) max_players_from_team['TOT'] = int(st.text_input('TOT:', 3)) max_players_from_team['WBA'] = int(st.text_input('WBA:', 3)) max_players_from_team['WHU'] = int(st.text_input('WHU:', 3)) max_players_from_team['WOL'] = int(st.text_input('WOL:', 3)) return max_players_from_team st.markdown(&quot;&lt;h1 style='text-align: center;'&gt;Welcome to FPL TeamMaker&lt;/h1&gt;&quot;, \ unsafe_allow_html=True) st.markdown(&quot;&lt;h3 style='text-align: center;'&gt;Use Data Science to build your \ team and win!&lt;/h3&gt;&quot;, unsafe_allow_html=True) transfer = False wildcard = False gw = 1 budget = 1000 old_data_weight = 0.4 new_data_weight = 0.6 form_weight = 0.5 max_players_from_team = {} current_team = [] num_transfers = 1 gw = int(st.text_input('Enter the Gameweek you want to make team for:', '1')) if gw == 1: st.write('Starting below, please provide how many players you want from each team.\ Use this in cases when a particular team does not have a fixture for the week.') max_players_from_team = get_team_limit(max_players_from_team) elif gw &gt; 1 and gw &lt;= 4: transfer_or_wildcard = st.radio('Select your mode of team making:', ('Transfer',\ 'New Team / Wildcard')) if transfer_or_wildcard == 'Transfer': transfer = True else: wildcard = True old_data_weight = float(st.text_input('Enter the weight you want to give to last \ season\'s data (0-1.0):', 0.4)) new_data_weight = float(st.text_input('Enter the weight you want to give to current \ season\'s data (0-1.0):', 0.6)) form_weight = float(st.text_input('Enter the weight you want to give to player form \ (0-1.0):', 0.5)) budget = float(st.text_input('Enter your budget x 10 (For transfers, enter \ the leftover budget using current team):', 1000)) if transfer: num_transfers = int(st.text_input('Enter the number of transfers to be made:', 1)) overview_data_json = get_overview_data() teams_df = extract_teams_data(overview_data_json) player_types_df = extract_player_types(overview_data_json) player_df = extract_player_roster(overview_data_json, player_types_df, teams_df) player_df = player_df[['code', 'first_name', 'second_name', 'team_code']] players = st.write('Please look at the list below 
and enter a comma \ separated list of player codes you have in your team. \ Note that they are ordered alphabetically by team name.', \ player_df) try: current_team = st.text_input('') current_team = list(map(int, current_team.split(','))) except: st.error('Please enter an input above') else: st.write('Starting below, please provide how many players you want from each team.\ Use this in cases when a particular team does not have a fixture for the week.') max_players_from_team = get_team_limit(max_players_from_team) elif gw &gt; 4 and gw &lt;=38: transfer_or_wildcard = st.radio('Select your mode of team making:', ('Transfer',\ 'New Team / Wildcard')) if transfer_or_wildcard == 'Transfer': transfer = True else: wildcard = True form_weight = float(st.text_input('Enter the weight you want to give to player form \ (0-1.0):', 0.5)) budget = float(st.text_input('Enter your budget x 10 (For transfers, enter \ the leftover budget using current team):', 1000)) if transfer: num_transfers = int(st.text_input('Enter the number of transfers to be made:', 1)) overview_data_json = get_overview_data() teams_df = extract_teams_data(overview_data_json) player_types_df = extract_player_types(overview_data_json) player_df = extract_player_roster(overview_data_json, player_types_df, teams_df) player_df = player_df[['code', 'first_name', 'second_name', 'team_code']] players = st.write('Please look at the list below and enter a comma \ separated list of player codes you have in your team. \ Note that they are ordered alphabetically by team name.', \ player_df) try: current_team = st.text_input('') current_team = list(map(int, current_team.split(','))) except: st.error('Please enter an input above') else: st.write('Starting below, please provide how many players you want from each team.\ Use this in cases when a particular team does not have a fixture for the week.') max_players_from_team = get_team_limit(max_players_from_team) if st.button('Get Team'): if gw &gt; 38: st.error('Enter correct GW') team, points, cost = predict_team(transfer, wildcard, gw, budget, old_data_weight, \ new_data_weight, form_weight, max_players_from_team, \ current_team, num_transfers) team['Cost'] /= 10 team = team.rename(columns = {&quot;First&quot;: &quot;First Name&quot;, &quot;Second&quot;: &quot;Second Name&quot;}) if len(team) &gt; 0: st.write(team) st.write('Total points of whole team:', points) st.write('Cost of the team:', cost) else: st.info('Please use this feature after GW4 has completed') st.markdown(&quot;&lt;h1 style='text-align: center;'&gt;Visualization of Results&lt;/h1&gt;&quot;, unsafe_allow_html=True) bot101_points = np.array([0, 84, 133, 180, 222, 294, 349, 401, 470\ , 551, 593, 662, 723, 774, 866, 914, 965, 1028, 1070\ , 1151, 1196, 1268, 1352, 1403, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]) average_points = np.array([0, 50, 109, 152, 200, 260, 308, 361, 416\ , 471, 515, 577, 628, 670, 730, 771, 808, 864, 894\ , 968, 1010, 1058, 1115, 1173, np.nan, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]) bot101_rank = np.array([0, 0.175178, 0.881887, 0.797683, 1.412642, 1.056323, 0.915314, 1.0164, 0.785028\ , 0.378354, 0.515661, 0.522446, 0.495127, 0.426231, 0.218577, 0.226317, 0.191098, 0.204354, 0.219032\ , 0.302307, 0.318217, 0.224458, 0.183696, 0.275990, np.nan, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 
np.nan]) bot101_gwrank = np.array([0, 0.175178, 4.534625, 1.174666, 4.348521, 1.728420, 2.062787, 3.960282, 1.105014\ , 0.247396, 2.6096, 2.559066, 1.815474, 1.603100, 0.178608, 2.406347, 0.969028, 2.743019, 1.780634\ , 2.626326, 2.934196, 0.699079, 0.497828, 5.061059, np.nan, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]) bot101_gw_points = np.array([0, 84, 49, 55, 42, 72, 55, 52, 69, 81, 50, 69, 61, 51, 92, 48, 51, 63, \ 42, 81, 45, 72, 88, 55, np.nan, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]) bot101_gw_avg_points = np.array([0, 50, 59, 43, 48, 60, 48, 53, 55, 55, 44, 62, 51, 42, 60, 41, 37, \ 56, 30, 74, 42, 48, 57, 58, np.nan, np.nan, np.nan, np.nan, np.nan\ , np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]) gameweeks = [i for i in range(0, 39)] ax = plt.gca() ax.set_xlim(0,39) plt.plot(gameweeks, bot101_points, label = 'Cumulative Team Points') plt.plot(gameweeks, average_points, label = 'Cumulative Average Points') plt.xlabel('Gameweeks') plt.ylabel('Total Points') plt.title('Total Points Viewed per Week') plt.legend() plt.locator_params(axis=&quot;x&quot;, nbins=38) st.pyplot() ax = plt.gca() ax.invert_yaxis() ax.ticklabel_format(useOffset=False, style = 'plain') plt.locator_params(axis=&quot;y&quot;, nbins=20) plt.locator_params(axis=&quot;x&quot;, nbins=38) ax.set_xlim(0,39) plt.plot(gameweeks, bot101_rank, label = 'Overall Rank') plt.plot(gameweeks, bot101_gwrank, label = 'GW Rank') plt.xlabel('Gameweeks') plt.ylabel('Ranking (in millions)') plt.title('Overall vs GW Rank') plt.legend() st.pyplot() ax = plt.gca() ax.set_xlim(0,39) plt.plot(gameweeks, bot101_gw_points, label = 'Team Points per Gameweek') plt.plot(gameweeks, bot101_gw_avg_points, label = 'Average Points per Gameweek') plt.xlabel('Gameweeks') plt.ylabel('Points') plt.title('Points per Week') plt.legend() plt.locator_params(axis=&quot;x&quot;, nbins=38) st.pyplot() #THIS IS HOW THE PROGRAM RESPONDS #OUTPUT: #Warning: to view this Streamlit app on a browser, run it with the following #command: #streamlit run C:/Users/HP/PycharmProjects/pythonProject/app.py [ARGUMENTS] </code></pre>
<p>Use the command prompt and go to the project directory where the Streamlit file is stored, then run &quot;streamlit run &lt;filename.py&gt;&quot;.</p> <p>After that a browser page will open at a local URL where you can see your application.</p>
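<p>For example, using the path shown in the warning message from the question:</p> <pre><code>cd C:/Users/HP/PycharmProjects/pythonProject
streamlit run app.py
</code></pre>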
python-3.x|pandas|numpy|data-science|streamlit
0
3,038
53,165,807
How to calculate RMSPE in python using numpy
<p>I am doing a multivariate forecasting using the <a href="https://www.kaggle.com/c/rossmann-store-sales#description" rel="nofollow noreferrer">Rossmann dataset.</a> I now need to use the RMSPE metric to evaluate my model. I saw the relevant formula <a href="https://www.kaggle.com/c/rossmann-store-sales#evaluation" rel="nofollow noreferrer">here</a>. But I am not sure how to efficiently implement this using numpy. Any help is much appreciated. </p>
<p>You can take advantage of numpy's vectorisation capability for an error metric like this. The following function can be used to compute RMSPE:</p> <pre><code>import numpy as np def rmspe(y_true, y_pred): ''' Compute Root Mean Square Percentage Error between two arrays. ''' loss = np.sqrt(np.mean(np.square(((y_true - y_pred) / y_true)), axis=0)) return loss </code></pre> <p>(For the error between vectors, <code>axis=0</code> makes it explicit that the error is computed row-wise, returning a vector. It isn't required, as this is the default behaviour for <code>np.mean</code>.)</p>
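<p>A quick sanity check with made-up numbers (not from the Rossmann data):</p> <pre><code>import numpy as np

y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])
print(rmspe(y_true, y_pred))  # ~0.0866, i.e. roughly an 8.7% error
</code></pre>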
python-3.x|numpy|machine-learning
5
3,039
65,683,052
Is it possible to save Tensorboad session?
<p>I'm using Tensorboard and would like to save and send my report (email outside my organization), without losing interactive abilities. I've tried to save it as a complete html but that didn't work. Anyone encountered the same issue and found a solution?</p>
<p>Have you seen <a href="https://tensorboard.dev/" rel="nofollow noreferrer"><code>tensorboard.dev</code></a>?</p> <p>This page allows you to host your tensorboard experiment &amp; share it with others using a link (it's still interactive) for free.</p> <p>Also you can use it from the command line; try this from your CLI for more information:</p> <pre><code>$ tensorboard dev --help </code></pre>
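<p>Uploading the logs of a run is then roughly a one-liner (point <code>--logdir</code> at whatever directory your summary writer used; check the help output above for the exact flags in your version):</p> <pre><code>$ tensorboard dev upload --logdir ./logs
</code></pre>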
tensorflow|pytorch|tensorboard
1
3,040
63,641,431
How to interpret cosine similarity output in python
<p>beginner @ Python here. I have a pandas DataFrame <strong>df</strong> with the columns: <strong>userID</strong>, <strong>weight</strong>, <strong>SEI</strong>, <strong>name</strong>.</p> <pre><code>#libraries import numpy as np; import pandas as pd from sklearn.metrics.pairwise import cosine_similarity #dataframe userID weight SEI name 3 125.0. 0.562140 263 4 254.0. 0.377294 869 5 451.0. 0.872896 196 1429 451.0. 0.872896 196 5 129.0. 0.569432 582 ... ... ... ... #output cosine_similarity(df) array([[1. , 0.98731894, 0.75370844, ..., 0.33814175, 0.33700687, 0.24443919], [0.98731894, 1. , 0.63987877, ..., 0.35037059, 0.34963404, 0.23870279], [0.75370844, 0.63987877, 1. , ..., 0.16648431, 0.16403693, 0.17438159], ..., </code></pre> <p>The person with <strong>userID</strong> 3 has a <strong>weight</strong> of 125.0, and <strong>SEI</strong> of 0.562140. The person with <strong>name</strong> 263 also has a <strong>weight</strong> of 125.0, and <strong>SEI</strong> of 0.562140. (<em>I had to use a label encoder for the <strong>name</strong> column because I could not run the cosine similarity function without changing the column data type. Hopefully this doesn't affect the end goal?</em>)</p> <p>The goal is to match up values from the column <strong>userID</strong> to values in the column <strong>name</strong> using cosine similarity on all rows. I just need some guidance in interpreting the output in order to do this. All I know is the higher the cosine value the greater the similarity.</p> <p>Any help is appreciated!</p>
<p>Make it easier for yourself and group by two columns (using the column names from your dataframe):</p> <pre><code>result1 = df.sort_values('weight') result2 = (result1.groupby(['userID', 'SEI']).apply(lambda g: cosine_similarity(g['weight'].values.reshape(1, -1), g['name'].values.reshape(1, -1))[0][0])).rename('CosSim').reset_index() </code></pre>
python|pandas|dataframe|cosine-similarity
0
3,041
63,718,817
I want to extract data from the Data Frame based on first letters of column
<p>I have data frame like this:</p> <pre><code> Flt Desg Eff Date Dis Date day of week Routing 0 AI 0922 8-May 8-May ....fri,.. riyadh-calicut 1 AI 0381 8-May 12-May .tue,..fri,.. singapore-delhi 2 AI 1242 8-May 13-May .tue,wed,.fri,.. dhaka-srinagar 3 AI 0130 9-May 9-May .....sat,. london-mumbai 4 AI 0174 9-May 9-May .....sat,. san francisco-mumbai .. ... ... ... ... ... 615 AI 1932 25-Jul 25-Jul .....sat,. jeddah-delhi 616 AI 1936 25-Jul 25-Jul .....sat,. borispill-delhi 617 AI 1938 25-Jul 25-Jul .....sat,. manas-srinagar-delhi 618 AI 1942 25-Jul 25-Jul .....sat,. dammam-bengaluru 619 AI 1954 25-Jul 25-Jul .....sat,. doha-mumbai-vijayawada [620 rows x 5 columns] </code></pre> <p>I want to extract the data of Flt, Desg, Eff Date, Dis Date , day of week based on Routing which starts from <code>moscow</code></p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>Series.str.startswith</code></a>:</p> <pre><code>cols = ['Flt', 'Desg', 'Eff Date', 'Dis Date', 'day of week'] df1 = df.loc[df['Routing'].str.startswith('moscow'), cols] </code></pre> <p>If need all columns:</p> <pre><code>df2 = df[df['Routing'].str.startswith('moscow')] </code></pre>
python|pandas
1
3,042
63,419,636
show error bar in multi line plot using matplotlib
<p>I've created a multi line plot using marplot lib, and now I want to show the min-max value for each parameter on X-axis. My code is below:</p> <pre><code>import numpy as np import pandas as pd from pandas import DataFrame import matplotlib.pyplot as plt from matplotlib import pyplot as plt import seaborn as sns df = pd.DataFrame({'Time': ['D=0','D=2','D=5','D=X'], 'Latency': [74.92, 75.32, 79.64, 100], 'Delay': [18.2,80,82,84] }) plt.plot( 'Time', 'Latency', data=df, marker='s', color='black', markersize=4, linewidth=1, linestyle='--') plt.plot( 'Time', 'Delay', data=df, marker='o', color='black', markersize=4, linewidth=1,linestyle='-') plt.legend() plt.xlabel(&quot;Time&quot;) plt.ylabel(&quot;Average Score (%)&quot;) plt.ylim(0, 100) plt.xlim('D=0','D=X') plt.savefig('Fig2.png', dpi=300, bbox_inches='tight') plt.show() </code></pre> <p>The interval min-max that I want to add is:</p> <pre><code>Latency: D=0 =&gt; {73.3, 76} D=2 =&gt; {73.3, 80} D=5 =&gt; {75, 83.3} D=X =&gt; {100} Delay: D=0 =&gt; {0, 50} D=2 =&gt; {50, 100} D=5 =&gt; {68, 90} D=X =&gt; {75, 90} </code></pre> <p>Thanks so much in advance</p>
<p><code>plt.errorbar()</code> draws lineplots with error bars. Its parameters are quite similar to <code>plt.plot()</code>. The xlims need to be a bit wider to avoid the error bars being cut by the plot borders.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'Time': ['D=0', 'D=2', 'D=5', 'D=X'], 'Latency': [74.92, 75.32, 79.64, 100], 'Delay': [18.2, 80, 82, 84]}) latency_min_max = np.array([(73.3, 76), (73.3, 80), (75, 83.3), (100, 100)]).T latency_err = np.abs(latency_min_max - df['Latency'].to_numpy()) delay_min_max = np.array([(0, 50), (50, 100), (68, 90), (75, 90)]).T delay_err = np.abs(delay_min_max - df['Delay'].to_numpy()) plt.errorbar('Time', 'Latency', yerr=latency_err, data=df, marker='s', capsize=2, color='black', markersize=4, linewidth=1, linestyle='--') plt.errorbar('Time', 'Delay', yerr=delay_err, data=df, marker='o', capsize=4, color='black', markersize=4, linewidth=1, linestyle='-') plt.legend() plt.xlabel(&quot;Time&quot;) plt.ylabel(&quot;Average Score (%)&quot;) plt.ylim(0, 100) plt.xlim(-0.2, 3.2) plt.savefig('Fig2.png', dpi=300, bbox_inches='tight') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/Riu6T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Riu6T.png" alt="example plot" /></a></p> <p>An alternative is to use <code>plt.fill_between</code> to create error bands:</p> <pre class="lang-py prettyprint-override"><code>plt.fill_between(df['Time'], latency_min_max[0, :], latency_min_max[1, :], color='red', alpha=0.2, label='Latency error') plt.fill_between(df['Time'], delay_min_max[0, :], delay_min_max[1, :], color='blue', alpha=0.2, label='Delay error') </code></pre> <p><a href="https://i.stack.imgur.com/gbg8d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gbg8d.png" alt="plot with errorbands" /></a></p>
python|pandas|matplotlib
2
3,043
53,796,915
Confidence level smaller than 0 with python linear regression
<p>I have the share prices df2[x] below as Y:</p> <pre><code>2018-09-05 6.22
2018-09-06 6.19
2018-09-07 6.22
2018-09-10 6.24
2018-09-11 6.24
</code></pre> <p>...</p> <pre><code>2018-12-05 4.65
2018-12-14 0.00
</code></pre> <p>and the short position csvReader5[x] as X:</p> <pre><code>2018-09-06 1.11
2018-09-07 1.04
2018-09-10 1.61
2018-09-11 1.52
2018-09-12 1.61
..
2018-12-05 0.98
2018-12-14 7.00
</code></pre> <p>This is my code to calculate the confidence level:</p> <pre><code>y = numpy.array(csvReader5[x]).reshape(-1,1)
X = numpy.array(df2[x]).reshape(-1,1)
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.2)
clf = LinearRegression()
clf.fit(X_train, y_train)
confidence = clf.score(X_test, y_test)
Out: -1.08
</code></pre> <p>The confidence level I get changes every time I run it and can even be smaller than 0. I thought the confidence level is the same as R squared and hence should always be between (0,1)?</p>
<p>From sklearn documentation:</p> <pre><code>score(X, y, sample_weight=None) </code></pre> <p>Returns the coefficient of determination R^2 of the prediction.</p> <p>The coefficient <code>R^2</code> is defined as <code>(1 - u/v)</code>, where u is the residual sum of squares <code>((y_true - y_pred) ** 2).sum()</code> and v is the total sum of squares <code>((y_true - y_true.mean()) ** 2).sum()</code>. The best possible score is 1.0 and <strong>it can be negative (because the model can be arbitrarily worse)</strong>. A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.</p>
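<p>A quick illustration with invented numbers — a sufficiently bad prediction gives a strongly negative score:</p> <pre><code>from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0]
y_pred = [10.0, 10.0, 10.0]  # an arbitrarily bad constant prediction

print(r2_score(y_true, y_pred))  # prints a large negative number (-96.0 here)
</code></pre>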
python|numpy|dataframe|scikit-learn|linear-regression
2
3,044
53,357,687
How to test Tensorflowlite model with multiple inputs?
<p>I created a simple MLP Regression Keras model with 4 inputs and one output. I converted this model to TFlite now I'm just trying to find out how to test it on android studio. How can I input multiple 4D objects to test in Java? The following gives an error when trying to run the model:</p> <pre><code>try{ tflite = new Interpreter(loadModelFile()); } catch(Exception ex){ ex.printStackTrace(); } double[][] inp= new double[1][4]; inp[0][1]= 0; inp[0][0] = 0; inp[0][2]= 0; inp[0][3]=-2.01616982303105; double[] output = new double[100]; tflite.run(inp,output); </code></pre> <p>EDIT: Here is the model I originally created:</p> <pre><code># create model model = Sequential() model.add(Dense(50, activation="tanh", input_dim=4, kernel_initializer="random_uniform", name="input_tensor")) model.add(Dense(50, activation="tanh", kernel_initializer="random_uniform")) model.add(Dense(1, activation="linear", kernel_initializer='random_uniform', name="output_tensor")) </code></pre>
<p>If your inputs are actually 4 separate tensors, then you should use the <code>Interpreter.runForMultipleInputsOutputs</code> API, which allows multiple separate inputs. See also <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/java/src/test/java/org/tensorflow/lite/InterpreterTest.java#L185" rel="nofollow noreferrer">this example</a> from the TensorFlow Lite repository. For example:</p> <pre><code>double[] input0 = {...};
double[] input1 = {...};
Object[] inputs = {input0, input1};
double[] output = new double[100];
Map&lt;Integer, Object&gt; outputs = new HashMap&lt;&gt;();
outputs.put(0, output);
interpreter.runForMultipleInputsOutputs(inputs, outputs);
</code></pre>
android|tensorflow|keras|tensorflow-lite
2
3,045
71,879,152
Is there a way to allocate remaining GPU to your code on PyTorch?
<ol> <li>Is there a way to allocate the remaining memory in each GPU for your task?</li> <li>Can I split my task across multiple GPU's?</li> </ol> <p><a href="https://i.stack.imgur.com/aRXza.png" rel="nofollow noreferrer">nvidia-smi for your reference</a></p>
<ol> <li>Yes. PyTorch is able to use any remaining GPU capacity given that there is enough memory. You only need to specify which GPUs to use: <a href="https://stackoverflow.com/a/39661999/10702372">https://stackoverflow.com/a/39661999/10702372</a></li> <li>Yes. GPU parallelism is implemented using PyTorch's <a href="https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html" rel="nofollow noreferrer">DistributedDataParallel</a></li> </ol>
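<p>A minimal sketch of point 1 (the device index is a placeholder — pick one that actually exists on your machine):</p> <pre><code>import torch

# choose a specific GPU by index; PyTorch will allocate from whatever
# memory is still free on that device
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')

x = torch.randn(8, 3, device=device)  # tensors created directly on that GPU
</code></pre>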
python|pytorch|gpu|nvidia-smi
0
3,046
71,968,141
How to find the correspondence of unique values between 2 tables?
<p>I am fairly new to Python and I am trying to create a new function to work on my project. The function will aim to detect which unique value is present in another column of another table.</p> <p>At first, the function seeks to keep only the unique values ​​of the two tables, then merges them into a new dataframe</p> <p>It's the rest that gets complicated because I would like to return which row and on which table my value is missing</p> <p>If you have any other leads or thought patterns, I'm also interested.</p> <p>Here is my code :</p> <pre><code>def correspondance_cle(df1, df2, col): df11 = pd.DataFrame(df1[col].unique()) df11.columns= [col] df11['test1'] = 1 df21 = pd.DataFrame(df2[col].unique()) df21.columns= [col] df21['test2'] = 1 df3 = pd.merge(df11, df21, on=col, how='outer') df3 = df3.loc[(fk['test1'].isna() == True) | (fk['test2'].isna() == True),:] df3.info() for row in df3[col]: if df3['test1'].isna() == True: print(row, &quot;is not in df1&quot;) else: print(row, 'is not in df2') </code></pre> <p>Thanks to everyone who took the time to read the post.</p>
<p>First do an outer join, removing duplicates with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.drop_duplicates.html" rel="nofollow noreferrer"><code>Series.drop_duplicates</code></a> and calling <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a> so the original indices are not lost:</p> <pre><code>df1 = pd.DataFrame({'a':[1,2,5,5]})
df2 = pd.DataFrame({'a':[2,20,5,8]})
col = 'a'

df = (df1[col].drop_duplicates().reset_index()
              .merge(df2[col].drop_duplicates().reset_index(),
                     indicator=True, how='outer', on=col))
print (df)
   index_x   a  index_y      _merge
0      0.0   1      NaN   left_only
1      1.0   2      0.0        both
2      2.0   5      2.0        both
3      NaN  20      1.0  right_only
4      NaN   8      3.0  right_only
</code></pre> <p>Then filter rows by the helper column <code>_merge</code>:</p> <pre><code>print (df[df['_merge'].eq('left_only')])
   index_x  a  index_y     _merge
0      0.0  1      NaN  left_only

print (df[df['_merge'].eq('right_only')])
   index_x   a  index_y      _merge
3      NaN  20      1.0  right_only
4      NaN   8      3.0  right_only
</code></pre>
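<p>Building on the merged frame above, the missing keys can also be pulled out as plain lists:</p> <pre><code># values present only in df1 (missing from df2) and vice versa
missing_from_df2 = df.loc[df['_merge'].eq('left_only'), col].tolist()
missing_from_df1 = df.loc[df['_merge'].eq('right_only'), col].tolist()
print(missing_from_df2)  # [1]
print(missing_from_df1)  # [20, 8]
</code></pre>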
python-3.x|pandas|dataframe|foreign-keys
0
3,047
55,173,994
Pandas Data Frame add new calculated field using Partitioned Data
<p>I am trying to add a new calculated field. I am trying the 2nd best answer in <a href="https://stackoverflow.com/questions/12376863/adding-calculated-columns-to-a-dataframe-in-pandas">Adding calculated column(s) to a dataframe in pandas</a> because it seems the best in my opinion as it is neat. Please feel free to offer better alternatives.</p> <p>Either way my initial code is below:</p> <pre><code>import pandas as pd #https://github.com/sivabalanb/Data-Analysis-with-Pandas-and-Python/blob/master/nba.csv dt_nba = pd.read_csv("data//nba.csv") #note this is just basic function. I want to pass partitioned data like team's average salary def GetSalaryIncrement(val): return val * 1.1 dt_nba["SalaryPlus10Percent"] = map(GetSalaryIncrement,dt_nba["Salary"]) dt_nba[["Name","Team","Salary","SalaryPlus10Percent"]][:5] </code></pre> <p>However, the result is not what I expected:</p> <pre><code>+----+---------------+----------------+--------------+--------------------------------+ | ID | Name | Team | Salary | SalaryPlus10Percent | +----+---------------+----------------+--------------+--------------------------------+ | 0 | Avery Bradley | Boston Celtics | 7730337.0000 | &lt;map object at 0x7fb819e9b7b8&gt; | | 1 | Jae Crowder | Boston Celtics | 6796117.0000 | &lt;map object at 0x7fb819e9b7b8&gt; | | 2 | John Holland | Boston Celtics | nan | &lt;map object at 0x7fb819e9b7b8&gt; | | 3 | R.J. Hunter | Boston Celtics | 1148640.0000 | &lt;map object at 0x7fb819e9b7b8&gt; | | 4 | Jonas Jerebko | Boston Celtics | 5000000.0000 | &lt;map object at 0x7fb819e9b7b8&gt; | +----+---------------+----------------+--------------+--------------------------------+ </code></pre> <p>In particular I am interested in passing "window/aggregate data" where it should gracefully ignore Nan values.</p> <p>Example in T-SQL I can do this:</p> <pre><code>-- INCREASE EACH PLAYERS SALARY BY 10% OF AVERAGE SALARY OF THE TEAM SELECT NewSalary= Salary + (.1 * AVG(Salary) OVER (PARTITION BY Team)) FROM nba_data </code></pre> <p>I want to do that in Pandas if possible. Thank you.</p>
<p>I think you are looking for</p> <pre><code>dt_nba["SalaryPlus10Percent"] = dt_nba["Salary"].map(GetSalaryIncrement)
</code></pre> <p>You can also call the function directly on the Series:</p> <pre><code>GetSalaryIncrement(dt_nba["Salary"])
</code></pre> <hr> <pre><code>dt_nba["Salary"].apply(GetSalaryIncrement)
</code></pre> <hr> <p>To calculate <code>INCREASE EACH PLAYERS SALARY BY 10% OF AVERAGE SALARY OF THE TEAM</code>:</p> <pre><code>dt_nba['Newsa'] = dt_nba.groupby('Team')['Salary'].transform('mean')*0.1 + dt_nba["Salary"]
</code></pre>
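<p>A quick sanity check of that last line with made-up numbers:</p> <pre><code>import pandas as pd

dt = pd.DataFrame({'Team': ['A', 'A', 'B'], 'Salary': [100.0, 300.0, 50.0]})

# each player's salary plus 10% of their team's average salary
dt['NewSalary'] = dt.groupby('Team')['Salary'].transform('mean') * 0.1 + dt['Salary']
print(dt['NewSalary'].tolist())  # [120.0, 320.0, 55.0]
</code></pre>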
python|pandas
3
3,048
55,424,526
Conversion from DatetimeIndex to datetime64[s] via int without dividing by 1e9 possible?
<p>Is it possible to convert from a DatetimeIndex to datetime64[s] array via int array without dividing by <code>1e9?</code></p> <p>The following code delivers an int numpy array, but I have to divide by <code>1e9</code> to get from nanoseconds to seconds. </p> <p>Is it possible to take this journey (DatetimeIndex, int numpy array, and finally datetime64[s] numpy array) without the dividing by <code>1e9</code>?</p> <pre><code>import pandas as pd import numpy as np start = pd.Timestamp('2015-07-01') end = pd.Timestamp('2015-07-05') t = np.linspace(start.value, end.value, 5) datetimeIndex = pd.to_datetime(t) '''type: DatetimeIndex''' datetimeIndex Out[2]: DatetimeIndex(['2015-07-01', '2015-07-02', '2015-07-03', '2015-07-04', '2015-07-05'], dtype='datetime64[ns]', freq=None) datetimeIndexAs10e9int = datetimeIndex.values.astype(np.int64) '''datetimeIndexAs10e9int - like [1435708800000000000]''' datetimeIndexAs10e9int Out[3]: array([1435708800000000000, 1435795200000000000, 1435881600000000000, 1435968000000000000, 1436054400000000000]) datetime = (1/1e9*datetimeIndexAs10e9int).astype(np.float).astype('datetime64[s]') datetime Out[4]: array(['2015-07-01T00:00:00', '2015-07-02T00:00:00', '2015-07-03T00:00:00', '2015-07-04T00:00:00', '2015-07-05T00:00:00'], dtype='datetime64[s]') </code></pre>
<p>I think you can do it by modifying your code. Use astype('datetime64[s]') instead.</p> <pre><code>datetimeIndexAs10e9int = datetimeIndex.values.astype('datetime64[s]') </code></pre>
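<p>For illustration, the full journey from the question with that change applied:</p> <pre><code>import numpy as np
import pandas as pd

start, end = pd.Timestamp('2015-07-01'), pd.Timestamp('2015-07-05')
t = np.linspace(start.value, end.value, 5)
datetimeIndex = pd.to_datetime(t)

# cast the nanosecond values straight to second resolution, no 1e9 division
datetime_s = datetimeIndex.values.astype('datetime64[s]')
print(datetime_s.dtype)  # datetime64[s]
</code></pre>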
python|pandas|numpy
1
3,049
56,520,780
How to use geopanda or shapely to find nearest point in same geodataframe
<p>I have a geodataframe showing ~25 locations represented as point geometry. I am trying to come up with a script that goes through each point, identifies the nearest location and returns the name of the nearest location and the distance.</p> <p>I can easily do this if I have different geodataframes using nearest_points(geom1, geom2) in the shapely.ops library. However all my locations are stored in one geodataframe. I am trying to loop through and that is where I am having trouble</p> <p>here is my sample file:</p> <pre class="lang-py prettyprint-override"><code>geofile = gpd.GeoDataFrame([[0, 'location A', Point(55, 55)], [1, 'location B', Point(66, 66)], [2, 'Location C', Point(99, 99)], [3, 'Location D', Point(11, 11)]], columns=['ID','Location','geometry']) </code></pre> <p>Here is the loop I am creating to no avail.</p> <pre class="lang-py prettyprint-override"><code>for index, row in geofile.iterrows(): nearest_geoms=nearest_points(row, geofile) print('location:' + nearest_geoms[0]) print('nearest:' + nearest_geoms[1]) print('-------') </code></pre> <p>I am getting this error:</p> <pre class="lang-py prettyprint-override"><code>AttributeError: 'Series' object has no attribute '_geom' </code></pre> <p>However I think my problem is beyond the error cause somehow I have to exclude the row I am looping through cause that will automatically return as the closest location since it is that location.</p> <p>My end result for one location would be the following:</p> <pre class="lang-py prettyprint-override"><code>([0,'location A','location B', '5 miles', Point(55,55)], columns=['ID','Location','Nearest', 'Distance',geometry']) </code></pre>
<p>Shapely's nearest_points function compares shapely geometries. To compare a single Point geometry against multiple other Point geometries, you can use .unary_union to compare against the resulting MultiPoint geometry. And yes, at each row operation, drop the respective point so it is not compared against itself.</p> <pre><code>import geopandas as gpd from shapely.geometry import Point from shapely.ops import nearest_points df = gpd.GeoDataFrame([[0, 'location A', Point(55,55)], [1, 'location B', Point(66,66)], [2, 'Location C', Point(99,99)], [3, 'Location D' ,Point(11,11)]], columns=['ID','Location','geometry']) df.insert(3, 'nearest_geometry', None) for index, row in df.iterrows(): point = row.geometry multipoint = df.drop(index, axis=0).geometry.unary_union queried_geom, nearest_geom = nearest_points(point, multipoint) df.loc[index, 'nearest_geometry'] = nearest_geom </code></pre> <p>Resulting in </p> <pre><code> ID Location geometry nearest_geometry 0 0 location A POINT (55 55) POINT (66 66) 1 1 location B POINT (66 66) POINT (55 55) 2 2 Location C POINT (99 99) POINT (66 66) 3 3 Location D POINT (11 11) POINT (55 55) </code></pre> <p><a href="https://i.stack.imgur.com/JPgFa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JPgFa.png" alt="enter image description here"></a></p>
python|gis|geopandas|shapely
10
3,050
67,090,699
How to include rgb and grayscale images in CNN using tf.data?
<p>I am trying to use rgb images as input and grayscale images as label image based on <a href="https://stackoverflow.com/questions/37340129/tensorflow-training-on-my-own-image">this post</a>. How can I modify the following code to define that the label images contain one channel?</p> <pre><code># step 1 filenames = tf.constant(input_list) labels = tf.constant(label_list) # step 2: create a dataset returning slices of `filenames` dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) # step 3: parse every image in the dataset using `map` def _parse_function(filename, label): image_string = tf.io.read_file(filename) image_decoded = tf.image.decode_jpeg(image_string, channels=3) image = tf.cast(image_decoded, tf.float32) return image, label dataset = dataset.map(_parse_function) dataset = dataset.batch(2) # step 4: create iterator and final input tensor iterator = tf.compat.v1.data.make_one_shot_iterator(dataset) images, labels = iterator.get_next() </code></pre>
<p>Use this function to load an image with a varying number of channels:</p> <pre><code>def _parse_function(filename, channels): image_string = tf.io.read_file(filename) image = tf.image.decode_jpeg(image_string, channels=channels) image = tf.image.convert_image_dtype(image, tf.float32) return image </code></pre> <p>Then:</p> <pre><code>dataset = dataset.map(lambda x, y: ( _parse_function(x, channels=3), _parse_function(y, channels=1) ) ) </code></pre>
python|tensorflow|conv-neural-network
0
3,051
67,069,940
ValueError in binning the dataframe with lists of values
<p>I am attempting to assign bins to a dataframe, I am doing this but creating bins then passing them as an argument but it keeps failing (probably because the dataframe has lists of values instead of values)</p> <p>I am trying to set the frequencies of a column into bins. Each frequency in the column is a list of frequencies with the following format:</p> <pre><code> Freq Theta 0 191.300003 [-54.0, -52.9999, -52.0001, -51.0] 1 191.929001 [-58.9999, -58.0001, -57.0, -55.9999] </code></pre> <p>This is not the complete dataset, there are around 15 theta values and they extend from -60 degrees to +60 degrees I try to put them into bins by doing the following:</p> <pre><code>theta_bins = np.linspace(-60,60,60) final_lut['binned']=pd.cut(final_lut['Theta'], theta_bins, include_lowest=True) </code></pre> <p>but when I run this, I encounter the following error:</p> <pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>From further reading up on stack, I encountered this question: <a href="https://stackoverflow.com/questions/44844921/create-2d-histogram-from-a-list-of-lists">2D array binning</a> and I tried to flatten the Theta values before parsing however the error remains. Any idea why this could be happening and how to fix thetas into the bins of 2 degrees would be greatly appreciated. Thank you!</p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a> it first if want use <code>pd.cut</code>, because it cannot working with <code>list</code> (like many pandas methods):</p> <pre><code>df = final_lut.explode('Theta') theta_bins = np.linspace(-60,60,60) df['binned']=pd.cut(df['Theta'], theta_bins, include_lowest=True) </code></pre>
python-3.x|pandas|dataframe|numpy|nested-lists
1
3,052
66,911,400
TypeError: Can not convert a NoneType into a Tensor or Operation -- Error believe related to converting to graph
<p>Below find my model:</p> <pre><code>class CustomModel(tf.keras.Model): def __init__(self, model1, model2, model3, model4): super(deep_and_wide, self).__init__() self.model1 = model1 self.model2 = model2 self.model3 = model3 self.model4 = model4 def call(self, inputs): x1 = self.mode1([inputs[&quot;a&quot;], inputs[&quot;b&quot;]]) x2 = self.model2([inputs[&quot;a&quot;], inputs[&quot;b&quot;]]) x3 = self.model3([inputs[&quot;a&quot;], inputs[&quot;b&quot;]]) x4 = self.model4([inputs[&quot;a&quot;], inputs[&quot;b&quot;]]) x = Concatenate()([x1, x2, x3]) x = TimeDistributed(Dense(2))(x) x = Add()([x, x4]) x_fc = Dense(1)(x) x_ec = Dense(1)(x) return x_fc, x_ec def train_step(self, data): with tf.GradientTape() as tape: data = data_adapter.expand_1d(data) batch_inputs, batch_outputs, sample_weight= data_adapter.unpack_x_y_sample_weight(data) y_true_fc, y_true_ec = batch_outputs[&quot;y_fc&quot;], batch_outputs[&quot;y_ec&quot;] y_pred_fc, y_pred_ec = self(batch_inputs, training=True) loss_fc = self.compiled_loss(y_true_fc, y_pred_fc) loss_ec = self.compiled_loss(y_true_ec, y_pred_ec) print(&quot;here&quot;) trainable_variables = self.trainable_variables print(&quot;here&quot;) gradients = tape.gradient([loss_fc, loss_ec], trainable_variables) print(&quot;here&quot;) self.optimizer.apply_gradients(zip(gradients, trainable_variables)) print(&quot;here&quot;) </code></pre> <p>And below is my custom loss</p> <pre><code>class CustomLoss(tf.keras.losses.Loss): def __init__(self, mask=True, alpha=1, beta=1, gamma=1, dtype=tf.float64): super(CustomLoss, self).__init__(reduction=tf.keras.losses.Reduction.NONE) self.mask = mask self.alpha = alpha self.beta = beta self.gamma = gamma self.dtype = dtype def call(self, y_true, y_pred): def loss_fn(y_true, y_pred, mask): y_true = tf.boolean_mask(y_true, mask) y_pred = tf.boolean_mask(y_pred, mask) return tf.keras.losses.MSE(y_true, y_pred) self.mask = tf.not_equal(y_true, 0.) y_true = tf.cast(y_true, self.dtype) y_pred = tf.cast(y_pred, self.dtype) y_pred = tf.multiply(y_pred, tf.cast(self.mask, dtype=self.dtype)) y_pred_cum = tf.math.cumsum(y_pred, axis=1) y_pred_cum = tf.multiply(y_pred_cum, tf.cast(self.mask, dtype=self.dtype)) y_true_cum = tf.math.cumsum(y_true, axis=1) y_true_cum = tf.multiply(y_true_cum, tf.cast(self.mask, dtype=self.dtype)) loss_value = self.alpha * loss_fn(y_true, y_pred, self.mask) + \ self.gamma * loss_fn(y_true_cum, y_pred_cum, self.mask) return loss_value </code></pre> <p>And then finally:</p> <pre><code>optimizer = tf.keras.optimizers.Adam()
loss = CustomLoss()
model.compile(optimizer, loss)
 model.fit(train_data, epochs=5, validation_data=val_data) </code></pre> <p>My data inputs are of size (sequence length, feature length) where sequence length is variable hence I am using <code>tf.data.experimental.bucket_by_sequence_length</code> to pad to max sequence length of the batch (as opposed to batch to max sequence length). All in all, my train and val data are tf.data.Datasets each created using <code>tf.data.experimental.bucket_by_sequence_length</code> where each batch is of size (None, None, feature length).</p> <p>When I run the above code, I get the following errors and cannot seem to understand where I am going wrong:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;input&gt;&quot;, line 75, in &lt;module&gt; File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\keras\engine\training.py&quot;, line 1100, in fit tmp_logs = self.train_function(iterator) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 828, in __call__ result = self._call(*args, **kwds) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 871, in _call self._initialize(args, kwds, add_initializers_to=initializers) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 725, in _initialize self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 2969, in _get_concrete_function_internal_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 3361, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 3196, in _create_graph_function func_graph_module.func_graph_from_py_func( File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\func_graph.py&quot;, line 990, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 634, in wrapped_fn out = weak_wrapped_fn().__wrapped__(*args, **kwds) File &quot;C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\func_graph.py&quot;, line 977, in wrapper raise e.ag_error_metadata.to_exception(e) TypeError: in user code: C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\keras\engine\training.py:805 train_function * return step_function(self, iterator) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\keras\engine\training.py:795 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1259 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2730 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3417 
_call_for_each_replica return fn(*args, **kwargs) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\keras\engine\training.py:790 run_step ** with ops.control_dependencies(_minimum_control_deps(outputs)): C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\ops.py:5359 control_dependencies return get_default_graph().control_dependencies(control_inputs) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\func_graph.py:362 control_dependencies return super(FuncGraph, self).control_dependencies(filtered_control_inputs) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\ops.py:4815 control_dependencies c = self.as_graph_element(c) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\ops.py:3726 as_graph_element return self._as_graph_element_locked(obj, allow_tensor, allow_operation) C:\Users\\Anaconda3\envs\tf_recsys\lib\site-packages\tensorflow\python\framework\ops.py:3814 _as_graph_element_locked raise TypeError(&quot;Can not convert a %s into a %s.&quot; % TypeError: Can not convert a NoneType into a Tensor or Operation. </code></pre> <p>The four print statements inserted in the <code>train_step</code> function above are printed.</p>
<p>This NoneType refers to the return value of the custom <code>train_step</code>. When you override <code>train_step</code> you should return something that can be converted into a tensor so that the minimum control dependencies can process it: typically the loss value as <code>{&quot;loss&quot;: loss_value}</code>, potentially along with other metrics, or at least an empty dict <code>{}</code>.</p>
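<p>A minimal sketch of such a <code>train_step</code> (the model and loss here are illustrative stand-ins, not the code from the question):</p> <pre><code>import tensorflow as tf

class SketchModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # returning a dict of tensors (or at least {}) gives Keras something
        # it can convert, which avoids the NoneType error
        return {'loss': loss}
</code></pre>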
tensorflow|tf.keras
0
3,053
68,268,440
Python : How to assign ranks to categorical variables within a group in Python
<p>Given I have a dataset containing only the first two columns, how do I create another column using Python which will contain the rank based on these ranges for each group separately. My desired output would look like this -</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">id</th> <th style="text-align: center;">range</th> <th style="text-align: center;">rank</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">10-20</td> <td style="text-align: center;">2</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">20-30</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">5-10</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">20-30</td> <td style="text-align: center;">2</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">10-20</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;"></td> <td style="text-align: center;"></td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">10-20</td> <td style="text-align: center;">2</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">5-10</td> <td style="text-align: center;">1</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">20-30</td> <td style="text-align: center;">3</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">30+</td> <td style="text-align: center;">4</td> </tr> </tbody> </table> </div> <p>NOTE - These are the only 4 ranges [5-10, 10-20, 20-30, 30+] that can belong to any id at max. There can be blanks as well For example as given in the reproducible example, if for id 2 there are two ranges 10-20 and 20-30 the corresponding to 10-20 the rank will be 1 and corresponding to 20-30 the rank will be 2. I have checked that df.groupby can be used but I am not being able to figure out how in this case.</p>
<p>Convert your range column to an ordered category dtype before applying <code>rank</code>:</p> <pre><code>df['range'] = df['range'].astype(pd.CategoricalDtype(
    ['5-10', '10-20', '20-30', '30+'], ordered=True))

df['rank'] = df.groupby('id')['range'].apply(lambda x: x.rank())
</code></pre> <pre><code>&gt;&gt;&gt; df
   id  range  rank
0   1  10-20   2.0
1   1  20-30   3.0
2   1   5-10   1.0
3   2  20-30   2.0
4   2  10-20   1.0
5   2    NaN   NaN
6   3  10-20   2.0
7   3   5-10   1.0
8   3  20-30   3.0
9   3    30+   4.0
</code></pre>
python|pandas|dataframe
2
3,054
68,149,692
Load HDF5 checkpoint models with custom metrics
<p>I've created a Keras Regressor to run a RandomizedSearch CV using a ModelCheckpoint callback, but the training overran the Colab runtime 12H limit and stopped halfway through. The models are saved in <code>hdf5</code> format.</p> <p>I used <code>tensorflow_addons</code> to add the <code>RSquare</code> class to monitor the R2 for train and validation sets. However, when I used <code>keras.models.load_model</code>, I get the following error:</p> <p><a href="https://i.stack.imgur.com/RIVQ5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RIVQ5.png" alt="enter image description here" /></a></p> <p>As you can see from the traceback, I have passed the <code>custom_objects</code> parameter, but still it is not recognised. How can I solve this?</p> <p>You can see the full code example below:</p> <pre><code>import os import tensorflow as tf import tensorflow_addons as tfa from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras.layers import Input, InputLayer from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from keras.wrappers.scikit_learn import KerasRegressor from sklearn.model_selection import RandomizedSearchCV def build_model(n_hidden = 2, n_neurons = 64, input_shape = (X_train.shape[1],), dropout = 0): model = Sequential() model.add(InputLayer(input_shape = input_shape)) for i in range(n_hidden): model.add(Dense(n_neurons, activation = 'relu')) model.add(Dropout(dropout)) model.add(Dense(1)) model.compile(loss = 'mean_squared_error', optimizer = 'adam', metrics = [tfa.metrics.RSquare(y_shape=(1,))]) return model keras_reg = KerasRegressor(build_model) checkdir = os.path.join(r'/content/drive/MyDrive/COP328/Case A', 'checkpoints', datetime.datetime.now().strftime('%d-%m-%Y_%H-%M-%S'), 'imputed_log1p-{epoch:02d}-{val_r_square:.3f}.hdf5') callbacks = [ModelCheckpoint(checkdir, save_freq='epoch', save_best_only = True, monitor = 'val_r_square', mode = 'max'), EarlyStopping(patience = 10)] # Here is where training got interrupted because of Colab runtime being dropped: param_dist = { 'n_hidden' : [1,2], 'n_neurons': [8,16,32,64,128], 'dropout': [0,0.2,0.4] } rnd_search_cv = RandomizedSearchCV(keras_reg, param_dist, n_iter= 15, cv = 5) rnd_search_cv.fit(X_train, y_train, epochs = 200, batch_size = 64, validation_data = (X_valid,y_valid), callbacks = callbacks) # Here is where I am trying to reload one of the most promising models based on R2, and getting the error: from keras.models import load_model import tensorflow_addons as tfa rnd_model = load_model(r'/content/drive/MyDrive/COP328/Case A/checkpoints/26-06-2021_17-32-29/imputed_log1p-56-1.000.hdf5', custom_objects = {'r_square': tfa.metrics.RSquare, 'val_r_square': tfa.metrics.RSquare}) </code></pre>
<p>This solution doesn't take into account the addons package but one possibility is to create the coefficient of determination (R^2) as a different metric without that package, and then defining it as your loss.</p> <pre><code> def coeff_determination(y_true, y_pred): from keras import backend as K SS_res = K.sum(K.square( y_true-y_pred )) SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) ) return ( 1 - SS_res/(SS_tot + K.epsilon()) ) </code></pre> <p>Recall that minimizing MSE will maximize R^2.</p>
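<p>If the model is re-trained with this metric, reloading a checkpoint then only needs the same name registered when loading (the file name below is a placeholder):</p> <pre><code>from tensorflow.keras.models import load_model

model = load_model('checkpoint.hdf5',
                   custom_objects={'coeff_determination': coeff_determination})
</code></pre>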
tensorflow|keras
0
3,055
59,273,092
How to build Pytorch Mobile sample HelloWorld app?
<p><a href="https://pytorch.org/mobile/android/" rel="nofollow noreferrer">https://pytorch.org/mobile/android/</a></p> <p>I am trying to build a Pytorch demo HelloWorld app for Android. My machine is MacOS Mojave where I installed Python3 and Torchvision via conda. I am new to Pytorch, Pytorch mobile and gradlew etc.. In the past I used CMake and Make for C/C++ builds. I tried the following steps from the pytorch website but installDebug is not found. gradlew tasks definitely doesn't have any installDebug tasks.</p> <p>Is the documentation old or am I missing a step or two below? I do have Android SDK and NDK installed as far as I can tell.</p> <pre><code>$ git clone https://github.com/pytorch/android-demo-app.git $ cd android-demo-app $ cd HelloWorldApp $ python trace_model.py $ ./gradlew installDebug FAILURE: Build failed with an exception. * What went wrong: Task 'installDebug' not found in root project 'HelloWorldApp'. * Try: Run gradlew tasks to get a list of available tasks. Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 0s $ ./gradlew tasks </code></pre>
<p>Probably the Android SDK wasn't found; try: <code>export ANDROID_HOME=/path/to/android/sdk</code></p> <p>Default location on Linux: ~/Android/SDK/. On macOS (as in the question) it is usually ~/Library/Android/sdk.</p>
android|gradle|pytorch
-1
3,056
57,087,242
Print the elements of an array with certain rules
<p>My title might be confusing but I do not know what to put. Currently, I am trying to learn about RBF-Kernel-PCA from a book and I am at the code where they load the dataset and then plot the dataset with code like below:</p> <pre><code>from scipy.spatial.distance import pdist, squareform from scipy import exp from scipy.linalg import eigh from sklearn.datasets import make_moons import matplotlib.pyplot as plt import numpy as np X, y = make_moons(n_samples=100, random_state=123) plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5) plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5) plt.show() </code></pre> <p>I dont understand why they use X[y==0,0] and X[y==0,1]. What is the y and why can it be performed with y== 0, 1? What is the 0 and 1 actually? Please kindly explain in detail or share your knowledge. I am still a beginner so I might not be able to understand deep explanations. Thank you</p> <p>Edit**</p> <p>I understand that the "y" is the label from the dataset now. But I dont get why they use 0,1. 0 stands for label 0 but what about 1?</p> <p>Example,</p> <pre><code>X[y==0,1] # here label is 0 so what about the 1? X[y==1,1] # here label is 1 so what about the 1? </code></pre>
<p>It's more like <code>X[(y==0), 1]</code>, note the parentheses. Specifically this code is selecting each row where <code>y==0</code>, and then the 1 is the column (the second column). The comma separates the axes of the <code>X</code> array. For example, let's have these arrays <code>X</code> and <code>y</code>:</p> <pre class="lang-py prettyprint-override"><code>In [100]: X = np.array([[5, 4], [3, 2], [1, 0]]) In [101]: X Out[101]: array([[5, 4], [3, 2], [1, 0]]) In [102]: y = np.array([1, 0, 0]) </code></pre> <p>Now <code>y==0</code> will give you a boolean array, the same size as <code>y</code>, but with <code>True</code> or <code>False</code> respectively where the values are equal to zero:</p> <pre class="lang-py prettyprint-override"><code>In [103]: y == 0 Out[103]: array([False, True, True]) </code></pre> <p>Now you can use this boolean array to select the rows, via <a href="https://numpy.org/devdocs/reference/arrays.indexing.html#boolean-array-indexing" rel="nofollow noreferrer">boolean indexing</a>:</p> <pre class="lang-py prettyprint-override"><code>In [104]: X[y == 0] Out[104]: array([[3, 2], [1, 0]]) </code></pre> <p>Note that it selected the second and third row, which were the indices where <code>y</code> was equal to zero. And if I wanted only one of these columns, I would just add another index:</p> <pre class="lang-py prettyprint-override"><code>In [105]: X[y == 0, 1] Out[105]: array([2, 0]) </code></pre> <p>So here a full description of this indexing operation is "select the rows according to the indexes where <code>y</code> is zero, and select the second column."</p>
python|numpy|scipy
1
3,057
45,724,470
A masked array indexing issue
<p>I have a numpy array with some <code>NaN</code> values:</p> <pre><code>arr = [ 0, NaN, 2, NaN, NaN, 5, 6, 7 ] </code></pre> <p>Using some logic (outside of the question scope), I generate a mask of the NaN locations:</p> <pre><code>mask = [ True, False, True, False, False, True, True, True ] </code></pre> <p>I use this mask to select only the valid data:</p> <pre><code>valid_arr = arr[mask] # [ 0, 2, 5, 6, 7 ] </code></pre> <p>I then perform an arbitrary algorithm which selects several <code>indeces</code> in this new array:</p> <pre><code>indeces = myAlgo(valid_arr) # [ 1, 3 ] </code></pre> <p>The <code>indeces</code> in the valid array are 1,3 (corresponding to values 2 and 6). I need to know what <code>indeces</code> these correspond to <em>in the original array</em> (<code>arr</code>). In the above example, this is obviously <strong>2 and 6</strong>.</p> <p>The array is time series data, not sorted. One solution is to iterate over the <code>mask</code>, incrementing a counter only when valid numbers are found. Can this be done more efficiently using numpy?</p>
<p>You can use <code>np.flatnonzero</code> on the mask, which returns the indices of the <code>True</code> entries in the original array, and then use your new indices to subset those:</p> <pre><code>mask = np.array([ True, False, True, False, False, True, True, True ])
indices = [1,3]

np.flatnonzero(mask)[indices]
# array([2, 6])
</code></pre>
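<p>Putting it together with the arrays from the question (here the mask is rebuilt with <code>np.isnan</code>, which is one possible way to obtain it):</p> <pre><code>import numpy as np

arr = np.array([0, np.nan, 2, np.nan, np.nan, 5, 6, 7])
mask = ~np.isnan(arr)                    # True where the value is valid
valid_arr = arr[mask]                    # [0., 2., 5., 6., 7.]
indices = np.array([1, 3])               # pretend output of myAlgo(valid_arr)
print(np.flatnonzero(mask)[indices])     # [2 6]
</code></pre>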
python|arrays|numpy|indexing
3
3,058
46,127,625
Need To Compile Keras Model Before `model.evaluate()`
<p>I load a <code>Keras</code> model from <strong>.json</strong> and <strong>.hdf5</strong> files. When I call <code>model.evaluate()</code>, it returns an error:</p> <blockquote> <p>You must compile a model before training/testing. Use `model.compile(optimizer, loss)</p> </blockquote> <p>Why do I need to compile to run <code>evaluate()</code>? </p> <p>To add, the model can be passed <code>predict()</code> with no problem. </p>
<p>Because <a href="https://keras.io/models/model/#evaluate" rel="noreferrer"><code>evaluate</code></a> will calculate the <strong>loss function and the metrics</strong>.</p> <p>You don't have any of them until you compile the model. They're parameters to the compile method:</p> <pre><code>model.compile(optimizer=..., loss=..., metrics=...) </code></pre> <p>On the other hand, <a href="https://keras.io/models/model/#predict" rel="noreferrer"><code>predict</code></a> doesn't evaluate any metric or loss, it just passes the input data through the model and gets its output.</p> <p>You need the &quot;loss&quot; for training too, so you can't train without compiling. And you can compile a model as many times as you want, and even change the parameters.</p> <hr /> <p><strong>The outputs and the loss function:</strong></p> <p>The model's outputs depend on it being defined with weights. That is automatic and you can <code>predict</code> from any model, even without any training. Every model in Keras is already born with weights (either initialized by you or randomly initialized)</p> <p>You input something, the model calculates the output. At the end of everything, this is all that matters. A good model has proper weights and outputs things correctly.</p> <p>But before getting to that end, your model needs to be trained.</p> <p>Now, the loss function takes the current output and compares it with the expected/true result. It's a function supposed to be minimized. The less the loss, the closer your results are to the expected. This is the function from which the derivatives will be taken so the backpropagation algorithm can update the weights.</p> <p>The loss function is not useful for the final purpose of the model, but it's necessary for training. That's probably why you can have models without loss functions (and consequently, there is no way to evaluate them).</p>
tensorflow|keras
32
3,059
45,765,421
What is the Python idiom for chaining tensor products?
<p>Numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html" rel="nofollow noreferrer"><code>outer</code></a> flattens its arguments. As a result it isn't possible to chain <code>outer</code> to implement the mathematical definition of the <a href="https://math.stackexchange.com/a/1507752/12400">(matrix) tensor product</a>, for example as is possible <a href="http://reference.wolfram.com/language/ref/TensorProduct.html" rel="nofollow noreferrer">in Mathematica</a> with</p> <pre><code>TensorProduct[a, b, c] </code></pre> <p>However it's possible to "recover" this functionality by reshaping the result with the dimensions of the arguments, with something like</p> <pre><code>np.outer(a, np.outer(b, c)).reshape(a.shape + b.shape + c.shape) </code></pre> <p>But I wonder if this is really the right approach. Is there an API I'm missing that already does this. Perhaps TensorFlow has something I've missed?</p>
<p>The <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html#numpy.ufunc.outer" rel="nofollow noreferrer"><code>outer</code> method</a> of NumPy ufuncs doesn't flatten:</p> <pre><code>outer = numpy.multiply.outer result = outer(a, outer(b, c)) </code></pre>
python|numpy|tensorflow
2
3,060
46,123,291
Tensorflow - use previously learned weights to initialize new weights of different dimensions
<p>I'm trying to use previously learned weights of dimension <code>m</code> to initialize a weight tensor of dimension <code>n</code> where <code>n &gt; m</code>. I can do it as I've done below.</p> <pre><code>all_weights['w1'] = tf.Variable(tf.zeros([n, output_sz], dtype=tf.float32)) all_weights['w1'] = all_weights['w1'][:m,:].assign(initial_weights['w1']) </code></pre> <p>However, I'm having an issue later on when the actual learning happens that I don't come across if I don't use weight sharing. <code>w1</code> is initially a tf.Variable and I noticed it changes to a Tensor object after the slicing assignment: <code>Tensor("strided_slice/_assign:0")</code>. My issue is I'm getting the error: </p> <pre><code>`LookupError: No gradient defined for operation 'strided_slice_2/_assign' (op type: StridedSliceAssign)`. </code></pre> <p>Does this have to do with the type (Tensor vs tf.Variable)? Does it make sense to some how cast the Tensor to a tf.Variable? I tried to do this but then I get an error like:</p> <pre><code>`FailedPreconditionError: Attempting to use uninitialized value Variable_4 [[Node: strided_slice/_assign = StridedSliceAssign[Index=DT_INT32, T=DT_FLOAT, _class=["loc:@Variable_4"], begin_mask=3, ellipsis_mask=0, end_mask=2, new_axis_mask=0, shrink_axis_mask=0, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_4, strided_slice/stack, strided_slice/stack_1, strided_slice/stack_2, strided_slice/_assign/value)]]` </code></pre> <p>I'm relatively new to Tensorflow so any help would be highly appreciated. Thanks!</p>
<p><code>tf.Variable</code> is a very different thing from <code>Tensor</code>. It does not make sense to "cast" between them.</p> <p>The easiest solution is to just use the <code>initial_weights</code> directly in the <code>Variable</code> creation. For example, something like this:</p> <pre><code>import numpy as np tf.Variable(np.append(initial_weights['w1'], np.zeros((n-m, output_sz)), axis=0), dtype=tf.float32) </code></pre>
python|tensorflow
1
3,061
51,018,409
Compare dictionaries and show only the differences in Python?
<p>I have two dictionaries and would like to compare them and list the differences: I thought about doing it as they are dictionaries which is not that easy after checking other answers here. One other way is to turn them to dataframe with pandas? I would like to take into consideration the same columns that are not in the same order too. So the check should be done by name.</p> <p>For example 'KAEK' is listed way lower in the second dictionary if they were the same in name data type and length they should not considered different just because the order is different in the two dictionaries. How should I do it?</p> <pre><code>pst.schema {'properties': OrderedDict([('KAEK', 'str:12'), ('PROP_TYPE', 'str:4'), ('ORI_TYPE', 'int:1'), ('ORI_CODE', 'str:100'), ('DEC_ID', 'str:254'), ('ADDRESS', 'str:254'), ('NUM', 'str:9'), ('LEN', 'float:19.11'), ('AREA', 'float:19.11')]), 'geometry': 'Polygon'} pst2.schema {'properties': OrderedDict([('OBJECTID_1', 'int:9'), ('OBJECTID', 'int:9'), ('FID_PERIVL', 'int:9'), ('DESC_', 'str:254'), ('PROP_TYPE', 'str:4'), ('Shape_Leng', 'float:19.11'), ('Shape_Le_1', 'float:19.11'), ('Shape_Area', 'float:19.11'), ('PARCEL_COD', 'str:254'), ('KAEK', 'str:50'), ('NUM', 'int:4'), ('DEC_ID', 'int:4'), ('ADDRESS', 'int:4'), ('ORI_CODE', 'int:4'), ('ORI_TYPE', 'int:4')]), 'geometry': 'Polygon'} </code></pre> <p>I was thinking about placing them in order like :</p> <pre><code>df = pd.DataFrame(pst2, columns=['NUM', 'DEC_ID','OBJECTID_1'])#place all the columns #which doesn't work </code></pre> <p>But if it did, any gaps with different columns between the two dictionaries would create chaos. For example, if columns in first would be:</p> <pre><code>A,B,C </code></pre> <p>and the second:</p> <pre><code>A,B,B2,C </code></pre> <p>would not be compared correctly. Therefore the comparison should occur by name.</p> <p>To sum up: Compare these and show if any combination is different than the other. Either extra columns that don't exist in the other or something like this:</p> <pre><code>'ADDRESS', 'str:254' #from 1st dictionary 'ADDRESS', 'int:4' #from 2nd dictionary </code></pre> <p>Trying to show from which dictionary belong:</p> <pre><code> pprint(set(('d1', el) if el in d1.items() else ('d2', el) for el in d2)) {('d2', 'ADDRESS'), ('d2', 'DEC_ID'), ('d2', 'DESC_'), ('d2', 'FID_PERIVL'), ('d2', 'KAEK'), ('d2', 'NUM'), ('d2', 'OBJECTID'), ('d2', 'OBJECTID_1'), ('d2', 'ORI_CODE'), ('d2', 'ORI_TYPE'), ('d2', 'PARCEL_COD'), ('d2', 'PROP_TYPE'), ('d2', 'Shape_Area'), ('d2', 'Shape_Le_1'), ('d2', 'Shape_Leng')} </code></pre> <p>the correct would be to show both dictionaries' differences.</p>
<p>If you just want to find the symmetric differences between two of the OrderedDicts,</p> <pre><code>from collections import OrderedDict &gt;&gt;&gt; d1 = {'properties': OrderedDict([('KAEK', 'str:12'), ... ('PROP_TYPE', 'str:4'), ... ('ORI_TYPE', 'int:1')... &gt;&gt;&gt; d1 = d1['properties'] &gt;&gt;&gt; d2 = {'properties': OrderedDict([('OBJECTID_1', 'int:9'), ... ('OBJECTID', 'int:9'), ... ('FID_PERIVL', 'int:9')... &gt;&gt;&gt; d2 = d2['properties'] </code></pre> <hr> <pre><code>&gt;&gt;&gt; from pprint import pprint &gt;&gt;&gt; pprint(d1) OrderedDict([('KAEK', 'str:12'), ('PROP_TYPE', 'str:4'), ('ORI_TYPE', 'int:1')... &gt;&gt;&gt; pprint(d2) OrderedDict([('OBJECTID_1', 'int:9'), ('OBJECTID', 'int:9'), ('FID_PERIVL', 'int:9')... </code></pre> <hr> <pre><code>pprint(set.symmetric_difference(set(d1.items()), set(d2.items()))) {('ADDRESS', 'int:4'), ('ADDRESS', 'str:254'), ('AREA', 'float:19.11'), ('DEC_ID', 'int:4'), ('DEC_ID', 'str:254'), ('DESC_', 'str:254'), ('FID_PERIVL', 'int:9'), ('KAEK', 'str:12'), ('KAEK', 'str:50'), ('LEN', 'float:19.11'), ('NUM', 'int:4'), ('NUM', 'str:9'), ('OBJECTID', 'int:9'), ('OBJECTID_1', 'int:9'), ('ORI_CODE', 'int:4'), ('ORI_CODE', 'str:100'), ('ORI_TYPE', 'int:1'), ('ORI_TYPE', 'int:4'), ('PARCEL_COD', 'str:254'), ('Shape_Area', 'float:19.11'), ('Shape_Le_1', 'float:19.11'), ('Shape_Leng', 'float:19.11')} </code></pre> <p>Then just use the result in whichever way you want ?</p> <p>Further edit OP requested,</p> <pre><code>&gt;&gt;&gt; d3 = set.symmetric_difference(set(d1.items()), set(d2.items())) &gt;&gt;&gt; pprint(set(('d1', el) if el in d1.items() else ('d2', el) for el in d3)) {('d1', ('ADDRESS', 'str:254')), ('d1', ('AREA', 'float:19.11')), ('d1', ('DEC_ID', 'str:254')), ('d1', ('KAEK', 'str:12')), ('d1', ('LEN', 'float:19.11')), ('d1', ('NUM', 'str:9')), ('d1', ('ORI_CODE', 'str:100')), ('d1', ('ORI_TYPE', 'int:1')), ('d2', ('ADDRESS', 'int:4')), ('d2', ('DEC_ID', 'int:4')), ('d2', ('DESC_', 'str:254')), ('d2', ('FID_PERIVL', 'int:9')), ('d2', ('KAEK', 'str:50')), ('d2', ('NUM', 'int:4')), ('d2', ('OBJECTID', 'int:9')), ('d2', ('OBJECTID_1', 'int:9')), ('d2', ('ORI_CODE', 'int:4')), ('d2', ('ORI_TYPE', 'int:4')), ('d2', ('PARCEL_COD', 'str:254')), ('d2', ('Shape_Area', 'float:19.11')), ('d2', ('Shape_Le_1', 'float:19.11')), ('d2', ('Shape_Leng', 'float:19.11'))} </code></pre>
python|pandas|dictionary
2
3,062
70,523,947
Conditional combination of arrays row by row
<p>The task is to combine two arrays row by row (construct the permutations) based on the resulting multiplication of two corresponding vectors. Such as:</p> <p>Row1_A, Row2_A, Row3_A,</p> <p>Row1_B, Row2_B, Row3_B,</p> <p>The result should be: Row1_A_Row1_B, Row1_A_Row2_B, Row1_A_Row3_B, Row2_A_Row1_B, etc..</p> <p>Given the following initial arrays:</p> <pre><code>n_rows = 1000 A = np.random.randint(10, size=(n_rows, 5)) B = np.random.randint(10, size=(n_rows, 5)) P_A = np.random.rand(n_rows, 1) P_B = np.random.rand(n_rows, 1) </code></pre> <p>Arrays P_A and P_B are corresponding vectors to the individual arrays, which contain a float. The combined rows should only appear in the final array if the multiplication surpasses a certain threshold, for example:</p> <pre><code>lim = 0.8 </code></pre> <p>I have thought of the following functions or ways to solve this problem, but I would be interested in faster solutions. I am open to using numba or other libraries, but ideally I would like to improve the vectorized solution using numpy.</p> <p>Method A</p> <pre><code>def concatenate_per_row(A, B): m1,n1 = A.shape m2,n2 = B.shape out = np.zeros((m1,m2,n1+n2),dtype=A.dtype) out[:,:,:n1] = A[:,None,:] out[:,:,n1:] = B return out.reshape(m1*m2,-1) %%timeit A_B = concatenate_per_row(A, B) P_A_B = (P_A[:, None]*P_B[None, :]) P_A_B = P_A_B.flatten() idx = P_A_B &gt; lim A_B = A_B[idx, :] P_A_B = P_A_B[idx] </code></pre> <p>37.8 ms ± 660 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)</p> <p>Method B</p> <pre><code>%%timeit A_B = [] P_A_B = [] for i in range(len(P_A)): P_A_B_i = P_A[i]*P_B idx = np.where(P_A_B_i &gt; lim)[0] if len(idx) &gt; 0: P_A_B.append(P_A_B_i[idx]) A_B_i = np.zeros((len(idx), A.shape[1] + B.shape[1]), dtype='int') A_B_i[:, :A.shape[1]] = A[i] A_B_i[:, A.shape[1]:] = B[idx, :] A_B.append(A_B_i) A_B = np.concatenate(A_B) P_A_B = np.concatenate(P_A_B) </code></pre> <p>9.65 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)</p>
<p>First of all, there is a <strong>more efficient algorithm</strong>. Indeed, you can pre-compute the size of the output array so the values can be <strong>directly written</strong> in the final output arrays rather than stored temporary in lists. To find the size efficiently, you can <strong>sort</strong> the array <code>P_B</code> and then do a <strong>binary search</strong> so to find the number of value greater than <code>lim/P_A[i,0]</code> for all possible <code>i</code> (<code>P_B*P_A[i,0] &gt; lim</code> is equivalent to <code>P_B &gt; lim/P_A[i,0]</code>). The number of item filtered for each <code>i</code> can be temporary stored so to quickly loop over the filtered items.</p> <p>Moreover, you can use <strong>Numba</strong> to significantly speed the computation of the main loop up.</p> <p>Here is the resulting code:</p> <pre class="lang-py prettyprint-override"><code>@nb.njit('(int_[:,::1], int_[:,::1], float64[:,::1], float64[:,::1])') def compute(A, B, P_A, P_B): assert P_A.shape[1] == 1 assert P_B.shape[1] == 1 P_B_sorted = np.sort(P_B.reshape(P_B.size)) counts = len(P_B) - np.searchsorted(P_B_sorted, lim/P_A[:,0], side='right') n = np.sum(counts) mA, mB = A.shape[1], B.shape[1] m = mA + mB A_B = np.empty((n, m), dtype=np.int_) P_A_B = np.empty((n, 1), dtype=np.float64) k = 0 for i in range(P_A.shape[0]): if counts[i] &gt; 0: idx = np.where(P_B &gt; lim/P_A[i, 0])[0] assert counts[i] == len(idx) start, end = k, k + counts[i] A_B[start:end, :mA] = A[i, :] A_B[start:end, mA:] = B[idx, :] P_A_B[start:end, :] = P_B[idx, :] * P_A[i, 0] k += counts[i] return A_B, P_A_B </code></pre> <p>Here are performance results on my machine:</p> <pre class="lang-none prettyprint-override"><code>Original: 35.6 ms Optimized original: 18.2 ms Proposed (with order): 0.9 ms Proposed (no ordering): 0.3 ms </code></pre> <p>The algorithm proposed above is <strong>20 times faster</strong> than the original optimized algorithm. It can be made even faster. Indeed, if the <strong>order of items</strong> do not matter you can use an <code>argsort</code> so to reorder both <code>B</code> and <code>P_B</code>. This enable you not to compute <code>idx</code> every time in the hot loop and select directly the last elements from <code>B</code> and <code>P_B</code> (that are guaranteed to be higher than the threshold but not in the same order than the original code). Because the selected items are stored contiguously in memory, this implementation is much faster. In the end, this last implementation is about <strong>60 times faster</strong> than the original optimized algorithm. 
Note that the proposed implementations are significantly faster than the original ones even without Numba.</p> <p>Here is the implementation that do not care about the order of the items:</p> <pre class="lang-py prettyprint-override"><code>@nb.njit('(int_[:,::1], int_[:,::1], float64[:,::1], float64[:,::1])') def compute(A, B, P_A, P_B): assert P_A.shape[1] == 1 assert P_B.shape[1] == 1 nA, mA = A.shape nB, mB = B.shape m = mA + mB order = np.argsort(P_B.reshape(nB)) P_B_sorted = P_B[order, :] B_sorted = B[order, :] counts = nB - np.searchsorted(P_B_sorted.reshape(nB), lim/P_A[:,0], side='right') nRes = np.sum(counts) A_B = np.empty((nRes, m), dtype=np.int_) P_A_B = np.empty((nRes, 1), dtype=np.float64) k = 0 for i in range(P_A.shape[0]): if counts[i] &gt; 0: start, end = k, k + counts[i] A_B[start:end, :mA] = A[i, :] A_B[start:end, mA:] = B_sorted[nB-counts[i]:, :] P_A_B[start:end, :] = P_B_sorted[nB-counts[i]:, :] * P_A[i, 0] k += counts[i] return A_B, P_A_B </code></pre>
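<p>For completeness, the snippets above assume this setup (the threshold value mirrors the question):</p> <pre><code>import numpy as np
import numba as nb

lim = 0.8  # module-level threshold read inside the jitted functions
</code></pre>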
python|numpy|performance|vectorization
1
3,063
51,521,732
Convert categories to columns in dataframe python
<p>I have a dataframe which contains two columns. One column contains different categories and other contains values.</p> <pre><code>import pandas as pd data={&quot;category&quot;:[&quot;Topic1&quot;,&quot;Topic2&quot;,&quot;Topic3&quot;,&quot;Topic2&quot;,&quot;Topic1&quot;,&quot;Topic3&quot;], &quot;value&quot;:[&quot;hello&quot;,&quot;hey&quot;,&quot;hi&quot;,&quot;name&quot;,&quot;valuess&quot;,&quot;python&quot;]} df=pd.DataFrame(data=data) </code></pre> <p>I want different categories into column as given below.</p> <p>Current Input:</p> <pre><code>category value Topic1 hello Topic2 hey Topic3 hi Topic2 name Topic1 valuess Topic3 python </code></pre> <p>Desired Output:</p> <pre><code>Topic1 Topic2 Topic3 hello hey hi valuess name python </code></pre> <p>I tried using transposing the dataframe but not getting the expected result.</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.concat.html" rel="noreferrer"><code>pandas.concat</code></a> along <code>axis=1</code>. This will also work for mismatched lengths.</p> <pre><code>grouper = df.groupby('category') df = pd.concat([pd.Series(v['value'].tolist(), name=k) for k, v in grouper], axis=1) print(df) Topic1 Topic2 Topic3 0 hello hey hi 1 valuess name python </code></pre>
python|pandas|dataframe|transpose
5
3,064
70,854,490
How can I do an if statement on a pandas dataframe to check multiple columns for specific values?
<p>I am wanting to check a pandas dataframe to see if two columns match two unique values. I know have to check one column at a time, but not two at once.</p> <p>Basically, I want to see if the person's last name is 'Smith' and their first name is either 'John' or 'Tom' all at the same time.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # create dataframe name = {'last_name': ['smith','smith','jones','parker'], 'first_name': ['john','tom','mary','peter']} df = pd.DataFrame(name,columns=['last_name', 'first_name']) # this is what I want to do # df.loc[df['last_name'] == 'smith' and df['first_name'].isin(['john', 'tom']), 'match'] = 'yes' # this works by itself df.loc[df['last_name'] == 'smith', 'match'] = 'yes' # this works by itself df.loc[df['first_name'].isin(['john', 'tom']), 'match'] = 'yes' print(df) </code></pre>
<p>You want to filter rows where the last name is &quot;Smith&quot; AND the first name is either &quot;John&quot; OR &quot;Tom&quot;. This means it's either &quot;John Smith&quot; OR &quot;Tom Smith&quot;. This is equivalent to</p> <pre><code>(last_name==&quot;Smith&quot; AND first_name==&quot;John&quot;) OR (last_name==&quot;Smith&quot; AND first_name==&quot;Tom&quot;) </code></pre> <p>which is equivalent to:</p> <pre><code>(last_name==&quot;smith&quot;) AND (first_name=='john' OR first_name=='tom') </code></pre> <p>the latter OR can be handled using <code>isin</code>:</p> <pre><code>out = df[(df['last_name']=='smith') &amp; (df['first_name'].isin(['john','tom']))] </code></pre> <p>Output:</p> <pre><code> last_name first_name match 0 smith john yes 1 smith tom yes </code></pre>
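<p>If the goal is to keep the whole frame and just flag the matches (as in the attempts in the question), the same combined mask works with <code>loc</code>:</p> <pre><code>df.loc[(df['last_name'] == 'smith') &amp; (df['first_name'].isin(['john', 'tom'])), 'match'] = 'yes'
print(df)
</code></pre>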
python|pandas
1
3,065
51,815,010
Simple Linear Regression using CSV data file Sklearn
<p>I have been trying this for the last few days and not luck. What I want to do is do a simple Linear regression fit and predict using sklearn, but I cannot get the data to work with the model. I know I am not reshaping my data right I just dont know how to do that.<br> Any help on this will be appreciated. I have been getting this error recently Found input variables with inconsistent numbers of samples: [1, 9] This seems to mean that the Y has 9 values and the X only has 1. I would think that this should be the other way around, but when I print off X it gives me one line from the CSV file but the y gives me all the lines from the CSV file. Any help on this will be appreciated. </p> <p>Here is my code. </p> <pre><code>filename = "E:/TestPythonCode/animalData.csv" #Data set Preprocess data dataframe = pd.read_csv(filename, dtype = 'category') print(dataframe.head()) #Git rid of the name of the animal #And change the hunter/scavenger to 0/1 dataframe = dataframe.drop(["Name"], axis = 1) cleanup = {"Class": {"Primary Hunter" : 0, "Primary Scavenger": 1 }} dataframe.replace(cleanup, inplace = True) print(dataframe.head()) #array = dataframe.values #Data splt # Seperating the data into dependent and independent variables X = dataframe.iloc[-1:] y = dataframe.iloc[:,-1] print(X) print(y) logReg = LogisticRegression() #logReg.fit(X,y) logReg.fit(X[:None],y) #logReg.fit(dataframe.iloc[-1:],dataframe.iloc[:,-1]) </code></pre> <p>And this is the csv file</p> <pre><code>Name,teethLength,weight,length,hieght,speed,Calorie Intake,Bite Force,Prey Speed,PreySize,EyeSight,Smell,Class T-Rex,12,15432,40,20,33,40000,12800,20,19841,0,0,Primary Hunter Crocodile,4,2400,23,1.6,8,2500,3700,30,881,0,0,Primary Hunter Lion,2.7,416,9.8,3.9,50,7236,650,35,1300,0,0,Primary Hunter Bear,3.6,600,7,3.35,40,20000,975,0,0,0,0,Primary Scavenger Tiger,3,260,12,3,40,7236,1050,37,160,0,0,Primary Hunter Hyena,0.27,160,5,2,37,5000,1100,20,40,0,0,Primary Scavenger Jaguar,2,220,5.5,2.5,40,5000,1350,15,300,0,0,Primary Hunter Cheetah,1.5,154,4.9,2.9,70,2200,475,56,185,0,0,Primary Hunter KomodoDragon,0.4,150,8.5,1,13,1994,240,24,110,0,0,Primary Scavenger </code></pre>
<p>Use all columns except the last one as the features <code>X</code>, and the last column as the target <code>y</code>:</p> <pre><code>X = dataframe.iloc[:,0:-1] y = dataframe.iloc[:,-1] </code></pre>
python|pandas|numpy|scikit-learn
5
3,066
51,648,278
No module named tensorflow.python on windows 10 when I run classify_image.py
<p>I'm trying to use tensorflow to run classify_image.py, but I keep getting the same error: <code>Traceback (most recent call last): File "classify_image.py", line 46, in &lt;module&gt; import tensorflow as tf File "C:\Users\Diederik\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import ModuleNotFoundError: No module named 'tensorflow.python'</code></p> <p>Someone asked me to do a pip3 list, so I did: <code>C:\Users\Diederik\AppData\Local\Programs\Python\Python36\Scripts&gt;pip3 list Package Version ----------- ------- absl-py 0.3.0 astor 0.7.1 gast 0.2.0 grpcio 1.13.0 Markdown 2.6.11 numpy 1.15.0 pip 10.0.1 protobuf 3.6.0 setuptools 39.0.1 six 1.11.0 tensorboard 1.9.0 tensorflow 1.9.0** termcolor 1.1.0 Werkzeug 0.14.1 wheel 0.31.1 You are using pip version 10.0.1, however version 18.0 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.</code></p>
<p>I simply needed to add a Python path; that was the only problem.</p>
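<p>For reference, a minimal sketch of what adding the path can look like; the site-packages directory below is only an assumption and must be replaced with the one from your own installation. Setting the <code>PYTHONPATH</code> environment variable to the same directory achieves the same thing.</p> <pre><code>import sys

# assumption: adjust this to your own site-packages directory
sys.path.append(r"C:\Users\YourUser\AppData\Local\Programs\Python\Python36\Lib\site-packages")

import tensorflow as tf
print(tf.__version__)
</code></pre>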
python|tensorflow|artificial-intelligence|image-recognition
0
3,067
37,486,502
why does pandas rolling use single dimension ndarray
<p>I was motivated to use pandas <code>rolling</code> feature to perform a rolling multi-factor regression (This question is <strong>NOT</strong> about rolling multi-factor regression). I expected that I'd be able to use <code>apply</code> after a <code>df.rolling(2)</code> and take the resulting <code>pd.DataFrame</code> extract the ndarray with <code>.values</code> and perform the requisite matrix multiplication. It didn't work out that way.</p> <p>Here is what I found:</p> <pre><code>import pandas as pd import numpy as np np.random.seed([3,1415]) df = pd.DataFrame(np.random.rand(5, 2).round(2), columns=['A', 'B']) X = np.random.rand(2, 1).round(2) </code></pre> <p>What do objects look like:</p> <pre><code>print "\ndf = \n", df print "\nX = \n", X print "\ndf.shape =", df.shape, ", X.shape =", X.shape df = A B 0 0.44 0.41 1 0.46 0.47 2 0.46 0.02 3 0.85 0.82 4 0.78 0.76 X = [[ 0.93] [ 0.83]] df.shape = (5, 2) , X.shape = (2L, 1L) </code></pre> <p>Matrix multiplication behaves normally:</p> <pre><code>df.values.dot(X) array([[ 0.7495], [ 0.8179], [ 0.4444], [ 1.4711], [ 1.3562]]) </code></pre> <p>Using apply to perform row by row dot product behaves as expected:</p> <pre><code>df.apply(lambda x: x.values.dot(X)[0], axis=1) 0 0.7495 1 0.8179 2 0.4444 3 1.4711 4 1.3562 dtype: float64 </code></pre> <p>Groupby -> Apply behaves as I'd expect:</p> <pre><code>df.groupby(level=0).apply(lambda x: x.values.dot(X)[0, 0]) 0 0.7495 1 0.8179 2 0.4444 3 1.4711 4 1.3562 dtype: float64 </code></pre> <p>But when I run:</p> <pre><code>df.rolling(1).apply(lambda x: x.values.dot(X)) </code></pre> <p>I get:</p> <blockquote> <p>AttributeError: 'numpy.ndarray' object has no attribute 'values'</p> </blockquote> <p>Ok, so pandas is using straight <code>ndarray</code> within its <code>rolling</code> implementation. I can handle that. Instead of using <code>.values</code> to get the <code>ndarray</code>, let's try:</p> <pre><code>df.rolling(1).apply(lambda x: x.dot(X)) </code></pre> <blockquote> <p>shapes (1,) and (2,1) not aligned: 1 (dim 0) != 2 (dim 0)</p> </blockquote> <p>Wait! What?!</p> <p>So I created a custom function to look at the what rolling is doing.</p> <pre><code>def print_type_sum(x): print type(x), x.shape return x.sum() </code></pre> <p>Then ran:</p> <pre><code>print df.rolling(1).apply(print_type_sum) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) &lt;type 'numpy.ndarray'&gt; (1L,) A B 0 0.44 0.41 1 0.46 0.47 2 0.46 0.02 3 0.85 0.82 4 0.78 0.76 </code></pre> <p>My resulting <code>pd.DataFrame</code> is the same, that's good. But it printed out 10 single dimensional <code>ndarray</code> objects. What about <code>rolling(2)</code></p> <pre><code>print df.rolling(2).apply(print_type_sum) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) &lt;type 'numpy.ndarray'&gt; (2L,) A B 0 NaN NaN 1 0.90 0.88 2 0.92 0.49 3 1.31 0.84 4 1.63 1.58 </code></pre> <p>Same thing, expect output but it printed 8 <code>ndarray</code> objects. 
<code>rolling</code> is producing a single dimensional <code>ndarray</code> of length <code>window</code> for each column as opposed to what I expected which was an <code>ndarray</code> of shape <code>(window, len(df.columns))</code>.</p> <h1>Question is Why?</h1> <p>I now don't have a way to easily run a rolling multi-factor regression.</p>
<p>I wanted to share what I've done to work around this problem.</p> <p>Given a <code>pd.DataFrame</code> and a window, I generate a stacked <code>ndarray</code> using <code>np.dstack</code> (<a href="https://stackoverflow.com/a/37448165/2336654"><strong>see answer</strong></a>). I then convert it to a <code>pd.Panel</code> and using <code>pd.Panel.to_frame</code> convert it to a <code>pd.DataFrame</code>. At this point, I have a <code>pd.DataFrame</code> that has an additional level on its index relative to the original <code>pd.DataFrame</code> and the new level contains information about each rolled period. For example, if the roll window is 3, the new index level will contain be <code>[0, 1, 2]</code>. An item for each period. I can now <code>groupby</code> <code>level=0</code> and return the groupby object. This now gives me an object that I can much more intuitively manipulate.</p> <h1>Roll Function</h1> <pre><code>import pandas as pd import numpy as np def roll(df, w): roll_array = np.dstack([df.values[i:i+w, :] for i in range(len(df.index) - w + 1)]).T panel = pd.Panel(roll_array, items=df.index[w-1:], major_axis=df.columns, minor_axis=pd.Index(range(w), name='roll')) return panel.to_frame().unstack().T.groupby(level=0) </code></pre> <h1>Demonstration</h1> <pre><code>np.random.seed([3,1415]) df = pd.DataFrame(np.random.rand(5, 2).round(2), columns=['A', 'B']) print df A B 0 0.44 0.41 1 0.46 0.47 2 0.46 0.02 3 0.85 0.82 4 0.78 0.76 </code></pre> <p>Let's <code>sum</code></p> <pre><code>rolled_df = roll(df, 2) print rolled_df.sum() major A B 1 0.90 0.88 2 0.92 0.49 3 1.31 0.84 4 1.63 1.58 </code></pre> <p>To peek under the hood, we can see the stucture:</p> <pre><code>print rolled_df.apply(lambda x: x) major A B roll 1 0 0.44 0.41 1 0.46 0.47 2 0 0.46 0.47 1 0.46 0.02 3 0 0.46 0.02 1 0.85 0.82 4 0 0.85 0.82 1 0.78 0.76 </code></pre> <p>But what about the purpose for which I built this, rolling multi-factor regression. But I'll settle for matrix multiplication for now.</p> <pre><code>X = np.array([2, 3]) print rolled_df.apply(lambda df: pd.Series(df.values.dot(X))) 0 1 1 2.11 2.33 2 2.33 0.98 3 0.98 4.16 4 4.16 3.84 </code></pre>
python|pandas|numpy|group-by|pandas-groupby
11
3,068
37,493,723
How do I create a "not" filter in python for pandas
<p>I have this large dataframe I've imported into pandas and I want to chop it down via a filter. Here is my basic sample code:</p> <pre><code>import pandas as pd import numpy as np from pandas import Series, DataFrame df = DataFrame({'A':[12345,0,3005,0,0,16455,16454,10694,3005],'B':[0,0,0,1,2,4,3,5,6]}) df2= df[df["A"].map(lambda x: x &gt; 0) &amp; (df["B"] &gt; 0)] </code></pre> <p>Basically this displays bottom 4 results which is semi-correct. But I need to display everything BUT these results. So essentially, I'm looking for a way to use this filter but in a "not" version if that's possible. So if column A is greater than 0 AND column B is greater than 0 then we want to disqualify these values from the dataframe. Thanks</p>
<p>No need for map function call on Series "A".</p> <p>Apply <a href="https://en.wikipedia.org/wiki/De_Morgan%27s_laws" rel="noreferrer">De Morgan's Law</a>:</p> <p><strong>"not (A and B)" is the same as "(not A) or (not B)"</strong></p> <pre><code>df2 = df[~(df.A &gt; 0) | ~(df.B &gt; 0)] </code></pre>
python|python-2.7|pandas
37
3,069
41,895,832
convert dataframe type object to dictionary
<p>I have the dataframe <code>best_scores</code> containing</p> <pre><code> subsample colsample_bytree learning_rate max_depth min_child_weight \ 3321 0.8 0.8 0.3 2 3 objective scale_pos_weight silent 3321 binary:logistic 1.846154 1 </code></pre> <p>I would like to convert it into a dictionary <code>params</code> like:</p> <pre><code>params {'colsample_bytree': 0.8, 'learning_rate': 0.3, 'max_depth': 2, 'min_child_weight': 3, 'objective': 'binary:logistic', 'scale_pos_weight': 1.8461538461538463, 'silent': 1, 'subsample': 0.8} </code></pre> <p>but if I run </p> <pre><code>best_scores.to_dict(orient='records') </code></pre> <p>I get:</p> <pre><code>[{'colsample_bytree': 0.8, 'learning_rate': 0.3, 'max_depth': 2, 'min_child_weight': 3, 'objective': 'binary:logistic', 'scale_pos_weight': 1.8461538461538463, 'silent': 1L, 'subsample': 0.8}] </code></pre> <p>Can you please help?</p>
<p>You are getting a list of dictionaries, because you are converting a <code>DataFrame</code> to <code>dict</code>, which can potentially have multiple rows. Each row would be one entry in the list.</p> <p>Apart from the mentioned solution to simply select the first entry, the ideal way to achieve what you want is to use a <code>Series</code> instead of a <code>DataFrame</code>. That way, only one <code>dict</code> is returned:</p> <pre><code>In [2]: s = pd.Series([1, 2 ,3], index=['a', 'b', 'c']) In [3]: s.to_dict() Out[3]: {'a': 1, 'b': 2, 'c': 3} In [4]: d = pd.DataFrame(s).T In [5]: d Out[5]: a b c 0 1 2 3 In [6]: d.iloc[0] Out[6]: a 1 b 2 c 3 Name: 0, dtype: int64 In [7]: d.iloc[0].to_dict() Out[7]: {'a': 1, 'b': 2, 'c': 3} </code></pre>
python|pandas|dictionary|dataframe
1
3,070
37,716,699
How to hstack several sparse matrices (feature matrices)?
<p>I have 3 sparse matrices:</p> <pre><code>In [39]: mat1 Out[39]: (1, 878049) &lt;1x878049 sparse matrix of type '&lt;type 'numpy.int64'&gt;' with 878048 stored elements in Compressed Sparse Row format&gt; In [37]: mat2 Out[37]: (1, 878049) &lt;1x878049 sparse matrix of type '&lt;type 'numpy.int64'&gt;' with 744315 stored elements in Compressed Sparse Row format&gt; In [35]: mat3 Out[35]: (1, 878049) &lt;1x878049 sparse matrix of type '&lt;type 'numpy.int64'&gt;' with 788618 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>From the <a href="https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.hstack.html" rel="noreferrer">documentation</a>, I read that it is possible to <code>hstack</code>, <code>vstack</code>, and <code>concatenate</code> them such type of matrices. So I tried to <code>hstack</code> them:</p> <pre><code>import numpy as np matrix1 = np.hstack([[address_feature, dayweek_feature]]).T matrix2 = np.vstack([[matrix1, pddis_feature]]).T X = matrix2 </code></pre> <p>However, the dimensions do not match:</p> <pre><code>In [41]: X_combined_features.shape Out[41]: (2, 1) </code></pre> <p>Note that I am stacking such matrices since I would like to use them with a scikit-learn classification algorithm. Therefore, <strong>How should I <code>hstack</code> a number of different sparse matrices?</strong>.</p>
<p>Use the <code>sparse</code> versions of <code>vstack</code>. As general rule you need to use sparse functions and methods, not the <code>numpy</code> ones with similar name. <code>sparse</code> matrices are not subclasses of <code>numpy</code> <code>ndarray</code>.</p> <p>But, your 3 three matrices do not look sparse. They are 1x878049. One has 878048 nonzero elements - that means just one 0 element.</p> <p>So you could just as well turned them into dense arrays (with <code>.toarray()</code> or <code>.A</code>) and use <code>np.hstack</code> or <code>np.vstack</code>.</p> <pre><code>np.hstack([address_feature.A, dayweek_feature.A]) </code></pre> <p>And don't use the double brackets. All concatenate functions take a simple list or tuple of the arrays. And that list can have more than 2 arrays</p> <pre><code>In [296]: A=sparse.csr_matrix([0,1,2,0,0,1]) In [297]: B=sparse.csr_matrix([0,0,0,1,0,1]) In [298]: C=sparse.csr_matrix([1,0,0,0,1,0]) In [299]: sparse.vstack([A,B,C]) Out[299]: &lt;3x6 sparse matrix of type '&lt;class 'numpy.int32'&gt;' with 7 stored elements in Compressed Sparse Row format&gt; In [300]: sparse.vstack([A,B,C]).A Out[300]: array([[0, 1, 2, 0, 0, 1], [0, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0]], dtype=int32) In [301]: sparse.hstack([A,B,C]).A Out[301]: array([[0, 1, 2, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]], dtype=int32) In [302]: np.vstack([A.A,B.A,C.A]) Out[302]: array([[0, 1, 2, 0, 0, 1], [0, 0, 0, 1, 0, 1], [1, 0, 0, 0, 1, 0]], dtype=int32) </code></pre>
python|numpy|machine-learning|scipy|scikit-learn
6
3,071
37,877,895
How to round a number to a chosen integer
<p>In Denmark we have an odd grading system that goes as follows. [-3,00,02,4,7,10,12] Our assignment is to take a vector with different decimal numbers, and round it to the nearest valid grade. Here is our code so far. </p> <pre><code>import numpy as np def roundGrade(grades): if (-5&lt;grades&lt;-1.5): gradesRounded = -3 elif (-1.5&lt;=grades&lt;1.5): gradesRounded = 00 elif (1.5&lt;=grades&lt;3): gradesRounded = 2 elif (3&lt;=grades&lt;5.5): gradesRounded = 4 elif (5.5&lt;=grades&lt;8.5): gradesRounded = 7 elif (8.5&lt;=grades&lt;11): gradesRounded = 10 elif (11&lt;=grades&lt;15): gradesRounded = 12 return gradesRounded print(roundGrade(np.array[-2.1,6.3,8.9,9])) </code></pre> <p>Our console doesn't seem to like this and retuns: TypeError: builtin_function_or_method' object is not subscriptable</p> <p>All help is appreciated, and if you have a smarter method you are welcome to put us in our place. </p>
<p>You are getting that error because when you print, you are using incorrect syntax:</p> <pre><code>print(roundGrade(np.array[-2.1,6.3,8.9,9])) </code></pre> <p>needs to be</p> <pre><code>print(roundGrade(np.array([-2.1,6.3,8.9,9]))) </code></pre> <p>Notice the extra parentheses: <code>np.array(&lt;whatever&gt;)</code></p> <p>However, this won't work, since your function expects a single number. Fortunately, numpy provides a function which can fix that for you:</p> <pre><code>In [15]: roundGrade = np.vectorize(roundGrade) In [16]: roundGrade(np.array([-2.1,6.3,8.9,9])) Out[16]: array([-3, 7, 10, 10]) </code></pre> <p><a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.vectorize.html">http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.vectorize.html</a></p>
python|numpy|rounding
26
3,072
64,378,278
Removing duplicate data doesn't seem to be actually removing dups?
<p>I am trying to preprocess some data and am running this command:</p> <pre><code>df = df[df.duplicated(subset=['ticker','periodDate'], keep='first')] </code></pre> <p>but when I look for dups, they are still there:</p> <pre><code>dups = df[df.duplicated(subset=['ticker','periodDate'], keep=False)] print (dups[dups['ticker'] == 'cofe.us']) ticker periodDate ... exchangeRateChanges cashAndCashEquivalentsChanges 348 cofe.us 2017 ... 0.0 0.0 300109 cofe.us 2018 ... 0.0 0.0 300110 cofe.us 2018 ... 0.0 0.0 300111 cofe.us 2017 ... 0.0 0.0 300112 cofe.us 2017 ... 0.0 0.0 300113 cofe.us 2017 ... 0.0 0.0 300114 cofe.us 2017 ... 0.0 0.0 300115 cofe.us 2016 ... 0.0 0.0 300116 cofe.us 2016 ... 0.0 0.0 300117 cofe.us 2016 ... NaN NaN 300118 cofe.us 2016 ... NaN NaN </code></pre> <p>My goal is simply to keep the first match for ticker and periodDate then disregard the others.</p>
<p><code>duplicated</code> returns the duplicated rows. By doing <code>df[df.duplicated(....)]</code>, you keep only the duplicated rows, instead of filtering them out.</p> <p>Use <code>~</code> to get the non-dupes:</p> <pre><code>df[~df.duplicated(subset=[....], keep='first')] </code></pre>
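<p>As a side note, if the goal is simply to keep the first occurrence of every <code>ticker</code>/<code>periodDate</code> pair, <code>drop_duplicates</code> expresses the same filter more directly. A small sketch on a toy frame:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ticker': ['cofe.us', 'cofe.us', 'cofe.us', 'abc.us'],
                   'periodDate': [2017, 2017, 2018, 2018],
                   'value': [1, 2, 3, 4]})

# keeps the first row of every (ticker, periodDate) combination
deduped = df.drop_duplicates(subset=['ticker', 'periodDate'], keep='first')
print(deduped)
</code></pre>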
python|pandas
3
3,073
64,569,724
Tensorflow library ImportError: DLL load failed: The specified procedure could not be found
<p>I have installed Python 3.7 and tensorflow, but when I run &quot;import tensorflow as tf&quot; I get the following error:</p> <p>&quot;Tensorflow library ImportError: DLL load failed: The specified procedure could not be found&quot;</p> <p>Here is my code:</p> <p><code>import tensorflow as tf</code></p>
<p>Maybe not the answer you are hoping for but what worked for me was just to downgrade the version of tensorflow to 2.0</p> <pre><code>pip install tensorflow==2.0 </code></pre>
python|tensorflow
0
3,074
64,403,398
Reading and Organizing a CSV without modules?
<p>I have multiple different csv files that have a different number of headers. I need to read these csv's without using any modules so I have given it an attempt. How would I print the columns of the different csv's and then be able to get the mean, min, max and standard deviation for each of them, as well as plot them against each other?</p> <p>Here is what I have so far but in this case the lists have been hard coded in. In this case the file I am reading into it has 2 headers for the date-time and barometer reading but I am also going to be reading other files that have many more headers and more information.</p> <p>I can use pandas once I have written the csv into a python dataframe so that is why I have imported it. Any recommendations or ideas are much appreciated. Thanks!</p> <pre><code>import pandas as pd def readmyfile(InputFile): list_date = [] list_baro = [] with open(InputFile, 'r') as file: for line in file: row = line.split(',') list_date.append(row[0].strip('&quot;')) list_baro.append(row[1].strip('\n')) # df = pd.DataFrame(list_date[1:]) # df2 = pd.DataFrame(list_baro[1:]) # average = np.mean(df2) print(list_date) print(list_baro) readmyfile('barometer-last-year.csv') </code></pre> <p>Some of the data that is in the barometer-last-year.csv:</p> <pre><code>&quot;DateTime&quot;,&quot;Baro&quot; &quot;2016-10-09 00:00:00&quot;,1021.9 &quot;2016-10-10 00:00:00&quot;,1019.9 &quot;2016-10-11 00:00:00&quot;,1015.8 &quot;2016-10-12 00:00:00&quot;,1013.2 &quot;2016-10-13 00:00:00&quot;,1005.9 &quot;2016-10-14 00:00:00&quot;,998.6 &quot;2016-10-15 00:00:00&quot;,998 &quot;2016-10-16 00:00:00&quot;,1002.2 &quot;2016-10-17 00:00:00&quot;,1009.8 &quot;2016-10-18 00:00:00&quot;,1013.4 &quot;2016-10-19 00:00:00&quot;,1015.8 &quot;2016-10-20 00:00:00&quot;,1015.7 </code></pre> <p>and here is some of the data that I have in the other csv's:</p> <pre><code>&quot;DateTime&quot;,&quot;Temperature&quot;,&quot;Temperature_range (low)&quot;,&quot;Temperature_range (high)&quot; &quot;2016-10-09 00:00:00&quot;,10.66,7.2,13.8 &quot;2016-10-10 00:00:00&quot;,8.94,5.6,12.8 &quot;2016-10-11 00:00:00&quot;,8.69,5.3,14.3 &quot;2016-10-12 00:00:00&quot;,11.55,9,14.9 &quot;2016-10-13 00:00:00&quot;,9.4,6,13.3 &quot;2016-10-14 00:00:00&quot;,9.85,6.8,13.3 &quot;2016-10-15 00:00:00&quot;,10.72,8.2,14.7 &quot;2016-10-16 00:00:00&quot;,11.28,7.8,14.5 &quot;2016-10-17 00:00:00&quot;,11.84,10,15 &quot;2016-10-18 00:00:00&quot;,10.24,8.2,12.7 &quot;2016-10-19 00:00:00&quot;,10.2,8,13.4 &quot;2016-10-20 00:00:00&quot;,9.76,7.2,12.8 &quot;2016-10-21 00:00:00&quot;,7.96,3.7,15.1 &quot;2016-10-22 00:00:00&quot;,7.9,5.3,13 </code></pre>
<p>I've implemented <code>parse_csv()</code> function inside next code that uses no modules. It supports any separator (e.g. <code>,</code>) between cells and any quoting char (e.g. <code>&quot;</code>), also it correctly handles separators located inside quoted strings e.g. CSV line <code>&quot;a,b,c&quot;,d</code> will be handled as two cells <code>a,b,c</code> and <code>d</code>. Empty lines in CSV are skipped.</p> <p>First row is handled as columns names. Function returns columns names and rest of rows separately, so that these two can be directly passed to <code>pd.DataFrame()</code> constructor.</p> <p>Function accepts <code>header</code> argument (columns names), it should be <code>True</code> if header row should be read as first row of CSV file, it should be <code>False</code> or contain a list of columns names if CSV file has no header row.</p> <p>Input CSV file can be passed to function by different ways 1) Through <code>file</code> argument which can be either string that contains file path or name, or opened for reading file object. 2) Through <code>text</code> argument which can be either string containing CSV text or bytes containing CSV file contents.</p> <p>In simplest form you just do <code>columns, table = parse_csv(file = 'test.csv')</code>.</p> <p><a href="https://repl.it/@moytrage/StackOverflow64403398#main.py" rel="nofollow noreferrer">Try it online!</a></p> <pre><code>def parse_csv(*, file = None, text = None, sep = ',', quote = '&quot;', header = True, encoding = 'utf-8'): assert file is not None or text is not None, f'Either text or file argument should be provided!' if text is None: if type(file) is str: with open(file, 'r', encoding = encoding) as f: text = f.read() elif type(file) is bytes: text = file else: text = file.read() if type(text) is bytes: text = text.decode(encoding) first, ncols, table = True, None, [] for line in text.splitlines(): line = line.strip() if not line: continue parts = line.split(sep) i = 0 entries = [] while True: if i &gt;= len(parts): break if not parts[i].startswith(quote): entries.append(parts[i]) i += 1 else: entries.append(parts[i][len(quote):]) while True: if parts[i].endswith(quote): entries[-1] = entries[-1][:-len(quote)] break i += 1 entries[-1] += sep + parts[i] i += 1 if first: if header is True: hrow = True header = entries ncols = len(entries) elif header is False: hrow = False header = None ncols = len(entries) else: hrow = False assert type(header) in [list, tuple], type(header) header = list(header) ncols = len(header) first = False else: hrow = False if not hrow: assert len(entries) == ncols, f'Wrong number of columns (expected {ncols}) in row: {line}' table.append(entries) if first and type(header) is bool: header = None return header, table import pandas as pd # ----- First example ----- text = &quot;&quot;&quot; &quot;DateTime&quot;,&quot;Temperature&quot;,&quot;Temperature_range (low)&quot;,&quot;Temperature_range (high)&quot; &quot;2016-10-09 00:00:00&quot;,10.66,7.2,13.8 &quot;2016-10-10 00:00:00&quot;,8.94,5.6,12.8 &quot;2016-10-11 00:00:00&quot;,8.69,5.3,14.3 &quot;2016-10-12 00:00:00&quot;,11.55,9,14.9 &quot;2016-10-13 00:00:00&quot;,9.4,6,13.3 &quot;2016-10-14 00:00:00&quot;,9.85,6.8,13.3 &quot;2016-10-15 00:00:00&quot;,10.72,8.2,14.7 &quot;2016-10-16 00:00:00&quot;,11.28,7.8,14.5 &quot;2016-10-17 00:00:00&quot;,11.84,10,15 &quot;2016-10-18 00:00:00&quot;,10.24,8.2,12.7 &quot;2016-10-19 00:00:00&quot;,10.2,8,13.4 &quot;2016-10-20 00:00:00&quot;,9.76,7.2,12.8 &quot;2016-10-21 00:00:00&quot;,7.96,3.7,15.1 
&quot;2016-10-22 00:00:00&quot;,7.9,5.3,13 &quot;&quot;&quot; columns, table = parse_csv(text = text) df = pd.DataFrame(table, columns = columns) print('-' * 30) print(df) # ----- Second example ----- text = &quot;&quot;&quot; a|b|'c|d'|e 1|2|3|4 &quot;&quot;&quot; ref = None for header in [ True, False, ['first', 'second', 'third', 'fourth'], ]: columns, table = parse_csv(text = text, sep = '|', quote = &quot;'&quot;, header = header) df = pd.DataFrame(table, columns = columns) print('-' * 30) print(df) if ref is None: ref = (columns, table) # ----- Third example ----- import io with open('test.csv', 'w', encoding = 'utf-8') as f: f.write(text) for file in [ io.StringIO(text), io.BytesIO(text.encode('utf-8')), text.encode('utf-8'), 'test.csv', open('test.csv', 'r', encoding = 'utf-8'), open('test.csv', 'rb'), ]: columns, table = parse_csv(file = file, sep = '|', quote = &quot;'&quot;) assert (columns, table) == ref </code></pre> <p>Output:</p> <pre><code>------------------------------ DateTime Temperature Temperature_range (low) Temperature_range (high) 0 2016-10-09 00:00:00 10.66 7.2 13.8 1 2016-10-10 00:00:00 8.94 5.6 12.8 2 2016-10-11 00:00:00 8.69 5.3 14.3 3 2016-10-12 00:00:00 11.55 9 14.9 4 2016-10-13 00:00:00 9.4 6 13.3 5 2016-10-14 00:00:00 9.85 6.8 13.3 6 2016-10-15 00:00:00 10.72 8.2 14.7 7 2016-10-16 00:00:00 11.28 7.8 14.5 8 2016-10-17 00:00:00 11.84 10 15 9 2016-10-18 00:00:00 10.24 8.2 12.7 10 2016-10-19 00:00:00 10.2 8 13.4 11 2016-10-20 00:00:00 9.76 7.2 12.8 12 2016-10-21 00:00:00 7.96 3.7 15.1 13 2016-10-22 00:00:00 7.9 5.3 13 ------------------------------ a b c|d e 0 1 2 3 4 ------------------------------ 0 1 2 3 0 a b c|d e 1 1 2 3 4 ------------------------------ first second third fourth 0 a b c|d e 1 1 2 3 4 </code></pre>
python|pandas|csv
1
3,075
47,617,045
Expand numbers in a list
<p>I have a list of numbers:</p> <pre><code>[10,20,30] </code></pre> <p>What I need is to expand it according to a predefined increment. Thus, let's call <code>x</code> the increment and <code>x=2</code>, my result should be:</p> <pre><code>[10,12,14,16,18,20,22,24,.....,38] </code></pre> <p>Right now I am using a for loop, but it is very slow and I am wondering if there is a faster way.</p> <p>EDIT:</p> <pre><code>newA = [] for n in array: newA= newA+ generateNewNumbers(n, p, t) </code></pre> <p>The function generates new number simply generate the new numbers to add to the list.</p> <p>EDIT2: To better define the problem the first array contains some timestamps:</p> <pre><code>[10,20,30] </code></pre> <p>I have two parameters one is the sampling rate and one is the sampling time, what I need is to expand the array adding between two timestamps the correct number of timestamps, according to the sampling rate. For example, if I have a sampling rate 3 and a sampling time 3 the result should be:</p> <pre><code>[10,13,16,19,20,23,26,29,30,33,36,39] </code></pre>
<p>You can add the same set of increments to each time stamp using <code>np.add.outer</code> and then flatten the result using <code>ravel</code>.</p> <pre><code>import numpy as np a = [10,20,35] inc = 3 ninc = 4 np.add.outer(a, inc * np.arange(ninc)).ravel() # array([10, 13, 16, 19, 20, 23, 26, 29, 35, 38, 41, 44]) </code></pre>
python|list|loops|numpy
3
3,076
48,987,774
How to crop a numpy 2d array to non-zero values?
<p>Let's say i have a 2d boolean numpy array like this:</p> <pre><code>import numpy as np a = np.array([ [0,0,0,0,0,0], [0,1,0,1,0,0], [0,1,1,0,0,0], [0,0,0,0,0,0], ], dtype=bool) </code></pre> <p>How can i in general crop it to the smallest box (rectangle, kernel) that includes all True values?</p> <p>So in the example above:</p> <pre><code>b = np.array([ [1,0,1], [1,1,0], ], dtype=bool) </code></pre>
<p>After some more fiddling with this, I actually found a solution myself:</p> <pre><code>coords = np.argwhere(a) x_min, y_min = coords.min(axis=0) x_max, y_max = coords.max(axis=0) b = cropped = a[x_min:x_max+1, y_min:y_max+1] </code></pre> <p>The above works for boolean arrays out of the box. In case you have other conditions, like a threshold <code>t</code>, and want to crop to values larger than <code>t</code>, simply modify the first line:</p> <pre><code>coords = np.argwhere(a &gt; t) </code></pre>
arrays|numpy|crop
12
3,077
49,060,520
Remove apostrophe on python numpy matrix
<p>I'm trying to solve a simple matrix problem on Python 3 using numpy with this code:</p> <pre><code>import numpy change_array = numpy.array(input().strip().split(' ')) change_array.shape = (3,3) print(change_array) </code></pre> <p>The output of this program with input <code>1 2 3 4 5 6 7 8 9</code> is </p> <pre><code>[['1' '2' '3'] ['4' '5' '6'] ['7' '8' '9']] </code></pre> <p>I want the output without apostrophes, like this:</p> <pre><code>[[1 2 3] [4 5 6] [7 8 9]] </code></pre> <p>But I haven't found a way yet; any help would be really appreciated.</p>
<p>One way or other you have to convert the strings to integers:</p> <pre><code>In [86]: x = np.array(input().strip().split()) 1 2 3 In [87]: x Out[87]: array(['1', '2', '3'], dtype='&lt;U1') # string dtype In [88]: x.astype(int) Out[88]: array([1, 2, 3]) </code></pre> <p>or</p> <pre><code>In [89]: x = np.array(input().strip().split(), dtype=int) 4 5 6 In [90]: x Out[90]: array([4, 5, 6]) </code></pre> <p>or do the conversion at the list level</p> <pre><code>In [91]: x = [int(i) for i in input().strip().split()] 1 2 3 In [92]: x Out[92]: [1, 2, 3] In [93]: np.array(x) Out[93]: array([1, 2, 3]) </code></pre> <p>But watch out for bad values</p> <pre><code>In [94]: x = np.array(input().strip().split(), dtype=int) 1 2 a --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-94-20cd8929f6ac&gt; in &lt;module&gt;() ----&gt; 1 x = np.array(input().strip().split(), dtype=int) ValueError: invalid literal for int() with base 10: 'a' </code></pre>
python|numpy|matrix
1
3,078
48,927,232
Python: Reproduce nested Excel Pivot table in python
<p><strong>Question:</strong></p> <p>Is it possible to display pandas report in exactly same format like excel?</p> <p><strong>Current situation:</strong></p> <p>I am trying to automate excel report using python. </p> <p>Excel report is in following format:</p> <p><a href="https://i.stack.imgur.com/YMELL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMELL.png" alt="http://pbpython.com/images/excel-pivot-example.png"></a></p> <p>I am using below code to generate pivot table in pandas:</p> <pre><code> import pandas as pd import numpy as np df = pd.read_excel("sales-funnel.xlsx") table = pd.pivot_table(df,index=["Manager","Rep","Product"], values=["Price","Quantity"], aggfunc=[np.sum],fill_value=0)` </code></pre> <p>I would like to get report similar to excel due to huge size of data. Pandas report will be difficult to analyze in excel.</p> <p>Thanks </p>
<p>If you want to save the pandas result as a native Pivot Table in an .xlsx file, that is not possible from pandas. There are libraries to save, edit and read Excel files, but they have no tools for creating Excel Pivot Tables.</p>
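<p>What you can do is write the already-aggregated result out as a plain worksheet. It will not be an interactive Excel Pivot Table, but it keeps the same layout. A small sketch reusing the code from the question (the output file and sheet names are assumptions):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.read_excel("sales-funnel.xlsx")
table = pd.pivot_table(df, index=["Manager", "Rep", "Product"],
                       values=["Price", "Quantity"],
                       aggfunc=[np.sum], fill_value=0)

# writes a static table, not a native Excel Pivot Table
table.to_excel("report.xlsx", sheet_name="pivot")
</code></pre>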
python|excel|pandas|pivot-table
0
3,079
48,968,633
Clone items in a list by index
<p>I have a numpy array </p> <pre><code>np.array([[1,4,3,5,2], [3,2,5,2,3], [5,2,4,2,1]]) </code></pre> <p>and I want to clone items by their indexes. For example, I have an index of</p> <pre><code>np.array([[1,4], [2,4], [1,4]]) </code></pre> <p>These correspond to the positions of the items at each row. e.g. the first [1,4] are the indexes for 4, 2 in the first row.</p> <p>I want in the end returning a new numpy array giving initial array and the index array. </p> <pre><code>np.array([[1,4,4,3,5,2,2], [3,2,5,5,2,3,3], [5,2,2,4,2,1,1]]) </code></pre> <p>The effect is the selected column values are repeated once. Any way to do this? Thanks.</p>
<p>I commented that this could be viewed as a 1d problem. There's nothing 2d about it, except that you are adding 2 values per row, so you end up with a 2d array. The other key idea is that <code>np.repeats</code> lets us repeat selected elements several times.</p> <pre><code>In [70]: arr =np.array([[1,4,3,5,2], ...: [3,2,5,2,3], ...: [5,2,4,2,1]]) ...: In [71]: idx = np.array([[1,4], ...: [2,4], ...: [1,4]]) ...: </code></pre> <p>Make an array of 'repeat' counts - start with 1 for everything, and add 1 for the elements we want to dupicate:</p> <pre><code>In [72]: repeats = np.ones_like(arr) In [73]: repeats Out[73]: array([[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]) In [74]: for i,j in enumerate(idx): ...: repeats[i,j] += 1 ...: In [75]: repeats Out[75]: array([[1, 2, 1, 1, 2], [1, 1, 2, 1, 2], [1, 2, 1, 1, 2]]) </code></pre> <p>Now just apply <code>repeat</code> to the flattened arrays, and reshape:</p> <pre><code>In [76]: np.repeat(arr.ravel(),repeats.ravel()) Out[76]: array([1, 4, 4, 3, 5, 2, 2, 3, 2, 5, 5, 2, 3, 3, 5, 2, 2, 4, 2, 1, 1]) In [77]: _.reshape(3,-1) Out[77]: array([[1, 4, 4, 3, 5, 2, 2], [3, 2, 5, 5, 2, 3, 3], [5, 2, 2, 4, 2, 1, 1]]) </code></pre> <p>I may add a list solution, once I work that out.</p> <hr> <p>a row by row <code>np.insert</code> solution (fleshing out the concept suggested by @f5r5e5d):</p> <p>Test with one row:</p> <pre><code>In [81]: row=arr[0] In [82]: i=idx[0] In [83]: np.insert(row,i,row[i]) Out[83]: array([1, 4, 4, 3, 5, 2, 2]) </code></pre> <p>Now apply iteratively to all rows. The list of arrays can then be turned back into an array:</p> <pre><code>In [84]: [np.insert(row,i,row[i]) for i,row in zip(idx,arr)] Out[84]: [array([1, 4, 4, 3, 5, 2, 2]), array([3, 2, 5, 5, 2, 3, 3]), array([5, 2, 2, 4, 2, 1, 1])] </code></pre>
python|arrays|numpy
2
3,080
58,890,298
Why does BeautifulSoup fail to extract data from websites to csv?
<p>User Chrisvdberge helped me creating the following code :</p> <pre><code>import pandas as pd import requests from bs4 import BeautifulSoup url_DAX = 'https://www.eurexchange.com/exchange-en/market-data/statistics/market-statistics-online/100!onlineStats?viewType=4&amp;productGroupId=13394&amp;productId=34642&amp;cp=&amp;month=&amp;year=&amp;busDate=20191114' req = requests.get(url_DAX, verify=False) html = req.text soup = BeautifulSoup(html, 'lxml') df = pd.read_html(str(html))[0] df.to_csv('results_DAX.csv') print(df) url_DOW = 'https://www.cmegroup.com/trading/equity-index/us-index/e-mini-dow_quotes_settlements_futures.html' req = requests.get(url_DOW, verify=False) html = req.text soup = BeautifulSoup(html, 'lxml') df = pd.read_html(str(html))[0] df.to_csv('results_DOW.csv') print(df) url_NASDAQ = 'https://www.cmegroup.com/trading/equity-index/us-index/e-mini-nasdaq-100_quotes_settlements_futures.html' req = requests.get(url_NASDAQ, verify=False) html = req.text soup = BeautifulSoup(html, 'lxml') df = pd.read_html(str(html))[0] df.to_csv('results_NASDAQ.csv') print(df) url_CAC = 'https://live.euronext.com/fr/product/index-futures/FCE-DPAR/settlement-prices' req = requests.get(url_CAC, verify=False) html = req.text soup = BeautifulSoup(html, 'lxml') df = pd.read_html(str(html))[0] df.to_csv('results_CAC.csv') print(df) </code></pre> <p>I have the following result :</p> <ul> <li><p>3 .csv files are created : results_DAX.csv (here, everything is ok, I have the values I want.) ; results_DOW.csv and results_NASDAQ.csv (here, the problem is that the .csv files don't have the wanted values.. I don't understand why ?)</p></li> <li><p>As you can see in the code, 4 files should be created and not only 3. </p></li> </ul> <p>So my questions are : </p> <ul> <li><p>How to get 4 csv files ?</p></li> <li><p>How to get values in the results_DOW.csv and in the results_NASDAQ.csv files ? (and maybe also in the results_CAC.csv file)</p></li> </ul> <p>Thank you for your answers ! :)</p>
<p>Try this to get those other sites. The last site is a little trickier, so you'd need to try out Selenium:</p> <pre><code>import pandas as pd import requests from bs4 import BeautifulSoup from datetime import date, timedelta url_DAX = 'https://www.eurexchange.com/exchange-en/market-data/statistics/market-statistics-online/100!onlineStats?viewType=4&amp;productGroupId=13394&amp;productId=34642&amp;cp=&amp;month=&amp;year=&amp;busDate=20191114' df = pd.read_html(url_DAX)[0] df.to_csv('results_DAX.csv') print(df) dt = date.today() - timedelta(days=2) dateParam = dt.strftime('%m/%d/%Y') url_DOW = 'https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/318/FUT' payload = { 'tradeDate': dateParam, 'strategy': 'DEFAULT', 'pageSize': '500', '_': '1573920502874'} response = requests.get(url_DOW, params=payload).json() df = pd.DataFrame(response['settlements']) df.to_csv('results_DOW.csv') print(df) url_NASDAQ = 'https://www.cmegroup.com/CmeWS/mvc/Settlements/Futures/Settlements/146/FUT' payload = { 'tradeDate': dateParam, 'strategy': 'DEFAULT', 'pageSize': '500', '_': '1573920650587'} response = requests.get(url_NASDAQ, params=payload).json() df = pd.DataFrame(response['settlements']) df.to_csv('results_NASDAQ.csv') print(df) </code></pre>
python|pandas|beautifulsoup|export-to-csv
1
3,081
56,020,272
Convert Rows per Unique Id into all comma separated possibilities
<p>i have some data in the following format, at the moment this is in a Pandas Dataframe. </p> <pre><code>Row Uid Lender 1 1 HSBC 2 1 Lloyds 3 1 Barclays 4 2 Lloyds 5 2 Barclays 6 2 Santander 7 2 RBS 8 2 HSBC </code></pre> <p>What i require is all of the possible combinations of the Lenders columns for each Uid so the output would be something like this </p> <pre><code>Row Uid LenderCombo 1 1 Barclays 2 1 Lloyds 3 1 HSBC 4 1 Barclays, HSBC 5 1 Barclays, Lloyds 6 1 HSBC, Lloyds 7 1 Barclays, HSBC, Lloyds </code></pre> <p>And the same for Uid 2 and so on, apologies if this has been answered before i'm just unsure of how to approach this. </p> <p>Thanks,</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a> with custom function and join tuples by <code>join</code>:</p> <pre><code>from itertools import chain, combinations #https://stackoverflow.com/a/5898031 def all_subsets(ss): return chain(*map(lambda x: combinations(ss, x), range(1, len(ss)+1))) df = (df.groupby('Uid')['Lender'] .apply(lambda x: pd.Series([', '.join(y) for y in all_subsets(x)])) .reset_index() .rename(columns={'level_1':'Row'})) </code></pre> <hr> <pre><code>print (df) Uid Row Lender 0 1 0 HSBC 1 1 1 Lloyds 2 1 2 Barclays 3 1 3 HSBC, Lloyds 4 1 4 HSBC, Barclays 5 1 5 Lloyds, Barclays 6 1 6 HSBC, Lloyds, Barclays 7 2 0 Lloyds 8 2 1 Barclays 9 2 2 Santander 10 2 3 RBS 11 2 4 HSBC 12 2 5 Lloyds, Barclays 13 2 6 Lloyds, Santander 14 2 7 Lloyds, RBS 15 2 8 Lloyds, HSBC 16 2 9 Barclays, Santander 17 2 10 Barclays, RBS 18 2 11 Barclays, HSBC 19 2 12 Santander, RBS 20 2 13 Santander, HSBC 21 2 14 RBS, HSBC 22 2 15 Lloyds, Barclays, Santander 23 2 16 Lloyds, Barclays, RBS 24 2 17 Lloyds, Barclays, HSBC 25 2 18 Lloyds, Santander, RBS 26 2 19 Lloyds, Santander, HSBC 27 2 20 Lloyds, RBS, HSBC 28 2 21 Barclays, Santander, RBS 29 2 22 Barclays, Santander, HSBC 30 2 23 Barclays, RBS, HSBC 31 2 24 Santander, RBS, HSBC 32 2 25 Lloyds, Barclays, Santander, RBS 33 2 26 Lloyds, Barclays, Santander, HSBC 34 2 27 Lloyds, Barclays, RBS, HSBC 35 2 28 Lloyds, Santander, RBS, HSBC 36 2 29 Barclays, Santander, RBS, HSBC 37 2 30 Lloyds, Barclays, Santander, RBS, HSBC </code></pre>
python|pandas|dataframe|combinations
5
3,082
55,638,334
Data Type for numpy.random.seed()
<p>I am trying to pass a seed at runtime through a command-line option, so I want to understand what data type <code>numpy.random.seed()</code> expects.</p>
<p>According to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.seed.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.seed.html</a> it must be an integer or an 1-d array of integers convertible to an unsigned 32-bit integer. So you should be fine with any kind of integer.</p>
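<p>For example, when the seed comes from the command line, converting the argument to <code>int</code> is enough (a minimal sketch):</p> <pre><code>import sys
import numpy as np

# assumption: the seed is passed as the first command-line argument
seed = int(sys.argv[1]) if len(sys.argv) &gt; 1 else 0
np.random.seed(seed)
print(np.random.rand(3))
</code></pre>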
python|numpy
0
3,083
64,821,025
How can I reshape 1D np array into 3D?
<p>I have my 699 training features stored in the array X.</p> <pre><code>X.shape </code></pre> <p>(699,)</p> <p>Each element, however, is 1292 * 13.</p> <p>For instance:</p> <pre><code>X[0].shape </code></pre> <p>(1292, 13)</p> <p>How can I reshape it correctly to feed into a CNN?</p>
<p>In order to put the samples into a Keras Conv2D layer, for example, you must provide a specific input_shape.</p> <p>So, your number of samples is 699 and each sample has shape (1292, 13, 1). The last dimension (1) is the number of channels: for grayscale images (or similar single-channel data) you put 1, for colour you put 3.</p> <p>So something like this:</p> <p><code>input_shape = (len(X), X[0].shape[0], X[0].shape[1], 1)</code></p> <p><code>tf.keras.layers.Conv2D(2, 3, activation='relu', input_shape=input_shape[1:])(x)</code></p>
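<p>If the 699 samples are currently stored as an object array (one (1292, 13) array per entry), they also have to be stacked into a single 4-D array before being passed to the network. A possible sketch, assuming every sample really has shape (1292, 13):</p> <pre><code>import numpy as np
import tensorflow as tf

# stack the per-sample arrays and add the channel dimension
X_stacked = np.stack(list(X)).astype('float32')   # (699, 1292, 13)
X_stacked = X_stacked[..., np.newaxis]            # (699, 1292, 13, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(2, 3, activation='relu',
                           input_shape=X_stacked.shape[1:]),
])
print(model(X_stacked[:1]).shape)
</code></pre>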
python-3.x|numpy|keras|numpy-ndarray
0
3,084
40,027,612
Apply a function to pandas dataframe and get different sized ndarray output
<p>My goal is to get the indexes of the local max heights of a dataframe. These change between 3 and 5 per column.</p> <p>I have tried using the apply function, but get either the error <code>Shape of passed values is (736, 4), indices imply (736, 480)</code> or <code>could not broadcast input array from shape (0) into shape (1)</code>.</p> <p>My matrix is <code>480x736</code>. </p> <p>Here is what I've written for the apply function:</p> <pre><code>import numpy as np import peakutils df.apply(lambda x: peakutils.indexes(x, thres=0.02/max(x), min_dist=100)) </code></pre> <p>Here is what I am able to get to work:</p> <pre><code>indexes =[] import numpy as np import peakutils for column in df: indexes.append(peakutils.indexes(df[column], thres=0.02/max(df[column]), min_dist=100)) </code></pre> <p>Often the indexes are 4 in length, but occasionally I'll get 1 more or less:</p> <pre><code>Out[32]: [array([ 12, 114, 217, 328, 433]), array([ 12, 116, 217, 325, 433]), array([ 64, 166, 283, 389]), array([105, 217, 326, 433]), array([105, 237, 390])] </code></pre> <p>My guess is that the problem with the output comes from my not knowing the shape of the resultant dataframe. The shape is indeterminable from the outset.</p> <p><strong>How do I apply a function to a df where the output differs in size &amp; type?</strong></p>
<p>pandas is trying to do "something" with the arrays. You can short circuit that "something" by wrapping your <code>lambda</code>'s return value in a <code>pd.Series</code></p> <p>Try this:</p> <pre><code>df.apply(lambda x: pd.Series(peakutils.indexes(x, thres=0.02/max(x), min_dist=100))) </code></pre>
python|python-3.x|pandas|scipy
2
3,085
39,433,655
What's the best way to print some samples of the predicted results during training of the model?
<p>I want to ask the model to predict the output for some random samples during the learning process. Currently, I built a class which derived from <code>tf.contrib.learn.monitors.EveryN</code> and overwrite <code>every_n_step_end</code> as follows:</p> <pre><code>def every_n_step_end(self, step, outputs): # evaluate the model on the validation set self._estimator.evaluate( input_fn=input_fn_eval, metrics=validation_metrics, steps=None) # generate some random samples for _ in range(FLAGS.num_samples): input_sample, output_sample, label = random.choice(validation_set) prob = self._estimator.predict(input_fn=lambda: get_features(input_sample, output_sample)) print("{}:{}:{}:{}".format(input_sample, output_sample, prob[0, 0], label)) </code></pre> <p>The problem is that is each iteration, <code>predict</code> function load the model from checkpoints and etc. <strong>Is it the proper way?</strong></p>
<p>The <code>predict</code> method takes an input function (<code>input_fn</code>) that preprocesses and feeds data into your model. So it's easy to create an input function and then pass it to the <code>predict</code> method to get a list of predicted probabilities. See <a href="https://www.tensorflow.org/versions/master/tutorials/input_fn/index.html" rel="nofollow">Building Input Functions with tf.contrib.learn</a> for more information about writing an <code>input_fn</code>.</p>
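<p>A minimal sketch of such an input function; the feature dictionary below is an assumption and should mirror whatever <code>get_features</code> builds for your model (here <code>samples</code> stands for the validation samples and <code>estimator</code> for the trained estimator), and depending on the version <code>predict</code> may return either an array or an iterable:</p> <pre><code>import numpy as np
import tensorflow as tf

def predict_input_fn():
    # build the feature tensors for the samples to score
    return {'x': tf.constant(np.asarray(samples, dtype=np.float32))}

probabilities = list(estimator.predict(input_fn=predict_input_fn))
</code></pre>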
tensorflow
0
3,086
44,045,066
Efficient use of numpy_indexed output
<pre><code>&gt;&gt;&gt; import numpy_indexed as npi &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.array([[0,0,1,1,2,2], [4,4,8,8,10,10]]).T &gt;&gt;&gt; a array([[ 0, 4], [ 0, 4], [ 1, 8], [ 1, 8], [ 2, 10], [ 2, 10]]) &gt;&gt;&gt; npi.group_by(a[:, 0]).sum(a[:,1]) (array([0, 1, 2]), array([ 8, 16, 20], dtype=int32)) </code></pre> <p>I want to perform calculations on subsets of the second column clustered by the first column on large sets (~1m lines). Is there an efficient (and/or vectorised) way to use the output of <code>group_by</code> by <code>numpy_indexed</code> in order to add a new column with the output of these calculations? In the example of <code>sum</code> as above I would like to produce the output below. </p> <p>If there is an efficient way of doing this without using <code>numpy_indexed</code> in the first place, that would also be very helpful. </p> <pre><code>array([[ 0, 4, 8], [ 0, 4, 8], [ 1, 8, 16], [ 1, 8, 16], [ 2, 10, 20], [ 2, 10, 20]]) </code></pre>
<p>One approach with <a href="https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a> to generate those unique tags and the interval shifting indices and then <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>np.add.reduceat</code></a> for the <code>intervaled-summing</code> -</p> <pre><code>_,idx,tags = np.unique(a[:,0], return_index=1, return_inverse=1) out = np.c_[a, np.add.reduceat(a[:,1],idx)[tags]] </code></pre> <p>Another way that avoids the use of <code>np.unique</code> and might be beneficial on performance would be like so -</p> <pre><code>idx = np.r_[0,np.flatnonzero(a[1:,0] &gt; a[:-1,0])+1] tag_arr = np.zeros(a.shape[0], dtype=int) tag_arr[idx[1:]] = 1 tags = tag_arr.cumsum() out = np.c_[a, np.add.reduceat(a[:,1],idx)[tags]] </code></pre> <p>For further performance boost, we should use <code>np.bincount</code>. Thus, <code>np.add.reduceat(a[:,1],idx)</code> could be replaced by <code>np.bincount(tags, a[:,1])</code>.</p> <p>Sample run -</p> <pre><code>In [271]: a # Using a more generic sample Out[271]: array([[11, 4], [11, 4], [14, 8], [14, 8], [16, 10], [16, 10]]) In [272]: _,idx,tags = np.unique(a[:,0], return_index=1, return_inverse=1) In [273]: np.c_[a, np.add.reduceat(a[:,1],idx)[tags]] Out[273]: array([[11, 4, 8], [11, 4, 8], [14, 8, 16], [14, 8, 16], [16, 10, 20], [16, 10, 20]])] </code></pre> <p>Now, the listed approaches assume that the first column is already sorted. If that's not the case, we need to sort the array by the first column <code>argsort</code> and then use the proposed method. Thus, for the not sorted case, we need the following as pre-processing -</p> <pre><code>a = a[a[:,0].argsort()] </code></pre> <hr> <p><strong>Battle against <code>np.unique</code></strong></p> <p>Let's time the custom <code>flatnonzero</code> + <code>cumsum</code> based method against the built-in <code>np.unique</code> to create the shifting indices : <code>idx</code> and the uniqueness based IDs/tags : <code>tags</code>. For a case like this one, where we know beforehand that the labels column is already sorted, we are avoiding any sorting, as done with <code>np.unique</code>. This gives us an advantage on performance. So, let's verify it.</p> <p>Approaches -</p> <pre><code>def nonzero_cumsum_based(A): idx = np.concatenate(( [0] ,np.flatnonzero(A[1:] &gt; A[:-1])+1 )) tags = np.zeros(len(A), dtype=int) tags[idx[1:]] = 1 np.cumsum(tags, out = tags) return idx, tags def unique_based(A): _,idx,tags = np.unique(A, return_index=1, return_inverse=1) return idx, tags </code></pre> <p>Sample run with the custom func -</p> <pre><code>In [438]: a Out[438]: array([[11, 4], [11, 4], [14, 8], [14, 8], [16, 10], [16, 10]]) In [439]: idx, tags = nonzero_cumsum_based(a[:,0]) In [440]: idx Out[440]: array([0, 2, 4]) In [441]: tags Out[441]: array([0, 0, 1, 1, 2, 2]) </code></pre> <p>Timings - </p> <pre><code>In [444]: a = np.c_[np.sort(randi(10,10000,(100000))), randi(0,10000,(100000))] In [445]: %timeit unique_based(a[:,0]) 100 loops, best of 3: 4.3 ms per loop In [446]: %timeit nonzero_cumsum_based(a[:,0]) 1000 loops, best of 3: 486 µs per loop In [447]: a = np.c_[np.sort(randi(10,10000,(1000000))), randi(0,10000,(1000000))] In [448]: %timeit unique_based(a[:,0]) 10 loops, best of 3: 50.2 ms per loop In [449]: %timeit nonzero_cumsum_based(a[:,0]) 100 loops, best of 3: 3.98 ms per loop </code></pre>
python|numpy|numpy-indexed
3
3,087
69,572,321
use numpy.linalg.multi_dot for 3-dimensional arrays of (N, M, M) shape
<p>Is there any reasonable way to use the np.linalg.multi_dot() function with Nx2x2 arrays, like functools.reduce(np.matmul, Nx2x2_arrays)? Please see the example below.</p> <pre><code>import numpy as np from functools import reduce m1 = np.array(range(16)).reshape(4, 2, 2) m2 = m1.copy() m3 = m1.copy() reduce(np.matmul, (m1, m2, m3)) </code></pre> <p>result - 4x2x2 array:</p> <pre><code>array([[[ 6, 11], [ 22, 39]], [[ 514, 615], [ 738, 883]], [[ 2942, 3267], [ 3630, 4031]], [[ 8826, 9503], [10234, 11019]]]) </code></pre> <p>As you see, np.matmul treats 4x2x2 3-D arrays like 1-D arrays of 2x2 matrices. Can I do the same using np.linalg.multi_dot() instead of reduce(np.matmul), and, if yes, will it lead to any performance improvement?</p>
<p><code>np.linalg.multi_dot()</code> tries to optimize the operation by finding the order of dot products that leads to the fewest multiplications overall.</p> <p>As all your matrices are square, the order of dot products does not matter and you will always end up with the same number of multiplications.</p> <p>Internally, <code>np.linalg.multi_dot()</code> doesn't run any C code but merely calls out to <code>np.dot()</code>, so you can do the same:</p> <pre><code>functools.reduce(np.matmul, (m1, m2, m3)) </code></pre> <p>or simply</p> <pre><code>m1 @ m2 @ m3 </code></pre>
python|arrays|numpy|matrix-multiplication
2
3,088
69,382,610
PandasNotImplementedError: The method `pd.Series.__iter__()` is not implemented. If you want to collect your data as an NumPy array
<p>I try to create a new column in Koalas dataframe <code>df</code>. The dataframe has 2 columns: <code>col1</code> and <code>col2</code>. I need to create a new column <code>newcol</code> as a median of <code>col1</code> and <code>col2</code> values.</p> <pre><code>import numpy as np import databricks.koalas as ks # df is Koalas dataframe df = df.assign(newcol=lambda x: np.median(x.col1, x.col2).astype(float)) </code></pre> <p>But I get the following error:</p> <blockquote> <p>PandasNotImplementedError: The method <code>pd.Series.__iter__()</code> is not implemented. If you want to collect your data as an NumPy array, use 'to_numpy()' instead.</p> </blockquote> <p>Also I tried:</p> <pre><code>df.newcol = df.apply(lambda x: np.median(x.col1, x.col2), axis=1) </code></pre> <p>But it didn't work.</p>
<p>I had the same problem. One caveat, I'm using pyspark.pandas instead of koalas, but my understanding is that pyspark.pandas came from koalas, so my solution might still help. I tried to test it with koalas but was unable to run a cluster with a reasonable version.</p> <pre class="lang-py prettyprint-override"><code>import pyspark.pandas as ps data = {&quot;col_1&quot;: [1,2,3], &quot;col_2&quot;: [4,5,6]} df = ps.DataFrame(data) median_series = df[[&quot;col_1&quot;,&quot;col_2&quot;]].apply(lambda x: x.median(), axis=1) median_series.name = &quot;median&quot; df = ps.merge(df, median_series, left_index=True, right_index=True, how='left') </code></pre> <p>On apply, the lambda parameter x is a pandas.Series of each row, so I used its median method. Annoyingly, I couldn't get any assigning to work, the only way I found was to make this ugly merge. Oh, and used left to have the peace of mind that df would keep the same number of rows, but inner could be fine depending on context</p>
python|pandas|dataframe|databricks|spark-koalas
0
3,089
69,657,133
PyTorch bool value of tensor with more than one value is ambiguous
<p>I am trying to train a neural network with PyTorch, but I get the error in the title. I followed <a href="https://medium.com/analytics-vidhya/implementing-cnn-in-pytorch-with-custom-dataset-and-transfer-learning-1864daac14cc" rel="nofollow noreferrer">this tutorial</a>, and I just applied some small changes to meet my needs. Here's the network:</p> <pre><code>class ChordClassificationNetwork(nn.Module): def __init__(self, train_model=False): super(ChordClassificationNetwork, self).__init__() self.train_model = train_model self.flatten = nn.Flatten() self.firstConv = nn.Conv2d(3, 64, (3, 3)) self.secondConv = nn.Conv2d(64, 64, (3, 3)) self.pool = nn.MaxPool2d(2) self.drop = nn.Dropout(0.25) self.fc1 = nn.Linear(33856, 256) self.fc2 = nn.Linear(256, 256) self.outLayer = nn.Linear(256, 7) def forward(self, x): x = self.firstConv(x) x = F.relu(x) x = self.pool(x) x = self.secondConv(x) x = F.relu(x) x = self.pool(x) x = self.drop(x) x = self.flatten(x) x = self.fc1(x) x = F.relu(x) x = self.drop(x) x = self.fc2(x) x = F.relu(x) x = self.drop(x) x = self.outLayer(x) output = F.softmax(x, dim=1) return output </code></pre> <p>and the accuray check part, the one that is causing the error:</p> <pre><code>device = (&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;) transformations = transforms.Compose([ transforms.Resize((100, 100)) ]) num_epochs = 10 learning_rate = 0.001 train_CNN = False batch_size = 32 shuffle = True pin_memory = True num_workers = 1 dataset = GuitarDataset(&quot;../chords_data/cropped_images/train&quot;, transform=transformations) train_set, validation_set = torch.utils.data.random_split(dataset, [int(0.8 * len(dataset)), len(dataset) - int(0.8*len(dataset))]) train_loader = DataLoader(dataset=train_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) validation_loader = DataLoader(dataset=validation_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) model = ChordClassificationNetwork().to(device) criterion = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) def check_accuracy(loader, model): if loader == train_loader: print(&quot;Checking accuracy on training data&quot;) else: print(&quot;Checking accuracy on validation data&quot;) num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) scores = model(x) predictions = torch.tensor([1.0 if i &gt;= 0.5 else 0.0 for i in scores]).to(device) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print( f&quot;Got {num_correct} / {num_samples} with accuracy {float(num_correct) / float(num_samples) * 100:.2f}&quot; ) return f&quot;{float(num_correct) / float(num_samples) * 100:.2f}&quot; def train(): model.train() for epoch in range(num_epochs): loop = tqdm(train_loader, total=len(train_loader), leave=True) if epoch % 2 == 0: loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) for imgs, labels in loop: imgs = imgs.to(device) labels = labels.to(device) outputs = model(imgs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loop.set_description(f&quot;Epoch [{epoch}/{num_epochs}]&quot;) loop.set_postfix(loss=loss.item()) if __name__ == &quot;__main__&quot;: train() </code></pre> <p>The error is caused on this line: <code>predictions = torch.tensor([1.0 if i &gt;= 0.5 else 0.0 for i in scores]).to(device)</code> but I don't understand why. 
I saw some other answers but those could not fix my problem.</p> <p>Complete stack trace:</p> <pre><code> 0%| | 0/13 [00:00&lt;?, ?it/s]Checking accuracy on validation data Traceback (most recent call last): File &quot;/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&amp;_Chords_Recognition/ChordsClassification/train_CCN.py&quot;, line 80, in &lt;module&gt; train() File &quot;/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&amp;_Chords_Recognition/ChordsClassification/train_CCN.py&quot;, line 66, in train loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) File &quot;/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&amp;_Chords_Recognition/ChordsClassification/train_CCN.py&quot;, line 52, in check_accuracy predictions = torch.tensor([1.0 if i &gt;= 0.5 else 0.0 for i in scores]).to(device) File &quot;/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&amp;_Chords_Recognition/ChordsClassification/train_CCN.py&quot;, line 52, in &lt;listcomp&gt; predictions = torch.tensor([1.0 if i &gt;= 0.5 else 0.0 for i in scores]).to(device) RuntimeError: Boolean value of Tensor with more than one value is ambiguous 0%| | 0/13 [00:02&lt;?, ?it/s] </code></pre>
<p>The error comes from the list comprehension: iterating over <code>scores</code> (shape <code>(batch_size, 7)</code>) yields one row of 7 values per sample, so <code>i &gt;= 0.5</code> produces a boolean tensor with 7 elements, and PyTorch cannot decide whether such a tensor is True or False inside the conditional expression. The output of the model is a discrete distribution over your 7 classes, so to retrieve the predicted class you can directly apply an argmax over it:</p> <pre><code>scores = model(x) predictions = scores.argmax(1) </code></pre>
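<p>For context, a minimal sketch of how the accuracy loop could use this (assuming your labels <code>y</code> are integer class indices; if they are one-hot encoded, compare against <code>y.argmax(dim=1)</code> instead):</p> <pre><code>with torch.no_grad():
    for x, y in loader:
        x = x.to(device=device)
        y = y.to(device=device)
        scores = model(x)                    # shape: (batch_size, 7)
        predictions = scores.argmax(dim=1)   # predicted class index per sample
        num_correct += (predictions == y).sum().item()
        num_samples += predictions.size(0)
</code></pre> <p>Note also that <code>nn.BCELoss</code> expects binary targets; for a 7-class problem you would typically switch to <code>nn.CrossEntropyLoss</code> (applied to raw logits, i.e. without the final softmax), but that is a separate issue from the error above.</p>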
python|neural-network|pytorch
3
3,090
40,871,797
Tensorflow softmax_cross_entropy_with_logits asks for unscaled log probabilities
<p>I have noticed that <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>tf.nn.softmax_cross_entropy_with_logits</code></a> asks for "unscaled log probabilities" for the logits argument. However, nowhere have I seen anyone suggest performing a log operation on their NN predictions before submission to this function. Am I missing something here?</p>
<p>You shouldn't perform a log operation. You shouldn't perform anything, actually :-). What the documentation is (arguably poorly) trying to say is that each logit is an unrestricted real number (negative or positive, as big or as small as you want). The softmax cross entropy function will then (conceptually) apply the softmax operation (exponentiate, to turn unrestricted real numbers into positive numbers, and then normalize to make them sum to 1) and compute the cross-entropy.</p> <p>So, tl;dr: feed the outputs of your last linear layer to this function without any normalization or activation, and you won't be wrong.</p>
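<p>As a minimal illustration (the variable names are placeholders, and the exact keyword form of <code>softmax_cross_entropy_with_logits</code> depends on your TensorFlow version):</p> <pre><code>import tensorflow as tf

# logits: raw, unnormalized outputs of the last linear layer -- no softmax, no log
logits = tf.matmul(hidden, weights) + biases

# the function applies the (log-)softmax internally, in a numerically stable way
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
</code></pre>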
tensorflow
8
3,091
41,030,250
Apply a function to each row python
<p>I am trying to convert from UTC time to LocaleTime in my dataframe. I have a <code>dictionary</code> where I store the number of hours I need to shift for each country code. So for example if I have <code>df['CountryCode'][0]='AU'</code> and I have a <code>df['UTCTime'][0]=2016-08-12 08:01:00</code> I want to get <code>df['LocaleTime'][0]=2016-08-12 19:01:00</code> which is </p> <pre><code>df['UTCTime'][0]+datetime.timedelta(hours=dateDic[df['CountryCode'][0]]) </code></pre> <p>I have tried to do it with a <code>for loop</code> but since I have more than 1 million rows it's not efficient. I have looked into the <code>apply</code> function but I can't seem to be able to put it to take inputs from two different columns.</p> <p>Can anyone help me?</p>
<p>Without a more complete example it's hard to be certain, but try this vectorized version (using the <code>dateDic</code> dictionary from your question):</p> <pre><code>df['LocaleTime'] = pd.to_timedelta(df.CountryCode.map(dateDic), unit='h') + df.UTCTime </code></pre>
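<p>As a self-contained sketch with made-up offsets (the actual hour shifts live in your own dictionary):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'CountryCode': ['AU', 'US'],
    'UTCTime': pd.to_datetime(['2016-08-12 08:01:00', '2016-08-12 03:30:00']),
})
dateDic = {'AU': 11, 'US': -5}   # illustrative values only

# vectorized over the whole column -- no Python-level loop over the million rows
df['LocaleTime'] = df['UTCTime'] + pd.to_timedelta(df['CountryCode'].map(dateDic), unit='h')
print(df)
</code></pre>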
python|pandas
0
3,092
40,905,389
In python, How do we find the Correlation Coefficient between two matrices?
<p>I have two matrices, say T1 and T2, each of size m x n. I want to find the correlation coefficient between the two matrices.<br> So far I haven't used any built-in library function for it. I am doing the following steps:<br> First I calculate the mean of the two matrices as: </p> <pre><code>M1 = T1.mean() M2 = T2.mean() </code></pre> <p>and then I subtract the mean from the corresponding matrices as: </p> <pre><code>A = np.subtract(T1, M1) B = np.subtract(T2, M2) </code></pre> <p>where np is the numpy library and A and B are the resulting matrices after the subtraction.<br> Now I calculate the correlation coefficient as:</p> <pre><code>alpha = np.sum(A*B) / (np.sqrt((np.sum(A))*np.sum(B))) </code></pre> <p>However, the value I get is far greater than 1 and is not meaningful at all. It should be between 0 and 1 to get some meaning out of it.<br> I have also tried to use the absolute values of matrices A and B, but that also didn't work.<br> I also tried to use: </p> <pre><code>np.sum(np.dot(A,B.T)) instead of np.sum(A*B) </code></pre> <p>in the numerator, but that also didn't work.<br> Edit1:<br> This is the formula that I intend to calculate:<br> <a href="https://i.stack.imgur.com/TSwYZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TSwYZ.png" alt="This image shows the actual formula to be calculated"></a></p> <p>In this image, C is one of the matrices and T is the other.<br> 'u' is the mean symbol. </p> <p>Can somebody tell me where I am making the mistake?</p>
<p>Well, I think this function does what I intended: </p> <pre><code>def correlation_coefficient(T1, T2): numerator = np.mean((T1 - T1.mean()) * (T2 - T2.mean())) denominator = T1.std() * T2.std() if denominator == 0: return 0 else: result = numerator / denominator return result </code></pre> <p>The numerator is the covariance of the two matrices (a mean rather than a sum), and the denominator is the product of their standard deviations. Although this looks slightly different from the formula shown in the image, the 1/N factors in the numerator and the denominator cancel, so the two expressions are equivalent: this is the Pearson correlation coefficient.<br> The result is now meaningful, since it always lies between -1 and 1.</p>
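<p>As a quick sanity check, the same quantity can be obtained from NumPy's built-in Pearson correlation by flattening the two matrices:</p> <pre><code>import numpy as np

# should agree (up to floating-point error) with correlation_coefficient(T1, T2),
# since the normalization factors cancel in a correlation
alpha = np.corrcoef(T1.ravel(), T2.ravel())[0, 1]
</code></pre> <p>This gives an easy way to verify the hand-written function on random test matrices.</p>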
python|numpy|matrix|correlation
0
3,093
40,956,834
Passing a pandas data frame through an R function using rpy2
<p>I am trying to reproduce R results in Python. The following R code works:</p> <pre><code>library("TTR") library("zoo") library("xts") library("quantmod") getSymbols("^GSPC",from = "2014-01-01", to = "2015-01-01") dataf = GSPC[,c("GSPC.High", "GSPC.Low", "GSPC.Close")] result = CCI(dataf, n=20, c=0.015) </code></pre> <p>But not the following Python code:</p> <pre><code>from datetime import datetime from rpy2.robjects.packages import importr TTR = importr('TTR') import pandas_datareader as pdr from rpy2.robjects import pandas2ri pandas2ri.activate() GSPC = pdr.get_data_yahoo(symbols='^GSPC', start=datetime(2014, 1, 1), end=datetime(2015, 1, 1)) dataf = GSPC[['High', 'Low', 'Close']] result = TTR.CCI(dataf, n=20, c=0.015) </code></pre> <p>The error I get occurs on the last line when using TTR.CCI. Traceback and error returned is:</p> <pre><code>Traceback (most recent call last): File "svm_strat_test_oliver.py", line 30, in &lt;module&gt; result = TTR.CCI(dataf, n=20, c=0.015) File "/usr/local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 178, in __call__ return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 106, in __call__ res = super(Function, self).__call__(*new_args, **new_kwargs) rpy2.rinterface.RRuntimeError: Error in `[.data.frame`(center, beg:NROW(x)) : undefined columns selected </code></pre>
<p>The object returned by <code>getSymbols</code> in your R code is not a plain data.frame but an "xts"/"zoo" object, so in the Python code you just need to convert the pandas data frame to one as well before calling <code>CCI</code>:</p> <pre><code>rzoo = importr('zoo') datazoo = rzoo.as_zoo_xts(dataf) result = TTR.CCI(datazoo, n=20, c=0.015) </code></pre>
python|r|pandas|dataframe|rpy2
0
3,094
54,141,073
How to get last n values in each row using pandas
<p>I have a df which contains quite similar to below. it has many columns and some of them contains NaN. I want to get last n elements from the each row excluding NaN. Where n represent 3 here. </p> <p>Input :</p> <pre><code> col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \ 0 NaN NaN 23.0 23 23.0 NaN 23.0 23.0 123.0 NaN NaN 1 NaN NaN NaN 45 12.0 23.0 23.0 NaN NaN NaN NaN 2 45.0 56.0 34.0 23 323.0 12.0 NaN NaN NaN NaN NaN 3 NaN NaN 34.0 65 NaN 65.0 2343.0 NaN NaN 2344.0 2.0 4 NaN NaN NaN 5 675.0 34.0 34.0 34.0 NaN NaN NaN 5 34.0 45.0 45.0 45 NaN NaN NaN NaN NaN NaN NaN col12 col13 I 0 NaN NaN r1 1 NaN NaN r2 2 NaN NaN r3 3 324.0 234.0 r4 4 NaN NaN r5 5 NaN NaN r6 </code></pre> <p>Output:</p> <pre><code> col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \ 0 NaN NaN 23.0 23 23.0 NaN 23.0 23.0 123.0 NaN NaN 1 NaN NaN NaN 45 12.0 23.0 23.0 NaN NaN NaN NaN 2 45.0 56.0 34.0 23 323.0 12.0 NaN NaN NaN NaN NaN 3 NaN NaN 34.0 65 NaN 65.0 2343.0 NaN NaN 2344.0 2.0 4 NaN NaN NaN 5 675.0 34.0 34.0 34.0 NaN NaN NaN 5 34.0 45.0 45.0 45 NaN NaN NaN NaN NaN NaN NaN col12 col13 I res1 0 NaN NaN r1 [23.0, 23.0, 123.0] 1 NaN NaN r2 [12.0, 23.0, 23.0] 2 NaN NaN r3 [23, 323.0, 12.0] 3 324.0 234.0 r4 [2.0, 324.0, 234.0] 4 NaN NaN r5 [34.0, 34.0, 34.0] 5 NaN NaN r6 [45.0, 45.0, 45] </code></pre> <p>So Far I get the solution using below code.</p> <pre><code>df['res1']=df.apply(lambda x:x.dropna().values.tolist()[len(x.dropna().values.tolist())-4:len(x.dropna().values.tolist())-1],axis=1) </code></pre> <p>My solution looks very ineffective, First thing i'm using lambda which yields my code performance to low, and repeating same method to get index. </p> <p>I hope to get clear performance solution for this problem.</p> <p>Input Dataframe file is <a href="https://ufile.io/qlt03" rel="nofollow noreferrer"> here </a></p> <pre><code>df=pd.read_csv('s1.csv')#code to reproduce input </code></pre>
<p>Solution if every row has at least as many non-missing values as the threshold (here 3):</p> <p>Use numpy with the <a href="https://stackoverflow.com/a/44559180"><code>justify</code></a> function:</p> <pre><code>df['res1'] = justify(df.iloc[:, :-1].values, invalid_val=np.nan, side='right')[:, -3:].tolist() print (df) col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \ 0 NaN NaN 23.0 23 23.0 NaN 23.0 23.0 123.0 NaN NaN 1 NaN NaN NaN 45 12.0 23.0 23.0 NaN NaN NaN NaN 2 45.0 56.0 34.0 23 323.0 12.0 NaN NaN NaN NaN NaN 3 NaN NaN 34.0 65 NaN 65.0 2343.0 NaN NaN 2344.0 2.0 4 NaN NaN NaN 5 675.0 34.0 34.0 34.0 NaN NaN NaN 5 34.0 45.0 45.0 45 NaN NaN NaN NaN NaN NaN NaN col12 col13 I res1 0 NaN NaN r1 [23.0, 23.0, 123.0] 1 NaN NaN r2 [12.0, 23.0, 23.0] 2 NaN NaN r3 [23.0, 323.0, 12.0] 3 324.0 234.0 r4 [2.0, 324.0, 234.0] 4 NaN NaN r5 [34.0, 34.0, 34.0] 5 NaN NaN r6 [45.0, 45.0, 45.0] </code></pre> <p>If some rows have fewer valid values than the threshold, a loop is needed:</p> <pre><code>#changed a bit https://stackoverflow.com/a/40835254 def loop_compr_based(a, last): mask = ~np.isnan(a) stop = mask.sum(1).cumsum() start = np.append(0,stop[:-1]) am = a[mask].tolist() out = np.array([am[start[i]:stop[i]][-last:] for i in range(len(start))]) return out df['res1'] = loop_compr_based(df.iloc[:, :-1].values, 5).tolist() print (df) col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \ 0 NaN NaN 23.0 23 23.0 NaN 23.0 23.0 123.0 NaN NaN 1 NaN NaN NaN 45 12.0 23.0 23.0 NaN NaN NaN NaN 2 45.0 56.0 34.0 23 323.0 12.0 NaN NaN NaN NaN NaN 3 NaN NaN 34.0 65 NaN 65.0 2343.0 NaN NaN 2344.0 2.0 4 NaN NaN NaN 5 675.0 34.0 34.0 34.0 NaN NaN NaN 5 34.0 45.0 45.0 45 NaN NaN NaN NaN NaN NaN NaN col12 col13 I res1 0 NaN NaN r1 [23.0, 23.0, 23.0, 23.0, 123.0] 1 NaN NaN r2 [45.0, 12.0, 23.0, 23.0] 2 NaN NaN r3 [56.0, 34.0, 23.0, 323.0, 12.0] 3 324.0 234.0 r4 [2343.0, 2344.0, 2.0, 324.0, 234.0] 4 NaN NaN r5 [5.0, 675.0, 34.0, 34.0, 34.0] 5 NaN NaN r6 [34.0, 45.0, 45.0, 45.0] </code></pre>
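<p>For completeness, the <code>justify</code> helper referenced above comes from the linked answer; a rough sketch of it (details may differ slightly from the original) looks like this:</p> <pre><code>import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    """Push all valid (non-invalid) elements of `a` to one side along `axis`."""
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    # sorting a boolean mask pushes the True entries to the high end of the axis
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out
</code></pre>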
python|pandas
4
3,095
54,061,120
Sorting Strings with separators in pandas
<p>I have a pandas Series of strings with separators in them, for example:</p> <pre><code>['160.20.2257.92', '829.328.17.39'] </code></pre> <p>I want to sort them. If I use <code>Series.sort_values()</code> as in the code below:</p> <pre><code>a = pd.Series(['6.0.0.0', '10.0.4.0']) a.sort_values() </code></pre> <p>I get the output as:</p> <pre><code>1 10.0.4.0 0 6.0.0.0 </code></pre> <p>which is expected, since the sorting function compares 6 with 1 rather than 6 with 10, and since 1 is smaller it is displayed first in sorted order. What I want is for the values to be sorted by the first part before the separator ('.'), then by the second part, and so on (i.e. compare 10 &amp; 6, then 0 &amp; 0, then 4 &amp; 0, and finally 0 &amp; 0).</p> <p>What is the best way to achieve this in pandas, in terms of speed, since I am dealing with a large dataset?</p>
<p>I believe this is what you are looking for:</p> <pre><code>a = ['160.20.2257.92', '829.328.17.39'] b = sorted(map(lambda x: tuple(map(int, x.split('.'))), a)) final = list(map(lambda x: '.'.join(map(str, x)), b)) final ['160.20.2257.92', '829.328.17.39'] </code></pre> <p>I hope this covers all corner cases.</p>
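<p>Since the data is in a pandas Series, you can also keep the sort inside pandas by sorting on a numeric key. This sketch assumes pandas &gt;= 1.1, where <code>sort_values</code> accepts a <code>key</code> argument:</p> <pre><code>import pandas as pd

a = pd.Series(['6.0.0.0', '10.0.4.0', '160.20.2257.92'])

# turn each string into a tuple of ints so '10' sorts after '6'
a_sorted = a.sort_values(
    key=lambda col: col.str.split('.').map(lambda parts: tuple(map(int, parts))))
print(a_sorted.tolist())   # ['6.0.0.0', '10.0.4.0', '160.20.2257.92']
</code></pre> <p>On older pandas versions you can compute the same tuple key as a separate Series and reorder with <code>a.iloc[key.argsort()]</code>.</p>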
python-3.x|pandas|dataframe
2
3,096
66,108,629
WHY is `rename` with selection of columns not working with a lambda function?
<p>I want to rename a selected portion of columns with a lambda function using <code>rename</code></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({'pre_col1': [1, 2], 'pre_col2': [3, 4], 'pre_col3': [ 3, 29], 'pre_col4': [94, 170], 'pre_col5': [31, 115]}) # This works but it renames all of them # df.rename(columns=lambda x: x.replace('pre_', '')) # I'm only wanting to edit and rename a selection df.iloc[:, 2:5] = (df.iloc[:, 2:5] .rename(columns=lambda x: x.replace('pre_', ''))) print(df) </code></pre> <p>This produces</p> <pre><code> pre_col1 pre_col2 pre_col3 pre_col4 pre_col5 0 1.0 3.0 NaN NaN NaN 1 2.0 4.0 NaN NaN NaN </code></pre> <p><strong>I know there are many ways to rename columns.</strong> I've read <a href="https://stackoverflow.com/questions/11346283/renaming-columns-in-pandas">here</a>, <a href="https://stackoverflow.com/questions/56790037/renaming-selected-columns-in-pandas">here</a>, and <a href="https://stackoverflow.com/questions/38101009/changing-multiple-column-names-but-not-all-of-them-pandas-python">here</a>.</p> <p>But why isn't this way working? And why does it fill the columns i'm trying to change with <code>NaN</code>s ??</p>
<ol> <li>Column indexes are immutable, so you cannot rename a slice of them in place.</li> <li>The changes also happen in a copy of the dataframe (as @SeaBean suggested): <code>df.iloc[:, 2:5].rename(...)</code> only renames a temporary copy, and when that copy is assigned back through <code>iloc</code>, pandas aligns the DataFrame on its column labels; since the renamed labels no longer match <code>pre_col3</code>..<code>pre_col5</code>, the slice is filled with <code>NaN</code>s.</li> </ol> <p><strong>Option 1:</strong> rename the columns through a mapping.</p> <pre><code>import pandas as pd df = pd.DataFrame({'pre_col1': [1, 2], 'pre_col2': [3, 4], 'pre_col3': [ 3, 29], 'pre_col4': [94, 170], 'pre_col5': [31, 115]}) columns_to_modify = df.columns.tolist()[2:5] columns_rename = {} for i in columns_to_modify: columns_rename[i] = i.replace('pre_', '') df.rename(columns=columns_rename, inplace=True) print(df) pre_col1 pre_col2 col3 col4 col5 0 1 3 3 94 31 1 2 4 29 170 115 </code></pre> <p><strong>Option 2:</strong> overwrite the underlying values of the column index.</p> <pre><code>import pandas as pd df = pd.DataFrame({'pre_col1': [1, 2], 'pre_col2': [3, 4], 'pre_col3': [ 3, 29], 'pre_col4': [94, 170], 'pre_col5': [31, 115]}) df.columns.values[2:5] = list(map(lambda x: x.replace('pre_', ''), df.columns.tolist()[2:5])) df pre_col1 pre_col2 col3 col4 col5 0 1 3 3 94 31 1 2 4 29 170 115 </code></pre> <p>I believe the original difficulty with <code>df.iloc[:, 2:5] = df.iloc[:, 2:5].rename(columns=lambda x: x.replace('pre_', ''))</code> comes down to the immutability of indexes in dataframes, as discussed in:</p> <ol> <li><a href="https://stackoverflow.com/questions/48372556/pandas-typeerror-index-does-not-support-mutable-operations">Pandas TypeError: Index does not support mutable operations</a></li> <li><a href="https://stackoverflow.com/questions/46193728/regarding-the-immutability-of-pandas-dataframe-indexes">Regarding the immutability of pandas dataframe indexes</a></li> <li><a href="https://stackoverflow.com/questions/40459254/pandas-change-a-specific-column-name-in-dataframe-having-multilevel-columns">Pandas: Change a specific column name in dataframe having multilevel columns</a> From those it appears that dataframe indexes are immutable: they are set up all at once and kept that way on purpose. Interestingly, although the Index object itself is immutable, you can still overwrite its underlying values array, as in the second option.</li> </ol>
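<p>If the goal is simply the rename itself, a compact variant of Option 1 is to build the mapping directly from the column slice (no in-place index mutation needed):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'pre_col1': [1, 2], 'pre_col2': [3, 4], 'pre_col3': [3, 29],
                   'pre_col4': [94, 170], 'pre_col5': [31, 115]})

# rename only columns 2..4 by deriving the mapping from that slice of the column index
df = df.rename(columns={c: c.replace('pre_', '') for c in df.columns[2:5]})
print(df.columns.tolist())   # ['pre_col1', 'pre_col2', 'col3', 'col4', 'col5']
</code></pre>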
python|pandas|rename
1
3,097
66,063,032
How to efficiently vectorize (i.e. avoid explicit loops) a mapping of some colours into others in a Numpy array
<p>I have as input a Numpy matrix of rank three (i.e. an image: horizontal, vertical and 4 color channels). I want to read this matrix element-wise in their first two indices and map only certain colours into others, defined in respective arrays. The performance is very important, as this mapping will be applied many times, possible becoming a bottleneck of the program.</p> <p>Precisely the code I have so far is:</p> <pre><code># data is the rank-3 Numpy array with the image (obtained using the library PIL) # palette has shape (11, 4) and defines the 11 colours to map # palette_grey has shape (11,) and defines the 11 tones of grey to apply for i in range(palette.shape[0]): # loop in colours match = (data[:,:,0] == palette[i,0]) &amp; (data[:,:,1] == palette[i,1]) &amp; (data[:,:,2] == palette[i,2]) # build matrix only True when a given pixel has the right color for j in range(3): # loop to apply the mapping to the three channels (because it's just grey, so all channels are equal) data[:,:,j] = np.where(match, grey_palette[i], data[:,:,j]) # the mapping itself </code></pre> <p>Although the main task is vectorized (via np.where), there are still two explicit loops I'd like to avoid to improve performance.</p> <p>Any idea to achieve this?</p> <p>EDIT:</p> <p>I have tried to remove the second loop (in channels) by defining the both palettes to have the same shape (11,4). Then, I have tried this:</p> <pre><code>for i in range(palette.shape[0]): match = (data[:,:,0] == palette[i,0]) &amp; (data[:,:,1] == palette[i,1]) &amp; (data[:,:,2] == palette[i,2]) data[:,:,:] = np.where(match, grey_palette[i], data[:,:,:]) </code></pre> <p>But it raises the error:</p> <blockquote> <p>ValueError: operands could not be broadcast together with shapes (480,480) (4,) (480,480,4)</p> </blockquote> <p>I guess this is the expected behaviour, but I thought the mapping I propose is unambiguous, and therefore doable by Numpy.</p>
<p>When I compare your solution with this one:</p> <pre><code>for i in range(palette.shape[0]): new_data[data == palette[i]] = grey_palette[i] </code></pre> <p>using <code>%%timeit</code> in a notebook gives 87 ms versus 218 ms for yours, on a 1000x1000x3 <code>data</code> array. Note that <code>data == palette[i]</code> compares channel by channel; if a pixel should only count as a match when <em>all</em> of its channels equal the palette colour, use <code>(data == palette[i]).all(-1)</code> as the boolean mask instead.</p> <p>EDIT: I deleted an earlier comment about a 'problem' with your solution; it was an artifact of my own change of writing into <code>new_data</code> in only one place.</p>
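<p>If the goal is to drop the Python loop over the palette entirely, one possibility is to broadcast the comparison over all palette colours at once. This is only a sketch (it assumes the same shapes as in the question -- <code>data</code> is H x W x 4, <code>palette</code> is 11 x 4, <code>grey_palette</code> is 11 -- and it builds an H x W x 11 boolean array, so memory grows with the number of colours):</p> <pre><code>import numpy as np

# (H, W, 1, 3) == (1, 1, 11, 3) -&gt; (H, W, 11): True where a pixel matches a palette colour
match = (data[:, :, None, :3] == palette[None, None, :, :3]).all(axis=-1)

hit = match.any(axis=-1)       # (H, W): pixels that matched some palette colour
idx = match.argmax(axis=-1)    # (H, W): which palette entry matched (meaningless where hit is False)

grey = grey_palette[idx]               # (H, W): grey tone per pixel
data[hit, :3] = grey[hit][:, None]     # write the grey value into the three colour channels
</code></pre>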
python|numpy|mapping
1
3,098
66,334,784
Is it meaningless to use ReduceLROnPlateau with Adam optimizer?
<p><strong>This question is about the inner workings of Keras / <code>tf.keras</code>, for people who know the framework in depth.</strong></p> <p>To my knowledge, <code>tf.keras.optimizers.Adam</code> is an optimizer that already has an <strong>adaptive</strong> learning-rate scheme. So if we use <code>keras.callbacks.ReduceLROnPlateau</code> with the <code>Adam</code> optimizer (or any other adaptive one), isn't it meaningless to do so? I don't know the inner workings of the Keras <code>Optimizer</code> classes, but it seems natural to ask: if we are already using an adaptive optimizer, why use this callback at all, and <strong>if we do use this callback, what would be the effect on training</strong>?</p>
<p>Conceptually, consider the gradient a fixed, mathematical value from automatic differentiation.</p> <p>What every optimizer other than pure SGD does is to take the gradient and apply some statistical analysis to create a better gradient. In the simplest case, momentum, the gradient is averaged with previous gradients. In RMSProp, the variance of the gradient across batches is measured - the noisier it is, the less RMSProp &quot;trusts&quot; the gradient, and so the gradient is reduced (divided by the stdev of the gradient for that weight). Adam does both.</p> <p>Then, all optimizers multiply the statistically adjusted gradient by a learning rate.</p> <p>So although one colloquial description of Adam is that it automatically tunes a learning rate, a more informative description is that Adam statistically adjusts gradients to be more reliable, but you still need to decide on a learning rate and how it changes during training (i.e. an LR policy). ReduceLROnPlateau, cosine decay, warmup, etc. are examples of LR policies.</p> <p>Whether you work in TF or PyTorch, the pseudocode in PyTorch's optimizer documentation is my go-to for understanding the optimizer algorithms. It looks like a wall of Greek letters at first, but you'll grok it if you stare at it for a few minutes.</p> <p><a href="https://pytorch.org/docs/stable/optim.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/optim.html</a></p>
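<p>In practice the two are routinely combined. A minimal Keras sketch (the model, data and hyper-parameter values are placeholders, not recommendations):</p> <pre><code>from tensorflow import keras

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# scale the global learning rate down when validation loss stops improving
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                              patience=3, min_lr=1e-6)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=50,
          callbacks=[reduce_lr])
</code></pre> <p>Here Adam still adapts per-weight step sizes from gradient statistics, while the callback implements the LR policy on top of it.</p>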
tensorflow|machine-learning|keras|deep-learning|tf.keras
0
3,099
52,608,799
How to get the value of a single row of columns with same name?
<p>I have data frames with same column names so i have merge them</p> <h1>df1</h1> <pre><code> wave num stlines 0 4050.32 3.0 0.282690 1 4208.98 5.5 0.490580 2 4374.94 9.0 0.714830 3 4379.74 9.0 0.314040 4 4398.01 14.0 0.504150 5 4502.21 8.0 0.562780 </code></pre> <h1>df2</h1> <pre><code> wave num stlines 0 4050.32 3 0.28616 1 4208.98 6 0.48781 2 4374.94 9 0.71548 3 4379.74 10 0.31338 4 4398.01 15 0.49950 5 4502.21 9 0.56362 </code></pre> <h1>df3</h1> <pre><code> wave num stlines 0 4050.32 3.0 0.282690 1 4208.98 7.5 0.490580 2 4374.94 9.0 0.714830 3 4379.74 9.0 0.314040 4 4398.01 14.0 0.504150 5 4502.21 8.0 0.562780 </code></pre> <p>after merging, the resultant dataframe looks like this:</p> <pre><code>df=pd.merge(df1,df2,df3, on='wave',axis=1,join='inner') wave num_x stlines_x num_x stlines_x num_x stlines_x 0 4050.32 3.0 0.282690 3 0.28616 3.0 0.282690 1 4208.98 5.5 0.490580 6 0.48781 5.5 0.490580 2 4374.94 9.0 0.714830 9 0.71548 9.0 0.714830 3 4379.74 9.0 0.314040 10 0.31338 9.0 0.314040 4 4398.01 14.0 0.504150 15 0.49950 14.0 0.504150 5 4502.21 8.0 0.562780 9 0.56362 8.0 0.562780 </code></pre> <p>So now if i want to take the values of all the coulmns with name<code>num_x</code> for any row. Then how can i get them?</p> <p>I can get the complete columns with same name using the following </p> <pre><code>df.num_x num num num 0 3 3.0 3 1 5.5 6 7.5 2 9 9.0 9 3 10 14.0 10 4 15 8.0 15 5 9 3.0 9 </code></pre> <p>but when i tried to do the same for a single row '1' then it didn't work:</p> <pre><code>df.num_x['1'] </code></pre> <p>The desired result should look like this:</p> <pre><code> num num num 1 5.5 6 7.5 </code></pre> <p>How can i get them??</p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p> <pre><code>df.loc[1, 'num_x'] </code></pre> <hr> <p>In pandas, duplicate column names are problematic because it is not easy to select the first or second <code>num_x</code> individually, so I suggest creating a <code>MultiIndex</code> instead:</p> <pre><code>dfs = [df1, df2, df3] df = pd.concat([x.set_index('wave') for x in dfs], axis=1, keys=['df1','df2','df3'], join='inner') print (df) df1 df2 df3 num stlines num stlines num stlines wave 4050.32 3.0 0.28269 3 0.28616 3.0 0.28269 4208.98 5.5 0.49058 6 0.48781 7.5 0.49058 4374.94 9.0 0.71483 9 0.71548 9.0 0.71483 4379.74 9.0 0.31404 10 0.31338 9.0 0.31404 4398.01 14.0 0.50415 15 0.49950 14.0 0.50415 4502.21 8.0 0.56278 9 0.56362 8.0 0.56278 </code></pre> <p>And then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow noreferrer"><code>xs</code></a> for selecting:</p> <pre><code>df1 = df.xs('num', axis=1, level=1) print (df1) df1 df2 df3 wave 4050.32 3.0 3 3.0 4208.98 5.5 6 7.5 4374.94 9.0 9 9.0 4379.74 9.0 10 9.0 4398.01 14.0 15 14.0 4502.21 8.0 9 8.0 </code></pre>
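<p>With the <code>MultiIndex</code> layout, pulling all <code>num</code> values for a single row is then straightforward, e.g. for the second wave:</p> <pre><code>df.xs('num', axis=1, level=1).iloc[[1]]
#           df1  df2  df3
# wave
# 4208.98   5.5    6  7.5
</code></pre>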
pandas|dataframe|rows
1