Dataset columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k)
374,900
70,107,223
TensorFlow ValueError: Input 0 of layer sequential is incompatible with the layer
<p>When I run the last part of the code below I get the following error:</p> <pre><code>ValueError: Input 0 of layer sequential_1 is incompatible with the layer: expected axis -1 of input shape to have value 28 but received input with shape (None, 30, 30) </code></pre> <pre><code>import pandas as pd import numpy as np from tensorflow import keras from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Dropout, Flatten, GRU, SimpleRNN, LSTM, Bidirectional, Activation, TimeDistributed from tensorflow.keras import models from tensorflow.keras import layers from tensorflow.keras.regularizers import l2 from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt CNNmodel = keras.Sequential() CNNmodel.add(Conv1D(32, 2, activation='relu', input_shape=(20,28))) # 32 convolution filters used each of size 2 CNNmodel.add(Conv1D(64, 3, activation='relu')) # 64 convolution filters used each of size 3 CNNmodel.add(MaxPooling1D(pool_size=(1,))) # choose the best features via pooling CNNmodel.add(Dropout(0.25)) # randomly turn neurons on and off to improve convergence CNNmodel.add(Flatten()) # flatten we only want a classification output CNNmodel.add(Dense(30, activation='relu')) # fully connected to get all relevant data CNNmodel.add(Dropout(0.1)) # one more dropout CNNmodel.add(Dense(1, activation='sigmoid')) # output lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=1e-2, decay_steps=10000, decay_rate=0.9) opt = keras.optimizers.Adagrad(learning_rate=lr_schedule) CNNmodel.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) CNNhistory = CNNmodel.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, batch_size=128) # Getting score metrics scores = CNNmodel.evaluate(x_test, y_test) print(&quot;Accuracy: %.2f%%&quot; % (scores[1]*100)) </code></pre>
<p><strong>Working sample code</strong></p> <pre><code>import pandas as pd import numpy as np from tensorflow import keras from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Dropout, Flatten, GRU, SimpleRNN, LSTM, Bidirectional, Activation, TimeDistributed from tensorflow.keras import models from tensorflow.keras import layers from tensorflow.keras.regularizers import l2 from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() CNNmodel = keras.Sequential() CNNmodel.add(Conv1D(32, 2, activation='relu', input_shape=(28,28))) # 32 convolution filters used each of size 2 CNNmodel.add(Conv1D(64, 3, activation='relu')) # 64 convolution filters used each of size 3 CNNmodel.add(MaxPooling1D(pool_size=(1,))) # choose the best features via pooling CNNmodel.add(Dropout(0.25)) # randomly turn neurons on and off to improve convergence CNNmodel.add(Flatten()) # flatten we only want a classification output CNNmodel.add(Dense(30, activation='relu')) # fully connected to get all relevant data CNNmodel.add(Dropout(0.1)) # one more dropout CNNmodel.add(Dense(1, activation='sigmoid')) lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=1e-2, decay_steps=10000, decay_rate=0.9) opt = keras.optimizers.Adagrad(learning_rate=lr_schedule) CNNmodel.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) CNNhistory = CNNmodel.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20, batch_size=128) </code></pre> <p><strong>Output</strong></p> <pre><code>Epoch 1/20 469/469 [==============================] - 14s 28ms/step - loss: -3719534848.0000 - accuracy: 0.1124 - val_loss: -11388519424.0000 - val_accuracy: 0.1135 Epoch 2/20 469/469 [==============================] - 9s 18ms/step - loss: -25869672448.0000 - accuracy: 0.1124 - val_loss: -44974247936.0000 - val_accuracy: 0.1135 Epoch 3/20 469/469 [==============================] - 8s 17ms/step - loss: -69248000000.0000 - accuracy: 0.1124 - val_loss: -99721273344.0000 - val_accuracy: 0.1135 Epoch 4/20 469/469 [==============================] - 8s 17ms/step - loss: -133157298176.0000 - accuracy: 0.1124 - val_loss: -175103967232.0000 - val_accuracy: 0.1135 Epoch 5/20 469/469 [==============================] - 8s 18ms/step - loss: -216887148544.0000 - accuracy: 0.1124 - val_loss: -270619656192.0000 - val_accuracy: 0.1135 Epoch 6/20 469/469 [==============================] - 8s 18ms/step - loss: -320444530688.0000 - accuracy: 0.1124 - val_loss: -385881538560.0000 - val_accuracy: 0.1135 Epoch 7/20 469/469 [==============================] - 9s 19ms/step - loss: -443233828864.0000 - accuracy: 0.1124 - val_loss: -520282669056.0000 - val_accuracy: 0.1135 Epoch 8/20 469/469 [==============================] - 8s 18ms/step - loss: -584527708160.0000 - accuracy: 0.1124 - val_loss: -673431617536.0000 - val_accuracy: 0.1135 Epoch 9/20 469/469 [==============================] - 8s 17ms/step - loss: -743466008576.0000 - accuracy: 0.1124 - val_loss: -844648939520.0000 - val_accuracy: 0.1135 Epoch 10/20 469/469 [==============================] - 8s 17ms/step - loss: -920933564416.0000 - accuracy: 0.1124 - val_loss: -1033648603136.0000 - val_accuracy: 0.1135 Epoch 11/20 469/469 [==============================] - 9s 19ms/step - loss: -1113547472896.0000 - accuracy: 0.1124 - val_loss: -1239565729792.0000 - val_accuracy: 0.1135 Epoch 12/20 469/469 [==============================] - 9s 19ms/step - loss: -1324937117696.0000 
- accuracy: 0.1124 - val_loss: -1462383280128.0000 - val_accuracy: 0.1135 Epoch 13/20 469/469 [==============================] - 8s 18ms/step - loss: -1552220815360.0000 - accuracy: 0.1124 - val_loss: -1701631885312.0000 - val_accuracy: 0.1135 Epoch 14/20 469/469 [==============================] - 9s 19ms/step - loss: -1793859387392.0000 - accuracy: 0.1124 - val_loss: -1956641505280.0000 - val_accuracy: 0.1135 Epoch 15/20 469/469 [==============================] - 9s 19ms/step - loss: -2052668915712.0000 - accuracy: 0.1124 - val_loss: -2227197444096.0000 - val_accuracy: 0.1135 Epoch 16/20 469/469 [==============================] - 9s 20ms/step - loss: -2327011393536.0000 - accuracy: 0.1124 - val_loss: -2512955113472.0000 - val_accuracy: 0.1135 Epoch 17/20 469/469 [==============================] - 8s 18ms/step - loss: -2612614660096.0000 - accuracy: 0.1124 - val_loss: -2813191520256.0000 - val_accuracy: 0.1135 Epoch 18/20 469/469 [==============================] - 8s 18ms/step - loss: -2914698395648.0000 - accuracy: 0.1124 - val_loss: -3127708745728.0000 - val_accuracy: 0.1135 Epoch 19/20 469/469 [==============================] - 9s 18ms/step - loss: -3229450240000.0000 - accuracy: 0.1124 - val_loss: -3455992463360.0000 - val_accuracy: 0.1135 Epoch 20/20 469/469 [==============================] - 9s 19ms/step - loss: -3558048268288.0000 - accuracy: 0.1124 - val_loss: -3797768994816.0000 - val_accuracy: 0.1135 </code></pre>
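<p>As a side note, the negative, runaway loss in that run is expected: <code>binary_crossentropy</code> with a single sigmoid output is being fed MNIST's integer labels 0-9. A minimal sketch of an output layer and loss that match the ten classes (plus input scaling, which usually helps):</p> <pre><code>x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

CNNmodel.add(Dense(10, activation='softmax'))  # one unit per digit class instead of Dense(1, 'sigmoid')
CNNmodel.compile(loss='sparse_categorical_crossentropy',  # accepts integer labels 0-9
                 optimizer=opt, metrics=['accuracy'])
</code></pre>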
python|tensorflow
0
374,901
70,042,492
Pandas: Multiple indices in a dataframe: drop some, keep others
<p>My data has the following structure:</p> <pre><code>&gt;&gt;&gt; df.head() value Date FIPS_state Date 2001-01-01 1 2001-03-31 6.4621 2 2001-03-31 11.3259 4 2001-03-31 6.3467 5 2001-03-31 6.0613 6 2001-03-31 7.5069 </code></pre> <p>[I'd like to post this dataframe here for convenience, but I can't even figure that out now. However see <code>data</code> and the steps outlined further down to recreate it.]</p> <p>The desired output is:</p> <pre><code>&gt;&gt;&gt; df.head() FIPS_state Date value 0 1 2001-03-31 6.4621 1 2 2001-03-31 11.3259 2 4 2001-03-31 6.3467 3 5 2001-03-31 6.0613 4 6 2001-03-31 7.5069 </code></pre> <p>where I want to drop the first <code>Date</code> index but keep the second <code>Date</code> index, and also have the <code>FIPS_state</code> index as a variable.</p> <p>Maybe I shouldn't be here in the first place. The <code>Date</code> index was created while running the following:</p> <pre><code>import pandas from pandas import Timestamp data = pandas.DataFrame.from_dict({'FIPS_state': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}, 'FIPS_county': {0: 3, 1: 3, 2: 3, 3: 3, 4: 3}, 'value': {0: 3.1, 1: 3.4, 2: 3.9, 3: 5.9, 4: 6.4}, 'Date': {0: Timestamp('2020-12-01 00:00:00'), 1: Timestamp('2020-11-01 00:00:00'), 2: Timestamp('2020-10-01 00:00:00'), 3: Timestamp('2020-09-01 00:00:00'), 4: Timestamp('2020-08-01 00:00:00')}, 'Month/Year': {0: '12/2020', 1: '11/2020', 2: '10/2020', 3: '9/2020', 4: '8/2020'}}) df = data.set_index('Date').groupby(['Date','FIPS_state']).resample('Q')['value'].mean().to_frame() &gt;&gt;&gt; df.head() # FIPS_state FIPS_county value Date Month/Year # 0 1 3 3.1000 2020-12-01 12/2020 # 1 1 3 3.4000 2020-11-01 11/2020 # 2 1 3 3.9000 2020-10-01 10/2020 # 3 1 3 5.9000 2020-09-01 9/2020 # 4 1 3 6.4000 2020-08-01 8/2020 </code></pre> <p><strong>EDIT:</strong> This is not even doing the correct calculation, is it? Oh my... Anyways, my question about the index has been answered below by @user17242583, thanks!</p>
<p>You can do it by removing the first <code>Date</code> level from the index (or any <code>Date</code> level - there just shouldn't be duplicate column names once the index is reset):</p> <pre class="lang-py prettyprint-override"><code>df.index = df.index.droplevel(0) </code></pre> <p>Then reset the index:</p> <pre class="lang-py prettyprint-override"><code>df = df.reset_index() </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df FIPS_state Date value 0 1 2001-03-31 6.4621 1 2 2001-03-31 11.3259 2 4 2001-03-31 6.3467 3 5 2001-03-31 6.0613 4 6 2001-03-31 7.5069 </code></pre>
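<p>If you prefer a single step, <code>reset_index</code> can drop one level and promote the remaining levels to columns in the same call; a minimal sketch of the same idea:</p> <pre class="lang-py prettyprint-override"><code>df = df.reset_index(level=0, drop=True).reset_index()
</code></pre>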
python|pandas|pandas-groupby|multi-index
1
374,902
70,064,499
How does Python interpret a half 'empty' tuple (x,)?
<p>I came across this code fragment and can't understand what is happening here.</p> <pre><code>X = np.linspace(-5,5,50) Y = np.linspace(-5,5,50) X, Y = np.meshgrid(X,Y) pos = np.empty(X.shape+(2,)) </code></pre> <p>Why is <code>(2,)</code> necessary here and how does Python interpret half empty tuples like that in general?</p>
<p><code>(2,)</code> is the literal for a tuple with one element. You need that in</p> <pre><code>pos = np.empty(X.shape+(2,)) </code></pre> <p>as <code>X.shape</code> is a <code>tuple</code> and <code>+</code> denotes concatenating tuples in Python; in this particular example you are adding another dimension before calling <code>numpy.empty</code>.</p> <p><a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="nofollow noreferrer">Tuples and Sequences docs</a> put it the following way:</p> <blockquote> <p>A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective.</p> </blockquote>
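<p>A quick illustration with the shapes from the question's snippet:</p> <pre><code>&gt;&gt;&gt; X.shape
(50, 50)
&gt;&gt;&gt; X.shape + (2,)
(50, 50, 2)
&gt;&gt;&gt; np.empty(X.shape + (2,)).shape
(50, 50, 2)
</code></pre>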
python|numpy|tuples
2
374,903
70,223,893
Conv layer, No gradients provided for any variable
<p>I try to train mnist dataset but I got an error like this:</p> <pre><code> No gradients provided for any variable: ['module_wrapper/conv2d/kernel:0', 'module_wrapper/conv2d/bias:0', 'module_wrapper_2/conv2d_1/kernel:0', 'module_wrapper_2/conv2d_1/bias:0', 'module_wrapper_5/dense/kernel:0', 'module_wrapper_5/dense/bias:0', 'module_wrapper_6/dense_1/kernel:0', 'module_wrapper_6/dense_1/bias:0']. </code></pre> <p>My fit code:</p> <p><code> self.model.fit(x = self.datas.trainImages, y = self.datas.trainLabels, batch_size = self.datas.batch_size, epochs =self.datas.epochs)</code></p> <p>Here the variables:</p> <pre><code>self.datas.trainImages = numpy.stack([cv2.imread(image1)],[cv2.imread(image2), dtype = float64],[cv2.imread(image3)]) self.datas.trainLabels = numpy.stack([0,1,2], dtype = int32) </code></pre> <p>Also if I print model.summary(), and it is lenet model:</p> <pre><code>Model: &quot;sequential&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= module_wrapper (ModuleWrappe (None, 28, 28, 32) 320 _________________________________________________________________ module_wrapper_1 (ModuleWrap (None, 14, 14, 32) 0 _________________________________________________________________ module_wrapper_2 (ModuleWrap (None, 14, 14, 64) 18496 _________________________________________________________________ module_wrapper_3 (ModuleWrap (None, 7, 7, 64) 0 _________________________________________________________________ module_wrapper_4 (ModuleWrap (None, 3136) 0 _________________________________________________________________ module_wrapper_5 (ModuleWrap (None, 500) 1568500 _________________________________________________________________ module_wrapper_6 (ModuleWrap (None, 10) 5010 ================================================================= Total params: 1,592,326 Trainable params: 1,592,326 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>There are no layers named Conv2D but I added them,</p> <pre><code> model.add(layers.Conv2D(filters=32,kernel_size=3,strides=1,activation='relu',padding='same')) model.add(layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(layers.Conv2D(filters=64,kernel_size=3,strides=1,activation='relu',padding='same')) model.add(layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(layers.Flatten()) model.add(layers.Dense(500)) model.add(layers.Dense(self.datas.classCount,activation='softmax')) </code></pre> <p>When I research about the problem, google and stackoverflow says add labels into fit function but I already added them.</p> <p><strong>UPDATE 1</strong></p> <p>you can try this codes to run it :</p> <pre><code>import tensorflow from tensorflow.keras import layers from tensorflow.keras.models import Sequential import tensorflow.keras.losses parameters = parameters datas = datas model = Sequential() optimizer = tf.keras.optimizers.SGD() loss = tensorflow.keras.losses.CategoricalCrossentropy(name = 'CategoricalCrossentropy', from_logits = True) metrics = tensorflow.keras.metrics.CategoricalAccuracy(name = 'CategoricalAccuracy') model.add(layers.Conv2D(filters=32,kernel_size=3,strides=1,activation='relu',padding='same')) model.add(layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(layers.Conv2D(filters=64,kernel_size=3,strides=1,activation='relu',padding='same')) model.add(layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(layers.Flatten()) 
model.add(layers.Dense(500)) model.add(layers.Dense(self.datas.classCount,activation='softmax')) trainImages = numpy.stack([[cv2.imread(image1)],[cv2.imread(image2)],[cv2.imread(image3)]], dtype = float64) #All images is belong to mnist dataset. I read them from a folder and append to list, then convert the dataset list into numpy.stack trainLabels = numpy.stack([0,1,2], dtype = int32) model.compile(loss = loss, optimizer = optimizer, metrics = metrics) model.fit(x = trainImages, y = trainLabels, batch_size = 2, epochs =1) </code></pre>
<p>Here is a working example based on your code but with the MNIST dataset, and just in case you didn't know, you should either use a softmax function on your output layer or set the <code>from_logits</code> parameter of your loss function to <code>True</code>, but <strong>not</strong> both.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() model = tf.keras.Sequential() optimizer = tf.keras.optimizers.SGD() loss = tf.keras.losses.CategoricalCrossentropy(name = 'CategoricalCrossentropy') metrics = tf.keras.metrics.CategoricalAccuracy(name = 'CategoricalAccuracy') model.add(tf.keras.layers.Conv2D(filters=32,kernel_size=3,strides=1,activation='relu',padding='same', input_shape=(28, 28, 1))) model.add(tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(tf.keras.layers.Conv2D(filters=64,kernel_size=3,strides=1,activation='relu',padding='same')) model.add(tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(500, activation='relu')) model.add(tf.keras.layers.Dense(10, activation='softmax')) y_train = tf.keras.utils.to_categorical(y_train, num_classes=10) print(model.summary()) model.compile(loss = loss, optimizer = optimizer, metrics = metrics) model.fit(x_train, y_train, batch_size = 64, epochs = 1) </code></pre> <pre><code>Model: &quot;sequential_3&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_6 (Conv2D) (None, 28, 28, 32) 320 max_pooling2d_6 (MaxPooling (None, 14, 14, 32) 0 2D) conv2d_7 (Conv2D) (None, 14, 14, 64) 18496 max_pooling2d_7 (MaxPooling (None, 7, 7, 64) 0 2D) flatten_3 (Flatten) (None, 3136) 0 dense_6 (Dense) (None, 500) 1568500 dense_7 (Dense) (None, 10) 5010 ================================================================= Total params: 1,592,326 Trainable params: 1,592,326 Non-trainable params: 0 _________________________________________________________________ None 938/938 [==============================] - 8s 8ms/step - loss: 24.1966 - CategoricalAccuracy: 0.1079 &lt;keras.callbacks.History at 0x7f831b6d0a50&gt; </code></pre>
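<p>One caveat worth noting: <code>Conv2D</code> with <code>input_shape=(28, 28, 1)</code> expects a trailing channel axis, so if your images load as <code>(N, 28, 28)</code> you may need to add one, and scaling pixel values to [0, 1] usually helps convergence. A small sketch:</p> <pre class="lang-py prettyprint-override"><code>x_train = x_train[..., None].astype('float32') / 255.0  # (N, 28, 28) -&gt; (N, 28, 28, 1)
</code></pre>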
python|tensorflow|keras|deep-learning|tensorflow2.0
1
374,904
56,151,618
conditional forward fill within groupby
<p>I have a data frame for patients and their visits to the clinic. Patients may take a drug at some visits, and only the initial dose is recorded, or when the dose is changed. If the dose doesn't change at the next visit, what's recorded is "drug ongoing? Yes. Dose changed? No". What I need to get is the exact dose for each visit. </p> <p>I tried forward fill with groupby (groupby <code>patient_id</code>), but I'm stuck at how to insert the condition that only fill missing when drug is ongoing and dose is not changed.</p> <pre><code>df = pd.DataFrame({'patient_id': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'], \ 'visit_number':[1, 2, 3, 2, 3, 4, 10, 11, 12], \ 'drug_ongoing':[np.nan, 1, 1, np.nan, 0, 1, 1, 1, 0], \ 'drug_dose_changed':[0, 0, 0, 0, np.nan,0, 0, 1, np.nan], \ 'dose':[40, np.nan, np.nan, 60, np.nan, 70, 80, np.nan, np.nan]}) </code></pre> <p>I tried:</p> <pre><code>df['dose_filled'] = df.groupby('patient_id')['dose'].ffill() </code></pre> <p>But in this way, all the missing is filled.</p> <p>The desired new column <code>'dose_filled'</code> is <code>[40, 40, 40, 60, np.nan, 70, 80, np.nan, np.nan]</code></p>
<p>In your case, filter before <code>ffill</code>:</p> <pre><code>s=df.loc[(df['drug_ongoing'].eq(1)&amp;df['drug_dose_changed'].eq(0))|df.visit_number.eq(df.groupby('patient_id').visit_number.transform('first'))].groupby('patient_id').dose.ffill() df.dose.fillna(s,inplace=True) df Out[38]: patient_id visit_number drug_ongoing drug_dose_changed dose 0 a 1 NaN 0.0 40.0 1 a 2 1.0 0.0 40.0 2 a 3 1.0 0.0 40.0 3 b 2 NaN 0.0 60.0 4 b 3 0.0 NaN NaN 5 b 4 1.0 0.0 70.0 6 c 10 1.0 0.0 80.0 7 c 11 1.0 1.0 NaN 8 c 12 0.0 NaN NaN </code></pre>
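<p>If the chained condition is hard to read, here is an equivalent sketch built on <code>Series.where</code>: keep rows that either already carry a dose or are eligible to inherit one, forward-fill within each patient, then blank the ineligible rows again. It produces the desired <code>dose_filled</code> column from the question:</p> <pre><code>keep = (df['drug_ongoing'].eq(1) &amp; df['drug_dose_changed'].eq(0)) | df['dose'].notna()
df['dose_filled'] = df['dose'].where(keep).groupby(df['patient_id']).ffill().where(keep)
</code></pre>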
python|pandas|dataframe|pandas-groupby|missing-data
4
374,905
56,329,451
Use pandas to plot highest correlations
<p>I have been plotting correlations via heatmaps with the following code. However, there are too many variables. Is it possible to plot the highest correlations ( over .5 and -.5) on a graph?</p> <pre><code>plt.rcParams['figure.figsize'] = [80,80] corr3 = datasetcm.corr() fig = plt.figure() ax = fig.add_subplot(111) cax = ax.matshow(corr3,cmap='coolwarm', vmin=-1, vmax=1) fig.colorbar(cax) ticks = np.arange(0,len(datasetcm.columns),1) ax.set_xticks(ticks) plt.xticks(rotation=90) ax.set_yticks(ticks) ax.set_xticklabels(datasetcm.columns) ax.set_yticklabels(datasetcm.columns) plt.show() </code></pre>
<p>Filter your correlation matrix on the treshold of 0.5 before plotting. This will return <code>0</code> for the correlations lower than <code>0.5</code>.</p> <p>Then we can use color mapping to show the rows with 0 as <code>not correlated</code> </p> <pre><code>corr3 = datasetcm.corr() corr3 = corr3 [corr3 &gt; 0.5].fillna(0) corr3.style.background_gradient(cmap='coolwarm', axis=None).set_precision(2) </code></pre> <p><a href="https://i.stack.imgur.com/S7uPK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7uPK.png" alt="enter image description here"></a></p>
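<p>Note the question also asked for strong negative correlations (below -.5); filtering on the absolute value keeps both tails. A small variation on the above:</p> <pre><code>corr3 = datasetcm.corr()
corr3 = corr3[corr3.abs() &gt; 0.5].fillna(0)
corr3.style.background_gradient(cmap='coolwarm', axis=None).set_precision(2)
</code></pre>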
pandas|correlation
1
374,906
56,202,832
I assign list of values to a new column and getting the warning SettingWithCopyWarning:
<p>I have a data frame that has the month and date for 2015. I calculate the Year to Date value into a list. I assign this list to a new column in the data frame but get a warning SettingWithCopyWarning. How do I get around it and some explanation why is this happening. Thanking you all in advance.</p> <pre><code>print(dfabovemax.head()) print(dfabovemax.tail()) MaxTemp Data_Value Mon-Date 01-02 114 113 01-10 142 126 04-10 213 203 04-15 246 228 05-03 203 195 MaxTemp Data_Value Mon-Date 01-02 114 113 01-10 142 126 04-10 213 203 04-15 246 228 05-03 203 195 fmt = '%Y-%m-%d' ytodt2 = [] for i in dfabovemax.index: s='2005-{}'.format(i) dt = datetime.datetime.strptime(s, fmt) tt = dt.timetuple() ytodt2.append(tt.tm_yday) dfabovemax['YtoDt'] = list(ytodt2) </code></pre> <p>And I get a warning </p> <p>/opt/conda/lib/python3.6/site-packages/ipykernel/<strong>main</strong>.py:14: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead</p> <p>See the caveats in the documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a></p> <pre><code>print(dfabovemax.head()) print(dfabovemax.tail()) MaxTemp Data_Value YtoDt Mon-Date 01-02 114 113 2 01-10 142 126 10 04-10 213 203 100 04-15 246 228 105 05-03 203 195 123 MaxTemp Data_Value YtoDt Mon-Date 12-25 140 135 359 12-26 152 130 360 12-27 138 118 361 12-28 134 124 362 </code></pre> <p>12-30 134 128 364</p>
<p>You often get this warning when trying to set values on a copy of a dataframe. The only place in your code where this could happen is</p> <pre><code>dfabovemax['YtoDt'] = list(ytodt2) </code></pre> <p>This means that in all likelihood, <code>dfabovemax</code> is the result of slicing or filtering another dataframe. In Python, <code>dfabovemax</code> may then still be referencing the data of the original dataframe it was made from. To rectify this, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.copy.html" rel="nofollow noreferrer">copy()</a> when you create <code>dfabovemax</code>.</p>
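<p>For example (the filter condition here is hypothetical, since the code that created <code>dfabovemax</code> isn't shown):</p> <pre><code># replace the condition with whatever slice actually produced dfabovemax
dfabovemax = df[df['Data_Value'] &gt; threshold].copy()
dfabovemax['YtoDt'] = list(ytodt2)  # safe now: dfabovemax owns its data
</code></pre>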
python|pandas
0
374,907
56,071,559
Pandas conditional map/fill/replace
<pre><code>d1=pd.DataFrame({'x':['a','b','c','c'],'y':[-1,-2,-3,0]}) d2=pd.DataFrame({'x':['d','c','a','b'],'y':[0.1,0.2,0.3,0.4]}) </code></pre> <p>I want to replace <code>d1.y</code> where y&lt;0 with the correspondent <code>y</code> in d2. It's something like vlookup in Excel. The core problem is replace y according to x rather than just simply manipulate y. What I want is</p> <pre><code>Out[40]: x y 0 a 0.3 1 b 0.4 2 c 0.2 3 c 0.0 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> with condition:</p> <pre><code>s = d2.set_index('x')['y'] d1.loc[d1.y &lt; 0, 'y'] = d1['x'].map(s) print (d1) x y 0 a 0.3 1 b 0.4 2 c 0.2 3 c 0.0 </code></pre>
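<p>An equivalent one-liner with <code>Series.mask</code>, if you prefer to avoid the explicit <code>.loc</code> assignment:</p> <pre><code>d1['y'] = d1['y'].mask(d1['y'] &lt; 0, d1['x'].map(s))
</code></pre>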
python|pandas
3
374,908
56,436,632
Improve performance of rewriting timestamps in parquet files
<p>Due to some limitations of the consumer of my data, I need to "rewrite" some parquet files to convert timestamps that are in nanosecond precision to timestamps that are in millisecond precision.</p> <p>I have implemented this and it works but I am not completely satisfied with it.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_parquet( f's3://{bucket}/{key}', engine='pyarrow') for col_name in df.columns: if df[col_name].dtype == 'datetime64[ns]': df[col_name] = df[col_name].values.astype('datetime64[ms]') df.to_parquet(f's3://{outputBucket}/{outputPrefix}{additionalSuffix}', engine='pyarrow', index=False) </code></pre> <p>I'm currently running this job in lambda for each file but I can see this may be expensive and may not always work if the job takes longer than 15 minutes as that is the maximum time Lambda's can run. </p> <p>The files can be on the larger side (>500 MB).</p> <p>Any ideas or other methods I could consider? I am unable to use pyspark as my dataset has unsigned integers in it.</p>
<p>You could try rewriting all columns at once. Maybe this would reduce some memory copies in pandas, thus speeding up the process if you have many columns:</p> <pre><code>df_datetimes = df.select_dtypes(include="datetime64[ns]") df[df_datetimes.columns] = df_datetimes.astype("datetime64[ms]") </code></pre>
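<p>Depending on your constraints, you could also skip the pandas round-trip entirely: pyarrow's parquet writer can coerce timestamps at write time. A sketch, assuming pyarrow can resolve the same S3 paths:</p> <pre><code>import pyarrow.parquet as pq

table = pq.read_table(f's3://{bucket}/{key}')
pq.write_table(table,
               f's3://{outputBucket}/{outputPrefix}{additionalSuffix}',
               coerce_timestamps='ms',
               allow_truncated_timestamps=True)  # drop sub-millisecond digits without raising
</code></pre>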
python|pandas|amazon-s3|parquet|pyarrow
1
374,909
56,175,937
Write Multiple Dynamic DataFrames to Excel Workbook
<p>I am looking for help in filtering different dataframes to export to worksheets. Here is a sample dataframe.</p> <pre><code>import pandas as pd import numpy as np np.random.seed(1111) df = pd.DataFrame({ 'Category':np.random.choice( ['Group A','Group B','Group C','Group D'], 10000), 'Sub-Category':np.random.choice( ['X','Y','Z'], 10000), 'Sub-Category-2':np.random.choice( ['G','F','I'], 10000), 'Product':np.random.choice( ['Product 1','Product 2','Product 3'], 10000), 'Units_Sold':np.random.randint(1,100, size=(10000)), 'Dollars_Sold':np.random.randint(100,1000, size=10000), 'Customer':np.random.choice(pd.util.testing.rands_array(10,25,dtype='str'),10000), 'Date':np.random.choice( pd.date_range('1/1/2016','12/31/2018', freq='D'), 10000)}) </code></pre> <p>Here are different dataframes I'd like to export into Excel workbooks:</p> <pre><code>df1 = df.groupby(['Category','Sub-Category-2','Product']).agg({'Units_Sold':'sum'}) df2 = df.groupby(['Category','Product',pd.Grouper(key='Date',freq='A-APR')]).agg({'Dollars_Sold':'sum'}) df3 = df.groupby(['Category','Product','Sub-Category']).agg({'Units_Sold':'sum','Dollars_Sold':'sum'}) </code></pre> <p>For each 'Category', I'd like to create a separate Excel workbook with each dataframe in it filtered to show only that specific 'Category'. For example, workbook 'Group A' would have df1, df2, &amp; df3 as separate worksheets in it with the dataframe showing only the values where 'Category' = 'Group A'. Workbook 'Group B' would have the same info, just filtered where 'Category' = 'Group B'.</p> <p>I know how to do this manually by using .loc, but this seems very slow. My question is how do I do this in a pythonic way? The example data is not large, but my real-world data has 30+ categories in 'Category'. Is there a way to create a function to slice appropriately &amp; kick out dataframes after filtering? </p>
<p>How about just running</p> <pre><code>for c in df.Category.unique(): with pd.ExcelWriter(f"/Users/constantino/Desktop/{c}.xlsx") as writer: for i, d in enumerate([df1, df2, df3]): d.loc[c].to_excel(writer, sheet_name=f"df{i+1}") </code></pre>
python|excel|pandas
0
374,910
56,359,069
Reorder only a part of a pandas DataFrame
<p><strong>Context</strong></p> <p>I have a pandas-DataFrame with a structure analogous to something like the table on the left:</p> <pre><code> + Category + Content + Layer + Category + Content + Layer Index | | | Index | | | ---------------------------------- ---------------------------------- 000001| "A" | "Dummy" | 0 -&gt; 000001| "A" | "Dummy" | 0 ---------------------------------- ---------------------------------- 000002| "A" | "Dummy" | 1 -&gt; 000003| "A" | "Dummy" | 0 ---------------------------------- ---------------------------------- 000003| "A" | "Dummy" | 0 -&gt; 000002| "A" | "Dummy" | 1 ---------------------------------- ---------------------------------- 000004| "A" | "Dummy" | 1 -&gt; 000004| "A" | "Dummy" | 1 ---------------------------------- ---------------------------------- 000005| "B" | "Dummy" | 2 = 000005| "B" | "Dummy" | 2 ---------------------------------- ---------------------------------- 000006| "B" | "Dummy" | 0 = 000006| "B" | "Dummy" | 0 ---------------------------------- ---------------------------------- 000007| "B" | "Dummy" | 4 = 000007| "B" | "Dummy" | 4 ---------------------------------- ---------------------------------- </code></pre> <p>What I want to achieve is to reorder the dataframe like on the right.</p> <p><strong>Question</strong></p> <p>As shown in the right table, only a part of the dataframe is supposed to be reordered - Only elements of <code>category == "A"</code> shall be ordered in ascending manner of their <code>layer</code>. All elements of <code>category == "B"</code>have to stay as they are (which is my current problem when working with <code>dataframe.sort_values()</code> etc.).</p> <p><strong><em>How can I reorder (resort) only the specified part of the dataframe without affecting the rest?</em></strong></p>
<p>You can do this in two steps:</p> <ol> <li>Filter rows by condition, for example by creating a boolean <code>mask</code></li> <li>Directly address the underlying numpy-arrays via <code>.loc</code> (in order to prevent the alignment of index values)</li> </ol> <blockquote> <p><code>.loc</code>: Access a group of rows and columns by label(s) or a boolean array. (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">Link to pandas-Documentation</a>)</p> </blockquote> <pre><code> #Boolean mask of the entire dataframe in order to identify relevant rows mask = df['Category'].eq('A') #Analog to mask = (df["Category"] == 'A') #pandas &gt;= 0.24 df.loc[mask] = df.loc[mask].sort_values('Layer').to_numpy() #pandas &lt; 0.24 df.loc[mask] = df.loc[mask].sort_values('Layer').values #Result print (df) Category Content Layer Index 000001 A Dummy 0 000002 A Dummy 0 000003 A Dummy 1 000004 A Dummy 1 000005 B Dummy 2 000006 B Dummy 0 000007 B Dummy 4 </code></pre>
python|pandas|sorting|dataframe
4
374,911
56,193,488
Python3 Panda's Holiday fails to NOT find dates in arbitrary periods in the past
<p>Made my own definition of MLK Day Holiday that adheres not to when the holiday was first observed, but by when it was first observed by the NYSE. The NYSE first observed MLK day in January of 1998.</p> <p>When asking the Holiday for the days in which the holiday occurred between dates, it works fine for the most part, returning an empty set when the MLK date is not in the range requested, and returning the appropriate date when it is. For date ranges that precede the <code>start_date</code> of the holiday, it appropriately returns the empty set, until we hit around 1995, and then it fails. I cannot figure out why it fails then and not in other situations when the empty set is the correct answer.</p> <p>Note: Still stuck on Pandas 0.22.0. Python3</p> <pre><code>import pandas as pd from datetime import datetime from dateutil.relativedelta import MO from pandas.tseries.holiday import Holiday __author__ = 'eb' mlk_rule = Holiday('MLK Day (NYSE Observed)', start_date=datetime(1998, 1, 1), month=1, day=1, offset=pd.DateOffset(weekday=MO(3))) start = pd.to_datetime('1999-01-17') end = pd.to_datetime('1999-05-01') finish = pd.to_datetime('1980-01-01') while start &gt; finish: print(f"{start} - {end}:") try: dates = mlk_rule.dates(start, end, return_name=True) except Exception as e: print("\t****** Fail *******") print(f"\t{e}") break print(f"\t{dates}") start = start - pd.DateOffset(years=1) end = end - pd.DateOffset(years=1) </code></pre> <p>When run, this results in:</p> <pre><code>1999-01-17 00:00:00 - 1999-05-01 00:00:00: 1999-01-18 MLK Day (NYSE Observed) Freq: 52W-MON, dtype: object 1998-01-17 00:00:00 - 1998-05-01 00:00:00: 1998-01-19 MLK Day (NYSE Observed) Freq: 52W-MON, dtype: object 1997-01-17 00:00:00 - 1997-05-01 00:00:00: Series([], dtype: object) 1996-01-17 00:00:00 - 1996-05-01 00:00:00: Series([], dtype: object) 1995-01-17 00:00:00 - 1995-05-01 00:00:00: ****** Fail ******* Must provide freq argument if no data is supplied </code></pre> <p>What happens in 1995 that causes it to fail, that does not happen in the same periods in the years before?</p>
<p>ANSWER: Inside of the <code>Holiday</code> class, the <code>dates()</code> method is used to gather the list of valid holidays within a requested date range. In order to insure that this occurs properly, the implementation gathers all holidays from one year before to one year after the requested date range via the internal <code>_reference_dates()</code> method. In this method, if the receiving <code>Holiday</code> instance has an internal start or end date, it uses that date as the begin or end of the range to be examined rather than the passed in requested range, even if the dates in the requested range precede or exceed the start or end date of the rule.</p> <p>The existing implementation mistakenly assumes it is ok to limit the effective range over which it must accurately identify what holidays are in existence to the range over which holidays exist. As part of a set of rules in a calendar, it is as important for a <code>Holiday</code> to identify where holidays do not exist as where they do. The NULL set response is an important function of the <code>Holiday</code> class.</p> <p>For example, in a <em>Trading Day Calendar</em> that needs to identify when financial markets are open or closed, the calendar may need to accurately identify which days the market is closed over a 100 year history. The market only closed for MLK day for a small part of that history. A calendar that includes the MLK holiday as constructed above throws an error when asked for the open days or holidays for periods preceding the MLK <code>start_date</code>[1].</p> <p>To fix this, I re-implemented the <code>_reference_dates()</code> method in a custom sub-class of Holiday to insure that when the requested date range extends before the <code>start_date</code> or after the <code>end_date</code> of the holiday rule, it uses the actual requested range to build the reference dates from, rather than bound the request by the internal start and end dates.</p> <p>Here is the implementation I am using.</p> <pre><code>class MLKHoliday(Holiday): def __init__(self): super().__init__('MLK Day (NYSE Observed)', start_date=datetime(1998, 1, 1), month=1, day=1, offset=pd.DateOffset(weekday=MO(3))) def _reference_dates(self, start_date, end_date): """ Get reference dates for the holiday. Return reference dates for the holiday also returning the year prior to the start_date and year following the end_date. This ensures that any offsets to be applied will yield the holidays within the passed in dates. """ if self.start_date and start_date and start_date &gt;= self.start_date: start_date = self.start_date.tz_localize(start_date.tz) if self.end_date and end_date and end_date &lt;= self.end_date: end_date = self.end_date.tz_localize(end_date.tz) year_offset = pd.DateOffset(years=1) reference_start_date = pd.Timestamp( datetime(start_date.year - 1, self.month, self.day)) reference_end_date = pd.Timestamp( datetime(end_date.year + 1, self.month, self.day)) # Don't process unnecessary holidays dates = pd.DatetimeIndex(start=reference_start_date, end=reference_end_date, freq=year_offset, tz=start_date.tz) return dates </code></pre> <p>Does anyone know if this has been fixed in a more up-to-date version of pandas?</p> <p>[1] Note: As constructed in the original question, the <code>mlk_rule</code> will not actually fail to provide the NULL set to the <code>dates()</code> call over a range just preceding the <code>start_date</code> but will actually start throwing exceptions a year or so before that. 
This is because the mistaken assumption about the lack of need for a proper NULL set response is mitigated by the extension of the date range by a year in each direction by <code>_reference_dates()</code>.</p>
python|pandas|calendar|timestamp
0
374,912
56,167,463
Changing the optimizer in the tensorflow object detection
<p>How to change the optimiser for the configuration</p> <p>for example the following is a confgi for ssd_coco_mobilenetv2</p> <pre><code>train_config: { batch_size: 4 optimizer { rms_prop_optimizer: { learning_rate: { exponential_decay_learning_rate { initial_learning_rate: 0.0001 decay_steps: 800720 decay_factor: 0.95 } } momentum_optimizer_value: 0.9 decay: 0.9 epsilon: 1.0 } } } </code></pre>
<p>Here is the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/protos/optimizer.proto" rel="noreferrer">proto</a> file that corresponds to the optimizer. According to the proto file, you can choose among three different optimizers, e.g. </p> <ol> <li><p>rms_prop_optimizer </p></li> <li><p>momentum_optimizer </p></li> <li><p>adam_optimizer </p></li> </ol> <p>And then for each optimizer, you can configure the learning rate as one of the following</p> <ol> <li>constant_learning_rate </li> <li>exponential_decay_learning_rate </li> <li>manual_step_learning_rate </li> <li>cosine_decay_learning_rate </li> </ol> <p>And then for each learning rate, you can configure what the values are, the default values are also provided by the proto file.</p>
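<p>As an illustration, switching the config above to Adam with a constant learning rate might look roughly like this (the rate value is just a placeholder):</p> <pre><code>train_config: {
  batch_size: 4
  optimizer {
    adam_optimizer: {
      learning_rate: {
        constant_learning_rate {
          learning_rate: 0.0001
        }
      }
    }
  }
}
</code></pre>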
python|tensorflow|object-detection|object-detection-api
5
374,913
56,148,611
When converting .dat into csv my code change the output values
<p>I have a .dat file and I am trying to convert it into a csv one. I have found a piece of code that "somehow" solved my problem. The thing is: such code gave me a messed up output file as a result. In other words: it changed my values!!!!</p> <p>Someone can help me with that? I am a total beginner at this.</p> <p>Thanks a lot.</p> <pre><code>with open('f.dat') as input_file: lines = input_file.readlines() newLines = [] for line in lines: newLines.append(newLine) with open('f_out.csv','w') as output_file: file_writer = csv.writer(output_file) file_writer.writerows(newLines) </code></pre> <p>My input file looks like this:</p> <pre><code>"-18.7723311308 3166157043.25795 0 1006743187.3562 -18.8214122765 188717303.231381 0 57141624.5127759 -18.7022205742 399933910.540253 0 87142384.8698447 -18.5903166748 23045528.3797531 0 5841919.83133624 -18.3051499783 76457482.0309581 0 25326122.2381197" (with more lines) </code></pre> <p>And the output file like this:</p> <pre><code>-21.5607314306,1200000000.0,0,500000000.0,MBH -21.5607314306,1200000000.0,0,500000000.0,MBH -21.5607314306,1200000000.0,0,500000000.0,MBH </code></pre> <p>What I simply want is an output file where my columns are separated by a comma, like: </p> <pre><code>"-18.7723311308, 3166157043.25795, 0, 1006743187.3562 -18.8214122765, 188717303.231381, 0 ,57141624.5127759" </code></pre>
<p>.dat files are not easily readable with plain file I/O operations. You can use the asammdf module to read the .dat file; install it with <code>pip install asammdf</code>.</p>
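<p>That said, if the .dat file is really just whitespace-separated numbers like the sample in the question, pandas can do the conversion directly; a minimal sketch:</p> <pre><code>import pandas as pd

df = pd.read_csv('f.dat', sep=r'\s+', header=None)
df.to_csv('f_out.csv', index=False, header=False)
</code></pre>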
python|pandas
0
374,914
56,351,867
How to find mean of quantitative variable from categorical variable in a dataframe?
<p>Let's say I have the following data frame in pandas:</p> <pre><code>data = {'State':['CA', 'CA', 'CA', 'CA', 'NY', 'NY', 'TX'], 'Cost':[20, 30, 40, 50, 60, 70, 70]} test = pd.DataFrame(data) print(test.head(7)) </code></pre> <p>which would be the following</p> <pre><code> State Cost 0 CA 20 1 CA 30 2 CA 40 3 CA 50 4 NY 60 5 NY 70 6 TX 70 </code></pre> <p>In this scenario, the mean cost of California would be 35, the mean cost of new york would be 65, and the mean cost of texas would be 70.</p> <p>Here is my question: what would be the query in pandas in which we could find the mean cost of a state given that state?</p>
<p>Use <code>groupby</code> and <code>mean</code>:</p> <pre><code>print(test.groupby('State').mean()) </code></pre> <p>Which outputs:</p> <pre><code> Cost State CA 35 NY 65 TX 70 </code></pre> <p>If you want a cleaner <code>DataFrame</code>:</p> <pre><code>print(test.groupby('State', as_index=False).mean()) </code></pre> <p>Which gives:</p> <pre><code> State Cost 0 CA 35 1 NY 65 2 TX 70 </code></pre>
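<p>And if you only need the mean cost for one given state, filter first:</p> <pre><code>print(test.loc[test['State'] == 'CA', 'Cost'].mean())  # 35.0
</code></pre>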
python|pandas
1
374,915
56,344,556
Image segmentation with edgeTPU
<p>I´m new here, so please be kind and teach me if I did not provide all the information you need :)</p> <p>I would like to compare Edge TPU with other edge device such as Myriad. I would like to select one object detection model and one image segmentation model. Considering the following link which shows supported operations, I have noticed that yolov3 cannot be compiled for EdgeTPU because it includes LeakyRelu.</p> <p><a href="https://coral.withgoogle.com/docs/edgetpu/models-intro/" rel="nofollow noreferrer">https://coral.withgoogle.com/docs/edgetpu/models-intro/</a> </p> <p>For image segmentation, I'd like to use Deeplab. But I'm still don't know if operations included in deeplab v3+, such as atrous convolution or feature pyramid network, are supported.</p> <p>I'd appreciate if someone teach me what models are usable on edgeTPU. Are there any models of image segmentation?</p>
<p>Here you can find all supported layers for edgetpu: <a href="https://coral.ai/docs/edgetpu/models-intro/#supported-operations" rel="nofollow noreferrer">https://coral.ai/docs/edgetpu/models-intro/#supported-operations</a>.</p> <p>And for Conv2D it says &quot;Must use the same dilation in x and y dimensions.&quot;. So implementing a version of deeplab v3+ is possible for the edgetpu.</p>
python-3.x|tensorflow-lite
0
374,916
56,430,955
Keras multi-gpu: specifying explicit GPU ids
<p>From looking at the file <code>keras/utils/multi_gpu_utils.py</code> in tensorflow GitHub repository, I could see that given that you specified that you want to use <code>x</code> GPUs, it will automatically allocate the GPU IDs from <code>range(x)</code>, i.e., <code>0, 1, 2, ..., x - 1</code>.</p> <p>I need to use GPUs <code>4, 5, 6 ,7</code>, since the first 4 GPUs are already working on another task. Is there a way to specify it?</p>
<p>In python you can use </p> <pre><code>import os os.environ["CUDA_VISIBLE_DEVICES"]="0,1" </code></pre> <p>Or set <code>CUDA_VISIBLE_DEVICES=0,1</code> in bash before starting python script</p> <p>You can also refer to my answer <a href="https://stackoverflow.com/questions/40069883/how-to-set-specific-gpu-in-tensorflow">here</a> to automate this process.</p>
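<p>So for GPUs 4-7 from the question, something along these lines should work; note the environment variable must be set before TensorFlow initializes CUDA:</p> <pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"  # inside the process these become devices 0-3

import tensorflow as tf  # import only after setting the variable
# ... build model, then e.g. multi_gpu_model(model, gpus=4) will use GPUs 4-7
</code></pre>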
tensorflow|keras|multi-gpu
1
374,917
56,295,471
Merge 2 dataframes with similar time indexes
<p>I have 2 dataframes, <code>ts1</code> and <code>ts2</code>. The data structure looks like this:</p> <pre><code> Date Close 0 2004-08-05 0.0 1 2004-08-06 -155.0 2 2004-08-09 -140.0 3 2004-08-10 -2.0 4 2004-08-11 -24.0 </code></pre> <p>Both have a <code>Date</code> and <code>Close</code> column. It possible that some dates are in ts1 but not in ts2 (and vice versa).</p> <p>I would like to create a dataframe, <code>ts_merged</code>, that looks like this:</p> <pre><code> Date Close_TS1 Close_TS2 0 2004-08-05 0.0 1 1 2004-08-06 -155.0 133 2 2004-08-09 -140.0 4 3 2004-08-10 -2.0 2 4 2004-08-11 -24.0 2 </code></pre> <p>I'd like a dataframe with <strong>only</strong> the Dates that are present in both <code>ts1</code> and <code>ts2</code>.</p> <p>For the comparison I've tried <code>ts1.Date[ts1.Date == ts2.Date]</code>, and it doesnt work. For the merging, I've tried <code>.merge()</code> but it just merge everything in a unique Close column.</p> <p>How can I do this?</p>
<p>Pass how='inner' to the merge function. This will tell the merge function to do an inner join, which will only keep keys found in both DataFrames. (Inner is actually the default for <code>merge</code>, but being explicit makes the intent clear.)</p> <pre><code>ts_merged=ts1.merge( ts2, on='Date', how='inner', suffixes=('_TS1','_TS2') ) </code></pre>
python|pandas
1
374,918
56,273,406
how to get the desire rows by condition in DataFrame
<p>I have the DataFrame with index is the post_code and its value as medicines name and proportion. How can I just get 1 medicine name for each post_code alphabetically (some post_codes may have multiple 'bnf_name' with the same rate for the maximum. In this case, take the alphabetically first 'bnf_name')</p> <pre><code> post_code bnf dev TR1 3ER Senna_Tab 7.5mg 0.33 TR1 3ER Oxybutynin HCl_Tab 2.5mg 0.33 B26 1TH Betnesol_Ear/Eye/Nose Dps 0.1% 0.16 B26 1TH Amoxicillin_Cap 500mg 0.16 </code></pre> <p>Desired result:</p> <pre><code> post_code bnf dev TR1 3ER Oxybutynin HCl_Tab 2.5mg 0.33 B26 1TH Amoxicillin_Cap 500mg 0.16 </code></pre>
<p>You probably first want to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer">sort_values</a> by <em>both</em> index <code>post_code</code> and column <code>bnf</code> and then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer">drop_duplicates</a> while keeping the first occurrence:</p> <pre><code>df = df.sort_values(by=['post_code', 'bnf']) df = df.drop_duplicates(subset=['post_code'], keep='first') </code></pre>
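<p>If the <code>dev</code> values within a post_code can differ, you may want the maximum rate to win before ties are broken alphabetically; a sketch sorting on both:</p> <pre><code>df = df.sort_values(by=['post_code', 'dev', 'bnf'], ascending=[True, False, True])
df = df.drop_duplicates(subset=['post_code'], keep='first')
</code></pre>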
python-3.x|pandas
-1
374,919
56,361,514
Divide data into bins with inf Python
<p>I'm having a problem with qcut function in python. My upper bounds and lower bounds are -Inf and Inf, but when I apply qcut with these bounds, Python return this error "cannot convert float infinity to integer".</p> <p>My friends told me that I should change the Inf into 1e100 (a very large number represents ) so qcut could use. However, another error occur: "IndexError: only integers, slices (<code>:</code>), ellipsis (<code>...</code>), numpy.newaxis (<code>None</code>) and integer or boolean arrays are valid indices"</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>a1 = [-Inf, 26.6, 36.2, 38.7, 42.1, 47.2, 117.7] a2 = [-1e100, 26.6, 36.2, 38.7, 42.1, 47.2, 117.7] cut_range = [-Inf, 27.0, 33.0, 40.0, Inf] #For a1 cut_range = [-1e+100, 27.0, 33.0, 40.0, 1e+100] #For a2 b = pd.qcut(a, cut_range, duplicates = 'drop') </code></pre> <p>I want to have a final result like this:</p> <pre class="lang-py prettyprint-override"><code>b = ['[-Inf,27]','(33,40]','(33,40],'(40, Inf]','(40, Inf]','(40, Inf]'] or with 1e100: b = ['[-1e100,27]','(33,40]','(33,40],'(40, 1e100]','(40, 1e100]','(40, 1e100]'] </code></pre> <p>And someone could help me to explained how Inf works in Python and in R. They are both Infinite but how are they behave so different.</p> <p>In R I tried function with Inf and it worked:</p> <pre><code>as.character(cut(a1,cut_range, include.lowest = TRUE)) </code></pre>
<p>You actually need <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.cut.html" rel="nofollow noreferrer">pd.cut</a>. It's because you're binning/labeling your data based on ranges:</p> <pre><code>a1 = [-np.inf, 26.6, 36.2, 38.7, 42.1, 47.2, 117.7] cut_range = [-np.inf, 27.0, 33.0, 40.0, np.inf] pd.cut(a1, bins = cut_range, include_lowest=True) &gt;&gt; [(-inf, 27.0], (-inf, 27.0], (33.0, 40.0], (33.0, 40.0], (40.0, inf], (40.0, inf], (40.0, inf]] </code></pre> <p>Also note that qcut labels data based on <code>quantiles</code>, so if you have <code>[0, 0.25, 0.5, 0.75, 1]</code> as your <code>cut_range</code> then the data will be divided into 4 quantiles. The first quantile will belong to values from the minimum to the 25th percentile(0-0.25). When you add in -np.inf, there can't be a negative percentile value, and hence you got the error.</p>
python-3.x|pandas|intervals
3
374,920
56,194,723
Write pandas data to a CSV file if column sums are greater than a specified value
<p>I have a CSV file whose columns are frequency counts of words, and whose rows are time periods. I want to sum for each column the total frequencies. Then I want to write to a CSV file for sums greater than or equal to 30, the column and row values, thus dropping columns whose sums are less than 30.</p> <p>Just learning python and pandas. I know it is a simple question, but my knowledge is at that level. Your help is most appreciated.</p> <p>I can read in the CSV file and compute the column sums. </p> <pre><code>df = pd.read_csv('data.csv') </code></pre> <p><a href="https://i.stack.imgur.com/7mUtR.png" rel="nofollow noreferrer">Except of data file containing 3,874 columns and 100 rows</a></p> <pre><code>df.sum(axis = 0, skipna = True) </code></pre> <p><a href="https://i.stack.imgur.com/oZfvc.png" rel="nofollow noreferrer">Excerpt of sums for columns</a></p> <p>I am stuck on how to create the output file so that it looks like the original file but no longer has columns whose sums were less than 30.</p> <p>I am stuck on how to write to a CSV file each row for each column whose sums are greater than or equal to 30. The layout of the output file would be the same as for the input file. The sums would not be included in the output.</p> <p>Thanks very much for your help. </p> <p>So, here is a link showing an excerpt of a file containing 100 rows and 3,857 columns:</p>
<p>It's easiest to do this in two steps:</p> <p><strong>1. Filter the DataFrame to just the columns you want to save</strong></p> <pre><code>df_to_save = df.loc[:, (df.sum(axis=0, skipna=True) &gt;= 30)] </code></pre> <p><code>.loc</code> is for picking rows/columns based either on labels or conditions; the syntax is <code>.loc[rows, columns]</code>, so <code>:</code> means "take all the rows", and then the second part is the condition on our columns - I've taken the sum you'd given in your question and set it greater than or equal to 30.</p> <p><strong>2. Save the filtered DataFrame to CSV</strong></p> <pre><code>df_to_save.to_csv('path/to/write_file.csv', header=True, index=False) </code></pre> <p>Just put your filepath in as the first argument. <code>header=True</code> means the header labels from the table will be written back out to the file, and <code>index=False</code> means the numbered row labels Pandas automatically created when you read in the CSV won't be included in the export.</p> <hr> <p>See this answer here: <a href="https://stackoverflow.com/questions/31614804/how-to-delete-a-column-in-pandas-dataframe-based-on-a-condition">How to delete a column in pandas dataframe based on a condition?</a> . Note, the solution for your question doesn't need <code>isnull()</code> before the <code>sum()</code>, as that is specific to their question for counting <code>NaN</code> values.</p>
pandas
2
374,921
56,109,392
Pandas/Numpy shift rows into column based on existence
<p>I have a dataframe like so:</p> <pre><code>col_a | col b 0 1 0 2 0 3 1 1 1 2 </code></pre> <p>I want to convert it to:</p> <pre><code>col_a | 1 | 2 | 3 0 1 1 1 1 1 1 0 </code></pre> <p>Unfortunately, most questions/answers revolving around this topic simply pivot it</p> <p>Background: For Scikit, I want to use the existence of values in column b as an attribute/feature (like a sort of manual CountVectorizer, but for row values in this case instead of text)</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a> after setting the first column as the <code>index</code>, then take the <code>max</code> per index level to return only <code>1/0</code> values in the output:</p> <pre><code>df = pd.get_dummies(df.set_index('col_a')['col b'], prefix='', prefix_sep='').max(level=0) print (df) 1 2 3 col_a 0 1 1 1 1 1 1 0 </code></pre>
python|pandas|numpy|scikit-learn
3
374,922
56,407,739
Is there a way to save time by not calculating unnecessary sums?
<h2>The objective</h2> <p>Given a 2-dimensional array <code>A</code>, I have to keep adding +1 to the value of the first row in each column until the sums of the columns equal to the same value, for example 28.</p> <h2>My solution</h2> <p>It is probably not the best of solutions, but considering the point I'd like to make, it will do. This is meant to be a simplified example. In the original version it is based on a probability distribution whether the first or the second row gets the +1, and it differs among columns. Plus it has to be done one by one, for the probability distribution changes due to whether the first or the second row of a column got the +1 in the previous cycle. So summation of columns and iteration are necessary.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np A = np.arange(20).reshape(2, 10) print(A) MASK = A.sum(axis=0) &lt; 28 print(A.sum(axis=0) &lt; 28) while np.any(MASK): LUCKYROW = np.repeat(0, np.count_nonzero(MASK)) A[LUCKYROW, MASK] += 1 MASK = A.sum(axis=0) &lt; 28 print(A.sum(axis=0) &lt; 28) print(A) </code></pre> <p>Let's take a look at the output:</p> <pre><code>[[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19]] [ True True True True True True True True True False] [ True True True True True True True True True False] [ True True True True True True True True False False] [ True True True True True True True True False False] [ True True True True True True True False False False] [ True True True True True True True False False False] [ True True True True True True False False False False] [ True True True True True True False False False False] [ True True True True True False False False False False] [ True True True True True False False False False False] [ True True True True False False False False False False] [ True True True True False False False False False False] [ True True True False False False False False False False] [ True True True False False False False False False False] [ True True False False False False False False False False] [ True True False False False False False False False False] [ True False False False False False False False False False] [ True False False False False False False False False False] [False False False False False False False False False False] [[18 17 16 15 14 13 12 11 10 9] [10 11 12 13 14 15 16 17 18 19]] </code></pre> <p>Alright, it works, but why do I calculate the sum of each column in each cycle? Based on previous cycles I know which column's sum has already reached the target value. 
If I make use of this information, I can save time maybe.</p> <h2>My second solution</h2> <pre class="lang-py prettyprint-override"><code>import numpy as np A = np.arange(20).reshape(2, 10) print(A) MASK = A.sum(axis=0) &lt; 28 print(A.sum(axis=0) &lt; 28) while np.any(MASK): LUCKYROW = np.repeat(0, np.count_nonzero(MASK)) A[LUCKYROW, MASK] += 1 MASK[MASK] = A[:, MASK].sum(axis=0) &lt; 28 print(A[:, MASK].sum(axis=0) &lt; 28) print(A) </code></pre> <p>And the output:</p> <pre><code>[[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19]] [ True True True True True True True True True False] [ True True True True True True True True True] [ True True True True True True True True] [ True True True True True True True True] [ True True True True True True True] [ True True True True True True True] [ True True True True True True] [ True True True True True True] [ True True True True True] [ True True True True True] [ True True True True] [ True True True True] [ True True True] [ True True True] [ True True] [ True True] [ True] [ True] [] [[18 17 16 15 14 13 12 11 10 9] [10 11 12 13 14 15 16 17 18 19]] </code></pre> <p>It seems to work. Although one problem emerges. It is <strong>not</strong> faster than the first solution. I have tried with 25000 columns and 74998‬ as target value, but they are roughly equal timewise.</p> <h2>My request</h2> <p>I think I may have a fundamental misunderstanding of either ndarray operations or ndarray indexing. The second solution should do less and less calculation with each cycle, so I'd expect a significant performance improvement. I am unable to find explanation. Where is my train of thought faulty?</p>
<p>Since you're only changing the first row, you don't need to recalculate the sum of the columns on each iteration. In fact, since the only change is adding 1 to some elements on the first row you don't need to iterate at all.</p> <pre><code>A = np.arange(20).reshape(2, 10) s = A.sum(0) d = max(s) - s A[0] += d &gt;&gt;&gt; A array([[18, 17, 16, 15, 14, 13, 12, 11, 10, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]]) </code></pre> <p>This may not be possible with more complex calculations, but with sums it's an easy shortcut.</p> <p>There could be a few reasons your "faster" code doesn't actually run faster. First off, kudos for actually profiling the code. The first reason is that <code>A</code> is very small. Generally, <code>numpy</code> only gives a speed benefit with thousands or tens of thousands of elements in an array.</p> <p>Second, in the "faster" code the line </p> <pre><code>MASK[MASK] = A[:, MASK].sum(axis=0) &lt; 28 </code></pre> <p>creates a copy of all the rows in <code>A</code> indexed by <code>MASK</code>. This can be a fairly expensive operation, so summing the extra rows in the original version using <code>MASK = A.sum(axis=0) &lt; 28</code> may be quicker simply because it doesn't need that extra copy.</p>
python|python-3.x|performance|numpy|numpy-ndarray
2
374,923
56,185,015
When trying to slice a pandas dataframe it raises "ValueError('Lengths must match to compare')"
<p>I have a huge pandas dataframe called df with the columns "Features", "k", "r2". The last two columns contain numbers, and the first column contains strings of lists (e.g. "[Preop SC, Preop CC]").<br> I would like to slice the dataframe into smaller dataframes, one dataframe for every "Features"-"k" combination, using nested loops.</p> <p>Unfortunately it throws <code>ValueError: Lengths must match to compare</code>.</p> <p>I've tried different slicing methods to produce temp: <code>df[df["Features"]==feat]</code> and <code>df.iloc</code> too. When I print <code>features</code>, <code>["Preop SC","Preop CC"]</code> shows up instead of the quotation-mark-free version shown below. I've also tried removing the quotation marks by converting each entry to a string and using the .replace method, but to no avail. Nothing seems to help me slice with Features. (It works with k alone.)</p> <p>EDIT: Groupby doesn't seem to work either, though I'm a novice at that too. Here is the code:</p> <pre><code>import numpy as np
import pandas as pd

features=[['Preop SC', 'Preop CC'],
          ['Preop CC', 'Postoptag'],
          ['Preop CC', 'Pachy'],
          ['Preop CC', 'K2']]

df=[]
count=1
execute=1
while execute&lt;3:
    for i in features:
        r2=np.random.normal()
        df.append([i,count,r2])
        count+=1
    execute+=1
    count=1
df=pd.DataFrame(df)
df.columns=["Features","KNeighbors","r2 score"]

summary=[]
#Mean of results by feature-k combination
for feat in features:
    for k in range(1,5):
        temp=df.loc[(df["Features"]==feat)&amp;(df["KNeighbors"]==k),:]
        summary.append([feat,k,temp["r2 score"].mean()])
summary=pd.DataFrame(summary)
print(summary)
</code></pre> <p>This is what df looks like:</p> <pre><code>                Features  KNeighbors  r2 score
0   [Preop SC, Preop CC]           1  0.880299
1  [Preop CC, Postoptag]           2  0.681024
2      [Preop CC, Pachy]           3 -1.925969
3         [Preop CC, K2]           4  1.132059
4   [Preop SC, Preop CC]           1  0.397732
5  [Preop CC, Postoptag]           2 -0.969017
6      [Preop CC, Pachy]           3 -0.173293
7         [Preop CC, K2]           4  0.277422
</code></pre> <p>This is what summary should look like:</p> <pre><code>0   [Preop SC, Preop CC]           1   0.6390155
1  [Preop CC, Postoptag]           2  -0.1439965
2      [Preop CC, Pachy]           3  -1.049631
3         [Preop CC, K2]           4   0.7047405
</code></pre> <p>Any tips will be dearly appreciated</p>
<p>A column of lists can't be used directly here: comparing the column to a list triggers an elementwise comparison (hence the length error), and lists are unhashable, so groupby fails too. Transforming the lists to strings using the .apply method solves both problems and allows you to use .groupby:</p>

<pre><code>df["Features"]=df.Features.apply(str)
summary=df.groupby("Features").mean()
print(summary)
</code></pre>
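<p>If you only want the mean of the r2 score per feature-k combination (a small sketch, using the column names from your question), group by both keys and select the column of interest:</p>

<pre><code>summary = df.groupby(["Features", "KNeighbors"], as_index=False)["r2 score"].mean()
</code></pre>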
python|pandas|scikit-learn|pandas-groupby|grid-search
0
374,924
56,368,717
Select and add column values from another dataframe based on if the index exists in both
<p>I have two dataframes, let's call them <code>A</code> and <code>B</code>, with the same indexes (person IDs), but some IDs might be in A and not B, and vice versa. Additionally, the IDs are non-unique in <code>B</code>, while unique in dataframe <code>A</code>.</p>

<p>I want to check <code>B</code> to see whether certain IDs exist, and then add a column with the max B-label into A for each specific ID.</p>

<p>I tried writing the function below as an argument to the pandas .apply() function.</p>

<pre><code>def add_labels_to_dataframe(train_df, id_col_name='person_id', label_name="max_progress",
                            label_filepath=LABELS_SRC_FILE, default_value=-1, save=True):
    """
    Add labels column to train_df
    :param train_df: (DataFrame) the training dataframe that needs labels
    :param id_col_name: (str) name of the ID column to use
    :param label_name: (str) the column name of the label to use (score/progress/is_X/etc)
    :param label_filepath: (str) filepath with IDs and associated labels
    :param default_value: (int, or anything) The default label to give when a person_id has no associated label
    :return: (DataFrame) updated dataframe with labels
    """
    labels_df = pd.read_csv(label_filepath)

    def get_max_score(row):
        """
        DataFrame function to select max score when multiple exist per ID
        :param row: (DataFrame) A single row of the dataframe being modified
        :return: (int) returns elements of a Series that becomes a new column of the DataFrame
        """
        # if person_id is in labels, then get max of labels
        pdb.set_trace()
        pid_labels_df = labels_df[row[id_col_name].isin(labels_df[id_col_name])]
        if not pid_labels_df.empty and not pd.isnull(pid_labels_df[label_name].max()):
            return 1 + pid_labels_df[label_name].max()
        return default_value

    train_df[label_name] = train_df.apply(get_max_score, axis=1)
    if save:
        train_df.to_csv(LABELED_TRAIN_DF_PATH)
    return train_df
</code></pre>

<blockquote>
  <p>ValueError: ('Can only compare identically-labeled Series objects', 'occurred at index 0')</p>
</blockquote>

<p>I know I could just convert both dataframe indexes into Python lists, check if a value exists, then create a new DataFrame mapping old rows to either labeled values or some default -1, but I'm trying to do this all within pandas, in order to utilize the vectorization.</p>

<p>Can someone help me figure out a concise way to use only dataframe operations instead of casting to Python lists here?</p>
<p>I think* you're going to be able to do this with a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#transformation" rel="nofollow noreferrer">groupby transform</a>:</p> <pre><code>df[label_name] = df.groupby("person_id").transform("max") </code></pre> <p>* It's a little hard to read precisely what your code is attempting to do...</p>
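<p>If the transform doesn't map cleanly onto your two-frame setup, here is a sketch of the same idea using <code>map</code> (assumption: <code>labels_df</code> is the non-unique B table loaded inside your function, and the <code>+ 1</code> mirrors your <code>1 + ...max()</code>):</p>

<pre><code>max_labels = labels_df.groupby(id_col_name)[label_name].max() + 1
train_df[label_name] = train_df[id_col_name].map(max_labels).fillna(default_value)
</code></pre>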
python|pandas|dataframe
0
374,925
56,042,320
Split multiple content into multiple lines
<p>I have a column of data; it mostly holds single values, but some entries are multi-valued, joined with commas, and some are missing. I want to split the comma-joined multi-valued entries into multiple rows.</p>

<p>I found a good solution in this question (<a href="https://stackoverflow.com/a/50731254/10446425">Split cell into multiple rows in pandas dataframe</a>).</p>

<p>But this only extracts the rows split from the multi-valued entries. I planned to splice them back into the original data, but my data is a large file, and I can't easily locate each multi-valued entry in order to delete it.</p>

<p>e.g.:</p>

<pre><code>In [1]:data = {'id': [106452, 233649, 547531, 707841, 457009],
        'size': (np.nan, 1, 40, 40, '12,13')}
df = pd.DataFrame(data)
</code></pre>

<p>then:</p>

<pre><code>In [2]:df_new = (df.set_index(['id'])
          .stack()
          .str.split(',', expand=True)
          .stack()
          .unstack(-2)
          .reset_index(-1, drop=True)
          .reset_index()
          )
df_new
Out[1]:
       id size
0  457009   12
1  457009   13
</code></pre>

<p>if:</p>

<pre><code>In [3]:df_new = (df.set_index(['id'])
          .stack()
          .str.split(',', expand=True)
          .stack()
          .unstack(-2)
          .reset_index(-1, drop=True)
          .reset_index()
          )
df = pd.concat([df,df_new])
# I know it's a bit stupid, but I just want to express the idea of merging.
df
Out[2]:
       id   size
0  106452    NaN
1  233649      1
2  547531     40
3  707841     40
4  457009  12,13
0  457009     12
1  457009     13
</code></pre>

<p>I want this:</p>

<pre><code>Out[2]:
       id   size
0  106452    NaN
1  233649      1
2  547531     40
3  707841     40
4  457009     12
5  457009     13
</code></pre>

<p>How should I do it?</p>
<p><code>.str.split</code> only works on string values, and your column mixes numbers, NaN and strings, so cast everything to string first by adding <code>astype(str)</code>:</p>

<pre><code>df_new = (df.set_index(['id']).astype(str)
            .stack()
            .str.split(',', expand=True)
            .stack()
            .unstack(-2)
            .reset_index(-1, drop=True)
            .reset_index()
            )
</code></pre>
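<p>One side effect worth noting (a small follow-up sketch, assuming numpy is imported as np, as in your example): <code>astype(str)</code> turns missing values into the literal string <code>'nan'</code>, so to match your expected output you may want to convert them back:</p>

<pre><code>df_new['size'] = df_new['size'].replace('nan', np.nan)
</code></pre>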
python|pandas
0
374,926
56,063,831
how to decode colnames pandas dataframe with python?
<p>I imported a data frame in python with pandas. But I have column names with strange encoding.</p> <pre><code>colnames = ['Price \xe2\x82\xac', 'x-rate \xe2\x82\xac/$'] </code></pre> <p>Can you help me to decode these column names?</p>
<p>Try the following:</p> <pre><code>colnames = [i.encode('raw_unicode_escape').decode('utf-8') for i in colnames] </code></pre> <p>Yields:</p> <pre><code>['Price €', 'x-rate €/$'] </code></pre> <p>Per @piRSquared's comment, you can do this with <code>pandas</code> using:</p> <pre><code>df.rename(columns=lambda x: x.encode('raw_unicode_escape').decode()) </code></pre>
python|pandas|dataframe
5
374,927
56,076,878
How to access data of previous rows in pandas dataframe?
<p>I am trying to access the previous (or further back) row to use as a value in a new column. Have tried several approaches with enumerate, iterrows and iloc but end up with the same problem, they use the last value. The following code is used:</p> <pre><code>df = pd.DataFrame({'values':(50.033,50.025,49.979,49.954,49.936,49.935,49.93)}) df['a']=df.diff() def my_func_disch(x): if abs(x) &gt;= 0 and abs(x) &lt;= 0.009: for index,row in df.iterrows(): eff_disch = row['values'] else: eff_disch = 'xxx' return eff_disch df["b"] = df.a.apply(my_func_disch) </code></pre> <p>Which produces: </p> <pre><code> values a b 0 50.033 NaN xxx 1 50.025 -0.008 49.93 2 49.979 -0.046 xxx 3 49.954 -0.025 xxx 4 49.936 -0.019 xxx 5 49.935 0.000 49.93 6 49.930 -0.005 49.93 </code></pre> <p>And I would like it to produce:</p> <pre><code> values a b 0 50.033 NaN xxx 1 50.025 -0.008 50.033 2 49.979 -0.046 xxx 3 49.954 -0.025 xxx 4 49.936 -0.019 xxx 5 49.935 0.000 49.936 6 49.930 -0.005 49.935 </code></pre>
<p>Do not use <code>apply</code>, but instead use vectorized <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>, which is faster and more readable:</p> <pre><code>df['b'] = np.where(df['a'].abs().between(0, 0.009, inclusive=True), df['values'].shift(), 'xxx') # values a b #0 50.033 NaN xxx #1 50.025 -0.008 50.033 #2 49.979 -0.046 xxx #3 49.954 -0.025 xxx #4 49.936 -0.019 xxx #5 49.935 0.000 49.93600000000001 #6 49.930 -0.005 49.935 </code></pre> <p>The first argument specifies when to do something (when the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.abs.html" rel="nofollow noreferrer"><code>abs</code></a> is <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.between.html" rel="nofollow noreferrer"><code>between</code></a> some values), the second and third specify what to return when it is <code>True</code> or <code>False</code> respectively. You want the values-column <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shifted</code></a></p> <p>Your solution doesn't work because you always iterate over the entire DataFrame (which is almost never the way to go) only stopping after iterating over the last row, therefore returning the last value.</p>
python|pandas|dataframe
0
374,928
56,410,498
sort_value working on Name column but not Number column in dataframe
<p>When I try to sort my dataframe by the column "Number", I get the error</p>

<blockquote>
  <p>1708   # Check for duplicates</p>
  
  <p>KeyError: 'Number'</p>
</blockquote>

<p>The dataframe looks something like this:</p>

<pre><code>Number  Name    City   Sex
3       Jay     A      M
1       Marry   A      F
5       John    B      M
</code></pre>

<p>Number is int64 and the rest are objects.</p>

<pre><code>df.sort_values(by=['Number']) --&gt; error
df.sort_values(by=['Name']) --&gt; works
df.sort_values(by=['City']) --&gt; error
df.sort_values(by=['Sex']) --&gt; works
</code></pre>

<p>What I am looking for is something like this:</p>

<pre><code>Number  Name    City   Sex
1       Marry   A      F
3       Jay     A      M
5       John    B      M
</code></pre>
<p>I tried making a DataFrame like yours and sorting it, and sorting by the <code>Number</code> column works:</p>

<pre><code>df=pd.DataFrame({'Number':[3,1,5],
                 'Name':['Jay','Marry','John'],
                 'City':['A','A','B'],
                 'Sex':['M','F','M']})
print(df)
print(df.Number.dtype)
df=df.sort_values(by=['Number'])
print(df)
</code></pre>

<p>Output:</p>

<pre><code>   Number   Name City Sex
0       3    Jay    A   M
1       1  Marry    A   F
2       5   John    B   M
int64
   Number   Name City Sex
1       1  Marry    A   F
0       3    Jay    A   M
2       5   John    B   M
</code></pre>

<p>Maybe there is whitespace in your column names; try this before sorting:</p>

<pre><code>df.columns=df.columns.str.strip()
</code></pre>
python|pandas|columnsorting
1
374,929
56,105,450
Python Pandas - find all unique combinations of rows of a DataFrame without repeating values in the columns
<p>I have a dataframe that looks similar to this</p> <pre><code>df = pd.DataFrame({'A': {0: 1, 1: 1, 2: 1, 3: 10, 4: 10, 5: 10, 6: 13, 7: 13, 8: 13}, 'B': {0: 17, 1: 20, 2: 25, 3: 17, 4: 20, 5: 25, 6: 17, 7: 20, 8: 25}, 'distance': {0: 304.0, 1: 326.0, 2: 426.0, 3: 124.0, 4: 146.0, 5: 246.0, 6: 69.0, 7: 91.0, 8: 191.0}}) </code></pre> <pre><code> A B distance 0 1 17 304.0 1 1 20 326.0 2 1 25 426.0 3 10 17 124.0 4 10 20 146.0 5 10 25 246.0 6 13 17 69.0 7 13 20 91.0 8 13 25 191.0 </code></pre> <p>I am trying to get all possible combinations of the rows of the dataframe without repeating values in column A and column B. </p> <p>I have tried looping through the entries but it's quite inefficient as the number of rows increases.</p> <p>I expect the output to be new dataframes for all possible combinations with the maximum number of rows. For instance:</p> <pre><code>A B distance 1 17 304.0 10 20 146.0 13 25 191.0 </code></pre> <pre><code>A B distance 1 20 326.0 10 17 124.0 13 25 191.0 </code></pre> <p><strong>Another sample:</strong></p> <pre><code>df = pd.DataFrame({'A': {0: 0, 1: 0, 2: 0, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3, 9: 5, 10: 5, 11: 5, 12: 7, 13: 7, 14: 7, 15: 9, 16: 9, 17: 9, 18: 12, 19: 12, 20: 12, 21: 14, 22: 14, 23: 14, 24: 15, 25: 15, 26: 15, 27: 18, 28: 18}, 'B': {0: 17, 1: 20, 2: 25, 3: 17, 4: 20, 5: 25, 6: 17, 7: 20, 8: 25, 9: 17, 10: 20, 11: 25, 12: 17, 13: 20, 14: 25, 15: 17, 16: 20, 17: 25, 18: 17, 19: 20, 20: 25, 21: 17, 22: 20, 23: 25, 24: 17, 25: 20, 26: 25, 27: 20, 28: 25}, 'distance': {0: 408.0, 1: 430.0, 2: 530.0, 3: 293.0, 4: 315.0, 5: 415.0, 6: 281.0, 7: 303.0, 8: 403.0, 9: 242.0, 10: 264.0, 11: 364.0, 12: 208.0, 13: 230.0, 14: 330.0, 15: 170.0, 16: 192.0, 17: 292.0, 18: 74.0, 19: 96.0, 20: 196.0, 21: 48.0, 22: 70.0, 23: 170.0, 24: 27.0, 25: 49.0, 26: 149.0, 27: 17.0, 28: 117.0}}) </code></pre> <pre><code>Out[377]: A C distance 0 0 17 408.0 1 0 20 430.0 2 0 25 530.0 3 2 17 293.0 4 2 20 315.0 5 2 25 415.0 6 3 17 281.0 7 3 20 303.0 8 3 25 403.0 9 5 17 242.0 10 5 20 264.0 11 5 25 364.0 12 7 17 208.0 13 7 20 230.0 14 7 25 330.0 15 9 17 170.0 16 9 20 192.0 17 9 25 292.0 18 12 17 74.0 19 12 20 96.0 20 12 25 196.0 21 14 17 48.0 22 14 20 70.0 23 14 25 170.0 24 15 17 27.0 25 15 20 49.0 26 15 25 149.0 27 18 20 17.0 28 18 25 117.0 </code></pre> <p><strong>Expected Output (Sample)</strong></p> <pre><code>A B distance 0 17 408.0 2 20 315.0 3 25 403.0 A B distance 0 20 430.0 2 17 293.0 3 25 403.0 A B distance 0 25 530.0 2 17 293.0 3 20 303.0 A B distance 0 25 530.0 2 17 293.0 5 20 264.0 . . . </code></pre>
<p>I think you may need to use <code>permutations</code> from <code>itertools</code>; then we just need to look up the pivoted df:</p>

<pre><code>import itertools

import numpy as np
import pandas as pd

l=list(itertools.permutations([0,1,2]))

s=df.pivot(*df.columns)

list_of_df=[pd.DataFrame({'A':s.index,
                          'B':s.columns.values[list(x)],
                          'distance':s.values[np.arange(len(s)),x]}) for x in l ]

list_of_df[0]
Out[725]:
    A   B  distance
0   1  17     304.0
1  10  20     146.0
2  13  25     191.0

list_of_df[1]
Out[726]:
    A   B  distance
0   1  17     304.0
1  10  25     246.0
2  13  20      91.0
</code></pre>
<hr>
<p>Update </p>

<pre><code>s=df.pivot(*df.columns)

l=list(itertools.permutations(list(range(s.shape[1]))))
l1=list(itertools.permutations(list(range(len(s))),3))

list_of_df=[pd.DataFrame({'A':s.index[list(y)],
                          'C':s.columns.values[list(x)],
                          'distance':s.iloc[list(y),:].values[np.arange(len(y)),x]}) for x in l for y in l1 ]
</code></pre>
python|pandas
3
374,930
56,108,214
ValueError: Input contains NaN, infinity or.....('float32')
<p>I'm trying to figure out why I keep getting the error shown in the title of this question. I think I already cleaned the data and removed the NaNs. Can anyone help me out?</p>

<p>I am looking into a dataset with 11K lines and trying to train a model that predicts the level of students dropping out. I'm using an ordinary Windows laptop, while also practicing to get better at data analysis. </p>

<pre><code># divide the data set into categorial and non categorial features and apply models to get the insight of the data
print("\nDEFINING CATEGORICAL AND NUMERICAL FEATURES")
categorical_features = X.select_dtypes(include=['object']).columns
print(categorical_features)
numerical_features = X.select_dtypes(exclude = ["object"]).columns
print(numerical_features)

print("\nDIVIDE THE DATA SET INTO CATEGORIAL AND NON CATEGORIAL FEATURES AND APPLY MODELS TO GET THE INSIGHT OF THE DATA")
print("Numerical features : " + str(len(numerical_features)))
print("Categorical features : " + str(len(categorical_features)))

print("\nFILLING THE MISSING VALUE OF TEST WITH THEIR MEAN VALUE, FOR BETTER ACCURACY")

test = test.select_dtypes(exclude=[np.object])
test.info()
test = test.fillna(test.mean(), inplace=True)

print("\nAPPLYING MODEL RANDOM FOREST REGRESSOR")
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)

# pull data into target (y) and predictors (X)
predictor_cols = ['F18 ECTS på kurser med beståede talkarakter']

# -------------------------------------------
# Create training predictors data
train_X = X[predictor_cols]

my_model = RandomForestRegressor()
my_model.fit(train_X, y)
my_model.score(train_X, y)
print(predictor_cols)
print(my_model.score(train_X, y))

test = pd.read_csv("…_test.csv")

# -------------------------------------------
print("\nPRINT PREDICTED FACTORS")
test_X = test[predictor_cols]

# model to make predictions
predicted_factor = my_model.predict(test_X)

# at the predicted prices to ensure something sensible.
print(predicted_factor)
</code></pre>

<p>Most of my code runs fine, except:</p>

<pre class="lang-none prettyprint-override"><code>APPLYING MODEL RANDOM FOREST REGRESSOR
Traceback (most recent call last):
  File "C:/Users/jcst/PycharmProjects/Frafaldsanalyse/DefiningCatAndNumFeatures_4_new.py", line 142, in &lt;module&gt;
    my_model.fit(train_X, y)
  File "C:\Users\jcst\PycharmProjects\Frafaldsanalyse\venv\lib\site-packages\sklearn\ensemble\forest.py", line 250, in fit
    X = check_array(X, accept_sparse="csc", dtype=DTYPE)
  File "C:\Users\jcst\PycharmProjects\Frafaldsanalyse\venv\lib\site-packages\sklearn\utils\validation.py", line 573, in check_array
    allow_nan=force_all_finite == 'allow-nan')
  File "C:\Users\jcst\PycharmProjects\Frafaldsanalyse\venv\lib\site-packages\sklearn\utils\validation.py", line 56, in _assert_all_finite
    raise ValueError(msg_err.format(type_err, X.dtype))
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

Process finished with exit code 1
</code></pre>
<p>As said, your dataset <code>train_X</code> or <code>y</code> must contain <code>nan</code>s. Check again to see where they come from; they typically come from division by 0 or from math-function domain errors, like taking the log of negative values.</p>

<p>Something else you're going to run into afterwards:</p>

<p>You're using <code>test = test.fillna(test.mean(), inplace=True)</code></p>

<p>You should use <code>test = test.fillna(test.mean())</code></p>

<p>Or <code>test.fillna(test.mean(), inplace=True)</code></p>

<p>When specifying <code>inplace=True</code>, the function returns <code>None</code>, and so <code>test</code> becomes <code>None</code>.</p>

<p>Also, all of this currently has no effect, since you overwrite <code>test</code> by reading a new DataFrame later. You may have unintended behavior here.</p>
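<p>A quick way to locate the offending values (a sketch; it assumes <code>train_X</code> is all numeric, which it should be before fitting):</p>

<pre><code>import numpy as np

print(train_X.isnull().sum())                  # NaNs per column
print(np.isinf(train_X).sum())                 # infs per column
print(train_X[train_X.isnull().any(axis=1)])   # the rows that break fit()
</code></pre>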
python|sklearn-pandas
0
374,931
56,031,922
Save Multiple Dataframes on excel workbook then Upload to AWS S3 Bucket
<p>Good afternoon everyone, </p> <p>I am trying to save multiple dataframes to an excel workbook on different sheets. Then upload that workbook to an Amazon S3 bucket. the code below works 99% of the way but the writer.save() cannot find my excel file on my S3 Bucket. Please assist if you know a way around this. thanks. </p> <pre><code>#Exports the data back to Excel - PLEASE READ LINE BELOW THIS CODE bucket='sagemaker-bucket-xxxx/xxxx/xxxxx' data_key = 'Provider Data.xlsx' data_location = 's3://{}/{}'.format(bucket, data_key) writer = pd.ExcelWriter(data_location) #Targets the file where data is to be sent to Comparison.to_excel(writer,'DATA') #Targets the worksheet data is to be sent too df_current.to_excel(writer,'New Records') #Targets the worksheet data is to be sent too df_prev.to_excel(writer,'Old Records') #Targets the worksheet data is to be sent too df_same.to_excel(writer,'Same Records') #Targets the worksheet data is to be sent too ALLCOUNT.to_excel(writer,'RPN Roll Up Count') #Targets the worksheet data is to be sent too writer.save() #Saves files </code></pre> <p>error message is listed below. </p> <p>FileNotFoundError: [Errno 2] No such file or directory: 's3://sagemaker-bucket-xxxx/xxxx/xxxx/Provider Data.xlsx'</p>
<p>S3 is not a standard file system, so you cannot read and write to it with frameworks (such as pandas) that are not aware of its different interface.</p>

<p>The simplest way is to write the workbook locally to the file system of the notebook instance and then run <code>aws s3 cp</code> to upload it to S3. </p>
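<p>A sketch of that approach from within Python (assumptions: <code>boto3</code> is available, as it is on SageMaker notebook instances, and the bucket/key split shown here is illustrative; adjust it to your real bucket layout):</p>

<pre><code>import boto3

local_path = '/tmp/Provider Data.xlsx'
writer = pd.ExcelWriter(local_path)   # write locally first
Comparison.to_excel(writer, 'DATA')   # ... plus your other sheets
writer.save()

# then upload the finished workbook to S3
boto3.client('s3').upload_file(local_path, 'sagemaker-bucket-xxxx', 'xxxx/xxxxx/Provider Data.xlsx')
</code></pre>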
python|excel|pandas|amazon-s3|amazon-sagemaker
0
374,932
56,260,111
Filling pandas dataframe with foreign key
<p>I have 3 tables: master, summary, controller.</p>

<p>master contains the columns: asset_id (unique), controller_name</p>

<p>controller contains the columns: controller_id, which is a primary key generated based on unique controller_names</p>

<p>summary contains the columns: asset_id (primary key) and controller_id (empty)</p>

<p>I need to find a way to fill the controller_id column in summary based on its asset_id.</p>

<p>For example:</p>

<p>If the Master table looked like this:</p>

<p><img src="https://i.stack.imgur.com/PGjBq.png" alt="Master"></p>

<p>This would be the corresponding Controller table:</p>

<p><img src="https://i.stack.imgur.com/n6So7.png" alt="Controller"></p>

<p>This is what I want the Summary table to look like:</p>

<p><img src="https://i.stack.imgur.com/S8krj.png" alt="Summary"></p>

<p>Thanks in advance for any help!</p>
<p>You can use pandas merge and select the columns of interest</p> <pre><code>summary = master.merge( controller, on='controller_name', how='left' )[['asset_id','controller_id']] </code></pre>
python|pandas|sqlalchemy
1
374,933
56,325,181
Pyinstaller executable fails importing torchvision
<p>This is my <strong>main.py</strong>:</p> <pre class="lang-py prettyprint-override"><code>import torchvision input("Press key") </code></pre> <p>It runs correctly in the command line: <code>python main.py</code></p> <p>I need an executable for windows. So I did : <code>pyinstaller main.py</code></p> <p>But when I launched the <strong>main.exe</strong>, inside <code>/dist/main</code> I got this error:</p> <pre><code>Traceback (most recent call last): File "main.py", line 1, in &lt;module&gt; ... (omitted) File "site-packages\torchvision\ops\misc.py", line 135, in &lt;module&gt; File "site-packages\torchvision\ops\misc.py", line 148, in FrozenBatchNorm2d File "site-packages\torch\jit\__init__.py", line 850, in script_method File "site-packages\torch\jit\frontend.py", line 152, in get_jit_def File "inspect.py", line 973, in getsource File "inspect.py", line 955, in getsourcelines File "inspect.py", line 786, in findsource OSError: could not get source code [2836] Failed to execute script main </code></pre> <p>It seems that some source code is not correctly imported from pyinstaller. I am not sure if the problems is the <strong>torch</strong> module or <strong>torchvision</strong>.</p> <p>Additional info:</p> <ul> <li>I recently installed Visual Studio 2019</li> </ul> <p>System info:</p> <ul> <li>Window 10 </li> <li>Python 3.7 </li> <li>torch-1.1.0 </li> <li>torchvision-0.3.0</li> </ul> <p>[EDIT]</p> <p>I found that the problem is in the definition of the class <strong>FrozenBatchNorm2d</strong> inside torchvision. The following script produce the same error as the one before posted:</p> <p><strong>main.py</strong></p> <pre><code>import torch class FrozenBatchNorm2d(torch.jit.ScriptModule): def __init__(self, n): super(FrozenBatchNorm2d, self).__init__() @torch.jit.script_method def forward(self): pass </code></pre> <p>I copied all the torch source file. But I still got the error...</p>
<p>Downgrading <strong>torchvision</strong> to the previous version fixes the error:</p>

<pre><code>pip uninstall torchvision
pip install torchvision==0.2.2.post3
</code></pre>
python|pytorch|pyinstaller|torchvision
3
374,934
56,040,964
How to compare 2 dataframes and generate new dataframe
<p>I have 2 similar dataframes, and I would like to compare each row of the 1st dataframe with the corresponding row of the 2nd, based on a condition. The dataframes look like this:</p>

<p><a href="https://i.stack.imgur.com/c3otw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c3otw.png" alt="Dataframe"></a></p>

<p>Based on this comparison I would like to generate a similar dataframe with a new column 'change' containing the changes, based on the following conditions:</p>

<p>if the rows have identical values then 'change'='identical'; otherwise, if the date changed, then 'change'='new date'.</p>
<p>Here is an easy workaround. </p> <pre><code># Import pandas library import pandas as pd # One dataframe data = [['foo', 10], ['bar', 15], ['foobar', 14]] df = pd.DataFrame(data, columns = ['Name', 'Age']) # Another similar dataframe but foo age is 13 this time data = [['foo', 13], ['bar', 15], ['foobar', 14]] df2 = pd.DataFrame(data, columns = ['Name', 'Age']) df3 = df2.copy() for index, row in df.iterrows(): if df.at[index,'Age'] != df2.at[index,'Age']: df3.at[index,'Change']="Changed" df3["Change"].fillna("Not Changed",inplace = True) print(df3) </code></pre> <h1>Here is the output</h1> <pre><code> Name Age Change 0 foo 13 Changed 1 bar 15 Not Changed 2 foobar 14 Not Changed </code></pre>
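<p>If the frames get large, a vectorized sketch of the same comparison (using numpy's <code>np.where</code>, and assuming both frames share the same index, as in the example above):</p>

<pre><code>import numpy as np

df3 = df2.copy()
df3['Change'] = np.where(df['Age'] != df2['Age'], 'Changed', 'Not Changed')
</code></pre>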
python|pandas|loops
0
374,935
56,366,831
How to do index and match like in Pandas
<p>I am trying to do something like Excel's INDEX and MATCH in Pandas. I am new to this. What I would like to do is</p>

<ol>
<li><p>match a string or multiple strings and change the corresponding price</p></li>
<li><p>match the string regardless of uppercase and lowercase letters (a lot of the names of the fruits are in all uppercase, one uppercase and the rest in lowercase, or all lowercase)</p>

<pre><code>fruits                 price
apple from us          10
Apple from US          11
Mango from Canada      15
Orange from Mexico     16
Orange from Costa      15
Orange from Brazil     19
Pear from Guatemala    32
Melon from Guatemala   4
orange from Honduras   5
</code></pre></li>
</ol>

<p>I tried <code>df.loc[df['fruits'].str.contains('apple'), 'Target Price'] = 275</code> but I get</p>

<pre><code>fruits                 price
apple from us          275
Apple from US          275
Mango from Canada      275
Orange from Mexico     275
Orange from Costa      275
Orange from Brazil     275
Pear from Guatemala    275
Melon from Guatemala   275
Orange from Honduras   275
</code></pre>

<p>but what I would like is </p>

<pre><code>fruits                 price
apple from us          275
Apple from US          275
Mango from Canada      15
Orange from Mexico     16
Orange from Costa      15
Orange from Brazil     19
Pear from Guatemala    32
Melon from Guatemala   4
Orange from Honduras   5
</code></pre>

<p>Also, the line above does not let me combine multiple conditions, like containing "Orange" but not "Honduras". Is there a way to exclude rows that also contain a certain string, so that the price of Orange is set to 222 while Orange from Honduras remains as it was?</p>

<pre><code>fruits                 price
apple from us          275
Apple from US          275
Mango from Canada      15
Orange from Mexico     222
Orange from Costa      222
Orange from Brazil     222
Pear from Guatemala    32
Melon from Guatemala   4
Orange from Honduras   5
</code></pre>
<p>You can convert the column to lowercase first:</p>

<pre><code>&gt;&gt;&gt; d_f.loc[d_f['Fruits'].str.lower().str.contains('apple'), 'Price'] = 275
&gt;&gt;&gt; d_f
                 Fruits  Price
0         apple from us    275
1         Apple from US    275
2     Mango from Canada     15
3    Orange from Mexico     16
4     Orange from Costa     15
5    Orange from Brazil     19
6   Pear from Guatemala     32
7  Melon from Guatemala      4
8  orange from Honduras      5
</code></pre>
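<p>For the second part of the question, you can combine boolean masks with <code>&amp;</code> and negate one with <code>~</code> (a sketch along the same lines):</p>

<pre><code>mask = (d_f['Fruits'].str.lower().str.contains('orange')
        &amp; ~d_f['Fruits'].str.lower().str.contains('honduras'))
d_f.loc[mask, 'Price'] = 222
</code></pre>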
python|pandas
0
374,936
56,178,766
Extract numbers from "dd.dd AAA dd.dd BBB" or "AAA dd.dd BBB dd.dd"
<p>I am trying to extract two values from arbitrary text, formatted in variable ways. The two values are different, and I want to distinguish them based on a nearby string, let's say "DDT" and "EEG". Here are some examples of how the strings can be formatted.</p>

<pre><code>This contains 42.121% DDT and 2.1% EEG
Now with DDT: 12% EEG: 23.2%
47 DDT 22 EEG
EEG N/A DDT 43
5% EEG 20% DDT and more
</code></pre>

<p>Essentially I need to be able to select both values preceded by and followed by their identifier. </p>

<p>I have been using a | between two selectors to capture both "cases" for each value, but I am having trouble. I want to prevent the regex from selecting "12% EEG" in the second example line. I am trying to use negative lookaheads and positive lookbehinds but can't make it work.</p>

<p>Here is the regex for selecting just ddt</p>

<pre><code>(?&lt;=eeg)(\d{1,3}\.?\d{1,6}).{,10}?ddt|ddt(?!.*eeg).{,10}?(\d{1,3}\.?\d{1,6})
</code></pre>

<p>This is the closest I have gotten, but it still does not work correctly. This version fails to match "20% DDT."</p>

<p>My original regex did not use lookbehinds, but it also fails in some cases.</p>

<pre><code>(?:(?:(\d{1,3}\.?\d*)[^(?:eeg)]{0,10}?ddt)|(?:ddt[^(?:eeg)]{0,10}?(\d{1,3}\.?\d*)))
</code></pre>

<p>My original approach fails to recognize the 23.2% EEG strings formatted like this: "DDT: 12% EEG: 23.2%".</p>

<p>I am not sure if this type of selector is possible with regex, but I want to use regex in order to vectorize this extraction. I have a function that does a good job of characterizing these strings, but it is very slow on large datasets (~1 million records). The regex runs quickly and is easy to apply to vectors, which is why I want to use it. If there are other suggestions to solve this problem with NLP or numpy/pandas functions, I am open to those as well.</p>
<p>You could try the following, at least for these cases:</p> <p>1/ work out which is first EEG or DDT:</p> <pre><code>In [11]: s.str.extract("(DDT|EEG)") Out[11]: 0 0 DDT 1 DDT 2 DDT 3 EEG 4 EEG </code></pre> <p>2/ pull out all the numbers:</p> <pre><code>In [12]: s.str.extract("(\d+\.?\d*|N/A).*?(\d+\.?\d*|N/A)") Out[12]: 0 1 0 42.121 2.1 1 12 23.2 2 47 22 3 N/A 43 4 5 20 </code></pre> <p>To get rid of the N/A you can apply to_numeric:</p> <pre><code>In [13]: res = s.str.extract("(\d+\.?\d*|N/A).*?(\d+\.?\d*|N/A)").apply(pd.to_numeric, errors='coerce', axis=1) In [14]: res Out[14]: 0 1 0 42.121 2.1 1 12.000 23.2 2 47.000 22.0 3 NaN 43.0 4 5.000 20.0 </code></pre> <p>Now you have to rearrange these columns to match their respective DDT/EEG:</p> <pre><code>In [15]: pd.DataFrame({ "DDT": res[0].where(s.str.extract("(DDT|EEG)")[0] == 'DDT', res[1]), "EEG": res[1].where(s.str.extract("(DDT|EEG)")[0] == 'DDT', res[0]) }) Out[15]: DDT EEG 0 42.121 2.1 1 12.000 23.2 2 47.000 22.0 3 43.000 NaN 4 20.000 5.0 </code></pre> <p>Here <code>s</code> is the original Series/column:</p> <pre><code>In [21]: s Out[21]: 0 This contains 42.121% DDT and 2.1% EEG 1 Now with DDT: 12% EEG: 23.2% 2 47 DDT 22 EEG 3 EEG N/A DDT 43 4 5% EEG 20% DDT and more dtype: object </code></pre> <p>This assumes both DDT and EEG are both present, you might need to NaN out the rows where this isn't the case (which only have one of DDT/EEG)...</p>
python|regex|pandas|regex-negation|regex-lookarounds
0
374,937
56,087,907
Is there a way to use map function or for loop for melting? I need to melt 5 dataframes with same line of code
<p>I have 6 dataframes called <code>open_price</code>, <code>closed_price</code>, <code>volume</code>, <code>adj_open_price</code>, <code>high_price</code> and <code>low_price</code>.</p>

<p>Each dataframe has values like this:</p>

<p><a href="https://i.stack.imgur.com/G5wmX.png" rel="nofollow noreferrer">This is before </a></p>

<p><a href="https://i.stack.imgur.com/MBXR7.png" rel="nofollow noreferrer">This is my goal </a></p>

<p>I know how to do it manually (without a for loop or map): just apply <code>pd.melt</code> to each DataFrame and keep merging the results:</p>

<pre><code>open=pd.melt(open_price, value_vars=['GOOG','AMZN','APPL','FB','NFLX','SBUX','TSLA'],
             var_name='Firm', value_name='Open', id_vars=['Firm'])
close=pd.melt(close_price, value_vars=['GOOG','AMZN','APPL','FB','NFLX','SBUX','TSLA'],
              var_name='Firm', value_name='Close', id_vars=['Firm'])

openclose = pd.merge(open,close)
</code></pre>

<p>and so on. </p>

<p>Is there a way to combine these repetitive tasks into one?</p>

<p>I also need to change the <code>value_name</code> to its respective dataframe’s name. </p>

<p>Thank you!!</p>
<p>Try it like this: put the dataframes in a list, then apply <code>pd.melt</code> with <code>map</code>.</p>

<pre><code>dfs = [df1, df2]
result = list(map(lambda df: pd.melt(...), dfs))
df_1, df_2 = result
</code></pre>

<p>So in your case this would be (passing a second iterable to <code>map</code> so that each dataframe gets its own <code>value_name</code>, which is what you asked for):</p>

<pre><code>dataframes = [open_price, low_price]
value_names = ['Open', 'Low']

result = list(map(lambda d, name: pd.melt(d, id_vars=['Date'],
                                          value_vars=['GOOG','AMZN','APPL','FB','NFLX','SBUX','TSLA'],
                                          var_name='Firm', value_name=name),
                  dataframes, value_names))

open_melted, low_melted = result
</code></pre>

<p>You don't necessarily need to wrap <code>map</code> in <code>list</code> in the last line, because <code>map</code> returns an iterator you can loop over directly.</p>
python|pandas
0
374,938
56,099,141
Numpy broadcasting - using a variable value
<p><strong>EDIT:</strong></p>

<p>As my question was badly formulated, I decided to rewrite it.</p>

<p>Does numpy allow creating an array from a function, without using Python's standard list comprehension?</p>

<p>With a list comprehension I could have:</p>

<pre><code>array = np.array([f(i) for i in range(100)])
</code></pre>

<p>with f a given function.</p>

<p>But if the constructed array is really big, using a Python list would be slow and would eat a lot of memory.</p>

<p>If such a way doesn't exist, I suppose I could first create an array of the wanted size</p>

<pre><code>array = np.arange(100)
</code></pre>

<p>And then map a function over it.</p>

<pre><code>array = f(array)
</code></pre>

<p><a href="https://stackoverflow.com/a/46470401/7973514">According to results from another post</a>, it seems that this would be a reasonable solution.</p>

<hr>

<p>Let's say I want to use the add function with a simple int value; it would be as follows:</p>

<pre><code>array = np.array([i for i in range(5)])
array + 5
</code></pre>

<p>But now what if I want the value (here 5) to vary according to the index of the array element? For example the operation:</p>

<pre><code>array + [i for i in range(5)]
</code></pre>

<p>What object can I use to define special rules for a variable value within a vectorized operation?</p>
<p>You can simply add the two arrays together, as shown here:</p>

<p><a href="https://stackoverflow.com/questions/40955903/simple-adding-two-arrays-using-numpy-in-python">Simple adding two arrays using numpy in python?</a></p>

<p>This assumes your "variable by index" is just another array; numpy then performs the addition element-wise.</p>
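<p>A short sketch of both points from the question (the function <code>f</code> here is a hypothetical example; any function built from numpy-compatible operations works the same way):</p>

<pre><code>import numpy as np

arr = np.arange(5)
arr + 5                  # scalar broadcast: [5 6 7 8 9]
arr + np.arange(5)       # index-dependent values: [0 2 4 6 8]
arr + [0, 1, 2, 3, 4]    # a plain list is converted to an array too

# building a big array from a function without a Python list:
def f(i):
    return i ** 2 + 1    # hypothetical, element-wise function

result = f(np.arange(100))
</code></pre>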
python|numpy|vectorization
0
374,939
56,274,699
One-line piece of script to read a single row in a file as numpy.array
<p>Simple question: I have a file with N entries and M values. I need a one-line piece of script to read the first, or any single, row of the file as a numpy array.</p>
<pre><code>import numpy as np

A = np.loadtxt('file_name', skiprows=N, max_rows=1)  # read a single row (row N, zero-based); max_rows needs NumPy &gt;= 1.16
B = np.loadtxt('file_name', usecols=M)               # read a single column
</code></pre>
python|python-2.7|numpy|readfile
1
374,940
56,433,990
rounding big floating point numbers
<p>I'm implementing the Jacobi eigenvalue algorithm to find the eigenvalues of a given matrix. My problem is with floating-point numbers like 1.2335604410291751e+216: I cannot round them.</p>

<p>I tried the np.around and round functions, but they didn't work.</p>
<p>If what you want is to only round <em>m</em> of a number in standard form (m × bⁿ) you can do it like this, for m, b, and n the number, the base, and the exponent, respectively:</p> <pre><code>import math def round_(m, d, b = 10): n = math.floor(math.log(abs(m), b),) return float(round(m * (b ** (d - n))) * (b ** - (d - n))) </code></pre> <p>Some test outputs:</p> <pre><code>&gt;&gt;&gt; print(round_(1.23456783456787434567e-22, 1)) 1.2299999999999998e-22 &gt;&gt;&gt; round_(-1.23456783456787434567e+159, 7) -1.2345678e+159 &gt;&gt;&gt; round_(1.23456783456787434567e+50, 6) 1.234568e+50 &gt;&gt;&gt; round_(-0.23456783456787434567e+256, 5) -2.34568e+255 &gt;&gt;&gt; round_(1.23456783456787434567e+255, 4) 1.2346e+255 &gt;&gt;&gt; round_(0.23456783456787434567e+272, 3) 2.346e+271 &gt;&gt;&gt; round_(1.23456783456787434567e-23, 2) 1.235e-22 &gt;&gt;&gt; round_(-1.23456783456787434567e+251, 1) -1.2e+251 </code></pre> <p><strong>Overflows may occur (See output #1)</strong>.</p> <p>Tested using Python 3.7.</p>
python|numpy|rounding
0
374,941
56,331,296
How do I make Tkinter drawing lines from an x interval and a y NumPy array?
<p>I have two sources of data to be plotted as X and Y coordinates on a continuous Tkinter line. The x data is generated from a constant like say 1.3. So each x value has to be + 1.3 greater than the last - for example 3.9, 5.2, 6.5 and so on. The y values are held in a numpy array. I need to create a line on a canvas according to these two sets of data. </p> <p>I have a feeling that this is so, so easy and I am missing something really stupid. So, apologies if this is the case.</p> <p>At the moment I generate the line array using a for loop but it is too slow (I need to do loads of these per second).</p> <pre><code>x_start = 3 x_stop = 5 step = 1.3 for n in range(x_start, x_stop): x = n * step line_array[n * 2] = x line_array[n * 2 + 1] = array[n] </code></pre>
<p>You can simply call <code>Canvas.create_line(...)</code> with the array of points created from your two sources of data, as below:</p>

<pre><code>from tkinter import *
import numpy.random

root = Tk()
canvas = Canvas(root, bg='white', width=1600, height=800)
canvas.pack(fill=BOTH, expand=1)

y_array = numpy.random.randint(100, 700, 5000) # generate 5000 y values
x_start = 3
x_stop = 5000
step = 1.3

# create the array of points and draw them
line_array = [(n*step, y_array[n]) for n in range(x_start, x_stop)]
canvas.create_line(line_array)

root.mainloop()
</code></pre>

<p>It takes less than a second to draw the lines on my i7 notebook.</p>
python|arrays|numpy|tkinter|line
0
374,942
56,292,555
Delete rows preceding and following a row containing NaN in Python?
<p>I am trying to clean experimental data using Python with numpy and pandas. Some of my measurements are implausible. I want to remove these measurements and the 2 preceding and 2 following measurements from the same sample. I am trying to find an elegant way to achieve this without using a for loop, as my dataframes are quite large.</p>

<p>My data: </p>

<pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt;df
             Date      Time Sample  Measurement
index
7737   2019-04-15  06:40:00      A        6.560
7739   2019-04-15  06:50:00      A        1.063
7740   2019-04-15  06:55:00      A        1.136
7741   2019-04-15  07:00:00      A        1.301
7742   2019-04-15  07:05:00      A        1.435
7743   2019-04-15  07:10:00      A        1.704
7744   2019-04-15  07:15:00      A        1.961
7745   2019-04-15  07:20:00      A        2.023
7746   2019-04-15  07:25:00      A        6.284
7747   2019-04-15  07:30:00      A        2.253
7748   2019-04-15  07:35:00      A        6.549
7749   2019-04-15  07:40:00      A        2.591
7750   2019-04-15  07:45:00      A        6.321
7752   2019-04-15  07:55:00      A        0.937
7753   2019-04-15  08:00:00      B        0.372
7754   2019-04-15  08:05:00      B        0.382
7755   2019-04-15  08:10:00      B        0.390
7756   2019-04-15  08:15:00      B        0.455
7757   2019-04-15  08:20:00      B        6.499
</code></pre>

<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df['Measurement'] = np.where(df['Measurement']&gt;6.0, np.nan, df['Measurement'])
</code></pre>

<p>gives </p>

<pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt;df
             Date      Time Sample  Measurement
index
7737   2019-04-15  06:40:00      A          NaN
7739   2019-04-15  06:50:00      A        1.063
7740   2019-04-15  06:55:00      A        1.136
7741   2019-04-15  07:00:00      A        1.301
7742   2019-04-15  07:05:00      A        1.435
7743   2019-04-15  07:10:00      A        1.704
7744   2019-04-15  07:15:00      A        1.961
7745   2019-04-15  07:20:00      A        2.023
7746   2019-04-15  07:25:00      A          NaN
7747   2019-04-15  07:30:00      A        2.253
7748   2019-04-15  07:35:00      A          NaN
7749   2019-04-15  07:40:00      A        2.591
7750   2019-04-15  07:45:00      A          NaN
7752   2019-04-15  07:55:00      A        0.937
7753   2019-04-15  08:00:00      B        0.372
7754   2019-04-15  08:05:00      B        0.382
7755   2019-04-15  08:10:00      B        0.390
7756   2019-04-15  08:15:00      B        0.455
7757   2019-04-15  08:20:00      B          NaN
</code></pre>

<p>I deleted rows using </p>

<pre class="lang-py prettyprint-override"><code>df= df[np.isfinite(df['Measurement'])]
</code></pre>

<p>The result I am trying to obtain, after removing the 2 rows preceding and following a row containing NaN within a sample (note that 7753 has to stay, as this measurement belongs to sample B):</p>

<pre class="lang-py prettyprint-override"><code>             Date      Time Sample  Measurement
index
7741   2019-04-15  07:00:00      A        1.301
7742   2019-04-15  07:05:00      A        1.435
7743   2019-04-15  07:10:00      A        1.704
7753   2019-04-15  08:00:00      B        0.372
7754   2019-04-15  08:05:00      B        0.382
</code></pre>
<p>We can mark all indices which are two places before or after a <code>NaN</code>, then replace their values with <code>NaN</code> as well:</p>

<pre><code># Get indices of NaN's
idxnull = df[df['Measurement'].isnull()].index

a = [range(x+2) if x==0 else range(x-2, x) if x==idxnull.max() else range(x-2, x+2) for x in idxnull]

for rng in a:
    df.loc[rng, 'Measurement'] = np.NaN

df.dropna(inplace=True)
df = df.iloc[1:]
</code></pre>

<hr>

<pre><code>    Index        Date      Time Sample  Measurement
3    7741  2019-04-15  07:00:00      A        1.301
4    7742  2019-04-15  07:05:00      A        1.435
5    7743  2019-04-15  07:10:00      A        1.704
14   7753  2019-04-15  08:00:00      B        0.372
15   7754  2019-04-15  08:05:00      B        0.382
</code></pre>

<p>The list comprehension looks quite difficult, but it expands to the following:</p>

<pre><code>for x in idxnull:
    if x == 0:
        range(x+2)
    elif x == idxnull.max():
        range(x-2, x)
    else:
        range(x-2, x+2)
</code></pre>
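<p>If you also need to respect the <code>Sample</code> boundaries from the question, here is a vectorized sketch of the same idea (a centered window of 5 covers two rows on either side of each NaN, computed per sample):</p>

<pre><code>m = df['Measurement'].isnull()
near_nan = (m.astype(int)
             .groupby(df['Sample'])
             .transform(lambda s: s.rolling(5, center=True, min_periods=1).max())
             .astype(bool))
df_clean = df[~near_nan]
</code></pre>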
python|pandas|numpy|dataframe
3
374,943
56,215,105
Wrong when I use tensorflow and keras together?
<h1>This is the code where I mix tensorflow with keras.</h1> <pre><code>def dense_block(x, nb_layers, nb_filter, growth_rate, bottleneck=False, dropout_rate=None, weight_decay=1e-4, grow_nb_filters=True, return_concat_list=False): ''' Build a dense_block where the output of each conv_block is fed to subsequent ones Args: x: keras tensor nb_layers: the number of layers of conv_block to append to the model. nb_filter: number of filters growth_rate: growth rate bottleneck: bottleneck block dropout_rate: dropout rate weight_decay: weight decay factor grow_nb_filters: flag to decide to allow number of filters to grow return_concat_list: return the list of feature maps along with the actual output Returns: keras tensor with nb_layers of conv_block appended ''' concat_axis = 1 if K.image_data_format() == 'channels_first' else -1 x_list = [x] for i in range(nb_layers): cb = __conv_block(x, growth_rate, bottleneck, dropout_rate, weight_decay) ######################################################## #This is where I mix tensorflow with keras. cb = K.reshape(cb,(-1,7*7*32)) W = weight_variable([7*7*32,7*7*32]) cb = tf.matmul(cb, W) cb = K.reshape(cb,(-1,7,7,32)) x_list.append(cb) ###################################################### x = concatenate([x, cb], axis=concat_axis) if grow_nb_filters: nb_filter += growth_rate if return_concat_list: return x, nb_filter, x_list else: return x, nb_filter </code></pre> <h1>but get the error like this:</h1> <p>AttributeError: 'Tensor' object has no attribute '_keras_history'</p>
<p>You cannot usually mix a separate Keras install with TensorFlow; use the one provided within TensorFlow.</p>

<p>Try replacing keras as follows: <code>from tensorflow import keras as K</code></p>

<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras</a></p>
tensorflow|keras
0
374,944
56,332,802
Autoencoders in Keras with Variable Inputs
<p>I have a keras code implementing an autoencoder like that:</p> <pre><code>ENCODING_DIM = 5 # input placeholder input_img = tf.keras.layers.Input(shape=(320,)) # this is the encoded representation of the input encoded = tf.keras.layers.Dense(35, activation='relu')(input_img) encoded = tf.keras.layers.Dense(20, activation='relu')(encoded) encoded = tf.keras.layers.Dense(ENCODING_DIM, activation='relu')(encoded) decoded = tf.keras.layers.Dense(20, activation='relu')(encoded) decoded = tf.keras.layers.Dense(35, activation='relu')(decoded) decoded = tf.keras.layers.Dense(320, activation='sigmoid')(decoded) autoencoder = tf.keras.models.Model(input_img, decoded) encoder = tf.keras.models.Model(input_img, encoded) encoded_input = tf.keras.layers.Input(shape=(ENCODING_DIM,)) decoder_layer = autoencoder.layers[-1] #decoded_input = tf.keras.models.Model(encoded_input,decoder_layer(encoded_input)) autoencoder.compile(optimizer='nadam', loss='binary_crossentropy') from keras.callbacks import ModelCheckpoint </code></pre> <p>it works perfectly.</p> <p>Now I would like to have variable input dimensions (e.g. the first vector [320x1], the second [280x1], etc...)</p> <p>Now I try to do that:</p> <pre><code>ENCODING_DIM = 5 # input placeholder input_img = tf.keras.layers.Input(shape=(None,)) # this is the encoded representation of the input encoded = tf.keras.layers.Dense(35, activation='relu')(input_img) encoded = tf.keras.layers.Dense(20, activation='relu')(encoded) encoded = tf.keras.layers.Dense(ENCODING_DIM, activation='relu')(encoded) decoded = tf.keras.layers.Dense(20, activation='relu')(encoded) decoded = tf.keras.layers.Dense(35, activation='relu')(decoded) decoded = tf.keras.layers.Dense(320, activation='sigmoid')(decoded) autoencoder = tf.keras.models.Model(input_img, decoded) encoder = tf.keras.models.Model(input_img, encoded) encoded_input = tf.keras.layers.Input(shape=(ENCODING_DIM,)) decoder_layer = autoencoder.layers[-1] #decoded_input = tf.keras.models.Model(encoded_input,decoder_layer(encoded_input)) autoencoder.compile(optimizer='nadam', loss='binary_crossentropy') from keras.callbacks import ModelCheckpoint </code></pre> <p>but it returns an error like:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-24-7764c4707491&gt; in &lt;module&gt;() 14 15 # this is the encoded representation of the input ---&gt; 16 encoded = tf.keras.layers.Dense(35, activation='relu')(input_img) 17 encoded = tf.keras.layers.Dense(20, activation='relu')(encoded) 18 encoded = tf.keras.layers.Dense(ENCODING_DIM, activation='relu')(encoded) 2 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py in build(self, input_shape) 935 input_shape = tensor_shape.TensorShape(input_shape) 936 if tensor_shape.dimension_value(input_shape[-1]) is None: --&gt; 937 raise ValueError('The last dimension of the inputs to `Dense` ' 938 'should be defined. Found `None`.') 939 last_dim = tensor_shape.dimension_value(input_shape[-1]) ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`. </code></pre> <p>How can I implement an autoencoder having different input dimensions?</p>
<p>A Dense layer will create, in your case, 35 neurons, each connected to every input feature (out of 320). It will initialize a weight matrix of size 35x320, for example. There is no way to initialize such a matrix when the input size is not known, at least when it comes to dense layers. You will have to pad your inputs to some maximum possible input length (320?) to apply the model as you define it. </p>
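<p>A minimal padding sketch (assumptions: 320 is the maximum length and zero-padding is acceptable for your data; <code>vec_320</code> and <code>vec_280</code> are hypothetical variable-length vectors):</p>

<pre><code>import numpy as np

def pad_to(x, length=320):
    out = np.zeros(length)   # zeros as padding values
    out[:len(x)] = x
    return out

batch = np.stack([pad_to(v) for v in [vec_320, vec_280]])  # shape (2, 320)
</code></pre>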
python|tensorflow|keras|autoencoder
0
374,945
56,218,003
Pandas dataframe : Operation per batch of rows
<p>I have a pandas DataFrame <code>df</code> for which I want to compute some statistics per batch of rows. </p> <p>For example, let's say that I have a <code>batch_size = 200000</code>. </p> <p>For each batch of <code>batch_size</code> rows I would like to have the number of unique values for a column <code>ID</code> of my DataFrame.</p> <p>How can I do something like that ? </p> <p>Here is an example of what I want : </p> <pre><code>print(df) &gt;&gt; +-------+ | ID| +-------+ | 1| | 1| | 2| | 2| | 2| | 3| | 3| | 3| | 3| +-------+ batch_size = 3 my_new_function(df,batch_size) &gt;&gt; For batch 1 (0 to 2) : 2 unique values 1 appears 2 times 2 appears 1 time For batch 2 (3 to 5) : 2 unique values 2 appears 2 times 3 appears 1 time For batch 3 (6 to 8) 1 unique values 3 appears 3 times </code></pre> <p>Note : The output can of course be a simple DataFrame </p>
<p>See <a href="https://stackoverflow.com/questions/33367142/split-dataframe-into-relatively-even-chunks-according-to-length/33368088">this post</a> for the splitting process; then you could do this to get the number of unique 'ID' values per batch:</p>

<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'ID' : [1, 1, 2, 2, 2, 3, 3, 3, 3]})
batch_size = 3

result = []
for batch_number, batch_df in df.groupby(np.arange(len(df)) // batch_size):
    result.append(batch_df['ID'].nunique())

pd.DataFrame(result)
</code></pre>

<p>Edit: go with user3426270's answer, I didn't notice it when I answered.</p>
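<p>For the per-value counts shown in the question's expected output, a sketch without the Python-level loop:</p>

<pre><code>batches = np.arange(len(df)) // batch_size
print(df.groupby(batches)['ID'].nunique())       # unique values per batch
print(df.groupby(batches)['ID'].value_counts())  # occurrences of each value per batch
</code></pre>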
python|pandas|performance|batch-processing
5
374,946
56,414,847
Concatenating Pandas DataFrames Doubling Rows
<p>I am trying to concat() two DataFrames in pandas. One of the dataframes are just some columns I have taken from the other dataframe and transformed, so at no point do I resort them. But when I try to concatenate them I get an error saying they can't be concatenated together and so they are concatenated almost diagonally with the number of rows doubling (as each has the same rows) and the number of columns increasing by columns in one plus the other.</p> <p>Ideally I would like the number of rows to stay the same and the number of columns to be the columns in one plus the columns in the other. Below is my code:</p> <pre><code>## In the below code I create new names for the scaled fields by adding SC_ to ## their existing names SC_ExplanVars = [] for var in explan_vars: sc_var= "SC_" + var SC_ExplanVars.append(sc_var) ## Scale the columns from my dataframe that will be used as explanatory ## variables X_Scale = preprocessing.scale(data[ExplanVars]) ## Put my newly scaled explanatory variables into a DataFrame with same headers ## but with SC_ infont X_Scale = pd.DataFrame(X_Scale, columns = SC_ExplanVars) ## Concatenate scaled variables onto original dataset datat = pd.concat([data, X_Scale], axis=1) </code></pre> <p>I get the warning:</p> <pre><code>C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\api.py:77: RuntimeWarning: '&lt;' not supported between instances of 'str' and 'int', sort order is undefined for incomparable objects result = result.union(other) </code></pre> <p>EDIT</p> <p>Below is a table of what I was describing. It is only the top 10 rows and I have changed it to only one column and still seems to give me the same issue</p> <pre><code>Data= Col1 297 297 297 297 275 275 275 400 400 400 X_Scale = SC_Col1 -0.4644471998668502 -0.4644471998668502 -0.4644471998668502 -0.4644471998668502 -0.8849343767010354 -0.8849343767010354 -0.8849343767010354 1.5041973098568349 1.5041973098568349 1.5041973098568349 </code></pre> <p>After concatenation</p> <pre><code>datat = Col1 SC_Col1 297.0 NaN 297.0 NaN 297.0 NaN 297.0 NaN 275.0 NaN 275.0 NaN 275.0 NaN 400.0 NaN 400.0 NaN 400.0 NaN NaN -0.4644471998668502 NaN -0.4644471998668502 NaN -0.4644471998668502 NaN -0.4644471998668502 NaN -0.8849343767010354 NaN -0.8849343767010354 NaN -0.8849343767010354 NaN 1.5041973098568349 NaN 1.5041973098568349 NaN 1.5041973098568349 </code></pre>
<p>Maybe there is a different index label; try using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer">reset_index()</a> on each dataframe before concatenating.</p>

<p>For example, here I have 2 dataframes with different index labels and try to <code>concat</code> them:</p>

<pre><code>d1={'Col1':[297,297,297,297,275,275,275,400,400,400]}
d2={'SC_Col1': [-0.4644471998668502,-0.4644471998668502,-0.4644471998668502,-0.4644471998668502,-0.8849343767010354,-0.8849343767010354,-0.8849343767010354,1.5041973098568349,1.5041973098568349,1.5041973098568349]}

df1=pd.DataFrame(d1, index=[10,11,12,13,14,15,16,17,18,19])
df2=pd.DataFrame(d2)

print(pd.concat([df1, df2], axis=1))
</code></pre>

<p>Output:</p>

<pre><code>     Col1   SC_Col1
0     NaN -0.464447
1     NaN -0.464447
2     NaN -0.464447
3     NaN -0.464447
4     NaN -0.884934
5     NaN -0.884934
6     NaN -0.884934
7     NaN  1.504197
8     NaN  1.504197
9     NaN  1.504197
10  297.0       NaN
11  297.0       NaN
12  297.0       NaN
13  297.0       NaN
14  275.0       NaN
15  275.0       NaN
16  275.0       NaN
17  400.0       NaN
18  400.0       NaN
19  400.0       NaN
</code></pre>

<hr>

<p>After using <code>reset_index()</code> with the parameter <code>drop=True</code> on both frames before the <code>concat()</code> operation, the result looks like this:</p>

<pre><code>df1=df1.reset_index(drop=True)
df2=df2.reset_index(drop=True)

print(pd.concat([df1, df2], axis=1))
</code></pre>

<p>Output:</p>

<pre><code>   Col1   SC_Col1
0   297 -0.464447
1   297 -0.464447
2   297 -0.464447
3   297 -0.464447
4   275 -0.884934
5   275 -0.884934
6   275 -0.884934
7   400  1.504197
8   400  1.504197
9   400  1.504197
</code></pre>

<p>Hope this can help you :)</p>
python-3.x|pandas|dataframe|scikit-learn
1
374,947
56,082,364
Finding a specific value in csv files Python
<p>I have a column of values, which are part of a dataframe df. </p> <pre><code>Value 6.868061881 6.5903628020000005 6.472865833999999 6.427754219 6.40081742 6.336348032 6.277545389 6.250755132 </code></pre> <p>These values have been put together from several CSV files. Now I'm trying to backtrack and find the original CSV file which contains the values. This is my code. The problem is each row of the CSV file contains alphanumeric entries and I'm comparing only for numeric ones (as Values above). So the code isn't working. </p> <pre><code>for item in df['Value']: for file in dirs: csv_file = csv.reader(open(file)) for row in csv_file: for column in row: if str(column) == str(item): print (file) </code></pre> <p>Plus, I'm trying to optimize the # loops. How do I approach this?</p>
<p>Assuming <code>dirs</code> is a list of file paths to CSV files:</p>

<pre><code>csv_dfs = {file: pd.read_csv(file) for file in dirs}
csv_df = pd.concat(csv_dfs)
</code></pre>

<p>If you're just looking in the <code>'Values'</code> column, this is pretty straightforward:</p>

<pre><code>print(csv_df[csv_df['Values'].isin(df['Values'])])
</code></pre>

<p>Because we made the dataframe from a dictionary of the files, where the keys are filenames, the printed values will have the original filename in the index.</p>

<hr>

<p>In a comment, you asked how to just get the filenames. Because of the way we constructed the dataframe's index, the following should work to get a series of the filenames:</p>

<pre><code>csv_df[csv_df['Values'].isin(df['Values'])].reset_index()['level_0']
</code></pre>

<hr>

<p>Note, if you're not sure which column in the CSVs you're matching against, you can loop it:</p>

<pre><code>for col in csv_df.columns:
    print(csv_df[csv_df[col].isin(df['Values'])])
</code></pre>
python|pandas|csv
3
374,948
56,263,418
Iterating over columns and comparing each row value of that column to another column's value in Pandas
<p>I am trying to iterate through a range of 3 columns (named 0 ,1, 2). in each iteration of that column I want to compare each row-wise value to another column called Flag (row-wise comparison for equality) in the same frame. I then want to return the matching field.</p> <p>I want to check if the values match.</p> <p>Maybe there is an easier approach to concatenate those columns into a single list then iterate through that list and see if there are any matches to that extra column? I am not very well versed in Pandas or Numpy yet.</p> <p>I'm trying to think of something efficient as well as I have a large data set to perform this on.</p> <p>Most of this is pretty free thought so I am just trying lots of different methods</p> <p>Some attempts so far using the iterate over each column method:</p> <pre class="lang-py prettyprint-override"><code> ##Sample Data df = pd.DataFrame([['123','456','789','123'],['357','125','234','863'],['168','298','573','298'], ['123','234','573','902']]) df = df.rename(columns = {3: 'Flag'}) ##Loop to find matches i = 0 while i &lt;= 2: df['Matches'] = df[i].equals(df['Flag']) i += 1 </code></pre> <p>My thought process is to iterate over each column named 0 - 2, check to see if the row-wise values match between 'Flag' and the columns 0-2. Then return if they matched or not. I am not entirely sure which would be the best way to store the match result.</p> <p>Maybe utilizing a different structured approach would be beneficial.</p> <p>I provided a sample frame that should have some matches if I can execute this properly.</p> <p>Thanks for any help.</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> in combination with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>eq</code></a>, then return the row if <em>any</em> of the columns match with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>.any</code></a>:</p> <pre><code>m = df.iloc[:, :-1].eq(df['Flag'], axis=0).any(axis=1) df['indicator'] = m 0 1 2 Flag indicator 0 123 456 789 123 True 1 357 125 234 863 False 2 168 298 573 298 True 3 123 234 573 902 False </code></pre> <p>The intermediate result is a boolean frame, which you can also use directly for boolean indexing:</p> <pre><code>df.iloc[:, :-1].eq(df['Flag'], axis=0) 0 1 2 0 True False False 1 False False False 2 False True False 3 False False False </code></pre> <p>Then if we chain it with <code>any</code>:</p> <pre><code>df.iloc[:, :-1].eq(df['Flag'], axis=0).any(axis=1) 0 True 1 False 2 True 3 False dtype: bool </code></pre>
python-3.x|pandas|for-loop|while-loop|multiple-columns
3
374,949
56,112,796
Split a list with different number of elements into separate columns in a dataframe
<p>I am extracting results from SQL queries into my Pandas data frame. The results are either 'Min and Max' or 'Min, Max, and Average'. </p> <p><a href="https://i.stack.imgur.com/4qqPO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4qqPO.png" alt="Min Max Data frame"></a></p> <p>I want to split the Results column into separate columns in the existing data frame. I tried the code below: </p> <pre><code>df[["Max","Min", "Average"]] = df.apply(lambda x: pd.Series({"Min_value": x[-1][0], "Max_value": x[-1][1], "Avg_value": x[-1][2]}), axis=1) </code></pre> <p>Sample Output:</p> <pre><code>Data = {'SQL_Query': ['SELECT MIN([Batch_Date_Time]) as Min_value, MAX([Batch_Date_Time]) as Max_value FROM [dbo].[dq_account]', 'SELECT MIN([Trxn_amt]) as Min_value, MAX([Trxn_amt]) as Max_value, AVG([Trxn_amt]) as Avg_value FROM [dbo].[dq_trxn]', 'SELECT MIN([Trxn_date]) as Min_value, MAX([Trxn_date]) as Max_value FROM [dbo].[dq_trxn]'], 'Results': ['[2019-04-01 00:00:00, 2099-04-30 00:00:00]', '[-1991.0, 8910.22, 1912.4404615384615]', '[2019-04-01, 2099-04-30]'], 'Min': ['2019-04-01 00:00:00', '-1991.0', '2019-04-01'], 'Max': ['2099-04-30 00:00:00', '8910.22', '2099-04-30'], 'Avg': ['NA', '1912.4404615384615', 'NA']} df = pd.DataFrame(Data,columns= ['SQL_Query', 'Results', 'Min', 'Max', 'Avg']) </code></pre> <p>But since element 2 does not exist in the results for queries 1 and 3, I get an error - IndexError: ('row index out of range index=2 len=2', 'occurred at index 0')</p> <p>I don't understand how to resolve this error.</p>
<p>In your DF above, I've changed the dates to strings in a list. A vectorized solution is provided by tolist(). </p> <pre><code>pd.concat([df['SQL_Query'],pd.DataFrame(df.Results.values.tolist(), columns=['Min', 'Max', 'Avg'])], axis=1) SQL_Query Min Max Avg 0 SELECT MIN([Bat... 2019-04-01 00:00:00 2099-04-30 00:00:00 NaN 1 SELECT MIN([Trx... -1991 8910.22 1912.440461 2 SELECT MIN([Trx... 2019-04-01 2099-04-30 NaN </code></pre> <p><strong>EDIT</strong></p> <p>I should have included the detail of changing your data as per your comments above. I've modified the data to make it a list instead of one string. </p> <pre><code> "Results": [ ["2019-04-01 00:00:00", "2099-04-30 00:00:00"], [-1991.0, 8910.22, 1912.440461], ["2019-04-01", "2099-04-30"], ], </code></pre> <p>If you haven't changed this then you will get the error mentioned in your comment. Furthermore, the dates will likely be datetime objects, not strings as I've shown, but this should not affect the results. </p>
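<p>If you'd rather not edit the data by hand, a small sketch of one way to turn the original string form (e.g. <code>'[2019-04-01, 2099-04-30]'</code>) into real lists first, so the <code>tolist()</code> trick above works and the missing <code>Avg</code> is padded with <code>NaN</code> automatically (the values stay strings here):</p> <pre><code># Results entries are strings like '[2019-04-01, 2099-04-30]';
# strip the brackets and split on ', ' to get lists of strings
df['Results'] = df['Results'].str.strip('[]').str.split(', ')

pd.concat([df['SQL_Query'],
           pd.DataFrame(df['Results'].tolist(),
                        columns=['Min', 'Max', 'Avg'])], axis=1)
</code></pre>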
pandas|list
0
374,950
56,256,480
How do I stop my piece of Python code from aggressively rounding off values to 1 decimal place
<p>I am trying to create a new column in my pandas dataframe that is the result of a basic mathematical equation performed on other columns in the dataset. The problem now is that the values captured in the column are extremely rounded up and does not represent the true values.</p> <p>2.5364 should not be rounded off to 2.5 and 3.775 should not be rounded off to 3.8</p> <p>I have tried to declare the denominators as floats in a bid to trick the system to supply values that look like that. ie 12/3.00 should be 4.00 but this is still returning 4.0 instead.</p> <p>This is currently what I am doing:</p> <pre class="lang-py prettyprint-override"><code>normal_load = 3 df['FirstPart_GPA'] = ((df[first_part].sum(axis = 1, skipna = True))/(normal_load*5.00)) </code></pre> <p>I set skipna to true because sometimes a column might not have any value but I still want to be able to calculate the GPA without the system throwing out any errors since any number plus NAN would give NAN.</p> <p>I am working with a dataframe that looks like this:</p> <pre class="lang-py prettyprint-override"><code>dict = {'course1': [15,12], 'course2': [9,6], 'course3': [12,15], 'course4': [15,3], 'course5': [15,9], 'course6': [9,12]} df = pd.DataFrame(dict) </code></pre> <p>Note that the dataframe I have contains some null values because some courses are electives. Please help me out. I am out of ideas.</p>
<p>You have not defined the first_part variable in your code, so I am going to assume it is some subset of dataframe columns, e.g.:</p> <pre><code>first_part=['course1', 'course2', 'course3'] </code></pre> <p>All of the numbers in your dataframe are integer multiples of 3, therefore when you sum up any of them and divide by 15, you will always get a decimal number with no more than 1 digit after the decimal dot. Your values are not rounded off, they are exact.</p> <p>To display numbers with two digits after the decimal dot, add a line:</p> <pre><code>pd.options.display.float_format = '{:,.2f}'.format </code></pre> <p>Now</p> <pre><code>df['FirstPart_GPA'] = ((df[first_part].sum(axis = 1, skipna = True))/(normal_load*5.00)) df course1 course2 course3 course4 course5 course6 FirstPart_GPA 0 15 9 12 15 15 9 2.40 1 12 6 15 3 9 12 2.20 </code></pre>
python|pandas|dataframe
1
374,951
56,160,379
How to use np.where in another np.where (context: ray tracing)
<p>The question is: how to use two np.where in the same statement, like this (oversimplified):</p> <pre><code>np.where((ndarr1==ndarr2),np.where((ndarr1+ndarr2==ndarr3),True,False),False) </code></pre> <p>To avoid computing second conditional statement if the first is not reached.</p> <p>My first objective is to find the intersection of a ray in a triangle, if there is one. This problem can be solved by this algorithm (found on stackoverflow):</p> <pre><code>def intersect_line_triangle(q1,q2,p1,p2,p3): def signed_tetra_volume(a,b,c,d): return np.sign(np.dot(np.cross(b-a,c-a),d-a)/6.0) s1 = signed_tetra_volume(q1,p1,p2,p3) s2 = signed_tetra_volume(q2,p1,p2,p3) if s1 != s2: s3 = signed_tetra_volume(q1,q2,p1,p2) s4 = signed_tetra_volume(q1,q2,p2,p3) s5 = signed_tetra_volume(q1,q2,p3,p1) if s3 == s4 and s4 == s5: n = np.cross(p2-p1,p3-p1) t = np.dot(p1-q1,n) / np.dot(q2-q1,n) return q1 + t * (q2-q1) return None </code></pre> <p>Here are two conditional statements:</p> <ol> <li>s1!=s2</li> <li>s3==s4 &amp; s4==s5</li> </ol> <p>Now since I have >20k triangles to check, I want to apply this function on all triangles at the same time. </p> <p>First solution is:</p> <pre><code>s1 = vol(r0,tri[:,0,:],tri[:,1,:],tri[:,2,:]) s2 = vol(r1,tri[:,0,:],tri[:,1,:],tri[:,2,:]) s3 = vol(r1,r2,tri[:,0,:],tri[:,1,:]) s4 = vol(r1,r2,tri[:,1,:],tri[:,2,:]) s5 = vol(r1,r2,tri[:,2,:],tri[:,0,:]) np.where((s1!=s2) &amp; (s3+s4==s4+s5),intersect(),False) </code></pre> <p>where s1,s2,s3,s4,s5 are arrays containing the value S for each triangle. Problem is, it means I have to compute s3,s4,and s5 for all triangles.</p> <p>Now the ideal would be to compute statement 2 (and s3,s4,s5) only when statement 1 is True, with something like this:</p> <pre><code>check= np.where((s1!=s2),np.where((compute(s3)==compute(s4)) &amp; (compute(s4)==compute(s5), compute(intersection),False),False) </code></pre> <p>(to simplify explanation, I just stated 'compute' instead of the whole computing process. Here, 'compute' is does only on the appropriate triangles).</p> <p>Now of course this option doesn't work (and computes s4 two times), but I'd gladly have some recommendations on a similar process</p>
<p>Here's how I used masked arrays to answer this problem:</p> <pre><code> loTrue= np.where((s1!=s2),False,True) s3=ma.masked_array(np.sign(dot(np.cross(r0r1, r0t0), r0t1)),mask=loTrue) s4=ma.masked_array(np.sign(dot(np.cross(r0r1, r0t1), r0t2)),mask=loTrue) s5=ma.masked_array(np.sign(dot(np.cross(r0r1, r0t2), r0t0)),mask=loTrue) loTrue= ma.masked_array(np.where((abs(s3-s4)&lt;1e-4) &amp; ( abs(s5-s4)&lt;1e-4),True,False),mask=loTrue) #also works when computing s3,s4 and s5 inside loTrue, like this: loTrue= np.where((s1!=s2),False,True) loTrue= ma.masked_array(np.where( (abs(np.sign(dot(np.cross(r0r1, r0t0), r0t1))-np.sign(dot(np.cross(r0r1, r0t1), r0t2)))&lt;1e-4) &amp; (abs(np.sign(dot(np.cross(r0r1, r0t2), r0t0))-np.sign(dot(np.cross(r0r1, r0t1), r0t2)))&lt;1e-4),True,False) ,mask=loTrue) </code></pre> <p>Note that the same process, when not using such an approach, is done like this:</p> <pre><code> s3= np.sign(dot(np.cross(r0r1, r0t0), r0t1) /6.0) s4= np.sign(dot(np.cross(r0r1, r0t1), r0t2) /6.0) s5= np.sign(dot(np.cross(r0r1, r0t2), r0t0) /6.0) loTrue= np.where((s1!=s2) &amp; (abs(s3-s4)&lt;1e-4) &amp; ( abs(s5-s4)&lt;1e-4) ,True,False) </code></pre> <p>Both give the same results; however, when looping on this process for 10k iterations, NOT using masked arrays is faster (26 secs without masked arrays, 31 secs with masked arrays, and 33 secs when using masked arrays in one line only, i.e. not computing s3, s4 and s5 separately).</p> <p>Conclusion: nesting the two conditions is solved here with masked arrays (note that the mask indicates where values won't be computed, hence loTrue must first be set to False (0) where the condition is verified). However, in this scenario, it's not faster.</p>
python-3.x|numpy|optimization|geometry
0
374,952
56,123,378
Finding the logits with respect to labels Tensorflow Python
<p>I have the label array and logits array as: </p> <pre><code>label = [1,1,0,1,-1,-1,1,0,-1,0,-1,-1,0,0,0,1,1,1,-1,1] logits = [0.2,0.3,0.4,0.1,-1.4,-2,0.4,0.5,-0.231,1.9,1.4,-1.456,0.12,-0.45,0.5,0.3,0.4,0.2,1.2,12] </code></pre> <p>Using Tensorflow, I want to get the values from label and logits where: </p> <blockquote> <p>1&gt; label is greater than zero<br> 2&gt; label is less than zero<br> 3&gt; label is equal to zero </p> </blockquote> <p>I would like to have a result something like this: </p> <pre><code>label1,logits1 = some_Condition_logic_Where(label &gt; 0) _ returns respective labels and logits </code></pre> <p>Can anyone suggest how this is achievable? </p> <p><strong>EDITED:</strong></p> <pre><code>&gt;&gt;&gt; label = [1,1,0,1,-1,-1,1,0,-1,0,-1,-1,0,0,0,1,1,1,-1,1] &gt;&gt;&gt; logits = [0.2,0.3,0.4,0.1,-1.4,-2,0.4,0.5,-0.231,1.9,1.4,-1.456,0.12,-0.45,0.5,0.3,0.4,0.2,1.2,12] &gt;&gt;&gt; label1 = [];logits1 = [] &gt;&gt;&gt; for l1,l2 in zip(label,logits): ... if(l1&gt;0): ... label1.append(l1) ... logits1.append(l2) ... &gt;&gt;&gt; label1 [1, 1, 1, 1, 1, 1, 1, 1] &gt;&gt;&gt; logits1 [0.2, 0.3, 0.1, 0.4, 0.3, 0.4, 0.2, 12] </code></pre> <p>I want this logic implemented in TensorFlow, and the same for the values with <code>-1 and 0</code>. How can I achieve this?</p>
<p>You can use <code>tf.boolean_mask</code>.</p> <pre><code>import tensorflow as tf label = tf.constant([1,1,0,1,-1,-1,1,0,-1,0,-1,-1,0,0,0,1,1,1,-1,1],dtype=tf.float32) logits = tf.constant([0.2,0.3,0.4,0.1,-1.4,-2,0.4,0.5,-0.231,1.9,1.4,-1.456,0.12,-0.45,0.5,0.3,0.4,0.2,1.2,12],dtype=tf.float32) # label&gt;0 label1 = tf.boolean_mask(label,tf.greater(label,0)) logits1 = tf.boolean_mask(logits,tf.greater(label,0)) # label&lt;0 label2 = tf.boolean_mask(label,tf.less(label,0)) logits2 = tf.boolean_mask(logits,tf.less(label,0)) # label=0 label3 = tf.boolean_mask(label,tf.equal(label,0)) logits3 = tf.boolean_mask(logits,tf.equal(label,0)) with tf.Session() as sess: print(sess.run(label1)) print(sess.run(logits1)) print(sess.run(label2)) print(sess.run(logits2)) print(sess.run(label3)) print(sess.run(logits3)) [1. 1. 1. 1. 1. 1. 1. 1.] [ 0.2 0.3 0.1 0.4 0.3 0.4 0.2 12. ] [-1. -1. -1. -1. -1. -1.] [-1.4 -2. -0.231 1.4 -1.456 1.2 ] [0. 0. 0. 0. 0. 0.] [ 0.4 0.5 1.9 0.12 -0.45 0.5 ] </code></pre>
python|tensorflow
1
374,953
56,296,355
Perform one-hot encoding on pandas dataframe on multiple column types
<p>So I have a pandas dataframe where certain columns have values of type list and a mix of columns of non-numeric and numeric data.</p> <p>Example data</p> <pre><code> dst_address dst_enforcement fwd_count ... 1 1.2.3.4 [Any,core] 8 2 3.4.5.6 [] 9 3 6.7.8.9 [Any] 10 4 8.10.3.2 [core] 0 </code></pre> <p>So far I've been able to find out which columns are non-numeric by these 2 lines of code</p> <pre><code>col_groups = df.columns.to_series().groupby(df.dtypes).groups non_numeric_cols = col_groups[np.dtype('O')] </code></pre> <p>Of all these non-numeric columns, I need to figure out which ones have list as data type and I want to perform one-hot encoding on all non-numeric columns (including those list type)</p> <p>EDIT: my expected output for above example would be something like</p> <pre><code> 1.2.3.4 | 3.4.5.6 | 6.7.8.9 | 8.10.3.2 | empty | Any | core | fwd_count ... 1 1 0 0 0 0 1 1 8 2 0 1 0 0 1 0 0 9 3 0 0 1 0 0 1 0 10 4 0 0 0 1 0 0 1 0 </code></pre>
<p>I use 3 steps as follows:</p> <pre><code>df['dst_enforcement'] = df.dst_enforcement.apply(lambda x: x if x else ['empty']) dm1 = pd.get_dummies(df[df.columns.difference(['dst_enforcement'])], prefix='', prefix_sep='') dm2 = df.dst_enforcement.str.join('-').str.get_dummies('-') pd.concat([dm1, dm2], axis=1) Out[1221]: fwd_count 1.2.3.4 3.4.5.6 6.7.8.9 8.10.3.2 Any core empty 1 8 1 0 0 0 1 1 0 2 9 0 1 0 0 0 0 1 3 10 0 0 1 0 1 0 0 4 0 0 0 0 1 0 1 0 </code></pre>
python|pandas|one-hot-encoding|data-processing
2
374,954
56,370,060
How to extract a value from a Pandas data frame from a reference in the frame, then "walk up" the frame to another specified value?
<p>I have the following toy data set:</p> <pre><code>import pandas as pd from io import StringIO # read the data df = pd.read_csv(StringIO(""" Date Return 1/28/2009 -0.825148 1/29/2009 -0.859997 1/30/2009 0.000000 2/2/2009 -0.909546 2/3/2009 0.000000 2/4/2009 -0.899110 2/5/2009 -0.866104 2/6/2009 0.000000 2/9/2009 -0.830099 2/10/2009 -0.885111 2/11/2009 -0.878320 2/12/2009 -0.881853 2/13/2009 -0.884432 2/17/2009 -0.947781 2/18/2009 -0.966414 2/19/2009 -1.016344 2/20/2009 -1.029667 2/23/2009 -1.087432 2/24/2009 -1.050808 2/25/2009 -1.089594 2/26/2009 -1.121556 2/27/2009 -1.105873 3/2/2009 -1.205019 3/3/2009 -1.191488 3/4/2009 -1.059311 3/5/2009 -1.135962 3/6/2009 -1.147031 3/9/2009 -1.117328 3/10/2009 -1.009050"""), sep="\s+").reset_index() </code></pre> <p>My goals are to:</p> <p>a) find the most negative value in the "Return" column</p> <p>b) find the date this value occurred</p> <p>c) then "walk up" the "Return" column to find the <strong>first instance</strong> of a specific value (in this case, 0.000000).</p> <p>d) find the date associated with the value returned in step "c"</p> <p>The results I'm looking for are:</p> <p>a) -1.205019</p> <p>b) March 2, 2009</p> <p>c) 0.000000</p> <p>d) February 6, 2009</p> <p>I can find "a" with the following code:</p> <pre><code>max_dd = df['Return'].min() </code></pre> <p>To get "b", I tried to use the following code:</p> <pre><code>df.loc[df['Return'] == max_dd, 'Date'] </code></pre> <p>But, the error message says:</p> <pre><code>KeyError: 'Date' </code></pre> <p><strong>Note:</strong> I can get "b" to work in this toy example, but the actual data throws the error message. Here is the <em>actual</em> code used to import the data from the csv file:</p> <pre><code>df = pd.read_csv(FILE_NAME, parse_dates=True).reset_index() df.set_index('Date', inplace = True) &lt;&lt;--- this is causing the problem </code></pre>
<p>Filter your dataframe for the rows that come before the position of the minimum value in Return and where Return equals zero, then show the last value. </p> <pre><code>df.loc[(df.index &lt; df.Return.idxmin()) &amp; (df['Return'] == 0), "Date"].tail(1) </code></pre>
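<p>For completeness, a minimal sketch covering all four goals, assuming the toy frame above with its default integer index (the expected results are shown as comments):</p> <pre><code>min_idx = df['Return'].idxmin()

max_dd = df['Return'].min()            # (a) -1.205019
max_dd_date = df.loc[min_idx, 'Date']  # (b) 3/2/2009

# "walk up": zeros that occur before the minimum, take the last one
prior = df.loc[(df.index &lt; min_idx) &amp; df['Return'].eq(0)]
walk_up_val = prior['Return'].iloc[-1]   # (c) 0.0
walk_up_date = prior['Date'].iloc[-1]    # (d) 2/6/2009
</code></pre>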
python|pandas
2
374,955
56,063,686
Considerations of model definitions when moving from Tensorflow to PyTorch
<p>I've just recently switched to PyTorch after getting frustrated in debugging tf and understand that it is equivalent to coding in numpy almost completely. My question is what are the permitted python aspects we can use in a PyTorch model (to be put completely on GPU) eg. if-else has to be implemented as follows in tensorflow</p> <pre><code>a = tf.Variable([1,2,3,4,5], dtype=tf.float32) b = tf.Variable([6,7,8,9,10], dtype=tf.float32) p = tf.placeholder(dtype=tf.float32) ps = tf.placeholder(dtype=tf.bool) li = [None]*5 li_switch = [True, False, False, True, True] for i in range(5): li[i] = tf.Variable(tf.random.normal([5])) sess = tf.Session() sess.run(tf.global_variables_initializer()) def func_0(): return tf.add(a, p) def func_1(): return tf.subtract(b, p) with tf.device('GPU:0'): my_op = tf.cond(ps, func_1, func_0) for i in range(5): print(sess.run(my_op, feed_dict={p:li[i], ps:li_switch[i]})) </code></pre> <p>How would the structure change in pytorch for the above code? How to place the variables and ops above on GPU and parallelize the list inputs to our graph in pytorch?</p>
<p>In PyTorch, the code can be written the same way normal Python code is written.</p> <p><strong>CPU</strong></p> <pre><code>import torch a = torch.FloatTensor([1,2,3,4,5]) b = torch.FloatTensor([6,7,8,9,10]) cond = torch.randn(5) for ci in cond: if ci &gt; 0: print(torch.add(a, 1)) else: print(torch.sub(b, 1)) </code></pre> <p><strong>GPU</strong></p> <p>Move the tensors to GPU like this:</p> <pre><code>a = torch.FloatTensor([1,2,3,4,5]).to('cuda') b = torch.FloatTensor([6,7,8,9,10]).to('cuda') cond = torch.randn(5).to('cuda') import torch.nn as nn class Cond(nn.Module): def __init__(self): super(Cond, self).__init__() def forward(self, cond, a, b): result = torch.empty(cond.shape[0], a.shape[0]).cuda() for i, ci in enumerate(cond): if ci &gt; 0: result[i] = torch.add(a, 1) else: result[i] = torch.sub(b, 1) return result cond_model = Cond().to('cuda') output = cond_model(cond, a, b) </code></pre> <p><a href="https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors</a></p>
python|tensorflow|pytorch
1
374,956
55,881,160
Confused by pandas dtype conversion to np.float16: value 2053 becomes 2052
<p>I was trying to reduce memory consumption by downcasting float data types.</p> <p>I checked the range of <code>np.float16</code>:</p> <pre><code>np.finfo(np.float16) finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04, dtype=float16) </code></pre> <p>This shows <code>-6.55040e+04 &lt; 2053 &lt; 6.55040e+04</code></p> <p>Now:</p> <pre class="lang-py prettyprint-override"><code>s = pd.Series([2051,2052,2053,2054]) s.astype(np.float16) 0 2052.0 1 2052.0 2 2052.0 3 2054.0 </code></pre> <p>Why is it so?</p> <p><strong>UPDATE</strong><br> Borrowing documentation from <code>np.finfo</code></p> <pre><code>numpy.finfo class numpy.finfo[source] Machine limits for floating point types. Parameters: dtype : float, dtype, or instance Kind of floating point data-type about which to get information. Attributes eps (float) The smallest representable positive number such that 1.0 + eps != 1.0. Type of eps is an appropriate floating point type. epsneg (floating point number of the appropriate type) The smallest representable positive number such that 1.0 - epsneg != 1.0. iexp (int) The number of bits in the exponent portion of the floating point representation. machar (MachAr) The object which calculated these parameters and holds more detailed information. machep (int) The exponent that yields eps. max (floating point number of the appropriate type) The largest representable number. maxexp (int) The smallest positive power of the base (2) that causes overflow. min (floating point number of the appropriate type) The smallest representable number, typically -max. minexp (int) The most negative power of the base (2) consistent with there being no leading 0’s in the mantissa. negep (int) The exponent that yields epsneg. nexp (int) The number of bits in the exponent including its sign and bias. nmant (int) The number of bits in the mantissa. precision (int) The approximate number of decimal digits to which this kind of float is precise. resolution (floating point number of the appropriate type) The approximate decimal resolution of this type, i.e., 10**-precision. tiny (float) The smallest positive usable number. Type of tiny is an appropriate floating point type. </code></pre>
<p>It's not about <code>min</code> or <code>max</code>, which only determine the lowest and highest values a <code>float16</code> can take, but about precision: how many significant bits a <code>float16</code> carries.</p> <p>A <code>float16</code> has a 10-bit mantissa (see <code>nmant</code> in the attributes you quoted), i.e. 11 significant bits, which is roughly 3 significant decimal digits. Every integer up to 2048 is exactly representable, but above 2048 the spacing between representable values grows to 2, so only even integers can be stored. 2051 and 2053 each fall exactly halfway between two representable neighbours; under round-half-to-even both land on 2052 (the neighbour with the even significand), while 2052 and 2054 pass through unchanged because they are themselves representable.</p>
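<p>A quick way to see the spacing effect directly (the commented values are what NumPy should print):</p> <pre><code>import numpy as np

print(np.array([2051, 2052, 2053, 2054], dtype=np.float16))
# expected: [2052. 2052. 2052. 2054.]

print(np.finfo(np.float16).nmant)        # 10 mantissa bits -> 11 significant bits
print(np.float16(2048) + np.float16(1))  # 2048.0: the +1 is below the spacing here
</code></pre>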
python|pandas|numpy
3
374,957
55,582,218
Change the value of an item given a flag
<p>I have a data frame which holds bets placed on horses, with each row being a new bet. Each bet has multiple attributes including location, the name of the horse, winnings/losses etc. The problem is that bet winnings are given as positive values, and a flag attribute says whether each bet is a win or a loss.</p> <p>This is the data frame provided:</p> <pre><code> Race Course Horse Year Month Date Amount Won/Lost 0 Aintree Red Rum 2017 5 12 11.58 won 1 Punchestown Camelot 2016 12 22 122.52 won 2 Sandown Beef of Salmon 2016 11 17 20.00 lost 3 Ayr Corbiere 2016 11 3 25.00 lost 4 Fairyhouse Red Rum 2016 12 2 65.75 won 5 Ayr Camelot 2017 3 11 12.05 won 6 Aintree Hurricane Fly 2017 5 12 11.58 won 7 Punchestown Beef or Salmon 2016 12 22 112.52 won 8 Sandown Aldaniti 2016 11 17 10.00 lost 9 Ayr Henry the Navigator 2016 11 1 15.00 lost 10 Fairyhouse Jumanji 2016 10 2 65.75 won 11 Ayr Came Second 2017 3 11 12.05 won 12 Aintree Murder 2017 5 12 5.00 lost 13 Punchestown King Arthur 2016 6 22 52.52 won 14 Sandown Filet of Fish 2016 11 17 20.00 lost 15 Ayr Denial 2016 11 3 25.00 lost 16 Fairyhouse Don't Gamble 2016 12 12 165.75 won 17 Ayr Ireland 2017 1 11 22.05 won </code></pre> <p>And I need to create a data frame of the following format:</p> <pre><code>Year Total Won Total Lost 2016 €123.45 €678.90 2017 €543.21 €987.60 </code></pre> <p>I have been trying to iterate through the columns and have also tried using the where function, but cannot seem to get anything to work.</p>
<p>Use <code>groupby</code>, <code>sum</code>, and then unstack the result:</p> <pre><code>df.groupby(['Year', 'Won/Lost'])['Amount'].sum().unstack(-1).add_prefix('total_') Won/Lost total_lost total_won Year 2016 115.0 584.81 2017 5.0 69.31 </code></pre>
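<p>If you also want the euro-formatted display strings from the target layout, one option is to format after aggregating (display-only; the underlying numbers become strings):</p> <pre><code>out = (df.groupby(['Year', 'Won/Lost'])['Amount'].sum()
         .unstack(-1)
         .add_prefix('total_'))

# format the numbers as euro strings for display
out = out.applymap('€{:,.2f}'.format)
print(out)
</code></pre>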
python|pandas|dataframe
4
374,958
55,881,784
Keras: custom loss causes "You must feed a value for placeholder tensor"
<p>I'm trying to build a variational autoencoder in Keras following the <a href="https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py" rel="nofollow noreferrer">example</a> in the Keras repository. Here's my setup:</p> <pre><code>from keras.layers import Dense, Input, merge, concatenate, Dense, LSTM, Lambda, Flatten, Reshape from keras import backend as K from keras.models import Model from keras.losses import mse import numpy as np class VAE: def __init__(self, n_verts=15, n_dims=3, n_layers=3, n_units=128, latent_dim=2): self.n_verts = n_verts self.n_dims = n_dims self.n_layers = n_layers self.n_units = n_units self.latent_dim = latent_dim self.encoder = self.build_encoder() self.decoder = self.build_decoder() inputs = Input((self.n_verts, self.n_dims)) outputs = self.decoder(self.encoder(inputs)[2]) self.model = Model(inputs, outputs, name='vae') self.model.compile(optimizer='adam', loss=self.get_loss) def build_encoder(self): i = Input(shape=(self.n_verts, self.n_dims), name='encoder_input') h = i h = Flatten()(h) h = Dense(self.n_units, activation='relu')(h) for idx in range(1, self.n_layers, 1): h = Dense(self.n_units // (2*idx), activation='relu')(h) self.z_mean = Dense(self.latent_dim, name='z_mean')(h) self.z_log_var = Dense(self.latent_dim, name='z_log_var')(h) # use reparameterization trick to factor stochastic node out of gradient flow self.z = Lambda(self.sample, output_shape=(self.latent_dim,), name='z')([self.z_mean, self.z_log_var]) return Model(i, [self.z_mean, self.z_log_var, self.z], name='encoder') def sample(self, args): ''' Reparameterization trick by sampling from an isotropic unit Gaussian. @arg (tensor): mean and log of variance of Q(z|X) @returns z (tensor): sampled latent vector ''' z_mean, z_log_var = args batch = K.shape(z_mean)[0] dim = K.int_shape(z_mean)[1] # by default, random_normal has mean = 0 and std = 1.0 epsilon = K.random_normal(shape=(batch, dim)) return z_mean + K.exp(0.5 * z_log_var) * epsilon def build_decoder(self): i = Input(shape=(self.latent_dim,), name='z_sampling') h = i for idx in range(1, self.n_layers, 1): h = Dense(self.n_units//(2*(self.n_layers-idx)), activation='relu')(h) h = Dense(self.n_units, activation='relu')(h) h = Dense(self.n_verts * self.n_dims, activation='sigmoid')(h) o = Reshape((self.n_verts, self.n_dims))(h) return Model(i, o, name='decoder') def get_loss(self, inputs, outputs): reconstruction_loss = mse(inputs, outputs) reconstruction_loss *= self.n_verts * self.n_dims return reconstruction_loss # this works fine kl_loss = 1 + self.z_log_var - K.square(self.z_mean) - K.exp(self.z_log_var) kl_loss = K.sum(kl_loss, axis=-1) kl_loss *= -0.5 vae_loss = K.mean(reconstruction_loss + kl_loss) # todo: make this balance parameterizable return vae_loss # this doesn't def train(self, X, predict='frame', n_epochs=10000): for idx in range(n_epochs): i = np.random.randint(0, X.shape[1]-1) # sample idx frame = np.expand_dims( X[:,i:i+1,:].squeeze(), axis=0) # shape = 1 sample, v verts, d dims next_frame = np.expand_dims( X[:,i+1:i+2,:].squeeze(), axis=0) if predict == 'frame': loss = self.model.train_on_batch(frame, frame) elif predict == 'next_frame': loss = self.model.train_on_batch(frame, next_frame) if idx % 1000 == 0: print(' * training idx', idx, 'loss', loss) X_train = np.random.rand(15, 100, 3) vae = VAE(n_verts=15, latent_dim=2, n_layers=3, n_units=128) vae.encoder.summary() vae.train(X_train, n_epochs=10000, predict='frame') </code></pre> <p>This works, but if you look at the 
<code>get_loss</code> function you'll see it's returning a little prematurely. If I comment out <code>return reconstruction_loss</code> so that the loss function returns <code>vae_loss</code>, I get an error:</p> <pre><code>--------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) &lt;ipython-input-7-57d76ed539a4&gt; in &lt;module&gt; 78 vae = VAE(n_verts=15, latent_dim=2, n_layers=3, n_units=128) 79 vae.encoder.summary() ---&gt; 80 vae.train(X_train, n_epochs=10000, predict='frame') &lt;ipython-input-7-57d76ed539a4&gt; in train(self, X, predict, n_epochs) 70 frame = np.expand_dims( X[:,i:i+1,:].squeeze(), axis=0) # shape = 1 sample, v verts, d dims 71 next_frame = np.expand_dims( X[:,i+1:i+2,:].squeeze(), axis=0) ---&gt; 72 if predict == 'frame': loss = self.model.train_on_batch(frame, frame) 73 elif predict == 'next_frame': loss = self.model.train_on_batch(frame, next_frame) 74 if idx % 1000 == 0: ~/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight) 1215 ins = x + y + sample_weights 1216 self._make_train_function() -&gt; 1217 outputs = self.train_function(ins) 1218 return unpack_singleton(outputs) 1219 ~/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs) 2713 return self._legacy_call(inputs) 2714 -&gt; 2715 return self._call(inputs) 2716 else: 2717 if py_any(is_tensor(x) for x in inputs): ~/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in _call(self, inputs) 2673 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata) 2674 else: -&gt; 2675 fetched = self._callable_fn(*array_vals) 2676 return fetched[:len(self.outputs)] 2677 ~/anaconda/envs/3.5/lib/python3.5/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs) 1437 ret = tf_session.TF_SessionRunCallable( 1438 self._session._session, self._handle, args, status, -&gt; 1439 run_metadata_ptr) 1440 if run_metadata: 1441 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) ~/anaconda/envs/3.5/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg) 526 None, None, 527 compat.as_text(c_api.TF_Message(self.status.status)), --&gt; 528 c_api.TF_GetCode(self.status.status)) 529 # Delete the underlying status object from memory otherwise it stays alive 530 # as there is a reference to status from this from the traceback due to InvalidArgumentError: You must feed a value for placeholder tensor 'encoder_input_6' with dtype float and shape [?,15,3] [[{{node encoder_input_6}}]] </code></pre> <p>Does anyone know how to resolve this error? Any suggestions would be hugely appreciated!</p>
<p>Ah, I got this worked out once my variables were scoped properly:</p> <pre><code>from keras.layers import Dense, Input, merge, concatenate, Dense, LSTM, Lambda, Flatten, Reshape from keras import backend as K from keras.models import Model from keras.losses import mse import numpy as np class VAE: def __init__(self, n_verts=15, n_dims=3, n_layers=3, n_units=128, latent_dim=2): self.input_shape = (n_verts*n_dims,) self.n_layers = n_layers self.n_units = n_units self.latent_dim = latent_dim # build the encoder and decoder inputs = Input(shape=self.input_shape, name='encoder_input') self.encoder = self.get_encoder(inputs) self.decoder = self.get_decoder() # build the VAE outputs = self.decoder(self.encoder(inputs)[2]) self.model = Model(inputs, outputs, name='vae_mlp') # add loss and compile self.model.add_loss(self.get_loss(inputs, outputs)) self.model.compile(optimizer='adam') def get_encoder(self, inputs): h = inputs h = Dense(self.n_units, activation='relu')(h) for idx in range(1, self.n_layers, 1): h = Dense(self.n_units // (2*idx), activation='relu')(h) self.z_mean = Dense(self.latent_dim, name='z_mean')(h) self.z_log_var = Dense(self.latent_dim, name='z_log_var')(h) z = Lambda(self.sampling, output_shape=(self.latent_dim,), name='z')([self.z_mean, self.z_log_var]) encoder = Model(inputs, [self.z_mean, self.z_log_var, z], name='encoder') return encoder def sampling(self, args): self.z_mean, self.z_log_var = args batch = K.shape(self.z_mean)[0] dim = K.int_shape(self.z_mean)[1] # by default, random_normal has mean = 0 and std = 1.0 epsilon = K.random_normal(shape=(batch, dim)) return self.z_mean + K.exp(0.5 * self.z_log_var) * epsilon def get_decoder(self): latent_inputs = Input(shape=(self.latent_dim,), name='z_sampling') h = latent_inputs for idx in range(1, self.n_layers, 1): h = Dense(self.n_units//(2*(self.n_layers-idx)), activation='relu')(h) h = Dense(self.n_units, activation='relu')(h) outputs = Dense(self.input_shape[0], activation='sigmoid')(h) decoder = Model(latent_inputs, outputs, name='decoder') return decoder def get_loss(self, inputs, outputs): reconstruction_loss = mse(inputs, outputs) reconstruction_loss *= self.input_shape[0] kl_loss = 1 + self.z_log_var - K.square(self.z_mean) - K.exp(self.z_log_var) kl_loss = K.sum(kl_loss, axis=-1) kl_loss *= -0.5 vae_loss = K.mean(reconstruction_loss + kl_loss) return vae_loss # train x_train = np.random.rand(10000, 45) vae = VAE(n_verts=15, latent_dim=2, n_layers=3, n_units=128) vae.model.fit(x_train[:-1000,:], epochs=100, batch_size=128, validation_data=(x_train[-1000:,:], None)) </code></pre>
python|tensorflow|keras
1
374,959
55,609,810
How to add new row which copies some columns, but assigns new values in other columns
<p>I have a dataframe that looks like this:</p> <pre><code>df = pd.DataFrame({'VisitorID': [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000], 'EpochTime': [1554888560, 1554888560, 1554888560, 1554888560, 1554888560, 1521333510, 1521333510, 1521333510], 'HitTime': [1400, 5340, 7034, 11034, 13059, 990, 4149, 6450], 'HitNumber':[23, 54, 55, 65, 110, 14, 29, 54], 'PagePath':['orders/details', 'orders/payment', 'orders/afterpayment', 'orders/myorders', 'customercare', 'orders/details', 'orders/payment', 'orders/myorders']}) print(df) VisitorID EpochTime HitTime HitNumber PagePath 0 1000 1554888560 1400 23 orders/details 1 1000 1554888560 5340 54 orders/payment 2 1000 1554888560 7034 55 orders/afterpayment 3 1000 1554888560 11034 65 orders/myorders 4 1000 1554888560 13059 110 customercare 5 1000 1521333510 990 14 orders/details 6 1000 1521333510 4149 29 orders/payment 7 1000 1521333510 6450 54 orders/myorders </code></pre> <p>In reality my dataframe is +- 10 million rows. And has twice the columns. The data consists of website data which shows the behavior of customers. </p> <p><strong>What I want to do</strong><br> To analyze how long the customers are on the website before reaching the first page which is tracked, I want to add one row above each group which copies the values of the top row from columns:</p> <ul> <li>VisitorID</li> <li>EpochTime</li> </ul> <p>But gives new values to columns:</p> <ul> <li>HitTime = 0</li> <li>HitNumber = 0</li> <li>PagePath = <code>Home</code></li> </ul> <p><strong>Info</strong>: The combination of <code>VisitorID</code> + <code>EpochTime</code> makes a group unique.</p> <p>I achieved this with the following code, but it takes +- 5 min to run, I think there should be a faster way:</p> <pre><code>lst = [] for x, y in df.groupby(['VisitorID', 'EpochTime']): lst.append(y.iloc[:1]) df_first = pd.concat(lst, ignore_index=True) df_first['HitTime'] = 0.0 df_first['HitNumber'] = 0.0 df_first['PagePath'] = 'Home' print(df_first) VisitorID EpochTime HitTime HitNumber PagePath 0 1000 1521333510 0.0 0.0 Home 1 1000 1554888560 0.0 0.0 Home df_final = pd.concat([df, df_first], ignore_index=True).sort_values(['VisitorID', 'EpochTime', 'HitNumber']).reset_index(drop=True) print(df_final) VisitorID EpochTime HitTime HitNumber PagePath 0 1000 1521333510 0.0 0.0 Home 1 1000 1521333510 990.0 14.0 orders/details 2 1000 1521333510 4149.0 29.0 orders/payment 3 1000 1521333510 6450.0 54.0 orders/myorders 4 1000 1554888560 0.0 0.0 Home 5 1000 1554888560 1400.0 23.0 orders/details 6 1000 1554888560 5340.0 54.0 orders/payment 7 1000 1554888560 7034.0 55.0 orders/afterpayment 8 1000 1554888560 11034.0 65.0 orders/myorders 9 1000 1554888560 13059.0 110.0 customercare </code></pre> <p>The output of <code>df_final</code> is my <em>expected output</em>.</p> <p>So the question is, can I do this in a more efficient way?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a> for improve performance a bit:</p> <pre><code>d = {'HitTime':0,'HitNumber':0,'PagePath':'Home'} df_first = df.drop_duplicates(['VisitorID', 'EpochTime']).assign(**d) df_final = (pd.concat([df, df_first], ignore_index=True) .sort_values(['VisitorID', 'EpochTime', 'HitNumber']) .reset_index(drop=True)) </code></pre> <hr> <pre><code>print(df_final) VisitorID EpochTime HitTime HitNumber PagePath 0 1000 1521333510 0 0 Home 1 1000 1521333510 990 14 orders/details 2 1000 1521333510 4149 29 orders/payment 3 1000 1521333510 6450 54 orders/myorders 4 1000 1554888560 0 0 Home 5 1000 1554888560 1400 23 orders/details 6 1000 1554888560 5340 54 orders/payment 7 1000 1554888560 7034 55 orders/afterpayment 8 1000 1554888560 11034 65 orders/myorders 9 1000 1554888560 13059 110 customercare </code></pre> <p>Another idea is change index values in <code>df_first</code> by subtracting and last sort by index:</p> <pre><code>d = {'HitTime':0,'HitNumber':0,'PagePath':'Home'} df_first = df.drop_duplicates(['VisitorID', 'EpochTime']).assign(**d) df_first.index -= .5 df_final = pd.concat([df, df_first]).sort_index().reset_index(drop=True) print(df_final) VisitorID EpochTime HitTime HitNumber PagePath 0 1000 1554888560 0 0 Home 1 1000 1554888560 1400 23 orders/details 2 1000 1554888560 5340 54 orders/payment 3 1000 1554888560 7034 55 orders/afterpayment 4 1000 1554888560 11034 65 orders/myorders 5 1000 1554888560 13059 110 customercare 6 1000 1521333510 0 0 Home 7 1000 1521333510 990 14 orders/details 8 1000 1521333510 4149 29 orders/payment 9 1000 1521333510 6450 54 orders/myorders </code></pre>
python|pandas|group-by
2
374,960
55,916,235
What's the difference between multi-(single outputs) NN vs single-(multi targets) NN?
<p>I'm working on the following autoregressive problem with Keras/TF:</p> <p>Inputs : </p> <p><strong><em>m</strong> examples x <strong>10</strong> timesteps (sequence length) x <strong>7</strong> features</em></p> <p>(With each value being a real value)</p> <p>Outputs : </p> <p><strong><em>m</strong> examples x <strong>4</strong> targets/"labels" (real values that I want to predict)</em></p> <p>So far, using LSTMs <strong>with Dense layers at the end (edited)</strong>. So for one example, I give 7 features in a sequence of size 10 and I just want 4 real values out of it (I'm predicting current values according to previous values). </p> <h1><strong>My question is the following</strong> :</h1> <p>What is the difference between predicting :</p> <p>a. 1 output with dimensions <strong>m</strong> x <strong>4</strong></p> <p>b. 4 outputs, each of dimensions <strong>m</strong> x <strong>1</strong></p> <p>I've tried both methods and I don't see any particular difference, but I want to understand what they both do. In the second case, I know I can specify different losses and different weights for the losses on each variable that I want to predict, but this seems to be less correct than the first method.</p> <pre class="lang-py prettyprint-override"><code>#python / Keras-TF #b. four separate outputs model = Model(inputs = X_input, outputs = [Output1,Output2,Output3,Output4]) prediction = model.predict(X_test_normalized) #returns an inconvenient list of 4 [2 by 1 vectors] which in the end gives me m*4 real values as wanted ###### vs ###### #a. single multi-target output model = Model(inputs = X_input, outputs = [Output1]) #vector of dimension 4 prediction = model.predict(X_test_normalized) #returns a m*4 matrix </code></pre>
<p>It does not make a difference most of the time. If your output layers are dense ones, each of them will have an n*1 weight matrix. If you have a single output layer the weight has the shape n*4 instead. Generally one big output layer is faster, as one big matrix multiplication is often faster than multiple small ones.</p> <p>You can weight the outputs of the single output layer, too. Just multiply it with a weight vector of the size of your outputs before giving it to the loss.</p>
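<p>As a minimal sketch of both weighting styles (the weight values here are made up, and <code>model_multi</code>/<code>model_single</code> stand for the two models from the question; <code>weighted_mse</code> is a hypothetical name):</p> <pre><code>import tensorflow as tf

# four separate outputs: weight each output's loss via loss_weights
model_multi.compile(optimizer='adam', loss='mse',
                    loss_weights=[1.0, 2.0, 1.0, 0.5])

# one 4-dimensional output: weight the targets inside a custom loss
w = tf.constant([1.0, 2.0, 1.0, 0.5])

def weighted_mse(y_true, y_pred):
    return tf.reduce_mean(w * tf.square(y_true - y_pred), axis=-1)

model_single.compile(optimizer='adam', loss=weighted_mse)
</code></pre>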
python|tensorflow|machine-learning|keras|deep-learning
0
374,961
55,943,967
Check if a column contains any string from a list
<p>I try to use any() to check if the column contains any string from the list and make a new column with the corresponding results</p> <pre><code>df_data = pd.DataFrame({'A':[2,1,3], 'animals': ['cat, frog', 'kitten, fish', 'frog2, fish']}) cats = ['kitten', 'cat'] df_data['cats'] = df_data.apply(lambda row: True if any(item in cats for item in row['animals']) else False, axis = 1) </code></pre> <p>I got these results, and I don't understand why it is False for the first two rows :</p> <pre><code> A animals cats 0 2 cat, frog False 1 1 kitten, fish False 2 3 frog2, fish False </code></pre> <p>I expect to get False for the last row only</p>
<p>With pandas you should try your best to avoid for loops and <code>apply</code>. Here I am using the <code>DataFrame</code> constructor with <code>isin</code> and <code>any</code>:</p> <pre><code>df_data['cats']=pd.DataFrame(df_data.animals.str.split(', ').tolist()).isin(cats).any(1) df_data A animals cats 0 2 cat, frog True 1 1 kitten, fish True 2 3 frog2, fish False </code></pre>
python|pandas|dataframe
1
374,962
55,740,358
How to train model using TFSlim library?
<p>I'm reading the Object Detection API source code and I wonder how to use TFSlim to train a model.</p> <p>More specifically, when we use Tensorflow to train a model, we use something like this:</p> <pre><code>parameters = model(X_train, Y_train, X_test, Y_test) # Returns: parameters -- parameters learnt by the model. # They can then be used to predict. </code></pre> <p>And to predict the result, we use something like:</p> <pre><code>y_image_prediction = predict(my_image, parameters) </code></pre> <p>But in the file trainer.py, we don't have anything like the above; we only get:</p> <pre><code>slim.learning.train( train_tensor, logdir=train_dir, master=master, is_chief=is_chief, session_config=session_config, startup_delay_steps=train_config.startup_delay_steps, init_fn=init_fn, summary_op=summary_op, number_of_steps=( train_config.num_steps if train_config.num_steps else None), save_summaries_secs=120, sync_optimizer=sync_optimizer, saver=saver) </code></pre> <p>And there is no return value from this <code>slim.learning.train</code> function. So I wonder what the use of the <code>slim.learning.train</code> function is, and how do we get the parameters that can be used to predict the result? </p> <p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/legacy/trainer.py" rel="nofollow noreferrer">HERE</a> is the source code of trainer.py.</p>
<p>The <code>train</code> function does not return a value because it modifies the actual parameters of the model. The function does that by running the <code>train_tensor</code>, which is "A <code>Tensor</code> that, when executed, will apply the gradients and return the loss value.", as written in the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/learning.py#L565" rel="nofollow noreferrer">function documentation</a>.</p> <p>The tensor the documentation talks about is what you get when you tell an optimizer to optimize some cost function. It is <code>opt_op</code> in the following example:</p> <pre class="lang-py prettyprint-override"><code>opt = GradientDescentOptimizer(learning_rate=0.1) opt_op = opt.minimize(cost) </code></pre> <p>Find more in the <a href="https://www.tensorflow.org/api_docs/python/tf/train/Optimizer" rel="nofollow noreferrer">optimizer documentation</a>.</p>
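<p>To connect training and later prediction, here is a minimal sketch of how the pieces fit together in TF-Slim, assuming <code>total_loss</code> is a scalar loss tensor already built from the model graph:</p> <pre><code>import tensorflow as tf
slim = tf.contrib.slim

# total_loss: a scalar loss tensor built from your model graph (assumed)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = slim.learning.create_train_op(total_loss, optimizer)

# Runs the training loop; the learned parameters are saved as
# checkpoints in logdir rather than returned to the caller
slim.learning.train(train_op, logdir='train_dir', number_of_steps=1000)
</code></pre> <p>The learned parameters are not returned; they are written as checkpoints into <code>logdir</code>, and you restore them (e.g. with <code>tf.train.Saver</code>) into the same graph when you want to predict.</p>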
python|tensorflow|machine-learning|deep-learning|computer-vision
0
374,963
55,726,480
ValueError: cannot reindex from a duplicate axis when making DataFrame from dictionary
<p>I have a dictionary with a similar format as the dictionary below:</p> <pre><code>{'ID': Unnamed: 0 2019-04-17 06:54:24 {'a': 6.75, 'b': 7.4} 2019-04-17 07:04:24 {'a': 6.75, 'b': 7.4} 2019-04-17 07:13:24 {'a': 6.75, 'b': 7.4} dtype: object, 'ID2': Unnamed: 0 2019-04-17 06:54:44 {'a': 6.35, 'b': 7.0} 2019-04-17 07:04:44 {'a': 6.35, 'b': 7.0} 2019-04-17 07:13:24 {'a': 6.35, 'b': 7.0} dtype: object, 'ID3': Unnamed: 0 2019-04-17 06:52:44 {'a': 6.65, 'b': 7.3} 2019-04-17 07:02:04 {'a': 6.65, 'b': 7.3} 2019-04-17 07:10:24 {'a': 6.65, 'b': 7.3} dtype: object, 'ID4': Unnamed: 0 2019-04-17 06:54:44 {'a': 5.45, 'b': 5.95} 2019-04-17 07:04:44 {'a': 5.45, 'b': 5.95} 2019-04-17 07:13:24 {'a': 5.45, 'b': 5.95} dtype: object} </code></pre> <p>When I try to perform</p> <pre class="lang-py prettyprint-override"><code>pd.DataFrame(dictionary) </code></pre> <p>I get the following error:</p> <pre><code>ValueError: cannot reindex from a duplicate axis </code></pre> <p>I know this isn't very helpful but I could not reproduce this error. Note there are some series with the same index eg. between ID2 and ID4 above.</p>
<p>Use dictionary comprehension with <code>DataFrame</code> constructor and <code>concat</code>:</p> <pre><code>df = pd.concat({k: pd.DataFrame(v.squeeze().values.tolist(), index=v.index) for k, v in d.items()}) print (df) a b ID 2019-04-17 06:54:24 6.75 7.40 2019-04-17 07:04:24 6.75 7.40 2019-04-17 07:13:24 6.75 7.40 ID2 2019-04-17 06:54:44 6.35 7.00 2019-04-17 07:04:44 6.35 7.00 2019-04-17 07:13:24 6.35 7.00 ID3 2019-04-17 06:52:44 6.65 7.30 2019-04-17 07:02:04 6.65 7.30 2019-04-17 07:10:24 6.65 7.30 ID4 2019-04-17 06:54:44 5.45 5.95 2019-04-17 07:04:44 5.45 5.95 2019-04-17 07:13:24 5.45 5.95 </code></pre> <p>If values are string repr of dictionaries:</p> <pre><code>import ast df = pd.concat({k: pd.DataFrame([ast.literal_eval(x) for x in v.squeeze()], index=v.index) for k, v in d.items()}) </code></pre>
python|pandas
0
374,964
55,737,227
Report column values corresponding to the first non-zero and last zero values of another column
<p>I have a dataframe as shown below. I would like to scan through 'Krg' column and find the row that corresponds to the last zero value in this column and to report 'Sg' from this row (0.03). Additionally, I would like to report 'Sg' corresponding to 1st non-zero value of 'Krg' (0.04).</p> <p>I could achieve that using query() - see my code below.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd col_labels = ['Sg', 'Krg', 'Krw', 'Pc'] df = pd.DataFrame(columns=col_labels) f = open('EPS.INC', 'r') for line in f: if 'SGWFN' in line: print('Reading relative permeability table') for line in f: line = line.strip() if (line.split() and not line.startswith('/') and not line.startswith('--')): cols = line.split() df=df.append(pd.Series(([float(i) for i in cols]), index=col_labels), ignore_index=True) print(df.loc[df.query('Krg != 0')['Krg'].idxmin(), 'Sg']) print(df.loc[(df.query('Krg != 0')['Krg'].idxmin())-1, 'Sg']) </code></pre> <pre><code> Sg Krg Krw Pc 0 0.00 0.000000 1.000000 0.000000 1 0.03 0.000000 0.500000 0.091233 2 0.04 0.000518 0.484212 0.093203 3 0.05 0.001624 0.468759 0.095237 4 0.06 0.003171 0.453639 0.097338 5 0.07 0.005098 0.438848 0.099508 6 0.08 0.007367 0.424382 0.101751 7 0.09 0.009953 0.410237 0.104070 8 0.10 0.012835 0.396410 0.106469 9 0.11 0.015999 0.382897 0.108950 10 0.12 0.019431 0.369695 0.111518 </code></pre> <p>The code does not seem to be too "pandorable" and appears to be slow. Is there a smarter way of obtain those 'Sg' values?</p> <p>Cheers, D</p>
<p>We can simplify this by using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html" rel="nofollow noreferrer"><code>Series.ne</code></a> which stand for <code>equal</code> and <code>not equal</code>. We combine <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.head.html" rel="nofollow noreferrer"><code>DataFrame.head</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.tail.html" rel="nofollow noreferrer"><code>DataFrame.tail</code></a> to get the first and last row.</p> <hr> <pre><code>m = df['Krg'].ne(0) n = df['Krg'].eq(0) df.loc[m, 'Sg'].head(1).iloc[0] df.loc[n, 'Sg'].tail(1).iloc[0] </code></pre> <p>Output</p> <pre><code>0.04 0.03 </code></pre>
python|pandas|dataframe
0
374,965
55,656,114
Pandas cumprod with reset indicated by second column
<p>I need to calculate cumulative products that reset with some frequency indicated by a new value in a column <code>Wgt</code>.</p> <p>For example, in the DataFrame produced by:</p> <pre><code>df = pd.DataFrame(np.random.lognormal(0, 0.01, 27), pd.date_range('2019-01-06', '2019-02-01'), columns=['Chg']) df['Wgt'] = df['Chg'].asfreq('W') df.loc[df.Wgt &gt; 0, 'Wgt'] = np.random.uniform(0.5, 1, df.Wgt.count()) Chg Wgt 2019-01-06 1.014571 0.861546 2019-01-07 1.018993 NaN 2019-01-08 1.017461 NaN 2019-01-09 1.003788 NaN 2019-01-10 1.014106 NaN 2019-01-11 0.995758 NaN 2019-01-12 0.989058 NaN 2019-01-13 0.995897 0.602225 2019-01-14 1.007336 NaN 2019-01-15 1.004143 NaN ... </code></pre> <p>I want to compute a new column <code>Agg</code> whose value is:</p> <ol> <li>If <code>df.Wgt != np.nan</code> then <code>df.Agg = df.Wgt</code></li> <li>Else <code>df.Agg = df.Agg.shift() * df.Chg</code></li> </ol> <p>I.e., in this example <code>Agg</code> would be:</p> <pre><code> Chg Wgt Agg 1/6/2019 1.014571 0.861546 0.861546 1/7/2019 1.018993 NaN 0.877909343 1/8/2019 1.017461 NaN 0.893238518 1/9/2019 1.003788 NaN 0.896622106 1/10/2019 1.014106 NaN 0.909269857 1/11/2019 0.995758 NaN 0.905412734 1/12/2019 0.989058 NaN 0.895505708 1/13/2019 0.995897 0.602225 0.602225 1/14/2019 1.007336 NaN 0.606642923 1/15/2019 1.004143 NaN 0.609156244 ... </code></pre> <p>What are pandalicious ways of doing this?</p>
<p>Using <code>np.where</code> with <code>cumprod</code></p> <pre><code>s=df.loc[df.Wgt.isnull(),'Chg'].groupby(df.Wgt.notna().cumsum()).cumprod() np.where(df.Wgt.notna(),df.Wgt,s*df.Wgt.ffill()) Out[531]: array([0.861546 , 0.87790934, 0.89323852, 0.89662211, 0.90926986, 0.90541273, 0.89550571, 0.602225 , 0.60664292, 0.60915624]) </code></pre>
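<p>To attach the result as the <code>Agg</code> column from the question, the same expression can be assigned back directly:</p> <pre><code>s = df.loc[df.Wgt.isnull(), 'Chg'].groupby(df.Wgt.notna().cumsum()).cumprod()
df['Agg'] = np.where(df.Wgt.notna(), df.Wgt, s * df.Wgt.ffill())
</code></pre>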
python|pandas|dataframe
2
374,966
55,639,151
Mixing datasets in set ratio
<p>With TensorFlow datasets, how do I mix 2 datasets, taking 75% of the set from my original data and 25% from the augmented data?</p> <pre><code>d = tf.data.Dataset.list_files("raw_data/")\ .flat_map(tf.data.TFRecordDataset) ad = tf.data.Dataset.list_files("augmented_data/")\ .flat_map(tf.data.TFRecordDataset) </code></pre>
<p>The problem is you can't use <code>len()</code> on a dataset object, so it's sometimes hard to know the exact number of examples until you iterate a full epoch. But you can approximate this with the <code>take</code> and <code>skip</code> methods.</p> <pre><code>train_dataset = dataset.take(number_examples_for_train) test_dataset = dataset.skip(number_examples_for_train) </code></pre> <p>These two methods complement each other: <code>take</code> keeps the first N elements and <code>skip</code> drops them. <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#take" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/data/Dataset#take</a></p>
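<p>For the 75/25 mixing asked about, tf.data also offers a weighted sampler; in recent releases it is <code>tf.data.experimental.sample_from_datasets</code> (older versions carried it under <code>tf.contrib.data</code>), so a sketch under that assumption (the glob patterns are hypothetical):</p> <pre><code>import tensorflow as tf

d = tf.data.Dataset.list_files("raw_data/*").flat_map(tf.data.TFRecordDataset)
ad = tf.data.Dataset.list_files("augmented_data/*").flat_map(tf.data.TFRecordDataset)

# draw each element from d with probability 0.75, from ad with 0.25
mixed = tf.data.experimental.sample_from_datasets([d, ad], weights=[0.75, 0.25])
</code></pre>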
tensorflow|tensorflow-datasets|tensorflow-estimator
1
374,967
55,752,970
How are datasets structured in TensorFlow?
<p>In my first TensorFlow project, I have a big dataset (1M elements) which contains 8 categories of elements, each category of course having a different number of elements. I want to split the big dataset into 10 exclusive small datasets, with each of them having approximately 1/10 of each category. (This is for 10-fold cross-validation purposes.)</p> <p>Here is how I do it. I wind up having 80 datasets, with each category having 10 small datasets, then I randomly sample data from the 80 of them by using sample_from_datasets. However, after some steps, I get a lot of warnings saying "DirectedInterleave selected an exhausted input: 36", where 36 can be some other integer.</p> <p>The reason I want to use <code>sample_from_datasets</code> is that I tried to shuffle the original dataset. Even when shuffling only 0.4 x the total elements, it still takes a very long time to finish (about 20 mins). </p> <p>My questions are: 1. based on my case, any good advice on how to structure the datasets? 2. is it normal to have such a long shuffling time, and is there any better solution for shuffling? 3. why do I get this "DirectedInterleave selected an exhausted input" warning, and what does it mean?</p> <p>Thank you.</p>
<p>Split your <strong>whole dataset</strong> into Training, Testing and Validation categories. As you have 1M data points, you can split like this: 60% training, 20% testing and 20% validation. Splitting of datasets is completely up to you and your requirements, but normally the largest share of the data is used for training the model, and the rest can be used for testing and validation. As you have <strong>eight categories</strong> of data, split each category into Training, Testing and Validation parts.</p> <p>Say you have A, B, C and D categories of data. Split your data "A", "B", "C", and "D" like below:</p> <p>'A'- 60 % for training 20% testing and 20% validation</p> <p>'B'- 60 % for training 20% testing and 20% validation</p> <p>'C'- 60 % for training 20% testing and 20% validation</p> <p>'D'- 60 % for training 20% testing and 20% validation</p> <p>Finally merge all the A, B, C and D training, testing and validation datasets. </p>
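<p>A compact way to get such a stratified split, assuming the examples and their 8-way category labels are held in hypothetical arrays <code>X</code> and <code>y</code>, is scikit-learn's <code>train_test_split</code> with <code>stratify</code>:</p> <pre><code>from sklearn.model_selection import train_test_split

# first peel off 60% for training, keeping class ratios
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)

# split the remaining 40% in half: 20% test, 20% validation
X_test, X_val, y_test, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)
</code></pre>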
tensorflow|dataset
0
374,968
55,588,122
How to calculate the relative vectors from a list of points, from one point to every other point
<p>I have a list of points in <code>(x,y)</code> pairs, which represents the positions of a list of agents. For example, given 3 agents, there are 3 pairs of points, which I store as follows:</p> <pre><code>points = np.array([[x1, y1], [x2, y2], [x3, y3]]) </code></pre> <p>I would like to calculate a subsequent array, that is the relative position from one agent to every other agent, but NOT itself. So, using the data above, I would like to generate the array <code>relative_positions</code> using the array <code>points</code>. <code>points</code> can have <code>N</code> positions (I can have upwards of 50-100 agents at any one time). </p> <p>So using <code>points</code> described above, I would like to produce the output:</p> <pre><code>relative_positions = [[x2-x1, y2-y1], [x3-x1, y3-y1], [x1-x2, y1-y2], [x3-x2, y3-y2], [x1-x3, y1-y3], [x2-x3, y2-y3]] </code></pre> <p>For example, given four agent positions stored as a numpy array: </p> <pre><code>agent_points = np.array([[10, 1], [30, 3], [25, 10], [5, 5]]) </code></pre> <p>I would like to generate the output:</p> <pre><code>relative_positions = [[30-10, 3-1], [25-10, 10-1], [5-10, 5-1], [10-30, 1-3], [25-30, 10-3], [5-30, 5-3], [10-25, 1-10], [30-25, 3-10], [5-25, 5-10], [10-5, 1-5], [30-5, 3-5], [25-5, 10-5]] </code></pre> <p>How do I effectively go about doing this? I have thought about just calculating every difference possible, and removing the 0 cases (for when it's the relative position from the agent to itself), however I do not think that is a "pure" way to do it, since I could accidentally remove an agent that just happens to be on exactly the same point (or very close to)</p>
<p><strong>Approach #1</strong></p> <p>With <code>a</code> the input array, you can do -</p> <pre><code>d = (a-a[:,None,:]) valid_mask = ~np.eye(len(a),dtype=bool) out = d[valid_mask] </code></pre> <p>Basically, we are extending <code>a</code> to <code>3D</code> such that first axis is made <code>outer-broadcastable</code> and then we perform subtraction against its <code>2D</code> version, resulting in <code>mxmx2</code> shaped output, with <code>m</code> being the <code>a.shape[0]</code>. Schematically put -</p> <pre><code>a[:, None, :] : 4 x 1 x 2 a : 4 x 2 output : 4 x 4 x 2 </code></pre> <p><a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>More info</code></a>.</p> <p>Another way to create <code>valid_mask</code>, would be -</p> <pre><code>r = np.arange(len(a)) valid_mask = r[:,None] != r </code></pre> <p><strong>Approach #2</strong></p> <p>We will leverage <code>np.lib.stride_tricks.as_strided</code> to get a no-diagonal mask for <code>3D</code> arrays (along first two axes), so that we will use it here to mask the differences array <code>d</code>. This mask generation is inspired by a <code>2D</code> array problem as posted <a href="https://stackoverflow.com/a/43761941/"><strong><code>here</code></strong></a> and for a<code>3D</code> case would look something like this -</p> <pre><code>def nodiag_view3D(a): m = a.shape[0] p,q,r = a.strides return np.lib.stride_tricks.as_strided(a[:,1:], shape=(m-1,m,2), strides=(p+q,q,r)) </code></pre> <p>To solve our problem, it would be -</p> <pre><code>d = (a-a[:,None,:]) out = nodiag_view3D(d).reshape(-1,a.shape[1]) </code></pre> <hr /> <h3>Timings to showcase how approach#2 improves upon #1</h3> <pre><code>In [96]: a = np.random.rand(5000,2) In [97]: d = (a-a[:,None,:]) In [98]: %%timeit ...: valid_mask = ~np.eye(len(a),dtype=bool) ...: out = d[valid_mask] 1 loop, best of 3: 763 ms per loop In [99]: %%timeit ...: r = np.arange(len(a)) ...: valid_mask = r[:,None] != r ...: out = d[valid_mask] 1 loop, best of 3: 767 ms per loop In [100]: %timeit nodiag_view3D(d).reshape(-1,a.shape[1]) 10 loops, best of 3: 177 ms per loop </code></pre>
python|arrays|numpy
2
374,969
55,938,112
Describe a Dataframe on PySpark
<p>I have a fairly large Parquet file which I am loading using</p> <pre><code>file = spark.read.parquet('hdfs/directory/test.parquet')
</code></pre> <p>Now I want to get some statistics (similar to the pandas <code>describe()</code> function). What I tried was:</p> <pre><code>file_pd = file.toPandas()
file_pd.describe()
</code></pre> <p>but obviously this requires loading all the data in memory and it will fail. Can anyone suggest a workaround? </p>
<p>What are the stats you need? Spark has a similar feature</p> <pre><code>file.summary().show()
</code></pre> <pre><code>+-------+----+
|summary|test|
+-------+----+
|  count|   3|
|   mean| 2.0|
| stddev| 1.0|
|    min|   1|
|    25%|   1|
|    50%|   2|
|    75%|   3|
|    max|   3|
+-------+----+
</code></pre>
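<p>If you only need a subset of statistics, <code>summary()</code> also accepts specific statistic names as arguments, and <code>describe()</code> returns the basic five without the percentiles (a small sketch; <code>file</code> is the DataFrame from the question):</p> <pre><code>file.describe().show()                              # count, mean, stddev, min, max
file.summary("count", "min", "25%", "max").show()   # pick specific statistics
</code></pre>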
python|pandas|apache-spark|pyspark
17
374,970
55,863,619
How to convert a tensorflow placeholder variable to a numpy array?
<p>I would like to use a scipy interpolation function in my TensorFlow code.</p> <p>Here is an example snippet similar to my situation.</p> <pre><code>import numpy as np
import tensorflow as tf
from scipy import interpolate

def interpolate1D(Xval,Fval,inp):
  Xval = np.array(Xval)
  Fval = np.array(Fval)
  f = interpolate.interp1d(Xval, Fval, fill_value="extrapolate")
  z = f(inp)
  return z

properties = {
    'xval': [200,400,600,800,1100],
    'fval': [100.0,121.6,136.2,155.3,171.0]
}

tensor = tf.placeholder("float")

interpolated = interpolate1D(properties['xval'],properties['fval'], tensor)
</code></pre> <p>Once I get <code>interpolated</code> I'll convert it into a tensor using <code>tf.convert_to_tensor(interpolated)</code>.</p> <p>Here <code>interpolate.interp1d</code> is just an example. I'll be using other interpolation methods, and the output of those methods will be fed into another neuron.</p> <p>I understand a <code>placeholder</code> is an empty variable, so technically it's not possible to convert it into a numpy array. Also, I cannot use this interpolation function outside the tensorflow graph, because in some situations I need to use the output of a neural network as the input to the interpolation function. </p> <p>Overall, I would like to use scipy interpolation functions within the tensor graph.</p>
<p>You could use <a href="https://www.tensorflow.org/api_docs/python/tf/py_func" rel="nofollow noreferrer"><code>tf.py_func</code></a> to use the SciPy function inside your graph, but a better option would be to implement the interpolation in TensorFlow. There is no function in the library that does this out of the box, but it is not difficult to implement.</p> <pre><code>import tensorflow as tf

# Assumes Xval is sorted
def interpolate1D(Xval, Fval, inp):
    # Make sure input values are tensors
    Xval = tf.convert_to_tensor(Xval)
    Fval = tf.convert_to_tensor(Fval)
    inp = tf.convert_to_tensor(inp)
    # Find the interpolation indices
    c = tf.count_nonzero(tf.expand_dims(inp, axis=-1) &gt;= Xval, axis=-1)
    idx0 = tf.maximum(c - 1, 0)
    idx1 = tf.minimum(c, tf.size(Xval, out_type=c.dtype) - 1)
    # Get interpolation X and Y values
    x0 = tf.gather(Xval, idx0)
    x1 = tf.gather(Xval, idx1)
    f0 = tf.gather(Fval, idx0)
    f1 = tf.gather(Fval, idx1)
    # Compute interpolation coefficient
    x_diff = x1 - x0
    alpha = (inp - x0) / tf.where(x_diff &gt; 0, x_diff, tf.ones_like(x_diff))
    alpha = tf.clip_by_value(alpha, 0, 1)
    # Compute interpolation
    return f0 * (1 - alpha) + f1 * alpha

properties = {
    'xval': [200.0, 400.0, 600.0, 800.0, 1100.0],
    'fval': [100.0, 121.6, 136.2, 155.3, 171.0]
}

with tf.Graph().as_default(), tf.Session() as sess:
    tensor = tf.placeholder("float")
    interpolate = interpolate1D(properties['xval'], properties['fval'], tensor)
    print(sess.run(interpolate, feed_dict={tensor: [40.0, 530.0, 800.0, 1200.0]}))
    # [100.   131.09 155.3  171.  ]
</code></pre>
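<p>For reference, the <code>tf.py_func</code> route mentioned at the top would look roughly like this (a sketch; note that <code>py_func</code> has no gradient registered by default, so it only fits cases where you do not need to backpropagate through the interpolation):</p> <pre><code>import numpy as np
import tensorflow as tf
from scipy import interpolate

def scipy_interp(x):
    f = interpolate.interp1d([200., 400., 600., 800., 1100.],
                             [100.0, 121.6, 136.2, 155.3, 171.0],
                             fill_value="extrapolate")
    return f(x).astype(np.float32)

inp = tf.placeholder(tf.float32)
out = tf.py_func(scipy_interp, [inp], tf.float32)  # runs the SciPy code as a graph op
</code></pre>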
python|numpy|tensorflow
2
374,971
55,590,814
Cuda compute capability 3.0. The minimum required Cuda capability is 3.7
<p>I have Python 3.5, GPU: Quadro K1000M, CUDA 9.0, and I want to install TensorFlow on the GPU. The installation completed successfully, but when I check it, it gives me the warning below and my Python code runs on the CPU version even though I didn't install the CPU version. The warning is:</p> <pre><code> Cuda compute capability 3.0. The minimum required Cuda capability is 3.7
</code></pre> <p>When I check it in Python as follows:</p> <pre><code> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
 sess.close()
</code></pre> <p>it returns</p> <pre><code> no known devices
</code></pre> <p>Can anyone please tell me what to do? I'm new to Python, so any hint may be useful. Thanks</p>
<blockquote> <p>Cuda compute capability 3.0. The minimum required Cuda capability is 3.7</p> </blockquote> <p>Your GPU is too old: the bare minimum according to the <a href="https://en.wikipedia.org/wiki/CUDA" rel="nofollow noreferrer">list of GPU models on Wikipedia's CUDA page</a> is a Tesla K80. If you want to use TF on the GPU you'll need to upgrade the GPU first.</p>
python|tensorflow|gpu
1
374,972
55,984,986
Python: Evaluate Arithmetic String Within Pandas Dataframe Column
<p>To preface this, I'm a python newbie. I'm working on a script to automate a reporting process for website downtime each month. I've successfully built a script that scrapes our monitoring site with BeautifulSoup and pulls the data into a pandas dataframe. The "Duration" column of the dataframe lists downtime and comes in as "6 Minutes" or "1 Hour 5 Minutes" when scraped. I've been able to strip the " Minutes" from the values &lt; 1 hour and convert them to integers I can do math on. </p> <p>The values greater than 1 Hour are giving me issues. I first stripped the " Minutes" string from the end, which leaves me with "1 Hour 5":</p> <pre class="lang-py prettyprint-override"><code>df["Duration"] = df["Duration"].str.replace(" Minutes", "")
</code></pre> <p>I then tried to turn the " Hour " into a math expression, hoping it would just give me "65", but it simply gives me the string "1*60+5" when I export the dataframe to an excel sheet. </p> <pre class="lang-py prettyprint-override"><code>df["Duration"] = df["Duration"].str.replace(" Hour ", "*60+")
</code></pre> <p>Is there any way I can parse through the "Duration" column, find any values that have "Hour" in them, and convert them into a total number of minutes?</p> <p>SAMPLE DATA:</p> <p><a href="https://i.stack.imgur.com/GQqT8.png" rel="nofollow noreferrer">Sampledata</a></p>
<p>(Updated answer to reflect new information.)</p> <pre><code># Sample data:
ddict = {
    'Record': [1, 2, 3, 4],
    'Duration': ['1 Hour 5 Minutes', '2 Hours 1 Minute', '2 Hours 45 Minutes', '7 Minutes']
}
df = pd.DataFrame(ddict)

### Replace plurals in 'Duration' using regular expression option in pandas.Series.replace()
df['Duration'] = df['Duration'].replace(r'Hours', 'Hour', regex=True).replace(r'Minutes', 'Minute', regex=True)

### Iterate the dataframe index; check if 'Hour' is in the 'Duration' value for each row; calculate total time
for i in df.index:
    if 'Hour' in df['Duration'][i]:
        df.loc[i, 'Duration'] = (int(df['Duration'][i].split('Hour')[0].strip()) * 60) + int(df['Duration'][i].split('Hour')[1].replace('Minute', '').strip())
    else:
        df.loc[i, 'Duration'] = int(df['Duration'][i].split('Minute')[0].strip())
</code></pre> <p>Output:</p> <pre><code>   Record  Duration
0       1        65
1       2       121
2       3       165
3       4         7
</code></pre>
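<p>A vectorized alternative worth knowing about, assuming every value follows the "<code>N Hour(s) N Minute(s)</code>" pattern, is to let pandas parse the strings as timedeltas itself (a sketch; lower-casing first to be safe with the timedelta parser):</p> <pre><code>df['Duration'] = (pd.to_timedelta(df['Duration'].str.lower())
                    .dt.total_seconds() // 60).astype(int)
</code></pre>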
python|pandas|dataframe
0
374,973
55,754,477
Nearest neighbor matching in Pandas
<p>Given two DataFrames (t1, t2), both with a column 'x', how would I append a column to t1 with the ID of t2 whose 'x' value is the nearest to the 'x' value in t1?</p> <pre><code>t1:
id  x
1   1.49
2   2.35

t2:
id  x
3   2.36
4   1.5

output:
id  id2
1   4
2   3
</code></pre> <p>I can do this by creating a new DataFrame, iterating with t1.groupby(), doing lookups on t2, and then merging, but this takes incredibly long given a 17 million row t1 DataFrame.</p> <p>Is there a better way to accomplish this? I've scoured the pandas docs regarding groupby, apply, transform, agg, etc. But an elegant solution has yet to present itself despite my thought that this would be a common problem.</p>
<p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html#pandas.merge_asof" rel="nofollow noreferrer"><code>merge_asof</code></a></p> <pre><code>df = pd.merge_asof(df1.sort_values('x'), df2.sort_values('x'), on='x', direction='nearest', suffixes=['', '_2']) print(df) Out[975]: id x id_2 0 3 0.87 6 1 1 1.49 5 2 2 2.35 4 </code></pre> <hr> <p>Method 2 <code>reindex</code></p> <pre><code>df1['id2']=df2.set_index('x').reindex(df1.x,method='nearest').values df1 id x id2 0 1 1.49 4 1 2 2.35 3 </code></pre>
python|pandas
5
374,974
55,674,799
"No such table" error while loading the .db file in python
<p>I'm trying to read a .db file in Python code, but I am getting a "no such table" error. However, I can see the table when I import the file into a MySQL DB.</p> <pre><code>import sqlite3
import pandas as pd

con = None

def getConnection():
    databaseFile = "test.db"
    global con
    if con == None:
        con = sqlite3.connect(databaseFile)
    return con

def queryExec():
    con = getConnection()
    result = pd.read_sql_query("select * from Movie;", con)
    print(result)

queryExec()
</code></pre> <p>I even tried using the absolute path of the .db file, but no luck.</p>
<p>Assuming you're trying to read data from an <strong>SQLite</strong> database file, here is a simpler way to do it.</p> <pre><code>import sqlite3
import pandas as pd

con = sqlite3.connect("test.db")

with con:
    df = pd.read_sql("select * from Movie", con)
    print(df)
</code></pre>
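<p>To debug a "no such table" error, it also helps to list the tables SQLite actually sees in the file you connected to. An empty list usually means the path points to a different (or newly created) database file, since <code>sqlite3.connect</code> silently creates an empty database when the file doesn't exist:</p> <pre><code>import sqlite3

con = sqlite3.connect("test.db")
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)
</code></pre>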
python-3.x|pandas|sqlite
0
374,975
55,903,895
Pick random entry from two ragged tensors
<p>How do I pick random entries from two ragged tensors? For example,</p> <pre><code>c = tf.ragged.constant([[1, 2, 3], [4, 5]])
v = tf.ragged.constant([[10., 20., 30.], [40., 50.]])
r = tf.random.uniform([1, 1], maxval=2, dtype=tf.int32)

with tf.Session() as sess:
    print(sess.run([tf.gather_nd(c, r), tf.gather_nd(v, r)]))
</code></pre> <p>gives me either <code>[1, 2, 3]</code> and <code>[10., 20., 30.]</code>, or <code>[4, 5]</code> and <code>[40., 50.]</code>. But now I want to pick a random number between 0 and the length of the returned list, and get the corresponding entries from both lists. And then I want to batch this entire process. </p> <p>Thank you in advance for your help! </p>
<p>Here is an example based on the values you've given: (I am using TF 1.13)</p> <pre><code>import tensorflow as tf tf.enable_eager_execution() # you can use a normal Session, but this is to show intermediate output c = tf.ragged.constant([[1, 2, 3], [4, 5]]) v = tf.ragged.constant([[10., 20., 30.], [40., 50.]]) r = tf.random.uniform([1, 1], maxval=2, dtype=tf.int32) a = tf.gather_nd(c, r) b = tf.gather_nd(v, r) print(a) print(b) # Output example #&lt;tf.RaggedTensor [[1, 2, 3]]&gt; #&lt;tf.RaggedTensor [[10.0, 20.0, 30.0]]&gt; # Lengths l_a = tf.squeeze(a.row_lengths()) l_b = tf.squeeze(b.row_lengths()) print(l_a) print(l_b) #Output example #tf.Tensor(3, shape=(), dtype=int64) #tf.Tensor(3, shape=(), dtype=int64) #Random index between 0 and length rand_idx_a = tf.random.uniform([1],minval=0,maxval=l_a,dtype=tf.int64) rand_idx_b = tf.random.uniform([1],minval=0,maxval=l_b,dtype=tf.int64) print(rand_idx_a) print(rand_idx_b) #Output example #tf.Tensor([0], shape=(1,), dtype=int64) #tf.Tensor([2], shape=(1,), dtype=int64) #Convert ragged tensor to tensor of shape [1,n] t_a = a.to_tensor() t_b = b.to_tensor() print(t_a) print(t_b) #Read from extracted tensor using random index rand_a = tf.gather_nd(tf.squeeze(t_a),rand_idx_a) #removes dimension of 1 rand_b = tf.gather_nd(tf.squeeze(t_b),rand_idx_b) print(rand_a) print(rand_b) #Output example #tf.Tensor([[1 2 3]], shape=(1, 3), dtype=int32) #tf.Tensor([[10. 20. 30.]], shape=(1, 3), dtype=float32) #tf.Tensor(1, shape=(), dtype=int32) #tf.Tensor(30.0, shape=(), dtype=float32) </code></pre> <p>All of these operations can be batched easily depending on your input.</p>
python|tensorflow|ragged
1
374,976
55,720,396
How to use mutiple arithmetic functions in multiple columns in pandas dataframe
<p>I have a pandas dataframe and three lists as follows.</p> <pre><code>list1 = ['n3', 'n5', 'n7']
list2 = ['n1', 'n2', 'n4', 'n11', 'n12']
list3 = ['n6', 'n8', 'n9', 'n10']

item   n1  n2  n3  n4  n5  n6  n7  n8  n9  n10  n11  n12
item1   1   6   7   8   9   1   6   8   8    9    9    5
item2   1   6   7   6   9   1   8   8   8    9    9    5
</code></pre> <p>I want to select the column names in the three lists and perform the following arithmetic operations:</p> <ul> <li>list1: addition</li> <li>list2: take the absolute value of each number (i.e. <code>abs(n)</code>) and add</li> <li>list3: take the inverse of each number (i.e. <code>1/n</code>) and add</li> </ul> <p>For example, if we take <code>item1</code>:</p> <ul> <li>list1: <code>add columns n3, n5, n7</code>, i.e. <code>7+9+6 = 22</code></li> <li>list2: <code>take the absolute value and add columns n1, n2, n4, n11, n12</code>, i.e. <code>abs(1)+abs(6)+abs(8)+abs(9)+abs(5) = 29</code></li> <li>list3: <code>take the inverse and add columns n6, n8, n9, n10</code>, i.e. <code>1/1 + 1/8 + 1/8 + 1/9 = 1.3611</code></li> </ul> <p>Now, I want to add the three sums and their grand total to the dataframe.</p> <pre><code>item   n1  n2  n3  n4  n5  n6  n7  n8  n9  n10  n11  n12  list1_sum  list2_sum  list3_sum  total_sum
item1   1   6   7   8   9   1   6   8   8    9    9    5        xxx        xxx        xxx        xxx
item2   1   6   7   6   9   1   8   8   8    9    9    5        xxx        xxx        xxx        xxx
</code></pre> <p>I was able to do <code>list1</code> as follows.</p> <pre><code>df['list1_sum'] = df[list1].sum(axis=1)
</code></pre> <p>However, I could not find how to do the remaining operations.</p> <p>I am happy to provide more details if needed.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.abs.html" rel="nofollow noreferrer"><code>DataFrame.abs</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rdiv.html" rel="nofollow noreferrer"><code>DataFrame.rdiv</code></a> for divide from right side:</p> <pre><code>df['list1_sum'] = df[list1].sum(axis=1) df['list2_sum'] = df[list2].abs().sum(axis=1) df['list3_sum'] = df[list3].rdiv(1).sum(axis=1) #same like #df['list3_sum'] = (1 / df[list3]).sum(axis=1) df['total_sum'] = df[['list1_sum','list2_sum','list3_sum']].sum(axis=1) print (df) item n1 n2 n3 n4 n5 n6 n7 n8 n9 n10 n11 n12 list1_sum \ 0 item1 1 6 7 8 9 1 6 8 8 9 9 5 22 1 item2 1 6 7 6 9 1 8 8 8 9 9 5 24 list2_sum list3_sum total_sum 0 29 1.361111 52.361111 1 27 1.361111 52.361111 </code></pre>
pandas
3
374,977
55,972,789
Difference between two dataframes and drop those that aren't present in both
<p>The State column exists in both of my dataframes, and I'm trying to drop the states that don't appear in both.</p> <p>I've tried:</p> <pre><code>for s in df1_2016.state:
    if s not in df_2016.state:
        print(s)
</code></pre> <p>That doesn't work.</p> <p>I planned to drop them with</p> <pre><code>df1_2016.state.drop('Alabama')
</code></pre> <p>That doesn't work either.</p> <p><a href="https://i.stack.imgur.com/KicjA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KicjA.png" alt="rmat"></a></p>
<p>Assuming a couple of dataframes like so:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({
    'state' : ['A','B','C','D'],
    'values' : [1,2,3,4]
})

df2 = pd.DataFrame({
    'state' : ['A','C','E','F'],
    'values' : [5,6,7,8]
})

df3 = df1[df1['state'].isin(df2['state'])]
</code></pre> <p>df3 will contain the data in df1 where the state is in both df1 and df2 (i.e. A &amp; C)</p>
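<p>Conversely, the rows being dropped (states present in <code>df1</code> but not in <code>df2</code>) are just the negation of the same mask:</p> <pre><code>dropped = df1[~df1['state'].isin(df2['state'])]  # states B and D in this example
</code></pre>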
python|pandas
0
374,978
55,935,692
How to make second maximum values as new column in pandas?
<p>I have the following dataframe. </p> <pre><code>import pandas as pd import numpy as np d = { 'ID':[1,2,3,4,5], 'Price1':[5,9,4,3,9], 'Price2':[9,10,13,14,18], 'Price3':[5,9,4,3,9], 'Price4':[9,10,13,14,18], 'Price5':[5,9,4,3,9], 'Price6':[np.nan,10,13,14,18], 'Price7':[np.nan,9,4,3,9], 'Price8':[np.nan,10,13,14,18], 'Price9':[5,9,4,3,9], 'Price10':[9,10,13,14,18], 'Type':['A','A','B','C','D'], } df = pd.DataFrame(data = d) df </code></pre> <p>How to compare Price 1 to Price 10 columns and add second max value as new column?</p> <p>Expected Output:</p> <pre><code>import pandas as pd import numpy as np d = { 'ID':[1,2,3,4,5], 'Price1':[5,9,4,3,9], 'Price2':[9,10,13,14,18], 'Price3':[5,9,4,3,9], 'Price4':[9,10,13,14,18], 'Price5':[5,9,4,3,9], 'Price6':[np.nan,10,13,14,18], 'Price7':[np.nan,9,4,3,9], 'Price8':[np.nan,10,13,14,18], 'Price9':[5,9,4,3,9], 'Price10':[9,10,13,14,18], 'Type':['A','A','B','C','D'], 'Second_Max':[5,9,4,3,18] } df = pd.DataFrame(data = d) df </code></pre> <p>How to compare Price 1 to Price 10 columns and add second max value as new column?</p>
<p>One way to do this:</p> <pre><code>df['Second_Max'] = df.drop(['ID','Type'], axis=1).fillna(0).apply(lambda x: (sorted(list(set(x)), reverse=True))[1], axis=1)
</code></pre> <p>or</p> <pre><code>df['Second_Max'] = df.filter(like='Price').fillna(0).apply(lambda x: (sorted(list(set(x)), reverse=True))[1], axis=1)
</code></pre> <p><strong>Output</strong></p> <pre><code>   ID  Price1  Price2  Price3  Price4  Price5  Price6  Price7  Price8  Price9  \
0   1       5       9       5       9       5     NaN     NaN     NaN       5
1   2       9      10       9      10       9    10.0     9.0    10.0       9
2   3       4      13       4      13       4    13.0     4.0    13.0       4
3   4       3      14       3      14       3    14.0     3.0    14.0       3
4   5       9      18       9      18       9    18.0     9.0    18.0       9

   Price10 Type  Second_Max
0        9    A         5.0
1       10    A         9.0
2       13    B         4.0
3       14    C         3.0
4       18    D         9.0
</code></pre> <p>A more efficient way would be to use <a href="https://docs.python.org/3/library/heapq.html" rel="nofollow noreferrer">heapq</a>.</p> <p><a href="https://stackoverflow.com/questions/18549730/find-the-2nd-highest-element">Find the 2nd highest element</a></p>
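<p>Following the heapq link above, the same row-wise logic can be written with <code>heapq.nlargest</code> (a sketch; like the <code>set</code>-based version it returns the second largest <em>distinct</em> value and assumes each row has at least two distinct values):</p> <pre><code>import heapq

df['Second_Max'] = df.filter(like='Price').fillna(0).apply(
    lambda x: heapq.nlargest(2, set(x))[1], axis=1)
</code></pre>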
pandas
3
374,979
55,672,269
How to access elements of Collections Counter stored as column in dataframe to be used in CountVectorizer
<p>One of the columns in the dataframe is in the following format</p> <pre><code>Row 1 : Counter({'First': 3, 'record': 2})
Row 2 : Counter({'Second': 2, 'record': 1})
</code></pre> <p>I want to create a new column which has the following value:</p> <pre><code>Row 1 : First First First record record
Row 2 : Second Second record
</code></pre>
<p>I was able to solve the question myself with the following code. It relies on regex. (Note: the count pattern captures the full number after each colon, so multi-digit counts are handled correctly as well.)</p> <pre><code>import re

def transform_word_count(text):
    words = re.findall(r'\'(.+?)\'', text)
    n = re.findall(r": (\d+)", text)
    result = []
    for i in range(len(words)):
        for j in range(int(n[i])):
            result.append(words[i])
    return result

df['new'] = df.apply(lambda row: transform_word_count(row['old']), axis=1)
</code></pre>
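<p>Worth noting: if the column holds actual <code>Counter</code> objects rather than their string representation, no regex is needed at all, because <code>Counter.elements()</code> yields each key repeated by its count (a sketch under that assumption):</p> <pre><code>df['new'] = df['old'].apply(lambda c: list(c.elements()))
# Counter({'First': 3, 'record': 2}) -&gt; ['First', 'First', 'First', 'record', 'record']
</code></pre>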
python|pandas|collections
1
374,980
56,008,742
How can I read multiple csv files from a single directory and graph them separately in Python?
<p>I want to read csv files from a directory and plot them, and be able to click the arrow button to step through the plots one at a time. I want to specify which columns to use and to title each plot, as I have done in the code below.</p> <p>I am able to read a csv file and plot a single figure with specific columns, but I am not sure how to do it with multiple files. I've tried glob but couldn't get it to work, and I do not want to concatenate them into a single csv file. I have provided my code below. Any help would be appreciated. Thank you.</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

cols_in = [1, 3]
col_name = ['Time (s)', 'Band (mb)']

df = pd.read_csv("/user/Desktop/TestNum1.csv", usecols = cols_in, names = col_name, header = None)

fig, ax = plt.subplots()

my_scatter_plot = ax.scatter(df["Time (s)"], df["Band (mb)"])
ax.set_xlabel("Time (s)")
ax.set_ylabel("Band (mb)")
ax.set_title("TestNum1")

plt.show()
</code></pre>
<p>You just need to add a <code>for</code> loop over all the files and use <code>glob</code> to collect them.</p> <p>For example,</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt
import glob

cols_in = [1, 3]
col_name = ['Time (s)', 'Band (mb)']

# Select all CSV files on Desktop
files = glob.glob("/user/Desktop/*.csv")

for file in files:
    df = pd.read_csv(file, usecols = cols_in, names = col_name, header = None)

    fig, ax = plt.subplots()

    my_scatter_plot = ax.scatter(df["Time (s)"], df["Band (mb)"])
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Band (mb)")
    ax.set_title(file)  # title each figure with its source file

    plt.show()
</code></pre> <p>Keeping <code>plt.show()</code> inside the <code>for</code> loop ensures each plot is shown separately, and titling each figure with <code>file</code> makes it easy to tell the plots apart as you step through them.</p>
python|pandas|csv|matplotlib|glob
1
374,981
56,000,679
Pandas CSV dataframes
<p>I have a dataframe like this:</p> <pre><code>+---+-------+------+-------+-------+
| id| prop1 | prop2| prop3 | prop4 |
+---+-------+------+-------+-------+
|  1| value1|value2| value3|   null|
|  2|value11|  null|value13|value14|
+---+-------+------+-------+-------+
</code></pre> <p>I want to get this in python:</p> <pre><code>+-------+------------+
|  id   |    prop    |
+-------+------------+
|   1   |   value1   |
|   1   |   value2   |
|   1   |   value3   |
|   1   |   null     |
|   2   |   value11  |
|   2   |   null     |
+-------+------------+
</code></pre> <pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv('C:\Python27\programs\DF.csv', delimiter = ',', index_col = 'id')
print(df1)
print('*************************************')
for i,j in df1.iterrows():
    df2 = (i,j)
    print(df2)
</code></pre>
<p>It seems you need to unpivot your dataframe, so use <code>melt</code>. Note that <code>id</code> was set as the index when reading the CSV, so reset it first:</p> <pre><code>df1 = df1.reset_index()
pd.melt(df1, id_vars=['id'], value_vars=['prop1', 'prop2', 'prop3', 'prop4'])
</code></pre>
python|pandas|dataframe
0
374,982
55,665,599
Pandas - Merging two dataframes by index ID
<p>I have 2 Dataframes as below:</p> <p><strong>Dataframe1:</strong></p> <pre><code>6        count
store_1     10
store_2     23
store_3     53
</code></pre> <p><strong>Dataframe2:</strong></p> <pre><code>store_name  location
store_1     location_a
store_2     location_b
</code></pre> <p>I am trying to join the above two Dataframes such that I get the below output:</p> <pre><code>6        count  location
store_1     10  location_a
store_2     23  location_b
store_3     53
</code></pre> <p>I am trying to merge <code>Dataframe1</code> with <code>Dataframe2</code> using the index and not a column name.</p>
<p>Left merge on indexes would work for you.</p> <pre><code>data = {'6': ['store1', 'store2', 'store3'], 'count': [10, 23, 53]}
df1 = pd.DataFrame(data).set_index('6')
df1
</code></pre> <p><a href="https://i.stack.imgur.com/ES7M6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ES7M6.png" alt="enter image description here"></a></p> <pre><code>data = {'store_name': ['store1', 'store2'], 'location': ['location_a', 'location_b']}
df2 = pd.DataFrame(data).set_index('store_name')
df2
</code></pre> <p><a href="https://i.stack.imgur.com/bQlLO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bQlLO.png" alt="enter image description here"></a></p> <pre><code>df1.merge(df2, left_index=True, right_index=True, how='left')
</code></pre> <p><a href="https://i.stack.imgur.com/Vkezz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vkezz.png" alt="enter image description here"></a></p> <p>Hope this helps.</p>
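<p>Since both frames are indexed by the store name, the same result can also be written as a one-liner with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>, which aligns on the index by default:</p> <pre><code>df1.join(df2, how='left')
</code></pre>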
python|pandas
2
374,983
55,893,523
Bar plot from pivot table with grand total and percentage per group aggregation
<p>Here's the challenge: make a dataframe from the shipwreck.csv file. From this dataframe, build a pivot table that shows the average fares for males/females in each class, and the number of surviving males/females in each class. The row index should be the class values. Use margins to include averages for all males, females, and all passengers in each class. Print the entire frame.Then Create a bar plot that shows the survival percentage of males and females and all passengers on a per class basis. Use data from the pivot table in the previous problem. The width of the bars should be .25.</p> <p>My issue is I have the dataframe built with only those specified columns but I don't understand how to get the dataframe pivot table and finding the average fare for males/females to be able to set up the graph. </p> <p>Here's my code so far: </p> <pre><code>%matplotlib inline import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') matplotlib.rcParams['figure.figsize'] = (10.0, 4.0) df = pd.read_csv("shipwreck.csv",usecols= ['survived','sex','fare','class']) df.set_index('survived') print(df) #pivot table to get average fares for male/female then plot it #use bar graph w/ width of.25 for bars </code></pre> <p>heres what the .csv shows from dataframe: </p> <pre><code> survived sex fare class 0 0 male 7.2500 Third 1 1 female 71.2833 First 2 1 female 7.9250 Third 3 1 female 53.1000 First 4 0 male 8.0500 Third 5 0 male 8.4583 Third 6 0 male 51.8625 First 7 0 male 21.0750 Third 8 1 female 11.1333 Third 9 1 female 30.0708 Second 10 1 female 16.7000 Third 11 1 female 26.5500 First 12 0 male 8.0500 Third 13 0 male 31.2750 Third 14 0 female 7.8542 Third 15 1 female 16.0000 Second 16 0 male 29.1250 Third 17 1 male 13.0000 Second 18 0 female 18.0000 Third 19 1 female 7.2250 Third 20 0 male 26.0000 Second 21 1 male 13.0000 Second 22 1 female 8.0292 Third 23 1 male 35.5000 First 24 0 female 21.0750 Third 25 1 female 31.3875 Third 26 0 male 7.2250 Third 27 0 male 263.0000 First 28 1 female 7.8792 Third 29 0 male 7.8958 Third .. ... ... ... ... 861 0 male 11.5000 Second 862 1 female 25.9292 First 863 0 female 69.5500 Third 864 0 male 13.0000 Second 865 1 female 13.0000 Second 866 1 female 13.8583 Second 867 0 male 50.4958 First 868 0 male 9.5000 Third 869 1 male 11.1333 Third 870 0 male 7.8958 Third 871 1 female 52.5542 First 872 0 male 5.0000 First 873 0 male 9.0000 Third 874 1 female 24.0000 Second 875 1 female 7.2250 Third 876 0 male 9.8458 Third 877 0 male 7.8958 Third 878 0 male 7.8958 Third 879 1 female 83.1583 First 880 1 female 26.0000 Second 881 0 male 7.8958 Third 882 0 female 10.5167 Third 883 0 male 10.5000 Second 884 0 male 7.0500 Third 885 0 female 29.1250 Third 886 0 male 13.0000 Second 887 1 female 30.0000 First 888 0 female 23.4500 Third 889 1 male 30.0000 First 890 0 male 7.7500 Third [891 rows x 4 columns] </code></pre> <p>heres what the bar graph should look like:</p> <p><a href="https://i.stack.imgur.com/CrTwc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CrTwc.png" alt="enter image description here"></a></p>
<p>Here's what you can do:</p> <pre><code>df = pd.read_csv('shipwreck.csv', usecols=['survived', 'sex', 'class'])

df_piv = pd.pivot_table(df, index='class', columns='sex',
                        aggfunc=lambda x: 100*x.sum()/x.count(),  # % per group
                        margins=True, margins_name='Combined')
df_piv.columns = df_piv.columns.droplevel()
df_piv.plot.bar(rot='horizontal');
</code></pre> <p><a href="https://i.stack.imgur.com/K0123.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K0123.png" alt="enter image description here"></a></p>
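<p>As a cross-check, the per-class/per-sex percentages (without the "Combined" margins) can also be computed with a plain <code>groupby</code>, since the mean of the 0/1 <code>survived</code> column is exactly the survival rate:</p> <pre><code>pct = df.groupby(['class', 'sex'])['survived'].mean().mul(100).unstack()
pct.plot.bar(width=.25, rot='horizontal');
</code></pre>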
python|pandas|matplotlib
3
374,984
55,926,039
selecting different columns each row
<p>I have a dataframe with 500K rows and 7 day columns, plus start-day and end-day columns.</p> <p>I search for a value (equal to 0) within the range (startDay, endDay).</p> <p>For example, for id_1, startDay=1 and endDay=7, so I should look for the value in columns D1 to D7.</p> <p>For id_2, startDay=4 and endDay=7, so I should look in columns D4 to D7. However, I couldn't manage to search a different column range for each row.</p> <p>Given the above: </p> <ol> <li>if startDay &gt; endDay, I should see "-999"</li> <li><p>otherwise, I need to find the first zero within the day range. For example, for id_3 the first zero is in column D2 (day 2), and the startDay of id_3 is 1, so I want to see 2-1=1 (D2 - startDay)</p></li> <li><p>if I cannot find a 0, I want to see "8"</p></li> </ol> <p>Here is my data:</p> <pre><code>data = {
    'D1':[0,1,1,0,1,1,0,0,0,1],
    'D2':[2,0,0,1,2,2,1,2,0,4],
    'D3':[0,0,1,0,1,1,1,0,1,0],
    'D4':[3,3,3,1,3,2,3,0,3,3],
    'D5':[0,0,3,3,4,0,4,2,3,1],
    'D6':[2,1,1,0,3,2,1,2,2,1],
    'D7':[2,3,0,0,3,1,3,2,1,3],
    'startDay':[1,4,1,1,3,3,2,2,5,2],
    'endDay':[7,7,6,7,7,7,2,1,7,6]
}
data_idx = ['id_1','id_2','id_3','id_4','id_5',
            'id_6','id_7','id_8','id_9','id_10']
df = pd.DataFrame(data, index=data_idx)
</code></pre> <p>What I want to see:</p> <pre><code>df_need = pd.DataFrame([0,1,1,0,8,2,8,-999,8,1], index=data_idx)
</code></pre>
<p>You can create a boolean array that checks, in each row, which 'Dx' columns are at or after 'startDay', at or before 'endDay', and equal to 0. For the first two conditions, you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html#numpy.ufunc.outer" rel="nofollow noreferrer"><code>np.ufunc.outer</code></a> with the <code>ufunc</code> being <a href="https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.less_equal.html" rel="nofollow noreferrer"><code>np.less_equal</code></a> and <a href="https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.greater_equal.html#numpy.greater_equal" rel="nofollow noreferrer"><code>np.greater_equal</code></a>, such as:</p> <pre><code>import numpy as np

arr_bool = ( np.less_equal.outer(df.startDay, range(1,8))     # which columns Dx are at or after startDay
             &amp; np.greater_equal.outer(df.endDay, range(1,8))  # which columns Dx are at or before endDay
             &amp; (df.filter(regex='D[0-9]').values == 0))       # which values of the columns Dx are 0
</code></pre> <p>Then you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow noreferrer">np.argmax</a> to find the first <code>True</code> per row. By adding 1 and subtracting 'startDay', you get the values you are looking for. Then you handle the other conditions with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer">np.select</a>, replacing values by -999 if <code>df.startDay &gt;= df.endDay</code> or by 8 if there is no <code>True</code> in the row of <code>arr_bool</code>, such as:</p> <pre><code>df_need = pd.DataFrame( (np.argmax(arr_bool, axis=1) + 1 - df.startDay).values,
                        index=data_idx, columns=['need'])
df_need.need = np.select( condlist = [df.startDay &gt;= df.endDay, ~arr_bool.any(axis=1)],
                          choicelist = [ -999, 8],
                          default = df_need.need)

print (df_need)
       need
id_1      0
id_2      1
id_3      1
id_4      0
id_5      8
id_6      2
id_7   -999
id_8   -999
id_9      8
id_10     1
</code></pre> <p>One note: to get -999 for <code>id_7</code>, I used the condition <code>df.startDay &gt;= df.endDay</code> in <code>np.select</code>, not <code>df.startDay &gt; df.endDay</code> as in your question. If you change it to the strict comparison, you get 8 instead of -999 in that case.</p>
python-3.x|pandas|numpy
1
374,985
55,610,891
numpy.load from io.BytesIO stream
<p>I have numpy arrays saved in Azure Blob Storage, and I'm loading them to a stream like this:</p> <pre><code>stream = io.BytesIO()
store.get_blob_to_stream(container, 'cat.npy', stream)
</code></pre> <p>I know from <code>stream.getvalue()</code> that the stream contains the metadata to reconstruct the array. This is the first 150 bytes:</p> <pre><code>b"\x93NUMPY\x01\x00v\x00{'descr': '|u1', 'fortran_order': False, 'shape': (720, 1280, 3), }                                                  \n\xc1\xb0\x94\xc2\xb1\x95\xc3\xb2\x96\xc4\xb3\x97\xc5\xb4\x98\xc6\xb5\x99\xc7\xb6\x9a\xc7"
</code></pre> <p>Is it possible to load the bytes stream with <code>numpy.load</code> or by some other simple method?</p> <p>I could instead save the array to disk and load it from disk, but I'd like to avoid that for several reasons...</p> <p>EDIT: just to emphasize, the output would need to be a numpy array with the shape and dtype specified in the 128 first bytes of the stream.</p>
<p>I tried several ways to meet your needs.</p> <p>Here are my sample codes.</p> <pre><code>from azure.storage.blob.baseblobservice import BaseBlobService
import numpy as np

account_name = '&lt;your account name&gt;'
account_key = '&lt;your account key&gt;'
container_name = '&lt;your container name&gt;'
blob_name = '&lt;your blob name&gt;'

blob_service = BaseBlobService(
    account_name=account_name,
    account_key=account_key
)
</code></pre> <p>Sample 1. Generate a blob url with a sas token and get the content via <code>requests</code>. Since the blob is a <code>.npy</code> file, <code>np.load</code> is used so that the header (shape and dtype) is honored:</p> <pre><code>from azure.storage.blob import BlobPermissions
from datetime import datetime, timedelta
import io
import requests

sas_token = blob_service.generate_blob_shared_access_signature(container_name, blob_name, permission=BlobPermissions.READ, expiry=datetime.utcnow() + timedelta(hours=1))
print(sas_token)
url_with_sas = blob_service.make_blob_url(container_name, blob_name, sas_token=sas_token)
print(url_with_sas)

r = requests.get(url_with_sas)
dat = np.load(io.BytesIO(r.content))  # parses the .npy header for shape and dtype
print('from requests', dat)
</code></pre> <p>Sample 2. Download the content of the blob into memory via <code>BytesIO</code>, rewind the stream, and let <code>np.load</code> parse it:</p> <pre><code>import io

stream = io.BytesIO()
blob_service.get_blob_to_stream(container_name, blob_name, stream)
stream.seek(0)  # rewind so np.load starts at the .npy header
dat = np.load(stream)
print('from BytesIO', dat)
</code></pre> <p>Sample 3. Use <code>np.load</code> with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.DataSource.html#numpy.DataSource" rel="nofollow noreferrer"><code>DataSource</code></a> to open a blob url with a sas token; it will actually download the blob file into the local filesystem. </p> <pre><code>ds = np.DataSource()
# ds = np.DataSource(None)  # use with temporary file
# ds = np.DataSource(path)  # use with path like `data/`
f = ds.open(url_with_sas, mode='rb')  # binary mode so np.load can parse the header
dat = np.load(f)
print('from DataSource', dat)
</code></pre> <p>I think Samples 1 &amp; 2 are better for you. (Note: <code>np.frombuffer</code> or <code>np.fromfile</code> would read the raw bytes, including the <code>.npy</code> header, as data, and would not recover the shape and dtype you asked for.)</p>
python|numpy|azure-storage
4
374,986
55,625,121
Where are the filter image data in this TensorFlow example?
<p>I'm trying to work through this tutorial by Google on using the TensorFlow Estimator API to train and recognise images: <a href="https://www.tensorflow.org/tutorials/estimators/cnn" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/estimators/cnn</a></p> <p>The data I can see in the tutorial are <strong>train_data, train_labels, eval_data, eval_labels</strong>:</p> <pre><code>((train_data,train_labels),(eval_data,eval_labels)) = tf.keras.datasets.mnist.load_data();
</code></pre> <p>In the convolutional layers, shouldn't there be feature-filter image data to multiply with the input image data? <strong>I don't see it in the code</strong>.</p> <p>According to this guide, the input image data is matrix-multiplied with filter image data to check for low-level features (curves, edges, etc.), so <strong>shouldn't there be filter image data too (the right matrix in the image below)?</strong>: <a href="https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks" rel="nofollow noreferrer">https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks</a></p> <p><a href="https://i.stack.imgur.com/6nwZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6nwZo.png" alt="enter image description here"></a></p>
<p>The filters are the weight matrices of the <code>Conv2d</code> layers used in the model, and are not pre-loaded images like the "butt curve" you gave in the example. If this were the case, we would need to provide the CNN with all possible types of shapes, curves, and colours, and hope that any unseen data we feed the model contains this finite set of images somewhere within it that the model can recognise.</p> <p>Instead, we allow the CNN to learn the filters it requires to successfully classify from the data itself, and hope it can generalise to new data. Through multitudes of iterations and data (which they require a lot of), the model iteratively crafts the best set of filters for it to successfully classify the images. The random initialisation at the start of training ensures that all filters per layer learn to identify a different feature in the input image.</p> <p>The fact that earlier layers usually correspond to colour and edges (like above) is not predefined, but the network has realised that looking for edges in the input is the only way to create context in the rest of the image, and thereby to classify (humans do the same initially).</p> <p>The network uses these primitive filters in earlier layers to generate more complex interpretations in deeper layers. This is the power of distributed learning: representing complex functions through multiple applications of much simpler functions. </p>
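<p>If you want to see these learned filters for yourself, they are directly accessible as the layer's weights once a model is trained (a small sketch using the Keras API; the exact shape depends on your kernel size and filter count, and <code>model</code> here is assumed to be a trained model whose first layer is a <code>Conv2D</code>):</p> <pre><code>filters, biases = model.layers[0].get_weights()
print(filters.shape)  # e.g. (5, 5, 1, 32): 5x5 kernels, 1 input channel, 32 filters
</code></pre>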
tensorflow|machine-learning|artificial-intelligence|conv-neural-network|sample
1
374,987
55,705,910
reformatting a sequential data file into a data frame using pandas
<p>I have an input file, now converted to a <code>pandas.dataframe</code>. The records/rows are in a sequence which contain related data of the form</p> <pre><code> survey, a, b, c section, 1, 2, 3 observation, a, b, c values, 1, 2, 3 values, 4, 5, 6 observation, d, e, f values, 7, 8, 9 section, 4, 5, 6 ... </code></pre> <p>The survey record only occurs once. A section may occur multiple times and will contain observation and value records. Observations will always be followed by values sometimes as multiple records.</p> <p>I am trying to reformat this into rows where each set of values is on a separate row with its corresponding survey, section, and observation.</p> <pre><code>survey, a,b,c, section, 1,2,3, observation, a,b,c, values, 1,2,3 survey, a,b,c, section, 1,2,3, observation, a,b,c, values, 4,5,6 survey, a,b,c, section, 1,2,3, observation, d, e, f, values, 7, 8, 9 survey, a,b,c, section, 4, 5, 6 and so on.... </code></pre> <p>Can this be done with <code>pandas</code> or should I iterate through an if, then else structure ?</p> <p>The methods I've tried so far are the following (these are probably simplistic and heading in the wrong directions):</p> <pre><code>#pd.DataFrame(hmdDataToProcess.unstack()) #hmdDataToProcess.unstack #hmdDataToProcess.stack #pd.melt(hmdDataToProcess, id_vars =[0], value_vars = ['SURVEY','SECTION','OBSERV','OBVAL']) #df2 = hmdDataToProc0ess.pivot_table(index = [0]).reset_index() #df2 = df_in.pivot_table(index = #['Example1','Example2'],columns='VC', values= #['Weight','Rank']).reset_index() #hmdDataToProcess.groupby('SECTION').groups #, 'OBSERV', 'OBVAL' </code></pre>
<p>You could do it without using <code>Pandas</code> </p> <pre><code>s = '''survey, a, b, c
section, 1, 2, 3
observation, a, b, c
values, 1, 2, 3
values, 4, 5, 6
observation, d, e, f
values, 7, 8, 9
section, 4, 5, 6'''

list_s = s.strip().split('\n')
list_s = [x.strip() for x in list_s]
list_s
# ['survey, a, b, c', 'section, 1, 2, 3', 'observation, a, b, c', 'values, 1, 2, 3', 'values, 4, 5, 6', 'observation, d, e, f', 'values, 7, 8, 9', 'section, 4, 5, 6']

for el in list_s:
    if el.split(',')[0] == 'survey':
        survey = el
    if el.split(',')[0] == 'section':
        section = el
    if el.split(',')[0] == 'observation':
        observation = el
    if el.split(',')[0] == 'values':
        print(f"{survey},{section},{observation},{el}")

#survey, a, b, c,section, 1, 2, 3,observation, a, b, c,values, 1, 2, 3
#survey, a, b, c,section, 1, 2, 3,observation, a, b, c,values, 4, 5, 6
#survey, a, b, c,section, 1, 2, 3,observation, d, e, f,values, 7, 8, 9
</code></pre>
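<p>Since the question asked about <code>pandas</code>: the same idea translates to a forward-fill, where each survey/section/observation record is broadcast down to the <code>values</code> rows that follow it (a sketch, reusing <code>list_s</code> from above):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'line': list_s})
df['kind'] = df['line'].str.split(',').str[0]
for k in ['survey', 'section', 'observation']:
    df[k] = df['line'].where(df['kind'] == k).ffill()  # carry the last record of each kind down
out = df.loc[df['kind'] == 'values', ['survey', 'section', 'observation', 'line']]
print(out)
</code></pre>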
python|pandas|sequential
0
374,988
55,947,360
How to provide encoding while reading multiple files?
<p>I'm reading multiple csv files in from a folder. While reading multiple files I receive <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 21: invalid start byte</code>. </p> <p>When I read the files one by one, I pass <code>encoding="ISO-8859-1"</code> to <code>pandas.read_csv</code>. My final objective is to append all files into a single data frame. The following is the code I'm using for this purpose.</p> <pre><code>import glob
import pandas as pd

files = glob.glob('/path_name/*.csv')

df = None
for i, f in enumerate (files):
    if i == 0:
        df = pd.read_csv(f)
        df['fname'] = f
    else:
        tmp = pd.read_csv(f)
        tmp['fname'] = f
        df = df.append(tmp)

df.head()
</code></pre>
<p>Try adding <code>errors='ignore'</code>; then everything works, but you will lose a couple of characters.</p> <pre><code>with open(path, encoding="utf8", errors='ignore') as f:
</code></pre>
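<p>In the loop from the question, the more direct route is to pass the encoding that already worked for single files straight to <code>read_csv</code>, and to collect the frames with <code>concat</code> instead of repeated <code>append</code> (a sketch):</p> <pre><code>import glob
import pandas as pd

frames = []
for f in glob.glob('/path_name/*.csv'):
    tmp = pd.read_csv(f, encoding='ISO-8859-1')
    tmp['fname'] = f
    frames.append(tmp)
df = pd.concat(frames, ignore_index=True)
</code></pre>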
python|pandas
0
374,989
64,958,746
Calculating {z,w} from {x,y} given 200 samples of f(x, y) = (z,w)
<p>What would be the best way to calculate <code>(z,w)</code> from an <code>(x,y)</code> pair, given that we have 200 samples of <code>f(x,y) = (z,w)</code>?</p>
<p>You want a 2D interpolation function such as <a href="https://scipython.com/book/chapter-8-scipy/examples/scipyinterpolateinterp2d/" rel="nofollow noreferrer"><strong><code>scipy.interpolate.interp2d</code></strong></a>.</p> <p>As in the linked example, use <code>Z=f(x,y)</code> and set the interpolation type to <code>cubic</code> for smooth results.</p> <pre><code>w = interp2d(x, y, Z, kind='cubic')
</code></pre> <p>In practice, the best choice depends on what the data looks like; there may be better models to fit.</p>
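<p>Since the output here has two components, you would fit one interpolant per component (a sketch, assuming the 200 samples are 1-D arrays <code>x</code>, <code>y</code>, <code>z</code>, <code>w</code>, and <code>(x0, y0)</code> is the new point to evaluate):</p> <pre><code>from scipy.interpolate import interp2d

fz = interp2d(x, y, z, kind='cubic')  # interpolant for the z component
fw = interp2d(x, y, w, kind='cubic')  # interpolant for the w component
z_new, w_new = fz(x0, y0), fw(x0, y0)
</code></pre>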
numpy|math
1
374,990
64,627,602
Combining three similar Pandas datasets, keeping original values
<p>So I have three similar data sets given by the lines below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.DataFrame({'Name': ['Michael', 'Samantha', 'Jimmy'], 'Gender': ['M', 'F', 'M'], 'Mon': [0,1,2], 'Tue': [0,3,5], 'Wed': [0,5,3]}) df2 = pd.DataFrame({'Name': ['Michael', 'Samantha', 'Jimmy'], 'Gender': ['M', 'F', 'M'], 'Mon': [1,2,4], 'Tue': [2,3,5], 'Wed': [1,4,5]}) df3 = pd.DataFrame({'Name': ['Michael', 'Samantha', 'Jimmy'], 'Gender': ['M', 'F', 'M'], 'Mon': [5,4,0], 'Tue': [4,6,5], 'Wed': [1,7,6]}) </code></pre> <p>Each data frame, respectively, appears as such:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df1 Name Gender Mon Tue Wed 0 Michael M 0 0 0 1 Samantha F 1 3 5 2 Jimmy M 2 5 3 &gt;&gt;&gt; df2 Name Gender Mon Tue Wed 0 Michael M 1 2 1 1 Samantha F 2 3 4 2 Jimmy M 4 5 5 &gt;&gt;&gt; df3 df3 Name Gender Mon Tue Wed 0 Michael M 5 4 1 1 Samantha F 4 6 7 2 Jimmy M 0 5 6 </code></pre> <p>Is there any way to create a resultant data frame that combines the numbers into one data frame? A result would appear as:</p> <pre class="lang-py prettyprint-override"><code> Name Gender Mon Tue Wed 0 Michael M [0,1,5] [0,2,4] [0,1,1] 1 Samantha F [1,2,4] [3,3,6] [5,4,7] 2 Jimmy M [2,4,0] [5,5,5] [3,5,6] </code></pre> <p>The order of the data would have to be maintained. The first item in the list doesn't necessarily have to come from the first dataset (<code>df1</code>), but I would like to always know where the number from the first dataset lands so that I can pull that specific value out of the combined data frame.</p>
<p>Let us do <code>concat</code> then <code>groupby</code></p> <pre><code>df = pd.concat([df1,df2,df3]).set_index(['Name','Gender']).groupby(level=[0,1]).agg(list).reset_index()

Out[20]:
       Name Gender        Mon        Tue        Wed
0     Jimmy      M  [2, 4, 0]  [5, 5, 5]  [3, 5, 6]
1   Michael      M  [0, 1, 5]  [0, 2, 4]  [0, 1, 1]
2  Samantha      F  [1, 2, 4]  [3, 3, 6]  [5, 4, 7]
</code></pre>
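<p>Because <code>concat</code> keeps the frames in the order given and <code>groupby</code> preserves the order of rows within each group, position 0 of every list always comes from <code>df1</code>, position 1 from <code>df2</code>, and position 2 from <code>df3</code>, which answers the "I would like to always know where the number from the first dataset lands" part. For example:</p> <pre><code>df.loc[df['Name'] == 'Michael', 'Mon'].iloc[0][0]  # -&gt; 0, Michael's Mon value in df1
</code></pre>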
python|pandas|dataframe
1
374,991
64,921,756
Merge a list containing numpy arrays and numbers into one numpy array
<p>I have the following list that I want to turn into one numpy array. What is the best and most efficient way to do this?</p> <pre><code>[[array([1, 2, 3]), 1],
 [array([1, 2, 3]), 2],
 [array([1, 2, 3]), 4],
 [array([4, 4, 4]), 3]]
</code></pre> <p>Expected result:</p> <pre><code>[[1, 2, 3, 1],
 [1, 2, 3, 2],
 [1, 2, 3, 4],
 [4, 4, 4, 3]]
</code></pre>
<p>You can use this</p> <pre><code>test = [[np.array([1, 2, 3]), 1],
        [np.array([1, 2, 3]), 2],
        [np.array([1, 2, 3]), 4],
        [np.array([4, 4, 4]), 3]]

np.apply_along_axis(lambda x: np.hstack((x[0], [x[1]])), axis=1, arr=test)
</code></pre>
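<p>An equivalent and arguably more readable option is a plain list comprehension, since <code>np.hstack</code> happily concatenates an array with a scalar:</p> <pre><code>np.array([np.hstack(pair) for pair in test])
</code></pre>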
python|numpy
2
374,992
64,740,758
How to aggregate data based on two columns?
<p><a href="https://i.stack.imgur.com/5LzUw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5LzUw.png" alt="enter image description here" /></a></p> <p>i have table as shown above where there are two columns(Gender and year). i want to convert this into following format as shown below. any help on how to do this would be appreciated.</p> <p><a href="https://i.stack.imgur.com/qFAB6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qFAB6.png" alt="enter image description here" /></a></p>
<p>You can do:</p> <pre><code>df = pd.DataFrame({'Gender': ['m', 'm', 'm', 'm', 'f'],
                   'year': [2011, 2013, 2011, 2011, 2012]})

pd.crosstab(df['year'], df['Gender'])

Gender  f  m
year
2011    0  3
2012    1  0
2013    0  1
</code></pre> <p>To reverse the gender columns, it will be:</p> <pre><code>pd.crosstab(df['year'], df['Gender'])[['m', 'f']]
</code></pre>
python|pandas|data-science
2
374,993
64,986,643
Pivot output isn't as expected
<p>I have data which is already summed and grouped in a dataframe named <code>df</code>:</p> <pre><code>| id | segment | region | points | |----|---------|----------|--------| | 90 | Gold | APAC | 21 | | 90 | Silver | EMEA | 34 | | 90 | Bronze | AMERICAS | 564 | | 90 | Gold | EMEA | 3939 | | 90 | Silver | Americas | 989 | | 90 | Gold | EMEA | 43 | | 90 | Silver | APAC | 13 | | 90 | Bronze | AMERICAS | 567 | </code></pre> <p>I would like to pivot both <code>segment</code> and <code>region</code> to columns and then total the points for those columns. The output would look like the below based on the input above:</p> <pre><code>| id | Gold | Silver | Bronze | APAC | EMEA | AMERICAS | |----|------|--------|--------|------|------|----------| | 90 | 4003 | 1036 | 1131 | 34 | 4016 | 2120 | </code></pre> <p>What I've tried so far is to convert my dataframe to Pandas and then use the built in <code>pivot_table</code> function.</p> <pre><code>import pandas as pd df_pd = df.toPandas() pd.pivot_table(df_pd, values = 'points', index=['id'], columns = ['segment', 'region']).reset_index() </code></pre> <p>The code works but the output isn't as expected. Instead of getting totals with each <code>region</code> and <code>segment</code> as a column, I get two rows of columns. Within the two rows of columns, it appears that <code>region</code> is a subgrouping of <code>segment</code>. See below (note, numbers don't match due to random numbers being used in the sample data, I'm more concerned about the shape):</p> <p><a href="https://i.stack.imgur.com/54FC8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/54FC8.png" alt="enter image description here" /></a></p>
<p>Solution with double <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p> <pre><code>df1 = pd.pivot_table(df_pd, values='points', index='id', columns='segment', aggfunc='sum') df2 = pd.pivot_table(df_pd, values='points', index='id', columns='region', aggfunc='sum') df = df1.join(df2).reset_index() print (df) id Bronze Gold Silver AMERICAS APAC Americas EMEA 0 90 1131 4003 1036 1131 34 989 4016 </code></pre> <p>In your solution is possible add <code>sum</code> per first and per second level of <code>MultiIndex in columns</code> with <code>join</code>:</p> <pre><code>df3 = pd.pivot_table(df_pd, values = 'points', index='id', columns = ['segment', 'region'], aggfunc='sum') df = df3.sum(level=0, axis=1).join(df3.sum(level=1, axis=1)).reset_index() print (df) id Bronze Gold Silver AMERICAS APAC EMEA Americas 0 90 1131 4003 1036 1131 34 4016 989 </code></pre>
python-3.x|pandas|pivot|pivot-table|transpose
1
374,994
64,981,900
Reassignment of weights in tensorflow 2/keras
<p>I'm currently testing some modified versions of dropout in Keras and one of them involves adjusting the weights during the training of a customized dense layer. I however have not been able to run it without error yet. I suspect is has something to do with eager execution but I'm not sure.</p> <pre><code>class Linear(keras.layers.Layer): def __init__(self, units, **kwargs): super(Linear, self).__init__(**kwargs) self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer=&quot;random_normal&quot;, trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer=&quot;random_normal&quot;, trainable=True ) def call(self, inputs, training=False): prob = 0.0/10 if training: w = np.matrix(self.w) # w = self.w shape = w.shape size = shape[0] * shape[1] arr = np.random.choice([0,1], size=size, p=[prob, 1 - prob]) #random array of 1's and 0's arr = arr.reshape(shape) #reshape it to same dimensions as weights new_weights = np.multiply(arr, w) #element wise multiplication self.w = new_weights return tf.matmul(inputs, self.w) + self.b </code></pre> <pre><code>model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', padding='same')) model.add(layers.MaxPooling2D()) model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same')) model.add(layers.MaxPooling2D()) model.add(layers.Conv2D(128, (3, 3), activation='relu',padding='same')) model.add(layers.MaxPooling2D()) model.add(layers.Conv2D(4, (3, 3), activation='relu',padding='same')) model.add(layers.MaxPooling2D()) model.add(layers.Flatten()) model.add(Linear(3)) #Custom layer model.add(layers.Dense(10, activation='softmax')) model.compile(loss = 'CategoricalCrossentropy', optimizer = 'adam', metrics=['accuracy']) epochs = 1 history = model.fit(train_dataset, validation_data=validation_dataset, epochs=epochs) </code></pre> <p>Error: <em>TypeError: Expected binary or unicode string, got &lt;tf.Tensor 'sequential_3/linear_3/mul:0' shape=(4, 3) dtype=float32&gt;</em></p> <p><a href="https://i.stack.imgur.com/nvg10.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nvg10.png" alt="The following is the full error I get when running this on google colab" /></a></p>
<p><code>self.w</code> has to be a <code>tensorflow.Variable</code>. However, after the multiplication in <code>call()</code> it becomes a <code>tensorflow.Tensor</code>. You just need another way to do the same thing in <code>call()</code>. Try this code:</p> <pre><code>  def call(self, inputs, training=False):
        prob = 0.0/10
        if training:
            w = np.matrix(self.w)
            shape = w.shape
            size = shape[0] * shape[1]
            arr = np.random.choice([0,1], size=size, p=[prob, 1 - prob]) #random array of 1's and 0's
            arr = arr.reshape(shape) #reshape it to same dimensions as weights
            # CHANGED 3 LINES BELOW:
            arr = tf.convert_to_tensor(arr, dtype=tf.float32)
            new_weights = tf.multiply(arr, self.w)
            self.w.assign(new_weights)  # assign() preserves the tf.Variable

        return tf.matmul(inputs, self.w) + self.b
</code></pre>
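<p>One caveat (an assumption about how you train): if this layer runs inside a compiled graph (e.g. under <code>tf.function</code>, which <code>model.fit</code> uses by default), the NumPy code above executes only once at tracing time, so the random mask would be frozen. Generating the mask with TensorFlow ops keeps it re-sampled every step (a sketch of the replacement lines):</p> <pre><code>keep = tf.cast(tf.random.uniform(tf.shape(self.w)) &gt;= prob, self.w.dtype)
self.w.assign(self.w * keep)  # element-wise mask, re-drawn on every call
</code></pre>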
python|tensorflow|keras|neural-network|tensorflow2.0
1
374,995
64,911,096
How do I move a particular row up in a Pandas dataframe?
<p>Given the following dataframe:</p> <pre><code>df_test = pd.DataFrame(
    [['18-24', 334725], ['25-44', 698261], ['45-64', 273087],
     ['65+', 15035], ['&lt;18', 80841]],
    columns=['age_group', 'total_arrests']
)
</code></pre> <p><a href="https://i.stack.imgur.com/XzP62.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzP62.png" alt="enter image description here" /></a></p> <p>How would I move the last row (index=4) to the top of the dataframe? I need to do this because when I plot this in matplotlib, the <code>&lt;18</code> group appears at the end, not the beginning:</p> <p><a href="https://i.stack.imgur.com/ULGFw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ULGFw.png" alt="enter image description here" /></a></p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.argsort.html" rel="nofollow noreferrer"><code>Series.argsort</code></a> with compare column for not equal for indices and pass to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p> <pre><code>df_test = df_test.iloc[df_test['age_group'].ne('&lt;18').argsort()] print (df_test) age_group total_arrests 4 &lt;18 80841 0 18-24 334725 1 25-44 698261 2 45-64 273087 3 65+ 15035 </code></pre> <p>If value is always last and index values are unique change order with indexing by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p> <pre><code>df_test = df_test.loc[df_test.index[-1:].tolist() + df_test.index[:-1].tolist()] print (df_test) age_group total_arrests 4 &lt;18 80841 0 18-24 334725 1 25-44 698261 2 45-64 273087 3 65+ 15035 </code></pre>
python|pandas|dataframe
1
374,996
64,830,512
weird behavior of numpy when it calculates a vector and matrix multiplication
<p>I have the following weird behavior of <code>numpy</code>, where numpy can't multiply an <code>(n,n)</code> matrix with an <code>(n,)</code> matrix by converting the latter to a <code>(1,n)</code> matrix. I tried different standalone examples and they worked fine. <code>u</code> and <code>s</code> were obtained from the <code>svd</code> function as follows:</p> <pre><code> [u, s, vt] = np.linalg.svd(G)
 svd_estimate = np.matmul(u * s, vt)
</code></pre> <p>and <code>G</code> is a numpy matrix. I also tried <code>squeeze(s)</code>, but that didn't work either. What am I missing? The numpy version is <code>'1.19.2'</code>.</p> <p><a href="https://i.stack.imgur.com/B6ODk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B6ODk.png" alt="enter image description here" /></a></p>
<p>Look at what <code>svd</code> produces for a <code>matrix</code> versus <code>array</code>:</p> <pre><code>In [24]: np.linalg.svd(np.matrix(np.eye(3)))
Out[24]:
(matrix([[1., 0., 0.],
         [0., 1., 0.],
         [0., 0., 1.]]),
 array([1., 1., 1.]),
 matrix([[1., 0., 0.],
         [0., 1., 0.],
         [0., 0., 1.]]))

In [25]: np.linalg.svd(np.eye(3))
Out[25]:
(array([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]]),
 array([1., 1., 1.]),
 array([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]]))
</code></pre> <p>With the array values, as shown in the docs:</p> <pre><code>In [27]: u,s,vh=_25

In [28]: np.dot(u*s,vh)
Out[28]:
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
</code></pre> <p>With the matrix results we have to use <code>np.multiply</code>:</p> <pre><code>In [37]: u,s,vh=_24

In [38]: np.multiply(u,s)
Out[38]:
matrix([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])

In [39]: np.multiply(u,s)*vh
Out[39]:
matrix([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])
</code></pre>
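<p>The practical takeaway for the question's snippet: convert the <code>np.matrix</code> to a plain <code>ndarray</code> before the decomposition, and the documented reconstruction works as-is (<code>G</code> being the matrix from the question):</p> <pre><code>u, s, vt = np.linalg.svd(np.asarray(G))  # ndarray inputs give ndarray outputs
svd_estimate = np.matmul(u * s, vt)
</code></pre>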
python|numpy
1
374,997
65,023,572
Combining Pandas startswith and rstrip
<p>I'm trying to compare two strings, each containing an array of ints, to see if one is the start of the other. These are both columns in a pandas DataFrame. Here's the problem reduced to a simple example.</p> <p>Here's the data:</p> <pre class="lang-python prettyprint-override"><code>data = {'pred1': ['[0, 1, 2, 3]', '[0, 2, 2, 4]'], 'pred2': ['[0, 1]', '[0, 1]']} df = pd.DataFrame(data=data) </code></pre> <p>Using Pandas <code>rstrip</code> I can take the final <code>]</code> from the <code>pred2</code> column:</p> <pre class="lang-python prettyprint-override"><code>df['pred2'].str.rstrip(']') </code></pre> <pre class="lang-none prettyprint-override"><code>&gt; 0 [0, 1 &gt; 1 [0, 1 &gt; Name: pred2, dtype: object </code></pre> <p>The following gives the result I would expect</p> <pre class="lang-python prettyprint-override"><code>df['pred1'].str.startswith('[0, 1') </code></pre> <pre class="lang-none prettyprint-override"><code>&gt; 0 True &gt; 1 False &gt; Name: pred1, dtype: bool </code></pre> <p>But combining the Pandas <code>startswith</code> and <code>rstrip</code> across the two columns does not seem to work:</p> <pre class="lang-python prettyprint-override"><code>df['pred1'].str.startswith(df['pred2'].str.rstrip(']')) </code></pre> <pre class="lang-none prettyprint-override"><code>&gt; 0 NaN &gt; 1 NaN &gt; Name: pred1, dtype: float64 </code></pre> <pre class="lang-python prettyprint-override"><code>df['pred1'].str.startswith(str(df['pred2'].str.rstrip(']'))) </code></pre> <pre class="lang-none prettyprint-override"><code>&gt; 0 False &gt; 1 False &gt; Name: pred1, dtype: bool </code></pre> <p>Given that the <code>rstrip</code> produces <code>[0, 1</code> for the first row in the second column, why is the resulting value for <code>startswith</code> <code>False</code>?</p>
<p>You should use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>df.apply</code></a> instead:</p> <pre><code>In [3793]: df.apply(lambda x: x['pred1'].startswith(x['pred2'].rstrip(']')), axis=1)
Out[3793]:
0     True
1    False
dtype: bool
</code></pre> <p>The issue with your command <code>df['pred1'].str.startswith(df['pred2'].str.rstrip(']'))</code> is:</p> <p>You are passing a <code>Series</code> to <code>Series.str.startswith</code>, which expects a string, not a <code>Series</code>; that is why it produces <code>NaN</code> instead of the element-wise result you expected.</p> <p>As per the <a href="https://pandas.pydata.org/pandas-docs/version/0.23.1/generated/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>Series.str.startswith docs</code></a>:</p> <blockquote> <p>pat : str Character sequence. Regular expressions are not accepted.</p> </blockquote>
python|pandas
2
374,998
64,646,197
Watch-your-step model with StellarGraph is not working on a GPU
<p>I am trying to train a large graph embedding with the WatchYourStep algorithm using StellarGraph.</p> <p>For some reason, the model is only trained on the CPU <strong>and not utilizing the GPUs</strong>.<br /> My setup:</p> <ul> <li>TensorFlow-gpu 2.3.1</li> <li>2 GPUs, CUDA 10.1</li> <li>running inside an nvidia-docker container</li> <li>I know that TensorFlow does find the GPUs (<code>tf.debugging.set_log_device_placement(True)</code>)</li> <li>I have tried to run under <code>with tf.device('/GPU:0'):</code></li> <li>I have tried to run it with <code>tf.distribute.MirroredStrategy()</code></li> <li>I have tried to uninstall tensorflow and reinstall tensorflow-gpu</li> </ul> <p>Nevertheless, when running <strong>nvidia-smi</strong>, I don't see any activity on the GPUs, and the training is very slow.<br /> How can I debug this?</p> <pre class="lang-py prettyprint-override"><code>def watch_your_step_model():
    '''Use the config to generate the WatchYourStep model.'''
    cfg = load_config()
    generator = generator_for_watch_your_step()
    num_walks = cfg['num_walks']
    embedding_dimension = cfg['embedding_dimension']
    learning_rate = cfg['learning_rate']
    wys = WatchYourStep(
        generator,
        num_walks=num_walks,
        embedding_dimension=embedding_dimension,
        attention_regularizer=regularizers.l2(0.5),
    )
    x_in, x_out = wys.in_out_tensors()
    model = Model(inputs=x_in, outputs=x_out)
    model.compile(loss=graph_log_likelihood, optimizer=optimizers.Adam(learning_rate))
    return model, generator, wys

def train_watch_your_step_model(epochs=3000):
    cfg = load_config()
    batch_size = cfg['batch_size']
    steps_per_epoch = cfg['steps_per_epoch']
    callbacks, checkpoint_file = watch_your_step_callbacks(cfg)

    # strategy = tf.distribute.MirroredStrategy()
    # print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
    # with strategy.scope():
    model, generator, wys = watch_your_step_model()

    train_gen = generator.flow(batch_size=batch_size, num_parallel_calls=8)
    train_gen.prefetch(20480000)
    history = model.fit(
        train_gen,
        epochs=epochs,
        verbose=1,
        steps_per_epoch=steps_per_epoch,
        callbacks=callbacks
    )
    copy_last_trained_wys_weights_to_data()
    return history, checkpoint_file

with tf.device('/GPU:0'):
    train_watch_your_step_model()
</code></pre>
<p>I just followed these instructions: <a href="https://github.com/stellargraph/stellargraph/issues/546" rel="nofollow noreferrer">https://github.com/stellargraph/stellargraph/issues/546</a>. It worked for me.</p> <p>Basically, you have to edit the <code>setup.py</code> file from the StellarGraph GitHub repository and remove the <code>tensorflow</code> requirement (lines 25 and 27 of <a href="https://github.com/stellargraph/stellargraph/blob/develop/setup.py" rel="nofollow noreferrer">https://github.com/stellargraph/stellargraph/blob/develop/setup.py</a>), then install from the modified source. Otherwise, pip pulls in the CPU-only <code>tensorflow</code> package as a StellarGraph dependency, which shadows your <code>tensorflow-gpu</code> install.</p>
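<p>After reinstalling, a quick sanity check (a minimal sketch using standard TensorFlow 2.x calls) confirms whether the GPU build is actually the one being imported:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

# If the GPU build is active, this prints one PhysicalDevice per GPU;
# an empty list means the CPU-only package is still shadowing tensorflow-gpu
print(tf.config.list_physical_devices('GPU'))

# Optionally log where each op is placed once training starts
tf.debugging.set_log_device_placement(True)
</code></pre>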
python|docker|tensorflow|gpu|stellargraph
0
374,999
64,775,560
Indexing list of tensors
<p>I have two identical lists of tensors (with different sizes), except that in the first one all of the tensors are assigned to the CUDA device. For example:</p> <pre><code>list1=[torch.tensor([0,1,2]).cuda(),torch.tensor([3,4,5,6]).cuda(),torch.tensor([7,8]).cuda()]
&gt;&gt;&gt; list1
[tensor([0, 1, 2], device='cuda:0'), tensor([3, 4, 5, 6], device='cuda:0'), tensor([7, 8], device='cuda:0')]
</code></pre> <pre><code>list2=[torch.tensor([0,1,2]),torch.tensor([3,4,5,6]),torch.tensor([7,8])]
&gt;&gt;&gt; list2
[tensor([0, 1, 2]), tensor([3, 4, 5, 6]), tensor([7, 8])]
</code></pre> <p>I want to extract some tensors from the lists according to an array of indices such as:</p> <pre><code>ind=torch.tensor([0,2])
&gt;&gt;&gt; ind
tensor([0, 2])
</code></pre> <p>So my solution was to do something like this:</p> <pre><code>np.array(list1)[ind]
</code></pre> <pre><code>np.array(list2)[ind]
</code></pre> <p>My question is: why does this work for the first list, whose tensors are defined on the CUDA device, but give an error for the second list, as shown below?</p> <pre><code>&gt;&gt;&gt; np.array(list1)[ind]
array([tensor([0, 1, 2], device='cuda:0'), tensor([7, 8], device='cuda:0')],
      dtype=object)
</code></pre> <pre><code>&gt;&gt;&gt; np.array(list2)[ind]
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
ValueError: only one element tensors can be converted to Python scalars
</code></pre> <hr /> <p>EDIT: Just to clarify, the error isn't raised because the tensors have different shapes. The following examples illustrate this point:</p> <pre><code>list3=[torch.tensor([1,2,3]).cuda()]
list4=[torch.tensor([1,2,3]).cuda(),torch.tensor([4,5,6]).cuda()]
list5=[torch.tensor([1,2,3])]
list6=[torch.tensor([1,2,3]),torch.tensor([4,5,6])]
</code></pre> <p>And the results are:</p> <pre><code>&gt;&gt;&gt; np.array(list3)
array([tensor([1, 2, 3], device='cuda:0')], dtype=object)
&gt;&gt;&gt; np.array(list4)
array([tensor([1, 2, 3], device='cuda:0'), tensor([4, 5, 6], device='cuda:0')],
      dtype=object)
&gt;&gt;&gt; np.array(list5)
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
ValueError: only one element tensors can be converted to Python scalars
&gt;&gt;&gt; np.array(list6)
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
ValueError: only one element tensors can be converted to Python scalars
</code></pre>
<p><code>np.array</code> tries to convert each of the elements of a list into a numpy array. This is only supported for CPU tensors. The short answer is that you can explicitly instruct numpy to create an array with <code>dtype=object</code> to make the CPU case work. To understand exactly what is happening, let's take a closer look at both cases.</p> <h3>Case 1 (CUDA tensors)</h3> <p>First note that if you attempt to use <code>np.array</code> on a CUDA tensor you get the following error:</p> <pre class="lang-py prettyprint-override"><code>np.array(torch.zeros(2).cuda())
</code></pre> <pre class="lang-none prettyprint-override"><code>TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
</code></pre> <p>In your example, numpy tries to convert each element of <code>list1</code> to a numpy array; however, an exception is raised internally, so it settles on creating an array with <code>dtype=object</code>.</p> <p>You end up with</p> <pre class="lang-py prettyprint-override"><code>np.array([torch.tensor([0,1,2]).cuda(),
          torch.tensor([3,4,5,6]).cuda(),
          torch.tensor([7,8]).cuda()])
</code></pre> <p>being just a container pointing to different objects:</p> <pre class="lang-none prettyprint-override"><code>array([tensor([0, 1, 2], device='cuda:0'),
       tensor([3, 4, 5, 6], device='cuda:0'),
       tensor([7, 8], device='cuda:0')], dtype=object)
</code></pre> <h3>Case 2 (CPU tensors)</h3> <p>For CPU tensors, PyTorch knows how to convert to numpy arrays. So when you run</p> <pre class="lang-py prettyprint-override"><code>np.array(torch.zeros(2))
</code></pre> <p>you get a numpy array with dtype <code>float32</code>:</p> <pre class="lang-none prettyprint-override"><code>array([0., 0.], dtype=float32)
</code></pre> <p>The problem comes in your code when numpy successfully converts each element in <code>list2</code> into a numpy array and then tries to stack them into a single multi-dimensional array. Numpy expects that each list entry represents one row of a multi-dimensional array, but in your case it finds that not all rows have the same shape, so it doesn't know how to proceed and raises an exception.</p> <p>One way to get around this is to explicitly specify that the dtype should remain <code>object</code>. This basically tells numpy &quot;don't try to convert the entries to numpy arrays first&quot;.</p> <pre><code>np.array([torch.tensor([0,1,2]),
          torch.tensor([3,4,5,6]),
          torch.tensor([7,8])], dtype=object)
</code></pre> <p>which now gives a similar result to case 1:</p> <pre class="lang-none prettyprint-override"><code>array([tensor([0, 1, 2]), tensor([3, 4, 5, 6]), tensor([7, 8])],
      dtype=object)
</code></pre>
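<p>As a side note, if the goal is just fancy indexing into a list of ragged tensors, a plain list comprehension avoids the numpy conversion entirely (a small sketch that works the same for CPU and CUDA tensors):</p> <pre class="lang-py prettyprint-override"><code>import torch

list2 = [torch.tensor([0, 1, 2]), torch.tensor([3, 4, 5, 6]), torch.tensor([7, 8])]
ind = torch.tensor([0, 2])

# index the Python list directly instead of going through np.array
selected = [list2[i] for i in ind.tolist()]
print(selected)  # [tensor([0, 1, 2]), tensor([7, 8])]
</code></pre>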
python|numpy|pytorch
1