Columns: row index (int64), id (int64), title (string), question (string), answer (string), tags (string), score (int64)
6,900
46,622,031
Trying to plot finance data with datetime but getting TypeError: string indices must be integers, not str
<p>I'd like to plot finance data with datetime, as below data example shown. But I get the error: </p> <pre><code>TypeError: string indices must be integers, not str </code></pre> <p>Could you please kindly help me to know why I meet this error, and the solution? </p> <pre><code>from datetime import datetime, timedelta import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.dates import date2num plt.figure(2) datafile = cbook.get_sample_data(self.minuteListFile, asfileobj=False) print('loading %s' % datafile) datafile['minute'] =date2num(pd.to_datetime(datafile['minute']).tolist()) plt.plotfile(datafile, (0, 1, 2, 3), checkrows=0, subplots=False) plt.show() </code></pre> <p>data example - </p> <pre><code>minute,spreadprice,bollup,bollmid,bolldown,buy,short,sell,cover 2014/01/02/09/00,144.0,0,0,1,142,0,0,0 2014/01/02/09/01,143.0,0,0,1,0,0,0,0 2014/01/02/09/02,145.0,0,0,1,0,0,0,0 2014/01/02/09/03,144.0,0,0,1,0,0,0,0 2014/01/02/09/04,142.0,0,0,1,0,0,0,0 2014/01/02/09/05,142.0,0,0,1,0,0,0,0 2014/01/02/09/06,143.0,0,0,1,0,0,0,0 2014/01/02/09/07,143.0,0,0,1,0,0,0,0 2014/01/02/09/08,142.0,0,0,1,0,0,0,0 2014/01/02/09/09,140.0,0,0,1,0,0,0,0 2014/01/02/09/10,140.0,0,0,1,0,0,0,0 2014/01/02/09/11,141.0,0,0,1,0,0,0,0 2014/01/02/09/12,142.0,0,0,1,0,0,0,0 2014/01/02/09/13,142.0,144.0,142.0,141.0,0,0,0,0 2014/01/02/09/14,142.0,144.0,142.0,141.0,0,0,0,0 2014/01/02/09/15,143.0,144.0,142.0,141.0,0,0,0,0 2014/01/02/09/16,142.0,144.0,142.0,141.0,0,0,0,0 2014/01/02/09/17,142.0,144.0,142.0,141.0,0,0,0,0 </code></pre>
<p>The following could be used to plot your data. The main point is that you need to specify the (rather unusual) format of the datetimes (<code>"%Y/%m/%d/%H/%M"</code>), such that it can be converted to a datetime object.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv("data/minuteData.csv") df["minute"] = pd.to_datetime(df["minute"], format="%Y/%m/%d/%H/%M") plt.plot(df["minute"],df["spreadprice"], label="spreadprice" ) plt.plot(df["minute"],df["bollup"], label="bollup" ) plt.legend() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/iFbNa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iFbNa.png" alt="enter image description here"></a></p>
python|pandas|matplotlib
0
6,901
46,541,445
Pandas resample timeseries in to 24hours
<h2>I have the data like this:</h2> <pre><code> OwnerUserId Score CreationDate 2015-01-01 00:16:46.963 1491895.0 0.0 2015-01-01 00:23:35.983 1491895.0 1.0 2015-01-01 00:30:55.683 1491895.0 1.0 2015-01-01 01:10:43.830 2141635.0 0.0 2015-01-01 01:11:08.927 1491895.0 1.0 2015-01-01 01:12:34.273 3297613.0 1.0 .......... </code></pre> <p>This is a whole year data with different user's score ,I hope to get the data like:</p> <pre><code>OwnerUserId 1491895.0 1491895.0 1491895.0 2141635.0 1491895.0 00:00 0.0 3.0 0.0 3.0 5.8 00:01 5.0 3.0 0.0 3.0 5.8 00:02 3.0 33.0 20.0 3.0 5.8 ...... 23:40 12.0 33.0 10.0 3.0 5.8 23:41 32.0 33.0 20.0 3.0 5.8 23:42 12.0 13.0 10.0 3.0 5.8 </code></pre> <p>The element of dataframe is the score(mean or sum). I have been try like follow:</p> <pre><code>pd.pivot_table(data_series.reset_index(),index=['CreationDate'],columns=['OwnerUserId'], fill_value=0).resample('W').sum()['Score'] </code></pre> <p>Get the result like the image. <a href="https://i.stack.imgur.com/avZga.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/avZga.png" alt="enter image description here"></a></p>
<p>I think you need:</p> <pre><code>#remove `[]` and add parameter values for remove MultiIndex in columns df = pd.pivot_table(data_series.reset_index(), index='CreationDate', columns='OwnerUserId', values='Score', fill_value=0) #truncate seconds and convert to timedeltaindex df.index = pd.to_timedelta(df.index.floor('T').strftime('%H:%M:%S')) #or round to minutes #df.index = pd.to_timedelta(df.index.round('T').strftime('%H:%M:%S')) print (df) OwnerUserId 1491895.0 2141635.0 3297613.0 00:16:00 0 0 0 00:23:00 1 0 0 00:30:00 1 0 0 01:10:00 0 0 0 01:11:00 1 0 0 01:12:00 0 0 1 idx = pd.timedelta_range('00:00:00', '23:59:00', freq='T') #resample by minutes, aggregate sum, for add missing rows use reindex df = df.resample('T').sum().fillna(0).reindex(idx, fill_value=0) print (df) OwnerUserId 1491895.0 2141635.0 3297613.0 00:00:00 0.0 0.0 0.0 00:01:00 0.0 0.0 0.0 00:02:00 0.0 0.0 0.0 00:03:00 0.0 0.0 0.0 00:04:00 0.0 0.0 0.0 00:05:00 0.0 0.0 0.0 00:06:00 0.0 0.0 0.0 ... ... </code></pre>
python|pandas|dataframe|time-series
1
6,902
46,604,760
What is the Python equivalent of R's length?
<p>I have been using R to program and a naive in Python programming. I have a working code in R where I'm reading multiple files in a folder and sub-setting the file by few columns. The columns are not same in all the files. So, in R, I wrote a code:</p> <pre><code>selectedcolumns &lt;- df[,c(1,3:5,7:length(df))] </code></pre> <p>This code will select columns 1,3,4,5,7 and then will pick all the columns till the last column followed by 7th column present in the file.</p> <p>In Python, while I'm trying a similar code, I'm unable to understand what could be the possible equivalent keyword for <strong>"length"</strong> that'll help me dynamically pick the last of the file from the desired column.</p> <p>What I have been trying till now is:</p> <pre><code>import pandas as pd selectedcolumns = pd.read_excel('ABC.xlsx',sheetname= "myfile", header = None, usecols = [1,3,4,5,7]) </code></pre> <p>Now this code is reading the file and selecting the columns as mentioned. 1,3,4,5,7. However, I'm looking for 2 things over here:</p> <p>1) Is there any better way to write <strong>3:5</strong> in Python as it's possible in R?</p> <p>2) What can I write from 7th column till the last column since the last column is dynamic in all the files and I would require all columns from 7th in every file.</p> <p>Any help will be useful since I'm new to Python.Not much aware of different functions or libraries for doing the same operation.</p>
<p>It looks a little bit complex after R, but if you want to copy all columns after selected up to the end you should use code like this:</p> <pre><code>df1 = df.iloc[:,7:] </code></pre> <p>It will copy all columns from 7 to the last.</p> <p>You can select multiple ranges this way:</p> <p><code>df1 = df[df.columns[0:1].tolist() + df.columns[7:].tolist()]</code></p>
python|pandas|numpy
2
6,903
58,478,845
Moving duplicate rows from a subset of columns to another data frame in Python
<p>Using Python and Pandas I want to find all columns with duplicate rows in a data frame and move them to another data frame. For example I might have:</p> <pre><code>cats, tigers, 3.5, 1, cars, 2, 5 cats, tigers, 3.5, 6, 7.2, 22.6, 5 cats, tigers, 3.5, test, 2.6, 99, 52.3 </code></pre> <p>And I want cats, tigers, 3.5 in one data frame</p> <pre><code>cats, tigers, 3.5 </code></pre> <p>and in another data frame I want</p> <pre><code> 1, cars, 2, 5 6, 7.2, 22.6, 5 test, 2.6, 99, 52.3 </code></pre> <p>The code should check every column for repeat rows and only remove columns in which repeats occur in all rows.</p> <ol> <li>Some of the cases none of the columns have repeats.</li> <li>Some times more than just the first three columns have repeats. It should check all of the columns because repeats can occur in any column</li> </ol> <p>How could I do this?</p>
<p>You can use</p> <pre><code>df1 = pd.DataFrame(df.val.str.extract('([a-zA-Z ]+)', expand=False).str.strip().drop_duplicates()) #'val' is the column in which you have these values print(df1) </code></pre> <p><strong>Output</strong></p> <pre><code> val 0 ABCD </code></pre> <p>and </p> <pre><code>df2 = pd.DataFrame(df.val.str.extract('([0-9]+)', expand=False).str.strip().drop_duplicates()) #'val' is the column in which you have these values print(df2) </code></pre> <p><strong>Output</strong></p> <pre><code> val 0 1234 1 6578 2 4432 </code></pre>
python|pandas
1
6,904
58,229,964
How to move around Pandas Dataframe column?
<p><a href="https://i.stack.imgur.com/QN6RD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QN6RD.png" alt="enter image description here"></a></p> <p>I've attached a screenshot. I need some method to move the 'DATE' column to be aligned with the actual columns of the dataframes, which are SMA &amp; Closing Price. I need to be able to use the date column as an X parameter for visualization later. Please let me know any way to line up the date with other columns.</p>
<p>First column is called <code>index</code> in pandas and for convert to column use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p> <pre><code>df = df.reset_index() </code></pre> <hr> <p>But not necessary, if want use index later select it by <code>df.index</code>:</p> <pre><code>x = df.index </code></pre>
python|python-3.x|pandas|dataframe
3
6,905
58,237,181
How to do a pivot in pandas with duplicated entries
<p>How does one do a pivot in pandas? I can't around the 'duplicate entries' error. The input and the output should look like what is outlined below.</p> <pre><code>import pandas as pd input = pd.DataFrame({'measure': ['length','length','length','weight','weight','weight','sex','sex','sex'], 'species': [10, 10, 10, 10, 10, 10, 10, 10, 10], 'value': [1, 2, 3, 13, 45, 123, 0, 1, 1], 'set': [3, 3, 3, 3, 3, 3, 3, 3, 3]}) output = pd.DataFrame({'set': [3,3,3], 'species': [10, 10, 10], 'length': [1, 2, 3], 'weight': [13, 45, 123], 'sex': [0, 1, 1]}) test = input.pivot(index='set',columns='measure', values='value') print(test) </code></pre>
<p>In this situation, we usually resolve to <code>groupby().cumcount()</code> to get the new index:</p> <pre><code>indf['idx'] = indf.groupby('measure').cumcount() (indf.pivot_table(index=['idx','species','set'], columns='measure', values='value') .reset_index(('species','set')) ) </code></pre> <p>Output:</p> <pre><code>measure species set length sex weight idx 0 10 3 1 0 13 1 10 3 2 1 45 2 10 3 3 1 123 </code></pre>
python|pandas|dataframe|pivot
5
6,906
58,185,803
Trying to Reset column for every change in opponent
<p>Have a Batting Order here and trying to reset it based on changing opponent</p> <p>For example: When opponent changed from Colorado State to UTSA Batting Order needs to reset back to 1</p> <pre><code>df['Batting Order'] = df['pa'].cumsum().mod(9).apply(lambda x: 9 if x == 0 else x) pa Batting Order opponent Abilene Christian 0 1 1 Colorado State 1 1 2 Colorado State 2 1 3 Colorado State 3 1 4 Colorado State 4 1 5 Colorado State 5 1 6 Colorado State 6 1 7 Colorado State 7 1 8 Colorado State 8 1 9 Colorado State 9 1 1 Colorado State 10 1 2 Colorado State 11 1 3 Colorado State 12 1 4 Colorado State 13 1 5 Colorado State 14 1 6 Colorado State 15 1 7 Colorado State 16 1 8 Colorado State 17 1 9 Colorado State 18 1 1 Colorado State 19 1 2 Colorado State 20 1 3 Colorado State 21 0 3 Colorado State 22 1 4 Colorado State 23 1 5 UTSA 24 1 6 UTSA 25 1 7 UTSA 26 0 7 UTSA 27 1 8 UTSA 28 1 9 UTSA 29 1 1 UTSA 30 0 1 UTSA 31 1 2 UTSA 32 0 2 UTSA 33 1 3 UTSA 34 0 3 UTSA </code></pre>
<pre class="lang-py prettyprint-override"><code>def reset(df): grouping = df.groupby('Opponent') teams = grouping.groups.keys() batting_Order = [] for team in teams: subset = grouping.get_group(team) for i in range(len(subset)): batting_Order.append(i) df['Batting Order'] = batting_Order return df </code></pre>
python|pandas
0
6,907
69,238,330
DCGAN how to go RGB instead of greyscale
<p>I have this DCGAN that is pretty close to the TensorFlow docs.</p> <p>Here is the tutorial: <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/generative/dcgan</a></p> <p>It uses greyscale values in the test data. I am looking to start training with color data instead of just black and white.</p> <p>I am assuming that the shape of the training data will need to change, but does the shape of the generator model need to change too?</p> <p>How can I adapt this code to an RGB implementation?</p> <pre><code>from google.colab import drive drive.mount('/content/drive') import tensorflow as tf import glob import matplotlib.pyplot as plt import numpy as np import os import PIL from tensorflow.keras import layers import time from IPython import display train_dataset = tf.keras.preprocessing.image_dataset_from_directory( &quot;/content/drive/MyDrive/birds&quot;, seed=123, validation_split=0, image_size=(112, 112), color_mode=&quot;grayscale&quot;, shuffle=True, batch_size=1) train_images_array = [] for images, _ in train_dataset: for i in range(len(images)): train_images_array.append(images[i]) train_images = np.array(train_images_array) train_images = train_images.reshape(train_images.shape[0],112,112,1).astype('float32') train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1] BUFFER_SIZE = 60000 BATCH_SIZE = 8 # Batch and shuffle the data dataset_ = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) def make_generator_model(): model = tf.keras.Sequential() model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,))) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Reshape((7, 7, 256))) assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False)) assert model.output_shape == (None, 7, 7, 128) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False)) assert model.output_shape == (None, 14, 14, 64) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Conv2DTranspose(1, (20, 20), strides=(8, 8), padding='same', use_bias=False, activation='tanh')) assert model.output_shape == (None, 112, 112, 1) return model generator = make_generator_model() noise = tf.random.normal([1, 100]) generated_image = generator(noise, training=False) plt.imshow(generated_image[0, :, :, 0], cmap='gray') def make_discriminator_model(): model = tf.keras.Sequential() model.add(layers.Conv2D(64, (10, 10), strides=(2, 2), padding='same', input_shape=[112, 112, 1])) model.add(layers.LeakyReLU()) model.add(layers.Dropout(0.3)) model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same', input_shape=[112, 112, 1])) model.add(layers.LeakyReLU()) model.add(layers.Dropout(0.3)) model.add(layers.Flatten()) model.add(layers.Dense(1)) return model discriminator = make_discriminator_model() decision = discriminator(generated_image) print (decision) # This method returns a helper function to compute cross entropy loss cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True) def discriminator_loss(real_output, fake_output): real_loss = cross_entropy(tf.ones_like(real_output), real_output) fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output) total_loss = real_loss + fake_loss return 
total_loss def generator_loss(fake_output): return cross_entropy(tf.ones_like(fake_output), fake_output) generator_optimizer = tf.keras.optimizers.Adam(1e-4) discriminator_optimizer = tf.keras.optimizers.Adam(1e-4) checkpoint_dir = '/content/drive/MyDrive/training_checkpoints11' checkpoint_prefix = os.path.join(checkpoint_dir, &quot;ckpt&quot;) checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator) EPOCHS = 50 noise_dim = 100 num_examples_to_generate = 16 # You will reuse this seed overtime (so it's easier) # to visualize progress in the animated GIF) seed = tf.random.normal([num_examples_to_generate, noise_dim]) def generate_and_save_images(model, epoch, test_input): # Notice `training` is set to False. # This is so all layers run in inference mode (batchnorm). predictions = model(test_input, training=False) fig = plt.figure(figsize=(4, 4)) for i in range(predictions.shape[0]): plt.subplot(4, 4, i+1) plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray') plt.axis('off') plt.savefig('image_at_epoch_{:04d}.png'.format(epoch)) plt.show() # Notice the use of `tf.function` # This annotation causes the function to be &quot;compiled&quot;. @tf.function def train_step(images): noise = tf.random.normal([BATCH_SIZE, noise_dim]) with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: generated_images = generator(noise, training=True) real_output = discriminator(images, training=True) fake_output = discriminator(generated_images, training=True) gen_loss = generator_loss(fake_output) disc_loss = discriminator_loss(real_output, fake_output) gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables) gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables)) def train(dataset, epochs): for epoch in range(epochs): start = time.time() for image_batch in dataset: train_step(image_batch) # Produce images for the GIF as you go display.clear_output(wait=True) generate_and_save_images(generator, epoch + 1, seed) # Save the model every 1 epochs if (epoch + 1) % 8 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start)) # Generate after the final epoch display.clear_output(wait=True) generate_and_save_images(generator, epochs, seed) return train(dataset_, 128) noise = tf.random.normal([1, 100]) generated_image = generator(noise, training=False) print(generated_image.shape) plt.imshow(generated_image[0, :, :, 0], cmap='gray') checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) </code></pre>
<p>Yes the generator needs to be changed too. Greyscale has one channel and you need three.</p> <p>So you need to change</p> <pre><code> model.add(layers.Conv2DTranspose(1, (20, 20), strides=(8, 8), padding='same', use_bias=False, activation='tanh')) assert model.output_shape == (None, 112, 112, 1) </code></pre> <p>to</p> <pre><code> model.add(layers.Conv2DTranspose(3, (20, 20), strides=(8, 8), padding='same', use_bias=False, activation='tanh')) assert model.output_shape == (None, 112, 112, 3) </code></pre>
python|tensorflow|machine-learning|keras|deep-learning
1
6,908
69,160,914
Why does "load_model" cause RAM memory problems while predicting?
<p>I trained neural network (transformer architecture) and saved it by using:</p> <pre><code>model.save(directory + args.name, save_format=&quot;tf&quot;) </code></pre> <p>After that, I want to load the model again with another script to test it by letting it make iterative predictions:</p> <pre><code>from keras.models import load_model model = load_model(args.model) for i in range(very_big_number): out, _ = model(something, training=False) </code></pre> <p>However, I have noticed that the RAM usage increases with each prediction and I don't know why. At some point the programme stops because there is no more memory available. You can also see the RAM consumption in the following screenshot:</p> <p><img src="https://i.stack.imgur.com/sc1Tr.png" alt="RAM usage while training" /></p> <p>If I use the same architecture, but only load the weights of the model with <code>model.load_weigts( ... )</code>, I do not have the problem.</p> <p>My question now is, why does <code>load_model</code> seem to cause this and how do I solve the problem?</p> <p>I'm using tensorflow 2.5.0.</p> <p><strong>Edit:</strong></p> <p>As I was not able to solve the problem and the answers did not help either, I simply used the <code>load_weights</code> method so that I created a new model and loaded the weights of the saved model like this:</p> <pre><code>model = myModel() saved_model = load_model(args.model) model.load_weights(saved_model + &quot;/variables/variables&quot;) </code></pre> <p>In this way, the usage of RAM remained constant. Nevertheless an non-optimal solution, in my opinion.</p>
<p>There is a fundamental difference between <code>load_model</code> and <code>load_weights</code>. When you save an model using <code>save_model</code> you save the following things:</p> <p>A Keras model consists of multiple components:</p> <ul> <li>The architecture, or configuration, which specifies what layers the model contain, and how they're connected.</li> <li>A set of weights values (the &quot;state of the model&quot;).</li> <li>An optimizer (defined by compiling the model).</li> <li>A set of losses and metrics (defined by compiling the model or calling add_loss() or add_metric()).</li> </ul> <p>However when you save the weights using <code>save_weights</code>, you only saves the weights, and this is useful for the <code>inference</code> purpose, while when you want to resume the training process, you need a <code>model</code> object, that is the reason we save everything in the model. When you just want to predict and get the result <code>save_weights</code> is enough. To learn more, you can check the documentation of <a href="https://www.tensorflow.org/guide/keras/save_and_serialize" rel="nofollow noreferrer">save/load models</a>.</p> <p>So, as you can see when you do <code>load_model</code>, it has many things to load as compared to <code>load_weights</code>, thus it will have more overhead hence your RAM usage.</p>
python|tensorflow|tensorflow2.0
0
6,909
44,555,763
Is there a way to check for linearly dependent columns in a dataframe?
<p>Is there a way to check for linear dependency for columns in a pandas dataframe? For example:</p> <pre><code>columns = ['A','B', 'C'] df = pd.DataFrame(columns=columns) df.A = [0,2,3,4] df.B = df.A*2 df.C = [8,3,5,4] print(df) A B C 0 0 0 8 1 2 4 3 2 3 6 5 3 4 8 4 </code></pre> <p>Is there a way to show that column <code>B</code> is a linear combination of <code>A</code>, but <code>C</code> is an independent column? My ultimate goal is to run a poisson regression on a dataset, but I keep getting a <code>LinAlgError: Singular matrix</code> error, meaning no inverse exists of my dataframe and thus it contains dependent columns. </p> <p>I would like to come up with a programmatic way to check each feature and ensure there are no dependent columns.</p>
<p>If you have <code>SymPy</code> you could use the <a href="https://en.wikipedia.org/wiki/Row_echelon_form" rel="noreferrer">"reduced row echelon form"</a> via <a href="http://docs.sympy.org/dev/tutorial/matrices.html#rref" rel="noreferrer"><code>sympy.matrix.rref</code></a>:</p> <pre><code>&gt;&gt;&gt; import sympy &gt;&gt;&gt; reduced_form, inds = sympy.Matrix(df.values).rref() &gt;&gt;&gt; reduced_form Matrix([ [1.0, 2.0, 0], [ 0, 0, 1.0], [ 0, 0, 0], [ 0, 0, 0]]) &gt;&gt;&gt; inds [0, 2] </code></pre> <p>The pivot columns (stored as <code>inds</code>) represent the "column numbers" that are linear independent, and you could simply "slice away" the other ones:</p> <pre><code>&gt;&gt;&gt; df.iloc[:, inds] A C 0 0 8 1 2 3 2 3 5 3 4 4 </code></pre>
python|pandas|dataframe|linear-algebra
9
6,910
44,708,739
Pandas/datetime/total seconds : numpy.timedelta64' object has no attribute 'total_seconds'
<p>I have a data frame. I converted two of my date columns to datetime format. And I want to calculate the difference in minutes. But I get the following error.</p> <pre><code>from datetime import datetime df['A'] = df['A'].apply(lambda t: datetime.strptime(t, '%Y-%m-%d %H:%M:%S')) df['B'] = df['B'].apply(lambda t: datetime.strptime(t, '%Y-%m-%d %H:%M:%S')) df['C'] = ((df['B']-df['A']).apply(lambda x:x.total_seconds()/60.)) </code></pre> <p>I get this error:</p> <pre><code>AttributeError: 'numpy.timedelta64' object has no attribute 'total_seconds' </code></pre> <p>Any help would be appreciated.</p> <p><strong>EDIT:</strong> Small dataset works fine:</p> <pre><code>df = pd.DataFrame({'A':['2015-09-01 00:02:34', '2015-09-02 00:02:34'],'B': ['2015-09-02 00:02:34', '2015-09-03 00:02:34']}) df['A'] = df['A'].apply(lambda t: datetime.strptime(t, '%Y-%m-%d %H:%M:%S')) df['B'] = df['B'].apply(lambda t: datetime.strptime(t, '%Y-%m-%d %H:%M:%S')) df['C'] = ((df['B']-df['A']).apply(lambda x:x.total_seconds()/60.)) df A B C 0 2015-09-01 00:02:34 2015-09-02 00:02:34 1440.0 1 2015-09-02 00:02:34 2015-09-03 00:02:34 1440.0 </code></pre> <p>For my original big dataset, If I only select the first two rows of each column and do the same apply function, I would get the same error.</p>
<p>It seems I need to do this:</p> <pre><code>df['C'] = (df['B'] - df['A'])/ np.timedelta64(1, 's') </code></pre>
python|pandas|datetime|numpy
3
6,911
71,739,322
scipy `SparseEfficiencyWarning` when division on rows of csr_matrix
<p>Suppose I already had a <code>csr_matrix</code>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.sparse import csr_matrix indptr = np.array([0, 2, 3, 6]) indices = np.array([0, 2, 2, 0, 1, 2]) data = np.array([1., 2., 3., 4., 5., 6.]) mat = csr_matrix((data, indices, indptr), shape=(3, 3)) print(mat.A) [[1. 0. 2.] [0. 0. 3.] [4. 5. 6.]] </code></pre> <p>it's simple if I want to divide a single row of this csr_matrix:</p> <pre class="lang-py prettyprint-override"><code>mat[0] /= 2 print(mat.A) [[0.5 0. 1. ] [0. 0. 3. ] [4. 5. 6. ]] </code></pre> <p>However, if I want to change multiple rows, it throws an warning:</p> <pre class="lang-py prettyprint-override"><code>mat[np.array([0,1])]/=np.array([[1],[2]]) print(mat.A) [[1. 0. 2. ] [0. 0. 1.5] [4. 5. 6. ]] SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. self._set_arrayXarray(i, j, x) </code></pre> <p>How come division on multiple rows change its sparsity? It suggests me to change to lil_matrix, but when I checked the code of <code>def tolil()</code>:</p> <pre class="lang-py prettyprint-override"><code>def tolil(self, copy=False): lil = self._lil_container(self.shape, dtype=self.dtype) self.sum_duplicates() ptr,ind,dat = self.indptr,self.indices,self.data rows, data = lil.rows, lil.data for n in range(self.shape[0]): start = ptr[n] end = ptr[n+1] rows[n] = ind[start:end].tolist() data[n] = dat[start:end].tolist() return lil </code></pre> <p>which basically loops all rows, I don't think it's necessary in my case. What may be the correct way if I simply want to divide a few rows of a <code>csr_matrix</code>? Thanks!</p>
<p>Your matrix:</p> <pre><code>In [208]: indptr = np.array([0, 2, 3, 6]) ...: indices = np.array([0, 2, 2, 0, 1, 2]) ...: data = np.array([1., 2., 3., 4., 5., 6.]) ...: mat = sparse.csr_matrix((data, indices, indptr), shape=(3, 3)) In [209]: mat Out[209]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; In [210]: mat.A Out[210]: array([[1., 0., 2.], [0., 0., 3.], [4., 5., 6.]]) </code></pre> <p>Simple division just changes the <code>mat.data</code> values, in-place:</p> <pre><code>In [211]: mat/= 3 In [212]: mat Out[212]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; In [213]: mat *= 3 In [214]: mat.A Out[214]: array([[1., 0., 2.], [0., 0., 3.], [4., 5., 6.]]) </code></pre> <p>The RHS of your case produces a <code>np.matrix</code> object:</p> <pre><code>In [215]: mat[np.array([0,1])]/np.array([[1],[2]]) Out[215]: matrix([[1. , 0. , 2. ], [0. , 0. , 1.5]]) </code></pre> <p>Assigning that to the <code>mat</code> subset produces the warning:</p> <pre><code>In [216]: mat[np.array([0,1])] = _ /usr/local/lib/python3.8/dist-packages/scipy/sparse/_index.py:146: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. self._set_arrayXarray(i, j, x) </code></pre> <p>Your warning and mine occurs in the set step:</p> <pre><code>self._set_arrayXarray(i, j, x) </code></pre> <p>If I divide again I don't get the warning:</p> <pre><code>In [217]: mat[np.array([0,1])]/=np.array([[1],[2]]) In [218]: mat Out[218]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 9 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>Why? Because after the first assignment, <code>mat</code> has 9 non-zero terms, not the original six. So [217] doesn't change sparsity.</p> <p>Convert <code>mat</code> back to 6 zeros, and we get the warning again:</p> <pre><code>In [219]: mat.eliminate_zeros() In [220]: mat Out[220]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; In [221]: mat[np.array([0,1])]/=np.array([[1],[2]]) /usr/local/lib/python3.8/dist-packages/scipy/sparse/_index.py:146: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. self._set_arrayXarray(i, j, x) </code></pre> <p>and a change in sparsity:</p> <pre><code>In [222]: mat Out[222]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 9 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>Assigning a sparse matrix of [215] doesn't trigger the warning:</p> <pre><code>In [223]: mat.eliminate_zeros() In [224]: m1=sparse.csr_matrix(mat[np.array([0,1])]/np.array([[1],[2]])) In [225]: m1 Out[225]: &lt;2x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 3 stored elements in Compressed Sparse Row format&gt; In [226]: mat[np.array([0,1])]=m1 In [227]: mat Out[227]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>===</p> <p>The [215] division is best seen as a <code>ndarray</code> action, not a sparse one:</p> <pre><code>In [232]: mat[np.array([0,1])]/np.array([[1],[2]]) Out[232]: matrix([[1. , 0. , 2. ], [0. , 0. , 0.09375]]) In [233]: mat[np.array([0,1])].todense()/np.array([[1],[2]]) Out[233]: matrix([[1. , 0. , 2. ], [0. , 0. 
, 0.09375]]) </code></pre> <p>The details of this division are found in <code>sparse._base.py</code>, <code>mat._divide</code>, with different actions depending whether the <code>other</code> is scalar, dense array, or sparse matrix. Sparse matrix division does not implement <code>broadcasting</code>.</p> <p>As a general rule, matrix multiplication is the most efficient sparse calculation. In fact actions like row or column sum are implemented with it. And so are some forms of indexing. Element-wise calculations are ok if they can be applied to the <code>M.data</code> array without regard to row or column indices (e.g. square, power, scalar multiplication). <code>M.multiply</code> is element-wise, but without the full broadcasting power of dense arrays. Sparse division is even more limited.</p> <h1>edit</h1> <p><code>sklearn</code> has some utilities to perform certain kinds of <code>sparse</code> actions that it needs, like scaling and normalizing.</p> <pre><code>In [274]: from sklearn.utils import sparsefuncs </code></pre> <p><a href="https://scikit-learn.org/stable/modules/classes.html#module-sklearn.utils" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/classes.html#module-sklearn.utils</a></p> <p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.utils.sparsefuncs.inplace_row_scale.html#sklearn.utils.sparsefuncs.inplace_row_scale" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.utils.sparsefuncs.inplace_row_scale.html#sklearn.utils.sparsefuncs.inplace_row_scale</a></p> <p>With the sample <code>mat</code>:</p> <pre><code>In [275]: mat Out[275]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; In [276]: mat.A Out[276]: array([[1. , 0. , 2. ], [0. , 0. , 0.1875], [4. , 5. , 6. ]]) </code></pre> <p>Applying the row scaling:</p> <pre><code>In [277]: sparsefuncs.inplace_row_scale(mat,np.array([10,20,1])) In [278]: mat Out[278]: &lt;3x3 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 6 stored elements in Compressed Sparse Row format&gt; In [279]: mat.A Out[279]: array([[10. , 0. , 20. ], [ 0. , 0. , 3.75], [ 4. , 5. , 6. ]]) </code></pre> <p>The scaling array has to match in length. In your case you'd need to take the inverse of your <code>[[1],[2]]</code>, and pad it with 1 to act on all rows.</p> <p>Looking at the source I see it uses <code>sparsefuncs.inplace_csr_row_scale</code>. That in turn does:</p> <pre><code>X.data *= np.repeat(scale, np.diff(X.indptr)) </code></pre> <p>The details of this action are:</p> <pre><code>In [283]: mat.indptr Out[283]: array([0, 2, 3, 6], dtype=int32) In [284]: np.diff(mat.indptr) Out[284]: array([2, 1, 3], dtype=int32) In [285]: np.repeat(np.array([10,20,1]), _) Out[285]: array([10, 10, 20, 1, 1, 1]) In [286]: mat.data Out[286]: array([100., 200., 75., 4., 5., 6.]) </code></pre> <p>So it converts the <code>scale</code> array into an array that matches the <code>data</code> in shape. Then the inplace <code>*=</code> array multiplication is easy.</p>
python|numpy|scipy|sparse-matrix
4
6,912
71,757,164
Catalog rows according to type conditions
<p>I have a given dataFrame with four columns -</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>X1</th> <th>X2</th> <th>X3</th> <th>X4</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1.2</td> <td>1.2</td> <td>2</td> </tr> <tr> <td>1</td> <td>1.3</td> <td>1.2</td> <td>1.2</td> </tr> <tr> <td>1</td> <td>3.2</td> <td>4.2</td> <td>1</td> </tr> <tr> <td>1.9</td> <td>1.2</td> <td>5.4</td> <td>3</td> </tr> </tbody> </table> </div> <p>I want to add a new column by this condition - if X1 and X4 are integers - so 1, else 0 as &quot;bug&quot;.</p> <p>I try this:</p> <pre class="lang-py prettyprint-override"><code>x = [] for column in df: if isinstance(df['T1'][i], int) == True and isinstance(df['T4'][i], int) == True: x.append(0) else: x.append(1) </code></pre> <p>Output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>X1</th> <th>X2</th> <th>X3</th> <th>X4</th> <th>bug</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1.2</td> <td>1.2</td> <td>2</td> <td>0</td> </tr> <tr> <td>1</td> <td>1.3</td> <td>1.2</td> <td>1.2</td> <td>1</td> </tr> <tr> <td>1</td> <td>3.2</td> <td>4.2</td> <td>1</td> <td>0</td> </tr> <tr> <td>1.9</td> <td>1.2</td> <td>5.4</td> <td>3</td> <td>1</td> </tr> </tbody> </table> </div> <p>Any suggestions?</p> <p>Thanks!</p>
<pre><code>df['new_column'] = df.apply(lambda x: 1 if isinstance(x['X1'],int) and isinstance(x['X4'],int) else 'bug', axis=1) </code></pre>
python|pandas|dataframe
1
6,913
71,536,438
Xlwings - unable to insert values for a range of rows
<p>I am trying to insert a value <code>n</code> into a specific <code>H</code> column from position H10:H20</p> <p>So, I tried the below</p> <pre><code>start_index = int(10) for n in range(20): print(type(n)) # this returns int range('H' + str(start_index)).value = n start_index = start_index + 1 </code></pre> <p>However, the above code results in the below error</p> <pre><code> 2 for n in range(20): 3 print(type(n)) ----&gt; 4 range('H' + str(start_index)).value = n 5 start_index = start_index + 1 TypeError: 'str' object cannot be interpreted as an integer </code></pre> <p>But my <code>n</code> is an integer.</p>
<p>Try the following <code>from xlwings import range as xlrange</code> and rename the code at line 4 as <code>xlrange</code>.</p> <p>Or use <code>import xlwings</code> and at line 4 use <code>xlwings.range</code>.</p> <p>Try to avoid asterisk in your import statements in order to avoid polluting the namespace. For more info check this <a href="https://stackoverflow.com/questions/2386714/why-is-import-bad">post</a></p>
python|excel|pandas|dataframe|xlwings
1
6,914
69,985,881
How is the shape (3, 2, 1)? | Numpy |
<p>I am learning numpy , have a question in my mind not able to clearly visualise from where this 1 as come in shape</p> <pre><code>import numpy as np a = np.array([ [[1],[56]] , [[8],[98]] ,[[89],[62]] ]) np.shape(a) </code></pre> <p>The output is printed as : <code>(3 ,2 , 1)</code></p> <p>Will be appreciated if you could represent in diagrammatic / image format What actually the 1 means in output</p>
<p>Basically, that last 1 is because every number in <code>a</code> has brackets around it.</p> <p>Formally, it's the length of your &quot;last&quot; or &quot;innermost&quot; dimension. You can take your first two dimensions and arrange <code>a</code> as you would a normal matrix, but note that each element itself has brackets around it - each element is itself an array:</p> <pre><code>[[ [1] [56]] [ [8] [98]] [[89] [62]]] </code></pre> <p>If you add an element to each innermost-array, making that third <code>shape</code> number get larger, it's like stacking more arrays behind this top one in 3d, where now the corresponding elements in the &quot;behind&quot; array are in the same innermost array as the &quot;front&quot; array.</p> <p>Equivalently, instead of considering the <em>first</em> two indices to denote the regular flat matrices, you can think of the <em>back</em> two making the flat matrices. This is how numpy does it: try printing out an array like this: <code>x = np.random.randint(10, size = (3,3,3))</code>. Along the first dimension, <code>x[0]</code>, <code>x[1]</code>, and <code>x[2]</code> are printed after each other, and each one individually is formatted like a 3x3 matrix. Then the second index corresponds to the <em>rows</em> of each individual matrix, and the third index corresponds to the <em>columns</em>. Note that when you print <code>a</code>, there's only one column displayed - its third dimension has size 1. You can play with the definition of <code>x</code> to see more what's going on (change the numbers in the <code>size</code> argument).</p> <p>An alright example of visualizing a 3d array this way is this image, found on the Wikipedia page for the <a href="https://en.wikipedia.org/wiki/Levi-Civita_symbol" rel="nofollow noreferrer">Levi-Civita symbol</a>:</p> <p><a href="https://i.stack.imgur.com/4r8xk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4r8xk.png" alt="visualization of 3d Levi-Civita symbols" /></a></p> <p>Don't worry too much about what the Levi-Civita symbol actually <em>is</em> - just note that here, if it were a numpy array it would have shape <code>(3,3,3)</code> (like the <code>x</code> I defined above). You use three indices to specify each element, <em>i</em>, <em>j</em>, and <em>k</em>. <em>i</em> tells you the depth (blue, red, or green), <em>j</em> tells you the row, and <em>k</em> tells you the column. When numpy prints, it just lists out blue, red, then green in order.</p>
python|numpy|matrix
1
6,915
69,788,182
How to approach dataframe list to html (how to iterate in html code)?
<p>I have serveral pandas dataframes in a list. So I have several dataframes (df[0], df[1]). Each dataframe I want to write to html.</p> <p>The html code in the python file looks as follows:</p> <pre><code>html = f''' &lt;html&gt; &lt;head&gt; title&gt;{&quot;test&quot;}&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;{&quot;just a test sentence&quot;}&lt;/p&gt; &lt;body&gt; &lt;html&gt; ''' </code></pre> <p>To write just a normal df it is quite easy (<strong>{df.head(5).to_html()}</strong>):</p> <pre><code>html = f''' &lt;html&gt; &lt;head&gt; title&gt;{&quot;test&quot;}&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;{&quot;just a test sentence&quot;}&lt;/p&gt; {df.head(5).to_html()} &lt;body&gt; &lt;html&gt; ''' </code></pre> <p>How to approach it in my case where I have df[0], df[1] and so on. How to iterate over df list in html to show each of them among each other? Of course I just can use <strong>{df[0].head(5).to_html()}</strong>. But I don't know how many dataframes are in the list and therefore i have to use a for loop for example. But I don't know how to insert a for loop into the html code. Thanks</p>
<p>I'm assuming that you want to join the HTML strings of all DataFrames into a single one.</p> <p>Just create a function that produces the HTML code for a given DataFrame</p> <pre class="lang-py prettyprint-override"><code>def df_to_html(df): return f''' &lt;html&gt; &lt;head&gt; &lt;title&gt;{&quot;test&quot;}&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;{&quot;just a test sentence&quot;}&lt;/p&gt; {df.head(5).to_html()} &lt;body&gt; &lt;html&gt;''' </code></pre> <p>Then iterate over the list of DataFrames, call the function on each, and finally use <code>str.join</code> to concatenate the resulting HTML codes.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd # Create a list of 3 random DataFrames with shape (10, 3) # just for the sake of example df_list = [pd.DataFrame(np.random.randint(10, size=(10,3)), columns=list(&quot;ABC&quot;)) for _ in range(3)] # iterate over the dfs, generate the HTML, and concatenate the results all_df_html = &quot;&quot;.join(df_to_html(df) for df in df_list) &gt;&gt;&gt; print(all_df_html) </code></pre> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;test&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;just a test sentence&lt;/p&gt; &lt;table border="1" class="dataframe"&gt; &lt;thead&gt; &lt;tr style="text-align: right;"&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;A&lt;/th&gt; &lt;th&gt;B&lt;/th&gt; &lt;th&gt;C&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt; &lt;th&gt;0&lt;/th&gt; &lt;td&gt;9&lt;/td&gt; &lt;td&gt;6&lt;/td&gt; &lt;td&gt;6&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;1&lt;/th&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;8&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;2&lt;/th&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;6&lt;/td&gt; &lt;td&gt;4&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;3&lt;/th&gt; &lt;td&gt;7&lt;/td&gt; &lt;td&gt;5&lt;/td&gt; &lt;td&gt;2&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;4&lt;/th&gt; &lt;td&gt;1&lt;/td&gt; &lt;td&gt;9&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;body&gt; &lt;html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;test&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;just a test sentence&lt;/p&gt; &lt;table border="1" class="dataframe"&gt; &lt;thead&gt; &lt;tr style="text-align: right;"&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;A&lt;/th&gt; &lt;th&gt;B&lt;/th&gt; &lt;th&gt;C&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt; &lt;th&gt;0&lt;/th&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;2&lt;/td&gt; &lt;td&gt;3&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;1&lt;/th&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;6&lt;/td&gt; &lt;td&gt;3&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;2&lt;/th&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;8&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;3&lt;/th&gt; &lt;td&gt;6&lt;/td&gt; &lt;td&gt;7&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;4&lt;/th&gt; &lt;td&gt;9&lt;/td&gt; &lt;td&gt;5&lt;/td&gt; &lt;td&gt;3&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;body&gt; &lt;html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;test&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;p&gt;just a test sentence&lt;/p&gt; &lt;table border="1" class="dataframe"&gt; &lt;thead&gt; &lt;tr style="text-align: right;"&gt; &lt;th&gt;&lt;/th&gt; &lt;th&gt;A&lt;/th&gt; &lt;th&gt;B&lt;/th&gt; 
&lt;th&gt;C&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;tr&gt; &lt;th&gt;0&lt;/th&gt; &lt;td&gt;7&lt;/td&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;3&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;1&lt;/th&gt; &lt;td&gt;1&lt;/td&gt; &lt;td&gt;9&lt;/td&gt; &lt;td&gt;7&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;2&lt;/th&gt; &lt;td&gt;4&lt;/td&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;3&lt;/th&gt; &lt;td&gt;0&lt;/td&gt; &lt;td&gt;7&lt;/td&gt; &lt;td&gt;9&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;th&gt;4&lt;/th&gt; &lt;td&gt;6&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;td&gt;1&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;body&gt; &lt;html&gt;</code></pre> </div> </div> </p>
python|html|pandas|dataframe
0
6,916
69,790,781
TensorFlow model correctly predicting images, but not frames from real time video stream?
<p>Why does my TensorFlow model <strong>correctly predict JPG and PNG images</strong> but <strong>incorrectly predict frames from real time video stream?</strong> All frames in the real time video stream are all being incorrectly classified as class 1.</p> <p>Attempt: I saved a PNG image from the realtime video stream. When I saved the PNG image separately and tested it, the model correctly classifies it. When a similar image is a frame in the real time video stream it is incorrectly classified. The PNG images and real time video stream frames have identical content visually (background, lighting condition, camera angle, etc.).</p> <p>Structure of my model:</p> <pre><code>Model: &quot;sequential_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling_2 (Rescaling) (None, 180, 180, 3) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 180, 180, 16) 448 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 90, 90, 16) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 90, 90, 32) 4640 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 45, 45, 32) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 45, 45, 64) 18496 _________________________________________________________________ max_pooling2d_5 (MaxPooling2 (None, 22, 22, 64) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 30976) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 3965056 _________________________________________________________________ dense_3 (Dense) (None, 3) 387 ================================================================= Total params: 3,989,027 Trainable params: 3,989,027 Non-trainable params: 0 _________________________________________________________________ Found 1068 files belonging to 3 classes. </code></pre> <p>Realtime prediction code: (updated after Keertika's help!)</p> <pre><code>def testModel(imageName): import cv2 from PIL import Image from tensorflow.keras.preprocessing import image_dataset_from_directory batch_size = 32 img_height = 180 img_width = 180 img = keras.preprocessing.image.load_img( imageName, target_size=(img_height, img_width), interpolation = &quot;bilinear&quot;, color_mode = 'rgb' ) #preprocessing different here img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) #Create a batch predictions = new_model.predict(img_array) score = predictions[0] classes = ['1', '2','3'] prediction = classes[np.argmax(score)] print( &quot;This image {} most likely belongs to {} with a {:.2f} percent confidence.&quot; .format(imageName, classes[np.argmax(score)], 100 * np.max(score)) ) return prediction </code></pre> <p>Training code:</p> <pre><code>#image_dataset_from_directory returns a tf.data.Dataset that yields batches of images from #the subdirectories class_a and class_b, together with labels 0 and 1. 
from keras.preprocessing import image directory_test = &quot;/content/test&quot; tf.keras.utils.image_dataset_from_directory( directory_test, labels='inferred', label_mode='int', class_names=None, color_mode='rgb', batch_size=32, image_size=(256, 256), shuffle=True, seed=None, validation_split=None, subset=None, interpolation='bilinear', follow_links=False, crop_to_aspect_ratio=False ) tf.keras.utils.image_dataset_from_directory(directory_test, labels='inferred') train_ds = tf.keras.preprocessing.image_dataset_from_directory( directory_test, validation_split=0.2, subset=&quot;training&quot;, seed=123, image_size=(img_height, img_width), batch_size=batch_size) </code></pre> <p>Is the accuracy being affected by the reshaping in the realtime prediction code? I do not understand why frame predictions are incorrect, but single JPG and PNG image predictions are correct. Thank you for any help!</p>
<p>the reason for the real time prediction not correct is because of the preprocessing. The preprocessing of the inference code should be always same as the preprocessing used while training. Use <strong>tf.keras.preprocessing.image.load_img</strong> in your real-time prediction code but it takes image path to load the image. so you can save each frame by name <strong>&quot;sample.png&quot;</strong> and pass this path to <strong>tf.keras.preprocessing.image.load_img</strong>. this should solve the issue. and use the resize method <strong>&quot;bilinear&quot;</strong> because that was used for training data</p>
python|tensorflow|opencv|keras|video-streaming
1
6,917
43,395,584
Comparing two Pandas dataframes for differences on common dates
<p>I have two data frames, one with historical data and one with some new data appended to the historical data as:</p> <pre><code>raw_data1 = {'Series_Date':['2017-03-10','2017-03-11','2017-03-12','2017-03-13','2017-03-14','2017-03-15'],'Value':[1,2,3,4,5,6]} import pandas as pd df_history = pd.DataFrame(raw_data1, columns = ['Series_Date','Value']) print df_history raw_data2 = {'Series_Date':['2017-03-10','2017-03-11','2017-03-12','2017-03-13','2017-03-14','2017-03-15','2017-03-16','2017-03-17'],'Value':[1,2,3,4,4,5,6,7]} import pandas as pd df_new = pd.DataFrame(raw_data2, columns = ['Series_Date','Value']) print df_new </code></pre> <p>I want to check for all dates in df_history, if data in df_new is different. If data is different then it should append to df_check dataframe as follows: </p> <pre><code>raw_data3 = {'Series_Date':['2017-03-14','2017-03-15'],'Value_history':[5,6], 'Value_new':[4,5]} import pandas as pd df_check = pd.DataFrame(raw_data3, columns = ['Series_Date','Value_history','Value_new']) print df_check </code></pre> <p>The key point is that I want to check for all dates that are in my df_history DF and check if a value is present for that day in the df_new DF and if it's same.</p>
<p>Simply run a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a> filter to capture records where <em>Value_history</em> does not equal <em>Value_new</em></p> <pre><code>df_check = pd.merge(df_history, df_new, on='Series_Date', suffixes=['_history', '_new'])\ .query('Value_history != Value_new').reset_index(drop=True) # Series_Date Value_history Value_new # 0 2017-03-14 5 4 # 1 2017-03-15 6 5 </code></pre>
python|python-2.7|pandas
0
6,918
43,252,197
python convert list to numpy array while preserving the number formats
<p>My goal is to convert my data into numpy array while preserving the number formats in the original list, clear and proper.</p> <p><br>for example, this is my data in list format:</p> <pre><code>[[24.589888563639835, 13.899891781550952, 4478597, -1], [26.822224204095697, 14.670531752529088, 4644503, -1], [51.450405486761866, 54.770422572665254, 5570870, 0], [44.979065080591504, 54.998835550128852, 6500333, 0], [44.866399274880663, 55.757240813761534, 6513301, 0], [45.535380533604247, 57.790074517001365, 6593281, 0], [44.850372630818214, 54.720574554485822, 6605483, 0], [51.32738085400576, 55.118344981379266, 6641841, 0]] </code></pre> <p>when i do convert it to numpy array, </p> <pre><code>data = np.asarray(data) </code></pre> <p>i get mathematical notation <code>e</code>, how can I conserve the same format in my output array?</p> <pre><code>[[ 2.45898886e+01 1.38998918e+01 4.47859700e+06 -1.00000000e+00] [ 2.68222242e+01 1.46705318e+01 4.64450300e+06 -1.00000000e+00] [ 5.14504055e+01 5.47704226e+01 5.57087000e+06 0.00000000e+00] [ 4.49790651e+01 5.49988356e+01 6.50033300e+06 0.00000000e+00] [ 4.48663993e+01 5.57572408e+01 6.51330100e+06 0.00000000e+00] [ 4.55353805e+01 5.77900745e+01 6.59328100e+06 0.00000000e+00] [ 4.48503726e+01 5.47205746e+01 6.60548300e+06 0.00000000e+00] [ 5.13273809e+01 5.51183450e+01 6.64184100e+06 0.00000000e+00]] </code></pre> <h2>update:</h2> <p>I did :</p> <pre><code>np.set_printoptions(precision=6,suppress=True) </code></pre> <p><strong>but I still get different numbers when I pass some part of data to another variable and then look inside it, and i see that the decimals have changed! Why is it internally changing the decimals, why can't it just hold them as it is?</strong></p>
<p>Simple array creation from the nested list:</p> <pre><code>In [133]: data = np.array(alist) In [136]: data.shape Out[136]: (8, 4) In [137]: data.dtype Out[137]: dtype('float64') </code></pre> <p>This is a 2d array, 8 'rows', 4 'columns'; all elements are stored as float.</p> <p>The list can be loaded into a structured array, that is defined to have a mix of float and integer fields. Note that I have to convert the 'rows' to tuples for this load.</p> <pre><code>In [139]: dt = np.dtype('f,f,i,i') In [140]: dt Out[140]: dtype([('f0', '&lt;f4'), ('f1', '&lt;f4'), ('f2', '&lt;i4'), ('f3', '&lt;i4')]) In [141]: data = np.array([tuple(row) for row in alist], dtype=dt) In [142]: data.shape Out[142]: (8,) In [143]: data Out[143]: array([( 24.58988762, 13.89989185, 4478597, -1), ( 26.82222366, 14.67053223, 4644503, -1), ( 51.45040512, 54.77042389, 5570870, 0), ( 44.97906494, 54.99883652, 6500333, 0), ( 44.86639786, 55.7572403 , 6513301, 0), ( 45.53538132, 57.79007339, 6593281, 0), ( 44.85037231, 54.72057343, 6605483, 0), ( 51.32738113, 55.11834335, 6641841, 0)], dtype=[('f0', '&lt;f4'), ('f1', '&lt;f4'), ('f2', '&lt;i4'), ('f3', '&lt;i4')]) </code></pre> <p>You access fields by name, not column number:</p> <pre><code>In [144]: data['f0'] Out[144]: array([ 24.58988762, 26.82222366, 51.45040512, 44.97906494, 44.86639786, 45.53538132, 44.85037231, 51.32738113], dtype=float32) In [145]: data['f3'] Out[145]: array([-1, -1, 0, 0, 0, 0, 0, 0], dtype=int32) </code></pre> <p>Compare those values with the display of single columns from the 2d float array:</p> <pre><code>In [146]: dataf = np.array(alist) In [147]: dataf[:,0] Out[147]: array([ 24.58988856, 26.8222242 , 51.45040549, 44.97906508, 44.86639927, 45.53538053, 44.85037263, 51.32738085]) In [148]: dataf[:,3] Out[148]: array([-1., -1., 0., 0., 0., 0., 0., 0.]) </code></pre> <p>The use of a structured array makes more sense when there's a mix of floats, int, strings or other dtypes. </p> <p>But to back up a bit - what is wrong with the pure float version? Why is important to retain the integer identity of 2 columns?</p>
python|arrays|numpy
1
6,919
72,212,756
How to import an arff file to a pandas df and later convert it to arff again
<p>I want to preprocess a data base with scikit learn from an arff file, and later use on an python-weka-wrapper3 model the preprocessed data base, so I need a function to load the arff as df or transform the arff to csv, and later again download the edited df on an arff or transform a csv to arff.</p> <p>Some people recomend <a href="https://github.com/renatopp/liac-arff" rel="nofollow noreferrer">https://github.com/renatopp/liac-arff</a> (liac-arff) but I don't know how to do that with this library.</p> <p>So, if someone knows any function or some code well explained on python3 I'll apreciate.</p> <p>In my case I tried with this function:</p> <pre><code>def arff2csv(arff_path, csv_path=None): with open(arff_path, 'r') as fr: attributes = [] if csv_path is None: csv_path = arff_path[:-4] + 'csv' # *.arff -&gt; *.csv write_sw = False with open(csv_path, 'w') as fw: for line in fr.readlines(): if write_sw: fw.write(line) elif '@data' in line: fw.write(','.join(attributes) + '\n') write_sw = True elif '@attribute' in line: #print(line.split(' ')[2]) attributes.append(line.split(' ')[1]) # @attribute attribute_tag numeric print(&quot;Convert {} to {}.&quot;.format(arff_path, csv_path)) </code></pre>
<p>If you want to stay within the scikit-learn ecosystem, you could have a look at the <a href="https://github.com/fracpete/sklearn-weka-plugin" rel="nofollow noreferrer">sklearn-weka-plugin</a> library, which uses <a href="https://github.com/fracpete/python-weka-wrapper3" rel="nofollow noreferrer">python-weka-wrapper3</a> under the hood.</p> <p>BTW python-weka-wrapper3 can create datasets directly <a href="https://fracpete.github.io/python-weka-wrapper3/weka.core.html?highlight=create_instances_from_matrices#weka.core.dataset.create_instances_from_matrices" rel="nofollow noreferrer">from numpy matrices</a>. Examples: <a href="https://github.com/fracpete/python-weka-wrapper3-examples/blob/8e8422eeda99be885ba67a3a54a53cd7ee35f860/src/wekaexamples/core/dataset.py#L176" rel="nofollow noreferrer">[1]</a>, <a href="https://github.com/fracpete/python-weka-wrapper3-examples/blob/8e8422eeda99be885ba67a3a54a53cd7ee35f860/src/wekaexamples/core/dataset.py#L186" rel="nofollow noreferrer">[2]</a></p>
pandas|dataframe|csv|weka|arff
0
6,920
72,294,420
Email Classifier to classify emails according to the time
<p>I have to design a program that classifies emails as spam or nonspam using Python and Pandas.</p> <p>I have already managed to classify emails as spam or nonspam according to the email's subject. For my second task, I have to classify the emails as spam or nonspam according to the time: if the email was received on 'Friday' or 'Saturday' it should be classified as spam, otherwise nonspam. I literally don't have any idea how to do that. I tried to search but ended up with nothing.</p> <p>This is a screenshot from the excel file <img src="https://i.stack.imgur.com/LvnTy.png" alt="Email Table.xlsx" /></p> <pre><code>import pandas as pd ExcelFile = pd.read_excel(r'C:\Users\Documents\Email Table.xlsx') Subject = pd.DataFrame(ExcelFile, columns=['Subject']) def spam(Subject): A = len(ExcelFile[ExcelFile['Subject'].isnull()]) print(&quot;Number of spam emails &quot;,A) print(ExcelFile[ExcelFile['Subject'].isnull()]) spam(Subject) </code></pre>
<p>There are a million ways you could do this, but this is how I would do it. I provided comments and some naming conventions simply for clarity, which should allow you to take and modify as necessary to fit your specific needs</p> <pre><code>#All necessary imports import pandas as pd import numpy as np import datetime #Create some sample data (just made this up, nothing specific) data = { 'From' : ['test@gmail.com', 'test1@gmail.com', 'test2@gmail.com', 'test3@gmail.com', 'test4@gmail.com'], 'Subject' : ['Free Stuff', 'Buy Stuff', np.nan,'More Free Stuff', 'More Buy Stuff'], 'Dates' : ['2022-05-18 01:00:00', '2022-05-18 03:00:00', '2022-05-19 08:00:00', '2022-05-20 01:00:00', '2022-05-21 10:00:00'] } #Create a Dataframe with the data df = pd.DataFrame(data) #Set all nulls/nones/NaN to a blank string df.fillna('', inplace = True) #Set the Dates column to a date column with YYYY-MM-DD HH:MM:SS format df['Dates'] = pd.to_datetime(df['Dates'], format = '%Y-%m-%d %H:%M:%S') #Create a column that will identify what day the Dates column is on df['Day'] = df['Dates'].dt.day_name() #Write a np.select() to determine if the Subject column is null or if the Day column is on Friday or Saturday #This is where you specify which days are spam days list_of_spam_days = ['Friday', 'Saturday'] #List of conditions to test as true or false (np.nan is the equivalent of a null) condition_list = [df['Subject'] == '', df['Day'].isin(list_of_spam_days)] #Mirroring the condition_list from before, what should happen if the condition is true true_list = ['Spam', 'Spam'] #Make a new column which holds all of the results of our condition and true lists #The final 'Not Spam' is the default if the condition list was not satisfied df['Spam or Not Spam'] = np.select(condition_list, true_list, 'Not Spam') df </code></pre>
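<p>Since both conditions map to the same label, the <code>np.select</code> above can equivalently be collapsed into a single <code>np.where</code>; same result, arguably simpler for a two-outcome rule:</p> <pre><code>is_spam = (df['Subject'] == '') | df['Day'].isin(list_of_spam_days)
df['Spam or Not Spam'] = np.where(is_spam, 'Spam', 'Not Spam')
</code></pre>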
python|pandas
0
6,921
72,399,738
How to fix error: operands could not be broadcast together with shapes (450,600,3) (277,330,3)
<pre><code>import math import cv2 import numpy as np original = cv2.imread(r&quot;C:\Users\HP\Documents\fyp\img\4.bmp&quot;, 1) contrast = cv2.imread(r&quot;C:\Users\HP\Documents\fyp\img\dehaze4.png&quot;, 1) def psnr(img1, img2): mse = np.mean((img1 - img2) ** 2) if mse == 0: return 100 PIXEL_MAX = 255.0 return 20 * math.log10(PIXEL_MAX / math.sqrt(mse)) d = psnr(original, contrast) print(d) </code></pre> <p><strong>error</strong></p> <pre><code>runfile('C:/Users/HP/Documents/fyp/dcp/pnsr2.py', wdir='C:/Users/HP/Documents/fyp/dcp') Traceback (most recent call last): File ~\Documents\fyp\dcp\pnsr2.py:15 in &lt;module&gt; d = psnr(original, contrast) File ~\Documents\fyp\dcp\pnsr2.py:9 in psnr mse = np.mean((img1 - img2) ** 2) ValueError: operands could not be broadcast together with shapes (450,600,3) (277,330,3) </code></pre> <p>help me to solve this problem.</p>
<p>The two images are of different shapes. I'm not sure what you're trying to do comparing two images of different sizes, but one way to do it is to resize one of the images to the size of the other. Note that <code>cv2.resize</code> expects the target size as a <code>(width, height)</code> pair, while <code>img.shape</code> is <code>(height, width, channels)</code>, so the first two dimensions have to be swapped:</p> <pre><code>def psnr(img1, img2):
    if img1.shape != img2.shape:
        h, w = img1.shape[:2]
        img2 = cv2.resize(img2, (w, h), interpolation=cv2.INTER_LINEAR)
</code></pre>
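<p>Two further notes. First, with uint8 images the subtraction inside the hand-rolled <code>psnr</code> wraps around, so cast to float before differencing. Second, assuming scikit-image is available as an optional dependency, its <code>peak_signal_noise_ratio</code> does the same computation for same-shaped inputs:</p> <pre><code># inside psnr: avoid uint8 wrap-around when differencing
mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)

# alternative via scikit-image (inputs must already share a shape)
from skimage.metrics import peak_signal_noise_ratio
contrast_resized = cv2.resize(contrast, (original.shape[1], original.shape[0]))
d = peak_signal_noise_ratio(original, contrast_resized)
</code></pre>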
python|arrays|numpy|opencv
3
6,922
72,266,130
TypeError: unhashable type: 'numpy.ndarray' when taking first occurence
<p>I am trying to get the first occurrence of unique values of <strong>chain_id</strong> in a pandas df. I am using the following code:</p> <pre><code>import pandas as pd import re df = pd.DataFrame(columns=[&quot;Sender&quot;, &quot;Subject&quot;, &quot;Body&quot;, &quot;Datetime&quot;, &quot;chain_id&quot;]) first_occurrence_df = df[re.match(pd.unique(chain_id), df.chain_id),] </code></pre> <p>But it is returning the error: unhashable type: numpy.ndarray. Yes, I know it has to do with the 'shape' of the df. But I am completely new to coding with no prior knowledge of it - so can anyone explain this in layman's terms? And how do I get around this?</p> <p>I have these other variables: &quot;Sender&quot;, &quot;Subject&quot;, &quot;Body&quot;, &quot;Datetime&quot;, &quot;chain_id&quot;. All are strings with the exception of Datetime, which is in date format. chain_id identifies the email chain.</p> <p>Error message</p> <pre><code>TypeError Traceback (most recent call last) Input In [232], in &lt;cell line: 2&gt;() 1 import re ----&gt; 2 first = df[re.match(pd.unique(df.chain_id), df.chain_id),] File C:\Anaconda3\envs\universal\lib\re.py:191, in match(pattern, string, flags) 188 def match(pattern, string, flags=0): 189 &quot;&quot;&quot;Try to apply the pattern at the start of the string, returning 190 a Match object, or None if no match was found.&quot;&quot;&quot; --&gt; 191 return _compile(pattern, flags).match(string) File C:\Anaconda3\envs\universal\lib\re.py:294, in _compile(pattern, flags) 292 flags = flags.value 293 try: --&gt; 294 return _cache[type(pattern), pattern, flags] 295 except KeyError: 296 pass TypeError: unhashable type: 'numpy.ndarray' </code></pre>
<p>So, just to be clear, what you want is the first row where each <code>chain_id</code> occurs? You can use</p> <pre><code>first = df.drop_duplicates( ['chain_id'], keep='first' ) </code></pre> <p>Keeping the first is the default, but since it is important, you might as well specify it.</p>
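<p>A quick check with toy data (the values below are made up):</p> <pre><code>df = pd.DataFrame({'chain_id': [1, 1, 2], 'Sender': ['a', 'b', 'c']})
first = df.drop_duplicates(['chain_id'], keep='first')
#    chain_id Sender
# 0         1      a
# 2         2      c
</code></pre>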
python|pandas|dataframe
1
6,923
72,329,302
How to write a FAST API function taking .csv file and making some preprocessing in pandas
<p>I am trying to create an API function, that takes in .csv file (uploaded) and opens it as pandas DataFrame. Like that:</p> <pre><code>from fastapi import FastAPI from fastapi import UploadFile, Query, Form import pandas as pd app = FastAPI() @app.post(&quot;/check&quot;) def foo(file: UploadFile): df = pd.read_csv(file.file) return len(df) </code></pre> <p>Then, I am invoking my API:</p> <pre><code>import requests url = 'http://127.0.0.1:8000/check' file = {'file': open('data/ny_pollution_events.csv', 'rb')} resp = requests.post(url=url, files=file) print(resp.json()) </code></pre> <p>But I got such error: <code>FileNotFoundError: [Errno 2] No such file or directory: 'ny_pollution_events.csv'</code></p> <p>As far as I understand from <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">doc</a> pandas is able to read .csv file from file-like object, which <code>file.file</code> is supposed to be. But it seems, that here in read_csv() method pandas obtains name (not a file object itself) and tries to find it locally.</p> <p>Am I doing something wrong? Can I somehow implement this logic?</p>
<p>One straightforward approach is to save the upload to disk first and then read it with pandas. Don't forget to import <code>shutil</code> (and <code>os</code>). If you don't need the file to stay on your PC afterwards, delete it using <code>os.remove(filepath)</code>.</p> <pre><code>if not file.filename.lower().endswith(('.csv', '.xlsx', '.xls')):
    return 404, 'Please upload an xlsx, csv or xls file.'
if file.filename.lower().endswith('.csv'):
    extension = '.csv'
elif file.filename.lower().endswith('.xlsx'):
    extension = '.xlsx'
elif file.filename.lower().endswith('.xls'):
    extension = '.xls'

# eventid = datetime.datetime.now().strftime('%Y%m-%d%H-%M%S-') + str(uuid4())
filepath = 'location where you want to store file' + extension
with open(filepath, 'wb') as buffer:
    shutil.copyfileobj(file.file, buffer)
try:
    if filepath.endswith('.csv'):
        df = pd.read_csv(filepath)
    else:
        df = pd.read_excel(filepath)
except Exception:
    return 401, 'File is not proper'
</code></pre>
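<p>If you don't actually need the file on disk at all, pandas can also read straight from the in-memory upload. A minimal sketch of the endpoint from the question rewritten that way (same route, no temp file):</p> <pre><code>import io

import pandas as pd
from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post('/check')
async def check(file: UploadFile):
    contents = await file.read()        # bytes of the uploaded csv
    df = pd.read_csv(io.BytesIO(contents))
    return len(df)
</code></pre>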
python|pandas|python-requests|request|fastapi
1
6,924
50,662,176
Best practices for indexing with pandas
<p>I want to select rows based on a mask, <code>idx</code>. I can think of two different possibilities, either using <code>iloc</code> or just using brackets. I have shown the two possibilities (on a dataframe <code>df</code>) below. Are they both equally viable?</p> <pre><code>idx = (df["timestamp"] &gt;= 5) &amp; (df["timestamp"] &lt;= 10) idx = idx.values hr = df["hr"].iloc[idx] timestamps = df["timestamp"].iloc[idx] </code></pre> <p>or the following one:</p> <pre><code>idx = (df["timestamp"] &gt;= 5) &amp; (df["timestamp"] &lt;= 10) hr = df["hr"][idx] timestamps = df["timestamp"][idx] </code></pre>
<p>No, they are not the same. One uses direct syntax while the other relies on chained indexing.</p> <p>The crucial points are:</p> <ul> <li><a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>pd.DataFrame.iloc</code></a> is used primarily for integer position-based indexing.</li> <li><a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>pd.DataFrame.loc</code></a> is most often used with labels or Boolean arrays.</li> <li>Chained indexing, i.e. via <code>df[x][y]</code>, is <a href="https://pandas.pydata.org/pandas-docs/version/0.21/indexing.html#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow noreferrer">explicitly discouraged</a> and is never necessary.</li> <li><code>idx.values</code> returns the <code>numpy</code> array representation of <code>idx</code> series. This cannot feed <code>.iloc</code> and is not necessary to feed <code>.loc</code>, which can take <code>idx</code> directly.</li> </ul> <p>Below are two examples which would work. In either example, you can use similar syntax to mask a dataframe or series. For example, <code>df['hr'].loc[mask]</code> would work as well as <code>df.loc[mask]</code>.</p> <h3>iloc</h3> <p>Here we use <code>numpy.where</code> to extract integer indices of <code>True</code> elements in a Boolean series. <code>iloc</code> does accept Boolean arrays but, in my opinion, this is less clear; "i" stands for integer.</p> <pre><code>idx = (df['timestamp'] &gt;= 5) &amp; (df['timestamp'] &lt;= 10) mask = np.where(idx)[0] df = df.iloc[mask] </code></pre> <h3>loc</h3> <p>Using <code>loc</code> is more natural when we are already querying by specific series.</p> <pre><code>mask = (df['timestamp'] &gt;= 5) &amp; (df['timestamp'] &lt;= 10) df = df.loc[mask] </code></pre> <ul> <li>When masking only rows, you can omit the <code>loc</code> accessor altogether and use <code>df[mask]</code>.</li> <li>If masking by rows and filtering for a column, you can use <code>df.loc[mask, 'col_name']</code></li> </ul> <p><a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow noreferrer">Indexing and Selecting Data</a> is fundamental to <code>pandas</code>: there is no substitute for reading the official documentation.</p>
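<p>Applied to the code in the question, that boils down to:</p> <pre><code>mask = (df['timestamp'] &gt;= 5) &amp; (df['timestamp'] &lt;= 10)
hr = df.loc[mask, 'hr']
timestamps = df.loc[mask, 'timestamp']
</code></pre>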
python|pandas|dataframe|indexing|series
8
6,925
50,537,777
How Can I Extract Predictions from A Softmax Layer on Tensorflow
<p>I'm trying to extract predictions, use predictions in calculating accuracy/precision/recall/F1 and prediction probability. I know I have 10 output classes therefore I can't calculate precision per see but I will be doing all these in other models moreover I'd like to be able to extract prediction probabilities. My model is as follows. I've checked GitHub and StackOverflow however I have yet to find a way to extract those properties. Most of the answers come close but never answer what I needed. I've used some low epoch numbers there in order to check out model fast and keep the output screen less crowded.</p> <pre><code>import tensorflow as tf from tensorflow.contrib.layers import fully_connected from sklearn.datasets import fetch_mldata from sklearn.preprocessing import LabelBinarizer from sklearn.model_selection import train_test_split mnist = fetch_mldata('MNIST original', data_home="data/mnist/") lb = LabelBinarizer().fit(mnist.target) X_train, X_test, y_train, y_test = train_test_split(mnist.data, lb.transform(mnist.target), train_size=0.9, test_size=0.1) X = tf.placeholder(tf.float32, shape=(None, 784)) y = tf.placeholder(tf.int64, shape=(None, 10)) lOne = fully_connected(inputs=X, num_outputs=100, activation_fn=tf.nn.elu) logits = fully_connected(inputs=lOne, num_outputs=10, activation_fn=tf.nn.softmax) pred = logits acc = tf.metrics.accuracy(labels=y, predictions=pred) loss = tf.losses.softmax_cross_entropy(logits=logits, onehot_labels=y) trainOP = tf.train.AdamOptimizer(0.001).minimize(loss) import numpy as np bSize = 100 batches = int(np.floor(X_train.shape[0]/bSize)+1) def batcher(dSet, bNum): return(dSet[bSize*(bNum-1):bSize*(bNum)]) epochs = 2 init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for epoch in range(0, epochs): for batch in range(1, batches): X_batch = batcher(X_train, batch) y_batch = batcher(y_train, batch) sess.run(trainOP, feed_dict={X: X_batch, y: y_batch}) lossVal = sess.run([loss], feed_dict={X: X_test, y: y_test}) print(lossVal) sess.close() </code></pre>
<p>The code shared in the question covers training, but not "using" (infering) with the resulting model.</p> <p>Two issues:</p> <ul> <li>The trained model is not serialized, so future runs will run on an <em>untrained</em> model, and predict whatever their initialization tells them to. Hence a <a href="https://stackoverflow.com/questions/50537777/how-can-i-extract-predictions-from-a-softmax-layer-on-tensorflow/50539238#comment88087647_50537777">question comment</a> suggesting to save the trained model, and restore it when predicting.</li> <li>The logits are the output of a SoftMax function. A common way to get a class from logits is to select the highest value in the tensor (here a vector).</li> </ul> <p>With TensorFlow, the last point can be done with <a href="https://www.tensorflow.org/api_docs/python/tf/argmax" rel="nofollow noreferrer"><code>tf.argmax</code></a> ("Returns the index with the largest value across axes of a tensor."):</p> <pre><code>tf.argmax(input=logits, axis=1) </code></pre> <p>All in all, the question's code covers only partially the <a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">MNIST tutorial</a> from the TensorFlow team. Perhaps more pointers there if you get stuck with this code.</p>
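<p>Concretely, once the model is trained (and restored, if you saved it), predictions and their probabilities can be fetched in one call. A sketch reusing the variable names from the question's code:</p> <pre><code># inside the `with tf.Session() as sess:` block, after the training loop
probs, preds = sess.run([logits, tf.argmax(logits, axis=1)],
                        feed_dict={X: X_test})
# probs holds the per-class probabilities (the softmax output),
# preds the predicted class index for each row
</code></pre>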
python|tensorflow|prediction|softmax|categorization
3
6,926
45,511,995
Pandas Modify DataFrames in Loop Part 2
<p>Given the following data frames:</p> <pre><code>import pandas as pd k=pd.DataFrame({'A':[1,1],'B':[3,4]}) e=pd.DataFrame({'A':[1,1],'B':[6,7]}) k A B 0 1 3 1 1 4 e A B 0 1 6 1 1 7 </code></pre> <p>I'd like to apply a group-by sum in a loop, but doing so does not seem to modify the data frames. </p> <p>Here's what I've tried:</p> <pre><code>for d in dfsout: d=d.groupby(d.columns[0]).apply(sum) print(d) </code></pre> <p>When I print d in the loop, it shows that the correct operation is occurring...</p> <pre><code> A B A 1 2 7 A B A 1 2 13 </code></pre> <p>...but then when I print data frames k and e, they have not been modified.</p> <pre><code>k A B 0 1 3 1 1 4 e A B 0 1 6 1 1 7 </code></pre> <p><strong>Update</strong></p> <p>I also tried using it as a function (works in loop, still does not modify):</p> <pre><code>def moddf(d): return d.groupby(d.columns[0]).apply(sum) for d in dfsout: d=moddf(d) print(d) </code></pre> <p>Thanks in advance!</p>
<p>OK, you can try this (it works at module scope, where <code>locals()</code> is the module namespace; writing to <code>locals()</code> inside a function is not reliable):</p> <pre><code>import pandas as pd k=pd.DataFrame({'A':[1,1],'B':[3,4]}) e=pd.DataFrame({'A':[1,1],'B':[6,7]}) fields=['k','e'] dfsout=[k,e] variables = locals() for d,name in zip(dfsout,fields): variables[&quot;{0}&quot;.format(name)]=d.groupby(d.columns[0]).apply(sum) k Out[756]: A B A 1 2 7 e Out[757]: A B A 1 2 13 </code></pre>
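<p>A side note: collecting the results in a dict avoids rebinding names through <code>locals()</code> altogether and works anywhere:</p> <pre><code>results = {name: d.groupby(d.columns[0]).apply(sum)
           for name, d in zip(fields, dfsout)}
results['k']   # the aggregated version of k
</code></pre>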
python|loops|pandas|dataframe
0
6,927
62,477,692
ValueError: Expected 2D array, got 1D array instead: for the matrix?
<p>I get the following error and unsure as to why? ValueError: Expected 2D array, got 1D array instead: The dataset I used is <a href="https://catalog.data.gov/dataset/demographic-statistics-by-zip-code-acfc9" rel="nofollow noreferrer">https://catalog.data.gov/dataset/demographic-statistics-by-zip-code-acfc9</a></p> <p>I thought it was already converted to a matrix once it is in the dataframe. Especially because the plot data shows up correctly.</p> <p>Any help would be appreciated. </p> <pre><code>#implmenting KNN in python import pandas as pd import numpy as np import operator import seaborn as sns import matplotlib.pyplot as plt import matplotlib.cbook as cbook import time start_time = time.time() #1) Ingest the data via one of the provided formats. print("Getting csv") ################################### #2.) Create a data structure to store the data. data = pd.read_csv(r"C:\Users\trave.DESKTOP-KM5AM0U\Desktop\UNIT 5\Demographic_Statistics_By_Zip_Code.csv", usecols = ["COUNT PARTICIPANTS", "PERCENT RECEIVES PUBLIC ASSISTANCE"]) #msft.plot("JURISDICTION NAME", ["COUNT PUBLIC ASSISTANCE TOTAL", "COUNT PARTICIPANTS"], subplots=True) # print("Entering data to datfram only 3 columns") # print ("The number of rows is") # print (len(data.index)) # print ("The number of columns is") # print (len(data.columns)) print(data.head) ################# print("running plots") #scikit-learn ##################################### #MATPLOT FOR REGRESSION LINE DATA Par = data.iloc[:, 0] Per = data.iloc[:, 1] # Count_P = data[] # Public_Aid = data[] plt.title('Number of people to public aid percentage') plt.xlabel('#Participant') plt.ylabel('% Public aid') plt.plot (Par, Per, 'k.') plt.axis([.1,200,.1,1]) plt.grid(True) print("plots complete") ################################### print("sklearn running") from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit (X=Par, y=Per) print('SKLEARN MODULE COMPLETE') ######################################### ####making prediction print("running prediction") p = model.predict ([[78]]) [0][0] print (round(p,2)) # STOP MY TIMER print ("My program took", time.time() - start_time, "to run") </code></pre>
<p>The <code>y</code> you used to fit the model is 1D, so the result of the prediction will be 1D as well.</p> <p>If you want to make a prediction for a single value <code>x</code>, you can try:</p> <pre class="lang-py prettyprint-override"><code>p = model.predict([[78]])[0] </code></pre> <p>The prediction is 1D, so you only need to index with <code>[0]</code> once.</p>
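<p>If the same 1D-vs-2D complaint is raised earlier, at fit time, it comes from the feature column: scikit-learn expects <code>X</code> as a 2D array of shape <code>(n_samples, n_features)</code>. A sketch using the names from the question:</p> <pre><code># Par is a 1D Series; reshape it into a single-feature 2D array
model.fit(X=Par.values.reshape(-1, 1), y=Per)
p = model.predict([[78]])[0]
</code></pre>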
python|arrays|pandas|matplotlib|scikit-learn
0
6,928
62,495,372
Jupyter notebook is giving me error for correct codes
<p>My Jupyter notebook is giving errors for code that is correct. Specifically, this is my error: <code>AttributeError: ‘NoneType’ object has no attribute ‘plot’</code></p> <p>I have checked and checked again, I have rewritten my code, and I ran it cell by cell as well as with <code>Run all</code>, but it's not working. Your help will sincerely be appreciated.</p> <p>Here is my code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt %matplotlib inline recent_grads = pd.read_csv(&quot;recent-grads.csv&quot;) cleaned_data_count = recent_grads.count() print (cleaned_data_count) </code></pre> <p>This is my error output:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-6-e0fa232c36bf&gt; in &lt;module&gt; 1 # Look up the number of rows to ascertain if data has been droped ----&gt; 2 cleaned_data_count = recent_grads.count() 3 print (cleaned_data_count) AttributeError: 'NoneType' object has no attribute 'count' </code></pre> <p>My plot also gives an error. Here is the code for my plot:</p> <pre><code>recent_grads.plot(x=&quot;Sample_size&quot;, y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Sample_size VS Median&quot;) recent_grads.plot(x=&quot;Sample_size&quot;, y=&quot;Unemployment_rate&quot;, kind = &quot;scatter&quot;, title = &quot;Sample_size VS Uemployemny&quot;) recent_grads.plot(x=&quot;Full_time&quot;, y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Full_time VS Median&quot;) recent_grads.plot(x=&quot;ShareWomen&quot;, y=&quot;Unemployment_rate&quot;, kind = &quot;scatter&quot;, title = &quot;Sharewoman VS Unemployment_rate&quot;) recent_grads.plot(x=&quot;Men&quot;,y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Men VS Median&quot;) recent_grads.plot(x=&quot;Women&quot;,y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Sample_size VS Median&quot;) </code></pre> <p>Here is my plot error output:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-23-6d7d435b7c0f&gt; in &lt;module&gt; ----&gt; 1 recent_grads.plot(x=&quot;Sample_size&quot;, y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Sample_size VS Median&quot;) 2 recent_grads.plot(x=&quot;Sample_size&quot;, y=&quot;Unemployment_rate&quot;, kind = &quot;scatter&quot;, title = &quot;Sample_size VS Uemployemny&quot;) 3 recent_grads.plot(x=&quot;Full_time&quot;, y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Full_time VS Median&quot;) 4 recent_grads.plot(x=&quot;ShareWomen&quot;, y=&quot;Unemployment_rate&quot;, kind = &quot;scatter&quot;, title = &quot;Sharewoman VS Unemployment_rate&quot;) 5 recent_grads.plot(x=&quot;Men&quot;,y=&quot;Median&quot;, kind = &quot;scatter&quot;, title = &quot;Men VS Median&quot;) AttributeError: 'NoneType' object has no attribute 'plot' </code></pre> <p>Here is my screenshot: <a href="https://i.stack.imgur.com/zMfP7.png" rel="nofollow noreferrer">code and error screenshot</a> <a href="https://i.stack.imgur.com/Xb7SR.png" rel="nofollow noreferrer">code and error screenshot for plot</a></p>
<p>Your stacktrace precisely indicates the offending row:</p> <pre><code>----&gt; 2 cleaned_data_count = recent_grads.count() ... AttributeError: 'NoneType' object has no attribute 'count' </code></pre> <p>Apparently <em>recent_grads</em> is <em>None</em>, so you can't invoke any method on it, including <em>count</em> (and <em>plot</em> which you attempt to invoke later).</p> <p>So there is probably something wrong with <em>read_csv</em> in the previous instruction. I suppose that the input file exists, otherwise another exception would have been thrown earlier. Maybe this file is empty?</p>
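<p>For completeness, a very common way to end up with <code>None</code> bound to a DataFrame name in a notebook is assigning the result of an in-place pandas operation in some earlier cell; those methods return <code>None</code> when <code>inplace=True</code>. <code>dropna</code> below is just one example:</p> <pre><code># rebinds the name to None: classic source of this error
recent_grads = recent_grads.dropna(inplace=True)

# either of these keeps a DataFrame bound to the name
recent_grads = recent_grads.dropna()
recent_grads.dropna(inplace=True)
</code></pre>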
python|pandas|jupyter-notebook
1
6,929
62,595,431
How to map a list to a column with repeating values python
<p>I have a list of .png logos like so:</p> <pre><code>logos ['C.png', 'E.png', 'FUR.png', 'FaZe.png', 'GenG.png', 'HER.png', 'MiBR.png', 'X6.png'] </code></pre> <p>I have another column consisting of those values repeating multiple times, like so:</p> <pre><code>teams HER MiBR C E HER FaZe ... </code></pre> <p>You get the idea. Now what I would like to do is map the logos values (the .pngs) to their corresponding team, like so:</p> <pre><code>teams logos HER HER.png MiBR MiBR.png C C.png E E.png HER HER.png FaZe FaZe.png </code></pre> <p>Any help would be appreciated!</p>
<p>Creating the DataFrame and lists:</p> <pre><code>pngs = ['C.png', 'E.png','FUR.png', 'FaZe.png', 'GenG.png', 'HER.png', 'MiBR.png', 'X6.png'] dataframe = pd.DataFrame({'teams': ['HER','MiBR','C','E','HER','FaZe','teste']}) </code></pre> <p>Getting only the names from the .png list:</p> <pre><code>pngs_only_name = [x[:-4] for x in pngs] ['C', 'E', 'FUR', 'FaZe', 'GenG', 'HER', 'MiBR', 'X6'] </code></pre> <p>Elements of the series that are missing from the list will get a None value:</p> <pre><code>dataframe['logos'] = dataframe.teams.apply(lambda x: (x+'.png') if x in pngs_only_name else None) </code></pre> <p>Results:</p> <pre><code> teams logos 0 HER HER.png 1 MiBR MiBR.png 2 C C.png 3 E E.png 4 HER HER.png 5 FaZe FaZe.png 6 teste None </code></pre>
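<p>An equivalent, usually faster route builds the lookup dict once and uses <code>Series.map</code> (note that unmatched teams become <code>NaN</code> rather than <code>None</code>):</p> <pre><code>logo_map = {p[:-4]: p for p in pngs}   # {'C': 'C.png', 'E': 'E.png', ...}
dataframe['logos'] = dataframe['teams'].map(logo_map)
</code></pre>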
python|pandas|dataframe|mapping
2
6,930
62,638,401
df.to_csv without separator and spaces python
<p>I want to create a txt file, so I used this.</p> <pre><code>df1.to_csv('C:/Users/junxonm/Desktop/Filetest.txt',sep=&quot; &quot; ,index=False, header=False) </code></pre> <p>But I can't completely remove the separator.</p> <p>I tried this...</p> <pre><code>df1.to_csv('C:/Users/junxonm/Desktop/Filetest.txt',sep=&quot;&quot; ,index=False, header=False) </code></pre> <p>And this...</p> <pre><code>df1.to_csv('C:/Users/junxonm/Desktop/Filetest.txt',sep=str('') ,index=False, header=False) </code></pre> <p>Both are not working:</p> <pre><code>Traceback (most recent call last): File &quot;C:/Users/junxonm/PycharmProjects/kemper/JDSNFILE2.py&quot;, line 371, in &lt;module&gt; df1.to_csv('C:/Users/junxonm/Desktop/Filetest.txt',sep=str(&quot;&quot;),index=False, header=False) File &quot;C:\Program Files\Python37\lib\site-packages\pandas\core\generic.py&quot;, line 3228, in to_csv formatter.save() File &quot;C:\Program Files\Python37\lib\site-packages\pandas\io\formats\csvs.py&quot;, line 200, in save self.writer = UnicodeWriter(f, **writer_kwargs) File &quot;C:\Program Files\Python37\lib\site-packages\pandas\io\common.py&quot;, line 517, in UnicodeWriter return csv.writer(f, dialect=dialect, **kwds) TypeError: &quot;delimiter&quot; must be a 1-character string </code></pre> <p>Do you have any tips or other ideas?</p> <p>Thanks a lot</p>
<p>The underlying csv writer requires a one-character delimiter, which is why <code>sep=&quot;&quot;</code> is rejected with exactly the error you see. If a single-space separator is acceptable, write your dataframe to the new text file like this:</p> <pre><code>df1.to_csv(r'C:/Users/junxonm/Desktop/Filetest.txt', header=None, index=None, sep=' ', mode='w' ) </code></pre>
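<p>If the goal really is no separator at all, join the columns into one string per row first and write that. A sketch, assuming every column can be cast to <code>str</code>:</p> <pre><code>joined = df1.astype(str).apply(''.join, axis=1)   # one concatenated string per row
joined.to_csv(r'C:/Users/junxonm/Desktop/Filetest.txt',
              index=False, header=False)
</code></pre>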
python|pandas
1
6,931
54,336,881
How to get back DataFrame after using str(df)?
<p>I think I messed up trying to save a Pandas Series that contained a bunch of Pandas Dataframes. Turns out that the DataFrames were each saved as if I called <code>df.to_string()</code> on them.</p> <p>From my observations so far, my strings have extra spacing in some places, as well as extra <code>\</code> when the DataFrame has too many columns to be displayed on the same row.</p> <p>Here is a "more appropriate DataFrame:</p> <pre><code>df = pd.DataFrame(columns=["really long name that goes on for a while", "another really long string", "c"]*6, data=[["some really long data",2,3]*6,[4,5,6]*6,[7,8,9]*6]) </code></pre> <p>The strings that I have and wish to turn into a DataFrame look like this:</p> <pre><code># str(df) ' really long name that goes on for a while another really long string c \\\n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 \n\n really long name that goes on for a while another really long string c \\\n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 \n\n really long name that goes on for a while another really long string c \\\n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 \n\n really long name that goes on for a while another really long string c \\\n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 \n\n really long name that goes on for a while another really long string c \\\n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 \n\n really long name that goes on for a while another really long string c \n0 some really long data 2 3 \n1 4 5 6 \n2 7 8 9 ' </code></pre> <p>How would I revert a string like this back to a DataFrame?</p> <p>Thanks</p>
<h3>New answer</h3> <p>In response to your new, edited question, the best answer I have is to use <code>to_csv</code> instead of <code>to_string</code>. <code>to_string</code> doesn't really support this use case as well as <code>to_csv</code> (and I don't see how I can save you from doing a bunch of conversions to and from StringIO instances...).</p> <pre><code>df = pd.DataFrame(columns=["really long name that goes on for a while", "another really long string", "c"]*6, data=[["some really long data",2,3]*6,[4,5,6]*6,[7,8,9]*6]) s = StringIO() df.to_csv(s) # To get the string use, `s.getvalue()` # Warning: will exhaust `s` pd.read_csv(StringIO(s.getvalue())) </code></pre> <p>I hope this update helps, I'll leave my old answer for continuity.</p> <hr> <h3>Old answer</h3> <p>In a very cool twist, the answer to this will also help you read a commonly pasted format of dataframe output on stackoverflow. Consider that we can read a <code>df</code> from a string like so:</p> <pre><code>data = """ 0 20 30 40 50 1 5 NaN 3 5 NaN 2 2 3 4 NaN 4 3 6 1 3 1 NaN""" import pandas as pd from io import StringIO data = StringIO(data) df = pd.read_csv(data, sep="\\s+") </code></pre> <p>This results in the following df:</p> <p><a href="https://i.stack.imgur.com/mHZnE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mHZnE.png" alt="enter image description here"></a></p> <p>You can read the output of <code>to_string</code> the same way:</p> <pre><code>pd.read_csv(StringIO(df.to_string()), sep="\\s+") </code></pre> <p>And the resulting <code>df</code> is the same.</p>
python|pandas|dataframe
2
6,932
71,265,442
Convert JSON to CSV but each json object should be contained in one row
<p>I want to convert my json file which has multiple jsons into a csv such that each json is in one column. I don't want to convert it such that each field in json is a seperate column. So there will be only one column and the entire json object is stored as string in it.</p> <p>Sample JSON file:</p> <pre><code>[ {&quot;Name&quot; : &quot;abcd&quot;,&quot;Id&quot; : &quot;123&quot;} , {&quot;Name&quot; : &quot;efgh&quot;,&quot;Id&quot; : &quot;124&quot;} ] </code></pre> <p>Sample csv :</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Data</th> </tr> </thead> <tbody> <tr> <td>&quot;Name&quot; : &quot;abcd&quot;,&quot;Id&quot; : &quot;123&quot;</td> </tr> <tr> <td>&quot;Name&quot; : &quot;efgh&quot;,&quot;Id&quot; : &quot;124&quot;</td> </tr> </tbody> </table> </div>
<p>Use:</p> <pre><code>import pandas as pd js = [ {&quot;Name&quot; : &quot;abcd&quot;,&quot;Id&quot; : &quot;123&quot;} , {&quot;Name&quot; : &quot;efgh&quot;,&quot;Id&quot; : &quot;124&quot;} ] df = pd.DataFrame([str(x) for x in js], columns = ['data']) </code></pre> <p>output:</p> <p><a href="https://i.stack.imgur.com/6ij2n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ij2n.png" alt="enter image description here" /></a></p>
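<p>One caveat with <code>str(x)</code>: it produces Python-repr text with single quotes, which is not valid JSON if the column ever needs to be parsed back. <code>json.dumps</code> is the safer choice, and writing the csv is then one more line (the output filename below is made up):</p> <pre><code>import json

df = pd.DataFrame({'data': [json.dumps(x) for x in js]})
df.to_csv('output.csv', index=False)
</code></pre>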
python|json|pandas|csv
0
6,933
71,226,400
Drop Duplicate Rows in Excel based on Column Data in Pandas
<p>I am attempting to use pandas to drop duplicate entries in an excel document based on very specific conditions. Here is an excerpt from my dataframe:</p> <pre><code> WD MSN TAIL REV 3425 30-11-11 26154 N754CX IR 3426 30-21-11 26154 N754CX IR 3427 31-31-11 26154 N754CX IR 3428 31-31-41 26154 N754CX A 3429 31-31-41 26154 N754CX B </code></pre> <p>As you can see, I have two copies of <code>WD</code> <code>31-31-41</code>, and I want to keep only the newest revision, REV B. However, several different &quot;MSN&quot; numbers may also have this WD, and I do not want to affect those entries. Furthermore I want this code to do this for all past revisions, regardless of MSN or WD. For instance, another MSN may have multiple revisions of 32-46-11, and I would need to keep only the newest one.</p> <p>I have found how to find duplicates in my dataframe using the following:</p> <pre><code>df.iloc[3425:3430 , 0:4].duplicated([&quot;WD&quot;,&quot;MSN&quot;],'last') </code></pre> <p>Which outputs:</p> <pre><code>3425 False 3426 False 3427 False 3428 True 3429 False dtype: bool </code></pre> <p>But this only shows the first entry as a True, but as these are being entered in by a human, the last entry may not necessarily be the newest revision.</p>
<p>A partial answer that assumes that the last entry is the newest.</p> <pre><code>&gt;&gt;&gt; df.groupby([&quot;WD&quot;, &quot;MSN&quot;]).tail(1) WD MSN TAIL REV 3425 30-11-11 26154 N754CX IR 3426 30-21-11 26154 N754CX IR 3427 31-31-11 26154 N754CX IR 3429 31-31-41 26154 N754CX B </code></pre> <p>The updated question indicates that the &quot;REV&quot; column has an implicit order, so we can create a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Categorical.html#pandas-categorical" rel="nofollow noreferrer"><code>pandas.Categorical</code></a> column with an explicit order:</p> <pre><code>&gt;&gt;&gt; df[&quot;REV&quot;] = df[&quot;REV&quot;].fillna(&quot;Unknown&quot;) # To support NaN values &gt;&gt;&gt; df[&quot;REV&quot;] = pd.Categorical( df[&quot;REV&quot;], categories=[&quot;Unknown&quot;, &quot;IR&quot;, &quot;A&quot;, &quot;B&quot;, &quot;C&quot;], ordered=True, ) &gt;&gt;&gt; df.loc[df.groupby([&quot;WD&quot;, &quot;MSN&quot;])[&quot;REV&quot;].idxmax()] WD MSN TAIL REV 3425 30-11-11 26154 N754CX IR 3426 30-21-11 26154 N754CX IR 3427 31-31-11 26154 N754CX IR 3429 31-31-41 26154 N754CX B </code></pre>
python|pandas|duplicates|boolean
0
6,934
52,302,474
pandas regex new column nan - but regex tester shows regex is valid
<p>I have a csv of error messages from test regression failures and I'm importing it into a pandas dataframe, but I want to find some substrings pertaining to the exceptions, specifically. </p> <p>I populate my dataframe with the contents of the .csv like so:</p> <pre><code>df = pd.read_csv('ErrorMessage3.csv', header=None, sep=',', names=['ErrorMessage']) </code></pre> <p>I have the following regex and corresponding test string (which is the first entry in my dataframe column of error messages), which returns exactly what I want:</p> <pre><code>teststring = "Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException2: Data is Null." re.findall(r"---&gt; ([^:]+): ", teststring) </code></pre> <p>which results in the following output:</p> <pre><code>['System.Data.SqlTypes.SqlNullValueException', 'System.Data.SqlTypes.SqlNullValueException2'] </code></pre> <p>BUT I want to be able to add this as an 'Exceptions' column in my dataframe. I thought this would work:</p> <pre><code>df['Exceptions'] = df['ErrorMessage'].str.extract(r"---&gt; ([^:]+): ") </code></pre> <p>but when I run it, I get my 'Exceptions' column added, but NaN for all the rows. I verified that my ErrorMessage is object type, and I have used an online regex tester to verify that at least a subset of my ErrorMessage entries do indeed contain an exception that matches my regex. I have read some other stack overflow questions that seem very similar, but I'm not having much luck.</p> <p>Why does applying the regex to the dataframe yield nan, but applying it to the individual string returns what I want?</p>
<pre><code>teststring1 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException1: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException2: Data is Null. ---&gt; System.Data.SqlTypes.SqlNullValueException21: ---&gt; System.Data.SqlTypes.SqlNullValueException22: ---&gt; System.Data.SqlTypes.SqlNullValueException23: ---&gt; System.Data.SqlTypes.SqlNullValueException24: &quot;&quot;&quot; teststring2 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException3: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException4: Data is Null.&quot;&quot;&quot; teststring3 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException5: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException6: Data is Null.&quot;&quot;&quot; teststring4 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException7: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException8: Data is Null.&quot;&quot;&quot; teststring5 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException9: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException10: Data is Null.&quot;&quot;&quot; teststring6 = &quot;&quot;&quot;Step 13 - Iteration 1 Failed: Action: &lt;Update Latest CC Exp Date Record from Epay Account {DBServer;UserName;Password='', DatabaseName='',Year Offset='-10'}&gt; ---&gt; System.Data.SqlTypes.SqlNullValueException11: Data is Null. This method or property cannotbecalled on Null values. ---&gt; System.Data.SqlTypes.SqlNullValueException12: Data is Null.&quot;&quot;&quot; values = [[teststring1], [teststring2], [teststring3], [teststring4], [teststring5], [teststring6]] header = ['ErrorMessage'] df = pd.DataFrame(values, columns=header) exceptions = df['ErrorMessage'].str.extractall(r&quot;---&gt; ([^:]+): &quot;) </code></pre> <ul> <li><code>extractall</code> returns a new MultiIndex DataFrame, where the first index will match the original DataFrame index, and the second index will be the number of extractions or matches. 
The original and new DataFrames are therefore not directly compatible for a plain column assignment, which is why the attempt in the question ends up with <code>NaN</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code> 0 match 0 0 System.Data.SqlTypes.SqlNullValueException1 1 System.Data.SqlTypes.SqlNullValueException2 2 System.Data.SqlTypes.SqlNullValueException21 3 System.Data.SqlTypes.SqlNullValueException22 4 System.Data.SqlTypes.SqlNullValueException23 5 System.Data.SqlTypes.SqlNullValueException24 1 0 System.Data.SqlTypes.SqlNullValueException3 1 System.Data.SqlTypes.SqlNullValueException4 2 0 System.Data.SqlTypes.SqlNullValueException5 1 System.Data.SqlTypes.SqlNullValueException6 3 0 System.Data.SqlTypes.SqlNullValueException7 1 System.Data.SqlTypes.SqlNullValueException8 4 0 System.Data.SqlTypes.SqlNullValueException9 1 System.Data.SqlTypes.SqlNullValueException10 5 0 System.Data.SqlTypes.SqlNullValueException11 1 System.Data.SqlTypes.SqlNullValueException12 </code></pre>
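<p>To attach the matches back onto the original frame as one column, collapse them per original row first; index alignment then does the rest:</p> <pre><code># one list of exception names per original row
df['Exceptions'] = exceptions[0].groupby(level=0).agg(list)
</code></pre>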
python|regex|pandas|dataframe
1
6,935
60,631,279
Not able to get tensorflow-go
<p>I'm trying to make an API for my ML model, but I'm not able to install the Go package for it. I'm getting this error:</p> <pre><code>go get github.com/tensorflow/tensorflow/tensorflow/go </code></pre> <pre><code>package github.com/tensorflow/tensorflow/tensorflow/go/core/core_protos_go_proto: cannot find package "github.com/tensorflow/tensorflow/tensorflow/go/core/core_protos_go_proto" in any of: </code></pre> <pre><code> /usr/lib/go-1.10/src/github.com/tensorflow/tensorflow/tensorflow/go/core/core_protos_go_proto (from $GOROOT) /home/user/go/src/github.com/tensorflow/tensorflow/tensorflow/go/core/core_protos_go_proto (from $GOPATH) </code></pre> <p>So, can you help me?</p>
<p>That's a well-known issue that isn't going to be fixed (it seems).</p> <p>So I decided to maintain a fork that fixes the problem <a href="https://github.com/galeone/tensorflow" rel="nofollow noreferrer">https://github.com/galeone/tensorflow</a></p> <pre><code>go get github.com/galeone/tensorflow/tensorflow/go@r2.4-go </code></pre> <p>You can also use <a href="https://github.com/galeone/tfgo" rel="nofollow noreferrer">tfgo</a> that depends on the fork and it allows a simplified usage of the Go bindings:</p> <pre><code>go get github.com/galeone/tfgo </code></pre>
tensorflow|go|machine-learning
0
6,936
60,480,780
How to make dataframeA None if A's Id exist in B
<p>I have two dataframes, dataframeA and dataframeB, each with the columns Id and name.</p> <p>I want to set dataframeA's name to None wherever its Id exists in dataframeB.</p> <p>dataframeA <br></p> <pre><code>ID, name 1 jake 2 kim </code></pre> <p>dataframe B <br/></p> <pre><code> ID, name 1, None </code></pre> <p>result <br/></p> <pre><code>ID, name 1 None 2 kim </code></pre> <p>My attempt so far:</p> <blockquote> <p>sub.apply(lambda x: None if x.ImageId in noimages_list else x.EncodedPixels)</p> </blockquote>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>Series.mask</code></a>:</p> <pre><code>dfa['name'] = dfa['name'].mask(dfa['ID'].isin(dfb['ID']), None) </code></pre> <p>or</p> <pre><code>dfa.loc[dfa['ID'].isin(dfb['ID']), 'name'] = None </code></pre>
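<p>A quick check with the sample data from the question:</p> <pre><code>dfa = pd.DataFrame({'ID': [1, 2], 'name': ['jake', 'kim']})
dfb = pd.DataFrame({'ID': [1], 'name': [None]})

dfa.loc[dfa['ID'].isin(dfb['ID']), 'name'] = None
#    ID  name
# 0   1  None
# 1   2   kim
</code></pre>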
python|python-3.x|pandas|dataframe|data-science
1
6,937
60,563,566
pivot data frame in pandas?
<pre><code>people1 trait1 YES people1 trait2 YES people1 trait3 NO people1 trait4 RED people2 trait1 NO people2 trait2 YES people2 trait4 BLACK </code></pre> <p>etc..</p> <p>Is it possible to create something like this from that table?</p> <pre><code> trait1, trait2, trait3, trait4 ... people1 YES YES NO RED people2 NO YES - BLACK people3 - - YES BLUE </code></pre> <p>The file is too big to do this in Excel. I tried pandas, but I can't find help for this case. I found the pd.pivot_table function but I can't build working code. I tried and got various errors (99% my fault). </p> <p>Can someone explain to me how to use it in my case? Or maybe there is a better option than pandas.pivot?</p> <p><strong>EDIT</strong></p> <pre><code>I rebuilt my frame: 1 'interpretation' 'trait' p1 YES t1 p1 BLACK t2 p1 NO t3 p2 NO t1 p2 RED t2 p2 NO t3 </code></pre> <p>And I used the suggestion:</p> <p>data1.pivot_table(index=1, columns="name", values='trait', aggfunc=','.join, fill_value='-').</p> <p>And I got:</p> <pre><code>TypeError: sequence item 0: expected str instance, float found </code></pre> <p>If I change</p> <p>data1.pivot_table(index=1, columns="trait", values='value', aggfunc=','.join, fill_value='-').</p> <p>I got a badly ordered table, but without an error:</p> <pre><code> p1 p2 p3 p4 YES trait1 t1 YES t1 t2 etc. NO RED No ... </code></pre> <p>So I think the first option is correct, but I can't repair that error. When I check the dtypes of df it returns (O) for all columns. </p>
<p>I think the problem is missing values in the <code>trait</code> column, which makes the <code>join</code> function fail. A possible solution is to replace the missing values with empty strings:</p> <pre><code>print (data1) 1 name trait 0 p1 YES NaN &lt;- missing value 1 p1 BLACK t2 2 p1 NO t3 3 p2 NO t1 4 p2 RED t2 5 p2 NO t3 data1['trait'] = data1['trait'].fillna('') df = data1.pivot_table(index=1, columns="name", values='trait', aggfunc=','.join, fill_value='-') print (df) 1 p1 p2 name BLACK t2 - NO t3 t1,t3 RED - t2 YES - </code></pre> <p>Also, if you want to convert the index to a column:</p> <pre><code>data1['trait'] = data1['trait'].fillna('') df = (data1.pivot_table(index=1, columns="name", values='trait', aggfunc=','.join, fill_value='-') .reset_index() .rename_axis(None, axis=1)) print (df) name p1 p2 0 BLACK t2 - 1 NO t3 t1,t3 2 RED - t2 3 YES - </code></pre>
python|pandas|dataframe
1
6,938
72,601,854
Manipulating DataFrame
<p>I have the following dataframe <code>df</code> where there are 3 columns: Date, value and topic. I want to create a new dataframe <code>df1</code> where the topic is the column and is indexed by day, and each topic has its own value per day. My problem is that I don't know how to match the value to the topic per day. Any help would be appreciated.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import random rng = pd.date_range('2015-02-24', periods=50, freq='H') TOPIC=np.random.choice(5, len(rng), replace=True) df = pd.DataFrame({ 'Date': rng, 'Val' : np.random.randn(len(rng)),'Topic':TOPIC}) columns=df.Topic.unique() df1=pd.DataFrame(columns=columns) df1['Date']=df['Date'] df1.set_index('Date',inplace=True) df1=df1.resample('D').ffill() df1 </code></pre>
<pre><code>df1 = (df.assign().pivot_table(index='Date', columns='Topic', values='Val')) </code></pre> <p>Output</p> <pre><code>Topic 0 1 2 3 4 Date 2015-02-24 00:00:00 NaN NaN NaN -1.311060 NaN 2015-02-24 01:00:00 0.194373 NaN NaN NaN NaN 2015-02-24 02:00:00 NaN NaN 0.182364 NaN NaN 2015-02-24 03:00:00 NaN NaN NaN -1.498907 NaN 2015-02-24 04:00:00 0.220041 NaN NaN NaN NaN 2015-02-24 05:00:00 NaN -0.183823 NaN NaN NaN 2015-02-24 06:00:00 NaN NaN NaN NaN 0.662866 2015-02-24 07:00:00 NaN 0.846723 NaN NaN NaN 2015-02-24 08:00:00 NaN NaN NaN -1.238696 NaN 2015-02-24 09:00:00 NaN NaN NaN -2.520253 NaN 2015-02-24 10:00:00 NaN NaN NaN NaN 1.056829 2015-02-24 11:00:00 NaN NaN NaN -0.749357 NaN 2015-02-24 12:00:00 NaN 0.038661 NaN NaN NaN 2015-02-24 13:00:00 NaN 0.304193 NaN NaN NaN 2015-02-24 14:00:00 NaN NaN NaN -1.217962 NaN 2015-02-24 15:00:00 NaN 2.073715 NaN NaN NaN 2015-02-24 16:00:00 NaN NaN NaN -0.320530 NaN 2015-02-24 17:00:00 -1.309147 NaN NaN NaN NaN 2015-02-24 18:00:00 NaN NaN NaN NaN -0.240466 2015-02-24 19:00:00 NaN NaN NaN 0.043733 NaN 2015-02-24 20:00:00 NaN NaN NaN NaN 1.395441 2015-02-24 21:00:00 NaN NaN 0.625773 NaN NaN 2015-02-24 22:00:00 NaN NaN NaN NaN 0.291916 2015-02-24 23:00:00 NaN NaN NaN 0.090431 NaN 2015-02-25 00:00:00 NaN NaN -0.509572 NaN NaN 2015-02-25 01:00:00 NaN NaN -0.309990 NaN NaN 2015-02-25 02:00:00 NaN NaN 0.711705 NaN NaN 2015-02-25 03:00:00 NaN 0.296445 NaN NaN NaN 2015-02-25 04:00:00 NaN NaN NaN 0.222146 NaN 2015-02-25 05:00:00 NaN NaN NaN NaN 1.030145 2015-02-25 06:00:00 1.064250 NaN NaN NaN NaN 2015-02-25 07:00:00 NaN NaN 0.023348 NaN NaN 2015-02-25 08:00:00 NaN NaN NaN NaN -0.576451 2015-02-25 09:00:00 NaN NaN 1.573513 NaN NaN 2015-02-25 10:00:00 NaN NaN 0.960823 NaN NaN 2015-02-25 11:00:00 NaN NaN 0.349976 NaN NaN 2015-02-25 12:00:00 NaN NaN NaN -0.885772 NaN 2015-02-25 13:00:00 NaN 1.050893 NaN NaN NaN 2015-02-25 14:00:00 NaN NaN -1.634622 NaN NaN 2015-02-25 15:00:00 NaN NaN NaN NaN 0.003866 2015-02-25 16:00:00 0.952088 NaN NaN NaN NaN 2015-02-25 17:00:00 NaN NaN 0.518994 NaN NaN 2015-02-25 18:00:00 -0.770279 NaN NaN NaN NaN 2015-02-25 19:00:00 NaN NaN -0.510245 NaN NaN 2015-02-25 20:00:00 -0.024560 NaN NaN NaN NaN 2015-02-25 21:00:00 NaN NaN -0.823536 NaN NaN 2015-02-25 22:00:00 NaN NaN NaN NaN -0.498414 2015-02-25 23:00:00 NaN 0.497084 NaN NaN NaN 2015-02-26 00:00:00 NaN 0.799647 NaN NaN NaN 2015-02-26 01:00:00 NaN NaN NaN -2.291271 NaN </code></pre>
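<p>Two small follow-ups: the empty <code>.assign()</code> above is a no-op and can be dropped, and since the question resamples to a daily index, you can aggregate the hourly pivot per day afterwards (mean is just one plausible choice of aggregation):</p> <pre><code>df1 = df.pivot_table(index='Date', columns='Topic', values='Val')
df1_daily = df1.resample('D').mean()
</code></pre>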
python|pandas|dataframe|numpy
0
6,939
72,774,839
Find the third vertex of an equilateral triangle given two N-dimensional vertices in python
<h2><strong>Given:</strong></h2> <p>Two vertices of an equilateral triangle, A,B ∊ R<sup>N</sup> with N &gt; 1.</p> <h2><strong>Goal:</strong></h2> <p>Find the third vertex Z ∊ R<sup>N</sup> such that <code>||A-B|| = ||A-Z|| = ||B-Z||</code>.</p> <p>I have the following python script from <a href="https://stackoverflow.com/a/69672103/5437090">here</a> which calculates it for 2-dimensional points (A,B ∊ R<sup>2</sup> and Z ∊ R<sup>2</sup>):</p> <pre><code>import numpy as np def equilateral(A, B): # Computes `x coordinate` of the third vertex. vx = ( A[0] + B[0] + np.sqrt(3) * ( A[1] - B[1] ) ) / 2 # Computes 'y coordinate' of the third vertex. vy = ( A[1] + B[1] + np.sqrt(3) * ( A[0] - B[0] ) ) / 2 z = np.array([vx, vy]) #This point z is the third vertex. return z # Test for 2D vectors: A = np.array([2,0]) B = np.array([5,0]) Z = equilateral(A,B) print(Z) # [ 3.5, -2.59807621] </code></pre> <p>How can I extend this solution (or maybe come up with a more intelligent one) to obtain the third vertex for N-dimensional vectors, as in the following test example?</p> <pre><code>N = 5 A = np.random.normal(size=(N, )) B = np.random.normal(size=(N, )) z = equilateral(A, B) </code></pre> <p>Cheers,</p>
<p>You can use a general function to find a perpendicular to a vector based on <a href="https://math.stackexchange.com/a/3175858">this method</a>. Then you just add a perpendicular vector of the right length to the midpoint of the first two vertices. As noted in comments, there are an infinite number of vertices that would solve this when the number of dimensions is more than 2. This method simply identifies one that is easy to calculate.</p> <pre><code>def magnitude(v): &quot;&quot;&quot; Return the magnitude of a vector. See https://stackoverflow.com/a/9184560/567595 &quot;&quot;&quot; return np.sqrt(v.dot(v)) def perpendicular(u): &quot;&quot;&quot; Return a vector perpendicular to vector v, of magnitude 1. Based on https://math.stackexchange.com/a/3175858 &quot;&quot;&quot; r = np.zeros(u.shape) zeros, = np.where(u == 0) if zeros.size &gt; 0: # If any u[i] is zero, return standard vector e_i r[zeros[0]] = 1 else: # Otherwise return u[1]e_0 - u[0]e_1 m = magnitude(u[:2]) r[:2] = u[1] / m, -u[0] / m return r def third_vertex_equilateral(A, B): &quot;&quot;&quot; Find a third vertex of an equilateral triangle with vertices A and B, in N dimensions &quot;&quot;&quot; side = A - B midpoint = (A + B) / 2 bisector = perpendicular(side) * np.sqrt(.75) * magnitude(side) return midpoint + bisector N = 5 A = np.random.normal(size=N) B = np.random.normal(size=N) C = third_vertex_equilateral(A, B) np.isclose(magnitude(A - B), magnitude(A - C)) # True np.isclose(magnitude(A - B), magnitude(B - C)) # True </code></pre> <p>Note that when written more succinctly, the function is quite similar to the one in the question:</p> <pre><code>def third_vertex_equilateral(A, B): &quot;&quot;&quot; Find a third vertex of an equilateral triangle with vertices A and B, in N dimensions &quot;&quot;&quot; return (A + B + np.sqrt(3) * perpendicular(A - B) * magnitude(A - B)) / 2 </code></pre>
python|numpy|math|geometry|triangle
0
6,940
59,699,577
Failure on Pandas Installation Using PyPy interpreter on PyCharm
<p>I've been trying to install pandas using the PyPy interpreter on Pycharm on a windows machine. I've troubleshooted the issues online extensively and can't resolve it. I've used the built in Pycharm module installer and also the CMD window. I've tried with and without the no-cache-dir command. I've installed microsoft Build Tools for Visual Studio 2019. I've checked that pip is the latest version. I can install other modules but for some reason pandas won't install. Numpy also fails on installation. Here is the error I get; any help would be appreciated.</p> <pre><code> (PYPYVE~1) C:\Users\flegg&gt;pip install pandas </code></pre> <pre><code> Using cached https://files.pythonhosted.org/packages/b7/93/b544dd08092b457d88e10fc1e0989d9397fd32ca936fdfcbb2584 178dd2b/pandas-0.25.3.tar.gz ERROR: Command errored out with exit status 1: command: 'c:\users\flegg\pypy venv\bin\pypy.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\ \Users\\flegg\\AppData\\Local\\Temp\\pip-install-kv0nhuc2\\pandas\\setup.py'"'"'; __file__='"'"'C:\\Users\\flegg\\ AppData\\Local\\Temp\\pip-install-kv0nhuc2\\pandas\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file __);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' e gg_info --egg-base 'C:\Users\flegg\AppData\Local\Temp\pip-install-kv0nhuc2\pandas\pip-egg-info' cwd: C:\Users\flegg\AppData\Local\Temp\pip-install-kv0nhuc2\pandas\ Complete output (216 lines): ERROR: Command errored out with exit status 1: command: 'c:\users\flegg\pypy venv\bin\pypy.exe' 'c:\users\flegg\pypy venv\site-packages\pip\_vendor\pep5 17\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\flegg\AppData\Local\Temp\tmp_z50eost' cwd: C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy Complete output (180 lines): Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\mtrand.pyx Processing numpy/random\_bit_generator.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: customize MSVCCompiler libraries mkl_rt not found in ['C:\\'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not 
found in ['C:\\'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['C:\\'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\'] NOT AVAILABLE accelerate_info: NOT AVAILABLE blas_info: libraries blas not found in ['C:\\'] NOT AVAILABLE blas_src_info: NOT AVAILABLE NOT AVAILABLE 'svnversion' is not recognized as an internal or external command, operable program or batch file. non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\'] NOT AVAILABLE flame_info: libraries flame not found in ['C:\\'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ &lt;class 'numpy.distutils.system_info.atlas_3_10_threads_info'&gt; NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ &lt;class 'numpy.distutils.system_info.atlas_3_10_info'&gt; NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ &lt;class 'numpy.distutils.system_info.atlas_threads_info'&gt; NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ &lt;class 'numpy.distutils.system_info.atlas_info'&gt; NOT AVAILABLE lapack_info: libraries lapack not found in ['C:\\'] NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE running dist_info running build_src build_src building py_modules sources creating build creating build\src.win32-3.6 creating build\src.win32-3.6\numpy creating build\src.win32-3.6\numpy\distutils building library "npymath" sources Running from numpy source directory. setup.py:461: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy\numpy\distutils\system_info.py:1896: UserWarnin g: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy\numpy\distutils\system_info.py:1896: UserWarnin g: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy\numpy\distutils\system_info.py:1896: UserWarnin g: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. if self._calc_info(blas): C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy\numpy\distutils\system_info.py:1730: UserWarnin g: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. 
return getattr(self, '_calc_info_{}'.format(name))() C:\Users\flegg\AppData\Local\Temp\pip-wheel-ydesvn0r\numpy\numpy\distutils\system_info.py:1730: UserWarnin g: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() C:\Users\flegg\Pypy\pypy-c-jit-98354-1608da62bfc7-win32\lib-python\3\distutils\dist.py:261: UserWarning: U nknown distribution option: 'define_macros' warnings.warn(msg) error: Microsoft Visual C++ 14.1 is required. Get it with "Build Tools for Visual Studio": https://visuals tudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\flegg\pypy venv\bin\pypy.exe' 'c:\users\flegg\pypy ve nv\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\flegg\AppData\Local \Temp\tmp_z50eost' Check the logs for full command output. Traceback (most recent call last): File "c:\users\flegg\pypy venv\site-packages\setuptools\installer.py", line 128, in fetch_build_egg subprocess.check_call(cmd) File "C:\Users\flegg\Pypy\pypy-c-jit-98354-1608da62bfc7-win32\lib-python\3\subprocess.py", line 311, in chec k_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['c:\\users\\flegg\\pypy venv\\bin\\pypy.exe', '-m', 'pip', '--disable -pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\flegg\\AppData\\Local\\Temp\\tmpg6tpv8wp', '--quiet', 'numpy&gt;=1.13.3']' returned non-zero exit status 1. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\flegg\AppData\Local\Temp\pip-install-kv0nhuc2\pandas\setup.py", line 840, in &lt;module&gt; **setuptools_kwargs File "c:\users\flegg\pypy venv\site-packages\setuptools\__init__.py", line 144, in setup _install_setup_requires(attrs) File "c:\users\flegg\pypy venv\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "c:\users\flegg\pypy venv\site-packages\setuptools\dist.py", line 721, in fetch_build_eggs replace_conflicting=True, File "c:\users\flegg\pypy venv\site-packages\pkg_resources\__init__.py", line 782, in resolve replace_conflicting=replace_conflicting File "c:\users\flegg\pypy venv\site-packages\pkg_resources\__init__.py", line 1065, in best_match return self.obtain(req, installer) File "c:\users\flegg\pypy venv\site-packages\pkg_resources\__init__.py", line 1077, in obtain return installer(requirement) File "c:\users\flegg\pypy venv\site-packages\setuptools\dist.py", line 777, in fetch_build_egg return fetch_build_egg(self, req) File "c:\users\flegg\pypy venv\site-packages\setuptools\installer.py", line 130, in fetch_build_egg raise DistutilsError(str(e)) distutils.errors.DistutilsError: Command '['c:\\users\\flegg\\pypy venv\\bin\\pypy.exe', '-m', 'pip', '--disab le-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\flegg\\AppData\\Local\\Temp\\tmpg6tpv8wp', '--quiet ', 'numpy&gt;=1.13.3']' returned non-zero exit status 1. ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. </code></pre>
<p>The problem is that pandas wants numpy, numpy installation is failing to find your compiler: <code>error: Microsoft Visual C++ 14.1 is required.</code>. </p> <p>You might want to update setuptools (the module that looks for the compiler) via <code>pip install --upgrade setuptools</code>. Hopefully you are using the <a href="https://www.pypy.org/download.html" rel="nofollow noreferrer">latest PyPy 7.3 release</a></p>
python|pandas|pip|pypy
1
6,941
59,650,713
Why does my line fit result in a gradient of np.nan when line fitting in Log-Log scale?
<p>I'm trying to find the gradient under which my graph is plotted whilst line fitting in the double log scale. Therefore I've written the function below.</p> <pre class="lang-python prettyprint-override"><code>def calc_coefficients_signal_shift(n: int, N: int, num: int, shift: int, operations: int): wnss = white_noise_signal_shift(n, N, num, shift, operations) whitenoise_filtered = np.abs(wnss[1][0:n // 2, :]).copy() shifted_whitenoise = np.abs(wnss[4][0:n // 2, :]).copy() x = np.linspace(1, n, n // 2) original_coefficients = [] shifted_coefficients = [] for i in range(num): original_coefficients.append(np.polyfit(np.log10(x), np.log10(whitenoise_filtered[:, i]), 1)) shifted_coefficients.append(np.polyfit(np.log10(x), np.log10(shifted_whitenoise[:, i]), 1)) original_coefficients, shifted_coefficients = \ np.asarray((original_coefficients, shifted_coefficients)) fig, (ax1, ax2) = plt.subplots(2, 1, sharey=True, figsize=(10, 7.5)) ax1.plot(whitenoise_filtered) ax1.loglog(10 ** (original_coefficients.mean() * np.log10(x) + 1), 'r') ax1.set_title('Original noise') ax2.loglog(shifted_whitenoise) ax2.loglog(10 ** (shifted_coefficients.mean() * np.log10(x) + 1), 'r') ax2.set_title('Shifted noise') print(original_coefficients.mean(), shifted_coefficients.mean()) </code></pre> <hr> <p>The goal of the function <code>calc_coefficients_signal_shift</code> is to find whether the gradient of the graph changes after shifting the signal. I expect it to be somewhere around <code>-5/3.</code> since that is the value I applied after my imports in the function <code>white_noise</code> with the variable <code>slope_loglog</code> (filtering the noise under said slope coefficient). When a <code>0</code> is entered for the number of <code>operations</code> the shift is performed it should result in identical arrays for both coefficients. However, it results in <code>nan</code> for the original noise and a real value for the shifted noise (which is in this case not shifted at all, thus the original noise).</p> <p>Can someone tell me what I am doing wrong?</p> <blockquote> <p>NOTE: you may assume that the shift operation is working properly since I've been debugging that one for a while now and it behaves as it should. This question is purely about my line fitting function.</p> </blockquote> <pre class="lang-python prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import numpy.fft as fft import numpy.random as rnd # slope to use for every function grad = -5 / 3. rnd.seed(10) def white_noise(n: int, N: int, slope: int = grad): x = np.linspace(1, n, n // 2) slope_loglog = 10 ** (slope * np.log10(x) + 1) whitenoise = rnd.randn(n // 2, N) + 1j * rnd.randn(n // 2, N) whitenoise[0, :] = 0 # zero-mean noise whitenoise_filtered = whitenoise * slope_loglog[:, np.newaxis] whitenoise = np.concatenate((whitenoise, whitenoise[0:1, :], np.conj(whitenoise[-1:0:-1, :])), axis=0) whitenoise_filtered = np.concatenate( (whitenoise_filtered, whitenoise_filtered[0:1, :], np.conj(whitenoise_filtered[-1:0:-1, :])), axis=0) whitenoise_signal = fft.ifft(whitenoise_filtered, axis=0) whitenoise_signal = np.real_if_close(whitenoise_signal) if np.iscomplex(whitenoise_signal).any(): print('Warning! 
whitenoise_signal is complex-valued!') whitenoise_retransformed = fft.fft(whitenoise_signal, axis=0) return whitenoise, whitenoise_filtered, whitenoise_signal, whitenoise_retransformed, slope_loglog def random_arrays(N: int, num: int): res = np.asarray(range(N)) rnd.shuffle(res) return res[:num] def modified_roll(inp, shift: int, operations: int): count = 0 array = inp[:] array_rolled = array.copy() for k in range(operations): count += shift array = np.roll(array, shift, axis=0) array[:count] = 0 array_rolled += array out = array_rolled / operations return out def white_noise_signal_shift(n: int, N: int, num: int, shift: int, operations: int): whitenoise, whitenoise_filtered, whitenoise_signal = white_noise(n, N)[:3] # only showing the selected arrays arrays_to_select = random_arrays(N, num) selected_whitenoise = whitenoise[:, arrays_to_select].copy() selected_whitenoise_filtered = whitenoise_filtered[:, arrays_to_select].copy() selected_whitenoise_signal = whitenoise_signal[:, arrays_to_select].copy() # shifting the signal as a field of different refractive index would do if operations == 0: shifted_signal = selected_whitenoise_signal else: shifted_signal = modified_roll(selected_whitenoise_signal.copy(), shift, operations) # fourier transform back to the power frequency domain shifted_whitenoise = fft.fft(shifted_signal, axis=0) return selected_whitenoise, selected_whitenoise_filtered, selected_whitenoise_signal, shifted_signal, \ shifted_whitenoise def calc_coefficients_signal_shift(n: int, N: int, num: int, shift: int, operations: int): wnss = white_noise_signal_shift(n, N, num, shift, operations) whitenoise_filtered = np.abs(wnss[1][0:n // 2, :]).copy() shifted_whitenoise = np.abs(wnss[4][0:n // 2, :]).copy() x = np.linspace(1, n, n // 2) original_coefficients = [] shifted_coefficients = [] for i in range(num): original_coefficients.append(np.polyfit(np.log10(x), np.log10(whitenoise_filtered[:, i]), 1)) shifted_coefficients.append(np.polyfit(np.log10(x), np.log10(shifted_whitenoise[:, i]), 1)) original_coefficients, shifted_coefficients = \ np.asarray((original_coefficients, shifted_coefficients)) fig, (ax1, ax2) = plt.subplots(2, 1, sharey=True, figsize=(10, 7.5)) ax1.loglog(whitenoise_filtered) ax1.loglog(10 ** (original_coefficients.mean() * np.log10(x) + 1), 'r') ax1.set_title('Original noise') ax2.loglog(shifted_whitenoise) ax2.loglog(10 ** (shifted_coefficients.mean() * np.log10(x) + 1), 'r') ax2.set_title('Shifted noise') print(original_coefficients.mean(), shifted_coefficients.mean()) calc_coefficients_signal_shift(200, 1, 1, 0, 0) </code></pre>
<p>After some research, I found out that <code>slope_loglog</code> was defined incorrectly. The way it was defined was correct for plotting a straight line in the double-log graph, but since I was studying power-law behavior I needed to use the power-law formula. So, <code>slope_loglog</code> became <code>c * x ** m</code>, with <code>c = 10</code> (or whatever value I want to use, since that doesn't affect the outcome that much) and <code>m</code> being the gradient I want to calculate.</p> <p>Implementing this with the function <code>scipy.optimize.curve_fit</code> handed me the end result.</p> <pre class="lang-python prettyprint-override"><code>def calc_coefficients_signal_shift(n: int, N: int, num: int, shift: int, operations: int): wnss = white_noise_signal_shift(n, N, num, shift, operations) whitenoise_filtered = np.abs(wnss[1][0:n // 2, :]).copy() shifted_whitenoise = np.abs(wnss[4][0:n // 2, :]).copy() x = np.linspace(1, n, n // 2) original_coefficients = [] shifted_coefficients = [] for i in range(num): original_coefficients.append(curve_fit(lambda x, m: c * x ** m, x, whitenoise_filtered[:, i], p0=(-5/3.))[0][0]) shifted_coefficients.append(curve_fit(lambda x, m: c * x ** m, x, shifted_whitenoise[:, i])[0][0]) original_coefficients, shifted_coefficients = \ np.asarray((original_coefficients, shifted_coefficients)) fig, (ax1, ax2) = plt.subplots(2, 1, sharey=True, figsize=(10, 7.5)) ax1.loglog(whitenoise_filtered) ax1.loglog(c * x ** original_coefficients.mean(), 'k:', label=f'Slope: {np.round(original_coefficients.mean(), 3)}') ax1.set_xlabel('Log(f)') ax1.set_ylabel('Log(p)') ax1.set_title('Original noise') ax1.legend(loc=0) ax2.loglog(shifted_whitenoise) ax2.loglog(c * x ** shifted_coefficients.mean(), 'k:', label=f'Slope: {np.round(shifted_coefficients.mean(), 3)}') ax2.set_ylabel('Log(p)') ax2.set_xlabel('Log(f)') ax2.set_title('Shifted noise') ax2.legend(loc=0) plt.tight_layout() print(f'The mean of the original coefficients is: {stats.describe(original_coefficients)}') print(f'The mean of the shifted coefficients is: {stats.describe(shifted_coefficients)}') </code></pre>
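<p>For completeness, the snippet above leans on a couple of names defined outside the function; to run it on top of the question's setup you would also need something like:</p> <pre><code>from scipy import stats
from scipy.optimize import curve_fit

c = 10  # the power-law scaling constant mentioned above
</code></pre>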
python|numpy|signal-processing|curve-fitting
0
6,942
32,365,668
array passing between numpy and cython
<p>I would like to pass a numpy array to Cython. The Cython C type should be <code>float</code>. Which numpy dtype do I have to choose? When I choose <code>float</code> or <code>np.float</code>, it's actually a C <code>double</code>.</p>
<p>You want <code>np.float32</code>. This is a 32-bit C <code>float</code>.</p>
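<p>As a minimal illustration (a hypothetical <code>.pyx</code> module, assuming it is compiled against NumPy's headers), the pairing looks like this:</p> <pre><code># doubler.pyx (hypothetical module)
cimport numpy as cnp

def double_in_place(cnp.float32_t[:] arr):
    # cnp.float32_t is the 32-bit C float that np.float32 maps to
    cdef Py_ssize_t i
    for i in range(arr.shape[0]):
        arr[i] *= 2.0
</code></pre> <p>Calling it with an array created as <code>np.zeros(10, dtype=np.float32)</code> works, while passing a <code>np.float64</code> array raises a buffer dtype mismatch.</p>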
numpy|cython
1
6,943
32,528,850
how to sort an alphanumeric pandas index in descending order
<p>I have a pandas data frame that looks like:</p> <pre><code>df = DataFrame({'id':['a132','a132','b5789','b5789','c1112','c1112'], 'value':[0,0,0,0,0,0,]}) df = df.groupby('id').sum() value id a132 0 b5789 0 c1112 0 </code></pre> <p>I would like to sort it so that it looks like:</p> <pre><code> value id b5789 0 c1112 0 a132 0 </code></pre> <p>i.e. sorting in descending order by the numeric part of the id, even though it is stored as a string.</p>
<p>Categoricals provide a reasonably easy way to define an arbitrary ordering</p> <pre><code>In [35]: df['id'] = df['id'].astype('category') In [39]: df['id'] = (df['id'].cat.reorder_categories( sorted(df['id'].cat.categories, key = lambda x: int(x[1:]), reverse=True))) In [40]: df.groupby('id').sum() Out[40]: value id b5789 0 c1112 0 a132 0 </code></pre>
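<p>On pandas 1.1 or newer, the same ordering can also be had without categoricals by sorting with a key function (a sketch, assuming each id is one letter followed by the number):</p> <pre><code>out = df.groupby('id').sum()
# strip the leading letter and compare numerically
out = out.sort_index(key=lambda idx: idx.str[1:].astype(int), ascending=False)
</code></pre>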
python|sorting|pandas
2
6,944
32,231,547
Saving Python Numpy Structure As MySQL Blob
<p>One of my programs creates a very large numpy array that I wish to save as a Blob within a database, as accessing the array is far faster than going back to the previous level and creating it. I can add it to the database by saving an .npz file to disc using:-</p> <pre><code>import numpy as n n.savez(outfile,**kwargs) </code></pre> <p>and saving this file to the database with:-</p> <pre><code>myData = open(outfile, 'rb').read() sql = "INSERT INTO myTable (BlobColumn) VALUES (%s)" cursor.execute(sql, (myData,)) </code></pre> <p>Whilst this works, it seems somewhat inelegant, but I cannot figure out how to save the array directly to the database.</p>
<p>I know this is quite some time later - I was able to do this using pandas <code>pd.to_sql</code>. Say I have some numpy array <code>x</code> that I want to insert as a blob column. Then you can do the following:</p> <pre><code>row = [x.dumps()] data = pd.DataFrame(row, columns = ['myBlob']) data.to_sql(name = "myTable", schema = "mySchema", con = dbConnection, if_exists = "append", index = False) </code></pre> <p>This should place the blob in the mySchema.myTable with some <code>cnumpy.core.multiarray</code> ... data type.</p>
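<p>To read the blob back into an array (a sketch under the same assumptions; <code>ndarray.dumps()</code> is simply a pickle of the array, so <code>pickle.loads</code> restores it; note that <code>dumps()</code> has since been deprecated in favour of <code>pickle.dumps(x)</code>):</p> <pre><code>import pickle
import pandas as pd

# read the row back and unpickle the blob into a numpy array
blob = pd.read_sql("SELECT myBlob FROM mySchema.myTable", dbConnection)["myBlob"].iloc[0]
x_restored = pickle.loads(blob)
</code></pre>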
python|mysql|numpy
1
6,945
32,573,868
What is the proper way to create a numpy array of transformation matrices
<p>Given a list of rotation angles (lets say about the X axis):</p> <pre><code>import numpy as np x_axis_rotations = np.radians([0,10,32,44,165]) </code></pre> <p>I can create an array of matrices matching these angles by doing so:</p> <pre><code>matrices = [] for angle in x_axis_rotations: matrices.append(np.asarray([[1 , 0 , 0],[0, np.cos(angle), -np.sin(angle)], [0, np.sin(angle), np.cos(angle)]])) matrices = np.array(matrices) </code></pre> <p>This will work but it doesn't take advantage of numpy's strengths for dealing with large arrays... So if my array of angles is in the millions, doing it this way won't be very fast.</p> <p>Is there a better (faster) way to do create an array of transform matrices from an array of inputs?</p>
<p>Here's a direct and simple approach:</p> <pre><code>c = np.cos(x_axis_rotations) s = np.sin(x_axis_rotations) matrices = np.zeros((len(x_axis_rotations), 3, 3)) matrices[:, 0, 0] = 1 matrices[:, 1, 1] = c matrices[:, 1, 2] = -s matrices[:, 2, 1] = s matrices[:, 2, 2] = c </code></pre> <hr> <p>timings, for the curious:</p> <pre><code>In [30]: angles = 2 * np.pi * np.random.rand(1000) In [31]: timeit OP(angles) 100 loops, best of 3: 5.46 ms per loop In [32]: timeit askewchan(angles) 10000 loops, best of 3: 39.6 µs per loop In [33]: timeit divakar(angles) 10000 loops, best of 3: 93.8 µs per loop In [34]: timeit divakar_oneline(angles) 10000 loops, best of 3: 56.1 µs per loop In [35]: timeit divakar_combine(angles) 10000 loops, best of 3: 43.9 µs per loop </code></pre> <p>All are much faster than your loop, so use whichever you like the most :)</p>
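<p>For completeness: if SciPy is available, its vectorized rotation helper builds the same stack in one call (a sketch; <code>as_matrix</code> needs SciPy 1.4 or newer, older versions call it <code>as_dcm</code>):</p> <pre><code>from scipy.spatial.transform import Rotation

# angles are in radians, matching the question's input
matrices = Rotation.from_euler('x', x_axis_rotations).as_matrix()  # shape (5, 3, 3)
</code></pre>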
python|arrays|performance|numpy|matrix
3
6,946
40,731,524
UnicodeDecodeError: ('utf-8' codec) while reading a dta file in Pandas
<p>I am trying to open a <code>dta</code> file with Pandas but get a <code>UnicodeDecodeError</code>:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; pd.read_stata('/some/stata/file.dta',encoding='utf8') # I've tried 'utf8', "ISO-8859-1", 'latin1', 'cp1252' and not putting in anything, same error. Traceback (most recent call last): File "&lt;pyshell#123&gt;", line 1, in &lt;interactive&gt; pd.read_stata(path,encoding='cp1252') File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 161, in read_stata chunksize=chunksize, encoding=encoding) File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 960, in __init__ self._read_header() File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 980, in _read_header self._read_new_header(first_char) File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 1056, in _read_new_header self.vlblist = self._get_vlblist() File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 1127, in _get_vlblist for i in range(self.nvar)] File "/usr/local/lib/python2.7/dist-packages/pandas/io/stata.py", line 1269, in _decode return s.decode('utf-8') File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0x93 in position 18: invalid start byte </code></pre> <p>The file contains non-ASCII characters and was saved (probably on a Windows or Mac) by someone else. <strong>R</strong> can open the file and save it as a <code>csv</code>, which I can then read normally, but it would be nice to be able to do everything with Python.</p> <p>For the encoding argument, following other threads here, I have tried 'utf8', "ISO-8859-1", 'latin1', 'cp1252' and not putting in anything. However, I always get the exact same error.</p> <p>Any idea what is going on and what can I do?</p> <p>I'm using Python 2.7 on Ubuntu 14.04 in case that matters.</p>
<p>A fix was committed to master on Github and should be released with version <code>0.25</code>.</p> <p>See details about this issue <a href="https://github.com/pandas-dev/pandas/issues/25960" rel="nofollow noreferrer">here</a>.</p> <p>For a temporary fix, change line <code>1334</code> of <code>pandas.io.stata</code> from</p> <pre><code>return s.decode('utf-8') </code></pre> <p>to</p> <pre><code>return s.decode('latin-1') </code></pre> <p>Unfortunately, there are certain cases where Stata, or perhaps other software, will allow some non-<code>UTF-8</code> characters in. Presumably, you are using a <code>dta</code> file the version of which is <code>118</code> and since these are supposed to be pure <code>UTF-8</code>, Pandas ignores the encoding <code>kwarg</code>.</p>
python|pandas|utf-8|stata
0
6,947
61,995,249
Can't train tensorflow ssd_mobilenet_v2: Failed to get matching files
<p>After successfully training the <code>faster_rcnn_inception_v2_coco_2018_01_28</code> model on a custom data set and getting good results, I attempted to use the <code>ssd_mobilenet_v2_quantized_300x300_coco</code> model using the same dataset and following the same tutorial linked below. I get this error when trying to train. I've double checked all the files used and I also get the exact same error when using <code>ssd_mobilenet_v1</code>.</p> <pre><code> (tensorflow1) C:\tensorflow1\models\research\object_detection&gt;python train.py --logtostderr -train_dir=C:/tensorflow1/models/research/object_detection/training --pipeline_config_path=C:/tensorflow1/models/research/object_detection/training/ssd_mobilenet_v2_quantized_300x300_coco.config C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) WARNING:tensorflow:From C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py:125: main (from __main__) is deprecated and will be removed in a future version. Instructions for updating: Use object_detection/model_main.py. WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\legacy\trainer.py:265: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Please switch to tf.train.create_global_step WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards. WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\builders\dataset_builder.py:80: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version. 
Instructions for updating: Use `tf.data.experimental.parallel_interleave(...)`. WARNING:tensorflow:From C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\ops\sparse_ops.py:1165: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version. Instructions for updating: Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead. WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\core\preprocessor.py:1208: calling squeeze (from tensorflow.python.ops.array_ops) with squeeze_dims is deprecated and will be removed in a future version. Instructions for updating: Use the `axis` argument instead WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\core\batcher.py:96: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`). WARNING:tensorflow:From C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\training\input.py:751: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. WARNING:tensorflow:From C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\training\input.py:751: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. INFO:tensorflow:depth of additional conv before box predictor: 0 INFO:tensorflow:depth of additional conv before box predictor: 0 INFO:tensorflow:depth of additional conv before box predictor: 0 INFO:tensorflow:depth of additional conv before box predictor: 0 INFO:tensorflow:depth of additional conv before box predictor: 0 INFO:tensorflow:depth of additional conv before box predictor: 0 Traceback (most recent call last): File "train.py", line 184, in &lt;module&gt; tf.app.run() File "C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run _sys.exit(main(argv)) File "C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\util\deprecation.py", line 306, in new_func return func(*args, **kwargs) File "train.py", line 180, in main graph_hook_fn=graph_rewriter_fn) File "C:\tensorflow1\models\research\object_detection\legacy\trainer.py", line 396, in train include_global_step=False)) File "C:\tensorflow1\models\research\object_detection\utils\variables_helper.py", line 126, in get_variables_available_in_checkpoint ckpt_reader = tf.train.NewCheckpointReader(checkpoint_path) File "C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 326, in NewCheckpointReader return CheckpointReader(compat.as_bytes(filepattern), status) File "C:\Users\gregt_000\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on C:/tensorflow1/models/research/object_detection/ 
ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt: Not found: FindFirstFile failed for: C:/tensorflow1/models/research/object_detection/ ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03 : The system cannot find the path specified. ; No such process (tensorflow1) C:\tensorflow1\models\research\object_detection&gt; </code></pre> <p><a href="https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10" rel="nofollow noreferrer">https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10</a></p> <p><a href="https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi" rel="nofollow noreferrer">https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi</a></p>
<p>There is a rogue whitespace in the tutorial if it is followed exactly as-is; this is what was causing the error.</p> <p>Line 156: change fine_tune_checkpoint to "C:/tensorflow1/models/research/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt" (note: no space after object_detection/).</p>
python|tensorflow|object-detection|google-coral
0
6,948
61,678,885
How to calculate the sum of rows with the maximum date country wise
<p>I am trying to calculate the sum of the rows with the maximum date per country; if the country has more than one province, it should add up the confirmed cases for that maximum date. For example, this is the <a href="https://i.stack.imgur.com/KAqc6.jpg" rel="nofollow noreferrer">input</a> that I have, and this should be the <a href="https://i.stack.imgur.com/j1yab.jpg" rel="nofollow noreferrer">output</a>.</p> <p>So the output for China is 90, which is the sum of Tianjin and Xinjiang for the maximum date, 02-03-2020. And since Argentina does not have any provinces, its output is 20 for the highest date, which again is the same date as above.</p>
<p>The strategy is to sort the values such that the last date is the first row of the Country/Region-Province/State pairs, then roll up the dataset twice, filtering the max date between roll ups.</p> <p>First, sorting to put most recent dates at the top of each group:</p> <pre><code>(df .sort_values(['Country/Region', 'Province/State', 'Date'], ascending=False)) Date Country/Region Province/State Confirmed 3 02-03-2020 China Xinjiang 70 2 01-03-2020 China Xinjiang 30 1 02-03-2020 China Tianjin 20 0 01-03-2020 China Tianjin 10 </code></pre> <p>Then rolling up to Country/Region-Province/State and taking the most recent date:</p> <pre><code>(df .sort_values(['Country/Region', 'Province/State', 'Date'], ascending=False) .groupby(['Country/Region', 'Province/State']) .first()) Date Confirmed Country/Region Province/State China Tianjin 02-03-2020 20 Xinjiang 02-03-2020 70 </code></pre> <p>Finally, rolling up again to just Country/Region: </p> <pre><code>(df .sort_values(['Country/Region', 'Province/State', 'Date'], ascending=False) .groupby(['Country/Region', 'Province/State']) .first() .groupby('Country/Region').sum()) Confirmed Country/Region China 90 </code></pre>
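<p>One caveat worth adding: if <code>Date</code> is held as a <code>DD-MM-YYYY</code> string, the sort above is lexicographic and can pick the wrong "latest" row once dates span different months. Parsing it first keeps the ordering correct (a small sketch, assuming that format):</p> <pre><code># parse before sorting so "latest" is chronological, not lexicographic
df['Date'] = pd.to_datetime(df['Date'], format='%d-%m-%Y')
</code></pre>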
pandas|dataframe
1
6,949
61,766,636
adding percentage column by value in a column
<p>I'm trying to encode a categorical column's values as their percentage frequency (frequency encoding), to use as a new feature.</p> <pre><code>Value Count Frequency (%) 20190 14723 16.2% 20100 11235 12.4% 20120 9449 10.4% 20130 7744 8.5% 20210 5920 6.5% 20140 5192 5.7% 20270 4324 4.8% 20220 3800 4.2% 20180 3707 4.1% 20110 3031 3.3% Other values (28) 21572 23.8% </code></pre> <p>I tried this:</p> <pre><code>df1['binary_group_of_materials']=df1['A_group_of_materials'].value_counts(normalize=True) * 100 </code></pre> <p>A new column is created, but all the values are NaN.</p> <p>The output should be: </p> <pre><code>Value Frequency (%) 20190 16.2% 20100 12.4% 20120 10.4% 20130 8.5% 20210 6.5% 20140 5.7% 20270 4.8% 20220 4.2% 20180 4.1% 20110 3.3% </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> for the new column. Your original assignment produced NaN because the index of <code>value_counts</code> (the unique category values) does not align with the DataFrame's row index; <code>map</code> looks each row's value up in that Series instead:</p> <pre><code>s = df1['A_group_of_materials'].value_counts(normalize=True) * 100 df1['binary_group_of_materials'] = df1['A_group_of_materials'].map(s) </code></pre> <p>If you need the values formatted as percentage strings:</p> <pre><code>df1['binary_group_of_materials'] = df1['A_group_of_materials'].map(s).round(1).astype(str) + '%' </code></pre>
python|pandas|encoding|categorical-data|feature-selection
0
6,950
57,793,718
Iterate through two columns and match values from different rows in pandas
<p>My pandas DataFrame looks like this:</p> <pre><code> ID NAME PARENT_ID 0 1 Fruits 0 1 2 Bananas 1 2 3 Apples 1 3 4 Peaches 1 4 5 Pears 1 </code></pre> <p>I would like to iterate through rows and map <code>PARENT_ID</code> to actual <code>NAME</code> values, creating a new column called <code>PARENT_NAME</code>, like this:</p> <pre><code> ID NAME PARENT_ID PARENT 0 1 Fruits 0 NaN 1 2 Bananas 1 Fruits 2 3 Apples 1 Fruits 3 4 Peaches 1 Fruits 4 5 Pears 1 Fruits </code></pre> <p>What would be the best way to achieve this?</p>
<p>As per @user3483203's answer:</p> <pre><code>df['PARENT'] = df['PARENT_ID'].map(df.set_index('ID')['NAME']) </code></pre>
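<p>A small self-contained sketch of what the mapping does, using the sample data from the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4, 5],
                   'NAME': ['Fruits', 'Bananas', 'Apples', 'Peaches', 'Pears'],
                   'PARENT_ID': [0, 1, 1, 1, 1]})

# build an ID -&gt; NAME lookup Series, then map each PARENT_ID through it
df['PARENT'] = df['PARENT_ID'].map(df.set_index('ID')['NAME'])
print(df)
#    ID     NAME  PARENT_ID  PARENT
# 0   1   Fruits          0     NaN
# 1   2  Bananas          1  Fruits
# 2   3   Apples          1  Fruits
# 3   4  Peaches          1  Fruits
# 4   5    Pears          1  Fruits
</code></pre>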
python|pandas
1
6,951
58,081,592
Pandas calculate average value of column for rows satisfying condition
<p>I have a dataframe containing information about users rating items during a period of time. It looks like this: <a href="https://i.stack.imgur.com/8rytV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8rytV.png" alt="enter image description here" /></a></p> <p>In the dataframe I have a number of rows with identical 'user_id' and 'business_id', which I retrieve using the following code:</p> <pre><code>mask = reviews_df.duplicated(subset=['user_id','business_id'], keep=False) dup = reviews_df[mask] </code></pre> <p>obtaining something like this:<br /> <a href="https://i.stack.imgur.com/G681x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G681x.png" alt="enter image description here" /></a></p> <p>I now need to remove all such duplicates from the original dataframe and substitute them with their average. Is there a fast and elegant way to achieve this? Thanks!</p>
<p>So, say you have a dataframe that looks like this:</p> <pre><code> review_id user_id business_id stars date 0 1 0 3 2.0 2019-01-01 1 2 1 3 5.0 2019-11-11 2 3 0 2 4.0 2019-10-22 3 4 3 4 3.0 2019-09-13 4 5 3 4 1.0 2019-02-14 5 6 0 2 5.0 2019-03-17 </code></pre> <p>Then the solution should be something like this:</p> <pre><code>df.loc[df.duplicated(['user_id', 'business_id'], keep=False)]\ .groupby(['user_id', 'business_id'])\ .apply(lambda x: x.stars - x.stars.mean()) </code></pre> <p>With the following result:</p> <pre><code>user_id business_id 0 2 2 -0.5 5 0.5 3 4 3 1.0 4 -1.0 </code></pre>
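<p>If the goal is literally to replace each duplicated <code>(user_id, business_id)</code> pair by a single row carrying the mean of <code>stars</code> (rather than the per-row deviations shown above), a sketch would be:</p> <pre><code>import pandas as pd

mask = df.duplicated(['user_id', 'business_id'], keep=False)
averaged = (df[mask]
            .groupby(['user_id', 'business_id'], as_index=False)['stars']
            .mean())
# columns not aggregated (review_id, date) are dropped from the averaged rows
result = pd.concat([df[~mask], averaged], ignore_index=True, sort=False)
</code></pre>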
python|pandas
0
6,952
34,065,412
np.vectorize giving me IndexError: invalid index to scalar variable
<p>trying out something simple and it's frustratingly not working:</p> <pre><code>def myfunc(a,b): return a+b[0] v = np.vectorize(myfunc, exclude=['b']) a = np.array([1,2,3]) b = [0] v(a,b) </code></pre> <p>This gives me "IndexError: invalid index to scalar variable." Upon printing b, it appears that the b taken in by the function is always 0, instead of [0]. Can I specify which arguments should be vectorized and which should remain constant?</p>
<p>When you use <code>excluded=['b']</code> the <em>keyword</em> parameter <code>b</code> is excluded. Therefore, you must call <code>v</code> with keyword arguments, e.g. <code>v(a=a, b=b)</code> instead of <code>v(a, b)</code>. </p> <p>If you wish to call <code>v</code> with positional arguments with the second positional argument excluded, then use</p> <pre><code>v = np.vectorize(myfunc) v.excluded.add(1) </code></pre> <hr> <p>For example,</p> <pre><code>import numpy as np def myfunc(a, b): return a+b[0] a = np.array([1,2,3]) b = [0, 1] v = np.vectorize(myfunc, excluded=['b']) print(v(a=a, b=b)) # [1 2 3] v = np.vectorize(myfunc) v.excluded.add(1) print(v(a, b)) # [1 2 3] </code></pre>
python|numpy
8
6,953
34,085,823
how to assign non contiguous labels of one numpy array to another numpy array and add accordingly?
<p>I have the following labels</p> <pre><code>&gt;&gt;&gt; lab array([3, 0, 3 ,3, 1, 1, 2 ,2, 3, 0, 1,4]) </code></pre> <p>I want to assign these labels to the rows of another numpy array, i.e.</p> <pre><code>&gt;&gt;&gt; arr array([[81, 1, 3, 87], # 3 [ 2, 0, 1, 0], # 0 [13, 6, 0, 0], # 3 [14, 0, 1, 30], # 3 [ 0, 0, 0, 0], # 1 [ 0, 0, 0, 0], # 1 [ 0, 0, 0, 0], # 2 [ 0, 0, 0, 0], # 2 [ 0, 0, 0, 0], # 3 [ 0, 0, 0, 0], # 0 [ 0, 0, 0, 0], # 1 [13, 2, 0, 11]]) # 4 </code></pre> <p>and add up all rows that share the same label.</p> <p>The output must be:</p> <pre><code>([[108, 7, 4,117]--3 [ 0, 0, 0, 0]--0 [ 0, 0, 0, 0]--1 [ 0, 0, 0, 0]--2 [13, 2, 0, 11]])--4 </code></pre>
<p>You could use <code>groupby</code> from <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code></a>:</p> <pre><code>import pandas as pd parr=pd.DataFrame(arr,index=lab) pd.groupby(parr,by=parr.index).sum() 0 1 2 3 0 2 0 1 0 1 0 0 0 0 2 0 0 0 0 3 108 7 4 117 4 13 2 0 11 </code></pre>
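<p>If you want to stay in pure NumPy, the same reduction can be sketched with <code>np.unique</code> plus the unbuffered scatter-add <code>np.add.at</code> (rows come out ordered by sorted label, as in the pandas result):</p> <pre><code>import numpy as np

uniq, inv = np.unique(lab, return_inverse=True)   # uniq is sorted: [0, 1, 2, 3, 4]
out = np.zeros((uniq.size, arr.shape[1]), dtype=arr.dtype)
np.add.at(out, inv, arr)  # rows of arr with the same label accumulate into one row of out
</code></pre>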
python-3.x|numpy
1
6,954
37,095,161
Number of rows changes even after `pandas.merge` with `left` option
<p>I am merging two data frames using <code>pandas.merge</code>. Even after specifying the <code>how='left'</code> option, I found that the number of rows of the merged data frame is larger than the original. Why does this happen?</p> <pre><code>panel = pd.read_csv(file1, encoding ='cp932') before_len = len(panel) prof_2000 = pd.read_csv(file2, encoding ='cp932').drop_duplicates() temp_2000 = pd.merge(panel, prof_2000, left_on='Candidate_u', right_on="name2", how="left") after_len = len(temp_2000) print(before_len, after_len) &gt; 12661 13915 </code></pre>
<p>This sounds like having more than one rows in <code>right</code> under <code>'name2'</code> that match the key you have set for the <code>left</code>. Using option <code>'how='left'</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="noreferrer"><code>pandas.DataFrame.merge()</code></a> only means that: </p> <blockquote> <ul> <li>left: use only keys from left frame</li> </ul> </blockquote> <p>However, the actual number of rows in the result object is not necessarily going to be the same as the number of rows in the <code>left</code> object.</p> <p>Example:</p> <pre><code>In [359]: df_1 Out[359]: A B 0 a AAA 1 b BBA 2 c CCF </code></pre> <p>and then another DF that looks like this (notice that there are more than one entry for your desired key on the left):</p> <pre><code>In [360]: df_3 Out[360]: key value 0 a 1 1 a 2 2 b 3 3 a 4 </code></pre> <p>If I merge these two on <code>left.A</code>, here's what happens:</p> <pre><code>In [361]: df_1.merge(df_3, how='left', left_on='A', right_on='key') Out[361]: A B key value 0 a AAA a 1.0 1 a AAA a 2.0 2 a AAA a 4.0 3 b BBA b 3.0 4 c CCF NaN NaN </code></pre> <p>This happened even though I merged with <code>how='left'</code> as you can see above, there were simply more than one rows to merge and as shown here the result <code>pd.DataFrame</code> has in fact more rows than the <code>pd.DataFrame</code> on the <code>left</code>.</p> <p>I hope this helps!</p>
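<p>If you want the left frame's row count preserved, a common remedy is to make the right-hand key unique before merging; a sketch using the question's names, keeping the first match per key:</p> <pre><code>temp_2000 = pd.merge(panel, prof_2000.drop_duplicates(subset='name2'),
                     left_on='Candidate_u', right_on='name2', how='left')
</code></pre>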
python|pandas
44
6,955
37,086,677
Pandas pivot table with mean
<p>I have a pandas data frame, df, that looks like this;</p> <pre><code>index New Old MAP Limit count 1 93 35 54 &gt; 18 1 2 163 93 116 &gt; 18 1 3 134 78 96 &gt; 18 1 4 117 81 93 &gt; 18 1 5 194 108 136 &gt; 18 1 6 125 57 79 &lt;= 18 1 7 66 39 48 &gt; 18 1 8 120 83 95 &gt; 18 1 9 150 98 115 &gt; 18 1 10 149 99 115 &gt; 18 1 11 148 85 106 &gt; 18 1 12 92 55 67 &lt;= 18 1 13 64 24 37 &gt; 18 1 14 84 53 63 &gt; 18 1 15 99 70 79 &gt; 18 1 </code></pre> <p>I need to create a pivot table that looks like this</p> <pre><code>Limit &lt;=18 &gt;18 New xx1 xx2 Old xx3 xx4 MAP xx5 xx6 </code></pre> <p>where values xx1, xx2, xx3, xx4, xx5, and xx6 are the mean of New, Old and Map for respective Limit. How can I achieve this? I tried the following without success.</p> <pre><code>table = df.pivot_table('count', index=['New', 'Old', 'MAP'], columns=['Limit'], aggfunc='mean') </code></pre>
<h3>Solution</h3> <pre><code>df.groupby('Limit')['New', 'Old', 'MAP'].mean().T Limit &lt;= 18 &gt; 18 New 108.5 121.615385 Old 56.0 72.769231 MAP 73.0 88.692308 </code></pre>
python|pandas
4
6,956
36,694,549
Access line by line to a numpy structured array
<p>I am trying to access a structured array row by row by iterating over the values of one of its fields, but even though the iteration itself works, the selected slice of the array never changes. Here is my small working example:</p> <pre><code>import numpy as np dt=np.dtype([('name',np.unicode,80),('x',np.float),('y',np.float)]) a=np.array( [('a',0.,0.),('b',0.,0.),('c',0.,0.) ],dtype=dt) for n in a['name']: print n,a['name'==n] </code></pre> <p>which gives me:</p> <pre><code>a (u'a', 0.0, 0.0) b (u'a', 0.0, 0.0) c (u'a', 0.0, 0.0) </code></pre> <p>At each iteration I always get the same slice of the array... strange?</p>
<p>The last line is not right: <code>'name'==n</code> is evaluated first as an ordinary Python comparison, so the index is a plain <code>True</code> or <code>False</code> (here always <code>False</code>, i.e. index 0, which is why the first row comes back every time) rather than a lookup against the <code>name</code> column. Compare the column itself to get a boolean mask. Try this:</p> <pre><code>for n in a['name']: print n,a[a['name']==n] </code></pre>
python|arrays|numpy|structured-array
4
6,957
36,931,732
Creating a numpy array fails when I try to create from an array of QString
<p>If the array has a size of 2x2 or greater all is well, but if the dimension of the row is 1, for example 1x2, numpy does something I did not expect.</p> <p>How can I solve this?</p> <pre><code># TEST 1 OK myarray = np.array([[QString('hello'), QString('world')], [QString('hello'), QString('moon')]], dtype=object) print myarray print myarray.shape #[[PyQt4.QtCore.QString(u'hello') PyQt4.QtCore.QString(u'world')] # [PyQt4.QtCore.QString(u'hello') PyQt4.QtCore.QString(u'moon')]] #(2, 2) # TEST 2 OK myarray = np.array([['hello'], ['world']], dtype=object) print myarray print myarray.shape #[['hello'] # ['world']] #(2, 1) # TEST 3 FAIL myarray = np.array([[QString('hello'), QString('world')]], dtype=object) print myarray print myarray.shape #[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[PyQt4.QtCore.QString(u'h')]]]]]]]]]]]]]]]]]]]]]]]]]]]]] #.. #[[[[[[[[[[[[[[[[[[[[[[[[[[[[[PyQt4.QtCore.QString(u'e')]]]]]]]]]]]]]]]]]]]]]]]]]]]]] # etc... #(1, 2, 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) </code></pre>
<p>Try different length strings: </p> <pre><code>np.array([[QString('hello'), QString('moon')]], dtype=object) </code></pre> <p>Or use the create-and-fill approach to making an object array:</p> <pre><code>A = np.empty((1,2), dtype=object) A[:] = [QString('hello'), QString('moon')] </code></pre> <p>I'm not familiar with these objects, but in other cases where we try to build object arrays from lists, it is tricky if the lists are the same length. If <code>QString</code> is iterable, with a <code>.__len__</code>, something similar may be happening.</p> <p>I'm guessing your first example works because one <code>QString</code> is shorter than the others, not because it is 2x2.</p> <p>This recent question about making an object array from a custom dictionary class may be relevant: <a href="https://stackoverflow.com/questions/36663919/override-a-dict-with-numpy-support">Override a dict with numpy support</a></p>
python|numpy|pyqt4
2
6,958
54,811,207
Using Distributed Tensorflow on a Keras model on GCP Dataproc
<p>I am completely new to cloud computing on GCP Dataproc. I installed TonY (Tensorflow on Yarn) when I was creating my cluster in order to be able to run tensorflow on it. </p> <p>I am stuck on the part where I create the tf.train.ClusterSpec portion in order to run distributed tensorflow on my keras model. It seems that, as long as I create a clusterspec and then create a server and a session using tf.train.Server and tf.Session, I can just set the session for my keras model using K.set_session(created session). I just wanted to make sure this is correct. What are the worker and ps nodes, and how do they map to the master and worker nodes in the cluster that I created in GCP Dataproc? Also, when creating a session, is the parameter passed to tf.Session just server.target?</p> <pre><code># Keras Core from keras.layers.convolutional import MaxPooling2D, Convolution2D, AveragePooling2D from keras.layers import Input, Dropout, Dense, Flatten, Activation from keras.layers.normalization import BatchNormalization from keras.layers.merge import concatenate from keras import regularizers from keras import initializers from keras.models import Model # Backend from keras import backend as K # Utils from keras.utils.layer_utils import convert_all_kernels_in_model from keras.utils.data_utils import get_file from keras.preprocessing.image import ImageDataGenerator from keras import optimizers from keras.preprocessing.image import img_to_array, load_img from keras import backend as K import numpy as np import os import inspect from tqdm import tqdm import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from keras.callbacks import ModelCheckpoint import tensorflow as tf from PIL import Image #Is worker going to reference to my worker nodes in my cluster and ps references to my master node in my cluster? #Do I put the external addresses of the nodes into their respective lists? cluster = tf.train.ClusterSpec({"worker": ["35.236.62.93:2222", "35.236.30.154:2222", "35.235.127.146:2222"], "ps": ["5.235.95.74:2222"]}) #Is my job name correct as well? server = tf.train.Server(cluster, job_name="ps") #Does tf.Session take in server.target as its parameter? sess = tf.Session(server.target) K.set_session(sess) </code></pre>
<p>In order to access your Cluster configuration, please use <code>CLUSTER_SPEC</code> from your TensorFlow code. You can follow <a href="https://github.com/linkedin/TonY/blob/master/tony-examples/mnist-tensorflow/mnist_distributed.py#L191" rel="nofollow noreferrer">this</a> working example:</p> <pre><code> cluster_spec_str = os.environ["CLUSTER_SPEC"] cluster_spec = json.loads(cluster_spec_str) ps_hosts = cluster_spec['ps'] worker_hosts = cluster_spec['worker'] </code></pre> <p>By launching a TonY job with Cloud Dataproc, TonY sets the <code>CLUSTER_SPEC</code> environment variable inside your YARN containers, which you can access directly as indicated above.</p> <p>You can also access the Job name using <code>JOB_NAME</code> environment variable:</p> <pre><code> job_name = os.environ["JOB_NAME"] </code></pre> <p>You should be able to use the TonY MNIST example as <a href="https://github.com/linkedin/TonY/tree/master/tony-examples/tony-in-gcp" rel="nofollow noreferrer">reference</a>. Please let us know if this works for you or not.</p> <p>In Cloud Dataproc we have 2 concepts:</p> <ul> <li>Master</li> <li>Workers</li> </ul> <p>In the Hadoop world, these refers to Resource Manager (Master) and Node Manager (Worker) respectively. In this example we have a Cloud Dataproc cluster of 1 master and 4 workers:</p> <p><a href="https://i.stack.imgur.com/llVNc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/llVNc.png" alt="enter image description here"></a></p> <p>This shows all VMs in the cluster:</p> <p><a href="https://i.stack.imgur.com/4C39R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4C39R.png" alt="enter image description here"></a></p> <p>From TensorFlow perspective you can do Distributed Machine Learning using 3 main strategies:</p> <ul> <li><strong>MirroredStrategy</strong>: Multiple GPUs, Single Node</li> <li><strong>CollectiveAllReduceStrategy</strong>: Multiple GPUs, Multiple Nodes All-Reduce</li> <li><strong>ParameterServerStrategy</strong>: Multiple GPUs, Multiple Nodes, Parameter+Worker Nodes</li> </ul> <p>In your case, when you launch a TonY job you seem to refer to the latter, hence you will define a .xml file in this case tony.xml where you define the number of parameter servers and workers.</p> <pre><code>&lt;configuration&gt; &lt;property&gt; &lt;name&gt;tony.application.security.enabled&lt;/name&gt; &lt;value&gt;false&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.worker.instances&lt;/name&gt; &lt;value&gt;${worker_instances}&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.worker.memory&lt;/name&gt; &lt;value&gt;${worker_memory}&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.ps.instances&lt;/name&gt; &lt;value&gt;${ps_instances}&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.ps.memory&lt;/name&gt; &lt;value&gt;${ps_memory}&lt;/value&gt; &lt;/property&gt; &lt;/configuration&gt; </code></pre> <p>When TonY client sends this request to Cloud Dataproc, Dataproc, by default will allocate containers in any of the <strong>Dataproc workers</strong> (Dataproc master is not used for processing) . 
Example:</p> <pre><code>&lt;configuration&gt; &lt;property&gt; &lt;name&gt;tony.application.security.enabled&lt;/name&gt; &lt;value&gt;false&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.worker.instances&lt;/name&gt; &lt;value&gt;2&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.worker.memory&lt;/name&gt; &lt;value&gt;4g&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.ps.instances&lt;/name&gt; &lt;value&gt;1&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;tony.ps.memory&lt;/name&gt; &lt;value&gt;2g&lt;/value&gt; &lt;/property&gt; &lt;/configuration&gt; </code></pre> <p>This will request 4 containers:</p> <ul> <li>1 Application master </li> <li>1 Parameter Server </li> <li>2 Worker Servers</li> </ul> <p>The allocation depends on Resource Manager scheduler. By default Dataproc uses <code>DefaultResourceCalculator</code> and will try to find resources in any of the Dataproc cluster active workers.</p> <p>Please take a look at the current sample for MNIST and Cloud DataProc:</p> <p><a href="https://github.com/linkedin/TonY/tree/master/tony-examples/tony-in-gcp" rel="nofollow noreferrer">https://github.com/linkedin/TonY/tree/master/tony-examples/tony-in-gcp</a></p>
tensorflow|keras|google-cloud-platform|google-cloud-dataproc|tony
3
6,959
49,738,489
Python: filter multiple rows
<p>I am using this query script to get data from a REST API.</p> <p><a href="https://github.com/cubewise-code/TM1py-samples/blob/master/Query%20Data/cube%20data%20into%20pandas%20dataframe.py" rel="nofollow noreferrer">Script</a></p> <p>After doing this, I got the following data:</p> <p><a href="https://i.stack.imgur.com/zn22u.jpg" rel="nofollow noreferrer">Dataframe</a></p> <p>I am new to Python, and I have some difficulty understanding how to select columns.</p> <p>I tried the following code, but I get this error:</p> <pre><code>df1 = df[(df['Meses'] != 'Total') &amp; (df['Orcado x Realizado'] == 'Realizado')] KeyError: 'Meses' </code></pre> <p><a href="https://i.stack.imgur.com/SK8mv.jpg" rel="nofollow noreferrer">Data problem</a></p>
<p>You have 2 options to filter a MultiIndex dataframe:</p> <p><strong>1. Elevate index to columns and filter by columns</strong></p> <pre><code>df = df.reset_index() df1 = df[(df['Meses'] != 'Total') &amp; (df['Orcado x Realizado'] == 'Realizado')] </code></pre> <p><strong>2. Filter by index directly</strong></p> <pre><code>df1 = df[(df.index.get_level_values('Meses') != 'Total') &amp; (df.index.get_level_values('Orcado x Realizado') == 'Realizado')] </code></pre>
python|pandas|numpy|data-manipulation
1
6,960
28,099,881
How can I install matplotlib for my AWS Elastic Beanstalk application?
<p>I'm having a hell of a time deploying matplotlib on AWS Elastic Beanstalk. <a href="https://stackoverflow.com/a/15881797/656912">I gather</a> that my issue comes from some dependencies and the way that EB deploys packages installed with PIP, and have attempted to follow the <a href="https://stackoverflow.com/a/15881797/656912">instructions here on SO</a> for resolving the issue.</p> <p>I first tried incrementally deploying, as suggested in the linked answer, by adding pieces of the matplotlib package stack to my <code>requirements.txt</code> file in stages. But this takes <em>forever</em> (for each stage) and is prone to failure and timing out (which seems to leave build directories behind that stall subsequent package installations).</p> <p>So the simple solution mentioned off-handedly at the end of the answer appeals to me: just <code>eb ssh</code>, activate the virtialenv with </p> <pre><code>source /opt/python/run/venv/bin/activate </code></pre> <p>and <code>pip install</code> packages manually. But I can't get this to work either. First I'm often confronted with left-beind build directories (as mentioned above)</p> <pre><code>pip can't proceed with requirement 'xxxx' due to a pre-existing build directory. location: /opt/python/run/venv/build/xxxx This is likely due to a previous installation that failed. pip is being responsible and not assuming it can delete this. Please delete it and try again. </code></pre> <p>But even after removing these, I consistently get </p> <pre><code>Exception: Traceback (most recent call last): File "/opt/python/run/venv/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/opt/python/run/venv/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files do_download, File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url self.session, File "/opt/python/run/venv/lib/python2.7/site-packages/pip/download.py", line 582, in unpack_http_url unpack_file(temp_location, location, content_type, link) File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 625, in unpack_file untar_file(filename, location) File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 533, in untar_file os.makedirs(location) File "/opt/python/run/venv/lib64/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/opt/python/run/venv/build/xxxx' </code></pre> <p>in response to <code>pip install xxxx</code> (and <code>sudo pip</code> fails with <code>sudo: pip: command not found</code>).</p> <p>What can I do to get this working on AWS-EB? In particular, what do I need to do to get the simple SSH+PIP approach working; or is there some other better — simpler! 
— approach I should try.</p> <hr> <p>FWIW, I have a <code>.ebextensions/software.config</code> with</p> <pre><code>packages: yum: gcc-c++: [] gcc-gfortran: [] python-devel: [] atlas-sse3-devel: [] lapack-devel: [] libpng-devel: [] freetype-devel: [] zlib-devel: [] </code></pre> <p>and a <code>requirements.txt</code> that ends with</p> <pre><code>pytz==2014.10 pyparsing==2.0.3 python-dateutil==2.4.0 nose==1.3.4 six&gt;=1.8.0 mock==1.0.1 numpy==1.9.1 matplotlib==1.4.2 </code></pre> <p>After about 4 hours, I've gotten far as numpy (as reported by <code>pip list</code> in the EB virtualenv).</p> <p>And (in case it matters) the user who is SSHing is part in a group with the policy </p> <pre><code>{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticbeanstalk:*", "ec2:*", "elasticloadbalancing:*", "autoscaling:*", "cloudwatch:*", "s3:*", "sns:*", "cloudformation:*", "rds:*", "sqs:*", "iam:PassRole" ], "Resource": "*" } ] } </code></pre>
<p>I have used many approaches to build and deploy numpy/scipy/matplotlib, on Windows as well as Linux systems. I have used system-provided package managers (aptitude, rpm), 3rd-party package managers (pypm), Python package managers (easy_install, pip), source releases, used different build environments/tools (GCC, but also Intel MKL, OpenMP). While doing so, I have run into many many quite annoying situations, but have also learned a lot about the pros and cons of each approach.</p> <p>I have no experience with Elastic Beanstalk (EB), but I have experience with EC2. I see that you can SSH into an instance and poke around. So, what I suggest further below is based on</p> <ul> <li>above-stated experiences and on</li> <li>the more or less obvious boundary conditions regarding Beanstalk and on</li> <li>your application scenario, described in another question here on SO and on</li> <li>the fact that you just want to get things running, quickly</li> </ul> <p><strong>My suggestion:</strong> start off with not building these things yourself. Do not use pip. If possible, try to use the package manager of the Linux distribution in place and let it handle the installation of <em>everything required</em> for you, with a single command (e.g. <code>sudo apt-get install python-matplotlib</code>).</p> <p>Disadvantages:</p> <ul> <li>possibly old package versions, depending on the Linux distro in use</li> <li>non-optimized builds (e.g. not built against e.g. Intel MKL or not leveraging OpenMP features or not using special instruction sets)</li> </ul> <p>Advantages:</p> <ul> <li>it quickly downloads, because packages are most likely cached near your machine</li> <li>it quickly installs (these packages are pre-built, no compilation involved)</li> <li><strong>it just works</strong></li> </ul> <p>So, I hope you can just use aptitude or rpm or whatever on these machines and inherit the <em>great</em> work that the distribution package maintainers do for you, behind the scenes.</p> <p>Once you are confident in your application and identified some bottleneck or issue, you might have reason to use a <em>newer</em> version of numpy/matplotlib/... or you might have reason to have a <em>faster</em> version of these, by creating an optimized build.</p> <h2>Edit: EB-related details of outlined approach</h2> <p>In the meantime we have learned that EB by default runs <a href="http://en.wikipedia.org/wiki/Amazon_Machine_Image#Amazon_Linux_AMI" rel="nofollow">Amazon Linux</a> which is based on Red Hat Enterprise Linux. Likewise, it uses <code>yum</code> as package manager and packages are in RPM format.</p> <p>Amazon provides documentation about available packages. In Amazon Linux 2014.09, these packages are available: <a href="http://aws.amazon.com/de/amazon-linux-ami/2014.09-packages/" rel="nofollow">http://aws.amazon.com/de/amazon-linux-ami/2014.09-packages/</a></p> <p>In this list we find </p> <ul> <li>numpy-1.7.2</li> <li>python-matplotlib-0.99.1.2</li> </ul> <p>This version of matplotlib is very old, according to the <a href="http://matplotlib.org/_static/CHANGELOG" rel="nofollow">changelog</a> it is from September 2009: "2009-09-21 Tagged for release 0.99.1".</p> <p>I did not anticipate it to be <em>so</em> old, but still, it might be sufficient for your needs. 
So we proceed with our plan (but I'd understand if that's a blocker).</p> <p>Now, we <a href="http://www.zezuladp.com/2014/10/scaling-numpy-and-scipy-with-django-and.html" rel="nofollow">have learned</a> that system Python and EB Python are isolated from each other. That does not mean that EB Python cannot access system Python site packages. We just need to tell it so. A simple and clean method is to set up a proper directory structure with the packages that should be accessible to EB Python, and to communicate this directory to EB Python via <code>sys.path</code>.</p> <p>Clearly, we need to customize the bootstrapping phase of EB containers. The available tools are documented here: <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html" rel="nofollow">http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html</a></p> <p>Obviously, we want to make use of the <code>packages</code> approach, and tell EB to install the <code>numpy</code> and <code>python-matplotlib</code> packages via yum. So the corresponding config file section should contain:</p> <pre><code>packages:
  yum:
    numpy: []
    python-matplotlib: []
</code></pre> <p>Explicitly mentioning <code>numpy</code> might not be necessary, as it is likely a dependency of python-matplotlib.</p> <p>Also, we need to make use of the <code>commands</code> section:</p> <blockquote> <p>You can use the commands key to execute commands on the EC2 instance. The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted.</p> </blockquote> <p>The following three commands create the above-mentioned directory, and set up symbolic links to the numpy/mpl installation paths (these paths should hopefully be available by the time these commands are executed):</p> <pre><code>commands:
  00-create-dir:
    command: "mkdir -p /opt/py26-selected-site-packages"
  01-link-numpy:
    command: "ln -s /usr/lib64/python2.6/site-packages/numpy /opt/py26-selected-site-packages/numpy"
  02-link-mpl:
    command: "ln -s /usr/lib64/python2.6/site-packages/matplotlib /opt/py26-selected-site-packages/matplotlib"
</code></pre> <p>Two uncertainties: first, the AWS docs do not clarify that <code>packages</code> are processed before <code>commands</code> are executed. You have to try; if it does not work, use <code>container_commands</code>. Secondly, it is just an educated guess that <code>/usr/lib64/python2.6/site-packages/matplotlib</code> is available after installing python-matplotlib. It should be installed to this place, but it may end up somewhere else. Needs to be tested. Numpy should end up where specified, as inferred from <a href="http://www.zezuladp.com/2014/10/scaling-numpy-and-scipy-with-django-and.html" rel="nofollow">this</a> article.</p> <p>[UPDATE FROM SEB] AWS documentation says "The cfn-init helper script processes these configuration sections in the following order: packages, groups, users, sources, files, commands, and then services." <a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html" rel="nofollow">http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html</a></p> <p>So, your approach is safe [/UPDATE]</p> <p>The crucial step, as pointed out in the comments to this answer, is to tell your Python app where to look for packages. Direct modification of <code>sys.path</code> <em>before</em> attempting to import is a reliable method to take control of this. 
The following code adds our special directory to the selection of directories in which Python looks for packages, and then attempts to import matplotlib:</p> <pre><code>import sys

sys.path.append("/opt/py26-selected-site-packages")
from matplotlib import pyplot
</code></pre> <p>The order in <code>sys.path</code> defines priorities, so in case there is any other matplotlib or numpy package available in one of the other directories, it might be a better idea to</p> <pre><code>sys.path.insert(0, "/opt/py26-selected-site-packages")
</code></pre> <p>However, this should not be necessary if our whole approach was well thought-through.</p>
numpy|amazon-web-services|matplotlib|pip|amazon-elastic-beanstalk
5
6,961
28,111,791
Conditional selection of data in a pandas DataFrame
<p>I have two columns in my pandas DataFrame.</p> <pre><code> A B 0 1 5 1 2 3 2 3 2 3 4 0 4 5 1 </code></pre> <p>I need the value in A where the value of B is minimum. In the above case my answer would be 4 since the minimum B value is 0.</p> <p>Can anyone help me with it?</p>
<p>To find the minimum in column B, you can use <code>df.B.min()</code>. For your DataFrame this returns <code>0</code>.</p> <p>To find values at particular locations in a DataFrame, you can use <code>loc</code>:</p> <pre><code>&gt;&gt;&gt; df.loc[(df.B == df.B.min()), 'A'] 3 4 Name: A, dtype: int64 </code></pre> <p>So here, <code>loc</code> picks out all of the rows where column B is equal to its minimum value (<code>df.B == df.B.min()</code>) and selects the corresponding values in column A.</p> <p>This method returns all values in A corresponding to the minimum value in B. If you only need to find one of the values, the better way is to use <code>idxmin</code> as @aus_lacy suggests.</p>
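<p>For completeness, a minimal sketch of the <code>idxmin</code> route mentioned above (same <code>df</code> as in the question; note it returns only the first matching row):</p> <pre><code>&gt;&gt;&gt; df.loc[df.B.idxmin(), 'A']
4
</code></pre>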
python|pandas|dataframe|min
5
6,962
35,051,651
how to get a random sample in a multiindex pandas dataframe?
<p>I have a dataframe that is indexed according to the following variables: NAME - date. Name is some sort of bizarre ID, and date is.. a date.</p> <p>The data is very large and I would like to inspect the data I have for several random choices of NAME. </p> <p>That is, </p> <ol> <li>pick a random NAME among the possible ones</li> <li>inspect the data for this NAME, ordered by time.</li> </ol> <p>I don't know how to do that. I see that we can use <code>get_level_values</code>, but I don't have a specific NAME in mind, I just want to draw random samples many times.</p> <p>Any help appreciated! Thanks!</p>
<pre><code>import pandas as pd
import numpy as np
import random
import string

df = pd.DataFrame(data={'NAME': [''.join(random.choice(string.ascii_uppercase +
                                                       string.ascii_lowercase +
                                                       string.digits)
                                         for _ in range(17)) for _ in range(10)],
                        'Date': pd.date_range('1/01/2016', periods=10),
                        'Whatever': np.random.randint(20, 50, 10)},
                  columns=['NAME', 'Date', 'Whatever']).set_index(['NAME', 'Date'])

# pick one NAME at random; on this MultiIndex, get_loc returns a boolean mask
# selecting every row that belongs to that NAME
mask = df.index.get_loc(np.random.choice(df.index.levels[0]))
random_df = df[mask].sort_index(level=1)  # order the sample by Date
print(random_df)
</code></pre> <p>Returns a <code>df</code> that looks like this:</p> <pre><code>                              Whatever
NAME              Date                
xg71zOEQVOEfCZ2ne 2016-01-01        35
qLCXuEerCXi6gmF1Y 2016-01-02        26
0vDe7x8TIb5FRv7hV 2016-01-03        40
Ddc6FGKBdtcLqT53O 2016-01-04        31
IYcrKG9pjt7mHH3qn 2016-01-05        44
lAWObNTC8yXPMY3v5 2016-01-06        49
k90QWdPc5qFSCFi1c 2016-01-07        22
BWQoHo8lUyEwK9Nuf 2016-01-08        42
Xt0bxUerTan0i1eGw 2016-01-09        22
tc7PYCzpyGmYLbnxu 2016-01-10        46
</code></pre> <p>A <code>random_df</code> that looks like this:</p> <pre><code>                              Whatever
NAME              Date                
IYcrKG9pjt7mHH3qn 2016-01-05        44
</code></pre>
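<p>Since the index levels here are named <code>NAME</code> and <code>Date</code>, an arguably more direct way to take the same cross-section is <code>xs</code>:</p> <pre><code>name = np.random.choice(df.index.levels[0])
random_df = df.xs(name, level='NAME', drop_level=False).sort_index()
</code></pre>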
python|pandas|random
2
6,963
31,148,758
python pandas create correlation matrix from price dataframe
<p>I have a dataframe populated with stock price returns (indexed by Date). Could someone let me know how I can get a correlation matrix from this dataframe.</p> <p>The dataframe would look like:</p> <pre><code> BBG.XSTO BBG.XLON BBG.XETR BBG.XHEL Date 06/02/2014 0.001418 0.00708 0.019437 0.025848 07/02/2014 0.021329 0.016221 0.006784 0.032683 10/02/2014 0.005299 0.005177 0.007391 0.005111 11/02/2014 -0.006497 0.021656 -0.004109 0.001855 12/02/2014 -0.003844 0.019885 -0.002457 0.004617 13/02/2014 -0.004795 -0.001831 -0.010602 0.00917 14/02/2014 0.003276 0.010801 -0.000341 0.009992 17/02/2014 0.00206 0.003307 -0.002336 0.009443 18/02/2014 -0.010467 0.004102 0.046172 0.002236 19/02/2014 0.002929 0.003037 -0.009944 0.015511 20/02/2014 -0.003969 -0.015961 0.015342 0.003952 21/02/2014 0.004776 -0.001107 0.010403 0.005243 24/02/2014 0.015125 0.025254 0.018505 0.011263 25/02/2014 -0.001546 0.000742 0.004307 0.019623 26/02/2014 -0.000478 -0.000677 0.006721 0.003797 27/02/2014 -0.009898 0.002869 0.038103 0.010052 28/02/2014 0.005288 0.004927 -0.01254 -0.005852 03/03/2014 -0.035165 -0.023916 -0.022374 -0.01563 04/03/2014 0.020213 0.017346 0.016266 0.040465 05/03/2014 0.004067 0.002742 0.010699 0.005709 06/03/2014 -0.000648 -0.012987 0.013513 -0.008984 07/03/2014 -0.008855 -0.015162 -0.003511 -0.019051 10/03/2014 0.003684 0.002893 0.023136 0.004172 11/03/2014 -0.003214 0.020036 -0.013234 -0.004588 12/03/2014 -0.005376 -0.015244 -0.015922 -0.002511 13/03/2014 -0.016978 0.000689 -0.022335 -0.005889 </code></pre> <p>and hopefully the correlation matrix would look like:</p> <pre><code> BBG.XSTO BBG.XLON BBG.XETR BBG.XHEL BBG.XSTO 1 0.548504179 0.315191057 0.69486495 BBG.XLON 0.548504179 1 0.314246645 0.56176159 BBG.XETR 0.315191057 0.314246645 1 0.414599864 BBG.XHEL 0.69486495 0.56176159 0.414599864 1 </code></pre> <p>Thanks</p>
<p>Assuming your dataframe is named <code>df</code>.</p> <pre><code>df.corr() Out[106]: BBG.XSTO BBG.XLON BBG.XETR BBG.XHEL BBG.XSTO 1.0000 0.5801 0.3057 0.7185 BBG.XLON 0.5801 1.0000 0.1709 0.5366 BBG.XETR 0.3057 0.1709 1.0000 0.3340 BBG.XHEL 0.7185 0.5366 0.3340 1.0000 </code></pre>
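<p>If you are starting from a file, just make sure the Date column becomes the index first, so that only the return columns enter the correlation. A sketch with a hypothetical filename:</p> <pre><code>df = pd.read_csv('returns.csv', index_col='Date')
print(df.corr())
</code></pre>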
python|pandas
1
6,964
31,050,714
pandas 'as_index' function doesn't work as expected
<p>This is a minimum reproducible example of my original dataframe called 'calls':</p> <pre><code> phone_number call_outcome agent call_number 0 83473306392 NOT INTERESTED orange 0 1 762850680150 CALL BACK LATER orange 1 2 476309275079 NOT INTERESTED orange 2 3 899921761538 CALL BACK LATER red 3 4 906739234066 CALL BACK LATER orange 4 </code></pre> <p>Writing this pandas command...</p> <pre><code>most_calls = calls.groupby('agent') \ .count().sort('call_number', ascending=False) </code></pre> <p>Returns this...</p> <pre><code> phone_number call_outcome call_number agent orange 2234 2234 2234 red 1478 1478 1478 black 750 750 750 green 339 339 339 blue 199 199 199 </code></pre> <p>Which is correct, but for the fact that I want 'agent' to be a variable and not indexed. </p> <p>I've used the <code>as_index=False</code> function on numerous occasions and am familiar with specifying <code>axis=1</code>. However in this instance it doesn't matter where or how I incorporate these parameters, every permutation returns an error.</p> <p>These are some examples I've tried and the corresponding errors:</p> <pre><code>most_calls = calls.groupby('agent', as_index=False) \ .count().sort('call_number', ascending=False) ValueError: invalid literal for long() with base 10: 'black' </code></pre> <p>And</p> <pre><code>most_calls = calls.groupby('agent', as_index=False, axis=1) \ .count().sort('call_number', ascending=False) ValueError: as_index=False only valid for axis=0 </code></pre>
<p>I believe that, irrespective of the <code>groupby</code> operation you've done, you just need to call <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> to say that the index column should just be a regular column.</p> <p>Starting with a mockup of your data:</p> <pre><code>import pandas as pd calls = pd.DataFrame({ 'agent': ['orange', 'red'], 'phone_number': [2234, 1478], 'call_outcome': [2234, 1478], }) &gt;&gt; calls agent call_outcome phone_number 0 orange 2234 2234 1 red 1478 1478 </code></pre> <p>here is the operation you did with <code>reset_index()</code> appended:</p> <pre><code>&gt;&gt; calls.groupby('agent').count().sort('phone_number', ascending=False).reset_index() agent call_outcome phone_number 0 orange 1 1 1 red 1 1 </code></pre>
python|pandas
4
6,965
31,030,096
Concatenating url pages as a single Data Frame
<p>I'm trying to download historic weather data for a given location. I have altered an example given at <a href="http://flowingdata.com/2007/07/09/grabbing-weather-underground-data-with-beautifulsoup/" rel="nofollow">flowingdata</a>, but I'm stuck at the last step - how to concatenate multiple <code>DataFrames</code>.</p> <p><strong>MWE:</strong></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

frames = pd.DataFrame(columns=['TimeEET', 'TemperatureC', 'Dew PointC', 'Humidity','Sea Level PressurehPa',
                               'VisibilityKm', 'Wind Direction', 'Wind SpeedKm/h','Gust SpeedKm/h','Precipitationmm',
                               'Events','Conditions', 'WindDirDegrees', 'DateUTC&lt;br /&gt;'])

# Iterate through year, month, and day
for y in range(2006, 2007):
    for m in range(1, 13):
        for d in range(1, 32):

            # Check if leap year
            if y%400 == 0:
                leap = True
            elif y%100 == 0:
                leap = False
            elif y%4 == 0:
                leap = True
            else:
                leap = False

            # Check if already gone through month
            if (m == 2 and leap and d &gt; 29):
                continue
            elif (m == 2 and d &gt; 28):
                continue
            elif (m in [4, 6, 9, 10] and d &gt; 30):
                continue

            # Open wunderground.com url
            url = "http://www.wunderground.com/history/airport/EFHK/"+str(y)+ "/" + str(m) + "/" + str(d) + "/DailyHistory.html?req_city=Vantaa&amp;req_state=&amp;req_statename=Finlandia&amp;reqdb.zip=00000&amp;reqdb.magic=4&amp;reqdb.wmo=02974&amp;format=1"

            df=pd.read_csv(url, sep=',',skiprows=2)
            frames=pd.concat(df)
</code></pre> <p>This gives an error:</p> <pre><code> first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"
</code></pre> <p>The desired output would be to have one DataFrame with all days, months and years.</p>
<p>You should declare a list outside your loop and append to this then outside the loop you want to concatenate all the dfs into a single df:</p> <pre><code>import pandas as pd frames = pd.DataFrame(columns=['TimeEET', 'TemperatureC', 'Dew PointC', 'Humidity','Sea Level PressurehPa', 'VisibilityKm', 'Wind Direction', 'Wind SpeedKm/h','Gust SpeedKm/h','Precipitationmm', 'Events','Conditions', 'WindDirDegrees', 'DateUTC&lt;br /&gt;']) # Iterate through year, month, and day df_list = [] for y in range(2006, 2007): for m in range(1, 13): for d in range(1, 32): # Check if leap year if y%400 == 0: leap = True elif y%100 == 0: leap = False elif y%4 == 0: leap = True else: leap = False #Check if already gone through month if (m == 2 and leap and d &gt; 29): continue elif (m == 2 and d &gt; 28): continue elif (m in [4, 6, 9, 10] and d &gt; 30): continue # Open wunderground.com url url = "http://www.wunderground.com/history/airport/EFHK/"+str(y)+ "/" + str(m) + "/" + str(d) + "/DailyHistory.html?req_city=Vantaa&amp;req_state=&amp;req_statename=Finlandia&amp;reqdb.zip=00000&amp;reqdb.magic=4&amp;reqdb.wmo=02974&amp;format=1" df=pd.read_csv(url, sep=',',skiprows=2) df_list.append(df) frames=pd.concat(df_list, ignore_index=True) </code></pre>
python|pandas
3
6,966
67,356,885
How do I forward propagate in just subset of Dataframe columns with inplace=True?
<p>I want to fill missing values with fillna() and &quot;inplace=True&quot;. How do I forward propagate values in just two columns of a Dataframe that has more than two columns? Thanks</p>
<p>I don't believe there is a way to forward propagate a column subset with <code>method='ffill'</code> and <code>inplace=True</code>.</p> <p>You'll have to use assignment, e.g. for columns <code>A</code> and <code>D</code>:</p> <pre class="lang-py prettyprint-override"><code>df[['A','D']] = df[['A','D']].fillna(method='ffill') </code></pre>
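<p>Equivalently, the same assignment can be written with the <code>ffill</code> method:</p> <pre class="lang-py prettyprint-override"><code>df[['A','D']] = df[['A','D']].ffill()
</code></pre>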
python|pandas|missing-data
1
6,967
67,279,026
How to upload pandas, sqlalchemy package in lambda to avoid error "Unable to import module 'lambda_function': No module named 'importlib_metadata'"?
<p>I'm trying to upload a deployment package to my AWS lambda function following the article <a href="https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e" rel="nofollow noreferrer">https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e</a>. My final zip file is as follows: <a href="https://drive.google.com/file/d/1NLjvf_-Ks50E8z53DJezHtx7-ZRmwwBM/view" rel="nofollow noreferrer">https://drive.google.com/file/d/1NLjvf_-Ks50E8z53DJezHtx7-ZRmwwBM/view</a> but when I run my lambda function I get the error <code>Unable to import module 'lambda_function': No module named 'importlib_metadata' </code> <a href="https://i.stack.imgur.com/jSUE5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jSUE5.png" alt="errorimg" /></a></p> <p>My handler is named <code>lambda_function.lambda_handler</code> which is the file name and the function to run. I also tried uploading these zip files as layers excluding the <code>lambda_function.py</code> and get: <a href="https://i.stack.imgur.com/7uJEN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7uJEN.png" alt="layerErr" /></a> What am I doing wrong?</p> <p>EDIT: I tried using <code>zip/lambda_function.lambda_handler</code> as my handler still getting <code>Unable to import module 'zip/lambda_function': No module named 'zip/lambda_function' </code></p>
<p>There is a <a href="https://github.com/keithrozario/Klayers" rel="nofollow noreferrer">third party github repo</a> with public layers, including pandas. You don't have to do anything to use it, except add the layer ARN to your function. The ARN <a href="https://github.com/keithrozario/Klayers/tree/master/deployments/python3.8/arns" rel="nofollow noreferrer">depends on your region</a>, so you have to choose your region. For example, for <code>us-east-1</code> the pandas layer for python 3.8 is:</p> <pre><code>arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pandas:31
</code></pre>
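<p>As a sketch of attaching it via the AWS CLI (the function name is a placeholder, and note that <code>--layers</code> replaces the function's existing layer list):</p> <pre><code>aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-pandas:31
</code></pre>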
python|pandas|amazon-web-services|lambda|aws-lambda
2
6,968
67,530,273
Interactive error when plotting the decision tree classifier, get an array of values.. makes the tree very hard to visualize
<p>This is the code needed to reproduce the decision tree classifier plot that gives far too many values to interpret; I would like to replace the overt array of values with a simpler value display if possible. Most of this code is needed to process the dataset before attempting to plot the tree.</p> <pre><code>import re
import numpy as np
import pandas as pd
from ast import literal_eval
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv(&quot;https://raw.githubusercontent.com/Joseph-Villegas/CST383/main/Building_and_Safety_Customer_Service_Request__Closed_.csv&quot;)

df.drop([ 'CSR Number', 'Address House Number', 'Address Street Direction',
          'Address Street Name', 'Address Street Suffix',
          'Parcel Identification Number (PIN)','Address House Fraction Number',
          'Address Street Suffix Direction', 'Case Number Related to CSR'], axis=1, inplace=True)

# Drop any row found with an NA value
df.dropna(axis=0, how='any', inplace=True)

# Observed date columns
date_columns = ['Date Received', 'Date Closed', 'Due Date']

# Function to reformat date string
str_2_date = lambda date: f&quot;{date[:6]}2{re.split('/', date)[2][1:]}&quot;

# Apply said function to the date columns in the dataframe
df = df.apply(lambda column: df[column.name].apply(str_2_date) if column.name in date_columns else column)

for column in date_columns:
    original_dtype = str(df[column].dtypes)
    df[column] = pd.to_datetime(df[column])
    new_dtype = str(df[column].dtypes)
    print(&quot;{:&lt;20} {:&lt;20} {:&lt;20}&quot;.format(column, original_dtype, new_dtype))

for column in date_columns:
    df[f&quot;{column} Day of Week&quot;] = df[column].dt.dayofweek  # Monday=0, Sunday=6.
    df[f&quot;{column} Month&quot;] = df[column].dt.month
    df[f&quot;{column} Year&quot;] = df[column].dt.year

# Remove original date columns
df.drop(date_columns, axis=1, inplace=True)

df['Lat.'] = [literal_eval(x)[0] for x in df['Latitude/Longitude']]
df['Lon.'] = [literal_eval(x)[1] for x in df['Latitude/Longitude']]
df.drop('Latitude/Longitude', axis=1, inplace=True)

# Encode the rest of the columns having dtype 'object' using ordinal encoding
object_columns = df.dtypes[(df.dtypes == &quot;object&quot;)].index.tolist()

for column in object_columns:
    values_list = df[column].value_counts(ascending=True).index.tolist()
    ordinal_map = {value:(index + 1) for index, value in enumerate(values_list)}
    df[column] = df[column].map(ordinal_map)

def sincos(x, period):
    radians = (2 * np.pi * x) / period
    return np.column_stack((np.sin(radians), np.cos(radians)))

# Encode the day of week columns
day_of_week_columns = df.filter(like='Day of Week', axis=1).columns.tolist()

for column in day_of_week_columns:
    day_sc = sincos(df[column], 7)
    df[f&quot;{column} Sin&quot;] = day_sc[:,0]
    df[f&quot;{column} Cosine&quot;] = day_sc[:,1]

# Encode the month columns
month_columns = df.filter(like='Month', axis=1).columns.tolist()

for column in month_columns:
    month_sc = sincos(df[column], 12)
    df[f&quot;{column} Sin&quot;] = month_sc[:,0]
    df[f&quot;{column} Cosine&quot;] = month_sc[:,1]

date_info_columns = day_of_week_columns + month_columns
df.drop(date_info_columns, axis=1, inplace=True)

num_na = df.isna().sum().sum()
num_rows, num_cols = df.shape

# Below is the decision tree plot that gives unwanted array of values, is there a way to avoid this??? 
#----------------------------------------------------------------------------------------- from sklearn.tree import export_graphviz import graphviz # needed for the graph predictors = ['LADBS Inspection District', 'Address Street Zip', 'Date Received Year', 'Date Closed Year', 'Due Date Year', 'Case Flag', 'CSR Priority', 'Lat.', 'Lon.'] # features to predict from # we must pass np arrays into our decision tree X = df[predictors].values # numpy array for predictor variables y = df['Response Days'].values # numpy array for target variable X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.30, random_state=0) clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train) # Using Decision Tree classifier model, and fit the model with the training data dot_data = export_graphviz(clf, precision=3, feature_names=predictors, proportion=True, class_names=predictors, filled=False, rounded=True, special_characters=True) # plot it graph = graphviz.Source(dot_data) graph </code></pre>
<p>Your target variable, <code>Response Days</code>, has lots of unique values, so using a classifier means each leaf keeps track of how many samples of each class there are, hence the long lists. You probably would rather use a regression model, and if you do that the reported value of each leaf is just the (single!) average target value among its samples.</p>
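<p>A minimal sketch of that change, reusing <code>X_train</code>, <code>y_train</code> and <code>predictors</code> from the question (a regressor takes no <code>class_names</code> argument):</p> <pre><code>from sklearn.tree import DecisionTreeRegressor, export_graphviz
import graphviz

reg = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
dot_data = export_graphviz(reg, precision=3, feature_names=predictors,
                           filled=False, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)  # each leaf now shows a single mean value
graph
</code></pre>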
python|pandas|scikit-learn|decision-tree|pygraphviz
0
6,969
67,530,483
Why does keras neural network predicts the same number for all different images?
<p>I'm trying to use a keras neural network in tensorflow to recognize handwritten digits. But I don't know why, when I call <code>predict()</code>, it returns the same result for all input images.</p> <p>Here is the code:</p> <pre><code>### Train dataset ###
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train/255
x_test = x_test/255

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(units=128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(units=10,activation=tf.nn.softmax))
model.compile(optimizer=&quot;adam&quot;, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;])
model.fit(x_train, y_train, epochs=5)
</code></pre> <p>The result looks like this:</p> <pre><code>Epoch 1/5
1875/1875 [==============================] - 2s 672us/step - loss: 0.2620 - accuracy: 0.9248
Epoch 2/5
1875/1875 [==============================] - 1s 567us/step - loss: 0.1148 - accuracy: 0.9658
Epoch 3/5
1875/1875 [==============================] - 1s 559us/step - loss: 0.0784 - accuracy: 0.9764
Epoch 4/5
1875/1875 [==============================] - 1s 564us/step - loss: 0.0596 - accuracy: 0.9817
Epoch 5/5
1875/1875 [==============================] - 1s 567us/step - loss: 0.0462 - accuracy: 0.9859
</code></pre> <p>Then the code used to test an image is below:</p> <pre><code>img = cv.imread('path/to/1.png')
img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
img = cv.resize(img,(28,28))
img = np.array([img])
if cv.countNonZero((255-img)) == 0:
    print('')
img = np.invert(img)
plt.imshow(img[0])
plt.show()
prediction = model.predict(img)
result = np.argmax(prediction)
print(prediction)
print(f'Result: {result}')
</code></pre> <p>The result is:</p> <p><a href="https://i.stack.imgur.com/P60u4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P60u4.png" alt="Input with number 1" /></a></p> <p>plt show: <a href="https://i.stack.imgur.com/VMdFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VMdFj.png" alt="PlT show 1" /></a></p> <pre><code>[[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]
Result: 3
</code></pre> <p><a href="https://i.stack.imgur.com/LaIF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaIF1.png" alt="Input with number 2" /></a></p> <p>plt show <a href="https://i.stack.imgur.com/d38w6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d38w6.png" alt="PlT show 2" /></a></p> <pre><code>[[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]
Result: 3
</code></pre>
<p>Normalize your data at inference time the same way you did on the training set:</p> <pre><code>img = np.array([img]) / 255
</code></pre> <p>Check <a href="https://stackoverflow.com/a/66191381/9215780">this answer (Inference)</a> for more details.</p> <hr /> <p>Based on your 3rd comment, here are some details.</p> <pre><code>def input_prepare(img):
    img = cv2.resize(img, (28, 28))
    img = cv2.bitwise_not(img)
    img = tf.cast(tf.divide(img, 255) , tf.float64)
    img = tf.expand_dims(img, axis=0)
    return img

img = cv2.imread('/content/1.png')
orig = img.copy() # save for plotting later on
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # gray scaling
img = input_prepare(img)

plt.imshow(tf.reshape(img, shape=[28, 28]))
</code></pre> <p><a href="https://i.stack.imgur.com/b2Jor.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b2Jor.png" alt="enter image description here" /></a></p> <pre><code>plt.imshow(cv2.cvtColor(orig, cv2.COLOR_BGR2RGB))
plt.title(np.argmax(model.predict(img)))
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/0yNJi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0yNJi.png" alt="enter image description here" /></a></p> <p>It works as expected. But because of resizing the image, the digits get broken and lose some spatial information. That seems OK for the model, but if it gets much worse, the model will predict wrongly. A case example:</p> <p><a href="https://i.stack.imgur.com/zDpZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zDpZS.png" alt="enter image description here" /></a></p> <p>and the model predicts wrong for this.</p> <pre><code>plt.imshow(cv2.cvtColor(orig, cv2.COLOR_BGR2RGB))
plt.title(np.argmax(model.predict(img)))
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/2uQff.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2uQff.png" alt="enter image description here" /></a></p> <p>To fix this, we can apply <code>cv2.erode</code> to add some pixels back after resizing, for example:</p> <pre><code>def input_prepare(img):
    img = cv2.resize(img, (28, 28))
    img = cv2.erode(img, np.ones((2, 2)))
    img = cv2.bitwise_not(img)
    img = tf.cast(tf.divide(img, 255) , tf.float64)
    img = tf.expand_dims(img, axis=0)
    return img
</code></pre> <p><a href="https://i.stack.imgur.com/QNw7d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QNw7d.png" alt="enter image description here" /></a></p> <p>Not the best approach perhaps, but now the model will understand better.</p> <pre><code>plt.imshow(cv2.cvtColor(orig, cv2.COLOR_BGR2RGB))
plt.title(np.argmax(model.predict(img)))
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/7HWpV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7HWpV.png" alt="enter image description here" /></a></p>
python|tensorflow|keras|neural-network|handwriting-recognition
1
6,970
67,403,569
Get date that is closest to given timestamp from two series python pandas
<p>I have a series of timestamps called <code>dates</code> that look as such:</p> <pre><code>1 2021-04-21 09:34:00+00:00 2 2021-04-21 10:30:02+00:00 3 2021-04-21 15:54:00+00:00 4 2021-04-22 18:33:57+00:00 5 2021-04-23 18:48:04+00:00 </code></pre> <p>I am trying to find the closest date from another series called <code>PublishTime</code> which has minutely data for a 6 day time period covering some of the timestamps. The date must be after the timestamp and cannot be before. An example looks as such:</p> <pre><code>0 2021-04-21 09:30:00 1 2021-04-21 09:31:00 2 2021-04-21 09:32:00 3 2021-04-21 09:33:00 4 2021-04-21 09:34:00 </code></pre> <p>Is there an easy way to quickly find the closest date? I have looked in the <code>datetime</code> module but cannot find an answer.</p> <p>EDIT I incorrectly said that the date column covers all the timestamps. In the second series that I am trying to match it to, there is no minute data for weekends and non-business hours, therefore I would like to find the closest date AFTER the timestamp, not before.</p>
<p>Thanks @Quang Hoang, merge_asof worked. Since it was new to me as well, I tried it out and here's the result.</p> <p>First get the df from the question and reformat type to match the type in &quot;PublishTime&quot; series</p> <pre><code>df = pd.DataFrame({'dates': [&quot;2021-04-21 09:34:00+00:00&quot;, &quot;2021-04-21 10:30:02+00:00&quot;, &quot;2021-04-21 15:54:00+00:00&quot;, &quot;2021-04-22 18:33:57+00:00&quot;, &quot;2021-04-23 18:48:04+00:00&quot;]}) df['dates'] = pd.to_datetime(df['dates']) df['dates'] = df['dates'].dt.strftime('%Y-%m-%d %H:%M:%S') df['dates'] = pd.to_datetime(df['dates']) df dates 0 2021-04-21 09:34:00 1 2021-04-21 10:30:02 2 2021-04-21 15:54:00 3 2021-04-22 18:33:57 4 2021-04-23 18:48:04 </code></pre> <p>Get the df in PublishTime series</p> <pre><code>df2 = pd.DataFrame({'PublishTime': [&quot;2021-04-21 09:33:00&quot;, &quot;2021-04-21 09:34:00&quot;, &quot;2021-04-21 09:35:00&quot;, &quot;2021-04-21 10:31:00&quot;, &quot;2021-04-21 15:56:00&quot;, &quot;2021-04-25 15:56:00&quot;, &quot;2021-04-26 15:56:00&quot;]}) df2['PublishTime'] = pd.to_datetime(df2['PublishTime']) df2 PublishTime 0 2021-04-21 09:33:00 1 2021-04-21 09:34:00 2 2021-04-21 09:35:00 3 2021-04-21 10:31:00 4 2021-04-21 15:56:00 5 2021-04-25 15:56:00 6 2021-04-26 15:56:00 </code></pre> <p>Finally, merge_asof and use <code>forward</code> as the direction.</p> <pre><code>pd.merge_asof(df, df2, left_on='dates', right_on='PublishTime', direction='forward') dates PublishTime 0 2021-04-21 09:34:00 2021-04-21 09:34:00 1 2021-04-21 10:30:02 2021-04-21 10:31:00 2 2021-04-21 15:54:00 2021-04-21 15:56:00 3 2021-04-22 18:33:57 2021-04-25 15:56:00 4 2021-04-23 18:48:04 2021-04-25 15:56:00 </code></pre> <p>As you can see, in the PublishTime series I didn't add the data for 22nd - 24th April to show that some data can be missing (like weekends) and then it took the next closest one on 25th.</p>
python|pandas|datetime
1
6,971
67,584,691
Working with webp and jpeg images have different number of channels
<p>I'm working on a computer vision project where my images are a combination of webp and jpeg. I'm using tensorflow '2.3.2'.<br /> You can think of my directories like this:</p> <pre><code>IMAGES
|-img1.jpeg
|-img2.webp
</code></pre> <p>For reading webp, I use <a href="https://www.tensorflow.org/io/api_docs/python/tfio/image/decode_webp" rel="nofollow noreferrer">tfio.image.decode_webp</a>, and when reading jpeg, I use tf.image.decode_jpeg(img, channels=3). Here's the code:</p> <pre class="lang-py prettyprint-override"><code>def load(file_path):
    img = tf.io.read_file(file_path)
    extension = tf.strings.split(file_path, sep=&quot;.&quot;)
    if extension[-1] == &quot;webp&quot;:
        img = tfio.image.decode_webp(img)
    else:
        img = tf.image.decode_jpeg(img, channels=3)
    # img preprocess here
    return img

def create_dataset(df, batch_size):
    image = df[&quot;image_path&quot;]
    # I'm working on MultiTaskLearning so I have multiple targets
    target1 = df[&quot;target1&quot;].to_numpy()
    target2 = df[&quot;target2&quot;].to_numpy()

    ds = tf.data.Dataset.from_tensor_slices((image, target1, target2))
    ds = ds.map(lambda image, target1, target2: (load(image), {&quot;target1&quot;: target1, &quot;target2&quot;: target2}),
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds = ds.batch(batch_size)
    ds = ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    return ds

dataset = create_dataset(df, 100)
</code></pre> <p>The problem is that webp images are decoded to a 4-channel (RGBA) tensor, whereas decode_jpeg gives a 3-channel (RGB) one. This creates inconsistencies within my dataset, since the model only accepts 3-channel images.</p> <p>One solution I can think of is converting all my webp files to jpeg through <a href="https://stackoverflow.com/questions/49591274/convert-webp-to-jpg">this</a>. But is there any better solution, like converting the 4 channels into 3 channels in TensorFlow, or reading webp as 3-channel in TensorFlow, or anything else where I can just put the solution inside my Python script?</p>
<p>If you want to train a single model with both <em>*.jpeg</em> and <em>*.webp</em> images then you should create the same input layer layout for both.</p> <p>To do so, you basically need to convert either RGB to RGBA or (what I would do) RGBA to RGB. If you want to simply drop the alpha channel you can use tensorflow's <a href="https://www.tensorflow.org/io/api_docs/python/tfio/experimental/color/rgba_to_rgb" rel="nofollow noreferrer">rgba_to_rgb</a> (as <a href="https://stackoverflow.com/questions/67584691/working-with-webp-and-jpeg-images-have-different-number-of-channels/67585941#comment119460403_67584691">@Lescurel pointed out in the comments</a>).</p> <p>But the RGBA to RGB conversion with <a href="https://en.wikipedia.org/wiki/Alpha_compositing" rel="nofollow noreferrer">alpha compositing</a> is not very complex, and you can do the operation directly on the tensor you get from calling <code>load</code>.</p> <p>Here is a tensorflow adaptation of a <a href="https://stackoverflow.com/a/58748986/1622937">numpy implementation</a> proposed for <a href="https://stackoverflow.com/questions/50331463/convert-rgba-to-rgb-in-python">this</a> SO question:</p> <pre class="lang-py prettyprint-override"><code>def rgba2rgb(rgba, background=(255,255,255)):
    row, col, ch = tf.shape(rgba)

    if ch == 3:
        return rgba

    assert ch == 4, 'RGBA image has 4 channels.'

    # split the channels and composite them onto the background colour
    r, g, b, a = tf.unstack(tf.cast(rgba, tf.float32), axis=-1)

    a = tf.cast(a, tf.float32) / 255.0

    R, G, B = background

    r = r * a + (1.0 - a) * R
    g = g * a + (1.0 - a) * G
    b = b * a + (1.0 - a) * B

    rgb = tf.stack([r,g,b], axis=-1)

    return tf.cast(rgb, tf.uint8)
</code></pre> <p>Using this avoids having to call <code>.numpy()</code> on the tensor and the <a href="https://stackoverflow.com/q/52357542/1622937">potential issues related to that</a>.</p> <p>So basically <code>rgba2rgb(load(image))</code> should then do the trick.</p>
python|tensorflow
0
6,972
34,826,371
Add header to np.matrix
<p>I am trying to export a numpy matrix to ASCII format, but I want to add a header to it first.</p> <p>My code concept is this:</p> <ol> <li>Import ASCII file as np.ndarray, say matrix A </li> <li>Take the header of A (first 6 rows). The header contains both float values and characters</li> <li>Take the rows of A that are not header (from rows 6 to last row), giving array B</li> <li>Apply some functions on B</li> <li>Save as ASCII in this form: header(A) + B</li> </ol> <p>I tried the following:</p> <p>Try 1:</p> <pre><code>import numpy as np
A = np.genfromtxt('......Input\chm_plot_1.txt', dtype=None, delimiter='\t')
header = A[0:6]
B = A[6:]
mat_out = np.concatenate([A,B])
np.savetxt('........out.txt', mat_out, delimiter='\t')
</code></pre> <p>, but it gives the error:</p> <blockquote> <p>TypeError: Mismatch between array dtype ('|S3973') and format specifier ('%.18e')</p> </blockquote> <p>Try 2:</p> <pre><code>import numpy as np
A = np.genfromtxt('......Input\chm_plot_1.txt', dtype=None, delimiter='\t')
header = A[0:6]
headers = np.vstack(header)
head_list = headers.tolist()
head_str = ''.join(str(v) for v in head_list)
B = A[6:]
np.savetxt('\out.txt', B, header = head_str, delimiter='\t')
</code></pre> <p>, which gives the same error:</p> <blockquote> <p>TypeError: Mismatch between array dtype ('|S3973') and format specifier ('%.18e')</p> </blockquote> <p>Try 3:</p> <pre><code>import numpy as np
import linecache
A = np.genfromtxt('.............\Input\chm_plot_1.txt', dtype=None, delimiter='\t')
line1 = linecache.getline('.............Input\chm_plot_1.txt', 1)
line2 = linecache.getline('.............Input\chm_plot_1.txt', 2)
line3 = linecache.getline('.............Input\chm_plot_1.txt', 3)
line4 = linecache.getline('.............Input\chm_plot_1.txt', 4)
line5 = linecache.getline('.............Input\chm_plot_1.txt', 5)
line6 = linecache.getline('.............Input\chm_plot_1.txt', 6)
header2 = line1
header2 += line2
header2 += line3
header2 += line4
header2 += line5
header2 += line6
B = A[6:]
np.savetxt('........\out.txt', B , header = header2, delimiter='\t')
</code></pre> <p>, which gives me the same error:</p> <blockquote> <p>TypeError: Mismatch between array dtype ('|S3973') and format specifier ('%.18e')</p> </blockquote> <p>The A array has the first lines like this:</p> <p>print A[0:8]  #<em>starting from row 6, the rows have 100+ values, header is first 6 rows</em></p> <pre><code>['ncols         371'
 'nrows         435'
 'xllcorner     520298.0053'
 'yllcorner     436731.3065'
 'cellsize      1'
 'NODATA_value  -9999'
 '16.52002 15.90161 15.96692 20.32922 20.59827 18.28137 18.83533 17.66 .......
 '13.16687 17.09497 7.309204 20.83655 19.05078 17.68591 17.88464 ...... ']
</code></pre> <p>Any help would be greatly appreciated! Thanks :)</p> <p>Edit: I uploaded a sample from the input data (chm_plot_1.txt). The link is below: <a href="http://we.tl/mjgBe4QIRM" rel="nofollow noreferrer">http://we.tl/mjgBe4QIRM</a></p> <p>Edit2: Following the answer, the problem is that it inserts the "#" character at the beginning of the header lines, as in the image below. Also, there is one supplementary line, the 7th one, that should be removed.</p> <p><a href="https://i.stack.imgur.com/JXbe7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JXbe7.png" alt="enter image description here"></a></p> <p>Edit 3: I think the error</p> <blockquote> <p>ValueError: invalid literal for float() </p> </blockquote> <p>is due to the different formats of data in the sample vs full files. 
Although both are .txt, they are arranged differently, as in the picture below.</p> <p><a href="https://i.stack.imgur.com/ljdl9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ljdl9.png" alt="chm_plot_1"></a> <a href="https://i.stack.imgur.com/h4Amv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4Amv.png" alt="chm_plot_1_sample"></a></p>
<p>The problem is that your header does not have the same format as the data.</p> <p>One way to solve that: treat the header as plain text, and the data as numeric.</p> <pre><code>import numpy as np

with open('chm_plot_1_sample.txt') as f:
    header = "".join([f.readline() for i in range(6)])[:-1]

a = np.loadtxt('chm_plot_1_sample.txt', delimiter='\t', skiprows=6)
a = a / 2  # some treatment
np.savetxt('out.txt', a, delimiter='\t', header=header, comments='')
</code></pre>
python|arrays|numpy|ascii
1
6,973
65,204,523
TypeError: backward() got an unexpected keyword argument 'grad_tensors' in pytorch
<p>I have the following:</p> <pre><code>w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

a = torch.add(w, x)
b = torch.add(w, 1)
y0 = torch.mul(a, b)    # y0 = (x+w) * (w+1)
y1 = torch.add(a, b)    # y1 = (x+w) + (w+1)

loss = torch.cat([y0, y1], dim=0)       # [y0, y1]
weight = torch.tensor([1., 2.])

loss.backward(grad_tensors=weight)
</code></pre> <p>The above gives me <code>TypeError: backward() got an unexpected keyword argument 'grad_tensors'</code>. I checked the <a href="https://pytorch.org/docs/stable/autograd.html" rel="nofollow noreferrer">website</a>, and <code>grad_tensors</code> does appear in <code>backward</code>.</p> <p>However, when I use</p> <pre><code>loss.backward(gradient=weight)
</code></pre> <p>it works, even though <code>gradient</code> is not a parameter of <code>backward</code> there. Any idea why? My pytorch version is <code>1.7.0</code>. Thanks.</p>
<p>You are calling <a href="https://pytorch.org/docs/stable/_modules/torch/tensor.html#Tensor.backward" rel="nofollow noreferrer"><code>torch.Tensor.backward</code></a>, not <code>torch.autograd.backward</code>.</p> <p>As for your second question about the difference between the two, <code>torch.Tensor.backward</code> <a href="https://pytorch.org/docs/stable/_modules/torch/tensor.html#Tensor.backward" rel="nofollow noreferrer">internally calls</a> <code>torch.autograd.backward</code>, which calculates gradients of given tensors w.r.t. graph leaves.</p> <pre><code>torch.autograd.backward(self, gradient, retain_graph, create_graph)
</code></pre> <p>which corresponds to</p> <pre><code>torch.autograd.backward(tensors: self, grad_tensors: gradient, retain_graph, create_graph)
</code></pre> <p>Thus, the two calls below are equivalent:</p> <pre><code>loss.backward(gradient=weight)

torch.autograd.backward(loss, weight)
</code></pre>
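<p>As a quick sanity check using the tensors from the question (expected gradients worked out by hand: d/dw of <code>1*y0 + 2*y1</code> is <code>(w+1)+(w+x) + 2*2 = 9</code>, and d/dx is <code>(w+1) + 2*1 = 4</code>):</p> <pre><code>import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = w + x
b = w + 1
loss = torch.cat([a * b, a + b], dim=0)  # [y0, y1]
weight = torch.tensor([1., 2.])

torch.autograd.backward(loss, weight)    # same as loss.backward(gradient=weight)
print(w.grad, x.grad)                    # tensor([9.]) tensor([4.])
</code></pre>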
python-3.x|pytorch|gradient
3
6,974
65,113,529
How did the number of elements in each array end up being less than they were in their parent array (array of arrays)
<p>This problem seems to go away when I don't use <strong>shuffle(train_data)</strong>. I also tried doing <strong>shuffle(train_data_1)</strong> and <strong>shuffle(train_data_2)</strong> separately, but in that case too the distribution seems to change.</p> <pre><code>import numpy as np
import pandas as pd
from collections import Counter
from random import shuffle

train_data_1 = np.load('training_data_v1.npy', allow_pickle=True)
train_data_2 = np.load('training_data_v2.npy', allow_pickle=True)

train_data = np.vstack((train_data_1, train_data_2))
df = pd.DataFrame(train_data)
print(df.head())
print(Counter(df[1].apply(str)))
</code></pre> <pre><code>Output :

                                                   0          1
0  [[33, 255, 255, 255, 255, 255, 255, 255, 255, ...  [0, 1, 0]
1  [[33, 255, 255, 255, 255, 255, 255, 255, 255, ...  [0, 1, 0]
2  [[33, 255, 255, 255, 255, 255, 255, 255, 255, ...  [0, 1, 0]
3  [[33, 255, 255, 255, 255, 255, 255, 255, 255, ...  [0, 1, 0]
4  [[33, 255, 255, 255, 255, 255, 255, 255, 255, ...  [0, 1, 0]

Counter({'[0, 1, 0]': 6064, '[1, 0, 0]': 542, '[0, 0, 1]': 394})
</code></pre> <p><strong>Separating the forwards, lefts and rights into their respective arrays</strong></p> <pre><code>shuffle(train_data)
</code></pre> <pre><code>lefts = []
rights = []
forwards = []

for data in train_data:
    img = data[0]
    choice = data[1]

    if choice == [1,0,0]:
        lefts.append([img,choice])
    elif choice == [0,1,0]:
        forwards.append([img,choice])
    elif choice == [0,0,1]:
        rights.append([img,choice])
    else:
        print('no matches')

print (len(forwards), len(lefts), len(rights))
</code></pre> <pre><code>Output:
6086 763 151
</code></pre> <p>At the start, the distribution of forward, left, right was (6064, 542, 394).</p> <p>Now, it is (6086, 763, 151).</p> <p><strong>How has the distribution changed??</strong></p> <p>What am I doing wrong!?</p>
<p>Don't use <code>shuffle(train_data)</code>; instead, after forming the forwards, lefts and rights arrays, use <code>shuffle()</code> on them individually:</p> <pre><code>shuffle(forwards)
shuffle(lefts)
shuffle(rights)
</code></pre> <p>Now there isn't any problem with the distribution, and the shuffling is also performed correctly.</p>
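<p>A likely explanation for the corruption (an educated guess, but consistent with the total staying at 7000 while the split shifts): <code>random.shuffle</code> swaps items via <code>a[i], a[j] = a[j], a[i]</code>, and on a 2-D NumPy array <code>a[j]</code> is a view into the array, so swaps can end up duplicating some rows and losing others. NumPy's own shuffle avoids this:</p> <pre><code>import numpy as np

np.random.shuffle(train_data)  # shuffles rows in place, safe for 2-D arrays
</code></pre>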
python|arrays|pandas|numpy|dataframe
0
6,975
49,844,290
TensorFlow saving model - Paradoxical exception
<p>I've tried to save a basic MNIST model:</p> <pre><code>from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

saver = tf.train.Saver()

sess.run(tf.global_variables_initializer())

y = tf.matmul(x, W) + b

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

saver.save(sess, './mnist_to-save-saved')

for _ in range(1000):
  batch = mnist.train.next_batch(100)
  train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver.save(sess, '/mnist-to-save-saved', global_step=1000)

print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
</code></pre> <p>It seems to partially work: in the same directory as this model's .py file, some files are generated:</p> <pre><code>mnist_to-save-saved.index
mnist_to-save-saved.meta
mnist_to-save-saved.data-00000-of-00001
checkpoint
</code></pre> <p>I can only access the content of the "checkpoint" file, which is:</p> <pre><code>model_checkpoint_path: "mnist_to-save-saved"
all_model_checkpoint_paths: "mnist_to-save-saved"
</code></pre> <p>My question is: when executing the training file, why do I get this error trace at the same time, and how can I fix it?</p> <pre><code>C:\Users\username\Anaconda3\envs\tensorflowenv\python.exe C:/Users/username/PycharmProjects/estimator/mnist-to-save.py
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
2018-04-15 18:22:39.959614: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\framework\op_kernel.cc:1192] Permission denied: Failed to create a directory: /; Permission denied
Traceback (most recent call last):
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
    return fn(*args)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
    status, run_metadata)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.PermissionDeniedError: Failed to create a directory: /; Permission denied
     [[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, Variable, Variable_1)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/username/PycharmProjects/estimator/mnist-to-save.py", line 28, in &lt;module&gt;
    saver.save(sess, '/mnist-to-save-saved', global_step=1000)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 1573, in save
    {self.saver_def.filename_tensor_name: checkpoint_file})
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
    run_metadata_ptr)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
    options, run_metadata)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.PermissionDeniedError: Failed to create a directory: /; Permission denied
     [[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, Variable, Variable_1)]]

Caused by op 'save/SaveV2', defined at:
  File "C:/Users/username/PycharmProjects/estimator/mnist-to-save.py", line 11, in &lt;module&gt;
    saver = tf.train.Saver()
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 1218, in __init__
    self.build()
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 1227, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 1263, in _build
    build_save=build_save, build_restore=build_restore)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 748, in _build_internal
    save_tensor = self._AddSaveOps(filename_tensor, saveables)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 296, in _AddSaveOps
    save = self.save_op(filename_tensor, saveables)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\training\saver.py", line 239, in save_op
    tensors)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1162, in save_v2
    shape_and_slices=shape_and_slices, tensors=tensors, name=name)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
    op_def=op_def)
  File "C:\Users\username\Anaconda3\envs\tensorflowenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

PermissionDeniedError (see above for traceback): Failed to create a directory: /; Permission denied
     [[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, Variable, Variable_1)]]

Process finished with exit code 1
</code></pre> <p>It seems to save the untrained model at first, but then fails when saving after training.</p>
<p>Here is an extract of your code</p> <pre><code>saver.save(sess, './mnist_to-save-saved') for _ in range(1000): batch = mnist.train.next_batch(100) train_step.run(feed_dict={x: batch[0], y_: batch[1]}) correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver.save(sess, '/mnist-to-save-saved', global_step=1000) </code></pre> <h1>Problem</h1> <p>The last line has <code>saver.save(sess, '/mnist-to-save-saved'</code> , which should probably be <code>saver.save(sess, './mnist-to-save-saved'</code></p> <p>This is most likely causing the issue <code>W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\framework\op_kernel.cc:1192] Permission denied: Failed to create a directory: /; Permission denied</code> because your path is using root "/" instead of relative "./"</p> <h1>Solution</h1> <p>Replace <code>/mnist-to-save-saved</code> with <code>./mnist-to-save-saved</code> or better yet define a variable at the top of your code like</p> <pre><code>TF_SAVE_FILE = './mnist-to-save-saved' </code></pre> <p>Then use that variable throughout your code instead of copying it.</p>
python|tensorflow|save
8
6,976
50,192,731
IOError: [Errno 21] Is a directory: '/tmp/speech_dataset/'
<p>I'm following the speech recognition tutorial from TensorFlow (link: <a href="https://www.tensorflow.org/versions/master/tutorials/audio_recognition#advanced_training" rel="nofollow noreferrer">https://www.tensorflow.org/versions/master/tutorials/audio_recognition#advanced_training</a>), and when I run the following command, which downloads the dataset provided by TensorFlow, it runs perfectly.</p> <pre><code>python tensorflow/examples/speech_commands/train.py
</code></pre> <p>However, when I change the defaults so that it points to my dataset, it throws the following error:</p> <pre><code>Traceback (most recent call last):
  File "/home/users2/lmn/.local/lib/python2.7/site-packages/tensorflow/examples/speech_commands/train.py", line 428, in &lt;module&gt;
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/users2/lmn/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "/home/users2/lmn/.local/lib/python2.7/site-packages/tensorflow/examples/speech_commands/train.py", line 106, in main
    FLAGS.testing_percentage, model_settings)
  File "/home/users2/lmn/.local/lib/python2.7/site-packages/tensorflow/examples/speech_commands/input_data.py", line 158, in __init__
    self.maybe_download_and_extract_dataset(data_url, data_dir)
  File "/home/users2/lmn/.local/lib/python2.7/site-packages/tensorflow/examples/speech_commands/input_data.py", line 204, in maybe_download_and_extract_dataset
    tarfile.open(filepath, 'r:gz').extractall(dest_directory)
  File "/usr/lib64/python2.7/tarfile.py", line 1693, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib64/python2.7/tarfile.py", line 1740, in gzopen
    fileobj = gzip.GzipFile(name, mode, compresslevel, fileobj)
  File "/usr/lib64/python2.7/gzip.py", line 94, in __init__
    fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
IOError: [Errno 21] Is a directory: '/tmp/speech_dataset/'
</code></pre> <p>The command I'm running is:</p> <pre><code>python tensorflow/examples/speech_commands/train.py --data_url=path/to/data/ --sample_rate=20000 --wanted_words=one,two,three,four,five,six,seven,eight,nine
</code></pre> <p>Now, the error says that '/tmp/speech_dataset/' is a directory, but it is expecting a file, I guess. When I looked at the <code>train.py</code> file, I found the following code:</p> <pre><code>parser.add_argument(
    '--data_dir',
    type=str,
    default='/tmp/speech_dataset/',
    help="""\
    Where to download the speech training data to.
    """)
</code></pre> <p>The <code>--data-dir</code> argument defines where the files from the downloaded dataset should be stored. However, I'm not changing it at all, nor does the code need to save any data, since I already have the data on my computer, which I point to with the <code>--data-url</code> argument. It seems to me that this is a bug in TensorFlow.</p> <p>Does anyone have experience with speech recognition on TensorFlow and know where the problem might be?</p> <p>Thank you in advance!</p>
<p>OK, I solved the problem, so I'm posting it here in case anyone runs into the same problem.</p> <p>There was some confusion with the TensorFlow documentation. I thought that the <code>--data-url</code> argument should get the path to my data set, but this argument should only be used when you want to download a data set from somewhere. In case you have your own data set, you need to explicitly set it to blank, i.e. pass <code>--data-url=</code> to your command, and <code>--data-dir</code> should then be the path to your data set.</p>
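<p>Applied to the command from the question, that would look something like this (the data path is a placeholder):</p> <pre><code>python tensorflow/examples/speech_commands/train.py --data_url= --data_dir=/path/to/data/ --sample_rate=20000 --wanted_words=one,two,three,four,five,six,seven,eight,nine
</code></pre>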
python|tensorflow|machine-learning|io
1
6,977
49,976,550
convolutional neural network image recognition
<p>Currently I am working on a project with a convolutional network using tensorflow, and I have set up the network; now I need to train it. I don't have a clue how the images should look for training, for example what percentage of the image the object should occupy. It's a cigarette that I have to detect, and I have tried around 280 individual pictures where the cigarette is about 2-5% of the image. I'm thinking of scrapping those pictures and taking new ones where the cigarette is about 30-50% of the image. All the pictures are taken outside in a street environment.</p> <p>So my question is: is there any kind of rule regarding good pictures in a training set? I will report back when I have tried my own solution.</p>
<p>The object you are trying to recognise is too small. In the <a href="https://imgur.com/a/KjTfes4" rel="nofollow noreferrer">Sample</a>, I think the first one will be the best bet for you. Convolutional neural networks work by performing convolution operations on image pixels. In the second picture, the background is too large compared to the object you are trying to recognise. Training on such data will not help you.</p>
tensorflow
0
6,978
50,025,451
Issue installing pip and pandas
<p>I am trying to install Pandas with pip and am running into some strange issues. The command prompt reported that pip is an unrecognized command. I thought that was strange, but decided to definitively remedy that by installing pip with the following commands:</p> <pre><code>curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
</code></pre> <p>I received the report that an existing version of pip was found, uninstalled, and the new version was installed. I then proceeded to run</p> <pre><code>pip install pandas
</code></pre> <p>And I was informed that <code>'pip' is not recognized as an internal or external command, operable program or batch file.</code></p> <p>I then added it to the PATH environment variable and the issue still persists. It's worth noting that I installed Python 3.6 by installing Anaconda. What am I missing here?<a href="https://i.stack.imgur.com/wIXEJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wIXEJ.jpg" alt="enter image description here"></a></p>
<p>The <code>pip</code> command is not found because it's not in your path.</p>

<p>You should add the following to your <code>PATH</code> environment variable:</p>

<pre><code>;%PYTHON_HOME%\;%PYTHON_HOME%\Scripts\
</code></pre>

<p>A simple Google search should help you find how to change environment variables for your version of Windows.</p>

<p>For example, see <a href="https://www.java.com/en/download/help/path.xml" rel="nofollow noreferrer">this page from the Java documentation</a>.</p>
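<p>As a quick sanity check that does not depend on <code>PATH</code> at all, you can also invoke pip through the interpreter itself:</p>

<pre><code>python -m pip install pandas
</code></pre>

<p>And since you installed Python via Anaconda, <code>conda install pandas</code> from the Anaconda Prompt is another option (pandas actually ships with the full Anaconda distribution).</p>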
python|pandas|pip
2
6,979
63,821,088
how to continue pulling data from a server like binance?
<p>I am trading on Binance. I have code that pulls data for 5-minute candles (for example); when I run it, it collects the data, but how do I keep pulling data for new candles as well? This is my code:</p> <pre><code>import binance.client
from binance.client import Client
import pandas as pd
import numpy as np
import time
import datetime
from datetime import datetime, timedelta
import matplotlib.pyplot as plt

Pkey = 'xxxxxxxx'
Skey = 'ccccccccccccc'

client = Client(api_key=Pkey, api_secret=Skey)

ticker = 'BTCUSDT'
interval = Client.KLINE_INTERVAL_5MINUTE
depth = '13 hours ago'

raw = client.get_historical_klines(ticker, interval, depth)
raw = pd.DataFrame(raw)

print(raw)
</code></pre> <p>thanks</p>
<p>As @Selcuk mentioned in his comment, you can loop the Binance read and pause between each read. In your case, you are retrieving data at 5-minute intervals, so you can wait 5 minutes before reading again and request the previous 5 minutes. You can append to the initial dataframe using <code>append</code>; note that <code>DataFrame.append</code> returns a new frame, so the result has to be assigned back.</p>
<p>Try this code:</p>
<pre><code>import time
import pandas as pd
from binance.client import Client

Pkey = 'xxxxxxxx'
Skey = 'ccccccccccccc'

client = Client(api_key=Pkey, api_secret=Skey)

ticker = 'BTCUSDT'
interval = Client.KLINE_INTERVAL_5MINUTE
depth = '13 hours ago'

raw = client.get_historical_klines(ticker, interval, depth)
raw = pd.DataFrame(raw)
alldata = raw
print(raw)  # initial load

depth = '5 minutes ago'
while True:  # loop forever
    time.sleep(300)  # wait 5 minutes
    raw = client.get_historical_klines(ticker, interval, depth)  # 5 minutes of data
    raw = pd.DataFrame(raw)
    alldata = alldata.append(raw)  # append returns a new frame; reassign to keep it
</code></pre>
python|pandas|algorithm|algorithmic-trading
1
6,980
63,860,201
Safe data retrieved from multiple pages from API
<p>I found a solution to print data from several pages from an API:</p>
<pre><code>for page in range(1, 3):
    url = &quot;https://www.balldontlie.io/api/v1/players?page={}&quot;.format(page)
    ot_data_response = requests.get(url)
    ot_data = ot_data_response.text
    ot_dataparsed = json.loads(ot_data)
    ot_dataparsedfin = pd.json_normalize(ot_dataparsed, &quot;data&quot;)
    print(ot_dataparsedfin)
</code></pre>
<p>Is there a good way to save all the data in one variable/dataframe so I can work with it?</p>
<p>You may wish to use <code>pd.concat</code>:</p>
<pre><code>pd.concat(objs, axis=0, join='outer', ignore_index=False, keys=None,
          levels=None, names=None, verify_integrity=False, copy=True)
</code></pre>
<p>For your case, it should be something like this:</p>
<pre><code>json_df_list = []
for page in range(1, 3):
    url = &quot;https://www.balldontlie.io/api/v1/players?page={}&quot;.format(page)
    ot_data_response = requests.get(url)
    ot_data = ot_data_response.text
    ot_dataparsed = json.loads(ot_data)
    ot_dataparsedfin = pd.json_normalize(ot_dataparsed, &quot;data&quot;)
    json_df_list.append(ot_dataparsedfin)

json_df = pd.concat(json_df_list, ignore_index=True)  # ignore_index gives a clean 0..n-1 index
print(json_df)
</code></pre>
python|pandas|api|python-requests
1
6,981
64,148,319
CSV to JSON output only if all values are present in CSV
<p>I have a concatenated CSV file that I am attempting to output into JSON format. How should I go about implementing the logic so that a CSV row only gets converted to a JSON object if all of its fields have a value?</p>
<pre><code>import glob , os
import pandas as pd
import json
import csv

with open('some.csv', 'r', newline='') as csvfile, \
     open('output.json', 'w') as jsonfile:
    for row in csv.DictReader(csvfile):
        restructured = {
            'STATION_CODE': row['STORE_CODE'],
            'id': row['ARTICLE_ID'],
            'name': row['ITEM_NAME'],
            'data': {
                # fieldname: value for (fieldname, value) in row.items()
                'STORE_CODE': row['STORE_CODE'],
                'ARTICLE_ID': row['ARTICLE_ID'],
                'ITEM_NAME': row['ITEM_NAME'],
                'BARCODE': row['BARCODE'],
                'SALE_PRICE': row['SALE_PRICE'],
                'LIST_PRICE': row['LIST_PRICE'],
                'UNIT_PRICE': row['UNIT_PRICE'],
            }
        }
        json.dump(restructured, jsonfile, indent=4)
        jsonfile.write('\n')
</code></pre>
<p>Currently this writes all rows from the CSV file into the JSON output, which is unintended behavior. Any input on how to correct this?</p>
<p>First I loop through all the rows of the CSV and add them to a JSON array, skipping any row that has an empty value. Once all valid rows are collected in the array, I write it out to the JSON file.</p>
<pre><code>import json
import csv

csvjsonarr = []
with open('some.csv', 'r', newline='') as csvfile:
    for row in csv.DictReader(csvfile):
        # skip the row if any field is empty
        if any(value == &quot;&quot; for value in row.values()):
            continue
        restructured = {
            'STATION_CODE': row['STORE_CODE'],
            'id': row['ARTICLE_ID'],
            'name': row['ITEM_NAME'],
            'data': {
                'STORE_CODE': row['STORE_CODE'],
                'ARTICLE_ID': row['ARTICLE_ID'],
                'ITEM_NAME': row['ITEM_NAME'],
                'BARCODE': row['BARCODE'],
                'SALE_PRICE': row['SALE_PRICE'],
                'LIST_PRICE': row['LIST_PRICE'],
                'UNIT_PRICE': row['UNIT_PRICE'],
            }
        }
        csvjsonarr.append(restructured)

if csvjsonarr:
    with open('output.json', 'w') as jsonfile:
        json.dump(csvjsonarr, jsonfile, indent=4)
</code></pre>
python|json|pandas|dataframe|csv
0
6,982
47,004,304
How to combine two data columns in pandas?
<p>I have two tables, like below. I want to merge the two tables into one. I tried merge, concat, and join in pandas, but each gives a new table of height 20; I want a height of 10 in the combined table. How can I do this with pandas data frames? <a href="https://i.stack.imgur.com/SML9N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SML9N.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/twaCr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/twaCr.png" alt="enter image description here"></a></p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <code>axis=1</code>:</p> <pre><code>df = pd.concat([df1, df2], axis=1) </code></pre>
pandas
1
6,983
46,938,530
Produce balanced mini batch with Dataset API
<p>I've a question about the new dataset API (tensorflow 1.4rc1). I've a unbalanced dataset wrt to labels <code>0</code> and <code>1</code>. My goal is to create balanced mini batches during the preprocessing.</p> <p>Assume I've two filtered datasets:</p> <pre><code>ds_pos = dataset.filter(lambda l, x, y, z: tf.reshape(tf.equal(l, 1), [])) ds_neg = dataset.filter(lambda l, x, y, z: tf.reshape(tf.equal(l, 0), [])).repeat() </code></pre> <p>Is there a way to combine these two datasets such that the resulting dataset looks like <code>ds = [0, 1, 0, 1, 0, 1]</code>:</p> <p>Something like this:</p> <pre><code>dataset = tf.data.Dataset.zip((ds_pos, ds_neg)) dataset = dataset.apply(...) # dataset looks like [0, 1, 0, 1, 0, 1, ...] dataset = dataset.batch(20) </code></pre> <p>My current approach is:</p> <pre><code>def _concat(x, y): return tf.cond(tf.random_uniform(()) &gt; 0.5, lambda: x, lambda: y) dataset = tf.data.Dataset.zip((ds_pos, ds_neg)) dataset = dataset.map(_concat) </code></pre> <p>But I've the feeling there is a more elegant way.</p> <p>Thanks in advance!</p>
<p>You are on the right track. The following example uses <code>Dataset.flat_map()</code> to turn each pair of a positive example and a negative example into two consecutive examples in the result:</p> <pre><code>dataset = tf.data.Dataset.zip((ds_pos, ds_neg)) # Each input element will be converted into a two-element `Dataset` using # `Dataset.from_tensors()` and `Dataset.concatenate()`, then `Dataset.flat_map()` # will flatten the resulting `Dataset`s into a single `Dataset`. dataset = dataset.flat_map( lambda ex_pos, ex_neg: tf.data.Dataset.from_tensors(ex_pos).concatenate( tf.data.Dataset.from_tensors(ex_neg))) dataset = dataset.batch(20) </code></pre>
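<p>For what it's worth, this pattern was later wrapped into a library helper. On newer releases (not available in 1.4rc1), <code>tf.contrib.data.sample_from_datasets</code> (later <code>tf.data.experimental.sample_from_datasets</code>) draws from the datasets with given weights:</p>

<pre><code># Sketch for newer TF versions only; not available in 1.4rc1.
dataset = tf.data.experimental.sample_from_datasets(
    [ds_pos, ds_neg], weights=[0.5, 0.5])
dataset = dataset.batch(20)
</code></pre>

<p>Note that it samples stochastically rather than strictly alternating, so each batch is only balanced in expectation; the <code>flat_map</code> approach above guarantees the exact <code>[pos, neg, pos, neg, ...]</code> order.</p>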
tensorflow|tensorflow-datasets
7
6,984
46,649,363
Removing inf/nan values from Pandas
<p>I'm aware that there are several posts about this, but none of the solutions seem to work and I can't figure out what I'm doing wrong.</p> <p>My dataframe has data with inf values.</p> <pre><code>print [x for x in train_x['meh'] if not np.isfinite(x)] </code></pre> <p>returns</p> <pre><code>[inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf] </code></pre> <p>I tried</p> <pre><code>train_x.replace([numpy.inf, -numpy.inf], numpy.nan) </code></pre> <p>But this does not convert the <code>inf</code> to <code>nan</code>. </p> <p>Next I tried</p> <pre><code>train_x[np.isinf(train_x)] = np.NaN </code></pre> <p>... which does convert the <code>inf</code> to <code>nan</code> but I'm not able to drop the <code>nan</code> rows even with <code>train_x.dropna()</code></p> <p>Essentially, I need to drop the <code>inf</code> and <code>nan</code> rows.</p>
<p>Seems like you can just use boolean indexing</p>

<pre><code>train_x[np.isfinite(train_x) &amp; train_x.notnull()]
</code></pre>

<p>You actually don't even need the <code>train_x.notnull()</code>, since <code>np.isfinite</code> already returns <code>False</code> for NaN. Be aware that on a DataFrame this masks the offending values (they become NaN) rather than dropping the rows.</p>
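<p>If the goal is to drop the whole rows, as the question asks, chain <code>replace</code> and <code>dropna</code>. The original attempt failed only because <code>replace</code> returns a new DataFrame rather than modifying in place, so the result has to be assigned back:</p>

<pre><code># replace() is not in-place by default; assign the result (or pass inplace=True)
train_x = train_x.replace([np.inf, -np.inf], np.nan).dropna()
</code></pre>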
pandas|numpy
0
6,985
63,073,304
How to edit mnist dataset?
<p>I want to keep only digits 0 to 5 in the &quot;mnist dataset&quot;. How can I do this in Python? I tried to solve this problem with numpy.delete, but it didn't work.</p>
<p>Assuming you have the images stored in a numpy array of shape <code>(num_examples, num_pixels)</code> and the labels stored in an array of shape <code>(num_examples,)</code>, you can do this:</p> <pre><code>images = images[labels &lt;= 5].copy() labels = labels[labels &lt;= 5].copy() </code></pre>
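<p>For example, with the Keras built-in loader (an assumption on my part, since the question doesn't say how the data is loaded, though the tags suggest Keras):</p>

<pre><code>from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# keep only digits 0..5 in both splits
train_mask = y_train &lt;= 5
x_train, y_train = x_train[train_mask], y_train[train_mask]
test_mask = y_test &lt;= 5
x_test, y_test = x_test[test_mask], y_test[test_mask]
</code></pre>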
python|numpy|keras|dataset|mnist
1
6,986
67,756,332
Error creating model "segmentation_models" in Keras
<p>A week ago, my Notebook in Google Colaboratory was working fine after installing the following libraries:</p> <pre><code>!pip install te !pip install tensorflow==2.1 !pip install keras==2.3.1 !pip install -U segmentation-models !pip install -U --pre segmentation-models </code></pre> <p>and</p> <pre><code>import tensorflow as tf import segmentation_models as sm import glob import cv2 import numpy as np from matplotlib import pyplot as plt import keras from keras import normalize from keras.metrics import MeanIoU </code></pre> <p>It worked:</p> <pre><code># set class weights for dice_loss (car: 1.; pedestrian: 2.; background: 0.5;) dice_loss = sm.losses.DiceLoss(class_weights=np.array([0.25, 0.25, 0.25, 0.25])) focal_loss = sm.losses.CategoricalFocalLoss() total_loss = dice_loss + (1 * focal_loss) metrics = [sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)] BACKBONE1 = 'resnet34' preprocess_input1 = sm.get_preprocessing(BACKBONE1) # preprocess input X_train1 = preprocess_input1(X_train) X_test1 = preprocess_input1(X_test) # define model model1 = sm.Unet(BACKBONE1, encoder_weights='imagenet', classes=n_classes, activation=activation) </code></pre> <p>Then, due to an error, I made changes:</p> <pre><code>!pip install -q tensorflow==2.1 !pip install -q keras==2.3.1 !pip install -q tensorflow-estimator==2.1 import os os.environ['CUDA_VISIBLE_DEVICES'] = '0' os.environ[&quot;SM_FRAMEWORK&quot;] = &quot;tf.keras&quot; from tensorflow import keras from tensorflow.keras.utils import normalize from tensorflow.keras.metrics import MeanIoU </code></pre> <p>After that, this part does not work:</p> <pre><code> # define model model1 = sm.Unet(BACKBONE1, encoder_weights='imagenet', classes=n_classes, activation=activation) </code></pre> <p>Error:</p> <blockquote> <p>/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group(f, layers) 649 &quot;&quot;&quot; 650 if 'keras_version' in f.attrs: --&gt; 651 original_keras_version = f.attrs['keras_version'].decode('utf8') 652 else: 653 original_keras_version = '1'</p> <p>AttributeError: 'str' object has no attribute 'decode'</p> </blockquote> <p>Problem with loading scale values. But I don't know how to fix it</p>
<p>You may need to install the following version of <code>h5py</code> (<a href="https://github.com/qubvel/segmentation_models/issues/424" rel="nofollow noreferrer">source</a>):</p>
<pre><code>pip install -q h5py==2.10.0
</code></pre>
<p>FYI, I was able to reproduce your error on Colab and the above solution resolves it.</p>
python|tensorflow|machine-learning|keras|deep-learning
1
6,987
67,631,375
Concatenating dataframes with a common column
<p>I have 2 data frames with one common column denoting the row number.</p> <pre><code>Df1: Rownum A B C 11 S V L 11 F U M 11 T C O 11 B X P Df2: Rownum E F G 12 S V L 12 F U M 12 T C O 12 B X P </code></pre> <p>Current implementation:</p> <pre><code>df = pd.concat(df1,df2,axis=1) Output: Rownum A B C Rownum E F G 11 S V L 12 S V L 11 F U M 12 F U M 11 T C O 12 T C O 11 B X P 12 B X P </code></pre> <p>Below mentioned is the <strong>desired output</strong> I'm trying to achieve:</p> <pre><code>Rownum A B C E F G 11 S V L 11 F U M 11 T C O 11 B X P 12 S V L 12 F U M 12 T C O 12 B X P </code></pre> <p>Any direction around this would be much appreciated.</p>
<p>Remove <code>axis=1</code> in <code>concat</code> and convert <code>Rownum</code> to the index for both <code>DataFrames</code>:</p>
<pre><code>df = pd.concat([df1.set_index('Rownum'),df2.set_index('Rownum')]).reset_index().fillna('')
print (df)
   Rownum  A  B  C  E  F  G
0      11  S  V  L         
1      11  F  U  M         
2      11  T  C  O         
3      11  B  X  P         
4      12           S  V  L
5      12           F  U  M
6      12           T  C  O
7      12           B  X  P
</code></pre>
python|pandas|dataframe
0
6,988
67,896,571
Filter by column with rows having multiple values using python
<p>I have a data frame as below and I am filtering the 'STC' column for the value '30'.</p>
<p>I am using the code below and I am getting an empty data frame. How can I get only the rows containing '30'?</p>
<pre><code>STC = [30]
(df.loc[df['STC'].isin(STC)])
</code></pre>
<pre><code>    Code    Desc    STC         ...
0   PUT123  Deduct  30,47,57    ...
1   MAT456  Coins   30, 54, 27  ...
2   CAT123  Copay   24,27
</code></pre>
<p>This assumes the values in STC are strings of comma-separated numbers; the example provided does not make them look like lists.</p>
<p>You can use <code>str.contains</code> to find matches. A word-boundary pattern (<code>r'\b30\b'</code>) avoids also matching values like 130 or 304, and <code>na=False</code> treats missing values as non-matches:</p>
<pre><code>import pandas as pd

data_dict= {'code': ['PUT123', 'MAT456', 'CAT123'],
            'Desc': ['Deduct', 'Coin', 'Copay'],
            'STC': ['30, 47, 57', '30, 54, 27', '8, 1, 9']
            }

df = pd.DataFrame(data=data_dict)

df2 = (df.loc[df['STC'].str.contains(r'\b30\b', na=False)])  # match 30 as a whole token
print(df2['STC'])
</code></pre>
<p>Prints</p>
<pre><code>0    30, 47, 57
1    30, 54, 27
Name: STC, dtype: object
</code></pre>
python|pandas
0
6,989
67,955,351
`np.linalg.solve` get solution matrix?
<p><code>np.linalg.solve</code> solves for x in a problem of the form Ax = b.</p> <p>For my application, this is done to avoid calculating the inverse explicitly (i.e inverse(A)b = x)</p> <p>I'd like to access what the effective inverse is that was used to solve this problem but looking at the <a href="https://numpy.org/doc/stable/reference/generated/numpy.linalg.solve.html" rel="nofollow noreferrer">documentation</a> it doesn't appear to be an option... Is there a reasonable alternative approach I can follow to recover the inverse of A?</p> <p>(np.linalg.inv(A) is not accurate enough for my use case)</p>
<p>Following the docs and source code, it seems NumPy is calling LAPACK's <code>_gesv</code> to compute the solution, the <a href="https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-fortran/top/lapack-routines/lapack-linear-equation-routines/lapack-linear-equation-driver-routines/gesv.html" rel="nofollow noreferrer">documentation</a> of which reads:</p> <blockquote> <p>The routine solves for X the system of linear equations A*X = B, where A is an n-by-n matrix, the columns of matrix B are individual right-hand sides, and the columns of X are the corresponding solutions.</p> <p>The LU decomposition with partial pivoting and row interchanges is used to factor A as A = P * L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the system of equations A * X = B.</p> </blockquote> <p>The <a href="https://github.com/numpy/numpy/blob/27b98cbe0dd9d2969e9c227e7a2070aa56f41d6d/numpy/linalg/umath_linalg.c.src#L1613" rel="nofollow noreferrer">NumPy implementation for <code>solve</code></a> doesn't return the inverted matrix back to the caller, and just frees the memory for the inverted matrix, so there's no hope there. <a href="https://docs.scipy.org/doc/scipy/reference/linalg.lapack.html" rel="nofollow noreferrer">SciPy provides low-level access to LAPACK</a> so you should be able to access the result from there. You can follow the actual implementation in LAPACK's Fortran source code <a href="https://github.com/Reference-LAPACK/lapack/blob/v3.9.1/SRC/dgesv.f" rel="nofollow noreferrer">dgesv.f</a>, <a href="https://github.com/Reference-LAPACK/lapack/blob/v3.9.1/SRC/dgetrf.f" rel="nofollow noreferrer">dgetrf.f</a> and <a href="https://github.com/Reference-LAPACK/lapack/blob/v3.9.1/SRC/dgetrs.f" rel="nofollow noreferrer">dgetrs.f</a>. Alternatively, you could note that NumPy's <code>inv</code> still calls the same underlying code, so it might be enough for your use case... You didn't specify <em>why</em> is it that you need the approximate inverse matrix.</p>
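<p>A sketch of that low-level route via SciPy (assuming <code>A</code> is an n-by-n array and <code>b</code> a right-hand side): <code>lu_factor</code> exposes the same LU factorization that <code>_gesv</code> computes internally, so you can reuse it for further solves and, if you really need it, build the effective inverse column by column:</p>

<pre><code>import numpy as np
from scipy.linalg import lu_factor, lu_solve

lu, piv = lu_factor(A)          # LU factorization with partial pivoting
x = lu_solve((lu, piv), b)      # solves A x = b without forming inv(A)

# "Effective inverse": solve A X = I one column at a time
A_inv = lu_solve((lu, piv), np.eye(A.shape[0]))
</code></pre>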
python|numpy
1
6,990
67,716,231
How can I print the keys and values of a dictionary by passing a set as an argument?
<p>I have a dictionary containing the names and roll numbers of students namely class_details. And I want to print the roll numbers along with the names of students who are absent, the names of the students who are absent are stored in a set .</p> <p>Source Code :</p> <pre><code>import pandas as pd df = pd.DataFrame({'Roll num': [17013001,17013002,17013003,17013004,17013005,17013006,17013007,17013008,17013009,17013010,17013011,17013012,17013013,17013014,17013015,17013016,17013017,17013018,17013019,17013020,17013021,17013022,17013023,17013024,17013025,17013026,17013027,17013028,17013029,17013030,17013201,18013024,18013025,18013026,18013027,18013028,18013029,18013030,18013031,18013032,18013033,18013034,18013035,18013036,18013037,18013038,18013039,18013040,18013041,18013042,18013043,18013044], 'Name': ['Anjali Saini', 'Anjali Srivastava', 'Ayushi Solanki', 'Bhawana Saroha', 'Ekta Joshi', 'Harshita Virodhiya', 'Harshita Sehrawat', 'Monika Tehlan', 'Mudita Shiromani', 'Muskan Chahar', 'Neelam Chahal', 'Neeru Bhardwaj', 'Neha Yadav', 'Nishu Rathi', 'Pallavi Singh', 'Pragya Tomar', 'Punerva', 'Saumya Singh', 'Shagun Rana', 'Shaily Bavra', 'Sheena Goyal', 'Shipra Gaur', 'Shivani Pawriya', 'Shiwani Sihan', 'Sonam Bhardwaj', 'Sonika Singh', 'Srishti Rajbhar', 'Sujata', 'Ujjawal Tyagi', 'Vishwa Rana', 'Shalu Yadav', 'Aparna Singh', 'Bharti Kaushik', 'Deepika Yadav', 'Gunika Mehra', 'Heena Saini', 'Himanshi Singh', 'Kajal Rajput', 'Anshu Tomar', 'Krishna Aggarwal', 'Kusum Kundu', 'Monika Kushwaha', 'Monika Shira', 'Neha Kardam', 'Nisha Pahal', 'Osheen Kamboj', 'Preeti Deshwal', 'Priyanka Sharma', 'Renu Malik', 'Renu Bisht', 'Sonia Kadyan', 'Swati Arya']}) class_list = df['Name'] class_set = set(class_list) present = pd.DataFrame({'Attendees': ['Anshu Tomar','Aparna Singh','Ayushi Solanki','Bharti Kaushik','Bhawana Saroha','Deepika Yadav','Harshita Sehrawat','Harshita Virodhiya','Heena Saini','Kajal Rajput','Krishna Aggarwal','Kusum Kundu','Monika Shira','Mudita Shiromani','Muskan Chahar','Neha Kardam','Neha Yadav','Nisha Pahal','Nishu Rathi','Pragya Tomar','Preeti Deshwal','Punerva','Renu Bisht','Renu Malik','Saumya Singh','Shaily Bavra','Shalu Yadav','Sheena Goyal','Shivani Pawriya','Sonam Bhardwaj','Sonia Kadyan','Sonika Singh','Sujata','Swati Arya','Ujjawal Tyagi','bala suthar']}) present_list = present['Attendees'] present_set = set(present_list) pres = len(present_set) pres_num = pres - 1 #subtracting the extra 1 count for the faculty abs_num = 52 - pres_num #52 is the strength of the class absent_set = class_set - present_set print(&quot;Attendance of Java Lecture, 27/05&quot;) print(&quot;Number of Present Students:&quot;, pres_num) print(&quot;Number of Absentees:&quot;, abs_num) print(&quot;Names of Absentees are :&quot;, *sorted(absent_set),sep=&quot;\n&quot;) class_details = df.set_index('Name')['Roll num'].to_dict() </code></pre> <p>My Output:</p> <pre><code>Attendance of Java Lecture, 27/05 Number of Present Students: 35 Number of Absentees: 17 Names of Absentees are : Anjali Saini Anjali Srivastava Ekta Joshi Gunika Mehra Himanshi Singh Monika Kushwaha Monika Tehlan Neelam Chahal Neeru Bhardwaj Osheen Kamboj Pallavi Singh Priyanka Sharma Shagun Rana Shipra Gaur Shiwani Sihan Srishti Rajbhar Vishwa Rana </code></pre> <p>Desired Output is a list of students containing the roll numbers and names of absent students.</p>
<pre><code># Python program to demonstrate
# passing a dictionary as an argument

# A function that takes a dictionary
# as an argument
def func(d):
    for key, value in d.items():
        print(&quot;key:&quot;, key, &quot;Value:&quot;, value)

# Driver's code
D = {'a': 1, 'b': 2, 'c': 3}
func(D)
</code></pre>
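<p>Applied to the question's data, a sketch (this assumes <code>class_details</code> and <code>absent_set</code> exist exactly as built in the question's code):</p>

<pre><code># Build a dict of only the absentees, then pass it to func
absent_details = {name: class_details[name] for name in sorted(absent_set)}
func(absent_details)
</code></pre>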
python|pandas|dictionary|set
0
6,991
41,298,078
Scipy interp2d interpolate masked fill values
<p>I want to interpolate data (120*120) in order to get output data (1200*1200).</p> <p>In this way I'm using <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.interp2d.html" rel="nofollow noreferrer"><code>scipy.interpolate.interp2d</code></a>.</p> <p>Below is my input data, where 255 corresponds to fill values, I mask these values before the interpolation.</p> <p><a href="https://i.stack.imgur.com/Eqxzr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eqxzr.png" alt="Input data"></a></p> <p>I'm using the code below:</p> <pre><code>tck = interp2d(np.linspace(0, 1200, data.shape[1]), np.linspace(0, 1200, data.shape[0]), data, fill_value=255) data = tck(range(1200), range(1200)) data = np.ma.MaskedArray(data, data == 255) </code></pre> <p>I get the following result:</p> <p><a href="https://i.stack.imgur.com/RyYZg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RyYZg.jpg" alt="Output data"></a></p> <p>Fill values have been interpolated.</p> <p>How can I interpolate my data without interpolate fill values ?</p>
<p>I found a solution with <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.griddata.html" rel="nofollow noreferrer">scipy.interpolate.griddata</a> but I'm not sure it's the best one.</p>

<p>I interpolate the data with the <code>nearest</code> method parameter, which returns the value at the data point closest to the point of interpolation.</p>

<pre><code>points = np.meshgrid(np.linspace(0, 1200, data.shape[1]), np.linspace(0, 1200, data.shape[0]))
points = list(zip(points[0].flatten(), points[1].flatten()))  # list() needed on Python 3, where zip is lazy
xi = np.meshgrid(np.arange(1200), np.arange(1200))
xi = list(zip(xi[0].flatten(), xi[1].flatten()))

tck = griddata(np.array(points), data.flatten(), np.array(xi), method='nearest')
data = tck.reshape((1200, 1200))
</code></pre>

<p><a href="https://i.stack.imgur.com/E507Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E507Y.png" alt="Output data"></a></p>
python|arrays|numpy|scipy|interpolation
1
6,992
41,664,734
Splitting MNIST data tensorflow
<p>I've been following the tensorflow tutorials. I've imported the MNIST dataset and ran the code for a 2 layer convolutional neural net. It took nearly 45 minutes to train. I want to cut down the training data by discarding some of the data. How do I do that? Here's the code:</p> <pre><code>import tensorflow as tf import numpy as np from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1], padding='SAME') x = tf.placeholder(tf.float32, shape=[None, 784]) y_ = tf.placeholder(tf.float32, [None, 10]) W_conv1 = weight_variable([5, 5, 1, 32]) b_conv1 = bias_variable([32]) x_image = tf.reshape(x, [-1,28,28,1]) h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) W_conv2 = weight_variable([5, 5, 32, 64]) b_conv2 = bias_variable([64]) h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) W_fc1 = weight_variable([7 * 7 * 64, 1024]) b_fc1 = bias_variable([1024]) h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) keep_prob = tf.placeholder(tf.float32) h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) W_fc2 = weight_variable([1024, 10]) b_fc2 = bias_variable([10]) y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2 cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_conv, y_)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) sess = tf.Session() sess.run(tf.initialize_all_variables()) for i in range(20000): batch = mnist.train.next_batch(50) if i%100 == 0: train_accuracy = accuracy.eval(session=sess,feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(session=sess,feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print("test accuracy %g"%accuracy.eval(session=sess,feed_dict={x: np.split(mnist.test.images,5)[0], y_: np.split(mnist.test.labels,5)[0], keep_prob: 1.0})) </code></pre> <p>I cut down the size of testing data since it's a numpy array. How do I do the same for training data?</p>
<p>You are using the dataset provider defined in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py</a></p>

<p>To reduce the number of training samples, you can change this file (line 237), or create a modified copy and import it instead of</p>

<pre><code>from tensorflow.examples.tutorials.mnist import input_data
</code></pre>

<p>which points to the link that I mentioned above.</p>
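<p>A lighter-weight alternative (a sketch: it avoids patching TensorFlow's source by slicing the loaded arrays yourself and drawing batches manually; the 10,000-example cap is just an illustrative number):</p>

<pre><code>import numpy as np

# keep only the first 10k training examples
images = mnist.train.images[:10000]
labels = mnist.train.labels[:10000]

for i in range(20000):
    idx = np.random.choice(len(images), 50)   # mimics mnist.train.next_batch(50)
    batch_xs, batch_ys = images[idx], labels[idx]
    # feed batch_xs, batch_ys as before
</code></pre>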
tensorflow|mnist
1
6,993
41,232,621
Python: Error while installing Numpy & Pandas
<p>I am trying to install numpy, scipy and pandas but getting the following error:</p> <pre><code>Aleeshas-MacBook-Air:~ aleesha$ pip install numpy scipy pandas Requirement already satisfied (use --upgrade to upgrade): numpy in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python Requirement already satisfied (use --upgrade to upgrade): scipy in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python Collecting pandas Using cached pandas-0.19.1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl Requirement already satisfied (use --upgrade to upgrade): pytz&gt;=2011k in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from pandas) Requirement already satisfied (use --upgrade to upgrade): python-dateutil in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from pandas) Installing collected packages: pandas Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 742, in install **kwargs File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 831, in install self.move_wheel_files(self.source_dir, root=root, prefix=prefix) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 1032, in move_wheel_files isolated=self.isolated, File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 346, in move_wheel_files clobber(source, lib_dir, True) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 317, in clobber ensure_dir(destdir) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 83, in ensure_dir os.makedirs(path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pandas' You are using pip version 8.1.2, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Aleeshas-MacBook-Air:~ aleesha$ </code></pre> <p>I have Python version - <code>Python 3.5.2</code>. Why in the first place is pip trying to install at 2.7?</p>
<p>I suggest you install Anaconda; it will solve a lot of these issues. It supports the latest 3.5 version, comes equipped with most of the data-analytics libraries, and the ones you don't have you can get with pip install, which is guaranteed to work since pip itself ships with Anaconda as well.</p>
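<p>As for why pip targets 2.7: the <code>pip</code> on your <code>PATH</code> belongs to the system Python 2.7, which is also why it is denied permission to write into <code>/Library/Python/2.7/site-packages</code>. To install for your Python 3.5 instead, and into your user site so no sudo is needed, invoke pip through that interpreter:</p>

<pre><code>python3 -m pip install --user pandas
</code></pre>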
python|python-3.x|pandas|numpy|scipy
0
6,994
68,453,360
Change dtype of a Pd DataFrame using a for loop
<p>I have a DataFrame that has 400 columns, and I need to change the dtype from object to categorical for all 400 columns. How can I use a loop to do that?</p>
<p>Try this:</p> <pre><code>df = df.astype(pd.CategoricalDtype()) </code></pre>
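<p>If you specifically want the loop form the question asks for, the column-by-column equivalent is:</p>

<pre><code>for col in df.columns:
    df[col] = df[col].astype('category')
</code></pre>

<p>Both do the same thing; the one-liner just converts all 400 columns at once.</p>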
python|pandas|dataframe
1
6,995
68,476,928
matplotlib error : loop of ufunc does not support argument 0 of type float which has no callable rint method
<p>This is my dataSeries : df =</p> <pre><code> count 17 83396.142857 18 35970.000000 19 54082.428571 20 21759.714286 21 16899.571429 22 19870.571429 23 32491.285714 24 40425.285714 25 30780.285714 26 11923.428571 27 13698.571429 28 28028.000000 29 52575.000000 </code></pre> <p>First converted it to int to avoid any issues:</p> <pre><code>df['count'] = df['count'].astype(int) df.index = df.index.astype(int) </code></pre> <p>I am trying to plot the data using :</p> <pre class="lang-py prettyprint-override"><code> _, ax = plt.subplots(1,2) df.plot.pie(ax = ax[1], y = df['count']) plt.show() </code></pre> <p>but it keeps throwing the exception error:</p> <pre><code>Type: TypeError Message: loop of ufunc does not support argument 0 of type float which has no callable rint method Stacktrace: File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/backends/backend_macosx.py&quot;, line 61, in _draw self.figure.draw(renderer) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/artist.py&quot;, line 41, in draw_wrapper return draw(artist, renderer, *args, **kwargs) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/figure.py&quot;, line 1863, in draw mimage._draw_list_compositing_images( File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/image.py&quot;, line 131, in _draw_list_compositing_images a.draw(renderer) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/artist.py&quot;, line 41, in draw_wrapper return draw(artist, renderer, *args, **kwargs) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py&quot;, line 411, in wrapper return func(*inner_args, **inner_kwargs) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axes/_base.py&quot;, line 2747, in draw mimage._draw_list_compositing_images(renderer, self, artists) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/image.py&quot;, line 131, in _draw_list_compositing_images a.draw(renderer) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/artist.py&quot;, line 41, in draw_wrapper return draw(artist, renderer, *args, **kwargs) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axis.py&quot;, line 1164, in draw ticks_to_draw = self._update_ticks() File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axis.py&quot;, line 1022, in _update_ticks major_labels = self.major.formatter.format_ticks(major_locs) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py&quot;, line 249, in format_ticks self.set_locs(values) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py&quot;, line 782, in set_locs self._set_format() File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py&quot;, line 884, in _set_format if np.abs(locs - np.round(locs, decimals=sigfigs)).max() &lt; thresh: File &quot;&lt;__array_function__ internals&gt;&quot;, line 5, in round_ File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py&quot;, line 3739, in round_ return around(a, 
decimals=decimals, out=out) File &quot;&lt;__array_function__ internals&gt;&quot;, line 5, in around File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py&quot;, line 3314, in around return _wrapfunc(a, 'round', decimals=decimals, out=out) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py&quot;, line 66, in _wrapfunc return _wrapit(obj, method, *args, **kwds) File &quot;/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py&quot;, line 43, in _wrapit result = getattr(asarray(obj), method)(*args, **kwds) [0m </code></pre> <p>Any suggestions .. what is wrong here ? I already spent hours to understand and fix it but no luck yet. Any help would be great.</p> <p><strong>Update :</strong></p> <p>Thank you @ehsan for the answer it worked for the pie chart , but I still get the same error when i do simple line plot using:</p> <pre class="lang-py prettyprint-override"><code>plot_kwargs = {'xticks': df.index, 'grid': True, 'color': 'Red', 'title' : &quot;Average &quot;} df.plot(ylabel = 'Average No. of tracks ', **plot_kwargs) </code></pre> <p>Its the exact same error I am getting with this code and I dont understand why . i even used <code>y='count'</code> here too , just to see if anything changes but its the same error. Any insights will be helpful Thank You!</p>
<p>you want this:</p>
<pre><code>_, ax = plt.subplots(1,2)
df.plot.pie(ax = ax[1], y = 'count')
plt.show()
</code></pre>
<p>The mistake is that you use <code>y=df['count']</code> instead of simply <code>y='count'</code>. You are using pandas plotting, so there is no need to pass the column values, only the column name. Also, you do not need to convert the dtype to <code>int</code>, unless you want to.</p>
<p>output:</p>
<p><a href="https://i.stack.imgur.com/ArWeQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ArWeQ.png" alt="enter image description here" /></a></p>
python-3.x|pandas|numpy|matplotlib|plot
0
6,996
68,512,301
Python & Pandas: How to address NaN values in a loop?
<p>With Python and Pandas I'm seeking to take values from CSV cells and write them as txt files via a loop. The structure of the CSV file is:</p> <pre><code>user_id, text, text_number 0, test text A, text_0 1, 2, 3, 4, 5, test text B, text_1 </code></pre> <p>The script below successfully writes a txt file for the first row - it is named text_0.txt and contains <code>test text A</code>.</p> <pre><code>import pandas as pd df= pd.read_csv(&quot;test.csv&quot;, sep=&quot;,&quot;) for index in range(len(df)): with open(df[&quot;text_number&quot;][index] + '.txt', 'w') as output: output.write(df[&quot;text&quot;][index]) </code></pre> <p>However, I receive an error when it proceeds to the next row:</p> <pre><code>TypeError: write() argument must be str, not float </code></pre> <p>I'm guessing the error is generated when it encounters values it reads as <code>NaN</code>. I attempted to add the <code>dropna</code> feature per the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer">pandas documentation</a> like so:</p> <pre><code>import pandas as pd df= pd.read_csv(&quot;test.csv&quot;, sep=&quot;,&quot;) df2 = df.dropna(axis=0, how='any') for index in range(len(df)): with open(df2[&quot;text_number&quot;][index] + '.txt', 'w') as output: output.write(df2[&quot;text&quot;][index]) </code></pre> <p>However, the same issue persists - a txt file is created for the first row, but a new error message is returned for the next row: <code>KeyError: 1</code>.</p> <p>Any suggestions? All assistance greatly appreciated.</p>
<p>The issue here is that you are creating a range index which is not necessarily in the data frame's index. For your use case, you can just iterate through the rows of the data frame and write to the file. One catch: missing values come through as NaN, which is a float, and <code>float('nan')</code> is truthy, so a bare <code>if t.text_number:</code> check would not skip them. Test with <code>pd.notna</code> instead:</p>
<pre><code>for t in df.itertuples():
    if pd.notna(t.text_number):  # skip rows where text_number is NaN
        with open(t.text_number + '.txt', 'w') as output:
            output.write(str(t.text))
</code></pre>
python|pandas|csv|nan
1
6,997
53,317,172
CSV Data preprocessing and reformatting python
<p>I have a csv file with 22000 rows. I need to convert the csv file from the normal rows and columns format to rows with elements separated with commas using python. Elements with same id are to be in a row. New row is to be created for each id.</p> <p><a href="https://i.stack.imgur.com/tUoxG.png" rel="nofollow noreferrer">Sample data before preprocessing is like this</a></p> <p><a href="https://i.stack.imgur.com/ajqsW.jpg" rel="nofollow noreferrer">Dataset after processing is like this</a></p> <p>I just want to delete the date column and display the elements with same id (in column B) inline.</p>
<p>I think you need this,</p>
<pre><code>print(pd.Series(sum(df.T.values.tolist(), [])).value_counts().reset_index())
</code></pre>
python|pandas|csv|numpy
0
6,998
53,042,404
Command Line: Python program says "Killed"
<p>I'm extracting xml data from 465 webpages ,and parsing and storing it in ".csv" file using python dataframe. After running the program for 30 mins, the program saves "200.csv" files and kills itself. The command line execution says "Killed". But when I run the program for first 200 pages and rest of 265 pages for extraction separately, it works well. I had thoroughly searched on the internet, no proper answer for this issue. Could you please tell me what could be the reason?</p> <pre><code>for i in list: addr = str(url + i + '?&amp;$format=json') response = requests.get(addr, auth=(self.user_, self.pass_)) # print (response.content) json_data = response.json() if ('d' in json_data): df = json_normalize(json_data['d']['results']) paginate = 'true' while paginate == 'true': if '__next' in json_data['d']: addr_next = json_data['d']['__next'] response = requests.get(addr_next, auth=(self.user_, self.pass_)) json_data = response.json() df = df.append(json_normalize(json_data['d']['results'])) else: paginate = 'false' try: if(not df.empty): storage = '/usr/share/airflow/documents/output/' + i + '_output.csv' df.to_csv(storage, sep=',', encoding='utf-8-sig') else: pass except: pass </code></pre> <p>Thanks in advance!</p>
<p>It looks like you are running out of memory.</p>
<p>You can try to increase the memory available to the process (quick fix)<br>
or optimise your code for lower memory consumption (best solution).</p>
<p>If speed is not critical, you can try to save intermediate data to temp files and read from them when needed, but I suspect the loop can be optimised for lower memory consumption without touching the file system; after all, the loop is where the memory is being consumed.</p>
<blockquote>
  <p>Also try to run your code without the <strong><code>try</code></strong>/<strong><code>except</code></strong> block, so that errors are not silently swallowed</p>
</blockquote>
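<p>One concrete memory saver, sketched from the question's own loop: <code>df.append</code> copies the whole accumulated frame on every page, which is slow and memory-hungry. Collect the pages in a list and concatenate once at the end:</p>

<pre><code>frames = [json_normalize(json_data['d']['results'])]
while '__next' in json_data['d']:
    response = requests.get(json_data['d']['__next'], auth=(self.user_, self.pass_))
    json_data = response.json()
    frames.append(json_normalize(json_data['d']['results']))
df = pd.concat(frames, ignore_index=True)
</code></pre>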
python|xml|linux|pandas|dataframe
4
6,999
65,863,498
Dropping rows with duplicate string values in the DateTimeIndex
<p>This is my first problem! I'll try to explain it as clearly as possible:</p> <p>I have a Series with a DateTimeIndex like this:</p> <p><a href="https://i.stack.imgur.com/YwuxC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YwuxC.jpg" alt="DateTime Series" /></a></p> <hr /> <p>And I need a function that checks the <strong>&quot;day&quot; value</strong> of the date (e.g. 2020-01-<strong>13</strong> 12:00:00) and <strong>REMOVES</strong> that record <strong>IF</strong> the <strong>day value</strong> matches the day value of the <strong>PREVIOUS</strong> record, example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><strong>Datetime</strong></th> <th><strong>Description</strong></th> <th><strong>Action</strong></th> </tr> </thead> <tbody> <tr> <td>2020-01-13 11:00:00</td> <td>1st record has no previous record to compare to</td> <td><strong>MOVE ON</strong></td> </tr> <tr> <td>2020-01-13 12:00:00</td> <td>2nd record has the same &quot;day&quot; value as previous</td> <td><strong>DROP RECORD</strong></td> </tr> <tr> <td>2020-01-13 13:00:00</td> <td>3rd record has the same &quot;day&quot; value as previous?</td> <td><strong>ALSO DROP RECORD</strong></td> </tr> <tr> <td>2020-01-14 11:00:00</td> <td>4th record has a unique &quot;day&quot; value compared to previous</td> <td><strong>MOVE ON</strong></td> </tr> <tr> <td>2020-02-10 11:00:00</td> <td>5th record has a unique &quot;day&quot; value compared to previous</td> <td><strong>MOVE ON</strong></td> </tr> <tr> <td>2020-03-20 10:00:00</td> <td>6th record has a unique &quot;day&quot; value compared to previous</td> <td><strong>MOVE ON</strong></td> </tr> <tr> <td>2020-06-03 10:00:00</td> <td>7th record has a unique &quot;day&quot; value compared to previous</td> <td><strong>MOVE ON</strong></td> </tr> <tr> <td>2020-06-03 12:00:00</td> <td>8th record has the same &quot;day&quot; value compared to previous</td> <td><strong>DROP RECORD</strong></td> </tr> </tbody> </table> </div><hr /> <p>Notice how the drops need to be in sequential order so that only the <strong>first</strong> <strong>unique time of day</strong> remains in the series (the later times of the same day get dropped). In other words, I want there to be only <strong>one record per day</strong> (per month), and that record needs to be the <strong>first time of the day</strong>. Same &quot;day&quot; values for <strong>different</strong> <strong>months</strong> is allowed!</p> <p>Also keep in mind, I will be applying this function to <strong>hundreds</strong> of other Series just like this one (in fact, each unique Series will be part of a List).</p> <p>I'm sure this is much harder than it seems. For example, you probably can't use some type of [n-1] .loc index to tell the function to compare to the previous index location if you've already dropped a record, because you'd be telling it to look at a missing record? Complicated!</p>
<p>Just make a new column with the <code>date</code> instead of the <code>datetime</code>, and drop the duplicates based on that column.</p>

<p>Create a column with date as the type (here <code>DT</code> stands for your datetime column; if your datetimes live in the index, see the note below):</p>

<pre class="lang-py prettyprint-override"><code>df['Dates'] = df['DT'].dt.date
</code></pre>

<p>Drop duplicates based on the <strong>Dates</strong> column, keeping only the first occurrence:</p>

<pre class="lang-py prettyprint-override"><code>df = df.drop_duplicates('Dates', keep='first')
</code></pre>

<p>To see the result: <code>df</code></p>

<p>If you want, then drop the new column you created, like this:</p>

<pre class="lang-py prettyprint-override"><code>df = df.drop(['Dates'], axis=1)
</code></pre>
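<p>If, as in the question, the datetimes are the <em>index</em> of a Series rather than a column, the same idea works without a helper column: <code>normalize()</code> strips the time component, and <code>duplicated(keep='first')</code> flags every later record on the same day:</p>

<pre class="lang-py prettyprint-override"><code># s is the Series with the DatetimeIndex
s = s[~s.index.normalize().duplicated(keep='first')]
</code></pre>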
python|pandas|data-manipulation
2