Dataset columns: Unnamed: 0 (int64, 0–378k), id (int64, 49.9k–73.8M), title (string, 15–150 chars), question (string, 37–64.2k chars), answer (string, 37–44.1k chars), tags (string, 5–106 chars), score (int64, -10–5.87k).
10,300
71,327,256
pandas dataframe style numbers to currency
<p>I have a simple dataframe that contains Dates as columns and numbers as row values. I am attempting to add formatting to the numbers so that they display in currency format of $x.xx. I can't figure out what I am doing wrong as no error is produced and all the other formats I have are being applied except for the currency formatting. Below is a snippet of the code in question.</p> <pre><code>format_df = final_df.style. apply(lambda row: ['background-color: yellow' if row.name in team_totals_index else 'background-color:#FFCC99' for value in row], axis=1).\ apply(lambda row: ['${:,.2f}' if row.name in team_totals_index else '' for value in row], axis=1).\ apply(lambda row: ['font-weight: bold' if row.name in team_totals_index else '' for value in row], axis=1).\ apply(lambda row: ['border-top-style:solid' if row.name in team_totals_index else '' for value in row], axis=1).\ apply(lambda row: ['border-bottom-style:solid' if row.name in team_totals_index else '' for value in row], axis=1) </code></pre> <p><a href="https://i.stack.imgur.com/A7P1d.png" rel="nofollow noreferrer">here's an image of the excel sheet it produces, the bold, border, and colors show but not the currency format</a></p> <p>any insight on how I could fix this so that the currency format will reflect properly?</p>
<p>after looking through the official documentation I stumbled upon &quot;number-format&quot; which allowed me to add the currency format I was looking for.</p> <p><a href="https://i.stack.imgur.com/vvVND.png" rel="nofollow noreferrer">here's an image from the docs showing what is acceptable when writing to excel</a></p> <p>in case anyone is curious, I modified my code to the below:</p> <pre><code> #style the dataframe totals in bold/border and highlight background colors format_df = final_df.style. apply(lambda row: ['background-color: #FF9900' if row.name in team_totals_index else 'background-color:#FFCC99' for value in row], axis=1).\ apply(lambda row: ['number-format: $#,##0.00' if row.name in team_totals_index else 'number-format: $#,##0.00' for value in row], axis=1).\ apply(lambda row: ['font-weight: bold' if row.name in team_totals_index else '' for value in row], axis=1).\ apply(lambda row: ['border-top-style:solid' if row.name in team_totals_index else '' for value in row], axis=1).\ apply(lambda row: ['border-bottom-style:solid' if row.name in team_totals_index else '' for value in row], axis=1) return format_df </code></pre> <p>This has yielded the results I was looking for.</p> <p><a href="https://i.stack.imgur.com/a9GEV.png" rel="nofollow noreferrer">excel sheet with the proper formatting</a></p>
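<p>For reference, below is a minimal, self-contained sketch of the same idea, using a made-up frame in place of <code>final_df</code>. The <code>number-format</code> string takes effect when the Styler is exported to Excel, while <code>Styler.format</code> applies an equivalent currency format for HTML/notebook display.</p> <pre><code>import pandas as pd

# Hypothetical stand-in for final_df
df = pd.DataFrame({"Jan": [1200.5, 300.0], "Feb": [980.25, 410.1]})

# Excel export: number-format is interpreted by the Excel writer
styled = df.style.apply(lambda row: ["number-format: $#,##0.00" for _ in row], axis=1)
styled.to_excel("report.xlsx")

# HTML/notebook display: Styler.format applies the same currency string
df.style.format("${:,.2f}")
</code></pre>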
python|pandas|dataframe|formatting
2
10,301
71,350,027
Efficient way to "broadcast" the sum of elements of two 1D arrays to a 2D array
<p>Is there a more efficient way (without loops) to do this with Numpy ?:</p> <pre class="lang-py prettyprint-override"><code>for i, x in enumerate(array1): for j, y in enumerate(array2): result[i, j] = x + y </code></pre> <p>I was trying to use einsum without success yet.</p> <p>Thank you !</p>
<p>Simply use broadcasting with an extra dimension:</p> <pre><code>result = array1[:,None]+array2 </code></pre>
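<p>A quick, self-contained check of the broadcasting approach against the original loop (array sizes here are arbitrary):</p> <pre><code>import numpy as np

array1 = np.random.rand(4)
array2 = np.random.rand(3)

# (4, 1) + (3,) broadcasts to (4, 3)
result = array1[:, None] + array2

# Same computation with the loops from the question, for verification
check = np.empty((array1.size, array2.size))
for i, x in enumerate(array1):
    for j, y in enumerate(array2):
        check[i, j] = x + y

assert np.allclose(result, check)
# np.add.outer(array1, array2) produces the identical result
</code></pre>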
python|numpy|optimization|array-broadcasting|numpy-einsum
2
10,302
71,182,988
Pandas Dataframe: Dropping Selected rows with 0.0 float type values
<p>Please I have a dataset that contains amount as float type. Some of the rows contain values of 0.00 and because they skew the dataset, I need to drop them. I have temporarily set the &quot;Amount&quot; to index and sorted the value as well. Afterwards, I attempted to drop the rows after subsetting with <strong>iloc</strong> but keep getting an error message of the form <strong>ValueError: Buffer has wrong number of dimensions (expected 1, got 3)</strong></p> <p>'''mortgage = mortgage.set_index('Gross Loan Amount').sort_values('Gross Loan Amount') mortgage.drop([mortgage.loc[0.0]])'''</p> <p>I equally tried this: '''mortgage.drop(mortgage.loc[0.0])''' it flagged the error of the form <strong>KeyError: &quot;[Column_names] not found in axis&quot;</strong></p> <p>Please how else can I accomplish the task?</p>
<p>You could make a boolean frame and then use <code>any</code>:</p> <pre><code>df = df[~(df == 0).any(axis=1)] </code></pre> <p>In this code, every row that has at least one zero in any of its columns is removed.</p>
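<p>If only the loan-amount column should decide which rows to drop (rather than a zero in any column), a plain column filter is enough. A small sketch with made-up data and the column name from the question:</p> <pre><code>import pandas as pd

mortgage = pd.DataFrame({"Gross Loan Amount": [0.0, 150000.0, 0.0, 98000.0],
                         "Rate": [3.1, 2.9, 3.4, 3.0]})

# Drop rows where any column is zero (the approach above)
no_zero_anywhere = mortgage[~(mortgage == 0).any(axis=1)]

# Drop rows only where the amount column itself is zero
nonzero_amount = mortgage[mortgage["Gross Loan Amount"] != 0]
</code></pre>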
python-3.x|pandas|dataframe|subset|delete-row
0
10,303
52,255,307
Reshape Dataframe pandas with merge cell
<p>I have </p> <pre><code>df = pd.DataFrame({ 'key': ['value1','value2','value1','value2'], 'domain': ['domain1.com','domain1.com','domain2.com','domain2.com'], 'url' :['urlB','urlA','url1','url2'], 'score' : [12,14,200,2001]}) </code></pre> <p>I'd like to get result <a href="https://i.stack.imgur.com/jQt1W.jpg" rel="nofollow noreferrer">result</a></p> <p>I've tried with transpose, stack... but can not get the same.</p> <p>I'm new to Python Pandas, Please advice</p> <p>[Edit]</p> <p>Thanks @jezrael for the response, it works by using </p> <pre><code>df = df.set_index(['key','domain']).unstack().swaplevel(0,1, axis=1).sort_index(axis=1) </code></pre> <p>Move to the next level, for sorting, I started from the beginning for adding more rows:</p> <pre><code>df = pd.DataFrame({ 'key': ['value1','value2','value1','value2','value2','value3'], 'domain': ['domain1.com','domain1.com','domain2.com','domain2.com','domain3.com','domain4.com'], 'url' :['urlB','urlA','url1','url2','url3','url4'], 'score' : [12,14,200,2001,10,5] }) dfdomains = pd.DataFrame({ 'domain': ['domain1.com','domain2.com', 'domain3.com','domain4.com'], 'order' : [3,1,2,4] }) </code></pre> <p>I get dataframe by your answer:</p> <pre><code>df1 = df.set_index(['key','domain']).unstack().swaplevel(0,1, axis=1).sort_index(axis=1, ascending=False) </code></pre> <p>That gave me the result:</p> <pre><code>domain domain4.com domain3.com domain2.com domain1.com url score url score url score url score key value1 NaN NaN NaN NaN url1 200.0 urlB 12.0 value2 NaN NaN url3 10.0 url2 2001.0 urlA 14.0 value3 url4 5.0 NaN NaN NaN NaN NaN NaN </code></pre> <p>I'd like to <code>sort df1</code> by <code>order of dfdomains</code>: that means the first columns of <code>df1</code> is <code>domain2.com (order= 1)</code></p> <p>Expecting: <a href="https://i.stack.imgur.com/aKQsF.jpg" rel="nofollow noreferrer">image</a></p> <p>Can you please advice @jezrael Thanks</p>
<p>Use:</p> <pre><code>df = df.set_index(['key','domain']).unstack().swaplevel(0,1, axis=1).sort_index(axis=1) print (df) domain domain1.com domain2.com score url score url key value1 12 urlB 200 url1 value2 14 urlA 2001 url2 </code></pre> <ol> <li>First<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> for <code>MultiIndex</code> </li> <li>Reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> for reshape, </li> <li>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.swaplevel.html" rel="nofollow noreferrer"><code>swaplevel</code></a> in <code>MultiIndex</code> in columns </li> <li>Last sort them by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a></li> </ol> <p>EDIT: First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> for ordering by column <code>order</code> and then add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> - is necessary all values of <code>order</code> has to be in <code>df['domain']</code></p> <pre><code>order = dfdomains.sort_values('order')['domain'] print (order) 1 domain2.com 2 domain3.com 0 domain1.com 3 domain4.com Name: domain, dtype: object df1 = (df.set_index(['key','domain']) .unstack() .swaplevel(0,1, axis=1) .sort_index(axis=1, ascending=False) .reindex(order, axis=1, level=0)) print (df1) domain domain2.com domain3.com domain1.com domain4.com \ url score url score url score url key value1 url1 200.0 NaN NaN urlB 12.0 NaN value2 url2 2001.0 url3 10.0 urlA 14.0 NaN value3 NaN NaN NaN NaN NaN NaN url4 domain score key value1 NaN value2 NaN value3 5.0 </code></pre>
python|pandas
3
10,304
52,134,130
How to restrict the absolute value of each dimension of a sparse gradient from being too large?
<p>Consider the code below:</p> <pre><code>import tensorflow as tf inputs=tf.placeholder(tf.int32, [None]) labels=tf.placeholder(tf.int32, [None]) with tf.variable_scope('embedding'): embedding=tf.get_variable('embedding', shape=[2000000, 300], dtype=tf.float32) layer1=tf.nn.embedding_lookup(embedding, inputs) logits=tf.layers.dense(layer1, 2000000) loss=tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits) cost=tf.reduce_sum(loss) optimizer=tf.train.GradientDescentOptimizer(0.01) grads, vars=zip(*optimizer.compute_gradients(cost)) for g in grads: print(0, g) grads1=[tf.clip_by_value(g, -100, 100) for g in grads] for g in grads1: print(1, g) grads2, _=tf.clip_by_global_norm(grads, 10) for g in grads2: print(2, g) </code></pre> <p>The output is:</p> <pre><code>0 IndexedSlices(indices=Tensor("gradients/embedding_lookup_grad/Reshape_1:0", shape=(?,), dtype=int32), values=Tensor("gradients/embedding_lookup_grad/Reshape:0", shape=(?, 300), dtype=float32), dense_shape=Tensor("gradients/embedding_lookup_grad/ToInt32:0", shape=(2,), dtype=int32)) 0 Tensor("gradients/dense/MatMul_grad/tuple/control_dependency_1:0", shape=(300, 2000000), dtype=float32) 0 Tensor("gradients/dense/BiasAdd_grad/tuple/control_dependency_1:0", shape=(2000000,), dtype=float32) C:\Python\Python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py:97: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 600000000 elements. This may consume a large amount of memory. num_elements) 1 Tensor("clip_by_value:0", shape=(?, 300), dtype=float32) 1 Tensor("clip_by_value_1:0", shape=(300, 2000000), dtype=float32) 1 Tensor("clip_by_value_2:0", shape=(2000000,), dtype=float32) 2 IndexedSlices(indices=Tensor("gradients/embedding_lookup_grad/Reshape_1:0", shape=(?,), dtype=int32), values=Tensor("clip_by_global_norm/clip_by_global_norm/_0:0", shape=(?, 300), dtype=float32), dense_shape=Tensor("gradients/embedding_lookup_grad/ToInt32:0", shape=(2,), dtype=int32)) 2 Tensor("clip_by_global_norm/clip_by_global_norm/_1:0", shape=(300, 2000000), dtype=float32) 2 Tensor("clip_by_global_norm/clip_by_global_norm/_2:0", shape=(2000000,), dtype=float32) </code></pre> <p>I know there are two ways to restrict gradients from being too large. <code>tf.clip_by_value</code> to restrict each dimentions, and <code>tf.clip_by_global_norm</code> to restrict according global gradients norms.</p> <p>However, <code>tf.clip_by_value</code> will cast a sparse gradient into a dense one, which significantly increase the memory usage and decreases the calculation efficiency, just as the warning indicates, while <code>tf.clip_by_global_norm</code> will not. Although I can understand why this is designed, how can I restrict the absolut value of each dimention of a sparse gradient from being too large without efficiency decrease?</p> <p>Please don't tell me just use <code>tf.clip_by_global_norm</code>, I know this is ok for most cases, but is not what I want.</p>
<p>Now I use this, and it works well:</p> <pre><code>grads=[tf.IndexedSlices(tf.clip_by_value(g.values, -max_grad_value, max_grad_value), g.indices, g.dense_shape) if isinstance(g, tf.IndexedSlices) else tf.clip_by_value(g, -max_grad_value, max_grad_value) for g in grads] </code></pre>
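<p>For context, a sketch (assuming the TensorFlow 1.x graph from the question) of how this clipped list would typically be fed back into the optimizer; <code>max_grad_value</code> is whatever bound you choose:</p> <pre><code>max_grad_value = 100.0  # example bound; pick to suit your model
grads, vars = zip(*optimizer.compute_gradients(cost))
grads = [tf.IndexedSlices(tf.clip_by_value(g.values, -max_grad_value, max_grad_value),
                          g.indices, g.dense_shape)
         if isinstance(g, tf.IndexedSlices)
         else tf.clip_by_value(g, -max_grad_value, max_grad_value)
         for g in grads]
# apply_gradients accepts IndexedSlices, so the embedding update stays sparse
train_op = optimizer.apply_gradients(zip(grads, vars))
</code></pre>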
python|tensorflow|sparse-matrix|gradient
0
10,305
52,440,927
pandas merging 300 dataframes
<p>The purpose of this code is</p> <ol> <li>Scrape a 300 of tables via Pandas and Beautiful Soup</li> <li>Concatenate this tables into a single data frame The code works fine for the first step. But it is not working in the second.</li> </ol> <p>Here is the code:</p> <pre><code>import pandas as pd from urllib.request import urlopen, Request from bs4 import BeautifulSoup header = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 " "Safari/537.36", "X-Requested-With": "XMLHttpRequest"} url = open(r"C:\Users\Sayed\Desktop\script\links.txt").readlines() for site in url: req = Request(site, headers=header) page = urlopen(req) soup = BeautifulSoup(page, 'lxml') table = soup.find('table') df = pd.read_html(str(table), parse_dates={'DateTime': ['Release Date', 'Time']}, index_col=[0])[0] df = pd.concat(df, axis=1, join='outer').sort_index(ascending=False) print(df) </code></pre> <p>Here is the error:</p> <p>Traceback (most recent call last):</p> <p>File "D:/Projects/Tutorial/try.py", line 18, in </p> <pre><code>df = pd.concat(df, axis=1, join='outer').sort_index(ascending=False) </code></pre> <p>File "C:\Users\Sayed\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 225, in concat copy=copy, sort=sort)</p> <p>File "C:\Users\Sayed\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py", line 241, in <strong>init</strong></p> <pre><code>'"{name}"'.format(name=type(objs).__name__)) </code></pre> <p>TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame</p>
<p>The Pandas concat function takes a <em>sequence or mapping of Series, DataFrame, or Panel objects</em> as its first argument. Your code is currently passing a single DataFrame.</p> <p>I suspect the following will fix your issue:</p> <pre><code>import pandas as pd from urllib.request import urlopen, Request from bs4 import BeautifulSoup header = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 " "Safari/537.36", "X-Requested-With": "XMLHttpRequest"} url = open(r"C:\Users\Sayed\Desktop\script\links.txt").readlines() dfs = [] for site in url: req = Request(site, headers=header) page = urlopen(req) soup = BeautifulSoup(page, 'lxml') table = soup.find('table') df = pd.read_html(str(table), parse_dates={'DateTime': ['Release Date', 'Time']}, index_col=[0])[0] dfs.append(df) concat_df = pd.concat(dfs, axis=1, join='outer').sort_index(ascending=False) print(concat_df) </code></pre> <p>All I have done is to create a list called <em>dfs</em>, as a place to append your DataFrames as you iterate through the sites. Then <em>dfs</em> is passed as the argument to concat.</p>
python|pandas|beautifulsoup
4
10,306
60,593,624
Modify trained model architecture and continue training Keras
<p>I want to train a model in a sequential manner. That is I want to train the model initially with a simple architecture and once it is trained, I want to add a couple of layers and continue training. Is it possible to do this in Keras? If so, how? </p> <p>I tried to modify the model architecture. But until I compile, the changes are not effective. Once I compile, all the weights are re-initialized and I lose all the trained information.</p> <p>All the questions in web and SO I found are either about loading a pre-trained model and continuing training or modifying the architecture of pre-trained model and then only test it. I didn't find anything related to my question. Any pointers are also highly appreciated.</p> <p>PS: I'm using Keras in tensorflow 2.0 package.</p>
<p>Without knowing the details of your model, the following snippet might help:</p> <pre><code>from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Input # Train your initial model def get_initial_model(): ... return model model = get_initial_model() model.fit(...) model.save_weights('initial_model_weights.h5') # Use Model API to create another model, built on your initial model initial_model = get_initial_model() initial_model.load_weights('initial_model_weights.h5') nn_input = Input(...) x = initial_model(nn_input) x = Dense(...)(x) # This is the additional layer, connected to your initial model nn_output = Dense(...)(x) # Combine your model full_model = Model(inputs=nn_input, outputs=nn_output) # Compile and train as usual full_model.compile(...) full_model.fit(...) </code></pre> <p>Basically, you train your initial model, save it. And reload it again, and wrap it together with your additional layers using the <code>Model</code> API. If you are not familiar with <code>Model</code> API, you can check out the Keras documentation <a href="https://keras.io/models/model/" rel="nofollow noreferrer">here</a> (afaik the API remains the same for Tensorflow.Keras 2.0).</p> <p>Note that you need to check if your initial model's final layer's output shape is compatible with the additional layers (e.g. you might want to remove the final Dense layer from your initial model if you are just doing feature extraction).</p>
python|tensorflow|keras|pre-trained-model
4
10,307
72,577,958
Tensorflow layer expects 1 tensor input, but getting two tensor input
<p>I am new to deep learning currently trying to learn neural network.However,I encountered this problem while training the neural network.</p> <p>This is the input .I thought by using the tensor Dataset I am ready to pass the values into the model I build.My train.values is the feature while trainLabel is the label(output)</p> <pre><code>train_dataset = tf.data.Dataset.from_tensor_slices((train.values, trainLabel.values)) test_dataset = tf.data.Dataset.from_tensor_slices((test.values, testLabel.values)) cv_dataset = tf.data.Dataset.from_tensor_slices((val.values, valLabel.values)) for features, targets in train_dataset.take(5): print ('Features: {}, Target: {}'.format(features, targets)) Features: [ 0 40 0 0 0 1 31 33 17], Target: 29 Features: [ 0 32 0 1 0 1 50 55 44], Target: 7 Features: [ 0 32 1 0 1 1 12 43 31], Target: 34 Features: [ 0 29 1 1 1 0 56 52 37], Target: 14 Features: [ 0 25 0 0 1 1 29 30 15], Target: 17 </code></pre> <p>This is my model using Keras API:</p> <pre><code>model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(32,9)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) </code></pre> <p>I am trying to preview the output before training the neural network.</p> <pre><code>train_iterator = train_dataset.as_numpy_iterator() one_batch = train_iterator.next() predictions = model(train_dataset) predictions[:5] </code></pre> <p>However, I got this error :</p> <pre><code>ValueError: Layer &quot;sequential_2&quot; expects 1 input(s), but it received 2 input tensors. Inputs received: [&lt;tf.Tensor: shape=(9,), dtype=int64, numpy=array([ 0, 32, 0, 1, 1, 1, 15, 15, 5])&gt;, 13] </code></pre> <p>Solution:</p>
<p>This issue is similar to your <a href="https://stackoverflow.com/questions/72568479/typeerror-inputs-to-a-layer-should-be-tensor">other</a> question on Stack Overflow; please refer to the answer posted there.</p> <p>As mentioned in that answer, change the <code>input_shape</code> in the model definition here to match the 9 feature columns and avoid this value error:</p> <pre><code> tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(9,)), </code></pre>
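<p>A minimal sketch of how that fix might look together with batching, assuming the 9 feature columns shown in the question (the batch size of 32 is arbitrary). The model should be called on a batch of features, not on the whole dataset object:</p> <pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(9,)),
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])

batched = train_dataset.batch(32)     # each element becomes (features_batch, labels_batch)
features, targets = next(iter(batched))
predictions = model(features)         # pass only the features tensor to the model
</code></pre>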
python|tensorflow|keras
0
10,308
72,581,494
How does tensorflow handle training data passed to a neural network?
<p>I am having an issue with my code that I modified from <a href="https://keras.io/examples/generative/wgan_gp/" rel="nofollow noreferrer">https://keras.io/examples/generative/wgan_gp/</a> . Instead of the data being images, my data is a (1001,2) array of sequential data. The first column being the time and the second the velocity measurements. I'm getting this error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_14704/3651127346.py in &lt;module&gt; 21 # Training the WGAN-GP model 22 tic = time.perf_counter() ---&gt; 23 WGAN.fit(dataset, batch_size=batch_Size, epochs=n_epochs, callbacks=[cbk]) 24 toc = time.perf_counter() 25 time_elapsed(toc-tic) ~\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---&gt; 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~\Anaconda3\lib\site-packages\tensorflow\python\framework\func_graph.py in autograph_handler(*args, **kwargs) 1145 except Exception as e: # pylint:disable=broad-except 1146 if hasattr(e, &quot;ag_error_metadata&quot;): -&gt; 1147 raise e.ag_error_metadata.to_exception(e) 1148 else: 1149 raise ValueError: in user code: File &quot;C:\Users\sissonn\Anaconda3\lib\site-packages\keras\engine\training.py&quot;, line 1021, in train_function * return step_function(self, iterator) File &quot;C:\Users\sissonn\Anaconda3\lib\site-packages\keras\engine\training.py&quot;, line 1010, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File &quot;C:\Users\sissonn\Anaconda3\lib\site-packages\keras\engine\training.py&quot;, line 1000, in run_step ** outputs = model.train_step(data) File &quot;C:\Users\sissonn\AppData\Local\Temp/ipykernel_14704/3074469771.py&quot;, line 141, in train_step gp = self.gradient_penalty(batch_size, x_real, x_fake) File &quot;C:\Users\sissonn\AppData\Local\Temp/ipykernel_14704/3074469771.py&quot;, line 106, in gradient_penalty alpha = tf.random.uniform(batch_size,1,1) ValueError: Shape must be rank 1 but is rank 0 for '{{node random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=0, seed2=0](strided_slice)' with input shapes: []. </code></pre> <p>And here is my code:</p> <pre><code>import time from tqdm.notebook import tqdm import os import tensorflow as tf from tensorflow import keras from tensorflow.keras import Model from tensorflow.keras.layers import Dense, Input import numpy as np import matplotlib.pyplot as plt def define_generator(latent_dim): # This function creates the generator model using the functional API. # Layers... 
# Input Layer inputs = Input(shape=latent_dim, name='INPUT_LAYER') # 1st hidden layer x = Dense(50, activation='relu', name='HIDDEN_LAYER_1')(inputs) # 2nd hidden layer x = Dense(150, activation='relu', name='HIDDEN_LAYER_2')(x) # 3rd hidden layer x = Dense(300, activation='relu', name='HIDDEN_LAYER_3')(x) # 4th hidden layer x = Dense(150, activation='relu', name='HIDDEN_LAYER_4')(x) # 5th hidden layer x = Dense(50, activation='relu', name='HIDDEN_LAYER_5')(x) # Output layer outputs = Dense(2, activation='linear', name='OUPUT_LAYER')(x) # Instantiating the generator model model = Model(inputs=inputs, outputs=outputs, name='GENERATOR') return model def generator_loss(fake_logits): # This function calculates and returns the WGAN-GP generator loss. # Expected value of critic ouput from fake images expectation_fake = tf.reduce_mean(fake_logits) # Loss to minimize loss = -expectation_fake return loss def define_critic(): # This function creates the critic model using the functional API. # Layers... # Input Layer inputs = Input(shape=2, name='INPUT_LAYER') # 1st hidden layer x = Dense(50, activation='relu', name='HIDDEN_LAYER_1')(inputs) # 2nd hidden layer x = Dense(150, activation='relu', name='HIDDEN_LAYER_2')(x) # 3rd hidden layer x = Dense(300, activation='relu', name='HIDDEN_LAYER_3')(x) # 4th hidden layer x = Dense(150, activation='relu', name='HIDDEN_LAYER_4')(x) # 5th hidden layer x = Dense(50, activation='relu', name='HIDDEN_LAYER_5')(x) # Output layer outputs = Dense(1, activation='linear', name='OUPUT_LAYER')(x) # Instantiating the critic model model = Model(inputs=inputs, outputs=outputs, name='CRITIC') return model def critic_loss(real_logits, fake_logits): # This function calculates and returns the WGAN-GP critic loss. # Expected value of critic output from real images expectation_real = tf.reduce_mean(real_logits) # Expected value of critic output from fake images expectation_fake = tf.reduce_mean(fake_logits) # Loss to minimize loss = expectation_fake - expectation_real return loss class define_wgan(keras.Model): # This class creates the WGAN-GP object. # Attributes: # critic = the critic model. # generator = the generator model. # latent_dim = defines generator input dimension. # critic_steps = defines how many times the discriminator gets trained for each training cycle. # gp_weight = defines and returns the critic gradient for the gradient penalty term. # Methods: # compile() = defines the optimizer and loss function of both the critic and generator. # gradient_penalty() = calcuates and returns the gradient penalty term in the WGAN-GP loss function. # train_step() = performs the WGAN-GP training by updating the critic and generator weights # and returns the loss for both. Called by fit(). def __init__(self, gen, critic, latent_dim, n_critic_train, gp_weight): super().__init__() self.critic = critic self.generator = gen self.latent_dim = latent_dim self.critic_steps = n_critic_train self.gp_weight = gp_weight def compile(self, generator_loss, critic_loss): super().compile() self.generator_optimizer = keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5, beta_2=0.9) self.critic_optimizer = keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5, beta_2=0.9) self.generator_loss_function = generator_loss self.critic_loss_function = critic_loss def gradient_penalty(self, batch_size, x_real, x_fake): # Random uniform samples of points between distribution. # &quot;alpha&quot; must be a tensor so that &quot;x_interp&quot; will also be a tensor. 
alpha = tf.random.uniform(batch_size,1,1) # Data interpolated between real and fake distributions x_interp = alpha*x_real + (1-alpha)*x_fake # Calculating critic output gradient wrt interpolated data with tf.GradientTape() as gp_tape: gp_tape.watch(x_interp) critc_output = self.discriminator(x_interp, training=True) grad = gp_tape.gradient(critic_output, x_interp)[0] # Calculating norm of gradient grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grad))) # calculating gradient penalty gp = tf.reduce_mean((norm - 1.0)**2) return gp def train_step(self, x_real): # Critic training # Getting batch size for creating latent vectors print(x_real) batch_size = tf.shape(x_real)[0] print(batch_size) # Critic training loop for i in range(self.critic_steps): # Generating latent vectors latent = tf.random.normal(shape=(batch_size, self.latent_dim)) with tf.GradientTape() as tape: # Obtaining fake data from generator x_fake = self.generator(latent, training=True) # Critic output from fake data fake_logits = self.critic(x_fake, training=True) # Critic output from real data real_logits = self.critic(x_real, training=True) # Calculating critic loss c_loss = self.critic_loss_function(real_logits, fake_logits) # Calcuating gradient penalty gp = self.gradient_penalty(batch_size, x_real, x_fake) # Adjusting critic loss with gradient penalty c_loss = c_loss + gp_weight*gp # Calculating gradient of critic loss wrt critic weights critic_grad = tape.gradient(c_loss, self.critic.trainable_variables) # Updating critic weights self.critic_optimizer.apply_gradients(zip(critic_gradient, self.critic.trainable_variables)) # Generator training # Generating latent vectors latent = tf.random.normal(shape=(batch_size, self.latent_dim)) with tf.GradientTape() as tape: # Obtaining fake data from generator x_fake = self.generator(latent, training=True) # Critic output from fake data fake_logits = self.critic(x_fake, training=True) # Calculating generator loss g_loss = self.generator_loss_function(fake_logits) # Calculating gradient of generator loss wrt generator weights genertor_grad = tape.gradient(g_loss, self.generator.trainable_variables) # Updating generator weights self.generator_optimizer.apply_gradients(zip(generator_gradient, self.generator.trainable_variables)) return g_loss, c_loss class GAN_monitor(keras.callbacks.Callback): def __init__(self, n_samples, latent_dim): self.n_samples = n_samples self.latent_dim = latent_dim def on_epoch_end(self, epoch, logs=None): latent = tf.random.normal(shape=(self.n_samples, self.latent_dim)) generated_data = self.model.generator(latent) plt.plot(generated_data) plt.savefig('Epoch _'+str(epoch)+'.png', dpi=300) data = np.genfromtxt('Flight_1.dat', dtype='float', encoding=None, delimiter=',')[0:1001,0] time_span = np.linspace(0,20,1001) dataset = np.concatenate((time_sapn[:,np.newaxis], data[:,np.newaxis]), axis=1) dataset.shape # Training Parameters latent_dim = 100 n_epochs = 10 n_critic_train = 5 gp_weight = 10 batch_Size = 100 # Instantiating the generator and discriminator models gen = define_generator(latent_dim) critic = define_critic() # Instantiating the WGAN-GP object WGAN = define_wgan(gen, critic, latent_dim, n_critic_train, gp_weight) # Compling the WGAN-GP model WGAN.compile(generator_loss, critic_loss) # Instantiating custom Keras callback cbk = GAN_monitor(n_samples=1, latent_dim=latent_dim) # Training the WGAN-GP model tic = time.perf_counter() WGAN.fit(dataset, batch_size=batch_Size, epochs=n_epochs, callbacks=[cbk]) toc = time.perf_counter() time_elapsed(toc-tic) 
</code></pre> <p>This issue is the shape I am providing to tf.random.rand() for the assignment of alpha. I don't fully understand why the shape input is (batch_size, 1, 1, 1) in the Keras example. So I don't know how to specify the shape for my example. Furthermore I don't understand this line in the Keras example:</p> <pre><code>batch_size = tf.shape(real_images)[0] </code></pre> <p>In this example 'real_images' is a (60000, 28, 28, 1) array and it gets passed to the fit() method which then passes it to the train_step() method. (It gets passed as &quot;train_images&quot;, but they are the same variable.) If I add a line that prints out 'real_images' before this tf.shape() this is what it produces:</p> <pre><code>Tensor(&quot;IteratorGetNext:0&quot;, shape=(None, 28, 28, 1), dtype=float32) </code></pre> <p>Why is the 60000 now None? Then, I added a line that printed out &quot;batch_size&quot; after the tf.shape() and this is what it produces:</p> <pre><code>Tensor(&quot;strided_slice:0&quot;, shape=(), dtype=int32) </code></pre> <p>I googled &quot;tf strided_slice&quot;, but all I could find is the method tf.strided_slice(). So what exactly is the value of &quot;batch_size&quot; and why are the output of variables so ambiguous when they are tensors? In fact, I type:</p> <pre><code>tf.shape(train_images)[0] </code></pre> <p>in another cell of Jupyter notebook. I get a completely different output:</p> <pre><code>&lt;tf.Tensor: shape=(), dtype=int32, numpy=60000&gt; </code></pre> <p>I really need to understand this Keras example in order to successfully implement this code for my data. Any help is appreciated.</p> <p>BTW: I am using only one set of data for now, but once I get the GAN running, I will provide multiple sets of these (1001,2) datasets. Also, if you want to test the code yourself, replacing the &quot;dataset&quot; variable with any (1001,2) numpy array should suffice. Thank You.</p>
<p>'Why is the 60000 now None?': In defining TensorFlow models, the first dimension (batch_size) is None. Getting under the hood of what goes on with TensorFlow and how it uses graphs for computation can be quite complex. But for your understanding right now, all you need to know is that batch_size does not need to be specified when defining the model, hence None. This is essential as it allow a model to be defined once but then trained with and applied to datasets of an arbitrary number of examples. For example, when training you may provide the model with a batch of 256 images at a time, but when using the trained model for inference, it's very likely that you might only want the input to be a single image. Therefore the actual value of the first dimension of the size of the input is only important once the computation is going to begin.</p> <p>'I don't fully understand why the shape input is (batch_size, 1, 1, 1) in the Keras example': The reason for this size is that you want a different random value, alpha, for each image. You have batch_size number of images, hence batch_size in the first dimension, but it is just a single value in tensor format, so it only need size 1 in all other dimensions. The reason it has 4 dimensions overall is so that it can be used in calculation with your inputs, which are 4-D image tensors which will have a shape of something like (batch_size, img_h, img_w, 3) for color images with 3 RGB channels.</p> <p>In terms of understanding your error, <code>Shape must be rank 1 but is rank 0</code>, this is saying that the function you are using - <code>tf.random.uniform</code> requires a rank 1 tensor, i.e. something with 1 dimension, but is being passed a rank 0 tensor, i.e. a scalar value. It is possible from your code that you are just passing it the value of <code>batch_size</code> rather than a tensor. This might work instead:</p> <pre><code>alpha = tf.random.uniform([batch_size, 1, 1, 1]) </code></pre> <p>The first parameter of this function is its shape and so it is important to have the <code>[]</code> there. Check out the documentation on this function in order to make sure you're using it correctly - <a href="https://www.tensorflow.org/api_docs/python/tf/random/uniform" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/random/uniform</a>.</p>
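<p>Applied to the data in this question, which is batched as (batch_size, 2) rather than as 4-D image tensors, the same idea would presumably need only one trailing broadcast dimension:</p> <pre><code># 4-D image batches (the Keras WGAN-GP example): (batch_size, h, w, channels)
alpha = tf.random.uniform([batch_size, 1, 1, 1])

# 2-D sequence batches as in this question: (batch_size, 2)
alpha = tf.random.uniform([batch_size, 1])
x_interp = alpha * x_real + (1 - alpha) * x_fake
</code></pre>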
python|tensorflow|keras|generative-adversarial-network
1
10,309
61,841,560
How to add Keras Gaussian noise to image data
<p>Importing the modules:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.layers import GaussianNoise from tensorflow.keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() </code></pre> <p>Re-scaling the data</p> <pre><code>X_train = X_train/255 X_test = X_test/255 plt.imshow(X_train[0]) </code></pre> <p>Adding Gaussian Noise with std dev=0.2</p> <pre><code>sample = GaussianNoise(0.2) noisey = sample(X_test[0:2],training=True) #plt.imshow(noisey[0]) </code></pre> <p>Getting Error:</p> <p><code>ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32: 'Tensor("gaussian_noise_4_1/random_normal:0", shape=(2, 28, 28), dtype=float32)'</code></p>
<p>Type casting is costly, and so Tensorflow doesn't do automatic type casting. As a default, Tensorflow's dtype is <code>float32</code>, and the dataset you imported has a dtype <code>float64</code>. You will just have to pass the optional dtype argument to <code>GaussianNoise</code>:</p> <pre><code>sample = GaussianNoise(0.2, dtype=tf.float64) </code></pre> <p>Or cast the array:</p> <pre><code>noisey = sample(X_test[0:2].astype(np.float32),training=True) </code></pre> <p>I suggest the second one.</p>
python|tensorflow|keras
4
10,310
61,753,567
Convert cumsum() output to binary array in xarray
<p>I have a 3D x-array that computes the cumulative sum for specific time periods and I'd like to detect which time periods meet a certain condition (and set to 1) and those which do not meet this condition (set to zero). I'll explain using the code below:</p> <pre><code>import pandas as pd import xarray as xr import numpy as np # Create demo x-array data = np.random.rand(20, 5, 5) times = pd.date_range('2000-01-01', periods=20) lats = np.arange(10, 0, -2) lons = np.arange(0, 10, 2) data = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon']) data.values[6:12] = 0 # Ensure some values are set to zero so that the cumsum can reset between valid time steps data.values[18:] = 0 # This creates an xarray whereby the cumsum is calculated but resets each time a zero value is found cumulative = data.cumsum(dim='time')-data.cumsum(dim='time').where(data.values == 0).ffill(dim='time').fillna(0) print(cumulative[:,0,0]) &gt;&gt;&gt; &lt;xarray.DataArray (time: 20)&gt; array([0.13395 , 0.961934, 1.025337, 1.252985, 1.358501, 1.425393, 0. , 0. , 0. , 0. , 0. , 0. , 0.366988, 0.896463, 1.728956, 2.000537, 2.316263, 2.922798, 0. , 0. ]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-20 lat int64 10 lon int64 0 </code></pre> <p>The print statement shows that the cumulative sum resets each time a zero is encountered on the time dimension. I need a solution to identify, which of the two periods exceeds a value of 2 and convert to a binary array to confirm where the conditions are met.</p> <p>So my expected output would be (for this specific example):</p> <pre><code>&lt;xarray.DataArray (time: 20)&gt; array([0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 1. , 1. , 1. , 1. , 1. , 1. , 0. , 0. ]) </code></pre>
<p>Solved this using some masking and the backfill functionality:</p> <pre><code># make something to put results in out = xr.full_like(cumulative, fill_value=0.0) # find the points which have met the criteria out.values[cumulative.values &gt; 2] = 1 # fill the other valid sections over 0, with nans so we can fill them out.values[(cumulative.values&gt;0) &amp; (cumulative.values&lt;2)] = np.nan # backfill it, so the ones that have not reached 2 are filled with 0 # and the ones that have are filled with 1 out_ds = out.bfill(dim='time').fillna(1) print ('Cumulative array:') print (cumulative.values[:,0,0]) print (' ') print ('Binary array') print (out_ds.values[:,0,0]) </code></pre>
python|numpy|python-xarray|cumsum
0
10,311
57,807,496
Parse SQL parameter marker with pandas using dynamic tables
<p>I have the following code:</p> <pre><code>query = """ DECLARE @DATABASE VARCHAR(128) = '{}'; DECLARE @SCHEMA VARCHAR(128) = '{}'; DECLARE @TABLE VARCHAR(128) = '{}'; DECLARE @sql VARCHAR(200) = 'SELECT * FROM ' + CONCAT(QUOTENAME(@DATABASE), '.', QUOTENAME(@SCHEMA), '.', QUOTENAME(@TABLE), ' WHERE COD = ?') EXEC sp_executesql @sql """.format( db_config.db.database, db_config.db.schema, db_config.db.table ) return pd.read_sql_query( query, db_config.db.connection, params=[cod_sol] ) </code></pre> <p>It was working right until I added a sql param marker <code>' WHERE COD = ?'</code>. It seems pandas or pyodbc can't parse that type of query, with a simple sql statement it works, but not with dynamic tables.</p> <p>Here is the result of the final <code>@sql</code> variable:</p> <pre><code>SELECT * FROM [DB].[SCHEMA].[TABLE] WHERE COD = ? </code></pre> <p>So it seems to be right. </p> <p>It's possible to do that sort of thing?</p>
<p>I have been able to solve it!</p> <p>Instead of trying to execute the SQL statement directly in one step, I first construct the SQL query and generate its final form:</p> <pre><code>SELECT * FROM [DB].[SCHEMA].[TABLE] WHERE COD = ? </code></pre> <p>Then, I call <code>read_sql_query</code>, passing in the params:</p> <pre><code>query = """ DECLARE @DATABASE VARCHAR(128) = '{}'; DECLARE @SCHEMA VARCHAR(128) = '{}'; DECLARE @TABLE VARCHAR(128) = '{}'; DECLARE @sql NVARCHAR(200) = 'SELECT * FROM ' + CONCAT(QUOTENAME(@DATABASE), '.', QUOTENAME(@SCHEMA), '.', QUOTENAME(@TABLE)) + ' WHERE COD = ?'; SELECT @sql """.format( db_config.db.database, db_config.db.schema, db_config.db.table ) con = db_config.db.connection query = con.execute(query).fetchval() # Now this is working pd.read_sql_query(query, con, params=[cod_xml]).reset_index(drop=True) </code></pre>
python|pandas|pyodbc
2
10,312
57,986,730
How to apply the value from specific row within the group in a column python 3.7
<h1>GOAL</h1> <p>Use the value which is No.1 in "Group_Line" column within the group to overwrite "-" of the rest of rows in every group, without influence the group which doesn't have any "Name" value but "-".</p> <pre><code> Name Group Group_Line NEW_Name 0 Paul A-1 1 Paul 1 - A-1 2 Paul 2 - A-1 3 Paul 3 - B-1 1 - 4 - B-1 2 - 5 Amy C-1 2 Amy 6 Amy C-1 1 Amy </code></pre> <h1>sample data :</h1> <pre><code>xx = pd.DataFrame({"Name": ["Paul","-","-","-","-","Amy","Amy"], "Group": ["A-1","A-1","A-1","B-1","B-1","C-1","C-1"], "Group_Line": ["1","3","","1","2","2","1"] }) </code></pre> <h1>Script</h1> <pre><code># make a key xx = xx .assign(NAME_IND = xx['Group'].astype(str).copy() + xx['Group_Line'].astype(str).copy()) # get the value which is No.1 in "Group_Line" column within the group yy= xx.sort_values(by=['Group','Group_Line'],ascending=True).groupby('NAME_IND').first()[["Name","NAME_IND"]] xx["NEW_Name"] = xx['NAME_IND'].map(yy.set_index('NAME_IND')['Name']) &lt;-- get error </code></pre> <h1>Error</h1> <p>KeyError: "['NAME_IND'] not in index"</p> <p>In R can be achieve with [match(xx$NAME_KEY,xx$NAME_KEY)] by applying on "-" rows , what is the solution with python ?</p>
<p>Reason of error is <code>NAME_IND</code> is not column, but index, what is perfect for mapping, so only specify column <code>Name</code> after <code>groupby</code> and then <code>map</code> by <code>Series</code> called <code>y</code>:</p> <pre><code>y= (xx.sort_values(by=['Group','Group_Line'],ascending=True) .groupby('NAME_IND')['Name'] .first()) print (y) NAME_IND A-1 - A-11 Paul A-13 - B-11 - B-12 - C-11 Amy C-12 Amy Name: Name, dtype: object </code></pre> <p>Alternative solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a>:</p> <pre><code>y= (xx.sort_values(by=['Group','Group_Line'],ascending=True) .drop_duplicates('NAME_IND') .set_index('NAME_IND')['Name']) print (y) NAME_IND A-1 - A-11 Paul A-13 - B-11 - B-12 - C-11 Amy C-12 Amy Name: Name, dtype: object </code></pre> <hr> <pre><code>xx["NEW_Name"] = xx['NAME_IND'].map(y) print (xx) Name Group Group_Line NAME_IND NEW_Name 0 Paul A-1 1 A-11 Paul 1 - A-1 3 A-13 - 2 - A-1 A-1 - 3 - B-1 1 B-11 - 4 - B-1 2 B-12 - 5 Amy C-1 2 C-12 Amy 6 Amy C-1 1 C-11 Amy </code></pre> <p>EDIT:</p> <p>Previous answer - possible, but overcomplicated - first set index to column and then set same column to index:</p> <p>Reason is <code>NAME_IND</code> is index, so possible solutions are <code>as_index=False</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a>:</p> <pre><code>yy= (xx.sort_values(by=['Group','Group_Line'],ascending=True)[["Name","NAME_IND"]] .groupby('NAME_IND', as_index=False) .first()) </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p> <pre><code>yy= (xx.sort_values(by=['Group','Group_Line'],ascending=True)[["Name","NAME_IND"]] .groupby('NAME_IND') .first() .reset_index()) print (yy) NAME_IND Name 0 A-1 - 1 A-11 Paul 2 A-13 - 3 B-11 - 4 B-12 - 5 C-11 Amy 6 C-12 Amy </code></pre> <p>Also is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a>:</p> <pre><code>yy= (xx.sort_values(by=['Group','Group_Line'],ascending=True)[["Name","NAME_IND"]] .drop_duplicates('NAME_IND')) print (yy) Name NAME_IND 2 - A-1 0 Paul A-11 1 - A-13 3 - B-11 4 - B-12 6 Amy C-11 5 Amy C-12 xx["NEW_Name"] = xx['NAME_IND'].map(yy.set_index('NAME_IND')['Name']) </code></pre>
python-3.x|pandas-groupby
2
10,313
58,151,772
Python Appending DataFrame, weird for loop error
<p>I'm working on some NFL statistics web scraping, honestly the activity doesn't matter much. I spent a ton of time debugging because I couldn't believe what it was doing, either I'm going crazy or there is some sort of bug in a package or python itself. Here's the code I'm working with:</p> <pre><code>import pandas as pd from bs4 import BeautifulSoup as bs import requests import string import numpy as np #get player list players = pd.DataFrame({"name":[],"url":[],"positions":[],"startYear":[],"endYear":[]}) letters = list(string.ascii_uppercase) for letter in letters: print(letter) players_html = requests.get("https://www.pro-football-reference.com/players/"+letter+"/") soup = bs(players_html.content,"html.parser") for player in soup.find("div",{"id":"div_players"}).find_all("p"): temp_row = {} temp_row["url"] = "https://www.pro-football-reference.com"+player.find("a")["href"] temp_row["name"] = player.text.split("(")[0].strip() years = player.text.split(")")[1].strip() temp_row["startYear"] = int(years.split("-")[0]) temp_row["endYear"] = int(years.split("-")[1]) temp_row["positions"] = player.text.split("(")[1].split(")")[0] players = players.append(temp_row,ignore_index=True) players = players[players.endYear &gt; 2000] players.reset_index(inplace=True,drop=True) game_df = pd.DataFrame() def apply_test(row): #print(row) url = row['url'] #print(list(range(int(row['startYear']),int(row['endYear'])+1))) for yr in range(int(row['startYear']),int(row['endYear'])+1): print(yr) content = requests.get(url.split(".htm")[0]+"/gamelog/"+str(yr)).content soup = bs(content,'html.parser').find("div",{"id":"all_stats"}) #overheader over_headers = [] for over in soup.find("thead").find("tr").find_all("th"): if("colspan" in over.attrs.keys()): for i in range(0,int(over['colspan'])): over_headers = over_headers + [over.text] else: over_headers = over_headers + [over.text] #headers headers = [] for header in soup.find("thead").find_all("tr")[1].find_all("th"): headers = headers + [header.text] all_headers = [a+"___"+b for a,b in zip(over_headers,headers)] #remove first column, it's meaningless all_headers = all_headers[1:len(all_headers)] for row in soup.find("tbody").find_all("tr"): temp_row = {} for i,col in enumerate(row.find_all("td")): temp_row[all_headers[i]] = col.text game_df = game_df.append(temp_row,ignore_index=True) players.apply(apply_test,axis=1) </code></pre> <p>Now again I could get into what I'm trying to do, but there seems to be a much higher-level issue here. startYear and endYear in the for loop are 2013 and 2014, so the loop should be setting the yr variable to 2013 then 2014. But when you look at what prints out due to the <code>print(yr)</code>, you realize it's printing out 2013 twice. But if you simply comment out the <code>game_df = game_df.append(temp_row,ignore_index=True)</code> line, the printouts of yr are correct. There is an error shortly after the first two lines, but that is expected and one I am comfortable debugging. But the fact that appending to a global dataframe is causing a for loop to behave differently is blowing my mind right now. Can someone help with this?</p> <p>Thanks.</p>
<p>I don't really follow what the overall aim is, but I do note two things:</p> <ol> <li><p>You either need the local <code>game_df</code> to be declared as <code>global game_df</code> before <code>game_df = game_df.append(temp_row,ignore_index=True)</code>, or, better still, pass it as an argument in the function signature, though you would then need to amend <code>players.apply(apply_test,axis=1)</code> accordingly (see the sketch after this answer).</p></li> <li><p>You need to handle the cases where <code>find</code> returns <code>None</code>, e.g. with <code>soup.find("thead").find_all("tr")[1].find_all("th")</code> for page <a href="https://www.pro-football-reference.com/players/A/AaitIs00/gamelog/2014" rel="nofollow noreferrer">https://www.pro-football-reference.com/players/A/AaitIs00/gamelog/2014</a>. Perhaps put in <code>try except</code> blocks with appropriate default values to be supplied.</p></li> </ol>
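<p>A rough sketch of both options, continuing the <code>players</code> frame from the question (the fields inside <code>temp_row</code> are illustrative only):</p> <pre><code>import pandas as pd

# Option 1: keep the question's structure, but bind the module-level frame
game_df = pd.DataFrame()

def apply_test(row):
    global game_df                      # without this, append targets a local name
    temp_row = {"url": row["url"], "year": row["startYear"]}
    game_df = game_df.append(temp_row, ignore_index=True)

players.apply(apply_test, axis=1)

# Option 2: avoid the global entirely, building one small frame per player
# and concatenating the pieces once at the end
pieces = [pd.DataFrame([{"url": row["url"], "year": row["startYear"]}])
          for _, row in players.iterrows()]
game_df = pd.concat(pieces, ignore_index=True)
</code></pre>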
python|pandas|loops|for-loop|append
1
10,314
54,933,438
Python Seaborn Pandas Dataframe plot first few groups
<p>I have a need to plot only the first n number of groups, or plot several plots of n items out of a set of groups from a pandas dataframe. The frame contains columns as </p> <pre><code>import pandas as pd import seaborn as sns; sns.set() import numpy as np datain = np.loadtxt("data.txt") df = pd.DataFrame(data = datain, columns = ["t","p","x","y","z"]) </code></pre> <p>Simply loaded in from a file using numpy. Current plotting code is,</p> <pre><code>ax2 = sns.scatterplot("t","x", data = df, hue = "p") plt.show() </code></pre> <p>The hue field is giving the grouping, so it's grouping by a polymer number parameter from the data file. The structure is that the are "N" polymers in the file, let's say 10, so the first 10 lines are each a different "p" value with the same "t" value and some x,y,z coordinate data. Plotting the coordinate data versus time is the main application. Let's say as an example, I want to plot the first 3 groups, so the first 3 polymers, out of the set in that plot command, I need to know how to do that, and then of course plot the next 3, etc. Very new to dataframes, so it's a bit puzzling to me how to manage this.</p> <p>Edit for clarity, here's the table, using N = 5,</p> <pre><code>t p x y z 10 0 1 3 2 10 1 4 2 1 10 2 5 6 3 10 3 7 5 3 10 4 -9 5 2 20 0 1 -1 1 20 1 0 1 -1 20 2 3 9 -2 20 3 5 6 9 20 4 -5 9 6 </code></pre> <p>So a desired output would be for the first 2 groups:</p> <pre><code>t p x y z 10 0 1 3 2 10 1 4 2 1 20 0 1 -1 1 20 1 0 1 -1 </code></pre> <p>And then I could still plot by grouping their p values.</p>
<p>If you specifically need multiple groups (polymers) plotted together on the same chart, you can subset/filter your dataframe to only the polymer (p) values that you need for your plot, e.g.:</p> <pre><code>df[df['p'].isin([0,1])] </code></pre> <p>and pass the output to the scatterplot command.</p>
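<p>Extending that idea, one way (a sketch, assuming matplotlib is available) to walk through the polymers three at a time is to slice the unique <code>p</code> values and filter before each plot:</p> <pre><code>import matplotlib.pyplot as plt
import seaborn as sns

polymers = df["p"].unique()
chunk = 3
for start in range(0, len(polymers), chunk):
    subset = df[df["p"].isin(polymers[start:start + chunk])]
    sns.scatterplot(x="t", y="x", hue="p", data=subset)
    plt.show()
</code></pre>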
python|pandas|grouping|seaborn
1
10,315
49,520,906
Changing zero to x in multidimensional array
<p>I have a 3-dimensional array in python, and would like to learn how to find and replace given elements</p> <p>For example, </p> <pre><code>x = np.array([[1, 1, 1, 0], [0, 5, 0, 1], [2, 1, 3, 10]], np.int32) </code></pre> <p>I'd like to replace each 0 with x in the array, which would result in:</p> <pre><code>([[1,1,1,x], [x,5,x,1], [2,1,3,10]]) </code></pre> <p>This is where I am at, but I get an error due to 'x' not being an integer:</p> <pre><code>import numpy as np x = np.array([[1,1,1,0],[0,5,0,1],[2,1,3,10]]) x[x==0] = 'x' print (x) </code></pre>
<p>You can do something like:</p> <pre><code>x[x==0] = 10 </code></pre>
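<p>The assignment above works because 10 fits the array's integer dtype. If the goal is to literally store the string <code>'x'</code> (as in the question), the array first needs a dtype that can hold it, for example <code>object</code>; a small sketch:</p> <pre><code>import numpy as np

x = np.array([[1, 1, 1, 0], [0, 5, 0, 1], [2, 1, 3, 10]], np.int32)
x[x == 0] = 10          # fine: the array stays integer-typed

y = np.array([[1, 1, 1, 0], [0, 5, 0, 1], [2, 1, 3, 10]]).astype(object)
y[y == 0] = 'x'         # works because an object array can hold strings
print(y)
</code></pre>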
python|python-3.x|python-2.7|numpy
2
10,316
49,633,220
Serialized data doesn't match deserialized data in TensorFlow TFRecordDataset code
<p>I have a large dataset of numpy integers which I want to analyze with a GPU. The dataset is too large to fit into main memory on the GPU so I am trying to serialize them into a TFRecord and then use the API to stream the record for processing. The below code is example code: it wants to create some fake data, serialize it into the TFRecord object, then using a TF session read the data back into memory, parsing with the map() function. My original data is non-homogenous in terms of the dimensions of the numpy arrays, though each is a 3D array with 10 as the length of the first axis. I recreated the hetorogeneity using random numbers when I made the fake data. The idea is to store the size of each image as I serialize the data, and I can use that to restore each array to its original size. But when I deserialize there are two issues: first of all the data going in does not match the data coming out (serialized doesn't match deserialized). Secondly, the iterator to get all of the serialized data out is incorrect. Here is the code: </p> <pre><code>import numpy as np from skimage import io from skimage.io import ImageCollection import tensorflow as tf import argparse #A function for parsing TFRecords def record_parser(record): keys_to_features = { 'fil' : tf.FixedLenFeature([],tf.string), 'm' : tf.FixedLenFeature([],tf.int64), 'n' : tf.FixedLenFeature([],tf.int64)} parsed = tf.parse_single_example(record, keys_to_features) m = tf.cast(parsed['m'],tf.int64) n = tf.cast(parsed['n'],tf.int64) fil_shape = tf.stack([10,m,n]) fil = tf.decode_raw(parsed['fil'],tf.float32) print("size: ", tf.size(fil)) fil = tf.reshape(fil,fil_shape) return (fil,m,n) #For writing and reading from the TFRecord filename = "test.tfrecord" if __name__ == "__main__": #Create the TFRecordWriter data_writer = tf.python_io.TFRecordWriter(filename) #Create some fake data files = [] i_vals = np.random.randint(20,size=10) j_vals = np.random.randint(20,size=10) print(i_vals) print(j_vals) for x in range(5): files.append(np.random.rand(10,i_vals[x],j_vals[x]).astype(np.float32)) i=0 #Serialize the fake data and record it as a TFRecord using the TFRecordWriter for fil in files: i+=1 f,m,n = fil.shape fil_raw = fil.tostring() print(fil.shape) example = tf.train.Example( features = tf.train.Features( feature = { 'fil' : tf.train.Feature(bytes_list=tf.train.BytesList(value=[fil_raw])), 'm' : tf.train.Feature(int64_list=tf.train.Int64List(value=[m])), 'n' : tf.train.Feature(int64_list=tf.train.Int64List(value=[n])) } ) ) data_writer.write(example.SerializeToString()) data_writer.close() #Deserialize and report on the fake data sess = tf.Session() dataset = tf.data.TFRecordDataset([filename]) dataset = dataset.map(record_parser) iterator = dataset.make_initializable_iterator() next_element = iterator.get_next() sess.run(iterator.initializer) while True: try: sess.run(next_element) fil,m,n = (next_element[0],next_element[1],next_element[2]) with sess.as_default(): print("fil.shape: ",fil.eval().shape) print("M: ",m.eval()) print("N: ",n.eval()) except tf.errors.OutOfRangeError: break </code></pre> <p>And here is the output: </p> <pre><code>MacBot$ python test.py /Users/MacBot/anaconda/envs/tflow/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. 
from ._conv import register_converters as _register_converters [ 6 7 3 18 9 10 4 0 3 12] [ 4 2 14 4 11 4 5 2 9 17] (10, 6, 4) (10, 7, 2) (10, 3, 14) (10, 18, 4) (10, 9, 11) 2018-04-03 10:52:29.324429: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA size: Tensor("Size:0", shape=(), dtype=int32) fil.shape: (10, 7, 2) M: 3 N: 4 </code></pre> <p>Anybody understand what I'm doing wrong? Thanks for any help! </p>
<p>Instead of </p> <pre><code>sess.run(iterator.initializer) while True: try: sess.run(next_element) fil,m,n = (next_element[0],next_element[1],next_element[2]) with sess.as_default(): print("fil.shape: ",fil.eval().shape) print("M: ",m.eval()) print("N: ",n.eval()) except tf.errors.OutOfRangeError: break </code></pre> <p>it should be </p> <pre><code>sess.run(iterator.initializer) while True: try: fil,m,n = sess.run(next_element) print("fil.shape: ",fil.shape) print("M: ",m) print("N: ",n) except tf.errors.OutOfRangeError: break </code></pre> <p>After <code>sess.run(next_element)</code> the returned values are already NumPy arrays, so <code>.eval()</code> is no longer needed. The original code evaluated <code>next_element</code> again with every <code>.eval()</code> call, advancing the iterator each time, which is why the shapes and sizes appeared to come from different records.</p>
python|tensorflow|tfrecord
0
10,317
49,538,497
How to apply function to slice of columns using .loc?
<p>I have a pd DataFrame with integers displayed as strings:</p> <pre><code>frame = pd.DataFrame(np.random.randn(4, 3), columns=list('ABC'), index=['1', '2', '3', '4']) frame = frame.apply(lambda x: x.astype(str)) </code></pre> <p>This gives me a dataframe:</p> <pre><code> A B C 1 -0.890 0.162 0.477 2 -1.403 0.160 -0.570 3 -1.062 -0.577 -0.370 4 1.142 0.072 -1.732 </code></pre> <p>If I type frame.type() I will get objects. Now I want to convert columns ['B':'C'] to numbers.</p> <p>Imagine that I have dozens of columns and therefore I would like to slice them. So what I do is:</p> <pre><code>frame.loc[:,'B':'C'] = frame.loc[:,'B':'C'].apply(lambda x: pd.to_numeric(x, errors='coerce') </code></pre> <p>If I just wanted to alter column, say, B, I would type:</p> <pre><code>frame['B'] = frame['B'].apply(lambda x: pd.to_numeric(x, errors='coerce') </code></pre> <p>and that would convert B into into float64 BUT if I use it with .loc then nothing happens after I call DataFrame.info()!</p> <p>Can someone help me? OF course I can just type all columns but I would like to get a more practical approach</p>
<p>You can pass kwargs to <code>apply</code></p> <h3>In Line with <code>assign</code></h3> <pre><code>frame.assign(**frame.loc[:, 'B':'C'].apply(pd.to_numeric, errors='coerce')) A B C 1 -1.50629471392 -0.578600 1.651437 2 -2.42667924339 -0.428913 1.265936 3 -0.866740402265 -0.678886 -0.094709 4 1.49138962612 -0.638902 -0.443982 </code></pre> <hr> <h3>In Place with <code>update</code></h3> <pre><code>frame.update(frame.loc[:, 'B':'C'].apply(pd.to_numeric, errors='coerce')) frame A B C 1 -1.50629471392 -0.578600 1.651437 2 -2.42667924339 -0.428913 1.265936 3 -0.866740402265 -0.678886 -0.094709 4 1.49138962612 -0.638902 -0.443982 </code></pre>
python|pandas|dataframe|apply
9
10,318
73,336,285
Make a new dataframe from multiple dataframes
<p>Suppose I have 3 dataframes that are wrapped in a list. The dataframes are:</p> <pre><code>df_1 = pd.DataFrame({'text':['a','b','c','d','e'],'num':[2,1,3,4,3]}) df_2 = pd.DataFrame({'text':['f','g','h','i','j'],'num':[1,2,3,4,3]}) df_3 = pd.DataFrame({'text':['k','l','m','n','o'],'num':[6,5,3,1,2]}) </code></pre> <p>The list of the dfs is:</p> <pre><code>df_list = [df_1, df_2, df_3] </code></pre> <p>Now I want to make a for loop that goes over <code>df_list</code> and, for each <code>df</code>, takes the text column and merges them into a new dataframe with a new column header called <code>topic</code>. Since each <code>text</code> column differs from dataframe to dataframe, I want to name the headers <code>topic_1</code>, <code>topic_2</code>, etc. The desired outcome should be as follows:</p> <pre><code> topic_1 topic_2 topic_3 0 a f k 1 b g l 2 c h m 3 d i n 4 e j o </code></pre> <p>I can easily extract the text columns as:</p> <pre><code>lst = [] for i in range(len(df_list)): lst.append(df_list[i]['text'].tolist()) </code></pre> <p>It is just that I am stuck on the last part, namely bringing the columns together into one dataframe without brute force.</p>
<p>You can extract the wanted columns with a list comprehension and <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> them:</p> <pre><code>pd.concat([d['text'].rename(f'topic_{i}') for i,d in enumerate(df_list, start=1)], axis=1) </code></pre> <p>output:</p> <pre><code> topic_1 topic_2 topic_3 0 a f k 1 b g l 2 c h m 3 d i n 4 e j o </code></pre>
python|pandas
2
10,319
67,328,047
Filtering pandas dataframe on condition over NaN
<p>I have a datetime dataframe in pandas like this:</p> <pre><code> date value1 value2 name 0 2020-08-27 07:30:00 28.0 27.0 A 1 2020-08-27 08:00:00 28.2 27.0 A 2 2020-08-27 09:00:00 NaN 27.5 A 3 2020-08-27 09:30:00 29.0 NaN A 4 2020-08-27 10:30:00 NaN NaN A 5 2020-08-27 11:00:00 29.8 27.0 A 6 2020-08-27 11:30:00 30.0 27.0 A 7 2020-08-27 12:00:00 30.0 27.0 A 8 2020-08-27 12:30:00 30.0 27.0 A 9 2020-08-27 13:30:00 30.0 27.0 A 10 2020-08-27 07:30:00 28.0 27.0 B 11 2020-08-27 08:00:00 28.2 27.0 B 12 2020-08-27 09:00:00 NaN 27.5 B 13 2020-08-27 09:30:00 29.0 NaN B 14 2020-08-27 10:30:00 NaN NaN B 15 2020-08-27 11:00:00 29.8 NaN B 16 2020-08-27 11:30:00 30.0 27.0 B 17 2020-08-27 12:00:00 30.0 27.0 B 18 2020-08-27 12:30:00 30.0 27.0 B 19 2020-08-27 13:30:00 30.0 27.0 B </code></pre> <p>I wish to remove entry for all name for which number of NaN in any column is 3 or more. I am able to calculate NaN in each column.</p> <pre><code>df.drop('name', 1).isna().groupby(df.name, sort=False).sum().reset_index() </code></pre> <p>How can I use this to filter df:</p> <p>My expected output is:</p> <pre><code> date value1 value2 name 0 2020-08-27 07:30:00 28.0 27.0 A 1 2020-08-27 08:00:00 28.2 27.0 A 2 2020-08-27 09:00:00 NaN 27.5 A 3 2020-08-27 09:30:00 29.0 NaN A 4 2020-08-27 10:30:00 NaN NaN A 5 2020-08-27 11:00:00 29.8 27.0 A 6 2020-08-27 11:30:00 30.0 27.0 A 7 2020-08-27 12:00:00 30.0 27.0 A 8 2020-08-27 12:30:00 30.0 27.0 A 9 2020-08-27 13:30:00 30.0 27.0 A </code></pre>
<pre><code>&gt;&gt;&gt; df.set_index(&quot;name&quot;) \ .loc[df[[&quot;value1&quot;, &quot;value2&quot;]] \ .isna() \ .groupby(df[&quot;name&quot;]) \ .sum() \ .max(axis=&quot;columns&quot;) &lt; 3] date value1 value2 name A 2020-08-27 07:30:00 28.0 27.0 A 2020-08-27 08:00:00 28.2 27.0 A 2020-08-27 09:00:00 NaN 27.5 A 2020-08-27 09:30:00 29.0 NaN A 2020-08-27 10:30:00 NaN NaN A 2020-08-27 11:00:00 29.8 27.0 A 2020-08-27 11:30:00 30.0 27.0 A 2020-08-27 12:00:00 30.0 27.0 A 2020-08-27 12:30:00 30.0 27.0 A 2020-08-27 13:30:00 30.0 27.0 </code></pre>
pandas
0
10,320
67,422,881
How do I count the number of unique values in a csv using Python
<p>Maybe someone can post another question that already has an answer to my question, but I have been unable to find it.</p> <p>My dataset is a 10,000+ row csv that looks like this:</p> <pre><code> col_1 col_2 a, b, c, d 9 a, b, c 3 b, d 5 a, c, e 1 </code></pre> <p>I am wondering how do I iterate through col_1 to pull out each letter and then sum the amount of col_2 if it shows up.</p> <p>So for this example, the output would be:</p> <pre><code>a - 13 b - 17 c - 13 d - 14 e - 1 </code></pre>
<p>Get the dummies, multiply then sum:</p> <pre><code>df['col_1'].str.get_dummies(&quot;,&quot;).mul(df['col_2'],axis=0).sum() </code></pre> <hr /> <pre><code>a 13 b 17 c 13 d 14 e 1 dtype: int64 </code></pre>
python|python-3.x|pandas|for-loop|iteration
2
10,321
60,325,018
read excel cell containing formulas with link to external workbook with numpy/panda
<p>I need to read an excel file with a cell containing a reference to an external excel workbook in the same path as the origin excel file. Is there any function/parameter to get the reference in such cell with numpy/panda?</p>
<p>This seems to work.</p> <pre><code>from openpyxl import load_workbook import pandas as pd wb = load_workbook(filename = 'C:\\your_path\\Book1.xlsx') sheet_names = wb.get_sheet_names() name = sheet_names[0] sheet_ranges = wb[name] df = pd.DataFrame(sheet_ranges.values) df </code></pre> <p>Result:</p> <pre><code> 0 1 2 3 4 0 field1 field2 field3 field4 field5 1 1 =A2*10 =B2*10 =C2*10 =D2*10 2 1 =A3*10 =B3*10 =C3*10 =D3*10 3 1 =A4*10 =B4*10 =C4*10 =D4*10 </code></pre>
python|numpy|reference|external
0
10,322
65,201,940
Improving accuracy of multinomial logistic regression model built from scratch
<p>I am currently working on creating a multi class classifier using numpy and finally got a working model using softmax as follows:</p> <pre><code>class MultinomialLogReg: def fit(self, X, y, lr=0.00001, epochs=1000): self.X = self.norm_x(np.insert(X, 0, 1, axis=1)) self.y = y self.classes = np.unique(y) self.theta = np.zeros((len(self.classes), self.X.shape[1])) self.o_h_y = self.one_hot(y) for e in range(epochs): preds = self.probs(self.X) l, grad = self.get_loss(self.theta, self.X, self.o_h_y, preds) if e%10000 == 0: print(&quot;epoch: &quot;, e, &quot;loss: &quot;, l) self.theta -= (lr*grad) return self def norm_x(self, X): for i in range(X.shape[0]): mn = np.amin(X[i]) mx = np.amax(X[i]) X[i] = (X[i] - mn)/(mx-mn) return X def one_hot(self, y): Y = np.zeros((y.shape[0], len(self.classes))) for i in range(Y.shape[0]): to_put = [0]*len(self.classes) to_put[y[i]] = 1 Y[i] = to_put return Y def probs(self, X): return self.softmax(np.dot(X, self.theta.T)) def get_loss(self, w,x,y,preds): m = x.shape[0] loss = (-1 / m) * np.sum(y * np.log(preds) + (1-y) * np.log(1-preds)) grad = (1 / m) * (np.dot((preds - y).T, x)) #And compute the gradient for that loss return loss,grad def softmax(self, z): return np.exp(z) / np.sum(np.exp(z), axis=1).reshape(-1,1) def predict(self, X): X = np.insert(X, 0, 1, axis=1) return np.argmax(self.probs(X), axis=1) #return np.vectorize(lambda i: self.classes[i])(np.argmax(self.probs(X), axis=1)) def score(self, X, y): return np.mean(self.predict(X) == y) </code></pre> <p>And had several questions:</p> <ol> <li><p>Is this a correct mutlinomial logistic regression implementation?</p> </li> <li><p>It takes 100,000 epochs using learning rate 0.1 for the loss to be 1 - 0.5 and to get an accuracy of 70 - 90 % on the test set. Would this be considered bad performance?</p> </li> <li><p>What are some ways for improving performance or speeding up training (to need less epochs)?</p> </li> <li><p>I saw this cost function online which gives better accuracy, it looks like cross-entropy, but it is different from the equations of cross-entropy optimization I saw, can someone explain how the two differ:</p> </li> </ol> <pre><code>error = preds - self.o_h_y grad = np.dot(error.T, self.X) self.theta -= (lr*grad) </code></pre>
<ol> <li>This looks right, but I think the preprocessing you perform in the fit function should be done outside of the model.</li> <li>It's hard to know whether this is good or bad. While the loss landscape is convex, the time it takes to obtain a minima varies for different problems. One way to ensure you've obtained the optimal solution is to add a threshold that tests the size of the gradient norm, which is small when you're close to the optima. Something like <code>np.linalg.norm(grad) &lt; 1e-8</code>.</li> <li>You can use a better optimizer, such as Newton's method, or a quasi-Newton method, such as LBFGS. I would start with Newton's method as it's easier to implement. LBFGS is a non-trivial algorithm that approximates the Hessian required to perform Newton's method.</li> <li>It's the same; the gradients aren't being averaged. Since you're performing gradient descent, the averaging is a constant that can be ignored since a properly tuned learning rate is required anyways. In general, I think averaging makes it a bit easier to obtain a stable learning rate over different splits of the same dataset.</li> </ol> <p>A question for you: When you evaluate your test set, are you preprocessing them the same way you do the training set in your fit function?</p>
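<p>For point 2, a minimal sketch of that stopping test inside the <code>fit</code> loop from the question, reusing the <code>1e-8</code> tolerance mentioned above (the rest of the class is assumed unchanged):</p> <pre><code>for e in range(epochs):
    preds = self.probs(self.X)
    l, grad = self.get_loss(self.theta, self.X, self.o_h_y, preds)
    # stop once the gradient is numerically zero, i.e. we are at (or very near) the optimum
    if np.linalg.norm(grad) &lt; 1e-8:
        break
    self.theta -= (lr * grad)
</code></pre>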
python|numpy|logistic-regression
1
10,323
65,356,414
How to apply a function to every element in a dataframe?
<p>This is probably a very basic question but I can't find the answer in other questions. I have two lists that I have used to create a 2D dataframe, let's say:</p> <pre class="lang-py prettyprint-override"><code>X= np.arange(0, 2.01, 0.25) Y= np.arange(10, 30, 5.0) df = pd.DataFrame(index = X, columns = Y) print(df) </code></pre> <p>Which gives:</p> <pre class="lang-py prettyprint-override"><code> 10.0 15.0 20.0 25.0 0.00 NaN NaN NaN NaN 0.25 NaN NaN NaN NaN 0.50 NaN NaN NaN NaN 0.75 NaN NaN NaN NaN 1.00 NaN NaN NaN NaN 1.25 NaN NaN NaN NaN 1.50 NaN NaN NaN NaN 1.75 NaN NaN NaN NaN 2.00 NaN NaN NaN NaN </code></pre> <p>I would like to go through all elements in the dataframe and use the values of <code>X</code> and <code>Y</code> as inputs to some function, <code>foo</code>, that I have written. For example, in the 2rd row, 1st column (using zero indexing) position I have <code>(X, Y) = (0.5, 15.0)</code>, so in this position I would like to apply <code>foo(0.5, 15.0)</code> and not <code>foo(2, 1)</code>.</p> <p>I think I should be able to use <code>df.apply()</code> or <code>df.applymap()</code> somehow but I can't figure it out!</p>
<p>Since your problem requires access to both the index and column labels of your <code>df</code> you probably want <code>df.apply()</code>.</p> <p><code>df.apply()</code> has access to a <code>pandas.Series</code> representing each row/column (dependent on <code>axis</code> argument value) and you will have access to the column name and index; whereas <code>df.applymap()</code> utilises each individual value of <code>df</code> at runtime - so you wouldn't necessarily have access to the index and column name as required.</p> <p><strong>Example</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd def foo(name, index): return name - index x = np.arange(0, 2.01, 0.25) y = np.arange(10, 30, 5.0) df = pd.DataFrame(index = x, columns = y) df.apply(lambda x: foo(x.name, x.index)) </code></pre> <p><strong>Output</strong></p> <pre><code> 10.0 15.0 20.0 25.0 0.00 10.00 15.00 20.00 25.00 0.25 9.75 14.75 19.75 24.75 0.50 9.50 14.50 19.50 24.50 0.75 9.25 14.25 19.25 24.25 1.00 9.00 14.00 19.00 24.00 1.25 8.75 13.75 18.75 23.75 1.50 8.50 13.50 18.50 23.50 1.75 8.25 13.25 18.25 23.25 2.00 8.00 13.00 18.00 23.00 </code></pre> <p>In the above example the column name and index of each Series constituting <code>df</code> is passed to <code>foo()</code> by way of <code>df.apply()</code>. Within <code>foo()</code> each value is defined by subtracting it's own index value from it's own column name value. Here you can see that the index value for each row is accessed using <code>x.index</code> and the column value is accessed using <code>x.name</code> within the call within <code>df.apply()</code>.</p> <p><strong>Update</strong></p> <p>Many thanks to @SyntaxError for pointing out that <code>x.index</code> and <code>x.name</code> could be passed to <code>foo()</code> within <code>df.apply()</code> instead of feeding the entire Series (<code>x</code>) into the function and accessing the values manually therein. As mentioned, this seems to fit OP's use case in a much neater manner than my original response - which was largely the same but passed each <code>x</code> series into <code>foo()</code> which then had responsibility for extracting <code>x.name</code> and <code>x.column</code>.</p>
python|pandas|dataframe|numpy
3
10,324
65,382,599
difference between detectObjectOnImage and runModelonImage in tflite flutter
<p>I'm trying to make a tflite multiple object detector in Flutter. I came across two functions which take an image path as input, hence this question.</p> <p>The two functions are <code>detectObjectOnImage</code> and <code>runModelOnImage</code>. When I use <code>runModelOnImage</code> my code runs, but if I swap it with <code>detectObjectOnImage</code> the interpreter does initialize, yet on calling the function the app automatically closes and shows <code>Lost connection to device</code>.</p> <p>This is how my code goes:</p> <pre><code>classifyImage(String imgpath) async { var output = await Tflite.runModelOnImage( path: imgpath, imageMean: 0.0, imageStd: 255.0, threshold: 0.2, numResults: 1, asynch: true, ); setState(() { _loading = false; outputs = output; }); print(outputs); print(outputs[0][&quot;label&quot;]); } </code></pre> <p>I guess my assumptions are correct but I don't know why it's not working. Apart from that, I created a model with Google's Teachable Machine and it only detects one object at a time, so my next question is how do I make it detect more than 1 object.</p> <p>Thanks</p>
<p>The difference between the two functions are their usages:</p> <p>For object detection you use Tflite.detectObjectOnImage()</p> <p>For image classification (finding objects without printing boxes around them) you use Tflite.runModelOnImage()</p> <p>The two Methods return different sized tensors. When the Tensors can't be mapped to the expected output the app disconnects as you described it.</p> <hr /> <p>Regarding your second Question:</p> <p>You set the parameter numResults, which limits the number of results, to 1. Increase this number to get more results. (source: <a href="https://pub.dev/packages/tflite#Example" rel="nofollow noreferrer">https://pub.dev/packages/tflite#Example</a>)</p>
flutter|adb|tensorflow-lite
1
10,325
65,185,120
Why does my Keras LSTM model perform horrible compared to RandomForest on timeseries forecasting?
<p>I have a DataFrame predicting the number of vehicles passing a road based on some sensor data.</p> <p>The DataFrame is shaped on the following format, and is indexed based on the timestamp</p> <pre><code> index | t | t - 1 | t - 2 | .... | t - 95 | number of cars 2020-08-01 : 00:00:15 410 499 380 ... 20 240 2020-08-01 : 00:00:30 305 410 499 ... 45 244 2020-08-01 : 00:00:45 290 305 410 ... 50 188 </code></pre> <p>The Data has the following shape <code>X_train.shape = (4210,97)</code></p> <p>What I do is the following</p> <pre><code>train = df.loc['2020-08-01 : 00:00:15':'2020:09:12 23:45:00'] test = df.loc['2020-09-13 : 00:00:00':] y_train = train['number of cars'] X_train = train.drop('y',axis=1) sc = StandardScaler() sc.fit(X_train) X_train= sc.transform(X_train) y_test = test['number of cars'] X_test = test.drop('y',axis=1) X_test = sc.transform(X_test) rf = RandomForest() rf.fit(X_train,y_train) preds = rf.predict(X_test) print(r2_score(preds,y_test)) print(mean_squared_error(preds,y_test)) </code></pre> <p>Which gives</p> <pre><code>'r2 : 0.89' 'mse : 60' </code></pre> <p>I wanted to see if a LSTM model could do better</p> <pre><code> X_train_lstm = X_train.values.reshape(X_train.shape[0],X_train.shape[1],1) X_test_lstm = X_train.values.reshape(X_train.shape[0],X_train.shape[1],1) model = Sequential() model.add(LSTM(units=64, return_sequences=False,activation='relu', input_shape (96, 1))) model.add(Dense(units=1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X_train_lstm,y_train,batch_size=64,epochs=100) lstm_preds = model.predict(X_test_lstm) print(r2_score(lstm_preds,y_test)) print(mean_squared_error(lstm_preds,y_test)) </code></pre> <p>which gives</p> <pre><code> 'r2 : -0.3' 'mse : 2100040' print(lstm_preds) [38,38.12,38.1,38,38.2,....,38] </code></pre> <p>The predicted values is basically the same value, what am I doing wrong here?</p>
<p>I think the main problem is that your <code>y_train</code> and <code>y_test</code> are not standardized, while your inputs are.</p> <p>Also, for the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM" rel="nofollow noreferrer">LSTM</a> layer, you should not change the activation; leave it at its default.</p>
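<p>A rough sketch of what standardizing the target could look like with the variables from your snippet (names like <code>y_scaler</code> are just illustrative, and the scaling is inverted before comparing with the original-scale <code>y_test</code>):</p> <pre><code>from sklearn.preprocessing import StandardScaler

# scale the target the same way the features are scaled
y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train.values.reshape(-1, 1))

model.fit(X_train_lstm, y_train_scaled, batch_size=64, epochs=100)

# invert the scaling so the predictions are comparable with y_test
lstm_preds = y_scaler.inverse_transform(model.predict(X_test_lstm))
</code></pre>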
python|tensorflow|keras
0
10,326
65,482,453
How to filter pandas dataframe column by multiple conditions
<p>I am trying to find median revenue in 2013 for US, France and Spain. My pandas dataframe looks like <a href="https://i.stack.imgur.com/SaTGx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SaTGx.png" alt="enter image description here" /></a></p> <p>I am using the following code</p> <pre><code> df[(df.year == 2013) &amp; (df.country == ['US', 'FR', 'ES'])] </code></pre> <p>and getting this error - <code>ValueError: Lengths must match to compare</code></p>
<p>To filter a value between different possibilities, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a></p> <pre><code>df[(df.year == 2013) &amp; (df.country.isin(['US', 'FR', 'ES']))] </code></pre>
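<p>Since the goal is the median revenue per country, you can then group the filtered frame — this is just a sketch, and <code>'revenue'</code> is a placeholder for whatever your actual revenue column is called:</p> <pre><code>subset = df[(df.year == 2013) &amp; (df.country.isin(['US', 'FR', 'ES']))]
subset.groupby('country')['revenue'].median()  # 'revenue' is a placeholder column name
</code></pre>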
python|pandas
3
10,327
49,823,963
Get index of one series into another in pandas
<p>I have two series, and I want to get the index of each value of one series into the other:</p> <pre><code>import pandas as pd s1 = pd.Series(list('ABCDE'), index=range(1, 6)) s2 = pd.Series(list('BDAACE')) expected_result = pd.Series([2, 4, 1, 1, 3, 5]) assert pd.some_operation(s1, s2).equals(expected_result) </code></pre> <p>I know it sounds simple but I haven't been able to find a way to do it in a vectorized way.</p>
<p>Using <code>Series</code> <code>get</code></p> <pre><code>pd.Series(s1.index,index=s1).get(s2) Out[416]: B 2 D 4 A 1 A 1 C 3 E 5 dtype: int64 </code></pre>
python|pandas
3
10,328
49,865,478
Forward propagation slow - Training time normal
<p>I'm having trouble figuring out why when I perform forward propagation my code is extremely slow. The code in question can be found here: <a href="https://github.com/rekkit/lazy_programmer_ml_course/blob/develop/05_unsupervised_deep_learning/poetry_generator_rnn.py" rel="nofollow noreferrer">https://github.com/rekkit/lazy_programmer_ml_course/blob/develop/05_unsupervised_deep_learning/poetry_generator_rnn.py</a></p> <p>I'm comparing the performance of my code to that of this: <a href="https://github.com/lazyprogrammer/machine_learning_examples/blob/master/rnn_class/srn_language_tf.py" rel="nofollow noreferrer">https://github.com/lazyprogrammer/machine_learning_examples/blob/master/rnn_class/srn_language_tf.py</a></p> <p>The difference is when I run</p> <pre><code>self.session.run(self.predict(x_batch), feed_dict={...}) </code></pre> <p>or when I run</p> <pre><code>self.returnPrediction(x_batch) </code></pre> <p>it takes about 0.14 seconds to run. Now this might not sound like a catastrophe, but that's 0.14 seconds per sentence (I'm making a RNN to predict the next word in a sentence). Since there are 1436 sentences, we're looking at about 3 minutes and 20 seconds per epoch. If I want to train 10 epochs, that's half an hour. Way more than the other code takes.</p> <p>Does anyone have an idea of what the problem might be? The only difference that I can see is that I've modularized the code.</p> <p>Thanks for the help in advance.</p>
<p>I've figured it out. Every time I call the predict method I'm rebuilding the graph. Instead, in the fit method I define a variable:</p> <pre><code>preds = self.predict(self.tfX) </code></pre> <p>and then every time I need the predictions, instead of using:</p> <pre><code>predictions = self.session.run(self.predict(x_batch), feed_dict={...}) </code></pre> <p>I use:</p> <pre><code>predictions = self.session.run(self.preds, feed_dict={...}) </code></pre>
python|python-3.x|performance|tensorflow|tensorboard
3
10,329
63,770,159
How to convert time in days, hours, minutes, and seconds to only seconds
<p>I have the following dataframe column:</p> <p><img src="https://i.stack.imgur.com/kqQfy.png" alt="Columm of Dataset" /></p> <p>I need to convert object string data from the csv column into total seconds.</p> <p>Example: 10m -&gt; 600s</p> <hr /> <p>I tried this code:</p> <pre><code>df.duration = str(datetime.timedelta(df['duration'])) </code></pre> <p>But the following error is displayed</p> <blockquote> <p>TypeError: unsupported type for timedelta days component: Series</p> </blockquote>
<ul> <li>The correct, and vectorized way to convert <code>'duration'</code> to seconds is to: <ol> <li>Convert <code>'duration'</code> to a timedelta</li> <li>Divide by <code>pd.Timedelta(seconds=1)</code></li> </ol> </li> <li>The correct way to get seconds for only the hours, minutes and seconds component is to use <code>.dt.seconds</code></li> <li>See this <a href="https://stackoverflow.com/questions/63514967">answer</a> for a thorough discussion of <code>timedelta</code> and why the <code>.total_seconds</code> method is a total accident.</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd # test data df = pd.DataFrame({'duration': ['10d 15h 23m', '10d 18h 13m']}) # convert duration to a timedelta df.duration = pd.to_timedelta(df.duration) # calculate total_seconds df['total_sec'] = df.duration / pd.Timedelta(seconds=1) # get seconds for just hours, minutes, seconds df['sec_without_days'] = df.duration.dt.seconds # display(df) duration total_sec sec_without_days 0 10 days 15:23:00 919380.0 55380 1 10 days 18:13:00 929580.0 65580 </code></pre>
python|pandas|dataframe|csv|data-analysis
3
10,330
63,076,679
Creating an aggregate columns in pandas dataframe
<p>I have a pandas dataframe as below:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'ORDER':[&quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;], 'var1':[2, 3, 1, 5],'a1_bal':[1,2,3,4], 'a1c_bal':[10,22,36,41], 'b1_bal':[1,2,33,4], 'b1c_bal':[11,22,3,4], 'm1_bal':[15,2,35,4]}) df ORDER var1 a1_bal a1c_bal b1_bal b1c_bal m1_bal 0 A 2 1 10 1 11 15 1 A 3 2 22 2 22 2 2 B 1 3 36 33 3 35 3 B 5 4 41 4 4 4 </code></pre> <p>I want to create new columns as below:</p> <pre><code>a1_final_bal = sum(a1_bal, a1c_bal) b1_final_bal = sum(b1_bal, b1c_bal) m1_final_bal = m1_bal (since we only have m1_bal field not m1c_bal, so it will renain as it is) </code></pre> <p>I don't want to hardcode this step because there might be more such columns as &quot;c_bal&quot;, &quot;m2_bal&quot;, &quot;m2c_bal&quot; etc..</p> <p>My final data should look something like below</p> <pre><code> ORDER var1 a1_bal a1c_bal b1_bal b1c_bal m1_bal a1_final_bal b1_final_bal m1_final_bal 0 A 2 1 10 1 11 15 11 12 15 1 A 3 2 22 2 22 2 24 24 2 2 B 1 3 36 33 3 35 38 36 35 3 B 5 4 41 4 4 4 45 8 4 </code></pre>
<p>You could try something like this. I am not sure if its exactly what you are looking for, but I think it should work.</p> <pre><code>dfforgroup = df.set_index(['ORDER','var1']) #Creates MultiIndex dfforgroup.columns = dfforgroup.columns.str[:2] #Takes first two letters of remaining columns df2 = dfforgroup.groupby(dfforgroup.columns,axis=1).sum().reset_index().drop(columns = ['ORDER','var1']).add_suffix('_final_bal') #groups columns by their first two letters and sums the columns up df = pd.concat([df,df2],axis=1) #concatenates new columns to original df </code></pre>
python-3.x|pandas
0
10,331
62,970,806
Expand DatasetV1Adapter shape grey scale image shape to 3 channels to make use of pretrained models
<p>I want to use the pre-trained model MobileNetV2 in order to classify the <a href="https://www.tensorflow.org/datasets/catalog/binary_alpha_digits" rel="nofollow noreferrer">Binary Alpha Data</a>. However, this data comes in shape <code>(20, 16, 1)</code> (greyscale one channel) and not as needed <code>(20, 16, 3)</code> (3 RGB channel). Actually I also have to resize, since 20x16 is neither a valid input, but I know how to do this. So my question is how can I get a 1 channel grescale (<code>DatasetV1Adapter</code>) into a 3 channel?</p> <p>My code so far:</p> <pre><code>import tensorflow as tf import os import PIL import numpy as np import matplotlib.pyplot as plt import tensorflow_datasets as tfds import tensorflow_hub as hub from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D from tensorflow.keras.preprocessing.image import ImageDataGenerator import keras_preprocessing from keras_preprocessing import image from tensorflow.python.keras.utils.version_utils import training from tensorflow.keras.optimizers import RMSprop (raw_train, raw_test, raw_validation), metadata = tfds.load( 'binary_alpha_digits', split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'], with_info=True, as_supervised=True, ) IMG_SIZE = 96 def format_example(image, label): image = tf.cast(image, tf.float32) image = image*1/255.0 image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) return image, label train = raw_train.map(format_example) validation = raw_validation.map(format_example) test = raw_test.map(format_example) BATCH_SIZE = 32 SHUFFLE_BUFFER_SIZE = 1000 train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE) test_batches = test.batch(BATCH_SIZE) validation_batches = validation.batch(BATCH_SIZE) </code></pre> <p>When I check <code>train_batches</code> I get the output:</p> <pre><code>&lt;DatasetV1Adapter shapes: ((None, 96, 96, 1), (None,)), types: (tf.float32, tf.int64)&gt; </code></pre> <p>This gives an error when trying to fit later:</p> <pre><code>ValueError: The input must have 3 channels; got `input_shape=(96, 96, 1)` </code></pre> <p>So according to <a href="https://stackoverflow.com/questions/51872412/tensorflow-numpy-image-reshape-grayscale-images">this</a> post I tried:</p> <pre><code>def load_image_into_numpy_array(image): # The function supports only grayscale images # assert len(image.shape) == 2, &quot;Not a grayscale input image&quot; last_axis = -1 dim_to_repeat = 2 repeats = 3 grscale_img_3dims = np.expand_dims(image, last_axis) training_image = np.repeat(grscale_img_3dims, repeats, dim_to_repeat).astype('uint8') assert len(training_image.shape) == 3 assert training_image.shape[-1] == 3 return training_image train_mod=load_image_into_numpy_array(raw_train) </code></pre> <p>But I get an error:</p> <pre><code>AxisError: axis 2 is out of bounds for array of dimension 1 </code></pre> <p>How can I get this into <code>input_shape=(96, 96, 3)</code>?</p>
<p>Just from looking at it, shouldn't</p> <pre><code>def format_example(image, label): image = tf.cast(image, tf.float32) image = image*1/255.0 image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) image = tf.image.grayscale_to_rgb(image) return image, label </code></pre> <p>be the easiest solution.</p>
python|tensorflow|image-processing|rgb|tensorflow-datasets
2
10,332
63,248,888
Save preprocessing Tensorflow Transform function
<p>Currently we have a model that we are going to use for our API using Tensorflow Serving. Therefore we need to transform the current API input data into the features. As the creation of the model and the usage of the model are performed in two different repos, and I don't want to have the transformations in two different repos (to keep them the same for both repos), I was reading about Tensorflow Transform to be able to use 1 function to processed both the training data and the serving data. However, I find it hard to understand how it would work in production. When I save the model, can I include saving the preprocessing function? Or where can I &quot;host&quot; this preprocessing function?</p> <p>So to be clear, I have a model that preprocesses the training data. And I want to use the same function for the serving data.</p>
<p>Anything running in TensorFlow Serving is just a TensorFlow graph, whether that's the model itself or your preprocessing steps. All you'd need to do to fold the two together is to connect the two graphs by substituting the output of the preprocessing step as the input to the model, assuming that's compatible.</p> <p>For example, suppose your model were something really simple like this, that takes an arbitrary length input and computes its L2 norm:</p> <pre><code>input = tf.placeholder(tf.float32, [None]) norm = tf.norm(input, ord=2) </code></pre> <p>And then we had a data preparation function that we wanted to apply that doubled the original input before computing the L2 norm by adding it to itself:</p> <pre><code>input = tf.placeholder(tf.float32, [None]) doubled = tf.add(input, input) </code></pre> <p>You could preprocess and do the &quot;prediction&quot; (such as it is in this toy example) in one TensorFlow Serving deployment by doing something like this:</p> <pre><code>input = tf.placeholder(tf.float32, [None]) doubled = tf.add(input, input) norm = tf.norm(doubled, ord=2) </code></pre> <p>This is not especially useful, and probably a lot less complicated than what you're doing. Hopefully it gets the idea across!</p>
tensorflow|tensorflow-serving|tensorflow-transform
1
10,333
67,800,802
numpy function to reorder along an axis
<p>Is there an easier way (np function) to achieve the following? <code>bb</code> is the output I'm looking for.</p> <pre><code>import numpy as np aa = np.arange(4*4*3).reshape(4,4,3) bb = np.stack((aa[:,:,2],aa[:,:,1],aa[:,:,0]),axis=2) </code></pre> <p>I don't think <code>np.roll</code> is applicable here, because it seems to shift sequentially.</p>
<p>You could just use -1 as the step argument on last axis while indexing to create a reverse array along that axis:</p> <pre><code>In [10]: bb = aa[:,:,::-1] </code></pre>
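<p>An equivalent alternative, if you prefer a named function, is <code>np.flip</code> on the last axis:</p> <pre><code>bb = np.flip(aa, axis=2)
</code></pre>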
python|arrays|numpy
2
10,334
67,921,404
What is the difference between the 'set' operation using loc vs iloc?
<p>What is the difference between the 'set' operation using loc vs iloc?</p> <pre><code>df.iloc[2, df.columns.get_loc('ColName')] = 3 #vs# df.loc[2, 'ColName'] = 3 </code></pre> <p>Why does the website of iloc (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html</a>) not have any set examples like those shown in loc website (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html</a>)? Is loc the preferred way?</p>
<p>There isn't much of a difference in the result. It comes down to what information you have at hand.</p> <p>If you have the index label and the column name (which is most of the time), you are supposed to use the <code>loc</code> (location) operator to assign the value.</p> <p>Whereas, as in an ordinary matrix, if you only have the integer position of the row and the column (the <code>i</code> in <code>iloc</code>), you are supposed to use <code>iloc</code> (integer-based location) for the assignment.</p> <p>A pandas DataFrame supports indexing both by integer position and by label.</p> <p>The problem arises when the index (of the rows or columns) is itself made of integers instead of strings. To make it unambiguous whether the user wants integer-based or label-based indexing, the two separate operators are provided.</p>
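<p>A small toy sketch of the two set operations side by side (not your data, just to illustrate the distinction):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ColName': [10, 20, 30]}, index=['a', 'b', 'c'])

# label-based: row label 'c', column label 'ColName'
df.loc['c', 'ColName'] = 3

# position-based: third row, and the column's integer position
df.iloc[2, df.columns.get_loc('ColName')] = 3
</code></pre> <p>Both lines write to the same cell; which one you reach for depends on whether you are holding labels or positions.</p>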
pandas
0
10,335
61,566,919
example of doing simple prediction with pytorch-lightning
<p>I have an existing model where I load some pre-trained weights and then do prediction (one image at a time) in pytorch. I am trying to basically convert it to a pytorch lightning module and am confused about a few things.</p> <p>So currently, my <code>__init__</code> method for the model looks like this:</p> <pre><code>self._load_config_file(cfg_file) # just creates the pytorch network self.create_network() self.load_weights(weights_file) self.cuda(device=0) # assumes GPU and uses one. This is probably suboptimal self.eval() # prediction mode </code></pre> <p>What I can gather from the lightning docs, I can pretty much do the same, except not to do the <code>cuda()</code> call. So something like:</p> <pre><code>self.create_network() self.load_weights(weights_file) self.freeze() # prediction mode </code></pre> <p>So, my first question is whether this is the correct way to use lightning? How would lightning know if it needs to use the GPU? I am guessing this needs to be specified somewhere.</p> <p>Now, for the prediction, I have the following setup:</p> <pre><code>def infer(frame): img = transform(frame) # apply some transformation to the input img = torch.from_numpy(img).float().unsqueeze(0).cuda(device=0) with torch.no_grad(): output = self.__call__(Variable(img)).data.cpu().numpy() return output </code></pre> <p>This is the bit that has me confused. Which functions do I need to override to make a lightning compatible prediction?</p> <p>Also, at the moment, the input comes as a numpy array. Is that something that would be possible from the lightning module or do things always have to use some sort of a dataloader?</p> <p>At some point, I want to extend this model implementation to do training as well, so want to make sure I do it right but while most examples focus on training models, a simple example of just doing prediction at production time on a single image/data point might be useful.</p> <p>I am using 0.7.5 with pytorch 1.4.0 on GPU with cuda 10.1</p>
<p><code>LightningModule</code> is a subclass of <code>torch.nn.Module</code> so the same model class will work for both inference and training. For that reason, you should probably call the <code>cuda()</code> and <code>eval()</code> methods outside of <code>__init__</code>.</p> <p>Since it's just a <code>nn.Module</code> under the hood, once you've loaded your weights you don't need to override any methods to perform inference, simply call the model instance. Here's a toy example you can use:</p> <pre class="lang-py prettyprint-override"><code>import torchvision.models as models from pytorch_lightning.core import LightningModule class MyModel(LightningModule): def __init__(self): super().__init__() self.resnet = models.resnet18(pretrained=True, progress=False) def forward(self, x): return self.resnet(x) model = MyModel().eval().cuda(device=0) </code></pre> <p>And then to actually run inference you don't need a method, just do something like:</p> <pre class="lang-py prettyprint-override"><code>for frame in video: img = transform(frame) img = torch.from_numpy(img).float().unsqueeze(0).cuda(0) output = model(img).data.cpu().numpy() # Do something with the output </code></pre> <p>The main benefit of PyTorchLighting is that you can also use the same class for training by implementing <code>training_step()</code>, <code>configure_optimizers()</code> and <code>train_dataloader()</code> on that class. You can find a simple example of that in the <a href="https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html" rel="noreferrer">PyTorchLightning docs</a>.</p>
pytorch|pytorch-lightning
9
10,336
61,277,181
Adding R-value (correlation) to scatter chart in Altair
<p>So I am playing around with the Cars dataset and am looking to add the R-value to a scatter chart. So I can use this code to produce a scatter chart using <code>transform_regression</code> to add a regression line which is great.</p> <pre><code>from vega_datasets import data import altair as alt import pandas as pd import numpy as np cars = data.cars() chart = alt.Chart(cars).mark_circle().encode( alt.X('Miles_per_Gallon', scale=alt.Scale(domain=(5,50))), y='Weight_in_lbs' ) chart + chart.transform_regression('Miles_per_Gallon','Weight_in_lbs').mark_line() </code></pre> <p>Here is the chart</p> <p><a href="https://i.stack.imgur.com/9sb6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9sb6s.png" alt="enter image description here"></a></p> <p>Then I am looking get the R-value. So can use pandas with this code as I am not sure how to get the R-value with Altair.</p> <pre><code>corl = cars[['Miles_per_Gallon','Weight_in_lbs']].corr().iloc[0,1] corl </code></pre> <p>Now I was wondering how would I go about adding the R-value on the chart as a sort of label? </p>
<p>You can do this by adding a text layer:</p> <pre><code>text = alt.Chart({'values':[{}]}).mark_text( align="left", baseline="top" ).encode( x=alt.value(5), # pixels from left y=alt.value(5), # pixels from top text=alt.value(f"r: {corl:.3f}"), ) chart + text + chart.transform_regression('Miles_per_Gallon','Weight_in_lbs').mark_line() </code></pre> <p><a href="https://i.stack.imgur.com/SfY1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SfY1g.png" alt="enter image description here"></a></p> <p>In future versions of Altair, the empty data in the chart will no longer be required.</p>
python|pandas|numpy|correlation|altair
5
10,337
68,687,299
python pandas change order and column name after merge
<p>I have merged two dataframes with multiple overlapping columns. I would like to put the columns side by side.</p> <pre><code>merge = df1.merge(df2) </code></pre> <p>For example, Current Output:</p> <pre><code>YEAR_x,DATE_x,MAX_x,MIN_x,YEAR_y,DATE_y,MAX_y,MIN_y </code></pre> <p>I want the output to be:</p> <pre><code>YEAR, YEAR_auto, DATE, DATE_auto, MAX, MAX_auto, MIN, MIN_auto </code></pre> <p>I have more than 150 columns so I don't want to do it manually. How could I do that?</p>
<p>Use <code>pd.merge</code> with <code>suffixes</code> parameter:</p> <pre><code>merge = df1.merge(df2[set(df2) &amp; set(df1)], suffixes=('', '_auto')) </code></pre> <p>To sort your columns as df1:</p> <pre><code>cols = sorted(merge.columns, key=lambda x: df1.columns.get_loc(x.split('_')[0])) </code></pre> <p>Example:</p> <pre><code>&gt;&gt;&gt; merge YEAR DATE MAX MIN YEAR_auto DATE_auto MAX_auto MIN_auto 0 2021 2021-08-06 100 0 2020 2020-08-06 50 20 &gt;&gt;&gt; merge[cols] YEAR YEAR_auto DATE DATE_auto MAX MAX_auto MIN MIN_auto 0 2021 2020 2021-08-06 2020-08-06 100 50 0 20 </code></pre>
python|pandas|merge
3
10,338
68,839,011
Python/Keras: LeakyRelu using tensorflow
<p>I am having problems installing keras. The following are giving me too much trouble to get around (even when doing updates on the terminal):</p> <pre><code>from keras.layers import Dense, Activation from keras.models import Sequential </code></pre> <p>So instead of initialising a ANN with <code>ann = Sequential()</code>, I do <code>ann = tf.keras.models.Sequential()</code>. This by importing:</p> <pre><code>import tensorflow as tf from tensorflow import keras </code></pre> <p>I would like to use LeakyReLU as an activation function. However, this one seems to be different to implement and the keras documentation is not helping me that much compared to how others tend to do.</p> <p>I've seen that ann.add(LeakyReLU(alpha=0.05)) is needed. However, what about the other parameters like unit or input_dim? How can I implement this using my code?</p> <pre><code># Initialising the ANN ann = tf.keras.models.Sequential() # Adding the input layer and the first hidden layer ann.add(tf.keras.layers.Dense(units=32, activation='relu')) # Adding the second hidden layer ann.add(tf.keras.layers.Dense(units=32, activation='relu')) # Adding the output layer ann.add(tf.keras.layers.Dense(units=1)) </code></pre>
<p>To use LeakyReLU in a layer you can do this:</p> <pre class="lang-py prettyprint-override"><code>ann.add(tf.keras.layers.Dense( units=32, activation=tf.keras.layers.LeakyReLU(alpha=0.3))) </code></pre>
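<p>An equivalent pattern, if you prefer to keep the activation as its own layer, is to add <code>LeakyReLU</code> right after a <code>Dense</code> layer that has no activation:</p> <pre><code>ann.add(tf.keras.layers.Dense(units=32))
ann.add(tf.keras.layers.LeakyReLU(alpha=0.3))
</code></pre>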
python|tensorflow|keras|deep-learning|relu
1
10,339
52,940,677
AWS Sagemaker: AttributeError: module 'pandas' has no attribute 'core'
<p>Let me prefix this by saying I'm very new to tensorflow and even newer to AWS Sagemaker.</p> <p>I have some tensorflow/keras code that I wrote and tested on a local dockerized Jupyter notebook and it runs fine. In it, I import a csv file as my input.</p> <p>I use Sagemaker to spin up a jupyter notebook instance with conda_tensorflow_p36. I modified the pandas.read_csv() code to point to my input file, now hosted on a S3 bucket.</p> <p>So I changed this line of code from</p> <pre><code>import pandas as pd data = pd.read_csv("/input.csv", encoding="latin1") </code></pre> <p>to this</p> <pre><code>import pandas as pd data = pd.read_csv("https://s3.amazonaws.com/my-sagemaker-bucket/input.csv", encoding="latin1") </code></pre> <p>and I get this error</p> <pre><code>AttributeError: module 'pandas' has no attribute 'core' </code></pre> <p>I'm not sure if it's a permissions issue. I read that as long as I name my bucket with the string "sagemaker" it should have access to it.</p>
<p>Pull our data from S3 for example:</p> <pre><code>import boto3 import io import pandas as pd # Set below parameters bucket = '&lt;bucket name&gt;' key = 'data/training/iris.csv' endpointName = 'decision-trees' # Pull our data from S3 s3 = boto3.client('s3') f = s3.get_object(Bucket=bucket, Key=key) # Make a dataframe shape = pd.read_csv(io.BytesIO(f['Body'].read()), header=None) </code></pre>
pandas|tensorflow|amazon-sagemaker
1
10,340
53,285,454
Apply scikit-learn murmurhash3_32 on a Pandas dataframe
<p>I try to apply murmurhash on a pandas dataframe. I wanted to use scikit-learn murmurhash3_32 (any other easy proposition would be appreciated). I tried</p> <pre><code>import pandas as pd from sklearn.utils.murmurhash import murmurhash3_32 df = pd.DataFrame({'a': [100, 1000], 'b': [200, 2000]}, dtype='int32') df.apply(murmurhash3_32) </code></pre> <p>But I get </p> <blockquote> <p>TypeError: ("key 0 100\n1 1000\nName: a, dtype: int32 with type class 'pandas.core.series.Series' is not supported. Explicit conversion to bytes is required", 'occurred at index a')</p> </blockquote> <p>But Scikit is supposed to handle int32: <a href="https://scikit-learn.org/dev/modules/generated/sklearn.utils.murmurhash3_32.html#sklearn.utils.murmurhash3_32" rel="nofollow noreferrer">https://scikit-learn.org/dev/modules/generated/sklearn.utils.murmurhash3_32.html#sklearn.utils.murmurhash3_32</a></p> <p>Any idea or recommendation on it?</p>
<p>Stupid mistake, not sure if I should delete my question:</p> <p><code>apply</code> will pass a whole Series to the function.</p> <p>Using <code>applymap</code> works as expected, as it passes every element to the function individually.</p>
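<p>For reference, a sketch of what that looks like with the frame from the question (casting to a plain Python int first, since <code>murmurhash3_32</code> expects an int, bytes or unicode key):</p> <pre><code>df.applymap(lambda x: murmurhash3_32(int(x)))
</code></pre>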
python|pandas|scikit-learn|murmurhash
1
10,341
53,154,192
Sum the duplicate rows of particular columns in dataframe
<p>I want to add the particular columns (C, D, E, F, G) based on the duplicate rows of column B. Whereas the remaining non-duplicate rows unchanged. The output of column A must be the first index of duplicate rows.</p> <p>I have a dataframe as follows:</p> <pre><code>A B C D E F G box1 0487 1 1 1 box2 0487 1 1 blue 0478 1 1 1 gray 0478 1 1 1 1 gray 0478 1 1 1 flat 8704 1 1 1 clay 8704 1 1 dark 8740 1 1 1 1 1 late 4087 1 1 1 </code></pre> <p>I want the output as follows:</p> <pre><code>A B C D E F G box1 0487 1 1 1 1 1 blue 0478 2 2 2 2 2 flat 8704 1 1 1 2 dark 8740 1 1 1 1 1 late 4087 1 1 1 </code></pre> <p>I am pleased to hear some suggestions.</p>
<p>Create dictionary of columns names with aggregate functions and pass to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="noreferrer"><code>agg</code></a>, also here is necessary <code>min_count=1</code> to <code>sum</code> for avoid <code>0</code> for sum <code>NaN</code>s values:</p> <pre><code>L = ['C','D','E','F','G'] d = {**dict.fromkeys(L, lambda x: x.sum(min_count=1)), **{'A':'first'}} df = df.groupby('B', as_index=False, sort=False).agg(d).reindex(columns=df.columns) print (df) A B C D E F G 0 box1 0487 1.0 1.0 1.0 1.0 1.0 1 blue 0478 2.0 2.0 2.0 2.0 2.0 2 flat 8704 1.0 1.0 1.0 NaN 2.0 3 dark 8740 1.0 1.0 1.0 1.0 1.0 4 late 4087 1.0 NaN 1.0 NaN 1.0 </code></pre> <hr> <pre><code>d = {**dict.fromkeys(L, 'sum'), **{'A':'first'}} df = df.groupby('B', as_index=False, sort=False).agg(d).reindex(columns=df.columns) print (df) A B C D E F G 0 box1 0487 1.0 1.0 1.0 1.0 1.0 1 blue 0478 2.0 2.0 2.0 2.0 2.0 2 flat 8704 1.0 1.0 1.0 0.0 2.0 3 dark 8740 1.0 1.0 1.0 1.0 1.0 4 late 4087 1.0 0.0 1.0 0.0 1.0 </code></pre>
python|pandas|dataframe
7
10,342
53,016,253
Pandas DataFrame parsing for integers
<p>This is how my df looks</p> <pre><code>person_a done 37918 , 37925 to37932 ,37934 to 37939 (17 ) person_b Done 37940 to 37950 (12 ) and 38101 to 38109 ( 9 ) </code></pre> <p>(Couldn't find a good way to show them side by side, person_a and person_b are columns). I need to parse all integers outside the <code>()</code> and then include all values including those between <code>to</code> into a new dataframe (<code>video_df</code>). The number within the <code>()</code> are small &lt; 1000 while the outside ones are > 10000</p> <p>I know I can do extract the numbers outside the <code>()</code> </p> <pre><code>video_numbers = df['person_a'].str.extractall(r'(\d+)')[0] video_df[person_a] = video_numbers[video_numbers.str.len() &gt; 4] </code></pre> <p>but not sure how to expand with <code>to</code></p> <p>My result should be <code>video_df</code></p> <pre><code>person_a person_b 37918 37940 37925 37941 37926 . . 37950 . 38101 37932 . 37934 . . 38109 . 0 37939 0 </code></pre> <p>Fill empty rows with 0. Let me know if anything is unclear.</p>
<p>maybe not so short but i think with some regex and list manipulation it is possible. first i extracted the numbers from the string for each person </p> <pre><code>df1.replace(to_replace=['\(\d+ \)','\( \d+ \)','Done','done'],value='', regex=True, inplace=True) df1.replace(to_replace=['to'],value='-', regex=True, inplace=True) df1.replace(to_replace=['and'],value=',', regex=True, inplace=True) df1.person_a = df1.person_a.str.split(',') df1.person_b = df1.person_b.str.split(',') </code></pre> <p><strong>df1</strong></p> <pre><code> person_a person_b 0 [ 37918 , 37925 -37932 , 37934 - 37939 ] [ 37940 - 37950 , 38101 - 38109 ] </code></pre> <hr> <p>second step is create df for each person with the ranges</p> <pre><code>person_a = pd.DataFrame(df1['person_a'].values.tolist()).T.rename(columns={0:'person_a'}) person_a = person_a.person_a.str.split('-', expand=True) \ .rename(columns={0:'start', 1:'end'}) \ .convert_objects(convert_numeric=True) \ .fillna(0) person_b = pd.DataFrame(df1['person_b'].values.tolist()).T.rename(columns={0:'person_b'}) person_b = person_b.person_b.str.split('-', expand=True) \ .rename(columns={0:'start', 1:'end'}) \ .convert_objects(convert_numeric=True) \ .fillna(0) </code></pre> <p><strong>person_a</strong></p> <pre><code> start end 0 37918 0.0 1 37925 37932.0 2 37934 37939.0 </code></pre> <p><strong>person_b</strong></p> <pre><code> start end 0 37940 37950 1 38101 38109 </code></pre> <hr> <p>final step is define a function to create list of the numbers for each person</p> <pre><code>def ranges(df): x = [] for i in range(df.shape[0]): if df.end[i] == 0: x.append(list(range(int(df.start[i]), int(df.start[i])+1))) else: x.append(list(range(int(df.start[i]), int(df.end[i])+1))) x = [val for sublist in x for val in sublist] return x df = pd.DataFrame({'person_a':pd.Series(ranges(person_a)),'person_b':pd.Series(ranges(person_b))}).fillna(0) </code></pre> <hr> <p><strong>df</strong></p> <pre><code> person_a person_b 0 37918.0 37940 1 37925.0 37941 2 37926.0 37942 3 37927.0 37943 4 37928.0 37944 5 37929.0 37945 6 37930.0 37946 7 37931.0 37947 8 37932.0 37948 9 37934.0 37949 10 37935.0 37950 11 37936.0 38101 12 37937.0 38102 13 37938.0 38103 14 37939.0 38104 15 0.0 38105 16 0.0 38106 17 0.0 38107 18 0.0 38108 19 0.0 38109 </code></pre>
python|pandas
1
10,343
53,230,880
Python: how to replicate the same row of a matrix?
<p>How can I copy each row of an array <em>n</em> times?</p> <p>So if I have a <code>2x3</code> array, and I copy each row 3 times, I will have a <code>6x3</code> array. For example, I need to convert <code>A</code> to <code>B</code> below:</p> <pre><code>A = np.array([[1, 2, 3], [4, 5, 6]]) B = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3], [4, 5, 6], [4, 5, 6], [4, 5, 6]]) </code></pre> <p>If possible, I would like to avoid a <code>for</code> loop.</p>
<p>If I read correctly, this is probably what you want assuming you started with <code>mat</code>:</p> <pre><code>transformed = np.concatenate([np.vstack([mat[i, :]] * 3).T for i in range(mat.shape[0])], axis=1).T </code></pre> <p>Here's a verifiable example:</p> <pre><code># mocking a starting array import string mat = np.random.choice(list(string.ascii_lowercase), size=(5,3)) &gt;&gt;&gt; mat array([['s', 'r', 'e'], ['g', 'v', 'c'], ['i', 'b', 'd'], ['f', 'g', 's'], ['o', 'm', 'w']], dtype='&lt;U1') </code></pre> <p>Transform it:</p> <pre><code># this repeats it 3 times for sake of displaying transformed = np.concatenate([np.vstack([mat[i, :]] * 3).T for i in range(mat.shape[0])], axis=1).T &gt;&gt;&gt; transformed array([['s', 'r', 'e'], ['s', 'r', 'e'], ['s', 'r', 'e'], ['g', 'v', 'c'], ['g', 'v', 'c'], ['g', 'v', 'c'], ['i', 'b', 'd'], ['i', 'b', 'd'], ['i', 'b', 'd'], ['f', 'g', 's'], ['f', 'g', 's'], ['f', 'g', 's'], ['o', 'm', 'w'], ['o', 'm', 'w'], ['o', 'm', 'w']], dtype='&lt;U1') </code></pre> <p>The idea of this is to use vstack to stack each row onto itself multiple times, and then concatenate the results to get the final array.</p>
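<p>For the numeric example in the question, <code>np.repeat</code> along the first axis gives the same row-wise repetition in a single call:</p> <pre><code>B = np.repeat(A, 3, axis=0)  # repeats each row of A three times in a row
</code></pre>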
python|arrays|numpy|matrix
1
10,344
65,600,720
Computing network properties for subset of nodes
<p><strong>Context:</strong> I have two panda dataframes that characterize a network, <code>df_nodes</code> and <code>df_edges</code>. They can be matched through a shared identfier, <code>id</code>.</p> <p><code>df_nodes</code> looks roughly like this:</p> <pre><code> id: att_1: att_2: att_3: id1 red ... ... id2 red ... ... id3 blue ... ... </code></pre> <p><code>df_edges</code>characterizes a (weighted) directed network, but I am interested in the (weighted) undirected representation for now.</p> <pre><code> id_from: id_to: weight: id1 id2 0.5 . id1 id3 0.2 id2 id4 0.4 </code></pre> <p>Two features are as follows:</p> <ul> <li><p>The same node sometimes appears the <code>id_from</code> column and at other times in <code>id_to</code> (in the example, this would be <code>id_4</code>; in practice there are millions of edges).</p> </li> <li><p>More importantly, <code>df_edges</code> includes connections to nodes that are <em>not</em> in <code>df_nodes</code>, ie I don't have any attribute data for those.</p> </li> </ul> <p><strong>Objective:</strong> I would like to create a <code>nx.Graph()</code> object that only includes edges between those nodes for which I have attributes data, ie which are in <code>df_nodes</code>. I then want to add (selected) attributes data in <code>df_nodes</code>, and compute statistics such as the average (standard deviation, ...) weighted degree for the group of nodes with some attribute value (eg where <code>df_nodes[att_1]='red'</code>).</p> <p><strong>Approach thus far</strong>: I am new to network analysis, so probably what I'm doing is misguided. I first create <code>G</code></p> <pre class="lang-py prettyprint-override"><code>G = nx.from_pandas_edgelist(df_edges, 'id_from', 'id_to', 'weight', nx.Graph()) </code></pre> <p>then tried adding the attribute of interest</p> <pre class="lang-py prettyprint-override"><code>nx.set_node_attributes(G, df_nodes[['id','att_1',]].set_index('id').to_dict('index'),'id') </code></pre> <p>I thought I could then use something like the following to filter out the nodes that meet an attribute value.</p> <pre class="lang-py prettyprint-override"><code>nodes_subset = [x for x,y in G.nodes(data=True) if y['att_1']='red'] </code></pre> <p>But (i) doing so throws a key error, presumably because many nodes don't even have <code>att_1</code>, and (ii) the approach seems very inefficient.</p> <p>I'd be very grateful for any help on how to achieve the objective (and do so efficiently, given the size of the actual data)!</p>
<p>I expect that filtering a Pandas dataframe will be quicker than filtering a Networkx graph. So I would try the following:</p> <p>Create a dictionary of nodes in the attribute table:</p> <pre><code>nodes_with_attributes = {x:0 for x in df_nodes['id'].values} </code></pre> <p>(Lookups in a dictionary are much faster than finding an element in a list, at the cost of memory.)</p> <p>Then filter the edges:</p> <pre><code>df_filtered_edges = df_edges[ df_edges['id_from'].isin(nodes_with_attributes) &amp; df_edges['id_to'].isin(nodes_with_attributes)] </code></pre> <p>Then you can make the filtered graph directly from the filtered dataframe, as sketched below.</p>
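<p>A rough sketch of those last steps, tying it back to the stated objective (column and attribute names taken from the question; adjust to your real data):</p> <pre><code>import networkx as nx
import numpy as np

G = nx.from_pandas_edgelist(df_filtered_edges, 'id_from', 'id_to', 'weight')
nx.set_node_attributes(G, df_nodes.set_index('id')['att_1'].to_dict(), 'att_1')

# average and spread of the weighted degree for the nodes with att_1 == 'red'
red_nodes = [n for n, a in G.nodes(data=True) if a.get('att_1') == 'red']
red_degrees = [d for _, d in G.degree(red_nodes, weight='weight')]
print(np.mean(red_degrees), np.std(red_degrees))
</code></pre>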
pandas|networkx
0
10,345
65,785,702
Transpose dataframe with respect to a column without duplicate columns
<p>I have a DataFrame which looks like:</p> <pre><code> ftr_1 ftr_2 ftr_3 ftr_4 1 0.1 A 10 2 0.2 A 11 3 0.3 B 12 4 0.4 B 13 5 0.5 C 14 6 0.6 C 15 7 0.7 D 16 8 0.8 D 17 </code></pre> <p>Now I want to transpose this DataFrame so that my columns becomes rows/index and my ftr_3 column becomes columns as shown below:</p> <pre><code> A B C D ftr_1 1 3 5 7 ftr_1 2 4 6 8 ftr_2 0.1 0.3 0.5 0.7 ftr_2 0.2 0.4 0.6 0.8 ftr_4 10 12 14 16 ftr_4 11 13 15 17 </code></pre> <p>I want to transpose with respect to ftr_3 column but don't want to have duplicate columns at the same time don't want to lose the data also.</p> <p>I tried the following approach but ended up having duplicate columns:-</p> <pre><code> df.set_index(['ftr_3'],inplace=True,drop=True) df = df.T </code></pre> <p>This may be a simple pivot but i am stuck at this. Please help and thanks in advance.</p>
<p>You can try:</p> <pre><code>df.set_index([&quot;ftr_3&quot;,df.groupby(&quot;ftr_3&quot;).cumcount()]).unstack().T.droplevel(1) </code></pre> <hr /> <pre><code>ftr_3 A B C D ftr_1 1.0 3.0 5.0 7.0 ftr_1 2.0 4.0 6.0 8.0 ftr_2 0.1 0.3 0.5 0.7 ftr_2 0.2 0.4 0.6 0.8 ftr_4 10.0 12.0 14.0 16.0 ftr_4 11.0 13.0 15.0 17.0 </code></pre> <p>To remove the index name:</p> <pre><code>(df.set_index([&quot;ftr_3&quot;,df.groupby(&quot;ftr_3&quot;).cumcount()]).unstack() .T.droplevel(1).rename_axis(None,axis=1)) A B C D ftr_1 1.0 3.0 5.0 7.0 ftr_1 2.0 4.0 6.0 8.0 ftr_2 0.1 0.3 0.5 0.7 ftr_2 0.2 0.4 0.6 0.8 ftr_4 10.0 12.0 14.0 16.0 ftr_4 11.0 13.0 15.0 17.0 </code></pre>
python|pandas|matrix
4
10,346
63,429,459
Converting VTK image (.vti) data to VTK poly (.vtp) data
<p>I'm trying to take some VTK image data generated from a 3-D <code>numpy</code> array and convert it into poly data so it can be read by a package that only takes .vtp as an input format. I chose to use the marching cubes algorithm to take my point/node data as input and give poly data as an output. The data is segmented into two phases (0 = black, 255 = white), so only one contour is necessary. I tried using the <code>vtkPolyDataReader</code> class to create an object for the <code>vtkMarchingCubes</code> class, then using <code>vtkPolyDataWriter</code> to take the contoured marching cubes object and save it as a VTP file:</p> <pre><code>import vtk input = 'mydata.vti' reader = vtk.vtkPolydataReader() reader.SetFileName(input) reader.Update() contour = vtk.vtkMarchingCubes() contour.SetInputConnection(reader.GetOutputPort()) contour.SetValue(0, 128.) contour.Update() writer = vtk.vtkPolyDataWriter() writer.SetInputData(contour.GetOutput()) writer.SetFileName('mydata.vtp') writer.Update() writer.Write() </code></pre> <p>When I run the code, it takes much less time than it ought to (the input file is about 2 GB), and the VTP file the code creates is less than 1 KB. I've been banging my head against a wall over this and poring over the VTK documentation and some provided examples, but I can't figure out what I've done wrong.</p>
<p>To read a .vti file you need to use vtk.vtkXMLImageDataReader. You are trying to read an image file with a vtk.vtkPolyDataReader, which is designed for reading surface meshes.</p>
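<p>A minimal sketch of that change, keeping the contour step from the question as it is:</p> <pre><code>reader = vtk.vtkXMLImageDataReader()
reader.SetFileName('mydata.vti')
reader.Update()

contour = vtk.vtkMarchingCubes()
contour.SetInputConnection(reader.GetOutputPort())
contour.SetValue(0, 128.)
contour.Update()
</code></pre> <p>If the package you feed the result to expects the XML .vtp format rather than the legacy format, writing with <code>vtk.vtkXMLPolyDataWriter</code> instead of <code>vtk.vtkPolyDataWriter</code> is worth trying as well.</p>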
python|numpy|vtk
0
10,347
63,677,157
Check if column value is present in Dictionary Value
<p>I want to check whether each <code>colval</code> item is present in the <code>values</code> of the dictionary <code>master_colors</code>.<br /> If it is present, then append the corresponding <code>key</code> of that value; otherwise append the <code>colval</code> item itself.</p> <p><strong>CODE</strong><br /> This is what I did</p> <pre><code>colormap = [] for col in colval: for k,v in master_colors.items(): for x in v: if col == x: colormap.append(k) else: colormap.append(col) </code></pre> <p>But this gives me a <code>len(colormap)</code> of more than <code>1000</code>, when it should actually be <code>45</code>, which is the length of <code>colval</code>.</p>
<p>The problem is that colormap.append(col) is in the innermost loop. For each colval value, it's iterating through every value in the master_colors dict and every time it doesn't match that particular value, it appends colval. Instead you need to wait until you iterate through the entire dict and confirm that there's no match for the current colval value. Only then should you append that colval value.</p> <p>Also, the way you've written it now, it's case-sensitive. If you change both strings you're comparing to lowercase (as below), it works fine.</p> <pre><code>colormap = [] for col in colval: match = False for k,v in master_colors.items(): for x in v: if col.lower() == x.lower(): colormap.append(k) match = True if not match: colormap.append(col) </code></pre>
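<p>As a side note, if every colour string maps to a single key, the nested loops can be avoided entirely by building a reverse lookup dictionary once. A small sketch (the lowercasing mirrors the case-insensitive comparison above; the one-value-one-key assumption is mine):</p>

<pre><code>lookup = {x.lower(): k for k, v in master_colors.items() for x in v}
colormap = [lookup.get(col.lower(), col) for col in colval]
</code></pre>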
python|pandas|for-loop
1
10,348
63,552,224
How to search column elements and corresponding mappings in Python Pandas?
<p>I have a dataframe <strong>df1</strong> such as the following that has a list of tags.</p> <pre><code> tags 0 label 0 document 0 text 0 paper 0 poster ... 21600 wood 21600 hot tub 21600 tub 21600 terrace 21600 blossom </code></pre> <p>There's another dataframe <strong>df2</strong> that has mappings to the tags present in df mapped to a column name 'name'.</p> <pre><code> name iab 0 abies Nature and Wildlife 1 absinthe Food &amp; Drink 2 abyssinian Pets 3 accessories Style &amp; Fashion 4 accessory Style &amp; Fashion ... ... ... ... ... 1595 rows × 4 columns </code></pre> <p>Essentially, the idea is to search the column 'name' in df2 that correspond to the tags in df1 to find corresponding 'iab' mappings and output a CSV that has two columns - tags and it's corresponding 'iab' mappings.</p> <p>The Output would look something like this :</p> <pre><code> tags iab 0 label &lt;corresponding iab mapping to 'name' found in df2&gt; 0 document 0 text 0 paper 0 poster ... 21600 wood 21600 hot tub 21600 tub 21600 terrace 21600 blossom </code></pre> <p>I need help in achieving this. Thank you in advance!</p> <p>Note:</p> <p>What I tried is</p> <pre><code> df_iab[df_iab['name'].isin(df['image_CONTAINS_OBJECT'])] </code></pre> <p>But that would only cut down df2 to 'iab' that match 'tags' but not really perform a search and map found values.</p>
<p>Use a left merge, joining the <code>tags</code> column of df1 onto the <code>name</code> column of df2, so every tag keeps its row and unmatched tags simply get <code>NaN</code> in the mapped columns:</p> <pre><code>new_df = df1.merge(df2, how='left', left_on='tags', right_on='name') </code></pre>
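<p>To end up with just the two requested columns and write them out, something like this should work (the output filename is an arbitrary choice):</p>

<pre><code>new_df[['tags', 'iab']].to_csv('tags_iab_mapping.csv', index=False)
</code></pre>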
python|pandas|dataframe|search|mapping
1
10,349
63,405,508
Error : module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'
<p>Hi I'm building cycleGan below are the code that makes the as no attribute <code>'_TensorLike'</code> errors.</p> <p>my version of <code>keras is 2.3.1 , tensorflow is 2.3</code>.</p> <p>lots people suggest to replace with &quot;<code>from tensorflow.karas......</code> but this can't work with</p> <pre><code>from keras_contrib.layers.normalization.instancenormalization import InstanceNormalization. </code></pre> <p>and I'm not sure where is the mistake,I really need you guys help! thank you</p> <p>Code:</p> <pre><code>from random import random from numpy import load from numpy import zeros from numpy import ones from numpy import asarray from numpy.random import randint from keras.optimizers import Adam from keras.initializers import RandomNormal from keras.models import Model from keras.models import Input from keras.layers import Conv2D from keras.layers import Conv2DTranspose from keras.layers import LeakyReLU from keras.layers import Activation from keras.layers import Concatenate from keras_contrib.layers.normalization.instancenormalization import InstanceNormalization from matplotlib import pyplot # define the discriminator model def define_discriminator(image_shape): # weight initialization init = RandomNormal(stddev=0.02) # source image input in_image = Input(shape=image_shape) # C64 d = Conv2D(64, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(in_image) d = LeakyReLU(alpha=0.2)(d) # C128 d = Conv2D(128, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d) d = InstanceNormalization(axis=-1)(d) d = LeakyReLU(alpha=0.2)(d) # C256 d = Conv2D(256, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d) d = InstanceNormalization(axis=-1)(d) d = LeakyReLU(alpha=0.2)(d) # C512 d = Conv2D(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d) d = InstanceNormalization(axis=-1)(d) d = LeakyReLU(alpha=0.2)(d) # second last output layer d = Conv2D(512, (4,4), padding='same', kernel_initializer=init)(d) d = InstanceNormalization(axis=-1)(d) d = LeakyReLU(alpha=0.2)(d) # patch output patch_out = Conv2D(1, (4,4), padding='same', kernel_initializer=init)(d) # define model model = Model(in_image, patch_out) # compile model model.compile(loss='mse', optimizer=Adam(lr=0.0002, beta_1=0.5), loss_weights=[0.5]) return model # generator a resnet block def resnet_block(n_filters, input_layer): # weight initialization init = RandomNormal(stddev=0.02) # first layer convolutional layer g = Conv2D(n_filters, (3,3), padding='same', kernel_initializer=init)(input_layer) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # second convolutional layer g = Conv2D(n_filters, (3,3), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) # concatenate merge channel-wise with input layer g = Concatenate()([g, input_layer]) return g # define the standalone generator model def define_generator(image_shape, n_resnet=9): # weight initialization init = RandomNormal(stddev=0.02) # image input in_image = Input(shape=image_shape) # c7s1-64 g = Conv2D(64, (7,7), padding='same', kernel_initializer=init)(in_image) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # d128 g = Conv2D(128, (3,3), strides=(2,2), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # d256 g = Conv2D(256, (3,3), strides=(2,2), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # R256 for _ in range(n_resnet): g = resnet_block(256, 
g) # u128 g = Conv2DTranspose(128, (3,3), strides=(2,2), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # u64 g = Conv2DTranspose(64, (3,3), strides=(2,2), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) g = Activation('relu')(g) # c7s1-3 g = Conv2D(3, (7,7), padding='same', kernel_initializer=init)(g) g = InstanceNormalization(axis=-1)(g) out_image = Activation('tanh')(g) # define model model = Model(in_image, out_image) return model # define a composite model for updating generators by adversarial and cycle loss def define_composite_model(g_model_1, d_model, g_model_2, image_shape): # ensure the model we're updating is trainable g_model_1.trainable = True # mark discriminator as not trainable d_model.trainable = False # mark other generator model as not trainable g_model_2.trainable = False # discriminator element input_gen = Input(shape=image_shape) gen1_out = g_model_1(input_gen) output_d = d_model(gen1_out) # identity element input_id = Input(shape=image_shape) output_id = g_model_1(input_id) # forward cycle output_f = g_model_2(gen1_out) # backward cycle gen2_out = g_model_2(input_id) output_b = g_model_1(gen2_out) # define model graph model = Model([input_gen, input_id], [output_d, output_id, output_f, output_b]) # define optimization algorithm configuration opt = Adam(lr=0.0002, beta_1=0.5) # compile model with weighting of least squares loss and L1 loss model.compile(loss=['mse', 'mae', 'mae', 'mae'], loss_weights=[1, 5, 10, 10], optimizer=opt) return model # load and prepare training images def load_real_samples(filename): # load the dataset data = load(filename) # unpack arrays X1, X2 = data['arr_0'], data['arr_1'] # scale from [0,255] to [-1,1] X1 = (X1 - 127.5) / 127.5 X2 = (X2 - 127.5) / 127.5 return [X1, X2] # select a batch of random samples, returns images and target def generate_real_samples(dataset, n_samples, patch_shape): # choose random instances ix = randint(0, dataset.shape[0], n_samples) # retrieve selected images X = dataset[ix] # generate 'real' class labels (1) y = ones((n_samples, patch_shape, patch_shape, 1)) return X, y # generate a batch of images, returns images and targets def generate_fake_samples(g_model, dataset, patch_shape): # generate fake instance X = g_model.predict(dataset) # create 'fake' class labels (0) y = zeros((len(X), patch_shape, patch_shape, 1)) return X, y # save the generator models to file def save_models(step, g_model_AtoB, g_model_BtoA): # save the first generator model filename1 = 'g_model_AtoB_%06d.h5' % (step+1) g_model_AtoB.save(filename1) # save the second generator model filename2 = 'g_model_BtoA_%06d.h5' % (step+1) g_model_BtoA.save(filename2) print('&gt;Saved: %s and %s' % (filename1, filename2)) # generate samples and save as a plot and save the model def summarize_performance(step, g_model, trainX, name, n_samples=5): # select a sample of input images X_in, _ = generate_real_samples(trainX, n_samples, 0) # generate translated images X_out, _ = generate_fake_samples(g_model, X_in, 0) # scale all pixels from [-1,1] to [0,1] X_in = (X_in + 1) / 2.0 X_out = (X_out + 1) / 2.0 # plot real images for i in range(n_samples): pyplot.subplot(2, n_samples, 1 + i) pyplot.axis('off') pyplot.imshow(X_in[i]) # plot translated image for i in range(n_samples): pyplot.subplot(2, n_samples, 1 + n_samples + i) pyplot.axis('off') pyplot.imshow(X_out[i]) # save plot to file filename1 = '%s_generated_plot_%06d.png' % (name, (step+1)) 
pyplot.savefig(filename1) pyplot.close() # update image pool for fake images def update_image_pool(pool, images, max_size=50): selected = list() for image in images: if len(pool) &lt; max_size: # stock the pool pool.append(image) selected.append(image) elif random() &lt; 0.5: # use image, but don't add it to the pool selected.append(image) else: # replace an existing image and use replaced image ix = randint(0, len(pool)) selected.append(pool[ix]) pool[ix] = image return asarray(selected) # train cyclegan models def train(d_model_A, d_model_B, g_model_AtoB, g_model_BtoA, c_model_AtoB, c_model_BtoA, dataset): # define properties of the training run n_epochs, n_batch, = 100, 1 # determine the output square shape of the discriminator n_patch = d_model_A.output_shape[1] # unpack dataset trainA, trainB = dataset # prepare image pool for fakes poolA, poolB = list(), list() # calculate the number of batches per training epoch bat_per_epo = int(len(trainA) / n_batch) # calculate the number of training iterations n_steps = bat_per_epo * n_epochs # manually enumerate epochs for i in range(n_steps): # select a batch of real samples X_realA, y_realA = generate_real_samples(trainA, n_batch, n_patch) X_realB, y_realB = generate_real_samples(trainB, n_batch, n_patch) # generate a batch of fake samples X_fakeA, y_fakeA = generate_fake_samples(g_model_BtoA, X_realB, n_patch) X_fakeB, y_fakeB = generate_fake_samples(g_model_AtoB, X_realA, n_patch) # update fakes from pool X_fakeA = update_image_pool(poolA, X_fakeA) X_fakeB = update_image_pool(poolB, X_fakeB) # update generator B-&gt;A via adversarial and cycle loss g_loss2, _, _, _, _ = c_model_BtoA.train_on_batch([X_realB, X_realA], [y_realA, X_realA, X_realB, X_realA]) # update discriminator for A -&gt; [real/fake] dA_loss1 = d_model_A.train_on_batch(X_realA, y_realA) dA_loss2 = d_model_A.train_on_batch(X_fakeA, y_fakeA) # update generator A-&gt;B via adversarial and cycle loss g_loss1, _, _, _, _ = c_model_AtoB.train_on_batch([X_realA, X_realB], [y_realB, X_realB, X_realA, X_realB]) # update discriminator for B -&gt; [real/fake] dB_loss1 = d_model_B.train_on_batch(X_realB, y_realB) dB_loss2 = d_model_B.train_on_batch(X_fakeB, y_fakeB) # summarize performance print('&gt;%d, dA[%.3f,%.3f] dB[%.3f,%.3f] g[%.3f,%.3f]' % (i+1, dA_loss1,dA_loss2, dB_loss1,dB_loss2, g_loss1,g_loss2)) # evaluate the model performance every so often if (i+1) % (bat_per_epo * 1) == 0: # plot A-&gt;B translation summarize_performance(i, g_model_AtoB, trainA, 'AtoB') # plot B-&gt;A translation summarize_performance(i, g_model_BtoA, trainB, 'BtoA') if (i+1) % (bat_per_epo * 5) == 0: # save the models save_models(i, g_model_AtoB, g_model_BtoA) # load image data dataset = load_real_samples('horse2zebra_256.npz') print('Loaded', dataset[0].shape, dataset[1].shape) # define input shape based on the loaded dataset image_shape = dataset[0].shape[1:] # generator: A -&gt; B g_model_AtoB = define_generator(image_shape) # generator: B -&gt; A g_model_BtoA = define_generator(image_shape) # discriminator: A -&gt; [real/fake] d_model_A = define_discriminator(image_shape) # discriminator: B -&gt; [real/fake] d_model_B = define_discriminator(image_shape) # composite: A -&gt; B -&gt; [real/fake, A] c_model_AtoB = define_composite_model(g_model_AtoB, d_model_B, g_model_BtoA, image_shape) # composite: B -&gt; A -&gt; [real/fake, B] c_model_BtoA = define_composite_model(g_model_BtoA, d_model_A, g_model_AtoB, image_shape) # train models train(d_model_A, d_model_B, g_model_AtoB, g_model_BtoA, 
c_model_AtoB, c_model_BtoA, dataset) </code></pre> <p>Error Message:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-42-fd5a1040b602&gt; in &lt;module&gt; 4 image_shape = dataset[0].shape[1:] 5 # generator: A -&gt; B ----&gt; 6 g_model_AtoB = define_generator(image_shape) 7 # generator: B -&gt; A 8 g_model_BtoA = define_generator(image_shape) &lt;ipython-input-41-1fea83fe6287&gt; in define_generator(image_shape, n_resnet) 72 in_image = Input(shape=image_shape) 73 # c7s1-64 ---&gt; 74 g = Conv2D(64, (7,7), padding='same', kernel_initializer=init)(in_image) 75 g = InstanceNormalization(axis=-1)(g) 76 g = Activation('relu')(g) ~/opt/anaconda3/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in symbolic_fn_wrapper(*args, **kwargs) 73 if _SYMBOLIC_SCOPE.value: 74 with get_graph().as_default(): ---&gt; 75 return func(*args, **kwargs) 76 else: 77 return func(*args, **kwargs) ~/opt/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs) 444 # Raise exceptions in case the input is not compatible 445 # with the input_spec specified in the layer constructor. --&gt; 446 self.assert_input_compatibility(inputs) 447 448 # Collect input shapes to build layer. ~/opt/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in assert_input_compatibility(self, inputs) 308 for x in inputs: 309 try: --&gt; 310 K.is_keras_tensor(x) 311 except ValueError: 312 raise ValueError('Layer ' + self.name + ' was called with ' ~/opt/anaconda3/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in is_keras_tensor(x) 693 ``` 694 &quot;&quot;&quot; --&gt; 695 if not is_tensor(x): 696 raise ValueError('Unexpectedly found an instance of type `' + 697 str(type(x)) + '`. ' ~/opt/anaconda3/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in is_tensor(x) 701 702 def is_tensor(x): --&gt; 703 return isinstance(x, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(x) 704 705 AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike' </code></pre>
<p>My scipy installation was outdated, which apparently was the problem. Upgrading it solved the error for me:<br /> <code>pip install --upgrade scipy</code></p>
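<p>If upgrading scipy does not help in your environment: this particular <code>_TensorLike</code> error is also often reported when the standalone <code>keras</code> 2.3.x package is mixed with TensorFlow 2.2+. A possible workaround, entirely an assumption on my part rather than part of the fix above, is to stay on <code>tf.keras</code> throughout and take <code>InstanceNormalization</code> from the <code>tensorflow-addons</code> package instead of <code>keras_contrib</code>:</p>

<pre><code>import tensorflow as tf
import tensorflow_addons as tfa  # assumes: pip install tensorflow-addons

# minimal check that the tf.keras + tfa combination builds a layer stack
inp = tf.keras.Input(shape=(256, 256, 3))
x = tf.keras.layers.Conv2D(64, (4, 4), strides=(2, 2), padding='same')(inp)
x = tfa.layers.InstanceNormalization(axis=-1)(x)
model = tf.keras.Model(inp, x)
model.summary()
</code></pre>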
tensorflow2.0|normalization|tf.keras|generative-adversarial-network
0
10,350
63,441,712
Error: its rank is undefined, but the layer requires a defined rank
<p>I have my tf.keras model feeding from a <code>tf.data.Dataset.from_generator</code> feature size <code>(224, 224, 1)</code> and label size <code>(1, 265)</code> as I have <code>265 CLASSES</code>. My batch size is <code>64</code>, returned feature size is <code>(64, 244, 244, 1)</code> and label size <code>(64, 265)</code></p> <p>below lies my training model: <code>IM_SIZE = (224, 224, 1)</code> while <code>DO_FINE_TUNING</code> has been set to <code>True</code> and <code>FINE_TUNE_AT = 40</code></p> <pre><code>def model_defenition(model_type='ResNet50'): if model_type == 'ResNet50': base_model = tf.keras.applications.ResNet50( include_top = False, weights='imagenet' ) print(f'num layers in base model: {len(base_model.layers)}') base_model.trainable = DO_FINE_TUNING for layer in base_model.layers[:FINE_TUNE_AT]: layer.trainable = False model = tf.keras.Sequential([ tf.keras.layers.InputLayer(input_shape=IM_SIZE), tf.keras.layers.Conv2D(filters=3, kernel_size=(3, 3), padding='same'), base_model, tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(units = len(CLASSES), activation=tf.nn.softmax) ]) return model model = model_defenition(model_type='ResNet50') model.compile(optimizer=OPTIMIZER, loss=LOSS_FN, metrics=METRICS_LIST) model.summary() </code></pre> <p>When I cal the model.fit function as per below</p> <pre><code>model.fit( train_ds, epochs=EPOCHS, steps_per_epoch=len(df_train)//BATCH_SIZE, validation_data=valid_ds, batch_size=BATCH_SIZE, verbose=1, callbacks=CALLBACKS, workers=1, use_multiprocessing=True ) </code></pre> <p>I'm getting the below error</p> <pre><code>ValueError: Input 0 of layer sequential is incompatible with the layer: its rank is undefined, but the layer requires a defined rank. </code></pre> <p>I'm using tensorflow version 2.2.0. Any help regarding this will be highly appreciated. Please feel free to ask for any other part of the code to reproduce the issue.</p>
<p>So the issue here is that the <code>ResNet</code> model already contains its own <code>InputLayer</code>.</p> <p>If you print the summary of the <code>ResNet</code> model you can see this.</p> <p><code>base_model.summary()</code></p> <pre><code>Model: &quot;resnet50&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_10 (InputLayer) [(None, None, None, 0 __________________________________________________________________________________________________ conv1_pad (ZeroPadding2D) (None, None, None, 3 0 input_10[0][0] __________________________________________________________________________________________________ </code></pre> <p>So I would suggest using the <code>base_model</code> first and then adding your own layers after the <code>base_model</code>.</p> <p>Also, just as a side note, do check the documentation of the <code>ResNet</code> model. It says that if you use <code>include_top = False</code> then you also have to specify the <code>input_shape</code>. This could also be the problem. Do check the API <a href="https://www.tensorflow.org/api_docs/python/tf/keras/applications/ResNet50" rel="nofollow noreferrer">documentation</a>.</p>
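<p>A rough sketch of that suggestion follows. The 3-channel input shape and the 265-class head are assumptions taken from the question, and the grayscale images would still need to be expanded to 3 channels before being fed in:</p>

<pre><code>import tensorflow as tf

base_model = tf.keras.applications.ResNet50(
    include_top=False,
    weights='imagenet',
    input_shape=(224, 224, 3),   # explicit shape, as the docs suggest for include_top=False
)
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,                                  # base model first
    tf.keras.layers.GlobalAveragePooling2D(),    # then your own layers
    tf.keras.layers.Dense(265, activation='softmax'),
])
model.summary()
</code></pre>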
python|tensorflow|tf.keras
1
10,351
53,563,225
Pandas display all index labels in jupyter notebook despite repetition
<p>When displaying a DataFrame in jupyter notebook. The index is displayed in a hierarchical way. So that repeated labels are not shown in the following row. E.g. a dataframe with a Multiindex with the following labels</p> <pre><code>[1, 1, 1, 1] [1, 1, 0, 1] </code></pre> <p>will be displayed as </p> <pre><code>1 1 1 1 ... 0 1 ... </code></pre> <p>Can I change this behaviour so that all index values are shown despite repetition? Like this:</p> <pre><code>1 1 1 1 ... 1 1 0 1 ... </code></pre> <p>?</p> <pre><code>import pandas as pd import numpy as np import itertools N_t = 5 N_e = 2 classes = tuple(list(itertools.product([0, 1], repeat=N_e))) N_c = len(classes) noise = np.random.randint(0, 10, size=(N_c, N_t)) df = pd.DataFrame(noise, index=classes) df 0 1 2 3 4 0 0 5 9 4 1 2 1 2 2 7 9 9 1 0 1 7 3 6 9 1 4 9 8 2 9 # should be shown as 0 1 2 3 4 0 0 5 9 4 1 2 0 1 2 2 7 9 9 1 0 1 7 3 6 9 1 1 4 9 8 2 9 </code></pre>
<p>Use - </p> <pre><code>with pd.option_context('display.multi_sparse', False): print (df) </code></pre> <p><strong>Output</strong></p> <pre><code> 0 1 2 3 4 0 0 8 1 4 0 2 0 1 0 1 7 4 7 1 0 9 6 5 2 0 1 1 2 2 7 2 7 </code></pre> <p>And globally:</p> <pre><code>pd.options.display.multi_sparse = False </code></pre> <p><strong>or</strong></p> <p>thanks @Kyle - </p> <pre><code>print(df.to_string(sparsify=False)) </code></pre>
python|pandas|jupyter
3
10,352
71,953,266
Numpy compare with 2 different dimension array
<pre class="lang-py prettyprint-override"><code>a = np.array([[ 0, 100, 0], [ 0, 0, 0], [ 0, 50, 0]]) b = np.array([0, 50, 0]) c = np.array([0, 0, 1]) </code></pre> <p>How can I get array c through a and b, except use <code>for</code>? If b equals the item in a, then the same index item of c should be 1.</p> <p>This question bothers me a lot when I compare a pixel in an image. When I use <code>for</code> statement, it will be too slow.</p>
<p>Here's a possible way. First, <code>b</code> is subtracted from each row of <code>a</code> and the absolute value at each index is found. Then, the sum of each row is taken, and if the sum is not 0, then that value in <code>c</code> becomes 0. If the sum is 0, then that index row in <code>a</code> is equal to <code>b</code> and the index in <code>c</code> becomes 1.</p> <p>The absolute value step is necessary in case you have a row in <code>a</code> like <code>[b[0] - 4, b[1] + 4, 0]</code>, because the sum of that row minus <code>b</code> would actually be 0 because of the +4 and -4, even though that row and <code>b</code> aren't the same. That's why you need the absolute value step.</p> <pre><code>c = np.sum(np.abs(a - b), axis=1) c = np.where(c != 0, 0, 1) </code></pre> <p>I tested for <code>a</code> with shape <code>(100000, 3)</code> and <code>b</code> with shape <code>(3,)</code>. For my method, the time taken is 0.0085 seconds, while a <code>for</code> loop takes 0.0408 seconds, so more than a 4x speedup.</p> <p>Note that for small <code>a</code> (less than ~300 rows, always with 3 columns), the <code>for</code> loop is faster.</p>
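<p>Applied to the arrays from the question, this reproduces the expected result, and since every row is compared element-wise against <code>b</code>, a plain broadcast equality check gives the same answer here:</p>

<pre><code>import numpy as np

a = np.array([[0, 100, 0],
              [0,   0, 0],
              [0,  50, 0]])
b = np.array([0, 50, 0])

c = np.where(np.sum(np.abs(a - b), axis=1) != 0, 0, 1)
print(c)                                   # [0 0 1]

# equivalent for exact matches:
print((a == b).all(axis=1).astype(int))    # [0 0 1]
</code></pre>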
python|numpy|computer-vision
1
10,353
71,905,244
I don't understand how the second bracket works
<p>This piece of code is for plotting a series of data by coloring by the classes they belong to. <code>X_train</code> is an array <code>(115,2)</code> and <code>Y_train</code> is another array <code>(115,)</code> with their respective scope values. My question is what does <code>[Y_train == i]</code> do exactly?</p> <pre><code>colors = [&quot;red&quot;, &quot;greenyellow&quot;, &quot;blue&quot;] for i in range(len(colors)): xs = X_train[:, 0][Y_train == i] ys = X_train[:,1][Y_train == i] plt.scatter(xs, ys, c = colors[i]) plt.legend(iris.target_names) plt.xlabel(&quot;Sepal length&quot;) plt.ylabel(&quot;Sepal width&quot;) </code></pre>
<p>Boolean values in python are just subclasses of integers.</p> <p><code>Y_train == i</code> just evaluates into either <code>False</code> or <code>True</code>, which is then used to access either index <code>0</code> or <code>1</code> respectively.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = ['this string is at index 0', 'this string is at index 1'] &gt;&gt;&gt; a[True] 'this string is at index 1' &gt;&gt;&gt; a[False] 'this string is at index 0' &gt;&gt;&gt; a[1 + 2 == 3] # true 'this string is at index 1' </code></pre>
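<p>One thing worth adding for the snippet in the question: there <code>Y_train</code> is a NumPy array, so <code>Y_train == i</code> produces a whole array of booleans rather than a single <code>True</code>/<code>False</code>, and the second bracket then acts as a boolean mask that keeps only the rows where the condition holds. A small illustration (the values are made up):</p>

<pre><code>import numpy as np

sepal_length = np.array([5.1, 4.9, 6.3, 5.8])
Y_train = np.array([0, 1, 1, 2])

mask = Y_train == 1
print(mask)                  # [False  True  True False]
print(sepal_length[mask])    # [4.9 6.3]  -&gt; only the samples of class 1
</code></pre>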
python|python-3.x|numpy
4
10,354
72,099,336
K-fold cross validation for Keras Neural Network
<p>Hi have already tuned my hyperparameters and would like to perfrom kfold cross validation for my model. I have being looking around for different methods it won't seem to work for me. The code is here below:</p> <pre><code>tf.get_logger().setLevel(logging.ERROR) os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # Set random seeds for repeatable results RANDOM_SEED = 3 random.seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) tf.random.set_seed(RANDOM_SEED) classes_values = [ &quot;nearmiss&quot;, &quot;normal&quot; ] classes = len(classes_values) Y = tf.keras.utils.to_categorical(Y - 1, classes) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1) input_length = X_train[0].shape[0] train_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)) validation_dataset = tf.data.Dataset.from_tensor_slices((X_test, Y_test)) def get_reshape_function(reshape_to): def reshape(image, label): return tf.reshape(image, reshape_to), label return reshape callbacks = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,mode=&quot;auto&quot;) model = Sequential() model.add(Dense(200, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(44, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(68, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(44, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(classes, activation='softmax', name='y_pred')) # this controls the learning rate opt = Adam(learning_rate=0.0002, beta_1=0.9, beta_2=0.999) # this controls the batch size, or you can manipulate the tf.data.Dataset objects yourself BATCH_SIZE = 32 train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False) validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False) # train the neural network model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) history=model.fit(train_dataset, epochs=50, validation_data=validation_dataset, verbose=2, callbacks=callbacks) model.test_on_batch(X_test, Y_test) model.metrics_names # Use this flag to disable per-channel quantization for a model. # This can reduce RAM usage for convolutional models, but may have # an impact on accuracy. disable_per_channel_quantization = False </code></pre> <p>Appericate if someone could guide me on this as I am very new to TensorFlow and neural network</p>
<p>I haven't tested it, but this should roughly be what you want. You use the sklearn KFold method to split the dataset into different folds, and then you simply fit the model on the current fold.</p> <pre class="lang-py prettyprint-override"><code>tf.get_logger().setLevel(logging.ERROR) os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # Set random seeds for repeatable results RANDOM_SEED = 3 random.seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) tf.random.set_seed(RANDOM_SEED) classes_values = [ &quot;nearmiss&quot;, &quot;normal&quot; ] classes = len(classes_values) Y = tf.keras.utils.to_categorical(Y - 1, classes) def get_reshape_function(reshape_to): def reshape(image, label): return tf.reshape(image, reshape_to), label return reshape callbacks = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,mode=&quot;auto&quot;) def create_model(): model = Sequential() model.add(Dense(200, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(44, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(68, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(44, activation='tanh', activity_regularizer=tf.keras.regularizers.l1(0.0001))) model.add(Dropout(0.3)) model.add(Dense(classes, activation='softmax', name='y_pred')) return model # this controls the learning rate opt = Adam(learning_rate=0.0002, beta_1=0.9, beta_2=0.999) # this controls the batch size, or you can manipulate the tf.data.Dataset objects yourself BATCH_SIZE = 32 kf = KFold(n_splits=5) kf.get_n_splits(X) # Loop over the dataset to create seprate folds for train_index, test_index in kf.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = Y[train_index], Y[test_index] input_length = X_train[0].shape[0] train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)) validation_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test)) train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False) validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False) # Create a new model instance model = create_model() model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy']) # train the model on the current fold history=model.fit(train_dataset, epochs=50, validation_data=validation_dataset, verbose=2, callbacks=callbacks) model.test_on_batch(X_test, y_test) </code></pre>
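<p>One small addition: the sketch above relies on a couple of imports that are not shown, in particular <code>KFold</code>, which comes from scikit-learn, so make sure something like this is at the top of the script:</p>

<pre><code>from sklearn.model_selection import KFold
</code></pre>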
python|tensorflow|keras
0
10,355
55,383,080
Pandas: Checking for NaN using rolling function
<p>I have a data frame with a variable "A" and I would like to create a rolling Nan checker, such that the new variable "rolling_nan" = 1 if ALL 3 (seconds) cells (current cell and the two previous ones) are NaN, else "rolling_nan" = 0.</p> <p>I am applying a function since the <code>.rolling</code> pandas function does not support <code>isna()</code>. However I am getting the following. Also I am not sure how to do include the same row value in the NaN checker.</p> <pre><code>import pandas as pd import numpy as np idx = pd.date_range('2018-01-01', periods=10, freq='S') df = pd.DataFrame({"A":[1,2,3,np.nan,np.nan,np.nan,6,7,8,9]}, index = idx) df def isna_func(x): return 1 if pd.isna(x).all() == True else 0 df['rolling_nan'] = df['A'].rolling(3).apply(isna_func) df A rolling_nan 2018-01-01 00:00:00 1.0 NaN 2018-01-01 00:00:01 2.0 NaN 2018-01-01 00:00:02 3.0 0.0 2018-01-01 00:00:03 NaN NaN 2018-01-01 00:00:04 NaN NaN 2018-01-01 00:00:05 NaN NaN 2018-01-01 00:00:06 6.0 NaN 2018-01-01 00:00:07 7.0 NaN 2018-01-01 00:00:08 8.0 0.0 2018-01-01 00:00:09 9.0 0.0 </code></pre> <p>In the above example, the <code>rolling_nan</code> should be equal to 1 only at timestamp <code>2018-01-01 00:00:05</code> and 0 otherwise.</p>
<p>You can think in the different way mark all <code>notna</code> , and find the <code>max</code> </p> <pre><code>df.A.notna().rolling(3).max()==0 Out[316]: 2018-01-01 00:00:00 False 2018-01-01 00:00:01 False 2018-01-01 00:00:02 False 2018-01-01 00:00:03 False 2018-01-01 00:00:04 False 2018-01-01 00:00:05 True 2018-01-01 00:00:06 False 2018-01-01 00:00:07 False 2018-01-01 00:00:08 False 2018-01-01 00:00:09 False Freq: S, Name: A, dtype: bool </code></pre> <p>Assign it back </p> <pre><code>df['rollingnan']=(df.A.notna().rolling(3).max()==0).astype(int) df Out[320]: A rollingnan 2018-01-01 00:00:00 1.0 0 2018-01-01 00:00:01 2.0 0 2018-01-01 00:00:02 3.0 0 2018-01-01 00:00:03 NaN 0 2018-01-01 00:00:04 NaN 0 2018-01-01 00:00:05 NaN 1 2018-01-01 00:00:06 6.0 0 2018-01-01 00:00:07 7.0 0 2018-01-01 00:00:08 8.0 0 2018-01-01 00:00:09 9.0 0 </code></pre> <hr> <p>Or base on your own idea using <code>all</code> </p> <pre><code>df['A'].isna().rolling(3).apply(lambda x : x.all(),raw=True) Out[323]: 2018-01-01 00:00:00 NaN 2018-01-01 00:00:01 NaN 2018-01-01 00:00:02 0.0 2018-01-01 00:00:03 0.0 2018-01-01 00:00:04 0.0 2018-01-01 00:00:05 1.0 2018-01-01 00:00:06 0.0 2018-01-01 00:00:07 0.0 2018-01-01 00:00:08 0.0 2018-01-01 00:00:09 0.0 Freq: S, Name: A, dtype: float64 </code></pre>
pandas|apply|nan|rolling-computation
1
10,356
55,519,386
Vectorised non zero groups in numpy array
<p>Say you have 1d numpy array:</p> <pre class="lang-py prettyprint-override"><code>[0,0,0,0,0,1,2,3,0,0,0,0,4,5,0,0,0] </code></pre> <p>How would you create the following groups <strong>without</strong> using for loop?</p> <pre class="lang-py prettyprint-override"><code>[1,2,3], [4,5] </code></pre>
<p>Here's one way using <code>np.split</code>:</p> <pre><code>a # array([0, 0, 0, 0, 0, 1, 2, 3, 0, 0, 0, 0, 4, 5, 0, 0, 0]) ### find nonzeros z = a!=0 ### find switching points z[1:] ^= z[:-1] ### split at switching points and discard zeros np.split(a, *np.where(z))[1::2] # [array([1, 2, 3]), array([4, 5])] </code></pre>
python|numpy
2
10,357
55,493,429
Merge one file to other file in groups
<p>In <code>Python</code> and <code>Pandas</code>, I have one dataframe for 2018 which looks like this:</p> <pre><code>Date Stock_id Stock_value 02/01/2018 1 4 03/01/2018 1 2 05/01/2018 1 7 01/01/2018 2 6 02/01/2018 2 9 03/01/2018 2 4 04/01/2018 2 6 </code></pre> <p>and a dataframe with one column which has all the 2018 dates like the following:</p> <pre><code>Date 01/01/2018 02/01/2018 03/01/2018 04/01/2018 05/01/2018 06/01/2018 etc </code></pre> <p>I want to merge these to get my first dataframe with full dates for 2018 <strong>for each stock</strong> and with NAs wherever they were not any data.</p> <p>Basically, I want to have for each stock a row for each date of 2018 (where the rows which do not have any data should filled in with NAs).</p> <p>Thus, I want to have the following as an output for the sample above:</p> <pre><code>Date Stock_id Stock_value 01/01/2018 1 NA 02/01/2018 1 4 03/01/2018 1 2 04/01/2018 1 NA 05/01/2018 1 7 01/01/2018 2 6 02/01/2018 2 9 03/01/2018 2 4 04/01/2018 2 6 05/01/2018 2 NA </code></pre> <p>How can I do this?</p> <p>I tested</p> <pre><code>data = data_1.merge(data_2, on='Date' , how='outer') </code></pre> <p>and</p> <pre><code>data = data_1.merge(data_2, on='Date' , how='right') </code></pre> <p>but I still got the original dataframe with no new dates added but only with some rows which had everywhere NAs added.</p>
<p>Use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>product</code></a> for all combinations of values with <code>Stock_id</code> and merge with <code>left join</code>:</p> <pre><code>df1['Date'] = pd.to_datetime(df1['Date'], dayfirst=True) df2['Date'] = pd.to_datetime(df2['Date'], dayfirst=True) from itertools import product c = ['Stock_id','Date'] df = pd.DataFrame(list(product(df1['Stock_id'].unique(), df2['Date'])), columns=c) print (df) Stock_id Date 0 1 2018-01-01 1 1 2018-01-02 2 1 2018-01-03 3 1 2018-01-04 4 1 2018-01-05 5 1 2018-01-06 6 2 2018-01-01 7 2 2018-01-02 8 2 2018-01-03 9 2 2018-01-04 10 2 2018-01-05 11 2 2018-01-06 </code></pre> <p>and</p> <pre><code>df = df[['Date','Stock_id']].merge(df1, how='left') #if necessary specify both columns #df = df[['Date','Stock_id']].merge(df1, how='left', on=['Date','Stock_id']) print (df) Date Stock_id Stock_value 0 2018-01-01 1 NaN 1 2018-01-02 1 4.0 2 2018-01-03 1 2.0 3 2018-01-04 1 NaN 4 2018-01-05 1 7.0 5 2018-01-06 1 NaN 6 2018-01-01 2 6.0 7 2018-01-02 2 9.0 8 2018-01-03 2 4.0 9 2018-01-04 2 6.0 10 2018-01-05 2 NaN 11 2018-01-06 2 NaN </code></pre> <hr> <p>Another idea, but should be slow in large data:</p> <pre><code>df = (df1.groupby('Stock_id')[['Date','Stock_value']] .apply(lambda x: x.set_index('Date').reindex(df2['Date'])) .reset_index()) print (df) Stock_id Date Stock_value 0 1 2018-01-01 NaN 1 1 2018-01-02 4.0 2 1 2018-01-03 2.0 3 1 2018-01-04 NaN 4 1 2018-01-05 7.0 5 1 2018-01-06 NaN 6 2 2018-01-01 6.0 7 2 2018-01-02 9.0 8 2 2018-01-03 4.0 9 2 2018-01-04 6.0 10 2 2018-01-05 NaN 11 2 2018-01-06 NaN </code></pre>
python|pandas
2
10,358
68,093,450
Find first layer with true condition
<p>Using these two example numpy arrays:</p> <pre><code>dis = np.array([[[40,42,44], [41,43,45], [41.5,43.5,45.5]], [[35,37,39], [36,38,40], [36.5,38.5,40.5]], [[30,32,34], [31,33,35], [31.5,33.5,35.5]], [[22,24,26], [23,25,27], [23.5,25.5,27.5]]]) hd = np.array([[[36.6, 37.4, 38.3], [37.1, 39.0, 37.8], [34.0, 32.0, 30.4]], [[36.5, 37.3, 38.2], [37.0, 38.9, 37.7], [33.9, 31.9, 30.3]], [[36.4, 37.2, 38.1], [36.9, 38.8, 37.6], [33.8, 31.8, 30.2]], [[36.3, 37.1, 38.0], [36.8, 38.7, 37.5], [33.7, 31.7, 30.1]]]) </code></pre> <p>I first take the mean of the <code>hd</code> array using <code>hd_mn=hd.mean(axis=0)</code> which yields:</p> <pre><code>array([[36.45, 37.25, 38.15], [36.95, 38.85, 37.65], [33.85, 31.85, 30.25]]) </code></pre> <p>and with this array, then, I'd like to retrieve a 2D array containing the first index where <code>hd_mn</code> is greater than <code>dis</code>. So in other words, I would get an array like the following:</p> <pre><code>[[1, 1, 1], [1, 1, 2], [2, 3, 3]] </code></pre> <p>For example, the value 36.45 at position (0, 0) of <code>hd_mn</code> is less than 40 at the same position in layer 0 but greater than 35 in the next layer down. Is there a one-liner for doing this operation?</p>
<p>You directly use greater than comparator then use <code>ndarray.sum</code> on the boolean values here.</p> <pre><code>(dis &gt; hd_mn).sum(0) array([[1, 1, 2], [1, 1, 2], [2, 3, 3]]) </code></pre> <hr /> <h2>Details</h2> <pre><code>dis &gt; hd_mn array([[[ True, True, True], # --\ [ True, True, True], # |-&gt; dis[0] &gt; hd_mn [ True, True, True]],# --/ [[False, False, True], # --\ [False, False, True], # |-&gt; dis[1] &gt; hd_mn [ True, True, True]],# --/ [[False, False, False], # --\ [False, False, False], # |-&gt; dis[2] &gt; hd_mn [False, True, True]],# --/ [[False, False, False], # --\ [False, False, False], # |-&gt; dis[3] &gt; hd_mn [False, False, False]]]) # --/ </code></pre>
python|numpy
3
10,359
68,083,434
how to get rows satisfying certain condition pandas
<pre><code> name strike INFY 1000 INFY 1020 INFY 1040 INFY 1060 INFY 1080 INFY 1100 INFY 1120 INFY 1140 INFY 1160 INFY 1180 INFY 1200 INFY 1220 </code></pre> <p>I have a dataframe containing columns name and strike,</p> <p>for query <code>ltp = 1065</code> I want to return dataframe containing 6 rows</p> <p>where three rows will have value greater than <code>ltp</code> and three rows will have value lower than <code>ltp</code></p> <p>in this case</p> <pre><code>INFY 1020 INFY 1040 INFY 1060 INFY 1080 INFY 1100 INFY 1120 </code></pre> <p>.</p> <p>how can i achieve this?</p>
<p>You need to play with the index.</p> <p>First off, create an empty dataframe: <code>results = pd.DataFrame()</code>. Then loop over the rows with <code>for index, row in df.iterrows():</code> and, once the condition is first reached (<code>row['strike'] &gt; 1065</code>), collect the three rows below that position and the three rows from it onwards (assuming a default integer index):</p> <pre><code>df.loc[index-3], df.loc[index-2], df.loc[index-1], df.loc[index], df.loc[index+1], df.loc[index+2] </code></pre>
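<p>For completeness, here is a small loop-free sketch of the same idea. It assumes a default integer index and that <code>ltp</code> falls inside the range of strikes:</p>

<pre><code>import pandas as pd

df = pd.DataFrame({'name': ['INFY'] * 12,
                   'strike': range(1000, 1240, 20)})
ltp = 1065

pos = (df['strike'] &gt; ltp).to_numpy().argmax()   # first position strictly above ltp
result = df.iloc[max(pos - 3, 0):pos + 3]
print(result)
#    name  strike
# 1  INFY    1020
# 2  INFY    1040
# 3  INFY    1060
# 4  INFY    1080
# 5  INFY    1100
# 6  INFY    1120
</code></pre>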
python|pandas
0
10,360
68,163,679
pandas: detect and print outliers in a dataframe
<p>I am trying to identify and print the rows of a dataframe containing outliers. Just as an experiment, I am considering outliers all values under the column 'xy' between 6 and 10 that correspond to category 'C' under column 'x'. I am not sure why, my code prints an empty output.</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt data=[['A', 1,2 ,5], ['B', 5,5,6], ['C', 4,6,7] ,['A', 6,5,4], ['B',9,9,3], ['C', 7,9,1] ,['A', 2,3,1], ['B', 5,1,2], ['C',2,10,9] ,['B', 8,2,8], ['B', 5,4,3], ['C', 8,5 ,3]] df = pd.DataFrame(data, columns=['x','y','z','xy']) plt.scatter(df['x'], df['xy']) outliers= (df['xy'].between(6,10,inclusive=False) &amp; df['x']=='C') outliers_location=(df[outliers].index.values.tolist()) print(outliers_location) # should not print an empty list </code></pre>
<p>You need to wrap the second condition in parentheses, otherwise the expression is parsed incorrectly: <code>&amp;</code> binds more tightly than <code>==</code>, so without them it tries to compare <code>df['xy'].between(6,10,inclusive=False) &amp; df['x']</code> to <code>'C'</code>.</p> <pre><code>&gt;&gt;&gt; outliers= (df['xy'].between(6,10,inclusive=False) &amp; (df['x']=='C')) &gt;&gt;&gt; outliers_location=(df[outliers].index.values.tolist()) &gt;&gt;&gt; print(outliers_location) [2, 8] </code></pre>
python|pandas|dataframe|outliers
0
10,361
59,199,872
Do rolling mean in 2 different columns and make one column in Python
<p>I have a DataFrame that looks like this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'hometeam_id': {0: 1, 1: 3, 2: 5, 3: 2, 4: 4, 5: 6, 6: 1, 7: 3, 8: 2}, 'awayteam_id': {0: 2, 1: 4, 2: 6, 3: 3, 4: 5, 5: 1, 6: 4, 7: 6, 8: 5}, 'home_score': {0: 1, 1: 4, 2: 3, 3: 2, 4: 1, 5: 5, 6: 4, 7: 7, 8: 8}, 'away_score': {0: 5, 1: 1, 2: 2, 3: 3, 4: 4, 5: 2, 6: 1, 7: 2, 8: 4}}) </code></pre> <p>I need to do rolling average on last 2 values for each row. But the trick is I need the total goals by id. For example team 1 played 2 games as home and 1 game as away. I need to add 2 new columns that will show total goals by home team and away team. For example for team 1 the 2 new colums would look like this.</p> <pre><code> output = pd.DataFrame({'home_id': {0: 1, 1: 6, 2: 1}, 'away_id': {0: 2, 1: 1, 2: 4}, 'home_score': {0: 1, 1: 5, 2: 4}, 'away_score': {0: 5, 1: 2, 2: 1}, 'total_home': {0: 1.0, 1: nan, 2: 1.5}, 'total_away': {0: nan, 1: 2.0, 2: nan}}) </code></pre> <p>Ignore the na values I have not calculated them for other teams, just calculations for team 1. Basically, in this format I need team average goals for last 2 games.</p>
<p>IIUC, you can do:</p> <pre><code>df['total_home'] = (df.groupby('hometeam_id') .home_score .rolling(2, min_periods=0) .mean() .reset_index(level=0, drop=True) ) df['total_away'] = (df.groupby('awayteam_id') .away_score .rolling(2, min_periods=0) .mean() .reset_index(level=0, drop=True) ) </code></pre> <p>Output:</p> <pre><code> hometeam_id awayteam_id home_score away_score total_home total_away 0 1 2 1 5 1.0 5.0 1 3 4 4 1 4.0 1.0 2 5 6 3 2 3.0 2.0 3 2 3 2 3 2.0 3.0 4 4 5 1 4 1.0 4.0 5 6 1 5 2 5.0 2.0 6 1 4 4 1 2.5 1.0 7 3 6 7 2 5.5 2.0 8 2 5 8 4 5.0 4.0 </code></pre>
python|pandas|moving-average
3
10,362
56,940,893
How to drop the first row number column pandas?
<p>This question may sound similar to other questions posted, but I'm posting this after searching long for this exact solution.</p> <p>So, I've a JSON from which I'm creating a pandas dataframe:</p> <pre><code>col_list = ["allocation","completion_date","has_expanded_access"] final_data = dict((k,d[k]) for k in (col_list) if k in d) a = json_normalize(final_data) </code></pre> <p><a href="https://i.stack.imgur.com/7CTiL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7CTiL.png" alt="enter image description here"></a></p> <p>And then this:</p> <p><a href="https://i.stack.imgur.com/Jk4AH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jk4AH.png" alt="enter image description here"></a></p> <p>I tried saving with:</p> <pre><code>df = df.reset_index(drop=True) </code></pre> <p>And</p> <pre><code>df = df.rename_axis(None) </code></pre> <p>As suggested on few answers, but of no use, when I try to save it, this default first column containing row index comes with header as blank (null), even if I try to drop, it doesn't work. Any help?</p>
<p>Try</p> <p><code>df.to_csv('df_name.csv', sep = ';', encoding = 'cp1251', index = False)</code></p> <p>to save the DataFrame without the row index; <code>index = False</code> is the part that suppresses that unnamed first column (the separator and encoding arguments are optional).</p> <p>Or make one of your own columns the index with</p> <p><code>df.set_index('col_name')</code></p>
python|pandas
3
10,363
45,881,124
Session object not specified in Tensorflow MNIST tutorial
<p>Why is there no Session object in the Tensorflow Layers tutorial? Is it possible to obtain it in some way?</p> <p>Tutorial: <a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/layers</a></p> <p>Source code: <a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/tutorials/layers/cnn_mnist.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/tutorials/layers/cnn_mnist.py</a></p> <p>In further development, the session object might be needed to save the trained model, for instance:</p> <pre><code>session = tf.Session() saver = tf.train.Saver() # some processing here saver.save(session, 'myModel',global_step=1000) </code></pre> <p>Thanks!</p>
<p>The <a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">TensorFlow <code>tf.layers</code> tutorial</a> uses <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator" rel="nofollow noreferrer"><code>tf.estimator.Estimator</code></a> as a high-level API that hides the details of constructing a session, and writing a training loop that checkpoints your model and logs summaries. Instead, you specify an <code>input_fn</code> that describes your input data and a <code>model_fn</code> that describes the layer structure.</p> <p>If you prefer to use the <code>tf.Session</code> (or <code>tf.train.MonitoredSession</code>) API directly, you can invoke the <code>model_fn</code> directly in your own code and create an optimizer, saver, etc. as needed.</p>
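<p>Regarding saving specifically: an <code>Estimator</code> already writes checkpoints to whatever directory you pass as <code>model_dir</code>, so an explicit <code>tf.train.Saver</code> is usually unnecessary. Roughly along these lines (the model-function name and path are placeholders):</p>

<pre><code>mnist_classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn,                      # your model_fn
    model_dir=&quot;/tmp/mnist_convnet_model&quot;)       # checkpoints are written here automatically
</code></pre>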
python|tensorflow|mnist
0
10,364
51,069,750
How do I combine/ensemble results of 3 machine learning models stored in 3 dataframes and output 1 dataframe with results agreed by majority?
<p>I am currently participating in an online hackathon. All the top entries are within 1% of each other. So I decided to run 3 different models instead of a single best performing one, i.e. ensemble learning, tuned hyperparameters on each one of them and then combine results of all three to get a better model. I've combined results of all three in a dataframe, it's df.head() is as below:</p> <pre><code>index | building_id | rf_damage_grade | xg_damage_grade | lr_damage_grade | damage_grade 0 a3380c4f75 Grade 4 Grade 2 Grade 3 Grade 4 1 a338a4e653 Grade 5 Grade 5 Grade 5 Grade 5 2 a338a4e6b7 Grade 5 Grade 5 Grade 5 Grade 5 3 a33a6eaa3a Grade 3 Grade 2 Grade 4 Grade 3 4 a33b073ff6 Grade 5 Grade 5 Grade 5 Grade 5 </code></pre> <p>So 'rf_damage_grade' is the column of my best classifier. It gives around 74% accuracy, other two give 68% and 58% respectively. In final output i want, if 'xg_damage_grade' and 'lr_damage_grade' both agree on one value the final output 'damage_grade' gets changed to that value, otherwise it remains equal to the output of 'rf_damage_grade'. There are more than 400k rows in the data and and every time I rerun my model it is taking around an hour to do this on my Early 2015 MBP. Following is the code i've written: </p> <pre><code>for i in range(len(final)): if final.iloc[i,2]==final.iloc[i,3]: final.iloc[i,4]=final.iloc[i,2] if final.iloc[i,3]!=final.iloc[i,1]: count+=1 else: continue </code></pre> <p>What can I do to make it more efficient? Is there any inbuilt function in sklearn to do this sort of thing?</p>
<p>Simply run conditional logic with <code>.loc</code>:</p> <pre><code>df.loc[df['xg_damage_grade'] == df['lr_damage_grade'], 'damage_grade'] = df['xg_damage_grade'] df.loc[df['xg_damage_grade'] != df['lr_damage_grade'], 'damage_grade'] = df['rf_damage_grade'] </code></pre> <p>Or with numpy's <code>where</code>:</p> <pre><code>df['damage_grade'] = np.where(df['xg_damage_grade'] == df['lr_damage_grade'], df['xg_damage_grade'], df['rf_damage_grade']) </code></pre>
python|pandas|machine-learning|scikit-learn|ensemble-learning
1
10,365
66,447,464
Python numpy function for matrix math
<p>I have two np arrays</p> <pre><code>a = np.array([[1,2], [2,3], [3,4], [5,6]]) b = np.array([[2,4], [6,8], [10,11]]) </code></pre> <p>I want to multiply each row of a against each row of array b so that array c is created with dimensions of a-rows x b-rows (as columns)</p> <pre><code>c = np.array([[2,8],[6,16],[10,22], [4,12],[12,24],[20,33], ....]) </code></pre> <p>There are other options for doing this, but I would really like to leverage the speed of numpy's ufuncs...if possible.</p> <p>Any and all help is appreciated.</p>
<p>Does this do what you want?</p> <pre><code>&gt;&gt;&gt; a array([[1, 2], [2, 3], [3, 4], [5, 6]]) &gt;&gt;&gt; b array([[ 2, 4], [ 6, 8], [10, 11]]) &gt;&gt;&gt; a[:,None,:]*b array([[[ 2, 8], [ 6, 16], [10, 22]], [[ 4, 12], [12, 24], [20, 33]], [[ 6, 16], [18, 32], [30, 44]], [[10, 24], [30, 48], [50, 66]]]) &gt;&gt;&gt; _.shape (4, 3, 2) </code></pre> <p>Or if that doesn't have the right shape, you can reshape it:</p> <pre><code>&gt;&gt;&gt; (a[:,None,:]*b).reshape((a.shape[0]*b.shape[0], 2)) array([[ 2, 8], [ 6, 16], [10, 22], [ 4, 12], [12, 24], [20, 33], [ 6, 16], [18, 32], [30, 44], [10, 24], [30, 48], [50, 66]]) </code></pre>
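<p>Since the question mentions <code>einsum</code>: the same broadcasted product can also be spelled with explicit subscripts, which some find easier to read:</p>

<pre><code>&gt;&gt;&gt; np.einsum('ij,kj-&gt;ikj', a, b).reshape(-1, 2)
array([[ 2,  8],
       [ 6, 16],
       [10, 22],
       [ 4, 12],
       [12, 24],
       [20, 33],
       [ 6, 16],
       [18, 32],
       [30, 44],
       [10, 24],
       [30, 48],
       [50, 66]])
</code></pre>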
python|arrays|numpy
4
10,366
66,609,882
Find max volume and data count above that volume in a Dataframe
<p>I have a sample dataframe as below. I need to find result as per the below condition.</p> <pre><code>Datetime Volume Price 2020-08-05 09:15:00 1033 504 2020-08-05 09:15:00 1960 516 2020-08-05 09:15:00 0 521 2020-08-05 09:15:00 1724 520 2020-08-05 09:15:00 0 500 2020-08-05 09:15:00 1870 540 2020-08-05 09:20:00 1024 476 2020-08-05 09:20:00 1980 548 2020-08-05 09:20:00 0 551 2020-08-05 09:20:00 1426 526 2020-08-05 09:20:00 0 586 2020-08-05 09:20:00 1968 518 </code></pre> <ol> <li>Find Price at Maximum Volume with group-by on Datetime Column.</li> <li>Calculate how many Price Values are above Price of Sl No 1 (ignoring rows with zero volume)</li> </ol> <p>I want my result dataframe as below:</p> <pre><code>Datetime Volume Price Count_abv_prc 2020-08-05 09:15:00 1960 516 2 2020-08-05 09:20:00 1980 548 0 </code></pre> <p>For Datetime = 2020-08-05 09:15:00, only two values are above 516 (520 and 540) and for Datetime = 2020-08-05 09:20:00, no values are above 548 (ignoring rows with zero volume)</p>
<p>Try:</p> <pre><code># positive volume pos_vol = df.query('Volume!=0') # rows with max volume by time s = pos_vol.groupby('Datetime').Volume.idxmax() # extract the output out = df.loc[s].set_index(['Datetime']) # map the datetime to the price corresponding to the max volume aligned_prc = pos_vol['Datetime'].map(out['Price']) # count by datetime out['Count_abv'] = (pos_vol['Price'].gt(aligned_prc) .groupby(pos_vol['Datetime']).sum() ) </code></pre> <p>Output:</p> <pre><code> Volume Price Count_abv Datetime 2020-08-05 09:15:00 1960 516 2 2020-08-05 09:20:00 1980 548 0 </code></pre>
python|python-3.x|pandas|dataframe|pandas-groupby
2
10,367
66,522,196
wrong input of image in tensorflow for training
<p>I am writing my thesis in machine learning and am trying to build a unet to perform it. The code is as follows:</p> <p>First i create the dataloader to create the datasets for input:</p> <pre><code>def dataloader(filepath, subset): # Initiliaze return arrays - input of shape = HYPERPARAMETER global size if subset==&quot;train&quot;: size = 129 elif subset==&quot;test&quot;: size = 18 input_data=np.zeros((size,1024,1024,1)) output_data=np.zeros((size,1024,1024,1)) # Open file and create loop with open(filepath+&quot;annotation_&quot;+subset+&quot;.txt&quot;, &quot;r&quot;) as input_file: # Count to pass through the file count=0 for line in input_file: line=line.split(&quot; &quot;) data=cv2.imread(filepath+str(line[0])+&quot;.jpg&quot;,cv2.IMREAD_GRAYSCALE) input_data[count,:,:,0]=data # Case of benevolent if line[3]==&quot;B&quot;: x=int(line[4]) y=1024-int(line[5]) radius=int(line[6]) for i in range(1024): for j in range(1024): if ((radius*radius-(i-x)*(i-x)-(j-y)*(j-y))&gt;0): # Setting 80 as th value of the benevolent mask output_data[count,i,j,0]=80 # Case of malevolent elif line[3]==&quot;M&quot;: x=int(line[4]) y=1024-int(line[5]) radius=int(line[6]) for i in range(1024): for j in range(1024): if ((radius*radius-(i-x)*(i-x)-(j-y)*(j-y))&gt;0): # Setting 160 as th value of the benevolent mask output_data[count,i,j,0]=160 if count==0: print(type(data)) print(type(input_data)) cv2.imshow('test',data) cv2.waitKey(0) cv2.imshow('image',input_data[count,:,:,0]) cv2.waitKey(0) cv2.imshow('mask',output_data[count,:,:,0]) cv2.waitKey(0) cv2.destroyAllWindows() count=count+1 #input_data=K.zeros_like(input_data) #output_data=K.zeros_like(output_data) return input_data, output_data </code></pre> <p>and then the model and the commands to run it:</p> <pre><code>def unet_model(optimizer, loss_metric, metrics, sample_width, sample_height, lr=1e-3): inputs = Input((sample_width, sample_height, 1)) print(inputs.shape) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) drop1 = Dropout(0.5)(pool1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(drop1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) drop2 = Dropout(0.5)(pool2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(drop2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) drop3 = Dropout(0.3)(pool3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(drop3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(conv4) drop4 = Dropout(0.3)(pool4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(drop4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5) up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6) up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7) up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3) conv8 = Conv2D(64, (3, 3), 
activation='relu', padding='same')(up8) conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8) up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9) conv10 = Conv2D(1, (1, 1), activation='softmax')(conv9) model = Model(inputs=[inputs], outputs=[conv10]) model.compile(optimizer=optimizer(lr=lr), loss=loss_metric, metrics=metrics) return model #Filepath of datasets filepath = &quot;/home/tzikos/Downloads/train/&quot; # Load datasets train_input, train_output = dataloader(filepath, &quot;train&quot;) test_input, test_output = dataloader(filepath, &quot;test&quot;) train_input = normalize(train_input) test_input = normalize(test_input) train_output = normalize(train_output) test_output = normalize(test_output) print(train_input.shape) print(train_output.shape) print(test_input.shape) print(test_output.shape) # Load model model = unet_model(optimizer=Adam, loss_metric=tf.keras.losses.MeanSquaredError(), metrics=[&quot;accuracy&quot;], sample_width=train_input.shape[1], sample_height=train_input.shape[2],lr=1e-3) model.compile(optimizer=&quot;Adam&quot;, loss=tf.keras.losses.MeanSquaredError(), metrics=[&quot;accuracy&quot;]) history = model.fit(x=train_input, y=train_output, batch_size=1, epochs=30) # Save weights model_filepath = '/home/tzikos/Desktop/thesis_DENSE-IN-UNET/unet_weights.h5' model.save(model_filepath) # Check results results = model.evaluate(test_input, test_output) print(results) </code></pre> <p>So the problem is the following:</p> <p>When I train my model i get 0 accuracy and no change in the loss function. So I went and dived into the images.</p> <p>When I imshow the data variable i get the photo as should be. However when I input it into the numpy array it is transcribed into a binary one where there is black and white and idk why.</p> <p>So I think that is the problem but i cant see why that is since the data variable is allright</p>
<p>I am not sure what this network is supposed to do, so I will list some mistakes in your code.</p> <p>In your function you already compile the model:</p> <pre><code>model.compile(optimizer=optimizer(lr=lr), loss=loss_metric, metrics=metrics) return model </code></pre> <p>Then, outside the function, you compile it again:</p> <pre><code>model.compile(optimizer=&quot;Adam&quot;, loss=tf.keras.losses.MeanSquaredError(), metrics=[&quot;accuracy&quot;]) </code></pre> <p>Another issue is the combination of metric and loss: <code>tf.keras.losses.MeanSquaredError()</code> is a regression loss, while <code>'accuracy'</code> is a classification metric.</p> <p>The most obvious one:</p> <pre><code>conv10 = Conv2D(1, (1, 1), activation='softmax')(conv9) </code></pre> <p>The <code>softmax</code> activation is applied along the last axis. If you check your <code>model.summary()</code>, the last axis has size 1, which means each pixel has a single element, so the softmax simply outputs a map of ones every time.</p>
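<p>A small sketch of what the final layer could look like instead; this is an assumption about the intended output rather than part of the original code. For a binary mask, keep one channel with a sigmoid; for a multi-class mask (e.g. background / benign / malignant from the question), use one channel per class with a channel-wise softmax:</p>

<pre><code>from tensorflow.keras.layers import Conv2D  # or the Conv2D import already used in the question

# binary mask: one output channel, sigmoid activation (applied to conv9 in place of conv10)
binary_head = Conv2D(1, (1, 1), activation='sigmoid')

# multi-class mask: one channel per class, softmax over the channel axis
multiclass_head = Conv2D(3, (1, 1), activation='softmax')
</code></pre>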
python|image|image-processing|tensorflow2.0
1
10,368
66,626,700
Difference between Tensorflow's tf.keras.layers.Dense and PyTorch's torch.nn.Linear?
<p>I have a quick (and possibly silly) question about how Tensorflow defines its Linear layer. Within PyTorch, a Linear (or Dense) layer is defined as, y = x A^T + b where A and b are the weight matrix and bias vector for a Linear layer (see <a href="https://pytorch.org/docs/stable/generated/torch.nn.Linear.html?highlight=linear#torch.nn.Linear" rel="noreferrer">here</a>).</p> <p>However, I can't precisely find an equivalent equation for Tensorflow! Is it the same as PyTorch or is it just y = x A + b ?</p> <p>Thank you in advance!</p>
<p>If we set activation to <code>None</code> in the dense layer in <code>keras</code> API, then they are technically equivalent.</p> <p>Tensorflow's</p> <pre><code>tf.keras.layers.Dense(..., activation=None) </code></pre> <p>According to the <a href="https://keras.io/api/layers/core_layers/dense/" rel="noreferrer">doc</a>, more study <a href="https://keras.io/guides/making_new_layers_and_models_via_subclassing/" rel="noreferrer">here</a>.</p> <blockquote> <p>activation: Activation function to use. If you don't specify anything, no activation is applied (ie. &quot;linear&quot; activation: a(x) = x).</p> </blockquote> <p>And in PyTorch's <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear" rel="noreferrer">src</a>.</p> <pre><code>torch.nn.Linear </code></pre> <p>They are now equal at this point. A linear transformation to the incoming data: <code>y = x*W^T + b</code>. See the following more concrete equivalent implementation of these two. In <code>PyTorch</code>, we do</p> <pre><code>class Network(torch.nn.Module): def __init__(self): super(Network, self).__init__() self.fc1 = torch.nn.Linear(5, 30) def forward(self, state): return self.fc1(state) </code></pre> <p>or,</p> <pre><code>trd = torch.nn.Linear(in_features = 3, out_features = 30) y = trd(torch.ones(5, 3)) print(y.size()) # torch.Size([5, 30]) </code></pre> <p>Its equivalent <code>tf</code> implementation would be</p> <pre><code>model = tf.keras.models.Sequential() model.add(tf.keras.layers.Dense(30, input_shape=(5,), activation=None)) </code></pre> <p>or,</p> <pre><code>tfd = tf.keras.layers.Dense(30, input_shape=(3,), activation=None) x = tfd(tf.ones(shape=(5, 3))) print(x.shape) # (5, 30) </code></pre>
tensorflow|pytorch
14
10,369
66,450,790
How to Show the actual value instead of the percent in a Matplotlib Pie Chart
<p>The following code is for creating a pie chart that shows the number of purchases made by each person from the &quot;Shipping Address Name&quot; column. The 'labels' list contains the name of each person and the 'purchases' list contain the number of purchases each person has made.</p> <pre><code>labels = df['Shipping Address Name'].unique() purchases = df['Shipping Address Name'].value_counts() plt.pie(purchases, labels=labels, autopct='%.f') plt.title('Purchases Per Person') plt.axis('equal') return plt.show() </code></pre> <p>This is what the 'purchases' list basically looks like:</p> <pre><code>Lilly 269 Rolf 69 Sandy 11 Dan 2 </code></pre> <p>I am trying to have the values in the purchase list to be shown in the pie chart instead of the percentages from &quot;autopct=%.f&quot;. Not sure what the easiest way for the 'autopct' function to do this is, so any help is appreciated.</p>
<p>Try recalculating the actual values by multiplying by the total number of purchases:</p>
<pre><code>purchases = df['Shipping Address Name'].value_counts()
purchases.plot.pie(autopct=lambda x: '{:.0f}'.format(x*purchases.sum()/100) )

# also 
# plt.pie(purchases, autopct=lambda x: '{:.0f}'.format(x*purchases.sum()/100))
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/Eap8V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eap8V.png" alt="enter image description here" /></a></p>
python|pandas|list|dataframe|matplotlib
3
10,370
57,689,620
convert dates to int in pandas
<p>I have a date column of format YYYY-MM-DD and want to convert it to an int type, consecutively, where 1= Jan 1, 2000. So if I have a date 2000-01-31, it will convert to 31. If I have a date 2020-01-31 it will convert to (365*20yrs + 5 leap days), etc.</p> <p>Is this possible to do in pandas?</p> <p>I looked at <a href="https://stackoverflow.com/questions/50863691/pandas-convert-date-object-to-int">Pandas: convert date &#39;object&#39; to int</a>, but this solution converts to an int 8 digits long.</p>
<p>First subtract the <code>Timestamp</code> from the column, convert the timedeltas to days with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a> and finally add 1:</p>
<pre><code>df = pd.DataFrame({"Date": ["2000-01-29", "2000-01-01", "2014-03-31"]})

d = '2000-01-01'
df["new"] = pd.to_datetime(df["Date"]).sub(pd.Timestamp(d)).dt.days + 1
print( df )

         Date   new
0  2000-01-29    29
1  2000-01-01     1
2  2014-03-31  5204
</code></pre>
pandas|date
2
10,371
72,885,243
Applying lambda function to multiple columns with IF statement to leave blanks as-is
<p>I am working with a dataset where a few integer columns have two extra zeros at the end of each number. As such, I wrote a lambda function to remove them:</p>
<pre><code>df[['col_8', 'col_9', 'col_10']] = df[['col_8', 'col_9', 'col_10']].apply(lambda x: x.apply(lambda value: '{:,.2f}'.format(value/100)))
</code></pre>
<p>However, some values in each column are blanks, which yields a <em>ValueError: invalid literal for int() with base 10: ''</em> error. Therefore, if possible, I want to add an IF statement inside the lambda function that would leave blanks as-is.</p>
<p>Based on another post, I tried the following:</p>
<pre><code>df[['col_8', 'col_9', 'col_10']] = df[['col_8', 'col_9', 'col_10']].apply(lambda x: None if x.empty else x.apply(lambda value: '{:,.2f}'.format(value/100)))
</code></pre>
<p>But it yields the same error. Any idea on how I can achieve this?</p>
<p>Your code is almost correct. However, you need to put the check inside the second lambda function and improve the check for number values. This code should work:</p> <pre class="lang-py prettyprint-override"><code>import numbers df[['col_8', 'col_9', 'col_10']] = df[['col_8', 'col_9', 'col_10']].apply(lambda x: x.apply(lambda value: ('{:,.2f}'.format(value/100)) if isinstance(value, numbers.Number) else None)) </code></pre>
python|pandas|dataframe|lambda
0
10,372
72,905,347
Merging two datasets with partial match
<p>I want to merge two dataframe df1 and df2. Shape of df1 is (115, 16) and Df2 is (624402, 23).</p> <pre><code>df1 = pd.DataFrame({'Invoice': ['20561', '20562', '20563', '20564'], 'Currency': ['EUR', 'EUR', 'EUR', 'USD']}) df2 = pd.DataFrame({'Ref': ['20561', 'INV20562', 'INV20563BG', '20564'], 'Type': ['01', '03', '04', '02'], 'Amount': ['150', '175', '160', '180'], 'Comment': ['bla', 'bla', 'bla', 'bla']}) print(df1) Invoice Currency 0 20561 EUR 1 20562 EUR 2 20563 EUR 3 20564 USD print(df2) Ref Type Amount Comment 0 20561 01 150 bla 1 INV20562 03 175 bla 2 INV20563BG 04 160 bla 3 20564 02 180 bla </code></pre> <p>I applied the following code:</p> <pre><code>df4 = df1.copy() for i, row in df1.iterrows(): tmp = df2[df2['Ref'].str.contains(row['Invoice'], na=False)] df4.loc[i, 'Amount'] = tmp['Amount'].values[0] print(df4) </code></pre> <p><strong>It is showing: IndexError: index 0 is out of bounds for axis 0 with size 0</strong></p>
<p>The IndexError occurs when no row matches the invoice. You can check for this and return <code>np.nan</code> (or a different default value) if a matching invoice is not found:</p> <pre><code>df4 = df1.copy() for i, row in df1.iterrows(): tmp = df2[df2['Ref'].str.contains(row['Invoice'], na=False)] df4.loc[i, 'Amount'] = tmp['Amount'].values[0] if not tmp.empty else np.nan </code></pre>
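<p>If the invoice number is always embedded somewhere in <code>Ref</code>, a vectorized sketch (assuming each <code>Ref</code> contains exactly one run of digits) avoids <code>iterrows</code> entirely by extracting the digits and merging:</p>
<pre><code># sketch: extract the numeric part of Ref, then merge on it
df2['Invoice'] = df2['Ref'].str.extract(r'(\d+)', expand=False)
df4 = df1.merge(df2[['Invoice', 'Amount']], on='Invoice', how='left')
</code></pre>
<p>This should scale much better to the 624k-row <code>df2</code> than a per-row <code>str.contains</code> scan.</p>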
pandas|dataframe|merge
0
10,373
72,925,734
How to create list of array combinations lexicographically in numpy?
<p>I have this array and I want to return unique array combinations. I tried meshgrid but it creates duplicates and inverse array values</p> <pre><code>&gt;&gt; import numpy as np &gt;&gt; array = np.array([0,1,2,3]) &gt;&gt; combinations = np.array(np.meshgrid(array, array)).T.reshape(-1,2) &gt;&gt; print(combinations) [[0 0] [0 1] [0 2] [0 3] [1 0] [1 1] [1 2] [1 3] [2 0] [2 1] [2 2] [2 3] [3 0] [3 1] [3 2] [3 3]] </code></pre> <p>What I want to exclude are the <strong>repeating arrays</strong>: <code>[0,0] [1,1] [2,2] [3,3]</code> and the <strong>inverse arrays</strong> <code>when [2,3] is returned exclude [3,2] in the output</code>.</p> <p>Take a look at this combination calculator, <a href="https://planetcalc.com/3757/?set=%5B%7B%22value%22%3A%220%22%2C%22pkID%22%3A%2228746%22%2C%22save_label%22%3A%22%22%2C%22cancel_label%22%3A%22%22%7D%2C%7B%22value%22%3A%221%22%2C%22pkID%22%3A%2228747%22%2C%22save_label%22%3A%22%22%2C%22cancel_label%22%3A%22%22%7D%2C%7B%22value%22%3A%222%22%2C%22pkID%22%3A%22%22%2C%22save_label%22%3A%22%22%2C%22cancel_label%22%3A%22%22%7D%2C%7B%22value%22%3A%223%22%2C%22pkID%22%3A%22%22%2C%22save_label%22%3A%22%22%2C%22cancel_label%22%3A%22%22%7D%5D&amp;Msize=2" rel="nofollow noreferrer">this is the output that I like</a> but how can I create it in NumPy?</p>
<p>You could use <code>combinations</code> from itertools:</p>
<pre><code>import numpy as np
from itertools import combinations

array = np.array([0,1,2,3])
combs = np.array(list(combinations(array, 2)))
</code></pre>
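<p>A pure-NumPy sketch of the same idea (assuming a 1-D input array) uses the upper-triangular index pairs, which keeps everything vectorized:</p>
<pre><code>import numpy as np

array = np.array([0, 1, 2, 3])
r, c = np.triu_indices(len(array), k=1)          # all i &lt; j index pairs
combs = np.stack([array[r], array[c]], axis=1)
# [[0 1] [0 2] [0 3] [1 2] [1 3] [2 3]]
</code></pre>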
python|arrays|numpy
2
10,374
70,422,783
is there a way to efficiently fill a pandas df column in python with hourly datetimes between two dates?
<p>So I am looking for a way to fill an empty dataframe column with hourly values between two dates. for example between</p> <blockquote> <p>StartDate = 2019:01:01 00:00:00</p> </blockquote> <p>to</p> <blockquote> <p>EndDate = 2019:02:01 00:00:00</p> </blockquote> <p>I would want a column that has</p> <blockquote> <p>2019:01:01 00:00:00,2019:01:01 01:00:00,2019:02:01 00:00:00...</p> </blockquote> <p>in Y:M:D H:M:S format. I am not sure what the most efficient way of doing this is, is there a way to do it via pandas or would you have to use a for loop over a given timedelta between a range for eg?</p> <p>`</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> with <code>DataFrame</code> constructor:</p> <pre><code>StartDate = '2019-01-01 00:00:00' EndDate = '2019-02-01 00:00:00' df = pd.DataFrame({'dates':pd.date_range(StartDate, EndDate, freq='H')}) </code></pre> <p>If there is custom format of dates first convert them to datetimes:</p> <pre><code>StartDate = '2019:01:01 00:00:00' EndDate = '2019:02:01 00:00:00' StartDate = pd.to_datetime(StartDate, format='%Y:%m:%d %H:%M:%S') EndDate = pd.to_datetime(EndDate, format='%Y:%m:%d %H:%M:%S') df = pd.DataFrame({'dates':pd.date_range(StartDate, EndDate, freq='H')}) print (df.head(10)) dates 0 2019-01-01 00:00:00 1 2019-01-01 01:00:00 2 2019-01-01 02:00:00 3 2019-01-01 03:00:00 4 2019-01-01 04:00:00 5 2019-01-01 05:00:00 6 2019-01-01 06:00:00 7 2019-01-01 07:00:00 8 2019-01-01 08:00:00 9 2019-01-01 09:00:00 </code></pre>
python|pandas|time|time-series
1
10,375
70,538,852
Column sum in pandas groupby
<p>Below is the dataframe</p> <pre><code>Skill Category Location Market Type Count Java Cat1 Europe Tier1 A 2 Java Cat1 Europe Tier1 B 1 Java Cat1 Europe Tier1 C 1 Java Cat2 Asia Tier2 D 1 Java Cat3 Asia Tier1 E 1 </code></pre> <p>Below is the intended output dataframe</p> <pre><code>Skill Category Location Market Type Count Sum_Market Java Cat1 Europe Tier1 A 2 4 Java Cat1 Europe Tier1 B 1 4 Java Cat1 Europe Tier1 C 1 4 Java Cat2 Asia Tier2 D 1 1 Java Cat3 Asia Tier1 E 1 1 </code></pre> <p>Problem Statement : Sum_Market should be done using groupby of specific skill, category, location with sum of market tier in each of these selection. Below is the try from my end:</p> <pre><code>df.groupby(['Skill','Category','Location','Market','Type'])['count'].sum() </code></pre>
<p>Just merge the grouped sums back onto the original frame, grouping only on the keys you want the total over:</p>
<pre class="lang-py prettyprint-override"><code>df.merge(
    df.groupby(['Skill','Category','Location'])['Count'].sum().rename('Sum_Market').reset_index()
)
</code></pre>
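<p>An equivalent sketch that avoids the merge uses <code>groupby(...).transform</code>, which broadcasts each group's sum back onto every row (again assuming the grouping keys are Skill, Category and Location):</p>
<pre><code>df['Sum_Market'] = df.groupby(['Skill', 'Category', 'Location'])['Count'].transform('sum')
</code></pre>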
python|pandas|dataframe|group-by|pivot-table
1
10,376
51,799,843
tensorflow: find largest value in variable and replace it
<p>I have a <code>tf.Variable()</code> came out of softmax, as a sequence of probabilities, e.g., <code>[0.3, 0.5, 0.8, 0.1, 0.2]</code>. What I tried to do is to convert this sequence into [0, 0, 1, 0, 0], i.e. the highest probability replaced with 1 and all other with 0. But since <code>tf.Variable()</code> is not iterative, and <code>tf.reduce_max()</code> only gives the largest value itself, how can I do this? </p>
<pre><code>import tensorflow as tf tf.enable_eager_execution() softmax = tf.constant([0.3, 0.5, 0.8, 0.1, 0.2], dtype=tf.float32) index = tf.argmax(softmax, axis=0, output_type=tf.int32) # sparse = tf.SparseTensor(tf.reshape(index, [-1]), tf.constant([[1]], dtype=tf.int32), tf.shape(softmax)) result = tf.scatter_nd(tf.expand_dims(tf.expand_dims(index, axis=0), axis=0), tf.constant([1], dtype=tf.int32), shape=tf.shape(softmax)) </code></pre>
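<p>A more compact sketch of the same result (assuming the same eager setup and a 1-D softmax output) uses <code>tf.one_hot</code> on the argmax:</p>
<pre><code>one_hot = tf.one_hot(tf.argmax(softmax, axis=0), depth=tf.shape(softmax)[0], dtype=tf.int32)
# [0 0 1 0 0]
</code></pre>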
python|tensorflow|softmax
0
10,377
51,728,054
Appending data into pandas dataframe
<p>I'm building a system where raspberry pi receives data via bluetooth and parses it into pandas dataframe for further processing. However, there are a few issues. The bluetooth packets are converted into a pandas Series object which I attempted to append into the empty dataframe unsuccesfully. Splitting below is performed in order to extract telemetry from a bluetooth packet.</p> <p>Code creates a suitable dataframe with correct column names, but when I append into it, the Series object's row numbers become new columns. Each appended series is a single row in the final dataframe. What I want to know is: How do I add Series object into the dataframe so that values are put into columns with indices from 0 to 6 instead of from 7 to 14?</p> <p>Edit: Added a <a href="https://imgur.com/prsY5LC" rel="nofollow noreferrer">screenshot</a> with, output on the top, multiple of pkt below.</p> <p>Edit2: Added full code per request. Added error traceback.</p> <pre><code>import time import sys import subprocess import pandas as pd import numpy as np class Scan: def __init__(self, count, columns): self.running = True self.count = count self.columns = columns def run(self): i_count = 0 p_data = pd.DataFrame(columns=self.columns, dtype='str') while self.running: output = subprocess.check_output(["commands", "to", "follow.py"]).decode('utf-8') p_rows = output.split(";") series_list = [] print(len(self.columns)) for packet in p_rows: pkt = pd.Series(packet.split(","),dtype='str', index=self.columns) pkt = pkt.replace('\n','',regex=True) print(len(pkt)) series_list.append(pkt) p_data = pd.DataFrame(pd.concat(series_list, axis=1)).T print(p_data.head()) print(p_rows[0]) print(list(p_data.columns.values)) if i_count == self.count: self.running = False sys.exit() else: i_count += 1 time.sleep(10) def main(): columns = ['mac', 'rssi', 'voltage', 'temperature', 'ad count', 't since boot', 'other'] scan = Scan(0, columns) while True: scan.run() if __name__ == '__main__': main() </code></pre> <blockquote> <p>Traceback (most recent call last): File "blescanner.py", line 48, in main() File "blescanner.py", line 45, in main scan.run()</p> <p>File "blescanner.py", line 24, in run pkt = pd.Series(packet.split(","),dtype='str', index=self.columns)</p> <p>File "/mypythonpath/site-packages/pandas/core/series.py", line 262, in <strong>init</strong> .format(val=len(data), ind=len(index)))</p> <p>ValueError: Length of passed values is 1, index implies 7</p> </blockquote>
<p>You don't want to append to a DataFrame in that way. What you can do instead is create a list of series, and concatenate them together. </p> <p>So, something like this:</p> <pre><code>series_list = [] for packet in p_rows: pkt = pd.Series(packet.split(","),dtype='str') print(pkt) series_list.append(pkt) p_data = pd.DataFrame(pd.concat(series_list), columns=self.columns, dtype='str') </code></pre> <p>As long as you don't specify <code>ignore_index=True</code> in the <code>pd.concat</code> call the index will not be reset (the default is <code>ignore_index=False</code>)</p> <p>Edit:</p> <p>It's not clear from your question, but if you're trying to add the series as new columns (instead of stack on top of each other), then change the last line from above to:</p> <pre><code>p_data = pd.concat(series_list, axis=1) p_data.columns = self.columns </code></pre> <p>Edit2: </p> <p>Still not entirely clear, but it sounds like (from your edit) that you want to transpose the series to be the rows, where the index of the series becomes your columns. I.e.: </p> <pre><code>series_list = [] for packet in p_rows: pkt = pd.Series(packet.split(","), dtype='str', index=self.columns) series_list.append(pkt) p_data = pd.DataFrame(pd.concat(series_list, axis=1)).T </code></pre> <p>Edit 3: Based on your picture of output, when you split on <code>;</code> the last element in your list is empty. E.g.: </p> <pre><code>output = """f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None; f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None;""" output.split(';') ['f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None', '\n f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None', ''] </code></pre> <p>So instead of <code>for packet in p_rows</code> do <code>for packet in p_rows[:-1]</code></p> <p>Full example:</p> <pre><code>columns = ['mac', 'rssi', 'voltage', 'temperature', 'ad count', 't since boot', 'other'] output = """f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None; f1:07:ad:6b:97:c8,-24,2800,23.00,17962365,25509655,None;""" p_rows = output.split(";") series_list = [] for packet in p_rows[:-1]: pkt = pd.Series(packet.strip().split(","), dtype='str', index=columns) series_list.append(pkt) p_data = pd.DataFrame(pd.concat(series_list, axis=1)).T </code></pre> <p>produces</p> <pre><code> mac rssi voltage temperature ad count t since boot other 0 f1:07:ad:6b:97:c8 -24 2800 23.00 17962365 25509655 None 1 f1:07:ad:6b:97:c8 -24 2800 23.00 17962365 25509655 None </code></pre>
python|pandas|dataframe|append
1
10,378
51,596,522
Converting a list into comma-separated values and adding quotes in Python
<p>I have :</p> <pre><code>val = '[12 13 14 16 17 18]' </code></pre> <p>I want to have:</p> <pre><code>['12','13','14','16','17','18'] </code></pre> <p>I have done </p> <pre><code>x = val.split(' ') y = (" , ").join(x) </code></pre> <p>The result is </p> <pre><code>'[12 , 13 , 14 , 16 , 17 , 18 ]' </code></pre> <p>But not the exact one also the quotes</p> <p>What's the best way to do this in Python?</p>
<p>You can do it with</p> <pre><code>val.strip('[]').split() </code></pre>
python|list|pandas
3
10,379
35,829,211
Pandas: Counting the proportion of zeros in rows and columns of dataframe
<p>I have this code below. It is surprizing for me that it works for the columns and not for the rows.</p> <pre><code>import pandas as pd def summarizing_data_variables(df): numberRows=size(df['ID']) numberColumns=size(df.columns) summaryVariables=np.empty([numberColumns,2], dtype = np.dtype('a50')) cont=-1 for column in df.columns: cont=cont+1 summaryVariables[cont][0]=column summaryVariables[cont][1]=size(df[df[column].isin([0])][column])/(1.0*numberRows) print summaryVariables def summarizing_data_users(fileName): print "Sumarizing users..." numberRows=size(df['ID']) numberColumns=size(df.columns) summaryVariables=np.empty([numberRows,2], dtype = np.dtype('a50')) cont=-1 for row in df['ID']: cont=cont+1 summaryVariables[cont][0]=row dft=df[df['ID']==row] proportionZeros=(size(dft[dft.isin([0])])-1)/(1.0*(numberColumns-1)) # THe -1 is used to not count the ID column summaryVariables[cont][1]=proportionZeros print summaryVariables if __name__ == '__main__': df = pd.DataFrame([[1, 2, 3], [2, 5, 0.0],[3,4,5]]) df.columns=['ID','var1','var2'] print df summarizing_data_variables(df) summarizing_data_users(df) </code></pre> <p>The output is this:</p> <pre><code> ID var1 var2 0 1 2 3 1 2 5 0 2 3 4 5 [['ID' '0.0'] ['var1' '0.0'] ['var2' '0.333333333333']] Sumarizing users... [['1' '1.0'] ['2' '1.0'] ['3' '1.0']] </code></pre> <p>I was expecting that for users:</p> <pre><code>Sumarizing users... [['1' '0.0'] ['2' '0.5'] ['3' '0.0']] </code></pre> <p>It seems that the problem is in this line: </p> <blockquote> <p>dft[dft.isin([0])]</p> </blockquote> <p>It does not constrain dft to the "True" values like in the first case.</p> <p>Can you help me with this? (1) How to correct the users (ROWS) part (second function above)? (2) Is this the most efficient method to do this? [My database is very big]</p> <p><strong>EDIT:</strong></p> <p>In function summarizing_data_variables(df) I try to evaluate the proportion of zeros in each column. In the example above, the variable Id has no zero (thus the proportion is zero), the variable var1 has no zero (thus the proportion is also zero) and the variable var2 presents a zero in the second row (thus the proportion is 1/3). I keep these values in a 2D numpy.array where the first column is the label of the column of the dataframe and the second column is the evaluated proportion.</p> <p>The function summarizing_data_users I want to do the same, but I do that for each row. However, it is NOT working.</p>
<p>try this instead of the first funtion:</p> <pre><code>print(df[df == 0].count(axis=1)/len(df.columns)) </code></pre> <p>UPDATE (correction):</p> <pre><code>print('rows') print(df[df == 0].count(axis=1)/len(df.columns)) print('cols') print(df[df == 0].count(axis=0)/len(df.index)) </code></pre> <p>Input data (i've decided to add a few rows):</p> <pre><code>ID var1 var2 1 2 3 2 5 0 3 4 5 4 10 10 5 1 0 </code></pre> <p>Output:</p> <pre><code>rows ID 1 0.0 2 0.5 3 0.0 4 0.0 5 0.5 dtype: float64 cols var1 0.0 var2 0.4 dtype: float64 </code></pre>
python-2.7|pandas
8
10,380
37,559,561
Errors when importing files into spyder (Correct directory)
<p>Here is my code</p> <pre><code>import pandas as pd all_ages = pd.read_csv("all-ages.csv") all_ages.head(5) </code></pre> <p>And I have already put the csv file in the working directory, but I still encounter </p> <blockquote> <p>OSError: File b'all-ages.csv' does not exist</p> </blockquote> <p>But if I type each line in the Console instead of Script, it works sometimes.</p>
<p>It is safer to provide the <strong>absolute file path</strong>. Python resolves relative paths against the current working directory, which depends on where you invoke/run your Python script.</p>
<p>Even if you put your Python script and the csv file "all-ages.csv" in the same directory, the current working directory might be different.</p>
<p>For example:</p>
<pre><code>/folder1/folder2/myscript.py
/folder1/folder2/all-ages.csv
</code></pre>
<p>If you run <code>python myscript.py</code> from folder2, it can find all-ages.csv, but if you invoke <code>python folder2/myscript.py</code> from folder1, the current working directory is folder1 and it cannot find <code>all-ages.csv</code>.</p>
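<p>A small sketch of how to build that path relative to the script itself, so it works regardless of the working directory (this assumes the csv sits next to the script):</p>
<pre><code>import os
import pandas as pd

# resolve the csv relative to this script's location, not the working directory
csv_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "all-ages.csv")
all_ages = pd.read_csv(csv_path)
print(all_ages.head(5))
</code></pre>
<p>Note that <code>__file__</code> is only defined when the script is run as a file, not when the lines are typed directly into the console.</p>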
python|csv|pandas|anaconda|spyder
1
10,381
41,789,469
Set value based on day in month in pandas timeseries
<p>I have a timeseries </p> <pre><code>date 2009-12-23 0.0 2009-12-28 0.0 2009-12-29 0.0 2009-12-30 0.0 2009-12-31 0.0 2010-01-04 0.0 2010-01-05 0.0 2010-01-06 0.0 2010-01-07 0.0 2010-01-08 0.0 2010-01-11 0.0 2010-01-12 0.0 2010-01-13 0.0 2010-01-14 0.0 2010-01-15 0.0 2010-01-18 0.0 2010-01-19 0.0 2010-01-20 0.0 2010-01-21 0.0 2010-01-22 0.0 2010-01-25 0.0 2010-01-26 0.0 2010-01-27 0.0 2010-01-28 0.0 2010-01-29 0.0 2010-02-01 0.0 2010-02-02 0.0 </code></pre> <p>I would like to set the value to 1 based on the following rule:</p> <ul> <li>If the constant is set 9 this means the 9th of each month. Due to that that 2010-01-09 doesn't exist I would like to set the next date that exists in the series to 1 which is 2010-01-11 above.</li> </ul> <p>I have tried to create two series one (series1) with day &lt; 9 set to 1 and one (series2) with day > 9 to 1 and then <code>series1.shift(1) * series2</code> It works in the middle of the month but not if day is set to 1 due to that the last date in previous month is set to 0 in series1.</p>
<p>Assume your timeseries is <code>s</code> with a datetimeindex </p> <p>I want to create a <code>groupby</code> object of all index values whose days are greater than or equal to <code>9</code>.</p> <pre><code>g = s.index.to_series().dt.day.ge(9).groupby(pd.TimeGrouper('M')) </code></pre> <p>Then I'll check that there is at least one day past <code>&gt;= 9</code> and grab the first among them. With those, I'll assign the value of 1.</p> <pre><code>s.loc[g.idxmax()[g.any()]] = 1 s date 2009-12-23 1.0 2009-12-28 0.0 2009-12-29 0.0 2009-12-30 0.0 2009-12-31 0.0 2010-01-04 0.0 2010-01-05 0.0 2010-01-06 0.0 2010-01-07 0.0 2010-01-08 0.0 2010-01-11 1.0 2010-01-12 0.0 2010-01-13 0.0 2010-01-14 0.0 2010-01-15 0.0 2010-01-18 0.0 2010-01-19 0.0 2010-01-20 0.0 2010-01-21 0.0 2010-01-22 0.0 2010-01-25 0.0 2010-01-26 0.0 2010-01-27 0.0 2010-01-28 0.0 2010-01-29 0.0 2010-02-01 0.0 2010-02-02 0.0 Name: val, dtype: float64 </code></pre> <p>Note that <code>2009-12-23</code> also was assigned a <code>1</code> as it satisfies this requirement as well.</p>
python|date|pandas
3
10,382
41,874,452
averaging over subsets of array in numpy
<p>I have a numpy array of the shape (10, 10, 10, 60). The dimensions could be arbitrary but this just an example.</p> <p>I want to reduce this to an array of <code>(10, 10, 10, 20)</code> by taking the mean over some subsets I have two scenarios:</p> <p><strong>1</strong>: Take the mean of every <code>(10, 10, 10, 20)</code> block i.e. have three <code>(10, 10, 10, 20)</code> block and take the mean between the three. This can be done with: <code>m = np.mean((x[..., :20], x[..., 20:40], x[...,40:60]), axis=3)</code>. My question is how can I generate this when the last dimension is arbitrary without writing some explicit loop? So, I can do something like:</p> <pre><code>x = np.random.rand(10, 10, 10, 60) result = np.zeros((10, 10, 10, 20)) offset = 20 loops = x.shape[3] // offset for i in range(loops): index = i * offset result += x[..., index:index+offset] result = result / loops </code></pre> <p>However, this does not seem too pythonic and I was wondering if there is a more elegant way to do this.</p> <p><strong>2</strong>: Another scenario is that I want to break it down into 10 arrays of the shape <code>(10, 10, 10, 2, 3)</code> and then take the mean along the 5th dimension between these ten arrays and then reshape this to <code>(10, 10, 10, 20)</code> array as original planned. I can reshape the array and then again take the average as done previously and reshape again but that second part seems quite inelegant.</p>
<p>You could reshape to split the last axis into two, such that the first new axis has length equal to the number of blocks, and then take the mean along that (second-to-last) axis -</p>
<pre><code>m,n,r = x.shape[:3]
out = x.reshape(m,n,r,3,-1).mean(axis=-2) # 3 is no. of blocks
</code></pre>
<p>Alternatively, we could introduce <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a> for a noticeable performance boost -</p>
<pre><code>In [200]: x = np.random.rand(10, 10, 10, 60)

In [201]: %timeit x.reshape(m,n,r,3,-1).mean(axis=-2)
1000 loops, best of 3: 430 µs per loop

In [202]: %timeit np.einsum('ijklm-&gt;ijkm',x.reshape(m,n,r,3,-1))/3.0
1000 loops, best of 3: 214 µs per loop
</code></pre>
python|numpy
1
10,383
41,778,964
Pandas : using both log and stack on a bar plot
<p>I have some data that comes from amazon that I'd like to work on. One of the plot I'd like to include is a distribution of ratings for each brand, I thought the best way of doing this would be a stacked bar plot.</p> <p>However, some brands are much more reviewed than others, so I have to use the log scale or else the plot would be 3 peaks and the other brands would be impossible to decently see.</p> <p>There are about 300'000 entires that look like this</p> <pre><code>reviewID brand overall 0 Logitech 5.0 1 Garmin 4.0 2 Logitech 4.0 3 Logitech 5.0 </code></pre> <p>I've used this code</p> <pre><code>brandScore = swissDF.groupby(['brand', 'overall'])['brand'] brandScore = brandScore.count().unstack('overall') brandScore.plot(kind='bar', stacked=True, log=True, figsize=(8,6)) </code></pre> <p>And this is the result</p> <p><a href="https://i.stack.imgur.com/rJcak.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rJcak.png" alt="bar plot"></a></p> <p>Now, if you aren't familiar with the data this might look acceptable, but it really isn't. The 1.0 rating stacks look way too big compared to the others, because the logarithm isn't in "full effect" in that range but crunches the better scores. Is there any way to represent the ratings distribution linearly on a logarithmic plot ?</p> <p>By that I mean if 60% of the ratings are 5.0 then 60% of the bar should be pink, instead of what I have right now</p>
<p>In order to have the total bar height living on a logarithmic scale, but the proportions of the categories within the bar being linear, one could recalculate the stacked data such that it appears linear on the logarithmic scale.</p> <p>As a showcase example let's choose 6 datasets with very different totals (<code>[5,10,50,100,500,1000]</code>) such that on a linear scale the lower bars would be much to small. Let's divide it into pieces of in this case 30%, 50% and 20% (for simplicity all different data are divided by the same proportions).</p> <p>We can then calculate for each datapoint which should later on appear on a stacked bar how large it would need to be, such that the ratio of 30%, 50% and 20% is preserved in the logarithmically scaled plot and finally plot those newly created data.</p> <pre><code>from __future__ import division import pandas as pd import numpy as np import matplotlib.pyplot as plt a = np.array([5,10,50,100,500,1000]) p = [0.3,0.5,0.2] c = np.c_[p[0]*a,p[1]*a, p[2]*a] d = np.zeros(c.shape) for j, row in enumerate(c): g = np.zeros(len(row)+1) G = np.sum(row) g[1:] = np.cumsum(row) f = 10**(g/G*np.log10(G)) f[0] = 0 d[j, :] = np.diff( f ) collabels = ["{:3d}%".format(int(100*i)) for i in p] dfo = pd.DataFrame(c, columns=collabels) df2 = pd.DataFrame(d, columns=collabels) fig, axes = plt.subplots(ncols=2) axes[0].set_title("linear stack bar") dfo.plot.bar(stacked=True, log=False, ax=axes[0]) axes[0].set_xticklabels(a) axes[1].set_title("log total barheight\nlinear stack distribution") df2.plot.bar(stacked=True, log=True, ax=axes[1]) axes[1].set_xticklabels(a) axes[1].set_ylim([1, 1100]) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/yRTzt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yRTzt.png" alt="enter image description here"></a></p> <p>A final remark: I think one should be careful with such a plot. It may be useful for inspection, but I wouldn't recommend showing such a plot to other people unless one can make absolutely sure they understand what is plotted and how to read it. Otherwise this may cause a lot of confusion, because the stacked categories' height does not match with the scale which is simply false. And showing false data can cause a lot of trouble!</p>
python|python-3.x|pandas|matplotlib
4
10,384
37,658,776
How can you add external dependencies to bazel
<p>I am a student and currently working on a project where I am trying to connect my game, which I have created with Android Studio. A neural network has also been made with Tensorflow which is going to be used for the Android game.</p> <p>The problem is that Android Studio uses a build tool called Gradle while Tensorflow uses Bazel. To solve this I have been trying to build my Android game with Bazel, but I am stuck at the part where I have to add the external dependencies I use. For the game I use the following dependencies:</p> <ul> <li>Appcompat</li> <li>Support</li> <li>Percent</li> </ul> <p>These supposedly come with the Android support repository.</p> <p>I have looked at <a href="http://www.bazel.io/docs/external.html" rel="nofollow">http://www.bazel.io/docs/external.html</a> and several other sources but I still do not understand how I can add the dependencies. Could someone provide me with an example of how to do it, for example with appcompat, and what I have to do to make it work? Or is there another way which would be easier?</p> <p>EDIT: I have been successful in building the Android example of Tensorflow: <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android" rel="nofollow">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android</a>. But it doesn't include the dependencies which I am using.</p>
<p>You may want to look at the Makefile support we just added for Android: <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile" rel="nofollow">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile</a></p> <p>It's still very experimental (and fiddly), but should let you build a static library that you can more easily use in your gradle project.</p>
android|gradle|tensorflow|bazel
0
10,385
37,870,929
How can you find the most common sets using python?
<p>I have a pandas dataframe where one column is a list of all courses taken by a student. The index is the student's ID.</p> <p>I'd like to find the most common set of courses across all students. For instance, if the dataframe looks like this:</p> <pre><code>ID | Courses 1 [A, C] 2 [A, C] 3 [A, C] 4 [B, C] 5 [B, C] 6 [K, D] ... </code></pre> <p>Then I'd like the output to return the most common sets and their frequency, something like:</p> <pre><code>{[A,C]: 3, [B,C]: 2} </code></pre>
<p>You can first convert each <code>list</code> to a <code>tuple</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html"><code>value_counts</code></a>. Finally, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html"><code>to_dict</code></a>:</p>
<pre><code>print (df.Courses.apply(tuple).value_counts()[:2].to_dict())
{('A', 'C'): 3, ('B', 'C'): 2}
</code></pre>
python|pandas|set
5
10,386
64,411,352
min() max() and sum() functions working on pandas group by object but not mean()
<p>So basically, I have grouped month columns into quarters like columns 2000-01,2000-02,2000-03 into a single group 2000q1 where q1 means quarter 1 and so on. I have done is for 16 x 12 months and formed 48 quarters.</p> <p>Now, I wish to get the average value of each row in a group. When I do <code>grouped.max()</code> <code>grouped.min()</code> and <code>grouped.sum()</code> I get the min , max and sum of each row in each group.(The row indices are the same for each group)</p> <p>But when I try <code>grouped.mean()</code> I get an error saying:</p> <blockquote> <p><strong>No numeric types to aggregate.</strong></p> </blockquote> <p>Here is the code that I have written:</p> <pre class="lang-py prettyprint-override"><code>def quarter(val): month=val[5:] if month == &quot;01&quot; or month == &quot;02&quot;or month == &quot;03&quot;: return val[:4]+&quot;q1&quot; elif month == &quot;04&quot;or month == &quot;05&quot;or month == &quot;06&quot;: return val[:4]+&quot;q2&quot; elif month == &quot;07&quot; or month == &quot;08&quot; or month == &quot;09&quot;: return val[:4]+&quot;q3&quot; elif month == &quot;10&quot;or month == &quot;11&quot;or month == &quot;12&quot;: return val[:4]+&quot;q4&quot; city.fillna(0,inplace=True) g=city.groupby(quarter, axis= 1 ).mean() </code></pre> <p><strong>This is how my grouped data looks like</strong></p> <p>[('2000q1', 2000-01 2000-02 2000-03</p> <p><strong>0</strong> 0.0 0.0 0.0<br /> <strong>1</strong> 204400.0 207000.0 209800.0<br /> <strong>2</strong> 136800.0 138300.0 140100.0<br /> <strong>3</strong> 52700.0 53100.0 53200.0<br /> <strong>4</strong> 111000.0 111700.0 112800.0<br /> <strong>5</strong> 131700.0 132600.0 133500.0</p> <p>...</p> <p>('2000q2', 2000-04 2000-05 2000-06<br /> <strong>0</strong> 0.0 0.0 0.0<br /> <strong>1</strong> 212300.0 214500.0 216600.0<br /> <strong>2</strong> 141900.0 143700.0 145300.0<br /> <strong>3</strong> 53400.0 53700.0 53800.0<br /> <strong>4</strong> 113700.0 114300.0 115100.0<br /> <strong>5</strong> 134100.0 134400.0 134600.0</p> <p>...</p> <p>('2002q2', 2002-04 2002-05 2002-06<br /> <strong>0</strong> 0.0 0.0 0.0<br /> <strong>1</strong> 268600.0 272600.0 276900.0<br /> <strong>2</strong> 177800.0 177600.0 177300.0<br /> <strong>3</strong> 60300.0 60700.0 61200.0<br /> <strong>4</strong> 127900.0 128400.0 128800.0<br /> <strong>5</strong> 150400.0 151000.0 151400.0</p> <p>This is how city looks like <a href="https://i.stack.imgur.com/BKT0S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BKT0S.png" alt="enter image description here" /></a> This is a part of the output I get when I do grouped.max()</p>
<p>It's easier to groupby the columns with values, and perform the operations.</p> <pre><code>df = pd.DataFrame({'Region':[1,2,3],'City':['a','b','c'],'Country':['A','B','C']}) df = pd.concat([df,pd.DataFrame(np.random.uniform(0,1,(3,12)), columns=['2000-01','2000-02','2000-03','2000-04','2000-05','2000-06','2001-01','2001-02','2001-03','2001-04','2001-05','2001-06'])],axis=1) </code></pre> <p>You can use the date time function to create quarters:</p> <pre><code>def quarter(val): return pd.to_datetime(val).to_period(&quot;Q&quot;) quarter(df.columns[3:]) PeriodIndex(['2000Q1', '2000Q1', '2000Q1', '2000Q2', '2000Q2', '2000Q2', '2001Q1', '2001Q1', '2001Q1', '2001Q2', '2001Q2', '2001Q2'], dtype='period[Q-DEC]', freq='Q-DEC') </code></pre> <p>Then we take columns that have the numerical values:</p> <pre><code>df.iloc[:,3:].groupby(quarter,axis=1).mean() 2000Q1 2000Q2 2001Q1 2001Q2 0 0.506088 0.438958 0.132090 0.360160 1 0.635036 0.496895 0.673494 0.437333 2 0.560944 0.640423 0.603011 0.482962 </code></pre> <p>You can always concat back the first three columns:</p> <pre><code>pd.concat([df.iloc[:,:3],df.iloc[:,3:].groupby(quarter,axis=1).mean()],axis=1) </code></pre>
python|pandas|dataframe|pandas-groupby|mean
2
10,387
47,819,255
Python pandas map CSV file
<p>I want to "merge" two CSV files. I want to map the emails from the File 1 and get their respective userId from File 2 then I want to assign it to the respective emails of File 1</p> <p>Example:</p> <p>File 1</p> <pre><code>name, userId, email john, null, john@a.com alex, null, alex@a.com micheal, null, mike@a.com alex, null, alex@a.com john, null, john@a.com </code></pre> <p>File 2</p> <pre><code>name, userId, email alex, 5, alex@a.com micheal, 10, mike@a.com john, 12, john@a.com </code></pre> <p>Output File </p> <pre><code>name, userId, email john, 12, john@a.com alex, 5, alex@a.com micheal, 10, mike@a.com alex, 5, alex@a.com john, 12, john@a.com </code></pre> <p>This is my code but this doesn't assign the userId of the respective email because emails are not ordered</p> <pre><code>import pandas as pd df1 = pd.read_csv("file1.csv", sep=",") df2 = pd.read_csv("file2.csv", sep=",", index_col=0) df1["userId"] = df2["userId"].values df1.to_csv("output.csv", sep=";") </code></pre> <p>Anyone can help me?</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow noreferrer">Dataframe.merge</a></p> <pre><code>df1 = pd.read_csv("file1.csv", sep=",") df1.columns = ['name', 'userid', 'email'] df2 = pd.read_csv("file2.csv", sep=",", index_col=0) df1 = df1.drop(['userId'], axis=1) result = pd.merge(df1, df2, on=['name','email'], how='right') result.to_csv("output.csv", sep=";") </code></pre> <p>How I tested:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'name': ['john', 'alex', 'michael', 'alex', 'john'], 'userId': ['null', 'null', 'null', 'null', 'null'], 'email': ['john@a.com', 'alex@a.com', 'mike@a.com', 'alex@a.com', 'john@a.com'] }, columns=['name','userId','email']) df2 = pd.DataFrame({'name': ['alex', 'michael', 'john'], 'userId': ['5', '10', '12'], 'email': ['alex@a.com', 'mike@a.com', 'john@a.com'] }) df1 = df1.drop(['userId'], axis=1) result = pd.merge(df1, df2, on=['name','email'], how='right') print(df1) print(df2) print(result) </code></pre>
python|pandas|csv|dictionary
1
10,388
47,942,861
Join one dataset and the result of OneHotEncoder in Pandas
<p>Let's consider the dataset of House prices from <a href="https://github.com/ageron/handson-ml/blob/master/02_end_to_end_machine_learning_project.ipynb" rel="nofollow noreferrer">this example</a>.</p> <p>I have the entire dataset stored in the <code>housing</code> variable:</p> <pre><code>housing.shape </code></pre> <blockquote> <p>(20640, 10) </p> </blockquote> <p>I also have done a OneHotEncoder encoding of one dimensions and get <code>housing_cat_1hot</code>, so</p> <pre><code>housing_cat_1hot.toarray().shape </code></pre> <blockquote> <p>(20640, 5)</p> </blockquote> <p><strong>My target is to join the two variables and store everything in just one dataset.</strong></p> <p>I have tried the <a href="https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index" rel="nofollow noreferrer">Join with index tutorial</a> but the problem is that the second matrix haven't any index. How can I do a JOIN between <code>housing</code> and <code>housing_cat_1hot</code>?</p> <pre><code>&gt;&gt;&gt; left=housing &gt;&gt;&gt; right=housing_cat_1hot.toarray() &gt;&gt;&gt; result = left.join(right) </code></pre> <blockquote> <p>Traceback (most recent call last): File "", line 1, in result = left.join(right) File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/pandas/core/frame.py", line 5293, in join rsuffix=rsuffix, sort=sort) File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/pandas/core/frame.py", line 5323, in _join_compat can_concat = all(df.index.is_unique for df in frames) File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/pandas/core/frame.py", line 5323, in can_concat = all(df.index.is_unique for df in frames) AttributeError: 'numpy.ndarray' object has no attribute 'index'</p> </blockquote>
<p>Well, it depends on how you created the one-hot vector. But if it's sorted the same as your original DataFrame, and it is itself a DataFrame, you can add a matching index before joining:</p>
<pre><code>housing_cat_1hot.index = range(len(housing_cat_1hot))
</code></pre>
<p>And if it's not a DataFrame, convert it to one. This is simple, as long as both objects are sorted the same way.</p>
<p>Edit: if it's not a DataFrame, then <code>housing_cat_1hot = pd.DataFrame(housing_cat_1hot)</code> already creates a proper default index for you.</p>
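<p>A compact sketch of the full join (assuming <code>housing_cat_1hot</code> is the sparse matrix from the question and <code>housing</code> still has its original index):</p>
<pre><code>import pandas as pd

onehot_df = pd.DataFrame(housing_cat_1hot.toarray(), index=housing.index)
result = pd.concat([housing, onehot_df], axis=1)   # shape (20640, 15)
</code></pre>
<p>Reusing <code>housing.index</code> keeps the rows aligned even if the original index is not a plain 0..n-1 range.</p>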
python|pandas|join|one-hot-encoding
1
10,389
49,243,870
No module named 'pandas_datareader.mstar'
<p>I am using Python 3.6 (Anaconda) on Windows 10, PyCharm IDE. Please bear with me as I am new to coding. I just started Python for my equity research project. </p> <p>Here is the code: </p> <pre><code>import datetime as dt import matplotlib.pyplot as plt from matplotlib import style import pandas as pd import pandas_datareader.data as web import numpy as np style.use('ggplot') start=dt.datetime(2000,1,1) end=dt.datetime(2016,12,31) df= web.DataReader('ERIE', 'google', start, end) print(df.head()) </code></pre> <p>It comes with an error, seems like this is an issue with pandas_datareader itself but I have no idea what is causing it. I checked "pip show pandas_datareader" in command shell and it is installed properly. Would really appreciate if someone can help me. </p> <pre class="lang-none prettyprint-override"><code>C:\Users\vtmin\Anaconda3\envs\untitled\python.exe "D:/PyCharm Projects/Stock Analysis/FinancePython.py" Traceback (most recent call last): File "D:/PyCharm Projects/Stock Analysis/FinancePython.py", line 5, in &lt;module&gt; import pandas_datareader.data as web File "C:\Users\vtmin\AppData\Roaming\Python\Python36\site-packages\pandas_datareader\__init__.py", line 2, in &lt;module&gt; from .data import (DataReader, Options, get_components_yahoo, File "C:\Users\vtmin\AppData\Roaming\Python\Python36\site-packages\pandas_datareader\data.py", line 23, in &lt;module&gt; from pandas_datareader.mstar.daily import MorningstarDailyReader ModuleNotFoundError: No module named 'pandas_datareader.mstar' Process finished with exit code 1 </code></pre>
<p>For some reason I managed to resolve the error by deleting everything yahoo-related in data.py (within the pandas-datareader package). It seems like there was an issue with the yahoo API, if I understand it correctly.</p>
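<p>A less invasive option worth trying first (assuming the error comes from a broken or outdated pandas-datareader install rather than from your own code) is simply to reinstall or upgrade the package:</p>
<pre><code>pip install --upgrade pandas-datareader
</code></pre>
<p>If the import still fails after that, editing the package source as described above is the fallback.</p>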
python|pandas|dataframe|finance
0
10,390
49,050,243
Why is my output dataframe shape not 1459 x 2 but 1460 x 2
<p>Below is what i have done so far.</p> <pre><code>#importing the necessary modules import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LinearRegression from sklearn.linear_model import RidgeCV from sklearn.linear_model import LassoCV from sklearn.linear_model import ElasticNetCV from sklearn.ensemble import RandomForestRegressor filepath = r"C:\Users...Kaggle data\house prediction iowa\house_predtrain (3).csv" train = pd.read_csv(filepath) print(train.shape) filepath2 = r"C:\Users...Kaggle data\house prediction iowa\house_predtest (1).csv" test = pd.read_csv (filepath2) print(test.shape) #first we raplace all the NANs by 0 in botht the train and test data train = train.fillna(0) test = test.fillna(0) #error one train.dtypes.value_counts() #isolating all the object/categorical feature and converting them to numeric features encode_cols = train.dtypes[train.dtypes == np.object] encode_cols2 = test.dtypes[test.dtypes == np.object] #print(encode_cols) encode_cols = encode_cols.index.tolist() encode_cols2 = encode_cols2.index.tolist() print(encode_cols2) # Do the one hot encoding train_dummies = pd.get_dummies(train, columns=encode_cols) test_dummies = pd.get_dummies(test, columns=encode_cols2) #align your test and train data (error2) train, test = train_dummies.align(test_dummies, join = 'left', axis = 1) print(train.shape) print(test.shape) #Now working with Floats features numericals_floats = train.dtypes == np.float numericals = train.columns[numericals_floats] print(numericals) #we check for skewness in the float data skew_limit = 0.35 skew_vals = train[numericals].skew() skew_cols = (skew_vals .sort_values(ascending=False) .to_frame() .rename(columns={0:'Skewness'})) skew_cols #Visualising them above data before and after log transforming %matplotlib inline field = 'GarageYrBlt' fig, (ax_before, ax_after) = plt.subplots(1, 2, figsize=(10,5)) train[field].hist(ax=ax_before) train[field].apply(np.log1p).hist(ax=ax_after) ax_before.set (title = 'Before np.log1p', ylabel = 'frequency', xlabel = 'Value') ax_after.set (title = 'After np.log1p', ylabel = 'frequency', xlabel = 'Value') fig.suptitle('Field: "{}"'.format (field)); #note how applying log transformation on GarageYrBuilt does not do much print(skew_cols.index.tolist()) #returns a list of the values for i in skew_cols.index.tolist(): if i == "SalePrice": #we do not want to transform the feature to be predicted continue train[i] = train[i].apply(np.log1p) test[i] = test[i].apply(np.log1p) feature_cols = [x for x in train.columns if x != ('SalePrice')] X_train = train[feature_cols] y_train = train['SalePrice'] X_test = test[feature_cols] y_test = train['SalePrice'] print(X_test.shape) print(y_train.shape) print(X_train.shape) #now to the most fun part. Feature engineering is over!!! 
#i am going to use linear regression, L1 regularization, L2 regularization and ElasticNet(blend of L1 and L2) #first up, Linear Regression alphas =[0.00005, 0.0005, 0.005, 0.05, 0.5, 0.1, 0.3, 1, 3, 5, 10, 25, 50, 100] #i choosed this l1_ratios = np.linspace(0.1, 0.9, 9) #LinearRegression linearRegression = LinearRegression().fit(X_train, y_train) prediction1 = linearRegression.predict(X_test) LR_score = linearRegression.score(X_train, y_train) print(LR_score) #ridge ridgeCV = RidgeCV(alphas=alphas).fit(X_train, y_train) prediction2 = ridgeCV.predict(X_test) R_score = ridgeCV.score(X_train, y_train) print(R_score) #lasso lassoCV = LassoCV(alphas=alphas, max_iter=1e2).fit(X_train, y_train) prediction3 = lassoCV.predict(X_test) L_score = lassoCV.score(X_train, y_train) print(L_score) #elasticNetCV elasticnetCV = ElasticNetCV(alphas=alphas, l1_ratio=l1_ratios, max_iter=1e2).fit(X_train, y_train) prediction4 = elasticnetCV.predict(X_test) EN_score = elasticnetCV.score(X_train, y_train) print(EN_score) from sklearn.ensemble import RandomForestRegressor randfr = RandomForestRegressor() randfr = randfr.fit(X_train, y_train) prediction5 = randfr.predict(X_test) print(prediction5.shape) RF_score = randfr.score(X_train, y_train) print(RF_score) #putting it lall together rmse_vals = [LR_score, R_score, L_score, EN_score, RF_score] labels = ['Linear', 'Ridge', 'Lasso', 'ElasticNet', 'RandomForest'] rmse_df = pd.Series(rmse_vals, index=labels).to_frame() rmse_df.rename(columns={0: 'SCORES'}, inplace=1) rmse_df \\KaggleHouse_submission_1 = pd.DataFrame({'Id': test.Id, 'SalePrice': prediction5}) KaggleHouse_submission_1 = KaggleHouse_submission_1 print(KaggleHouse_submission_1.shape) </code></pre> <p>In the kaggle house prediction there is a train dataset and a test dataset. here is the link to the actual data <a href="https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data" rel="nofollow noreferrer">link</a>. The output dataframe size should be a 1459 X 2 but mine is 1460 X 2 for some reason. I am not sure why this is happening. Any feedbacks is highly appreciated. </p>
<p>Scikit-learn is very sensitive to the ordering of columns, so if your train data set and the test data set are misaligned, you may have a problem similar to the one above. You therefore need to first ensure that the test data is encoded the same way as the train data by using the following align command.</p>
<pre><code>train, test = train_dummies.align(test_dummies, join='left', axis = 1)
</code></pre>
<p>See changes in my code above.</p>
python|pandas|machine-learning|scikit-learn|kaggle
0
10,391
58,964,096
Creating a pivot table and finding correlations between books with multiple genres
<p>I have a table that is like</p> <pre><code>book_id original_title tag_id tag_name 1 The Hunger Games 11305 fantasy 1 The Hunger Games 26771 scifi 1 The Hunger Games 26138 romance 10000 The First World War 14467 historical 10000 The First World War 21689 nonfiction </code></pre> <p>and I want to create a pivot table, to then find books that correlate with each other according to genre. I have already done this using just ratings, but this was relatively simple as each book would have just one rating. Since there are multiple genres for each book, is there a good method of creating this pivot table?</p> <p>This is with the ultimate purpose of creating a simple recommender system. </p>
<p>Maybe this can help:</p> <pre><code>df.pivot(index='original_title',columns='tag_name',values='tag_id') </code></pre>
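<p>If the goal is a book-by-genre indicator table that you can correlate, a hedged sketch (assuming <code>df</code> has the columns shown in the question) is to build a crosstab and then correlate books across genres:</p>
<pre><code>genre_matrix = pd.crosstab(df['original_title'], df['tag_name'])   # rows: books, columns: genres, values: 0/1 counts
book_similarity = genre_matrix.T.corr()                            # pairwise correlation between books over their genre vectors
</code></pre>
<p>Books that share many genres will then show up with high pairwise values in <code>book_similarity</code>.</p>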
python|pandas|machine-learning|statistics|recommendation-system
0
10,392
70,281,203
How to drop rows with string <NA> value and trim strings from pandas data frame
<p>I have the below python code:</p> <pre><code>import streamlit as st import subprocess import pandas as pd git_output = subprocess.run(['git', 'worktree', 'list', '--porcelain'], cwd='F:/myenv/', capture_output=True, text=True).stdout df = pd.DataFrame([ {line.split()[0]: line.rsplit(&quot; &quot;, 1) for line in block.splitlines()} for block in git_output.split(&quot;\n\n&quot;)]) st.table(df.filter(items=['worktree', 'branch'])) </code></pre> <p>and the output is:</p> <pre><code> worktree branch 0 [&quot;worktree&quot;,&quot;F:/demo/a&quot;] &lt;NA&gt; 1 [&quot;worktree&quot;,&quot;F:/demo/b&quot;] [&quot;branch&quot;,&quot;refs/heads/dev/demo/b&quot;] 2 [&quot;worktree&quot;,&quot;F:/demo/c&quot;] [&quot;branch&quot;,&quot;refs/heads/dev/demo/c&quot;] 3 &lt;NA&gt; &lt;NA&gt; </code></pre> <p>which actions I can do on the <code>df</code> object to get this output:</p> <pre><code> worktree branch 0 [F:/demo/b] [refs/heads/dev/demo/b] 1 [F:/demo/c] [refs/heads/dev/demo/c] </code></pre> <p>Per the comments, also added Dictionary value:</p> <pre><code>{'worktree': {0: ['worktree', 'F:/myenv'], 1: ['worktree', 'F:/demo/a'], 2: ['worktree', 'F:/demo/b'], 3: ['worktree', 'F:/demo/c'], 4: nan}, 'bare': {0: ['bare'], 1: nan, 2: nan, 3: nan, 4: nan}, 'HEAD': {0: nan, 1: ['HEAD', '48cfcf49e277bafad'], 2: ['HEAD', '21eae7bc2694a3aaaf'], 3: ['HEAD', '28755aad57bf4820ca5'], 4: nan}, 'branch': {0: nan, 1: ['branch', 'refs/heads/dev/demo/a'], 2: ['branch', 'refs/heads/dev/demo/b'], 3: ['branch', 'refs/heads/dev/demo/c'], 4: nan}, 'prunable': {0: nan, 1: ['prunable gitdir file points to non-existent', 'location'], 2: nan, 3: nan, 4: nan}} </code></pre>
<p>This will work:</p> <pre><code>import ast df = df.dropna().astype(str).apply(lambda col: col.apply(lambda x: ast.literal_eval(x)[-1])) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; df worktree branch 1 F:/demo/b refs/heads/dev/demo/b 2 F:/demo/c refs/heads/dev/demo/c </code></pre> <p>If you're sure that the contain real <code>list</code> objects and not just strings, you can omit the <code>astype(str)</code> and <code>ast</code> stuff:</p> <pre><code>df = df.dropna().apply(lambda col: col.str[-1])) </code></pre>
python-3.x|pandas|dataframe
1
10,393
55,819,940
Looping through a Pandas Dataframe with multiple conditions
<p>This data contains the last four weeks of data and the idea is average the Total Volume based on Day of Week and Time. for example, if the day = Monday and time = 1 am then average the total volume from the last 4 weeks. </p> <pre><code> Day of Week Time Total Volume 0 Monday 00:00 4 1 Monday 00:30 8 2 Monday 01:00 10 3 Monday 01:30 8 4 Monday 02:00 2 </code></pre> <p>Here is what I've tried but this seems to be not working. Ideally, I'd like to put this in a function. Or is there a better way to loop through this df?</p> <pre><code>for row in data: if row["Day of Week"] == "Monday" and row["Time"] == "00:00" : avg = sum(row["Total Volume"])/4 break </code></pre>
<p>Using for loops in pandas tends to be very slow. It is often faster to implement a simple calculation over the entire dataframe (which can leverage numpy), and then pick out the day/time you want afterwards.</p>
<p>You can try the groupby function to calculate a 4-week moving average of volume for the same weekday and time. For example, using the column names from the question:</p>
<pre><code>df['sma_vol_4wks'] = (df.groupby(['Day of Week', 'Time'])['Total Volume']
                        .transform(lambda s: s.rolling(window=4).mean()))
</code></pre>
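<p>If all you need is the plain average of <code>Total Volume</code> per day-of-week and time over the four weeks (rather than a running average), a minimal sketch is:</p>
<pre><code>avg_volume = df.groupby(['Day of Week', 'Time'])['Total Volume'].mean()
# e.g. avg_volume.loc[('Monday', '01:00')] gives the 4-week average for Monday 01:00
</code></pre>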
python|pandas
1
10,394
55,767,020
Parse and expand JSON data that is currently embedded within a Dataframe
<p>Essentially I have raw data that has been pulled from a certain Weather API. Through an SQL query, the data is formatted into a data frame with columns: latitudes (lats), longitudes (lngs), date, and "blob."</p> <p>The blob is JSON data that is nested in 2 layers. The data as you will see below starts off with a summary for the entire day "daily" at the first layer, and 24 hourly summaries under "hourly" at the second layer.</p> <pre><code>lats lngs date blob -46 168 2015-01-31 {"daily": {"apparentTemperatureMaxTime": 1422680400, "temperatureMax": 21.33, "temperatureMinTime": 1422615600, "temperatureMin": 16.06, "icon": "clear-day", "apparentTemperatureMax": 21.33, "summary": "Clear throughout the day.", "pressure": 1010.91, "temperatureMaxTime": 1422680400, "humidity": 0.81, "dewPoint": 15.14, "sunsetTime": 1422692673, "precipType": "rain", "windSpeed": 3.01, "apparentTemperatureMin": 16.06, "sunriseTime": 1422639631, "apparentTemperatureMinTime": 1422615600, "time": 1422615600, "visibility": 16.09, "windBearing": 75, "moonPhase": 0.38}, "hourly": [{"apparentTemperature": 16.06, "windSpeed": 4.29, "icon": "clear-night", "temperature": 16.06, "summary": "Clear", "pressure": 1015.05, "humidity": 0.89, "dewPoint": 14.23, "precipType": "rain", "time": 1422615600, "visibility": 16.09, "windBearing": 97}, {"apparentTemperature": 16.17, "windSpeed": 4.22, "icon": "clear-night", "temperature": 16.17, "summary": "Clear", "pressure": 1014.91, "humidity": 0.88, "dewPoint": 14.19, "precipType": "rain", "time": 1422619200, "visibility": 16.09, "windBearing": 94}, {"apparentTemperature": 16.27, "windSpeed": 4.09, "icon": "clear-night", "temperature": 16.27, "summary": "Clear", "pressure": 1014.51, "humidity": 0.87, "dewPoint": 14.14, "precipType": "rain", "time": 1422622800, "visibility": 16.09, "windBearing": 87}, {"apparentTemperature": 16.36, "windSpeed": 4, "icon": "clear-night", "temperature": 16.36, "summary": "Clear", "pressure": 1013.94, "humidity": 0.86, "dewPoint": 14.09, "precipType": "rain", "time": 1422626400, "visibility": 16.09, "windBearing": 80}, {"apparentTemperature": 16.4, "windSpeed": 3.9, "icon": "clear-night", "temperature": 16.4, "summary": "Clear", "pressure": 1013.43, "humidity": 0.86, "dewPoint": 14.07, "precipType": "rain", "time": 1422630000, "visibility": 16.09, "windBearing": 75}, </code></pre> <p>Below is a dict of 2 sets of data for 1 date.</p> <pre><code>{'lat': {0: -45, 1: -45}, 'lng': {0: 169, 1: 170}, 'date': {0: datetime.date(2015, 1, 1), 1: datetime.date(2015, 1, 1)}, 'blob': {0: {'daily': {'apparentTemperatureMaxTime': 1420088400, 'temperatureMax': 19.06, 'temperatureMinTime': 1420045200, 'temperatureMin': 7.86, 'icon': 'clear-day', 'apparentTemperatureMax': 19.06, 'summary': 'Clear throughout the day.', 'pressure': 1013.08, 'temperatureMaxTime': 1420088400, 'humidity': 0.61, 'dewPoint': 5.49, 'sunsetTime': 1420101288, 'precipType': 'rain', 'windSpeed': 3.18, 'apparentTemperatureMin': 6.76, 'sunriseTime': 1420045310, 'apparentTemperatureMinTime': 1420041600, 'time': 1420023600, 'visibility': 16, 'windBearing': 241, 'moonPhase': 0.36}, 'hourly': [{'apparentTemperature': 6.78, 'windSpeed': 6.98, 'icon': 'clear-night', 'temperature': 9.88, 'summary': 'Clear', 'pressure': 1005.77, 'humidity': 0.81, 'dewPoint': 6.74, 'precipType': 'rain', 'time': 1420023600, 'visibility': 12.59, 'windBearing': 208}, {'apparentTemperature': 7.23, 'windSpeed': 4.95, 'icon': 'clear-night', 'temperature': 9.7, 'summary': 'Clear', 'pressure': 1007.34, 'humidity': 0.81, 'dewPoint': 6.51, 
'precipType': 'rain', 'time': 1420027200, 'visibility': 16.09, 'windBearing': 217}, {'apparentTemperature': 7.19, 'windSpeed': 4.13, 'icon': 'clear-night', 'temperature': 9.39, 'summary': 'Clear', 'pressure': 1008.35, 'humidity': 0.81, 'dewPoint': 6.29, 'precipType': 'rain', 'time': 1420030800, 'visibility': 16.09, 'windBearing': 226}, {'apparentTemperature': 6.96, 'windSpeed': 3.77, 'icon': 'clear-night', 'temperature': 9.06, 'summary': 'Clear', 'pressure': 1009.02, 'humidity': 0.82, 'dewPoint': 6.09, 'precipType': 'rain', 'time': 1420034400, 'visibility': 16.09, 'windBearing': 235}, {'apparentTemperature': 6.79, 'windSpeed': 3.38, 'icon': 'clear-night', 'temperature': 8.76, 'summary': 'Clear', 'pressure': 1009.81, 'humidity': 0.82, 'dewPoint': 5.83, 'precipType': 'rain', 'time': 1420038000, 'visibility': 16.09, 'windBearing': 243}, {'apparentTemperature': 6.76, 'windSpeed': 2.56, 'icon': 'clear-night', 'temperature': 8.29, 'summary': 'Clear', 'pressure': 1010.94, 'humidity': 0.82, 'dewPoint': 5.33, 'precipType': 'rain', 'time': 1420041600, 'visibility': 16.09, 'windBearing': 249}, {'apparentTemperature': 7.09, 'windSpeed': 1.6, 'icon': 'clear-night', 'temperature': 7.86, 'summary': 'Clear', 'pressure': 1012.19, 'humidity': 0.81, 'dewPoint': 4.76, 'precipType': 'rain', 'time': 1420045200, 'visibility': 16.09, 'windBearing': 255}, {'apparentTemperature': 8.08, 'windSpeed': 1.15, 'icon': 'clear-day', 'temperature': 8.08, 'summary': 'Clear', 'pressure': 1013.28, 'humidity': 0.78, 'dewPoint': 4.47, 'precipType': 'rain', 'time': 1420048800, 'visibility': 16.09, 'windBearing': 265}, {'apparentTemperature': 8.95, 'windSpeed': 1.62, 'icon': 'clear-day', 'temperature': 9.49, 'summary': 'Clear', 'pressure': 1014.17, 'humidity': 0.71, 'dewPoint': 4.59, 'precipType': 'rain', 'time': 1420052400, 'visibility': 16.09, 'windBearing': 265}, {'apparentTemperature': 11.57, 'windSpeed': 2.55, 'icon': 'clear-day', 'temperature': 11.57, 'summary': 'Clear', 'pressure': 1014.91, 'humidity': 0.63, 'dewPoint': 4.76, 'precipType': 'rain', 'time': 1420056000, 'visibility': 16.09, 'windBearing': 261}, {'apparentTemperature': 13.41, 'windSpeed': 3.3, 'icon': 'clear-day', 'temperature': 13.41, 'summary': 'Clear', 'pressure': 1015.39, 'humidity': 0.56, 'dewPoint': 4.79, 'precipType': 'rain', 'time': 1420059600, 'visibility': 16.09, 'windBearing': 260}, {'apparentTemperature': 14.68, 'windSpeed': 3.65, 'icon': 'clear-day', 'temperature': 14.68, 'summary': 'Clear', 'pressure': 1015.52, 'humidity': 0.52, 'dewPoint': 4.83, 'precipType': 'rain', 'time': 1420063200, 'visibility': 16.09, 'windBearing': 260}, {'apparentTemperature': 15.71, 'windSpeed': 3.81, 'icon': 'clear-day', 'temperature': 15.71, 'summary': 'Clear', 'pressure': 1015.39, 'humidity': 0.49, 'dewPoint': 4.96, 'precipType': 'rain', 'time': 1420066800, 'visibility': 16.09, 'windBearing': 261}, {'apparentTemperature': 16.59, 'windSpeed': 3.77, 'icon': 'clear-day', 'temperature': 16.59, 'summary': 'Clear', 'pressure': 1015.18, 'humidity': 0.47, 'dewPoint': 5.17, 'precipType': 'rain', 'time': 1420070400, 'visibility': 16.09, 'windBearing': 260}, {'apparentTemperature': 17.33, 'windSpeed': 3.4, 'icon': 'clear-day', 'temperature': 17.33, 'summary': 'Clear', 'pressure': 1014.95, 'humidity': 0.46, 'dewPoint': 5.55, 'precipType': 'rain', 'time': 1420074000, 'visibility': 16.09, 'windBearing': 253}, {'apparentTemperature': 17.92, 'windSpeed': 2.94, 'icon': 'clear-day', 'temperature': 17.92, 'summary': 'Clear', 'pressure': 1014.64, 'humidity': 0.46, 'dewPoint': 6.06, 
'precipType': 'rain', 'time': 1420077600, 'visibility': 16.09, 'windBearing': 240}, {'apparentTemperature': 18.36, 'windSpeed': 2.81, 'icon': 'clear-day', 'temperature': 18.36, 'summary': 'Clear', 'pressure': 1014.31, 'humidity': 0.46, 'dewPoint': 6.39, 'precipType': 'rain', 'time': 1420081200, 'visibility': 16.09, 'windBearing': 229}, {'apparentTemperature': 18.78, 'windSpeed': 3.08, 'icon': 'clear-day', 'temperature': 18.78, 'summary': 'Clear', 'pressure': 1013.85, 'humidity': 0.44, 'dewPoint': 6.43, 'precipType': 'rain', 'time': 1420084800, 'visibility': 16.09, 'windBearing': 227}, {'apparentTemperature': 19.06, 'windSpeed': 3.5, 'icon': 'clear-day', 'temperature': 19.06, 'summary': 'Clear', 'pressure': 1013.37, 'humidity': 0.43, 'dewPoint': 6.29, 'precipType': 'rain', 'time': 1420088400, 'visibility': 16.09, 'windBearing': 230}, {'apparentTemperature': 18.78, 'windSpeed': 3.76, 'icon': 'clear-day', 'temperature': 18.78, 'summary': 'Clear', 'pressure': 1013.31, 'humidity': 0.43, 'dewPoint': 6.06, 'precipType': 'rain', 'time': 1420092000, 'visibility': 16.09, 'windBearing': 233}, {'apparentTemperature': 17.53, 'windSpeed': 3.78, 'icon': 'clear-day', 'temperature': 17.53, 'summary': 'Clear', 'pressure': 1014.01, 'humidity': 0.45, 'dewPoint': 5.54, 'precipType': 'rain', 'time': 1420095600, 'visibility': 16.09, 'windBearing': 238}, {'apparentTemperature': 15.72, 'windSpeed': 3.68, 'icon': 'clear-day', 'temperature': 15.72, 'summary': 'Clear', 'pressure': 1015.13, 'humidity': 0.48, 'dewPoint': 4.85, 'precipType': 'rain', 'time': 1420099200, 'visibility': 16.09, 'windBearing': 244}, {'apparentTemperature': 14.18, 'windSpeed': 3.29, 'icon': 'clear-night', 'temperature': 14.18, 'summary': 'Clear', 'pressure': 1016.13, 'humidity': 0.52, 'dewPoint': 4.51, 'precipType': 'rain', 'time': 1420102800, 'visibility': 16.09, 'windBearing': 250}, {'apparentTemperature': 13.23, 'windSpeed': 2.39, 'icon': 'clear-night', 'temperature': 13.23, 'summary': 'Clear', 'pressure': 1016.88, 'humidity': 0.57, 'dewPoint': 4.88, 'precipType': 'rain', 'time': 1420106400, 'visibility': 16.09, 'windBearing': 255}]}, 1: {'daily': {'apparentTemperatureMaxTime': 1420081200, 'temperatureMax': 18.18, 'temperatureMinTime': 1420045200, 'temperatureMin': 8.68, 'icon': 'clear-day', 'apparentTemperatureMax': 18.18, 'summary': 'Clear throughout the day.', 'pressure': 1013.16, 'temperatureMaxTime': 1420081200, 'humidity': 0.63, 'dewPoint': 6.58, 'sunsetTime': 1420101048, 'precipType': 'rain', 'windSpeed': 1.6, 'apparentTemperatureMin': 7.85, 'sunriseTime': 1420045069, 'apparentTemperatureMinTime': 1420041600, 'time': 1420023600, 'visibility': 16.06, 'windBearing': 232, 'moonPhase': 0.36}, 'hourly': [{'apparentTemperature': 11.77, 'windSpeed': 6.44, 'icon': 'clear-night', 'temperature': 11.77, 'summary': 'Clear', 'pressure': 1004.34, 'humidity': 0.77, 'dewPoint': 7.78, 'precipType': 'rain', 'time': 1420023600, 'visibility': 14.73, 'windBearing': 222}, {'apparentTemperature': 11.13, 'windSpeed': 5.33, 'icon': 'clear-night', 'temperature': 11.13, 'summary': 'Clear', 'pressure': 1006.11, 'humidity': 0.78, 'dewPoint': 7.52, 'precipType': 'rain', 'time': 1420027200, 'visibility': 16.09, 'windBearing': 218}, {'apparentTemperature': 10.48, 'windSpeed': 4.48, 'icon': 'clear-night', 'temperature': 10.48, 'summary': 'Clear', 'pressure': 1007.3, 'humidity': 0.79, 'dewPoint': 6.95, 'precipType': 'rain', 'time': 1420030800, 'visibility': 16.09, 'windBearing': 218}, {'apparentTemperature': 8.01, 'windSpeed': 3.65, 'icon': 'clear-night', 
'temperature': 9.87, 'summary': 'Clear', 'pressure': 1008.18, 'humidity': 0.78, 'dewPoint': 6.27, 'precipType': 'rain', 'time': 1420034400, 'visibility': 16.09, 'windBearing': 222}, {'apparentTemperature': 7.87, 'windSpeed': 2.9, 'icon': 'clear-night', 'temperature': 9.42, 'summary': 'Clear', 'pressure': 1009.12, 'humidity': 0.78, 'dewPoint': 5.77, 'precipType': 'rain', 'time': 1420038000, 'visibility': 16.09, 'windBearing': 228}, {'apparentTemperature': 7.85, 'windSpeed': 2.18, 'icon': 'clear-night', 'temperature': 8.98, 'summary': 'Clear', 'pressure': 1010.33, 'humidity': 0.79, 'dewPoint': 5.55, 'precipType': 'rain', 'time': 1420041600, 'visibility': 16.09, 'windBearing': 235}, {'apparentTemperature': 8.07, 'windSpeed': 1.56, 'icon': 'clear-night', 'temperature': 8.68, 'summary': 'Clear', 'pressure': 1011.62, 'humidity': 0.8, 'dewPoint': 5.5, 'precipType': 'rain', 'time': 1420045200, 'visibility': 16.09, 'windBearing': 247}, {'apparentTemperature': 9.08, 'windSpeed': 1.31, 'icon': 'clear-day', 'temperature': 9.08, 'summary': 'Clear', 'pressure': 1012.76, 'humidity': 0.79, 'dewPoint': 5.6, 'precipType': 'rain', 'time': 1420048800, 'visibility': 16.09, 'windBearing': 265}, {'apparentTemperature': 10.71, 'windSpeed': 1.57, 'icon': 'clear-day', 'temperature': 10.71, 'summary': 'Clear', 'pressure': 1013.74, 'humidity': 0.72, 'dewPoint': 5.8, 'precipType': 'rain', 'time': 1420052400, 'visibility': 16.09, 'windBearing': 277}, {'apparentTemperature': 13.04, 'windSpeed': 2.04, 'icon': 'clear-day', 'temperature': 13.04, 'summary': 'Clear', 'pressure': 1014.59, 'humidity': 0.62, 'dewPoint': 5.82, 'precipType': 'rain', 'time': 1420056000, 'visibility': 16.09, 'windBearing': 280}, {'apparentTemperature': 15, 'windSpeed': 2.33, 'icon': 'clear-day', 'temperature': 15, 'summary': 'Clear', 'pressure': 1015.2, 'humidity': 0.53, 'dewPoint': 5.62, 'precipType': 'rain', 'time': 1420059600, 'visibility': 16.09, 'windBearing': 280}, {'apparentTemperature': 16.19, 'windSpeed': 2.35, 'icon': 'clear-day', 'temperature': 16.19, 'summary': 'Clear', 'pressure': 1015.48, 'humidity': 0.49, 'dewPoint': 5.54, 'precipType': 'rain', 'time': 1420063200, 'visibility': 16.09, 'windBearing': 277}, {'apparentTemperature': 17.03, 'windSpeed': 2.2, 'icon': 'clear-day', 'temperature': 17.03, 'summary': 'Clear', 'pressure': 1015.53, 'humidity': 0.47, 'dewPoint': 5.64, 'precipType': 'rain', 'time': 1420066800, 'visibility': 16.09, 'windBearing': 271}, {'apparentTemperature': 17.63, 'windSpeed': 1.76, 'icon': 'clear-day', 'temperature': 17.63, 'summary': 'Clear', 'pressure': 1015.51, 'humidity': 0.46, 'dewPoint': 5.91, 'precipType': 'rain', 'time': 1420070400, 'visibility': 16.09, 'windBearing': 263}, {'apparentTemperature': 18.01, 'windSpeed': 0.8, 'icon': 'clear-day', 'temperature': 18.01, 'summary': 'Clear', 'pressure': 1015.5, 'humidity': 0.47, 'dewPoint': 6.51, 'precipType': 'rain', 'time': 1420074000, 'visibility': 16.09, 'windBearing': 236}, {'apparentTemperature': 18.17, 'windSpeed': 1.01, 'icon': 'clear-day', 'temperature': 18.17, 'summary': 'Clear', 'pressure': 1015.42, 'humidity': 0.49, 'dewPoint': 7.28, 'precipType': 'rain', 'time': 1420077600, 'visibility': 16.09, 'windBearing': 134}, {'apparentTemperature': 18.18, 'windSpeed': 1.86, 'icon': 'clear-day', 'temperature': 18.18, 'summary': 'Clear', 'pressure': 1015.32, 'humidity': 0.51, 'dewPoint': 7.79, 'precipType': 'rain', 'time': 1420081200, 'visibility': 16.09, 'windBearing': 118}, {'apparentTemperature': 18.16, 'windSpeed': 1.94, 'icon': 'clear-day', 'temperature': 
18.16, 'summary': 'Clear', 'pressure': 1015.08, 'humidity': 0.51, 'dewPoint': 7.83, 'precipType': 'rain', 'time': 1420084800, 'visibility': 16.09, 'windBearing': 118}, {'apparentTemperature': 17.98, 'windSpeed': 1.56, 'icon': 'clear-day', 'temperature': 17.98, 'summary': 'Clear', 'pressure': 1014.82, 'humidity': 0.51, 'dewPoint': 7.63, 'precipType': 'rain', 'time': 1420088400, 'visibility': 16.09, 'windBearing': 123}, {'apparentTemperature': 17.43, 'windSpeed': 1.06, 'icon': 'clear-day', 'temperature': 17.43, 'summary': 'Clear', 'pressure': 1014.82, 'humidity': 0.52, 'dewPoint': 7.46, 'precipType': 'rain', 'time': 1420092000, 'visibility': 16.09, 'windBearing': 136}, {'apparentTemperature': 16.22, 'windSpeed': 0.63, 'icon': 'clear-day', 'temperature': 16.22, 'summary': 'Clear', 'pressure': 1015.3, 'humidity': 0.56, 'dewPoint': 7.32, 'precipType': 'rain', 'time': 1420095600, 'visibility': 16.09, 'windBearing': 191}, {'apparentTemperature': 14.62, 'windSpeed': 1.22, 'icon': 'clear-day', 'temperature': 14.62, 'summary': 'Clear', 'pressure': 1016.05, 'humidity': 0.61, 'dewPoint': 7.07, 'precipType': 'rain', 'time': 1420099200, 'visibility': 16.09, 'windBearing': 249}, {'apparentTemperature': 13.17, 'windSpeed': 1.68, 'icon': 'clear-night', 'temperature': 13.17, 'summary': 'Clear', 'pressure': 1016.68, 'humidity': 0.66, 'dewPoint': 6.86, 'precipType': 'rain', 'time': 1420102800, 'visibility': 16.09, 'windBearing': 262}, {'apparentTemperature': 11.98, 'windSpeed': 1.21, 'icon': 'clear-night', 'temperature': 11.98, 'summary': 'Clear', 'pressure': 1017.11, 'humidity': 0.71, 'dewPoint': 6.86, 'precipType': 'rain', 'time': 1420106400, 'visibility': 16.09, 'windBearing': 271}]}}} </code></pre> <p>Right now, so that I can eventually turn it into a function, I have been doing it step by step hoping to get all the steps, but I've been getting stuck with some of the JSON/DICT conversion of the nested data.</p> <p>The goal is to breakdown the blob so that the 24 hours are pulled out and separated while keeping them paired to the original lats, lgns, and date.</p> <p>From the SQL query mentioned, I already get the data in a 4 column dataframe shown above. I am able to isolate the "blob" using:</p> <pre><code>test_df = temps_df.iloc[:,3] </code></pre> <p>and get an output of:</p> <pre><code>id blob 0 {'daily': {'apparentTemperatureMaxTime': 14215... </code></pre> <p>I then tried to normalize this using:</p> <pre><code>test_df = pd.DataFrame.from_dict(json_normalize(test_df)) </code></pre> <p>And it breaks it down one layer into a dataframe with all the daily conditions (trash), and then all the 24 hourly conditions in another bob (there's no neat way of putting this 22 column table in here.</p> <p>Trying to take it one depth further, I tried:</p> <pre><code>hourly_df = json_normalize(data=test_df, record_path = 'hourly') </code></pre> <p>But this gives me:</p> <pre><code>TypeError: string indices must be integers </code></pre> <pre><code>temps_df = db.get_historical_weather(lats, lngs, start_date, end_date) temps_df.head() lat lng date blob 0 -45 170 2015-01-18 {'daily': {'apparentTemperatureMaxTime': 14215... 1 -45 170 2015-01-19 {'daily': {'apparentTemperatureMaxTime': 14216... 2 -45 170 2015-01-20 {'daily': {'apparentTemperatureMaxTime': 14217... 3 -45 170 2015-01-21 {'daily': {'apparentTemperatureMaxTime': 14218... 4 -45 170 2015-01-22 {'daily': {'apparentTemperatureMaxTime': 14219... test_df = temps_df.iloc[:,3] test_df.head() 0 {'daily': {'apparentTemperatureMaxTime': 14215... 
1 {'daily': {'apparentTemperatureMaxTime': 14216... 2 {'daily': {'apparentTemperatureMaxTime': 14217... 3 {'daily': {'apparentTemperatureMaxTime': 14218... 4 {'daily': {'apparentTemperatureMaxTime': 14219... Name: blob, dtype: object test_df = pd.DataFrame.from_dict(json_normalize(test_df)) test_df.head() daily.apparentTemperatureMax daily.apparentTemperatureMaxTime daily.apparentTemperatureMin daily.apparentTemperatureMinTime daily.dewPoint daily.humidity daily.icon daily.moonPhase daily.precipType daily.pressure ... daily.sunsetTime daily.temperatureMax daily.temperatureMaxTime daily.temperatureMin daily.temperatureMinTime daily.time daily.visibility daily.windBearing daily.windSpeed hourly 0 21.17 1421542800 12.39 1421514000 10.78 0.74 clear-day 0.90 rain 995.62 ... 1421569528 21.17 1421542800 12.39 1421514000 1421492400 14.27 232 1.13 [{'apparentTemperature': 14.21, 'windSpeed': 0... 1 15.69 1421632800 9.66 1421600400 9.34 0.79 clear-day 0.94 rain 1000.24 ... 1421655887 15.69 1421632800 9.66 1421600400 1421578800 13.74 223 0.53 [{'apparentTemperature': 11.41, 'windSpeed': 1... 2 16.73 1421719200 8.53 1421686800 7.86 0.74 clear-day 0.97 rain 1014.10 ... 1421742244 16.73 1421719200 8.53 1421686800 1421665200 15.85 208 1.94 [{'apparentTemperature': 10.08, 'windSpeed': 1... hourly_df = json_normalize(data=test_df, record_path = 'hourly') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-15-dd6793be4e4c&gt; in &lt;module&gt;() ----&gt; 1 hourly_df = json_normalize(data=test_df, record_path = 'hourly') /opt/conda/lib/python3.6/site-packages/pandas/io/json/normalize.py in json_normalize(data, record_path, meta, meta_prefix, record_prefix, errors, sep) 260 records.extend(recs) 261 --&gt; 262 _recursive_extract(data, record_path, {}, level=0) 263 264 result = DataFrame(records) /opt/conda/lib/python3.6/site-packages/pandas/io/json/normalize.py in _recursive_extract(data, path, seen_meta, level) 236 else: 237 for obj in data: --&gt; 238 recs = _pull_field(obj, path[0]) 239 240 # For repeating the metadata later /opt/conda/lib/python3.6/site-packages/pandas/io/json/normalize.py in _pull_field(js, spec) 183 result = result[field] 184 else: --&gt; 185 result = result[spec] 186 187 return result TypeError: string indices must be integers </code></pre> <p>Again, I'm trying this line by line hoping to get one grand function that produces the result I'm looking for. In the end, I would be looking for a data frame that just has...</p> <pre><code> lat lng date hour temp 0 -45 170 2015-01-28 0 10 1 -45 170 2015-01-28 1 10 2 -45 170 2015-01-28 2 10 3 -45 170 2015-01-28 3 10 4 -45 170 2015-01-28 4 10 </code></pre> <p>So, it would show all 24 hours of data for one date of one lat and lng, then move on to the next date for that lat and lng, until it goes through all the dates in the frame, then it would increment to the lat lng pair.</p>
<pre><code>import json import numpy as np import pandas as pd from pandas.io.json import json_normalize #from your data df = pd.DataFrame(data) #make the entire df a json file df_json = df.to_json(orient = 'records', date_format='iso') #use json_normalize to read in your json file, look at the hourly dict, and attach lat, lng and date. df2 = json_normalize(json.loads(df_json), record_path = ['blob' , 'hourly'] , meta = ['lat', 'lng', 'date']) #look at only the columns you want df3 = df2.reindex(['lat', 'lng', 'date','temperature'], axis = 1) #repeat 0-23 for the length of the df (since the time column in hourly isn't quite right, look at your dict that you posted) df3['hour'] = (np.arange(0, 24).tolist())*(int(len(df3)/24)) df3.head() lat lng date temperature hour 0 -44 169 2015-09-28T00:00:00.000Z 8.62 0 1 -44 169 2015-09-28T00:00:00.000Z 8.34 1 2 -44 169 2015-09-28T00:00:00.000Z 7.30 2 3 -44 169 2015-09-28T00:00:00.000Z 5.94 3 4 -44 169 2015-09-28T00:00:00.000Z 4.88 4 </code></pre> <p>The easiest way to do this is to export the data to json, then read it in with json_normalize. The time column in the dict isn't quite right, so I created my own hour column (but if they are all full days then this will work fine).</p> <pre><code>#Output with two rows: df3.iloc[np.r_[0:5, -5:0]] lat lng date temperature hour 0 -45 169 2015-01-01T00:00:00.000Z 9.88 0 1 -45 169 2015-01-01T00:00:00.000Z 9.70 1 2 -45 169 2015-01-01T00:00:00.000Z 9.39 2 3 -45 169 2015-01-01T00:00:00.000Z 9.06 3 4 -45 169 2015-01-01T00:00:00.000Z 8.76 4 43 -45 170 2015-01-01T00:00:00.000Z 17.43 19 44 -45 170 2015-01-01T00:00:00.000Z 16.22 20 45 -45 170 2015-01-01T00:00:00.000Z 14.62 21 46 -45 170 2015-01-01T00:00:00.000Z 13.17 22 47 -45 170 2015-01-01T00:00:00.000Z 11.98 23 </code></pre>
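<p>If the days aren't always complete, a small variation (just a sketch, assuming each hourly record keeps the Unix <code>time</code> field shown in the blob) derives the hour from that timestamp instead of repeating 0-23:</p> <pre><code># hour taken from each hourly record's 'time' field (this gives the hour in UTC)
df3['hour'] = pd.to_datetime(df2['time'], unit='s').dt.hour
</code></pre>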
python|json|pandas|dataframe
0
10,395
39,507,417
Identify cells containing specific strings and overwrite content with numbers using Python
<p>I have a dataframe which looks like this:</p> <p><a href="https://i.stack.imgur.com/qe7Y2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qe7Y2.png" alt="enter image description here"></a></p> <p>My goal is to identify for each cell of every column if the following strings are contained: <code>'KSS'</code>, <code>'ABC'</code>, <code>'DEF'</code>, <code>'ABC / DEF'</code>, <code>'KSS / DEF'</code></p> <p>Subsequently I would like to substitute the content with the following values: <code>'KSS'</code> -> 100, <code>'ABC'</code> -> 200, <code>'DEF'</code> -> 300, <code>'ABC / DEF'</code> -> 400, <code>'KSS / DEF'</code> -> 500</p> <p>The output should be something like this:</p> <p><a href="https://i.stack.imgur.com/ETIMW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ETIMW.png" alt="enter image description here"></a></p> <p>Notice: the algorithm should be generic and check every column, not only number 3. For the sake of completeness, the data types are all <code>objects</code>.</p> <p>So far my lines of code are these, but I guess they are incomplete...</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame([ ['XYZ', 'BALSO', 'PISCO', 'KSS', 'Yes', 660, 'Cop'], ['XYZ', 'TONTO', 'LOLLO', '195', 500, 'Yes', 'nan'], ['XYZ', 'CALLO', 'WANDA', 'ABC / DEF', 'Yes', 500, 'nan'], ['XYZ', 'AZUNGO', 'FINGI', 'KSS / DEF', 'Yes', 500, 'nan'] ]) df = pd.read_csv('prova.csv', sep=',', skiprows=0, header=None, low_memory=False) df.str.replace('KSS|ABC|DEF','?') </code></pre>
<p>If you create a dict with your lookup and replacement values, then you can call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html#pandas.Series.map" rel="nofollow"><code>map</code></a> on this column. You additionally need to pass <code>na_action='ignore'</code> to <code>map</code>, otherwise you get a <code>KeyError</code> for the missing values. Also note that because you have missing values, the column gets converted to <code>float</code>, but you can cast again using <code>astype(int)</code> later:</p> <pre><code>In [182]: d={'KSS':100, 'ABC' :200, 'DEF' : 300, 'ABC / DEF' : 400, 'KSS / DEF' : 500} df[3] = df[3].map(d, na_action='ignore') df Out[182]: 0 1 2 3 4 5 0 XYZ BALSO PISCO 100.00 660 Cop 1 XYZ TONTO LOLLO nan 500 nan 2 XYZ CALLO WANDA 400.00 500 nan 3 XYZ AZUNGO FINGI 500.00 500 nan </code></pre> <p>Here we cast the type using <code>astype</code>:</p> <pre><code>In [178]: df[3] = df[3].astype(int) df Out[178]: 0 1 2 3 4 5 0 XYZ BALSO PISCO 100 660 Cop 1 XYZ TONTO LOLLO 195 500 nan 2 XYZ CALLO WANDA 400 500 nan 3 XYZ AZUNGO FINGI 500 500 nan </code></pre>
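<p>For the generic case the question mentions (checking every column, not only number 3), a rough sketch using <code>DataFrame.replace</code> with the same dict should also work — note this assumes a cell matches one of the strings exactly; if the strings can appear as substrings inside longer values, you'd need something like <code>str.contains</code> per column instead:</p> <pre><code>d = {'KSS': 100, 'ABC': 200, 'DEF': 300, 'ABC / DEF': 400, 'KSS / DEF': 500}
df = df.replace(d)  # swaps the strings wherever they appear in the frame; other cells are left untouched
</code></pre>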
python|pandas|replace|substitution
3
10,396
43,983,757
Python: Is it OK to use "as_matrix" with dataframes as input to scikit models
<p>Hi, I have seen some examples of machine learning implementations that use as_matrix with dataframes as inputs to machine learning algorithms. I wonder if it is OK to use tuples, which are the output of .as_matrix, as inputs to machine learning algorithms such as below. Thanks</p> <pre><code>trainArr_All = df.as_matrix(cols_attr) # training array trainRes_All = df.as_matrix(col_class) # training results trainArr, x_test, trainRes, y_test = train_test_split(trainArr_All, trainRes_All, test_size=0.20, random_state=42) rf = RandomForestClassifier(n_estimators=20, criterion='gini', random_state=42) # 20 decision trees y_score = rf.fit(trainArr, trainRes.ravel()).predict(x_test) y_score = y_score.tolist() </code></pre>
<p>Pandas <code>as_matrix</code> converts the dataframe to a <strong>numpy.array</strong> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.as_matrix.html" rel="nofollow noreferrer">documentation</a>), <strong>NOT a tuple</strong>! sklearn assumes that the inputs are in the form of numpy arrays and, if not, it converts them to dtype=np.float32 or a sparse csc_matrix internally. Although using a pandas dataframe as input is usually fine when using a stable version of sklearn (internal conversion), you may have occasional problems due to data type incompatibility. It is usually safer to use <code>as_matrix</code> and convert the dataframe to a numpy.array before using sklearn.</p> <p>Here is an example of someone having a problem with a pandas dataframe: <a href="https://stackoverflow.com/questions/39211339/using-slices-in-python/39234260#39234260">Using slices in Python</a></p>
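<p>As a small illustration (a sketch only — <code>cols_attr</code> and <code>col_class</code> are the column lists from the question), you can make the conversion explicit before handing the data to sklearn; on newer pandas versions, where <code>as_matrix</code> is deprecated, <code>.values</code> or <code>.to_numpy()</code> are the equivalents:</p> <pre><code>import numpy as np

X = df[cols_attr].values          # numpy.ndarray, same result as df.as_matrix(cols_attr)
y = df[col_class].values.ravel()  # flatten the single label column
assert isinstance(X, np.ndarray)  # it's an array, not a tuple
</code></pre>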
python|pandas|dataframe|scikit-learn
1
10,397
44,339,463
confusing situations when `tf.constant` not displayed in `tensorboard`?
<p>Below is the working code where some <code>tf.constant</code> get displayed in <code>tensorboard</code>, some don't. </p> <p>However, I have no idea why those don't get displayed. </p> <p>Could anyone help me out here? Thanks</p> <pre><code>import tensorflow as tf import numpy as np # tf.constant(value, dtype=None, shape=None, # name='Const', verify_shape=False) a = tf.constant([2, 2], name="a") b = tf.constant([[0, 1], [2, 3]], name="b") x = tf.add(a, b, name="add") y = tf.multiply(a, b, name="mul") # verify_shape=True, error if shape not match # edge1 = tf.constant(2, dtype=None, shape=[2,2], name="wrong_shape", verify_shape=True) # verify_shape=False, if shape not match, will add to match edge2 = tf.constant(2, dtype=None, shape=[2,2], name="edge2", verify_shape=False) # increase row by row, from left to right edge3 = tf.constant([1,2,3,4], dtype=None, shape=[4,3], name="edge3", verify_shape=False) # reassign works edge2c = edge2 edge3c = edge3 edge4 = tf.constant(np.ones((2,2)), dtype=None, shape=None, name="shape22", verify_shape=False) # increase row by row, from left to right edge5 = tf.constant(np.ones((4,3)), dtype=None, shape=[4,3], name="shape43", verify_shape=False) with tf.Session() as sess: writer = tf.summary.FileWriter('./log/01_tf', sess.graph) x, y = sess.run([x, y]) sess.run(edge4) sess.run(edge5) sess.run(edge2c) sess.run(edge3c) writer.close() </code></pre>
<p>I got it now.</p> <p>To be displayed in tensorboard, a node has to be used in an operation first. A <code>tf.constant</code> that is not involved in <code>add</code>, <code>multiply</code>, or any other operation won't be displayed by tensorboard.</p>
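<p>A rough sketch of that idea against the code in the question (TF 1.x graph mode assumed, op names made up) — feed the otherwise-unused constants into some operation before writing the graph:</p> <pre><code>used2 = tf.add(edge2, a, name="use_edge2")        # [2,2] + [2] broadcasts
used3 = tf.multiply(edge3, 2, name="use_edge3")
used4 = tf.add(edge4, 1.0, name="use_edge4")
used5 = tf.multiply(edge5, 2.0, name="use_edge5")

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./log/01_tf', sess.graph)
    sess.run([used2, used3, used4, used5])
    writer.close()
</code></pre>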
tensorflow|tensorboard
0
10,398
69,636,787
Select rows in pandas where any of six columns are not all zero
<p>Here's what the pandas table looks like:</p> <p><a href="https://i.stack.imgur.com/1VDKE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1VDKE.jpg" alt="enter image description here" /></a></p> <p>As you can see, the red-marked rows have all six column values set to '0'. I want to select only the non-red rows and filter the red ones out.</p> <p>I can't seem to figure out if there's a built-in or easy way to do it.</p>
<p>Use a boolean mask as suggested by @Ch3steR and use <code>.iloc</code> or <code>.loc</code> to select a subset of columns:</p> <pre><code># Minimal sample &gt;&gt;&gt; df A B C D E F G H I J 0 4 0 0 0 0 0 0 1 3 2 # Drop 1 4 6 4 0 0 0 0 0 0 0 # Keep # .iloc version: select the first 7 columns &gt;&gt;&gt; df[df.iloc[:, :7].eq(0).sum(1).lt(6)] A B C D E F G H I J 1 4 6 4 0 0 0 0 0 0 0 # .loc version: select columns from A to G &gt;&gt;&gt; df[df.loc[:, 'A':'G'].eq(0).sum(1).lt(6)] A B C D E F G H I J 1 4 6 4 0 0 0 0 0 0 0 </code></pre> <p>Step by Step:</p> <pre><code># Is value equal to 0 &gt;&gt;&gt; df.loc[:, 'A':'G'].eq(0) A B C D E F G 0 False True True True True True True 1 False False False True True True True # Sum of boolean, if there are 3 True, the sum will be 3 # sum(1) &lt;- 1 is for axis, the sum per row &gt;&gt;&gt; df.loc[:, 'A':'G'].eq(0).sum(1) 0 6 # 6 zeros 1 4 # 4 zeros dtype: int64 # Are there less than 6 zeros ? &gt;&gt;&gt; df.loc[:, 'A':'G'].eq(0).sum(1).lt(6) 0 False 1 True dtype: bool # If yes, I keep row else I drop it &gt;&gt;&gt; df[df.loc[:, 'A':'G'].eq(0).sum(1).lt(6)] A B C D E F G H I J 1 4 6 4 0 0 0 0 0 0 0 </code></pre>
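<p>For what it's worth, the same idea can be phrased with <code>any</code> instead of counting zeros (a sketch assuming the six columns of interest in the toy frame are <code>'B'</code> through <code>'G'</code>): a row is kept as soon as one of those columns is non-zero, and dropped only when they are all zero.</p> <pre><code>&gt;&gt;&gt; df[df.loc[:, 'B':'G'].ne(0).any(axis=1)]
   A  B  C  D  E  F  G  H  I  J
1  4  6  4  0  0  0  0  0  0  0
</code></pre>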
python|pandas
1
10,399
69,445,143
Pandas - Averaging entries in specific row and column
<p>I've imported an excel sheet which has a series of tables. The pandas dataframe looks like this:</p> <pre><code> 1 2 3 4 0 3 2 7 2 1 4 2 8 1 2 5 1 4 1 3 6 0 2 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 2 3 4 0 3 2 2 1 1 3 3 9 1 2 3 1 5 1 3 2 9 4 1 ...... </code></pre> <p>I'd like to average all values in each respective cell (ie. average values in row 0, column 1 of each table) resulting in 1 table which contains all the averages.</p> <p>I'm not sure how to alter the <code>df.groupby(['1']).mean()</code> function in order to isolate the cells by row as well. I can use a loop to iterate through columns, but it may be tricky to do that and iterate through the rows simultaneously. I'd appreciate any suggestions.</p> <p>Desired output:</p> <pre><code> 1 2 3 4 0 3 2 4.5 1.5 1 3.5 2.5 8.5 1 2 4 1 4.5 1 3 4 4.5 3 2 </code></pre>
<p>If the first column is the index and the column names are the same in each sub-DataFrame, the simplest is:</p> <pre><code>print (df) 1 2 3 4 0.0 3.0 2.0 7.0 2.0 1.0 4.0 2.0 8.0 1.0 2.0 5.0 1.0 4.0 1.0 3.0 6.0 0.0 2.0 3.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.0 2.0 3.0 4.0 0.0 3.0 2.0 2.0 1.0 1.0 3.0 3.0 9.0 1.0 2.0 3.0 1.0 5.0 1.0 3.0 2.0 9.0 4.0 1.0 df = df.groupby(level=0).mean() print (df) 1 2 3 4 0.0 3.0 2.0 4.5 1.5 1.0 3.5 2.5 8.5 1.0 2.0 4.0 1.0 4.5 1.0 3.0 4.0 4.5 3.0 2.0 </code></pre> <p>If not, some preprocessing is necessary, depending on the data.</p> <p>E.g. remove rows if the last column has <code>NaN</code>s, to avoid mixing row=1 with index=1:</p> <pre><code>print (df) 1 2 3 4 0.0 3.0 2.0 7.0 2.0 1.0 4.0 2.0 8.0 1.0 2.0 5.0 1.0 4.0 1.0 3.0 6.0 0.0 2.0 3.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.0 2.0 3.0 4.0 NaN &lt;- column-name rows like index=1 are removed here 0.0 3.0 2.0 2.0 1.0 1.0 3.0 3.0 9.0 1.0 2.0 3.0 1.0 5.0 1.0 3.0 2.0 9.0 4.0 1.0 df = df.dropna(subset=df.columns[-1:]).groupby(level=0).mean() print (df) 1 2 3 4 0.0 3.0 2.0 4.5 1.5 1.0 3.5 2.5 8.5 1.0 2.0 4.0 1.0 4.5 1.0 3.0 4.0 4.5 3.0 2.0 </code></pre>
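<p>If the first column is not the index (e.g. the sub-tables were read in with a plain <code>RangeIndex</code> and blank separator rows), one possible preprocessing sketch — it assumes every table has exactly 4 data rows and that repeated header rows were not read in as data — is to drop the blank rows and group by the position within each table:</p> <pre><code>import numpy as np

clean = df.dropna(how='all')     # drop the all-NaN separator rows
pos = np.arange(len(clean)) % 4  # 0..3 within each 4-row table
result = clean.groupby(pos).mean()
</code></pre>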
python|pandas
3