Column schema (name, dtype, min .. max):
Unnamed: 0   int64            0 .. 378k
id           int64            49.9k .. 73.8M
title        string lengths   15 .. 150
question     string lengths   37 .. 64.2k
answer       string lengths   37 .. 44.1k
tags         string lengths   5 .. 106
score        int64            -10 .. 5.87k
3,100
46,474,393
Quantile threshold/filter within pandas groupby
<p>I have one categorical variable and two numeric cols:</p> <pre><code>np.random.seed(123) df = pd.DataFrame({'group' : ['a']*10+['b']*10, 'var1' : np.random.randn(20), 'var2' : np.random.randint(10,size=20)}) </code></pre> <p>I want to find, by <code>group</code>, the mean of <code>var1</code> after filtering to constrict <code>df</code> to the top-quartile <code>var2</code> values <em>by group</em>. That is, the threshold for each group would be:</p> <pre><code>thresh = df.groupby('group')['var2'].quantile(0.75) </code></pre> <p>Here's a loopy demonstration of what I want to end up with:</p> <pre><code>for group, frame in df.groupby('group'): print(frame[frame.var2 &gt;= frame.var2.quantile(0.75)].var1.mean()) # -1.4713362407192072 # 0.15512098976530683 </code></pre> <p>The result should be a DataFrame with (<code>group</code>, <code>var</code>) columns or a Series indexed by <code>group</code>. I believe the solution may involve <code>.transform</code>/<code>.apply</code> but am stuck there.</p>
<p>Are you looking for this ?</p> <pre><code>new = df.groupby('group').apply(lambda x : \ x[x.var2&gt;=x.var2.quantile(0.75)] \ .var1.mean()).to_frame() </code></pre> <pre> 0 group a -1.471336 b 0.155121 </pre>
python|pandas|pandas-groupby|split-apply-combine
5
3,101
58,554,905
Difference between max and min value of columns
<p>I have a pandas dataframe with 2000+ columns. All the columns have numeric values. I want to find the difference between minimum and maximum values of each column. And then I want to filter top 10 columns having biggest differences.</p> <pre><code>Col1 Col2 Col3 ..... Col2500 4 1 3 ..... 6 7 5 10 ..... 17 1 22 4 ..... 2 </code></pre> <p>I tried a few options, but none worked! Please suggest a solution.</p>
<p>This will give you the result in <code>Series</code>:</p> <pre><code>df.T.apply(lambda x: x.max() - x.min(), axis=1).nlargest(10) </code></pre> <p>Example:</p> <pre><code>df Col1 Col2 Col3 Col2500 0 4 1 3 6 1 7 5 10 17 2 1 22 4 2 df.T.apply(lambda x: x.max() - x.min(), axis=1).nlargest(3) Col2 21 Col2500 15 Col3 7 dtype: int64 </code></pre> <p>Or just:</p> <pre><code>(df.max() - df.min()).nlargest(10) </code></pre>
python|pandas
1
3,102
58,565,389
Install Anaconda along with Spyder and Tensorflow on Windows 7 PC that doesn't have internet connectivity
<p>I'm in need of installing Anaconda along with Spyder and Tensorflow on a Windows 7 laptop that does not have a connection to the internet. Is this possible and if so, would there be directions on how to do this?</p> <p>Thanks...</p>
<p>From <a href="https://docs.anaconda.com/anaconda/install/" rel="nofollow noreferrer">https://docs.anaconda.com/anaconda/install/</a>:</p> <blockquote> <p><strong>Installing Anaconda on a non-networked machine (air gap)</strong></p> <p>Obtain a local copy of the appropriate Anaconda installer for the non-networked machine. You can copy the Anaconda installer to the target machine using many different methods including a portable hard drive, USB drive or CD. After copying the installer to the non-networked machine, follow the detailed installation instructions for your operating system.</p> </blockquote> <p>This will include Spyder, but not Tensorflow. There are <a href="https://stackoverflow.com/search?q=tensorflow+install+without+internet">a number of questions and answers</a> on here about how to install Tensorflow offline. </p> <p>I think the laborious part will be ensuring you have the right versions of all dependencies, so you might want to start by creating an Anaconda env on a networked machine that has the packages and dependencies you need, then copy the downloaded conda packages from that machine to a folder on the non-networked machine. Then you can specify that folder as a channel, using a <code>file://</code> URI, in a <code>conda create</code> or <code>conda install</code> command.</p> <p>An alternative could be to use Docker, if you can use a pre-built container or build one yourself?</p>
python|tensorflow|anaconda|spyder|windows-7-x64
1
3,103
69,002,340
How to create a Data frame and prevent creation of new columns and additional rows during a for loop for each dataset
<p>I'm new to posting here.</p> <p>I'm currently trying to extract tables from a word document and have them laid out in a transposed data frame that can be exported as a csv.</p> <p>My issue lies on the data frame I get from the following code:</p> <pre><code>from docx.api import Document import pandas as pd def extract_tables_from_docx(path,output_path,name): document = Document(path) data = [] for table in document.tables: keys = tuple(cell.text for cell in table.rows[0].cells) for row in table.rows[1:]: data.append(dict(zip(keys,(cell.text for cell in row.cells)))) df1 = pd.DataFrame(data).T print(df1) </code></pre> <p><a href="https://i.stack.imgur.com/pfJ5b.png" rel="nofollow noreferrer">This is the current data frame I get when I input the relevant information when calling the function</a></p> <p>So the issue is that I'm adding extra columns to fill in the information for the next data set when I want the data to be filled where the NaN's are. Basically every new entry from the loop is causing the data to be entered to the right if that's how you describe it. I'm fairly new to Python so apologies if this code doesn't look good.</p> <p>Can anyone help on how I get around this? Any help is appreciated.</p> <p>Edit:</p> <p><a href="https://i.stack.imgur.com/RXzNA.png" rel="nofollow noreferrer">This is how I expect my data frames to appear</a></p> <p><a href="https://i.stack.imgur.com/MA06w.png" rel="nofollow noreferrer">The dataset I'm using</a></p>
<p>Your data is organized &quot;vertically&quot; with the records in columns rather than rows. So you need something like this:</p> <pre><code>from docx.api import Document import pandas as pd def extract_tables_from_docx(path): document = Document(path) data = [] for table in document.tables: keys = (cell.text for cell in table.columns[0].cells) values = (cell.text for cell in table.columns[1].cells) data.append(dict(zip(keys, values))) df1 = pd.DataFrame(data).T print(df1) </code></pre> <p>Give that a try and see what you get.</p>
python|python-3.x|pandas|dataframe|python-docx
0
3,104
44,762,525
How to save the result from equation(float) to column, python
<p>I have data frame look line this:</p> <p><code>df:</code></p> <pre><code> 1 2 3.4 -2 2 1.1 2 3 4 -5 5 5 </code></pre> <p>I can use this data on my equation like:</p> <p><code>result=abs(int(df[0])) +( int(df[1]) / 2 + float(df[2]) / 32)</code></p> <p>So after this calculation I receive a list with results for each line from <code>df</code> , and the resulting type is a float. Question: How can I save it to one column or dataframe and add this one column with <code>result</code> to the another dataframe that's same as <code>df</code> ?</p> <p>I've tried <code>pd.DataFrame(result)</code>, which doesn't work.</p>
<p>Assign directly to the new column you're trying to create.</p> <pre><code>df[3] = abs(int(df[0])) +( int(df[1]) / 2 + float(df[2]) / 32) </code></pre>
python|pandas|dataframe
4
3,105
44,372,638
pandas / numpy np.where(df['x'].str.contains('y')) vs np.where('y' in df['x'])
<p>As a newb to python and pandas, I tried:</p> <pre><code>df_rows = np.where('y' in df['x'])[0] for i in df_rows: print df_rows.iloc[i] </code></pre> <p>returned no rows, but</p> <pre><code>df_rows = np.where(df['x'].str.contains('y'))[0] for i in df_rows: print df_rows.iloc[i] </code></pre> <p>did work and returned rows containing <code>'y'</code> in <code>df['x']</code>.</p> <p>What am I missing? Why did the first form fail? (Python 2.7) </p>
<p>Pandas requires specific syntax for things to work. Looking for a <code>str</code> <code>y</code> using the operator <a href="https://docs.python.org/3/reference/expressions.html#in" rel="nofollow noreferrer">in</a> checks for membership of the string <code>y</code> in a pandas <code>Series</code>.</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'x': ['hiya', 'howdy', 'hello']}) &gt;&gt;&gt; df x 0 hiya 1 howdy 2 hello &gt;&gt;&gt; df_rows = np.where('y' in df['x'])[0] &gt;&gt;&gt; df_rows array([], dtype=int64) &gt;&gt;&gt; df_rows = np.where(df['x'].str.contains('y'))[0] &gt;&gt;&gt; df_rows array([0, 1], dtype=int64) </code></pre> <p>Try this and notice it returns one bool instead of three (like we might first think since there are three items in the series):</p> <pre><code>&gt;&gt;&gt; 'y' in df['x'] False &gt;&gt;&gt; 'hiya' in df['x'] False &gt;&gt;&gt; 'hiya' in df['x'].values True </code></pre> <p>You always need to think to yourself: "am I looking for items in a series, or am I looking for strings within the items within the series?"</p> <p>For items in a series, use <code>isin</code>:</p> <pre><code>df['x'].isin(['hello']) </code></pre> <p>For strings within an item, use <code>.str.{whatever}</code> (or <code>.apply(lambda s: s)</code>):</p> <pre><code>&gt;&gt;&gt; df['x'].str.contains('y') 0 True 1 True 2 False Name: x, dtype: bool &gt;&gt;&gt; df['x'].apply(lambda s: 'y' in s) 0 True 1 True 2 False Name: x, dtype: bool </code></pre>
python|string|pandas|numpy|where
1
3,106
60,880,271
Pandas: Count days in each month between given start and end date
<p>I have a pandas dataframe with some beginning and ending dates. </p> <pre><code>ActualStartDate ActualEndDate 0 2019-06-30 2019-08-15 1 2019-09-01 2020-01-01 2 2019-08-28 2019-11-13 </code></pre> <p>Given these start &amp; end dates I need to count how many days in each month between beginning and ending dates. I can't figure out a good way to approach this, but resulting dataframe should be something like:</p> <pre><code>ActualStartDate ActualEndDate 2019-06 2019-07 2019-08 2019-09 2019-10 2019-11 2019-12 2020-01 etc 0 2019-06-30 2019-08-15 1 31 15 0 0 0 0 0 1 2019-09-01 2020-01-01 0 0 0 30 31 30 31 1 2 2019-08-28 2019-11-13 0 0 4 30 31 13 0 0 </code></pre> <p>Note that actual dataframe has ~1,500 rows with varying beginning &amp; end dates. Open to different df output, but showing the above to give you the idea of what I need to accomplish. Thank you in advance for any help!</p>
<p>Idea is create month periods by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.to_period.html" rel="nofollow noreferrer"><code>DatetimeIndex.to_period</code></a> from <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> and count by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.value_counts.html" rel="nofollow noreferrer"><code>Index.value_counts</code></a>, then create <code>DataFrame</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with replace missing values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>, last join to original by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p> <pre><code>L = {r.Index: pd.date_range(r.ActualStartDate, r.ActualEndDate).to_period('M').value_counts() for r in df.itertuples()} df = df.join(pd.concat(L, axis=1).fillna(0).astype(int).T) print (df) ActualStartDate ActualEndDate 2019-06 2019-07 2019-08 2019-09 2019-10 \ 0 2019-06-30 2019-08-15 1 31 15 0 0 1 2019-09-01 2020-01-01 0 0 0 30 31 2 2019-08-28 2019-11-13 0 0 4 30 31 2019-11 2019-12 2020-01 0 0 0 0 1 30 31 1 2 13 0 0 </code></pre> <p><strong>Performance</strong>: </p> <pre><code>df = pd.concat([df] * 1000, ignore_index=True) In [44]: %%timeit ...: L = {r.Index: pd.date_range(r.ActualStartDate, r.ActualEndDate).to_period('M').value_counts() ...: for r in df.itertuples()} ...: df.join(pd.concat(L, axis=1).fillna(0).astype(int).T) ...: 689 ms ± 5.63 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [45]: %%timeit ...: df.join( ...: df.apply(lambda v: pd.Series(pd.date_range(v['ActualStartDate'], v['ActualEndDate'], freq='D').to_period('M')), axis=1) ...: .apply(pd.value_counts, axis=1) ...: .fillna(0) ...: .astype(int)) ...: 994 ms ± 5.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre>
python|pandas|datetime
1
3,107
61,054,359
InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Incompatible shapes: [4,3] vs. [4,4]
<p>I am facing below error when trying to train a multi-class classification model ( 4 classes) for Image dataset. Even though my output tensor is of shape 4 I am facing below issue. Please let me know how to fix this issue.</p> <pre><code> Epoch 1/10 --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) &lt;ipython-input-30-01c6f78f4d4f&gt; in &lt;module&gt; 4 epochs=epochs, 5 validation_data=val_data_gen, ----&gt; 6 validation_steps=total_val // batch_size 7 ) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch) 1294 shuffle=shuffle, 1295 initial_epoch=initial_epoch, -&gt; 1296 steps_name='steps_per_epoch') 1297 1298 def evaluate_generator(self, /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs) 263 264 is_deferred = not model._is_compiled --&gt; 265 batch_outs = batch_function(*batch_data) 266 if not isinstance(batch_outs, list): 267 batch_outs = [batch_outs] /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics) 1015 self._update_sample_weight_modes(sample_weights=sample_weights) 1016 self._make_train_function() -&gt; 1017 outputs = self.train_function(ins) # pylint: disable=not-callable 1018 1019 if reset_metrics: /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs) 3474 3475 fetched = self._callable_fn(*array_vals, -&gt; 3476 run_metadata=self.run_metadata) 3477 self._call_fetch_callbacks(fetched[-len(self._fetches):]) 3478 output_structure = nest.pack_sequence_as( /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py in __call__(self, *args, **kwargs) 1470 ret = tf_session.TF_SessionRunCallable(self._session._session, 1471 self._handle, args, -&gt; 1472 run_metadata_ptr) 1473 if run_metadata: 1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Incompatible shapes: [4,3] vs. [4,4] [[{{node loss_2/predictions_loss/logistic_loss/mul}}]] [[loss_2/mul/_19047]] (1) Invalid argument: Incompatible shapes: [4,3] vs. [4,4] [[{{node loss_2/predictions_loss/logistic_loss/mul}}]] 0 successful operations. 0 derived errors ignored. 
</code></pre> <p><strong>My batch size is 4 and below is last few layers of my model.</strong></p> <pre><code>conv5_block16_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv5_block16_1_relu[0][0] __________________________________________________________________________________________________ conv5_block16_concat (Concatena (None, 16, 16, 1024) 0 conv5_block15_concat[0][0] conv5_block16_2_conv[0][0] __________________________________________________________________________________________________ bn (BatchNormalization) (None, 16, 16, 1024) 4096 conv5_block16_concat[0][0] __________________________________________________________________________________________________ relu (Activation) (None, 16, 16, 1024) 0 bn[0][0] __________________________________________________________________________________________________ avg_pool (GlobalAveragePooling2 (None, 1024) 0 relu[0][0] __________________________________________________________________________________________________ predictions (Dense) (None, 4) 4100 avg_pool[0][0] ================================================================================================== </code></pre> <p><strong>Loss function</strong></p> <pre><code>model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)) </code></pre>
<p>I think, there is nothing wrong with the shapes, but with the loss function, you are trying to use. Ideally for multiclass classification, the final layer has to have <strong>softmax</strong> activation (for your logits to sum up to 1) and use <strong>CategoricalCrossentropy</strong> as your loss function if your labels are one-hot and <strong>SparseCategoricalCrossentropy</strong> if your labels are integers. Tensorflow documentation attached below. <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy</a></p> <p>Changes that are to be done to your code</p> <pre><code> # adding softmax activation to final dense layer predictions = Dense(4, activation='softmax')(avg_pool) # assuming you have one-hot labels model.compile(optimizer='adam',loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True)) </code></pre>
python|tensorflow|keras|deep-learning|multiclass-classification
0
3,108
60,870,128
Can't install geopandas in Anaconda environment
<p>I am trying to install the <code>geopandas</code> package with Anaconda Prompt, but after I use <code>conda install geopandas</code> an unexpected thing happened:</p> <pre class="lang-bash prettyprint-override"><code>Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: - Found conflicts! Looking for incompatible packages </code></pre> <p>After this, it proceeds to search for conflicts, but hours pass without finishing. In the end, I still cannot use <code>geopandas</code>.</p> <p>I have also tried installing <code>geopandas</code> in a different virtual environment and it works but I do not know how to use the environment in Jupyter Notebooks.</p> <p>I would like to know, <strong>how can install <code>geopandas</code> without a separate environment?</strong></p> <p>Or, alternatively, <strong>how can I use <code>geopandas</code> in Jupyter Notebooks after install it in a separate environment?</strong></p>
<p>Install it in a new env, and include <code>ipykernel</code> if you plan to use it in Jupyter:</p> <pre><code>conda create -n my_env geopandas ipykernel </code></pre> <p>Note, <code>nb_conda_kernels</code> should be installed install in your base env (i.e. where you launch Jupyter from). This enables Jupyter to automatically recognize other envs that are kernel-ready:</p> <pre><code>conda install -n base nb_conda_kernels </code></pre>
python|anaconda|conda|geopandas
14
3,109
60,845,365
Date formatting in chr when concatenating CSV files in pandas
<p>I've got an issue with the below function mergeit2. </p> <p>The function is concatenating two files together. </p> <p>A current year file and a historic file. In the historic file, the dates are in chr(10) format whilst in the current file the dates are in date format. As I'm then loading the data in Tableau, I want to translate all dates to chr(10) format by recognising the column as str column. However, this keeps failing for some reason. Should I replace my str recognition for another command? </p> <p>My code is as shown below:</p> <pre><code>import pandas as pd import os import glob import time import numpy as np def mergeit2(path1, path2): for fl in path1: df = pd.read_csv(fl, header = None, skiprows = 0) df = df.replace(to_replace = 'Date', value = np.nan).dropna() dflist.append(df) concat2 = pd.concat(dflist, axis = 0) concat2.to_csv(path2,header = cols, index = False) df = pd.read_csv(path2) df['Date'] = df['Date'].astype(str) df.to_csv(path2, index = False) cols = ['Year','Month','Week','Week in Number','Date','GPU Util%','CPU Util%'] dflist = [] path1 = glob.iglob('C:\*Users\&lt;username&gt;..&lt;rest of path&gt;..\*Data_*.csv') path2 = "C:\\Users\\&lt;username&gt;..&lt;rest of path&gt;..\\Data_master.csv" mergeit2(path1,path2) </code></pre> <p>An Example Dataset is:</p> <p>Before Code Execution:</p> <pre><code>Current file Example Data Int int Str int datetime64 float float Year Month Week Week in Number Date GPU Util% CPU Util% 2020 1 First 1 01/01/2020 0.680 0.450 2020 1 First 1 02/01/2020 0.320 0.056 2020 1 First 1 03/01/2020 0.560 0.470 2020 1 First 1 04/01/2020 0.520 0.325 Historic File Example Data int int Str int chr(10) float float Year Month Week Week in Number Date GPU Util% CPU Util% 2019 1 First 1 05/01/2020 0.467 0.284 2019 1 Second 2 06/01/2020 0.516 0.360 2019 1 Second 2 07/01/2020 0.501 0.323 2019 1 Second 2 08/01/2020 0.494 0.322 </code></pre> <p>After Code Execution (merged CSV master file - some dates in chr(10) and others in datetime format)</p> <pre><code>Year Month Week Week in Number Date GPU Util% CPU Util% 2020 1 First 1 2020-01-01 0.680 0.450 2020 1 First 1 2020-01-02 0.320 0.056 2020 1 First 1 2020-01-03 0.560 0.470 2020 1 First 1 2020-01-04 0.520 0.325 int int Str int chr(10) float float 2019 1 First 1 05/01/2020 0.467 0.284 2019 1 Second 2 06/01/2020 0.516 0.360 2019 1 Second 2 07/01/2020 0.501 0.323 2019 1 Second 2 08/01/2020 0.494 0.322 </code></pre> <p>My Expected Output is as follows:</p> <pre><code>Int int Str int chr(10) float float Year Month Week Week in Number Date GPU Util% CPU Util 2020 1 First 1 01/01/2020 0.680 0.450 2020 1 First 1 02/01/2020 0.320 0.056 2020 1 First 1 03/01/2020 0.560 0.470 2020 1 First 1 04/01/2020 0.520 0.325 2019 1 First 1 05/01/2020 0.467 0.284 2019 1 Second 2 06/01/2020 0.516 0.360 2019 1 Second 2 07/01/2020 0.501 0.323 2019 1 Second 2 08/01/2020 0.494 0.322 </code></pre>
<p>First of all, a big kudos to Serge Ballesta. You cannot imagine how many manhours of work you saved me and thank you for your kind explanations over our discussion.</p> <p>Serge mentioned:</p> <blockquote> <p>Hint: you could ask pandas to parse the dates in the current file and use <code>dt.strftime</code> to force the expected format</p> </blockquote> <p>I've changed my function mergeit2 as follows:</p> <p>I've imported the datetime library and changed a few lines below (see lines with comments)</p> <pre><code>import datetime def mergeit2(path1, path2): for fl in path1: df = pd.read_csv(fl, sep = ",", parse_dates = True) #Changed this line to include separator and parse_dates function. df['Date'] = df['Date'].dt.strftime('%d/%m/%Y') #added this line to ensure Dates are recognised as strings and transformed to the shape I want dflist.append(df) concat2 = pd.concat(dflist, axis = 0) concat2.to_csv(path2,header = cols, index = False) df = pd.read_csv(path2) df['Date'] = df['Date'].astype(str) df.to_csv(path2, index = False) </code></pre>
python|pandas|csv|date|tableau-api
1
3,110
71,455,791
Grouping the data in python
<p>Suppose, I have the data like this,</p> <pre><code> Date Time Energy_produced 01.01.2016 00:00 500 01.01.2016 00:15 580 01.01.2016 00:30 600 01.01.2016 00:45 620 01.01.2016 01:00 580 01.01.2016 01:15 520 01.01.2016 01:30 590 01.01.2016 01:45 570 01.01.2016 02:00 540 </code></pre> <p>Now, i want to sum the energy produced based on each hour</p> <p>suppose ,</p> <pre><code>Date Hour Energy produced per hour 01.01.2016 00:00 2280(per hour) 01:01:2016 01:00 2240(per hour) </code></pre> <p>How to sum like this?</p>
<p>If you want to keep Date/Time as strings, you could use:</p> <pre><code>(df.groupby(['Date', df['Time'].str[:3].rename('Hour')+'00']) ['Energy_produced'].sum() .reset_index() ) </code></pre> <p>Output:</p> <pre><code> Date Hour Energy_produced 0 01.01.2016 00:00 2300 1 01.01.2016 01:00 2260 2 01.01.2016 02:00 540 </code></pre> <p><em>NB. You can also get the second group with: <code>df['Time'].str.replace(r'\d{2}$', '00', regex=True).rename('Hour')</code></em></p>
python|pandas|dataframe|data-analysis
0
3,111
71,782,405
Pandas dropna() not removing entire row
<p>When I serached a way to remove an entire column in pandas if there is a null/NaN value, the only appropriate function I found was dropna(). For some reason, it's not removing the entire row as intended, but instead replacing the null values with zero. As I want to discard the entire row to then make a mean age of the animals from the dataframe, I need a way to not count the NaN values. Here's the code:</p> <pre><code>import numpy as np import pandas as pd data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'], 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3], 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1], 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']} labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] df = pd.DataFrame(data, labels) df.dropna(inplace= True) df.head() </code></pre> <p>In this case, I need to delete the Dog 'd' and Cat 'h'. But the code that comes out is:</p> <p><img src="https://i.stack.imgur.com/05qoz.png" alt="Output" /></p> <p>To note I have also done this, and it didn't work either:</p> <pre><code>df2 = df.dropna() </code></pre>
<p>you have to specify the axis = 1 and any to remove column see : <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html</a></p> <pre><code>df.dropna(axis=1, inplace= True, how='any') </code></pre> <p>if you want just delet the row :</p> <pre><code>df.dropna(inplace= True, how='any') </code></pre>
python|python-3.x|pandas
3
3,112
69,854,674
Python generate lat/long points from address
<p>How do I take addresses and generate lat, long coordinates from them in python? I have a few addresses that I would like get lat, long points but seems it doesn't work.</p> <p>I used geopandas but it returns me nothing. I am also a bit confused about what to use for the <strong>user_agent</strong>. Here is my code,</p> <pre><code>import pandas as pd from geopy.geocoders import Nominatim df2['location_lat'] = &quot;&quot; df2['location_long'] = &quot;&quot; geolocator = Nominatim(user_agent=&quot;myApp&quot;) for i in df2.index: try: #tries fetch address from geopy location = geolocator.geocode(df2['Location'][i]) #append lat/long to column using dataframe location df2.loc[i,'location_lat'] = location.latitude df2.loc[i,'location_long'] = location.longitude except: #catches exception for the case where no value is returned #appends null value to column df2.loc[i,'location_lat'] = &quot;&quot; df2.loc[i,'location_long'] = &quot;&quot; </code></pre> <p>Any help is appreciated. Thanks.</p>
<p>You can use <code>apply</code> directly on a DataFrame column:</p> <pre><code>import pandas as pd from geopy.geocoders import Nominatim geolocator = Nominatim(user_agent=&quot;myApp&quot;) df2 = pd.DataFrame({'Location': ['2094 Valentine Avenue,Bronx,NY,10457', '1123 East Tremont Avenue,Bronx,NY,10460', '412 Macon Street,Brooklyn,NY,11233']}) df2[['location_lat', 'location_long']] = df2['Location'].apply( geolocator.geocode).apply(lambda x: pd.Series( [x.latitude, x.longitude], index=['location_lat', 'location_long'])) </code></pre> <p>It should give:</p> <pre><code> Location location_lat location_long 0 2094 Valentine Avenue,Bronx,NY,10457 40.852905 -73.899665 1 1123 East Tremont Avenue,Bronx,NY,10460 40.840130 -73.876245 2 412 Macon Street,Brooklyn,NY,11233 40.682651 -73.934353 </code></pre>
python|pandas|latitude-longitude|geopandas|geopy
4
3,113
69,914,296
How to find each row and column data type in pandas dataframe using apply, map or applymap?
<p>I have dataframe as shown in image. I want each row and columns data type using apply/map/applymap. How to get this datatype? Some columns have mixed datatype as highlighted e.g. list and str, some have list and dict.</p> <p>[![samplepandasdataframe][1]][1]</p> <p>[1]:</p>
<p>If you want to have the evaluated type value of every cell you can use</p> <pre><code>def check_type(x): try: return type(eval(x)) except Exception as e: return type(x) df.applymap(check_type) </code></pre> <p>If you want to also get how many datatypes you have you can use things like</p> <pre><code>df.applymap(type).value_counts() </code></pre> <p>or if you want to get the values for all of the dataframe instead of by column</p> <pre><code>np.unique(df.applymap(type).astype(str).values, return_counts=True) </code></pre>
python|pandas|dataframe|complex-data-types
5
3,114
43,321,814
How to vectorize fourier series partial sum in numpy
<p>Given the Fourier series coefficients <code>a[n]</code> and <code>b[n]</code> (for cosines and sines respectively) of a function with period <code>T</code> and <code>t</code> an equally spaced interval the following code will evaluate the partial sum for all points in interval <code>t</code> (<code>a</code>,<code>b</code>,<code>t</code> are all <code>numpy</code> arrays). It is clarified that len(t) &lt;> len(a).</p> <pre><code>yn=ones(len(t))*a[0] for n in range(1,len(a)): yn=yn+(a[n]*cos(2*pi*n*t/T)-b[n]*sin(2*pi*n*t/T)) </code></pre> <p>My question is: Can this for loop be vectorized? </p>
<p>Here's one vectorized approach making use <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a> to create the <code>2D</code> array version of cosine/sine input : <code>2*pi*n*t/T</code> and then using <code>matrix-multiplication</code> with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow noreferrer"><code>np.dot</code></a> for the <code>sum-reduction</code> -</p> <pre><code>r = np.arange(1,len(a)) S = 2*np.pi*r[:,None]*t/T cS = np.cos(S) sS = np.sin(S) out = a[1:].dot(cS) - b[1:].dot(sS) + a[0] </code></pre> <p><strong>Further performance boost</strong></p> <p>For further boost, we can make use of <a href="https://github.com/pydata/numexpr/wiki/Numexpr-Users-Guide" rel="nofollow noreferrer"><code>numexpr</code> module</a> to compute those trignometric steps -</p> <pre><code>import numexpr as ne cS = ne.evaluate('cos(S)') sS = ne.evaluate('sin(S)') </code></pre> <p><strong>Runtime test -</strong></p> <p>Approaches -</p> <pre><code>def original_app(t,a,b,T): yn=np.ones(len(t))*a[0] for n in range(1,len(a)): yn=yn+(a[n]*np.cos(2*np.pi*n*t/T)-b[n]*np.sin(2*np.pi*n*t/T)) return yn def vectorized_app(t,a,b,T): r = np.arange(1,len(a)) S = (2*np.pi/T)*r[:,None]*t cS = np.cos(S) sS = np.sin(S) return a[1:].dot(cS) - b[1:].dot(sS) + a[0] def vectorized_app_v2(t,a,b,T): r = np.arange(1,len(a)) S = (2*np.pi/T)*r[:,None]*t cS = ne.evaluate('cos(S)') sS = ne.evaluate('sin(S)') return a[1:].dot(cS) - b[1:].dot(sS) + a[0] </code></pre> <p>Also, including function <code>PP</code> from @Paul Panzer's post.</p> <p>Timings -</p> <pre><code>In [22]: # Setup inputs ...: n = 10000 ...: t = np.random.randint(0,9,(n)) ...: a = np.random.randint(0,9,(n)) ...: b = np.random.randint(0,9,(n)) ...: T = 3.45 ...: In [23]: print np.allclose(original_app(t,a,b,T), vectorized_app(t,a,b,T)) ...: print np.allclose(original_app(t,a,b,T), vectorized_app_v2(t,a,b,T)) ...: print np.allclose(original_app(t,a,b,T), PP(t,a,b,T)) ...: True True True In [25]: %timeit original_app(t,a,b,T) ...: %timeit vectorized_app(t,a,b,T) ...: %timeit vectorized_app_v2(t,a,b,T) ...: %timeit PP(t,a,b,T) ...: 1 loops, best of 3: 6.49 s per loop 1 loops, best of 3: 6.24 s per loop 1 loops, best of 3: 1.54 s per loop 1 loops, best of 3: 1.96 s per loop </code></pre>
python|numpy|fft|vectorization|series
3
3,115
43,127,996
How get latitude and longitude through address without too many requests error in python
<p>I have a database contains 5000 building approx, I want to locate them through its address, so I use GeoPy like this:</p> <pre><code>def getLalo(address): geolocator = Nominatim() location = geolocator.geocode(address) if location == None: return [0,0] return [location.latitude,location.longitude] bmk['latitude'], bmk['longitude']= bmk.apply(lambda row: getLalo(row['full_address']), axis=1) </code></pre> <p>However, seems like I got <code>GeocoderServiceError: HTTP Error 429: Too Many Requests</code></p> <p>How do I avoid this? thx!</p>
<p>When you send requests in short of time, rate-limiting must be taken into account.</p> <p>You will receive: Too Many Requests 429 HTTP error or timing out.</p> <p>Try with RateLimiter </p> <pre><code>from geopy.extra.rate_limiter import RateLimiter geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1) </code></pre> <p>reference <a href="https://geopy.readthedocs.io/en/1.16.0/#usage-with-pandas" rel="nofollow noreferrer">https://geopy.readthedocs.io/en/1.16.0/#usage-with-pandas</a></p>
python|pandas|geopy
0
3,116
72,166,401
Adding rows of values to a dataframe in new columns, based on values in existing columns
<p>Columns that I already have saved as a dataframe: 'date', 'high', 'close,' 'target,' and 'portion'</p> <p>I am trying to create columns 'capital,' 'trade,' 'quantity,' and 'gain_loss' to the existing dataframe by performing operations row by row.</p> <p>I'm trying to:</p> <ol> <li>Start with a value of 1000 for capital.</li> <li>Calculate trade by capital * portion.</li> <li>Calculate quantity by trade / target price.</li> <li>Calculate gain/loss by close * quantity - target * quantity</li> <li>Calculate capital of next row = capital + gain/loss</li> <li>Repeat for next row.</li> </ol> <p>Something I'm struggling with is: I only want gain/loss to be added to the capital of next row if high &gt; target</p> <p>For example, on the first row, high &gt; target, which means the gain/loss on the first row should be added to the next row capital.</p> <p>However, on the second row, high &lt; target, which means whatever the gain/loss is on the second row should NOT be added to the next row capital. Instead, I want the second row capital to be simply carried over to the third row.</p> <p>Below is what I want it to look like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>date</th> <th>high</th> <th>close</th> <th>target</th> <th>portion</th> <th>capital</th> <th>trade</th> <th>quantity</th> <th>gain/loss</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>110</td> <td>150</td> <td>100</td> <td>0.2</td> <td>1000</td> <td>1000*0.2</td> <td>(1000*0.2) / 100</td> <td>0</td> </tr> <tr> <td>1</td> <td>250</td> <td>200</td> <td>260</td> <td>0.6</td> <td>1000</td> <td>1100*0.6</td> <td>(1100*0.6) / 260</td> <td>0 (since high &lt; target, no trade executed)</td> </tr> <tr> <td>2</td> <td>350</td> <td>320</td> <td>280</td> <td>0.5</td> <td>1000</td> <td>1000 * 0.5</td> <td>(1000*0.5) / 280</td> <td>71.2</td> </tr> <tr> <td>3</td> <td>410</td> <td>500</td> <td>500</td> <td>0.3</td> <td>1071.2</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> </div> <p>Here is what I've tried. My if-statement isn't operating as I intended. I'm also getting &quot;ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().&quot;</p> <pre><code>df.at[0, 'capital'] = 10000 df.at[0, 'gain_loss'] = 0 for i in range(1, 500): if df['high'] &lt; df['target']: df.at[i, 'capital'] = df.at[i - 1, 'capital'] df.at[i, 'trade_amount'] = df.at[i, 'capital'] * df.at[i, 'portion'] df.at[i, 'quantity'] = df.at[i, 'trade_amount'] / df.at[i, 'target'] df.at[i, 'gain_loss'] = 0 else: df.at[i, 'capital'] = df.at[i - 1, 'capital'] + df.at[i - 1, 'gain_loss'] df.at[i, 'trade_amount'] = df.at[i, 'capital'] * df.at[i, 'portion'] df.at[i, 'quantity'] = df.at[i, 'trade_amount'] / df.at[i, 'target'] df.at[i, 'gain_loss'] = (df.at[i, 'close'] - df.at[i, 'target']) * df.at[i, 'quantity'] </code></pre> <p>*EDIT</p> <pre><code>df.at[0, 'capital'] = 1000 df.at[0, 'gain_loss'] = 0 for i, row in df.iterrows(): if row['high'] &gt; row['target']: df.at[i, 'trade_amount'] = df.at[i, 'capital'] * row['portion'] df.at[i, 'quantity'] = df.at[i, 'trade_amount'] / row['target'] df.at[i, 'gain_loss'] = (row['close'] - row['target']) * df.at[i, 'quantity'] df.at[i + 1, 'capital'] = df.at[i, 'capital'] + df.at[i, 'gain_loss'] else: df.at[i, 'trade_amount'] = df.at[i, 'capital'] * row['portion'] df.at[i, 'quantity'] = df.at[i, 'trade_amount'] / row['target'] df.at[i, 'gain_loss'] = 0 df.at[i + 1, 'capital'] = df.at[i, 'capital'] </code></pre>
<p>As I can see there is a problem with the conditional &quot;if&quot;. In that line, you are comparing the whole column, not just the rows' values.</p> <p>On the other hand, maybe you need to set some initial conditions in first row. And add some controls to your loop.</p> <pre class="lang-py prettyprint-override"><code>data = {'date': [0, 1, 2, 3], 'high': [110, 250, 350, 410], 'close': [150, 200, 320, 500], 'target': [100, 260, 280, 500], 'portion': [0.2, 0.6, 0.5, 0.3]} df = pd.DataFrame(data) </code></pre> <pre class="lang-py prettyprint-override"><code>df.at[0, 'capital'] = 1000 nrow = df.shape[0] for i, row in df.iterrows(): if row['high'] &lt; row['target']: df.at[i, 'gain_loss'] = 0 else: df.at[i, 'trade_amount'] = df.at[i, 'capital'] * df.at[i,'portion'] df.at[i, 'quantity'] = df.at[i, 'trade_amount'] / df.at[i,'target'] df.at[i, 'gain_loss'] = (df.at[i, 'close'] - df.at[i, 'target']) * df.at[i, 'quantity'] if i &lt; nrow-1: df.at[i+1, 'capital'] = df.at[i, 'capital'] + df.at[i, 'gain_loss'] display(df) </code></pre> <p><a href="https://i.stack.imgur.com/jm0AX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jm0AX.png" alt="enter image description here" /></a></p>
python|pandas|dataframe
0
3,117
50,448,034
Reshuffle tensor according to other specific tensor in Tensorflow
<p>Given certain tensors <code>a</code> and <code>b</code> where <code>b</code> is of type <code>uint</code> and <code>max(b) &lt; len(a)</code>, I would like to get a tensor <code>c</code>:</p> <pre><code>c = a[b] </code></pre> <p>This would essentially reshuffle tensor <code>a</code> according to order given by <code>b</code>.</p>
<p>Just to close the question - as per @jdehesa 's comment:</p> <pre><code>c = tf.gather(a, b) </code></pre> <p>This was exactly needed. Thanks @jdehesa!</p>
python|tensorflow
0
3,118
50,261,680
Issues concerning 'brew link numpy'
<p>Dear Stackoverflow community,</p> <p>I have a problem with <code>brew link numpy</code>. I have installed the latest python but it still uses python 2.7. With the error logs shown below, with what ever way I did, I'm still stuck here. Furthermore, I have a problem installing opencv functionalities with the current python environment on my mac.</p> <pre><code>Error: Could not symlink lib/python2.7/site-packages/numpy/LICENSE.txt Target /usr/local/lib/python2.7/site-packages/numpy/LICENSE.txt already exists. You may want to remove it: rm '/usr/local/lib/python2.7/site-packages/numpy/LICENSE.txt' To force the link and overwrite all conflicting files: brew link --overwrite numpy To list all files that would be deleted: brew link --overwrite --dry-run numpy </code></pre>
<p>If you have 2 different versions of python you should create a virtual environment for each flavor of python.</p> <p>If python 3.6 was installed using Anaconda, search the Anaconda documentation for creating virtual environment.</p> <p>If you installed python 3.6 another way, there are unix commands to create virtual environments.</p> <p>You can create one environment for python 2.7 and another for python 3.6. Activate whichever version you want to run.</p>
python|macos|numpy|homebrew
0
3,119
45,292,014
Placeholder settings for a simple neural network
<p>My data is the following CVS file:</p> <pre><code>1,0 2,0 3,0 4,0 5,1 6,0 7,1 8,1 9,1 10,1 </code></pre> <p>I want to perform logistic regression on this with the first column as x and the second column as y. Furthermore, I want to do this using TensorFlow with a simple neural network consisting of a single input node and a single output node (no hidden layer). As I am using only 1 node for input and 1 node for output, do I use</p> <pre><code>x = tf.placeholder(tf.float32, [None, 1]) y = tf.placeholder(tf.float32, [None, 1]) </code></pre> <p>?</p> <p>I am not interested at this stage with using batches because I want to understand the other parts first.</p>
<p>I see where you are confused about (if you know how logistic regression works).<br> 1. For this single-dimension logistic regression setting, you just need <code>tf.placeholder(tf.float32, [1])</code>, as @Eric mentioned.<br> 2. If a placeholder is set to have shape <code>[None, dim]</code>, it means the data batch size is not decided when constructing network (corresponds to the first one being <code>None</code>), and that data has a dimension of <code>dim</code>.<br> 3. Note that in Tensorflow convention, batch size usually comes out first. You'll see how convenient it is -- no transpose needed if you construct multiple layers of <code>Wx+b</code> networks. </p>
tensorflow|placeholder|logistic-regression
0
3,120
45,482,755
Compare headers of dataframes in pandas
<p>I am trying to compare the headers of two pandas dataframes and filter the columns that match. df1 is my big dataframe with two headers, df2 is sort of a dictionary where I have saved every column header I will need from df1.</p> <p>So if df1 is something like this:</p> <pre><code> A B C D a b c d 0.469112 -0.282863 -1.509059 -1.135632 1.212112 -0.173215 0.119209 -1.044236 -0.861849 -2.104569 -0.494929 1.071804 0.721555 -0.706771 -1.039575 0.271860 -0.424972 0.567020 0.276232 -1.087401 -0.673690 0.113648 -1.478427 0.524988 </code></pre> <p>and df2 is something like this:</p> <pre><code> B D E </code></pre> <p>I need to get the output:</p> <pre><code> B D -0.282863 -1.135632 -0.173215 -1.044236 -2.104569 1.071804 -0.706771 0.271860 0.567020 -1.087401 0.113648 0.524988 </code></pre> <p>and also a list of the header elements that were not matching:</p> <pre><code>A C </code></pre> <p>as well as elements missing from df1:</p> <pre><code>E </code></pre> <p>So far I have tried the iloc command and a lot of different suggestions here on stackoverflow for comparing rows. Since I am comparing the headers though it was not possible.</p> <p>EDIT: I have tried</p> <pre><code>df1.columns.intersection(df2.columns) </code></pre> <p>but the result is:</p> <pre><code>MultiIndex(levels=[[], []], labels=[[], []]) </code></pre> <p>Is this because of the multiple headers?</p>
<p>Here's are couple of methods, for given <code>df1</code> and <code>df2</code></p> <pre><code>In [1041]: df1.columns Out[1041]: Index([u'A', u'B', u'C', u'D'], dtype='object') In [1042]: df2.columns Out[1042]: Index([u'B', u'D', u'E'], dtype='object') </code></pre> <p>Columns in both <code>df1</code> and <code>df2</code></p> <pre><code>In [1046]: df1.columns.intersection(df2.columns) Out[1046]: Index([u'B', u'D'], dtype='object') </code></pre> <p>Columns in <code>df1</code> not in <code>df2</code></p> <pre><code>In [1047]: df1.columns.difference(df2.columns) Out[1047]: Index([u'A', u'C'], dtype='object') </code></pre> <p>Columns in <code>df2</code> not in <code>df1</code></p> <pre><code>In [1048]: df2.columns.difference(df1.columns) Out[1048]: Index([u'E'], dtype='object') </code></pre>
python|python-3.x|pandas
29
3,121
62,866,152
NaN Values inputed into test and train data
<p>I am working on a Data Science project with the Fifa dataset. I cleaned the data and took care of any NaN values in the Data to get it ready to be split into test and train. I need to use StratifiedShuffleSplit in order to split the data. Updated to a cleaner way to divided the value data into groups, but I am still getting NaN values once it goes through the split.</p> <p>Link to the data set I am using: <a href="https://www.kaggle.com/karangadiya/fifa19" rel="nofollow noreferrer">https://www.kaggle.com/karangadiya/fifa19</a></p> <pre><code>n = fifa['value'].count() folds = 3 fifa.sort_values('value', ascending=False, inplace=True) fifa['group_id'] = np.floor(np.arange(n)/folds) fifa['value_cat'] = fifa.groupby('group_id', as_index = False)['name'].transform(lambda x: np.random.choice(v_cats, size=x.size, replace = False)) </code></pre> <p>At this point when I check the test and train data I now have mystery NaN values inputed. I think the NaN values maybe a result of .loc since I am getting a 'warning' in jupyter.</p> <pre><code>c:\python37\lib\site-packages\ipykernel_launcher.py:6: FutureWarning: Passing list-likes to .loc or [] with any missing label will raise KeyError in the future, you can use .reindex() as an alternative. </code></pre> <p>Code below:</p> <pre><code>from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(fifa, fifa['value_cat']): strat_train_set = fifa.loc[train_index] strat_test_set = fifa.loc[test_index] fifa = strat_train_set.drop('value', axis=1) value_labels = strat_train_set['value'].copy() </code></pre> <p>PLEASE HELP MY POOR SOUL!!</p> <p><a href="https://i.stack.imgur.com/o7Bsa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o7Bsa.png" alt="enter image description here" /></a></p>
<p>Here's one solution.</p> <pre><code>import numpy as np import pandas as pd n = 100 folds = 3 # Make some data df = pd.DataFrame({'id':np.arange(n), 'value':np.random.lognormal(mean=10, sigma=1, size=n)}) # Sort by value df.sort_values('value', ascending=False, inplace=True) # Insert 'group' ids, 0, 0, 0, 1, 1, 1, 2, 2, 2, ... df['group_id'] = np.floor(np.arange(n)/folds) # Randomly assign folds within each group df['fold'] = df.groupby('group_id', as_index=False)['id'].transform(lambda x: np.random.choice(folds, size=x.size, replace=False)) # Inspect df.head(10) id value group_id fold 46 46 208904.679048 0.0 0 3 3 175730.118616 0.0 2 0 0 137067.103600 0.0 1 87 87 101894.243831 1.0 2 11 11 100570.573379 1.0 1 90 90 93681.391254 1.0 0 73 73 92462.150435 2.0 2 13 13 90349.408620 2.0 1 86 86 87568.402021 2.0 0 88 88 82581.010789 3.0 1 </code></pre> <p>Assuming you want k folds, the idea is to sort the data by value, then randomly assign folds 1, 2, ..., k to the first k rows, then do the same to the next k rows, etc.</p> <p>By the way, you will have more luck getting answers to questions here if you can create reproducible examples with data that make it easy for others to tinker with. :)</p>
python|pandas|data-science|linear-regression|sklearn-pandas
0
3,122
54,301,388
Saving an image as numpy array
<p>I am not able to load images into numpy array and getting an error like this...</p> <blockquote> <p>ValueError: could not broadcast input array from shape (175,217,3) into shape (100,100,3)</p> </blockquote> <p>The function code:</p> <pre><code>import cv2 import numpy as np import os train_data_dir = '/home/ec2-user/SageMaker/malaria-detection-model/malaria/training' valid_data_dir = '/home/ec2-user/SageMaker/malaria-detection-model/malaria/validation' # declare the number of samples in each category nb_train_samples = 22045 # training samples nb_valid_samples = 5513# validation samples num_classes = 2 img_rows_orig = 100 img_cols_orig = 100 def load_training_data(): labels = os.listdir(train_data_dir) total = len(labels) X_train = np.ndarray((nb_train_samples, img_rows_orig, img_cols_orig, 3), dtype=np.uint8) Y_train = np.zeros((nb_train_samples,), dtype='uint8') i = 0 j = 0 for label in labels: image_names_train = os.listdir(os.path.join(train_data_dir, label)) total = len(image_names_train) print(label, total) for image_name in image_names_train: img = cv2.imread(os.path.join(train_data_dir, label, image_name), cv2.IMREAD_COLOR) img = np.array([img]) X_train[i] = img Y_train[i] = j if i % 100 == 0: print('Done: {0}/{1} images'.format(i, total)) i += 1 j += 1 print(i) print('Loading done.') np.save('imgs_train.npy', X_train, Y_train) return X_train, Y_train </code></pre> <p>This function is part of the file load_data.py that can be found in malaria_cell_classification_code.zip file from:</p> <p><a href="https://ceb.nlm.nih.gov/repositories/malaria-datasets/" rel="nofollow noreferrer">https://ceb.nlm.nih.gov/repositories/malaria-datasets/</a></p> <hr> <p>I tried to change X_train and Y_train to list instead of numpy array. The function halts at np.save method.</p> <pre><code>X_train = Y_train = list() X_train.append(img) Y_train.append(j) </code></pre> <p>What is the correct and standard way to save images in numpy?</p> <hr> <p>After resizing the image, I get different error:</p> <pre><code>Done: 19400/9887 images Done: 19500/9887 images Done: 19600/9887 images Done: 19700/9887 images Done: 19800/9887 images 19842 Loading done. Transform targets to keras compatible format. Done: 19800/9887 images 19842 Loading done. Transform targets to keras compatible format. ------------------------------ Creating validation images... ------------------------------ Parasitized 1098 --------------------------------------------------------------------------- error Traceback (most recent call last) &lt;ipython-input-6-8008be74f482&gt; in &lt;module&gt;() 2 #load data for training 3 X_train, Y_train = load_resized_training_data(img_rows, img_cols) ----&gt; 4 X_valid, Y_valid = load_resized_validation_data(img_rows, img_cols) 5 #print the shape of the data 6 print(X_train.shape, Y_train.shape, X_valid.shape, Y_valid.shape) ~/SageMaker/malaria-detection-model/malaria_cell_classification_code/load_data.py in load_resized_validation_data(img_rows, img_cols) 103 def load_resized_validation_data(img_rows, img_cols): 104 --&gt; 105 X_valid, Y_valid = load_validation_data() 106 107 # Resize images ~/SageMaker/malaria-detection-model/malaria_cell_classification_code/load_data.py in load_validation_data() 75 76 img = np.array([img]) ---&gt; 77 img2 = cv2.resize(img, (100, 100)) 78 X_valid[i] = img2 79 Y_valid[i] = j error: OpenCV(4.0.0) /io/opencv/modules/imgproc/src/resize.cpp:3427: error: (-215:Assertion failed) !dsize.empty() in function 'resize' ------------------------------ Creating validation images... 
</code></pre> <p>The complete script can be found here...</p> <p><a href="https://gist.github.com/shantanuo/cfe0913b367647890451f5ae3f6fb691" rel="nofollow noreferrer">https://gist.github.com/shantanuo/cfe0913b367647890451f5ae3f6fb691</a></p>
<p><code>opencv2</code> already returns a numpy array. Don't make a new one, especially not one with an additional level of nesting:</p> <pre><code>img = cv2.imread(os.path.join(train_data_dir, label, image_name), cv2.IMREAD_COLOR) img = cv2.resize(img, (100, 100)) </code></pre>
python|image|numpy|opencv|multidimensional-array
1
3,123
54,578,809
Simple workable example of multiprocessing
<p>I am looking for a simple example of python <code>multiprocessing</code>.</p> <p>I am trying to figure out workable example of python <code>multiprocessing</code>. I have found an example on breaking large numbers into primes. That worked because there was little input (one large number per core) and lot of computing (breaking the numbers into primes). </p> <p>However, my interest is different - I have lot of input data on which I perform simple calculations. I wonder if there is a simple way to modify the below code so that multicores really beats single core. I am running python 3.6 on Win10 machine with 4 physical cores and 16 GB RAM. </p> <p>Here comes my sample code.</p> <pre><code>import numpy as np import multiprocessing as mp import timeit # comment the following line to get version without queue queue = mp.Queue() cores_no = 4 def npv_zcb(bnd_info, cores_no): bnds_no = len(bnd_info) npvs = [] for bnd_idx in range(bnds_no): nom = bnd_info[bnd_idx][0] mat = bnd_info[bnd_idx][1] yld = bnd_info[bnd_idx][2] npvs.append(nom / ((1 + yld) ** mat)) if cores_no == 1: return npvs # comment the following two lines to get version without queue else: queue.put(npvs) # generate random attributes of zero coupon bonds print('Generating random zero coupon bonds...') bnds_no = 100 bnd_info = np.zeros([bnds_no, 3]) bnd_info[:, 0] = np.random.randint(1, 31, size=bnds_no) bnd_info[:, 1] = np.random.randint(70, 151, size=bnds_no) bnd_info[:, 2] = np.random.randint(0, 100, size=bnds_no) / 100 bnd_info = bnd_info.tolist() # single core print('Running single core...') start = timeit.default_timer() npvs = npv_zcb(bnd_info, 1) print(' elapsed time: ', timeit.default_timer() - start, ' seconds') # multiprocessing print('Running multiprocessing...') print(' ', cores_no, ' core(s)...') start = timeit.default_timer() processes = [] idx = list(range(0, bnds_no, int(bnds_no / cores_no))) idx.append(bnds_no + 1) for core_idx in range(cores_no): input_data = bnd_info[idx[core_idx]: idx[core_idx + 1]] process = mp.Process(target=npv_zcb, args=(input_data, cores_no)) processes.append(process) process.start() for process_aux in processes: process_aux.join() # comment the following three lines to get version without queue mylist = [] while not queue.empty(): mylist.append(queue.get()) print(' elapsed time: ', timeit.default_timer() - start, ' seconds') </code></pre> <p>I would be very grateful if anyone could advice me how to modify the code so that multiple core run beats single core run. I have also noticed that increasing variable <code>bnds_no</code> to 1,000 leads to <code>BrokenPipeError</code>. One would expect that increasing amount of input would lead to longer computational time rather than an error... What is wrong here?</p>
<p>This doesn't directly answer your question but if you were using RxPy for reactive Python programming you could check out their small example on multiprocessing: <a href="https://github.com/ReactiveX/RxPY/tree/release/v1.6.x#concurrency" rel="nofollow noreferrer">https://github.com/ReactiveX/RxPY/tree/release/v1.6.x#concurrency</a></p> <p>Seems a bit easier to manage concurrency with ReactiveX/RxPy than trying to do it manually.</p>
python|numpy|multiprocessing
1
3,124
54,437,822
RuntimeError: empty_like method already has a docstring
<p>I am working on a project that was developed using python 3.6 and I am using python 3.7 instead. I tried to run the tests that passed. However in the end I got a series of errors like this one:</p> <pre><code>Error in atexit._run_exitfuncs: Traceback (most recent call last): File "&lt;frozen importlib._bootstrap&gt;", line 983, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 953, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 219, in _call_with_frames_removed File "/project/.eggs/scikit_learn-0.20.2-py3.7-macosx-10.13-x86_64.egg/sklearn/__init__.py", line 64, in &lt;module&gt; from .base import clone File "/project/.eggs/scikit_learn-0.20.2-py3.7-macosx-10.13-x86_64.egg/sklearn/base.py", line 10, in &lt;module&gt; import numpy as np File "/project/.eggs/numpy-1.16.0-py3.7-macosx-10.13-x86_64.egg/numpy/__init__.py", line 142, in &lt;module&gt; from . import core File "/project/.eggs/numpy-1.16.0-py3.7-macosx-10.13-x86_64.egg/numpy/core/__init__.py", line 16, in &lt;module&gt; from . import multiarray File "/project/.eggs/numpy-1.16.0-py3.7-macosx-10.13-x86_64.egg/numpy/core/multiarray.py", line 70, in &lt;module&gt; def empty_like(prototype, dtype=None, order=None, subok=None): File "/project/.eggs/numpy-1.16.0-py3.7-macosx-10.13-x86_64.egg/numpy/core/overrides.py", line 240, in decorator docs_from_dispatcher=docs_from_dispatcher)(implementation) File "/project/.eggs/numpy-1.16.0-py3.7-macosx-10.13-x86_64.egg/numpy/core/overrides.py", line 204, in decorator add_docstring(implementation, dispatcher.__doc__) RuntimeError: empty_like method already has a docstring </code></pre> <p>Do you have any advice?</p>
<p>According to NumPy's GitHub issues page, the errors the OP reported, as well as the other errors reported by the commenters, were known NumPy bugs which appear to have been fixed in later releases.</p>
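<p>If you run into this, the usual remedy (assuming the project allows it) is simply to upgrade NumPy in the affected environment, e.g.:</p>
<pre><code>pip install --upgrade numpy
</code></pre>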
python|python-3.x|numpy|python-3.6|python-3.7
1
3,125
73,697,322
How to convert column values into new columns showing frequency
<p>I created a new dataframe by splitting a column and expanding it.</p> <p>I now want to convert the dataframe to create new columns for every value and only display the frequency of the value.</p> <p>I wrote an example below.</p> <p>Example dataframe:</p> <pre><code>import pandas as pd import numpy as np df= pd.DataFrame({0:['cake','fries', 'ketchup', 'potato', 'snack'], 1:['fries', 'cake', 'potato', np.nan, 'snack'], 2:['ketchup', 'cake', 'potatos', 'snack', np.nan], 3:['potato', np.nan,'cake', 'ketchup',np.nan], 'index':['james','samantha','ashley','tim', 'mo']}) df.set_index('index') </code></pre> <p>Expected output:</p> <pre><code>output = pd.DataFrame({'cake': [1, 2, 1, 0, 0], 'fries': [1, 1, 0, 0, 0], 'ketchup': [1, 0, 1, 1, 0], 'potatoes': [1, 0, 2, 1, 0], 'snack': [0, 0, 0, 1, 2], 'index': ['james', 'samantha', 'asheley', 'tim', 'mo']}) output.set_index('index') </code></pre>
<p>Based on the description of what you want, you would need a <code>crosstab</code> on the reshaped data:</p> <pre><code>df2 = df.reset_index().melt('index') out = pd.crosstab(df2['index'], df2['value'].str.lower()) </code></pre> <p>This, however, doesn't match the provided output.</p> <p>Output:</p> <pre><code>value apple berries cake chocolate drink fries fruits ketchup potato potatoes snack index Ashley 0 0 0 0 0 0 0 1 1 0 1 James 0 1 1 0 0 1 1 0 0 0 0 Mo 0 0 0 1 0 0 1 1 0 1 0 samantha 1 0 0 1 0 1 0 0 0 0 0 tim 0 0 0 0 1 0 0 0 0 0 1 </code></pre>
python|pandas|dataframe|split|pivot
0
3,126
73,825,612
Why is dataframe.sum(axis=0) getting NAN's when every value in every column is a real number?
<p>All column values in the selected <code>measure_cols</code> of the <code>dfm</code> DataFrame are real numbers - in fact all are between <code>[-1.0..1.0]</code> inclusive.</p> <p>Following gives <code>False</code> for all Series/Columns in the <code>dfc</code> dataframe</p> <pre><code>[print(f&quot;{c}: {dfc[c].hasnans}&quot;) for c in dfc.columns] </code></pre> <p>Results: all <code>False</code></p> <p>But all row sums in <code>dfc['shap_sum']</code> are coming up as <code>NAN</code>'s. Why would this be?</p> <pre><code>dfc['shap_sum'] = dfc[measure_cols].sum(axis=0) </code></pre> <p><a href="https://i.stack.imgur.com/XzwGf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzwGf.png" alt="enter image description here" /></a></p> <p><strong>Update</strong> The following has the correct results - as seen in the debugger</p> <pre><code>dfc[measure_cols].sum(axis=0) </code></pre> <p>But when assigned to a new column in the dataframe they get distorted into <code>NaN</code>'s.</p> <p><a href="https://i.stack.imgur.com/7HWo7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7HWo7.png" alt="enter image description here" /></a></p> <p>Why is this happening ?</p> <pre><code>dfc['shap_sum'] = dfc[measure_cols].sum(axis=0) </code></pre> <p><a href="https://i.stack.imgur.com/XzwGf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzwGf.png" alt="enter image description here" /></a></p>
<p>Oh, I made the mistake of using <code>axis=0</code> when I intended to do <code>row</code> sums; row sums need <code>axis=1</code>. With <code>axis=0</code> the result is a Series indexed by the column names, so assigning it as a new column aligns it against the row index and produces all <code>NaN</code>s. I will never agree with that decision on polarity.</p>
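<p>So the corrected assignment from the question would be:</p>
<pre><code>dfc['shap_sum'] = dfc[measure_cols].sum(axis=1)  # axis=1 sums across the measure columns, row by row
</code></pre>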
python|pandas
0
3,127
73,613,153
Set all values between 2 ranges in numpy array to certain value
<p>I have 2 1d arrays of type int and a start and a stop value that look like this:</p> <pre><code>y_start = #some number y_end = #some number x_start = #some array of ints x_end = #some array of ints </code></pre> <p>What I want is to simulate the following behavior without loops:</p> <pre><code>for i, y in enumerate(range(y_start, y_end)): arr[x_start[i]:x_end[i], y] = c </code></pre> <p>Example:</p> <pre><code>y_start = 2 y_end = 5 x_start = np.array([2, 1, 3]) x_end = np.array([4, 3, 6]) c = 1 </code></pre> <p><strong>Input</strong></p> <pre><code>arr = array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) </code></pre> <p><strong>Output:</strong></p> <pre><code>arr = array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0], [0, 0, 1, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0, 0]]) </code></pre> <p>Would this be possible?</p>
<p>You can use indexing and a crafted boolean array converted to integer:</p> <pre><code>v = np.arange(arr.shape[0])[:,None]

# conversion to int is implicit
arr[:, y_start:y_end] = ((v&gt;=x_start) &amp; (v&lt;x_end))#.astype(int)
</code></pre> <p>output:</p> <pre><code>array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 0, 1, 1, 0, 0, 0],
       [0, 0, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 1, 0, 0]])
</code></pre>
python|arrays|numpy
3
3,128
71,215,246
ValueError: Incompatible indexer with Series while adding date to Date to Data Frame
<p>I am new to python and I can't figure out why I get this error: ValueError: Incompatible indexer with Series.</p> <p>I am trying to add a date to my data frame.</p> <p>The date I am trying to add:</p> <pre><code>date = (chec[(chec['Día_Sem']=='Thursday') &amp; (chec['ID']==2011957)]['Entrada']) date </code></pre> <p>Date output:</p> <pre><code> 56 1900-01-01 07:34:00 Name: Entrada, dtype: datetime64[ns] </code></pre> <p>Then I try to add 'date' to my data frame using loc:</p> <pre><code>rep.loc[2039838,'Thursday'] = date rep </code></pre> <p>And I get this error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-347-3e0678b0fdbf&gt; in &lt;module&gt; ----&gt; 1 rep.loc[2039838,'Thursday'] = date 2 rep ~/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py in __setitem__(self, key, value) 188 key = com.apply_if_callable(key, self.obj) 189 indexer = self._get_setitem_indexer(key) --&gt; 190 self._setitem_with_indexer(indexer, value) 191 192 def _validate_key(self, key, axis): ~/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py in _setitem_with_indexer(self, indexer, value) 640 # setting for extensionarrays that store dicts. Need to decide 641 # if it's worth supporting that. --&gt; 642 value = self._align_series(indexer, Series(value)) 643 644 elif isinstance(value, ABCDataFrame): ~/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py in _align_series(self, indexer, ser, multiindex_indexer) 781 return ser.reindex(ax)._values 782 --&gt; 783 raise ValueError('Incompatible indexer with Series') 784 785 def _align_frame(self, indexer, df): ValueError: Incompatible indexer with Series </code></pre>
<p>Try <code>date.iloc[0]</code> instead of <code>date</code>:</p> <pre><code>rep.loc[2039838,'Thursday'] = date.iloc[0] </code></pre> <p>Because <code>date</code> is actually a Series (so basically like a list/array) of the values, and <code>.iloc[0]</code> actually selects the value.</p>
python|pandas|date|valueerror
0
3,129
71,166,840
Return mask from numpy isin function in 1 dimension
<p>I am trying to use numpy's function isin to return a mask for a given query. For example, let's say I want to get a mask for element 2.1 in the numpy array below:</p> <pre><code>import numpy as np a = np.array( [ [&quot;1&quot;, &quot;1.1&quot;], [&quot;1&quot;, &quot;1.2&quot;], [&quot;2&quot;, &quot;2.1&quot;], [&quot;2&quot;, &quot;2.2&quot;], [&quot;2.1&quot;, &quot;2.1.1&quot;], [&quot;2.1&quot;, &quot;2.1.2&quot;], [&quot;2.2&quot;, &quot;2.2.1&quot;], [&quot;2.2&quot;, &quot;2.2.2&quot;], ] ) </code></pre> <p>I am querying it with the parameters <code>np.isin(a, &quot;2.1&quot;)</code>, but this returns another 2D array instead of a 1D mask:</p> <pre><code>[[False False] [False False] [False True] [False False] [ True False] [ True False] [False False] [False False]] </code></pre> <p>I was expecting it would return something like:</p> <pre><code>[False False True False True True False False] </code></pre> <p>What am I supposed to do to fix this query?</p>
<p>If you want the rows that &quot;2.1&quot; appears in <code>a</code>, you want the <code>any</code> method on axis:</p> <pre><code>&gt;&gt;&gt; np.isin(a, &quot;2.1&quot;).any(axis=1) array([False, False, True, False, True, True, False, False]) </code></pre> <p>If you want the indexes of where &quot;2.1&quot; appears in <code>a</code>, you could use <code>np.where</code>:</p> <pre><code>&gt;&gt;&gt; np.where(np.isin(a, &quot;2.1&quot;)) (array([2, 4, 5], dtype=int64), array([1, 0, 0], dtype=int64)) </code></pre>
python|numpy|isin
2
3,130
71,380,792
Pandas groupby two columns, one by row and another by column
<p>I have a csv file that contains n rows of sales of houses.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">House</th> <th style="text-align: center;">House_type</th> <th style="text-align: center;">Sale_year</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">One</td> <td style="text-align: center;">Semi</td> <td style="text-align: center;">2010</td> </tr> <tr> <td style="text-align: center;">two</td> <td style="text-align: center;">Flat</td> <td style="text-align: center;">2011</td> </tr> <tr> <td style="text-align: center;">three</td> <td style="text-align: center;">bungalow</td> <td style="text-align: center;">2012</td> </tr> <tr> <td style="text-align: center;">four</td> <td style="text-align: center;">Semi</td> <td style="text-align: center;">2013</td> </tr> <tr> <td style="text-align: center;">five</td> <td style="text-align: center;">Semi</td> <td style="text-align: center;">2013</td> </tr> </tbody> </table> </div> <p>I want to groupby the data by House_type (flat, bungalow, semi) by sale_year (2010,2011,etc) counts as columns. So I'm trying to output the data in the below format.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">House_type</th> <th style="text-align: center;">2010</th> <th style="text-align: center;">2011</th> <th style="text-align: center;">2012</th> <th style="text-align: center;">2013</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">Semi</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> <td style="text-align: center;">0</td> <td style="text-align: center;">2</td> </tr> <tr> <td style="text-align: center;">Flat</td> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: center;">bungalow</td> <td style="text-align: center;">0</td> <td style="text-align: center;">0</td> <td style="text-align: center;">1</td> <td style="text-align: center;">0</td> </tr> </tbody> </table> </div> <p>However, when I run the code, it returns both House_type and Sale_year as two columns.</p> <pre><code>house= housedata.groupby([&quot;House_type&quot;, &quot;Sale_year&quot;])[&quot;Sale_year&quot;].count() house House_type Sale_year Flat 2011.0 1 bungalow 2012.0 1 Semi 2010.0 1 2013.0 2 </code></pre> <p>How do I get pandas to output the data desired?</p> <p>Many thanks</p>
<p>You can achieve the same result using the <code>get_dummies</code> method of pandas. It creates one indicator column per value of the categorical column, which you can then sum per group to get the counts.</p> <pre><code>df = pd.DataFrame({'House_type':['Semi','Flat','Bungalow','Semi','Semi'],'sale_year':[2010,2011,2012,2013,2013]})
df_final = pd.get_dummies(df,columns=['sale_year']).groupby('House_type').sum()
df_final
</code></pre>
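<p>An alternative sketch that produces the same pivoted counts directly, using the example frame above, is <code>pd.crosstab</code>:</p>
<pre><code>pd.crosstab(df['House_type'], df['sale_year'])
</code></pre>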
python|jupyter-notebook|pandas-groupby
3
3,131
71,306,892
How to group columns based on unique values from another columns in pandas
<p>Let's say I have a pandas dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">brand</th> <th style="text-align: center;">category</th> <th style="text-align: right;">size</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">nike</td> <td style="text-align: center;">sneaker</td> <td style="text-align: right;">9</td> </tr> <tr> <td style="text-align: left;">adidas</td> <td style="text-align: center;">boots</td> <td style="text-align: right;">11</td> </tr> <tr> <td style="text-align: left;">nike</td> <td style="text-align: center;">boots</td> <td style="text-align: right;">9</td> </tr> </tbody> </table> </div> <p>There could be more than 100 brands and some brands could have more categories than others. How do I get a table that will group them based on brands? That is the first column(index) that should be the brands, the second should be the categories belonging to the brand, and if possible the mean size for each brand as well, using pandas.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">brand</th> <th style="text-align: center;">category</th> <th style="text-align: right;">size</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">nike</td> <td style="text-align: center;">sneaker</td> <td style="text-align: right;">10.5</td> </tr> <tr> <td style="text-align: left;"></td> <td style="text-align: center;">boots</td> <td style="text-align: right;"></td> </tr> <tr> <td style="text-align: left;">adidas</td> <td style="text-align: center;">boots</td> <td style="text-align: right;">11</td> </tr> </tbody> </table> </div>
<p>Maybe there is a small error in the size from the example (the mean is 9 rather than 10.5), but a solution might be:</p> <pre class="lang-py prettyprint-override"><code>df.groupby(['brand'], as_index=False).agg({'category': list, 'size': 'mean'})
</code></pre> <p>Output:</p> <pre><code>    brand          category  size
0  adidas           [boots]    11
1    nike  [sneaker, boots]     9
</code></pre>
pandas|dataframe
1
3,132
71,377,763
Plot a dataframe based on index values only
<p>I have a simple question. I have a dataframe with multiple columns for date and 2 index rows like this:</p> <p><a href="https://i.stack.imgur.com/rgVg3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rgVg3.png" alt="enter image description here" /></a></p> <p>I wish to plot this given dataframe as a lineplot but based on indexes. What I mean by this is, I want a plot with 2 lines: Ptf and bmk representing the values in respective indices. Any help is appreciated.</p>
<p>With a DataFrame like this:</p> <pre class="lang-py prettyprint-override"><code>data = pd.DataFrame(data = {'2021-01-01': [0,1], '2021-01-02': [2.3, 2.4], '2021-01-03': [3.1, 4.2]}, index=['ptf', 'bmk']) data.columns = pd.to_datetime(data.columns) print (data) 2021-01-01 2021-01-02 2021-01-03 ptf 0 2.3 3.1 bmk 1 2.4 4.2 </code></pre> <p>You can access the transposed <code>data</code> with the <code>.T</code> property and then use the <code>.plot()</code> command. The <code>plot</code> method plots each column as a function of the index, so by transposing you get exactly what you are asking:</p> <pre class="lang-py prettyprint-override"><code>data.T.plot() </code></pre> <p><a href="https://i.stack.imgur.com/wzCbh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzCbh.png" alt="enter image description here" /></a></p>
python-3.x|pandas|matplotlib
1
3,133
71,329,493
Pandas: winsorize feature outliers for each group
<p>I am having dataframe with 100 features and I want to winsorize outliers for each 'group'. You can use the following code to generate the dataframe.</p> <pre><code>import numpy as np import pandas as pd from scipy.stats import mstats data = np.random.randint(1,999,size=(500,101)) cols = [] for i in range(101): cols += [f'f_{i}'] df = pd.DataFrame(data, columns=cols) df['group'] = np.random.randint(1,4,size=(500,1)) df = df.sort_values(by=['group']) </code></pre> <p>Now I want to winsorize (NOT delete !) extreme values for each feature in each group.</p> <p>If you are not sure about 'winsorize'. Here is an example:</p> <p>Before winsorize:</p> <pre><code>1, 2, 3, 4, 5 ... 97, 98, 99, 100 </code></pre> <p>After winsorize the smallest and largest 1%:</p> <pre><code>2, 2, 3, 4, 5 ... 97, 98, 99, 99 </code></pre> <p>I know how to winsorize extreme 1% values for each featrues for the entire dataframe by using the following code.</p> <pre><code>for col in df.columns: df[col] = stats.mstats.winsorize(df[col], limits=[0.01, 0.01]) </code></pre> <p>However, I want to winsorize for each features for each group.</p> <p>Can anyone please advise ? Thank you !</p>
<p>There must be a more elegant way than this, but it seems to work for me and it's just a tiny addition to your solution:</p> <pre><code>for col in df.columns: for group in df.group.unique(): df[col][df.group==group] = mstats.winsorize(df[col][df.group==group], limits=[0.01, 0.01]) </code></pre> <p>As you can see, I also iterate through the groups in addition to the columns, and solve the problem with simple filtering of each column.</p>
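<p>A variation on the same idea, assuming you want to skip the <code>group</code> column itself and avoid pandas' chained-assignment warning, is to assign through <code>.loc</code>:</p>
<pre><code>feature_cols = df.columns.drop('group')
for col in feature_cols:
    for g, idx in df.groupby('group').groups.items():
        # idx is the row index of this group; .loc writes back in place
        df.loc[idx, col] = mstats.winsorize(df.loc[idx, col], limits=[0.01, 0.01])
</code></pre>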
python|pandas|dataframe|pandas-groupby
1
3,134
52,048,757
How to split every sentence into individual words and average polarity score per sentence and append into new column in dataframe?
<p>I can successfully split a sentence into its individual words and take of every average of the polarity score of every word using this code. It works great. </p> <pre><code>import statistics as s from textblob import TextBlob a = TextBlob("""Thanks, I'll have a read!""") print(a) c=[] for i in a.words: c.append(a.sentiment.polarity) d = s.mean(c) d = 0.25 a.words = WordList(['Thanks', 'I', "'ll", 'have', 'a', 'read']) </code></pre> <p>How do I transfer the above code to a df that looks like this?: </p> <p>df</p> <pre><code> text 1 Thanks, I’ll have a read! </code></pre> <p>but take the average of every polarity per word? </p> <p>The closet is I can apply polarity to every sentence for every sentence in df: </p> <pre><code>def sentiment_calc(text): try: return TextBlob(text).sentiment.polarity except: return None df_sentences['sentiment'] = df_sentences['text'].apply(sentiment_calc) </code></pre>
<p>I have the impression the sentiment polarity only works on TextBlob type.</p> <p>So my idea here is to split the text blob into words (with the split function -- see doc <a href="https://textblob.readthedocs.io/en/dev/api_reference.html#textblob.blob.WordList" rel="nofollow noreferrer">here</a>) and convert them to TextBlob objects. This is done in the list comprehension:</p> <pre><code>[TextBlob(x).sentiment.polarity for x in a.split()] </code></pre> <p>So the whole thing looks like this:</p> <pre><code>import statistics as s from textblob import TextBlob import pandas as pd a = TextBlob("""Thanks, I'll have a read!""") def compute_mean(a): return s.mean([TextBlob(x).sentiment.polarity for x in a.split()]) print(compute_mean("Thanks, I'll have a read!")) df = pd.DataFrame({'text':["Thanks, I'll have a read!", "Second sentence", "a bag of apples"]}) df['score'] = df['text'].map(compute_mean) print(df) </code></pre>
python|pandas|dataframe|nlp|textblob
1
3,135
72,728,456
Clip value of cumprod during calculation
<p>Say I have the following dataframe</p> <pre class="lang-py prettyprint-override"><code>x = pd.DataFrame({'value': [1.0, 1.1, 1.1, 1.1, 1.2, 1.0, 0.9, 1.9, 1.7, 0.8, 0.5, 0.3]}) </code></pre> <p>and I want to calculate the cumulative product without the value ever going below <code>1.0</code> or above <code>3.0</code>.</p> <p>If I simply do the cumulative product (<code>x.cumprod()</code>), I end up with</p> <pre><code> value 0 1.000000 1 1.100000 2 1.210000 3 1.331000 4 1.597200 5 1.597200 6 1.437480 7 2.731212 8 4.643060 9 3.714448 10 1.857224 11 0.557167 </code></pre> <p>But what I would like to do is something like this</p> <pre class="lang-py prettyprint-override"><code>def mycumprod(series, start, low, high): values = [] last_value = start for value in series.values: last_value = last_value * value if last_value &lt; low: last_value = low elif last_value &gt; high: last_value = high values.append(last_value) return pd.Series(values) </code></pre> <p>where, during the cumulative product I prevent the value from ever going below <code>low</code> or above <code>high</code>.</p> <p>Calling <code>mycumprod(x['value'], 1.0, 1.0, 3.0)</code> leads to the following series</p> <pre><code>0 1.000000 1 1.100000 2 1.210000 3 1.331000 4 1.597200 5 1.597200 6 1.437480 7 2.731212 8 3.000000 9 2.400000 10 1.200000 11 1.000000 dtype: float64 </code></pre> <p>Is there a way to do this efficiently in Pandas?</p> <p>I used <a href="https://stackoverflow.com/questions/25701494/pandas-cumsum-with-conditional-product-of-lagged-value">this solution</a> for cumsum in the past, but I don't know how to apply it to cumprod.</p> <p>Thanks for any help you can provide!</p>
<p>This type of calculation is very difficult / impossible to vectorize using pandas/numpy, but you could use <a href="https://numba.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer"><code>numba</code></a>:</p> <pre><code>import numpy as np
from numba import njit

@njit
def mycumprod_numba(values, start, low, high):
    products = np.empty_like(values)
    last_value = start
    for i in range(len(values)):
        last_value *= values[i]
        if last_value &lt; low:
            last_value = low
        elif last_value &gt; high:
            last_value = high
        products[i] = last_value
    return products
</code></pre> <p>For 1000 elements (<code>pd.DataFrame({'value': np.random.rand(1_000) * 2})</code>) I get a speed-up of about 15x:</p> <pre><code>%timeit mycumprod(x['value'], 1.0, 1.0, 3.0)
# 534 µs ± 6.34 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit pd.Series(mycumprod_numba(x['value'].to_numpy(), 1.0, 1.0, 3.0))
# 36.7 µs ± 630 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
python|pandas|numpy
3
3,136
72,590,955
Merging multiple csv files(unnamed colums) from a folder in python
<pre><code>import pandas as pd import os import glob path = r'C:\Users\avira\Desktop\CC\SAIL\Merging\CISF' files = glob.glob(os.path.join(path, '*.csv')) combined_data = pd.DataFrame() for file in files : data = pd.read_csv(file) print(data) combined_data = pd.concat([combined_data,data],axis=0,ignore_index=True) combined_data.to_csv(r'C:\Users\avira\Desktop\CC\SAIL\Merging\CISF\data2.csv') </code></pre> <p>The files are merging diagonally,ie-next to the last cell of the first file, is the beginning of second file. ALSO, it is taking the first entry of file as column names. All of my files are without column names. How do I vertically merge my files,and provide coluumn names to the merged csv.</p>
<p>For the header problem while reading the csv, you can do this:</p> <pre><code>pd.read_csv(file, header=None)
</code></pre> <p>While dumping the result you can pass a list containing the header names:</p> <pre><code>df.to_csv(file_name,header=['col1','col2'])
</code></pre>
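<p>Putting the two suggestions together with the loop from the question (the two column names passed at the end are placeholders; use as many names as your files have columns):</p>
<pre><code>combined_data = pd.DataFrame()
for file in files:
    data = pd.read_csv(file, header=None)          # no header row in the source files
    combined_data = pd.concat([combined_data, data], axis=0, ignore_index=True)

combined_data.to_csv(r'C:\Users\avira\Desktop\CC\SAIL\Merging\CISF\data2.csv',
                     index=False, header=['col1', 'col2'])
</code></pre>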
python|pandas|csv|merge|directory
1
3,137
72,688,258
Pinv not inverting my complex matrix entirely correct
<p>My code is quite extensive, so I'm not sure how to share it in a readable way, but my main question concerns the pinv function in numpy.linalg. I am inverting a non-square complex matrix. After inverting, the absolute values are correct, but either the real or the imaginary part (always one of them, never both) comes out negative when it should be positive, and vice versa. I thought multiplying by -1 would resolve this, but as mentioned it's never both signs that are wrong. Does anyone have any idea why the pinv function would do this?</p>
<p>I wrote code for this without using <code>np.linalg.pinv</code>, and it worked fine.</p> <p>Here is my code (X and Y are my matrices):</p> <pre><code>Xt = np.transpose(X)
X1 = np.matmul(Xt,X)
X2 = np.matmul(X,Xt)

try:
    Xinv = np.linalg.inv(X1)
    W = np.matmul(Xinv,Xt)
    print("1")
except:
    Xinv = np.linalg.inv(X2)
    W = np.matmul(Xt,Xinv)
    print("2")

#W = np.linalg.pinv(X,rcond=1e-5)
W = np.matmul(W,Y)
</code></pre>
python|numpy|matrix-multiplication|complex-numbers
1
3,138
59,827,696
pandas schema validation with specific columns
<p>I have a pandas dataframe with almost 56 columns and 120000 row.</p> <p>I would like to implement validation only on some columns and not for all of them.</p> <p>I followed article at <a href="https://tmiguelt.github.io/PandasSchema/" rel="nofollow noreferrer">https://tmiguelt.github.io/PandasSchema/</a></p> <p>When i did like something below function, it throws an error as </p> <p>"Invalid number of columns. The schema specifies 2, but the data frame has 56"</p> <pre><code>def DoValidation(self, df): null_validation = [CustomElementValidation(lambda d: d is not np.nan, 'this field cannot be null')] schema = pandas_schema.Schema([Column('ItemId', null_validation)], [Column('ItemName', null_validation)]) errors = schema.validate(df) if (len(errors) &gt; 0): for error in errors: print(error) return False return True </code></pre> <p>Am i doing something wrong ?</p> <p>What is the correct way to validate specific column in a dataframe ?</p> <p>Note: I have to implement different type of validations like decimal, length, null check validations etc on different columns and not just null check validation as show in function above.</p>
<p>As Yuki Ho mentioned in his answer, by default you have to specify as many columns in the schema as the dataframe has.</p> <p>But you can also use the <code>columns</code> parameter of <code>schema.validate()</code> to specify which columns to check. Combining that with <code>schema.get_column_names()</code>, you can do the following to easily avoid your issue:</p> <pre><code>schema.validate(df, columns=schema.get_column_names())
</code></pre>
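<p>For the other checks mentioned in the question (decimal, length, and so on) you can stay with the same <code>CustomElementValidation</code> pattern and just vary the callable. The column names below are the ones from the question; the concrete rules are made-up examples:</p>
<pre><code>from decimal import Decimal, InvalidOperation

def is_decimal(value):
    try:
        Decimal(str(value))
        return True
    except InvalidOperation:
        return False

decimal_validation = [CustomElementValidation(lambda d: is_decimal(d), 'is not a valid decimal')]
length_validation = [CustomElementValidation(lambda s: len(str(s)) &lt;= 10, 'is longer than 10 characters')]

schema = pandas_schema.Schema([Column('ItemId', decimal_validation + null_validation),
                               Column('ItemName', length_validation + null_validation)])
errors = schema.validate(df, columns=schema.get_column_names())
</code></pre>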
pandas|validation|schema
4
3,139
59,695,656
Find out the difference in two dataframe with same column pandas
<p>I have three dataframe as shown below</p> <p>df1:</p> <pre><code>Unit_ID Price 1 10 2 20 3 10 </code></pre> <p>after one day df1 is updated as df2 as shown below.</p> <p>df2:</p> <pre><code>Unit_ID Price 1 10 2 20 3 10 4 15 5 20 </code></pre> <p>after one day from that day df2 updated as df3 as shown below.I would like to find out the new unit in the current dataframe as shown below.</p> <p>df3:</p> <pre><code>Unit_ID Price 1 10 2 20 3 10 4 15 5 20 6 80 </code></pre> <p>I would like to write a function to return new unit with its dataframe in pandas. I would like to find out the new unit in the current dataframe as shown below.</p> <p>For example in first update it should below data frame</p> <p>df:</p> <pre><code>Unit_ID Price 4 15 5 20 </code></pre> <p>In Next update it should return below dataframe</p> <p>df:</p> <pre><code>Unit_ID Price 6 80 </code></pre> <p>steps 1. Make sure that in each dataframe Unit_ID is unique. 2. Find out the new Unit_ID in current table.</p>
<p>For each day it is necessary to copy the <code>DataFrame</code> to a new one:</p> <pre><code>df1 = df.copy()
</code></pre> <p>and after adding new rows you can test membership with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a>, inverting the mask with <code>~</code>:</p> <pre><code>df_added = df[~df['Unit_ID'].isin(df1['Unit_ID'])]
</code></pre> <p>Verifying the solution:</p> <pre><code>print (df2)
   Unit_ID  Price
0        1     10
1        2     20
2        3     10
3        4     15
4        5     20

print (df3)
   Unit_ID  Price
0        1     10
1        2     20
2        3     10
3        4     15
4        5     20
5        6     80

df_added = df3[~df3['Unit_ID'].isin(df2['Unit_ID'])]
print (df_added)
   Unit_ID  Price
5        6     80
</code></pre>
pandas|pandas-groupby
1
3,140
59,662,725
Find Consecutive Repeats of Specific Length in NumPy
<p>Say that I have a NumPy array:</p> <pre><code>a = np.array([0, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 9, 9, 9, 10, 11, 12, 13, 13, 13, 14, 15]) </code></pre> <p>And I have a length <code>m = 2</code> that the user specifies in order to see if there are any repeats of that length within the time series. In this case, the repeats of length <code>m = 2</code> are:</p> <pre><code>[2, 2] [5, 5] [9, 9] [9, 9] [13, 13] </code></pre> <p>And the user can change this to <code>m = 3</code> and the repeats of length <code>m = 3</code> are:</p> <pre><code>[9, 9, 9] [13, 13, 13] </code></pre> <p>I need a function that either returns the index of where a repeat is found or <code>None</code>. So, for <code>m = 3</code> the function would return the following NumPy array of starting indices:</p> <pre><code>[11, 17] </code></pre> <p>And for <code>m = 4</code> the function would return <code>None</code>. What's the cleanest and fastest way to accomplish this?</p> <p><strong>Update</strong> Note that the array does not have to be sorted and we are <strong><em>not</em></strong> interested in the result after a sort. We only want the result from the unsorted array. Your result for <code>m = 2</code> should be the same for this array:</p> <pre><code>b = np.array([0, 11, 2, 2, 3, 40, 5, 5, 16, 7, 80, 9, 9, 9, 1, 11, 12, 13, 13, 13, 4, 5]) </code></pre>
<p><strong>Approach #1</strong></p> <p>We could leverage <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow noreferrer"><code>1D convolution</code></a> for a vectorized solution -</p> <pre><code>def consec_repeat_starts(a, n): N = n-1 m = a[:-1]==a[1:] return np.flatnonzero(np.convolve(m,np.ones(N, dtype=int))==N)-N+1 </code></pre> <p>Sample runs -</p> <pre><code>In [286]: a Out[286]: array([ 0, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 9, 9, 9, 10, 11, 12, 13, 13, 13, 14, 15]) In [287]: consec_repeat_starts(a, 2) Out[287]: array([ 2, 6, 11, 12, 17, 18]) In [288]: consec_repeat_starts(a, 3) Out[288]: array([11, 17]) In [289]: consec_repeat_starts(a, 4) Out[289]: array([], dtype=int64) </code></pre> <p><strong>Approach #2</strong></p> <p>We could also make use of <code>binary-erosion</code> -</p> <pre><code>from scipy.ndimage.morphology import binary_erosion def consec_repeat_starts_v2(a, n): N = n-1 m = a[:-1]==a[1:] return np.flatnonzero(binary_erosion(m,[1]*N))-(N//2) </code></pre>
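<p>If you specifically want <code>None</code> back when there is no run of that length (as described in the question), a thin wrapper around either approach works:</p>
<pre><code>def find_repeats(a, n):
    idx = consec_repeat_starts(a, n)
    return idx if idx.size else None
</code></pre>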
python|numpy
2
3,141
32,244,019
How to rotate x-axis tick labels in a pandas plot
<p>With the following code:</p> <pre><code>import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':["foo","bar","qux","woz"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[["celltype","s1","s2"]] df.set_index(["celltype"],inplace=True) df.plot(kind='bar',alpha=0.75) plt.xlabel("") </code></pre> <p>I made this plot:</p> <p><a href="https://i.stack.imgur.com/6pLLq.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/6pLLq.jpg" alt="enter image description here"></a></p> <p>How can I rotate the x-axis tick labels to 0 degrees?</p> <p>I tried adding this but did not work:</p> <pre><code>plt.set_xticklabels(df.index,rotation=90) </code></pre>
<p>Pass param <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="noreferrer"><code>rot=0</code></a> to rotate the xticklabels:</p> <pre><code>import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[&quot;foo&quot;,&quot;bar&quot;,&quot;qux&quot;,&quot;woz&quot;], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[&quot;celltype&quot;,&quot;s1&quot;,&quot;s2&quot;]] df.set_index([&quot;celltype&quot;],inplace=True) df.plot(kind='bar',alpha=0.75, rot=0) plt.xlabel(&quot;&quot;) plt.show() </code></pre> <p>yields plot:</p> <p><a href="https://i.stack.imgur.com/JPdxv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JPdxv.png" alt="enter image description here" /></a></p>
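<p>If you would rather rotate the labels after the plot has been created (for example when you cannot pass <code>rot</code> to the plotting call), this sketch does the same thing:</p>
<pre><code>ax = df.plot(kind='bar', alpha=0.75)
plt.xticks(rotation=0)
plt.show()
</code></pre>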
python|pandas|matplotlib
264
3,142
40,416,056
How to download previous version of tensorflow?
<p>For some reason, I want to use some previous version of tensorflow('tensorflow-**-.whl', not source code on github) and where can I download the previous version and how can I know the corresponding <code>cuda version</code> that is compatible.</p>
<p>It works for me, since I have 1.6</p> <pre><code>pip install tensorflow==1.5 </code></pre>
tensorflow
55
3,143
61,755,445
How do I iterate a function on two sides in python?
<p>I have a list called 'y' that is composed of the lowest chi squared values in a data table. So my list of y looks something like </p> <blockquote> <p>y = [0.014, 0.048, 3.53, 3.61, 9.08, 12.93, 13.15, 25.03, 26.55, 27.14]</p> </blockquote> <p>I also have a list called "chi2".</p> <p>In this list, I look for the exact location of where chi2 is equal to a specific value in the y[i] list. I do this using</p> <pre><code>index_min1 = np.where(chi2 == y[0]) index_min2 = np.where(chi2 == y[1]) index_min3 = np.where(chi2 == y[2]) index_min4 = np.where(chi2 == y[3]) index_min5 = np.where(chi2 == y[4]) index_min6 = np.where(chi2 == y[5]) index_min7 = np.where(chi2 == y[6]) index_min8 = np.where(chi2 == y[7]) index_min9 = np.where(chi2 == y[8]) index_min10 = np.where(chi2 == y[9]) </code></pre> <p>I'm fairly new to python, and I was wondering if there was a better way I could iterate each side instead of manually typing out each line. </p> <p>My thought process was something like </p> <pre><code>import numpy as np import math from heapq import nsmallest from numpy import arange for i in arange(0,9,1): index_min+(i+1) = np.where(chi2 == y[i]) </code></pre> <p>This is probably very wrong and I was wondering if there was a better way to do this than manual. </p>
<p>You need the left-hand side to be some kind of data structure supporting assignment to its elements, as many elements as you have in <code>y</code>.</p> <p>For example, adapting your idea with a <code>list</code>:</p> <pre><code>indices = [] for i in arange(0, 9, 1): indices[i] = np.where(chi2 == y[i]) </code></pre> <p>You can further simplify this with a 'list comprehension':</p> <pre><code>indices = [np.where(chi2 == y[i]) for i in arange(0, 9, 1)] </code></pre> <p>And finally, you didn't actually need <code>arange</code> since you can just iterate <code>y</code>:</p> <pre><code>indices = [np.where(chi2 == y_el) for y_el in y] </code></pre> <p>If you're more familiar with functional languages, an equivalent (still valid python) form is:</p> <pre><code>indices = list(map(lambda e: np.where(chi2 == e), y)) </code></pre> <p>(Where the outer <code>list()</code> is only needed if you actually need it to be a list.)</p>
python|numpy
2
3,144
61,707,434
How do I rank rank values from a very large csv excel file when I only need a few data points from the file?
<p>I uploaded an incredibly large Excel file as such </p> <pre><code>import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv("C:\\Users\\willi\\Downloads\\Formatted Corona Virus Data.csv") index = ['ARG', 'BOL', 'CHL', 'COL', 'CRI', 'CUB', 'ECU', 'PAN', 'PER', 'PRY'] </code></pre> <p>and it looks like the image below obviously it continues on to more countries</p> <p>how do I create a list to compare and rank these 10 countries from the index for total cases from the most recent date from most to least? </p> <p>*edit some of them may have different most recent dates</p> <p><a href="https://i.stack.imgur.com/RejCJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RejCJ.png" alt="enter image description here"></a></p>
<p>How about</p> <p><code>df[df.iso_code.isin(index)].groupby(['index','date']).total_cases.rank()</code>?</p>
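<p>If you also need the most-recent-date-per-country step made explicit, a fuller sketch (assuming the file has <code>iso_code</code>, <code>date</code> and <code>total_cases</code> columns, as the snippet above implies) could be:</p>
<pre><code>subset = df[df.iso_code.isin(index)].sort_values('date')
latest = subset.groupby('iso_code')['total_cases'].last()   # most recent value per country
ranking = latest.sort_values(ascending=False)               # most to least
</code></pre>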
python|excel|pandas|csv|matplotlib
0
3,145
61,738,530
Indexing with Boolean arrays
<pre><code>a = np.arange(12).reshape(3,4)
b1 = np.array([False,True,True])
b2 = np.array([True,False,True,False])
a[b1,b2]
</code></pre> <p>output: </p> <pre><code>array([4,10])
</code></pre>
<p>Apparently you expected to see <code>array([[ 4, 6],[ 8, 10]])</code>.</p> <p>In boolean indexing NumPy returns only diagonal elements as described <a href="https://numpy.org/devdocs/reference/arrays.indexing.html" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>without the <code>np.ix_</code> call, only the diagonal elements would be selected(...). This difference is the most important thing to remember about indexing with multiple advanced indexes.</p> </blockquote> <p>For the desired output use <code>np.ix_()</code>:</p> <pre><code>a[np.ix_(b1,b2)] </code></pre>
python|numpy|numpy-ndarray|numpy-slicing
3
3,146
61,929,275
AttributeError: module 'tensorflow' has no attribute 'keras' in conda prompt
<p>*I try to install tensorflow and keras</p> <p>I installed tensorflow and I imported it with no errors</p> <p>Keras is installed but I can't import it *</p> <pre><code>(base) C:\Windows\system32&gt;pip uninstall keras Found existing installation: Keras 2.3.1 Uninstalling Keras-2.3.1: Would remove: c:\users\asus\anaconda3\anaconda\lib\site-packages\docs\* c:\users\asus\anaconda3\anaconda\lib\site-packages\keras-2.3.1.dist-info\* c:\users\asus\anaconda3\anaconda\lib\site-packages\keras\* Proceed (y/n)? y Successfully uninstalled Keras-2.3.1 (base) C:\Windows\system32&gt;pip install keras Collecting keras Using cached Keras-2.3.1-py2.py3-none-any.whl (377 kB) Requirement already satisfied: six&gt;=1.9.0 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.14.0) Requirement already satisfied: numpy&gt;=1.9.1 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.18.4) Requirement already satisfied: keras-applications&gt;=1.0.6 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.0.8) Requirement already satisfied: keras-preprocessing&gt;=1.0.5 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.1.2) Requirement already satisfied: scipy&gt;=0.14 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.4.1) Requirement already satisfied: pyyaml in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (5.3.1) Requirement already satisfied: h5py in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (2.10.0) Installing collected packages: keras Successfully installed keras-2.3.1 </code></pre> <p>then I try</p> <pre><code>(base) C:\Windows\system32&gt;python Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; from keras.models import sequential Using TensorFlow backend. 2020-05-21 10:03:38.204077: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-05-21 10:03:38.210602: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\__init__.py", line 3, in &lt;module&gt; from . import utils File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\utils\__init__.py", line 26, in &lt;module&gt; from .vis_utils import model_to_dot File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\utils\vis_utils.py", line 7, in &lt;module&gt; from ..models import Model File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\models.py", line 10, in &lt;module&gt; from .engine.input_layer import Input File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\__init__.py", line 8, in &lt;module&gt; from .training import Model File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\training.py", line 14, in &lt;module&gt; from . import training_utils File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\training_utils.py", line 17, in &lt;module&gt; from .. 
import metrics as metrics_module File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\metrics.py", line 1850, in &lt;module&gt; BaseMeanIoU = tf.keras.metrics.MeanIoU AttributeError: module 'tensorflow' has no attribute 'keras' &gt;&gt;&gt; from keras.models import Sequential Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\__init__.py", line 3, in &lt;module&gt; from . import utils File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\utils\__init__.py", line 26, in &lt;module&gt; from .vis_utils import model_to_dot File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\utils\vis_utils.py", line 7, in &lt;module&gt; from ..models import Model File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\models.py", line 12, in &lt;module&gt; from .engine.training import Model File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\__init__.py", line 8, in &lt;module&gt; from .training import Model File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\training.py", line 14, in &lt;module&gt; from . import training_utils File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\engine\training_utils.py", line 17, in &lt;module&gt; from .. import metrics as metrics_module File "C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\keras\metrics.py", line 1850, in &lt;module&gt; BaseMeanIoU = tf.keras.metrics.MeanIoU AttributeError: module 'tensorflow' has no attribute 'keras' </code></pre> <blockquote> <p>I try : pip install --upgrade --no-deps --force-reinstall tensorflow</p> </blockquote> <p>pip install --upgrade pip setuptools wheel</p> <p>pip uninstall protobuf</p> <p>pip install protobuf</p> <p>pip install termcolor</p> <p>**I have: keras 2.3.1 pypi_0 pypi</p> <p>keras-applications 1.0.8 pypi_0 pypi</p> <p>keras-preprocessing 1.1.2 pypi_0 pypi</p> <p>keyring 21.1.1 py36_2</p> <p>kivy 1.10.1.dev0 pypi_0 pypi</p> <p>kivy-deps-glew 0.2.0 pypi_0 pypi</p> <p>kivy-deps-gstreamer 0.2.0 pypi_0 pypi</p> <p>kivy-deps-sdl2 0.2.0 pypi_0 pypi</p> <p>kivy-garden 0.1.4 pypi_0 pypi</p> <p>kiwisolver 1.2.0 py36h74a9793_0</p> <p>lazy-object-proxy 1.4.3 py36he774522_0 **</p> <p><em>I have numpy, pytorch, pip, tensorflow, and most DL, ML, CV, Ai, DS libraries</em></p> <pre><code>(base) C:\Users\ASUS&gt;python Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow 2020-05-22 06:23:19.327748: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-05-22 06:23:19.343057: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. &gt;&gt;&gt; print(tensorflow) &lt;module 'tensorflow' from 'C:\\Users\\ASUS\\Anaconda3\\Anaconda\\lib\\site-packages\\tensorflow\\__init__.py'&gt; &gt;&gt;&gt; </code></pre> <p>When I run tensorflow</p> <pre><code>(base) C:\Users\ASUS&gt;python Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
&gt;&gt;&gt; import tensorflow.compat.v1 as tf 2020-05-22 06:18:12.900849: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-05-22 06:18:12.914907: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. &gt;&gt;&gt; tf.disable_v2_behavior() WARNING:tensorflow:From C:\Users\ASUS\Anaconda3\Anaconda\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term &gt;&gt;&gt; hello = tf.constant('Hello, TensorFlow!') &gt;&gt;&gt; sess = tf.compat.v1.Session() 2020-05-22 06:20:28.305196: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2020-05-22 06:20:29.278356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1 coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s 2020-05-22 06:20:29.295408: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-05-22 06:20:29.305186: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_10.dll'; dlerror: cublas64_10.dll not found 2020-05-22 06:20:29.314235: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found 2020-05-22 06:20:29.323997: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found 2020-05-22 06:20:29.336231: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found 2020-05-22 06:20:29.349522: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusparse64_10.dll'; dlerror: cusparse64_10.dll not found 2020-05-22 06:20:29.362633: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found 2020-05-22 06:20:29.372823: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... 2020-05-22 06:20:29.392493: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2020-05-22 06:20:29.413844: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x18bf4dc2f60 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-05-22 06:20:29.424116: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-05-22 06:20:29.432117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-05-22 06:20:29.446877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] &gt;&gt;&gt; print(sess.run(hello)) b'Hello, TensorFlow!' </code></pre> <blockquote> <p><strong>After I Solve it</strong> Check my answer below</p> </blockquote> <pre><code>(base) C:\Users\ASUS&gt;python Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow as tf 2020-05-29 17:59:47.814562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll &gt;&gt;&gt; import keras Using TensorFlow backend. &gt;&gt;&gt; from keras.models import Sequential &gt;&gt;&gt; tf.test.is_built_with_cuda() True &gt;&gt;&gt; tf.config.list_physical_devices('GPU') 2020-05-29 18:02:56.764618: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2020-05-29 18:02:57.602680: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1 coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s 2020-05-29 18:02:57.607142: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2020-05-29 18:02:57.703213: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2020-05-29 18:02:57.766722: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll 2020-05-29 18:02:57.797062: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll 2020-05-29 18:02:57.872961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll 2020-05-29 18:02:57.920469: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll 2020-05-29 18:02:58.035860: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2020-05-29 18:02:58.148699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] &gt;&gt;&gt; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None) 2020-05-29 18:03:43.187834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1 coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s 2020-05-29 18:03:43.200093: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2020-05-29 18:03:43.205083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2020-05-29 18:03:43.214701: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll 2020-05-29 
18:03:43.217616: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll 2020-05-29 18:03:43.229945: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll 2020-05-29 18:03:43.238134: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll 2020-05-29 18:03:43.247848: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2020-05-29 18:03:43.256302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2020-05-29 18:03:43.264822: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-05-29 18:03:43.269994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2020-05-29 18:03:43.271951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2020-05-29 18:03:43.275672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/device:GPU:0 with 2993 MB memory) -&gt; physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1) True &gt;&gt;&gt; </code></pre>
<p>Thanks for all the answers, but I solved it myself. I followed this tutorial with CUDA 10.1: <a href="https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781" rel="nofollow noreferrer">https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781</a>. After the tutorial I uninstalled these libs and installed them again (<code>pip install keras</code>, <code>pip install --upgrade setuptools</code>, <code>pip install cmake</code>, <code>pip install keras-models</code>, <code>pip install keras-applications</code>, <code>pip install keras-preprocessing</code>) and downloaded Visual Studio 2015. Then I ran code from my question, such as <code>from keras.models import Sequential</code>, and checked the Python path.</p>
python|tensorflow|keras|deep-learning
0
3,147
57,865,531
custom function to merge two csv files based on common cloumn with different names
<pre><code>a,b,c 5,Ugh,wq 2,Kj,asd 3,Yu,Dx 4,Po,Cv d,e 3,8i 4,Y6 2,X09 5,m3 </code></pre> <p>Write a function that uses pandas create_result(“X.a|X.b|X.c|Y.e” , “X.a=Y.d”)</p> <p>This will create result.csv with columns from X and Y as passed as parameters as above, and column values are mapped according to the key between the 2 files, specified as 2nd parameter - X.a and Y.d</p> <p>result should be like this</p> <pre><code>a,b,c,f 5,Ugh,wq,m3 2,Kj,asd,X09 3,Yu,Dx,8i 4,Po,Cv,Y6 </code></pre> <p>i have tried a function like this</p> <pre><code>x=pd.read_csv("C:/Users/Venkata sai/Desktop/SQL_VENKATASAI_ASSIGNMENT/test/X.csv") y=pd.read_csv("C:/Users/Venkata sai/Desktop/SQL_VENKATASAI_ASSIGNMENT/test/Y.csv") print(x) print(y) def create_result(x,y): merged=pd.merge(x,y,on='x.a=y.d') print(merged) merged.to_csv("resultstable.csv",index=false) </code></pre> <p>i am not getting the desired output.</p>
<p>You can <code>rename</code> the column and merge both <code>DataFrame</code>s on <code>a</code>:</p> <pre><code>df = pd.merge(df1,df2.rename(columns={'d':'a'}), on='a')
print (df)

   a    b    c    e
0  5  Ugh   wq   m3
1  2   Kj  asd  X09
2  3   Yu   Dx   8i
3  4   Po   Cv   Y6
</code></pre>
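<p>Wrapped into the function the question asks for (the rename target and the output file name are taken from the question; the rest is a sketch):</p>
<pre><code>def create_result(x, y):
    merged = pd.merge(x, y.rename(columns={'d': 'a'}), on='a')
    merged.to_csv("resultstable.csv", index=False)
    return merged
</code></pre>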
python|pandas
0
3,148
57,982,158
ValueError: Unknown loss function:focal_loss_fixed when loading model with my custom loss function
<p>I designed my own loss function. However when trying to revert to the best model encountered during training with </p> <pre><code>model = load_model("lc_model.h5") </code></pre> <p>I got the following error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-105-9d09ef163b0a&gt; in &lt;module&gt; 23 24 # revert to the best model encountered during training ---&gt; 25 model = load_model("lc_model.h5") C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\saving.py in load_model(filepath, custom_objects, compile) 417 f = h5dict(filepath, 'r') 418 try: --&gt; 419 model = _deserialize_model(f, custom_objects, compile) 420 finally: 421 if opened_new_file: C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\saving.py in _deserialize_model(f, custom_objects, compile) 310 metrics=metrics, 311 loss_weights=loss_weights, --&gt; 312 sample_weight_mode=sample_weight_mode) 313 314 # Set optimizer weights. C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs) 137 loss_functions = [losses.get(l) for l in loss] 138 else: --&gt; 139 loss_function = losses.get(loss) 140 loss_functions = [loss_function for _ in range(len(self.outputs))] 141 self.loss_functions = loss_functions C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py in get(identifier) 131 if isinstance(identifier, six.string_types): 132 identifier = str(identifier) --&gt; 133 return deserialize(identifier) 134 if isinstance(identifier, dict): 135 return deserialize(identifier) C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py in deserialize(name, custom_objects) 112 module_objects=globals(), 113 custom_objects=custom_objects, --&gt; 114 printable_module_name='loss function') 115 116 C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 163 if fn is None: 164 raise ValueError('Unknown ' + printable_module_name + --&gt; 165 ':' + function_name) 166 return fn 167 else: ValueError: Unknown loss function:focal_loss_fixed </code></pre> <p>Here is the neural network :</p> <pre><code>from keras.callbacks import ModelCheckpoint from keras.models import load_model model = create_model(x_train.shape[1], y_train.shape[1]) epochs = 35 batch_sz = 64 print("Beginning model training with batch size {} and {} epochs".format(batch_sz, epochs)) checkpoint = ModelCheckpoint("lc_model.h5", monitor='val_acc', verbose=0, save_best_only=True, mode='auto', period=1) from keras.models import Sequential from keras.layers import Dense, Dropout from keras.constraints import maxnorm def create_model(input_dim, output_dim): print(output_dim) # create model model = Sequential() # input layer model.add(Dense(100, input_dim=input_dim, activation='relu', kernel_constraint=maxnorm(3))) model.add(Dropout(0.2)) # hidden layer model.add(Dense(60, activation='relu', kernel_constraint=maxnorm(3))) model.add(Dropout(0.2)) # output layer model.add(Dense(output_dim, activation='softmax')) # Compile model # model.compile(loss='categorical_crossentropy', loss_weights=None, optimizer='adam', metrics=['accuracy']) model.compile(loss=focal_loss(alpha=1), loss_weights=None, optimizer='adam', metrics=['accuracy']) return model # train the model history = model.fit(x_train.as_matrix(), y_train.as_matrix(), validation_split=0.2, 
epochs=epochs, batch_size=batch_sz, # Can I tweak the batch here to get evenly distributed data ? verbose=2, class_weight = weights, # class_weight tells the model to "pay more attention" to samples from an under-represented fraud class. callbacks=[checkpoint]) # revert to the best model encountered during training model = load_model("lc_model.h5") </code></pre> <p>And here is my loss function:</p> <pre><code>import tensorflow as tf def focal_loss(gamma=2., alpha=4.): gamma = float(gamma) alpha = float(alpha) def focal_loss_fixed(y_true, y_pred): """Focal loss for multi-classification FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t) Notice: y_pred is probability after softmax gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x) Focal Loss for Dense Object Detection https://arxiv.org/abs/1708.02002 Arguments: y_true {tensor} -- ground truth labels, shape of [batch_size, num_cls] y_pred {tensor} -- model's output, shape of [batch_size, num_cls] Keyword Arguments: gamma {float} -- (default: {2.0}) alpha {float} -- (default: {4.0}) Returns: [tensor] -- loss. """ epsilon = 1.e-9 y_true = tf.convert_to_tensor(y_true, tf.float32) y_pred = tf.convert_to_tensor(y_pred, tf.float32) model_out = tf.add(y_pred, epsilon) ce = tf.multiply(y_true, -tf.log(model_out)) weight = tf.multiply(y_true, tf.pow(tf.subtract(1., model_out), gamma)) fl = tf.multiply(alpha, tf.multiply(weight, ce)) reduced_fl = tf.reduce_max(fl, axis=1) return tf.reduce_mean(reduced_fl) return focal_loss_fixed # model.compile(loss=focal_loss(alpha=1), optimizer='nadam', metrics=['accuracy']) # model.fit(X_train, y_train, epochs=3, batch_size=1000) </code></pre>
<p>You have to load the <code>custom_objects</code> of focal_loss_fixed as shown below:</p> <pre><code>model = load_model("lc_model.h5", custom_objects={'focal_loss_fixed': focal_loss()}) </code></pre> <p>However, if you wish to just perform inference with your model and not further optimization or training your model, you can simply wish to ignore the loss function like this:</p> <pre><code>model = load_model("lc_model.h5", compile=False) </code></pre>
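<p>If you do want to resume training after loading (rather than only running inference), a hedged variant is to load without compiling and then re-compile with the custom loss yourself; note that the optimizer state saved in the file is not restored this way:</p> <pre><code>model = load_model("lc_model.h5", compile=False)
# re-attach the custom focal loss before any further fit() calls
model.compile(loss=focal_loss(alpha=1), optimizer='adam', metrics=['accuracy'])
</code></pre>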
python|python-3.x|tensorflow|keras|loss-function
38
3,149
34,168,200
Concatenating Numpy array to Numpy array of arrays
<p>I'm trying to make a for loop that each time adds an array, to the end of an array of arrays and I can't quite put my finger on how to. The general idea of the program:</p> <pre><code>for x in range(0,longnumber): generatenewarray add new array to end of array </code></pre> <p>So for example, the output of:</p> <pre><code>newArray = [1,2,3] array = [[1,2,3,4],[1,4,3]] </code></pre> <p>would be: <code>[[1,2,3,4],[1,4,3],[1,2,3]]</code></p> <p>If the wording is poor let me know and I can try and edit it to be better!</p>
<p>Is this what you need?</p> <pre><code>list_of_arrays = [] for x in range(0,longnumber): a = generatenewarray list_of_arrays.append(a) </code></pre>
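<p>As a quick sanity check (not part of the original answer), the same pattern applied to the example values from the question behaves as expected, because a plain Python list happily holds inner lists of different lengths:</p> <pre><code>array = [[1, 2, 3, 4], [1, 4, 3]]
newArray = [1, 2, 3]

array.append(newArray)   # append keeps each inner list as-is
print(array)             # [[1, 2, 3, 4], [1, 4, 3], [1, 2, 3]]
</code></pre> <p>If all rows had equal length, <code>numpy.vstack(list_of_arrays)</code> could turn the result into a 2-D array at the end; with ragged lengths, a list of lists (or of arrays) is the simplest container.</p>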
python|arrays|numpy
1
3,150
34,193,538
pandas Groupby after groupby
<pre><code>df = pd.DataFrame({'A': [1,2,3,1,2,3], 'B': [10,10,11,10,10,15], 'key1':['a','b','a','b','c','c'],'key2':1}) df1 = pd.DataFrame({'A': [1,2,3,1,2,3], 'B': [100,100,110,100,100,150], 'key1':['a','c','b','a','a','c'],'key2':1}) dfn = pd.merge(df,df1,on='key2') dfn_grouped = dfn.groupby('key1_y') the list(dfn_grouped): [('a', A_x B_x key1_x key2 A_y B_y key1_y 0 1 10 a 1 1 100 a 3 1 10 a 1 1 100 a ... ... ... ... 33 3 15 c 1 1 100 a 34 3 15 c 1 2 100 a), ('b', A_x B_x key1_x key2 A_y B_y key1_y 2 1 10 a 1 3 110 b 8 2 10 b 1 3 110 b 14 3 11 a 1 3 110 b 20 1 10 b 1 3 110 b 26 2 10 c 1 3 110 b 32 3 15 c 1 3 110 b), ('c', A_x B_x key1_x key2 A_y B_y key1_y 1 1 10 a 1 2 100 c ...... ... .... 35 3 15 c 1 3 150 c)] </code></pre> <p>now i need groupby the dfn_grouped by "key1_x" and concat to dict like A_x:A_y</p> <pre><code> key1_y key1_x A_X:A_Y b a {'10':'110','11':110} b b {'10':110} b c {'10':110,'15':110} // if A_x in dict append the A_y like: // b e {'10':[11,12]} </code></pre>
<p><strong>Is this what you need?:</strong></p> <pre><code>&gt;&gt; grouped = dfn.groupby(['key1_y','key1_x','A_x']) &gt;&gt; dfg = pd.DataFrame(grouped.apply(lambda x: [a for a in x.A_y])).reset_index() &gt;&gt; dfg.columns = [u'key1_y', u'key1_x', u'A_x', 'dic_values'] &gt;&gt; dfg['dic'] = [{a:b} for a,b in zip(dfg.A_x.values,dfg.dic_values.values)] &gt;&gt; dfg.drop(['A_x','dic_values'],1,inplace=True) &gt;&gt; g_dics = dfg.groupby(['key1_y','key1_x']).apply(lambda x: dict(sum(map(dict.items, [d for d in x.dic]), []))) &gt;&gt; pd.DataFrame(g_dics).reset_index() </code></pre>
pandas
1
3,151
54,802,328
Why am I getting different values between loss functions and metrics in TensorFlow Keras?
<p>In my CNN training using TensorFlow, I am using <code>Keras.losses.poisson</code> as a loss function. Now, I like to calculate many metrics alongside that loss function, and I am observing that <code>Keras.metrics.poisson</code> gives different results - although the two are the same function.</p> <p>See here for some example output: <code>loss</code> and <code>poisson</code> outputs have different ranges, 0.5 vs. 0.12:</p> <pre><code>Epoch 1/20 Epoch 00001: val_loss improved from inf to 0.53228, saving model to P:\Data\xyz.h5 - 8174s - loss: 0.5085 - binary_crossentropy: 0.1252 - poisson: 0.1271 - mean_squared_error: 1.2530e-04 - mean_absolute_error: 0.0035 - mean_absolute_percentage_error: 38671.1055 - val_loss: 0.5323 - val_binary_crossentropy: 0.1305 - val_poisson: 0.1331 - val_mean_squared_error: 5.8477e-05 - val_mean_absolute_error: 0.0035 - val_mean_absolute_percentage_error: 1617.8346 Epoch 2/20 Epoch 00002: val_loss improved from 0.53228 to 0.53218, saving model to P:\Data\xyz.h5 - 8042s - loss: 0.5067 - binary_crossentropy: 0.1246 - poisson: 0.1267 - mean_squared_error: 1.0892e-05 - mean_absolute_error: 0.0017 - mean_absolute_percentage_error: 410.8044 - val_loss: 0.5322 - val_binary_crossentropy: 0.1304 - val_poisson: 0.1330 - val_mean_squared_error: 4.9087e-05 - val_mean_absolute_error: 0.0035 - val_mean_absolute_percentage_error: 545.5222 Epoch 3/20 Epoch 00003: val_loss improved from 0.53218 to 0.53199, saving model to P:\Data\xyz.h5 - 8038s - loss: 0.5066 - binary_crossentropy: 0.1246 - poisson: 0.1266 - mean_squared_error: 6.6870e-06 - mean_absolute_error: 0.0013 - mean_absolute_percentage_error: 298.9844 - val_loss: 0.5320 - val_binary_crossentropy: 0.1304 - val_poisson: 0.1330 - val_mean_squared_error: 4.3858e-05 - val_mean_absolute_error: 0.0031 - val_mean_absolute_percentage_error: 452.3541 </code></pre> <p>I have found a similar questions while typing this one: <a href="https://stackoverflow.com/questions/48719540/keras-loss-and-metric-calculated-differently">Keras - Loss and Metric calculated differently?</a> However, I am not using regularization.</p> <p>In addition, I have come across this one, which at least helped me reproduce the issue: <a href="https://stackoverflow.com/questions/53808163/same-function-in-keras-loss-and-metric-give-different-values-even-without-regula">Same function in Keras Loss and Metric give different values even without regularization</a></p> <pre><code>from tensorflow import keras layer = keras.layers.Input(shape=(1, 1, 1)) model = keras.models.Model(inputs=layer, outputs=layer) model.compile(optimizer='adam', loss='poisson', metrics=['poisson']) data = [[[[[1]]], [[[2]]], [[[3]]]]] model.fit(x=data, y=data, batch_size=2, verbose=1) </code></pre> <p>What I have found then is that, basically, it's the dimensionality that triggers this issue. From the following extended example, you can see that</p> <ul> <li>the issue can be reproduced with many loss functions (the ones hat don't begin with <code>mean_</code>),</li> <li>the issue goes away when replacing <code>tensorflow.keras</code> with <code>keras</code>, and</li> <li><code>tensorflow.keras</code> <em>seems to scale the metrics by the batch size if the dimensionality of the data is larger than three</em>. 
At least that is my humble interpretation.</li> </ul> <p>The code:</p> <pre><code>import numpy as np from tensorflow import keras # import keras nSamples = 98765 nBatch = 2345 metric = 'poisson' # metric = 'squared_hinge' # metric = 'logcosh' # metric = 'cosine_proximity' # metric = 'binary_crossentropy' # example data: always the same samples np.random.seed(0) dataIn = np.random.rand(nSamples) dataOut = np.random.rand(nSamples) for dataDim in range(1, 10): # reshape samples into size (1,), ..., (1, 1, ...) according to dataDim dataIn = np.expand_dims(dataIn, axis=-1) dataOut = np.expand_dims(dataOut, axis=-1) # build a model that does absolutely nothing Layer = keras.layers.Input(shape=np.ones(dataDim)) model = keras.models.Model(inputs=Layer, outputs=Layer) # compile, fit and observe loss ratio model.compile(optimizer='adam', loss=metric, metrics=[metric]) history = model.fit(x=dataIn, y=dataOut, batch_size=nBatch, verbose=1) lossRatio = history.history['loss'][0] / history.history[metric][0] print(lossRatio) </code></pre> <p>I find this behavior inconsistent at least. Should I consider it a bug or a feature?</p> <p><strong>Update</strong>: After further investigation, I have found out that the metrics values seen to be computed correctly, while the loss values are not; in fact, the losses are weighted sums of the sample losses, where the weighting of each sample is the size of the batch that sample is in. This has two implications:</p> <ol> <li>If the batch size divides the number of samples, the weighing of all samples is identical and the losses are simply off by that factor equal to the batch size.</li> <li>If the batch size does not divide the number of sample, since batches are usually shuffled, the weighting, and thus the computed loss changes from one epoch to the next, despite nothing else having changed. This also applies to metrics such as the MSE.</li> </ol> <p>The following code proves these points:</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow import keras # metric = keras.metrics.poisson # metricName = 'poisson' metric = keras.metrics.mse metricName = 'mean_squared_error' nSamples = 3 nBatchSize = 2 dataIn = np.random.rand(nSamples, 1, 1, 1) dataOut = np.random.rand(nSamples, 1, 1, 1) tf.InteractiveSession() layer = keras.layers.Input(shape=(1, 1, 1)) model = keras.models.Model(inputs=layer, outputs=layer) model.compile(optimizer='adam', loss=metric, metrics=[metric]) h = model.fit(x=dataIn, y=dataOut, batch_size=nBatchSize, verbose=1, epochs=10) for (historyMetric, historyLoss) in zip(h.history[metricName], h.history['loss']): # the metric value is correct and can be reproduced in a number of ways kerasMetricOfData = metric(dataOut, dataIn).eval() averageMetric = np.mean(kerasMetricOfData) assert np.isclose(historyMetric, averageMetric), "..." flattenedMetric = metric(dataOut.flatten(), dataIn.flatten()).eval() assert np.isclose(historyMetric, flattenedMetric), "..." if metric == keras.metrics.poisson: numpyMetric = np.mean(dataIn - np.log(dataIn) * dataOut) assert np.isclose(historyMetric, numpyMetric), "..." # the loss value is incorrect by at least a scaling factor (~ batch size). 
# also varies *randomly* if the batch size does not divide the # of samples: if nSamples == 3: incorrectLoss = np.array([ np.mean(kerasMetricOfData.flatten() * [1, nBatchSize, nBatchSize]), np.mean(kerasMetricOfData.flatten() * [nBatchSize, 1, nBatchSize]), np.mean(kerasMetricOfData.flatten() * [nBatchSize, nBatchSize, 1]), ]) elif nSamples == 4: incorrectLoss = np.mean(kerasMetricOfData) * nBatchSize assert np.any(np.isclose(historyLoss, incorrectLoss)), "..." </code></pre> <p>It outputs:</p> <pre><code>Epoch 1/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0044 - mean_squared_error: 0.0022 3/3 [==============================] - 0s 5ms/sample - loss: 0.0099 - mean_squared_error: 0.0084 Epoch 2/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0238 - mean_squared_error: 0.0119 3/3 [==============================] - 0s 2ms/sample - loss: 0.0163 - mean_squared_error: 0.0084 Epoch 3/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0238 - mean_squared_error: 0.0119 3/3 [==============================] - 0s 2ms/sample - loss: 0.0163 - mean_squared_error: 0.0084 Epoch 4/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0238 - mean_squared_error: 0.0119 3/3 [==============================] - 0s 2ms/sample - loss: 0.0163 - mean_squared_error: 0.0084 Epoch 5/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0238 - mean_squared_error: 0.0119 3/3 [==============================] - 0s 2ms/sample - loss: 0.0163 - mean_squared_error: 0.0084 Epoch 6/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0222 - mean_squared_error: 0.0111 3/3 [==============================] - 0s 2ms/sample - loss: 0.0158 - mean_squared_error: 0.0084 Epoch 7/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0222 - mean_squared_error: 0.0111 3/3 [==============================] - 0s 2ms/sample - loss: 0.0158 - mean_squared_error: 0.0084 Epoch 8/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0238 - mean_squared_error: 0.0119 3/3 [==============================] - 0s 2ms/sample - loss: 0.0163 - mean_squared_error: 0.0084 Epoch 9/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0222 - mean_squared_error: 0.0111 3/3 [==============================] - 0s 2ms/sample - loss: 0.0158 - mean_squared_error: 0.0084 Epoch 10/10 2/3 [===================&gt;..........] - ETA: 0s - loss: 0.0044 - mean_squared_error: 0.0022 3/3 [==============================] - 0s 2ms/sample - loss: 0.0099 - mean_squared_error: 0.0084 </code></pre> <p><strong>Update</strong>: Finally, there seems to be a difference between using <code>keras.metrics.mse</code> and <code>'mse'</code>, as this example shows:</p> <pre><code>import numpy as np from tensorflow import keras # these three reproduce the issue: # metric = keras.metrics.poisson # metric = 'poisson' # metric = keras.metrics.mse # this one does not: metric = 'mse' nSamples = 3 nBatchSize = 2 dataIn = np.random.rand(nSamples, 1, 1, 1) dataOut = np.random.rand(nSamples, 1, 1, 1) layer = keras.layers.Input(shape=(1, 1, 1)) model = keras.models.Model(inputs=layer, outputs=layer) model.compile(optimizer='adam', loss=metric, metrics=[metric]) model.fit(x=dataIn, y=dataOut, batch_size=2, verbose=1, epochs=10) </code></pre> <p>I begin to believe that this must be a bug and <a href="https://github.com/tensorflow/tensorflow/issues/25970" rel="noreferrer">reported it here</a>.</p>
<p>This has been confirmed as a bug and fixed. For more information, see <a href="https://github.com/tensorflow/tensorflow/issues/25970" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/25970</a>.</p>
tensorflow|keras|loss-function
2
3,152
55,019,437
Getting the first 2 digits of a 5-digit value and entering them in a new column in pandas
<p>I've been looking for hours for a solution to this problem: I would like to sort a column consisting of 5-digit integers. Then I want to use the first 2 digits of each value to make a grouping, and then I want to count those groupings.</p> <p>Is there a simple way to do that? I use this for counting:</p> <pre><code>print(worksheet['postalcolumn'].value_counts()) </code></pre> <p>The postalcolumn looks like <strong>74</strong>660, <strong>74</strong>5667, <strong>78</strong>320, <strong>71</strong>345 and I want a new column like 74, 74, 78, 71</p>
<p>Convert the dtype of the column to string and use a <code>str</code> slicer. You can use:</p> <pre><code>worksheet['new_col']=worksheet['postalcolumn'].astype(str).str[:2].astype(int) </code></pre>
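<p>Since the question also asks for a count of the groupings, a minimal follow-up sketch (assuming the new column is named <code>new_col</code> as created above) could be:</p> <pre><code># frequency of each two-digit prefix
counts = worksheet['new_col'].value_counts()
print(counts)
</code></pre>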
python-3.x|pandas|slice|pandas-groupby
1
3,153
55,003,543
Python: Converting seconds to a datetime format in a dataframe column
<p>Currently I am working with a big dataframe (12x47800). One of the twelve columns is a column consisting of an integer number of seconds. I want to change this column to a column consisting of a datetime.time format. Schedule is my dataframe where I try changing the column named 'depTime'. Since I want it to be a datetime.time and it could cross midnight, I added the if-statement. This 'works' but is really slow, as one could imagine. Is there a faster way to do this? My current code, the only one I could get working, is:</p> <pre><code>for i in range(len(schedule)):
    t_sec = schedule.iloc[i].depTime
    [t_min, t_sec] = divmod(t_sec,60)
    [t_hour,t_min] = divmod(t_min,60)
    if t_hour&gt;23:
        t_hour -= 23
    schedule['depTime'].iloc[i] = dt.time(int(t_hour),int(t_min),int(t_sec))
</code></pre> <p>Thanks in advance guys.</p> <p>Ps: I'm pretty new to Python, so if anybody could help me I would be very grateful :)</p>
<p>I'm adding a new solution which is much faster than the original since it relies on pandas vectorized functions instead of looping (pandas apply functions are essentially optimized loops on the data). </p> <p>I tested it with a sample similar in size to yours and the difference is from 778ms to 21.3ms. So I definitely recommend the new version.</p> <p>Both solutions are based on transforming your seconds integers into timedelta format and adding it to a reference datetime. Then, I simply capture the time component of the resulting datetimes.</p> <p>New (Faster) Option:</p> <pre><code>import datetime as dt seconds = pd.Series(np.random.rand(50)*100).astype(int) # Generating test data start = dt.datetime(2019,1,1,0,0) # You need a reference point datetime_series = seconds.astype('timedelta64[s]') + start time_series = datetime_series.dt.time time_series </code></pre> <p>Original (slower) Answer:</p> <p>Not the most elegant solution, but it does the trick.</p> <pre><code>import datetime as dt seconds = pd.Series(np.random.rand(50)*100).astype(int) # Generating test data start = dt.datetime(2019,1,1,0,0) # You need a reference point time_series = seconds.apply(lambda x: start + pd.Timedelta(seconds=x)).dt.time </code></pre>
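<p>Applied to the question's own dataframe, the vectorized idea would look roughly like this (assuming <code>schedule['depTime']</code> holds whole seconds as integers; the reference date is arbitrary because only the time-of-day component is kept, so departures past midnight wrap around automatically):</p> <pre><code>import datetime as dt

start = dt.datetime(2019, 1, 1, 0, 0)
# add the seconds as a timedelta, then discard the date part
schedule['depTime'] = (schedule['depTime'].astype('timedelta64[s]') + start).dt.time
</code></pre>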
python|pandas|datetime|seconds
6
3,154
54,808,848
Pandas to_sql - Increase table's index when appending DataFrame
<p>I've been working to develop a product which centers in the daily execution of a data analysis Python 3.7.0 script. Everyday at midnight it will proccess a huge amount of data, and then export the result to two MySQL tables. The first one will only contain the data relative to the current day, while the other table will contain the concatenated data of all executions.</p> <p>To exemplify what I current have, see the code below, supposing <code>df</code> would be the final DataFrame generated from the data analysis:</p> <pre><code>import pandas as pd import sqlalchemy engine = sqlalchemy.create_engine(r"mysql+pymysql://user:psswd@localhost/pathToMyDB") df = pd.DataFrame({'Something':['a','b','c']}) df.to_sql('DReg', engine, index = True, if_exists='replace') #daily database df.to_sql('AReg', engine, index = False, if_exists='append') #anual database </code></pre> <p>As you can see in the parameters of my second <code>to_sql</code> function, I ain't setting an index to the anual database. However, my manager asked me to do so, creating an index that would center around a simple rule: it would be an auto increasing numeric index, that would automatically attribute a number to every row saved on the database corresponding to its position. </p> <p>So basically, the first time I saved <code>df</code>, the database should look like:</p> <pre><code>index Something 0 a 1 b 2 c </code></pre> <p>And in my second execution:</p> <pre><code>index Something 0 a 1 b 2 c 3 a 4 b 5 c </code></pre> <p>However, when I set my index to <code>True</code> in the second <code>df.to_sql</code> command (turning it into <code>df.to_sql('AReg', engine, index = True, if_exists='append')</code>), after two executions my database ends up looking like:</p> <pre><code>index Something 0 a 1 b 2 c 0 a 1 b 2 c </code></pre> <p>I did some research, but could not find a way to allow this auto increase on the index. I considered reading the anual database at every execution and then adapting my dataframe's index to it, but my database can easily get REALLY huge, which would make it's execution absurdly slow (and also forbid me to simultaneously execute the same data analysis in two computers without compromising my index).</p> <p>So what is the best solution to make this index work? What am I missing here?</p>
<p>Even though Pandas has a lot of export options, it is not intended to be used as a database management API. Managing indexes is typically something the database itself should take care of. </p> <p>I would suggest setting <code>index=False, if_exists='append'</code> and creating the table with an auto-increment index:</p> <pre class="lang-sql prettyprint-override"><code>CREATE TABLE AReg (
  id INT NOT NULL AUTO_INCREMENT,
  # your fields here
  PRIMARY KEY (id)
);
</code></pre>
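<p>On the pandas side, the matching call (using the table and engine names from the question) would then be:</p> <pre><code># the id column is omitted from the DataFrame; MySQL fills in the auto-increment value
df.to_sql('AReg', engine, index=False, if_exists='append')
</code></pre>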
python|mysql|pandas|dataframe|sqlalchemy
16
3,155
49,501,538
Custom weight initialization tensorflow tf.layers.dense
<p>I'm trying to set up a custom initializer for <code>tf.layers.dense</code> where I initialize <code>kernel_initializer</code> with a weight matrix I already have.</p> <pre><code>u_1 = tf.placeholder(tf.float32, [784, 784])
first_layer_u = tf.layers.dense(X_, n_params,
                                activation=None,
                                kernel_initializer=u_1,
                                bias_initializer=tf.keras.initializers.he_normal())
</code></pre> <p>This is throwing an error saying <code>ValueError: If initializer is a constant, do not specify shape.</code></p> <p>Is it a problem to assign a placeholder to <code>kernel_initializer</code>, or am I missing something?</p>
<p>There are at least two ways to achieve this:</p> <p>1 Create your own layer</p> <pre><code> W1 = tf.Variable(YOUR_WEIGHT_MATRIX, name='Weights') b1 = tf.Variable(tf.zeros([YOUR_LAYER_SIZE]), name='Biases') #or pass your own h1 = tf.add(tf.matmul(X, W1), b1) </code></pre> <p>2 Use the <code>tf.constant_initializer</code></p> <pre><code>init = tf.constant_initializer(YOUR_WEIGHT_MATRIX) l1 = tf.layers.dense(X, o, kernel_initializer=init) </code></pre>
python|python-3.x|tensorflow|deep-learning
16
3,156
49,690,794
ValueError: invalid literal for int() with base 10: '10025.0'
<pre><code>COD_CUST 10025.0 10761.0 10869.0 12361.0 </code></pre> <p>trying to convert the above column into integer as below:</p> <pre><code>mser_offus['COD_CUST']=mser_offus['COD_CUST'].astype(int) </code></pre> <p>but getting the <strong><em>following error:</em></strong></p> <blockquote> <p>ValueError: invalid literal for int() with base 10: '10025.0'</p> </blockquote>
<p>You may use <code>int(float(...))</code>, since the string <code>'10025.0'</code> first has to be parsed as a float before it can be converted to an integer, e.g. <code>print(int(float(COD_CUST)))</code>.</p>
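<p>Applied to the whole pandas column, the same idea can be vectorised. A minimal sketch, assuming every value in <code>COD_CUST</code> is a numeric string like <code>'10025.0'</code>:</p> <pre><code># parse as float first, then truncate to int
mser_offus['COD_CUST'] = mser_offus['COD_CUST'].astype(float).astype(int)
</code></pre>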
python|pandas
0
3,157
27,954,343
How to use `str.replace()` method on all columns in a scraped Pandas dataframe?
<p>I'm a Python/Pandas beginner in data analysis. I am trying to import(/scrape) a table from a Wikipedia article on letter frequency, clean it, and turn it into a data frame. </p> <p>Here's the code I used to turn the table into a dataframe called <code>letter_freq_all</code>:</p> <pre><code>import pandas as pd import numpy as np letter_freq_all = pd.read_html('http://en.wikipedia.org/wiki/Letter_frequency', header=0)[4] letter_freq_all </code></pre> <p>I want to clean the data and properly format it for data analysis:</p> <ul> <li>I want to remove the square brackets with numbers from the column names and make sure there is no whitespace padding on either side</li> <li>I also want to remove the percent signs and any asterisks from each column so I can convert each of the columns to a float type.</li> <li>So far, I have unsuccessfully attempted to remove the % signs from all of the columns. </li> </ul> <p>This is the code I tried:</p> <pre><code>letter_freq_all2 = [str.replace(i,'%','') for i in letter_freq_all] </code></pre> <p>Instead of getting a new dataframe that does not have any % signs, I just got a list of all the columns in letter_freq_all: </p> <pre><code>['Letter','French [14]','German [15]','Spanish [16]','Portuguese [17]','Esperanto [18]','Italian[19]','Turkish[20]','Swedish[21]','Polish[22]','Dutch [23]','Danish[24]','Icelandic[25]','Finnish[26]','Czech'] </code></pre> <p>Then I tried getting rid of the % sign in just one column:</p> <pre><code>letter_freq_all3 = [str.replace(i,'%','') for i in letter_freq_all['Italian[19]']]** </code></pre> <p>When I did this, the <code>str.replace</code> method sort of worked - I got a list which did not have any <code>%</code> signs (I was expecting to get a series). </p> <p><strong>So, how can I get rid of the <code>%</code> sign in all of the columns in my dataframe <code>letter_freq_all</code>? Also, how can I get rid of all the brackets and extra white space padding from all of the columns? I'm guessing I might have to use the <code>.split()</code> method</strong> </p>
<p>The most succinct way to accomplish your goal is to use the str.replace() method with regular expressions:</p> <p>1) Rename columns:</p> <pre><code>letter_freq_all.columns = pd.Series(letter_freq_all.columns).str.replace('\[\d+\]', '').str.strip() </code></pre> <p>2) Replace asterisks and percent signs and convert to decimal fraction:</p> <pre><code>letter_freq_all.apply(lambda x: x.str.replace('[%*]', '').astype(float)/100, axis=1) </code></pre> <p>In this case, apply() with axis=1 runs the str.replace() method on each row, which is passed to the lambda as a Series. </p> <p>Learn more about regex metacharacters here:</p> <p><a href="https://www.hscripts.com/tutorials/regular-expression/metacharacter-list.php" rel="nofollow">https://www.hscripts.com/tutorials/regular-expression/metacharacter-list.php</a></p>
python|string|pandas|dataframe|web-scraping
3
3,158
73,222,905
Pandas : Count the number of occurrence of all matched patterns in a column
<p>Say I have a dataframe</p> <pre><code>df = pd.DataFrame({ 'column_1': ['ABC DEF', 'JKL', 'GHI ABC', 'ABC ABC', 'DEF GHI', 'DEF', 'DEF DEF', 'ABC GHI DEF ABC'], 'column_2': [9, 2, 3, 4, 6, 2, 7, 1 ] }) </code></pre> <pre><code>df column_1 column_2 0 ABC DEF 9 1 GHI ABC 3 2 ABC ABC 4 3 DEF GHI 6 4 DEF 2 5 DEF DEF 7 6 ABC GHI DEF ABC 1 </code></pre> <p>I want to count the number of times each of my regex pattern group is present in the column.</p> <p>For simplicity, say the pattern is the word <code>ABC and DEF</code> then I need the count of those in all the rows.</p> <p>Expected output :</p> <pre><code> column_1 column_2 Group1_count Group2_count 0 ABC DEF 9 1 1 1 JKL 9 0 0 2 GHI ABC 3 1 0 3 ABC ABC 4 2 0 4 DEF GHI 6 0 1 5 DEF 2 0 1 6 DEF DEF 7 0 2 7 ABC GHI DEF ABC 1 2 1 </code></pre> <p>This is what I tried, where I am unable to figure out how to move ahead to get the count value.</p> <pre><code>df['column_1'].str.extractall('(ABC)|(DEF)').groupby(level=0).first() 0 1 0 ABC DEF 2 ABC None 3 ABC None 4 None DEF 5 None DEF 6 None DEF 7 ABC DEF </code></pre> <p>A vector solution/one liner approach would be appreciated for this question. Also note that in this example I have <code>ABC</code> and <code>DEF</code> for simplicity but it could be a complex regex pattern as well.</p>
<p>You can get the desired result with your approach if you sum the <code>notna</code> instead of <code>first</code>, and then join back with the original <code>df</code></p> <pre class="lang-py prettyprint-override"><code> df.join(df['column_1'].str.extractall('(ABC)|(DEF)').notna().groupby(level=0).sum(), how='left').fillna(0) </code></pre> <p><strong>Output</strong></p> <pre><code> column_1 column_2 0 1 0 ABC DEF 9 1.0 1.0 1 JKL 2 0.0 0.0 2 GHI ABC 3 1.0 0.0 3 ABC ABC 4 2.0 0.0 4 DEF GHI 6 0.0 1.0 5 DEF 2 0.0 1.0 6 DEF DEF 7 0.0 2.0 7 ABC GHI DEF ABC 1 2.0 1.0 </code></pre>
python|pandas|regex|dataframe|group-by
3
3,159
35,124,435
Finding the maximum, minimum and nearest data in the Pandas dataframe
<p>I have a pandas data frame like below: </p> <pre><code>      A    B    C    D    E
2014  132  463  52   463  413
2015  31   71   237  71   149
2016  64   138  305  138  21
2017  33   338  338  338  177
2018  20   413  413  413  187
2019  237  149  149  149  214
2020  209  21   21   21   456
2021  4    177  177  71   52
2022  169  187  187  138  237
2023  400  214  214  338  214
2024  300  456  463  52   456
</code></pre> <p>I would like to find out which of these columns represents the maximum, the minimum and the nearest value relative to my average value (for example 100). Could you please suggest a way to handle this effectively?</p> <p>For the maximum and minimum, I have tried this: </p> <pre><code>x = df-100
rr=x.rank(axis=1)
rrr=rr.sum()
</code></pre> <p>Based on this I can find out which columns represent the maximum and minimum. Now, I would like to find out the nearest column. How can I do that? Please also suggest whether my approach for finding the maximum or minimum makes sense or not.</p>
<p>If you think minimal <code>sum</code> of absolute values, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.min.html" rel="nofollow"><code>min</code></a>:</p> <pre><code>print (df- 100).abs().sum() A 1195 B 1743 C 1710 D 1462 E 1730 dtype: int64 print (df- 100).abs().sum().min() 1195 print (df- 100).abs().sum().isin([(df- 100).abs().sum().min()]) A True B False C False D False E False dtype: bool print df.loc[:, (df- 100).abs().sum().isin([(df- 100).abs().sum().min()])] A 2014 132 2015 31 2016 64 2017 33 2018 20 2019 237 2020 209 2021 4 2022 169 2023 400 2024 300 </code></pre> <p>EDIT:</p> <p>You can get minimal, maximal and nearest value' s column by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmax.html" rel="nofollow"><code>idxmax</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmin.html" rel="nofollow"><code>idxmin</code></a> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>:</p> <pre><code>print (df-100).sum() A 499 B 1527 C 1456 D 1092 E 1476 dtype: int64 </code></pre> <pre><code>print (df-100).sum().idxmin() A print df.loc[:, (df-100).sum().idxmin()] 2014 132 2015 31 2016 64 2017 33 2018 20 2019 237 2020 209 2021 4 2022 169 2023 400 2024 300 Name: A, dtype: int64 print (df-100).sum().idxmax() B print df.loc[:, (df-100).sum().idxmax()] 2014 463 2015 71 2016 138 2017 338 2018 413 2019 149 2020 21 2021 177 2022 187 2023 214 2024 456 Name: B, dtype: int64 </code></pre> <pre><code>print (df-100).abs().sum().idxmin() A print df.loc[:, (df-100).abs().sum().idxmin()] 2014 132 2015 31 2016 64 2017 33 2018 20 2019 237 2020 209 2021 4 2022 169 2023 400 2024 300 Name: A, dtype: int64 </code></pre>
python-2.7|pandas
1
3,160
30,879,104
Will numpy.roots() ever return n different floats when a polynomial only has <n unique (exact) roots?
<p>I think the title says it all, but just to be specific, say I have some list of numbers named "coeffs". Assuming the polynomial with said coefficients has exactly k unique roots, will the following code ever set number_of_unique_roots to be a number greater than k?</p> <pre><code>import numpy as np number_of_unique_roots = len(set(np.roots(coeffs))) </code></pre>
<p>Yes.</p> <pre><code>&gt;&gt;&gt; len(set(numpy.roots([1, 6, 9]))) 2 &gt;&gt;&gt; numpy.roots([1, 6, 9]) array([-3. +3.72529030e-08j, -3. -3.72529030e-08j]) </code></pre>
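<p>If the goal is to count roots while tolerating this floating-point noise, one possible workaround (an illustrative sketch, not the only approach) is to compare roots within a tolerance rather than relying on exact set membership:</p> <pre><code>import numpy as np

roots = np.roots([1, 6, 9])   # exact answer: a double root at -3
distinct = []
for r in roots:
    # treat two computed roots as equal when they agree within numerical tolerance
    if not any(np.isclose(r, d) for d in distinct):
        distinct.append(r)
print(len(distinct))          # 1
</code></pre>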
numpy|scipy
2
3,161
30,973,503
AttributeError: 'numpy.ndarray' object has no attribute 'A'
<p>I am trying to perform tfidf on a matrix. I would like to use gensim, but <code>models.TfidfModel()</code> only works on a corpus and therefore returns a list of lists of varying lengths (I want a matrix).</p> <p>The options are to somehow fill in the missing values of the list of lists, or just convert the corpus to a matrix </p> <pre><code>numpy_matrix = gensim.matutils.corpus2dense(corpus, num_terms=number_of_corpus_features) </code></pre> <p>Choosing the latter, I then try to convert this count matrix to a tf-idf weighted matrix:</p> <pre><code>def TFIDF(m): #import numpy WordsPerDoc = numpy.sum(m, axis=0) DocsPerWord = numpy.sum(numpy.asarray(m &gt; 0, 'i'), axis=1) rows, cols = m.shape for i in range(rows): for j in range(cols): amatrix[i,j] = (amatrix[i,j] / WordsPerDoc[j]) * log(float(cols) / DocsPerWord[i]) </code></pre> <p>But, I get the error <code>AttributeError: 'numpy.ndarray' object has no attribute 'A'</code></p> <p>I copied the function above from another script. It was:</p> <pre><code>def TFIDF(self): WordsPerDoc = sum(self.A, axis=0) DocsPerWord = sum(asarray(self.A &gt; 0, 'i'), axis=1) rows, cols = self.A.shape for i in range(rows): for j in range(cols): self.A[i,j] = (self.A[i,j] / WordsPerDoc[j]) * log(float(cols) / DocsPerWord[i]) </code></pre> <p>Which I believe is where it's getting the <code>A</code> from. However, I re-imported the function. </p> <p>Why is this happening?</p>
<p>In the original script, <code>self.A</code> is either an <code>np.matrix</code> or a <code>sparse</code> matrix. For both, <code>.A</code> means: return a copy that is a plain <code>np.ndarray</code>. In other words, it converts the 2d matrix to a regular numpy array. If the object is already an <code>ndarray</code>, accessing <code>.A</code> produces exactly your error.</p> <p>It looks like you have corrected that with your own version of <code>TFIDF</code> - except that it uses 2 variables, <code>m</code> and <code>amatrix</code>, instead of <code>self.A</code>.</p> <p>I think you need to look more closely at the error message and stack trace to identify where that <code>.A</code> is. Also make sure you understand where the code expects a matrix, especially a sparse one, and whether your own code differs in that regard.</p> <p>I recall from other SO questions that one of the learning packages had switched to using sparse matrices, and that required adding <code>.todense()</code> to some of their code (which expected dense ones).</p>
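<p>If the goal is a TFIDF routine that works whether it receives an <code>np.matrix</code>, a SciPy sparse matrix, or a plain <code>ndarray</code>, one defensive option (a sketch, not the original script's API) is to normalise the input up front instead of touching <code>.A</code>:</p> <pre><code>import numpy as np
from scipy import sparse

def to_dense_array(m):
    # sparse matrices expose .toarray(); np.matrix and ndarray are both handled by np.asarray
    if sparse.issparse(m):
        return m.toarray()
    return np.asarray(m)
</code></pre>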
python|numpy|matrix|gensim
0
3,162
67,496,720
Converting Yolov4 Tiny to tflite. error:cannot reshape array of size 372388 into shape (256,256,3,3)
<p>I'm converting my custom weights file to tflite using the open-source project <a href="https://github.com/haroonshakeel/tensorflow-yolov4-tflite" rel="nofollow noreferrer">https://github.com/haroonshakeel/tensorflow-yolov4-tflite</a>.</p> <p>There is no error when I convert Yolov4.weights to tflite, but when I switch to Yolov4-tiny.weights I get an error like this</p> <pre><code> conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0]) </code></pre> <p><code>ValueError: cannot reshape array of size 372388 into shape (256,256,3,3)</code></p> <p>Does anyone know how to fix this problem? Thank you</p>
<p>I solved it by making 2 changes: replacing the class names and installing a specific version of tensorflow-cpu (2.3.0).</p> <ol> <li>In my case I changed the <code>core/config.py</code> file at line 14, which contains the code:</li> </ol> <blockquote> <p>__C.YOLO.CLASSES = &quot;./data/classes/coco.names&quot;</p> </blockquote> <p>replacing coco.names with custom.names, like this:</p> <blockquote> <p>__C.YOLO.CLASSES = &quot;./data/classes/custom.names&quot;</p> </blockquote> <p>and then I created the new file <code>custom.names</code> in the <code>./data/classes</code> directory containing the names of my custom classes instead of the default COCO classes.</p> <ol start="2"> <li>I updated my pip3 version and then installed tensorflow version 2.3.0rc2 for CPU:</li> </ol> <p><code>pip3 install --upgrade pip</code></p> <p><code>pip3 install tensorflow-cpu==2.3.0rc2</code></p> <p>That solved the issue for me.</p> <p>PS: I built my model on Colab using a GPU (T4) but I was testing/using the model on my local machine without a GPU.</p>
tensorflow|yolov4
0
3,163
34,762,463
Install Tensorflow pip wheel without internet
<p>I do not have internet access on my Linux computer, therefore I installed TF from source by following <a href="https://www.tensorflow.org/versions/master/get_started/os_setup.html#installing-from-sources" rel="nofollow">TensorFlow Get Started</a>.<br> I ran into some trouble building trainer_example due to the lack of an internet connection; thankfully someone from TensorFlow helped me through it by creating local repositories for re2, gemmlowp, jpegsrc v9a, libpng and six and modifying WORKSPACE accordingly.<br> When I try to bazel build pip_package to create the wheel, I think I run into the same problem, but:</p> <p>the list of repositories is insanely long (too many to manually install each of them), even though they seem to be mostly part of PolymerElements.</p> <p>Is there an easy workaround?</p>
<p>If you are happy to create a PIP package <strong>without TensorBoard</strong>, you should be able to avoid rewriting the Polymer dependencies by removing <a href="https://github.com/tensorflow/tensorflow/blob/03b5bef1ddbb5841aaf059a8b77267becbe8ea21/tensorflow/tools/pip_package/BUILD#L29" rel="nofollow">this line</a> (<code>"//tensorflow/tensorbaord"</code> in the <code>build_pip_package</code> dependencies) from <code>tensorflow/tools/pip_package/BUILD</code>.</p>
tensorflow|tensorboard|bazel
2
3,164
34,642,595
Tensorflow Strides Argument
<p>I am trying to understand the <strong>strides</strong> argument in tf.nn.avg_pool, tf.nn.max_pool, tf.nn.conv2d. </p> <p>The <a href="https://www.tensorflow.org/versions/master/api_docs/python/nn.html#max_pool" rel="noreferrer">documentation</a> repeatedly says </p> <blockquote> <p>strides: A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.</p> </blockquote> <p>My questions are:</p> <ol> <li>What do each of the 4+ integers represent?</li> <li>Why must they have strides[0] = strides[3] = 1 for convnets?</li> <li>In <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3%20-%20Neural%20Networks/convolutional_network.ipynb" rel="noreferrer">this example</a> we see <code>tf.reshape(_X,shape=[-1, 28, 28, 1])</code>. Why -1?</li> </ol> <p>Sadly the examples in the docs for reshape using -1 don't translate too well to this scenario.</p>
<p>The pooling and convolutional ops slide a "window" across the input tensor. Using <a href="https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d"><code>tf.nn.conv2d</code></a> as an example: If the input tensor has 4 dimensions: <code>[batch, height, width, channels]</code>, then the convolution operates on a 2D window on the <code>height, width</code> dimensions.</p> <p><code>strides</code> determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1.</p> <p>Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this:</p> <pre><code>00 01 02 03 04 ... 10 11 12 13 14 ... 20 21 22 23 24 ... 30 31 32 33 34 ... ... </code></pre> <p>Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8.</p> <p>The input to the convolution has <code>shape=[1, 32, 32, 1]</code>.</p> <p>If you specify <code>strides=[1,1,1,1]</code> with <code>padding=SAME</code>, then the output of the filter will be [1, 32, 32, 8].</p> <p>The filter will first create an output for:</p> <pre><code>F(00 01 10 11) </code></pre> <p>And then for:</p> <pre><code>F(01 02 11 12) </code></pre> <p>and so on. Then it will move to the second row, calculating:</p> <pre><code>F(10, 11 20, 21) </code></pre> <p>then</p> <pre><code>F(11, 12 21, 22) </code></pre> <p>If you specify a stride of [1, 2, 2, 1] it won't do overlapping windows. It will compute:</p> <pre><code>F(00, 01 10, 11) </code></pre> <p>and then</p> <pre><code>F(02, 03 12, 13) </code></pre> <p>The stride operates similarly for the pooling operators.</p> <p><strong>Question 2: Why strides [1, x, y, 1] for convnets</strong></p> <p>The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :)</p> <p>The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason.</p> <p>The conv2d operator is more general, so you <em>could</em> create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially.</p> <p><strong>Why reshape to -1</strong> -1 is a placeholder that says "adjust as necessary to match the size needed for the full tensor." It's a way of making the code be independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.</p>
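<p>To make the shape arithmetic above concrete, here is a small runnable check (written as TensorFlow 2-style eager code with made-up random inputs) showing the effect of the two stride settings:</p> <pre><code>import tensorflow as tf

x = tf.random.normal([1, 32, 32, 1])   # [batch, height, width, channels]
w = tf.random.normal([2, 2, 1, 8])     # [filter_height, filter_width, in_channels, out_channels]

y_dense   = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
y_strided = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')

print(y_dense.shape)    # (1, 32, 32, 8)  -- overlapping windows
print(y_strided.shape)  # (1, 16, 16, 8)  -- the window jumps 2 pixels at a time
</code></pre>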
python|neural-network|convolution|tensorflow|conv-neural-network
227
3,165
65,189,843
Use Image classification model trained with coco
<p>I need to work with an image classification model trained on the dataset named COCO. I searched the internet but I only found object detectors.</p> <p>Does anyone know of pre-trained image classification models for TensorFlow 2?</p>
<p>Object Detection is different from Image Classification.</p> <blockquote> <p>Object Detection algorithms act as a combination of image classification and object localization. It takes an image as input and produces one or more bounding boxes with the class label attached to each bounding box. These algorithms are capable enough to deal with multi-class classification and localization as well as to deal with the objects with multiple occurrences. <a href="https://stackoverflow.com/a/64956657/5801823">ref</a></p> </blockquote> <p>You can, however, use the COCO dataset to strip out each and every localisation box and create a new dataset which can then be fed to your image classifier. You will have to do some processing of the COCO dataset to achieve this.</p> <p>For example, if your dataset annotations look like this (containing 4 localised objects):</p> <ol> <li>123, 23, 13, 45, kite, image_1.jpg</li> <li>133, 43, 213, 77, bird, image_1.jpg</li> <li>133, 13, 413, 73, bird, image_2.jpg</li> <li>12, 233, 440, 34, tree, image_1.jpg</li> </ol> <p>you can write a script to convert these into:</p> <ol> <li>kite_123231345image_1.jpg</li> <li>bird_1334521377image_1.jpg</li> <li>bird_1331341373image_2.jpg</li> <li>tree_1223344034image_1.jpg</li> </ol> <p>creating those images from the cut-out boxes, and then use this new dataset to train your classifier. I have personally done this on the COCO dataset before and it gives decent precision.</p>
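<p>A rough sketch of that conversion step, purely illustrative and assuming the annotations have already been flattened into (x, y, width, height, label, filename) tuples like the example list above:</p> <pre><code>from PIL import Image

annotations = [
    (123, 23, 13, 45, 'kite', 'image_1.jpg'),
    (133, 43, 213, 77, 'bird', 'image_1.jpg'),
]

for i, (x, y, w, h, label, fname) in enumerate(annotations):
    img = Image.open(fname)
    crop = img.crop((x, y, x + w, y + h))            # cut out the localisation box
    crop.save('{}_{}_{}'.format(label, i, fname))    # e.g. kite_0_image_1.jpg
</code></pre> <p>The resulting cropped images, named by their class, can then be fed to any standard image classification pipeline.</p>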
tensorflow|tensorflow2.0
1
3,166
65,303,008
Bug in Neural Network with cost function rising
<p>I have been working on my first neural net, building it completely from scratch. However when printing the cost function to track the models progress it only rises, the data I am using is just 1s,0s I wanted something simple for my first model. It has one hidden layer of two tanh nodes and then outputs into a sigmoid unit.</p> <p>Code is below, copied from markdown version of jupyter notebook:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt </code></pre> <pre class="lang-py prettyprint-override"><code>#creating our data x = np.array([[0, 1, 0, 1], [0, 1, 0, 1], [1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]]) y = np.array([0, 1, 0, 1]) y = y.reshape(1, 4) </code></pre> <pre class="lang-py prettyprint-override"><code>print(x) </code></pre> <pre><code>[[0 1 0 1] [0 1 0 1] [1 0 1 0] [1 0 1 0] [0 1 0 1]] </code></pre> <pre class="lang-py prettyprint-override"><code>print(y) </code></pre> <pre><code>[[0 1 0 1]] </code></pre> <pre class="lang-py prettyprint-override"><code>print(x.shape) </code></pre> <pre><code>(5, 4) </code></pre> <pre class="lang-py prettyprint-override"><code>print(y.shape) </code></pre> <pre><code>(1, 4) </code></pre> <pre class="lang-py prettyprint-override"><code>#initalize parameters def rand_params(): W1 = np.random.randn(2, 5) b1 = np.zeros([2, 1]) W2 = np.random.randn(1, 2) b2 = np.zeros([1, 1]) return W1, b1, W2, b2 W1, b1, W2, b2 = rand_params() </code></pre> <pre class="lang-py prettyprint-override"><code>print(f&quot;W1: {W1}, b1: {b1}&quot;) print(W1.shape, b1.shape) </code></pre> <pre><code>W1: [[ 0.60366603 -0.12225707 -0.44483219 -1.40200651 -3.02768333] [-0.98659326 -0.91009808 0.72461745 0.20677563 0.17493105]], b1: [[0.] [0.]] (2, 5) (2, 1) </code></pre> <pre class="lang-py prettyprint-override"><code>print(f&quot;W2: {W2}, b2: {b2}&quot;) print(W2.shape, b2.shape) </code></pre> <pre><code>W2: [[0.05478931 0.99102802]], b2: [[0.]] (1, 2) (1, 1) </code></pre> <pre class="lang-py prettyprint-override"><code>#forward propogation def tanh(z): a = (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z)) return a def sigmoid(z): a = 1 / (1 + np.exp(z)) return a def der_tanh(z): a = 1 - (tanh(z))**2 return a def der_sigmoid(z): a = sigmoid(z) * (1 - sigmoid(z)) # return a &lt;-- MISSING? 
</code></pre> <pre class="lang-py prettyprint-override"><code>#forward computation def forward_prop(x, W1, b1, W2, b2): Z1 = np.dot(W1, x) + b1 A1 = np.tanh(Z1) Z2 = np.dot(W2, A1) + b2 y_hat = sigmoid(Z2) return Z1, A1, Z2, y_hat Z1, A1, Z2, y_hat = forward_prop(x, W1, b1, W2, b2) </code></pre> <pre class="lang-py prettyprint-override"><code>def cost_function(y, y_hat, x): m = x.shape[1] J = -1 / m * np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)) return J, m J, m = cost_function(y, y_hat, x) </code></pre> <pre class="lang-py prettyprint-override"><code>#back propogation def back_prop(): dZ2 = y_hat - y dW2 = 1 / m * np.dot(dZ2, A1.T) db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True) dZ1 = np.dot(W2.T, dZ2) * der_tanh(Z1) dW1 = 1 / m * np.dot(dZ1, x.T) db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True) return dW2, db2, dW1, db1 dW2, db2, dW1, db1 = back_prop() </code></pre> <pre class="lang-py prettyprint-override"><code>#optimizing weights + biases def update(W1, b1, W2, b2): lr = 0.01 W1 = W1 - lr * dW1 b1 = b1 - lr * db1 W2 = W2 - lr * dW2 b2 = b2 - lr * db2 return W1, b1, W2, b2 W1, b1, W2, b2 = update(W1, b1, W2, b2) </code></pre> <pre class="lang-py prettyprint-override"><code># model costs = [] W1, b1, W2, b2 = rand_params() for epoch in range(1500): Z1, A1, Z2, y_hat = forward_prop(x, W1, b1, W2, b2) J, m = cost_function(y, y_hat, x) if epoch % 100 == 0: print(J) costs.append(J) dW2, db2, dW1, db1 = back_prop() W1, b1, W2, b2 = update(W1, b1, W2, b2) plt.plot(costs) </code></pre> <pre><code>0.8188282199860928 1.1665507761146539 1.6868025884074527 2.3940967534280753 3.2473658397522387 4.183790888527539 5.158135855432985 6.147978715339146 7.143956636487831 8.142392777023431 9.141860280152706 10.141802197682296 11.142002210070622 12.142384342966537 13.142939005842882 </code></pre>
<p>Apart from any other possible bugs, sigmoid(z) should be defined as:</p> <pre><code>def sigmoid(z): a = 1/(1 + np.exp(-z)) # ^ return a </code></pre>
python|numpy|deep-learning|neural-network|activation-function
-2
3,167
49,875,667
How to deal with duplicate fields in a pandas dataframe?
<p>I want to do some analysis on data I have scraped from a forum. This is the first time I'm doing something like this, so it's possible that my method is wrong from the start, but here is what I have at the moment.</p> <p>I have scraped 17k discussions, each of which contains a certain number of posts (for a total of 78k posts). I have stored everything in a dataframe with 6 columns. Each row corresponds to a post, and the columns are respectively: </p> <pre><code>'thread_id', 'thread_length', 'thread_title', 'post_number', 'post content', 'poster' </code></pre> <p>As you can see, the values that pertain to the thread (so title, id, and length) are repeated a lot of times: for example, if a thread has 30 posts, its id, length and title will be repeated 30 times.</p> <p>My problem is: how can I plot a histogram of the thread lengths? I probably should only pick length values that have a different thread id value, but I can't figure out how to do it. Also, I guess there has to be a 'cleaner' way to organize this dataframe, so I'm open to any advice.</p>
<p>The columns look fine to me. You can use:</p> <pre><code>df.drop_duplicates('thread_id').thread_length.plot.hist() </code></pre> <ul> <li><code>drop_duplicates</code> identifies duplicates by considering the <code>thread_id</code> column only, keeping the first occurrence (by default).</li> <li>I then take the <code>thread_length</code> column, </li> <li>which gives you a <code>Series</code> that you can <code>plot</code> with method <code>hist</code> to get a histogram.</li> </ul>
python|pandas
2
3,168
64,069,388
Google Colab - pandas/pyplot will only accept column references not titles
<p>I've opened a Google Sheet in Colab using gspread</p> <pre><code>document = gc.open_by_url('https://docs.google.com/myspreadsheet') sheet = elem.worksheet('Sheet1') data = sheet.get_all_values() df = pd.DataFrame(data) </code></pre> <p>The document contains element data and a print of head() looks like this:</p> <pre><code> 0 1 ... 26 27 0 AtomicNumber Element ... NumberofShells NumberofValence 1 1 Hydrogen ... 1 1 2 2 Helium ... 1 3 3 Lithium ... 2 1 4 4 Beryllium ... 2 2 </code></pre> <p>The problem I have is that when I try to reference by title, for example:</p> <pre><code>df.plot(x = 'AtomicNumber', y= 'AtomicMass', kind = 'scatter') </code></pre> <p>I get an error. I have also tried:</p> <pre><code>df.plot(x = df.AtomicNumber, y= df.AtomicMass, kind = 'scatter') </code></pre> <p>and</p> <pre><code>df.plot(x = df['AtomicNumber'], y= df['AtomicMass'], kind = 'scatter') </code></pre> <p>but I have no joy either. Unless I am using the column references like so:</p> <pre><code>df.plot(x = 0, y= 17, kind = 'scatter') </code></pre> <p>I get nothing. It will get tiring pretty fast if I have to keep referencing the .csv file to figure out which column reference I need!!</p> <p>Finally, when I print:</p> <pre><code>df.columns.values </code></pre> <p>I get:</p> <pre><code>array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]) </code></pre> <p>I can't seem to <strong>not</strong> get this - even if I try to create a new dataframe that contains every row of df bar row index 0</p> <p>I'm pretty new with this so I'm sure it's pretty simple, but I've hit an impasse... Help!</p>
<p>I've just figured one solution out, which I'm happy with so I'll mark this question as resolved.</p> <p>The problem seems to be from the way I was creating my dataframe:</p> <pre><code>data = sheet.get_all_values() df = pd.DataFrame(data) </code></pre> <p>If I instead use the 'Get_all_records()' function then the dataframe is make without a seemingly non-removable column reference numbers as titles (see below)</p> <pre><code>df = pd.DataFrame(raw.get_all_records()) </code></pre> <p>when I print the head() of this dataframe I get:</p> <pre><code> AtomicNumber Element Symbol ... SpecificHeat NumberofShells NumberofValence 0 1 Hydrogen H ... 14.304 1 1 1 2 Helium He ... 5.193 1 2 3 Lithium Li ... 3.582 2 1 3 4 Beryllium Be ... 1.825 2 2 4 5 Boron B ... 1.026 2 3 </code></pre> <p>and when I then call df.columns.values, I get:</p> <pre><code>array(['AtomicNumber', 'Element', 'Symbol', 'AtomicMass', 'NumberofNeutrons', 'NumberofProtons', 'NumberofElectrons', 'Period', 'Group', 'Phase', 'Radioactive', 'Natural', 'Metal', 'Nonmetal', 'Metalloid', 'Type', 'AtomicRadius', 'Electronegativity', 'FirstIonization', 'Density', 'MeltingPoint', 'BoilingPoint', 'NumberOfIsotopes', 'Discoverer', 'Year', 'SpecificHeat', 'NumberofShells', 'NumberofValence'], dtype=object) </code></pre> <p>I'm going to do a little dive into the documentation of gspread now and try to figure out what the distinction is between get_all_values and get_all_records, but I'm so happy to have figured it out! :-)</p>
python|pandas|matplotlib|google-colaboratory|gspread
1
3,169
63,791,464
Why does this not split the genres properly? (Python)
<p>I am trying to find the best-rated genres for this <a href="https://www.kaggle.com/isaactaylorofficial/imdb-10000-most-voted-feature-films-041118" rel="nofollow noreferrer">data set</a>. I started off splitting the genres because there were multiple genres in most rows. Then I sorted through the genres and their scores calculating the average score for each genre. I then update the data frame with each genre and their average score. However, there are repeating genres in the list for some reason and I'm not sure why.</p> <pre><code>dataGenre = data df5 = pd.DataFrame(data={&quot;Genre&quot;:dataYearScore['Genre'], &quot;Score&quot;: dataYearScore['Score']}) df5 = df5.assign(Genre=df5['Genre'].str.split(',')).explode('Genre').reset_index(drop=True) genre_list5 = [] avg_scores5 = [] for genre in df5[&quot;Genre&quot;].unique(): genre_list5.append(genre) avg_scores5.append(df5.loc[df5[&quot;Genre&quot;]==genre, &quot;Score&quot;].mean()) plt.bar(genre_list5, avg_scores5, width = 0.8) plt.xlabel('Genre') plt.ylabel('Average Score') plt.xticks(rotation=65) plt.title('Average Score for Each Genre') plt.show() df5 = pd.DataFrame(data={&quot;Genre&quot;:genre_list5, &quot;Score&quot;: avg_scores5}) df5 </code></pre> <p>I believe the problem is either in line 3 or the for loop but I'm not sure whats doing it. Any help at all would be greatly appreciated :)</p> <p>Update:</p> <p>The data can be found here <a href="https://www.kaggle.com/isaactaylorofficial/imdb-10000-most-voted-feature-films-041118" rel="nofollow noreferrer">https://www.kaggle.com/isaactaylorofficial/imdb-10000-most-voted-feature-films-041118</a></p> <p>It's imported with</p> <pre><code>data = pd.read_csv('movies.csv') </code></pre> <p>I don't really need the graph, I just need the data frame to have a column with genres (no repeats) and their average score.</p> <pre><code>df5 = pd.DataFrame(data={&quot;Genre&quot;:genre_list5, &quot;Score&quot;: avg_scores5}) df5 </code></pre> <p>This is checked using the code above^</p>
<p>Because there might be some spaces before or after the <code>comma</code> separating two genres, hence you need to use the regex pattern <code>\s*,\s*</code> with <code>Series.str.split</code> to properly split the <code>Genres</code>:</p> <pre><code>s = data[['Score']].assign( Genre=data['Genre'].str.split(r'\s*,\s*')).explode('Genre') avg = s.groupby('Genre')['Score'].mean() </code></pre> <p>Plotting the average ratings:</p> <pre><code>avg.plot(kind='bar', width=0.8) plt.ylabel('Average Rating') plt.title('Average Score for Each Genre') </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/INq2k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/INq2k.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|csv|matplotlib
2
3,170
63,777,388
How can I use only a certain number of CPU cores, out of all available cores, with TensorFlow settings?
<p>I'm trying to use the university server for my deep learning code. The server has 64 CPU cores, but I have to use just 24 cores so that everybody can use the server too. I am trying to limit my CPU resource usage. I have searched all of Stack Overflow for a solution, but none of the suggestions work for me, for example downgrading TensorFlow and using</p> <pre><code>config = tf.ConfigProto(allow_soft_placement=True,
                        intra_op_parallelism_threads=ncpu,
                        inter_op_parallelism_threads=ncpu)
</code></pre> <p>and some other solutions using</p> <pre><code>import tensorflow as tf
tf.config.threading.set_intra_op_parallelism_threads(numb)
tf.config.threading.set_inter_op_parallelism_threads(numb)
</code></pre> <p>I have to use TensorFlow version 2 or higher because I use the 'kerastuner' package in my code.</p>
<p>If you have admin rights on the server and it's running a version of Windows, you can simply restrict the resources via the Task Manager.</p> <p>If you want to do it in code: it looks like it's a bug in TensorFlow, which might be fixed, according to the <a href="https://github.com/tensorflow/tensorflow/issues/29968" rel="nofollow noreferrer">GitHub</a> issue.</p> <p>You might want to try:</p> <pre><code>export OMP_NUM_THREADS=2
tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(1)
</code></pre> <p>This was reported to work by Leslie-Fang. If it does not work for you, I guess your only option is to join the GitHub discussion until it's fixed.</p>
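<p>One detail worth spelling out (my addition, not from the linked issue): the first line of that snippet is a shell command, not Python. The environment variable is typically set before launching the script, or via <code>os.environ</code> at the very top of the script, and the two threading calls must run before any other TensorFlow operation. A hedged sketch using the 24 cores mentioned in the question:</p> <pre><code>import os
os.environ["OMP_NUM_THREADS"] = "24"   # set before TensorFlow initialises

import tensorflow as tf
tf.config.threading.set_intra_op_parallelism_threads(24)
tf.config.threading.set_inter_op_parallelism_threads(24)
</code></pre>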
python|tensorflow|deep-learning
2
3,171
46,996,261
sorting dataframe python by alphabets then by year
<p>I am trying to sort the following data frame first in alphabetical order, and within that alphabetical order I want the date (mmddyear) to be in chronological order. i.e. I have this data frame:</p> <pre><code>0 A11 01011997 1 C11 07202005 2 A12 02011997 3 B12 12102001 4 A13 10012000 5 B11 11012001 6 A00 01101980 </code></pre> <p>and I want to sort it to be of this form:</p> <pre><code>A11 01011997 A00 01101980 A12 02011997 A13 10012000 B11 11012001 B12 12102001 C11 07202005 </code></pre> <p>This is the dataframe I used in python.</p> <pre><code>sales = [('account', ['A11', 'C11', 'A12','B12','A13','B11']), ('date', [1011997, 7202005,2011997,12102001,10012000,11012001]) ] df = pd.DataFrame.from_items(sales) </code></pre> <p>I tried <code>sales = sales.sort_values(by=['account'])</code>, and that sorts everything in alphabetical order. When I apply <code>sales = sales.sort_values(by=['date'])</code>, everything becomes out of order.</p> <p>Any suggestions?</p>
<p>You need to sort on both fields using <code>df.sort_values(['account', 'date'])</code>. </p> <p>But you can't just sort the data frame when the date is represented as a string or an integer because in many cases you will get the wrong order, e.g. integer 1011997 sorts before 1021980 although the latter represents a date in 1980. Similarly <code>'01011997'</code> sorts before <code>'01021980'</code>.</p> <p>So convert the dates into <code>datetime</code>s first. Here I assume that the date column contains strings because your sample data suggests that.</p> <pre><code>import pandas as pd sales = [('account', ['A11', 'A11', 'C11', 'A12','B12','A13','B11']), ('date', ['01011997', '01021980', '07202005', '02011997', '12102001', '10012000', '11012001'])] df = pd.DataFrame.from_items(sales) &gt;&gt;&gt; df.sort_values(['account', 'date']) account date 0 A11 01011997 1 A11 01021980 3 A12 02011997 5 A13 10012000 6 B11 11012001 4 B12 12102001 2 C11 07202005 </code></pre> <p>In this case row 1 should sort before row 0, but it doesn't because the column is sorted lexicographically. To fix that convert <code>df['date']</code> to dtype <code>datetime64</code> then sort:</p> <pre><code>&gt;&gt;&gt; df['date'] = pd.to_datetime(df['date'], format='%m%d%Y') &gt;&gt;&gt; df account date 0 A11 1997-01-01 1 A11 1980-01-02 2 C11 2005-07-20 3 A12 1997-02-01 4 B12 2001-12-10 5 A13 2000-10-01 6 B11 2001-11-01 &gt;&gt;&gt; df.sort_values(['account', 'date']) account date 1 A11 1980-01-02 0 A11 1997-01-01 3 A12 1997-02-01 5 A13 2000-10-01 6 B11 2001-11-01 4 B12 2001-12-10 2 C11 2005-07-20 </code></pre> <p>which looks correct.</p>
python|pandas|sorting
1
3,172
47,003,318
Stack vectors of different lengths in Tensorflow
<p>How can I stack vectors of different length in tensorflow, e.g. from</p> <pre><code>[1, 3, 5] [2, 3, 9, 1, 1] [6, 2] </code></pre> <p>get zero-padded matrix</p> <pre><code>[1, 3, 5, 0, 0] [2, 3, 9, 1, 1] [6, 2, 0, 0, 0] </code></pre> <p>Vector count is known at definition time, but their lengths are not. Vectors are produced using <code>tf.where(condition)</code></p>
<p>One way you can do this is like:</p> <pre><code>In [11]: v1 = [1, 3, 5] In [12]: v2 = [2, 3, 9, 1, 1] In [14]: v3 = [6, 2] In [38]: max_len = max(len(v1), len(v2), len(v3)) In [39]: pad1 = [[0, max_len-len(v1)]] In [40]: pad2 = [[0, max_len-len(v2)]] In [41]: pad3 = [[0, max_len-len(v3)]] # pads 0 to original vectors up to `max_len` length In [42]: v1_padded = tf.pad(v1, pad1, mode='CONSTANT') In [43]: v2_padded = tf.pad(v2, pad2, mode='CONSTANT') In [44]: v3_padded = tf.pad(v3, pad3, mode='CONSTANT') In [53]: res = tf.stack([v1_padded, v2_padded, v3_padded], axis=0) In [56]: res.eval() Out[56]: array([[1, 3, 5, 0, 0], [2, 3, 9, 1, 1], [6, 2, 0, 0, 0]], dtype=int32) </code></pre> <p>To make it work with <code>N</code> vectors efficiently, you should probably use a <code>for</code> loop to prepare the <code>pad</code> variables for all the vectors and the padded vectors subsequently. And, finally use <code>tf.stack</code> to stack these padded vectors along the <code>0</code>th axis to get your desired result.</p> <hr> <p><strong>P.S.</strong>: You can get the length of the vectors dynamically once they are obtained from <code>tf.where(condition)</code>.</p>
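<p>A rough sketch of that generalisation, for a list of 1-D tensors whose lengths are only known at run time (the variable names are made up for illustration):</p> <pre><code>vectors = [v1, v2, v3]                       # e.g. 1-D results derived from tf.where
lengths = [tf.shape(v)[0] for v in vectors]  # dynamic lengths, no static shapes required
max_len = tf.reduce_max(tf.stack(lengths))

padded = [tf.pad(v, [[0, max_len - n]], mode='CONSTANT')
          for v, n in zip(vectors, lengths)]
res = tf.stack(padded, axis=0)               # shape (len(vectors), max_len)
</code></pre>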
python|tensorflow|neural-network|tensor|zero-padding
3
3,173
46,877,403
numpy `rint` weird behavior
<p>This question is about <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.rint.html" rel="nofollow noreferrer"><code>numpy.rint</code></a>, which by definition rounds to the nearest integer. However, the following code produces inconsistent results.</p> <pre><code>In [1]: import numpy as np for i in range(5, 105, 10): print(i, np.rint(i/10)) Out[1]: 5 0 # Should be 1 15 2 # Correct 25 2 # Should be 3 35 4 # Correct 45 4 # Should be 5 55 6 # Correct 65 6 # Should be 7 ... </code></pre> <p>So there appears to be a pattern: if, after division by 10, the units place is even, the number is rounded down, but if the units place is odd, the number is rounded up. However, according to <a href="https://en.wikipedia.org/wiki/Rounding#Tie-breaking" rel="nofollow noreferrer">rounding rules</a>, the units place should not matter!</p> <p>Either <code>numpy</code> should use the "round half up", that is, exactly at half, it rounds up to the next integer, or it should use the "round half down". It cannot do both, and be inconsistent.</p> <p>Normally, I would open a bug report with <code>numpy</code> for this, but I am not sure if this is entirely <code>numpy</code> being weird, or some underlying quirk of how python interprets floating points, or due to <a href="https://youtu.be/PZRI1IfStY0" rel="nofollow noreferrer">loss of precision from conversion to binary and back</a>.</p> <p>Note that <code>numpy.round(i, 0)</code> also behaves the same.</p> <hr> <p>A fix is to add a small fraction after the division by 10: <code>numpy.rint(i/10 + 0.1)</code>, and the answers come correct.</p>
<p>Since the near-universal adoption of the IEEE 754 standard, numeric functions have flocked to "round half to even" rounding, a description of which can be found on the same Wikipedia page you linked to, but <a href="https://en.wikipedia.org/wiki/Rounding#Round_half_to_even" rel="nofollow noreferrer">a bit down</a>.</p> <p>Loss of precision during base conversion here is not a problem, because conversion of integers to floats (C doubles) in this range is exact, and the quotients are exactly representable in binary floating point.</p> <p>Adding 0.1 is not a sane workaround. For example, add 0.1 to 4.42 and you get 4.52, which rounds up to 5.0 instead of to the correct 4.0.</p>
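<p>A quick demonstration of both points (the values are chosen purely for illustration):</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; np.rint([0.5, 1.5, 2.5, 3.5])   # ties go to the nearest even integer
array([0., 2., 2., 4.])
&gt;&gt;&gt; np.rint(4.42 + 0.1)             # the +0.1 trick: 4.42 should round to 4, not 5
5.0
</code></pre>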
python|python-3.x|numpy
5
3,174
32,697,958
Is there a way to track which DataFrame Column corresponds to which Array Column(s) after LabelBinarizer Transform in sklearn?
<p>I have a series of variables of string type and I have to transform them in order to use sklearn estimators. </p> <p>I'm using DataFrameMapper from the library sklearn_pandas. </p> <p>In the following example I have a dataframe with columns A,B,C,D,E. Suppose that 'A', 'B' &amp; 'C' are string features: A has 25 unique strings, B has 10 unique strings and C has 30 unique strings. After tranforming the data by LabelBinarizer() the corresponding matrix would have 25+ 10+ 30+ 1 (from D) +1 (from E) = <strong>67 features</strong>. <em>How do I know which column correspond to the previous string values of each original variable?</em></p> <p>As mentioned before the first 3 are string variables so I have to do the following transformation:</p> <pre><code> mapper = DataFrameMapper([ ('A', LabelBinarizer()), ('B', LabelBinarizer()), ('C', LabelBinarizer()), (['D','E'],StandardScaler())]) X = np.array(mapper.fit_transform(df),dtype=float) </code></pre> <p>Where X is matrix of size (num_features)*67</p>
<p>Combining DictVectorizer() and mapper it is possible to keep track the column variable names. This is useful if one wants to visualize a DecisionTree with export_graphviz.</p> <p>The answer is based on: <a href="http://nbviewer.ipython.org/github/rasbt/pattern_classification/blob/master/preprocessing/feature_encoding.ipynb" rel="nofollow">http://nbviewer.ipython.org/github/rasbt/pattern_classification/blob/master/preprocessing/feature_encoding.ipynb</a></p> <pre><code> from sklearn.feature_extraction import DictVectorizer dvec = DictVectorizer(sparse=False) X=dvec.fit_transform(df.transpose().to_dict().values()) df_t= pd.DataFrame(X,columns=dvec.get_feature_names()) </code></pre> <p>df is the input DataFrame with A,B,C being categorical features. df_t is the transformed DataFrame with the categorical features encoded with its corresponding header.</p> <p>So then you can scale the other numerical features D, E and transform everything into a numpy array to use in sklearn.</p> <pre><code>numerical=['D','E'] categorical=list(set(list(df_t.columns.values))-set(numerical)) mapper = DataFrameMapper([ (categorical, None), (numerical,StandardScaler())]) explanatory_variables_columns=categorical+numerical X = np.array(mapper.fit_transform(df_t),dtype=float) </code></pre> <ul> <li>Although there is no transformation to be done on 'A', 'B' and 'C' you will have to include them in the mapper and use None to express "do nothing".</li> </ul>
python|pandas|machine-learning|scikit-learn|sklearn-pandas
2
3,175
38,576,674
Neo4j Bolt StatementResult to Pandas DataFrame
<p>Based on example from <a href="https://neo4j.com/developer/python/" rel="nofollow">Neo4j</a> </p> <pre><code>from neo4j.v1 import GraphDatabase, basic_auth driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("neo4j", "neo4j")) session = driver.session() session.run("CREATE (a:Person {name:'Arthur', title:'King'})") result = session.run("MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title") for record in result: print("%s %s" % (record["title"], record["name"])) session.close() </code></pre> <p>Here <code>result</code> is of datatype <code>neo4j.v1.session.StatementResult</code>. How to access this data in pandas dataframe <strong>without explicitly iterating</strong>?</p> <p><code>pd.DataFrame.from_records(result)</code> doesn't seem to help.</p> <p>This is what I have using list comprehension</p> <pre><code>resultlist = [[record['title'], record['name']] for record in result] pd.DataFrame.from_records(resultlist, columns=['title', 'name']) </code></pre>
<p>The best I can come up with is a list comprehension similar to yours, but less verbose:</p> <pre><code>df = pd.DataFrame([r.values() for r in result], columns=result.keys()) </code></pre> <p>The <a href="http://py2neo.org/v3/" rel="noreferrer"><code>py2neo</code></a> package seems to be more suitable for DataFrames, as it's fairly straightforward to return a list of dictionaries. Here's the equivalent code using <code>py2neo</code>:</p> <pre><code>import py2neo # Some of these keyword arguments are unnecessary, as they are the default values. graph = py2neo.Graph(bolt=True, host='localhost', user='neo4j', password='neo4j') graph.run("CREATE (a:Person {name:'Arthur', title:'King'})") query = "MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title" df = pd.DataFrame(graph.data(query)) </code></pre>
python|pandas|neo4j
9
3,176
38,812,923
how to get given value from dataframe in Pandas?
<p>Let's say a dataframe DF looks like</p> <pre><code>record_id  species  wgt
33321      DM       44
33322      DO       58
33323      PB       45
</code></pre> <p>If I wanted to get the value for <code>wgt</code> when <code>record_id==33323</code> and <code>species=='PB'</code>, what would I have to type in Pandas? Something like</p> <pre><code>DF[species=='PB'][record_id==33323]? </code></pre>
<p>Try filtering with both conditions combined:</p> <pre><code>DF[(DF.species=='PB') &amp; (DF.record_id==33323)]['wgt']
# 2    45
# Name: wgt, dtype: int64
</code></pre> <p>Use this to get only the value:</p> <pre><code>list(DF[(DF.species=='PB') &amp; (DF.record_id==33323)]['wgt'].values)
# [45]
</code></pre>
pandas|dataframe
0
3,177
38,728,722
pandas install issues in gitlab and docker
<pre><code>Collecting numpy (from -r requirements.txt (line 21)) Downloading numpy-1.11.1.zip (4.7MB) Collecting pandas (from -r requirements.txt (line 22)) Downloading pandas-0.18.1.tar.gz (7.3MB) Complete output from command python setup.py egg_info: Download error on https://pypi.python.org/simple/numpy/: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) -- Some packages may not be found! Couldn't find index page for 'numpy' (maybe misspelled?) Download error on https://pypi.python.org/simple/: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) -- Some packages may not be found! No local packages or download links found for numpy&gt;=1.7.0 Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "/tmp/pip-build-8puw9oba/pandas/setup.py", line 631, in &lt;module&gt; **setuptools_kwargs) File "/usr/local/lib/python3.5/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 269, in __init__ self.fetch_build_eggs(attrs['setup_requires']) File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 313, in fetch_build_eggs replace_conflicting=True, File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 826, in resolve dist = best[req.key] = env.best_match(req, ws, installer) File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1092, in best_match return self.obtain(req, installer) File "/usr/local/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1104, in obtain return installer(requirement) File "/usr/local/lib/python3.5/site-packages/setuptools/dist.py", line 380, in fetch_build_egg return cmd.easy_install(req) File "/usr/local/lib/python3.5/site-packages/setuptools/command/easy_install.py", line 634, in easy_install raise DistutilsError(msg) distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('numpy&gt;=1.7.0') ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8puw9oba/pandas/ ERROR: Build failed: exit code 1 </code></pre> <p>Trying continuous Integration with gitlab and am running into an issue after pandas has been added as a requirement. when running the pytest the error above happens. the yaml for the gitlab-ci looks like this:</p> <pre><code>pytest: image: python:3-alpine script: - pip install -r requirements.txt - python -m pytest tests --ignore=tests/test_routes.py eslint: image: node:4.4.7 cache: paths: - src/static/node_modules/ script: - cd src/static - npm --loglevel=silent install - npm --loglevel=silent install gulp -g - gulp lint </code></pre> <p>pytest is the one that is failing before it even gets to running the tests</p> <p>the contents of our requirements.txt are as follows:</p> <pre><code>astroid==1.4.5 blinker==1.4 click==6.3 colorama==0.3.7 Flask==0.10.1 Flask-DebugToolbar==0.10.0 Flask-Login==0.3.2 Flask-Mail==0.9.1 Flask-Principal==0.4.0 Flask-WTF==0.12 Jinja2==2.8 lazy-object-proxy==1.2.1 MarkupSafe==0.23 passlib==1.6.5 pylint==1.5.5 requests==2.9.1 six==1.10.0 Werkzeug==0.11.4 wrapt==1.10.6 WTForms==2.1 pandas pyaml rtyaml webtest hypothesis beautifulsoup4 pytest </code></pre> <p>I attempted manually adding numpy before pandas but got the same result. since it complained about numpy >=1.7.0 I also attempted explicitly telling it that version but that did not resolve the issue either. 
Is there anything I am missing in this configuration that would be causing this problem?</p>
<p><code>pip</code> is unable to verify the certificate. You need to manually say which certificate it should use to verify it.</p> <p>This should work:</p> <pre><code>pip --cert /etc/ssl/certs/DigiCert_High_Assurance_EV_Root_CA.pem install -r requirements.txt </code></pre>
python|pandas|numpy|continuous-integration|gitlab
0
3,178
38,656,284
Visible deprecation warning using boolean operation on numpy array
<p>I'm having an issue where I keep receiving a warning stating:</p> <pre><code>VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 744 but corresponding boolean dimension is 1 </code></pre> <p>When I try to use this:</p> <pre><code>x_low = xcontacts[(xcontacts[5:6] &lt;= 2000).any(1), :] x_med = xcontacts[(xcontacts[5:6] &lt;= 4000).any(1), :] x_med = xcontacts[(xcontacts[5:6] &gt; 2000).any(1), :] x_hi = xcontacts[(xcontacts[5:6] &gt; 4000).any(1), :] </code></pre> <p>On an array of shape:</p> <pre><code>xcontacts.shape Out[46]: (744L, 6L) </code></pre> <p>Here's a sample of the array:</p> <pre><code>[[ 1. 0. 0. 4. 0. 228.681 ] [ 2. 4. 0. 8. 0. 219.145 ] [ 3. 8. 0. 12. 0. 450.269 ] ..., [ 60. 236. 96. 240. 96. 933.4565] [ 61. 240. 96. 244. 96. 646.449 ] [ 62. 244. 96. 248. 96. 533.657 ]] </code></pre> <p>I'm trying to create three new arrays which are copies of the first but after a boolean operation has been performed on the final column, removing rows that do not agree with the operator:</p> <pre><code>x_low where col5 &lt;= 2000 x_med where 2000 &lt; col5 &lt;= 4000 x_hi where 4000 &lt; col5 </code></pre> <p>Does anyone know what I'm doing wrong?</p>
<p>Thanks to @Syrtis Major for this: index the last column with <code>xcontacts[:,5]</code> rather than slicing rows with <code>xcontacts[5:6]</code>. The middle band needs both bounds combined, otherwise the second assignment just overwrites the first:</p> <pre><code>x_low = xcontacts[xcontacts[:,5] &lt;= 2000]
x_med = xcontacts[(xcontacts[:,5] &gt; 2000) &amp; (xcontacts[:,5] &lt;= 4000)]
x_hi  = xcontacts[xcontacts[:,5] &gt; 4000]
</code></pre>
python|arrays|numpy|boolean-operations
0
3,179
38,532,055
Numba not speeding up function
<p>I have some code I'm trying to speed up with numba. I've done some reading on the topic, but I haven't been able to figure it out 100%.</p> <p>Here is the code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as st import seaborn as sns from numba import jit, vectorize, float64, autojit sns.set(context='talk', style='ticks', font_scale=1.2, rc={'figure.figsize': (6.5, 5.5), 'xtick.direction': 'in', 'ytick.direction': 'in'}) #%% constraints x_min = 0 # death below this x_max = 20 # maximum weight t_max = 100 # maximum time foraging_efficiencies = np.linspace(0, 1, 10) # potential foraging efficiencies R = 10.0 # Resource level #%% make the body size and time categories body_sizes = np.arange(x_min, x_max+1) time_steps = np.arange(t_max) #%% parameter functions @jit def metabolic_fmr(x, u,temp): # metabolic cost function fmr = 0.125*(2**(0.2*temp))*(1 + 0.5*u) + x*0.1 return fmr def intake_dist(u): # intake stochastic function (returns a vector) g = st.binom.pmf(np.arange(R+1), R, u) return g @jit def mass_gain(x, u, temp): # mass gain function (returns a vector) x_prime = x - metabolic_fmr(x, u,temp) + np.arange(R+1) x_prime = np.minimum(x_prime, x_max) x_prime = np.maximum(x_prime, 0) return x_prime @jit def prob_attack(P): # probability of an attack p_a = 0.02*P return p_a @jit def prob_see(u): # probability of not seeing an attack p_s = 1-(1-u)**0.3 return p_s @jit def prob_lethal(x): # probability of lethality given a successful attack p_l = 0.5*np.exp(-0.05*x) return p_l @jit def prob_mort(P, u, x): p_m = prob_attack(P)*prob_see(u)*prob_lethal(x) return np.minimum(p_m, 1) #%% terminal fitness function @jit def terminal_fitness(x): t_f = 15.0*x/(x+5.0) return t_f #%% linear interpolation function @jit def linear_interpolation(x, F, t): floor = x.astype(int) delta_c = x-floor ceiling = floor + 1 ceiling[ceiling&gt;x_max] = x_max floor[floor&lt;x_min] = x_min interpolated_F = (1-delta_c)*F[floor,t] + (delta_c)*F[ceiling,t] return interpolated_F #%% solver @jit def solver_jit(P, temp): F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort D = np.zeros((len(body_sizes), len(time_steps))) # Decision for t in range(t_max-1)[::-1]: for x in range(x_min+1, x_max+1): # iterate over every body size except dead for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency u = foraging_efficiencies[i] g_u = intake_dist(u) # calculate the distribution of intakes xp = mass_gain(x, u, temp) # calculate the mass gain p_m = prob_mort(P, u, x) # probability of mortality V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation vmax = V[:,x,t].max() idx = np.argwhere(V[:,x,t]==vmax).min() D[x,t] = foraging_efficiencies[idx] F[x,t] = vmax return D, F def solver_norm(P, temp): F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort D = np.zeros((len(body_sizes), len(time_steps))) # Decision for t in range(t_max-1)[::-1]: for x in range(x_min+1, x_max+1): # iterate over every body size except dead for i in range(len(foraging_efficiencies)): # iterate over every possible 
foraging efficiency u = foraging_efficiencies[i] g_u = intake_dist(u) # calculate the distribution of intakes xp = mass_gain(x, u, temp) # calculate the mass gain p_m = prob_mort(P, u, x) # probability of mortality V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation vmax = V[:,x,t].max() idx = np.argwhere(V[:,x,t]==vmax).min() D[x,t] = foraging_efficiencies[idx] F[x,t] = vmax return D, F </code></pre> <p>The individual jit functions tend to be much faster than the un-jitted ones. For example, prob_mort is about 600% faster once it has been run through jit. However, the solver itself isn't much faster:</p> <pre><code>In [3]: %timeit -n 10 solver_jit(200, 25) 10 loops, best of 3: 3.94 s per loop In [4]: %timeit -n 10 solver_norm(200, 25) 10 loops, best of 3: 4.09 s per loop </code></pre> <p>I know some functions can't be jitted, so I replaced the st.binom.pmf function with a custom jit function and that actually slowed down the time to about 17s per loop, over 5x slower. Presumably because the scipy functions are, at this point, heavily optimized.</p> <p>So I suspect the slowness is either in the linear_interpolate function or somewhere in the solver code outside of the jitted functions (because at one point I un-jitted all the functions and ran solver_norm and got the same time). Any thoughts on where the slow part would be and how to speed it up?</p> <p><strong>UPDATE</strong></p> <p>Here's the binomial code I used in an attempt to speed up jit</p> <pre><code>@jit def factorial(n): if n==0: return 1 else: return n*factorial(n-1) @vectorize([float64(float64,float64,float64)]) def binom(k, n, p): binom_coef = factorial(n)/(factorial(k)*factorial(n-k)) pmf = binom_coef*p**k*(1-p)**(n-k) return pmf @jit def intake_dist(u): # intake stochastic function (returns a vector) g = binom(np.arange(R+1), R, u) return g </code></pre> <p><strong>UPDATE 2</strong> I tried running my binomial code in nopython mode and found out I was doing it wrong because it was recursive. Upon fixing that by changing code to:</p> <pre><code>@jit(int64(int64), nopython=True) def factorial(nn): res = 1 for ii in range(2, nn + 1): res *= ii return res @vectorize([float64(float64,float64,float64)], nopython=True) def binom(k, n, p): binom_coef = factorial(n)/(factorial(k)*factorial(n-k)) pmf = binom_coef*p**k*(1-p)**(n-k) return pmf </code></pre> <p>the solver now runs at</p> <pre><code>In [34]: %timeit solver_jit(200, 25) 1 loop, best of 3: 921 ms per loop </code></pre> <p>which is about 3.5x faster. However, solver_jit() and solver_norm() still run at the same pace, which means there is some code outside the jit functions slowing it down.</p>
<p>I was able to make a few changes to your code to make it so the jit version could compile completely in <code>nopython</code> mode. On my laptop, this results in:</p> <pre><code>%timeit solver_jit(200, 25) 1 loop, best of 3: 50.9 ms per loop %timeit solver_norm(200, 25) 1 loop, best of 3: 192 ms per loop </code></pre> <p>For reference, I'm using Numba 0.27.0. I'll admit that Numba's compilation errors still make it difficult to identify what is going on, but since I've been playing with it for a while, I've built up an intuition for what needs to be fixed. The complete code is below, but here is the list of changes I made:</p> <ul> <li>In <code>linear_interpolation</code> change <code>x.astype(int)</code> to <code>x.astype(np.int64)</code> so it could compile in <code>nopython</code> mode. </li> <li>In the solver, use <code>np.sum</code> as a function and not a method of an array.</li> <li><code>np.argwhere</code> isn't supported. Write a custom loop.</li> </ul> <p>There are probably some further optimizations that could be made, but this gives an initial speed-up.</p> <p>The full code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as st import seaborn as sns from numba import jit, vectorize, float64, autojit, njit sns.set(context='talk', style='ticks', font_scale=1.2, rc={'figure.figsize': (6.5, 5.5), 'xtick.direction': 'in', 'ytick.direction': 'in'}) #%% constraints x_min = 0 # death below this x_max = 20 # maximum weight t_max = 100 # maximum time foraging_efficiencies = np.linspace(0, 1, 10) # potential foraging efficiencies R = 10.0 # Resource level #%% make the body size and time categories body_sizes = np.arange(x_min, x_max+1) time_steps = np.arange(t_max) #%% parameter functions @njit def metabolic_fmr(x, u,temp): # metabolic cost function fmr = 0.125*(2**(0.2*temp))*(1 + 0.5*u) + x*0.1 return fmr @njit() def factorial(nn): res = 1 for ii in range(2, nn + 1): res *= ii return res @vectorize([float64(float64,float64,float64)], nopython=True) def binom(k, n, p): binom_coef = factorial(n)/(factorial(k)*factorial(n-k)) pmf = binom_coef*p**k*(1-p)**(n-k) return pmf @njit def intake_dist(u): # intake stochastic function (returns a vector) g = binom(np.arange(R+1), R, u) return g @njit def mass_gain(x, u, temp): # mass gain function (returns a vector) x_prime = x - metabolic_fmr(x, u,temp) + np.arange(R+1) x_prime = np.minimum(x_prime, x_max) x_prime = np.maximum(x_prime, 0) return x_prime @njit def prob_attack(P): # probability of an attack p_a = 0.02*P return p_a @njit def prob_see(u): # probability of not seeing an attack p_s = 1-(1-u)**0.3 return p_s @njit def prob_lethal(x): # probability of lethality given a successful attack p_l = 0.5*np.exp(-0.05*x) return p_l @njit def prob_mort(P, u, x): p_m = prob_attack(P)*prob_see(u)*prob_lethal(x) return np.minimum(p_m, 1) #%% terminal fitness function @njit def terminal_fitness(x): t_f = 15.0*x/(x+5.0) return t_f #%% linear interpolation function @njit def linear_interpolation(x, F, t): floor = x.astype(np.int64) delta_c = x-floor ceiling = floor + 1 ceiling[ceiling&gt;x_max] = x_max floor[floor&lt;x_min] = x_min interpolated_F = (1-delta_c)*F[floor,t] + (delta_c)*F[ceiling,t] return interpolated_F #%% solver @njit def solver_jit(P, temp): F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness 
for each foraging effort D = np.zeros((len(body_sizes), len(time_steps))) # Decision for t in range(t_max-2,-1,-1): for x in range(x_min+1, x_max+1): # iterate over every body size except dead for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency u = foraging_efficiencies[i] g_u = intake_dist(u) # calculate the distribution of intakes xp = mass_gain(x, u, temp) # calculate the mass gain p_m = prob_mort(P, u, x) # probability of mortality V[i,x,t] = (1 - p_m)*np.sum((linear_interpolation(xp, F, t+1)*g_u)) # Fitness calculation vmax = V[:,x,t].max() for k in xrange(V.shape[0]): if V[k,x,t] == vmax: idx = k break #idx = np.argwhere(V[:,x,t]==vmax).min() D[x,t] = foraging_efficiencies[idx] F[x,t] = vmax return D, F def solver_norm(P, temp): F = np.zeros((len(body_sizes), len(time_steps))) # Expected fitness F[:,-1] = terminal_fitness(body_sizes) # expected terminal fitness for every body size V = np.zeros((len(foraging_efficiencies), len(body_sizes), len(time_steps))) # Fitness for each foraging effort D = np.zeros((len(body_sizes), len(time_steps))) # Decision for t in range(t_max-1)[::-1]: for x in range(x_min+1, x_max+1): # iterate over every body size except dead for i in range(len(foraging_efficiencies)): # iterate over every possible foraging efficiency u = foraging_efficiencies[i] g_u = intake_dist(u) # calculate the distribution of intakes xp = mass_gain(x, u, temp) # calculate the mass gain p_m = prob_mort(P, u, x) # probability of mortality V[i,x,t] = (1 - p_m)*(linear_interpolation(xp, F, t+1)*g_u).sum() # Fitness calculation vmax = V[:,x,t].max() idx = np.argwhere(V[:,x,t]==vmax).min() D[x,t] = foraging_efficiencies[idx] F[x,t] = vmax return D, F </code></pre>
python|performance|numpy|numba
2
3,180
38,849,992
How do i change color based on value of an HTML table generated from a pd.DataFrame using to_html
<p>I have a pandas dataFrame which I am converting to an HTML table using <code>to_html()</code> however I would like to color certain cells based on values in the HTML table that I return. </p> <p>Any idea how to go about this? </p> <p>Eg: All cells in a column called 'abc' that have a value greater than 5 must appear red else blue.</p>
<p>here is one way to do this:</p> <pre><code>df = pd.DataFrame(np.random.randint(0,10, (5,3)), columns=list('abc')) def color_cell(cell): return 'color: ' + ('red' if cell &gt; 5 else 'green') html = df.style.applymap(color_cell, subset=['a']).render() with open('c:/temp/a.html', 'w') as f: f.write(html) </code></pre> <p>result:</p> <p><a href="https://i.stack.imgur.com/lMoVL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lMoVL.jpg" alt="enter image description here"></a></p>
python|html|css|pandas
2
3,181
38,532,939
Pandas - join item from different dataframe within an array
<p>I have a first data frame looking like this</p> <pre><code>item_id   | options
------------------------------------------
item_1_id | [option_1_id, option_2_id]
</code></pre> <p>And a second like this:</p> <pre><code>option_id   | option_name
---------------------------
option_1_id | option_1_name
</code></pre> <p>And I'd like to transform my first data set to:</p> <pre><code>item_id   | options
----------------------------------------------
item_1_id | [option_1_name, option_2_name]
</code></pre> <p>What is an elegant way to do so using Pandas' data frames?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>apply</code></a>.</p> <p>For the record, storing lists in <code>DataFrames</code> is typically unnecessary and not very "pandonic". Also, if you only have one column, you can do this with a <code>Series</code> (though this solution also works for <code>DataFrames</code>).</p> <h3>Setup</h3> <p>Build the <code>Series</code> with the lists of options.</p> <pre><code>index = list('abcde') s = pd.Series([['opt1'], ['opt1', 'opt2'], ['opt0'], ['opt1', 'opt4'], ['opt3']], index=index) </code></pre> <p>Build the <code>Series</code> with the names.</p> <pre><code>index_opts = ['opt%s' % i for i in range(5)] vals_opts = ['name%s' % i for i in range(5)] s_opts = pd.Series(vals_opts, index=index_opts) </code></pre> <h3>Solution</h3> <p>Map options to names using <code>apply</code>. The lambda function looks up each option in the <code>Series</code> mapping options to names. It is applied to each element of the <code>Series</code>.</p> <pre><code>s.apply(lambda l: [s_opts[opt] for opt in l]) </code></pre> <p>outputs</p> <pre><code>a [name1] b [name1, name2] c [name0] d [name1, name4] e [name3] </code></pre>
python|pandas
0
3,182
38,531,670
Create pandas pivot table on new sheet in workbook
<p>I am trying to send my pivot table that I have created onto a new sheet in the workbook, however, for some reason when I execute my code a new sheet is created with the pivot table (sheet is called 'Sheet1') and the data sheet gets deleted.</p> <p>Here is my code:</p> <pre><code>worksheet2 = workbook.create_sheet() worksheet2.title = 'Sheet1' worksheet2 = workbook.active workbook.save(filename) excel = pd.ExcelFile(filename) df = pd.read_excel(filename, usecols=['Product Description', 'Supervisor']) table1 = df[['Product Description', 'Supervisor']].pivot_table(index='Supervisor', columns='Product Description', aggfunc=len, fill_value=0, margins=True, margins_name='Grand Total') print table1 writer = pd.ExcelWriter(filename, engine='xlsxwriter') table1.to_excel(writer, sheet_name='Sheet1') workbook.save(filename) writer.save() </code></pre> <p>Also, i'm having a bit of trouble with my pivot table design. Here is what the pivot table looks like: </p> <p><a href="https://i.stack.imgur.com/H8hgL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H8hgL.png" alt="enter image description here"></a></p> <p>How can I add a column to the end that sums up each row? Like this: (I just need the column at the end, I don't care about formatting it like that or anything)</p> <p><a href="https://i.stack.imgur.com/WZPqz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WZPqz.png" alt="enter image description here"></a></p>
<p>Just use <code>margins=True</code> and <code>margins_name='Grand Total'</code> parameters when calling <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow">pivot_table()</a></p> <p>Demo:</p> <pre><code>In [15]: df = pd.DataFrame(np.random.randint(0, 5, size=(10, 3)), columns=list('abc')) In [16]: df Out[16]: a b c 0 4 3 0 1 1 1 4 2 4 4 0 3 2 3 2 4 1 1 3 5 3 1 3 6 3 3 0 7 0 2 0 8 2 1 1 9 4 2 2 In [17]: df.pivot_table(index='a', columns='b', aggfunc='sum', fill_value=0, margins=True, margins_name='Grand Total') Out[17]: c b 1 2 3 4 Grand Total a 0 0.0 0.0 0.0 0.0 0.0 1 7.0 0.0 0.0 0.0 7.0 2 1.0 0.0 2.0 0.0 3.0 3 3.0 0.0 0.0 0.0 3.0 4 0.0 2.0 0.0 0.0 2.0 Grand Total 11.0 2.0 2.0 0.0 15.0 </code></pre>
python|pandas|dataframe|openpyxl
5
3,183
63,110,589
GridSearchCV results are not reproducable
<p>Im new to Keras and I need your professional help. I have used GridSearchCV to optmize my regression network. When i try to use the results, the newly created network is far worse in regards to the mean squared error than the one calculated by GridSearch. The GridSearchCV code:</p> <pre><code>import os import numpy as np import pandas as pd import tensorflow as tf from time import time from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score from sklearn.preprocessing import StandardScaler, MinMaxScaler from tensorflow import keras from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard from keras.models import Sequential, Model from keras.layers import Dense, Activation, Flatten, Input, Dropout, LeakyReLU from keras.utils import plot_model from keras.optimizers import SGD, rmsprop, adam from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor from keras.initializers import uniform, normal, glorot_uniform from keras.losses import MAPE #Data preprocessing def get_data(): data = pd.read_csv(&quot;test.csv&quot;, sep=&quot;;&quot;, usecols=[&quot;rHsubLS&quot;,&quot;b&quot;,&quot;lowerSetpoint&quot;]) test = data.loc[:,['rHsubLS','b']] target = data.loc[:,'lowerSetpoint'] print(test.shape) print(target.shape) return test.astype(float), target.astype(float) def split_data(test, target): X_train, X_test, y_train, y_test = train_test_split(test, target) X_train = np.array(X_train) X_test = np.array(X_test) y_train = np.array(y_train) y_test = np.array(y_test) stdsc1 = StandardScaler() train_data_std = stdsc1.fit_transform(X_train) test_data_std = stdsc1.fit_transform(X_test) y_train_1 = np.reshape(y_train, (-1, 1)) y_test_1 = np.reshape(y_test, (-1, 1)) train_target_std = stdsc1.fit_transform(y_train_1) test_target_std = stdsc1.fit_transform(y_test_1) return train_data_std, test_data_std, train_target_std, test_target_std #Network Creation def create_NN(optimizer='rmsprop', init='glorot_uniform', alpha=0.15, activation_func='tanh'): NN_model = Sequential() #input layer NN_model.add(Dense(128, kernel_initializer=init, input_dim=2, activation=activation_func)) #hidden layers NN_model.add(LeakyReLU(alpha=alpha)) NN_model.add(Dense(256, kernel_initializer=init, activation='relu')) #output layer NN_model.add(Dense(1, kernel_initializer=init, activation='linear')) NN_model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=[&quot;mse&quot;, &quot;mape&quot;]) NN_model.summary() return NN_model #GridSearchCV def train_NN(NN_model, train_data, train_target): seed = 4 np.random.seed(seed) model = KerasRegressor(build_fn=create_NN, verbose=1) optimizers = ['rmsprop', 'adam', 'SGD'] inits = ['glorot_uniform', 'normal', 'uniform', 'he_uniform'] activation_funcs = ['tanh','relu','softmax'] epochs = [50, 100, 150] batches = [50, 100, 500] alphas = [0.15, 0.45, 0.3] grid_parameter = dict(optimizer=optimizers, epochs=epochs, batch_size=batches, init=inits, alpha=alphas, activation_func=activation_funcs)#, dropout_rate=dropout) if __name__ == '__main__': grid = GridSearchCV(estimator=model, scoring='neg_mean_squared_error' , param_grid=grid_parameter, verbose=1, cv=3) grid_results = grid.fit(train_data, train_target, use_multiprocessing=True, shuffle=True, workers=8) print(&quot;Best: %f using %s&quot; % (grid_results.best_score_, grid_results.best_params_)) means = grid_results.cv_results_['mean_test_score'] stds = grid_results.cv_results_['std_test_score'] 
params = grid_results.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print(&quot;%f (%f) with: %r&quot; % (mean, stdev, param)) try: test, target = get_data() train_data, test_data, train_target, test_target = split_data(test, target) print(&quot;Data split\n&quot;) NN_model = create_NN() train_NN(NN_model, train_data, train_target) except (KeyboardInterrupt, SystemExit): raise </code></pre> <p>The results of the GridSearch:</p> <p><strong>Best: -0.000064 using {'activation_func': 'relu', 'alpha': 0.3, 'batch_size': 50, 'epochs': 150, 'init': 'he_uniform', 'optimizer': 'adam'}</strong></p> <p>When I try to reproduce this network with this code:</p> <pre><code>import os import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt from time import time from sklearn.model_selection import train_test_split from sklearn.metrics import mean_absolute_error, mean_squared_error from sklearn.preprocessing import StandardScaler, MinMaxScaler from tensorflow import keras from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard from keras.models import Sequential, Model from keras.layers import Dense, Activation, Flatten, Input, Dropout, PReLU, LeakyReLU from keras.utils import plot_model from keras.optimizers import SGD from keras.losses import MeanAbsolutePercentageError def get_data(): data = pd.read_csv(&quot;test.csv&quot;, sep=&quot;;&quot;, usecols=[&quot;rHsubLS&quot;,&quot;b&quot;,&quot;lowerSetpoint&quot;]) test = data.loc[:,['rHsubLS','b']] target = data.loc[:,'lowerSetpoint'] print(test.shape) print(target.shape) return test.astype(float), target.astype(float) def split_data(test, target): X_train, X_test, y_train, y_test = train_test_split(test, target) X_train = np.array(X_train) X_test = np.array(X_test) y_train = np.array(y_train) y_test = np.array(y_test) stdsc1 = StandardScaler() train_data_std = stdsc1.fit_transform(X_train) test_data_std = stdsc1.fit_transform(X_test) y_train_1 = np.reshape(y_train, (-1, 1)) y_test_1 = np.reshape(y_test, (-1, 1)) train_target_std = stdsc1.fit_transform(y_train_1) test_target_std = stdsc1.fit_transform(y_test_1) return train_data_std, test_data_std, train_target_std, test_target_std def create_NN(): NN_model = Sequential() #input layer NN_model.add(Dense(128, input_dim=2, kernel_initializer='he_uniform', activation='relu')) #hidden layers NN_model.add(LeakyReLU(0.3)) NN_model.add(Dense(256, kernel_initializer='he_uniform', activation='relu')) #output layer NN_model.add(Dense(1, activation='linear')) keras.backend.set_epsilon(1) NN_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse','mape']) NN_model.summary() return NN_model def train_NN(NN_model, train_data, train_target, test_data, test_target): history = NN_model.fit(train_data, train_target, epochs=150, shuffle=True, batch_size=50, verbose=1, use_multiprocessing=True) return history def test_NN(NN_model, test_data, test_target, train_data, train_target): mean_test = NN_model.evaluate(test_data, test_target, verbose=1) mean_train = NN_model.evaluate(train_data, train_target, verbose=1) return mean_test, mean_train try: seed = 4 np.random.seed(seed) test, target = get_data() train_data, test_data, train_target, test_target = split_data(test, target) print(&quot;Data split\n&quot;) NN_model = create_NN() print(&quot;Neural Network created\n&quot;) history = train_NN(NN_model, train_data, train_target, test_data, test_target) mean_test, mean_train = test_NN(NN_model, test_data, test_target, train_data, 
train_target) print(&quot;Durchschnittliche Abweichung Training: &quot;, mean_train) print(&quot;Durchschnittliche Abweichung Test: &quot;, mean_test) print(NN_model.metrics_names) NN_model.save('Regelung_v1.h5') print(&quot;Neural Network saved&quot;) except (KeyboardInterrupt, SystemExit): raise </code></pre> <p>I get this result: <strong>mse loss training data: 0.028168134637475015; mse loss test data: 0.028960488473176955</strong></p> <p>The mean average percentage error is at about 9%. This result is not what i expected. Where is my mistake?</p> <p>Thank you for your help in advance</p> <p>Have a nice day!</p> <p>PC Specs:</p> <p>Intel i5 4570 16GB RAM + 16 GB page file Nvidia GTX 1070 3 TB HDD</p> <p>Software:</p> <p>Windows 10 Geforce Game ready driver 451.48 Tensorflow 2.2.0 Keras 2.3.1 Sklearn 0.23.1 Cuda 10.1 Python 3.7.7</p> <p>Edit: Here are a few lines of the test.csv</p> <pre><code>TIMESTAMP;rHsubLS;b;lowerSetpoint 20200714091423000.00000000000;2.28878288783;-0.74361743617;-0.27947195702 20200714091423000.00000000000;0.13274132741;-0.94552945529;-0.32351276857 20200714091423000.00000000000;1.85753857539;0.77844778448;0.22244954249 20200714091423000.00000000000;1.31896318963;0.44518445184;0.33573301999 20200714091423000.00000000000;2.55885558856;-0.77792777928;-0.28837806344 </code></pre>
<p>The output layer was missing its weight initializer:</p> <pre><code>NN_model.add(Dense(1, kernel_initializer='he_uniform', activation='linear'))
</code></pre>
python|tensorflow|keras|grid-search
0
3,184
62,909,038
Use pandas to create multi plot in a loop?
<p>I am using the jupyter notebook to draw a bar chart, and I want to draw a pandas plots in a for loop.</p> <p>Here is my Dataframe that I want to draw a bar chart in a for loop</p> <pre><code>In[7]: test_df Lehi Boise 1 True True 2 True True 3 False False 4 True True 5 True True 6 True True 7 True True 8 False False </code></pre> <p>My code</p> <pre><code>place = ['Lehi','Boise'] for p in place: bar = test_df.groupby(p).size().plot(kind='bar') </code></pre> <p>But I only get the 'Boise' bar chart... If I write them in different jupyter cells, it works well</p> <pre><code>In[9] bar = test_df.groupby('Lehi').size().plot(kind='bar') In[10] bar = test_df.groupby('Boise').size().plot(kind='bar') </code></pre> <p>Is there any solution to solve this problem in jupyter notebook. Thanks!</p>
<p>The issue is that without extra specification, the loop is overwriting the same plotting axes. You can more explicitly create a new Axes within the loop for each plot, and map <code>df.plot</code> to those Axes:</p> <pre><code>colors = ['red', 'green'] place = ['Lehi','Boise'] for p in place: fig, ax = plt.subplots(figsize=(5,5)) bar = test_df.groupby(p).size().plot(kind='bar', color=colors, ax=ax) </code></pre> <p>This will create multiple plots under one cell. I include the <code>colors</code> bit b/c there was something like that in your original Q (which was undefined). I believe the <code>groupby</code> operation will always sort <code>False</code> first then <code>True</code>, so you just have to present the colors in the order you want to match.</p>
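<p>If you would rather have the bar charts side by side in a single figure instead of one figure per group, a small variation of the same idea creates the grid of Axes up front (the figure size here is just a guess):</p> <pre><code>fig, axes = plt.subplots(1, len(place), figsize=(10, 4))
for ax, p in zip(axes, place):
    test_df.groupby(p).size().plot(kind='bar', ax=ax, title=p)
plt.tight_layout()
</code></pre>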
python|pandas|matplotlib|seaborn
1
3,185
63,257,855
Pandas keep latest row and aggregate value
<p>I have a dataframe for Projects. If a project fails a test then that test is repeated at a later date and the passed value is updated. df_Project =</p> <pre><code>Date      Project_ID  TestA  TestB  TestC  TestD
27072020  Project1    Pass   Pass   Pass   Fail
30072020  Project1    None   None   None   Pass
</code></pre> <p>I want to create another dataframe which keeps the last date only and aggregates the test results as Pass if any date passed. df_Summary =</p> <pre><code>Date      Project_ID  TestA  TestB  TestC  TestD
30072020  Project1    Pass   Pass   Pass   Pass
</code></pre> <p>How can I do it in pandas?</p>
<p>You can do <code>groupby</code> with <code>max</code></p> <pre><code>out=df.groupby('Project_ID').max().reset_index() Out[115]: Project_ID Date TestA TestB TestC TestD 0 Project1 30072020 Pass Pass Pass Pass </code></pre> <p>The reason why this work</p> <pre><code>'Pass'&gt;'Fail' Out[116]: True </code></pre>
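<p>If the labels ever stop sorting in this convenient order (say a third status that happens to sort after <code>Pass</code>), a sketch that states the "any Pass wins" rule explicitly, assuming the four test columns shown above:</p> <pre><code>tests = ['TestA', 'TestB', 'TestC', 'TestD']
agg = {c: lambda s: 'Pass' if s.eq('Pass').any() else 'Fail' for c in tests}
agg['Date'] = 'max'
out = df.groupby('Project_ID', as_index=False).agg(agg)
</code></pre>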
python|pandas
3
3,186
67,675,432
PANDAS dataframe concat and pivot data
<p>I'm leaning python pandas and playing with some example data. I have a CSV file of a dataset with net worth by percentile of US population by quarter of year. I've successfully subseted the data by percentile to create three scatter plots of net worth by year, one plot for each of three population sections. However, I'm trying to combine those three plots to one data frame so I can combine the lines on a single plot figure.</p> <p>Data here: <a href="https://www.federalreserve.gov/releases/z1/dataviz/download/dfa-income-levels.csv" rel="nofollow noreferrer">https://www.federalreserve.gov/releases/z1/dataviz/download/dfa-income-levels.csv</a></p> <p>Code thus far:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt # importing numpy as np import numpy as np df = pd.read_csv(&quot;dfa-income-levels.csv&quot;) df99th = df.loc[df['Category']==&quot;pct99to100&quot;] df99th.plot(x='Date',y='Net worth', title='Net worth by percentile') dfmid = df.loc[df['Category']==&quot;pct40to60&quot;] dfmid.plot(x='Date',y='Net worth') dflow = df.loc[df['Category']==&quot;pct00to20&quot;] dflow.plot(x='Date',y='Net worth') data = dflow['Net worth'], dfmid['Net worth'], df99th['Net worth'] headers = ['low', 'mid', '99th'] newdf = pd.concat(data, axis=1, keys=headers) </code></pre> <p>And that yields a dataframe shown below, which is not what I want for plotting the data.</p> <pre><code> low mid 99th 0 NaN NaN 3514469.0 3 NaN 2503918.0 NaN 5 585550.0 NaN NaN 6 NaN NaN 3602196.0 9 NaN 2518238.0 NaN ... ... ... ... 747 NaN 8610343.0 NaN 749 3486198.0 NaN NaN 750 NaN NaN 32011671.0 753 NaN 8952933.0 NaN 755 3540306.0 NaN NaN </code></pre> <p>Any recommendations for other ways to approach this?</p>
<p>I don't see the categories mentioned in your code in the csv file you shared. In order to concat dataframes along columns, you could use <code>pd.concat</code> along <code>axis=1</code>. It concats the columns of same index number. So first set the <code>Date</code> column as index and then concat them, and then again bring back <code>Date</code> as a dataframe column.</p> <ul> <li>To set <code>Date</code> column as index of dataframe, <code>df1 = df1.set_index('Date')</code> and <code>df2 = df2.set_index('Date')</code></li> <li>Concat the dataframes <code>df1</code> and <code>df2</code> using <code>df_merge = pd.concat([df1,df2],axis=1)</code> or <code>df_merge = pd.merge(df1,df2,on='Date')</code></li> <li>bringing back <code>Date</code> into column by <code>df_merge = df_merge.reset_index()</code></li> </ul>
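<p>Putting those steps together on the three frames from the question, a rough sketch (the short column labels are arbitrary; renaming avoids ending up with three identical <code>Net worth</code> columns):</p> <pre><code>frames = {'low': dflow, 'mid': dfmid, '99th': df99th}
to_merge = [f.set_index('Date')['Net worth'].rename(name) for name, f in frames.items()]
df_merge = pd.concat(to_merge, axis=1).reset_index()      # one row per Date, one column per group
df_merge.plot(x='Date', title='Net worth by percentile')  # all three lines on a single figure
</code></pre>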
python|pandas|dataframe
1
3,187
32,092,169
merge few pivot tables in pandas
<p>How can I merge two pandas pivot tables? When I try to run my code I get a KeyError:</p> <blockquote> <pre><code>data_pivot= pandas.DataFrame(data.pivot_table(values = 'NR_ACTIONS', index=["HOUR", "OPID", "NAME"], columns='CONTACTED_PERSON_NEW', aggfunc='sum'))
data_pivot.fillna(0, inplace=True)
data2_pivot= pandas.DataFrame(data2.pivot_table(values = 'AMOUNT_PA', index=["HOUR", "OPID", "NAME"], columns='PA_TYPE', aggfunc='sum'))
data2_pivot.fillna(0, inplace=True)
all_data = pandas.merge(data_pivot, data2_pivot, 'left', on = ["HOUR", "OPID", "NAME"] )
</code></pre> </blockquote>
<p>answer for my question is :</p> <pre><code>data_pivot= pandas.DataFrame(data.pivot_table(values = 'NR_ACTIONS', index=["HOUR", "OPID", "NAME"], columns='CONTACTED_PERSON_NEW', aggfunc='sum')) data_pivot.fillna(0, inplace=True) data_pivot.reset_index( inplace=True) data2_pivot= pandas.DataFrame(data2.pivot_table(values = 'AMOUNT_PA', index=["HOUR", "OPID", "NAME"], columns='PA_TYPE', aggfunc='sum')) data2_pivot.fillna(0, inplace=True) data2_pivot.reset_index( inplace=True) all_data = pandas.merge(data_pivot, data2_pivot, 'left', on = ["HOUR", "OPID", "NAME"] ) </code></pre>
python|python-3.x|pandas
8
3,188
41,589,717
Finding minimum value for each level of a multi-index dataframe
<p>I have a DataFrame that looks like this:</p> <pre><code> data a b 1 1 0.1 2 0.2 3 0.3 2 1 0.5 2 0.6 3 0.7 </code></pre> <p>and I want to find the minimum value for each level of <code>a</code> ignoring the <code>b</code> level, so as an output I'm looking for something like</p> <pre><code>a min 1 0.1 2 0.5 </code></pre>
<p>The simplest is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.min.html" rel="noreferrer"><code>min</code></a> with the parameter <code>level=0</code>:</p> <pre><code>print (df.data.min(level=0).reset_index(name='min'))
   a  min
0  1  0.1
1  2  0.5
</code></pre> <p>If you need the output as a <code>df</code> with only one column:</p> <pre><code>print (df.min(level=0))
   data
a
1   0.1
2   0.5
</code></pre> <p>Or <code>groupby</code> the first level and aggregate with <code>min</code>:</p> <pre><code>print (df.groupby(level=0).data.min().reset_index(name='min'))
   a  min
0  1  0.1
1  2  0.5
</code></pre>
python|pandas
7
3,189
41,613,767
tf.parse_example used in mnist export example
<p>I am new to tensorflow and are reading mnist_export.py in tensorflow serving example.</p> <p>There is something here I cannot understand:</p> <pre><code> sess = tf.InteractiveSession() serialized_tf_example = tf.placeholder(tf.string, name='tf_example') feature_configs = { 'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32), } tf_example = tf.parse_example(serialized_tf_example, feature_configs) x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name </code></pre> <p>Above, serialized_tf_example is a Tensor.</p> <p>I have read the api document <a href="https://www.tensorflow.org/api_docs/python/io_ops/converting#parse_example" rel="nofollow noreferrer">tf.parse_example</a> but it seems that <code>serialized</code> is serialized <code>Example</code> protos like:</p> <pre><code>serialized = [ features { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } }, features { feature []}, features { feature { key: "ft" value { float_list { value: [3.0] } } } ] </code></pre> <p>So how to understand <code>tf_example = tf.parse_example(serialized_tf_example, feature_configs)</code> here as <code>serialized_tf_example</code> is a Tensor, not <code>Example</code> proto?</p>
<p>The below mentioned code provides the simple example of using parse_example</p> <pre><code>import tensorflow as tf sess = tf.InteractiveSession() serialized_tf_example = tf.placeholder(tf.string, shape=[1], name='serialized_tf_example') feature_configs = {'x': tf.FixedLenFeature(shape=[1], dtype=tf.float32)} tf_example = tf.parse_example(serialized_tf_example, feature_configs) feature_dict = {'x': tf.train.Feature(float_list=tf.train.FloatList(value=[25]))} example = tf.train.Example(features=tf.train.Features(feature=feature_dict)) f = example.SerializeToString() sess.run(tf_example,feed_dict={serialized_tf_example:[f]}) </code></pre>
tensorflow|tensorflow-serving
3
3,190
27,585,577
Centralising data in numpy
<p>I have matrices with rows that need to be centralised. In other words each row has trailing zeros at both ends, while the actual data is between the trailing zeros. However, I need the number of trailing zeros to be equal at both ends or in other words what I call the data (values between the trailing zeros) to be centred at the middle of the row. Here is an example:</p> <pre><code>array: [[0, 1, 2, 0, 2, 1, 0, 0, 0], [2, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 2, 0, 0, 0]] centred_array: [[0, 0, 1, 2, 0, 2, 1, 0, 0], [0, 0, 0, 2, 1, 1, 0, 0, 0], [0, 0, 1, 0, 0, 2, 0, 0, 0]] </code></pre> <p>I hope that explains it well enough so that you can see some of the issues I am having. One, I am not guaranteed a even value for the size of the "data" so the function needs to pick a centre for even values which is consistent; also this is the case for rows (rows might have an even size which means one placed needs to be chosen as the centre).</p> <p>EDIT: I should probably note that I have a function that does this; its just that I can get 10^3 number of rows to centralise and my function is too slow, so efficiency would really help.</p> <p>@HYRY</p> <pre><code>a = np.array([[0, 1, 2, 0, 2, 1, 0, 0, 0], [2, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 2, 0, 0, 0]]) cd = [] (x, y) = np.shape(a) for row in a: trim = np.trim_zeros(row) to_add = y - np.size(trim) a = to_add / 2 b = to_add - a cd.append(np.pad(trim, (a, b), 'constant', constant_values=(0, 0)).tolist()) result = np.array(cd) print result [[0 0 1 2 0 2 1 0 0] [0 0 0 2 1 1 0 0 0] [0 0 1 0 0 2 0 0 0]] </code></pre>
<pre><code>import numpy as np def centralise(arr): # Find the x and y indexes of the nonzero elements: x, y = arr.nonzero() # Find the index of the left-most and right-most elements for each row: nonzeros = np.bincount(x) nonzeros_idx = nonzeros.cumsum() left = y[np.r_[0, nonzeros_idx[:-1]]] right = y[nonzeros_idx-1] # Calculate how much each y has to be shifted shift = ((arr.shape[1] - (right-left) - 0.5)//2 - left).astype(int) shift = np.repeat(shift, nonzeros) new_y = y + shift # Create centered_arr centered_arr = np.zeros_like(arr) centered_arr[x, new_y] = arr[x, y] return centered_arr arr = np.array([[0, 1, 2, 0, 2, 1, 0, 0, 0], [2, 1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 2, 0, 0, 0]]) print(centralise(arr)) </code></pre> <p>yields</p> <pre><code>[[0 0 1 2 0 2 1 0 0] [0 0 0 2 1 1 0 0 0] [0 0 1 0 0 2 0 0 0]] </code></pre> <hr> <p>A benchmark comparing the original code to centralise:</p> <pre><code>def orig(a): cd = [] (x, y) = np.shape(a) for row in a: trim = np.trim_zeros(row) to_add = y - np.size(trim) a = to_add / 2 b = to_add - a cd.append(np.pad(trim, (a, b), 'constant', constant_values=(0, 0)).tolist()) result = np.array(cd) return result </code></pre> <hr> <pre><code>In [481]: arr = np.tile(arr, (1000, 1)) In [482]: %timeit orig(arr) 10 loops, best of 3: 140 ms per loop In [483]: %timeit centralise(arr) 1000 loops, best of 3: 537 µs per loop In [486]: (orig(arr) == centralise(arr)).all() Out[486]: True </code></pre>
python|numpy
3
3,191
61,328,489
Assigning values based on existing numeric values in a column in pandas
<p>I would want to implement the following logic in pandas:</p> <p>if df['xxx'] &lt;= 0 then df['xyz']== 'a'</p> <p>if df['xxx'] between 0.5 and 10.97 then df['xyz']== 'b'</p> <p>if df['xxx'] between 11 and 89.57 then df['xyz']== 'c'</p> <p>if df['xxx'] > 100 then df['xyz']== 'd'</p> <p>How can I do this in the simplest way?</p> <p>Much thanks.</p>
<p>Define a function and use apply method</p> <pre><code>def fun(x): if x &lt;= 0: return 'a' elif (0 &lt; x &lt;= 1000): return 'b' elif (1000 &lt; x): return 'c' np.random.seed(3) df = pd.DataFrame(dict(xxx=np.random.choice([-1, 10, 2000],1000))) df['xyz'] = df.xxx.apply(fun) df.head(3) # xxx xyz # 0 2000 c # 1 -1 a # 2 10 b </code></pre>
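<p>If you want the exact thresholds from the question (including the gaps between the bands), a vectorised sketch with <code>numpy.select</code>; rows that fall in none of the bands are left as <code>None</code>:</p> <pre><code>import numpy as np

conditions = [df['xxx'] &lt;= 0,
              df['xxx'].between(0.5, 10.97),
              df['xxx'].between(11, 89.57),
              df['xxx'] &gt; 100]
df['xyz'] = np.select(conditions, ['a', 'b', 'c', 'd'], default=None)
</code></pre>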
python|pandas
1
3,192
61,236,399
AssertionError: Number of manager items must equal union of block items # manager items: 6004, # tot_items: 6005
<p>My code:</p> <pre><code>for column_name, column_data in summary_words.iteritems(): if column_name != "summary" and column_name != "text" and column_name != "score" and column_name != "helpfulness": summary_words[column_name] = summary_words["summary"].str.count(column_name) </code></pre> <p>where summary_words is a pandas dataframe, and "summary" is a column in that dataframe. When I run the code I get this error:</p> <blockquote> <p>AssertionError: Number of manager items must equal union of block items manager items: 6004, # tot_items: 6005</p> </blockquote> <p>Does anyone have any idea why I'm getting this error and how to fix it?</p> <pre><code>great my This love you best and will favorite watch ... step succeeds judge (who strictly things, helpfulness score summary text 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 100.0 3 "There Is So Much Darkness Now ~ Come For The ... Synopsis: On the daily trek from Juarez, Mexic... 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 100.0 3 Worthwhile and Important Story Hampered by Poo... THE VIRGIN OF JUAREZ is based on true events s... 2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 80.0 5 This movie needed to be made. The scenes in this film can be very disquietin... 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 100.0 3 distantly based on a real tragedy THE VIRGIN OF JUAREZ (2006)&lt;br /&gt;directed by K... 4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 100.0 3 "What's going on down in Juarez and shining a ... Informationally, this SHOWTIME original is ess... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 99995 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0.0 5 A Great Collection! Gave this for a friends birthday and she LOVES... 99996 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0.0 5 TOOOOO FUNNY I had not seen the MP guys for years. I have o... 99997 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0.0 5 monty python this is the best flying circus that monty pyth... 99998 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0.0 5 Python at its best and purest! If you are a serious Monty Python fan, then th... 99999 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 0.0 5 Monty Python 16 DVD set I got this as a Christmas gift for my son - th... </code></pre>
<p>It is very likely that your special-use keywords, like <code>summary</code> and <code>helpfulness</code>, are colliding with words in the vocabulary you are analyzing, which leaves the frame with duplicate column names.</p> <p>You should be able to check this pretty quickly by comparing the lengths; if the two numbers differ, some column names are repeated:</p> <pre class="lang-py prettyprint-override"><code>len(summary_words.columns)
len(set(summary_words.columns))
</code></pre> <p>See this <a href="https://stackoverflow.com/questions/35137952/pandas-concat-failing/42385623">SO Q&amp;A</a> for additional detail on the duplicate-columns issue, in the context of a <code>pd.concat</code>.</p>
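<p>If the two numbers differ, a sketch along these lines can show which names collide and make them unique again; the <code>_dup</code> suffix is just an illustrative choice, and after renaming, the word-count loop would have to use the renamed columns:</p> <pre><code># Names that occur more than once are the likely culprits
dupes = summary_words.columns[summary_words.columns.duplicated()].unique()
print(dupes)

# One possible workaround: deduplicate by suffixing repeated names
seen = {}
new_cols = []
for c in summary_words.columns:
    n = seen.get(c, 0)
    new_cols.append(c if n == 0 else f'{c}_dup{n}')
    seen[c] = n + 1
summary_words.columns = new_cols
</code></pre>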
python|python-3.x|pandas|dataframe|iteritems
5
3,193
68,818,356
Tensorflow2: How to convert string tensor to bag of words? Help needed after days of struggling
<p>I am trying to write a function that takes tensor of strings as an input and return sparse tensor of ones and zeros so that each row is a bag of words representation of one string from input.</p> <p><strong>About input</strong></p> <ul> <li>each input string is a name consisting of 1 up to 7 words</li> <li>a word occurs in one name 1 or 0 times</li> <li>inputs are lowercase and alphanumeric only</li> <li>a name can occur many times in input tensor</li> <li>some words are repeated among different names</li> </ul> <p><strong>Requirements</strong></p> <ul> <li>function should take parameter <code>k_top</code> that indicates number of most popular words that are considered (it's probably called vocabulary size)</li> <li>function should consist <strong>only of operations allowed in graph mode</strong> (e.g. <code>tensor.numpy()</code> won't work)</li> <li>compatible with Tensorflow 2.5.0</li> </ul> <p><strong>Example</strong></p> <pre><code># tensor of strings, shape: (7,) inputs = tf.strings.lower([ &quot;one two three&quot;, &quot;three&quot;, &quot;one two&quot;, &quot;one two cat&quot;, &quot;three cat&quot;, &quot;one cat two three&quot;, &quot;banana one&quot; ]) </code></pre> <p>Word frequencies:</p> <pre><code>&quot;one&quot;: 5 &quot;two&quot;: 4 &quot;three&quot;: 4 &quot;cat&quot;: 3 &quot;banana&quot;: 1 </code></pre> <p>After calling function with <code>k_top = 2</code> (take words with two top counts and there are two words with count 4) each string is represented as vector of ones and zeros indicating wheter &quot;one&quot;, &quot;two&quot;, &quot;three&quot; is present:</p> <pre><code>&quot;one two three&quot; -&gt; [1,1,1] &quot;three&quot; -&gt; [0,0,1] &quot;one two&quot; -&gt; [1,1,0] &quot;one two cat&quot; -&gt; [1,1,0] &quot;three cat&quot; -&gt; [0,0,1] &quot;one cat two three&quot; -&gt; [1,1,1] &quot;banana one&quot; -&gt; [1,0,0] </code></pre> <p>I've been trying for a few days combining different functions from <code>tf.Transform</code> module and still getting errors instead of result (probably because I'm new to Tensorflow and also have difficulty with debugging because it's hard to see contents of tensor when not in eager mode (see edit here: <a href="https://stackoverflow.com/questions/68813197/tensorflow2-how-to-print-value-of-a-tensor-returned-from-tf-function-when-eager">Tensorflow2: How to print value of a tensor returned from tf.function when eager execution is disabled?</a>)).Any help would be greatly appreciated!</p>
<p>There is no single built-in op that does this directly, but the workaround below gets to a comparable result.</p> <pre><code>import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer

inputs = [
    &quot;one two three&quot;,
    &quot;three&quot;,
    &quot;one two&quot;,
    &quot;one two cat&quot;,
    &quot;three cat&quot;,
    &quot;one cat two three&quot;,
    &quot;banana one&quot;
]

tf_keras_tokenizer = Tokenizer(oov_token=0)
tf_keras_tokenizer.fit_on_texts(inputs)
tf_keras_encoded = tf_keras_tokenizer.texts_to_sequences(inputs)
tf_keras_encoded = tf.keras.preprocessing.sequence.pad_sequences(tf_keras_encoded, padding=&quot;post&quot;)

word_index = tf_keras_tokenizer.word_index
word_counts = tf_keras_tokenizer.word_counts
# OrderedDict([('one', 5), ('two', 4), ('three', 4), ('cat', 3), ('banana', 1)])

# Below is the encoded result based on the word index.
tf_keras_encoded
array([[2, 3, 4, 0],
       [4, 0, 0, 0],
       [2, 3, 0, 0],
       [2, 3, 5, 0],
       [4, 5, 0, 0],
       [2, 5, 3, 4],
       [6, 2, 0, 0]], dtype=int32)
</code></pre> <p>Now we need to mask the values with 1s and 0s based on the top-n frequent words.</p> <pre><code>num_words = 2

# Get the frequent words by their counts; words tied on the same count are included as well.
n_frequent_words = [key for key, value in word_counts.items() if value in list(word_counts.values())[:num_words]]

# Based on the keys extracted above, get the word-index values to replace.
frequent_word_index = [value for key, value in word_index.items() if key in n_frequent_words]
</code></pre> <p>Now replace values with <code>np.where</code>: first iterate through the frequent words and mark their positions with 1, then, after the loop, set everything that is not a 1 to 0.</p> <pre><code>import numpy as np

for i in frequent_word_index:
    tf_keras_encoded = np.where(tf_keras_encoded == i, 1, tf_keras_encoded)
tf_keras_encoded = np.where(tf_keras_encoded == 1, 1, 0)
</code></pre> <p><strong>Result:</strong> note that the 1s follow the word order within each input string; rearrange as needed if you want a fixed vocabulary order.</p> <pre><code>tf_keras_encoded
array([[1, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 1, 1],
       [0, 1, 0, 0]])
</code></pre>
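<p>A different sketch, not a drop-in replacement for the tie-aware ranking above, is the (then experimental) <code>TextVectorization</code> layer, which can emit multi-hot bag-of-words rows directly and is callable inside a <code>tf.function</code> once it has been adapted. The <code>+ 2</code> offset for the padding/OOV slots and the exact column layout are assumptions worth verifying against the installed TF 2.5 build:</p> <pre><code>import tensorflow as tf

names = tf.constant([
    'one two three', 'three', 'one two', 'one two cat',
    'three cat', 'one cat two three', 'banana one',
])

k_top = 3  # vocabulary size to keep

# max_tokens also counts the reserved padding/OOV slots, hence the + 2 (assumption)
vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
    max_tokens=k_top + 2,
    output_mode='binary')
vectorizer.adapt(names)

@tf.function
def to_bag_of_words(batch):
    # graph-mode friendly once the vocabulary has been adapted
    return vectorizer(batch)

print(to_bag_of_words(names))
</code></pre>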
python|tensorflow|nlp|tensorflow2.0|word2vec
0
3,194
68,867,557
pandas resample nested ohlc data
<p>I have ohlc data that is contained in a 'mid' column and am not sure how to resample to keep the correct ohlc data</p> <p>Here is my data ... Columns are time, mid, volume , complete</p> <pre><code>candle_data time \ 0 2021-08-20 19:43:00+00:00 1 2021-08-20 19:44:00+00:00 2 2021-08-20 19:45:00+00:00 3 2021-08-20 19:46:00+00:00 4 2021-08-20 19:47:00+00:00 5 2021-08-20 19:48:00+00:00 6 2021-08-20 19:49:00+00:00 7 2021-08-20 19:50:00+00:00 8 2021-08-20 19:51:00+00:00 9 2021-08-20 19:52:00+00:00 10 2021-08-20 19:53:00+00:00 11 2021-08-20 19:54:00+00:00 12 2021-08-20 19:55:00+00:00 13 2021-08-20 19:56:00+00:00 14 2021-08-20 19:57:00+00:00 mid volume complete 0 {'o': 20.36418, 'h': 20.36455, 'l': 20.36075, ... 68 True 1 {'o': 20.36127, 'h': 20.36134, 'l': 20.35814, ... 49 True 2 {'o': 20.35845, 'h': 20.359, 'l': 20.3558, 'c'... 164 True 3 {'o': 20.35635, 'h': 20.35768, 'l': 20.35275, ... 155 True 4 {'o': 20.35315, 'h': 20.3535, 'l': 20.353, 'c'... 69 True 5 {'o': 20.35315, 'h': 20.3563, 'l': 20.35315, '... 146 True 6 {'o': 20.35525, 'h': 20.35776, 'l': 20.35312, ... 158 True 7 {'o': 20.35338, 'h': 20.35512, 'l': 20.35237, ... 166 True 8 {'o': 20.3524, 'h': 20.35335, 'l': 20.35123, '... 85 True 9 {'o': 20.35335, 'h': 20.35365, 'l': 20.35305, ... 44 True 10 {'o': 20.35365, 'h': 20.3544, 'l': 20.35365, '... 76 True 11 {'o': 20.3541, 'h': 20.3563, 'l': 20.3541, 'c'... 92 True 12 {'o': 20.35458, 'h': 20.36225, 'l': 20.35408, ... 188 True 13 {'o': 20.361, 'h': 20.36704, 'l': 20.36085, 'c... 392 True 14 {'o': 20.36638, 'h': 20.3672, 'l': 20.36637, '... 14 False </code></pre> <p>I have converted my time into a datetime object, and I have done something like this before where you aggregate data:</p> <pre><code>df2 = df1.resample('60min', on='date') .agg({'volume': 'sum', 'open': 'first', 'close': 'last', 'high': 'max', 'low': 'min'}) </code></pre> <p>But not sure how to accomplish it with this format.<br /> I want to do a different timeframe, eg. 5min, and:</p> <ul> <li>sum the volume</li> <li>open is first candle</li> <li>close is last</li> <li>high is max</li> <li>low is min</li> <li>complete is 1 if all are 1, but 0 if any are 0</li> </ul> <p>Here is the output of .head().to_dict()</p> <pre><code>{'time': {0: Timestamp('2021-08-20 20:17:00+0000', tz='UTC'), 1: Timestamp('2021-08-20 20:18:00+0000', tz='UTC'), 2: Timestamp('2021-08-20 20:19:00+0000', tz='UTC'), 3: Timestamp('2021-08-20 20:20:00+0000', tz='UTC'), 4: Timestamp('2021-08-20 20:21:00+0000', tz='UTC')}, 'mid': {0: {'o': 20.3778, 'h': 20.3778, 'l': 20.37066, 'c': 20.37066}, 1: {'o': 20.37066, 'h': 20.37141, 'l': 20.37066, 'c': 20.37133}, 2: {'o': 20.37133, 'h': 20.37141, 'l': 20.37113, 'c': 20.37113}, 3: {'o': 20.37113, 'h': 20.37172, 'l': 20.37113, 'c': 20.37158}, 4: {'o': 20.3716, 'h': 20.37165, 'l': 20.36865, 'c': 20.36865}}, 'volume': {0: 217, 1: 23, 2: 13, 3: 20, 4: 45}, 'complete': {0: True, 1: True, 2: True, 3: True, 4: True}} </code></pre> <p>Any idea how to accomplish this?</p>
<p>Convert the nested dictionaries to their own columns and then resample. Then, convert back if needed:</p> <pre><code>df[[&quot;o&quot;, &quot;h&quot;, &quot;l&quot;, &quot;c&quot;]] = df[&quot;mid&quot;].apply(pd.Series) df = df.drop(&quot;mid&quot;, axis=1) \ .set_index(&quot;time&quot;) \ .resample(&quot;5min&quot;) \ .agg({&quot;o&quot;: &quot;first&quot;, &quot;h&quot;: &quot;max&quot;, &quot;l&quot;: &quot;min&quot;, &quot;c&quot;: &quot;last&quot;, &quot;volume&quot;: &quot;sum&quot;, &quot;complete&quot;: all }) #to convert back to original structure df = df.assign(mid=df[[&quot;o&quot;, &quot;h&quot;, &quot;l&quot;, &quot;c&quot;]].apply(dict, axis=1)).drop([&quot;o&quot;, &quot;h&quot;, &quot;l&quot;, &quot;c&quot;], axis=1) </code></pre> <h6>Output:</h6> <pre><code>&gt;&gt;&gt; df time volume complete mid 2021-08-20 20:15:00+00:00 253 True {'o': 20.3778, 'h': 20.3778, 'l': 20.37066, 'c': 20.37113} 2021-08-20 20:20:00+00:00 65 True {'o': 20.37113, 'h': 20.37172, 'l': 20.36865, 'c': 20.36865} </code></pre>
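<p>If the frame is large, <code>apply(pd.Series)</code> can be slow; a sketch of a usually faster way to build the <code>o</code>/<code>h</code>/<code>l</code>/<code>c</code> columns (feeding into the same <code>resample</code>/<code>agg</code> step) is:</p> <pre><code># Expand the list of dicts in one shot instead of row-wise apply
ohlc = pd.DataFrame(df['mid'].tolist(), index=df.index)
df = pd.concat([df.drop(columns='mid'), ohlc], axis=1)
</code></pre>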
python|pandas
1
3,195
68,834,726
How to create a nested dataframe in pandas
<p>I am trying to create a nested json as a target</p> <pre><code>[{'id':0, name:'Albert', last_name:'Einstein', info:{'dob':1903}}, ... 'id':100000, name:'Zooey', last_name:'Deschanel;', info:{'dob':1980}} ] </code></pre> <p>I am operating with an existing json converted into a dataframe, how can I form a valid nested dataframe and convert it back to json?</p> <pre><code>[{'id':0, name:'Albert', last_name:'Einstein', 'dob':1903, 'extra':{'field1': 1}}, ... {'id':100000, name:'Zooey', last_name:'Deschanel', 'dob':1980, 'extra':{'field1': 1}} ] </code></pre> <p>the following approach didn't really work</p> <pre><code>f.insert(2, 'info.dob', df['dob']) df.drop(['dob']) </code></pre> <p><code>{'id':0, name:'Albert', last_name:'Einstein', 'info.dob':1903} </code></p>
<p>Use <code>pd.json_normalize</code> and then <code>apply</code> a custom function on column &quot;dob&quot;:</p> <pre><code>import json import pandas as pd #assuming your json is stored in a file called &quot;myjson.json&quot; df = pd.json_normalize(json.loads(open(&quot;myjson.json&quot;).read())) df[&quot;dob&quot;] = df[&quot;dob&quot;].apply(lambda x: {&quot;info&quot;: x}) json_string = df.to_json(orient=&quot;records&quot;) &gt;&gt;&gt; json_string '[{&quot;id&quot;:0,&quot;name&quot;:&quot;Albert&quot;,&quot;last_name&quot;:&quot;Einstein&quot;,&quot;dob&quot;:{&quot;info&quot;:1903},&quot;extra.field1&quot;:1},{&quot;id&quot;:100000,&quot;name&quot;:&quot;Zooey&quot;,&quot;last_name&quot;:&quot;Deschanel&quot;,&quot;dob&quot;:{&quot;info&quot;:1980},&quot;extra.field1&quot;:1}]' </code></pre>
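<p>Note that the target shown in the question nests the other way round, with <code>info</code> containing <code>dob</code>. A small, hedged adjustment of the same idea (the file name is still hypothetical) would be:</p> <pre><code>df['info'] = df['dob'].apply(lambda x: {'dob': x})
df = df.drop(columns='dob')
json_string = df.to_json(orient='records')
# each record should now end with roughly ..., 'extra.field1': 1, 'info': {'dob': 1903}
</code></pre>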
python|pandas|dataframe
0
3,196
36,465,746
Save several Pandas DataFrames into single Excel file
<p>I have several Pandas data frames that I would like to save into a single MS Excel file, each dataframe as a separate sheet in this file. Any advice is more than welcome. Felix</p>
<p>You can use the <code>sheet_name</code> argument of <code>to_excel</code>, as in the example below.</p> <blockquote> <p><strong>pandas.DataFrame.to_excel</strong></p> <p>If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can be used to save different DataFrames to one workbook:</p> <pre><code>writer = ExcelWriter('output.xlsx')
df1.to_excel(writer,'Sheet1')
df2.to_excel(writer,'Sheet2')
writer.save()
</code></pre> <p>For compatibility with to_csv, to_excel serializes lists and dicts to strings before writing.</p> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html</a></p> </blockquote>
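<p>On recent pandas versions the explicit <code>writer.save()</code> call has been deprecated and later removed; a sketch of the context-manager form, with <code>df1</code>/<code>df2</code> standing in for your frames and the sheet names chosen arbitrarily:</p> <pre><code>import pandas as pd

with pd.ExcelWriter('output.xlsx') as writer:
    df1.to_excel(writer, sheet_name='Sheet1')
    df2.to_excel(writer, sheet_name='Sheet2')
# the file is written when the with-block exits
</code></pre>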
excel|pandas|dataframe
2
3,197
36,674,399
How to remove duplicates in pandas?
<p>I have lots of data in excel files. I would like to concatenate these datas into one excel file by removing duplicate records according to id column information.</p> <pre><code>df1 id name date 0 1 cab 2017 1 11 den 2012 2 13 ers 1998 df2 id name date 0 11 den 2012 1 14 ces 2011 2 4 guk 2007 </code></pre> <p>I want to have below concantenated file finally. </p> <pre><code>Concat df id name date 0 1 cab 2017 1 11 den 2012 2 13 ers 1998 1 14 ces 2011 2 4 guk 2007 </code></pre> <p>I try below but it does not remove duplicates. Can anyone advise how to fix this ? </p> <pre><code>pd.concat([df1,df2]).drop_duplicates().reset_index(drop=True) </code></pre> <p>My concatenated data are as below. Duplicated ids are still on the file.</p> <pre><code> id created_at retweet_count 0 721557296757797000 2016-04-17 04:34:00 21 1 721497712726844000 2016-04-17 00:37:14 94 2 721462059515453000 2016-04-16 22:15:33 0 3 721460623285072000 2016-04-16 22:09:51 0 4 721460397241446000 2016-04-16 22:08:57 0 5 721459817651577000 2016-04-16 22:06:39 0 6 721456334894469000 2016-04-16 21:52:48 0 7 721557296757797000 2016-04-17 04:34:00 21 8 721497712726844000 2016-04-17 00:37:14 94 9 721462059515453000 2016-04-16 22:15:33 0 10 721460623285072000 2016-04-16 22:09:51 0 11 721460397241446000 2016-04-16 22:08:57 0 12 721459817651577000 2016-04-16 22:06:39 0 13 721456334894469000 2016-04-16 21:52:48 0 </code></pre>
<p>I think you need add parameter <code>subset</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a> for filtering by column <code>id</code>:</p> <pre><code>print pd.concat([df1,df2]).drop_duplicates(subset='id').reset_index(drop=True) id name date 0 1 cab 2017 1 11 den 2012 2 13 ers 1998 3 14 ces 2011 4 4 guk 2007 </code></pre> <p>EDIT:</p> <p>I try your new data and for me it works:</p> <pre><code>import pandas as pd df = pd.DataFrame({'created_at': {0: '2016-04-17 04:34:00', 1: '2016-04-17 00:37:14', 2: '2016-04-16 22:15:33', 3: '2016-04-16 22:09:51', 4: '2016-04-16 22:08:57', 5: '2016-04-16 22:06:39', 6: '2016-04-16 21:52:48', 7: '2016-04-17 04:34:00', 8: '2016-04-17 00:37:14', 9: '2016-04-16 22:15:33', 10: '2016-04-16 22:09:51', 11: '2016-04-16 22:08:57', 12: '2016-04-16 22:06:39', 13: '2016-04-16 21:52:48'}, 'retweet_count': {0: 21, 1: 94, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 21, 8: 94, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0}, 'id': {0: 721557296757797000, 1: 721497712726844000, 2: 721462059515453000, 3: 721460623285072000, 4: 721460397241446000, 5: 721459817651577000, 6: 721456334894469000, 7: 721557296757797000, 8: 721497712726844000, 9: 721462059515453000, 10: 721460623285072000, 11: 721460397241446000, 12: 721459817651577000, 13: 721456334894469000}}, columns=['id','created_at','retweet_count']) </code></pre> <pre><code>print df id created_at retweet_count 0 721557296757797000 2016-04-17 04:34:00 21 1 721497712726844000 2016-04-17 00:37:14 94 2 721462059515453000 2016-04-16 22:15:33 0 3 721460623285072000 2016-04-16 22:09:51 0 4 721460397241446000 2016-04-16 22:08:57 0 5 721459817651577000 2016-04-16 22:06:39 0 6 721456334894469000 2016-04-16 21:52:48 0 7 721557296757797000 2016-04-17 04:34:00 21 8 721497712726844000 2016-04-17 00:37:14 94 9 721462059515453000 2016-04-16 22:15:33 0 10 721460623285072000 2016-04-16 22:09:51 0 11 721460397241446000 2016-04-16 22:08:57 0 12 721459817651577000 2016-04-16 22:06:39 0 13 721456334894469000 2016-04-16 21:52:48 0 print df.dtypes id int64 created_at object retweet_count int64 dtype: object print df.drop_duplicates(subset='id').reset_index(drop=True) id created_at retweet_count 0 721557296757797000 2016-04-17 04:34:00 21 1 721497712726844000 2016-04-17 00:37:14 94 2 721462059515453000 2016-04-16 22:15:33 0 3 721460623285072000 2016-04-16 22:09:51 0 4 721460397241446000 2016-04-16 22:08:57 0 5 721459817651577000 2016-04-16 22:06:39 0 6 721456334894469000 2016-04-16 21:52:48 0 </code></pre>
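<p>If, instead of keeping the first occurrence, you wanted the most recent row per <code>id</code>, a sketch (assuming <code>created_at</code> is a datetime or a sortable string) would be:</p> <pre><code>latest = (df.sort_values('created_at')
            .drop_duplicates(subset='id', keep='last')
            .reset_index(drop=True))
</code></pre>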
python|pandas
1
3,198
36,394,340
Centering a Numpy array of images
<p>I have some numpy arrays of images that I want to center (subtract the mean and divide by the standard deviation). Can I simply do it like this?</p> <pre><code># x is a np array img_mean = x.mean(axis=0) img_std = np.std(x) x = (x - img_mean) / img_std </code></pre>
<p>I don't think this is what you are trying to do.<br> Let's say we have an array like this:</p> <pre><code>In [2]: x = np.arange(25).reshape((5, 5)) In [3]: x Out[3]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) </code></pre> <p><code>x.mean(axis=0)</code> calculates the mean value for each column (axis 0):</p> <pre><code>In [4]: x.mean(axis=0) Out[4]: array([ 10., 11., 12., 13., 14.]) </code></pre> <p>Subtracted from our original <code>x</code> array, each value gets subtracted by its column's mean value:</p> <pre><code>In [5]: x - x.mean(axis=0) Out[5]: array([[-10., -10., -10., -10., -10.], [ -5., -5., -5., -5., -5.], [ 0., 0., 0., 0., 0.], [ 5., 5., 5., 5., 5.], [ 10., 10., 10., 10., 10.]]) </code></pre> <p>If we don't specify an axis for <code>x.mean</code>, the whole array is being taken:</p> <pre><code>In [6]: x.mean(axis=None) Out[6]: 12.0 </code></pre> <p>This is what you've been doing with <code>x.std()</code> all the time, since for both <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.std.html" rel="nofollow"><code>np.std</code></a> and <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.mean.html" rel="nofollow"><code>np.mean</code></a> the default axis is <code>None</code>.<br> This might be what you want:</p> <pre><code>In [7]: x - x.mean() Out[7]: array([[-12., -11., -10., -9., -8.], [ -7., -6., -5., -4., -3.], [ -2., -1., 0., 1., 2.], [ 3., 4., 5., 6., 7.], [ 8., 9., 10., 11., 12.]]) In [8]: (x - x.mean()) / x.std() Out[8]: array([[-1.6641005, -1.5254255, -1.3867504, -1.2480754, -1.1094003], [-0.9707253, -0.8320502, -0.6933752, -0.5547002, -0.4160251], [-0.2773501, -0.1386750, 0. , 0.1386750, 0.2773501], [ 0.4160251, 0.5547002, 0.6933752, 0.8320502, 0.9707253], [ 1.1094003, 1.2480754, 1.3867504, 1.5254255, 1.6641005]]) </code></pre>
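<p>For a batch of images you often want the statistics per image (or per channel) rather than over the whole batch; a hedged sketch, assuming <code>x</code> has shape <code>(n_images, height, width, channels)</code>:</p> <pre><code>import numpy as np

x = np.random.rand(8, 32, 32, 3)  # dummy batch, shape assumed

# per-image standardisation: reduce over H, W and C, keep the batch axis
mean = x.mean(axis=(1, 2, 3), keepdims=True)
std = x.std(axis=(1, 2, 3), keepdims=True)
x_centered = (x - mean) / std
</code></pre>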
python|arrays|numpy|computer-vision
5
3,199
53,195,614
Python: Pandas Module - Nested IF Statement that Fills in NaN (Empty Values) in a Dataframe
<p>I've created a function that tests multiple IF statements given the data in the 'Name' column. </p> <p>Criteria 1: If 'Name' is blank, return the 'Secondary_Name'. However, if 'Secondary_Name' is also blank, return the 'Third_Name'. </p> <p>Criteria 2: If 'Name' == 'GENERAL', return the 'Secondary_Name'. However, if 'Secondary_Name' is also blank, return the 'Third_Name'</p> <p>Else: Return the 'Name'</p> <pre><code>def account_name(row): if row['Name'] == None and row['Secondary_Name'] == None: return row['Third_Name'] elif row['Name'] == 'GENERAL': if row['Secondary_Name'] == None: return row['Third_Name'] else: return row['Name'] </code></pre> <p>I've tried == None, == np.NaN, == Null, .isnull(), == '', == '0'. </p> <p>Nothing seems to replace the empty values to what I want. </p> <p>Edit: </p> <p><a href="https://i.stack.imgur.com/haimj.png" rel="nofollow noreferrer">Example of DF</a></p>
<p>Consider this df</p> <pre><code>df = pd.DataFrame({'Name':['a', 'GENERAL', None],'Secondary_Name':['e','f',None], 'Third_Name':['x', 'y', 'z']}) Name Secondary_Name Third_Name 0 a e x 1 GENERAL f y 2 None None z </code></pre> <p>Since you are writing the function in python, you can use is None</p> <pre><code>def account_name(row): if (row['Name'] is None or row['Name'] == 'GENERAL') and (row['Secondary_Name'] is None): return row['Third_Name'] elif row['Name'] is None or row['Name'] == 'GENERAL': return row['Secondary_Name'] else: return row['Name'] df['Name'] = df.apply(account_name, axis = 1) </code></pre> <p>You get</p> <pre><code> Name Secondary_Name Third_Name 0 a e x 1 f f y 2 z None z </code></pre> <p>You can get same output using pandas and nested np.where</p> <pre><code>cond1 = (df['Name'].isnull()) | (df['Name'] == 'GENERAL') cond2 = (cond1) &amp; (df['Secondary_Name'].isnull()) np.where(cond2, df['Third_Name'], np.where(cond1, df['Secondary_Name'], df['Name'])) </code></pre>
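<p>A fully vectorised variant of the same logic, sketched with <code>Series.where</code> and <code>fillna</code> (same column names assumed):</p> <pre><code># take Secondary_Name whenever Name is missing or 'GENERAL', then fall back to Third_Name
name = df['Name'].where(~(df['Name'].isna() | df['Name'].eq('GENERAL')), df['Secondary_Name'])
df['Name'] = name.fillna(df['Third_Name'])
</code></pre>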
python-3.x|pandas|dataframe|null
0