Dataset columns (name, dtype, value or string-length range): Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, lengths 15 to 150); question (string, lengths 37 to 64.2k); answer (string, lengths 37 to 44.1k); tags (string, lengths 5 to 106); score (int64, -10 to 5.87k)
8,700
44,762,560
Puzzling error when pandas reads text separated by whitespace
<p><code>pandas</code> could not read text as follows:</p> <pre><code>NothGrassland Meteor Sites MTCLIM v4.3 OUTPUT FILE : Mon Jun 26 16:57:31 2017 year yday Tmax Tmin Tday prcp VPD srad daylen (deg C) (deg C) (deg C) (cm) (Pa) (W m-2) (s) 1961 1 -24.08 -36.19 -27.41 0.00 36.81 128.45 28460 1961 2 -16.08 -29.79 -19.85 0.02 75.12 135.12 28524 1961 3 -16.08 -26.19 -18.86 0.05 65.86 118.79 28594 1961 4 -23.58 -33.29 -26.25 0.00 34.87 116.98 28668 1961 5 -24.28 -37.49 -27.91 0.00 37.27 163.75 28748 1961 6 -20.68 -33.19 -24.12 0.01 49.79 133.63 28832 1961 7 -19.48 -31.29 -22.73 0.18 53.78 131.91 28922 </code></pre> <p>when reading text use code as follows:</p> <pre><code>df=pd.read_csv(file,sep=' ',header=0,skiprows=[0,1,3]) </code></pre> <p>hint errors:</p> <pre><code>runfile('C:/temp/python/Models/GSI.py', wdir='C:/temp/python') Traceback (most recent call last): File "&lt;ipython-input-115-7bbdd08f49f8&gt;", line 1, in &lt;module&gt; runfile('C:/temp/python/Models/GSI.py', wdir='C:/temp/python') File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile execfile(filename, namespace) File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/temp/python/Models/GSI.py", line 23, in &lt;module&gt; df=pd.read_csv(file,header=0,sep=' ') File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\pandas\io\parsers.py", line 646, in parser_f return _read(filepath_or_buffer, kwds) File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\pandas\io\parsers.py", line 401, in _read data = parser.read() File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\pandas\io\parsers.py", line 939, in read ret = self._engine.read(nrows) File "C:\Program Files\Winpython\python-3.6.1.amd64\lib\site-packages\pandas\io\parsers.py", line 1508, in read data = self._reader.read(nrows) File "pandas\parser.pyx", line 848, in pandas.parser.TextReader.read (pandas\parser.c:10415) File "pandas\parser.pyx", line 870, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:10691) File "pandas\parser.pyx", line 924, in pandas.parser.TextReader._read_rows (pandas\parser.c:11437) File "pandas\parser.pyx", line 911, in pandas.parser.TextReader._tokenize_rows (pandas\parser.c:11308) File "pandas\parser.pyx", line 2024, in pandas.parser.raise_parser_error (pandas\parser.c:27037) CParserError: Error tokenizing data. C error: Expected 10 fields in line 3, saw 34 </code></pre> <p>If remove <code>sep=' '</code> as follow:</p> <pre><code>df=pd.read_csv(file,header=None,skiprows=4) </code></pre> <p>the code run.</p>
<p>For me works <code>sep="\s+"</code> or <code>delim_whitespace=True</code>:</p> <pre><code>import pandas as pd from pandas.compat import StringIO temp=u"""NothGrassland Meteor Sites MTCLIM v4.3 OUTPUT FILE : Mon Jun 26 16:57:31 2017 year yday Tmax Tmin Tday prcp VPD srad daylen (deg C) (deg C) (deg C) (cm) (Pa) (W m-2) (s) 1961 1 -24.08 -36.19 -27.41 0.00 36.81 128.45 28460 1961 2 -16.08 -29.79 -19.85 0.02 75.12 135.12 28524 1961 3 -16.08 -26.19 -18.86 0.05 65.86 118.79 28594 1961 4 -23.58 -33.29 -26.25 0.00 34.87 116.98 28668 1961 5 -24.28 -37.49 -27.91 0.00 37.27 163.75 28748 1961 6 -20.68 -33.19 -24.12 0.01 49.79 133.63 28832 1961 7 -19.48 -31.29 -22.73 0.18 53.78 131.91 28922""" #after testing replace 'StringIO(temp)' to 'filename.csv' df = pd.read_csv(StringIO(temp), sep="\s+", skiprows=[0,1,3], header=0) print (df) year yday Tmax Tmin Tday prcp VPD srad daylen 0 1961 1 -24.08 -36.19 -27.41 0.00 36.81 128.45 28460 1 1961 2 -16.08 -29.79 -19.85 0.02 75.12 135.12 28524 2 1961 3 -16.08 -26.19 -18.86 0.05 65.86 118.79 28594 3 1961 4 -23.58 -33.29 -26.25 0.00 34.87 116.98 28668 4 1961 5 -24.28 -37.49 -27.91 0.00 37.27 163.75 28748 5 1961 6 -20.68 -33.19 -24.12 0.01 49.79 133.63 28832 6 1961 7 -19.48 -31.29 -22.73 0.18 53.78 131.91 28922 </code></pre> <p>And also:</p> <pre><code>#after testing replace 'StringIO(temp)' to 'filename.csv' df = pd.read_csv(StringIO(temp), delim_whitespace=True, skiprows=[0,1,3], header=0) print (df) year yday Tmax Tmin Tday prcp VPD srad daylen 0 1961 1 -24.08 -36.19 -27.41 0.00 36.81 128.45 28460 1 1961 2 -16.08 -29.79 -19.85 0.02 75.12 135.12 28524 2 1961 3 -16.08 -26.19 -18.86 0.05 65.86 118.79 28594 3 1961 4 -23.58 -33.29 -26.25 0.00 34.87 116.98 28668 4 1961 5 -24.28 -37.49 -27.91 0.00 37.27 163.75 28748 5 1961 6 -20.68 -33.19 -24.12 0.01 49.79 133.63 28832 6 1961 7 -19.48 -31.29 -22.73 0.18 53.78 131.91 28922 </code></pre>
pandas
2
8,701
44,580,199
pytest 3.0.7 error when importing pandas 0.20.2
<p>Does anyone have the same issue I have when running pytest with the following error? The way I installed the environment was to download Python from <a href="https://www.python.org/downloads/" rel="nofollow noreferrer">https://www.python.org/downloads/</a>, install the pkg file, create a req.file, and install the packages with pip install -r req.file.</p> <pre><code>os: x el capitan python:3.6.1 pytest:3.0.7 pandas:2.20.2 req.file psutil==4.0.0 pandas==0.20.2 numpy==1.10.4 py==1.4.31 pytest==3.0.7 pytest-cov==2.2.1 pytest-mock==0.10.1 script.py import pandas as pd /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pandas/__init__.py:31: in &lt;module&gt; "extensions first.".format(module)) E ImportError: C extension: umpy.core.multiarray failed to import not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first. -------------------------------------------------------------------------------- Captured stderr --------------------------------------------------------------------------------- RuntimeError: module compiled against API version 0xb but this version of numpy is 0xa </code></pre>
<p>What I tried is using virtualenv to set up the Python project and reinstall all packages, to make sure the project environment is isolated from my local site-packages. After setting up the virtual environment there was no problem installing pandas anymore. A short follow-up sketch is shown below.</p> <pre><code>$pip3 install virtualenv $virtualenv --python=/usr/bin/python3.6 &lt;path/to/new/project/&gt; </code></pre>
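<p>A minimal continuation of the above (assuming the requirements file is named req.file as in the question): activate the new environment before installing, so the packages land in the isolated environment rather than the system site-packages.</p> <pre><code>$source &lt;path/to/new/project/&gt;bin/activate $pip install -r req.file $python -m pytest </code></pre>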
python-3.x|pandas|pytest
0
8,702
61,046,870
How to save the weights of a keras model for each epoch?
<p>I want to save keras model and I want to save weights of each epoch to have best weights. How I do that?</p> <p>Any help would be appreciated.</p> <p><strong>code</strong>:</p> <pre><code>def createModel(): input_shape=(1, 22, 5, 3844) model = Sequential() #C1 model.add(Conv3D(16, (22, 5, 5), strides=(1, 2, 2), padding='same',activation='relu',data_format= "channels_first", input_shape=input_shape)) model.add(keras.layers.MaxPooling3D(pool_size=(1, 2, 2),data_format= "channels_first", padding='same')) model.add(BatchNormalization()) #C2 model.add(Conv3D(32, (1, 3, 3), strides=(1, 1,1), padding='same',data_format= "channels_first", activation='relu'))#incertezza se togliere padding model.add(keras.layers.MaxPooling3D(pool_size=(1,2, 2),data_format= "channels_first", )) model.add(BatchNormalization()) #C3 model.add(Conv3D(64, (1,3, 3), strides=(1, 1,1), padding='same',data_format= "channels_first", activation='relu'))#incertezza se togliere padding model.add(keras.layers.MaxPooling3D(pool_size=(1,2, 2),data_format= "channels_first",padding='same' )) model.add(Dense(64, input_dim=64, kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l1(0.01))) model.add(BatchNormalization()) model.add(Flatten()) model.add(Dropout(0.5)) model.add(Dense(256, activation='sigmoid')) model.add(Dropout(0.5)) model.add(Dense(2, activation='softmax')) opt_adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) model.compile(loss='categorical_crossentropy', optimizer=opt_adam, metrics=['accuracy']) return model </code></pre>
<p>model.get_weights() returns the model's weights as a list of numpy arrays. You can save those weights in a file with extension .npy using np.save().</p> <p>To save the weights every epoch, you can use callbacks in Keras.</p> <pre class="lang-python prettyprint-override"><code>from keras.callbacks import ModelCheckpoint </code></pre> <p>Before you call model.fit, define a checkpoint as below</p> <p><code>checkpoint = ModelCheckpoint(.....)</code>, and set the argument 'period' to 1, which controls how often (in epochs) the weights are saved. This should do it.</p>
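<p>A minimal sketch of such a checkpoint (the filepath pattern and the training arrays are illustrative placeholders, not from the question; note that recent Keras versions replace <code>period</code> with <code>save_freq</code>): </p> <pre class="lang-python prettyprint-override"><code>from keras.callbacks import ModelCheckpoint # save the weights after every epoch; {epoch:02d} is filled in with the epoch number checkpoint = ModelCheckpoint('weights-{epoch:02d}.h5', save_weights_only=True, period=1) model = createModel() model.fit(X_train, y_train, epochs=10, callbacks=[checkpoint]) # X_train, y_train are placeholder arrays </code></pre>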
python|tensorflow|machine-learning|keras|deep-learning
3
8,703
60,999,536
Seaborn example to plot with date on the X axis not showing dates
<p>I am executing this code from <a href="https://seaborn.pydata.org/generated/seaborn.lineplot.html" rel="nofollow noreferrer">this page</a> and it is not working as expected.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np, pandas as pd import matplotlib.pyplot as plt import seaborn as sns plt.close("all") index = pd.date_range("1 1 2000", periods=100, freq="m", name="date") data = np.random.randn(100, 4).cumsum(axis=0) wide_df = pd.DataFrame(data, index, ["a", "b", "c", "d"]) ax = sns.lineplot(data=wide_df) plt.show() plt.close() </code></pre> <p>The X axis is not displaying dates.</p> <p><a href="https://i.stack.imgur.com/gvXBO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gvXBO.png" alt=""></a></p> <p>I am using these versions:</p> <pre><code>seaborn==0.10.0 pandas==0.25.0 matplotlib==3.2.0 </code></pre> <p>How do I plot with dates on the X axis?</p>
<p>It works fine on my system, running your code without changing anything:</p> <p><a href="https://i.stack.imgur.com/FHyAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHyAP.png" alt="enter image description here"></a></p> <p>I obviously have a different default figure size and style, but that is not the issue. You should upgrade to the latest version of Pandas. There was a major new release earlier this year.</p>
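<p>If you manage packages with pip, the upgrade itself is typically just the following (adjust as needed if you use conda or a pinned requirements file): </p> <pre><code>pip install --upgrade pandas </code></pre>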
python|pandas|seaborn
1
8,704
60,915,372
How to apply linear layer to 2D layer only in one dimension (by row or by column) - partially connected layers
<p>I'm trying to apply a linear layer to a 2D matrix of tensors connecting it only by column as in the picture below.</p> <p><a href="https://i.stack.imgur.com/IOiBE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IOiBE.png" alt="1D linear layer applied to 2D layer"></a></p> <p>The input shape is <strong>(batch_size, 3, 50)</strong>. I first tried with 2D convolution, adding a 1 channel dimension, so input shape is <strong>(batch_size, 1, 3, 50)</strong></p> <pre><code>import torch.nn as nn import torch class ColumnConv(nn.Module): def __init__(self): self.layers = nn.Sequential( nn.Conv2d( in_channels=1, out_channels=1, kernel_size=(3, 1), stride=1, ), # shape is B, 1, 1, 50 nn.ReLU(), nn.Flatten() #shape is B, 50 ) def forward(self, x): return self.layers(x) </code></pre> <p>But it doesn't seem to work. I'm planning to use a list of 50 <code>nn.Linear</code> layers and apply them to column slices of input, but it seems much more like a workaround not optimized for performance.</p> <p>Is there a more <em>"pytorchic"</em> way of doing this?</p>
<p>The PyTorch <a href="https://pytorch.org/docs/stable/nn.html?highlight=linear#torch.nn.Linear" rel="nofollow noreferrer"><code>nn.Linear</code></a> module can be applied to multidimensional input, the linear will be applied to the last dimension so to apply by column the solution is to swap rows and columns.</p> <pre><code>linear_3_to_1 = nn.Linear(3, 1) x = torch.randn(1, 1, 3, 50) x = x.transpose(2, 3) #swap 3 and 50 out = linear_3_to_1(x).flatten() </code></pre>
neural-network|artificial-intelligence|pytorch
2
8,705
71,452,476
Looping through Dataframe to delete "near duplicate" rows
<p>I have a dataframe and would like to get rid of rows where a particular column has matching values to subsequent values. An example can be found below:</p> <p>Original Data Frame:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Cat1</th> <th>Cat2</th> <th>Cat3</th> <th>Averages</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>8</td> <td>SWE</td> <td>220.90</td> </tr> <tr> <td>2</td> <td>8</td> <td>8</td> <td>SWE</td> <td>249.14</td> </tr> <tr> <td>3</td> <td>SWE</td> <td>SWE</td> <td>SWE</td> <td>358.54</td> </tr> <tr> <td>4</td> <td>DR</td> <td>DR</td> <td>DR</td> <td>204.41</td> </tr> <tr> <td>5</td> <td>SWE</td> <td>SWE</td> <td>SWE</td> <td>354.08</td> </tr> <tr> <td>6</td> <td>Eng</td> <td>Eng</td> <td>Eng</td> <td>212.14</td> </tr> <tr> <td>7</td> <td>HTE</td> <td>HTE</td> <td>HTE</td> <td>220.04</td> </tr> <tr> <td>8</td> <td>8</td> <td>SWE</td> <td>SWE</td> <td>220.90</td> </tr> <tr> <td>9</td> <td>8</td> <td>SWE</td> <td>SWE</td> <td>249.14</td> </tr> <tr> <td>10</td> <td>4</td> <td>Apple</td> <td>Apple</td> <td>296.04</td> </tr> <tr> <td>11</td> <td>4</td> <td>Grape</td> <td>Grape</td> <td>336.52</td> </tr> <tr> <td>12</td> <td>3</td> <td>Apple</td> <td>Apple</td> <td>768.01</td> </tr> <tr> <td>13</td> <td>5</td> <td>Peach</td> <td>Peach</td> <td>519.90</td> </tr> <tr> <td>14</td> <td>NBS</td> <td>Apple</td> <td>Apple</td> <td>525.58</td> </tr> <tr> <td>15</td> <td>Staff</td> <td>BBQ</td> <td>BBQ</td> <td>326.25</td> </tr> <tr> <td>16</td> <td>BP</td> <td>Pear</td> <td>Pear</td> <td>262.11</td> </tr> <tr> <td>17</td> <td>PM</td> <td>Pear</td> <td>Pear</td> <td>469.20</td> </tr> <tr> <td>18</td> <td>Marketing</td> <td>Banana</td> <td>Banana</td> <td>206.75</td> </tr> <tr> <td>19</td> <td>SWE</td> <td>Grape</td> <td>Grape</td> <td>400.28</td> </tr> <tr> <td>20</td> <td>SWE</td> <td>Barley</td> <td>Barley</td> <td>321.63</td> </tr> </tbody> </table> </div> <p>I'd like to ignore any rows where the Averages column value has already occurred. For instance, ID 1 and ID 8 have the same Averages value. So I would like to remove the ID 8 row from the table. Another occurrence are the row with ID 2 and ID 9, they have the same Averages value so I'd eliminate the row with ID 9 from the table. 
My end output would then be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Cat1</th> <th>Cat2</th> <th>Cat3</th> <th>Averages</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>8</td> <td>SWE</td> <td>220.90</td> </tr> <tr> <td>2</td> <td>8</td> <td>8</td> <td>SWE</td> <td>249.14</td> </tr> <tr> <td>3</td> <td>SWE</td> <td>SWE</td> <td>SWE</td> <td>358.54</td> </tr> <tr> <td>4</td> <td>DR</td> <td>DR</td> <td>DR</td> <td>204.41</td> </tr> <tr> <td>5</td> <td>SWE</td> <td>SWE</td> <td>SWE</td> <td>354.08</td> </tr> <tr> <td>6</td> <td>Eng</td> <td>Eng</td> <td>Eng</td> <td>212.14</td> </tr> <tr> <td>7</td> <td>HTE</td> <td>HTE</td> <td>HTE</td> <td>220.04</td> </tr> <tr> <td>10</td> <td>4</td> <td>Apple</td> <td>Apple</td> <td>296.04</td> </tr> <tr> <td>11</td> <td>4</td> <td>Grape</td> <td>Grape</td> <td>336.52</td> </tr> <tr> <td>12</td> <td>3</td> <td>Apple</td> <td>Apple</td> <td>768.01</td> </tr> <tr> <td>13</td> <td>5</td> <td>Peach</td> <td>Peach</td> <td>519.90</td> </tr> <tr> <td>14</td> <td>NBS</td> <td>Apple</td> <td>Apple</td> <td>525.58</td> </tr> <tr> <td>15</td> <td>Staff</td> <td>BBQ</td> <td>BBQ</td> <td>326.25</td> </tr> <tr> <td>16</td> <td>BP</td> <td>Pear</td> <td>Pear</td> <td>262.11</td> </tr> <tr> <td>17</td> <td>PM</td> <td>Pear</td> <td>Pear</td> <td>469.20</td> </tr> <tr> <td>18</td> <td>Marketing</td> <td>Banana</td> <td>Banana</td> <td>206.75</td> </tr> <tr> <td>19</td> <td>SWE</td> <td>Grape</td> <td>Grape</td> <td>400.28</td> </tr> <tr> <td>20</td> <td>SWE</td> <td>Barley</td> <td>Barley</td> <td>321.63</td> </tr> </tbody> </table> </div> <p>I've tried to do for loops to do this, but when I do a for loop on the data frame, it lops through the columns and not the rows. Any help here would be much appreciated!</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a></p> <p>By default it uses all columns to identify the duplicated rows:</p> <pre><code>df.drop_duplicates() </code></pre> <p>To limit to a subset of columns:</p> <pre><code>df.drop_duplicates(subset=['Averages']) </code></pre>
pandas|dataframe|loops|for-loop
0
8,706
71,639,199
Python Pandas Error: 'DataFrame' object has no attribute 'Datetime' when trying to create an average of time periods
<p>I am trying to create a dataframe (df) with sample sums, means, and standard deviations of the following 12 monthly return series, by month, from another dataframe read from a CSV file called QUESTION1DATA.csv. The head of the performance data looks like this: <a href="https://i.stack.imgur.com/kxKrc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kxKrc.png" alt="enter image description here" /></a>.</p> <p>So far I have written code to find what I am looking for and have come up with this:</p> <pre><code>import time df['Timestamp'] = portlist.to_datetime(df['Year'],format='%Y') new_df = df.groupby([df.index.year, df.index.month]).agg(['sum', 'mean', 'std']) new_df.index.set_names(['Year', 'Month'], inplace = True) new_df.reset_index(inplace = True) </code></pre> <p>However, when I run this code I get this error and don't know where to go from there.</p> <pre><code>'list' object has no attribute 'to_datetime' </code></pre>
<p><code>to_datetime</code> is available in <code>Pandas</code> and should be used as follows:</p> <pre><code>&gt;&gt;&gt;pd.to_datetime('13000101', format='%Y%m%d', errors='ignore') datetime.datetime(1300, 1, 1, 0, 0) </code></pre> <p>In your case <code>portlist</code> is a list, hence the code is throwing the error.</p> <p>Your code should therefore be as follows:</p> <pre><code>df['Timestamp'] = pd.to_datetime(df['Year'],format='%Y') </code></pre>
python|pandas|file|datetime
0
8,707
42,555,276
tensorflow - run optimizer op on a large batch
<p>Normally, we call the run command with the optimizer operation as input to update the trainable parameters of some model:</p> <pre><code>session.run(model.optimizer_op, feed_dict={model.X: X_batch, model.y: y_batch}) </code></pre> <p>But when the batch size is large, X_batch can't fit in GPU memory. How can I split this task to handle a large batch size ?</p>
<p>This depends mainly on your GPU memory size. However, it is hard to fit your entire dataset along with the model and its required operations (i.e. predicting probabilities). Thus, you would need to think of batching in a different perspective. I assume your code goes along these lines:</p> <pre><code># Model Definition X = tf.placeholder(tf.float32, shape=[None, DIM,DIM,3], name='X') y = tf.placeholder(tf.float32, shape=[None, N_CLASSES], name='y') ... train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) ... # Training your model sess.run([train_step], feed_dict={X: X_batch, y: y_batch}) </code></pre> <p>Instead of using <code>X</code> and <code>y</code> as your <code>feed_dict</code> to the <code>train_step</code>. You could just accumulate the <code>cross_entropy</code> for all batches (i.e. for the entire dataset). Then, you can run the <code>train_step</code> once. For example:</p> <pre><code>cross_entropy_all = [] for X_batch, y_batch in batches_generator(): cross_entropy_all += sess.run([cross_entropy], feed_dict={X: X_batch, y: y_batch}) # Numpy or Tensorflow equivalent for `vstack` cross_entropy_all = np.vstack(cross_entropy_all) # Run the optimizer on the entire dataset (not just on a specific batch) sess.run([train_step], feed_dict={cross_entropy: cross_entropy_all}) </code></pre> <p>This should achieve your goal without running your GPU out of memory. The suggested approach runs the optimization step against all cross entropies. Thus, you don't need to feed X and y (that are used/needed to produce the <code>cross_entropy</code> because it is already fed to the optimization step).</p>
python|memory|tensorflow
0
8,708
42,379,818
Correct way to set new column in pandas DataFrame to avoid SettingWithCopyWarning
<p>I am trying to create a new column in the netc df but I get the warning</p> <pre><code>netc["DeltaAMPP"] = netc.LOAD_AM - netc.VPP12_AM C:\Anaconda\lib\site-packages\ipykernel\__main__.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead </code></pre> <p>What's the proper way to create a field in the newer version of Pandas to avoid getting the warning?</p> <pre><code>pd.__version__ Out[45]: u'0.19.2+0.g825876c.dirty' </code></pre>
<p>As it says in the error, try using <code>.loc[row_indexer,col_indexer]</code> to create the new column.</p> <pre><code>netc.loc[:,"DeltaAMPP"] = netc.LOAD_AM - netc.VPP12_AM. </code></pre> <h2>Notes</h2> <p>By the <a href="http://pandas-docs.github.io/pandas-docs-travis/indexing.html#indexing-view-versus-copy" rel="noreferrer">Pandas Indexing Docs</a> your code should work. </p> <pre><code>netc["DeltaAMPP"] = netc.LOAD_AM - netc.VPP12_AM </code></pre> <p>gets translated to </p> <pre><code>netc.__setitem__('DeltaAMPP', netc.LOAD_AM - netc.VPP12_AM) </code></pre> <p>Which should have predictable behaviour. The <code>SettingWithCopyWarning</code> is only there to warn users of unexpected behaviour during chained assignment (which is not what you're doing). However, as mentioned in the docs,</p> <blockquote> <p>Sometimes a <code>SettingWithCopy</code> warning will arise at times when there’s no obvious chained indexing going on. These are the bugs that <code>SettingWithCopy</code> is designed to catch! Pandas is probably trying to warn you that you’ve done this:</p> </blockquote> <p>The docs then go on to give an example of when one might get that error even when it's not expected. So I can't tell why that's happening without more context.</p>
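<p>For illustration, a hedged sketch of the usual hidden cause: if <code>netc</code> was itself produced by slicing another DataFrame earlier in the code (which the question does not show; the source frame and the column used for filtering here are hypothetical), the assignment triggers the warning, and taking an explicit copy avoids it: </p> <pre><code>import pandas as pd raw = pd.read_csv('data.csv') # hypothetical source frame netc = raw[raw.SITE == 'X'] # netc is a slice of raw, so later assignment may warn netc["DeltaAMPP"] = netc.LOAD_AM - netc.VPP12_AM # may raise SettingWithCopyWarning netc = raw[raw.SITE == 'X'].copy() # explicit copy netc["DeltaAMPP"] = netc.LOAD_AM - netc.VPP12_AM # no warning </code></pre>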
python|pandas
46
8,709
42,574,644
Tensorflow installation on Ubuntu permission error
<p>I installed virtual box on my Windows 10 machine and installed Ubuntu on the virtual box. Then I installed Tensorflow on Ubuntu by following <a href="https://www.tensorflow.org/install/install_linux" rel="nofollow noreferrer">this instructions from Tensorflow.org</a>. Everything went well including pip install and stuff but when I run <code>$ pip install tensorflow</code> I run into permission error as the screenshot shows. <a href="https://www.tensorflow.org/install/install_linux" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DCp2N.png" alt="enter image description here"></a> </p> <p>This error is not described in the install errors listed on Tensorflow.org at the bottom of that step 2. How do I solve that?</p>
<p>It seems like you need elevated permissions to write to <code>/usr/local/lib</code>.</p> <p>Executing <code>sudo pip install tensorflow</code> will install tensorflow using root privileges.</p> <p>(Also, your problem is with Ubuntu, not with Windows 10! Your host system has no influence in the permissions of the guest.)</p>
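<p>If you would rather not install system-wide with <code>sudo</code>, a per-user install is a common alternative (assuming a reasonably recent pip on the Ubuntu guest): </p> <pre><code>$ pip install --user tensorflow </code></pre>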
python|ubuntu|tensorflow
1
8,710
43,403,147
how to create an encode_raw tensorflow function?
<p>I am trying to perform the opposite of what tf.decode_raw does.</p> <p>An example would be given a tensor of dtype=tf.float32, I would like to have a function encode_raw() that takes in a float tensor and returns a Tensor of type string.</p> <p>This is useful because then I can use tf.write_file to write the file.</p> <p>Does anyone know how to create such a function in Tensorflow using existing functions?</p>
<p>I would recommend writing numbers as text with <code>tf.as_string</code>. If you really want to write them as a binary string, however, it turns out to be possible:</p> <pre><code>import tensorflow as tf with tf.Graph().as_default(): character_lookup = tf.constant([chr(i) for i in range(256)]) starting_dtype = tf.float32 starting_tensor = tf.random_normal(shape=[10, 10], stddev=1e5, dtype=starting_dtype) as_string = tf.reduce_join( tf.gather(character_lookup, tf.cast(tf.bitcast(starting_tensor, tf.uint8), tf.int32))) back_to_tensor = tf.reshape(tf.decode_raw(as_string, starting_dtype), [10, 10]) # Shape information is lost with tf.Session() as session: before, after = session.run([starting_tensor, back_to_tensor]) print(before - after) </code></pre> <p>This for me prints an array of all zeros.</p>
tensorflow
3
8,711
72,256,025
Stop pandas dataframe from converting to vector
<p>I have below <code>pandas dataframe</code></p> <pre><code>import pandas as pd df = pd.DataFrame({'product name': ['laptop', 'printer', 'printer',], 'price': [1200, 150, 1200], 'price1': [1200, 150, 1200]}) </code></pre> <p>Now I want to print one single row</p> <pre><code>df.iloc[1,:] </code></pre> <p>This gives below result</p> <pre><code>product name printer price 150 price1 150 Name: 1, dtype: object </code></pre> <p>As can be seen here, above display is a column format. Is it possible to preserve the row format of original <code>dataframe</code> when displaying a single row?</p>
<p>Actually, <code>df.iloc[1,:]</code> is not a <code>pd.DataFrame</code>, it is a <code>pd.Series</code>; you can check this with <code>type(df.iloc[1, :])</code>. So the notion of row versus column layout doesn't really apply in this case.</p> <p>To keep it as a <code>pd.DataFrame</code> you could select a range of rows of length 1: <code>df.iloc[1:2, :]</code> or <code>df.iloc[[1], :]</code></p>
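<p>A quick check with the question's DataFrame (expected output shown as comments): </p> <pre><code>type(df.iloc[1, :]) # &lt;class 'pandas.core.series.Series'&gt; type(df.iloc[[1], :]) # &lt;class 'pandas.core.frame.DataFrame'&gt; print(df.iloc[[1], :]) # product name price price1 # 1 printer 150 150 </code></pre>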
python-3.x|pandas
1
8,712
50,312,585
compare rows in dataframe to change values
<p>I'm using python3 and there are two data frames: df1 df2</p> <pre><code>df1 num1 num2 num3 class 0 1 2 3 0 1 1 2 4 0 2 1 2 5 0 3 2 2 4 0 df2 num1 num2 num3 class 0 1 2 3 1 1 1 2 4 1 </code></pre> <p>I want to compare the two data frames so that the rows in df1 and also in df2 will use the class value from df2 as in the above example. </p> <p>The result should be as follows:</p> <pre><code>df12 num1 num2 num3 class 0 1 2 3 1 1 1 2 4 1 2 1 2 5 0 3 2 2 4 0 </code></pre> <p>any help would be appreciated!</p>
<p>You could do an <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">outer merge</a> on <code>['num1', 'num2', 'num3']</code>, and keep the <code>class</code> column only from <code>df2</code> (so drop the <code>class</code> from <code>df1</code>):</p> <pre><code>df12 = (df1.merge(df2, on=['num1', 'num2', 'num3'], how = 'outer') .fillna(0) .drop('class_x', axis=1)) &gt;&gt;&gt; df12 # num1 num2 num3 class_y # 0 1 2 3 1.0 # 1 1 2 4 1.0 # 2 1 2 5 0.0 # 3 2 2 4 0.0 </code></pre> <p><strong>Edit</strong>: as suggested by @cᴏʟᴅsᴘᴇᴇᴅ, it's a bit cleaner to first drop <code>class</code> from <code>df1</code>, and then do a merge:</p> <pre><code>df12 = (df1.drop('class', 1) .merge(df2, how='left') .fillna(0) .astype({'class' : int})) </code></pre>
python|pandas
2
8,713
50,479,021
Keras callback causes error : You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float
<p>im new to tensorflow and keras and deep learning scene. i tried to make a simple neural network to recognize emotion in fer2013 database and keep running into this error after training the first epoch</p> <blockquote> <pre><code>File "main.py", line 92, in &lt;module&gt; train(image_data, label_data, conv_arch, dense, dropout, epochs, batch_size, validation_split, patience) File "main.py", line 38, in train hist = model.fit(image_data, label_data, epochs=epochs, batch_size=batch_size, validation_split=validation_split, callbacks=callbacks, shuffle=True, verbose=1) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/models.py", line 965, in fit validation_steps=validation_steps) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/training.py", line 1669, in fit validation_steps=validation_steps) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/training.py", line 1226, in _fit_loop callbacks.on_epoch_end(epoch, epoch_logs) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/callbacks.py", line 76, in on_epoch_end callback.on_epoch_end(epoch, logs) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/callbacks.py", line 786, in on_epoch_end result = self.sess.run([self.merged], feed_dict=feed_dict) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run run_metadata_ptr) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run feed_dict_string, options, run_metadata) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run target_list, options, run_metadata) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float [[Node: conv2d_1_input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op u'conv2d_1_input', defined at: File "main.py", line 92, in &lt;module&gt; train(image_data, label_data, conv_arch, dense, dropout, epochs, batch_size, validation_split, patience) File "main.py", line 25, in train model = create_model(image_data, label_data, conv_arch=conv_arch, dense=dense, dropout=dropout) File "/Users/rifkibramantyo/tensorflow/variance/src/deepCNN.py", line 22, in create_model model.add(Conv2D(conv_arch[0][0], kernel_size=(3, 3), strides=3, padding='same', activation='relu', data_format='channels_first', input_shape=(1, image_data_train.shape[2], image_data_train.shape[3]))) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/models.py", line 463, in add name=layer.name + '_input') File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 1455, in Input input_tensor=tensor) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 1364, in 
__init__ name=self.name) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 504, in placeholder x = tf.placeholder(dtype, shape=shape, name=name) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1520, in placeholder name=name) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2149, in _placeholder name=name) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op op_def=op_def) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op original_op=self._default_original_op, op_def=op_def) File "/Users/rifkibramantyo/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float [[Node: conv2d_1_input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] </code></pre> </blockquote> <p>I tried to fiddle with the code and the error disappears when I remove the callback array on <code>main.py</code>? Did I format the callbacks wrong?</p> <p>Here's the source code</p> <p><strong>main.py</strong></p> <pre><code>from __future__ import absolute_import from __future__ import division from __future__ import print_function from datetime import datetime import numpy as np from keras.callbacks import EarlyStopping, TensorBoard from keras.utils import plot_model from keras import backend as K from deepCNN import * from log import * K.set_learning_phase(1) def load_test_data(): image_data_fname = '../data/data_images_Training.npy' label_data_fname = '../data/data_labels_Training.npy' image_data = np.load(image_data_fname) label_data = np.load(label_data_fname) return image_data, label_data def train(image_data, label_data, conv_arch, dense, dropout, epochs, batch_size, validation_split, patience): model = create_model(image_data, label_data, conv_arch=conv_arch, dense=dense, dropout=dropout) plot_model(model, to_file="graph.png") # define callbacks callbacks = [] if patience != 0: early_stopping = EarlyStopping(monitor='val_loss', patience=patience, verbose=1) tensor_board = TensorBoard(log_dir='../logs', histogram_freq=1, write_graph=True, write_grads=False, write_images=False, embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None) callbacks.append(early_stopping) callbacks.append(tensor_board) start_time = datetime.now() print('Training....\n') hist = model.fit(image_data, label_data, epochs=epochs, batch_size=batch_size, validation_split=validation_split, callbacks=callbacks, shuffle=True, verbose=1) end_time = datetime.now() # model result: train_val_accuracy = hist.history train_acc = train_val_accuracy['acc'] val_acc = train_val_accuracy['val_acc'] print(' Done!') print(' Train acc: ', train_acc[-1]) print('Validation acc: ', val_acc[-1]) print(' Overfit ratio: ', val_acc[-1] / train_acc[-1]) my_log(model, start_time, end_time, batch_size=batch_size, epochs=epochs, conv_arch=conv_arch, dense=dense, dropout=dropout, image_data_shape=image_data.shape, train_acc=train_acc, val_acc=val_acc, dirpath=result_location) if __name__ == 
'__main__': image_data, label_data = load_test_data() result_location = '../data/results/' arr_conv_arch = [[(32, 1), (64, 0), (128, 0)], [(32, 0), (64, 1), (128, 0)], [(32, 0), (64, 0), (128, 1)], [(32, 2), (64, 0), (128, 0)], [(32, 3), (64, 0), (128, 0)], [(32, 1), (64, 1), (128, 1)], [(32, 2), (64, 2), (128, 1)], [(32, 3), (64, 2), (128, 1)], [(32, 3), (64, 3), (128, 3)]] arr_dense = [[64, 2], (256,2),(256,4),(512,2),(512,4)] arr_dropouts = [0.2, 0.3, 0.4, 0.5] arr_epochs = [1, 10, 40] arr_batch_size = [50, 256] validation_split = 0.2 patience = 5 #train(image_data, label_data, arr_conv_arch[8], arr_dense[0], arr_dropouts[0], 1, arr_batch_size[0], validation_split, patience) #train(image_data, label_data, arr_conv_arch[1], arr_dense[1], arr_dropouts[0], arr_epochs[0], arr_batch_size[0], validation_split, patience) for conv_arch in arr_conv_arch: for dense in arr_dense: for dropout in arr_dropouts: for epochs in arr_epochs: for batch_size in arr_batch_size: print_architecture(image_data.shape, label_data.shape, batch_size, dropout, epochs, conv_arch, dense) train(image_data, label_data, conv_arch, dense, dropout, epochs, batch_size, validation_split, patience) </code></pre> <p><strong>deepCNN.py</strong></p> <pre><code>from __future__ import absolute_import from __future__ import division from __future__ import print_function import os # disable some warnings os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' from keras.layers import Conv2D, MaxPooling2D from keras.layers import Dropout, Flatten, Dense from keras.models import Sequential from keras.layers.core import K K.set_learning_phase(1) def create_model(image_data_train, label_data_train, conv_arch=[(32, 1)], dense=[32, 1], dropout=0.5): image_data_train = image_data_train.astype('float32') # Define model: model = Sequential() model.add(Conv2D(conv_arch[0][0], kernel_size=(3, 3), strides=3, padding='same', activation='relu', data_format='channels_first', input_shape=(1, image_data_train.shape[2], image_data_train.shape[3]))) if (conv_arch[0][1] - 1) != 0: for i in range(conv_arch[0][1] - 1): model.add(Conv2D(conv_arch[0][0], kernel_size=(3, 3), strides=1, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=2)) if conv_arch[1][1] != 0: for i in range(conv_arch[1][1]): model.add(Conv2D(conv_arch[1][0], kernel_size=(3, 3), strides=1, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=2)) if conv_arch[2][1] != 0: for i in range(conv_arch[2][1]): model.add(Conv2D(conv_arch[2][0], kernel_size=(3, 3), strides=1, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=2)) # Print data shape through network for debugging for z in range(len(model.layers)): print("{} : {} --&gt; {}".format(z, model.layers[z].input_shape, model.layers[z].output_shape)) model.add(Flatten()) # this converts 3D feature maps to 1D feature vectors if dense[1] != 0: for i in range(dense[1]): model.add(Dense(dense[0], activation='relu')) if dropout: model.add(Dropout(dropout)) prediction = model.add(Dense(label_data_train.shape[1], activation='softmax')) # optimizer: model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model </code></pre> <p><strong>log.py</strong></p> <pre><code>import os import pandas as pd def print_architecture(image_data_shape, label_data_shape, batch_size, dropout, epochs, conv_arch, dense): print(' image data shape: ', image_data_shape) # (n_sample, 1, 48, 48) print(' label data shape: ', label_data_shape) # (n_sample, n_categories) print(' img size: ', 
image_data_shape[2], image_data_shape[3]) print(' batch size: ', batch_size) print(' epochs: ', epochs) print(' dropout: ', dropout) print('conv architect: ', conv_arch) print('neural network: ', dense) def time_difference(start, end): return '{}_min_{}_sec'.format(end.time().minute - start.time().minute, end.time().second - start.time().second) def save_config(formatted_val_acc, start_time, config, dirpath='../data/results/'): with open(dirpath + formatted_val_acc + '-' + 'config_log.txt', 'a') as f: f.write(start_time.isoformat() + '\n') f.write(str(config) + '\n') def save_model(formatted_val_acc, start_time, json_string, dirpath='../data/results/'): with open(dirpath + formatted_val_acc + '-' + start_time.isoformat() + '_model.txt', 'w') as f: f.write(json_string) def save_result(start_time, end_time, train_acc, val_acc, conv_arch, batch_size, epochs, dense, dropout, dirpath='../data/results/'): with open(dirpath + 'result_log.txt', 'a') as f: f.write(start_time.isoformat() + ' -&gt; ' + end_time.isoformat() + '\n') f.write(' batch size: ' + str(batch_size) + '\n epochs: ' + str(epochs) + '\n dropout: ' + str(dropout) + '\n conv: ' + str(conv_arch) + '\n dense: ' + str(dense) + '\n - Train acc: ' + str(train_acc[-1]) + '\n - Val acc: ' + str(val_acc[-1]) + '\n - Ratio: ' + str(val_acc[-1] / train_acc[-1]) + '\n\n') # def append_to_result_log(text): # with open('../data/results/result_log.txt', 'a') as f: # f.write('\n' + text + '\n') def my_log(model, start_time, end_time, batch_size, epochs, conv_arch, dense, dropout, image_data_shape, train_acc, val_acc, dirpath): if (val_acc[-1] &gt; 0.5): formatted_val_acc = "{:.4f}".format(val_acc[-1]) model.save_weights('../data/weights/{}-{}'.format(formatted_val_acc, start_time)) save_model(formatted_val_acc, start_time, model.to_json(), dirpath) save_config(formatted_val_acc, start_time, model.get_config(), dirpath) save_result(start_time, end_time, train_acc, val_acc, conv_arch, batch_size, epochs, dense, dropout, dirpath) save_as_csv(start_time, end_time, train_acc, val_acc, conv_arch, batch_size, epochs, dense, dropout, image_data_shape, dirpath) def save_as_csv(start_time, end_time, train_acc, val_acc, conv_arch, batch_size, epochs, dense, dropout, image_data_shape, dirpath): data = { 'start_time': start_time, 'end_time': end_time, 'duration': time_difference(start_time, end_time), 'batch_size': batch_size, 'epochs': epochs, 'conv_arch': conv_arch, 'dense': dense, 'dropout': dropout, 'image_width': image_data_shape[2], 'image_height': image_data_shape[3], 'train_acc': train_acc, 'val_acc': val_acc } df = pd.DataFrame(dict([(k, pd.Series(v)) for k, v in data.iteritems()])) filepath = dirpath + 'results.csv' if not os.path.isfile(filepath): df.to_csv(filepath, index=False, sep='#', escapechar='^', header='column_names') else: # else it exists so append without writing the header df.to_csv(filepath, index=False, sep='#', escapechar='^', mode='a', header=False) </code></pre>
<p>I had a similar problem. The issue appeared when using TensorBoard callback with the <code>histogram_freq</code> parameter greater than 0.</p> <p>Clearing tensorflow session right before creating the model fixed it</p> <pre><code>import keras.backend as K K.clear_session() </code></pre> <p>(from this <a href="https://github.com/keras-team/keras/issues/4417#issuecomment-384502287" rel="nofollow noreferrer" title="issue">issue</a> in keras)</p>
python|tensorflow|keras
4
8,714
45,557,110
Force Pandas to keep multiple columns with the same name
<p>I'm building a program that collects data and adds it to an ongoing excel sheet weekly (read_excel() and concat() with the new data). The issue I'm having is that I need the columns to have the same name for presentation (it doesn't look great with x.1, x.2, ...). </p> <p>I only need this on the final output. Is there any way to accomplish this? Would it be too time consuming to modify pandas? </p>
<p>You can add spaces to the end of the column name. It will appear the same in a Excel, but pandas can distinguish the difference.</p> <pre><code>import pandas as pd df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], columns=['x','x ','x ']) df x x x 0 1 2 3 1 4 5 6 2 7 8 9 </code></pre>
python|excel|pandas
1
8,715
54,281,633
How to set a color for a specific row in pandastable, using python 3.7
<p>I have created a simple pandastable form in Python, but I have some problems coloring the rows.</p> <p>I have tried the following call from the documentation, but it does not seem to work.</p> <pre><code>pt.setRowColors(rows=rows1, clr="red") </code></pre> <p>Here is my code:</p> <pre><code>import tkinter as tk from pandastable import Table # rows1 is a list of row numbers I would like to color app = tk.Tk() f = tk.Frame(app) f.pack(fill=tk.BOTH,expand=1) pt = Table(f, dataframe=myData, showtoolbar=False, showstatusbar=False) pt.show() pt.setRowColors(rows=rows1, clr="red") pt.redraw() </code></pre> <p>I wanted 30 rows to have a red background, but it does nothing. I do not even get an error to go by...</p>
<p>The <a href="https://pandastable.readthedocs.io/en/latest/pandastable.html#pandastable.core.Table.setRowColors" rel="nofollow noreferrer">pandastable API docs</a> suggest you should use a Hex value for the colour:</p> <blockquote> <p>setRowColors(rows=None, clr=None, cols=None)[source] Set rows color from menu. :param rows: row numbers to be colored :param clr: color in hex :param cols: column numbers, can also use ‘all’</p> </blockquote> <p>So, according to this, the following should color all rows red in row1:</p> <pre><code>setRowColors(rows=rows1, clr='#FF0000', cols=None) </code></pre>
python|pandas
0
8,716
54,346,347
Getting pandas column name based on bool mask
<p>I have two dataframes, <code>df1</code> and <code>df2</code>, where one value is changed in <code>df2</code>. I'm trying to get the column name for the value that changed.</p> <p>df1</p> <pre><code> type method 0 variable method1 1 variable method1 2 variable method1 3 variable method1 </code></pre> <p>df2</p> <pre><code> type method 0 variable method1 1 variable method1 2 variable method1 3 timeseries method1 </code></pre> <p>find the changes:</p> <pre><code>changes = df1.ne(df2) </code></pre> <p>changes:</p> <pre><code> type method 0 False False 1 False False 2 False False 3 True False </code></pre> <p>How would you get the column name for the column that changed?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a> for test at least one <code>True</code> per column and then filter columns names:</p> <pre><code>print (changes.any()) type True method False dtype: bool print (changes.columns[changes.any()]) Index(['type'], dtype='object') </code></pre>
python|pandas
4
8,717
71,173,925
Sort data across columns per row and return column names in new variable
<p>I have a table with 3 columns - a, b, and c. How do I add a 4th column (d) which stores the column names of dates in sorted order (across columns) while ignoring the NaT cells - example shown below:</p> <pre><code> a b c d |:------------:|:------------:|:-------------:|:-----------:| | 2022-01-15 | 2022-01-09 | 2022-01-13 | [b, c, a] | | 2022-02-11 | NaT | 2022-02-20 | [a, c] | | 2022-02-15 | 2022-02-14 | NaT | [b, a] | </code></pre> <p>For the first row, value in column d is [b,c,a] since the value in b &lt; value in c &lt; value in a</p>
<p>Use <code>np.argsort</code> on the non-missing values and map the sorted positions back to the row's index (the column names) in a lambda function:</p> <pre><code>df['d'] = df.apply(lambda x: list(x.dropna().index[np.argsort(x.dropna())]), axis=1) </code></pre> <p>Or sort per row, remove NaNs and convert the index to lists:</p> <pre><code>df['d'] = df.apply(lambda x: x.sort_values().dropna().index.tolist(), axis=1) </code></pre> <p>Or reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> to remove missing values, sort by row index and value, and aggregate <code>list</code>s:</p> <pre><code>df['d'] = (df.stack() .rename_axis(['idx','c']) .reset_index(name='val') .sort_values(['idx', 'val']) .groupby('idx')['c'] .agg(list)) </code></pre> <hr /> <pre><code>print (df) a b c d 0 2022-01-15 2022-01-09 2022-01-13 [b, c, a] 1 2022-02-11 NaT 2022-02-20 [a, c] 2 2022-02-15 2022-02-14 NaT [b, a] </code></pre>
python-3.x|pandas
1
8,718
71,093,763
Pandas - assign incremented value if condition in other column is matched
<p>This is my dataset:</p> <pre class="lang-py prettyprint-override"><code># dataset DATE NUMBER 0 10. 9. 2002 NaN 1 10. 9. 2002 8.0 2 10. 9. 2002 9.0 3 10. 9. 2002 NaN 4 10. 9. 2002 11.0 </code></pre> <p>I would like to add a new column 'T_ID' where I will store number that increments every time the value in 'NUMBER' column is NaN.</p> <p>My expected table should look like this:</p> <pre class="lang-py prettyprint-override"><code># dataset DATE NUMBER T_ID 0 10. 9. 2002 NaN 1 1 10. 9. 2002 8.0 1 2 10. 9. 2002 9.0 1 3 10. 9. 2002 NaN 2 4 10. 9. 2002 11.0 2 </code></pre> <p>Thanks for all your answers.</p>
<p>Use</p> <pre><code>df['T_ID'] = df.NUMBER.isna().cumsum() </code></pre> <p>This uses <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isna.html" rel="nofollow noreferrer"><code>isna()</code></a> to get an array with bools. This can be used by <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>cumsum()</code></a> which returns a cumulative sum over this boolean array.</p>
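<p>A small sketch reproducing the question's example, with the intermediate values shown as comments: </p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'DATE': ['10. 9. 2002'] * 5, 'NUMBER': [np.nan, 8.0, 9.0, np.nan, 11.0]}) df['T_ID'] = df.NUMBER.isna().cumsum() # NUMBER.isna() -&gt; [True, False, False, True, False] # T_ID -&gt; [1, 1, 1, 2, 2] </code></pre>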
pandas|dataframe
3
8,719
71,232,640
Getting "AttributeError: Can only use .str accessor with string values" while transposing the data in dataframe
<p>I have one source file with structure as below:</p> <pre><code>ID|RcrdId|SrcId|Name|Address row1|r1|src1#src2|val1#val2|val4#val5 row2|r2|src2#src1|val11#val12|val14#val15 row3|r3|src1|val44|val23 </code></pre> <p>I need to include values in Name and Address fields only for Src2 value which is present in SrcID column, in case there is no value for SRC2, then null values should be populated in the Name and address field like below(RecordID=r3):</p> <pre><code>ID|RcrdId|SrcId|Name|Address row1|r1|src2|val2|val5 row2|r2|src2|val11|val14 row3|r3||| </code></pre> <p>I have this sample code, but I am not able to handle the record r3 where src2 is not present and I am getting following error : AttributeError: Can only use .str accessor with string values!</p> <pre><code>from itertools import zip_longest cols = ['SrcId', 'Name', 'Address'] df[cols] = (df[cols].fillna(#) .apply(lambda c: c.str.split('#')) .apply(lambda r: next(filter(lambda x: x[0]=='src2', zip_longest(*r)), float('nan')), axis=1, result_type='expand') ) </code></pre>
<p>Split your values to get a list then explode it into multiple rows. Filter out your rows and fill missing values with original ones:</p> <pre><code>cols = ['SrcId', 'Name', 'Address'] df[cols] = df[cols].apply(lambda x: x.str.split('#')).explode(cols) \ .query(&quot;SrcId == 'src2'&quot;).reindex(df.index).fillna(df[cols]) print(df) # Output ID RcrdId SrcId Name Address 0 row1 r1 src2 val2 val5 1 row2 r2 src2 val11 val14 2 row3 r3 src1 val44 val23 </code></pre> <p>Step-by-step:</p> <pre><code>&gt;&gt;&gt; out = df[cols].apply(lambda x: x.str.split('#')) SrcId Name Address 0 [src1, src2] [val1, val2] [val4, val5] 1 [src2, src1] [val11, val12] [val14, val15] 2 [src1] [val44] [val23] &gt;&gt;&gt; out = out.explode(cols) SrcId Name Address 0 src1 val1 val4 0 src2 val2 val5 1 src2 val11 val14 1 src1 val12 val15 2 src1 val44 val23 &gt;&gt;&gt; out = out.query(&quot;SrcId == 'src2'&quot;) SrcId Name Address 0 src2 val2 val5 1 src2 val11 val14 &gt;&gt;&gt; out = out.reindex(df.index) SrcId Name Address 0 src2 val2 val5 1 src2 val11 val14 2 NaN NaN NaN &gt;&gt;&gt; out = out.fillna(df[cols]) SrcId Name Address 0 src2 val2 val5 1 src2 val11 val14 2 src1 val44 val23 </code></pre>
python|pandas|dataframe
1
8,720
52,086,175
How to split a large dataframe into multiple dataframes based on the first 3 characters of the column names?
<p>I have a huge dataframe (2077 columns) that I would like to break down into multiple dataframes (78 exactly). Each column name starts with a 3 letter acronym (coc, cou, wam etc.). How would I split up the master dataframe into multiple smaller data frames based on the first 3 letters of the column names?</p> <p>Thanks in advance. </p>
<p>Call <code>groupby</code> with a lambda and iterate over the group object to separate them out into a list of DataFrames:</p> <pre><code>df_list = [g for _, g in df.groupby(by=lambda x: x[:3], axis=1)] </code></pre> <p>If you want a mapping of {prefix : dataFrame} instead, you can create a dictionary:</p> <pre><code>df_dict = {k: g for k, g in df.groupby(by=lambda x: x[:3], axis=1)} </code></pre>
python|string|pandas|dataframe|multiple-columns
1
8,721
52,306,416
tf.summary.image seems not to work for estimator prediction
<p>I want to visualize my input image with tf.estimator during prediction, but it seems tf.summary.image does not save the image. It does work for training.</p> <p>This is my code in model_fn:</p> <pre><code>... summary_hook = tf.train.SummarySaverHook( save_secs=2, output_dir='summary', scaffold=tf.train.Scaffold(summary_op=tf.summary.merge_all())) #summary_op=tf.summary.merge_all()) tf.summary.histogram("logit",logits) tf.summary.image('feat', feat) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode, predictions=preds, prediction_hooks=[summary_hook]) ... </code></pre> <p>and this is my prediction code:</p> <pre><code>config = tf.estimator.RunConfig(save_summary_steps=0) estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='logs', config=config) preds = estimator.predict(input_fn=eval_input_fn) </code></pre> <p>Is there something wrong with how I am using <code>tf.train.SummarySaverHook</code>?</p>
<p>I would assume that you need to put the summary ops (histogram/image) <em>before</em> calling <code>merge_all</code> so that <code>merge_all</code> actually has something to merge.</p> <pre><code>... tf.summary.histogram("logit",logits) tf.summary.image('feat', feat) summary_hook = tf.train.SummarySaverHook( save_secs=2, output_dir='summary', scaffold=tf.train.Scaffold(summary_op=tf.summary.merge_all())) #summary_op=tf.summary.merge_all()) if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode, predictions=preds, prediction_hooks=[summary_hook]) ... </code></pre>
python|tensorflow|tensorflow-estimator
2
8,722
52,200,710
Pandas+seaborn faceting with multidimensional dataframes
<p>In Python <code>pandas</code>, I need to do a facet grid from a multidimensional <code>DataFrame</code>. In columns <code>a</code> and <code>b</code> I hold scalar values, which represent conditions of an experiment. In columns <code>x</code> and <code>y</code> instead I have two numpy arrays. Column <code>x</code> is the x-axis of the data and column <code>y</code> is the value of a function corresponding to <code>f(x)</code>. Obviously both <code>x</code> and <code>y</code> have the same number of elements.</p> <p>I now would like to do a <strong>facet grid</strong> with rows and columns specifying the conditions, and in every cell of the grid, plot the value of column <code>y</code> vs column <code>x</code>.</p> <p>This could be a minimal working example:</p> <pre><code>import pandas as pd d = [0]*4 # initialize a list with 4 elements d[0] = {'x':[1,2,3],'y':[4,5,6],'a':1,'b':2} # then fill these elements d[1] = {'x':[3,1,5],'y':[6,5,1],'a':0,'b':3} d[2] = {'x':[3,1,5],'y':[6,5,1],'a':1,'b':3} d[3] = {'x':[3,1,5],'y':[6,5,1],'a':0,'b':2} pd.DataFrame(d) # create the pandas dataframe </code></pre> <p>How can I use already existing faceting functions to address the issue of plotting <code>y vs x</code> grouped by the conditions <code>a</code> and <code>b</code>?</p> <p>Since I need to apply this function to general datasets with different column names, I would like to avoid resorting to hard-coded solutions, but rather see whether it is possible to extend the <code>seaborn FacetGrid</code> function to this kind of problem.</p>
<p>I think the best way to go is to split the nested arrays first and then create a facet grid with seaborn.</p> <p>Thanks to this post (<a href="https://stackoverflow.com/questions/38372016/split-nested-array-values-from-pandas-dataframe-cell-over-multiple-rows">Split nested array values from Pandas Dataframe cell over multiple rows</a>) I was able to split the nested array in your dataframe:</p> <pre><code>unnested_lst = [] for col in df.columns: unnested_lst.append(df[col].apply(pd.Series).stack()) result = pd.concat(unnested_lst, axis=1, keys=df.columns).fillna(method='ffill') </code></pre> <p>Then you can make the facet grid with this code:</p> <pre><code>import seaborn as sbn fg = sbn.FacetGrid(result, row='b', col='a') fg.map(plt.scatter, "x", "y", color='blue') </code></pre>
python|pandas|seaborn|facet
2
8,723
60,372,163
scipy.linalg.expm of hermitian is not special unitary
<p>If I have a (real) hermitian matrix, for instance </p> <pre><code>H = matrix([[-2. , 0.5, 0.5, 0. ], [ 0.5, 2. , 0. , 0.5], [ 0.5, 0. , 0. , 0.5], [ 0. , 0.5, 0.5, 0. ]]) </code></pre> <p>(This matrix is hermitian; it is the Hamiltonian of a 2-spin Ising chain with coupling to an external field.)</p> <p>Then there exists a special orthogonal transformation <code>O</code> (preserves length of column and row vectors of a matrix) s.t.</p> <p><code>H = O.transpose() @ D @ O</code></p> <p>where <code>D</code> is diagonal. For the matrix exponential this leads to</p> <p><code>T = expm(1j * H) = O.transpose() @ expm(1j * D) @ O</code></p> <p>so all column/row vectors of <code>T</code> must have length <code>1</code>.</p> <p>If I use <code>scipy.linalg.expm</code> this property is violated:</p> <pre><code>In [1]: import numpy as np In [2]: from numpy import matrix In [3]: from scipy.linalg import expm In [4]: H = matrix([[-2. , 0.5, 0.5, 0. ], ...: [ 0.5, 2. , 0. , 0.5], ...: [ 0.5, 0. , 0. , 0.5], ...: [ 0. , 0.5, 0.5, 0. ]]) In [5]: T = expm(1j * H) In [6]: np.sum(np.abs(T[0])) Out[6]: 1.6099093263121051 In [7]: np.sum(np.abs(T[1])) Out[7]: 1.609909326312105 In [8]: np.sum(np.abs(T[2])) Out[8]: 1.7770244703003222 In [9]: np.sum(np.abs(T[3])) Out[9]: 1.7770244703003222 </code></pre> <p>Is this a bug in <code>expm</code> or am I making a mistake here?</p>
<p>you're using the wrong norm. Use</p> <pre><code>np.sqrt( np.sum( np.abs(T[0])**2 ) ) </code></pre> <p>Or even in a shorter way</p> <pre><code>np.linalg.norm( T[0] ) </code></pre>
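<p>A short check (reusing <code>H</code> and <code>T</code> from the question) that with the Euclidean norm every row of <code>T = expm(1j*H)</code> indeed has length 1, i.e. the matrix is unitary up to numerical precision: </p> <pre><code>import numpy as np T = np.asarray(T) # work with a plain ndarray print(np.linalg.norm(T, axis=1)) # row norms, expected [1. 1. 1. 1.] print(np.allclose(T @ T.conj().T, np.eye(4))) # expected True </code></pre>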
python|numpy|matrix|scipy|linear-algebra
2
8,724
60,632,054
Filter rows from multiple date columns based on the specific year and month in Pandas
<p>For the given dataframe as follows:</p> <pre><code> id start_date end_date 0 1 2014/5/26 2014/5/27 1 2 2014/6/27 2014/6/28 2 3 2014/7/20 2014/7/21 3 4 2014/9/12 2014/9/13 4 5 2014/10/10 2014/10/11 5 6 2020/3/20 2020/4/21 6 7 2020/4/10 2020/4/11 7 8 2020/4/15 2020/4/16 8 9 2020/3/23 2020/3/24 9 10 2020/4/6 2020/4/7 </code></pre> <p>I want to filter rows which either <code>start_date</code> or <code>end_date</code> is in the range of <code>2020-02, 2020-03, 2020-04</code>, thanks for sharing other optional solutions besides mine.</p> <p>The lookforwarding result will like this:</p> <pre><code> id start_date end_date 5 6 2020-03-20 2020-04-21 6 7 2020-04-10 2020-04-11 7 8 2020-04-15 2020-04-16 9 10 2020-04-06 2020-04-07 </code></pre>
<p>I think <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>DataFrame.apply</code></a> is better here, for processing by columns; <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>DataFrame.applymap</code></a> is for elementwise processing:</p> <pre><code>df[['start_date', 'end_date']] = (df[['start_date', 'end_date']]
                                    .apply(lambda x: pd.to_datetime(x, format = '%Y/%m/%d')))
</code></pre> <p>Then filter by month periods using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a>:</p> <pre><code>df = (df[(df['start_date'].dt.to_period('m')== '2020-04') |
         (df['end_date'].dt.to_period('m')== '2020-04')])
print (df)
   id start_date   end_date
5   6 2020-03-20 2020-04-21
6   7 2020-04-10 2020-04-11
7   8 2020-04-15 2020-04-16
9  10 2020-04-06 2020-04-07
</code></pre> <p>A solution that loops over the columns is possible with <a href="https://stackoverflow.com/a/20528566/2901002"><code>np.logical_or.reduce</code></a>; it is the better option if there are more columns:</p> <pre><code>c = ['start_date', 'end_date']
df[c] = df[c].apply(lambda x: pd.to_datetime(x, format = '%Y/%m/%d'))

df = df[np.logical_or.reduce([df[x].dt.to_period('m')== '2020-04' for x in c])]
print (df)
   id start_date   end_date
5   6 2020-03-20 2020-04-21
6   7 2020-04-10 2020-04-11
7   8 2020-04-15 2020-04-16
9  10 2020-04-06 2020-04-07
</code></pre>
python-3.x|pandas|dataframe|datetime
1
8,725
60,411,332
Unpacking list of lists of dicts column in Pandas dataframe
<p>I have <code>df_in</code> where one of the columns is a <code>list</code> of <code>lists</code> of <code>dicts</code>:</p> <pre><code>df_in = pd.DataFrame({ 'A': [1, 2, 3], 'B': [ [{'B1': 1, 'B2': 2, 'B3': 3}, {'B1': 4, 'B2': 5, 'B3': 6}, {'B1': 7, 'B2': 8, 'B3': 9}], [{'B1': 10, 'B2': 11, 'B3': 12}], [{'B1': 13, 'B2': 14, 'B3': 15}, {'B1': 16, 'B2': 17, 'B3': 18}] ], 'C': ['a', 'b', 'c'] }) df_in A B C 0 1 [{'B1': 1, 'B2': 2, 'B3': 3}, {'B1': 4, 'B2': ... a 1 2 [{'B1': 10, 'B2': 11, 'B3': 12}] b 2 3 [{'B1': 13, 'B2': 14, 'B3': 15}, {'B1': 16, 'B... c </code></pre> <p>What I want to achieve is a general approach to unpacking <code>B</code> so that (1) every unique key (<code>B1</code>, <code>B2</code> and <code>B3</code> in this case) gets put into a column. And (2) stack mutiple lists in each row as new observations. I think an example output explains this best:</p> <pre><code>df_out = pd.DataFrame({ 'A': [1, 1, 1, 2, 3, 3], 'B1': [1, 4, 7, 10, 13, 16], 'B2': [2, 5, 8, 11, 14, 17], 'B3': [3, 6, 9, 12, 15, 18], 'C': ['a', 'a', 'a', 'b', 'c', 'c'] }) df_out A B1 B2 B3 C 0 1 1 2 3 a 1 1 4 5 6 a 2 1 7 8 9 a 3 2 10 11 12 b 4 3 13 14 15 c 5 3 16 17 18 c </code></pre> <p>Any ideas?</p>
<p>Use dictionary comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> for extract column:</p> <pre><code>df1 = pd.concat({k: pd.DataFrame(x) for k, x in df_in.pop('B').items()}) print (df1) B1 B2 B3 0 0 1 2 3 1 4 5 6 2 7 8 9 1 0 10 11 12 2 0 13 14 15 1 16 17 18 </code></pre> <p>Add original data by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> and for correct order extract and append <code>C</code> column:</p> <pre><code>df = df_in.join(df1.reset_index(level=1, drop=True)).reset_index(drop=True) df['C'] = df.pop('C') print (df) A B1 B2 B3 C 0 1 1 2 3 a 1 1 4 5 6 a 2 1 7 8 9 a 3 2 10 11 12 b 4 3 13 14 15 c 5 3 16 17 18 c </code></pre> <p>Alternative solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>, for correct order is used <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>DataFrame.insert</code></a>:</p> <pre><code>df1 = pd.concat([pd.DataFrame(v['B']).assign(A=v['A'], C=v['C']) for k, v in df_in.to_dict('index').items()], ignore_index=True) df1.insert(0, 'A', df1.pop('A')) print (df1) A B1 B2 B3 C 0 1 1 2 3 a 1 1 4 5 6 a 2 1 7 8 9 a 3 2 10 11 12 b 4 3 13 14 15 c 5 3 16 17 18 c </code></pre>
python|pandas|dataframe
4
8,726
60,410,049
Holoviews Image with HoverTool tooltip from different datasource
<p>I have a 100x100 km grid GeoDataFrame (Mollweide) that I am plotting as a <code>gv.Image</code> with classified values (8 Categories) through Holoviews/Bokeh:</p> <pre class="lang-py prettyprint-override"><code># convert GeoDataFrame to xarray object xa_dataset = gv.Dataset(grid.to_xarray(), vdims=f'{metric}_cat', crs=crs.Mollweide()) # convert to gv.Image img_grid = xa_dataset.to(gv.Image) # custom tooltip hover = HoverTool( tooltips=[ ("index", "$index"), ("data (using $) (x,y)", "($x, $y)"), ("data (using @) (x,y)", "(@x, @y)"), ("canvas (x,y)", "($sx, $sy)"), ("Category", "@image"), ]) image_layer = img_grid.opts( cmap=cmap_with_nodata, colorbar=True, colorbar_opts={ 'formatter': formatter, 'major_label_text_align':'left'}, tools=[hover], # optional unpack of width and height **{k: v for k, v in optional_kwargs.items() if v is not None} ) # combine layers and set global plotting options gv_layers = (image_layer * gf.coastline * gf.borders).opts( projection=crs.Mollweide(), global_extent=True, responsive=responsive, finalize_hooks=[set_active_tool], title_format=title) </code></pre> <p>I can show the Image Values as HoverTool Tooltips, the above will lead to the following tooltip:</p> <p><a href="https://i.stack.imgur.com/g6iCo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g6iCo.png" alt="enter image description here"></a></p> <p>However, to make this more usefull, I would like to show the exact values for each bin from my original GeoDataFrame, not the classified values from the image (the class reference number is shown above with <code>Category</code> in the tooltip). <code>canvas (x,y)</code> appears to refer to my x and y bins from my GeoDataFrame. Is it possible to make the tooltip query my original GeoDataFrame to show the exact values of each bin, not the classified ones?</p> <p>I tried to create an additional <code>bokeh.plotting.ColumnDataSource</code>:</p> <pre class="lang-py prettyprint-override"><code>grid_df = pd.DataFrame(grid[[col for col in grid.columns if col != grid._geometry_column_name]]) source = ColumnDataSource.from_df(grid_df) </code></pre> <p>But I don't know how to add this source "invisible" over the <code>gv.Image</code> Layer only for the purpose of showing tooltips with exact values.</p> <p>I know this is somehow against the principle of bokeh that everything what is shown must also be included in the data. But in this case, adding exact tooltip information would increase usability of the interactive plot a lot in my context.</p>
<p>Found the answer myself: even when using <code>gv.Image</code>, I can specify additional <code>vdims</code>, e.g.:</p> <pre class="lang-py prettyprint-override"><code># xa_dataset from GeoDataFrame # with additional vdims xa_dataset = gv.Dataset( grid.to_xarray(), vdims=[f'{metric}_cat', 'postcount', 'usercount'], crs=crs.Mollweide()) # convert to gv.Image img_grid = xa_dataset.to(gv.Image) # custom tooltip hover = HoverTool( tooltips=[ ("Usercount", "@usercount{,f}"), ("Postcount", "@postcount{,f}") ]) </code></pre> <p><a href="https://i.stack.imgur.com/IRSR8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IRSR8.png" alt="enter image description here"></a></p> <p>I actually don't understand how these additional vdims are stored in the <code>gv.Image</code>, but it works!</p>
python|bokeh|geopandas|holoviews|geoviews
2
8,727
72,751,931
pandas: calculate the daily average, grouped by label
<p>I want to create a graph with one line per <code>label</code>, so in this example picture each line represents a distinct label.</p> <p><a href="https://i.stack.imgur.com/4jlue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jlue.png" alt="enter image description here" /></a></p> <p>The data looks something like this, where the x-axis is the datetime and the y-axis is the count.</p> <pre><code>datetime, count, label
1656140642, 12, A
1656140643, 20, B
1656140645, 11, A
1656140676, 1, B
</code></pre> <p>Because I have a lot of data, I want to aggregate it into 1-hour or even 1-day chunks.</p> <p>I'm able to generate the above picture with</p> <pre><code># df is the dataframe here, result from pandas.read_csv
df.set_index(&quot;datetime&quot;).groupby(&quot;label&quot;)[&quot;count&quot;].plot()
</code></pre> <p>and I can get a time-range average with</p> <pre><code>df.set_index(&quot;datetime&quot;).groupby(pd.Grouper(freq='2min')).mean().plot()
</code></pre> <p>but I'm unable to get both rules applied. Can someone point me in the right direction?</p>
<p>You can use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.pivot.html?highlight=pivot" rel="nofollow noreferrer"><code>.pivot</code></a> function (see the documentation) to create a convenient structure where <code>datetime</code> is the index and the different <code>labels</code> are the columns, with <code>count</code> as the values.</p> <pre class="lang-py prettyprint-override"><code>df.set_index('datetime').pivot(columns='label', values='count')
</code></pre> <p>output:</p> <pre class="lang-py prettyprint-override"><code>label          A     B
datetime
1656140642  12.0   NaN
1656140643   NaN  20.0
1656140645  11.0   NaN
1656140676   NaN   1.0
</code></pre> <p>Now that you have your data in this format, you can perform a simple aggregation over the index (with <code>groupby</code> / <code>resample</code> / whatever suits you), and it will be applied to each column separately. Plotting the results is then just plotting a different line for each column.</p>
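<p>To make the last step concrete, here is a minimal sketch (assuming <code>datetime</code> holds Unix timestamps in seconds, as in the sample data):</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt

df['datetime'] = pd.to_datetime(df['datetime'], unit='s')       # epoch seconds -> timestamps
wide = df.set_index('datetime').pivot(columns='label', values='count')

hourly = wide.resample('1H').mean()   # 1-hour buckets, averaged per label column

hourly.plot()                         # one line per label
plt.show()
</code></pre>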
python|pandas|dataframe|matplotlib
2
8,728
59,847,311
series.where on a series containing lists
<p>I have this series called <code>hours_by_analysis_date</code>, where the index is <code>datetime</code>s and the values are lists of ints. For example:</p> <pre><code>Index      |
01-01-2000 | [1, 2, 3, 4, 5]
01-02-2000 | [2, 3, 4, 5, 6]
01-03-2000 | [1, 2, 3, 4, 5]
</code></pre> <p>I want to return all the indices where the value is <code>[1, 2, 3, 4, 5]</code>, so it should return <code>01-01-2000</code> and <code>01-03-2000</code>.</p> <p>I tried <code>hours_by_analysis_date.where(hours_by_analysis_date==[1, 2, 3, 4, 5])</code>, but it gives me the error:</p> <p><code>{ValueError} lengths must match to compare</code></p>
<p>pandas gets confused between comparing two array-like objects as a whole and testing each element of the series for equality.</p> <p>You can use <code>apply</code> instead:</p> <pre><code>hours_by_analysis_date.apply(lambda elem: elem == [1,2,3,4,5])
</code></pre>
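<p>To get the matching indices themselves rather than the boolean mask, a short follow-up sketch (reusing the series name from the question):</p> <pre><code>mask = hours_by_analysis_date.apply(lambda elem: elem == [1, 2, 3, 4, 5])
hours_by_analysis_date[mask].index   # Index(['01-01-2000', '01-03-2000'], ...)
</code></pre>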
pandas|series
1
8,729
59,781,612
Concat dataframe without doubling the index
<p>I want to concat these 2 dataframes:</p> <pre><code>          circulating_supply
currency
BCH                 18225550
BTC                 18163250
ETH                109296900
QASH               350000000
XRP              43653780000

          circulating_supply
currency
BCH             1.822718e+07
BTC             1.816522e+07
ETH             1.093100e+08
QASH            3.500000e+08
XRP             4.365378e+10
</code></pre> <p>my code:</p> <pre><code>pd.concat([supp_bal, supp_prev], axis=1, sort=True)
</code></pre> <p>The output:</p> <pre><code>      circulating_supply  circulating_supply
BCH         1.822718e+07                 NaN
BCH                  NaN        1.822555e+07
BTC         1.816522e+07                 NaN
BTC                  NaN        1.816325e+07
ETH         1.093100e+08                 NaN
ETH                  NaN        1.092969e+08
QASH        3.500000e+08                 NaN
QASH                 NaN        3.500000e+08
XRP         4.365378e+10                 NaN
XRP                  NaN        4.365378e+10
</code></pre> <p>I would like the same output but without the duplicated index rows and the NaN values. Any contribution would be appreciated.</p>
<p>Join the dataframes (which does a left join on the indices by default) and specify a suffix for each column since they have the same name.</p> <pre><code>&gt;&gt;&gt; df1.join(df2, lsuffix='_1', rsuffix='_2') circulating_supply_1 circulating_supply_2 currency BCH 18225550 1.822718e+07 BTC 18163250 1.816522e+07 ETH 109296900 1.093100e+08 QASH 350000000 3.500000e+08 XRP 43653780000 4.365378e+10 </code></pre> <p>Also review: <a href="https://stackoverflow.com/questions/53645882/pandas-merging-101">Pandas Merging 101</a></p>
python|pandas
0
8,730
59,721,119
Installing OpenCV into PyCharm
<p>Idk if this is a stackoverflow-appropriate post so forgive if the question is misplaced. I'm trying to install OpenCV into my Pycharm IDE through the conda virtual environment. I typed <code>conda install -c conda-forge opencv</code> inside the PyCharm terminal and it has been doing this for 11 hours and God knows how many more to go.</p> <p><a href="https://i.stack.imgur.com/rzkF8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rzkF8.png" alt="enter image description here"></a></p> <p>Pycharm did this with PyTorch as well. Am I doing something wrong or is this normal?</p>
<p>While you can install packages directly in PyCharm by going to <code>file-&gt;settings</code>, selecting <code>Project Interpreter</code> and clicking on the '+' icon on the top right (see image), <a href="https://i.stack.imgur.com/kxuZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kxuZm.png" alt="enter image description here"></a> I would recommend creating a <code>requirements.txt</code> file in the root of your project and writing down all your required packages there. When a package is missing, PyCharm will automatically offer to install it for you.</p> <p>E.g. for installing OpenCV you can add the following to your <code>requirements.txt</code>:</p> <pre><code>opencv-python
</code></pre> <p>Or even specify the version that your project needs:</p> <pre><code>opencv-python==4.1.2
</code></pre> <p>Edit: the advantage of using a <code>requirements.txt</code> is that you can more easily port the project to another machine and re-install the packages if needed.</p>
opencv|intellij-idea|computer-vision|pycharm|pytorch
1
8,731
54,992,495
Count values occurrences in pandas and put the result in one single string
<p>My dataframe looks like this:</p> <pre><code>id column1 column2
a  x       l
a  x       n
a  y       n
b  y       l
b  y       m
</code></pre> <p>Currently, I generate value counts with this:</p> <pre><code>grouped = df.groupby('id')   # the dataframe grouped by the id column

def value_occurences(grouped, column_name):
    return (grouped[column_name].value_counts(normalize=False, dropna=False)
            .to_frame('count_'+column_name)
            .reset_index(level=1))

result = value_occurences(grouped, 'column1')
"""
&gt;&gt;&gt;result
id column1  count_column1
a  x        2
a  y        1
b  y        1
"""
</code></pre> <p>And I need to count value occurrences in this format:</p> <pre><code>id column1       column2
a  'x:2; y:1'    'l:1; n:2'
b  'y:1'         'l:1; m:1'
</code></pre> <p>How can I turn my result into that format?</p>
<p>You can first generate groups of the <code>df</code> by <code>df.groupby(['id'])</code> and apply <code>value_counts</code> to each group:</p> <pre class="lang-py prettyprint-override"><code>import io, pandas as pd def seqdict(x): return ', '.join('{}:{}'.format(*i) for i in sorted(x.items())) def value_occurences(df): return pd.DataFrame({c: {i: seqdict(d.iloc[:,j].value_counts().to_dict()) for i, d in df.groupby(by=['id']) } for j, c in enumerate(df.keys()) }) grouped = pd.read_table(io.StringIO("""id column1 column2 a x l a x n a y n b y l b y m """), sep='\s+') value_occurences(grouped) </code></pre> <p>Results:</p> <pre><code> column1 column2 a x:2, y:1 l:1, n:2 b y:2 l:1, m:1 </code></pre>
python|pandas
0
8,732
54,997,082
Creating Pandas Dataframe row on the basis of other column value
<p>I have a dataframe with three columns:</p> <pre><code>order_no product quantity
0        5bf69f  3
0        5beaba  2
1        5bwq21  1
1        5bf69f  1
</code></pre> <p>I want to split each row into one row per unit whenever the quantity value is greater than 1, like this:</p> <pre><code>order_no product quantity
0        5bf69f  1
0        5bf69f  1
0        5bf69f  1
0        5beaba  1
0        5beaba  1
1        5bwq21  1
1        5bf69f  1
</code></pre>
<p>First, unique index values are necessary, so if needed:</p> <pre><code>df = df.reset_index(drop=True)
</code></pre> <p>Then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.repeat.html" rel="nofollow noreferrer"><code>Index.repeat</code></a> on column <code>quantity</code> and expand the rows with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>, set the column to <code>1</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>, and finally create a unique index again with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p> <pre><code>df = df.loc[df.index.repeat(df['quantity'])].assign(quantity=1).reset_index(drop=True)
print (df)
   order_no product  quantity
0         0  5bf69f         1
1         0  5bf69f         1
2         0  5bf69f         1
3         0  5beaba         1
4         0  5beaba         1
5         1  5bwq21         1
6         1  5bf69f         1
</code></pre> <p>Using <code>numpy.repeat</code> is also possible, but numpy casts all data to object because of the <code>string</code> column:</p> <pre><code>print (pd.DataFrame(np.repeat(df.values,df.quantity,axis=0)).dtypes)
0    object
1    object
2    object
dtype: object
</code></pre>
python|pandas
3
8,733
49,552,204
How to open the log `global_step/sec` like `tf.estimator.Estimator` using MonitoredTrainingSession?
<p>I've met a small problem but I don't know how to deal with it.</p> <p>When I use <code>tf.estimator.Estimator</code>, it logs two lines each step, like:</p> <pre><code>INFO:tensorflow:global_step/sec: 1110.33
INFO:tensorflow:loss = 0.00026583532, step = 9376 (0.090 sec)
</code></pre> <p>But when I use <code>tf.train.MonitoredTrainingSession</code> along with <code>LoggingTensorHook</code>, there's only one line each step, without the info about <code>global_step/sec</code>:</p> <pre><code>INFO:tensorflow:step = 131, loss = 0.11608909, acc = 0.955 (0.282 sec)
</code></pre> <p>So I want to know how to enable the <code>global_step/sec</code> log.</p> <p>I find that <code>global_step/sec</code> seems to be controlled by the arg <code>log_step_count_steps</code>, but I have already set it.</p> <p>In addition, I also want to know the meaning of <code>global_step/sec</code>, whose annotation in <code>MonitoredTrainingSession</code> is "The frequency, in number of global steps, that the global step/sec is logged."</p>
<p>OK, I found the issue. The reason is that the logging-frequency arg of <code>tf.train.LoggingTensorHook</code> did not match the <code>log_step_count_steps</code> of <code>MonitoredTrainingSession</code>.</p> <p>Note that the two args must be the same; if they differ, e.g. 5 and 10, the logged <code>global_step/sec</code> values alternate between one low and one high value, which makes the <code>global_step/sec</code> graph meaningless.</p> <p>If someone knows the exact meaning of <code>global_step/sec</code>, please tell me.</p>
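<p>A minimal sketch of what "matching" means here (assuming the TF 1.x API, where the hook's frequency argument is <code>every_n_iter</code>; <code>loss</code>, <code>acc</code>, <code>train_op</code> and <code>ckpt_dir</code> stand in for your own tensors and paths):</p> <pre><code>hooks = [tf.train.LoggingTensorHook({'loss': loss, 'acc': acc}, every_n_iter=100)]

with tf.train.MonitoredTrainingSession(checkpoint_dir=ckpt_dir,
                                       hooks=hooks,
                                       log_step_count_steps=100) as sess:  # same value as every_n_iter
    while not sess.should_stop():
        sess.run(train_op)
</code></pre>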
python|tensorflow
1
8,734
49,646,959
How can I include missing items using groupby in Pandas?
<p>Let's say I have a dataframe with the following columns: date, time, day, month, year, description, price, type, manufacturer</p> <p>Using pandas and <code>value_counts()</code>, I can get the count for every unique item in a column:</p> <pre><code>df.manufacturer.value_counts() </code></pre> <p>Also, using groupby I can get the average price for every day in my data:</p> <pre><code>df.groupby("day").price.mean() </code></pre> <p>The issue with that is there are 7 days in total but in my data there might be only 5 or 6, so I need to add the missing days with the mean being zero or None.</p> <p>In general, if I have a specific list how do I include the missing items when I do something like value_counts or groupby operations ?</p>
<p>I think you can convert the days to <code>categorical</code>s; then <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html#operations" rel="nofollow noreferrer"><code>groupby + mean</code></a> yields <code>NaN</code>s for the missing categories:</p> <pre><code>df = pd.DataFrame({
    'day': ['Monday','Tuesday','Tuesday','Tuesday','Thursday'],
    'price': list(range(5))
})
print (df)
        day  price
0    Monday      0
1   Tuesday      1
2   Tuesday      2
3   Tuesday      3
4  Thursday      4

cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
df['day'] = pd.Categorical(df['day'], categories=cats, ordered=True)

print(df.groupby("day", as_index=False).price.mean())
         day  price
0     Monday    0.0
1    Tuesday    2.0
2  Wednesday    NaN
3   Thursday    4.0
4     Friday    NaN
5   Saturday    NaN
6     Sunday    NaN
</code></pre> <p>Another solution is to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a> by all possible categories:</p> <pre><code>cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
print(df.groupby("day").price.mean().reindex(cats))
day
Monday       0.0
Tuesday      2.0
Wednesday    NaN
Thursday     4.0
Friday       NaN
Saturday     NaN
Sunday       NaN
Name: price, dtype: float64

print(df.groupby("day").price.mean().reindex(cats, fill_value=0))
day
Monday       0
Tuesday      2
Wednesday    0
Thursday     4
Friday       0
Saturday     0
Sunday       0
Name: price, dtype: int64
</code></pre>
python|pandas
3
8,735
49,394,681
Reading multiple CSVs into different arrays
<p>Update. Here is my code. I am importing 400 csv files into one list. Each csv file is 200 rows and 5 columns. My end goal is to sum the values from the 4th column of each row of each csv file. The code below imports all the csv files; however, I am struggling to isolate the 4th column of data of each csv file from the large list.</p> <pre><code>for i in range(1, 5, 1):
    data = list()
    for i in range(1, 400, 1):
        datafile = 'particle_path_%d' % i
        data.append(np.genfromtxt(datafile, delimiter = "", skip_header=2))
        print datafile
</code></pre> <p>I want to read 100 csv files into 100 different arrays in python. For example:</p> <p>array1 will have csv1, array2 will have csv2, etc.</p> <p>What's the best way of doing this? I am appending to a list right now, but I have one big list which is proving difficult to split into smaller lists. My ultimate goal is to be able to perform different operations on each array (add, subtract numbers etc).</p>
<p>So you have a list of 100 arrays. What can you tell us about their shapes?</p> <p>If they all have the same shape you could use</p> <pre><code>arr = np.stack(data) </code></pre> <p>I expect <code>arr.shape</code> will be (100,200,5)</p> <pre><code>fthcol = arr[:,:,3] # 4th column </code></pre> <p>If they aren't all the same, then a simple list comprehension will work</p> <pre><code>fthcol = [a[:,3] for a in data] </code></pre> <p>Again, depending on the shapes you could <code>np.stack(fthcol)</code> (choose your axis).</p> <p>Don't be afraid to iterate over the elements of the <code>data</code> list. With 100 items the cost won't be prohibitive.</p>
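<p>Tying this back to the stated end goal (summing the 4th column of every file) — a brief sketch, assuming all files stack cleanly into the <code>(n_files, 200, 5)</code> shape above:</p> <pre><code>arr = np.stack(data)                   # shape (n_files, 200, 5)
col_sums = arr[:, :, 3].sum(axis=1)    # one sum of the 4th column per csv file
</code></pre>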
python|arrays|list|loops|numpy
0
8,736
73,206,882
How to append so that the Id continues from the last maximum number
<p>I have 2 dataframes and I want to append one to the other. After appending, I want the Id column to contain continuous numbers. Example:</p> <p>Df1:</p> <pre><code>| Id  | value |
|-----|-------|
| 1.  | ABC   |
| 2.  | Bcs.  |
</code></pre> <p>Df2:</p> <pre><code>| Id  | value |
|-----|-------|
| 1.  | Xyx   |
| 2.  | Yus   |
</code></pre> <p>Expected output:</p> <pre><code>| Id  | value |
|-----|-------|
| 1.  | ABC   |
| 2.  | Bcs.  |
| 3.  | Xyx   |
| 4   | Yus   |
</code></pre> <p>I want the Id as seen in the expected df, not its original value; how do I do that? The Id acts as a primary key. Every time I get a new dataframe I want to append it, and I want the Id to increment from the last number.</p> <p>What I tried is</p> <pre><code>Max = final_df['id'].max()
final_df['id'] = range(Max, Max + len(final_df))
</code></pre> <p>Which results in:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Id</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>2.</td> <td>ABC</td> </tr> <tr> <td>3.</td> <td>Bcs.</td> </tr> <tr> <td>4</td> <td>Xyx</td> </tr> <tr> <td>5</td> <td>Yus</td> </tr> </tbody> </table> </div> <p>Pardon me for my bad English</p>
<p>I am not sure if I understood your question, but if it is what I think it is, you want a &quot;concatenation&quot; of dataframes.</p> <p>The command is <code>pd.concat</code> (see the docs: <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.concat.html</a>)</p> <pre><code>df1 = pd.DataFrame([{'id': 1, 'value': 'abc'}, {'id': 2, 'value': 'XYZ'}])
df2 = pd.DataFrame([{'id': 1, 'value': 'def'}, {'id': 2, 'value': 'MNP'}])
</code></pre> <p>With pd.concat (<code>pd.concat([df1, df2])</code>) you would obtain the following dataframe:</p> <pre><code>   id value
0   1   abc
1   2   XYZ
0   1   def
1   2   MNP
</code></pre> <p>If by continuous you mean an increasing index, you can use the <code>id</code> column as the index from the start:</p> <pre><code>df1 = pd.DataFrame([{'id': 1, 'value': 'abc'}, {'id': 2, 'value': 'XYZ'}]).set_index('id')
df2 = pd.DataFrame([{'id': 1, 'value': 'def'}, {'id': 2, 'value': 'MNP'}]).set_index('id')
</code></pre> <p>And use the same command with an additional parameter:</p> <pre><code>pd.concat([df1, df2], ignore_index=True)
</code></pre> <p>You will obtain:</p> <pre><code>  value
0   abc
1   XYZ
2   def
3   MNP
</code></pre> <p>I think it is what you need!</p>
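<p>If the goal is to keep an explicit <code>id</code> column that keeps counting across frames (as in the expected output), a small hedged variation on the above:</p> <pre><code>out = pd.concat([df1, df2], ignore_index=True)   # df1/df2 as first defined above, without set_index
out['id'] = range(1, len(out) + 1)               # 1, 2, 3, 4 — continues across both frames
</code></pre>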
python|pandas|dataframe
1
8,737
73,268,612
Range mapping in Python
<p><strong>MAPPER DATAFRAME</strong></p> <pre><code>col_data = {'p0_tsize_qbin_':[1, 2, 3, 4, 5] ,
            'p0_tsize_min':[0.0, 7.0499999999999545, 16.149999999999977, 32.65000000000009, 76.79999999999973] ,
            'p0_tsize_max':[7.0, 16.100000000000023, 32.64999999999998, 76.75, 6759.850000000006]}
map_df = pd.DataFrame(col_data, columns = ['p0_tsize_qbin_', 'p0_tsize_min','p0_tsize_max'])
map_df
</code></pre> <p><a href="https://i.stack.imgur.com/P6536.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6536.png" alt="Mapper" /></a></p> <p>In the above data frame, <code>map_df</code>, columns 2 and 3 (<code>p0_tsize_min</code>, <code>p0_tsize_max</code>) define a range and column 1 (<code>p0_tsize_qbin_</code>) is the value to map onto the new data frame.</p> <p><strong>MAIN DATAFRAME</strong></p> <pre><code>raw_data = {
    'id': ['1', '2', '2', '3', '3','1', '2', '2', '3', '3','1', '2', '2', '3', '3'],
    'val' : [3, 56, 78, 11, 5000,37, 756, 78, 49, 21,9, 4, 14, 75, 31,]}
df = pd.DataFrame(raw_data, columns = ['id', 'val','p0_tsize_qbin_mapped'])
df
</code></pre> <p><strong>EXPECTED OUTPUT MARKED IN BLUE</strong></p> <p><a href="https://i.stack.imgur.com/09JRh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/09JRh.png" alt="enter image description here" /></a></p> <p>Look up each <code>val</code> of the <code>df</code> dataframe in <code>map_df</code>: wherever it lies between <code>p0_tsize_min</code> and <code>p0_tsize_max</code>, take the corresponding <code>p0_tsize_qbin_</code> value.</p> <p>For example: from the <code>df</code> data frame, <code>val = 3</code> lies in the <code>p0_tsize_min</code>/<code>p0_tsize_max</code> range where <code>p0_tsize_qbin_ == 1</code>, so 1 is returned.</p>
<p>Try using <code>pd.cut()</code></p> <pre><code>bins = map_df['p0_tsize_min'].tolist() + [map_df['p0_tsize_max'].max()] labels = map_df['p0_tsize_qbin_'].tolist() df.assign(p0_tsize_qbin_mapped = pd.cut(df['val'],bins = bins,labels = labels)) </code></pre> <p>Output:</p> <pre><code> id val p0_tsize_qbin_mapped 0 1 3 1 1 2 56 4 2 2 78 5 3 3 11 2 4 3 5000 5 5 1 37 4 6 2 756 5 7 2 78 5 8 3 49 4 9 3 21 3 10 1 9 2 11 2 4 1 12 2 14 2 13 3 75 4 14 3 31 3 </code></pre>
python-3.x|pandas|dataframe|range-map
2
8,738
73,459,767
Loading a tokenizer on huggingface: AttributeError: 'AlbertTokenizer' object has no attribute 'vocab'
<p>I'm trying to load a <code>huggingface</code> model and tokenizer. This normally works really easily (I've done it with a dozen models):</p> <pre><code>from transformers import pipeline, BertForMaskedLM, BertForMaskedLM, AutoTokenizer, RobertaForMaskedLM, AlbertForMaskedLM, ElectraForMaskedLM tokenizer = AutoTokenizer.from_pretrained(&quot;emilyalsentzer/Bio_ClinicalBERT&quot;) model = BertForMaskedLM.from_pretrained(&quot;emilyalsentzer/Bio_ClinicalBERT&quot;) </code></pre> <p>But for some reason I'm getting an error when I'm trying to load this one:</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained(&quot;sultan/BioM-ALBERT-xxlarge&quot;, use_fast=False) model = AlbertForMaskedLM.from_pretrained(&quot;sultan/BioM-ALBERT-xxlarge&quot;) tokenizer.vocab </code></pre> <p>I found <a href="https://github.com/cl-tohoku/bert-japanese/issues/17" rel="nofollow noreferrer">this question</a> related, but it seems like this was an issue in the git repo itself and not on <code>huggingface</code>. I checked the actual repo where this model is saved on huggingface (<a href="https://huggingface.co/sultan/BioM-ALBERT-xxlarge/tree/main" rel="nofollow noreferrer">link</a>) and it clearly has a vocab file (<code>PubMD-30k-clean.vocab</code>) like the rest of the models I loaded.</p>
<p>There seems to be some issue with the slow tokenizer. It works if you remove the <code>use_fast</code> parameter or set it to <code>True</code>; then you will be able to display the vocab:</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained(&quot;sultan/BioM-ALBERT-xxlarge&quot;, use_fast=True)
model = AlbertForMaskedLM.from_pretrained(&quot;sultan/BioM-ALBERT-xxlarge&quot;)
tokenizer.vocab
</code></pre> <p>Output:</p> <pre><code>{'intervention': 7062,
 '▁tongue': 6911,
 '▁kit': 8341,
 '▁biosimilar': 26423,
 'bank': 19880,
 '▁diesel': 20349,
 'SOD': 6245,
 'iri': 17739,
 ....
</code></pre>
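<p>If you specifically need the non-fast (sentencepiece-based) tokenizer, it may also be worth trying the generic <code>get_vocab()</code> accessor instead of the <code>.vocab</code> attribute — a hedged suggestion, since the slow tokenizer classes expose the vocabulary through that method rather than through a <code>.vocab</code> attribute:</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained(&quot;sultan/BioM-ALBERT-xxlarge&quot;, use_fast=False)
vocab = tokenizer.get_vocab()   # dict mapping token -&gt; id
len(vocab)
</code></pre>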
huggingface-transformers|huggingface-tokenizers
1
8,739
67,379,210
How to use an if condition for a column in python
<p>I have a column where I need to check for NaN rows using a conditional statement, based on which I am looking to update a new column.</p> <p><strong>Input Data</strong></p> <pre><code>Column1  Column2
AxBZ234  AYBY123
NaN      ZX23468
AC23YUK  NaN
</code></pre> <p><strong>Script I have been using</strong></p> <pre><code>df['col2'] = df.apply(lambda x:'Col1 value Not available' if (x['col1'] == 'NaN') else 'Available',axis=1)
</code></pre> <p>Please suggest.</p>
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p> <pre><code>df['col2'] = np.where(df['col1'].isna(), 'Col1 value Not available','Available') </code></pre>
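<p>The reason the original <code>apply</code> version misses the missing rows is that real missing values are <code>NaN</code> floats, not the string <code>'NaN'</code>, so <code>x['col1'] == 'NaN'</code> is never true; <code>Series.isna</code> checks for them properly. A quick sketch of the idea (using the <code>col1</code>/<code>col2</code> names from the question's script):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': ['AxBZ234', np.nan, 'AC23YUK']})
df['col2'] = np.where(df['col1'].isna(), 'Col1 value Not available', 'Available')
print(df)
#       col1                      col2
# 0  AxBZ234                 Available
# 1      NaN  Col1 value Not available
# 2  AC23YUK                 Available
</code></pre>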
python|pandas
2
8,740
67,288,042
Python array, getting every n entries, then moving onto the next n entries until the end of the array
<p>I have a 1D numpy array with 4,050,000 entries, which I will call image_t. I am trying to slice it in such a way that I retrieve the first 10,000 entries, so image_t[0:10000]; then the next step would be image_t[10000:20000], and so forth until it reaches the last slice.</p> <p>This would end up giving me 405 different arrays, each of 10,000 entries. My problem is that I have tried many different kinds of loops and I am not sure what is going wrong.</p> <p>I have tried:</p> <p>Defining it as a function</p> <pre><code>def d_slice(S, step):
    return [S[i::step] for i in range(step)]
</code></pre> <p>This doesn't work because it returns 10,000 arrays of 405 entries each, meaning that it takes every 405th entry.</p> <p>I tried a start and stop inside the loop:</p> <pre><code>def s_t_slice(S):
    for i in np.arange(0, 4050000, 10000):
        start = i
        end = i + 10000
        print(i)
        return S[start:end]
</code></pre> <p>Here I hoped that if I told the array to slice from i to i + 10000 it would do what I explained in the first paragraph. Unfortunately, the program dies after i = 0. Not sure why.</p> <p>Next I tried creating an empty list of arrays 405 long and skipping the function.</p> <pre><code>im_array = [[] for i in range(1, 406)]
for i in range(0, 405):
    for j in range(0, 4050000, 10000):
        print(j)
        k = j + 10000
        im_array[k] = image_t[j:k]
</code></pre> <p>This worked in that I managed to get 405 arrays of 10,000 entries each, but the entries did not match the entries in the full array (i.e. im_array[0] was not the same as image_t[0:10000] and so forth).</p> <p>I am pretty sure I am close to cracking it, but I could use a hand as to what I am missing.</p>
<p>You can use <code>np.array_split()</code>; the following code should work:</p> <pre><code>split_array = np.array_split(image_t, 405)
</code></pre> <p>Now the <code>split_array</code> variable is a list of 405 arrays, each with shape <code>(10000,)</code>.</p>
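<p>Since 4,050,000 is exactly 405 × 10,000, a plain reshape is an equally simple alternative — a small sketch, where row <code>i</code> corresponds to <code>image_t[i*10000:(i+1)*10000]</code>:</p> <pre><code>chunks = image_t.reshape(405, 10000)    # row i == image_t[i*10000:(i+1)*10000]
row_sums = chunks.sum(axis=1)           # e.g. one sum per 10,000-entry chunk
</code></pre>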
python|arrays|function|for-loop|numpy-slicing
0
8,741
67,452,371
Pandas Melt function for time series data
<p>I am trying to melt my pandas data frame but I am not quiet sure how to assign the variables properly. I looked through the other examples on stack but I can't seem to find a variation matching this. My data frame (df1) looks like this :</p> <pre><code>[IN]: df1 [OUT]: 40025.0 21201.0 30061.0 46021.0 date 2020-08-08 0.000861 0.001292 0.000287 0.001177 2020-08-09 0.001147 0.001290 0.000344 0.001204 2020-08-10 0.001431 0.001288 0.000401 0.001231 </code></pre> <p>Each column is for a different FIPS code, the values are the number of Covid cases per day (this data has been processed for future clustering) and index is a datetime index (day). The data frame is 804 columns by 470 rows. I would like my data frame to look like this:</p> <p><a href="https://i.stack.imgur.com/e2cTv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e2cTv.png" alt="enter image description here" /></a></p> <p>I know I can make this work if I leave &quot;date&quot; as a column (as opposed to the index) by doing this:</p> <pre><code>df1 =df1.melt(id_vars=&quot;date&quot;, var_name=&quot;FIPS&quot;, value_name=&quot;Covid_cases&quot;) </code></pre> <p>But if I do that, then I get an error when trying to convert the &quot;date&quot; column as the index. I need it the index to be a datetime index because I am going to kmeans cluster the time series data and then plot time series clusters. Any input would be greatly appreciated! Thank you!</p>
<p>If <code>date</code> is currently the index, you should be able to <code>reset_index()</code> and then <code>set_index('date')</code> afterwards:</p> <pre class="lang-py prettyprint-override"><code>df1 = (df1 .reset_index() .melt(id_vars='date', var_name='FIPS', value_name='Covid_cases') .set_index('date') ) </code></pre> <pre><code> FIPS Covid_cases date 2020-08-08 40025.0 0.000861 2020-08-09 40025.0 0.001147 2020-08-10 40025.0 0.001431 2020-08-08 21201.0 0.001292 2020-08-09 21201.0 0.001290 2020-08-10 21201.0 0.001288 2020-08-08 30061.0 0.000287 2020-08-09 30061.0 0.000344 2020-08-10 30061.0 0.000401 2020-08-08 46021.0 0.001177 2020-08-09 46021.0 0.001204 2020-08-10 46021.0 0.001231 </code></pre>
python|pandas|dataframe
1
8,742
67,549,023
Why is the GNU scientific library matrix multiplication slower than numpy.matmul?
<p>Why is it that the matrix multiplication with Numpy is much faster than <code>gsl_blas_sgemm</code> from GSL, for instance:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import time N = 1000 M = np.zeros(shape=(N, N), dtype=np.float) for i in range(N): for j in range(N): M[i, j] = 0.23 + 100*i + j tic = time.time() np.matmul(M, M) toc = time.time() print(toc - tic) </code></pre> <p>gives something between 0.017 - 0.019 seconds, while in C++:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;chrono&gt; #include &lt;iostream&gt; #include &lt;gsl/gsl_matrix.h&gt; #include &lt;gsl/gsl_blas.h&gt; using namespace std::chrono; int main(void) { int N = 1000; gsl_matrix_float* M = gsl_matrix_float_alloc(N, N); for (int i = 0; i &lt; N; i++) { for (int j = 0; j &lt; N; j++) { gsl_matrix_float_set(M, i, j, 0.23 + 100 * i + j); } } gsl_matrix_float* C = gsl_matrix_float_alloc(N, N); // save the result into C auto start = high_resolution_clock::now(); gsl_blas_sgemm(CblasNoTrans, CblasNoTrans, 1.0, M, M, 0.0, C); auto stop = high_resolution_clock::now(); auto duration = duration_cast&lt;milliseconds&gt;(stop - start); std::cout &lt;&lt; duration.count() &lt;&lt; std::endl; return 0; } </code></pre> <p>I get a runtime of the multiplication of about 2.7 seconds. I am also compiling with the maximum speed option <code>/02</code>. I am working with Visual Studio. I must do something very wrong. I was not expecting a much better performance from the C++ code because I am aware that Numpy is optimized C-Code, but neither was I expecting it to be about 150 times slower than python. Why is that? How can I improve the runtime of the multiplication relative to Numpy?</p> <p>Background of the problem: I need to evaluate an 1000 to 2000 dimensional integral, and I am doing it with the Monte-Carlo method. For that I wrote almost the whole integrand as Numpy array operations, this works quite fast but i need it even faster in order to evaluate the same integrand 100.000 to 500.000 times, so any little improvement would help. Does it make sense to write the same code in C/C++ or should I stick to Numpy? Thanks!</p>
<p><strong>TL;DR:</strong> the C++ code and Numpy do not use the same matrix-multiplication library.</p> <p>The <em>matrix multiplication of the GSL library is not optimized</em>. On my machine, it runs sequentially, does not use <em>SIMD instructions</em> (<a href="https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions" rel="nofollow noreferrer">SSE</a>/<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions" rel="nofollow noreferrer">AVX</a>), does not efficiently unroll the loops to perform register tiling. I also suspect it also does not use the CPU cache efficiently due to the lack of tiling. These optimizations are critical to achieve high-performance and widely used in fast linear algebra libraries.</p> <p><em>Numpy uses a <a href="https://fr.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms" rel="nofollow noreferrer">BLAS library</a> installed on your machine</em>. On many Linux platform, its uses OpenBLAS or the Intel MKL. Both are very fast (they use all the methods described above) and should run in parallel.</p> <p>You can find which implementation of BLAS is used by Numpy <a href="https://stackoverflow.com/questions/37184618/find-out-if-which-blas-library-is-used-by-numpy">here</a>. On my Linux machine, Numpy use by default CBLAS which internally use OpenBLAS (OpenBLAS is strangely not directly detected by Numpy).</p> <p>There are many fast parallel BLAS implementations (GotoBLAS, ATLAS, BLIS, etc.). The open-source BLIS library is great because its matrix multiplication is very fast on many different architectures.</p> <p>As a result, the simplest way to improve your C++ code is to use the <code>cblas_sgemm</code> CBLAS function and <em>link a fast BLAS library like OpenBLAS or BLIS</em> for example.</p> <hr /> <p><strong>For more information:</strong></p> <p>One simple way to see how bad the GSL perform is to use a <strong>profiler</strong> (like perf on Linux or VTune on Windows). In your case Linux perf, report that &gt;99% of the time is spent in <code>libgslcblas.so</code> (ie. the GSL library). More specifically, most of the execution time is spent in this following assembly loop:</p> <pre class="lang-none prettyprint-override"><code>250: movss (%rdx),%xmm1 add $0x4,%rax add $0x4,%rdx mulss %xmm2,%xmm1 # scalar instructions addss -0x4(%rax),%xmm1 movss %xmm1,-0x4(%rax) cmp %rax,%r9 ↑ jne 250 </code></pre> <p>As for Numpy, 99% of its time is spent in <code>libopenblasp-r0.3.13.so</code> (ie. the OpenBLAS library). More specifically in the following assembly code of the function <code>dgemm_kernel_HASWELL</code>:</p> <pre class="lang-none prettyprint-override"><code>110: lea 0x80(%rsp),%rsi add $0x60,%rsi mov %r12,%rax sar $0x3,%rax cmp $0x2,%rax ↓ jl d26 prefetcht0 0x200(%rdi) # Data prefetching vmovups -0x60(%rsi),%ymm1 prefetcht0 0xa0(%rsi) vbroadcastsd -0x80(%rdi),%ymm0 # Fast SIMD instruction (AVX) prefetcht0 0xe0(%rsi) vmovups -0x40(%rsi),%ymm2 prefetcht0 0x120(%rsi) vmovups -0x20(%rsi),%ymm3 vmulpd %ymm0,%ymm1,%ymm4 prefetcht0 0x160(%rsi) vmulpd %ymm0,%ymm2,%ymm8 vmulpd %ymm0,%ymm3,%ymm12 prefetcht0 0x1a0(%rsi) vbroadcastsd -0x78(%rdi),%ymm0 vmulpd %ymm0,%ymm1,%ymm5 vmulpd %ymm0,%ymm2,%ymm9 [...] </code></pre> <p>We can clearly see that the GSL code is not optimized (because of the scalar code and the naive simple loop) and that OpenBLAS code is optimized as it uses at least wide SIMD instructions, data prefetching and loop unroling. 
Note that the executed OpenBLAS code is not optimal as it could use the <a href="https://en.wikipedia.org/wiki/FMA_instruction_set" rel="nofollow noreferrer">FMA instructions</a> available on my processor.</p>
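<p>For completeness, here is a hedged sketch of the suggested change on the C++ side (assuming OpenBLAS is installed and exposes the standard CBLAS interface; the matrices are kept as plain row-major <code>float</code> buffers rather than GSL types):</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;cblas.h&gt;
#include &lt;vector&gt;

int main() {
    const int N = 1000;
    std::vector&lt;float&gt; M(N * N), C(N * N);
    for (int i = 0; i &lt; N; i++)
        for (int j = 0; j &lt; N; j++)
            M[i * N + j] = 0.23f + 100.0f * i + j;

    // C = 1.0 * M * M + 0.0 * C, row-major, no transposes
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N, 1.0f, M.data(), N, M.data(), N, 0.0f, C.data(), N);
    return 0;
}
// build example: g++ -O2 main.cpp -lopenblas
</code></pre>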
python|c++|performance|numpy|gsl
27
8,743
59,952,801
Renaming columns in pandas based on strings in list
<p>I'm trying to replace the column names of a dataframe (<code>sens_second_X</code>) based on the strings in a list (<code>updated_fist_stage</code>) being substrings of the column names. <code>updated_fist_stage = ['ccc_230', 'LN_S_P500', 'mf_100']</code> and</p> <pre><code>sens_second_X.columns = ['resid', 'ccc_230_TY', 'EQ_ETF', 'LN_S_P500_changes', 'mf_100_equity',
                         'inflows_2009', 'inflows_2010']
</code></pre> <p>I try to do this as follows:</p> <pre><code>def renaming_fun(x):
    for var in updated_fist_stage:
        if var in x:
            return var
    return x

sens_second_X.columns = map(renaming_fun, sens_second_X.columns)
</code></pre> <p>But I find that only <code>ccc_230</code> has been renamed in the dataframe, and the output is as follows:</p> <pre><code>sens_second_X.columns = ['resid', 'ccc_230', 'EQ_ETF', 'LN_S_P500_changes', 'mf_100_equity',
                         'inflows_2009', 'inflows_2010']
</code></pre>
<p>You can try something like below with <code>str.extract</code></p> <pre><code>mapped = sens_second_X.columns.str.extract(r'({})'.format('|'.join(updated_fist_stage)) ,expand=False) sens_second_X.columns = pd.Index(pd.Series(mapped).fillna(pd.Series(sens_second_X.columns))) </code></pre> <hr> <pre><code>Index(['resid', 'ccc_230', 'EQ_ETF', 'LN_S_P500', 'mf_100', 'inflows_2009', 'inflows_2010'], dtype='object') </code></pre>
python|pandas
1
8,744
60,132,430
reshaping vectors into tensors for embedding layer in keras LSTM mini-batch training
<p>I'm trying to train an LSTM topic model (many-to-one problem) on text using an embedding layer and mini-batch training in <code>keras</code> with <code>tensorflow</code> backend in Python. I am struggling with formatting my inputs and outputs in a way that is compatible with the embedding layer format. </p> <p>My input consists of batches of <a href="https://www.oreilly.com/library/view/applied-text-analysis/9781491963036/ch04.html" rel="nofollow noreferrer">count-vectorized</a> text tokens, post-padded to 50 cells. My output is a vector of iteger labels corresponding to one of 4 classes, likewise post-padded to 50 cells.</p> <p>This is an example input vector:</p> <pre><code>array([2777, 2879, 114, 207, 2879, 3031, 1831, 565, 1961, 161, 1503, 1485, 1036, 3380, 3255, 2879, 3243, 2152, 2406, 653, 3122, 3053, 623, 1145, 2152, 3255, 2529, 3210, 119, 944, 161, 2879, 1282, 2846, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32) </code></pre> <p>And this is the corresponding output vector:</p> <pre><code>array([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32) </code></pre> <p>As a whole, my inputs and outputs consist of a list of padded arrays each. Next, I initialize my model architecture as follows:</p> <pre><code>from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers.embeddings import Embedding model = Sequential() model.add(Embedding(20000, 100, input_length=50)) model.add(LSTM(100)) model.add(Dense(4, activation='sigmoid')) model.compile(loss='sparse_categorical_crossentropy',optimizer='adam', metrics=['accuracy']) </code></pre> <p>My <a href="https://keras.io/layers/embeddings/" rel="nofollow noreferrer">embedding layer</a> has three parameters: (1) input dimension of 20000, corresponding to the size of my vocabulary, (2) output dimension of 100, which is the arbitrary dimension of the dense embedding, (3) inout length of 50, which is the maximum length of my post-padded vectors.</p> <pre><code>print(model.summary()) _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, 50, 100) 2000000 _________________________________________________________________ lstm_1 (LSTM) (None, 100) 80400 _________________________________________________________________ dense_1 (Dense) (None, 4) 101 ================================================================= Total params: 2,080,501 Trainable params: 2,080,501 Non-trainable params: 0 </code></pre> <p>To simplify the batching problem, I train my model in a <code>for</code>-loop, passing one batch at a time. 
So my input is now correctly specified to be in 2D as required by the embedding layer (as per an earlier error stating <code>expected embedding_1_input to have 2 dimensions</code> after I tried to reshape to 3D.</p> <pre><code>for X, y in data: model.fit(X, y, epochs=1, batch_size=1, verbose=0) </code></pre> <p>When I try to fit the model, I get this error: </p> <pre><code>ValueError: Error when checking input: expected embedding_1_input to have shape (50,) but got array with shape (1,) </code></pre> <p>This is really puzzling, because when I double-check the dimensions of my input they are indeed (50,)!</p> <pre><code>np.shape(data[0][0]) &gt; (50,) </code></pre> <p>Transposing a 1 dimensional vector does not make any difference, so I'm not sure how to proceed at this point. Any advice?</p> <p>I also noticed <a href="https://stackoverflow.com/questions/56960432/valueerror-error-when-checking-input-expected-embedding-1-input-to-have-shape">this</a> post has a similar question, but so far no one has attempted to answer it. Thanks in advance!</p>
<p>I am not really sure what you are trying to do with that reshaping, but <code>Embedding</code> layers expect a 2D tensor, not the 3D tensor that you are trying to push through this layer.</p> <p>Here is what the documentation says about the input of the <code>Embedding</code> layer:</p> <blockquote> <p>2D tensor with shape: (batch_size, sequence_length)</p> </blockquote> <p>So you just pass it a 2D tensor of shape <code>(batch_size, sequence_length)</code> — with your padding, that means <code>(number of sequences in the batch, 50)</code>.</p> <p>As for the <code>26300</code> number - it expects the length of a single batch, not a number of batches.</p>
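<p>Concretely, the shape error in the question comes from passing a single padded vector of shape <code>(50,)</code> where Keras expects a batch. A minimal hedged fix for the input side (variable names taken from the question) is to add the batch dimension before calling <code>fit</code>:</p> <pre><code>X_batch = X.reshape(1, 50)   # (batch_size=1, sequence_length=50)
# Note: with this many-to-one model (a single Dense(4) output), y must also be one
# label per sequence, e.g. y_batch = np.array([2]), not a 50-long label vector.
model.fit(X_batch, y_batch, epochs=1, batch_size=1, verbose=0)
</code></pre>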
python|tensorflow|keras|deep-learning|word-embedding
1
8,745
65,135,679
python dataframe vector comparison
<p>I have a df like this:</p> <pre><code>| name | age |
|------|-----|
| Dav  | 25  |
| Las  | 50  |
| Oms  | 70  |
</code></pre> <p>How do I create a new df or matrix based on the pairwise age differences of these people? The output should look like this (the * are just for explanation and don't need to show up):</p> <pre><code>|      |* Dav |* Las |* Oms |
| *Dav |   0  |  25  |  45  |
| *Las |  25  |   0  |  20  |
| *Oms |  45  |  20  |   0  |
</code></pre> <p>or like this:</p> <pre><code>| 25 | 45 |
| 25 | 20 |
| 45 | 20 |
</code></pre>
<p>You could use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html" rel="nofollow noreferrer">corr</a> to compute the all pairs differences:</p> <pre><code>import numpy as np from operator import sub # compute all pairs diffs res = df.set_index('name').T.corr(sub).abs() # optional df.set_index('name').T.corr(lambda x, y: abs(x - y)) # set diagonal to 0 res.values[(np.arange(res.shape[0]),)*2] = 0 print(res) </code></pre> <p>The expression:</p> <pre><code>df.set_index('name').T </code></pre> <p>creates the following DataFrame:</p> <pre><code>name Dav Las Oms age 25 50 70 </code></pre> <p>The <em>corr</em> function sets the diagonal values to 1, from the documentation:</p> <blockquote> <p>and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.</p> <p>New in version 0.24.0.</p> </blockquote> <p>and the function <a href="https://docs.python.org/3/library/operator.html#operator.sub" rel="nofollow noreferrer">sub</a> is just:</p> <pre><code>def sub(a, b): &quot;Same as a - b.&quot; return a - b </code></pre>
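<p>A more direct alternative (a small sketch, not tied to <code>corr</code>) is to broadcast the age column against itself with numpy, which produces the same absolute-difference matrix:</p> <pre><code>import numpy as np
import pandas as pd

ages = df['age'].to_numpy()
diff = np.abs(ages[:, None] - ages[None, :])            # pairwise |age_i - age_j|
res = pd.DataFrame(diff, index=df['name'], columns=df['name'])
print(res)
</code></pre>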
python|pandas|dataframe|compare
0
8,746
65,379,271
How to find the index of a list element in a numpy array using ranges
<p>Say I have a numpy array as follows:</p> <p><code>arr = np.array([[[1, 7], [5, 1]], [[5, 7], [6, 7]]])</code></p> <p>where each of the innermost sub-arrays is an element. So for example; [1, 7] and [5, 1] are both considered elements.</p> <p>... and I would like to find all the elements which satisfy: [&lt;=5, &gt;=7]. So, a truthy result array for the above example would look as follows:</p> <pre><code>arr_truthy = [[True, False], [True, False]] </code></pre> <p>... as for one of the bottom elements in <code>arr</code> the first value is <code>&lt;=5</code> and the second is <code>&gt;=7</code>.</p> <p>I can solve this easily by iterating over each of the axes in <code>arr</code>:</p> <pre><code> for x in range(arr.shape[0]): for y in range(arr.shape[1]): # test values, report if true. </code></pre> <p>.. but this method is slow and I'm hoping there's a more <code>numpy</code> way to do it. I've tried <code>np.where</code> but I can't work out how to do the multi sub-element conditional.</p> <p>I'm effectively trying to test an independent conditional on each number in the elements.</p> <p>Can anyone point me in the right direction?</p>
<p>Are you looking for</p> <pre><code>(arr[:,:,0] &lt;= 5) &amp; (arr[:,:,1] &gt;= 7) </code></pre> <p>? You can perform broadcasted comparison.</p> <p>Output:</p> <pre><code>array([[True, False], [True, False]]) </code></pre>
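<p>If you need the positions of the matching elements rather than the boolean mask, a short follow-up sketch:</p> <pre><code>mask = (arr[:, :, 0] &lt;= 5) &amp; (arr[:, :, 1] &gt;= 7)
np.argwhere(mask)    # array([[0, 0], [1, 0]]) — (row, column) of each matching element
</code></pre>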
python|numpy
3
8,747
65,395,280
Subclass definitions of TensorFlow models with customizable hidden layers
<p>I am learning about Models subclass definitions in TensorFlow</p> <p>A pretty straightforward definition will be something like this</p> <pre><code>class MyNetwork1(tf.keras.Model): def __init__(self, num_classes = 10): super().__init__() self.num_classes = num_classes self.input_layer = tf.keras.layers.Flatten() self.hidden_1 = tf.keras.layers.Dense(128, activation = 'relu') self.hidden_2 = tf.keras.layers.Dense(64, activation = 'relu') self.output_layer = tf.keras.layers.Dense(self.num_classes, activation = 'softmax') def call(self, input_tensor): x = self.input_layer(input_tensor) x = self.hidden_1(x) x = self.hidden_2(x) x = self.output_layer(x) return x </code></pre> <p>After building the model,</p> <pre><code>Model1 = MyNetwork1() Model1.build((None, 28, 28, 1)) </code></pre> <p>It will look like</p> <pre><code>Model: &quot;my_network1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= flatten (Flatten) multiple 0 _________________________________________________________________ dense (Dense) multiple 100480 _________________________________________________________________ dense_1 (Dense) multiple 8256 _________________________________________________________________ dense_2 (Dense) multiple 650 ================================================================= Total params: 109,386 Trainable params: 109,386 Non-trainable params: 0 </code></pre> <p>Since this method cannot customize the number of neurons and activation type per layer I have tried to edit it a little bit.</p> <p>I have tried the following definition</p> <pre><code>class MyNetwork2(tf.keras.Model): def __init__(self, num_classes = 2, hidden_dimensions = [100], hidden_activations = ['relu']): super().__init__() self.inputlayer = tf.keras.layers.Flatten() i = 0 self.hidden_layers = [] for d,a in zip(hidden_dimensions,hidden_activations): i += 1 setattr(self, 'hidden_' + str(i) , tf.keras.layers.Dense(d, activation = a)) self.hidden_layers.append('self.hidden_' + str(i) + '(x)') self.outputlayer = tf.keras.layers.Dense(num_classes, activation = 'softmax') self.num_layers = len(hidden_dimensions) + 2 def call(self, inputtensor): x = self.inputlayer(inputtensor) for h in self.hidden_layers: # print(h) x = eval(h,{}, x) x = self.outputlayer(x) return x </code></pre> <p>In this code, I tried to do as same as the previous definition.</p> <pre><code>Model2 = MyNetwork2(num_classes = 10, hidden_dimensions = [128,64], hidden_activations = ['relu', 'relu']) Model2.build((None, 28, 28, 1)) </code></pre> <p>However, I faced the following error:</p> <pre><code>TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got 'self' </code></pre> <p>How can I fix this error to achieve my goal?</p>
<p>Seems like a very complicated way to do things. If you use a dictionary for a variable number of layers instead of eval, everything works fine.</p> <pre><code>class MyNetwork2(tf.keras.Model): def __init__(self, num_classes=2, hidden_dimensions=[100], hidden_activations=['relu']): super(MyNetwork2, self).__init__() self.inputlayer = tf.keras.layers.Flatten() self.hidden_layers = dict() for i, (d, a) in enumerate(zip(hidden_dimensions, hidden_activations)): self.hidden_layers['hidden_'+str(i)]=tf.keras.layers.Dense(d, activation=a) self.outputlayer = tf.keras.layers.Dense(num_classes, activation='softmax') self.num_layers = len(hidden_dimensions) + 2 </code></pre> <p>Running example:</p> <pre><code>import tensorflow as tf import numpy as np class MyNetwork2(tf.keras.Model): def __init__(self, num_classes=2, hidden_dimensions=[100], hidden_activations=['relu']): super(MyNetwork2, self).__init__() self.inputlayer = tf.keras.layers.Flatten() self.hidden_layers = dict() for i, (d, a) in enumerate(zip(hidden_dimensions, hidden_activations)): self.hidden_layers['hidden_'+str(i)]=tf.keras.layers.Dense(d, activation=a) self.outputlayer = tf.keras.layers.Dense(num_classes, activation='softmax') self.num_layers = len(hidden_dimensions) + 2 def call(self, inputtensor, training=None, **kwargs): x = self.inputlayer(inputtensor) for k, v in self.hidden_layers.items(): x = v(x) x = self.outputlayer(x) return x Model2 = MyNetwork2(num_classes = 10, hidden_dimensions = [128,64], hidden_activations = ['relu', 'relu']) Model2.build((None, 28, 28, 1)) Model2(np.random.uniform(0, 1, (1, 28, 28, 1)).astype(np.float32)) </code></pre> <pre><code>&lt;tf.Tensor: shape=(1, 10), dtype=float32, numpy= array([[0.14969216, 0.10196744, 0.0874036 , 0.08350615, 0.18459582, 0.07227989, 0.08263624, 0.08537506, 0.10291573, 0.04962786]], dtype=float32)&gt; </code></pre> <p>The hidden layers as a dictionary:</p> <pre><code>Model2.hidden_layers </code></pre> <pre><code>{'hidden_0': &lt;tensorflow.python.keras.layers.core.Dense at 0x1891b5c13a0&gt;, 'hidden_1': &lt;tensorflow.python.keras.layers.core.Dense at 0x1891b5c1d00&gt;} </code></pre>
python|python-3.x|tensorflow|keras|neural-network
1
8,748
65,204,011
Is there a way in Python to get a sub matrix as in Matlab?
<p>For example, let's say that I have the following matrices in Matlab:</p> <pre><code>A = zeros(10)
B = ones(2,2)
</code></pre> <p>I want to add the matrix B to A at specific positions of A that are stored like this:</p> <pre><code>locations = [1, 3]
</code></pre> <p>I can do this:</p> <pre><code>A(locations, locations) = A(locations, locations) + B
</code></pre> <p>So the job is done. In Python, I would like to do the same using NumPy arrays, like:</p> <pre><code>import numpy as np
A = np.zeros([10,10])
B = np.ones([2,2])
locations = np.array([0, 2])  # Because NumPy arrays are zero indexed
A[locations, locations] = A[locations, locations] + B
</code></pre> <p>But I get this error:</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
ValueError: shape mismatch: value array of shape (2,2) could not be broadcast to indexing result of shape (2,)
</code></pre> <p>Does anyone know how I can do this?</p>
<pre><code>A = np.zeros([10,10]) B = np.ones([2,2]) print(B) A[:2,:2]=B print(A) #output B [[1. 1.] [1. 1.]] #output A [[1. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [1. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] </code></pre>
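<p>Note that slicing like <code>A[:2, :2]</code> only covers the case where the locations form a contiguous block starting at 0. For arbitrary (possibly non-contiguous) locations such as <code>[0, 2]</code>, the MATLAB-style sub-matrix indexing is what <code>np.ix_</code> provides — a short sketch:</p> <pre><code>import numpy as np

A = np.zeros((10, 10))
B = np.ones((2, 2))
locations = np.array([0, 2])

A[np.ix_(locations, locations)] += B   # adds B at rows/columns 0 and 2, like A(locations,locations) in MATLAB
</code></pre>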
python|matlab|numpy
0
8,749
49,975,169
Azure HDInsights Spark Cluster Install External Libraries
<p>I have a HDInsights Spark Cluster. I installed tensorflow using a script action. The installation went fine (Success).</p> <p>But now when I go and create a Jupyter notebook, I get:</p> <pre><code>import tensorflow Starting Spark application The code failed because of a fatal error: Session 8 unexpectedly reached final status 'dead'. See logs: YARN Diagnostics: Application killed by user.. Some things to try: a) Make sure Spark has enough available resources for Jupyter to create a Spark context. For instructions on how to assign resources see http://go.microsoft.com/fwlink/?LinkId=717038 b) Contact your cluster administrator to make sure the Spark magics library is configured correctly. </code></pre> <p>I don't know how to fix this error... I tried some things like looking at logs but they are not helping. </p> <p>I just want to connect to my data and train a model using tensorflow.</p>
<p>This looks like an error with Spark application resources. Check the resources available on your cluster and close any applications that you don't need. Please see more details here: <a href="https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-resource-manager#kill-running-applications" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-resource-manager#kill-running-applications</a></p>
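<p>As a hedged practical pointer (assuming you can SSH to the cluster head node): <code>yarn application -list</code> shows the YARN applications currently holding resources, and <code>yarn application -kill &lt;application_id&gt;</code> frees them; the YARN UI reachable through Ambari on HDInsight exposes the same information. Once enough memory and cores are free, the Jupyter Spark session should be able to start.</p>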
azure|apache-spark|tensorflow|machine-learning|azure-hdinsight
1
8,750
50,145,209
Installing tensorboard built from source
<p>This is about a tensorboard which is built from source, not about the pip-installed one.</p> <p>I could successfully build it.</p> <pre><code>$ git clone https://github.com/tensorflow/tensorboard.git $ cd tensorboard/ $ bazel build //tensorboard tensorflow/tensorboard$ bazel build //tensorboard Starting local Bazel server and connecting to it... ...................................... : (log messages here) Target //tensorboard:tensorboard up-to-date: bazel-bin/tensorboard/tensorboard INFO: Elapsed time: 326.553s, Critical Path: 187.92s INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker. INFO: Build completed successfully, 1268 total actions </code></pre> <p>Then yes, I can run it as documented in <a href="https://github.com/tensorflow/tensorboard" rel="nofollow noreferrer">tensorboard/README.md</a>, and it works.</p> <pre><code>$ ./bazel-bin/tensorboard/tensorboard --logdir path/to/logs </code></pre> <p>The problem is, I'd like to run it as if it were installed via pip, like this:</p> <pre><code>$ tensorboard --logdir path/to/logs </code></pre> <p>But as far as I can tell, no script is provided to create a <code>.whl</code> file so that we can locally pip-install it, unlike <a href="https://www.tensorflow.org/install/install_sources#build_the_pip_package" rel="nofollow noreferrer">tensorflow, which provides one like this</a>.</p> <pre><code>$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg $ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0-py2-none-any.whl </code></pre> <p>So... can anybody show how to do that? Making a packaging script would solve this, but one should exist somewhere already, as long as tensorboard is provided via pip anyway. :)</p> <p>My workaround so far is not clean enough:</p> <pre><code>$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard ~/bin $ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard.runfiles ~/bin </code></pre> <p>I appreciate your suggestions, thanks!</p> <p>Update July-21:</p> <p>Thanks to W JC, I found the instructions are already there in tensorboard/pip_package/BUILD.</p> <pre><code># rm -rf /tmp/tensorboard # bazel run //tensorboard/pip_package:build_pip_package # pip install -U /tmp/tensorboard/*py2*.pip </code></pre> <p>Though unfortunately it shows an error in my environment, and I guess it's a local issue, maybe because I'm using Anaconda.</p> <p>But basically the problem was resolved. It should work as long as it is run on a supported environment.</p>
<p>It seems there is a script in /tensorboard/pip_package that tries to build the wheel.</p>
tensorflow|tensorboard
2
8,751
49,840,892
Deep Q Network is not learning
<p>I tried to code a Deep Q Network to play Atari games using Tensorflow and OpenAI's Gym. Here's my code:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import gym import numpy as np import os env_name = 'Breakout-v0' env = gym.make(env_name) num_episodes = 100 input_data = tf.placeholder(tf.float32,(None,)+env.observation_space.shape) output_labels = tf.placeholder(tf.float32,(None,env.action_space.n)) def convnet(data): layer1 = tf.layers.conv2d(data,32,5,activation=tf.nn.relu) layer1_dropout = tf.nn.dropout(layer1,0.8) layer2 = tf.layers.conv2d(layer1_dropout,64,5,activation=tf.nn.relu) layer2_dropout = tf.nn.dropout(layer2,0.8) layer3 = tf.layers.conv2d(layer2_dropout,128,5,activation=tf.nn.relu) layer3_dropout = tf.nn.dropout(layer3,0.8) layer4 = tf.layers.dense(layer3_dropout,units=128,activation=tf.nn.softmax,kernel_initializer=tf.zeros_initializer) layer5 = tf.layers.flatten(layer4) layer5_dropout = tf.nn.dropout(layer5,0.8) layer6 = tf.layers.dense(layer5_dropout,units=env.action_space.n,activation=tf.nn.softmax,kernel_initializer=tf.zeros_initializer) return layer6 logits = convnet(input_data) loss = tf.losses.sigmoid_cross_entropy(output_labels,logits) train = tf.train.GradientDescentOptimizer(0.001).minimize(loss) saver = tf.train.Saver() init = tf.global_variables_initializer() discount_factor = 0.5 with tf.Session() as sess: sess.run(init) for episode in range(num_episodes): x = [] y = [] state = env.reset() feed = {input_data:np.array([state])} print('episode:', episode+1) while True: x.append(state) if (episode+1)/num_episodes &gt; np.random.uniform(): Q = sess.run(logits,feed_dict=feed)[0] action = np.argmax(Q) else: action = env.action_space.sample() state,reward,done,info = env.step(action) Q = sess.run(logits,feed_dict=feed)[0] new_Q = np.zeros(Q.shape) new_Q[action] = reward+np.amax(Q)*discount_factor y.append(new_Q) if done: break for sample in range(len(x)): _,l = sess.run([train,loss],feed_dict={input_data:[x[sample]],output_labels:[y[sample]]}) print('training loss on sample '+str(sample+1)+': '+str(l)) saver.save(sess,os.getcwd()+'/'+env_name+'-DQN.ckpt') </code></pre> <p>The Problem is that:</p> <ol> <li>The loss isn't decreasing while training and is always somewhere around 0.7 or 0.8</li> <li>When I test the network on the Breakout environment even after I trained it for 1000 episodes, the actions still seem kind of random and it rarely hits the ball.</li> </ol> <p>I already tried to use different loss functions (softmax crossentropy and mean squared error), use another optimizer (Adam) and increasing the learning rate but nothing changed.</p> <p>Can someone tell me how to fix this?</p>
<p>Here are some things that stand out that you could look into (it's always difficult in these kinds of cases to tell for sure without trying exactly which issue(s) is/are the most important ones):</p> <ul> <li>100 episodes does not seem like a lot. In the image below, you see learning curves of some variants of Double DQN (slightly more advanced than DQN) on Breakout (<a href="https://openai.com/blog/openai-baselines-dqn/" rel="noreferrer">source</a>). Training time on the <code>x</code>-axis is measured in millions of frames there, not in episodes. I don't know exactly where 100 episodes would be on that <code>x</code>-axis, but I don't think it would be far in. It may simply not be reasonable to expect any kind of decent performance yet after 100 episodes.</li> </ul> <p><a href="https://i.stack.imgur.com/wrk74.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wrk74.png" alt="OpenAI Baselines DQN Learning Curves Breakout"></a><br> <sub>(source: <a href="https://blog.openai.com/content/images/2017/05/Pasted-image-at-2017_05_23-02_11-PM-3.png" rel="noreferrer">openai.com</a>)</sub> </p> <ul> <li><p>It looks like you're using dropout in your networks. I'd recommend getting rid of the dropout. I don't know 100% for sure that it's bad to use dropout in Deep Reinforcement Learning, but 1) it's certainly not common, and 2) intuitively it doesn't seem necessary. Dropout is used to combat overfitting in supervised learning, but overfitting is not really much of a risk in Reinforcement Learning (at least, not if you're just trying to train for a single game at a time like you are here).</p></li> <li><p><code>discount_factor = 0.5</code> seems extremely low, this is going to make it impossible to propagate long-term rewards back to more than a handful of actions. Something along the lines of <code>discount_factor = 0.99</code> would be much more common.</p></li> <li><p><code>if (episode+1)/num_episodes &gt; np.random.uniform():</code>, this code looks like it's essentially decaying <code>epsilon</code> from <code>1.0 - 1 / num_episodes</code> in the first episode to <code>1.0 - num_episodes / num_episodes = 0.0</code> in the last episode. With your current <code>num_episodes = 100</code>, this means it's decaying from <code>0.99</code> to <code>0.0</code> over <code>100</code> episodes. That seems to me like it's decaying way too quickly. For reference, in the <a href="https://deepmind.com/research/publications/human-level-control-through-deep-reinforcement-learning/" rel="noreferrer">original DQN paper</a>, <code>epsilon</code> is slowly decayed linearly from <code>1.0</code> to <code>0.1</code> over <strong>1 million frames</strong>, and kept fixed forever after.</p></li> <li><p>You're not using Experience Replay, and not using a separate Target network, as described in <a href="https://deepmind.com/research/publications/human-level-control-through-deep-reinforcement-learning/" rel="noreferrer">the original DQN paper</a>. All of the points above are <strong>significantly</strong> easier to look into and fix, so I'd recommend that first. That might already be enough to actually start seeing some better-than-random performance after learning, but will likely still perform worse than it would with these two additions.</p></li> </ul>
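<p>To make the last two points a bit more concrete, here is a minimal, hedged sketch (plain Python/NumPy; the names are illustrative and not taken from the original code) of a linear epsilon schedule and a simple experience-replay buffer that could be dropped into the training loop:</p> <pre><code>import random
from collections import deque

import numpy as np

def linear_epsilon(step, start=1.0, end=0.1, decay_steps=1000000):
    # linearly anneal epsilon from `start` to `end` over `decay_steps` steps
    fraction = min(step / decay_steps, 1.0)
    return start + fraction * (end - start)

class ReplayBuffer:
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

# usage sketch inside the training loop:
#   eps = linear_epsilon(global_step)
#   take a random action with probability eps, otherwise argmax of the Q-values
#   buffer.add(state, action, reward, next_state, done)
#   once the buffer holds a few thousand transitions, train on buffer.sample(32)
#   instead of only on the episode that just finished
</code></pre>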
tensorflow|neural-network|artificial-intelligence|reinforcement-learning|q-learning
12
8,752
63,871,508
In pycharm Pandas Datafram want to highlight specific column with specific color
<p>I have the following code and I am running it in PyCharm, but it is not working. Is there some basic problem? There are many similar questions on the Stack Overflow forum and I have gone through them; I came to the conclusion that the <em><strong>style</strong></em> function does not work in PyCharm and that I have to switch over to Jupyter Notebook. Is that true? Please guide.</p> <pre><code>import pandas as pd
import numpy as np

data = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC'))
#print (data)

def highlight_cols(x):
    #copy df to new - original data are not changed
    df = x.copy()
    #select all values to default value - red color
    df.loc[:,:] = 'background-color: red'
    #overwrite values grey color
    df[['B','C']] = 'background-color: grey'
    #return color df
    return df

data.style.apply(highlight_cols, axis=None)
print(data)
</code></pre>
<p>It looks like you're trying to print the styled object; it should be:</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(5, 3), columns=list('ABC'))

def color_negative_red(val):
    color = 'red' if val &lt; 0 else 'black'
    return 'color: %s' % color

# from the resource given I assume apply is an actual method, while the resource
# below uses the applymap method
styled = df.style.applymap(color_negative_red)

# your previous code was attempting to print instead of render
# styled.render()  # will return the css styling as a string
styled
</code></pre> <p>Personally, I like to use PyCharm for personal projects, while Colab or Jupyter I like for quick runs. Here's a notebook if you want to see the code in action.</p> <p><a href="https://colab.research.google.com/drive/1deWaFprBv3IB1NNFOHB9ce-ebAOKOBXB?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1deWaFprBv3IB1NNFOHB9ce-ebAOKOBXB?usp=sharing</a></p> <p><strong>NOTE</strong>: The difference between apply and applymap is better explained in the resource below; in short, apply expects an axis and works on whole rows/columns, while applymap expects a function that operates on a single scalar value.</p> <p>Pandas API reference: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html</a></p>
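<p>If the goal is specifically to see the colours while staying in PyCharm, one hedged workaround (the file name is just an example) is to write the rendered HTML to disk and open it in a browser, since the Styler only produces HTML and the plain PyCharm console cannot display it:</p> <pre><code>html = styled.render()  # Styler.render() returns the styled table as an HTML string
with open('styled_table.html', 'w') as f:
    f.write(html)
# open styled_table.html in a browser to see the colours
</code></pre>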
python|pandas|dataframe|styles
0
8,753
63,876,479
Reading in data into jupyter notebook
<p>When I read in a file (whether it be in .csv or .txt format) I tend to use the pandas.read_csv() method to read it. I was wondering if there is any difference between using this method and using with open(file). And if so, are there any advantages to using one over the other?</p> <p>Any insights on this would be appreciated. :)</p>
<ul> <li><p>The advantage of using <strong>pandas</strong> is that you have to write minimal code, but you also probably load a lot of stuff that you don't need.</p> </li> <li><p>The advantage of using <code>open()</code> is that you have more control, and your program is going to be significantly more minimalistic, as shown in the sketch below.</p> </li> </ul>
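<p>A minimal sketch of the two approaches, assuming a small comma-separated file named <code>data.csv</code> (the file name is only an example):</p> <pre><code>import pandas as pd

# pandas: one call gives you a typed, labelled DataFrame
df = pd.read_csv('data.csv')
print(df.head())

# plain open(): you get raw lines and do the parsing yourself
with open('data.csv') as f:
    header = f.readline().rstrip('\n').split(',')
    rows = [line.rstrip('\n').split(',') for line in f]
print(header, rows[:5])
</code></pre>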
python|pandas
0
8,754
63,917,784
How can I write an indicator function in Python?
<p>I want to make an indicator function for a PDE in Python: u(x) = 2x if 0 &lt; x &lt; 1/2 and 2 - 2x if 1/2 &lt; x &lt; 1, but when I do it an error occurs.</p> <p>I have chosen np.where, which is an if-else function.</p> <p>Can someone help me?</p> <pre><code>import numpy as np x = np.linspace(0,1) x np.where(x&gt;0 &amp; x&lt;1/2,2*x,2-2*x) </code></pre>
<p>It would be really helpful if you provided the error message instead of just saying &quot;an error occurs&quot;.</p> <p>Anyway, add parentheses, i.e. <code>(x&gt;0) &amp; (x&lt;0.5)</code>. You need them because the <code>&amp;</code> operator has <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="nofollow noreferrer">higher precedence</a> than the comparison operators, so in <code>x&gt;0 &amp; x&lt;0.5</code>, the first expression to be evaluated is <code>0 &amp; x</code>. The error message complains that this is not valid when <code>x</code> is a NumPy array.</p> <p>PS: This is not an indicator function.</p>
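<p>For completeness, the corrected call would look something like this (keeping the original linspace; whether the boundaries should be strict or inclusive is left to the asker):</p> <pre><code>import numpy as np

x = np.linspace(0, 1)
# parentheses around each comparison, because &amp; binds tighter than &lt; and &gt;
u = np.where((x &gt; 0) &amp; (x &lt; 0.5), 2 * x, 2 - 2 * x)
print(u[:5])
</code></pre>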
python|numpy|pde
0
8,755
64,048,720
Pytorch Unfold and Fold: How do I put this image tensor back together again?
<p>I am trying to filter a single channel 2D image of size 256x256 using unfold to create 16x16 blocks with an overlap of 8. This is shown below:</p> <pre><code># I = [256, 256] image kernel_size = 16 stride = bx/2 patches = I.unfold(1, kernel_size, int(stride)).unfold(0, kernel_size, int(stride)) # size = [31, 31, 16, 16] </code></pre> <p>I have started to attempt to put the image back together with fold but I’m not quite there yet. I’ve tried to use view to get the image to ‘fit’ the way it’s supposed to but I don’t see how this would preserve the original image. Perhaps I’m overthinking this.</p> <pre><code># patches.shape = [31, 31, 16, 16] patches = = filt_data_block.contiguous().view(-1, kernel_size*kernel_size) # [961, 256] patches = patches.permute(1, 0) # size = [951, 256] </code></pre> <p>Any help would be greatly appreciated. Thanks very much.</p>
<p>I believe you will benefit from using <code>torch.nn.functional.fold</code> and <code>torch.nn.functional.unfold</code> in this case, as these functions are built specifically for images (or any 4D tensors, that is with shape B X C X H X W).</p> <p>Let's start with unfolding the image:</p> <pre><code>import torch import torch.nn.functional as F import matplotlib.pyplot as plt from sklearn.datasets import load_sample_image #Used to load a sample image dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor #Load a flower image from sklearn.datasets, crop it to shape 1 X 3 X 256 X 256: I = torch.from_numpy(load_sample_image('flower.jpg')).permute(2,0,1).unsqueeze(0).type(dtype)[...,128:128+256,256:256+256] kernel_size = 16 stride = kernel_size//2 I_unf = F.unfold(I, kernel_size, stride=stride) </code></pre> <p>Here we obtain all the 16x16 image patches with strides of 8 by using the <code>F.unfold</code> function. This will result in a 3D tensor with shape <code>torch.Size([1, 768, 961])</code>. ie - 961 patches with 768 = 16 X 16 X 3 pixels within each.</p> <p>Now, say we wish to fold it back to I:</p> <pre><code>I_f = F.fold(I_unf,I.shape[-2:],kernel_size,stride=stride) norm_map = F.fold(F.unfold(torch.ones(I.shape).type(dtype),kernel_size,stride=stride),I.shape[-2:],kernel_size,stride=stride) I_f /= norm_map </code></pre> <p>We use <code>F.fold</code> where we tell it the original shape of <code>I</code>, the <code>kernel_size</code> we used to unfold and the <code>stride</code> used. After folding <code>I_unf</code> we will obtain a summation <strong>with overlaps</strong>. This means that the resulting image will appear saturated. As a result, we need to compute a normalization map which will normalize multiple summation of pixels due to overlaps. A way to do this efficiently is to take a ones tensor and use <code>unfold</code> followed by <code>fold</code> - to mimic the summation with overlaps. This gives us the normalization map by which we normalize <code>I_f</code> to recover <code>I</code>.</p> <p>Now, we wish to plot <code>I_f</code> and <code>I</code> to prove content is preserved:</p> <pre><code>#Plot I: plt.imshow(I[0,...].permute(1,2,0).cpu()/255) </code></pre> <p><a href="https://i.stack.imgur.com/Mdwt2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mdwt2.png" alt="enter image description here" /></a></p> <pre><code>#Plot I_f: plt.imshow(I_f[0,...].permute(1,2,0).cpu()/255) </code></pre> <p><a href="https://i.stack.imgur.com/n2tue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n2tue.png" alt="enter image description here" /></a></p> <p>This whole process will work also for single-channel images. One thing to notice is that if spatial dimensions of the image are not divisible by the stride, you will get <code>norm_map</code> with zeros (at the edges) due to some pixels not reachable but you can easily handle this case as well.</p>
machine-learning|pytorch|artificial-intelligence|vision
5
8,756
47,064,242
Python Pandas Conditional First Differencing
<p>I have a Pandas dataframe that looks like something this:</p> <pre><code> Item1 Item2 Item3 Customer date 1 2014-03-24 0.0 10.0 50.0 2014-06-23 0.0 20.0 60.0 2014-09-22 0.0 20.0 40.0 2014-12-22 3.0 30.0 20.0 2014-12-29 0.0 30.0 20.0 2 2014-03-24 0.0 10.0 50.0 2014-06-23 0.0 20.0 60.0 2014-09-22 0.0 20.0 40.0 2014-12-22 4.0 30.0 20.0 2014-12-29 0.0 30.0 20.0 3 2014-03-24 0.0 10.0 50.0 2014-06-23 0.0 20.0 60.0 2014-09-22 0.0 20.0 40.0 2014-12-22 5.0 30.0 20.0 2014-12-29 0.0 30.0 20.0 </code></pre> <p>It is multi indexed on customer number and date. I want to calculate the first difference in each item for reach customer while ignoring instances when the number goes from 0 to 0. Output would look like this:</p> <pre><code> Item1 Item2 Item3 Customer date 1 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 20.0 2014-12-22 3.0 10.0 -20.0 2014-12-29 -3.0 0.0 0.0 2 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 20.0 2014-12-22 4.0 10.0 -20.0 2014-12-29 -4.0 0.0 0.0 3 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 20.0 2014-12-22 5.0 10.0 -20.0 2014-12-29 -5.0 0.0 0.0 </code></pre> <p>If not for the need to exclude 0-to-0 changes, df.groupby(level=0).diff() would work fine. </p> <p>I can devise a way to look through the rows to do this, but the dataframe is quite massive (tens of thousands of customers and dozens of items), so this will not fly. I reckon there is a way to do this with an .apply() operation, but I cannot quite sort it out at this point.</p>
<p>You're almost there; add <code>.mask</code>:</p> <pre><code> df.groupby(level=0).diff().mask(df==0) Out[740]: Item1 Item2 Item3 Customer date 1 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 -20.0 2014-12-22 3.0 10.0 -20.0 2 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 -20.0 2014-12-22 4.0 10.0 -20.0 3 2014-03-24 NaN NaN NaN 2014-06-23 NaN 10.0 10.0 2014-09-22 NaN 0.0 -20.0 2014-12-22 5.0 10.0 -20.0 </code></pre> <p>EDIT:</p> <pre><code>df.groupby(level=0).diff().mask(df.groupby(level='Customer').apply(lambda x: (x==0).cumprod())==1) </code></pre>
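<p>A short note on what the EDIT does (my own reading of it, stated cautiously): <code>(x==0).cumprod()</code> is 1 only while a column has been 0 continuously from the first row of each customer group, and it drops to 0 permanently after the first non-zero value. Masking where that flag equals 1 therefore blanks out only the leading 0-to-0 differences, instead of every row whose current value happens to be 0. A tiny illustration:</p> <pre><code>import pandas as pd

s = pd.Series([0, 0, 3, 0, 5])
flag = (s == 0).cumprod()
print(flag.tolist())   # [1, 1, 0, 0, 0]: only the leading zeros are flagged
</code></pre>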
python|pandas|diff
1
8,757
63,221,977
Drawing OpenCV contours and save as transparent image
<p>I'm trying to draw contours I have found using findContours.</p> <p>If I draw like this, I get a black background with the contour drawn on it.</p> <pre><code> out = np.zeros_like(someimage) cv2.drawContours(out, contours, -1, 255, 1) cv2.imwrite('contours.png',out) </code></pre> <p>If I draw like this, I get a fully transparent image with no drawn contours.</p> <pre><code> out = np.zeros((55, 55, 4), dtype=np.uint8) cv2.drawContours(out, contours, -1, 255, 1) cv2.imwrite('contours.png',out) </code></pre> <p>How do I go about making an image of size (55,55) and drawing a contour on it, while keeping a transparent background?</p> <p>Thanks</p>
<p>To work with transparent images in OpenCV you need to utilize the fourth channel after BGR, called alpha, which controls the transparency. So instead of creating a three-channel image, create one with four channels, and while drawing make sure you assign the fourth channel to 255.</p> <pre><code>mask = np.zeros((55, 55, 4), dtype=np.uint8) cv2.drawContours(mask, cnts, -1, (255, 255, 255, 255), 1) #change first three channels to any color you want. cv2.imwrite('res.png', mask) </code></pre> <p><a href="https://i.stack.imgur.com/mBNEa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBNEa.png" alt="Input" /></a></p> <p>Input image whose contours are drawn.</p> <p><a href="https://i.stack.imgur.com/3qACO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3qACO.png" alt="result" /></a></p> <p>Result</p>
python|numpy|opencv
3
8,758
67,738,552
Pandas to ODBC connection with to_sql
<p>I'm trying to export a pandas DataFrame into an MS Access table through <code>pyodbc</code>.</p> <pre class="lang-py prettyprint-override"><code>conn = pyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=my_db.accdb;') df.to_sql('test', conn, index=False) DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', &quot;[42S02] [Microsoft][ODBC Microsoft Access Driver] The Microsoft Access database engine cannot find the input table or query 'sqlite_master'. Make sure it exists and that its name is spelled correctly. (-1305) (SQLExecDirectW)&quot;) </code></pre> <p><code>sqlite_master</code>? Where does that come from?</p>
<p><code>.to_sql()</code> expects the second argument to be either a SQLAlchemy <code>Connectable</code> object or a DBAPI <code>Connection</code> object. If it is the latter then pandas <em>assumes</em> that it is a SQLite connection.</p> <p>You need to use the <a href="https://github.com/gordthompson/sqlalchemy-access" rel="nofollow noreferrer">sqlalchemy-access</a> dialect.</p> <p>(Disclosure: I maintain that dialect.)</p>
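<p>For illustration, here is a hedged sketch of what that looks like; the connection details are placeholders and the URL format is taken from the sqlalchemy-access documentation as I recall it, so double-check the README for your setup:</p> <pre><code>import urllib.parse

import pandas as pd
import sqlalchemy as sa

# ODBC connection string for the Access file (the path is a placeholder)
conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\my_db.accdb;"
)
engine = sa.create_engine(
    "access+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(conn_str)
)

df = pd.DataFrame({"a": [1, 2]})
df.to_sql("test", engine, index=False, if_exists="replace")
</code></pre>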
python|pandas|ms-access|pyodbc
0
8,759
67,990,805
How to transform or melt column in python?
<p>I have this table in a wide form, but I want to reshape it into a simpler long form. The existing form is like this:</p> <pre><code>group mp_current mh_current mp_total mh_total contractor 25 4825 0 0 </code></pre> <p>I want to transform the table into this form:</p> <pre><code>group mp mh period contractor 25 4825 current contractor 0 0 total </code></pre> <p>where I would have one dedicated column for mp and mh, and one extra column as the period column.</p> <p>How can I do this in Python?</p>
<h3><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html" rel="noreferrer"><code>wide_to_long</code></a></h3> <p>You specify the stubnames (column prefixes), the separator (<code>'_'</code>), and that the suffix is anything (<code>'.*'</code>) as it by default expects numerics. The <code>j</code> argument becomes the column label for the the values after the separator. The column referenced by <code>i</code> needs to uniquely label each row.</p> <pre><code>df1 = (pd.wide_to_long(df, i='group', j='period', stubnames=['mh', 'mp'], sep='_', suffix='.*') .reset_index()) </code></pre> <hr /> <pre><code> group period mh mp 0 contractor current 4825 25 1 contractor total 0 0 </code></pre>
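<p>Since the title also mentions <code>melt</code>, here is a hedged alternative sketch built on <code>melt</code> plus string splitting (it assumes the same single-row frame as above; <code>pivot_table</code>'s default mean aggregation is harmless here because each group/period/metric combination is unique):</p> <pre><code>long = df.melt(id_vars='group')
long[['metric', 'period']] = long['variable'].str.split('_', expand=True)

out = (long.pivot_table(index=['group', 'period'], columns='metric', values='value')
           .reset_index()
           .rename_axis(columns=None))
</code></pre>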
python|pandas|transform
8
8,760
61,225,373
Pandas replace rows based on multiple conditions
<p>Take for example the following dataframe:</p> <pre><code>df = pd.DataFrame({"val":np.random.rand(8), "id1":[1,2,3,4,1,2,3,4], "id2":[1,2,1,2,2,1,2,2], "id3":[1,1,1,1,2,2,2,2]}) </code></pre> <p>I would like to replace the id2 rows where id3 does not equal an arbitrary reference with the corresponding id2 values which have the same id1</p> <p>I have a solution which partially works but does not operate using the 2nd condition (replcae id2 based on same values as id1 when id3 is equal to the reference). This prevents my solution from being very robust, as discussed below. </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({"val":np.random.rand(8), "id1":[1,2,3,4,1,2,3,4], "id2":[1,2,1,2,2,1,2,2], "id3":[1,1,1,1,2,2,2,2]}) reference = 1 df.loc[df['id3'] != reference, "id2"] = df[df["id3"]==reference]["id2"].values print(df) </code></pre> <p>Output:</p> <pre><code> val id1 id2 id3 0 0.580965 1 1 1 1 0.941297 2 2 1 2 0.001142 3 1 1 3 0.479363 4 2 1 4 0.732861 1 1 2 5 0.650075 2 2 2 6 0.776919 3 1 2 7 0.377657 4 2 2 </code></pre> <p>This solution does work, but only under the condition that id3 has two distinct values. If there are three id3 values, i.e.</p> <pre><code>df = pd.DataFrame({"val":np.random.rand(12), "id1":[1,2,3,4,1,2,3,4,1,2,3,4], "id2":[1,2,1,2,2,1,2,2,1,1,2,2], "id3":[1,1,1,1,2,2,2,2,3,3,3,3]}) </code></pre> <p>Expected/desired output:</p> <pre><code> val id1 id2 id3 0 0.800934 1 1 1 1 0.505645 2 2 1 2 0.268300 3 1 1 3 0.295300 4 2 1 4 0.564372 1 1 2 5 0.154572 2 2 2 6 0.591691 3 1 2 7 0.896055 4 2 2 8 0.275267 1 1 3 9 0.840533 2 2 3 10 0.192257 3 1 3 11 0.543342 4 2 3 </code></pre> <p>Then unfortunately my solution ceases to work. If anyone could provide some tips how to circumvent this issue, I would be very appreciative. </p>
<p>If <code>id1</code> column is like counter of groups create helper <code>Series</code> by <code>reference</code> group by filtering and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> first and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p> <pre><code>reference = 1 s = df[df['id3'] == reference].set_index('id1')['id2'] df['id2'] = df['id1'].map(s) print (df) val id1 id2 id3 0 0.986277 1 1 1 1 0.873392 2 2 1 2 0.509746 3 1 1 3 0.271836 4 2 1 4 0.336919 1 1 2 5 0.216954 2 2 2 6 0.276477 3 1 2 7 0.343316 4 2 2 8 0.862159 1 1 3 9 0.156700 2 2 3 10 0.140887 3 1 3 11 0.757080 4 2 3 </code></pre> <p>If not counter column create new one by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>:</p> <pre><code>reference = 1 df['g'] = df.groupby('id3').cumcount() s = df[df['id3'] == reference].set_index('g')['id2'] df['id2'] = df['g'].map(s) print (df) val id1 id2 id3 g 0 0.986277 1 1 1 0 1 0.873392 2 2 1 1 2 0.509746 3 1 1 2 3 0.271836 4 2 1 3 4 0.336919 1 1 2 0 5 0.216954 2 2 2 1 6 0.276477 3 1 2 2 7 0.343316 4 2 2 3 8 0.862159 1 1 3 0 9 0.156700 2 2 3 1 10 0.140887 3 1 3 2 11 0.757080 4 2 3 3 </code></pre>
python|pandas|dataframe
2
8,761
61,542,620
Is there a package called document_base?
<p>I tried to implement YOLOv3 using PyTorch and I found that models had changed to doqu and I changed it accordingly. What I got next was this</p> <blockquote> <p>ModuleNotFoundError: No module named 'document_base'</p> </blockquote> <p>When I tried to install it with pip however I got this</p> <blockquote> <p>ERROR: Could not find a version that satisfies the requirement document_base (from versions: none)</p> <p>ERROR: No matching distribution found for document_base</p> </blockquote> <p>Is there a way to install this or are there any workarounds to this??</p>
<p>It's a module inside <code>doqu</code>: <a href="https://bitbucket.org/neithere/doqu/src/default/doqu/document_base.py" rel="nofollow noreferrer">https://bitbucket.org/neithere/doqu/src/default/doqu/document_base.py</a></p> <p>To import it:</p> <pre><code>import doqu.document_base </code></pre> <p>or</p> <pre><code>from doqu import document_base </code></pre>
python-3.x|pip|pytorch|google-colaboratory|yolo
0
8,762
61,468,817
How to add a length 1 dimension to an NDArray in C#
<p>I have an NDArray of shape (480, 640, 3) in C# using NumSharp.</p> <p>I need to reshape it to (1, 480, 640, 3).</p> <p>In Python it would be <code>imgArr = imgArr.reshape((1, 480, 640, 3))</code>.</p> <p>How can this be done in C#?</p> <p>Thank you!</p> <p><em>PS: Sorry for the tags, but there are no NumSharp or NumSharp-NDArray tags and I can't add tags yet.</em></p>
<p>Solved it with a bit of luck. I can't explain why, but the solution was to give a -1 in one of the three last dimensions, <code>imgArr = imgArr.reshape(new Shape(1, 480, -1, 3))</code>.</p> <p>(for the curious, <code>imgArr = imgArr.reshape(new Shape(1, 480, 640, 3))</code> did not work)</p>
c#|numpy|numpy-ndarray
0
8,763
68,650,265
What is the "data.max" of a torch.Tensor?
<p>I have been browsing the documentation of torch.Tensor, but I have not been able to find this (just similar things).</p> <p>If <code>a_tensor</code> is a <code>torch.Tensor</code>, what is <code>a_tensor.data.max</code>? What type, etc.?</p> <p>In particular, I am reading <code>a_tensor.data.max(1)[1]</code> and <code>a_tensor.data.max(1)[1][i].cpu().numpy()</code>.</p>
<p>When accessing <code>.data</code> you are accessing the underlying data of the tensor. The returned object <em>is</em> a <code>Torch.*Tensor</code> as well; however, it won't be linked to any computational graph.</p> <p>Take this example:</p> <pre><code>&gt;&gt;&gt; x = torch.rand(4, requires_grad=True) &gt;&gt;&gt; y = x**2 &gt;&gt;&gt; y tensor([0.5272, 0.3162, 0.1374, 0.3004], grad_fn=&lt;PowBackward0&gt;) </code></pre> <p>While <code>y.data</code> is somewhat detached from the graph (no <code>grad_fn</code> function), it is not a copy of <code>y</code> as <code>y.detach()</code> would return:</p> <pre><code>&gt;&gt;&gt; y.data tensor([0.5272, 0.3162, 0.1374, 0.3004]) </code></pre> <p>Therefore, if you modify <code>y.data</code>'s components you end up modifying <code>y</code> itself:</p> <pre><code>&gt;&gt;&gt; y.data[0] = 1 &gt;&gt;&gt; y tensor([1.0000, 0.3162, 0.1374, 0.3004], grad_fn=&lt;PowBackward0&gt;) </code></pre> <p>Notice how the <code>grad_fn</code> didn't change there. If you had done <code>y[0] = 1</code>, <code>grad_fn</code> would have been updated to <code>&lt;CopySlices&gt;</code>. This shows that modifying your tensor's data through <code>.data</code> is not accounted for in terms of gradients, <em>i.e.</em>, you won't be able to backpropagate these operations. It is required to work with <code>y</code> - <em>not <code>y.data</code></em> - when planning to use Autograd.</p> <hr /> <p>So, to give an answer to your question: <code>a_tensor.data</code> is a <code>torch.*Tensor</code>, same type as <code>a_tensor</code>, and <code>a_tensor.data.max</code> is a function bound to that tensor.</p>
python|pytorch
1
8,764
68,821,679
Divide into groups with approximately equal count of rows and draw bars with average values
<p>I have many rows in:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(columns = ['param_1', 'param_2', ..., 'param_n']) </code></pre> <p>and I need to group rows into several groups and display stacked bars with averaged values:</p> <pre class="lang-py prettyprint-override"><code>bin_count = 10 step = math.ceil(df.shape[0] / bin_count) df['p'] = 0 for i in range(0, df.shape[0], step): df.iloc[i:i + step]['p'] = i df_group = df.groupby('p').mean() df_group.plot.bar(stacked = True) </code></pre> <p>How to do it more efficiently?</p>
<p>Use <code>pd.qcut</code> to split the DataFrame by passing it a <code>range</code> based on the length of the DataFrame.</p> <h3>Sample Data</h3> <pre><code>import pandas as pd import numpy as np # 111 rows to split into (10) groups Nr = 111 df = pd.DataFrame({'param_1': np.random.randint(0, 10, Nr), 'param_2': range(Nr)}) </code></pre> <h3>Code</h3> <pre><code># Number of bins bin_count = 10 df_group = df.groupby(pd.qcut(range(len(df)), bin_count, labels=False)).mean() df_group.plot.bar(stacked=True, rot=0) </code></pre> <p><a href="https://i.stack.imgur.com/dDusr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dDusr.png" alt="enter image description here" /></a></p> <hr /> <p>This will form groups as close to evenly sized as possible given the length of your data, similar to <code>np.array_split</code></p> <pre><code>df.groupby(pd.qcut(range(len(df)), bin_count, labels=False)).size() 0 12 1 11 2 11 3 11 4 11 5 11 6 11 7 11 8 11 9 11 dtype: int64 </code></pre>
python|pandas
1
8,765
68,662,594
how to scrape data from an interactive code
<p>I want to scrape data from a tourist <a href="https://tn.tunisiebooking.com/" rel="nofollow noreferrer">site</a> there is a list of hotels i'm extracting names and arrangements but i've got stuck in extracting the price of every arrangement because it's interactive the price shows up as soon as i choose the arrangement. I put at your disposal my code if any of you can help me and thank you in advance.</p> <pre><code>#!/usr/bin/env python # coding: utf-8 import json from time import sleep from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait, Select # create path and start webdriver PATH = &quot;C:\chromedriver.exe&quot; driver = webdriver.Chrome(PATH) # first get website driver.get('https://tn.tunisiebooking.com/') wait = WebDriverWait(driver, 20) # params to select params = { 'destination': 'Tunis', 'date_from': '08/08/2021', 'date_to': '09/08/2021', 'bedroom': '1' } # select destination destination_select = Select(driver.find_element_by_id('ville_des')) destination_select.select_by_value(params['destination']) # select bedroom bedroom_select = Select(driver.find_element_by_id('select_ch')) bedroom_select.select_by_value(params['bedroom']) # select dates script = f&quot;document.getElementById('depart').value ='{params['date_from']}';&quot; script += f&quot;document.getElementById('checkin').value ='{params['date_to']}';&quot; driver.execute_script(script) # click bouton search btn_rechercher = driver.find_element_by_id('boutonr') btn_rechercher.click() sleep(10) # click bouton details #btn_plus = driver.find_element_by_id('plus_res') #btn_plus.click() #sleep(10) # ---------------------------------------------------------------------------- # get list of all hotels hotels_list = [] hotels_objects = driver.find_elements_by_xpath( '//div[contains(@class, &quot;enveloppe_produit&quot;)]' ) for hotel_obj in hotels_objects: # get price object price_object = hotel_obj.find_element_by_xpath( './/div[@class=&quot;monaieprix&quot;]' ) price_value = price_object.find_element_by_xpath( './/div[1]' ).text.replace('\n', '') # get title data title_data = hotel_obj.find_element_by_xpath( './/span[contains(@class, &quot;tittre_hotel&quot;)]' ) # get arrangements arrangements_obj = hotel_obj.find_elements_by_xpath( './/div[contains(@class, &quot;angle&quot;)]//u' ) arrangements = [ao.text for ao in arrangements_obj] # get arrangements prixM_obj = hotel_obj.find_elements_by_xpath( './/div[contains(@id, &quot;prixtotal&quot;)]' ) prixM = [ao.text for ao in prixM_obj] # create new object hotels_list.append({ 'name': title_data.find_element_by_xpath('.//a//h3').text, 'arrangements': arrangements, 'prixM':prixM, 'price': f'{price_value}' }) # ---------------------------------------------------------------- #for hotel in hotels_list: # print(json.dumps(hotel, indent=4)) import pandas as pd df = pd.DataFrame(hotels_list, columns=['name','arrangements','price']) df.head() </code></pre>
<p>In order to get prizes for all the arrangement options, performing click operations is necessary.</p> <p>Below code retrieves 1st options(like Breakfast) arrangements and its prizes. Need to repeat the same process for all the other options available.</p> <pre><code>hotels = driver.find_elements_by_xpath(&quot;//div[starts-with(@id,'produit_affair_')]&quot;) hoteldata = {} for hotel in hotels: name = hotel.find_element_by_tag_name(&quot;h3&quot;).text arr = hotel.find_elements_by_tag_name(&quot;u&quot;) rooms = hotel.find_elements_by_tag_name(&quot;label&quot;) roomdata = [] for room in rooms: room.click() rprize = hotel.find_element_by_xpath(&quot;//div[starts-with(@id,'prixtotal_')]&quot;).text roomdata.append((room.text,rprize)) hoteldata[name] = roomdata print(hoteldata) </code></pre> <p>And the output:</p> <pre><code>{'KANTA': [('Chambre Double ', '43'), ('Chambre Double Vue Piscine ', '50')], 'El Mouradi Palace': [('Chambre Double ', '50'), ('Chambre Double superieure ', '50')], 'Occidental Sousse Marhaba': [('Double Standard ', '50'), ('Chambre Double Vue Mer. ', '50')], 'Tui Blue Scheherazade': [('Double Standard Vue Mer ', '50'), ('Double -Swim Up ', '50')], 'Golf Residence GAS': [('Double--Standard ', '50')], 'Sindbad Center GAS': [('Chambre Double ', '50')], 'Iberostar Diar el Andalous': [('Double Standard ', '50'), ('Double Standard Vue Mer ', '50'), ('Double Prestige ', '50'), ('Suite-Junior Double ', '50')], 'Seabel AlHambra Beach Golf &amp; Spa': [('Bungalow Double ', '50'), ('Chambre Double superieure ', '50')], 'Marhaba Palace': [('Chambre Double ', '50')], 'Cosmos Tergui Club': [('Chambre Double ', '50'), ('Double_vue Mer ', '50')], 'Riadh Palms': [('Chambre Double-superieure ', '50'), ('Chambre Double Superieure Vue Mer ', '50')], 'Royal Jinene': [('Chambre Double ', '50'), ('Double Standard Vue Mer ', '50')], 'Houria Palace': [('Chambre-double-vue piscine ', '50'), ('Chambre Double ', '50')], 'Marhaba Beach': [('Chambre Double ', '50')], 'Marhaba Club': [('Chambre Double ', '50'), ('Chambre Double Vue Mer ', '50')], 'Palmyra Aqua Park ex soviva': [('Chambre Double ', '50')], 'Sousse City &amp; Beach Hotel': [('Double Standard ', '50'), ('Double Standard Vue Mer ', '50')], 'Sousse Pearl Marriott Resort &amp; Spa': [('Chambre Double Standard ', '50'), ('Double Standard Vue Mer ', '50')], 'Riviera': [('Double Standard ', '50')], 'Concorde Green Park Palace': [('Double Standard ', '50'), ('Double Standard Vue Mer ', '50'), ('Suite Prestige Vue mer ', '50')]} </code></pre>
python|pandas|selenium|selenium-webdriver|web-scraping
2
8,766
68,816,292
Deep learning AI for integers in a sequence
<p>I am new to ML, and I would like to use keras to categorize every number in a sequence as a 1 or 0 depending on whether it is greater than the previous number. That is, if I had:</p> <p>sequence a = [1, 2, 6, 4, 5],</p> <p>The solution should be: sequence b = [0, 1, 1, 0, 1].</p> <p>So far, I have written:</p> <pre><code>import tensorflow as tf from tensorflow import keras from keras.models import Sequential from keras.layers import Dense model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1,1])]) model.add(tf.keras.layers.Dense(17)) model.add(tf.keras.layers.Dense(17)) model.compile(optimizer='sgd', loss='BinaryCrossentropy', metrics=['binary_accuracy']) b = [1,6,8,3,5,8,90,5,432,3,5,6,8,8,4,234,0] a = [0,1,1,0,1,1,1,0,1,0,1,1,1,0,0,1,0] b = np.array(b, dtype=float) a = np.array(a, dtype=float) model.fit(b, a, epochs=500, batch_size=1) # # Generate predictions for samples predictions = model.predict(b) print(predictions) </code></pre> <p>When I do this, I end up with:</p> <pre><code>Epoch 500/500 17/17 [==============================] - 0s 499us/step - loss: 7.9229 - binary_accuracy: 0.4844 [[[-1.37064695e+01 4.70858345e+01 -4.67341652e+01 -1.94298875e+00 5.75960045e+01 6.70146179e+01 6.34545479e+01 -4.86319550e+02 2.26250134e+01 -8.60109329e+00 -4.03220863e+01 -1.67574768e+01 3.36148148e+01 -4.55171967e+00 -1.39924898e+01 6.31023712e+01 -9.14120102e+00]] [[-6.92644653e+01 2.40270264e+02 -2.37715302e+02 -9.42625141e+00 2.93314209e+02 3.41092743e+02 3.23760315e+02 -2.49306396e+03 1.15242020e+02 -4.38339310e+01 -2.05973328e+02 -8.48139114e+01 1.70274872e+02 -2.48692398e+01 -7.15372696e+01 3.22131958e+02 -4.57872620e+01]] [[-9.14876480e+01 3.17544006e+02 -3.14107819e+02 -1.24195509e+01 3.87601562e+02 4.50723969e+02 4.27882660e+02 -3.29576172e+03 1.52288818e+02 -5.79270554e+01 -2.72233856e+02 -1.12036469e+02 2.24938889e+02 -3.29962883e+01 -9.45551834e+01 4.25743744e+02 -6.04456978e+01]] [[-3.59296684e+01 1.24359612e+02 -1.23126640e+02 -4.93629456e+00 1.51883270e+02 1.76645889e+02 1.67576874e+02 -1.28901733e+03 5.96718216e+01 -2.26942272e+01 -1.06582588e+02 -4.39800491e+01 8.82788391e+01 -1.26787395e+01 -3.70104065e+01 1.66714172e+02 -2.37996235e+01]] [[-5.81528549e+01 2.01633392e+02 -1.99519104e+02 -7.92959309e+00 2.46170563e+02 2.86277161e+02 2.71699158e+02 -2.09171509e+03 9.67186279e+01 -3.67873497e+01 -1.72843094e+02 -7.12026062e+01 1.42942856e+02 -2.08057709e+01 -6.00283318e+01 2.70326050e+02 -3.84580460e+01]] [[-9.14876480e+01 3.17544006e+02 -3.14107819e+02 -1.24195509e+01 3.87601562e+02 4.50723969e+02 4.27882660e+02 -3.29576172e+03 1.52288818e+02 -5.79270554e+01 -2.72233856e+02 -1.12036469e+02 2.24938889e+02 -3.29962883e+01 -9.45551834e+01 4.25743744e+02 -6.04456978e+01]] [[-1.00263879e+03 3.48576855e+03 -3.44619800e+03 -1.35145050e+02 4.25337939e+03 4.94560596e+03 4.69689697e+03 -3.62063594e+04 1.67120789e+03 -6.35745117e+02 -2.98891406e+03 -1.22816174e+03 2.46616406e+03 -3.66204163e+02 -1.03828992e+03 4.67382764e+03 -6.61441223e+02]] [[-5.81528549e+01 2.01633392e+02 -1.99519104e+02 -7.92959309e+00 2.46170563e+02 2.86277161e+02 2.71699158e+02 -2.09171509e+03 9.67186279e+01 -3.67873497e+01 -1.72843094e+02 -7.12026062e+01 1.42942856e+02 -2.08057709e+01 -6.00283318e+01 2.70326050e+02 -3.84580460e+01]] [[-4.80280518e+03 1.66995840e+04 -1.65093086e+04 -6.47000305e+02 2.03765059e+04 2.36925508e+04 2.25018145e+04 -1.73467625e+05 8.00621289e+03 -3.04566919e+03 -1.43194590e+04 -5.88322070e+03 1.18137129e+04 -1.75592432e+03 -4.97435352e+03 2.23914492e+04 -3.16803076e+03]] 
[[-3.59296684e+01 1.24359612e+02 -1.23126640e+02 -4.93629456e+00 1.51883270e+02 1.76645889e+02 1.67576874e+02 -1.28901733e+03 5.96718216e+01 -2.26942272e+01 -1.06582588e+02 -4.39800491e+01 8.82788391e+01 -1.26787395e+01 -3.70104065e+01 1.66714172e+02 -2.37996235e+01]] [[-5.81528549e+01 2.01633392e+02 -1.99519104e+02 -7.92959309e+00 2.46170563e+02 2.86277161e+02 2.71699158e+02 -2.09171509e+03 9.67186279e+01 -3.67873497e+01 -1.72843094e+02 -7.12026062e+01 1.42942856e+02 -2.08057709e+01 -6.00283318e+01 2.70326050e+02 -3.84580460e+01]] [[-6.92644653e+01 2.40270264e+02 -2.37715302e+02 -9.42625141e+00 2.93314209e+02 3.41092743e+02 3.23760315e+02 -2.49306396e+03 1.15242020e+02 -4.38339310e+01 -2.05973328e+02 -8.48139114e+01 1.70274872e+02 -2.48692398e+01 -7.15372696e+01 3.22131958e+02 -4.57872620e+01]] [[-9.14876480e+01 3.17544006e+02 -3.14107819e+02 -1.24195509e+01 3.87601562e+02 4.50723969e+02 4.27882660e+02 -3.29576172e+03 1.52288818e+02 -5.79270554e+01 -2.72233856e+02 -1.12036469e+02 2.24938889e+02 -3.29962883e+01 -9.45551834e+01 4.25743744e+02 -6.04456978e+01]] [[-9.14876480e+01 3.17544006e+02 -3.14107819e+02 -1.24195509e+01 3.87601562e+02 4.50723969e+02 4.27882660e+02 -3.29576172e+03 1.52288818e+02 -5.79270554e+01 -2.72233856e+02 -1.12036469e+02 2.24938889e+02 -3.29962883e+01 -9.45551834e+01 4.25743744e+02 -6.04456978e+01]] [[-4.70412598e+01 1.62996490e+02 -1.61322891e+02 -6.43295908e+00 1.99026932e+02 2.31461517e+02 2.19638016e+02 -1.69036609e+03 7.81952209e+01 -2.97407875e+01 -1.39712814e+02 -5.75913391e+01 1.15610855e+02 -1.67422562e+01 -4.85193672e+01 2.18520096e+02 -3.11288433e+01]] [[-2.60270850e+03 9.04948047e+03 -8.94645508e+03 -3.50663330e+02 1.10420654e+04 1.28390557e+04 1.21937041e+04 -9.40005859e+04 4.33857861e+03 -1.65045227e+03 -7.75966846e+03 -3.18818774e+03 6.40197412e+03 -9.51349304e+02 -2.69557886e+03 1.21338779e+04 -1.71684766e+03]] [[-2.59487200e+00 8.44894505e+00 -8.53793907e+00 -4.46333081e-01 1.04523640e+01 1.21989994e+01 1.13933916e+01 -8.49708328e+01 4.10160637e+00 -1.55452514e+00 -7.19183874e+00 -3.14619255e+00 6.28279734e+00 -4.88203079e-01 -2.48353434e+00 1.12964716e+01 -1.81198704e+00]]] </code></pre>
<p>There are few issues with how you are approaching this -</p> <ol> <li><p>Your <strong>setup</strong> for the deep learning problem is flawed. You want to use the information of the previous element to infer the labels for the next element. But for inference (and training), you only pass the current element. If tomorrow I deploy this model, imagine what would happen. The only information I will provide you, say, &quot;15&quot; and as you if it's bigger than the previous element, which doesn't exist. How will your model respond?</p> </li> <li><p>Secondly, why are your <strong>output layer</strong> is predicting a 17-dimensional vector? Shouldn't the goal be to predict a 0 or 1 (probability)? In that case your output should be a single element with sigmoid activation. Refer to this diagram as a guide for your future setups for neural networks.</p> </li> </ol> <p><a href="https://i.stack.imgur.com/QyN1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QyN1H.png" alt="enter image description here" /></a></p> <ol start="3"> <li>Third, you are not using any <strong>activation functions</strong> which is the core reason to be using neural networks (nonlinearity). Without activation functions, you are just building a standard regression model. Here is a basic proof -</li> </ol> <pre><code>#2 layer neural network without activation h = W1.X+B1 o = W2.h+B2 o = W2.(W1.X+B1)+B2 = W2.W1.X + (W1.B1+B2) = W3.X + B3 #Same as linear regression! #2 layer neural network with activations. h = activation(W1.X+B1) o = activation(W2.h+B2) </code></pre> <hr /> <p>I would advise starting from basics of neural networks to first build best practices, then jumping into making your own problem statements. The Keras author <code>Fchollet</code> has some excellent <a href="https://github.com/fchollet/deep-learning-with-python-notebooks" rel="nofollow noreferrer">starter notebooks</a> that you can explore.</p> <p>For your case, try these modifications -</p> <pre><code>import tensorflow as tf from tensorflow import keras from keras.models import Sequential from keras.layers import Dense #Modify input shape and output shape + add activations model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=(2,))]) #&lt;------ model.add(tf.keras.layers.Dense(17, activation='relu')) #&lt;------ model.add(tf.keras.layers.Dense(1, activation='sigmoid')) #&lt;------ model.compile(optimizer='sgd', loss='BinaryCrossentropy', metrics=['binary_accuracy']) #create 2 features, 1st is previous element 2nd is current element b = [1,6,8,3,5,8,90,5,432,3,5,6,8,8,4,234,0] b = np.array([i for i in zip(b,b[1:])]) #&lt;---- (16,2) #Start from first paid of elements a = np.array([0,1,1,0,1,1,1,0,1,0,1,1,1,0,0,1,0])[1:] #&lt;---- (16,) model.fit(b, a, epochs=20, batch_size=1) # # Generate predictions for samples predictions = model.predict(b) print(np.round(predictions)) </code></pre> <pre><code>Epoch 1/20 16/16 [==============================] - 0s 1ms/step - loss: 3.0769 - binary_accuracy: 0.7086 Epoch 2/20 16/16 [==============================] - 0s 823us/step - loss: 252.6490 - binary_accuracy: 0.6153 Epoch 3/20 16/16 [==============================] - 0s 1ms/step - loss: 3.8109 - binary_accuracy: 0.9212 Epoch 4/20 16/16 [==============================] - 0s 787us/step - loss: 0.0131 - binary_accuracy: 0.9845 Epoch 5/20 16/16 [==============================] - 0s 2ms/step - loss: 0.0767 - binary_accuracy: 1.0000 Epoch 6/20 16/16 [==============================] - 0s 1ms/step - loss: 0.0143 - binary_accuracy: 
0.9800 Epoch 7/20 16/16 [==============================] - 0s 2ms/step - loss: 0.0111 - binary_accuracy: 1.0000 Epoch 8/20 16/16 [==============================] - 0s 2ms/step - loss: 4.0658e-04 - binary_accuracy: 1.0000 Epoch 9/20 16/16 [==============================] - 0s 941us/step - loss: 6.3996e-04 - binary_accuracy: 1.0000 Epoch 10/20 16/16 [==============================] - 0s 1ms/step - loss: 1.1477e-04 - binary_accuracy: 1.0000 Epoch 11/20 16/16 [==============================] - 0s 837us/step - loss: 6.8807e-04 - binary_accuracy: 1.0000 Epoch 12/20 16/16 [==============================] - 0s 2ms/step - loss: 5.0521e-04 - binary_accuracy: 1.0000 Epoch 13/20 16/16 [==============================] - 0s 851us/step - loss: 0.0015 - binary_accuracy: 1.0000 Epoch 14/20 16/16 [==============================] - 0s 1ms/step - loss: 0.0012 - binary_accuracy: 1.0000 Epoch 15/20 16/16 [==============================] - 0s 765us/step - loss: 0.0014 - binary_accuracy: 1.0000 Epoch 16/20 16/16 [==============================] - 0s 906us/step - loss: 3.9230e-04 - binary_accuracy: 1.0000 Epoch 17/20 16/16 [==============================] - 0s 1ms/step - loss: 0.0022 - binary_accuracy: 1.0000 Epoch 18/20 16/16 [==============================] - 0s 1ms/step - loss: 2.2149e-04 - binary_accuracy: 1.0000 Epoch 19/20 16/16 [==============================] - 0s 2ms/step - loss: 1.7345e-04 - binary_accuracy: 1.0000 Epoch 20/20 16/16 [==============================] - 0s 1ms/step - loss: 7.7950e-05 - binary_accuracy: 1.0000 [[1.] [1.] [0.] [1.] [1.] [1.] [0.] [1.] [0.] [1.] [1.] [1.] [0.] [0.] [1.] [0.]] </code></pre> <p>The above model is easy to train since the problem is not a complex problem. You can see that the accuracy goes to 100% very quickly. Let's try to make predictions on <strong>unseen data</strong> with this new model -</p> <pre><code>np.round(model.predict([[5,1], #&lt;- Is 5 &lt; 1 [5,500], #&lt;- Is 5 &lt; 500 [5,6]])) #&lt;- Is 5 &lt; 6 array([[0.], #&lt;- No [1.], #&lt;- Yes [1.]], dtype=float32) #&lt;- Yes </code></pre>
python|tensorflow|machine-learning|keras|deep-learning
2
8,767
53,307,748
Extract Grind-Anchors from Object-Detection API Model
<p>I'm currently trying to get my SSDLite network, which I trained with the Tensorflow Object-Detection API, working with iOS.</p> <p>So I'm using the open source code of <a href="https://github.com/vonholst/SSDMobileNet_CoreML" rel="nofollow noreferrer">SSDMobileNet_CoreML</a>.</p> <p>The graph already works with some limitations. For running on iOS I had to extract the FeatureExtractor from my graph and was unable to keep the Preprocessor, Postprocessor and MultipleGridAnchorBox, same as they did in <a href="https://github.com/vonholst/SSDMobileNet_CoreML" rel="nofollow noreferrer">SSDMobileNet_CoreML</a>.</p> <p>Here you can see the <a href="https://github.com/vonholst/SSDMobileNet_CoreML/blob/master/SSDMobileNet/SSDMobileNet/Anchors.swift" rel="nofollow noreferrer">Anchors</a> they have used.</p> <p>Because my anchors seem to be a little different, I tried to understand how they got this array.</p> <p>I then found an <a href="https://github.com/tf-coreml/tf-coreml/issues/107#issuecomment-359675509" rel="nofollow noreferrer">Issue</a> on GitHub with an explanation, where the user who created the <a href="https://github.com/vonholst/SSDMobileNet_CoreML/blob/master/SSDMobileNet/SSDMobileNet/Anchors.swift" rel="nofollow noreferrer">Anchors</a> explains how he got them.</p> <p>He says:</p> <blockquote> <p>I just exported them out of the Tensorflow Graph from the import/MultipleGridAnchorGenerator/Identity tensor</p> </blockquote> <p>I already found the matching tensor in my graph, but I don't know how to export it and retrieve the correct anchor encoding. Can somebody explain this to me?</p>
<p>I already figured it out. A little below the quoted comment there was a link to a <a href="https://gist.github.com/vincentchu/5f2f669aeb6df9864ec6c43252261269" rel="nofollow noreferrer">Python Notebook</a> which explains everything in detail.</p>
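<p>For readers who cannot open the notebook, the general technique is just evaluating the named tensor from the graph. A heavily hedged TF1-style sketch follows; the file name, the <code>image_tensor</code> input name and whether a dummy image needs to be fed at all depend on how the graph was exported, so treat all of them as assumptions:</p> <pre><code>import numpy as np
import tensorflow as tf  # TF 1.x style API

with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # assumed file name
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='import')
    anchors_t = graph.get_tensor_by_name('import/MultipleGridAnchorGenerator/Identity:0')
    with tf.Session(graph=graph) as sess:
        feed = {}
        try:
            # if the anchor tensor depends on the input, feed a dummy image
            image_t = graph.get_tensor_by_name('import/image_tensor:0')
            feed[image_t] = np.zeros((1, 300, 300, 3), dtype=np.uint8)
        except KeyError:
            pass
        anchors = sess.run(anchors_t, feed_dict=feed)

print(anchors.shape)
np.savetxt('anchors.csv', anchors, delimiter=',')
</code></pre>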
tensorflow|tensor|coreml|object-detection-api
1
8,768
52,987,904
How to apply a pandas function based on the value of another column?
<p>I have a pandas dataframe with two columns, one with some string values and another one with empty dicts:</p> <pre><code>ColA ColB True {} False {} True {} True {} False {} False {} True {} </code></pre> <p>I have a function that updates a dict with some other values:</p> <pre><code>def update_dict(a): return a.update({"VAL":["yes"]}) </code></pre> <p>How can I apply the above function to all the ColB cells that have "False" strings next to them in their ColA?:</p> <pre><code>ColA ColB True {} False {"VAL":["yes"]} True {} True {} False {"VAL":["yes"]} False {"VAL":["yes"]} True {} </code></pre> <p>I know that in pandas you can use the apply function or applymap. However, I do not know how can I do it based on a previous column value.</p>
<p>It is possible, but working with <code>dict</code>s as column values is not recommended, because you lose all vectorized functionality:</p> <pre><code>def update_dict(a): a.update({"VAL":["yes"]}) return a df['ColB'] = [update_dict(j) if i == 'False' else j for i, j in zip(df['ColA'], df['ColB'])] print (df) ColA ColB 0 True {} 1 False {'VAL': ['yes']} 2 True {} 3 True {} 4 False {'VAL': ['yes']} 5 False {'VAL': ['yes']} 6 True {} </code></pre>
python|python-3.x|pandas
0
8,769
52,905,459
Import and reshape MNIST data, numpy
<p>I want to reshape the MNIST dataset from shape (70000, 784) to (70000, 28, 28). I tried the following code, but it gives a TypeError:</p> <p>TypeError: only integer scalar arrays can be converted to a scalar index</p> <pre><code>df = pd.read_csv('images.csv', sep=',', header=None) x_data = np.array(df) x_data = x_data.reshape(x_data[0], 28, 28) </code></pre> <p>This works, but it is slow:</p> <pre><code>data = np.array(df) x_data = [] for d in data: x_data.append(d.reshape(28,28)) x_data = np.array(x_data) </code></pre> <p>How should this be done with numpy.reshape() and without looping? Many thanks!</p>
<p>I think the problem with the second one is that, because you are using a for loop, it can take more time. So I would suggest you try this:</p> <pre><code>import tensorflow as tf #load the data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) #considering only first 2 data points img = mnist.train.images[:2] x = tf.reshape(img, shape=[-1, 28, 28, 1]) # -1 lets TensorFlow infer the batch dimension from the 28*28*1 layout </code></pre> <p>Ideally I got the shape of x as (2, 28, 28, 1). Hope this helps!!</p>
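<p>It is also worth pointing out (my reading of the traceback, not part of the original answer) that the TypeError in the question comes from passing <code>x_data[0]</code>, which is the whole first row, instead of <code>x_data.shape[0]</code> as the new dimension. A pure NumPy fix without any loop would be:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.read_csv('images.csv', sep=',', header=None)
x_data = np.array(df)

# use the row count (or -1 to let NumPy infer it) rather than the first row itself
x_data = x_data.reshape(-1, 28, 28)   # equivalently: x_data.reshape(x_data.shape[0], 28, 28)
print(x_data.shape)                   # (70000, 28, 28)
</code></pre>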
python|numpy|machine-learning|computer-vision
2
8,770
65,798,822
Python - Group by time periods with tolerance
<p>Three columns exist in this data set: ID (unique employee identification), WorkComplete (indicating when all work has been completed), and DateDiff (number of days from their start date). I am looking to group the DaysDiff column based on certain time periods with an added layer of tolerance or leniency. For my mock data, I am spacing the time periods by 30 days.</p> <pre><code>Group 0: 0-30 DateDiff (with a 30 day extra window if 'Y' is not found) Group 1: 31-60 DateDiff (with a 30 day extra window if 'Y' is not found) Group 2: 61-90 DateDiff (with a 30 day extra window if 'Y' is not found) </code></pre> <p>I was able to create very basic code and assign the groupings, but I am having trouble with the extra 30 day window. For example, if an employee completed their work (Y) during the time periods above, then they receive the attributed grouping. For ID 111 below, you can see that the person did not complete their work within the first 30 days, so I am giving them an addition 30 days to complete their work. If they complete their work, then the first instance we see a 'Y', it is grouped in the previous grouping.</p> <pre><code>df = pd.DataFrame({'ID':[111, 111, 111, 111, 111, 111, 112, 112, 112], 'WorkComplete':['N', 'N', 'Y', 'N', 'N', 'N', 'N', 'Y', 'Y'], 'DaysDiff': [0, 29, 45, 46, 47, 88, 1, 12, 89]}) </code></pre> <p>Input</p> <pre><code>ID WorkComplete DaysDiff 111 N 0 111 N 29 111 Y 45 111 N 46 111 N 47 111 N 88 123 N 1 123 Y 12 123 Y 89 </code></pre> <p>Output</p> <pre><code>ID WorkComplete DaysDiff Group 111 N 0 0 111 N 29 0 111 Y 45 0 &lt;---- note here the grouping is 0 to provide extra time 111 N 46 1 &lt;---- back to normal 111 N 47 1 111 N 88 2 123 N 1 0 123 Y 12 0 123 Y 89 2 </code></pre> <pre><code>minQ1 = 0 highQ1 = 30 minQ2 = 31 highQ2 = 60 minQ2 = 61 highQ2 = 90 def Group_df(df): if (minQ1 &lt;= df['DateDiff'] &lt;= highQ1): return '0' elif (minQ1 &lt;= df['DateDiff'] &lt;= highQ1): return '1' elif (minQ2 &lt;= df['DateDiff'] &lt;= highQ2): return '2' df['Group'] = df.apply(Group_df, axis = 1) </code></pre> <p>The trouble I am having is allowing for the additional 30 days if the person did not complete the work. My above attempt is partial at trying to resolve the issue.</p>
<ol> <li>You can use <code>np.select</code> for the primary conditions.</li> <li>Then, use <code>mask</code> for the specific condition you mention. <code>s</code> is the <em>first</em> index location for all <code>Y</code> values <em>per group</em>. I then temporarily <code>assign</code> <code>s</code> as a new column, so that I can check for rows against <code>df.index</code> (the index) to return rows that meet the condition. The second condition is if the group number is <code>1</code> from the previos line of code:</li> </ol> <hr /> <pre><code>df['Group'] = np.select([df['DaysDiff'].between(0,30), df['DaysDiff'].between(31,60), df['DaysDiff'].between(61,90)], [0,1,2]) s = df[df['WorkComplete'] == 'Y'].groupby('ID')['DaysDiff'].transform('idxmin') df['Group'] = df['Group'].mask((df.assign(s=s)['s'].eq(df.index)) &amp; (df['Group'].eq(1)), 0) df Out[1]: ID WorkComplete DaysDiff Group 0 111 N 0 0 1 111 N 29 0 2 111 Y 45 0 3 111 N 46 1 4 111 N 47 1 5 111 N 88 2 6 123 N 1 0 7 123 Y 12 0 8 123 Y 89 2 </code></pre>
python|python-3.x|pandas|datetime|pandas-groupby
1
8,771
65,668,608
Getting an error while training Resnet50 on Imagenet at 14th Epoch
<p>I am training Resnet50 on imagenet using the script provided from PyTorch (with a slight trivial tweak for my purpose). However, I am getting the following error after 14 epochs of training. I have allocated 4 gpus in the server I'm using to run this. Any pointers as to what this error is about would be appreciated. Thanks a lot!</p> <pre><code>Epoch: [14][5000/5005] Time 1.910 (2.018) Data 0.000 (0.191) Loss 2.6954 (2.7783) Total 2.6954 (2.7783) Reg 0.0000 Prec@1 42.969 (40.556) Prec@5 64.844 (65.368) Test: [0/196] Time 86.722 (86.722) Loss 1.9551 (1.9551) Prec@1 51.562 (51.562) Prec@5 81.641 (81.641) Traceback (most recent call last): File &quot;main_group.py&quot;, line 549, in &lt;module&gt; File &quot;main_group.py&quot;, line 256, in main File &quot;main_group.py&quot;, line 466, in validate if args.gpu is not None: File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/dataloader.py&quot;, line 801, in __next__ return self._process_data(data) File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/dataloader.py&quot;, line 846, in _process_data data.reraise() File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/_utils.py&quot;, line 385, in reraise raise self.exc_type(msg) OSError: Caught OSError in DataLoader worker process 11. Original Traceback (most recent call last): File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py&quot;, line 178, in _worker_loop data = fetcher.fetch(index) File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py&quot;, line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py&quot;, line 44, in &lt;listcomp&gt; data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py&quot;, line 138, in __getitem__ sample = self.loader(path) File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py&quot;, line 174, in default_loader return pil_loader(path) File &quot;/home/users/oiler/anaconda3/envs/ml/lib/python3.7/site-packages/torchvision/datasets/folder.py&quot;, line 155, in pil_loader with open(path, 'rb') as f: OSError: [Errno 5] Input/output error: '/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG' </code></pre>
<p>It is difficult to tell what the problem is just by looking at the error you have posted.</p> <p>All we know is that there was an issue reading the file at <code>'/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG'</code>.</p> <p>Try the following:</p> <ol> <li>Confirm the file actually exists.</li> <li>Confirm that it is in fact a valid JPEG and not corrupted (by viewing it).</li> <li>Confirm that you can open it with Python and also load it manually with PIL (see the sketch below).</li> <li>If none of that works, try deleting the file. Do you get the same error on another file in the folder?</li> </ol>
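<p>For point 3, a small sketch (the path is taken from the traceback) that forces a full read and decode of the image with PIL:</p> <pre><code>from PIL import Image

path = '/data/users2/oiler/github/imagenet-data/val/n02102973/ILSVRC2012_val_00009130.JPEG'
try:
    with Image.open(path) as img:
        img.load()                     # decode the whole file, not just the header
    print('file opened and decoded fine')
except OSError as e:
    print('problem reading the file:', e)
</code></pre> <p>Errno 5 is a low-level I/O error, so if the file itself looks fine it may also be worth checking the disk or network filesystem the dataset lives on.</p>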
python|pytorch|imagenet|pytorch-dataloader
1
8,772
63,377,361
KeyError with a poisson process using pandas
<p>I am trying to create a function which will simulate a Poisson process for a changeable dt and total time, and have the following:</p> <pre><code>def compound_poisson(lamda,mu,sigma,dt,T): points = pd.Series(0) out = pd.Series(0) inds = simple_poisson(lamda,dt,T) for ind in inds.index: if inds[ind+dt] &gt; inds[ind]: points[ind+dt] = np.random.normal(mu,sigma) else: points[ind+dt] = 0 out = out.append(np.cumsum(points),ignore_index=True) out.index = np.linspace(0,T,int(T/dt + 1)) return out </code></pre> <p>However, I receive a &quot;KeyError: 0.010000000000000002&quot;, which should not be in the index at all. Is this a result of being lax with float objects?</p>
<p>In short, yes, it's a floating point error. It's quite hard to know how you got there, but probably something like this:</p> <pre><code>&gt;&gt;&gt; 0.1 * 0.1 0.010000000000000002 </code></pre> <p>Maybe use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.round.html" rel="nofollow noreferrer">round</a>?</p>
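<p>A small sketch of the idea, assuming six decimals is precise enough for your <code>dt</code>: round every time key to a fixed precision at the moment it is created and looked up, for example</p> <pre><code>key = round(ind + dt, 6)          # same rounding wherever a time key is built
if inds[key] &gt; inds[ind]:
    points[key] = np.random.normal(mu, sigma)
</code></pre> <p>The same rounding would have to be applied wherever <code>simple_poisson</code> builds its index, so that both sides agree on the keys.</p>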
python|pandas|function|numpy|stochastic-process
0
8,773
53,559,771
Count number of rows between time intervals
<p>I have a pandas DataFrame with start and end times (datetime.time) for a list of processes:</p> <pre><code>from datetime import time import pandas as pd df = pd.DataFrame(columns=['start', 'end'], index=pd.Index(['proc01', 'proc02'], name='Processes'), data=[ [time(10), time(14)], [time(12), time(16)] ]) </code></pre> <p>I want to transform this info into a histogram that counts how many processes are running:</p> <pre><code>&gt;&gt;&gt; bins = pd.date_range('08:00', '22:00', freq='1H').time &gt;&gt;&gt; count_processes(df, bins) array([0, 0, 1, 1, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0]) </code></pre> <p>I have an implementation but it's kinda slow for big dataframes (around 2~3 million rows), I would like to know if there's a way to vectorize it or at least make it more fast:</p> <pre><code>def count_processes(df, bins): result = np.zeros_like(bins, dtype=int) for _, row in df.iterrows(): aux = (row['start'] &lt;= bins) &amp; (bins &lt; row['end']) result += aux.astype(int) return result </code></pre>
<p>Iterating over a DataFrame is usually a sign you're not using <code>pandas</code> optimally. You could instead subtract the processes that have ended from the processes that have started, like this:</p> <pre><code>res = [] for b in bins: s = (df['start'] &lt; b).sum() e = (df['end'] &lt; b).sum() res.append(s-e) # [0, 0, 0, 1, 1, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0] </code></pre>
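<p>If the loop over <code>bins</code> itself ever becomes a bottleneck, the same subtraction can be done fully vectorised with <code>np.searchsorted</code> (a sketch, assuming <code>start</code>/<code>end</code> hold comparable <code>datetime.time</code> objects as in the question):</p> <pre><code>import numpy as np

starts = np.sort(df['start'].values)
ends = np.sort(df['end'].values)

# count of starts strictly before each bin minus count of ends strictly before it
res = np.searchsorted(starts, bins) - np.searchsorted(ends, bins)
</code></pre>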
python|pandas
3
8,774
53,659,787
Python - For Loop ( two parallel iterations)
<p>I am working on the estimation module of a prototype. The purpose is to send proper seasonality variation parameters to the forecaster module.</p> <p>Initially, in the booking curve estimation, we were using a formula for day of year seaonality - trigonometric function with 5 orders (fixed). It goes like this:</p> <pre><code>doy_seasonality = np.exp(z[0]*np.sin(2*np.pi*doy/365.)+z[1]*np.cos(2*np.pi*doy/365.) +z[2]*np.sin(4*np.pi*doy/365.)+ z[3]*np.cos(4*np.pi*doy/365.) +z[4]*np.sin(6*np.pi*doy/365.)+ z[5]*np.cos(6*np.pi*doy/365.) +z[6]*np.sin(8*np.pi*doy/365.)+ z[7]*np.cos(8*np.pi*doy/365.) + z[8]*np.sin(10*np.*pi*doy/365.)+ z[9]*np.cos(10*np.pi*doy/365.)) </code></pre> <p>i.e. we had 5 fixed orders <code>[2, 4, 6, 8, 10]</code></p> <p>Now, we have found a better way to get the orders through Fast Fourier Transform. Depending on the estimation key we use as input in the simulation, the order array could have different number of values.</p> <p>For instance, let's say the order array is as follows</p> <pre><code>orders = [2, 6, 10, 24] </code></pre> <p>Corresponding to every order value, there would be two values of z (it's a trigonometric parameter - one value for SIN part and one value for COS part). For example, it could look like this</p> <pre><code>z = [0.08 0.11 0.25 0.01 0.66 0.19 0.45 0.07] </code></pre> <p>To achieve this, I would need to define a for-loop with two parallel iterations:</p> <pre><code>z[0] to z[2*length(orders)-1] i.e. `z[0] to z[7]` </code></pre> <p>and <code>orders[0] to orders[length(orders)-1] i.e. orders[0] to orders[3]</code></p> <p>ultimately, the formula should compute this:</p> <pre><code>doy_seasonality = np.exp(z[0]*np.sin(orders[0]*np.pi*doy/365.)+z[1]*np.cos(orders[0]*np.pi*doy/365.) +z[2]*np.sin(orders[1]*np.pi*doy/365.)+ z[3]*np.cos(orders[1]*np.pi*doy/365.) +z[4]*np.sin(orders[2]*np.pi*doy/365.)+ z[5]*np.cos(orders[2]*np.pi*doy/365.) +z[6]*np.sin(orders[3]*np.pi*doy/365.)+ z[7]*np.cos(orders[3]*np.pi*doy/365.)) </code></pre> <p>I am not able to design the appropriate syntax for this.</p> <p>doy (day of year) is a vector taking equally spaced values : 1, 2, 3... 364, 365 </p>
<pre><code>orders = np.array([2, 6, 10, 24]) z = np.array([0.08, 0.11, 0.25, 0.01, 0.66, 0.19, 0.45, 0.07]) doy = np.arange(365) + 1 s = 0 for k in range(len(orders)): s += z[2 * k ] * np.sin(orders[k] * np.pi * doy / 365.) s += z[2 * k + 1] * np.cos(orders[k] * np.pi * doy / 365.) s = np.exp(s) </code></pre>
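<p>If you prefer to avoid the Python loop entirely, the same sum can be written with array operations (a sketch using the arrays defined above; <code>z[0::2]</code> are the sine coefficients and <code>z[1::2]</code> the cosine coefficients):</p> <pre><code>angles = np.pi * np.outer(doy, orders) / 365.0        # shape (len(doy), len(orders))
s = np.exp(np.sin(angles) @ z[0::2] + np.cos(angles) @ z[1::2])
</code></pre>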
python-3.x|numpy|for-loop
1
8,775
71,835,768
How to groupby without reset_index() based on multtindex?
<p><strong>Data structure</strong></p> <pre><code> col1 col2 A 2021-01-01 A 2022-01-01 B 2021-01-01 B 2022-01-01 </code></pre> <p>This is a dataframe with a MultiIndex (<code>ts_code</code>, <code>date</code>).</p> <p><strong>Goal</strong></p> <p>I want to get the min date for each <code>ts_code</code>, so currently I have to run <code>df.reset_index().groupby('ts_code')['date'].min()</code>. Is there any way to achieve this without resetting the index?</p>
<p>You can either rename your index levels so that your code works, or you can pass <code>level=0</code> into <code>groupby()</code>.</p> <p>Option 1:</p> <pre><code>df.rename_axis(['ts_code','date']).groupby('ts_code') </code></pre> <p>Option 2:</p> <pre><code>df.groupby(level=0) </code></pre>
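<p>To then get the earliest date per <code>ts_code</code> without resetting the whole index, one possibility (assuming the second index level holds the dates) is to pull it straight from the index inside the groupby:</p> <pre><code>out = df.groupby(level=0).apply(lambda g: g.index.get_level_values(1).min())
</code></pre>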
pandas
1
8,776
71,931,961
Calculating mean values yearly in a dataframe with a new value daily
<p>I have this dataframe, which contains average temps for all the summer days:</p> <pre><code>DATE TAVG 0 1955-06-01 NaN 1 1955-06-02 NaN 2 1955-06-03 NaN 3 1955-06-04 NaN 4 1955-06-05 NaN ... ... ... 5805 2020-08-27 2.067854 5806 2020-08-28 3.267854 5807 2020-08-29 3.067854 5808 2020-08-30 1.567854 5809 2020-08-31 4.167854 </code></pre> <p>And I want to calculate the mean value yearly, so I can plot it, how could I do that?</p>
<p>If I understand correctly, can you try this?</p> <pre><code>df['DATE']=pd.to_datetime(df['DATE']) df.groupby(df['DATE'].dt.year)['TAVG'].mean() </code></pre>
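<p>And since the goal is a plot, the result can be plotted directly, for example:</p> <pre><code>import matplotlib.pyplot as plt

df['DATE'] = pd.to_datetime(df['DATE'])
yearly = df.groupby(df['DATE'].dt.year)['TAVG'].mean()

yearly.plot()                 # line plot of the yearly mean temperature
plt.xlabel('Year')
plt.ylabel('Mean TAVG')
plt.show()
</code></pre>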
python|pandas|dataframe|date|mean
0
8,777
71,943,408
Why can't I import functions in python files in another folder?
<pre><code>from keras.models import load_model import cv2 import numpy as np from PIL import Image from MaskDetection.MaskCropper import cropEyeLineFromMasked IMAGE_SIZE = (224, 224) actors_dict = {0: 'Andrew Garfield', 1: 'Angelina Jolie', 2: 'Anthony Hopkins', 3: 'Ben Affleck', 4: 'Beyonce Knowles'} def actor_recognition(img, original_img): model = load_model('D:/project/FMDR/MaskDetection/models/face_mask_detection.hdf5') # image = cv2.imread('./Images4Test/') # imageRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) image_resized = cv2.resize(img, IMAGE_SIZE) # image_np = image_resized / 255.0 # Normalized to 0 ~ 1 image_exp = np.expand_dims(image_resized, axis=0) # image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # result = vgg16.predict_classes(image_exp) result = model.predict(image_exp) key = np.argmax(result, axis=1) actor = actors_dict.get(key[0]) cv2.putText(original_img, actor, (20, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2) Image.fromarray(original_img).show() # print(result) PATH_IMG = 'D:\project\FMDR\Datasets\SysAgWmaskCropped\Test\Angelina Jolie\1.GettyImages-1169983509L.jpg' if __name__ == '__main__': img = cv2.imread(PATH_IMG) # Image.fromarray(img).show() img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Image.fromarray(img).show() img_crop = cropEyeLineFromMasked(img) Image.fromarray(img_crop).show() actor_recognition(img_crop, img) </code></pre> <p>this error 2022-04-20 23:41:25.568495: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-04-20 23:41:25.568712: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File &quot;D:\project\FMDR\MaskDetection\actors_recognition.py&quot;, line 5, in from MaskDetection.MaskCropper import cropEyeLineFromMasked ModuleNotFoundError: No module named 'MaskDetection'</p> <p>i have function cropEyeLineFromMasked in file MaskCropper.py which it is in the folder MaskDetection</p>
<p>When Python imports something, it searches a predefined list of locations from which modules can be imported. If your module is not present in any of those directories, the import fails. You can see the list by typing:</p> <pre><code>import sys print(sys.path) </code></pre> <p>Because the directory (folder) containing your function is not in <code>sys.path</code>, Python can't find the function.</p> <p>You first have to append or insert that directory into <code>sys.path</code> with <code>sys.path.insert(0, 'path')</code> or <code>sys.path.append('path')</code>.</p> <p>Once you do this, the directory is added to <code>sys.path</code> and you can import the function from the module in that directory.</p> <p>Hope it helps.</p>
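<p>For example (the exact path is an assumption here; it should be the folder that contains the <code>MaskDetection</code> package, which from the paths in your script looks like <code>D:\project\FMDR</code>):</p> <pre><code>import sys
sys.path.insert(0, r'D:\project\FMDR')   # assumed project root that contains MaskDetection/

from MaskDetection.MaskCropper import cropEyeLineFromMasked
</code></pre>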
python|tensorflow
0
8,778
71,846,365
Tensorflow with custom loss containing multiple inputs - Graph disconnected error
<p>I have a CNN output a scalar, this output is concatenated with the output of an MLP and then fed to another dense layer. I get a Graph Disconnected error</p> <p>Please advise as to how to fix this. Thanks in advance.</p> <pre><code>from tensorflow.keras.models import Model from tensorflow.keras.layers import Conv2D, Dense, Flatten, concatenate, Input import tensorflow as tf tf.keras.backend.clear_session() #----custom function def custom_loss(ytrue, ypred): loss = tf.math.log(1. + ytrue) - tf.math.log(1. + ypred) loss = tf.math.square(loss) loss = tf.math.reduce_mean(loss) return loss #------------------ cnnin = Input(shape=(10, 10, 1)) x = Conv2D(8, 4)(cnnin) x = Conv2D(16, 4)(x) x = Conv2D(32, 2)(x) x = Conv2D(64, 2)(x) x = Flatten()(x) x = Dense(4)(x) x = Dense(4, activation=&quot;relu&quot;)(x) cnnout = Dense(1, activation=&quot;linear&quot;)(x) cnnmodel= Model(cnnin, cnnout, name=&quot;cnn_model&quot;) yt = Input(shape=(2, )) #---dummy input #---mlp start mlpin = Input(shape=(2, ), name=&quot;mlp_input&quot;) z = Dense(4, activation=&quot;sigmoid&quot;)(mlpin) z = Dense(4, activation = &quot;softmax&quot;)(z) mlpout = Dense(1, activation=&quot;linear&quot;)(z) mlpmodel = Model(mlpin, mlpout, name=&quot;mlp_model&quot;) #----concatenate combinedout = concatenate([mlpmodel.output, cnnmodel.output ]) x = Dense(4, activation=&quot;sigmoid&quot;)(combinedout) finalout = Dense(2, activation=&quot;linear&quot;)(x) model = Model( [mlpin, cnnin], finalout) model.add_loss(custom_loss(yt, finalout)) model.compile(optimizer='adam', learning_rate=1e-3, initialization=&quot;glorotnorm&quot;, loss=None) </code></pre> <p>Graph disconnected: cannot obtain value for tensor Tensor(&quot;input_8:0&quot;, shape=(None, 2), dtype=float32) at layer &quot;input_8&quot;. The following previous layers were accessed without issue: ['input_7', 'conv2d_12', 'conv2d_13', 'conv2d_14', 'conv2d_15', 'flatten_3', 'mlp_input', 'dense_24', 'dense_27', 'dense_25', 'dense_28', 'dense_29', 'dense_26', 'concatenate_3', 'dense_30', 'dense_31']</p>
<p>You can customize what happens in Model.fit based on <a href="https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit</a></p> <ul> <li>We create a new class that subclasses keras.Model.</li> <li>We just override the method train_step(self, data).</li> <li>We return a dictionary mapping metric names (including the loss) to their current value.</li> </ul> <p>For example with your models:</p> <pre><code>loss_tracker = tf.keras.metrics.Mean(name = &quot;custom_loss&quot;) class TestModel(tf.keras.Model): def __init__(self, model1): super(TestModel, self).__init__() self.model1 = model1 def compile(self, optimizer): super(TestModel, self).compile() self.optimizer = optimizer def train_step(self, data): x, y = data with tf.GradientTape() as tape: ypred = self.model1([x], training = True) loss_value = custom_loss(y, ypred) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss_value, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) loss_tracker.update_state(loss_value) return {&quot;loss&quot;: loss_tracker.result()} import numpy as np x = np.random.rand(6, 10,10,1) x2 = np.random.rand(6,2) y = tf.ones((6,2)) model = Model( [mlpin, cnnin], finalout) trainable_model = TestModel(model) trainable_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate = 0.0001)) trainable_model.fit(x=(x2, x), y = y, epochs=5) </code></pre> <p>Gives the following output:</p> <pre><code>Epoch 1/5 1/1 [==============================] - 0s 382ms/step - loss: 0.2641 Epoch 2/5 1/1 [==============================] - 0s 4ms/step - loss: 0.2640 Epoch 3/5 1/1 [==============================] - 0s 6ms/step - loss: 0.2638 Epoch 4/5 1/1 [==============================] - 0s 7ms/step - loss: 0.2635 Epoch 5/5 1/1 [==============================] - 0s 6ms/step - loss: 0.2632 &lt;tensorflow.python.keras.callbacks.History at 0x14c69572688&gt; </code></pre>
tensorflow|keras|tensorflow2.0
1
8,779
56,721,498
How can I use the Keras.applications' ResNeXt in TensorFlow's eager execution?
<p>I am trying to get ResNet101 or ResNeXt, which are only available in Keras' repository for some reason, from <a href="https://keras.io/applications/" rel="nofollow noreferrer">Keras applications</a> in TensorFlow 1.10:</p> <pre><code>import tensorflow as tf from keras import applications tf.enable_eager_execution() resnext = applications.resnext.ResNeXt101(include_top=False, weights='imagenet', input_shape=(SCALED_HEIGHT, SCALED_WIDTH, 3), pooling=None) </code></pre> <p>However, this results in:</p> <pre><code>Traceback (most recent call last): File "myscript.py", line 519, in get_fpn resnet = applications.resnet50.ResNet50(include_top=False, weights='imagenet', input_shape=(SCALED_HEIGHT, SCALED_WIDTH, 3), pooling=None) File "Keras-2.2.4-py3.5.egg/keras/applications/__init__.py", line 28, in wrapper return base_fun(*args, **kwargs) File "Keras-2.2.4-py3.5.egg/keras/applications/resnet50.py", line 11, in ResNet50 return resnet50.ResNet50(*args, **kwargs) File "Keras_Applications-1.0.8-py3.5.egg/keras_applications/resnet50.py", line 214, in ResNet50 img_input = layers.Input(shape=input_shape) File "Keras-2.2.4-py3.5.egg/keras/engine/input_layer.py", line 178, in Input input_tensor=tensor) File "Keras-2.2.4-py3.5.egg/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "Keras-2.2.4-py3.5.egg/keras/engine/input_layer.py", line 87, in __init__ name=self.name) File "Keras-2.2.4-py3.5.egg/keras/backend/tensorflow_backend.py", line 529, in placeholder x = tf.placeholder(dtype, shape=shape, name=name) File "tensorflow/python/ops/array_ops.py", line 1732, in placeholder raise RuntimeError("tf.placeholder() is not compatible with " RuntimeError: tf.placeholder() is not compatible with eager execution. </code></pre> <p>I installed Keras from its GitHub master branch, since the pip installs of Keras and TensorFlow's Keras API for some strange reason do not include ResNet101, ResNetv2, ResNeXt, etc. Does anyone know how I can run such models (preferably ResNeXt) in TensorFlow's eager execution?</p>
<p>As the error indicates, tf.placeholder() which is used as placeholder for feeding data to a tf Session using feed_dict, is incompatible with eager mode. </p> <p>This link nicely explains it with an example: <a href="https://github.com/tensorflow/tensorflow/issues/18165#issuecomment-377841925" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/18165#issuecomment-377841925</a></p> <p>You can use models from tf.keras.applications for this purpose. I've tried with TF2.0 Beta release.</p> <p><a href="https://www.tensorflow.org/beta/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets" rel="nofollow noreferrer">https://www.tensorflow.org/beta/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets</a></p> <pre><code>import tensorflow as tf resnext = tf.keras.applications.ResNeXt50(weights=None) print(tf.executing_eagerly()) </code></pre> <p>True</p> <p>ResNeXt models are not available(I had to make some changes like copying resnext.py from keras/applications to tensorflow/python/keras/applications and changes to __init__.py etc.) but you can try with the existing models like ResNet50, if they work then you can try porting ResNeXt. </p>
python|tensorflow|keras|eager-execution
3
8,780
56,774,065
Matrix of percentages of values to the sum of values grouped by a column's criteria
<p>I have a matrix of values and need to get their shares to the sum of a respective group.</p> <p>Example:</p> <p><a href="https://i.stack.imgur.com/uZ1oj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uZ1oj.png" alt="enter image description here"></a></p> <p>Need to get - a matrix of percentages of each id within a class to a total of class/region <a href="https://i.stack.imgur.com/iOOGP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOOGP.png" alt="enter image description here"></a></p> <p>I was trying the code:</p> <pre><code>import pandas as pd df = pd.DataFrame({'id':['id_1', 'id_2','id_3','id_4','id_5','id_6','id_7','id_8','id_9'], 'region':['reg_1','reg_1','reg_1','reg_2','reg_2','reg_2','reg_3','reg_3','reg_3'], 'class_1':[5,8,2,5,5,4,6,5,3], 'class_2':[6,8,3,7,8,5,8,6,4], 'class_3':[7,8,4,4,3,6,7,9,8,]}) cols=list(df.iloc[:,2:].columns) weights=df.iloc[:,2:].div(df.groupby(['region'])[cols].sum()) </code></pre> <p>It doesn't work..</p> <p>I took a matrix of sums in region/class</p> <pre><code>sum=df.set_index('id').groupby(['region']).sum() </code></pre> <p>but I do not know how to divide matrices of different sizes then..</p> <p>Please, can anybody help? Thank you</p>
<p>Create <code>MultiIndex</code>, so possible use parameter <code>level</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.div.html" rel="nofollow noreferrer"><code>DataFrame.div</code></a>:</p> <pre><code>cols = df.columns[2:] df1 = df.groupby(['region'])[cols].sum() #another solution #df1 = df.iloc[:,2:].groupby(df['region']).sum() weights=df.set_index(['id','region']).div(df1, level='region').reset_index() print (weights) id region class_1 class_2 class_3 0 id_1 reg_1 0.333333 0.352941 0.368421 1 id_2 reg_1 0.533333 0.470588 0.421053 2 id_3 reg_1 0.133333 0.176471 0.210526 3 id_4 reg_2 0.357143 0.350000 0.307692 4 id_5 reg_2 0.357143 0.400000 0.230769 5 id_6 reg_2 0.285714 0.250000 0.461538 6 id_7 reg_3 0.428571 0.444444 0.291667 7 id_8 reg_3 0.357143 0.333333 0.375000 8 id_9 reg_3 0.214286 0.222222 0.333333 </code></pre> <p>Or create <code>Multiindex</code> first, so possible use <code>sum</code> with <code>level</code> parameter too:</p> <pre><code>df1=df.set_index(['id','region']) weights = df1.div(df1.sum(level='region'), level='region').reset_index() print (weights) id region class_1 class_2 class_3 0 id_1 reg_1 0.333333 0.352941 0.368421 1 id_2 reg_1 0.533333 0.470588 0.421053 2 id_3 reg_1 0.133333 0.176471 0.210526 3 id_4 reg_2 0.357143 0.350000 0.307692 4 id_5 reg_2 0.357143 0.400000 0.230769 5 id_6 reg_2 0.285714 0.250000 0.461538 6 id_7 reg_3 0.428571 0.444444 0.291667 7 id_8 reg_3 0.357143 0.333333 0.375000 8 id_9 reg_3 0.214286 0.222222 0.333333 </code></pre> <p>Another idea is filter columns by positions, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for <code>DataFrame</code> with same size like original, so possible divide and assign back:</p> <pre><code>cols = df.columns[2:] df[cols] = df[cols].div(df.groupby('region')[cols].transform('sum')) print (df) id region class_1 class_2 class_3 0 id_1 reg_1 0.333333 0.352941 0.368421 1 id_2 reg_1 0.533333 0.470588 0.421053 2 id_3 reg_1 0.133333 0.176471 0.210526 3 id_4 reg_2 0.357143 0.350000 0.307692 4 id_5 reg_2 0.357143 0.400000 0.230769 5 id_6 reg_2 0.285714 0.250000 0.461538 6 id_7 reg_3 0.428571 0.444444 0.291667 7 id_8 reg_3 0.357143 0.333333 0.375000 8 id_9 reg_3 0.214286 0.222222 0.333333 </code></pre> <p>EDIT:</p> <p><code>Performance</code> for @Brendam Cox:</p> <pre><code>np.random.seed(123) N = 1000 L = list('abcdefghijklmno') df1 = pd.DataFrame({'id': np.arange(N*len(L)), 'region': np.repeat(L, N)}) df = df1.join(pd.DataFrame(np.random.randint(100, size=(N*len(L), 5))).add_prefix('class_')) print (df) </code></pre> <hr> <pre><code>In [349]: %%timeit ...: cols = df.columns[2:] ...: df1 = df.groupby(['region'])[cols].sum() ...: #another solution ...: #df1 = df.iloc[:,2:].groupby(df['region']).sum() ...: weights=df.set_index(['id','region']).div(df1, level='region').reset_index() ...: ...: 13.9 ms ± 227 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [350]: %%timeit ...: df1=df.set_index(['id','region']) ...: weights = df1.div(df1.sum(level='region'), level='region').reset_index() ...: 13.8 ms ± 595 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [351]: %%timeit ...: cols = df.columns[2:] ...: df[cols] = df[cols].div(df.groupby('region')[cols].transform('sum')) ...: 8.99 ms ± 602 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) In [352]: %%timeit ...: (df.set_index(['id','region']) ...: .groupby('region') ...: .apply(lambda x: x/x.sum() ...: ) ...: ) ...: 49.5 ms ± 428 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre>
python|pandas
3
8,781
66,830,486
Mapping data from Excel sheet1 to sheet2 creates duplicate column in python
<p>I am trying to map values from sheet1 to sheet2 using a pandas dataframe, with the column names listed in sheet2, but when I write data into sheet2 it leaves the first set of columns null and appends the data into duplicate columns.</p> <p>It also happens when I try to print dataframe2.</p> <pre><code>df1 = pd.read_excel(open(r'C:\Users\Desktop\notepad.xlsx', 'rb'), sheet_name='sheet1') df2 = pd.read_excel(open(r'C:\Users\Desktop\notepad.xlsx', 'rb'), sheet_name='sheet2') df2['col1'] = df1['col3'] df2['col2'] = df1['col5'] df2['col3'] = df1['col8'] df2['col4'] = df1['col6'] </code></pre> <p>Printing df1 gives the correct data, but when printing df2 the columns are duplicated, with the first set of columns holding null values.</p> <pre><code>print(df2) Output col1 col2 col3 col4 col1 col2 col3 col4 NaN NaN NaN NaN Ind Aus Nz Aus NaN NaN NaN NaN Sri Afg Chi Ber NaN NaN NaN NaN Usa Uk Un Ind </code></pre> <p>How can I avoid duplicate columns with null values?</p>
<p>To rule out a potential issue with non-standard characters or anything else odd in the column names, have you tried:</p> <pre><code>df2[0] = df1['col3'] df2[1] = df1['col5'] df2[2] = df1['col8'] df2[3] = df1['col6'] </code></pre> <p>Also, what is the output of print(df1), please?</p>
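<p>Duplicated-looking columns like this often come from stray whitespace or case differences in the header row of sheet2, so it may also help to normalise the column names right after reading (a small sketch):</p> <pre><code>df2.columns = df2.columns.str.strip()   # drop stray spaces around header names
print(df2.columns.tolist())             # inspect the real column names
</code></pre>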
python|pandas
0
8,782
47,256,926
calculate the mean of one row according to its label
<p>I want to calculate the mean of the values in one row according to its label:</p> <pre><code>A = [1,2,3,4,5,6,7,8,9,10] B = [0,0,0,0,0,1,1,1,1, 1] Result = pd.DataFrame(data=[A, B]) </code></pre> <p>I want the output to be: 0->3; 1-> 7.8</p> <p>pandas has the groupby function, but I don't know how to implement this. Thanks</p>
<p>This is a simple <code>groupby</code> problem ...</p> <pre><code>Result=Result.T Result.groupby(Result[1])[0].mean() Out[372]: 1 0 3 1 8 Name: 0, dtype: int64 </code></pre>
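<p>An alternative that avoids the transpose is to build the frame column-wise from the start, for example:</p> <pre><code>import pandas as pd

A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
B = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

df = pd.DataFrame({'value': A, 'label': B})
df.groupby('label')['value'].mean()
# label
# 0    3
# 1    8
</code></pre>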
pandas|pandas-groupby
1
8,783
47,403,217
python, tensorflow, how to get a tensor shape with half the features
<p>I need the shape of a tensor, except instead of feature_size as the -1 dimension I need feature_size//2</p> <p>The code I'm currently using is</p> <pre><code>_, half_output = tf.split(output,2,axis=-1) half_shape = tf.shape(half_output) </code></pre> <p>This works but it's incredibly inelegant. I don't need an extra copy of half the tensor, I just need that shape. I've tried to do this other ways but nothing besides this bosh solution has worked yet.</p> <p>Anyone know a simple way to do this?</p>
<p>A simple way to get the shape with the last value halved:</p> <pre><code>half_shape = tf.shape(output[..., 1::2]) </code></pre> <p>What it does is simply iterate <code>output</code> in its last dimension with step 2, starting from the second element (index 1).</p> <p>The <code>...</code> doesn't touch other dimensions. As a result, you will have <code>output[..., 1::2]</code> with the same dimensions as <code>output</code>, except for the last one, which will be sampled like the following example, resulting in half the original value.</p> <pre><code>&gt;&gt;&gt; a = np.random.rand(5,5) &gt;&gt;&gt; a array([[ 0.21553665, 0.62008421, 0.67069869, 0.74136913, 0.97809012], [ 0.70765302, 0.14858418, 0.47908281, 0.75706245, 0.70175868], [ 0.13786186, 0.23760233, 0.31895335, 0.69977537, 0.40196103], [ 0.7601455 , 0.09566717, 0.02146819, 0.80189659, 0.41992885], [ 0.88053697, 0.33472285, 0.84303012, 0.10148065, 0.46584882]]) &gt;&gt;&gt; a[..., 1::2] array([[ 0.62008421, 0.74136913], [ 0.14858418, 0.75706245], [ 0.23760233, 0.69977537], [ 0.09566717, 0.80189659], [ 0.33472285, 0.10148065]]) </code></pre> <p>This <code>half_shape</code> prints the following <code>Tensor</code>:</p> <blockquote> <p>Tensor("Shape:0", shape=(3,), dtype=int32)</p> </blockquote> <p>Alternatively you could get the shape of <code>output</code> and create the shape you want manually:</p> <pre><code>s = output.get_shape().as_list() half_shape = tf.TensorShape(s[:-1] + [s[-1] // 2]) </code></pre> <p>This <code>half_shape</code> prints a <code>TensorShape</code> showing the shape halved in the last dimension.</p>
python|tensorflow
1
8,784
47,435,240
Pandas groupby date month and count items within months
<p>I have a dataframe like this:</p> <pre><code>STYLE | INVOICE_DATE2 A | 2017-01-03 B | 2017-01-03 C | 2017-01-03 A | 2017-02-03 A | 2017-01-03 B | 2017-02-03 B | 2017-01-03 </code></pre> <p>I'm trying to group them by month and count itself within month, result must like this:</p> <pre><code>Month | Item | Count 1 | A | 2 | B | 2 | C | 1 2 | A | 1 | B | 1 </code></pre> <p>I have tried this:</p> <pre><code>lastyear_df.groupby([(df['INVOICE_DATE2']).dt.month, df['STYLE']])['STYLE'].count() </code></pre> <p>But it didn't work for me.</p>
<p>Here is a one liner...</p> <pre><code>ans = df.groupby([df.INVOICE_DATE2.apply(lambda x: x.month), 'STYLE']).count() </code></pre> <p>Here is the output</p> <pre><code>In [21]: ans Out[21]: INVOICE_DATE2 INVOICE_DATE2 STYLE 1 A 2 B 2 C 1 2 A 1 B 1 </code></pre> <p>NOTE: That at this point you have a hierarchical index, which you can flatten by using <code>reset_index</code></p> <pre><code>ans = ans.reset_index(1) STYLE INVOICE_DATE2 INVOICE_DATE2 1 A 2 1 B 2 1 C 1 2 A 1 2 B 1 </code></pre> <p>You can now change the column and index names if you like:</p> <pre><code>ans.index.name = 'MONTH' ans.columns = ['ITEM', 'COUNT'] </code></pre>
python|pandas
3
8,785
47,092,185
TensorFlow Word2Vec model running on GPU
<p>In <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/udacity/5_word2vec.ipynb" rel="nofollow noreferrer">this TensorFlow example</a> a training of skip-gram Word2Vec model described. It contains the following code fragment, which explicitly requires CPU device for computations, i.e. <code>tf.device('/cpu:0')</code>:</p> <pre><code>batch_size = 128 embedding_size = 128 # Dimension of the embedding vector. skip_window = 1 # How many words to consider left and right. num_skips = 2 # How many times to reuse an input to generate a label. # We pick a random validation set to sample nearest neighbors. Here we limit the # validation samples to the words that have a low numeric ID, which by # construction are also the most frequent. valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # Only pick dev samples in the head of the distribution. valid_examples = np.array(random.sample(range(valid_window), valid_size)) num_sampled = 64 # Number of negative examples to sample. graph = tf.Graph() with graph.as_default(), tf.device('/cpu:0'): # Input data. train_dataset = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # Variables. embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) softmax_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) # Model. # Look up embeddings for inputs. embed = tf.nn.embedding_lookup(embeddings, train_dataset) # Compute the softmax loss, using a sample of the negative labels each time. loss = tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed, labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size)) # Optimizer. # Note: The optimizer will optimize the softmax_weights AND the embeddings. # This is because the embeddings are defined as a variable quantity and the # optimizer's `minimize` method will by default modify all variable quantities # that contribute to the tensor it is passed. # See docs on `tf.train.Optimizer.minimize()` for more details. optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss) # Compute the similarity between minibatch examples and all embeddings. # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True)) normalized_embeddings = embeddings / norm valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset) similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings)) </code></pre> <p>When trying switch to GPU, the following exception is raised:</p> <blockquote> <p><strong>InvalidArgumentError</strong> (see above for traceback): Cannot assign a device for operation 'Variable_2/Adagrad': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.</p> </blockquote> <p>I wonder what is the reason why the provided graph cannot be computed on GPU? Does it happen due to <code>tf.int32</code> type? Or should I switch to another optimizer? In other words, is there any way to make possible processing Word2Vec model on GPU? 
(Without types casting).</p> <hr> <p><strong>UPDATE</strong></p> <p>Following Akshay Agrawal recommendation, here is an updated fragment of the original code that achieves required result:</p> <pre><code>with graph.as_default(), tf.device('/gpu:0'): # Input data. train_dataset = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) softmax_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) embed = tf.nn.embedding_lookup(embeddings, train_dataset) with tf.device('/cpu:0'): loss = tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed, labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size)) optimizer = tf.train.AdamOptimizer(0.001).minimize(loss) norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True)) normalized_embeddings = embeddings / norm valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset) similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings)) </code></pre>
<p>The error is raised because <code>AdagradOptimizer</code> does not have a GPU kernel for its sparse apply operation; a sparse apply is triggered because differentiating through the embedding lookup results in a sparse gradient. </p> <p><code>GradientDescentOptimizer</code> and <code>AdamOptimizer</code> do support sparse apply operations. If you were to switch to one of these optimizers, you would unfortunately see another error: tf.nn.sampled_softmax_loss appears to create an op that does not have a GPU kernel. To get around that, you could wrap the <code>loss = tf.reduce_mean(...</code> line with a <code>with tf.device('/cpu:0'):</code> context, though doing so would introduce cpu-gpu communication overhead.</p>
python|word2vec|tensorflow|word-embedding
2
8,786
68,294,637
ValueError: Failed to find data adapter that can handle <numpy.narray>
<p>I am using the following code:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np in = np.array([10,20,30,40,50,60,70],dtype=float) out = np.array([50,68,86,104,122,140,158],dtype=float) ly1 = tf.keras.layers.Dense(units=3, input_shape=[1]) ly2 = tf.keras.layers.Dense(units=3) result = tf.keras.layers.Dense(units=1) model = tf.keras.models.Sequential([ly1,ly2,result]) model.compile(optimizer=tf.keras.optimizers.Adam(0.1),loss='mean_squared_error') </code></pre> <p>The error appears at this line:</p> <pre><code>train = model.fit(in, out, epochs=1000, verbose=3) </code></pre>
<p>After renaming variable <code>in</code> to <code>in1</code>it started working as shown below</p> <pre><code>import tensorflow as tf print(tf.__version__) import numpy as np print(np.__version__) in1 = np.array([10,20,30,40,50,60,70],dtype=float) out = np.array([50,68,86,104,122,140,158],dtype=float) ly1 = tf.keras.layers.Dense(units=3, input_shape=[1]) ly2 = tf.keras.layers.Dense(units=3) result = tf.keras.layers.Dense(units=1) model = tf.keras.models.Sequential([ly1,ly2,result]) model.compile(optimizer=tf.keras.optimizers.Adam(0.1),loss='mean_squared_error') train = model.fit(in1, out, epochs=10, verbose=1) </code></pre> <p>Output:</p> <pre><code>2.5.0 1.19.5 Epoch 1/10 1/1 [==============================] - 1s 553ms/step - loss: 4867.3154 Epoch 2/10 1/1 [==============================] - 0s 6ms/step - loss: 1008.0177 Epoch 3/10 1/1 [==============================] - 0s 5ms/step - loss: 454.7526 Epoch 4/10 1/1 [==============================] - 0s 4ms/step - loss: 1876.2919 Epoch 5/10 1/1 [==============================] - 0s 4ms/step - loss: 1345.9777 Epoch 6/10 1/1 [==============================] - 0s 4ms/step - loss: 392.1256 Epoch 7/10 1/1 [==============================] - 0s 6ms/step - loss: 235.7297 Epoch 8/10 1/1 [==============================] - 0s 5ms/step - loss: 699.6002 Epoch 9/10 1/1 [==============================] - 0s 5ms/step - loss: 981.0598 Epoch 10/10 1/1 [==============================] - 0s 7ms/step - loss: 822.2618 </code></pre> <p><em>Note: we should not use system defined keywords as variable names</em></p>
python|numpy|tensorflow|keras
0
8,787
68,092,747
Python Compare two different size dataframes on If condition
<p>I have two data frames. Main and auxiliary. The main data frame is big. Auxiliary is small. I want to compare both and produce True/False in the Main data frame.</p> <p>My code:</p> <pre><code>mdf = A B C 0 3 0 -2 1 -3 -4 -5 2 4 -3 5 3 1 -8 1 adf = A_low A_up B_low B_up C_low C_up 0 -2 2 -6 -2 -4 4 # up limit columns ul_cols = ['A_up','B_up','C_up'] # low limit columns ll_cols = ['A_low','B_low','C_low'] # output columns op_cols = [i+'_op' for i in mdf.columns] # compute output column mdf[op_cols] = np.where((mdf&lt;adf[ul_cols])&amp;(mdf&gt;adf[ll_cols]), True, False) </code></pre> <p>Present output:</p> <pre><code>ValueError: Can only compare identically-labeled DataFrame objects </code></pre> <p>Expected output:</p> <pre><code>mdf = A B C A_op B_op C_op 0 3 0 -2 False False True 1 -3 -4 -5 False True False 2 4 -3 5 False True False 3 1 -8 1 True False True </code></pre>
<p>Try with reformatting <code>adf</code>:</p> <pre><code>adf.columns = adf.columns.str.split('_', expand=True).swaplevel(0, 1) </code></pre> <p><code>adf</code>:</p> <pre><code> low up low up low up A A B B C C 0 -2 2 -6 -2 -4 4 </code></pre> <p>Then with <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html#broadcasting" rel="nofollow noreferrer">broadcasted</a> comparison to create a new dataframe:</p> <pre><code>new_df = (adf['low'].to_numpy() &lt;= mdf) &amp; (mdf &lt;= adf['up'].to_numpy()) </code></pre> <p><code>new_df</code>:</p> <pre><code> A B C 0 False False True 1 False True False 2 False True False 3 True False True </code></pre> <p>Then <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.join.html#pandas-dataframe-join" rel="nofollow noreferrer"><code>join</code></a> back with suffix:</p> <pre><code>mdf = mdf.join(new_df, rsuffix='_op') </code></pre> <p><code>mdf</code>:</p> <pre><code> A B C A_op B_op C_op 0 3 0 -2 False False True 1 -3 -4 -5 False True False 2 4 -3 5 False True False 3 1 -8 1 True False True </code></pre> <p>All Together:</p> <pre><code>adf.columns = adf.columns.str.split('_', expand=True).swaplevel(0, 1) mdf = mdf.join((adf['low'].to_numpy() &lt;= mdf) &amp; (mdf &lt;= adf['up'].to_numpy()), rsuffix='_op') </code></pre> <hr /> <p>Another option with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.filter.html#pandas-dataframe-filter" rel="nofollow noreferrer"><code>filter</code></a> instead of creating a multi-index:</p> <pre><code>mdf = mdf.join((adf.filter(like='_low').to_numpy() &lt;= mdf) &amp; (mdf &lt;= adf.filter(like='_up').to_numpy()), rsuffix='_op') </code></pre> <p><code>mdf</code>:</p> <pre><code> A B C A_op B_op C_op 0 3 0 -2 False False True 1 -3 -4 -5 False True False 2 4 -3 5 False True False 3 1 -8 1 True False True </code></pre> <hr /> <p>DataFrames Used:</p> <pre><code>mdf = pd.DataFrame({'A': [3, -3, 4, 1], 'B': [0, -4, -3, -8], 'C': [-2, -5, 5, 1]}) adf = pd.DataFrame({'A_low': [-2], 'A_up': [2], 'B_low': [-6], 'B_up': [-2], 'C_low': [-4], 'C_up': [4]}) </code></pre>
python|pandas|dataframe|numpy
1
8,788
68,435,244
pandas merge rows with same value in one column
<p>I have a pandas dataframe in which one particular column (ID) can have 1, 2, or 3 entries in another column (Number), like this:</p> <pre><code> ID Address Number 120004 3188 James Street 123-456-789 120004 3188 James Street 111-123-134 120004 100 XYZ Avenue 001-002-003 321002 500 ABC Street 444-222-666 321002 500 ABC Street 323-123-423 321003 800 ABC Street 100-200-300 </code></pre> <p>What I need to do is merge rows with the same ID into a single row, keep only the first address, and fill in the additional columns for any additional &quot;Numbers&quot; if necessary, like so:</p> <pre><code> ID Address Number1 Number2 Number3 120004 3188 James Street 123-456-789 111-123-134 001-002-003 321002 500 ABC Street 444-222-666 323-123-423 - 321003 800 ABC Street 100-200-300 - - </code></pre> <p>How would I do this? What I did was generate a new dataframe with only the ID and Numbers:</p> <pre><code>dx = df.set_index(['ID', df.groupby('ID') .cumcount()])['Number'] .unstack() .add_prefix('Number') .reset_index() </code></pre> <p>And then combining this modified dataframe with the original dataframe, and dropping duplicates/keeping the first index only, but I am wondering if this is correct and if there is a more efficient way.</p>
<p>You can first use <code>groupby</code> to flatten up the <code>Numbers</code> and then rename the columns. Finally, create the <code>Address</code> column by taking the first address from each group.</p> <pre><code>( df.groupby('ID') .apply(lambda x: x.Number.tolist()) .apply(pd.Series) .rename(lambda x: f'Number{int(x)+1}', axis=1) .assign(Address=df.groupby('ID').Address.first()) ) </code></pre>
pandas|dataframe
1
8,789
59,119,966
How to make code flexible enough so blank column value is filtered same as nan?
<p>I want to create logic that is fired off only when a particular row integer in the index is present and in the same row if a particular column is empty. </p> <p>The dataframe has a code where all 'nan' are filled with blanks like so:</p> <pre><code>df = df.fillna('') </code></pre> <p>This is the logic I am using to see if a particular row integer exists and if the column is null:</p> <pre><code> if 12 in df.index: if df.col1.isnull()[12] == True: [rest of the code] </code></pre> <p>The <code>isnull</code> is not picking up that blank for the row is null. </p> <p>How do I make my code flexible enough to say that blank is null ? or do I have to replace blanks with NaN? </p> <p>Ideally I want to avoid the latter option because to add back the <code>nan</code> I have to use <code>np.nan</code> and for the serverless architecture I want to avoid adding more libraries since the original package only contains pandas. </p>
<p>I think it is better to <code>replace</code> blanks with NaN:</p> <pre><code>df=df.replace({'' : np.nan}) </code></pre> <p>Or we can add one more condition:</p> <pre><code>if 12 in df.index: if df.col1.isnull()[12] or (df.col1[12]==''): [rest of the code] </code></pre>
python-3.x|pandas
1
8,790
59,418,086
With tf.keras.callbacks.ModelCheckpoint, what are the valid string values for the 'monitor' argument?
<p>Using the following link: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint?version=stable" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint?version=stable</a> I am unable to determine the valid argument inputs for the <code>monitor</code> argument. By default it is set as <code>'val_loss'</code>.</p> <p>Where can I find the list of acceptable inputs for this argument?</p> <p>With more context to my problem, I am trying to set a checkpoint that saves the "best" model following n numbers of epochs. But the "best" model is determined by the <code>monitor</code> argument - and the documentation on the page doesn't seem to display the "acceptable" inputs. I'd like to save the "best" model based on the most balanced precision/recall (F1-score).</p>
<p>The <code>monitor</code> arg of <code>ModelCheckpoint</code> expects a string, and that string needs to be one of the keys that appear in the training logs, i.e. the name of the loss or of a metric (prefixed with <code>'val_'</code> for the validation values). For example, if your compile method looks like this</p> <p><code>model.compile(loss='mse', optimizer='sgd', metrics=['mae', 'accuracy'])</code></p> <p>the valid strings for the <code>monitor</code> arg would be:</p> <p><code>'loss'</code>, <code>'val_loss'</code>, <code>'mae'</code>, <code>'val_mae'</code>, <code>'accuracy'</code>, <code>'val_accuracy'</code> (the exact metric names can vary slightly between Keras versions).</p>
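<p>A minimal usage sketch (the data variables are placeholders; <code>mode='max'</code> because a higher accuracy is better):</p> <pre><code>checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model.h5',
    monitor='val_accuracy',
    mode='max',
    save_best_only=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=20,
          callbacks=[checkpoint])
</code></pre> <p>There is no F1 score in the logs by default, so to checkpoint on a balanced precision/recall you would first have to add such a metric in <code>compile(..., metrics=[...])</code> (e.g. a custom metric or <code>tensorflow_addons.metrics.F1Score</code>) and then monitor it by its name, prefixed with <code>'val_'</code> for the validation value.</p>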
python-3.x|tensorflow|tensorflow2.0|tf.keras
1
8,791
57,281,816
performing mathematical operations for each row of a 2d array against another 2d array
<p>I can only use the numpy import. I need to calculate the closest distance from the training set to the test set, i.e. for each row of the training array find the closest row in the test array (by computing the distance to every test row), and return both the test name and the training name. The following formula is used:</p> <pre><code>dist(x,y)=√((a-a2 )^2+(b-b2 )^2+(c-c2 )^2+(d-d2)^2 ) </code></pre> <p><a href="https://drive.google.com/open?id=12gaqBy5Xjs3ah7IZ1VZT1TEsU2dajzRO" rel="nofollow noreferrer">link</a> to the data used and the expected first row.</p> <p>This is the code I have that works correctly for the first row of the train set. I need each row of the train array to go through the same operation as in variable q. Below is my input:</p> <pre><code>Training a b c d name training 5 3 1.6 0.2 G 5 3.4 1.6 0.4 G 5.5 2.4 3.7 1 R 5.8 2.7 3.9 1.2 R 7.2 3.2 6 1.8 Y 6.2 2.8 4.8 1.8 Y testing a2 b2 c2 d2 name true 5 3.6 1.4 0.2 E 5.4 3.9 1.7 0.4 G 6.9 3.1 4.9 1.5 R 5.5 2.3 4 1.3 R 6.4 2.7 5.3 1.9 Y 6.8 3 5.5 2.1 Y </code></pre> <pre><code>train = np.asarray(train) test = np.asarray(test) print('Train shape',train.shape) print('test shape',test.shape) train_1 = train[:,0:(train.shape[1])-1].astype(float) test_1 = test[:,0:(test.shape[1])-1].astype(float) print('Train '+'\n',train_1) print('test '+'\n',test_1) q=min((np.sqrt(np.sum((train_1[0,:]-test_1)**2,axis=1,keepdims=True)))) </code></pre> <p>I expect to get the closest distance from each training row to the entire test array. Using the formula on the first training row would produce the result below. I would then return G,E as those are the 2 rows that are closest.</p>
<p>you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html" rel="nofollow noreferrer"><code>numpy.linalg.norm</code></a>. here is an example:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; arr = np.array([1, 2, 3, 4]) &gt;&gt;&gt; np.linalg.norm(arr) 5.477225575051661 </code></pre> <p><code>5.477225575051661</code> is the result of <code>sqrt(1^2 + 2^2 + 3^2 + 4^2)</code></p> <pre class="lang-py prettyprint-override"><code>import numpy as np train = np.array([[5, 3, 1.6, 0.2], [5, 3.4, 1.6, 0.4], [5.5, 2.4, 3.7, 1], [5.8, 2.7, 3.9, 1.2], [7.2, 3.2, 6, 1.8], [6.2, 2.8, 4.8, 1.8]]) test = np.array([[5, 3.6, 1.4, 0.2], [5.4, 3.9, 1.7, 0.4], [6.9, 3.1, 4.9, 1.5], [5.5, 2.3, 4, 1.3], [6.4, 2.7, 5.3, 1.9], [6.8, 3, 5.5, 2.1]]) # first get subtraction of each row of train to test subtraction = train[:, None, :] - test[None, :, :] # get distance from each train_row to test s = np.linalg.norm(subtraction, axis=2, keepdims=True) print(np.min(s, axis=1)) # get minimum q = np.argmin(s, axis=1) print("minimum indices:") print(q) </code></pre> <p>output:</p> <pre><code>[[0.63245553] [0.34641016] [0.43588989] [0.51961524] [0.73484692] [0.55677644]] minimum indices: [[0] [0] [3] [3] [5] [4]] </code></pre>
python|numpy
0
8,792
56,983,420
Find a substring that appears before a word in a string upto a number
<p>I have a string :</p> <pre><code>"abc mysql 23 rufos kanso engineer" </code></pre> <p>I want the regex to output the string before the word "engineer" till it sees a number.</p> <p>That is the regex should output :</p> <pre><code>23 rufos kanso </code></pre> <p>Another example:</p> <p>String:</p> <pre><code>def grusol defno 1635 minos kalopo, ruso engineer okas puno" </code></pre> <p>I want the regex to output the string before the word "engineer" till it sees a number.</p> <p>That is the regex should output :</p> <pre><code>1635 minos kalopo, ruso </code></pre> <p>I am able to achieve this by a series of regex .</p> <p>Can I do this in one shot?</p> <p>Thanks</p>
<p>Use <a href="https://stackoverflow.com/a/2973495/7177029"><code>positive look-ahead</code></a> to match until the word engineer preceded by a digit.</p> <p><a href="https://regex101.com/r/p6Nycm/1/" rel="nofollow noreferrer"><code>The regex</code></a> - <code>(?=\d)(.+)(?=engineer)</code></p> <p>Just to get an idea:</p> <pre><code>import re pattern = r"(?=\d)(.+)(?=engineer)" input = [ "\"def grusol defno 1635 minos kalopo, ruso engineer okas puno\"", "\"abc mysql 23 rufos kanso engineer\"" ] matches = [] for item in input: matches.append(re.findall(pattern, item)) </code></pre> <p>Outputting:</p> <pre><code>[['1635 minos kalopo, ruso '], ['23 rufos kanso ']] </code></pre>
python|regex|pandas
0
8,793
46,128,720
Quadratic trend line equation on plot?
<p>I've seen many examples for displaying a linear trend line's equation on a plot, but haven't found one for displaying one of a higher order. I assumed it would be similar, but I keep getting errors. I have a feeling it has to do with not understanding the purpose of the z's in my print statement. Here is my code, thanks in advance!</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np df=pd.read_csv('myfile', delimiter=',', usecols=[1,4], names=['Time','Position']) plt.figure(figsize=(9,6)) plt.suptitle('') x=df['Time'] y=df['Position'] plt.subplot(1,1,1) plt.tick_params(labelsize=6) plt.plot(x, y, 'o') z = np.polyfit(x, y, 2) p = np.poly1d(z) plt.plot(x,p(x),"r--") plt.xlabel('Time (s)', fontsize=9) plt.ylabel('Position (mm)', fontsize=9) plt.title('13.087 Degree Incline', fontsize=10, weight='bold') plt.legend( loc=2, prop={'size': 6}) plt.tight_layout() print "y=%.6fx^2+%.6fx+(%.6f)"%(z[0],z[1]) plt.show() </code></pre> <p>edit: when running in python2, I get </p> <pre><code>File "scriptname.py", line 27, in &lt;module&gt; print "y=%.6fx^2+%.6fx+(%.6f)"%(z[0],z[1]) TypeError: not enough arguments for format string </code></pre> <p>when running in python3 I get :</p> <pre><code> File "scriptname.py", line 27 print "y=%.6fx^2+%.6fx+(%.6f)"%(z[0],z[1]) ^ SyntaxError: invalid syntax </code></pre>
<p>In principle the error messages are pretty good at explaining what's wrong. In the string <code>"y=%.6fx^2+%.6fx+(%.6f)"</code> you have 3 format specifiers, you later only provide 2 arguments (<code>%(z[0],z[1])</code>)</p> <p>The solution is of course to supply to as many arguments as format specifiers.</p> <pre><code>print "y=%.6fx^2+%.6fx+(%.6f)"%(z[0],z[1],z[2]) </code></pre> <p>In python3 you would eventually end up with the same error, once you use the print <em>function</em> <code>print(arg)</code> instead of of the print <em>statement</em> <code>print arg</code> as in python 2.</p> <pre><code>print ( "y=%.6fx^2+%.6fx+(%.6f)"%(z[0],z[1],z[2]) ) </code></pre>
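<p>For what it's worth, on Python 3.6+ the same line can also be written as an f-string, which makes it harder for the format specifiers and the argument tuple to drift apart:</p> <pre><code>print(f"y={z[0]:.6f}x^2+{z[1]:.6f}x+({z[2]:.6f})")
</code></pre>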
python|numpy|matplotlib
0
8,794
46,063,249
Delete some rows of a column if a value does not exist
<p>Given the following DataFrame:</p> <pre><code>import pandas as pd d = pd.DataFrame({'Group':['a','a','c','c'],'Value':[1,1,2,2]}) print(d) Group Value 0 a 1 1 a 1 2 c 2 3 c 2 </code></pre> <p>Since there is no row with a group of 'b', I want to delete any row with a group of 'a'. I only want to delete the 'a' group because there are no rows in the 'b' group.</p> <p>I know how to get the number of rows in group 'b' like this:</p> <pre><code>len(d.loc[d['Group']=='b']) </code></pre> <p>and then do this:</p> <pre><code>if len(d.loc[d['Group']=='b'])==0: d=d.loc[d['Group']!='a'] print(d) Group Value 2 c 2 3 c 2 </code></pre> <p>...but I'm wondering how to work it into the .loc method in one line.</p> <p>Thanks in advance!</p>
<p>A more intuitive solution to <code>len</code> would be creating a mask and then using <code>.all</code>. After that, just reassign to the filtered slice. </p> <pre><code>if ~(d.Group == 'b').all(): d = d[d.Group != 'a'] d Group Value 2 c 2 3 c 2 </code></pre> <p>You can also use a host of other query methods like <code>df.query</code> and <code>df.*loc</code>.</p>
python|pandas|dataframe
1
8,795
50,690,757
Combine pandas DataFrames to give unique element counts
<p>I have a few pandas DataFrames and I am trying to find a good way to calculate and plot the number of times each unique entry occurs across DataFrames. As an example if I had the 2 following DataFrames:</p> <pre><code> year month 0 1900 1 1 1950 2 2 2000 3 year month 0 1900 1 1 1975 2 2 2000 3 </code></pre> <p>I was thinking maybe there is a way to combine them into a single DataFrame while using a new column <code>counts</code> to keep track of the number of times a unique combination of <code>year + month</code> occurred in any of the DataFrames. From there I figured I could just scatter plot the <code>year + month</code> combinations with their corresponding counts.</p> <pre><code> year month counts 0 1900 1 2 1 1950 2 1 2 2000 3 2 3 1975 2 1 </code></pre> <p>Is there a good way to achieve this?</p>
<p>Use <code>concat</code>, then <code>groupby</code> with <code>agg</code>:</p> <pre><code>pd.concat([df1,df2]).groupby('year').month.agg(['count','first']).reset_index().rename(columns={'first':'month'}) Out[467]: year count month 0 1900 2 1 1 1950 1 2 2 1975 1 2 3 2000 2 3 </code></pre>
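<p>If you want literally the <code>counts</code> column from the question (one row per unique <code>year</code>/<code>month</code> pair), a small variation of the same idea:</p> <pre><code>out = (pd.concat([df1, df2])
         .groupby(['year', 'month'])
         .size()
         .reset_index(name='counts'))
</code></pre> <p>which can then be scatter-plotted against <code>counts</code> directly.</p>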
python|pandas|dataframe
1
8,796
51,002,693
groupby and sum two columns and set as one column in pandas
<p>I have the following data frame:</p> <pre><code>import pandas as pd
data = pd.DataFrame()
data['Home'] = ['A','B','C','D','E','F']
data['HomePoint'] = [3,0,1,1,3,3]
data['Away'] = ['B','C','A','E','D','D']
data['AwayPoint'] = [0,3,1,1,0,0]
</code></pre> <p>I want to group the columns ['Home', 'Away'] together under a single name, Team, and then sum HomePoint and AwayPoint into one column named Points:</p> <pre><code>Team  Points
A          4
B          0
C          4
D          1
E          4
F          3
</code></pre> <p>How can I do it? I tried a different approach using the following post: <a href="https://stackoverflow.com/questions/46431243/pandas-dataframe-groupby-how-to-get-sum-of-multiple-columns">Link</a></p> <p>But I was not able to get the format that I wanted.</p> <p>I greatly appreciate your advice.</p> <p>Thanks,</p> <p>Zep.</p>
<p>A simple way is to create two new Series indexed by the teams:</p> <pre><code>home = pd.Series(data.HomePoint.values, data.Home)
away = pd.Series(data.AwayPoint.values, data.Away)
</code></pre> <p>Then, the result you want is:</p> <pre><code>home.add(away, fill_value=0).astype(int)
</code></pre> <p>Note that <code>home + away</code> does not work, because team F never played away, so it would result in NaN for them. So we use <code>Series.add()</code> with <code>fill_value=0</code>.</p> <p>A more involved way is to use <code>DataFrame.melt()</code>:</p> <pre><code>goo = data.melt(['HomePoint', 'AwayPoint'], var_name='At', value_name='Team')
goo.HomePoint.where(goo.At == 'Home', goo.AwayPoint).groupby(goo.Team).sum()
</code></pre> <p>Or from the other perspective:</p> <pre><code>ooze = data.melt(['Home', 'Away'])
ooze.value.groupby(ooze.Home.where(ooze.variable == 'HomePoint', ooze.Away)).sum()
</code></pre>
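<p>Another possible sketch (an alternative, not taken from the answer above): reshape home and away results into one long Team/Points frame and aggregate with <code>groupby</code>, which also collapses teams that appear several times on the same side (D plays away twice here) into a single row.</p> <pre><code>import pandas as pd

data = pd.DataFrame({
    'Home': ['A', 'B', 'C', 'D', 'E', 'F'],
    'HomePoint': [3, 0, 1, 1, 3, 3],
    'Away': ['B', 'C', 'A', 'E', 'D', 'D'],
    'AwayPoint': [0, 3, 1, 1, 0, 0],
})

# stack home and away results into one long "Team / Points" frame, then sum per team
long_form = pd.concat([
    data[['Home', 'HomePoint']].rename(columns={'Home': 'Team', 'HomePoint': 'Points'}),
    data[['Away', 'AwayPoint']].rename(columns={'Away': 'Team', 'AwayPoint': 'Points'}),
])
result = long_form.groupby('Team', as_index=False)['Points'].sum()
print(result)
#   Team  Points
# 0    A       4
# 1    B       0
# 2    C       4
# 3    D       1
# 4    E       4
# 5    F       3
</code></pre>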
python|python-3.x|pandas|pandas-groupby
1
8,797
51,022,861
Replace value in a specific column with corresponding value
<p>I have a dataframe called <code>REF</code> with the following structure:</p> <pre><code>old_id  new_id
3       6
4       7
5       8
</code></pre> <p>I want to replace, in another dataframe <code>NEW</code>, every <code>old_id</code> value that matches one of the <code>old_id</code> values in <code>REF</code> with the corresponding <code>new_id</code>. <code>NEW</code> is:</p> <pre><code>old_id  column_1  column_2
3       a         e
4       b         f
9       c         g
9       d         h
</code></pre> <p>Therefore the new output dataset <code>NEW</code> will be:</p> <pre><code>old_id  column_1  column_2
6       a         e
7       b         f
9       c         g
9       d         h
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a>:</p> <pre><code>s = df1.set_index('old_id')['new_id']
df2['old_id'] = df2['old_id'].map(s).fillna(df2['old_id'])
</code></pre> <p>Or a <a href="https://stackoverflow.com/q/49259580/2901002">slower solution</a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>replace</code></a>:</p> <pre><code>df2['old_id'] = df2['old_id'].replace(s)
</code></pre>
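<p>Applied to the frames from the question, the <code>map</code> approach might look like this (a sketch; the trailing <code>astype</code> is only there because <code>fillna</code> after an unmatched <code>map</code> upcasts the column to float):</p> <pre><code>import pandas as pd

ref = pd.DataFrame({'old_id': [3, 4, 5], 'new_id': [6, 7, 8]})
new = pd.DataFrame({'old_id': [3, 4, 9, 9],
                    'column_1': ['a', 'b', 'c', 'd'],
                    'column_2': ['e', 'f', 'g', 'h']})

s = ref.set_index('old_id')['new_id']           # lookup table: old_id -> new_id
new['old_id'] = new['old_id'].map(s).fillna(new['old_id']).astype(int)
print(new)
#    old_id column_1 column_2
# 0       6        a        e
# 1       7        b        f
# 2       9        c        g
# 3       9        d        h
</code></pre>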
python|python-3.x|pandas|replace|replacewith
3
8,798
50,911,227
How can I visualize categorical data?
<p>I have two datasets like this (dots are random numbers):</p> <pre><code>Category 1

   A  B  C  D  E  F  G  H
1  3  0  0  3  .  .  .
2  2  0  0  2  .  .  .  .
3  0  2  4  2  .  .  .  .
4  1  1  5  2  .  .  .  .
5  0  .  .  .  .  .  .  .
6  2  .  .  .  .  .  .  .
7  3  .  .  .  .  .  .  .
8  0  .  .  .  .  .  .  .

Category 2

   A  B  C  D  E  F  G  H
1  1  0  0  1  .  .  .  .
2  1  0  0  1  .  .  .  .
3  1  2  1  2  .  .  .  .
4  0  1  5  0  .  .  .  .
5  0  .  .  .  .  .  .  .
6  0  .  .  .  .  .  .  .
7  3  .  .  .  .  .  .  .
8  0  .  .  .  .  .  .  .
</code></pre> <p>A-H = things that the respondent should rank</p> <p>1-10 = the ranking</p> <p>Ex. A is ranked in place 1 one time, in place 2 two times, in place 3 zero times, etc. </p> <p>I have two different kinds of people that filled in the survey (Category 1 &amp; Category 2), so that's why there are two datasets with different outcomes. Now I want to visualize this in the best way possible, so that you can see things like Category 1 finds A &amp; D more important than Category 2 does. I work with python/pandas/matplotlib.</p> <p>Can somebody help?</p>
<p>Rename the columns <code>A B C D E F G H</code> into <code>A1 B1 C1 D1 E1 F1 G1 H1</code> for Dataset 1 and <code>A2 B2 C2 D2 E2 F2 G2 H2</code> for Dataset 2.</p> <p>Then merge the datasets.</p>
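<p>A minimal sketch of that idea (hypothetical counts, only two of the items shown): suffix the column names per category, join the two frames on the rank index, and then the merged table can be compared or plotted side by side.</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

# hypothetical rank-count tables: rows = ranking position, columns = items
cat1 = pd.DataFrame({'A': [3, 2, 0, 1], 'D': [3, 2, 2, 2]}, index=[1, 2, 3, 4])
cat2 = pd.DataFrame({'A': [1, 1, 1, 0], 'D': [1, 1, 2, 0]}, index=[1, 2, 3, 4])

merged = cat1.add_suffix('1').join(cat2.add_suffix('2'))
print(merged)
#    A1  D1  A2  D2
# 1   3   3   1   1
# 2   2   2   1   1
# 3   0   2   1   2
# 4   1   2   0   0

# one possible view: bars of how often each item was ranked first, per category
merged.loc[1].plot(kind='bar')
plt.ylabel('times ranked 1st')
plt.tight_layout()
plt.show()
</code></pre>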
python|pandas|matplotlib|visualization|data-visualization
0
8,799
51,043,135
How to fill the first N/A cells when applying a rolling mean to a column - python
<p>I need to apply a rolling mean to a column, as shown in pic1 (column s3). After I apply the rolling mean with window = 5, I get the correct answer, but the first 4 rows are left empty, as shown in pic2 (column sa3).</p> <p>I want to fill those first 4 empty cells in pic2 sa3 with the mean of all data in pic1 s3 up to the current row, as shown in pic3 (column a3).</p> <p>How can I do this with an easy function besides the rolling mean method? <a href="https://i.stack.imgur.com/zIHds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zIHds.png" alt="pic1"></a></p> <p><a href="https://i.stack.imgur.com/COLec.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/COLec.png" alt="pic2"></a></p> <p><a href="https://i.stack.imgur.com/bk8ME.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bk8ME.png" alt="pic3"></a></p>
<p>I think you need the parameter <code>min_periods=1</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow noreferrer"><code>rolling</code></a>:</p> <blockquote> <p>min_periods : int, default None</p> <p>Minimum number of observations in window required to have a value (otherwise result is NA). For a window that is specified by an offset, this will default to 1.</p> </blockquote> <pre><code>df = df.rolling(5, min_periods=1).mean()
</code></pre> <p><strong>Sample</strong>:</p> <pre><code>np.random.seed(1256)
df = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('abcde'))
print (df)
   a  b  c  d  e
0  1  5  8  8  9
1  3  6  3  0  6
2  7  0  1  5  1
3  6  6  5  0  4
4  4  9  4  6  1
5  7  7  5  8  3
6  0  7  2  8  2
7  4  8  3  5  5
8  8  2  0  9  2
9  4  7  1  5  1

df = df.rolling(5, min_periods=1).mean()
print (df)
          a         b     c         d         e
0  1.000000  5.000000  8.00  8.000000  9.000000
1  2.000000  5.500000  5.50  4.000000  7.500000
2  3.666667  3.666667  4.00  4.333333  5.333333
3  4.250000  4.250000  4.25  3.250000  5.000000
4  4.200000  5.200000  4.20  3.800000  4.200000
5  5.400000  5.600000  3.60  3.800000  3.000000
6  4.800000  5.800000  3.40  5.400000  2.200000
7  4.200000  7.400000  3.80  5.400000  3.000000
8  4.600000  6.600000  2.80  7.200000  2.600000
9  4.600000  6.200000  2.20  7.000000  2.600000
</code></pre>
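<p>If you would rather keep the plain <code>rolling(5)</code> call, a small sketch of another option (not from the answer above) is to fill the leading NaNs with an expanding (cumulative) mean; for the first <code>window - 1</code> rows this is exactly the "mean of all data up to the current row" that the question asks for:</p> <pre><code>import numpy as np
import pandas as pd

np.random.seed(0)
s = pd.Series(np.random.rand(10))

rolled = s.rolling(5).mean()                  # NaN for the first 4 rows
filled = rolled.fillna(s.expanding().mean())  # fill them with the running mean so far
print(filled)
</code></pre>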
python|pandas|rolling-average
3