Dataset schema (column, dtype, observed range or string length):

Unnamed: 0   int64    0 – 378k
id           int64    49.9k – 73.8M
title        string   length 15 – 150
question     string   length 37 – 64.2k
answer       string   length 37 – 44.1k
tags         string   length 5 – 106
score        int64    -10 – 5.87k
2,100
41,539,658
Tensorflow error when I try to use tf.contrib.layers.convolution2d
<p>When I invoke tf.contrib.layers.convolution2d, TensorFlow terminates with an error about one of the parameters used:</p> <pre><code>got an unexpected keyword argument 'weight_init' </code></pre> <p>The parameters passed are as follows:</p> <pre><code>layer_one = tf.contrib.layers.convolution2d( float_image_batch, num_output_channels=32, kernel_size=(5,5), activation_fn=tf.nn.relu, weight_init=tf.random_normal, stride=(2, 2), trainable=True) </code></pre> <p>That is exactly as described in the book that I'm reading. I suspect a possible syntax problem with <code>weight_init=tf.random_normal</code> written directly inside the call, but I don't know how to fix it. I'm using TensorFlow 0.12.0.</p>
<p>The book that you are reading (you didn't mention which one) is probably targeting an older version of TensorFlow, where the initial values for the weight tensor were passed through the <code>weight_init</code> argument. In the version you are using (v0.12.0), that argument has been replaced with <code>weight_initializer</code>. The TensorFlow v0.12.0 documentation for <code>tf.contrib.layers.convolution2d</code> is <a href="https://www.tensorflow.org/api_docs/python/contrib.layers/higher_level_ops_for_building_neural_network_layers_#convolution2d" rel="nofollow noreferrer">here</a>.</p> <p>To fix your problem, you can change the following line in your code:</p> <p><code>weight_init=tf.random_normal</code></p> <p>to</p> <p><code>weight_initializer=tf.random_normal_initializer()</code></p> <p>According to the <a href="https://www.tensorflow.org/versions/master/api_docs/python/state_ops/sharing_variables?hl=bn#random_normal_initializer" rel="nofollow noreferrer">documentation</a>, by default <code>tf.random_normal_initializer</code> uses a mean of 0.0, a standard deviation of 1.0 and a datatype of tf.float32. You may change the arguments as per your need using this line instead: <code>weight_initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)</code></p>
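<p>If the book's argument names keep failing, one quick way to see what the installed version actually accepts is to print the function's documentation at runtime (the contrib argument names changed between releases, so checking locally is safer than copying calls from a book):</p> <pre><code>import tensorflow as tf

# Print the signature and argument list your installed version documents.
help(tf.contrib.layers.convolution2d)
</code></pre>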
tensorflow|deep-learning|convolution
3
2,101
61,598,173
Datetime error in combining date and time dataframe objects
<p>I have an issue with Python when I want to merge two data columns into one DateTime object. The initial dates column is in string format and the hours are integers (1, 2, 3, ..., 23, 24), and a new day starts with 1 again (not with 24). I use the command <code>smartmeter_data['Datetime']=pd.to_datetime(smartmeter_data['Date']) + smartmeter_data['Time'].astype('timedelta64[h]')</code> to add a new column with both date and time. However, I receive very weird results:</p> <pre><code>... 19 01/09/2019 2019-01-09 20:00:00 20 01/09/2019 2019-01-09 21:00:00 21 01/09/2019 2019-01-09 22:00:00 22 01/09/2019 2019-01-09 23:00:00 23 01/09/2019 2019-01-10 00:00:00 24 02/09/2019 2019-02-09 01:00:00 25 02/09/2019 2019-02-09 02:00:00 26 02/09/2019 2019-02-09 03:00:00 ... </code></pre> <p>Here the date <code>01/09/2019</code> was changed to the DateTime object <code>2019-01-10 00:00:00</code>, which is wrong and makes a very strange "jump" on my graph. My desired output is:</p> <pre><code>... 19 01/09/2019 2019-01-09 20:00:00 20 01/09/2019 2019-01-09 21:00:00 21 01/09/2019 2019-01-09 22:00:00 22 01/09/2019 2019-01-09 23:00:00 23 01/09/2019 2019-02-09 00:00:00 24 02/09/2019 2019-02-09 01:00:00 25 02/09/2019 2019-02-09 02:00:00 26 02/09/2019 2019-02-09 03:00:00 ... </code></pre> <p>I tried to find a solution through Google but with no success. Does anyone know how to solve the issue?</p> <p>I would be very thankful if you could help; working with dates and times is the basis of my work.</p>
<p>A day has 24 hours so if you add a timedelta of 24 hours to a date, the date will change to the next day. However, why don't you just subtract 1 to get the correct timedelta (0-23 instead of 1-24)? E.g.</p> <pre><code>import pandas as pd smartmeter_data = pd.DataFrame({'Date': ['01/09/2019', '01/09/2019', '01/09/2019', '01/09/2019', '01/09/2019', '02/09/2019', '02/09/2019', '02/09/2019'], 'Time': [20, 21, 22, 23, 24, 1, 2, 3]}) smartmeter_data['Datetime'] = (pd.to_datetime(smartmeter_data['Date'], format='%d/%m/%Y') + (smartmeter_data['Time'] - 1).astype('timedelta64[h]')) # smartmeter_data # Date Time Datetime # 0 01/09/2019 20 2019-09-01 19:00:00 # 1 01/09/2019 21 2019-09-01 20:00:00 # 2 01/09/2019 22 2019-09-01 21:00:00 # 3 01/09/2019 23 2019-09-01 22:00:00 # 4 01/09/2019 24 2019-09-01 23:00:00 # 5 02/09/2019 1 2019-09-02 00:00:00 # 6 02/09/2019 2 2019-09-02 01:00:00 # 7 02/09/2019 3 2019-09-02 02:00:00 </code></pre>
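<p>As a side note, the same offset can be spelled with <code>pd.to_timedelta</code>, which may read more clearly than the <code>astype</code> cast (a sketch reusing the column names from the question):</p> <pre><code>smartmeter_data['Datetime'] = (
    pd.to_datetime(smartmeter_data['Date'], format='%d/%m/%Y')
    + pd.to_timedelta(smartmeter_data['Time'] - 1, unit='h')
)
</code></pre>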
python|pandas|dataframe|datetime
0
2,102
68,707,680
How to automatically judge whether the training process of a deep learning model has converged?
<p>When training a deep learning model, I have to look at the loss curve and performance curve to judge whether the training process of the deep learning model has converged.</p> <p>This has cost me a lot of time, and sometimes the point of convergence judged by the naked eye is not accurate.</p> <p>Therefore, I'd like to know whether there exists an algorithm or a package that can automatically judge whether the training process of a deep learning model has converged.</p> <p>Can anyone help me?</p> <p>Thanks a lot.</p>
<p>At the risk of disappointing you, I believe there is no such universal algorithm. In my experience, it depends on what you want to achieve, which metrics are important to you and how much time you are willing to let the training go on for.</p> <ul> <li><p>I have already seen validation losses dramatically go up (a sign of overfitting) while other metrics (mIoU in this case) were still improving on the validation set. In these cases, you need to know what your target is.</p> </li> <li><p>It is possible (although it is very rare) that your loss goes up for a substantial amount of time before going down again and reaching better levels than before. There is no way to anticipate this.</p> </li> <li><p>Finally, and this is arguably a common case if you have tons of training data, your validation loss may continually go down, but do so slower and slower. In this case, the best strategy, if you had an infinite amount of time, would be to keep the training going indefinitely. In practice, this is impossible, and you would need to find the right balance between performance and training time.</p> </li> </ul> <p>If you really need an algorithm, I would suggest this quite simple one:</p> <ol> <li>Compute a validation metric <code>M(i)</code> after each <code>i</code>th epoch on a fixed subset of your validation set or the whole validation set. Let's suppose that the higher <code>M(i)</code> is, the better. Fix an integer <code>k</code> depending on the duration of one training epoch (<code>k~3</code> should do the trick).</li> <li>If for some <code>n</code> you have <code>M(n) &gt; max(M(n+1), ..., M(n+k))</code>, stop and keep the network you had at epoch <code>n</code>.</li> </ol> <p>It's far from perfect, but should be enough for simple tasks.</p> <p>[Edit] If you're not using it yet, I invite you to use TensorBoard to visualize the evolution of your metrics throughout the training. Once set up, it saves a huge amount of time.</p>
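<p>As a rough illustration, the stopping rule in step 2 could look like this (a sketch only; <code>metric_history</code> is assumed to hold one validation metric per epoch, higher being better):</p> <pre><code>def should_stop(metric_history, k=3):
    """Return True once the metric from k epochs ago beats every
    epoch since, i.e. M(n) &gt; max(M(n+1), ..., M(n+k))."""
    if len(metric_history) &lt;= k:
        return False
    return metric_history[-k - 1] &gt; max(metric_history[-k:])
</code></pre>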
tensorflow|deep-learning|pytorch
3
2,103
68,682,003
How can I return a list with only up to the second decimal place python?
<p>I have the following list of values:</p> <pre><code> value_list=[2.5655665, 3.151498745, 3.1, 0.9999999999] </code></pre> <p>I need to update this list, keeping only up to the second decimal place. I would like the result to be:</p> <pre><code> print(value_list) [2.56, 3.15, 3.1, 0.99] </code></pre> <p>I tried to keep only up to the second decimal place using the round() method, passing 2 as the parameter, as follows:</p> <pre><code> value_list.round(2) </code></pre> <p>But this error message appears:</p> <pre><code> AttributeError: 'list' object has no attribute 'round' </code></pre> <p>I made an attempt by transforming value_list to an array, like this:</p> <pre><code> import numpy as np value_list = np.array(value_list).round(2) </code></pre> <p>This way it works, but it returns an array, and I need the return to be of type list. How can I return a list with only up to the second decimal place?</p>
<p>You can use map and apply whatever function you see fit:</p> <pre><code>value_list=[2.5655665, 3.151498745, 3.1, 0.9999999999] value_rounded = list(map(lambda x: float(format(x, '.2f')), value_list)) value_truncated = list(map(lambda x: float(str(x)[:str(x).index('.')+3]), value_list)) print(value_rounded) print(value_truncated) </code></pre> <p>Outputs for both cases:</p> <pre><code>[2.57, 3.15, 3.1, 1.0] [2.56, 3.15, 3.1, 0.99] </code></pre>
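<p>If you'd rather avoid <code>map</code>/<code>lambda</code>, plain list comprehensions do the same two jobs; note that rounding and truncating differ on the last value:</p> <pre><code>import math

value_list = [2.5655665, 3.151498745, 3.1, 0.9999999999]

rounded = [round(x, 2) for x in value_list]                   # [2.57, 3.15, 3.1, 1.0]
truncated = [math.trunc(x * 100) / 100 for x in value_list]   # [2.56, 3.15, 3.1, 0.99]
</code></pre>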
python|list|numpy
1
2,104
36,480,100
pandas query single output
<p>I have a pandas dataframe (called smalls) that is repurposed several times to create several network diagrams from a dataset. I am trying to set the color of the nodes in one of the diagrams based on entity type and need to query the original dataframe. However, when I do so it results in a series, which I then cannot perform a comparison on. How can I modify the first line below to only give me the first entry from the dataframe (all of the others will be the same)?</p> <pre><code>temp=smalls.Role[smalls.Entity==big_nodes_order[i]] print(temp) 10 Threat 11 Threat 12 Threat Name: Role, dtype: object </code></pre>
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow"><code>iloc</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iat.html" rel="nofollow"><code>iat</code></a>:</p> <pre><code>temp=smalls.Role[smalls.Entity==big_nodes_order[i]] print(temp) 10 Threat 11 Threat 12 Threat Name: Role, dtype: object print(temp.iloc[0]) Threat print(temp.iat[0]) Threat print(temp.iloc[:1]) 10 Threat Name: Role, dtype: object </code></pre>
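<p>Since the question states that all matched entries are the same, it can be worth asserting that assumption instead of silently taking the first row (a small sketch reusing the names from the question):</p> <pre><code>vals = smalls.loc[smalls.Entity == big_nodes_order[i], 'Role'].unique()
assert len(vals) == 1, 'expected a single role per entity'
temp = vals[0]
</code></pre>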
python|python-3.x|pandas|dataframe
1
2,105
36,271,186
Partially argsort a 2D array in Python
<p>I have a KNN, and I need to partially <strong>argsort</strong> a list. </p> <p>Here is how it works right now in code: </p> <pre><code>sorted_distance_indices = distances.argsort(axis=1)[:,:self.parameters['k']+1] kplus_1_nearest_classes = self.trainingY[sorted_distance_indices] ...etc. </code></pre> <p>I found this answer, <a href="https://stackoverflow.com/questions/4555820/how-can-i-partially-sort-a-python-list">How can I partially sort a list?</a> But I don't see how to adapt the 'heapification' for the argsort task, (and I have no idea how to do language interop in Python, so I don't see how to do the heapsort alg manually)...</p>
<p>I think I've got the answer.</p> <p>Running:</p> <pre><code>sorted_distance_indices = np.argpartition(distances,self.parameters['k']+1,axis=1)[:,:self.parameters['k']+1] </code></pre> <p>gets the job done. I'm open to a faster way.</p>
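<p>One caveat worth noting: <code>np.argpartition</code> does not order the k+1 nearest among themselves. If the KNN logic needs them sorted by distance, the small slice can be sorted afterwards (a sketch, with placeholders standing in for the question's variables):</p> <pre><code>import numpy as np

distances = np.random.rand(10, 50)  # placeholder distance matrix
k = 5                               # stand-in for self.parameters['k']

idx = np.argpartition(distances, k + 1, axis=1)[:, :k + 1]
# Order the selected neighbours by their actual distances:
rows = np.arange(distances.shape[0])[:, None]
idx_sorted = idx[rows, np.argsort(distances[rows, idx], axis=1)]
</code></pre>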
python|numpy|heap|heapsort
2
2,106
53,330,407
Combine SIMILAR value rows using Python Pandas
<p>Suppose I have the following Dataframe-</p> <pre><code>company money jack &amp; jill, Boston, MA 02215 51 jack &amp; jill, MA 02215 49 </code></pre> <p>Now, I know that these 2 rows mean the same company, so I want to merge them and also sum the money-</p> <pre><code>company money jack &amp; jill, Boston, MA 02215 100 </code></pre> <p>I don't care about the format of the company name, as long as the duplicates get merged and the money gets added.</p> <p>How should I go about this? Is there a library out there that merges SIMILAR value rows and sums the corresponding quantitative value?</p>
<p>If you have the same pattern in the <code>company</code> column, i.e. the value before the first comma is the company name, you can use something like below:</p> <pre><code>df = pd.DataFrame({'company':['jack &amp; jill, Boston, MA 02215','jack &amp; jill, MA 02215','Google, New Jersey', 'Google'], 'money':[51,49, 33, 22]}) df['company'] = df['company'].apply(lambda x: x.split(",")[0]) new_df = df.groupby(['company'])['money'].sum().reset_index() print(new_df) </code></pre> <p>Output:</p> <pre><code> company money 0 Google 55 1 jack &amp; jill 100 </code></pre>
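<p>If the names don't share a clean prefix pattern, a rough fallback is fuzzy matching with the standard library's <code>difflib</code>; the 0.6 cutoff below is arbitrary and worth tuning on real data:</p> <pre><code>import difflib
import pandas as pd

df = pd.DataFrame({'company': ['jack &amp; jill, Boston, MA 02215',
                               'jack &amp; jill, MA 02215'],
                   'money': [51, 49]})

canon = []  # canonical names seen so far
def canonical(name):
    match = difflib.get_close_matches(name, canon, n=1, cutoff=0.6)
    if match:
        return match[0]
    canon.append(name)
    return name

df['company'] = df['company'].map(canonical)
print(df.groupby('company', as_index=False)['money'].sum())
</code></pre>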
python-3.x|pandas|dataframe|data-science
0
2,107
65,823,701
Is there a way to import compare_ssim for python IDLE?
<p>I tried running the python code in IDLE to import compare_ssim with this command line,</p> <p>from skimage.measure import compare_ssim: <a href="https://i.stack.imgur.com/MJbdX.jpg" rel="nofollow noreferrer">code for importing compare_ssim</a></p> <pre><code>from keras.layers import Input, Dense from keras.models import Model from keras.callbacks import ModelCheckpoint import matplotlib.pyplot as plt import numpy as np import cv2 import math from PIL import Image from keras import backend as K from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Flatten from keras.preprocessing.image import ImageDataGenerator from keras.preprocessing import image from skimage.measure import compare_ssim import argparse import imutils import cv2 #from maskalgo import getMask from DefectSniffer import sniff import time </code></pre> <p>But I am having this error: <a href="https://i.stack.imgur.com/1BB8w.jpg" rel="nofollow noreferrer">Error shown in idle</a></p> <pre><code>Python 3.7.8 (tags/v3.7.8:4b47a5b6ba, Jun 28 2020, 08:53:46) [MSC v.1916 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license()&quot; for more information. &gt;&gt;&gt; =============== RESTART: C:\Users\User\Desktop\keras\Training.py =============== Traceback (most recent call last): File &quot;C:\Users\User\Desktop\keras\Training.py&quot;, line 13, in &lt;module&gt; from skimage.measure import compare_ssim ImportError: cannot import name 'compare_ssim' from 'skimage.measure' (C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\skimage\measure\__init__.py) &gt;&gt;&gt; </code></pre> <p>How do I resolve this issue? Please and thank you.</p>
<p>As per <a href="https://github.com/scikit-image/scikit-image/blob/b38fd6f02917db2965f5faf2e6e18fc197b8d6c8/skimage/metrics/_structural_similarity.py#L17" rel="nofollow noreferrer">the scikit-image source</a>:</p> <blockquote> <p>Changed in version 0.16: this function was renamed from <code>skimage.measure.compare_ssim</code> to <code>skimage.metrics.structural_similarity</code>.</p> </blockquote>
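<p>So on recent scikit-image versions the import and call become something like the following (the images here are random placeholders; <code>data_range</code> is passed explicitly, which newer releases require for float images):</p> <pre><code>import numpy as np
from skimage.metrics import structural_similarity

img_a = np.random.rand(64, 64)  # placeholder grayscale images
img_b = np.random.rand(64, 64)

score, diff = structural_similarity(img_a, img_b, data_range=1.0, full=True)
print(score)  # diff is the per-pixel SSIM image
</code></pre>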
python|python-3.x|tensorflow|keras|tf.keras
3
2,108
65,790,561
clean dataset nulls, etc cannot astype a datetimelike from [datetime64[ns]] to [float64]
<p>I have the following function</p> <pre><code>def clean_dataset(df): assert isinstance(df, pd.DataFrame), &quot;df needs to be a pd.DataFrame&quot; df.dropna(inplace=True) indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(1) return df[indices_to_keep].astype(np.float64) </code></pre> <p>However when I try to clean the dataframe;</p> <pre><code>#def main(): df = load_data() pd.set_option('display.max_columns', None) df.head(5) clean_dataset(df) </code></pre> <p>I get this error:</p> <pre><code>TypeError: cannot astype a datetimelike from [datetime64[ns]] to [float64] </code></pre>
<p>I got this problem in the past. The solution is to convert to string first:</p> <pre><code>return df[indices_to_keep].astype(str).astype(np.float64) </code></pre>
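<p>If the string round-trip still fails on your data (most datetime strings will not parse as floats), an alternative is to restrict the float cast to the numeric columns and leave the datetimes alone; here is a sketch of the same function under that assumption:</p> <pre><code>import numpy as np
import pandas as pd

def clean_dataset(df):
    assert isinstance(df, pd.DataFrame), 'df needs to be a pd.DataFrame'
    df = df.dropna()
    numeric = df.select_dtypes(include=[np.number])
    keep = ~numeric.isin([np.inf, -np.inf]).any(axis=1)
    out = df[keep].copy()
    out[numeric.columns] = numeric[keep].astype(np.float64)
    return out
</code></pre>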
python|python-3.x|pandas|dataframe
1
2,109
3,209,362
How to plot empirical cdf (ecdf)
<p>How can I plot the empirical CDF of an array of numbers in matplotlib in Python? I'm looking for the cdf analog of pylab's &quot;hist&quot; function.</p> <p>One thing I can think of is:</p> <pre><code>from scipy.stats import cumfreq a = array([...]) # my array of numbers num_bins = 20 b = cumfreq(a, num_bins) plt.plot(b) </code></pre>
<p>If you like <code>linspace</code> and prefer one-liners, you can do:</p> <pre><code>plt.plot(np.sort(a), np.linspace(0, 1, len(a), endpoint=False)) </code></pre> <p>Given my tastes, I almost always do:</p> <pre><code># a is the data array x = np.sort(a) y = np.arange(len(x))/float(len(x)) plt.plot(x, y) </code></pre> <p>Which works for me even if there are <code>&gt;O(1e6)</code> data values. If you really need to downsample, I'd set</p> <pre><code>x = np.sort(a)[::down_sampling_step] </code></pre> <p><strong>Edit</strong> to respond to comment/edit on why I use <code>endpoint=False</code> or the <code>y</code> as defined above. The following are some technical details.</p> <p>The empirical CDF is usually formally defined as</p> <pre><code>CDF(x) = &quot;number of samples &lt;= x&quot;/&quot;number of samples&quot; </code></pre> <p>In order to exactly match this formal definition you would need to use <code>y = np.arange(1,len(x)+1)/float(len(x))</code> so that we get <code>y = [1/N, 2/N ... 1]</code>. This is an unbiased estimator that will converge to the true CDF in the limit of infinite samples (<a href="http://en.wikipedia.org/wiki/Empirical_distribution_function" rel="nofollow noreferrer">Wikipedia ref.</a>).</p> <p>I tend to use <code>y = [0, 1/N, 2/N ... (N-1)/N]</code> since:</p> <p>(a) it is easier to code/more idiomatic,</p> <p>(b) it is still formally justified since one can always exchange <code>CDF(x)</code> with <code>1-CDF(x)</code> in the convergence proof, and</p> <p>(c) it works with the (easy) downsampling method described above.</p> <p>In some particular cases, it is useful to define</p> <pre><code>y = (arange(len(x))+0.5)/len(x) </code></pre> <p>which is intermediate between these two conventions. This, in effect, says &quot;there is a <code>1/(2N)</code> chance of a value less than the lowest one I've seen in my sample, and a <code>1/(2N)</code> chance of a value greater than the largest one I've seen so far&quot;.</p> <p>Note that the selection of this convention interacts with the <code>where</code> parameter used in <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.step.html" rel="nofollow noreferrer"><code>plt.step</code></a> if it seems more useful to display the CDF as a piecewise constant function. In order to exactly match the formal definition mentioned above, one would need to use <code>where='pre'</code> with the suggested <code>y = [0, 1/N, 2/N ... (N-1)/N]</code> convention, or <code>where='post'</code> with the <code>y = [1/N, 2/N ... 1]</code> convention, but not the other way around.</p> <p>However, for large samples and reasonable distributions, the convention given in the main body of the answer is easy to write, is an unbiased estimator of the true CDF, and works with the downsampling methodology.</p>
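<p>If you'd rather not hand-roll the convention at all, statsmodels ships a ready-made ECDF (which, as far as I recall, follows the <code>y = [1/N, 2/N ... 1]</code> convention):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF

a = np.random.randn(1000)  # placeholder data array
ecdf = ECDF(a)
plt.plot(ecdf.x, ecdf.y)
plt.show()
</code></pre>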
python|numpy|statistics|matplotlib|scipy
122
2,110
63,374,903
Get all integer values between two set of arrays
<p>I have two set of numpy.arrays like for example:</p> <pre><code>a = np.array([10, 25, 36, 56, 78], dtype=int) b = np.array([15, 32, 45, 64, 89], dtype=int) </code></pre> <p>They represent the upper and lower limits of indices for some other dataset. So, I want a pythonic way to get all values between a pair of elements from both sets, like, for the first elements I must get:</p> <pre><code>c = np.array([10, 11, 12, 13, 14, 15], dtype=int) </code></pre> <p>and so on. Is there a one-liner for that?</p> <p>EDIT: I need it to come out as 1d-array. Sorry for not specifying that before.</p> <pre><code>c_all = np.array([10, 11, 12, 13, 14, 15, 25, 26, 27, ...], dtype=int) </code></pre>
<p>Shortest way of doing this using list comprehension:</p> <pre><code>[np.arange(x, y + 1) for x, y in zip(a, b)] </code></pre>
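<p>And to match the 1d output requested in the edit, the pieces can simply be concatenated:</p> <pre><code>c_all = np.concatenate([np.arange(x, y + 1) for x, y in zip(a, b)])
</code></pre>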
python|arrays|numpy
4
2,111
63,694,539
Checking for Specific Value in a Pandas Column and performing further operation
<p>I have a pandas DataFrame, which has a column named <code>is_retweeted</code>. The values in this column are either <code>Yes</code> or <code>No</code>. If the value is 'Yes', I want to go ahead performing X type sentiment analysis (the code for which I have). Else, if the value is <code>No</code>, I want to go ahead performing Y type sentiment analysis (again, the code for which I have).</p> <p>But I am unable to check for this condition. I get the same error seen <a href="https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o">here</a>. No solution there is helping for my use case.</p> <p>Based on what is suggested <a href="https://www.kite.com/python/answers/how-to-check-if-a-value-exists-in-a-pandas-dataframe-in-python" rel="nofollow noreferrer">here</a>, if I do <code>s = 'Yes' in tweet_df.is_retweeted print(s)</code><br /> I get <code>False</code> as output.</p> <p>This is what the dataframe looks like (for ease of representation I haven't displayed other columns here):</p> <p>tweet_dt is_retweeted <br /> 2020-09-01 No <br /> 2020-09-01 No <br /> 2020-09-01 Yes</p> <p>I want to perform the below sort of operation based on the value in the 'is_retweeted' column:</p> <pre><code>retweets_nerlst = [] while tweet_df['is_retweeted'] == 'Yes': for index, row in tqdm(tweet_df.iterrows(), total=tweet_df.shape[0]): cleanedTweet = row['tweet'].replace(&quot;#&quot;, &quot;&quot;) sentence = Sentence(cleanedTweet, use_tokenizer=True) </code></pre> <p>PS: My codebase can be seen <a href="https://colab.research.google.com/drive/1gwAQ1Tfwgejwgamoqpbu92j9R5lLa_jS#scrollTo=cFOuD4v-JUXo" rel="nofollow noreferrer">here</a>.</p>
<p>I think you can do it with np.where:</p> <pre><code>import pandas as pd import numpy as np def SentimentX(text): #your SentimentX code return f&quot;SentimentX_result of {text}&quot; def SentimentY(text): #your SentimentY code return f&quot;SentimentY_result of {text}&quot; data={&quot;date&quot;:[&quot;2020-09-01&quot;,&quot;2020-09-02&quot;,&quot;2020-09-03&quot;], &quot;is_retweeted&quot;:[&quot;No&quot;,&quot;No&quot;,&quot;Yes&quot;],'text':['text1','text2','text3']} df=pd.DataFrame(data) df['sentiment']=np.where(df[&quot;is_retweeted&quot;]==&quot;Yes&quot;,df['text'].apply(SentimentX),df['text'].apply(SentimentY)) print(df) </code></pre> <p>result:</p> <pre><code> date is_retweeted text sentiment 0 2020-09-01 No text1 SentimentY_result of text1 1 2020-09-02 No text2 SentimentY_result of text2 2 2020-09-03 Yes text3 SentimentX_result of text3 </code></pre>
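<p>One caveat: <code>np.where</code> as used above evaluates both sentiment functions for every row. If the analyses are expensive, boolean masks run each function only on its own rows (a sketch reusing the names from the answer):</p> <pre><code>mask = df['is_retweeted'] == 'Yes'
df.loc[mask, 'sentiment'] = df.loc[mask, 'text'].apply(SentimentX)
df.loc[~mask, 'sentiment'] = df.loc[~mask, 'text'].apply(SentimentY)
</code></pre>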
python|pandas|numpy|csv|data-science
0
2,112
24,788,732
Finding unique columns in an HDF5 dataset
<p>I'm using <code>HDF5</code> to store very large data sets of <code>uint8s</code> (400 x 121000000). There is a huge amount of redundancy in the columns (97% of the columns are not unique). I need to merge duplicate columns efficiently. This means that I need to remove duplicate columns, while storing metadata to remember which columns were merged.</p> <p>I am currently using Python with <code>h5py</code>, but if someone has an efficient C++ solution, I could simply use <code>boost::python</code> to implement it.</p> <p>My current solution consists in loading blocks of the data set into a <code>NumPy</code> array and using a <code>dictionary</code> to store the unique columns and the metadata.</p> <p>Note: the <code>HashableNDArray</code> class can be found <a href="http://machineawakening.blogspot.ca/2011/03/making-numpy-ndarrays-hashable.html" rel="nofollow noreferrer">here</a>. I just renamed it.</p> <pre><code>def find_column_redundancy(dataset): n_columns = dataset.shape[1] block_size = 500000 n_blocks = int(ceil(float(n_columns) / float(block_size))) d = {} analysed_column_count = 0 for block in xrange(n_blocks): block_offset = block*block_size block_data = dataset[:, block_offset : block_offset+block_size] for i in xrange(block_data.shape[1]): hashable_array = HashableNDArray(np.ascontiguousarray(block_data[:, i])) d[hashable_array] = np.append(d.get(hashable_array, np.array([], dtype=np.int32)), block_offset + i) analysed_column_count += 1 return d </code></pre> <p>Once I have iterated through all the columns, I return a <code>dictionary</code> that I use to write a new <code>HDF5</code> data set with the redundancy removed.</p> <p>I need help; this can't be optimal!</p> <p>Thanks!</p>
<p>I did some profiling with <a href="https://pythonhosted.org/line_profiler/" rel="nofollow">kernprof</a> and optimized my code. </p> <ul> <li><p>The biggest bottleneck was the instantiation of HashableNDArray objects. I found that by making the numpy arrays read-only, I could hash their data buffer without having to use the wrapper class. Also, extracting the buffer data as a string seems to allow for much faster hashing. To recover the column data, I use <code>np.frombuffer(dict_key, dtype=np.uint8)</code>.</p></li> <li><p>I also obtained a small speedup by replacing the dictionary with a <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow">defaultdict</a> and eliminating the try/except block.</p></li> <li><p>Since my data only contains binary values, I found that using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.packbits.html" rel="nofollow">np.packbits</a> on the columns saves memory by a factor of 8 when storing the keys and still allows matching identical columns. The only thing you need to remember when using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unpackbits.html" rel="nofollow">np.unpackbits</a> is the actual length of your columns, since numpy pads incomplete bytes with trailing zeros.</p></li> </ul> <p>Finally, I fine-tuned the block_size to use the maximum amount of memory available. This allows for slightly longer disk reads and much better CPU usage.</p> <p>This function used to run in ~18 hours on my data and it now runs in ~0.5 hours!</p> <pre><code>def find_column_redundancy(dataset): n_columns = dataset.shape[1] block_size = 10000000 n_blocks = int(ceil(float(n_columns) / float(block_size))) d = defaultdict(list) analysed_column_count = 0 for block in xrange(n_blocks): block_offset = block*block_size block_data = dataset[:, block_offset : block_offset+block_size] block_data = np.asfortranarray(block_data) block_data = np.packbits(block_data, axis=0) block_data.flags.writeable = False for i in xrange(block_data.shape[1]): d[block_data[:, i].data[:]].append(block_offset + i) analysed_column_count += 1 print float(analysed_column_count)/n_columns*100, "% completed. Dictionary has", len(d), "items." return d </code></pre>
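<p>The padding caveat with <code>np.packbits</code> mentioned above can be seen on a tiny example:</p> <pre><code>import numpy as np

col = np.array([1, 0, 1], dtype=np.uint8)
packed = np.packbits(col)                    # array([160]): 10100000, padded with zeros
restored = np.unpackbits(packed)[:len(col)]  # slice back to the true column length
</code></pre>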
python|c++|numpy|hdf5|h5py
3
2,113
30,167,822
group-by + case when Equivalent
<p>Want to select:</p> <pre><code>select user_id, max(case when value &gt; 0 then timestamp else 0 end) as max_timestamp_when_value_is_positive from df group by user_id </code></pre> <p>What is the right way to aggregate?</p> <pre><code>groupped = raw_data.groupby('user_id') res = groupped.agg({&lt;how-to-do-described-aggregation?&gt;}) </code></pre> <p><strong>UPDATE</strong> Explanation and example.</p> <pre><code>In [2]: df = pd.DataFrame({'user_id': [1, 1, 1, 2, 2, 3, 3, 3, 3], 'timestamp': [100, 200, 300, 10, 110, 10, 110, 210, 250], 'value': [0, 1, 0, 0, 0, 0, 10, 0, 1]}) In [3]: groupped = df.groupby('user_id') In [4]: res = groupped.agg({'timestamp': [min, max], 'value': lambda x: sum(x &gt; 0), &lt;described-magic&gt;}) In [5]: res Out[5]: timestamp value &lt;...magic...&gt; min max &lt;lambda&gt; user_id 1 100 300 1 200 2 10 110 0 0 3 10 250 2 210 </code></pre> <p><em>Magic</em> is what I want.</p>
<p>Create a new column <code>positive_value_timestamp</code> as</p> <pre><code>df['positive_value_timestamp'] = df.timestamp * df.value.apply(lambda x: 1 if x &gt; 0 else 0) </code></pre> <p>When grouping, take the <code>max</code> of this column</p> <pre><code>res = df.groupby('user_id').agg( { 'timestamp': [min, max], 'value': sum, 'positive_value_timestamp': max }) </code></pre>
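<p>The helper column can also be built without <code>apply</code>, using <code>Series.where</code>, which is usually faster on large frames:</p> <pre><code>df['positive_value_timestamp'] = df['timestamp'].where(df['value'] &gt; 0, 0)
</code></pre>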
python|pandas|group-by|dataframe
4
2,114
29,971,075
Count number of non-NaN entries in every column of Dataframe
<p>I have a really big DataFrame and I was wondering if there was a short (one- or two-liner) way to get a count of non-NaN entries in a DataFrame. I don't want to do this one column at a time as I have close to 1000 columns.</p> <pre><code>df1 = pd.DataFrame([(1,2,None),(None,4,None),(5,None,7),(5,None,None)], columns=['a','b','d'], index = ['A', 'B','C','D']) a b d A 1 2 NaN B NaN 4 NaN C 5 NaN 7 D 5 NaN NaN </code></pre> <p>Desired output:</p> <pre><code>a: 3 b: 2 d: 1 </code></pre>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html"><code>count()</code></a> method returns the number of non-<code>NaN</code> values in each column:</p> <pre><code>&gt;&gt;&gt; df1.count() a 3 b 2 d 1 dtype: int64 </code></pre> <p>Similarly, <code>count(axis=1)</code> returns the number of non-<code>NaN</code> values in each row.</p>
python|pandas|dataframe|count|nan
187
2,115
17,633,377
Merging two columns in a DataFrame while preserving first column values
<p>Here is an example DataFrame:</p> <pre><code>In [308]: df Out[308]: A B 0 1 1 1 1 2 2 2 3 3 2 4 4 3 5 5 3 6 </code></pre> <p>I want to merge A and B while keeping order, indexing and duplicates in A intact. At the same time, I only want to get values from B that are not in A so the resulting DataFrame should look like this:</p> <pre><code>In [308]: df Out[308]: A B 0 1 1 1 1 2 2 2 3 3 2 4 4 3 5 5 3 6 6 4 NaN 7 5 NaN 8 6 NaN </code></pre> <p>Any pointers would be much appreciated. I tried doing a concat of the two columns and a groupby but that doesn't preserve column A values since duplicates are discarded. </p> <p>I want to retain what is already there but also add values from B that are not in A.</p>
<p>To get those elements of B not in A, use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> method with the <code>~</code> invert (not) operator:</p> <pre><code>In [11]: B_notin_A = df['B'][~df['B'].isin(df['A'])] In [12]: B_notin_A Out[12]: 3 4 4 5 5 6 Name: B, dtype: int64 </code></pre> <p>And then you can append (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tools.merge.concat.html" rel="nofollow"><code>concat</code></a>) these with A, sort (if you use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.order.html" rel="nofollow"><code>order</code></a> it returns the result rather than doing the operation in place) and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>In [13]: A_concat_B_notin_A = pd.concat([df['A'], B_notin_A]).order().reset_index(drop=True) In [14]: A_concat_B_notin_A Out[14]: 0 1 1 1 2 2 3 2 4 3 5 3 6 4 7 5 8 6 dtype: int64 </code></pre> <p>and then create a new DataFrame:</p> <pre><code>In [15]: pd.DataFrame({'A': A_concat_B_notin_A, 'B': df['B']}) Out[15]: A B 0 1 1 1 1 2 2 2 3 3 2 4 4 3 5 5 3 6 6 4 NaN 7 5 NaN 8 6 NaN </code></pre> <p><em>FWIW I'm not sure whether this is necessarily the correct datastructure for you...</em></p>
python|pandas
0
2,116
15,711,019
How to detect if a 2D array is inside another 2D array?
<p>So with the help of a stack-overflow member, I have the following code:</p> <pre><code>data = "needle's (which is a png image) base64 code goes here" decoded = data.decode('base64') f = cStringIO.StringIO(decoded) image = Image.open(f) needle = image.load() while True: screenshot = ImageGrab.grab() haystack = screenshot.load() if detectImage(haystack, needle): break else: time.sleep(5) </code></pre> <p>I've written the following code to check if the needle is in the haystack:</p> <pre><code>def detectImage(haystack, needle): counter = 0 for hayrow in haystack: for haypix in hayrow: for needlerow in needle: for needlepix in needlerow: if haypix == needlepix: counter += 1 if counter == 980: #the needle has 980 pixels return True else: return False </code></pre> <p>The issue is that I get this error for line 3: 'PixelAccess' object is not iterable</p> <p>It was suggested to me that it would be easier to copy both needle and haystack into a numpy/scipy array. And then I can just use a function that checks to see if the 2D array needle is inside the 2D array haystack.</p> <p>I need help with:</p> <p>1) converting those arrays to numpy arrays.</p> <p>2) a function that checks to see if the 2D array needle is inside the 2D array haystack. My function doesn't work.</p> <p>These are the images:<br> Needle:<br> <a href="https://i.stack.imgur.com/7R6YD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7R6YD.png" alt="needle"></a> <br> Haystack:<br> <img src="https://web.archive.org/web/20130407130033/http://s1.postimg.org/aow57gdny/haystack.jpg" alt="haystack"> <img src="https://web.archive.org/web/20130407130031/http://s1.postimg.org/th80b1a26/haystack.jpg" alt="haystack"></p>
<p>To convert the image into a numpy array, you should be able to simply do this:</p> <pre><code>import numpy as np from PIL import Image needle = Image.open('needle.png') haystack = Image.open('haystack.jpg') needle = np.asarray(needle) haystack = np.asarray(haystack) </code></pre> <p>To get you started with finding the needle, note that this will give you a list of all the places where the corner matches:</p> <pre><code>haystack = np.array([[1,2,3],[3,2,1],[2,1,3]]) needle = np.array([[2,1],[1,3]]) np.where(haystack == needle[0,0]) #(array([0, 1, 2]), row-values # array([1, 1, 0])) col-values </code></pre> <p>Then, you can look at all the corner matches, and see if the subhaystack there matches:</p> <pre><code>h,w = needle.shape rows, cols = np.where(haystack == needle[0,0]) for row, col in zip(rows, cols): if np.all(haystack[row:row+h, col:col+w] == needle): print "found it at row = %i, col = %i"%(row,col) break else: print "no needle in haystack" </code></pre> <p>Below is a more robust version that finds the best match, and if it matches better than some percentage, considers the needle found. It returns the corner coordinate if found, <code>None</code> if not. Absolute differences are used so that positive and negative deviations can't cancel out, and the best-match tuple keeps a consistent (row, col, diff) layout.</p> <pre><code>def find_needle(needle, haystack, tolerance=.80): """ input: PIL.Image objects output: coordinate of found needle, else None """ # convert to grayscale ("L"uminosity) for simplicity, and to float # so the subtractions below don't wrap around in uint8 needle = np.asarray(needle.convert('L'), dtype=float) haystack = np.asarray(haystack.convert('L'), dtype=float) h,w = needle.shape H,W = haystack.shape L = haystack.max() best = (None, None, 1) # (row, col, normalised difference) rows, cols = np.where(np.abs(haystack - needle[0,0])/L &lt; tolerance) for row, col in zip(rows, cols): if row+h &gt; H or col+w &gt; W: continue # out of range diff = np.mean(np.abs(haystack[row:row+h, col:col+w] - needle))/L if diff &lt; best[-1]: best = (row, col, diff) return best[:2] if best[-1] &lt; tolerance else None </code></pre>
python|python-2.7|numpy|scipy|detection
3
2,117
15,618,976
Trying to use replace method with pandas
<p>I am trying to do a simple replace with pandas:</p> <pre><code>from pandas import * In [2]: df = DataFrame({1: [2,3,4], 2: [3,4,5]}) In [4]: df[2] Out[4]: 0 3 1 4 2 5 Name: 2 In [5]: df[2].replace(4, 17) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) c:\Python27\&lt;ipython-input-5-b4adce9e9b15&gt; in &lt;module&gt;() ----&gt; 1 df[2].replace(4, 17) AttributeError: 'Series' object has no attribute 'replace' </code></pre> <p>What am I missing?</p>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow"><code>replace</code></a> method was added in version 0.9.0 (see <a href="http://pandas.pydata.org/pandas-docs/version/0.9.0/whatsnew.html#other-new-features" rel="nofollow">release notes</a>).</p> <p><em>Note: You can inspect the docs for a <strong>specific version</strong> of pandas by selecting that version on the <a href="http://pandas.pydata.org/" rel="nofollow">right-hand-side of the webpage</a>. But do consider updating to the latest stable version.</em></p>
python|replace|pandas
3
2,118
71,911,437
Efficient way to get the N largest values of a column
<p>I need to get the <code>w highest</code> values of a <code>column</code>, grouping by country.</p> <p>The code below is working:</p> <pre><code>w = 100 df.groupby('country').apply(lambda x: x.sort_values('x', ascending=False).head(w)) </code></pre> <p>Is there a way to make this code more efficient? My dataset is huge, like 30kk rows.</p>
<p>You can try <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.SeriesGroupBy.nlargest.html" rel="nofollow noreferrer"><code>pandas.core.groupby.SeriesGroupBy.nlargest</code></a> on the column of interest:</p> <pre class="lang-py prettyprint-override"><code>w = 100 df.groupby('country')['x'].nlargest(w) </code></pre> <p>According to the doc:</p> <blockquote> <p>Faster than <code>.sort_values(ascending=False).head(n)</code> for small n relative to the size of the Series object.</p> </blockquote> <p>Since your <code>w=100</code> is small relative to <code>30kk</code>, it will be faster.</p>
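<p>Note that the result comes back as a Series with a (country, original index) MultiIndex; if you need a flat frame, something like this should work:</p> <pre><code>top = (df.groupby('country')['x']
         .nlargest(w)
         .reset_index(level=0))
</code></pre>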
python-3.x|pandas|jupyter-notebook
1
2,119
71,962,543
Python: Convert JSON from df column into individual df columns
<p>I have an excel file that looks something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Header1</th> <th>Header2</th> <th>Header3</th> </tr> </thead> <tbody> <tr> <td>data</td> <td>data</td> <td>[{&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value1&quot;},{&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value2&quot;}, {&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value3&quot;}]</td> </tr> <tr> <td>data</td> <td>data</td> <td>[{&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value1&quot;},{&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value2&quot;}, {&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value3&quot;}]</td> </tr> </tbody> </table> </div> <p>Header3 contains JSON strings that look like this</p> <pre><code>[ {&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value1&quot;}, {&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value2&quot;}, {&quot;key1&quot;:&quot;123&quot;,&quot;key2&quot;:&quot;Value3&quot;} ] </code></pre> <p>I would like to parse the JSON Header3 column and for each key create a column with the name of the key appended with the value of key2, the keys are always the same throughout the file.</p> <p>The end data frame should look something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Header1</th> <th>Header2</th> <th>Key1.Value1</th> <th>Key2.Value1</th> <th>Key1.Value2</th> <th>Key2.Value2</th> </tr> </thead> <tbody> <tr> <td>data</td> <td>data</td> <td>123</td> <td>Value1</td> <td>123</td> <td>Value2</td> </tr> <tr> <td>data</td> <td>data</td> <td>123</td> <td>Value1</td> <td>123</td> <td>Value2</td> </tr> </tbody> </table> </div> <p>Actual example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Company</th> <th>JSON</th> </tr> </thead> <tbody> <tr> <td>Amazon</td> <td>[{&quot;charge1&quot;:&quot;500&quot;, &quot;charge2&quot;:&quot;200&quot;,&quot;card&quot;:&quot;Visa&quot;},{&quot;charge1&quot;:&quot;234&quot;, &quot;charge2&quot;:&quot;654&quot;,&quot;card&quot;:&quot;Amex&quot;}</td> </tr> <tr> <td>Apple</td> <td>[{&quot;charge1&quot;:&quot;689&quot;, &quot;charge2&quot;:&quot;433&quot;,&quot;card&quot;:&quot;Visa&quot;},{&quot;charge1&quot;:&quot;25434&quot;, &quot;charge2&quot;:&quot;6554644&quot;,&quot;card&quot;:&quot;Amex&quot;}]</td> </tr> </tbody> </table> </div> <p>Needs to become:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Company</th> <th>charge1.Visa</th> <th>charge2.Visa</th> <th>card.Visa</th> <th>charge1.AMEX</th> <th>charge2.AMEX</th> <th>card.AMEX</th> </tr> </thead> <tbody> <tr> <td>Amazon</td> <td>500</td> <td>200</td> <td>Visa</td> <td>234</td> <td>654</td> <td>Amex</td> </tr> <tr> <td>Apple</td> <td>689</td> <td>433</td> <td>Visa</td> <td>25434</td> <td>6554644</td> <td>Amex</td> </tr> </tbody> </table> </div> <p>Before getting into fancy stuff I tried to at least normalize the data, but I'm returned with an empty series.</p> <pre><code>df = pd.read_excel('test.xlsx') pd.json_normalize(df.JSON) Output: 0 1 2 3 4 5 ... 188 rows x 0 columns </code></pre>
<p>You can't normalize it because it's loaded as a string from Excel. Try this:</p> <pre class="lang-py prettyprint-override"><code>import json s = df[&quot;JSON&quot;].apply(json.loads).explode() tmp = ( pd.DataFrame(s.to_list(), index=s.index) .set_index(&quot;card&quot;, append=True) .unstack() ) tmp.columns = [&quot;.&quot;.join(col) for col in tmp.columns] pd.concat([df[[&quot;Company&quot;]], tmp], axis=1) </code></pre> <p>The <code>card.*</code> columns look kinda weird. If you know which column you are referring to, you already know its value, so why include it in the output dataframe?</p>
python|json|pandas
1
2,120
71,826,557
Compare two rows on a loop for on Pandas
<p>I have the following dataframe where I want to determine if column A is greater than column B, and if column B is greater than column C. In case a value is smaller than the one in the previous column, I want to change that value to 0.</p> <pre><code>d = {'A': [6, 8, 10, 1, 3], 'B': [4, 9, 12, 0, 2], 'C': [3, 14, 11, 4, 9] } df = pd.DataFrame(data=d) df </code></pre> <p>I have tried this with np.where and it works:</p> <pre><code>df['B'] = np.where(df['A'] &gt; df['B'], 0, df['B']) df['C'] = np.where(df['B'] &gt; df['C'], 0, df['C']) </code></pre> <p>However, I have a huge number of columns and I want to know if there is any way to do this without writing each comparison separately. For example, a for loop.</p> <p>Thanks</p>
<p>To use a vectorial approach, you cannot simply use a diff as the condition depends on the previous value being replaced or not by 0. Thus two consecutive diff cannot happen.</p> <p>You can achieve a correct vectorial replacement using a shifted mask:</p> <pre><code>m1 = df.diff(axis=1).lt(0) # check if &lt; than previous m2 = ~m1.shift(axis=1, fill_value=False) # and this didn't happen twice df2 = df.mask(m1&amp;m2, 0) </code></pre> <p>output:</p> <pre><code> A B C 0 6 0 3 1 8 9 14 2 10 12 0 3 1 0 4 4 3 0 9 </code></pre>
python|pandas
1
2,121
72,040,940
Removing outliers in a df containing mixed dtype
<p>I am working on a pandas DataFrame containing numerical columns as well as string columns (dtype is <code>object</code>), and would like to remove the rows containing outliers with respect to the distributions within a column. In other words, detect the outliers in each column and drop the corresponding rows.</p> <p>I have found two solutions to this, but neither takes into account that my df does not contain only numbers, hence they both result in errors (when encountering strings, I assume).</p> <p><a href="https://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-a-pandas-dataframe">Way 1</a>:</p> <pre><code>from scipy import stats df[(np.abs(stats.zscore(df)) &lt; 3).all(axis=1)] </code></pre> <p>returns <code>TypeError: unsupported operand type(s) for /: 'str' and 'int'</code>. This is why I guess the error arises from the df having mixed dtypes.</p> <p><a href="https://towardsdatascience.com/how-to-exclude-the-outliers-in-pandas-dataframe-c749fca4e091" rel="nofollow noreferrer">Way 2</a>:</p> <pre><code>for col in df.columns: lower = df[col].quantile(0.05) upper = df[col].quantile(0.95) df = df[col].clip(lower=lower, upper=upper) </code></pre> <p>returns <code>KeyError</code> with this traceback:</p> <pre><code>File omissis, in Class.remove_outliers(self, df) 423 def remove_outliers(self, df): 424 for col in df.columns: --&gt; 425 lower = df[col].quantile(0.05) 426 upper = df[col].quantile(0.95) 427 df = df[col].clip(lower=lower, upper=upper) File omissis, in Series.__getitem__(self, key) 955 return self._values[key] 957 elif key_is_scalar: --&gt; 958 return self._get_value(key) 960 if is_hashable(key): 961 # Otherwise index.get_value will raise InvalidIndexError 962 try: 963 # For labels that don't resolve as scalars like tuples and frozensets File omissis, in Series._get_value(self, label, takeable) 1066 return self._values[label] 1068 # Similar to Index.get_value, but we do not fall back to positional -&gt; 1069 loc = self.index.get_loc(label) 1070 return self.index._get_values_for_loc(self, loc, label) File omissis, in RangeIndex.get_loc(self, key, method, tolerance) 387 raise KeyError(key) from err 388 self._check_indexing_error(key) --&gt; 389 raise KeyError(key) 390 return super().get_loc(key, method=method, tolerance=tolerance) KeyError: 'colname' </code></pre> <p>How would you solve this?</p> <p>EDIT: the idea is to skip the non numeric columns, to ignore them.</p>
<p>I would break the problem into stages:</p> <p>Firstly, identify (numeric) columns you want to do the outlier removal. <a href="https://stackoverflow.com/questions/25039626/how-do-i-find-numeric-columns-in-pandas">Reference</a></p> <pre class="lang-py prettyprint-override"><code>newdf = df.select_dtypes(include=np.number) </code></pre> <p>Now perform whatever filtering/outlier removal you want on the rows of <code>newdf</code>. Afterwards, <code>newdf</code> should contain only rows you wish to retain.</p> <p>Then keep only the rows of <code>df</code> those index are in <code>newdf</code>. <a href="https://stackoverflow.com/questions/48864923/select-certain-rows-by-index-of-another-dataframe/48864996#48864996">Reference</a></p> <pre class="lang-py prettyprint-override"><code>df = df[df.index.isin(newdf.index)] </code></pre>
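<p>Putting the two stages together with the z-score filter from the question gives a sketch like this (rows are kept when every numeric value is within 3 standard deviations):</p> <pre><code>import numpy as np
from scipy import stats

numeric = df.select_dtypes(include=np.number)
keep = (np.abs(stats.zscore(numeric)) &lt; 3).all(axis=1)
df_clean = df[keep]
</code></pre>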
python|pandas|dataframe|dtype
2
2,122
16,856,470
Is there a MATLAB accumarray equivalent in numpy?
<p>I'm looking for a fast solution to MATLAB's <a href="http://www.mathworks.com/help/matlab/ref/accumarray.html" rel="noreferrer"><code>accumarray</code></a> in numpy. The <code>accumarray</code> accumulates the elements of an array which belong to the same index. An example:</p> <pre><code>a = np.arange(1,11) # array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) accmap = np.array([0,1,0,0,0,1,1,2,2,1]) </code></pre> <p>Result should be </p> <pre><code>array([13, 25, 17]) </code></pre> <p><strong>What I've done so far:</strong> I've tried the <code>accum</code> function in the <a href="http://www.scipy.org/Cookbook/AccumarrayLike" rel="noreferrer">recipe here</a> which works fine but is slow.</p> <pre><code>accmap = np.repeat(np.arange(1000), 20) a = np.random.randn(accmap.size) %timeit accum(accmap, a, np.sum) # 1 loops, best of 3: 293 ms per loop </code></pre> <p>Then I tried to use the <a href="http://mldesign.net/blog/2013/02/18/speedy-numpy-replacement-for-matlab-accumarray/" rel="noreferrer">solution here</a> which is supposed to work faster but it doesn't work correctly:</p> <pre><code>accum_np(accmap, a) # array([ 1., 2., 12., 13., 17., 10.]) </code></pre> <p>Is there a built-in numpy function that can do accumulation like this? Or any other recommendations?</p>
<p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html"><code>np.bincount</code></a> with the <code>weights</code> optional argument. In your example you would do:</p> <pre><code>np.bincount(accmap, weights=a) </code></pre>
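<p>With the arrays from the question this gives exactly the expected result:</p> <pre><code>import numpy as np

a = np.arange(1, 11)
accmap = np.array([0, 1, 0, 0, 0, 1, 1, 2, 2, 1])
print(np.bincount(accmap, weights=a))  # [13. 25. 17.]
</code></pre>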
python|numpy|accumulator
21
2,123
19,035,736
python ncreduce for windows
<p>I'm doing astronomical image processing using Python, and numpy.std(a) is consuming way too much memory. Some searching turns up the ncreduce package by Luis Pedro, but I'm having difficulty building my download of the package from <a href="https://pypi.python.org/pypi/ncreduce/0.2" rel="nofollow">here</a>. <a href="http://code.activestate.com/pypm/ncreduce/" rel="nofollow">ActiveState</a> seems to suggest that this package won't build on Windows. I'm using Windows 7 and Python 2.7.</p> <p>Is it possible to use ncreduce on Windows? If not, is there an alternative fast algorithm for computing standard deviation or variance that isn't as memory-hungry as numpy.std(a)?</p>
<p>The package requires a few small changes to build with msvc. It is quite old and there are no tests so use at your own risk.</p> <pre><code>--- ncreduce/reduce.cpp Thu Aug 14 13:02:50 2008 +++ ncreduce/reduce.cpp Thu Sep 26 11:56:04 2013 @@ -6,6 +6,7 @@ #include &lt;iterator&gt; #include &lt;vector&gt; #include &lt;cmath&gt; +#include &lt;limits&gt; extern "C" { #include &lt;Python.h&gt; #include &lt;numpy/ndarrayobject.h&gt; @@ -98,7 +99,7 @@ } *result /= N; if (extra.is_std) { - *result = std::sqrt(*result); + *result = std::sqrt((double)(*result)); } } @@ -142,7 +143,7 @@ for (unsigned i = 0; i != result.diameter(); ++i) { first_result[i] = divide(first_result[i],ArrSize/result.diameter()); if (extra.is_std) { - first_result[i] = sqrt(first_result[i]); + first_result[i] = sqrt((double)first_result[i]); } } --- setup.py Thu Aug 14 13:54:48 2008 +++ setup.py Thu Sep 26 12:03:16 2013 @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- from numpy.distutils.core import setup, Extension -ncreduce = Extension('ncreduce', sources = ['ncreduce/reduce.cpp', 'ncreduce/numpy_utils.hpp'], extra_compile_args=['-Wno-sign-compare']) +ncreduce = Extension('ncreduce', sources = ['ncreduce/reduce.cpp', 'ncreduce/numpy_utils.hpp'], extra_compile_args=['/EHsc']) classifiers = [ 'Development Status :: 4 - Beta', </code></pre> <p>I put the binaries at <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a> . Search for ncreduce.</p>
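<p>If building the extension is not an option, a chunked two-pass standard deviation keeps numpy's temporaries small; here is a sketch for a 1-D array (population std, matching numpy.std's default ddof=0):</p> <pre><code>import numpy as np

def chunked_std(a, chunk=2**20):
    n = a.size
    # First pass: mean, accumulated in float64 to limit rounding error.
    mean = sum(np.sum(a[i:i + chunk], dtype=np.float64)
               for i in range(0, n, chunk)) / n
    # Second pass: sum of squared deviations, chunk-sized temporaries only.
    ssd = sum(np.sum((a[i:i + chunk] - mean) ** 2, dtype=np.float64)
              for i in range(0, n, chunk))
    return np.sqrt(ssd / n)
</code></pre>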
c++|python|numpy
3
2,124
22,213,576
how to optimize the performance of pandas retrieving stock quotes from yahoo feed
<p>My code loops over 1200+ stock symbols and retrieves the historical quotes (160–200 days) from the Yahoo feed. As this takes time, I store the quotes in CSV files so that only the quotes for the time delta are downloaded, i.e. for the date differences or for any quote CSV files that are missing... I used the pandas function call get_data_yahoo(stocknum,start,end).</p> <p>However, I found that this makes no difference to the time the whole process takes; it seems like downloading quotes for one day takes as long as for 200 days... How does pandas handle the stock quotes from Yahoo? Is any other feed better? Is there any suggestion or enhancement I could make to speed up the process?</p> <p>Below is the function I made for the stock quote retrieval:</p> <pre><code>def readStockPrice(stock,period=200,delay=1): #define the period for stock filtering now=dt.date.today() end=now.strftime("%Y-%m-%d") #start=(now-dt.timedelta(days=period)).strftime("%Y-%m-%d") if os.path.isfile('cache/'+stock+'.csv'): df=pd.read_csv('cache/'+stock+'.csv',index_col=0,parse_dates=True) lastrecorddate=df.index.to_pydatetime()[-1].date() delta=(now-lastrecorddate).days if delta&gt;delay: #print("retrieving "+stock+" quotes from the web") start=(lastrecorddate+dt.timedelta(days=1)).strftime("%Y-%m-%d") try: df_delta=web.get_data_yahoo(stock,start,end) df=df.append(df_delta) df_delta.to_csv('cache/'+stock+'.csv',header=False,mode='a') except IOError: return pd.DataFrame() else: #print("retrieving "+stock+" quotes from the web") start=(now-dt.timedelta(days=period)).strftime("%Y-%m-%d") try: df=web.get_data_yahoo(stock,start,end) if not df.empty: df.to_csv('cache/'+stock+'.csv') except IOError: return pd.DataFrame() return df </code></pre>
<p>Pandas uses _get_hist_yahoo to get your data, in pandas/io/data.py, like below:</p> <pre><code>def _get_hist_yahoo(sym, start, end, retry_count, pause): """ Get historical data for the given name from yahoo. Date format is datetime Returns a DataFrame. """ start, end = _sanitize_dates(start, end) url = (_HISTORICAL_YAHOO_URL + 's=%s' % sym + '&amp;a=%s' % (start.month - 1) + '&amp;b=%s' % start.day + '&amp;c=%s' % start.year + '&amp;d=%s' % (end.month - 1) + '&amp;e=%s' % end.day + '&amp;f=%s' % end.year + '&amp;g=d' + '&amp;ignore=.csv') return _retry_read_url(url, retry_count, pause, 'Yahoo!') </code></pre> <p>I think the url will be translated into a SQL query like</p> <pre><code>where date between START_TIME and END_TIME </code></pre> <p>If Yahoo puts an index on the date column, it could use that index, so the response time isn't related to how long the period is.</p>
python|pandas
0
2,125
22,398,266
Compound word Pattern detection using pandas for large datasets
<p>Let's say I have two lists of words, one that follows the other. They are connected by a space or dash. To make it simple, they will be the same words:</p> <pre><code>First=['Derp','Foo','Bar','Python','Monte','Snake'] Second=['Derp','Foo','Bar','Python','Monte','Snake'] </code></pre> <p>So the following combinations of the words exist (indicated by yes):</p> <pre><code> Derp Foo Bar Python Monte Snake Derp No No Yes Yes Yes Yes Foo Yes No No Yes Yes Yes Bar Yes Yes No Yes Yes Yes Python No Yes Yes No Yes Yes Monte No Yes Yes No No No Snake Yes No Yes Yes Yes No </code></pre> <p>I have a data set like this in which I am detecting particular words:</p> <pre><code>df=pd.DataFrame({'Name': [ 'Al Gore', 'Foo-Bar', 'Monte-Python', 'Python Snake', 'Python Anaconda', 'Python-Pandas', 'Derp Bar', 'Derp Python', 'JavaScript', 'Python Monte'], 'Class': ['Politician','L','H','L','L','H', 'H','L','L','Circus']}) </code></pre> <p>If I use Regex and mark all the data that matches the patterns, it would look something like this:</p> <pre><code>import pandas as pd df=pd.DataFrame({'Name': [ 'Al Gore', 'Foo-Bar', 'Monte-Python', 'Python Snake', 'Python Anaconda', 'Python-Pandas', 'Derp Bar', 'Derp Python', 'JavaScript', 'Python Monte'], 'Class': ['Politician','L','H','L','L','H', 'H','L','L','Circus']}) df['status']='' patterns=['^Derp(-|\s)(Foo|Bar|Snake)$', '^Foo(-|\s)(Bar|Python|Monte)$', '^Python(-|\s)(Derp|Foo|Bar|Snake)', '^Monte(-|\s)(Derp|Foo|Bar|Python|Snake)$'] for i in range(len(patterns)): df.loc[df.Name.str.contains(patterns[i]),'status'] = 'Found' print (df) </code></pre> <p>Here is the print:</p> <pre><code>&gt;&gt;&gt; Class Name status 0 Politician Al Gore 1 L Foo-Bar Found 2 H Monte-Python Found 3 L Python Snake Found 4 L Python Anaconda 5 H Python-Pandas 6 H Derp Bar Found 7 L Derp Python 8 L JavaScript 9 Circus Python Monte [10 rows x 3 columns] </code></pre> <p>For larger datasets it does not seem very feasible to write out all the Regex patterns. So is there a way to make a loop or something to go through patterns from a matrix of combinations, retrieving the patterns that exist (indicated as yes in the table above) and skipping the ones that do not (indicated as no in the table above)? I know that in the <code>itertools</code> library there is a function called <code>combinations</code> that can go through and generate all the possible patterns via looping.</p>
<p>I don't think it's too hard to generate those regexes from the combination matrix you've got:</p> <pre><code># Reading in your combination matrix: pattern_mat = pd.read_clipboard() # Map from first words to following words: w2_dict = {} for w1, row in pattern_mat.iterrows(): w2_dict[w1] = list(row.loc[row == 'Yes'].index) # Print all the resulting regexes: # (not sure if the backspace needs to be escaped?) for w1, w2_list in w2_dict.items(): pattern = "^{w1}(-|\s)({w2s})$".format(w1=w1, w2s='|'.join(w2_list)) print(pattern) </code></pre> <p>Output:</p> <pre><code>^Monte(-|\s)(Foo|Bar)$ ^Snake(-|\s)(Derp|Bar|Python|Monte)$ ^Bar(-|\s)(Derp|Foo|Python|Monte|Snake)$ ^Foo(-|\s)(Derp|Python|Monte|Snake)$ ^Python(-|\s)(Foo|Bar|Monte|Snake)$ ^Derp(-|\s)(Bar|Python|Monte|Snake)$ </code></pre>
python|pandas|pattern-matching|iteration
1
2,126
22,119,877
Need help thinking through splitting lists at integer divisions in Python
<p>I currently have a dataset as a list of lists which I'd like to split at integer divisions, and insert new data if there is overlap. For example:</p> <p>Edit: the data set is always sorted in ascending order.</p> <pre><code>data = [[1.565888, 2.073744], [2.073744, 2.962492], [2.962492, 4.52838], [4.52838, 5.417127], [5.417127, 6.390517], [7.025337, 7.871763]] fix_list(data) #[[1.565888, 2.0], [2.0, 2.073744], [2.073744, 2.962492], [2.962492, 3.0], [3.0, 4.0], [4.0, 4.52838], [4.52838, 5.0], [5.0, 5.417127], [5.417127, 6.0], [6.0, 6.390517], [7.025337, 7.871763]] </code></pre> <p>However, I'm at a loss when thinking through exactly how one might account for every situation, specifically when inserting [3.0, 4.0], as this is completely new information which did not exist in the previous list element.</p> <p>Any help as always is greatly appreciated.</p>
<h2>Update</h2> <p>This isn't as clean looking, but now all integer pairs are output, even if they aren't in the original data. On the other hand, now it's implemented with a recursive generator, which I think is kinda slick :)</p> <hr> <p>This might do what you want. <code>int_split</code> is a generator which scans through the input list, yielding pairs clipped to integer boundaries if the pairs have different integer values. This algorithm leaves items like <code>[3.0, 4.0]</code> alone, since they are already on integer boundaries. </p> <pre><code>import math data = [[1.565888, 2.073744], [2.073744, 2.962492], [2.962492, 4.52838], [4.52838, 5.417127], [5.417127, 6.390517], [7.025337, 7.871763], [11.1, 12.1]] def int_split(data): last_b = data[0][1] for item in data: a, b = item # First, make sure we haven't missed anything from the last loop if math.floor(a) - math.ceil(last_b) &gt; 1.0: for x in int_split([[last_b, a]]): yield x # Did we cross an integer boundary, i.e. # Is ceil(b) - floor(a) &gt; 1.0? if math.ceil(b) - math.floor(a) &gt; 1.0: # Yes, so split, making sure to include all integers between a and b # Find any integers in the range (and convert to floats) ints = [float(x) for x in range(int(math.ceil(a)), int(b))] for c in ints: # output those values yield [a, c] a = c else: yield [a, math.floor(b)] yield [math.floor(b), b] else: yield item # remember where we are last_b = b </code></pre> <p>This outputs: </p> <pre><code>[[1.565888, 2.0], [2.0, 2.073744], [2.073744, 2.962492], [2.962492, 3.0], [3.0, 4.0], [4.0, 4.52838], [4.52838, 5.0], [5.0, 5.417127], [5.417127, 6.0], [6.0, 6.390517], [7.025337, 7.871763], [7.871763, 8.0], [8.0, 9.0], [9.0, 10.0], [10.0, 11.0], [11.0, 11.1], [11.1, 12.0], [12.0, 12.1]] </code></pre>
python|list|numpy|insert|split
0
2,127
8,638,156
How to add differently shaped numpy arrays?
<p>I have 2 numpy arrays. One is a 2*2 array.</p> <pre><code>a = [[1,2],[3,4]] </code></pre> <p>The other is a 2*2*4 array.</p> <pre><code>b = [[[0,0,0,0],[0,0,0,0]],[[0,0,0,0],[0,0,0,0]]] </code></pre> <p>I want to add them so that I have a 2*2*4 array, c.</p> <pre><code>c = [[[1,0,0,0],[2,0,0,0]],[[3,0,0,0],[4,0,0,0]]] </code></pre> <p>What's the correct numpythonic way to do this?</p> <p>Edit: This appears to work </p> <pre><code> b[:,:,:1]+=a[:,:,np.newaxis] </code></pre>
<p>I'm not sure whether you can do the sum in a single step. Here it is in two steps:</p> <pre><code>c = b.copy()
c[...,0] += a
</code></pre>
python|multidimensional-array|numpy
0
2,128
8,685,994
Python, hstack column numpy arrays (column vectors) of different types
<p>I currently have a numpy multi-dimensional array (of type float) and a numpy column array (of type int). I want to combine the two into a mutli-dimensional numpy array. </p> <pre><code>import numpy &gt;&gt; dates.shape (1251,) &gt;&gt; data.shape (1251,10) &gt;&gt; test = numpy.hstack((dates, data)) ValueError: all the input arrays must have same number of dimensions </code></pre> <p>To show that the types of the arrays are different:</p> <pre><code>&gt;&gt; type(dates[0]) &lt;type 'numpy.int64'&gt; &gt;&gt; type(data[0,0]) &lt;type 'numpy.float64'&gt; </code></pre>
<pre><code>import numpy as np np.column_stack((dates, data)) </code></pre> <p>The types are cast automatically to the most precise, so your int array will be converted to float.</p>
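<p>Equivalently, <code>hstack</code> works once the 1-D array gets an explicit column axis (a sketch):</p> <pre><code>test = numpy.hstack((dates[:, None], data))  # dates[:, None] has shape (1251, 1)
</code></pre>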
python|numpy
11
2,129
55,287,289
Modifying a tensorflow graph to output an intermediate value, after training
<p>I'm really new to TF, and so this is my disclaimer that what I'm asking might not make much sense. I'd appreciate any corrections to my understanding. I'm happy to provide more code / information if necessary.</p> <p>I'm working from the following tutorial: <a href="https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow" rel="nofollow noreferrer">https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow</a>.</p> <p>I've added <code>name_scope</code>s to the variables / placeholders / etc to help me understand what's going on. Instead of posting all of the code, I thought just posting an image of the graph might be enough for this question:</p> <p><a href="https://i.stack.imgur.com/qj9Cl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qj9Cl.png" alt="enter image description here"></a></p> <p>There are a number of things about this graph that I still don't understand, so as a side note: If anybody has good resources for getting a good intuition for these graphs, I'd appreciate that guidance.</p> <h2>My understanding</h2> <p>It looks like the graph currently accepts a feed of <code>input_data</code> and <code>labels</code> in order to calculate error during training. I believe that "accuracy" is currently the output (as it doesn't have any outputs itself?). It makes sense to me that the cost takes as inputs the current predictions and source-of-truth labels.</p> <p>Because I found this as part of a tutorial, of course training works well and I couldn't have done that by myself just yet. I'm willing to overlook that for just a second as I try to grasp for intuition here.</p> <h2>My question</h2> <p>I'm interested in now calling <code>sess.run()</code> on my graph with <em>only</em> <code>input_data</code>, and viewing the results of "predictions". It seems reasonable - I don't even have labels when I'm using this model, say, in a production system. The whole point is to get back predictions.</p> <p>What steps might I take to so that I can call <code>sess.run</code> and get back the new desired output? I'd still somehow need to be able to train the model, though? What "process" might I use to be able to train with both placeholders, and then reduce it to one for predicting?</p>
<p>The argument of <code>sess.run</code> is always a reference to a node on the graph (i.e. what you have provided an image of). </p> <p>Tensorflow is written such that it only needs the values of upstream nodes in order to compute the value at some node--not <em>all</em> possible inputs. Your question appears to be how to get the predictions from the network without providing the truth labels (what you want the network to learn during training). This is the quintessential "testing" scenario.</p> <p>With no more information about your code, it seems like you should be able to simply do:</p> <pre><code>with tf.Session() as sess:
    predictions_eval = sess.run(predictions, feed_dict={input_data: input_data})
</code></pre>
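<p>Training and inference then differ only in what you feed (a sketch, assuming the graph also defines a <code>train_op</code>, a <code>loss</code> node, and a <code>labels</code> placeholder as in the tutorial):</p> <pre><code>with tf.Session() as sess:
    # training step: both placeholders are upstream of the loss
    _, loss_val = sess.run([train_op, loss],
                           feed_dict={input_data: batch_x, labels: batch_y})
    # inference: only the input placeholder is upstream of the predictions
    preds = sess.run(predictions, feed_dict={input_data: batch_x})
</code></pre>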
tensorflow|tensorboard
1
2,130
55,336,944
How to add a list to a new column in pandas?
<p>I would like to add a new column with lists by iterating over every row in pandas</p> <p>I have tried to use df.at but it gives me a value error</p> <pre><code> import pandas as pd df = pd.DataFrame({'A': [1, 2, 3], 'B': ['x', 'y', 'z']}) for index, row in df.iterrows(): df.at[index,'new_col'] = ['m','n'] </code></pre> <p>Actual Result:</p> <pre><code> Traceback (most recent call last): File "D:/Projects/fasttextproj/test.py", line 10, in df.at[index,'new_col'] = ['m','n'] File "D:\Projects\fasttext\lib\site-packages\pandas\core\indexing.py", line 2287, in __setitem__ self.obj._set_value(*key, takeable=self._takeable) File "D:\Projects\fasttext\lib\site-packages\pandas\core\frame.py", line 2823, in _set_value self.loc[index, col] = value File "D:\Projects\fasttext\lib\site-packages\pandas\core\indexing.py", line 190, in __setitem__ self._setitem_with_indexer(indexer, value) File "D:\Projects\fasttext\lib\site-packages\pandas\core\indexing.py", line 366, in _setitem_with_indexer self._setitem_with_indexer(new_indexer, value) File "D:\Projects\fasttext\lib\site-packages\pandas\core\indexing.py", line 611, in _setitem_with_indexer raise ValueError('Must have equal len keys and value ' ValueError: Must have equal len keys and value when setting with an iterable </code></pre>
<p>Try:</p> <p><code>df['new_col'] = [['m', 'n']] * df.shape[0]</code></p>
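<p>If you prefer the per-row loop from the question, pre-creating the column with <code>object</code> dtype also works on recent pandas versions (a hedged sketch):</p> <pre><code>df['new_col'] = None  # object dtype, so each cell can hold a list
for index, row in df.iterrows():
    df.at[index, 'new_col'] = ['m', 'n']
</code></pre>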
python-3.x|pandas
0
2,131
55,273,385
Unable to add two layers in Keras of Tensorflow
<p>I have a simple regression model, shown below. The layers <code>layer_abc</code> and <code>layer_efg</code> both have <code>(None, 5)</code> as output, so their outputs have the same dimension and can be added. Thus I want to unhide the line <code>#keras.layers.Add()(['layer_abc', 'layer_efg'])</code>. But whenever I do this, I get the error <code>AttributeError: 'str' object has no attribute 'get_shape'</code>. If I don't unhide this line, then the code is fine. </p> <p>How can I add the two layers without getting this error? Many thanks!</p> <pre><code>from __future__ import absolute_import, division, print_function
from scipy import misc
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Activation
import numpy as np
import matplotlib.pyplot as plt

train_images=np.array([[[0],[1],[2]],[[0],[0],[2]],[[1],[1],[1]],[[1],[0],[1]]])
train_labels=np.array([[1],[0],[1],[0]])

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(3, 1)),
    keras.layers.Dense(5, activation=tf.nn.relu,name='layer_abc'),
    keras.layers.Dense(5, activation=tf.nn.relu,name='layer_efg'),
    #keras.layers.Add()(['layer_abc', 'layer_efg']),
    keras.layers.Dense(1, activation=tf.nn.softmax),
])

model.compile(optimizer='adam',
    loss='mean_squared_error',
    metrics=['accuracy','mean_squared_error'])

print(model.summary())

model.fit(train_images, train_labels, epochs=2)
</code></pre>
<p>You can use the functional API like this to do the Add, for single output between 0 and 1, use the sigmoid activation for the output:</p> <pre><code>input = keras.layers.Input((3,1)) x1 = keras.layers.Dense(5, activation=tf.nn.relu, name='layer_abc')(input) x2 = keras.layers.Dense(5, activation=tf.nn.relu, name='layer_efg')(input) x = keras.layers.Add()([x1, x2]) x = keras.layers.Flatten()(x) output = keras.layers.Dense(1, activation=tf.nn.sigmoid)(x) model = keras.models.Model(input, output) </code></pre> <p>This might work:</p> <pre><code>model = keras.Sequential([ keras.layers.Flatten(input_shape=(3, 1)), keras.layers.Dense(5, activation=tf.nn.relu,name='layer_abc'), keras.layers.Dense(5, activation=tf.nn.relu,name='layer_efg')]) model.add(keras.layers.Lambda(lambda x: tf.add(model.layers[1].output, x))) model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid)) </code></pre>
python|tensorflow|keras|neural-network|deep-learning
1
2,132
56,693,554
GroupBy Method Changing DataType
<p>Using Python3 and Anaconda, I have pandas and os imported on ipython. I have an extremely large csv file. After using read_csv on the file, I try to use .groupby() on two columns, but it changes the data type from DataFrame to DataFrameGroupBy, and I can no longer run data frame methods on it.</p> <p>I can't think of anything to try. I have very little experience with pandas, gained through codecademy. My code appears to work there.</p> <pre><code>import os import pandas as pd totals = pd.read_csv('filename') band_gaps = totals.groupby(['column1','column2']) band_gaps.info() AttributeError: Cannot access callable attribute 'info' of 'DataFrameGroupBy' objects, try using the 'apply' method type(band_gaps) pandas.core.groupby.generic.DataFrameGroupBy </code></pre> <p>I expect that when I run band_gaps.info(), it provides me with the info for the data frame. Instead, it gives me an error. When I check band_gaps' type, it is no longer a dataframe, and is instead a DataFrameGroupBy.</p>
<p>If you look at the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">Pandas groupby documentation</a> you'll see that it returns a <code>DataFrameGroupBy</code> or <code>SeriesGroupBy</code> object, depending on whether you called <code>.groupby</code> on a <code>DataFrame</code> or a <code>Series</code>. So the behavior you've observed shouldn't be surprising. </p> <p>More importantly, why does Pandas do that? Well, in your case you're grouping a bunch of rows together. Pandas can hold on to some representation of the grouped <code>DataFrame</code>, but it can't do anything else with it (ie, return it to you as another <code>DataFrame</code>) until you apply an aggregation function like <code>.sum</code> or <code>.count</code>. An aggregation function takes each group of rows and defines some way of turning that row into a single row. Try applying one of those aggregation functions to <code>band_gaps</code> and see what happens.</p> <p>For example:</p> <pre><code>df.groupby('column1').mean() </code></pre> <p>will return a <code>DataFrame</code> expressing the mean of every column after grouping all rows by <code>column1</code>.</p> <pre><code>df.groupby('column1')['column2'].sum() </code></pre> <p>will return a <code>Series</code> with the sum of the values in <code>column2</code> after grouping by <code>column1</code>. Note that</p> <pre><code>df.groupby('column1').sum()['column2'] </code></pre> <p>may also be possible, but in that case you're taking the column you're interested in after you've aggregated over all columns, which is slower than slicing before aggregating.</p>
pandas|pandas-groupby
1
2,133
56,719,315
extracting data from numpy array in python3
<p>I imported my <code>csv</code> file into Python using <code>numpy.txt</code> and the results look like this:</p> <pre><code>&gt;&gt;&gt; print(FH)
array([['Probe_Name', '', 'A2M', ..., 'POS_D', 'POS_E', 'POS_F'],
       ['Accession', '', 'NM_000014.4', ..., 'ERCC_00092.1', 'ERCC_00035.1', 'ERCC_00034.1'],
       ['Class_Name', '', 'Endogenous', ..., 'Positive', 'Positive', 'Positive'],
       ...,
       ['CF33294_10', '', '6351', ..., '1187', '226', '84'],
       ['CF33299_11', '', '5239', ..., '932', '138', '64'],
       ['CF33300_12', '', '37372', ..., '981', '202', '58']], dtype=object)
</code></pre> <p>Every single list is a column, and the first item of every column is the header. I want to plot the data in different ways. To do so, I want to make a variable for every single column, named after the header. For example, for the first column, <code>print(Probe_Name)</code> should show the column's contents like this:</p> <pre><code>A2M
.
.
.
POS_D
POS_E
POS_F
</code></pre> <p>And this is the case for the rest of the columns; then I will plot the variables. I tried to do that in python3 like this:</p> <pre><code>def items(N_array):
    for item in N_array:
        name = item[0]
        content = item[1:]
    return name, content
</code></pre> <p><code>print(items(FH))</code> does not return what I expect. Do you know how to fix it?</p>
<p>One simple way to do this is with pandas dataframes. When you read the csv file using a pandas dataframe, you essentially get a collection of 'columns' (called series in pandas).</p> <pre><code>import pandas as pd df = pd.read_csv("your filename.csv") df Probe_Name Accession 0 A2m MD_9999 1 POS_D NM_0014.4 2 POS_E 99999 </code></pre> <p>Now we can deal with each column, which is named automatically by the header column.</p> <pre><code>print(df['Probe_Name']) 0 A2m 1 POS_D 2 POS_E </code></pre> <p>Furthermore, you can you do plotting (assuming you have numeric data in here somewhere).</p> <p><a href="http://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/index.html</a></p>
python-3.x|csv|numpy
0
2,134
56,709,469
How would I make these slices within an aggregate group by more programmatic?
<p>I had to set up 3 different functions for the slices I wanted to incorporate alongside the count, mean, median aggregate calls. Is there an easier way?</p> <pre class="lang-py prettyprint-override"><code>def from_0_up_to_6(x): return (x &lt; 6).sum() def from_6_up_to_12(y): return ((y &gt;= 6) &amp; (y &lt; 12)).sum() def from_12_and_up(z): return (z &gt;= 12).sum() MonthsUntilBUGift = df.groupby('BusinessUnit').agg({'MonthsTillBUGift': ['mean','count','median' , from_0_up_to_6 , from_6_up_to_12 , from_12_and_up ]}) </code></pre> <p>Desired results are fine, but my concern is when stakeholders choose to redefine the ranges/slices, which could make me go crazy.</p>
<pre><code>from functools import partial bounds = [0, 6, 12, float('inf')] def agg(lower, upper, x): return ((lower &lt;= x) &amp; (x &lt; upper)).sum() aggs = [ partial(agg, lower, upper) for lower, upper in zip(bounds, bounds[1:]) ] print(aggs) # [functools.partial(&lt;function agg at 0x10afad7b8&gt;, 0, 6), functools.partial(&lt;function agg at 0x10afad7b8&gt;, 6, 12), functools.partial(&lt;function agg at 0x10afad7b8&gt;, 12, inf)] </code></pre>
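<p>If these go on to <code>.agg</code>, note that pandas labels each aggregation by its <code>__name__</code>, which <code>partial</code> objects lack by default. A closure-based factory makes the names explicit (a sketch following the same bounds):</p> <pre><code>def make_agg(lower, upper):
    def agg(x):
        return ((lower &lt;= x) &amp; (x &lt; upper)).sum()
    agg.__name__ = 'from_{}_to_{}'.format(lower, upper)
    return agg

aggs = [make_agg(lower, upper) for lower, upper in zip(bounds, bounds[1:])]
MonthsUntilBUGift = df.groupby('BusinessUnit').agg(
    {'MonthsTillBUGift': ['mean', 'count', 'median'] + aggs})
</code></pre>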
python|pandas|function
0
2,135
56,640,164
More efficient way to import from CSV
<p>I'd like to import from a CSV to an object. For ease, we'll say it's a city and I have a CSV like so:</p> <pre><code>Seattle,WA,600,000,Seahawks,Starbucks </code></pre> <p>What is the best way to import that into a class? Right now I import CSV, and do something like the following:</p> <pre class="lang-py prettyprint-override"><code>with open(filePath,'rb') as r: cityReader = csv.reader(r) for row in cityReader: cityName = row[0] state = row[1] population=row[2] nflTeam=row[3] bigCompany=row[4] newCity=city(cityName,state,population,nflTeam,bigCompany) addToCityList(newCity) </code></pre> <p>I'm wondering if there is a better way. I feel like you maybe could use pandas for something like this? This just doesn't seem the most efficient way.</p>
<p>Try:</p> <pre><code>import pandas as pd

df = pd.read_csv("/path/to/XXX.csv")
for idx, row in df.iterrows():
    city_name = row.iloc[0]
    state = row.iloc[1]
</code></pre>
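<p>Alternatively, if the end result should be a list of <code>city</code> objects, <code>itertuples</code> is considerably faster than <code>iterrows</code> (a sketch; <code>header=None</code> because the sample CSV has no header row):</p> <pre><code>import pandas as pd

df = pd.read_csv("/path/to/XXX.csv", header=None)
cities = [city(*row) for row in df.itertuples(index=False)]
</code></pre>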
python|pandas
-1
2,136
56,782,935
How to re-write using numpy
<p>I'm wondering how I can re-write this using vectorization with numpy, assuming I change all lists to numpy arrays.</p> <pre><code># dcdw1 = m x m array # a1 = len(x) x m array # a2 = len(x) x 1 array # w2 = m x 1 array # x = len(x) x m array # y = len(x) x 1 array for i in range(len(x)): for j in range(m): for k in range(m): dcdw1[k, j] = (a2[i] - y[i]) * a2[i] * (1 - a2[i]) * w2[j] * a1[i, j] * (1 - a1[i, j]) * x[i, k] # other stuff that uses dcdw1 </code></pre>
<pre><code># dcdw1 = m x m array
# a1 = len(x) x m array
# a2 = len(x) x 1 array
# w2 = m x 1 array
# x = len(x) x m array
# y = len(x) x 1 array
import numpy as np

m = 10
lx = 4  # len(x)

dcdw1 = np.zeros([lx, m, m])
dcdw2 = np.zeros_like(dcdw1)
a1 = np.ones([lx, m]) * 0.5
a2 = np.ones([lx, 1]) * 2
w2 = np.ones([m, 1]) * 3
x = np.ones([lx, m]) * 4
y = np.ones([lx, 1]) * 5

for i in range(lx):
    for j in range(m):
        for k in range(m):
            dcdw1[i, k, j] = (a2[i] - y[i]) * a2[i] * (1 - a2[i]) * w2[j] * a1[i, j] * (1 - a1[i, j]) * x[i][k]
            # Why are you using j on rows and k on columns? anyways

print(dcdw1[-1])

first_term = np.reshape((a2-y) * a2 * (1-a2), [lx, 1, 1])  # on the 3d tensor level, applied to each matrix separately
# corresponds to (a2[i] - y[i]) * a2[i] * (1 - a2[i])
print(first_term.shape)  # [lx, 1, 1] obviously

a1_term = (a1 * (1-a1))[:, :, np.newaxis]  # on each matrix calculate this vector product [lx, m] and shape to [lx, m, 1]
print(a1_term.shape)

row_level_term = a1_term * w2  # element-wise multiplication yet again
# w2 is [m, 1] so it is broadcast to every matrix

row_level_tensor = first_term * row_level_term  # this applies the first-term values to every matrix -&gt; [lx, m, 1]
print(row_level_tensor.shape)

x = np.reshape(x, [lx, 1, 10])  # x is weird. For each matrix it is used as a coefficient for matrix rows
# x[i][k] # ignoring i, k basically says: take this row vector
# and dstack it m times with different coeffs
# to create giant linearly dependent matrices
print(x.shape)

dcdw2 = np.matmul(row_level_tensor, x)  # mxm matrix product, lx times
print(dcdw2[-1])
</code></pre> <p>This is quite ugly, but it gets the job done (two reshapes and a newaxis, ugh; people do not usually perform element-wise matrix ops on tensors, I guess, at least I do not). I did not like overwriting <code>dcdw1</code>, so the code above creates a tensor where your current <code>dcdw1</code> is the last element. I checked it against your serial code with loops and the output is the same. You need to tweak your current code a bit, though.</p> <p>Here is the <a href="https://colab.research.google.com/drive/1AloJtypxvwlmf6ELJkLZdhEUklRQh1dX" rel="nofollow noreferrer">Colab link</a> of the code.</p> <p>Improvements and suggestions are most welcome.</p>
python|numpy
1
2,137
66,862,832
Barchart with dummies
<p>I have a dataset with the following explicit dummy variables: &quot;Kidhome&quot; and &quot;Teenhome&quot;. Of course, each row where both &quot;Kidhome&quot; and &quot;Teenhome&quot; = 0 implies an implicit category that is neither &quot;teens nor kids at home&quot;.</p> <p>I also have another variable (categorical) that is &quot;marital status&quot;:</p> <pre><code>&gt;&gt;&gt; df['Marital_Status'].value_counts()
Married     2108
Together    1257
Single      1052
Divorced     437
Widow        146
Name: Marital_Status, dtype: int64
</code></pre> <p>Now I wonder if there is a way:</p> <ol> <li><p>To create a barchart in seaborn that counts all three dummies (including the implicit one, by giving it a name in the plot) and,</p> </li> <li><p>At the same time, have the hue be the categorical variable &quot;Marital_Status&quot; for each of the 3 dummy bins plotted?</p> </li> </ol>
<ol> <li>Create the implicit column <code>Neitherhome</code>:</li> </ol> <pre><code>df['Neitherhome'] = (df.Kidhome.eq(0) &amp; df.Teenhome.eq(0)).astype(int) </code></pre> <ol start="2"> <li>Aggregate the grouped sums:</li> </ol> <pre><code>data = df.groupby('Marital_Status').sum().reset_index() </code></pre> <ol start="3"> <li>Plot with <code>hue='Marital_Status'</code>:</li> </ol> <pre><code>sns.catplot(data=data.melt('Marital_Status'), kind='bar', x='variable', y='value', hue='Marital_Status') </code></pre> <p><a href="https://i.stack.imgur.com/IlZWO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IlZWO.png" alt="Figure output" /></a></p>
python|pandas|seaborn
2
2,138
66,892,641
How to save a df in many excel files?
<p>I have this df called result:</p> <pre><code>        CODE  YEAR  MONTH  DAY  TMAX  TMIN    PP
9984  000130  1991      1    1  32.6  23.4   0.0
9985  000130  1991      1    2  31.2  22.4   0.0
9986  000130  1991      1    3  32.0   NaN   0.0
9987  000130  1991      1    4  32.2  23.0   0.0
9988  000130  1991      1    5  30.5  22.0   0.0
...      ...   ...    ...  ...   ...   ...   ...
20118 000130  2018      9   30  31.8  21.2   NaN
30028 000132  1991      1    1  35.2   NaN   0.0
30029 000132  1991      1    2  34.6   NaN   0.0
30030 000132  1991      1    3  35.8   NaN   0.0
30031 000132  1991      1    4  34.8   NaN   0.0
...      ...   ...    ...  ...   ...   ...   ...
45000 000132  2019     10    5  35.5   NaN  21.1
</code></pre> <p>I want to save the data in many excel files, one excel file per code. I have 371 unique codes in the CODE column.</p> <p>One excel file for code 000130, another excel file for code 000132, etc.</p> <p>I've tried this code:</p> <pre><code>for code, data in result.groupby('CODE'):
    name="/PATH/TO/FILES/station"+str(code)+".xlsx"
    writer = pd.ExcelWriter(name)
    data.to_excel(writer,'Sheet2',index = False, header = False)
    writer.save()
</code></pre> <p>But it takes too long and is not working. Would you mind helping me? Thanks.</p>
<p>You're not closing the files; ExcelWriter is a <code>contextmanager</code>, so you should be using it with a with clause (so that the file is closed).</p> <pre class="lang-py prettyprint-override"><code>for code, data in result.groupby('CODE'): name=&quot;/PATH/TO/FILES/station&quot;+str(code)+&quot;.xlsx&quot; with pd.ExcelWriter(name) as writer: data.to_excel(writer,'Sheet2',index = False, header = False) </code></pre> <p>For your case, though, you could also just use DataFrame.to_excel with a filename (using ExcelWriter is more helpful if either you want to do additional modifications to the sheet like adding charts, or if you want to write multiple DataFrames to the same excel file).</p> <pre class="lang-py prettyprint-override"><code>for code, data in result.groupby('CODE'): name=&quot;/PATH/TO/FILES/station&quot;+str(code)+&quot;.xlsx&quot; data.to_excel(name, index=False, header=False) </code></pre>
pandas
1
2,139
67,180,716
How to create a column containing whitespace using assign() method in Pandas
<p><strong>Sample Data:</strong></p> <pre><code>import pandas as pd

df1=pd.DataFrame({'Original City': {'Daimler': 'Chicago', 'Mitsubishi': 'LA', 'Tesla': 'Vienna', 'Toyota': 'Zurich', 'Renault': 'Sydney', 'Ford': 'Toronto'}})
df2=pd.DataFrame({'Current City': {'Tesla': 'Amsterdam', 'Renault': 'Paris', 'BMW': 'Munich', 'Fiat': 'Detroit', 'Audi': 'Berlin', 'Ferrari': 'Bruxelles'}})
</code></pre> <p><strong>Now my question is:</strong></p> <p>I am trying to create a column whose name contains <code>whitespace</code> using the <code>assign()</code> method:</p> <pre><code>df1.assign(Original City=df2['Current City'])
                   ^
               #white space
</code></pre> <p><strong>I tried:</strong></p> <pre><code>df1.assign('Original City'=df2['Current City'])
df1.assign(r'Original City'=df2['Current City'])
</code></pre> <p>And, as expected, it is giving me an error:</p> <pre><code>  File &quot;&lt;ipython-input-132-2ac564ac9d47&gt;&quot;, line 1
    df1.assign(Original City=df2['Current City'])
                           ^
SyntaxError: invalid syntax
</code></pre> <p>But I am able to assign my column like this:</p> <pre><code>df1.assign(OriginalCity=df2['Current City'])
</code></pre> <p><strong>Is it possible to create a column whose name contains a ' ' (white space) using the <code>assign()</code> method?</strong></p>
<p>You can unpack a dictionary:</p> <pre><code>d = {'Original City': df2['Current City']}
df1.assign(**d)
</code></pre> <p>Or unpack a dataframe:</p> <pre><code>df1.assign(**df2[['Current City']].rename(columns={'Current City': 'Original City'}))
</code></pre>
python|pandas
5
2,140
67,127,637
Given a pandas dataframe that contains multiple dates and multiple times per date how can I select the times for each date?
<p>I've tried:</p> <pre><code>start_date = '2019-12-02' end_date = '2019-12-02' day_df = df[(df['timestamp'] &gt;= start_date) &amp; (df['timestamp'] &lt;= end_date)] </code></pre> <p>which returns an empty Dataframe</p> <p>I've also tried:</p> <pre><code>start_date = '2019-12-02' + ' 00:00:00' end_date = '2019-12-02' + ' 23:59:59' </code></pre> <p>which also returns an empty Dataframe.</p> <p>Edit: Example Data</p> <pre><code>2019-12-02 06:41:54 2019-12-02 06:41:54 2019-12-03 00:38:56 2019-12-03 06:37:13 2019-12-03 06:52:09 </code></pre> <p>Thanks</p>
<p>Based on your comments this might work:</p> <pre><code>df['timestamp'] = pd.to_datetime(df.timestamp) df.set_index('timestamp', inplace=True) # a DateTimeIndex is necessary to use between_time start_time = '00:00:00' end_time = '23:59:59' df.between_time(start_time, end_time).resample('D').mean() </code></pre>
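<p>And if the goal is simply to pick out each date's times, grouping on the calendar date of the index avoids resampling altogether (a sketch, assuming the <code>DatetimeIndex</code> set above):</p> <pre><code>for date, day_df in df.groupby(df.index.date):
    print(date, list(day_df.index.time))
</code></pre>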
python|pandas|dataframe
0
2,141
67,178,061
How to solve "OOM when allocating tensor with shape[XXX]" in tensorflow (when training a GCN)
<p>So... I have checked a few posts on this issue (there are surely many that I haven't checked, but I think it's reasonable to seek help with a question now), but I haven't found any solution that might suit my situation.</p> <p>This OOM error message always emerges (without a single exception) in the second round of a whatever-fold training loop, and when re-running the training code again after a first run. So this might be an issue related to this post: <a href="https://stackoverflow.com/questions/42499592/resourceexhaustederror-oom-when-allocating-tensor-with-shape">A previous stackoverflow question for OOM linked with tf.nn.embedding_lookup()</a>, but I am not sure which function my issue lies in.</p> <p>My NN is a GCN with two graph convolutional layers, and I am running the code on a server with several 10 GB Nvidia P102-100 GPUs. I have set batch_size to 1, but nothing has changed. I am also using Jupyter Notebook rather than running Python scripts from the command line, because from the command line I cannot even finish one round... By the way, does anyone know why some code can run without problems in Jupyter while hitting OOM on the command line? It seems a bit strange to me.</p> <p>UPDATE: After replacing Flatten() with GlobalMaxPool(), the error disappeared and I can run the code smoothly. However, if I add one more GC layer, the error comes in the first round. Thus, I guess the core issue is still there...</p> <p>UPDATE2: Tried to replace <code>tf.Tensor</code> with <code>tf.SparseTensor</code>. Successful, but of no use. Also tried to set up the mirrored strategy as mentioned in ML_Engine's answer, but it looks like one of the GPUs shows much higher utilization and OOM still comes out. Perhaps it's a kind of &quot;data parallelism&quot; and cannot solve my problem, since I have set <code>batch_size</code> to 1?</p> <p><strong>Code</strong> (adapted from <a href="https://github.com/xiaoyeye/GCNG" rel="nofollow noreferrer">GCNG</a>):</p> <pre><code>from keras import Input, Model
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from keras.regularizers import l2
import tensorflow as tf
#from spektral.datasets import mnist
from spektral.layers import GraphConv
from spektral.layers.ops import sp_matrix_to_sp_tensor
from spektral.utils import normalized_laplacian
from keras.utils import plot_model
from sklearn import metrics
import numpy as np
import gc

l2_reg = 5e-7           # Regularization rate for l2
learning_rate = 1*1e-6  # Learning rate for SGD
batch_size = 1          # Batch size
epochs = 1              # Number of training epochs
es_patience = 50        # Patience for early stopping

# DATA IMPORTING &amp; PREPROCESSING OMITTED

# this part of the adjacency matrix calculation is not important...
fltr = self_connection_normalized_adjacency(adj)
test = fltr.toarray()
t = tf.convert_to_tensor(test)
A_in = Input(tensor=t)
del fltr, test, t
gc.collect()

# Here comes the issue.
for test_indel in range(1,11): # SEVERAL LINES OMITTED (get X_train, y_train, X_val, y_val, X_test, y_test) # Build model N = X_train.shape[-2] # Number of nodes in the graphs F = X_train.shape[-1] # Node features dimensionality n_out = y_train.shape[-1] # Dimension of the target X_in = Input(shape=(N, F)) graph_conv = GraphConv(32,activation='elu',kernel_regularizer=l2(l2_reg),use_bias=True)([X_in, A_in]) graph_conv = GraphConv(32,activation='elu',kernel_regularizer=l2(l2_reg),use_bias=True)([graph_conv, A_in]) flatten = Flatten()(graph_conv) fc = Dense(512, activation='relu')(flatten) output = Dense(n_out, activation='sigmoid')(fc) model = Model(inputs=[X_in, A_in], outputs=output) optimizer = Adam(lr=learning_rate) model.compile(optimizer=optimizer,loss='binary_crossentropy',metrics=['acc']) model.summary() save_dir = current_path+'/'+str(test_indel)+'_self_connection_Ycv_LR_as_nega_rg_5-7_lr_1-6_e'+str(epochs) if not os.path.isdir(save_dir): os.makedirs(save_dir) early_stopping = EarlyStopping(monitor='val_acc', patience=es_patience, verbose=0, mode='auto') checkpoint1 = ModelCheckpoint(filepath=save_dir + '/weights.{epoch:02d}-{val_loss:.2f}.hdf5', monitor='val_loss',verbose=1, save_best_only=False, save_weights_only=False, mode='auto', period=1) checkpoint2 = ModelCheckpoint(filepath=save_dir + '/weights.hdf5', monitor='val_acc', verbose=1,save_best_only=True, mode='auto', period=1) callbacks = [checkpoint2, early_stopping] # Train model validation_data = (X_val, y_val) print('batch size = '+str(batch_size)) history = model.fit(X_train,y_train,batch_size=batch_size,validation_data=validation_data,epochs=epochs,callbacks=callbacks) # Prediction and write-file code omitted del X_in, X_data_train,Y_data_train,gene_pair_index_train,count_setx_train,X_data_test, Y_data_test,gene_pair_index_test,trainX_index,validation_index,train_index, X_train, y_train, X_val, y_val, X_test, y_test, validation_data, graph_conv, flatten, fc, output, model, optimizer, history gc.collect() </code></pre> <p><strong>Model Summary</strong>:</p> <pre><code>Model: &quot;model_1&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) (None, 13129, 2) 0 __________________________________________________________________________________________________ input_1 (InputLayer) (13129, 13129) 0 __________________________________________________________________________________________________ graph_conv_1 (GraphConv) (None, 13129, 32) 96 input_2[0][0] input_1[0][0] __________________________________________________________________________________________________ graph_conv_2 (GraphConv) (None, 13129, 32) 1056 graph_conv_1[0][0] input_1[0][0] __________________________________________________________________________________________________ flatten_1 (Flatten) (None, 420128) 0 graph_conv_2[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 512) 215106048 flatten_1[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 1) 513 dense_1[0][0] ================================================================================================== Total params: 215,107,713 Trainable params: 215,107,713 Non-trainable params: 0 
__________________________________________________________________________________________________ batch size = 1 </code></pre> <p><strong>Error message</strong> (Please note that this message never comes during the first round after a Restart-and-Clear-Output):</p> <pre><code>Train on 2953 samples, validate on 739 samples Epoch 1/1 --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) &lt;ipython-input-5-943385df49dc&gt; in &lt;module&gt;() 62 mem = psutil.virtual_memory() 63 print(&quot;current mem &quot; + str(round(mem.percent))+'%') ---&gt; 64 history = model.fit(X_train,y_train,batch_size=batch_size,validation_data=validation_data,epochs=epochs,callbacks=callbacks) 65 mem = psutil.virtual_memory() 66 print(&quot;current mem &quot; + str(round(mem.percent))+'%') /public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 1237 steps_per_epoch=steps_per_epoch, 1238 validation_steps=validation_steps, -&gt; 1239 validation_freq=validation_freq) 1240 1241 def evaluate(self, /public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq) 194 ins_batch[i] = ins_batch[i].toarray() 195 --&gt; 196 outs = fit_function(ins_batch) 197 outs = to_list(outs) 198 for l, o in zip(out_labels, outs): /public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs) 3290 3291 fetched = self._callable_fn(*array_vals, -&gt; 3292 run_metadata=self.run_metadata) 3293 self._call_fetch_callbacks(fetched[-len(self._fetches):]) 3294 output_structure = nest.pack_sequence_as( /public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs) 1456 ret = tf_session.TF_SessionRunCallable(self._session._session, 1457 self._handle, args, -&gt; 1458 run_metadata_ptr) 1459 if run_metadata: 1460 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node training_1/Adam/mul_23}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[metrics_1/acc/Identity/_323]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node training_1/Adam/mul_23}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 0 successful operations. 0 derived errors ignored. </code></pre>
<p>You can make use of distributed strategies in tensorflow to make sure that your multi-GPU set up is being used appropriately:</p> <pre><code>mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): for test_indel in range(1,11): &lt;etc&gt; </code></pre> <p>See the docs <a href="https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy" rel="nofollow noreferrer">here</a></p> <p>Mirrored strategy is used for synchronous distributed training across multiple GPUs on a single server, which sounds like the setup you're using. There's also a more intuitive explanation <a href="https://towardsdatascience.com/train-a-neural-network-on-multi-gpu-with-tensorflow-42fa5f51b8af" rel="nofollow noreferrer">in this blog</a>.</p> <p>Also, you could try making use of <a href="http://tensorflow.org/guide/mixed_precision" rel="nofollow noreferrer">mixed precision</a> which should free up memory significantly by altering the float type of the parameters in the model.</p>
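<p>A minimal sketch of enabling mixed precision (this assumes TF &gt;= 2.4 with <code>tf.keras</code>; the standalone-Keras code in the question would need porting first):</p> <pre><code>from tensorflow.keras import mixed_precision

# roughly halves the memory footprint of most activations and gradients
mixed_precision.set_global_policy('mixed_float16')

# build and compile the model *after* setting the policy; keeping the final
# sigmoid layer in float32 helps numeric stability, e.g.
# Dense(n_out, activation='sigmoid', dtype='float32')
</code></pre>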
tensorflow|keras|graph|neural-network|conv-neural-network
1
2,142
47,228,807
Set array to NaN if fewer than x data points fall on a given date
<p>I have a dataframe output like the following. </p> <pre><code>dateparse = lambda x: dt.datetime.strptime(x, "%d:%m:%Y %H:%M:%S") df = pd.read_csv('somefile', skiprows = 6, na_values = ['-999.000000'], parse_dates={'times':[0,1]}, date_parser=dateparse) df = df.set_index('times') print(df) times A B C D \ 0 2003-01-01 00:34:38 0.082516 0.102096 0.143908 0.227867 1 2003-01-01 00:38:57 0.085833 0.106582 0.149704 0.238216 2 2003-01-01 00:44:07 0.084019 0.103900 0.146425 0.232553 3 2003-01-01 00:50:18 0.085005 0.105516 0.148011 0.235786 4 2003-01-01 00:57:57 0.089478 0.110041 0.155636 0.247485 5 2003-01-01 01:07:31 0.090858 0.112727 0.157333 0.250582 6 2003-01-01 01:18:11 0.095183 0.118650 0.166181 0.266422 7 2003-01-01 01:24:54 0.095459 0.118206 0.165703 0.265689 8 2003-01-01 01:33:24 0.096138 0.118215 0.165815 0.264526 9 2003-01-01 01:48:11 0.093621 0.116163 0.162775 0.260970 10 2003-01-01 02:03:11 0.097354 0.120059 0.167964 0.270480 11 2003-01-01 02:18:12 0.095817 0.118176 0.166490 0.266451 12 2003-01-01 02:24:50 0.094345 0.113859 0.161243 0.256563 13 2003-01-01 02:33:12 0.093151 0.112366 0.158082 0.252427 14 2003-01-01 02:48:12 0.088426 0.108625 0.152767 0.246545 15 2003-01-01 03:03:11 0.090825 0.111603 0.156169 0.252650 16 2003-01-01 03:18:12 0.088676 0.107009 0.150868 0.241441 17 2003-01-01 03:24:51 0.091120 0.107680 0.153139 0.245209 18 2003-01-01 03:33:12 0.094014 0.113787 0.159610 0.256748 19 2003-01-01 03:48:13 0.090774 0.110138 0.155652 0.252272 20 2003-01-01 04:03:12 0.091755 0.111134 0.158139 0.256281 21 2003-01-01 04:18:12 0.105797 0.127926 0.182425 0.300056 22 2003-01-01 04:33:12 0.108270 0.130971 0.184997 0.309122 23 2003-01-01 04:48:13 0.112102 0.136731 0.192955 0.323832 24 2003-01-01 05:03:12 0.113862 0.139115 0.195716 0.330440 25 2003-01-01 05:18:13 0.109960 0.134348 0.187129 0.317269 26 2003-01-01 05:24:50 0.111670 0.133918 0.187268 0.315839 27 2003-01-01 05:33:12 0.115415 0.139826 0.195095 0.330612 28 2003-01-01 05:48:11 0.108153 0.130107 0.181749 0.311513 29 2003-01-01 06:03:11 0.124846 0.150228 0.209006 0.360110 ... ... ... ... ... ... 
62094 2012-12-30 02:32:45 0.076481 0.082212 0.098530 0.127626 62095 2012-12-30 04:02:46 0.074581 0.075652 0.093802 0.121700 62096 2012-12-30 04:17:45 0.069553 0.069642 0.087689 0.111479 62097 2012-12-30 04:24:45 0.078354 0.078970 0.098119 0.129527 62098 2012-12-30 04:32:46 0.082699 0.081013 0.102228 0.134887 62099 2012-12-30 04:47:45 0.085212 0.089094 0.108791 0.141562 62100 2012-12-30 07:56:36 0.085549 0.090656 0.110367 0.158078 62101 2012-12-30 08:01:01 0.088070 0.094420 0.114416 0.164917 62102 2012-12-31 01:35:43 0.199741 0.223171 0.286124 0.447926 62103 2012-12-31 01:48:13 0.217590 0.244189 0.308924 0.477287 62104 2012-12-31 02:18:12 0.209208 0.229590 0.283930 0.422209 62105 2012-12-31 02:25:06 0.225229 0.245545 0.301449 0.445500 62106 2012-12-31 02:33:12 0.236910 0.258252 0.313862 0.460853 62107 2012-12-31 03:03:13 0.228002 0.246693 0.302313 0.445168 62108 2012-12-31 03:18:13 0.235366 0.257824 0.317586 0.475176 62109 2012-12-31 03:25:07 0.246707 0.269936 0.332616 0.498684 62110 2012-12-31 03:33:13 0.246553 0.269369 0.330051 0.490452 62111 2012-12-31 04:25:10 0.249062 0.268720 0.320451 0.454450 62112 2012-12-31 04:33:13 0.251103 0.269946 0.321919 0.456619 62113 2012-12-31 04:48:13 0.257122 0.278713 0.335067 0.488326 62114 2012-12-31 05:03:14 0.274840 0.298629 0.361423 0.536313 62115 2012-12-31 05:18:14 0.272331 0.297988 0.361081 0.540261 62116 2012-12-31 05:33:13 0.259284 0.280575 0.336578 0.490123 62117 2012-12-31 05:48:13 0.281216 0.305000 0.364803 0.533686 62118 2012-12-31 07:03:13 0.316535 0.346806 0.418629 0.643321 62119 2012-12-31 07:18:13 0.287614 0.315958 0.383844 0.597279 62120 2012-12-31 07:46:05 0.323202 0.350592 0.430860 0.657584 62121 2012-12-31 07:52:16 0.288645 0.319988 0.389610 0.622262 62122 2012-12-31 07:57:24 0.294492 0.328670 0.398978 0.638286 62123 2012-12-31 08:01:45 0.287507 0.320027 0.391866 0.636343 </code></pre> <p>What I'm hoping to achieve: if any year-month-day in the 'times' index has fewer than 30 data points (i.e. fewer than 30 rows for 2003-01-01 or any other date), set A, B, C, D = NaN for that date. Is there an easy way to do this?</p>
<p>Create a mask with <code>groupby</code> and then assign with <code>loc</code> (on newer pandas, <code>pd.Grouper</code> replaces the deprecated <code>pd.TimeGrouper</code>):</p> <pre><code>m = df.set_index('times')\
      .groupby(pd.Grouper(freq='D')).A.transform(len).lt(30)

df.loc[m.values, ['A', 'B', 'C', 'D']] = np.nan
</code></pre>
python|pandas|date|datetime
1
2,143
68,127,249
Keras-Tuner: Is it possible to use test/validation set in the objective/metric function?
<p>Is it possible to score/evaluate the model performance, using <code>keras-tuner</code>, based on the test set instead of the training set? I'm asking this, because as of now, my understanding is that the metric function used as objective in the <code>tuner.search()</code> uses only <code>y_true</code> and <code>y_pred</code> as the input parameters, and they both refer to the training set (correct me if I'm wrong).<br /> So how can I use test data in my metric function?</p>
<p>Short answer: you cannot, neither should you, use test data metrics during hyper-parameter tuning. <a href="https://www.tensorflow.org/tutorials/keras/keras_tuner?hl=uk" rel="nofollow noreferrer">KerasTuner</a> allows you to use validation data metrics as the objective, which I encourage. However, the final test should always be done after all tuning and training is complete, and should use none of the training or validation data.</p> <p>To use a validation metric, simply tell KT that its objective is something beginning with <code>val</code> in the name.</p>
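<p>A minimal sketch (a hypothetical <code>build_model</code>; <code>RandomSearch</code> is one of several tuners, and the import name varies by KerasTuner version):</p> <pre><code>import keras_tuner as kt

tuner = kt.RandomSearch(build_model, objective='val_accuracy', max_trials=10)
# the validation split passed here is what the objective is scored on
tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
</code></pre>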
python|keras|tensorflow2.0|keras-tuner
1
2,144
68,083,426
How to merge two dataframes with some row values equal?
<p>I have two dataframes which I want to merge into one. The first one has its IDs in a column named ID, while the second has the same values in a column named id_number. I tried the code below, but in the end final_df has both the ID and id_number columns and their values. How can I keep only one column for the ids after merging?</p> <pre><code>final_df = df.merge(
    df2, left_on='ID', right_on='id_number', how='inner')
</code></pre> <p>Also, let's say column A of df has the following format:</p> <pre><code>A
0
1
2
</code></pre> <p>The same column A in the second dataframe has some empty fields, like this:</p> <pre><code>A
-
1
2
</code></pre> <p>After the merge, how can the final dataframe combine the two dataframes so that A won't have empty values?</p>
<p>try selecting required columns after merge</p> <pre><code>final_df = df.merge( df2, left_on='ID', right_on='id_number', how='inner')[['ID', 'col1', 'col2']] </code></pre> <p>or drop the column after merge</p> <pre><code>final_df = df.merge( df2, left_on='ID', right_on='id_number', how='inner').drop(['id_number'], axis=1) </code></pre>
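<p>For the second part of the question: when both frames carry an <code>A</code> column, the merge suffixes them as <code>A_x</code>/<code>A_y</code> by default, and the gaps can be coalesced afterwards (a sketch, assuming those default suffixes and that the blanks are NaN; literal <code>'-'</code> strings would need replacing with NaN first):</p> <pre><code>final_df['A'] = final_df['A_x'].combine_first(final_df['A_y'])
final_df = final_df.drop(columns=['A_x', 'A_y'])
</code></pre>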
python|pandas|dataframe|inner-join
0
2,145
59,275,959
How to visualize RNN/LSTM weights in Keras/TensorFlow?
<p>I've come across research publications and Q&amp;A's discussing a need for inspecting RNN weights; some related answers are in the right direction, suggesting <code>get_weights()</code> - but how do I actually visualize the weights <em>meaningfully</em>? Namely, LSTMs and GRUs have <em>gates</em>, and all RNNs have <em>channels</em> that serve as independent feature extractors - so how do I <strong>(1)</strong> fetch <em>per-gate</em> weights, and <strong>(2)</strong> plot them in an informative manner? </p>
<p>Keras/TF build RNN weights in a well-defined order, which can be inspected from the source code or via <code>layer.__dict__</code> directly - then to be used to fetch <em>per-kernel</em> and <em>per-gate</em> weights; <em>per-channel</em> treatment can then be employed given a tensor's shape. Below code &amp; explanations cover <em>every possible case</em> of a Keras/TF RNN, and should be easily expandable to any future API changes.</p> <p>Also see visualizing RNN gradients, and an application to <a href="https://stackoverflow.com/questions/48714407/rnn-regularization-which-component-to-regularize/58868383#58868383">RNN regularization</a>; unlike in the former post, I won't be including a simplified variant here, as it'd still be rather large and complex per the nature of weight extraction and organization; instead, simply view relevant source code in the repository (see next section).</p> <hr> <p><strong>Code source</strong>: <a href="https://github.com/OverLordGoldDragon/see-rnn" rel="noreferrer">See RNN</a> (this post included w/ bigger images), my repository; included are:</p> <ul> <li>Activations visualization</li> <li>Weights visualization</li> <li>Activations gradients visualization</li> <li>Weights gradients visualization</li> <li>Docstrings explaining all functionality</li> <li>Support for Eager, Graph, TF1, TF2, and <code>from keras</code> &amp; <code>from tf.keras</code></li> <li>Greater visual customizability than shown in examples</li> </ul> <hr> <p><strong>Visualization methods</strong>:</p> <ul> <li><strong>2D heatmap</strong>: plot weight distributions per gate, per kernel, per direction; <em>clearly shows kernel-to-hidden relations</em></li> <li><strong>histogram</strong>: plot weight distributions per gate, per kernel, per direction; <em>loses context info</em></li> </ul> <hr> <p><strong>EX 1: uni-LSTM, 256 units, weights</strong> -- <code>batch_shape = (16, 100, 20)</code> (input)<br> <code>rnn_histogram(model, 'lstm', equate_axes=False, show_bias=False)</code><br> <code>rnn_histogram(model, 'lstm', equate_axes=True, show_bias=False)</code><br> <code>rnn_heatmap(model, 'lstm')</code></p> <ul> <li>Top plot is a histogram subplot grid, showing weight distributions per kernel, and within each kernel, per gate</li> <li>Second plot sets <code>equate_axes=True</code> for an even comparison across kernels and gates, improving quality of comparison, but potentially degrading visual appeal</li> <li>Last plot is a heatmap of the same weights, with gate separations marked by vertical lines, and bias weights also included</li> <li>Unlike histograms, the heatmap <em>preserves channel/context information</em>: input-to-hidden and hidden-to-hidden transforming matrices can be clearly distinguished</li> <li>Note the large concentration of maximal values at the Forget gate; as trivia, in Keras (and usually), bias gates are all initialized to zeros, except the Forget bias, which is initialized to ones</li> </ul> <p><img src="https://i.stack.imgur.com/1Deh4.png" width="600"></p> <p><img src="https://i.stack.imgur.com/IZN6k.png" width="600"></p> <p><img src="https://i.stack.imgur.com/E9GkQ.png" width="620"></p> <hr> <p><strong>EX 2: bi-CuDNNLSTM, 256 units, weights</strong> -- <code>batch_shape = (16, 100, 16)</code> (input)<br> <code>rnn_histogram(model, 'bidir', equate_axes=2)</code><br> <code>rnn_heatmap(model, 'bidir', norm=(-.8, .8))</code></p> <ul> <li>Bidirectional is supported by both; biases included in this example for histograms</li> <li>Note again the bias heatmaps; they no 
longer appear to reside in the same locality as in EX 1. Indeed, <code>CuDNNLSTM</code> (and <code>CuDNNGRU</code>) biases are defined and initialized differently - something that can't be inferred from histograms</li> </ul> <p><a href="https://i.stack.imgur.com/vkGiF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vkGiF.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/gEjp0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gEjp0.png" alt="enter image description here"></a></p> <hr> <p><strong>EX 3: uni-CuDNNGRU, 64 units, weights gradients</strong> -- <code>batch_shape = (16, 100, 16)</code> (input)<br> <code>rnn_heatmap(model, 'gru', mode='grads', input_data=x, labels=y, cmap=None, absolute_value=True)</code></p> <ul> <li>We may wish to visualize <em>gradient intensity</em>, which can be done via <code>absolute_value=True</code> and a greyscale colormap</li> <li>Gate separations are apparent even without explicit separating lines in this example: <ul> <li><code>New</code> is the most active kernel gate (input-to-hidden), suggesting more error correction on <em>permitting information flow</em></li> <li><code>Reset</code> is the least active recurrent gate (hidden-to-hidden), suggesting least error correction on memory-keeping</li> </ul></li> </ul> <p><img src="https://i.stack.imgur.com/cwiAS.png" width="600"></p> <hr> <p><strong>BONUS EX: LSTM NaN detection, 512 units, weights</strong> -- <code>batch_shape = (16, 100, 16)</code> (input)</p> <ul> <li>Both the heatmap and the histogram come with built-in NaN detection - kernel-, gate-, and direction-wise</li> <li>Heatmap will print NaNs to console, whereas histogram will mark them directly on the plot</li> <li>Both will set NaN values to zero before plotting; in example below, all related non-NaN weights were already zero</li> </ul> <p><img src="https://i.stack.imgur.com/T6ZAa.png" width="600"></p>
python|tensorflow|keras|visualization|recurrent-neural-network
7
2,146
59,207,257
Can we alter pandas cross tabulation?
<p>I have loaded raw_data from MySQL using sqlalchemy and pymysql</p> <p><code>engine = create_engine('mysql+pymysql://[user]:[passwd]@[host]:[port]/[database]')</code></p> <p><code>df = pd.read_sql_table('data', engine)</code></p> <p>df is something like this</p> <pre><code>| Age Category | Category       |
|--------------|----------------|
| 31-26        | Engaged        |
| 26-31        | Engaged        |
| 31-36        | Not Engaged    |
| Above 51     | Engaged        |
| 41-46        | Disengaged     |
| 46-51        | Nearly Engaged |
| 26-31        | Disengaged     |
</code></pre> <p>Then I performed the analysis as follows</p> <p><code>age = pd.crosstab(df['Age Category'], df['Category'])</code></p> <pre><code>| Category     | A | B  | C  | D |
|--------------|---|----|----|---|
| Age Category |   |    |    |   |
| 21-26        | 2 | 2  | 4  | 1 |
| 26-31        | 7 | 11 | 12 | 5 |
| 31-36        | 3 | 5  | 5  | 2 |
| 36-41        | 2 | 4  | 1  | 7 |
| 41-46        | 0 | 1  | 3  | 2 |
| 46-51        | 0 | 0  | 2  | 3 |
| Above 51     | 0 | 3  | 0  | 6 |
</code></pre> <p>I want to change it to a Pandas DataFrame, something like this.</p> <pre><code>| Age Category | A | B  | C  | D |
|--------------|---|----|----|---|
| 21-26        | 2 | 2  | 4  | 1 |
| 26-31        | 7 | 11 | 12 | 5 |
| 31-36        | 3 | 5  | 5  | 2 |
| 36-41        | 2 | 4  | 1  | 7 |
| 41-46        | 0 | 1  | 3  | 2 |
| 46-51        | 0 | 0  | 2  | 3 |
| Above 51     | 0 | 3  | 0  | 6 |
</code></pre> <p>Thank you for your time and consideration</p>
<p>Both texts are the columns and index names; the solution to change them is <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a>:</p> <pre><code>age = age.rename_axis(index=None, columns='Age Category')
</code></pre> <p>Or set the columns name from the index name, and then set the index name back to its default, <code>None</code>:</p> <pre><code>age.columns.name = age.index.name
age.index.name = None
</code></pre> <hr> <pre><code>print (age)
Age Category  Disengaged  Engaged  Nearly Engaged  Not Engaged
26-31                  1        1               0            0
31-26                  0        1               0            0
31-36                  0        0               0            1
41-46                  1        0               0            0
46-51                  0        0               1            0
Above 51               0        1               0            0
</code></pre> <p>But these texts are more like metadata, so some functions will drop them.</p>
python|pandas
1
2,147
14,224,172
Equality in Pandas DataFrames - Column Order Matters?
<p>As part of a unit test, I need to test two DataFrames for equality. The order of the columns in the DataFrames is not important to me. However, it seems to matter to Pandas:</p> <pre><code>import pandas df1 = pandas.DataFrame(index = [1,2,3,4]) df2 = pandas.DataFrame(index = [1,2,3,4]) df1['A'] = [1,2,3,4] df1['B'] = [2,3,4,5] df2['B'] = [2,3,4,5] df2['A'] = [1,2,3,4] df1 == df2 </code></pre> <p>Results in:</p> <pre><code>Exception: Can only compare identically-labeled DataFrame objects </code></pre> <p>I believe the expression <code>df1 == df2</code> should evaluate to a DataFrame containing all <code>True</code> values. Obviously it's debatable what the correct functionality of <code>==</code> should be in this context. My question is: Is there a Pandas method that does what I want? That is, is there a way to do equality comparison that ignores column order?</p>
<p>The most common intent is handled like this:</p> <pre><code>def assertFrameEqual(df1, df2, **kwds):
    """ Assert that two dataframes are equal, ignoring ordering of columns"""
    from pandas.util.testing import assert_frame_equal
    return assert_frame_equal(df1.sort_index(axis=1), df2.sort_index(axis=1),
                              check_names=True, **kwds)
</code></pre> <p>Of course, see <code>pandas.util.testing.assert_frame_equal</code> for other parameters you can pass</p>
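<p>On recent pandas versions this is also built in: <code>assert_frame_equal</code> accepts <code>check_like=True</code>, which ignores the order of both index and columns directly:</p> <pre><code>import pandas.testing as tm

tm.assert_frame_equal(df1, df2, check_like=True)
</code></pre>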
python|pandas
40
2,148
44,988,573
'No gradients provided for any variable' while training convolutional autoencoder
<p>I was trying to create a convolutional autoencoder, but I've run into a problem. Here's the code:</p> <pre><code>import tensorflow as tf
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

img = mpimg.imread('data.jpg')
x = (img-np.mean(img))/np.std(img)
y = img

epochs = 500

def autoencoder(x, weights):
    global output
    output = tf.nn.conv2d([x], weights[0], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[1], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[2], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[3], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[4], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [50, 38])
    output = tf.nn.conv2d(output, weights[5], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [100, 76])
    output = tf.nn.conv2d(output, weights[6], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [200, 152])
    output = tf.nn.conv2d(output, weights[7], strides=[1,1,1,1],padding='SAME')

weights = [tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3]))]

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for e in range(epochs):
        print('epoch:',e+1)
        autoencoder(tf.cast(x,tf.float32), weights)
        plt.imshow(output.eval()[0])
        plt.savefig(str(e+1)+'.png')
        cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output.eval()[0],y)))
        tf.train.AdamOptimizer().minimize(cost)
</code></pre> <p>And here's the error:</p> <pre><code>Traceback (most recent call last):
  File "D:\Kay\Tensorflow\Session 3\Autoencoder.py", line 56, in &lt;module&gt;
    tf.train.AdamOptimizer().minimize(cost)
  File "C:\Users\Katharina\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\optimizer.py", line 276, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ['Tensor("Variable/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_1/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_2/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_3/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_4/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_5/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_6/read:0", shape=(5, 5, 3, 3), dtype=float32)', 'Tensor("Variable_7/read:0", shape=(5, 5, 3, 3), dtype=float32)'] and loss Tensor("Mean_1:0", shape=(), dtype=float32).
</code></pre> <p>Can anyone help me out?</p>
<p>Your code does not even run as is, but with some rewriting you get the following, which actually runs.</p> <pre><code>import tensorflow as tf
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy

img = scipy.misc.imresize(scipy.misc.face(), [200, 152])[None, :]
x = (img-np.mean(img))/np.std(img)
y = img

epochs = 500

def autoencoder(x, weights):
    output = tf.nn.conv2d([x], weights[0], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[1], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[2], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[3], strides=[1,2,2,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.nn.conv2d(output, weights[4], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [50, 38])
    output = tf.nn.conv2d(output, weights[5], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [100, 76])
    output = tf.nn.conv2d(output, weights[6], strides=[1,1,1,1],padding='SAME')
    output = tf.nn.relu(output)

    output = tf.image.resize_images(output, [200, 152])
    output = tf.nn.conv2d(output, weights[7], strides=[1,1,1,1],padding='SAME')

    return output

weights = [tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3])),
           tf.Variable(tf.random_normal([5,5,3,3]))]

output = autoencoder(tf.cast(x, tf.float32), weights)

cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, y)))
train_op = tf.train.AdamOptimizer().minimize(cost)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for e in range(epochs):
        print('epoch:',e+1)
        output_result, _ = sess.run([output, train_op])
</code></pre> <p>Even so, it returns bogus values and is not good for much.</p> <p>You had several errors:</p> <ul> <li>You create a new optimizer in each iteration</li> <li>You used <code>eval</code> instead of <code>sess.run</code></li> <li>You have very small filters, simply 3 channels which is way too small.</li> <li>You have no biases</li> </ul> <p>Anyway, a short working version of your code is given below, but there are many improvements left.</p> <pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import scipy

img = scipy.misc.imresize(scipy.misc.face(), [200, 152])[None, :]
x = (img-np.mean(img))/np.std(img)
y = img

epochs = 50000

def apply_conv(x, strides=1, filters=32, activation=tf.nn.relu):
    return tf.layers.conv2d(x, strides=strides, filters=filters, kernel_size=3,
                            padding='SAME',
                            kernel_initializer=tf.contrib.layers.xavier_initializer(),
                            activation=tf.nn.relu)

def autoencoder(x):
    output = apply_conv(x, strides=1)
    output = apply_conv(output, strides=2)
    output = apply_conv(output, strides=2)
    output = apply_conv(output, strides=2)
    output = apply_conv(output, strides=1)

    output = tf.image.resize_images(output, [50, 38])
    output = apply_conv(output, strides=1)
    output = tf.image.resize_images(output, [100, 76])
    output = apply_conv(output, strides=1)
    output = tf.image.resize_images(output, [200, 152])
    output = apply_conv(output, strides=1, filters=3, activation=None)

    return output

output = autoencoder(tf.cast(x, tf.float32))

cost =
tf.reduce_mean(tf.reduce_mean(tf.squared_difference(output, y))) train_op = tf.train.AdamOptimizer().minimize(cost) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for e in range(epochs): print('epoch:',e+1) output_result, cost_result, _ = sess.run([output, cost, train_op]) print('cost = {}'.format(cost_result)) if e % 20 == 0: plt.imshow(output_result[0].astype('uint8')) plt.pause(0.0001) # wait for plot to show </code></pre>
python|tensorflow|neural-network|autoencoder
0
2,149
57,081,116
How do I calculate number of words and number of unique words contained within a list of a column across all rows of my dataframe?
<p>I generated a column <code>df['adjectives']</code> in my pandas dataframe that has a list of all the adjectives from another column, <code>df['reviews']</code>.</p> <p>The values of <code>df['adjectives']</code> are in this format, for example: </p> <blockquote> <p><code>['excellent', 'better', 'big', 'unexpected', 'excellent', 'big']</code></p> </blockquote> <p>I would like to create a new column that counts the total number of words in <code>df['adjectives']</code> as well as the number of 'unique' words in <code>df['adjectives']</code>.</p> <p>The function should iterate across the entire dataframe and apply the counts for each row.</p> <p>For the above row example, I would want <code>df['totaladj']</code> to be 6 and <code>df['uniqueadj']</code> to be 4 (since 'excellent' and 'big' are repeated)</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df=pd.read_csv('./data.csv') df['totaladj'] = df['adjectives'].str.count(' ') + 1 df.to_csv('./data.csv', index=False) </code></pre> <p>The above code works when counting the total number of adjectives, but not the unique number of adjectives.</p>
<p>Is this the type of behavior that you are looking for?</p> <p>Based off of your description I assumed that the values in the <strong>adjectives</strong> column are a string formatted like a list, e.g. <strong>"['big','excellent','small']"</strong>.</p> <p>The code below converts each string to a list using <strong>split()</strong>, and then gets the length using <strong>len()</strong>. Finding the number of unique adjectives is done by converting the list to a set before using <strong>len()</strong>. Note the <strong>strip()</strong> on each item: without it, the leading space after each comma would make repeated adjectives look distinct to the set.</p> <pre class="lang-py prettyprint-override"><code>df['adjcount'] = df['adjectives'].apply(lambda x: len(x[1:-1].split(','))) df['uniqueadjcount'] = df['adjectives'].apply(lambda x: len(set(s.strip() for s in x[1:-1].split(',')))) </code></pre>
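<p>As a side note, if the strings are valid Python list literals (quoted items, as in the question), <code>ast.literal_eval</code> can be a more robust parser than slicing and splitting; a minimal sketch with made-up data:</p> <pre><code>import ast
import pandas as pd

# Hypothetical data: each cell is the string form of a list
df = pd.DataFrame({'adjectives': ["['excellent', 'better', 'big', 'unexpected', 'excellent', 'big']"]})

parsed = df['adjectives'].apply(ast.literal_eval)          # real lists now
df['totaladj'] = parsed.apply(len)                         # 6
df['uniqueadj'] = parsed.apply(lambda lst: len(set(lst)))  # 4
</code></pre>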
python|pandas
1
2,150
56,939,282
How do you feed a tf.data.Dataset dynamically in eager execution mode where initializable_iterator isn't available?
<p>What is the new approach (under eager execution) to feeding data through a dataset pipeline in a dynamic fashion, when we need to feed it sample by sample? </p> <p>I have a <code>tf.data.Dataset</code> which performs some preprocessing steps and reads data from a generator, drawing from a large dataset during training. </p> <p>Let's say that dataset is represented as:</p> <pre><code>ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6]) ds = ds.map(tf.square).shuffle(2).batch(2) iterator = tf.data.make_one_shot_iterator(ds) </code></pre> <p>After training I want to produce various visualizations which require that I feed one sample at a time through the network for inference. I've now got this dataset preprocessing pipeline that I need to feed my raw sample through to be sized and shaped appropriately for the network input.</p> <p>This seems like a use case for the initializable iterator:</p> <pre><code>placeholder = tf.placeholder(tf.float32, shape=None) ds = tf.data.Dataset.from_tensor_slices(placeholder) ds = ds.map(tf.square).shuffle(2).batch(2) iterator = tf.data.make_initializable_iterator(ds) # now re-initialize for each sample </code></pre> <blockquote> <p>Keep in mind that the map operation in this example represents a long sequence of preprocessing operations that can't be duplicated for each new data sample being feed in.</p> </blockquote> <p><strong>This doesn't work with eager execution</strong>, you can't use the placeholder. The documentation examples all seem to assume a static input such as in the first example here.</p> <p>The only way I can think of doing this is with a queue and <code>tf.data.Dataset.from_generator(...)</code> which reads from the queue that I push to before predicting on the data. But this feels both hacky, and appears prone to deadlocks that I've yet to solve.</p> <p>TF 1.14.0</p>
<p>I just realized that the answer to this question is trivial:</p> <blockquote> <p>Just create a new dataset!</p> </blockquote> <p>In non-eager mode the code below would have degraded in performance because each dataset operation would have been added to the graph and never released, and in non-eager mode we have the initializable iterator to resolve that issue.</p> <p>However, in eager execution mode tensorflow operations like this are ephemeral, added iterators aren't being added to a global graph, they just get created and die when no longer referenced. Win one for TF2.0!</p> <p>The code below (copy/paste runnable) demonstrates:</p> <pre><code>import tensorflow as tf import numpy as np import time tf.enable_eager_execution() inp = np.ones(shape=5000, dtype=np.float32) t = time.time() while True: ds = tf.data.Dataset.from_tensors(inp).batch(1) val = next(iter(ds)) assert np.all(np.squeeze(val, axis=0) == inp) print('Processing time {:.2f}'.format(time.time() - t)) t = time.time() </code></pre> <p>The motivation for the question came on the heels of this issue in 1.14 where creating multiple dataset operations in graph mode under Keras constitutes a memory leak. </p> <p><a href="https://github.com/tensorflow/tensorflow/issues/30448" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/30448</a></p>
python|tensorflow
1
2,151
56,918,916
Finding a value in an equation
<p>I'm looking for an easy way to find the value of a variable depending on the result of another one:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt T = np.arange(0.01, 4.5, 0.0001) N = (2.63 * 10 ** -16) * ((2.71828 ** (6.93 * T)) - 1) + ((4.05 * 10 ** -6) * T) plt.plot(N,T) plt.axis(xmin=-0.001, ymax=5) plt.show() </code></pre> <p>For example, I need the value of T for N = 0.00006762 (or the closest value). This would be easy if I could solve for T, but I find it easier to create an array of the possible T's and try the other way.</p>
<p>You can loop over values of T, calculate N, compare it with the number you are looking for (0.00006762), and keep the closest one you find:</p> <pre class="lang-py prettyprint-override"><code>target = 0.00006762 smallest_diff = 1000 best_answer = 'NA' for T in np.arange(0.01, 4.5, 0.0001): N = (2.63 * 10 ** -16) * ((2.71828 ** (6.93 * T)) - 1) + ((4.05 * 10 ** -6) * T) if abs(N - target) &lt; smallest_diff: smallest_diff = abs(N - target) best_answer = T print(best_answer) </code></pre>
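<p>A vectorized alternative (a sketch using the same equation, with <code>np.exp</code> in place of the 2.71828 approximation): evaluate N over the whole T grid once and take the index of the smallest absolute difference with <code>np.argmin</code>:</p> <pre><code>import numpy as np

T = np.arange(0.01, 4.5, 0.0001)
N = (2.63 * 10 ** -16) * (np.exp(6.93 * T) - 1) + (4.05 * 10 ** -6) * T

target = 0.00006762
idx = np.argmin(np.abs(N - target))  # position of the closest N
print(T[idx], N[idx])                # best T and the N it produces
</code></pre>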
python|numpy|matplotlib
-1
2,152
57,063,456
Numpy arange function produces variable of a different length than the variable it depends on
<p>I am trying to make a pyplot graph, but my x and y values are of different length. </p> <p>I am using numpy's arange function to create the range of x values based on the length of my list of y values.</p> <pre><code>def event_editor(EventDatabase, ConcatenatedEvents, ConcatenatedFits): for i in list(EventDatabase): # Check for events with absurd blockage current if abs(EventDatabase[i]['AllLevelFits'][0]) &gt; 5: del EventDatabase[i] continue event = ConcatenatedEvents[i][0] fit = ConcatenatedFits[i] x_values = np.arange(0, len(event) / 100, .01) x_values2 = np.arange(0, len(fit) / 100, .01) fig = plt.figure() plt.plot(x_values, event, figure=fig) plt.plot(x_values2, fit, figure=fig) plt.xlabel('Time (ms)', figure=fig) plt.ylabel('Current (I)', figure=fig) plt.title('Event #' + str(i), figure=fig) plt.show() </code></pre> <p>I'd expect the <code>x_values</code> and the <code>event</code>/<code>fit</code> lists to have the same length, and most of the time they do have the same length. However, when the length of <code>event</code>/<code>fit</code> is 111, the length of the <code>x_values</code> is 112. What causes this?</p>
<p>This is due to <code>float</code> approximation in Python / NumPy. The inconsistent behavior is also documented in the official <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html" rel="nofollow noreferrer">docs</a>.</p> <p>A more robust approach is to use <code>np.linspace()</code>, e.g.:</p> <pre><code>step = 0.01 np.arange(0, len(event) / 100, step) np.arange(0, len(fit) / 100, step) </code></pre> <p>becomes, for example:</p> <pre><code>np.linspace(0, len(event) / 100, len(event), endpoint=False) np.linspace(0, len(fit) / 100, len(fit), endpoint=False) </code></pre> <p>Note that with <code>np.linspace()</code> you specify the number of points rather than the <code>step</code> as the last parameter, so each x array is guaranteed to have exactly as many points as the data it is plotted against (<code>endpoint=False</code> mimics <code>np.arange</code>'s half-open interval).</p>
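<p>A quick way to see the off-by-one for yourself (the exact lengths depend on floating-point rounding, so results may vary by platform):</p> <pre><code>import numpy as np

# 111 / 100 = 1.11 is not exactly representable in binary floating point,
# so np.arange's implied point count can round the wrong way
print(len(np.arange(0, 111 / 100, 0.01)))                   # may print 112
print(len(np.linspace(0, 111 / 100, 111, endpoint=False)))  # always 111
</code></pre>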
python|numpy
1
2,153
57,101,758
How to mask a dataframe given a list of values or indices in the dataframe
<p>I have a dataframe that has a column 'rel_max' that has a list of all the values of local maxima (if relevant or more useful I also have a column of the indices of these local extrema). I would like to take this list of values or indices and mask the dataframe so that I have a maxima in its correct spot and NaN or 0 for all other values of the dataframe. </p> <pre><code>df = pd.DataFrame({'123': [20.908, 8.743, 8.34, 2.4909], '124': [2, 2.34, 0, 4.1234], '412': [2, 20.123, 3.123123, 0], '516': [5, 20.120, 4.12, 0], '129': [6, 20.10, 3.123123, 0], 'rel_max': [[20.908, 6], [8.743,20.123], [8.34,4.12], [4.1234]]}, index=['2015-01-10', '2015-02-10', '2015-03-10', '2015-04-10']) </code></pre> <p>This is the dataframe with the relative max values. ^</p> <p>This is the expected dataframe.</p> <pre><code>df1 = pd.DataFrame({'123': [20.908, 8.743, 8.34, 0], '124': [0, 0, 0, 4.1234], '412': [0, 20.123, 0, 0], '516': [0, 0, 4.12, 0], '129': [6, 0, 0, 0], 'rel_max': [[20.908, 6], [8.743,20.123], [8.34,4.12], [4.1234]]}, index=['2015-01-10', '2015-02-10', '2015-03-10', '2015-04-10']) </code></pre> <p>Essentially, I am trying to retrieve or pull the dataframe with only the local extrema. </p> <pre><code> 123 124 412 516 129 rel_max 2015-01-10 20.908 0.0000 0.000 0.00 6 [20.908, 6] 2015-02-10 8.743 0.0000 20.123 0.00 0 [8.743, 20.123] 2015-03-10 8.340 0.0000 0.000 4.12 0 [8.34, 4.12] 2015-04-10 0.000 4.1234 0.000 0.00 0 [4.1234] </code></pre>
<p>You could try something like this:</p> <pre><code>pd.concat([df.iloc[:, :-1].where(df.apply(lambda x: x[:-1].isin(x.iloc[-1]), axis=1), 0), df.iloc[:, -1]], axis=1) </code></pre> <p>Output:</p> <pre><code> 123 124 412 516 129 rel_max 2015-01-10 20.908 0.0000 0.000 0.00 6.0 [20.908, 6] 2015-02-10 8.743 0.0000 20.123 0.00 0.0 [8.743, 20.123] 2015-03-10 8.340 0.0000 0.000 4.12 0.0 [8.34, 4.12] 2015-04-10 0.000 4.1234 0.000 0.00 0.0 [4.1234] </code></pre>
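<p>The same operation broken into named steps may be easier to follow (reusing the <code>df</code> from the question; behaviour should be identical, modulo the float upcast):</p> <pre><code># True wherever a cell's value appears in that row's rel_max list
values = df.drop(columns='rel_max')
is_max = values.apply(lambda row: row.isin(df.loc[row.name, 'rel_max']), axis=1)

masked = values.where(is_max, 0)     # zero out everything that is not a maximum
result = masked.join(df['rel_max'])  # re-attach the list column
print(result)
</code></pre>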
python|pandas|dataframe
1
2,154
45,902,080
Incorrect results when using "in" operator with Pandas series?
<p>I have two dataframes: df1 contains 100K rows and df2 contains 6 million rows. I want to fix the values of the 'SoftDel???' column in df1 when 'id' matches in df2. The code runs, but the results are wrong.</p> <p>I have completed this task using merge, and those results are satisfactory, but I want to know why the code below produces wrong results:</p> <p><code>for x, y in df1.iterrows():<br> if y['id'] in df2['id']: df1.loc[x,'SoftDel???'] = 'No'</code></p>
<p><code>y['id']</code> is a float, while <code>df2['id']</code> is a Series. For a Series, the <code>in</code> operator checks membership in the <em>index</em>, not in the values, so <code>y['id'] in df2['id']</code> is True whenever the id happens to coincide with a row label. That is why the loop runs without error but produces wrong results. Test against <code>df2['id'].values</code> instead, or skip the loop entirely with the vectorized <code>isin</code>.</p>
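<p>A hedged sketch of the fix with toy data (column names taken from the question): replace the loop with a single vectorized <code>isin</code> assignment:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'id': [1.0, 2.0, 3.0], 'SoftDel???': ['Yes'] * 3})
df2 = pd.DataFrame({'id': [2.0, 3.0, 9.0]})

# membership against the values, not the index
df1.loc[df1['id'].isin(df2['id']), 'SoftDel???'] = 'No'
print(df1)  # rows with id 2.0 and 3.0 are now 'No'
</code></pre>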
python|pandas|series
2
2,155
45,900,740
Upgrading to Tensorflow 1.3 Error: DLL load failed: The specified module could not be found
<p>I have installed tensorflow 1.3.0 as seen below and I have included cudnn64_6.dll in my %PATH%, along with installed CUDA 8.0, but I still get an error message when importing tensorflow. The error message is after the installation message below:</p> <pre><code>(tensorflow) C:\Users\alexz&gt;pip install --ignore-installed --upgrade tensorflow-gpu Collecting tensorflow-gpu Using cached tensorflow_gpu-1.3.0-cp36-cp36m-win_amd64.whl Collecting protobuf&gt;=3.3.0 (from tensorflow-gpu) Using cached protobuf-3.4.0-py2.py3-none-any.whl Collecting wheel&gt;=0.26 (from tensorflow-gpu) Using cached wheel-0.29.0-py2.py3-none-any.whl Collecting tensorflow-tensorboard&lt;0.2.0,&gt;=0.1.0 (from tensorflow-gpu) Using cached tensorflow_tensorboard-0.1.5-py3-none-any.whl Collecting numpy&gt;=1.11.0 (from tensorflow-gpu) Using cached numpy-1.13.1-cp36-none-win_amd64.whl Collecting six&gt;=1.10.0 (from tensorflow-gpu) Using cached six-1.10.0-py2.py3-none-any.whl Collecting setuptools (from protobuf&gt;=3.3.0-&gt;tensorflow-gpu) Using cached setuptools-36.2.7-py2.py3-none-any.whl Collecting werkzeug&gt;=0.11.10 (from tensorflow-tensorboard&lt;0.2.0,&gt;=0.1.0-&gt;tensorflow-gpu) Using cached Werkzeug-0.12.2-py2.py3-none-any.whl Collecting html5lib==0.9999999 (from tensorflow-tensorboard&lt;0.2.0,&gt;=0.1.0-&gt;tensorflow-gpu) Collecting markdown&gt;=2.6.8 (from tensorflow-tensorboard&lt;0.2.0,&gt;=0.1.0-&gt;tensorflow-gpu) Collecting bleach==1.5.0 (from tensorflow-tensorboard&lt;0.2.0,&gt;=0.1.0-&gt;tensorflow-gpu) Using cached bleach-1.5.0-py2.py3-none-any.whl Installing collected packages: six, setuptools, protobuf, wheel, werkzeug, html5lib, markdown, bleach, numpy, tensorflow-tensorboard, tensorflow-gpu Successfully installed bleach-1.5.0 html5lib-0.9999999 markdown-2.6.9 numpy-1.13.1 protobuf-3.4.0 setuptools-36.2.7 six-1.10.0 tensorflow-gpu-1.3.0 tensorflow-tensorboard-0.1.5 werkzeug-0.12.2 wheel-0.29.0 </code></pre> <p>Additionally, I have python 3.6 with anaconda and followed instruction on tensorflow's website to install everything. Error Message:</p> <pre><code>Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow as tf Traceback (most recent call last): File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper return importlib.import_module(mname) File "C:\My_Items\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 978, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 961, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 950, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 648, in _load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 560, in module_from_spec File "&lt;frozen importlib._bootstrap_external&gt;", line 922, in create_module File "&lt;frozen importlib._bootstrap&gt;", line 205, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 41, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper return importlib.import_module('_pywrap_tensorflow_internal') File "C:\My_Items\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named '_pywrap_tensorflow_internal' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\__init__.py", line 24, in &lt;module&gt; from tensorflow.python import * File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\__init__.py", line 49, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 52, in &lt;module&gt; raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper return importlib.import_module(mname) File "C:\My_Items\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 978, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 961, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 950, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 648, in _load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 560, in module_from_spec File "&lt;frozen importlib._bootstrap_external&gt;", line 922, in create_module File "&lt;frozen importlib._bootstrap&gt;", line 205, in _call_with_frames_removed ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 41, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "C:\My_Items\Anaconda\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper return importlib.import_module('_pywrap_tensorflow_internal') File "C:\My_Items\Anaconda\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named '_pywrap_tensorflow_internal' Failed to load the native TensorFlow runtime. See https://www.tensorflow.org/install/install_sources#common_installation_problems for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. </code></pre>
<p>I got the same issue while installing TensorFlow 1.3 a month ago. I fixed it by installing cuDNN 5.1 instead of 6.</p> <p>NB: I just saw that the latest version is 7 now; maybe that works too (I haven't tried).</p> <p>I hope this helps.</p>
tensorflow
0
2,156
45,780,190
Split an array in all possible combinations (not regular splitting)
<p>Suppose I have an array,</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; array = np.linspace(1,4,4, dtype=np.int) &gt;&gt;&gt; array array([1, 2, 3, 4]) </code></pre> <p>I want a function that will split this array in all possible parts, such that,</p> <p>No split :</p> <pre><code>([1,2,3,4]) </code></pre> <p>Split in <code>2</code> parts :</p> <pre><code>([1], [2,3,4]) ([1,2], [3,4]) ([1,2,3] ,[4]) </code></pre> <p>Split in <code>3</code> parts :</p> <pre><code>([1], [2], [3,4]) ([1,2]), [3], [4]) ([1], [2,3], [4]) </code></pre> <p>Split in <code>len(array)</code> parts :</p> <pre><code>([1],[2],[3],[4]) </code></pre> <p>I know there is <code>np.split(array, r)</code>, but it will not give all possible splits. e.g. <code>np.split(array, 2)</code> will give,</p> <pre><code>[array([0, 1]), array([2, 3])] </code></pre> <p>As you can see this is not what I need. How to achieve my need?</p>
<p>You could use <a href="https://docs.python.org/library/itertools.html" rel="noreferrer"><code>itertools.combinations</code></a> to generate the indices where to split inside a loop over the number of splits:</p> <pre><code>&gt;&gt;&gt; from itertools import combinations &gt;&gt;&gt; [np.split(array, idx) ... for n_splits in range(5) ... for idx in combinations(range(1, len(array)), n_splits)] [[array([1, 2, 3, 4])], [array([1]), array([2, 3, 4])], [array([1, 2]), array([3, 4])], [array([1, 2, 3]), array([4])], [array([1]), array([2]), array([3, 4])], [array([1]), array([2, 3]), array([4])], [array([1, 2]), array([3]), array([4])], [array([1]), array([2]), array([3]), array([4])]] </code></pre>
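<p>As a quick sanity check: an array of length n has 2<sup>n-1</sup> possible splits (each of the n-1 gaps between elements is either cut or not), which matches the 8 results above. A short sketch:</p> <pre><code>import numpy as np
from itertools import combinations

array = np.arange(1, 5)
splits = [np.split(array, idx)
          for n_splits in range(len(array))
          for idx in combinations(range(1, len(array)), n_splits)]
assert len(splits) == 2 ** (len(array) - 1)  # 8 for a length-4 array
</code></pre>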
python|arrays|python-3.x|numpy|split
6
2,157
45,825,701
What is wrong with my cost function in numpy?
<p>I was trying to implement a cost function for a programming assignment in the Andrew Ng Deep Learning course, which requires my own original work. I am also not allowed to reproduce the assignment code without permission, so I have removed it from this question.</p> <p>The expected result for the cost is 6.000064773192205, but with this code my result is cost = 4.50006477319. Does anyone have any idea what I did wrong?</p> <p>(code removed)</p>
<p>There is an error in your sigmoid function. You are supposed to calculate the negative of <code>np.dot(np.transpose(w), X) + b</code>.</p> <p>Here is the one I have used:</p> <pre><code>A = 1 / (1 + np.exp(-(np.dot(np.transpose(w), X) + b))) </code></pre>
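<p>For context, a minimal self-contained sketch of the forward pass this line belongs to (the shapes and the cross-entropy cost are assumptions based on the usual logistic-regression setup, since the assignment code was removed):</p> <pre><code>import numpy as np

def sigmoid(z):
    # the minus sign here is the usual source of this bug
    return 1 / (1 + np.exp(-z))

# Tiny made-up example just to exercise the shapes
w, b = np.zeros((2, 1)), 0.0
X = np.array([[1.0, 2.0], [3.0, 4.0]])  # (n_features, m_examples)
Y = np.array([[1, 0]])                  # (1, m_examples)

m = X.shape[1]
A = sigmoid(np.dot(w.T, X) + b)         # activations, shape (1, m)
cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
print(cost)  # log(2) for zero weights
</code></pre>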
python|numpy|machine-learning|deep-learning
2
2,158
45,795,645
Groupby to Dataframe in Pandas
<p>Basically, I have a CSV file that contains thousands of rows, and I'm using pandas to read in the file.</p> <pre><code>import pandas as pd agg = pd.read_csv('Station.csv', sep = ',') </code></pre> <p>Then I grouped the data according to these categories:</p> <pre><code>month_station = agg.groupby(['month','StationName']) </code></pre> <p>The groupby will not be used for computing the mean, median, etc., but just for aggregating the data in terms of month and station name. <em>It's what the question wants.</em></p> <p>Now, I want to output month_station to a file, so first I need to turn the groupby into a dataframe.</p> <p>I've seen examples:</p> <pre><code>pd.DataFrame(month_station.size().reset_index(name = "Group_Count")) </code></pre> <p>But the thing is, I don't need the size/count of my data, just the grouping in terms of month and station name, which requires no count or sort. I tried removing the size() and it gives me an error.</p> <p>I just want the content of month_station ported into a dataframe so I can proceed and output it as a CSV file, but it seems complicated.</p>
<p>The whole point of <code>groupby</code> is to derive an aggregate calculation, such as a mean, count or sum. If you are merely trying to get one row for each pair of month and station name, try this:</p> <pre><code>month_station = agg.groupby(['month','StationName'], as_index=False).count() month_station = month_station[['month','StationName']] </code></pre>
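<p>If the goal is only one row per unique month/station pair, with no aggregate at all, <code>drop_duplicates</code> avoids <code>groupby</code> entirely; a sketch with stand-in data for the CSV:</p> <pre><code>import pandas as pd

# stand-in for agg = pd.read_csv('Station.csv', sep=',')
agg = pd.DataFrame({'month': [1, 1, 2],
                    'StationName': ['A', 'A', 'B'],
                    'value': [3.0, 4.0, 5.0]})

month_station = agg[['month', 'StationName']].drop_duplicates()
month_station.to_csv('month_station.csv', index=False)  # one row per pair
</code></pre>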
python|pandas|dataframe|group-by
0
2,159
23,198,053
How do you shift Pandas DataFrame with a multiindex?
<p>With the following DataFrame, how can I shift the "beyer" column based on the index without having Pandas assign the shifted value to a different index value?</p> <pre><code> line_date line_race beyer horse Last Gunfighter 2013-09-28 10 99 Last Gunfighter 2013-08-18 10 102 Last Gunfighter 2013-07-06 8 103 ..... Paynter 2013-09-28 10 103 Paynter 2013-08-31 10 88 Paynter 2013-07-27 8 100 </code></pre> <p><code>df['beyer'].shift(1)</code> produces...</p> <pre><code> line_date line_race beyer beyer_shifted horse Last Gunfighter 2013-09-28 10 99 NaN Last Gunfighter 2013-08-18 10 102 99 Last Gunfighter 2013-07-06 8 103 102 ..... Paynter 2013-09-28 10 103 71 Paynter 2013-08-31 10 88 103 Paynter 2013-07-27 8 100 88 </code></pre> <p>The problem is that Paynter was given a beyer that Last Gunfighter (his first record) was assigned. Instead I want it to go like this...</p> <pre><code> line_date line_race beyer beyer_shifted horse Last Gunfighter 2013-09-28 10 99 NaN Last Gunfighter 2013-08-18 10 102 99 Last Gunfighter 2013-07-06 8 103 102 ..... Paynter 2013-09-28 10 103 NaN Paynter 2013-08-31 10 88 103 Paynter 2013-07-27 8 100 88 </code></pre>
<p>Use <code>groupby/shift</code> to apply the shift to each group individually: (Thanks to Jeff for pointing out this simplification.)</p> <pre><code>In [60]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1); df Out[61]: line_date line_race beyer beyer_shifted Last Gunfighter 2013-09-28 10 99 NaN Last Gunfighter 2013-08-18 10 102 99 Last Gunfighter 2013-07-06 8 103 102 Paynter 2013-09-28 10 103 NaN Paynter 2013-08-31 10 88 103 Paynter 2013-07-27 8 100 88 </code></pre> <p>If you have a multiindex, you can group by more than one level by passing a sequence of <code>ints</code> or level names to <code>groupby's</code> <code>level</code> parameter.</p>
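<p>A minimal runnable sketch of that multi-level variant, with made-up data:</p> <pre><code>import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('A', 'x'), ('A', 'x'), ('A', 'y'), ('B', 'x')],
    names=['horse', 'track'])
df = pd.DataFrame({'beyer': [99, 102, 103, 88]}, index=idx)

# group on both levels so shifts never cross group boundaries
df['beyer_shifted'] = df.groupby(level=['horse', 'track'])['beyer'].shift(1)
print(df)
</code></pre>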
python|pandas
53
2,160
35,754,638
Getting the monthly maximum of a daily dataframe with the corresponding index value
<p>I have downloaded daily data from yahoo finance </p> <pre><code> Open High Low Close Volume \ Date 2016-01-04 10485.809570 10485.910156 10248.580078 10283.440430 116249000 2016-01-05 10373.269531 10384.259766 10173.519531 10310.099609 82348000 2016-01-06 10288.679688 10288.679688 10094.179688 10214.019531 87751700 2016-01-07 10144.169922 10145.469727 9810.469727 9979.849609 124188100 2016-01-08 10010.469727 10122.459961 9849.339844 9849.339844 95672200 ... 2016-02-23 9503.120117 9535.120117 9405.219727 9416.769531 87240700 2016-02-24 9396.480469 9415.330078 9125.190430 9167.799805 99216000 2016-02-25 9277.019531 9391.309570 9199.089844 9331.480469 0 2016-02-26 9454.519531 9576.879883 9436.330078 9513.299805 95662100 2016-02-29 9424.929688 9498.570312 9332.419922 9495.400391 90978700 </code></pre> <p>I would like to find the maximum closing price each month and also the date of this closing price.</p> <p>With a groupby <code>dfM = df['Close'].groupby(df.index.month).max()</code> it returns the monthly maximums, but I lose the daily index position.</p> <pre><code> grouped by month 1 10310.099609 2 9757.879883 </code></pre> <p>Is there a good way to keep the index? </p> <p>I would be looking for a result like this:</p> <pre><code> grouped by month 2016-01-05 10310.099609 2016-02-01 9757.879883 </code></pre>
<p>You can get the max value per month using <code>TimeGrouper</code> together with <code>groupby</code>:</p> <pre><code>from pandas.io.data import DataReader aapl = DataReader('AAPL', data_source='yahoo', start='2015-6-1') &gt;&gt;&gt; aapl.groupby(pd.TimeGrouper('M')).Close.max() Date 2015-06-30 130.539993 2015-07-31 132.070007 2015-08-31 119.720001 2015-09-30 116.410004 2015-10-31 120.529999 2015-11-30 122.570000 2015-12-31 119.029999 2016-01-31 105.349998 2016-02-29 98.120003 2016-03-31 100.529999 Freq: M, Name: Close, dtype: float64 </code></pre> <p>Using <code>idxmax</code> will get the corresponding dates of the max price.</p> <pre><code>&gt;&gt;&gt; aapl.groupby(pd.TimeGrouper('M')).Close.idxmax() Date 2015-06-30 2015-06-01 2015-07-31 2015-07-20 2015-08-31 2015-08-10 2015-09-30 2015-09-16 2015-10-31 2015-10-29 2015-11-30 2015-11-03 2015-12-31 2015-12-04 2016-01-31 2016-01-04 2016-02-29 2016-02-17 2016-03-31 2016-03-01 Name: Close, dtype: datetime64[ns] </code></pre> <p>To get the results side-by-side:</p> <pre><code>&gt;&gt;&gt; aapl.groupby(pd.TimeGrouper('M')).Close.agg({'max date': 'idxmax', 'max price': np.max}) max price max date Date 2015-06-30 130.539993 2015-06-01 2015-07-31 132.070007 2015-07-20 2015-08-31 119.720001 2015-08-10 2015-09-30 116.410004 2015-09-16 2015-10-31 120.529999 2015-10-29 2015-11-30 122.570000 2015-11-03 2015-12-31 119.029999 2015-12-04 2016-01-31 105.349998 2016-01-04 2016-02-29 98.120003 2016-02-17 2016-03-31 100.529999 2016-03-01 </code></pre>
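<p>In newer pandas versions <code>pd.TimeGrouper</code> was deprecated in favour of <code>pd.Grouper</code>; the same idea, as a sketch with synthetic data (so it runs without the retired yahoo <code>DataReader</code>):</p> <pre><code>import numpy as np
import pandas as pd

# synthetic daily closes standing in for the downloaded prices
rng = pd.date_range('2016-01-01', '2016-03-31', freq='D')
df = pd.DataFrame({'Close': np.random.rand(len(rng)) * 100}, index=rng)

monthly = df.groupby(pd.Grouper(freq='M')).Close.agg(['max', 'idxmax'])
print(monthly)  # max price per month and the date it occurred
</code></pre>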
python|pandas|group-by|max|time-series
9
2,161
35,431,447
How to summarise data by percentages in pandas
<p>This code:</p> <pre><code> #Missing analysis for actions - which action is missing the most action_types? grouped_missing_analysis = pd.crosstab(clean_sessions.action_type, clean_sessions.action, margins=True).unstack() grouped_unknown = grouped_missing_analysis.loc(axis=0)[slice(None), ['Missing', 'Unknown', 'Other']] print(grouped_unknown) </code></pre> <p>Leads to the printing of this:</p> <pre><code>action action_type 10 Missing 0 Unknown 0 11 Missing 0 Unknown 0 12 Missing 0 Unknown 0 15 Missing 0 Unknown 0 about_us Missing 0 Unknown 416 accept_decline Missing 0 Unknown 0 account Missing 0 Unknown 9040 acculynk_bin_check_failed Missing 0 Unknown 1 acculynk_bin_check_success Missing 0 Unknown 51 acculynk_load_pin_pad Missing 0 Unknown 50 </code></pre> <p>How would I now aggregate the total <code>Missing</code>, <code>Unknown</code> and <code>Other</code> count for each action, expressed as a percentage of the <code>All</code> row's <code>Missing</code>, <code>Unknown</code> or <code>Other</code> totals? So for example, there would be one row for each action, and the <code>about_us</code> row would have <code>416+0/Total Missing + Unknown + Other</code> for all actions.</p> <p>See <a href="https://stackoverflow.com/questions/35430279/how-to-filter-a-crosstab-created-in-pandas-by-a-specific-column?noredirect=1#comment58560016_35430279">this question</a> for context. </p> <p>The problem is that the above contains a row right at the bottom of it called <code>All</code> which is the sum of everything, so:</p> <pre><code>All Missing 1126204 Unknown 1031170 </code></pre> <p>Desired output would be:</p> <pre><code>action percent_total_missing_action_type 10 0 11 0 12 0 15 0 about_us 416/total_missing_action_type (in the All row - 2157374, or the sum of everything in the action_type column) accept_decline 0 account 9040/total_missing_action_type (in the All row - 2157374, or the sum of everything in the action_type column) acculynk_bin_check_failed 1/total_missing_action_type (in the All row - 2157374, or the sum of everything in the action_type column) etc.. </code></pre> <p>Here is some test data:</p> <pre><code>action action_type a Missing 2 Unknown 5 b Missing 3 Unknown 4 c Missing 5 Unknown 6 d Missing 1 Unknown 9 All Missing 11 Unknown 24 </code></pre> <p>Which should go into this:</p> <pre><code> action action_type_percentage a Missing 2/11 Unknown 5/24 b Missing 3/11 Unknown 4/24 c Missing 5/11 Unknown 6/24 d Missing 1/11 Unknown 9/24 All Missing 11/11 Unknown 24/24 </code></pre>
<p>First you can find the values under the <code>MultiIndex</code> key <code>All</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>xs</code></a>, and then divide the original <code>Series</code> by them. Lastly, you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>print df action action_type a Missing 2 Unknown 5 b Missing 3 Unknown 4 c Missing 5 Unknown 6 d Missing 1 Unknown 9 All Missing 11 Unknown 24 dtype: int64 print df.xs('All') Missing 11 Unknown 24 dtype: int64 print df / df.xs('All') action action_type a Missing 0.181818 Unknown 0.208333 b Missing 0.272727 Unknown 0.166667 c Missing 0.454545 Unknown 0.250000 d Missing 0.090909 Unknown 0.375000 All Missing 1.000000 Unknown 1.000000 dtype: float64 </code></pre> <pre><code>print (df / df.xs('All')).reset_index().rename(columns={0:'action_type_percentage'}) action action_type action_type_percentage 0 a Missing 0.181818 1 a Unknown 0.208333 2 b Missing 0.272727 3 b Unknown 0.166667 4 c Missing 0.454545 5 c Unknown 0.250000 6 d Missing 0.090909 7 d Unknown 0.375000 8 All Missing 1.000000 9 All Unknown 1.000000 </code></pre>
pandas|group-by|aggregate|pivot-table|crosstab
1
2,162
28,393,292
Daily schedule as an index in Pandas
<p>I want to represent a daily schedule, given originally as a CSV file, as a Pandas DataFrame. The key to each row in the schedule is an hourly range in a day. The ranges are not overlapping. For example:</p> <pre><code>00:00, 01:00, some data 01:00, 03:00, some more data 03:00, 04:30, some other data </code></pre> <p>How can I create a data frame with one level of the index representing the start-to-end hours range?</p>
<p>Starting from your example dataframe (put column names on it):</p> <pre><code>In [78]: df Out[78]: start end other 0 00:00 01:00 some data 1 01:00 03:00 some more data 2 03:00 04:30 some other data </code></pre> <p>Assuming start and end are strings, we can convert it to a datetime with <code>to_datetime</code>. This will use a default date as the data are only hours:</p> <pre><code>In [79]: pd.to_datetime(df['end'], format='%H:%M') Out[79]: 0 1900-01-01 01:00:00 1 1900-01-01 03:00:00 2 1900-01-01 04:30:00 Name: end, dtype: datetime64[ns] </code></pre> <p>But assuming the start and end are always on the same day, this default date does not matter if we just use the datetime to calculate the time difference between start and end:</p> <pre><code>In [80]: df['range'] = pd.to_datetime(df['end'], format='%H:%M') - pd.to_datetime(df['start'], format='%H:%M') In [81]: df Out[81]: start end other range 0 00:00 01:00 some data 01:00:00 1 01:00 03:00 some more data 02:00:00 2 03:00 04:30 some other data 01:30:00 </code></pre>
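<p>If you want the hourly range itself to be the index, as the question asks, one hedged option (a sketch with the same example data) is simply a MultiIndex over the start and end columns:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'start': ['00:00', '01:00', '03:00'],
                   'end':   ['01:00', '03:00', '04:30'],
                   'other': ['some data', 'some more data', 'some other data']})

df = df.set_index(['start', 'end'])  # one (start, end) pair per row
print(df.loc[('01:00', '03:00')])    # look up a row by its hourly range
</code></pre>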
python|datetime|pandas|dataframe|period
0
2,163
50,712,636
How to remove duplicates in a data frame using Python
<p>So the data frame is</p> <pre><code>Product Price Weight Range Count A 40 20 1-3 20 A 40 20 4-7 23 B 20 73 1-3 54 B 20 73 4-7 43 B 20 73 8-15 34 B 20 73 &gt;=16 12 C 10 20 4-7 22 </code></pre> <p>So basically there is a product with a price and weight; the range specifies the number of days the product was sold continuously, and the count is the number of products sold in that range.</p> <p>Expected Output</p> <pre><code>Product Price Weight Range Count A 40 20 1-3 20 4-7 23 B 20 73 1-3 54 4-7 43 8-15 34 B 20 73 &gt;=16 12 C 10 20 4-7 22 </code></pre> <p>or</p> <pre><code> Product Price Weight 1-3 4-7 8-15 &gt;=16 A 40 20 20 23 NaN NaN B 20 73 54 43 34 12 C 10 20 0 22 NaN NaN </code></pre>
<p>Fulfilling the second output makes more sense than the first. Use <code>set_index</code>, followed by <code>unstack</code>.</p> <pre><code>(df.set_index(['Product', 'Price', 'Weight', 'Range']) .Count .unstack(fill_value=0) .reset_index() ) Range Product Price Weight 1-3 4-7 8-15 &gt;=16 0 A 40 20 20 23 0 0 1 B 20 73 54 43 34 12 2 C 10 20 0 22 0 0 </code></pre>
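<p>An equivalent one-step alternative is <code>pivot_table</code>, which does the set_index/unstack in a single call; a self-contained sketch with the question's data:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Product': ['A', 'A', 'B', 'B', 'B', 'B', 'C'],
                   'Price':   [40, 40, 20, 20, 20, 20, 10],
                   'Weight':  [20, 20, 73, 73, 73, 73, 20],
                   'Range':   ['1-3', '4-7', '1-3', '4-7', '8-15', '&gt;=16', '4-7'],
                   'Count':   [20, 23, 54, 43, 34, 12, 22]})

out = (df.pivot_table(index=['Product', 'Price', 'Weight'],
                      columns='Range', values='Count', fill_value=0)
         .reset_index())
print(out)
</code></pre>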
python|python-3.x|pandas|duplicates
3
2,164
51,087,820
issues storing and extracting arrays in numpy file
<p>I am trying to store an array in a numpy file; however, when I try to extract and use it, I get an error about setting an array element with a sequence.</p> <p>These are the two arrays; I am unsure which one is causing the issue.</p> <pre><code>X = [[1,2,3],[4,5,6],[7,8,9]] y = [0,1,2,3,4,5,6....] </code></pre> <p>When I retrieve them, the values come back as:</p> <pre><code>X: array(list[1,2,3],list[4,5,6],list[7,8,9]) y = array([0,1,2,3,4,5...]) </code></pre> <p>Here is the code used to store them:</p> <pre><code>vectors = np.array(X) labels = np.array(y) </code></pre> <p>After retrieving, while running t-SNE:</p> <pre><code>visualisations = TSNE(n_components=2).fit_transform(X,y) </code></pre> <p>I get the following error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-11-244f99341167&gt; in &lt;module&gt;() ----&gt; 1 visualisations = TSNE(n_components=2).fit_transform(X,y) C:\ProgramData\Anaconda3\lib\site-packages\sklearn\manifold\t_sne.py in fit_transform(self, X, y) 856 Embedding of the training data in low-dimensional space. 857 """ --&gt; 858 embedding = self._fit(X) 859 self.embedding_ = embedding 860 return self.embedding_ C:\ProgramData\Anaconda3\lib\site-packages\sklearn\manifold\t_sne.py in _fit(self, X, skip_num_points) 658 else: 659 X = check_array(X, accept_sparse=['csr', 'csc', 'coo'], --&gt; 660 dtype=[np.float32, np.float64]) 661 if self.method == 'barnes_hut' and self.n_components &gt; 3: 662 raise ValueError("'n_components' should be inferior to 4 for the " C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 431 force_all_finite) 432 else: --&gt; 433 array = np.array(array, dtype=dtype, order=order, copy=copy) 434 435 if ensure_2d: ValueError: setting an array element with a sequence. </code></pre>
<pre><code>array(list[1,2,3],list[4,5,6],list[7,8,9]) </code></pre> <p>is a 1d object dtype array. To get that from</p> <pre><code>[[1,2,3],[4,5,6],[7,8,9]] </code></pre> <p>requires more than <code>np.array([[1,2,3],[4,5,6],[7,8,9]])</code>; either the list elements have to vary in size, or you have to initialize an object array and copy the list values to it.</p> <p>In any case <code>fit_transform</code> cannot handle that kind of array. It expects a 2d numeric dtype. Notice the parameters to the <code>check_array</code> function.</p> <p>If all the list elements of <code>X</code> are the same size, then</p> <pre><code>X = np.stack(X) </code></pre> <p>should turn it into a 2d numeric array. </p> <p>I suspect <code>X</code> was that 1d object array type before saving. By itself <code>save/load</code> should not turn a 2d numeric array into an object one.</p>
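<p>A small demonstration of the difference, assuming equal-length sublists:</p> <pre><code>import numpy as np

X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
arr = np.stack(X)            # 2d numeric array, what fit_transform expects
print(arr.shape, arr.dtype)  # (3, 3) int64 (platform dependent)

# an object array of lists is what breaks check_array:
ragged = np.array([[1, 2, 3], [4, 5]], dtype=object)
print(ragged.shape, ragged.dtype)  # (2,) object
</code></pre>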
python-3.x|numpy|numpy-ndarray
0
2,165
9,537,543
Replace NaN's in NumPy array with closest non-NaN value
<p>I have a NumPy array <code>a</code> like the following:</p> <pre><code>&gt;&gt;&gt; str(a) '[ nan nan nan 1.44955726 1.44628034 1.44409573\n 1.4408188 1.43657094 1.43171624 1.42649744 1.42200684 1.42117704\n 1.42040255 1.41922908 nan nan nan nan\n nan nan]' </code></pre> <p>I want to replace each NaN with the closest non-NaN value, so that all of the NaN's at the beginning get set to <code>1.449...</code> and all of the NaN's at the end get set to <code>1.419...</code>.</p> <p>I can see how to do this for specific cases like this, but I need to be able to do it generally for any length of array, with any length of NaN's at the beginning and end of the array (there will be no NaN's in the middle of the numbers). Any ideas?</p> <p>I can find the NaN's easily enough with <code>np.isnan()</code>, but I can't work out how to get the closest value to each NaN.</p>
<p>As an alternate solution (this will linearly interpolate for arrays <code>NaN</code>s in the middle, as well):</p> <pre><code>import numpy as np # Generate data... data = np.random.random(10) data[:2] = np.nan data[-1] = np.nan data[4:6] = np.nan print data # Fill in NaN's... mask = np.isnan(data) data[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), data[~mask]) print data </code></pre> <p>This yields:</p> <pre><code>[ nan nan 0.31619306 0.25818765 nan nan 0.27410025 0.23347532 0.02418698 nan] [ 0.31619306 0.31619306 0.31619306 0.25818765 0.26349185 0.26879605 0.27410025 0.23347532 0.02418698 0.02418698] </code></pre>
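<p>If pandas is available, the same edge filling (the question guarantees no interior NaNs, so no interpolation is needed) can be written as a forward fill followed by a backward fill; a sketch:</p> <pre><code>import numpy as np
import pandas as pd

a = np.array([np.nan, np.nan, 1.4495, 1.4408, 1.4204, np.nan, np.nan])
filled = pd.Series(a).ffill().bfill().values
print(filled)  # leading NaNs become 1.4495, trailing NaNs become 1.4204
</code></pre>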
python|arrays|numpy|nan
59
2,166
5,761,642
Python/Numpy - Get Index into Main Array from Subset
<p>Say I have a 100 element numpy array. I perform some calculation on a subset of this array - maybe 20 elements where some condition is met. Then I pick an index in this subset, how can I (efficiently) recover the index in the first array? I don't want to perform the calculation on all values in a because it is expensive, so I only want to perform it where it is required (where that condition is met).</p> <p>Here is some pseudocode to demonstrate what I mean (the 'condition' here is the list comprehension):</p> <pre><code>a = np.arange(100) # size = 100 b = some_function(a[[i for i in range(0,100,5)]]) # size = 20 Index = np.argmax(b) # Index gives the index of the maximum value in b, # but what I really want is the index of the element # in a </code></pre> <p><strong>EDIT:</strong></p> <p>I wasn't being very clear, so I've provided a more full example. I hope this makes it more clear about what my goal is. I feel like there is some clever and efficient way to do this, without some loops or lookups.</p> <p>CODE:</p> <pre><code>import numpy as np def some_function(arr): return arr*2.0 a = np.arange(100)*2. # size = 100 b = some_function(a[[i for i in range(0,100,5)]]) # size = 20 Index = np.argmax(b) print Index # Index gives the index of the maximum value in b, but what I really want is # the index of the element in a # In this specific case, Index will be 19. So b[19] is the largest value # in b. Now, what I REALLY want is the index in a. In this case, that would # 95 because some_function(a[95]) is what made the largest value in b. print b[Index] print some_function(a[95]) # It is important to note that I do NOT want to change a. I will perform # several calculations on SOME values of a, then return the indices of 'a' where # all calculations meet some condition. </code></pre>
<p>I am not sure if I understand your question. So, correct me if I am wrong.</p> <p>Let's say you have something like</p> <pre><code>a = np.arange(100) condition = (a % 5 == 0) &amp; (a % 7 == 0) b = a[condition] index = np.argmax(b) # The following should do what you want a[condition][index] </code></pre> <p>Or if you don't want to work with masks:</p> <pre><code>a = np.arange(100) b_indices = np.where(a % 5 == 0) b = a[b_indices] index = np.argmax(b) # Get the value of 'a' corresponding to 'index' a[b_indices][index] </code></pre> <p>Is this what you want?</p>
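<p>Equivalently, you can keep an explicit array of the original indices with <code>np.flatnonzero</code> and index back through it; a sketch with the question's setup:</p> <pre><code>import numpy as np

a = np.arange(100)
idx = np.flatnonzero(a % 5 == 0)  # indices into `a` where the condition holds
b = a[idx] * 2.0                  # the expensive calculation, subset only
orig_index = idx[np.argmax(b)]    # argmax in b, mapped back into a
print(orig_index)                 # 95
</code></pre>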
python|indexing|numpy
2
2,167
66,348,482
Count the number of field having value populated except NULL using lambda function throughout all rows
<p>Thanks for having a look at this question. I am writing logic using a lambda that runs through all the rows and counts the number of fields that have a value (i.e. are not NA), as you can see in the given example.</p> <pre><code>Input : project_id project_a project_b project_c project_d project_e 1 Yes Yes Yes NA Yes 2 Yes Yes Yes NA Yes 3 NA Yes Yes NA Yes 4 Yes Yes Yes NA Yes 5 NA Yes Yes NA Yes Desired Output : project_id project_a project_b project_c project_d project_e field_populated 1 Yes Yes Yes NA Yes 5 2 Yes Yes Yes NA Yes 5 3 NA Yes Yes NA Yes 3 4 Yes Yes Yes NA Yes 5 5 NA Yes Yes NA Yes 4 </code></pre> <p>I have tried using the following code, but I am facing some issues.</p> <pre><code>proj_table['field_populated'] = proj_table['project_id', 'project_a', 'project_b', 'project_c', 'project_d', 'project_e'].apply(lambda x: x+1 if x != &quot;NA&quot; or np.nan else x) </code></pre>
<p>You are overcomplicating it: you can count the non-null values of your dataframe and populate a new column using <code>count</code>, performing the operation along rows (<code>axis=1</code>).</p> <p><code>filter(like='project')</code> will keep only the columns with 'project' in their name, in case your actual <code>df</code> has more columns.</p> <pre><code>df['field_populated'] = df.filter(like='project').count(axis=1) </code></pre> <p>Which prints:</p> <pre><code>df project_id project_a project_b ... project_d project_e field_populated 0 1 Yes Yes ... NaN Yes 5 1 2 Yes Yes ... NaN Yes 5 2 3 NaN Yes ... NaN Yes 4 3 4 Yes Yes ... NaN Yes 5 4 5 NaN Yes ... NaN Yes 4 </code></pre>
python|pandas
3
2,168
66,394,218
Applying .get() function On a Pandas series
<p>I am working on a sample dataset to retrieve location information from addresses (some details are changed for privacy):</p> <pre><code>temp2=pd.DataFrame({'USER_ID':[1268,12345,4204,4208], 'IP_ADDR':['142.176.00.83','24.000.63.230','187.178.252.99','187.178.250.99']}) </code></pre> <p>My goal is to get latitude and longitude information using the <code>ip2geotools</code> python package. The syntax is as follows:</p> <pre><code>!pip install ip2geotools response = DbIpCity.get(a, api_key='free') json_file = response.to_json() </code></pre> <p>where <code>a='142.176.00.83'</code>. Then we get a JSON string like this:</p> <pre><code>'{&quot;ip_address&quot;: &quot;142.176.00.83&quot;, &quot;city&quot;: &quot;Charlotte&quot;, &quot;region&quot;: &quot;Prince Edward&quot;, &quot;country&quot;: &quot;CA&quot;, &quot;latitude&quot;: 46.2, &quot;longitude&quot;: -63.131}' </code></pre> <p>I am trying to apply the function on an entire pandas series (vectorized form) and retrieve latitude and longitude as two different columns. Here is my attempt:</p> <pre><code>temp2['y'] = temp2['IP_ADDR'].apply(lambda x: DbIpCity.get(x, api_key='free')) </code></pre> <p>But it seems it doesn't like this syntax and raises <code>InvalidRequestError</code>.</p> <p>But if I execute the code on one string it works fine:</p> <pre><code>DbIpCity.get('2401:4900:40cc:e9cc:6ccc:348e:4020:2593', api_key='free') ip2geotools.models.IpLocation(2401:4900:40cc:e9cc:6ccc:348e:4020:2593) </code></pre> <p>On the other hand, if there are no quotes then it fails:</p> <pre><code>DbIpCity.get(2401:4900:40cc:e9cc:6ccc:348e:4020:2593, api_key='free') SyntaxError: invalid syntax </code></pre> <p>But my data doesn't have quotes around it, and if I try to add the quotes it fails:</p> <pre><code>i=str(2401:4900:40cc:e9cc:6ccc:348e:4020:2593) print(&quot;'&quot;+str(i)+&quot;'&quot;) i=str(2401:4900:40cc:e9cc:6ccc:348e:4020:2593) ^ SyntaxError: invalid syntax </code></pre> <p>Can I kindly get some help on how to vectorize this operation and retrieve fields from the JSON? Thanks.</p>
<p>The error is raised by ip2geotools, not pandas, because the IP format is improper. Code works for me after changing IP's to have only single 0's in each part.</p> <p>i.e. change <code>'24.000.63.230'</code> to <code>'24.0.63.230'</code></p> <p>You can apply this fix to your dataframe using:</p> <pre><code>temp2['IP_ADDR'] = temp2['IP_ADDR'].replace(r'\.0+\.', '.0.', regex=True) </code></pre>
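<p>With the IPs normalized, the original <code>apply</code> should work; one hedged way to split the result into latitude/longitude columns, reusing <code>temp2</code> from the question (note this makes one network request per row, so it will be slow on large frames):</p> <pre><code>from ip2geotools.databases.noncommercial import DbIpCity

locs = temp2['IP_ADDR'].apply(lambda ip: DbIpCity.get(ip, api_key='free'))
temp2['latitude'] = locs.apply(lambda r: r.latitude)
temp2['longitude'] = locs.apply(lambda r: r.longitude)
</code></pre>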
python|pandas|geolocation
0
2,169
57,541,947
Keras LSTM TensorFlow Error: 'Shapes must be equal rank, but are 1 and 0'
<p>I'm trying to create a Keras LSTM (Please note that I am new to LSTMs and RNNs in Keras). The neural network is supposed to take an input of 4116 values, and output 4116 values. This is to be done for 288 timesteps. I have 27 such timesteps (I realize this will likely lead to overfitting; I have a larger dataset, but first want to test my code with just 27 training examples).</p> <p>The training data is stored in two numpy arrays <code>x</code> and <code>y</code>. These variables have a shape of <code>(27, 288, 4116)</code>.</p> <p>My code:</p> <pre><code>datapoints = data.get.monthPoints(2, year) x, y = datapoints[:-1], datapoints[1:] del datapoints input_shape = x.shape[1:] output_shape = y.shape[1:] checkpoint = ModelCheckpoint('model/files/alpha.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='auto', period=1) early = EarlyStopping(monitor='val_loss', min_delta=0, patience=1, verbose=1, mode='auto') model = Sequential() model.add(LSTM(5488, activation='relu', input_shape=input_shape)) model.add(RepeatVector(output_shape)) model.add(LSTM(5488, activation='relu', return_sequences=True)) model.add(TimeDistributed(Dense(output_shape))) model.compile(loss='mse', optimizer='adam') model.fit(x, y, epochs=100, batch_size=8, callbacks = [checkpoint, early]) </code></pre> <p>When I run the program, I get the following error(s):</p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes must be equal rank, but are 1 and 0 From merging shape 1 with other shapes. for 'repeat_vector/stack_1' (op: 'Pack') with input shapes: [], [2], [] </code></pre> <p>and</p> <pre><code>During handling of the above exception, another exception occurred: ValueError: Shapes must be equal rank, but are 1 and 0 From merging shape 1 with other shapes. for 'repeat_vector/stack_1' (op: 'Pack') with input shapes: [], [2], [] </code></pre> <p>I've seen a few other similar questions like <a href="https://stackoverflow.com/q/46273364/7296192">this</a> and <a href="https://stackoverflow.com/q/54806450/7296192">this</a>, but they haven't offered solutions that fix my problem or the solutions have been unclear.</p> <p>I guess my problem has something to do with me structuring the network incorrectly or formatting my data incorrectly.</p> <p>Any insight would me much appreciated.</p> <p>Thank you.</p>
<p>You probably want to repeat the output of first LSTM layer as much as the number of timesteps in model's output sequence (i.e. <code>y</code>). Therefore, it should be:</p> <pre><code>model.add(RepeatVector(y.shape[1])) </code></pre>
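<p>Putting that into the model, a minimal corrected sketch reusing <code>x</code> and <code>y</code> from the question (the hidden size is smaller than the question's 5488, purely for illustration; note that the final <code>Dense</code> also needs an integer number of units, i.e. the 4116 features per step, rather than a shape tuple):</p> <pre><code>from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

n_steps, n_features = x.shape[1], x.shape[2]  # 288, 4116

model = Sequential()
model.add(LSTM(256, activation='relu', input_shape=(n_steps, n_features)))
model.add(RepeatVector(y.shape[1]))            # one copy per output timestep
model.add(LSTM(256, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(y.shape[2])))  # 4116 outputs per timestep
model.compile(loss='mse', optimizer='adam')
</code></pre>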
python|tensorflow|keras|lstm|recurrent-neural-network
0
2,170
57,404,679
Subset a df using an if statement - Pandas
<p>I am hoping to create and return a subsetted <code>df</code> using an <code>if</code> statement. Specifically, for the code below, I have two different sets of values. The <code>df</code> I want to return will vary based on one of these values. </p> <p>Using the code below, the specific value will be within <code>normal</code> and <code>different</code>. The value in <code>place</code> will dictate how the <code>df</code> will be subsetted. </p> <p>Below is my attempt. The value in <code>place</code> will only ever be a single value, so it won't match the lists in full. Is it possible to return the <code>df</code> when the value in <code>place</code> is equal to a single value with these lists?</p> <p>I'm hoping to return <code>df1</code> to be used for subsequent tasks.</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'period' : [1.0, 1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 7.0, 7.0, 8.0, 9.0], }) place = 'a' normal = ['a','b'] different = ['v','w','x','y','z'] different_subset_start = 2 normal_subset_start = 4 subset_end = 8 for val in df: if place in different: print('place is different') df1 = df[(df['period'] &gt;= different_subset_start) &amp; (df['period'] &lt;= subset_end)].drop_duplicates(subset = 'period') return df1 elif place in normal: print('place is normal') df1 = df[(df['period'] &gt;= normal_subset_start) &amp; (df['period'] &lt;= subset_end)].drop_duplicates(subset = 'period') return df1 else: print('Incorrect input for Day. Day Floater could not be scheduled. Please check input value') return </code></pre> <p>print(df1)</p> <p>Intended output would be to return <code>df1</code> to be used later.</p> <pre><code> period 2 2.0 4 3.0 5 4.0 6 5.0 7 7.0 9 8.0 </code></pre>
<p>To check if an object is <em>in</em> something rather than check if it <em>equal to</em> something, use <code>in</code>.</p> <pre><code>if place in different: </code></pre> <p>and similarly</p> <pre><code>elif place in normal: </code></pre> <hr> <p><strong>EDIT:</strong></p> <p>Here is how it should look if you make it a function. Basically, you just need to do a <code>def my_function_name(arguments):</code> sort of thing, then indent the rest of your code so it belongs to that function. Like this:</p> <pre><code>import pandas as pd def get_subset(df, place): normal = ['a','b'] different = ['v','w','x','y','z'] different_subset_start = 2 normal_subset_start = 4 subset_end = 8 if place in different: df1 = df[(df['period'] &gt;= different_subset_start) &amp; (df['period'] &lt;= subset_end)].drop_duplicates(subset = 'period') elif place in normal: df1 = df[(df['period'] &gt;= normal_subset_start) &amp; (df['period'] &lt;= subset_end)].drop_duplicates(subset = 'period') else: df1 = None return df1 df = pd.DataFrame({ 'period' : [1.0, 1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 7.0, 7.0, 8.0, 9.0], }) place = 'a' print(get_subset(df, place)) </code></pre>
python|pandas|if-statement
1
2,171
57,465,747
Strange behavior with pandas timestamp to posix conversion
<p>I do the following operations:</p> <ol> <li>Convert string datetime in pandas dataframe to python datetime via <code>apply(strptime)</code></li> <li>Convert <code>datetime</code> to posix timestamp via <code>.timestamp()</code> method</li> <li>If I revert posix back to <code>datetime</code> with <code>.fromtimestamp()</code> I obtain different datetime</li> </ol> <p>It differs by 3 hours which is my timezone (I'm at UTC+3 now), so I suppose it is a kind of timezone issue. Also I understand that in apply it implicitly converts to <code>pandas.Timestamp</code>, but I don't understand the difference in this case.</p> <p>What is the reason for such strange behavior and what should I do to avoid it? Actually in my project I need to compare this pandas timestamps with correct poxis timestamps and now it works wrong.</p> <p>Below is dummy reproducible example:</p> <pre><code>df = pd.DataFrame(['2018-03-03 14:30:00'], columns=['c']) df['c'] = df['c'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S')) dt = df['c'].iloc[0] dt &gt;&gt; Timestamp('2018-03-03 14:30:00') datetime.datetime.fromtimestamp(dt.timestamp()) &gt;&gt; datetime.datetime(2018, 3, 3, 17, 30) </code></pre>
<p>First, I suggest using the <code>np.datetime64</code>-based values when working with <code>pandas</code> (the nanoseconds-since-epoch integer exposed through <code>.value</code>). In this case it makes the round trip simple.</p> <pre><code>pd.to_datetime('2018-03-03 14:30:00').value #1520087400000000000 pd.to_datetime(pd.to_datetime('2018-03-03 14:30:00').value) #Timestamp('2018-03-03 14:30:00') </code></pre> <p>The issue with the other methods is that POSIX has UTC as the origin, but <code>fromtimestamp</code> returns the local time. If your system timezone isn't UTC, then we get issues. The following methods will work to remedy this:</p> <pre><code>from datetime import datetime import pytz dt #Timestamp('2018-03-03 14:30:00') # Seemingly problematic: datetime.fromtimestamp(dt.timestamp()) #datetime.datetime(2018, 3, 3, 9, 30) </code></pre> <hr> <pre><code>datetime.fromtimestamp(dt.timestamp(), tz=pytz.utc) #datetime.datetime(2018, 3, 3, 14, 30, tzinfo=&lt;UTC&gt;) datetime.combine(dt.date(), dt.timetz()) #datetime.datetime(2018, 3, 3, 14, 30) mytz = pytz.timezone('US/Eastern') # Use your own local timezone datetime.fromtimestamp(mytz.localize(dt).timestamp()) #datetime.datetime(2018, 3, 3, 14, 30) </code></pre>
python|pandas|datetime|timezone
1
2,172
57,715,881
How to compare 2 columns of 2 different dataframes pandas, and sum the result pandas
<p>I have 2 dataframes with the same length, but a different number of columns. </p> <p>I'd like to compare 2 specific columns from those dataframes and, wherever the values are equal, increment a counter by 1, like so:</p> <p>df1:</p> <pre><code>count = 0 num 0 0 1 1 2 0 3 0 4 1 </code></pre> <p>df2:</p> <pre><code> Preg Glu outcome 0 5.0 116.0 0.0 1 10.0 115.0 0.0 2 2.0 197.0 0.0 3 7.0 196.0 1.0 4 10.0 125.0 1.0 </code></pre> <p>Thus, since they were equal in 3 positions, the result should be:</p> <pre><code>count = 3 </code></pre> <p>What is the best way to do that?</p>
<p>You can check by performing an <em>elementwise</em> comparison between the two:</p> <pre><code>&gt;&gt;&gt; (df1['num'] == df2['outcome']).sum() 3 </code></pre>
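<p>One caveat: <code>==</code> on two Series aligns on the index first. If the frames have different indexes (for example after filtering), compare the underlying arrays positionally instead; a small sketch with toy data:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'num': [0, 1, 0, 0, 1]})
df2 = pd.DataFrame({'outcome': [0.0, 0.0, 0.0, 1.0, 1.0]})

# positional comparison, ignoring index labels
count = (df1['num'].values == df2['outcome'].values).sum()
print(count)  # 3
</code></pre>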
pandas|compare
0
2,173
57,429,648
Query within multiple csv files for get the suitable data-set based on given conditions on pandas columns
<p>I have approximately 25 CSV datasets, and every CSV file shares many common column names. These files are all from the speech-recognition domain, for use in text-to-speech projects. Choosing a dataset for a specific kind of project currently means looking through all 25 datasets and picking the preferred one.</p> <p>For example, for project <code>A</code> I need a dataset with specific features: column <code>Speaker</code> is <code>Male</code>, <code>Sampling rate</code> is <code>48000</code>, <code>Language</code> is <code>en</code>, etc.</p> <p>How can I read all those CSV files and get the name of the dataset which matches the conditions?</p> <p>I tried to use <code>itertuples</code> over the rows of each CSV to find the rows that contain the targeted information; however, I need just the name of the dataset as the outcome. This is as far as I got:</p> <pre class="lang-py prettyprint-override"><code>import os, fnmatch result = [] def find(pattern, path): for root, dirs, files in os.walk(path): for name in files: if fnmatch.fnmatch(name, pattern): result.append(os.path.join(root, name)) return result csv = find('*.csv', './') </code></pre> <p>This function returns all 25 CSV files, and now I am stuck writing the logic to search over all of them and find the names of the datasets whose columns contain the given values. I am looking for something where my code accepts multiple arguments (conditions), queries the columns of those 25 CSV files, and reports the names of the datasets that contain such features.</p> <p>Conditions:</p> <pre><code>Language = 'en' Gender = 'Male' Sample rate = 48000 </code></pre> <p>Expected output:</p> <pre><code>Following Data has such features: 1) Data_xyz 2) Data_abc </code></pre>
<p>You can iterate over each row, and if all of the keywords in your query are somewhere in that row, you have a match. You can use a list comprehension where you check 'if all items in the query are in the row I'm looking at, consider it a match'. In this approach we're actually capturing those rows into a new dataframe and looking at the shape of the resulting dataframe. Shape is a tuple (rows, cols), so if we look at shape[0] of the resulting dataframe it will either be zero if there are no matches, or 1+ if there are rows that match.</p> <pre><code>import pandas as pd data_0 = {'Name':['Ned', 'Ped', 'Ded'], 'Gender': ['Male', 'Male', 'Female'], 'Lang': ['En', 'De', 'Fr']} data_1 = {'Name':['Sia', 'Kori', 'Maya'], 'Gender': ['Female', 'Female','Female' ], 'Lang': ['En', 'En','En']} final_data = [pd.DataFrame(data_0), pd.DataFrame(data_1)] query = ['Male', 'En'] &gt;&gt;&gt; for c, data_frame in enumerate(final_data): ... if data_frame[data_frame.apply(lambda row: all([i in row.values for i in query]), axis=1)].shape[0] &gt;= 1: ... print('match on data_{}'.format(c)) ... match on data_0 </code></pre>
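<p>A hedged end-to-end sketch that applies the same idea to the files found on disk and prints the dataset names in the question's expected format (it assumes the query values appear literally in the rows):</p> <pre><code>import os
import pandas as pd

query = ['en', 'Male', 48000]  # Language, Gender, Sample rate
matches = []
for path in csv:  # the list built by find('*.csv', './') in the question
    df = pd.read_csv(path)
    mask = df.apply(lambda row: all(q in row.values for q in query), axis=1)
    if mask.any():
        matches.append(os.path.splitext(os.path.basename(path))[0])

print('Following data has such features:')
for i, name in enumerate(matches, 1):
    print('{}) {}'.format(i, name))
</code></pre>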
python|pandas|csv|numpy
0
2,174
24,236,079
As of June 2014, what tools should one consider for improving Python code performance?
<p>I have written a small scientific experiment in Python, and I now need to consider optimizing this code. After profiling, what tools should I consider in order to improve performance. From my understanding, the following wouldn't work:</p> <p><strong>Psyco</strong>: out of date (doesn't support Python 2.7)</p> <p><strong>Pyrex:</strong> last update was in 2010</p> <p><strong>Pypy:</strong> has issues with NumPy</p> <p>What options remain now apart from writing C modules and then somehow interfacing them with Python (for example, by using <strong>Cython</strong>)?</p>
<p>You can use Cython to compile the bottlenecks to C. This is very effective for numerical code where you have tight loops. Python loops add quite a lot of overhead that is non-existent if you can translate things to pure C. In general, you can get very good performance for any statically typed code (that is, your types do not change, and you can annotate them in the source).</p>

<p>You can also write the core parts of your algorithm in C (or take an already written library) and wrap it. You can still do it by writing a lot of boilerplate code with Cython or SWIG, but now there are tools like XDress that can do this for you. If you are a FORTRAN person, f2py is your tool.</p>

<p>Modern CPUs have many cores, so you should be able to take advantage of them using Python's multiprocessing. The guys at joblib have provided a very nice and simplified interface for it.</p>

<p>Some problems are also suitable for GPU computing, where you can use PyCUDA.</p>

<p>Theano is a library that is a bridge between Numpy, Cython, Sympy, and PyCUDA. It can evaluate and compile expressions and generate GPU kernels.</p>

<p>Lastly, there is the future, with Numba and Blaze. Numba is a JIT compiler based on LLVM. The development is not complete, as some syntax is missing and bugs are quite common. I don't believe it is ready for production code, unless you are sure your codebase is fully supported and you have a <em>very</em> good test coverage. Blaze is a <em>next generation Numpy</em>, with support for out-of-core storage and more flexible arrays, designed to use Numba as a backend to speed up execution. It is in a quite early stage of development.</p>

<p>Regarding your options:</p>

<ul>
<li><strong>Psyco:</strong> the author considered the project done, and he decided to collaborate with Pypy. Most of its features are in there now.</li>
<li><strong>Pyrex:</strong> abandoned project, from which Cython was forked. Cython has all of its features and much more.</li>
<li><strong>Pypy:</strong> not a real option for general scientific code because the interfacing with C is too slow and not complete. Numpy is only partially supported, and there is little hope Scipy will ever be (mainly because of the FORTRAN dependencies). This may change in the future, but probably not any time soon. Not being able to fully use C extensions greatly limits the possibilities for using external code. I must add I have used it successfully with Networkx (a pure Python networks library), so there are use cases where it could be of use.</li>
</ul>
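<p>Since joblib comes up above, here is a minimal sketch of its interface (the <code>simulate</code> function is a hypothetical stand-in for your own per-run work):</p>

<pre><code>from joblib import Parallel, delayed
import numpy as np

def simulate(seed):
    rng = np.random.RandomState(seed)
    return rng.randn(1000).sum()

# run 8 independent jobs across all available cores
results = Parallel(n_jobs=-1)(delayed(simulate)(s) for s in range(8))
</code></pre>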
python|numpy
5
2,175
43,765,031
How to plot lines based on the existence of continuous data
<p>I have a dataset that looks like the below:</p> <pre><code>+------------+--------+ | trend_name | date | +------------+--------+ | dogs | 5/3/17 | | cats | 5/3/17 | | owls | 5/3/17 | | dogs | 5/4/17 | | cats | 5/4/17 | | tigers | 5/4/17 | | cats | 5/5/17 | | bears | 5/5/17 | | giraffes | 5/5/17 | +------------+--------+ </code></pre> <p>I'd like to create a plot that has <code>trend_name</code> on the y-axis and <code>date</code> on the x-axis with lines connecting trends that continue for >1 periods and the same plane of the trend and a dot for trends that only exist for a single period and nothing if a trend does not exist for a particular period. </p> <p>The plot would look something like this: <a href="https://i.stack.imgur.com/0LuSd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0LuSd.jpg" alt="enter image description here"></a></p> <p>I tried simply <code>t.plot(x='date', y='trend_name')</code> but of course there is no data, so it threw an error.</p> <p>Is there a specific name for this type of plot so I can find better resources or does anyone have suggestions on how to accomplish this?</p> <p>UPDATE:</p> <p>t is a pandas dataframe like this, but follows a similar pattern to he mock dataframe above:</p> <p><a href="https://i.stack.imgur.com/lVS9c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lVS9c.png" alt="enter image description here"></a></p> <p><code>t.plot(x='datetime_collected', y='name')</code> yields:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-95-d2a37de17ec0&gt; in &lt;module&gt;() ----&gt; 1 t.plot(x='datetime_collected', y='name') /usr/local/lib/python2.7/site-packages/pandas/tools/plotting.pyc in __call__(self, x, y, kind, ax, subplots, sharex, sharey, layout, figsize, use_index, title, grid, legend, style, logx, logy, loglog, xticks, yticks, xlim, ylim, rot, fontsize, colormap, table, yerr, xerr, secondary_y, sort_columns, **kwds) 3772 fontsize=fontsize, colormap=colormap, table=table, 3773 yerr=yerr, xerr=xerr, secondary_y=secondary_y, -&gt; 3774 sort_columns=sort_columns, **kwds) 3775 __call__.__doc__ = plot_frame.__doc__ 3776 /usr/local/lib/python2.7/site-packages/pandas/tools/plotting.pyc in plot_frame(data, x, y, kind, ax, subplots, sharex, sharey, layout, figsize, use_index, title, grid, legend, style, logx, logy, loglog, xticks, yticks, xlim, ylim, rot, fontsize, colormap, table, yerr, xerr, secondary_y, sort_columns, **kwds) 2641 yerr=yerr, xerr=xerr, 2642 secondary_y=secondary_y, sort_columns=sort_columns, -&gt; 2643 **kwds) 2644 2645 /usr/local/lib/python2.7/site-packages/pandas/tools/plotting.pyc in _plot(data, x, y, subplots, ax, kind, **kwds) 2468 plot_obj = klass(data, subplots=subplots, ax=ax, kind=kind, **kwds) 2469 -&gt; 2470 plot_obj.generate() 2471 plot_obj.draw() 2472 return plot_obj.result /usr/local/lib/python2.7/site-packages/pandas/tools/plotting.pyc in generate(self) 1039 def generate(self): 1040 self._args_adjust() -&gt; 1041 self._compute_plot_data() 1042 self._setup_subplots() 1043 self._make_plot() /usr/local/lib/python2.7/site-packages/pandas/tools/plotting.pyc in _compute_plot_data(self) 1148 if is_empty: 1149 raise TypeError('Empty {0!r}: no numeric data to ' -&gt; 1150 'plot'.format(numeric_data.__class__.__name__)) 1151 1152 self.data = numeric_data TypeError: Empty 'DataFrame': no numeric data to plot </code></pre>
<p>It's possible that this is far from the most elegant solution, especially since I'm not very familiar with pandas. But anyway, here's a solution which creates an auxiliary dataframe for your plot limits (this is inevitable if you want to ignore data points which are not represented in your current time window):</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from matplotlib.ticker import FuncFormatter # dummy data dat = pd.DataFrame({'beast': ['dog','cat','owl','dog','cat','tiger','cat','bear','giraffe','unicorn'], 'collected': pd.to_datetime(['2016-03-09']*3 + ['2016-04-05']*3 + ['2016-05-05']*3 + ['2016-06-06'])}) # plotting date interval t1,t2 = (pd.to_datetime(t) for t in ('2016-03-09','2016-05-05')) # create auxiliary dataframe for plotting dat_tmp = dat[(t1&lt;=dat.collected) &amp; (dat.collected&lt;=t2)] # filtered between t1 and t2 beast_id,beasts = zip(*enumerate(dat_tmp.beast.unique())) # indexing step: see http://stackoverflow.com/a/22346955 dat_tmp = dat_tmp.merge(pd.DataFrame({'beast': beasts, 'beast_id': beast_id}),on='beast',how='left') dat_tmp = dat_tmp.pivot(index='collected',columns='beast',values='beast_id') # plot dat_tmp.plot(style='.-') def format_fn(tick_val, tick_pos): '''uses items in the list `beasts` to set yticklabels''' if int(tick_val) in beast_id: return beasts[int(tick_val)] else: return '' plt.gca().yaxis.set_major_formatter(FuncFormatter(format_fn)) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/mAadu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mAadu.png" alt="result"></a></p> <p>As you can see, there's still a lot of room for formatting improvements: hiding irrelevant x ticks, zooming out a bit to fully show all the points, moving around the legend, etc., but those are trivial facelifts.</p> <p>As for the dummy example I put together (I suggest you do the same yourself next time, makes it easier for others to play around with your problem), we started out with this dataframe:</p> <pre><code> beast collected 0 dog 2016-03-09 1 cat 2016-03-09 2 owl 2016-03-09 3 dog 2016-04-05 4 cat 2016-04-05 5 tiger 2016-04-05 6 cat 2016-05-05 7 bear 2016-05-05 8 giraffe 2016-05-05 9 unicorn 2016-06-06 </code></pre> <p>Note the unicorn data point which is altogether missing from the plot. After the indexing/merging step we end up with</p> <pre><code> beast collected beast_id 0 dog 2016-03-09 0 1 cat 2016-03-09 1 2 owl 2016-03-09 2 3 dog 2016-04-05 0 4 cat 2016-04-05 1 5 tiger 2016-04-05 3 6 cat 2016-05-05 1 7 bear 2016-05-05 4 8 giraffe 2016-05-05 5 </code></pre> <p>As you can see, each point has been annotated with the integer index of the given animal. We need this, as this is the data we need for the <code>y</code> axis of our plot. After pivoting the final result is</p> <pre><code>beast bear cat dog giraffe owl tiger collected 2016-03-09 NaN 1.0 0.0 NaN 2.0 NaN 2016-04-05 NaN 1.0 0.0 NaN NaN 3.0 2016-05-05 4.0 1.0 NaN 5.0 NaN NaN </code></pre> <p>the columns of which will be plotted as separate lines. There's likely a shorter course of action that results in the same or equivalently useful data frame, but this is all I've got. The upside is that the <code>NaN</code>s in the data set will automagically enforce your "lines where data are contiguously available" rule.</p>
python|pandas|matplotlib|plot
3
2,176
73,027,326
How to import a module once I have installed the module on a virtual environment - TensorFlow
<p>I am very new to this virtual environment concept. So if you could also explain that, it would be great.</p>

<p>Anyways, I am using Anaconda3. Here are the steps that I took to try to use TensorFlow.</p>

<ol>
<li>From &quot;base&quot; anaconda I tried to install, which gave me the below error.</li>
</ol>

<pre><code>(base) C:\Users\ikim1&gt;conda create -n tf tensorflow
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
</code></pre>

<p>FYI, the Environment -&gt; base tab on Anaconda3 did not show TensorFlow; it seems like some people were lucky that their Anaconda3 just came with the TensorFlow module...</p>

<ol start="2">
<li>After reading some documentation, I realized that using a virtual environment can solve the issue. After using &quot;tf&quot; as the virtual environment name, I was able to install TensorFlow.</li>
</ol>

<p>After the install, I ran this command</p>

<pre><code>conda activate tf
</code></pre>

<p>to activate the environment.</p>

<ol start="3">
<li>In Spyder, I checked if TensorFlow would import with the code below</li>
</ol>

<pre><code>import tensorflow as tf
</code></pre>

<p>which gives me this error: ModuleNotFoundError: No module named 'tensorflow'</p>

<p>Thus my question is: do I need to change directory so that Python knows where to import the module from? So do I need to write code like the below in Spyder?</p>

<pre><code>cd &quot;my virtual environment&quot; (not sure what the code would be)
import tensorflow as tf
</code></pre>

<p>Or did I just make some mistake installing it?</p>
<p>Please run the following command to see if Tensorflow is installed. It will either state that the package is not installed or display a bunch of information about it.</p> <pre><code>pip show tensorflow </code></pre> <p>If the Tensorflow package is not already installed, please try installing it in the environment where you are working with the following commands before attempting to import tensorflow.</p> <pre><code># activate the environment conda activate tf # install tensorflow pip install tensorflow </code></pre>
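<p>A likely cause here (an educated guess, not something stated above): Spyder itself is running the <em>base</em> interpreter rather than the <code>tf</code> environment. You can check which interpreter Spyder is using, then either install Spyder inside the environment (<code>conda install -n tf spyder</code>) or point Spyder's Preferences &gt; Python interpreter at the environment's python:</p>

<pre><code>import sys
print(sys.executable)  # should end in ...\envs\tf\python.exe when the tf env is active
print(sys.version)
</code></pre>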
python|tensorflow|virtual-environment
0
2,177
10,377,096
Multiple conditions using 'or' in numpy array
<p>So I have these conditions:</p>

<blockquote>
  <p>A = 0 to 10 <strong>OR</strong> 40 to 60</p>
  <p>B = 20 to 50</p>
</blockquote>

<p>and I have this code:</p>

<pre><code>area1 = N.where((A&gt;0) &amp; (A&lt;10), 1, 0)
area2 = N.where((B&gt;20) &amp; (B&lt;50), 1, 0)
</code></pre>

<p>My question is: how do I do an '<strong>OR</strong>' condition in numpy?</p>
<p>If numpy overloads <code>&amp;</code> for boolean <code>and</code>, you can safely assume that <code>|</code> is boolean <code>or</code>.</p>

<pre><code>area1 = N.where(((A&gt;0) &amp; (A&lt;10)) | ((A&gt;40) &amp; (A&lt;60)), 1, 0)
</code></pre>
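<p>A quick runnable check of that expression (using the conventional <code>np</code> alias in place of the question's <code>N</code>); <code>np.logical_or</code> is the functional equivalent of <code>|</code>:</p>

<pre><code>import numpy as np

A = np.array([5, 15, 45, 70])
area1 = np.where(((A &gt; 0) &amp; (A &lt; 10)) | ((A &gt; 40) &amp; (A &lt; 60)), 1, 0)
print(area1)  # [1 0 1 0]
</code></pre>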
python|numpy
48
2,178
10,601,041
Pandas: where's the memory leak here?
<p>I face the problem of memory leaks using the <strong><code>pandas</code></strong> library in <strong>python</strong>. I create <code>pandas.DataFrame</code> objects in my class, and I have a method that changes the dataframe size according to my conditions. After changing the dataframe size and creating a new pandas object, I overwrite the original <code>pandas.DataFrame</code> in my class. But memory usage is very high even after significantly reducing the initial table. Some code for a short example (I didn't write a memory monitor; see the task manager):</p>

<pre><code>import time, string, pandas, numpy, gc
class temp_class ():
    def __init__(self, nrow = 1000000, ncol = 4, timetest = 5):
        self.nrow = nrow
        self.ncol = ncol
        self.timetest = timetest

    def createDataFrame(self):
        print('Check memory before dataframe creating')
        time.sleep(self.timetest)
        self.df = pandas.DataFrame(numpy.random.randn(self.nrow, self.ncol),
            index = numpy.random.randn(self.nrow), columns = list(string.letters[0:self.ncol]))
        print('Check memory after dataFrame creating')
        time.sleep(self.timetest)

    def changeSize(self, from_ = 0, to_ = 100):
        df_new = self.df[from_:to_].copy()
        print('Check memory after changing size')
        time.sleep(self.timetest)

        print('Check memory after deleting initial pandas object')
        del self.df
        time.sleep(self.timetest)

        print('Check memory after deleting copy of reduced pandas object')
        del df_new
        gc.collect()
        time.sleep(self.timetest)

if __name__== '__main__':
    a = temp_class()
    a.createDataFrame()
    a.changeSize()
</code></pre>

<ul>
<li><p>Before creating the dataframe I have approx. 15 MB of memory usage</p></li>

<li><p>After creating - 67 MB</p></li>

<li><p>After changing size - 67 MB</p></li>

<li><p>After deleting the original dataframe - 35 MB</p></li>

<li><p>After deleting the reduced table - 31 MB.</p></li>
</ul>

<p><strong>16 MB?</strong></p>

<p>I use python 2.7.2 (x32) on a Windows 7 (x64) machine; <code>pandas.__version__</code> is 0.7.3 and <code>numpy.__version__</code> is 1.6.1</p>
<p>A couple things to point out:</p> <ol> <li><p>In "Check memory after changing size", you haven't deleted the original DataFrame yet, so this will be using strictly more memory</p></li> <li><p>The Python interpreter is a bit greedy about holding onto OS memory.</p></li> </ol> <p>I looked into this and can assure you that pandas is not leaking memory. I'm using the memory_profiler (http://pypi.python.org/pypi/memory_profiler) package:</p> <pre><code>import time, string, pandas, numpy, gc from memory_profiler import LineProfiler, show_results import memory_profiler as mprof prof = LineProfiler() @prof def test(nrow=1000000, ncol = 4, timetest = 5): from_ = nrow // 10 to_ = 9 * nrow // 10 df = pandas.DataFrame(numpy.random.randn(nrow, ncol), index = numpy.random.randn(nrow), columns = list(string.letters[0:ncol])) df_new = df[from_:to_].copy() del df del df_new gc.collect() test() # for _ in xrange(10): # print mprof.memory_usage() show_results(prof) </code></pre> <p>And here's the output</p> <pre><code>10:15 ~/tmp $ python profmem.py Line # Mem usage Increment Line Contents ============================================== 7 @prof 8 28.77 MB 0.00 MB def test(nrow=1000000, ncol = 4, timetest = 5): 9 28.77 MB 0.00 MB from_ = nrow // 10 10 28.77 MB 0.00 MB to_ = 9 * nrow // 10 11 59.19 MB 30.42 MB df = pandas.DataFrame(numpy.random.randn(nrow, ncol), 12 66.77 MB 7.58 MB index = numpy.random.randn(nrow), 13 90.46 MB 23.70 MB columns = list(string.letters[0:ncol])) 14 114.96 MB 24.49 MB df_new = df[from_:to_].copy() 15 114.96 MB 0.00 MB del df 16 90.54 MB -24.42 MB del df_new 17 52.39 MB -38.15 MB gc.collect() </code></pre> <p>So indeed, there is more memory in use than when we started. But is it leaking?</p> <pre><code>for _ in xrange(20): test() print mprof.memory_usage() </code></pre> <p>And output:</p> <pre><code>10:19 ~/tmp $ python profmem.py [52.3984375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59375] [122.59765625] [122.59765625] [122.59765625] </code></pre> <p>So actually what's gone on is that the Python process is holding on to a pool of memory given what it's been using to avoid having to keep requesting more memory (and then freeing it) from the host OS. I don't know all the technical details behind this, but that is at least what is going on.</p>
python|pandas
26
2,179
70,685,858
Can I use pad_sequence with transformer in Pytorch?
<p>I'm trying to use a transformer to process some image data (not NLP data), e.g. 480 x 640 images with different sequence lengths; examples would be [6, 480, 640], [7, 480, 640], [8, 480, 640]. And I would like to put these three sequences into one batch.</p>

<p>However, most tutorials I saw use torchtext to deal with the non-fixed-length problem. But since I run the transformer with my own dataset, torchtext is not applicable (is it?). After searching, I found that pad_sequence can be used to deal with this problem.</p>

<p>However, I didn't find any tutorials about using pad_sequence with a transformer. Is it applicable? Has anyone tried it before?</p>
<p>Let's say we have three images with different dimensions. Applying the <code>pad_sequence</code> function on them will result in the following:</p>

<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.rnn import pad_sequence
image_1 = torch.ones(25, 30)
image_2 = torch.ones(32, 30)
image_3 = torch.ones(29, 30)
images = pad_sequence([image_1, image_2, image_3])
print(images.size()) # torch.Size([32, 3, 30])
</code></pre>

<p>This remains the same if you are working with 3D images</p>

<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.rnn import pad_sequence
image_1 = torch.ones(25, 30, 50)
image_2 = torch.ones(32, 30, 50)
image_3 = torch.ones(29, 30, 50)
images = pad_sequence([image_1, image_2, image_3])
print(images.size()) # torch.Size([32, 3, 30, 50])
</code></pre>

<p>One thing you should be aware of with this function is that it only works when the images share all dimensions except the first. In other words, if you have something like this:</p>

<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.rnn import pad_sequence
image_1 = torch.ones(25, 30, 50)
image_2 = torch.ones(32, 50, 30)
image_3 = torch.ones(29, 31, 50)
images = pad_sequence([image_1, image_2, image_3])
# RuntimeError: The size of tensor a (50) must match the size of tensor b (30) at non-singleton dimension 2
print(images.size())
</code></pre>

<p>It won't work!</p>

<p>But anyways, since you're working with images, I suggest you use the <code>Pad</code> transformation from torchvision. It works the same as the <code>pad_sequence</code> function but with more options. Just follow the <a href="http://pytorch.org/vision/main/generated/torchvision.transforms.Pad.html" rel="nofollow noreferrer">doc</a>.</p>
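<p>To connect this to the transformer itself (a hedged sketch of my own, not from the original answer): after padding you normally also build a <code>src_key_padding_mask</code> so the attention layers ignore the padded positions. The <code>d_model</code> below is an assumed per-frame embedding size; a real pipeline would first project each 480 x 640 frame down to such a vector:</p>

<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
from torch.nn.utils.rnn import pad_sequence

d_model = 64  # assumed embedding size per frame
seqs = [torch.randn(6, d_model), torch.randn(7, d_model), torch.randn(8, d_model)]
lengths = torch.tensor([s.size(0) for s in seqs])

batch = pad_sequence(seqs)  # (max_len, batch, d_model)
max_len = batch.size(0)

# True marks a padded position that attention should ignore
pad_mask = torch.arange(max_len)[None, :] &gt;= lengths[:, None]  # (batch, max_len)

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
encoder = nn.TransformerEncoder(layer, num_layers=2)
out = encoder(batch, src_key_padding_mask=pad_mask)  # (max_len, batch, d_model)
</code></pre>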
deep-learning|pytorch|transformer-model
0
2,180
70,683,969
Writing Pyspark Dataframe to TFrecords file
<p>I have a dataframe with schema, and want to convert this into tfRecords</p> <pre><code>root |-- col1: string (nullable = true) |-- col2: array (nullable = true) | |-- element: string (containsNull = true) |-- col3: array (nullable = true) | |-- element: string (containsNull = true) |-- col4: array (nullable = true) | |-- element: float (containsNull = true) |-- col5: array (nullable = true) | |-- element: float (containsNull = true) |-- col6: array (nullable = true) | |-- element: integer (containsNull = true) |-- col7: array (nullable = true) | |-- element: string (containsNull = true) |-- col8: array (nullable = true) | |-- element: string (containsNull = true) |-- col9: array (nullable = true) | |-- element: string (containsNull = true) </code></pre> <p>I'm using spark tensorflow connector</p> <pre><code>df.write.mode(&quot;overwrite&quot;).format(&quot;tfrecords&quot;).option(&quot;recordType&quot;, &quot;Example&quot;).save(&quot;targetpath.tf&quot;) </code></pre> <p>Error which I'm getting while saving the data into tfrecords</p> <pre><code>java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps </code></pre> <p>I have tried similar approach in databricks community edition as well , also got the similar erro</p> <p>Can anyone help here ?</p>
<p>The most probable cause (judging from <a href="https://mvnrepository.com/artifact/org.tensorflow/spark-tensorflow-connector" rel="nofollow noreferrer">Maven Central information</a>) is that you're using a connector compiled for Scala 2.11 on a Databricks runtime that uses Scala 2.12.</p>

<p>Either you need to use DBR 6.4 for that conversion, or <a href="https://github.com/tensorflow/ecosystem/tree/master/spark/spark-tensorflow-connector#usage-examples" rel="nofollow noreferrer">compile the connector for Scala 2.12</a> and use that.</p>
tensorflow|apache-spark|pyspark|databricks
2
2,181
70,462,851
Why is np.exp(x) not equal to np.exp(1)**x?
<p>Why is np.exp(x) not equal to np.exp(1)**x?</p>

<p>For example:</p>

<pre><code>np.exp(400)
&gt;&gt;&gt;5.221469689764144e+173
np.exp(1)**400
&gt;&gt;&gt;5.221469689764033e+173
np.exp(400)-np.exp(1)**400
&gt;&gt;&gt;1.1093513018771065e+160
</code></pre>
<p>It looks like a rounding issue. In the first case it's internally using a very precise value of <code>e</code>, while in the second you get a less precise value, which when multiplied 400 times the precision issues become more apparent.</p> <p>The actual result when using the Windows calculator is <code>5.2214696897641439505887630066496e+173</code>, so you can see your first outcome is fine, while the second is not.</p> <pre><code>5.2214696897641439505887630066496e+173 // calculator 5.221469689764144e+173 // exp(400) 5.221469689764033e+173 // exp(1)**400 </code></pre> <p>Starting from your result, it looks it's using a value with 15 digits of precision.</p> <pre><code>2.7182818284590452353602874713527 // e 2.7182818284590450909589085441968 // 400th root of the 2nd result </code></pre>
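<p>To make the scale concrete: the relative error is tiny even though the absolute difference looks enormous. A quick check (output approximate):</p>

<pre><code>import numpy as np

exact = np.exp(400)
powered = np.exp(1) ** 400  # np.exp(1) is already rounded to ~16 significant digits
print((powered - exact) / exact)  # about -2e-14
</code></pre>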
python-3.x|numpy|exponential
2
2,182
70,405,675
How to convert Excel to JSON and append it to existing JSON file?
<p>I have an Excel file like this:</p>

<pre><code>---------------------------------
| myValue | myValue2 | myValue3 |
---------+----------+-----------
| 1       | A        | AA       |
| 2       | B        | BB       |
| 4       | C        | CC       |
| 5       | D        | DD       |
| 6       | E        | EE       |
| 7       | F        | FF       |
| 8       | G        | GG       |
---------------------------------
</code></pre>

<p>I want to convert my Excel file to JSON like</p>

<pre><code>{
    "myValue": {
        "1":"1",
        "2":"2"
    },
    "myValue2": {
        "A":"A",
        "B":"B"
    },
    "myValue3": {
        "AA":"AA",
        "BB":"BB"
    }
}
</code></pre>

<p>I already have a JSON file like this, so I should append the values from Excel to that file: values under &quot;myValue&quot; in the Excel sheet should be added under &quot;myValue&quot; in the JSON file. I have already tried some solutions from this site, but they did not work for me. Also, the problem is that <strong>myValue, myValue2, myValue3 are not always in the same order as displayed here</strong>. The ideal solution would be to find myValue in the JSON file which already contains it and add the values to it directly from the Excel column with the same header.</p>
<p><strong>This works</strong></p>

<pre><code># Importing dependencies
import pandas
import json

# Reading xlsx into a pandas dataframe
df = pandas.read_excel('../Data/18-12-21.xlsx')

# Encoding the DataFrame as 'columns'-oriented JSON
jsonfile = df.to_json(orient='columns')

# Print out the result
print('Excel Sheet to JSON:\n', jsonfile)

# Parse the JSON string into a dict so it can be written out
json_dict = json.loads(jsonfile)

# Write the result to a JSON file
with open('data.json', 'w') as json_file:
    json.dump(json_dict, json_file)
</code></pre>

<p><strong>Output</strong></p>

<pre><code>{
    "myValue": {
        "0": 1,
        "1": 2,
        "2": 4,
        "3": 5,
        "4": 6,
        "5": 7,
        "6": 8
    },
    "myValue2": {
        "0": "A",
        "1": "B",
        "2": "C",
        "3": "D",
        "4": "E",
        "5": "F",
        "6": "G"
    },
    "myValue3": {
        "0": "AA",
        "1": "BB",
        "2": "CC",
        "3": "DD",
        "4": "EE",
        "5": "FF",
        "6": "GG"
    }
}
</code></pre>
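<p>Note the output above keys each column by row position, while the question asks for a value-keyed layout merged into an existing file. A sketch of that exact shape (path reused from the example above; adjust as needed):</p>

<pre><code>import json
import pandas as pd

df = pd.read_excel('../Data/18-12-21.xlsx')

# build {column: {value: value}} mappings, keys and values both as strings
result = {col: {str(v): str(v) for v in df[col]} for col in df.columns}

# merge into the existing JSON file under the matching top-level keys
with open('data.json') as f:
    existing = json.load(f)
for col, values in result.items():
    existing.setdefault(col, {}).update(values)
with open('data.json', 'w') as f:
    json.dump(existing, f, indent=2)
</code></pre>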
python|json|excel|pandas
2
2,183
42,663,400
How to group by the partial value of the column in pandas?
<p>I have some data in a <code>pandas</code> dataframe as follows, where I earlier converted the <code>currency</code> and the <code>value</code> to <code>USD</code> from <code>CNY</code> (Chinese yuan):</p>

<pre><code>     currency   port  supplier_id     value
0         USD  CNAQG           35  118.8344
1         USD  CNAQG           19  121.0082
2         USD  CNAQG           49   86.9520
3         USD  CNAQG           54  112.3130
4         USD  CNAQG          113  113.7622
5         USD  CNAQG            5  114.4868
6         USD  CNAQG           55  111.5884
7         USD  CNAQG           81  117.3852
8         USD  CNAQG            2  111.5884
6651      USD  USTPA           14  420.0000
6652      USD  USTPA           56  420.0000
6653      USD  USTPA          113  420.0000
6654      USD  USTPA            5  500.0000
6655      USD  USTPA           55  500.0000
6656      USD  USTPA          193  390.0000
6657      USD  USTPA           74  450.0000
6658      USD  USTPA           35  420.0000
6659      USD  USTPA           54  420.0000
6660      USD  USTPA          231  450.0000
</code></pre>

<p><code>df.info()</code> prints the following: </p>

<pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt;
Int64Index: 6652 entries, 0 to 6660
Data columns (total 4 columns):
currency       6652 non-null object
port           6652 non-null object
supplier_id    6652 non-null int64
value          6652 non-null float64
dtypes: float64(1), int64(1), object(2)
memory usage: 259.8+ KB
None
</code></pre>

<p>The first 2 letters of the port indicate the country, and I have a map for that: </p>

<pre><code>COUNTRIES = {
    "CN": "CHINA",
    "US": "USA"
}
</code></pre>

<p>I would like to group the data based on the country where the port is situated; the intention is to visualize the list of values per country in a meaningful way. I would appreciate any suggestion about what kind of graph would be appropriate for the job. </p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> and then plot by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.bar.html" rel="nofollow noreferrer"><code>plot.bar</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>plot</code></a>:</p>

<pre><code>df1 = pd.pivot(index=df['supplier_id'],
               columns=df['port'].str[:2].map(COUNTRIES),
               values=df['value']).fillna(0)
print (df1)
port            CHINA    USA
supplier_id                 
2            111.5884    0.0
5            114.4868  500.0
14             0.0000  420.0
19           121.0082    0.0
35           118.8344  420.0
49            86.9520    0.0
54           112.3130  420.0
55           111.5884  500.0
56             0.0000  420.0
74             0.0000  450.0
81           117.3852    0.0
113          113.7622  420.0
193            0.0000  390.0
231            0.0000  450.0
</code></pre>

<hr>

<pre><code>df1.plot.bar()
df1.plot()
</code></pre>

<hr>

<p>But if you get the error:</p>

<blockquote>
  <p>ValueError: Index contains duplicate entries, cannot reshape</p>
</blockquote>

<p>then you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a> with some aggregate function <code>mean</code>, <code>sum</code>... (the default function is <code>'mean'</code>):</p>

<pre><code>print (df)
     currency   port  supplier_id     value
0         USD  CNAQG           35  118.8344
1         USD  CNAQG           19  121.0082
2         USD  CNAQG           49   86.9520
3         USD  CNAQG           54  112.3130
4         USD  CNAQG          113  113.7622
5         USD  CNAQG            5  114.4868
6         USD  CNAQG           55  111.5884
7         USD  CNAQG           81  117.3852
8         USD  CNAQG            2  111.5884
6651      USD  USTPA           14  420.0000
6652      USD  USTPA           56  420.0000
6653      USD  USTPA          113  420.0000
6654      USD  USTPA            5  500.0000
6655      USD  USTPA           55  500.0000
6656      USD  USTPA          193  390.0000
6657      USD  USTPA           74  450.0000
6658      USD  USTPA           35  420.0000
6659      USD  USTPA           54  420.0000
6660      USD  USTPA          231  450.0000 &lt;-duplicates for USTPA, 231
6660      USD  USTPA          231  800.0000 &lt;-duplicates for USTPA, 231
</code></pre>

<pre><code>COUNTRIES = {
    "CN": "CHINA",
    "US": "USA"
}

df1 = pd.pivot_table(df,
                     index='supplier_id',
                     columns=df['port'].str[:2].map(COUNTRIES),
                     values='value',
                     aggfunc=np.mean,
                     fill_value=0)
print (df1)
port            CHINA  USA
supplier_id               
2            111.5884    0
5            114.4868  500
14             0.0000  420
19           121.0082    0
35           118.8344  420
49            86.9520    0
54           112.3130  420
55           111.5884  500
56             0.0000  420
74             0.0000  450
81           117.3852    0
113          113.7622  420
193            0.0000  390
231            0.0000  625 &lt;-mean (450 + 800) / 2

df1.plot.bar()
</code></pre>

<p>Alternative solution with <code>groupby</code> and <code>mean</code>:</p>

<pre><code>df1 = (df.groupby(['supplier_id', df['port'].str[:2].map(COUNTRIES)])['value']
         .mean()
         .unstack(fill_value=0))
print (df1)
port            CHINA    USA
supplier_id                 
2            111.5884    0.0
5            114.4868  500.0
14             0.0000  420.0
19           121.0082    0.0
35           118.8344  420.0
49            86.9520    0.0
54           112.3130  420.0
55           111.5884  500.0
56             0.0000  420.0
74             0.0000  450.0
81           117.3852    0.0
113          113.7622  420.0
193            0.0000  390.0
231            0.0000  625.0
</code></pre>
python|pandas
1
2,184
42,846,345
Sklearn: Categorical Imputer?
<p>Is there a way to impute categorical values using a sklearn.preprocessing object? I would like to ultimately create a preprocessing object which I can apply to new data and have it transform that data the same way as the old data. </p>

<p>I am looking for a way to do it so that I can use it <a href="https://github.com/paulgb/sklearn-pandas#multiple-transformers-for-the-same-column" rel="nofollow noreferrer">this</a> way.</p>
<p>Yes, it is possible. For example, you can use <code>sklearn.preprocessing.Imputer</code> with parameter <code>strategy = 'most_frequent'</code>. </p> <p>Use <code>fit_transform</code> method to apply it to old data (train set) and then <code>transform</code> on new data (test set).</p>
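<p>A version-dependent caveat (mine, not the original answer's): <code>sklearn.preprocessing.Imputer</code> was deprecated and later removed, and it only handles numeric input. From scikit-learn 0.20 onward the equivalent is <code>SimpleImputer</code>, which also handles string categories with <code>strategy='most_frequent'</code>. A minimal sketch:</p>

<pre><code>import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer  # scikit-learn 0.20+

X = pd.DataFrame({'city': ['NY', 'LA', np.nan, 'NY']})
imp = SimpleImputer(strategy='most_frequent')
print(imp.fit_transform(X))  # the missing entry becomes 'NY'
</code></pre>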
machine-learning|tensorflow|scikit-learn|sklearn-pandas|imputation
2
2,185
14,665,828
How can I select a certain value based on 2(or more) other values in a pandas dataframe
<p>I've been trying to find out how to select a certain value based on multiple other values in the same tuple of a dataframe. The data looks like this(copied from the current dataframe)</p> <pre><code> DealID PropId LoanId ServicerId ServicerPropId 0 BAC98765 15 000015 30220144 010-002-001 1 BAC98765 16 000016 30220092 010-003-001 2 BAC98765 45 000045 30220155 010-045-001 3 BAC98765 48 000048 30220157 010-048-001 </code></pre> <p>In SQL terms what I would like to accomplish is this: </p> <pre><code>Select ServicerPropId from dataframe where DealID = 'BAC98765' and ServicerId = '30220144' </code></pre> <p>I've tried a few different ways to slice the data, but can't seem to figure out how to get multiple selection criteria to work and return only 1 value into a variable.</p>
<pre><code>columns = ['DealID', 'PropId', 'LoanId', 'ServicerId', 'ServicerPropId'] d = [('A', [ 'BAC98765', '15', '000015', '30220144', '010-002-001']), ('B', [ 'BAC98765', '16', '000016', '30220092', '010-003-001']), ('C', [ 'BAC98765', '45', '000045', '30220155', '010-045-001']), ('D', [ 'BAC98765', '48', '000048', '30220157', '010-048-001']),] D = pandas.DataFrame.from_items(d, orient='index', columns=columns) criterion1 = D['DealID'].map(lambda x: x == 'BAC98765' ) criterion2 = D['ServicerId'].map(lambda x: x == '30220144') res = D[criterion1 &amp; criterion2]['ServicerPropId'] </code></pre> <p>Using the <code>map</code> lets you put in any condition you want, in this case you can do this more simply (as pointed out in the comments by DSM)</p> <pre><code>res = D[(D['DealID'] == "BAC98765") &amp; (D["ServicerId"] == "30220144")]['ServicerPropId'] </code></pre> <p>Which gives</p> <pre><code>In [35]: print res A 010-002-001 Name: ServicerPropId In [36]: type(res) Out[36]: pandas.core.series.Series </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/dev/indexing.html#boolean-indexing" rel="nofollow">(doc)</a></p>
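<p>An equivalent, arguably more readable spelling (assuming a pandas version that has <code>DataFrame.query</code>, 0.13+), with <code>.iloc[0]</code> to pull the single scalar out of the resulting Series:</p>

<pre><code>res = D.query("DealID == 'BAC98765' and ServicerId == '30220144'")['ServicerPropId']
value = res.iloc[0]  # '010-002-001'
</code></pre>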
python|pandas
2
2,186
14,847,551
Pandas: DataFrame.sum() or DataFrame().as_matrix.sum()
<p>I am writing a function that computes the conditional probability for all columns in a pd.DataFrame that has ~800 columns. I wrote a few versions of the function and found a very big difference in compute time over two primary options:</p>

<pre><code>col_sums = data.sum()   #Simple Column Sum over 800 x 800 DataFrame
</code></pre>

<p><strong>Option #1:</strong> {'col_sums' and 'data' are a Series and DataFrame respectively}</p>

<p>[This is contained within a loop over index1 and index2 to get all combinations]</p>

<pre><code>joint_occurance = data[index1] * data[index2]
sum_joint_occurance = joint_occurance.sum()
max_single_occurance = max(col_sums[index1], col_sums[index2])
cond_prob = sum_joint_occurance / max_single_occurance #Symmetric Conditional Prob
results[index1][index2] = cond_prob 
</code></pre>

<p>Vs. </p>

<p><strong>Option #2:</strong> [While looping over index1 and index2 to get all combinations] The only difference is that, instead of using the DataFrame, I exported the data matrix to a np.array prior to looping</p>

<pre><code>new_data = data.T.as_matrix()  [Type: np.array]
</code></pre>

<p>Option #1 runtime is ~1700 sec; Option #2 runtime is ~122 sec</p>

<p><strong>Questions:</strong></p>

<ol>
<li>Is converting the contents of DataFrames to np.arrays best for computational tasks? </li>
<li>Is the .sum() routine in pandas significantly different from the .sum() routine in NumPy, or is the difference in speed due to the label access to data? </li>
<li>Why are these runtimes so different?</li>
</ol>
<p>While reading the documentation I came across:</p> <blockquote> <p><strong>Section 7.1.1 Fast scalar value getting and setting</strong> Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the get_value method, which is implemented on all of the data structures:</p> </blockquote> <pre><code>In [656]: s.get_value(dates[5]) Out[656]: -0.67368970808837059 In [657]: df.get_value(dates[5], ’A’) Out[657]: -0.67368970808837059 </code></pre> <p><strong>Best Guess:</strong> Because I am accessing individual data elements many times from the dataframe (order of ~640,000 per matrix). I think the speed reduction came from how I referenced the data (i.e. "indexing with [] handles a lot of cases") and therefore I should be using the get_value() method for accessing scalars similar to a matrix lookup. </p>
python|pandas
1
2,187
26,472,653
Computing the mean square displacement of a 2d random walk in Python
<p>I'm simulating a 2-dimensional random walk, with direction 0 &lt; θ &lt; 2π and T=1000 steps. I already have code which simulates a single walk, repeats it 12 times, and saves each run into sequentially named text files:</p>

<pre><code>import math
import numpy as np
import numpy.random as rd

a=np.zeros((1000,2), dtype=np.float)
print a # Prints array with zeros as entries

# Single random walk
def randwalk(x,y): # Defines the randwalk function
    theta=2*math.pi*rd.rand()
    x+=math.cos(theta);
    y+=math.sin(theta);
    return (x,y) # Function returns new (x,y) coordinates

x, y = 0., 0. # Starting point is the origin
for i in range(1000): # Walk contains 1000 steps
    x, y = randwalk(x,y)
    a[i,:] = x, y # Replaces entries of a with (x,y) coordinates

# Repeating random walk 12 times
fn_base = "random_walk_%i.txt" # Saves each run to sequentially named .txt
for j in range(12):
    rd.seed() # Uses different random seed for every run
    x, y = 0., 0.
    for i in range(1000):
        x, y = randwalk(x,y)
        a[i,:] = x, y
    fn = fn_base % j # Allocates fn to the numbered file
    np.savetxt(fn, a) # Saves run data to appropriate text file
</code></pre>

<p>Now I want to calculate the mean square displacement over all 12 walks. To do this, my initial thought was to import the data from each text file back into a numpy array, e.g.:</p>

<pre><code>infile="random_walk_0.txt"
rw0dat=np.genfromtxt(infile)
print rw0dat
</code></pre>

<p>And then somehow manipulate the arrays to find the mean square displacement.</p>

<p>Is there a more efficient way to go about finding the MSD with what I have?</p>
<p>Here is a quick snippet to compute the mean square displacement (MSD), where <code>path</code> is an (N, 2) array of points equally spaced in time, as seems to be the case for your randwalk. You can just place it in the 12-walk for loop and compute it for each walk's array <code>a</code>.</p>

<pre><code># input: path, an (N, 2) ndarray like [[x1,y1], ..., [xn,yn]]
def compute_MSD(path):
    totalsize=len(path)
    msd=[]
    for i in range(totalsize-1):
        j=i+1
        msd.append(np.sum((path[0:-j]-path[j::])**2)/float(totalsize-j))

    msd=np.array(msd)
    return msd
</code></pre>
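<p>To then average over the 12 saved walks (a hypothetical continuation, reusing the file-name pattern from the question):</p>

<pre><code>import numpy as np

msds = []
for j in range(12):
    path = np.genfromtxt("random_walk_%i.txt" % j)  # shape (1000, 2)
    msds.append(compute_MSD(path))

mean_msd = np.mean(msds, axis=0)  # entry i is the MSD at lag i+1, averaged over walks
</code></pre>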
python|arrays|numpy|random-walk
4
2,188
39,398,251
Create unique MultiIndex from Non-unique Index Python Pandas
<p>I have a pandas DataFrame with a non-unique index:</p> <pre><code>index = [1,1,1,1,2,2,2,3] df = pd.DataFrame(data = {'col1': [1,3,7,6,2,4,3,4]}, index=index) df Out[12]: col1 1 1 1 3 1 7 1 6 2 2 2 4 2 3 3 4 </code></pre> <p>I'd like to turn this into unique MultiIndex and preserve order, like this:</p> <pre><code> col1 Ind2 1 0 1 1 3 2 7 3 6 2 0 2 1 4 2 3 3 0 4 </code></pre> <p>I would imagine pandas would have a function for something like this but haven't found anything</p>
<p>You can do a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="noreferrer"><code>groupby.cumcount</code></a> on the index, and then append it as a new level to the index using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="noreferrer"><code>set_index</code></a>:</p> <pre><code>df = df.set_index(df.groupby(level=0).cumcount(), append=True) </code></pre> <p>The resulting output:</p> <pre><code> col1 1 0 1 1 3 2 7 3 6 2 0 2 1 4 2 3 3 0 4 </code></pre>
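<p>As a quick sanity check (my addition, not part of the original answer), the resulting index is unique and supports tuple-based lookups:</p>

<pre><code>&gt;&gt;&gt; df.index.is_unique
True
&gt;&gt;&gt; df.loc[(1, 2)]
col1    7
Name: (1, 2), dtype: int64
</code></pre>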
python|pandas
6
2,189
39,398,933
Error setting dtypes of an array
<p>I was attempting to make a 1x5 numpy array with the following code</p> <pre><code>testArray = np.array([19010913, "Hershey", "Bar", "Birthday", 12.34]) </code></pre> <p>but encountered the unwanted result that</p> <pre><code>testArray.dtype dtype("&lt;U8") </code></pre> <p>I want each column to be a specific data type, so I attempted to input this</p> <pre><code>testArray = np.array([19010913, "Hershey", "Bar", "Birthday", 12.34], dtype=[('f0','&lt;i8'),('f1','&lt;U64'),('f2','&lt;U64'),('f3','&lt;U64'),('f4','&lt;f10')] ) </code></pre> <p>but got the error</p> <pre><code>/usr/local/lib/python3.4/dist-packages/ipykernel/__main__.py:1: DeprecationWarning: Specified size is invalid for this data type. Size will be ignored in NumPy 1.7 but may throw an exception in future versions. if __name__ == '__main__': --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-11-d2c44d88c8a5&gt; in &lt;module&gt;() ----&gt; 1 testArray = np.array([19840913, "Hershey", "Bar", "Birthday", 64.25], dtype=[('f0','&lt;i8'),('f1','&lt;U64'),('f2','&lt;U64'), ('f3','&lt;U64'),('f4','&lt;f10')] ) TypeError: 'int' does not support the buffer interface </code></pre>
<p>First off, <code>f10</code> is not a valid float size: NumPy floats come in widths of 2, 4 and 8 bytes (<code>f2</code>, <code>f4</code>, <code>f8</code>), so use <code>&lt;f8</code> for a double-precision field.</p>

<p>Note that structured arrays need to be defined as a "list of tuples". Try the following:</p>

<pre><code>testArray = np.array([(19010913, "Hershey", "Bar", "Birthday", 12.34)], dtype=[('f0','&lt;i8'),('f1','&lt;U64'),('f2','&lt;U64'),('f3','&lt;U64'),('f4','&lt;f8')])
</code></pre>

<p>See also <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow">this</a> and <a href="http://docs.scipy.org/doc/numpy-1.10.1/user/basics.rec.html" rel="nofollow">this</a> for different ways of defining <code>np.dtype</code>s and structured arrays.</p>

<p>Edit:</p>

<p>For multiple rows in the same structure, define each row of your array as a separate tuple in the list.</p>

<pre><code>dt = np.dtype([('f0','&lt;i8'),('f1','&lt;U64'),('f2','&lt;U64'),('f3','&lt;U64'),('f4','&lt;f8')])

testArray = np.array([(19010913, "Hershey", "Bar", "Birthday", 12.34),
                      (123, "a", "b", "c", 56.78)], dtype=dt)
</code></pre>
python|arrays|numpy
2
2,190
39,254,947
Python Special Colon Inquiry
<pre><code>sortedWinnerIndices = winnerIndices[-numActive:][::-1] </code></pre> <p>Can someone tell me what is going on here? </p> <p><code>WinnerIndices</code> is 2048 ints long, Numpy array. I read somewhere that <code>[::-1]</code> reverses the result but I still can't figure out how this function selects a subset of winnerIndices?</p>
<pre><code>winnerIndices[-numActive:]
</code></pre>

<p>The above takes a slice containing the last <code>numActive</code> elements of the original array.</p>

<pre><code>x[::-1]
</code></pre>

<p>This reverses <code>x</code>, so the whole expression returns the last <code>numActive</code> winner indices in reverse order.</p>
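<p>Concretely, with a small stand-in array:</p>

<pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; winnerIndices = np.arange(10)
&gt;&gt;&gt; numActive = 4
&gt;&gt;&gt; winnerIndices[-numActive:]        # last 4 elements
array([6, 7, 8, 9])
&gt;&gt;&gt; winnerIndices[-numActive:][::-1]  # same 4 elements, reversed
array([9, 8, 7, 6])
</code></pre>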
python|numpy|operator-keyword|colon
1
2,191
39,358,775
Receving an error when trying to use the pandas read_html function
<p>I'm really new to python and pandas, so I could be making a simple mistake.</p>

<p>I'm trying to run the code below:</p>

<pre><code>import quandl
import pandas as pd

df3 = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')

print(df3)
</code></pre>

<p>I have installed pandas as well as quandl through pip. When I run the code I get the following error:</p>

<pre><code>Traceback (most recent call last):
  File "C:\Python27\FunwithQuandl.py", line 14, in &lt;module&gt;
    df3 = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
  File "C:\Python27\lib\site-packages\pandas\io\html.py", line 874, in read_html
    parse_dates, tupleize_cols, thousands, attrs, encoding)
  File "C:\Python27\lib\site-packages\pandas\io\html.py", line 726, in _parse
    parser = _parser_dispatch(flav)
  File "C:\Python27\lib\site-packages\pandas\io\html.py", line 685, in _parser_dispatch
    raise ImportError("lxml not found, please install it")
ImportError: lxml not found, please install it
</code></pre>

<p>I then tried installing lxml via the command prompt and pip, and I got a few errors:</p>

<pre><code>Cannot open include file: 'libxml/xpath.h': No such file or directory
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
error: command 'C:\\Users\\...Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
</code></pre>

<p>I even tried suggestions on this from <a href="https://stackoverflow.com/questions/33785755/getting-could-not-find-function-xmlcheckversion-in-library-libxml2-is-libxml2">this thread</a>,</p>

<p>such as </p>

<blockquote>
  <p>"install lxml from lfd.uci.edu/~gohlke/pythonlibs/#lxml for your python version. It's a precompiled WHL with required modules/dependencies.</p>
  
  <p>The site lists several packages, when e.g. using Win32 Python 2.7, use lxml-3.6.1-cp27-cp27m-win32.whl."</p>
</blockquote>

<p>I have downloaded the whl file from the suggested site, but I can't seem to install it. I have tried using pip and typing out the name of the file, but the file isn't being recognized by pip.</p>

<p>I am using Python 2.7 and Windows 7 Professional.</p>

<p>Thanks for your help.</p>
<p>I fixed the problem. I moved the file that I wanted to install from <code>Downloads</code> to <code>LocalDisk/Users/My_name</code>. Once I was inside the right directory, <code>pip</code> was able to locate and install it. Thanks for the responses.</p>
python|windows|python-2.7|pandas|pip
0
2,192
19,344,663
ndarray that updates its data on demand
<p>I would like to have a system where I have a "Data class" that holds some arrays, and a "Selection class" that has attributes with the same array names whose data are just views on subsets of the arrays from the Data class; which subset is selected should be determined by the instance of the Selection class. If one changes something in an instance of the Selection class, it should be mapped back to the corresponding Data class. And to make things really difficult, not only should the arrays from the Selection class be real ndarrays (so all the methods work), they should also be only views on the original data, or only be created on demand.</p>

<p>The scaffold I have created so far is</p>

<pre><code>import numpy as np

class DataObj():
    def __init__( self, Data_dict ):
        self.arrays = [ n for n,d in Data_dict.iteritems() ]
        for n,d in Data_dict.iteritems():
            setattr( self, n, d )

class Darray(np.ndarray):
    def __new__(cls, input_array, SelObj, *args, **kwargs):
        obj = np.asarray(input_array[ SelObj.selA, SelObj.selB ]).view(cls)
        obj.SelObj = SelObj
        return obj

    def __array_finalize__(self, obj):
        if obj is None: return

    def __getitem__(self, index):
        return super(Darray, self).__getitem__(index)

class SelObj():
    def __init__(self, DataObj, selA, selB):
        self.selA = selA
        self.selB = selB
        self.DataObj = DataObj
        for n in DataObj.arrays:
            Darr = Darray( getattr( self.DataObj, n), self )
            setattr( self, n, Darr )

### creating some objects
DObj = DataObj({ "X":np.ones((10,20)), "Y":np.zeros((10,20)), "Z":np.arange(10*20).reshape(10,20) })
SObj1 = SelObj( DObj, np.array([1,3,4]), slice(None,None,2) )
SObj2 = SelObj( DObj, np.array([4,5,7]), slice(None,2,None) )
SObj3 = SelObj( DObj, np.array([1,3,4]), slice(None) )
</code></pre>

<p>This works, but now if I do</p>

<pre><code>SObj1.X = 10
</code></pre>

<p>it loses the connection and just holds a 10 instead of the original array. Even when I do something that actually makes sense, like</p>

<pre><code>SObj1.X[0,0] = 10.
</code></pre>

<p>this will not appear in DObj (because with <code>input_array[ SelObj.selA, SelObj.selB ]</code> I created a copy of the array). And now every SObj will have its own data, which will eventually clog up memory.</p>

<p>I know what I want will not be easy, but I would still like to do it. I was also looking into properties and making every Darray a property of SObj, doing the slicing on demand whenever the property is called. However, then things like</p>

<pre><code>SObj1.X[0,0] = 10.
</code></pre>

<p>will not work anymore, since the property already slices and an additional slice is not mapped back anymore.</p>

<p>I would be very grateful for any hints that point towards a solution for this structure.</p>

<p>David</p>
<p>If you want an expression like</p>

<pre><code>SObj1.X = 10
</code></pre>

<p>to change values in the distant source, you should override the <code>__setattr__</code> method in your <code>SelObj</code> class. Two details matter: the assignments made during <code>__init__</code> must bypass the override (otherwise <code>self.DataObj</code> would be looked up before it exists), and since <code>SelObj</code> is an old-style class, write to <code>self.__dict__</code> directly instead of calling <code>super()</code>. Like this:</p>

<pre><code>class SelObj():
    def __init__(self, DataObj, selA, selB):
        # go through __dict__ so __setattr__ is not triggered during setup
        self.__dict__['DataObj'] = DataObj
        self.__dict__['selA'] = selA
        self.__dict__['selB'] = selB
        for n in DataObj.arrays:
            self.__dict__[n] = Darray(getattr(DataObj, n), self)

    def __setattr__(self, name, value):
        if name in self.DataObj.arrays:
            setattr(self.DataObj, name, value)  # updates the source of the data
        self.__dict__[name] = value             # updates the representation
</code></pre>
python|arrays|numpy
0
2,193
29,070,850
How to convert json response into Python list
<p>I get the JSON response by <code>requests.get</code></p> <pre><code>req = requests.get(SAMPLE_SCHEDULE_API) </code></pre> <p>and convert it into dictionary</p> <p><code>data = json.loads(req.text)["data"]</code></p> <p>When I tried to convert the string into Python dict,</p> <p>I got <code>ValueError: malformed node or string:</code></p> <p><code>ast.literal_eval(data)</code></p> <p>I have no idea how to do this task.</p> <h1>code snippets</h1> <pre><code> def schedules(cls, start_date=None, end_date=None): import ast req = requests.get(SAMPLE_SCHEDULE_API) data = json.loads(req.text)["data"] ast.literal_eval(data) return pd.DataFrame(json.loads(req.text)["data"]) </code></pre> <h1>JSON response</h1> <pre><code>{ status: "ok", version: "v1", data: "[ {"_id":"2015-01-28","end_date":"2015-01-28","estimated_release":1422453600000,"is_projection":false,"is_statement":true,"material_link":null,"start_date":"2015-01-27"}, {"_id":"2015-03-18","end_date":"2015-03-18","estimated_release":1426687200000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-03-17"}, {"_id":"2015-04-29","end_date":"2015-04-29","estimated_release":1430316000000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-04-28"}, {"_id":"2015-06-17","end_date":"2015-06-17","estimated_release":1434549600000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-06-16"}, {"_id":"2015-07-29","end_date":"2015-07-29","estimated_release":1438178400000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-07-28"}]" } </code></pre> <h1>Detail error message</h1> <pre><code>Traceback (most recent call last): File "fomc.py", line 25, in &lt;module&gt; schedules = FOMC.schedules() File "fomc.py", line 21, in schedules ast.literal_eval(data) File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 86, in literal_eval return _convert(node_or_string) File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 58, in _convert return list(map(_convert, node.elts)) File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 63, in _convert in zip(node.keys, node.values)) File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 62, in &lt;genexpr&gt; return dict((_convert(k), _convert(v)) for k, v File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 85, in _convert raise ValueError('malformed node or string: ' + repr(node)) ValueError: malformed node or string: &lt;_ast.Name object at 0x10a19c990&gt; </code></pre>
<p>You have encoded the <code>data</code> twice (which would strictly not be necessary). You just need to decode the <code>data</code> again with <code>json.loads</code>:</p> <pre><code>def schedules(cls, start_date=None, end_date=None): req = requests.get(SAMPLE_SCHEDULE_API) data_json = json.loads(req.text)["data"] data = json.loads(data_json) return pd.DataFrame(data) </code></pre> <hr> <p>Do note that <code>ast.literal_eval</code> is for Python code, whereas <code>json.loads</code> is for JSON that closely follows JavaScript code; the differences are for example <code>true</code> , <code>false</code> and <code>null</code> vs <code>True</code>, <code>False</code> and <code>None</code>. The former are the javascript syntax as used in JSON (and thus you would need <code>json.loads</code>; the latter is Python code, for which you would use <code>ast.literal_eval</code>.</p>
python|python-3.x|pandas
3
2,194
33,792,332
Pandas: Getting a rolling sum while grouping by a column
<p>I have a pandas dataframe that looks like</p> <pre><code>Name Date Value Sarah 11-01-2015 3 Sarah 11-02-2015 2 Sarah 11-03-2015 27 Bill 11-01-2015 42 Bill 11-02-2015 5 Bill 11-03-2015 15 .... (a couple hundred rows) </code></pre> <p>How do I get a 30 day (or x day) rolling sum of these values broken out by whoever is in the 'Name' column? The ideal output would have the same columns as the current dataframe, but instead of having the values for each row be what that person had as a value for that day, it would be the cumulative sum of what their values over the past 30 days.</p> <p>I know I can do</p> <pre><code>result = pd.rolling_sum(df, 30) </code></pre> <p>to get the rolling sum overall. But how do I return a dataframe with that rolling sum grouped by the 'Name' column?</p>
<p>Figured it out using the grigri <code>group_resample</code> function (with <code>group_by = 'Name'</code> for the example above; note the question asks for a rolling <em>sum</em>, so <code>pd.rolling_sum</code> is used in the last step):</p>

<pre><code>group_by = 'Name'
df = group_resample(df, date_column='Date', groupby=group_by, value_column='Value', how='sum', freq='d')
df = df.unstack(group_by).fillna(0)
result = pd.rolling_sum(df, 30)
</code></pre>
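<p>For reference, newer pandas (roughly 0.19 onward) can do this without the external helper, using a time-based window inside a groupby. A hedged sketch, assuming <code>Date</code> can be parsed to datetimes:</p>

<pre><code>import pandas as pd

df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(['Name', 'Date'])

# 30-day rolling sum of Value, computed separately for each Name
rolling_sums = (df.groupby('Name')
                  .rolling('30D', on='Date')['Value']
                  .sum())
</code></pre>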
python|pandas|dataframe|rolling-sum
1
2,195
23,483,655
datetime range indexing: datetimes that may not be in the index?
<p>I have a dataframe indexed by <code>datetime</code> objects: </p> <pre><code>In &lt;10&gt;: all_data.head().index Out&lt;10&gt;: Index([2014-04-23, 2014-04-13, 2014-04-15, 2014-04-30, 2014-04-06], dtype='object') </code></pre> <p>and two timestamps:</p> <pre><code>In &lt;11&gt;: d1 Out&lt;11&gt;: datetime.datetime(2014, 3, 24, 0, 0) In &lt;12&gt;: d2 Out&lt;12&gt;: datetime.datetime(2014, 4, 6, 0, 0) </code></pre> <p>I would like to index a column base don the <code>d1:d2</code> range. Note that <code>d1</code> or <code>d2</code> may <strong>not</strong> be in the index. How can I do this in Pandas?</p> <p>I tried:</p> <pre><code>all_data.loc[d1:d2,:] </code></pre> <p>but I get: <code>start bound[2014-03-24 00:00:00] is not the [index]</code></p>
<p>Well, if you make the index a <code>DateTimeIndex</code>, partial string indexing should work:</p> <pre><code>print df print df.index x1 x2 date 2014-04-23 1 2 2014-04-13 2 4 2014-04-15 3 6 2014-04-30 4 8 2014-04-06 5 10 [5 rows x 2 columns] &lt;class 'pandas.tseries.index.DatetimeIndex'&gt; [2014-04-23, ..., 2014-04-06] </code></pre> <p>Then you can use partial string slicing:</p> <pre><code>print df['2014-03-24':'2014-04-06'] x1 x2 2014-04-06 5 10 </code></pre> <p>or</p> <pre><code>print df.ix['2014-03-24':'2014-04-13',:] x1 x2 date 2014-04-13 2 4 2014-04-06 5 10 </code></pre>
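<p>One caveat to keep in mind (my note, behaviour varies by pandas version): partial string slicing generally expects a monotonic <code>DatetimeIndex</code>, and the unsorted index above can raise a <code>KeyError</code> in some releases. Sorting first is the safe route:</p>

<pre><code>df = df.sort_index()
print(df['2014-03-24':'2014-04-13'])
</code></pre>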
python|datetime|pandas
3
2,196
23,688,307
SettingWithCopyWarning, even when using loc (?)
<p>I get <code>SettingWithCopyWarning</code> errors in cases where I would not expect them:</p> <pre><code>N.In &lt;38&gt;: # Column B does not exist yet N.In &lt;39&gt;: df['B'] = df['A']/25 N.In &lt;40&gt;: df['B'] = df['A']/50 /Users/josh/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:389: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead self.obj[item] = s </code></pre> <p>and</p> <pre><code>N.In &lt;41&gt;: df.loc[:,'B'] = df['A']/50 /Users/josh/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:389: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead self.obj[item] = s </code></pre> <p>Why does it happen in case 1 and 2? </p>
<p>In case 1, <code>df['A']</code> creates a copy of <code>df</code>. As explained by the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy">Pandas documentation</a>, this can lead to unexpected results when chaining, thus a warning is raised. Case 2 looks correct, but false positives are possible:</p>

<blockquote>
  <p>Warning: The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid assignment. There may be false positives; situations where a chained assignment is inadvertently reported.</p>
</blockquote>

<p>To turn off <code>SettingWithCopyWarning</code> for a single dataframe, use</p>

<pre><code>df.is_copy = False
</code></pre>

<p>To turn off chained assignment warnings altogether, use</p>

<pre><code>pd.options.mode.chained_assignment = None
</code></pre>
python|pandas
19
2,197
22,543,745
Can't import numpy
<p>When I try to import numpy on Python, it says:</p> <blockquote> <p>ImportError: No module named numpy</p> </blockquote> <p>If I try to install numpy, it says it has already been installed. It seems like it's been installed, but in a wrong place? I don't know where it should be though. </p> <p>Python version is 2.4.6, but if I try to install a newer version, like 2.7, it says it has already been installed.</p>
<p>You can ask Python if numpy is installed, by searching through <code>sys.path</code>: </p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; import sys &gt;&gt;&gt; for p in sys.path: ... if os.path.exists(os.path.join(p, 'numpy')): ... print p ... break ... else: ... print "Numpy not found" /usr/lib/python2.7/dist-packages </code></pre> <p>In this case, I have numpy installed in <code>/usr/lib/python2.7/dist-packages</code>. "Numpy not found" will be printed if it's not installed. </p> <p>If you're on Windows (sounds like maybe?) then you need to make sure that the path is set up correctly. It sounds like you may have more than one python installed, and numpy is being installed in one of them, but your default interpreter is the other one. </p>
python|numpy
3
2,198
15,283,872
How do I boolean mask an array using chained comparisons?
<p>How can I filter a numpy array using a pair of inequalities, such as:</p> <pre><code>&gt;&gt;&gt; a = np.arange(10) &gt;&gt;&gt; a[a &lt;= 6] array([0, 1, 2, 3, 4, 5, 6]) &gt;&gt;&gt; a[3 &lt; a] array([4, 5, 6, 7, 8, 9]) &gt;&gt;&gt; &gt;&gt;&gt; a[3 &lt; a &lt;= 6] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>I get the same response if I try <code>a.all(3 &lt; a &lt;= 6)</code></p> <p><code>np.array([x for x in a if 3 &lt; x &lt;= 6])</code> works, but it seems nasty. What's the right way to do this?</p>
<p>You need to do:</p>

<pre><code>a[(3 &lt; a) &amp; (a &lt;= 6)]
</code></pre>

<p>The parentheses are required because <code>&amp;</code> binds more tightly than the comparison operators.</p>

<p>It's a "wart" in python. In python <code>(3 &lt; a &lt;= 6)</code> is translated to <code>((3 &lt; a) and (a &lt;= 6))</code>. However numpy arrays don't work with the <code>and</code> operation because python doesn't allow overloading of the <code>and</code> and <code>or</code> operators. Because of that numpy uses <code>&amp;</code> and <code>|</code>. There was some discussion about fixing this about a year ago, but I haven't seen much about it since.</p>

<p><a href="http://mail.python.org/pipermail/python-dev/2012-March/117510.html" rel="noreferrer">http://mail.python.org/pipermail/python-dev/2012-March/117510.html</a></p>
python|numpy
6
2,199
15,466,274
Maximum recursion depth exceeded when using a numpy.bytes_ object in string formatting
<p>The code should speak for itself:</p> <pre><code>$ python Python 3.3.0 (default, Dec 22 2012, 21:02:07) [GCC 4.7.2] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; '{}'.format(np.bytes_(b'Hello')) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; RuntimeError: maximum recursion depth exceeded while calling a Python object &gt;&gt;&gt; np.version.version '1.7.0' </code></pre> <p>Both <code>str</code> and <code>repr</code> return <code>"b'Hello'"</code> on <code>np.bytes_(b'Hello')</code>, and I can <code>print(np.bytes_(b'Hello'))</code> just fine, but in a format string it falls into a recursion loop.</p> <p>Am I being stupid or is it indeed what it appears to be, i.e. a problem in <code>numpy</code>? Even if it is, I don't quite understand what is happening. Can someone please explain?</p> <p>I haven't reproduced it with Python 2.</p>
<p>The behaviour of <code>{}</code> is to call <code>np.bytes_(b'Hello').__format__()</code>. It seems there is a bug where <code>__format__</code> is calling itself. See this <a href="https://github.com/numpy/numpy/issues/385" rel="nofollow">related ticket</a></p> <p>Here is a workaround.</p> <pre><code>Python 3.2.3 (default, Oct 19 2012, 19:53:57) [GCC 4.7.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; '{}'.format(np.bytes_(b'Hello')) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; RuntimeError: maximum recursion depth exceeded while calling a Python object &gt;&gt;&gt; '{!s}'.format(np.bytes_(b'Hello')) "b'Hello'" &gt;&gt;&gt; '{!r}'.format(np.bytes_(b'Hello')) "b'Hello'" </code></pre>
python|numpy|python-3.x
2