Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, length 15 to 150), question (string, length 37 to 64.2k), answer (string, length 37 to 44.1k), tags (string, length 5 to 106), score (int64, -10 to 5.87k)
4,000
53,553,558
Subset pandas dataframe on multiple columns based on values from another dataframe
<p>I have two dataframes:</p> <pre><code>import pandas as pd
points = pd.DataFrame({'player':['a','b','c','d','e'],'points':[2,5,3,6,1]})
matches = pd.DataFrame({'p1':['a','c','e'], 'p2':['c', 'b', 'd']})
</code></pre> <p>I want to retain only those rows from dataframe matches where both p1 and p2 have points greater than 2. Right now I first merge points and matches on p1 and player, then merge the resulting dataframe and points on p2 and player. After this I apply a filter on both points columns of the resulting dataframe.</p> <pre><code>new_df = pd.merge(matches, points, how = 'left', left_on = 'p1', right_on = 'player')
new_df = pd.merge(new_df, points, how = 'left', left_on = 'p2', right_on = 'player')
new_df = new_df[(new_df.points_x &gt;2) &amp; (new_df.points_y &gt;2)]
</code></pre> <p>This gives me what I require, but I was wondering what a better and more efficient way to do this would be.</p>
<p>I would avoid the joins in this case and write it like this:</p> <pre><code>scorers = points.query('points &gt; 2').player
matches.query('p1 in @scorers and p2 in @scorers')
</code></pre> <p>I think it's more readable.</p> <p>It feels a little silly to benchmark on such a small example, but on my machine this method runs on average in 2.99ms while your original method takes 4.45ms. It would be interesting to find out whether this scales better or not.</p> <p>There may be other micro-optimizations you could make to this code, such as converting <code>scorers</code> to a set (sketched below).</p> <p>If you don't like the <code>query</code> syntax:</p> <pre><code>scorers = points[points.points &gt; 2].player
matches[matches.p1.isin(scorers) &amp; matches.p2.isin(scorers)]
</code></pre> <p>This has better performance as well, taking about 1.36ms.</p>
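<p>For anyone curious about that set idea, a minimal sketch on the question's data might look like this (whether it actually helps will depend on the data size):</p> <pre><code>import pandas as pd

points = pd.DataFrame({'player': ['a', 'b', 'c', 'd', 'e'],
                       'points': [2, 5, 3, 6, 1]})
matches = pd.DataFrame({'p1': ['a', 'c', 'e'], 'p2': ['c', 'b', 'd']})

# isin() accepts any list-like, including sets, and set membership
# tests are O(1), which may matter with a large pool of players.
scorers = set(points.loc[points.points &gt; 2, 'player'])
print(matches[matches.p1.isin(scorers) &amp; matches.p2.isin(scorers)])
</code></pre>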
python|python-3.x|pandas|performance|dataframe
2
4,001
17,449,701
How to convert a list to a 2D matrix in Python
<pre><code>arr2=[0]*(x^2) # x is the length of the list data
for i in range(x):
    arr2[i]=data[i].split(',')#data is a list like:['1,2','3,4']
arr2=np.array(arr2)
A=np.asmatrix(arr2)
print A.I
</code></pre> <p>This gives the error &quot;setting an array element with a sequence&quot;.</p>
<p>Something like this:</p> <pre><code>&gt;&gt;&gt; data = ['1,2','3,4'] &gt;&gt;&gt; arr2=[ map(float,x.split(',')) for x in data] &gt;&gt;&gt; arr2 = np.asarray(arr2) &gt;&gt;&gt; A = np.asmatrix(arr2) &gt;&gt;&gt; A.I matrix([[-2. , 1. ], [ 1.5, -0.5]]) </code></pre>
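<p>A note for Python 3 readers (the snippet above is Python 2): <code>map()</code> now returns an iterator rather than a list, so the rows need to be materialized first. A sketch of the equivalent:</p> <pre><code>import numpy as np

data = ['1,2', '3,4']
# list(map(...)) materializes each row; in Python 2 map() already
# returned a list, which is why the original worked there.
arr2 = [list(map(float, x.split(','))) for x in data]
A = np.asmatrix(np.asarray(arr2))
print(A.I)  # the matrix inverse, as in the original answer
</code></pre>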
python|numpy
0
4,002
16,986,317
Multiplying and adding inside of a too large array
<p>I have an array A of shape (M,N), and I'd like to do the operation</p> <p><code>R = (A[:,newaxis,:] * A[newaxis,:,:]).sum(2)</code></p> <p>which should yield an (MxM) array. The problem is that the array is quite large and I get a MemoryError, because the intermediate MxMxN array won't fit into memory.</p> <p>What would be the best strategy to get this done? C? map()? Or is there a special function for this already?</p> <p>Thank you, David</p>
<p>I'm not sure how large your arrays are, but the following is equivalent:</p> <pre><code>R = np.einsum('ij,kj',A,A)
</code></pre> <p>It can be quite a bit faster and is much less memory intensive:</p> <pre><code>In [7]: A = np.random.random(size=(500,400))

In [8]: %timeit R = (A[:,np.newaxis,:] * A[np.newaxis,:,:]).sum(2)
1 loops, best of 3: 1.21 s per loop

In [9]: %timeit R = np.einsum('ij,kj',A,A)
10 loops, best of 3: 54 ms per loop
</code></pre> <p>If I increase the size of <code>A</code> to <code>(500,4000)</code>, <code>np.einsum</code> plows through the calculation in about 2 seconds, whereas the original formulation grinds my machine to a halt due to the size of the temporary array it has to create.</p> <p><strong>Update</strong>:</p> <p>As @Jaime pointed out in the comments, <code>np.dot(A,A.T)</code> is also an equivalent formulation of the problem, and can be even faster than the <code>np.einsum</code> solution. Full credit to him for pointing that out, but in case he doesn't post it as a formal solution, I wanted to pull it out into the main answer.</p>
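<p>A minimal sketch of that formulation, with a sanity check against <code>einsum</code>:</p> <pre><code>import numpy as np

A = np.random.random(size=(500, 400))
# R[i, k] = sum_j A[i, j] * A[k, j], i.e. A times its own transpose.
R = np.dot(A, A.T)
assert np.allclose(R, np.einsum('ij,kj', A, A))
</code></pre>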
python|numpy
7
4,003
16,617,973
why isn't numpy.mean multithreaded?
<p>I've been looking for ways to easily multithread some of my simple analysis code, since I had noticed numpy was only using one core, despite the fact that it is supposed to be multithreaded.</p> <p>I know that numpy is configured for multiple cores, since I can see tests using numpy.dot use all my cores, so I just reimplemented mean as a dot product, and it runs way faster. Is there some reason mean can't run this fast on its own? I find similar behavior for larger arrays, although the ratio is closer to 2 than the 3 shown in my example.</p> <p>I've been reading a bunch of posts on similar numpy speed issues, and apparently it's way more complicated than I would have thought. Any insight would be helpful; I'd prefer to just use mean since it's more readable and less code, but I might switch to dot-based means.</p> <pre><code>In [27]: data = numpy.random.rand(10,10)

In [28]: a = numpy.ones(10)

In [29]: %timeit numpy.dot(data,a)/10.0
100000 loops, best of 3: 4.8 us per loop

In [30]: %timeit numpy.mean(data,axis=1)
100000 loops, best of 3: 14.8 us per loop

In [31]: numpy.dot(data,a)/10.0 - numpy.mean(data,axis=1)
Out[31]:
array([  0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   1.11022302e-16,   0.00000000e+00,
         0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
        -1.11022302e-16])
</code></pre>
<blockquote> <p>I've been looking for ways to easily multithread some of my simple analysis code since I had noticed numpy was only using one core, despite the fact that it is supposed to be multithreaded.</p> </blockquote> <p>Who says it's supposed to be multithreaded?</p> <p><code>numpy</code> is primarily designed to be as fast as possible on a single core, and to be as parallelizable as possible if you need to do so. But you still have to parallelize it.</p> <p>In particular, you can operate on independent sub-objects at the same time, and slow operations release the GIL when possible—although "when possible" may not be nearly enough. Also, <code>numpy</code> objects are designed to be shared or passed between processes as easily as possible, to facilitate using <code>multiprocessing</code>.</p> <p>There are some specialized methods that are automatically parallelized, but most of the core methods are not. In particular, <code>dot</code> is implemented on top of BLAS when possible, and BLAS is automatically parallelized on most platforms, but <code>mean</code> is implemented in plain C code.</p> <p>See <a href="http://www.scipy.org/topical-software.html#parallel-and-distributed-programming" rel="noreferrer">Parallel Programming with numpy and scipy</a> for details.</p> <hr> <p>So, how do you know which methods are parallelized and which aren't? And, of those which aren't, how do you know which ones can be nicely manually-threaded and which need multiprocessing?</p> <p>There's no good answer to that. You can make educated guesses (X seems like it's probably implemented on top of ATLAS, and my copy of ATLAS is implicitly threaded), or you can read the source.</p> <p>But usually, the best thing to do is try it and test. If the code is using 100% of one core and 0% of the others, add manual threading. If it's now using 100% of one core and 10% of the others and barely running faster, change the multithreading to multiprocessing. (Fortunately, Python makes this pretty easy, especially if you use the Executor classes from <code>concurrent.futures</code> or the Pool classes from <code>multiprocessing</code>. But you still often need to put some thought into it, and test the relative costs of sharing vs. passing if you have large arrays.)</p> <p>Also, as kwatford points out, just because some method doesn't seem to be implicitly parallel doesn't mean it won't be parallel in the next version of numpy, or the next version of BLAS, or on a different platform, or even on a machine with slightly different stuff installed on it. So, be prepared to re-test. And do something like <code>my_mean = numpy.mean</code> and then use <code>my_mean</code> everywhere, so you can just change one line to <code>my_mean = pool_threaded_mean</code>.</p>
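<p>To make the "add manual threading" advice concrete, here is one possible sketch of a chunked row-wise mean using <code>concurrent.futures</code> (the function name and the chunking strategy here are illustrative, not part of numpy; whether it beats plain <code>mean</code> depends on how much of the work releases the GIL on your build):</p> <pre><code>import numpy as np
from concurrent.futures import ThreadPoolExecutor

def threaded_mean(data, n_workers=4):
    # Split the columns across workers, sum each chunk row-wise in a
    # thread, then combine the partial sums into the final mean.
    chunks = np.array_split(data, n_workers, axis=1)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        partial_sums = list(ex.map(lambda c: c.sum(axis=1), chunks))
    return np.sum(partial_sums, axis=0) / data.shape[1]

data = np.random.rand(1000, 1000)
assert np.allclose(threaded_mean(data), data.mean(axis=1))
</code></pre>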
python|multithreading|performance|numpy
29
4,004
22,099,246
Numpy: evaluation of standard deviation of values above/below the average
<p>I want to calculate the standard deviation for values below and above the average of a matrix of n_par parameters and n_sample samples. The fastest way I have found so far is:</p> <pre><code>stdleft = numpy.zeros_like(mean)
for jpar in xrange(mean.shape[1]):
    stdleft[jpar] = p[p[:,jpar] &lt; \
                      mean[jpar],jpar].std()
</code></pre> <p>where p is a matrix of shape (n_samples,n_par). Is there a smarter way to do it without the for loop? I have roughly n_par = 200 and n_samples = 1e8, and therefore these three lines take ages to run.</p> <p>Any idea would be really helpful!</p> <p>Thank you</p>
<p>Pandas is your friend. Convert your matrix into a pandas DataFrame and index the DataFrame logically. Something like this:</p> <pre><code>mat = pandas.DataFrame(p)
</code></pre> <p>This creates a DataFrame from the original numpy matrix <code>p</code>. Then we compute the column means for the DataFrame.</p> <pre><code>m = mat.mean()
</code></pre> <p>This creates an <code>n_par</code>-sized array of all column means of <code>mat</code>. Finally, index the <code>mat</code> matrix using the <code>&lt;</code> logical operation and apply <code>std</code> to that.</p> <pre><code>stdleft = mat[mat &lt; m].std()
</code></pre> <p>Similarly for <code>stdright</code>. It takes a couple of minutes to compute on my machine.</p> <p>Here's the doc page for pandas: <a href="http://pandas.pydata.org/" rel="nofollow">http://pandas.pydata.org/</a></p> <p><strong>Edit</strong>: Edited using the comment below. You can do almost the same indexing using the original <code>p</code>.</p> <pre><code>m = p.mean(axis=0)
logical = p &lt; m
</code></pre> <p><code>logical</code> contains a boolean matrix of the same size as <code>p</code>. This is where pandas comes in handy. You can directly index a pandas matrix using a logical matrix of the same size; doing so in numpy is slightly harder. I guess looping over the columns is the best way to achieve it:</p> <pre><code>for i in range(p.shape[1]):
    stdleft[i] = p[logical[:, i], i].std()
</code></pre>
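<p>If you would rather stay in plain numpy, a loop-free sketch is to mask out the values at or above the column mean with NaN and use <code>nanstd</code> (note this allocates one extra array the size of <code>p</code>, which may itself be a problem at 1e8 samples):</p> <pre><code>import numpy as np

p = np.random.rand(10000, 200)  # stand-in for (n_samples, n_par)
m = p.mean(axis=0)

# Keep only the entries below their column mean; nanstd ignores NaNs.
below = np.where(p &lt; m, p, np.nan)
stdleft = np.nanstd(below, axis=0)
</code></pre>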
python|optimization|numpy|standards|deviation
2
4,005
55,378,909
How to preprocess sequential data for a 2D-CNN
<p>Is there any way to transform sequential data to 2-dimensional data in order to use it for a common CNN?</p> <p>My dataset looks like: 14,40,84,120,38,29,395,58,153,...</p> <p>But I need a 2-dimensional representation for that. Is there any established algorithm for that purpose?</p>
<p>Are you really sure that a CNN is what you want? It might not be the best choice for sequential data. If you want to learn more about alternative ways to deal with sequential data (like time series), I recommend reading the paper "<a href="https://arxiv.org/abs/1602.01711" rel="nofollow noreferrer">The Great Time Series Classification Bake Off</a>".</p> <p>I especially recommend looking into <a href="https://en.wikipedia.org/wiki/Dynamic_time_warping" rel="nofollow noreferrer">Dynamic time warping</a> first. It might not be a bleeding edge deep learning algorithm but turns out to be very hard to beat when it comes to classifying sequential data.</p>
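<p>If you want to see what DTW actually does before reaching for a library, a textbook sketch of the distance computation looks like this (real projects would use an optimized implementation such as <code>dtaidistance</code> or <code>tslearn</code>):</p> <pre><code>import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic programming formulation.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([14, 40, 84, 120], [14, 42, 80, 118, 120]))
</code></pre>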
tensorflow|machine-learning|neural-network|conv-neural-network
0
4,006
55,286,449
How to calculate the number of days between stages in this df Python Pandas?
<pre><code>df = pd.DataFrame({'Campaign ID':[48464,48464,48464,48464,26380,26380,22676,39529,39529,46029,46029,46029,17030,46724,46724,39379,39379,39379],
                   'Campaign stage':["Lost","Developing","Discussing","Starting","Discussing", "Starting","Developing", "Discussing","Starting","Developing", "Discussing","Starting","Developing", "Developing","Discussing","Lost", "Developing","Discussing"],
                   'Stage Number':[-1, 3, 2, 1, 2, 1, 3, 2, 1, 3, 2, 1, 3, 3, 2, -1, 3, 2],
                   'Campaign Date':["2/8/2019","1/9/2019","1/3/2019","3/3/2018","2/14/2019","12/5/2018","7/25/2018","6/8/2018","3/4/2018","12/8/2018","9/9/2018","5/31/2018","6/7/2018","3/27/2018","1/6/2018","2/15/2019","12/15/2018","9/4/2018"]})

pvt = pd.pivot_table(df,values=['Campaign stage'],index=['Campaign ID','Campaign stage','Stage Number','Campaign Date'],aggfunc='count')
pvt.sort_values(['Campaign ID','Campaign Date'],ascending=[True,False])
</code></pre> <p>Hi guys, I have the above dataframe and I'd like to calculate the number of days between the campaign stages "starting" and "discussing" for each campaign, and then calculate the average.</p> <p>Because of the data quality, the campaign stages are not consistent. So, for campaigns that don't have both the "starting" and "discussing" stages, I want to set the value to 0.</p> <p>I created a pivot table view of the data and sorted the campaign date in descending order... but I don't know what to do next.</p> <p>Thanks in advance for the help.</p>
<pre><code>df['Campaign Date'] = pd.to_datetime(df['Campaign Date'],format='%m/%d/%Y')

compare = {}
for ids, gp in df.groupby('Campaign ID'):
    try:
        # Days between the first 'Discussing' and the first 'Starting' date.
        compare[ids] = gp.loc[gp['Campaign stage']=='Discussing']['Campaign Date'].iloc[0] \
                       - gp.loc[gp['Campaign stage']=='Starting']['Campaign Date'].iloc[0]
    except IndexError:
        # The campaign is missing one of the two stages, so default to 0.
        compare[ids] = 0

df['new_col'] = df['Campaign ID'].apply(lambda x: compare[x])
</code></pre>
python|pandas|dataframe|pandas-groupby
0
4,007
55,182,326
Understanding ResourceVariables in tensorflow
<p>From <a href="https://stackoverflow.com/questions/40817665/whats-the-difference-between-variable-and-resourcevariable-in-tensorflow">here</a></p> <blockquote> <p>Unlike tf.Variable, a tf.ResourceVariable has well-defined semantics. Each usage of a ResourceVariable in a TensorFlow graph adds a read_value operation to the graph. The Tensors returned by a read_value operation are guaranteed to see all modifications to the value of the variable which happen in any operation on which the read_value depends on (either directly, indirectly, or via a control dependency) and guaranteed to not see any modification to the value of the variable on which the read_value operation does not depend on. For example, if there is more than one assignment to a ResourceVariable in a single session.run call there is a well-defined value for each operation which uses the variable's value if the assignments and the read are connected by edges in the graph.</p> </blockquote> <p>So I tried to test the behavior. My code:</p> <pre><code>tf.reset_default_graph()
a = tf.placeholder(dtype=tf.float32,shape=(), name='a')
d = tf.placeholder(dtype=tf.float32,shape=(), name='d')
b = tf.get_variable(name='b', initializer=tf.zeros_like(d), use_resource=True)
c=a+b
b_init = tf.assign(b, d)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([c,b_init,b], feed_dict={a:5.,d:10.}))
</code></pre> <p>This prints [15.,10.,10.]. As per my understanding of resource variables in tensorflow, variable <code>c</code> should not have access to the value of <code>b</code> that was assigned to it in <code>b_init</code>, which would mean the output should instead be [5.,10.,0.]. Please help me understand where I am going wrong.</p>
<p>Two remarks:</p> <ol> <li><p>The order in which you write variables/ops in the first argument of <code>sess.run</code> does not mean that is the order of execution.</p></li> <li><p>If something worked in one step it does not mean it will work if you add loads of parallelism.</p></li> </ol> <p>The answer to the question:</p> <p>The key in the definition is <code>depends on</code>: <code>the Tensors returned by a read_value operation are guaranteed to see all modifications on which the read_value depends</code>. If you look at the graph below, the add operation actually contains a <code>ReadVariableOp</code> operation for <code>b</code>, and then <code>ReadVariableOp</code> also depends on <code>AssignVariableOp</code>. Hence, <code>c</code> should take into account all modifications to <code>b</code>.</p> <p>Unless I am missing something, but I sound convincing to myself. :) <a href="https://i.stack.imgur.com/zvsou.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zvsou.png" alt="enter image description here"></a></p> <p>If you want to see [10.0, 5.0, 0.0] you have to add <code>tf.control_dependencies</code> like below:</p> <pre><code>tf.reset_default_graph()
a = tf.placeholder(dtype=tf.float32,shape=(), name='a')
d = tf.placeholder(dtype=tf.float32,shape=(), name='d')
b = tf.get_variable(name='b', initializer=tf.zeros_like(d), use_resource=True)
c=a+b
with tf.control_dependencies([c]):
    b_init = tf.assign(b, d)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([b_init,c,b], feed_dict={a:5.,d:10.}))
</code></pre> <p>Then the graph will change a bit <a href="https://i.stack.imgur.com/z858K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z858K.png" alt="enter image description here"></a></p>
python|tensorflow|deep-learning
2
4,008
55,575,061
Vectorizing consequential/iterative simulation (in python)
<p>This is a very general question -- is there any way to vectorize a consequential simulation (where the next step depends on the previous one), or any such iterative algorithm in general?</p> <p>Obviously, if one needs to run M simulations (each of N steps) you can use <code>for i in range(N)</code> and calculate M values on each step to get a significant speed-up. But say you only need one or two simulations with a lot of steps, or your simulations don't have a fixed number of steps (like radiation detection), or you are solving a differential system (again, for a lot of steps). Is there any way to shove the outer for-loop under the <code>numpy</code> hood (with a speed gain; I am not talking about passing a Python function object to <code>numpy.vectorize</code>), or are cython-ish approaches the only option? Or maybe this is possible in R or some similar language, but not (currently?) in Python?</p>
<p>Perhaps <a href="https://en.wikipedia.org/wiki/Multigrid_method" rel="nofollow noreferrer">Multigrid in time methods</a> can give some improvements.</p>
python|numpy|numerical-computing
0
4,009
56,764,823
AttributeError: module 'torch' has no attribute 'hub'
<pre><code>import torch
model = torch.hub.list('pytorch/vision')
</code></pre> <p>My pytorch version is 1.0.0, but I can't load the hub. Why is this?</p>
<p>You will need <code>torch &gt;= 1.1.0</code> to use the <code>torch.hub</code> attribute.</p> <p>Alternatively, try downloading <a href="https://github.com/pytorch/pytorch/blob/master/torch/hub.py" rel="nofollow noreferrer">this</a> <code>hub.py</code> file and then try the code below:</p> <pre><code>import hub
model = hub.list('pytorch/vision', force_reload=False)
</code></pre> <p><strong>Arguments:</strong></p> <p><code>github:</code> Required, a string with format <code>repo_owner/repo_name[:tag_name]</code> with an optional tag/branch. The default branch is <code>master</code> if not specified. Example: <code>pytorch/vision[:hub]</code></p> <p><code>force_reload:</code> Optional, whether to discard the existing cache and force a fresh download. Default is <code>False</code>.</p>
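<p>A quick way to check what you have before calling it, as a sketch:</p> <pre><code>import torch

# torch.hub was added in 1.1.0, so guard against older installs.
if hasattr(torch, 'hub'):
    print(torch.hub.list('pytorch/vision'))
else:
    print('torch %s has no torch.hub; upgrade to &gt;= 1.1.0' % torch.__version__)
</code></pre>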
pytorch
3
4,010
56,650,180
Where is the CUDA toolkit located on Ubuntu?
<p>I installed Nvidia's 375 driver and CUDA 8.0 on Ubuntu 16.04 from <a href="https://developer.nvidia.com/cuda-80-ga2-download-archive" rel="nofollow noreferrer">Nvidia's .deb package</a>. I want to build TensorFlow with GPU support. This is the output of TensorFlow's <code>configure</code> script:</p> <pre class="lang-none prettyprint-override"><code>./configure You have bazel 0.4.5 installed. Please specify the location of python. [Default is /usr/bin/python3]: Found possible Python library paths: /usr/local/lib/python3.5/dist-packages /usr/lib/python3/dist-packages Please input the desired Python library path to use. Default is [/usr/local/lib/python3.5/dist-packages] Using python library path: /usr/local/lib/python3.5/dist-packages Do you wish to build TensorFlow with MKL support? [y/N] No MKL support will be enabled for TensorFlow Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: Do you wish to use jemalloc as the malloc implementation? [Y/n] jemalloc enabled Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] No Google Cloud Platform support will be enabled for TensorFlow Do you wish to build TensorFlow with Hadoop File System support? [y/N] y Hadoop File System support will be enabled for TensorFlow Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] No XLA support will be enabled for TensorFlow Do you wish to build TensorFlow with VERBS support? [y/N] No VERBS support will be enabled for TensorFlow Do you wish to build TensorFlow with OpenCL support? [y/N] No OpenCL support will be enabled for TensorFlow Do you wish to build TensorFlow with CUDA support? [y/N] y CUDA support will be enabled for TensorFlow Do you want to use clang as CUDA compiler? [y/N] nvcc will be used as CUDA compiler Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]: Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: Invalid path to CUDA 8.0 toolkit. /usr/local/cuda/lib64/libcudart.so.8.0 cannot be found Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]: </code></pre> <p>The CUDA toolkit directory is not found at the default path, and I can't find it anywhere in <code>/usr</code>:</p> <pre class="lang-none prettyprint-override"><code>find /usr -type f -name '*cuda*' /usr/src/linux-headers-4.4.0-151/include/linux/cuda.h /usr/src/linux-headers-4.4.0-151/include/uapi/linux/cuda.h /usr/src/linux-headers-4.4.0-142/include/linux/cuda.h /usr/src/linux-headers-4.4.0-142/include/uapi/linux/cuda.h /usr/lib/nvidia-384/bin/nvidia-cuda-mps-server /usr/lib/nvidia-384/bin/nvidia-cuda-mps-control /usr/lib/x86_64-linux-gnu/libicudata.so.55.1 /usr/share/man/man1/alt-nvidia-384-cuda-mps-control.1.gz /usr/share/vim/vim74/syntax/cuda.vim /usr/share/vim/vim74/indent/cuda.vim /usr/include/linux/cuda.h </code></pre> <p>Did I miss something in the CUDA installation?</p>
<p>The .deb package I downloaded only installed the repository's metadata. As the <a href="https://developer.download.nvidia.com/compute/cuda/8.0/secure/Prod2/docs/sidebar/CUDA_Quick_Start_Guide.pdf?pMqazYtU3YrzLLpA6_pwd14SOox0Yj_A2Vr5TteRSOCfVSHzRqUHx52uMuPSoWM0TLndXavlSb4zjvqOX8q6s3mLYEtUICWZ45aQoY7hSS-aR2rYl9-q7QguS4uveKvMxNwcyMIAsnc6JHtzio3npqUkUqIwyScCnIhzE5HNCeT1UH7e" rel="nofollow noreferrer">documentation</a> says (page 14), I had to install cuda after installing the package:</p> <pre class="lang-sh prettyprint-override"><code>apt update
apt install cuda
</code></pre>
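<p>Afterwards you can verify that the toolkit landed where TensorFlow's <code>configure</code> script expects it (these are the default paths from the question; adjust if you installed elsewhere):</p> <pre class="lang-sh prettyprint-override"><code># Both should now exist after 'apt install cuda'.
ls /usr/local/cuda/lib64/libcudart.so.8.0
/usr/local/cuda/bin/nvcc --version
</code></pre>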
linux|tensorflow|cuda|installation|ubuntu-16.04
1
4,011
56,604,264
find duplicate rows containing various types of lists (of lists) in pandas dataframe
<p><strong>Background</strong></p> <p>I have the following <code>df</code> that contains a mix of list types</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Size' : [[[['small', 'small', 'big', 'big']]], [['big', 'small','small']], ['big'], ['big']],
                   'ID': [1,2,3,3],
                   'Animal' : [['cat', 'dog', 'dog', 'cat'], ['dog', 'pig','dog'], ['pig'], ['pig']] })
</code></pre> <p>Which looks like this</p> <pre><code>   Animal                ID  Size
0  [cat, dog, dog, cat]  1   [[[small, small, big, big]]]
1  [dog, pig, dog]       2   [[big, small, small]]
2  [pig]                 3   [big]
3  [pig]                 3   [big]
</code></pre> <p><strong>Problem</strong></p> <p>I use the following</p> <pre><code>df.duplicated()
</code></pre> <p>I get the following error since my dataframe contains lists (at least I think this is why)</p> <pre><code>TypeError: unhashable type: 'list'
</code></pre> <p><strong>Question</strong></p> <p>How do I check for duplicate rows in a dataframe that contains multiple types of lists?</p>
<p>One option is to cast every cell to its string representation first — strings are hashable, so <code>drop_duplicates</code> works again — and then select the surviving index on the original frame:</p> <pre><code>df.loc[df.astype(str).drop_duplicates().index]
</code></pre>
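<p>Applied to the example frame, a quick sketch:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'Size': [[[['small', 'small', 'big', 'big']]],
                            [['big', 'small', 'small']],
                            ['big'], ['big']],
                   'ID': [1, 2, 3, 3],
                   'Animal': [['cat', 'dog', 'dog', 'cat'],
                              ['dog', 'pig', 'dog'],
                              ['pig'], ['pig']]})

# The stringified copy is only used to decide which rows are duplicates;
# the original (list-valued) rows are what get returned.
print(df.loc[df.astype(str).drop_duplicates().index])
</code></pre>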
python-3.x|pandas|list|duplicates
0
4,012
25,766,831
Numpy structured arrays: string type not understood when specifying dtype with a dict
<p>Here's what happens if I initialize a struct array with the same field names and types in different ways:</p> <pre><code>&gt;&gt;&gt; a = np.zeros(2, dtype=[('x','int64'),('y','a')]) &gt;&gt;&gt; a array([(0L, ''), (0L, '')], dtype=[('x', '&lt;i8'), ('y', 'S')]) </code></pre> <p>So initializing with list of tuples works fine.</p> <pre><code>&gt;&gt;&gt; mdtype = dict(names=['x','y'],formats=['int64','a']) &gt;&gt;&gt; mdtype {'names': ['x', 'y'], 'formats': ['int64', 'a']} &gt;&gt;&gt; a = np.zeros(2,dtype=mdtype) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: data type not understood </code></pre> <p>So initializing with a dict doesn't, and the problem is the string type:</p> <pre><code>&gt;&gt;&gt; mdtype = dict(names=['x','y'],formats=['int64','float64']) &gt;&gt;&gt; a = np.zeros(2,dtype=mdtype) &gt;&gt;&gt; </code></pre> <p>No problems there. Any ideas? Is this a Numpy bug?</p> <p>Numpy version: 1.8.0</p> <p>Python 2.7.6 (default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] on win32</p>
<p>As a workaround, it works if you specify the string width:</p> <pre><code>&gt;&gt;&gt; mdtype = dict(names=['x','y'],formats=['int64','a1']) &gt;&gt;&gt; np.dtype(mdtype) dtype([('x', '&lt;i8'), ('y', 'S1')]) </code></pre> <p>Probably related to <a href="https://stackoverflow.com/questions/25219344/numpy-set-values-in-structured-array-based-on-other-values-in-structured-array">this</a> and <a href="https://github.com/numpy/numpy/issues/4955" rel="nofollow noreferrer">this</a>. If it isn't a bug, it is awfully close...</p>
python|numpy|structured-array
3
4,013
26,260,127
Numpy only way to read text file and keep comments
<p>Here is a minimal working example of a text file:</p> <pre><code># A B C
1 7 9
7 2 10
10 20 30
</code></pre> <p>Loading this file using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow"><code>numpy.loadtxt</code></a> will discard the commented line. Is there a nice way to map the columns stored as a comment into an array that I can use for access? It is very easy to do this with a few lines of standard python, reading, parsing, splitting and mapping to an array, but I was looking for a built-in command and it seems that both <code>loadtxt</code> and <code>genfromtxt</code> throw away all comments. I have an inkling that this might be what pandas is for and an answer that uses another library for data management is OK too.</p>
<p>Looks like the comment character doesn't bother <code>genfromtxt</code>. It can still treat that 1st line as a source for names, and load the data as a structured array.</p> <pre><code>In [189]: s="""\
# A B C
1 7 9
7 2 10
10 20 30
"""

In [190]: X=np.genfromtxt(s.splitlines(),names=True)

In [191]: X
Out[191]:
array([(1.0, 7.0, 9.0), (7.0, 2.0, 10.0), (10.0, 20.0, 30.0)],
      dtype=[('A', '&lt;f8'), ('B', '&lt;f8'), ('C', '&lt;f8')])

In [192]: X.dtype.names
Out[192]: ('A', 'B', 'C')

In [193]: X['A']
Out[193]: array([  1.,   7.,  10.])

In [194]: X[1]
Out[194]: (7.0, 2.0, 10.0)
</code></pre>
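<p>Since the question says an answer using another library is fine, the same idea in pandas is a short sketch too — strip the comment marker off the header line and let <code>read_csv</code> treat the rest as whitespace-separated columns:</p> <pre><code>import io
import pandas as pd

s = '# A B C\n1 7 9\n7 2 10\n10 20 30\n'
# Remove the leading '# ' once so the first line becomes a real header.
df = pd.read_csv(io.StringIO(s.replace('# ', '', 1)), sep=r'\s+')
print(df['A'])
</code></pre>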
python|numpy
3
4,014
66,891,947
Tensorflow error I can't figure out
<p>I was doing a tensorflow object detection project to detect sign languages using google colab.<br /> I was getting a &quot;tensorflow has no attribute gfile&quot; error and found that I had to downgrade to tensorflow 1,<br /> so I ran <code>!pip install tensorflow==1.13.0rc1</code> in my colab cell.<br /> But now when I run the same cell I get this error, which I can't figure out how to resolve.<br /> code -</p> <pre><code>!python {SCRIPTS_PATH + 'generate_tfrecord.py'} -x {IMAGE_PATH + 'train'} -l {ANNOTATION_PATH + 'label_map.pbtxt'} -o {ANNOTATION_PATH + 'train.record'}
!python {SCRIPTS_PATH + 'generate_tfrecord.py'} -x{IMAGE_PATH + 'test'} -l {ANNOTATION_PATH + 'label_map.pbtxt'} -o {ANNOTATION_PATH + 'test.record'}
</code></pre> <p>Error -</p> <pre><code>/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(&quot;qint8&quot;, np.int8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([(&quot;quint8&quot;, np.uint8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([(&quot;qint16&quot;, np.int16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(&quot;quint16&quot;, np.uint16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(&quot;qint32&quot;, np.int32, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([(&quot;resource&quot;, np.ubyte, 1)]) Traceback (most recent call last): File &quot;Tensorflow/scripts/generate_tfrecord.py&quot;, line 62, in &lt;module&gt; label_map_dict = label_map_util.get_label_map_dict(label_map) File &quot;/usr/local/lib/python3.7/dist-packages/object_detection/utils/label_map_util.py&quot;, line 164, in get_label_map_dict label_map = load_labelmap(label_map_path) File &quot;/usr/local/lib/python3.7/dist-packages/object_detection/utils/label_map_util.py&quot;, line 133, in load_labelmap label_map_string = fid.read() File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py&quot;, line 125, in read self._preread_check() File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py&quot;, line 85, in _preread_check compat.as_bytes(self.__name), 1024 * 512, status) File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/compat.py&quot;, line 61, in as_bytes (bytes_or_text,)) TypeError: Expected binary or unicode string, got item { name: &quot;Hello&quot; id: 1 } item { name: &quot;Yes&quot; id: 2 } item { name: &quot;Thank You&quot; id: 3 } item { name: &quot;I Love You&quot; id: 5 } item { name: &quot;No&quot; id: 5 } /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(&quot;qint8&quot;, np.int8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([(&quot;quint8&quot;, np.uint8, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([(&quot;qint16&quot;, np.int16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(&quot;quint16&quot;, np.uint16, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(&quot;qint32&quot;, np.int32, 1)]) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
np_resource = np.dtype([(&quot;resource&quot;, np.ubyte, 1)]) Traceback (most recent call last): File &quot;Tensorflow/scripts/generate_tfrecord.py&quot;, line 62, in &lt;module&gt; label_map_dict = label_map_util.get_label_map_dict(label_map) File &quot;/usr/local/lib/python3.7/dist-packages/object_detection/utils/label_map_util.py&quot;, line 164, in get_label_map_dict label_map = load_labelmap(label_map_path) File &quot;/usr/local/lib/python3.7/dist-packages/object_detection/utils/label_map_util.py&quot;, line 133, in load_labelmap label_map_string = fid.read() File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py&quot;, line 125, in read self._preread_check() File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py&quot;, line 85, in _preread_check compat.as_bytes(self.__name), 1024 * 512, status) File &quot;/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/compat.py&quot;, line 61, in as_bytes (bytes_or_text,)) TypeError: Expected binary or unicode string, got item { name: &quot;Hello&quot; id: 1 } item { name: &quot;Yes&quot; id: 2 } item { name: &quot;Thank You&quot; id: 3 } item { name: &quot;I Love You&quot; id: 5 } item { name: &quot;No&quot; id: 5 } </code></pre> <p>Other cells of my colab file:</p> <pre><code>WORKSPACE_PATH = 'Tensorflow/workspace/'
SCRIPTS_PATH = 'Tensorflow/scripts/'
APIMODEL_PATH = 'Tensorflow/models/'
ANNOTATION_PATH = WORKSPACE_PATH+'annotations/'
IMAGE_PATH = WORKSPACE_PATH+'images/'
MODEL_PATH = WORKSPACE_PATH+'models/'
PRETRAINED_MODEL_PATH = WORKSPACE_PATH+'pre-trained-models/'
CONFIG_PATH = MODEL_PATH+'my_ssd_mobnet/pipeline.config'
CHECKPOINT_PATH = MODEL_PATH+'my_ssd_mobnet/'
</code></pre> <pre><code>labels = [
    {'name':'Hello', 'id':1},
    {'name':'Yes', 'id':2},
    {'name':'Thank You', 'id':3},
    {'name':'I Love You', 'id':5},
    {'name':'No', 'id':5}
]

with open(ANNOTATION_PATH + 'label_map.pbtxt', 'w') as f:
    for label in labels:
        f.write('item { \n')
        f.write('\tname:\'{}\'\n'.format(label['name']))
        f.write('\tid:{}\n'.format(label['id']))
        f.write('}\n')
</code></pre>
<p>In <code>generate_tfrecord.py</code>, remove line 61 and change line 62 to <code>label_map_dict = label_map_util.get_label_map_dict(args.labels_path)</code>. Also, if you have any line that says <code>fine_tune_checkpoint_version</code> in the <code>pipeline.config</code> file, delete that line and try again.</p>
python|tensorflow|object-detection
0
4,015
67,042,817
Append a sequence number with padded zeroes to a series using pandas
<p>I have a dataframe like the one shown below</p> <pre><code>df = pd.DataFrame({'person_id': [101,101,101,101,202,202,202],
                   'login_date':['5/7/2013 09:27:00 AM','09/08/2013 11:21:00 AM','06/06/2014 08:00:00 AM','06/06/2014 05:00:00 AM','12/11/2011 10:00:00 AM','13/10/2012 12:00:00 AM','13/12/2012 11:45:00 AM']})
df.login_date = pd.to_datetime(df.login_date)
df['logout_date'] = df.login_date + pd.Timedelta(days=5)
df['login_id'] = [1,1,1,1,8,8,8]
</code></pre> <p>As you can see in the sample dataframe, the <code>login_id</code> is the same even though the <code>login</code> and <code>logout</code> dates are different for the person.</p> <p>For example, <code>person = 101</code> has logged in and out at 4 different timestamps, but he has got the same login_ids, which is incorrect.</p> <p>Instead, I would like to generate a <code>new login_id</code> column where each person gets a new login_id but retains the <code>1st login_id</code> information in their subsequent logins, so we can know it's a sequence.</p> <p>I tried the below but it doesn't work well</p> <pre><code>df.groupby(['person_id','login_date','logout_date'])['login_id'].rank(method=&quot;first&quot;, ascending=True) + 100000
</code></pre> <p>I expect my output to be like the one shown below. You can see how <code>1</code> and <code>8</code>, the 1st login_id for each person, are retained in their subsequent <code>login_ids</code>. We just build a sequence by appending <code>00001</code>, incrementing by one for each of that person's rows.</p> <p>Please note I would like to apply this to big data, and the <code>login_ids</code> may not just be a <code>single digit</code> in real data. For example, the 1st login_id could even be a random number like <code>576869578</code>. In that case, the subsequent login id would be <code>57686957800001</code>. Whatever the 1st <code>login_id</code> for that subject is, append <code>00001</code>, <code>00002</code> etc. based on the number of rows that person has. Hope this helps.</p> <p><a href="https://i.stack.imgur.com/msfbD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/msfbD.png" alt="enter image description here" /></a></p>
<p><strong>Update 2:</strong> Just realized my previous answers also added 100000 to the first index. Here is a version that uses <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><strong><code>GroupBy.transform()</code></strong></a> to add 100000 only to subsequent indexes:</p> <pre class="lang-py prettyprint-override"><code>cumcount = df.groupby(['person_id','login_id']).login_id.cumcount()
df.login_id = df.groupby(['person_id','login_id']).login_id.transform(
    lambda x: x.shift().mul(100000).fillna(x.min())
).add(cumcount)

#    person_id          login_date         logout_date  login_id
# 0        101 2013-05-07 09:27:00 2013-05-12 09:27:00         1
# 1        101 2013-09-08 11:21:00 2013-09-13 11:21:00    100001
# 2        101 2014-06-06 08:00:00 2014-06-11 08:00:00    100002
# 3        101 2014-06-06 05:00:00 2014-06-11 05:00:00    100003
# 4        202 2011-12-11 10:00:00 2011-12-16 10:00:00         8
# 5        202 2012-10-13 00:00:00 2012-10-18 00:00:00    800001
# 6        202 2012-12-13 11:45:00 2012-12-18 11:45:00    800002
</code></pre> <hr /> <p><strong>Update:</strong> A faster option is to build the sequence with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><strong><code>GroupBy.cumcount()</code></strong></a>:</p> <pre class="lang-py prettyprint-override"><code>cumcount = df.groupby(['person_id','login_id']).login_id.cumcount()
df.login_id = df.login_id.mul(100000).add(cumcount)

#    person_id          login_date         logout_date  login_id
# 0        101 2013-05-07 09:27:00 2013-05-12 09:27:00    100000
# 1        101 2013-09-08 11:21:00 2013-09-13 11:21:00    100001
# 2        101 2014-06-06 08:00:00 2014-06-11 08:00:00    100002
# 3        101 2014-06-06 05:00:00 2014-06-11 05:00:00    100003
# 4        202 2011-12-11 10:00:00 2011-12-16 10:00:00    800000
# 5        202 2012-10-13 00:00:00 2012-10-18 00:00:00    800001
# 6        202 2012-12-13 11:45:00 2012-12-18 11:45:00    800002
</code></pre> <hr /> <p>You can also build the sequence in a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><strong><code>GroupBy.apply()</code></strong></a>:</p> <pre class="lang-py prettyprint-override"><code>df.login_id = df.groupby(['person_id','login_id']).login_id.apply(
    lambda x: pd.Series([x.min()*100000+seq for seq in range(len(x))], x.index)
)
</code></pre>
python|pandas|dataframe|pandas-groupby|series
2
4,016
67,160,127
Return value in referencing column of dataframe with apply/lambda function
<h3>Problem</h3> <p>I have the following dataframe</p> <pre class="lang-python prettyprint-override"><code>p = {'parentId':['071cb2c2-d1be-4154-b6c7-a29728357ef3', 'a061e7d7-95d2-4812-87c1-24ec24fc2dd2', 'Highest Level', '071cb2c2-d1be-4154-b6c7-a29728357ef3'],
     'id_x': ['a061e7d7-95d2-4812-87c1-24ec24fc2dd2', 'd2b62e36-b243-43ac-8e45-ed3f269d50b2', '071cb2c2-d1be-4154-b6c7-a29728357ef3', 'a0e97b37-b9a1-4304-9769-b8c48cd9f184'],
     'type': ['Department', 'Department', 'Department', 'Function'],
     'name': ['Sales', 'Finances', 'Management', 'Manager']}
df = pd.DataFrame(data = p)
df

| parentId                             | id_x                                 | type       | name       |
| ------------------------------------ | ------------------------------------ | ---------- | ---------- |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | Department | Sales      |
| a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | d2b62e36-b243-43ac-8e45-ed3f269d50b2 | Department | Finances   |
| Highest Level                        | 071cb2c2-d1be-4154-b6c7-a29728357ef3 | Department | Management |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a0e97b37-b9a1-4304-9769-b8c48cd9f184 | Function   | Manager    |
</code></pre> <p>I tried to create a function that should return the <code>name</code> of the corresponding entry, where the <code>parentId</code> is the <code>id_x</code>, and put it in a new column. With the function I get the following result:</p> <pre class="lang-python prettyprint-override"><code>def allocator(id_x, parent_ID, name):
    d = &quot;no sub-dependency&quot;
    for node in id_x:
        if node == parent_ID:
            d = name
    return d

df['Parent_name'] = df.apply(lambda x: allocator(df['id_x'], x['parentId'], x['name']), axis=1)
df

| parentId                             | id_x                                 | type       | name       | Parent_name       |
| ------------------------------------ | ------------------------------------ | ---------- | ---------- | ----------------- |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | Department | Sales      | Sales             |
| a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | d2b62e36-b243-43ac-8e45-ed3f269d50b2 | Department | Finances   | Finances          |
| Highest Level                        | 071cb2c2-d1be-4154-b6c7-a29728357ef3 | Department | Management | no sub-dependency |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a0e97b37-b9a1-4304-9769-b8c48cd9f184 | Function   | Manager    | Manager           |
</code></pre> <h3>Expected result</h3> <p>The function up to now only puts in the name of the corresponding <code>id_x</code> itself. However, it should take the <code>name</code> of the entry where the <code>parentId</code> is the <code>id_x</code>.</p> <pre class="lang-python prettyprint-override"><code>| parentId                             | id_x                                 | type       | name       | Parent_name       |
| ------------------------------------ | ------------------------------------ | ---------- | ---------- | ----------------- |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | Department | Sales      | Management        |
| a061e7d7-95d2-4812-87c1-24ec24fc2dd2 | d2b62e36-b243-43ac-8e45-ed3f269d50b2 | Department | Finances   | Sales             |
| Highest Level                        | 071cb2c2-d1be-4154-b6c7-a29728357ef3 | Department | Management | no sub-dependency |
| 071cb2c2-d1be-4154-b6c7-a29728357ef3 | a0e97b37-b9a1-4304-9769-b8c48cd9f184 | Function   | Manager    | Management        |
</code></pre> <p>How do I have to change the function, so it takes the <code>name</code> of the related parent entry?</p>
<p>You can use <code>.map()</code>:</p> <pre><code>mapping = dict(zip(df[&quot;id_x&quot;], df[&quot;name&quot;]))
df[&quot;Parent_name&quot;] = df[&quot;parentId&quot;].map(mapping).fillna(&quot;no sub-dependency&quot;)
print(df)
</code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code>                               parentId                                  id_x        type        name        Parent_name
0  071cb2c2-d1be-4154-b6c7-a29728357ef3  a061e7d7-95d2-4812-87c1-24ec24fc2dd2  Department       Sales         Management
1  a061e7d7-95d2-4812-87c1-24ec24fc2dd2  d2b62e36-b243-43ac-8e45-ed3f269d50b2  Department    Finances              Sales
2                         Highest Level  071cb2c2-d1be-4154-b6c7-a29728357ef3  Department  Management  no sub-dependency
3  071cb2c2-d1be-4154-b6c7-a29728357ef3  a0e97b37-b9a1-4304-9769-b8c48cd9f184    Function     Manager         Management
</code></pre>
python|pandas|function|dataframe|lambda
1
4,017
67,087,138
Joining multiple DateTime columns into one column (Python)
<p>I am trying to gather all my datetime information into only one column. Right now I have a column for each period over the span of 6 months; however, in order to do a time series analysis, I was trying to gather all the datetimes into a single column.</p> <p>Here is what I tried:</p> <pre><code>df['Dates'] = df[&quot;1 Month Date&quot;]+ df[&quot;2 Month Date&quot;]+ df[&quot;3 Month Date&quot;] + df[&quot;4 Month Date&quot;] + df[&quot;5 Month Date&quot;] +df[&quot;6 Month Date&quot;]
</code></pre> <p>Error message:</p> <pre><code>TypeError: cannot add DatetimeArray and DatetimeArray
</code></pre> <p>Second trial:</p> <pre><code>Dates = df[&quot;1 Month Date&quot;,&quot;2 Month Date&quot;,&quot;3 Month Date&quot;,&quot;4 Month Date&quot;,&quot;5 Month Date&quot;,&quot;6 Month Date&quot;]
</code></pre> <p>Error message:</p> <pre><code>KeyError: ('1 Month Date', '2 Month Date', '3 Month Date', '4 Month Date', '5 Month Date', '6 Month Date')
</code></pre> <p>Extra information: It is an excel sheet that I imported using pandas; my 1 Month Date, 2 Month Date etc. columns are datetime64[ns] when I do df.info().</p> <p>Sample data:</p> <pre><code>    1 Month Date  1 Month Room Booked  2 Month Date  2 Month Room Booked \
0     2020-09-01                  339    2020-10-01                  346
1     2020-09-01                    2    2020-10-01                    4
2     2020-09-01                    4    2020-10-01                    4
3     2020-09-01                    0    2020-10-01                    0
4     2020-09-01                    0    2020-10-01                    0
5     2020-09-01                    1    2020-10-01                    1
6     2020-09-01                    2    2020-10-01                    2
7     2020-09-01                   50    2020-10-01                   58
8     2020-09-01                   12    2020-10-01                   12
9     2020-09-01                    9    2020-10-01                    9
10    2020-09-01                    6    2020-10-01                    6
11    2021-03-01                  112    2021-04-01                  112
12    2021-03-01                    0    2021-04-01                    0
13    2021-02-01                   36    2021-03-01                   36
14    2021-02-01                   18    2021-03-01                   18
15    2021-02-01                   20    2021-03-01                   20
16    2021-02-01                   12    2021-03-01                   12
17    2021-02-01                    0    2021-03-01                    0
</code></pre>
<p>IIUC use:</p> <pre><code>Dates = df[[&quot;1 Month Date&quot;,&quot;2 Month Date&quot;,&quot;3 Month Date&quot;,&quot;4 Month Date&quot;,&quot;5 Month Date&quot;,&quot;6 Month Date&quot;]].apply(pd.Series.explode).sum(axis=1) </code></pre>
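<p>If the goal is simply one long datetime column for a time series analysis rather than an element-wise combination, a <code>melt</code>-based sketch (assuming the same column names as in the question) may be closer to what you want:</p> <pre><code>date_cols = ['1 Month Date', '2 Month Date', '3 Month Date',
             '4 Month Date', '5 Month Date', '6 Month Date']
# Stack the six date columns into a single long column; each original
# row contributes six rows to the result.
dates = df.melt(value_vars=date_cols, value_name='Dates')['Dates']
</code></pre>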
python|pandas|datetime|typeerror|keyerror
0
4,018
47,262,654
How to select identical rows from a pandas dataframe along with null
<p>I'm new to pandas and I'm having a problem with row selection from a dataframe.</p> <p>The following is my DataFrame:</p> <pre><code>  Index Column1 Column2 Column3      Column4 Column5
  0     1234    500     NEWYORK      NY      NaN
  1     5678    700     AUSTIN       TX      5678956010
  2     1234    300     NEWYORKCITY  NY      NaN
  3     8910    235     RICHMOND     FL      8484883666
  4     8910    250     AUSTIN       TX      8484883666
  5     5324    150     AUSTIN       TX      NaN
</code></pre> <p>1.) I want to select rows that have the same values in Column5. So the output dataframe should contain rows with index 0, 2, 3 and 4. Note that two rows with NaN in Column5 should be selected only if their Column1 values are the same (for example, rows with index 0 and 2).</p> <p>Can anyone help me with a step-by-step procedure for this custom selection? Thanks in advance...</p>
<p>I think you need 2 sets of conditions - one for <code>NaN</code>s in <code>Column5</code> and one for non-NaNs - and then to chain them by <code>|</code> (or):</p> <pre><code>m1 = df['Column1'].duplicated(keep=False) &amp; df['Column5'].isnull()
m2 = df['Column5'].duplicated(keep=False) &amp; df['Column5'].notnull()

df = df[m1 | m2]
print (df)
   Index  Column1  Column2      Column3 Column4       Column5
0      0     1234      500      NEWYORK      NY           NaN
2      2     1234      300  NEWYORKCITY      NY           NaN
3      3     8910      235     RICHMOND      FL  8.484884e+09
4      4     8910      250       AUSTIN      TX  8.484884e+09
</code></pre> <p>Detail:</p> <pre><code>print (m1)
0     True
1    False
2     True
3    False
4    False
5    False
dtype: bool

print (m2)
0    False
1    False
2    False
3     True
4     True
5    False
Name: Column5, dtype: bool
</code></pre>
python|pandas|dataframe
2
4,019
11,253,495
numpy: applying argsort to an array
<p>The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html#numpy.argsort" rel="noreferrer"><code>argsort()</code></a> function returns a matrix of indices that can be used to index the original array so that the result would match the <code>sort()</code> result.</p> <p>Is there a way to apply those indices? I have two arrays, one is the array used for obtaining the sort order, and another is some associated data. </p> <p>I would like to compute <code>assoc_data[array1.argsort()]</code> but that doesn't seem to work.</p> <p>Here's an example:</p> <pre><code>z=array([1,2,3,4,5,6,7]) z2=array([z,z*z-7]) i=z2.argsort() </code></pre> <hr> <pre><code>z2=array([[ 1, 2, 3, 4, 5, 6, 7], [-6, -3, 2, 9, 18, 29, 42]]) i =array([[1, 1, 1, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1]]) </code></pre> <p>I would like to apply i to z2 (or another array with associated data) but I'm not sure how to do so. </p>
<p>This is probably overkill, but this will work in the nd case:</p> <pre><code>import numpy as np

axis = 0
index = list(np.ix_(*[np.arange(i) for i in z2.shape]))
index[axis] = z2.argsort(axis)
z2[index]

# Or if you only need the 3d case you can use np.ogrid.
axis = 0
index = np.ogrid[:z2.shape[0], :z2.shape[1], :z2.shape[2]]
index[axis] = z2.argsort(axis)
z2[index]
</code></pre>
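<p>On numpy &gt;= 1.15 there is also <code>np.take_along_axis</code>, which does this index bookkeeping for you; a sketch with the question's arrays (note the axis must match the one used for <code>argsort</code>):</p> <pre><code>import numpy as np

z = np.array([1, 2, 3, 4, 5, 6, 7])
z2 = np.array([z, z * z - 7])
i = z2.argsort(axis=0)

# Apply the argsort indices along the same axis they were computed on.
print(np.take_along_axis(z2, i, axis=0))
</code></pre>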
python|arrays|numpy
10
4,020
68,436,818
How can I create an array based on another array?
<p>I am new to python and I would like to try to create an array based on another array.</p> <p>If I have an array like:</p> <pre><code>array = [[1, 1], [1, 2], [2, 2], [3, 2], [3, 3], [4, 2], [5, 1], [5, 3]]
</code></pre> <p>Then the matrix should be created based on array, with a 1 wherever there is a value for the corresponding position: in array, the first number represents the row in the matrix, and the second number represents the column. For example, [1, 1] means row 1, column 1 has a value, so that entry is 1; there is no [1, 3] in the array, so row 1, column 3 is 0. I want the result to be like this:</p> <pre><code>      col1 col2 col3
row1 [ 1    1    0  ]
row2 [ 0    1    0  ]
row3 [ 0    1    1  ]
row4 [ 0    1    0  ]
row5 [ 1    0    1  ]

result = [[1, 1, 0], [0, 1, 0], [0, 1, 1], [0, 1, 0], [1, 0, 1]]
</code></pre> <p>Note that the values in the array are just an example, not the exact positions in the matrix.</p> <p>I've tried inserting the values into an empty array, but identifying the corresponding position in the matrix is hard.</p> <p>Another example is:</p> <pre><code>array =[[4, 3], [4, 23], [5, 308], [5, 432], [8, 432], [8, 429]]
</code></pre> <p>and the matrix would be like:</p> <pre><code>      col1 col2 col3 col4 col5
row1 [ 1    1    0    0    0  ]
row2 [ 0    0    1    1    0  ]
row3 [ 0    0    0    1    1  ]
</code></pre> <p>I'm not sure whether the problem description is clear.</p>
<p>The first array (named <code>array</code>) is called a <em>sparse binary matrix representation</em>. The second array (named <code>result</code>) is called a <em>dense binary matrix representation</em>. If you want to convert the values to ranks (while respecting duplicates) beforehand, you can use the <code>numpy.unique</code> function. So, the full procedure would be:</p> <pre><code>from scipy.sparse import csr_matrix
from numpy import unique

array = [[1, 1], [1, 2], [2, 2], [3, 2], [3, 3], [4, 2], [5, 1], [5, 3]]

r_unique, rows = unique([v[0] for v in array], return_inverse=True)
c_unique, cols = unique([v[1] for v in array], return_inverse=True)
values = [1] * len(array)
n = len(r_unique)
m = len(c_unique)
result = csr_matrix((values, (rows, cols)), shape=(n, m)).toarray()
</code></pre> <p>This works because <code>r_unique</code> is the set of all unique numbers of one dimension in <code>array</code>, which gives the size of <code>result</code> in the corresponding dimension. <code>rows</code> then contains the mapping from each <code>number</code> to its <code>rank</code> within this dimension. The same applies correspondingly to <code>cols</code> and <code>c_unique</code>.</p>
python|arrays|numpy
4
4,021
68,283,607
How to check if any row is more than x and return name of column?
<p>I have a dataframe of id, col1, col2, col3, col4, as described in the picture.</p> <p>I want to write a function which finds whether any value in a row is more than x, takes the name of the column in which this condition was true, and returns it in a result column.</p> <pre><code>userid col1 col2 col3 col4 result
d1     40   50   75   65   col3
d2     54   20   61   71   col4
d3     12   75   12   60   col2
d4     75   12   14   16   col1
</code></pre> <p><img src="https://i.stack.imgur.com/uNL9w.png" alt="See Image" /></p>
<p>You can use <code>idxmax</code> with <code>axis=1</code> to work on columns:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df[['col1','col2','col3','col4']].idxmax(axis=1)
0    col3
1    col4
2    col2
3    col1
dtype: object
</code></pre> <p>And to assign it to your <code>df</code>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df['result'] = df[['col1','col2','col3','col4']].idxmax(axis=1)
</code></pre> <p>You get:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df
  userid  col1  col2  col3  col4 result
0     d1    40    50    75    65   col3
1     d2    54    20    61    71   col4
2     d3    12    75    12    60   col2
3     d4    75    12    14    16   col1
</code></pre>
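<p>If you literally need &quot;more than x&quot; rather than the row maximum, a sketch with a hypothetical threshold <code>x</code> (here 70) could be:</p> <pre class="lang-py prettyprint-override"><code>x = 70
cols = ['col1', 'col2', 'col3', 'col4']
mask = df[cols].gt(x)
# idxmax on a boolean frame returns the first True column per row;
# rows where nothing exceeds x get NaN instead.
df['result'] = mask.idxmax(axis=1).where(mask.any(axis=1))
</code></pre>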
python|pandas
3
4,022
68,182,976
How to make bar subplots in plotly using pandas pivot dataframes?
<p>Below are the pivot dataframes df1 and df2. I am trying to make subplots in plotly using these dataframes, but I am getting a KeyError while executing my code. My code is as follows:</p> <pre><code>import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
pio.renderers.default='browser'

df1 =
Student  July  August
Bobby     824     516

df2 =
Country  July  August
EUR       274     150
USA       212     128
China     113     170
Port       44      10

# plotly setup for fig
fig = make_subplots(2,1)
fig.add_trace(go.Bar(x=df1.Student, y=df1.loc['July','August']),row=1, col=1)
fig.add_trace(go.Bar(x=df2.Country, y=df2.loc['July','August']),row=2, col=1)
fig.show()
</code></pre>
<ul> <li>You are using <strong>plotly express</strong> concepts to create the traces; with <code>graph_objects</code> you need to create a trace for each bar column.</li> <li>It is then a simple case of adding each trace to the subplots figure.</li> </ul> <pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
import pandas as pd
import numpy as np

s = 200
# generate pivot table dataframes similar to question
df = pd.DataFrame({&quot;Student&quot;: np.random.choice([&quot;Bob&quot;, &quot;John&quot;, &quot;Fred&quot;, &quot;Jill&quot;, &quot;Anne&quot;], s),
                   &quot;Month&quot;: np.random.choice(
                       pd.Series(pd.date_range(&quot;31-Jan-2020&quot;, freq=&quot;M&quot;, periods=6)).dt.strftime(&quot;%B&quot;),s),
                   &quot;score&quot;: np.random.randint(5, 75, s),
                  })
df1 = df.groupby([&quot;Student&quot;,&quot;Month&quot;], as_index=False).agg({&quot;score&quot;:&quot;mean&quot;}).pivot(index=&quot;Student&quot;, columns=&quot;Month&quot;, values=&quot;score&quot;)

df = pd.DataFrame({&quot;Country&quot;: np.random.choice([&quot;UK&quot;, &quot;USA&quot;, &quot;France&quot;, &quot;Germany&quot;, &quot;Mexico&quot;], s),
                   &quot;Month&quot;: np.random.choice(
                       pd.Series(pd.date_range(&quot;31-Jan-2020&quot;, freq=&quot;M&quot;, periods=6)).dt.strftime(&quot;%B&quot;),s),
                   &quot;score&quot;: np.random.randint(5, 75, s),
                  })
df2 = df.groupby([&quot;Country&quot;,&quot;Month&quot;], as_index=False).agg({&quot;score&quot;:&quot;mean&quot;}).pivot(index=&quot;Country&quot;, columns=&quot;Month&quot;, values=&quot;score&quot;)

# create figure
fig = make_subplots(rows=2, cols=1, specs=[[{&quot;type&quot;:&quot;bar&quot;}],[{&quot;type&quot;:&quot;bar&quot;}]])

# add traces
for col in [&quot;April&quot;,&quot;May&quot;]:
    fig.add_trace(go.Bar(x=df1.index, y=df1[col], name=col), row=1, col=1)
for col in [&quot;April&quot;,&quot;May&quot;]:
    fig.add_trace(go.Bar(x=df2.index, y=df2[col], name=col), row=2, col=1)

fig
</code></pre> <p><a href="https://i.stack.imgur.com/dCJ2T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dCJ2T.png" alt="enter image description here" /></a></p>
pandas|plotly
1
4,023
68,239,584
numpy.ndarray has no attribute read (when trying to pass a video)
<p>I am attempting to write a video with annotations to a file ( or at least print it our on-screen while using google colab). I've tried using cv_imshow but this prints the video one frame at a time, which is not what I'm after. I've amended the script to use <code>VideoWriter</code>, but still getting stuck when using cap.read() as I am getting an error stating <code>numpy.ndarray has no attribute read</code>.</p> <p>I understand why this error is occurring, as I believe the <code>.read()</code> function is expecting a video, while I am trying to pass a numpy array. However, I cannot seem to find another way around this. any help would be highly appreciated.</p> <p>This is the full code I am using:</p> <pre><code>import cv2
import tensorflow as tf
from google.colab.patches import cv2_imshow

cap = cv2.VideoCapture(r'/content/drive/MyDrive/vid1.mp4')

from google.colab.patches import cv2_imshow
import numpy as np

while True:
    ret, image_np = cap.read()
    image_np_expanded = np.expand_dims(image_np, axis=0)

    input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
    detections, predictions_dict, shapes = detect_fn(input_tensor)

    label_id_offset = 1
    image_np_with_detections = image_np.copy()

    viz_utils.visualize_boxes_and_labels_on_image_array(
                image_np_with_detections,
                detections['detection_boxes'][0].numpy(),
                (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
                detections['detection_scores'][0].numpy(),
                category_index,
                use_normalized_coordinates=True,
                max_boxes_to_draw=200,
                min_score_thresh=.30,
                agnostic_mode=False)

    cap=image_np_with_detections

    res=(800,600)
    # this format fail to play in Chrome/Win10/Colab
    # fourcc = cv2.VideoWriter_fourcc(*'MP4V') #codec
    fourcc = cv2.VideoWriter_fourcc(*'H264') #codec
    out = cv2.VideoWriter('output.mp4', fourcc, 20.0, res)

    while(True):
        # Capture frame-by-frame
        ret, frame = cap.read()
        print(&quot;Frame number: &quot; + str(counter))
        counter = counter+1
        if cv2.waitKey(1) &amp; 0xFF == ord('q'):
            break
        out.write(frame)

    out.release()
    cap.release()
    cv2.destroyAllWindows()
</code></pre> <p>Thanks in advance!</p>
<p>I managed to adjust the code in order to get the VideoWriter to work. As hpaulj pointed out, I was assigning the variable cap twice.</p> <p>The correct code is below:</p> <pre><code>cap = cv2.VideoCapture(r'/content/drive/MyDrive/Workspace/Images/Test/vid3.mp4')

res=(800,600)
fourcc = cv2.VideoWriter_fourcc(*'H264') #codec
out = cv2.VideoWriter('/content/drive/MyDrive/Workspace/Images/Test/vid3output.mp4', fourcc, 20.0, res)

while True:
    ret, image_np = cap.read()

    ##expand dimensions as the model expects images to have the shape :: [1,None, None,3]
    image_np_expanded = np.expand_dims(image_np, axis=0)

    input_tensor = tf.convert_to_tensor(image_np_expanded, dtype=tf.float32)
    detections, predictions_dict, shapes = detect_fn(input_tensor)

    label_id_offset = 1
    image_np_with_detections = image_np.copy()

    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_np_with_detections,
        detections['detection_boxes'][0].numpy(),
        (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
        detections['detection_scores'][0].numpy(),
        category_index,
        use_normalized_coordinates=True,
        max_boxes_to_draw=200,
        min_score_thresh=.5,
        agnostic_mode=False)

    out.write(cv2.resize(image_np_with_detections,(800,600)))

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()
</code></pre>
python|numpy|object-detection|video-processing|object-detection-api
0
4,024
68,287,676
Pandas won't create .csv file
<p>I recently started diving into algo trading and building a bot for crypto trading.</p> <p>For this I created a backtester with pandas to run different strategies with different parameters. The datasets (csv files) I use are rather large (around 40mb each).</p> <p>These are processed, but as soon as I want to save the processed data to a csv, nothing happens. No output whatsoever, not even an error message. I tried to use the full path, I tried to save it just with the filename, I even tried to save it as a .txt file. Nothing seems to work. I also tried the solutions I was able to find on stackoverflow.</p> <p>I am using Anaconda3 in case that could be the source of my problem.</p> <p>Here you can find the part of my code, which tries to save the dataframe to a file.</p> <pre><code>results_df = pd.DataFrame(results)
results_df.columns = ['strategy', 'number_of_trades', &quot;capital&quot;]
print(results_df)
for i in range(2, len(results_df)):
    if results_df.capital.iloc[i] &lt; results_df.capital.iloc[0]:
        results_df.drop([i],axis=&quot;index&quot;)

        #results to csv
        current_dir = os.getcwd()
        results_df.to_csv(os.getcwd()+'\\file.csv')
        print(results_df)
</code></pre> <p>Thank you for your help!</p>
<p>You can simplify your code a great deal and write it as follows (it should also run faster):</p> <pre><code>results_df = pd.DataFrame(results)
results_df.columns = ['strategy', 'number_of_trades', &quot;capital&quot;]
print(results_df)

first_row_capital= results_df.capital.iloc[0]
indexer_capital_smaller= results_df.capital &lt; first_row_capital
values_to_delete= indexer_capital_smaller[indexer_capital_smaller].index
results_df.drop(index=values_to_delete, inplace=True)

#results to csv
current_dir = os.getcwd()
results_df.to_csv(os.getcwd()+'\\file.csv')
print(results_df)
</code></pre> <p>I think the main problem in your code might be that you write the csv inside the loop, each time you find an entry in the dataframe where capital satisfies the condition, and that you only write it at all if such a case is found.</p> <p>And if you just do the deletion for the csv output but don't need the dataframe in memory anymore, you can make it even simpler (note the <code>~</code>: we keep the rows that are <em>not</em> smaller, matching the drop above):</p> <pre><code>results_df = pd.DataFrame(results)
results_df.columns = ['strategy', 'number_of_trades', &quot;capital&quot;]
print(results_df)

first_row_capital= results_df.capital.iloc[0]
indexer_capital_smaller= results_df.capital &lt; first_row_capital

#results to csv
current_dir = os.getcwd()
results_df[~indexer_capital_smaller].to_csv(os.getcwd()+'\\file.csv')
print(results_df[~indexer_capital_smaller])
</code></pre> <p>This second variant only applies a filter before writing the filtered lines and before printing the content.</p>
python|pandas|csv|anaconda
1
4,025
59,433,626
Assign column values to unique row in pandas dataframe
<p>I have the following dataframe:</p> <pre><code>AA                  AB                  AC             AD           Col_1     Col_2
Col_3
Northeast Argentina Northeast Argentina South America  Corrientes   Misiones
Northern Argentina  Northern Argentina  South America  Chaco        Formosa   Santiago Del
</code></pre> <p>I want to convert it to:</p> <pre><code>AA                  AB                  AC             Col
Northeast Argentina Northeast Argentina South America  Corrientes
Northeast Argentina Northeast Argentina South America  Misiones
Northern Argentina  Northern Argentina  South America  Chaco
Northern Argentina  Northern Argentina  South America  Formosa
Northern Argentina  Northern Argentina  South America  Santiago Del
</code></pre> <p>i.e. I want to preserve the first columns but assign each of the remaining column values to a separate row. Is there a way to accomplish this without using a for loop?</p>
<p>You can try this:</p> <pre><code>df = df.melt(id_vars=['AA','AB','AC','AD']) df.dropna(inplace=True) df.drop(columns='variable', inplace=True) df = df.sort_values('AA').reset_index(drop=True) df.rename(columns={'value':'Col'}, inplace=True) AA AB AC AD Col 0 Northeast Argentina Northeast Argentina South America Corrientes 1 Northeast Argentina Northeast Argentina South America Misiones 2 Northern Argentina Northern Argentina South America Chaco 3 Northern Argentina Northern Argentina South America Formosa 4 Northern Argentina Northern Argentina South America Santiago Del </code></pre>
python|pandas
3
4,026
59,193,734
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [bool, float32] that don't all match
<p>I'm trying to reproduce the notebook for entity recognition using LSTM that I found at this link: <a href="https://medium.com/@rohit.sharma_7010/a-complete-tutorial-for-named-entity-recognition-and-extraction-in-natural-language-processing-71322b6fb090" rel="noreferrer">https://medium.com/@rohit.sharma_7010/a-complete-tutorial-for-named-entity-recognition-and-extraction-in-natural-language-processing-71322b6fb090</a></p> <p>When I try to train the model I get an error that I cannot understand (I'm quite new to tensorflow). In particular, the part of the code with the error is this one:</p> <pre><code>from keras.models import Model, Input
from keras.layers import LSTM, Embedding, Dense, TimeDistributed, Dropout, Bidirectional
from keras_contrib.layers import CRF

# Model definition
input = Input(shape=(MAX_LEN,))
model = Embedding(input_dim=n_words+2, output_dim=EMBEDDING, # n_words + 2 (PAD &amp; UNK)
                  input_length=MAX_LEN, mask_zero=True)(input)  # default: 20-dim embedding
model = Bidirectional(LSTM(units=50, return_sequences=True,
                           recurrent_dropout=0.1))(model)  # variational biLSTM
model = TimeDistributed(Dense(50, activation="relu"))(model)  # a dense layer as suggested by neuralNer
crf = CRF(n_tags+1)  # CRF layer, n_tags+1(PAD)
print(model)
out = crf(model)  # output

model = Model(input, out)
model.compile(optimizer="rmsprop", loss=crf.loss_function, metrics=[crf.accuracy])

model.summary()
</code></pre> <p>The error is on the line </p> <pre><code>out = crf(model)
</code></pre> <p>The error that I get is this:</p> <pre><code>TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [bool, float32] that don't all match.
</code></pre> <p>Can someone give me an explanation?</p>
<p>I also came across this problem today. What worked for me was to remove <code>mask_zero=True</code> from the embedding layer. Unfortunately, I don't know why this helps.</p>
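<p>For reference, here is a minimal sketch of the changed layer (only the embedding line differs; everything else is assumed to stay as in the question's code):</p> <pre><code># the only change: drop mask_zero=True from the Embedding layer
model = Embedding(input_dim=n_words+2, output_dim=EMBEDDING,  # n_words + 2 (PAD &amp; UNK)
                  input_length=MAX_LEN)(input)                # no mask_zero here
</code></pre> <p>A plausible reason, judging from the error message, is that the boolean mask produced by <code>mask_zero=True</code> gets concatenated with float32 tensors somewhere inside the CRF layer; removing the mask avoids that type mismatch, at the cost of the padding positions no longer being masked.</p>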
python|tensorflow|keras|lstm|named-entity-recognition
9
4,027
59,278,891
How to find the English and Chinese combination records in pandas dataframe
<p>In pandas, my data frame has 2 columns, "FirstName" and "LastName". The "FirstName" column can contain either English or Chinese text, and likewise the "LastName" column can contain either Chinese or English text. I want to display those records where the names are an English-Chinese combination.</p> <pre><code> code snippet:
 df.loc[df['FirstName'].str.contains(r'[a-zA-Z]+') &amp; df['FirstName'].str.contains(r'[一种-ž]+'))]
</code></pre> <p>I do not know whether this code snippet works or not.</p> <p>My input dataframe is: </p> <pre><code> FirstName    LastName
 jocovich     nadhal
 smith        pointing
 西德哈斯      supreet
 yuvi         雷迪
 bsreddy      rakshita
 sreeja       巴尔加维
 雷迪          西德哈斯
 Cédric       LEMARCHAND
 Radosław     Piotrowski
</code></pre> <p>Above is my data frame, but my required output is like below:</p> <pre><code> FirstName    LastName
 西德哈斯      supreet
 yuvi         雷迪
 sreeja       巴尔加维
</code></pre> <p>I want to display the English-Chinese or Chinese-English records from the dataframe.</p>
<p>You can search for the unicodes as i do here. You can inverse the matches as well:</p> <pre><code>df.query("FirstName.str.contains(r'[\u4e00-\u9FFF]', regex=True) or LastName.str.contains(r'[\u4e00-\u9FFF]', regex=True)") or df[(df['FirstName'].str.contains(r'[\u4e00-\u9FFF]', regex=True)) | ( df['LastName'].str.contains(r'[\u4e00-\u9FFF]', regex=True))] </code></pre> <p>or to not match both chinese first and last names as well:</p> <pre><code>df[((df['FirstName'].str.contains(r'[\u4e00-\u9FFF]', regex=True)) | ( df['LastName'].str.contains(r'[\u4e00-\u9FFF]', regex=True))) &amp; (~df['FirstName'].str.contains(r'[\u4e00-\u9FFF]', regex=True) | (~df['LastName'].str.contains(r'[\u4e00-\u9FFF]', regex=True)))] </code></pre> <p>output:</p> <pre><code> FirstName LastName 2 西德哈斯 supreet 3 yuvi 雷迪 5 sreeja 巴尔加维 </code></pre>
python|pandas|numpy|pandas-groupby
2
4,028
59,437,334
How to sample different number of rows from each group in DataFrame
<p>I have a dataframe with a category column. The df has a different number of rows for each category. </p> <pre><code>category    number_of_rows
cat1        19189
cat2        13193
cat3        4500
cat4        1914
cat5        568
cat6        473
cat7        216
cat8        206
cat9        197
cat10       147
cat11       130
cat12       49
cat13       38
cat14       35
cat15       35
cat16       30
cat17       29
cat18       9
cat19       4
cat20       4
cat21       1
cat22       1
cat23       1
</code></pre> <p>I want to select a different number of rows from each category (instead of a fixed number n of rows from each category).</p> <pre><code>Example input: size_1 : {"cat1": 40, "cat2": 20, "cat3": 15, "cat4": 11, ...}
Example input: size_2 : {"cat1": 51, "cat2": 42, "cat3": 18, "cat4": 21, ...}
</code></pre> <p>What I want to do is actually a stratified sampling with a given number of instances corresponding to each category. </p> <p>Also, it should be randomly selected. For example, I don't need the top 40 values for size_1["cat1"], I need 40 random values.</p> <p>Thanks for the help.</p>
<h2>Artificial data generation</h2> <hr /> <h3>Dataframe</h3> <p>Let's first generate some data to see how we can solve the problem:</p> <pre><code># Define a DataFrame containing employee data df = pd.DataFrame({'Category':['Jai', 'Jai', 'Jai', 'Princi', 'Princi'], 'Age':[27, 24, 22, 32, 15], 'Address':['Delhi', 'Kanpur', 'Allahabad', 'Kannauj', 'Noida'], 'Qualification':['Msc', 'MA', 'MCA', 'Phd', '10th']} ) </code></pre> <h3>Sampling rule</h3> <pre><code># Number of rows, that we want to be sampled from each category samples_per_group_dict = {'Jai': 1, 'Princi':2} </code></pre> <br> <br> <h2>Problem solving</h2> <hr /> <p>I can propose two solutions:</p> <ol> <li><p>Apply on groupby (one-liner)</p> <pre><code>output = df.groupby('Category').apply(lambda group: group.sample(samples_per_group_dict[group.name])).reset_index(drop = True) </code></pre> </li> <li><p>Looping groups (more verbose)</p> <pre><code>list_of_sampled_groups = [] for name, group in df.groupby('Category'): n_rows_to_sample = samples_per_group_dict[name] sampled_group = group.sample(n_rows_to_sample) list_of_sampled_groups.append(sampled_group) output = pd.concat(list_of_sampled_groups).reset_index(drop=True) </code></pre> </li> </ol> <p>Performance should be the same for both approaches. If performance matters you can vectorize your calculation. But exact optimization depends on n_groups and n_samples in each group.</p>
python|python-3.x|dataframe|random|pandas-groupby
11
4,029
59,390,946
Is there an easy way to lock some columns in pandas dataframe from being manipulated?
<p>I want to apply a function to every column except a couple that need to remain unchanged. The way I am doing it right now:</p> <ol> <li>Assign xxx columns to a variable</li> <li>Drop xxx columns from df</li> <li>Do some operation on df</li> <li>Merge variable to df</li> </ol> <p>Example:</p> <pre><code>cobId = combined.Id
cobSale = combined.SalePrice
combined = combined.drop(['Id', 'SalePrice'], axis = 1)

combined=(combined-combined.mean())/combined.std()

combined['Id'] = cobId
combined['SalePrice'] = cobSale
</code></pre> <p><strong>How to improve here?</strong></p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.difference.html" rel="nofollow noreferrer"><code>pd.Index.difference</code></a>:</p> <pre><code>cols = combined.columns.difference(['Id','SalePrice']) combined[cols] = combined[cols].sub(combined[cols].mean()).div(combined[cols].std()) print(combined) </code></pre> <hr> <p><strong>Here is an example:</strong></p> <pre><code>df = pd.DataFrame({'col1':[1,2],'col2':[3,4]}) print(df) col1 col2 0 1 3 1 2 4 </code></pre> <hr> <pre><code>cols = df.columns.difference(['col1']) df[cols] = df[cols].sub(df[cols].mean()).div(df[cols].std()) print(df) </code></pre> <hr> <p>We can also use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.update.html" rel="nofollow noreferrer"><code>DataFrame.update</code></a>:</p> <pre><code>df2=df.drop(axis=1,labels='col1') #df2=df[df.columns.difference(['col1'])] df2 = df2.sub(df2.mean()).div(df2.std()) df.update(df2) print(df) </code></pre> <p><strong>Output:</strong></p> <pre><code> col1 col2 0 1 -0.707107 1 2 0.707107 </code></pre>
python|pandas
1
4,030
59,287,621
Unexpected KeyError with for loop but not when done manually
<p>I have written a function that manually creates separate dataframes for each participant in the main dataframe. However, I'm trying to write it so that it's more automated as participants will be added to the dataframe in the future.</p> <p>My original function:</p> <pre><code>def separate_participants(main_df): S001 = main_df[main_df['participant'] == 'S001'] S001.name = "S001" S002 = main_df[main_df['participant'] == 'S002'] S002.name = "S002" S003 = main_df[main_df['participant'] == 'S003'] S003.name = "S003" S004 = main_df[main_df['participant'] == 'S004'] S004.name = "S004" S005 = main_df[main_df['participant'] == 'S005'] S005.name = "S005" S006 = main_df[main_df['participant'] == 'S006'] S006.name = "S006" S007 = main_df[main_df['participant'] == 'S007'] S007.name = "S007" participants = (S001, S002, S003, S004, S005, S006, S007) participant_names = (S001.name, S002.name, S003.name, S004.name, S005.name, S006.name, S007.name) return participants, participant_names </code></pre> <p>However, when I try and change this I get a KeyError for the name of the participant in the main_df. The code is as follows:</p> <pre><code>def separate_participants(main_df): participant_list = list(main_df.participant.unique()) participants = [] for participant in participant_list: name = participant temp_df = main_df[main_df[participant] == participant] name = temp_df participants.append(name) return participants </code></pre> <p>The error I get: KeyError: 'S001'</p> <p>I can't seem to figure out what I'm doing wrong, that means it works in the old function but not the new one. The length of the objects in the dataframe and the list are the same (4) so there are no extra characters.</p> <p>Any help/pointers would be greatly appreciated!</p>
<p>Thanks @Iguananaut for the answer:</p> <p>Your DataFrame has a column named 'participant' but you're indexing it with the value of the variable participant which is presumably not a column in your DataFrame. You probably wanted main_df['participant']. Most likely the KeyError came with a "traceback" leading back to the line temp_df = main_df[main_df[participant] == participant] which suggests you should examine it closely.</p>
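<p>For illustration, a sketch of the corrected function (note that the original loop also assigned <code>temp_df</code> to <code>name</code>, overwriting the participant label):</p> <pre><code>def separate_participants(main_df):
    participants = []
    for participant in main_df['participant'].unique():
        # index the 'participant' column, not a column named after the variable's value
        temp_df = main_df[main_df['participant'] == participant]
        temp_df.name = participant
        participants.append(temp_df)
    return participants
</code></pre>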
python|python-3.x|pandas|dataframe
1
4,031
44,959,020
How to find the mean of the value at an index in numpy
<p>Suppose I have a numpy array as show below and I want to calculate the mean of values at index 0 of each array (1,1,1) or index 3 (4,5,6). Is there a numpy function that can solve this? I tried numpy.mean, but it does not solve the issue.</p> <pre><code>[[1,2,3,4], [1,2,3,5], --&gt; = [(1+1+1)/3, (2+2+2)/3, (3+3+3)/3, (4+5+6)/3] --&gt; [1,2,3,5] [1,2,3,6]] </code></pre>
<pre><code>import numpy as np

a = np.array([[1, 2, 3, 4],
              [1, 2, 3, 5],
              [1, 2, 3, 6]])

np.mean(a, axis=0)
-&gt; array([ 1., 2., 3., 5.])
</code></pre> <p>The parameter <code>axis</code> lets you select the direction across which you want to calculate the mean.</p>
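<p>For comparison, averaging along the other axis gives the mean within each row instead:</p> <pre><code>np.mean(a, axis=1)
-&gt; array([ 2.5 , 2.75, 3.  ])
</code></pre>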
python|arrays|numpy
3
4,032
44,947,554
Python/Pandas - How to assign a value for each row which depends on cell values
<p>I have a data frame like this:</p> <pre><code>    x    y    z
0   AA   BB   CC
1   BB   NaN  CC
2   BB   AA   NaN
</code></pre> <p>and a dictionary:</p> <pre><code>d = {'AA': 1, 'BB': 2, 'CC': 3}
</code></pre> <p>I want to compare the values from each cell with the values from the dictionary and add a new column with the sum of these values for each row. As a result I need something like this: </p> <pre><code>    x    y    z    sum
0   AA   BB   CC   6
1   BB   NaN  CC   5
2   BB   AA   NaN  3
</code></pre> <p>I need the most efficient solution, any ideas?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="noreferrer"><code>replace</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="noreferrer"><code>sum</code></a> per row by <code>axis=1</code>, then convert to <code>int</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="noreferrer"><code>astype</code></a>:</p> <pre><code>print (df.replace(d))
   x    y    z
0  1  2.0  3.0
1  2  NaN  3.0
2  2  1.0  NaN

df['sum'] = df.replace(d).sum(axis=1).astype(int)
print (df)
    x    y    z  sum
0  AA   BB   CC    6
1  BB  NaN   CC    5
2  BB   AA  NaN    3
</code></pre>
python|pandas
4
4,033
45,038,037
Counting changes of value in each column in a data frame in pandas ignoring NaN changes
<p>I am trying to count the number of changes of value in each column in a data frame in pandas. The code I have works great except for NaNs: if a column contains two subsequent NaNs, it is counted as a change of value, which I don't want. How can I avoid that?</p> <p>I do as follows (thanks to <a href="https://stackoverflow.com/a/45024317/395857">unutbu's answer</a>):</p> <pre><code>import pandas as pd import numpy as np frame = pd.DataFrame({ 'time':[1234567000 , np.NaN, np.NaN], 'X1':[96.32,96.01,96.05], 'X2':[23.88,23.96,23.96] },columns=['time','X1','X2']) print(frame) changes = (frame.diff(axis=0) != 0).sum(axis=0) print(changes) changes = (frame != frame.shift(axis=0)).sum(axis=0) print(changes) </code></pre> <p>returns:</p> <pre><code> time X1 X2 0 1.234567e+09 96.32 23.88 1 NaN 96.01 23.96 2 NaN 96.05 23.96 time 3 X1 3 X2 2 dtype: int64 time 3 X1 3 X2 2 dtype: int64 </code></pre> <p>Instead, the results should be (notice the change in the time column):</p> <pre><code>time 2 X1 3 X2 2 dtype: int64 </code></pre>
<pre><code>change = (frame.fillna(0).diff() != 0).sum() </code></pre> <p>Output:</p> <pre><code>time 2 X1 3 X2 2 dtype: int64 </code></pre> <p>NaN are <a href="https://stackoverflow.com/questions/43925797/why-python-pandas-does-not-use-3-valued-logic">"truthy"</a>. Change NaN to zero then evaluate.</p> <pre><code>nan - nan = nan nan != 0 = True fillna(0) 0 - 0 = 0 0 != 0 = False </code></pre>
python|pandas|dataframe
3
4,034
45,099,544
Splitting Pandas dataframe on string properties index
<p>I'm trying to split a dataset into 2 types of datapoints. Currently I have a pandas dataframe with this format.</p> <pre><code>CS1001 True value1
CM1001 False value2
CS1002 True value3
</code></pre> <p>Now I would like to split this into an S and an M dataframe like this:</p> <p>S frame:</p> <pre><code>C1001 True value1
C1002 True value3
</code></pre> <p>M frame:</p> <pre><code>C1001 False value2
</code></pre> <p>Now I run into two problems. Firstly, I can't seem to group on the first 4 characters with this.</p> <pre><code>data.groupby(data.index[:4])
</code></pre> <p>Secondly, I can't edit the index value to remove the S/M. I have not used pandas before, so I feel like I'm overlooking an obvious solution, but I can't figure it out.</p>
<p>IIUC:</p> <pre><code>In [15]: data Out[15]: 1 2 CS1001 True value1 CM1001 False value2 CS1002 True value3 In [16]: data.groupby(data.index.str[:2]).groups Out[16]: {'CM': Index(['CM1001'], dtype='object'), 'CS': Index(['CS1001', 'CS1002'], dtype='object')} </code></pre> <p>Removing second letter from index values:</p> <pre><code>In [5]: df.index = df.index.str[:1] + df.index.str[2:] In [6]: df Out[6]: 1 2 C1001 True value1 C1001 False value2 C1002 True value3 </code></pre>
python|python-3.x|pandas
1
4,035
45,200,428
How to find intersection of a line with a mesh?
<p>I have trajectory data, where each trajectory consists of a sequence of coordinates(x, y points) and each trajectory is identified by a unique ID.</p> <p>These trajectories are in <strong>x - y</strong> plane, and I want to divide the whole plane into equal sized grid (square grid). This grid is obviously invisible but is used to divide trajectories into sub-segments. Whenever a trajectory intersects with a grid line, it is <strong>segmented</strong> there and becomes a new sub-trajectory with <strong>new_id</strong>.</p> <p>I have included a simple handmade graph to make clear what I am expecting.</p> <p><a href="https://i.stack.imgur.com/7gLGi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7gLGi.jpg" alt="enter image description here"></a></p> <p>It can be seen how the trajectory is divided at the intersections of the grid lines, and each of these segments has new unique id.</p> <p>I am working on Python, and seek some python implementation links, suggestions, algorithms, or even a pseudocode for the same.</p> <p>Please let me know if anything is unclear.</p> <p><strong>UPDATE</strong></p> <p>In order to divide the plane into grid , cell indexing is done as following:</p> <pre><code>#finding cell id for each coordinate #cellid = (coord / cellSize).astype(int) cellid = (coord / 0.5).astype(int) cellid Out[] : array([[1, 1], [3, 1], [4, 2], [4, 4], [5, 5], [6, 5]]) #Getting x-cell id and y-cell id separately x_cellid = cellid[:,0] y_cellid = cellid[:,1] #finding total number of cells xmax = df.xcoord.max() xmin = df.xcoord.min() ymax = df.ycoord.max() ymin = df.ycoord.min() no_of_xcells = math.floor((xmax-xmin)/ 0.5) no_of_ycells = math.floor((ymax-ymin)/ 0.5) total_cells = no_of_xcells * no_of_ycells total_cells Out[] : 25 </code></pre> <p>Since the plane is now divided into 25 cells each with a <strong>cellid</strong>. In order to find intersections, maybe I could check the next coordinate in the trajectory, if the <strong>cellid</strong> remains the same, then that segment of the trajectory is in the same cell and has no intersection with grid. Say, if x_cellid[2] is greater than x_cellid[0], then segment intersects vertical grid lines. Though, I am still unsure how to find the intersections with the grid lines and segment the trajectory on intersections giving them new id. </p>
<p>This can be solved by shapely:</p> <pre><code>%matplotlib inline import pylab as pl from shapely.geometry import MultiLineString, LineString import numpy as np from matplotlib.collections import LineCollection x0, y0, x1, y1 = -10, -10, 10, 10 n = 11 lines = [] for x in np.linspace(x0, x1, n): lines.append(((x, y0), (x, y1))) for y in np.linspace(y0, y1, n): lines.append(((x0, y), (x1, y))) grid = MultiLineString(lines) x = np.linspace(-9, 9, 200) y = np.sin(x)*x line = LineString(np.c_[x, y]) fig, ax = pl.subplots() for i, segment in enumerate(line.difference(grid)): x, y = segment.xy pl.plot(x, y) pl.text(np.mean(x), np.mean(y), str(i)) lc = LineCollection(lines, color="gray", lw=1, alpha=0.5) ax.add_collection(lc); </code></pre> <p>The result:</p> <p><a href="https://i.stack.imgur.com/Xvibi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Xvibi.png" alt="enter image description here"></a></p> <p>To not use shapely, and do it yourself:</p> <pre><code>import pylab as pl import numpy as np from matplotlib.collections import LineCollection x0, y0, x1, y1 = -10, -10, 10, 10 n = 11 xgrid = np.linspace(x0, x1, n) ygrid = np.linspace(y0, y1, n) x = np.linspace(-9, 9, 200) y = np.sin(x)*x t = np.arange(len(x)) idx_grid, idx_t = np.where((xgrid[:, None] - x[None, :-1]) * (xgrid[:, None] - x[None, 1:]) &lt;= 0) tx = idx_t + (xgrid[idx_grid] - x[idx_t]) / (x[idx_t+1] - x[idx_t]) idx_grid, idx_t = np.where((ygrid[:, None] - y[None, :-1]) * (ygrid[:, None] - y[None, 1:]) &lt;= 0) ty = idx_t + (ygrid[idx_grid] - y[idx_t]) / (y[idx_t+1] - y[idx_t]) t2 = np.sort(np.r_[t, tx, tx, ty, ty]) x2 = np.interp(t2, t, x) y2 = np.interp(t2, t, y) loc = np.where(np.diff(t2) == 0)[0] + 1 xlist = np.split(x2, loc) ylist = np.split(y2, loc) fig, ax = pl.subplots() for i, (xp, yp) in enumerate(zip(xlist, ylist)): pl.plot(xp, yp) pl.text(np.mean(xp), np.mean(yp), str(i)) lines = [] for x in np.linspace(x0, x1, n): lines.append(((x, y0), (x, y1))) for y in np.linspace(y0, y1, n): lines.append(((x0, y), (x1, y))) lc = LineCollection(lines, color="gray", lw=1, alpha=0.5) ax.add_collection(lc); </code></pre>
python|algorithm|numpy|grid|intersection
9
4,036
56,927,222
concatenate pandas dataframes with priority replacement of NaN
<p>I have data collected from a lineage of instruments with some overlap. I want to merge them to a single pandas data structure in a way where the newest available data for each column take precedence if not NaN, otherwise the older data are retained. </p> <p>The following code produces the intended output, but involves a lot of code for such a simple task. Additionally, the final step involves identifying duplicated index values, and I am nervous about whether I can rely on the "last" part because df.combine_first(other) reorders the data. Is there a more compact, efficient and/or predictable way to do this?</p> <pre><code># set up the data df0 = pd.DataFrame({"x": [0.,1.,2.,3.,4,],"y":[0.,1.,2.,3.,np.nan],"t" :[0,1,2,3,4]}) # oldest/lowest priority df1 = pd.DataFrame({"x" : [np.nan,4.1,5.1,6.1],"y":[3.1,4.1,5.1,6.1],"t": [3,4,5,6]}) df2 = pd.DataFrame({"x" : [8.2,10.2],"t":[8,10]}) df0.set_index("t",inplace=True) df1.set_index("t",inplace=True) df2.set_index("t",inplace=True) # this concatenates, leaving redundant indices in df0, df1, df2 dfmerge = pd.concat((df0,df1,df2),sort=True) print("dfmerge, with duplicate rows and interlaced NaN data") print(dfmerge) # Now apply, in priority order, each of the original dataframes to fill the original dfmerge2 = dfmerge.copy() for ddf in (df2,df1,df0): dfmerge2 = dfmerge2.combine_first(ddf) print("\ndfmerge2, fillable NaNs filled but duplicate indices now reordered") print(dfmerge2) # row order has changed unpredictably # finally, drop duplicate indices dfmerge3 = dfmerge2.copy() dfmerge3 = dfmerge3.loc[~dfmerge3.index.duplicated(keep='last')] print ("dfmerge3, final") print (dfmerge3) </code></pre> <p>The output of which is this:</p> <pre><code>dfmerge, with duplicate rows and interlaced NaN data x y t 0 0.0 0.0 1 1.0 1.0 2 2.0 2.0 3 3.0 3.0 4 4.0 NaN 3 NaN 3.1 4 4.1 4.1 5 5.1 5.1 6 6.1 6.1 8 8.2 NaN 10 10.2 NaN dfmerge2, fillable NaNs filled but duplicate indices now reordered x y t 0 0.0 0.0 1 1.0 1.0 2 2.0 2.0 3 3.0 3.0 3 3.0 3.1 4 4.0 4.1 4 4.1 4.1 5 5.1 5.1 6 6.1 6.1 8 8.2 NaN 10 10.2 NaN dfmerge3, final x y t 0 0.0 0.0 1 1.0 1.0 2 2.0 2.0 3 3.0 3.1 4 4.1 4.1 5 5.1 5.1 6 6.1 6.1 8 8.2 NaN 10 10.2 NaN </code></pre>
<p>In your case </p> <pre><code>s=pd.concat([df0,df1,df2],sort=False) s[:]=np.sort(s,axis=0) s=s.dropna(thresh=1) s x y t 0 0.0 0.0 1 1.0 1.0 2 2.0 2.0 3 3.0 3.0 4 4.0 3.1 3 4.1 4.1 4 5.1 5.1 5 6.1 6.1 6 8.2 NaN 8 10.2 NaN </code></pre>
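<p>As an alternative sketch (different from the sort trick above): since the frames are ordered from lowest to highest priority, chaining <code>combine_first</code> from the newest frame downwards gives the same prioritised fill without sorting:</p> <pre><code>s = df2.combine_first(df1).combine_first(df0)
</code></pre> <p>Each call keeps the caller's non-NaN values and only fills the gaps from the older frame, so the newest data always wins and the index order stays predictable.</p>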
python|pandas|numpy|dataframe|merge
0
4,037
56,922,718
How to strip columns from dataframe using pd.read_html and return output as a list
<p>I'm trying to get a list of stock symbols using the Pandas <code>read_html</code> function (instead of using Beautiful Soup to scrape the web). </p> <p>The website I'm referencing is:</p> <p><a href="https://en.wikipedia.org/wiki/List_of_S%26P_500_companies" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/List_of_S%26P_500_companies</a></p> <p>The desired output is:</p> <pre><code>['MMM', 'ABT', 'ABBV', 'ACN', 'ATVI' ... ] </code></pre> <p>My code is:</p> <pre><code>import pandas as pd url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies' df = pd.read_html(url)[0] #df.columns = df.iloc[0] df.drop(df.index[0], inplace=True) tickers = df['Symbol'].tolist() </code></pre> <p>The output of this code is a dataframe that looks as follows:</p> <pre><code>df.head() Symbol Security SEC filings GICS Sector GICS Sub Industry Headquarters Location Date first added CIK Founded 1 ABT Abbott Laboratories reports Health Care Health Care Equipment North Chicago, Illinois 1964-03-31 1800 1888 2 ABBV AbbVie Inc. reports Health Care Pharmaceuticals North Chicago, Illinois 2012-12-31 1551152 2013 (1888) 3 ABMD ABIOMED Inc reports Health Care Health Care Equipment Danvers, Massachusetts 2018-05-31 815094 1981 4 ACN Accenture plc reports Information Technology IT Consulting &amp; Other Services Dublin, Ireland 2011-07-06 1467373 1989 5 ATVI Activision Blizzard reports Communication Services Interactive Home Entertainment Santa Monica, California 2015-08-31 718877 2008 </code></pre> <p>If I uncomment <code>df.columns = df.iloc[0]</code>, then Pandas throws the following error message</p> <pre><code>KeyError: 'Symbol' </code></pre> <p>The line <code>df.iloc[0]</code> returns:</p> <pre><code>Symbol ABT Security Abbott Laboratories SEC filings reports GICS Sector Health Care GICS Sub Industry Health Care Equipment Headquarters Location North Chicago, Illinois Date first added 1964-03-31 CIK 1800 Founded 1888 </code></pre> <p>Which is not what I'm looking for (rather, the header row before this one that contains the 'Symbol' column).</p> <p>Does anyone see what I'm doing incorrectly here? Thanks!</p>
<p>Using <code>pandas</code> library to read html table data. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.tolist.html#pandas.Series.tolist" rel="nofollow noreferrer">tolist()</a> is used to convert a series to list.</p> <pre><code>import pandas as pd url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies' df = pd.read_html(url)[0] # filter table data by selected columns df = df[['Symbol']] tickers = df['Symbol'].tolist() print(tickers) </code></pre> <p>O/P:</p> <pre><code>['MMM', 'ABT', 'ABBV', 'ABMD', 'ACN', 'ATVI', 'ADBE', 'AMD', 'AAP', 'AES', 'AMG', 'AFL', 'A', 'APD', 'AKAM', 'ALK', 'ALB', 'ARE', 'ALXN', 'ALGN', 'ALLE', 'AGN', 'ADS', 'LNT', 'ALL', 'GOOGL', 'GOOG', 'MO', 'AMZN', 'AMCR', 'AEE', 'AAL', 'AEP', 'AXP', 'AIG', 'AMT', 'AWK', 'AMP', 'ABC', 'AME', 'AMGN', 'APH', 'APC', 'ADI', 'ANSS', 'ANTM', 'AON', 'AOS', 'APA', 'AIV', 'AAPL', 'AMAT', 'APTV', 'ADM', 'ARNC', 'ANET', 'AJG', 'AIZ', 'ATO', 'T', 'ADSK', 'ADP', 'AZO', 'AVB', 'AVY', 'BHGE', 'BLL', 'BAC', 'BK', 'BAX', 'BBT', 'BDX', 'BRK.B', 'BBY', 'BIIB', 'BLK', 'HRB', 'BA', 'BKNG', 'BWA', 'BXP', 'BSX', 'BMY', 'AVGO', 'BR', 'BF.B', 'CHRW', 'COG', 'CDNS', 'CPB', 'COF', 'CPRI', 'CAH', 'KMX', 'CCL', 'CAT', 'CBOE', 'CBRE', 'CBS', 'CE', 'CELG', 'CNC', 'CNP', 'CTL', 'CERN', 'CF', 'SCHW', 'CHTR', 'CVX', 'CMG', 'CB', 'CHD', 'CI', 'XEC', 'CINF', 'CTAS', 'CSCO', 'C', 'CFG', 'CTXS', 'CLX', 'CME', 'CMS', 'KO', 'CTSH', 'CL', 'CMCSA', 'CMA', 'CAG', 'CXO', 'COP', 'ED', 'STZ', 'COO', 'CPRT', 'GLW', 'CTVA', 'COST', 'COTY', 'CCI', 'CSX', 'CMI', 'CVS', 'DHI', 'DHR', 'DRI', 'DVA', 'DE', 'DAL', 'XRAY', 'DVN', 'FANG', 'DLR', 'DFS', 'DISCA', 'DISCK', 'DISH', 'DG', 'DLTR', 'D', 'DOV', 'DOW', 'DTE', 'DUK', 'DRE', 'DD', 'DXC', 'ETFC', 'EMN', 'ETN', 'EBAY', 'ECL', 'EIX', 'EW', 'EA', 'EMR', 'ETR', 'EOG', 'EFX', 'EQIX', 'EQR', 'ESS', 'EL', 'EVRG', 'ES', 'RE', 'EXC', 'EXPE', 'EXPD', 'EXR', 'XOM', 'FFIV', 'FB', 'FAST', 'FRT', 'FDX', 'FIS', 'FITB', 'FE', 'FRC', 'FISV', 'FLT', 'FLIR', 'FLS', 'FMC', 'FL', 'F', 'FTNT', 'FTV', 'FBHS', 'FOXA', 'FOX', 'BEN', 'FCX', 'GPS', 'GRMN', 'IT', 'GD', 'GE', 'GIS', 'GM', 'GPC', 'GILD', 'GPN', 'GS', 'GWW', 'HAL', 'HBI', 'HOG', 'HIG', 'HAS', 'HCA', 'HCP', 'HP', 'HSIC', 'HSY', 'HES', 'HPE', 'HLT', 'HFC', 'HOLX', 'HD', 'HON', 'HRL', 'HST', 'HPQ', 'HUM', 'HBAN', 'HII', 'IDXX', 'INFO', 'ITW', 'ILMN', 'IR', 'INTC', 'ICE', 'IBM', 'INCY', 'IP', 'IPG', 'IFF', 'INTU', 'ISRG', 'IVZ', 'IPGP', 'IQV', 'IRM', 'JKHY', 'JEC', 'JBHT', 'JEF', 'SJM', 'JNJ', 'JCI', 'JPM', 'JNPR', 'KSU', 'K', 'KEY', 'KEYS', 'KMB', 'KIM', 'KMI', 'KLAC', 'KSS', 'KHC', 'KR', 'LB', 'LHX', 'LH', 'LRCX', 'LW', 'LEG', 'LEN', 'LLY', 'LNC', 'LIN', 'LKQ', 'LMT', 'L', 'LOW', 'LYB', 'MTB', 'MAC', 'M', 'MRO', 'MPC', 'MKTX', 'MAR', 'MMC', 'MLM', 'MAS', 'MA', 'MKC', 'MXIM', 'MCD', 'MCK', 'MDT', 'MRK', 'MET', 'MTD', 'MGM', 'MCHP', 'MU', 'MSFT', 'MAA', 'MHK', 'TAP', 'MDLZ', 'MNST', 'MCO', 'MS', 'MOS', 'MSI', 'MSCI', 'MYL', 'NDAQ', 'NOV', 'NKTR', 'NTAP', 'NFLX', 'NWL', 'NEM', 'NWSA', 'NWS', 'NEE', 'NLSN', 'NKE', 'NI', 'NBL', 'JWN', 'NSC', 'NTRS', 'NOC', 'NCLH', 'NRG', 'NUE', 'NVDA', 'ORLY', 'OXY', 'OMC', 'OKE', 'ORCL', 'PCAR', 'PKG', 'PH', 'PAYX', 'PYPL', 'PNR', 'PBCT', 'PEP', 'PKI', 'PRGO', 'PFE', 'PM', 'PSX', 'PNW', 'PXD', 'PNC', 'PPG', 'PPL', 'PFG', 'PG', 'PGR', 'PLD', 'PRU', 'PEG', 'PSA', 'PHM', 'PVH', 'QRVO', 'PWR', 'QCOM', 'DGX', 'RL', 'RJF', 'RTN', 'O', 'RHT', 'REG', 'REGN', 'RF', 'RSG', 'RMD', 'RHI', 'ROK', 'ROL', 'ROP', 'ROST', 'RCL', 'CRM', 'SBAC', 'SLB', 'STX', 'SEE', 'SRE', 'SHW', 'SPG', 'SWKS', 'SLG', 'SNA', 'SO', 'LUV', 'SPGI', 
'SWK', 'SBUX', 'STT', 'SYK', 'STI', 'SIVB', 'SYMC', 'SYF', 'SNPS', 'SYY', 'TROW', 'TTWO', 'TPR', 'TGT', 'TEL', 'FTI', 'TFX', 'TXN', 'TXT', 'TMO', 'TIF', 'TWTR', 'TJX', 'TMK', 'TSS', 'TSCO', 'TDG', 'TRV', 'TRIP', 'TSN', 'UDR', 'ULTA', 'USB', 'UAA', 'UA', 'UNP', 'UAL', 'UNH', 'UPS', 'URI', 'UTX', 'UHS', 'UNM', 'VFC', 'VLO', 'VAR', 'VTR', 'VRSN', 'VRSK', 'VZ', 'VRTX', 'VIAB', 'V', 'VNO', 'VMC', 'WAB', 'WMT', 'WBA', 'DIS', 'WM', 'WAT', 'WEC', 'WCG', 'WFC', 'WELL', 'WDC', 'WU', 'WRK', 'WY', 'WHR', 'WMB', 'WLTW', 'WYNN', 'XEL', 'XRX', 'XLNX', 'XYL', 'YUM', 'ZBH', 'ZION', 'ZTS'] </code></pre>
python|pandas
2
4,038
56,987,018
How to encode a feature which has a list of categorical values in each row for training an machine learning model?
<p>I have a dataset where one feature's value is a list of categorical values. How can I encode it to train a model?</p> <p>For example, I have some data like:</p> <pre><code>feature1: [a, b, c]
feature2: [[category1, category2, category3],
           [category2],
           [category3, category4]]
</code></pre> <p>How do I encode feature2?</p>
<p>You can use <code>LabelEncoder</code> and <code>OneHotEncoder</code>:</p> <pre><code>from sklearn.preprocessing import LabelEncoder,OneHotEncoder

labelencoder_x=LabelEncoder()
X[:, 0]=labelencoder_x.fit_transform(X[:,0])
# note: categorical_features is deprecated/removed in newer scikit-learn versions
onehotencoder_x=OneHotEncoder(categorical_features=[0])
X=onehotencoder_x.fit_transform(X).toarray()
</code></pre> <p>I think from this you can get the idea.</p>
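<p>Since each row of feature2 holds a <em>list</em> of categories, a sketch with <code>MultiLabelBinarizer</code> (a different tool from the encoders above) may fit the shape of the data more directly:</p> <pre><code>from sklearn.preprocessing import MultiLabelBinarizer

feature2 = [['category1', 'category2', 'category3'],
            ['category2'],
            ['category3', 'category4']]

mlb = MultiLabelBinarizer()
encoded = mlb.fit_transform(feature2)  # one 0/1 indicator column per category
print(mlb.classes_)
print(encoded)
</code></pre>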
python|numpy|machine-learning|scikit-learn
0
4,039
56,917,161
Installing Keras and Tensorflow on AWS SageMaker
<p>I am trying to download Keras to my notebook instance on AWS SageMaker. The code and the errors or warnings are listed below:</p> <pre><code>from keras.models import Sequential #Sequential Models from keras.layers import Dense #Dense Fully Connected Layer Type from keras.optimizers import SGD #Stochastic Gradient Descent Optimizer from keras.callbacks import EarlyStopping from keras.wrappers.scikit_learn import KerasClassifier </code></pre> <p>Error:</p> <pre><code>ModuleNotFoundError: No module named 'tensorflow' </code></pre> <p>Then I tried to download Tensorflow:</p> <pre><code>!pip install tensorflow </code></pre> <p>Installation is completed with the following note:</p> <pre><code>Installing collected packages: wrapt, tensorflow Found existing installation: wrapt 1.10.11 Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. You are using pip version 10.0.1, however version 19.1.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. </code></pre> <p>Then I try to uninstall wrapt but still having the same issue. Did anyone have the same issue? And is this SageMaker related issue? I also tried to run the below line but didn't change:</p> <pre><code>from sagemaker.tensorflow import TensorFlow </code></pre>
<p>Try this:</p> <pre><code>!pip install tensorflow -t ./
</code></pre> <p>It will install tensorflow in your current directory.</p>
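<p>A related pattern (a general Jupyter sketch, not specific to this answer) is to install into the exact environment the notebook kernel runs on, which avoids pip targeting a different Python than the one importing the package:</p> <pre><code>import sys
!{sys.executable} -m pip install tensorflow keras
</code></pre>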
tensorflow|keras|amazon-sagemaker
2
4,040
57,278,231
Cannot store an array using dask
<p>I am using the following code to create an array and store the results sequentially in hdf5 format. I was checking out the dask documentation, and they suggested using dask.store to store arrays generated in a function like mine. However, I receive an error: <code>dask has no attribute store</code></p> <p>My code: </p> <pre><code>import os
import numpy as np
import time
import concurrent.futures
import multiprocessing
from itertools import product
import h5py
import dask as da

def mean_py(array):
    start_time = time.time()
    x = array.shape[1]
    y = array.shape[2]
    values = np.empty((x,y), type(array[0][0][0]))
    for i in range(x):
        for j in range(y):
            values[i][j] = ((np.mean(array[:,i,j])))
    end_time = time.time()
    hours, rem = divmod(end_time-start_time, 3600)
    minutes, seconds = divmod(rem,60)
    print("{:0&gt;2}:{:0&gt;2}:{:05.2f}".format(int(hours), int(minutes), int(seconds)))
    print(f"{'.'*80}")
    return values

def generate_random_array():
    a = np.random.randn(120560400).reshape(10980,10980)
    return a

def generate_array(nums):
    for num in range(nums):
        a = generate_random_array()
        f = h5py.File('test_db.hdf5')
        d = f.require_dataset('/data', shape=a.shape, dtype=a.dtype)
        da.store(a, d)

start = time.time()
generate_array(8)
end = time.time()
print(f'\nTime complete: {end-start:.2f}s\n')
</code></pre> <p>Should I use dask for such a task, or do you recommend storing the results using h5py directly? Please ignore the mean_py(array) function. It's for something I want to try out once the data has been produced.</p>
<p>As suggested in the comments, you're currently doing this</p> <pre><code>import dask as da </code></pre> <p>When you probably meant to do this</p> <pre><code>import dask.array as da </code></pre>
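<p>A minimal sketch of the corrected call inside <code>generate_array</code> (assuming the rest of the question's code is unchanged; note that <code>da.store</code> expects a dask array as the source, so the numpy array is wrapped first):</p> <pre><code>import dask.array as da

f = h5py.File('test_db.hdf5')
d = f.require_dataset('/data', shape=a.shape, dtype=a.dtype)
da.store(da.from_array(a, chunks="auto"), d)  # wrap the numpy array as a dask array
</code></pre>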
numpy|dask|h5py
1
4,041
35,638,151
Populating new numpy array with another's values in for loop
<p>I'm doing something stupid with <code>numpy</code>. Trying to populate one <code>numpy</code> array with values from another via a for loop, like this: </p> <pre><code>for i in range(0,9998):
    a[i] = b[i] * c[i]
</code></pre> <p>I'm getting the following error: </p> <blockquote> <p>"TypeError: 'numpy.float64' object does not support item assignment"</p> </blockquote> <p>Btw I need this loop (as opposed to just multiplying the arrays without indexes) because b and c are longer arrays than I want a to be, and I couldn't find an elegant way of making numpy arrays shorter. Thanks in advance for advice!</p>
<p>I suspect you are doing something like this:</p> <pre><code>In [281]: a=np.float64(0)

In [282]: a[0]=2
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
&lt;ipython-input-282-ed5200211ec0&gt; in &lt;module&gt;()
----&gt; 1 a[0]=2

TypeError: 'numpy.float64' object does not support item assignment
</code></pre> <p>You need to start with something like</p> <pre><code>a = np.zeros(10000, dtype=float)
</code></pre> <p>if you want that sort of loop to work.</p>
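<p>As a side note on the original goal: slicing gives an elegant way to "shorten" the longer arrays, so the whole loop collapses to a single vectorised line (a sketch using the sizes from the question):</p> <pre><code>a = b[:9998] * c[:9998]   # elementwise product of the first 9998 entries
</code></pre>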
python|numpy
0
4,042
50,950,638
Filter down the dataframe based on condition that value in a column is a whole number (integer)
<p>I am struggling to filter my dataframe so that I have only rows with whole numbers (like 21.00). I saw one similar QA (<a href="https://stackoverflow.com/questions/40370382/creating-new-column-in-pandas-df-populated-by-true-false-depending-on-whether-an/40370477">Creating new column in pandas df populated by True,False depending on whether another column is a whole number</a>), but it is not what I want to achieve. I tried float.is_integer(), but this is not a method of Series and it would have to be applied element-wise with a for loop.</p> <p>In my dataframe I have a column like this:</p> <pre><code>index value
0     43.00
1     23.47
2     5.31
3     349.00
</code></pre> <p>and I want to extract only rows that contain whole numbers, so in the case above I want only the rows with values 43.00 and 349.00.</p> <p>How can it be done without using for loops or adding a new column with an indicator variable for whether the value is a whole number? </p> <p>My dataframe has tens of millions of rows so I'd rather avoid using loops or adding another column if possible.</p>
<p>You can use a Boolean series to filter a dataframe:</p> <pre><code>res = df[df['value'].map(lambda x: x.is_integer())] print(res) index value 0 0 43.0 3 3 349.0 </code></pre> <p>For performance, you may wish to compare a series against an integer version of itself:</p> <pre><code>res = df[df['value'] == df['value'].astype(int)] </code></pre> <p><strong>Performance benchmarking</strong></p> <p>The cost is dominated by construction of the Boolean series.</p> <pre><code>df2 = pd.concat([df]*100000) %timeit df2['value'].values % 1 == 0.0 # 20.8 ms per loop %timeit df2['value'] == df2['value'].astype(int) # 2.59 ms per loop %timeit df2['value'].map(lambda x: x.is_integer()) # 195 ms per loop %timeit ~(df2['value'] % 1).astype(bool) # 23.3 ms per loop %timeit df2['value'] % 1 == 0.0 # 21.8 ms per loop </code></pre> <p>Versions:</p> <pre><code>sys.version # '3.6.0' pd.__version__ # '0.19.2' np.__version__ # '1.11.3' </code></pre>
python|pandas|dataframe|floating-point|series
3
4,043
50,773,874
Functions in Python with or without parentheses?
<p>In Python, there are functions that need parentheses and some that don't, e.g. consider the following example:</p> <pre><code>a = numpy.arange(10) print(a.size) print(a.var()) </code></pre> <p>Why does the size function not need to be written with parentheses, as opposed to the variance function? Is there a general scheme behind this or do you just have to memorize it for every function?</p> <p>Also, there are functions that are written before the argument (as opposed to the examples above), like</p> <pre><code>a = numpy.arange(10) print(np.round_(a)) </code></pre> <p>Why not write <code>a.round_</code> or <code>a.round_()</code>?</p>
<p>It sounds like you're confused with 3 distinct concepts, which are not specific to python, rather to (object oriented) programming.</p> <ul> <li><strong>attributes</strong> are values, characteristics of an object. Like <code>array.shape</code></li> <li><strong>methods</strong> are functions an object can run, actions it can perform. <code>array.mean()</code></li> <li><strong>static methods</strong> are functions which are inherent to a class of objects, but don't need an object to be executed like <code>np.round_()</code></li> </ul> <p>It sounds like you should look into OOP: <a href="https://realpython.com/instance-class-and-static-methods-demystified/#instance-class-and-static-methods-an-overview" rel="nofollow noreferrer">here is a python primer on methods</a>.</p> <hr> <p>Also, a more pythonic and specific kind of attributes are <a href="https://www.programiz.com/python-programming/property" rel="nofollow noreferrer"><code>property</code></a>s. They are methods (of an object) which are not called with <code>()</code>. Sounds a bit weird but can be useful; look into it.</p>
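<p>A quick illustration of the three cases, using the names from the question (a small sketch for clarity):</p> <pre><code>import numpy as np

a = np.arange(10)

a.size        # attribute: a stored value, no parentheses
a.var()       # method: an action the object performs, needs parentheses
np.round_(a)  # function living in the numpy namespace, called on the array
</code></pre>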
python|numpy|syntax
4
4,044
33,248,117
How to group a dataframe by some transform of a column
<p>Is there a way to group the rows of a dataframe not by the value of some column, but rather by the result of applying some function to the value of that column? For example, to group the rows of the dataframe according to whether the value of a certain column is &gt; 0 or &le; 0.</p> <p>Of course, I realize that one can always create an auxiliary column to hold the result of the transform, and use this auxiliary column as the argument to <code>groupby</code>. My question here is whether there's a way to do the same thing without needing to create an auxiliary column.</p>
<p>The example you give is pretty simple:</p> <pre><code>import numpy import pandas numpy.random.seed(0) N = 15 df = pandas.DataFrame({ 'A': numpy.arange(N), 'B': numpy.round(numpy.random.normal(size=N), 2) }) print(df.to_string()) A B 0 0 1.76 1 1 0.40 2 2 0.98 3 3 2.24 4 4 1.87 5 5 -0.98 6 6 0.95 7 7 -0.15 8 8 -0.10 9 9 0.41 10 10 0.14 11 11 1.45 12 12 0.76 13 13 0.12 14 14 0.44 </code></pre> <p>So then I can group by the comparison of column A to 10:</p> <pre><code>df.groupby(by=df['A'] &lt; 10).sum() A B A False 60 2.91 True 45 7.38 </code></pre> <p>The <code>by</code> statement can be more complex (i.e., return any number of values):</p> <pre><code>classifier = {0: 'old', 1: 'busted', 2: 'hotness'} df.groupby(by=(df['A'] % 3).map(classifier)).sum() A B A old 30 6.12 busted 35 2.38 hotness 40 1.79 </code></pre>
python|pandas
3
4,045
33,177,274
Pandas: Most efficient way to make dictionary of dictionaries from DataFrame columns
<hr> <pre><code>import pandas as pd
import numpy as np
import random

labels = ["c1","c2","c3"]
c1 = ["one","one","one","two","two","three","three","three","three"]
c2 = [random.random() for i in range(len(c1))]
c3 = ["alpha","beta","gamma","alpha","gamma","alpha","beta","gamma","zeta"]
DF = pd.DataFrame(np.array([c1,c2,c3])).T
DF.columns = labels
</code></pre> <p>DataFrame looks like:</p> <pre><code>    c1     c2              c3
0   one    0.440958516531  alpha
1   one    0.476439953723  beta
2   one    0.254235673552  gamma
3   two    0.882724336464  alpha
4   two    0.79817899139   gamma
5   three  0.677464637887  alpha
6   three  0.292927670096  beta
7   three  0.0971956881825 gamma
8   three  0.993934915508  zeta
</code></pre> <p>The only way I could think of to make the dictionary was:</p> <pre><code>D_greek_value = {}
for greek in set(DF["c3"]):
    D_c1_c2 = {}
    for i in range(DF.shape[0]):
        row = DF.iloc[i,:]
        if row[2] == greek:
            D_c1_c2[row[0]] = row[1]
    D_greek_value[greek] = D_c1_c2
D_greek_value
</code></pre> <p>The resulting dictionary looks like this:</p> <pre><code>{'alpha': {'one': '0.67919712421',
  'three': '0.67171020684',
  'two': '0.571150669821'},
 'beta': {'one': '0.895090207979', 'three': '0.489490074662'},
 'gamma': {'one': '0.964777504708',
  'three': '0.134397632659',
  'two': '0.10302290374'},
 'zeta': {'three': '0.0204226923557'}}
</code></pre> <p>I don't want to assume c1 will come in blocks ("one" being together every time). I'm doing this on a csv that is a few hundred MB and I feel like I'm doing it all wrong. Please help if you have any ideas!</p>
<p>IIUC, you could take advantage of <code>groupby</code> to handle most of the work:</p> <pre><code>&gt;&gt;&gt; result = df.groupby("c3")[["c1","c2"]].apply(lambda x: dict(x.values)).to_dict() &gt;&gt;&gt; pprint.pprint(result) {'alpha': {'one': 0.440958516531, 'three': 0.677464637887, 'two': 0.8827243364640001}, 'beta': {'one': 0.47643995372299996, 'three': 0.29292767009599996}, 'gamma': {'one': 0.254235673552, 'three': 0.0971956881825, 'two': 0.79817899139}, 'zeta': {'three': 0.993934915508}} </code></pre> <hr> <p>Some explanation: first we group by c3, and select the columns c1 and c2. This gives us the groups we want to turn into dictionaries:</p> <pre><code>&gt;&gt;&gt; grouped = df.groupby("c3")[["c1", "c2"]] &gt;&gt;&gt; grouped.apply(lambda x: print(x,"\n","--")) # just for display purposes c1 c2 0 one 0.679926178687387 3 two 0.11495090934413166 5 three 0.7458197179794177 -- c1 c2 0 one 0.679926178687387 3 two 0.11495090934413166 5 three 0.7458197179794177 -- c1 c2 1 one 0.12943266757277916 6 three 0.28944292691097817 -- c1 c2 2 one 0.36642834809341274 4 two 0.5690944224514624 7 three 0.7018221838129789 -- c1 c2 8 three 0.7195852795555373 -- </code></pre> <p>Given any of these subframes, say the next-to-last, we need to come up with a way to turn it into a dictionary. For example:</p> <pre><code>&gt;&gt;&gt; d3 c1 c2 2 one 0.366428 4 two 0.569094 7 three 0.701822 </code></pre> <p>If we try <code>dict</code> or <code>to_dict</code>, we don't get what we want because the indices and column labels get in the way:</p> <pre><code>&gt;&gt;&gt; dict(d3) {'c1': 2 one 4 two 7 three Name: c1, dtype: object, 'c2': 2 0.366428 4 0.569094 7 0.701822 Name: c2, dtype: float64} &gt;&gt;&gt; d3.to_dict() {'c1': {2: 'one', 4: 'two', 7: 'three'}, 'c2': {2: 0.36642834809341279, 4: 0.56909442245146236, 7: 0.70182218381297889}} </code></pre> <p>But we can ignore this by dropping down to the underlying data with <code>.values</code>, and then that can be passed into <code>dict</code>:</p> <pre><code>&gt;&gt;&gt; d3.values array([['one', 0.3664283480934128], ['two', 0.5690944224514624], ['three', 0.7018221838129789]], dtype=object) &gt;&gt;&gt; dict(d3.values) {'three': 0.7018221838129789, 'one': 0.3664283480934128, 'two': 0.5690944224514624} </code></pre> <p>So if we apply this we get a Series with the indices as the c3 keys we want and the values as dictionaries, and that we can turn into a dictionary using <code>.to_dict()</code>:</p> <pre><code>&gt;&gt;&gt; result = df.groupby("c3")[["c1", "c2"]].apply(lambda x: dict(x.values)) &gt;&gt;&gt; result c3 alpha {'three': '0.7458197179794177', 'one': '0.6799... beta {'one': '0.12943266757277916', 'three': '0.289... gamma {'three': '0.7018221838129789', 'one': '0.3664... zeta {'three': '0.7195852795555373'} dtype: object &gt;&gt;&gt; result.to_dict() {'zeta': {'three': '0.7195852795555373'}, 'gamma': {'three': '0.7018221838129789', 'one': '0.36642834809341274', 'two': '0.5690944224514624'}, 'beta': {'one': '0.12943266757277916', 'three': '0.28944292691097817'}, 'alpha': {'three': '0.7458197179794177', 'one': '0.679926178687387', 'two': '0.11495090934413166'}} </code></pre>
python|pandas|hash|machine-learning|dataframe
4
4,046
33,431,286
Float required in list output
<p>I am trying to create a custom filter to run it with the generic filter from SciPy package.</p> <blockquote> <p>scipy.ndimage.filters.generic_filter</p> </blockquote> <p>The problem is that I don't know how to get the returned value to be a scalar, as it needs for the generic function to work. I read through these threads (bottom), but I can't find a way for my function to perform. </p> <p>The code is this:</p> <pre><code>import scipy.ndimage as sc def minimum(window): list = [] for i in range(window.shape[0]): window[i] -= min(window) list.append(window[i]) return list test = np.ones((10, 10)) * np.arange(10) result = sc.generic_filter(test, minimum, size=3) </code></pre> <p>It gives the error: </p> <pre><code>cval, origins, extra_arguments, extra_keywords) TypeError: a float is required </code></pre> <p><a href="https://stackoverflow.com/questions/28774642/scipy-filter-with-multi-dimensional-or-non-scalar-output">Scipy filter with multi-dimensional (or non-scalar) output</a></p> <p><a href="https://stackoverflow.com/questions/14059529/how-to-apply-ndimage-generic-filter">How to apply ndimage.generic_filter()</a></p> <p><a href="http://ilovesymposia.com/2014/06/24/a-clever-use-of-scipys-ndimage-generic_filter-for-n-dimensional-image-processing/" rel="nofollow noreferrer">http://ilovesymposia.com/2014/06/24/a-clever-use-of-scipys-ndimage-generic_filter-for-n-dimensional-image-processing/</a></p>
<p>If I understand, you want to subtract from each pixel the min of its 3-horizontal neighbourhood. It's not good practice to do that with lists, because numpy is for efficiency (~100 times faster). The simplest way to do that is just:</p> <pre><code>import numpy as np
import scipy.ndimage as sc

test - sc.generic_filter(test, np.min, size=3)
</code></pre> <p>Then the subtraction is vectorized on the whole array. You can also do:</p> <pre><code>test - np.min([np.roll(test,1), np.roll(test,-1), test], axis=0)
</code></pre> <p>10 times faster, if you accept the artefact at the border.</p>
python|numpy|filtering
2
4,047
33,143,810
Pandas histogram from count of columns
<p>I have a big dataframe that consists of about 6500 columns, where one is a class label and the rest are boolean values of either 0 or 1; the dataframe is sparse.</p> <p>Example:</p> <pre><code>df = pd.DataFrame({
    'label' : ['a', 'b', 'c', 'b','a', 'c', 'b', 'a'],
    'x1' : np.random.choice(2, 8),
    'x2' : np.random.choice(2, 8),
    'x3' : np.random.choice(2, 8)})
</code></pre> <p>What I want is a report (preferably in pandas so I can plot it easily) that shows me the sum of unique elements of the columns grouped by the label.</p> <p>So for example this data frame:</p> <pre><code>   x1  x2  x3 label
0   0   1   1     a
1   1   0   1     b
2   0   1   0     c
3   1   0   0     b
4   1   1   1     a
5   0   0   1     c
6   1   0   0     b
7   0   1   0     a
</code></pre> <p>The result should be something like this:</p> <pre><code>a: 3 (since it has x1, x2 and x3)
b: 2 (since it has x1, x3)
c: 2 (since it has x2, x3)
</code></pre> <p>So it's kind of a count of which columns are present in each label. Think of a histogram where the x-axis is the <code>label</code> and the y-axis the <code>number of columns</code>.</p>
<p>You could try pivoting:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({ 'label' : ['a', 'b', 'c', 'b','a', 'c', 'b', 'a'], 'x1' : np.random.choice(2, 8), 'x2' : np.random.choice(2, 8), 'x3' : np.random.choice(2, 8)}) pd.pivot_table(df, index='label').transpose().apply(np.count_nonzero) </code></pre> <p>For df:</p> <pre><code>label x1 x2 x3 0 a 0 0 0 1 b 0 1 0 2 c 1 0 1 3 b 0 1 0 4 a 1 1 1 5 c 1 0 1 6 b 0 1 0 7 a 1 1 1 </code></pre> <p>The result is:</p> <pre><code>label a 3 b 1 c 2 dtype: int64 </code></pre>
python|pandas
3
4,048
66,341,349
How to group and sum certain columns of an array based on their classification (eg to group cities by country)
<h2>The issue</h2> <p><strong>I have arrays which track certain items over time. The items belong to certain categories. I want to calculate the sum by time and category, e.g. to go from a table by time and city to one by time and country.</strong></p> <p><strong>I have found a couple of ways, but they seem clunky - there must be a better way!</strong> Surely I'm not the first one with this issue? Maybe using <code>np.where</code>?</p> <p>More specifically:</p> <p>I have a number of numpy arrays of shape (p x i), where p is the period and i is the item I am tracking over time. I then have a separate array of shape i which classifies the items into categories (red, green, yellow, etc.).</p> <p>What I want to do is calculate an array of shape (p x number of unique categories) which sums the values of the big array by time and category. In pictures:</p> <p><a href="https://i.stack.imgur.com/i8JRb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i8JRb.png" alt="enter image description here" /></a></p> <p><strong>I'd need the code to be as efficient as possible as I need to do this multiple times on arrays which can be up to 400 x 1,000,000</strong></p> <h2>What I have tried:</h2> <p>This <a href="https://stackoverflow.com/questions/4373631/sum-array-by-number-in-numpy">question</a> covers a number of ways to groupby without resorting to pandas. I like the scipy.ndimage approach, but AFAIK it works on one dimension only.</p> <p>I have tried a solution with <code>pandas</code>:</p> <ul> <li>I create a dataframe of shape periods x items</li> <li>I unpivot it with <code>pd.melt()</code>, join the categories and do a crosstab period/categories</li> </ul> <p>I have also tried a set of loops, optimised with <code>numba</code>:</p> <ul> <li>A first loop creates an array which converts the categories into integers, i.e. 
the first category in alphabetical order becomes 0, the 2nd 1, etc</li> <li>A second loop iterates through all the items, then for each item it iterates through all the periods and sums by category</li> </ul> <h2>My findings</h2> <ul> <li>for small arrays, pandas is faster</li> <li>for large arrays, numba is better, but it's better to set <code>parallel = False</code> in the numba decorator</li> <li>for very large arrays, numba with <code>parallel = True</code> shines. <code>parallel = True</code> makes use of numba's parallelisation by using <code>numba.prange</code> on the outer loops.</li> </ul> <p><a href="https://i.stack.imgur.com/o1Vcg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o1Vcg.png" alt="enter image description here" /></a></p> <p>PS I am aware of the pitfalls of premature optimisation etc. - I am only looking into this because a significant amount of time is spent doing precisely this</p> <h2>The code</h2> <pre><code>import numpy as np
import pandas as pd
import time
import numba

periods = 300
n = int(2000)
categories = np.tile(['red','green','yellow','brown'],n)

my_array = np.random.randint(low = 0, high = 10, size = (periods, len(categories) ))
# my_arrays will have shape (periods x (n * number of categories))

#---- pandas
start = time.time()
df_categories = pd.DataFrame(data = categories).reset_index().rename(columns ={'index':'item',0:'category'})
df = pd.DataFrame(data = my_array)
unpiv = pd.melt(df.reset_index(), id_vars ='index', var_name ='item', value_name ='value').rename( columns = {'index':'time'})
unpiv = pd.merge(unpiv, df_categories, on='item' )
crosstab = pd.crosstab( unpiv['time'], unpiv['category'], values = unpiv['value'], aggfunc='sum' )
print(&quot;panda crosstab in:&quot;)
print(time.time() - start)
# yep, I know that timeit.timer would have been better, but I was in a hurry :)
print(&quot;&quot;)

#---- numba
@numba.jit(nopython = True, parallel = True, nogil = True)
def numba_classify(x, categories):
    cat_uniq = np.unique(categories)
    num_categories = len(cat_uniq)
    num_items = x.shape[1]
    periods = x.shape[0]
    categories_converted = np.zeros(len(categories), dtype = np.int32)
    out = np.zeros(( periods, num_categories))

    # before running the actual classification, I must convert the categories, which can be strings, to
    # the corresponding number in cat_uniq, e.g. if brown is the first category by alphabetical sorting, then
    # brown --&gt; 0, etc
    for i in numba.prange(num_items):
        for c in range(num_categories):
            if categories[i] == cat_uniq[c]:
                categories_converted[i] = c

    for i in numba.prange(num_items):
        for p in range(periods):
            out[ p, categories_converted[i] ] += x[p,i]

    return out

start = time.time()
numba_out = numba_classify(my_array, categories)
print(&quot;numba done in:&quot;)
print(time.time() - start)
</code></pre>
<p>You can use <code>df.groupby(categories, axis=1).sum()</code> for a substantial speedup.</p>

<pre><code>import numpy as np
import pandas as pd
import time


def make_data(periods, n):
    categories = np.tile(['red','green','yellow','brown'],n)
    my_array = np.random.randint(low = 0, high = 10, size = (periods, len(categories) ))
    return categories, pd.DataFrame(my_array)


for n in (200, 2000, 20000):
    categories, df = make_data(300, n)
    true_n = n * 4
    start = time.time()
    tabulation = df.groupby(categories, axis=1).sum()
    elapsed = time.time() - start
    print(f&quot;300 x {true_n:5}: {elapsed:.3f} seconds&quot;)

# prints:

300 x   800: 0.005 seconds
300 x  8000: 0.021 seconds
300 x 80000: 0.673 seconds
</code></pre>
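<p>A note for readers on recent pandas versions, where <code>groupby(..., axis=1)</code> is deprecated: a minimal sketch of the equivalent tabulation via a transpose would be:</p>

<pre><code># group the transposed frame's rows by the external category array, then transpose back
tabulation = df.T.groupby(categories).sum().T
</code></pre>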
python|pandas|dataframe|numba
1
4,049
16,178,471
Numpy running at half the speed of MATLAB
<p>I've been porting MATLAB code over to Python and, after quite a lot of work, I have stuff that works. The downside, however, is that Python is running my code more slowly than MATLAB did. I understand that using optimised ATLAS libraries will speed things up, but actually implementing this is confusing me. Here's what's going on:</p> <p>I start an ipython session with no BLAS installed:</p> <pre><code>import numpy.distutils.system_info as sysinfo
import time

In [11]: sysinfo.get_info('atlas')
Out[11]: {}

timeit( eig(randn(1E2,1E2)) )
100 loops, best of 3: 13.4 ms per loop
</code></pre> <p>The same code in Matlab runs twice as fast</p> <pre><code>tic,eig(randn(1E2));toc*1000
6.5650
</code></pre> <p>I install the non-optimised ATLAS deb from the Ubuntu repository. Re-start ipython and now I get:</p> <pre><code>In [2]: sysinfo.get_info('atlas')
...
Out[2]:
{'define_macros': [('ATLAS_INFO', '"\\"3.8.4\\""')],
 'include_dirs': ['/usr/include/atlas'],
 'language': 'f77',
 'libraries': ['lapack', 'f77blas', 'cblas', 'atlas'],
 'library_dirs': ['/usr/lib/atlas-base/atlas', '/usr/lib/atlas-base']}
</code></pre> <p>And the test code:</p> <pre><code>In [4]: timeit( eig(randn(1E2,1E2)) )
100 loops, best of 3: 16.8 ms per loop
</code></pre> <p>So no faster. If anything a touch slower. But I haven't yet switched to the optimised BLAS. I follow these instructions: <a href="http://danielnouri.org/notes/category/python/">http://danielnouri.org/notes/category/python/</a> I build the libraries and overwrite the non-optimised versions with these. I re-start ipython but there's no change:</p> <pre><code>In [4]: timeit( eig(randn(1E2,1E2)) )
100 loops, best of 3: 15.3 ms per loop
</code></pre> <p>Can't it get better than this? MATLAB is still twice as fast in this simple example. In a real-world example where I'm doing image registration in the Fourier domain, the Matlab equivalent is 4 to 5 times faster than the Python version. Has anyone managed to get Numpy working at MATLAB speeds?</p>
<h2>Simple example</h2> <p>Numpy is calculating both the eigenvectors and eigenvalues, so it will take roughly twice as long, which is consistent with your slowdown (use <code>np.linalg.eigvals</code> to compute only the eigenvalues).</p> <p>In the end, <code>np.linalg.eig</code> is a tiny wrapper around dgeev, and likely the same thing happens in Matlab, which is using MKL.</p> <p>To get virtually the same speed in linear algebra, you could build Numpy against MKL or OpenBLAS. There are some commercial offers (maybe free for academics) from <a href="https://store.continuum.io/cshop/accelerate" rel="nofollow noreferrer">Continuum</a> or <a href="https://www.enthought.com/products/canopy/faq/" rel="nofollow noreferrer">Enthought</a>. You could also get MKL and build Numpy <a href="http://software.intel.com/en-us/articles/numpy-scipy-with-mkl" rel="nofollow noreferrer">yourself</a>.</p> <h2>Real-world example</h2> <p>4x slower seems like too much (I have rewritten some Matlab code in Numpy and both programs performed in a very similar way). Take into account that recent Matlab versions come with a simple JIT, so loops aren't as bad as in the usual Python implementation. If you're doing many FFTs, you could benefit from using a <a href="https://stackoverflow.com/questions/6365623/improving-fft-performance-in-python">FFTW wrapper</a> (<a href="http://hgomersall.github.io/pyFFTW/" rel="nofollow noreferrer">pyFFTW</a> seems nice, but I haven't used it).</p>
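<p>A quick way to check the eigenvalues-only point (a minimal sketch; exact timings will vary with your machine and BLAS):</p>

<pre><code>import numpy as np

a = np.random.randn(100, 100)
w, v = np.linalg.eig(a)        # eigenvalues and eigenvectors
w_only = np.linalg.eigvals(a)  # eigenvalues only -- roughly half the work
</code></pre>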
python|performance|matlab|numpy|atlas
12
4,050
16,366,124
Share a numpy array in gunicorn processes
<p>I have a big numpy array that is stored in redis. This array acts as an index. I want to serve filtered results over http from a flask app running on gunicorn, and I want all the workers spawned by gunicorn to have access to this numpy array. I don't want to go to redis every time and deserialize the entire array in memory; instead, on startup I want to run some code that does this, so that every forked worker of gunicorn just gets a copy of this array. The problem is, I cannot find any examples on how to use gunicorn's server hooks: <a href="http://docs.gunicorn.org/en/latest/configure.html#server-hooks">http://docs.gunicorn.org/en/latest/configure.html#server-hooks</a> to achieve this. Maybe server hooks are not the right way of doing it; has anyone else done something similar?</p>
<p>Create an instance of a Listener <em>server</em> and have your gunicorn children connect to that process to fetch whatever data they need as Clients. This way the processes can modify the information as needed and request it from the main process instead of going to Redis to reload the entire dataset.</p> <p>More info here: <a href="http://docs.python.org/2/library/multiprocessing.html#multiprocessing-listeners-clients" rel="noreferrer">Multiprocessing - 16.6.2.10. Listeners and Clients</a>.</p>
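<p>A minimal Python 3 sketch of that pattern (the address, authkey, filename and query format here are placeholders, not a real API):</p>

<pre><code># parent process: load the array once and serve filtered results
from multiprocessing.connection import Listener
import numpy as np

big_array = np.load('index.npy')  # hypothetical source of the index array

with Listener(('localhost', 6000), authkey=b'secret') as listener:
    while True:
        conn = listener.accept()
        query = conn.recv()          # e.g. a slice or a boolean mask
        conn.send(big_array[query])  # ship only the filtered result
        conn.close()
</code></pre>

<p>And in each gunicorn worker:</p>

<pre><code>from multiprocessing.connection import Client

with Client(('localhost', 6000), authkey=b'secret') as conn:
    conn.send(slice(0, 10))
    result = conn.recv()
</code></pre>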
python|numpy|flask|ipc|gunicorn
7
4,051
16,420,097
What is the difference between np.sum and np.add.reduce?
<p>What is the difference between <code>np.sum</code> and <code>np.add.reduce</code>?<br> While <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduce.html">the docs</a> are quite explicit: </p> <blockquote> <p>For example, add.reduce() is equivalent to sum().</p> </blockquote> <p>The performance of the two seems to be quite different: for relatively small array sizes <code>add.reduce</code> is about twice faster. </p> <pre><code>$ python -mtimeit -s"import numpy as np; a = np.random.rand(100); summ=np.sum" "summ(a)" 100000 loops, best of 3: 2.11 usec per loop $ python -mtimeit -s"import numpy as np; a = np.random.rand(100); summ=np.add.reduce" "summ(a)" 1000000 loops, best of 3: 0.81 usec per loop $ python -mtimeit -s"import numpy as np; a = np.random.rand(1000); summ=np.sum" "summ(a)" 100000 loops, best of 3: 2.78 usec per loop $ python -mtimeit -s"import numpy as np; a = np.random.rand(1000); summ=np.add.reduce" "summ(a)" 1000000 loops, best of 3: 1.5 usec per loop </code></pre> <p>For larger array sizes, the difference seems to go away:</p> <pre><code>$ python -mtimeit -s"import numpy as np; a = np.random.rand(10000); summ=np.sum" "summ(a)" 100000 loops, best of 3: 10.7 usec per loop $ python -mtimeit -s"import numpy as np; a = np.random.rand(10000); summ=np.add.reduce" "summ(a)" 100000 loops, best of 3: 9.2 usec per loop </code></pre>
<p>Short answer: when the argument is a numpy array, <code>np.sum</code> ultimately calls <code>add.reduce</code> to do the work. The overhead of handling its argument and dispatching to <code>add.reduce</code> is why <code>np.sum</code> is slower.</p> <p>Longer answer: <code>np.sum</code> is defined in <a href="https://github.com/numpy/numpy/blob/master/numpy/core/fromnumeric.py" rel="noreferrer"><code>numpy/core/fromnumeric.py</code></a>. In the definition of <code>np.sum</code>, you'll see that the work is passed on to <code>_methods._sum</code>. That function, in <a href="https://github.com/numpy/numpy/blob/master/numpy/core/_methods.py" rel="noreferrer"><code>_methods.py</code></a>, is simply:</p> <pre><code>def _sum(a, axis=None, dtype=None, out=None, keepdims=False): return um.add.reduce(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims) </code></pre> <p><code>um</code> is the module where the <code>add</code> ufunc is defined.</p>
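<p>A quick sanity check that the two really are the same computation (minimal sketch):</p>

<pre><code>import numpy as np

a = np.random.rand(1000)
assert np.sum(a) == np.add.reduce(a)  # identical result; np.sum just adds dispatch overhead
</code></pre>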
python|numpy
32
4,052
57,706,068
How to align text to center in reportlab python?
<p>I am generating a pdf using reportlab and I want my title to be centered. But how do I achieve it? I have been unable to find a solution.</p>

<p>Here is my code:</p>

<pre><code>def add_text(text, style="Normal", fontsize=12):
    Story.append(Spacer(1, 12))
    ptext = "&lt;font size={}&gt;{}&lt;/font&gt;".format(fontsize, text)
    Story.append(Paragraph(ptext, styles[style]))
    Story.append(Spacer(1, 12))

add_text("Title", style="Heading1", fontsize=24)
</code></pre>
<p>I would create my own paragraph style and refer to it. In your case, change</p>

<pre><code>def add_text(text, style="Normal", fontsize=12):
</code></pre>

<p>to</p>

<pre><code>def add_text(text, style="Normal_CENTER", fontsize=12):
</code></pre>

<p>Below is how you create your own style:</p>

<pre><code>from reportlab.lib.styles import ParagraphStyle, getSampleStyleSheet
from reportlab.lib.enums import TA_JUSTIFY, TA_LEFT, TA_CENTER, TA_RIGHT
from reportlab.lib import colors

styles = getSampleStyleSheet()
styles.add(ParagraphStyle(name='Normal_CENTER',
                          parent=styles['Normal'],
                          fontName='Helvetica',
                          wordWrap='LTR',
                          alignment=TA_CENTER,
                          fontSize=12,
                          leading=13,
                          textColor=colors.black,
                          borderPadding=0,
                          leftIndent=0,
                          rightIndent=0,
                          spaceAfter=0,
                          spaceBefore=0,
                          splitLongWords=True,
                          spaceShrinkage=0.05,
                          ))

styles.add(ParagraphStyle(name='New Style',
                          alignment=TA_LEFT,
                          fontName='Helvetica',
                          fontSize=7,
                          textColor=colors.darkgray,
                          leading=8,
                          textTransform='uppercase',
                          wordWrap='LTR',
                          splitLongWords=True,
                          spaceShrinkage=0.05,
                          ))
</code></pre>
python|pandas|pdf|reportlab
2
4,053
57,648,928
I would like to convert a pivot table's column data to rows (unpivot a table)
<p>I have a dataframe that I created from an Excel file of the following form: </p> <pre><code> Ticker 0 Ticker 1 Ticker 2 Delta 0 ... Gamma 1 Gamma 2 IL Var 2019-01-01 -0.0 -1.0 -1.0 0.0 ... -3.0 2.0 10 5 2019-01-02 0.0 -0.0 -1.0 -1.0 ... 0.0 0.0 10 5 2019-01-03 2.0 -1.0 1.0 0.0 ... -0.0 -2.0 10 5 2019-01-04 1.0 0.0 0.0 -1.0 ... -0.0 -1.0 10 5 2019-01-05 1.0 -1.0 -0.0 -1.0 ... -0.0 -1.0 10 5 2019-01-06 2.0 1.0 1.0 -1.0 ... 0.0 0.0 10 5 </code></pre> <p>Given that on each date, the data on <em>Ticker i</em> corresponds to the data on <em>Delta i</em> and <em>Gamma i</em> and so I wish to make a table of the following form: </p> <pre><code> Date Ticker Delta Gamma IL Var 2019-01-01 NaN NaN NaN 10 5 2019-01-01 NaN NaN NaN 10 5 2019-01-01 NaN NaN NaN 10 5 2019-01-01 NaN NaN NaN 10 5 2019-01-01 NaN NaN NaN 10 5 2019-01-01 NaN NaN NaN 10 5 2019-01-02 NaN NaN NaN 10 5 2019-01-02 NaN NaN NaN 10 5 . . . 2019-01-03 NaN NaN NaN 10 5 . . . . 2019-01-04 NaN NaN NaN 10 5 2019-01-05 NaN NaN NaN 10 5 2019-01-06 NaN NaN NaN 10 5 </code></pre> <p>I tried to use the <code>pd.melt()</code> method but I don't know how to make the date appear multiple times... </p> <p>To recreate a similar dataframe I used the code:</p> <pre class="lang-py prettyprint-override"><code> import pandas as pd import numpy as np l=[] for i in range(3): l.append('Ticker ' + str(i)) for i in range(3): l.append('Delta ' + str(i)) for i in range(3): l.append('Gamma ' + str(i)) dates = pd.date_range('20190101', periods=6) data = np.random.randn(6, len(l)) df = pd.DataFrame(data.round(0), index = dates, columns = l) df['IL']=10 df['Var']=5 df Out[9]: Ticker 0 Ticker 1 Ticker 2 Delta 0 ... Gamma 1 Gamma 2 IL Var 2019-01-01 -0.0 -1.0 -1.0 0.0 ... -3.0 2.0 10 5 2019-01-02 0.0 -0.0 -1.0 -1.0 ... 0.0 0.0 10 5 2019-01-03 2.0 -1.0 1.0 0.0 ... -0.0 -2.0 10 5 2019-01-04 1.0 0.0 0.0 -1.0 ... -0.0 -1.0 10 5 2019-01-05 1.0 -1.0 -0.0 -1.0 ... -0.0 -1.0 10 5 2019-01-06 2.0 1.0 1.0 -1.0 ... 0.0 0.0 10 5 [6 rows x 11 columns] </code></pre> <p>I greatly appreciate your help. </p>
<p>Seems like you're converting from a wide format to a longitudinal format. Try</p>

<pre><code>df.reset_index(inplace = True)
df = pd.wide_to_long(df, ['Ticker', 'Delta', 'Gamma'], i = 'index', j = 'timepoint', sep = " ")
</code></pre>

<p>where the stubnames of your variables are <code>['Ticker', 'Delta', 'Gamma']</code>, you're identifying rows based on their dates, and the timepoints are 0, 1, 2.</p>

<pre><code>Out[19]:
                      Var  IL  Ticker  Delta  Gamma
index      timepoint
2019-01-01 0            5  10    -2.0   -1.0   -0.0
2019-01-02 0            5  10     0.0   -0.0    1.0
2019-01-03 0            5  10    -1.0   -0.0   -2.0
2019-01-04 0            5  10     1.0   -0.0   -1.0
2019-01-05 0            5  10    -1.0   -1.0   -1.0
2019-01-06 0            5  10     2.0   -1.0   -1.0
2019-01-01 1            5  10     0.0    1.0   -1.0
2019-01-02 1            5  10     1.0   -1.0    2.0
2019-01-03 1            5  10    -1.0   -0.0   -0.0
2019-01-04 1            5  10     0.0    1.0    0.0
2019-01-05 1            5  10     0.0    1.0    2.0
2019-01-06 1            5  10     1.0    1.0   -0.0
2019-01-01 2            5  10    -0.0   -2.0    0.0
2019-01-02 2            5  10    -1.0   -2.0   -0.0
2019-01-03 2            5  10    -1.0    1.0   -1.0
2019-01-04 2            5  10     0.0    2.0   -1.0
2019-01-05 2            5  10    -0.0    2.0    1.0
2019-01-06 2            5  10    -2.0    1.0    1.0
</code></pre>

<p>Then add</p>

<pre><code>df.sort_values(by=['index', 'timepoint']).reset_index()
</code></pre>

<p>to sort by date and timepoint; <code>reset_index()</code> returns the index levels to columns.</p>
python|pandas|dataframe|pivot
1
4,054
57,357,049
How to load grayscale image dataset to Mobile net model
<p>I am trying to load a grayscale image dataset(fashion-mnist) to MobileNet model to predict hand written numbers but according to <a href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l06c01_tensorflow_hub_and_transfer_learning.ipynb#scrollTo=s4YuF5HvpM1W" rel="nofollow noreferrer">this</a> tutorial only RGB images can be loaded to the model. When I try to feed fashion-mnist samples, it gives me the following error</p> <blockquote> <p>Error when checking input: expected keras_layer_13_input to have shape (224, 224, 3) but got array with shape (224, 224, 1)</p> </blockquote> <p>How to solve this problem ?</p>
<p>Pre-trained MobileNet is probably not suitable for this task. You have two different problems. MobileNet is made for ImageNet images, which are 224x224 images with 3 color channels, while the MNIST dataset consists of 28x28 images with one color channel. You can repeat the color channel in RGB: </p> <pre><code># data.shape [70000, 224, 224, 1] -&gt; [70000, 224, 224, 3]
data = np.repeat(data, 3, -1)
</code></pre> <p>But before that, you need to resize the images. For example, you can use <code>PIL</code> for resizing images:</p> <pre><code>from PIL import Image
data = np.array([Image.fromarray(x).resize([224,224]) for x in data])
</code></pre> <p>There are some small details here which you should figure out yourself, such as the <code>dtype</code> of the images if you have loaded the dataset as a numpy array. You may need to convert numpy types to integers with <code>np.uint8()</code>.</p>
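<p>Putting both steps together, a minimal sketch (the random batch here is a stand-in for the real data; the shapes and <code>dtype</code> are assumptions):</p>

<pre><code>import numpy as np
from PIL import Image

data = np.random.randint(0, 256, size=(16, 28, 28), dtype=np.uint8)  # fake fashion-mnist batch

# resize each image to 224x224, then copy the single channel three times
resized = np.array([np.asarray(Image.fromarray(x).resize((224, 224))) for x in data])
rgb = np.repeat(resized[..., np.newaxis], 3, axis=-1)  # shape (16, 224, 224, 3)
</code></pre>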
python|tensorflow|conv-neural-network|mobilenet
2
4,055
57,652,508
How to convert a dict which is inside a list inside an array of json object into a dataframe?
<p>I have an array in json text file which has a list of dicts. I need to extract all of it into a dataframe. Array is something on these lines:-</p> <pre><code>[{ "_id" : "abc" , "players" : [ "1" , "2"] , "tId" : "1" , "ef" : 200 , "pr" : 360 , "mode" : 1.0 , "1" : { "before" : { "rm" : { "$numberLong" : "1070"} , "cap" : 450.0 , "nrrm" : 20.0} , "after" : { "rm" : { "$numberLong" : "970"} , "cap" : 250.0 , "nrrm" : 120.0}} , "2" : { "before" : { "rm" : { "$numberLong" : "470"} , "cap" : 0.0 , "nrrm" : 310.0} , "after" : { "rm" : { "$numberLong" : "730"} , "cap" : 0.0 , "nrrm" : 410.0}} , "ts" : { "$date" : { "$numberLong" : "1565548200670"}} , "shots" : [ { "iBS" : 1 , "bSTOP" : 1 , "aSTOP" : 1 , "bSPB" : "NOT DECIDED" , "aSPB" : "NOT DECIDED" , "lBP" : [ 1] , "iBP" : [ ] , "bP" : [ 1] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( -0.6728522,0.04,-0.5520813 )" , "iF" : "( 0.480835,-2.9104E-16,0.1699932 )" , "iP" : "( -0.7105647,0.04,-0.5654141 )" , "cA" : "( 6.539359E-14,70.5296,2.670466E-14 )" , "iTBA" : 1 , "tBID" : 2 , "tBP" : "( 0.724014,0.0335,-0.041 )" , "tT" : 12.66367 , "iSU" : "" , "uSV" : "" , "iPB" : 0 , "bHBC" : 7 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 12.63438} , { "iBS" : 0 , "bSTOP" : 1 , "aSTOP" : 0 , "bSPB" : "NOT DECIDED" , "aSPB" : "SOLID BALLS" , "lBP" : [ 3] , "iBP" : [ ] , "bP" : [ 3] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 1.07788,0.04,-0.5111128 )" , "iF" : "( -0.2340834,1.694598E-16,0.4531059 )" , "iP" : "( 1.096239,0.04,-0.5466505 )" , "cA" : "( -3.80758E-14,332.6783,-1.613636E-14 )" , "iTBA" : 1 , "tBID" : 3 , "tBP" : "( 0.4538818,0.04000001,0.54636 )" , "tT" : 15.64122 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 4 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 15.62162} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ ] , "iBP" : [ ] , "bP" : [ ] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.1115417,0.04,-0.1958274 )" , "iF" : "( 0.002243765,-1.426218E-18,0.0006675497 )" , "iP" : "( 0.07320246,0.04,-0.2072338 )" , "cA" : "( 6.981425E-14,73.43156,3.15853E-14 )" , "iTBA" : 1 , "tBID" : 5 , "tBP" : "( 0.7939866,0.04000001,-0.07077976 )" , "tT" : 24.50556 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 0 , "cHRB" : 0 , "cHSM" : 0 , "hRBIP" : 0 , "cIH" : 1 , "cIP" : 0 , "bIP" : 0 , "aBP" : 0 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 24.4843} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 14] , "iBP" : [ ] , "bP" : [ 14] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( -0.1471594,0.04,0.2157262 )" , "iF" : "( 0.09524104,0,0.1471072 )" , "iP" : "( -0.1688982,0.04,0.182149 )" , "cA" : "( 0,32.92009,0 )" , "iTBA" : 1 , "tBID" : 14 , "tBP" : "( -0.06695016,0.04000001,0.3848823 )" , "tT" : 9.197196 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 0 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 9.181252} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , 
"bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 12] , "iBP" : [ ] , "bP" : [ 12] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.3852863,0.04,0.4167935 )" , "iF" : "( 0.282476,0,0.2797301 )" , "iP" : "( 0.3568642,0.04,0.3886477 )" , "cA" : "( 0,45.27985,0 )" , "iTBA" : 1 , "tBID" : 12 , "tBP" : "( 0.6180266,0.04000001,0.5658391 )" , "tT" : 6.139053 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 7 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 6.123772} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 11] , "iBP" : [ ] , "bP" : [ 11] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.7701012,0.04,-0.2259813 )" , "iF" : "( 0.200658,0,-0.4688671 )" , "iP" : "( 0.7543634,0.04,-0.1892074 )" , "cA" : "( 0,156.8309,0 )" , "iTBA" : 1 , "tBID" : 11 , "tBP" : "( 0.941402,0.04000002,-0.4693463 )" , "tT" : 5.77317 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 6 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 5.756089} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 9] , "iBP" : [ ] , "bP" : [ 9] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.0779118,0.04,-0.53505 )" , "iF" : "( -0.2458796,0,0.2141081 )" , "iP" : "( 0.1080778,0.04,-0.5613181 )" , "cA" : "( 0,311.0488,0 )" , "iTBA" : 1 , "tBID" : 9 , "tBP" : "( -0.4028816,0.04000001,-0.1177362 )" , "tT" : 25.78172 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 0 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 25.76452} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 15] , "iBP" : [ 0] , "bP" : [ 15 , 0] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( -0.6202361,0.04,0.07899453 )" , "iF" : "( 0.5099897,0,-0.003231468 )" , "iP" : "( -0.6602353,0.04,0.07924797 )" , "cA" : "( 0,90.36304,0 )" , "iTBA" : 1 , "tBID" : 15 , "tBP" : "( 0.4645979,0.04000001,0.1173781 )" , "tT" : 17.43617 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 5 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 1 , "cIP" : 1 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 17.41947} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 5] , "iBP" : [ ] , "bP" : [ 5] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.4321823,0.03999999,-0.146901 )" , "iF" : "( 0.1381158,0,0.1290963 )" , "iP" : "( 0.40296,0.03999999,-0.174215 )" , "cA" : "( 0,46.93323,0 )" , "iTBA" : 1 , "tBID" : 5 , "tBP" : "( 0.5929074,0.04,0.002491749 )" , "tT" : 25.0796 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 0 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 26.25124} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 7] , "iBP" : [ ] , 
"bP" : [ 7] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.6570191,0.04,0.06490942 )" , "iF" : "( 0.1059603,0,-0.2250292 )" , "iP" : "( 0.6399788,0.04,0.1010982 )" , "cA" : "( 0,154.7855,0 )" , "iTBA" : 1 , "tBID" : 7 , "tBP" : "( 0.7849708,0.04000001,-0.1436278 )" , "tT" : 27.20016 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 1 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 27.18202} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 2] , "iBP" : [ ] , "bP" : [ 2] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.4290362,0.04,-0.4845348 )" , "iF" : "( -0.1070574,0,0.3149122 )" , "iP" : "( 0.4419109,0.04,-0.5224062 )" , "cA" : "( 0,341.2241,0 )" , "iTBA" : 1 , "tBID" : 2 , "tBP" : "( 0.35649,0.04000001,-0.2899367 )" , "tT" : 27.1818 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 2 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 27.15446} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 4] , "iBP" : [ ] , "bP" : [ 4] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.5885352,0.04,-0.1053962 )" , "iF" : "( -0.2677357,0,0.2490637 )" , "iP" : "( 0.6178222,0.04,-0.1326408 )" , "cA" : "( 0,312.9308,0 )" , "iTBA" : 1 , "tBID" : 4 , "tBP" : "( -0.2299831,0.04000001,0.5813306 )" , "tT" : 16.6083 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 2 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 16.57993} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ ] , "iBP" : [ ] , "bP" : [ ] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( -0.1893151,0.04,-0.4421305 )" , "iF" : "( 0.5092477,0,-0.02769054 )" , "iP" : "( -0.2292561,0.04,-0.4399587 )" , "cA" : "( 0,93.11242,0 )" , "iTBA" : 1 , "tBID" : 6 , "tBP" : "( 0.6927831,0.04,-0.5543976 )" , "tT" : 17.58389 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 5 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 0 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 0 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 25.99698} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 13] , "iBP" : [ ] , "bP" : [ 13] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.6336635,0.04,0.07858364 )" , "iF" : "( -0.1611403,0,0.06044259 )" , "iP" : "( 0.6711156,0.04,0.06453566 )" , "cA" : "( 0,290.5607,0 )" , "iTBA" : 1 , "tBID" : 13 , "tBP" : "( -0.5944556,0.04000001,0.5183017 )" , "tT" : 14.77339 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 0 , "bHBC" : 2 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 14.75704} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ 10] , "iBP" : [ ] , "bP" : [ 10] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 
-0.6597628,0.04,0.539317 )" , "iF" : "( 0.3268366,0,-0.3805755 )" , "iP" : "( -0.6858233,0.04,0.5696625 )" , "cA" : "( 0,139.3442,0 )" , "iTBA" : 1 , "tBID" : 10 , "tBP" : "( 0.3564146,0.04000001,-0.5567175 )" , "tT" : 10.13039 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 1 , "bHBC" : 4 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 10.11172} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "STRIPE BALLS" , "aSPB" : "STRIPE BALLS" , "lBP" : [ ] , "iBP" : [ ] , "bP" : [ ] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.2228186,0.04,0.1685899 )" , "iF" : "( -0.246861,0,-0.3906604 )" , "iP" : "( 0.2441863,0.04,0.2024045 )" , "cA" : "( 0,212.2891,0 )" , "iTBA" : 1 , "tBID" : 8 , "tBP" : "( -0.2992525,0.04000001,-0.5290359 )" , "tT" : 12.02916 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 1 , "bHBC" : 3 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 0 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 0 , "iGO" : 0 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "1bdf72" , "toT" : 12.01164} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 6] , "iBP" : [ ] , "bP" : [ 6] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( -0.3779429,0.03999999,0.07964184 )" , "iF" : "( 0.2466588,0,0.09699224 )" , "iP" : "( -0.4151683,0.03999999,0.0650039 )" , "cA" : "( 0,68.53404,0 )" , "iTBA" : 1 , "tBID" : 6 , "tBP" : "( 0.7465774,0.04000001,0.5018011 )" , "tT" : 21.34981 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 1 , "bHBC" : 2 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 1 , "cIH" : 0 , "cIP" : 0 , "bIP" : 0 , "aBP" : 1 , "iGO" : 0 , "sSC" : 1 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2cfdd0" , "toT" : 21.3332} , { "iBS" : 0 , "bSTOP" : 0 , "aSTOP" : 0 , "bSPB" : "SOLID BALLS" , "aSPB" : "SOLID BALLS" , "lBP" : [ 8] , "iBP" : [ ] , "bP" : [ 8] , "iTO" : 0 , "iCOPG" : 0 , "nTCCOP" : 0 , "iCOPS" : 0 , "cBP" : "( 0.9329018,0.04,0.4941435 )" , "iF" : "( -0.230935,0,-0.1550016 )" , "iP" : "( 0.9661143,0.04,0.5164354 )" , "cA" : "( 0,236.1308,0 )" , "iTBA" : 1 , "tBID" : 8 , "tBP" : "( -0.6080351,0.04000002,-0.4977565 )" , "tT" : 22.92525 , "iSU" : "" , "uSV" : "( 0,0,0 )" , "iPB" : 1 , "bHBC" : 2 , "cHRB" : 1 , "cHSM" : 1 , "hRBIP" : 0 , "cIH" : 0 , "cIP" : 0 , "bIP" : 1 , "aBP" : 1 , "iGO" : 1 , "sSC" : 0 , "sBC" : 0 , "dBC" : 0 , "cSC" : 0 , "iSC" : 0 , "pSId" : "2" , "toT" : 22.90602}] , "winner" : "2" , "reason" : 12}] </code></pre> <p>I have tried reading the json file and separately open that 'shots' column from json file but it fails.</p> <pre><code>#reading data with open("z.txt", 'r', encoding = 'utf-8') as datafile: data = json.load(datafile) df1 = pd.DataFrame(data) </code></pre> <pre><code>shots = pd.DataFrame([d["shots"] for d in data]) </code></pre> <p>I want to extract the "_id" along with all the columns that would be inside "shots" in a single dataframe.</p>
<p>You can use this:</p> <pre><code>pd.io.json.json_normalize(data, "shots", "_id") </code></pre> <p>Output of the head "_id" is located in the last column, all the others columns are what's inside data:</p> <pre><code> iBS bSTOP aSTOP bSPB aSPB lBP iBP bP iTO iCOPG nTCCOP iCOPS cBP iF iP cA iTBA tBID tBP tT iSU uSV iPB bHBC cHRB cHSM hRBIP cIH cIP bIP aBP iGO sSC sBC dBC cSC iSC pSId toT _id 0 1 1 1 NOT DECIDED NOT DECIDED [1] [] [1] 0 0 0 0 ( -0.6728522,0.04,-0.5520813 ) ( 0.480835,-2.9104E-16,0.1699932 ) ( -0.7105647,0.04,-0.5654141 ) ( 6.539359E-14,70.5296,2.670466E-14 ) 1 2 ( 0.724014,0.0335,-0.041 ) 12.663670 0 7 1 1 1 0 0 0 1 0 0 0 0 0 0 2cfdd0 12.634380 abc 1 0 1 0 NOT DECIDED SOLID BALLS [3] [] [3] 0 0 0 0 ( 1.07788,0.04,-0.5111128 ) ( -0.2340834,1.694598E-16,0.4531059 ) ( 1.096239,0.04,-0.5466505 ) ( -3.80758E-14,332.6783,-1.613636E-14 ) 1 3 ( 0.4538818,0.04000001,0.54636 ) 15.641220 ( 0,0,0 ) 0 4 1 1 1 0 0 0 1 0 0 0 0 0 0 2cfdd0 15.621620 abc 2 0 0 0 SOLID BALLS SOLID BALLS [] [] [] 0 0 0 0 ( 0.1115417,0.04,-0.1958274 ) ( 0.002243765,-1.426218E-18,0.0006675497 ) ( 0.07320246,0.04,-0.2072338 ) ( 6.981425E-14,73.43156,3.15853E-14 ) 1 5 ( 0.7939866,0.04000001,-0.07077976 ) 24.505560 ( 0,0,0 ) 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 2cfdd0 24.484300 abc 3 0 0 0 STRIPE BALLS STRIPE BALLS [14] [] [14] 0 0 0 0 ( -0.1471594,0.04,0.2157262 ) ( 0.09524104,0,0.1471072 ) ( -0.1688982,0.04,0.182149 ) ( 0,32.92009,0 ) 1 14 ( -0.06695016,0.04000001,0.3848823 ) 9.197196 ( 0,0,0 ) 0 0 1 1 1 0 0 0 1 0 1 0 0 0 0 1bdf72 9.181252 abc 4 0 0 0 STRIPE BALLS STRIPE BALLS [12] [] [12] 0 0 0 0 ( 0.3852863,0.04,0.4167935 ) ( 0.282476,0,0.2797301 ) ( 0.3568642,0.04,0.3886477 ) ( 0,45.27985,0 ) 1 12 ( 0.6180266,0.04000001,0.5658391 ) 6.139053 ( 0,0,0 ) 0 7 1 1 1 0 0 0 1 0 0 0 0 0 0 1bdf72 6.123772 abc </code></pre> <blockquote> <p>Documentation:</p> <p><a href="https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.io.json.json_normalize.html" rel="nofollow noreferrer">json_normalize</a>: “Normalize” semi-structured JSON data into a flat table.</p> </blockquote>
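<p>On pandas 1.0+, <code>pandas.io.json.json_normalize</code> is deprecated in favour of the top-level function, so the equivalent call would be:</p>

<pre><code>pd.json_normalize(data, record_path="shots", meta="_id")
</code></pre>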
python|json|python-3.x|pandas|dictionary
1
4,056
57,480,658
Sorting column labels by numeric in string, Python
<p>I have successfully connected Python to a Microsoft Access database. The problem appears when I am trying to sort the column labels in the data frame by number in increasing order. The column names also contain characters.</p>

<p>I have looked into several sorting functions, but none of them seems to work for this issue.</p>

<p>My data frame is as follows;</p>

<pre><code>C1 C10 C11 C12 C13 C14 ... C2 C20 C21 ... C3 ...
</code></pre>

<p>How I want my columns to be sorted;</p>

<pre><code>C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 ...
</code></pre>

<p>My data frame also contains other components, such as benzene, toluene, etc., so I would like those to be in alphabetical order too.</p>

<p>Moreover, is there a way to sort it as;</p>

<pre><code>... C4 C5 iC5 nC5 C6 iC6 nC6.
</code></pre>

<p>The above question is most important, but if someone knows if/how this can be done, please advise me!</p>

<p>Thanks in advance for your help!</p>
<p>I think this might be the answer you are looking for:</p>

<pre><code>data.reindex_axis(sorted(data.columns, key=lambda x: float(x[1:])), axis=1)
</code></pre>

<p>You can freely modify the value in x[1:] to include or exclude more characters in the string. (On newer pandas versions, where <code>reindex_axis</code> has been removed, use <code>data.reindex(columns=sorted(data.columns, key=lambda x: float(x[1:])))</code> instead.)</p>
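<p>If you also need the <code>C4 C5 iC5 nC5 C6</code> ordering from your second question, here is a sketch with a custom sort key (it assumes component names either match the <code>[prefix]C[number]</code> pattern or are plain words like benzene):</p>

<pre><code>import re

def natural_key(name):
    m = re.match(r'([a-zA-Z]*)C(\d+)$', name)
    if m:  # C-columns sort by carbon number, then by prefix ('' &lt; 'i' &lt; 'n')
        return (0, int(m.group(2)), m.group(1))
    return (1, 0, name)  # everything else sorts alphabetically afterwards

data = data.reindex(columns=sorted(data.columns, key=natural_key))
</code></pre>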
python|pandas|sorting|columnsorting
1
4,057
57,679,220
Expand nested lists to rows, create headers, and map back to original columns
<p>I would like to expand the nested lists to multiple rows and columns. At the same time, map back the results to the corresponding column values.</p> <p>The dataframe is like the following.</p> <pre><code>df=pd.DataFrame({ 'column_name':['income_level', 'geo_level'], 'results':[[[0, 12, 13], [0, 98, 43], [1, 29, 73], [2, 12, 34]], [[0, 78, 23], [1, 56, 67], [2, 67, 34]]]}) column_name | results ---------------------- income_level | [[0, 12, 13], [0, 98, 43], [1, 29, 73], [2, 12, 34]] geo_level | [[0, 78, 23], [1, 56, 67], [2, 67, 34]] </code></pre> <p>The final results I'm looking for are like this. (expanding the nested list to rows and columns and matching the corresponding column values)</p> <pre><code>column_name | num |pct | index income_level | 0 | 12 | 13 income_level | 0 | 98 | 43 income_level | 1 | 29 | 73 income_level | 2 | 12 | 34 geo_level | 0 | 78 | 23 geo_level | 1 | 56 | 67 geo_level | 2 | 67 | 34 </code></pre> <p>My current code:</p> <pre><code>pd.DataFrame(list(itertools.chain(*df['results'].values.tolist())), columns=['num', 'pct', 'index']) </code></pre> <p>I'm able to expand and create header but I cannot match back to corresponding column values (i.e. the column_name)</p>
<p>Explode the <code>results</code> column and assign the result to <code>df1</code>. Then create the new dataframe from the lists in <code>df1.results</code>, using <code>column_name</code> as the index, and <code>reset_index</code></p>

<pre><code>df1 = df.explode('results')
pd.DataFrame(df1.results.tolist(), index=df1.column_name,
                                   columns=['num', 'pct', 'index']).reset_index()

Out[562]:
    column_name  num  pct  index
0  income_level    0   12     13
1  income_level    0   98     43
2  income_level    1   29     73
3  income_level    2   12     34
4     geo_level    0   78     23
5     geo_level    1   56     67
6     geo_level    2   67     34
</code></pre>

<hr>

<p>On pandas &lt; 0.25, use <code>sum</code>, <code>np.repeat</code>, and <code>reset_index</code> to achieve the same thing</p>

<pre><code>pd.DataFrame(df.results.sum(), index=np.repeat(df.column_name, df.results.str.len()),
                               columns=['num', 'pct', 'index']).reset_index()

Out[572]:
    column_name  num  pct  index
0  income_level    0   12     13
1  income_level    0   98     43
2  income_level    1   29     73
3  income_level    2   12     34
4     geo_level    0   78     23
5     geo_level    1   56     67
6     geo_level    2   67     34
</code></pre>
python|pandas|list
1
4,058
24,236,252
Read the properties of HDF file in Python
<p>I have a problem reading an hdf file in pandas. As of now, I don't know the keys of the file.</p>

<p>How do I read the file [data.hdf] in such a case? And my file is .hdf, not .h5. Does it make a difference in terms of data fetching?</p>

<p>I see that you need a 'group identifier in the store'</p>

<pre><code>pandas.io.pytables.read_hdf(path_or_buf, key, **kwargs)
</code></pre>

<p>I was able to get the metadata from pytables</p>

<pre><code>File(filename=data.hdf, title='', mode='a', root_uep='/', filters=Filters(complevel=0, shuffle=False, fletcher32=False, least_significant_digit=None))
/ (RootGroup) ''
/UID (EArray(317,)) ''
  atom := StringAtom(itemsize=36, shape=(), dflt='')
  maindim := 0
  flavor := 'numpy'
  byteorder := 'irrelevant'
  chunkshape := (100,)
/X Y (EArray(8319, 2, 317)) ''
  atom := Float32Atom(shape=(), dflt=0.0)
  maindim := 0
  flavor := 'numpy'
  byteorder := 'little'
  chunkshape := (1000, 2, 100)
</code></pre>

<p>How do I make it readable via pandas?</p>
<p>First, (.hdf or .h5) doesn't make any difference. Second, I'm not sure about pandas, but I read the HDF5 keys like this:</p>

<pre><code>import h5py
h5f = h5py.File("test.h5", "r")
h5f.keys()
</code></pre>

<p>or</p>

<pre><code>h5f.values()
</code></pre>
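<p>In pandas itself, assuming the file was written through pandas/PyTables in a compatible format, a minimal sketch to list the group identifiers would be:</p>

<pre><code>import pandas as pd

with pd.HDFStore('data.hdf', mode='r') as store:
    print(store.keys())  # group identifiers you can pass as `key` to read_hdf
</code></pre>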
python|pandas|hdf5|hdfstore|hdf
1
4,059
24,126,542
Pandas multi-index slices for level names
<p>The latest version of Pandas supports multi-index slicers. However, one needs to know the integer location of the different levels to use them properly.</p> <p>E.g. the following:</p> <pre><code>idx = pd.IndexSlice
dfmi.loc[idx[:,:,['C1','C3']],idx[:,'foo']]
</code></pre> <p>assumes that we know that the <strong>third</strong> row level is the one we want to index with <code>C1</code> and <code>C3</code>, and that the <strong>second</strong> column level is the one we want to index with <code>foo</code>.</p> <p>Sometimes I know the <strong>names</strong> of the levels but not their location in the multi-index. Is there a way to use multi-index slices in this case?</p> <p>For example, say that I know what <strong>slices</strong> I want to apply on each level name, e.g. as a dictionary:</p> <pre><code>'level_name_1' -&gt; ':'
'level_name_2' -&gt; ':'
'level_name_3' -&gt; ['C1', 'C3']
</code></pre> <p>but that I don't know the position (depth) of these levels in the multi-index. Does Pandas have a built-in <strong>indexing</strong> mechanism for this? </p> <p>Can I still use <strong><code>pd.IndexSlice</code></strong> objects somehow if I know level names, but not their position?</p> <p>PS: I know I could use <code>reset_index()</code> and then just work with flat columns, but I would like to avoid resetting the index (even if temporarily). I could also use <code>query</code>, but <code>query</code> requires index names to be compatible with Python identifiers (e.g. no spaces, etc). </p> <hr> <p>The closest I have seen for the above is:</p> <pre><code>df.xs('C1', level='foo')
</code></pre> <p>where <code>foo</code> is the name of the level and <code>C1</code> is the value of interest. </p> <p>I know that <code>xs</code> supports multiple keys, e.g.:</p> <pre><code>df.xs(('one', 'bar'), level=('second', 'first'), axis=1)
</code></pre> <p>but it does <strong>not</strong> support slices or ranges (like <code>pd.IndexSlice</code> does).</p>
<p>This is still an open issue for enhancement, see <a href="https://github.com/pydata/pandas/issues/4036" rel="nofollow">here</a>. It's pretty straightforward to support this; pull requests are welcome!</p>

<p>You can easily do this as a work-around:</p>

<pre><code>In [11]: midx = pd.MultiIndex.from_product([list(range(3)),['a','b','c'],pd.date_range('20130101',periods=3)],names=['numbers','letters','dates'])

In [12]: midx.names.index('letters')
Out[12]: 1

In [13]: midx.names.index('dates')
Out[13]: 2
</code></pre>

<p>Here's a complete example</p>

<pre><code>In [18]: df = DataFrame(np.random.randn(len(midx),1),index=midx)

In [19]: df
Out[19]:
                                   0
numbers letters dates
0       a       2013-01-01  0.261092
                2013-01-02 -1.267770
                2013-01-03  0.008230
        b       2013-01-01 -1.515866
                2013-01-02  0.351942
                2013-01-03 -0.245463
        c       2013-01-01 -0.253103
                2013-01-02 -0.385411
                2013-01-03 -1.740821
1       a       2013-01-01 -0.108325
                2013-01-02 -0.212350
                2013-01-03  0.021097
        b       2013-01-01 -1.922214
                2013-01-02 -1.769003
                2013-01-03 -0.594216
        c       2013-01-01 -0.419775
                2013-01-02  1.511700
                2013-01-03  0.994332
2       a       2013-01-01 -0.020299
                2013-01-02 -0.749474
                2013-01-03 -1.478558
        b       2013-01-01 -1.357671
                2013-01-02  0.161185
                2013-01-03 -0.658246
        c       2013-01-01 -0.564796
                2013-01-02 -0.333106
                2013-01-03 -2.814611
</code></pre>

<p>This is your dict of level names -> slices</p>

<pre><code>In [20]: slicers = { 'numbers' : slice(0,1), 'dates' : slice('20130102','20130103') }
</code></pre>

<p>This creates an indexer that is empty (selects everything)</p>

<pre><code>In [21]: indexer = [ slice(None) ] * len(df.index.levels)
</code></pre>

<p>Add in your slicers</p>

<pre><code>In [22]: for n, idx in slicers.items():
             indexer[df.index.names.index(n)] = idx
</code></pre>

<p>And select (this has to be a tuple, but was a list to start as we had to modify it)</p>

<pre><code>In [23]: df.loc[tuple(indexer),:]
Out[23]:
                                   0
numbers letters dates
0       a       2013-01-02 -1.267770
                2013-01-03  0.008230
        b       2013-01-02  0.351942
                2013-01-03 -0.245463
        c       2013-01-02 -0.385411
                2013-01-03 -1.740821
1       a       2013-01-02 -0.212350
                2013-01-03  0.021097
        b       2013-01-02 -1.769003
                2013-01-03 -0.594216
        c       2013-01-02  1.511700
                2013-01-03  0.994332
</code></pre>
python|pandas
6
4,060
43,576,321
Error when using KNeighborsClassifier using sklearn
<p>I am doing KNN classification for a dataset of 28 features and 5000 samples:</p> <pre><code>trainingSet = [] testSet = [] imdb_score = range(1,11) print ("Start splitting the dataset ...") splitDataset(path + 'movies.csv', 0.60, trainingSet, testSet) print ("Start KNeighborsClassifier ... \n") neigh = KNeighborsClassifier(n_neighbors=5) neigh.fit(trainingSet, imdb_score) </code></pre> <p>However, I ran into this error:</p> <pre><code> " samples: %r" % [int(l) for l in lengths]) ValueError: Found input variables with inconsistent numbers of samples: [3362, 10] </code></pre> <p>I think my code looks alright. Kindly, has anyone run into this issue before?</p>
<p>So you got 6000 samples, use 60% of these, resulting in 3362 samples (it seems; I don't see your exact calculations).</p>

<p>You call <code>fit(X,Y)</code> <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier" rel="nofollow noreferrer">where the following is needed</a>:</p>

<ul>
<li><code>y : {array-like, sparse matrix}</code></li>
<li><code>Target values of shape = [n_samples] or [n_samples, n_outputs]</code></li>
</ul>

<p>As your <code>y=imdb_score</code> is just a list of 10 values, neither of these rules applies, as it needs to be either an array-like data-structure (a list would be okay) with 3362 values or an array of shape <code>(3362, 1)</code>.</p>
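<p>A sketch of a fix, assuming each row of <code>trainingSet</code> stores its label in the last column (adjust to your actual layout):</p>

<pre><code>X_train = [row[:-1] for row in trainingSet]  # 28 feature values per sample
y_train = [row[-1] for row in trainingSet]   # one label per sample, so len(y_train) == len(X_train)
neigh.fit(X_train, y_train)
</code></pre>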
numpy|machine-learning|scikit-learn|knn
1
4,061
43,656,763
np.concatenate a list of numpy.ndarray in new dimension?
<p>I have a list with numpy.ndarrays - each of shape <code>(33,1,8,45,3)</code>.</p>

<p>The problem is that when I concatenate the list using <code>a = np.concatenate(list)</code>, the output shape of <code>a</code> becomes</p>

<pre><code>print a.shape
(726,1,8,45,3)
</code></pre>

<p>instead of shape <code>(22,33,1,8,45,3)</code>. </p>

<p>How do I cleanly concatenate the list, without having to change the input?</p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer"><code>np.concatenate</code></a>:</p> <blockquote> <p>Join a sequence of arrays along an <strong>existing axis</strong>.</p> </blockquote> <p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.stack.html#numpy.stack" rel="nofollow noreferrer"><code>np.stack</code></a>:</p> <blockquote> <p>Stack a sequence of arrays along a <strong>new axis</strong>.</p> </blockquote> <pre><code>a = np.ones((3, 4)) b = np.stack([a, a]) print(b.shape) # (2, 3, 4) </code></pre>
python|numpy
2
4,062
2,148,538
Python: strange numbers being pulled from binary file /confusion with hex and decimals
<p>This might be extremely trivial, and if so I apologise, but I'm getting really confused with the outputs I'm getting: hex? decimal? what?</p> <p>Here's an example, and what it returns:</p> <pre><code>&gt;&gt;&gt; print 'Rx State: ADC Clk=', ADC_Clock_MHz,'MHz DDC Clk=', DDC_Clock_kHz,'kHz Temperature=', Temperature,'C' Rx State: ADC Clk= [1079246848L, 0L] MHz DDC Clk= [1078525952L, 0L] kHz Temperature= [1078140928L, 0L] C </code></pre> <p>Now I admit this is slight guesswork because I don't know exactly what the data is - I have a specification of how to parse it out of the file, but it's giving me very strange answers.</p> <p>As you can see - the values are very similar, all around the 1078000000 mark, which leads me to believe I might be extracting something strange (like hex, but I don't think it is...) </p> <p>The structure is read as follows (apologies for length):</p> <pre><code>#Read block more = 1 while(more == 1): a = array.array("L") a.fromfile(wholeFile,2) if len(a) == 2: structure_id = a[0] print 'structure_id: ', hex(structure_id) structure_length = a[1] print 'structure_length: ', structure_length else: print 'cannot read structure start' numDwords = (structure_length/4) - 2 - 1; print 'numDwords: ', numDwords content = array.array("L") content.fromfile(wholeFile,numDwords) if len(content) != numDwords: print 'cannot read structure' more = 0 ok = 0 </code></pre> <p>and then the above example was retrieved from this by:</p> <pre><code>pos = 2 v1 = [content[pos+1], content[pos]] pos = pos+2 v2 = [content[pos+1], content[pos]] pos = pos+2 v3 = [content[pos+1], content[pos]] pos = pos+2 ADC_Clock_MHz = v1 DDC_Clock_kHz = v2 Temperature = v3 </code></pre> <p>Right sorry again for how verbose that was, but it's not just those values, it seems some values are ok and some aren't, which leads me to believe that the larger numbers are encoded differently... Also I have no idea why all the values are in pairs either!</p> <p>Pants question, but if anyone has any insight it'd be much appreciated.</p>
<p>I found out it's incredibly easy to do this with numpy.fromfile.</p>

<p>Specify what you want to extract (e.g. uint32, int16 etc.) and it extracts it as an array. You can even specify your own types as a collection of existing types, meaning you can extract known structures in one go (e.g. 2 uint32s, then 1 string, then 5 int16s as an array of 8 values).</p>
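<p>A minimal sketch of a structured read (the filename and field names are placeholders). Incidentally, the paired values in the question look like 64-bit doubles read as two 32-bit words: <code>hex(1079246848)</code> is <code>0x40540000</code>, and the double <code>0x4054000000000000</code> is <code>80.0</code>, a plausible ADC clock in MHz. So a <code>'&lt;f8'</code> (or <code>'&gt;f8'</code>) field in the right place may decode those pairs directly:</p>

<pre><code>import numpy as np

# hypothetical record layout: two uint32 header words, then three doubles
record = np.dtype([('id', '&lt;u4'), ('length', '&lt;u4'), ('values', '&lt;f8', (3,))])
data = np.fromfile('capture.bin', dtype=record)
print(data['id'], data['values'])
</code></pre>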
python|binary|numpy|hex
0
4,063
1,764,859
How to compute laplacian of a field?
<p>I'm trying to compute the laplacian of a 2d field <strong>A</strong> using <em>scipy.ndimage.convolve</em>.</p> <pre><code>stencil = numpy.array([[0, 1, 0],[1, -4, 1], [0, 1, 0]]) scipy.ndimage.convolve(A, stencil, mode='wrap') </code></pre> <p>This doesn't seem to give me the right answer though. Any ideas where I'm going wrong, or are there better ways of computing the laplacian in numpy?</p>
<p>I got another idea: did you take into account that your stencil, in order to approximate the Laplacian, should be divided by step**2, where step is the step size of your grid? Only then can you compare the ndimage.convolve result with the analytical result.</p>

<p>In fact, with a Gaussian, I obtain results that indicate that ndimage.convolve works quite well:</p>

<pre><code>import numpy as np
from scipy import ndimage

stencil = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
x = np.linspace(-10, 10, 100)
y = np.linspace(-10, 10, 100)
xx, yy = np.meshgrid(x, y)
image = np.exp(-xx**2 - yy**2)  # Standard deviation in x or y: 1/sqrt(2)
laplaced = ndimage.convolve(image, stencil) / (x[1] - x[0])**2  # stencil from original post

# Analytical Laplacian of exp(-x**2 - y**2) is (-4 + 4*(x**2 + y**2)) * exp(-x**2 - y**2)
expected_result = -4*image + 4*(xx**2 + yy**2)*image  # Very close to laplaced, in most points!
</code></pre>
python|numpy|scipy
2
4,064
72,891,291
Why can't you format the date?
<p>I have a dataframe with dates in the format 15 January 2012 and want to convert them to the format 15/01/2012.</p>
<p>My code</p>
<pre><code>data['Last Date'] = pd.to_datetime(data['Last Date'], format=&quot;%d/%B/%Y&quot;)
print(data.info())
</code></pre>
<p>but I get an error.</p>
<pre><code>TypeError                                 Traceback (most recent call last)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\tools\datetimes.py:510, in _to_datetime_with_format(arg, orig_arg, name, tz, fmt, exact, errors, infer_datetime_format)
    509 try:
--&gt; 510     values, tz = conversion.datetime_to_datetime64(arg)
    511     dta = DatetimeArray(values, dtype=tz_to_dtype(tz))

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\_libs\tslibs\conversion.pyx:360, in pandas._libs.tslibs.conversion.datetime_to_datetime64()

TypeError: Unrecognized value type: &lt;class 'str'&gt;

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
c:\Users\Ivan\Programacion\Hacthon atrazeneca\exceltosql.ipynb Cell 4' in &lt;module&gt;
----&gt; 1 data['Last Refreshed on'] = pd.to_datetime(data['Last Refreshed on'], format=&quot;%d/%B/%Y&quot;)
      2 print(data.info())

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\tools\datetimes.py:1047, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
   1045         result = arg.tz_localize(tz)
   1046 elif isinstance(arg, ABCSeries):
-&gt; 1047     cache_array = _maybe_cache(arg, format, cache, convert_listlike)
   1048     if not cache_array.empty:
   1049         result = arg.map(cache_array)
...
    439     return _return_parsed_timezone_results(result, timezones, tz, name)

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\_libs\tslibs\strptime.pyx:150, in pandas._libs.tslibs.strptime.array_strptime()

ValueError: time data '19 February 2015' does not match format '%d/%B/%Y' (match)
</code></pre>
<p>Convert the column to datetime in the parsed format and then convert it to the format you want:</p>

<pre><code>date = pd.to_datetime(data[&quot;Last Date&quot;], format=&quot;%d %B %Y&quot;)
data['Last Date'] = date.dt.strftime(&quot;%d/%m/%Y&quot;)
</code></pre>

<p>Example:</p>

<pre><code>import pandas as pd

time = [item.strftime(&quot;%d %B %Y&quot;) for item in pd.date_range(&quot;2021-05-12&quot;, &quot;2021-06-02&quot;)]
df = pd.DataFrame({&quot;Last Date&quot;: time})
date = pd.to_datetime(df[&quot;Last Date&quot;], format=&quot;%d %B %Y&quot;)
df[&quot;Last Date&quot;] = date.dt.strftime(&quot;%d/%m/%Y&quot;)
print(df)
</code></pre>
python|python-3.x|pandas|dataframe|date
0
4,065
72,971,755
Deciles in Python
<p>I want to group a column into deciles and assign points out of 50.</p>

<p>The lowest decile receives 5 points and points are increased in 5 point increments.</p>

<p>With the code below I am able to group my column into deciles. How do I assign points so that the lowest decile has 5 points, the 2nd lowest has 10 points, and so on, with the highest decile getting 50 points?</p>

<pre><code>df = pd.DataFrame({'column': [1,2,2,3,4,4,5,6,6,7,7,8,8,9,10,10,10,12,13,14,16,16,16,18,19,20,20,22,24,28]})
df['decile'] = pd.qcut(df['column'], 10, labels = False)
</code></pre>
<p>Simple enough; you can apply operations between columns directly. Deciles are numbered from 0 through 9, so they are naturally ordered. You want increments of 5 points per decile, so multiplying the deciles by 5 will give you that. Since you want to start at 5, you can offset with a simple sum. The following gives you what I believe you want:</p> <p><code>df['points'] = df['decile'] * 5 + 5</code></p>
python|pandas|cut
1
4,066
73,030,553
Does PyTorch allocate GPU memory eagerly?
<p>Consider the following script:</p> <pre><code>import torch def unnecessary_compute(): x = torch.randn(1000,1000, device='cuda') l = [] for i in range(5): print(i,torch.cuda.memory_allocated()) l.append(x**i) unnecessary_compute() </code></pre> <p>Running this script with PyTorch (1.11) generates the following output:</p> <pre><code>0 4000256 1 8000512 2 12000768 3 16001024 4 20971520 </code></pre> <p>Given that <a href="https://pytorch.org/docs/master/notes/cuda.html#asynchronous-execution" rel="nofollow noreferrer">PyTorch uses asynchronous computation</a> and we never evaluated the contents of l or of a tensor that depends on l, why did PyTorch eagerly allocate GPU memory to the new tensors? Is there a way of invoking these tensors in an utterly lazy way (i.e., without triggering GPU memory allocation before it is required)?</p>
<p><code>torch.cuda.memory_allocated()</code> returns the memory that has been allocated, not the memory that has been &quot;used&quot;.</p> <p>In a typical GPU compute pipeline, you would record operations in a queue along with whatever synchronization primitives your API offers. The GPU will then dequeue and execute those operations, respecting the enqueued synchronization primitives. However, GPU memory allocation is not usually an operation which even <em>goes</em> on the queue. Rather, there's usually some sort of fundamental instruction that the CPU can issue to the GPU in order to allocate memory, just as recording operations is <em>another</em> fundamental instruction. This means that <strong>the memory necessary for a GPU operation has to be allocated before the operation has even been enqueued;</strong> there is no &quot;allocate memory&quot; operation in the queue to synchronize with.</p> <p>Consider Vulkan as a simple example. Rendering operations are enqueued on a graphics queue. However, memory is typically allocated via calls to <code>vkAllocateMemory()</code>, which does not accept any sort of queue at all; it only accepts the device handle and information about the allocation (size, memory type, etc). From my understanding, the allocation is done &quot;immediately&quot; / synchronously (the memory is safe to use by the time the function call returns on the CPU).</p> <p>I don't know enough about GPUs to explain why this is the case, but I'm sure there's a good reason. And perhaps the limitations vary from device to device. But if I were to guess, memory allocation probably has to be a fairly centralized operation; it can't be done by just any core executing recorded operations on a queue. This would make sense, at least; the space of GPU memory is usually shared across cores.</p> <p>Let's apply this knowledge to answer your question: When you call <code>l.append(x**i)</code>, you're trying to record a compute operation. That operation will require memory to store the result, and so PyTorch is likely allocating the memory prior to enqueuing the operation. This explains the behavior you're seeing.</p> <p>However, this doesn't invalidate PyTorch's claims about asynchronous compute. The memory might be allocated synchronously, but it won't be populated with the result of the operation until the operation has been dequeued and completed by the GPU, which indeed happens asynchronously.</p>
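<p>A small sketch of the distinction on the PyTorch side (the allocation is visible immediately, while the kernel itself still runs asynchronously):</p>

<pre><code>import torch

x = torch.randn(1000, 1000, device='cuda')
before = torch.cuda.memory_allocated()
y = x @ x                                      # enqueues the matmul; output memory is allocated now
print(torch.cuda.memory_allocated() - before)  # &gt; 0 even before the kernel has finished
torch.cuda.synchronize()                       # only here do we wait for the GPU to compute y
</code></pre>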
memory-management|pytorch|gpu|lazy-evaluation
1
4,067
70,629,560
What is the difference between batch, batch_size, timesteps & features in Tensorflow?
<p>I am new to deep learning and I am utterly confused about the terminology.</p> <p>In the Tensorflow documentation,</p> <p>for [RNN layer] <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN#input_shape" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN#input_shape</a></p> <pre><code>N-D tensor with shape [batch_size, timesteps, ...] </code></pre> <p>for [LSTM layer] <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM</a></p> <pre><code>inputs: A 3D tensor with shape [batch, timesteps, feature]. </code></pre> <ol> <li><p>I understand for the input_shape, we don't have to specify the batch/batch size. But still I would like to know the difference between batch &amp; batch size.</p> </li> <li><p>What is time-steps vs features?</p> </li> </ol> <p>Is the 1st Dimension always the batch? The 2nd-D = Time-steps, and 3rd-D = Features?</p> <p><strong>Example 1</strong></p> <pre><code>data = array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) data = data.reshape((1, 5, 2)) print(data.shape) --&gt; (1, 5, 2) print(data) [[[ 1 2] [ 3 4] [ 5 6] [ 7 8] [ 9 10]]] model = Sequential() model.add(LSTM(32, input_shape=(5, 2))) </code></pre> <p><strong>Example 2</strong></p> <pre><code>data1 = array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11]) n_features = 1 data1 = data1.reshape((len(data1), n_features)) print(data1) # define generator n_input = 2 generator = TimeseriesGenerator(data1, data1, length=n_input, stride=2, batch_size=10) # number of batch print('Batches: %d' % len(generator)) # OUT --&gt; Batches: 1 # print each batch for i in range(len(generator)): x, y = generator[i] print('%s =&gt; %s' % (x, y)) x, y = generator[0] print(x.shape) [[[ 1] [ 2]] [[ 3] [ 4]] [[ 5] [ 6]] [[ 7] [ 8]] [[ 9] [10]]] =&gt; [[ 3] [ 5] [ 7] [ 9] [11]] (5, 2, 1) # define model model = Sequential() model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features))) </code></pre>
<h2>Difference between <code>batch_size</code> v. <code>batch</code></h2>

<p>In the documentation you quoted, <code>batch</code> means <code>batch_size</code>.</p>

<h2>Meaning of <code>timesteps</code> and <code>feature</code></h2>

<p>Taking a glance at <a href="https://www.tensorflow.org/tutorials/structured_data/time_series" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/structured_data/time_series</a> (a weather forecast example with real-world data!) will help you understand more about time-series data.</p>

<p><code>feature</code> is what you want the model to make predictions from; in the above forecast example, it is a vector (array) of pressure, temperature, etc...</p>

<p>RNN/ LSTM are designed to handle time-series. This is why you need to feed <code>timesteps</code>, along with <code>feature</code>, to your model. <code>timesteps</code> represents when the data is recorded; again, in the example above, data is sampled every hour, so <code>timesteps == 0</code> is the data taken at the first hour, <code>timesteps == 1</code> the second hour, ...</p>

<h2>Order of dimensions of the input/ output data</h2>

<p>In TensorFlow, the first dimension of data <em>often</em> represents a batch.</p>

<p>What comes after the batch axis depends on the problem field. In general, global features (like batch size) precede element-specific features (like image size).</p>

<p>Examples:</p>

<ul>
<li>time-series data are in <code>(batch_size, timesteps, feature)</code> format.</li>
<li>Image data are often represented in NHWC format: <code>(batch_size, image_height, image_width, channels)</code>.</li>
</ul>

<p>From <a href="https://www.tensorflow.org/guide/tensor#about_shapes" rel="nofollow noreferrer">https://www.tensorflow.org/guide/tensor#about_shapes</a> :</p>

<blockquote>
<p>While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local: The batch axis first, followed by spatial dimensions, and features for each location last. This way feature vectors are contiguous regions of memory.</p>
</blockquote>
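<p>A small sketch tying the three axes together (shapes only; the layer size 32 is arbitrary):</p>

<pre><code>import tensorflow as tf

batch_size, timesteps, features = 8, 24, 3   # 8 sequences of 24 hourly steps, 3 sensors each
x = tf.random.normal((batch_size, timesteps, features))
y = tf.keras.layers.LSTM(32)(x)              # default: one output vector per sequence
print(y.shape)                               # (8, 32)
</code></pre>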
tensorflow|keras|lstm
1
4,068
70,611,600
Displaying None and Values with pandas dictionaries Python
<p>There are <code>None</code> values indicating there is no value for the <code>Last Month</code> row within the dictionary <code>a</code> below. How would I be able to modify the pandas style format so that it could print table <code>a</code> and still place dollar signs in front of the set columns?</p> <pre><code>import numpy as np
import pandas as pd

a = {'Timeframes': ['Entirety:', 'Last Month:', 'Three Months:', 'Six Months:', 'Last Year:', 'Last Two Years:'],
     'Compounding With Lev': np.array([2398012.89, None, 90.07, 85.29, 620.39, 30611.48], dtype=object),
     'Compounding With Seperate Levs': np.array([21165662669.71, None, 91.18, 107.54, 3004.87, 13287947.75], dtype=object),
     'Adjusted Long Compounding Lev': np.array([3.25, None, 1.0, 1.0, 3.5, 4.75], dtype=object),
     'Adjusted Short Compounding Lev': np.array([3.75, None, 1.0, 3.0, 1.0, 2.0], dtype=object),
     'Non Compounding With Lev': np.array([3626.41, None, 89.95, 95.73, 1577.75, 1380.80], dtype=object),
     'Non Compounding With Seperate Levs': np.array([5679.53, None, 91.15, 408.40, 1953.53, 2530.58], dtype=object),
     'Adjusted Long NonCompounding Lev': np.array([4.25, None, 1.0, 1.0, 10.5, 4.25], dtype=object),
     'Adjusted Short NonCompounding Lev': np.array([7.75, None, 1.0, 33.25, 1.0, 7.75], dtype=object)}

display(pd.DataFrame(a).style.format(formatter={'Compounding With Lev': '${:,.2f}',
                                                'Compounding With Seperate Levs': '${:,.2f}',
                                                'Non Compounding With Lev': '${:,.2f}',
                                                'Non Compounding With Seperate Levs': '${:,.2f}'}))
</code></pre> <p>Expected Output:</p> <p><a href="https://i.stack.imgur.com/cG2sk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cG2sk.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import pandas as pd

a = {'Timeframes': ['Entirety:', 'Last Month:', 'Three Months:', 'Six Months:', 'Last Year:', 'Last Two Years:'],
     'Compounding With Lev': np.array([2398012.89, None, 90.07, 85.29, 620.39, 30611.48], dtype=float),
     'Compounding With Seperate Levs': np.array([21165662669.71, None, 91.18, 107.54, 3004.87, 13287947.75], dtype=float),
     'Adjusted Long Compounding Lev': np.array([3.25, None, 1.0, 1.0, 3.5, 4.75], dtype=float),
     'Adjusted Short Compounding Lev': np.array([3.75, None, 1.0, 3.0, 1.0, 2.0], dtype=float),
     'Non Compounding With Lev': np.array([3626.41, None, 89.95, 95.73, 1577.75, 1380.80], dtype=float),
     'Non Compounding With Seperate Levs': np.array([5679.53, None, 91.15, 408.40, 1953.53, 2530.58], dtype=float),
     'Adjusted Long NonCompounding Lev': np.array([4.25, None, 1.0, 1.0, 10.5, 4.25], dtype=float),
     'Adjusted Short NonCompounding Lev': np.array([7.75, None, 1.0, 33.25, 1.0, 7.75], dtype=float)}

display(pd.DataFrame(a).style.format(formatter={'Compounding With Lev': '${:,.2f}',
                                                'Compounding With Seperate Levs': '${:,.2f}',
                                                'Non Compounding With Lev': '${:,.2f}',
                                                'Non Compounding With Seperate Levs': '${:,.2f}'},
                                     na_rep='None'))
</code></pre> <p>By changing to <code>dtype=float</code> we allow <code>numpy</code> to place <code>nan</code> values in the arrays. The <code>na_rep</code> parameter is the string the formatter substitutes wherever a value is missing.</p> <p>To convert dictionary <code>a</code> in case it is read like that from a file and the arrays are already <code>dtype=object</code>:</p> <pre><code>for k, v in a.items():
    if isinstance(v, np.ndarray):
        a[k] = v.astype(float)
</code></pre>
python|pandas|database|numpy|format
1
4,069
70,706,171
Displaying all multindex labels in pandas dataframe as html
<p>Whenever a MultiIndex is used, pandas merges repeated index values during export with <code>to_html</code>. I am looking for a way to unmerge them or disable merging, so that even if values are repeated in the index, they are not merged. Currently pandas displays the data as</p> <p><img src="https://i.stack.imgur.com/5nHpl.png" alt="enter image description here" /></p> <p>Whereas I require it to be as</p> <p><img src="https://i.stack.imgur.com/HGiZE.png" alt="enter image description here" /></p>
<p>When you export with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_html.html" rel="nofollow noreferrer"><code>to_html</code></a>, use the <code>sparsify=False</code> option:</p> <pre><code>df.to_html('output.html', sparsify=False) </code></pre> <blockquote> <p>sparsify: bool, optional, default True</p> <p>Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.</p> </blockquote>
pandas|dataframe|multi-index
0
4,070
42,831,629
Changing label name when retraining Inception on Google Cloud ML
<p>I currently follow the tutorial to retrain Inception for image classification: <a href="https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow" rel="nofollow noreferrer">https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow</a></p> <p>However, when I make a prediction with the API I only get the index of my class as a label. I would like the API to actually give me back a string with the actual class name, e.g. instead of</p> <pre><code>predictions:
- key: '0'
  prediction: 4
  scores:
  - 8.11998e-09
  - 2.64907e-08
  - 1.10307e-06
</code></pre> <p>I would like to get:</p> <pre><code>predictions:
- key: '0'
  prediction: ROSES
  scores:
  - 8.11998e-09
  - 2.64907e-08
  - 1.10307e-06
</code></pre> <p>Looking at the reference for the Google API it should be possible: <a href="https://cloud.google.com/ml-engine/reference/rest/v1/projects/predict" rel="nofollow noreferrer">https://cloud.google.com/ml-engine/reference/rest/v1/projects/predict</a></p> <p>I already tried to change the following in model.py from</p> <pre><code>outputs = {
    'key': keys.name,
    'prediction': tensors.predictions[0].name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))
</code></pre> <p>to</p> <pre><code>if tensors.predictions[0].name == 0:
    pred_name = 'roses'
elif tensors.predictions[0].name == 1:
    pred_name = 'tulips'

outputs = {
    'key': keys.name,
    'prediction': pred_name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))
</code></pre> <p>but this doesn't work.</p> <p>My next idea was to change this part in the preprocess.py file. So instead of getting the index I want to use the string label.</p> <pre><code>def process(self, row, all_labels):
    try:
        row = row.element
    except AttributeError:
        pass
    if not self.label_to_id_map:
        for i, label in enumerate(all_labels):
            label = label.strip()
            if label:
                self.label_to_id_map[label] = label  # i
</code></pre> <p>and</p> <pre><code>label_ids = []
for label in row[1:]:
    try:
        label_ids.append(label.strip())
        # label_ids.append(self.label_to_id_map[label.strip()])
    except KeyError:
        unknown_label.inc()
</code></pre> <p>but this gives the error:</p> <pre><code>TypeError: 'roses' has type &lt;type 'str'&gt;, but expected one of: (&lt;type 'int'&gt;, &lt;type 'long'&gt;) [while running 'Embed and make TFExample']
</code></pre> <p>hence I thought that I should change something here in preprocess.py, in order to allow strings:</p> <pre><code>example = tf.train.Example(features=tf.train.Features(feature={
    'image_uri': _bytes_feature([uri]),
    'embedding': _float_feature(embedding.ravel().tolist()),
}))

if label_ids:
    label_ids.sort()
    example.features.feature['label'].int64_list.value.extend(label_ids)
</code></pre> <p>But I don't know how to change it appropriately as I could not find something like str_list. Could anyone please help me out here?</p>
<p>Online prediction certainly allows this, but the model itself needs to be updated to do the conversion from int to string.</p> <p>Keep in mind that the Python code is just building a graph which describes what computation to do in your model -- you're not sending the Python code to online prediction, you're sending the graph you build.</p> <p>That distinction is important because the changes you have made are in Python -- you don't yet have any inputs or predictions, so you won't be able to inspect their values. What you need to do instead is add the equivalent lookups to the graph that you're exporting.</p> <p>You could modify the code like so:</p> <pre><code>labels = tf.constant(['cars', 'trucks', 'suvs'])
predicted_indices = tf.argmax(softmax, 1)
prediction = tf.gather(labels, predicted_indices)
</code></pre> <p>And leave the inputs/outputs untouched from the original code.</p>
tensorflow|google-cloud-platform|image-recognition|google-cloud-ml
0
4,071
30,419,722
In PIL, why isn't convert('L') turning image grayscale?
<p>For a program I'm writing, I need to convert an RGB image to grayscale and read it as a NumPy array using PIL.</p> <p>But when I run the following code, it converts the image not to grayscale, but to a strange color distortion a bit like the output of a thermal camera, as presented.</p> <p>Any idea what the problem might be?</p> <p>Thank you!</p> <p><a href="http://www.loadthegame.com/wp-content/uploads/2014/09/thermal-camera.png" rel="nofollow">http://www.loadthegame.com/wp-content/uploads/2014/09/thermal-camera.png</a></p> <pre><code>from PIL import Image
from numpy import *
from pylab import *

im = array(Image.open('happygoat.jpg').convert("L"))
inverted = Image.fromarray(im)
imshow(inverted)
show()
</code></pre>
<p>matplotlib's <code>imshow</code> is aimed at scientific representation of data - not just image data. By default it's configured to use a high contrast color palette.</p> <p>You can force it to display data using grayscale by passing the following option:</p> <pre><code>import matplotlib.cm
imshow(inverted, cmap=matplotlib.cm.Greys_r)
</code></pre>
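<p>In recent matplotlib versions you can also pass the colormap by name and skip the extra import:</p> <pre><code>imshow(inverted, cmap='gray')
</code></pre>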
python|numpy|python-imaging-library
9
4,072
26,438,710
Pandas Dataframe reindexing issue
<p>I have a DF that looks like this:</p> <pre><code>          value
objectID
ab798     54.68
ab799     45.98
ab800     38.79
etc..     etc..
</code></pre> <p>where "value" is accessible as a column but "objectID" isn't; it's as if the DF has been indexed by "objectID". I want objectID to be a column header like value and to be able to access all of its rows (ab798, ab799, etc...) by calling df.objectID.</p>
<p>You can reset the index (note that <code>reset_index</code> returns a new DataFrame, so reassign the result):</p> <pre><code>df = df.reset_index()
</code></pre> <p>Or, you can access it as an index:</p> <pre><code>df.index.values
</code></pre>
pandas|indexing|dataframe
1
4,073
26,652,873
numpy 3D indexing by list
<p>Why do these different ways of indexing into X return different values?</p> <pre><code>print vertices[0]
print X[vertices[0]]
print X[tuple(vertices[0])]
print X[vertices[0][0]], X[vertices[0][1]], X[vertices[0][2]]
print [X[v] for v in vertices[0]]
</code></pre> <p>And the output:</p> <pre><code>[(0, 2, 3), (0, 2, 4), (0, 3, 3)]
[-1.         -0.42857143  0.14285714]
[-1.         -0.42857143  0.14285714]
-0.428571428571 -0.428571428571 -0.142857142857
[-0.4285714285714286, -0.4285714285714286, -0.1428571428571429]
</code></pre> <p>How can I use <code>vertices[0]</code> to get the output in the last line?</p>
<p>If you had used four vertices instead of three, writing</p> <pre><code>vertices = [[(0, 2, 3), (0, 2, 4), (0, 3, 3), (3,3,3)],]
</code></pre> <p>followed by</p> <pre><code>print X[tuple(vertices[0])]
</code></pre> <p>then the error message</p> <pre><code>IndexError: too many indices for array
</code></pre> <p>would have shown that the right way to go is</p> <pre><code>print X[zip(*vertices[0])]
</code></pre> <p>or defining the elements of vertices like</p> <pre><code># vertices[0] = [(0, 2, 3), (0, 2, 4), (0, 3, 3), (3,3,3)]
# 4 different i's, 4 j's, 4 k's
vertices[0] = [(0,0,0,3), (2,2,3,3), (3,4,3,3)]
</code></pre>
python|numpy
1
4,074
39,384,066
Bokeh - Link dataframe time series data to Select interaction
<p>I'm trying to create a single line chart in Bokeh and link different charts in one dataframe to a Select interaction. The dataframe structure looks something like this:</p> <p>Date, KPI1, KPI2, KPI3, KPI4 ... ...</p> <p>Date is always the x axis, whereas KPIs should be changeable on the y axis.</p> <p>I can't get it to work. Please see my current code below. Explanations are shown as comments.</p> <pre><code>from app import app
from flask import render_template,request
import pandas as pd
import numpy as np
from bokeh.embed import components
from bokeh.plotting import figure
from bokeh.resources import CDN
from bokeh.embed import file_html
from bokeh.plotting import figure, output_file, show
from bokeh.io import output_file, show, vform
from bokeh.charts import Scatter, output_file, show
from bokeh.models import DatetimeTickFormatter
from bokeh.models import CustomJS, ColumnDataSource, Select
from app import data

def createChartHTML():
    # Import data from Excel file
    # Data structure looks as follows: Date, KPI 1, KPI 2, KPI 3, KPI 4 ... ...
    allData = pd.read_excel(open('Charts.xlsx', 'rb'), sheetname='DATA')

    # Get list of column names
    columnNameList = list(allData.columns.values)
    # Remove date column from column names
    columnNameList.pop(0)

    # Create line chart with markers with initial data
    p = figure(plot_width=800, plot_height=400, x_axis_type="datetime", title="KPI 1")
    p.left[0].formatter.use_scientific = False
    p.line(allData['Date'], allData["KPI 1"], line_width=2)
    p.circle(allData['Date'], allData["KPI 1"], fill_color="white", size=12)

    # Create callback
    source = ColumnDataSource(data=allData)

    # How do I link this code to the allData dataframe?
    # I need to pass a parameter from the selection box to this code
    # to select the right data from the dataframe
    callback = CustomJS(args=dict(source=source), code="""
        var data = source.get('data');
        var f = cb_obj.get('value')
        x = data['LINK THIS TO NEWLY SELECTED DATA?????']
        y = data['Date']
        source.trigger('change');
    """)

    select = Select(title="Option:", value=columnNameList[0], options=columnNameList, callback=callback)

    layout = vform(select, p)
    output_file('plot.html', title='Plot')
    html = file_html(layout, CDN, "my plot")
    return(html)
</code></pre>
<p>I found an answer to my question here:</p> <p><a href="https://groups.google.com/a/continuum.io/forum/#!topic/bokeh/2rCCRIyXtk8" rel="nofollow">https://groups.google.com/a/continuum.io/forum/#!topic/bokeh/2rCCRIyXtk8</a></p>
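<p>For reference, a common pattern for this kind of task (a sketch only -- it assumes the old Bokeh 0.x API used in the question, and the column names are illustrative, not necessarily what the linked thread proposes) is to drive the glyph from a <code>ColumnDataSource</code> with a generic <code>y</code> column and let the <code>CustomJS</code> callback copy the selected KPI column into it:</p> <pre><code>source = ColumnDataSource(data=dict(
    x=allData['Date'],
    y=allData[columnNameList[0]],                 # KPI currently shown
    **{c: allData[c] for c in columnNameList}))   # all selectable KPI columns

p.line('x', 'y', source=source, line_width=2)

callback = CustomJS(args=dict(source=source), code="""
    var data = source.get('data');
    var f = cb_obj.get('value');  // KPI name picked in the Select
    data['y'] = data[f];          // swap the plotted column
    source.trigger('change');
""")

select = Select(title="Option:", value=columnNameList[0],
                options=columnNameList, callback=callback)
</code></pre>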
python|pandas|bokeh
1
4,075
39,375,738
How can I pass the previous state of a tuple-based tf.nn.MultiRNNCell to the next sess.run() call in TensorFlow?
<p>I am using a stack of RNNs built with <code>tf.nn.MultiRNNCell</code> and I want to pass the <code>final_state</code> to the next graph invocation. Since tuples are not supported in the feed dictionary, is stacking the cell states and slicing the input to yield a tuple at the beginning of the graph the only way of accomplishing that, or is there some functionality in TensorFlow that allows to do that?</p>
<p>Suppose you have 3 RNNCells in your MultiRNNCell and each is an LSTMCell with an LSTMStateTuple state. You must replicate this structure with placeholders:</p> <pre><code>lstm0_c = tf.placeholder(...)
lstm0_h = tf.placeholder(...)
lstm1_c = tf.placeholder(...)
lstm1_h = tf.placeholder(...)
lstm2_c = tf.placeholder(...)
lstm2_h = tf.placeholder(...)

initial_state = (
    tf.nn.rnn_cell.LSTMStateTuple(lstm0_c, lstm0_h),
    tf.nn.rnn_cell.LSTMStateTuple(lstm1_c, lstm1_h),
    tf.nn.rnn_cell.LSTMStateTuple(lstm2_c, lstm2_h))

...

sess.run(..., feed_dict={
    lstm0_c: final_state[0].c,
    lstm0_h: final_state[0].h,
    lstm1_c: final_state[1].c,
    lstm1_h: final_state[1].h,
    ...
})
</code></pre> <p>If you have N stacked LSTM layers you can programmatically create the placeholders and feed_dict with for loops, as sketched below.</p>
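<p>A sketch of that programmatic version (assuming <code>num_units</code> is the LSTM state size and <code>final_state</code> holds the state tuple returned by the previous <code>sess.run</code> call):</p> <pre><code>num_layers, num_units = 3, 128

state_placeholders = []
for i in range(num_layers):
    c = tf.placeholder(tf.float32, [None, num_units], name='c%d' % i)
    h = tf.placeholder(tf.float32, [None, num_units], name='h%d' % i)
    state_placeholders.append(tf.nn.rnn_cell.LSTMStateTuple(c, h))
initial_state = tuple(state_placeholders)

# build the feed_dict from the state values of the previous run
feed_dict = {}
for placeholder, value in zip(state_placeholders, final_state):
    feed_dict[placeholder.c] = value.c
    feed_dict[placeholder.h] = value.h
</code></pre>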
tensorflow
4
4,076
38,986,074
Creating a matrix of a certain size from a dictionary
<p>I want to solve a system of equations through linalg.solve(A, b) ("Solve a linear matrix equation, or system of linear scalar equations", from scipy.org). Specifically, I have two dictionaries, dict1 and dict2, and I need to convert them to matrices in order to use the above script.</p> <pre><code>food = ['fruits', 'vegetables', 'bread', 'meat']
frequency = ['daily', 'rarely']
consumptions = {'fruits': {'daily': 6, 'rarely': 4},
                'vegetables': {'daily': 8, 'rarely': 6},
                'bread': {'daily': 2, 'rarely': 1},
                'meat': {'daily': 2, 'rarely': 1}}

dict1 = {}
for f in food:  # type of food
    for j in food:
        dict2 = {}
        total = 0.
        for q in frequency:
            dict2.update({q: (consumptions.get(j).get(q) * consumptions.get(f).get(q))})
        key = f + 'v' + j  # comparing the different foods
        dict1.update({key: dict2})
</code></pre> <p>This gives me:</p> <pre><code>{'breadvbread': {'daily': 4, 'rarely': 1},
 'breadvfruits': {'daily': 12, 'rarely': 4},
 'breadvmeat': {'daily': 4, 'rarely': 1},
 'breadvvegetables': {'daily': 16, 'rarely': 6},
 'fruitsvbread': {'daily': 12, 'rarely': 4},
 'fruitsvfruits': {'daily': 36, 'rarely': 16},
 'fruitsvmeat': {'daily': 12, 'rarely': 4},
 'fruitsvvegetables': {'daily': 48, 'rarely': 24},
 'meatvbread': {'daily': 4, 'rarely': 1},
 'meatvfruits': {'daily': 12, 'rarely': 4},
 'meatvmeat': {'daily': 4, 'rarely': 1},
 'meatvvegetables': {'daily': 16, 'rarely': 6},
 'vegetablesvbread': {'daily': 16, 'rarely': 6},
 'vegetablesvfruits': {'daily': 48, 'rarely': 24},
 'vegetablesvmeat': {'daily': 16, 'rarely': 6},
 'vegetablesvvegetables': {'daily': 64, 'rarely': 36}}
</code></pre> <p>I would like to convert this into a 4 x 4 matrix since I am using 4 types of foods. I did not put dict2 here, as once I figure out how to convert one dictionary to a matrix, I can do the other; but if you need it, I can update.</p> <p>I am new to Python and wanted to play around with dictionaries and the matrix solver :) . It was easy to do it with arrays, but now I want to see how to go about it if I have dictionaries.</p>
<p>You can create a numpy array from the dictionary using list comprehensions:</p> <pre><code>import numpy as np

A = np.array([[(consumptions[x]["daily"]*consumptions[y]["daily"],
                consumptions[x]["rarely"]*consumptions[y]["rarely"]) for y in food] for x in food])
</code></pre> <p>This will give you:</p> <pre><code>array([[[36, 16],
        [48, 24],
        [12,  4],
        [12,  4]],

       [[48, 24],
        [64, 36],
        [16,  6],
        [16,  6]],

       [[12,  4],
        [16,  6],
        [ 4,  1],
        [ 4,  1]],

       [[12,  4],
        [16,  6],
        [ 4,  1],
        [ 4,  1]]])
</code></pre> <p>This is a 4x4x2 array:</p> <pre><code>&gt; A.shape
(4, 4, 2)
</code></pre> <p>Then, to get a 4x4 matrix of the <code>daily</code> values and the <code>rarely</code> values separately, use numpy's advanced slicing. Unlike Python lists, numpy arrays can be sliced over multiple dimensions at once. This is done by placing a slice object (ex: <code>3:</code>, <code>0</code>, <code>:</code>) within the brackets for each dimension of the array, separated by commas. Our array, <code>A</code>, has three dimensions:</p> <pre><code>&gt; A.ndim
3
</code></pre> <p>The third dimension indicates whether a value is "daily" (0) or "rarely" (1). So to get all of the daily values, we want all of the rows (<code>:</code>), all of the columns (<code>:</code>), and only the first entry in the third dimension (<code>0</code>). With numpy's advanced slicing, we just separate the slice we want for each dimension with commas:</p> <pre><code>&gt; daily = A[:, :, 0]
&gt; daily
array([[36, 48, 12, 12],
       [48, 64, 16, 16],
       [12, 16,  4,  4],
       [12, 16,  4,  4]])

&gt; rarely = A[:, :, 1]
&gt; rarely
array([[16, 24,  4,  4],
       [24, 36,  6,  6],
       [ 4,  6,  1,  1],
       [ 4,  6,  1,  1]])
</code></pre> <p>If you want to make the meaning of these values more explicit, you can convert the numpy arrays to a pandas DataFrame:</p> <pre><code>&gt; import pandas as pd
&gt; df = pd.DataFrame(daily, columns=food, index=food)
&gt; df
            fruits  vegetables  bread  meat
fruits          36          48     12    12
vegetables      48          64     16    16
bread           12          16      4     4
meat            12          16      4     4
</code></pre> <p>See <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing</a> for more info on advanced slicing.</p>
python|python-2.7|numpy|dictionary|matrix
2
4,077
39,029,480
Pandas Python - Finding Time Series Not Covered
<p>Hoping someone can help me out with this one because I don't even know where to start.</p> <p>Given a data frame that contains a series of start and end times, such as:</p> <pre><code>Order   Start Time                End Time
1       2016-08-18 09:30:00.000   2016-08-18 09:30:05.000
1       2016-08-18 09:30:00.005   2016-08-18 09:30:25.001
1       2016-08-18 09:30:30.001   2016-08-18 09:30:56.002
1       2016-08-18 09:30:40.003   2016-08-18 09:31:05.003
1       2016-08-18 11:30:45.000   2016-08-18 13:31:05.000
</code></pre> <p>For each order id, I am looking to find a list of time periods that are not covered by any of the ranges between the earliest start time and latest end time.</p> <p>So in the example above, I would be looking for</p> <pre><code>2016-08-18 09:30:05.000 to 2016-08-18 09:30:00.005   (the time lag between the first and second rows)
2016-08-18 09:30:25.001 to 2016-08-18 09:30:30.001   (the time lag between the second and third rows)
</code></pre> <p>and</p> <pre><code>2016-08-18 09:31:05.003 to 2016-08-18 11:30:45.000   (the time period between rows 4 and 5)
</code></pre> <p>There is overlap between rows 3 and 4, so they wouldn't count.</p> <p><strong>A few things to consider (additional color):</strong></p> <p>Each record indicates an outstanding order placed at (for example) one of the stock exchanges. Therefore, I can have orders open at Nasdaq and NYSE at the same time. I also can have a short duration order at Nasdaq and a long one at NYSE starting at the same time.</p> <p>That would look as follows:</p> <pre><code>Order   Start Time                End Time
1       2016-08-18 09:30:00.000   2016-08-18 09:30:05.000   (NYSE)
1       2016-08-18 09:30:00.001   2016-08-18 09:30:00.002   (NASDAQ)
</code></pre> <p>I am trying to figure out when we are doing nothing at all, and I have no live orders on any exchanges.</p> <p>I have zero idea where to even start on this.. any ideas would be appreciated</p>
<h3>Setup</h3> <pre><code>from io import StringIO  # on Python 2 this was: from StringIO import StringIO
import pandas as pd

text = """Order  Start Time               End Time
1      2016-08-18 09:30:00.000  2016-08-18 09:30:05.000
1      2016-08-18 09:30:00.005  2016-08-18 09:30:25.001
1      2016-08-18 09:30:30.001  2016-08-18 09:30:56.002
1      2016-08-18 09:30:40.003  2016-08-18 09:31:05.003
1      2016-08-18 11:30:45.000  2016-08-18 13:31:05.000
2      2016-08-18 09:30:00.000  2016-08-18 09:30:05.000
2      2016-08-18 09:30:00.005  2016-08-18 09:30:25.001
2      2016-08-18 09:30:30.001  2016-08-18 09:30:56.002
2      2016-08-18 09:30:40.003  2016-08-18 09:31:05.003
2      2016-08-18 11:30:45.000  2016-08-18 13:31:05.000"""

df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2])
</code></pre> <h3>Solution</h3> <pre><code>def find_gaps(df, start_text='Start Time', end_text='End Time'):
    # rearrange stuff to get all times and a tracker
    # in single columns.
    cols = [start_text, end_text]
    df = df.reset_index()
    df1 = df[cols].stack().reset_index(-1)
    df1.columns = ['edge', 'time']
    df1['edge'] = df1['edge'].eq(start_text).mul(2).sub(1)

    # sort by ascending time, then descending edge
    # (starts before ends if equal time);
    # this will ensure we avoid zero length gaps.
    df1 = df1.sort_values(['time', 'edge'], ascending=[True, False])

    # we identify gaps when we've reached a number
    # of ends equal to the number of starts.
    # we'll track that with cumsum; when cumsum is
    # zero, we've found a gap.
    # the last position should always be zero and is not a gap,
    # so I remove it.
    track = df1['edge'].cumsum().iloc[:-1]

    gap_starts = track.index[track == 0]
    gaps = df.loc[gap_starts]  # .ix in older pandas
    gaps[start_text] = gaps[end_text]
    gaps[end_text] = df.shift(-1).loc[gap_starts, start_text]

    return gaps

df.set_index('Order').groupby(level=0).apply(find_gaps)
</code></pre> <p><a href="https://i.stack.imgur.com/9Gyyl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Gyyl.png" alt="enter image description here"></a></p>
python|pandas|time-series
1
4,078
33,801,584
pandas multi index slicing "Level type mismatch"
<p>I moved to pandas version 0.17 from 0.13.1 and I get some new errors on slicing.</p> <pre><code>&gt;&gt;&gt; df
        date  int  data
0 2014-01-01    0     0
1 2014-01-02    1    -1
2 2014-01-03    2    -2
3 2014-01-04    3    -3
4 2014-01-05    4    -4
5 2014-01-06    5    -5

&gt;&gt;&gt; df.set_index("date").ix[datetime.date(2013,12,30):datetime.date(2014,1,3)]
            int  data
date
2014-01-01    0     0
2014-01-02    1    -1
2014-01-03    2    -2

&gt;&gt;&gt; df.set_index(["date","int"]).ix[datetime.date(2013,12,30):datetime.date(2014,1,3)]
Traceback (most recent call last):
...
TypeError: Level type mismatch: 2013-12-30
</code></pre> <p>It's working fine with 0.13.1, and it seems specific to a multi-index with dates. Am I doing something wrong here?</p>
<p>This error occurs because you're trying to slice on dates (labels) that are not included in the index. To avoid the level-mismatch error and select a range whose endpoints may or may not be present in the MultiIndex, use:</p> <pre><code>df.loc[df.index.get_level_values(level='date') &gt;= datetime.date(2013,12,30)]
# You can use a string also, i.e. '2013-12-30'
</code></pre> <p><code>get_level_values()</code> and the comparison operator build a boolean mask for the indexer.</p> <p>Slicing with a string or date object normally works in pandas with a single index regardless of whether the label is in the index, but it doesn't work on MultiIndex dataframes. Though you attempted to select the rows from 2013-12-30 to 2014-01-03 with the <code>datetime.date(2013,12,30):datetime.date(2014,1,3)</code> slice, the resulting df index was from 2014-01-01 to 2014-01-03. One way to select those dates including 2013-12-30 would be to use a date range built from either strings or datetime objects, like:</p> <pre><code>df.set_index("date").loc[pd.date_range('2013-12-30', '2014-01-03')]
</code></pre>
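<p>As an aside, label-based slicing on the first level of a MultiIndex does work for absent endpoints once the index is lexsorted (a sketch, assuming the <code>date</code> level is a real datetime type):</p> <pre><code>df2 = df.set_index(["date", "int"]).sort_index()
df2.loc['2013-12-30':'2014-01-03']
</code></pre>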
python|pandas|slice|multi-index
2
4,079
33,673,281
How to find the appropriate linear fit in Python?
<p>I am trying to find the most appropriate linear fit for a large amount of data that has linear behaviour for most samples. The data (<a href="https://drive.google.com/file/d/0BwwhEMUIYGyTcy1OaVlaZ0FBVms/view?usp=sharing" rel="nofollow noreferrer">link</a>) when plotted in the raw form is as shown below:</p> <p><a href="https://i.stack.imgur.com/Pwj9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pwj9t.png" alt="enter image description here"></a></p> <p>I need the linear fit that encompasses most of the points as shown by the thick orange line in the figure below:</p> <p><a href="https://i.stack.imgur.com/tgIfO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgIfO.png" alt="enter image description here"></a></p> <p>I tried calculating the mean of the points, but how do I extract the linear region using Python?</p> <p><strong>Reproducible code</strong></p> <pre><code>import matplotlib.pyplot as plt
import numpy as np
import itertools
from scipy import optimize

data = np.loadtxt('linear.dat', skiprows = 1, delimiter = '\t')
print data
x = data[:, 0]
y = data[:, 1:]
m = y.shape[0]
n = y.shape[1]

def linear_fit(x, a, b):
    return a * x + b

y_fit = np.empty(shape=(m, n))
for i in range(n):
    fit_y_fit_a, fit_y_fit_b = optimize.curve_fit(linear_fit, x, y[:, i])[0]
    y_fit[:, i] = fit_y_fit_a * x + fit_y_fit_b

y[~np.isfinite(y)] = 0
y_mean = np.mean(y, axis = 1)

fig = plt.figure(figsize=(5, 5))
fig.clf()
plot_y_vs_x = plt.subplot(111)
markers = itertools.cycle(('o', '^', 's', 'v', 'h', '&gt;', 'p', '&lt;'))
for i in range(n):
    plot_y_vs_x.plot(x, y, linestyle = '', marker = markers.next(), alpha = 1, zorder = 2)
# plot_y_vs_x.plot(x, y_fit, linestyle = ':', color = 'gray', linewidth = 0.5, zorder = 1)
plot_y_vs_x.plot(x, y_mean, linestyle = '-', linewidth = 3.0, color = 'red', zorder = 3)
plot_y_vs_x.set_ylim([-10, 10])
plot_y_vs_x.set_ylabel('Y', labelpad = 6)
plot_y_vs_x.set_xlabel('X', labelpad = 6)
fig.savefig('plot.pdf')
plt.close()
</code></pre>
<p>You're looking to calculate the <strong>linear regression</strong> of your points. To do that,</p> <pre><code>import numpy as np
x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T
m, c = np.linalg.lstsq(A, y)[0]
</code></pre> <p>This will give you values m and c that fit to <code>y = mx + c</code>. Simply replace x and y here with your own values as numpy arrays.</p> <p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html</a></p>
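<p>Equivalently, <code>np.polyfit</code> with degree 1 returns the same slope and intercept without building the design matrix by hand:</p> <pre><code>m, c = np.polyfit(x, y, 1)
</code></pre>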
python|numpy|matplotlib|scipy|linear-regression
3
4,080
23,713,434
Installing gfortran for numpy with homebrew
<p>I want to install a working version of <code>numpy</code> using brew. <code>brew install numpy</code> gives the message:</p> <pre><code>==&gt; python setup.py build --fcompiler=gnu95 install --prefix=/usr/local/Cellar/numpy/1.8.1
  File "/private/tmp/numpy-ncUw/numpy-1.8.1/numpy/distutils/fcompiler/gnu.py", line 197, in get_flags_opt
    v = self.get_version()
  File "/private/tmp/numpy-ncUw/numpy-1.8.1/numpy/distutils/fcompiler/__init__.py", line 434, in get_version
    raise CompilerNotFound()
numpy.distutils.fcompiler.CompilerNotFound
</code></pre> <p><code>brew doctor</code> is okay, so it may actually be a missing fortran compiler problem. Try: <code>brew install gfortran</code></p> <pre><code>Error: No available formula for gfortran
</code></pre> <p>Huh. From the comments in the brew <a href="https://github.com/Homebrew/homebrew/pull/28855" rel="nofollow">GitHub issue tracker</a>, it looks like gfortran is no longer in brew. Let's try to download gfortran 4.9.0 from <a href="https://gcc.gnu.org/wiki/GFortranBinaries" rel="nofollow">the project website</a> and set <code>FC=\path\to\gfortran</code> so that brew knows to use it:</p> <pre><code>==&gt; Building with an alternative Fortran compiler
This is unsupported.
Warning: No Fortran optimization information was provided. You may want
to consider setting FCFLAGS and FFLAGS or pass the
`--default-fortran-flags` option to `brew install` if your compiler
is compatible with GCC. If you like the default optimization level of
your compiler, ignore this warning.
==&gt; Downloading https://downloads.sourceforge.net/project/numpy/NumPy/1.8.1/numpy-1.8.1.tar.gz
Already downloaded: /Library/Caches/Homebrew/numpy-1.8.1.tar.gz
==&gt; python setup.py build --fcompiler=gnu95 install --prefix=/usr/local/Cellar/numpy/1.8.1
  File "/private/tmp/numpy-mrQk/numpy-1.8.1/numpy/distutils/fcompiler/gnu.py", line 197, in get_flags_opt
    v = self.get_version()
  File "/private/tmp/numpy-mrQk/numpy-1.8.1/numpy/distutils/fcompiler/__init__.py", line 434, in get_version
    raise CompilerNotFound()
numpy.distutils.fcompiler.CompilerNotFound
</code></pre> <p>Drat, so brew <em>doesn't</em> want to use a non-default fortran compiler. I'm using OSX 10.9 with llvm installed by default, so am wary about adding a gcc install. When llvm took over, many programs had to be re-compiled, and so changing the default compiler (again) seems dangerous.</p> <p>Any advice on how to get brew to complete the installation would be very welcome.</p>
<p><code>brew install gcc</code></p> <p>Numpy install now works fine.</p>
python|numpy|fortran|homebrew|gfortran
9
4,081
22,752,443
Convert nested dictionary into dataframe
<p>My dictionary looks like this</p> <pre><code>mydict = {240594.0: {1322.0: 1.6899999999999999,
                     1323.0: 1.6900000000000002,
                     1324.0: 1.6899999999999999,
                     1325.0: 1.6899999999999999,
                     1326.0: 1.6899999999999999,
                     1327.0: 1.6900000000000002,
                     1328.0: 1.6899999999999999,
                     1329.0: 1.6899999999999999,
                     1356.0: 1.6900000000000002,
                     1357.0: 1.6900000000000002,
                     1358.0: 1.6899999999999999,
                     1359.0: 1.6900000000000002,
                     1360.0: 1.6900000000000002,
                     ...},
          226918.0: {1322.0: 1.6900000000000002,
                     1323.0: 1.6899999999999999,
                     1324.0: 1.6900000000000002,
                     1325.0: 1.6899999999999999,
                     1326.0: 1.6900000000000002,
                     1327.0: 1.6899999999999999,
                     1328.0: 1.6900000000000002,
                     1329.0: 1.6899999999999999,
                     1352.0: 1.6900000000000002,
                     1353.0: 1.6900000000000002,
                     1354.0: 1.6899999999999999
                     ...}}
</code></pre> <p>which is the real value of <code>{iri_key: {week:price, week:price ...}, iri_key: {...}}</code> and I want to convert this dictionary into a dataframe which looks like</p> <pre><code>           week   week   week ...
irikey:   price  price  price ...
irikey:     ...    ...    ...
</code></pre> <p>in the above case</p> <pre><code>                      1322.0  ...
240594.0  1.6899999999999999  ...
226918.0  1.6900000000000002  ...
</code></pre> <p>how could I do this?</p>
<p>As you have probably discovered, <code>DataFrame(mydict)</code> is valid code. You could simply take the transpose (<code>.T</code>) to get your desired result.</p> <p>A better way, in terms of code readability and directness, is available: use the specific DataFrame constructor <code>DataFrame.from_dict</code>, which has a keyword argument <code>orient</code>.</p> <pre><code>In [2]: DataFrame.from_dict(mydict, orient='index')
Out[2]:
        1356  1357  1358  1359  1360  1322  1323  1324  1325  1326  1327  \
226918   NaN   NaN   NaN   NaN   NaN  1.69  1.69  1.69  1.69  1.69  1.69
240594  1.69  1.69  1.69  1.69  1.69  1.69  1.69  1.69  1.69  1.69  1.69

        1328  1329  1352  1353  1354
226918  1.69  1.69  1.69  1.69  1.69
240594  1.69  1.69   NaN   NaN   NaN

[2 rows x 16 columns]
</code></pre> <p>As you can see from the example data you provided, missing values and variable lengths are handled properly.</p>
python|dictionary|pandas|dataframe
2
4,082
22,704,802
numpy to generate discrete probability distribution
<p>I'm following a code example I found at <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#subclassing-rv-discrete" rel="nofollow noreferrer">http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#subclassing-rv-discrete</a> for implementing a random number generator for discrete values of a normal distribution. The exact example (not surprisingly) works quite well, but if I modify it to allow only left- or right-tailed results, the distribution around 0 is too low (bin zero should contain more values). I must have hit a boundary condition, but am unable to work it out. Am I missing something?</p> <p>This is the result of counting the random numbers per bin:</p> <pre><code>np.bincount(rvs)
[1082 2069 1833 1533 1199 837 644 376 218 111 55 20 12 7 2 2]
</code></pre> <p>This is the histogram:</p> <p><img src="https://i.stack.imgur.com/Vl2Dw.png" alt="enter image description here"></p> <pre><code>from scipy import stats
np.random.seed(42)

def draw_discrete_gaussian(rng, tail='both'):
    # number of integer support points of the distribution minus 1
    npoints = rng if tail == 'both' else rng * 2
    npointsh = npoints / 2
    npointsf = float(npoints)
    # bounds for the truncated normal
    nbound = 4
    # actual bounds of truncated normal
    normbound = (1+1/npointsf) * nbound
    # integer grid
    grid = np.arange(-npointsh, npointsh+2, 1)
    # bin limits for the truncnorm
    gridlimitsnorm = (grid-0.5) / npointsh * nbound
    # used later in the analysis
    gridlimits = grid - 0.5
    grid = grid[:-1]
    probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
    gridint = grid
    normdiscrete = stats.rv_discrete(values=(gridint, np.round(probs, decimals=7)), name='normdiscrete')
    # print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f' % normdiscrete.stats(moments = 'mvsk')
    rnd_val = normdiscrete.rvs()
    if tail == 'both':
        return rnd_val
    if tail == 'left':
        return -abs(rnd_val)
    elif tail == 'right':
        return abs(rnd_val)

rng = 15
tail = 'right'
rvs = [draw_discrete_gaussian(rng, tail=tail) for i in xrange(10000)]

if tail == 'both':
    rng_min = rng / -2.0
    rng_max = rng / 2.0
elif tail == 'left':
    rng_min = -rng
    rng_max = 0
elif tail == 'right':
    rng_min = 0
    rng_max = rng

gridlimits = np.arange(rng_min-.5, rng_max+1.5, 1)
print gridlimits
f, l = np.histogram(rvs, bins=gridlimits)  # cheap way of creating histogram

import matplotlib.pyplot as plt
%matplotlib inline
bins, edges = f, l
left, right = edges[:-1], edges[1:]
X = np.array([left, right]).T.flatten()
Y = np.array([bins, bins]).T.flatten()
# print 'rvs', rvs
print 'np.bincount(rvs)', np.bincount(rvs)
plt.plot(X, Y)
plt.show()
</code></pre>
<p>I try to answer my own question based on comments from @user333700 and @user235711:</p> <p>I insert into the method before <code>normdiscrete = ...</code></p> <pre><code>if tail == 'right':
    gridint = gridint[npointsh:]
    probs = probs[npointsh:]
    s = probs.sum()
    probs = probs / s
elif tail == 'left':
    gridint = gridint[0: npointsh]
    probs = probs[0: npointsh]
    s = probs.sum()
    probs = probs / s
</code></pre> <p>The resulting histograms look much nicer:</p> <p><img src="https://i.stack.imgur.com/jjC4u.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/axRxi.png" alt="enter image description here"></p>
python|numpy|scipy
0
4,083
62,141,428
Is there a way to get the element 1 before/after upper indexing limit?
<p>I have a DataFrame with a time index, and a time value up to which I want to slice the DataFrame:</p> <pre><code>df[:upper_Timeindex_Timevalue]
</code></pre> <p>The question:</p> <p>How do I get the element before this "limiting index"?</p> <pre><code>df[:upper_Timeindex_Timevalue - 1]  # this wouldn't work, because timevalue - 1 is not necessarily the next value in the dataframe
</code></pre> <p>And how do I get the element after this "limiting index"?</p> <pre><code>df[:upper_Timeindex_Timevalue + 1]  # this wouldn't work, because timevalue + 1 is not necessarily the next value in the dataframe
</code></pre>
<p>You can use boolean indexing, then pick the row positionally (note that <code>DataFrame.last()</code>/<code>first()</code> require an offset argument, so use <code>iloc</code> here):</p> <pre><code>df.loc[df.index &lt; upper_Timeindex_Timevalue].iloc[-1]
</code></pre> <p>or</p> <pre><code>df.loc[df.index &gt; lower_Timeindex_TimeValue].iloc[0]
</code></pre>
python|pandas|dataframe|indexing|datetimeindex
1
4,084
62,141,614
Concatenate Series using For
<p>I'm having some trouble creating a DataFrame from some Series. Is there a way to concat them with a for loop? Each time I try, I only get the last Series in the DF, when I really want them concatenated as columns, not in place.</p> <pre><code>suma_queries = list()

for query in queries:
    cur.execute(query)
    schema = lib.get_schema_sql(cursor = cur)
    table = lib.get_table_sql(cur)
    df = pd.DataFrame(data = table, columns = schema)
    suma_queries.append(df.iloc[:,18].sum())

suma_queries = pd.Series(suma_queries)
concat_df = pd.concat([suma_queries], axis=1)
</code></pre> <p>As you see, for each "suma_queries" Series gotten from the for loop, I try to concatenate it to a dataframe called concat_df, and so on for the next "suma_queries" Series, but in the end, I only get the last Series, because the loop replaces the value.</p> <p>What I want at the end should be a dataframe like:</p> <pre><code>Series1  Series2  Series3  …  SeriesN
s1_1     s2_1     s3_1        sn_1
s1_2     …        …           …
s1_3     …        …           …
…        …        …           …
s1_n     s2_n     s3_n        sn_n
</code></pre> <p>where each column is a series.</p> <p>Please let me know if there is a way to do it,</p> <p>Thanks!!</p>
<p>Two fixes are needed: collect the whole column (not just its sum) if you want one column per query, and make sure each append actually accumulates -- <code>list.append</code> mutates the list in place, while the pandas <code>append</code> methods return a new object that must be reassigned. The simplest version keeps a plain list and concatenates once at the end:</p> <pre><code>suma_queries = []

for query in queries:
    cur.execute(query)
    schema = lib.get_schema_sql(cursor = cur)
    table = lib.get_table_sql(cur)
    df = pd.DataFrame(data = table, columns = schema)
    # keep the full column as a Series so it becomes one column later
    suma_queries.append(df.iloc[:, 18])

# concatenate the collected Series side by side, one column each
concat_df = pd.concat(suma_queries, axis=1)
</code></pre>
python|pandas|for-loop
0
4,085
62,364,889
Pandas text column group by based on unique id
<p>I have the below csv file:</p> <pre><code>itemid  testresult            duplicateid
100     textboxerror          0
101     text_input_issue      100
102     menuitemerror         0
103     text_click_issue      100
104     text_caps_error       100
105     menu_drop_down_error  102
106     text_lower_error      100
107     menu_item_null        102
</code></pre> <p>I want to split the above table's testresult into two columns based on duplicateid, with the resulting column named similartestresults; the example table needs to be as below.</p> <p>Required dataframe:</p> <pre><code>index  testresult     similartestresults     duplicateid
1      textboxerror   text_click_issue       100
2      textboxerror   text_caps_error        100
3      textboxerror   text_caps_error        100
4      textboxerror   text_lower_error       100
5      menuitemerror  menu_drop_down_error   102
6      menuitemerror  menu_item_null         102
</code></pre> <p>I tried using pandas groupby, but it only gives a single list. Code as follows:</p> <pre><code>df1 = df.groupby(["duplicateid", "testresult"])
print (df1)
print (df1.groups)

df['similartestresults'] = df.groupby("duplicateid")['testresult'].apply(lambda tags: ','.join(tags))
print (df2)
</code></pre> <p>But both methods above have not given the desired results. Please advise. Thanks, TSJ</p>
<p>Copy the column of test results, then overwrite the original with the first four characters as the group key and replace those keys with the final group names. Then remove the unnecessary rows and reorder. Does this meet the intent of your question?</p> <pre><code>df['similartestresults'] = df['testresult'].copy()

# Update to group name
df['testresult'] = df['similartestresults'].apply(lambda x: x[:4])
df['testresult'].replace(['text','menu'], ['textboxerror','menuitemerror'], inplace=True)

# delete rows with 'duplicateid == 0'
df = df[~(df['duplicateid'] == 0)]
df = df.sort_values('duplicateid', ascending=True)
df

   itemid     testresult  duplicateid    similartestresults
1     101   textboxerror          100      text_input_issue
3     103   textboxerror          100      text_click_issue
4     104   textboxerror          100       text_caps_error
6     106   textboxerror          100      text_lower_error
5     105  menuitemerror          102  menu_drop_down_error
7     107  menuitemerror          102        menu_item_null
</code></pre>
python|pandas|pandas-groupby
0
4,086
62,198,966
Tensorflow installing error: __ is not a supported wheel on this platform
<p>I'm trying to install tensorflow on my PC but I keep getting errors.</p> <p>I have seen multiple posts about tensorflow installing errors online but all I found was solutions saying that the version of python was not compatible. However, I am using python 3.8 and I am using the URL for python 3.8 provided on tensorflow's website, so I don't see how that could be the issue.</p> <p>The command I'm using:</p> <pre class="lang-sh prettyprint-override"><code>python -m pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.2.0-cp38-cp38-win_amd64.whl
</code></pre> <p>The error I'm getting:</p> <pre class="lang-sh prettyprint-override"><code>ERROR: tensorflow_cpu-2.2.0-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
</code></pre> <p>I'm using <code>python 3.8</code>, <code>pip 20.1.1</code> and my PC is running 64 bit Windows 10.</p> <p>From tensorflow's website, the requirements are:</p> <ul> <li>Python 3.5-3.8</li> <li>pip 19.0 or later</li> <li>Windows 7 or later</li> </ul> <p>Which are all satisfied.</p> <p>Why am I getting this error?</p> <p>EDIT: using only <code>pip install tensorflow</code> gets me the following errors:</p> <pre class="lang-sh prettyprint-override"><code>ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
</code></pre>
<p>It's likely you're using the 32bit version of python 3.8 instead of the 64bit version. You can check by opening the interpreter and looking at the first line. If it has <code>32 bit (Intel)</code> or something similar, then it would be the 32 bit version. To get the 64bit edition, scroll down to Files on this link <a href="https://www.python.org/downloads/release/python-380/" rel="nofollow noreferrer">https://www.python.org/downloads/release/python-380/</a> and pick up the x86-64 version.</p>
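<p>A quick way to verify the interpreter's bitness from code (rather than reading the startup banner):</p> <pre><code>import struct
print(struct.calcsize('P') * 8)  # prints 64 on a 64-bit Python, 32 on a 32-bit one
</code></pre>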
python|python-3.x|tensorflow|pip
3
4,087
51,536,617
How rename pd.value_counts() index with a correspondance dictionary
<p>I am doing a <code>value_counts()</code> over a column of integers that represent categorical values.</p> <p>I have a dict that maps the numbers to strings that correspond to the category names.</p> <p>I want to find the <strong>best</strong> way to have the index carry the corresponding names, as I am not happy with my 4-line solution.</p> <h3>My current solution</h3> <pre><code>df = pd.DataFrame({"weather": [1,2,1,3]})
df
&gt;&gt;&gt;
   weather
0        1
1        2
2        1
3        3

weather_correspondance_dict = {1:"sunny", 2:"rainy", 3:"cloudy"}
</code></pre> <p>Now how I solve the problem:</p> <pre><code>df_vc = df.weather.value_counts()
index = df_vc.index.map(lambda x: weather_correspondance_dict[x])
df_vc.index = index
df_vc
&gt;&gt;&gt;
sunny     2
cloudy    1
rainy     1
dtype: int64
</code></pre> <h3>Question</h3> <p>I am not happy with that solution, which is very tedious. Do you have a best practice for this situation?</p>
<p>This is my solution:</p> <pre><code>&gt;&gt;&gt; weather_correspondance_dict = {1:"sunny", 2:"rainy", 3:"cloudy"}
&gt;&gt;&gt; df["weather"].value_counts().rename(index=weather_correspondance_dict)
sunny     2
cloudy    1
rainy     1
Name: weather, dtype: int64
</code></pre>
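<p>An equivalent alternative is to map the labels first and count afterwards:</p> <pre><code>df['weather'].map(weather_correspondance_dict).value_counts()
</code></pre>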
python|pandas|dictionary|dataframe|counting
7
4,088
51,453,933
Frequency of words in a DataFrame
<p>I have a pandas DataFrame containing a list of words in the 'review' column. I need to find the frequency of the words that occur in the review column.</p> <pre><code>       id  sentiment                                             review
0  5814_8          1  [stuff, going, moment, mj, 've, started, liste...
1  2381_9          1  [\the, classic, war, worlds\, '', timothy, hin...
2  7759_3          0  [film, starts, manager, nicholas, bell, giving...
3  3630_4          0  [must, assumed, praised, film, \the, greatest,...
4  9495_8          1  [superbly, trashy, wondrously, unpretentious, ...
5  8196_8          1  [dont, know, people, think, bad, movie, got, p...
</code></pre> <p>I've tried using the Counter function but it shows 'unhashable list' as an error. How do I do this?</p>
<p>You could use a list comprehension inside <code>Counter</code> (from the standard library's <code>collections</code> module):</p> <pre><code>from collections import Counter

Counter([i for s in df.review for i in s])
</code></pre>
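<p>On pandas 0.25 or newer you can also stay entirely inside pandas by flattening the lists with <code>explode</code>:</p> <pre><code>df['review'].explode().value_counts()
</code></pre>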
list|pandas|dataframe
0
4,089
51,470,574
Close HDF File After to_hdf() using mode='a'
<p>I want to save a series of DataFrames using pandas into an hdf file. So I use <code>to_hdf()</code>:</p> <pre><code>x = pd.DataFrame(np.random.rand(10, 10),
                 index=pd.date_range(end='1/1/2018', periods=10),
                 columns=list('abcdefghij'))
x.iloc[:5, :].to_hdf('append.h5', format='table', key='part1', mode='a')
</code></pre> <p>After this, I want to check the situation of this hdf file. So I use <code>read_hdf()</code>:</p> <pre><code>y = pd.read_hdf('append.h5', key='part1', mode='r')
</code></pre> <p>Obviously, it shows this error:</p> <pre><code>The file 'append.h5' is already opened, but not in read-only mode (as requested).
</code></pre> <p>So I'm just wondering how to close this hdf after <code>to_hdf()</code>? *I need to set mode='a' so as to append several tables into this hdf file.</p> <p>Python version 3.6.5</p>
<pre><code>import numpy as np
import pandas as pd

x = pd.DataFrame(np.random.rand(10, 10),
                 index=pd.date_range(end='1/1/2018', periods=10),
                 columns=list('abcdefghij'))
x.iloc[:5, :].to_hdf('append.h5', format='table', key='part1', mode='a')
y = pd.read_hdf('append.h5', key='part1', mode='r')
</code></pre> <p>is working as-is (as said in the comments of the question), so the file is closed again once <code>to_hdf()</code> returns. Would be nice to delete the question or mark it as answered?</p>
python|python-3.x|pandas
0
4,090
51,327,767
Python Data Analysis from SQL Query
<p>I'm about to start some Python Data analysis unlike anything I've done before. I'm currently studying numpy, but so far it doesn't give me insight on how to do this. </p> <p>I'm using python 2.7.14 Anaconda with cx_Oracle to Query complex records.</p> <p>Each record will be a unique individual with a column for Employee ID, Relationship Tuples (Relationship Type Code paired with Department number, may contain multiple), Account Flags (Flag strings, may contain multiple). (3 columns total)</p> <p>so one record might be:</p> <pre><code> [(123456), (135:2345678, 212:4354670, 198:9876545), (Flag1, Flag2, Flag3)] </code></pre> <p>I need to develop a python script that will take these records and create various counts.</p> <p>The example record would be counted in at least 9 different counts<br> How many with relationship: 135<br> How many with relationship: 212<br> How many with relationship: 198<br> How many in Department: 2345678<br> How many in Department: 4354670<br> How many in Department: 9876545<br> How many with Flag: Flag1<br> How many with Flag: Flag2<br> How many with Flag: Flag3 </p> <p>The other tricky part of this, is I can't pre-define the relationship codes, departments, or flags What I'm counting for has to be determined by the data retrieved from the query.</p> <p>Once I understand how to do that, hopefully the next step to also get how many relationship X has Flag y, etc., will be intuitive.</p> <p>I know this is a lot to ask about, but If someone could just point me in the right direction so I can research or try some tutorials that would be very helpful. Thank you!</p>
<p>You need to <strong>structure this data</strong> to make a good analysis; you can do it in your database engine or in python (I will do it this way, using pandas, as SNygard suggested).</p> <p>At first, I create some fake data (the structure was provided by you):</p> <pre><code>import pandas as pd
import numpy as np
from ast import literal_eval

data = [[12346, '(135:2345678, 212:4354670, 198:9876545)', '(Flag1, Flag2, Flag3)'],
        [12345, '(136:2343678, 212:4354670, 198:9876541, 199:9876535)', '(Flag1, Flag4)']]

df = pd.DataFrame(data, columns=['id','relationships','flags'])
df = df.set_index('id')
df
</code></pre> <p>This returns a dataframe like this: <a href="https://i.stack.imgur.com/2XYnM.png" rel="nofollow noreferrer">raw_pandas_dataframe</a></p> <p>In order to summarize or count by columns, we need to improve our data structure so that we can apply group-by operations with department, relationships or flags.</p> <p>We will convert our relationships and flags columns from string type to python lists of strings. So, the flags column will be a python list of flags, and the relationships column will be a python list of relations.</p> <pre><code>df['relationships'] = df['relationships'].str.replace('\(','').str.replace('\)','')
df['relationships'] = df['relationships'].str.split(',')

df['flags'] = df['flags'].str.replace('\(','').str.replace('\)','')
df['flags'] = df['flags'].str.split(',')
df
</code></pre> <p>The result is: <a href="https://i.stack.imgur.com/VLv87.png" rel="nofollow noreferrer">dataframe_1</a></p> <p>With our <code>relationships</code> column converted to lists, we can create a new dataframe with as many columns as there are relations in the longest list.</p> <pre><code>rel = pd.DataFrame(df['relationships'].values.tolist(), index=df.index)
</code></pre> <p>After that we need to stack our columns preserving their index, so we will use a pandas multi-index: the id and the relation column number (0, 1, 2, 3).</p> <pre><code>relations = rel.stack()
relations.index.names = ['id','relation_number']
relations
</code></pre> <p>We get: <a href="https://i.stack.imgur.com/XVYsF.png" rel="nofollow noreferrer">dataframe_2</a></p> <p>At this moment we have all of our relations in rows, but we still can't group by the <code>relation_type</code> feature. So we will split our relations data into two columns, <code>relation_type</code> and <code>department</code>, using <code>:</code>.</p> <pre><code>clear_relations = relations.str.split(':')
clear_relations = pd.DataFrame(clear_relations.values.tolist(), index=clear_relations.index, columns=['relation_type','department'])
clear_relations
</code></pre> <p>The result is <a href="https://i.stack.imgur.com/DpfLZ.png" rel="nofollow noreferrer">dataframe_3_clear_relations</a></p> <p>Our relations are ready to analyze, but our flags structure is still awkward to use. So we will convert the flag lists to columns and after that we will stack them.</p> <pre><code>flags = pd.DataFrame(df['flags'].values.tolist(), index=df.index)
flags = flags.stack()
flags.index.names = ['id','flag_number']
</code></pre> <p>The result is <a href="https://i.stack.imgur.com/YZBS8.png" rel="nofollow noreferrer">dataframe_4_clear_flags</a></p> <hr> <p>Voilà! It's all ready to analyze!</p> <p>So, for example, <strong>how many relations of each type we have, and which one is the biggest</strong>:</p> <pre><code>clear_relations.groupby('relation_type').agg('count')['department'].sort_values(ascending=False)
</code></pre> <p>We get: <a href="https://i.stack.imgur.com/wWbQz.png" rel="nofollow noreferrer">group_by_relation_type</a></p> <hr> <p>All code: <a href="https://github.com/navicor90/stack_overflow_q2051327767/blob/master/StackOverflow%2051327767-python-data-analysis-from-sql-query.ipynb" rel="nofollow noreferrer">Github project</a></p>
python|pandas|numpy|analytics
1
4,091
51,402,547
Given a value how can I know in which columns it is present?
<p>I have a huge data frame with 4000 columns, and I need to look if a value exists in one or more columns (I need the names of those columns). How can I get the number of columns and the column names in pandas? So far I tried to apply this idea:</p> <pre><code>df.index[df.columns] == 'my_val'].tolist()
</code></pre> <p>However this is just returning me boolean values. Any idea of how to return the names of the columns where the value lives?</p>
<p>I think you need:</p> <pre><code>cols = df.columns[(df == 'my_val').any()]
</code></pre> <p><strong>Sample</strong>:</p> <pre><code>df = pd.DataFrame({'A':list('abcdef'),
                   'B':[4,5,4,5,5,4],
                   'C':[7,8,9,4,2,3],
                   'D':[1,3,5,7,1,0],
                   'E':[5,3,6,9,2,4],
                   'F':list('aaabbb')})

print (df)
   A  B  C  D  E  F
0  a  4  7  1  5  a
1  b  5  8  3  3  a
2  c  4  9  5  6  a
3  d  5  4  7  9  b
4  e  5  2  1  2  b
5  f  4  3  0  4  b

cols = df.columns[(df == 'a').any()]
print (cols)
Index(['A', 'F'], dtype='object')
</code></pre> <p><strong>Explanation</strong>:</p> <p>First compare the whole DataFrame with the value:</p> <pre><code>print (df == 'a')
       A      B      C      D      E      F
0   True  False  False  False  False   True
1  False  False  False  False  False   True
2  False  False  False  False  False   True
3  False  False  False  False  False  False
4  False  False  False  False  False  False
5  False  False  False  False  False  False
</code></pre> <p>Then check for at least one <code>True</code> per column by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a>:</p> <pre><code>print ((df == 'a').any())
A     True
B    False
C    False
D    False
E    False
F     True
dtype: bool
</code></pre> <p>Last, filter <code>df.columns</code> by the boolean mask:</p> <pre><code>print (df.columns[(df == 'a').any()])
Index(['A', 'F'], dtype='object')
</code></pre>
python|python-3.x|pandas
4
4,092
51,270,348
How can you identify the best companies for each variable and copy the cases?
<p>I want to compare the means of subgroups. The cases of the subgroups with the lowest and the highest mean should be copied and appended to the end of the dataset:</p> <pre><code>Input
df.head(10)

Outcome
  Company  Satisfaction  Image  Forecast  Contact
0    Blue             2      3         3        1
1    Blue             2      1         3        2
2  Yellow             4      3         3        3
3  Yellow             3      4         3        2
4  Yellow             4      2         1        5
5    Blue             1      5         1        2
6    Blue             4      2         4        3
7  Yellow             5      4         1        5
8     Red             3      1         2        2
9     Red             1      1         1        2
</code></pre> <p>I have around 100 cases in my sample. Now I look at the means for each company.</p> <pre><code>Input
df.groupby(['Company']).mean()

Outcome
         Satisfaction     Image  Forecast   Contact
Company
Blue         2.666667  2.583333  2.916667  2.750000
Green        3.095238  3.095238  3.476190  3.142857
Orange       3.125000  2.916667  3.416667  2.625000
Red          3.066667  2.800000  2.866667  3.066667
Yellow       3.857143  3.142857  3.000000  2.714286
</code></pre> <p>So for satisfaction <strong>Yellow</strong> got the best and <strong>Blue</strong> the worst value. I want to copy the cases of Yellow and Blue and add them to the dataset, but now with the new labels "Best" and "Worst". I don't want to just rename them, and I want to iterate over the dataset and do this for other columns, too (for example Image). Is there a solution for it? After I have added the cases I want an output like this:</p> <pre><code>Input
df.groupby(['Company']).mean()

Expected Outcome
         Satisfaction     Image  Forecast   Contact
Company
Blue         2.666667  2.583333  2.916667  2.750000
Green        3.095238  3.095238  3.476190  3.142857
Orange       3.125000  2.916667  3.416667  2.625000
Red          3.066667  2.800000  2.866667  3.066667
Yellow       3.857143  3.142857  3.000000  2.714286
Best         3.857143  3.142857  3.000000  3.142857
Worst        2.666667  2.583333  2.866667  2.625000
</code></pre> <p>But as I said, it is really important that the companies with the best and worst values for each column be added again and not just renamed, because I want to do further data processing with other software.</p> <p>************************UPDATE****************************</p> <p>I found out how to copy the correct cases:</p> <pre><code>Input
df2 = df.loc[df['Company'] == 'Yellow']
df2 = df2.replace('Yellow','Best')
df2 = df2[['Company','Satisfaction']]
new = [df,df2]
result = pd.concat(new)
result

Output
     Company  Contact  Forecast  Image  Satisfaction
0       Blue      1.0       3.0    3.0             2
1       Blue      2.0       3.0    1.0             2
2     Yellow      3.0       3.0    3.0             4
3     Yellow      2.0       3.0    4.0             3
..........................................
87      Best      NaN       NaN    NaN             3
90      Best      NaN       NaN    NaN             4
99      Best      NaN       NaN    NaN             1
111     Best      NaN       NaN    NaN             2
</code></pre> <p>Now I want to copy the cases of the company with the best values for the other variables, too. But at the moment I have to identify manually which company is best for each category. Isn't there a more comfortable solution?</p>
<p>I have a solution. First I create a list of the variables I want to create "Best" and "Worst" dummy companies for:</p>

<pre><code>variables = ['Contact','Forecast','Satisfaction','Image']
</code></pre>

<p>Then I loop over these columns, adding the cases again with the new label "Best" or "Worst":</p>

<pre><code>for Start in variables:
    neu = df.groupby(['Company'], as_index=False)[Start].mean()
    Best = neu['Company'].loc[neu[Start].idxmax()]
    Worst = neu['Company'].loc[neu[Start].idxmin()]
    dfBest = df.loc[df['Company'] == Best]
    dfWorst = df.loc[df['Company'] == Worst]
    dfBest = dfBest.replace(Best,'Best')
    dfWorst = dfWorst.replace(Worst,'Worst')
    dfBest = dfBest[['Company',Start]]
    dfWorst = dfWorst[['Company',Start]]
    new = [df,dfBest,dfWorst]
    df = pd.concat(new)
</code></pre>

<p>Thanks guys :)</p>
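<p>For reference, a loop-free sketch of the best/worst lookup itself, assuming the original <code>df</code> and <code>variables</code> as above; <code>idxmax</code>/<code>idxmin</code> return, per column, the company with the highest/lowest mean:</p>

<pre><code>means = df.groupby('Company')[variables].mean()
best = means.idxmax()    # Series mapping each variable to its best company
worst = means.idxmin()   # Series mapping each variable to its worst company
print (best['Satisfaction'], worst['Satisfaction'])   # Yellow Blue
</code></pre>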
python-3.x|pandas
0
4,093
51,445,026
Pandas - Look in 2 columns and check each column for a different element, if both columns contain the elements return the value in a different column
<p>I have a data frame which has 3 columns (called all_names). The first column is called ID, the second column is 'First_names' and the third is 'Last_names' - the data frame has 1 million rows. I have a different data frame (called combos) which has 2 columns: 'First' and 'Last'. (The data frames also have an index column.) I need to check the First_names and Last_names columns at the same time to see if they contain the combination of First and Last from the other data frame.</p>

<p>Currently, I have:</p>

<pre><code>all_names['First_names'] = all_names.First_names.astype(str) #setting column to string data type
all_names['Last_names'] = all_names.Last_names.astype(str)
combos['First'] = combos.First.astype(str)
combos['Last'] = combos.Last.astype(str) #setting column to string data type

for index, row in combos.iterrows():
    correct_IDS = all_names.loc[all_names.First_names.str.contains(row.First)] &amp; all_names.loc[all_names.Last_names.str.contains(row.Last), 'ID']
    print(correct_tiles)
</code></pre>

<p>However, this doesn't work and is messy, as it has to iterate through all rows. Any help would be great.</p>

<p>The all_names looks like this (when opened in notepad):</p>

<pre><code>,ID,First_names,Last_names
0,5231,Harry,Smith
1,2745,Mark,Hammond
</code></pre>

<p>The combos looks like this (when opened in notepad):</p>

<pre><code>,First,Last
0,Liam,Bradnam
1,James,Beckham
</code></pre>
<p>Your problem can be solved using <code>merge</code>. Let's say we have</p>

<pre><code>all_names = pd.DataFrame({'First_names':['John','John','Bob','Robert'],
                          'Last_names':['Do','Smith','Do','Smith'],'ID':[1,2,3,4]})
combos = pd.DataFrame({'First':['John','Bob','Robert'],'Last':['Smith','Do','Do']})
</code></pre>

<p>Then if you use <code>rename</code> in the <code>merge</code>, with <code>how='inner'</code> to keep only the (First, Last) pairs common to both dataframes:</p>

<pre><code>combos.merge(all_names.rename(columns={'First_names':'First','Last_names':'Last'}),how='inner')
</code></pre>

<p>and you get</p>

<pre><code>    First   Last  ID
0    John  Smith   2
1     Bob     Do   3
</code></pre>

<p>Now if you want only a list of ID's, you do</p>

<pre><code>list_ID = combos.merge(all_names.rename(columns={'First_names':'First','Last_names':'Last'})
                       ,how='inner')['ID'].tolist()
</code></pre>

<p>and you have <code>list_ID</code> equal to <code>[2, 3]</code>.</p>
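<p>If you prefer not to rename, the same inner merge can be written with explicit key columns — a sketch, assuming the frames above:</p>

<pre><code>merged = combos.merge(all_names,
                      left_on=['First', 'Last'],
                      right_on=['First_names', 'Last_names'],
                      how='inner')
list_ID = merged['ID'].tolist()   # [2, 3]
</code></pre>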
python|pandas
2
4,094
51,146,054
How do I get the address for an element in a Python array?
<p>I'm a newbie and I'm trying to get the address of an element at a particular index in a Numpy or regular Python array. I'm following a class on Coursera where the instructor gets the address but I'm confused as to if I can do the same in Python since the class is taught in another language. Here's a mathematical sample of what I'm trying to accomplish with code: <a href="http://www.guideforschool.com/625348-memory-address-calculation-in-an-array/" rel="nofollow noreferrer">http://www.guideforschool.com/625348-memory-address-calculation-in-an-array/</a> </p> <p>Here's what I'd like to do. Let's say I have the following array with the indices:</p> <pre><code>|_|_|_|_|_|_|_| 1 2 3 4 5 6 7 </code></pre> <p>I'd like to calculate the address for index 4, so here's the calculation:</p> <pre><code>array_address + elem_size x (index - first_index) </code></pre> <p>I need the first item, the array_address or Base address. I've tried this:</p> <pre><code>import numpy as np a = np.ndarray([1,2,3,4,5,6,7]) print(a.__array_interface__['data']) </code></pre> <p>But I keep getting a different value every time I run it and it's not for 1 particular index.</p> <p>I'm sure I'll never have to do these calculations but I want to understand everything I'm learning in depth. Or will I use it? What are some examples of when I would use this?</p>
<p>An array and its attributes:</p> <pre><code>In [28]: arr = np.arange(1,8) In [29]: arr.__array_interface__ Out[29]: {'data': (41034176, False), 'strides': None, 'descr': [('', '&lt;i8')], 'typestr': '&lt;i8', 'shape': (7,), 'version': 3} </code></pre> <p>The data buffer location for a slice:</p> <pre><code>In [30]: arr[4:].__array_interface__['data'][0] Out[30]: 41034208 In [31]: arr[4:].__array_interface__['data'][0]-arr.__array_interface__['data'][0] Out[31]: 32 </code></pre> <p>The slice shares the data buffer, but with an offset of 4 elements (4*8).</p> <p>Using this information I can fetch a slice using the <code>ndarray</code> constructor (not usually needed):</p> <pre><code>In [35]: np.ndarray((3,), dtype=int, buffer=arr.data, offset=32) Out[35]: array([5, 6, 7]) In [36]: arr[4:7] Out[36]: array([5, 6, 7]) </code></pre> <hr> <p><code>arr.data</code> is a <code>memoryview</code> object that somehow references the data buffer of this array. The id/address of <code>arr.data</code> is not the same as the data pointer I used above.</p> <pre><code>In [38]: arr.data Out[38]: &lt;memory at 0x7f996d950e88&gt; In [39]: type(arr.data) Out[39]: memoryview </code></pre> <hr> <p>Note that data location for <code>arr[4]</code> is totally different. It is 'unboxed', not a slice:</p> <pre><code>In [37]: arr[4].__array_interface__ Out[37]: {'data': (38269024, False), 'strides': None, 'descr': [('', '&lt;i8')], 'typestr': '&lt;i8', 'shape': (), 'version': 3, '__ref': array(5)} </code></pre>
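<p>Putting the pieces together, the manual address formula from the question can be checked directly — a sketch, assuming <code>arr</code> as above (indices are zero-based, so <code>first_index</code> is 0):</p>

<pre><code>base = arr.__array_interface__['data'][0]   # address of the first element
addr = base + arr.itemsize * 4              # elem_size * (index - first_index)
print(addr == arr[4:].__array_interface__['data'][0])   # True
</code></pre>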
python|python-3.x|numpy
3
4,095
48,005,035
Pandas sorting multiple columns
<p>I have the following dataframe</p> <pre><code>A B b 10 b 5 a 25 a 5 c 6 c 2 b 20 a 10 c 4 c 3 b 15 </code></pre> <p>How can I sort it as follows:</p> <pre><code>A B b 20 b 15 b 10 b 5 a 25 a 10 a 5 c 6 c 4 c 3 c 2 </code></pre> <p>Column A is sorted based on the sum of corresponding values in column B in descending order(The sums are b-50, a-40, c-15) .</p>
<p>Create a temporary column <code>_t</code> and sort using <code>sort_values</code> on <code>_t, B</code></p> <pre><code>In [269]: (df.assign(_t=df['A'].map(df.groupby('A')['B'].sum())) .sort_values(by=['_t', 'B'], ascending=False) .drop('_t', 1)) Out[269]: A B 6 b 20 10 b 15 0 b 10 1 b 5 2 a 25 7 a 10 3 a 5 4 c 6 8 c 4 9 c 3 5 c 2 </code></pre> <p>Details</p> <pre><code>In [270]: df.assign(_t=df['A'].map(df.groupby('A')['B'].sum())) Out[270]: A B _t 0 b 10 50 1 b 5 50 2 a 25 40 3 a 5 40 4 c 6 15 5 c 2 15 6 b 20 50 7 a 10 40 8 c 4 15 9 c 3 15 10 b 15 50 </code></pre>
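<p>An equivalent sketch using <code>transform</code>, which broadcasts the group sums back to the original rows without the <code>map</code>:</p>

<pre><code>(df.assign(_t=df.groupby('A')['B'].transform('sum'))
   .sort_values(by=['_t', 'B'], ascending=False)
   .drop('_t', 1))
</code></pre>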
python|pandas|sorting
2
4,096
48,362,180
Find common tangent line between two cubic curves
<p>Given two functions, I would like to sort out the common tangent for both curves:</p> <p><a href="https://i.stack.imgur.com/vTBod.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vTBod.png" alt="enter image description here"></a></p> <p>The slope of the common tangent can be obtained by the following:</p> <pre><code>slope of common tangent = (f(x1) - g(x2)) / (x1 - x2) = f'(x1) = g'(x2) </code></pre> <p>So that in the end we have a system of 2 equations with 2 unknowns:</p> <pre><code>f'(x1) = g'(x2) # Eq. 1 (f(x1) - g(x2)) / (x1 - x2) = f'(x1) # Eq. 2 </code></pre> <p>For some reason I do not understand, python does not find the solution:</p> <pre><code>import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt import sys from sympy import * import sympy as sym # Intial candidates for fit E0_init = -941.510817926696 V0_init = 63.54960592453 B0_init = 76.3746233515232 B0_prime_init = 4.05340727164527 # Data 1 (Red triangles): V_C_I, E_C_I = np.loadtxt('./1.dat', skiprows = 1).T # Data 14 (Empty grey triangles): V_14, E_14 = np.loadtxt('./2.dat', skiprows = 1).T def BM(x, a, b, c, d): return (2.293710449E+17)*(1E-21)* (a + b*x + c*x**2 + d*x**3 ) def P(x, b, c, d): return -b - 2*c*x - 3 *d*x**2 init_vals = [E0_init, V0_init, B0_init, B0_prime_init] popt_C_I, pcov_C_I = curve_fit(BM, V_C_I, E_C_I, p0=init_vals) popt_14, pcov_14 = curve_fit(BM, V_14, E_14, p0=init_vals) x1 = var('x1') x2 = var('x2') E1 = P(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) - P(x2, popt_14[1], popt_14[2], popt_14[3]) print 'E1 = ', E1 E2 = ((BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - BM(x2, popt_14[0], popt_14[1], popt_14[2], popt_14[3])) / (x1 - x2)) - P(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) sols = solve([E1, E2], [x1, x2]) print 'sols = ', sols # Linspace for plotting the fitting curves: V_C_I_lin = np.linspace(V_C_I[0], V_C_I[-1], 10000) V_14_lin = np.linspace(V_14[0], V_14[-1], 10000) plt.figure() # Plotting the fitting curves: p2, = plt.plot(V_C_I_lin, BM(V_C_I_lin, *popt_C_I), color='black', label='Cubic fit data 1' ) p6, = plt.plot(V_14_lin, BM(V_14_lin, *popt_14), 'b', label='Cubic fit data 2') # Plotting the scattered points: p1 = plt.scatter(V_C_I, E_C_I, color='red', marker="^", label='Data 1', s=100) p5 = plt.scatter(V_14, E_14, color='grey', marker="^", facecolors='none', label='Data 2', s=100) plt.ticklabel_format(useOffset=False) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/T8bAF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T8bAF.png" alt="enter image description here"></a></p> <p><code>1.dat</code> is the following:</p> <pre><code>61.6634100000000 -941.2375622594436 62.3429030000000 -941.2377748739724 62.9226515000000 -941.2378903605746 63.0043440000000 -941.2378981684135 63.7160150000000 -941.2378864590100 64.4085050000000 -941.2377753645115 65.1046835000000 -941.2375332100225 65.8049585000000 -941.2372030376584 66.5093925000000 -941.2367456992965 67.2180970000000 -941.2361992239395 67.9311515000000 -941.2355493856510 </code></pre> <p><code>2.dat</code> is the following:</p> <pre><code>54.6569312500000 -941.2300821583739 55.3555152500000 -941.2312112888004 56.1392347500000 -941.2326135552780 56.9291575000000 -941.2338291772218 57.6992532500000 -941.2348135408652 58.4711572500000 -941.2356230099117 59.2666985000000 -941.2362715934311 60.0547935000000 -941.2367074271724 60.8626545000000 -941.2370273047416 </code></pre> <p><strong>Update:</strong> Using @if.... 
approach:</p> <pre><code>import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties # Intial candidates for fit, per FU: - thus, the E vs V input data has to be per FU E0_init = -941.510817926696 V0_init = 63.54960592453 B0_init = 76.3746233515232 B0_prime_init = 4.05340727164527 def BM(x, a, b, c, d): return a + b*x + c*x**2 + d*x**3 def devBM(x, b, c, d): return b + 2*c*x + 3*d*x**2 # Data 1 (Red triangles): V_C_I, E_C_I = np.loadtxt('./1.dat', skiprows = 1).T # Data 14 (Empty grey triangles): V_14, E_14 = np.loadtxt('./2.dat', skiprows = 1).T init_vals = [E0_init, V0_init, B0_init, B0_prime_init] popt_C_I, pcov_C_I = curve_fit(BM, V_C_I, E_C_I, p0=init_vals) popt_14, pcov_14 = curve_fit(BM, V_14, E_14, p0=init_vals) from scipy.optimize import fsolve def equations(p): x1, x2 = p E1 = devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) - devBM(x2, popt_14[1], popt_14[2], popt_14[3]) E2 = ((BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - BM(x2, popt_14[0], popt_14[1], popt_14[2], popt_14[3])) / (x1 - x2)) - devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) return (E1, E2) x1, x2 = fsolve(equations, (50, 60)) print 'x1 = ', x1 print 'x2 = ', x2 slope_common_tangent = devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) print 'slope_common_tangent = ', slope_common_tangent def comm_tangent(x, x1, slope_common_tangent): return BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - slope_common_tangent * x1 + slope_common_tangent * x # Linspace for plotting the fitting curves: V_C_I_lin = np.linspace(V_C_I[0], V_C_I[-1], 10000) V_14_lin = np.linspace(V_14[0], V_14[-1], 10000) plt.figure() # Plotting the fitting curves: p2, = plt.plot(V_C_I_lin, BM(V_C_I_lin, *popt_C_I), color='black', label='Cubic fit Calcite I' ) p6, = plt.plot(V_14_lin, BM(V_14_lin, *popt_14), 'b', label='Cubic fit Calcite II') xp = np.linspace(54, 68, 100) pcomm_tangent, = plt.plot(xp, comm_tangent(xp, x1, slope_common_tangent), 'green', label='Common tangent') # Plotting the scattered points: p1 = plt.scatter(V_C_I, E_C_I, color='red', marker="^", label='Calcite I', s=100) p5 = plt.scatter(V_14, E_14, color='grey', marker="^", facecolors='none', label='Calcite II', s=100) fontP = FontProperties() fontP.set_size('13') plt.legend((p1, p2, p5, p6, pcomm_tangent), ("1", "Cubic fit 1", "2", 'Cubic fit 2', 'Common tangent'), prop=fontP) print 'devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) = ', devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) plt.ylim(-941.240, -941.225) plt.ticklabel_format(useOffset=False) plt.show() </code></pre> <p>I am able to find the common tangent, as shown below:</p> <p><a href="https://i.stack.imgur.com/aLQRS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aLQRS.png" alt="enter image description here"></a></p> <p>However, this common tangent corresponds to a common tangent in an area outside the data range, i.e., using </p> <pre><code>V_C_I_lin = np.linspace(V_C_I[0]-30, V_C_I[-1], 10000) V_14_lin = np.linspace(V_14[0]-20, V_14[-1]+2, 10000) xp = np.linspace(40, 70, 100) plt.ylim(-941.25, -941.18) </code></pre> <p>is possible to see the following:</p> <p><a href="https://i.stack.imgur.com/wLXmP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wLXmP.png" alt="enter image description here"></a></p> <p>Is it possible to constraint the solver to the range where we have data in order to find the desired common tangent?</p> <p><strong>Update 2.1</strong>: Using @if.... 
Range constraints approach, the following code yields <code>x1 = 61.2569899</code> and <code>x2 = 59.7677843</code>. If we plot it:</p> <pre><code>import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties import sys from sympy import * import sympy as sym import os # Intial candidates for fit, per FU: - thus, the E vs V input data has to be per FU E0_init = -941.510817926696 # -1882.50963222/2.0 V0_init = 63.54960592453 #125.8532/2.0 B0_init = 76.3746233515232 #74.49 B0_prime_init = 4.05340727164527 #4.15 def BM(x, a, b, c, d): return a + b*x + c*x**2 + d*x**3 def devBM(x, b, c, d): return b + 2*c*x + 3*d*x**2 # Data 1 (Red triangles): V_C_I, E_C_I = np.loadtxt('./1.dat', skiprows = 1).T # Data 14 (Empty grey triangles): V_14, E_14 = np.loadtxt('./2.dat', skiprows = 1).T init_vals = [E0_init, V0_init, B0_init, B0_prime_init] popt_C_I, pcov_C_I = curve_fit(BM, V_C_I, E_C_I, p0=init_vals) popt_14, pcov_14 = curve_fit(BM, V_14, E_14, p0=init_vals) def equations(p): x1, x2 = p E1 = devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) - devBM(x2, popt_14[1], popt_14[2], popt_14[3]) E2 = ((BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - BM(x2, popt_14[0], popt_14[1], popt_14[2], popt_14[3])) / (x1 - x2)) - devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) return (E1, E2) from scipy.optimize import least_squares lb = (61.0, 59.0) # lower bounds on x1, x2 ub = (62.0, 60.0) # upper bounds result = least_squares(equations, [61, 59], bounds=(lb, ub)) print 'result = ', result # The result obtained is: # x1 = 61.2569899 # x2 = 59.7677843 slope_common_tangent = devBM(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) print 'slope_common_tangent = ', slope_common_tangent def comm_tangent(x, x1, slope_common_tangent): return BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - slope_common_tangent * x1 + slope_common_tangent * x # Linspace for plotting the fitting curves: V_C_I_lin = np.linspace(V_C_I[0]-2, V_C_I[-1], 10000) V_14_lin = np.linspace(V_14[0], V_14[-1]+2, 10000) fig_handle = plt.figure() # Plotting the fitting curves: p2, = plt.plot(V_C_I_lin, BM(V_C_I_lin, *popt_C_I), color='black' ) p6, = plt.plot(V_14_lin, BM(V_14_lin, *popt_14), 'b' ) xp = np.linspace(54, 68, 100) pcomm_tangent, = plt.plot(xp, comm_tangent(xp, x1, slope_common_tangent), 'green', label='Common tangent') # Plotting the scattered points: p1 = plt.scatter(V_C_I, E_C_I, color='red', marker="^", label='1', s=100) p5 = plt.scatter(V_14, E_14, color='grey', marker="^", facecolors='none', label='2', s=100) fontP = FontProperties() fontP.set_size('13') plt.legend((p1, p2, p5, p6, pcomm_tangent), ("1", "Cubic fit 1", "2", 'Cubic fit 2', 'Common tangent'), prop=fontP) plt.ticklabel_format(useOffset=False) plt.show() </code></pre> <p>We see that we are not obtaining a common tangent:</p> <p><a href="https://i.stack.imgur.com/veEna.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/veEna.png" alt="enter image description here"></a></p>
<h3>Symbolic root finding</h3> <p>Your system of equations consists of a quadratic equation and a cubic equation. There is no closed-form symbolic solution of such a system. Indeed, if there was, one would be able to apply it to a general 5th degree equation <code>x**5 + a*x**4 + ... = 0</code> by introducing <code>y = x**2</code> (quadratic) and rewriting the original equation as <code>x*y**2 + a*y**2 + ... = 0</code> (cubic). And we know that <a href="https://en.wikipedia.org/wiki/Quintic_function" rel="nofollow noreferrer">can't be done</a>. So it's not surprising that SymPy can't solve it. You need a numeric solver (another reason is that SymPy isn't really designed to solve equations full of floating point constants, they are trouble for symbolic manipulations). </p> <h3>Numeric root finding</h3> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html" rel="nofollow noreferrer">SciPy fsolve</a> is the first thing that comes to mind. You could do something like this:</p> <pre><code>def F(x): x1, x2 = x[0], x[1] E1 = P(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) - P(x2, popt_14[1], popt_14[2], popt_14[3]) E2 = ((BM(x1, popt_C_I[0], popt_C_I[1], popt_C_I[2], popt_C_I[3]) - BM(x2, popt_14[0], popt_14[1], popt_14[2], popt_14[3])) / (x1 - x2)) - P(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) return [E1, E2] print fsolve(F, [50, 60]) # some reasonable initial point </code></pre> <p>By the way, I would move (x1-x2) from the denominator in E2, rewriting E2 as </p> <pre><code>(...) - (x1 - x2) * P(x1, popt_C_I[1], popt_C_I[2], popt_C_I[3]) </code></pre> <p>so the system is polynomial. This will likely make the life of <code>fsolve</code> a little easier.</p> <h3>Range constraints: minimization</h3> <p>Neither <code>fsolve</code> nor its relatives like <code>root</code> support placing bounds on the variables. But you can use <code>least_squares</code> which will look for the minimum of the sum of squares of expressions E1, E2. It supports upper and lower bounds, and with any luck, the minimum value ("cost") will be 0 within machine precision, indicating you found a root. An abstract example (since I don't have your data):</p> <pre><code>f1 = lambda x: 2*x**3 + 20 df1 = lambda x: 6*x**2 # derivative of f1. f2 = lambda x: (x-3)**3 + x df2 = lambda x: 3*(x-3)**2 + 1 def eqns(x): x1, x2 = x[0], x[1] eq1 = df1(x1) - df2(x2) eq2 = df1(x1)*(x1 - x2) - (f1(x1) - f2(x2)) return [eq1, eq2] from scipy.optimize import least_squares lb = (2, -2) # lower bounds on x1, x2 ub = (5, 3) # upper bounds least_squares(eqns, [3, 1], bounds=(lb, ub)) </code></pre> <p>Output:</p> <pre><code> active_mask: array([0, 0]) cost: 2.524354896707238e-29 fun: array([7.10542736e-15, 0.00000000e+00]) grad: array([1.93525199e-13, 1.34611132e-13]) jac: array([[27.23625045, 18.94483256], [66.10672633, -0. ]]) message: '`gtol` termination condition is satisfied.' nfev: 8 njev: 8 optimality: 2.4802477446153134e-13 status: 1 success: True x: array([ 2.26968753, -0.15747203]) </code></pre> <p>The cost is very small, so we have a solution, and it is x. Typically, one assigns the output of <code>least_squares</code> to some variable <code>res</code> and accesses <code>res.x</code> from there.</p>
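<p>To guard against the situation in the question's Update 2.1 (bounds that exclude the true tangent points), check the cost explicitly — a sketch on the abstract example above, with an illustrative tolerance:</p>

<pre><code>res = least_squares(eqns, [3, 1], bounds=(lb, ub))
if res.cost &lt; 1e-12:       # essentially zero: a genuine root was found
    x1, x2 = res.x
else:                      # only a constrained minimum: widen the bounds
    print('no root within bounds; cost =', res.cost)
</code></pre>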
python|numpy|scipy|sympy|equation-solving
6
4,097
48,274,053
Python - Calculating Percent of Grand Total in Pivot Tables
<p>I have a dataframe that I converted to a pivot table using pd.pivot_table method and a sum aggregate function:</p> <pre><code>summary = pd.pivot_table(df, index=["Region"], columns=["Product"], values=['Price'], aggfunc=[np.sum], fill_value=0, margins=True, margins_name="Total" ) </code></pre> <p>I have received an output like this:</p> <p><a href="https://i.stack.imgur.com/vdPV7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vdPV7.png" alt="Sample Pivot Table"></a></p> <p>I would like to add another pivot table that displays percent of grand total calculated in the previous pivot table for each of the categories. All these should add up to 100% and should look like this.</p> <p><a href="https://i.stack.imgur.com/LgHFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LgHFm.png" alt="Pivot Table with percents of Grand Total"></a></p> <p>I have tried the following workaround that I found on stackoverflow:</p> <pre><code>total = df['Price'].sum() table = pd.pivot_table(DF, index=["Region"], columns=["Product"], values=['Price'], aggfunc=[np.sum, (lambda x: sum(x)/total*100) ], fill_value=0, margins=True, margins_name="Total" ) </code></pre> <p>This calculated the percentages but they only add up to 85%...</p> <p>It'd be great to not have to calculate the total outside of the pivot tabe and just be able to call the Grand Total from the first pivot. But even if I have to calculate separately, like in the code above, as long as it adds up to 100% it would still be great.</p> <p>Thank you in advance!</p>
<p>This can be done very easily:</p>

<pre><code>import numpy as np
import pandas as pd

# Create table
table_1 = np.matrix([[100, 200, 650, 950],
                     [200, 250, 350, 800],
                     [400, 500, 200, 1100],
                     [700, 950, 1200, 2850]])
column_labels = ['A', 'B', 'C', 'Region Total']
idx_labels = ['Region 1', 'Region 2', 'Region 3', 'Product Total']

df = pd.DataFrame(table_1)
df.columns = column_labels
df.index = idx_labels
df.index.name = 'Sales'

# Create percentage table
df_percentage = np.round(df*100/df.iloc[-1, -1], 1)
print(df_percentage)

                  A     B     C  Region Total
Sales
Region 1        3.5   7.0  22.8          33.3
Region 2        7.0   8.8  12.3          28.1
Region 3       14.0  17.5   7.0          38.6
Product Total  24.6  33.3  42.1         100.0
</code></pre>
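<p>If you would rather reuse the grand total already sitting in the original pivot (the question's <code>summary</code>), the same division works there too — a sketch, assuming <code>margins=True</code> so the "Total" row and column come last:</p>

<pre><code>grand_total = summary.iloc[-1, -1]           # bottom-right margin cell
summary_pct = np.round(summary / grand_total * 100, 1)
</code></pre>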
python|pandas|pivot-table|percentage
1
4,098
48,400,250
Subtract two objects data type in python
<p>I want to subtract two columns in order to get the time, but my columns are object types.</p>

<p>These are my initial column dtypes:</p>

<pre><code>Column1      object
Column2      object
EVS_START    object
Column3      object
time         object
dtype: object
</code></pre>

<p>I changed EVS_START and time to datetime64[ns] like this:</p>

<pre><code>df['time'] = pd.to_datetime(df['time'])
df['EVS_START'] = pd.to_datetime(df['EVS_START'])
</code></pre>

<p>I checked again with <code>df.dtypes</code> and they were changed:</p>

<pre><code>Column1              object
Column2              object
EVS_START    datetime64[ns]
Column3              object
time         datetime64[ns]
dtype: object
</code></pre>

<p>But when I subtract them I get <code>TypeError: ufunc subtract cannot use operands with types dtype('&lt;M8[ns]') and dtype('O')</code>:</p>

<pre><code>df['Time_duration'] = df['time'] - df['EVS_START']
</code></pre>

<p>What am I doing wrong? I did something similar with another df and it worked fine. I am using Python 2.x.</p>
<p>There is a similar question here, discussing incompatibility between <code>datetime64[ns]</code> and <code>&lt;M8[ns]</code>:</p> <p><a href="https://stackoverflow.com/questions/29206612/difference-between-data-type-datetime64ns-and-m8ns">Difference between data type &#39;datetime64[ns]&#39; and &#39;&lt;M8[ns]&#39;?</a></p> <p>One suggestion offered is to update both Pandas and Numpy at the same time, so that they are working with the same datetime representation. Could you try that?</p>
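<p>In the meantime, a diagnostic sketch that sometimes helps: confirm both operands really hold <code>datetime64</code> values before subtracting, and coerce anything unparseable to <code>NaT</code>:</p>

<pre><code># check both the column dtypes and the actual element types
print(df['time'].dtype, df['EVS_START'].dtype)
print(type(df['time'].iloc[0]), type(df['EVS_START'].iloc[0]))

# if either side is still object, coerce bad entries instead of keeping them
df['EVS_START'] = pd.to_datetime(df['EVS_START'], errors='coerce')
df['Time_duration'] = df['time'] - df['EVS_START']
</code></pre>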
python|pandas
0
4,099
48,883,106
Pandas replace column by value in row
<p>How can I vectorise a replace, by looking for a value in the row. </p> <p>For a dataframe as follows: </p> <pre><code>df = pd.DataFrame([(1, 2, 3, 4, np.NaN, np.NaN, 4), (1, 2, 3, 0, 0, np.NaN, 0), (1, 2, 3, 4, 5, np.NaN, 5)], columns = ['P0', 'P1', 'P2', 'P3', 'P4', 'P5', 'Last_not_NaN_value'], index = ['row1', 'row2', 'row3']) </code></pre> <p>Output df:</p> <pre><code> P0 P1 P2 P3 P4 P5 Last_not_NaN_value row1 1 2 3 4 NaN NaN 4 row2 1 2 3 0 0.0 NaN 0 row3 1 2 3 4 5.0 NaN 5 </code></pre> <p>How can I do something like </p> <p><code>df.replace(df['Last_not_NaN_value'], 0 )</code> &lt;- which does nothing.</p> <p>How can I look for where the <code>last_not_NaN_value</code> is in the df, and replace that with a 0, e.g.:</p> <pre><code> P0 P1 P2 P3 P4 P5 Last_not_NaN_value row1 1 2 3 *0* NaN NaN 4 row2 1 2 3 0 *0* NaN 0 row3 1 2 3 4 *0* NaN 5 </code></pre>
<p>Vectorized, as requested. Perform broadcasted comparison, find the indices of replacement, and just replace accordingly. Afterwards, you can assign the result back using a neat <code>df[:] = ...</code> trick.</p> <pre><code>v = df.values i = v[:, :-1] j = v[:, -1] v[np.arange(v.shape[0]), (i == j[:, None]).argmax(axis=1)] = 0 df[:] = v df P0 P1 P2 P3 P4 P5 Last_not_NaN_value row1 1.0 2.0 3.0 0.0 NaN NaN 4.0 row2 1.0 2.0 3.0 0.0 0.0 NaN 0.0 row3 1.0 2.0 3.0 4.0 0.0 NaN 5.0 </code></pre>
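<p>A variant sketch that targets the position of the last non-NaN value directly, rather than the first cell equal to it (the two differ when the value repeats earlier in the row, as in <code>row2</code>); it assumes every row has at least one non-NaN entry:</p>

<pre><code>i = df.values[:, :-1].astype(float)
# index of the last non-NaN entry in each row, via a reversed argmax
last = i.shape[1] - 1 - np.argmax(~np.isnan(i)[:, ::-1], axis=1)
i[np.arange(i.shape[0]), last] = 0
df.iloc[:, :-1] = i
</code></pre>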
python|pandas
2