Dataset columns:
Unnamed: 0 (int64): 0 to 378k
id (int64): 49.9k to 73.8M
title (string): length 15 to 150
question (string): length 37 to 64.2k
answer (string): length 37 to 44.1k
tags (string): length 5 to 106
score (int64): -10 to 5.87k
4,700
36,970,623
PyAudio: when is it safe to modify the callback buffer?
<p>A typical PyAudio callback operating in output mode is:</p> <pre><code>def callback(ignored, frame_count, time_info, status): buffer = &lt;fill and return a buffer with frame_count samples of data&gt; return (buffer, pyaudio.paContinue) </code></pre> <p>I haven't tried it, but I'm pretty sure that if I started modifying <code>buffer</code> immediately after the callback returns that it would corrupt the data as it gets played -- true? </p> <p>So the question: Is there a way to know when PyAudio has finished playing the buffer? If so, I'd like to create a buffer pool so I can reuse buffers after PyAudio is finished with them.</p> <p>(If there isn't a mechanism for finding out when PyAudio has finished with a buffer, the only alternative I see is to allocate a fresh buffer at each callback. Perhaps that's not a big issue.)</p>
<p>Since the callback runs in a different thread, it's basically never safe to access your <code>buffer</code> from the main thread and you cannot know <em>immediately</em> when the callback is finished (at least not without blocking the callback, which you should try to avoid). However, you can cycle through a list of buffers and as long as their total length is longer than the system latency you should be <em>reasonably</em> safe ...</p> <p>What's typically done in this case is to use a lock-free ringbuffer to transport audio data in and out of the callback. Sadly, PortAudio's ringbuffer implementation is not part of their shared library and AFAIK it isn't available in PyAudio either (nor in the <code>sounddevice</code> module).</p> <p>Anyway, Python isn't really the best environment for doing realtime audio in the first place, so you'll have to live with some limitations and you'll never really have both reliable performance <em>and</em> very low latencies.</p>
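<p>A minimal sketch of the buffer-cycling idea from the answer above. It assumes 16-bit mono output and a hypothetical <code>fill_with_samples()</code> helper; the pool size is an assumption and should be large enough that the pool's total duration exceeds the stream latency.</p>
<pre><code>import numpy as np
import pyaudio

FRAMES = 1024
NUM_BUFFERS = 8  # assumed pool size, not taken from the question

# Pre-allocate a pool of buffers and hand them to PyAudio in rotation,
# so a buffer is only reused after the rest of the pool has been played.
pool = [np.zeros(FRAMES, dtype=np.int16) for _ in range(NUM_BUFFERS)]
next_buf = 0

def callback(in_data, frame_count, time_info, status):
    global next_buf
    buf = pool[next_buf]
    next_buf = (next_buf + 1) % NUM_BUFFERS
    fill_with_samples(buf)  # hypothetical helper that writes frame_count samples into buf
    return (buf.tobytes(), pyaudio.paContinue)
</code></pre>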
python|numpy|pyaudio
0
4,701
37,014,715
Efficiently comparing data across rows in a Pandas Dataframe
<p>I have a <code>CSV</code> file of monthly cell phone bills in no particular order that I read into a <code>Pandas</code> <code>Dataframe</code>. I'd like to add a column for each bill that shows how much it differed from the previous bill for the same account. This CSV is just a sub-set of my data. My code works fine, but is pretty sloppy and very slow when you look at a CSV file close to a million rows. </p> <p>What should I be doing to make this more efficient?</p> <p>CSV:</p> <pre><code>Account Number,Bill Month,Bill Amount 4543,3/1/2015,300 4543,1/1/2015,100 4543,2/1/2015,200 2322,1/1/2015,22 2322,3/1/2015,38 2322,2/1/2015,25 </code></pre> <p>Python:</p> <pre><code>import numpy as np import pandas as pd data = pd.read_csv('data.csv', low_memory=False) # sort my data and reset the index so I can use index and index - 1 in the loop data = data.sort_values(by=['Account Number', 'Bill Month']) data = data.reset_index(drop=True) # add a blank column for the difference data['Difference'] = np.nan for index, row in data.iterrows(): # special handling for the first row so I don't get negative indexes if index == 0: data.ix[index, 'Difference'] = "-" else: # if the account in the current row and the row before are the same, then compare Bill Amounts if data.ix[index, 'Account Number'] == data.ix[index - 1, 'Account Number']: data.ix[index, 'Difference'] = data.ix[index, 'Bill Amount'] - data.ix[index - 1, 'Bill Amount'] else: data.ix[index, 'Difference'] = "-" print data </code></pre> <p>Desired Output:</p> <pre><code> Account Number Bill Month Bill Amount Difference 0 2322 1/1/2015 22 - 1 2322 2/1/2015 25 3 2 2322 3/1/2015 38 13 3 4543 1/1/2015 100 - 4 4543 2/1/2015 200 100 5 4543 3/1/2015 300 100 </code></pre>
<pre><code>df = pd.DataFrame({ 'Account Number': {0: 4543, 1: 4543, 2: 4543, 3: 2322, 4: 2322, 5: 2322}, 'Bill Amount': {0: 300.0, 1: 100.0, 2: 200.0, 3: 22.0, 4: 38.0, 5: 25.0}, 'Bill Month': { 0: pd.Timestamp('2015-03-01 00:00:00'), 1: pd.Timestamp('2015-01-01 00:00:00'), 2: pd.Timestamp('2015-02-01 00:00:00'), 3: pd.Timestamp('2015-01-01 00:00:00'), 4: pd.Timestamp('2015-03-01 00:00:00'), 5: pd.Timestamp('2015-02-01 00:00:00')}}) </code></pre> <p>You can group on account number and billing month (which sorts by default), sum the Bill Amount (or just take the first if you are guaranteed to only have one bill per month), group again on the first level of the index (the account number), and take the difference using <code>diff</code>.</p> <pre><code>&gt;&gt;&gt; (df.groupby(['Account Number', 'Bill Month'])['Bill Amount'] .sum() .groupby(level=0) .diff()) Account Number Bill Month 2322 2015-01-01 NaN 2015-02-01 3 2015-03-01 13 4543 2015-01-01 NaN 2015-02-01 100 2015-03-01 100 </code></pre>
python|python-2.7|pandas
1
4,702
54,969,646
How does pytorch backprop through argmax?
<p>I'm building Kmeans in pytorch using gradient descent on centroid locations, instead of expectation-maximisation. Loss is the sum of square distances of each point to its nearest centroid. To identify which centroid is nearest to each point, I use argmin, which is not differentiable everywhere. However, pytorch is still able to backprop and update weights (centroid locations), giving similar performance to sklearn kmeans on the data.</p> <p>Any ideas how this is working, or how I can figure this out within pytorch? Discussion on pytorch github suggests argmax is not differentiable: <a href="https://github.com/pytorch/pytorch/issues/1339" rel="noreferrer">https://github.com/pytorch/pytorch/issues/1339</a>. </p> <p>Example code below (on random pts):</p> <pre><code>import numpy as np import torch num_pts, batch_size, n_dims, num_clusters, lr = 1000, 100, 200, 20, 1e-5 # generate random points vector = torch.from_numpy(np.random.rand(num_pts, n_dims)).float() # randomly pick starting centroids idx = np.random.choice(num_pts, size=num_clusters) kmean_centroids = vector[idx][:,None,:] # [num_clusters,1,n_dims] kmean_centroids = torch.tensor(kmean_centroids, requires_grad=True) for t in range(4001): # get batch idx = np.random.choice(num_pts, size=batch_size) vector_batch = vector[idx] distances = vector_batch - kmean_centroids # [num_clusters, #pts, #dims] distances = torch.sum(distances**2, dim=2) # [num_clusters, #pts] # argmin membership = torch.min(distances, 0)[1] # [#pts] # cluster distances cluster_loss = 0 for i in range(num_clusters): subset = torch.transpose(distances,0,1)[membership==i] if len(subset)!=0: # to prevent NaN cluster_loss += torch.sum(subset[:,i]) cluster_loss.backward() print(cluster_loss.item()) with torch.no_grad(): kmean_centroids -= lr * kmean_centroids.grad kmean_centroids.grad.zero_() </code></pre>
<p>As alvas noted in the comments, <code>argmax</code> is not differentiable. However, once you compute it and assign each datapoint to a cluster, the derivative of loss with respect to the location of these clusters is well-defined. This is what your algorithm does.</p> <p>Why does it work? If you had only one cluster (so that the <code>argmax</code> operation didn't matter), your loss function would be quadratic, with minimum at the mean of the data points. Now with multiple clusters, you can see that your loss function is piecewise (in higher dimensions think volumewise) quadratic - for any set of centroids <code>[C1, C2, C3, ...]</code> each data point is assigned to some centroid <code>CN</code> and the loss is <em>locally</em> quadratic. The extent of this locality is given by all alternative centroids <code>[C1', C2', C3', ...]</code> for which the assignment coming from <code>argmax</code> remains the same; within this region the <code>argmax</code> can be treated as a constant, rather than a function and thus the derivative of <code>loss</code> is well-defined.</p> <p>Now, in reality, it's unlikely you can treat <code>argmax</code> as constant, but you can still treat the naive "argmax-is-a-constant" derivative as pointing approximately towards a minimum, because the majority of data points are likely to indeed belong to the same cluster between iterations. And once you get close enough to a local minimum such that the points no longer change their assignments, the process can converge to a minimum.</p> <p>Another, more theoretical way to look at it is that you're doing an approximation of expectation maximization. Normally, you would have the "compute assignments" step, which is mirrored by <code>argmax</code>, and the "minimize" step which boils down to finding the minimizing cluster centers given the current assignments. The minimum is given by <code>d(loss)/d([C1, C2, ...]) == 0</code>, which for a quadratic loss is given analytically by the means of data points within each cluster. In your implementation, you're solving the same equation but with a gradient descent step. In fact, if you used a 2nd order (Newton) update scheme instead of 1st order gradient descent, you would be implicitly reproducing exactly the baseline EM scheme.</p>
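<p>A small sketch (not the asker's code) of the point above that the <code>argmin</code> assignment acts as a constant: the integer indices carry no gradient history, so the loss is locally quadratic in the centroids and <code>backward()</code> gives a well-defined gradient.</p>
<pre><code>import torch

points = torch.randn(6, 2)                        # toy data
centroids = torch.randn(3, 2, requires_grad=True)

# squared distance of every point to every centroid: shape [6, 3]
d = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(dim=-1)

assign = d.argmin(dim=1)                          # integer tensor, no grad flows through it
loss = d[torch.arange(len(points)), assign].sum() # distance to the assigned centroid only
loss.backward()
print(centroids.grad)                             # well-defined, despite the argmin
</code></pre>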
machine-learning|cluster-analysis|pytorch|k-means|backpropagation
18
4,703
55,081,439
Python, Splitting Multiple Strings in a Column
<p>Good afternoon, i am trying to split text in a column to a specfic format here is my table below</p> <pre><code>UserId Application 1 Grey Blue::Black Orange;White:Green 2 Yellow Purple::Orange Grey;Blue Pink::Red </code></pre> <p>I would like it to read the following:</p> <pre><code>UserId Application 1 Grey Blue 1 White Orange 2 Yellow Purple 2 Blue Pink </code></pre> <p>Basically, i would like to keep the first string of every :: instance for every string in a given cell. </p> <p>So far my code is</p> <pre><code>def unnesting(df, explode): idx=df.index.repeat(df[explode[0]].str.len()) df1=pd.concat([pd.DataFrame({x:np.concatenate(df[x].values)} )for x in explode],axis=1) df1.index=idx return df1.join(df.drop(explode,1),how='left') df['Application']=df.Role.str.split(';|::|').map(lambda x : x[0::2]) unnesting(df.drop('Role',1),['Application'] </code></pre> <p>The following code reads </p> <pre><code>UserId Application 1 Grey Blue, White Orange 2 Yellow Purple, Blue Pink </code></pre> <p>Please Assist i dont know where i should be using pandas or numpy to solve this problem!!</p>
<p>Maybe you can try using <code>extractall</code></p> <pre><code>yourdf=df.set_index('UserId').Application.str.extractall(r'(\w+):').reset_index(level=0) # You can adding rename(columns={0:'Application'})at the end Out[87]: UserId 0 match 0 1 Grey 1 1 White 0 2 Yellow 1 2 Blue </code></pre> <hr> <p><strong><em>Update</em></strong> look at the <a href="https://stackoverflow.com/questions/53218931/how-do-i-unnest-explode-a-column-in-a-pandas-dataframe/53218939#53218939">unnesting</a> , after we <code>split</code> and select the value we need from the string , we store them into a <code>list</code> , when you have a <code>list</code> type in you <code>columns</code> , I recommend using <a href="https://stackoverflow.com/questions/53218931/how-do-i-unnest-explode-a-column-in-a-pandas-dataframe/53218939#53218939">unnesting</a> </p> <pre><code>df['LIST']=df.Application.str.split(';|::|:').map(lambda x : x[0::2]) unnesting(df.drop('Application',1),['LIST']) Out[111]: LIST UserId 0 Grey Blue 1 0 White 1 1 Yellow Purple 2 1 Blue Pink 2 </code></pre> <hr> <p>My own def-function </p> <pre><code>def unnesting(df, explode): idx=df.index.repeat(df[explode[0]].str.len()) df1=pd.concat([pd.DataFrame({x:np.concatenate(df[x].values)} )for x in explode],axis=1) df1.index=idx return df1.join(df.drop(explode,1),how='left') </code></pre>
python|pandas|numpy|dataframe|text
2
4,704
28,253,102
Python 3: Multiply a vector by a matrix without NumPy
<p>I'm fairly new to Python and trying to create a function to multiply a vector by a matrix (of any column size). e.g.:</p> <pre><code>multiply([1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]]) [1, 1] </code></pre> <p>Here is my code:</p> <pre><code>def multiply(v, G): result = [] total = 0 for i in range(len(G)): r = G[i] for j in range(len(v)): total += r[j] * v[j] result.append(total) return result </code></pre> <p>The problem is that when I try to select the first row of each column in the matrix (r[j]) the error 'list index out of range' is shown. Is there any other way of completing the multiplication without using NumPy?</p>
<p>The Numpythonic approach: (using <code>numpy.dot</code> in order to get the dot product of two matrices)</p> <pre><code>In [1]: import numpy as np In [3]: np.dot([1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]]) Out[3]: array([1, 1]) </code></pre> <p>The Pythonic approach:</p> <p>The length of your second <code>for</code> loop is <code>len(v)</code> and you attempt to indexing <code>v</code> based on that so you got index Error . As a more pythonic way you can use <code>zip</code> function to get the columns of a list then use <code>starmap</code> and <code>mul</code> within a list comprehension:</p> <pre><code>In [13]: first,second=[1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]] In [14]: from itertools import starmap In [15]: from operator import mul In [16]: [sum(starmap(mul, zip(first, col))) for col in zip(*second)] Out[16]: [1, 1] </code></pre>
python|python-3.x|numpy|matrix|vector
10
4,705
34,935,329
Iterate over a column in a dataframe matching each value with a value in another column in another dataframe
<p>I basically have two data frames. Let's say aa and bb. I want to look all the values in the first column of bb that are in the first column of aa and if they are I have to get column 2 of aa and add it to a new column in bb (if there is not much I'll put a 0). Let's see if looking at some code it makes more sense. I've done it using apply and a function:</p> <pre><code>aa=pd.DataFrame({'a':[1,2,3,4,5],'b':[6,7,8,9,0]}) bb=pd.DataFrame({'c':[11,2,13,4,15],'d':['f','h','j','k','l']}) a b 0 1 6 1 2 7 2 3 8 3 4 9 4 5 0 c d 0 11 f 1 2 h 2 13 j 3 4 k 4 15 l def set_time_session (row): element = row['c'] if element in aa['a'].unique(): return aa['b'][aa['a']==element] else: return 0 column = bb.apply(set_time_session,axis=1) bb['newcolumn']=column c d newcolumn 0 11 f 0 1 2 h 7 2 13 j 0 3 4 k 9 4 15 l 0 </code></pre> <p>This actually works, but when done in a dataframe with 200000 rows it takes forever to complete. I'm sure the is a better and faster way to do it. Thanks!</p>
<p>Try this:</p> <pre><code>res = pd.merge(aa, bb, left_on='a', right_on='c', how='inner', left_index=True) bb['newcolumn']= res.reindex(range(len(aa))).fillna(0)['b'] print(bb) </code></pre>
python|pandas
0
4,706
34,972,297
Fill in missing values in pandas dataframe using mean
<pre><code>datetime 2012-01-01 125.5010 2012-01-02 NaN 2012-01-03 125.5010 2013-01-04 NaN 2013-01-05 125.5010 2013-02-28 125.5010 2014-02-28 125.5010 2016-01-02 125.5010 2016-01-04 125.5010 2016-02-28 NaN </code></pre> <p>I would like to fill in the missing values in this dataframe by using a climatology computed from the dataset, i.e. fill in the missing <code>28th feb 2016</code> value by averaging values of <code>28th feb</code> from other years. How do I do this?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by <code>month</code> and <code>day</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow"><code>transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.fillna.html" rel="nofollow"><code>fillna</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mean.html" rel="nofollow"><code>mean</code></a>:</p> <pre><code>print df.groupby([df.index.month, df.index.day]).transform(lambda x: x.fillna(x.mean())) datetime 2012-01-01 125.501 2012-01-02 125.501 2012-01-03 125.501 2013-01-04 125.501 2013-01-05 125.501 2013-02-28 125.501 2014-02-28 125.501 2016-01-02 125.501 2016-01-04 125.501 2016-02-28 125.501 </code></pre>
python|pandas|dataframe|mean|missing-data
1
4,707
35,277,725
clojure working with scipy and numpy
<p>Is there any good way to call python from clojure as a means of doing data science with scipy, numpy, scikit-learn, etc.</p> <p>I know about implementations of clojure which run on python instead of java, but this doeesn't work for me, as I also need to call java libraries in my project. I also know about Jython, but I don't know of a clean way to use this with Clojure.</p> <p>I want to use Clojure in my projects because I prefer it as a language, but I can't deny that Python has an incredible community, and some of the most beautiful, well-designed libraries around.</p>
<p>Instead of trying to get Jython to play well with both Clojure and numpy/scipy, you can use <a href="http://docs.hylang.org/en/latest/" rel="noreferrer">Hy</a>. It is hosted on Python and it somewhat resembles Clojure.</p> <p>If I really wanted to use numpy/scipy, I would write the backend in Python (or Hy), run it as a separate service. And if I really like ring for instance, or can't live without Instaparse, I would write a frontend in Clojure. </p> <p>As an aside Python has <a href="https://github.com/swaroopch/edn_format" rel="noreferrer">EDN</a> libs. It would be an interesting project to integrate one of them in Hy, or write one from scratch.</p>
python|numpy|clojure|jython|data-science
5
4,708
34,943,913
pandas How to find all zeros in a column
<pre><code>import copy head6= copy.deepcopy(df) closed_day = head6[["DATEn","COUNTn"]]\ .groupby(head6['DATEn']).sum() print closed_day.head(10) </code></pre> <p>Output:</p> <pre><code> COUNTn DATEn 06-29-13 11326823 06-30-13 5667746 07-01-13 8694140 07-02-13 7275701 07-03-13 9948824 07-04-13 1072542591 07-05-13 7867611 07-06-13 4733018 07-07-13 4838404 07-08-13 42962814 </code></pre> <p>Now what if I want to find if <code>COUNTn</code> has any zeros and I want to return corresponding day? I've written something like this but I'm getting an error saying my df doesn't have any column called <code>COUNTn</code></p> <pre><code>ndf = closed_day[["DATEn","COUNTn"]][closed_day.COUNTn == 0] print ndf.head(1) </code></pre>
<p>After the groupby, COUNTn is converted into a Series, which doesn't have columns (it's just a single column). If you want to keep it as a dataframe, as your code is expecting, use <code>groupby(grouper, as_index=False)</code>.</p>
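<p>A short sketch of that suggestion, reusing the names from the question (<code>head6</code>, <code>DATEn</code>, <code>COUNTn</code>):</p>
<pre><code># keep DATEn as a regular column instead of the group index
closed_day = head6.groupby('DATEn', as_index=False)['COUNTn'].sum()

# days whose total count is zero
ndf = closed_day[closed_day.COUNTn == 0]
print(ndf.head(1))
</code></pre>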
python|pandas
1
4,709
31,098,228
Solving system using linalg with constraints
<p>I want to solve some system in the form of matrices using <code>linalg</code>, but the resulting solutions should sum up to 1. For example, suppose there are 3 unknowns, x, y, z. After solving the system their values should sum up to 1, like .3, .5, .2. Can anyone please tell me how I can do that? </p> <p>Currently, I am using something like <code>result = linalg.solve(A, B)</code>, where <code>A</code> and <code>B</code> are matrices. But this doesn't return solutions in the range <code>[0, 1]</code>.</p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html" rel="noreferrer">Per the docs</a>, </p> <blockquote> <p><code>linalg.solve</code> is used to compute the "exact" solution, <code>x</code>, of the well-determined, i.e., full rank, linear matrix equation <code>ax = b</code>.</p> </blockquote> <p>Being linear, there can be at most one solution. If the solution you found does not sum up to 1, then adding the extra constraint would yield no solution.</p> <p>However, you could use <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.minimize.html" rel="noreferrer"><code>scipy.optimize.minimize</code></a> to find the point on the constraint plane which <em>minimizes</em> the quantity <code>||Ax-b||^2</code>:</p> <pre><code>def f(x): y = np.dot(A, x) - b return np.dot(y, y) cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1}) res = optimize.minimize(f, [0, 0, 0], method='SLSQP', constraints=cons, options={'disp': False}) </code></pre> <hr> <p>For example, given this system of equations</p> <pre><code>import numpy as np import numpy.linalg as LA import scipy.optimize as optimize A = np.array([[1, 3, 4], [5, 6, 9], [1, 2, 3]]) b = np.array([1, 2, 1]) x = LA.solve(A, b) </code></pre> <p>The solution does not add up to 1:</p> <pre><code>print(x) # [-0.5 -1.5 1.5] </code></pre> <p>But you could try to minimize <code>f</code>:</p> <pre><code>def f(x): y = np.dot(A, x) - b return np.dot(y, y) </code></pre> <p>subject to the constraint <code>cons</code>:</p> <pre><code>cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1}) res = optimize.minimize(f, [0, 0, 0], method='SLSQP', constraints=cons, options={'disp': False}) xbest = res['x'] # array([ 0.30000717, 1.89998823, -1.1999954 ]) </code></pre> <p><code>xbest</code> sums to 1:</p> <pre><code>print(xbest.sum()) 1 </code></pre> <p>The difference <code>A·xbest - b</code> is:</p> <pre><code>print(np.dot(A, xbest) - b) # [ 0.19999026 0.10000663 -0.50000257] </code></pre> <p>and the sum of the squares of the difference, (also computable as <code>f(xbest)</code>) is :</p> <pre><code>print(res['fun']) 0.30000000014542572 </code></pre> <p>No other value of x minimizes this quantity more while satisfying the constraint.</p>
python|numpy|matrix|equation-solving
14
4,710
67,301,172
How to predict data from scikit-learn toy dataset
<p>I am studying machine learning and I am trying to analyze the scikit diabetes toy database. In this case, I want to change the default Bunch object to a pandas DataFrame object. I tried using the argument <em>as_frame=True</em> and it did actually change the object type to DataFrame.</p> <p>So after that, I trained the data and the problems come when I'm trying to plot it:</p> <pre><code>import matplotlib.pyplot as plt import pandas as pd import numpy as np from sklearn import datasets, linear_model from sklearn.model_selection import train_test_split dataset = datasets.load_diabetes(as_frame=True) X = dataset.data y = dataset.target y = y.to_frame() X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=42) regressor = linear_model.LinearRegression() regressor.fit(X_train, y_train) plt.scatter(X_train, y_train, color='blue') plt.plot(X_train, regressor.predict(X_test), color='red') </code></pre> <p>The problem is when I am trying to plot it using matplotlib, since the <em>as_frame=True</em> returns (data, target) where the data is a DataFrame object and target as Series.</p> <pre><code>Traceback (most recent call last): File &quot;C:/Users/Kelvin/OneDrive/Documents/analytics/diabetes-sklearn/test.py&quot;, line 19, in &lt;module&gt; plt.scatter(X_train, y_train, color='blue') File &quot;C:\Users\Kelvin\OneDrive\Desktop\analytics\lib\site-packages\matplotlib\pyplot.py&quot;, line 3037, in scatter __ret = gca().scatter( File &quot;C:\Users\Kelvin\OneDrive\Desktop\analytics\lib\site-packages\matplotlib\__init__.py&quot;, line 1352, in inner return func(ax, *map(sanitize_sequence, args), **kwargs) File &quot;C:\Users\Kelvin\OneDrive\Desktop\analytics\lib\site-packages\matplotlib\axes\_axes.py&quot;, line 4478, in scatter raise ValueError(&quot;x and y must be the same size&quot;) ValueError: x and y must be the same size </code></pre> <p>So, my question is if there are ways that I can change the whole data as DataFrame just like how we get the data using <em>pd.read_csv()</em>?</p>
<p>That is already a DataFrame; you are getting the error because you are plotting X_train against y_train, and X_train has multiple columns.</p> <p>But if you want your dataset in a CSV file, you can use this code:</p> <pre><code>X.to_csv('train_data.csv') </code></pre> <p>This will save the dataset into a CSV file in your working directory. Now you can use <code>pd.read_csv</code> on <code>train_data.csv</code>.</p>
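<p>If a scatter plot is still wanted, one hedged workaround is to plot a single feature from the frame against the target, for example the <code>bmi</code> column of the diabetes data:</p>
<pre><code>plt.scatter(X_train['bmi'], y_train['target'], color='blue')
plt.xlabel('bmi')
plt.ylabel('target')
plt.show()
</code></pre>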
python|pandas|matplotlib|machine-learning|scikit-learn
0
4,711
34,859,391
2D NumPy array comparison in given range
<p>If I have a 2D array of numbers and I want to check whether every value in it is within some range of the corresponding value in another 2D array, how would you do it efficiently with NumPy?</p> <pre><code>[[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]] is in range 1 with [[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]] =&gt; TRUE [[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]] is in range 1 with [[0,3,0],[1,4,3],[1,4,5],[0,3,4],[0,4,3]] =&gt; TRUE [[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]] is in range 1 with [[0,4,0],[1,4,3],[1,4,5],[0,3,4],[0,4,3]] =&gt; FALSE </code></pre> <p>The last one is FALSE because one of the items, at index [0,1], is 4, which means abs(2-4) &gt; 1.</p>
<p>You can do this easily with numpy's vectorized arithmetic and <code>all</code>. For example:</p> <pre><code>&gt;&gt;&gt; a = np.array([[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]]) &gt;&gt;&gt; b = np.array([[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]]) &gt;&gt;&gt; abs(a-b) array([[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]) &gt;&gt;&gt; abs(a-b) &lt;= 1 array([[ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True]], dtype=bool) &gt;&gt;&gt; (abs(a-b) &lt;= 1).all() True </code></pre> <p>and</p> <pre><code>&gt;&gt;&gt; a2 = np.array([[1,2,1],[2,3,2],[2,3,4],[1,2,3],[1,3,2]]) &gt;&gt;&gt; b2 = np.array([[0,4,0],[1,4,3],[1,4,5],[0,3,4],[0,4,3]]) &gt;&gt;&gt; abs(a2-b2) &lt;= 1 array([[ True, False, True], [ True, True, True], [ True, True, True], [ True, True, True], [ True, True, True]], dtype=bool) &gt;&gt;&gt; (abs(a2-b2) &lt;= 1).all() False </code></pre>
python|arrays|numpy|comparison
2
4,712
60,104,886
Count the number of records based on a variable criteria Python
<p>I have a dataframe as follow: </p> <pre><code>dashboard = pd.DataFrame({ 'id':[1,1,1,1,1,2,2,3,3,4,4], 'level': [1,2,2.1,2.2,3,3.1,4,1.1,2,3,4], 'cost': [10,6,4,8,9,6,11,23,3,2,12], 'category': ['Original', 'Time', 'Money','Original','Original','Time','Original','Original','Time','Original','Original'] }) </code></pre> <p>I need to get the following table where if for example the level is 3, the code will sum all the previous levels only (2.2, 2.1 - excluding 2):</p> <pre><code>pd.DataFrame({ 'id': [1,2,3,4], 'level': [3,4,2,4], 'cost': [12,6,23,0], 'category': ['Time &amp; Money','Time','Time',''] }) </code></pre>
<p>You can do it this way </p> <pre><code>df2 = dashboard.groupby('id')['level'].last().astype(int).reset_index() df2['cost'] = dashboard.groupby('id').apply(lambda x: x[x['level']&gt;=(x['level'].tail(1)-0.9).sum()]['cost'].sum()-x['cost'].tail(1)).reset_index(drop=True) df2['category'] = dashboard.groupby('id').apply(lambda x: x[x['level']&gt;=(x['level'].tail(1)-0.9).sum()].groupby('id')['category'].agg(' &amp; '.join)).reset_index(drop=True).replace('Original','', regex=True).str.strip((' &amp; ')) df2 </code></pre> <p><strong>Output</strong> (the input &amp; the output you have provided do not math for column 'category') </p> <pre><code>id level cost category 0 1 3 12 Money 1 2 4 6 Time 2 3 2 23 Time 3 4 4 0 </code></pre>
python|pandas
0
4,713
59,987,987
ImportError and AttributeError for TensorFlow
<blockquote> <p>Background</p> </blockquote> <p>I'm trying to work on a GAN neural network (I'm a beginner for both Python and Machine-Learning), and I need Tensorflow. </p> <blockquote> <p><strong>Problem</strong></p> </blockquote> <p>I have tried to use TensorFlow but can't install. I have read <a href="https://stackoverflow.com/questions/42244198/importerror-no-module-named-tensorflow">questions and answers on SO</a> about various errors, and have tested out those solutions, but I believe this case is different.</p> <blockquote> <h2>What I have tried (in chronological order)</h2> </blockquote> <p><strong>1. Plain Reboot</strong></p> <p>a) Close all tabs for Jupyter Notebook<br> b) Close Anaconda Navigator<br> c) Restart Jupyter Notebook<br> d) Rerun code</p> <p>Result: <code>ImportError: no module</code></p> <p><strong>2. Reinstall tf</strong></p> <p>a) Repeat 1a and 1b<br> b) Open Anaconda Prompt<br> c) <code>pip install tensorflow</code></p> <p>Result: <code>module installed</code></p> <p><strong>3. Check out Navigator</strong></p> <p>Tensorflow installed in all environments I have.</p> <p><strong>4. Reinstall tf (Take 2)</strong></p> <p>a) Repeat 1a and 1b<br> b) Open Anaconda Prompt<br> c) <code>conda install -c conda-forge tensorflow</code></p> <p>Result: <code>EnvironmentNotWritableError:The current user does not have write permissions to the target environment. environment location: C:\ProgramData\&lt;my username&gt;</code></p> <p><strong>5. Run as admin <a href="https://stackoverflow.com/questions/55290271/updating-anaconda-fails-environment-not-writable-error/57144988">(from this question)</a></strong></p> <p>a) Repeat 1a and 1b<br> b) Open Anaconda Prompt<br> c) <code>conda install -c conda-forge tensorflow</code></p> <p>Result: </p> <pre><code>Preparing transaction: done Verifying transaction: done Executing transaction: &lt;after some text&gt; done </code></pre> <p><strong>6. 
Run in Jupyter</strong></p> <p>The code has nothing to do with TF at the moment, but still, it doesn't work.</p> <pre><code>import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import os import glob2 font_lib = glob2.glob('**/*.ttf', recursive=True) count = 0 for f in font_lib: count = count + 1 if count &lt; 10: print (f) else: break print ("done") </code></pre> <p>Result: </p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-6-efbffb1990be&gt; in &lt;module&gt; ----&gt; 1 import tensorflow as tf 2 import numpy as np 3 import matplotlib.pyplot as plt 4 import os 5 import glob2 C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\__init__.py in &lt;module&gt; 22 23 # pylint: disable=g-bad-import-order ---&gt; 24 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import 25 26 from tensorflow._api.v1 import app C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\__init__.py in &lt;module&gt; 80 from tensorflow.python import data 81 from tensorflow.python import distribute ---&gt; 82 from tensorflow.python import keras 83 from tensorflow.python.feature_column import feature_column_lib as feature_column 84 from tensorflow.python.layers import layers C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\__init__.py in &lt;module&gt; 23 24 from tensorflow.python.keras import activations ---&gt; 25 from tensorflow.python.keras import applications 26 from tensorflow.python.keras import backend 27 from tensorflow.python.keras import callbacks C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\applications\__init__.py in &lt;module&gt; 24 from tensorflow.python.keras import backend 25 from tensorflow.python.keras import engine ---&gt; 26 from tensorflow.python.keras import layers 27 from tensorflow.python.keras import models 28 from tensorflow.python.keras import utils C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\layers\__init__.py in &lt;module&gt; 27 28 # Advanced activations. 
---&gt; 29 from tensorflow.python.keras.layers.advanced_activations import LeakyReLU 30 from tensorflow.python.keras.layers.advanced_activations import PReLU 31 from tensorflow.python.keras.layers.advanced_activations import ELU C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\layers\advanced_activations.py in &lt;module&gt; 25 from tensorflow.python.keras.engine.base_layer import Layer 26 from tensorflow.python.keras.engine.input_spec import InputSpec ---&gt; 27 from tensorflow.python.keras.utils import tf_utils 28 from tensorflow.python.ops import math_ops 29 from tensorflow.python.util.tf_export import tf_export C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\__init__.py in &lt;module&gt; 36 from tensorflow.python.keras.utils.layer_utils import get_source_inputs 37 from tensorflow.python.keras.utils.losses_utils import squeeze_or_expand_dimensions ---&gt; 38 from tensorflow.python.keras.utils.multi_gpu_utils import multi_gpu_model 39 from tensorflow.python.keras.utils.np_utils import normalize 40 from tensorflow.python.keras.utils.np_utils import to_categorical C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\utils\multi_gpu_utils.py in &lt;module&gt; 20 from tensorflow.python.framework import ops 21 from tensorflow.python.keras import backend as K ---&gt; 22 from tensorflow.python.keras.engine.training import Model 23 from tensorflow.python.ops import array_ops 24 from tensorflow.python.util.tf_export import tf_export C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in &lt;module&gt; 40 from tensorflow.python.keras.engine import training_generator 41 from tensorflow.python.keras.engine import training_utils ---&gt; 42 from tensorflow.python.keras.engine.network import Network 43 from tensorflow.python.keras.optimizer_v2 import optimizer_v2 44 from tensorflow.python.keras.utils import data_utils C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\network.py in &lt;module&gt; 38 from tensorflow.python.keras.engine import base_layer 39 from tensorflow.python.keras.engine import base_layer_utils ---&gt; 40 from tensorflow.python.keras.engine import saving 41 from tensorflow.python.keras.engine import training_utils 42 from tensorflow.python.keras.utils import generic_utils C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\saving.py in &lt;module&gt; 36 # pylint: disable=g-import-not-at-top 37 try: ---&gt; 38 import h5py 39 HDF5_OBJECT_HEADER_LIMIT = 64512 40 except ImportError: C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py in &lt;module&gt; 34 _errors.silence_errors() 35 ---&gt; 36 from ._conv import register_converters as _register_converters 37 _register_converters() 38 h5py\h5r.pxd in init h5py._conv() h5py\h5r.pyx in init h5py.h5r() AttributeError: type object 'h5py.h5r.Reference' has no attribute '__reduce_cython__' </code></pre> <p><strong>7. Update h5py</strong></p> <p>a) Followed @Alireza Tajadod's instructions and tried. b) Run code in Jupyter</p> <p>Result: Same as 6)...</p> <hr> <p>I have tried every method I could, and any help would be highly appreciated. Thank you in advance!</p> <blockquote> <p>Edit:</p> </blockquote> <p>Reminded by the answer by @GarytheIceBreaker: Sorry that I forgot to mention, but I have everything installed and set up in Windows. Although this might be frustrating to some, please suggest solutions that can be done within Windows OS premises. Thanks!</p>
<p>I tried jumping through hoops on Windows, and did get Anaconda working, but not TensorFlow. I recommend running Ubuntu virtually, on WSL, if you don't want to make any major changes to your machine. Ubuntu is pretty user-friendly these days, even if you use it without any graphical shell enabled.<br> Enable WSL, install Ubuntu from the Microsoft Store, and install TensorFlow inside Ubuntu with <code>pip install tensorflow</code>.</p>
python|tensorflow|anaconda
0
4,714
59,968,166
Python tensorflow creating tfrecord with multiple array features
<p>I am following the TensorFlow <a href="https://www.tensorflow.org/tutorials/load_data/tfrecord" rel="nofollow noreferrer">docs</a> to generate a tf.record from three NumPy arrays, however, I am getting an error when trying to serialize the data. I want the resulting <code>tfrecord</code> to contain three features. </p> <pre class="lang-or-tag-here prettyprint-override"><code>import numpy as np import pandas as pd # some random data x = np.random.randn(85) y = np.random.randn(85,2128) z = np.random.choice(range(10),(85,155)) def _float_feature(value): """Returns a float_list from a float / double.""" return tf.train.Feature(float_list=tf.train.FloatList(value=[value])) def _int64_feature(value): """Returns an int64_list from a bool / enum / int / uint.""" return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def serialize_example(feature0, feature1, feature2): """ Creates a tf.Example message ready to be written to a file. """ # Create a dictionary mapping the feature name to the tf.Example-compatible # data type. feature = { 'feature0': _float_feature(feature0), 'feature1': _float_feature(feature1), 'feature2': _int64_feature(feature2) } # Create a Features message using tf.train.Example. example_proto = tf.train.Example(features=tf.train.Features(feature=feature)) return example_proto.SerializeToString() features_dataset = tf.data.Dataset.from_tensor_slices((x, y, z)) features_dataset &lt;TensorSliceDataset shapes: ((), (2128,), (155,)), types: (tf.float64, tf.float32, tf.int64)&gt; for f0,f1,f2 in features_dataset.take(1): print(f0) print(f1) print(f2) def tf_serialize_example(f0,f1,f2): tf_string = tf.py_function( serialize_example, (f0,f1,f2), # pass these args to the above function. tf.string) # the return type is `tf.string`. return tf.reshape(tf_string, ()) # The result is a scalar </code></pre> <p>Yet, when trying to run <code>tf_serialize_example(f0,f1,f2)</code></p> <p>I am getting the error:</p> <pre><code>InvalidArgumentError: TypeError: &lt;tf.Tensor: shape=(2128,), dtype=float32, numpy= array([-0.5435242 , 0.97947884, -0.74457455, ..., has type tensorflow.python.framework.ops.EagerTensor, but expected one of: int, long, float Traceback (most recent call last): </code></pre> <p>I think the reason is, that my features are arrays and not numbers. How do I make this code work for features, which are arrays and not numbers?</p>
<p>Okay, I found time to have a closer look now. I noticed that the usage of <code>features_dataset</code> and <code>tf_serialize_example</code> comes from the tutorial on the tensorflow webppage. I don't know what the advantages of this method are and how to fix this.</p> <p>But here's a workflow that should work for your code (I re-opened the generated tfrecords files and they were fine).</p> <pre><code>import numpy as np import tensorflow as tf # some random data x = np.random.randn(85) y = np.random.randn(85,2128) z = np.random.choice(range(10),(85,155)) def _float_feature(value): """Returns a float_list from a float / double.""" return tf.train.Feature(float_list=tf.train.FloatList(value=value.flatten())) def _int64_feature(value): """Returns an int64_list from a bool / enum / int / uint.""" return tf.train.Feature(int64_list=tf.train.Int64List(value=value.flatten())) def serialize_example(feature0, feature1, feature2): """ Creates a tf.Example message ready to be written to a file. """ # Create a dictionary mapping the feature name to the tf.Example-compatible # data type. feature = { 'feature0': _float_feature(feature0), 'feature1': _float_feature(feature1), 'feature2': _int64_feature(feature2) } # Create a Features message using tf.train.Example. return tf.train.Example(features=tf.train.Features(feature=feature)) writer = tf.python_io.TFRecordWriter('TEST.tfrecords') example = serialize_example(x,y,z) writer.write(example.SerializeToString()) writer.close() </code></pre> <p>The main difference in this code is that you feed numpy arrays as opposed to tensorflow Tensors to <code>serialize_example</code>. Hope this helps</p>
python|tensorflow|tfrecord
1
4,715
65,292,293
Sorting a Pandas DataFrame by both column and row index at the same time
<p>I have a DataFrame, df, that I want to sort by both columns and rows at the same time.</p> <pre><code>data = {'c2': ['4.0', '2.0', '1.0', '3.0'], 'c1': ['200', '100', '300', '400'], 'c3': ['aa', 'cc', 'dd', 'ee']} df = pd.DataFrame(data) df.index = ['d', 'b', 'c', 'a'] df </code></pre> <p>I have no problem sorting by either of them, but I cannot figure out how to get them sorted at the same time. I would like the output to have the columns sorted by 'c1', 'c2', 'c3' and the rows to be 'a', 'b', 'c', 'd'. Don't think it is very difficult but I cannot figure out how to do it.</p>
<p>You can chain <code>sort_index</code>:</p> <pre><code>df.sort_index().sort_index(axis=1) </code></pre> <p>Output:</p> <pre><code> c1 c2 c3 a 400 3.0 ee b 100 2.0 cc c 300 1.0 dd d 200 4.0 aa </code></pre>
python|pandas
1
4,716
49,880,927
Vectorized pythonic way to get count of elements greater than current element
<p>I'd like to compare every entry in array b with its respective column to find how many entries (from that column) are larger than the reference. My code seems embarrassingly slow and I suspect it is due to for loops rather than vector operations.</p> <p>Can we 'vectorize' the following code?</p> <pre><code>import numpy as np n = 1000 m = 200 b = np.random.rand(n,m) p = np.zeros((n,m)) for i in range(0,n): #rows for j in range(0,m): # cols r = b[i,j] p[i,j] = ( ( b[:,j] &gt; r).sum() ) / (n) </code></pre> <p>After some more thought, I think sorting each column would improve overall runtime by making the later comparisons much faster.</p> <p>After some more searching I believe I want column-wise percentileofscore (<a href="http://lagrange.univ-lyon1.fr/docs/scipy/0.17.1/generated/scipy.stats.percentileofscore.html" rel="nofollow noreferrer">http://lagrange.univ-lyon1.fr/docs/scipy/0.17.1/generated/scipy.stats.percentileofscore.html</a>)</p>
<p>It just needed a bit of deeper study to figure out that we could simply use <code>argsort()</code> indices along each column to get the count of greater than the current element at each iteration.</p> <p><strong>Approach #1</strong></p> <p>With that theory in mind, one solution would be simply using two <code>argsort</code>-ing to get the counts -</p> <pre><code>p = len(b)-1-b.argsort(0).argsort(0) </code></pre> <p><strong>Approach #2</strong></p> <p>We could optimize it further, given the fact that the <code>argsort</code> indices are unique numbers. So, the second <code>argsort</code> step could use some array-assignment + <code>advanced-indexing</code>, like so -</p> <pre><code>def count_occ(b): n,m = b.shape out = np.zeros((n,m),dtype=int) idx = b.argsort(0) out[idx, np.arange(m)] = n-1-np.arange(n)[:,None] return out </code></pre> <p>Finally, divide by <code>n</code> as noted in the question for both the approaches.</p> <hr /> <h3>Benchmarking</h3> <p>Timing all the approaches posted thus far -</p> <pre><code>In [174]: np.random.seed(0) ...: n = 1000 ...: m = 200 ...: b = np.random.rand(n,m) In [175]: %timeit (len(b)-1-b.argsort(0).argsort(0))/float(n) 100 loops, best of 3: 17.6 ms per loop In [176]: %timeit count_occ(b)/float(n) 100 loops, best of 3: 12.1 ms per loop # @Brad Solomon's soln In [177]: %timeit np.sum(b &gt; b[:, None], axis=-2) / float(n) 1 loop, best of 3: 349 ms per loop # @marco romelli's loopy soln In [178]: %timeit loopy_soln(b)/float(n) 10 loops, best of 3: 139 ms per loop </code></pre>
python|performance|numpy|vectorization
3
4,717
49,959,730
Create a numpy.recarray from two lists python
<p>Is there a easy way to create a <code>numpy.recarray</code> from two lists. For instance, give the following lists:</p> <pre><code>list1 = ["a","b","c"] list2 = [1,2,3,4,5,6,7,8,9,10,11,12] </code></pre> <p>What I am trying to do is to get the following result:</p> <pre><code>rec_array = np.rec.array([('a', 1), ('a', 2),('a', 3),('a', 4), ('b', 5), ('b', 6),('b', 7),('b', 8), ('c', 9), ('c', 10),('c', 11),('c', 12)] dtype = [('string','|U5'),('int', '&lt;i4')]) </code></pre> <p>I mean I know how a rec.array works but don't really know how to create one from lists. Maybe <code>dicts</code> could make things easy since the <code>key ,value</code> option. But from lists is there a way to do this?.</p>
<pre><code>In [73]: list1 = ["a","b","c"] ...: list2 = [1,2,3,4,5,6,7,8,9,10,11,12] ...: In [74]: dt = [('string','|U5'),('int', '&lt;i4')] </code></pre> <p>A simple pairing of elements:</p> <pre><code>In [75]: [(i,j) for i, j in zip(list1,list2)] Out[75]: [('a', 1), ('b', 2), ('c', 3)] </code></pre> <p>break <code>list2</code> into 3 groups:</p> <pre><code>In [79]: list3 = [list2[i:i+4] for i in range(0,12,4)] In [80]: list3 Out[80]: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] </code></pre> <p>double list comprehension:</p> <pre><code>In [81]: [(i,j) for i,row in zip(list1,list3) for j in row] Out[81]: [('a', 1), ('a', 2), ('a', 3), ('a', 4), ('b', 5), ('b', 6), ('b', 7), ('b', 8), ('c', 9), ('c', 10), ('c', 11), ('c', 12)] </code></pre> <p>make a structured array from that:</p> <pre><code>In [82]: np.array(_, dtype=dt) Out[82]: array([('a', 1), ('a', 2), ('a', 3), ('a', 4), ('b', 5), ('b', 6), ('b', 7), ('b', 8), ('c', 9), ('c', 10), ('c', 11), ('c', 12)], dtype=[('string', '&lt;U5'), ('int', '&lt;i4')]) </code></pre> <p>OR to make a (3,4) array:</p> <pre><code>In [86]: [[(i,j) for j in row] for i,row in zip(list1, list3)] Out[86]: [[('a', 1), ('a', 2), ('a', 3), ('a', 4)], [('b', 5), ('b', 6), ('b', 7), ('b', 8)], [('c', 9), ('c', 10), ('c', 11), ('c', 12)]] In [87]: np.array(_, dt) Out[87]: array([[('a', 1), ('a', 2), ('a', 3), ('a', 4)], [('b', 5), ('b', 6), ('b', 7), ('b', 8)], [('c', 9), ('c', 10), ('c', 11), ('c', 12)]], dtype=[('string', '&lt;U5'), ('int', '&lt;i4')]) In [88]: _.shape Out[88]: (3, 4) </code></pre> <p>Or replicate <code>list1</code> to same size as <code>list2</code>:</p> <pre><code>In [97]: np.array([(i,j) for i,j in zip(np.repeat(list1,4),list2)],dt).reshape(3 ...: ,4) Out[97]: array([[('a', 1), ('a', 2), ('a', 3), ('a', 4)], [('b', 5), ('b', 6), ('b', 7), ('b', 8)], [('c', 9), ('c', 10), ('c', 11), ('c', 12)]], dtype=[('string', '&lt;U5'), ('int', '&lt;i4')]) </code></pre>
arrays|list|numpy
1
4,718
63,865,722
extract a list of tuples from pandas series stored as string
<p>I have a csv that I've read into pandas with the following format</p> <pre><code>import pandas as pd data = [['A', &quot;[('a', 1), ('b', 2)]&quot;], ['C', &quot;[('c', 3), ('d', 4)]&quot;]] mydf = pd.DataFrame(data, columns = ['name', 'tupledat']) </code></pre> <p>how can I extract values from the second column as a list of tuples? I think this may have been generated as a multiindex? I'm not very familiar with that concept. Can this be specified when reading in from csv with pandas? I have struggled to accomplish this by string-splitting. I think there must be a ready-made solution for this common scenario.</p> <p>example desired result: <code>[('a', 1), ('b', 2)]</code></p>
<p>Check with <code>ast</code></p> <pre><code>import ast df.tupledat=df.tupledat.apply(ast.literal_eval) df Out[59]: name tupledat 0 A [(a, 1), (b, 2)] 1 C [(c, 3), (d, 4)] </code></pre>
python|pandas
2
4,719
63,913,137
Assign a category, according to range of the value as a new column, python
<p>I have a piece of R code that i am trying to figure out how to do in Python pandas. It takes a column called INDUST_CODE and <strong>check its value to assign a category according to range of the value as a new column.</strong> May i ask how i can do something like that in python please?</p> <pre><code> industry_index &lt;- full_table_update %&gt;% mutate(industry = case_when( INDUST_CODE &lt; 1000 ~ 'Military_service', INDUST_CODE &lt; 1500 &amp; INDUST_CODE &gt;= 1000 ~ 'Public_service', INDUST_CODE &lt; 2000 &amp; INDUST_CODE &gt;= 1500 ~ 'Private_sector', INDUST_CODE &gt;= 2000 ~ 'Others' )) %&gt;% select(industry) </code></pre>
<p>You can use <code>pandas.cut</code> to organise this into bins in line with your example.</p> <pre><code>df = pd.DataFrame([500, 1000, 1001, 1560, 1500, 2000, 2300, 7, 1499], columns=['INDUST_CODE']) INDUST_CODE 0 500 1 1000 2 1001 3 1560 4 1500 5 2000 6 2300 7 7 8 1499 df['Categories'] = pd.cut(df['INDUST_CODE'], [0, 999, 1499, 1999, 100000], labels=['Military_service', 'Public_service', 'Private_sector', 'Others']) INDUST_CODE Categories 0 500 Military_service 1 1000 Public_service 2 1001 Public_service 3 1560 Private_sector 4 1500 Private_sector 5 2000 Others 6 2300 Others 7 7 Military_service 8 1499 Public_service Categories (4, object): [Military_service &lt; Public_service &lt; Private_sector &lt; Others] </code></pre>
python|r|pandas|range|categories
2
4,720
63,839,881
groupby aggregate does not work as expected for Pandas
<p>I need some help with aggregation and joining the dataframe groupby output.</p> <p>Here is my dataframe:</p> <pre><code> df = pd.DataFrame({ 'Date': ['2020/08/18','2020/08/18', '2020/08/18', '2020/08/18', '2020/08/18', '2020/08/18', '2020/08/18'], 'Time':['Val3',60,30,'Val2',60,60,'Val2'], 'Val1': [0, 53.5, 33.35, 0,53.5, 53.5,0], 'Val2':[0, 0, 0, 45, 0, 0, 35], 'Val3':[48.5,0,0,0,0,0,0], 'Place':['LOC_A','LOC_A','LOC_A','LOC_B','LOC_B','LOC_B','LOC_A'] }) </code></pre> <p>I want following result:</p> <pre><code> Place Total_sum Factor Val2_new 0 LOC_A 86.85 21.71 35 1 LOC_B 107.00 26.75 45 </code></pre> <p>I have tried following:</p> <pre><code>df_by_place = df.groupby('Place')['Val1'].sum().reset_index(name='Total_sum') df_by_place['Factor'] = round(df_by_place['Total_sum']*0.25, 2) df_by_place['Val2_new'] = df.groupby('Place')['Val2'].agg('sum') print(df_by_place) </code></pre> <p>But I get following result:</p> <pre><code> Place Total_sum Factor Val2_new 0 LOC_A 86.85 21.71 NaN 1 LOC_B 107.00 26.75 NaN </code></pre> <p>When I do following operation by it self:</p> <pre><code>print(df.groupby('Place')['Val2'].agg('sum')) Output is desired: Place LOC_A 35 LOC_B 45 </code></pre> <p>But when I assign to a column it gives &quot;NaN&quot; value.</p> <p>Any help to this issue would be appreciated.</p> <p>Thank You in advance.</p>
<p>Groupby in pandas &gt;= 0.25 will allow you to assign names to columns inside of it and do what you want in one go.</p> <pre><code>df.groupby('Place').agg(Total_sum = ('Val1','sum'), Factor = ('Val1', lambda x: round((x * 0.25).sum(),2)), Val2_new = ('Val2', 'sum')).reset_index() </code></pre> <p>This provides your desired result.</p> <pre><code> Place Total_sum Factor Val2_new 0 LOC_A 86.85 21.71 35 1 LOC_B 107.00 26.75 45 </code></pre> <p>Using lambda functions within groupby will make things a lot neater!</p>
python|pandas|pandas-groupby
0
4,721
63,791,216
Keras MaxPooling3D not allowed
<p>I'm trying to build a CNN and got stuck with MaxPooling3D layers not working. Both layers get an input shape of (1, 5, 32) and I'd like to max-pool over the depth using poolsize (1, 1, 32) so the output becomes of shape (1, 5, 1). However this throws the error:</p> <pre><code>ValueError: Input 0 of layer max_pooling3d is incompatible with the layer: expected ndim=5, found ndim=4. Full shape received: [None, 1, 5, 32] </code></pre> <p>I don't understand why a dimension of 5 is expected/required. If I instead use MaxPooling2D layers with poolsize (1,1) everything compiles correctly and I get the model below.</p> <pre><code>&gt; Model: &quot;functional_1&quot; &gt; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to &gt; ================================================================================================== input_1 (InputLayer) [(None, 5, 5, 1)] 0 __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 5, 1, 32) 192 input_1[0][0] __________________________________________________________________________________________________ conv2d (Conv2D) (None, 1, 5, 32) 192 input_1[0][0] __________________________________________________________________________________________________ reshape (Reshape) (None, 1, 5, 32) 0 conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 1, 5, 32) 0 conv2d[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 1, 5, 32) 0 reshape[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 1, 5, 64) 0 max_pooling2d[0][0] max_pooling2d_1[0][0] ================================================================================================== Total params: 384 Trainable params: 384 Non-trainable params: 0 __________________________________________________________________________________________________ Process finished with exit code 0 </code></pre> <p>The code I used to build this:</p> <pre><code> n=5 inp_similarity = Input(shape=(n, n, 1)) conv11 = Conv2D(32, (n, 1))(inp_similarity) conv12 = Conv2D(32, (1, n))(inp_similarity) reshape1 = Reshape((1,5,32))(conv12) maxpl11 = MaxPooling2D((1, 1))(conv11) maxpl12 = MaxPooling2D((1, 1))(reshape1) merge1 = Concatenate()([maxpl11, maxpl12]) model = Model(inp_similarity, merge1) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model.summary() </code></pre>
<p>your aim is to operate a 'pooling' on the the feature dimensionality... this is not the scope of the pooling layer... they operate pooling only on the spatial dimensionalities. you need something more simple</p> <pre><code>n=5 inp_similarity = Input(shape=(n, n, 1)) conv11 = Conv2D(32, (n, 1))(inp_similarity) conv12 = Conv2D(32, (1, n))(inp_similarity) reshape1 = Reshape((1,5,32))(conv12) maxpl11 = Lambda(lambda x: tf.reduce_max(x, axis=-1, keepdims=True))(conv11) maxpl12 = Lambda(lambda x: tf.reduce_max(x, axis=-1, keepdims=True))(reshape1) merge1 = Concatenate()([maxpl11, maxpl12]) model = Model(inp_similarity, merge1) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) model.summary() __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_4 (InputLayer) [(None, 5, 5, 1)] 0 __________________________________________________________________________________________________ conv2d_35 (Conv2D) (None, 5, 1, 32) 192 input_4[0][0] __________________________________________________________________________________________________ conv2d_34 (Conv2D) (None, 1, 5, 32) 192 input_4[0][0] __________________________________________________________________________________________________ reshape_2 (Reshape) (None, 1, 5, 32) 0 conv2d_35[0][0] __________________________________________________________________________________________________ lambda_6 (Lambda) (None, 1, 5, 1) 0 conv2d_34[0][0] __________________________________________________________________________________________________ lambda_7 (Lambda) (None, 1, 5, 1) 0 reshape_2[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 1, 5, 2) 0 lambda_6[0][0] lambda_7[0][0] ================================================================================================== </code></pre>
python|tensorflow|keras|conv-neural-network|max-pooling
2
4,722
64,070,776
Write numpy array to buffer as jpeg
<p>I have a program which returns a stream of np.uin8 arrays. I would like to now broadcast these to a website being hosted by that computer.</p> <p>I planned to do this by injecting the code in <a href="https://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming" rel="nofollow noreferrer">this</a> documentation by replacing the line <code>camera.start_recording(output, format='mjpeg')</code> with <code>output.write(&lt;numpy_array_but_jpeg&gt;)</code>. The documentation for <code>start_recording</code> states that if the <code>write()</code> method exists it will write the data in the requested format to that buffer. I can find lots of stuff online that instructs on how to save a <code>np.uint8</code> as a jpeg, but in my case I want to write that data to a buffer in memory, and I won't want to have to save the image to file and then read that file into the buffer.</p> <p>Unfortunately, changing the output format of the <code>np.uint8</code> earlier in the stream is not an option.</p> <p>Thanks for any assistance. For simplicity I have copied the important bits of code below</p> <pre><code>class StreamingOutput(object): def __init__(self): self.frame = None self.buffer = io.BytesIO() self.condition = Condition() def write(self, buf): if buf.startswith(b'\xff\xd8'): # New frame, copy the existing buffer's content and notify all # clients it's available self.buffer.truncate() with self.condition: self.frame = self.buffer.getvalue() self.condition.notify_all() self.buffer.seek(0) return self.buffer.write(buf) with picamera.PiCamera(resolution='640x480', framerate=24) as camera: output = StreamingOutput() camera.start_recording(output, format='mjpeg') </code></pre>
<p>OpenCV has functions to do this</p> <p><code>retval, buf = cv.imencode(ext,img[, params])</code></p> <p>lets you write an array to a memory buffer.</p> <p>This <a href="https://github.com/jeffbass/imagezmq/blob/master/examples/pub_sub_broadcast.py" rel="nofollow noreferrer">example</a> here shows a basic implementation of what I was talking about.</p>
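<p>A short, hedged sketch of how that could feed the <code>StreamingOutput</code> from the question, assuming the frames arrive as <code>uint8</code> BGR arrays (the <code>push_frame</code> helper is made up for illustration):</p>
<pre><code>import cv2

def push_frame(frame, output):
    # encode the array as JPEG in memory; no temporary file is needed
    ok, jpeg = cv2.imencode('.jpg', frame)
    if ok:
        output.write(jpeg.tobytes())  # starts with the 0xFFD8 JPEG marker the writer checks for
</code></pre>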
numpy|raspberry-pi|video-streaming|streaming|mjpeg
0
4,723
63,832,786
Effective way to use iterrows in pandas (another way)
<p>This is my first question on forum. Thanks for any help!</p> <p>I wrote nested for loop based on df.iterrows () (sic.) and it takes a huuuuuge amount of time to perform. I need to assing value from one dataframe into another one by checking all the cells in described condition. Can you just help me to make it effective? (multiprocessing, apply method, vectorization or anything else?) Would be so grateful! :)</p> <p>Sample data:</p> <pre><code>import pandas as pd import numpy as np d1 = {'geno_start' : [60, 1120, 1660], 'geno_end' : [90, 1150, 1690], 'original_subseq' : ['AAATGCCTGAACCTTGGAATTGGA', 'AAATGCCTGAACCTTGGAATTGGA', 'AAATGCCTGAACCTTGGAATTGGA']} d2 = {'most_left_coordinate_genome' : [56, 1120, 1655], 'most_right_coordinate_genome' : [88, 1150, 1690], 'protein_ID' : ['XYZ_1', 'XYZ_2', 'XYZ_3']} df_1 = pd.DataFrame(data=d1) df_2 = pd.DataFrame(data=d2) df_1['protein_ID'] = np.nan def match_ranges(df1: pd.DataFrame, df2: pd.DataFrame): for index, row_2 in df2.iterrows(): for index_1, row_1 in df1.iterrows(): if (row_1['geno_start'] &gt;= row_2['most_left_coordinate_genome']) &amp; (row_1['geno_end'] &lt;= row_2['most_right_coordinate_genome']): df1['protein_ID'].iloc[index_1] = row_2['protein_ID'] elif (abs(row_1['geno_start'] - row_2['most_left_coordinate_genome']) &lt; 30) &amp; (row_1['geno_end'] &lt;= row_2['most_right_coordinate_genome']): df1['protein_ID'].iloc[index_1] = row_2['protein_ID'] elif (row_1['geno_start'] &gt;= row_2['most_left_coordinate_genome']) &amp; (abs(row_1['geno_end'] - row_2['most_right_coordinate_genome']) &lt; 30): df1['protein_ID'].iloc[index_1] = row_2['protein_ID'] match_ranges(df_1, df_2) </code></pre> <p><a href="https://i.stack.imgur.com/10djT.png" rel="nofollow noreferrer">Desired output:</a></p>
<p>Here is a way that goes from 2 for-loops to 1. I re-named a couple columns to cut line width.</p> <p>First, create the data frames:</p> <pre><code>import pandas as pd d1 = {'geno_start' : [60, 1120, 1660], 'geno_end' : [90, 1150, 1690], 'original_subseq' : ['AAATGCCTGAACCTTGGAATTGGA', 'AAATGCCTGAACCTTGGAATTGGA', 'AAATGCCTGAACCTTGGAATTGGA'],} d2 = {'left' : [56, 1120, 1655], 'right' : [88, 1150, 1690], 'protein_ID' : ['XYZ_1', 'XYZ_2', 'XYZ_3']} df_1 = pd.DataFrame(data=d1) df_1['protein_ID'] = '?' df_1['rule'] = '?' df_2 = pd.DataFrame(data=d2) </code></pre> <p>Second, populate the <code>protein_ID</code> column in the first data frame (i.e., with genome start, genome end):</p> <pre><code>for g in df_1.itertuples(): # Rule A: left most &lt;= geno start &lt; geno end &lt;= right-most # LM-----------------------RM left- and right-most # GS-----------GE genome start, end if ((df_2['left'] &lt;= g.geno_start) &amp; (g.geno_end &lt;= df_2['right'])).any(): mask = (df_2['left'] &lt;= g.geno_start) &amp; (g.geno_end &lt;= df_2['right']) df_1.at[g.Index, 'protein_ID'] = df_2.loc[mask, 'protein_ID'].values[0] df_1.at[g.Index, 'rule'] = 'Rule A' # Rule B: geno start before left-most # LM-----------------RM # GS-----------------GE elif ((df_2['left'] - g.geno_start &lt; 30) &amp; (g.geno_end &lt;= df_2['right'])).any(): mask = (df_2['left'] - g.geno_start &lt; 30) &amp; (g.geno_end &lt;= df_2['right']) df_1.at[g.Index, 'protein_ID'] = df_2.loc[mask, 'protein_ID'].values[0] df_1.at[g.Index, 'rule'] = 'Rule B' # Rule C: geno end after right-most # LM-----------------RM # GS-----------------GE elif ((df_2['left'] &lt;= g.geno_start) &amp; (g.geno_end - df_2['right'] &lt; 30)).any(): mask = (df_2['left'] &lt;= g.geno_start) &amp; (g.geno_end - df_2['right'] &lt; 30) df_1.at[g.Index, 'protein_ID'] = df_2.loc[mask, 'protein_ID'].values[0] df_1.at[g.Index, 'rule'] = 'Rule C' else: pass print(df_1) geno_start geno_end original_subseq protein_ID rule 0 60 90 AAATGCCTGAACCTTGGAATTGGA XYZ_1 Rule C 1 1120 1150 AAATGCCTGAACCTTGGAATTGGA XYZ_2 Rule A 2 1660 1690 AAATGCCTGAACCTTGGAATTGGA XYZ_3 Rule A </code></pre>
python|pandas|multiprocessing|vectorization
0
4,724
47,066,244
Is it possible to get OpenCL on Windows Linux Subsystem?
<p>I've been trying for the past day to get <strong>Tensorflow</strong> built with <strong>OpenCL</strong> on the Linux Subsystem.</p> <p>I followed <a href="https://www.codeplay.com/portal/03-30-17-setting-up-tensorflow-with-opencl-using-sycl" rel="noreferrer">this guide</a>, but when typing <code>clinfo</code> it says</p> <blockquote> <p>Number of platforms 0</p> </blockquote> <p>Then typing <code>/usr/local/computecpp/bin/computecpp_info</code> gives me</p> <blockquote> <p>OpenCL error -1001: Unable to retrieve number of platforms. Device Info: Cannot find any devices on the system. Please refer to your OpenCL vendor documentation. Note that OPENCL_VENDOR_PATH is not defined. Some vendors may require this environment variable to be set.</p> </blockquote> <p>Am I doing anything wrong? Is it even possible to install OpenCL on the Windows Subsystem for Linux?</p> <p><strong>Note:</strong> I'm using an <code>AMD R9 390X</code> from <code>MSI</code>, <code>64bit Windows Home Edition</code></p>
<p>With the launch of WSL2, CUDA programs are now supported in WSL (<a href="https://docs.nvidia.com/cuda/wsl-user-guide/index.html" rel="nofollow noreferrer">more information here</a>); however, there is still no support for OpenCL as of this writing: <a href="https://github.com/microsoft/WSL/issues/6951" rel="nofollow noreferrer">https://github.com/microsoft/WSL/issues/6951</a>.</p>
linux|tensorflow|opencl|windows-subsystem-for-linux
4
4,725
32,833,609
How to circumvent the restriction on field names?
<p>If I define a recarray r with a field called data as follows</p> <pre><code>import numpy r = numpy.zeros( 1, numpy.dtype([('data', 'f8')]) ).view(numpy.recarray ) </code></pre> <p>the data field will refer to some internal recarray buffer rather than a floating point number. Indeed, running</p> <pre><code>r.data </code></pre> <p>yields</p> <pre><code>&lt;read-write buffer for 0x7f3c10841cf8, size 8, offset 0 at 0x7f3c1083ee70&gt; </code></pre> <p>rather than [0]. I suspect the reason for the failure is that recarray already has a member called data and hence it just ignores my field called data. The same problem occurs if I try to use any name of already existing members of recarray.</p> <p>My questions are:</p> <p>1) Is it possible to circumvent this limitation of recarray and how to do it?</p> <p>2) Is this limitation likely to be lifted in the future?</p>
<p>Here is the <code>getattribute</code> method for <code>recarray</code>. Python translates <code>obj.par1</code> to <code>obj.__getattribute__('par1')</code>. This would explain why the field name has to be a valid attribute name, when used in recarrays.</p> <pre><code>def __getattribute__(self, attr): try: return object.__getattribute__(self, attr) #** except AttributeError: # attr must be a fieldname pass fielddict = ndarray.__getattribute__(self, 'dtype').fields try: res = fielddict[attr][:2] except (TypeError, KeyError): raise AttributeError("record array has no attribute %s" % attr) obj = self.getfield(*res) # if it has fields return a recarray, otherwise return # normal array if obj.dtype.fields: return obj if obj.dtype.char in 'SU': return obj.view(chararray) return obj.view(ndarray) </code></pre> <p>The <code>**</code> line explains why <code>obj.data</code> returns the buffer pointer, not your field. Same would apply to 'shape' and 'strides'. This also makes it possible to access array methods. You want the recarray to behave as much like a regular array as possible, don't you?</p> <hr> <p>The field names in a structured array are like the keys of a dictionary, relatively free form (though I've never explored the limits). But in <code>recarray</code>, those names have to function also a attribute names. Attributes names have to be valid variable names - that's a Python constraint.</p> <p>In <a href="https://stackoverflow.com/a/32540939/901925">https://stackoverflow.com/a/32540939/901925</a> I quote from the <code>genfromtxt</code> docs:</p> <blockquote> <p>Numpy arrays with a structured dtype can also be viewed as recarray, where a field can be accessed as if it were an attribute. For that reason, we may need to make sure that the field name doesn’t contain any space or invalid character, or that it does not correspond to the name of a standard attribute (like size or shape), which would confuse the interpreter.</p> </blockquote> <p>Also a tutorial on Python classes says:</p> <blockquote> <p>Attribute references use the standard syntax used for all attribute references in Python: obj.name. Valid attribute names are all the names that were in the class’s namespace when the class object was created. <a href="https://docs.python.org/2/tutorial/classes.html#tut-object" rel="nofollow noreferrer">https://docs.python.org/2/tutorial/classes.html#tut-object</a></p> </blockquote>
python-2.7|numpy|recarray
3
4,726
32,789,991
python, dimension subset of ndimage using indices stored in another image
<p>I have two images with the following dimensions, x, y, z:</p> <p>img_a: 50, 50, 100</p> <p>img_b: 50, 50</p> <p>I'd like to reduce the z-dim of img_a from 100 to 1, grabbing just the value coincide with the indices stored in img_b, pixel by pixel, as indices vary throughout the image.</p> <p>This should result in a third image with the dimension:</p> <p>img_c: 50, 50</p> <p>Is there already a function dealing with this issue?</p> <p>thanks, peter</p>
<p>Ok updated with a vectorized method.</p> <p><a href="https://stackoverflow.com/questions/27277240/index-numpy-nd-array-along-last-dimension">Here</a> is a duplicate question but the solution currently doesn't work when the row and column dimensions are not the same size.</p> <p>The code below has the method I added that explicitly creates the indices for look up purposes with <code>numpy.indices()</code> and then does the loop logic but in a vectorized way. It's slightly slower (2x) than the <code>numpy.meshgrid()</code> method but I think it's easier to understand and it also works with unequal row and column sizes.</p> <p>The timing is approximate but on my system I get:</p> <pre><code>Meshgrid time: 0.319000005722 Indices time: 0.704999923706 Loops time: 13.3789999485 </code></pre> <p>-</p> <pre><code>import numpy as np import time x_dim = 5000 y_dim = 5000 channels = 3 # base data a = np.random.randint(1, 1000, (x_dim, y_dim, channels)) b = np.random.randint(0, channels, (x_dim, y_dim)) # meshgrid method (from here https://stackoverflow.com/a/27281566/377366 ) start_time = time.time() i1, i0 = np.meshgrid(xrange(x_dim), xrange(y_dim), sparse=True) c_by_meshgrid = a[i0, i1, b] print('Meshgrid time: {}'.format(time.time() - start_time)) # indices method (this is the vectorized method that does what you want) start_time = time.time() b_indices = np.indices(b.shape) c_by_indices = a[b_indices[0], b_indices[1], b[b_indices[0], b_indices[1]]] print('Indices time: {}'.format(time.time() - start_time)) # loops method start_time = time.time() c_by_loops = np.zeros((x_dim, y_dim), np.intp) for i in xrange(x_dim): for j in xrange(y_dim): c_by_loops[i, j] = a[i, j, b[i, j]] print('Loops time: {}'.format(time.time() - start_time)) # confirm correctness print('Meshgrid method matches loops: {}'.format(np.all(c_by_meshgrid == c_by_loops))) print('Loop method matches loops: {}'.format(np.all(c_by_indices == c_by_loops))) </code></pre>
python|numpy|indexing|ndimage
0
4,727
38,626,435
Tensorflow ValueError: No variables to save from
<p>I have written a tensorflow CNN and it is already trained. I wish to restore it to run it on a few samples but unfortunately its spitting out:</p> <blockquote> <p>ValueError: No variables to save</p> </blockquote> <p>My eval code can be found here:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import main import Process import Input eval_dir = "/Users/Zanhuang/Desktop/NNP/model.ckpt-30" checkpoint_dir = "/Users/Zanhuang/Desktop/NNP/checkpoint" init_op = tf.initialize_all_variables() saver = tf.train.Saver() def evaluate(): with tf.Graph().as_default() as g: sess.run(init_op) ckpt = tf.train.get_checkpoint_state(checkpoint_dir) saver.restore(sess, eval_dir) images, labels = Process.eval_inputs(eval_data = eval_data) forward_propgation_results = Process.forward_propagation(images) top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1) print(top_k_op) def main(argv=None): evaluate() if __name__ == '__main__': tf.app.run() </code></pre>
<p>The <code>tf.train.Saver</code> must be created <em>after</em> the variables that you want to restore (or save). Additionally it must be created in the same graph as those variables.</p> <p>Assuming that <code>Process.forward_propagation(…)</code> also creates the variables in your model, adding the saver creation after this line should work:</p> <pre><code>forward_propgation_results = Process.forward_propagation(images) </code></pre> <p>In addition, you must pass the new <code>tf.Graph</code> that you created to the <code>tf.Session</code> constructor so you'll need to move the creation of <code>sess</code> inside that <code>with</code> block as well.</p> <p>The resulting function will be something like:</p> <pre class="lang-py prettyprint-override"><code>def evaluate(): with tf.Graph().as_default() as g: images, labels = Process.eval_inputs(eval_data = eval_data) forward_propgation_results = Process.forward_propagation(images) init_op = tf.initialize_all_variables() saver = tf.train.Saver() top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1) with tf.Session(graph=g) as sess: sess.run(init_op) saver.restore(sess, eval_dir) print(sess.run(top_k_op)) </code></pre>
python|tensorflow|machine-learning
27
4,728
38,543,850
How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots)?
<p>The <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorboard/README.md#image-dashboard" rel="noreferrer">Image Dashboard</a> section of the Tensorboard ReadMe says:</p> <blockquote> <p>Since the image dashboard supports arbitrary pngs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard.</p> </blockquote> <p>I see how a pyplot image could be written to file, read back in as a tensor, and then used with tf.image_summary() to write it to TensorBoard, but this statement from the readme suggests there is a more direct way. Is there? If so, is there any further documentation and/or examples of how to do this efficiently? </p>
<p>It is quite easy to do if you have the image in a memory buffer. Below, I show an example, where a pyplot is saved to a buffer and then converted to a TF image representation which is then sent to an image summary.</p> <pre><code>import io import matplotlib.pyplot as plt import tensorflow as tf def gen_plot(): """Create a pyplot plot and save to buffer.""" plt.figure() plt.plot([1, 2]) plt.title("test") buf = io.BytesIO() plt.savefig(buf, format='png') buf.seek(0) return buf # Prepare the plot plot_buf = gen_plot() # Convert PNG buffer to TF image image = tf.image.decode_png(plot_buf.getvalue(), channels=4) # Add the batch dimension image = tf.expand_dims(image, 0) # Add image summary summary_op = tf.summary.image("plot", image) # Session with tf.Session() as sess: # Run summary = sess.run(summary_op) # Write summary writer = tf.train.SummaryWriter('./logs') writer.add_summary(summary) writer.close() </code></pre> <p>This gives the following TensorBoard visualization:</p> <p><a href="https://i.stack.imgur.com/ARU43.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ARU43.png" alt="enter image description here"></a></p>
python|tensorflow|matplotlib|pytorch|tensorboard
46
4,729
38,751,406
No module named 'pandas' - Jupyter, Python3 Kernel, TensorFlow through Docker
<p>I have a Docker container running from tensrflow with Jupyter (Python 3 Kernel) image: erroneousboat/tensorflow-python3-jupyter</p> <p>This works great and I can access the jupyter notebook from </p> <p><code>http://DOCKER_IP:8888</code></p> <p>My only issue is that pandas library is not installed. So, I tried to install it on my own. I opened up the docker quickstart terminal and ran:</p> <pre><code>docker exec CONTAINER_ID apt-get update docker exec CONTAINER_ID apt-get install -y python3-pandas </code></pre> <p>The installation succeeds, and yet I still get the ImportError: No module named 'pandas' when I try to import pandas in the jupyter notebook, like so:</p> <pre><code>import pandas as pd </code></pre> <p>I also tried installing pandas to the image rather than just my container by:</p> <pre><code>docker run -it erroneousboat/tensorflow-python3-jupyter /bin/bash apt-get update apt-get install -y python3-pandas exit </code></pre> <p>Still, in my jupyter notebook, pandas is not recognized. How can I fix this? Thank you!</p>
<p><code>pip install pandas</code> will install the latest version of pandas for you.</p> <p>Based on your <code>python-3.x</code> tag, I assume this <code>pip</code> belongs to your Python 3 installation; if you have multiple Python versions installed, make sure you are using the correct pip.</p>
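<p>A hedged way to guarantee the install lands in the same interpreter the notebook kernel uses (run inside a notebook cell; the IPython shell escape is assumed to be available, as it is in Jupyter):</p> <pre><code>import sys
# the shell escape below runs pip with the exact interpreter backing this kernel
!{sys.executable} -m pip install pandas
</code></pre>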
python-3.x|pandas|tensorflow|jupyter-notebook|docker-toolbox
2
4,730
63,301,232
How to use the isin function in an if condition in Python
<p>I am trying to use isin funtion in the if condtion within a function but it gives me an error</p> <p>I have function <code>f</code> , and I am passing columns <code>A</code> from the dataframe <code>df</code> , and my if condition should check if A in <code>('IND','USA')</code> then return visited_countries else not_visited_countries</p> <pre><code>def f(A) if A.isin(['IND','USA']): return Visited_countries else: return not_visited_countries df['D']=df.apply(lambda x: f(x.A,axis=1) </code></pre> <p>when I execute this code it give the below error</p> <pre><code>AttributeError: (&quot;'str' object has no attribute 'isin'&quot;, 'occurred at index 0') </code></pre> <p>Please let me know what I am missing here.</p>
<p>You need to apply <code>isin</code> to a Series (a whole column), not to the whole DataFrame or to a single string value. For example, using column <code>A</code> of the DataFrame: <code>if DF.A.isin(['IND','USA']).any():</code></p>
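<p>As a sketch of a fully vectorised alternative (the sample data and the literal labels <code>visited_countries</code> / <code>not_visited_countries</code> are assumptions), <code>np.where</code> applies the check to the whole column at once, so no plain string ever reaches <code>isin</code>:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['IND', 'GER', 'USA', 'FRA']})  # hypothetical sample data

# isin is evaluated on the whole Series; np.where picks the label per row
df['D'] = np.where(df['A'].isin(['IND', 'USA']),
                   'visited_countries', 'not_visited_countries')
</code></pre>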
python|pandas|dataframe
1
4,731
63,115,470
Replace existing values with NaN in a given .csv file
<p>Hi there I'm a newbie in python learning through notebook, I have given iris dataset through .csv file and asked to replace one of the column values in some particular rows to NaN.I have tried &quot;fillna&quot; functions and &quot;replace&quot; functions but I'm not successful.Here is my code:</p> <pre><code>import pandas as pd import numpy as np from numpy import nan as NaN url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' iris = pd.read_csv(url) iris.columns = ['sepal_length','sepal_width','petal_length','petal_width','class'] iris.columns #iris iris.petal_length.fillna(np.nan) iris1=iris.iloc[10:30] print (iris1) #bool_series = pd.isnull(iris['petal_length']) #print (df) </code></pre>
<p>It looks like the problem is that you are not saving the DataFrame returned by <code>.fillna()</code> or <code>.replace()</code>. By default, those methods return a new object and leave the original untouched. To fix this, either assign the result back to a variable or pass the <code>inplace=True</code> argument in your <code>replace()</code> or <code>fillna()</code> calls.</p>
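<p>A short sketch of both options (the value <code>1.4</code> and the column used are only illustrative, not taken from the assignment):</p> <pre><code>import numpy as np

# Option 1: keep the object returned by replace()
iris['petal_length'] = iris['petal_length'].replace(1.4, np.nan)

# Option 2: let pandas modify the DataFrame in place
iris.replace({'petal_length': 1.4}, np.nan, inplace=True)
</code></pre>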
python|pandas|numpy
2
4,732
63,167,919
How can I check the feature map sizes in a CNN?
<p>I'm trying to classify cat and dog in CNN with PyTorch. While I made few layers and processing images, I found that final processed feature map size doesn't match with calculated size. So I tried to check feature map size step by step in CNN process with print shape but it doesn't work. I heard tensorflow enables check tensor size in steps but how can I do that?</p> <p>What I want is :</p> <pre><code> def __init__(self): super(CNN, self).__init__() conv1 = nn.Conv2d(1, 16, 3, 1, 1) conv1_1 = nn.Conv2d(16, 16, 3, 1, 1) pool1 = nn.MaxPool2d(2) conv2 = nn.Conv2d(16, 32, 3, 1, 1) conv2_1 = nn.Conv2d(32, 32, 3, 1, 1) pool2 = nn.MaxPool2d(2) conv3 = nn.Conv2d(32, 64, 3, 1, 1) conv3_1 = nn.Conv2d(64, 64, 3, 1, 1) conv3_2 = nn.Conv2d(64, 64, 3, 1, 1) pool3 = nn.MaxPool2d(2) self.conv_module = nn.Sequential( conv1, nn.ReLU(), conv1_1, nn.ReLU(), pool1, # check first result size conv2, nn.ReLU(), conv2_1, nn.ReLU(), pool2, # check second result size conv3, nn.ReLU(), conv3_1, nn.ReLU(), conv3_2, nn.ReLU(), pool3, # check third result size pool4, # check fourth result size pool5 # check fifth result size ) </code></pre> <p>If there's any other way to check feature size at every step, please give some advice. Thanks in advance.</p>
<p>To do that you shouldn't use <code>nn.Sequential</code>. Just initialize your layers in <code>__init__()</code> and call them in the forward function. In the forward function you can print the shapes out. For example like this:</p> <pre><code>class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Conv2d(...) self.maxpool1 = nn.MaxPool2d() self.conv2 = nn.Conv2d(...) self.maxpool2 = nn.MaxPool2d() def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.maxpool1(x) print(x.size()) x = self.conv2(x) x = F.relu(x) x = self.maxpool2(x) print(x.size()) </code></pre> <p>Hope thats what you looking for!</p>
python|pytorch|conv-neural-network
0
4,733
67,893,599
How to process a file located in Azure blob Storage using python with pandas read_fwf function
<p>I need to open and work on data coming in a text file with python. The file will be stored in the Azure Blob storage or Azure file share.</p> <p>However, my question is can I use the same modules and functions like os.chdir() and read_fwf() I was using in windows? The code I wanted to run:</p> <pre><code>import pandas as pd import os os.chdir( file_path) df=pd.read_fwf(filename) </code></pre> <p>I want to be able to run this code and file_path would be a directory in Azure blob.</p> <p>Please let me know if it's possible. If you have a better idea where the file can be stored please share.</p> <p>Thanks,</p>
<p>As far as I know, <code>os.chdir(path)</code> can only operate on local files. If you want to move files from storage to local, you can refer to the following code:</p> <pre><code> connect_str = &quot;&lt;your-connection-string&gt;&quot; blob_service_client = BlobServiceClient.from_connection_string(connect_str) container_name = &quot;&lt;container-name&gt;&quot; file_name = &quot;&lt;blob-name&gt;&quot; container_client = blob_service_client.get_container_client(container_name) blob_client = container_client.get_blob_client(file_name) download_file_path = &quot;&lt;local-path&gt;&quot; with open(download_file_path, &quot;wb&quot;) as download_file: download_file.write(blob_client.download_blob().readall()) </code></pre> <p><code>pandas.read_fwf</code> can read blob directly from storage using URL:</p> <p><a href="https://i.stack.imgur.com/DbIRg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DbIRg.png" alt="enter image description here" /></a></p> <p>For example:</p> <pre><code> url = &quot;https://&lt;your-account&gt;.blob.core.windows.net/test/test.txt?&lt;sas-token&gt;&quot; df=pd.read_fwf(url) </code></pre>
python|pandas|operating-system|azure-blob-storage|read-fwf
0
4,734
67,608,501
Looping through rows with condition using Pandas
<p>I am trying to iterate the rows with condition like in below dataframe by creating new column <strong>New_Course</strong>.</p> <p>What I am looking for is when <strong>status= New and Status1=7</strong> then create new column <strong>New_COurse</strong> in below df and mark the sequence of course till the loop detects same condition and rest should be entered as 0 meaning if <strong>status= new and Status1 =0</strong> then it should be marked as 0 till next iteration.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>Status</th> <th>status1</th> <th>New_Course</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>New</td> <td>7</td> <td>1</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>1</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>1</td> </tr> <tr> <td>1</td> <td>New</td> <td>7</td> <td>2</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>7</td> <td>3</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>3</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>3</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>3</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> <tr> <td>1</td> <td>New</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>Existing Course</td> <td>7</td> <td>0</td> </tr> </tbody> </table> </div>
<p>Here is one way to do it:</p> <pre><code># create the increasing new course df['New_Course'] = (df.Status.eq('New') &amp; df.status1.eq(7)).cumsum() # set new course with Status=new and status1=0 as 0 df.loc[df.Status.eq('New') &amp; df.status1.ne(7), 'New_Course'] = 0 # set other rows as NaN df.loc[df.Status.ne('New'), 'New_Course'] = None # forward fill NaNs df.New_Course = df.New_Course.ffill().astype(int) df ID Status status1 New_Course 0 1 New 7 1 1 1 Existing Course 7 1 2 1 Existing Course 7 1 3 1 New 7 2 4 1 New 0 0 5 1 Existing Course 7 0 6 1 New 0 0 7 1 New 0 0 8 1 New 0 0 9 1 Existing Course 7 0 10 1 New 0 0 11 1 New 7 3 12 1 Existing Course 7 3 13 1 Existing Course 7 3 14 1 Existing Course 7 3 15 1 New 0 0 16 1 Existing Course 7 0 17 1 Existing Course 7 0 18 1 Existing Course 7 0 19 1 Existing Course 7 0 20 1 New 0 0 21 1 Existing Course 7 0 22 1 Existing Course 7 0 23 1 New 0 0 24 1 Existing Course 7 0 </code></pre>
python|pandas
1
4,735
67,978,203
How to Find Time Since Last Game Played by a Team in Pandas
<p>I have a dataset that contains matchups of teams, and each team can be either team1 or team2 for any given matchup, I am looking to write a pandas function to create the following table which keeps track of how long it has been since the last game played by that team.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>team.id_team1</th> <th>team.id_team2</th> <th>game.date</th> <th>time_since_last_team1</th> <th>time_since_last_team2</th> </tr> </thead> <tbody> <tr> <td>3</td> <td>23</td> <td>2007-11-03</td> <td>NaT</td> <td>NaT</td> </tr> <tr> <td>2</td> <td>28</td> <td>2007-11-03</td> <td>NaT</td> <td>NaT</td> </tr> <tr> <td>18</td> <td>20</td> <td>2007-11-03</td> <td>NaT</td> <td>NaT</td> </tr> <tr> <td>23</td> <td>4</td> <td>2007-11-04</td> <td>1</td> <td>NaT</td> </tr> <tr> <td>7</td> <td>2</td> <td>2007-11-05</td> <td>NaT</td> <td>2</td> </tr> <tr> <td>28</td> <td>3</td> <td>2007-11-05</td> <td>2</td> <td>2</td> </tr> <tr> <td>18</td> <td>3</td> <td>2007-11-06</td> <td>3</td> <td>1</td> </tr> </tbody> </table> </div> <p>I have made many attempts but none worth mentioning, the main problem that I am encountering is that the team doesn't stay in one column.</p> <p>Any suggestions?</p>
<p>Starting from this <code>games</code> dataframe (with <code>game.date</code> a datetime columns), and a current date <code>curdate</code>:</p> <pre><code>&gt;&gt;&gt; games team.id_team1 team.id_team2 game.date 0 3 23 2007-11-03 1 2 28 2007-11-03 2 18 20 2007-11-03 3 23 4 2007-11-04 4 7 2 2007-11-05 5 28 3 2007-11-05 6 18 3 2007-11-06 &gt;&gt;&gt; games.dtypes team.id_team1 int64 team.id_team2 int64 game.date datetime64[ns] dtype: object &gt;&gt;&gt; curdate = pd.Timestamp.today().floor(freq='D') &gt;&gt;&gt; curdate Timestamp('2021-06-15 00:00:00') </code></pre> <p>Now get for each column the last games, then join them and get the max per team id</p> <pre><code>&gt;&gt;&gt; last_game_1 = games.groupby('team.id_team1')['game.date'].max() &gt;&gt;&gt; last_game_2 = games.groupby('team.id_team2')['game.date'].max() &gt;&gt;&gt; last_game = pd.concat([last_game_1, last_game_2]).groupby(level=0).max() &gt;&gt;&gt; df = last_game.rename_axis('team.id_team').reset_index() &gt;&gt;&gt; df team.id_team game.date 0 2 2007-11-05 1 3 2007-11-06 2 4 2007-11-04 3 7 2007-11-05 4 18 2007-11-06 5 20 2007-11-03 6 23 2007-11-04 7 28 2007-11-05 </code></pre> <p>Finally get the days between <code>curdate</code> and the date column:</p> <pre><code>&gt;&gt;&gt; df['days_since_last_game'] = (curdate - df['game.date']).dt.days &gt;&gt;&gt; df team.id game.date days_since_last_game 0 2 2007-11-05 4971 1 3 2007-11-06 4970 2 4 2007-11-04 4972 3 7 2007-11-05 4971 4 18 2007-11-06 4970 5 20 2007-11-03 4973 6 23 2007-11-04 4972 7 28 2007-11-05 4971 &gt;&gt;&gt; </code></pre> <p>Now if I understand what you need well, join can join this time since last game back with the input columns you’re interested in:</p> <pre><code>&gt;&gt;&gt; games.merge(df[['team.id_team', 'days_since_last_game']].add_suffix('1'), on='team.id_team1')\ ... .merge(df[['team.id_team', 'days_since_last_game']].add_suffix('2'), on='team.id_team2') team.id_team1 team.id_team2 game.date days_since_last_game1 days_since_last_game2 0 3 23 2007-11-03 4970 4972 1 2 28 2007-11-03 4971 4971 2 18 20 2007-11-03 4970 4973 3 18 3 2007-11-06 4970 4970 4 28 3 2007-11-05 4971 4970 5 23 4 2007-11-04 4972 4972 6 7 2 2007-11-05 4971 4971 </code></pre>
pandas|dataframe
0
4,736
67,800,002
TensorFlow 2.5 random set_seed not working, giving an error
<pre><code>tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' </code></pre> <hr /> <blockquote> <p>TypeError Traceback (most recent call last) in () ----&gt; 1 tf.random.set_seed(1234) 2 print(tf.random.uniform(<a href="https://i.stack.imgur.com/e2JtV.png" rel="nofollow noreferrer">1</a>, seed=1)) # generates 'A1' 3 print(tf.random.uniform(<a href="https://i.stack.imgur.com/e2JtV.png" rel="nofollow noreferrer">1</a>, seed=1)) # generates 'A2' 4 tf.random.set_seed(1234) 5 print(tf.random.uniform(<a href="https://i.stack.imgur.com/e2JtV.png" rel="nofollow noreferrer">1</a>, seed=1)) # generates 'A1'</p> <p>TypeError: 'int' object is not callable</p> </blockquote>
<blockquote> <p>TypeError: 'int' object is not callable</p> </blockquote> <p>Generally you will get above error, if you have assigned some <code>integer</code> to <code>tf.random.set_seed</code> and tried to execute above code in the same session caused this issue.</p> <pre><code>import tensorflow as tf tf.random.set_seed=1234 tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' </code></pre> <p>Output:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-1-5e2ffd56d477&gt; in &lt;module&gt;() 3 tf.random.set_seed=1234 4 ----&gt; 5 tf.random.set_seed(1234) 6 print(tf.random.uniform([1], seed=1)) # generates 'A1' 7 print(tf.random.uniform([1], seed=1)) # generates 'A2' TypeError: 'int' object is not callable </code></pre> <p><strong>Fixed code:</strong></p> <p>You should remove <code>tf.random.set_seed=1234</code> and restart your kernel has solve the issue.</p> <pre><code>import tensorflow as tf tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' tf.random.set_seed(1234) print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' </code></pre> <p>Output:</p> <pre><code>tf.Tensor([0.1689806], shape=(1,), dtype=float32) tf.Tensor([0.7539084], shape=(1,), dtype=float32) tf.Tensor([0.1689806], shape=(1,), dtype=float32) tf.Tensor([0.7539084], shape=(1,), dtype=float32) </code></pre>
python|tensorflow|google-colaboratory
0
4,737
31,943,622
Dropping a series of True/False values in pandas removes the first two lines
<p>Handing the pandas.drop function a list of <code>True</code> and <code>False</code> statements drops the first two lines. Why? Is this a bug?</p> <pre><code>df = pd.DataFrame({"foo":[1,2,3]}) df.drop([False, False, True]) foo 2 3 </code></pre> <p>Also only giving it a list of <code>False</code> will just drop the first line.</p> <pre><code>df = pd.DataFrame({"foo":[1,2,3]}) df.drop([False, False, False]) foo 1 2 2 3 </code></pre>
<h1>Explanation why this happens</h1> <p>No, this is not a bug, it is just a side-effect of that <code>True</code> and <code>False</code> are equal to <code>1</code> and <code>0</code></p> <p>This code:</p> <pre><code>df = pd.DataFrame({&quot;foo&quot;:[1,2,3]}) df.drop([False, False, True]) </code></pre> <p>Is identical to this code:</p> <pre><code>df = pd.DataFrame({&quot;foo&quot;:[1,2,3]}) df.drop([0, 0, 1]) </code></pre> <p>The pandas drop function takes a list of indicies to drop, not a mask.</p> <h1>How to properly use the drop method</h1> <p>The proper way of using masks to drop data is either to mask, then access the index and hand this to the drop function:</p> <pre><code>df.drop(df[[False, False, True]].index) foo 0 1 1 2 </code></pre> <p>Or just by inverted masking:</p> <pre><code>df[~pd.Series([False, False, True])] foo 0 1 1 2 </code></pre>
python|pandas
4
4,738
41,488,676
Python Data Frame: cumulative sum of column until condition is reached and return the index
<p>I am new in Python and am currently facing an issue I can't solve. I really hope you can help me out. English is not my native languge so I am sorry if I am not able to express myself properly.</p> <p>Say I have a simple data frame with two columns:</p> <pre><code>index Num_Albums Num_authors 0 10 4 1 1 5 2 4 4 3 7 1000 4 1 44 5 3 8 Num_Abums_tot = sum(Num_Albums) = 30 </code></pre> <p>I need to do a cumulative sum of the data in <code>Num_Albums</code> until a certain condition is reached. Register the index at which the condition is achieved and get the correspondent value from <code>Num_authors</code>.</p> <p>Example: cumulative sum of <code>Num_Albums</code> until the sum equals 50% ± 1/15 of 30 (--> 15±2):</p> <pre><code>10 = 15±2? No, then continue; 10+1 =15±2? No, then continue 10+1+41 = 15±2? Yes, stop. </code></pre> <p>Condition reached at index 2. Then get <code>Num_Authors</code> at that index: <code>Num_Authors(2)=4</code></p> <p>I would like to see if there's a function already implemented in <code>pandas</code>, before I start thinking how to do it with a while/for loop....</p> <p>[I would like to specify the column from which I want to retrieve the value at the relevant index (this comes in handy when I have e.g. 4 columns and i want to sum elements in column 1, condition achieved =yes then get the correspondent value in column 2; then do the same with column 3 and 4)].</p>
<p><strong><em>Opt - 1:</em></strong></p> <p>You could compute the cumulative sum using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="noreferrer"><code>cumsum</code></a>. Then use <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.isclose.html" rel="noreferrer"><code>np.isclose</code></a> with it's inbuilt tolerance parameter to check if the values present in this series lies within the specified threshold of 15 +/- 2. This returns a boolean array. </p> <p>Through <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.flatnonzero.html" rel="noreferrer"><code>np.flatnonzero</code></a>, return the ordinal values of the indices for which the <code>True</code> condition holds. We select the first instance of a <code>True</code> value.</p> <p>Finally, use <code>.iloc</code> to retrieve value of the column name you require based on the index computed earlier.</p> <pre><code>val = np.flatnonzero(np.isclose(df.Num_Albums.cumsum().values, 15, atol=2))[0] df['Num_authors'].iloc[val] # for faster access, use .iat 4 </code></pre> <p>When performing <code>np.isclose</code> on the <code>series</code> later converted to an array:</p> <pre><code>np.isclose(df.Num_Albums.cumsum().values, 15, atol=2) array([False, False, True, False, False, False], dtype=bool) </code></pre> <p><strong><em>Opt - 2:</em></strong></p> <p>Use <a href="https://pandas-docs.github.io/pandas-docs-travis/generated/pandas.Index.get_loc.html" rel="noreferrer"><code>pd.Index.get_loc</code></a> on the <code>cumsum</code> calculated series which also supports a <code>tolerance</code> parameter on the <code>nearest</code> method.</p> <pre><code>val = pd.Index(df.Num_Albums.cumsum()).get_loc(15, 'nearest', tolerance=2) df.get_value(val, 'Num_authors') 4 </code></pre> <p><strong><em>Opt - 3:</em></strong></p> <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="noreferrer"><code>idxmax</code></a> to find the first index of a <code>True</code> value for the boolean mask created after <code>sub</code> and <code>abs</code> operations on the <code>cumsum</code> series:</p> <pre><code>df.get_value(df.Num_Albums.cumsum().sub(15).abs().le(2).idxmax(), 'Num_authors') 4 </code></pre>
python|pandas|dataframe|sum
7
4,739
41,645,365
read_csv import large numbers
<p>A sample CSV file has contents like </p> <pre><code>Afghanistan,AFG,8013233121.55065,8689883606.07776,8781610175.40574 </code></pre> <p>When I import this with</p> <pre><code>GDP = pd.read_csv('world_bank.csv', header=4, usecols=fields) </code></pre> <p>I get the numbers in scientific notation. </p> <pre><code>Afghanistan,AFG,3.992331e+12,4.559041e+12 </code></pre> <p>What is the correct converter to use?</p>
<p>Those numbers are floats. You are just seeing what is displayed.</p> <p>Consider the following</p> <pre><code>txt = """Afghanistan,AFG,8013233121.55065,8689883606.07776,8781610175.40574""" GDP = pd.read_csv(StringIO(txt), header=None, converters={i:np.float128 for i in [2,3,4]}) print(GDP) 0 1 2 3 4 0 Afghanistan AFG 8.013233e+09 8.689884e+09 8.781610e+09 </code></pre> <hr> <p>However, a closer look at one cell</p> <pre><code>GDP.iloc[0, 2] 8013233121.5506496 </code></pre> <hr> <p>To <code>print</code> with a float format you like you can. <code>pd.set_option('display.float_format', '{:0.6f}'.format)</code> or do it temporarily</p> <pre><code>with pd.option_context('display.float_format', '{:0.6f}'.format): print(GDP) 0 1 2 3 4 0 Afghanistan AFG 8013233121.550650 8689883606.077761 8781610175.405741 </code></pre>
python|pandas
3
4,740
41,643,044
What is the difference between tf.train.MonitoredTrainingSession and tf.train.Supervisor
<p>I'm wondering what is the difference between these 2 tensorflow object when used to train a neural networks ?</p>
<p>Supervisor is on the way to being deprecated, and new users are encouraged to use the tf.train.MonitoredSession classes (such as tf.train.MonitoredTrainingSession) instead (from <a href="https://github.com/tensorflow/tensorflow/issues/6604#issuecomment-270950324" rel="noreferrer">this comment</a>).</p>
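<p>For reference, a minimal sketch (TF 1.x API; the checkpoint directory and stop step are placeholders) of training with <code>tf.train.MonitoredTrainingSession</code>, which takes over the initialization, checkpointing and recovery duties that <code>Supervisor</code> used to handle:</p> <pre><code>import tensorflow as tf

# graph construction (train_op, loss, global_step, ...) is assumed to happen here

hooks = [tf.train.StopAtStepHook(last_step=10000)]  # stop after a fixed number of steps

with tf.train.MonitoredTrainingSession(checkpoint_dir='/tmp/train_logs',
                                       hooks=hooks) as sess:
    while not sess.should_stop():
        sess.run(train_op)  # train_op is the graph's training operation
</code></pre>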
tensorflow
8
4,741
61,579,248
Using transformers-cli on Windows?
<p>I can't figure out how to use transformers-cli on Windows. I got it working on Google Colab, and am using it in the meantime.</p> <p>[EDIT]</p> <p>Here's the process that I'm going through, what I expect, and what is happening:</p> <p><strong>I'm on a Windows System (brackets are the exact commands I'm typing into CMD)</strong></p> <ol> <li>I install transformers==2.8.0 (pip install transformers==2.8.0)</li> <li>I try to run transformers-cli as explained on Huggingface's website (transformers-cli) <a href="https://huggingface.co/transformers/model_sharing.html" rel="nofollow noreferrer">https://huggingface.co/transformers/model_sharing.html</a></li> </ol> <p><strong>I get:</strong> </p> <pre><code>'transformers-cli' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>I don't know if I have to add some directory to my PATH or perhaps the CLI isn't available on Windows?</p> <p><strong>I repeat the exact same process on Google Colab, and it works as expected. I get:</strong></p> <pre><code>usage: transformers-cli &lt;command&gt; [&lt;args&gt;] positional arguments: {convert,download,env,run,serve,login,whoami,logout,s3,upload} transformers-cli command helpers convert CLI tool to run convert model from original author checkpoints to Transformers PyTorch checkpoints. run Run a pipeline through the CLI serve CLI tool to run inference requests through REST and GraphQL endpoints. login Log in using the same credentials as on huggingface.co whoami Find out which huggingface.co account you are logged in as. logout Log out s3 {ls, rm} Commands to interact with the files you upload on S3. upload Upload a model to S3. optional arguments: -h, --help show this help message and exit </code></pre>
<p>All you have to do is locate the script and launch it; it won't be added to the PATH automatically. In my Python installation on Windows 10 (plain Python, not Anaconda), it was installed in the <code>Scripts</code> folder of the interpreter directory. You have to launch it with the Python interpreter because Windows, as far as I know, doesn't support shebangs.</p> <pre><code>cd YOURPYTHONINTERPRETERDIRECTORY\Scripts
python.exe transformers-cli login
</code></pre> <p>You can define a <a href="https://superuser.com/questions/560519/how-to-set-an-alias-in-windows-command-line">macro</a> to shortcut transformers-cli.</p>
huggingface-transformers
3
4,742
68,526,277
How to insert rows depending on previous row meeting conditions
<p>I have a simplified file (foo.csv),</p> <p><strong>Contents of foo:</strong></p> <pre><code>['MyNum', 'Cycle', 'Line', 'V1', 'V2', 'T1'] ['1', 'C', '1', '6.7', '25.6', '90'] ['3', 'A', '1', '5.8', '22.5', '89.9'] ['3', 'A', '2', '5.8', '24.2', '90'] ['3', 'A', '3', '5.8', '25.4', '90'] ['5', 'B', '1', '6', '25.3', '89.9'] ['5', 'B', '2', '6.3', '23.8', '89.9'] ['7', 'C', '1', '7.1', '24', '89.9'] ['7', 'C', '2', '9999', '9111', '9333'] ['7', 'C', '3', '9999', '9111', '9333'] </code></pre> <p>What I want to have is 3 rows each having the same first item (MyNum) but their third item (Line) incrementing from 1 to 3. So if I only have 1 or 2 rows with that MyNum first term value I need to insert either one or two rows, each of which is the same as the row above it except for the Line term which should increment.</p> <p><strong>Desired output:</strong></p> <pre><code>['MyNum', 'Cycle', 'Line', 'V1', 'V2', 'T1'] ['1', 'C', '1', '6.7', '25.6', '90'] ['1', 'C', '2', '6.7', '25.6', '90'] ['1', 'C', '3', '6.7', '25.6', '90'] ['3', 'A', '1', '5.8', '22.5', '89.9'] ['3', 'A', '2', '5.8', '24.2', '90'] ['3', 'A', '3', '5.8', '25.4', '90'] ['5', 'B', '1', '6', '25.3', '89.9'] ['5', 'B', '2', '6.3', '23.8', '89.9'] ['5', 'B', '3', '6.3', '23.8', '89.9'] ['7', 'C', '1', '7.1', '24', '89.9'] ['7', 'C', '2', '9999', '9111', '9333'] ['7', 'C', '3', '9999', '9111', '9333'] </code></pre> <p><strong>Code</strong></p> <pre><code>import csv data = pd.read_csv('foo.csv') df = pd.DataFrame(data) print('\n'*5) print(df[&quot;MyNum&quot;]) &quot;&quot;&quot; for i in df[&quot;MyNum&quot;]: if i+1 = i print(i) &quot;&quot;&quot; with open('foo.csv', 'r') as f_in, open('__fooOut.csv', 'w') as f_out: # this creates a new output file in write mode reader = csv.reader(f_in, delimiter=',') # modify for your file writer = csv.writer(f_out, delimiter=',') # modify for your file num = 0 num_count = 3 while num_count &gt; 0: for row in reader: print(row) &quot;&quot;&quot; Manual method The first item in the first row is 1 and a third item 1. The following three rows have a first item of 3 and their third items go from 1, 2 to 3. The following two rows have a first item of 5 and their third items go from 1 to 2. The following 3 rows have a first item of 7 and their third items go from 1, 2 to 3. What should happen is there should be three rows each having the same first item, and when they do their third item (row[2] or &quot;Line&quot;) should increment and be either a 1, 2 or 3. When there is not two rows with the same first item as the row above a new row should be inserted immediately below the row with the same details as the row above except for the third item. &quot;&quot;&quot; </code></pre> <p>I don't know how to do this, whether the approach should be a dataframe or not, nor how to check that the next row's first item is equal to the row under test.</p>
<p>If input data are integers for <code>Line</code> column use custom lambda function with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with <code>method='ffill'</code> for forward filling not existed values in range <code>1,4</code> like:</p> <pre><code>df = pd.read_csv('foo.csv') f = lambda x: x.reindex(range(1, 4), method='ffill') df = (df.set_index('Line') .groupby(['MyNum','Cycle']) .apply(f) .drop(['MyNum','Cycle'], 1) .reset_index()) </code></pre> <hr /> <pre><code>print (df) MyNum Cycle Line V1 V2 T1 0 1 C 1 6.7 25.6 90.0 1 1 C 2 6.7 25.6 90.0 2 1 C 3 6.7 25.6 90.0 3 3 A 1 5.8 22.5 89.9 4 3 A 2 5.8 24.2 90.0 5 3 A 3 5.8 25.4 90.0 6 5 B 1 6.0 25.3 89.9 7 5 B 2 6.3 23.8 89.9 8 5 B 3 6.3 23.8 89.9 9 7 C 1 7.1 24.0 89.9 10 7 C 2 9999.0 9111.0 9333.0 11 7 C 3 9999.0 9111.0 9333.0 </code></pre>
python|pandas|dataframe|numpy|csv
2
4,743
68,556,648
How to resolve ValueError: cannot reindex from a duplicate axis
<p><strong>Input</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Client</th> <th>First name</th> <th>Last Name</th> <th>Start Date</th> <th>End Date</th> <th>Amount</th> <th>Invoice Date</th> </tr> </thead> <tbody> <tr> <td>XXX</td> <td>John</td> <td>Kennedy</td> <td>15-01-2021</td> <td>28-02-2021</td> <td>137,586.00</td> <td>20-04-2021</td> </tr> <tr> <td>YYY</td> <td>Peter</td> <td>Paul</td> <td>7-02-2021</td> <td>31-03-2021</td> <td>38,750.00</td> <td>20-04-2021</td> </tr> <tr> <td>ZZZ</td> <td>Michael</td> <td>K</td> <td>10-03-2021</td> <td>29-04-2021</td> <td>137,586.00</td> <td>30-04-2021</td> </tr> </tbody> </table> </div> <p><strong>Code</strong></p> <pre><code>df = pd.read_excel ('file.xlsx',parse_dates=['Start Date','End Date'] ) df['Start Date'] = pd.to_datetime(df['Start Date'],format='%d-%m-%Y') df['End Date'] = pd.to_datetime(df['End Date'],format='%d-%m-%Y') df['r'] = df.apply(lambda x: pd.date_range(x['Start Date'],x['End Date']), axis=1) df = df.explode('r') print(df) months = df['r'].dt.month starts, ends = months.ne(months.groupby(level=0).shift(1)), months.ne(months.groupby(level=0).shift(-1)) df2 = pd.DataFrame({'First Name': df['First name'], 'Start Date': df.loc[starts, 'r'].dt.strftime('%Y-%m-%d'), 'End Date': df.loc[ends, 'r'].dt.strftime('%Y-%m-%d'), 'Date Diff': df.loc[ends, 'r'].dt.strftime('%d').astype(int)-df.loc[starts, 'r'].dt.strftime('%d').astype(int)+1}) df = df.loc[~df.index.duplicated(), :] df2 = pd.merge(df, df2, left_index=True, right_index=True) df2['Amount'] = df['Amount'].mul(df2['Date_Diff']) print(df['Amount']) print (df) df.to_excel('report.xlsx', index=True) </code></pre> <p><strong>Error</strong> ValueError: cannot reindex from a duplicate axis</p> <p><strong>Expected output</strong></p> <p><a href="https://i.stack.imgur.com/qd2OC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qd2OC.png" alt="enter image description here" /></a></p> <p>how to resolve this issue?</p>
<p>Start with some correction in your input Excel file, namely change <em>First name</em> to <em>First Name</em> - with <strong>capital</strong> &quot;N&quot;, just like in other columns.</p> <p>Then, to read your Excel file, it is enough to run:</p> <pre><code>df = pd.read_excel('Input.xlsx', parse_dates=['Start Date', 'End Date', 'Invoice Date'], dayfirst=True) </code></pre> <p>No need to call <em>to_datetime</em>.</p> <p>Note also that since <em>Invoice Date</em> contains also dates, I added this column to <em>parse_dates</em> list.</p> <p>Then define two functions:</p> <ol> <li><p>A function to get monthly data for the current row:</p> <pre><code>def getMonthData(grp, amnt, dayNo): return pd.Series([grp.min(), grp.max(), amnt * grp.size / dayNo], index=['Start Date', 'End Date', 'Amount']) </code></pre> <p>It converts the input Series of dates (for a single month) into the &quot;new&quot; content of the output rows (start / end dates and the proper share of the total amount, to be accounted for this month).</p> <p>It will be called in the following function.</p> </li> <li><p>A function to &quot;explode&quot; the current row:</p> <pre><code>def rowExpl(row): ind = pd.date_range(row['Start Date'], row['End Date']).to_series() rv = ind.groupby(pd.Grouper(freq='M')).apply(getMonthData, amnt=row.Amount, dayNo=ind.size).unstack().reset_index(drop=True) rv.insert(0, 'Client', row.Client) rv.insert(1, 'First Name', row['First Name']) rv.insert(2, 'Last Name', row['Last Name']) return rv.assign(**{'Invoice Date': row['Invoice Date']}) </code></pre> </li> </ol> <p>And the last step is to get the result. Apply <em>rowExpl</em> to each row and concatenate the partial results into a single output DataFrame:</p> <pre><code>result = pd.concat(df.apply(rowExpl, axis=1).values, ignore_index=True) </code></pre> <p>The result, for your data sample is:</p> <pre><code> Client First Name Last Name Start Date End Date Amount Invoice Date 0 XXX John Kennedy 2021-01-15 2021-01-31 51976.9 2021-04-20 1 XXX John Kennedy 2021-02-01 2021-02-28 85609.1 2021-04-20 2 YYY Peter Paul 2021-02-07 2021-02-28 16084.9 2021-04-20 3 YYY Peter Paul 2021-03-01 2021-03-31 22665.1 2021-04-20 4 ZZZ Michael K 2021-03-10 2021-03-31 59350.8 2021-04-30 5 ZZZ Michael K 2021-04-01 2021-04-29 78235.2 2021-04-30 </code></pre> <p>Don't be disaffected by seemingly too low precision of <em>Amount</em> column. It is only the way how <em>Jupyter Notebook</em> displays the DataFrame.</p> <p>When you run <code>result.iloc[0, 5]</code>, you will get:</p> <pre><code>51976.933333333334 </code></pre> <p>with full, <strong>actually</strong> held precision.</p>
python|pandas|dataframe|numpy|date
1
4,744
36,627,362
Converting a matrix into an image in Python
<p>I would like to save a numpy matrix as a .png image, but I cannot do so using the matplotlib since I would like to retain its original size (which apperently the matplotlib doesn't do so since it adds the scale and white background etc). Anyone knows how I can go around this problem using numpy or the PIL please? Thanks </p>
<p>Solved using the scipy library:</p> <pre><code>import scipy.misc
# ... (build the array)
scipy.misc.imsave(name, array, format)
</code></pre> <p>or</p> <pre><code>scipy.misc.imsave('name.ext', array)
</code></pre> <p>where <code>ext</code> is the extension and hence determines the format in which the image will be stored.</p>
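<p>Note that <code>scipy.misc.imsave</code> was deprecated and has been removed from recent SciPy releases. A hedged alternative sketch (assuming a 2-D <code>uint8</code> matrix) that also keeps the original pixel dimensions, using Pillow:</p> <pre><code>import numpy as np
from PIL import Image

matrix = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # example data

# fromarray keeps the array's exact width/height; no axes or margins are added
Image.fromarray(matrix).save('name.png')
</code></pre>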
python|image|numpy|matrix|matplotlib
1
4,745
65,797,708
How to iterate over rows and index in a DataFrame in Pandas to filter boolean values
<p>I am working in a project to find anomalies though some stock market tickers, fishing abnormal volumes... I'm struggling to filters the True values(those pass in the 'filter'). The main objective is create a data frame with the tickers that passed on the ' stats filter'.</p> <pre><code>import numpy as np import pandas as pd from pandas_datareader import data as web </code></pre> <p>Get data frame</p> <pre><code>tickers = ['F', 'GE', 'GM','TSLA'] data = pd.DataFrame() for t in tickers: data[t] = web.DataReader(t, data_source='yahoo', start='2020-1-1')['Volume'] </code></pre> <p>Stats filters</p> <pre><code>data_std = data.std() data_mean = data.mean() anomaly_cut_off = data_std * 3 upper_limit = data_mean + anomaly_cut_off </code></pre> <p>Data frame with boolean values (True or False)</p> <pre><code>outlier = data &gt; upper_limit </code></pre> <p>Anomalies should be a data frame with the DATE(index) and the ticker ('F', 'GE', 'GM','TSLA') just if is True... The code below worked if i change the pd to np.array(data), but just with one tickers.</p> <pre><code>anomalies = [] for outlier in data: if outlier &gt; upper_limit: anomalies.append(outlier) return anomalies </code></pre>
<p>If you want to return the rows where at least one of your tickers is <code>True</code>, this works:</p> <pre><code>outlier[outlier.any(axis=1)]
</code></pre>
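<p>If instead you need the individual date/ticker pairs that exceeded the limit (closer to the "anomalies" list in the question), a hedged sketch using the same <code>outlier</code> frame would be:</p> <pre><code># stack turns the boolean frame into a Series indexed by (Date, ticker)
flagged = outlier.stack()
flagged = flagged[flagged]          # keep only the True cells

anomalies = flagged.reset_index()   # Date and ticker become ordinary columns
</code></pre>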
python|pandas|dataframe|yahoo-finance|outliers
0
4,746
63,507,640
Dividing values in a dataframe based on another value in dataframe
<p>I'm trying to create a column in a dataframe that is a result of dividing values based on another value in the dataframe.</p> <p>So this means that i would like to divide the SCI value that has a corrosponding Temp value between 19.5 and 20.5 and equal chainage.</p> <p>I created a small dataframe that could help solve the problem.</p> <pre><code>data = {'Chainage':[10,20,30,10,20,30,10,20,30], 'SCI':[123, 45, 19, 18, 36, 125, 54, 78,85], 'Temp':[20.4,35,16,22,20.1,19.8,18,21,28]} df = pd.DataFrame(data) </code></pre> <p>The dataframe:</p> <pre><code> Chainage SCI Temp 0 10 123 20.4 1 20 45 35.0 2 30 19 16.0 3 10 18 22.0 4 20 36 20.1 5 30 125 19.8 6 10 54 18.0 7 20 78 21.0 8 30 85 28.0 </code></pre> <p>Here is the end result as it should be. Grouped by the chainage, and then the SCI values with a Temp between 19.5 and 20.5 is used to divide with the others in the group. I have tried to illustrate below:</p> <pre><code> Chainage SCI Temp f 0 10 123 20.4 123/123 = 1 3 10 18 22.0 123/18 = 6.8 6 10 54 18.0 123/54 = 2.2 7 20 78 21.0 36/78 = 0.4 1 20 45 35.0 36/45 = 0.8 4 20 36 20.1 36/36 = 1 2 30 19 16.0 6.6 5 30 125 19.8 1 8 30 85 28.0 1.5 </code></pre> <p>I have been trying to use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html" rel="nofollow noreferrer">groupby</a> but gets stuck when adding the extra conditioning. Any help is appreciated.</p>
<p>I think you need to break this up into a couple of steps.</p> <ol> <li>Compute your &quot;base temperatures&quot;</li> <li>merge the base temps into the main dataframe</li> <li>do the division</li> <li>clean up the extra columns.</li> </ol> <p>That looks like this:</p> <pre class="lang-py prettyprint-override"><code>import pandas data = { 'Chainage':[10,20,30,10,20,30,10,20,30], 'SCI':[123, 45, 19, 18, 36, 125, 54, 78,85], 'Temp':[20.4,35,16,22,20.1,19.8,18,21,28] } df = pandas.DataFrame(data) base_temp = ( df.loc[df['Temp'].between(19.5, 20.5)] .groupby('Chainage', as_index=False) .first() .drop(columns=['SCI']) ) </code></pre> <p>The <code>base_temp</code> dataframe looks like this:</p> <pre><code> Chainage Temp 0 10 20.4 1 20 20.1 2 30 19.8 </code></pre> <p>We queried out the rows where the temperature was in the correct range, but then did a group-by/first to ensure didn't have any duplicate Chainage values.</p> <p>Now we can do everything else:</p> <pre class="lang-py prettyprint-override"><code> result = ( df.merge(base_temp, on='Chainage', how='left', suffixes=('', '_base')) .assign(f=lambda df: df['Temp_base'] / df['Temp']) .drop(columns=['Temp_base']) ) </code></pre> <p>Which gives you:</p> <pre><code> Chainage SCI Temp f 0 10 123 20.4 1.000000 1 20 45 35.0 0.574286 2 30 19 16.0 1.237500 3 10 18 22.0 0.927273 4 20 36 20.1 1.000000 5 30 125 19.8 1.000000 6 10 54 18.0 1.133333 7 20 78 21.0 0.957143 8 30 85 28.0 0.707143 </code></pre>
python|dataframe|pandas-groupby
1
4,747
63,452,968
ModuleNotFoundError: No module named 'dask.dataframe'; 'dask' is not a package
<p>For a current project, I am planning to merge two very large CSV files with Dask as an alternative to Pandas. I have installed Dask thorough <code>pip install &quot;dask[dataframe]&quot;</code>.</p> <p>When running <code>import dask.dataframe as dd</code>, I am however receiving the feedback <code>ModuleNotFoundError: No module named 'dask.dataframe'; 'dask' is not a package</code>.</p> <p>Several users seem to have had the same problem and were recommneded to install the module via Conda, which has not helped either in my case.</p> <p>What is the reason for the module not being found?</p>
<p>As user John Gordon mentioned, the reason for the error is that a file in the same folder was named <code>dask.py</code>, which shadows the installed package. Renaming the file solved the problem within seconds.</p> <p>As a general rule: it is advisable not to give .py files the same names as the Python modules you intend to import.</p>
python|pandas|dask|dask-dataframe
4
4,748
21,855,775
numpy.i is missing. What is the recommended way to install it?
<p>I am writing a C++ library which can be called from both C++ and Python by using SWIG-Python interface. I would like to make a few functions in the library to return numpy array when they are used in Python.</p> <p>The SWIG documentation [1] says that <code>numpy.i</code> located under <code>numpy/docs/swig</code> can be used for this purpose. But I cannot find this directory on the following systems.</p> <ul> <li>Scientific Linux 6.4 (RHEL 6.4 clone) + Python 2.6 + NumPy 1.4 (installed via <code>yum</code>)</li> <li>OS X Mavericks + Python 2.7 + NumPy 1.8 (via <code>easy_install</code>)</li> <li>OS X Mavericks + Python 2.7 + NumPy 1.8 (built from the source <code>python setup.py install</code>)</li> </ul> <p>There exists <code>numpy.i</code> under <code>numpy-1.8.0/doc/swig</code> if I get the .tar.gz source code from the NumPy site. But this file is not automatically installed when <code>python setup.py install</code> is executed.</p> <p><strong>So I would like to know what the best or recommended way to install <code>numpy.i</code> on my system is.</strong></p> <p>As I distribute this library to my colleagues, putting <code>numpy.i</code> in my code might be an easy solution. But I am concerning about version mismatch with their NumPy.</p> <p>[1] <a href="http://docs.scipy.org/doc/numpy/reference/swig.interface-file.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/swig.interface-file.html</a></p>
<p>The safest option is probably just to bundle a copy of <code>numpy.i</code> with your project, as the file is not currently installed by Numpy itself.</p> <p>The <code>numpy.i</code> file is written using Numpy's C-API, so the backward compatibility questions are the same as if you wrote the corresponding C code by hand.</p>
python|numpy|swig
4
4,749
21,498,474
using numpy to randomly distribute DNA sequence reads over genomic features
<p>Hi I have written a script that randomly shuffles read sequences over the gene they were mapped to. This is useful if you want to determine if a peak that you observe over your gene of interest is statistically significant. I use this code to calculate False Discovery Rates for peaks in my gene of interest. Below the code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt iterations = 1000 # number of times a read needs to be shuffled featurelength = 1000 # length of the gene a = np.zeros((iterations,featurelength)) # create a matrix with 1000 rows of the feature length b = np.arange(iterations) # a matrix with the number of iterations (0-999) reads = np.random.randint(10,50,1000) # a random dataset containing an array of DNA read lengths </code></pre> <p>Below the code to fill the large matrix (a):</p> <pre><code>for i in reads: # for read with read length i r = np.random.randint(-i,featurelength-1,iterations) # generate random read start positions for the read i for j in b: # for each row in a: pos = r[j] # get the first random start position for that row if pos &lt; 0: # start position can be negative because a read does not have to completely overlap with the feature a[j][:pos+i]+=1 else: a[j][pos:pos+i]+=1 # add the read to the array and repeat </code></pre> <p>Then generate a heat map to see if the distribution is roughly even:</p> <pre><code>plt.imshow(a) plt.show() </code></pre> <p>This generates the desired result but it is very slow because of the many for loops. I tried to do fancy numpy indexing but I constantly get the "too many indices error".</p> <p>Anybody have a better idea of how to do this?</p>
<p>Fancy indexing is a bit tricky, but still possible:</p> <pre><code>for i in reads: r = np.random.randint(-i,featurelength-1,iterations) idx = np.clip(np.arange(i)[:,None]+r, 0, featurelength-1) a[b,idx] += 1 </code></pre> <p>To deconstruct this a bit, we're:</p> <ol> <li><p>Creating a simple index array as a column vector, from 0 to i: <code>np.arange(i)[:,None]</code></p></li> <li><p>Adding each element from <code>r</code> (a row vector), which broadcasts to make a matrix of size <code>(i,iterations)</code> with the correct offsets into the columns of <code>a</code>.</p></li> <li><p>Clamping the indices to the range <code>[0,featurelength)</code>, via <code>np.clip</code>.</p></li> <li><p>Finally, we fancy-index <code>a</code> for each row (<code>b</code>) and the relevant columns (<code>idx</code>).</p></li> </ol>
python|random|numpy|sequence
0
4,750
30,039,988
How to make a binary decomposition of pandas.Series column
<p>I want to decompose a <code>pandas.Series</code> into several other columns (number of column = number of values), save that factorization and use it with other <code>DataFrame</code> or <code>Series</code>. Something like <code>pandas.get_dummies</code> which will remember mapping and can handle <code>NaN</code>.</p> <p>Example.<br> Given the following <code>DataFrame</code>:</p> <p><code> A B 0 a 0 1 b 1 2 a 2 3 c 3 </code></p> <p>I want to have a decomposition of series <code>A</code> into:</p> <p><code> A_a A_b A_c B 0 1 0 0 0 1 0 1 0 1 2 1 0 0 2 3 0 0 1 3 </code></p> <p>Then I want to save that factorization and apply it to other <code>DataFrame</code> (look input doesn't have <em>c</em> values in column A):</p> <p><code> A B A_a A_b A_c B 0 a 0 0 1 0 0 0 1 a 1 -&gt; 1 1 0 0 1 2 b 2 2 0 1 0 2 </code></p> <p>Is there any automatic way for such thing? I can do it manually. I was trying <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder" rel="nofollow">scikit-learn LabelEncoder</a> but it doesn't handle <code>NaN</code>s. I want to use it for classification models.</p>
<p>I don't think there is a way to do this automatically:</p> <pre><code>In [11]: res = df.pop("A").str.get_dummies() # Note: pop removes column A from df In [12]: res.columns = res.columns.map(lambda x: "A_" + x) In [13]: res Out[13]: A_a A_b A_c 0 1 0 0 1 0 1 0 2 1 0 0 3 0 0 1 In [14]: res.join(df) Out[14]: A_a A_b A_c B 0 1 0 0 0 1 0 1 0 1 2 1 0 0 2 3 0 0 1 3 </code></pre> <hr> <p>To standardize, I would use <code>reindex_axis</code> on the columns you want. i.e. to force df2 to have the columns of df1.</p> <pre><code>df2.reindex_axis(df1.columns, axis=1, fill_value=0) </code></pre>
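<p>To apply the same encoding to a second DataFrame (so that the <code>c</code> column still appears even when no <code>c</code> values are present), the modern replacement for <code>reindex_axis</code> - which has since been removed from pandas - is a plain <code>reindex</code>. A small sketch, assuming <code>df2</code> is the new frame and <code>res</code> holds the columns learned above:</p> <pre><code>res2 = df2.pop("A").str.get_dummies()
res2.columns = res2.columns.map(lambda x: "A_" + x)
res2 = res2.reindex(columns=res.columns, fill_value=0)  # force the columns learned on df1
df2 = res2.join(df2)
</code></pre>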
python|pandas|machine-learning|scikit-learn
1
4,751
53,565,544
Delimiting a region of a numpy matrix
<p>I have a 560x560 numpy matrix, which I want to convert to a 28x28 one. <br> Therefore, I want to subdivide it into regions of size 20x20 (560 / 28 = 20), calculate the mean of each such region and put that value in a new matrix. <br> <br></p> <p>Now I have:</p> <pre><code>import numpy as np

oldMat = ...                   # I load the 560x560 matrix
newMat = np.zeros((28, 28))    # Initializes the new matrix of size 28x28

for i in range(0, 560, 20):
    for j in range(0, 560, 20):        # Loops over the top left corner of each region
        total = 0
        for di in range(20):
            for dj in range(20):       # Loops over the indices of the elements in each region
                total += oldMat[i + di, j + dj]
        mean = total / 400             # Calculates the mean of the elements of each region
        newMat[i // 20][j // 20] = mean
</code></pre> <p><br> Is there a faster way to do this? (I'm sure there is.)</p>
<p>If you simply want to reshape your matrix from <code>2D --&gt; 4D</code>, then you can use <code>np.reshape()</code>:</p> <pre><code>import numpy as np np.random.seed(0) data = np.random.randint(0,5,size=(6,6)) </code></pre> <p>Yields:</p> <pre><code>[[4 0 3 3 3 1] [3 2 4 0 0 4] [2 1 0 1 1 0] [1 4 3 0 3 0] [2 3 0 1 3 3] [3 0 1 1 1 0]] </code></pre> <p>Then reshape:</p> <pre><code>data.reshape((3,3,2,2)) </code></pre> <p>Returns:</p> <pre><code>[[[[4 0] [3 3]] [[3 1] [3 2]] [[4 0] [0 4]]] [[[2 1] [0 1]] [[1 0] [1 4]] [[3 0] [3 0]]] </code></pre>
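<p>For the block-averaging the question describes (560 / 28 = 20, so 20x20 regions), a minimal vectorised sketch using only a reshape and a mean over the block axes could look like this (assuming <code>oldMat</code> is the 560x560 array; random data is used here as a stand-in):</p> <pre><code>import numpy as np

oldMat = np.random.rand(560, 560)          # stand-in for the real data

blocks = oldMat.reshape(28, 20, 28, 20)    # split both axes into 28 groups of 20
newMat = blocks.mean(axis=(1, 3))          # average inside each 20x20 block

print(newMat.shape)                        # (28, 28)
</code></pre> <p>The reshape works because consecutive rows and columns end up grouped together, so <code>blocks[i, di, j, dj]</code> corresponds to <code>oldMat[20*i + di, 20*j + dj]</code>.</p>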
python|numpy|matrix
0
4,752
53,464,487
How to efficiently convert a subdictionary into matrix in python
<p>I have a dictionary like this:</p> <pre><code>{'test2':{'hi':4,'bye':3}, 'religion.christian_20674': {'path': 1, 'religious': 1, 'hi':1}} </code></pre> <p>the value of this dictionary is itself a dictionary.</p> <p>what my output should look like:</p> <p><a href="https://i.stack.imgur.com/4EmsG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4EmsG.png" alt="enter image description here"></a></p> <p>how can I do that efficiently?</p> <p>I have read <a href="https://stackoverflow.com/questions/39113110/converting-a-dictionary-into-a-square-matrix">this</a> post, which the shape of matrix is different from mine.</p> <p><a href="https://stackoverflow.com/questions/37862139/convert-dictionary-to-sparse-matrix">this</a> one was closest to my case, but it had a set inside the dictionary not another dictionary.</p> <p>the thing that is different in my question is that I want also conver the value of the inside dictionary as the values of the matrix.</p> <p>I was thinking something like this:</p> <pre><code>doc_final =[[]] for item in dic1: for item2, value in dic1[item]: doc_final[item][item2] = value </code></pre> <p>but it wasnt the correct way.</p> <p>Thanks for your help :)</p>
<p>Using the pandas library you can easily turn your dictionary into a matrix.</p> <p>Code:</p> <pre><code>import pandas as pd d = {'test2':{'hi':4,'bye':3}, 'religion.christian_20674': {'path': 1, 'religious': 1, 'hi':1}} df = pd.DataFrame(d).T.fillna(0) print(df) </code></pre> <p>Output:</p> <pre><code> bye hi path religious test2 3.0 4.0 0.0 0.0 religion.christian_20674 0.0 1.0 1.0 1.0 </code></pre>
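<p>If an actual numpy array (rather than a DataFrame) is needed afterwards, one extra step should do it; <code>df.values</code> is the older equivalent of <code>to_numpy()</code>:</p> <pre><code>matrix = df.to_numpy()                           # plain 2-D numpy array of the values
rows, cols = list(df.index), list(df.columns)    # keep the labels separately
</code></pre>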
python|arrays|numpy|dictionary|matrix
2
4,753
53,684,017
Replace specific words by user dictionary and others by 0
<p>So I have a review dataset having reviews like </p> <blockquote> <p>Simply the best. I bought this last year. Still using. No problems faced till date.Amazing battery life. Works fine in darkness or broad daylight. Best gift for any book lover.</p> </blockquote> <p>(This is from the original dataset, I have removed all punctuation and have all lower case in my processed dataset)</p> <p>What I want to do is replace some words by 1(as per my dictionary) and others by 0. My dictionary is </p> <pre><code>dict = {"amazing":"1","super":"1","good":"1","useful":"1","nice":"1","awesome":"1","quality":"1","resolution":"1","perfect":"1","revolutionary":"1","and":"1","good":"1","purchase":"1","product":"1","impression":"1","watch":"1","quality":"1","weight":"1","stopped":"1","i":"1","easy":"1","read":"1","best":"1","better":"1","bad":"1"} </code></pre> <p>I want my output like:</p> <pre><code>0010000000000001000000000100000 </code></pre> <p>I have used this code:</p> <pre><code>df['newreviews'] = df['reviews'].map(dict).fillna("0") </code></pre> <p>This always returns 0 as output. I did not want this so I took 1s and 0s as strings, but despite that I'm getting the same result. Any suggestions how to solve this?</p>
<p>First, don't use <code>dict</code> as a variable name, because it shadows the built-in <code>dict</code> type. Then use a <code>list comprehension</code> with <code>get</code> to replace non-matched values with <code>0</code>.</p> <p><strong>Notice</strong>:</p> <p><em>If the data contain strings like <code>date.Amazing</code> - no space after the punctuation - it is necessary to replace the punctuation with whitespace first.</em></p> <pre><code>df = pd.DataFrame({'reviews':['Simply the best. I bought this last year. Still using. No problems faced till date.Amazing battery life. Works fine in darkness or broad daylight. Best gift for any book lover.']})

d = {"amazing":"1","super":"1","good":"1","useful":"1","nice":"1","awesome":"1","quality":"1","resolution":"1","perfect":"1","revolutionary":"1","and":"1","good":"1","purchase":"1","product":"1","impression":"1","watch":"1","quality":"1","weight":"1","stopped":"1","i":"1","easy":"1","read":"1","best":"1","better":"1","bad":"1"}

df['reviews'] = df['reviews'].str.replace(r'[^\w\s]+', ' ').str.lower()
</code></pre> <hr> <pre><code>df['newreviews'] = [''.join(d.get(y, '0') for y in x.split()) for x in df['reviews']]
</code></pre> <p>Alternative:</p> <pre><code>df['newreviews'] = df['reviews'].apply(lambda x: ''.join(d.get(y, '0') for y in x.split()))
</code></pre> <hr> <pre><code>print (df)
                                             reviews  \
0  simply the best  i bought this last year  stil...   

                        newreviews  
0  0011000000000001000000000100000  
</code></pre>
python|python-3.x|pandas|dictionary|dataframe
1
4,754
53,643,746
Create Dynamic Columns with Calculation
<p>I have a dataframe called prices, with historical stocks prices for the following companies:</p> <p><em>['APPLE', 'AMAZON', 'GOOGLE']</em></p> <p>So far on, with the help of a friendly user, I was able to create a dataframe for each of this periods with the following code:</p> <pre><code>import pandas as pd import numpy as np from datetime import datetime, date prices = pd.read_excel('database.xlsx') companies=prices.columns companies=list(companies) del companies[0] timestep = 250 prices_list = [prices[day:day + step] for day in range(len(prices) - step)] </code></pre> <p>Now, I need to evaluate the change in price for every period of 251 days (Price251/Price1; Price252/Price2; Price 253/Price and so on) for each one of the companies, and create a column for each one of them.</p> <p>I would also like to put the column name dynamic, so I can replicate this to a much longer database.</p> <p>So, I would get a dataframe similar to this: <a href="https://i.stack.imgur.com/RHVYS.png" rel="nofollow noreferrer">open image here</a></p> <p>Here you can find the dataframe head(3): <a href="https://i.stack.imgur.com/S8jGe.png" rel="nofollow noreferrer">Initial Dataframe</a></p>
<p>IIUC, try this:</p> <pre><code>def create_cols(df,num_dates): for col in list(df)[1:]: df['{}%'.format(col)] = - ((df['{}'.format(col)].shift(num_dates) - df['{}'.format(col)]) / df['{}'.format(col)].shift(num_dates)).shift(- num_dates) return df create_cols(prices,251) </code></pre> <p>you only would have to format the columns to percentages.</p>
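<p>For what it's worth, a seemingly equivalent (and arguably more readable) sketch uses <code>pct_change</code>, which computes the same forward change once shifted back; this assumes the first column is the date column, as in the function above:</p> <pre><code>def create_cols(df, num_dates):
    for col in list(df)[1:]:
        df['{}%'.format(col)] = df[col].pct_change(periods=num_dates).shift(-num_dates)
    return df

create_cols(prices, 251)
</code></pre>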
python-3.x|pandas|dataframe|dynamic-columns
1
4,755
20,255,779
What is best way to find first half of True's in boolean numpy array?
<p>Here is the problem:</p> <p>Take a numpy boolean array:</p> <pre><code>a = np.array([False, False, True, True, True, False, True, False]) </code></pre> <p>Which I am using as indexes to panda dataframe. But I need to create 2 new arrays where they each have half the True's as the original array. Note the example arrays are much shorter than actual set. </p> <p>So like:</p> <pre><code>1st_half = array([False, False, True, True, False, False, False, False]) 2nd_half = array([False, False, False, False, True, False, True, False]) </code></pre> <p>Does anyone have a good way to do this? Thanks.</p>
<p>First find true indices</p> <pre><code>inds = np.where(a)[0] cutoff = inds[inds.shape[0]//2] </code></pre> <p>Set values equivalent before and after cutoff:</p> <pre><code>b = np.zeros(a.shape,dtype=bool) c = np.zeros(a.shape,dtype=bool) c[cutoff:] = a[cutoff:] b[:cutoff] = a[:cutoff] </code></pre> <p>Results:</p> <pre><code>b Out[65]: array([False, False, True, True, False, False, False, False], dtype=bool) c Out[64]: array([False, False, False, False, True, False, True, False], dtype=bool) </code></pre> <p>There are numerous ways to handle the indexing.</p>
python|arrays|numpy|pandas
3
4,756
71,938,456
How to use pandas concatenate
<p><a href="https://i.stack.imgur.com/C4wzJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C4wzJ.png" alt="enter image description here" /></a></p> <p>Hi im running a program and I am being warned that the append method for pandas is being depreciated. How would I use concatenate in the following piece of code.</p> <pre><code>tweets_df = tweets_df.append(pd.DataFrame({'user_name': tweet.user.name, 'user_location': tweet.user.location, 'user_description': tweet.user.description, 'user_verified': tweet.user.verified, 'date': str(tweet.created_at), 'text': text, 'hashtags': [hashtags if hashtags else None], 'source': tweet.source})) </code></pre>
<p>Build a one-row <code>DataFrame</code> for the new tweet (a plain <code>Series</code> passed to <code>concat</code> would be stacked as rows, which is not what you want) and concatenate it:</p> <pre><code>row = pd.DataFrame({'user_name': tweet.user.name,
                    'user_location': tweet.user.location,
                    'user_description': tweet.user.description,
                    'user_verified': tweet.user.verified,
                    'date': str(tweet.created_at),
                    'text': text,
                    'hashtags': [hashtags if hashtags else None],
                    'source': tweet.source})

tweets_df = pd.concat([tweets_df, row], ignore_index=True)
</code></pre>
python|pandas
2
4,757
22,048,341
Importing Module(PyCogent) with Python
<p>I'm having trouble installing the Pycogent 1.5.3 module on Python. </p> <ol> <li>I Downloaded it(PyCogent-1.5.3.tgz) and then unzipped it with 7-zip.</li> <li>I then open the cmd(windows), and go to the Pycogent directory. Then I try to set it up with "python setup.py install"</li> <li>First error i had to deal with, was the need to setup Numpy version 1.3 or greater. Done. Now Numpy is shown on the python site packages.</li> <li><p>Now the error i get when trying to setup Pycogent is:</p> <pre><code>running install running build running build_py running build_ext building 'cogent.align._compare' extension error: Unable to find vcvarsall.bat </code></pre></li> </ol> <p>I installed Microsoft Visual Studio 2010 and still get this error. Any ideas? Could it be that it's not possible to import this module on windows? On this website it says something about problems on gz file compression <a href="http://pycogent.org/install.html" rel="nofollow">http://pycogent.org/install.html</a></p> <p>Thanks in advance!</p>
<p>PyCogent does not work on windows unless you have a Linux virtual machine installed.</p> <p>Taken from Pycogent's installation guide (<a href="http://pycogent.org/install.html" rel="nofollow noreferrer">here</a>):</p> <p><img src="https://i.stack.imgur.com/EMJS9.png" alt="enter image description here"></p> <p>Let me know if you need help installing the QIIME virtual box if you want to really use PyCogent with Windows.</p>
python|windows|numpy|module
0
4,758
4,623,800
Is there support for sparse matrices in Python?
<p>Is there support for sparse matrices in python?</p> <p>Possibly in numpy or in scipy?</p>
<p><strong>Yes.</strong></p> <p>SciPi provides <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html" rel="noreferrer">scipy.sparse</a>, a "2-D sparse matrix package for numeric data".</p> <blockquote> <p>There are seven available sparse matrix types:</p> <ol> <li>csc_matrix: Compressed Sparse Column format</li> <li>csr_matrix: Compressed Sparse Row format</li> <li>bsr_matrix: Block Sparse Row format</li> <li>lil_matrix: List of Lists format</li> <li>dok_matrix: Dictionary of Keys format</li> <li>coo_matrix: COOrdinate format (aka IJV, triplet format)</li> <li>dia_matrix: DIAgonal format</li> </ol> </blockquote>
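<p>A minimal usage sketch (with made-up data) showing how one of these types is built and used:</p> <pre><code>import numpy as np
from scipy import sparse

dense = np.array([[0, 0, 3],
                  [4, 0, 0]])

m = sparse.csr_matrix(dense)     # build a CSR matrix from a dense array
print(m.nnz)                     # 2 stored non-zeros
print(m.toarray())               # convert back to dense
print(m.dot(m.T).toarray())      # sparse matrix product
</code></pre>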
python|numpy|scipy|sparse-matrix
41
4,759
55,569,369
Is there an pandas function to count element occurring after specific words?
<pre><code>df
['ch*', 'co*', 'DePe*', 'DePe*', 'DePe*', 'pm*', 'tpm*', 'lep*']
['ch*', 'co*', 'DePe*', 'DePe*', 'DePe*', 'am*', 'te*', 'qe*','te*']
['ch*', 'co*', 'DePe*', 'ch*', 'DePe*', 'DePe*', 'tpm*', 'lep*']
['ch*', 'DePe*', 'eeae*', 'ps*', 'er*']
Name: df, Length: 4, dtype: object
</code></pre> <p>I need to count the items occurring after the last instance of 'DePe*' (reading left to right). I am looking for an outcome like this:</p> <pre><code>df                                                                      count
['ch*', 'co*', 'DePe*', 'DePe*', 'DePe*', 'pm*', 'tpm*', 'lep*']           3
['ch*', 'co*', 'DePe*', 'DePe*', 'DePe*', 'am*', 'te*', 'qe*','te*']       4
['ch*', 'co*', 'DePe*', 'ch*', 'DePe*', 'DePe*', 'tpm*', 'lep*']           2
['ch*', 'DePe*', 'eeae*', 'ps*', 'er*']                                    3
</code></pre>
<p>Use <code>apply</code> with lambda function and <code>index</code> of reversed <code>lists</code>, it working nice, because lists are 0 based indexed in python:</p> <pre><code>df['count'] = df['A'].apply(lambda x: x[::-1].index('DePe*')) print (df) A count 0 [ch*, co*, DePe*, DePe*, DePe*, pm*, tpm*, lep*] 3 1 [ch*, co*, DePe*, DePe*, DePe*, am*, te*, qe*,... 4 2 [ch*, co*, DePe*, ch*, DePe*, DePe*, tpm*, lep*] 2 3 [ch*, DePe*, eeae*, ps*, er*] 3 </code></pre> <p>If possible some value not exist is possible specify value in <code>try-except</code> statement:</p> <pre><code>def f(x): try: return x[::-1].index('DePe*') except ValueError: return np.nan #or return 0 df['count'] = df['A'].apply(f) </code></pre>
python|pandas
2
4,760
55,289,901
Drop columns with no header on csv import with pandas
<p>Here is a sample csv:</p> <pre><code>| Header A | | Unnamed: 1 | Header D | |-----------|------|------------|-----------| | a1 | b1 | c1 | d1 | | a2 | b2 | c2 | d2 | </code></pre> <p>If I import it with <code>pandas.read_csv</code>, it turns into this:</p> <pre><code> Header A Unnamed: 1 Unnamed: 1.1 Header D 0 a1 b1 c1 d1 1 a2 b2 c2 d2 </code></pre> <p>My goal is dropping all the columns with empty headers, in this case the second column, but I cannot use the assigned column names by pandas to filter them, because there might also be non-empty columns starting with <code>Unnamed</code>, like the third column in the example.</p> <p>Columns are not known before hand, so I do not have any control over them.</p> <p>I have tried the following args with <code>read_csv</code>, but have not had any luck with them:</p> <ul> <li><code>prefix</code>: it just does not work!</li> <li><code>usecols</code>: Empty headers already have a name when they are passed to <code>usecols</code>, which makes it unusable to me.</li> </ul> <p>I have looked at some other answers on SO, like the ones below, but none of them cover my case:</p> <p><a href="https://stackoverflow.com/questions/36519086/how-to-get-rid-of-unnamed-column-in-a-pandas-dataframe">How to get rid of `Unnamed:` column in a pandas dataframe</a></p> <p><a href="https://stackoverflow.com/questions/43983622/remove-unnamed-columns-in-pandas-dataframe">Remove Unnamed columns in pandas dataframe</a></p>
<p>The only way I can think of is to "peek" at the headers beforehand and get the indices of non-empty headers. Then it's not a case of dropping them, but not including them in the original df.</p> <pre><code>import csv import pandas as pd with open('test.csv') as infile: reader = csv.reader(infile) headers = next(reader) header_indices = [i for i, item in enumerate(headers) if item] df = pd.read_csv('test.csv', usecols=header_indices) </code></pre>
python|pandas|csv
2
4,761
55,361,055
Why can't I use a variable in an for x in df.itertuples loop created prior to that loop?
<p>This is probably a basic Pandas question but I can't figure it out.</p> <p>I have this loop:</p> <pre><code>n=0
for lum in lum_df.itertuples():
    print(lum.X)
    print(lum.Y)
    lum_x = float(lum.X)
    lum_y = float(lum.Y)
    for point in street_df.itertuples():
        print(point.X)
        print(point.Y)
        print(lum_x)
        print(lum_y)
        dist = calculate_dist(lum_x, point.X, lum_y, point.Y)
        print('DISTANCE IS : ' + str(dist))
        print('================= next point================')
    print('=============NEXT LUM==============')
</code></pre> <p>Somehow, when I try to compute the distance between the 2 points in the second for loop, the values (lum_x and lum_y) are returned as nan. I would need to find a way to use these previously created variables in the second loop. Why doesn't it allow me to do so and what can I do about it?</p> <p>PS: the point.X and point.Y are already float variables!</p> <p>Thanks a lot!</p>
<pre><code>import pandas as pd lum_df = pd.DataFrame({'X': [1, 2, 3, 4], 'Y': [3, 4, 5, 6]}) street_df = pd.DataFrame({'X': [5, 6, 7, 8], 'Y': [9, 10, 11, 12]}) for ix, lum in lum_df.iterrows(): print(float(lum.X)) print(float(lum.Y)) for ix, point in street_df.iterrows(): print(point.X) print(point.Y) </code></pre>
python|pandas|for-loop
0
4,762
55,470,962
numpy ndarray inside array using index
<p>I have a <code>numpy.ndarray</code> - <code>a</code></p> <p>and <code>a[0]</code> gives me:</p> <p><code>array([0., 0., 0., ..., 0., 0., 0.])</code></p> <p>also <code>a[0:1]</code> gives: </p> <p><code>array([[0., 0., 0., ..., 0., 0., 0.]])</code></p> <p>How can I make <code>a[0]</code> to the format <code>array([[0., 0., 0., ..., 0., 0., 0.]])</code> ie; in the format of array inside an array, like we do <code>[list[0]]</code></p>
<p>In case you want to reshape the array, so that every element of the first axis (every row) is an array of shape (1,m), instead of just (m,) you could do the following:</p> <pre><code>a = a[:,np.newaxis,:] </code></pre> <p>This way the (n,m) array is transformed into shape (n,1,m) and getting the first row with <code>a[0]</code> returns:</p> <pre><code>array([[0., 0., 0., ..., 0., 0., 0.]]) </code></pre>
python|python-3.x|numpy
0
4,763
9,718,731
Matlab -> scipy ode (complex) function translation
<p>I'm learning Python, numpy and scipy. I'm wondering whether it is possible to translate this kind of Matlab function to Python:</p> <pre><code>function [tT, u ] = SSolve5TH(n, t, t0, tf, u_env, utop_init, utop_final, ubottom, te_idx)
    options = [];
    [tT,u] = ode23s(@SS, t, u_env, options, @B);

    function y = B(x)
        y = zeros(n,1);
        if x &gt;= t(te_idx)
            y(1,1) = utop_final;
            y(n,1) = ubottom;
        else
            y(1,1) = (x - t0) / (tf - t0) * (utop_final - utop_init) + utop_init;
            y(n,1) = ubottom;
        end
    end

    function rp = SS(t, r, B)
        global AH
        rp = AH * r + B(t);
    end
end
</code></pre> <p>In this example, n is a number, e.g. 15;</p> <p>t is the time array</p> <p>AH = [15]x t matrix</p> <p>t0 = 0</p> <p>tf = 20 (e.g.)</p> <p>u_env = [20,20,20,20,20,20,20,20,20,20,20,20,20,20,20]</p> <p>utop_init = 20</p> <p>utop_final = 40; ubottom = 20;</p> <p>te_idx = 4;</p>
<p>Yes, it's possible. Along these lines:</p> <p><a href="http://scipy-central.org/item/13/2/integrating-an-initial-value-problem-multiple-odes" rel="nofollow noreferrer">http://scipy-central.org/item/13/2/integrating-an-initial-value-problem-multiple-odes</a></p> <p>Just replace 'vode' with 'zvode'. Or:</p> <p><a href="https://stackoverflow.com/search?q=%5Bscipy%5D+complex+ode">https://stackoverflow.com/search?q=%5Bscipy%5D+complex+ode</a></p>
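<p>For a rough, non-authoritative sketch of how the Matlab nested functions might map onto scipy's <code>solve_ivp</code> (its Radau/BDF methods are the usual stand-ins for the stiff solver ode23s): here <code>AH</code> is assumed to be an n x n matrix and a placeholder diagonal matrix is used, so treat this as a starting point rather than a drop-in translation.</p> <pre><code>import numpy as np
from scipy.integrate import solve_ivp

n = 15
t0, tf = 0.0, 20.0
t = np.linspace(t0, tf, 201)
AH = -0.01 * np.eye(n)                      # placeholder: the real n x n system matrix goes here
u_env = np.full(n, 20.0)
utop_init, utop_final, ubottom = 20.0, 40.0, 20.0
te_idx = 4

def B(x):
    y = np.zeros(n)
    if x &gt;= t[te_idx]:
        y[0] = utop_final
    else:
        y[0] = (x - t0) / (tf - t0) * (utop_final - utop_init) + utop_init
    y[-1] = ubottom
    return y

def SS(x, r):
    return AH @ r + B(x)

sol = solve_ivp(SS, (t0, tf), u_env, method='Radau', t_eval=t)
tT, u = sol.t, sol.y.T                      # u has one row per time point, like the Matlab output
</code></pre>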
python|matlab|numpy|scipy|ode
1
4,764
7,140,560
numpy sum along axis
<p>Is there a numpy function to sum an array <em>along</em> (not over) a given axis? By along an axis, I mean something equivalent to:</p> <pre><code>[x.sum() for x in arr.swapaxes(0,i)]. </code></pre> <p>to sum <em>along</em> axis i.</p> <p>For example, a case where numpy.sum will not work directly:</p> <pre><code>&gt;&gt;&gt; a = np.arange(12).reshape((3,2,2)) &gt;&gt;&gt; a array([[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]], [[ 8, 9], [10, 11]]]) &gt;&gt;&gt; [x.sum() for x in a] # sum along axis 0 [6, 22, 38] &gt;&gt;&gt; a.sum(axis=0) array([[12, 15], [18, 21]]) &gt;&gt;&gt; a.sum(axis=1) array([[ 2, 4], [10, 12], [18, 20]]) &gt;&gt;&gt; a.sum(axis=2) array([[ 1, 5], [ 9, 13], [17, 21]]) </code></pre>
<p>You can just pass a tuple with the axes that you want to sum over, and leave out the one that you want to 'sum along':</p> <pre><code>&gt;&gt; a.sum(axis=(1,2)) array([ 6, 22, 38]) </code></pre>
numpy|sum|axis
5
4,765
56,507,181
How to change the date format while writing it to excel file using openpyxl
<p>I want to convert the date format from <code>dd-mm-yy</code> to <code>mm-yy</code> while writing it into excel file. I tried all the methods but to no success. I am trying to copy data from one excel file and paste it into another. But the date messes up everything.</p> <p>This is my original Document. From where the code will copy the data: <img src="https://i.stack.imgur.com/0ly1W.png" alt="This is my original Document. From where the code will copy the data"></p> <p>This is how it gets displayed in destination excel file:<br> <img src="https://i.stack.imgur.com/BHvmi.png" alt="This is how it gets displayed in destination excel file."></p> <p>I have used Openpyxl, Pandas for the same.</p>
<p>If you have a variable with all the content you want to write, just use the <code>datetime_format</code> and NOT the <code>date_format</code>.</p> <p>For instance, I am deleting a sheet called <code>ValorLiq</code> and then rewriting it. I have the content I want to write in <code>myFILE</code>. The commands are:</p> <pre><code>fn = 'C:/Users/whatever/yourspreadsheet.xlsx' writer=pd.ExcelWriter(fn,mode='a',engine='openpyxl',datetime_format='DD/MM/YYYY') idx = writer.book.sheetnames.index('ValorLiq') writer.book.remove(writer.book.worksheets[idx]) myFILE.to_excel(writer, merge_cells = False, sheet_name='ValorLiq') writer.save() </code></pre>
python|python-3.x|pandas|openpyxl
2
4,766
56,749,460
How do i display dictionaries on new tkinter window
<p>I'm working on excel file to show the data on new tkinter window.</p> <p>Here is the excel data i converted to dict:</p> <pre><code>{'Country': {0: 'Japan', 1: 'China', 2: 'USA', 3: 'Russia', 4: 'Japan', 5: 'Japan', 6: 'China'}, 'Port': {0: 'Yokohama', 1: 'Ningbo', 2: 'Baltimore', 3: 'Moscow', 4: 'Tokyo', 5: 'Tokyo', 6: 'Shanghai'}, 'incoterm': {0: 'FOB', 1: 'DAT', 2: 'FOB', 3: 'EXW', 4: 'FOB', 5: 'FOB', 6: 'EXW'}, 'Capacity': {0: '40ton', 1: '40ton', 2: 'Other', 3: '20ton', 4: '20ton', 5: 'Other', 6: '40ton'}, 'Date': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan, 5: nan, 6: nan}, 'Oct': {0: 400, 1: 500, 2: 600, 3: 100, 4: 400, 5: 500, 6: 120}, 'Nov': {0: 500, 1: 200, 2: 200, 3: 300, 4: 500, 5: 600, 6: 985}, 'Dec': {0: 100, 1: 200, 2: 800, 3: 400, 4: 200, 5: 100, 6: 146}, '$ value': {0: 2650.6, 1: 2650.6, 2: 2650.6, 3: 2650.6, 4: 2650.6, 5: 2650.6, 6: 2500.6}, 'Total': {0: 2650600.0, 1: 2385540.0, 2: 4240960.0, 3: 2120480.0, 4: 2915660.0, 5: 3180720.0, 6: 3128250.6}} </code></pre> <p>What i got so far:</p> <pre><code>import pandas as pd from tkinter import * from tkinter import ttk df = pd.read_excel("some excel data") df = df.to_dict() a = [] a.append(dict(df)) print(a) root = Tk() for data in a: temp_text = '{0} {1} - ({2})'.format(data['Country'], data['incoterm'], data['Total']) ttk.Label(root, text=temp_text).pack() mainloop() </code></pre> <p>Output:</p> <pre><code>{0: 'Japan', 1: 'China', 2: 'USA', 3: 'Russia', 4: 'Japan', 5: 'Japan', 6: 'China'}{0: 'FOB', 1: 'DAT', 2: 'FOB', 3: 'EXW', 4: 'FOB', 5: 'FOB', 6: 'EXW'}-({0: 2650600.0, 1: 2385540.0, 2: 4240960.0, 3: 2120480.0, 4: 2915660.0, 5: 3180720.0, 6: 3128250.6}}) </code></pre> <p>Expected output:</p> <pre><code>Japan FOB -(2650600.0) China EXW -(2385540.0) ....etc </code></pre>
<p>Here convert to list with dictioanry is not necessary, use:</p> <pre><code>df = pd.DataFrame(a) print (df) Country Port incoterm Capacity Date Oct Nov Dec $ value \ 0 Japan Yokohama FOB 40ton NaN 400 500 100 2650.6 1 China Ningbo DAT 40ton NaN 500 200 200 2650.6 2 USA Baltimore FOB Other NaN 600 200 800 2650.6 3 Russia Moscow EXW 20ton NaN 100 300 400 2650.6 4 Japan Tokyo FOB 20ton NaN 400 500 200 2650.6 5 Japan Tokyo FOB Other NaN 500 600 100 2650.6 6 China Shanghai EXW 40ton NaN 120 985 146 2500.6 Total 0 2650600.0 1 2385540.0 2 4240960.0 3 2120480.0 4 2915660.0 5 3180720.0 6 3128250.6 </code></pre> <p>You can join all columns with converting numeric columns to strings:</p> <pre><code>s = df['Country'] + ' ' + df['incoterm'] + ' - (' + df['Total'].astype(str) + ')' for temp_text in s: print (temp_text) </code></pre> <p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.itertuples.html" rel="nofollow noreferrer"><code>DataFrame.itertuples</code></a>:</p> <pre><code>for data in df.itertuples(): temp_text = '{0} {1} - ({2})'.format(data.Country, data.incoterm, data.Total) print (temp_text) </code></pre> <p>If performance is not important use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html" rel="nofollow noreferrer"><code>DataFrame.iterrows</code></a>, but is is slowiest:</p> <pre><code>for i, data in df.iterrows(): temp_text = data temp_text = '{0} {1} - ({2})'.format(data['Country'], data['incoterm'], data['Total']) print (temp_text) Japan FOB - (2650600.0) China DAT - (2385540.0) USA FOB - (4240960.0) Russia EXW - (2120480.0) Japan FOB - (2915660.0) Japan FOB - (3180720.0) China EXW - (3128250.6) </code></pre>
python|python-3.x|pandas|tkinter
1
4,767
56,643,625
how to efficiently divide a large image to several parts and rotate them?
<p>Given a big picture, the following takes a lot of time:</p> <p><strong>1</strong> cut the image into 4 evenly shaped images (divide the image once in the middle horizontally, and once vertically):</p> <p><a href="https://i.stack.imgur.com/soAVC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/soAVC.png" alt="enter image description here"></a></p> <p><strong>2</strong> rotate the 4 parts the following way: <a href="https://i.stack.imgur.com/SIKai.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SIKai.png" alt="enter image description here"></a></p> <p><strong>3</strong> concatenate them (so the result is ready as input to a neural network, to be run as a batch of 4 images)</p> <p>What I have tried (the image is square):</p> <pre><code>image_A = image_np[: int(size / 2), :, :]
image_B = cv2.flip(image_np[int(size / 2):, :, :], -1)
image_C = cv2.rotate(image_np[:, :int(size / 2), :], cv2.ROTATE_90_CLOCKWISE)
image_D = cv2.rotate(image_np[:, int(size / 2):, :], cv2.ROTATE_90_COUNTERCLOCKWISE)
</code></pre> <p>So I was wondering if there's a faster way, given that each image part is a numpy array.</p>
<p>We could simply get the views for virtually free runtime with array-slicing, like so -</p> <pre><code>B = image_np[-1:int(size / 2)-1:-1, ::-1, :] C = image_np[::-1, :int(size / 2),:].swapaxes(0,1) D = image_np[:, -1:int(size / 2)-1:-1, :].swapaxes(0,1) </code></pre> <p><code>image_A = image_np[: int(size / 2), :, :]</code> already seems like a view, so no change(s) required for that one.</p>
python|image|numpy
1
4,768
26,337,189
python: multiply two columns of nd-arrays to get the vector of same dimensions?
<p>I have the following code:</p> <pre><code> x = np.random.randint(0,10,size=(10,2)) y = np.random.randint(0,10,size=(10,2)) </code></pre> <p>x and y are <code>10 x 2</code> matrix. Now I want to multiply second column of x and y. I did <code>z = x[:,1] * y[:,1].</code> I got the result but z is <code>1 x 10</code> array instead of <code>10 x 1</code> array. Is there any way such that I get the result direct in the form of <code>10 x 1</code> without need to transpose it?</p>
<p>Instead of getting the exact column use slicing, that way shape will be preserved:</p> <pre><code>&gt;&gt;&gt; x = np.random.randint(0, 10, size=(10, 2)) &gt;&gt;&gt; y = np.random.randint(0, 10, size=(10, 2)) &gt;&gt;&gt; x[:,1:2] * y[:,1:2] array([[36], [ 0], [ 0], [45], [ 5], [28], [ 5], [12], [56], [ 6]]) </code></pre>
python|arrays|numpy
6
4,769
66,913,089
Overloading "==" operator for numpy arrays
<p>I am defining a function in Python that needs to check</p> <pre><code>if a==b: do.stuff() </code></pre> <p>In principle, <code>a</code> and <code>b</code> could be numpy arrays or integers, and I would like my implementation to be robust against this. However, to check equality for a numpy array, one needs to append the boolean with all(), which will break the code when <code>a</code> and <code>b</code> are integers.</p> <p>Is there a simple way to code the equality test so that it works regardless of whether <code>a</code> and <code>b</code> are integers or numpy arrays?</p>
<p>how about this that works for both arrays and integers(numbers):</p> <pre><code>if np.array_equal(a,b): do.stuff() </code></pre>
python|numpy|operator-overloading
1
4,770
66,806,974
Executing .py file from Command Prompt raises "ImportError: no module named geopandas"- Script works in Spyder (using anaconda)
<p>I have a python script that accomplishes a few small tasks:</p> <ol> <li>Create new directory structure</li> <li>Download a .zip file from a URL and unzip contents</li> <li>Clean up the data</li> <li>Export data as a .csv</li> </ol> <p>The full .py file runs successfully giving desired output when in Spyder, but when trying to run the .py from Command Prompt, it raises &quot;ImportError: no module named geopandas&quot;</p> <p>I am using Windows 10 Enterprise version 1909, conda v4.9.2, Anaconda command line client v 1.7.2, Spyder 4.2.3.</p> <p>I am in a virtual environment with all the needed packages that my script imports. The first part of my script only needs <code>os</code> and <code>requests</code> packages, and it runs fine as its own .py file from Command Prompt:</p> <pre><code>import os import requests #setup folders, download .zip file and unzip it #working directory is directory the .py file is in wd = os.path.dirname(__file__) if not os.path.exists(wd): os.mkdir(wd) #data source directory src_path = os.path.join(wd, &quot;src&quot;) if not os.path.exists(src_path): os.mkdir(src_path) #data output directory output_path = os.path.join(wd,&quot;output&quot;) if not os.path.exists(output_path): os.mkdir(output_path) #create new output directories and define as variables out_parent = os.path.join(wd, &quot;output&quot;) if not os.path.exists(out_parent): os.mkdir(out_parent) folders = [&quot;imgs&quot;, &quot;eruptions_processed&quot;] for folder in folders: new_dir = os.path.join(out_parent, folder) if not os.path.exists(new_dir): os.mkdir(new_dir) output_imgs = os.path.join(out_parent, &quot;imgs&quot;) if not os.path.exists(output_imgs): os.mkdir(output_imgs) output_eruptions = os.path.join(out_parent, &quot;eruptions_processed&quot;) if not os.path.exists(output_eruptions): os.mkdir(output_eruptions) if not os.path.exists(os.path.join(src_path,&quot;Historical_Significant_Volcanic_Eruption_Locations.zip&quot;)): url = 'https://opendata.arcgis.com/datasets/3ed5925b69db4374aec43a054b444214_6.zip?outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D' doc = requests.get(url) os.chdir(src_path) #change working directory to src folder with open('Historical_Significant_Volcanic_Eruption_Locations.zip', 'wb') as f: f.write(doc.content) file = os.path.join(src_path,&quot;Historical_Significant_Volcanic_Eruption_Locations.zip&quot;) #full file path of downloaded </code></pre> <p>But once I re-introduce my full list of packages in the .py file:</p> <pre><code>import os import pandas as pd import geopandas as gpd import requests import datetime import shutil </code></pre> <p>and run again from Command Prompt, I get:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\KWOODW01\py_command_line_tools\download_eruptions.py&quot;, line 17, in &lt;module&gt; import geopandas as gpd ImportError: No module named geopandas </code></pre> <p>I am thinking the problem is something to do with not finding my installed packages in my anaconda virtual environment, but I don't have a firm grasp on how to troubleshoot that. 
I thought I had added the necessary Anaconda file paths to my Windows PATH variable before.</p> <p>The path to my virtual environment packages are in &quot;C:\Users\KWOODW01\Anaconda3\envs\pygis\Lib\site-packages&quot;</p> <p><code>echo %PATH% </code> returns:</p> <pre><code>C:\Users\KWOODW01\Anaconda3\envs\pygis;C:\Users\KWOODW01\Anaconda3\envs\pygis\Library\mingw-w64\bin;C:\Users\KWOODW01\Anaconda3\envs\pygis\Library\usr\bin;C:\Users\KWOODW01\Anaconda3\envs\pygis\Library\bin;C:\Users\KWOODW01\Anaconda3\envs\pygis\Scripts;C:\Users\KWOODW01\Anaconda3\envs\pygis\bin;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Program Files\McAfee\Solidcore\Tools\GatherInfo;C:\Program Files\McAfee\Solidcore\Tools\Scanalyzer;C:\Program Files\McAfee\Solidcore;C:\Program Files\McAfee\Solidcore\Tools\ScGetCerts;C:\Users\KWOODW01\AppData\Local\Microsoft\WindowsApps;C:\Users\KWOODW01\Anaconda3\Library\bin;C:\Users\KWOODW01\Anaconda3\Scripts;C:\Users\KWOODW01\Anaconda3\condabin;C:\Users\KWOODW01\Anaconda3;. </code></pre> <p>So it appears that the path to the directory where my <code>pygis</code> venv packages live are already added to my PATH variables, yet from Command Prompt the script still raises the &quot;ImportError: no module named geopandas&quot;. Pretty stuck on this one. Hoping someone can provide some more troubleshooting tips. Thanks.</p>
<p>I figured out I wasn't calling python in command prompt before executing the python file. The proper command is <code>python modulename.py</code> instead of <code>modulename.py</code> if you want to execute a .py file from the command prompt. Yikes. Let this be a lesson for other python novices.</p>
python|anaconda|command-prompt|geopandas
1
4,771
47,224,930
Unable to import pandas on CentOS machine
<p>While importing pandas I am getting below error:</p> <pre><code>&gt;&gt;&gt; import pandas as pd **RuntimeError: module compiled against API version 6 but this version of numpy is 4** numpy.core.multiarray failed to import Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib64/python2.6/site-packages/pandas/__init__.py", line 6, in &lt;module&gt; from . import hashtable, tslib, lib ImportError: numpy.core.multiarray failed to import </code></pre> <p>Tried to upgrade numpy module to the latest version. Getting below message:</p> <pre><code>[abc]# yum install numpy Loaded plugins: fastestmirror, refresh-packagekit, security Setting up Install Process Loading mirror speeds from cached hostfile * base: centos.excellmedia.net * extras: centos.excellmedia.net * updates: centos.excellmedia.net **Package numpy-1.4.1-9.el6.x86_64 already installed and latest version** </code></pre> <p>Is there any manual work around to resolve this problem?</p>
<p>Well the module's answer to your import request seems to be pretty self-explaining. Did you try a </p> <pre><code>yum update numpy </code></pre> <p>If that doesn't fix your problem (I don't know what repos your centos distro uses) you could always try the pip approach</p> <pre><code>pip install -U numpy </code></pre> <p>If that still doesnt solve your problem, you might have several versions of numpy installed and your should try the answers to <a href="https://stackoverflow.com/questions/28517937/how-can-i-upgrade-numpy">this question</a>.</p>
python|linux|pandas|centos6
0
4,772
47,240,308
Differences between numpy.random.rand vs numpy.random.randn in Python
<p>What are the differences between <code>numpy.random.rand</code> and <code>numpy.random.randn</code>?</p> <p>From the documentation, I know the only difference between them is the probabilistic distribution each number is drawn from, but the overall structure (dimension) and data type used (float) is the same. I have a hard time debugging a neural network because of this.</p> <p>Specifically, I am trying to re-implement the Neural Network provided in the <a href="http://neuralnetworksanddeeplearning.com/chap1.html" rel="noreferrer">Neural Network and Deep Learning book by Michael Nielson</a>. The original code can be found <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network.py" rel="noreferrer">here</a>. My implementation was the same as the original; however, I instead defined and initialized weights and biases with <code>numpy.random.rand</code> in the <code>init</code> function, rather than the <code>numpy.random.randn</code> function as shown in the original.</p> <p>However, my code that uses <code>random.rand</code> to initialize <code>weights and biases</code> does not work. The network won't learn and the weights and biases will not change.</p> <p>What is the difference(s) between the two random functions that cause this weirdness?</p>
<p>First, as you see from the documentation, <code>numpy.random.randn</code> generates samples from the normal distribution, while <code>numpy.random.rand</code> generates them from a uniform distribution (in the range [0,1)).</p> <p>Second, why did the uniform distribution not work? The main reason is the activation function, especially in your case where you use the sigmoid function. The plot of the sigmoid looks like the following:</p> <p><a href="https://i.stack.imgur.com/fh1jc.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fh1jc.jpg" alt="enter image description here" /></a></p> <p>So you can see that if your input is away from 0, the slope of the function decreases quite fast, and as a result you get a tiny gradient and a tiny weight update. And if you have many layers, those gradients get multiplied many times in the backward pass, so even &quot;proper&quot; gradients become small after the multiplications and stop having any influence. So if you have a lot of weights which bring your input to those regions, your network is hardly trainable. That's why it is usual practice to initialize network variables around zero. This is done to ensure that you get reasonable gradients (close to 1) to train your net.</p> <p>However, a uniform distribution is not something completely undesirable; you just need to make the range smaller and closer to zero. One good practice is Xavier initialization. In this approach you can initialize your weights with either:</p> <ol> <li><p>A normal distribution with mean 0 and variance <code>2. / (in + out)</code>, i.e. standard deviation <code>sqrt(2. / (in + out))</code>, where in is the number of inputs to the neurons and out the number of outputs.</p> </li> <li><p>A uniform distribution in the range <code>[-sqrt(6. / (in + out)), +sqrt(6. / (in + out))]</code></p> </li> </ol>
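<p>As an illustration only, a minimal numpy sketch of the two Xavier/Glorot variants described above might look like this; the layer sizes here are made-up:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)      # requires numpy &gt;= 1.17; np.random.normal/uniform work similarly
fan_in, fan_out = 784, 30           # hypothetical layer sizes

# Glorot normal: mean 0, std = sqrt(2 / (fan_in + fan_out))
w_normal = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_out, fan_in))

# Glorot uniform: limit = sqrt(6 / (fan_in + fan_out))
limit = np.sqrt(6.0 / (fan_in + fan_out))
w_uniform = rng.uniform(-limit, limit, size=(fan_out, fan_in))

print(w_normal.std(), np.sqrt(2.0 / (fan_in + fan_out)))   # the two values should be close
</code></pre>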
python|numpy|neural-network|numpy-random
129
4,773
47,419,345
Save rows of a bidimensional numpy array in another array
<p>I have a bidimensional np array V (100000x50). I want to create a new array V_tgt in which I keep just certain rows of V, so the dimension will be (ix50). It may be easy to do it but I tried different things and it seems to save just the first of the 50 elements. My code is the following:</p> <pre><code>V_tgt = np.array([]) for i in IX_items: if i in IX_tgt_items: V_tgt=np.append(V_tgt, V[i]) </code></pre> <p>I tried with functions such as insert and delete as well but it didn't work.How can I save all the values and create an array with the right dimension? Any help is really appreciated.</p>
<p>From your comments I assume that you have some kind of list of target indices (in my example <code>tgt_idx1</code> and <code>tgt_idx2</code>)that tells you which elements to take from V. You could do something like this:</p> <pre><code>import numpy as np V = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) tgt_idx1 = np.array([1, 2, 3]) tgt_idx2 = np.array([1, 3]) mask = [] for i, elem in enumerate(V): inTargets = i in tgt_idx1 and i in tgt_idx2 mask.append(inTargets) print mask V_tgt = V[mask] print V_tgt </code></pre> <p>This prints</p> <pre><code>[False, True, False, True] [[ 4 5 6] [10 11 12]] </code></pre>
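<p>If the &quot;target indices&quot; are simply those present in both index lists, the same result can be obtained without an explicit Python loop; a small sketch using the arrays from the example above:</p> <pre><code>keep = np.intersect1d(tgt_idx1, tgt_idx2)   # indices present in both target lists
V_tgt = V[keep]
print(V_tgt)
</code></pre>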
python|arrays|numpy
0
4,774
11,248,812
How to plot data from multiple two column text files with legends in Matplotlib?
<p>How do I open multiple text files from different directories and plot them on a single graph with legends?</p>
<p>This is relatively simple if you use pylab (included with matplotlib) instead of matplotlib directly. Start off with a list of filenames and legend names, like [ ('name of file 1', 'label 1'), ('name of file 2', 'label 2'), ...]. Then you can use something like the following:</p> <pre><code>import pylab datalist = [ ( pylab.loadtxt(filename), label ) for filename, label in list_of_files ] for data, label in datalist: pylab.plot( data[:,0], data[:,1], label=label ) pylab.legend() pylab.title("Title of Plot") pylab.xlabel("X Axis Label") pylab.ylabel("Y Axis Label") </code></pre> <p>You also might want to add something like fmt='o' to the plot command, in order to change from a line to points. By default, matplotlib with pylab plots onto the same figure without clearing it, so you can just run the plot command multiple times.</p>
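<p>Since the <code>pylab</code> interface is discouraged these days, an equivalent sketch with plain <code>matplotlib.pyplot</code> and <code>numpy.loadtxt</code> (the file paths here are hypothetical) would be:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

list_of_files = [('dir1/data1.txt', 'label 1'), ('dir2/data2.txt', 'label 2')]  # hypothetical paths

fig, ax = plt.subplots()
for filename, label in list_of_files:
    data = np.loadtxt(filename)          # two-column text file
    ax.plot(data[:, 0], data[:, 1], label=label)

ax.set_title("Title of Plot")
ax.set_xlabel("X Axis Label")
ax.set_ylabel("Y Axis Label")
ax.legend()
plt.show()
</code></pre>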
python|numpy|matplotlib
30
4,775
68,418,216
Dropping number of columns (ending with same letter) from pandas dataframe
<p><a href="https://i.stack.imgur.com/RBMaZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RBMaZ.png" alt="enter image description here" /></a></p> <p>I want to drop all columns ending with &quot;T&quot;. I used for loop which works. I want to know is there any better way. My code-</p> <pre><code>for column in df.columns: if column[-1]== &quot;T&quot;: df.drop(columns = column, inplace = True) </code></pre>
<p>Try:</p> <pre><code>df=df.drop(df.filter(regex='T$').columns,axis=1) </code></pre> <p>OR</p> <pre><code>df=df.loc[:,~df.columns.str.endswith('T')] </code></pre> <p>OR</p> <pre><code>df=df.loc[:,~(df.columns.str[-1]=='T')] </code></pre>
python|pandas
0
4,776
68,087,213
Pandas Read Excel, or faster method
<p>I didn't find any related thread.</p> <p>I have a not so big Excel file (100 MB), but quite large (ie 930 columns for 35k rows), and that's my problem. Excel read this file in a second, but pandas took at least 10-20 minutes on my computer. I tried the following:</p> <ul> <li>not infering type, by giving dtype parameter.</li> <li>limit columns, by using usecols</li> <li>iterate over rows, by using nrows and skiprows in a loop</li> </ul> <p>I can not convert this excel into a csv.</p> <p>This is my code so far:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_excel(&quot;rei_2018/REI_2018.xlsx&quot;, engine = &quot;openpyxl&quot;, dtype = str, usecols=['H11'], nrows=200) </code></pre> <p><strong>edit 1:</strong></p> <ul> <li><p>data : <a href="https://www.impots.gouv.fr/portail/www2/fichiers/statistiques/base_de_donnees/rei/rei_2018.zip" rel="nofollow noreferrer">https://www.impots.gouv.fr/portail/www2/fichiers/statistiques/base_de_donnees/rei/rei_2018.zip</a></p> </li> <li><p>I tried running the following command on the above data (thus limiting to 200 row), it hooks exactly 760seconds.</p> </li> </ul> <pre class="lang-py prettyprint-override"><code>df = pd.read_excel(&quot;rei_2018/REI_2018.xlsx&quot;, engine = &quot;openpyxl&quot;, dtype = str, usecols=['H11'], nrows=200) </code></pre> <ul> <li>pandas versions (Windows 64bits with Anaconda, on 8Gb Intel i5 8265U):</li> </ul> <pre class="lang-py prettyprint-override"><code>pd.show_versions() INSTALLED VERSIONS ------------------ commit : b5958ee1999e9aead1938c0bba2b674378807b3d python : 3.7.6.final.0 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.18362 machine : AMD64 processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : None.None pandas : 1.1.5 numpy : 1.18.1 pytz : 2019.3 dateutil : 2.8.1 pip : 20.0.2 setuptools : 45.2.0.post20200210 Cython : 0.29.15 pytest : 5.3.5 hypothesis : 5.5.4 sphinx : 2.4.0 blosc : None feather : None xlsxwriter : 1.2.7 lxml.etree : 4.3.5 html5lib : 1.0.1 pymysql : None psycopg2 : None jinja2 : 2.11.1 IPython : 7.12.0 pandas_datareader: None bs4 : 4.8.2 bottleneck : 1.3.2 fsspec : 0.6.2 fastparquet : None gcsfs : None matplotlib : 3.1.3 numexpr : 2.7.1 odfpy : None openpyxl : 3.0.3 pandas_gbq : None pyarrow : 0.13.0 pytables : None pyxlsb : None s3fs : None scipy : 1.4.1 sqlalchemy : 1.3.13 tables : 3.6.1 tabulate : None xarray : None xlrd : 1.2.0 xlwt : 1.3.0 numba : 0.48.0 </code></pre>
<p>Consider an ODBC connection to Excel workbook to query worksheet like a database table with <code>pandas.read_sql</code>. Below ODBC driver installs with most Windows MS Office installations. SQL query retrieves one column, <code>H11</code>, and first 200 rows of worksheet named <code>mySheet</code>.</p> <pre class="lang-py prettyprint-override"><code>import pyodbc import pandas as pd strfile = &quot;/path/to/workbook.xlsx&quot; conn = pyodbc.connect( r'Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};' 'DBQ={};'.format(strfile), autocommit=True ) df = pd.read_sql(&quot;SELECT TOP 200 [H11] FROM [mySheet$]&quot;, conn) # ALTERNATIVELY, ASSUMING H11 COLUMN IS &quot;H&quot; COLUMN IN SPREADSHEET df = pd.read_sql(&quot;SELECT [H11] FROM [mySheet$H1:H200]&quot;, conn) conn.close() </code></pre>
python|excel|pandas
2
4,777
59,376,213
Expected performance of training tf.keras.Sequential model with model.fit, model.fit_generator and model.train_on_batch
<p>I am using tensorflow to train a 1D CNN to detect specific events from sensor data. While the data with tens of millions samples easily fits to the ram in the form of an 1D float array, it obviously takes a huge amount of memory to store the data as a N x inputDim array that can be passed to model.fit for training. While I can use model.fit_generator or model.train_on_batch to generate the required mini batches on the fly, for some reason I am observing a huge performance gap between model.fit and model.fit_generator &amp; model.train_on_batch even though everything is stored in memory and mini batch generation is fast as it basically only consists of reshaping the data. Therefore, I'm wondering whether I am doing something terribly wrong or if this kind of performance gap is to be expected. I am using the cpu version of Tensorflow 2.0 with 3.2 GHz Intel Core i7 processor (4 cores with multithreading support) and Python 3.6.3. on Mac Os X Mojave.</p> <p>In short, I created a dummy python script to recreate the issue, and it reveals that with batch size of 64, if takes 407 seconds to run 10 epochs with model.fit, 1852 seconds with model.fit_generator, and 1985 seconds with model.train_on_batch. CPU loads are ~220%, ~130%, and ~120% respectively, and it seems especially odd that model.fit_generator &amp; model.train_on_batch are practically on par, while model.fit_generator should be able to parallelise mini batch creation and model.train_on_batch definitely does not. That is, model.fit (with huge memory requirements) beats the other solution candidates with easily manageable memory requirements by a factor of four. Obviously, CPU loads increase and total training times decrease by increasing batch size, but model.fit is always fastest with a a margin of at least two up to batch size of 8096.</p> <p>Is this kind of behaviour normal (when there is no GPU involved) or what could be done in order to increase the computation speed of the less memory intensive options? It seems that no such option is available to divide all data into manageable pieces, and then run model.fit in iterative manner. 
</p> <pre><code>from __future__ import absolute_import from __future__ import division from __future__ import print_function from tqdm import tqdm import numpy as np import tensorflow as tf import time import sys import argparse class DataGenerator(tf.keras.utils.Sequence): 'Generates data for Keras' def __init__(self, inputData, outputData, batchIndices, batchSize, shuffle): 'Initialization' self.inputData = inputData self.outputData = outputData self.batchIndices = batchIndices self.batchSize = batchSize self.shuffle = shuffle self.on_epoch_end() def __len__(self): 'Denotes the number of batches per epoch' return int( np.floor( self.inputData.size / self.batchSize ) ) def __getitem__(self, index): 'Generate one batch of data' # Generate data X, y = self.__data_generation(self.indexes[index*self.batchSize:(index+1)*self.batchSize]) return X, y def on_epoch_end(self): 'Updates indexes after each epoch' self.indexes = np.arange(self.inputData.size) if self.shuffle == True: np.random.shuffle(self.indexes) def __data_generation(self, INDX): 'Generates data containing batch_size samples' # Generate data X = np.expand_dims( self.inputData[ np.mod( self.batchIndices + np.reshape(INDX,(INDX.size,1)) , inputData.size ) ], axis=2) y = self.outputData[INDX,:] return X, y FLAGS = None parser = argparse.ArgumentParser() parser.add_argument('--batchSize', type=int, default=128, help='Batch size') parser.add_argument('--epochCount', type=int, default=5, help='Epoch count') FLAGS, unparsed = parser.parse_known_args() batchSize = FLAGS.batchSize epochCount = FLAGS.epochCount # Data generation print(' ') print('Generating data...') np.random.seed(0) # For reproducible results inputDim = int(104) # Input dimension outputDim = int( 2) # Output dimension N = int(1049344) # Total number of samples M = int(5e4) # Number of anomalies trainINDX = np.arange(N, dtype=np.uint32) inputData = np.sin(trainINDX) + np.random.normal(loc=0.0, scale=0.20, size=N) # Source data stored in a single array anomalyLocations = np.random.choice(N, M, replace=False) inputData[anomalyLocations] += 0.5 outputData = np.zeros((N,outputDim)) # One-hot encoded target array without ones for i in range(N): if( np.any( np.logical_and( anomalyLocations &gt;= i, anomalyLocations &lt; np.mod(i+inputDim,N) ) ) ): outputData[i,1] = 1 # set class #2 to one if there is at least a single anomaly within range [i,i+inputDim) else: outputData[i,0] = 1 # set class #1 to one if there are no anomalies within range [i,i+inputDim) print('...completed') print(' ') # Create a model for anomaly detection model = tf.keras.Sequential([ tf.keras.layers.Conv1D(filters=24, kernel_size=9, strides=1, padding='valid', dilation_rate=1, activation='relu', use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', input_shape=(inputDim,1)), tf.keras.layers.MaxPooling1D(pool_size=4, strides=None, padding='valid'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(20, activation='relu', use_bias=True), tf.keras.layers.Dense(outputDim, activation='softmax') ]) model.compile( tf.keras.optimizers.Adam(), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=[tf.keras.metrics.CategoricalAccuracy()]) print(' ') relativeIndices = np.arange(inputDim) # Indices belonging to a single sample relative to current position batchIndices = np.tile( relativeIndices, (batchSize,1) ) # Relative indices tiled into an array of size ( batchSize , inputDim ) stepsPerEpoch = int( np.floor( N / batchSize ) ) # Steps per epoch # Create an intance of dataGenerator class 
generator = DataGenerator(inputData, outputData, batchIndices, batchSize=batchSize, shuffle=True) # Solve by gathering data into a large float32 array of size ( N , inputDim ) and feeding it to model.fit startTime = time.time() X = np.expand_dims( inputData[ np.mod( np.tile(relativeIndices,(N,1)) + np.reshape(trainINDX,(N,1)) , N ) ], axis=2) y = outputData[trainINDX, :] history = model.fit(x=X, y=y, sample_weight=None, batch_size=batchSize, verbose=1, callbacks=None, validation_split=None, shuffle=True, epochs=epochCount) referenceTime = time.time() - startTime print(' ') print('Total solution time with model.fit: %6.3f seconds' % referenceTime) # Solve with model.fit_generator startTime = time.time() history = model.fit_generator(generator=generator, steps_per_epoch=stepsPerEpoch, verbose=1, callbacks=None, epochs=epochCount, max_queue_size=1024, use_multiprocessing=True, workers=4) generatorTime = time.time() - startTime print(' ') print('Total solution time with model.fit_generator: %6.3f seconds (%6.2f %% more)' % (generatorTime, 100.0 * generatorTime/referenceTime)) print(' ') # Solve by gathering data into batches of size ( batchSize , inputDim ) and feeding it to model.train_on_batch startTime = time.time() for epoch in range(epochCount): print(' ') print('Training epoch # %2d ...' % (epoch+1)) print(' ') np.random.shuffle(trainINDX) epochStartTime = time.time() for step in tqdm( range( stepsPerEpoch ) ): INDX = trainINDX[ step*batchSize : (step+1)*batchSize ] X = np.expand_dims( inputData[ np.mod( batchIndices + np.reshape(INDX,(batchSize,1)) , N ) ], axis=2) y = outputData[INDX,:] history = model.train_on_batch(x=X, y=y, sample_weight=None, class_weight=None, reset_metrics=False) print(' ') print('...completed with loss = %9.6e, accuracy = %6.2f %%, %6.2f ms/step' % (history[0], 100.0*history[1], (1000*(time.time() - epochStartTime)/np.floor(trainINDX.size / batchSize)))) print(' ') batchTime = time.time() - startTime print(' ') print('Total solution time with model.train_on_batch: %6.3f seconds (%6.2f %% more)' % (batchTime, 100.0 * batchTime/referenceTime)) print(' ') </code></pre>
<p><strong>model.fit</strong> - suitable if you can load the data as a numpy array and train without on-the-fly augmentation.</p> <p><strong>model.fit_generator</strong> - suitable if your dataset is too big to fit in memory and/or you want to apply augmentation on the fly.</p> <p><strong>model.train_on_batch</strong> - less common; usually used when training more than one model at a time (a GAN, for example).</p>
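<p>In TF 2.x there is a third low-memory option worth trying: build the sliding windows inside a <code>tf.data</code> pipeline and pass the dataset straight to <code>model.fit</code>, so that window extraction and prefetching run in parallel with training. Below is a rough, untested sketch along the lines of the script in the question (the array names, <code>N</code>, <code>inputDim</code>, <code>batchSize</code> and the compiled <code>model</code> are assumed to come from that script):</p> <pre><code>import numpy as np
import tensorflow as tf

inputs  = tf.constant(inputData, dtype=tf.float32)   # the 1D source array
targets = tf.constant(outputData, dtype=tf.float32)  # the one-hot targets
offsets = tf.range(inputDim, dtype=tf.int64)

def make_window(i):
    idx = tf.math.mod(i + offsets, N)               # wrap-around indexing, as in the generator
    x = tf.expand_dims(tf.gather(inputs, idx), -1)  # shape (inputDim, 1)
    y = tf.gather(targets, i)
    return x, y

dataset = (tf.data.Dataset.range(N)
           .shuffle(N)
           .map(make_window, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(batchSize)
           .prefetch(tf.data.experimental.AUTOTUNE))

model.fit(dataset, epochs=epochCount, verbose=1)
</code></pre> <p>Whether this closes the gap to the big-array <code>model.fit</code> depends on how much of the per-batch overhead is in the Python generator versus in Keras itself, so treat it as an experiment rather than a guaranteed fix.</p>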
python-3.x|tensorflow2.0
0
4,778
59,350,190
Cannot convert Pandas Dataframe columns to float
<p>I am using Pandas to read a CSV file containing several columns that must be converted to floats:</p> <pre><code> df = pd.read_csv(r'dataset.csv', low_memory=False, sep = ',') df.head(2) Coal Flow 01 Air Flow 01 Outlet Temp 01 Inlet Temp 01 Bowl DP 01 Current 01 Vibration 01 0 51.454407 101.432340 64.917089 234.2488932 2.470623 96.727352 1.874374 1 51.625368 100.953089 64.726890 233.2340394 2.495698 96.309512 1.996391 </code></pre> <p>Next I specify the columns that need to be converted to floats in a variable called <code>features</code>:</p> <pre><code> features = ['Coal Flow 01', 'Air Flow 01', 'Outlet Temp 01', 'Inlet Temp 01', 'Bowl DP 01', 'Current 01', 'Vibration 01'] </code></pre> <p>Then I needed to convert the the value of the columns to float, but I got an error.</p> <pre><code>features = np.stack([df[col].values for col in features], 1) features = torch.tensor(features, dtype=torch.float) features[:5] </code></pre> <p>and the error that Pandas is showing me is:</p> <blockquote> <p>KeyError: "None of [Index([ 51.45440668, 101.4323397, 64.91708906, '234.2488932',\n 2.470623484, 96.72735193, 1.87437372],\n dtype='object')] are in the [columns]"</p> </blockquote>
<p>Why not just use <code>astype</code>:</p> <pre><code>df = pd.read_csv(r'dataset.csv', low_memory=False, sep=',')
df[features] = df[features].astype(float)
</code></pre>
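<p>If some of the columns contain strings that may fail to parse, <code>pd.to_numeric</code> with <code>errors='coerce'</code> is a more forgiving alternative (a small sketch using the <code>features</code> list of column names from the question):</p> <pre><code>for col in features:
    df[col] = pd.to_numeric(df[col], errors='coerce')
</code></pre> <p>Note also that the <code>KeyError</code> in the question lists data values rather than column names, which suggests <code>features</code> had already been overwritten by the <code>np.stack(...)</code> result (for example by re-running the cell); re-defining the list of column names before indexing <code>df[features]</code> avoids that.</p>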
python|pandas|numpy|pytorch
1
4,779
59,473,894
Using Tensorflows Universal Sentence Encoder in Node.js?
<p>I'm using tensorflow js in node and trying to encode my inputs.</p> <pre><code>const tf = require('@tensorflow/tfjs-node'); const argparse = require('argparse'); const use = require('@tensorflow-models/universal-sentence-encoder'); </code></pre> <p>These are imports, the suggested import statement (ES6) isn't permitted for me in my node environment? Though they seem to work fine here.</p> <pre><code>const encodeData = (tasks) =&gt; { const sentences = tasks.map(t =&gt; t.input); let model = use.load(); let embeddings = model.embed(sentences); console.log(embeddings.shape); return embeddings; // `embeddings` is a 2D tensor consisting of the 512-dimensional embeddings for each sentence. }; </code></pre> <p>This code produces an error that model.embed is not a function. Why? How do I properly implement an encoder in node.js?</p>
<p><code>load</code> returns a promise that resolve to the model</p> <pre><code>use.load().then(model =&gt; { // use the model here let embeddings = model.embed(sentences); console.log(embeddings.shape); }) </code></pre> <p>If you would rather use <code>await</code>, the <code>load</code> method needs to be in an enclosing <code>async</code> function</p> <pre><code>const encodeData = async (tasks) =&gt; { const sentences = tasks.map(t =&gt; t.input); let model = await use.load(); let embeddings = model.embed(sentences); console.log(embeddings.shape); return embeddings; // `embeddings` is a 2D tensor consisting of the 512-dimensional embeddings for each sentence. }; </code></pre>
javascript|node.js|tensorflow|tensorflow.js|encoder
3
4,780
59,110,612
Pandas groupby mode every n rows
<p>I have a pandas df that I am trying to groupby every 3 rows and get the mode. How can I do this?</p> <p>Example: </p> <pre><code>time a b 0 0.5 -2.0 1 0.5 -2.0 2 0.1 -1.0 3 0.1 -1.0 4 0.1 -1.0 5 0.5 -1.0 6 0.5 -1.0 7 0.5 -3.0 8 0.5 -1.0 </code></pre> <p>Should be:</p> <pre><code>time a b 2 0.5 -2.0 5 0.1 -1.0 8 0.5 -1.0 </code></pre>
<p>You can use <code>groupby</code> and <code>mode</code>:</p> <pre><code>df.groupby(np.arange(len(df)) // 3).agg(lambda x: x.mode().to_numpy()[-1])

   time    a    b
0     2  0.5 -2.0
1     5  0.1 -1.0
2     8  0.5 -1.0
</code></pre> <p>The output here may differ from your expected output in some cases if it is possible to have more than one mode.</p> <p>I should also mention that you may not want to use mode on data that is not categorical in nature (this includes floating point data). Consider factorizing your column first, or you may get inaccurate results due to floating point inaccuracies.</p>
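<p>A sketch of the factorize-first idea for one of the float columns (column <code>'a'</code> taken from the example; the same pattern applies to <code>'b'</code>):</p> <pre><code>codes, uniques = pd.factorize(df['a'])
grouped = pd.Series(codes).groupby(np.arange(len(df)) // 3)
mode_codes = grouped.agg(lambda s: s.mode().iloc[-1])
a_modes = pd.Series(uniques[mode_codes.to_numpy()])
</code></pre>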
python|pandas|group-by
2
4,781
59,462,450
Better way to add label data to convolutional neural network?
<p>I am working on an image classification CNN to practice understanding machine learning, and I want to be as vanilla as possible to really get a concrete understanding of what is happening while also remaining somewhat efficient. </p> <p>I have a directory structure that goes like this:</p> <pre><code>training folder 3 folders named 0, 1, 2 0 contains only pictures of cats 1 contains only pictures of dogs 2 contains only pictures of ducks testing folder 3 folders named 0, 1, 2 0 contains only pictures of cats 1 contains only pictures of dogs 2 contains only pictures of ducks </code></pre> <p>I created this snippet of code to go through folder 0, convert all images(of cats) to image array, then go to folder 1 and do the same for all images(of dogs), and finally go to folder 2 and repeat for ducks. I then converted that returned list into a numpy array defined as x_train to feed into the model.</p> <pre><code>def get_img_array(dir): for num in range(0,3): image_list = [img for img in os.listdir(dir + str(num)) if img.endswith('.jpg')] for img_name in range(0,len(image_list)): loaded_image = image.load_img(dir + str(num) + '\\' + str(image_list[img_name]), grayscale = False) process_img = image.img_to_array(loaded_image) processed_list.append(process_img/255) return processed_list </code></pre> <p>but I'm not sure of how to move forward giving them the label y_train and y_test</p> <p>I'm aware I could create a csv file with the name of each image and a corresponding label such as "0", "1", and "2" in the next column depending on the picture and import them that way, but I'm curious to see if there is a better and more efficient way to add labels with the structure I currently have?</p> <p>I've tried to research and look over GitHub repos, guides, and SO questions(<a href="https://stackoverflow.com/questions/56113867/convolutional-neural-networks-labels">Convolutional Neural Networks labels</a> unfortunately doesn't have a useful answer) but I've only come across data sets that were hardly explained, or it was imported from a database pre-labeled in some way unknown to me, so an in-depth explanation would be great! </p>
<p>You can create the label array at the same time as the pixel array. Let's assume your categories are <code>cat=0, dog=1, ducks=2</code>. Initialize an empty numpy array, create a label array for each folder, and concatenate the arrays to get the final labels.</p> <pre><code>def get_img_array(dir):
    processed_list = []                    # pixel arrays
    labels_arr = np.empty(shape=[0,1])     # integer labels
    for num in range(0,3):
        image_list = [img for img in os.listdir(dir + str(num)) if img.endswith('.jpg')]
        for img_name in range(0,len(image_list)):
            loaded_image = image.load_img(dir + str(num) + '\\' + str(image_list[img_name]), grayscale = False)
            process_img = image.img_to_array(loaded_image)
            processed_list.append(process_img/255)
        labels = np.full((len(image_list),1),num)
        labels_arr = np.concatenate((labels_arr, labels))
    return processed_list, labels_arr
</code></pre> <p>Check this answer as well for more intuition: <a href="https://stackoverflow.com/questions/59294900/how-to-prepare-training-data-for-image-classification/59296781#59296781">How to prepare training data for image classification</a></p>
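<p>A possible usage sketch (the folder path is an assumption; adjust it to your directory layout). The integer labels returned above can either be fed to <code>sparse_categorical_crossentropy</code> directly or one-hot encoded:</p> <pre><code>imgs, labels = get_img_array('training\\')
x_train = np.array(imgs)
y_train = tf.keras.utils.to_categorical(labels.ravel(), num_classes=3)
</code></pre>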
python|tensorflow|keras|dataset|conv-neural-network
2
4,782
59,050,251
Reshaping array using array.reshape(-1, 1)
<p>I have a dataframe called <code>data</code> from which I am trying to identify any outlier prices.</p> <p>The data frame head looks like:</p> <pre><code> Date Last Price 0 29/12/2017 487.74 1 28/12/2017 422.85 2 27/12/2017 420.64 3 22/12/2017 492.76 4 21/12/2017 403.95 </code></pre> <p>I have found a some code which I need to adjust slightly for my data that loads the data and then compares the timeseries to a scaler. The code looks like:</p> <pre><code> data = pd.read_csv(path) data = rawData['Last Price'] data = data['Last Price'] scaler = StandardScaler() np_scaled = scaler.fit_transform(data) data = pd.DataFrame(np_scaled) # train oneclassSVM outliers_fraction = 0.01 model = OneClassSVM(nu=outliers_fraction, kernel="rbf", gamma=0.01) model.fit(data) data['anomaly3'] = pd.Series(model.predict(data)) fig, ax = plt.subplots(figsize=(10,6)) a = data.loc[data['anomaly3'] == -1, ['date_time_int', 'Last Price']] #anomaly ax.plot(data['date_time_int'], data['Last Price'], color='blue') ax.scatter(a['date_time_int'],a['Last Price'], color='red') plt.show(); def getDistanceByPoint(data, model): distance = pd.Series() for i in range(0,len(data)): Xa = np.array(data.loc[i]) Xb = model.cluster_centers_[model.labels_[i]-1] distance.set_value(i, np.linalg.norm(Xa-Xb)) return distance </code></pre> <p>However get the error message:</p> <pre><code>ValueError: Expected 2D array, got 1D array instead: array=[487.74 422.85 420.64 ... 461.57 444.33 403.84]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. </code></pre> <p>and I am unsure as to where I need to resize the array.</p> <p>For information, here is the trace back:</p> <pre><code> File "&lt;ipython-input-23-628125407694&gt;", line 1, in &lt;module&gt; runfile('C:/Users/stacey/Downloads/techJob.py', wdir='C:/Users/stacey/Downloads') File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile execfile(filename, namespace) File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/staceyDownloads/techJob.py", line 92, in &lt;module&gt; main() File "C:/Users/stacey/Downloads/techJob.py", line 56, in main np_scaled = scaler.fit_transform(data) File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\sklearn\base.py", line 464, in fit_transform return self.fit(X, **fit_params).transform(X) File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\sklearn\preprocessing\data.py", line 645, in fit return self.partial_fit(X, y) File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\sklearn\preprocessing\data.py", line 669, in partial_fit force_all_finite='allow-nan') File "C:\Anaconda_Python 3.7\2019.03\lib\site-packages\sklearn\utils\validation.py", line 552, in check_array "if it contains a single sample.".format(array)) ValueError: Expected 2D array, got 1D array instead: array=[7687.77 7622.88 7620.68 ... 5261.57 5244.37 5203.89]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. </code></pre>
<p>You should be able to fix the error by changing this line:</p> <pre><code>np_scaled = scaler.fit_transform(data) </code></pre> <p>with this:</p> <pre><code>np_scaled = scaler.fit_transform(data.values.reshape(-1,1)) </code></pre>
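<p>For context, <code>reshape(-1, 1)</code> simply turns the flat series of prices into a single-column 2-D array, which is the layout scikit-learn expects for a single feature:</p> <pre><code>import numpy as np

a = np.array([487.74, 422.85, 420.64])
print(a.shape)                 # (3,)
print(a.reshape(-1, 1).shape)  # (3, 1) -- one column = one feature
</code></pre>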
python|pandas|numpy|scikit-learn
4
4,783
45,239,458
pip install numpy error - Unmatched "
<p>I am trying to install numpy in a python3 virtual environment</p> <pre><code>python3 -m venv venv source venv/bin/activate pip install numpy </code></pre> <p>After running the above, the installation fails with an error something like this...</p> <pre><code>error Command "gcc ..." failed with exit status 1 Unmatched ". Unmatched ". </code></pre> <p>Why does this happen and how can I install numpy correctly</p>
<p>When you install numpy using pip, it runs various shell commands to build the parts of numpy that are written in C. Your environment's <code>$SHELL</code> variable will be used to determine which shell to use. In this case, <code>csh</code> is being used, but the command in the installation script expects to be able to use bash-like syntax.</p> <h1>Make sure your <code>$SHELL</code> is set to <code>/bin/bash</code> before building</h1> <p>The syntax for this depends on your shell.</p> <h2>bash/zsh:</h2> <pre><code>export SHELL="/bin/bash" </code></pre> <h2>fish:</h2> <pre><code>set -x SHELL "/bin/bash" </code></pre> <h2>csh:</h2> <pre><code>setenv SHELL /bin/bash </code></pre> <p>Now you should be able to run <code>pip install numpy</code> again, and it might work this time.</p> <p>(This is possibly a bug or oversight in numpy.)</p>
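<p>In bash or zsh you can also set the variable just for the one command, without exporting it permanently:</p> <pre><code>SHELL=/bin/bash pip install numpy
</code></pre>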
python-3.x|numpy|virtualenv
0
4,784
44,927,592
Pandas Series boxplot not showing correctly
<p>I have a Pandas Series named a, and a.describe() gives this</p> <pre><code>count 1116.000 mean 211.495 std 1241.612 min 1.000 25% 16.000 50% 20.000 75% 57.000 max 23220.000 </code></pre> <p>I'd like to create a boxplot out of it so I did <code>a.plot(kind='box')</code>, there is what I get: <a href="https://i.stack.imgur.com/YAabL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YAabL.png" alt="enter image description here"></a> The first few entries of the series look like this:</p> <pre><code>338.000 17.000 9.000 20.000 68.000 288.000 18.000 25.000 </code></pre> <p>Why isn't the boxplot showing up correctly?</p> <hr> <p>The maximum value indeed makes this plot unreadable. I decide to hide the outliers by doing this:</p> <pre><code>plt.boxplot(a, showfliers=False) </code></pre> <p><a href="https://i.stack.imgur.com/Rl3vJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rl3vJ.png" alt="enter image description here"></a></p>
<p>Actually, given your values, your box plot is showing up 'correctly': the max value of 23220.000 stretches the axis so much that the box is squashed flat. Try playing with the xlim and ylim arguments of the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.plot.html" rel="nofollow noreferrer">Pandas plot function</a>.</p>
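<p>For example, a quick way to zoom in on the box (the limits here are just illustrative):</p> <pre><code>import matplotlib.pyplot as plt

a.plot(kind='box')
plt.ylim(0, 150)   # clip the view so the box and whiskers are visible
plt.show()
</code></pre>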
python|pandas|plot|boxplot
2
4,785
44,881,307
Pandas How to hide the header when the output need header as a condition to filter the result?
<h2><strong>Solution</strong></h2> <p>If you hide the header of the output. use the header = None option. And notice that just use it when you are going to print it. For the reason that if you set the header = None when you load the data, the name of the column is unusable to you, so you can't use it to filter the data or do something else.</p> <p>For example :</p> <p><code>print(ResDf.to_string(header = None))</code></p> <h2><strong>Update</strong></h2> <p>The output I want has no header. For example the output is </p> <pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 &lt;------------------column name 3 b 7 a 4 b 2 b 6 b NaN 10 b 8 a 8 a 6 b 2 c 4 a NaN 10 c </code></pre> <p>The output I want is </p> <p>------------without column name -------------------</p> <pre><code>3 b 7 a 4 b 2 b 6 b NaN 10 b 8 a 8 a 6 b 2 c 4 a NaN 10 c </code></pre> <p>But it can't be done by using <code>header = none</code>, so I wonder how to make it? </p> <p>Thing is that if you set the <code>header = None</code> option, the column name can't be used as a condition to filter the data. Cuz there is no column name already. For example I set the filter ( or called <em>mask</em>) as <code>mask = df[u'客户'].str.contains(Client, na=False) &amp; df[u'型号'].str.contains(GoodsType, na=False)</code>. If you set the header = None I think that there is no <code>型号</code> or <code>客户</code> in the dataframe, so It can't be used. So how to hide header when you still want to use the header to filter the output data?</p> <p>I want the pandas output without the <strong>header</strong>, but the output needs the header to filtered.</p> <p>Here is my code, I knew the trick to set the <code>header=None</code>, but I can't do that because the header still is used as a condition to filter the output. For example here I want the output with the <strong>'客户'</strong>(which is a column name) contain the certain word '<strong>Tom</strong>'(For example). If I use the <code>header = None</code> option, the '客户' will not be recognized. So how to get the output without the header, in my condition? </p> <pre><code># -*- coding: utf-8 -*- # -*- coding: gbk -*- import pandas as pd import numpy as np import sys import re import os import sys Client = sys.argv[1] GoodsType = sys.argv[2] Weight = sys.argv[3] script_dir = os.path.dirname(os.path.abspath(__file__)) os.chdir(script_dir ) # change to the path that you already know pd.set_option('display.max_columns', 1000) # df = pd.read_excel("packagesum.xlsx", header = None) # '客户' will not be recognized when set the header to None df = pd.read_excel("packagesum.xlsx") # print(str(df.ix[:,u'客户经理':u'内袋标贴'][df[u'客户'].str.contains(Client, na = False)][df[u'型号'].str.contains(GoodsType, na = False)])) ResDf = df.ix[:,u'客户经理':u'留样'][df[u'客户'].str.contains(Client, na = False)][df[u'型号'].str.contains(GoodsType, na = False)] ResDf[u'重量'] = Weight print(str(ResDf)) with open('GoodsTypeRes.txt', 'w') as the_file: the_file.write(str(ResDf)) </code></pre> <p>This is the header of my excel file.</p> <p><a href="https://i.stack.imgur.com/jtSuV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jtSuV.png" alt="enter image description here"></a></p>
<p>I think you need parameter <code>names</code> for set column names if does not exist, also <code>header = None</code> can be omit:</p> <pre><code>#change column names by your data df = pd.read_excel("packagesum.xlsx", names=['col1','col2','col3', ...]) </code></pre> <p>And then code can be simplify by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>DataFrame.to_csv</code></a>:</p> <pre><code>mask = df[u'客户'].str.contains(Client, na=False) &amp; df[u'型号'].str.contains(GoodsType, na=False) ResDf = df.loc[mask,u'客户经理':u'留样'] ResDf[u'重量'] = Weight ResDf.to_csv('GoodsTypeRes.tx', header=False) </code></pre> <hr> <p>Another solution is select columns by position with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a>.</p> <pre><code>df = pd.read_excel("packagesum.xlsx", header=None) #check positions if corrects, python starts from 0 for first position mask = df.iloc[:, 2].str.contains(Client, na=False) &amp; df.iloc[:, 4].str.contains(GoodsType, na=False) #all columns ResDf = df[mask].copy() #add new column to position 10 what is same as column name ResDf[10] = Weight ResDf.to_csv('GoodsTypeRes.tx', header=False) </code></pre> <p>Sample:</p> <pre><code>np.random.seed(345) N = 10 df = pd.DataFrame({0:np.random.choice(list('abc'), size=N), 1:np.random.choice([8,7,0], size=N), 2:np.random.choice(list('abc'), size=N), 3:np.random.randint(10, size=N), 4:np.random.choice(list('abc'), size=N), 5:np.random.choice([2,0], size=N), 6:np.random.choice(list('abc'), size=N), 7:np.random.randint(10, size=N), 8:np.random.choice(list('abc'), size=N), 9:np.random.choice([np.nan,0], size=N), 10:np.random.choice([1,0], size=N), 11:np.random.choice(list('abc'), size=N)}) print (df) 0 1 2 3 4 5 6 7 8 9 10 11 0 a 7 b 6 a 2 a 7 c 0.0 1 b 1 a 8 b 3 b 0 a 7 a NaN 0 b 2 b 8 b 3 b 2 a 8 c NaN 1 b 3 b 7 a 4 b 2 b 6 b NaN 0 b 4 c 0 b 2 c 2 c 7 a NaN 1 b 5 a 0 a 8 c 2 b 1 c NaN 1 b 6 a 8 b 5 c 2 a 5 a 0.0 0 a 7 b 8 a 2 c 0 a 1 a NaN 1 c 8 a 8 a 6 b 2 c 4 a NaN 0 c 9 c 0 b 2 a 0 b 2 c 0.0 0 b </code></pre> <hr> <pre><code>Client = 'a' GoodsType = 'b' Weight = 10 mask = df.iloc[:, 2].str.contains(Client, na=False) &amp; df.iloc[:, 4].str.contains(GoodsType, na=False) ResDf = df[mask].copy() ResDf[10] = Weight print (ResDf) 0 1 2 3 4 5 6 7 8 9 10 11 3 b 7 a 4 b 2 b 6 b NaN 10 b 8 a 8 a 6 b 2 c 4 a NaN 10 c </code></pre>
python|pandas
2
4,786
45,035,213
plot normal distribution with pd.hist
<p>I have a pd.DataFrame which I want to plot and fit a bell curve over. I got as far as plotting the histogram. How do I fit the bell curve?</p> <pre><code>wordfreq = pd.DataFrame(columns=vocab, index = authors, data = rates) wordfreq.hist(column='the', grid = False, normed = True, color = '#9ebcda')[:25] plt.rcParams['axes.facecolor'] = 'white' plt.title("use of 'the' women") </code></pre> <p><a href="https://i.stack.imgur.com/RBfGx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RBfGx.png" alt="histogram"></a></p>
<p>You can use the <a href="https://seaborn.pydata.org/" rel="nofollow noreferrer">seaborn</a> package for this (it overlays a density curve by default):</p> <pre><code>seaborn.distplot(wordfreq[column]) </code></pre>
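<p>If you specifically want a fitted normal (bell) curve rather than the default KDE, <code>distplot</code> also accepts a <code>fit</code> distribution; a sketch using the <code>'the'</code> column from the question:</p> <pre><code>import seaborn as sns
from scipy.stats import norm

sns.distplot(wordfreq['the'], fit=norm, kde=False, color='#9ebcda')
</code></pre>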
python|pandas|matplotlib|visualization
1
4,787
45,048,615
Custom pipeline for different data type in scikit learn
<p>I am currently trying to predict whether a kickstarter project will be successful or no depending on a bunch of integer and some text features. I was looking at building a pipeline which would look something like this</p> <p>Reference : <a href="http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html#sphx-glr-auto-examples-hetero-feature-union-py" rel="nofollow noreferrer">http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html#sphx-glr-auto-examples-hetero-feature-union-py</a></p> <p>Here is my ItemSelector and pipeline code</p> <pre><code>class ItemSelector(BaseEstimator, TransformerMixin): def __init__(self, keys): self.keys = keys def fit(self, x, y=None): return self def transform(self, data_dict): return data_dict[self.keys] </code></pre> <p>I verified that the ItemSelector is working as expected by </p> <pre><code>t = ItemSelector(['cleaned_text']) t.transform(df) And it extract the necessary columns </code></pre> <h3>Pipeline</h3> <pre><code>pipeline = Pipeline([ # Use FeatureUnion to combine the features from subject and body ('union', FeatureUnion( transformer_list=[ # Pipeline for pulling features from the post's subject line ('text', Pipeline([ ('selector', ItemSelector(['cleaned_text'])), ('counts', CountVectorizer()), ('tf_idf', TfidfTransformer()) ])), # Pipeline for pulling ad hoc features from post's body ('integer_features', ItemSelector(int_features)), ] )), # Use a SVC classifier on the combined features ('svc', SVC(kernel='linear')), ]) </code></pre> <p>But when I run pipeline.fit(X_train, y_train) I receive this error. Any idea how to fix this?</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-27-317e1c402966&gt; in &lt;module&gt;() ----&gt; 1 pipeline.fit(X_train, y_train) ~/Anaconda/anaconda/envs/ds/lib/python3.5/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params) 266 This estimator 267 """ --&gt; 268 Xt, fit_params = self._fit(X, y, **fit_params) 269 if self._final_estimator is not None: 270 self._final_estimator.fit(Xt, y, **fit_params) ~/Anaconda/anaconda/envs/ds/lib/python3.5/site-packages/sklearn/pipeline.py in _fit(self, X, y, **fit_params) 232 pass 233 elif hasattr(transform, "fit_transform"): --&gt; 234 Xt = transform.fit_transform(Xt, y, **fit_params_steps[name]) 235 else: 236 Xt = transform.fit(Xt, y, **fit_params_steps[name]) \ ~/Anaconda/anaconda/envs/ds/lib/python3.5/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params) 740 self._update_transformer_list(transformers) 741 if any(sparse.issparse(f) for f in Xs): --&gt; 742 Xs = sparse.hstack(Xs).tocsr() 743 else: 744 Xs = np.hstack(Xs) ~/Anaconda/anaconda/envs/ds/lib/python3.5/site-packages/scipy/sparse/construct.py in hstack(blocks, format, dtype) 456 457 """ --&gt; 458 return bmat([blocks], format=format, dtype=dtype) 459 460 ~/Anaconda/anaconda/envs/ds/lib/python3.5/site-packages/scipy/sparse/construct.py in bmat(blocks, format, dtype) 577 exp=brow_lengths[i], 578 got=A.shape[0])) --&gt; 579 raise ValueError(msg) 580 581 if bcol_lengths[j] == 0: ValueError: blocks[0,:] has incompatible row dimensions. Got blocks[0,1].shape[0] == 81096, expected 1. </code></pre>
<p>The ItemSelector is returning a DataFrame, not an array. That's why <code>scipy.hstack</code> is throwing the error. Change the ItemSelector as below:</p> <pre><code>class ItemSelector(BaseEstimator, TransformerMixin):
    ....
    ....
    ....
    def transform(self, data_dict):
        return data_dict[self.keys].as_matrix()
</code></pre> <p>The error occurs in the <code>integer_features</code> part of your pipeline. For the first part, <code>text</code>, the transformers below the ItemSelector support the DataFrame and hence convert it to an array correctly. But the second part only has the ItemSelector and returns a DataFrame.</p> <p><strong>Update</strong>:</p> <p>In the comment, you have mentioned that you want to perform some actions on the resultant DataFrame returned from the ItemSelector. So instead of modifying the transform method of the ItemSelector, you can make a new Transformer and append it to the second part of your pipeline.</p> <pre><code>class DataFrameToArrayTransformer(BaseEstimator, TransformerMixin):
    def fit(self, x, y=None):
        return self

    def transform(self, X):
        return X.as_matrix()
</code></pre> <p>Then your pipeline should look like this:</p> <pre><code>pipeline = Pipeline([
    # Use FeatureUnion to combine the features from subject and body
    ('union', FeatureUnion(
        transformer_list=[

            # Pipeline for pulling features from the post's subject line
            ('text', Pipeline([
                ('selector', ItemSelector(['cleaned_text'])),
                ('counts', CountVectorizer()),
                ('tf_idf', TfidfTransformer())
            ])),

            # Pipeline for pulling ad hoc features from post's body
            ('integer', Pipeline([
                ('integer_features', ItemSelector(int_features)),
                ('array', DataFrameToArrayTransformer()),
            ])),
        ]
    )),

    # Use a SVC classifier on the combined features
    ('svc', SVC(kernel='linear')),
])
</code></pre> <p>The main thing to understand here is that FeatureUnion will only handle 2-D arrays when combining them, so any other type like DataFrame may present a problem there.</p>
python|pandas|numpy|machine-learning|scikit-learn
1
4,788
57,260,056
How can I get 2 target values?
<p>so I am currently writing a program that can make predictions of longitude and latitude values. So far, my program can make predictions for 1 target value, but I need it to make 2. How should I go about doing that.</p> <pre><code>column_names = ['longitude', 'Latitude'] raw_dataset = pd.read_csv('loglat.csv', names=column_names) dataset = raw_dataset.copy() dataset.tail() </code></pre> <pre><code>train_dataset = dataset.sample(random_state=0) test_dataset = dataset.drop(train_dataset.index) </code></pre> <pre><code>train_labels = train_dataset.pop('longitude') test_labels = test_dataset.pop('longitude') </code></pre> <pre><code>normed_train_data = train_dataset normed_test_data = test_dataset </code></pre> <pre><code>def build_model(): model = keras.Sequential([ layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]), layers.Dense(64, activation=tf.nn.relu), layers.Dense(1) ]) optimizer = tf.keras.optimizers.RMSprop(0.001) model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mean_absolute_error', 'mean_squared_error']) return model </code></pre> <pre><code>model = build_model() linreg = model.fit( normed_train_data, train_labels, epochs=1000, verbose=0) </code></pre> <pre><code>test_predictions = model.predict(normed_test_data).flatten() </code></pre> <p>Right now I only get longitude predictions, but I want both. Thank you, ~Sir Cappery</p>
<p>I would suggest using the Model (functional) API instead of the Sequential API.</p> <pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def build_model():
    X = Input(shape=[len(train_dataset.keys())])
    hidden_1 = Dense(64, activation=tf.nn.relu)(X)
    hidden_2 = Dense(64, activation=tf.nn.relu)(hidden_1)
    out_1 = Dense(1)(hidden_2)
    out_2 = Dense(1)(hidden_2)
    model = Model(inputs=[X], outputs=[out_1, out_2])

    optimizer = tf.keras.optimizers.RMSprop(0.001)

    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model
</code></pre> <p>If you need different losses for the different outputs, etc., please check <a href="https://keras.io/models/model/" rel="nofollow noreferrer">here</a>.</p>
python|tensorflow|keras
0
4,789
57,267,874
Time series CNN, trying to use 1,1 input shape
<p>I'm trying to do a CNN 1D for time series.</p> <p><strong>First issue:</strong> When trying to use an input shape of [1,1] I get an error:</p> <pre><code>Error: Negative dimension size caused by adding layer average_pooling1d_AveragePooling1D1 with input shape [,0,128] </code></pre> <p><strong>2nd issue</strong> I have 2 different arrays (1d) for my data: first array is the input data containing the time series and the 2nd array contains the output data with closed values for a stock.</p> <p>Something that got me to a few more results was to set the input shape to [6,1]. </p> <p>Model summary:</p> <pre><code>_________________________________________________________________ Layer (type) Output shape Param # ================================================================= conv1d_Conv1D1 (Conv1D) [null,5,128] 384 _________________________________________________________________ average_pooling1d_AveragePoo [null,4,128] 0 _________________________________________________________________ conv1d_Conv1D2 (Conv1D) [null,3,64] 16448 _________________________________________________________________ average_pooling1d_AveragePoo [null,2,64] 0 _________________________________________________________________ conv1d_Conv1D3 (Conv1D) [null,1,16] 2064 _________________________________________________________________ average_pooling1d_AveragePoo [null,0,16] 0 _________________________________________________________________ flatten_Flatten1 (Flatten) [null,0] 0 _________________________________________________________________ dense_Dense1 (Dense) [null,1] 1 ================================================================= </code></pre> <p>Here training the model got me into issues:</p> <pre><code>const trainX = tf.tensor1d(data.inTime).reshape([100, 6, 1]) </code></pre> <p>100 - size of my array 6 - features 1 - 1 unit as output</p> <pre><code>Error: Size(100) must match the product of shape 100,6,1 </code></pre> <p>I'm stuck at the training step because I don't know how to train it. 
I would prefere to have a [1,1] input shape, to give only 1 time series and to have 1 output from it.</p> <h1>The model</h1> <pre><code>async function buildModel() { const model = tf.sequential() // settings const kernelSize = 2 const poolSize = [2] // tf layers model.add(tf.layers.conv1d({ inputShape: [6, 1], kernelSize: kernelSize, filters: 128, strides: 1, useBias: true, activation: 'relu', kernelInitializer: 'varianceScaling' })) model.add(tf.layers.averagePooling1d({poolSize: poolSize, strides: [1]})) // 2nd layer model.add(tf.layers.conv1d({ kernelSize: kernelSize, filters: 64, strides: 1, useBias: true, activation: 'relu', kernelInitializer: 'varianceScaling' })) model.add(tf.layers.averagePooling1d({poolSize: poolSize, strides: [1]})) model.add(tf.layers.conv1d({ kernelSize: kernelSize, filters: 16, strides: 1, useBias: true, activation: 'relu', kernelInitializer: 'varianceScaling' })) model.add(tf.layers.averagePooling1d({poolSize: poolSize, strides: [1]})) model.add(tf.layers.flatten()) model.add(tf.layers.dense({ units: 1, kernelInitializer: 'VarianceScaling', activation: 'linear' })) // optimizer + learning rate const optimizer = tf.train.adam(0.0001) model.compile({ optimizer: optimizer, loss: 'meanSquaredError', metrics: ['accuracy'], }) return model } </code></pre> <h1>Training where the error is occurring</h1> <pre><code>async function train(model, data) { console.log(`MODEL SUMMARY:`) model.summary() // Train the model const epochs = 2 // train data size, 28, 28, 1 const trainX = tf.tensor1d(data.inTime).reshape([100, 6, 1]) const trainY = tf.tensor([data.outClosed], [1, data.size, 1]) let result = await model.fit(trainX, trainY, { epochs: epochs }) print("Loss after last Epoch (" + result.epoch.length + ") is: " + result.history.loss[result.epoch.length-1]) return result } </code></pre> <p>Any ideas into how to fix it will be much appreciated!</p>
<p><strong>Time series</strong> is a sequence taken at successive equally spaced points in time according to <a href="https://en.wikipedia.org/wiki/Time_series" rel="nofollow noreferrer">wikipedia</a>. The goal of the neural network NN used on time series is to find the pattern between the series of data. Convolutiona Neural Networks CNN are rarely if not never used on this kind of data. Other NN often used are RNN and LSTM. If we are interested in finding a pattern in a series of data, the inputShape can't be [1, 1]; otherwise it will mean finding a pattern on a unique point. It can be done theoretically, but in reality it does not capture the essence of the time series.</p> <p>The model used here is using CNN with average pooling layer. Of course, a pooling layer cannot be applied on a layer with a pooling size bigger than the shape of the layer thus throwing the error:</p> <blockquote> <p>Error: Negative dimension size caused by adding layer average_pooling1d_AveragePooling1D1 with input shape [,0,128]</p> </blockquote> <p>The last error:</p> <blockquote> <p>Error: Size(100) must match the product of shape 100,6,1</p> </blockquote> <p>indicates a mismatch of the size of the tensors.</p> <p>100 * 6 * 1 = 600 elements in the tensor (size =600) whereas the input tensor has 100 elements resulting in the error.</p>
tensorflow|tensorflow.js
1
4,790
46,103,859
Reshape list/sequence of integers and arrays to array
<p>I have multiple lists of format (count_i, 1dim-array_i) and I would like to convert them to arrays such that they read</p> <p>[count_i, 1dim-array_i[0], 1dim-array_i[1], 1dim-array_i[2], ... , 1dim-array_i[n]]</p> <p>If it helps to understand what I mean, here is an example list:</p> <pre><code>mylist = [[0, array([ 1. , 0.73475787, 0.36224658, 0.08579446, -0.11767365, -0.09927562, 0.17444341, 0.47212111, 1.00584593, 1.69147789, 1.89421069, 1.4718292 ])], [2, array([ 1. , 0.68744907, 0.38420843, 0.25922927, 0.04719614, 0.00841919, 0.21967246, 0.22183329, 0.28910002, 0.54637077, -0.04389335, -1.33445338])], [3, array([ 1. , 0.77854922, 0.41093192, 0.0713814 , -0.08194854, -0.07885753, 0.1491798 , 0.56297583, 1.0759857 , 1.57149366, 1.37958867, 0.64409152])], [4, array([ 1. , 0.35988801, 0.18939934, 0.45618952, 0.24415997, -0.33527807, -0.35296085, -0.41893959, -0.48589674, -0.66222111, -0.58601528, -1.14922484])], [5, array([ 1. , 0.09182989, 0.14988215, -0.1272845 , 0.12154707, -0.01194815, -0.06136953, 0.18783772, 0.46631855, 0.78850281, 0.64755372, 0.69757144])]] </code></pre> <p>I have tried (for one of those lists)</p> <pre><code>mylist_sorted = np.ones((len(mylist),len(arrays)+1)) for i in range(len(mylist)): mylist_sorted[i] = [i,[mylist[i][1][j] for j in range(len(arrays))]] </code></pre> <p>but this obviously gave me</p> <pre><code>ValueError: cannot copy sequence with size 2 to array axis with dimension n+1 </code></pre> <p>Functions like numpy.reshape didn't help for sequences either...</p> <p>What's the smartest way to accomplish this?</p> <p>Many thanks!</p>
<p>Your <em>list comprehension</em> generates <em>lists with two elements</em> where the second one lis a list. For example for <code>i = 1</code>, it generates:</p> <pre><code>&gt;&gt;&gt; [i,[mylist[i][1][j] for j in range(12)]] [1, [1.0, 0.68744907, 0.38420842999999999, 0.25922927000000001, 0.047196139999999998, 0.00841919, 0.21967246000000001, 0.22183328999999999, 0.28910002000000001, 0.54637077000000001, -0.043893349999999998, -1.33445338]] </code></pre> <p>You can however make things easier by writing:</p> <pre><code><b>mylist_sorted[:,0] = np.arange(len(mylist))</b> for i in range(len(mylist)): mylist_sorted[i<b>,1:]</b> = <b>mylist[i][1]</b></code></pre> <p>With your sample input, this produces:</p> <pre><code>&gt;&gt;&gt; mylist_sorted array([[ 0. , 1. , 0.73475787, 0.36224658, 0.08579446, -0.11767365, -0.09927562, 0.17444341, 0.47212111, 1.00584593, 1.69147789, 1.89421069, 1.4718292 ], [ 1. , 1. , 0.68744907, 0.38420843, 0.25922927, 0.04719614, 0.00841919, 0.21967246, 0.22183329, 0.28910002, 0.54637077, -0.04389335, -1.33445338], [ 2. , 1. , 0.77854922, 0.41093192, 0.0713814 , -0.08194854, -0.07885753, 0.1491798 , 0.56297583, 1.0759857 , 1.57149366, 1.37958867, 0.64409152], [ 3. , 1. , 0.35988801, 0.18939934, 0.45618952, 0.24415997, -0.33527807, -0.35296085, -0.41893959, -0.48589674, -0.66222111, -0.58601528, -1.14922484], [ 4. , 1. , 0.09182989, 0.14988215, -0.1272845 , 0.12154707, -0.01194815, -0.06136953, 0.18783772, 0.46631855, 0.78850281, 0.64755372, 0.69757144]]) </code></pre> <p><strong>EDIT</strong>: In case the "counts" are not 0, 1, 2, ... you can alter the above code fragment:</p> <pre><code>for <b>i, (left, right)</b> in <b>enumerate(mylist)</b>: <b>mylist_sorted[i,0] = left</b> mylist_sorted[i,1:] = <b>right</b></code></pre>
python|arrays|list|numpy|reshape
0
4,791
46,091,924
Python: How to drop a row whose particular column is empty/NaN?
<p>I have a csv file. I read it:</p> <pre><code>import pandas as pd
data = pd.read_csv('my_data.csv', sep=',')
data.head()
</code></pre> <p>It has output like:</p> <pre><code>id city department sms category
01 khi revenue NaN 0
02 lhr revenue good 1
03 lhr revenue NaN 0
</code></pre> <p>I want to remove all the rows where the <code>sms</code> column is empty/NaN. What is an efficient way to do it?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="noreferrer"><code>dropna</code></a> with parameter <code>subset</code> for specify column for check <code>NaN</code>s:</p> <pre><code>data = data.dropna(subset=['sms']) print (data) id city department sms category 1 2 lhr revenue good 1 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="noreferrer"><code>notnull</code></a>:</p> <pre><code>data = data[data['sms'].notnull()] print (data) id city department sms category 1 2 lhr revenue good 1 </code></pre> <p>Alternative with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="noreferrer"><code>query</code></a>:</p> <pre><code>print (data.query("sms == sms")) id city department sms category 1 2 lhr revenue good 1 </code></pre> <p><strong>Timings</strong></p> <pre><code>#[300000 rows x 5 columns] data = pd.concat([data]*100000).reset_index(drop=True) In [123]: %timeit (data.dropna(subset=['sms'])) 100 loops, best of 3: 19.5 ms per loop In [124]: %timeit (data[data['sms'].notnull()]) 100 loops, best of 3: 13.8 ms per loop In [125]: %timeit (data.query("sms == sms")) 10 loops, best of 3: 23.6 ms per loop </code></pre>
python|pandas|dataframe
72
4,792
45,879,776
TensorFlow how to make results reproducible for `tf.nn.sampled_softmax_loss`
<p>I would like to get reproducible results for my tensorflow runs. The way I'm trying to make this happen is to set up the numpy and tensorflow seeds:</p> <pre><code>import numpy as np rnd_seed = 1 np.random.seed(rnd_seed) import tensorflow as tf tf.set_random_seed(rnd_seed) </code></pre> <p>As well as make sure that the weights of the neural network, that I initialized with <code>tf.truncated_normal</code> also use that seed: <code>tf.truncated_normal(..., seed=rnd_seed)</code></p> <p>For reasons that are beyond the scope of this question, I'm using the sampled softmax loss function, <code>tf.nn.sampled_softmax_loss</code>, and unfortunately, I'm not able to control the stochasticity of this function with a random seed.</p> <p>By a look at the TensorFlow documentation of this function (<a href="https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss</a>), I can see that parameter <code>sampled_values</code> should be the only parameter that affects randomization, but I'm not able to understand how to actually use a seed.</p> <p>[EDITED] This is (part of) my script</p> <pre><code>import numpy as np # set a seed so that the results are consistent rnd_seed = 1 np.random.seed(rnd_seed) import tensorflow as tf tf.set_random_seed(rnd_seed) embeddings_ini = np.random.uniform(low=-1, high=1, size=(self.vocabulary_size, self.embedding_size)) with graph.as_default(), tf.device('/cpu:0'): train_dataset = tf.placeholder(tf.int32, shape=[None, None]) train_labels = tf.placeholder(tf.int32, shape=[None, 1]) valid_dataset = tf.constant(self.valid_examples, dtype=tf.int32) # Variables. initial_embeddings = tf.placeholder(tf.float32, shape=(self.vocabulary_size, self.embedding_size)) embeddings = tf.Variable(initial_embeddings) softmax_weights = tf.Variable( tf.truncated_normal([self.vocabulary_size, self.embedding_size], stddev=1.0 / math.sqrt(self.embedding_size), seed=rnd_seed)) softmax_biases = tf.Variable(tf.zeros([self.vocabulary_size])) # Model. # Look up embeddings for inputs. if self.model == "skipgrams": # Skipgram model embed = tf.nn.embedding_lookup(embeddings, train_dataset) elif self.model == "cbow": # CBOW Model embeds = tf.nn.embedding_lookup(embeddings, train_dataset) embed = tf.reduce_mean(embeds, 1, keep_dims=False) # Compute the softmax loss, using a sample of the negative labels each time. loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed, labels=train_labels, num_sampled=self.num_sampled, num_classes=self.vocabulary_size)) </code></pre>
<p>I finally found out how to make results reproducible. As @Anis suggested, I should have set the graph-level seed, and this can be done by:</p> <pre><code>with graph.as_default(), tf.device('/cpu:0'):
    tf.set_random_seed(1234)
</code></pre>
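<p>A quick way to sanity-check this, independent of the sampled-softmax model: building the same graph twice with the same graph-level seed should yield identical draws.</p> <pre><code>import tensorflow as tf

def run_once():
    graph = tf.Graph()
    with graph.as_default():
        tf.set_random_seed(1234)
        r = tf.random_uniform([3])
        with tf.Session(graph=graph) as sess:
            return sess.run(r)

print(run_once())
print(run_once())  # prints the same values both times
</code></pre>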
python|numpy|random|tensorflow
0
4,793
23,293,011
How to plot a superimposed bar chart using matplotlib in python?
<p>I want to plot a bar chart or a histogram using matplotlib. I don't want a stacked bar plot, but a superimposed barplot of two lists of data, for instance I have the following two lists of data with me:</p> <p>Some code to begin with :</p> <pre><code>import matplotlib.pyplot as plt from numpy.random import normal, uniform highPower = [1184.53,1523.48,1521.05,1517.88,1519.88,1414.98,1419.34, 1415.13,1182.70,1165.17] lowPower = [1000.95,1233.37, 1198.97,1198.01,1214.29,1130.86,1138.70, 1104.12,1012.95,1000.36] plt.hist(highPower, bins=10, histtype='stepfilled', normed=True, color='b', label='Max Power in mW') plt.hist(lowPower, bins=10, histtype='stepfilled', normed=True, color='r', alpha=0.5, label='Min Power in mW') </code></pre> <p>I want to plot these two lists against the number of values in the two lists such that I am able to see the variation per reading.</p>
<p>You can produce a superimposed bar chart using <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar" rel="noreferrer"><code>plt.bar()</code></a> with the <code>alpha</code> keyword as shown below.</p> <p>The <code>alpha</code> controls the transparency of the bar. </p> <p><strong>N.B.</strong> when you have two overlapping bars, one with an alpha &lt; 1, you will get a mixture of colours. As such the bar will appear purple even though the legend shows it as a light red. To alleviate this I have modified the width of one of the bars, this way even if your powers should change you will still be able to see both bars.</p> <p><a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xticks" rel="noreferrer"><code>plt.xticks</code></a> can be used to set the location and format of the x-ticks in your graph.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np width = 0.8 highPower = [1184.53,1523.48,1521.05,1517.88,1519.88,1414.98, 1419.34,1415.13,1182.70,1165.17] lowPower = [1000.95,1233.37, 1198.97,1198.01,1214.29,1130.86, 1138.70,1104.12,1012.95,1000.36] indices = np.arange(len(highPower)) plt.bar(indices, highPower, width=width, color='b', label='Max Power in mW') plt.bar([i+0.25*width for i in indices], lowPower, width=0.5*width, color='r', alpha=0.5, label='Min Power in mW') plt.xticks(indices+width/2., ['T{}'.format(i) for i in range(len(highPower))] ) plt.legend() plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/0hjX7.png" alt="Plot"></p>
python|numpy|matplotlib|plot|histogram
19
4,794
35,425,093
Reformat pandas DataFrame
<p>I have a <code>pandas</code>.<code>DataFrame</code> with the following data:</p> <pre><code> country branch Name salary mobile no emailid x a aa 250000 Null Null x b bb 350000 8976646410 xx@xx.com y c cc 450000 8777945411 yy@yy.com y d dd 589630 Null Null </code></pre> <p>Depending on certain criteria, I filter the <code>DataFrame</code> (pseudocode):</p> <blockquote> <pre><code>if salary &lt;= 250000: Normal Employee elif salary &gt;= 250000 and salary &lt;= 600000: Experienced Employee </code></pre> </blockquote> <p>In doing this, I add a new column as follows:</p> <pre><code>normal = data_df['salary'] &lt;= 250000 experienced = (data_df['salary'] &gt; 250000) &amp; \ (data_df['customer_total_sales'] &lt;= 600000) data_df['position'] = np.where(normal, 'normal', np.where(experienced, 'experienced','unknown')) </code></pre> <p>Yet, I would like to display the <code>DataFrame</code> as follows, removing rows with the value <code>Null</code>:</p> <pre><code>country branch count_employee count_mobile_no count_email_id count_normal _employee count_experienced_employee x a 1 0 0 1 0 y c 1 1 1 0 1 </code></pre> <p>To count fields, I use the following code:</p> <pre><code>a = {'employee': ['count'], 'mobile_number': ['count'], 'customer_emailid': ['count']} </code></pre>
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow"><code>replace</code></a> <code>Null</code> to <code>NaN</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow"><code>agg</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>print data_df country branch Name salary mobile no emailid position 0 x a aa 250000 Null Null unknown 1 x b bb 350000 8976646410 xx@xx.com unknown 2 y c cc 450000 8777945411 yy@yy.com unknown 3 y d dd 589630 Null Null unknown data_df = data_df.replace('Null', np.nan) print data_df country branch Name salary mobile no emailid position 0 x a aa 250000 NaN NaN unknown 1 x b bb 350000 8976646410 xx@xx.com unknown 2 y c cc 450000 8777945411 yy@yy.com unknown 3 y d dd 589630 NaN NaN unknown df = data_df.groupby(['country', 'branch']).agg({'Name': 'count', 'mobile no':'count', 'emailid': 'count', 'position': 'count'}) print df.reset_index() country branch emailid position Name mobile no 0 x a 0 1 1 0 1 x b 1 1 1 1 2 y c 1 1 1 1 3 y d 0 1 1 0 </code></pre> <p>EDIT:</p> <p>If you need count positions by <code>category</code>, create <code>columns</code> for each category, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.count.html" rel="nofollow"><code>count</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> column <code>salary</code> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>print data_df country branch Name salary mobile no emailid 0 x a aa 250000 Null Null 1 x a aa 20000 Null Null 2 x b bb 350000 8976646410 xx@xx.com 3 y c cc 45000 8777945411 yy@yy.com 4 y d dd 589630 Null Null normal = data_df['salary'] &lt;= 20000 experienced = (data_df['salary'] &gt; 20000) &amp; (data_df['salary'] &lt;= 50000) unknown = data_df['salary'] &gt; 50000 data_df.loc[normal, 'position_normal'] = 'normal employee' data_df.loc[experienced,'position_experienced'] = 'experienced employee' data_df.loc[unknown,'position_unknown'] = 'unknown employee' print data_df country branch Name salary mobile no emailid position_normal \ 0 x a aa 250000 Null Null NaN 1 x a aa 20000 Null Null normal employee 2 x b bb 350000 8976646410 xx@xx.com NaN 3 y c cc 45000 8777945411 yy@yy.com NaN 4 y d dd 589630 Null Null NaN position_experienced position_unknown 0 NaN unknown employee 1 NaN NaN 2 NaN unknown employee 3 experienced employee NaN 4 NaN unknown employee </code></pre> <pre><code>#replace Null to NaN data_df = data_df.replace('Null', np.nan) df = data_df.groupby(['country', 'branch']).count() #remove column salary df = df.drop('salary', axis=1) df = df.reset_index() print df country branch Name mobile no emailid position_normal \ 0 x a 2 0 0 1 1 x b 1 1 1 0 2 y c 1 1 1 0 3 y d 1 0 0 0 position_experienced position_unknown 0 0 1 1 0 1 2 1 0 3 0 1 </code></pre>
python|pandas
2
4,795
35,693,687
Forcing dependence on a variable update
<p>Say I have some function <code>f</code> of variable <code>x</code>:</p> <pre><code>x = tf.Variable(1.0) fx = x*x </code></pre> <p>and an op which updates <code>x</code>:</p> <pre><code>new_x = x.assign(2.0) </code></pre> <p>and I want to get the value of <code>f</code> resulting from the updated <code>x</code>. I had thought that </p> <pre><code>with tf.control_dependencies([new_x,]): new_fx = tf.identity(fx) </code></pre> <p>would force <code>new_fx</code> to depend on the update <code>new_x</code>, but this doesn't seem to be the case:</p> <pre><code>init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) # prints 1.0, expected 4.0 print "new fx", sess.run(new_fx) </code></pre> <p>Is there some other way to define the updated value of <code>fx</code>?</p> <p>Obviously I could create a new independent copy by writing something like <code>new_fx = new_x * new_x</code> but this blows up the graph size, and also requires access to the definition of <code>fx</code>, which I'd prefer to treat as a black box. </p> <p><strong>Edit</strong>: to motivate this, here's a sketch of the code I want to write:</p> <pre><code># Hamiltonian Monte Carlo update, simplified def hmc_step(x, momentum, logpdf, n_steps=50): # x and momentum are Variables # logpdf is a Tensor with potentially complicated dependence on x grad = tf.gradients(logpdf, x)[0] # initial position new_x = x for i in range(n_steps): # update position new_x = x.assign(new_x + momentum) # update momentum using gradient at *current* position with tf.control_dependencies([new_x]): momentum = momentum + grad # DOESN'T WORK # DOES WORK BUT IS UGLY # new_logpdf = define_logpdf(new_x) # new_grad = tf.gradients(new_logpdf, new_x)[0] # momentum = momentum + new_grad # (do some stuff to accept/reject the new x) # .... return new_x </code></pre> <p>It feels really inelegant to define a new copy of logpdf and rederive gradients each time through the loop: it requires access to define_logpdf() and blows up the graph size by a factor of 50. Is there no better way to do this (barring some equivalent of theano.scan)?</p>
<p>The <code>with tf.control_dependencies([op])</code> block enforces a control dependency from <code>op</code> to the other ops <em>created</em> within the with block. In your case, <code>x*x</code> is created outside the block, and the <code>tf.identity</code> just gets the old value. Here is what you want:</p> <pre><code>with tf.control_dependencies([new_x,]):
    new_fx = x*x
</code></pre>
python|tensorflow
2
4,796
35,703,963
Trying to compute softmax values, getting AttributeError: 'list' object has no attribute 'T'
<p>First off, here is my code: </p> <pre><code>"""Softmax.""" scores = [3.0, 1.0, 0.2] import numpy as np def softmax(x): """Compute softmax values for each sets of scores in x.""" num = np.exp(x) score_len = len(x) y = [0] * score_len for index in range(1,score_len): y[index] = (num[index])/(sum(num)) return y print(softmax(scores)) # Plot softmax curves import matplotlib.pyplot as plt x = np.arange(-2.0, 6.0, 0.1) scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)]) plt.plot(x, softmax(scores).T, linewidth=2) plt.show() </code></pre> <p>Now looking at <a href="https://stackoverflow.com/questions/5741372/syntax-in-python-t">this</a> question, I can tell that T is the transpose of my list. However, I seem to be getting the error: </p> <blockquote> <p>AttributeError: 'list' object has no attribute 'T'</p> </blockquote> <p>I do not understand what's going on here. Is my understanding of this entire situation wrong. I'm trying to get through the Google Deep Learning course and I thought that I could Python by implementing the programs, but I could be wrong. I currently know a lot of other languages like C and Java, but new syntax always confuses me. </p>
<p>As described in the comments, it is essential that the output of <code>softmax(scores)</code> be an array, as lists do not have the <code>.T</code> attribute. Therefore if we replace the relevant bits in the question with the code below, we can access the <code>.T</code> attribute again. </p> <pre><code>num = np.exp(x) score_len = len(x) y = np.array([0]*score_len) </code></pre> <p>It must be noted that we need to use the <code>np.array</code> as non <code>numpy</code> libraries do not usually work with ordinary <code>python</code> libraries. </p>
python|list|numpy|attributes|softmax
4
4,797
28,833,074
Aggregate group with multi-level columns
<p>I have a grouped DataFrame which i want to aggregate with a dictionary of functions which should map to certain columns. For single-level columns this is straightforward with <code>groups.agg({'colname': &lt;function&gt;})</code>. I am struggling however to get this working with multi-level columns, from which i only want to refer to a single level. </p> <p>Here is an example.</p> <p>Lets make some sample data:</p> <pre><code>import itertools import pandas as pd lev1 = ['foo', 'bar', 'baz'] lev2 = list('abc') n = 6 df = pd.DataFrame({k: np.random.randn(n) for k in itertools.product(lev1,lev2)}, index=pd.DatetimeIndex(start='2015-01-01', periods=n, freq='11D')) </code></pre> <p>That looks like:</p> <pre><code> bar baz foo a b c a b c a b c 2015-01-01 -1.11 2.12 -1.00 0.18 0.14 1.24 0.73 0.06 3.66 2015-01-12 -1.43 0.75 0.38 0.04 -0.33 -0.42 1.00 -1.63 -1.35 2015-01-23 0.01 -1.70 -1.39 0.59 -1.10 -1.17 -1.51 -0.54 -1.11 2015-02-03 0.93 0.70 -0.12 1.07 -0.97 -0.45 -0.19 0.11 -0.79 2015-02-14 0.30 0.49 0.60 -0.28 -0.38 1.11 0.15 0.78 -0.58 2015-02-25 -0.26 0.51 0.82 0.05 -1.45 0.14 0.53 -0.33 -1.35 </code></pre> <p>And grouping by month with:</p> <pre><code>groups = df.groupby(pd.TimeGrouper('MS')) </code></pre> <p>Define some functions based on the top-level in the columns:</p> <pre><code>funcs = {'bar': np.sum, 'baz': np.mean, 'foo': np.min} </code></pre> <p>However, doing <code>groups.agg(funcs)</code> results in a KeyError, because it expects a key for each level, which makes sense.</p> <p>This does work for example:</p> <pre><code>groups.agg({('bar', 'a'): np.mean}) bar a 2015-01-01 -0.845554 2015-02-01 0.324897 </code></pre> <p>But i don't want to specify each key on the second level. So I'm looking for something which would work like:</p> <pre><code>groups.agg({('bar', slice(None)): np.mean}) </code></pre> <p>But that doesn't work of course since a <code>slice</code> is not hashable, and therefore cant be put in a dictionary.</p> <p>A workaround would be:</p> <pre><code>def multifunc(group): func = funcs[group.name[0]] return func(group) groups.agg(multifunc) </code></pre> <p>But that is not very readable nor does it seem "Pandonic" to me. Also it doesnt allow for multiple functions on the same column as the <code>agg</code> function does. There must a better/standard way of performing such a task, it cant be very uncommon.</p>
<p>I don't think there is a short-cut for this. Fortunately, it is not too hard to build the desired dict explicitly:</p> <pre><code>result = groups.agg( {(k1, k2): funcs[k1] for k1, k2 in itertools.product(lev1,lev2)}) </code></pre> <hr> <pre><code>import itertools import numpy as np import pandas as pd lev1 = ['foo', 'bar', 'baz'] lev2 = list('abc') n = 6 df = pd.DataFrame( {k: np.random.randn(n) for k in itertools.product(lev1,lev2)}, index=pd.DatetimeIndex(start='2015-01-01', periods=n, freq='11D')) groups = df.groupby(pd.TimeGrouper('MS')) funcs = {'bar': np.sum, 'baz': np.mean, 'foo': np.min} result = groups.agg( {(k1, k2): funcs[k1] for k1, k2 in itertools.product(lev1,lev2)}) result = result.sortlevel(axis=1) print(result) </code></pre> <p>yields</p> <pre><code> bar baz \ a b c a b c 2015-01-01 -2.144890 1.075044 1.038169 -0.460649 -0.309966 -0.211147 2015-02-01 1.313744 0.247171 1.049129 -0.174827 -0.437982 -0.196427 foo a b c 2015-01-01 -1.358973 -1.846916 -0.896234 2015-02-01 -1.354953 -0.699607 0.288214 </code></pre>
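<p>An alternative sketch (not from the original answer): the mapping can also be built directly from the DataFrame's own columns, which avoids the <code>itertools.product</code> call and still works if the column index is not a full product of the two levels: </p> <pre><code># Look up the aggregation function by the first (top) level of each existing column key.
result = groups.agg({col: funcs[col[0]] for col in df.columns})
</code></pre>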
python|pandas
4
4,798
50,690,276
Difference between tf.random_normal() and tf.random_normal_initializer()
<p>I'm learning TensorFlow these days. When using random variables, I noticed there are two different versions. One is an API such as <code>tf.random_normal()</code> and the other is an API like <code>tf.random_normal_initializer()</code>. I think those two are doing exactly the same thing. The following is an example.</p> <pre><code>random_normal = tf.random_normal_initializer(0.0, 1.0, seed=0)
a = random_normal([10])
b = tf.random_normal([10], 0.0, 1.0)

with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
</code></pre> <p>I have two questions.</p> <p>Q1. If these two are doing exactly the same thing, why did they make duplicate APIs?</p> <p>Q2. I assume that class names in TensorFlow start with an upper case letter, such as <code>tf.Variable</code>, and op names start with a lower case letter. Then why does <code>tf.random_normal_initializer</code> start with a lower case letter when it is actually a class, not an op?</p>
<p>Did you have a look at the respective implementations? The initializer (defined <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/ops/init_ops.py" rel="nofollow noreferrer">here</a>) is a class that internally calls <code>tf.random_normal()</code>, which is just an operation (defined <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/ops/random_ops.py" rel="nofollow noreferrer">here</a>). The attributes of the class instance, for example, can be stored and reused later, whereas the parameters of the op cannot. </p> <p>Concerning your second question: your assumption is right, but the lower-case name is just an alias. In the corresponding file it is written:</p> <pre><code>@tf_export("keras.initializers.RandomNormal",
           "initializers.random_normal",
           "random_normal_initializer")
class RandomNormal(Initializer):
  """Initializer that generates tensors with a normal distribution.
</code></pre> <p>You can see that this is consistent with the naming convention.</p> <p>Hope this helps.</p>
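<p>To illustrate the practical difference (a sketch against the TensorFlow 1.x API, not part of the original answer): an initializer is a reusable, configurable object that is handed to a variable, whose shape is supplied later, whereas the op immediately builds a tensor of a fixed shape: </p> <pre><code>import tensorflow as tf

# Initializer: a reusable object; the shape comes from the variable it initializes.
init = tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=0)
v = tf.get_variable("v", shape=[10], initializer=init)

# Op: creates a tensor of the given shape right away.
t = tf.random_normal([10], mean=0.0, stddev=1.0, seed=0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v))  # the variable keeps its initialized value across runs
    print(sess.run(t))  # the op draws a new sample on every run
</code></pre>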
tensorflow|random
1
4,799
20,850,219
Speed performance improvement needed. Using nested for loops
<p>I have a 2D array shaped (1002, 1004). For this question it could be generated via:</p> <pre><code>a = numpy.arange( (1002 * 1004) ).reshape(1002, 1004)
</code></pre> <p>What I do is generate two lists. The lists are generated via:</p> <pre><code>theta = (61/180.) * numpy.pi

x = numpy.arange(a.shape[0])  #(1002, )
y = numpy.arange(a.shape[1])  #(1004, )

max_y_for_angle = int(y[-1] - (x[-1] / numpy.tan(theta)))
</code></pre> <p>The first list is given by:</p> <pre><code>x_list = numpy.linspace(0, x[-1], len(x))
</code></pre> <p>Note that this list is identical to x. However, for illustration purposes and to give a clear picture I declared this 'list'.</p> <p>What I now want to do is create a y_list which is as long as x_list. I want to use these lists to pick elements from my 2D array. After I determine and store the sum of the elements, I want to shift my y_list by one and determine the sum of the elements again. I want to do this for max_y_for_angle iterations. The code I have is:</p> <pre><code>sum_list = numpy.zeros(max_y_for_angle)

for idx in range(max_y_for_angle):
    y_list = numpy.linspace((len(x) / numpy.tan(theta)) + idx, y[0] + idx, len(x))

    elements = 0
    for i in range(len(x)):
        elements += a[x_list[i]][y_list[i]]

    sum_list[idx] = elements
</code></pre> <p>This operation works. However, as one might imagine, this takes a lot of time due to the for loop within a for loop, and the large number of iterations of those loops does not help either. How can I speed things up? The operation now takes about 1 s; I'm looking for something below 200 ms.</p> <p>Is it maybe possible to return a list of the 2D array elements when the inputs are x_list and y_list? I tried the following, but this does not work:</p> <pre><code>a[x_list][y_list]
</code></pre> <p>Thank you very much!</p>
<p>It's possible to return an <em>array</em> of elements from a 2d array by doing <code>a[x, y]</code> where <code>x</code> and <code>y</code> are both integer arrays. This is called advanced indexing or sometimes <a href="http://wiki.scipy.org/Tentative_NumPy_Tutorial#head-0dffc419afa7d77d51062d40d2d84143db8216c2" rel="nofollow">fancy indexing</a>. In your question you mention lists a lot but never actually use any lists in your code; x_list and y_list are both arrays. Also, numpy multidimensional arrays are generally indexed <code>a[i, j]</code> even when <code>i</code> and <code>j</code> are integer values. </p> <p>Using fancy indexing along with some cleanup of your code produced this:</p> <pre><code>import numpy

def line_sums(a, theta):
    xsize, ysize = a.shape
    tan_theta = numpy.tan(theta)
    max_y_for_angle = int(ysize - 1 - ((xsize - 1) / tan_theta))
    x = numpy.arange(xsize)
    y_base = numpy.linspace(xsize / tan_theta, 0, xsize)
    y_base = y_base.astype(int)
    sum_list = numpy.zeros(max_y_for_angle)
    for idx in range(max_y_for_angle):
        # fancy indexing: gather one element per row along the shifted line, then sum
        sum_list[idx] = a[x, y_base + idx].sum()
    return sum_list

a = numpy.arange( (1002 * 1004) ).reshape(1002, 1004)
theta = (61/180.) * numpy.pi
sum_list = line_sums(a, theta)
</code></pre> <p>Hope that helps.</p>
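<p>A further sketch (not from the original answer): the remaining Python-level loop can also be removed by broadcasting the row indices against a whole block of shifted column indices, at the cost of materialising one <code>(max_y_for_angle, xsize)</code> index array: </p> <pre><code>import numpy

def line_sums_vectorized(a, theta):
    xsize, ysize = a.shape
    tan_theta = numpy.tan(theta)
    max_y_for_angle = int(ysize - 1 - ((xsize - 1) / tan_theta))
    x = numpy.arange(xsize)
    y_base = numpy.linspace(xsize / tan_theta, 0, xsize).astype(int)
    offsets = numpy.arange(max_y_for_angle)
    # (1, xsize) row indices broadcast against (max_y_for_angle, xsize) column indices,
    # so all shifted lines are gathered at once; summing along axis 1 gives the totals.
    cols = y_base[None, :] + offsets[:, None]
    return a[x[None, :], cols].sum(axis=1)

sum_list = line_sums_vectorized(a, theta)
</code></pre> <p>Whether this actually beats the fancy-indexing loop above depends on the array sizes and available memory, so it is worth timing both on the real data.</p>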
python|arrays|performance|for-loop|numpy
1