Unnamed: 0 (int64, 0 to 378k)
| id (int64, 49.9k to 73.8M)
| title (stringlengths 15 to 150)
| question (stringlengths 37 to 64.2k)
| answer (stringlengths 37 to 44.1k)
| tags (stringlengths 5 to 106)
| score (int64, -10 to 5.87k)
|
|---|---|---|---|---|---|---|
374,300
| 26,945,812
|
Compute the product of neighborhood for each cell in a matrix with numpy/scipy
|
<p>I'm trying to implement an image processing algorithm which involves computing the product of 4-adjacent neighborhood for each cell. That is, to compute a new matrix Y for X where <code>y[i, j] = x[i-1, j] * x[i, j-1] * x[i+1, j] * x[i, j+1]</code>. Out-of-bound neighbors should be ignored.</p>
<p>Now I can only think of this approach: use <code>scipy.ndimage.filters.correlate</code> and pass in weights with zeros and one 1 to get four matrices, each containing the neighbor for each cell in a direction. For example, passing in <code>weight = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]</code> gives <code>a[i, j] = x[i-1, j]</code>, and with other weights I can get <code>b[i, j] = x[i, j-1]</code>, <code>c[i, j] = x[i+1, j]</code>, <code>d[i, j] = x[i, j+1]</code>. Then I use <code>np.multiply</code> to compute the product of these four matrices. </p>
<p>However, this approach is a bit too slow, and I can't ignore the boundaries. Is there another way to do it with numpy/scipy so I don't have to resort to for loops?</p>
|
<p>I think this better matches what you're asking for:</p>
<pre><code>import numpy as np
x = np.array([
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 1],
[2, 3, 4, 5, 6],
[7, 8, 9, 1, 2]
])
y = np.ones_like(x)
y[+1:, :] *= x[:-1, :]
y[:-1, :] *= x[+1:, :]
y[:, +1:] *= x[:, :-1]
y[:, :-1] *= x[:, +1:]
y
#>>> array([[ 12, 21, 64, 135, 4],
#>>> [ 14, 288, 756, 160, 270],
#>>> [ 126, 448, 1080, 216, 10],
#>>> [ 16, 189, 32, 90, 6]])
</code></pre>
<p>Note that the first <code>*=</code> can be an assignment if you need extra speed.</p>
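<p>For reference, a minimal sketch of that variant: the first in-place multiply becomes a plain assignment, so only three multiplications remain (the first row keeps the 1 it got from <code>ones_like</code>, since it has no top neighbour):</p>
<pre><code>import numpy as np

x = np.array([
    [1, 2, 3, 4, 5],
    [6, 7, 8, 9, 1],
    [2, 3, 4, 5, 6],
    [7, 8, 9, 1, 2]
])

y = np.ones_like(x)
y[+1:, :] = x[:-1, :]    # assignment instead of *=; row 0 has no top neighbour
y[:-1, :] *= x[+1:, :]
y[:, +1:] *= x[:, :-1]
y[:, :-1] *= x[:, +1:]
</code></pre>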
|
python|numpy
| 4
|
374,301
| 26,934,349
|
How to assign scipy.sparse matrix to NumPy array via indexing?
|
<p>When I try to assign a <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/sparse.html" rel="nofollow"><code>scipy.sparse</code></a> matrix <code>s</code> (any of the available sparse types) to a NumPy array <code>a</code> like this:</p>
<pre><code>a[:] = s
</code></pre>
<p>I get a <code>TypeError</code>:</p>
<blockquote>
<p>TypeError: float() argument must be a string or a number</p>
</blockquote>
<p>Is there a way to get around this?</p>
<p>I know about the <code>todense()</code> and <code>toarray()</code> methods, but I'd really like to avoid the unnecessary copy and I'd prefer to use the same code for both NumPy arrays and SciPy sparse matrices.
For now, I'm not concerned with getting the values from the sparse matrix being inefficient.</p>
<p>Is there probably some kind of wrapper around sparse matrices that works with NumPy indexing assignment?</p>
<p>If not, any advice how I could build such a thing by myself?</p>
<p>Is there a different sparse array library that cooperates with NumPy in this situation?</p>
<p><strong>UPDATE:</strong></p>
<p>I poked around in the NumPy sources and, searching for the error message string, I think I found the section where the indexing assignment happens in <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/arraytypes.c.src#L187" rel="nofollow"><code>numpy/core/src/multiarray/arraytypes.c.src</code> around line 187</a> in the function <code>@TYPE@_setitem()</code>.</p>
<p>I still don't really get it, but at some point, the <code>float()</code> function seems to be called (if <code>a</code> is a floating-point array). So I tried to monkey-patch one of the SciPy sparse matrix classes to allow this function to be called:</p>
<pre><code>import scipy.sparse

s = scipy.sparse.dok_matrix((5, 1))

def myfloat(self):
    assert self.shape == (1, 1)
    return self[0, 0]

scipy.sparse.dok.dok_matrix.__float__ = myfloat
a[:] = s  # a is the dense NumPy array from above
</code></pre>
<p>Sadly, this doesn't work because <code>float()</code> is called on the whole sparse matrix and not on the individual items thereof.</p>
<p>So I guess my new question is: how can I further change the sparse matrix class to make NumPy iterate over all the items and call <code>float()</code> on each of them?</p>
<p><strong>ANOTHER UPDATE:</strong></p>
<p>I found a sparse array module on Github (<a href="https://github.com/FRidh/sparse" rel="nofollow">https://github.com/FRidh/sparse</a>), which allows assignment to a NumPy array. Sadly, the features of the module are quite limited (e.g. slicing doesn't really work yet), but it might help to understand how assigning to NumPy arrays can be achieved.
I'll investigate that further ...</p>
<p><strong>YET ANOTHER UPDATE:</strong></p>
<p>I did some more digging and found that a more interesting source file is probably <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/ctors.c" rel="nofollow"><code>numpy/core/src/multiarray/ctors.c</code></a>.
I suspect that the function <code>PySequence_Check()</code> (<a href="https://docs.python.org/3.4/c-api/sequence.html#c.PySequence_Check" rel="nofollow">docs</a>/<a href="https://github.com/python/cpython/blob/c71e8b81f1f4d349d1a24a6fe162cbbecedff8f0/Objects/abstract.c#L1370" rel="nofollow">code</a>) is called sometime during the assignment. The simple sparse array class from <a href="https://github.com/FRidh/sparse" rel="nofollow">https://github.com/FRidh/sparse</a> passes the test, but it looks like the sparse matrix classes from SciPy don't (although in my opinion they are sequences).</p>
<p>They get checked for <code>__array_struct__</code>, <code>__array_interface__</code> and <code>__array__</code>, and then it's somehow decided that they are not sequences.
The attributes <code>__getitem__</code> and <code>__len__</code> (which all the sparse array classes have!) are not checked.</p>
<p>This leads me to yet another question: How can I manipulate the sparse matrix classes (or objects thereof) in a way that they pass <code>PySequence_Check()</code>?</p>
<p>I think as soon as they are recognized as sequences, assignment should work, because <code>__getitem__()</code> and <code>__len__()</code> should be sufficient for that.</p>
|
<p>As mentioned in a comment to my question, the sequence interface won't work for sparse <em>matrices</em>, because they don't lose a dimension when indexed with a single number.
To try it anyway, I created a very limited quick-and-dirty sparse <em>array</em> class in pure Python, which, when indexed with a single number, returns a "row" class (which holds a view to the original data), which again can be indexed with a single number to yield the actual value at this index. Using an instance <code>s</code> of my class, assigning to a NumPy array <code>a</code> works exactly as requested:</p>
<pre><code>a[:] = s
</code></pre>
<p>I expected this to be somewhat inefficient, but it is really, really, really, extremely slow. Assigning a 500,000 x 100 sparse array took several minutes!
The good news, though, is that no full-sized temporary array is created during the assignment. The memory usage stays about constant during the assignment (while one of the CPUs maxes out).</p>
<p>So this is basically one solution to the original question.</p>
<p>To make the assignment more efficient and still use no temporary copy of the dense array data, NumPy would have to internally do something similar to</p>
<pre><code>s.toarray(out=a)
</code></pre>
<p>As far as I know, there is currently no way to get NumPy to do that.</p>
<p>However, there is a way to do something very similar, by providing an <code>__array__()</code> method that returns a NumPy array. Incidentally, SciPy sparse matrices already have such a method, just with a different name: <code>toarray()</code>. So I just renamed it:</p>
<pre><code>scipy.sparse.dok_matrix.__array__ = scipy.sparse.dok_matrix.toarray
a[:] = s
</code></pre>
<p>This works like a charm (also with the other sparse matrix classes) and is totally fast!</p>
<p>According to my limited understanding of the situation, this should create a temporary NumPy array with the same size as <code>a</code> which holds all the values from <code>s</code> (and many zeros) and which is then assigned to <code>a</code>.
But strangely, even when I use a very large <code>a</code> that occupies nearly all my available RAM, the assignment still happens very quickly and no additional RAM is used.</p>
<p>So I guess this is another, much better solution to my original question.</p>
<p>Which leaves another question: why does this work without a temporary array?</p>
|
python|numpy|scipy
| 1
|
374,302
| 14,905,443
|
Can I avoid using `asmatrix`?
|
<p>Is there any way for me to create matrices directly and not have to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.asmatrix.html" rel="nofollow"><code>asmatrix</code></a>? From what I can see, all of the typical matrix functions (<code>ones</code>, <code>rand</code>, etc) in Numpy return arrays, not matrices, which means (according to the documentation) that <code>asmatrix</code> will copy the data. Is there any way to avoid this?</p>
|
<p>According to the documentation:</p>
<blockquote>
<p>Unlike matrix, asmatrix does not make a copy if the input is already a
matrix or an ndarray. Equivalent to matrix(data, copy=False).</p>
</blockquote>
<p>So, <code>asmatrix</code> does <em>not</em> copy the data if it doesn't need to:</p>
<pre><code>>>> import numpy as np
>>> a = np.arange(9).reshape((3,3))
>>> b = np.asmatrix(a)
>>> b.base is a
True
>>> a[0] = 3
>>> b
matrix([[3, 3, 3],
[3, 4, 5],
[6, 7, 8]])
</code></pre>
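<p>As for creating the matrix directly, a small sketch (this just restates the <code>matrix(data, copy=False)</code> equivalence quoted above, so no extra copy is made either way):</p>
<pre><code>>>> import numpy as np
>>> a = np.arange(9).reshape((3,3))
>>> m = np.matrix(a, copy=False)   # same effect as np.asmatrix(a)
>>> m.base is a
True
</code></pre>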
|
python|numpy
| 1
|
374,303
| 14,447,925
|
Iterating and Writing Pandas Dataframe NaNs back to MySQL
|
<p>I'm attempting to write the results of a regression back to MySQL, but am having problems iterating through the fitted values and getting the NaNs to write as null values. Originally, I did the iteration this way:</p>
<pre><code>for i in dataframe:
    cur = cnx.cursor()
    query = ("UPDATE Regression_Data.Input SET FITTEDVALUES="+(dataframe['yhat'].__str__())+" where timecount="+(dataframe['timecount'].__str__())+";")
    cur.execute(query)
    cnx.commit()
    cur.close()
</code></pre>
<p>.....which MySQL threw back to me, saying: </p>
<pre><code> "mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'NaN'
</code></pre>
<p>So, I've been trying to filter out the NaNs by only asking Python to commit when yhat does not equal NaN:</p>
<pre><code>for i in dataframe:
    if cleandf['yhat']>(-1000):
        cur = cnx.cursor()
        query = ("UPDATE Regression_Data.Input SET FITTEDVALUES="+(dataframe['yhat'].__str__())+" where timecount="+(dataframe['timecount'].__str__())+";")
        cur.execute(query)
        cnx.commit()
        cur.close()
</code></pre>
<p>But then I get this: </p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>So, I try to get around it with this in my above syntax:</p>
<pre><code>if cleandf['yhat'][i]>(-1000):
</code></pre>
<p>but then get this:</p>
<pre><code>ValueError: Can only tuple-index with a MultiIndex
</code></pre>
<p>And then I tried adding <code>iterrows()</code> to both, as in:</p>
<pre><code>for i in dataframe.iterrows():
    if cleandf['yhat'][i]>(-1000):
</code></pre>
<p>but get the same problems as above.</p>
<p>I'm not sure what I'm doing wrong here, but assume it's something with iterating in Pandas DataFrames. But, even if I got the iteration right, I would want to write Nulls into SQL where the NaN appeared.</p>
<p>So, how do you think I should do this? </p>
|
<p>I don't have a complete answer, but perhaps I have some tips that might help. I believe you are thinking of your <code>dataframe</code> as an object similar to a SQL record set. </p>
<pre><code>for i in dataframe
</code></pre>
<p>This will iterate over the column name strings in the dataframe. <code>i</code> will take on column names, not rows.</p>
<pre><code>dataframe['yhat']
</code></pre>
<p>This returns an entire column (a <code>pandas.Series</code>, which wraps a <code>numpy.ndarray</code>), not a single value. Therefore:</p>
<pre><code>dataframe['yhat'].__str__()
</code></pre>
<p>will give a string representation of an entire column that is useful for humans to read. It is certainly not a single value that can be converted to string for your query.</p>
<pre><code>if cleandf['yhat']>(-1000)
</code></pre>
<p>This gives an error, because again, <code>cleandf['yhat']</code> is an entire array of values, not just a single value. Think of it as an entire column, not the value from a single row.</p>
<pre><code>if cleandf['yhat'][i]>(-1000):
</code></pre>
<p>This is getting closer, but you really want <code>i</code> to be an integer here, not another column name.</p>
<pre><code>for i in dataframe.iterrows():
    if cleandf['yhat'][i]>(-1000):
</code></pre>
<p>Using <code>iterrows</code> seems like the right thing for you. However, <code>i</code> takes on an <code>(index, row)</code> tuple for each row, not an integer that can index into a column (<code>cleandf['yhat']</code> is a full column).</p>
<p>Also, note that pandas has better ways to check for missing values than relying on a huge negative number. Try something like this:</p>
<pre><code>non_missing_index = pandas.notnull(dataframe['yhat'])
cleandf = dataframe[non_missing_index]

for row in cleandf.iterrows():
    row_index, row_values = row
    query = ("UPDATE Regression_Data.Input SET FITTEDVALUES="+(row_values['yhat'].__str__())+" where timecount="+(row_values['timecount'].__str__())+";")
    execute_my_query(query)
</code></pre>
<p>You can implement <code>execute_my_query</code> better than I can, I expect. However, this solution is not quite what you want. You really want to iterate over all rows and do two types of inserts. Try this:</p>
<pre><code>for row in dataframe.iterrows():
    row_index, row_values = row
    if pandas.isnull(row_values['yhat']):
        pass # populate the 'null' insert query here
    else:
        query = ("UPDATE Regression_Data.Input SET FITTEDVALUES="+(row_values['yhat'].__str__())+" where timecount="+(row_values['timecount'].__str__())+";")
        execute_my_query(query)
</code></pre>
<p>Hope it helps.</p>
|
python|mysql|iteration|pandas
| 3
|
374,304
| 14,503,660
|
tuples are immutable, create lists before replacing their entries
|
<p>I was trying to do some simple manipulation of lists and numpy arrays and got stuck in some easy thing: </p>
<pre><code>a=np.arange(12)
a
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
a=np.reshape(a,(3,4))
a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
b=np.arange(12,24)
b
array([12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])
b=np.reshape(b,(3,4))
list1 = [(a,'a'),(b,'b')]
data = [(i, j) for i,j in list1]
</code></pre>
<p>When I tried to do:</p>
<pre><code>data[0][0]=np.delete(data[0][0], np.s_[-1::],0)
</code></pre>
<p>I got the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>But if I do:</p>
<pre><code>cop=np.delete(data[0][0], np.s_[-1::],0)
cop
array([[0, 1, 2, 3],
[4, 5, 6, 7]])
</code></pre>
<p>It works perfectly fine.</p>
<p>But I also can't do:</p>
<pre><code>data[0][0]=np.copy(cop)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>But if I check the types, both are actually arrays:</p>
<pre><code>type(cop)
<type 'numpy.ndarray'>
type(data[0][0])
<type 'numpy.ndarray'>
</code></pre>
<p>I couldn’t find the mistake for quite a few hours.</p>
|
<p>Then I realized that <code>data[0]</code> is actually a tuple, so its entries can't be reassigned.</p>
<p>So this is what solves the problem:</p>
<pre><code>data = [[i, j] for i,j in list1]
</code></pre>
<p>And then I can replace elements like <code>data[0][0]</code>.</p>
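<p>For completeness, a minimal sketch of the whole fix with the arrays from the question (inner lists instead of tuples, so the entries can be reassigned):</p>
<pre><code>import numpy as np

a = np.arange(12).reshape(3, 4)
b = np.arange(12, 24).reshape(3, 4)
list1 = [(a, 'a'), (b, 'b')]

data = [[i, j] for i, j in list1]                     # lists, not tuples
data[0][0] = np.delete(data[0][0], np.s_[-1::], 0)    # now this assignment is allowed
</code></pre>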
|
python|list|numpy|tuples
| 1
|
374,305
| 25,427,197
|
numpy: How to add a column to an existing structured array?
|
<p>I have a starting array such as:</p>
<pre><code>[(1, [-112.01268501699997, 40.64249414272372])
(2, [-111.86145708699996, 40.4945008710162])]
</code></pre>
<p>The first column is an <code>int</code> and the second is a <code>list</code> of <code>floats</code>. I need to add a <code>str</code> column called <code>'USNG'</code>.</p>
<p>I then create a structured numpy array, as such:</p>
<pre><code>dtype = numpy.dtype([('USNG', '|S100')])
x = numpy.empty(array.shape, dtype=dtype)
</code></pre>
<p>I want to append the <code>x</code> numpy array to the existing array as a new column, so I can output some information to that column for each row.</p>
<p>When I do the following:</p>
<pre><code>numpy.append(array, x, axis=1)
</code></pre>
<p>I get the following error:</p>
<pre><code>'TypeError: invalid type promotion'
</code></pre>
<p>I've also tried <a href="https://numpy.org/doc/stable/reference/generated/numpy.vstack.html" rel="noreferrer">vstack</a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.hstack.html" rel="noreferrer">hstack</a></p>
|
<p>You have to create a new dtype that contains the new field.</p>
<p>For example, here's <code>a</code>:</p>
<pre><code>In [86]: a
Out[86]:
array([(1, [-112.01268501699997, 40.64249414272372]),
(2, [-111.86145708699996, 40.4945008710162])],
dtype=[('i', '<i8'), ('loc', '<f8', (2,))])
</code></pre>
<p><code>a.dtype.descr</code> is <code>[('i', '<i8'), ('loc', '<f8', (2,))]</code>; i.e. a list of field types. We'll create a new dtype by adding <code>('USNG', 'S100')</code> to the end of that list:</p>
<pre><code>In [87]: new_dt = np.dtype(a.dtype.descr + [('USNG', 'S100')])
</code></pre>
<p>Now create a <em>new</em> structured array, <code>b</code>. I used <code>zeros</code> here, so the string fields will start out with the value <code>''</code>. You could also use <code>empty</code>. The strings will then contain garbage, but that won't matter if you immediately assign values to them.</p>
<pre><code>In [88]: b = np.zeros(a.shape, dtype=new_dt)
</code></pre>
<p>Copy over the existing data from <code>a</code> to <code>b</code>:</p>
<pre><code>In [89]: b['i'] = a['i']
In [90]: b['loc'] = a['loc']
</code></pre>
<p>Here's <code>b</code> now:</p>
<pre><code>In [91]: b
Out[91]:
array([(1, [-112.01268501699997, 40.64249414272372], ''),
(2, [-111.86145708699996, 40.4945008710162], '')],
dtype=[('i', '<i8'), ('loc', '<f8', (2,)), ('USNG', 'S100')])
</code></pre>
<p>Fill in the new field with some data:</p>
<pre><code>In [93]: b['USNG'] = ['FOO', 'BAR']
In [94]: b
Out[94]:
array([(1, [-112.01268501699997, 40.64249414272372], 'FOO'),
(2, [-111.86145708699996, 40.4945008710162], 'BAR')],
dtype=[('i', '<i8'), ('loc', '<f8', (2,)), ('USNG', 'S100')])
</code></pre>
|
python|python-2.7|numpy|structured-array|recarray
| 16
|
374,306
| 25,172,212
|
Pandas - Python: Locate a forward looking variable based on time (minutes)
|
<p><em>Sorry for the poor title, I am not sure how to best describe my issue in one line</em></p>
<p>I have a dataframe <code>df1</code> with index:</p>
<p><code>[2014-01-02 10:00:02.644000, ..., 2014-01-02 15:59:58.630000]
Length: 26761, Freq: None, Timezone: None</code></p>
<p>My <code>df1</code> column <code>price</code> contains some values like <code>40</code>,<code>38</code>, etc.</p>
<p>My <code>df1</code> looks like this:</p>
<pre><code>Timestamp price1
2014-01-02 10:00:02.120000 38
2014-01-02 10:00:03.213000 40
2014-01-02 10:00:06.648000 39
2014-01-02 10:00:02.699320 50
...
</code></pre>
<p>I have another DataFrame, <code>df2</code></p>
<pre><code>Timestamp price2
2014-01-02 10:00:06.879000 39
2014-01-02 10:00:07.457200 41
2014-01-02 10:00:10.625450 35
2014-01-02 10:00:12.674320 47
...
</code></pre>
<p>My objective is to create another variable, <code>price2</code> in <code>df1</code> that locates the value of <code>price2</code> 5 minutes after each timestamps in <code>df1</code>. For instance, if we look at the first row in <code>df1</code>, <code>price2</code> will be equal to the value of <code>price2</code> at 10:00:07.120000 in <code>df2</code>. BUT, I don't have a price <code>price2</code> in <code>df2</code> at that specified time. I will have to extrapolate... what's the best way to do this?</p>
|
<p>I got my answer <a href="https://stackoverflow.com/questions/9877391/how-to-get-the-closest-single-row-after-a-specific-datetime-index-using-python-p">here</a>. This methodology helps me to find the closest match based on time. I couldn't ask for something better!</p>
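<p>For readers who don't want to follow the link, a rough sketch of one way to grab the closest row by time, using <code>searchsorted</code> on the index (this is my own illustration, not necessarily the exact method from the linked answer; the timestamps are taken from the question):</p>
<pre><code>import pandas as pd

idx2 = pd.to_datetime(['2014-01-02 10:00:06.879', '2014-01-02 10:00:07.4572',
                       '2014-01-02 10:00:10.62545', '2014-01-02 10:00:12.67432'])
df2 = pd.DataFrame({'price2': [39, 41, 35, 47]}, index=idx2)

target = pd.Timestamp('2014-01-02 10:00:07.120')   # a df1 timestamp plus the offset
pos = df2.index.searchsorted(target)               # first position at or after target
if pos == len(df2.index) or (pos > 0 and
        target - df2.index[pos - 1] <= df2.index[pos] - target):
    pos -= 1                                       # the earlier neighbour is closer
price2_closest = df2['price2'].iloc[pos]           # 39 in this example
</code></pre>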
|
python|pandas
| 0
|
374,307
| 25,296,130
|
Access column in data frame that shares a name with other columns
|
<p>I have three different columns each named <code>Weight (LB)</code>. When I print out the column names pandas seems to distinguish between them using <code>Weight (LB)</code> and <code>Weight (LB).1</code> and <code>Weight (LB).2</code>. So I tried accessing each one individually while iterating the rows and appending their values to separate lists. I should end up with 3 lists of size 22 but instead I have 3 lists of size 66. Each list is getting all of the values in the <code>Weight (LB)</code> columns. So I switched it up and tried accessing at the specific column indexes. Surely there is no way this won't work! But I ended up in the same boat. </p>
<pre><code>>>> for idx, row in df.iterrows():
...     squat.append(df.iloc[idx, 6])
...     if row['Exercise 2'] == 'Overhead press':
...         overhead.append(df.iloc[idx, 14])
...     else:
...         bench.append(df.iloc[idx, 14])
...     if row['Exercise 3'] == 'Deadlift':
...         deadlift.append(df.iloc[idx, 22])
...     else:
...         barbell.append(df.iloc[idx, 22])
...
>>> len(squat)
66
</code></pre>
<p>So basically what I need help with is accessing data in the specific columns separately despite them having the same name.</p>
<p>Thanks!</p>
<p>Edit: I can access each column via the <code>iloc</code> properly but for whatever reason all of the values are getting added to each list. O_o</p>
<p>Edit again: I noticed that when I created the lists using <code>squat = bench = overhead = deadlift = barbell = []</code> it would yield the unexpected behavior but when I created the lists each on their own lines then it worked as expected.</p>
|
<p>I'd just rename your columns, in general it will make life a lot easier. It's a little tricky with duplicates, but you can assign directly to the columns with some kind of mapping function like this.</p>
<pre><code>def rename_dup(col):
    ans = []
    counter = 1
    for c in col:
        if c.startswith('Weight (LB)'):
            ans.append(c + str(counter))
            counter += 1
        else:
            ans.append(c)
    return ans

df.columns = rename_dup(df.columns)
</code></pre>
<p>Also, you may not want to be using <code>iterrows</code>. It's probably cleaner to write something like this:</p>
<pre><code>overhead = df.loc[df['Exercise 2'] == 'Overhead press', 'Weight (LB)2']
bench = df.loc[df['Exercise 2'] != 'Overhead press', 'Weight (LB)2']
# etc...
</code></pre>
|
python|python-2.7|pandas
| 0
|
374,308
| 25,101,344
|
Split columns using pandas
|
<pre><code>Games Home Away
Team 1 vs. Team 2 Team 1 Team 2
Team 1 @ Team 2 Team 2 Team 1
</code></pre>
<p>I have a column called Games and want to split it into two new columns label as Home and Away.
For the @ I used <code>df['Away'] = df['Games'].map(lambda x: x.split('@')[0])</code> and it works. But I tried using <code>df['Away'] = df['Games'].map(lambda x: x.split('vs.')[1])</code> it didn't work. </p>
<p>What am I missing??</p>
|
<p>It's not clear from the information you provided exactly what is going wrong here. But pandas provides tools that are specific to this kind of work and likely to provide informative errors if things go wrong.</p>
<p>Take a look at the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods" rel="nofollow">string methods documentation</a>. Something like <code>df['Games'].str.extract('(.*)(vs.|@)(.*)')</code> might be best here. Or the <code>str.split</code> method, but I like extract better.</p>
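<p>To make that concrete, a small sketch (the Home/Away assignment is my assumption from the example table: <code>vs.</code> means the first team is home, <code>@</code> means the second team is home):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Games': ['Team 1 vs. Team 2', 'Team 1 @ Team 2']})

parts = df['Games'].str.extract(r'(.*) (vs\.|@) (.*)')   # team, separator, team
df['Home'] = parts[0].where(parts[1] == 'vs.', parts[2])
df['Away'] = parts[2].where(parts[1] == 'vs.', parts[0])
</code></pre>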
|
pandas
| 0
|
374,309
| 25,126,520
|
Pandas - remove cells based on value
|
<p>I have a dataframe with z-scores for several values. It looks like this:</p>
<pre><code>ID Cat1 Cat2 Cat3
A 1.05 -1.67 0.94
B -0.88 0.22 -0.56
C 1.33 0.84 1.19
</code></pre>
<p>I want to write a script that will tell me which IDs correspond with values in each category relative to a cut-off value I specify as needed. Because I am working with z-scores, I will need to compare the absolute value against my cut-off.</p>
<p>So if I set my cut-off at 0.75, the resulting dataframe would be:</p>
<pre><code>Cat1 Cat2 Cat3
A A A
B C C
C
</code></pre>
<p>If I set 1.0 as my cut-off value: the dataframe above would return:</p>
<pre><code>Cat1 Cat2 Cat3
A A C
C
</code></pre>
<p>I know that I can do queries like this:</p>
<pre><code>df1 = df[df['Cat1'] > 1]
df1
df1 = df[df['Cat1'] < -1]
df1
</code></pre>
<p>to individually query each column and find the information I'm looking for but this is tedious even if I figure out how to use the abs function to combine the two queries into one.How can I apply this filtration to the whole dataframe?</p>
<p>I've come up with this skeleton of a script:</p>
<pre><code>cut_off = 1.0
cols = list(df.columns)
cols.remove('ID')
for col in cols:
    # FOR CELL IN VALUE OF EACH CELL IN COLUMN:
    if (abs.CELL < cut_off):
        CELL = NaN
</code></pre>
<p>to basically just eliminate any values that don't meet the cut-off. If I can get this to work, it will bring me closer to my goal but I am stuck and don't even know if I am on the right track. Again, the overall goal is to quickly figure out which cells have absolute-values above the cut-off in each category be able to list the corresponding IDs. </p>
<p>I apologize if anything is confusing or vague; let me know in comments and I'll fix it. I've been trying to figure this out for most of today and my brain is somewhat fried</p>
|
<p>You don't have to apply the filtration column by column; you can also do</p>
<pre><code>df[df > 1]
</code></pre>
<p>and also:</p>
<pre><code>df[df > 1] = np.NaN
</code></pre>
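<p>Applied to the setup in the question, a minimal sketch (the frame below just recreates the example data, and the cut-off comparison uses <code>abs()</code> as described):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'ID': ['A', 'B', 'C'],
                   'Cat1': [1.05, -0.88, 1.33],
                   'Cat2': [-1.67, 0.22, 0.84],
                   'Cat3': [0.94, -0.56, 1.19]}).set_index('ID')

cut_off = 0.75
hits = df[df.abs() >= cut_off]          # cells below the cut-off become NaN
ids_per_cat = {col: hits[col].dropna().index.tolist() for col in hits.columns}
# {'Cat1': ['A', 'B', 'C'], 'Cat2': ['A', 'C'], 'Cat3': ['A', 'C']}
</code></pre>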
|
python|pandas|dataframe
| 2
|
374,310
| 25,129,195
|
How to access an element in a Numpy array
|
<p>So I have this list of Numpy arrays:</p>
<pre><code>import numpy as np
from numpy import array
m = [array([0, 64]), array([ 0, 79]), array([0, 165]), array([0, 50])]
</code></pre>
<p>How do I index the number 50 from the m[3] element in the array?</p>
|
<p>As already mentioned in the other comments, if your intention is to use a 2D-array, you should create it as:</p>
<pre><code>m = array([[0, 64], [0, 79], [0, 165], [0, 50]])
</code></pre>
<p>and then access the elements like:</p>
<pre><code>print(m[3, 1])
</code></pre>
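<p>If you keep the list of 1-D arrays exactly as in the question, plain chained indexing also gets you there:</p>
<pre><code>>>> m[3]          # fourth array in the list
array([ 0, 50])
>>> m[3][1]       # second element of that array
50
</code></pre>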
|
python|list|numpy
| 4
|
374,311
| 30,697,769
|
How can I unserialize a numpy array that was cast to a bytestring?
|
<p>I need to serialize a numpy array to some JSON-compatible form. Since the framework I'm using doesn't give me access to the JSON encoder/decoder object, I'm stuck serializing a numpy array to something that can <em>then</em> be marshalled into JSON. I've opted for either <code>array.tobytes</code> or <code>array.tostring</code> (both seem to be essentially the same thing).</p>
<p>Below is an example which illustrates my problem:</p>
<pre><code>import numpy as np
a = np.random.rand(1024, 1024) # create array of random values
b = a.tobytes() # serialize array
a2 = np.fromstring(b)
</code></pre>
<p>When I inspect the value of <code>a2</code>, I find that it only contains the first line of the original <code>a</code>. In other words, <code>a2 == a[0, :]</code>.</p>
<p>How can I decode the full array?</p>
|
<p>Actually, <code>numpy.fromstring()</code> returns a one-dimensional array of 1024*1024 elements instead of a 2-dimensional array. All you need to do is reshape it to 1024x1024. </p>
<p>Try this :- </p>
<pre><code>import numpy as np
a = np.random.rand(1024, 1024) # create array of random values
b = a.tobytes()
np.fromstring(b).reshape(1024,1024)
</code></pre>
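<p>One caveat worth adding (my note, not part of the original answer): the raw bytes carry neither dtype nor shape, so both have to be supplied again when decoding. A small sketch of the round trip using <code>np.frombuffer</code>:</p>
<pre><code>import numpy as np

a = np.random.rand(1024, 1024)
b = a.tobytes()                                   # dtype and shape are lost here

a2 = np.frombuffer(b, dtype=a.dtype).reshape(a.shape)
assert (a2 == a).all()
</code></pre>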
|
python|numpy
| 1
|
374,312
| 30,623,721
|
Plot multiple DataFrame columns in Seaborn FacetGrid
|
<p>I am using the following code</p>
<pre><code>import seaborn as sns
g = sns.FacetGrid(dataframe, col='A', hue='A')
g.map(plt.plot, 'X', 'Y1')
plt.show()
</code></pre>
<p>to make a seaborn facet plot like this:
<img src="https://i.stack.imgur.com/B2eay.png" alt="Example facet plot"></p>
<p>Now I would like to add another row to this plot with a different variable, call it Y2, on the y axis. The result should look similar to vertically stacking the two plots obtained by</p>
<pre><code>g = sns.FacetGrid(dataframe, col='A', hue='A')
g.map(plt.plot, 'X', 'Y1')
plt.show()
g = sns.FacetGrid(dataframe, col='A', hue='A')
g.map(plt.plot, 'X', 'Y2')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/VB167.png" alt="Example plot with two rows"></p>
<p>but in a single plot, without the duplicate x axis and titles ("A=<value>") and without creating a new <code>FacetGrid</code> object.</p>
<p>Note that</p>
<pre><code>g = sns.FacetGrid(dataframe, col='A', hue='A')
g.map(plt.plot, 'X', 'Y1')
g.map(plt.plot, 'X', 'Y2')
plt.show()
</code></pre>
<p>does not achive this, because it results in both the curve for Y1 and Y2 being displayed in the same subplot for each value of A.</p>
|
<p>I used the following code to create a synthetic dataset which appears to match yours:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Generate synthetic data
omega = np.linspace(0, 50)
A0s = [1., 18., 40., 100.]
dfs = []
for A0 in A0s:
    V_w_dr = np.sin(A0*omega)
    V_w_tr = np.cos(A0*omega)
    dfs.append(pd.DataFrame({'omega': omega,
                             'V_w_dr': V_w_dr,
                             'V_w_tr': V_w_tr,
                             'A0': A0}))
df = pd.concat(dfs, axis=0)
</code></pre>
<p>Then you can do what you want. Thanks to @mwaskom in the comments for <code>sharey='row'</code>, and <code>margin_titles=True</code>:</p>
<pre class="lang-py prettyprint-override"><code>dfm = df.melt(id_vars=['A0', 'omega'], value_vars=['V_w_dr', 'V_w_tr'])
g = sns.FacetGrid(dfm, col='A0', hue='A0', row='variable', sharey='row', margin_titles=True)
g.map(plt.plot, 'omega', 'value')
</code></pre>
<p>This results in</p>
<p><a href="https://i.stack.imgur.com/pCv5S.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pCv5S.png" alt="enter image description here" /></a></p>
<h2>Update</h2>
<ul>
<li>As of this update, the correct method is to use <a href="https://seaborn.pydata.org/generated/seaborn.relplot.html#seaborn.relplot" rel="noreferrer"><code>seaborn.relplot</code></a>, which plots a FacetGrid.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>sns.relplot(data=dfm, x='omega', y='value', col='A0', hue='A0', row='variable', kind='line')
</code></pre>
<p><a href="https://i.stack.imgur.com/j6W1d.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j6W1d.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|plot|seaborn
| 13
|
374,313
| 30,735,358
|
How can I convert a list to a numpy array for filtering elements?
|
<p>I have a list of <code>float</code> numbers and I would like to convert it to <code>numpy array</code> so I can use <code>numpy.where()</code> to get indices of elements that are bigger than 0.0 (not zero)</p>
<p>I tried this, but with no luck: </p>
<pre><code>import numpy as np
arr = np.asarray(enumerate(grade_list))
g_indices = np.where(arr[1] > 0)[0]
</code></pre>
<p>Edit: </p>
<p>is <code>dtype=float</code> needed?</p>
|
<p>You don't need the <code>enumerate()</code>: </p>
<pre><code>arr = np.asarray(grade_list)
g_indices = np.where(arr > 0)[0]
</code></pre>
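<p>On the <code>dtype=float</code> question from the edit: it isn't needed when the list already holds floats, since <code>asarray</code> infers <code>float64</code>. A short sketch (the <code>grade_list</code> values here are made up):</p>
<pre><code>import numpy as np

grade_list = [3.5, 0.0, 1.2, 0.0, 4.8]

arr = np.asarray(grade_list)            # dtype is already float64
g_indices = np.where(arr > 0)[0]        # array([0, 2, 4])

# dtype=float only matters if the entries might not be floats, e.g. strings:
arr = np.asarray(['3.5', '0', '1.2'], dtype=float)
</code></pre>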
|
python|arrays|numpy|where
| 3
|
374,314
| 30,514,378
|
Divide one column in array by another numpy
|
<p>I am trying to get</p>
<pre><code>[[ 4. 0. 0. ]
[ 8. 0. 0. ]]
</code></pre>
<p>out of this:</p>
<pre><code>[[ 2. 0.5 0. ]
[ 2. 0.25 0. ]]
</code></pre>
<p>So I want to divide the first column by the second one:</p>
<p><code>div = arr[:,0]/arr[:,1]</code> but don't know what's the best way to reshape and add zeros to get the result.</p>
<p>Thanks in advance.</p>
|
<p>If you want to do it in place, you could do</p>
<pre><code>a[:, 0] = a[:, 0] / a[:, 1]
a[:, 1] = 0
</code></pre>
<p>If not</p>
<pre><code>b = np.zeros(6).reshape(2, 3)
b[:, 0] = (a[:, 0] / a[:, 1])
</code></pre>
|
python|numpy
| 2
|
374,315
| 30,305,069
|
Numpy concatenate 2D arrays with 1D array
|
<p>I am trying to concatenate 4 arrays: one 1D array of shape (78427,) and three 2D arrays of shape (78427, 375/81/103). Basically these are 4 arrays with features for 78427 images, in which the 1D array only has 1 value for each image.</p>
<p>I tried concatenating the arrays as follows:</p>
<pre><code>>>> print X_Cscores.shape
(78427, 375)
>>> print X_Mscores.shape
(78427, 81)
>>> print X_Tscores.shape
(78427, 103)
>>> print X_Yscores.shape
(78427,)
>>> np.concatenate((X_Cscores, X_Mscores, X_Tscores, X_Yscores), axis=1)
</code></pre>
<p>This results in the following error: </p>
<blockquote>
<p>Traceback (most recent call last):
File "", line 1, in
ValueError: all the input arrays must have same number of dimensions</p>
</blockquote>
<p>The problem seems to be the 1D array, but I can't really see why (it also has 78427 values). I tried to transpose the 1D array before concatenating it, but that also didn't work. </p>
<p>Any help on what's the right method to concatenate these arrays would be appreciated!</p>
|
<p>Try concatenating <code>X_Yscores[:, None]</code> (or <code>X_Yscores[:, np.newaxis]</code> as imaluengo suggests). This creates a 2D array out of a 1D array.</p>
<p>Example:</p>
<pre><code>A = np.array([1, 2, 3])
print A.shape
print A[:, None].shape
</code></pre>
<p>Output:</p>
<pre><code>(3,)
(3,1)
</code></pre>
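<p>Putting that together with the original call, a sketch with dummy arrays that only reproduce the shapes from the question:</p>
<pre><code>import numpy as np

X_Cscores = np.zeros((78427, 375))
X_Mscores = np.zeros((78427, 81))
X_Tscores = np.zeros((78427, 103))
X_Yscores = np.zeros(78427)

X = np.concatenate((X_Cscores, X_Mscores, X_Tscores, X_Yscores[:, None]), axis=1)
print(X.shape)   # (78427, 560)
</code></pre>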
|
python|arrays|numpy|concatenation
| 31
|
374,316
| 30,354,637
|
Grouping and aggregating by counts: how to keep column names?
|
<p>I have an example dataframe similar to the synthetic one I create below. Each ID is classified as <code>good</code> or <code>bad</code> (these could also be country codes, e.g. <code>US</code>, <code>ES</code>, <code>RU</code>, etc):</p>
<pre class="lang-py prettyprint-override"><code>In [55]: nf = pandas.DataFrame({'id': numpy.random.randint(0,100,1000)
,'how':numpy.random.choice(['good','bad'],1000)
,'A':numpy.random.randn(1000)
,'B':numpy.random.randn(1000)
})
In [56]: for i in numpy.unique(nf['id'].values):
.....: nf.loc[nf.loc[idx[:],idx['id']] == i, 'how'] = "good" if is_odd(i) else "bad"
</code></pre>
<p>where I have defind <code>is_odd()</code> by:</p>
<pre><code>def is_odd(num):
    return num & 0x1
</code></pre>
<p>Now, I want to do the following operations</p>
<ul>
<li>Group the data by IDs</li>
<li>Count each group's entries / rows</li>
<li>Plot a histogram of the counts for the entire population</li>
<li>Plot histogram's of the counts for "good" and "bad"</li>
</ul>
<p>For example, I would do the first two operations like:</p>
<pre class="lang-py prettyprint-override"><code>In [57]: nf.groupby(['id','how']).agg('count')
Out[57]:
A B
id how
0 bad 9 9
1 good 13 13
2 bad 16 16
3 good 8 8
4 bad 7 7
5 good 11 11
6 bad 10 10
7 good 14 14
8 bad 12 12
9 good 8 8
10 bad 12 12
... .. ..
</code></pre>
<p>My problem: I lose access to the columns <code>id</code> and <code>how</code>. I can <code>.hist()</code> on the grouped result, but I cannot separate the data anymore.</p>
<p>Is there a smarter (not to say, correct) way of going about this?</p>
|
<p>Well you can just use <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>pandas.DataFrame.reset_index()</code></a> to turn multi-index into columns:</p>
<pre><code>In [6]: nf.groupby(['id','how']).agg('count').reset_index().head(10)
Out[6]:
id how A B
0 0 bad 7 7
1 0 good 6 6
2 1 bad 5 5
3 1 good 5 5
4 2 bad 6 6
5 2 good 4 4
6 3 bad 3 3
7 3 good 7 7
8 4 bad 11 11
9 4 good 6 6
</code></pre>
<p>Another way to do this could be use <code>as_index</code> parameter of the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>pandas.DataFrame.groupby()</code></a>:</p>
<pre><code>In [13]: nf.groupby(['id','how'], as_index=False).agg({'A':'count', 'B':'count'}).head(10)
Out[13]:
id how A B
0 0 bad 7 7
1 0 good 6 6
2 1 bad 5 5
3 1 good 5 5
4 2 bad 6 6
5 2 good 4 4
6 3 bad 3 3
7 3 good 7 7
8 4 bad 11 11
9 4 good 6 6
</code></pre>
|
python|pandas
| 2
|
374,317
| 30,486,141
|
Python3 changes function name <...>
|
<p>I have a pandas df where one column lists a particular func used to get the result in that line of the df.</p>
<p>It appears that Python changes the name of a func. if it is part of a list of functions. So Python takes the func name 'strategy0' and changes it to the less useful <code>'<function strategy0 at 0x0000000009CCA488>'</code> if it is part of a list of functions.</p>
<p>How can I either avoid Python changing the fct name altogether or use Reg. Ex. to create a new df column 'stratname' with the proper fct. name (changing it back)?</p>
<p>This is my df as it looks now:</p>
<pre><code> Strategy Ticker ROI
0 <function strategy0 at 0x0000000009CCA488> nflx 142.976946
1 <function strategy0 at 0x0000000009CCA488> lnkd 61.710992
2 <function strategy0 at 0x0000000009CCA488> hsp 8.589611
</code></pre>
<p>This is how I would like it to look:</p>
<pre><code> Strategy Ticker ROI
0 strategy0 nflx 142.976946
1 strategy0 lnkd 61.710992
2 strategy0 hsp 8.589611
</code></pre>
<p>This way the column info could be used directly as input for a new function.</p>
<p>A good friend of mine helped me to the below code, but he is not proficient in pandas, so we're kind of struggling with it.</p>
<pre><code>def format(x):
    m = re.search(r'<function\s+(strategy\d+)\s+.*', x)
    return m.groups(1)

df_max_res['stratname'] = df_max_res['Strategy'].applymap(format)
</code></pre>
<p>This seems to look a lot like my problem, but apparently Python 3 handles this differently than Python 2.
<a href="https://stackoverflow.com/questions/13076172/return-function-name-in-python">Return function name in python?</a></p>
|
<p>Another solution if you like map and list</p>
<pre><code>strategies = list(map(lambda x: x.__name__, strategies))
</code></pre>
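<p>Applied to the dataframe from the question, and assuming the <code>Strategy</code> column still holds the function objects themselves (the long repr is just how they print), something like this should work:</p>
<pre><code>df_max_res['stratname'] = df_max_res['Strategy'].map(lambda f: f.__name__)
</code></pre>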
|
python|pandas
| 1
|
374,318
| 30,328,427
|
Add months to a datetime column in pandas
|
<p>I have a dataframe df with 2 columns as below -</p>
<pre><code> START_DATE MONTHS
0 2015-03-21 240
1 2015-03-21 240
2 2015-03-21 240
3 2015-03-21 240
4 2015-03-21 240
5 2015-01-01 120
6 2017-01-01 240
7 NaN NaN
8 NaN NaN
9 NaN NaN
</code></pre>
<p>The datatypes of the 2 columns are objects.</p>
<pre><code>>>> df.dtypes
START_DATE object
MONTHS object
dtype: object
</code></pre>
<p>Now, I want to create a new column "Result" by adding df['START_DATE'] & df['MONTHS']. So, I have done the below -</p>
<pre><code>from dateutil.relativedelta import relativedelta
df['START_DATE'] = pd.to_datetime(df['START_DATE'])
df['MONTHS'] = df['MONTHS'].astype(float)
df['offset'] = df['MONTHS'].apply(lambda x: relativedelta(months=x))
df['Result'] = df['START_DATE'] + df['offset']
</code></pre>
<p>Here, I get the below error -</p>
<pre><code>TypeError: incompatible type [object] for a datetime/timedelta operation
</code></pre>
<p>Note: I wanted to convert df['MONTHS'] to int, but that wouldn't work as the field had nulls.</p>
<p>Can you please give me some directions? Thanks.</p>
|
<p>This is a vectorized way to do this, so should be quite performant. Note that it doesn't handle month crossings / endings (and doesn't deal well with DST changes. I believe that's why you get the times).</p>
<pre><code>In [32]: df['START_DATE'] + df['MONTHS'].values.astype("timedelta64[M]")
Out[32]:
0 2035-03-20 20:24:00
1 2035-03-20 20:24:00
2 2035-03-20 20:24:00
3 2035-03-20 20:24:00
4 2035-03-20 20:24:00
5 2024-12-31 10:12:00
6 2036-12-31 20:24:00
7 NaT
8 NaT
9 NaT
Name: START_DATE, dtype: datetime64[ns]
</code></pre>
<p>If you need exact MonthEnd/Begin handling, this is an appropriate method. (Use MonthsOffset to get the same day)</p>
<pre><code>In [33]: df.dropna().apply(lambda x: x['START_DATE'] + pd.offsets.MonthEnd(x['MONTHS']), axis=1)
Out[33]:
0 2035-02-28
1 2035-02-28
2 2035-02-28
3 2035-02-28
4 2035-02-28
5 2024-12-31
6 2036-12-31
dtype: datetime64[ns]
</code></pre>
|
python|python-2.7|python-3.x|pandas|ipython
| 13
|
374,319
| 30,302,520
|
Parallelize operations for each cell in a numpy array
|
<p>I am trying to figure out which is the best way to parallelize the execution of a single operation for each cell in a 2D numpy array.</p>
<p>In particular, I need to do a bitwise operation for each cell in the array.</p>
<p>This is what I do using a single <code>for</code> cycle:</p>
<pre><code>for x in range(M):
    for y in range(N):
        v[x][y] = (v[x][y] >> 7) & 255
</code></pre>
<p>I found a way to do the same above using the <code>vectorize</code> method:</p>
<pre><code>def f(x):
    return (x >> 7) & 255

f = numpy.vectorize(f)
v = f(v)
</code></pre>
<p>However, using vectorize doesn't seem to improve performance.</p>
<p>I read about <em>numexpr</em> in <a href="https://stackoverflow.com/a/11460119/738017">this answer on StackOverflow</a>, where <em>Theano</em> and <em>Cython</em> are also mentioned. <em>Theano</em> in particular seems a good solution, but I cannot find examples that fit my case.</p>
<p>So my question is: which is the best way to improve the above code, using parallelization and possibly GPU computation? May someone post some sample code to do this?</p>
|
<p>I am not familiar with bitwise operations but this here gives me the same result as your code and is vectorized. </p>
<pre><code>import numpy as np
# make sure it is a numpy.array
v = np.array(v)
# vectorized computation
N = (v >> 7) & 255
</code></pre>
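<p>If the array is large and you want to skip the temporary produced by <code>(v >> 7) & 255</code>, a possible in-place variant (this reuses the <code>v</code> from above and assumes it has an integer dtype, which bitwise operations require anyway):</p>
<pre><code>np.right_shift(v, 7, out=v)
np.bitwise_and(v, 255, out=v)
</code></pre>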
|
python|numpy|parallel-processing|theano
| 4
|
374,320
| 30,561,617
|
How to get numpy array from multiple lists of same length and sort along an axis?
|
<p>I have a very simple question ,How to get numpy array from multiple lists of same length and sort along an axis ?</p>
<p>I'm looking for something like: </p>
<pre><code>a = [1,1,2,3,4,5,6]
b = [10,10,11,09,22,20,20]
c = [100,100,111,090,220,200,200]
d = np.asarray(a,b,c)
print d
>>>[[1,10,100],[1,10,100],[2,11,111].........[6,20,200]]
</code></pre>
<p>2nd Question: And if this can be achieved, can I sort it along an axis (for e.g. on the values of List b)?</p>
<p>3rd Question: Can the sorting be done over a range? For e.g. for values between b+10 and b-10, while looking at List c for further sorting, like</p>
<pre><code>[[1,11,111][1,10,122][1,09,126][1,11,154][1,11,191]
[1,20,110][1,25,122][1,21,154][1,21,155][1,21,184]]
</code></pre>
|
<p>You can zip to get the array:</p>
<pre><code>a = [1, 1, 2, 3, 4, 5, 6]
b = [10, 10, 11, 9, 22, 20, 20]
c = [100, 100, 111, 90, 220, 200, 200]
d = np.asarray(zip(a,b,c))
print(d)
[[ 1 10 100]
[ 1 10 100]
[ 2 11 111]
[ 3 9 90]
[ 4 22 220]
[ 5 20 200]
[ 6 20 200]]
print(d[np.argsort(d[:, 1])]) # a sorted copy
[[ 3 9 90]
[ 1 10 100]
[ 1 10 100]
[ 2 11 111]
[ 5 20 200]
[ 6 20 200]
[ 4 22 220]]
</code></pre>
<p>I don't know how you would do an inplace sort without doing something like:</p>
<pre><code>d = np.asarray(zip(a,b,c))
d.dtype = [("0", int), ("1", int), ("2", int)]
d.shape = d.size
d.sort(order="1")
</code></pre>
<p>The leading <code>0</code> would make the <code>090</code> octal in python2 or invalid syntax in python3 so I removed it.</p>
<p>You can also sort the zipped elements before you pass them to <code>np.asarray</code>:</p>
<pre><code>from operator import itemgetter
zipped = sorted(zip(a,b,c),key=itemgetter(1))
d = np.asarray(zipped)
print(d)
[[ 3 9 90]
[ 1 10 100]
[ 1 10 100]
[ 2 11 111]
[ 5 20 200]
[ 6 20 200]
[ 4 22 220]]
</code></pre>
|
python|arrays|sorting|numpy
| 2
|
374,321
| 26,601,001
|
Calculate STD manually using Groupby Pandas DataFrame
|
<p>I was trying to write a solution for <a href="https://stackoverflow.com/questions/26599347/groupby-pandas-dataframe-and-calculate-mean-and-stdev-of-one-column-and-add-the">this question</a> by providing a different and a manual way to calculate a mean and std. </p>
<p>I created the <a href="https://stackoverflow.com/questions/26599347/groupby-pandas-dataframe-and-calculate-mean-and-stdev-of-one-column-and-add-the">dataframe as described in the question</a></p>
<pre><code>a= ["Apple","Banana","Cherry","Apple"]
b= [3,4,7,3]
c= [5,4,1,4]
d= [7,8,3,7]
import pandas as pd
df = pd.DataFrame(index=range(4), columns=list("ABCD"))
df["A"]=a
df["B"]=b
df["C"]=c
df["D"]=d
</code></pre>
<p>Then, I created a list of A's without duplication. Then I went through the items, grouping the matching rows each time and calculating the result.</p>
<pre><code>import numpy as np
l= list(set(df.A))
df.groupby('A', as_index=False)
listMean=[0]*len(df.C)
listSTD=[0]*len(df.C)
for x in l:
    s = np.mean(df[df['A']==x].C.values)
    z = [index for index, item in enumerate(df['A'].values) if x==item]
    for i in z:
        listMean[i] = s

for x in l:
    s = np.std(df[df['A']==x].C.values)
    z = [index for index, item in enumerate(df['A'].values) if x==item]
    for i in z:
        listSTD[i] = s
df['C']= listMean
df['E']= listSTD
print df
</code></pre>
<p>I used <code>describe()</code> grouped by "A" to calculate the mean, std. </p>
<pre><code>print df.groupby('A').describe()
</code></pre>
<p>And tested the suggested solution: </p>
<pre><code>result = df.groupby(['a'], as_index=False).agg(
{'c':['mean','std'],'b':'first', 'd':'first'})
</code></pre>
<p>I noticed that I got different results when I calculate std ("E"). I am just curious, what did I miss ?</p>
|
<p>There are <a href="https://statistics.laerd.com/statistical-guides/measures-of-spread-standard-deviation.php" rel="nofollow noreferrer">two kinds of standard deviations (SD)</a>: the population SD and the sample SD.</p>
<p>The population SD</p>
<p><img src="https://i.stack.imgur.com/j5p4r.png" alt="enter image description here"></p>
<p>is used when the values represent the entire universe of values that you are studying.</p>
<p>The sample SD</p>
<p><img src="https://i.stack.imgur.com/zlRWv.png" alt="enter image description here"></p>
<p>is used when the values are a mere sample from that universe.</p>
<p><code>np.std</code> calculates the population SD by default, while Pandas' <code>Series.std</code> calculates the sample SD by default.</p>
<pre><code>In [42]: np.std([4,5])
Out[42]: 0.5
In [43]: np.std([4,5], ddof=0)
Out[43]: 0.5
In [44]: np.std([4,5], ddof=1)
Out[44]: 0.70710678118654757
In [45]: x = pd.Series([4,5])
In [46]: x.std()
Out[46]: 0.70710678118654757
In [47]: x.std(ddof=0)
Out[47]: 0.5
</code></pre>
<p><code>ddof</code> stands for "degrees of freedom", and controls the number subtracted from <code>N</code> in the SD formulas.</p>
<p>The formula images above come from <a href="http://en.wikipedia.org/wiki/Standard_deviation#Estimation" rel="nofollow noreferrer">this Wikipedia page</a>. There the "uncorrected sample standard deviation" is what I (and <a href="https://www.mathsisfun.com/data/standard-deviation-formulas.html" rel="nofollow noreferrer">others</a>) call the population SD, and the "corrected sample standard deviation" is the sample SD.</p>
|
python|algorithm|pandas
| 7
|
374,322
| 26,795,616
|
Randomly select the tuples from the list: type error
|
<pre><code>import numpy as np
x = [1,2,3,4,5,6,7,8,9]
y = [11,12,13,14,15,16,17,18,19]
</code></pre>
<p>I have a list of tuples as follows:</p>
<pre><code>xy = [(x,y) for x,y in zip(x,y)]
</code></pre>
<p>Now I wanted to randomly select 3 position/index of the tuples in the list.</p>
<pre><code>random_indices = np.random.choice(len(xy),3,replace=False)
</code></pre>
<p>Here I applied the indices to return the list of SELECTED tuples:</p>
<pre><code>selected_xy = xy[random_indices]
print selected_xy
</code></pre>
<p>BUT I got the following error:</p>
<pre><code>Traceback (most recent call last):
File "D:/test.py", line 11, in <module>
selected_xy = xy[random_indices]
TypeError: only integer arrays with one element can be converted to an index
</code></pre>
<p>My goal is to randomly select the tuples from the list, the expected result should look LIKE as follows:</p>
<pre><code>[(1,11),(3,13),(4,14)]
</code></pre>
<p>What is the best way of doing it?</p>
|
<p>Convert <code>xy</code> to a NumPy array, lists only support indexing with integers:</p>
<pre><code>>>> xy = np.array([(a, b) for a, b in zip(x, y)])
>>> random_indices = np.random.choice(len(xy), 3,replace=False)
>>> xy[random_indices]
array([[ 7, 17],
[ 2, 12],
[ 4, 14]])
</code></pre>
<p>In Python you can do this using either <code>operator.itemgetter</code> or loop over indices in a list comprehension and fetch the item at that index:</p>
<pre><code>>>> from operator import itemgetter
>>> xy = [(a, b) for a, b in zip(x, y)]
>>> random_indices = np.random.choice(len(xy), 3, replace=False)
>>> itemgetter(*random_indices)(xy)
((2, 12), (1, 11), (4, 14))
</code></pre>
|
python|numpy
| 4
|
374,323
| 26,671,112
|
Error of pytz if frequency of dataframe is below 1 hour for DST change [multiindex pandas]
|
<p>I face a problem changing the timezone of a dataframe whose frequency is below 1 hour. In my case, I get a quarter-hourly dataframe from a CSV source and I have to delete the DST hour in March and add the DST hour in October.
The below function works well if the frequency is hourly but doesn't work with a lower frequency.</p>
<p>Has someone any solution to this problem ?</p>
<pre><code>import pandas as pd
import numpy as np
from pytz import timezone
def DST_Paris(NH, NH_str):
    ## Suppose that I do not create the dataframe here but I import one from a CSV file
    df = pd.DataFrame(np.random.randn(NH * 365), index = pd.date_range(start="01/01/2014", freq=NH_str, periods=NH * 365))

    ## I need to delete the hour in March and duplicate the hour in October
    ## If freq is less than 1 hour, I need to duplicate all the data inside the considered hour
    tz = timezone('Europe/Paris')
    change_date = tz._utc_transition_times
    GMT1toGMT2_dates = [datei.date() for datei in list(change_date) if datei.month == 3]
    GMT2toGMT1_dates = [datei.date() for datei in list(change_date) if datei.month == 10]

    ind_March = np.logical_and(np.in1d(df.index.date, GMT1toGMT2_dates), (df.index.hour == 2))
    ind_October = np.logical_and(np.in1d(df.index.date, GMT2toGMT1_dates), (df.index.hour == 2))

    df['ind_March'] = (1 - ind_March)
    df['ind_October'] = ind_October * 1
    df = df[df.ind_March == 1]
    df = df.append(df[df.ind_October == 1])
    del df['ind_March']
    del df['ind_October']
    df = df.sort()

    ## Error if granularity is below 1 hour
    df = df.tz_localize('Europe/Paris', ambiguous = 'infer')

    return df
try:
    DST_Paris(24, "1h")
    print "dataframe freq = 1h ==> no pb"
except:
    print "dataframe freq = 1h ==> error"

try:
    DST_Paris(96, "15min")
    print "dataframe freq = 15min ==> no pb"
except:
    print "dataframe freq = 15min ==> error"
</code></pre>
<p>The output is :</p>
<pre><code>dataframe freq = 1h ==> no pb
dataframe freq = 15min ==> error
</code></pre>
|
<p>A workaround would be to use</p>
<pre><code>is_dst = False # or True
df = df.tz_localize('Europe/Paris', ambiguous=[is_dst]*len(df))
</code></pre>
<p>to explicitly specify if the ambiguous local times should be interpreted as in the Daylight Savings Time zone or not.</p>
<hr>
<p>By the way, </p>
<pre><code>df['ind_March'] = (1-ind_March)
df['ind_October'] = ind_October * 1
df = df[df.ind_March == 1]
df = df.append(df[df.ind_October == 1])
del df['ind_March']
del df['ind_October']
df = df.sort()
</code></pre>
<p>could be simplified to</p>
<pre><code>df = df.loc[~ind_March].append(df.loc[ind_October])
df = df.sort()
</code></pre>
|
python|pandas|timezone|pytz|multi-index
| 1
|
374,324
| 26,666,799
|
Array/List from txt file in Python
|
<p>I was trying to get values from a .txt file into an array/list in Python.
Let's say I have this data in user.txt:</p>
<pre><code> ghost:001
ghost:002
ghost:003
</code></pre>
<p>So, when I want to output it as :</p>
<pre><code> 'ghost:001','ghost:002','ghost:003'
</code></pre>
<p>I use this function</p>
<pre><code>def readFromFile(filename, use_csv):
    userlist = ''
    userlist_b = ''
    print ("Fetching users from '%s'" % filename)
    f = open(filename, "r")
    for line in f:
        userlist += str(line)
    userlist = "','".join(userlist.split("\n"))
    userlist = "'" + userlist + "'"
    userlist = "(%s)" % userlist
    return userlist
</code></pre>
<p>My question is: how could I do this? I want to print a specific user, something like </p>
<pre><code>idx = 2
print("User[%s] : %s",%idx, %(array[idx]))
*output:*
User[2] : ghost:003
</code></pre>
<p>How do I form the array?</p>
<p>Could anyone help me?</p>
|
<p>I would store the users in a dict where the keys increment for each user:</p>
<pre><code>d = {}
with open("in.txt") as f:
    user = 1
    for line in f:
        d[user] = line.rstrip()
        user += 1

print(d)
{1: 'ghost:001', 2: 'ghost:002', 3: 'ghost:003'}
</code></pre>
<p>If you just want a list of user and to access by index:</p>
<pre><code>with open("in.txt") as f:
    users = f.readlines()

print("User {}".format(users[0]))
User ghost:001
</code></pre>
|
python|arrays|pandas
| 1
|
374,325
| 26,787,755
|
How can I access multiple columns in Pandas 0.15 DataFrame.resample method?
|
<p>In Pandas 0.12, if you used the resample method on a DataFrame with a custom resampling function, it would make one call per dataframe row to the custom function, giving access to the values in all columns. In Pandas 0.15, the resample method calls my custom function once per dataframe entry, and the only available value is that entry (not the entire row).</p>
<p><strong>How can I recover the 0.12 behavior and see the entire row in my custom function?</strong></p>
<p>Here's the difference:</p>
<p>Initial setup:</p>
<pre><code>In [1]: import pandas
In [2]: import datetime
In [3]: import sys
In [4]: dt = datetime.datetime(2014,1,1)
In [5]: idx = [dt + datetime.timedelta(days=i) for i in [0,2]]
In [6]: df = pandas.DataFrame({'a': [1.0, 2.0], 'b': ['x', 'y']}, index=idx)
In [7]: foo = lambda data: sys.stdout.write("***\n" + str(data) + "\n")
</code></pre>
<p>0.12 behavior (notice that there are 3 calls to foo):</p>
<pre><code>In [8]: pandas.__version__
Out[8]: '0.12.0'
In [9]: df.resample(rule='D', how=foo, fill_method='ffill')
***
a b
2014-01-01 1 x
***
Empty DataFrame
Columns: [a, b]
Index: []
***
a b
2014-01-03 2 y
Out[9]:
a b
2014-01-01 None None
2014-01-02 None None
2014-01-03 None None
</code></pre>
<p>0.15 behavior (notice that there are 6 calls to foo):</p>
<pre><code>In [8]: pandas.__version__
Out[8]: '0.15.0'
In [9]: df.resample(rule='D', how=foo, fill_method='ffill')
***
2014-01-01 1
Name: a, dtype: float64
***
Series([], name: a, dtype: float64)
***
2014-01-03 2
Name: a, dtype: float64
***
2014-01-01 x
Name: b, dtype: object
***
Series([], name: b, dtype: object)
***
2014-01-03 y
Name: b, dtype: object
Out[9]:
a b
2014-01-01 NaN None
2014-01-02 NaN None
2014-01-03 NaN None
</code></pre>
|
<p>I don't know why the behavior changed, but I think using a <code>TimeGrouper</code> and <code>groupby</code> can get you back to the old results, although it will error out unless foo is given a return value.</p>
<pre><code>In [496]: df.groupby(pd.TimeGrouper('D')).apply(foo)
***
a b
2014-01-01 1 x
***
Empty DataFrame
Columns: [a, b]
Index: []
***
a b
2014-01-03 2 y
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
.....
ValueError: All objects passed were None
</code></pre>
|
pandas
| 0
|
374,326
| 26,618,964
|
Convert list of tuples in tabular format in python
|
<p>What is an elegant way to convert a list of tuples into tables in the following form?</p>
<p>Input:</p>
<pre><code>from pandas import DataFrame
mytup = [('a','b',1), ('a','c',2), ('b','a',2), ('c','a',3), ('c','c',1)]
a b 1
a c 2
b a 2
c a 3
c c 1
mydf = DataFrame(mytup, columns = ['from', 'to', 'val'])
</code></pre>
<p>output: <code>-</code> may be replaced with blank or <code>nan</code></p>
<pre><code> a b c
a - 1 2
b 2 - -
c 3 - 1
</code></pre>
|
<p><code>pivot</code> and <code>fillna</code> are what you want:</p>
<pre><code>import pandas as pd
mytup = [('a','b',1), ('a','c',2), ('b','a',2), ('c','a',3), ('c','c',1)]
mydf = pd.DataFrame(mytup, columns=['from', 'to', 'val'])
mydf.pivot(index='from', columns='to', values='val').fillna(value='-')
to a b c
from
a - 1 2
b 2 - -
c 3 - 1
</code></pre>
|
python|pandas
| 7
|
374,327
| 39,007,934
|
Error installing bazel for tensorflow: command not found
|
<p>I am trying to use bazel to run retrain Inception's Final Layer for New Categories in Tensorflow.</p>
<p>I have limited knowledge of anything other than jupyter notebooks, so terminal work = copying and pasting.</p>
<p>I installed bazel via brew. So it's there somewhere.</p>
<p>When I run:</p>
<pre><code>bazel build tensorflow/examples/image_retraining:retrain
</code></pre>
<p>I receive the error:</p>
<pre><code>-bash: bazel: command not found
</code></pre>
<p>So I tried this too:</p>
<pre><code>export PATH="$PATH:$HOME/bin"
</code></pre>
<p>But nothing happened. What am I doing wrong?</p>
|
<p>Try running:</p>
<pre><code>$ brew info bazel
</code></pre>
<p>This should print a path to wherever it installed Bazel. You can either use it from there (<code>/usr/local/Cellar/bazel/0.3.1/bin/bazel build tensorflow/and/so/on</code>) or create a symlink to somewhere on your PATH, e.g.,</p>
<pre><code>$ mkdir $HOME/bin
$ ln -s /usr/local/Cellar/bazel/0.3.1/bin/bazel $HOME/bin/bazel
</code></pre>
<p>Then it should work with the first command you tried.</p>
<p>(Verify that bazel actually <em>is</em> at <code>/usr/local/Cellar/bazel/0.3.1/bin/bazel</code>, that's just a guess.)</p>
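<p>If you go the symlink route, you'll also want <code>$HOME/bin</code> on your PATH in every new shell. A hedged sketch, assuming you use bash with a <code>~/.bash_profile</code>:</p>
<pre><code># persist the PATH change so new terminals can find bazel
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bash_profile
source ~/.bash_profile
bazel version   # sanity check
</code></pre>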
|
cmd|terminal|tensorflow|bazel
| 1
|
374,328
| 39,281,956
|
Remove columns where all items in column are identical (excluding header) and match a specified string
|
<p>My question is an extension of <a href="https://stackoverflow.com/questions/21164910/delete-column-in-pandas-based-on-condition">Delete Column in Pandas based on Condition</a>, but I have headers and the information isn't binary. Instead of removing a column containing all zeros, I'd like to be able to pass a variable "search_var" (containing a string) to filter out columns containing only that string.</p>
<p>I initially thought I should read in the df and iterate across each column, read each column in as a list, and print columns where len(col_list) > 2 and search_var not in col_list. The solution provided in the previous post, involving a boolean dataframe (df != search_var), made me think there might be a simpler way, but how do I get around the issue that the header will not match and therefore I cannot purely filter on True/False?</p>
<p>What I have (non-working):</p>
<pre><code>import pandas as pd
df = pd.read_table('input.tsv', dtype=str)
with open('output.tsv', 'aw') as ofh:
df['col_list'] = list(df.values)
if len(col_list) < 3 and search_var not in col_list:
df.to_csv(ofh, sep='\t', encoding='utf-8', header=False)
</code></pre>
<h1>Example input, search_var = 'red'</h1>
<pre><code>Name Header1 Header2 Header3
name1 red red red
name2 red orange red
name3 red yellow red
name4 red green red
name5 red blue blue
</code></pre>
<h1>Expected Output</h1>
<pre><code>Name Header2 Header3
name1 red red
name2 orange red
name3 yellow red
name4 green red
name5 blue blue
</code></pre>
|
<p>You can check the number of <code>non-red</code> items in each column; if it is not zero, select the column using <code>loc</code>:</p>
<pre><code>df.loc[:, (df != 'red').sum() != 0]
# Name Header2 Header3
# 0 name1 red red
# 1 name2 orange red
# 2 name3 yellow red
# 3 name4 green red
# 4 name5 blue blue
</code></pre>
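<p>To generalize this to the <code>search_var</code> from the question, a hedged one-liner using <code>any()</code> instead of counting (it reads the same way):</p>
<pre><code>search_var = 'red'
# keep only columns that contain at least one value different from search_var
df.loc[:, (df != search_var).any()]
</code></pre>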
|
python|pandas
| 2
|
374,329
| 39,407,254
|
how to set the primary key when writing a pandas dataframe to a sqlite database table using df.to_sql
|
<p>I have created a sqlite database using pandas df.to_sql however accessing it seems considerably slower than just reading in the 500mb csv file. </p>
<p>I need to: </p>
<ol>
<li>set the primary key for each table using the df.to_sql method</li>
<li>tell the sqlite database what datatype each of the columns in my
dataframe are - can I pass a list like [integer, integer, text, text]?</li>
</ol>
<p>code.... (format code button not working)</p>
<pre><code>if ext == ".csv":
df = pd.read_csv("/Users/data/" +filename)
columns = df.columns
columns = [i.replace(' ', '_') for i in columns]
df.columns = columns
df.to_sql(name,con,flavor='sqlite',schema=None,if_exists='replace',index=True,index_label=None, chunksize=None, dtype=None)
</code></pre>
|
<p>Unfortunately there is no way right now to set a primary key in the pandas df.to_sql() method. Additionally, just to make things more of a pain there is no way to set a primary key on a column in sqlite after a table has been created. </p>
<p>However, a work around at the moment is to create the table in sqlite with the pandas df.to_sql() method. Then you could create a duplicate table and set your primary key followed by copying your data over. Then drop your old table to clean up.</p>
<p>It would be something along the lines of this.</p>
<pre><code>import pandas as pd
import sqlite3
df = pd.read_csv("/Users/data/" + filename)
columns = df.columns
columns = [i.replace(' ', '_') for i in columns]
df.columns = columns
#connect to the database
conn = sqlite3.connect('database')
#write the pandas dataframe to a sqlite table
df.to_sql(name, conn, flavor='sqlite', schema=None, if_exists='replace', index=True, index_label=None, chunksize=None, dtype=None)
c = conn.cursor()
c.executescript('''
PRAGMA foreign_keys=off;
BEGIN TRANSACTION;
ALTER TABLE table RENAME TO old_table;
/*create a new table with the same column names and types while
defining a primary key for the desired column*/
CREATE TABLE new_table (col_1 TEXT PRIMARY KEY NOT NULL,
col_2 TEXT);
INSERT INTO new_table SELECT * FROM old_table;
DROP TABLE old_table;
COMMIT TRANSACTION;
PRAGMA foreign_keys=on;''')
#close out the connection
c.close()
conn.close()
</code></pre>
<p>In the past I have done this as I have faced this issue. Just wrapped the whole thing as a function to make it more convenient... </p>
<p>In my limited experience with sqlite I have found that not being able to add a primary key after a table has been created, not being able to perform Update Inserts or UPSERTS, and UPDATE JOIN has caused a lot of frustration and some unconventional workarounds.</p>
<p>Lastly, in the pandas df.to_sql() method there is a dtype keyword argument that can take a dictionary of column names:types. IE: dtype = {col_1: TEXT}</p>
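<p>A hedged sketch of that <code>dtype</code> argument in use (the column names are placeholders; with a plain sqlite3 connection the values are SQL type strings):</p>
<pre><code>df.to_sql(name, conn, if_exists='replace', index=False,
          dtype={'col_1': 'TEXT', 'col_2': 'INTEGER'})
</code></pre>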
|
python|sqlite|pandas|primary-key
| 15
|
374,330
| 39,226,024
|
how to convert header row into new columns in python pandas?
|
<p>I am having following dataframe:</p>
<pre><code>A,B,C
1,2,3
</code></pre>
<p>I have to convert above dataframe like following format:</p>
<pre><code>cols,vals
A,1
B,2
c,3
</code></pre>
<p>How to create column names as a new column in pandas?</p>
|
<p>You can transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': {0: 1}, 'C': {0: 3}, 'B': {0: 2}})
print (df)
A B C
0 1 2 3
print (df.T)
0
A 1
B 2
C 3
df1 = df.T.reset_index()
df1.columns = ['cols','vals']
print (df1)
cols vals
0 A 1
1 B 2
2 C 3
</code></pre>
<p>If <code>DataFrame</code> has more rows, you can use:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': {0: 1, 1: 9, 2: 1},
'C': {0: 3, 1: 6, 2: 7},
'B': {0: 2, 1: 4, 2: 8}})
print (df)
A B C
0 1 2 3
1 9 4 6
2 1 8 7
df.index = 'vals' + df.index.astype(str)
print (df.T)
vals0 vals1 vals2
A 1 9 1
B 2 4 8
C 3 6 7
df1 = df.T.reset_index().rename(columns={'index':'cols'})
print (df1)
cols vals0 vals1 vals2
0 A 1 9 1
1 B 2 4 8
2 C 3 6 7
</code></pre>
|
python|python-2.7|pandas|dataframe|transpose
| 2
|
374,331
| 39,299,726
|
Can't find package on Anaconda Navigator. What to do next?
|
<p>I am trying to install "pulp" module in Anaconda Navigator's Environment tabs. But when I search in "All" packages I can't find it. It happened with other packages too. </p>
<p><a href="https://i.stack.imgur.com/JqYIF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JqYIF.png" alt="enter image description here"></a></p>
<p>Is there any way to install my package to the desired environment?</p>
<p>I tried to install it by opening a terminal in the environment, but I see that afterwards it won't show up in the list. </p>
<p>What am I missing here?</p>
|
<ol>
<li><p>Click <em>Open Terminal</em> from environment.</p>
<p><a href="https://i.stack.imgur.com/EiiFc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EiiFc.png" alt="open" /></a></p>
</li>
<li><p>Execute <code>conda install (package-name)</code> in terminal mode. (The image below shows the installation of a package named <code>Keras</code>.)</p>
<p><a href="https://i.stack.imgur.com/P9ivr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P9ivr.png" alt="execute" /></a></p>
</li>
</ol>
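<p>If the package still doesn't show up in the Navigator search, it may simply not be on the default Anaconda channels. For <code>pulp</code>, a hedged alternative (assuming the package is published on conda-forge or PyPI):</p>
<pre><code>conda install -c conda-forge pulp
# or, inside the activated environment:
pip install pulp
</code></pre>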
|
python|numpy|scipy|anaconda|pulp
| 30
|
374,332
| 39,119,025
|
I wrote my program following the steps of Building Autoencoders in Keras from the Keras blog, but it errors as follows:
|
<pre><code> callbacks=[TensorBoard(log_dir='/Users/lyj/Programs/KiseliuGit/DeepLearning/tmp/autoencoder')])
File "/Library/Python/2.7/site-packages/keras/callbacks.py", line 457, in __init__
raise Exception('TensorBoard callback only works '
Exception: TensorBoard callback only works with the TensorFlow backend.
</code></pre>
<p>Why? I absolutely followed the steps, here's my program:</p>
<pre><code>#coding:utf-8
from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.datasets import mnist
import numpy as np
from keras.callbacks import TensorBoard
input_img = Input(shape=(1, 28, 28))
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = MaxPooling2D((2, 2), border_mode='same')(x)
x = Convolution2D(8 ,3, 3, activation='relu', border_mode='same')(x)
encoded = MaxPooling2D((2,2), border_mode='same')(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded)
x = UpSampling2D((2,2))(x)
x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x)
x = UpSampling2D((2,2))(x)
x = Convolution2D(16, 3, 3, activation='relu')(x)
x = UpSampling2D((2,2))(x)
decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 1, 28, 28))
x_test = np.reshape(x_test, (len(x_test), 1, 28, 28))
autoencoder.fit(x_train, x_test, nb_epoch=50, batch_size=128, shuffle=True,validation_data=(x_test, x_test),
callbacks=[TensorBoard(log_dir='/Users/kiseliu/DeepLearning/tmp/autoencoder')])
</code></pre>
<p>And I entered the command "tensorboard --logdir=/Users/kiseliu/DeepLearning/tmp/autoencoder" in cmd tools before I ran this program.</p>
|
<p>Change your Keras backend from Theano to TensorFlow in the <code>~/.keras/keras.json</code> file.</p>
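<p>A hedged sketch of what that file looks like for the Keras 1.x era (only the <code>backend</code> field needs to change; the other fields are shown with their usual defaults):</p>
<pre><code>{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
</code></pre>
<p>Note that the program above uses channels-first shapes like <code>(1, 28, 28)</code>, which match <code>"image_dim_ordering": "th"</code>; if you switch that to <code>"tf"</code> the input shapes need reordering too.</p>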
|
tensorflow|keras
| 2
|
374,333
| 39,396,694
|
Running tensorflow as daemon and piping all output to log file
|
<p>To run tensorflow model as daemon I use : </p>
<pre><code>nohup python translate.py --data_dir data &
</code></pre>
<p>This logs error messages to nohup.out, but it does not capture TensorFlow's stdout. This thread describes a related issue: <a href="https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/SO_JRts-VIs" rel="nofollow">https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/SO_JRts-VIs</a> but does not provide a solution.</p>
<p>I require to run as daemon as model takes quite some time to run. This is to prevent ssh disconnecting due to inactivity.</p>
<p>How to run Tensorflow as daemon process and pipe all output to file ?</p>
|
<p>Why not try </p>
<pre><code>nohup python translate.py --data_dir data &> outputfile.txt
</code></pre>
<p>You can then suspend the job yourself with <code>kill -19 %1</code> (SIGSTOP) to suspend the first job, or whatever number it's present as. Then <code>kill -CONT %1</code> to resume it.</p>
<p>Other options:</p>
<ul>
<li>"disown" command </li>
<li>tmux (as suggested in the comments)</li>
<li>screen (similar to tmux)</li>
<li>using mosh instead of ssh</li>
<li>save the outputs from within the file translate.py instead of printing to stdout</li>
</ul>
|
python|linux|tensorflow
| 0
|
374,334
| 39,276,650
|
Python Pandas ValueError on simple query
|
<p>The following line causes a ValueError (Pandas 17.1), and I'm trying to understand why.</p>
<pre><code>x = (matchdf['ANPR Matched_x'] == 1)
</code></pre>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p>I'm trying to use it for following conditional assignment:</p>
<pre><code>matchdf.loc[x, 'FullMatch'] = 1
</code></pre>
<p>But I can't get past the previous issue.</p>
<p>I'm sure I've done this kind of thing dozens of times before, and I can't see why it should matter what is in the dataframe, but perhaps it does? or more likely, I'm probably making a silly mistake I just can't see!</p>
<p>Thanks for any help.</p>
<p>EDIT: For more context here's some preceding code:</p>
<pre><code>inpairs = []
for m in inmatchedpairs:
# more code
p = {'Type In': mtype ,'Best In Time': besttime, 'Best G In Time': bestgtime,
'Reg In': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
inpairs.append(p)
outpairs = []
for m in outmatchedpairs:
# more code
p = {'Type Out': mtype ,'Best Out Time': besttime, 'Best G Out Time': bestgtime,
'Reg Out': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
outpairs.append(p)
indf = pd.DataFrame(inpairs)
outdf = pd.DataFrame(outpairs)
matchdf = pd.merge(indf, outdf, how='outer', on='ANPR Match Key')
matchdf['FullMatch'] = 0
x = (matchdf['ANPR Matched_x'] == 0)
</code></pre>
<p>I get the error on the last line.</p>
|
<p>Use <code>loc</code> to set the values.</p>
<pre><code>matchdf.loc[matchdf['ANPR Matched_x'] == 1, 'FullMatch'] = 1
</code></pre>
<p><strong>Example</strong></p>
<pre><code>df = pd.DataFrame({'ANPR Matched_x': [0, 1, 1, 0], 'FullMatch': [0] * 4})
>>> df
   ANPR Matched_x  FullMatch
0               0          0
1               1          0
2               1          0
3               0          0

df.loc[df['ANPR Matched_x'] == 1, 'FullMatch'] = 1
>>> df
   ANPR Matched_x  FullMatch
0               0          0
1               1          1
2               1          1
3               0          0
</code></pre>
|
python|pandas|numpy
| 2
|
374,335
| 39,000,115
|
How can I set the colors per value when coloring plots by a DataFrame column?
|
<p>In matplotlib (in particular, pandas), how can I map specific colors to values of a column that I use for differentiating colors?</p>
<p>Let's say I have a column ...</p>
<pre><code>>> df["country"]
DE
EN
US
DE
</code></pre>
<p>... and now I'd like to plot values from the DataFrame where each country is colored differently. How can I determine which country gets which color? With a colormap? I wasn't able to find the proper documentation, unfortunately.</p>
<p>I would like to apply a dict like this:</p>
<pre><code># pseudo-code
colormapping = {"DE": "blue", ...}
df.plot(colorby="country", colormapping)
</code></pre>
<p>Edit:</p>
<p>Here's a sample DataFrame.</p>
<pre><code> outlook play temperature country
0 sunny True 25 DE
1 sunny True 25 EN
2 overcast True 19 DE
3 rain False 21 US
4 overcast False 33 IT
5 rain False 27 EN
6 rain False 22 FR
7 overcast True 26 FR
8 sunny True 13 FR
9 sunny True 16 CH
</code></pre>
|
<p>You can do so by specifying the dictionary mapping of hue levels to corresponding <code>matplotlib</code> colors in the <a href="https://stanford.edu/~mwaskom/software/seaborn/tutorial/color_palettes.html" rel="noreferrer"><code>palette</code></a> argument of a <a href="https://stanford.edu/~mwaskom/software/seaborn/tutorial/categorical.html" rel="noreferrer"><code>categorical plot</code></a> using <code>seaborn</code> as shown:</p>
<pre><code>sns.set(style="whitegrid")
sns.swarmplot(x="outlook", y="temperature", hue="country", data=df, size=8,
palette={'DE':'b', 'EN':'g', 'US':'r','IT':'c', 'FR':'y', 'CH':'k'})
</code></pre>
<p><a href="https://i.stack.imgur.com/s5TVE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s5TVE.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|colors|seaborn
| 8
|
374,336
| 39,071,334
|
Solving Non-Linear Differential Equation Sympy
|
<p>This code only works for solving the differential equation v_equation if v(t) isn't squared. When I squared it, it returned the error PolynomialDivisionFailed. Is there another way of doing this with Sympy, or should I find a different python package for doing these sorts of calculations?</p>
<pre><code>from sympy import *
from matplotlib import pyplot as plt
import numpy as np
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
f1 = g * m
t = Symbol('t')
v = Function('v')
v_equation = dsolve(f1 - k * (v(t) ** 2) - m * Derivative(v(t)), 0)
C1 = Symbol('C1')
C1_ic = solve(v_equation.rhs.subs({t:0}),C1)[0]
v_equation = v_equation.subs({C1:C1_ic})
func = lambdify(t, v_equation.rhs,'numpy')
</code></pre>
|
<p>From my experience with symbolic math packages, I would not recommend performing (symbolic) calculations using floating point constants. It is better to define equations using symbolic constants, perform calculations as far as possible, and then substitute with numerical values. </p>
<p>With this approach, Sympy can provide a solution for this D.E. </p>
<p>First, define symbolic constants. To aid calculations, note that we can provide additional information about these constants (e.g., real, positive, e.t.c)</p>
<pre><code>import sympy as sp
t = sp.symbols('t', real = True)
g, k, m = sp.symbols('g, k, m', real = True, positive = True)
v = sp.Function('v')
</code></pre>
<p>The symbolic solution for the DE can be obtained as follows</p>
<pre><code>f1 = g * m
eq = f1 - k * (v(t) ** 2) - m * sp.Derivative(v(t))
sol = sp.dsolve(eq,v(t)).simplify()
</code></pre>
<p>The solution <code>sol</code> will be a function of <code>k</code>, <code>m</code>, <code>g</code>, and a constant <code>C1</code>. In general, there will be two, complex <code>C1</code> values corresponding to the initial condition. However, both values of <code>C1</code> result in the same (real-valued) solution when substituted in <code>sol</code>.</p>
<p>Note that if you don't need a symbolic solution, a numerical ODE solver, such as Scipy's <code>odeint</code>, may be used. The code would be the following (for an initial condition <code>0</code>):</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
def fun(v, t, m, k, g):
return (g*m - k*v**2)/m
tn = np.linspace(0, 10, 101)
soln = odeint(fun, 0, tn, args=(1000, 0.2, 9.8))
</code></pre>
<p><code>soln</code> is an array of samples <code>v(t)</code> corresponding to the <code>tn</code> elements</p>
|
python|numpy|sympy
| 4
|
374,337
| 39,376,891
|
Why is numpy's sine function so inaccurate at some points?
|
<p>I just checked <code>numpy</code>'s <code>sine</code> function. Apparently, it produce highly inaccurate results around pi. </p>
<pre><code>In [26]: import numpy as np
In [27]: np.sin(np.pi)
Out[27]: 1.2246467991473532e-16
</code></pre>
<p>The expected result is 0. Why is <code>numpy</code> so inaccurate there?</p>
<p>To some extent, I feel uncertain whether it is acceptable to regard the calculated result as inaccurate: its absolute error comes within one machine epsilon (for binary64), whereas the relative error is <code>+inf</code> -- which is why I feel somewhat confused. Any idea?</p>
<p>[Edit] I fully understand that floating-point calculation can be inaccurate. But most of the floating-point libraries can manage to deliver results within a small range of error. Here, the relative error is +inf, which seems unacceptable. Just imagine that we want to calculate </p>
<pre><code>1/(1e-16 + sin(pi))
</code></pre>
<p>The results would be disastrously wrong if we use numpy's implementation. </p>
|
<p>The main problem here is that <code>np.pi</code> is not exactly π, it's a finite binary floating point number that is close to the true irrational real number π but still off by ~1e-16. <code>np.sin(np.pi)</code> is actually returning a value closer to the true infinite-precision result for <code>sin(np.pi)</code> (i.e. the ideal mathematical <code>sin()</code> function being given the approximated <code>np.pi</code> value) than 0 would be.</p>
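<p>You can verify this with higher-precision arithmetic. A hedged sketch (assuming the <code>mpmath</code> package is available): since sin crosses zero at pi with slope -1, sin(pi - eps) is approximately eps, so <code>np.sin(np.pi)</code> essentially reports how far <code>np.pi</code> is from the true pi.</p>
<pre><code>import numpy as np
import mpmath

mpmath.mp.dps = 30                   # 30 significant decimal digits
err = mpmath.pi - mpmath.mpf(np.pi)  # gap between the true pi and the float np.pi
print(err)            # ~1.2246e-16
print(np.sin(np.pi))  # ~1.2246e-16 -- the same number
</code></pre>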
|
python|numpy|floating-point
| 7
|
374,338
| 39,309,327
|
Why does this piece of code gets slower with time?
|
<p>I'm trying to preprocess my images by adding them to a 4D array. It starts off right but gets slower with time. I thought this was due to my CPU, but I tried running it on a GPU in the cloud and it still gets slower. Is this due to RAM? How can I optimize this to run faster?</p>
<pre><code>import tensorflow as tf
import os
import glob
import numpy as np
from PIL import Image
from random import randint
sess = tf.InteractiveSession()
def process_image(filename):
im = Image.open(filename)
array = np.array(im,dtype=np.uint8)
#Resize and normalize
resized = tf.image.resize_images(array, size[0], size[1], method = 0)
normalized = tf.image.per_image_whitening(resized)
result = sess.run(normalized)
return result
counter_train = 0
counter_val = 0
for i, foldername in enumerate(foldernames):
ind = 0
index = randint(ind,ind+29)
for j, filename in enumerate(glob.glob(foldername + '*.ppm')):
print filename
result = process_image(filename)
if j == index:
npX_val[counter_val]=result
npClass_val[counter_val]=i
ind += 30
index = randint(ind,ind+29)
counter_val += 1
else:
npX_train[counter_train]=result
npClass_train[counter_train]=i
counter_train += 1
print counter_val
print counter_train
</code></pre>
<p>I also ran pyinstrument and I get this</p>
<pre><code>3.160 <module> process.py:1
└─ 2.763 <module> tensorflow/__init__.py:19
└─ 2.761 <module> tensorflow/python/__init__.py:26
├─ 2.144 <module> tensorflow/contrib/__init__.py:15
│ ├─ 0.955 <module> tensorflow/contrib/learn/__init__.py:65
│ │ └─ 0.953 <module> tensorflow/contrib/learn/python/__init__.py:16
│ │ └─ 0.950 <module> tensorflow/contrib/learn/python/learn/__init__.py:16
│ │ ├─ 0.889 <module> tensorflow/contrib/learn/python/learn/estimators/__init__.py:16
│ │ │ ├─ 0.789 <module> tensorflow/contrib/learn/python/learn/estimators/autoencoder.py:16
│ │ │ │ └─ 0.770 <module> tensorflow/contrib/learn/python/learn/estimators/base.py:16
│ │ │ │ └─ 0.764 <module> tensorflow/contrib/learn/python/learn/estimators/estimator.py:16
│ │ │ │ └─ 0.729 <module> tensorflow/contrib/learn/python/learn/learn_io/__init__.py:16
│ │ │ │ └─ 0.724 <module> tensorflow/contrib/learn/python/learn/learn_io/pandas_io.py:16
│ │ │ │ └─ 0.724 <module> pandas/__init__.py:5
│ │ │ │ ├─ 0.307 <module> pandas/core/api.py:5
│ │ │ │ │ └─ 0.283 <module> pandas/core/groupby.py:1
│ │ │ │ │ └─ 0.268 <module> pandas/core/frame.py:10
│ │ │ │ │ ├─ 0.135 <module> pandas/core/series.py:3
│ │ │ │ │ │ └─ 0.116 <module> pandas/tools/plotting.py:3
│ │ │ │ │ │ └─ 0.112 <module> pandas/tseries/converter.py:1
│ │ │ │ │ │ ├─ 0.061 <module> matplotlib/__init__.py:101
│ │ │ │ │ │ └─ 0.044 <module> matplotlib/dates.py:111
│ │ │ │ │ └─ 0.102 <module> pandas/core/generic.py:2
│ │ │ │ │ └─ 0.085 <module> pandas/core/internals.py:1
│ │ │ │ │ └─ 0.075 <module> pandas/sparse/array.py:3
│ │ │ │ │ └─ 0.070 <module> pandas/core/ops.py:5
│ │ │ │ │ └─ 0.066 <module> pandas/computation/__init__.py:2
│ │ │ │ │ └─ 0.065 <module> numexpr/__init__.py:22
│ │ │ │ ├─ 0.123 <module> pytz/__init__.py:9
│ │ │ │ │ └─ 0.110 <module> pkg_resources/__init__.py:15
│ │ │ │ │ ├─ 0.037 _call_aside pkg_resources/__init__.py:2938
│ │ │ │ │ │ └─ 0.037 _initialize_master_working_set pkg_resources/__init__.py:2953
│ │ │ │ │ └─ 0.036 load_module pkg_resources/extern/__init__.py:34
│ │ │ │ ├─ 0.114 <module> pandas/core/config_init.py:11
│ │ │ │ │ └─ 0.083 <module> pandas/formats/format.py:2
│ │ │ │ │ └─ 0.032 <module> pandas/core/index.py:2
│ │ │ │ └─ 0.067 <module> pandas/io/api.py:3
│ │ │ └─ 0.053 <module> tensorflow/contrib/learn/python/learn/estimators/linear.py:16
│ │ │ └─ 0.051 <module> tensorflow/contrib/linear_optimizer/__init__.py:20
│ │ │ └─ 0.043 <module> tensorflow/contrib/linear_optimizer/python/ops/sdca_ops.py:15
│ │ └─ 0.033 <module> tensorflow/contrib/learn/python/learn/dataframe/__init__.py:16
│ ├─ 0.711 <module> tensorflow/contrib/distributions/__init__.py:73
│ │ ├─ 0.508 <module> tensorflow/contrib/distributions/python/ops/chi2.py:15
│ │ │ └─ 0.506 <module> tensorflow/contrib/distributions/python/ops/gamma.py:15
│ │ │ └─ 0.506 <module> tensorflow/contrib/framework/__init__.py:58
│ │ │ └─ 0.498 <module> tensorflow/contrib/framework/python/ops/__init__.py:15
│ │ │ └─ 0.489 <module> tensorflow/contrib/framework/python/ops/embedding_ops.py:15
│ │ │ └─ 0.487 <module> tensorflow/contrib/layers/__init__.py:79
│ │ │ └─ 0.482 <module> tensorflow/contrib/layers/python/layers/__init__.py:15
│ │ │ ├─ 0.172 <module> tensorflow/contrib/layers/python/layers/layers.py:17
│ │ │ │ └─ 0.160 <module> tensorflow/python/ops/standard_ops.py:17
│ │ │ │ └─ 0.061 <module> tensorflow/python/ops/gradients.py:15
│ │ │ ├─ 0.131 <module> tensorflow/contrib/layers/python/layers/optimizers.py:15
│ │ │ │ └─ 0.127 <module> tensorflow/python/training/training.py:137
│ │ │ │ └─ 0.035 <module> tensorflow/python/training/adadelta.py:16
│ │ │ │ └─ 0.035 <module> tensorflow/python/training/training_ops.py:16
│ │ │ ├─ 0.069 <module> tensorflow/contrib/layers/python/layers/feature_column.py:68
│ │ │ ├─ 0.053 <module> tensorflow/contrib/layers/python/layers/embedding_ops.py:15
│ │ │ │ └─ 0.050 <module> tensorflow/contrib/layers/python/ops/sparse_feature_cross_op.py:15
│ │ │ │ └─ 0.045 load_op_library tensorflow/python/framework/load_library.py:40
│ │ │ └─ 0.048 <module> tensorflow/contrib/layers/python/layers/target_column.py:16
│ │ │ └─ 0.046 <module> tensorflow/contrib/metrics/__init__.py:135
│ │ │ └─ 0.039 <module> tensorflow/contrib/metrics/python/ops/metric_ops.py:19
│ │ │ └─ 0.037 <module> tensorflow/contrib/metrics/python/ops/set_ops.py:15
│ │ │ └─ 0.034 load_op_library tensorflow/python/framework/load_library.py:40
│ │ └─ 0.161 <module> tensorflow/contrib/distributions/python/ops/bernoulli.py:15
│ │ └─ 0.158 <module> tensorflow/python/ops/nn.py:271
│ │ └─ 0.087 <module> tensorflow/python/ops/init_ops.py:16
│ │ └─ 0.085 <module> tensorflow/python/ops/nn_ops.py:15
│ │ └─ 0.060 <module> tensorflow/python/ops/gen_nn_ops.py:4
│ │ └─ 0.057 _InitOpDefLibrary tensorflow/python/ops/gen_nn_ops.py:1630
│ │ └─ 0.054 Merge google/protobuf/text_format.py:291
│ │ └─ 0.052 MergeLines google/protobuf/text_format.py:331
│ │ └─ 0.052 _ParseOrMerge google/protobuf/text_format.py:350
│ │ └─ 0.052 _MergeField google/protobuf/text_format.py:374
│ │ └─ 0.052 _MergeField google/protobuf/text_format.py:374
│ │ └─ 0.038 _MergeField google/protobuf/text_format.py:374
│ ├─ 0.265 <module> tensorflow/contrib/bayesflow/__init__.py:18
│ │ └─ 0.264 <module> tensorflow/contrib/bayesflow/python/ops/stochastic_graph.py:38
│ │ ├─ 0.145 <module> tensorflow/python/ops/array_ops.py:70
│ │ │ ├─ 0.068 <module> tensorflow/python/ops/gen_math_ops.py:4
│ │ │ │ └─ 0.065 _InitOpDefLibrary tensorflow/python/ops/gen_math_ops.py:2378
│ │ │ │ └─ 0.063 Merge google/protobuf/text_format.py:291
│ │ │ │ └─ 0.063 MergeLines google/protobuf/text_format.py:331
│ │ │ │ └─ 0.063 _ParseOrMerge google/protobuf/text_format.py:350
│ │ │ │ └─ 0.063 _MergeField google/protobuf/text_format.py:374
│ │ │ │ └─ 0.059 _MergeField google/protobuf/text_format.py:374
│ │ │ │ └─ 0.052 _MergeField google/protobuf/text_format.py:374
│ │ │ └─ 0.045 <module> tensorflow/python/ops/gen_array_ops.py:4
│ │ │ └─ 0.039 _InitOpDefLibrary tensorflow/python/ops/gen_array_ops.py:2677
│ │ │ └─ 0.038 Merge google/protobuf/text_format.py:291
│ │ │ └─ 0.038 MergeLines google/protobuf/text_format.py:331
│ │ │ └─ 0.038 _ParseOrMerge google/protobuf/text_format.py:350
│ │ │ └─ 0.038 _MergeField google/protobuf/text_format.py:374
│ │ │ └─ 0.035 _MergeField google/protobuf/text_format.py:374
│ │ └─ 0.085 <module> tensorflow/python/ops/math_ops.py:210
│ ├─ 0.071 <module> tensorflow/contrib/slim/__init__.py:18
│ │ └─ 0.046 <module> tensorflow/contrib/slim/python/slim/data/tfexample_decoder.py:20
│ │ └─ 0.036 TFExampleDecoder tensorflow/contrib/slim/python/slim/data/tfexample_decoder.py:273
│ ├─ 0.051 <module> tensorflow/contrib/quantization/__init__.py:16
│ │ └─ 0.050 <module> tensorflow/contrib/quantization/python/__init__.py:15
│ └─ 0.045 <module> tensorflow/contrib/copy_graph/__init__.py:20
│ └─ 0.043 <module> tensorflow/contrib/copy_graph/python/util/copy_elements.py:27
├─ 0.299 <module> numpy/__init__.py:106
│ └─ 0.235 <module> numpy/add_newdocs.py:10
│ └─ 0.230 <module> numpy/lib/__init__.py:1
│ └─ 0.160 <module> numpy/lib/type_check.py:3
│ └─ 0.158 <module> numpy/core/__init__.py:1
│ └─ 0.036 <module> numpy/testing/__init__.py:7
├─ 0.151 <module> tensorflow/python/pywrap_tensorflow.py:11
│ └─ 0.148 swig_import_helper tensorflow/python/pywrap_tensorflow.py:13
├─ 0.072 <module> tensorflow/core/framework/graph_pb2.py:4
└─ 0.039 <module> tensorflow/python/platform/test.py:57
</code></pre>
|
<p>I don't know much about TensorFlow, but I believe the problem is <code>process_image</code> is using a bunch of globals, particularly <code>tf</code> and <code>sess</code>. Every time it's called, new resize and whitening operations are added to TensorFlow's default graph rather than being reused, so the graph keeps growing. First there's <a href="https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF" rel="nofollow">1, then 2, then 3, then 4, 5, 6, ...</a></p>
<pre><code>1 + 2 + 3 + 4 + 5 + ... + n = n ( n + 1 ) / 2
</code></pre>
<p>So by 100 images you've actually processed 5,050. This is an O(n<sup>2</sup>) algorithm, which means its runtime (and, in this case, memory) will grow quadratically as the number of images increases.</p>
<p>Again, I don't know much about TensorFlow, but perhaps constructing the TensorFlow operations once and reusing them for every image makes more sense (see the sketch below)? Though you appear to be interested in the intermediate results?</p>
<p>And, as a very good rule of thumb, avoid globals. It's hard to tell them apart from local variables, they break the neat encapsulation of functions, making the program hard to understand, and they lead to accumulation problems like this.</p>
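<p>A minimal sketch of that idea, assuming the TF 0.x API used in the question (<code>size</code>, <code>sess</code>, <code>Image</code> and <code>np</code> as defined there): build the resize/whitening ops once with a placeholder, then only feed data through them.</p>
<pre><code># build the preprocessing ops once, outside the loop
input_ph = tf.placeholder(tf.uint8, shape=[None, None, 3])  # hypothetical RGB input
resized = tf.image.resize_images(input_ph, size[0], size[1], method=0)
normalized = tf.image.per_image_whitening(resized)

def process_image(filename):
    array = np.array(Image.open(filename), dtype=np.uint8)
    # no new graph nodes are created here -- we only feed data
    return sess.run(normalized, feed_dict={input_ph: array})
</code></pre>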
|
python|performance|numpy|tensorflow
| 1
|
374,339
| 39,232,013
|
Extracting tables using pandas read_html function?
|
<p>This is an unusual problem. I am trying to extract a table from a certain website (the link can't be given for security reasons). The problem is that the site loads the table when accessed through a browser, but when we use <code>inspect element</code> on any of the values/tables, it is not visible. It just shows <code><html>_</html></code> with some scripts and links inside. Initially I tried to extract the table using <code>beautifulsoup</code> but it was unsuccessful. Then I used pandas
<code>pandas.read_html(html)</code>, but the site contains more than one table and its output is something like this</p>
<pre><code>[ Code Name
0 A John
1 B Terry
2 C Kitty
Column 1 Column 2 Column 3
0 1 0.6173661242 8
1 2 0.7232098163 20
2 3 0.9954581943 39
3 4 0.5595425507 18
4 5 0.9644025159 20
5 6 0.3914102544 29
6 7 0.0154642132 49
....
[873 rows x 3 columns],
0\n\t\t\t\t\t\t\t\t\t
0 0 ]
</code></pre>
<p>Then I tried something like this: <code>pandas.read_html(html, match="Column 1")</code>, which returns this error</p>
<blockquote>
<p>ValueError: No tables found matching pattern 'Column 1' </p>
</blockquote>
<p>any idea how we can use read_html to extract tables?</p>
|
<p>When scraping data off a secure website, the site may be using JavaScript to load the tables, so you never see the HTML-styled code. This could be why BeautifulSoup is not returning anything.</p>
<p>Does the "scripts and links inside" look like JavaScript?</p>
<p>Maybe have a look at <a href="http://selenium-python.readthedocs.io/" rel="nofollow">Selenium?</a></p>
|
python|html|pandas|web-scraping
| 0
|
374,340
| 19,550,655
|
numpy: modifying a transposed array doesn't work as expected
|
<p>I have, from a more complex program, this code:</p>
<pre><code>import numpy as np
ph=np.arange(6).reshape([2,3])
T=np.transpose(ph)
print 'T:\n',T
print 'ph:\n',ph # printing arrays before for cycle
for i in range(0,len(T)):
T[i]=2*T[i]
print 'ph:\n', ph # printing arrays after for cycle
print 'T:\n',T
</code></pre>
<p>I expect to get as output T and</p>
<pre><code>ph:
[[0 1 2]
[3 4 5]]
</code></pre>
<p>instead, i have</p>
<pre><code>ph:
[[ 0 2 4]
[ 6 8 10]]
T:
[[ 0 6]
[ 2 8]
[ 4 10]]
</code></pre>
<p>So when I multiply every line of T by 2 inside the for cycle, I am doing the same to ph. Why?</p>
|
<p>You can find the reason in the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html" rel="nofollow">docstring of <code>np.transpose</code></a>:</p>
<pre><code>Returns
-------
p : ndarray
    `a` with its axes permuted.  A view is returned whenever
    possible.
</code></pre>
<p>Solution is to use <code>T = ph.T.copy()</code> if you don't want the view, but a copy. </p>
|
python|numpy|transpose
| 3
|
374,341
| 19,623,150
|
Issue installing Numpy on Mac OSX using virtualenv
|
<p>I am attempting to install Numpy via pip (Python version 2.7.5) and keep running into an error that states:</p>
<p>SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel</p>
<p>I am using virtualenv with virtualenv wrapper if that helps. I installed python via homebrew</p>
|
<p>You need to install the python development package from homebrew:</p>
<pre><code>brew install python-dev
</code></pre>
<h2>edit</h2>
<p>Indeed, the homebrew formula does not exist. You could find the missing headers by following these steps:
<a href="https://stackoverflow.com/questions/15931331/how-to-install-the-python-development-headers-on-mac-os-x">How to install the Python development headers on Mac OS X?</a> but I am not sure where you should link them to.</p>
|
python|macos|numpy
| 0
|
374,342
| 19,472,566
|
python read_fwf error: 'dtype is not supported with python-fwf parser'
|
<p>Using python 2.7.5 and pandas 0.12.0, I'm trying to import fixed-width-font text files into a DataFrame with 'pd.io.parsers.read_fwf()'. The values I'm importing are all numeric, but it's important that leading zeros be preserved, so I'd like to specify the dtype as string rather than int.</p>
<p>According to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_fwf.html#pandas.io.parsers.read_fwf">documentation for this function</a>, the dtype attribute is supported in read_fwf, but when I try to use it:</p>
<p><code>data= pd.io.parsers.read_fwf(file, colspecs = ([79,81], [87,90]), header = None, dtype = {0: np.str, 1: np.str})</code></p>
<p>I get the error:</p>
<p><code>ValueError: dtype is not supported with python-fwf parser</code></p>
<p>I've tried as many variations as I can think of for setting 'dtype = something', but all of them return the same message. </p>
<p>Any help would be much appreciated! </p>
|
<p>Instead of specifying dtypes, specify a converter for the column you want to keep as str, building on @TomAugspurger's example:</p>
<pre><code>from io import StringIO
import pandas as pd
data = StringIO(u"""
121301234
121300123
121300012
""")
pd.read_fwf(data, colspecs=[(0,3),(4,8)], converters = {1: str})
</code></pre>
<p>Leads to</p>
<pre><code> \n Unnamed: 1
0 121 0123
1 121 0012
2 121 0001
</code></pre>
<p>Converters are a mapping from a column name or index to a function that converts the value in each cell (e.g. <code>int</code> would convert them to integers, <code>float</code> to floats, etc.)</p>
|
python|parsing|pandas
| 8
|
374,343
| 19,721,838
|
reprojectImageTo3D() typeError, OpenCV Python
|
<p>I'm not able to use reprojectImageTo3D() using python in the latest openCV version.
I keep getting "TypeError: disparity is not a numpy array". It's an iplImage of course.</p>
<pre><code>disparityImg = CreateImage( (320,240), IPL_DEPTH_32F, 1)
depthMapImg = CreateImage( (320,240), IPL_DEPTH_32F, 3)
depthMapImg = reprojectImageTo3D(disparityImg, Q)
</code></pre>
<p>But if I use an array for depthMapImg instead of an iplImage, I get "OpenCV Error: Assertion failed (stype == CV_8UC1 || stype == CV_16SC1 || stype == CV_32SC1 || stype == CV_32FC1) in reprojectImageTo3D,..."</p>
<p>This latter error makes me think the data types aren't matching between the array and reprojectImageTo3D().</p>
<p>Neither works, what am i to do?</p>
<p>official reprojectImageTo3D() doc here: <a href="http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20reprojectImageTo3D%28InputArray%20disparity,%20OutputArray%20_3dImage,%20InputArray%20Q,%20bool%20handleMissingValues,%20int%20ddepth%29" rel="nofollow">http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20reprojectImageTo3D%28InputArray%20disparity,%20OutputArray%20_3dImage,%20InputArray%20Q,%20bool%20handleMissingValues,%20int%20ddepth%29</a></p>
|
<p>Take a sharp look: it's <code>cv2.reprojectImageTo3D</code> (or, <code>cv.Reproject...</code>).</p>
<p>It seems you're trying to mix the old (deprecated) <code>cv</code> api with the newer <code>cv2</code> one. <em>Don't!</em></p>
<p><code>cv</code> uses wrapped IplImages, <code>cv2</code> uses numpy arrays.</p>
<p>So discard the old <code>cv</code> api, as it won't be supported in future versions, and avoid any code that's using IplImages.</p>
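<p>A minimal sketch of the <code>cv2</code> call with placeholder inputs (a real pipeline would take the disparity map from a stereo matcher and <code>Q</code> from <code>cv2.stereoRectify</code>):</p>
<pre><code>import cv2
import numpy as np

disparity = np.zeros((240, 320), dtype=np.float32)  # placeholder disparity map
Q = np.eye(4, dtype=np.float32)                     # placeholder reprojection matrix

points_3d = cv2.reprojectImageTo3D(disparity, Q)    # (240, 320, 3) array of XYZ
</code></pre>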
|
python|arrays|opencv|numpy
| 1
|
374,344
| 19,624,104
|
Equations in Python
|
<p>I'm trying to implement an equation from a paper in Python (black square equations) -</p>
<p><img src="https://i.stack.imgur.com/ePalE.png" alt="enter image description here"></p>
<p>So far I have a simplified model but I'm unable to generate the intended output (below image); I suspect the issue is with <a href="http://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.exp.html" rel="nofollow noreferrer">np.exp()</a> though I'm unsure - any suggestions of how I can do this?</p>
<pre><code>import numpy as np
import math
import matplotlib.pyplot as plt
f = 1e6
T = 1/f
Omega = 2*np.pi*f
i = np.arange(0,50e-6,100e-9)
y = np.sin(Omega*i) * (i**2) * np.exp(-i)
plt.figure(1)
plt.plot(i,y,'b-')
plt.grid()
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/eX8ui.png" alt="enter image description here"></p>
|
<p>To illustrate Jacob's comment, here's what you can get by tweaking the constants:</p>
<p><img src="https://i.stack.imgur.com/ZMeXP.png" alt="Graph"></p>
<p>Code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
f = 5
Omega = 2*np.pi*f
i = np.arange(0, 10, 0.001)
y = np.sin(Omega*i) * (i**2) * np.exp(-i)
plt.figure(1)
plt.plot(i,y,'b-')
plt.grid()
plt.show()
</code></pre>
<p>Or, you could keep the time scale and introduce an <em>h</em> of about 5e-6, as Bas Swinckels suggests in his answer:</p>
<pre><code>f = 1e6
Omega = 2*np.pi*f
i = np.arange(0,50e-6,100e-9)
y = np.sin(Omega*i) * (i**2) * np.exp(-i/5e-6)
</code></pre>
<p>This produces a very similar output.</p>
|
python|numpy|plot
| 2
|
374,345
| 19,731,012
|
Combine first two entries in each column as the header when reading excel file
|
<p>I've been searching this for a while but still can't figure it out. I appreciate if you can provide me some help.</p>
<p>I have an excel file:</p>
<pre><code> , John, James, Joan,
, Smith, Smith, Smith,
Index1, 234, 432, 324,
Index2, 2987, 234, 4354,
</code></pre>
<p>I'd like to read it into a dataframe, such that
"John Smith, James Smith, Joan Smith" is my header.
I've tried the follwoing, but my header is still "John, James, Joan"</p>
<pre><code>xl = pd.ExcelFile(myfile, header=None)
row = df.apply(lambda x: str(x.iloc[0]) + str(x.iloc[1]))
df.append(row,ignore_index=True)
nrow = df.shape[0]
df = pd.concat([df.ix[nrow:], df.ix[2:nrow-1]])
</code></pre>
|
<p>May be it's easier to do by hand?:</p>
<pre><code>>>> import itertools
>>> xl = pd.ExcelFile(myfile, header=None)
>>> sh = xl.book.sheet_by_index(0)
>>> rows = (sh.row_values(i) for i in xrange(sh.nrows))
>>> hd = zip(*itertools.islice(rows, 2))[1:] # read first two rows
>>> df = pd.DataFrame(rows) # create DataFrame from remaining rows
>>> df = df.set_index(0)
>>> df.columns = [' '.join(x) for x in hd] # rename columns
>>> df
John Smith James Smith Joan Smith
0
Index1 234 432 324
Index2 2987 234 4354
</code></pre>
|
python|excel|pandas
| 1
|
374,346
| 12,886,240
|
iter over dataframe
|
<p>I want to iterate over a Dataframe like this:</p>
<pre><code>for i in y.itertuples(): print i
</code></pre>
<p>result:</p>
<pre><code>(datetime.date(2012, 9, 10), 63.930000305175781, 64.589996337890625, 63.880001068115234, 64.099998474121094, 507700.0, 64.099998474121094)
(datetime.date(2012, 9, 11), 63.490001678466797, 63.790000915527344, 62.509998321533203, 63.759998321533203, 896600.0, 63.759998321533203)
</code></pre>
<p>How can I create a new DataFrame from each iterated tuple, with the date object as the index?</p>
|
<pre><code>pd.DataFrame.from_records([i], index=0)
</code></pre>
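<p>A hedged, expanded variant of that one-liner in context (using the first tuple element, the date, as the index):</p>
<pre><code>for row in y.itertuples():
    # row[0] is the index value (the date); the rest are the column values
    df_row = pd.DataFrame.from_records([row[1:]], index=[row[0]])
</code></pre>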
|
python|pandas
| 2
|
374,347
| 12,841,827
|
Accessing pandas Multiindex Dataframe using integer indexes
|
<p>I have the following pandas Dataframe:</p>
<pre><code>from pandas import DataFrame, MultiIndex
index = MultiIndex.from_tuples(zip([21,22,23],[45,45,46]), names=['A', 'B'])
df = DataFrame({'values': [0.67, 0.87, 0.23]}, index=index)
Out[10]:
values
A B
21 45 0.67
22 45 0.87
23 46 0.23
</code></pre>
<p>What is the correct way to access the value for the element (22,45)? I have tried all the obvious alternatives but any of them seems to work:</p>
<pre><code>df[22,45]
df[(22,45)]
df.ix[22,45]
df.ix[(22,45)]
</code></pre>
<p>I am using pandas 0.9.0.dev-1e68fd9.</p>
|
<p>The last two are the correct syntax, but there is a <a href="https://github.com/pydata/pandas/issues/2051" rel="nofollow">bug</a> preventing the result from being displayed.</p>
<pre><code>s = df.ix[(22, 45)]
</code></pre>
<p>works fine, but you can not display it</p>
|
python|pandas
| 2
|
374,348
| 28,974,425
|
Calculating Kendall's tau using scipy and groupby
|
<p>I have a csv file with precipitation data per year and per weather station. It looks like this:</p>
<pre><code>station_id year Sum
210018 1916 65.024
210018 1917 35.941
210018 1918 28.448
210018 1919 68.58
210018 1920 31.115
215400 1916 44.958
215400 1917 31.496
215400 1918 38.989
215400 1919 74.93
215400 1920 53.5432
</code></pre>
<p>I want to return a Kendall's tau correlation and p-value based upon unique station id's. So for above I want the correlation between sum and year for station id 210018 and 215400. </p>
<p>The correlation for station_id 210018 would then be -.20 and a p-value of .62 and for station_id 215400 correlation would be .40 and a p-value of .33. </p>
<p>I am trying to use this:</p>
<pre><code>grouped=df.groupby(['station_id'])
grouped.aggregate([tau, p_value=sp.stats.kendalltau(df.year, df.Sum)])
</code></pre>
<p>The error returned is a syntax error on the equal sign after p_value. </p>
<p>Any help would be appreciated.</p>
|
<p>One way to calculate this is to use <code>apply</code> on the <code>groupby</code> object:</p>
<pre><code>>>> import scipy.stats as st
>>> df.groupby(['station_id']).apply(lambda x: st.kendalltau(x['year'], x['Sum']))
station_id
210018 (-0.2, 0.62420612399)
215400 (0.4, 0.327186890661)
dtype: object
</code></pre>
|
python|pandas|dataframe|scipy|statistics
| 9
|
374,349
| 29,093,235
|
How to calculate group by cumulative sum for multiple columns in python
|
<p>I have a data set like,</p>
<pre><code>data=pd.DataFrame({'id':pd.Series([1,1,1,2,2,3,3,3]),'var1':pd.Series([1,2,3,4,5,6,7,8]),'var2':pd.Series([11,12,13,14,15,16,17,18]),
'var3':pd.Series([21,22,23,24,25,26,27,28])})
</code></pre>
<p>Here I need to calculate the groupwise cumulative sum for all columns (var1, var2, var3) based on id.
How can I write python code to create output as per my requirement?</p>
<p>Thanks in advance.</p>
|
<p>If I have understood you right, you can use <code>DataFrame.groupby</code> to calculate the cumulative sum across columns grouped by your <code>'id'</code>-column. Something like:</p>
<pre><code>import pandas as pd
data=pd.DataFrame({'id':[1,1,1,2,2,3,3,3],'var1':[1,2,3,4,5,6,7,8],'var2':[11,12,13,14,15,16,17,18], 'var3':[21,22,23,24,25,26,27,28]})
data.groupby('id').apply(lambda x: x.drop('id', axis=1).cumsum(axis=1).sum())
</code></pre>
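<p>If instead you meant the running total down the rows within each <code>id</code> group (a common reading of "groupwise cumulative sum"), a hedged alternative:</p>
<pre><code>data.groupby('id')[['var1', 'var2', 'var3']].cumsum()
</code></pre>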
|
python|pandas
| 2
|
374,350
| 29,068,715
|
How can I repeat this array of 2d pairs using Numpy?
|
<p>I have an array that I want to repeat.</p>
<p><code>test = numpy.array([(1, 11,), (2, 22), (3, 33)])</code></p>
<p>Now</p>
<pre><code>numpy.repeat(test, 2, 0)
numpy.repeat(test, 2, 1)
</code></pre>
<p>results in</p>
<pre><code>array([[ 1, 11],
[ 1, 11],
[ 2, 22],
[ 2, 22],
[ 3, 33],
[ 3, 33]])
array([[ 1, 1, 11, 11],
[ 2, 2, 22, 22],
[ 3, 3, 33, 33]]).
</code></pre>
<p>While</p>
<pre><code>numpy.tile(test, 2)
</code></pre>
<p>results in</p>
<pre><code>array([[ 1, 11, 1, 11],
[ 2, 22, 2, 22],
[ 3, 33, 3, 33]]).
</code></pre>
<p>How can I get this result instead?</p>
<pre><code>array([[ 1, 11],
[ 2, 22],
[ 3, 33],
[ 1, 11],
[ 2, 22],
[ 3, 33]])
</code></pre>
<p>Alternatively, for my use case I only use the repeated values once. To avoid the memory allocations, is there a way to have a generator of the repeated series instead somehow?</p>
|
<p><code>np.tile</code> lets you specify repeats for each axis (as a tuple)</p>
<pre><code>In [370]: np.tile(test,(2,1))
Out[370]:
array([[ 1, 11],
[ 2, 22],
[ 3, 33],
[ 1, 11],
[ 2, 22],
[ 3, 33]])
</code></pre>
|
python|arrays|numpy|repeat
| 6
|
374,351
| 29,230,866
|
how to generate histogram in pandas with x-axis labels from column?
|
<p>Given the following dataframe:</p>
<pre><code>import pandas as pd
df = pd.read_json('{"genre":{"0":"Drama","1":"Comedy","2":"Action","3":"Thriller"},"count":{"0":1603,"1":1200,"2":503,"3":492}}')
</code></pre>
<p>Is there a pandas one-liner/fast way to generate a histogram where bars are based on the "count" column, and x-axis labels are the "genre" column?</p>
|
<p>Yes, you can select the columns to be used with the <code>x</code> and <code>y</code> keyword arguments. You also select the kind of plot you want, in this case a bar chart, using <code>kind='bar'</code>.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_json('{"genre":{"0":"Drama","1":"Comedy","2":"Action","3":"Thriller"},"count":{"0":1603,"1":1200,"2":503,"3":492}}')
df.plot(x='genre', y='count', kind='bar', legend=False)
plt.show()
</code></pre>
<p>Note that I've also used <code>legend=False</code> to remove the legend, you could leave this in if you so wish.</p>
<p><img src="https://i.stack.imgur.com/dgktv.png" alt="enter image description here"></p>
|
python|pandas
| 9
|
374,352
| 29,253,027
|
pandas scatter plot colors with three points and seaborn
|
<p>There is a strange behavior when using pandas and seaborn to plot a scatter plot that has only three points: the points don't have the same color. The problem disappears when seaborn is not loaded or when there are more than three points, or when plotting with matplotlib's scatter method directly. See the following example:</p>
<pre><code>from pandas import DataFrame #0.16.0
import matplotlib.pyplot as plt #1.4.3
import seaborn as sns #0.5.1
import numpy as np #1.9.2
df = DataFrame({'x': np.random.uniform(0, 1, 3), 'y': np.random.uniform(0, 1, 3)})
df.plot(kind = 'scatter', x = 'x', y = 'y')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/puzx3.png" alt=""></p>
<pre><code>df = DataFrame({'x': np.random.uniform(0, 1, 4), 'y': np.random.uniform(0, 1, 4)})
df.plot(kind = 'scatter', x = 'x', y = 'y')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/WpiUu.png" alt=""></p>
|
<p>I've tracked down the bug. The bug is in <code>pandas</code> technically, not <code>seaborn</code> as I originally thought, though it involves code from <code>pandas</code>, <code>seaborn</code>, and <code>matplotlib</code>...</p>
<p>In <a href="https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L1417" rel="noreferrer"><code>pandas.tools.plotting.ScatterPlot._make_plot</code></a> the following code occurs to choose the colours to be used in the scatter plot</p>
<pre><code>if c is None:
c_values = self.plt.rcParams['patch.facecolor']
elif c_is_column:
c_values = self.data[c].values
else:
c_values = c
</code></pre>
<p>In your case <code>c</code> will be equal to <code>None</code>, which is the default value, and so <code>c_values</code> will be given by <code>plt.rcParams['patch.facecolor']</code>.</p>
<p>Now, as part of setting itself up, seaborn modifies <code>plt.rcParams['patch.facecolor']</code> to <code>(0.5725490196078431, 0.7764705882352941, 1.0)</code> which is an RGB tuple. If <code>seaborn</code> is not used then the value is the matplotlib default which is <code>'b'</code> (a string indicating the colour "blue").</p>
<p><code>c_values</code> is then used later on to actually plot the graph within <code>ax.scatter</code></p>
<pre><code>scatter = ax.scatter(data[x].values, data[y].values, c=c_values,
label=label, cmap=cmap, **self.kwds)
</code></pre>
<p>The issue arises because the keyword argument <code>c</code> can accept multiple different types of argument, it can accept:- </p>
<ul>
<li>a string (such as <code>'b'</code> in the original matplotlib case); </li>
<li>a sequence of color specifications (say a sequence of RGB values); </li>
<li>a sequence of values to map onto the current colormap. </li>
</ul>
<p>The matplotlib docs specifically state the following, highlighting mine</p>
<blockquote>
<p>c can be a single color format string, or a sequence of color specifications of length N, or a sequence of N numbers to be mapped to colors using the cmap and norm specified via kwargs (see below). <strong>Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped.</strong> c can be a 2-D array in which the rows are RGB or RGBA, however.</p>
</blockquote>
<p>What basically happens is that matplotlib takes the <code>c_values</code> value (which is a tuple of three numbers) and then maps those colours onto the current colormap (which is set by pandas to be <code>Greys</code> by default). As such, you get three scatter points with different <em>"greyishness"</em>. When you have more than 3 scatter points, matplotlib assumes that it must be an RGB tuple because the length doesn't match the length of the data arrays (3 != 4) and so uses it as a constant RGB colour.</p>
<p>This has been written up as a bug report on the pandas Github <a href="https://github.com/pydata/pandas/issues/9724" rel="noreferrer">here</a>.</p>
|
python|pandas|seaborn
| 6
|
374,353
| 28,910,231
|
Failing to convert Pandas dataframe timestamp
|
<p>I'm pretty new to working with Pandas and am trying to figure out why this timestamp won't convert. As an example, one individual timestamp is the string <code>'2010-10-06 16:38:02'</code>. The code looks like this:</p>
<pre><code>newdata = pd.DataFrame.from_records(data, columns = ["col1", "col2", "col3", "timestamp"], index = "timestamp")
newdata.index = newdata.index.tz_localize('UTC').tz_convert('US/Eastern')
</code></pre>
<p>And gets this error: </p>
<pre><code>AttributeError: 'Index' object has no attribute 'tz_localize'
</code></pre>
<p>Someone commented <a href="https://stackoverflow.com/questions/28903399/index-object-has-no-attribute-tz-localize">here</a> that tz_localize is not a method available to Index types, so I tried converting it as a column instead but that gave the error</p>
<pre><code>TypeError: index is not a valid DatetimeIndex or PeriodIndex
</code></pre>
<p>And then I found <a href="https://stackoverflow.com/questions/26089670/unable-to-apply-methods-on-timestamps-using-series-built-ins">this site</a>, which says tz_localize <em>only</em> acts on the index, anyway. </p>
<p>If anyone could help me out it would be much appreciated! I'm using Pandas 0.15.2. I believe this code may have worked for someone else with an earlier version, but I can't switch.</p>
<p>EDIT:</p>
<p>Ok after messing around a little I found that this doesn't throw any errors and seemed to do what I want in the short-term: <code>newdata.index=pd.DatetimeIndex(newdata.index).tz_localize('UTC').tz_convert('US/Eastern')</code></p>
|
<p>I've been asked to add a formal answer instead of just editing my question, so here it is. Note it builds off the answer above, but that that one didn't quite work for me.</p>
<p><code>newdata.index=pd.DatetimeIndex(newdata.index).tz_localize('UTC').tz_convert('US/Eastern')</code></p>
|
python|indexing|pandas|timezone|timestamp
| 4
|
374,354
| 29,144,921
|
Numpy local maximas in one dimension of 2D array
|
<p>I'd like to find the local maxima of a 2D array, but only in one dimension. I.e.:</p>
<pre><code>1 2 3 2 1 1 4 5 6 2
2 2 3 3 3 2 2 2 2 2
1 2 3 2 2 2 2 3 3 3
</code></pre>
<p>would return:</p>
<pre><code>0 0 1 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0 1 0
</code></pre>
<p>Obviously this is trivial to solve by iterating through the array, but this is slow and usually avoidable. Is there a fast way of achieving this?</p>
<p>Edit:
I've devised a faster solution:</p>
<pre><code>import numpy as np

testArray = np.array([[1,2,3,2,1,1,4,5,6,2],[2,2,3,3,3,2,2,2,2,2],[1,2,3,2,2,2,2,3,3,3]])
leftShift = np.roll(testArray,1, axis=1)
rightShift = np.roll(testArray,-1, axis=1)
Max = ((testArray>leftShift) & (testArray>rightShift) )*1
print(Max)
</code></pre>
<p>Which returns:</p>
<pre><code>[[0 0 1 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0 0 0]]
</code></pre>
<p>This is the right result except for repeated readings, i.e., what differentiates "13331" (a maximum) from "13333789" (a stationary point)?</p>
|
<p>You can solve this by applying a finite-difference gradient to each row and checking for a sign change (a sketch follows below). However, it is not clear what to do at the boundaries.</p>
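<p>A hedged sketch of that with <code>np.diff</code> (strict maxima only; plateaus like the <code>3 3 3</code> run in your second row would need extra handling, and the first/last columns are left at zero):</p>
<pre><code>import numpy as np

a = np.array([[1, 2, 3, 2, 1, 1, 4, 5, 6, 2],
              [2, 2, 3, 3, 3, 2, 2, 2, 2, 2],
              [1, 2, 3, 2, 2, 2, 2, 3, 3, 3]])

d = np.sign(np.diff(a, axis=1))      # +1 rising, -1 falling, 0 flat
maxima = np.zeros_like(a)
# a strict interior maximum: rising on the left, falling on the right
maxima[:, 1:-1] = (d[:, :-1] == 1) & (d[:, 1:] == -1)
</code></pre>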
|
python|arrays|numpy
| 1
|
374,355
| 33,611,782
|
Pandas dataframe from nested dictionary
|
<p>My dictionary looks like this:</p>
<pre><code>{'x': {'b': 10, 'c': 20}, 'y': {'b': '33', 'c': 44}}
</code></pre>
<p>I want to get a dataframe that looks like this:</p>
<pre><code>index col1 col2 val
0 x b 10
1 x c 20
2 y b 33
3 y c 44
</code></pre>
<p>I tried calling pandas.from_dict(), but it did not give me the desired result.
So, what is the most elegant, practical way to achieve this?</p>
<p>EDIT: In reality, my dictionary is of depth 4, so I'd like to see a solution for that case, or ideally, one that would work for arbitrary depth in a general setup.</p>
<p>Here is an example of a deeper dictionary:
<code>{'x':{'a':{'m':1, 'n':2}, 'b':{'m':10, 'n':20}}, 'y':{'a':{'m':100, 'n':200}, 'b':{'m':111, 'n':222}} }</code> The appropriate dataframe should have 8 rows.</p>
<p>ANSWER: </p>
<pre><code>df = pd.DataFrame([(k1, k2, k3, k4, k5, v) for k1, k2345v in dict.items()
for k2, k345v in k2345v.items()
for k3, k45v in k345v.items()
for k4, k5v in k45v.items()
for k5, v in k5v.items()])
</code></pre>
|
<p>You can use a list comprehension to reorder your dict into a list of tuples where each tuple is a row and then you can sort your dataframe</p>
<pre><code>import pandas as pd
d = {'x': {'b': 10, 'c': 20}, 'y': {'b': '33', 'c': 44}}
df = pd.DataFrame([(k,k1,v1) for k,v in d.items() for k1,v1 in v.items()], columns = ['Col1','Col2','Val'])
print df.sort(['Col1','Col2','Val'], ascending=[1,1,1])
Col1 Col2 Val
3 x b 10
2 x c 20
1 y b 33
0 y c 44
</code></pre>
|
python|dictionary|pandas|dataframe
| 5
|
374,356
| 33,784,214
|
How to test tensorflow cifar10 cnn tutorial model
|
<p>I am relatively new to machine-learning and currently have almost no experiencing in developing it.</p>
<p>So my <strong>Question</strong> is: after training and evaluating the cifar10 dataset from the tensorflow <a href="http://www.tensorflow.org/tutorials/deep_cnn/index.html" rel="noreferrer">tutorial</a> I was wondering how could one test it with sample images?</p>
<p>I could train and evaluate the <a href="http://caffe.berkeleyvision.org/gathered/examples/imagenet.html" rel="noreferrer">Imagenet tutorial from the caffe machine-learning framework</a> and it was relatively easy to use the trained model on custom applications using the python API.</p>
<p>Any help would be very appreciated!</p>
|
<p>This isn't 100% the answer to the question, but it's a similar way of solving it, based on a MNIST NN training example suggested in the comments to the question.</p>
<p>Based on the TensorFlow beginner MNIST tutorial, and thanks to <a href="http://opensourc.es/blog/tensorflow-mnist" rel="noreferrer">this tutorial</a>, this is a way of training and using your Neural Network with custom data.</p>
<p>Please note that similar should be done for tutorials such as the CIFAR10, as @Yaroslav Bulatov mentioned in the comments.</p>
<pre><code>import input_data
import datetime
import numpy as np
import tensorflow as tf
import cv2
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
from random import randint
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b)
y_ = tf.placeholder("float", [None,10])
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#Train our model
iter = 1000
for i in range(iter):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
#Evaluating our model:
correct_prediction=tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction,"float"))
print "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
#1: Using our model to classify a random MNIST image from the original test set:
num = randint(0, mnist.test.images.shape[0])
img = mnist.test.images[num]
classification = sess.run(tf.argmax(y, 1), feed_dict={x: [img]})
'''
#Uncomment this part if you want to plot the classified image.
plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
plt.show()
'''
print 'Neural Network predicted', classification[0]
print 'Real label is:', np.argmax(mnist.test.labels[num])
#2: Using our model to classify MNIST digit from a custom image:
# create an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
# read the image
gray = cv2.imread("my_digit.png", 0 ) #0=cv2.CV_LOAD_IMAGE_GRAYSCALE #must be .png!
# rescale it
gray = cv2.resize(255-gray, (28, 28))
# save the processed images
cv2.imwrite("my_grayscale_digit.png", gray)
"""
all images in the training set have an range from 0-1
and not from 0-255 so we divide our flatten images
(a one dimensional vector with our 784 pixels)
to use the same 0-1 based range
"""
flatten = gray.flatten() / 255.0
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
"""
we want to run the prediction and the accuracy function
using our generated arrays (images and correct_vals)
"""
print 'Neural Network predicted', my_classification[0], "for your digit"
</code></pre>
<p>For further image conditioning (digits should be completely dark on a white background) and better NN training (accuracy > 91%), please check the Advanced MNIST tutorial from TensorFlow or the 2nd tutorial I've mentioned.</p>
|
python|testing|machine-learning|tensorflow
| 11
|
374,357
| 33,627,662
|
Python Email in HTML format mimelib
|
<p>I am trying to send two dataframes created in pandas as HTML in an email sent from a Python script.</p>
<p>I want to write some text followed by a table, and repeat this for two more dataframes, but the script is not able to attach more than one HTML block.
The code is as follows:</p>
<pre><code>import numpy as np
import pandas as pd
import smtplib
import time
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
sender = "blabla@gmail.com"
recipients = ['albalb@gmail.com']
msg = MIMEMultipart('alternative')
msg['Subject'] = "This a reminder call " + time.strftime("%c")
msg['From'] = sender
msg['To'] = ", ".join(recipients)
text = "Hi!\nHow are you?\nHere is the link you wanted:\nhttps://www.python.org"
html = df[['SYMBOL','ARBITRAGE BASIS %']].to_html()
part1 = MIMEText(text, 'plain')
part2 = MIMEText(html, 'html')
msg.attach(part1)
msg.attach(part2)
username = 'blabla@gmail.com'
password = 'blahblah'
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
server.login(username,password)
server.sendmail(sender, recipients, msg.as_string())
server.quit()
print("Success")
</code></pre>
<p>I am getting an email with just the last part as a formatted html table in the email body. The part 1 text is not appearing. What's wrong?</p>
|
<p>The problem is that you are marking up the parts as <code>multipart/alternative</code> -- this means, "I have the information in multiple renderings; choose the one you prefer" and your email client is apparently set up to choose the HTML version. Both parts are in fact there, but you have tagged them as either/or where apparently you want both.</p>
<p>The conventional quick fix would be to switch to <code>multipart/mixed</code>, which tells the client to render the parts one after another - but really, what is the purpose of a text part which simply says the content is elsewhere?</p>
<p>If you want the HTML as an attachment, maybe also set <code>Content-Disposition: attachment</code> (and supply a file name) for the HTML part.</p>
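<p>A minimal sketch of the <code>multipart/mixed</code> approach, assuming <code>df1</code> and <code>df2</code> are your two DataFrames:</p>
<pre><code>from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('mixed')   # parts are rendered one after another
msg.attach(MIMEText("Hi!\nHere are the tables:", 'plain'))
for df in (df1, df2):
    msg.attach(MIMEText(df.to_html(), 'html'))
</code></pre>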
|
python|email|pandas|mime
| 2
|
374,358
| 33,920,544
|
Avoiding numerical instability when computing 1/(1+exp(x)) python
|
<p>I would like to compute 1/(1+exp(x)) for (possibly large) x. This is a well behaved function between 0 and 1. I could just do</p>
<pre><code>import numpy as np
1.0/(1.0+np.exp(x))
</code></pre>
<p>but in this naive implementation np.exp(x) will likely just return 0 or infinity for large x, depending on the sign. Are there functions available in python that will help me out here? </p>
<p>I am considering implementing a series expansion and series acceleration, but I am wondering if this problem has already been solved. </p>
|
<p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.expit.html" rel="nofollow"><code>scipy.special.expit(-x)</code></a>. It will avoid the overflow warnings generated by <code>1.0/(1.0 + exp(x))</code>.</p>
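<p>For example:</p>
<pre><code>import numpy as np
from scipy.special import expit

x = np.array([-1000.0, 0.0, 1000.0])
expit(-x)   # 1/(1 + exp(x)), evaluated stably
# -> array([ 1. ,  0.5,  0. ])
</code></pre>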
|
python|numpy|floating-point|scipy|expansion
| 5
|
374,359
| 33,771,675
|
pandas concat/merge and sum one column
|
<p>I have two <code>pandas.DataFrame</code> objects with <code>MultiIndex</code> indices. Some of the index values are shared with the two dataframes, but not all. I would like to merge these two data frames and take the sum of one of the columns if the row (index value) exists. Otherwise, keep the row and column value as it exists. </p>
<p>Related: <a href="https://stackoverflow.com/questions/16583668/merge-2-dataframes-in-pandas-join-on-some-columns-sum-up-others">This is close, but does not use <code>MultiIndex</code></a></p>
<p>I've tried to create an example:</p>
<pre><code>def mklbl(prefix,n):
try:
return ["%s%s" % (prefix,i) for i in range(n)]
except:
return ["%s%s" % (prefix,i) for i in n]
mi1 = pd.MultiIndex.from_product([mklbl('A',4), mklbl('C',2)])
mi2 = pd.MultiIndex.from_product([mklbl('A',[2,3,4]), mklbl('C',2)])
df2 = pd.DataFrame({'b':np.arange(len(mi2)), 'c':np.arange(len(mi2))[::-1]},
index=mi2).sort_index().sort_index(axis=1)
df1 = pd.DataFrame({'a':np.arange(len(mi1)), 'b':np.arange(len(mi1))[::-1]},
index=mi1).sort_index().sort_index(axis=1)
</code></pre>
<p>The individual <code>DataFrame</code> objects look like:</p>
<pre><code>In [117]: df1
Out[117]:
a b
A0 C0 0 7
C1 1 6
A1 C0 2 5
C1 3 4
A2 C0 4 3
C1 5 2
A3 C0 6 1
C1 7 0
</code></pre>
<p>and</p>
<pre><code>In [118]: df2
Out[118]:
b c
A2 C0 0 5
C1 1 4
A3 C0 2 3
C1 3 2
A4 C0 4 1
C1 5 0
</code></pre>
<p>What I want to do is merge these two, and sum the 'b' column, but keep all rows whether they exist in one or the other dataframe:</p>
<pre><code>In [117]: df_merged_bsummed
Out[117]:
a b c
A0 C0 0 7 NaN
C1 1 6 NaN
A1 C0 2 5 NaN
C1 3 4 NaN
A2 C0 4 3 5
C1 5 3 4
A3 C0 6 3 3
C1 7 3 2
A4 C0 NaN 4 1
C1 NaN 5 0
</code></pre>
|
<p>In this particular case, I think you could just add them and use <code>fill_value=0</code>, relying on the default alignment behaviour:</p>
<pre><code>>>> df1.add(df2,fill_value=0)
a b c
A0 C0 0 7 NaN
C1 1 6 NaN
A1 C0 2 5 NaN
C1 3 4 NaN
A2 C0 4 3 5
C1 5 3 4
A3 C0 6 3 3
C1 7 3 2
A4 C0 NaN 4 1
C1 NaN 5 0
</code></pre>
<p>There being only one column in common, only one is summed, but if you wanted to make that explicit you could instead do something like</p>
<pre><code>>>> m = pd.concat([df1, df2],axis=1)
>>> m["b"] = m.pop("b").sum(axis=1)
>>> m
a c b
A0 C0 0 NaN 7
C1 1 NaN 6
A1 C0 2 NaN 5
C1 3 NaN 4
A2 C0 4 5 3
C1 5 4 3
A3 C0 6 3 3
C1 7 2 3
A4 C0 NaN 1 4
C1 NaN 0 5
</code></pre>
|
python|pandas
| 7
|
374,360
| 33,736,845
|
Flag dates that are between a range
|
<p>I have the following dataframe:</p>
<pre><code> exdiv_date expiry_date
0 2015-09-18 2015-12-18
1 2015-11-20 2015-12-18
2 NaN 2016-01-20
3 2015-12-26 2016-01-15
4 NaN 2015-11-21
</code></pre>
<p>I need to flag each row where the exdiv_date is after today and before the expiry_date. The output should be:</p>
<pre><code> exdiv_date expiry_date flag
0 2015-09-18 2015-12-18 False
1 2015-11-20 2015-12-18 True
2 NaN 2016-01-20 False
3 2015-12-26 2016-01-15 True
4 NaN 2015-11-21 False
</code></pre>
<p>As per the example, some rows do not have an exdiv_date (ie: NaN). I have ensured the exdiv_date and expiry_date are of the same type as follows:</p>
<pre><code>df['exdiv_date'] = pd.to_datetime(df['exdiv_date'])
df['expiry_date'] = pd.to_datetime(df['expiry_date'])
</code></pre>
<p>I have tried doing this as follows:</p>
<pre><code>mask = (df['exdiv_date'] > dt.date.today) & (df['exdiv_date'] < df['expiry_date'])
df.loc[mask, 'flag'] = True
</code></pre>
<p>But I get an error: <code>TypeError: Cannot convert input to Timestamp</code></p>
<p>I presume the error is because of the NaNs, but I'm not sure how to get around it.</p>
|
<p>The problem is the missing brackets: <code>dt.date.today</code> is the function object, so you need to call it as <code>dt.date.today()</code>.</p>
<p>Alternatively, you can use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow">np.where</a>:</p>
<pre><code>import datetime as dt
# exdiv_date expiry_date
#0 2015-09-18 2015-12-18
#1 2015-11-20 2015-12-18
#2 NaT 2016-01-20
#3 2015-12-26 2016-01-15
#4 NaT 2015-11-21
mask = (df['exdiv_date'] > dt.date.today()) & (df['exdiv_date'] < df['expiry_date'])
df.loc[mask, 'flag'] = True
print df
# exdiv_date expiry_date flag
#0 2015-09-18 2015-12-18 NaN
#1 2015-11-20 2015-12-18 True
#2 NaT 2016-01-20 NaN
#3 2015-12-26 2016-01-15 True
#4 NaT 2015-11-21 NaN
#if condition true add value True else add False to column flag
df['flag'] = np.where((df['exdiv_date'] > dt.date.today()) & (df['exdiv_date'] < df['expiry_date']), True, False)
print df
# exdiv_date expiry_date flag
#0 2015-09-18 2015-12-18 False
#1 2015-11-20 2015-12-18 True
#2 NaT 2016-01-20 False
#3 2015-12-26 2016-01-15 True
#4 NaT 2015-11-21 False
</code></pre>
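<p>A small addition (not part of the original answer): if you prefer the mask approach over <code>np.where</code>, initialising the column first avoids the leftover NaNs:</p>
<pre><code>df['flag'] = False
df.loc[mask, 'flag'] = True
</code></pre>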
|
python|pandas
| 1
|
374,361
| 33,742,098
|
border/edge operations on numpy arrays
|
<p>Suppose I have a 3D numpy array of nonzero values and <code>"background" = 0</code>. As an example I will take a sphere of random values:</p>
<pre><code>array = np.random.randint(1, 5, size = (100,100,100))
z,y,x = np.ogrid[-50:50, -50:50, -50:50]
mask = x**2 + y**2 + z**2<= 20**2
array[np.invert(mask)] = 0
</code></pre>
<p>First, I would like to find the "border voxels" (all nonzero values that have a zero within their <code>3x3x3</code> neighbourhood). Second, I would like to replace all border voxels with the mean of their nonzero neighbours. So far I have tried to use scipy's generic filter in the following way:</p>
<p>Function to apply at each element:</p>
<pre><code>def borderCheck(values):
#check if the footprint center is on a nonzero value
if values[13] != 0:
#replace border voxels with the mean of nonzero neighbours
if 0 in values:
return np.sum(values)/np.count_nonzero(values)
else:
return values[13]
else:
return 0
</code></pre>
<p>Generic filter:</p>
<pre><code>from scipy import ndimage
result = ndimage.generic_filter(array, borderCheck, footprint = np.ones((3,3,3)))
</code></pre>
<p>Is this a proper way to handle this problem? I feel that I am trying to reinvent the wheel here and that there must be a shorter, nicer way to achieve the result. Are there any other suitable (numpy, scipy ) functions that I can use? </p>
<p><strong>EDIT</strong></p>
<p>I messed one thing up: I would like to replace all border voxels with the mean of their nonzero <strong>AND non-border</strong> neighbours. For this, I tried to clean up the <code>neighbours</code> from ali_m's code (2D case):</p>
<pre><code>#for each neighbour voxel, check whether it also appears in the border/edges
non_border_neighbours = []
for each in neighbours:
non_border_neighbours.append([i for i in each if nonzero_idx[i] not in edge_idx])
</code></pre>
<p>Now I can't figure out why <code>non_border_neighbours</code> comes back empty.</p>
<p>Furthermore, correct me if I am wrong, but doesn't <code>tree.query_ball_point</code> with radius 1 address only the 6 nearest neighbours (Euclidean distance 1)? Should I set <code>sqrt(3)</code> (3D case) as the radius to get the 26-neighbourhood?</p>
|
<p>I think it's best to start out with the 2D case first, since it can be visualized much more easily:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
A = np.random.randint(1, 5, size=(100, 100)).astype(np.double)
y, x = np.ogrid[-50:50, -50:50]
mask = x**2 + y**2 <= 30**2
A[~mask] = 0
</code></pre>
<p>To find the edge pixels you could perform binary erosion on your mask, then XOR the result with your mask</p>
<pre><code># rank 2 structure with full connectivity
struct = ndimage.generate_binary_structure(2, 2)
erode = ndimage.binary_erosion(mask, struct)
edges = mask ^ erode
</code></pre>
<p>One approach to find the nearest non-zero neighbours of each edge pixel would be to use a <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html" rel="noreferrer"><code>scipy.spatial.cKDTree</code></a>:</p>
<pre><code>from scipy.spatial import cKDTree
# the indices of the non-zero locations and their corresponding values
nonzero_idx = np.vstack(np.where(mask)).T
nonzero_vals = A[mask]
# build a k-D tree
tree = cKDTree(nonzero_idx)
# use it to find the indices of all non-zero values that are at most 1 pixel
# away from each edge pixel
edge_idx = np.vstack(np.where(edges)).T
neighbours = tree.query_ball_point(edge_idx, r=1, p=np.inf)
# take the average value for each set of neighbours
new_vals = np.hstack(np.mean(nonzero_vals[n]) for n in neighbours)
# use these to replace the values of the edge pixels
A_new = A.astype(np.double, copy=True)
A_new[edges] = new_vals
</code></pre>
<p>Some visualisation:</p>
<pre><code>fig, ax = plt.subplots(1, 3, figsize=(10, 4), sharex=True, sharey=True)
norm = plt.Normalize(0, A.max())
ax[0].imshow(A, norm=norm)
ax[0].set_title('Original', fontsize='x-large')
ax[1].imshow(edges)
ax[1].set_title('Edges', fontsize='x-large')
ax[2].imshow(A_new, norm=norm)
ax[2].set_title('Averaged', fontsize='x-large')
for aa in ax:
aa.set_axis_off()
ax[0].set_xlim(20, 50)
ax[0].set_ylim(50, 80)
fig.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/b0O5m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/b0O5m.png" alt="enter image description here"></a></p>
<p>This approach will also generalize to the 3D case:</p>
<pre><code>B = np.random.randint(1, 5, size=(100, 100, 100)).astype(np.double)
z, y, x = np.ogrid[-50:50, -50:50, -50:50]
mask = x**2 + y**2 + z**2 <= 20**2
B[~mask] = 0
struct = ndimage.generate_binary_structure(3, 3)
erode = ndimage.binary_erosion(mask, struct)
edges = mask ^ erode
nonzero_idx = np.vstack(np.where(mask)).T
nonzero_vals = B[mask]
tree = cKDTree(nonzero_idx)
edge_idx = np.vstack(np.where(edges)).T
neighbours = tree.query_ball_point(edge_idx, r=1, p=np.inf)
new_vals = np.hstack(np.mean(nonzero_vals[n]) for n in neighbours)
B_new = B.astype(np.double, copy=True)
B_new[edges] = new_vals
</code></pre>
<p>Test against your version:</p>
<pre><code>def borderCheck(values):
#check if the footprint center is on a nonzero value
if values[13] != 0:
#replace border voxels with the mean of nonzero neighbours
if 0 in values:
return np.sum(values)/np.count_nonzero(values)
else:
return values[13]
else:
return 0
result = ndimage.generic_filter(B, borderCheck, footprint=np.ones((3, 3, 3)))
print(np.allclose(B_new, result))
# True
</code></pre>
<p>I'm sure this isn't the most efficient way to do it, but it will still be significantly faster than using <code>generic_filter</code>.</p>
<hr>
<h2>Update</h2>
<p>The performance could be further improved by reducing the number of points that are considered as candidate neighbours of the edge pixels/voxels:</p>
<pre><code># ...
# the edge pixels/voxels plus their immediate non-zero neighbours
erode2 = ndimage.binary_erosion(erode, struct)
candidate_neighbours = mask ^ erode2
nonzero_idx = np.vstack(np.where(candidate_neighbours)).T
nonzero_vals = B[candidate_neighbours]
# ...
</code></pre>
|
python|numpy|scipy
| 11
|
374,362
| 33,763,963
|
pandas data frame headers are shifted over when performing csv read
|
<p>I'm trying to read data from a csv file into a pandas data frame, but the headers are shifted over two columns when read into the data frame. </p>
<p>I think it has to do with there being two blank rows after the header, but I'm not sure. It seems to be reading in the first two columns as row titles/indexes.</p>
<p>CSV Format: </p>
<pre><code>VendorID,lpep_pickup_datetime,Lpep_dropoff_datetime,Store_and_fwd_flag,RateCodeID,Pickup_longitude,Pickup_latitude,Dropoff_longitude,Dropoff_latitude,Passenger_count,Trip_distance,Fare_amount,Extra,MTA_tax,Tip_amount,Tolls_amount,Ehail_fee,Total_amount,Payment_type,Trip_type
2,2014-04-01 00:00:00,2014-04-01 14:24:20,N,1,0,0,0,0,1,7.45,23,0,0.5,0,0,,23.5,2,1,,
2,2014-04-01 00:00:00,2014-04-01 17:21:33,N,1,0,0,-73.987663269042969,40.780872344970703,1,8.95,31,1,0.5,0,0,,32.5,2,1,,
</code></pre>
<p>Data Frame Format:</p>
<pre><code> VendorID lpep_pickup_datetime \
2 2014-04-01 00:00:00 2014-04-01 14:24:20 N
2014-04-01 00:00:00 2014-04-01 17:21:33 N
2014-04-01 00:00:00 2014-04-01 15:06:18 N
2014-04-01 00:00:00 2014-04-01 08:09:27 N
2014-04-01 00:00:00 2014-04-01 16:15:13 N
Lpep_dropoff_datetime Store_and_fwd_flag RateCodeID \
2 2014-04-01 00:00:00 1 0 0
2014-04-01 00:00:00 1 0 0
2014-04-01 00:00:00 1 0 0
2014-04-01 00:00:00 1 0 0
2014-04-01 00:00:00 1 0 0
</code></pre>
<p>Code Below:</p>
<pre><code>file ='green_tripdata_2014-04.csv'
df4 = pd.read_csv(file)
print(df4.head(5))
</code></pre>
<p>I just need it to read into the data frame with the headers in the correct location.</p>
|
<p>Your csv data does look strange - you have 20 column headers, but 22 entries in the first line with data.</p>
<p>Assuming this is only a copy-paste error*, you can try the following:</p>
<pre><code>df = pd.read_csv(file, skiprows=[1,2], index_col=False)
</code></pre>
<p><code>skiprows</code> will skip the two empty rows, and <code>index_col</code> might mitigate the effect of data being interpreted as index columns.</p>
<p>See <a href="http://pandas.pydata.org/pandas-docs/version/0.16.2/generated/pandas.read_csv.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/version/0.16.2/generated/pandas.read_csv.html</a> for all options to the csv parser.</p>
<h3>Edit:</h3>
<p>*: If your data look exactly as you posted, then your csv is malformed. You have two more data columns (see the last two commas <code>,,</code>).</p>
<p>When you delete both commas, the parser works fine.</p>
<p>Another option is to specify the columns to be used:</p>
<pre><code>pd.read_csv("file.csv", skiprows=[1,2], usecols=np.arange(20))
</code></pre>
<p>Here, <code>np.arange(20)</code> tells the parser to parse only the first 20 columns, that is, the columns that have a valid header (in your first line).</p>
|
python|csv|pandas
| 8
|
374,363
| 33,951,194
|
Density profile integral over line of sight
|
<p>My question is like this:</p>
<p>I know the density as a function of radius for a sphere numerically. Say the arrays density <code>rho</code> (1000 points) and <code>radius</code> (1000 points) are already calculated. I want to find the integral of the density over a line of sight, as shown below in 2D, although it is a 3D problem:
<a href="https://i.stack.imgur.com/GNcYU.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GNcYU.gif" alt="enter image description here"></a></p>
<p>This line of sight can move from the center to the boundary. I know we need to interpolate the density along the line of sight first, then sum it up to get the integral of the density over the line of sight. But can anyone offer me some idea of how to do the interpolation fast? Thank you.</p>
|
<p>I have the implementation below (assume density profile <code>rho = exp(1-log(1+r/rs)/(r/rs))</code>):</p>
<p>The first approach is much faster because it does not need to deal with the singularity from <code>r/np.sqrt(r**2-r_p**2)</code>.</p>
<pre><code>import numpy as np
from scipy import integrate as integrate
### From the definition of the LOS integral
def LOS_integration(rs,r_vir,r_p): #### radius in kpc
rho = lambda l: np.exp(1 - np.log(1+np.sqrt(l**2 + r_p**2)/rs)/(np.sqrt(l**2 + r_p**2)/rs))
result = integrate.quad(rho,0,np.sqrt(r_vir**2-r_p**2),epsabs=1.49e-08, epsrel=1.49e-08)
return result[0]
integration_vec = np.vectorize(LOS_integration) ### vectorize the function
### convert LOS integration to radius integration
def LOS_integration1(rs,r_vir,r_p): #### radius in kpc
rho = lambda r: np.exp(1 - np.log(1+r/rs)/(r/rs)) * r/np.sqrt(r**2-r_p**2)
### r/np.sqrt(r**2-r_p**2) is the factor convert from LOS integration to radius integration
result = integrate.quad(rho,r_p,r_vir,epsabs=1.49e-08, epsrel=1.49e-08)
return result[0]
integration1_vec = np.vectorize(LOS_integration1)
</code></pre>
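<p>Example usage (a sketch with illustrative parameter values, not taken from the question):</p>
<pre><code>r_p = np.linspace(1.0, 199.0, 50)            # impact parameters in kpc
profile = integration_vec(20.0, 200.0, r_p)  # assuming rs = 20 kpc, r_vir = 200 kpc
</code></pre>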
|
python|numpy|scipy|integration|integral
| 1
|
374,364
| 23,848,003
|
Detecting multicollinear columns, or columns that are linear combinations, while modelling in Python: LinAlgError
|
<p>I am modelling data for a logit model with 34 dependent variables, and it keeps throwing the singular matrix error, as below:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#1116>", line 1, in <module>
test_scores = smf.Logit(m['event'], train_cols,missing='drop').fit()
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/discrete/discrete_model.py", line 1186, in fit
disp=disp, callback=callback, **kwargs)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/discrete/discrete_model.py", line 164, in fit
disp=disp, callback=callback, **kwargs)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/base/model.py", line 357, in fit
hess=hess)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/base/model.py", line 405, in _fit_mle_newton
newparams = oldparams - np.dot(np.linalg.inv(H),
File "/usr/local/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 445, in inv
return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
File "/usr/local/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 328, in solve
raise LinAlgError, 'Singular matrix'
LinAlgError: Singular matrix
</code></pre>
<p>That was when I stumbled upon this method to reduce the matrix to its independent columns:</p>
<pre><code>def independent_columns(A, tol = 0):#1e-05):
"""
Return an array composed of independent columns of A.
Note the answer may not be unique; this function returns one of many
possible answers.
https://stackoverflow.com/q/13312498/190597 (user1812712)
http://math.stackexchange.com/a/199132/1140 (Gerry Myerson)
http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038705.html
(Anne Archibald)
>>> A = np.array([(2,4,1,3),(-1,-2,1,0),(0,0,2,2),(3,6,2,5)])
2 4 1 3
-1 -2 1 0
0 0 2 2
3 6 2 5
# try with checking the rank of matrixs
>>> independent_columns(A)
np.array([[1, 4],
[2, 5],
[3, 6]])
"""
Q, R = linalg.qr(A)
independent = np.where(np.abs(R.diagonal()) > tol)[0]
#print independent
return A[:, independent], independent
A,independent_col_indexes=independent_columns(train_cols.as_matrix(columns=None))
#train_cols will not be converted back from a df to a matrix object,so doing this explicitly
A2=pd.DataFrame(A, columns=train_cols.columns[independent_col_indexes])
test_scores = smf.Logit(m['event'],A2,missing='drop').fit()
</code></pre>
<p>I still get the LinAlgError, though I was hoping I would have the reduced matrix rank now. </p>
<p>Also, I see <code>np.linalg.matrix_rank(train_cols)</code> returns 33 (i.e. before calling the independent_columns function the total number of "x" columns was 34, with <code>len(train_cols.ix[0])=34</code>, meaning I don't have a full-rank matrix), while <code>np.linalg.matrix_rank(A2)</code> also returns 33 (suggesting I have dropped a column), and yet I still see the LinAlgError when I run <code>test_scores = smf.Logit(m['event'],A2,missing='drop').fit()</code>. What am I missing? </p>
<p>reference to the code above -
<a href="https://stackoverflow.com/questions/13312498/how-to-find-degenerate-rows-columns-in-a-covariance-matrix/13313828?noredirect=1#comment36375985_13313828">How to find degenerate rows/columns in a covariance matrix</a></p>
<p>I tried to build the model forward by introducing one variable at a time, which doesn't give me the singular matrix error, but I would rather have a method that is deterministic and lets me know what I am doing wrong and how to eliminate these columns. </p>
<p><strong>Edit (updated following the suggestions by @user333700 below)</strong></p>
<p><strong>1.</strong> You are right, "A2" doesn't have the reduced rank of 33, i.e. <code>len(A2.ix[0])=34</code>, meaning the possibly collinear columns are not dropped - should I increase "tol", the tolerance, to get a rank of A2 (and number of columns thereof) of 33? If I change the tol to "1e-05" above, then I do get <code>len(A2.ix[0])=33</code>, which suggests to me that tol > 0 (strictly) is one indicator.
After this I just ran the same <code>test_scores = smf.Logit(m['event'],A2,missing='drop').fit()</code>, without nm, to get convergence. </p>
<p><strong>2.</strong> Errors after trying the 'nm' method. The strange thing, though, is that if I take just 20,000 rows, I do get results. Since it is not showing a memory error but "<code>Inverting hessian failed, no bse or cov_params available</code>" - <strong>I am assuming there are multiple nearly-similar records - what would you say?</strong></p>
<pre><code>m = smf.Logit(data['event_custom'].ix[0:1000000] , train_cols.ix[0:1000000],missing='drop')
test_scores=m.fit(start_params=None,method='nm',maxiter=200,full_output=1)
Warning: Maximum number of iterations has been exceeded
Warning (from warnings module):
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/base/model.py", line 374
warn(warndoc, Warning)
Warning: Inverting hessian failed, no bse or cov_params available
test_scores.summary()
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
test_scores.summary()
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/discrete/discrete_model.py", line 2396, in summary
yname_list)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/discrete/discrete_model.py", line 2253, in summary
use_t=False)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/iolib/summary.py", line 826, in add_table_params
use_t=use_t)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/iolib/summary.py", line 447, in summary_params
std_err = results.bse
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/tools/decorators.py", line 95, in __get__
_cachedval = self.fget(obj)
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/base/model.py", line 1037, in bse
return np.sqrt(np.diag(self.cov_params()))
File "/usr/local/lib/python2.7/site-packages/statsmodels-0.5.0-py2.7-linux-i686.egg/statsmodels/base/model.py", line 1102, in cov_params
raise ValueError('need covariance of parameters for computing '
ValueError: need covariance of parameters for computing (unnormalized) covariances
</code></pre>
<p><strong>Edit 2:</strong> (updated post the suggestions by @user333700 below)</p>
<blockquote>
<p>Reiterating what I am trying to model - less than about 1% of total
users "convert" (success outcomes) - so I took a balanced sample of
35(+ve) /65 (-ve)</p>
</blockquote>
<p>I suspect the model is not robust, though it converges. So, I will use "start_params" taken from an earlier iteration on a different dataset.
This edit is about confirming whether "start_params" can feed into the results, as below:</p>
<pre><code>A,independent_col_indexes=independent_columns(train_cols.as_matrix(columns=None))
A2=pd.DataFrame(A, columns=train_cols.columns[independent_col_indexes])
m = smf.Logit(data['event_custom'], A2,missing='drop')
#m = smf.Logit(data['event_custom'], train_cols,missing='drop')#,method='nm').fit()#This doesnt work, so tried 'nm' which work, but used lasso, as nm did not converge.
test_scores=m.fit_regularized(start_params=None, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, \
trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03)
a_good_looking_previous_result.params=test_scores.params #storing the parameters of pass1 to feed into pass2
test_scores.params
bidfloor_Quartile_modified_binned_0 0.305765
connectiontype_binned_0 -0.436798
day_custom_binned_Fri -0.040269
day_custom_binned_Mon 0.138599
day_custom_binned_Sat -0.319997
day_custom_binned_Sun -0.236507
day_custom_binned_Thu -0.058922
user_agent_device_family_binned_iPad -10.793270
user_agent_device_family_binned_iPhone -8.483099
user_agent_masterclass_binned_apple 9.038889
user_agent_masterclass_binned_generic -0.760297
user_agent_masterclass_binned_samsung -0.063522
log_height_width 0.593199
log_height_width_ScreenResolution -0.520836
productivity -1.495373
games 0.706340
entertainment -1.806886
IAB24 2.531467
IAB17 0.650327
IAB14 0.414031
utilities 9.968253
IAB1 1.850786
social_networking -2.814148
IAB3 -9.230780
music 0.019584
IAB9 -0.415559
C(time_day_modified)[(6, 12]]:C(country)[AUS] -0.103003
C(time_day_modified)[(0, 6]]:C(country)[HKG] 0.769272
C(time_day_modified)[(6, 12]]:C(country)[HKG] 0.406882
C(time_day_modified)[(0, 6]]:C(country)[IDN] 0.073306
C(time_day_modified)[(6, 12]]:C(country)[IDN] -0.207568
C(time_day_modified)[(0, 6]]:C(country)[IND] 0.033370
... more params here
</code></pre>
<p>Now, on a different dataset (pass2, for indexing), I model the same as below,
i.e. I read a new dataframe, do all the variable transformations and then model via Logit as before. </p>
<pre><code>m_pass2 = smf.Logit(data['event_custom'], A2_pass2,missing='drop')
test_scores_pass2=m_pass2.fit_regularized(start_params=a_good_looking_previous_result.params, method='l1', maxiter='defined_by_method', full_output=1, disp=1, callback=None, alpha=0, \
trim_mode='auto', auto_trim_tol=0.01, size_trim_tol=0.0001, qc_tol=0.03)
</code></pre>
<p>and, possibly keep iterating by picking up "start_params" from earlier passes.</p>
|
<p>Several points to this:</p>
<p>You need tol > 0 to detect near-perfect collinearity, which might also cause numerical problems in later calculations.
Check the number of columns of <code>A2</code> to see whether a column has really been dropped. </p>
<p>Logit needs to do some non-linear calculations with the exog, so even if the design matrix is not very close to perfect collinearity, the transformed variables for the log-likelihood, derivative or Hessian calculations might still end up with numerical problems, like a singular Hessian.</p>
<p>(All these are floating point problems when we work near floating point precision, 1e-15, 1e-16. There are sometimes differences in the default thresholds for matrix_rank and similar linalg functions which can imply that in some edge cases one function identifies it as singular and another one doesn't.)</p>
<p>The default optimization method for the discrete models including Logit is a simple Newton method, which is fast in reasonably nice cases, but can fail in cases that are badly conditioned. You could try one of the other optimizers which will be one of those in scipy.optimize, <code>method='nm'</code> is usually very robust but slow, <code>method='bfgs'</code> works well in many cases but also can run into convergence problems.</p>
<p>Nevertheless, even when one of the other optimization methods succeeds, it is still necessary to inspect the results. More often than not, a failure with one method means that the model or estimation problem might not be well defined.</p>
<p>A good way to check whether it is just a problem with bad starting values or a specification problem is to run <code>method='nm'</code> first and then run one of the more accurate methods like <code>newton</code> or <code>bfgs</code> using the <code>nm</code> estimate as starting value, and see whether it succeeds from good starting values.</p>
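<p>A minimal sketch of that two-stage fit (assuming <code>m</code> is your existing <code>Logit</code> model instance):</p>
<pre><code>res_nm = m.fit(method='nm', maxiter=5000, disp=0)                  # robust but slow first pass
res = m.fit(start_params=res_nm.params, method='newton', disp=0)   # refine from good start values
</code></pre>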
|
python-2.7|numpy|statsmodels|logistic-regression|singular
| 6
|
374,365
| 23,706,412
|
Merge a lot of DataFrames together, without loop and not using concat
|
<p>I have >1000 DataFrames, each with >20K rows and several columns, which need to be merged by a certain common column; the idea can be illustrated by this:</p>
<pre><code>data1=pd.DataFrame({'name':['a','c','e'], 'value':[1,3,4]})
data2=pd.DataFrame({'name':['a','d','e'], 'value':[3,3,4]})
data3=pd.DataFrame({'name':['d','e','f'], 'value':[1,3,5]})
data4=pd.DataFrame({'name':['d','f','g'], 'value':[0,3,4]})
#some or them may have more or less columns that the others:
#data5=pd.DataFrame({'name':['d','f','g'], 'value':[0,3,4], 'score':[1,3,4]})
final_data=data1
for i, v in enumerate([data2, data3, data4]):
if i==0:
final_data=pd.merge(final_data, v, how='outer', left_on='name',
right_on='name', suffixes=('_0', '_%s'%(i+1)))
#in real case right_on may be = columns other than 'name'
#dependents on the dataframe, but this requirement can be
#ignored in this minimal example.
else:
final_data=pd.merge(final_data, v, how='outer', left_on='name',
right_on='name', suffixes=('', '_%s'%(i+1)))
</code></pre>
<p>Result:</p>
<pre><code> name value_0 value_1 value value_3
0 a 1 3 NaN NaN
1 c 3 NaN NaN NaN
2 e 4 4 3 NaN
3 d NaN 3 1 0
4 f NaN NaN 5 3
5 g NaN NaN NaN 4
[6 rows x 5 columns]
</code></pre>
<p>It works, but can this be done without a loop?</p>
<p><strong>Also</strong>, why is the column name of the second-to-last column not <code>value_2</code>?</p>
<hr>
<p><em>P.S.</em>
I know that in this minimal example, the result can also be achieved by: </p>
<pre><code>pd.concat([item.set_index('name') for item in [data1, data2, data3, data4]], axis=1)
</code></pre>
<p>But in the real case, due to the way the dataframes were constructed and the information stored in the index columns, this is not an ideal solution without additional tricks. So, let's not consider this route.</p>
|
<p>Does it even make sense to merge it, then? What's wrong with a panel?</p>
<pre><code>> data = [data1, data2, data3, data4]
> p = pd.Panel(dict(zip(map(str, range(len(data))), data)))
> p.to_frame().T
major 0 1 2
minor name value name value name value
0 a 1 c 3 e 4
1 a 3 d 3 e 4
2 d 1 e 3 f 5
3 d 0 f 3 g 4
# and just for kicks
> p.transpose(2, 0, 1).to_frame().reset_index().pivot_table(values='value', rows='name', cols='major')
major 0 1 2 3
name
a 1 3 NaN NaN
c 3 NaN NaN NaN
d NaN 3 1 0
e 4 4 3 NaN
f NaN NaN 5 3
g NaN NaN NaN 4
</code></pre>
|
python|pandas
| 1
|
374,366
| 23,815,527
|
Pandas / Numpy: Issues with np.where
|
<p>I have a strange problem with <code>np.where</code>. I first load a database called <code>df</code> and create a duplicate of <code>df</code>, <code>df1</code>. I then use <code>np.where</code> to make each value in <code>df1</code> be 1 if the number in the cell is greater than or equal to its mean (found in the DataFrame <code>df_mean</code>), else make the cell equal to 0. I use a for loop to iterate over each column header in <code>df1</code> and through a list of mean values <code>df_mean</code>. Here's my code:</p>
<pre><code>#Load the data
df = pd.read_csv('F:\\file.csv')
df.head(2)
>>> A AA AAP AAPL ABC
2011-01-10 09:30:00 -0.000546 0.006528 -0.001051 0.034593 -0.000095 ...
2011-01-10 09:30:10 -0.000256 0.007705 -0.001134 0.008578 -0.000549 ...
# Show list file with columns average
>>> df_mean.head(4)
A 0.000656
AA 0.002068
AAP 0.001134
AAPL 0.001728
...
df_1 = df
for x in list:
    df_1[x] = np.where(df_1[x] >= df_mean[x], 1, 0)
>>> df_1.head(4) #Which is my desired output (but which also makes df = df_1...WHY?)
A AA AAP AAPL ABC
2011-01-10 09:30:00 0 1 0 1 0 ...
2011-01-10 09:30:10 0 1 0 1 0 ...
2011-01-10 09:30:20 0 0 0 1 0 ...
2011-01-10 09:30:30 0 0 0 1 1 ...
</code></pre>
<p>Now, I get what I want, which is a binary 1/0 matrix for <code>df_1</code>, but it turns out that <code>df</code> also becomes a binary matrix (the same as <code>df_1</code>). WHY? The loop does not incorporate <code>df</code>...</p>
|
<p>First, the WHY: <code>df_1 = df</code> does not copy any data, it just binds a second name to the same DataFrame object, so anything you write into <code>df_1</code> also shows up in <code>df</code>. Use <code>df_1 = df.copy()</code> if you want an independent copy. That aside, my spidey sense tells me you want to find some form of indicator of whether a stock is currently over- or under-performing in regard to "something", using the mean of this "something". Maybe try this:</p>
<pre><code>S = pd.DataFrame(
np.array([[1.2,3.4],[1.1,3.5],[1.4,3.3],[1.2,1.6]]),
columns=["Stock A","Stock B"],
index=pd.date_range("2014-01-01","2014-01-04",freq="D")
)
indicator = S > S.mean()
binary = indicator.astype("int")
print S
print indicator
print binary
</code></pre>
<p>This gives the output:</p>
<pre><code> Stock A Stock B
2014-01-01 1.2 3.4
2014-01-02 1.1 3.5
2014-01-03 1.4 3.3
2014-01-04 1.2 1.6
[4 rows x 2 columns]
Stock A Stock B
2014-01-01 False True
2014-01-02 False True
2014-01-03 True True
2014-01-04 False False
[4 rows x 2 columns]
Stock A Stock B
2014-01-01 0 1
2014-01-02 0 1
2014-01-03 1 1
2014-01-04 0 0
[4 rows x 2 columns]
</code></pre>
<p>While you are at it, you should probably look into <code>pd.rolling_mean(S, n_periods_for_mean)</code>.</p>
|
python|numpy|pandas
| 1
|
374,367
| 22,780,563
|
Group labels in matplotlib barchart using Pandas MultiIndex
|
<p>I have a pandas DataFrame with a MultiIndex:</p>
<pre><code>group subgroup obs_1 obs_2
GroupA Elem1 4 0
Elem2 34 2
Elem3 0 10
GroupB Elem4 5 21
</code></pre>
<p>and so on. As noted in <a href="https://stackoverflow.com/questions/19184484/how-to-add-group-labels-for-bar-charts-in-matplotlib">this SO question</a> this is actually doable in matplotlib, but I'd rather (if possible) use the fact that I already know the hierarchy (thanks to the MultiIndex). Currently what's happening is that the index is shown as a tuple.</p>
<p>Is such a thing possible?</p>
|
<p>If you have just two levels in the <code>MultiIndex</code>, I believe the following will be easier:</p>
<pre><code>plt.figure()
ax = plt.gca()
DF.plot(kind='bar', ax=ax)
plt.grid(True, 'both')
minor_XT = ax.get_xaxis().get_majorticklocs()
DF['XT_V'] = minor_XT
major_XT = DF.groupby(by=DF.index.get_level_values(0)).first()['XT_V'].tolist()
del DF['XT_V']
ax.set_xticks(minor_XT, minor=True)
ax.set_xticklabels(DF.index.get_level_values(1), minor=True)
ax.tick_params(which='major', pad=15)
_ = plt.xticks(major_XT, (DF.index.get_level_values(0)).unique(), rotation=0)
</code></pre>
<p><img src="https://i.stack.imgur.com/Xhc39.png" alt="enter image description here"></p>
<p>And a slightly more involved, but more general, solution (it doesn't matter how many levels you have):</p>
<pre><code>def cvt_MIdx_tcklab(df):
Midx_ar = np.array(df.index.tolist())
Blank_ar = Midx_ar.copy()
col_idx = np.arange(Midx_ar.shape[0])
for i in range(Midx_ar.shape[1]):
val,idx = np.unique(Midx_ar[:, i], return_index=True)
Blank_ar[idx, i] = val
idx=~np.in1d(col_idx, idx)
Blank_ar[idx, i]=''
return map('\n'.join, np.fliplr(Blank_ar))
plt.figure()
ax = plt.gca()
DF.plot(kind='bar', ax=ax)
ax.set_xticklabels(cvt_MIdx_tcklab(DF), rotation=0)
</code></pre>
|
python|matplotlib|pandas
| 5
|
374,368
| 22,891,523
|
Joining 2 data frames with overlapping data
|
<p>I have 2 data frames created by pivot tables</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df=pd.DataFrame({'axis1': ['Unix','Window','Apple','Linux'],
'A': [1,np.nan,1,1],
'B': [1,np.nan,np.nan,1],
'C': [np.nan,1,np.nan,1],
'D': [1,np.nan,1,np.nan],
}).set_index(['axis1'])
print (df)
df2=pd.DataFrame({'axis1': ['Unix','Window','Apple','Linux','A'],
'A': [1,1,np.nan,np.nan,np.nan],
'E': [1,np.nan,1,1,1],
}).set_index(['axis1'])
print (df2)
</code></pre>
<p>Output looks like this</p>
<pre><code> A B C D
axis1
Unix 1 1 NaN 1
Window NaN NaN 1 NaN
Apple 1 NaN NaN 1
Linux 1 1 1 NaN
[4 rows x 4 columns]
A E
axis1
Unix 1 1
Window 1 NaN
Apple NaN 1
Linux NaN 1
A NaN 1
</code></pre>
<p>Let's say I want to combine them, but I only want values of 1.
So far I have got this, but it does not have column E or row A:</p>
<pre><code>>>> df.update(df2)
>>> df
A B C D
axis1
Unix 1 1 NaN 1
Window 1 NaN 1 NaN
Apple 1 NaN NaN 1
Linux 1 1 1 NaN
[4 rows x 4 columns]
</code></pre>
<p>How would I update it to get the additional axis values? (include row A and Column E)</p>
|
<p>You want to <a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.reindex.html" rel="nofollow">reindex</a> your first DataFrame before you call update.</p>
<p>One robust way would be to reindex with the union of the columns and rows of both DataFrames (maybe there is a smarter way, but I can't think of one at the moment):</p>
<pre><code>df = df.reindex(columns=df2.columns.union(df.columns),
index=df2.index.union(df.index))
</code></pre>
<p>Then call <code>update</code> on that, and it should work.</p>
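<p>Putting both steps together (a minimal sketch using the frames from the question):</p>
<pre><code>df = df.reindex(columns=df2.columns.union(df.columns),
                index=df2.index.union(df.index))
df.update(df2)   # df now also contains row 'A' and column 'E'
</code></pre>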
|
python|join|pandas
| 0
|
374,369
| 22,749,007
|
plotting pandas data frame with unequal data set
|
<p>I am trying to plot a pandas DataFrame whose columns contain an unequal number of data points (rows), and I am not sure if this is causing an issue for my plot.</p>
<p>In the code below, the portfolioValue arrays differ in length:</p>
<pre><code>len(portfolioValue1)  # 521
len(portfolioValue2)  # 500
len(portfolioValue3)  # 521
len(portfolioValue4)  # 521
len(portfolioValue5)  # 425
</code></pre>
<p>my pandas data frame shape is</p>
<pre><code>(1, 5)
</code></pre>
<p>Here is the python code:</p>
<pre><code>portToPlot = {'AAPL.txt':[portfolioValue1], 'GOOG.txt':[portfolioValue2], 'MSFT.txt':[portfolioValue3],
'AMZN.txt':[portfolioValue4],'CMG.txt':[portfolioValue5]}
portDFrame = DataFrame(portToPlot)
portDFrame.plot(sharex=True)
</code></pre>
<p>This is the error I keep getting</p>
<pre><code>return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
</code></pre>
|
<p>They need to be of equal length, for example, we can shorten everything to 425 elements:</p>
<pre><code>portfolioValue1 = random.random(521)
portfolioValue2 = random.random(500)
portfolioValue3 = random.random(521)
portfolioValue4 = random.random(521)
portfolioValue5 = random.random(425)
portDFrame=DataFrame(zip(portfolioValue1,portfolioValue2,portfolioValue3,portfolioValue4,portfolioValue5))
portDFrame.columns=['AAPL.txt', 'GOOG.txt', 'MSFT.txt','AMZN.txt','CMG.txt']
portDFrame.plot(sharex=True)
</code></pre>
<p><img src="https://i.stack.imgur.com/5wmzt.png" alt="enter image description here"></p>
<p>But it looks to me like you are working on interday stock data of about one year's duration. I think the reason some of the series are shorter is that there are missing interday prices for some trading days. You should have preserved those missing data points in the first place rather than just discarding them altogether.</p>
|
python|plot|pandas|dataframe
| 0
|
374,370
| 22,787,602
|
Python pandas: Slicing /indexing confusion
|
<p>I am solving some model using pandas/Python. However I get some very strange results when selecting data. I suspect I am not understanding something very fundamental.</p>
<p>The index of the DataFrame is a pandas quarterly timeseries.</p>
<p>The problem is when I write:</p>
<pre><code>data.SI_PER
</code></pre>
<p>I get the correct series: </p>
<pre><code>2014Q1 116.832000
2014Q2 111.728001
2014Q3 106.976102
2014Q4 102.366623
2015Q1 97.849300
2015Q2 93.719593
2015Q3 89.766363
2015Q4 86.037304
</code></pre>
<p>and </p>
<pre><code>data.SI_PER['2014Q1']
</code></pre>
<p>gives <code>116.83200000000002</code></p>
<p>But when I write:</p>
<pre><code>data.loc['2014Q1','SI_PER']
</code></pre>
<p>I get</p>
<pre><code>0.0
</code></pre>
<p>In my understanding, the output should be the same, so clearly I am misunderstanding something.</p>
<p><strong>Edit:</strong></p>
<pre><code>data.info()
<class 'pandas.core.frame.DataFrame'>
PeriodIndex: 144 entries, 1980Q1 to 2015Q4
Columns: 2948 entries, YEAR to FIHERHVERV_NON_CRDIV_SUP
dtypes: float64(2946), int64(2)
</code></pre>
|
<p>This is from 0.13.1 and works OK:</p>
<pre><code>In [16]: df = DataFrame(np.random.randn(10,2),index=period_range('2013',periods=10, freq='Q-JAN'),columns=['A','B'])
In [17]: df
Out[17]:
A B
2013Q4 -0.905673 2.670701
2014Q1 -0.465485 -1.849802
2014Q2 -0.526230 -1.265586
2014Q3 -0.515863 -0.464663
2014Q4 -0.791347 -0.888892
2015Q1 -0.152992 0.004867
2015Q2 -0.349412 -2.581611
2015Q3 1.367116 -1.583860
2015Q4 0.837310 0.631884
2016Q1 -0.558182 0.408349
[10 rows x 2 columns]
In [18]: df.A['2014Q1']
Out[18]: -0.46548521567154932
In [19]: df.loc['2014Q1','A']
Out[19]: -0.46548521567154932
</code></pre>
|
python|pandas
| 2
|
374,371
| 22,752,931
|
SKLearn Cross Validation Error -- Type Error
|
<p>I'm attempting to implement cross validation on the results from my KNN classifier. I have used the following code, which returns a type error.</p>
<p>For context, I have already imported SciKit Learn, Numpy, and Pandas libraries.</p>
<pre><code>from sklearn.cross_validation import cross_val_score, ShuffleSplit
n_samples = len(y)
knn = KNeighborsClassifier(3)
cv = ShuffleSplit(n_samples, n_iter=10, test_size=0.3, random_state=0)
test_scores = cross_val_score(knn, X, y, cv=cv)
test_scores.mean()
</code></pre>
<p>Returns:</p>
<pre><code> ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-139-d8cc3ee0c29b> in <module>()
7 cv = ShuffleSplit(n_samples, n_iter=10, test_size=0.3, random_state=0)
8
9 test_scores = cross_val_score(knn, X, y, cv=cv)
10 test_scores.mean()
//anaconda/lib/python2.7/site-packages/sklearn/cross_validation.pyc in cross_val_score(estimator, X, y, scoring, cv, n_jobs, verbose, fit_params, score_func, pre_dispatch)
1150 delayed(_cross_val_score)(clone(estimator), X, y, scorer, train, test,
1151 verbose, fit_params)
1152 for train, test in cv)
1153 return np.array(scores)
1154
//anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
515 try:
516 for function, args, kwargs in iterable:
517 self.dispatch(function, args, kwargs)
518
519 self.retrieve()
//anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch(self, func, args, kwargs)
310 """
311 if self._pool is None:
312 job = ImmediateApply(func, args, kwargs)
313 index = len(self._jobs)
314 if not _verbosity_filter(index, self.verbose):
//anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __init__(self, func, args, kwargs)
134 # Don't delay the application, to avoid keeping the input
135 # arguments in memory
136 self.results = func(*args, **kwargs)
137
138 def get(self):
//anaconda/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _cross_val_score(estimator, X, y, scorer, train, test, verbose, fit_params)
1056 y_test = None
1057 else:
1058 y_train = y[train]
1059 y_test = y[test]
1060 estimator.fit(X_train, y_train, **fit_params)
TypeError: only integer arrays with one element can be converted to an index
</code></pre>
|
<p>This is an error related to pandas. Scikit learn expects numpy arrays, sparse matrices or objects that behave similarly to these.</p>
<p>The main issue with pandas DataFrames is that indexing with <code>[...]</code> selects columns, not rows. Row indexing in pandas is done through <code>DataFrame.loc[...]</code>. This is unexpected behaviour for sklearn. The error probably came from line 1058 of the traceback, where the code fails to extract the training sample.</p>
<p>To remedy this, if your <code>y</code> is a single DataFrame column, try converting it to an array type:</p>
<pre><code>y = y.values
</code></pre>
<p>Otherwise <a href="https://github.com/paulgb/sklearn-pandas.git" rel="nofollow">pandas-sklearn</a> is possibly an option.</p>
|
python|numpy|pandas|scikit-learn|cross-validation
| 1
|
374,372
| 22,837,305
|
Want to create a thumbnail(25,25) of an Image of size (181,256) using python
|
<p>I want to create a thumbnail (25, 25) of an image whose size is 181 x 256. But when I run the code I get an output thumbnail of size (17, 25). Why am I not getting an image of 25 x 25 height and width?</p>
<pre><code>from PIL import Image
from numpy import *
size=25,25
im=array(Image.open('D:/1.png'))
im.thumbnail(size)
im.save("Thumbnail.png","PNG")
print im.shape
</code></pre>
|
<p>PIL's <code>thumbnail()</code> preserves the aspect ratio, so a 181 x 256 image is shrunk to fit inside 25 x 25, which gives roughly 17 x 25. If you want exactly 25 x 25, use <code>resize()</code> instead:</p>
<pre><code>from PIL import Image
from numpy import *
size=25,25
im=(Image.open('...'))
im = im.resize(size, Image.ANTIALIAS)
im.save("Thumbnail.png","PNG")
imgArr = array(im)
print imgArr.shape
</code></pre>
|
python|python-2.7|image-processing|numpy|python-imaging-library
| 1
|
374,373
| 22,798,934
|
Pandas long to wide reshape, by two variables
|
<p>I have data in long format and am trying to reshape to wide, but there doesn't seem to be a straightforward way to do this using melt/stack/unstack:</p>
<pre><code>Salesman Height product price
Knut 6 bat 5
Knut 6 ball 1
Knut 6 wand 3
Steve 5 pen 2
</code></pre>
<p>Becomes:</p>
<pre><code>Salesman Height product_1 price_1 product_2 price_2 product_3 price_3
Knut 6 bat 5 ball 1 wand 3
Steve 5 pen 2 NA NA NA NA
</code></pre>
<p>I think Stata can do something like this with the reshape command.</p>
|
<p>Here's another solution more fleshed out, taken from <a href="https://chrisalbon.com/python/data_wrangling/pandas_long_to_wide/" rel="noreferrer">Chris Albon's site</a>. </p>
<h3>Create "long" dataframe</h3>
<pre><code>raw_data = {'patient': [1, 1, 1, 2, 2],
'obs': [1, 2, 3, 1, 2],
'treatment': [0, 1, 0, 1, 0],
'score': [6252, 24243, 2345, 2342, 23525]}
df = pd.DataFrame(raw_data, columns = ['patient', 'obs', 'treatment', 'score'])
</code></pre>
<p><img src="https://i.stack.imgur.com/RRfjY.png" width="210"></p>
<h3>Make a "wide" data</h3>
<pre><code>df.pivot(index='patient', columns='obs', values='score')
</code></pre>
<p><img src="https://i.stack.imgur.com/agIMh.png" width="210"></p>
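<p>For the exact two-variable layout asked for in the question, one sketch (not from the original answer) is to number the rows within each group with <code>cumcount</code> and unstack on that counter:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Salesman': ['Knut', 'Knut', 'Knut', 'Steve'],
                   'Height':   [6, 6, 6, 5],
                   'product':  ['bat', 'ball', 'wand', 'pen'],
                   'price':    [5, 1, 3, 2]})

df['idx'] = df.groupby('Salesman').cumcount() + 1   # 1, 2, 3 within each salesman
wide = df.set_index(['Salesman', 'Height', 'idx']).unstack('idx')
wide.columns = ['{}_{}'.format(a, b) for a, b in wide.columns]  # product_1, price_1, ...
wide = wide.reset_index()
</code></pre>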
|
python|pandas|stata|reshape
| 60
|
374,374
| 22,902,040
|
Convert black and white array into an image in python?
|
<p>I have an array of 50x50 elements of which each is either True or False - this represents a 50x50 black and white image.</p>
<p>I can't seem to convert this into an image. I've tried countless different functions and none of them work.</p>
<pre><code>import numpy as np
from PIL import Image
my_array = np.array([[True,False,False,False THE DATA IS IN THIS ARRAY OF 2500 elements]])
im = Image.fromarray(my_array)
im.save("results.jpg")
</code></pre>
<p>^ This one gives me: "Cannot handle this data type".</p>
<p>I've seen that PIL has some functions but they only convert a list of RGB pixels and I have a simple black and white array without the other channels.</p>
|
<p>First you should make your array 50x50 instead of a 1d array:</p>
<pre><code>my_array = my_array.reshape((50, 50))
</code></pre>
<p>Then, to get a standard 8bit image, you should use an unsigned 8-bit integer dtype:</p>
<pre><code>my_array = my_array.reshape((50, 50)).astype('uint8')
</code></pre>
<p>But you don't want the <code>True</code>s to be <code>1</code>, you want them to be <code>255</code>:</p>
<pre><code>my_array = my_array.reshape((50, 50)).astype('uint8')*255
</code></pre>
<p>Finally, you can convert to a PIL image:</p>
<pre><code>im = Image.fromarray(my_array)
</code></pre>
<p>I'd do it all at once like this:</p>
<pre><code>im = Image.fromarray(my_array.reshape((50,50)).astype('uint8')*255)
</code></pre>
|
python|arrays|image-processing|numpy|python-imaging-library
| 11
|
374,375
| 15,471,936
|
Writing calculation results back into its array?
|
<p>Something of a follow-up question to <a href="https://stackoverflow.com/questions/15374291/writing-a-faster-python-physics-simulator">my last one</a> about writing efficient python programs. I have been playing with writing my own physics simulations, and want to get away from using a billion classes and methods.</p>
<p>So I want to be able to perform calculations on sets of data. This is my latest attempt:</p>
<pre><code>particles = np.array([ #position, last position, velocity, mass, size
[[200,0],[200,200],[5,5],10,15], \
[[210,210],[210,210],[8,2],20,25],\
[[215,215],[195,195],[5,3],5,15], \
[[192,186],[160,160],[10,-4],30,30]])
def moveParticles(part, dt):
part[0] = part[1]
part[1] += np.multiply(part[2],dt)
</code></pre>
<p>I am trying to store each of the properties of each particle in an array, and then update them in-place. Here I am trying to multiply the velocity vector by the time step, and then add that to the position vector. This seems like a natural way to express this to me, but it gives me the error:</p>
<pre><code>TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>Can I write data back into the same array, and how would I go about doing so?</p>
<p>I have been reading around, and looked at things like numpy's vectorize function, itertools, map(), etc... but how would I go about placing the results back into the original array? </p>
<p>Or is using an intermediate array to store the results before overwriting the original the only way to go?</p>
|
<p>I think you just invoke your routine the wrong way (probably passing it the entire particle array instead of the array for only one particle). Note that because each row mixes 2-element lists with scalars, your <code>particles</code> array has <code>dtype=object</code>, so <code>np.multiply</code> falls back to Python's <code>*</code> on each element, and <code>list * float</code> raises exactly the <code>TypeError</code> you see.</p>
<p>Anyway, another possible solution would be to split your array into individual arrays:</p>
<pre><code>import numpy as np
pos = np.array([[200,0], [210,210], [215,215], [192,186]], dtype=float)
lastpos = np.array([[200,2000], [ 210,210], [195, 195], [160,160]], dtype=float)
velocity = np.array([[ 5,5], [8,2], [5,3], [10,-4]], dtype=float)
mass = np.array([ 10, 20, 5, 30 ], dtype=float)
size = np.array([ 15, 25, 15, 30 ], dtype=float)
def moveParticles(pos, lastpos, velocity, dt):
lastpos[:] = pos[:]
pos[:] += velocity * dt
</code></pre>
<p>This would make in-place replacement for <code>pos</code> and <code>lastpos</code>. In order to move your particles, you would have to invoke the function as:</p>
<pre><code>moveParticles(pos, lastpos, velocity, 1)
</code></pre>
<p>where I set dt = 1. I also assumed that you want floating-point coordinates; if not, you should generate integer arrays instead.</p>
|
python|multidimensional-array|numpy|scientific-computing
| 2
|
374,376
| 15,316,985
|
Numpy: regrid by averaging?
|
<p>I'm trying to regrid a numpy array onto a new grid. In this specific case, I'm trying to regrid a power spectrum onto a logarithmic grid so that the data are evenly spaced logarithmically for plotting purposes.</p>
<p>Doing this with straight interpolation using <code>np.interp</code> results in some of the original data being ignored entirely. Using <code>digitize</code> gets the result I want, but I have to use some ugly loops to get it to work:</p>
<pre><code>xfreq = np.fft.fftfreq(100)[1:50] # only positive, nonzero freqs
psw = np.arange(xfreq.size) # dummy array for MWE
# new logarithmic grid
logfreq = np.logspace(np.log10(np.min(xfreq)), np.log10(np.max(xfreq)), 100)
inds = np.digitize(xfreq,logfreq)
# interpolation: ignores data *but* populates all points
logpsw = np.interp(logfreq, xfreq, psw)
# so average down where available...
logpsw[np.unique(inds)] = [psw[inds==i].mean() for i in np.unique(inds)]
# the new plot
loglog(logfreq, logpsw, linewidth=0.5, color='k')
</code></pre>
<p>Is there a nicer way to accomplish this in numpy? I'd be satisfied with just a replacement of the inline loop step.</p>
|
<p>You can use <code>bincount()</code> twice to calculate the average value of every bin:</p>
<pre><code>logpsw2 = np.interp(logfreq, xfreq, psw)
counts = np.bincount(inds)
mask = counts != 0
logpsw2[mask] = np.bincount(inds, psw)[mask] / counts[mask]
</code></pre>
<p>or use <code>unique(inds, return_inverse=True)</code> and <code>bincount()</code> twice:</p>
<pre><code>logpsw3 = np.interp(logfreq, xfreq, psw)
uinds, inv_index = np.unique(inds, return_inverse=True)
logpsw3[uinds] = np.bincount(inv_index, psw) / np.bincount(inv_index)
</code></pre>
<p>Or if you use Pandas:</p>
<pre><code>import pandas as pd
logpsw4 = np.interp(logfreq, xfreq, psw)
s = pd.groupby(pd.Series(psw), inds).mean()
logpsw4[s.index] = s.values
</code></pre>
|
numpy
| 1
|
374,377
| 14,928,169
|
looping through an array to find euclidean distance in python
|
<p>This is what I have thus far:</p>
<pre><code>Stats2003 = np.loadtxt('/DataFiles/2003.txt')
Stats2004 = np.loadtxt('/DataFiles/2004.txt')
Stats2005 = np.loadtxt('/DataFiles/2005.txt')
Stats2006 = np.loadtxt('/DataFiles/2006.txt')
Stats2007 = np.loadtxt('/DataFiles/2007.txt')
Stats2008 = np.loadtxt('/DataFiles/2008.txt')
Stats2009 = np.loadtxt('/DataFiles/2009.txt')
Stats2010 = np.loadtxt('/DataFiles/2010.txt')
Stats2011 = np.loadtxt('/DataFiles/2011.txt')
Stats2012 = np.loadtxt('/DataFiles/2012.txt')
Stats = Stats2003, Stats2004, Stats2004, Stats2005, Stats2006, Stats2007, Stats2008, Stats2009, Stats2010, Stats2011, Stats2012
</code></pre>
<p>I am trying to calculate the euclidean distance between each of these arrays and every other array, but I am having difficulty doing so.</p>
<p>I have the output I would like by calculating the distance like:</p>
<pre><code>dist1 = np.linalg.norm(Stats2003-Stats2004)
dist2 = np.linalg.norm(Stats2003-Stats2005)
dist11 = np.linalg.norm(Stats2004-Stats2005)
</code></pre>
<p>etc but I would like to make these calculations with a loop.</p>
<p>I am displaying the calculations into a table using Prettytable.</p>
<p>Can anyone point me in the right direction? I haven't found any previous solutions that have worked.</p>
|
<p>To do the loop you will need to <a href="http://nedbatchelder.com/blog/201112/keep_data_out_of_your_variable_names.html" rel="nofollow">keep data out of your variable names</a>. A simple solution would be to use dictionaries instead. The loops are implicit in the dict comprehensions:</p>
<pre><code>import itertools as it
years = range(2003, 2013)
stats = {y: np.loadtxt('/DataFiles/{}.txt'.format(y)) for y in years}
dists = {(y1,y2): np.linalg.norm(stats[y1] - stats[y2]) for (y1, y2) in it.combinations(years, 2)}
</code></pre>
<p>Now access stats for a particular year, e.g. 2007, with <code>stats[2007]</code>, and distances with tuples, e.g. <code>dists[(2007, 2011)]</code>.</p>
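<p>Since you mention displaying the results with PrettyTable, a minimal sketch of feeding the <code>dists</code> dict into a table might look like this (the column names are just illustrative):</p>
<pre><code>from prettytable import PrettyTable
table = PrettyTable(["Year 1", "Year 2", "Distance"])
for (y1, y2), d in dists.items():
    table.add_row([y1, y2, round(d, 3)])
print(table)
</code></pre>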
|
python|loops|numpy
| 2
|
374,378
| 15,089,310
|
repeat arange with numpy
|
<p>I have an array with integer values.</p>
<pre><code>a = [2,1,4,0,2]
</code></pre>
<p>I want a apply arange function to each value in a so as to have : </p>
<pre><code>b = [0,1,0,0,1,2,3,0,1]
b "=" [arange(2),arange(1),arange(4),arange(0),arange(2)]
</code></pre>
<p>In fact I use the np.repeat function to repeat array rows according to array a, and I want an index i to link each repeated value to the original one, i.e. an identification number to distinguish them.</p>
<p>I tried with np.vectorize but with no success.</p>
|
<p>There are definitely more numpythonic ways of doing things. One possibility could be something like this:</p>
<pre><code>import numpy as np
from numpy.lib.stride_tricks import as_strided
def concatenated_ranges(ranges_list) :
ranges_list = np.array(ranges_list, copy=False)
base_range = np.arange(ranges_list.max())
base_range = as_strided(base_range,
shape=ranges_list.shape + base_range.shape,
strides=(0,) + base_range.strides)
return base_range[base_range < ranges_list[:, None]]
</code></pre>
<p>If you are concatenating only a few ranges, then probably Mr. E's pure python solution is your best choice, but if you have even as few as a hundred ranges to concatenate, this starts being noticeably faster. For comparison I have used these two functions extracted from the other answers:</p>
<pre><code>def junuxx(a) :
b = np.array([], dtype=np.uint8)
for x in a:
b = np.append(b, np.arange(x))
return b
def mr_e(a) :
return reduce(lambda x, y: x + range(y), a, [])
</code></pre>
<p>And here are some timings:</p>
<pre><code>In [2]: a = [2, 1, 4, 0 ,2] # the OP's original example
In [3]: concatenated_ranges(a) # show it works!
Out[3]: array([0, 1, 0, 0, 1, 2, 3, 0, 1])
In [4]: %timeit concatenated_ranges(a)
10000 loops, best of 3: 31.6 us per loop
In [5]: %timeit junuxx(a)
10000 loops, best of 3: 34 us per loop
In [6]: %timeit mr_e(a)
100000 loops, best of 3: 2.58 us per loop
In [7]: a = np.random.randint(1, 10, size=(10,))
In [8]: %timeit concatenated_ranges(a)
10000 loops, best of 3: 27.1 us per loop
In [9]: %timeit junuxx(a)
10000 loops, best of 3: 79.8 us per loop
In [10]: %timeit mr_e(a)
100000 loops, best of 3: 7.82 us per loop
In [11]: a = np.random.randint(1, 10, size=(100,))
In [12]: %timeit concatenated_ranges(a)
10000 loops, best of 3: 57.4 us per loop
In [13]: %timeit junuxx(a)
1000 loops, best of 3: 756 us per loop
In [14]: %timeit mr_e(a)
10000 loops, best of 3: 149 us per loop
In [15]: a = np.random.randint(1, 10, size=(1000,))
In [16]: %timeit concatenated_ranges(a)
1000 loops, best of 3: 358 us per loop
In [17]: %timeit junuxx(a)
100 loops, best of 3: 9.38 ms per loop
In [18]: %timeit mr_e(a)
100 loops, best of 3: 8.93 ms per loop
</code></pre>
|
python|numpy|repeat
| 3
|
374,379
| 15,454,285
|
Numpy: Array of class instances
|
<p>This might be a dumb question, but say i want to build a program from bottom-up like so:</p>
<pre><code>class Atom(object):
def __init__(self):
'''
Constructor
'''
def atom(self, foo, bar):
#...with foo and bar being arrays of atom Params of lengths m & n
"Do what atoms do"
return atom_out
</code></pre>
<p>...i can put my instances in a dictionary:</p>
<pre><code>class Molecule(Atom):
def __init__(self):
def structure(self, a, b):
#a = 2D array of size (num_of_atoms, m); 'foo' Params for each atom
#b = 2D array of size (num_of_atoms, n); 'bar' Params for each atom
unit = self.atom()
fake_array = {"atom1": unit(a[0], b[0]),
"atom2": unit(a[1], b[1]),
: : :
: : :}
def chemicalBonds(self, this, that, theother):
: : :
: : :
</code></pre>
<p>My question is, is there a way to do this with numpy arrays so that each element in "<code>real_array</code>" would be an instance of <code>atom</code>--i.e., the output of the individual computations of <code>atom</code> function? I can extend this to <code>class Water(molecule):</code> which would perform fast numpy operations on the large <code>structure</code> and <code>chemicalBonds</code> outputs, hence the need for arrays...Or is it the case that i'm going about this the wrong way?</p>
<p>Also if i am on the right track, i'd appreciate if you wanted to throw in any tips on how to structure a "hierarchical program" like this, as i'm not sure i'm doing the above correctly and recently discovered that i don't know what i'm doing.</p>
<p>Thanks in advance.</p>
|
<p>The path to hell is paved with premature optimization... As a beginner in python, focus on your program and what is supposed to do, once it is doing it too slowly you can ask focused questions about how to make it do it faster. I would stick with learning python's intrinsic data structures for managing your objects. You can implement your algorithms using using numpy arrays with standard data types if you are doing large array operations. Once you have some working code you can do performance testing to determine where you need optimization. </p>
<p>Numpy does allow you to create arrays of objects, and I will give you enough rope to hang yourself with below, but creating an ecosystem of tools to operate on those arrays of objects is not a trivial undertaking. You should first work with python data structures (buy Beazley's essential python reference), then with numpy's built in types, then creating your own <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html">compound numpy types</a>. As a last resort, use the object type from the example below.</p>
<p>Good luck!</p>
<p>David</p>
<pre><code>import numpy
class Atom(object):
def atoms_method(self, foo, bar):
#...with foo and bar being arrays of Paramsof length m & n
atom_out = foo + bar
return atom_out
array = numpy.ndarray((10,),dtype=numpy.object)
for i in xrange(10):
array[i] = Atom()
for i in xrange(10):
print array[i].atoms_method(i, 5)
</code></pre>
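<p>For comparison, a minimal sketch of the "compound numpy types" (structured dtypes) mentioned above might look like this; the field names are purely illustrative:</p>
<pre><code>import numpy
# each element holds a 'foo' and a 'bar' value instead of a full Python object
atom_dtype = numpy.dtype([('foo', numpy.float64), ('bar', numpy.float64)])
atoms = numpy.zeros(10, dtype=atom_dtype)
atoms['foo'] = numpy.arange(10)
atoms['bar'] = 5.0
print(atoms['foo'] + atoms['bar'])  # vectorized over all atoms at once
</code></pre>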
|
python|numpy|inner-classes
| 8
|
374,380
| 15,111,230
|
what is a reason to use ndarray instead of python array
|
<p>I am building a class with some iteration over incoming data. The data are in array form without the use of numpy objects. In my code I often use <code>.append</code> to create another array. At some point I changed one of the big arrays (1000x2000) to a numpy.array. Now I get error after error. I started to convert all of the arrays into ndarrays, but methods like <code>.append</code> do not work any more. I am starting to have problems with pointing to rows, columns or cells, and have to rebuild all my code.</p>
<p>I tried to google an answer to the question "what is the advantage of using ndarray over a normal array", but I can't find a sensible answer. Can you explain when I should start to use ndarrays, and whether in your practice you use both of them or stick to only one?</p>
<p>Sorry if the question is at a novice level, but I am new to Python, just trying to move from Matlab, and I want to understand the pros and cons. Thanks.</p>
|
<p>NumPy and Python arrays share the property of being efficiently stored in memory.</p>
<p>NumPy arrays can be added together, multiplied by a number, you can calculate, say, the sine of all their values in one function call, etc. As HYRY pointed out, they can also have more than one dimension. You cannot do this with Python arrays.</p>
<p>On the other hand, Python arrays can indeed be appended to. Note that NumPy arrays can however be concatenated together (<code>hstack()</code>, <code>vstack()</code>,…). That said, NumPy arrays are mostly meant to have a fixed number of elements.</p>
<p>It is common to first build a list (or a Python array) of values iteratively and then convert it to a NumPy array (with <code>numpy.array()</code>, or, more efficiently, with <code>numpy.frombuffer()</code>, as HYRY mentioned): this allows mathematical operations on arrays (or matrices) to be performed very conveniently (simple syntax for complex operations). Alternatively, <code>numpy.fromiter()</code> might be used to construct the array from an iterator. Or <code>loadtxt()</code> to construct it from a text file.</p>
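<p>As a small illustration of that workflow (build a plain list iteratively, convert once, then use vectorized operations), a sketch:</p>
<pre><code>import numpy as np

values = []
for i in range(5):
    values.append(i ** 2)            # growing a Python list is cheap

arr = np.array(values, dtype=float)  # convert once
print(np.sin(arr) + 2 * arr)         # one call operates on every element
</code></pre>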
|
python|numpy|multidimensional-array
| 8
|
374,381
| 15,033,511
|
Compute a confidence interval from sample data
|
<p>I have sample data which I would like to compute a confidence interval for, assuming a normal distribution.</p>
<p>I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.</p>
|
<pre><code>import numpy as np
import scipy.stats
def mean_confidence_interval(data, confidence=0.95):
a = 1.0 * np.array(data)
n = len(a)
m, se = np.mean(a), scipy.stats.sem(a)
h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1)
return m, m-h, m+h
</code></pre>
<p>You can calculate it like this.</p>
|
python|numpy|scipy|statistics|confidence-interval
| 237
|
374,382
| 13,659,401
|
numpy change array values when mask is one
|
<p>I'm new to numpy and I'm running into trouble.</p>
<p>I've got two numpy arrays, img and thr:</p>
<pre><code>>>>img.shape
(2448, 3264, 3)
>>>thr.shape
(2448, 3264)
</code></pre>
<p>And I want to do something like this: set <code>img[x,y] = [255,255,255]</code> only when <code>thr[x,y] is not 0</code>.</p>
<p>I tried iterating over the array and doing it myself, but it takes a long time, so I really need the C underneath numpy. I also took a look at masked arrays but I didn't understand how to use them.</p>
<p>Thanks!</p>
|
<p>Using <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#assigning-values-to-indexed-arrays" rel="nofollow">NumPy assignment to an indexed array</a>:</p>
<pre><code>img[thr != 0] = [255,255,255]
</code></pre>
|
python|numpy
| 4
|
374,383
| 13,711,803
|
Python: np.loadtxt, read multiple files
|
<p>I have managed to get loadtxt to read in a single file, but now I want it to read in a bunch of files off a .list file I have. I tried throwing it in a for loop, but I can't seem to get it to work. Can anyone help please?</p>
<p><code>[row1, row2, row3] = np.loadtxt("data.fits",unpack=True,skiprows=1)</code></p>
<p>And I want something like</p>
<pre><code>for i in range(0,len(array)):
[row1, row2, row3] = np.loadtxt("list.list[i]",unpack=True,skiprows=1)
DO THINGS
</code></pre>
|
<pre><code>for i in range(len(array)):
[row1, row2, row3] = np.loadtxt(list.list[i],unpack=True,skiprows=1)
</code></pre>
<p>Additionally:</p>
<pre><code>filelist=['file1','file2']
for file in filelist:
[row1, row2, row3] = np.loadtxt(file,unpack=True,skiprows=1)
#Do Stuff
</code></pre>
<p>I believe the quotation marks are messing with you. Also, you do not need the 0 in range.</p>
<p>If this doesn't work, can you paste what list.list and array are?</p>
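<p>If the file names actually live inside your <code>.list</code> file, a sketch for reading them from there first (assuming one file name per line) would be:</p>
<pre><code>with open("list.list") as f:
    filenames = [line.strip() for line in f if line.strip()]

for fname in filenames:
    row1, row2, row3 = np.loadtxt(fname, unpack=True, skiprows=1)
    # DO THINGS
</code></pre>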
|
python|numpy
| 3
|
374,384
| 13,288,202
|
Average arrays with Null values
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/13281904/avarage-of-a-number-of-arrays-with-numpy-without-considering-zero-values">avarage of a number of arrays with numpy without considering zero values</a> </p>
</blockquote>
<p>I am working with numpy and I have a number of arrays with the same size and shape. They are 500*500 and have some Null values. I want to have an array that is the result of an element-by-element average of my original arrays. For example:</p>
<pre><code>A=[ 1 Null 8 Null; Null 4 6 1]
B=[ 8 5 8 Null; 5 9 5 3]
</code></pre>
<p>the resulting array should be like:</p>
<pre><code>C=[ 4.5 5 8 Null; 5 6.5 5.5 2]
</code></pre>
<p>How can I do that?</p>
|
<p>Update: As of NumPy 1.8, you could use <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.nanmean.html" rel="nofollow noreferrer">np.nanmean</a> instead of <code>scipy.stats.nanmean</code>.</p>
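<p>With it, the example below reduces to a single call (a sketch using the same data):</p>
<pre><code>import numpy as np
A = np.array([1, np.nan, 8, np.nan, np.nan, 4, 6, 1])
B = np.array([8, 5, 8, np.nan, 5, 9, 5, 3])
np.nanmean(np.array([A, B]), axis=0)
# array([ 4.5,  5. ,  8. ,  nan,  5. ,  6.5,  5.5,  2. ])
# (an all-NaN column yields nan and a RuntimeWarning)
</code></pre>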
<hr>
<p>If you have <code>scipy</code>, you could use <a href="http://www.scipy.org/doc/api_docs/SciPy.stats.stats.html#nanmean" rel="nofollow noreferrer">scipy.stats.nanmean</a>:</p>
<pre><code>In [2]: import numpy as np
In [45]: import scipy.stats as stats
In [3]: nan = np.nan
In [43]: A = np.array([1, nan, 8, nan, nan, 4, 6, 1])
In [44]: B = np.array([8, 5, 8, nan, 5, 9, 5, 3])
In [46]: C = np.array([A, B])
In [47]: C
Out[47]:
array([[ 1., nan, 8., nan, nan, 4., 6., 1.],
[ 8., 5., 8., nan, 5., 9., 5., 3.]])
In [48]: stats.nanmean(C)
Warning: invalid value encountered in divide
Out[48]: array([ 4.5, 5. , 8. , nan, 5. , 6.5, 5.5, 2. ])
</code></pre>
<p>You can find other numpy-only (masked-array) solutions, <a href="https://stackoverflow.com/q/5480694/190597">here</a>. Namely,</p>
<pre><code>In [60]: C = np.array([A, B])
In [61]: C = np.ma.masked_array(C, np.isnan(C))
In [62]: C
Out[62]:
masked_array(data =
[[1.0 -- 8.0 -- -- 4.0 6.0 1.0]
[8.0 5.0 8.0 -- 5.0 9.0 5.0 3.0]],
mask =
[[False True False True True False False False]
[False False False True False False False False]],
fill_value = 1e+20)
In [63]: np.mean(C, axis = 0)
Out[63]:
masked_array(data = [4.5 5.0 8.0 -- 5.0 6.5 5.5 2.0],
mask = [False False False True False False False False],
fill_value = 1e+20)
In [66]: np.ma.filled(np.mean(C, axis = 0), nan)
Out[67]: array([ 4.5, 5. , 8. , nan, 5. , 6.5, 5.5, 2. ])
</code></pre>
|
python|arrays|numpy|null|average
| 7
|
374,385
| 13,432,492
|
How to do a 3D revolution plot in matplotlib?
|
<p>Suppose you have a 2D curve, given by e.g.:</p>
<pre><code>from matplotlib import pylab
t = numpy.linspace(-1, 1, 21)
z = -t**2
pylab.plot(t, z)
</code></pre>
<p>which produces </p>
<p><img src="https://i.stack.imgur.com/69Ior.png" alt="http://i.imgur.com/feQzk.png"></p>
<p>I would like to perform a revolution to achieve a 3d plot (see <a href="http://reference.wolfram.com/mathematica/ref/RevolutionPlot3D.html" rel="noreferrer">http://reference.wolfram.com/mathematica/ref/RevolutionPlot3D.html</a>). Plotting a 3d surface is not the problem, but it does not produce the result I'm expecting: </p>
<p><img src="https://i.stack.imgur.com/iSSbQ.png" alt="http://i.imgur.com/ljXHQ.png"></p>
<p>How can I perform a rotation of this blue curve in the 3d plot ?</p>
|
<p>Your figure seems to use a cartesian grid. There are some examples on the matplotlib website of 3D cylindrical functions like Z = f(R) (here: <a href="http://matplotlib.org/examples/mplot3d/surface3d_radial_demo.html" rel="nofollow noreferrer">http://matplotlib.org/examples/mplot3d/surface3d_radial_demo.html</a>).
Is that what you are looking for?
Below is what I get with your function Z = -R**2:<img src="https://i.stack.imgur.com/Rmw8n.png" alt="Plot of Z = -R**2 function"></p>
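<p>For reference, a minimal sketch of how such a surface of revolution can be built by hand, sweeping the curve around the z-axis with a polar meshgrid, might be:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

t = np.linspace(0, 1, 21)               # radial coordinate
theta = np.linspace(0, 2 * np.pi, 60)   # revolution angle
R, Theta = np.meshgrid(t, theta)
X, Y = R * np.cos(Theta), R * np.sin(Theta)
Z = -R**2                               # the 2D curve, now a surface of revolution

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z)
plt.show()
</code></pre>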
<p>And to add a cut-off to your function, use the following example
(matplotlib 1.2.0 required):</p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = -(abs(X) + abs(Y))
## 1) Initial surface
# Flatten mesh arrays, necessary for plot_trisurf function
X = X.flatten()
Y = Y.flatten()
Z = Z.flatten()
# Plot initial 3D surface with triangles (more flexible than quad)
#surfi = ax.plot_trisurf(X, Y, Z, cmap=cm.jet, linewidth=0.2)
## 2) Cut off
# Get desired values indexes
cut_idx = np.where(Z > -5)
# Apply the "cut off"
Xc = X[cut_idx]
Yc = Y[cut_idx]
Zc = Z[cut_idx]
# Plot the new surface (it would be impossible with quad grid)
surfc = ax.plot_trisurf(Xc, Yc, Zc, cmap=cm.jet, linewidth=0.2)
# You can force limit if you want to compare both graphs...
ax.set_xlim(-5,5)
ax.set_ylim(-5,5)
ax.set_zlim(-10,0)
plt.show()
</code></pre>
<p>Result for surfi:</p>
<p><img src="https://i.stack.imgur.com/WJEo2.png" alt="surfi"></p>
<p>and surfc:</p>
<p><img src="https://i.stack.imgur.com/5wVht.png" alt="surfc"></p>
|
python|numpy|matplotlib
| 4
|
374,386
| 29,686,547
|
Assigning values to multi-dimensional masked arrays does not clear the mask?
|
<p>Assigning to a masked array is supposed to clear the mask. This works ok for me in a single-dimensional array, but doesn't work in a multi-dimensional array. I am able to workaround this by either flattening the array to a single dimension or assigning the mask explicitly (shown below), but it doesn't seem like I should have to do either of those. Am I doing this wrong?</p>
<pre><code>import numpy
marray = numpy.ma.masked_all(3)
marray
marray.hardmask
marray.data
marray.mask
marray[2] = 2
marray
marray2 = numpy.ma.masked_all((3,3))
marray2
marray2.hardmask
marray2.data
marray2.mask
marray2[2][2] = 2
marray2
marray2.data
marray2.mask
marray2.mask[2][2] = False
marray2
</code></pre>
|
<p>When you do <code>marray2[2][2] = 2</code>, the first <code>[2]</code> is actually returning a <em>copy</em> of the 3rd row of the array, not a reference to the row within <code>marray2</code>, so you are manipulating the copy and not affecting <code>marray2</code>.</p>
<p>Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. Try replacing <code>marray2[2][2] = 2</code> with <code>marray2[2,2] = 2</code> and I believe you will get the result you are expecting.</p>
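<p>To illustrate the difference described above, a minimal sketch:</p>
<pre><code>import numpy
marray2 = numpy.ma.masked_all((3, 3))
marray2[2][2] = 2   # per the explanation above, acts on an intermediate object; marray2's mask stays set
marray2[2, 2] = 2   # acts on marray2 directly; the value is set and the mask at (2, 2) is cleared
print(marray2)
</code></pre>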
|
python|arrays|numpy
| 1
|
374,387
| 29,382,903
|
How to apply piecewise linear fit in Python?
|
<p>I am trying to fit a piecewise linear fit, as shown in fig. 1, for a data set.</p>
<p><img src="https://i.stack.imgur.com/Thrit.png" alt="enter image description here"></p>
<p>This figure was obtained by setting on the lines. I attempted to apply a piecewise linear fit using the code:</p>
<pre><code>from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15])
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
def linear_fit(x, a, b):
return a * x + b
fit_a, fit_b = optimize.curve_fit(linear_fit, x[0:5], y[0:5])[0]
y_fit = fit_a * x[0:7] + fit_b
fit_a, fit_b = optimize.curve_fit(linear_fit, x[6:14], y[6:14])[0]
y_fit = np.append(y_fit, fit_a * x[6:14] + fit_b)
figure = plt.figure(figsize=(5.15, 5.15))
figure.clf()
plot = plt.subplot(111)
ax1 = plt.gca()
plot.plot(x, y, linestyle = '', linewidth = 0.25, markeredgecolor='none', marker = 'o', label = r'\textit{y_a}')
plot.plot(x, y_fit, linestyle = ':', linewidth = 0.25, markeredgecolor='none', marker = '', label = r'\textit{y_b}')
plot.set_ylabel('Y', labelpad = 6)
plot.set_xlabel('X', labelpad = 6)
figure.savefig('test.pdf', box_inches='tight')
plt.close()
</code></pre>
<p>But this gave me a fit of the form shown in fig. 2. I tried playing with the values, but nothing changed and I can't get the fit of the upper line right. The most important requirement for me is how I can get Python to find the gradient change point. In essence <strong><em>I want Python to recognize and fit two linear fits in the appropriate range. How can this be done in Python?</em></strong></p>
<p><img src="https://i.stack.imgur.com/UjrF6.png" alt="enter image description here"></p>
|
<p>You can use <code>numpy.piecewise()</code> to create the piecewise function and then use <code>curve_fit()</code>. Here is the code:</p>
<pre><code>from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
def piecewise_linear(x, x0, y0, k1, k2):
return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0])
p , e = optimize.curve_fit(piecewise_linear, x, y)
xd = np.linspace(0, 15, 100)
plt.plot(x, y, "o")
plt.plot(xd, piecewise_linear(xd, *p))
</code></pre>
<p>the output:</p>
<p><img src="https://i.stack.imgur.com/xw0QH.png" alt="enter image description here" /></p>
<p>For an N-part fit, please see <a href="https://gist.github.com/ruoyu0088/70effade57483355bbd18b31dc370f2a" rel="noreferrer">segments_fit.ipynb</a>.</p>
|
python|numpy|scipy|curve-fitting|piecewise
| 78
|
374,388
| 29,344,966
|
Numpy: add row and column
|
<p>How can I add one row and one column to a numpy array? The array has the shape (480,639,3) and I want it to have the shape (481,640,3). The new row and column should be filled with zeros, like this:</p>
<pre><code>[43,42,40], ... [64,63,61], [0,0,0]
... ... ... [0,0,0]
[29,29,29], ... [38,37,35], [0,0,0]
[0,0,0], [0,0,0] ... [0,0,0]
</code></pre>
<p>To add a new column I'm doing this:</p>
<pre><code>b = numpy.zeros((480,640,3), dtype = int)
b[:,:-1] = old_arry
</code></pre>
<p>But <em>how can I add one row? Do I have to use a loop, or is there a better way to do this?</em></p>
|
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html" rel="nofollow"><code>pad</code></a></p>
<pre><code>>>> old = np.random.random_integers(0, 100, size=(480, 640))
>>> np.pad(old, pad_width=((0, 1), (0, 1)), mode='constant')
array([[ 66, 22, 51, ..., 18, 15, 0],
[ 28, 12, 43, ..., 8, 38, 0],
[ 55, 43, 89, ..., 67, 58, 0],
...,
[ 17, 25, 100, ..., 12, 52, 0],
[ 97, 59, 82, ..., 38, 97, 0],
[ 0, 0, 0, ..., 0, 0, 0]])
>>> np.pad(old, pad_width=((0, 1), (0, 1)), mode='constant').shape
(481, 641)
>>>
</code></pre>
<p>You can also write it as <code>np.pad(old, ((0, 1), (0, 1)), mode='constant')</code>, i.e. without the <code>pad_width</code> keyword. To set a different value for the padded areas, see the <code>constant_values</code> parameter in the documentation.</p>
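<p>For the original three-channel image shape from the question, a sketch padding only the first two axes would be:</p>
<pre><code>import numpy as np
old_arry = np.zeros((480, 639, 3), dtype=int)   # stand-in for the real image
new_arry = np.pad(old_arry, pad_width=((0, 1), (0, 1), (0, 0)), mode='constant')
new_arry.shape   # (481, 640, 3)
</code></pre>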
|
python|arrays|numpy
| 3
|
374,389
| 29,737,919
|
datetime conversion and manipulation in python
|
<p>My raw data is in CSV. I load it as a pandas dataframe and datetime fields are loaded as objects. </p>
<pre><code>datetime1 22773 non-null object
datetime2 22771 non-null object
</code></pre>
<p>Using <code>pd.to_datetime(df['datetime1'])</code> I convert it to - <code>datetime64[ns]</code>.</p>
<p>But in doing so the actual value is increased by 7 hours.</p>
<p>I have 2 questions - </p>
<ol>
<li><p>What is the unit <code>datetime64[ns]</code>? is it based on unix time or some other time zone?</p></li>
<li><p>How can I subtract the 7 hours and keep the actual value but my field format is still datetime? </p></li>
</ol>
|
<ol>
<li><p>It's just a data type based on numpy's datetime64[ns]; it doesn't carry a timezone attribute that altered your data.</p></li>
<li><p><code>df["existing or new column"] = df["datetime1"] - pd.Timedelta(7, 'h')</code></p></li>
</ol>
<p>Also, you can always convert to datetime when you read the csv, using the parse_dates parameter like so. That way you can skip the pd.to_datetime() step:</p>
<pre><code>df = pd.read_csv("filename", parse_dates = ["datetime1","datetime2"])
</code></pre>
|
python|datetime|pandas
| 0
|
374,390
| 29,356,825
|
python: calculate center of mass
|
<p>I have a data set with 4 columns: x,y,z, and value, let's say:</p>
<pre><code>x y z value
0 0 0 0
0 1 0 0
0 2 0 0
1 0 0 0
1 1 0 1
1 2 0 1
2 0 0 0
2 1 0 0
2 2 0 0
</code></pre>
<p>I would like to calculate the center of mass <code>CM = (x_m,y_m,z_m)</code> of all values. In the present example, I would like to see <code>(1,1.5,0)</code> as output.</p>
<p>I thought this must be a trivial problem, but I can't find a solution to it in the internet. <code>scipy.ndimage.measurements.center_of_mass</code> seems to be the right thing, but unfortunately, the function always returns two values (instead of 3). In addition, I can't find any documentation on how to set up an <code>ndimage</code> from an array: Would I use a numpy array N of shape <code>(9,4)</code>? Would then N[:,0] be the x-coordinate?</p>
<p>Any help is highly appreciated.</p>
|
<p>The simplest way I can think of is this: just find an average of the coordinates of mass components weighted by each component's contribution.</p>
<pre><code>import numpy
masses = numpy.array([[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 2, 0, 0],
[1, 0, 0, 0],
[1, 1, 0, 1],
[1, 2, 0, 1],
[2, 0, 0, 0],
[2, 1, 0, 0],
[2, 2, 0, 0]])
nonZeroMasses = masses[numpy.nonzero(masses[:,3])] # Not really necessary, can just use masses because 0 mass used as weight will work just fine.
CM = numpy.average(nonZeroMasses[:,:3], axis=0, weights=nonZeroMasses[:,3])
</code></pre>
|
python|numpy|centering
| 14
|
374,391
| 29,437,001
|
Pandas: get index of removed row
|
<p>I have a large dataframe. Here is a small one for the example.</p>
<pre><code> C1 C2 C3 C4
0 foo one 1 4
1 foo one 1 5
2 foo two 2 3
3 bar one 3 6
4 bar two 2 7
</code></pre>
<p>I perform a list of filters that remove several rows. Here is the final df</p>
<pre><code> C1 C2 C3 C4
0 foo one 1 4
2 foo two 2 3
3 bar one 3 6
</code></pre>
<p>What I want is the index of the removed lines, so I can output all the values that were rejected.</p>
|
<p>You could use the <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Index.difference.html" rel="nofollow"><code>difference</code></a> method on the two Index objects:</p>
<pre><code>>>> df_orig.index.difference(df_final.index)
Int64Index([1, 4], dtype='int64')
</code></pre>
<p>If you're using a version of pandas without this, you could use <code>np.setdiff1d</code> instead:</p>
<pre><code>>>> np.setdiff1d(df_orig.index, df_final.index)
array([1, 4], dtype=int64)
</code></pre>
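<p>Since you want to output the rejected values themselves, a small follow-up sketch using those indices:</p>
<pre><code>removed_idx = df_orig.index.difference(df_final.index)
rejected = df_orig.loc[removed_idx]   # the rows that the filters dropped
print(rejected)
</code></pre>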
|
python|indexing|pandas
| 2
|
374,392
| 29,658,567
|
Create vertical NumPy arrays in Python
|
<p>I'm using NumPy in Python to work with arrays. This is how I currently create a vertical array:</p>
<pre><code>import numpy as np
a = np.array([[1],[2],[3]])
</code></pre>
<p>Is there a simple and more direct way to create vertical arrays?</p>
|
<p>You can use <code>reshape</code> or <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="noreferrer"><code>vstack</code></a> :</p>
<pre><code>>>> a=np.arange(1,4)
>>> a
array([1, 2, 3])
>>> a.reshape(3,1)
array([[1],
[2],
[3]])
>>> np.vstack(a)
array([[1],
[2],
[3]])
</code></pre>
<p>Also, you can add a new axis with <code>None</code> (i.e. <code>np.newaxis</code>), which works nicely with <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="noreferrer"><em>broadcasting</em></a>, in order to reshape your array:</p>
<pre><code>In [32]: a = np.arange(10)
In [33]: a
Out[33]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [34]: a[:,None]
Out[34]:
array([[0],
[1],
[2],
[3],
[4],
[5],
[6],
[7],
[8],
[9]])
</code></pre>
|
python|arrays|numpy
| 28
|
374,393
| 29,739,894
|
pandas: read_csv how to force bool data to dtype bool instead of object
|
<p>I'm reading in a large flatfile which has timestamped data with multiple columns. The data has a boolean column which can be True/False or can have no entry (which evaluates to NaN).</p>
<p>When reading the csv the bool column gets typecast as object, which prevents saving the data in an HDFStore because of a serialization error.</p>
<p>example data: </p>
<pre><code>A B C D
a 1 2 true
b 5 7 false
c 3 2 true
d 9 4
</code></pre>
<p>I use the following command to read</p>
<pre><code>import pandas as pd
pd.read_csv('data.csv', parse_dates=True)
</code></pre>
<p>One solution is to specify the dtype while reading in the csv, but I was hoping for a more succinct solution like convert_objects where I can specify parse_numeric or parse_dates.</p>
|
<p>As you had a missing value in your csv, the dtype of the column is shown as object because you have mixed dtypes: the first 3 row values are booleans, the last will be a float (NaN).</p>
<p>To convert the <code>NaN</code> value use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a>; it accepts a dict mapping columns to desired fill values, producing a homogeneous dtype:</p>
<pre class="lang-py prettyprint-override"><code>>>> t = """
A B C D
a 1 NaN true
b 5 7 false
c 3 2 true
d 9 4 """
>>> df = pd.read_csv(io.StringIO(t),sep='\s+')
>>> df
A B C D
0 a 1 NaN True
1 b 5 7 False
2 c 3 2 True
3 d 9 4 NaN
>>> df.fillna({'C':0, 'D':False})
A B C D
0 a 1 0 True
1 b 5 7 False
2 c 3 2 True
3 d 9 4 False
</code></pre>
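<p>If you then want the column to actually end up with dtype <code>bool</code> (rather than object), a follow-up cast could be (a sketch):</p>
<pre><code>df = df.fillna({'C': 0, 'D': False})
df['D'] = df['D'].astype(bool)   # now dtype bool instead of object
df['C'] = df['C'].astype(int)    # optional: C no longer needs to hold NaN
</code></pre>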
|
python|pandas
| 10
|
374,394
| 29,678,154
|
Why is numpy/pandas parsing of a csv file with long lines so slow?
|
<p>I'm trying to efficiently parse a csv file with around 20,000 entries per line (and a few thousand lines) to a numpy array (or list of arrays, or anything similar really). I found a number of other questions, along with <a href="http://wesmckinney.com/blog/a-new-high-performance-memory-efficient-file-parser-engine-for-pandas/" rel="noreferrer">this</a> blog post, which suggest that pandas's csv parser is extremely fast. However I've benchmarked pandas, numpy and some pure-python approaches and it appears that the trivial pure-python string splitting + list comprehension beats everything else by quite a large margin.</p>
<ul>
<li><p>What's going on here?</p></li>
<li><p>Are there any csv parsers that that would be more efficient?</p></li>
<li><p>If I change the format of the input data will it help?</p></li>
</ul>
<p>Here's the source code I'm benchmarking with (the <code>sum()</code> is just to make sure any lazy iterators are forced to evaluate everything):</p>
<pre><code>#! /usr/bin/env python3
import sys
import time
import gc
import numpy as np
from pandas.io.parsers import read_csv
import csv
def python_iterator_csv():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for line in f.readlines():
all_data = line.strip().split(",")
print(sum(float(x) for x in all_data))
def python_list_csv():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for line in f.readlines():
all_data = line.strip().split(",")
print(sum([float(x) for x in all_data]))
def python_array_csv():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for line in f.readlines():
all_data = line.strip().split(",")
print(sum(np.array([float(x) for x in all_data])))
def numpy_fromstring():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for line in f.readlines():
print(sum(np.fromstring(line, sep = ",")))
def numpy_csv():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for row in np.loadtxt(f, delimiter = ",", dtype = np.float, ndmin = 2):
print(sum(row))
def csv_loader(csvfile):
return read_csv(csvfile,
header = None,
engine = "c",
na_filter = False,
quoting = csv.QUOTE_NONE,
index_col = False,
sep = ",")
def pandas_csv():
with open("../data/temp_fixed_l_no_initial", "r") as f:
for row in np.asarray(csv_loader(f).values, dtype = np.float64):
print(sum(row))
def pandas_csv_2():
with open("../data/temp_fixed_l_no_initial", "r") as f:
print(csv_loader(f).sum(axis=1))
def simple_time(func, repeats = 3):
gc.disable()
for i in range(0, repeats):
start = time.perf_counter()
func()
end = time.perf_counter()
print(func, end - start, file = sys.stderr)
gc.collect()
gc.enable()
return
if __name__ == "__main__":
simple_time(python_iterator_csv)
simple_time(python_list_csv)
simple_time(python_array_csv)
simple_time(numpy_csv)
simple_time(pandas_csv)
simple_time(numpy_fromstring)
simple_time(pandas_csv_2)
</code></pre>
<p>The output (to stderr) is:</p>
<pre><code><function python_iterator_csv at 0x7f22302b1378> 19.754893831999652
<function python_iterator_csv at 0x7f22302b1378> 19.62786615600271
<function python_iterator_csv at 0x7f22302b1378> 19.66641107099713
<function python_list_csv at 0x7f22302b1ae8> 18.761991592000413
<function python_list_csv at 0x7f22302b1ae8> 18.722911622000538
<function python_list_csv at 0x7f22302b1ae8> 19.00348913199923
<function python_array_csv at 0x7f222baffa60> 41.8681991630001
<function python_array_csv at 0x7f222baffa60> 42.141840383999806
<function python_array_csv at 0x7f222baffa60> 41.86879085799956
<function numpy_csv at 0x7f222ba5cc80> 47.957625758001086
<function numpy_csv at 0x7f222ba5cc80> 47.245571732000826
<function numpy_csv at 0x7f222ba5cc80> 47.25457685799847
<function pandas_csv at 0x7f2228572620> 43.39656048499819
<function pandas_csv at 0x7f2228572620> 43.5016079220004
<function pandas_csv at 0x7f2228572620> 43.567352316000324
<function numpy_fromstring at 0x7f593ed3cc80> 32.490607361
<function numpy_fromstring at 0x7f593ed3cc80> 32.421125410997774
<function numpy_fromstring at 0x7f593ed3cc80> 32.37903898300283
<function pandas_csv_2 at 0x7f846d1aa730> 24.903284349999012
<function pandas_csv_2 at 0x7f846d1aa730> 25.498485038999206
<function pandas_csv_2 at 0x7f846d1aa730> 25.03262125800029
</code></pre>
<p>From the blog post linked above it seems that pandas can import a csv matrix of random doubles at a data rate of <code>145/1.279502</code> = 113 MB/s. My file is 814 MB, so pandas only manages ~19 MB/s for me!</p>
<p>edit: As pointed out by @ASGM, this wasn't really fair to pandas because it is not designed for row-wise iteration. I've included the suggested improvement in the benchmark, but it's still slower than the pure python approaches. (Also: I've played around with profiling similar code, before simplifying it to this benchmark, and the parsing always dominated the time taken.)</p>
<p>edit2: Best of three times without the <code>sum</code>:</p>
<pre><code>python_list_csv 17.8
python_array_csv 23.0
numpy_csv 28.6
numpy_fromstring 13.3
pandas_csv_2 24.2
</code></pre>
<p>so without the summation <code>numpy.fromstring</code> beats pure python by a small margin (I think fromstring is written in <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/ctors.c#L3528" rel="noreferrer">C</a> so this makes sense).</p>
<p>edit3:</p>
<p>I've done some experimentation with the C/C++ float parsing code <a href="http://tinodidriksen.com/2011/05/28/cpp-convert-string-to-double-speed/" rel="noreferrer">here</a> and it looks like I'm probably expecting too much from pandas/numpy. Most of the robust parsers listed there give times of 10+ seconds just to parse this number of floats. The only parser which resoundingly beats <code>numpy.fromstring</code> is boost's <a href="http://www.boost.org/doc/libs/1_57_0/libs/spirit/doc/html/index.html" rel="noreferrer"><code>spirit::qi</code></a> which is C++ and so not likely to make it into any python libraries.</p>
<p>[ More precise results: <code>spirit::qi</code> ~ 3s, <code>lexical_cast</code> ~ 7s, <code>atof</code> and <code>strtod</code> ~ 10s, <code>sscanf</code> ~ 18s, <code>stringstream</code> and <code>stringstream reused</code> are incredibly slow at 50s and 28s. ]</p>
|
<p>Does your CSV file contain column headers? If not, then explicitly passing <code>header=None</code> to <code>pandas.read_csv</code> can give a slight performance improvement for the Python parsing engine (but not for the C engine):</p>
<pre><code>In [1]: np.savetxt('test.csv', np.random.randn(1000, 20000), delimiter=',')
In [2]: %timeit pd.read_csv('test.csv', delimiter=',', engine='python')
1 loops, best of 3: 9.19 s per loop
In [3]: %timeit pd.read_csv('test.csv', delimiter=',', engine='c')
1 loops, best of 3: 6.47 s per loop
In [4]: %timeit pd.read_csv('test.csv', delimiter=',', engine='python', header=None)
1 loops, best of 3: 6.26 s per loop
In [5]: %timeit pd.read_csv('test.csv', delimiter=',', engine='c', header=None)
1 loops, best of 3: 6.46 s per loop
</code></pre>
<h2>Update</h2>
<p>If there are no missing or invalid values then you can do a little better by passing <code>na_filter=False</code> (only valid for the C engine):</p>
<pre><code>In [6]: %timeit pd.read_csv('test.csv', sep=',', engine='c', header=None)
1 loops, best of 3: 6.42 s per loop
In [7]: %timeit pd.read_csv('test.csv', sep=',', engine='c', header=None, na_filter=False)
1 loops, best of 3: 4.72 s per loop
</code></pre>
<p>There may also be small gains to be had by specifying the <code>dtype</code> explicitly:</p>
<pre><code>In [8]: %timeit pd.read_csv('test.csv', sep=',', engine='c', header=None, na_filter=False, dtype=np.float64)
1 loops, best of 3: 4.36 s per loop
</code></pre>
<h2>Update 2</h2>
<p>Following up on @morningsun's comment, setting <code>low_memory=False</code> squeezes out a bit more speed:</p>
<pre><code>In [9]: %timeit pd.read_csv('test.csv', sep=',', engine='c', header=None, na_filter=False, dtype=np.float64, low_memory=True)
1 loops, best of 3: 4.3 s per loop
In [10]: %timeit pd.read_csv('test.csv', sep=',', engine='c', header=None, na_filter=False, dtype=np.float64, low_memory=False)
1 loops, best of 3: 3.27 s per loop
</code></pre>
<p>For what it's worth, these benchmarks were all done using the current dev version of pandas (0.16.0-19-g8d2818e).</p>
|
python|parsing|csv|numpy|pandas
| 17
|
374,395
| 29,439,589
|
How to create a pivot table on extremely large dataframes in Pandas
|
<p>I need to create a pivot table of 2000 columns by around 30-50 million rows from a dataset of around 60 million rows. I've tried pivoting in chunks of 100,000 rows, and that works, but when I try to recombine the DataFrames by doing a .append() followed by .groupby('someKey').sum(), all my memory is taken up and python eventually crashes.</p>
<p>How can I do a pivot on data this large with a limited amount of RAM?</p>
<p>EDIT: adding sample code</p>
<p>The following code includes various test outputs along the way, but the last print is what we're really interested in. Note that if we change segMax to 3, instead of 4, the code will produce a false positive for correct output. The main issue is that if a shipmentid entry is not in each and every chunk that sum(wawa) looks at, it doesn't show up in the output.</p>
<pre><code>import pandas as pd
import numpy as np
import random
from pandas.io.pytables import *
import os
pd.set_option('io.hdf.default_format','table')
# create a small dataframe to simulate the real data.
def loadFrame():
frame = pd.DataFrame()
frame['shipmentid']=[1,2,3,1,2,3,1,2,3] #evenly distributing shipmentid values for testing purposes
frame['qty']= np.random.randint(1,5,9) #random quantity is ok for this test
frame['catid'] = np.random.randint(1,5,9) #random category is ok for this test
return frame
def pivotSegment(segmentNumber,passedFrame):
segmentSize = 3 #take 3 rows at a time
frame = passedFrame[(segmentNumber*segmentSize):(segmentNumber*segmentSize + segmentSize)] #slice the input DF
# ensure that all chunks are identically formatted after the pivot by appending a dummy DF with all possible category values
span = pd.DataFrame()
span['catid'] = range(1,5+1)
span['shipmentid']=1
span['qty']=0
frame = frame.append(span)
return frame.pivot_table(['qty'],index=['shipmentid'],columns='catid', \
aggfunc='sum',fill_value=0).reset_index()
def createStore():
store = pd.HDFStore('testdata.h5')
return store
segMin = 0
segMax = 4
store = createStore()
frame = loadFrame()
print('Printing Frame')
print(frame)
print(frame.info())
for i in range(segMin,segMax):
segment = pivotSegment(i,frame)
store.append('data',frame[(i*3):(i*3 + 3)])
store.append('pivotedData',segment)
print('\nPrinting Store')
print(store)
print('\nPrinting Store: data')
print(store['data'])
print('\nPrinting Store: pivotedData')
print(store['pivotedData'])
print('**************')
print(store['pivotedData'].set_index('shipmentid').groupby('shipmentid',level=0).sum())
print('**************')
print('$$$')
for df in store.select('pivotedData',chunksize=3):
print(df.set_index('shipmentid').groupby('shipmentid',level=0).sum())
print('$$$')
store['pivotedAndSummed'] = sum((df.set_index('shipmentid').groupby('shipmentid',level=0).sum() for df in store.select('pivotedData',chunksize=3)))
print('\nPrinting Store: pivotedAndSummed')
print(store['pivotedAndSummed'])
store.close()
os.remove('testdata.h5')
print('closed')
</code></pre>
|
<p>You could do the appending with HDF5/pytables. This keeps it out of RAM.</p>
<p>Use the <a href="http://pandas.pydata.org/pandas-docs/dev/io.html#table-format" rel="noreferrer">table format</a>:</p>
<pre><code>store = pd.HDFStore('store.h5')
for ...:
...
chunk # the chunk of the DataFrame (which you want to append)
store.append('df', chunk)
</code></pre>
<p>Now you can read it in as a DataFrame in one go (assuming this DataFrame can fit in memory!):</p>
<pre><code>df = store['df']
</code></pre>
<p>You can also query, to get only subsections of the DataFrame.</p>
<p>Aside: You should also buy more RAM, it's cheap.</p>
<hr>
<p>Edit: you can groupby/sum from the store <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#iterator" rel="noreferrer">iteratively</a> since this "map-reduces" over the chunks:</p>
<pre><code># note: this doesn't work, see below
sum(df.groupby().sum() for df in store.select('df', chunksize=50000))
# equivalent to (but doesn't read in the entire frame)
store['df'].groupby().sum()
</code></pre>
<p>Edit2: Using sum as above doesn't actually work in pandas 0.16 (I thought it did in 0.15.2), instead you can use <a href="https://docs.python.org/2/library/functions.html#reduce" rel="noreferrer"><code>reduce</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html" rel="noreferrer"><code>add</code></a>:</p>
<pre><code>reduce(lambda x, y: x.add(y, fill_value=0),
(df.groupby().sum() for df in store.select('df', chunksize=50000)))
</code></pre>
<p><em>In python 3 you must <a href="https://docs.python.org/3.0/library/functools.html#functools.reduce" rel="noreferrer">import reduce from functools</a>.</em></p>
<p>Perhaps it's more pythonic/readable to write this as:</p>
<pre><code>chunks = (df.groupby().sum() for df in store.select('df', chunksize=50000))
res = next(chunks) # will raise if there are no chunks!
for c in chunks:
res = res.add(c, fill_value=0)
</code></pre>
<p><em>If performance is poor / if there are a large number of new groups then it may be preferable to start the res as zero of the correct size (by getting the unique group keys e.g. by looping through the chunks), and then add in place.</em></p>
|
python|python-3.x|pandas|pivot-table
| 16
|
374,396
| 62,357,239
|
Add attention layer to Seq2Seq model
|
<p>I have built an encoder-decoder Seq2Seq model. I want to add an attention layer to it. I tried adding an attention layer <a href="https://www.kaggle.com/residentmario/seq-to-seq-rnn-models-attention-teacher-forcing" rel="nofollow noreferrer">through this</a> but it didn't help.</p>
<p>Here is my initial code without attention</p>
<pre><code># Encoder
encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(num_encoder_tokens, latent_dim, mask_zero = True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
</code></pre>
<p>And this is the code after I added the attention layer to the decoder (the encoder layer is the same as in the initial code):</p>
<pre><code># Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
attention = dot([decoder_lstm, encoder_lstm], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_lstm], axes=[2,1])
decoder_combined_context = concatenate([context, decoder_lstm])
decoder_outputs, _, _ = decoder_combined_context(dec_emb,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
</code></pre>
<p>While doing this, I got an error </p>
<pre><code> Layer dot_1 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.recurrent.LSTM'>. Full input: [<keras.layers.recurrent.LSTM object at 0x7f8f77e2f3c8>, <keras.layers.recurrent.LSTM object at 0x7f8f770beb70>]. All inputs to the layer should be tensors.
</code></pre>
<p>Can someone please help in fitting an attention layer in this architecture?</p>
|
<p>The dot products need to be computed on tensor outputs... in the encoder you correctly define <code>encoder_outputs</code>; in the decoder you have to add <code>decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)</code>.</p>
<p>the dot products now are</p>
<pre><code>attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_outputs], axes=[2,1])
</code></pre>
<p>The concatenation doesn't take initial_states; you have to define them in your RNN layer instead: <code>decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)</code></p>
<p>here the full example</p>
<p>ENCODER + DECODER</p>
<pre><code>from keras.layers import Input, Embedding, LSTM, Dense, Activation, dot, concatenate
from keras.models import Model

# dummy variables
num_encoder_tokens = 30
num_decoder_tokens = 10
latent_dim = 100
encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(num_encoder_tokens, latent_dim, mask_zero = True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
</code></pre>
<p>DECODER w\ ATTENTION</p>
<pre><code># Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)
attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_outputs], axes=[2,1])
decoder_outputs = concatenate([context, decoder_outputs])
decoder_dense = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_dense)
model.summary()
</code></pre>
|
python-3.x|tensorflow|keras|nlp|machine-translation
| 5
|
374,397
| 62,087,703
|
Need to add missing value from pandas column
|
<p>I have two pandas dataframes with one common column, but they do not have the same values in it. I wish to get the missing values into the other dataframe's common column.</p>
<pre><code>df1
name mobile email
abcd 992293 abcd@abcd.com
efgh 687678 efgh@efgh.com
ijkl 7878678 ijkl@ijkl.com
mnop 678687 mnop@mnop.com
qrst 6876 qrst@qrst.com
</code></pre>
<pre><code>df2
name age
abcd 22
efgh 12
</code></pre>
<pre><code>Expected output
name age
abcd 22
efgh 12
ijkl
mnop
qrst
</code></pre>
|
<p>This is a simple case of joining two data frames on a common key:</p>
<pre><code>pd.merge(df1, df2, on='name',how='left')
</code></pre>
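<p>If you only want the <code>name</code> and <code>age</code> columns (as in the expected output), and blanks instead of NaN for the unmatched rows, a sketch:</p>
<pre><code>out = pd.merge(df1[['name']], df2, on='name', how='left')
out['age'] = out['age'].fillna('')   # unmatched names get an empty string instead of NaN
</code></pre>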
|
python|pandas
| 1
|
374,398
| 62,239,111
|
how do I create a new column out of a dictionary's sub string on a pandas dataframe
|
<p>I have the following repo for the files: <a href="https://github.com/Glarez/learning.git" rel="nofollow noreferrer">https://github.com/Glarez/learning.git</a></p>
<p><a href="https://i.stack.imgur.com/zjO5h.png" rel="nofollow noreferrer">dataframe</a></p>
<p>I need to create a column with the bold part of that string under the params column: "ufield_18":<strong>"ONLY"</strong>. I don't see how I can get that, since I'm learning to code from scratch. The solution to this would be nice, but what I would really appreciate is for you to point me in the right direction so I can get to the answer myself. THANKS!</p>
|
<p>Since you do not want the exact answer. I will provide you one of the ways to achieve this:</p>
<ol>
<li>filter the params column into a dictionary variable</li>
<li>create a loop to access the keys of the dictionary</li>
<li>append it to the pandas df you have (df[key] = np.nan) - Make sure you add some values while appending the column if your df already has some rows or just add np.nan </li>
</ol>
<p>Note: np is the numpy library, which needs to be imported. A rough sketch of these steps is shown below.</p>
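<p>In this sketch, the assumption that <code>params</code> holds JSON-formatted strings (and the column/key names themselves) are illustrative only:</p>
<pre><code>import json
import numpy as np

for i, raw in df['params'].items():
    params_dict = json.loads(raw)            # step 1: string -> dictionary
    for key, value in params_dict.items():   # step 2: walk the keys
        if key not in df.columns:            # step 3: create the column once
            df[key] = np.nan
        df.loc[i, key] = value
</code></pre>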
|
python|pandas
| 0
|
374,399
| 62,455,255
|
How do I turn a Tensorflow Dataset into a Numpy Array?
|
<p>I'm interested in a Tensorflow Dataset, but I want to manipulate it using <code>numpy</code>. Is it possible to turn this <code>PrefetchDataset</code> into an array?</p>
<pre><code>import tensorflow_datasets as tfds
import numpy as np
dataset = tfds.load('mnist')
</code></pre>
|
<p>Since you didn't specify <code>split</code> or <code>as_supervised</code>, <code>tfds</code> will return a dictionary with <code>train</code> and <code>test</code> set. Since <code>as_supervised</code> defaults to <code>False</code>, the <code>image</code> and <code>label</code> will also be separate in a dictionary. This is what it will look like:</p>
<pre><code>{'test': <PrefetchDataset shapes: {image: (28, 28, 1), label: ()},
types: {image: tf.uint8, label: tf.int64}>,
'train': <PrefetchDataset shapes: {image: (28, 28, 1), label: ()},
types: {image: tf.uint8, label: tf.int64}>}
</code></pre>
<p>So here's how you can turn it into a <code>numpy</code> array:</p>
<pre><code>import tensorflow_datasets as tfds
import numpy as np
dataset = tfds.load('mnist')
train, test = dataset['train'], dataset['test']
train_numpy = np.vstack(tfds.as_numpy(train))
test_numpy = np.vstack(tfds.as_numpy(test))
X_train = np.array(list(map(lambda x: x[0]['image'], train_numpy)))
y_train = np.array(list(map(lambda x: x[0]['label'], train_numpy)))
X_test = np.array(list(map(lambda x: x[0]['image'], test_numpy)))
y_test = np.array(list(map(lambda x: x[0]['label'], test_numpy)))
</code></pre>
<p>You might want to set <code>as_supervised=True</code>, which will return <code>tuple</code> instead of a dictionary for <code>image</code> and <code>label</code>.</p>
<pre><code>[<PrefetchDataset shapes: ((28, 28, 1), ()), types: (tf.uint8, tf.int64)>]
</code></pre>
<p>In this case, you will need to select the 'image' and 'label' using indexing like <code>[0]</code>. So here's how you can turn it into a <code>numpy</code> array:</p>
<pre><code>import tensorflow_datasets as tfds
import numpy as np
dataset = tfds.load('mnist', split=['test'], as_supervised=True)
array = np.vstack(tfds.as_numpy(dataset[0]))
X_train = np.array(list(map(lambda x: x[0], array)))
y_train = np.array(list(map(lambda x: x[1], array)))
</code></pre>
<p>Proof:</p>
<pre><code>X_train.shape
</code></pre>
<pre><code>(10000, 28, 28, 1)
</code></pre>
|
python|arrays|numpy|tensorflow
| 1
|