| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
378,100
| 26,523,324
|
pandas AttributeError: 'unicode' object has no attribute 'view'
|
<p>This is a killer problem that probably has a simple solution for a pandas newbie like me:</p>
<p>I'm trying to replace one record of a pandas DataFrame (df) with the latest version of that label (found in a separate DataFrame, latest_version).</p>
<pre><code>df.ix[label] = latest_version.ix[label]
</code></pre>
<p>The error:</p>
<pre><code>AttributeError: 'unicode' object has no attribute 'view'
</code></pre>
<p>df itself is large and complex (and proprietary) so I'd like to avoid posting it if I can; I'm hoping there's something easy I'm missing but I can't figure it out.</p>
<p>EDIT: output of df.info() and latest_version.info()</p>
<pre><code>ipdb> df.info()
<class 'pandas.core.frame.DataFrame'>
Index: 7 entries, A to G
Data columns (total 73 columns):
Column 0 7 non-null object
Column 1 7 non-null object
Column 2 7 non-null object
Column 3 7 non-null object
Column 4 7 non-null object
Column 5 7 non-null float64
Column 6 1 non-null object
Column 7 7 non-null object
Column 8 7 non-null object
Column 9 6 non-null datetime64[ns]
Column 10 0 non-null object
Column 11 0 non-null object
Column 12 5 non-null object
Column 13 0 non-null object
Column 14 0 non-null object
Column 15 6 non-null datetime64[ns]
Column 16 0 non-null object
Column 17 0 non-null object
Column 18 0 non-null object
Column 19 0 non-null object
Column 20 0 non-null object
Column 21 0 non-null object
Column 22 0 non-null object
Column 23 0 non-null object
Column 24 0 non-null object
Column 25 0 non-null object
Column 26 0 non-null object
Column 27 0 non-null object
Column 28 0 non-null object
Column 29 0 non-null object
Column 30 0 non-null object
Column 31 0 non-null object
Column 32 0 non-null object
Column 33 0 non-null object
Column 34 0 non-null object
Column 35 0 non-null object
Column 36 0 non-null object
Column 37 4 non-null object
Column 38 6 non-null object
Column 39 4 non-null object
Column 40 0 non-null object
Column 41 0 non-null object
Column 42 0 non-null object
Column 43 6 non-null object
Column 44 0 non-null object
Column 45 6 non-null object
Column 46 0 non-null object
Column 47 4 non-null object
Column 48 0 non-null object
Column 49 4 non-null object
Column 50 0 non-null object
Column 51 0 non-null object
Column 52 0 non-null object
Column 53 0 non-null object
Column 54 0 non-null object
Column 55 0 non-null object
Column 56 0 non-null object
Column 57 0 non-null object
Column 58 0 non-null object
Column 59 0 non-null object
Column 60 0 non-null object
Column 61 0 non-null object
Column 62 0 non-null object
Column 63 0 non-null object
Column 64 0 non-null object
Column 65 0 non-null object
Column 66 0 non-null object
Column 67 0 non-null object
Column 68 0 non-null object
Column 69 0 non-null object
Column 70 0 non-null object
Column 71 0 non-null object
Column 72 0 non-null object
dtypes: datetime64[ns](2), float64(1), object(70)
ipdb> latest_version.info()
<class 'pandas.core.frame.DataFrame'>
Index: 4 entries, A to D
Data columns (total 73 columns):
Column 0 4 non-null object
Column 1 4 non-null object
Column 2 4 non-null object
Column 3 4 non-null object
Column 4 4 non-null object
Column 5 4 non-null int64
Column 6 4 non-null object
Column 7 4 non-null object
Column 8 4 non-null object
Column 9 4 non-null object
Column 10 4 non-null object
Column 11 4 non-null object
Column 12 4 non-null object
Column 13 4 non-null object
Column 14 4 non-null object
Column 15 4 non-null object
Column 16 3 non-null object
Column 17 4 non-null object
Column 18 4 non-null object
Column 19 4 non-null object
Column 20 3 non-null object
Column 21 3 non-null object
Column 22 4 non-null object
Column 23 4 non-null object
Column 24 4 non-null object
Column 25 4 non-null object
Column 26 4 non-null object
Column 27 4 non-null object
Column 28 4 non-null object
Column 29 4 non-null object
Column 30 4 non-null object
Column 31 4 non-null object
Column 32 4 non-null object
Column 33 4 non-null object
Column 34 4 non-null object
Column 35 4 non-null object
Column 36 4 non-null object
Column 37 4 non-null object
Column 38 4 non-null object
Column 39 4 non-null object
Column 40 4 non-null object
Column 41 4 non-null object
Column 42 4 non-null object
Column 43 4 non-null object
Column 44 4 non-null object
Column 45 4 non-null float64
Column 46 4 non-null object
Column 47 4 non-null object
Column 48 4 non-null object
Column 49 4 non-null object
Column 50 4 non-null object
Column 51 4 non-null object
Column 52 4 non-null object
Column 53 4 non-null object
Column 54 4 non-null object
Column 55 4 non-null object
Column 56 1 non-null object
Column 57 1 non-null object
Column 58 4 non-null object
Column 59 4 non-null object
Column 60 4 non-null object
Column 61 4 non-null object
Column 62 4 non-null object
Column 63 4 non-null object
Column 64 4 non-null object
Column 65 4 non-null object
Column 66 4 non-null object
Column 67 4 non-null object
Column 68 4 non-null object
Column 69 4 non-null object
Column 70 4 non-null object
Column 71 4 non-null object
Column 72 4 non-null object
dtypes: float64(1), int64(1), object(71)
</code></pre>
<p>Further edit (in response to Ed): Here are the tables with just the columns that have different types:</p>
<pre><code>ipdb> latest_version.ix[:,[5,9,15]]
line_number entry_date entry_ref_a
unique_index
NEW/AAAAAAAAAAAAAAAAAAA 0 2014-12-30 2015-01-14
NEW/AAAAAAAAAAAAAAAAAAB 1 2014-12-30
NEW/AAAAAAAAAAAAAAAAAAC 2 2014-12-30
ipdb> df.ix[:,[5,9,15]]
line_number entry_date \
unique_index
OLD/204442 0 1419897600000000000
OLD/343278 1 1419897600000000000
OLD/359628 2 1419897600000000000
NEW/AAAAAAAAAAAAAAAAAAA 0 2014-12-30
entry_ref_a
unique_index
OLD/204442 1421193600000000000
OLD/343278 1421193600000000000
OLD/359628 1422230400000000000
NEW/AAAAAAAAAAAAAAAAAAA 2015-01-14
</code></pre>
<p>Definitely lends credence to the idea that there's a type mismatch issue here...</p>
|
<p>So your problem here seems to be that you had a mismatch on the dtypes between the 2 dfs you were trying to assign to and from:</p>
<pre><code>df dtypes: datetime64[ns](2), float64(1), object(70)
</code></pre>
<p>whilst </p>
<pre><code>latest_version dtypes: float64(1), int64(1), object(71)
</code></pre>
<p>From the output we can see that some of the clashing columns are datetimes in one df, whilst they are int64s in the corresponding columns of the other df.</p>
<p>You can convert the ill-formed columns to datetime by doing:</p>
<pre><code>df['entry_date'] = pd.to_datetime(df['entry_date'])
</code></pre>
<p>and likewise for <code>entry_ref_a</code></p>
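<p>For instance, a minimal sketch that coerces the clashing columns in both frames (assuming <code>entry_date</code> and <code>entry_ref_a</code> are the offending columns; <code>pd.to_datetime</code> accepts date strings, int64 nanosecond timestamps and already-converted datetimes alike):</p>
<pre><code>import pandas as pd

for frame in (df, latest_version):
    for col in ['entry_date', 'entry_ref_a']:
        frame[col] = pd.to_datetime(frame[col])
</code></pre>
<p>Once the dtypes of the two frames agree, the original row assignment should no longer raise the <code>'view'</code> error.</p>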
|
python|pandas
| 1
|
378,101
| 39,220,929
|
python convolution with different dimension
|
<p>I'm trying to implement a convolutional neural network in Python.<br>
However, when I use signal.convolve or np.convolve, it cannot do the convolution on X and Y (X is 3-d, Y is 2-d). X holds the training minibatches and Y is a filter.
I don't want to do a for loop over every training vector like:</p>
<pre><code>for i in xrange(X.shape[2]):
result = signal.convolve(X[:,:,i], Y, 'valid')
....
</code></pre>
<p>So, is there any function I can use to do convolution efficiently?</p>
|
<p>Scipy implements standard N-dimensional convolutions, so that the matrix to be convolved and the kernel are both N-dimensional.</p>
<p>A quick fix would be to add an extra dimension to <code>Y</code> so that <code>Y</code> is 3-Dimensional:</p>
<pre><code>result = signal.convolve(X, Y[..., None], 'valid')
</code></pre>
<p>I'm assuming here that the last axis corresponds to the image index as in your example <code>[width, height, image_idx]</code> (or <code>[height, width, image_idx]</code>). If it is the other way around and the images are indexed in the first axis (as it is more common in C-ordering arrays) you should replace <code>Y[..., None]</code> with <code>Y[None, ...]</code>.</p>
<p>The line <code>Y[..., None]</code> will add an extra axis to <code>Y</code>, making it 3-dimensional <code>[kernel_width, kernel_height, 1]</code> and thus, converting it to a valid 3-Dimensional convolution kernel.</p>
<p>NOTE: This assumes that all your input mini-batches have the same <code>width x height</code>, which is standard in CNN's.</p>
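<p>For example, a minimal sketch with illustrative shapes only (the image index is the last axis, as assumed above):</p>
<pre><code>import numpy as np
from scipy import signal

X = np.random.randn(28, 28, 32)   # a minibatch of 32 images, each 28x28
Y = np.random.randn(5, 5)         # a single 5x5 filter

out = signal.convolve(X, Y[..., None], 'valid')
print(out.shape)                  # (24, 24, 32): each image convolved with the same kernel
</code></pre>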
<hr>
<p>EDIT: Some timings as @Divakar suggested.</p>
<p>The testing framework is setup as follows:</p>
<pre><code>def test(S, N, K):
""" S: image size, N: num images, K: kernel size"""
a = np.random.randn(S, S, N)
b = np.random.randn(K, K)
valid = [slice(K//2, -K//2+1), slice(K//2, -K//2+1)]
%timeit signal.convolve(a, b[..., None], 'valid')
%timeit signal.fftconvolve(a, b[..., None], 'valid')
%timeit ndimage.convolve(a, b[..., None])[valid]
</code></pre>
<p>Find below the tests for different configurations:</p>
<ul>
<li><p>Varying image size <code>S</code>:</p>
<pre><code>>>> test(100, 50, 11) # 100x100 images
1 loop, best of 3: 909 ms per loop
10 loops, best of 3: 116 ms per loop
10 loops, best of 3: 54.9 ms per loop
>>> test(1000, 50, 11) # 1000x1000 images
1 loop, best of 3: 1min 51s per loop
1 loop, best of 3: 16.5 s per loop
1 loop, best of 3: 5.66 s per loop
</code></pre></li>
<li><p>Varying number of images <code>N</code>:</p>
<pre><code>>>> test(100, 5, 11) # 5 images
10 loops, best of 3: 90.7 ms per loop
10 loops, best of 3: 26.7 ms per loop
100 loops, best of 3: 5.7 ms per loop
>>> test(100, 500, 11) # 500 images
1 loop, best of 3: 9.75 s per loop
1 loop, best of 3: 888 ms per loop
1 loop, best of 3: 727 ms per loop
</code></pre></li>
<li><p>Varying kernel size <code>K</code>:</p>
<pre><code>>>> test(100, 50, 5) # 5x5 kernels
1 loop, best of 3: 217 ms per loop
10 loops, best of 3: 100 ms per loop
100 loops, best of 3: 11.4 ms per loop
>>> test(100, 50, 31) # 31x31 kernels
1 loop, best of 3: 4.39 s per loop
1 loop, best of 3: 220 ms per loop
1 loop, best of 3: 560 ms per loop
</code></pre></li>
</ul>
<p>So, in short, <code>ndimage.convolve</code> is always faster, except when the kernel size is very large (as <code>K = 31</code> in the last test).</p>
|
python|numpy|scipy|deep-learning
| 5
|
378,102
| 39,359,478
|
Print Text Representation of Tensorflow (tf-slim) Model
|
<p>Is there any way to print a textual representation of a tf-slim model along the lines of what <a href="https://gist.github.com/solitaire/441a33e0eaa3c7fc959f#file-neural-net-info" rel="nofollow">nolearn offers</a>:</p>
<pre><code>## Layer information
name size total cap.Y cap.X cov.Y cov.X filter Y filter X field Y field X
-------------- ---------- ------- ------- ------- ------- ------- ---------- ---------- --------- ---------
input 1x144x192 27648 100.00 100.00 100.00 100.00 144 192 144 192
Conv2DLayer 12x144x192 331776 100.00 100.00 2.08 1.56 3 3 3 3
Conv2DLayer 12x144x192 331776 60.00 60.00 3.47 2.60 3 3 5 5
MaxPool2DLayer 12x72x96 82944 60.00 60.00 3.47 2.60 3 3 5 5
...
DenseLayer 7 7 100.00 100.00 100.00 100.00 144 192 144 192
</code></pre>
<p>EDIT:</p>
<p>I can use something like this to print the info for a given layer:</p>
<pre><code>print("%s: %s" % (layer.name, layer.get_shape()))
</code></pre>
<p>What I would need to complete the table is some way to crawl or walk up the "layer stack" (i.e. get from a given layer to its incoming / input layer(s)).</p>
|
<p>It is not the textual representation that you seek, but maybe TensorBoard will suffice? You can visualize the whole computation graph and monitor your model with this tool.</p>
<p><a href="https://www.tensorflow.org/how_tos/summaries_and_tensorboard/" rel="nofollow noreferrer">https://www.tensorflow.org/how_tos/summaries_and_tensorboard/</a></p>
|
tensorflow|tf-slim
| 0
|
378,103
| 39,375,348
|
sum vs np.nansum weirdness while summing columns with same name on a pandas dataframe - python
|
<p>Taking inspiration from this discussion here on SO (<a href="https://stackoverflow.com/questions/13078751/merge-columns-within-a-dataframe-that-have-the-same-name">Merge Columns within a DataFrame that have the Same Name</a>), I tried the method suggested and, while it works when using the function <code>sum()</code>, it doesn't when I am using <code>np.nansum</code>:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(100,4), columns=['a', 'a','b','b'], index=pd.date_range('2011-1-1', periods=100))
print(df.head(3))
</code></pre>
<p><code>sum()</code> case:</p>
<pre><code>print(df.groupby(df.columns, axis=1).apply(sum, axis=1).head(3))
a b
2011-01-01 1.328933 1.678469
2011-01-02 1.878389 1.343327
2011-01-03 0.964278 1.302857
</code></pre>
<p><code>np.nansum()</code> case:</p>
<pre><code>print(df.groupby(df.columns, axis=1).apply(np.nansum, axis=1).head(3))
a [1.32893299939, 1.87838886222, 0.964278430632,...
b [1.67846885234, 1.34332662587, 1.30285727348, ...
dtype: object
</code></pre>
<p>any idea why?</p>
|
<p>The issue is that <code>np.nansum</code> converts its input to a numpy array, so it effectively loses the column information (<code>sum</code> doesn't do this). As a result, the <code>groupby</code> doesn't get back any column information when constructing the output, so the output is just a Series of numpy arrays.</p>
<p>Specifically, <a href="https://github.com/numpy/numpy/blob/v1.11.0/numpy/lib/nanfunctions.py#L512" rel="nofollow">the source code for <code>np.nansum</code></a> calls the <code>_replace_nan</code> function. In turn, <a href="https://github.com/numpy/numpy/blob/v1.11.0/numpy/lib/nanfunctions.py#L60-L62" rel="nofollow">the source code for <code>_replace_nan</code></a> checks if the input is an array, and converts it to one if it's not.</p>
<p>All hope isn't lost though. You can easily replicate <code>np.nansum</code> with Pandas functions. Specifically use <code>sum</code> followed by <code>fillna</code>:</p>
<pre><code>df.groupby(df.columns, axis=1).sum().fillna(0)
</code></pre>
<p>The <code>sum</code> should ignore <code>NaN</code>'s and just sum the non-null values. The only case you'll get back a <code>NaN</code> is if all the values attempting to be summed are <code>NaN</code>, which is why <code>fillna</code> is required. Note that you could also do the <code>fillna</code> before the <code>groupby</code>, i.e. <code>df.fillna(0).groupby...</code>.</p>
<p>If you really want to use <code>np.nansum</code>, you can recast as <code>pd.Series</code>. This will likely impact performance, as constructing a Series can be relatively expensive, and you'll be doing it multiple times:</p>
<pre><code>df.groupby(df.columns, axis=1).apply(lambda x: pd.Series(np.nansum(x, axis=1), x.index))
</code></pre>
<p><strong>Example Computations</strong></p>
<p>For some example computations, I'll be using the following simple DataFrame, which includes <code>NaN</code> values (your example data doesn't):</p>
<pre><code>df = pd.DataFrame([[1,2,2,np.nan,4],[np.nan,np.nan,np.nan,3,3],[np.nan,np.nan,-1,2,np.nan]], columns=list('aaabb'))
a a a b b
0 1.0 2.0 2.0 NaN 4.0
1 NaN NaN NaN 3.0 3.0
2 NaN NaN -1.0 2.0 NaN
</code></pre>
<p>Using <code>sum</code> without <code>fillna</code>:</p>
<pre><code>df.groupby(df.columns, axis=1).sum()
a b
0 5.0 4.0
1 NaN 6.0
2 -1.0 2.0
</code></pre>
<p>Using <code>sum</code> and <code>fillna</code>:</p>
<pre><code>df.groupby(df.columns, axis=1).sum().fillna(0)
a b
0 5.0 4.0
1 0.0 6.0
2 -1.0 2.0
</code></pre>
<p>Comparing to the fixed <code>np.nansum</code> method:</p>
<pre><code>df.groupby(df.columns, axis=1).apply(lambda x: pd.Series(np.nansum(x, axis=1), x.index))
a b
0 5.0 4.0
1 0.0 6.0
2 -1.0 2.0
</code></pre>
|
pandas|dataframe|group-by|multiple-columns
| 2
|
378,104
| 39,059,371
|
Can numpy's argsort give equal element the same rank?
|
<p>I want to get the rank of each element, so I use <code>argsort</code> in <code>numpy</code>:</p>
<pre><code>np.argsort(np.array((1,1,1,2,2,3,3,3,3)))
array([0, 1, 2, 3, 4, 5, 6, 7, 8])
</code></pre>
<p>it gives equal elements different ranks; can I get the same rank for them, like:</p>
<pre><code>array([0, 0, 0, 3, 3, 5, 5, 5, 5])
</code></pre>
|
<p>If you don't mind a dependency on scipy, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html" rel="noreferrer"><code>scipy.stats.rankdata</code></a>, with <code>method='min'</code>:</p>
<pre><code>In [14]: a
Out[14]: array([1, 1, 1, 2, 2, 3, 3, 3, 3])
In [15]: from scipy.stats import rankdata
In [16]: rankdata(a, method='min')
Out[16]: array([1, 1, 1, 4, 4, 6, 6, 6, 6])
</code></pre>
<p>Note that <code>rankdata</code> starts the ranks at 1. To start at 0, subtract 1 from the result:</p>
<pre><code>In [17]: rankdata(a, method='min') - 1
Out[17]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
</code></pre>
<hr>
<p>If you don't want the scipy dependency, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="noreferrer"><code>numpy.unique</code></a> to compute the ranking. Here's a function that computes the same result as <code>rankdata(x, method='min') - 1</code>:</p>
<pre><code>import numpy as np
def rankmin(x):
u, inv, counts = np.unique(x, return_inverse=True, return_counts=True)
csum = np.zeros_like(counts)
csum[1:] = counts[:-1].cumsum()
return csum[inv]
</code></pre>
<p>For example,</p>
<pre><code>In [137]: x = np.array([60, 10, 0, 30, 20, 40, 50])
In [138]: rankdata(x, method='min') - 1
Out[138]: array([6, 1, 0, 3, 2, 4, 5])
In [139]: rankmin(x)
Out[139]: array([6, 1, 0, 3, 2, 4, 5])
In [140]: a = np.array([1,1,1,2,2,3,3,3,3])
In [141]: rankdata(a, method='min') - 1
Out[141]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
In [142]: rankmin(a)
Out[142]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
</code></pre>
<hr>
<p>By the way, a single call to <code>argsort()</code> does not give ranks. You can find an assortment of approaches to ranking in the question <a href="https://stackoverflow.com/questions/5284646/rank-items-in-an-array-using-python-numpy">Rank items in an array using Python/NumPy</a>, including how to do it using <code>argsort()</code>.</p>
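<p>For completeness, the common argsort-of-argsort trick gives <em>ordinal</em> ranks (ties are broken by position), which is not what is asked for here but illustrates the difference:</p>
<pre><code>In [150]: a = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])
In [151]: a.argsort().argsort()
Out[151]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
</code></pre>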
|
python|sorting|numpy
| 17
|
378,105
| 39,030,366
|
NumPy array element not getting updated
|
<p>I have a NumPy array as follows:</p>
<pre><code>supp = np.array([['A', '5', '0'], ['B', '3', '0'], ['C', '4', '0'], ['D', '1', '0'], ['E', '2', '0']])
</code></pre>
<p>Now, I want to update row[2] to row[1]/6 for each row. I'm using:</p>
<pre><code>for row in supp:
    row[2] = row[1].astype(int) / 6
</code></pre>
<p>But row[2] seems to remain unaffected..</p>
<pre><code>>>> supp
array([['A', '5', '0'],
['B', '3', '0'],
['C', '4', '0'],
['D', '1', '0'],
['E', '2', '0']],
dtype='<U1')
</code></pre>
<p>I'm using Python 3.5.2 and NumPy 1.11.1.</p>
<p>Any help is appreciated. Thanks in advance</p>
|
<p>The problem is that an <code>np.array</code> has only one dtype, which here is automatically inferred to be fixed-width strings of length 1 (<code>supp.dtype == '<U1'</code> on Python 3, <code>'|S1'</code> on Python 2) since your input contains only one-character strings. So numpy will silently truncate your updated values to strings of length <code>1</code>, <code>'0'</code>s in your case. Force the array to be of generic type <code>object</code> and then it will be able to hold strings and ints or floats or anything else:</p>
<pre><code>supp = np.array([['A', '5', '0'], ['B', '3', '0'], ['C', '4', '0'], ['D', '1', '0'], ['E', '2', '0']])
supp = supp.astype(object)
for row in supp:
row[2] = int(row[1]) / 6
</code></pre>
<p>result:</p>
<pre><code>[['A' '5' 0.8333333333333334]
['B' '3' 0.5]
['C' '4' 0.6666666666666666]
['D' '1' 0.16666666666666666]
['E' '2' 0.3333333333333333]]
</code></pre>
<p>Alternatively you can also use the <code>dtype</code> <code>'|Sn'</code> with a larger value of <code>n</code>:</p>
<pre><code>supp = np.array([['A', '5', '0'], ['B', '3', '0'], ['C', '4', '0'], ['D', '1', '0'], ['E', '2', '0']])
supp = supp.astype('|S5')
for row in supp:
row[2] = int(row[1]) / 6
</code></pre>
<p>result: </p>
<pre><code>[['A' '5' '0.833']
['B' '3' '0.5']
['C' '4' '0.666']
['D' '1' '0.166']
['E' '2' '0.333']]
</code></pre>
<p>and in this case you still have only strings, if that is what you want.</p>
|
python|arrays|python-3.x|numpy
| 5
|
378,106
| 39,278,042
|
Storing pure python datetime.datetime in pandas DataFrame
|
<p>Since <code>matplotlib</code> doesn't support <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow noreferrer">either</a><code>pandas.TimeStamp</code> <a href="https://stackoverflow.com/questions/22048792/how-do-i-display-dates-when-plotting-in-matplotlib-pyplot">or</a><code>numpy.datetime64</code>, and there are <a href="https://stackoverflow.com/questions/27472548/pandas-scatter-plotting-datetime">no simple workarounds</a>, I decided to convert a native pandas date column into a pure python <code>datetime.datetime</code> so that scatter plots are easier to make.</p>
<p>However:</p>
<pre><code>t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31')]})
t.dtypes # date datetime64[ns], as expected
pure_python_datetime_array = t.date.dt.to_pydatetime() # works fine
t['date'] = pure_python_datetime_array # doesn't do what I hoped
t.dtypes # date datetime64[ns] as before, no luck changing it
</code></pre>
<p>I'm guessing pandas auto-converts the pure python <code>datetime</code> produced by <code>to_pydatetime</code> into its native format. I guess it's convenient behavior in general, but is there a way to override it?</p>
|
<p>The use of <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#converting-to-python-datetimes" rel="nofollow noreferrer">to_pydatetime()</a> is correct.</p>
<pre><code>In [87]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')]})
In [88]: t.date.dt.to_pydatetime()
Out[88]:
array([datetime.datetime(2012, 12, 31, 0, 0),
datetime.datetime(2013, 12, 31, 0, 0)], dtype=object)
</code></pre>
<p>When you assign it back to <code>t.date</code>, it automatically converts it back to <code>datetime64</code></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#overview" rel="nofollow noreferrer">pandas.Timestamp</a> is a datetime subclass anyway :)</p>
<p>One way to do the plot is to convert the datetime to int64:</p>
<pre><code>In [117]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')], 'sample_data': [1, 2]})
In [118]: t['date_int'] = t.date.astype(np.int64)
In [119]: t
Out[119]:
date sample_data date_int
0 2012-12-31 1 1356912000000000000
1 2013-12-31 2 1388448000000000000
In [120]: t.plot(kind='scatter', x='date_int', y='sample_data')
Out[120]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3c852662d0>
In [121]: plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/oOWCU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oOWCU.png" alt="enter image description here"></a></p>
<p>Another workaround is to not use scatter, but a point-marker plot:</p>
<pre><code>In [126]: t.plot(x='date', y='sample_data', style='.')
Out[126]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3c850f5750>
</code></pre>
<p>And the last workaround:</p>
<pre><code>In [141]: import matplotlib.pyplot as plt
In [142]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')], 'sample_data': [100, 20000]})
In [143]: t
Out[143]:
date sample_data
0 2012-12-31 100
1 2013-12-31 20000
In [144]: plt.scatter(t.date.dt.to_pydatetime() , t.sample_data)
Out[144]: <matplotlib.collections.PathCollection at 0x7f3c84a10510>
In [145]: plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Xh4ZE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xh4ZE.png" alt="enter image description here"></a></p>
<p>This has an issue at <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow noreferrer">github</a>, which is open as of now.</p>
|
python|python-3.x|datetime|pandas
| 4
|
378,107
| 39,313,425
|
Extracting the last problem_id for every user
|
<p>I have a dataframe with the following columns: <code>['user_id', 'problem_id', 'timestamp']</code>. So basically who solved what and when. Clearly there are users who solved many many problems.</p>
<p>I want to extract the last problem solved by every user. My first approach was to group by user_id and get the maximum: <code>df_s.groupby('user_id').max()[['problem_id']]</code>, but after looking at it more closely I realized that it will just return me the highest lexicographically ordered problem solved by the user.</p>
<p>I clearly could also iterate over the groupby aggregation, sort the dataframe and take the first problem, but I'm hoping for a quick one- or few-liner.</p>
|
<p>If your <code>timestamp</code> sorts naturally, i.e. latest values are last, then:</p>
<pre><code>df_s.sort_values('timestamp').groupby('user_id').last()
</code></pre>
<p>Should give you what you want as <code>groupby</code> retains the order of its input for grouping...</p>
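<p>A toy example (hypothetical data) to illustrate:</p>
<pre><code>import pandas as pd

df_s = pd.DataFrame({'user_id':    [1, 1, 2, 2],
                     'problem_id': ['p3', 'p1', 'p9', 'p2'],
                     'timestamp':  pd.to_datetime(['2016-01-01', '2016-02-01',
                                                   '2016-01-05', '2016-01-02'])})

print(df_s.sort_values('timestamp').groupby('user_id').last())
# expected output: user 1 -> p1 (2016-02-01), user 2 -> p9 (2016-01-05)
</code></pre>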
|
pandas|dataframe
| 1
|
378,108
| 39,378,363
|
Remove nan rows in a scipy sparse matrix
|
<p>I am given a (normalized) sparse adjacency matrix and a list of labels for the respective matrix rows. Because some nodes have been removed by another sanitization function, there are some rows containing NaNs in the matrix. I want to find these rows and remove them <em>as well as their respective labels</em>. Here is the function I wrote:</p>
<pre class="lang-py prettyprint-override"><code>def sanitize_nan_rows(adj, labels):
# convert to numpy array and keep dimension
adj = np.array(adj, ndmin=2)
for i, row in enumerate(adj):
# check if row all nans
if np.all(np.isnan(row)):
# print("Removing nan row label in %s" % i)
# remove row index from labels
del labels[i]
# remove all nan rows
adj = adj[~np.all(np.isnan(adj), axis=1)]
# return sanitized adj and labels_clean
return adj, labels
</code></pre>
<p><code>labels</code> is a simple Python list and <code>adj</code> has the type <code><class 'scipy.sparse.lil.lil_matrix'></code> (containing elements of type <code><class 'numpy.float64'></code>), which are both the result of</p>
<pre><code>adj, labels = nx.attr_sparse_matrix(infected, normalized=True)
</code></pre>
<p>On execution I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-503-8a404b58eaa9> in <module>()
----> 1 adj, labels = sanitize_nans(adj, labels)
<ipython-input-502-ead99efec677> in sanitize_nans(adj, labels)
6 for i, row in enumerate(adj):
7 # check if row all nans
----> 8 if np.all(np.isnan(row)):
9 print("Removing nan row label in %s" % i)
10 # remove row index from labels
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>So I thought that SciPy NaNs were different from numpy NaNs. After that I tried to convert the sparse matrix into a numpy array (taking the risk of flooding my RAM, because the matrix has about 40k rows and columns). When running it, the error stays the same however. It seems that the <code>np.array()</code> call just wrapped the sparse matrix and didn't convert it, as <code>type(row)</code> inside the for loop still outputs <code><class 'scipy.sparse.lil.lil_matrix'></code></p>
<p>So my question is how to resolve this issue and whether there is a better approach that gets the job done. I am fairly new to numpy and scipy (as used in networkx), so I'd appreciate an explanation. Thank you!</p>
<p>EDIT: After changing the conversion to what <a href="https://stackoverflow.com/users/901925/hpaulj">hpaulj</a> proposed, I'm getting a MemoryError:</p>
<pre><code>---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-519-8a404b58eaa9> in <module>()
----> 1 adj, labels = sanitize_nans(adj, labels)
<ipython-input-518-44201f4ff35c> in sanitize_nans(adj, labels)
1 def sanitize_nans(adj, labels):
----> 2 adj = adj.toarray()
3
4 for i, row in enumerate(adj):
5 # check if row all nans
/usr/lib/python3/dist-packages/scipy/sparse/lil.py in toarray(self, order, out)
348 def toarray(self, order=None, out=None):
349 """See the docstring for `spmatrix.toarray`."""
--> 350 d = self._process_toarray_args(order, out)
351 for i, row in enumerate(self.rows):
352 for pos, j in enumerate(row):
/usr/lib/python3/dist-packages/scipy/sparse/base.py in _process_toarray_args(self, order, out)
697 return out
698 else:
--> 699 return np.zeros(self.shape, dtype=self.dtype, order=order)
700
701
MemoryError:
</code></pre>
<p>So apparently I'll have to stick with the sparse matrix to save RAM.</p>
|
<p>If I make a sample array:</p>
<pre><code>In [328]: A=np.array([[1,0,0,np.nan],[0,np.nan,np.nan,0],[1,0,1,0]])
In [329]: A
Out[329]:
array([[ 1., 0., 0., nan],
[ 0., nan, nan, 0.],
[ 1., 0., 1., 0.]])
In [331]: M=sparse.lil_matrix(A)
</code></pre>
<p>This lil sparse matrix is stored in 2 arrays:</p>
<pre><code>In [332]: M.data
Out[332]: array([[1.0, nan], [nan, nan], [1.0, 1.0]], dtype=object)
In [333]: M.rows
Out[333]: array([[0, 3], [1, 2], [0, 2]], dtype=object)
</code></pre>
<p>With your function, no rows will be removed, even though the middle row of the sparse matrix only contains <code>nan</code>.</p>
<pre><code>In [334]: A[~np.all(np.isnan(A), axis=1)]
Out[334]:
array([[ 1., 0., 0., nan],
[ 0., nan, nan, 0.],
[ 1., 0., 1., 0.]])
</code></pre>
<p>I could test the rows of <code>M</code> for <code>nan</code>, and identify the ones that only contain <code>nan</code> (besides 0s). But it's probably easier to collect the ones that we want to keep. </p>
<pre><code>In [346]: ll = [i for i,row in enumerate(M.data) if not np.all(np.isnan(row))]
In [347]: ll
Out[347]: [0, 2]
In [348]: M[ll,:]
Out[348]:
<2x4 sparse matrix of type '<class 'numpy.float64'>'
with 4 stored elements in LInked List format>
In [349]: _.A
Out[349]:
array([[ 1., 0., 0., nan],
[ 1., 0., 1., 0.]])
</code></pre>
<p>A row of <code>M</code> is a list, but <code>np.isnan(row)</code> will convert it to an array and do its array test.</p>
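<p>To also drop the corresponding labels, as the question asks, a sketch reusing the same list of kept row indices on the question's <code>adj</code> and <code>labels</code>:</p>
<pre><code>keep = [i for i, row in enumerate(adj.data) if not np.all(np.isnan(row))]
adj_clean = adj[keep, :]                     # still a sparse matrix
labels_clean = [labels[i] for i in keep]     # labels stay aligned with the rows
</code></pre>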
|
python|numpy|scipy|sparse-matrix|networkx
| 1
|
378,109
| 39,156,650
|
How to print or cout a tensor?
|
<p>I have a tensor that I would like to print for debugging</p>
<pre><code>tensorflow::Tensor image_tensor;
</code></pre>
<p>I tried </p>
<pre><code>std::cout << &image_tensor;
</code></pre>
<p>But I get something like this:</p>
<pre><code>0x16fd81cf8I
</code></pre>
|
<p>You have to use the <code>.vec</code> or <code>.flat</code> methods.
Let's say the tensor holds <code>int8</code> values:</p>
<pre><code>// iterate over every element; .flat gives a 1-D view regardless of shape
auto flat = image_tensor.flat<int8>();
for (int i = 0; i < flat.size(); ++i)
  std::cout << i << " " << flat(i) << std::endl;
</code></pre>
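<p>If you just need a quick one-line summary while debugging, <code>image_tensor.DebugString()</code> (a method on <code>tensorflow::Tensor</code>, assuming a reasonably recent TensorFlow build) can also be streamed straight to <code>std::cout</code>.</p>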
<p>see also this answer <a href="https://stackoverflow.com/questions/39148671/how-to-fill-a-tensor-in-c">How to fill a tensor in C++</a></p>
|
c++|tensorflow
| 3
|
378,110
| 39,279,858
|
How to draw a graphical count table in pandas
|
<p>I have a dataframe df with two columns <code>customer1</code> and <code>customer2</code> which are string valued. I would like to make a square graphical representation of the count number for each pair from those two columns. </p>
<p>I can do</p>
<pre><code>df[['customer1', 'customer2']].value_counts()
</code></pre>
<p>which will give me the counts. But how can I make something that looks a little like:</p>
<p><a href="https://i.stack.imgur.com/GZ6yI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GZ6yI.png" alt="enter image description here"></a> </p>
<p>from the result?</p>
<p>I can't provide my real dataset but here is a toy example with three labels in csv.</p>
<pre><code>customer1,customer2
a,b
a,c
a,c
b,a
b,c
b,c
c,c
a,a
b,c
b,c
</code></pre>
|
<p><strong>UPDATE:</strong> </p>
<blockquote>
<p>Is it possible to sort the rows/columns so the highest count rows are
at the top ? In this case the order would be b,a,c</p>
</blockquote>
<p>IIUC you can do it this way:</p>
<pre><code>In [80]: x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
In [81]: idx = x.max(axis=1).sort_values(ascending=0).index
In [82]: idx
Out[82]: Index(['b', 'a', 'c'], dtype='object', name='customer1')
In [87]: sns.heatmap(x[idx].reindex(idx), annot=True)
Out[87]: <matplotlib.axes._subplots.AxesSubplot at 0x9ee3f98>
</code></pre>
<p><a href="https://i.stack.imgur.com/sIVen.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sIVen.png" alt="enter image description here"></a></p>
<p><strong>OLD answer:</strong></p>
<p>you can use <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.heatmap.html" rel="nofollow noreferrer">heatmap()</a> method from <code>seaborn</code> module:</p>
<pre><code>In [42]: import seaborn as sns
In [43]: df
Out[43]:
customer1 customer2
0 a b
1 a c
2 a c
3 b a
4 b c
5 b c
6 c c
7 a a
8 b c
9 b c
In [44]: x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
In [45]: x
Out[45]:
customer2 a b c
customer1
a 1 1 2
b 1 0 4
c 0 0 1
In [46]: sns.heatmap(x)
Out[46]: <matplotlib.axes._subplots.AxesSubplot at 0xb150b70>
</code></pre>
<p><a href="https://i.stack.imgur.com/YMVgk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMVgk.png" alt="enter image description here"></a></p>
<p>or with annotations:</p>
<pre><code>In [48]: sns.heatmap(x, annot=True)
Out[48]: <matplotlib.axes._subplots.AxesSubplot at 0xc596d68>
</code></pre>
<p><a href="https://i.stack.imgur.com/dN2YV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dN2YV.png" alt="enter image description here"></a></p>
|
python|pandas
| 2
|
378,111
| 19,529,402
|
Need some basic Pandas help -- trying to print a dataframe row by row and perform operations on the elements within specific columns of that row
|
<p>Basically, I have a query returning a dataframe and row by row I want to generate new queries using the elements of the row as arguments for the next query -- the example walks through a simplified version and understanding that should be sufficient!</p>
<pre><code>>>> import pandas as pd
>>> df2 = pd.DataFrame({'a' : ['colorado', 'california', 'texas', 'oregon'], 'b' : ['go buffs', 'go bears', 'go sooners', 'go ducks'], 'c' : [14,14,15,13]})
>>> df2
a b c
0 colorado go buffs 14
1 california go bears 14
2 texas go sooners 15
3 oregon go ducks 13
#Print element by element in column
>>> for x in df2['a']:
... print x
...
colorado
california
texas
oregon
#What I want is to print full ROWS
>>> df3 = df2.loc[:, ['a','b']]
>>> df3
a b
0 colorado go buffs
1 california go bears
2 texas go sooners
3 oregon go ducks
#I want to get something like
#for x, y in df3['a','b']:
# print 'in %s say %s!'%(x,y)
#and get:
#in colorado say go buffs!
#in california....etc
#How do I do that!?
</code></pre>
|
<p>You can unroll the tuples as you iterate through the dataframe and then just print the columns you desire:</p>
<pre><code>for row in df2.itertuples():
index, a, b, c = row
print 'in %s say %s!'%(a,b)
in colorado say go buffs!
in california say go bears!
in texas say go sooners!
in oregon say go ducks!
</code></pre>
<p>Alternatively you can use <code>iterrows</code> which as @DSM pointed out returns an index with nested values (actually a Series):</p>
<pre><code>for row in df2.iterrows():
index, data = row
print 'in %s say %s!' % (data['a'], data['b'])
</code></pre>
<p>This will also output the same as the first code snippet</p>
|
python|pandas
| 2
|
378,112
| 19,306,211
|
OpenCV cv2 image to PyGame image?
|
<pre><code>def cvimage_to_pygame(image):
"""Convert cvimage into a pygame image"""
return pygame.image.frombuffer(image.tostring(), image.shape[:2],
"RGB")
</code></pre>
<p>The function takes a numpy array taken from the cv2 camera. When I display the returned pyGame image on a pyGame window, it appears in three broken images. I don't know why this is!</p>
<p>Any help would be greatly appreciated.</p>
<p>Heres what happens::</p>
<p>(Pygame on the left)</p>
<p><img src="https://i.stack.imgur.com/EtyVz.jpg" alt="enter image description here"></p>
|
<p>In the <code>shape</code> field the width and height parameters are swapped. Replace the argument:</p>
<pre><code>image.shape[:2] # gives you (height, width) tuple
</code></pre>
<p>With </p>
<pre><code>image.shape[1::-1] # gives you (width, height) tuple
</code></pre>
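<p>Putting it together, a corrected version of the helper from the question might look like this (a sketch; it assumes the cv2 image is already in RGB order, otherwise convert from BGR first):</p>
<pre><code>import pygame

def cvimage_to_pygame(image):
    """Convert a cv2/numpy image of shape (height, width, 3) into a pygame image."""
    return pygame.image.frombuffer(image.tostring(), image.shape[1::-1], "RGB")
</code></pre>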
|
python|numpy|opencv|pygame
| 8
|
378,113
| 19,630,265
|
ValueError: Shape of passed values is (3, 27), indices imply (4, 27) # pandas DataFrame
|
<p>Here is my numpy array:</p>
<pre><code>import numpy as np
num = np.array([[ 0.17899619,  0.33093259,  0.2076353 ,  0.06130814],
                [ 0.20392888,  0.42653105,  0.33325891,  0.10473969],
                [ 0.17038247,  0.19081956,  0.10119709,  0.09032416],
                [-0.10606583, -0.13680513, -0.13129103, -0.03684349],
                [ 0.20319428,  0.28340985,  0.20994867,  0.11728491],
                [ 0.04396872,  0.23703525,  0.09359683,  0.11486036],
                [ 0.27801304, -0.05769304, -0.06202813,  0.04722761]])
</code></pre>
<p>Here is my header row:</p>
<pre><code>days = ['5 days', '10 days', '20 days', '60 days']
</code></pre>
<p>And here is my first column:</p>
<pre><code>prices = ['AAPL', 'ADBE', 'AMD', 'AMZN', 'CRM', 'EXPE', 'FB']
</code></pre>
<p>I want to put it all in one HTML table like this:</p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>5 days</th>
<th>10 days</th>
<th>20 days</th>
<th>60 days</th>
</tr>
</thead>
<tbody>
<tr>
<th>AAPL</th>
<td> 0.178996</td>
<td> 0.330933</td>
<td> 0.207635</td>
<td> 0.061308</td>
</tr>
<tr>
<th>ADBE</th>
<td> 0.203929</td>
<td> 0.426531</td>
<td> 0.333259</td>
<td> 0.104740</td>
</tr>
<tr>
<th>AMD</th>
<td> 0.170382</td>
<td> 0.190820</td>
<td> 0.101197</td>
<td> 0.090324</td>
</tr>
<tr>
<th>AMZN</th>
<td>-0.106066</td>
<td>-0.136805</td>
<td>-0.131291</td>
<td>-0.036843</td>
</tr>
<tr>
<th>CRM</th>
<td> 0.203194</td>
<td> 0.283410</td>
<td> 0.209949</td>
<td> 0.117285</td>
</tr>
<tr>
<th>EXPE</th>
<td> 0.043969</td>
<td> 0.237035</td>
<td> 0.093597</td>
<td> 0.114860</td>
</tr>
<tr>
<th>FB</th>
<td> 0.278013</td>
<td>-0.057693</td>
<td>-0.062028</td>
<td> 0.047228</td>
</tr>
</tbody>
</table>
</code></pre>
<p>I have tried to do this way, using pandas:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(num, index=prices, columns=days)
html = df.to_html()
print html
</code></pre>
<p>But when I run this code, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 2226, in create_block_manager_from_blocks
blocks = [ make_block(blocks[0], axes[0], axes[0], placement=placement) ]
File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 967, in make_block
return klass(values, items, ref_items, ndim=values.ndim, fastpath=fastpath, placement=placement)
File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 45, in __init__
% (len(items), len(values)))
ValueError: Wrong number of items passed 4, indices imply 3
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user\Documents\progfun\finance\file.py", line 261, in <module>
main()
File "C:\Users\user\Documents\progfun\finance\file.py", line 42, in main
print (html_table(returns_array, list_of_days, [index] + security_list))
File "C:\Users\user\Documents\progfun\finance\file.py", line 185, in html_table
df = pd.DataFrame(big_array, index=companies, columns=days)
File "C:\Users\user\Documents\progfun\finance\file.py", line 415, in __init__
copy=copy)
File "C:\Users\user\Documents\progfun\finance\file.py", line 561, in _init_ndarray
return create_block_manager_from_blocks([ values.T ], [ columns, index ])
File "C:\Users\user\Documents\progfun\finance\file.py", line 2235, in create_block_manager_from_blocks
construction_error(tot_items,blocks[0].shape[1:],axes)
File "C:\Users\user\Documents\progfun\finance\file.py", line 2217, in construction_error
tuple(map(int, [len(ax) for ax in axes]))))
ValueError: Shape of passed values is (3, 27), indices imply (4, 27)
</code></pre>
<p>How can I fix that?</p>
|
<p><strong>Update:</strong> Make sure that <code>big_array</code> has 4 columns. The shape of <code>big_array</code> does not match the shape of your sample array <code>num</code>. That's why the example code works but your real code does not.</p>
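<p>A quick way to check this before building the DataFrame (using the variable names from your traceback):</p>
<pre><code>print(big_array.shape)                  # should be (27, 4): one row per company, one column per entry in days
assert big_array.shape[1] == len(days)  # fails if a column went missing upstream
</code></pre>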
<hr>
<p>I was unable to reproduce your error message. On my system (Windows, Python 2.7, pandas-0.11.0, numpy-1.7.1) everything works as expected when running the following code:</p>
<pre><code>import numpy as np
import pandas as pd
num = np.array([[ 0.17899619, 0.33093259, 0.2076353, 0.06130814],
[ 0.20392888, 0.42653105, 0.33325891, 0.10473969],
[ 0.17038247, 0.19081956, 0.10119709, 0.09032416],
[-0.10606583, -0.13680513, -0.13129103, -0.03684349],
[ 0.20319428, 0.28340985, 0.20994867, 0.11728491],
[ 0.04396872, 0.23703525, 0.09359683, 0.11486036],
[ 0.27801304, -0.05769304, -0.06202813, 0.04722761]])
days = ['5 days', '10 days', '20 days', '60 days']
prices = ['AAPL', 'ADBE', 'AMD', 'AMZN', 'CRM', 'EXPE', 'FB']
print pd.DataFrame(num, index=prices, columns=days).to_html()
</code></pre>
|
python|python-3.x|numpy|pandas
| 8
|
378,114
| 19,711,999
|
rgb_to_hsv and backwards using python and numpy
|
<p>I tried to execute this code <a href="https://stackoverflow.com/a/7274986/2349589">here</a> as described in this answer. But I can't seem to get away from dividing by zero values.</p>
<p>I tried to copy this code from CamanJS for transforming from RGB to HSV but I get the same thing.</p>
<pre><code>RuntimeWarning: invalid value encountered in divide
</code></pre>
<p>caman code is</p>
<pre><code>Convert.rgbToHSV = function(r, g, b) {
var d, h, max, min, s, v;
r /= 255;
g /= 255;
b /= 255;
max = Math.max(r, g, b);
min = Math.min(r, g, b);
v = max;
d = max - min;
s = max === 0 ? 0 : d / max;
if (max === min) {
h = 0;
} else {
h = (function() {
switch (max) {
case r:
return (g - b) / d + (g < b ? 6 : 0);
case g:
return (b - r) / d + 2;
case b:
return (r - g) / d + 4;
}
})();
h /= 6;
}
return {
h: h,
s: s,
v: v
};
};
</code></pre>
<p>My code, based on the answer from here:</p>
<pre><code>import Image
import numpy as np
def rgb_to_hsv(rgb):
hsv = np.empty_like(rgb)
hsv[...,3] = rgb[...,3]
r,g,b = rgb[...,0], rgb[...,1], rgb[...,2]
maxc = np.amax(rgb[...,:3], axis=-1)
print maxc
minc = np.amin(rgb[...,:3], axis=-1)
print minc
hsv[...,2] = maxc
dif = (maxc - minc)
hsv[...,1] = np.where(maxc==0, 0, dif/maxc)
#rc = (maxc-r)/ (maxc-minc)
#gc = (maxc-g)/(maxc-minc)
#bc = (maxc-b)/(maxc-minc)
hsv[...,0] = np.select([dif==0, r==maxc, g==maxc, b==maxc], [np.zeros(maxc.shape), (g-b) / dif + np.where(g<b, 6, 0), (b-r)/dif + 2, (r - g)/dif + 4])
hsv[...,0] = (hsv[...,0]/6.0) % 1.0
idx = (minc == maxc)
hsv[...,0][idx] = 0.0
hsv[...,1][idx] = 0.0
return hsv
</code></pre>
<p>The exception occurs in both versions, wherever I divide by <code>maxc</code> or by <code>dif</code> (because they contain zero values).</p>
<p>I encounter the same problem (the RuntimeWarning) with the original code by @unutbu. Caman seems to do this for every pixel separately, that is for every r,g,b combination.</p>
<p>I also get a ValueError ("shape mismatch: objects cannot be broadcast to a single shape") when the select function is executed. But I double checked all the shapes of the choices and they are all (256,256).</p>
<p>Edit:
I corrected the function using <a href="http://en.wikipedia.org/wiki/HSL_and_HSV#Hue_and_chroma" rel="nofollow noreferrer">this wikipedia article</a> and updated the code... now I only get the RuntimeWarning.</p>
|
<p>The error comes from the fact that <code>numpy.where</code> (and <code>numpy.select</code>) computes all its arguments, even if they aren't used in the output. So in your line <code>hsv[...,1] = np.where(maxc==0, 0, dif/maxc)</code>, <code>dif / maxc</code> is computed even for elements where <code>maxc == 0</code>, but then only the ones where <code>maxc != 0</code> are used. This means that your output is fine, but you still get the RuntimeWarning.</p>
<p>If you want to avoid the warning (and make your code a little faster), do something like:</p>
<pre><code>nz = maxc != 0 # find the nonzero values
hsv[nz, 1] = dif[nz] / maxc[nz]
</code></pre>
<p>You'll also have to change the <code>numpy.select</code> statement, because it also evaluates all its arguments.</p>
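<p>If you prefer to keep the <code>np.where</code>/<code>np.select</code> formulation, another option (just a sketch) is to silence the warning locally:</p>
<pre><code>with np.errstate(invalid='ignore', divide='ignore'):
    hsv[..., 1] = np.where(maxc == 0, 0, dif / maxc)
</code></pre>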
|
python|image|image-processing|numpy|python-imaging-library
| 0
|
378,115
| 19,602,864
|
How to plot in a specific axis with DataFrame.hist(by=) in pandas
|
<p>I am trying to plot several histogram groups in the same figure. Each group contains two conditions and I am therefore using the 'by=' argument of the pandas histogram options. However, this does not work as I expected and pandas creates a new figure instead of plotting in the axis I am passing. I tried to pass four axes as well, but still no go. Sample code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'color': ['blue','blue','yellow','blue','yellow'], 'area': [2,2,3,4,4]})
fig, (ax1, ax2) = plt.subplots(1,2)
df.area.hist(by=df.color, ax=ax1)
</code></pre>
<p>I'm using pandas 0.12.0, matplotlib 1.3.0 and python 2.7.5. Any suggestion that leads to a way of combining/stitching multiple 'hist(by=)-plots' in the same subplot grid is welcome.</p>
<p><em>Update:</em></p>
<p>Maybe this describes what I want to achieve more accurately.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'color': ['blue','blue','yellow','blue','yellow'], 'area': [2,2,3,4,4]})
#fig, (ax1, ax2) = plt.subplots(1,2)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2)
ax3.plot([[2,2], [3,6]])
ax4.plot([[3,6], [2,2]])
df.area.hist(by=df.color, ax=ax1)
</code></pre>
<p>Ideally, in my example, the pandas histogram output is a 1x2 grid and should then split ax1 into two subplots. Alternatively, it could be plotted into ax1 and ax2, and the user could then make sure that the correct number of empty subplots is available.</p>
|
<p>This is fixed as of <a href="https://github.com/pydata/pandas/pull/7736" rel="nofollow noreferrer">GH7736</a> which was merged into pandas 0.15.0</p>
<p>The correct way to pass multiple plots into an existing figure is to first create all the desired axes and then pass all of them to the pandas plotting command.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'color': ['blue','blue','yellow','blue','yellow'], 'area': [2,2,3,4,4]})
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2)
#Hide upper left corner plot and create two new subplots
ax1.axis('off')
ax = fig.add_subplot(2,4,1)
ax0 = fig.add_subplot(2,4,2)
#Plot
ax3.plot([[2,2], [3,6]])
ax4.plot([[3,6], [2,2]])
df.area.hist(by=df.color, ax=(ax,ax0)) #Pass both new subplots
</code></pre>
<p><img src="https://i.stack.imgur.com/2ZX6i.png" alt="enter image description here"></p>
<p>You can carry out the subplot creation more elegantly using <a href="http://matplotlib.org/users/gridspec.html" rel="nofollow noreferrer">GridSpec</a></p>
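<p>A minimal GridSpec sketch of the same layout (illustrative only):</p>
<pre><code>import matplotlib.gridspec as gridspec

fig = plt.figure()
gs = gridspec.GridSpec(2, 4)
ax  = fig.add_subplot(gs[0, 0])    # first histogram panel
ax0 = fig.add_subplot(gs[0, 1])    # second histogram panel
ax2 = fig.add_subplot(gs[0, 2:])   # upper-right plot
ax3 = fig.add_subplot(gs[1, :2])   # lower-left plot
ax4 = fig.add_subplot(gs[1, 2:])   # lower-right plot
df.area.hist(by=df.color, ax=(ax, ax0))
</code></pre>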
|
python|matplotlib|pandas
| 1
|
378,116
| 19,705,200
|
Multiprocessing with numpy makes Python quit unexpectedly on OSX
|
<p>I've run into a problem where Python quits unexpectedly when running multiprocessing with numpy. I've isolated the problem, so I can now confirm that the multiprocessing works perfectly when running the code below:</p>
<pre><code>import numpy as np
from multiprocessing import Pool, Process
import time
import cPickle as p
def test(args):
x,i = args
if i == 2:
time.sleep(4)
arr = np.dot(x.T,x)
print i
if __name__ == '__main__':
x = np.random.random(size=((2000,500)))
evaluations = [(x,i) for i in range(5)]
p = Pool()
p.map_async(test,evaluations)
p.close()
p.join()
</code></pre>
<p>The problem occurs when I try to evaluate the code below. This makes Python quit unexpectedly:</p>
<pre><code>import numpy as np
from multiprocessing import Pool, Process
import time
import cPickle as p
def test(args):
x,i = args
if i == 2:
time.sleep(4)
arr = np.dot(x.T,x)
print i
if __name__ == '__main__':
x = np.random.random(size=((2000,500)))
test((x,4)) # Added code
evaluations = [(x,i) for i in range(5)]
p = Pool()
p.map_async(test,evaluations)
p.close()
p.join()
</code></pre>
<p>Please help, someone. I'm open to all suggestions. Thanks. Note: I have tried two different machines and the same problem occurs on both.</p>
|
<p>I figured out a workaround to the problem. The problem occurs when Numpy is used together with BLAS before initializing a multiprocessing instance. My workaround is simply to put the Numpy code (running BLAS) into a single process and then running the multiprocessing instances afterwards. This is not a good coding style, but it works. See example below:</p>
<p>Following will fail - Python will quit:</p>
<pre><code>import numpy as np
from multiprocessing import Pool, Process
def test(x):
arr = np.dot(x.T,x) # On large matrices, this calc will use BLAS.
if __name__ == '__main__':
x = np.random.random(size=((2000,500))) # Random matrix
test(x)
evaluations = [x for _ in range(5)]
p = Pool()
p.map_async(test,evaluations) # This is where Python will quit, because of the prior use of BLAS.
p.close()
p.join()
</code></pre>
<p>Following will succeed:</p>
<pre><code>import numpy as np
from multiprocessing import Pool, Process
def test(x):
arr = np.dot(x.T,x) # On large matrices, this calc will use BLAS.
if __name__ == '__main__':
x = np.random.random(size=((2000,500))) # Random matrix
p = Process(target = test,args = (x,))
p.start()
p.join()
evaluations = [x for _ in range(5)]
p = Pool()
p.map_async(test,evaluations)
p.close()
p.join()
</code></pre>
|
python|macos|numpy|multiprocessing
| 6
|
378,117
| 12,970,842
|
Python multiple search in arrays
|
<p><code>idtopick</code> is an array of ids </p>
<pre><code> idtopick=array([50,48,12,125,3458,155,299,6,7,84,58,63,0,8,-1])
</code></pre>
<p><code>idtolook</code> is another array containing the ids I'm interested in</p>
<pre><code> idtolook=array([0,8,12,50])
</code></pre>
<p>I would like to store in another array the positions in <code>idtopick</code> that correspond to the values in <code>idtolook</code>.</p>
<p>This is my solution</p>
<pre><code> positions=array([where(idtopick==dummy)[0][0] for dummy in idtolook])
</code></pre>
<p>Resulting in</p>
<pre><code> array([12, 13, 2, 0])
</code></pre>
<p>It works, but in reality the arrays I'm working with store millions of points, so the above script is rather slow. I would like to know if there's a way to make it faster. Also, I want to keep the order of <code>idtolook</code>, so any algorithm that would sort it wouldn't work for my case.</p>
|
<p>You can use sorting:</p>
<pre><code> sorter = np.argsort(idtopick, kind='mergesort') # you need stable sorting
sorted_ids = idtopick[sorter]
positions = np.searchsorted(sorted_ids, idtolook)
positions = sorter[positions]
</code></pre>
<p>Note that it won't throw an error though if there is an <code>idtolook</code> value missing in <code>idtopick</code>. You could actually sort idtolook into the result array too, which should be faster:</p>
<pre><code> c = np.concatenate((idtopick, idtolook))
sorter = np.argsort(c, kind='mergesort')
#reverse = np.argsort(sorter) # The next two lines are this, but faster:
reverse = np.empty_like(sorter)
reverse[sorter] = np.arange(len(sorter))
positions = sorter[reverse[-len(idtolook):]-1]
</code></pre>
<p>Which has similarity to the set operations.</p>
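<p>Using the arrays from the question as a quick sanity check of the first approach:</p>
<pre><code>import numpy as np

idtopick = np.array([50,48,12,125,3458,155,299,6,7,84,58,63,0,8,-1])
idtolook = np.array([0,8,12,50])

sorter = np.argsort(idtopick, kind='mergesort')
positions = sorter[np.searchsorted(idtopick[sorter], idtolook)]
print(positions)   # [12 13  2  0]
</code></pre>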
|
python|arrays|numpy
| 3
|
378,118
| 13,221,218
|
How to select rows within a pandas dataframe based on time only when index is date and time
|
<p>I have a dataframe that looks like this:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2016910 entries, 2009-01-02 04:51:00 to 2012-11-02 20:00:00
Freq: T
Data columns:
X1 2016910 non-null values
X2 2016910 non-null values
X3 2016910 non-null values
X4 2016910 non-null values
X5 2016910 non-null values
dtypes: float64(5)
</code></pre>
<p>and I would like to "filter" it by accessing only certain times across the whole range of dates. For example, I'd like to return a dataframe that contains all rows where the time is between 13:00:00 and 14:00:00, but for all of the dates. I am reading the data from a CSV file and the datetime is one column, but I could just as easily make the input CSV file contain a separate date and time. I tried the separate date and time route, and created a multiindex, but when I did, I ended up with two index columns -- one of them containing the proper date with an incorrect time instead of just a date, and the second one containing an incorrect date, and then a correct time, instead of just a time. The input data for my multiindex attempt looked like this:</p>
<pre><code> 20090102,04:51:00,89.9900,89.9900,89.9900,89.9900,100
20090102,05:36:00,90.0100,90.0100,90.0100,90.0100,200
20090102,05:44:00,90.1400,90.1400,90.1400,90.1400,100
20090102,05:50:00,90.0500,90.0500,90.0500,90.0500,500
20090102,05:56:00,90.1000,90.1000,90.1000,90.1000,300
20090102,05:57:00,90.1000,90.1000,90.1000,90.1000,200
</code></pre>
<p>which I tried to read using this code:</p>
<pre><code> singledf = pd.DataFrame.from_csv("inputfile",header=None,index_col=[0,1],parse_dates=True)
</code></pre>
<p>which resulted in a dataframe that looks like this:</p>
<pre><code>singledf.sort()
singledf
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 716244 entries, (<Timestamp: 2009-01-02 00:00:00>, <Timestamp: 2012-11-04 04:51:00>) to (<Timestamp: 2012-11-02 00:00:00>, <Timestamp: 2012-11-04 20:00:00>)
Data columns:
X2 716244 non-null values
X3 716244 non-null values
X4 716244 non-null values
X5 716244 non-null values
X6 716244 non-null values
dtypes: float64(4), int64(1)
</code></pre>
<p>Maybe the multiindex approach is totally wrong, but it's one thing I tried. It seems like it is stuck on using a datetime object, and wants to force the index columns to have a datetime instead of just a date or a time. My source CSV file for my non-multiindex attempt looks like this:</p>
<pre><code>20090102 04:51:00,89.9900,89.9900,89.9900,89.9900,100
20090102 05:36:00,90.0100,90.0100,90.0100,90.0100,200
20090102 05:44:00,90.1400,90.1400,90.1400,90.1400,100
20090102 05:50:00,90.0500,90.0500,90.0500,90.0500,500
20090102 05:56:00,90.1000,90.1000,90.1000,90.1000,300
</code></pre>
<p>I am using pandas .9. Any suggestions are appreciated!</p>
|
<p>A regular DatetimeIndex allows you to use the between_time method.</p>
<pre><code>In [12]: data = """\
20090102,04:51:00,89.9900,89.9900,89.9900,89.9900,100
20090102,05:36:00,90.0100,90.0100,90.0100,90.0100,200
20090102,05:44:00,90.1400,90.1400,90.1400,90.1400,100
20090102,05:50:00,90.0500,90.0500,90.0500,90.0500,500
20090102,05:56:00,90.1000,90.1000,90.1000,90.1000,300
20090102,05:57:00,90.1000,90.1000,90.1000,90.1000,200
"""
In [13]: singledf = pd.DataFrame.from_csv(StringIO(data), header=None, parse_dates=[[0,1]])
In [14]: singledf
Out[14]:
X2 X3 X4 X5 X6
X0_X1
2009-01-02 04:51:00 89.99 89.99 89.99 89.99 100
2009-01-02 05:36:00 90.01 90.01 90.01 90.01 200
2009-01-02 05:44:00 90.14 90.14 90.14 90.14 100
2009-01-02 05:50:00 90.05 90.05 90.05 90.05 500
2009-01-02 05:56:00 90.10 90.10 90.10 90.10 300
2009-01-02 05:57:00 90.10 90.10 90.10 90.10 200
In [15]: singledf.between_time('5:30:00', '5:45:00')
Out[15]:
X2 X3 X4 X5 X6
X0_X1
2009-01-02 05:36:00 90.01 90.01 90.01 90.01 200
2009-01-02 05:44:00 90.14 90.14 90.14 90.14 100
</code></pre>
|
dataframe|pandas
| 2
|
378,119
| 12,931,569
|
Green to red colormap in matplotlib, centered on the median of the data
|
<p>In my application I'm transitioning from R to native Python (scipy + matplotlib) where possible, and one of the biggest tasks was converting from an R heatmap to a matplotlib heatmap. <a href="http://code.activestate.com/recipes/578175-hierarchical-clustering-heatmap-python/" rel="noreferrer">This post</a> guided me with the porting. While most of it was painless, I'm still not convinced on the colormap.</p>
<p>Before showing code, an explanation: in the R code I defined "breaks", i.e. a fixed number of points starting from the lowest value up to 10, and ideally centered on the median value of the data. Its equivalent here would be with <code>numpy.linspace</code>:</p>
<pre><code># Matrix is a DataFrame object from pandas
import numpy as np
data_min = min(matrix.min(skipna=True))
data_max = max(matrix.max(skipna=True))
median_value = np.median(matrix.median(skipna=True))
range_min = np.linspace(0, median_value, 50)
range_max = np.linspace(median_value, data_max, 50)
breaks = np.concatenate((range_min, range_max))
</code></pre>
<p>This gives us 100 points that will be used for coloring. However, I'm not sure on how to do the exact same thing in Python. Currently I have:</p>
<pre><code>def red_black_green():
cdict = {
'red': ((0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)),
'blue': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'green': ((0.0, 0.0, 1.0),
(0.5, 0.0, 0.0),
(1.0, 0.0, 0.0))
}
my_cmap = mpl.colors.LinearSegmentedColormap(
'my_colormap', cdict, 100)
return my_cmap
</code></pre>
<p>And further down I do:</p>
<pre><code># Note: vmin and vmax are the maximum and the minimum of the data
# Adjust the max and min to scale these colors
if vmin > 0:
norm = mpl.colors.Normalize(vmin=0, vmax=vmax / 1.08)
else:
norm = mpl.colors.Normalize(vmin / 2, vmax / 2)
</code></pre>
<p>The numbers are totally empirical, which is why I want to change this into something more robust. How can I normalize my color map based on the median, or do I need normalization at all?</p>
|
<p>By default, matplotlib will normalise the colormap so that the maximum colormap value corresponds to the maximum of your data, and likewise for the minimum. This means that the middle of the colormap will line up with the midpoint of your data range (which only coincides with the median of your data if the data are symmetric).</p>
<p>Here's an example:</p>
<pre><code>from numpy.random import rand
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
cdict = {'red': ((0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)),
'blue': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'green': ((0.0, 0.0, 1.0),
(0.5, 0.0, 0.0),
(1.0, 0.0, 0.0))}
cmap = mcolors.LinearSegmentedColormap(
'my_colormap', cdict, 100)
ax = plt.subplot(111)
im = ax.imshow(2*rand(20, 20) + 1.5, cmap=cmap)
plt.colorbar(im)
plt.show()
</code></pre>
<p>Notice the middle of the colour bar takes value 2.5. This is the midpoint of the data range: (min + max) / 2 = (1.5 + 3.5) / 2 = 2.5.</p>
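<p>If you do want the colormap centred on the actual data median rather than on the midpoint of the range, a small <code>Normalize</code> subclass does the trick. A minimal sketch (the class name <code>MidpointNormalize</code> is my own invention, not a matplotlib built-in):</p>
<pre><code>import numpy as np
import matplotlib.colors as mcolors

class MidpointNormalize(mcolors.Normalize):
    """Map vmin -> 0.0, midpoint -> 0.5, vmax -> 1.0 piecewise-linearly."""
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        mcolors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        # interpolate value between the three anchor points
        x, y = [self.vmin, self.midpoint, self.vmax], [0.0, 0.5, 1.0]
        return np.ma.masked_array(np.interp(value, x, y))

# usage sketch:
# norm = MidpointNormalize(vmin=data_min, vmax=data_max, midpoint=median_value)
# ax.imshow(matrix.values, cmap=cmap, norm=norm)
</code></pre>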
<p>Hope this helps.</p>
|
python|numpy|matplotlib|heatmap
| 4
|
378,120
| 12,861,314
|
Interleave rows of two numpy arrays in Python
|
<p>I wanted to interleave the rows of two numpy arrays of the same size.
I came up with this solution.</p>
<pre><code># A and B are same-shaped arrays
A = numpy.ones((4,3))
B = numpy.zeros_like(A)
C = numpy.array(zip(A[::1], B[::1])).reshape(A.shape[0]*2, A.shape[1])
print(C)
</code></pre>
<p>Outputs</p>
<pre><code>[[ 1. 1. 1.]
[ 0. 0. 0.]
[ 1. 1. 1.]
[ 0. 0. 0.]
[ 1. 1. 1.]
[ 0. 0. 0.]
[ 1. 1. 1.]
[ 0. 0. 0.]]
</code></pre>
<p>Is there a cleaner, faster, better, numpy-only way?</p>
|
<p>It is maybe a bit clearer to do:</p>
<pre><code>A = np.ones((4,3))
B = np.zeros_like(A)
C = np.empty((A.shape[0]+B.shape[0],A.shape[1]))
C[::2,:] = A
C[1::2,:] = B
</code></pre>
<p>and it's probably a bit faster as well, I'm guessing.</p>
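<p>For what it's worth, there is also a numpy-only one-liner; a sketch assuming <code>A</code> and <code>B</code> share a shape:</p>
<pre><code># each row of hstack((A, B)) is [A[i], B[i]]; reshaping row-major
# back to the original width interleaves the rows
C = np.hstack((A, B)).reshape(-1, A.shape[1])
</code></pre>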
|
python|arrays|numpy
| 20
|
378,121
| 13,078,751
|
Combine duplicated columns within a DataFrame
|
<p>If I have a dataframe that has columns that include the same name, is there a way to combine the columns that have the same name with some sort of function (e.g. sum)?</p>
<p>For instance with:</p>
<pre><code>In [186]:
df["NY-WEB01"].head()
Out[186]:
NY-WEB01 NY-WEB01
DateTime
2012-10-18 16:00:00 5.6 2.8
2012-10-18 17:00:00 18.6 12.0
2012-10-18 18:00:00 18.4 12.0
2012-10-18 19:00:00 18.2 12.0
2012-10-18 20:00:00 19.2 12.0
</code></pre>
<p>How might I collapse the NY-WEB01 columns (there are a bunch of duplicate columns, not just NY-WEB01) by summing each row where the column name is the same?</p>
|
<p>I believe this does what you are after:</p>
<pre><code>df.groupby(lambda x:x, axis=1).sum()
</code></pre>
<p>Alternatively, between 3% and 15% faster depending on the length of the df:</p>
<pre><code>df.groupby(df.columns, axis=1).sum()
</code></pre>
<p>EDIT: To extend this beyond sums, use <code>.agg()</code> (short for <code>.aggregate()</code>):</p>
<pre><code>df.groupby(df.columns, axis=1).agg(numpy.max)
</code></pre>
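<p>(In recent pandas versions <code>groupby(..., axis=1)</code> is deprecated; an equivalent sketch that avoids it:)</p>
<pre><code># group the transposed rows (the original column names) instead
df.T.groupby(level=0).sum().T
</code></pre>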
|
python|pandas|dataframe|group-by|pandas-groupby
| 48
|
378,122
| 29,292,522
|
String manipulations using Python Pandas
|
<p>I have some name and ethnicity data, for example:</p>
<pre><code>John Wick English
Black Widow French
</code></pre>
<p>I then do a bit of manipulation to transform the names as below:</p>
<pre><code>John Wick -> john#wick??????????????????????????????????
Black Widow -> black#widow????????????????????????????????
</code></pre>
<p>I then proceed to create multiple variables, each containing a 3-character substring, via the for loop.</p>
<p>I also try to count the number of alphabetic characters using re.findall.</p>
<p>I have two questions:
1) Is the for loop efficient? Can I replace with better code even though it is working as is?
2) I can't get the code that tries to find the number of alphabet to work. Any suggestions?</p>
<pre><code>import pandas as pd
from pandas import DataFrame
import re
# Get csv file into data frame
data = pd.read_csv("C:\Users\KubiK\Desktop\OddNames_sampleData.csv")
frame = DataFrame(data)
frame.columns = ["name", "ethnicity"]
name = frame.name
ethnicity = frame.ethnicity
# Remove missing ethnicity data cases
index_missEthnic = frame.ethnicity.isnull()
index_missName = frame.name.isnull()
frame2 = frame.loc[~index_missEthnic, :]
frame3 = frame2.loc[~index_missName, :]
# Make all letters into lowercase
frame3.loc[:, "name"] = frame3["name"].str.lower()
frame3.loc[:, "ethnicity"] = frame3["ethnicity"].str.lower()
# Remove all non-alphabetical characters in Name
frame3.loc[:, "name"] = frame3["name"].str.replace(r'[^a-zA-Z\s\-]', '') # Retain space and hyphen
# Replace empty space as "#"
frame3.loc[:, "name"] = frame3["name"].str.replace('[\s]', '#')
# Find the longest name in the dataset
##frame3["name_length"] = frame3["name"].str.len()
##nameLength = frame3.name_length
##print nameLength.max() # Longest name has !!!40 characters!!! including spaces and hyphens
# Add "?" to fill spaces up to 43 characters
frame3["name_filled"] = frame3["name"].str.pad(side="right", width=43, fillchar="?")
# Split into three-character strings
for i in range(1, 41):
substr = "substr" + str(i)
frame3[substr] = frame3["name_filled"].str[i-1:i+2]
# Count number of characters
frame3["name_len"] = len(re.findall('[a-zA-Z]', name))
# Test outputs
print frame3
</code></pre>
|
<p>1) Regarding the loop, I can't think of a better way than what you're already doing.</p>
<p>2) Try <code>frame3["name_len"] = frame3["name"].map(lambda x : len(re.findall('[a-zA-Z]', x)))</code></p>
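<p>A vectorized alternative, assuming the same <code>frame3</code>, would be <code>Series.str.count</code> with a regex:</p>
<pre><code># counts occurrences of the one-character pattern per row
frame3["name_len"] = frame3["name"].str.count('[a-zA-Z]')
</code></pre>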
|
string|python-2.7|pandas
| 1
|
378,123
| 28,995,937
|
Convert python byte string to numpy int?
|
<p>Is there a direct way instead of the following?</p>
<pre><code>np.uint32(int.from_bytes(b'\xa3\x8eq\xb5', 'big'))
</code></pre>
|
<p>Using <code>np.fromstring</code> for this is deprecated now. Use <code>np.frombuffer</code> instead. You can also pass in a normal numpy dtype:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
np.frombuffer(b'\xa3\x8eq\xb5', dtype=np.uint32)
</code></pre>
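<p>Note that plain <code>np.uint32</code> is read in native byte order (little-endian on most machines). To reproduce the big-endian read from the question, spell out the byte order explicitly; a sketch:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# '>u4' = big-endian unsigned 32-bit, matching int.from_bytes(..., 'big')
np.frombuffer(b'\xa3\x8eq\xb5', dtype='>u4')
</code></pre>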
|
python|numpy
| 8
|
378,124
| 29,075,785
|
Assigning real and imaginary parts of a complex array from two arrays containing the two parts - Python
|
<p>I have two arrays, <code>field_in_k_space_REAL</code> and <code>field_in_k_space_IMAGINARY</code>, that contain, respectively, the real and imaginary parts of an array of complex numbers, <code>field_in_k_space_TOTAL</code>, which I would like to create. Why does the following assignment not work, producing the error</p>
<pre><code>AttributeError: attribute 'real' of 'numpy.generic' objects is not writable
field_in_k_space_TOTAL = zeros(n, complex)
for i in range(n):
field_in_k_space_TOTAL[i].real = field_in_k_space_REAL[i]
field_in_k_space_TOTAL[i].imag = field_in_k_space_IMAGINARY[i]
</code></pre>
|
<p>The suggestions by @Ffisegydd (and by @jonsharpe in a comment) are good ones. See if those work for you.</p>
<p>Here, I'll just point out that the <code>real</code> and <code>imag</code> attributes of the <em>array</em> are writeable, and the vectorized assignment works, so you can simplify your code to</p>
<pre><code>field_in_k_space_TOTAL = zeros(n, complex)
field_in_k_space_TOTAL.real = field_in_k_space_REAL
field_in_k_space_TOTAL.imag = field_in_k_space_IMAGINARY
</code></pre>
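<p>For completeness, the array can also be built in a single expression, assuming both input arrays are real-valued:</p>
<pre><code># 1j promotes the sum to a complex array in one step
field_in_k_space_TOTAL = field_in_k_space_REAL + 1j * field_in_k_space_IMAGINARY
</code></pre>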
|
python|arrays|numpy|complex-numbers
| 4
|
378,125
| 33,926,947
|
How to perform cumulative calculations in pandas that restart with each change in date?
|
<p>This is a simplified version of my data:</p>
<pre><code> Date and Time Price Volume
0 2015-01-01 17:00:00.211 2030.25 342
1 2015-01-01 17:00:02.456 2030.75 203
2 2015-01-02 17:00:00.054 2031.00 182
3 2015-01-02 17:00:25.882 2031.75 249
</code></pre>
<p>I would like to calculate cumulative volume per day, so that the end result would be something like:</p>
<pre><code>data['cum_Vol'] = data['Volume'].cumsum()
</code></pre>
<p>Output:</p>
<pre><code> Date and Time Price Volume cum_Vol
0 2015-01-01 17:00:00.211 2030.25 342 342
1 2015-01-01 17:00:02.456 2030.75 203 545
2 2015-01-02 17:00:00.054 2031.00 182 182
3 2015-01-02 17:00:25.882 2031.75 249 431
</code></pre>
<p>Notice how instead of doing the regular <code>cumsum()</code>, the calculation re-starts when there's a change in Date, in the example from 2015-01-01 to 2015-01-02.</p>
|
<p>The easiest way would probably be to set 'Date and Time' as the index and then use <code>groupby</code> with <code>TimeGrouper</code> to group the dates. Then you can apply <code>cumsum()</code>:</p>
<pre><code>>>> df2 = df.set_index('Date and Time')
>>> df2['Volume'] = df2.groupby(pd.TimeGrouper('D'))['Volume'].cumsum()
>>> df2
Price Volume
DateandTime
2015-01-01 17:00:00.211 2030.25 342
2015-01-01 17:00:02.456 2030.75 545
2015-01-02 17:00:00.054 2031.00 182
2015-01-02 17:00:25.882 2031.75 431
</code></pre>
<p>You can always reset the index again afterwards.</p>
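<p>Alternatively, if you'd rather not touch the index, you can group on the calendar date directly; a sketch assuming the 'Date and Time' column is already <code>datetime64</code>:</p>
<pre><code># one group per calendar day; cumsum restarts within each group
df['cum_Vol'] = df.groupby(df['Date and Time'].dt.date)['Volume'].cumsum()
</code></pre>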
|
python|pandas|date|dataframe|cumsum
| 4
|
378,126
| 33,942,316
|
Pandas: Combining and summing rows based on values from other rows
|
<p>In a Pandas data frame, I'd like to combine all <code>'other'</code> rows from <code>col_2</code> into one row for each value of <code>col_1</code>, assigning <code>col_3</code> the sum of all corresponding values.</p>
<p><strong>EDIT - Clarification:</strong> In total, I have about 20 columns (the values in those columns are unique for each col_1). There are 80,000 <code>other</code> fields; however, only the three columns shown here affect my question.</p>
<p><strong>Current dataframe <code>df</code>:</strong></p>
<pre><code>col_1 col_2 col_3
1 a 30
1 b 25
1 other 1
1 other 5
2 a 321
2 b 1
2 other 45
2 other 52
2 other 17
2 other 8
</code></pre>
<p><strong>Desired result:</strong></p>
<pre><code>col_1 col_2 col_3
1 a 30
1 b 25
1 other 6
2 a 321
2 b 1
2 other 122
</code></pre>
<p>How can I do this in Pandas?</p>
|
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas.DataFrame.groupby" rel="nofollow"><code>groupby</code></a> on col_1 and col_2 and call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html#pandas.core.groupby.GroupBy.sum" rel="nofollow"><code>sum</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html#pandas.DataFrame.reset_index" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>In [188]:
df.groupby(['col_1','col_2']).sum().reset_index()
Out[188]:
col_1 col_2 col_3
0 1 a 30
1 1 b 25
2 1 other 6
3 2 a 321
4 2 b 1
5 2 other 122
</code></pre>
|
python-2.7|pandas
| 0
|
378,127
| 33,846,323
|
how to assign labels of one numpy array to another numpy array and group accordingly?
|
<p>I have the following labels </p>
<pre><code>>>> lab
array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1])
</code></pre>
<p>I want to assign these labels to another numpy array, i.e.</p>
<pre><code>>>> arr
array([[81, 1, 3, 87], # 2
[ 2, 0, 1, 0], # 2
[13, 6, 0, 0], # 2
[14, 0, 1, 30], # 2
[ 0, 0, 0, 0], # 2
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[ 0, 0, 0, 0], # 0
[13, 2, 0, 11]]) # 1
</code></pre>
<p>and sum the elements of the 0th, 1st, 2nd and 3rd groups?</p>
|
<p>If the labels of equal values are contiguous, as in your example, then you may use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow"><code>np.add.reduceat</code></a>:</p>
<pre><code>>>> lab
array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1])
>>> idx = np.r_[0, 1 + np.where(lab[1:] != lab[:-1])[0]]
>>> np.add.reduceat(arr, idx)
array([[110, 7, 5, 117], # 2
[ 0, 0, 0, 0], # 3
[ 0, 0, 0, 0], # 0
[ 13, 2, 0, 11]]) # 1
</code></pre>
<p>if they are not contiguous, then use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>np.argsort</code></a> to align the array and labels such that labels of the same values are next to each other:</p>
<pre><code>>>> i = np.argsort(lab)
>>> lab, arr = lab[i], arr[i, :] # aligns array and labels such that labels
>>> lab # are sorted and equal labels are contiguous
array([0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 3, 3])
>>> idx = np.r_[0, 1 + np.where(lab[1:] != lab[:-1])[0]]
>>> np.add.reduceat(arr, idx)
array([[ 0, 0, 0, 0], # 0
[ 13, 2, 0, 11], # 1
[110, 7, 5, 117], # 2
[ 0, 0, 0, 0]]) # 3
</code></pre>
<p>or alternatively use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby</code> from pandas library</a>:</p>
<pre><code>>>> pd.DataFrame(arr).groupby(lab).sum().values
array([[ 0, 0, 0, 0],
[ 13, 2, 0, 11],
[110, 7, 5, 117],
[ 0, 0, 0, 0]])
</code></pre>
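<p>A third option that needs neither sorting nor pandas is a scatter-add with <code>np.add.at</code>; a sketch assuming the labels run from 0 to some maximum:</p>
<pre><code># out[l] accumulates the rows of arr whose label is l
out = np.zeros((lab.max() + 1, arr.shape[1]), dtype=arr.dtype)
np.add.at(out, lab, arr)
</code></pre>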
|
python-3.x|numpy
| 1
|
378,128
| 33,680,750
|
convert (nx2) array of floats into (nx1) array of 2-tuples
|
<p>I have a NumPy float array</p>
<pre class="lang-py prettyprint-override"><code>x = np.array([
[0.0, 1.0],
[2.0, 3.0],
[4.0, 5.0]
],
dtype=np.float32
)
</code></pre>
<p>and need to convert it into a NumPy array with a tuple dtype,</p>
<pre class="lang-py prettyprint-override"><code>y = np.array([
(0.0, 1.0),
(2.0, 3.0),
(4.0, 5.0)
],
dtype=np.dtype((np.float32, 2))
)
</code></pre>
<p>NumPy <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html" rel="nofollow"><code>view</code></a>s unfortunately don't work here:</p>
<pre class="lang-py prettyprint-override"><code>y = x.view(dtype=np.dtype((np.float32, 2)))
</code></pre>
<pre><code>ValueError: new type not compatible with array.
</code></pre>
<p>Is there a chance to get this done without iterating through <code>x</code> and copying over every single entry?</p>
|
<p>This is close:</p>
<pre><code>In [122]: dt=np.dtype([('x',float,(2,))])
In [123]: y=np.zeros(x.shape[0],dtype=dt)
In [124]: y
Out[124]:
array([([0.0, 0.0],), ([0.0, 0.0],), ([0.0, 0.0],)],
dtype=[('x', '<f8', (2,))])
In [125]: y['x']=x
In [126]: y
Out[126]:
array([([0.0, 1.0],), ([2.0, 3.0],), ([4.0, 5.0],)],
dtype=[('x', '<f8', (2,))])
In [127]: y['x']
Out[127]:
array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.]])
</code></pre>
<p><code>y</code> has one compound field. That field has 2 elements.</p>
<p>Alternatively you could define 2 fields:</p>
<pre><code>In [134]: dt=np.dtype('f,f')
In [135]: x.view(dt)
Out[135]:
array([[(0.0, 1.0)],
[(2.0, 3.0)],
[(4.0, 5.0)]],
dtype=[('f0', '<f4'), ('f1', '<f4')])
</code></pre>
<p>But that is shape (3,1); so reshape:</p>
<pre><code>In [137]: x.view(dt).reshape(3)
Out[137]:
array([(0.0, 1.0), (2.0, 3.0), (4.0, 5.0)],
dtype=[('f0', '<f4'), ('f1', '<f4')])
</code></pre>
<p>Apart from the dtype, this displays the same as your <code>y</code>.</p>
|
python|arrays|numpy|casting|user-defined-types
| 1
|
378,129
| 33,559,875
|
Querying dataframe based on numerical similarities between rows
|
<p>I have a dataframe like this:</p>
<pre><code> Allotment Date NDVI_Kurtosis NDVI_Skewness
1 D 19840621 1.02 3.06
2 D 19850619 1.76 2.56
3 A 19840621 3.66 3.50
4 A 19850619 1.56 3.20
</code></pre>
<p>and I want to return every <code>Allotment</code> and associated <code>Date</code> if BOTH the <code>NDVI_Kurtosis</code> and <code>NDVI_Skewness</code> are within 1.00 of each other between the different rows. So in this case, I would want this returned:</p>
<pre><code> Allotment Date NDVI_Kurtosis NDVI_Skewness
D 19840621 1.02 3.06
D 19850619 1.76 2.56
A 19850619 1.56 3.20
</code></pre>
<p>I have played around with using <code>iloc</code> for this but have been unsuccessful so far.</p>
|
<p>You can use the shift function to create new columns, and then compare them to the initial columns.</p>
<pre><code>import pandas as pd
df=pd.read_clipboard()
df['NDVI_Kurtosis_lag']=df['NDVI_Kurtosis'].shift(1).fillna(df['NDVI_Kurtosis'])
df['NDVI_Skewness_lag']=df['NDVI_Skewness'].shift(1).fillna(df['NDVI_Skewness'])
df
# abs() makes "within 1.00 of each other" symmetric in both directions
df2=df[((df['NDVI_Kurtosis']-df['NDVI_Kurtosis_lag']).abs()<1) & ((df['NDVI_Skewness']-df['NDVI_Skewness_lag']).abs()<1)]
df2
</code></pre>
<p>Cordially,
Laurent.</p>
|
python|pandas
| 0
|
378,130
| 33,794,750
|
replace min value to another in numpy array
|
<p>Let's say we have this array, and I want to replace the minimum value with the number 50:</p>
<pre><code>import numpy as np
numbers = np.arange(20)
numbers[numbers.min()] = 50
</code></pre>
<p>So the output is <code>[50,1,2,3,...,19]</code></p>
<p>But now I have problems with this:</p>
<pre><code>numbers = np.arange(20).reshape(5,4)
numbers[numbers.min(axis=1)]=50
</code></pre>
<p>to get <code>[[50,1,2,3],[50,5,6,7],....]</code></p>
<p>However I get this error:</p>
<blockquote>
<p>IndexError: index 8 is out of bounds for axis 0 with size 5 .... </p>
</blockquote>
<p>Any ideas for help?</p>
|
<p>You need to use <code>numpy.argmin</code> instead of <code>numpy.min</code>:</p>
<pre><code>In [89]: numbers = np.arange(20).reshape(5,4)
In [90]: numbers[np.arange(len(numbers)), numbers.argmin(axis=1)] = 50
In [91]: numbers
Out[91]:
array([[50, 1, 2, 3],
[50, 5, 6, 7],
[50, 9, 10, 11],
[50, 13, 14, 15],
[50, 17, 18, 19]])
In [92]: numbers = np.arange(20).reshape(5,4)
In [93]: numbers[1,3] = -5 # Let's make sure that mins are not on same column
In [94]: numbers[np.arange(len(numbers)), numbers.argmin(axis=1)] = 50
In [95]: numbers
Out[95]:
array([[50, 1, 2, 3],
[ 4, 5, 6, 50],
[50, 9, 10, 11],
[50, 13, 14, 15],
[50, 17, 18, 19]])
</code></pre>
<p>(I believe my original answer was incorrect, I confused rows and columns, and this is right)</p>
|
python|arrays|numpy|min
| 7
|
378,131
| 33,728,965
|
difficult to run the program
|
<p>I'm trying to run the program below, but have run into this error.</p>
<p>Traceback (most recent call last):</p>
<pre><code>File "C:\Users\danil\Desktop\HK1.py", line 76, in <module>
img1c = cv2.imread(sys.argv[1])
IndexError: list index out of range
</code></pre>
<p>The code I am using is described below:</p>
<pre><code>import cv2
import sys
import numpy as np
def Pixel(img, i, j):
i = i if i >= 0 else 0
j = j if j >= 0 else 0
i = i if i < img.shape[0] else img.shape[0] - 1
j = j if j < img.shape[1] else img.shape[1] - 1
return img[i, j]
def xDer(img1, img2):
res = np.zeros_like(img1)
for i in range(res.shape[0]):
for j in range(res.shape[1]):
sm = 0
sm += Pixel(img1, i, j + 1) - Pixel(img1, i, j)
sm += Pixel(img1, i + 1, j + 1) - Pixel(img1, i + 1, j)
sm += Pixel(img2, i, j + 1) - Pixel(img2, i, j)
sm += Pixel(img2, i + 1, j + 1) - Pixel(img2, i + 1, j)
sm /= 4.0
res[i, j] = sm
return res
def yDer(img1, img2):
res = np.zeros_like(img1)
for i in range(res.shape[0]):
for j in range(res.shape[1]):
sm = 0
sm += Pixel(img1, i + 1, j ) - Pixel(img1, i, j )
sm += Pixel(img1, i + 1, j + 1) - Pixel(img1, i, j + 1)
sm += Pixel(img2, i + 1, j ) - Pixel(img2, i, j )
sm += Pixel(img2, i + 1, j + 1) - Pixel(img2, i, j + 1)
sm /= 4.0
res[i, j] = sm
return res
def tDer(img, img2):
res = np.zeros_like(img)
for i in range(res.shape[0]):
for j in range(res.shape[1]):
sm = 0
for ii in range(i, i + 2):
for jj in range(j, j + 2):
sm += Pixel(img2, ii, jj) - Pixel(img, ii, jj)
sm /= 4.0
res[i, j] = sm
return res
averageKernel = np.array([[ 0.08333333, 0.16666667, 0.08333333],
[ 0.16666667, 0. , 0.16666667],
[ 0.08333333, 0.16666667, 0.08333333]], dtype=np.float32)
def average(img):
return cv2.filter2D(img.astype(np.float32), -1, averageKernel)
def translateBrute(img, u, v):
res = np.zeros_like(img)
u = np.round(u).astype(np.int)
v = np.round(v).astype(np.int)
for i in range(img.shape[0]):
for j in range(img.shape[1]):
res[i, j] = Pixel(img, i + v[i, j], j + u[i, j])
return res
def hornShunckFlow(img1, img2, alpha):
img1 = img1.astype(np.float32)
img2 = img2.astype(np.float32)
Idx = xDer(img1, img2)
Idy = yDer(img1, img2)
Idt = tDer(img1, img2)
u = np.zeros_like(img1)
v = np.zeros_like(img1)
#100 iterations enough for small example
for iteration in range(100):
u0 = np.copy(u)
v0 = np.copy(v)
uAvg = average(u0)
vAvg = average(v0)
u = uAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idx * (Idx * uAvg + Idy * vAvg + Idt)
v = vAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idy * (Idx * uAvg + Idy * vAvg + Idt)
return u, v
if __name__ == '__main__':
img1c = cv2.imread(sys.argv[1])
img2c = cv2.imread(sys.argv[2])
img1g = cv2.cvtColor(img1c, cv2.COLOR_BGR2GRAY)
img2g = cv2.cvtColor(img2c, cv2.COLOR_BGR2GRAY)
u, v = hornShunckFlow(img1g, img2g, 0.1)
imgRes = translateBrute(img2g, u, v)
cv2.imwrite('Movimento.jpg', imgRes)
print img1g
print translateBrute(img2g, u, v)
</code></pre>
|
<pre><code>if __name__ == '__main__':
img1c = cv2.imread(sys.argv[1])
img2c = cv2.imread(sys.argv[2])
</code></pre>
<p>needs to be changed to something like</p>
<pre><code>if __name__ == '__main__'
if sys.argv[2:]:
arg1 = sys.argv[1]
arg2 = sys.argv[2]
else:
print('need at least 2 arguments')
sys.exit(2)
img1c = cv2.imread(arg1)
img2c = cv2.imread(arg2)
...
</code></pre>
<p>And then it needs to be called with 2 file names (in addition to the script .py).</p>
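<p>A slightly more structured variant using <code>argparse</code> (a sketch; the argument names are mine):</p>
<pre><code>import argparse
import cv2

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='optical flow demo')
    parser.add_argument('image1', help='path to the first frame')
    parser.add_argument('image2', help='path to the second frame')
    args = parser.parse_args()  # exits with a usage message if args are missing
    img1c = cv2.imread(args.image1)
    img2c = cv2.imread(args.image2)
</code></pre>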
|
python|numpy
| 0
|
378,132
| 23,880,138
|
Display a 3D bar graph using transparency and multiple colors in matplotlib
|
<p>I have a dataframe where rows represent hours of the day and the columns represent time frequencies. The aim is to create a 3D bar chart in which each column is drawn in a different color. My dataframe is as follows:</p>
<pre><code>frec=pd.read_csv('tiempo.csv', parse_dates='Horas',index_col='Horas')
frec.index=[date.strftime('%H:%M') for date in frec.index]
frec
Inicio MaxExt Fin
18:00 1 1 1
19:00 0 0 3
20:00 1 1 1
21:00 1 1 0
22:00 3 1 0
23:00 9 1 0
00:00 8 3 2
01:00 2 0 1
02:00 3 8 1
03:00 5 3 2
04:00 6 2 6
05:00 6 6 5
06:00 5 6 4
07:00 5 7 2
08:00 2 4 5
09:00 1 6 6
10:00 0 3 2
11:00 2 5 5
12:00 4 1 9
13:00 2 4 2
15:00 0 2 3
14:00 3 2 4
15:00 0 2 3
16:00 1 1 3
17:00 0 2 3
</code></pre>
<p>The following lines of code try to create the plot:</p>
<pre><code>xpos=np.arange(frec.shape[0])
ypos=np.arange(frec.shape[1])
yposM, xposM = np.meshgrid(ypos+0.5, xpos+0.5)
zpos=np.zeros(frec.shape).flatten()
dx = 0.5 * np.ones_like(zpos)
dy= 0.1 * np.ones_like(zpos)
dz=frec.values.ravel()
fig = plt.figure(figsize=(12,9))
ax = fig.add_subplot(111, projection='3d')
values = np.linspace(0.2, 1., xposM.ravel().shape[0])
colors = cm.rainbow(values)
ax.bar3d(xposM.ravel(),yposM.ravel(),zpos,dx,dy,dz,color=colors, alpha=0.5)
ticks_x = np.arange(0.5, 24, 1)
ax.set_xticks(ticks_x)
ticks_y=np.arange(0.6,3,1)
ax.set_yticks(ticks_y)
ax.w_xaxis.set_ticklabels(frec.index)
ax.w_yaxis.set_ticklabels(frec.columns)
ax.set_xlabel('Hora')
ax.set_ylabel('B')
ax.set_zlabel('Occurrence')
plt.xticks(ticks_x, ['1PM','2PM','3PM','4PM','5PM','6PM','7PM','8PM','9PM','10PM','11PM','12AM','1AM','2AM','3AM','4AM','5AM','6AM','7AM','8AM','9AM','10AM','11AM','12PM'])
fig.autofmt_xdate()
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/4yPqI.png" alt="enter image description here"></p>
<p>How do I get a plot where each column is drawn with a different color? I.e., the bars of the Inicio column should be blue, the bars of the MaxExt column red and the bars of the Fin column yellow.</p>
|
<p>Create <code>colors</code> with the following method:</p>
<pre><code>values = np.linspace(0.2, 1., frec.shape[0])
cmaps = [cm.Blues, cm.Reds, cm.Greens]
colors = np.hstack([c(values) for c in cmaps]).reshape(-1, 4)
</code></pre>
<p>Here is the output:</p>
<p><img src="https://i.stack.imgur.com/I7m1a.png" alt="enter image description here"></p>
|
python|matplotlib|pandas
| 3
|
378,133
| 23,667,574
|
How to add two columns efficiently in Pandas DataFrame?
|
<p>I have quite large dataset (over 6 million rows with just a few columns). When I try to add two float64 columns (data['C'] = data.A + data.B) it gives me a memory error:</p>
<pre><code>Traceback (most recent call last):
File "01_processData.py", line 354, in <module>
prepareData(snp)
File "01_processData.py", line 161, in prepareData
data['C'] = data.A + data.C
File "/usr/local/lib/python2.7/dist-packages/pandas/core/ops.py", line 480, in wrapper
return_indexers=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/tseries/index.py", line 976, in join
return_indexers=return_indexers)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/index.py", line 1304, in join
return_indexers=return_indexers)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/index.py", line 1345, in _join_non_unique
how=how, sort=True)
File "/usr/local/lib/python2.7/dist-packages/pandas/tools/merge.py", line 465, in _get_join_indexers
return join_func(left_group_key, right_group_key, max_groups)
File "join.pyx", line 152, in pandas.algos.full_outer_join (pandas/algos.c:34716)
MemoryError
</code></pre>
<p>I understand that this operation uses the index to properly align the output, but it seems inefficient, since the two columns belong to the same DataFrame and therefore have perfect alignment.</p>
<p>I was able to solve the problem by using</p>
<p>data['C'] = data.A.values + data.B.values</p>
<p>but I wonder if there is a method designed to do this or more elegant solution?</p>
|
<p>I cannot reproduce what you are doing (it won't hit the alignment issue, as the indexes are the same).</p>
<p>In master/0.14 (releasing shortly)</p>
<pre><code>In [2]: df = DataFrame(np.random.randn(6000000,2),columns=['A','C'],index=pd.MultiIndex.from_product([['foo','bar'],range(3000000)]))
In [3]: df.values.nbytes
Out[3]: 96000000
In [4]: %memit df['D'] = df['A'] + df['C']
maximum of 1: 625.839844 MB per loop
</code></pre>
<p>However, in 0.13.1 (I do remember some optimizations went into 0.14):</p>
<pre><code>In [3]: %memit df['D'] = df['A'] + df['C']
maximum of 1: 1113.671875 MB per loop
</code></pre>
|
python|pandas
| 2
|
378,134
| 23,574,614
|
Appending Pandas dataframe to sqlite table by primary key
|
<p>I want to append the Pandas dataframe to an existing table in a sqlite database called 'NewTable'. NewTable has three fields (ID, Name, Age) and ID is the primary key. My database connection:</p>
<pre><code>import sqlite3
DB='<path>'
conn = sqlite3.connect(DB)
</code></pre>
<p>The dataframe I want to append:</p>
<pre><code>test=pd.DataFrame(columns=['ID','Name','Age'])
test.loc[0,:]='L1','John',17
test.loc[1,:]='L11','Joe',30
</code></pre>
<p>As mentioned above, ID is the primary key in NewTable. The key 'L1' is already in NewTable, but key 'L11' is not. I try to append the dataframe to NewTable. </p>
<pre><code>from pandas.io import sql
sql.write_frame(test,name='NewTable',con=conn,if_exists='append')
</code></pre>
<p>This throws an error:</p>
<pre><code>IntegrityError: column ID is not unique
</code></pre>
<p>The error is likely due to the fact that key 'L1' is already in NewTable. Neither of the entries in the dataframe is appended to NewTable. But I can append dataframes with new keys to NewTable without problem.</p>
<p>Is there a simple way (e.g., without looping) to append Pandas dataframes to a sqlite table such that new keys in the dataframe are appended, but keys that already exist in the table are not?</p>
<p>Thanks.</p>
|
<p>You can use SQLite's <code>insert or replace</code> functionality:</p>
<pre><code>query=''' insert or replace into NewTable (ID,Name,Age) values (?,?,?) '''
conn.executemany(query, test.to_records(index=False))
conn.commit()
</code></pre>
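<p>If rows with an existing key should be skipped rather than overwritten (which is what the question asks for), swap in <code>insert or ignore</code>:</p>
<pre><code>query = ''' insert or ignore into NewTable (ID,Name,Age) values (?,?,?) '''
conn.executemany(query, test.to_records(index=False))
conn.commit()
</code></pre>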
|
python|sqlite|pandas
| 11
|
378,135
| 23,530,848
|
xtick max value setup with panda and matplotlib
|
<p>How can I make the last xtick be the last value of the data?</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
a = [583, 1379, 1404, 5442, 5512, 5976, 5992, 6111, 6239, 6375, 6793, 6994, 7109, 7149, 7210, 7225, 7459, 8574, 9154, 9417, 9894, 10119, 10355, 10527, 11933, 12170, 12172, 12273, 12870, 13215, 13317, 13618, 13632, 13790, 14260, 14282, 14726, 15156, 19262, 21501, 21544, 21547, 21818, 21939, 22386, 22622, 22830, 23898, 25796, 26126, 26340, 28585, 28645, 28797, 28808, 28878, 29167, 29168, 31161, 31225, 32284, 32332, 32641, 33227, 34175, 34349, 34675, 34772, 34935, 35086]
d = pd.DataFrame(a)
d.hist(bins=50)
f = plt.figure()
plt.xlabel("xTEST")
plt.ylabel("yTEST")
plt.subplots_adjust(bottom=.25, left=.25)
plt.ticklabel_format(style = 'plain')
plt.title('Title here!', color='black')
plt.xlim(xmin=1, xmax=35086)
plt.show()
</code></pre>
|
<p>Not sure exactly what you need, but you'll need to delete the line</p>
<pre><code>f = plt.figure()
</code></pre>
<p>if you want your histogram of <code>d</code> and the title/label/xlim etc. to be in the same chart.</p>
|
python|matplotlib|pandas
| 1
|
378,136
| 23,896,336
|
Can't see matplotlib plots in iPython notebook
|
<p>I'm trying to show pandas-generated plots in iPython notebooks (running with <code>pylab=inline</code>), but these have mysteriously stopped working—I'll do something like:</p>
<pre><code>In [6]: pd.Series([0,2,4,3,8]).plot()
Out[6]: <matplotlib.axes.AxesSubplot at 0x10e69e110>
<matplotlib.figure.Figure at 0x10eb40d90>
</code></pre>
<p>Note: there's no plots here, just the text.</p>
<p>I do, however, get these errors in the console where I'm running iPython:</p>
<pre><code>libpng warning: Application built with libpng-1.5.18 but running with 1.6.10
</code></pre>
<p>How do I sort this out and get plots working again?</p>
<p>(I have libpng installed through homebrew, iPython v.1.1.0, matplotlib v.1.3.1)</p>
<hr>
<p>UPDATE: Now I'm using iPython v.2.1.0.</p>
<p>I still get the libpng error, but in the notebook I now get</p>
<pre><code>In [2]: pd.Series([0,2,4,3,8]).plot()
Out[2]: <matplotlib.axes.AxesSubplot at 0x112821110>
/Library/Python/2.7/site-packages/IPython/core/formatters.py:239: FormatterWarning: Exception in image/png formatter: Could not create write struct
FormatterWarning,
<matplotlib.figure.Figure at 0x112788a50>
</code></pre>
<p>So... progress?</p>
<p>(I am also now using <code>%pylab inline</code> in the document, instead of using it as a command-line flag.)</p>
|
<pre><code> %pylab inline
</code></pre>
<p>works fine in my env:</p>
<pre><code> %pylab inline
pd.Series([0,2,4,3,8]).plot()
</code></pre>
<p><img src="https://i.stack.imgur.com/o71QF.png" alt="enter image description here"></p>
<p>my IPython (module) versions:</p>
<pre><code>Jinja2==2.7.3
MarkupSafe==0.23
backports.ssl-match-hostname==3.4.0.2
certifi==14.05.14
gnureadline==6.3.3
ipython==2.3.0
matplotlib==1.4.2
mock==1.0.1
nose==1.3.4
numpy==1.9.1
pandas==0.15.1
pyparsing==2.0.3
python-dateutil==2.2
pytz==2014.9
pyzmq==14.4.1
six==1.8.0
tornado==4.0.2
wsgiref==0.1.2
</code></pre>
|
macos|matplotlib|pandas|ipython|libpng
| 1
|
378,137
| 22,848,196
|
Using Pandas dataframe with FOR loops
|
<p>and thank you for looking.</p>
<p>I am trying my hand at modifying a Python script to download a bunch of data from a website. Given the large amount of data that will be used, I have decided to convert the script to Pandas. I have this code so far:</p>
<pre><code>snames = ['Index','Node_ID','Node','Id','Name','Tag','Datatype','Engine']
sensorinfo = pd.read_csv(sensorpath, header = None, names = snames, index_col=['Node', 'Index'])
for j in sensorinfo['Node']:
for z in sensorinfo['Index']:
# create a string for the url of the data
data_url = "http://www.mywebsite.com/emoncms/feed/data.json?id=" + sensorinfo['Id'] + "&apikey1f8&start=&end=&dp=600"
print data_url
# read in the data from emoncms
sock = urllib.urlopen(data_url)
data_str = sock.read()
sock.close
# data is output as a string so we convert it to a list of lists
data_list = eval(data_str)
myfile = open(feed_list['Name'[k]] + ".csv",'wb')
wr=csv.writer(myfile,quoting=csv.QUOTE_ALL)
</code></pre>
<p>The first part of the code gives me a very nice table, which means I am opening my csv data file and importing the information. My question is this:</p>
<p>So I am trying to do this in pseudo code:</p>
<pre><code>For node is nodes (4 nodes so far)
For index in indexes
data_url = websiteinfo + Id + sampleinformation
smalldata.read.csv(data_url)
merge(bigdata, smalldata.no_time_column)
</code></pre>
<p>This is my first post here, I tried to keep it short but still supply the relevant data. Let me know if I need to clarify anything.</p>
|
<p>In your pseudocode, you can do this:</p>
<pre><code>dfs = []
For node is nodes (4 nodes so far)
For index in indexes
data_url = websiteinfo + Id + sampleinformation
df = smalldata.read.csv(data_url)
dfs.append(df)
df = pd.concat(dfs)
</code></pre>
|
python|pandas
| 0
|
378,138
| 22,546,425
|
How to implement a Boolean search with multiple columns in pandas
|
<p>I have a pandas df and would like to accomplish something along these lines (in SQL terms):</p>
<pre><code>SELECT * FROM df WHERE column1 = 'a' OR column2 = 'b' OR column3 = 'c' etc.
</code></pre>
<p>Now this works, for one column/value pair:</p>
<pre><code>foo = df.loc[df['column']==value]
</code></pre>
<p>However, I'm not sure how to expand that to multiple column/value pairs.</p>
<ul>
<li>To be clear, each column matches a different value.</li>
</ul>
|
<p>You need to enclose multiple conditions in parentheses due to operator precedence and use the bitwise and (<code>&</code>) and or (<code>|</code>) operators:</p>
<pre><code>foo = df[(df['column1']==value) | (df['columns2'] == 'b') | (df['column3'] == 'c')]
</code></pre>
<p>If you use <code>and</code> or <code>or</code>, then pandas is likely to moan that the comparison is ambiguous. In that case, it is unclear whether we are comparing every value in the series in the condition, and what it means if only 1, or all but 1, match the condition. That is why you should use the bitwise operators or the numpy <code>np.all</code> or <code>np.any</code> to specify the matching criteria.</p>
<p>There is also the query method: <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.query.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.query.html</a></p>
<p>but there are some limitations mainly to do with issues where there could be ambiguity between column names and index values.</p>
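<p>A sketch of the <code>query</code> form for this case (column names must be valid Python identifiers, and <code>@value</code> refers to a local variable):</p>
<pre><code>foo = df.query("column1 == @value or column2 == 'b' or column3 == 'c'")
</code></pre>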
|
python|pandas
| 115
|
378,139
| 15,011,935
|
ConfigNumParser ValueError: invalid literal for int() with base 10: ''
|
<p>This code is typed nearly exactly from a <a href="http://www.scipy.org/Cookbook/Reading_Custom_Text_Files_with_Pyparsing" rel="nofollow">scipy.org cookbook recipe</a> and I can't yet notice any typo's so perhaps the code is outdated? Why does this code parse the numbers correctly but fail on the KeyWord() and QuotedString() methods?</p>
<pre><code> #use the Regex element to rapidly detect strings representing numbers:
from re import VERBOSE
number = Regex(r"""
[+-]? #optional sign
(
(?:\d+(?P<float1>\.\d*)?) # match 2 or 2.02
| # or
(?P<float2>\.\d+)? # match .02
)
(?P<float3>[Ee][+-]?\d+)? #optional exponent
""", flags=VERBOSE
)
# a function to convert this string into python float or integer and set a
# parseAction to tell pyparsing to automatically convert a number when it finds
# one:
def convert_number(t):
"""Convert a string matching a number to a python number"""
print "Converting " + str(t)
if t.float1 or t.float2 or t.float3:
return [float(t[0])]
else:
return [int(t[0])]
#try:
# return [int(t[0])]
#except:
# return t
number.setParseAction(convert_number)
# create a list of element converting strings to python objects:
from numpy import NAN
pyvalue_list = [
number,
Keyword('True').setParseAction(replaceWith(True)),
Keyword('False').setParseAction(replaceWith(False)),
Keyword('NAN', caseless=True).setParseAction(replaceWith(NAN)),
Keyword('None').setParseAction(replaceWith(None)),
QuotedString('"""', multiline=True),
QuotedString("'''", multiline=True),
QuotedString('"'),
QuotedString("'"),
]
pyvalue = MatchFirst( e.setWhitespaceChars(' \t\r') for e in pyvalue_list)
</code></pre>
<p>According to the recipe my output should be:</p>
<pre><code>>>> test2 = '''
>>> 1 2 3.0 0.3 .3 2e2 -.2e+2 +2.2256E-2
>>> True False nan NAN None
>>> "word" "two words"
>>> """'more words', he said"""
>>> '''
>>> print pyValue.searchString(test2)
[[1], [2], [3.0], [0.29999999999999999], [0.29999999999999999], [200.0], [-20.0], [0.022256000000000001],
[True], [False], [nan], [nan], [None], ['word'], ['two words'], ["'more words', he said"]]
</code></pre>
<p>But I get ValueError: invalid literal for int() with base 10: '', so I added a print statement to help debug; here is the terminal session:</p>
<pre><code> Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigNumParser as parser
>>> test2 = '''
... 1 2 3.0 0.3 .3 2e3 -.2e+2 +2.2256E-2
... True False nan NAN None
... "word" "two words"
... """'more words', he daid"""
... '''
>>> print parser.pyvalue.searchString(test2)
Converting ['1']
Converting ['2']
Converting ['3.0']
Converting ['0.3']
Converting ['.3']
Converting ['2e3']
Converting ['-.2e+2']
Converting ['+2.2256E-2']
Converting ['']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\Lib\site-packages\pyparsing.py", line 1099, in searchString
return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ])
File "C:\Python27\Lib\site-packages\pyparsing.py", line 1036, in scanString
nextLoc,tokens = parseFn( instring, preloc, callPreParse=False )
File "C:\Python27\Lib\site-packages\pyparsing.py", line 871, in _parseNoCache
loc,tokens = self.parseImpl( instring, preloc, doActions )
File "C:\Python27\Lib\site-packages\pyparsing.py", line 2451, in parseImpl
ret = e._parse( instring, loc, doActions )
File "C:\Python27\Lib\site-packages\pyparsing.py", line 897, in _parseNoCache
tokens = fn( instring, tokensStart, retTokens )
File "C:\Python27\Lib\site-packages\pyparsing.py", line 660, in wrapper
ret = func(*args[limit[0]:])
File "ConfigNumParser.py", line 33, in convert_number
return [int(t[0])]
ValueError: invalid literal for int() with base 10: ''
</code></pre>
<p>So after searching several suggestions here, I added the try-except you see in the commented-out area above. The results now are:</p>
<pre><code> Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigNumParser as parser
>>> test2 = '''
... 1 2 3.0 0.3 .3 2e3 -.2e+2 +2.2256E-2
... True False nan NAN None
... "word" "two words"
... """'more words', he daid"""
... '''
>>> print parser.pyvalue.searchString(test2)
Converting ['1']
Converting ['2']
Converting ['3.0']
Converting ['0.3']
Converting ['.3']
Converting ['2e3']
Converting ['-.2e+2']
Converting ['+2.2256E-2']
Converting ['']
Converting ['']
Converting ['']
<deleted 65+ more of these>
Converting ['']
Converting ['']
Converting ['']
[[1], [2], [3.0], [0.3], [0.3], [2000.0], [-20.0], [0.022256], [''], [''], [''], [''], [''], [''], [''], [''], [''], [''], [''], [''], ['']]
>>>
</code></pre>
<p>While I continue to search & learn, I thought posting the question to the pros would help me and others.</p>
<p>Regards,
Bill</p>
|
<blockquote>
<p>I can't yet notice any typo's so</p>
</blockquote>
<p>...ooops...</p>
<pre><code>(?P<float2>\.\d+)?
</code></pre>
<p>should be</p>
<pre><code>(?P<float2>\.\d+)
</code></pre>
<p>That fixed it. With the trailing <code>?</code>, the second alternative (and hence the whole number pattern) could match the empty string, so the parser produced <code>''</code> tokens and <code>int('')</code> blew up with exactly the error you saw.</p>
|
python|parsing|numpy|scipy|pyparsing
| 2
|
378,140
| 15,412,597
|
How can i combine several database files with numpy?
|
<p>I know that I can read a file with numpy with the genfromtxt command. It works like this:</p>
<pre><code>data = numpy.genfromtxt('bmrbtmp',unpack=True,names=True,dtype=None)
</code></pre>
<p>I can plot the stuff in there easily with:</p>
<pre><code>ax.plot(data['field'],data['field2'], linestyle=" ",color="red")
</code></pre>
<p>or </p>
<pre><code>ax.boxplot(data)
</code></pre>
<p>and it's awesome. What I would really like to do now is read a whole folder of files and combine them into one giant dataset. How do I add datapoints to the <code>data</code> structure?
And how do I read a whole folder at once?</p>
|
<p>To visit all the files in a directory, use <a href="http://docs.python.org/2/library/os.html#os.walk" rel="nofollow">os.walk</a>.</p>
<p>To stack two structured numpy arrays "vertically", use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">np.vstack</a>.</p>
<p>To save the result, use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html" rel="nofollow">np.savetxt</a> to save in a text format, or <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html#numpy.save" rel="nofollow">np.save</a> to save the array in a (smaller) binary format.</p>
<hr>
<pre><code>import os
import numpy as np
result = None
for root, dirs, files in os.walk('.', topdown = True):
for filename in files:
with open(os.path.join(root, filename), 'r') as f:
data = np.genfromtxt(f, unpack=True, names=True, dtype=None)
if result is None:
result = data
else:
result = np.vstack((result, data))
print(result[:10]) # print first 10 lines
np.save('/tmp/outfile.npy', result)
</code></pre>
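<p>One performance note: growing the result with repeated <code>vstack</code> copies the whole array each time. Collecting the pieces in a list and stacking once scales better; a sketch of the same loop:</p>
<pre><code>import os
import numpy as np

chunks = []
for root, dirs, files in os.walk('.', topdown=True):
    for filename in files:
        with open(os.path.join(root, filename), 'r') as f:
            # assumes every file yields the same columns/dtype
            chunks.append(np.genfromtxt(f, unpack=True, names=True, dtype=None))
result = np.concatenate(chunks)  # one copy instead of one per file
</code></pre>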
|
python|numpy|matplotlib
| 2
|
378,141
| 15,125,961
|
Start:stop slicing inconsistencies between numpy and Pandas?
|
<p>I am a bit surprised/confused about the following difference between numpy and Pandas:</p>
<pre><code>import numpy as np
import pandas as pd
a = np.random.randn(10,10)
> a[:3,0, newaxis]
array([[-1.91687144],
[-0.6399471 ],
[-0.10005721]])
</code></pre>
<p>However:</p>
<pre><code>b = pd.DataFrame(a)
> b.ix[:3,0]
0 -1.916871
1 -0.639947
2 -0.100057
3 0.251988
</code></pre>
<p>In other words, numpy does not include the <code>stop</code> index in <code>start:stop</code> notation, but Pandas does. I thought Pandas was based on Numpy. Is this a bug? Intentional? </p>
|
<p>This <em>is</em> documented, and it's part of <a href="http://pandas.pydata.org/pandas-docs/dev/indexing.html#indexing-advanced" rel="nofollow">Advanced Indexing</a>. The key here is that you're not using a stop index at all.</p>
<p>The <code>ix</code> attribute is a special thing that lets you do various kinds of advanced indexing <em>by label</em>—choosing a list of labels, using an inclusive range of labels instead of a half-exclusive range of indices, and various other things.</p>
<p>If you don't want that, just don't use it:</p>
<pre><code>In [191]: b[:3][0]
Out[191]:
0 -0.209386
1 0.050345
2 0.318414
Name: 0
</code></pre>
<p>If you play with this a bit more without reading the docs, you'll probably come up with a case where your labels are, say, <code>'A', 'B', 'C', 'D'</code> instead of <code>0, 1, 2, 3</code>, and suddenly, <code>b.ix[:3]</code> will return only 3 rows instead of 4, and you'll be baffled all over again.</p>
<p>The difference is that in that case, <code>b.ix[:3]</code> is a slice on <em>indices</em>, not on <em>labels</em>.</p>
<p>What you've requested in your code is actually ambiguous between "all labels up to an including 3" and "all indices up to but not including 3", and labels always win with <code>ix</code> (because if you don't want label slicing, you don't have to use <code>ix</code> in the first place). And that's why I said the problem is that you're not using a stop index at all.</p>
|
python|numpy|pandas
| 3
|
378,142
| 13,419,822
|
pandas dataframe, copy by value
|
<p>I noticed a bug in my program, and it is happening because pandas seems to copy a dataframe by reference instead of by value. I know immutable objects will always be passed by reference, but a pandas dataframe is not immutable, so I do not see why it is passed by reference. Can anyone provide some information?</p>
<p>Thanks!
Andrew</p>
|
<p>Python always passes references to objects into functions, never copies of the objects themselves (this is often called "pass by assignment"), regardless of mutability. If you want to make an explicit copy of a pandas object, try <code>new_frame = frame.copy()</code>.</p>
|
python|pandas
| 41
|
378,143
| 29,417,763
|
Plot rolling mean together with data
|
<p>I have a DataFrame that looks something like this:</p>
<pre><code>####delays:
Worst case Avg case
2014-10-27 2.861433 0.953108
2014-10-28 2.899174 0.981917
2014-10-29 3.080738 1.030154
2014-10-30 2.298898 0.711107
2014-10-31 2.856278 0.998959
2014-11-01 3.118587 1.147104
...
</code></pre>
<p>I would like to plot the data of this DataFrame, together with the rolling mean of the data. I would like the data itself to be a dotted line and the rolling mean to be a full line. The worst case column should be in red, while the average case column should be in blue.</p>
<p>I've tried the following code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
rolling = pd.rolling_mean(delays, 7)
delays.plot(x_compat=True, style='r--')
rolling.plot(style='r')
plt.title('Delays per day on entire network')
plt.xlabel('Date')
plt.ylabel('Minutes')
plt.show()
</code></pre>
<p>Unfortunately, this gives me 2 different plots. One with the data and one with the rolling mean. Also, the worst case column and average case column are both in red.</p>
<p>How can I get this to work?</p>
|
<p>You need to tell pandas where you want to plot. By default pandas creates a new figure.</p>
<p>Just modify these 2 lines:</p>
<pre><code>delays.plot(x_compat=True, style='r--')
rolling.plot(style='r')
</code></pre>
<p>by:</p>
<pre><code>ax_delays = delays.plot(x_compat=True, style='--', color=["r","b"])
rolling.plot(color=["r","b"], ax=ax_delays, legend=0)
</code></pre>
<p>In the 2nd line you now tell pandas to plot on <code>ax_delays</code> and not to show the legend again.</p>
<p>To get 2 different colors for the 2 lines, just pass as many colors via the <code>color</code> argument (see above).</p>
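<p>(A side note for newer pandas: <code>pd.rolling_mean</code> has since been removed; the method form is the equivalent:)</p>
<pre><code>rolling = delays.rolling(window=7).mean()
</code></pre>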
|
python|pandas|matplotlib
| 7
|
378,144
| 29,468,761
|
Columns get lost on pandas frame join
|
<p>I have two datasets:</p>
<ol>
<li>Dataset <strong>A</strong> represents the number of fans a player of a team has in a specific year</li>
<li>Dataset <strong>B</strong> represents the number of wins a team has in a specific game</li>
</ol>
<p>I would now like to combine both data frames and aggregate the data per year per team.</p>
<pre><code>a = pd.DataFrame({
'year': [1995, 1995, 1995, 1995, 1996, 1996, 1996, 1996],
'team': ['Panthers', 'Panthers', 'Eagles', 'Eagles', 'Panthers', 'Panthers', 'Eagles', 'Eagles'],
'name': ['Joe', 'Betty', 'James', 'Sandra', 'Tyrone', 'Betty', 'James', 'Michael'],
'fans': [100, 200, 244, 277, 800, 900, 122, 300]
})
b = pd.DataFrame({
'year': [1995, 1995, 1995, 1995, 1996, 1996, 1996, 1996],
'team': ['Panthers', 'Panthers', 'Eagles', 'Eagles', 'Panthers', 'Panthers', 'Eagles', 'Eagles'],
'wins': [4, 2, 3, 5, 6, 7, 2, 4]
})
aa = a.groupby(['year', 'team']).sum()
bb = b.groupby(['year', 'team']).sum()
aa.join(bb)
</code></pre>
<p>This works, but there seems to be some problem with the columns. The final operation <code>aa.join(bb).columns</code> only yields <code>['fans', 'wins']</code>. I guess this is a leftover from an incomplete <code>groupby</code> operation.</p>
<p>To give you a better insight you can view the data with nbviewer <a href="http://nbviewer.ipython.org/urls/gist.githubusercontent.com/bodokaiser/fad68c0965fc6e563434/raw/8fc86dcc40e34406d3b1be45200c84fbdb7d39a8/join.ipynb" rel="nofollow">here</a>.</p>
<p><strong>How do I properly do a group-by and join these two frames?</strong></p>
|
<p><strong>1)</strong> Call <code>reset_index()</code> once, on the joined result:</p>
<pre><code>aa = a.groupby(['year', 'team']).sum()
bb = b.groupby(['year', 'team']).sum()
aa.join(bb).reset_index()
</code></pre>
<p><strong>2)</strong> Alternatively, don't create levels for <code>aa</code> and <code>bb</code> using <code>as_index=False</code> and <code>pd.merge</code></p>
<pre><code>aa = a.groupby(['year', 'team'], as_index=False).sum()
bb = b.groupby(['year', 'team'], as_index=False).sum()
pd.merge(aa, bb)
</code></pre>
<p>Both methods will give you the same output:</p>
<pre><code> year team fans wins
0 1995 Eagles 521 8
1 1995 Panthers 300 6
2 1996 Eagles 422 6
3 1996 Panthers 1700 13
</code></pre>
|
python|pandas
| 1
|
378,145
| 29,463,967
|
Plotting labeled time series data in pandas
|
<p>I am a newbie in pandas. I want to plot labeled time series (daily activity) data in pandas. The horizontal (x) axis represents time and the vertical (y) axis represents the label of each activity. On the horizontal axis, I want a point wherever the time series says an activity happened. My dataset looks like below:</p>
<pre><code> [58]:
import pandas as pd
from random import random
from datetime import datetime
rng = pd.date_range('1/1/2011', periods=5, freq='H')
Activity = ([True,True,False,True,False])
ts = pd.DataFrame(Activity, index=rng, columns=['activity'])
data = ts.asfreq('45Min', method='pad')
data
Out[58]:
activity
2011-01-01 00:00:00 True
2011-01-01 00:45:00 True
2011-01-01 01:30:00 True
2011-01-01 02:15:00 False
2011-01-01 03:00:00 True
2011-01-01 03:45:00 True
</code></pre>
<p>Then the plot would be like this:
<a href="https://www.dropbox.com/s/scimfsnqrvimmoq/Untitled.png?dl=0" rel="nofollow">https://www.dropbox.com/s/scimfsnqrvimmoq/Untitled.png?dl=0</a></p>
|
<p>This is really a matplotlib question ...</p>
<p>I have not sought to replicate every feature of your example plot, but you will get the drift.</p>
<p><img src="https://i.stack.imgur.com/pByr8.png" alt="Example image"></p>
<p>The code for this image follows ...</p>
<pre><code># --- initial data
import pandas as pd
from random import random
from datetime import datetime
rng = pd.date_range('1/1/2011', periods=5, freq='H')
Activity = ([True,True,False,True,False])
ts = pd.DataFrame(Activity, index=rng, columns=['activity'])
data = ts.asfreq('45Min', method='pad')
# --- organise the data for plotting
data['colour'] = 'green'
data.colour = data.colour.where(~data.activity, other='red')
data['sz'] = 100
data.sz = data.sz.where(~data.activity, other=50)
data['position'] = data.activity.astype(int)
print(data)
# --- plot the data
import matplotlib.pyplot as plt
from matplotlib.ticker import FixedLocator
fig, ax = plt.subplots(figsize=(8,4))
ax.scatter(data.index, data.position, s=data.sz, c=data.colour)
# - the x axis
ax.set_xlim(['2010-12-31 23:00:00','2011-01-01 04:45:00'])
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
# - the y axis
ax.set_ylim(-1, 2)
ax.spines['right'].set_color('none')
ax.yaxis.set_ticks_position('left')
labels = ['False', 'True']
tick_locations = [0, 1]
ax.yaxis.set_major_locator(FixedLocator(tick_locations))
ax.set_yticklabels(labels, minor=False)
# - and display
plt.show()
</code></pre>
<p>I have a matplotlib cheat sheet here: <a href="http://bit.ly/python_cs" rel="nofollow noreferrer">http://bit.ly/python_cs</a></p>
|
python|pandas|matplotlib
| 0
|
378,146
| 29,418,236
|
Apply where function [SQL like] on datetime Pandas
|
<p>I have a dataset like:</p>
<pre><code>   Date/Time            Byte
0  2015-04-02 10:44:31     1
1  2015-04-02 10:44:21    10
2  2015-04-02 11:01:11     2
3  2015-04-02 11:01:21    20
</code></pre>
<p>I wish to print all rows related to:</p>
<p>2015-04-02 at 11h </p>
<p>I tried many different solutions but with no results; <code>df</code> is my DataFrame.</p>
<p>For instance, to print only the flows related to 11 I tried the following: <br>
res = df.loc[df['stamp'].hour == 11] </p>
<p>With error:
AttributeError: 'Series' object has no attribute 'hour'</p>
<p>How can I extract all rows related to a specific hour? <br>
How can I extract all rows related to a specific hour of a specific day?</p>
<p>Thanks, have a good day</p>
|
<p>Use <code>pd.to_datetime()</code> on your timestamps if they are stored as strings.</p>
<p>Then you can do</p>
<pre><code>df[df['a_date_col'].apply(lambda x: x.hour) == 11]
</code></pre>
<p>Or you can use the .dt accessor:</p>
<pre><code>df[df['a_date_col'].dt.hour == 11]
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/dev/timeseries.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/dev/timeseries.html</a></p>
<p><a href="http://pandas.pydata.org/pandas-docs/dev/basics.html#dt-accessor" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/dev/basics.html#dt-accessor</a></p>
<p><a href="https://stackoverflow.com/questions/26961805/accessing-years-within-a-dataframe-in-pandas">Accessing years within a dataframe in Pandas</a></p>
|
python|pandas
| 0
|
378,147
| 29,377,590
|
Index error - Python, Numpy, MatLab
|
<p>I have converted a section of MatLab code to Python using the numpy and scipy libraries. I am, however, stuck on the following index error:</p>
<pre><code>IndexError: index 698 is out of bounds for axis 3 with size 2
</code></pre>
<p>698 is the size of the time list.</p>
<p>It occurs in the last section of code, on this line:</p>
<pre><code>exp_Pes[indx,jndx,j,i]=np.trace(rhotemp[:,:,indx,jndx] * Pe)
</code></pre>
<p>The rest is included for completeness.</p>
<p>The following is the code:</p>
<pre><code>import numpy as np
import math
import time
#from scipy.linalg import expm
tic = time.clock()
dt = 1e-2; # time step
t = np.arange(0, 7-dt, dt) # t
gam=1
mt=2
nt=2
delta=1
ensemble_size=6
RKflag=0
itype = 'em'
#========================================================================
def _min(x):
return [np.amin(x), np.argmin(x)]
#========================================================================
###############
#Wavepacket envelope
#rising exponential
sig1 = 1;
xi_t = np.zeros(len(t));
[min_dif, array_pos_start] = _min(abs(t -t[0] ));
[min_dif, array_pos_stop ] = _min(abs(t -t[-1]/2.0));
step = t[1]-t[0]
p_len = np.arange(0, t[array_pos_stop] - t[array_pos_start] + step, step)
xi_t[array_pos_start:array_pos_stop+1] = math.sqrt(sig1) * np.exp((sig1/2.0)*p_len);
norm = np.trapz(t, xi_t * np.conj(xi_t));
xi = xi_t/np.sqrt(abs(norm));
#Pauli matrices
#========================
Pe=np.array([[1,0],[0,0]])
Sm=np.array([[0,0],[1,0]])
Sz=np.array([[1,0],[0,- 1]])
Sy=np.array([[0,- 1j],[1j,0]])
Sx=np.array([[0,1],[1,0]])
#S,L,H coefficients
#=========================
S=np.eye(2)
L=np.sqrt(gam) * Sm
H=np.eye(2)
#=========================
psi=np.array([[0],[1]])
rhoi=psi * psi.T
#rho=np.zeros(2,2,2,2)
rho=np.zeros((2,2,2,2))
rhotot=np.copy(rho)
rhoblank=np.copy(rho)
rhos=np.copy(rho)
rhotots=np.copy(rho)
rhon=np.copy(rho)
rhototN=np.copy(rho)
rhov=np.copy(rhoi)
exp_Pes=np.zeros((2,2,2,2))
exp_Pes2=np.zeros((2,2,2,2))
#initial conditions into rho
#==================
rho[:,:,0,0]=rhoi
#rho[:,:,2,2]=rhoi
rhosi=np.copy(rho)
num_bad=0
avg_val=0
num_badRK=0
avg_valRK=0
#np.zeros(len(t));
exp_Sz=np.zeros((2,2,len(t)))
exp_Sy=np.copy(exp_Sz)
exp_Sx=np.copy(exp_Sz)
exp_Pe=np.copy(exp_Sz)
##########################3
#functions
#========================================================================
#D[X]rho = X*rho* ctranspose(X) -0.5*(ctranspose(X)*X*rho + rho*ctranspose(X)*X)
#
# To call write curlyD(X,rho)
def curlyD(X,rho,nargout=1):
y=X * rho * X.conj().T - 0.5 * (X.conj().T * X * rho + rho * X.conj().T * X)
return y
#========================================================================
def curlyC(A,B,nargout=1):
y=A * B - B * A
return y
#========================================================================
def calc_expect(rhotots,Pe,nargout=1):
for indx in (1,2):
for jndx in (1,2):
exp_Pes[indx,jndx]=np.trace(rhotots[:,:,indx,jndx] * Pe)
exp_Pes2[indx,jndx]=np.trace(rhotots[:,:,indx,jndx] * Pe) ** 2
return exp_Pes,exp_Pes2
#========================================================================
#========================================================================
#========================================================================
#========================================================================
#========================================================================
def single_photon_liouvillian(S,L,H,rho,xi,nargout=1):
rhotot[:,:,1,1]=curlyD(L,rho[:,:,1,1]) + xi * curlyC(S * rho[:,:,0,1],L.T) + xi.T * curlyC(L,rho[:,:,1,0] * S.T) + xi.T * xi * (S * rho[:,:,0,0] * S.T - rho[:,:,0,0])
rhotot[:,:,1,0]=curlyD(L,rho[:,:,1,0]) + xi * curlyC(S * rho[:,:,0,0],L.T)
rhotot[:,:,0,1]=curlyD(L,rho[:,:,0,1]) + xi.T * curlyC(L,rho[:,:,0,0] * S.T)
rhotot[:,:,0,0]=curlyD(L,rho[:,:,0,0])
A=np.copy(rhotot)
return A
#========================================================================
def single_photon_stochastic(S,L,H,rho,xi,nargout=1):
    K=np.trace((L + L.T) * rho[:,:,1,1]) + np.trace(S * rho[:,:,0,1]) * xi + np.trace(S.T * rho[:,:,1,0]) * xi.T
    rhotot[:,:,1,1]=(L * rho[:,:,1,1] + rho[:,:,1,1] * L.T + S * rho[:,:,0,1] * xi + rho[:,:,1,0] * S.T * xi.T - K * rho[:,:,1,1])
    rhotot[:,:,1,0]=(L * rho[:,:,1,0] + rho[:,:,1,0] * L.T + S * rho[:,:,0,0] * xi - K * rho[:,:,1,0])
    rhotot[:,:,0,1]=(L * rho[:,:,0,1] + rho[:,:,0,1] * L.T + rho[:,:,0,0] * S.T * xi.T - K * rho[:,:,0,1])
    rhotot[:,:,0,0]=(L * rho[:,:,0,0] + rho[:,:,0,0] * L.T - K * rho[:,:,0,0])
    B=np.copy(rhotot)
    return B
#========================================================================
def sde_int_io2_rk(a,b,S,L,H,yn,xi,dt,dW,nargout=1):
    # a and b are functions, so they must be called with (), not indexed with []
    Gn=yn + a(S,L,H,yn,xi) * dt + b(S,L,H,yn,xi) * dW
    Gnp=yn + a(S,L,H,yn,xi) * dt + b(S,L,H,yn,xi) * np.sqrt(dt)
    Gnm=yn + a(S,L,H,yn,xi) * dt - b(S,L,H,yn,xi) * np.sqrt(dt)
    ynp1=yn + 0.5 * (a(S,L,H,Gn,xi) + a(S,L,H,yn,xi)) * dt + 0.25 * (b(S,L,H,Gnp,xi) + b(S,L,H,Gnm,xi) + 2 * b(S,L,H,yn,xi)) * dW + 0.25 * (b(S,L,H,Gnp,xi) - b(S,L,H,Gnm,xi)) * (dW ** 2 - dt) * (dt) ** (- 0.5)
    return ynp1
#========================================================================
#========================================================================
def sde_int_photon(itype,rhos,S,L,H,Pe,xi,t,nargout=1):
    dt=t[1] - t[0]
    rhoblank=np.zeros(len(rhos))
    Ax=single_photon_liouvillian
    Bx=single_photon_stochastic
    #if strcmp(itype,'em'):
    if itype == 'em':
        for i in (1,(len(t)-1)):
            if i == 1:
                exp_Pes[:,:,i],exp_Pes2[:,:,i]=calc_expect(rhos,Pe,nargout=2)
                continue
            dW=np.sqrt(dt) * np.random.randn()
            rhotots=rhos + dt * single_photon_liouvillian(S,L,H,rhos,xi[i]) + dW * single_photon_stochastic(S,L,H,rhos,xi[i])
            exp_Pes[:,:,i],exp_Pes2[:,:,i]=calc_expect(rhotots,Pe,nargout=2)
            rhos=np.copy(rhotots)
            rhotots=np.copy(rhoblank)
    if itype == 'rk':
        for i in (1,(len(t)-1)):
            if i == 1:
                exp_Pes[:,:,i],exp_Pes2[:,:,i]=calc_expect(rhos,Pe,nargout=2)
                continue
            dW=np.sqrt(dt) * np.random.randn()
            rhotots=sde_int_io2_rk(Ax,Bx,S,L,H,rhos,xi[i],dt,dW)
            exp_Pes[:,:,i],exp_Pes2[:,:,i]=calc_expect(rhotots,Pe,nargout=2)
            rhos=np.copy(rhotots)
            rhotots=np.copy(rhoblank)
    return exp_Pes,exp_Pes2
#========================================================================
"""
def md_expm(Ain,nargout=1):
    Aout=np.zeros(len(Ain))
    r,c,d1,d2=len(Ain,nargout=4)
    for indx in (1,d1):
        for jndx in (1,d2):
            Aout[:,:,indx,jndx]=expm(Ain[:,:,indx,jndx])
    return Aout
"""
#========================================================================
#========================================================================
Ax=single_photon_liouvillian
Bx=single_photon_stochastic
toc = time.clock()
for indx in (1,range(mt)):
    for jndx in (1,range(nt)):
        exp_Sz[indx,jndx,1]=np.trace(rho[:,:,indx,jndx] * Sz)
        exp_Sy[indx,jndx,1]=np.trace(rho[:,:,indx,jndx] * Sy)
        exp_Sx[indx,jndx,1]=np.trace(rho[:,:,indx,jndx] * Sx)
        exp_Pe[indx,jndx,1]=np.trace(rho[:,:,indx,jndx] * Pe)
for i in (2,len(t)-1):
    #Master equation
    rhotot=rho + dt * single_photon_liouvillian(S,L,H,rho,xi[i - 1])
    for indx in (1,range(mt)):
        for jndx in (1,range(nt)):
            exp_Sz[indx,jndx,i]=np.trace(rhotot[:,:,indx,jndx] * Sz)
            exp_Sy[indx,jndx,i]=np.trace(rhotot[:,:,indx,jndx] * Sy)
            exp_Sx[indx,jndx,i]=np.trace(rhotot[:,:,indx,jndx] * Sx)
            exp_Pe[indx,jndx,i]=np.trace(rhotot[:,:,indx,jndx] * Pe)
    rho=np.copy(rhotot)
    rhotot=np.copy(rhoblank)
for j in (1,range(ensemble_size)):
    psi1=np.array([[0],[1]])
    rho1=psi1 * psi1.T
    rhotemp = np.zeros((2,2,2,2))
    rhotemp[:,:,0,0]=rho1
    rhotemp[:,:,1,1]=rho1
    rhos=np.copy(rhotemp)
    for indx in (1,range(2)):
        for jndx in (1,range(2)):
            exp_Pes[indx,jndx,j,i]=np.trace(rhotemp[:,:,indx,jndx] * Pe)
            exp_Pes2[indx,jndx,j,i]=np.trace(rhotemp[:,:,indx,jndx] * Pe) ** 2
    for i in (2,(len(t)-1)):
        dW=np.sqrt(dt) * np.random.randn()
        rhotots=rhos + dt * single_photon_liouvillian(S,L,H,rhos,xi[i - 1]) + dW * single_photon_stochastic(S,L,H,rhos,xi[i - 1])
        for indx in (1,range(mt)):
            for jndx in (1,range(nt)):
                exp_Pes[indx,jndx,j,i]=np.trace(rhotots[:,:,indx,jndx] * Pe)
                exp_Pes2[indx,jndx,j,i]=np.trace(rhotots[:,:,indx,jndx] * Pe) ** 2
        rhos=np.copy(rhotots)
        rhotots=np.copy(rhoblank)
    Irow=np.where(np.squeeze(exp_Pes[2,2,j,:]) > 1)
    Val=np.squeeze(exp_Pes[2,2,j,Irow])
    if Irow:
        num_bad=num_bad + 1
        avg_val=avg_val + max(Val)
</code></pre>
<p>Any help would be appreciated as I have been stuck on this for a while.</p>
<p>Thanks!</p>
|
<p>The problem is that you are not defining <code>i</code> in this loop; it is left over from the previous loop, so it still holds that loop's last value, <code>len(t)-1</code>, which is much larger than the length of the 3rd dimension of <code>exp_Pes</code> (which is 2). You need to define a valid value for <code>i</code> somewhere.</p>
<p>If you don't want to loop over <code>i</code>, generally you could just define it once before the loop starts. In your case, however, you would need to set it every time you loop since it is also being defined inside this loop a few lines below the one causing the error.</p>
<p>However, it would probably be clearer to just do something like <code>exp_Pes[indx,jndx,j,0] = ...</code>, since people reading your code won't need to look back and find what value <code>i</code> was set to.</p>
<p>A few more minor points of advice:</p>
<p><code>np.arange</code> does not include the stop value. So in the case where you use <code>7-dt</code> at the beginning, you end up with the start being <code>0</code> and the end being <code>7-dt-dt</code>, which is probably not what you want. But in any case, you should never use <code>np.arange</code> (or the equivalent in MATLAB) with floats, since it can accumulate floating-point errors. Use <code>np.linspace</code> instead.</p>
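<p>A minimal sketch of the difference (the step size and endpoint here are assumed purely for illustration):</p>
<pre><code>import numpy as np

dt = 0.01
t1 = np.arange(0, 7, dt)          # stop value 7 is excluded; float steps accumulate error
t2 = np.linspace(0, 7 - dt, 700)  # exactly 700 evenly spaced points, endpoints exact
</code></pre>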
<p>Second, for your <code>_min</code> function, you don't need <code>[]</code>, you can just do <code>return np.amin(x), np.argmin(x)</code>. </p>
<p>For arrays, however, it is usually easier to use the method versions attached to the array itself: <code>return x.min(), x.argmin()</code>. The same goes for <code>np.copy(rho)</code>; use <code>rho.copy()</code> instead.</p>
<p>This function could be simplified further to a <code>lambda</code> expression, the equivalent of a MATLAB anonymous function, like so: <code>_min = lambda x: [x.min(), x.argmin()]</code> (you do need the <code>[]</code>, or at least parentheses, there).</p>
<p>You should use the <code>numpy</code> version of functions with <code>numpy</code> arrays. They will be faster. So <code>np.abs(t-t[0])</code> is faster than <code>abs(t-t[0])</code>. Same with using <code>np.sqrt(sig1)</code> instead of <code>math.sqrt(sig1)</code>.</p>
<p>You also don't need the <code>[]</code> on returned values, so <code>[min_dif, array_pos_start] = _min(abs(t -t[0] ))</code> can be <code>min_dif, array_pos_start = _min(np.abs(t-t[0]))</code>. And since you never actually use <code>min_dif</code>, this could just be <code>array_pos_start = np.abs(t-t[0]).argmin()</code>.</p>
<p>You don't need to end lines with <code>;</code>.</p>
<p>You don't need to assign to a variable before returning. So you can just have, for example, <code>return A*B-B*A</code>.</p>
<p>You seem to use the <code>nargout</code> argument unnecessarily. Python doesn't need it, in large part because you can just index the results. Say you have a function <code>foo(x)</code> that returns the min and max of <code>x</code>. If you just want the max, you can do <code>foo(x)[1]</code>. MATLAB doesn't let you do that, so it relies on <code>nargout</code> to work around it.</p>
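<p>A short illustration of that pattern (the <code>minmax</code> helper is hypothetical, applied to the question's <code>t</code> array):</p>
<pre><code>def minmax(x):
    return x.min(), x.max()   # a plain tuple return, no nargout needed

lo, hi = minmax(t)    # unpack both values
hi = minmax(t)[1]     # or index out just the one you need
</code></pre>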
|
python|matlab|numpy
| 1
|
378,148
| 29,672,674
|
Python pandas: equivalent to SQL's aggregate functions?
|
<p>What is the pandas equivalent of something like:</p>
<pre><code>select mykey, sum(Field1) as Field1, avg(Field1) as avg_field1, min(field2) as min_field2
from df
group by mykey
</code></pre>
<p>in SQL? I understand that in pandas I can do</p>
<pre><code>grouped = df.groupby('mykey')
</code></pre>
<p>and then</p>
<pre><code>grouped.mean()
</code></pre>
<p>would calculate the mean for all the fields.
However, I need different aggregate functions on different columns: on some columns none at all, on others sum and avg, on others just the maximum, etc.</p>
<p>How do I achieve this in pandas?
Thanks!</p>
|
<p>You can apply multiple functions to multiple fields:</p>
<pre><code>f = {'Field1': 'sum',
     'Field2': ['max', 'mean'],
     'Field3': ['min', 'mean', 'count'],
     'Field4': 'count'}
grouped = df.groupby('mykey').agg(f)
</code></pre>
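<p>If you also want the SQL-style aliases from your query (<code>avg_field1</code>, <code>min_field2</code>), pandas 0.25+ supports named aggregation; a sketch using the column names from the question:</p>
<pre><code>grouped = df.groupby('mykey').agg(
    Field1=('Field1', 'sum'),
    avg_field1=('Field1', 'mean'),
    min_field2=('field2', 'min'),
)
</code></pre>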
<p>Hope this helps! Pandas is a very powerful tool. </p>
|
python|sql|pandas
| 2
|
378,149
| 29,404,377
|
Spliting dataframe in 10 equal parts and merge 9 parts after picking one at a time in loop
|
<p>I need to split a dataframe into 10 parts, then use one part as the test set and the remaining 9 (merged) as the training set. I have come up with the following code, where I am able to split the dataset, and am trying to merge the remaining sets after picking one of the 10.
The first iteration goes fine, but I get the following error in the second iteration.</p>
<pre><code>df = pd.DataFrame(np.random.randn(10, 4), index=list(xrange(10)))
for x in range(3):
    dfList = np.array_split(df, 3)
    testdf = dfList[x]
    dfList.remove(dfList[x])
    print testdf
    traindf = pd.concat(dfList)
    print traindf
    print "================================================"
</code></pre>
<p><img src="https://i.stack.imgur.com/BctWC.png" alt="enter image description here"></p>
|
<p>I don't think you have to split the dataframe into 10 parts; just into 2.
I use this code for splitting a dataframe into a training set and a validation set:</p>
<pre><code>test_index = np.random.choice(df.index, int(len(df.index)/10), replace=False)
test_df = df.loc[test_index]
train_df = df.loc[~df.index.isin(test_index)]
</code></pre>
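<p>If you do want the 10-fold behaviour from the question, a sketch that avoids mutating the split list (and the ambiguous DataFrame comparison that <code>remove</code> triggers) is to slice around the held-out fold:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 4))
folds = np.array_split(df, 10)            # split once, outside the loop
for x in range(10):
    testdf = folds[x]                                  # one fold as the test set
    traindf = pd.concat(folds[:x] + folds[x + 1:])     # the other nine as the training set
</code></pre>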
|
python|numpy|pandas
| 2
|
378,150
| 29,447,457
|
Covariance with a columns
|
<p>If I have a numpy array X with <code>X.shape=(m,n)</code> and a second column vector y with <code>y.shape=(m,1)</code>, how can I calculate the covariance of each column of X with y without using a for loop? I expect the result to be of shape <code>(m,1)</code> or <code>(1,m)</code>.</p>
|
<p>Assuming that the output is meant to be of shape <code>(1,n)</code> i.e. a scalar each for <code>covariance</code> operation for each column of <code>A</code> with <code>B</code> and thus for <code>n</code> columns ending up with <code>n</code> such scalars, you can use two approaches here that use <code>covariance formula</code>.</p>
<p><strong>Approach #1: With Broadcasting</strong></p>
<pre><code>np.sum((A - A.mean(0))*(B - B.mean(0)),0)/B.size
</code></pre>
<p><strong>Approach #2: With Matrix-multiplication</strong></p>
<pre><code>np.dot((B - B.mean(0)).T,(A - A.mean(0)))/B.size
</code></pre>
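<p>A quick sanity check of the broadcasting approach against <code>np.cov</code> (the per-column loop is for verification only; shapes assumed <code>(m,n)</code> and <code>(m,1)</code>):</p>
<pre><code>import numpy as np

m, n = 100, 5
A = np.random.randn(m, n)
B = np.random.randn(m, 1)

vec = np.sum((A - A.mean(0)) * (B - B.mean(0)), 0) / B.size
ref = np.array([np.cov(A[:, j], B[:, 0], bias=True)[0, 1] for j in range(n)])
print(np.allclose(vec, ref))  # True
</code></pre>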
|
python|numpy|scipy|vectorization
| 2
|
378,151
| 29,719,324
|
Fortran ordered (column-major) numpy structured array possible?
|
<p>I am looking for a way to more efficiently assign a column of a numpy structured array.</p>
<p>Example:</p>
<pre><code>my_col = fn_returning_1D_array(...)
</code></pre>
<p>executes more than two times faster on my machine than the same assignment to the column of a structured array:</p>
<pre><code>test = np.ndarray(shape=(int(8e6),), dtype=dtype([('column1', 'S10'), ...more columns...]))
test['column1'] = fn_returning_1D_array(...)
</code></pre>
<p>I tried creating <code>test</code> with fortran ordering but it did not help. Presumably the fields stay interleaved in memory.</p>
<p>Does anybody have any idea here? I would be willing to use low-level numpy interfaces and cython if they could help.</p>
<hr>
<h2>Edit 1: in response to hpaulj's answer</h2>
<p>The apparent equivalence of recarray column assignment and "normal" array column assignment holds only if the latter is created with row-major order. With column-major ordering the two assignments are far from equivalent:</p>
<p><strong>Row-major</strong></p>
<pre><code>In [1]: import numpy as np
In [2]: M,N=int(1e7),10
In [4]: A1=np.zeros((M,N),'f')
In [9]: dt=np.dtype(','.join(['f' for _ in range(N)]))
In [10]: A2=np.zeros((M,),dtype=dt)
In [11]: X=np.arange(M+0.0)
In [13]: %timeit for n in range(N):A1[:,n]=X
1 loops, best of 3: 2.36 s per loop
In [15]: %timeit for n in dt.names: A2[n]=X
1 loops, best of 3: 2.36 s per loop
In [16]: %timeit A1[:,:]=X[:,None]
1 loops, best of 3: 334 ms per loop
In [8]: A1.flags
Out[8]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
</code></pre>
<p><strong>Column-major</strong></p>
<pre><code>In [1]: import numpy as np
In [2]: M,N=int(1e7),10
In [3]: A1=np.zeros((M,N),'f', 'F')
In [4]: dt=np.dtype(','.join(['f' for _ in range(N)]))
In [5]: A2=np.zeros((M,),dtype=dt)
In [6]: X=np.arange(M+0.0)
In [8]: %timeit for n in range(N):A1[:,n]=X
1 loops, best of 3: 374 ms per loop
In [9]: %timeit for n in dt.names: A2[n]=X
1 loops, best of 3: 2.43 s per loop
In [10]: %timeit A1[:,:]=X[:,None]
1 loops, best of 3: 380 ms per loop
In [11]: A1.flags
Out[11]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
</code></pre>
<p>Note that for column-major ordering the two buffers are no longer identical:</p>
<pre><code>In [6]: A3=np.zeros_like(A2)
In [7]: A3.data = A1.data
In [20]: A2[0]
Out[20]: (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
In [21]: A2[1]
Out[21]: (1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
In [16]: A3[0]
Out[16]: (0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0)
In [17]: A3[1]
Out[17]: (10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0)
</code></pre>
|
<p>These are not equivalent actions. One just generates an array (and assigns it to a variable, a minor action). The other generates the array and fills a column of a structured array.</p>
<pre><code>my_col = fn_returning_1D_array(...)
test['column1'] = fn_returning_1D_array(...)
</code></pre>
<p>I think a fairer comparison would be to fill in the columns of a 2D array.</p>
<pre><code>In [38]: M,N=1000,10
In [39]: A1=np.zeros((M,N),'f') # 2D array
In [40]: dt=np.dtype(','.join(['f' for _ in range(N)]))
In [41]: A2=np.zeros((M,),dtype=dt) # structured array
In [42]: X=np.arange(M+0.0)
In [43]: A1[:,0]=X # fill a column
In [44]: A2['f0']=X # fill a field
In [45]: timeit for n in range(N):A1[:,n]=X
10000 loops, best of 3: 65.3 µs per loop
In [46]: timeit for n in dt.names: A2[n]=X
10000 loops, best of 3: 40.6 µs per loop
</code></pre>
<p>I'm a bit surprised that filling the structured array is faster.</p>
<p>Of course the fast way to fill the 2D array is with broadcasting:</p>
<pre><code>In [50]: timeit A1[:,:]=X[:,None]
10000 loops, best of 3: 29.2 µs per loop
</code></pre>
<p>But the improvement over filling the fields is not that great. </p>
<p>I don't see anything significantly wrong with filling a structured array field by field. It's got to be faster than generating a list of tuples to fill the whole array.</p>
<p>I believe <code>A1</code> and <code>A2</code> have identical data buffers. For example, if I make a zeros copy of <code>A2</code>, I can replace its data buffer with <code>A1</code>'s and get a valid structured array:</p>
<pre><code>In [64]: A3=np.zeros_like(A2)
In [65]: A3.data=A1.data
</code></pre>
<p>So the faster way of filling the structured array is to do the fastest 2D fill, followed by this <code>data</code> assignment.</p>
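<p>Putting those two steps together, continuing the session above (a minimal sketch; it works here only because every field has the same <code>'f'</code> dtype, so the 2D buffer maps cleanly onto the records):</p>
<pre><code>A1 = np.zeros((M, N), 'f')
A1[:, :] = X[:, None]     # fast broadcasted fill of the plain 2D array
A2 = np.zeros((M,), dtype=dt)
A2.data = A1.data         # rebind the structured array to the filled buffer
</code></pre>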
<p>But in the general case the challenge is to create a compatible 2D array. It's easy when all field dtypes are the same. With a mix of dtypes you'd have to work at the byte level. There are some advanced <code>dtype</code> specifications (with offsets, etc), that may facilitate such a mapping.</p>
<hr>
<p>Now you have shifted the focus to Fortran order. In the case of a 2d array that does help. But it will do so at the expense of row oriented operations.</p>
<pre><code>In [89]: A1=np.zeros((M,N),'f',order='F')
In [90]: timeit A1[:,:]=X[:,None]
100000 loops, best of 3: 18.2 µs per loop
</code></pre>
<p>One thing that you haven't mentioned, at least not before the last rewrite of the question, is how you intend to use this array. If it is just a place to store a number of arrays by name, you could use a straightforward Python dictionary:</p>
<pre><code>In [96]: timeit D={name:X.copy() for name in dt.names}
10000 loops, best of 3: 25.2 µs per loop
</code></pre>
<p>Though this really is a time test for <code>X.copy()</code>.</p>
<p>In any case, there isn't any equivalent to Fortran order when dealing with dtypes. None of the array operations like <code>reshape</code>, <code>swapaxes</code>, <code>strides</code>, broadcasting cross the 'dtype' boundary.</p>
|
python|arrays|numpy|recarray|structured-array
| 1
|
378,152
| 29,806,936
|
Why is Pandas Concatenation (pandas.concat) so Memory Inefficient?
|
<p>I have about 30 GB of data (in a list of about 900 dataframes) that I am attempting to concatenate together. The machine I am working with is a moderately powerful Linux box with about 256 GB of RAM. However, when I try to concatenate my files I quickly run out of available RAM. I have tried all sorts of workarounds to fix this (concatenating in smaller batches with for loops, etc.) but I still cannot get these to concatenate. Two questions spring to mind:</p>
<ol>
<li><p>Has anyone else dealt with this and found an effective workaround? I cannot use a straight append because I need the 'column merging' (for lack of a better word) functionality of the <code>join='outer'</code> argument in <code>pd.concat()</code>.</p></li>
<li><p>Why is Pandas concatenation (which I know is just calling <code>numpy.concatenate</code>) so inefficient with its use of memory?</p></li>
</ol>
<p>I should also note that I do not think the problem is an explosion of columns as concatenating 100 of the dataframes together gives about 3000 columns whereas the base dataframe has about 1000. </p>
<h2>Edit:</h2>
<p>The data I am working with is financial data about 1000 columns wide and about 50,000 rows deep for each of my 900 dataframes. The types of data going across left to right are:</p>
<ol>
<li>date in string format,</li>
<li><code>string</code></li>
<li><code>np.float</code></li>
<li><code>int</code></li>
</ol>
<p>... and so on repeating. I am concatenating on column name with an outer join which means that any columns in <code>df2</code> that are not in <code>df1</code> will not be discarded but shunted off to the side. </p>
<hr>
<h2>Example:</h2>
<pre><code> #example code
data=pd.concat(datalist4, join="outer", axis=0, ignore_index=True)
#two example dataframes (about 90% of the column names should be in common
#between the two dataframes, the unnamed columns, etc are not a significant
#number of the columns)
print datalist4[0].head()
                 800_1     800_2   800_3  800_4                900_1     900_2
0  2014-08-06 09:00:00  BEST_BID  1117.1    103  2014-08-06 09:00:00  BEST_BID
1 2014-08-06 09:00:00 BEST_ASK 1120.0 103 2014-08-06 09:00:00 BEST_ASK
2 2014-08-06 09:00:00 BEST_BID 1106.9 11 2014-08-06 09:00:00 BEST_BID
3 2014-08-06 09:00:00 BEST_ASK 1125.8 62 2014-08-06 09:00:00 BEST_ASK
4 2014-08-06 09:00:00 BEST_BID 1117.1 103 2014-08-06 09:00:00 BEST_BID
    900_3  900_4               1000_1    1000_2  ...  2400_4
0  1017.2    103  2014-08-06 09:00:00  BEST_BID  ...     NaN
1 1020.1 103 2014-08-06 09:00:00 BEST_ASK ... NaN
2 1004.3 11 2014-08-06 09:00:00 BEST_BID ... NaN
3 1022.9 11 2014-08-06 09:00:00 BEST_ASK ... NaN
4 1006.7 10 2014-08-06 09:00:00 BEST_BID ... NaN
                      _1  _2  _3  _4                   _1.1 _2.1 _3.1 _4.1
0  #N/A Invalid Security NaN NaN NaN  #N/A Invalid Security  NaN  NaN  NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN
dater
0 2014.8.6
1 2014.8.6
2 2014.8.6
3 2014.8.6
4 2014.8.6
[5 rows x 777 columns]
print datalist4[1].head()
                 150_1     150_2   150_3  150_4                200_1     200_2
0  2013-12-04 09:00:00  BEST_BID  1639.6     30  2013-12-04 09:00:00  BEST_ASK
1 2013-12-04 09:00:00 BEST_ASK 1641.8 133 2013-12-04 09:00:08 BEST_BID
2 2013-12-04 09:00:01 BEST_BID 1639.5 30 2013-12-04 09:00:08 BEST_ASK
3 2013-12-04 09:00:05 BEST_BID 1639.4 30 2013-12-04 09:00:08 BEST_ASK
4 2013-12-04 09:00:08 BEST_BID 1639.3 133 2013-12-04 09:00:08 BEST_BID
    200_3  200_4                250_1     250_2  ...               2500_1
0  1591.9    133  2013-12-04 09:00:00  BEST_BID  ...  2013-12-04 10:29:41
1 1589.4 30 2013-12-04 09:00:00 BEST_ASK ... 2013-12-04 11:59:22
2 1591.6 103 2013-12-04 09:00:01 BEST_BID ... 2013-12-04 11:59:23
3 1591.6 133 2013-12-04 09:00:04 BEST_BID ... 2013-12-04 11:59:26
4 1589.4 133 2013-12-04 09:00:07 BEST_BID ... 2013-12-04 11:59:29
     2500_2  2500_3  2500_4         Unnamed: 844_1  Unnamed: 844_2
0  BEST_ASK    0.35      50  #N/A Invalid Security             NaN
1 BEST_ASK 0.35 11 NaN NaN
2 BEST_ASK 0.40 11 NaN NaN
3 BEST_ASK 0.45 11 NaN NaN
4 BEST_ASK 0.50 21 NaN NaN
Unnamed: 844_3 Unnamed: 844_4 Unnamed: 848_1 dater
0 NaN NaN #N/A Invalid Security 2013.12.4
1 NaN NaN NaN 2013.12.4
2 NaN NaN NaN 2013.12.4
3 NaN NaN NaN 2013.12.4
4 NaN NaN NaN 2013.12.4
[5 rows x 850 columns]
</code></pre>
|
<p>I've had performance issues concatenating a large number of DataFrames to a 'growing' DataFrame. My workaround was to append all sub-DataFrames to a list, and then concatenate the list of DataFrames once processing of the sub-DataFrames was complete.</p>
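<p>In outline (the <code>list_of_dataframes</code> iterable and <code>process</code> function are placeholders for your own pipeline):</p>
<pre><code>frames = []
for df_piece in list_of_dataframes:   # hypothetical iterable over the ~900 sub DataFrames
    frames.append(process(df_piece))  # hypothetical per-frame processing
# a single concat at the end, instead of growing a DataFrame in the loop
data = pd.concat(frames, join='outer', axis=0, ignore_index=True)
</code></pre>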
|
python|numpy|pandas|ram
| 19
|
378,153
| 62,286,514
|
Is it possible to multiple updates across rows based on a query on single pandas DataFrame column
|
<p>I am trying to update a dataframe of country names in one go</p>
<pre><code>import pandas as pd
df = pd.DataFrame( {'countries': ['United States of America','United Kingdom','Republic of Korea','Netherlands']})
df
</code></pre>
<p>Output 1:</p>
<p><img src="https://i.stack.imgur.com/7btsT.png" alt="Output 1:" /></p>
<p>I would like country names updated and it seems inefficient to do it as below</p>
<pre><code>df.loc[df['countries']=='United States of America' ,'countries'] = 'USA'
df.loc[df['countries']=='United Kingdom' ,'countries'] = 'UK'
df.loc[df['countries']=='Republic of Korea' ,'countries'] = 'South Korea'
df.loc[df['countries']=='Netherlands' ,'countries'] = 'Holland'
df
</code></pre>
<p>The above works to give me this output:</p>
<p><img src="https://i.stack.imgur.com/pBomF.png" alt="Output 2:" /></p>
<p>I'd ideally like to update this with something on the lines of:</p>
<pre><code>df.loc[df['countries'] in ['United States of America','United Kingdom','Republic of Korea','Netherlands']
,'countries'] = ['USA','UK','South Korea','Holland']
</code></pre>
<p>However, I am presented with this error and I am not able to get past it by attempting to use <code>.any()</code> function or anything else I've tried so far.</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I would appreciate any help to make this more efficient: updating multiple matched values with a corresponding list.</p>
|
<p>Use a dictionary with <code>pd.DataFrame.replace</code>:</p>
<pre><code>dd = {'United States of America':'USA',
'United Kingdom':'UK',
'Republic of Korea':'South Korea',
'Netherlands':'Holland'}
df.replace(dd)
</code></pre>
<p>Output:</p>
<pre><code> countries
0 USA
1 UK
2 South Korea
3 Holland
</code></pre>
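<p>If other columns could contain the same strings and should not be touched, you can restrict the replacement to the one column:</p>
<pre><code>df['countries'] = df['countries'].replace(dd)
</code></pre>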
|
python|pandas|dataframe
| 4
|
378,154
| 62,201,732
|
How to create a scatter plot or heatmap of spearman's correlation
|
<p>I have two dataframes 'A' and 'B', each of them having 1000 values (some values are missing from each column).</p>
<p>Dataframe 'A'</p>
<pre><code>([-1.73731693e-03, -5.11060266e-02, 8.46153465e-02, 1.48671467e-03,
1.52286786e-01, 8.26033395e-02, 1.18621477e-01, -6.81430566e-02,
5.11196597e-02, 2.25723347e-02, -2.98125029e-02, -9.61589832e-02,
-1.61495353e-03, 3.72062420e-02, 1.66557311e-02, 2.39392450e-01,
-3.91891332e-02, 3.94344811e-02, 4.10956733e-02, 6.69258037e-02,
7.92391216e-02, 2.59883593e+00, 1.54048404e+00, -4.92893250e-01,
-2.91309155e-01, -8.63923310e-01, -8.51987780e-01, 4.60905145e-01,
2.76583773e-01, 1.68323381e+00, 1.82011391e+00, 3.68951641e-01,
-1.35627096e+00, -1.24374617e+00, -1.97728773e+00, 2.70233476e+00,
-5.60139584e-01, -8.50132695e-01, 1.85987594e+00, -2.89995402e+00,
2.05908855e+00, -2.36161146e-01, -6.62032149e-01, -3.46654905e-01,
1.60181172e+00, 1.65443393e+00, -3.77934113e-03, -7.94313157e-01,
5.20531845e-03, -5.24688509e-01, -1.57952723e+00, 3.14415761e-01,
-9.32905832e-01, -1.34278662e-01, -1.84121185e+00, -1.67941178e-01,
-1.21144093e+00, 3.76283451e-01, 5.61453284e-01, -6.26859439e-01,
-4.66613293e-02, 2.56535385e-01, -5.86989954e-01, -4.21848822e-01,
5.21841502e-01, 5.76096822e-01, -1.58315586e-01, -3.31595062e-02,
-5.72139189e-01, 7.27998737e-01, 1.54143678e+00, 2.58551028e+00,
1.11951220e+00, 2.08231826e+00, 8.48119597e-01, 3.91317082e-01,
1.45425737e+00, -5.08802476e-01, -9.04742166e-01, -4.39964548e-02,
-5.07664895e-01, 1.34800131e-01, 6.60639468e-01, -7.81770841e-02,
1.77803055e-01, -5.25474907e-01, 1.56286558e+00, 1.37397348e+00,
9.35845142e-01, -8.29997405e-01, -1.12959459e+00, -7.34076036e-01,
-1.34298352e+00, -1.55242566e+00, -3.48126090e-01, 9.46175316e-01,
1.04627046e+00, 2.78090673e-01, 5.24197520e-01, -7.31359265e-01,
9.81771972e-01, -7.06560821e-01, 9.87914170e-01, 4.21145043e-01,
7.99874801e-01, -3.61598953e-01, -6.91521208e-02, -3.02639311e-01,
1.22688070e-02, -1.28362301e-01, 1.55251598e+00, 1.50264374e+00,
-1.50725278e+00, -1.15365780e-01, -9.54988005e-01, -8.96627259e-01,
-2.83129466e-01, -1.30206622e+00, -8.17198805e-01, 1.10860713e+00,
-9.80216468e-01, -8.91534692e-01, -8.34263124e-01, -7.16062684e-01,
9.43266610e-01, -6.39953720e-01, 2.20295404e-01, 6.53124338e-01,
1.12831707e+00, 7.95192837e-01, 1.06274424e+00, -9.84363663e-01,
-1.86648718e+00, 2.47560957e-01, -1.54991644e-01, -1.06641038e-01,
-2.08836784e-03, 6.62447504e-01, -1.34260765e-01, 2.98202604e-01,
-2.19112992e-01, -4.66701070e-01, -4.29040735e-02, 2.77548893e-01,
-6.48395632e-02, 4.43922718e-01, -1.06670096e+00, 7.60389677e-01,
-3.50944675e-01, -2.68452398e-01, 1.65183406e-01, -3.35291595e-01,
8.29848518e-01, 5.20341409e-01, 8.95388863e-01, 2.10437855e-01,
2.35693685e+00, -1.30064957e+00, 5.94602557e-02, -2.14385684e-02,
-1.01823776e+00, 8.10292523e-01, -1.22324503e+00, 3.37151269e-01,
6.34668773e-01, -6.14841220e-01, -3.06480016e-01, -8.71147997e-01,
-2.38711565e-01, -3.71304349e-01, -5.21931353e-01, -7.25105848e-01,
9.55749034e-01, -5.03756385e-01, 1.11945956e+00, -1.13072038e+00,
1.46584643e+00, -1.03178731e+00, 1.49044585e+00, 4.29069135e-01,
5.71108660e-01, -8.24272706e-01, 3.75994251e-01, 1.18141844e+00,
-1.22185847e-01, -1.73339604e-03, -1.89326424e-01, -1.83774529e-02,
1.63951866e-01, 2.68499548e-01, 4.42841678e-01, -5.51856731e-02,
-2.09071328e-01, -1.80936048e-01, -1.32749060e-01, -1.37133946e-01,
3.04451064e-01, -2.60560303e-02, 1.64786954e-01, 1.32592907e-01,
-1.46235968e+00, -7.26806017e-01, -3.67486773e-01, 3.71101544e-01,
8.83259501e-01, -7.15065260e-02, 1.66389135e+00, -1.78108597e+00,
-1.26130490e+00, -2.24665654e-01, -8.12489764e-01, 5.74641618e-01,
-4.67201906e-01, -1.12587866e+00, 7.75153678e-01, 5.72844798e-01,
-1.26508809e+00, 8.06000266e-01, -6.82706612e-01, 1.50495168e+00,
8.52438532e-01, 9.43195172e-01, -4.40088490e-02, -2.45587111e-01,
-9.86037547e-01, -1.11312353e+00, 9.32310853e-01, -1.04108755e+00,
4.26250651e-01, 1.70686581e-01, -2.64108584e-01, 8.06651732e-02,
5.71204776e-01, 1.46614492e-01, 1.18698807e-01, 3.55246874e-04,
6.77137159e-01, 1.15635393e-01, 1.34337204e-01, 3.27307728e-01,
-2.05416923e-01, 4.18027455e-01, 9.88345937e-02, 2.18627719e-01,
-5.18426174e-02, -1.17021957e-01, 1.70474550e-01, 4.82736350e-02,
3.21336545e-01, -1.45544581e-01, -1.20319001e-01, 2.03828814e-01,
-3.08498184e-02, -1.40565005e+00, -1.43214088e-01, 4.97504769e-01,
1.56273785e-01, 2.75011645e-01, -4.60341398e-03, 1.43803337e+00,
1.39331909e-01, 2.06784989e-01, -5.12059356e-01, -1.17023126e+00,
-5.96174413e-01, -1.22451379e+00, 1.96344831e-01, 2.14817355e-01,
1.24091029e-01, 5.14485621e-01, -6.03650270e-01, -1.65868324e+00,
-8.21932382e-01, -7.13710026e-01, -8.08813887e-01, -8.04744593e-01,
-1.06858314e-01, -4.50248193e-02, -2.20419270e-01, 8.09215220e-02,
1.35851711e+00, -1.14235665e+00, -6.68174295e-02, -6.01281650e-01,
2.34869773e-01, 3.67129075e-01, -1.34835335e+00, 7.52430154e-01,
1.37352587e+00, -1.02421527e+00, -2.07610263e-02, -3.39083658e-02,
-5.75996009e-02, -2.31073554e-02, 4.61795647e-02, -4.59340619e-01,
-3.62781811e-01, 4.54813190e-02, 6.04157090e-02, -1.87268083e-01,
1.70276057e-01, -8.61843513e-02, -1.27476047e+00, 1.30585731e+00,
-6.46389245e-01, -1.40635401e-01, -1.77942738e+00, -1.41113903e-01,
1.56715807e-01, -1.67712695e-01, 1.86451110e-01, -6.01158881e-02,
4.64978376e-01, 5.13440781e-01, 6.19532336e-01, 2.54267587e-01,
-2.78759433e-01, -3.88565967e-01, 3.87152834e-02, 1.06240041e+00,
2.09454855e-01, 9.64690667e-03, 8.95837369e-02, -3.96816092e-01,
-3.41660062e-01, 6.29889334e-01, -8.67980022e-03, 7.84849030e-01,
-4.85106947e-01, -7.31377792e-01, -8.87659450e-01, 7.61389541e-01,
9.76497314e-01, -1.06744789e+00, 1.47065840e+00, 6.25211618e-01,
7.25988559e-01, 4.19787342e-01, 1.92491575e-01, 1.13681147e+00,
-1.41299616e-01, 1.88563224e+00, 1.20414116e+00, 8.84760070e-02,
-5.82623462e-01, -6.35685252e-01, 9.42374369e-01, -2.68795041e+00,
1.55265515e-01, 1.11831120e+00, 1.42496225e+00, -2.49172328e+00,
-2.96253872e+00, -1.27634582e+00, 8.64353099e-01, 1.75738299e+00,
-1.08871311e+00, -9.71165087e-01, 7.15048842e-01, -2.17295734e-01,
-9.51989200e-01, -2.18546988e-01, 9.17042794e-01, 8.62052366e-01,
-1.85594903e-01, 4.56294789e-01, -6.85416684e-01, -2.80209189e-01,
-5.46608487e-01, 1.08818926e+00, -7.21033879e-01, -6.71183475e-01,
-6.36051999e-01, -4.59980192e-01, -5.05580110e-01, -3.78244959e-01,
-7.24025921e-01, -2.08545177e-01, 4.57899036e-02, 4.40788256e-02,
-2.37824313e-01, 1.52266134e+00, 8.17944390e-03, 1.10203927e+00,
9.86476664e-01, -5.18193891e-01, -3.20302684e-01, -3.62147726e-01,
8.09107079e-02, -2.23162278e+00, 1.08676773e+00, 5.61964453e-01,
1.27519559e-01, 9.24886749e-01, -4.75508805e-01, -5.42765960e-01,
-1.00917988e+00, -1.38181867e+00, -1.32190961e+00, 1.22737946e+00,
3.60475117e-01, 4.94411259e-01, -9.84878721e-01, -1.27991181e+00,
7.05733451e-01, 6.05978064e-01, 7.24010257e-01, 7.31500866e-01,
-2.10270319e+00, -1.44749054e+00, -4.62989149e-01, 1.88742227e+00,
2.23502013e+00, 1.24196002e+00, -8.39133460e-02, -5.83997089e-01,
7.63111106e-01, 3.59541173e-01, 1.69019230e+00, 3.16779306e-01,
8.04994106e-01, -7.79848130e-01, 4.55373478e-01, -6.99628529e-01,
-8.88776585e-01, 5.58784034e-01, 1.03796435e+00, -1.39833046e+00,
-1.30889596e+00, 1.92064711e+00, -1.03993971e+00, -5.44703609e-01,
-1.25879891e+00, -2.25683759e+00, -1.61033547e-01, 1.76603501e-01,
-2.47327624e-02, 6.42444167e-02, -6.01551357e-01, -7.00803499e-01,
1.03391796e-02, -1.65584150e-01, -6.05071619e-01, -3.43937387e-01,
-2.21285625e-01, -1.86325091e-02, -9.79578217e-01, -1.73186370e-02,
-2.30215061e-02, 9.63819799e-01, 2.14069445e+00, -2.99999601e-01,
-1.06696731e+00, 1.38805597e-01, -1.36281099e+00, -1.71499344e+00,
-2.44679986e-01, 5.14666974e-02, 4.18733154e-01, 1.59951320e+00,
1.00618752e+00, -1.88645728e+00, 1.59363671e+00, -1.70729555e-01,
9.42793430e-02, -7.23224009e-02, 6.02105534e-02, 5.52374283e-01,
6.91499535e-02, 9.86658898e-02, 1.26584605e-01, -5.92396665e-02,
2.90992852e-01, -5.76585947e-01, 6.72979673e-02, 7.38910628e-01,
-8.75090268e-02, 6.94842842e-02, -2.30246430e-01, 1.94134747e-01,
-2.09682980e+00, 7.74844906e-01, 6.15444420e-01, -1.56931485e-01,
1.66940287e+00, -1.45283370e+00, 1.37121988e-02, 1.07479283e+00,
8.83275627e-01, -7.41385657e-01, 5.47602991e-01, -1.02874882e+00,
-1.51215589e+00, 1.55364306e+00, 1.71320405e-01, 2.06341676e-01,
-1.68945906e+00, 7.59196774e-01, -2.83121853e-01, -7.70003972e-01,
-4.35559207e-01, -1.29156247e+00, -7.57105374e-01, -7.85287786e-01,
1.31572406e-01, 1.20446876e+00, -1.46802375e+00, -5.35860581e-01,
5.98595824e-01, -4.62785553e-01, 6.75677761e-02, -5.66531534e-01,
1.09685209e+00, 8.24234006e-01, 1.13620680e+00, 3.96653080e-01,
1.89639322e+00, -9.96802022e-01, -1.24232069e+00, -1.25410024e+00,
-2.06379176e+00, 1.47885801e+00, -1.66257841e+00, 8.79827437e-01,
-1.04440327e+00, -1.42881405e+00, -5.69974045e-01, 1.01359651e-01,
4.86755601e-01, -3.35863751e-01, 2.64648983e-01, 1.27375046e-02,
-6.16941256e-02, 4.08408937e-01, 7.55366537e-01, -7.27771779e-01,
7.75935529e-01, 3.58925729e-01, 6.84118904e-01, 7.47932803e-01,
-5.42091983e-01, 2.08484384e-01, 1.56950556e-01, -1.14533505e+00,
-1.22366245e+00, 1.24506739e-01, -1.02935547e+00, 2.54296268e-01,
-4.03847587e-01, -1.00212453e+00, -1.48661344e+00, 9.75954860e-01,
9.38841010e-01, -1.23894642e+00, -9.78138112e-01, -1.04247682e+00,
-1.03866562e+00, 1.26731592e+00, -3.67089461e-01, -8.48251235e-02,
-1.82675815e+00, 6.06962041e-01, -2.33818172e-01, -4.57014619e-01,
1.52576283e+00, 1.54494449e+00, 6.00789311e-01, -7.17249969e-01,
-6.12826202e-01, 4.53766411e-01, 1.39275445e+00, -1.54383812e+00,
1.54210845e+00, 2.69465492e-01, -2.30273047e+00, 1.73201080e+00,
-2.46161686e+00, -8.25393337e-01, 4.33285105e-01, 7.14390347e-01,
5.46413657e-01, 3.55625054e-01, 4.55356504e-01, -4.69216962e-01,
-9.08073083e-01, -1.55192369e+00, -1.23692861e+00, -1.01703738e+00,
-1.13617318e+00, -6.06261893e-01, 1.31444701e+00, 4.20469663e-01,
1.25780763e-01, -3.17988182e-02, 8.14623566e-01, 8.66121880e-01,
-7.69000333e-01, -1.67427496e-02, -7.96633360e-01, -3.49124840e-01,
-2.07410767e-01, -1.09316367e-01, -2.86175298e-01, 4.21715381e-01,
1.22897221e-01, -2.05947043e-01, 7.31217030e-01, -8.02955705e-01,
8.88777313e-02, 2.07183542e-01, -4.79090236e-01, -6.23960583e-01,
-4.50498790e-01, -1.08117179e-01, -2.59395547e-01, -7.48280208e-01,
3.88011905e-01, 2.54908503e-01, 8.52262132e-01, 4.77972889e-01,
-8.33500747e-02, -1.41622779e+00, -2.49822422e-01, -2.28753939e-01,
-2.26889536e-01, -2.45202952e-01, 3.17116703e-01, -1.19760575e+00,
7.04262050e-02, -5.31419343e-02, -7.31634189e-01, -4.17957184e-01,
3.77288107e-01, 7.69283048e-01, 1.55929725e+00, -1.01963387e+00,
9.07556960e-01, -4.98822527e-01, 1.02488029e+00, 5.58381436e-01,
-2.14274914e+00, -6.94806179e-01, -1.11654335e+00, -1.11325319e+00,
-1.10016520e+00, 5.18861155e-01, -1.04176598e+00, -8.66814672e-01,
2.36604302e+00, -3.18431467e-01, 2.91334051e+00, -6.61828903e-02,
-1.26603821e-02, -1.45414666e-01, 4.78580610e-02, -2.09898537e-03,
-6.69714780e-02, 1.05549065e+00, -8.84106729e-02, -9.18073007e-04,
1.25938385e+00, -8.14172470e-01, -2.59554042e-01, -6.95466246e-01,
1.08730831e+00, -9.67021920e-01, 5.84575935e-02, -1.71321175e+00,
-1.26317109e-01, -2.90733362e-01, 7.47312951e-03, -1.45607222e+00,
4.60382102e-01, 1.61288034e+00, -5.28648252e-01, 1.66048408e-01,
8.34903372e-01, 4.74884503e-01, 5.04686505e-01, 4.95510854e-01,
-1.20924643e-01, 2.99423740e-01, 1.09738018e+00, 1.50838843e-01,
-2.87229078e-01, -1.24761215e+00, 7.36582234e-01, -2.77173578e+00,
-3.74992668e+00, 5.41312143e-01, -4.37583398e-01, -1.69064854e-02,
1.84765431e+00, 5.73052756e-01, -1.06164050e+00, 5.07717049e-02,
4.25819917e-02, -2.92715384e-01, -2.03200363e-01, -5.84490589e-01,
-3.57083164e-01, 9.10876306e-01, 2.52143752e-01, 2.63129337e-02,
3.83262339e-01, 7.74313729e-01, -3.60963951e-01, -7.70989956e-02,
7.56541998e-01, 1.09766125e+00, 8.20902509e-01, 2.58690757e-01,
1.25444572e+00, 5.71737922e-01, 2.55898541e-01, -8.80233282e-01,
1.78192270e-01, 2.42501217e-01, -1.30266510e+00, -2.48044014e-02,
1.07537714e-01, 1.67386472e-01, -1.11797061e-01, -6.35950485e-02,
8.00025515e-02, -1.32397319e-01, -6.58003041e-03, 3.03937065e-01,
-1.27135161e-01, 1.01363440e-01, -8.82766995e-01, 8.44379448e-01,
-5.09627327e-01, -1.03326533e-01, -3.15431942e-01, 5.37076573e-01,
3.26753114e+00, 4.15751153e-01, 2.56849348e-01, 5.14462581e-01,
-2.61730161e-02, -3.28715744e-02, 1.88278800e-01, -1.19832919e+00,
-1.19590287e+00, -1.11394334e+00, 2.17055714e+00, 7.96829829e-01,
-1.85619100e+00, -1.07888882e+00, -2.30865383e-02, -2.40273840e-01,
4.39953192e-01, -5.29613217e-01, 6.69906410e-01, 1.15145012e+00,
6.06638031e-01, 5.99079947e-01, 9.16942482e-01, -9.66304057e-03,
5.91654439e-02, 4.37388222e-01, 1.18295465e+00, -1.64263112e+00,
-1.03293336e+00, -1.18222197e+00, -6.33519878e-02, 2.27962536e-01,
1.66108232e+00, -1.23851592e+00, -1.43787196e+00, 8.87857019e-01,
-1.19151817e+00, -1.47236056e+00, 3.50282869e-01, 1.06004408e+00,
-4.26199859e-01, 4.37361363e-01, -2.50084772e-02, 8.67900174e-01,
5.37760532e-01, 8.14530962e-02, 6.62491540e-01, 1.37045014e-01,
-7.01697152e-01, -4.21657704e-01, 7.83331329e-01, 7.70034379e-01,
1.28212695e+00, 2.53511223e+00, -3.24006440e-01, -3.41291501e-01,
-2.49147123e-01, 1.70446849e-01, -1.37162583e-01, -4.81858038e-01,
-4.86338762e-01, 6.85229336e-01, -1.55517356e-01, 1.83307879e-01,
-1.49384229e-01, 1.56007957e-02, 2.40326236e-01, 1.07336933e+00,
-3.99730396e-01, -3.33898955e-01, 3.40244317e-01, -4.92340248e-01,
-4.95815316e-01, 6.22512483e-02, 5.08544685e-01, -2.83347226e-01,
-3.08918714e-01, 1.08292681e+00, -5.29213035e-01, -2.23617454e-02,
2.62202341e-01, 1.02718292e+00, -4.49869615e-01, 3.34969168e-01,
-3.43212844e-01, 6.16483430e-01, -9.47779684e-01, -4.78857633e-01,
-9.98923354e-01, 6.32191682e-01, 2.72973961e-01, -2.96008388e-01,
2.30922383e-01, 2.06884014e-01, 5.21099867e-01, 4.16729600e-01,
-8.26782099e-02, -5.95457632e-01, -2.10804413e-01, -2.93975286e-01,
2.03009273e-01, 1.43593375e+00, -5.49739765e-01, 7.03821943e-01,
-8.28059434e-01, 9.83503607e-01, -1.08534889e+00, -6.27821255e-01,
4.03117722e-01, -2.03629129e-01, -3.95124233e-02, 3.21970160e-01,
-2.71920636e-01, -5.10057329e-01, -1.04202621e-01, 3.20627596e-01,
2.47291994e-01, -1.04118706e-01, -3.16545995e-01, 3.35604518e-01,
-5.69433751e-03, -2.38370280e-01, 3.32991597e-01, -6.11308103e-02,
-2.53167433e-01, -1.08142836e-01, 6.37938271e-01, 4.74190570e-01,
-2.08524397e-01, 9.95434184e-01, 6.78813341e-01, 1.48137820e-01,
3.66997494e-02, 1.12354066e-01, 1.33086253e+00, 6.58021086e-01,
8.35274797e-01, -1.27346531e+00, -1.19618900e+00, -1.06490676e+00,
-1.15966483e+00, 2.19041187e+00, -2.40703158e-01, -1.04679828e+00,
5.26221976e-01, 9.57229098e-01, -3.17806974e-04, 5.25084392e-02,
1.03682933e-01, -1.14126721e-01, 9.97109170e-02, 1.03757185e-01,
4.10600042e-01, 5.78106727e-01, 1.01148051e+00, -4.79936067e-01,
-1.32848972e+00, -2.20624284e-01, -1.42350771e+00, -1.17722544e-01,
-4.78121525e-01, -7.67503366e-01, 1.88827881e-01, -5.96936872e-01,
1.03021358e+00, 2.60795689e-02, -3.33047585e-03, -4.92126750e-01,
1.11066769e+00, 1.01787072e+00, -1.20277626e+00, 7.53480929e-01,
-1.13091340e+00, -4.33899313e-01, -1.50633595e+00, -1.39755762e+00,
1.68206963e+00, 3.05696594e-02, -4.92375834e-01, 4.42329013e-01,
2.13249223e+00, -1.16923258e+00, -7.43727428e-01, 9.63488691e-01,
-1.40534085e+00, 1.30882281e+00, -1.22007716e+00, 7.24629619e-01,
3.95142700e-01, -2.07336912e-01, 2.55075616e-01, -8.44328303e-02,
-3.94616429e-01, -7.84743985e-02, -2.05229049e-01, -5.23357338e-01,
3.31521045e-02, -1.46889669e+00, 4.00045935e-01, 1.27852950e-02,
-2.18957838e+00, -9.22286699e-01, -1.00263590e-02, -2.15168189e-03,
-9.58758007e-01, 1.40708729e-03, 4.08836699e-02, -3.10267180e-03,
-1.97213536e-01, -1.57090203e-04, -6.56863610e-04, -3.41218036e-03,
3.65899320e-02, 1.01258475e-02, -4.00850464e-03, 1.39965489e-03,
1.87395867e+00, -2.50914219e-04, -1.36854426e-02, -5.59371636e-01,
8.60638162e-01, -5.89030315e-02, -3.06438078e-01, -6.36052431e-02,
6.98020295e-02, 1.09568657e-01, -4.95597777e-01, -1.45987919e-01,
6.23584012e-01, -5.52485913e-01, 3.43299341e-01, -4.26641584e-01,
-6.99084799e-02, -4.55572848e-01, 2.75544065e-01, -6.38720353e-01,
3.68422013e-01, 4.06005693e-01, -2.99449896e-01, 9.50228459e-01,
4.76344007e+00, 9.73504981e-02, -3.58437771e-01, 1.98629533e-02,
9.93927115e-01, 5.36396410e-01, 5.36029608e-01, 1.42388869e+00,
4.76638501e-01, 4.36781372e-01, -4.46066365e-01, -4.20019724e-01,
5.00997260e-01, 5.30703691e-01, 1.74726375e-01, 2.35885059e-01,
-3.33462461e-01, -8.84958758e-01, 1.70318874e-01, -5.73460407e-01,
-5.17774883e-01, -3.75158795e-02, 1.68564324e+00, 4.88754154e-01])
</code></pre>
<p>Dataframe 'B'</p>
<pre><code>[10000, 10000, 10000, 1000, 1000, 1000, 5000, 5000, 5000,
1000, 5000, 5000, 10000, 5000, 1000, 1000, 5000, 1000,
10000, 5000, 5000, 1000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 1000, 1000, 1000, 1000, 1000, 1000, 1000,
5000, 5000, 5000, 5000, 5000, 1000, 5000, 5000, 5000,
5000, 10000, 10000, 1000, 10000, 1000, 10000, 10000, 10000,
1000, 5000, 7500, 7500, 1000, 1000, 1000, 1000, 5000,
5000, 500, 500, 500, 500, 7500, 7500, 5000, 5000,
10000, 10000, 5000, 1000, 10000, 5000, 10000, 10000, 1000,
5000, 5000, 5000, 1000, 5000, 10000, 5000, 10000, 10000,
1000, 1000, 5000, 5000, 10000, 1000, 10000, 1000, 1000,
10000, 10000, 1000, 5000, 10000, 5000, 10000, 1000, 1000,
1000, 5000, 1000, 1000, 1000, 5000, 5000, 1000, 1000,
5000, 5000, 1000, 5000, 10000, 1000, 1000, 5000, 10000,
5000, 10000, 10000, 5000, 5000, 10000, 10000, 1000, 1000,
5000, 10000, 10000, 10000, 1000, 1000, 1000, 300, 300,
5000, 5000, 5000, 5000, 5000, 5000, 5000, 5000, 10000,
10000, 1000, 1000, 1000, 300, 5000, 5000, 1000, 1000,
300, 300, 5000, 10000, 10000, 10000, 10000, 1000, 1000,
1000, 1000, 300, 300, 5000, 1000, 1000, 1000, 300,
300, 300, 5000, 5000, 10000, 10000, 1000, 1000, 300,
300, 300, 300, 10000, 10000, 1000, 300, 300, 5000,
5000, 5000, 10000, 10000, 10000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 1000, 300, 300, 5000, 5000, 1000,
1000, 300, 300, 5000, 10000, 1000, 1000, 1000, 300,
300, 5000, 5000, 10000, 10000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 10000, 1000, 300, 300, 5000, 5000,
1000, 1000, 1000, 300, 300, 300, 300, 300, 300,
5000, 10000, 10000, 1000, 1000, 1000, 1000, 1000, 300,
300, 300, 300, 10000, 1000, 1000, 300, 300, 5000,
10000, 10000, 1000, 1000, 5000, 5000, 5000, 10000, 1000,
1000, 300, 300, 5000, 5000, 1000, 1000, 1000, 300,
5000, 5000, 10000, 10000, 10000, 10000, 300, 300, 300,
300, 5000, 5000, 5000, 10000, 10000, 1000, 300, 300,
300, 5000, 10000, 10000, 10000, 1000, 1000, 1000, 1000,
5000, 1000, 1000, 1000, 300, 300, 300, 300, 5000,
5000, 5000, 10000, 10000, 1000, 300, 300, 5000, 10000,
10000, 5000, 5000, 5000, 5000, 5000, 5000, 5000, 5000,
5000, 5000, 5000, 10000, 10000, 5000, 10000, 5000, 10000,
1000, 1000, 10000, 10000, 10000, 10000, 10000, 5000, 1000,
10000, 1000, 1000, 5000, 5000, 5000, 5000, 5000, 500,
500, 7500, 1000, 5000, 1000, 5000, 1000, 5000, 5000,
5000, 5000, 10000, 1000, 10000, 10000, 10000, 1000, 5000,
5000, 1000, 5000, 10000, 1000, 5000, 5000, 5000, 10000,
5000, 10000, 5000, 1000, 5000, 10000, 1000, 5000, 5000,
5000, 1000, 210, 226, 442, 3511, 3511, 3511, 2310,
1619, 2404, 1768, 837, 2241, 2382, 3774, 4432, 973,
580, 1501, 2369, 473, 4626, 4635, 439, 1620, 850,
1620, 1107, 2310, 390, 1982, 1587, 1497, 1588, 730,
1619, 6546, 1000, 1000, 10000, 10000, 1000, 10000, 1000,
1000, 10000, 10000, 1000, 5000, 5000, 5000, 10000, 1000,
10000, 10000, 1000, 5000, 10000, 10000, 5000, 5000, 10000,
5000, 10000, 10000, 5000, 5000, 5000, 10000, 1000, 5000,
5000, 1000, 10000, 10000, 10000, 5000, 5000, 5000, 1000,
5000, 5000, 1000, 10000, 10000, 1000, 1000, 10000, 1000,
1000, 1000, 5000, 10000, 10000, 1000, 5000, 1000, 5000,
5000, 10000, 10000, 10000, 5000, 5000, 1000, 1000, 10000,
1000, 1000, 5000, 10000, 5000, 1000, 5000, 1000, 10000,
5000, 10000, 1000, 5000, 5000, 10000, 10000, 1000, 5000,
10000, 1000, 10000, 10000, 5000, 5000, 10000, 10000, 1000,
5000, 10000, 1000, 1000, 5000, 5000, 5000, 5000, 10000,
10000, 10000, 1000, 5000, 1000, 5000, 10000, 1000, 10000,
10000, 5000, 1000, 5000, 1000, 1000, 10000, 1000, 5000,
1000, 10000, 10000, 10000, 1000, 5000, 10000, 10000, 5000,
5000, 10000, 1000, 5000, 1000, 1000, 10000, 10000, 1000,
10000, 10000, 1000, 1000, 1000, 5000, 1000, 5000, 10000,
1000, 5000, 10000, 296, 296, 296, 296, 296, 296,
296, 255, 588, 319, 444, 468, 432, 600, 480,
588, 352, 600, 396, 372, 420, 3650, 3645, 248,
2950, 208, 5000, 10000, 10000, 10000, 10000, 1000, 500,
500, 500, 500, 10000, 1000, 5000, 5000, 5000, 5000,
5000, 500, 10000, 10000, 10000, 10000, 10000, 600, 2739,
289, 2753, 277, 4751, 9570, 9601, 6186, 5116, 7996,
9601, 9613, 8024, 9601, 948, 1440, 600, 10000, 10000,
10000, 10000, 10000, 5000, 5000, 5000, 5000, 5000, 1534,
7980, 845, 823, 493, 721, 325, 8280, 5132, 7632,
2606, 5025, 5190, 7468, 6304, 8760, 9829, 8002, 8393,
9097, 9470, 678, 676, 658, 658, 655, 643, 2004,
516, 2288, 1651, 1093, 4111, 695, 1289, 1736, 1656,
1656, 1656, 452, 4233, 815, 6569, 4613, 2366, 2330,
1618, 2403, 1346, 1619, 396, 4634, 2847, 5432, 2368,
2368, 7127, 1527, 1533, 6167, 985, 1836, 1821, 1836,
629, 747, 5511, 5491, 1656, 2048, 2048, 2048, 2048,
2048, 2048, 1024, 1024, 1024, 1024, 1024, 2048, 1024,
1024, 1024, 1024, 4463, 9526, 1000, 9093, 3000, 3000,
3000, 3000, 3000, 3000, 617, 1548, 2602, 1512, 979,
549, 2495, 1940, 7601, 2058, 6001, 8808, 8201, 2163,
2163, 5701, 5901, 3653, 3653, 313, 313, 5101, 6501,
6601, 7201, 8701, 8901, 9301, 9701, 5401, 6101, 6401,
7001, 7201, 7401, 7401, 7901, 9901, 9901, 4914, 7842,
2726, 9201, 7712, 1096, 1100, 6801, 768, 1096, 4655,
1424, 786, 1687, 5051, 1256, 3549, 8808, 542, 542,
720, 542, 720, 720, 542, 1506, 825, 894, 825,
5301, 5701, 5701, 6001, 6601, 6701, 8101, 9101, 1020,
4560, 3845, 3922, 4491, 3886, 2042, 4106, 1900, 1045,
2229, 6712, 664, 4317, 3948, 3566, 623, 8420, 3089,
3362, 1656, 1776, 1656, 1656, 1656, 1550, 1540, 1248,
1247, 1416, 1540, 1248, 8500, 8800, 7000, 9700, 8500,
8600, 9800, 8900, 9200, 10000, 2400, 4500, 2200, 1300,
5800, 1800, 7500, 3700, 3500, 2200, 4000, 1500, 3600,
6000, 3400, 9000, 259, 1700, 8300, 3800, 4300, 4300,
1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800, 1800,
1800, 1800, 1800, 1800, 1800, 6306, 6492, 6264, 6150,
6276, 6282, 6342, 6102, 6588, 6300, 7104, 6234, 9989,
9995, 9996, 9993, 9991, 9986, 9994, 9993, 9985, 9986,
9994, 9999, 9989, 9997, 9991, 10000, 10000, 9700, 7500,
7900, 9600, 1200, 2100, 1500, 6900, 4900, 3800, 1600,
2200, 3600, 6000, 5700, 7700, 3200, 1500, 8200, 2800,
4300, 5400, 1600, 10000, 2600, 5600, 2000, 5500, 8600,
6300, 4700, 3500, 8600, 3900, 6500, 5300, 6800, 5800,
3800, 8400, 4600, 1900, 3400, 3000, 5800, 7000, 5900,
6100]
</code></pre>
<p>I want to compute the Spearman correlation between the two and plot it, either as a scatter plot or as a heatmap, but I don't have any idea how it can be done.</p>
<p>Any resource or reference will be helpful.
Thanks. </p>
|
<p>Check this code:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
A = [...] # insert here your list of values for A
B = [...] # insert here your list of values for B
df = pd.DataFrame({'A': A,
'B': B})
corr = df.corr(method = 'spearman')
sns.heatmap(corr, annot = True)
plt.show()
</code></pre>
<p>I get this correlation matrix:</p>
<p><a href="https://i.stack.imgur.com/j3xn0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j3xn0.png" alt="enter image description here"></a></p>
<p>The column <code>A</code> is highly correlated with itself (obviously, this always happens), while the correlation between column <code>A</code> and <code>B</code> is very low.</p>
<p>Version info:</p>
<pre><code>Python 3.7.0
matplotlib 3.2.1
pandas 1.0.4
seaborn 0.10.1
</code></pre>
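<p>For the scatter variant mentioned in the question, you can plot one column against the other and put the Spearman coefficient in the title (a sketch using the same <code>df</code>):</p>
<pre><code>rho = df['A'].corr(df['B'], method='spearman')
df.plot.scatter(x='A', y='B', title='Spearman rho = {:.3f}'.format(rho))
plt.show()
</code></pre>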
|
python|pandas|matplotlib|seaborn|correlation
| 3
|
378,155
| 62,062,929
|
Flatten in simple feedforward networks
|
<p>I am working on the CIFAR10 dataset and came across this example in Keras, using data augmentation:</p>
<p><a href="https://keras.io/examples/cifar10_cnn/" rel="nofollow noreferrer">https://keras.io/examples/cifar10_cnn/</a></p>
<p>The example uses a CNN. I want to implement just a simple feedforward network, not a CNN.
Therefore, in order for my simple model to "work", I have to add a <code>Flatten()</code> layer before the output layer, so that the data shapes stay consistent.</p>
<p>However, I have seen using the Flatten() only in CNNs.</p>
<p>I believe that it can be used in simple feedforward networks, but am I missing something?</p>
<p>Below is the model's code that I want to use with the keras example.</p>
<pre><code>model = Sequential()
model.add(Dense(layer_size, input_shape=x_train.shape[1:], activation = "relu"))
model.add(Dense(128, activation = "relu"))
model.add(Dense(64, activation = "relu"))
model.add(Flatten())
model.add(Dense(10, activation = "softmax"))
model.summary()
</code></pre>
<p>Thank you</p>
|
<p>You should <code>Flatten</code> your input:</p>
<pre><code>model = Sequential()
model.add(Flatten(input_shape=x_train.shape[1:]))
model.add(Dense(layer_size, activation = "relu"))
model.add(Dense(128, activation = "relu"))
model.add(Dense(64, activation = "relu"))
model.add(Dense(10, activation = "softmax"))
model.summary()
</code></pre>
<p><code>Flatten</code> flattens an <code>n</code> dimensional tensor into a <code>1</code> dimensional tensor. For example a <code>2x2</code> grayscale image becomes 1 dimensional:</p>
<pre><code>[[255, 127 ],
[154, 123]]
</code></pre>
<p>becomes</p>
<pre><code>[255, 127, 154, 123]
</code></pre>
<p>This way your input color image (3 dimensional , <code>[width, height, channels]</code>) will also become 1 dimensional and fit into a <code>Dense</code> layer. </p>
|
tensorflow|machine-learning|keras|deep-learning
| 0
|
378,156
| 62,077,680
|
How to convert values in list of strings into Pandas DataFrame
|
<p>I would like to convert this list of strings into a Pandas DataFrame with columns ‘Open’, ‘High’, ‘Low’, ‘Close’, ‘PeriodVolume’, OpenInterest’ and ‘Datetime’ as index. How can I extract the values and create the DataFrame? Thanks for your help!</p>
<pre><code>['RequestId: , Datetime: 5/28/2020 12:00:00 AM, High: 323.44, Low: 315.63, Open: 316.77, Close: 318.25, PeriodVolume: 33449103, OpenInterest: 0',
'RequestId: , Datetime: 5/27/2020 12:00:00 AM, High: 318.71, Low: 313.09, Open: 316.14, Close: 318.11, PeriodVolume: 28236274, OpenInterest: 0',
'RequestId: , Datetime: 5/26/2020 12:00:00 AM, High: 324.24, Low: 316.5, Open: 323.5, Close: 316.73, PeriodVolume: 31380454, OpenInterest: 0',
'RequestId: , Datetime: 5/22/2020 12:00:00 AM, High: 319.23, Low: 315.35, Open: 315.77, Close: 318.89, PeriodVolume: 20450754, OpenInterest: 0']
</code></pre>
|
<p>You can use <code>split()</code> and some for loops to put your data into a dictionary, and then pass the dictionary to a DataFrame.</p>
<pre><code>import pandas as pd
# First create the list containing your entries.
entries = [
'RequestId: , Datetime: 5/28/2020 12:00:00 AM, High: 323.44, Low: 315.63,' \
' Open: 316.77, Close: 318.25, PeriodVolume: 33449103, OpenInterest: 0',
'RequestId: , Datetime: 5/27/2020 12:00:00 AM, High: 318.71, Low: 313.09,' \
' Open: 316.14, Close: 318.11, PeriodVolume: 28236274, OpenInterest: 0',
'RequestId: , Datetime: 5/26/2020 12:00:00 AM, High: 324.24, Low: 316.5,' \
' Open: 323.5, Close: 316.73, PeriodVolume: 31380454, OpenInterest: 0',
'RequestId: , Datetime: 5/22/2020 12:00:00 AM, High: 319.23, Low: 315.35,' \
' Open: 315.77, Close: 318.89, PeriodVolume: 20450754, OpenInterest: 0'
]
# Next create a dictionary in which we will store the data after processing.
data = {
'Datetime': [], 'Open': [], 'High': [], 'Low': [],
'Close': [], 'PeriodVolume': [], 'OpenInterest': []
}
# Now split your entries by ','
split_entries = [entry.split(',') for entry in entries]
# Loop over the list
for entry in split_entries:
    # and loop over each of the inner lists
    for ent in entry:
        # Split by ': ' to get the 'key'
        # I have added the [1:] as there is a space before each
        # column name which needs to be cut out for this to work.
        key = ent.split(': ')[0][1:]
        # Now we check if the key is in the keys of the dictionary
        # we created earlier and append the value to the list
        # associated with that key if so.
        if key in data.keys():
            data[key].append(ent.split(': ')[1])
# Now we can pass the data into panda's DataFrame class
dataframe = pd.DataFrame(data)
# Then call one more method to set the index
dataframe = dataframe.set_index('Datetime')
</code></pre>
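<p>If you then want real timestamps and numeric values rather than strings, a possible follow-up (assuming the column contents shown in the question):</p>
<pre><code>dataframe.index = pd.to_datetime(dataframe.index)  # parse '5/28/2020 12:00:00 AM' strings
dataframe = dataframe.astype(float)                # the remaining columns are numeric strings
</code></pre>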
|
python|pandas|string
| 1
|
378,157
| 62,384,998
|
Extract and map similar textresult with base text, convert into two columns using Pandas
|
<p>The following is gui result dataframe.</p>
<pre><code>Item_id Similarity_Id Result
100 0 textboxerror
101 100 text_input_issue
102 0 menuitemerror
103 100 text_click_issue
104 100 text_caps_error
105 102 menu_drop_down_error
106 100 text_lower_error
107 102 menu_item_null
</code></pre>
<p>In the above dataframe, Item_id and Result are correlated: each Item_id has one Result.
Based on the Similarity_Id, I need to create two different columns:
sentence1 holds the base sentence and sentence2 holds the similar sentences.
For example, four Results share the same Similarity_Id of 100:
Item_ids 101, 103, 104 and 106 have Results similar to that of Item_id 100.
So in sentence1 I need the Result belonging to Similarity_Id 100, and in sentence2 I need the similar Results of Item_id 100.</p>
<p>The final result needs to be as follows, </p>
<pre><code>index sentence1 sentence2 Similarity_Id
1 textboxerror text_click_issue 100
2 textboxerror text_caps_error 100
3 textboxerror text_caps_error 100
4 textboxerror text_lower_error 100
5 menuitemerror menu_drop_down_error 102
6 menuitemerror menu_item_null 102
7 textboxerror Null 0
8 menuitemerror Null 0
</code></pre>
<p>I tried groupby and merge, melt and unique,
but the desired result does not come out.</p>
<pre><code>df1 = pd.read_csv("/test.csv")
group = df1.groupby('Result')
df2 = group.apply(lambda x: x['Result'].unique())
print ("df2: \n", df2)
print (df1.Result.apply(pd.Series))
df3 = df1.Result.apply(pd.Series).merge(df1, left_index = True, right_index = True).drop(["Result"], axis = 1) \
.melt(id_vars = ['Item_id', 'Similarity_Id'], value_name = "Result").drop("variable", axis = 1)\
.dropna()
print (df3)
</code></pre>
<p>How can I achieve this?
Thanks,
Sundara</p>
|
<p>We can use <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.merge.html" rel="nofollow noreferrer"><code>pd.merge</code></a> to <code>left</code> merge the dataframe <code>df</code> with itself on <code>Similarity_Id</code> and <code>Item_id</code>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow noreferrer"><code>DataFrame.rename</code></a> to rename the columns as required:</p>
<pre><code>df1 = (
pd.merge(
df[['Similarity_Id', 'Result']], df[['Item_id', 'Result']],
left_on='Similarity_Id', right_on='Item_id', how='left')
.rename(columns={'Result_x': 'sentence1', 'Result_y': 'sentence2'})
.filter(items=['sentence1', 'sentence2', 'Similarity_Id'])
)
</code></pre>
<hr>
<pre><code># print(df1)
sentence1 sentence2 Similarity_Id
0 textboxerror NaN 0
1 text_input_issue textboxerror 100
2 menuitemerror NaN 0
3 text_click_issue textboxerror 100
4 text_caps_error textboxerror 100
5 menu_drop_down_error menuitemerror 102
6 text_lower_error textboxerror 100
7 menu_item_null menuitemerror 102
</code></pre>
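<p>If you want the literal <code>Null</code> placeholder from the expected output instead of <code>NaN</code>, one option is:</p>
<pre><code>df1['sentence2'] = df1['sentence2'].fillna('Null')
</code></pre>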
|
python|pandas|dataframe|nlp
| 1
|
378,158
| 62,150,852
|
Pandas: create columns with deviation from mean in each group
|
<p>Consider the following <code>DataFrame</code> in Python:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'id':[0]*3+[1]*3,'y':np.random.randn(6),'x':np.random.randn(6)})
</code></pre>
<p>which gives</p>
<pre><code> id y x
0 0 0.721757 1.595646
1 0 0.359601 1.128473
2 0 1.134922 2.317929
3 1 0.290152 -1.901336
4 1 0.128742 0.982683
5 1 0.556914 0.745208
</code></pre>
<p>Note that <code>y</code> and <code>x</code> are grouped according to <code>id</code>. I want to creat the following <code>DataFrame</code></p>
<pre><code> id y x y_md x_md
0 0 0.721757 1.595646 -0.017003 -0.085037
1 0 0.359601 1.128473 -0.379159 -0.552209
2 0 1.134922 2.317929 0.396162 0.637246
3 1 0.290152 -1.901336 -0.035117 -1.843521
4 1 0.128742 0.982683 -0.196527 1.040498
5 1 0.556914 0.745208 0.231644 0.803023
</code></pre>
<p>where</p>
<ul>
<li><code>y_md</code> contains the value of deviation from its group mean (<code>id</code>=<code>0</code> & <code>1</code>)</li>
<li><code>x_md</code> contains the value of deviation from its group mean (<code>id</code>=<code>0</code> & <code>1</code>)</li>
</ul>
<p>What I came up with is</p>
<pre><code>df_g = df.groupby('id')
yy = pd.Series( df['y'].values - df_g['y'].mean().repeat(3).values )
xx = pd.Series( df['x'].values - df_g['x'].mean().repeat(3).values )
pd.concat([df,yy.rename('y_md'), xx.rename('x_md')],axis=1)
</code></pre>
<p>but it does not look good to me. I wonder if there is an elegant one liner or similar for the same result? I would appreciate your help.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for processing multiple columns, subtract by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sub.html" rel="nofollow noreferrer"><code>DataFrame.sub</code></a>, change columns names by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_suffix.html" rel="nofollow noreferrer"><code>DataFrame.add_suffix</code></a> and append to original by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p>
<pre><code>c = ['x','y']
df = df.join(df[c].sub(df.groupby('id')[c].transform('mean')).add_suffix('_md'))
print (df)
id y x x_md y_md
0 0 0.721757 1.595646 -0.085037 -0.017003
1 0 0.359601 1.128473 -0.552210 -0.379159
2 0 1.134922 2.317929 0.637246 0.396162
3 1 0.290152 -1.901336 -1.843521 -0.035117
4 1 0.128742 0.982683 1.040498 -0.196527
5 1 0.556914 0.745208 0.803023 0.231645
</code></pre>
<p>Or it is possible to assign new column names:</p>
<pre><code>df[['x_md','y_md']] = df[['x','y']].sub(df.groupby('id')[['x','y']].transform('mean'))
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
378,159
| 62,263,547
|
Python - Numpy 3D array - concatenate issues
|
<p>I have a txt file with 46 entries that looks like this -</p>
<pre><code>2020-05-24T10:57:12.743606#[0.0, 0.0, 0.0653934553265572, 0.0, 1.0, 0.0]
2020-05-24T10:57:12.806380#[0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
2020-05-24T10:57:12.869022#[0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
</code></pre>
<p>The first argument is a timestamp of the camera image taken. For each timestamp, there are 3 RGB images. </p>
<p>My goal is to concatenate them along the channel axis(axis = 2). The image dimension is 70x320x3. So the desired output is 46x70x320x9. </p>
<p>I would need to wait till all 3 images are recognised, then append them to a list and feed that to a numpy array. I'm failing, as the output dimension I'm getting is <code>46x138x70x320x3</code> before concatenating (138 = 46 rows × 3 images, because the <code>images</code> list keeps growing across rows). Concatenation doesn't work when implemented with <code>axis = 2 or 3</code>.</p>
<p>From this how can I get <code>46x70x320x9</code>?</p>
<p>Code -</p>
<pre><code>with open("train.txt", 'r') as f:
data = f.readlines()[:]
images = []
image_concat = []
labels = []
for row in data:
for camera in ['center', 'left', 'right']:
img_id, label = row.strip("\n").split("#")
img_path = os.path.join(IMG_PATH, '{}-{}.jpg'.format(camera, img_id))
image = cv2.imread(img_path)
images.append(image)
if camera == 'right':
image_concat.append(images)
X_data = np.array(image_concat)
print(X_data.shape)
</code></pre>
<p><strong>Referred links -</strong></p>
<p><a href="https://stackoverflow.com/questions/43264816/need-help-combining-two-3-channel-images-into-6-channel-image-python">Need help combining two 3 channel images into 6 channel image Python</a></p>
<p><a href="https://stackoverflow.com/questions/51439583/numpy-concatenate-two-arrays-along-the-3rd-dimension">numpy: concatenate two arrays along the 3rd dimension</a></p>
<p><a href="https://stackoverflow.com/questions/51017203/numpy-concatenate-multiple-arrays-arrays">numpy concatenate multiple arrays arrays</a></p>
<p><a href="https://stackoverflow.com/questions/61316456/numpy-concatenate-over-dimension">numpy concatenate over dimension</a></p>
<p>Please help. Any help will be appreciated. Thank you.</p>
|
<p>Here is an implementation with dummy data</p>
<pre><code>collect = []
for i in range(46):
#create dummy arrays, simulate list of 3 RGB images
a = [np.zeros((70,320,3)) for b in range(3)]
# a[0].shape: (70,320,3)
#concatenate along axis 2
b = np.concatenate(a, axis=2)
# b.shape: (70,320,9)
#create new axis in position zero
b = b[np.newaxis, ...]
# b.shape : (1,70,320,9)
collect.append(b)
output = np.concatenate(collect, axis=0)
output.shape
(46, 70, 320, 9)
</code></pre>
<h3>edit:</h3>
<pre><code># IIUC:
# left camera makes 70,320,3 at time t
# right camera makes 70,320,3 at time t
# center camera makes 70,320,3 at time t
# these need to be concatenated to 70,320,9
# if so, you can use a dictionary
#initialise dict
collected_images = {}
for timepoint, row in enumerate(data):
#at every timepoint, initialise dict entry
collected_images[timepoint] = []
for camera in ['center', 'left', 'right']:
image = cv2.imread('path/to/image')
collected_images[timepoint].append(image)
# now you have all images in a dictionary
# to generate the array, you can
output = []
for key, val in collected_images.items():
temp = np.concatenate(val, axis=2)
output.append(temp[np.newaxis, ...])
output = np.concatenate(output, axis=0)
</code></pre>
|
python|numpy
| 3
|
378,160
| 62,132,312
|
Clip or threshold a tensor using condition and zero pad the result in PyTorch
|
<p>let's say I have a tensor like this</p>
<pre><code>w = [[0.1, 0.7, 0.7, 0.8, 0.3],
[0.3, 0.2, 0.9, 0.1, 0.5],
[0.1, 0.4, 0.8, 0.3, 0.4]]
</code></pre>
<p>Now I want to eliminate certain values based on some condition (for example greater than 0.5 or not)</p>
<pre><code>w = [[0.1, 0.3],
[0.3, 0.2, 0.1],
[0.1, 0.4, 0.3, 0.4]]
</code></pre>
<p>Then pad it to equal length:</p>
<pre><code>w = [[0.1, 0.3, 0, 0],
[0.3, 0.2, 0.1, 0],
[0.1, 0.4, 0.3, 0.4]]
</code></pre>
<p>and this is how I implemented it in pytorch:</p>
<pre><code>w = torch.rand(3, 5)
condition = w <= 0.5
w = [w[i][condition[i]] for i in range(3)]
w = torch.nn.utils.rnn.pad_sequence(w)
</code></pre>
<p>But apparently this is going to be extremely slow, mainly because of the list comprehension.
Is there any better way to do it?</p>
|
<p>Here's one straightforward way using <em>boolean masking</em>, <a href="https://pytorch.org/docs/master/generated/torch.split.html" rel="nofollow noreferrer">tensor splitting</a>, and then eventually padding the split tensors using <a href="https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_sequence.html" rel="nofollow noreferrer"><code>torch.nn.utils.rnn.pad_sequence(...)</code></a>.</p>
<pre><code># input tensor to work with
In [213]: w
Out[213]:
tensor([[0.1000, 0.7000, 0.7000, 0.8000, 0.3000],
[0.3000, 0.2000, 0.9000, 0.1000, 0.5000],
[0.1000, 0.4000, 0.8000, 0.3000, 0.4000]])
# values above this should be clipped from the input tensor
In [214]: clip_value = 0.5
# generate a boolean mask that satisfies the condition
In [215]: boolean_mask = (w <= clip_value)
# we need to sum the mask along axis 1 (needed for splitting)
In [216]: summed_mask = boolean_mask.sum(dim=1)
# a sequence of splitted tensors
In [217]: splitted_tensors = torch.split(w[boolean_mask], summed_mask.tolist())
# finally pad them along dimension 1 (or axis 1)
In [219]: torch.nn.utils.rnn.pad_sequence(splitted_tensors, 1)
Out[219]:
tensor([[0.1000, 0.3000, 0.0000, 0.0000],
[0.3000, 0.2000, 0.1000, 0.5000],
[0.1000, 0.4000, 0.3000, 0.4000]])
</code></pre>
<p><strong>A short note on efficiency</strong>: Using <a href="https://pytorch.org/docs/master/generated/torch.split.html" rel="nofollow noreferrer"><code>torch.split()</code></a> is super efficient since it returns the split tensors as <em>views</em> of the original tensor (i.e. no copy is made).</p>
|
python|pytorch|vectorization|tensor|zero-padding
| 1
|
378,161
| 62,103,565
|
cannot import name 'CenterCrop' from 'tensorflow.keras.layers.experimental.preprocessing'
|
<p>I am using anaconda env. </p>
<p>Python 3.7
keras : 2.3.1
tensorflow: 2.1.0</p>
<p>when i want to use CenterCrop and Rescaling modules, pycharm gives me error.</p>
<pre><code>from tensorflow.keras.layers.experimental.preprocessing import CenterCrop
from tensorflow.keras.layers.experimental.preprocessing import Rescaling
</code></pre>
<p>error messages is:</p>
<pre><code>D:\NewAnaconda\envs\Tensor_Turkcell\python.exe "C:/Users/Burak Ekincioğlu/Dekstop/TENSORFLOW/tensor_intro.py"
Traceback (most recent call last):
File "C:/Users/Burak Ekincioğlu/Dekstop/TENSORFLOW/tensor_intro.py", line 5, in <module>
from tensorflow.keras.layers.experimental.preprocessing import CenterCrop
ImportError: cannot import name 'CenterCrop' from 'tensorflow.keras.layers.experimental.preprocessing' (D:\NewAnaconda\envs\Tensor_Turkcell\lib\site-packages\tensorflow_core\python\keras\api\_v2\keras\layers\experimental\preprocessing\__init__.py)
</code></pre>
|
<p>I've tried the import with tensorflow 2.1.0 (keras 2.2.4 by default) and it gave me the same error you are encountering.</p>
<p>Using Tensorflow 2.2.0 with keras 2.3.0 works fine.</p>
<p>So you just need to upgrade tensorflow.</p>
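<p>For reference, a minimal upgrade command might look like this (the version pin simply mirrors the combination mentioned above):</p>
<pre><code>pip install --upgrade tensorflow==2.2.0
</code></pre>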
|
python|tensorflow|keras
| 1
|
378,162
| 62,161,183
|
pandas unique values with condition
|
<p>I am working with a pandas DataFrame and I need to loop through the unique values of a column.
Such a column might contain values that I don't want to loop through, for instance <code>''</code>.</p>
<p>normally I do:</p>
<pre><code>edges = [edge for edge in estados['EDGE'].unique() if edge != '']
for edge in edges:
pass
</code></pre>
<p>My question is whether there is a more pandas-idiomatic way to build up the list than the list comprehension.</p>
<p>like:</p>
<pre><code>estados['EDGE'].unique().exclude('')
</code></pre>
<p>THANKS</p>
<p>Note:
I looked for solutions like in:
<a href="https://stackoverflow.com/questions/55074857/nunique-excluding-some-values-in-pandas">nunique excluding some values in pandas</a>
<a href="https://stackoverflow.com/questions/46218652/python-pandas-unique-value-ignoring-nan/46218844">Python pandas unique value ignoring NaN</a>
but these solutions are even less concise than mine.</p>
|
<p>You can use the NOT operator <code>~</code> (the parentheses matter, since <code>~</code> binds more tightly than <code>==</code>):</p>
<pre><code>estados[~(estados['EDGE'] == '')]['EDGE'].dropna().unique()
</code></pre>
<p><strong>OR</strong> Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html" rel="nofollow noreferrer"><code>.ne</code></a>:</p>
<pre><code>estados[estados['EDGE'].ne('')]['EDGE'].dropna().unique()
</code></pre>
|
python|pandas|filtering|unique
| 1
|
378,163
| 62,375,281
|
Input shape for CNN and LSTM
|
<p>I would like to train CNN + LSTM model to drive a car using CNN + LSTM using this code</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, GlobalMaxPool2D, Flatten, Dropout, Dense, TimeDistributed, GRU, LSTM
from tensorflow.keras.applications.vgg16 import VGG16
def vgg(input_shape, num_classes):
# create a VGG16 "model", we will use
# image with shape (224, 224, 3)
vgg = VGG16(
include_top=False,
weights='imagenet',
input_shape=(200, 160, 3)
)
# do not train first layers, I want to only train
# the 4 last layers (my own choice, up to you)
for layer in vgg.layers[:-4]:
layer.trainable = False
# create a Sequential model
model = Sequential()
# add vgg model for 4 input images (keeping the right shape
model.add(
TimeDistributed(vgg, input_shape=(4, 200, 160, 3))
)
# now, flatten on each output to send 5
# outputs with one dimension to LSTM
model.add(
TimeDistributed(
Flatten()
)
)
model.add(LSTM(256, activation='relu', return_sequences=False))
# finalize with standard Dense, Dropout...
model.add(Dense(128, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile('adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
</code></pre>
<p>This is my main code</p>
<pre><code># Splitting data into a training set and test set
train_data = np.load("/mydrive/Mister_car/training_data-1.npy", allow_pickle=True)
train = train_data[:]
test = train_data[:]
print(np.shape(train[5]))
X = np.array([i[0] for i in train]).reshape(-1, 4, WIDTH, HEIGHT, 3)
Y = np.array([i[1] for i in train])#.reshape(-1, 9)
x_test = np.array([i[0] for i in test]).reshape(-1, 4, WIDTH, HEIGHT, 3)
y_test = np.array([i[1] for i in test])#.reshape(-1, 9)
# start training
model.fit(x=X,
y=Y,
epochs=10,
validation_data=(x_test, y_test)
)
# save the whole model
model.save(MODEL_DIR)
</code></pre>
<p>I have an array and each element in this array has 2 elements:
array[0] is a sequence of 4 images with shape (200,160,3)
array[1] is an array with 9 elements</p>
<p>But I get the following error after last epoch while the model is saving</p>
<p><a href="https://i.stack.imgur.com/xLRQH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xLRQH.jpg" alt="enter image description here"></a></p>
|
<p>I think you need to remove the <code>TimeDistributed()</code> wrapper around <code>vgg</code>, because what it does is add another dimension. So, you end up with 5 instead of 4, which is causing the error. Also remove the -1 in this line:</p>
<pre><code>X = np.array([i[0] for i in train]).reshape(-1, 4, WIDTH, HEIGHT, 3)
</code></pre>
<p>Beyond that, some parts of your code are missing. If you provide them I'll be able to help with more certainty.</p>
|
python|tensorflow|keras|conv-neural-network
| 0
|
378,164
| 62,327,956
|
How to fill missing values in a dataframe based on group value counts?
|
<p>I have a pandas DataFrame with 2 columns: Year(int) and Condition(string). In column Condition I have a nan value and I want to replace it based on information from groupby operation.</p>
<pre><code>import pandas as pd
import numpy as np
year = [2015, 2016, 2017, 2016, 2016, 2017, 2015, 2016, 2015, 2015]
cond = ["good", "good", "excellent", "good", 'excellent','excellent', np.nan, 'good','excellent', 'good']
X = pd.DataFrame({'year': year, 'condition': cond})
stat = X.groupby('year')['condition'].value_counts()
</code></pre>
<p>It gives:</p>
<pre><code>print(X)
year condition
0 2015 good
1 2016 good
2 2017 excellent
3 2016 good
4 2016 excellent
5 2017 excellent
6 2015 NaN
7 2016 good
8 2015 excellent
9 2015 good
print(stat)
year condition
2015 good 2
excellent 1
2016 good 3
excellent 1
2017 excellent 2
</code></pre>
<p>As the NaN value in the 6th row has year = 2015, and from stat I get that for 2015 the most frequent condition is 'good', I want to replace this NaN value with 'good'.</p>
<p>I have tried with fillna and .transform method but it does not work :(</p>
<p>I would be grateful for any help.</p>
|
<p>I did a little extra transformation to get <code>stat</code> as a dictionary mapping the year to its highest frequency name (credit to <a href="https://stackoverflow.com/a/29919489/13386979">this answer</a>):</p>
<pre><code>In[0]:
fill_dict = stat.unstack().idxmax(axis=1).to_dict()
fill_dict
Out[0]:
{2015: 'good', 2016: 'good', 2017: 'excellent'}
</code></pre>
<p>Then use <code>fillna</code> with <code>map</code> based on this dictionary (credit to <a href="https://stackoverflow.com/a/42849091/13386979">this answer</a>): </p>
<pre><code>In[0]:
X['condition'] = X['condition'].fillna(X['year'].map(fill_dict))
X
Out[0]:
year condition
0 2015 good
1 2016 good
2 2017 excellent
3 2016 good
4 2016 excellent
5 2017 excellent
6 2015 good
7 2016 good
8 2015 excellent
9 2015 good
</code></pre>
|
python|pandas|dataframe|pandas-groupby|fillna
| 1
|
378,165
| 62,133,500
|
Python Panda Dataframe Error while fetching path from .config file
|
<p>Trying to create a data frame using a (.config) file to fetch the file path, but getting an error during creation of the DataFrame from the file below.</p>
<p><strong>Actual file name:rgf_ltd_060520202</strong></p>
<p>Sample structure of my config file (which is pipe separated):</p>
<pre><code>...|/user/Doc/ABC/rgf_ltd_[0-9]*|CSV|Collection
</code></pre>
<p>and from here, when I try to create the data frame by fetching my config file in my script:</p>
<pre><code>import pandas as pd
#fetching details from config file
with open('config','r') as rd:
    lines = rd.readlines()
    for line in lines:
        f_path = ...  # fetching my csv file path (/user/Doc/ABC/rgf_ltd_[0-9]*)
</code></pre>
<p>The above part is working fine, and <strong>/user/Doc/ABC/rgf_ltd_[0-9]*</strong> is also fetched by the Python script when I pass f_path to the read_csv function.</p>
<pre><code>#dataframe
data=pd.read_csv(f_path,sep='|',engine='python')
</code></pre>
<p>and when I execute the above script interpreter throw an error:</p>
<pre><code>No such file or Directory:/user/Doc/ABC/rgf_ltd_[0-9]*
</code></pre>
<p>I am giving this regex to make my path more <strong>dynamic</strong>. </p>
|
<p><code>pandas.read_csv</code> doesn't deal with regex patterns when reading files; you could use the Python <a href="https://docs.python.org/3/library/glob.html#glob.glob" rel="nofollow noreferrer">glob.glob</a> function to get a similar result with shell-style wildcards.</p>
<blockquote>
<p>Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like /usr/src/Python-1.5/Makefile) or relative (like ../../Tools/<em>/</em>.gif), <strong>and can contain shell-style wildcards</strong>. Broken symlinks are included in the results (as in the shell). Whether or not the results are sorted depends on the file system.</p>
</blockquote>
<pre><code>import glob
import os
import pandas as pd
f_path = os.path.join("user","Doc", "ABC")
f_pattern = "rgf_ltd_[0-9]*"
file_list = glob.glob(os.path.join(f_path, f_pattern))
print(file_list) # ['user\\Doc\\ABC\\rgf_ltd_3498543058']
# dataframe
data = pd.read_csv(file_list[0], sep='|', engine='python')
print(data)
</code></pre>
<p>Output from <em>data</em></p>
<pre><code> Col0 Col1 Col2
0 InsideRGF_R0C0 InsideRGF_R0C1 InsideRGF_R0C2
1 InsideRGF_R1C0 InsideRGF_R1C1 InsideRGF_R1C2
2 InsideRGF_R2C0 InsideRGF_R2C1 InsideRGF_R2C2
3 InsideRGF_R3C0 InsideRGF_R3C1 InsideRGF_R3C2
</code></pre>
|
python|regex|pandas
| 0
|
378,166
| 62,439,560
|
How to make a time index in dataframe pandas with 15 minutes spacing
|
<p>How to make a time index in a pandas dataframe with 15-minute spacing for 24 hours, without the date part (12\4\2020 00:15) and without doing it manually?
For example, what I want as an index is 00:15 00:30 00:45 ... 23:45 00:00.</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>pd.date_range</code></a> to create dummy dates with your desired time frequency, then just extract them:</p>
<pre><code>pd.Series(pd.date_range(
'1/1/2020', '1/2/2020', freq='15min', closed='left')).dt.time
0 00:00:00
1 00:15:00
2 00:30:00
3 00:45:00
4 01:00:00
...
91 22:45:00
92 23:00:00
93 23:15:00
94 23:30:00
95 23:45:00
Length: 96, dtype: object
</code></pre>
|
python|pandas|dataframe|time
| 3
|
378,167
| 62,178,804
|
Pandas dropna messes up datetime index
|
<p>I am writing a function that processes a dataframe. Rows in this dataframe are indexed by a datetime index and there is a row per hour in the dataframe.
Basically, after doing some processing, this is what I have:</p>
<pre><code> inquinante temperatura precipitazioni ... umidita day_of_year day_of_week
Data ...
2000-07-04 00:00:00 55.0 23.9 0.0 ... 86.8 186 1
2000-07-04 01:00:00 NaN 23.4 0.0 ... 86.2 186 1
2000-07-04 02:00:00 NaN 22.7 0.0 ... 92.5 186 1
2000-07-04 03:00:00 NaN 22.1 0.0 ... 97.5 186 1
2000-07-04 04:00:00 NaN 22.2 0.0 ... 95.9 186 1
</code></pre>
<p>Now I want to filter out the rows for which the value for the column 'inquinante' is NaN, so I wrote the following line of code:</p>
<pre><code>df = df.dropna(subset=["inquinante"])
</code></pre>
<p>but what I get after it executes is the following:</p>
<pre><code> inquinante temperatura precipitazioni ... umidita day_of_year day_of_week
Data ...
2014-01-31 25.0 4.700000 1.000000 ... 95.700000 31 4
2014-02-01 31.0 5.800000 0.000000 ... 94.800000 32 5
2014-02-02 20.0 6.100000 1.800000 ... 97.300000 33 6
2014-02-03 17.0 6.700000 0.600000 ... 96.300000 34 0
2014-02-04 18.0 6.600000 0.800000 ... 97.200000 35 1
</code></pre>
<p>Why are my dates now grouped by days and not hours like they were before?
I also tried to change the line of code to:</p>
<pre><code>df = df[df.inquinante >= 0]
#or
df = df[df.inquinante.notna()]
</code></pre>
<p>But none of these seemed to fix the problem. Is there any way I can fix this and prevent pandas from grouping my dates?</p>
<p>Thanks in advance</p>
|
<p>This is the automatic representation of a datetime index when all the index labels have midnight, or time 00:00:00, as their timestamp.</p>
<pre><code>df = pd.DataFrame({'value':np.arange(20)}, index=pd.date_range('2020-02-01', periods=20, freq='12H'))
df
</code></pre>
<p>Output:</p>
<pre><code> value
2020-02-01 00:00:00 0
2020-02-01 12:00:00 1
2020-02-02 00:00:00 2
2020-02-02 12:00:00 3
2020-02-03 00:00:00 4
2020-02-03 12:00:00 5
2020-02-04 00:00:00 6
2020-02-04 12:00:00 7
2020-02-05 00:00:00 8
2020-02-05 12:00:00 9
2020-02-06 00:00:00 10
2020-02-06 12:00:00 11
2020-02-07 00:00:00 12
2020-02-07 12:00:00 13
2020-02-08 00:00:00 14
2020-02-08 12:00:00 15
2020-02-09 00:00:00 16
2020-02-09 12:00:00 17
2020-02-10 00:00:00 18
2020-02-10 12:00:00 19
</code></pre>
<p>Now, let's drop all rows where hour == 12, leaving only the midnight timestamps:</p>
<pre><code>df[df.index.hour != 12]
</code></pre>
<p>Output:</p>
<pre><code> value
2020-02-01 0
2020-02-02 2
2020-02-03 4
2020-02-04 6
2020-02-05 8
2020-02-06 10
2020-02-07 12
2020-02-08 14
2020-02-09 16
2020-02-10 18
</code></pre>
<p>That is still a <code>DatetimeIndex</code>, and each label does have a timestamp.</p>
<pre><code>df[df.index.hour != 12].index.strftime('%Y-%m-%d %H:%M:%S')
</code></pre>
<p>Output:</p>
<pre><code>Index(['2020-02-01 00:00:00', '2020-02-02 00:00:00', '2020-02-03 00:00:00',
'2020-02-04 00:00:00', '2020-02-05 00:00:00', '2020-02-06 00:00:00',
'2020-02-07 00:00:00', '2020-02-08 00:00:00', '2020-02-09 00:00:00',
'2020-02-10 00:00:00'],
dtype='object')
</code></pre>
|
python|pandas
| 2
|
378,168
| 62,275,841
|
How Can I Solve Error : Unsupported format, or corrupt file: Expected BOF record; found b'<table c'
|
<p>When I run this code it shows this error: <code>Unsupported format, or corrupt file: Expected BOF record; found b'&lt;table c'</code></p>
<p>data : <a href="https://github.com/DevangBaroliya/DataSet/blob/master/DistrictWiseReport20200607.xlsx" rel="nofollow noreferrer">https://github.com/DevangBaroliya/DataSet/blob/master/DistrictWiseReport20200607.xlsx</a></p>
<pre><code> import pandas as pd
data = pd.read_excel('DistrictWiseReport.xlsx')
data
</code></pre>
|
<p>If you copy the data from your github, right-click cell A1 in Excel and paste special as Unicode Text and save it as an .xlsx file, you will be able to read it in. I'm not sure exactly what you are trying to do and what exactly is going wrong.</p>
<p><a href="https://i.stack.imgur.com/i9Cu3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i9Cu3.png" alt="enter image description here"></a></p>
|
python|pandas|data-science
| 0
|
378,169
| 62,396,711
|
tf.keras.losses.categorical_crossentropy returning wrong value
|
<p>I have</p>
<p><code>y_true = 16</code></p>
<p>and</p>
<pre><code>y_pred = array([1.1868494e-08, 1.8747659e-09, 1.2777099e-11, 3.6140797e-08,
6.5852622e-11, 2.2888577e-10, 1.4515833e-09, 2.8392664e-09,
4.7054605e-10, 9.5605066e-11, 9.3647139e-13, 2.6149302e-10,
2.5338919e-14, 4.8815413e-10, 3.9381631e-14, 2.1434269e-06,
9.9999785e-01, 3.0857247e-08, 1.3536775e-09, 4.6811921e-10,
3.0638234e-10, 2.0818169e-09, 2.9950772e-10, 1.0457132e-10,
3.2959850e-11, 3.4232595e-10, 5.1689473e-12], dtype=float32)
</code></pre>
<p>When I use <code>tf.keras.losses.categorical_crossentropy(to_categorical(y_true,num_classes=27),y_pred,from_logits=True)</code></p>
<p>The loss value I get is <code>2.3575358</code>.</p>
<p>But if I use the formula for categorical cross entropy to get the loss value </p>
<pre><code>-np.sum(to_categorical(gtp_out_true[0],num_classes=27)*np.log(gtp_pred[0]))
</code></pre>
<p>according to the formula <a href="https://i.stack.imgur.com/LSVEc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LSVEc.png" alt="enter image description here"></a></p>
<p>I get the value <code>2.1457695e-06</code></p>
<p>Now, my question is, why does the function <code>tf.keras.losses.categorical_crossentropy</code> give different value.</p>
<p>The strange thing is that my model gives 100% accuracy even though the loss is stuck at 2.3575.
Below is the image of the plot of accuracy and losses during training.</p>
<p><a href="https://i.stack.imgur.com/oNh36.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oNh36.png" alt="enter image description here"></a></p>
<p>What formula does Tensorflow use to calculate categorical cross-entropy?</p>
|
<p>Found where the problem is</p>
<p>I used <strong>softmax</strong> activation in my last layer </p>
<p><code>output = Dense(NUM_CLASSES, activation='softmax')(x)</code></p>
<p>But I used <code>from_logits=True</code> in <code>tf.keras.losses.categorical_crossentropy</code>, which resulted in <strong>softmax</strong> being applied again on the output of the last layer (which was already <code>softmax(logits)</code>). So, the <code>output</code> argument that I was passing to the loss function was <code>softmax(softmax(logits))</code>.</p>
<p>Hence, the anomaly in the values of loss.</p>
<p>When using <code>softmax</code> as activation in the last layer, we should use <code>from_logits=False</code></p>
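<p>To make the two consistent pairings concrete, here is a minimal runnable sketch (random tensors stand in for real model outputs):</p>
<pre><code>import tensorflow as tf

y_true_onehot = tf.one_hot([16], depth=27)

# Option 1: the model ends with softmax, so pass probabilities and
# keep from_logits=False (this matches the setup described above)
probs = tf.nn.softmax(tf.random.normal((1, 27)))
loss1 = tf.keras.losses.categorical_crossentropy(y_true_onehot, probs, from_logits=False)

# Option 2: the model ends with no activation (raw logits), so let the
# loss apply softmax internally with from_logits=True
logits = tf.random.normal((1, 27))
loss2 = tf.keras.losses.categorical_crossentropy(y_true_onehot, logits, from_logits=True)
</code></pre>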
|
python|tensorflow|machine-learning|keras|deep-learning
| 2
|
378,170
| 62,080,926
|
Removing outliers based on column variables or multi-index in a dataframe
|
<p>This is another IQR outlier question. I have a dataframe that looks something like this:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,100,size=(100, 3)), columns=('red','yellow','green'))
df.loc[0:49,'Season'] = 'Spring'
df.loc[50:99,'Season'] = 'Fall'
df.loc[0:24,'Treatment'] = 'Placebo'
df.loc[25:49,'Treatment'] = 'Drug'
df.loc[50:74,'Treatment'] = 'Placebo'
df.loc[75:99,'Treatment'] = 'Drug'
df = df[['Season','Treatment','red','yellow','green']]
df
</code></pre>
<p>I would like to find and remove the outliers for each condition (i.e. Spring Placebo, Spring Drug, etc). Not the whole row, just the cell. And would like to do it for each of the 'red', 'yellow', 'green' columns. </p>
<p>Is there way to do this without breaking the dataframe into a whole bunch of sub dataframes with all of the conditions broken out separately? I'm not sure if this would be easier if 'Season' and 'Treatment' were handled as columns or indices. I'm fine with either way.</p>
<p>I've tried a few things with .iloc and .loc but I can't seem to make it work. </p>
|
<p>If need replace outliers by missing values use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.quantile.html" rel="nofollow noreferrer"><code>DataFrame.quantile</code></a>, then compare for lower and greater values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.lt.html" rel="nofollow noreferrer"><code>DataFrame.lt</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.gt.html" rel="nofollow noreferrer"><code>DataFrame.gt</code></a>, chain masks by <code>|</code> for bitwise <code>OR</code> and set missing values in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a>, default replacement, so not specified:</p>
<pre><code>np.random.seed(2020)
df = pd.DataFrame(np.random.randint(0,100,size=(100, 3)), columns=('red','yellow','green'))
df.loc[0:49,'Season'] = 'Spring'
df.loc[50:99,'Season'] = 'Fall'
df.loc[0:24,'Treatment'] = 'Placebo'
df.loc[25:49,'Treatment'] = 'Drug'
df.loc[50:74,'Treatment'] = 'Placebo'
df.loc[75:99,'Treatment'] = 'Drug'
df = df[['Season','Treatment','red','yellow','green']]
g = df.groupby(['Season','Treatment'])
df1 = g.transform('quantile', 0.05)
df2 = g.transform('quantile', 0.95)
c = df.columns.difference(['Season','Treatment'])
mask = df[c].lt(df1) | df[c].gt(df2)
df[c] = df[c].mask(mask)
print (df)
Season Treatment red yellow green
0 Spring Placebo NaN NaN 67.0
1 Spring Placebo 67.0 91.0 3.0
2 Spring Placebo 71.0 56.0 29.0
3 Spring Placebo 48.0 32.0 24.0
4 Spring Placebo 74.0 9.0 51.0
.. ... ... ... ... ...
95 Fall Drug 90.0 35.0 55.0
96 Fall Drug 40.0 55.0 90.0
97 Fall Drug NaN 54.0 NaN
98 Fall Drug 28.0 50.0 74.0
99 Fall Drug NaN 73.0 11.0
[100 rows x 5 columns]
</code></pre>
|
python-3.x|pandas|dataframe|multi-index|outliers
| 1
|
378,171
| 62,272,495
|
using RNN with CNN in Keras
|
<p>beginner question</p>
<p>Using Keras I have a sequential CNN model that predicts an output of the size of [3*1] (regression) based on image(input).</p>
<p>How do I implement an RNN in order to add the output of the model as a second input to the next step
(so that we have 2 inputs: the image and the output of the previous step)?</p>
<pre><code>model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=X.shape[1:]))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(3, activation='linear'))
</code></pre>
|
<p>The easiest method I found was to directly extend <code>Model</code>. The following code will work in TF 2.0, but may not work in older versions:</p>
<pre><code>from tensorflow.keras import Model, layers

class RecurrentModel(Model):
def __init__(self, num_timesteps, *args, **kwargs):
self.num_timesteps = num_timesteps
super().__init__(*args, **kwargs)
def build(self, input_shape):
inputs = layers.Input((None, None, input_shape[-1]))
        x = layers.Conv2D(64, (3, 3), activation='relu')(inputs)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
x = layers.Dense(3, activation='linear')(x)
self.model = Model(inputs=[inputs], outputs=[x])
def call(self, inputs, **kwargs):
x = inputs
        for i in range(self.num_timesteps):
x = self.model(x)
return x
</code></pre>
|
tensorflow|keras|lstm|recurrent-neural-network|keras-layer
| 3
|
378,172
| 62,431,377
|
Adding Data to Pandas DataFrame
|
<p>I want to use machine learning techniques to categorise "images" of energy released in an electromagnetic calorimeter, using a keras CNN. In order to import the data I'm using a Pandas DataFrame, however the data isn't formatted in the necessary way.</p>
<p>The calorimeter can be considered a 28x28 crystal square; however, the data that I receive only shows the energy in crystals that have been triggered, on average about 10-15 crystals per event.</p>
<pre><code> Event X Y Energy
0 22 13 203.49
0 23 12 73.1848
...
...
1 23 16 55.1652
1 24 16 0
1 25 16 20.4953
</code></pre>
<p>That means I want to add a layer to the data frame for every crystal (X,Y) that doesn't already have an energy assigned, and assign 0 energy to it.</p>
<p>I've tried the following:</p>
<pre><code>newdf=pd.DataFrame()
for event in range(0,2):#999):
for xi in range(0,28):
for yi in range(0,28):
arr=np.array([event,xi,yi,0])
newdf=newdf.append(pd.DataFrame(arr))
print('newdf = ',newdf)
</code></pre>
<p>But the arrays get appended into column data in some strange way.</p>
<p>Can anyone tell me an efficient way of doing this?</p>
<p>Thank you.</p>
|
<p>Your arr shape is actually (4,) and what you want is an array of shape (1,4), if I didn't misunderstand. You could do <code>arr=np.array([[event,xi,yi,0]])</code> to get the right shape.</p>
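<p>For the efficiency concern, a minimal sketch of the usual pattern: collect plain rows in a list and build the DataFrame once at the end, since each <code>DataFrame.append</code> call copies the whole frame (the column names here are assumptions based on your sample):</p>
<pre><code>import pandas as pd

rows = []
for event in range(2):
    for xi in range(28):
        for yi in range(28):
            rows.append([event, xi, yi, 0])

# one construction instead of thousands of appends
newdf = pd.DataFrame(rows, columns=['Event', 'X', 'Y', 'Energy'])
</code></pre>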
|
python|pandas|numpy
| 1
|
378,173
| 62,301,254
|
Convert Column to Polygon in Python to perform Point in Polygon
|
<p>I have written code to establish Point in Polygon in Python; the program uses a shapefile that I read in as the polygons.
I now have a dataframe I read in with a column containing the polygon, e.g. <code>[[28.050815,-26.242253],[28.050085,-26.25938],[28.011934,-26.25888],[28.020216,-26.230127],[28.049828,-26.230704],[28.050815,-26.242253]]</code>.
I want to transform this column into a polygon in order to perform Point in Polygon, but all the examples use <code>geometry = [Point(xy) for xy in zip(dataPoints['Long'], dataPoints['Lat'])]</code>, whereas mine is already zipped.
How would I go about achieving this?</p>
<p>Thanks</p>
|
<p>taking your example above you could do the following:</p>
<pre><code>list_coords = [[28.050815,-26.242253],[28.050085,-26.25938],[28.011934,-26.25888],[28.020216,-26.230127],[28.049828,-26.230704],[28.050815,-26.242253]]
</code></pre>
<pre><code>from shapely.geometry import Point, Polygon
# Create a list of point objects using list comprehension
point_list = [Point(x,y) for [x,y] in list_coords]
# Create a polygon object from the list of Point objects
polygon_feature = Polygon([[poly.x, poly.y] for poly in point_list])
</code></pre>
<p>And if you would like to apply it to a dataframe you could do the following:</p>
<pre><code>import pandas as pd
import geopandas as gpd
df = pd.DataFrame({'coords': [list_coords]})
def get_polygon(list_coords):
point_list = [Point(x,y) for [x,y] in list_coords]
polygon_feature = Polygon([[poly.x, poly.y] for poly in point_list])
return polygon_feature
df['geom'] = df['coords'].apply(get_polygon)
</code></pre>
<p>However, there might be geopandas built-in functions in order to avoid "reinventing the wheel", so let's see if anyone else has a suggestion :)</p>
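<p>Incidentally, shapely's <code>Polygon</code> also accepts the coordinate list directly, so the intermediate <code>Point</code> list is optional; a minimal sketch reusing <code>list_coords</code> from above:</p>
<pre><code>from shapely.geometry import Polygon

polygon_feature = Polygon(list_coords)  # Polygon accepts a sequence of (x, y) pairs
</code></pre>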
|
python|geopandas|point-in-polygon
| 0
|
378,174
| 62,345,523
|
How to fill missing values with conditions?
|
<p>I have a pandas DataFrame like this:</p>
<pre><code>year = [2015, 2016, 2009, 2000, 1998, 2017, 1980, 2016, 2015, 2015]
mode = ["automatic", "automatic", "manual", "manual", np.nan,'automatic', np.nan, 'automatic', np.nan, np.nan]
X = pd.DataFrame({'year': year, 'mode': mode})
print(X)
year mode
0 2015 automatic
1 2016 automatic
2 2009 manual
3 2000 manual
4 1998 NaN
5 2017 automatic
6 1980 NaN
7 2016 automatic
8 2015 NaN
9 2015 NaN
</code></pre>
<p>I want to fill missing values with like this: if year is <2010 I want to fill NaN with 'manual' and if year is >=2010 I want to fill NaN value with 'automatic'</p>
<p>I thought about combination .groupby function with these condition but I do not know honestly how to do it :(</p>
<p>I would be grateful for any help.</p>
|
<p>Similar approach to my answer on your <a href="https://stackoverflow.com/questions/62327956/how-to-fill-missing-values-in-a-dataframe-based-on-group-value-counts">other question</a>:</p>
<pre><code>cond = X['year'] < 2010
X['mode'] = X['mode'].fillna(cond.map({True:'manual', False: 'automatic'}))
</code></pre>
|
python|pandas|dataframe|nan|missing-data
| 3
|
378,175
| 62,335,645
|
How to slice a panda data frame to get required results
|
<p>I'm working on this problem in Jupyter where I have to get the desired result. The initial DataFrame is:</p>
<pre><code>print(df1)
df2=pd.DataFrame({'custID':[1,2,3,4],
'cust_age':[20,35,50,85]},columns=['custID','cust_age'])
print(df2)
</code></pre>
<p>I have managed to get my input and output to get like this.</p>
<pre><code>grouped = df2[df2.cust_age.lt(50).groupby(df2.custID).transform('any')]
grouped
custID cust_age
0 1 20
1 2 35
</code></pre>
<p>But I'm required to get the answer to come out simply as [1, 2] and I cannot get the last step for slicing figured out because I'm not great at it. Any help would be great thanks!</p>
|
<p>You can get the desired output by</p>
<pre><code>In[1]: df2[df2.cust_age.lt(50)].custID.values
Out[2]:array([1, 2])
</code></pre>
<p>or </p>
<pre><code>In[1]: df2[df2.cust_age < 50].custID.tolist()
Out[2]:[1, 2]
</code></pre>
<p>depending on whether you want to get a numpy-array or a list.</p>
|
pandas|pandas-groupby
| 0
|
378,176
| 62,310,570
|
How to fix what dates your dataframe includes
|
<p>I have a dataframe whereby I'm trying to get data from today (-5) days until the end of next month.</p>
<p>In the case of today this would be;</p>
<pre><code>ix = pd.DatetimeIndex(start=datetime(2020, 6, 5), end=datetime(2020, 7, 31), freq='D')
df.reindex(ix)
</code></pre>
<p>If I wanted to automate this is there any function I can take advantage of?</p>
<p>I've tried</p>
<pre><code>startdate = pd.to_datetime('today') - pd.DateOffset(days=5)
enddate = pd.to_datetime('today', format) + MonthEnd(2)
ix = pd.DatetimeIndex(start=startdate, end=enddate, freq='D')
df.reindex(ix)
</code></pre>
<p>..but does now seem to be working.
Any help appreciated!</p>
|
<p>Your code was close, I think the main issue is you are constructing a <code>DatetimeIndex</code> incorrectly: it doesn't take a <code>start</code> or <code>end</code> parameter (see <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.html" rel="nofollow noreferrer">docs</a>). Also agree with @MrFuppes about <code>format</code> being unnecessary.</p>
<p>I think you want <code>pandas.date_range</code>, which still returns a <code>DatetimeIndex</code></p>
<pre><code>startdate = pd.to_datetime('today') - pd.DateOffset(days=5)
enddate = pd.to_datetime('today') + pd.tseries.offsets.MonthEnd(2)
ix = pd.date_range(startdate,enddate,freq='D')
</code></pre>
<p>This seems to work for me, and can be used for reindexing. You could also call the <code>floor</code> method on your start and end dates if you just want to get dates without the specific time of day the code was run:</p>
<pre><code>ix = pd.date_range(startdate.floor('d'),enddate.floor('d'),freq='D')
</code></pre>
|
python|pandas|datetime|indexing|reindex
| 1
|
378,177
| 62,138,342
|
How to generate url changing wrt date?
|
<p>I need to extract, unzip & read data from this url (<a href="https://www1.ukp.com/content/historical/2020/MAY/cm29MAY2020bhav.csv.zip" rel="nofollow noreferrer">https://www1.ukp.com/content/historical/2020/MAY/cm29MAY2020bhav.csv.zip</a><br>
) on every working day. I manually edit the url every day. Is there any way to automate it in Python?</p>
<pre><code>!wget https://www.ukp.com/content/historical/2020/MAY/cm29MAY2020bhav.csv.zip
!unzip cm29MAY2020bhav.csv.zip
cm3a = pd.read_csv('cm29MAY2020bhav.csv.zip',engine='python')
</code></pre>
|
<p>Use <code>date.strftime</code> to generate the URL.</p>
<pre><code>>>> from datetime import date
>>> date.today().strftime("https://www1.ukp.com/content/historical/%Y/%B/cm%d%B%Ybhav.csv.zip")
'https://www1.ukp.com/content/historical/2020/June/cm01June2020bhav.csv.zip'
</code></pre>
<p>If case sensitivity matters, you'll have to break it up into a few pieces. For example:</p>
<pre><code>>>> year, month, day = date.today().strftime("%Y-%B-%d").split("-")
>>> month = month.upper()
>>> f'https://www1.ukp.com/content/historical/{year}/{month}/cm{day}{month}{year}bhav.csv.zip'
'https://www1.ukp.com/content/historical/2020/JUNE/cm01JUNE2020bhav.csv.zip'
</code></pre>
|
python|pandas
| 1
|
378,178
| 62,130,542
|
Converting 15 minutes interval usage data in pivot form to 60 minutes format
|
<p><a href="https://i.stack.imgur.com/mutL1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mutL1.png" alt="This is my 15 Minute interval file sample distributed over 24 hrs"></a></p>
<p>I want to convert this to hourly format like:</p>
<pre><code>df['1:00'] = df['0:15'] + df['0:30'] + df['0:45'] + df['1:00']
</code></pre>
<p>Also, I do not want to create extra duplicate columns.</p>
<p>Please help. Thanks in advance.</p>
|
<p>Use:</p>
<pre><code>#convert non time values to MultiIndex and sorting times columns
df1 = df.set_index(['Account','Date']).sort_index(axis=1)
#create groups by previous values of ends with :00
g = df1.columns.str.endswith(':00')[::-1].cumsum()[::-1]
#aggregate sum
df2 = df1.groupby(g, axis=1).sum().add_suffix(':00').reset_index()
</code></pre>
|
python|pandas
| 1
|
378,179
| 62,278,894
|
Writing into Excel by using Pandas Dataframe
|
<p>I'm very new to programming and just starting out, so sorry if my question comes off as too basic. I'm currently trying to extract columns of data from one Excel sheet to be written into an output Excel sheet. However, after I have extracted the column, I am unable to write it into my output sheet, as I get the error:
<code>AttributeError: 'list' object has no attribute 'to_excel'</code></p>
<p>Below is my code: </p>
<pre><code>import pandas as pd
#Read data from input excel
sourcepath = pd.read_excel("path to input excel",indexcol=0)
#extract data from input excel
col1 = list(sourcepath.iloc[1:345,0])
col1 = [str(x)for x in col1]
#write data into output excel
extractpath = "path to output excel"
writer = pd.ExcelWriter(extractpath,engine='xlsxwriter')
col1.to_excel(writer,'Sheet1',index=True)
writer.save()
</code></pre>
<p>I'm also not sure if the line <code>col1 = [str(x) for x in col1]</code> is even required at all for this code? I copied it off the net and I'm not really sure what it is for.</p>
<p>Thank you very much for your help! Greatly appreciated <3</p>
|
<p>Your column was cast to a plain Python list, and a list has no <code>to_excel</code> method.</p>
<pre><code>col1 = list(sourcepath.iloc[1:345,0])
</code></pre>
<p>Create a new dataframe</p>
<pre><code>df = pd.DataFrame(col1)
df.to_excel(writer,'Sheet1',index=True)
</code></pre>
|
python|excel|pandas
| 0
|
378,180
| 62,177,415
|
How to take a substring from a column in excel using Python?
|
<p>I have an Excel file and I want to read a specific column in that Excel file, I do that with the following code:</p>
<pre><code>import pandas as pd
import xlrd
file_location = input('Where is the file located? Please input the file path here. ')
column = input('In what column is the code? ')
code_array = pd.read_excel(file_location, usecols=column)
for i in code_array:
print(code_array)
</code></pre>
<p>and that code prints out the contents of that column in the console. Now, that column has text like the following: <em>12345 - Description</em>. I only want to extract the number; how would I be able to do this? I thought of using a substring from [0:5] or converting the data into an array of strings, but I'm not sure how to do that.</p>
|
<p>If the digits will be 5 digits long each time, you could do a quick substring using a lambda.</p>
<pre><code>code_array["number_column"] = code_array["YourColumnNameHere"].apply(lambda x: str(x)[:5])
</code></pre>
<p>If it will not be the same length each time, but it will be in the same position, you can split it into an array of strings, and then access the first element:</p>
<pre><code>code_array["number_column"] = code_array["YourColumnNameHere"].apply(lambda x: str(x).split()[0])
</code></pre>
<p>Let me know if this solves your problem, otherwise we will need to use regex. NB to change YourColumnNameHere to be the same name as the column in your dataframe.</p>
|
python|excel|pandas|xlrd
| 1
|
378,181
| 62,333,471
|
In Pandas, how to return 2 data using resample('D').first()?
|
<h2> Questions </h2>
<ul>
<li>Q1. Why does the <code>float</code> data change to <code>string</code> data when it gets put into a <code>pd.DataFrame</code>? Is there any way to keep it float, rather than changing it back to float afterwards with <code>.astype(float)</code>?</li>
<li>Q2. How to get 2 rows using the <code>resample('D').first()</code> method? The method <code>.first()</code> returns only 1 row, while I want 2 rows to be returned. If it is not possible with the <code>.first()</code> method, can you give me an alternative solution?</li>
</ul>
<h2> Code Example </h2>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime
BTC_df = pd.DataFrame(np.array([[1.05,'BTC'],[1.2,'BTC'],[0.9,'BTC']]),
columns = ['return','coin'],
index = [datetime(2020,5,1,15), datetime(2020,5,2,9,20), datetime(2020,5,3,23,40)])
ETH_df = pd.DataFrame(np.array([[1.1,'ETH'],[0.9,'ETH'],[0.95,'ETH']]),
columns = ['return','coin'],
index = [datetime(2020,5,1,8,30), datetime(2020,5,2,17,30), datetime(2020,5,3,11,50)])
EOS_df = pd.DataFrame(np.array([[1.3,'EOS'],[0.6,'EOS'],[0.8,'EOS']]),
columns = ['return','coin'],
index = [datetime(2020,5,1,1,20), datetime(2020,5,2,22,10), datetime(2020,5,3,13,5)])
BTC_df
>>> return coin
2020-05-01 15:00:00 1.05 BTC
2020-05-02 09:20:00 1.2 BTC
2020-05-03 23:40:00 0.9 BTC
##############################################
# Q1. Why does the 'float' changes to 'str'? #
##############################################
BTC_df.loc[datetime(2020,5,2,9,20),'return']
>>> '1.2'
merged_df = pd.concat([BTC_df,ETH_df,EOS_df])
merged_df.loc[:,'return'] = merged_df.loc[:,'return'].astype(float) # from 'str' to 'float'
merged_df.loc[:,'time'] = merged_df.index.time # to preserve hour and minutes
#################################################################
# Q2. How to create the 'desired output' using resample method? #
#################################################################
merged_df.resample('D').first()
>>> return coin time
2020-05-01 1.30 EOS 01:20:00
2020-05-02 1.20 BTC 09:20:00
2020-05-03 0.95 ETH 11:50:00
</code></pre>
<p>My desired output is as follows, showing 2 coins with the earliest time : </p>
<pre><code>desired_output_df
>>> return coin time
2020-05-01 1.30 EOS 01:20:00
1.10 ETH 08:30:00
2020-05-02 1.20 BTC 09:20:00
0.90 ETH 17:30:00
2020-05-03 0.95 ETH 11:50:00
0.80 EOS 13:05:00
</code></pre>
|
<p>Q1: this is because you defined the whole data as <strong>one</strong> array, which can have one type only for all data (string). Define it columnwise like so:</p>
<pre><code>BTC_df = pd.DataFrame({'return': [1.05, 1.2, 0.9], 'coin': ['BTC', 'BTC', 'BTC']},
index = [datetime(2020,5,1,15), datetime(2020,5,2,9,20), datetime(2020,5,3,23,40)])
</code></pre>
<p>Q2: use</p>
<pre><code>merged_df.resample('D').apply(lambda x: x[:2])
</code></pre>
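<p>If you would rather keep the original timestamps in the index, an alternative sketch using <code>groupby</code> with a daily <code>Grouper</code> (the <code>sort_index</code> ensures "first two" means the earliest two of each day):</p>
<pre><code>merged_df.sort_index().groupby(pd.Grouper(freq='D')).head(2)
</code></pre>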
|
python|pandas|dataframe
| 1
|
378,182
| 62,166,941
|
Create single pandas dataframe from a list of dataframes
|
<p>I have a list of about 25 dfs and all of the columns are the same. Although the row count is different, I am only interested in the first row of each df.
How can I iterate through the list of dfs, copy the first row from each and concatenate them all into a single df?</p>
|
<p>Select first row by position with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a> and <code>[[0]]</code> for one row DataFrames and join together by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p>
<pre><code>df = pd.concat([x.iloc[[0]] for x in dfs], ignore_index=True)
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.head.html" rel="nofollow noreferrer"><code>DataFrame.head</code></a> for one row <code>DataFrame</code>s:</p>
<pre><code>df = pd.concat([x.head(1) for x in dfs], ignore_index=True)
</code></pre>
|
python|python-3.x|pandas
| 0
|
378,183
| 62,120,508
|
Python Tensorflow - Running model.fit multiple times without reinstantiating the model
|
<h2>Background</h2>
<p>I am watching a <a href="https://youtu.be/tPYj3fFJGjk?t=12950" rel="nofollow noreferrer">popular YouTube crash course</a> on machine learning.</p>
<p>At <a href="https://youtu.be/tPYj3fFJGjk?t=12950" rel="nofollow noreferrer">3:35:50</a>, he mentions that the model is likely overfit, so fits it again with less epochs.</p>
<p>Since he didn't reinstantiate the model, isn't this equivalent to fitting the model with that same data, thereby continuing to overtrain it?</p>
<h2>My Question</h2>
<p>Assume you have a model created and data ready to go.</p>
<p>You run:</p>
<pre><code>model.fit(train_images, train_labels, epochs=10)
model.fit(train_images, train_labels, epochs=8)
</code></pre>
<p><strong>Is this equivalent to running:</strong></p>
<pre><code>model.fit(train_images, train_labels, epochs=18)
</code></pre>
<p><strong>Or:</strong></p>
<pre><code>model.fit(train_images, train_labels, epochs=8)
</code></pre>
<p>If <a href="https://stackoverflow.com/questions/49841324/what-does-calling-fit-multiple-times-on-the-same-model-do">previously fitted data is overwritten</a>, why does running <code>model.fit</code> a second time begin with the accuracy of the previous model?</p>
<p>In <a href="https://stackoverflow.com/questions/42666046/loading-a-trained-keras-model-and-continue-training">multiple</a> <a href="https://stackoverflow.com/questions/45393429/keras-how-to-save-model-and-continue-training">other</a> <a href="https://stackoverflow.com/questions/51854463/is-it-possible-to-retrain-a-previously-saved-keras-model">questions</a> regarding saving and training models, the accepted solutions are to load the previously trained model, and run <code>model.fit</code> again. </p>
<p>If this will overwrite the pre-existing weights, doesn't that defeat the purpose of saving the model in the first place? Wouldn't training the model for the first time on the new data be equivalent?</p>
<p>What is the appropriate way to train a model across multiple, similar datasets while retaining accuracy across all of the data?</p>
|
<blockquote>
<p>Since he didn't reinstantiate the model, isn't this equivalent to
fitting the model with that same data, thereby continuing to overtrain
it?</p>
</blockquote>
<p>You are correct! In order to check which number of epochs would do better in his example, he should have compiled the network again (that is, execute the above cell again).</p>
<p>Just remember that in general, whenever you instantiate a model again it most likely will start with completely new weights, totally different from past weights (unless you change this manually). So even though you keep the same amount of epochs, your final accuracy can change depending on the initial weights.</p>
<p><strong>Are these two commands equivalent?</strong></p>
<pre><code>model.fit(train_images, train_labels, epochs=10)
model.fit(train_images, train_labels, epochs=8)
</code></pre>
<p>and</p>
<pre><code>model.fit(train_images, train_labels, epochs=18)
</code></pre>
<p><strong>No.</strong></p>
<p>In the first case, you are training your network with some weights <code>X</code> going through all your training set 10 times, then you update your weights for some value <code>y</code>.
Then you will train your network again through all your training set 8 times, but now you are using a network with weights <code>X+y</code>. </p>
<p>For the second case, you will train your network through all your training data 18 times with the weights <code>X</code>. </p>
<p><strong>This is different!</strong></p>
|
python|tensorflow|keras
| 4
|
378,184
| 62,470,453
|
Underfitting Problem in Binary Classification using Multi-Layer Perceptron
|
<p>I'm currently developing a supervised anomaly detection using Multi-Layer Perceptron (MLP), the goal is to classify between the benign and malicious traffics. I used the <a href="https://www.stratosphereips.org/datasets-ctu13" rel="nofollow noreferrer">CTU-13 dataset</a>, the sample of the dataset is as follows:
<a href="https://i.stack.imgur.com/WeXae.png" rel="nofollow noreferrer">Sample of Dataset</a>. The dataset has 169032 benign traffics and 143828 malicious traffics. The code for my MLP model is as follows:</p>
<pre><code>def MLP_model():
model = Sequential()
model.add(Dense(1024,input_dim=15, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256,activation='relu'))
model.add(Dense(256,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dense(1, activation='sigmoid'))
adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model.compile(optimizer = adam, loss='binary_crossentropy', metrics=['accuracy'])
return model
model = MLP_model()
#With Callbacks
callbacks = [EarlyStopping('val_loss', patience=5)]
hist = model.fit(Xtrain, Ytrain, epochs=50, batch_size=50, validation_split=0.20, callbacks=callbacks, verbose=1)
</code></pre>
<p>The results that I obtained are as follows:</p>
<pre><code>Accuracy: 0.923045
Precision: 0.999158
Recall: 0.833308
F1 score: 0.908728
</code></pre>
<p>However, from the training curve, I suspect that the model is underfitting (based on <a href="https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/" rel="nofollow noreferrer">this article</a>):
<a href="https://i.stack.imgur.com/YVnst.png" rel="nofollow noreferrer">The Model's Training Curve</a></p>
<p>I've tried to increase the neurons and the number of layers (as suggested <a href="https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw">here</a>), but the same problem still occurs. I appreciate any help to solve this problem.</p>
|
<p>First of all, I am pretty sure your model is actually overfitting, not underfitting. Plot only the training loss and you should see the loss fall close to 0. But as you can see in your plot the validation loss is still quite high compared to training loss. This happens because your model has way too many parameters compared to the number of data points you have in your training dataset.</p>
<p>I would recommend reducing your dense layer sizes to double digits/low triple digits.</p>
|
python|tensorflow|keras|neural-network|mlp
| 1
|
378,185
| 62,275,877
|
Array Slicing with step 2
|
<p>Have array like </p>
<pre><code>arr = [1,2,3,4,5,6,7,8,9,10]
</code></pre>
<p>How I can get array like this:</p>
<pre><code>[1,2,5,6,9,10]
</code></pre>
<p><strong>take 2 elements with step 2(::2)</strong></p>
<p>I tried something like <code>arr[:2::2]</code>, but it doesn't work.</p>
|
<p><code>[:2::2]</code> is not valid Python syntax. A slice only takes 3 values - start, stop, step. You are trying to provide 4.</p>
<p>Here's what you need to do:</p>
<pre><code>In [233]: arr = np.arange(1,11)
In [234]: arr
Out[234]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
</code></pre>
<p>first reshape to form groups of 2:</p>
<pre><code>In [235]: arr.reshape(5,2)
Out[235]:
array([[ 1, 2],
[ 3, 4],
[ 5, 6],
[ 7, 8],
[ 9, 10]])
</code></pre>
<p>now slice to get every other group:</p>
<pre><code>In [236]: arr.reshape(5,2)[::2 ,:]
Out[236]:
array([[ 1, 2],
[ 5, 6],
[ 9, 10]])
</code></pre>
<p>and then back to 1d:</p>
<pre><code>In [237]: arr.reshape(5,2)[::2,:].ravel()
Out[237]: array([ 1, 2, 5, 6, 9, 10])
</code></pre>
<p>You have to step back a bit, and imagine the array as a whole, and ask how to make it fit the desire pattern.</p>
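<p>As an alternative to reshaping, a modulo-based boolean mask gives the same "take 2, skip 2" pattern in one line; a minimal sketch for a 1-d array:</p>
<pre><code>import numpy as np

arr = np.arange(1, 11)
arr[np.arange(arr.size) % 4 < 2]   # array([ 1,  2,  5,  6,  9, 10])
</code></pre>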
|
python|numpy|pytorch
| 2
|
378,186
| 62,341,052
|
TypeError: __call__() takes 2 positional arguments but 3 were given. To train Raccoon prediction model using FastRCNN through Transfer Learning
|
<pre><code> from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from engine import train_one_epoch, evaluate
import utils
import torchvision.transforms as T
num_epochs = 10
for epoch in range(num_epochs):
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
lr_scheduler.step()
evaluate(model, data_loader_test, device=device)
</code></pre>
<p>I am using the same code as provided in this link <a href="https://towardsdatascience.com/building-your-own-object-detector-pytorch-vs-tensorflow-and-how-to-even-get-started-1d314691d4ae" rel="nofollow noreferrer">Building Raccoon Model</a> but mine is not working. </p>
<p>This is the error message I am getting:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      2 for epoch in range(num_epochs):
      3     # train for one epoch, printing every 10 iterations
----> 4     train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
      5     # update the learning rate
      6     lr_scheduler.step()

7 frames

<ipython-input> in __getitem__(self, idx)
     29         target["iscrowd"] = iscrowd
     30         if self.transforms is not None:
---> 31             img, target = self.transforms(img, target)
     32         return img, target
     33

TypeError: __call__() takes 2 positional arguments but 3 were given
</code></pre>
|
<p>The above answer is incorrect; I accidentally upvoted before noticing. You are using the wrong <code>Compose</code>. Note that the tutorial says:</p>
<p><a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together" rel="noreferrer">https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together</a></p>
<p>"In references/detection/, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py, references/detection/utils.py and references/detection/transforms.py. Just copy them to your folder and use them here."</p>
<p>There are helper scripts; they define their own <code>Compose</code> and flip transforms.</p>
<p><a href="https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2b308/references/detection/transforms.py#L17" rel="noreferrer">https://github.com/pytorch/vision/blob/6315358dd06e3a2bcbe9c1e8cdaa10898ac2b308/references/detection/transforms.py#L17</a></p>
<p>I did the same thing before noticing this. Do not use the compose method from torchvision.transforms, or else you will get the error above. Download their module and load it.</p>
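<p>For context, the <code>Compose</code> defined in the linked <code>references/detection/transforms.py</code> passes both the image and the target through each transform, which is why it accepts the extra argument (paraphrased from the linked file):</p>
<pre><code>class Compose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        # each transform receives and returns the (image, target) pair
        for t in self.transforms:
            image, target = t(image, target)
        return image, target
</code></pre>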
|
image-processing|computer-vision|pytorch
| 5
|
378,187
| 62,237,696
|
I am getting Value Error when using if else to manipulate dataframe using pandas?
|
<p>This is the code I have written, where df is a DataFrame.
I am using Python 3 and I am new to pandas; I have tried the bitwise operators as well as the keywords <code>and</code>/<code>or</code>.</p>
<pre><code>if((df['Day_Perc_Change']>=-0.5) & (df['Day_Perc_Change']<=0.5)):
    df['Trend']="Slight or No Change"
elif((df['Day_Perc_Change']>=0.5) & (df['Day_Perc_Change']<=1)):
    df['Trend']="Slight Positive"
elif((df['Day_Perc_Change']>=-1) & (df['Day_Perc_Change']<=-0.5)):
    df['Trend']="Slight Negative"
elif((df['Day_Perc_Change']>=1) & (df['Day_Perc_Change']<=3)):
    df['Trend']="Positive"
elif((df['Day_Perc_Change']>=-3) & (df['Day_Perc_Change']<=-1)):
    df['Trend']="Negative"
elif((df['Day_Perc_Change']>=3) & (df['Day_Perc_Change']<=7)):
    df['Trend']='Among top gainers'
else:
    df['Trend']="Bear drop"
</code></pre>
<blockquote>
<p>This is the error I am getting:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
</blockquote>
<p>I have used both <code>and</code> as well as <code>|</code>, but it is not working. Can anyone help me out?</p>
|
<p>Use <code>np.where()</code>, which is vectorized and high-performing:</p>
<pre><code>df['Trend'] = ''
df['Trend'] = np.where((df['Day_Perc_Change']>=-0.5) & (df['Day_Perc_Change']<=0.5), "Slight or No Change", df['Trend'])
df['Trend'] = np.where((df['Day_Perc_Change']>=0.5) & (df['Day_Perc_Change']<=1), "Slight Positive", df['Trend'])
df['Trend'] = np.where((df['Day_Perc_Change']>=-1) & (df['Day_Perc_Change']<=-0.5), "Slight Negative", df['Trend'])
df['Trend'] = np.where((df['Day_Perc_Change']>=1) & (df['Day_Perc_Change']<=3), "Positive", df['Trend'])
df['Trend'] = np.where((df['Day_Perc_Change']>=-3) & (df['Day_Perc_Change']<=-1), "Negative", df['Trend'])
df['Trend'] = np.where((df['Day_Perc_Change']>=3) & (df['Day_Perc_Change']<=7), "Among top gainers", df['Trend'])
</code></pre>
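<p>For reference, an equivalent sketch using <code>np.select</code>, which keeps the conditions and labels side by side (assumes <code>numpy</code> is imported as <code>np</code>):</p>
<pre><code>import numpy as np

conditions = [
    df['Day_Perc_Change'].between(-0.5, 0.5),
    df['Day_Perc_Change'].between(0.5, 1),
    df['Day_Perc_Change'].between(-1, -0.5),
    df['Day_Perc_Change'].between(1, 3),
    df['Day_Perc_Change'].between(-3, -1),
    df['Day_Perc_Change'].between(3, 7),
]
choices = ['Slight or No Change', 'Slight Positive', 'Slight Negative',
           'Positive', 'Negative', 'Among top gainers']

# the first matching condition wins (like if/elif); anything
# outside all ranges falls through to the default
df['Trend'] = np.select(conditions, choices, default='Bear drop')
</code></pre>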
|
python|pandas
| 0
|
378,188
| 62,087,635
|
cannot select specific column after merging it with another data frame
|
<pre><code>unitown=pd.merge(Q1(),Q5(),how='inner',left_on=['State','RegionName'],right_index=True)
</code></pre>
<p>I created this new data frame called <code>unitown</code> after merging two data frames on 'State' and 'RegionName'.
Below is what <code>unitown</code> looks like:
<a href="https://i.stack.imgur.com/2vIb8.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>from the pic you can see it has column named in the format of Year and Quarter. However when I try <code>unitown['2000Q1']</code> it gives me the following error:</p>
<pre><code> 2798 if self.columns.nlevels > 1:
2799 return self._getitem_multilevel(key)
-> 2800 indexer = self.columns.get_loc(key)
2801 if is_integer(indexer):
2802 indexer = [indexer]
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2646 return self._engine.get_loc(key)
2647 except KeyError:
-> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key))
2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2650 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: '2000Q1'
</code></pre>
<p>I have tried <code>unitown.columns.tolist()</code> and below is part of the output:</p>
<pre><code>['State',
'RegionName',
Period('2000Q1', 'Q-DEC'),
Period('2000Q2', 'Q-DEC'),
Period('2000Q3', 'Q-DEC'),
Period('2000Q4', 'Q-DEC'),
Period('2001Q1', 'Q-DEC'),
Period('2001Q2', 'Q-DEC'),
Period('2001Q3', 'Q-DEC'),
Period('2001Q4', 'Q-DEC'),
Period('2002Q1', 'Q-DEC'),
Period('2002Q2', 'Q-DEC'),
Period('2002Q3', 'Q-DEC'),
</code></pre>
<p>I am not sure why it gives such an error, given that '2000Q1' is clearly one of the column names. Can anyone please help me with this? Thanks a lot!</p>
|
<p>The column labels are <code>Period</code> objects rather than strings, so looking them up with the string <code>'2000Q1'</code> raises a <code>KeyError</code>. Converting the labels to strings resolves the issue:</p>
<pre><code>df.columns = [str(col) for col in df.columns]
</code></pre>
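<p>Alternatively, a sketch that keeps the <code>Period</code> columns intact and indexes with a <code>Period</code> key instead:</p>
<pre><code>import pandas as pd

# build the same key that the column label actually carries
unitown[pd.Period('2000Q1', freq='Q-DEC')]
</code></pre>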
|
python|python-3.x|pandas|dataframe
| 1
|
378,189
| 62,245,518
|
Pandas merge creates duplicate entries
|
<p>I'm doing a merge of two dataframes, but when I do so, I get many duplicate entries.
My code is a little bit long, so here is an example of the two datasets:</p>
<pre><code> df1
Season SeasonType ... PitchingWalksPerNineInnings_17 PitchingWeightedOnBasePercentage_17
GameID ...
47547 2017.0 1.0 ... NaN NaN
47546 2017.0 1.0 ... NaN NaN
50022 2017.0 1.0 ... NaN NaN
47556 2017.0 1.0 ... NaN NaN
47557 2017.0 1.0 ... NaN NaN
... ... ... ... ... ...
49970 2017.0 1.0 ... NaN NaN
49964 2017.0 1.0 ... NaN NaN
49974 2017.0 1.0 ... NaN NaN
49975 2017.0 1.0 ... NaN NaN
47562 NaN NaN ... NaN NaN
df2
GameID StatID_28 ... PitchingWalksPerNineInnings_28 PitchingWeightedOnBasePercentage_28
0 47562 1748078 ... 5.0 0.351
[1 rows x 52 columns]
</code></pre>
<p>The GameID column is my index on both. df1 can have multiple columns in common with df2; that's why I'm using this to get them:</p>
<pre><code>columnsMerge = list(set(df.columns).intersection(set(tpstatdf.columns)))
columnsMerge.append('GameID')
</code></pre>
<p>I've shared the csv file generated here : <a href="https://drive.google.com/drive/folders/1RVKNsB42ixQ2I2WUqqu_dDNjmcVIz5No?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1RVKNsB42ixQ2I2WUqqu_dDNjmcVIz5No?usp=sharing</a></p>
<p>Please find below what I got. What is expected is to have only one line with the game id, aggregated with the following two.</p>
<p><a href="https://i.stack.imgur.com/4xghM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4xghM.png" alt="enter image description here"></a></p>
<p>Any help will be very appreciated on that.</p>
<p>Thanks</p>
<p>Geoffrey</p>
|
<p>If you want to keep all columns, you can <code>join</code> them with a suffix, then keep only the columns that appear in <code>columnsMerge</code>:</p>
<pre><code>df_merged = df1.join(df2, how='right', rsuffix='_df2')
# common columns between the two frames
columnsMerge = list(set(df1.columns).intersection(set(df2.columns)))
# keep only the merged columns whose name appears in a common column name
cols = [c for c in df_merged.columns if any(c in s for s in columnsMerge)]
df_merged = df_merged[cols]
</code></pre>
|
python|pandas
| 0
|
378,190
| 62,198,953
|
Combine multiple Pandas series with identical column names, but different indices
|
<p>I have many pandas series structured more or less as follows. </p>
<pre><code>s1 s2 s3 s4
Date val1 Date val1 Date val2 Date val2
Jan 10 Apr 25 Jan 14 Apr 11
Feb 11 May 18 Feb 17 May 7
Mar 8 Jun 15 Mar 16 Jun 21
</code></pre>
<p>I would like to combine these series into a single data frame, with structure as follows:</p>
<pre><code>Date val1 val2
Jan 10 14
Feb 11 17
Mar 8 16
Apr 25 11
May 18 7
Jun 15 21
</code></pre>
<p>In an attempt to combine them, I have tried using <code>pd.concat</code> to create this single data frame. However, I have not been able to do so. The results of <code>pd.concat(series, axis=1)</code> (where <code>series</code> is a list <code>[s1,s2,s3,s4]</code>) is:</p>
<pre><code>Date val1 val1 val2 val2
Jan 10 nan 14 nan
Feb 11 nan 17 nan
Mar 8 nan 16 nan
Apr nan 25 nan 11
May nan 18 nan 7
Jun nan 15 nan 21
</code></pre>
<p>And <code>pd.concat(series, axis=0)</code> simply creates a single series, ignoring the column names.</p>
<p>Is there a parameter in concat that will yield my desired result? Or is there some other function that can collapse the incorrect, nan-filled data frame into a frame with non-repeated columns and no nans?</p>
|
<p>One way to do this is to group by <code>Date</code> and take the <code>first</code> non-null value in each column:</p>
<pre><code>(pd.concat( [s1,s2,s3,s4])
.groupby('Date', as_index=False, sort=False).first()
)
</code></pre>
<p>Output:</p>
<pre><code> Date val1 val2
0 Jan 10 14
1 Feb 11 17
2 Mar 8 16
3 Apr 25 11
4 May 18 7
5 Jun 15 21
</code></pre>
|
python|pandas|dataframe
| 2
|
378,191
| 62,369,633
|
How to find the indices of the n largest numbers in an array that aren't 100 in Python
|
<p>I am trying to find the indices of the n largest numbers in an array, ordered from largest to smallest, excluding values equal to 100, in Python. I have found several ways to find the top n maximum numbers of an array, and ways to exclude the values equal to 100, but not one that preserves the indices as well. This is what the array looks like: </p>
<pre><code>array([ 10, 10, 11, 11, 10, 10, 12, 12, 10, 10, 10, 13, 14,
14, 15, 100, 15, 12, 13, 11, 10, 12, 14, 14, 100, 100,
100, 12, 13, 10, 10, 11, 13, 100, 100, 13, 14, 13, 12,
10, 10, 11, 10, 100, 100, 100, 12, 13, 12, 13, 10, 10,
10, 15, 100, 14, 14, 11, 12, 12, 10, 10, 10, 15, 15,
14, 10, 10, 10, 11, 10, 10, 10, 12, 11, 11, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10])
</code></pre>
<p>For an n of 10, I want an output like this:</p>
<pre><code>array([14, 16, 63, 64, 12, 13, 22, 23, 55, 56])
</code></pre>
<p>I am preferably looking for a one-liner if possible, or an efficient way to perform this without a traditional if/elif sorter. Let me know if the wording is confusing or if this problem has already been solved. </p>
|
<p>First, sort the list but keep track of the original indices; in my solution below I'm using tuples.</p>
<p>Then walk the sorted list backward and, whenever the value is not <strong>valueToIgnore</strong>, append the index to <em>res</em> until <em>res</em> has length <strong>n</strong>.</p>
<pre><code>n = 10
valueToIgnore = 100
array = [ 10, 10, 11, 11, 10, 10, 12, 12, 10, 10, 10, 13, 14,
14, 15, 100, 15, 12, 13, 11, 10, 12, 14, 14, 100, 100,
100, 12, 13, 10, 10, 11, 13, 100, 100, 13, 14, 13, 12,
10, 10, 11, 10, 100, 100, 100, 12, 13, 12, 13, 10, 10,
10, 15, 100, 14, 14, 11, 12, 12, 10, 10, 10, 15, 15,
14, 10, 10, 10, 11, 10, 10, 10, 12, 11, 11, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10]
array = [(i, val) for i,val in enumerate(array)]
array.sort(key= lambda x: x[1])
res = []
for i in range(len(array)-1, -1, -1):
    if len(res) == n: break
    if array[i][1] != valueToIgnore:
        res.append(array[i][0])

print(sorted(res))
# This will print [14, 16, 23, 36, 53, 55, 56, 63, 64, 65]
</code></pre>
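<p>Since the question asks for one-liners, here is a vectorized numpy sketch of the same idea (assuming <code>array</code> and <code>n</code> are defined as above):</p>
<pre><code>import numpy as np

arr = np.asarray(array)
valid = np.where(arr != 100)[0]                    # indices of entries that aren't 100
top_n = valid[np.argsort(arr[valid])[-n:]][::-1]   # indices of the n largest, largest first
</code></pre>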
|
python|arrays|pandas|numpy
| 3
|
378,192
| 62,168,928
|
Python How to loop sequence match Dataframes through specific columns and extra the rows
|
<p>I have been trying for the last 2 weeks to solve this problem, and I am almost at the goal.</p>
<p><strong>Case:</strong>
<a href="https://i.stack.imgur.com/bPN5A.jpg" rel="nofollow noreferrer">Overall depiction of what i am trying</a></p>
<ul>
<li>I have 2 dataframes extracted from 2 different excel sheets for this example let us say 3x3 (DF1 and DF2)</li>
<li>I want to match the cells from Column2 in DF1 with Column2 in DF2</li>
<li>I need to match the cells one by one </li>
</ul>
<p>Example: let us say I have cell X1 and I match it against each cell in Y(1,2,3);
X1 matches best with Y3.</p>
<ul>
<li>I want to Extract the Row X1 is located in and the Row Y3 is located in and save them aligned next to each other in a single row potentially in a 3. excel sheet</li>
</ul>
<p><strong>UPDATED What i have:</strong></p>
<p>This code is able to match with sequencematcher and print the matches, however i only get one output match instead of a list of maximum matches:</p>
<pre><code>import pandas as pd
from difflib import SequenceMatcher
data1 = {'Fruit': ['Apple','Pear','mango','Pinapple'],
'nr1': [22000,25000,27000,35000],
'nr2': [1,2,3,4]}
data2 = {'Fruit': ['Apple','Pear','mango','Pinapple'],
'nr1': [22000,25000,27000,35000],
'nr2': [1,2,3,4]}
df1 = pd.DataFrame(data1, columns = ['Fruit', 'nr1', 'nr2'])
df2 = pd.DataFrame(data2, columns = ['nr1','Fruit', 'nr2'])
#Single out specific columns to match
col1=(df1.iloc[:,[0]])
col2=(df2.iloc[:,[1]])

#function to match 2 values' similarity
def similar(a,b):
    ratio = SequenceMatcher(None, a, b).ratio()
    matches = a, b
    return ratio, matches

for i in col1:
    print(max(similar(i,j) for j in col2))
</code></pre>
<p>Output: (1.0, ('Fruit', 'Fruit'))</p>
<p>How do I fix this so that it gives me all the max matches, and how do I extract the respective rows the matches are located in? </p>
|
<p>This should work:</p>
<pre><code>import pandas as pd
import numpy as np
from difflib import SequenceMatcher
def similar(a, b):
    ratio = SequenceMatcher(None, a, b).ratio()
    return ratio

data1 = {'Fruit': ['Apple', 'Pear', 'mango', 'Pinapple'],
         'nr1': [22000, 25000, 27000, 35000],
         'nr2': [1, 2, 3, 4]}
data2 = {'Fruit': ['Apple', 'mango', 'peer', 'Pinapple'],
         'nr1': [22000, 25000, 27000, 35000],
         'nr2': [1, 2, 3, 4]}

df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)

order = []
for index, row in df1.iterrows():
    # similarity of this row's Fruit against every Fruit in df2
    maxima = [similar(row['Fruit'], j) for j in df2['Fruit']]
    best_ratio = max(maxima)      # best similarity score
    best_row = np.argmax(maxima)  # position of the best match in df2
    order.append(best_row)

df2 = df2.iloc[order].reset_index()
pd.concat([df1, df2], axis=1)
</code></pre>
|
python|excel|pandas|dataframe|sequencematcher
| 0
|
378,193
| 62,294,549
|
Convert data-00000-of-00001 file to Tensorflow Lite
|
<p>Is there any way to convert <code>data-00000-of-00001</code> to a TensorFlow Lite model?
The file structure is like this:</p>
<pre><code> |-semantic_model.data-00000-of-00001
|-semantic_model.index
|-semantic_model.meta
</code></pre>
|
<p><strong><em>Using TensorFlow Version: 1.15</em></strong></p>
<p>The following 2 steps will convert it to a <code>.tflite</code> model.</p>
<p><strong>1. Generate a TensorFlow Model for Inference (a frozen graph <code>.pb</code> file) using the <a href="https://stackoverflow.com/a/45868106/3903505">answer posted here</a></strong></p>
<p>What you currently have is a model <code>checkpoint</code> (a TensorFlow 1 model saved in 3 files: .data..., .meta and .index; this model can be further trained if needed). You need to convert it to a <code>frozen graph</code> (a TensorFlow 1 model saved in a single <code>.pb</code> file; this model cannot be trained further and is optimized for inference/prediction).</p>
<p><strong>2. Generate a TensorFlow lite model ( <code>.tflite</code> file)</strong></p>
<p>A. Initialize the TFLiteConverter: The <code>.from_frozen_graph</code> API can be defined <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/lite/TFLiteConverter#from_frozen_graph" rel="nofollow noreferrer">this</a> way and the attributes which can be added are <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/lite/TFLiteConverter#attributes" rel="nofollow noreferrer">here</a>. To find the names of these arrays, visualize the <code>.pb</code> file in <a href="https://lutzroeder.github.io/netron/" rel="nofollow noreferrer">Netron</a></p>
<pre><code>converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
graph_def_file='....path/to/frozen_graph.pb',
input_arrays=...,
output_arrays=....,
input_shapes={'...' : [_, _,....]}
)
</code></pre>
<p>B. Optional: Perform the simplest optimization known as <a href="https://www.tensorflow.org/lite/performance/post_training_quantization#dynamic_range_quantization" rel="nofollow noreferrer">post-training dynamic range quantization</a>. You can refer to the same document for other types of optimizations/quantization methods.</p>
<pre><code>converter.optimizations = [tf.lite.Optimize.DEFAULT]
</code></pre>
<p>C. Convert it to a <code>.tflite</code> file and save it</p>
<pre><code>tflite_model = converter.convert()
tflite_model_size = open('model.tflite', 'wb').write(tflite_model)
print('TFLite Model is %d bytes' % tflite_model_size)
</code></pre>
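<p>D. Optional: sanity-check the converted model by loading it with the TFLite interpreter:</p>
<pre><code># a quick check that the .tflite file loads and reports sensible I/O details
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())
</code></pre>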
|
python|tensorflow|tensorflow-lite
| 1
|
378,194
| 62,146,800
|
Numpy array_equal and float exact equality check
|
<p>I know that similar precision questions have been asked here however I am reading a code of a project that is doing an exact equality comparison among floats and is puzzling me. </p>
<p>Assume that <code>x1</code> and <code>x2</code> are of type <code>numpy.ndarray</code> and of dtype <code>np.float32</code>. These two variables have been computed by the same code executed on the same data but <code>x1</code> has been computed by one machine and <code>x2</code> by another (this is done on an AWS cluster which communicates with MPI).</p>
<p>Then the values are compared as follows</p>
<p><code>numpy.array_equal(x1, x2)</code></p>
<p>Hence, exact equality (no tolerance) is crucial for this program to work, and it seems to work fine. This is confusing me. How can one compare two <code>np.float32</code> values computed on different machines and face no precision issues? When can these two (or more) floats be equal?</p>
|
<p>The arithmetic specified by IEEE-754 is deterministic given certain constraints discussed in its clause 11 (2008 version), including suitable rules for expression evaluation (such as unambiguous translation from expressions in a programming language to IEEE-754 operations, such as <code>a+b+c</code> must give <code>(a+b)+c</code>, not <code>a+(b+c)</code>).</p>
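<p>A tiny illustration of why the evaluation rules matter (standard IEEE-754 binary64 arithmetic, as Python floats use):</p>
<pre><code># floating-point addition is not associative, so (a+b)+c and a+(b+c) can differ
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
</code></pre>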
<p>If parallelism is not used or is constructed suitably, such as always partitioning a job into the same pieces and combining their results in the same way regardless of order of completion of computations, then obtaining identical results is not surprising.</p>
<p>Some factors that prevent reproducibility include varying parallelism, using different math libraries (with different implementations of functions such as <code>pow</code>), and using languages that are not strict about floating-point evaluation (such as permitting, but not requiring, extra precision).</p>
|
python|numpy|floating-point|precision|equality
| 1
|
378,195
| 62,056,668
|
Joining multiple dataframes with multiple common columns
|
<p>I have multiple dataframes like this-</p>
<pre><code>df=pd.DataFrame({'a':[1,2,3],'b':[3,4,5],'c':[4,6,7]})
df2=pd.DataFrame({'a':[1,2,3],'d':[66,24,55],'c':[4,6,7]})
df3=pd.DataFrame({'a':[1,2,3],'f':[31,74,95],'c':[4,6,7]})
</code></pre>
<p>I want this output-</p>
<pre><code> a c
0 1 4
1 2 6
2 3 7
</code></pre>
<p>These are the common columns across the 3 datasets. I am looking for a solution that works for multiple columns without having to specify the common columns explicitly, as I have seen on SO (since the actual data frames are huge).</p>
|
<p>A combination of <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow noreferrer">reduce</a>, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.intersection.html" rel="nofollow noreferrer">intersection</a>, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html" rel="nofollow noreferrer">filter</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">concat</a> could help with your usecase:</p>
<pre><code>dfs = (df,df2,df3)
cols = [ent.columns for ent in dfs]
cols
[Index(['a', 'b', 'c'], dtype='object'),
Index(['a', 'd', 'c'], dtype='object'),
Index(['a', 'f', 'c'], dtype='object')]
#find the common columns to all :
from functools import reduce
universal_cols = reduce(lambda x,y : x.intersection(y), cols).tolist()
universal_cols
['a', 'c']
#filter for only universal_cols for each df
updates = [ent.filter(universal_cols) for ent in dfs]
</code></pre>
<p>If the columns and contents of the columns are the same, then you can skip the list comprehension and just filter from only one dataframe:</p>
<pre><code>#let's use the first dataframe
output = df.filter(universal_cols)
</code></pre>
<p>If the columns' contents are different, then concatenate and drop duplicates:</p>
<pre><code>#concatenate and drop duplicates
res = pd.concat(updates).drop_duplicates()
res #output has the same result
a c
0 1 4
1 2 6
2 3 7
</code></pre>
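<p>As a shorter sketch under the same assumptions, <code>pd.concat</code> with <code>join='inner'</code> keeps only the shared columns directly:</p>
<pre><code># inner join on columns keeps only those present in every frame
res = pd.concat([df, df2, df3], join='inner').drop_duplicates()
</code></pre>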
|
python|python-3.x|pandas
| 0
|
378,196
| 62,210,075
|
Pandas evaluate a string ratio into a float
|
<p>I have the following dataframe:</p>
<pre><code>Date Ratios
2009-08-23 2:1
2018-08-22 2:1
2019-10-24 2:1
2020-10-28 3:2
</code></pre>
<p>I want to convert the ratios into floats: 2:1 becomes 1/(2/1) = 0.5, and 3:2 becomes 1/(3/2) ≈ 0.66667.</p>
<p>I used the following formula </p>
<pre><code>df['Ratios'] = 1/pd.eval(df['Ratios'].str.replace(':','/'))
</code></pre>
<p>But I keep getting this error <code>TypeError: unsupported operand type(s) for /: 'int' and 'list'</code></p>
<p>What's wrong with my code and how do I fix it? </p>
|
<p>Don't use <code>pd.eval</code> on a whole <code>Series</code>: with more than roughly 100 rows it raises an ugly error, so convert each value separately:</p>
<pre><code>df['Ratios'] = 1/df['Ratios'].str.replace(':','/').apply(pd.eval)
</code></pre>
<p>But your error also suggests there are some non-numeric values mixed in with the <code>:</code>.</p>
<p>Error for 100+ rows:</p>
<blockquote>
<p>AttributeError: 'PandasExprVisitor' object has no attribute 'visit_Ellipsis'</p>
</blockquote>
<hr>
<p>If it is still not working, you can test whether the data are well-formed with a custom function:</p>
<pre><code>print (df)
Date Ratios
0 2009-08-23 2:1r
1 2018-08-22 2:1
2 2019-10-24 2:1
3 2020-10-28 3:2
def f(x):
    try:
        pd.eval(x)
        return False
    except:
        return True

df = df[df['Ratios'].str.replace(':','/').apply(f)]
print (df)
Date Ratios
0 2009-08-23 2:1r
</code></pre>
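<p>For completeness, a sketch that avoids <code>eval</code> entirely by splitting on <code>:</code> and dividing numerically (assuming the values are well-formed):</p>
<pre><code># '2:1' -> 1/2 = 0.5, '3:2' -> 2/3 ≈ 0.66667
num_den = df['Ratios'].str.split(':', expand=True).astype(float)
df['Ratios'] = num_den[1] / num_den[0]
</code></pre>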
|
python|pandas|eval
| 3
|
378,197
| 62,073,545
|
How do I make it so that my program reads multiple txt files and creates it into a dataframe for python?
|
<p>Currently I am making a program that cycles through multiple txt files and turns them into dataframes so that the data can be analysed. I have used the glob function to return a list of txt files. After that, I created a for loop which cycles through every item in the list; then I use the read_csv function to read the data and data.head() to preview it. I know my code is probably really stupid, but please help me fix it. I am currently at a loss for what to do. </p>
<p>Here is my original code:</p>
<pre><code>import glob
import pandas as pd
path = '/content/gdrive/My Drive/Datapoints/*.txt'
dataframes = []
for filename in glob.glob(path):
    data = pd.read_csv(filename, header=None, delimiter='\t')
    data.head()
</code></pre>
<p>For reasons I do not understand (I am a novice when it comes to programming), my code is getting a lot of errors:</p>
<pre><code>ParserError Traceback (most recent call last)
<ipython-input-52-f940d2e4b46d> in <module>()
2 dataframes = []
3 for filename in glob.glob(path):
----> 4 data = pd.read_csv(filename, header=None, delimiter='\t')
3 frames
/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in read(self, nrows)
2035 def read(self, nrows=None):
2036 try:
-> 2037 data = self._reader.read(nrows)
2038 except StopIteration:
2039 if self._first_chunk:
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 9 fields in line 110853, saw 10
</code></pre>
|
<p>Looks like one of your CSV's has an incorrect number of columns. It's on line 110853. You could add some test code to help troubleshoot it, like this:</p>
<pre><code>import glob
import pandas as pd
path = '/content/gdrive/My Drive/Datapoints/*.txt'
dataframes = []
for filename in glob.iglob(path):
    try:
        data = pd.read_csv(filename, header=None, delimiter='\t')
        data.head()
    except pd.errors.ParserError:
        print(f'Error in file: {filename}')
        raise
</code></pre>
<p>This should print out the filename that is causing the problem.</p>
<p>Note that I changed <code>glob.glob(path)</code> to <code>glob.iglob(path)</code>, which probably won't make much difference, unless you have an enormous number of files. <code>iglob</code> gives you an iterator, whereas <code>glob</code> gives a list, and then behind the scenes it ends up using the <code>list.__iter__</code> method in the same way. <code>iglob</code> will be slightly more efficient, and is a bit more pythonic.</p>
<p>Also, the <code>except</code> block ends with a <code>raise</code> statement, which is usually good practice when you do exception handling, since it prevents information about the error from being lost. It will also stop any additional files from being processed, which is a good thing in situations where the error is not recoverable and the code that caused the error should not be allowed to continue running.</p>
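<p>If you decide the malformed rows can simply be dropped, pandas can skip them for you. A hedged sketch: the keyword is <code>on_bad_lines</code> in pandas >= 1.3; older versions use <code>error_bad_lines=False</code> instead.</p>
<pre><code># skip rows with the wrong number of fields instead of raising ParserError
data = pd.read_csv(filename, header=None, delimiter='\t', on_bad_lines='skip')
</code></pre>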
|
python|pandas|file|glob
| 0
|
378,198
| 62,146,601
|
Storing multiple ndarrays to a list
|
<p>During each iteration of a for loop, some results get stored in an ndarray which looks like this:</p>
<pre><code>testpredict=[[1.1],
[2.344],
[3.00]]
</code></pre>
<p>I want to store the above results in a list variable during each iteration.
Something like...</p>
<pre><code>list[i]= testpredict
</code></pre>
<p>My final list should look like this:</p>
<pre><code>final_list=[
[[1.1], [2.344], [3.00]],
[[4.03130], [4.55914], [4.46367]],
.......
]
</code></pre>
<p>how can I do this correctly?</p>
|
<pre><code># declared outside the iteration loop
new_list = []

# inside the loop: append mutates the list in place and returns None,
# so do not reassign its result
new_list.append(testpredict.tolist())
</code></pre>
<p>'list' is a built-in type. Avoid using it as a variable name, as a best practice.</p>
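<p>A fuller sketch of the loop, with <code>model</code> and <code>batches</code> as hypothetical stand-ins for whatever produces <code>testpredict</code> in your code:</p>
<pre><code>final_list = []
for batch in batches:                        # hypothetical iterable of inputs
    testpredict = model.predict(batch)       # hypothetical (n, 1) ndarray per iteration
    final_list.append(testpredict.tolist())  # -> [[1.1], [2.344], [3.00]], ...
</code></pre>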
|
python|python-3.x|pandas|numpy
| 0
|
378,199
| 62,278,366
|
How to make a stacked bar chart in python
|
<p>Hi guys, I'm having trouble making a stacked bar chart. Here is my df:</p>
<pre><code>In[]top_10_medals_breakdown = pd.DataFrame()
top_10_medals_breakdown = top_10_medals_breakdown.append(d)
top_10_medals_breakdown
Out[]
Noc Medal Count
342 USA Bronze 1358
343 USA Gold 2638
344 USA Silver 1641
336 URS Bronze 689
337 URS Gold 1082
338 URS Silver 732
124 GER Bronze 746
125 GER Gold 745
126 GER Silver 674
115 GBR Bronze 651
116 GBR Gold 678
117 GBR Silver 739
108 FRA Bronze 666
109 FRA Gold 501
110 FRA Silver 610
167 ITA Bronze 531
168 ITA Gold 575
169 ITA Silver 531
296 SWE Bronze 535
297 SWE Gold 479
298 SWE Silver 522
48 CAN Bronze 451
49 CAN Gold 463
50 CAN Silver 438
14 AUS Bronze 517
15 AUS Gold 348
16 AUS Silver 455
142 HUN Bronze 371
143 HUN Gold 432
144 HUN Silver 332
</code></pre>
<p>Here is my attempted bar chart, which only shows the Gold medal count:</p>
<pre><code>plt.bar(top_10_medals_breakdown['Noc'], top_10_medals_breakdown['Count'], color='b')
</code></pre>
<p><a href="https://i.stack.imgur.com/HObrE.png" rel="nofollow noreferrer">It only counts the gold medals</a></p>
<p>so tl:dr I want to make a stacked bar chart that counts the medal of each of the countries</p>
|
<p>Use <code>pivot</code> to lay out the data, sort by descending gold medals to fix the bar order, and reorder the columns to set the stacking order. Once the data is organized like this, it can be graphed.</p>
<pre><code>import pandas as pd
import numpy as np
import io
data = '''
Noc Medal Count
342 USA Bronze 1358
343 USA Gold 2638
344 USA Silver 1641
336 URS Bronze 689
337 URS Gold 1082
338 URS Silver 732
124 GER Bronze 746
125 GER Gold 745
126 GER Silver 674
115 GBR Bronze 651
116 GBR Gold 678
117 GBR Silver 739
108 FRA Bronze 666
109 FRA Gold 501
110 FRA Silver 610
167 ITA Bronze 531
168 ITA Gold 575
169 ITA Silver 531
296 SWE Bronze 535
297 SWE Gold 479
298 SWE Silver 522
48 CAN Bronze 451
49 CAN Gold 463
50 CAN Silver 438
14 AUS Bronze 517
15 AUS Gold 348
16 AUS Silver 455
142 HUN Bronze 371
143 HUN Gold 432
144 HUN Silver 332
'''
df = pd.read_csv(io.StringIO(data), sep='\s+')
df3 = df.pivot(columns='Medal', index='Noc', values='Count')
df3.sort_values('Gold', ascending=False, inplace=True)
df3 = df3.iloc[:,[1,2,0]]
df3
Medal Bronze Gold Silver
Noc
USA 1358 2638 1641
URS 689 1082 732
GER 746 745 674
GBR 651 678 739
ITA 531 575 531
FRA 666 501 610
SWE 535 479 522
CAN 451 463 438
HUN 371 432 332
AUS 517 348 455
</code></pre>
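<p>The stacked chart itself can then be drawn with pandas' plotting wrapper (a minimal sketch, assuming <code>matplotlib</code> is available):</p>
<pre><code>import matplotlib.pyplot as plt

# stacked=True piles Bronze, Gold and Silver on top of one another per country
df3.plot(kind='bar', stacked=True)
plt.show()
</code></pre>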
<p><a href="https://i.stack.imgur.com/Oha2b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oha2b.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|seaborn
| 0
|