| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
8,400
| 51,736,715
|
Numpy Dynamic Slicing Per Row
|
<p>How do I dynamically slice each row given a starting and ending index without using a for loop? I can do it with the loop listed below, but it is way too slow for something where x.shape[0] > 1 million.</p>
<pre><code>import numpy as np

x = np.arange(0, 100)
x = x.reshape(20,5)
s_idx = np.random.randint(0,3,x.shape[0])
e_idx = np.random.randint(3,6,x.shape[0])
print(s_idx)
>>> array([2, 1, 2, ..., 1, 0, 2])
print(e_idx)
>>> array([3, 4, 5, ..., 3, 3, 3])
print(x)
>>> array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
...,
[85, 86, 87, 88, 89],
[90, 91, 92, 93, 94],
[95, 96, 97, 98, 99]])
x_indexed = []
for idx, value in enumerate(s_idx):
    x_indexed.append(x[idx][s_idx[idx]:e_idx[idx]])
print(x_indexed)
>>> [array([2]),
array([6, 7, 8]),
array([12, 13, 14]),
array([15, 16, 17]),
array([20, 21, 22, 23]),
array([26, 27, 28, 29]),
array([30, 31, 32, 33]),
array([35, 36, 37, 38, 39]),
array([40, 41, 42]),
array([46, 47, 48]),
array([52, 53, 54]),
array([56, 57]),
array([62, 63, 64]),
array([67]),
array([70, 71, 72, 73]),
array([77]),
array([80, 81, 82, 83, 84]),
array([86, 87]),
array([90, 91, 92]),
array([97])]
</code></pre>
|
<p>You can work with <a href="https://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html" rel="nofollow noreferrer">masked arrays</a>:</p>
<pre><code>import numpy as np
np.random.seed(100)
x = np.arange(0, 100)
x = x.reshape(20, 5)
s_idx = np.random.randint(0, 3, x.shape[0])
e_idx = np.random.randint(3, 6, x.shape[0])
# This is optional, reduce x to the minimum possible block
first_col, last_col = s_idx.min(), e_idx.max()
x = x[:, first_col:last_col]
s_idx -= first_col
e_idx -= first_col
col_idx = np.arange(x.shape[1])
# Mask elements out of range
mask = (col_idx < s_idx[:, np.newaxis]) | (col_idx >= e_idx[:, np.newaxis])
x_masked = np.ma.array(x, mask=mask)
print(x_masked)
</code></pre>
<p>Output:</p>
<pre><code>[[0 1 2 3 --]
[5 6 7 8 9]
[10 11 12 13 14]
[-- -- 17 -- --]
[-- -- 22 -- --]
[25 26 27 28 --]
[-- -- 32 33 --]
[-- 36 37 38 --]
[-- -- 42 -- --]
[-- -- 47 -- --]
[-- -- 52 53 --]
[-- -- 57 58 --]
[-- 61 62 63 --]
[65 66 67 68 69]
[70 71 72 -- --]
[75 76 77 78 79]
[80 81 82 83 --]
[-- -- 87 88 --]
[90 91 92 93 94]
[-- 96 97 98 99]]
</code></pre>
<p>You can do most NumPy operations with a masked array, but if you still want the list of arrays you could do something like:</p>
<pre><code>list_arrays = [row[~m] for row, m in zip(x, x_masked.mask)]
print(list_arrays)
</code></pre>
<p>Output:</p>
<pre><code>[array([0, 1, 2, 3]),
array([5, 6, 7, 8, 9]),
array([10, 11, 12, 13, 14]),
array([17]),
array([22]),
array([25, 26, 27, 28]),
array([32, 33]),
array([36, 37, 38]),
array([42]),
array([47]),
array([52, 53]),
array([57, 58]),
array([61, 62, 63]),
array([65, 66, 67, 68, 69]),
array([70, 71, 72]),
array([75, 76, 77, 78, 79]),
array([80, 81, 82, 83]),
array([87, 88]),
array([90, 91, 92, 93, 94]),
array([96, 97, 98, 99])]
</code></pre>
<p>Although in this case obviously you do not need to construct the intermediate masked array, you can just iterate through the rows of <code>x</code> and <code>mask</code>.</p>
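<p>For example, a minimal sketch of that direct approach, reusing <code>x</code> and <code>mask</code> from the code above instead of the masked array:</p>
<pre><code># Same list of per-row arrays, built straight from x and the boolean mask
list_arrays = [row[~m] for row, m in zip(x, mask)]
</code></pre>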
|
python|numpy|dynamic|slice
| 2
|
8,401
| 51,827,030
|
How to replace old string values in Series/column of dataframe with values from dict?
|
<p>This question is somewhat similar to: <a href="https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict">Remap values in pandas column with a dict</a>, however, the answers are quite dated and do not cover the "SettingWithCopyWarning".</p>
<p>I am simply trying to replace the original strings in a column, "col", of my dataframe, "df", using a dictionary, "dict1". Here is my code which successfully replaces the values:</p>
<pre><code>temp_series = df.loc[:, col].copy()
for name in temp_series:
    for old, new in q_names_dict.items():
        if old.lower() == name.lower():
            temp_series.replace(name, new, inplace=True)
</code></pre>
<p>However, when I attempt to update my original dataframe with this copy, "temp_series", I get a "SettingWithCopyWarning". Here is the code which throws that warning:</p>
<pre><code>df.loc[:,col] = temp_series
# The bottom three don't work either.
#df[col] = temp_series
#df.loc[:,col].update(temp_series)
#df[col].update(temp_series)
</code></pre>
|
<p>By changing the deep copy to a shallow copy, I was able to have my changes carried through to the original dataframe, "df". This behaviour is stated in the docs: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html</a></p>
<p>Here is the code that enables shallow copy:</p>
<pre><code>temp_series = df.loc[:,col].copy(deep=False)
</code></pre>
<p>Therefore, I don't have to explicitly update my dataframe after using the above code. This also prevents the SettingWithCopyWarning.</p>
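<p>A minimal end-to-end sketch of the shallow-copy approach described above (assuming <code>df</code>, <code>col</code> and <code>q_names_dict</code> are defined as in the question's code):</p>
<pre><code>temp_series = df.loc[:, col].copy(deep=False)   # shallow copy shares data with df
for name in temp_series:
    for old, new in q_names_dict.items():
        if old.lower() == name.lower():
            temp_series.replace(name, new, inplace=True)
# No explicit re-assignment back to df should be needed with the shallow copy.
</code></pre>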
|
python-3.x|pandas|dataframe|series
| 0
|
8,402
| 51,667,881
|
Lossy compression of numpy array (image, uint8) in memory
|
<p>I am trying to load a data set of 1,000,000 images into memory. As standard numpy arrays (uint8) all images combined fill around 100 GB of RAM, but I need to get this down to < 50 GB while still being able to quickly read the images back into numpy (that's the whole point of keeping everything in memory). Lossless compression like blosc only reduces file size by around 10%, so I went to JPEG compression. Minimum example:</p>
<pre><code>import io

import numpy as np
from PIL import Image

numpy_array = (255 * np.random.rand(256, 256, 3)).astype(np.uint8)
image = Image.fromarray(numpy_array)
output = io.BytesIO()
image.save(output, format='JPEG')
</code></pre>
<p>At runtime I am reading the images with:</p>
<pre><code>[np.array(Image.open(output)) for _ in range(1000)]
</code></pre>
<p>JPEG compression is very effective (< 10 GB), but the time it takes to read 1000 images back into numpy array is around 2.3 seconds, which seriously hurts the performance of my experiments. I am searching for suggestions that give a better trade-off between compression and read-speed.</p>
|
<p>I am still not certain I understand what you are trying to do, but I created some dummy images and did some tests as follows. I'll show how I did that in case other folks feel like trying other methods and want a data set.</p>
<p>First, I created 1,000 images using <strong>GNU Parallel</strong> and <strong>ImageMagick</strong> like this:</p>
<pre><code>parallel convert -depth 8 -size 256x256 xc:red +noise random -fill white -gravity center -pointsize 72 -annotate 0 "{}" -alpha off s_{}.png ::: {0..999}
</code></pre>
<p>That gives me 1,000 images called <code>s_0.png</code> through <code>s_999.png</code> and image 663 looks like this:</p>
<p><a href="https://i.stack.imgur.com/kiPR6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kiPR6.png" alt="enter image description here"></a></p>
<p>Then I did what I think you are trying to do - though it is hard to tell from your code:</p>
<pre><code>#!/usr/local/bin/python3

import io
import time
import numpy as np
from PIL import Image

# Create BytesIO object
output = io.BytesIO()

# Load all 1,000 images and write into BytesIO object
for i in range(1000):
    name = "s_{}.format(i)".format(i) if False else "s_{}.png".format(i)
    print("Opening image: {}".format(name))
    im = Image.open(name)
    im.save(output, format='JPEG', quality=50)

nbytes = output.getbuffer().nbytes
print("BytesIO size: {}".format(nbytes))

# Read back images from BytesIO into list
start = time.clock()
l = [np.array(Image.open(output)) for _ in range(1000)]
diff = time.clock() - start

print("Time: {}".format(diff))
</code></pre>
<p>And that takes 2.4 seconds to read all 1,000 images from the BytesIO object and turn them into numpy arrays.</p>
<p>Then, I palettised the images by reducing to 256 colours (which I agree is lossy - just as your method) and saved a list of palettised image objects which I can readily later convert back to numpy arrays by simply calling:</p>
<pre><code>np.array(ImageList[i].convert('RGB'))
</code></pre>
<p>Storing the data as a palettised image saves 66% of the space because you only store one byte of palette index per pixel rather than 3 bytes of RGB, so it is better than the 50% compression you seek.</p>
<pre><code>#!/usr/local/bin/python3

import io
import time
import numpy as np
from PIL import Image

# Empty list of images
ImageList = []

# Load all 1,000 images
for i in range(1000):
    name = "s_{}.png".format(i)
    print("Opening image: {}".format(name))
    im = Image.open(name)
    # Add palettised image to list
    ImageList.append(im.quantize(colors=256, method=2))

# Read back images into numpy arrays
start = time.clock()
l = [np.array(ImageList[i].convert('RGB')) for i in range(1000)]
diff = time.clock() - start

print("Time: {}".format(diff))

# Quick test
# Image.fromarray(l[999]).save("result.png")
</code></pre>
<p>That now takes 0.2s instead of 2.4s - let's hope the loss of colour accuracy is acceptable to your unstated application :-)</p>
|
python|performance|numpy|compression|image-compression
| 5
|
8,403
| 37,562,111
|
Currency and Exchange Name from Yahoo
|
<p>I'm quite new to pandas (and coding in general), but am really enjoying messing around with pulling stock data from Yahoo Finance.</p>
<p>I was just wondering if there's a way to also pull the name of the exchange that the stock is listed on (i.e. LSE, NYSE, AIM etc), as well as the currency the stock is listed in from Yahoo?</p>
<p>This is my code so far (I'll work on adding some axis labels when I'm back from work tonight):</p>
<pre><code>import pandas as pd
import sys
import matplotlib
import matplotlib.pyplot as plt
import pandas_datareader.data as web
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
symbols_list = ['ORCL', 'AAPL', 'TSLA']
d = {}
for x in symbols_list:
    d[x] = web.DataReader(x, "yahoo", '2012-12-01')

ticker = pd.Panel(d)
df1 = ticker.minor_xs('Adj Close')
print(df1)
fig = plt.figure()
fig.suptitle("Stock Prices", fontsize=36, fontweight='bold')
plt.plot(df1)
plt.legend(ticker, loc='best', shadow=True, fontsize=36)
plt.show()
</code></pre>
|
<p>I think you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> from this <a href="http://www.nasdaq.com/screening/company-list.aspx" rel="nofollow">link</a>, filter the columns and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> them into <code>df</code>. Then you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a> for mapping:</p>
<pre><code>import pandas as pd
import sys
import matplotlib
import matplotlib.pyplot as plt
import pandas_datareader.data as web
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
df_NASDAQ = pd.read_csv('http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NASDAQ&render=download',
                        usecols=['Symbol', 'Name'])
#print (df_NASDAQ.head())
df_NYSE = pd.read_csv('http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=NYSE&render=download',
                      usecols=['Symbol', 'Name'])
#print (df_NYSE.head())
df_AMEX = pd.read_csv('http://www.nasdaq.com/screening/companies-by-industry.aspx?exchange=AMEX&render=download',
                      usecols=['Symbol', 'Name'])
#print (df_AMEX.head())
df = pd.concat([df_NASDAQ, df_NYSE, df_AMEX]).set_index('Symbol')
print (df.head())
Name
Symbol
TFSC 1347 Capital Corp.
TFSCR 1347 Capital Corp.
TFSCU 1347 Capital Corp.
TFSCW 1347 Capital Corp.
PIH 1347 Property Insurance Holdings, Inc.
</code></pre>
<pre><code>symbols_list = ['ORCL', 'AAPL', 'TSLA']
d = {}
for x in symbols_list:
    print (x, df.loc[x, 'Name'])
    #d[ x ] = web.DataReader(x, "yahoo", '2012-12-01')
    d[ df.loc[x, 'Name'] ] = web.DataReader(x, "yahoo", '2012-12-01')

# printed output:
# ORCL Oracle Corporation
# AAPL Apple Inc.
# TSLA Tesla Motors, Inc.

ticker = pd.Panel(d)
df1 = ticker.minor_xs('Adj Close')
print (df1.head())

fig = plt.figure()
fig.suptitle("Stock Prices", fontsize=36, fontweight='bold')
plt.plot(df1)
plt.legend(ticker, loc='best', shadow=True, fontsize=36)
plt.show()
</code></pre>
|
pandas|currency|yahoo|yahoo-finance
| 0
|
8,404
| 37,214,482
|
Saving with h5py arrays of different sizes
|
<p>I am trying to store about 3000 numpy arrays using the HDF5 data format. The arrays vary in length from 5306 to 121999 np.float64 values.</p>
<p>I am getting an
<code>Object dtype dtype('O') has no native HDF5 equivalent</code>
error, since due to the irregular nature of the data numpy uses the general object class.</p>
<p>My idea was to pad all the arrays to 121999 length and storing the sizes in another dataset.</p>
<p>However this seems quite inefficient in space, is there a better way? </p>
<p>EDIT: To clarify, I want to store 3126 arrays of <code>dtype = np.float64</code>. I have them stored in a <code>list</code> and when h5py does the routine it converts them to an array of <code>dtype = object</code> because they are of different lengths. To illustrate it:</p>
<pre><code>a = np.array([0.1,0.2,0.3],dtype=np.float64)
b = np.array([0.1,0.2,0.3,0.4,0.5],dtype=np.float64)
c = np.array([0.1,0.2],dtype=np.float64)
arrs = np.array([a,b,c]) # This is performed inside the h5py call
print(arrs.dtype)
>>> object
print(arrs[0].dtype)
>>> float64
</code></pre>
|
<p>Looks like you tried something like:</p>
<pre><code>In [364]: f=h5py.File('test.hdf5','w')
In [365]: grp=f.create_group('alist')
In [366]: grp.create_dataset('alist',data=[a,b,c])
...
TypeError: Object dtype dtype('O') has no native HDF5 equivalent
</code></pre>
<p>But if instead you save the arrays as separate datasets it works:</p>
<pre><code>In [367]: adict=dict(a=a,b=b,c=c)
In [368]: for k,v in adict.items():
   .....:     grp.create_dataset(k,data=v)
   .....:
In [369]: grp
Out[369]: <HDF5 group "/alist" (3 members)>
In [370]: grp['a'][:]
Out[370]: array([ 0.1, 0.2, 0.3])
</code></pre>
<p>and to access all the datasets in the group:</p>
<pre><code>In [389]: [i[:] for i in grp.values()]
Out[389]:
[array([ 0.1, 0.2, 0.3]),
array([ 0.1, 0.2, 0.3, 0.4, 0.5]),
array([ 0.1, 0.2])]
</code></pre>
|
python|arrays|numpy|hdf5|h5py
| 21
|
8,405
| 37,468,869
|
Python, opposite of conditional array
|
<p>I have two <code>numpy</code> arrays, let's say <code>A</code> and <code>B</code></p>
<pre><code>In [3]: import numpy as np
In [4]: A = np.array([0.10,0.20,0.30,0.40,0.50])
In [5]: B = np.array([0.15,0.23,0.33,0.41,0.57])
</code></pre>
<p>I apply a condition like this:</p>
<pre><code>In [6]: condition_array = A[(B>0.2)*(B<0.5)]
In [7]: condition_array
Out[7]: array([ 0.2, 0.3, 0.4])
</code></pre>
<p><strong>Now how do I get the opposite of <code>condition_array</code>?</strong> </p>
<p>i.e. the values of array <code>A</code> for which array <code>B</code> is <strong><code>NOT GREATER THAN 0.2 and NOT LESS THAN 0.5</code></strong> ? </p>
<pre><code>In [8]: test_array = A[(B<0.2)*(B>0.5)]
In [9]: test_array
Out[9]: array([], dtype=float64)
</code></pre>
<p>The above doesn't seem to work ! </p>
|
<p>You can use the <code>~</code> operator to invert the array ...</p>
<pre><code>A[~((B>0.2)*(B<0.5))]
</code></pre>
<p>Note that your use of <code>*</code> seems like it's meant to do a logical "and". Many people would prefer that you use the binary "and" operator (<code>&</code>) instead -- Personally, I prefer to be even more explicit:</p>
<pre><code>A[~np.logical_and(B > 0.2, B < 0.5)]
</code></pre>
<p>Alternatively, the following work too:</p>
<pre><code>A[(B <= 0.2) | (B >= 0.5)]
A[np.logical_or(B <= 0.2, B >= 0.5)]
</code></pre>
|
python|arrays|numpy|conditional
| 3
|
8,406
| 41,743,773
|
Python3 - convert csv to json using pandas
|
<p>I've got a <code>.csv</code> file with 5 columns, but I only need the <code>json</code> file to contain 3 of these; how would I go about doing it?</p>
<p>csv file:</p>
<pre><code>Ncode Ocode name a b c
1 1.1 1x 1a 1b 1c
2 2.2 2x 2a 2b 2c
3 3.3 3x 3a 3b 3c
</code></pre>
<p>Json output:</p>
<pre><code>{"1.1":[{"a":"1a"},{"b":"1b"},{"c":"1c"}],"2.2":[{"a":"2a"},{"b":"2b"},{"c":"2c"}]}
</code></pre>
|
<pre><code>import json
from io import StringIO

import pandas as pd

txt = """Ncode Ocode name a b c
1 1.1 1x 1a 1b 1c
2 2.2 2x 2a 2b 2c
3 3.3 3x 3a 3b 3c
"""
df = pd.read_csv(StringIO(txt), delim_whitespace=True)
json.dumps(
{'{:0.2f}'.format(r.Ocode): [{'a': r.a}, {'b': r.b}, {'c': r.c}]
for r in df.itertuples()}
)
'{"2.20": [{"a": "2a"}, {"b": "2b"}, {"c": "2c"}], "3.30": [{"a": "3a"}, {"b": "3b"}, {"c": "3c"}], "1.10": [{"a": "1a"}, {"b": "1b"}, {"c": "1c"}]}'
</code></pre>
|
python|json|pandas|csv|data-conversion
| 1
|
8,407
| 37,842,120
|
When does advanced indexing on structured masked arrays *really* return a copy?
|
<p>When I have a structured masked array with boolean indexing, under what conditions do I get a view and when do I get a copy? The <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">documentation</a> says that advanced indexing always returns a copy, but this is not true, since something like <code>X[X>0]=42</code> is technically advanced indexing, but the assignment works. My situation is more complex:</p>
<p>I want to set the mask of a particular field based on a criterion from another field, so I need to get the field, apply the boolean indexing, and get the mask. There are 3! = 6 orders of doing so.</p>
<p>Preparation:</p>
<pre><code>In [83]: M = ma.MaskedArray(random.random(400).view("f8,f8,f8,f8")).reshape(10, 10)
In [84]: crit = M[:, 4]["f2"] > 0.5
</code></pre>
<ol>
<li><p>Field - index - mask (fails):</p>
<pre><code>In [85]: M["f3"][crit, 3].mask = True
In [86]: print(M["f3"][crit, 3].mask)
[False False False False False]
</code></pre></li>
<li><p>Index - field - mask (fails):</p>
<pre><code>In [87]: M[crit, 3]["f3"].mask = True
In [88]: print(M[crit, 3]["f3"].mask)
[False False False False False]
</code></pre></li>
<li><p>Index - mask - field (fails):</p>
<pre><code>In [94]: M[crit, 3].mask["f3"] = True
In [95]: print(M[crit, 3].mask["f3"])
[False False False False False]
</code></pre></li>
<li><p>Mask - index - field (fails):</p>
<pre><code>In [101]: M.mask[crit, 3]["f3"] = True
In [102]: print(M.mask[crit, 3]["f3"])
[False False False False False]
</code></pre></li>
<li><p>Field - mask - index (succeeds):</p>
<pre><code>In [103]: M["f3"].mask[crit, 3] = True
In [104]: print(M["f3"].mask[crit, 3])
[ True True True True True]
# set back to False so I can try method #6
In [105]: M["f3"].mask[crit, 3] = False
In [106]: print(M["f3"].mask[crit, 3])
[False False False False False]
</code></pre></li>
<li><p>Mask - field - index (succeeds):</p>
<pre><code>In [107]: M.mask["f3"][crit, 3] = True
In [108]: print(M.mask["f3"][crit, 3])
[ True True True True True]
</code></pre></li>
</ol>
<p>So, it looks like indexing must come <em>last</em>.</p>
|
<p>The issue of <code>__setitem__</code> v. <code>__getitem__</code> is important, but with structured array and masking it's a little harder to sort out when a <code>__getitem__</code> is first making a copy.</p>
<p>Regarding the structured arrays, it shouldn't matter whether the field index occurs first or the element. However some releases appear to have a bug in this regard. I'll try to find a recent SO question where this was a problem.</p>
<p>With a masked array, there's the question of how to correctly modify the mask. The <code>.mask</code> is a property that accesses the underlying <code>._mask</code> array. But that is fetched with <code>__getattr__</code>. So the simple <code>setitem</code> v <code>getitem</code> distinction does not apply directly.</p>
<p>Lets skip the structured bit first</p>
<pre><code>In [584]: M = np.ma.MaskedArray(np.arange(4))
In [585]: M
Out[585]:
masked_array(data = [0 1 2 3],
mask = False,
fill_value = 999999)
In [586]: M.mask
Out[586]: False
In [587]: M.mask[[1,2]]=True
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-587-9010ee8f165e> in <module>()
----> 1 M.mask[[1,2]]=True
TypeError: 'numpy.bool_' object does not support item assignment
</code></pre>
<p>Initially <code>mask</code> is a scalar boolean, not an array.</p>
<p>This works</p>
<pre><code>In [588]: M.mask=np.zeros((4,),bool) # change mask to array
In [589]: M
Out[589]:
masked_array(data = [0 1 2 3],
mask = [False False False False],
fill_value = 999999)
In [590]: M.mask[[1,2]]=True
In [591]: M
Out[591]:
masked_array(data = [0 -- -- 3],
mask = [False True True False],
fill_value = 999999)
</code></pre>
<p>This does not</p>
<pre><code>In [592]: M[[1,2]].mask=True
In [593]: M
Out[593]:
masked_array(data = [0 -- -- 3],
mask = [False True True False],
fill_value = 999999)
</code></pre>
<p><code>M[[1,2]]</code> is evidently the copy, and the assignment is to its <code>mask</code> attribute, not <code>M.mask</code>.</p>
<p>....</p>
<p>A masked array has <code>.__setmask__</code> method. You can study that in <code>np.ma.core.py</code>. And the mask property is defined with</p>
<pre><code>mask = property(fget=_get_mask, fset=__setmask__, doc="Mask")
</code></pre>
<p>So <code>M.mask=...</code> does use this.</p>
<p>So it looks like the problem case is doing</p>
<pre><code>M.__getitem__(index).__setmask__(values)
</code></pre>
<p>hence the copy. The <code>M.mask[]=...</code> is doing</p>
<pre><code>M._mask.__setitem__(index, values)
</code></pre>
<p>since <code>_getmask</code> just does <code>return self._mask</code>.</p>
<hr>
<pre><code>M["f3"].mask[crit, 3] = True
</code></pre>
<p>works because <code>M['f3']</code> is a view. (<code>M[['f1','f3']]</code> is ok for get, but doesn't work for setting).</p>
<p><code>M.mask["f3"]</code> is also a view. I'm not entirely sure of the order the relevant get and sets. <code>__setmask__</code> has code that deals specifically with compound dtype (structured).</p>
<p>=========================</p>
<p>Looking at a structured array, without the masking complication, the indexing order matters</p>
<pre><code>In [607]: M1 = np.arange(16).view("i,i")
In [609]: M1[[3,4]]['f1']=[3,4] # no change
In [610]: M1[[3,4]]['f1']
Out[610]: array([7, 9], dtype=int32)
In [611]: M1['f1'][[3,4]]=[1,2] # change
In [612]: M1
Out[612]:
array([(0, 1), (2, 3), (4, 5), (6, 1), (8, 2), (10, 11), (12, 13), (14, 15)], dtype=[('f0', '<i4'), ('f1', '<i4')])
</code></pre>
<p>So we still have a <code>__getitem__</code> followed by a <code>__setitem__</code>, and we have to pay attention as to whether the get returns a view or a copy.</p>
|
numpy|indexing|structured-array|masked-array
| 1
|
8,408
| 31,643,178
|
How to retain Index information when calculating euclidean distances in a dataframe?
|
<p>Hi I would like to calculate euclidean distances between all points with X,Y coordinates in a dataframe and return the ID(the index) of the closest point.</p>
<p>currently I am using this to create a distance matrix:</p>
<pre><code>distancematrix = squareform(pdist(group))
df = pd.DataFrame(distancematrix)
</code></pre>
<p>followed by this to return the minimum point:</p>
<pre><code>closest=df.idxmin()
</code></pre>
<p>I dont seem to be able to retain the correct ID/index in the first step as it seems to assign column and row numbers from 0 onwards instead of using the index. is there a way to keep the correct index here?</p>
|
<p>The distance matrix includes each point's distance to itself, which will always be zero. Thus, you should expect each row to just see itself as its own minimum.</p>
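<p>A minimal sketch of how the original index can be kept and the zero diagonal excluded (assuming a small example frame standing in for <code>group</code>, with the point IDs as its index):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform

group = pd.DataFrame({'X': [0.0, 1.0, 5.0], 'Y': [0.0, 1.2, 5.0]},
                     index=['a', 'b', 'c'])

# Pass the original index as both row and column labels of the distance matrix
dist = pd.DataFrame(squareform(pdist(group[['X', 'Y']])),
                    index=group.index, columns=group.index)

# Mask the zero diagonal so idxmin() returns the closest *other* point
closest = dist.mask(np.eye(len(dist), dtype=bool)).idxmin()
print(closest)
# a    b
# b    a
# c    b
# dtype: object
</code></pre>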
|
python|pandas|scipy
| 0
|
8,409
| 31,485,576
|
Fast basic linear algebra in Cython for recurrent calls
|
<p>I'm trying to program a function in Cython for a Monte Carlo simulation. The function involves multiple small linear algebra operations, like dot products and matrix inversions. As the function is being called hundreds of thousands of times, the numpy overhead accounts for a large share of the cost.
Three years ago someone asked this question: <a href="https://stackoverflow.com/questions/16114100/calling-dot-products-and-linear-algebra-operations-in-cython">calling dot products and linear algebra operations in Cython?</a>
I have tried to use the recommendations from both answers, but the first, scipy.linalg.blas, still goes through a Python wrapper and I'm not really getting any improvement. The second, using the GSL wrapper, is also fairly slow and tends to freeze my system when the dimensions of the vectors are very large. I also found the Ceygen package, which looked very promising, but it seems that the installation file broke in the last Cython update.
On the other hand I saw that scipy is working on a Cython wrapper for LAPACK, but it looks as if it's still unavailable (<a href="http://docs.scipy.org/doc/scipy-dev/reference/linalg.cython_lapack.html" rel="nofollow noreferrer">scipy-cython-lapack</a>).
In the end I could also code my own C routines for those operations, but that seems like re-inventing the wheel.</p>
<p>So as to summarize: Is there a <strong>new</strong> way to this kind of operations in Cython? (Hence I don't think this is a duplicate) Or have you found a better way to deal with this sort of problem that I haven't seen yet?</p>
<p>Obligatory code sample:
(This is just an example, of course it can still be improved, but just to give the idea)</p>
<pre><code>cimport numpy as np
import numpy as np

cpdef double risk(np.ndarray[double, ndim=2, mode='c'] X,
                  np.ndarray[double, ndim=1, mode='c'] v1,
                  np.ndarray[double, ndim=1, mode='c'] v2):
    cdef np.ndarray[double, ndim=2, mode='c'] tmp, sumX
    cdef double ret

    tmp = np.exp(X)
    sumX = np.tile(np.sum(tmp, 1).reshape(-1, 1), (1, tmp.shape[0]))
    tmp = tmp / sumX
    ret = np.inner(v1, np.dot(X, v2))
    return ret
</code></pre>
<p>Thanks!!</p>
<p>tl;dr: how-to linear algebra in cython? </p>
|
<p>The answer <a href="https://stackoverflow.com/questions/16114100/calling-dot-products-and-linear-algebra-operations-in-cython">you link to</a> is still a good way to call BLAS functions from Cython. It is not really a Python wrapper; Python is merely used to get the C pointer to the function, and this can be done at initialization time. So you should get essentially C-like speed. I could be wrong, but I think that the upcoming SciPy 0.16 release will provide a convenient BLAS Cython API based on this approach; it will not change things performance-wise.</p>
<p>If you didn't experience any speed-up after porting repeatedly called BLAS functions to Cython, either the Python overhead for doing this in numpy doesn't matter (e.g. if the computation itself is the most expensive part) or you are doing something wrong (unnecessary memory copies, etc.)</p>
<p>I would say that this approach should be faster and easier to maintain than with GSL, provided of course that you compiled scipy with an optimized BLAS (OpenBLAS, ATLAS, MKL, etc.).</p>
|
python|numpy|scipy|cython
| 1
|
8,410
| 64,396,493
|
The most efficient way to search every element of a list in a dataframe
|
<p>I have a dataset of over 1M rows like d. I need to find, in that dataset, the indexes of every element of a dataframe like seekingframe, which has over 1500 elements.</p>
<pre><code>import pandas as pd
d=pd.DataFrame([225,230,235,240,245,250,255,260,265,270,275,280,285,290,295,300,305,310,315,320])
seekingframe=pd.DataFrame([275,280,285,290,295,300,305,310,315,320,325,330,335,340,345,350,355,180,255,260])
</code></pre>
<p>I need to find every element of seekingframe in d as fast as possible. I mean, I need a final array like:</p>
<pre><code>array([ 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7])
</code></pre>
<p>or the difference array like</p>
<pre><code>[11, 12, 13, 14, 15, 16, 17, 18]
</code></pre>
<p>or something denoting the similarities or differences. Actually, if it is possible, I would rather drop the differing elements.</p>
|
<p>It's likely faster to use numpy. On these small arrays of unique values, <code>np.in1d</code> (the numpy function that tests membership of one array's elements in another and returns <code>True</code> or <code>False</code>) was more than 10x faster than pandas <code>.isin()</code>, even without passing <code>assume_unique=True</code>.</p>
<p>It was roughly 30x faster if you <em>did</em> pass <code>assume_unique=True</code>:</p>
<pre><code>#finding similar
%timeit d[d[0].isin(seekingframe[0])].index
404 µs ± 6.25 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
#finding difference
%timeit seekingframe[~seekingframe[0].isin(d[0])].index
458 µs ± 2.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# finding similar with numpy arrays and NOT passing `assume_unique=True`
a = d[0].to_numpy()
b = seekingframe[0].to_numpy()
%timeit np.arange(a.shape[0])[np.in1d(a, b)]
35.4 µs ± 779 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# finding similar with numpy arrays and passing `assume_unique=True`
a = d[0].to_numpy()
b = seekingframe[0].to_numpy()
%timeit np.arange(a.shape[0])[np.in1d(a, b, assume_unique=True)]
12 µs ± 337 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
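<p>If the exact output array from the question (with <code>-1</code> marking values missing from <code>d</code>) is wanted, a pandas <code>Index.get_indexer</code> lookup is one way to sketch it (it assumes the values in <code>d</code> are unique):</p>
<pre><code># Map each seekingframe value to its position in d, -1 where absent
idx = pd.Index(d[0]).get_indexer(seekingframe[0])
print(idx)
# [10 11 12 13 14 15 16 17 18 19 -1 -1 -1 -1 -1 -1 -1 -1  6  7]
</code></pre>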
|
python-3.x|pandas|numpy|dataframe|data-science
| 3
|
8,411
| 47,984,941
|
python count occurrences in csv with pandas
|
<p>I'm new to Python and I'm trying to work on a small project and got a little confused.</p>
<p>I have 2 csv files that looks like this:</p>
<p>all_cars:</p>
<pre><code>first_Car,second_car
Mazda, Skoda
Ferrari, Volkswagen
Volkswagen, Toyota
BMW, Ferrari
BMW, Mercedes
</code></pre>
<p>super_cars:</p>
<pre><code>super_car_name
Ferrari
BMW
Mercedes
</code></pre>
<p>What I'm basically trying to do is just count how many times a car from file 2 is represented in file 1. If a car is represented only in file 1 and not in file 2, I don't want it.</p>
<p>What I'm trying to do, based on my example files, is:</p>
<pre><code>Ferrari : 2
BMW : 2
Mercedes : 1
</code></pre>
|
<p>I'd do it this way:</p>
<pre><code>In [220]: d1.stack().value_counts().to_frame('car').loc[d2.super_car_name]
Out[220]:
car
Ferrari 2
BMW 2
Mercedes 1
</code></pre>
<p>where <code>d1</code> and <code>d2</code> are your source DataFrames (which can be easily parsed from the CSV files using the <code>pd.read_csv()</code> method):</p>
<pre><code>In [218]: d1
Out[218]:
first_Car second_car
0 Mazda Skoda
1 Ferrari Volkswagen
2 Volkswagen Toyota
3 BMW Ferrari
4 BMW Mercedes
In [219]: d2
Out[219]:
super_car_name
0 Ferrari
1 BMW
2 Mercedes
</code></pre>
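<p>A quick sketch of how <code>d1</code> and <code>d2</code> could be loaded from the two files (the file names here are assumptions):</p>
<pre><code>import pandas as pd

d1 = pd.read_csv('all_cars.csv', skipinitialspace=True)  # strips the space after each comma
d2 = pd.read_csv('super_cars.csv')
</code></pre>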
|
python|pandas
| 2
|
8,412
| 58,811,975
|
How to find a specific value from a pandas Data Frame
|
<p>I would like to know how to find a specific value in a DataFrame. I have the value <strong>?</strong> spread across my data frame and it's time-consuming to check every column. Is there an easy way to get the names of the columns that contain that specific value? For example, I have <strong>?</strong> spread across my <strong>car</strong> database.</p>
<p>I can do this easily via column as below:</p>
<pre><code>df_car['bhp'].where(df_car['bhp']=?) //something like this
</code></pre>
<p>Is there an easy way to fetch all <strong>?</strong> and <strong>0</strong> values and then replace them?</p>
<p>Thanks,</p>
|
<p>Do you mean to find all columns with <code>?</code> or <code>0</code> in them? </p>
<p>Then you can use <code>df.isin()</code>; it tests whether given values are present in <code>df</code>.</p>
<p>This example shows how to find the columns of <code>df</code> that include the values <code>"italy"</code> and <code>1</code>. </p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> df = pd.DataFrame({'location': ['canada', 'canada', 'italy', 'italy'], 'item': ['coke', 'coke', 'pepsi', 'coke'], 'weight': [1, 1, 2, 1]})
location item weight
0 canada coke 1
1 canada coke 1
2 italy pepsi 2
3 italy coke 1
>>> res = df[df.isin([1, "italy"])].notna().any()
>>> columns = res[res].index.tolist()
>>> print(columns)
['location', 'weight']
</code></pre>
<p>Output is <code>['location', 'weight']</code> because <code>"italy"</code> in column <code>location</code>, and <code>1</code> in column <code>weight</code></p>
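<p>For the follow-up part of the question (replacing every <code>?</code> and <code>0</code> once found), a minimal sketch using <code>DataFrame.replace</code>, here swapping them for <code>NaN</code>:</p>
<pre><code># Replace every '?' and 0 anywhere in the frame with NaN
import numpy as np
df_clean = df.replace(['?', 0], np.nan)
</code></pre>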
|
python-3.x|pandas|numpy|dataframe
| 0
|
8,413
| 58,655,574
|
How to read a file .txt containing an array in it?
|
<p>I want to read data from a file into a <strong>DataFrame</strong>, but the file has a special format. It contains many lines like this: </p>
<p><code>year = [1, 2, 3]</code></p>
<p><code>age = [4, 5, 6]</code></p>
<p>And this is the link to the special file: <a href="https://github.com/cuongpiger/Py-for-ML-DS-DV/blob/master/Matplotlib/Chap6_data/dulieu_year_gap_pop_life.txt" rel="nofollow noreferrer">https://github.com/cuongpiger/Py-for-ML-DS-DV/blob/master/Matplotlib/Chap6_data/dulieu_year_gap_pop_life.txt</a></p>
|
<p>If you need all values in a <code>DataFrame</code>, create a dictionary of Series and pass it to the <code>DataFrame</code> constructor, using <code>ast.literal_eval</code> to parse the lists:</p>
<pre><code>import ast
import pandas as pd

d = {}
with open('dulieu_year_gap_pop_life.txt') as file:
    splitted = file.readlines()

for x in splitted:
    h, data = x.strip().split(' = ')
    d[h] = pd.Series(ast.literal_eval(data))

df = pd.DataFrame(d)
print (df)
year pop gdp_cap life_exp life_exp1950
0 1950 2.53 974.580338 43.828 28.80
1 1951 2.57 5937.029526 76.423 55.23
2 1952 2.62 6223.367465 72.301 43.08
3 1953 2.67 4797.231267 42.731 30.02
4 1954 2.71 12779.379640 75.320 62.48
.. ... ... ... ... ...
146 2096 10.81 NaN NaN NaN
147 2097 10.82 NaN NaN NaN
148 2098 10.83 NaN NaN NaN
149 2099 10.84 NaN NaN NaN
150 2100 10.85 NaN NaN NaN
[151 rows x 5 columns]
</code></pre>
<p>For only 2 columns use:</p>
<pre><code>df = pd.DataFrame(d, columns=['year','pop'])
print (df)
year pop
0 1950 2.53
1 1951 2.57
2 1952 2.62
3 1953 2.67
4 1954 2.71
.. ... ...
146 2096 10.81
147 2097 10.82
148 2098 10.83
149 2099 10.84
150 2100 10.85
[151 rows x 2 columns]
</code></pre>
|
python|pandas
| 3
|
8,414
| 58,909,689
|
Compare a column in one dataframe with two other columns in a different dataframe?
|
<p>I have created two data frames from two tsv files. The data frames are as follows:</p>
<pre><code>Dataframe1 (df1)
chr position
5 745
7 963
8 1024
Dataframe2 (df2)
chr start end
1 10 100
1 500 600
5 250 600
5 784 1045
7 98 980
7 11 85
8 450 1000
8 1546 1886
12 63 1400
</code></pre>
<p>Now, I want to create a new column of df1 which will give 'True' if for the same <code>chr</code> the position falls within the <code>start</code> and <code>end</code> (of df2). I am using the following code:</p>
<pre><code>df1['Valid'] = np.where((df1['chr'] == df2['chr']) & (df1['position'] >= df2['start']) & (df1['position'] <= df2['end']),'True','False')
</code></pre>
<p>This is not working and gives the error message "ValueError: Can only compare identically-labeled Series objects". How can I do this?</p>
<p>Expected output is:</p>
<pre><code>Dataframe1 (df1)
chr position Valid
5 745 False
7 963 True
8 1024 False
</code></pre>
|
<p>Merge the dataframes, evaluate, then drop the unused columns.</p>
<pre><code>>>> (df1
.merge(df2, on='chr', how='left')
.assign(Valid=lambda df: df.eval('start <= position <= end'))
.drop(columns=['start', 'end'])
)
chr position Valid
0 5 745 False
1 7 963 True
2 8 1024 False
</code></pre>
<p>In the case of multiple <code>chr</code> values in <code>df2</code>, merge the <code>position</code> onto <code>df2</code>, evaluate each, and then group on <code>chr</code> and determine if any position is valid. Assign the result back to <code>df1</code>:</p>
<pre><code>valid = (
df2
.merge(df1, on='chr', how='right')
.assign(Valid=lambda df: df.eval('start <= position <= end'))
.groupby('chr')['Valid'].any()
)
>>> df1.merge(valid, left_on='chr', right_index=True)
chr position Valid
0 5 745 False
1 7 963 True
2 8 1024 False
</code></pre>
|
python|pandas
| 2
|
8,415
| 70,240,935
|
Filling Missing Values Based on String Condition
|
<p>I'm trying to write a function to impute some null values from a Numeric column based on string conditions from a Text column.</p>
<p>My attempt example:</p>
<pre><code>def fill_nulls(string, val):
    if df['TextColumn'].str.contains(string) == True:
        df['NumericColumn'] = df['NumericColumn'].fillna(value=val)
</code></pre>
<p>The 'string' and 'val' parameters are manually entered. I tried applying the function to my numeric column, but it gives me this error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I tried to find examples that I could tweak for my situation, but they all involved using 'groupby' to get the average numeric values relating to the discrete string values that had only a handful of unique values. Basically, only exact wording could be imputed, whereas I'm trying to generalize my string filtering by using partial strings and <strong>imputing the null values in the numeric column</strong> based on the <strong>resulting rows of the text columns</strong>.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>m = df['TextColumn'].str.contains(string)
df.loc[m, 'NumericColumn'] = df.loc[m, 'NumericColumn'].fillna(value=val)
</code></pre>
<p>Or chain conditions by <code>&</code> for bitwise <code>AND</code> for test missing values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a> and assign value in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>m1 = df['TextColumn'].str.contains(string)
m2 = df['NumericColumn'].isna()
df.loc[m1 & m2, 'NumericColumn'] = val
</code></pre>
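<p>A minimal sketch of the same idea wrapped back into the questioner's function form (the column names are taken from the question):</p>
<pre><code>def fill_nulls(string, val):
    # rows whose text matches AND whose numeric value is missing
    m = df['TextColumn'].str.contains(string) & df['NumericColumn'].isna()
    df.loc[m, 'NumericColumn'] = val
</code></pre>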
|
python|pandas|dataframe|fillna
| 1
|
8,416
| 70,061,018
|
'loss nan' in time-series classification
|
<p>I have a transformer model almost exactly the same as in the Keras example code for time series data. As practice I'm running a classification on stock data via the transformer, targeting a simple {0,1} separation as the result. The problem is that all I ever get is a loss of <code>nan</code> without any accuracy improvement. Please see my model:</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers


def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
    # Attention and Normalization
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(inputs, inputs)
    x = layers.Dropout(dropout)(x)
    x = layers.LayerNormalization(epsilon=1e-6)(x)
    res = x + inputs

    # Feed Forward Part
    x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(res)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    x = layers.LayerNormalization(epsilon=1e-6)(x)
    return x + res


def build_model(
    input_shape,
    head_size,
    num_heads,
    ff_dim,
    num_transformer_blocks,
    mlp_units,
    dropout=0,
    mlp_dropout=0,
    n_classes=0,
):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_transformer_blocks):
        x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout)

    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)


model = build_model(
    training_input_data[0].shape,
    head_size=10,
    num_heads=4,
    ff_dim=2,
    num_transformer_blocks=8,
    mlp_units=[128],
    mlp_dropout=0.4,
    dropout=0.25,
    n_classes=num_class
)
</code></pre>
<p>Below shows how I compile and run to fit the model:</p>
<pre><code>model.compile(
    loss="categorical_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    metrics=["binary_accuracy"],
)

callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)]

model.fit(
    training_input_data,
    training_output_data,
    epochs=EOPCHS,
    batch_size=32,
    callbacks=callbacks,
    validation_split=0.1
)
</code></pre>
<p>Please note that all the training data is normalized, as in the example below (a small slice of the whole dataset). The 'date' column is actually popped off before turning the data into a numpy ndarray.
<a href="https://i.stack.imgur.com/Foalo.jpg" rel="nofollow noreferrer">train data screenshot</a></p>
<p>The result of training is like:</p>
<pre><code>Epoch 1/200
1049/1049 [==============================] - 22s 17ms/step - loss: nan - binary_accuracy: 0.5000 - val_loss: nan - val_binary_accuracy: 0.5000
Epoch 2/200
1049/1049 [==============================] - 17s 16ms/step - loss: nan - binary_accuracy: 0.5000 - val_loss: nan - val_binary_accuracy: 0.5000
Epoch 3/200
1049/1049 [==============================] - 17s 16ms/step - loss: nan - binary_accuracy: 0.5000 - val_loss: nan - val_binary_accuracy: 0.5000
Epoch 4/200
1049/1049 [==============================] - 17s 16ms/step - loss: nan - binary_accuracy: 0.5000 - val_loss: nan - val_binary_accuracy: 0.5000
Epoch 5/200
1049/1049 [==============================] - 17s 16ms/step - loss: nan - binary_accuracy: 0.5000 - val_loss: nan - val_binary_accuracy: 0.5000
Epoch 6/200
59/1049 [>.............................] - ETA: 16s - loss: nan - binary_accuracy: 0.5000
</code></pre>
<p>It seems that categorical_crossentropy loss should be OK for something as simple as a 0-1 classification, outputting from a final softmax layer. The model seems to have learned nothing at all: the accuracy is stuck at 0.5 all the way through for a 0-1 job.</p>
<p>Any ideas?</p>
|
<p>Without having your data it is just a guess. You say that you want to make a <strong>binary prediction</strong> but you use "categorical_crossentropy" as your loss function, which is normally used for multiple classes. You can have a look here at how and when to use different loss and activation functions: <a href="https://gombru.github.io/2018/05/23/cross_entropy_loss/" rel="nofollow noreferrer">Click</a> or <a href="https://towardsdatascience.com/understanding-binary-cross-entropy-log-loss-a-visual-explanation-a3ac6025181a" rel="nofollow noreferrer">Click2</a>.</p>
<p>So the solution for you might be:</p>
<pre><code>model.compile(
    loss="binary_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    metrics=["binary_accuracy"],
)
</code></pre>
<p>Another thing you might look into is your activation function. You could try to use the sigmoid function instead of the softmax function, to get prediction values between 0 and 1.</p>
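<p>A minimal sketch of both suggested changes applied together (a single sigmoid output unit plus binary cross-entropy), assuming the rest of <code>build_model</code> from the question up to <code>x</code> and <code>inputs</code>, with a single 0/1 target column instead of one-hot labels:</p>
<pre><code># Single sigmoid unit for the binary output
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(
    loss="binary_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    metrics=["binary_accuracy"],
)
</code></pre>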
|
tensorflow|machine-learning|keras
| 0
|
8,417
| 56,130,164
|
How to incorporate elevation into euclidean distance matrix in pandas?
|
<p>I have the following <code>dataframe</code> in pandas:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
"CityId": {
"0": 0,
"1": 1,
"2": 2,
"3": 3,
"4": 4
},
"X": {
"0": 316.83673906150904,
"1": 4377.40597216624,
"2": 3454.15819771172,
"3": 4688.099297634771,
"4": 1010.6969517482901
},
"elevation_meters": {
"0": 1,
"1": 2,
"2": 3,
"3": 4,
"4": 5
},
"Y": {
"0": 2202.34070733524,
"1": 336.602082171235,
"2": 2820.0530112481106,
"3": 2935.89805580997,
"4": 3236.75098902635
}
})
</code></pre>
<p>I am trying to create a distance matrix that represents the cost of moving between each of these <code>CityIds</code>. Using <code>pdist</code> and <code>squareform</code> from <code>scipy.spatial.distance</code> I can do the following:</p>
<pre><code>from scipy.spatial.distance import pdist, squareform
df_m = pd.DataFrame(
    squareform(
        pdist(
            df[['CityId', 'X', 'Y']].iloc[:, 1:],
            metric='euclidean')
    ),
    index=df.CityId.unique(),
    columns=df.CityId.unique()
)
</code></pre>
<p>This gives me a distance matrix between all the <code>CityIds</code> using pairwise distances calculated from <code>pdist</code>. </p>
<p>I would like to incorporate <code>elevation_meters</code> into the this distance matrix. What is an efficient way to do so?</p>
|
<p>You can try <code>scipy.spatial.distance_matrix</code>:</p>
<pre><code>from scipy.spatial import distance_matrix

xx = df[['X', 'elevation_meters', 'Y']]
pd.DataFrame(distance_matrix(xx, xx), columns=df['CityId'],
             index=df['CityId'])
</code></pre>
<p>Output:</p>
<pre><code>CityId 0 1 2 3 4
CityId
0 0.000000 4468.691544 3197.555070 4432.386687 1245.577226
1 4468.691544 0.000000 2649.512402 2617.799439 4443.602402
2 3197.555070 2649.512402 0.000000 1239.367465 2478.738402
3 4432.386687 2617.799439 1239.367465 0.000000 3689.688537
4 1245.577226 4443.602402 2478.738402 3689.688537 0.000000
</code></pre>
|
python|pandas|matrix|euclidean-distance|altitude
| 1
|
8,418
| 56,184,013
|
Tensorflow Lite GPU support for python
|
<p>Anyone know if Tensorflow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything about Python. If <code>tensorflow-gpu</code> is installed and <code>tensorflow.lite.python.interpreter</code> is imported, will GPU be used automatically?</p>
|
<p>According to <a href="https://github.com/tensorflow/tensorflow/issues/31377" rel="nofollow noreferrer">this</a> thread, it is not.</p>
|
tensorflow|machine-learning|tensorflow-lite
| 3
|
8,419
| 56,012,137
|
Trying to optimize parameters in the Lugre Dynamic Friction model
|
<p>I have data collected in a CSV of every output of the friction model. The model imagines the contact between two surfaces as one-dimensional bristles that, when deflected, react like springs. The force of friction is modeled as:</p>
<pre><code>FL(V,Z) = sig0*Z +sig1*DZ/Dt +sig2*V
</code></pre>
<p>where V is the Velocity of the Surface Z is the deflection of the Bristles And DZ/Dt is the rate of deflection and is equal to:</p>
<pre><code>DZ/Dt = V + abs(V)*Z/(Fc + (Fs-Fc)*exp(-(V^2/Vs^2)))
      = V + abs(V)*Z/G(V)
      = V + H(V)*Z
</code></pre>
<p>Where Fc is the friction of the object in motion (a constant), Fs is the force required to get the object into motion (a constant > Fc), and Vs is the speed required to transition between the domains (a constant I've experimentally derived). The velocity and position of the block are provided in the CSV, as well as the force of friction, all with respect to time. I have also created an easily integrable (trigonometric) approximation of the velocity as a function of time.</p>
<p>On to the problem: the code throws a fit with the way I'm trying to pass lists in to the functions (I think).</p>
<p>The function that passes the parameters SEEMS to work (it was taken from a different file that simply plots the data); however, here I've tried to numerically integrate DZ/Dt and fit the sig parameters to the imported friction data.</p>
<p>What I imported</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from scipy import optimize
import pylab as pp
from math import sin, pi, exp, fabs, pow
</code></pre>
<p>Parameters</p>
<pre><code>Fc=2.7 #N
Fs=8.2 #N
Vs=.34 #mm/s
</code></pre>
<p>Initial_conditions</p>
<pre><code>ITime=Time[0]
Iz=[0,0,0]
</code></pre>
<p>Building the friction model</p>
<pre><code>def velocity(time):
    V = -13/2*1/60*pi*sin(1/60*pi*time+pi)
    return V

def g(v, vs, fc, fs, sig0):
    G = (1/sig0)*(fc+(fs-fc)*exp(-pow(v,2)/pow(vs,2)))
    return G

def h(v, vg):
    H = fabs(v)/vg
    return H

def findz(z, time, sig):
    Vx = velocity(time)
    VG = g(Vx, Vs, Fc, Fs, sig)
    HVx = h(Vx, VG)
    dzdt = Vx + HVx*z
    return dzdt

def friction(time, sig, iz):
    dz = lambda z, time: findz(z, time, sig)
    z = odeint(dz, iz, time)
    return sig[0]*z + sig[1]*findz(z, time, sig[0]) + sig[2]*velocity(Time)
</code></pre>
<p>This should return the difference between the constructed function and the data, and yield a list containing the optimized parameters:</p>
<pre><code>def residual(sig):
    return Ff - friction(Time, sig, Iz)

SigG = [4, 20, 1]
SigVal = optimize.leastsq(residual, SigG)
print "parameter values are ", SigVal
</code></pre>
<p>This returns</p>
<pre><code>line 56, in velocity
V=-13/2*1/60*pi*sin(1/60*pi*time+pi)
TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>Is this to do with the fact that I am passing lists?</p>
|
<p>As I mentioned in my comment, <code>velocity()</code> is the cause of the error, most probably because it expects a single time value, whereas you pass a whole list/array (with multiple values) to it when you call it in <code>friction()</code>.</p>
<p>Using some chosen values, after shortening your code and passing <code>ITime</code> instead of <code>Time</code>, the code runs correctly, but it is left to you to judge whether this is analytically what you wanted to achieve. Below is my code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy import optimize
from scipy.integrate import odeint
from math import sin, pi, exp, fabs
# Parameters
Fc = 2.7 #N
Fs = 8.2 #N
Vs = .34 #mm/s
# define test values for Ff and Time
Ff = np.array([100, 50, 50])
Time = np.array([10, 20, 30])
# Initial_conditions
ITime = Time[0]
Iz = np.array([0, 0, 0])
# Building the friction model
V = lambda t: (-13 / 2) * ( 1 / (60 * pi * sin(1 / 60 * pi * t + pi)))
G = lambda v, vs, fc, fs, sig0: (1 / sig0) * (fc + (fs - fc) * exp(-v**2 / vs**2))
H = lambda v, vg: fabs(v) / vg
dzdt = lambda z, t, sig: V(t) + H(V(t), G(V(t), Vs, Fc, Fs, sig)) * z
def friction(t, sig, iz):
    dz = lambda z, t: dzdt(z, t, sig)
    z = odeint(dz, iz, t)
    return sig[0]*z + sig[1]*dzdt(z, t, sig[0]) + sig[2]*V(t)

# Should return the difference between the constructed function and the data
# and yield a list containing the optimized parameters
def residual(sig):
    return Ff - friction(ITime, sig, Iz)[0]
SigG = np.array([4, 20, 1])
SigVal = optimize.leastsq(residual, SigG, full_output = False)
print("parameter values are ", SigVal )
</code></pre>
<p>Output: </p>
<pre><code>parameter values are (array([ 4. , 3251.47271228, -2284.82881887]), 1)
</code></pre>
|
python|numpy|scipy|physics|ode
| 0
|
8,420
| 55,623,798
|
Syntax Error In Python When Trying To Refer To Range Of Columns
|
<p>I am trying to remove the last several columns from a data frame. However I get a syntax error when I do this:</p>
<p><code>db = db.drop(db.columns[[12:22]], axis = 1)</code></p>
<p>This works but it seems clumsy...</p>
<p><code>db = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)</code></p>
<p>How do I refer to a range of columns?</p>
|
<p>In the first example, the inner <code>[12:22]</code> is a "slice" of nothing. It's not a meaningful statement, so, as you say, it gives a syntax error. It seems that what you want is a list containing the numbers 12 through 22. You need to either write it out fully as you did, or use something like <code>range</code> to create it.</p>
<p>The simplest is <code>range</code>, which produces a sequence of consecutive integers. So you can rewrite your example like:</p>
<p><code>db = db.drop(db.columns[list(range(12, 23))], axis=1)</code></p>
<p>Though it looks like you are using some sort of library. If you want more detailed control, you need to look at the documentation of that library. It seems that <code>db.columns</code> is an object of a class that has defined an array operator. Perhaps that class's documentation shows a way of specifying ranges in a way other than a list.</p>
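<p>In pandas specifically, a plain slice of <code>db.columns</code> (single brackets, no inner list) is one such shorthand; a small sketch:</p>
<pre><code>db = db.drop(db.columns[12:23], axis=1)
</code></pre>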
|
python|pandas
| 1
|
8,421
| 55,912,900
|
How can I implement this type of search in Pandas?
|
<p>Assuming I have a dataframe with a lot of names like:</p>
<pre><code>[[jack,rose,mike],
[mike,jack,lee],
[jeff,jack,alex]]
</code></pre>
<p>What I need is a function such that when I input "jack", the returned dataframe is like:</p>
<pre><code>[[1,0,0],
[0,1,0],
[0,1,0]]
</code></pre>
<p>Is there any method in Pandas that fits my requirement?</p>
|
<p>You can directly compare DataFrame items to a scalar:</p>
<pre><code>(df == 'jack').astype(int)
</code></pre>
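<p>For reference, a runnable version of that comparison using the frame from the question:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([['jack', 'rose', 'mike'],
                   ['mike', 'jack', 'lee'],
                   ['jeff', 'jack', 'alex']])

print((df == 'jack').astype(int))
#    0  1  2
# 0  1  0  0
# 1  0  1  0
# 2  0  1  0
</code></pre>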
|
python|pandas
| 0
|
8,422
| 55,845,445
|
Pandas add increment to timestamp to break ties, preserving original order
|
<p>I have a dataframe of the format </p>
<pre><code>import pandas

df = pandas.DataFrame([{'tstamp':'2019-03-06 06:42:13.582500', 'value' : 1},
{'tstamp':'2019-03-06 06:43:28.937400', 'value': 2},
{'tstamp':'2019-03-06 06:43:28.937400', 'value' : -1},
{'tstamp':'2019-03-06 06:43:28.937400', 'value' : 2},
{'tstamp':'2019-03-06 06:43:28.937400', 'value' : -4},
{'tstamp':'2019-03-06 06:43:37.237500', 'value' : 1},
{'tstamp':'2019-03-06 06:43:37.237500', 'value' : 1},
{'tstamp':'2019-03-06 06:43:37.237500', 'value' : 1},
{'tstamp':'2019-03-06 06:47:25.470300', 'value' : 3},
{'tstamp':'2019-03-06 06:47:54.791500', 'value' : 4},
{'tstamp':'2019-03-06 06:49:11.971600', 'value' : 5},
{'tstamp':'2019-03-06 06:49:11.971600', 'value' : 2},
{'tstamp':'2019-03-06 06:49:33.285500', 'value' : 1},
{'tstamp':'2019-03-06 06:49:42.414700', 'value' : 10},
{'tstamp':'2019-03-06 06:49:55.300300', 'value' : 11},
{'tstamp':'2019-03-06 06:49:55.300300', 'value' : 9},
{'tstamp':'2019-03-06 06:52:03.992600', 'value' : -1},
{'tstamp':'2019-03-06 06:52:03.992600', 'value' : 2}])
</code></pre>
<p>Some of the index timestamps have ties in them. </p>
<p>My question is: How can I efficiently add just enough timedelta to the index of the rows with a tie, to break the ties in index whilst preserving the order of the data?</p>
<p>@jezrael:</p>
<p>I need to create a new 'tstamp' column, let's call it 'tstamp2', that satisfies these conditions:</p>
<ul>
<li><code>(df.sort_values('tstamp2').index == df.sort_values('tstamp').index).all()</code> be True,</li>
<li><code>df.tstamp2.duplicated().any()</code> be False,</li>
<li><code>(df[~df.tstamp.duplicated()].tstamp == df[~df.tstamp.duplicated()].tstamp2).all()</code> be True,</li>
</ul>
|
<p>If a conversion of <code>'tstamp'</code> to <code>np.datetime</code> format is ok, then this should work:</p>
<pre><code>df['tstamp2'] = pandas.to_datetime(df.tstamp)
df['tstamp2'] += pandas.to_timedelta(df.groupby(df.tstamp2).cumcount(), unit='ns')
# Condition 1:
# Out: True
# Condition 2:
# Out: False
# Condition 3:
# Out: True
</code></pre>
<p>Assuming "just enough timedelta" is a nanosecond (<code>unit='ns'</code>).</p>
<p>If you want to preserve <code>'tstamp'</code> as strings, your task can be achieved like this:</p>
<pre><code>df['tstamp2'] = df.tstamp + df.groupby(df.tstamp).cumcount().astype(str)
# Condition 1:
# Out: True
# Condition 2:
# Out: False
# Condition 3:
# Out: True
</code></pre>
<p>Both methods satisfy all three conditions.</p>
|
pandas|timedelta
| 1
|
8,423
| 39,668,665
|
Format a table that was added to a plot using pandas.DataFrame.plot
|
<p>I'm producing a bar graph with a table using pandas.DataFrame.plot.</p>
<p>Is there a way to format the table size and/or font size in the table to make it more readable?</p>
<p>My DataFrame (dfex):</p>
<pre><code>City State Waterfalls Lakes Rivers
LA CA 2 3 1
SF CA 4 9 0
Dallas TX 5 6 0
</code></pre>
<p>Create a bar chart with a table:</p>
<pre><code>myplot = dfex.plot(x=['City','State'],kind='bar',stacked='True',table=True)
myplot.axes.get_xaxis().set_visible(False)
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/qNGUx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNGUx.png" alt="enter image description here"></a></p>
|
<p>Here is an answer.</p>
<pre><code>import pandas as pd

# Test data
dfex = pd.DataFrame({'City': ['LA', 'SF', 'Dallas'],
                     'Lakes': [3, 9, 6],
                     'Rivers': [1, 0, 0],
                     'State': ['CA', 'CA', 'TX'],
                     'Waterfalls': [2, 4, 5]})
myplot = dfex.plot(x=['City','State'],kind='bar',stacked='True',table=True)
myplot.axes.get_xaxis().set_visible(False)
# Getting the table created by pandas and matplotlib
table = myplot.tables[0]
# Setting the font size
table.set_fontsize(12)
# Rescaling the rows to be more readable
table.scale(1,2)
</code></pre>
<p><a href="https://i.stack.imgur.com/gigvi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gigvi.png" alt="enter image description here"></a></p>
<p>Note: Check also <a href="https://stackoverflow.com/questions/15514005/how-to-change-the-tables-fontsize-with-matplotlib-pyplot">this</a> answer for more information. </p>
|
python|pandas|matplotlib
| 5
|
8,424
| 39,483,546
|
get_dummies split character
|
<p>I have labelled data to which I need to apply one-hot encoding: <code>'786.2'</code>, <code>'ICD-9-CM|786.2'</code>, <code>'ICD-9-CM'</code>, <code>'786.2b|V13.02'</code>, <code>'V13.02'</code>, <code>'279.12'</code>, <code>'ICD-9-CM|V42.81'</code> are the labels. The <code>|</code> means that the document has 2 labels at the same time. So I wrote code like this:</p>
<pre><code>labels = np.asarray(label_docs)
labels = np.array([u'786.2', u'ICD-9-CM|786.2', u'|ICD-9-CM', u'786.2b|V13.02', u'V13.02', u'279.12', u'ICD-9-CM|V42.81|'])
df = pd.DataFrame(labels, columns=['label'])
labels = df['label'].str.get_dummies(sep='|')
</code></pre>
<p>and the result: </p>
<pre><code>279.12 786.2 786.2b ICD-9-CM V13.02 V42.81
0 0 1 0 0 0 0
1 0 1 0 1 0 0
2 0 0 0 1 0 0
3 0 0 1 0 1 0
4 0 0 0 0 1 0
5 1 0 0 0 0 0
6 0 0 0 1 0 1
</code></pre>
<p>However, now I only want 1 label for each document: </p>
<p><code>'ICD-9-CM|786.2'</code> is <code>'ICD-9-CM'</code>,</p>
<p><code>'ICD-9-CM|V42.81|'</code> is <code>'ICD-9-CM'</code>.</p>
<p>How could I do this kind of separation with <code>get_dummies</code>?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>str.strip</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a> and then select first item of list by <code>str[0]</code>:</p>
<pre><code>print (df.label.str.strip('|').str.split('|').str[0])
0 786.2
1 ICD-9-CM
2 ICD-9-CM
3 786.2b
4 V13.02
5 279.12
6 ICD-9-CM
Name: label, dtype: object
labels = df.label.str.strip('|').str.split('|').str[0].str.get_dummies()
print (labels)
279.12 786.2 786.2b ICD-9-CM V13.02
0 0 1 0 0 0
1 0 0 0 1 0
2 0 0 0 1 0
3 0 0 1 0 0
4 0 0 0 0 1
5 1 0 0 0 0
6 0 0 0 1 0
</code></pre>
<p>If the row with index <code>2</code> should get no value, remove <code>str.strip</code>:</p>
<pre><code>print (df.label.str.split('|').str[0])
0 786.2
1 ICD-9-CM
2
3 786.2b
4 V13.02
5 279.12
6 ICD-9-CM
Name: label, dtype: object
labels = df.label.str.split('|').str[0].str.get_dummies(sep='|')
print (labels)
279.12 786.2 786.2b ICD-9-CM V13.02
0 0 1 0 0 0
1 0 0 0 1 0
2 0 0 0 0 0
3 0 0 1 0 0
4 0 0 0 0 1
5 1 0 0 0 0
6 0 0 0 1 0
</code></pre>
|
python|pandas|one-hot-encoding
| 4
|
8,425
| 44,053,388
|
Fast splitting of array on column indices from each row of sparse array
|
<p>Let's say I have a sparse array and a dense array that has the same number of columns but fewer rows:</p>
<pre><code>from scipy.sparse import csr_matrix
import numpy as np
sp_arr = csr_matrix(np.array([[1,0,0,0,1],[0,0,1,0,0],[0,1,0,0,1],[0,0,0,1,1],[0,0,0,1,0]]))
arr = np.random.rand(10).reshape(2,5)
print(arr)
[[ 0.47027789 0.82510323 0.01321617 0.66264852 0.3618022 ]
[ 0.80198907 0.36350616 0.10254934 0.65209401 0.094961 ]]
</code></pre>
<p>I would like to get an array containing all of the submatrices for the indices that contain values for each row of the sparse array. For example, the indices for data in <code>sp_arr</code> are as follows:</p>
<p>0: [0, 4]
1: [2]
2: [1, 4]
3: [3, 4]
4: [3]</p>
<p>My output should look like this:</p>
<pre><code>array([array([[ 0.47027789, 0.3618022 ],
[ 0.80198907, 0.094961 ]]),
array([[ 0.01321617],
[ 0.10254934]]),
array([[ 0.82510323, 0.3618022 ],
[ 0.36350616, 0.094961 ]]),
array([[ 0.66264852, 0.3618022 ],
[ 0.65209401, 0.094961 ]]),
array([[ 0.66264852],
[ 0.65209401]])], dtype=object)
</code></pre>
<p>I can create this with the following code, but as the size of the arrays scale up (greatly in my case) it gets really slow.</p>
<pre><code>output = np.empty(sp_arr.shape[0], dtype=object)
for row in range(sp_arr.shape[0]):
output[row] = arr[:, sp_arr[row].indices]
</code></pre>
<p>I've thought about vectorizing the process and applying it along the axis, but <code>np.apply_along_axis</code> doesn't work with sparse matrices, and unfortunately while this example is small enough to make dense and then use <code>apply_along_axis</code> my actual sparse matrix is much too large to do so (>100Gb).</p>
<p>I had thought that perhaps there is a fancy way to index or use something like hsplit to accomplish this with already vectorized methods, but so far I haven't had any luck. Is there someway to achieve this that is just escaping me?</p>
<p><strong>Update</strong></p>
<p>Per the answer from @Divakar, which is great, I found another way to implement the same thing with the ever so slightest, and negligible, improvement.</p>
<p>@Divakar's best answer was:</p>
<pre><code>def app2(sp_arr, arr):
r,c = sp_arr.nonzero()
idx = np.flatnonzero(r[1:] > r[:-1])+1
idx0 = np.concatenate(( [0] , idx, [r.size] ))
arr_c = arr[:,c]
return [arr_c[:,i:j] for i,j in zip(idx0[:-1], idx0[1:])]
</code></pre>
<p>This increased my performance by 50-60x! But it's kind of hard to read.</p>
<p>What I discovered is that given the csr_matrix format you can use the <code>indices</code> and <code>indptr</code> attributes to your advantage here. </p>
<pre><code>def final_app():
idx = sp_arr.indptr
arr_c = arr[:, sp_arr.indices]
out = [arr_c[:, i:j] for i, j in zip(idx[:-1], idx[1:])]
return out
</code></pre>
<p>In the end the performance is statistically the same (less than 50ms improvement on a sparse matrix 276538 x 33114), but it feels easier to understand. <strong>More Importantly</strong> this approach does include the rows for which there are no values at all, whereas the previous approach does not. This may seem unimportant, but for my use case it is pretty critical.</p>
<p><strong>Update 2</strong></p>
<p>In response to @EelcoHoogendoorn. The problem is part of a parallel implementation of the alternative least squares with regularization method that I am trying to implement in python. This comes from the oft cited paper <a href="https://endymecy.gitbooks.io/spark-ml-source-analysis/content/%E6%8E%A8%E8%8D%90/papers/Large-scale%20Parallel%20Collaborative%20Filtering%20the%20Netflix%20Prize.pdf" rel="nofollow noreferrer">Large-scale Parallel Collaborative Filtering for
the Netflix Prize</a> The normal way of doing this is by having distributed copies of the Ratings, User, and Items matrices across processes. I thought it might be interesting to see what would happen if we constructed all of the item submatrices up front and just distributed those to the processes. That way the processes need only return the feature columns for either one user or one item respectively and those could be used to update the User and Item matrices. </p>
<p>The above problem was actually the bottleneck in my current implementation. And per your comment, in this situation I don't believe the transposition is critical as part of the algorithm takes the dot product of each submatrix with its transpose. </p>
|
<p>Well, there are two options - <code>np.split</code> or <code>loop comprehension</code>. In my experience, I have found the latter to be faster. But the priority must be to do minimal work inside the loop comprehension by doing as much pre-processing as possible.</p>
<p><strong>Approach #1 :</strong> First approach using <code>np.split</code> -</p>
<pre><code># Get row, col indices
r,c = sp_arr.nonzero()
# Get intervaled indices for row indices.
# We need to use these to cut the column indexed input array.
idx = np.flatnonzero(r[1:] > r[:-1])+1
out = np.split(arr[:,c], idx, axis=1)
</code></pre>
<p>Sample output -</p>
<pre><code>In [56]: [i.tolist() for i in out]
Out[56]:
[[[0.47027789, 0.3618022], [0.80198907, 0.094961]],
[[0.01321617], [0.10254934]],
[[0.82510323, 0.3618022], [0.36350616, 0.094961]],
[[0.66264852, 0.3618022], [0.65209401, 0.094961]],
[[0.66264852], [0.65209401]]]
</code></pre>
<p><strong>Approach #2 :</strong> The second one should be better on performance, and we will re-use <code>r,c,idx</code> from the previous method -</p>
<pre><code>idx0 = np.concatenate(( [0] , idx, [r.size] ))
arr_c = arr[:,c]
out = [arr_c[:,i:j] for i,j in zip(idx0[:-1], idx0[1:])]
</code></pre>
<p>See, the <code>loop-comprehension</code> simply slices the already indexed array <code>arr_c</code>. That's as minimal as one could get and as such should be good.</p>
<p><strong>Runtime test</strong></p>
<p>Approaches -</p>
<pre><code>def org_app(sp_arr, arr):
output = np.empty(sp_arr.shape[0], dtype=object)
for row in range(sp_arr.shape[0]):
output[row] = arr[:, sp_arr[row].indices]
return output
def app1(sp_arr, arr):
r,c = sp_arr.nonzero()
idx = np.flatnonzero(r[1:] > r[:-1])+1
return np.split(arr[:,c], idx, axis=1)
def app2(sp_arr, arr):
r,c = sp_arr.nonzero()
idx = np.flatnonzero(r[1:] > r[:-1])+1
idx0 = np.concatenate(( [0] , idx, [r.size] ))
arr_c = arr[:,c]
return [arr_c[:,i:j] for i,j in zip(idx0[:-1], idx0[1:])]
</code></pre>
<p>Timings -</p>
<pre><code>In [146]: sp_arr = csr_matrix((np.random.rand(100000,100)>0.8).astype(int))
...: arr = np.random.rand(10,sp_arr.shape[1])
...:
In [147]: %timeit org_app(sp_arr, arr)
...: %timeit app1(sp_arr, arr)
...: %timeit app2(sp_arr, arr)
...:
1 loops, best of 3: 5.66 s per loop
10 loops, best of 3: 146 ms per loop
10 loops, best of 3: 105 ms per loop
</code></pre>
|
python|numpy|scipy|sparse-matrix
| 1
|
8,426
| 69,374,842
|
Can't install tensorflow-macos on MacM1 (errors while installing grpcio)
|
<p>This has been a long fight trying to install tensorflow on a Mac Mini M1...
I'm using macOS Monterey (12.0 Beta).
According to the latest instructions from tensorflow/apple (<a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">https://developer.apple.com/metal/tensorflow-plugin/</a>), I'm using miniforge conda, creating a blank environment and then doing the following:</p>
<pre><code>conda install -c apple tensorflow-deps
</code></pre>
<p>Everything goes ok but then when I do the next step everything breaks:</p>
<pre><code>python -m pip install tensorflow-macos
</code></pre>
<ul>
<li><p>Tried with python3.8 with the following error (summary, not the full logs):</p>
<pre><code> distutils.errors.CompileError: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for grpcio
</code></pre>
</li>
<li><p>Tried with python3.9 with the following error (summary, not the full logs):</p>
<pre><code> distutils.errors.CompileError: command '/usr/bin/clang' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for grpcio
</code></pre>
</li>
<li><p>Tried with force reinstall and no-cache-dir <code>(python -m pip install tensorflow-macos --no-cache-dir --force-reinstall)</code> with the following error :</p>
<p>ERROR: Command errored out with exit status 1: /Users/machine/miniforge3/envs/tf38/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-install-djre1j5j/numpy_48546adcbc9d4c558a4dc32a8e607649/setup.py'"'"'; <strong>file</strong>='"'"'/private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-install-djre1j5j/numpy_48546adcbc9d4c558a4dc32a8e607649/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(<strong>file</strong>) if os.path.exists(<strong>file</strong>) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, <strong>file</strong>, '"'"'exec'"'"'))' install --record /private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-record-343ln54c/install-record.txt --single-version-externally-managed --prefix /private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-build-env-1fyu7c9t/normal --compile --install-headers /private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-build-env-1fyu7c9t/normal/include/python3.8/numpy Check the logs for full command output.
----------------------------------------
WARNING: Discarding <a href="https://files.pythonhosted.org/packages/a7/81/20d5d994c91ed8347efda90d32c396ea28254fd8eb9e071e28ee5700ffd5/h5py-3.1.0.tar.gz#sha256=1e2516f190652beedcb8c7acfa1c6fa92d99b42331cbef5e5c7ec2d65b0fc3c2" rel="nofollow noreferrer">https://files.pythonhosted.org/packages/a7/81/20d5d994c91ed8347efda90d32c396ea28254fd8eb9e071e28ee5700ffd5/h5py-3.1.0.tar.gz#sha256=1e2516f190652beedcb8c7acfa1c6fa92d99b42331cbef5e5c7ec2d65b0fc3c2</a> (from <a href="https://pypi.org/simple/h5py/" rel="nofollow noreferrer">https://pypi.org/simple/h5py/</a>) (requires-python:>=3.6). Command errored out with exit status 1: /Users/machine/miniforge3/envs/tf38/bin/python /private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-standalone-pip-nmsgrvml/<strong>env_pip</strong>.zip/pip install --ignore-installed --no-user --prefix /private/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/pip-build-env-1fyu7c9t/normal --no-warn-script-location --no-binary :none: --only-binary :none: -i <a href="https://pypi.org/simple" rel="nofollow noreferrer">https://pypi.org/simple</a> -- 'numpy==1.12; python_version == "3.6"' 'Cython>=0.29; python_version < "3.8"' 'numpy==1.14.5; python_version == "3.7"' 'numpy==1.19.3; python_version >= "3.9"' 'numpy==1.17.5; python_version == "3.8"' pkgconfig 'Cython>=0.29.14; python_version >= "3.8"' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement h5py~=3.1.0 (from tensorflow-macos) (from versions: 2.2.1, 2.3.0b1, 2.3.0, 2.3.1, 2.4.0b1, 2.4.0, 2.5.0, 2.6.0, 2.7.0rc2, 2.7.0, 2.7.1, 2.8.0rc1, 2.8.0, 2.9.0rc1, 2.9.0, 2.10.0, 3.0.0rc1, 3.0.0, 3.1.0, 3.2.0, 3.2.1, 3.3.0, 3.4.0)
ERROR: No matching distribution found for h5py~=3.1.0</p>
</li>
</ul>
|
<p>Try running the following command. Setting <code>SYSTEM_VERSION_COMPAT=0</code> should make pip see the real macOS version (instead of the 10.16 compatibility value), so prebuilt arm64 wheels for dependencies such as <code>grpcio</code> and <code>h5py</code> can be found instead of being built from source:</p>
<pre><code>SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-macos
</code></pre>
<p>For a short tutorial on the installation, <a href="https://medium.com/@aaparikh_/setting-up-apple-silicon-devices-to-allow-tensorflow-use-native-gpu-for-data-science-60a355c7d008" rel="nofollow noreferrer">refer to this</a>.</p>
|
macos|tensorflow|apple-m1
| 0
|
8,427
| 69,384,357
|
Pandas dataframe on python
|
<p>I feel like this may be a really easy question but I can't figure it out. I have a data frame that looks like this:</p>
<pre><code>one two three
1 2 3
2 3 3
3 4 4
</code></pre>
<p>The third column has duplicates. If I want to keep the first row but drop the second row because there is a duplicate in row two, how would I do this?</p>
|
<p>Pandas DataFrame objects have a method for this; assuming <code>df</code> is your dataframe, <code>df.drop_duplicates(subset='name_of_third_column')</code> returns the dataframe with any rows containing duplicate values in the third column removed.</p>
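<p>For example, with a hypothetical frame built to match the one in the question:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'one': [1, 2, 3], 'two': [2, 3, 4], 'three': [3, 3, 4]})

# keep='first' is the default, so the first of each pair of duplicates survives
print(df.drop_duplicates(subset='three'))
#    one  two  three
# 0    1    2      3
# 2    3    4      4
</code></pre>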
|
python|pandas
| 1
|
8,428
| 69,557,333
|
Initializing a differentiable param in pytorch
|
<p>I'm trying to define a set of new parameters <code>B</code> in a pytorch model. I would like to initialize the new params with current weights of the model <code>W</code>.</p>
<p><strong>Question:</strong> I want these params <code>B</code> to be differentiable, but autograd should not track their history to <code>W</code> (so <code>B</code> should have a new memory with no reference to <code>W</code>). What is the correct function to use?</p>
<p>I understand <code>B = W.clone()</code> will result in autograd tracking history of B to W while differentiating. Also I understand that <code>B = W.detach().clone()</code> will not be differentiable.</p>
<hr />
<p>EDIT:</p>
<p>I believe <code>B = nn.Parameter(W.detach().clone())</code> should be the correct function. Is this correct and if yes, is this the simplest function to use?</p>
|
<p>There are several ways to do it and one of them is</p>
<pre><code>B = W.clone().detach()
</code></pre>
<p>Another elegant one that comes to mind is</p>
<pre><code>B = W.new_tensor(W, requires_grad=True)
</code></pre>
<p>which is arguably more readable, and <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html?highlight=new_tensor" rel="nofollow noreferrer">the doc says</a> <code>new_tensor(x, requires_grad=True)</code> is equivalent to <code>x.clone().detach().requires_grad_(True)</code>. (Note that <code>new_tensor</code> is a <code>Tensor</code> method, not a function in the <code>torch</code> namespace.)</p>
<p>If you want it as a parameter to <code>nn.Module</code>, of course wrap it with</p>
<pre><code>self.B = nn.Parameter(...)
</code></pre>
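<p>A quick sanity check of the <code>nn.Parameter</code> route (a minimal sketch with a stand-in weight tensor):</p>
<pre><code>import torch
import torch.nn as nn

W = torch.randn(3, 3, requires_grad=True)   # stands in for an existing model weight
B = nn.Parameter(W.detach().clone())        # new parameter, no autograd history to W

B.sum().backward()
print(B.grad is not None)  # True  -> B is differentiable
print(W.grad)              # None  -> nothing flowed back to W
</code></pre>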
|
pytorch
| 0
|
8,429
| 40,781,795
|
Complex function for getting combinations of one column with other
|
<p>I have a table in pandas df</p>
<pre><code>id_x id_y
a b
b c
a c
d a
x a
m b
c z
a k
b q
d w
a w
q v
</code></pre>
<p>How to read this table is :</p>
<p>the combinations for a is, a-b,a-c,a-k,a-w, similarly for b(b-c,b-q) and so on..
I want to write a function which takes id_x from the df <code>def test_func(id)</code></p>
<p>and check whether the occurrences of that id is greater than 3 or not, which may be done by <code>df['id_x'].value_counts</code> .</p>
<p>for eg.</p>
<pre><code>def test_func(id):
if id_count >= 3:
print 'yes'
ddf = df[df['id_x'] == id]
ddf.to_csv(id+".csv")
else:
print 'no'
while id_count <3:
# do something.(I've explained below what I have to do when count<3)
</code></pre>
<p>Say for b the occurrence is only 2(i.e b-c, and b-q) which is less than 3.</p>
<p>so in such case, look if 'c'(from id_y) has any combinations.</p>
<p>c has 1 combination(c-z) and similarly q has 1 combination(q-v)</p>
<p>thus b should be linked with z and v.</p>
<pre><code>id_x id_y
b c
b q
b z
b v
</code></pre>
<p>and store it in ddf2 like we stored the case with count >= 3.</p>
<p>Also, for each particular id, I would like a csv saved with the name of that id.
I hope I explained my question correctly; I am very new to Python and don't know how to write functions, this was my logic.</p>
<p>Can anyone help me with the implementation part.
Thanks in advance.</p>
|
<p><strong>Edited:</strong> solution redesign according to comments</p>
<pre><code>import pandas as pd
# Values in the second column directly linked to any of the given values in the first column
def direct_related(df, values, column_names=('x', 'y')):
rels = set()
for value in values:
for i, v in df[df[column_names[0]]==value][column_names[1]].iteritems():
rels.add(v)
return rels
# Expand the direct relations `recursion` more levels deep
def indirect_related(df, values, recursion=1, column_names=('x', 'y')):
rels = direct_related(df, values, column_names)
for i in range(recursion):
rels = rels.union(direct_related(df, rels, column_names))
return rels
# Return the relations of `value` (up to the given recursion depth) as a two-column DataFrame
def related(df, value, recursion=1, column_names=('x', 'y')):
rels = indirect_related(df, [value], recursion, column_names)
return pd.DataFrame(
{
column_names[0]: value,
column_names[1]: list(rels)
}
)
# Increase the recursion depth until at least `min_appearances` relations are found
def min_related(df, value, min_appearances=3, max_recursion=10, column_names=('x', 'y')):
for i in range(max_recursion + 1):
if len(indirect_related(df, [value], i, column_names)) >= min_appearances:
return related(df, value, i, column_names)
return None
df = pd.DataFrame(
{
'x': ['a', 'b', 'a', 'd', 'x', 'm', 'c', 'a', 'b', 'd', 'a', 'q'],
'y': ['b', 'c', 'c', 'a', 'a', 'b', 'z', 'k', 'q', 'w', 'w', 'v']
}
)
print(min_related(df, 'b', 3))
</code></pre>
|
python|python-2.7|python-3.x|pandas
| 0
|
8,430
| 40,813,733
|
formatting a .txt file in pandas
|
<p>I would like to take a .txt file that is in the following format: </p>
<pre><code>StateOne[edit]
RegionOne (UniversityOne)[1]
RegionTwo (UniversityTwo)
RegionThree (UniversityThree)[2]
</code></pre>
<p>and have this data be cleaned up and returned in a DataFrame of this format: </p>
<pre><code>State RegionName
0 StateOne RegionOne
1 StateOne RegionTwo
2 StateOne RegionThree
</code></pre>
<p>so for example I have: </p>
<pre><code>Alabama[edit]
Auburn (Auburn University)[1]
Florence (University of North Alabama)
Jacksonville (Jacksonville State University)[2]
</code></pre>
<p>and i need to convert this into the data frame: </p>
<pre><code> State RegionName
0 Alabama Auburn
1 Alabama Florence
2 Alabama Jacksonville
</code></pre>
<p>I'm a bit confused about how to strip everything from the <code>"["</code> onward and put the result in a <code>"State"</code> column, and likewise for <code>"RegionName"</code>, stripping everything from the <code>"("</code> onward when needed. I'm pretty new to pandas and unsure of a quick, easy way to do this.</p>
|
<p>This assumes that the state lines always have the "edit" marker with <code>[]</code> and the region lines have <code>()</code>.</p>
<p>The trick is to do a <a href="https://docs.python.org/3.3/library/stdtypes.html#str.split" rel="nofollow noreferrer">split</a> on "[" or "(" (as appropriate) and keep the first part of the string.</p>
<pre><code>string = '''Alabama[edit]
Auburn (Auburn University)[1]
Florence (University of North Alabama)
Jacksonville (Jacksonville State University)[2]'''
i = 0
print(' \t' + 'State' + '\t' + 'RegionName')
for line in string.split('\n'): # Split by the line breaks
if line == '': # We skip the line if it is empty
continue
if 'edit' in line: # We look for some "edit" and
state, spam = line.split('[') # store it in a variable
continue # When we find other
# it will replace
region_name, spam = line.split(' (')
i += 1 # The same but with '('
print(str(i) + '\t' + state + '\t' + region_name)
</code></pre>
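<p>If you want the result as a DataFrame rather than printed lines, here is a minimal sketch reusing the same parsing idea and the <code>string</code> variable defined above:</p>
<pre><code>import pandas as pd

rows = []
state = None
for line in string.split('\n'):
    if line == '':
        continue
    if 'edit' in line:                 # a state line
        state = line.split('[')[0]
        continue
    region = line.split(' (')[0]       # a region line
    rows.append({'State': state, 'RegionName': region})

df = pd.DataFrame(rows, columns=['State', 'RegionName'])
print(df)
#      State    RegionName
# 0  Alabama        Auburn
# 1  Alabama      Florence
# 2  Alabama  Jacksonville
</code></pre>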
<p>I hope it helps!</p>
|
python|csv|pandas|file-io|data-science
| 0
|
8,431
| 54,191,262
|
eig(a,b) in Python giving error "takes 1 positional argument but 2 were given"
|
<p>According to <a href="https://docs.scipy.org/doc/numpy-1.15.0/user/numpy-for-matlab-users.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.0/user/numpy-for-matlab-users.html</a>, the equivalent numpy expression for the MATLAB <code>[V,D]=eig(a,b)</code> is <code>V,D = np.linalg.eig(a,b)</code>. </p>
<p>But when I try this I get the error:</p>
<pre><code>TypeError: eig() takes 1 positional argument but 2 were given
</code></pre>
<p>I'm confused, the documentation says <code>np.linalg.eig</code> can take two arguments?</p>
<p>Curiously, when I look at the <code>linalg</code> documentation at <a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/routines.linalg.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.1/reference/routines.linalg.html</a>, under the heading 'Matrix eigenvalues' there is no mention of <code>linalg.eig</code> taking two arguments?</p>
<p>How can I get <code>eig</code> to take two arguments like in MATLAB?</p>
<h2>This works in MATLAB</h2>
<pre><code>a = diag(ones(3,1));
b = diag(2*ones(3,1));
[V,D] = eig(a,b)
</code></pre>
<p>Output:</p>
<pre><code>V =
0.7071 0 0
0 0.7071 0
0 0 0.7071
D =
0.5000 0 0
0 0.5000 0
0 0 0.5000
</code></pre>
<h2>This doesn't work in Python</h2>
<pre><code>import numpy as np
a = np.diag(np.ones(3))
b = np.diag(2*np.ones(3))
V,D = np.linalg.eig(a,b)
</code></pre>
<p>Error:</p>
<pre><code>TypeError: eig() takes 1 positional argument but 2 were given
</code></pre>
|
<p>As you saw in the docs of <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html#numpy.linalg.eig" rel="nofollow noreferrer"><code>numpy.linalg.eig</code></a>, it only accepts a single array argument and correspondingly it doesn't compute generalized eigenvalue problems.</p>
<p>Fortunately we have <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html" rel="nofollow noreferrer"><code>scipy.linalg.eig</code></a>:</p>
<pre><code>scipy.linalg.eig(a, b=None, left=False, right=True, overwrite_a=False, overwrite_b=False, check_finite=True, homogeneous_eigvals=False)
Solve an ordinary or generalized eigenvalue problem of a square matrix.
</code></pre>
<p>Here's your example case:</p>
<pre><code>import numpy as np
import scipy.linalg
a = np.diag(np.ones(3))
b = np.diag(2*np.ones(3))
eigvals,eigvects = scipy.linalg.eig(a, b)
</code></pre>
<p>Now we have</p>
<pre><code>>>> eigvals
array([0.5+0.j, 0.5+0.j, 0.5+0.j])
>>> eigvects
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
</code></pre>
<p>The difference in the eigenvectors might be due to a different choice of normalization for the eigenvectors. I'd check with two nontrivial matrices and see if the results correspond to one another (comparing corresponding eigenvalue-eigenvector pairs, of course).</p>
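<p>One way to compare results across libraries, regardless of how the eigenvectors are normalized, is to check the generalized eigenvalue relation column by column, reusing the variables from above:</p>
<pre><code>import numpy as np

# Each column v of eigvects should satisfy a @ v == eigval * (b @ v)
print(np.allclose(a @ eigvects, (b @ eigvects) * eigvals))  # True
</code></pre>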
|
python|matlab|numpy|eigenvalue|eigenvector
| 2
|
8,432
| 38,183,565
|
Remapping a pandas dataframe column using a dict
|
<p>I'm quite new to python so bear with me if this is obvious. </p>
<p>I've got a column, 'age', in a dataframe, dff, containing the values 1 to 66. Each value corresponds to a key in the dictionary, di, and I'm trying to replace the values in the column with the corresponding values from the dictionary. </p>
<p>I can do this for a single value, for example:</p>
<pre><code>dff['age'] = dff['age'].replace('1', di.get('1'))
</code></pre>
<p>But I want to do it for all 66 values. I tried this:</p>
<pre><code>i = 1
while i <= 66:
i = str(i)
dff['age'] = dff['age'].replace(i, di.get(i))
i = int(i)
i = i + 1
</code></pre>
<p>Which doesn't seem to change the values in the column at all. Any ideas?
Thanks.</p>
|
<p>I think <code>.replace()</code> will do a better job here. <code>.map()</code> fills in <code>NaN</code> if a particular match is not found. It purely depends on which is the desired output.</p>
<pre><code>dff['age'] = dff['age'].replace(di)
</code></pre>
<p>For example</p>
<pre><code>dff = pd.DataFrame(['a', 'b', 'c', 'd', 'a'], columns=['age'])
dff['age'].replace({'a': '10-25', 'b': '50-60'})
0 10-25
1 50-60
2 c
3 d
4 10-25
Name: age, dtype: object
</code></pre>
<p>Where <code>.map()</code> will introduce <code>nan</code>.</p>
<pre><code>dff['age'].map({'a': '10-25', 'b': '50-60'})
0 10-25
1 50-60
2 NaN
3 NaN
4 10-25
Name: age, dtype: object
</code></pre>
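<p>If you prefer <code>.map()</code> (often faster than <code>.replace()</code> on large frames) but want to keep unmatched values instead of <code>NaN</code>, a common pattern, reusing the <code>di</code> dictionary from the question, is:</p>
<pre><code># map known keys, keep the original value where no mapping exists
dff['age'] = dff['age'].map(di).fillna(dff['age'])
</code></pre>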
|
python|pandas|dataframe
| 0
|
8,433
| 66,235,484
|
pandas dataframe interpolate for Nans with groupby using window of discrete days of the year
|
<p>The small reproducible example below sets up a dataframe that is 100 yrs in length containing some randomly generated values. It then inserts 3 100-day stretches of missing values. Using this small example, I am attempting to sort out the pandas commands that will fill in the missing days using average values for that day of the year (hence the use of .groupby) with a condition. For example, if April 12th is missing, how can the last line of code be altered such that only the 10 nearest April 12th's are used to fill in the missing value? In other words, a missing April 12th value in 1920 would be filled in using the mean April 12th values between 1915 to 1925; a missing April 12th value in 2000 would be filled in with the mean April 12th values between 1995 to 2005, etc. I tried playing around with adding a .rolling() to the lambda function in last line of script, but was unsuccessful in my attempt.</p>
<p>Bonus question: The example below extends from 1918 to 2018. If a value is missing on April 12th 1919, for example, it would still be nice if ten April 12ths were used to fill in the missing value even though the window couldn't be 'centered' on the missing day because of its proximity to the beginning of the time series. Is there a solution to the first question above that would be flexible enough to still use a minimum of 10 values when missing values are close to the beginning and ending of the time series?</p>
<pre><code>import pandas as pd
import numpy as np
import random
# create 100 yr time series
dates = pd.date_range(start="1918-01-01", end="2018-12-31").strftime("%Y-%m-%d")
vals = [random.randrange(1, 50, 1) for i in range(len(dates))]
# Create some arbitrary gaps
vals[100:200] = vals[9962:10062] = vals[35895:35995] = [np.nan] * 100
# Create dataframe
df = pd.DataFrame(dict(
list(
zip(["Date", "vals"],
[dates, vals])
)
))
# confirm missing vals
df.iloc[95:105]
df.iloc[35890:35900]
# set a date index (for use by groupby)
df.index = pd.DatetimeIndex(df['Date'])
df['Date'] = df.index
# Need help restricting the mean to the 10 nearest same-days-of-the-year:
df['vals'] = df.groupby([df.index.month, df.index.day])['vals'].transform(lambda x: x.fillna(x.mean()))
</code></pre>
|
<p>This answers both parts</p>
<ul>
<li>build a DF <code>dfr</code> that is the calculation you want</li>
<li><code>lambda</code> function returns a dict <code>{year:val, ...}</code></li>
<li>make sure indexes are named in reasonable way</li>
<li>expand out <code>dict</code> with <code>apply(pd.Series)</code></li>
<li>reshape by putting year columns back into index</li>
<li><code>merge()</code> the built DF with the original DF. The <strong>vals</strong> column contains <code>NaN</code>; the <strong>0</strong> column holds the value to fill</li>
<li>finally <code>fillna()</code></li>
</ul>
<pre><code># create 100 yr time series
dates = pd.date_range(start="1918-01-01", end="2018-12-31")
vals = [random.randrange(1, 50, 1) for i in range(len(dates))]
# Create some arbitrary gaps
vals[100:200] = vals[9962:10062] = vals[35895:35995] = [np.nan] * 100
# Create dataframe - simplified from question...
df = pd.DataFrame({"Date":dates,"vals":vals})
df[df.isna().any(axis=1)]
ystart = df.Date.dt.year.min()
# generate rolling means for month/day. bfill for when it's start of series
dfr = (df.groupby([df.Date.dt.month, df.Date.dt.day])["vals"]
.agg(lambda s: {y+ystart:v for y,v in enumerate(s.dropna().rolling(5).mean().bfill())})
.to_frame().rename_axis(["month","day"])
)
# expand dict into columns and reshape to by indexed by month,day,year
dfr = dfr.join(dfr.vals.apply(pd.Series)).drop(columns="vals").rename_axis("year",axis=1).stack().to_frame()
# get df index back, plus vals & fillna (column 0) can be seen alongside each other
dfm = df.merge(dfr, left_on=[df.Date.dt.month,df.Date.dt.day,df.Date.dt.year], right_index=True)
# finally what we really want to do - fill tha NaNs
df.fillna(dfm[0])
</code></pre>
<h3>analysis</h3>
<ul>
<li>taking NaN for 11-Apr-1918, default is 22 as it's backfilled from 1921</li>
<li>(12+2+47+47+2)/5 == 22</li>
</ul>
<pre><code>dfm.query("key_0==4 & key_1==11").head(7)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">key_0</th>
<th style="text-align: right;">key_1</th>
<th style="text-align: right;">key_2</th>
<th style="text-align: left;">Date</th>
<th style="text-align: right;">vals</th>
<th style="text-align: right;">0</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">100</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1918</td>
<td style="text-align: left;">1918-04-11 00:00:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">22</td>
</tr>
<tr>
<td style="text-align: right;">465</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1919</td>
<td style="text-align: left;">1919-04-11 00:00:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">22</td>
</tr>
<tr>
<td style="text-align: right;">831</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1920</td>
<td style="text-align: left;">1920-04-11 00:00:00</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">22</td>
</tr>
<tr>
<td style="text-align: right;">1196</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1921</td>
<td style="text-align: left;">1921-04-11 00:00:00</td>
<td style="text-align: right;">47</td>
<td style="text-align: right;">27</td>
</tr>
<tr>
<td style="text-align: right;">1561</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1922</td>
<td style="text-align: left;">1922-04-11 00:00:00</td>
<td style="text-align: right;">47</td>
<td style="text-align: right;">36</td>
</tr>
<tr>
<td style="text-align: right;">1926</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1923</td>
<td style="text-align: left;">1923-04-11 00:00:00</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">34.6</td>
</tr>
<tr>
<td style="text-align: right;">2292</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">1924</td>
<td style="text-align: left;">1924-04-11 00:00:00</td>
<td style="text-align: right;">37</td>
<td style="text-align: right;">29.4</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas
| 1
|
8,434
| 66,140,256
|
How do I create a new column based on matching values in two different dataframes?
|
<p>I have two dataframes:</p>
<p>df1 (a row for every event that happens in the game)</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Game</th>
<th>Event Type</th>
<th>Player</th>
<th>Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Faceoff</td>
<td>Sidney Crosby</td>
<td>20:00</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Pass</td>
<td>Drew Doughty</td>
<td>19:59</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Pass</td>
<td>Scott Niedermayer</td>
<td>19:42</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Shot</td>
<td>Sidney Crosby</td>
<td>18:57</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Takeaway</td>
<td>Dany Heatley</td>
<td>18:49</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Shot</td>
<td>Dany Heatley</td>
<td>18:02</td>
</tr>
<tr>
<td>02/28/10</td>
<td>USA vs Canada</td>
<td>Shot</td>
<td>Sidney Crosby</td>
<td>17:37</td>
</tr>
</tbody>
</table>
</div>
<p>df2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Player</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sidney Crosby</td>
</tr>
<tr>
<td>Dany Heatley</td>
</tr>
<tr>
<td>Scott Niedermayer</td>
</tr>
<tr>
<td>Drew Doughty</td>
</tr>
</tbody>
</table>
</div>
<p>How do I create a new column in df2 that matches the Player column in each dataframe and counts each row where the Event Type in df1 is "Shot"?</p>
<p>This is the output I would look for in this example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Player</th>
<th>Shots</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sidney Crosby</td>
<td>2</td>
</tr>
<tr>
<td>Dany Heatley</td>
<td>1</td>
</tr>
<tr>
<td>Scott Niedermayer</td>
<td>0</td>
</tr>
<tr>
<td>Drew Doughty</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I'm new to Python, so I apologize if there's an easy answer that I'm missing. Thank you!</p>
|
<p>You can filter <code>df1</code> for <code>Shot</code> events, then do a value count:</p>
<pre><code>shots = df1.loc[df1['Event Type']=='Shot', 'Player'].value_counts()
df2['Shots'] = df2['Player'].map(shots).fillna(0).astype(int)
# or using reindex with `fill_value` option
# shots.reindex(df2['Player'], fill_value=0).values
</code></pre>
<hr />
<p><strong>Bonus:</strong> Use <code>crosstab</code> and <code>merge</code> to get all statistics at once:</p>
<pre><code>df2.merge(pd.crosstab(df1['Player'], df1['Event Type']),
on='Player', how='left')
</code></pre>
|
python|pandas|dataframe|numpy
| 1
|
8,435
| 46,288,854
|
Determine change in values in a grouped dataframe
|
<p>Assume a dataset like this (which originally is read in from a .csv):</p>
<pre><code>data = pd.DataFrame({'id': [1,2,3,1,2,3],
'time':['2017-01-01 12:00:00','2017-01-01 12:00:00','2017-01-01 12:00:00',
'2017-01-01 12:10:00','2017-01-01 12:10:00','2017-01-01 12:10:00'],
'value': [10,11,12,10,12,13]})
</code></pre>
<p>=></p>
<pre><code> id time value
0 1 2017-01-01 12:00:00 10
1 2 2017-01-01 12:00:00 11
2 3 2017-01-01 12:00:00 12
3 1 2017-01-01 12:10:00 10
4 2 2017-01-01 12:10:00 12
5 3 2017-01-01 12:10:00 13
</code></pre>
<p>Time is identical for all IDs in each observation period. The series goes on like that for many observations, i.e. every ten minutes. </p>
<p>I want the number of total changes in the <code>value</code> column by id between consecutive times. For example: For id=1 there is no change (result: 0). For id=2 there is one change (result: 1).
Inspired by this post, I have tried taking differences:
<a href="https://stackoverflow.com/questions/30196063/determining-when-a-column-value-changes-in-pandas-dataframe">Determining when a column value changes in pandas dataframe</a></p>
<p>This is what I've come up so far (not working as expected):</p>
<pre><code>data = data.set_index(['id', 'time']) # MultiIndex
grouped = data.groupby(level='id')
data['diff'] = grouped['value'].diff()
data.loc[data['diff'].notnull(), 'diff'] = 1
data.loc[data['diff'].isnull(), 'diff'] = 0
grouped['diff'].sum()
</code></pre>
<p>However, this will just be the sum of occurrences for each id.</p>
<p>Since my dataset is huge (and won't fit into memory), the solution should be as fast as possible. (This is why I use a MultiIndex on id + time. I expect a significant speedup because optimally the data need not be shuffled anymore.)</p>
<p>Moreover, I have come around dask dataframes which are very similar to pandas dfs. A solution making use of them would be fantastic. </p>
|
<p>I think you're looking for a <code>groupby</code> and comparison by <code>shift</code>;</p>
<pre><code>data.groupby('id')['value'].agg(lambda x: (x != x.shift(-1)).sum() - 1)
id
1 0
2 1
3 1
Name: value, dtype: int64
</code></pre>
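<p>If the Python-level <code>lambda</code> ever becomes a bottleneck on a very large frame, a fully vectorised sketch that should give the same result (assuming the same flat layout as above, i.e. before setting the MultiIndex) is:</p>
<pre><code># Compare each value with the previous one within its group, then count the changes
changes = (data['value'] != data.groupby('id')['value'].shift()).groupby(data['id']).sum() - 1
# id
# 1    0
# 2    1
# 3    1
</code></pre>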
|
python|pandas|dataframe|group-by|pandas-groupby
| 4
|
8,436
| 58,254,949
|
How to search all the values in a dataframe with a particular string
|
<p>I am actually stuck and want to search a Dataframe to find all the cells which includes a url link into a different dataframe i.e.</p>
<p><strong>Input:</strong></p>
<pre><code> A B C
0 1 2 https://123
1 https://432 333 qq
2 https://567 rt q4
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> R
0 https://123
1 https://432
2 https://567
</code></pre>
<p>I have tried searching all the columns for the string "http", but it's not working.</p>
|
<p>Try:</p>
<pre><code>output_df = pd.DataFrame(columns=['R'])
for col in df.columns.tolist():
    # keep the cells of this column that contain a URL, as a Series named 'R'
    matches = df.loc[df[col].astype(str).str.contains('http'), col].rename('R')
    output_df = pd.concat([output_df, matches.to_frame()], ignore_index=True)
</code></pre>
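<p>Alternatively, a loop-free sketch that flattens every cell and keeps the ones containing a URL (assuming every matching cell should become a row of <code>R</code>):</p>
<pre><code>s = df.astype(str).stack()          # one entry per cell
output_df = s[s.str.contains('http')].reset_index(drop=True).to_frame('R')
</code></pre>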
|
python|pandas|dataframe
| 0
|
8,437
| 58,244,542
|
Tensorflow LSTM stateful option not maintaining state between batches
|
<p>I am new to Tensorflow and wanted to understand the <a href="https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/keras/layers/LSTM" rel="nofollow noreferrer">keras LSTM layer</a> so I wrote this
test program to discern the behavior of the <code>stateful</code> option.</p>
<pre class="lang-py prettyprint-override"><code>#Tensorflow 1.x version
import tensorflow as tf
import numpy as np
NUM_UNITS=1
NUM_TIME_STEPS=5
NUM_FEATURES=1
BATCH_SIZE=4
STATEFUL=True
STATEFUL_BETWEEN_BATCHES=True
lstm = tf.keras.layers.LSTM(units=NUM_UNITS, stateful=STATEFUL,
return_state=True, return_sequences=True,
batch_input_shape=(BATCH_SIZE, NUM_TIME_STEPS, NUM_FEATURES),
kernel_initializer='ones', bias_initializer='ones',
recurrent_initializer='ones')
x = tf.keras.Input((NUM_TIME_STEPS,NUM_FEATURES),batch_size=BATCH_SIZE)
result = lstm(x)
I = tf.compat.v1.global_variables_initializer()
sess = tf.compat.v1.Session()
sess.run(I)
X_input = np.array([[[3.14*(0.01)] for t in range(NUM_TIME_STEPS)] for b in range(BATCH_SIZE)])
feed_dict={x: X_input}
def matprint(run, mat):
print('Batch = ', run)
for b in range(mat.shape[0]):
print('Batch Sample:', b, ', per-timestep output')
print(mat[b].squeeze())
print('BATCH_SIZE = ', BATCH_SIZE, ', T = ', NUM_TIME_STEPS, ', stateful =', STATEFUL)
if STATEFUL:
print('STATEFUL_BETWEEN_BATCHES = ', STATEFUL_BETWEEN_BATCHES)
for r in range(2):
feed_dict={x: X_input}
OUTPUT_NEXTSTATES = sess.run({'result': result}, feed_dict=feed_dict)
OUTPUT = OUTPUT_NEXTSTATES['result'][0]
NEXT_STATES=OUTPUT_NEXTSTATES['result'][1:]
matprint(r,OUTPUT)
if STATEFUL:
if STATEFUL_BETWEEN_BATCHES:
#For TF version 1.x manually re-assigning states from
#the last batch IS required for some reason ...
#seems like a bug
sess.run(lstm.states[0].assign(NEXT_STATES[0]))
sess.run(lstm.states[1].assign(NEXT_STATES[1]))
else:
lstm.reset_states()
</code></pre>
<p>Note that the LSTM's weights are set to all ones and the input is constant for consistency.</p>
<p>As expected, the script's output when <code>stateful=False</code> has no sample, time, or inter-batch
dependence:</p>
<pre><code>BATCH_SIZE = 4 , T = 5 , stateful = False
Batch = 0
Batch Sample: 0 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 1 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 2 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 3 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch = 1
Batch Sample: 0 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 1 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 2 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 3 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
</code></pre>
<p>Upon setting <code>stateful=True</code> I was <em>expecting</em> the samples within each batch to yield different outputs (
presumably because the TF graph maintains state between the batch samples). This was not the case, however:</p>
<pre><code>BATCH_SIZE = 4 , T = 5 , stateful = True
STATEFUL_BETWEEN_BATCHES = True
Batch = 0
Batch Sample: 0 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 1 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 2 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 3 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch = 1
Batch Sample: 0 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 1 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 2 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 3 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
</code></pre>
<p>In particular, note the outputs from the first two samples of the same batch are identical.</p>
<p><strong>EDIT</strong>: I have been informed by <a href="https://stackoverflow.com/users/10133797/overlordgolddragon">OverlordGoldDragon</a>
that this behavior is expected and my confusion is in the distinction between a <em>Batch</em> -- a collection of
<code>(samples, timesteps, features)</code> -- and <em>Sample</em> within a batch (or a single "row" of the batch).
Represented by the following figure:</p>
<p><img src="https://i.stack.imgur.com/qbWeb.png" width="400"></p>
<p>So this raises the question of the dependence (if any) between individual samples for a given batch. From the
output of my script, I'm led to believe that <em>each</em> sample is fed to a (logically) separate LSTM block -- and
the LSTM states for the difference samples are independent. I've drawn this here:</p>
<p><img src="https://i.stack.imgur.com/X1T0n.png" width="500"></p>
<p><strong>Is my understanding correct?</strong></p>
<p>As an aside, it seems the <code>stateful=True</code> is broken in TensorFlow 1.x because if I remove the explicit
assignment of the state from the previous batch:</p>
<pre class="lang-py prettyprint-override"><code> sess.run(lstm.states[0].assign(NEXT_STATES[0]))
sess.run(lstm.states[1].assign(NEXT_STATES[1]))
</code></pre>
<p>it stops working, i.e., the second batch's output is identical to the first's.</p>
<p>I re-wrote the above script with the Tensorflow 2.0 syntax and the behavior is what I would expect
(without having to manually carry over LSTM state between batches):</p>
<pre class="lang-py prettyprint-override"><code>#Tensorflow 2.0 implementation
import tensorflow as tf
import numpy as np
NUM_UNITS=1
NUM_TIME_STEPS=5
NUM_FEATURES=1
BATCH_SIZE=4
STATEFUL=True
STATEFUL_BETWEEN_BATCHES=True
lstm = tf.keras.layers.LSTM(units=NUM_UNITS, stateful=STATEFUL,
return_state=True, return_sequences=True,
batch_input_shape=(BATCH_SIZE, NUM_TIME_STEPS, NUM_FEATURES),
kernel_initializer='ones', bias_initializer='ones',
recurrent_initializer='ones')
X_input = np.array([[[3.14*(0.01)]
for t in range(NUM_TIME_STEPS)]
for b in range(BATCH_SIZE)])
@tf.function
def forward(x):
return lstm(x)
def matprint(run, mat):
print('Batch = ', run)
for b in range(mat.shape[0]):
print('Batch Sample:', b, ', per-timestep output')
print(mat[b].squeeze())
print('BATCH_SIZE = ', BATCH_SIZE, ', T = ', NUM_TIME_STEPS, ', stateful =', STATEFUL)
if STATEFUL:
print('STATEFUL_BETWEEN_BATCHES = ', STATEFUL_BETWEEN_BATCHES)
for r in range(2):
OUTPUT_NEXTSTATES = forward(X_input)
OUTPUT = OUTPUT_NEXTSTATES[0].numpy()
NEXT_STATES=OUTPUT_NEXTSTATES[1:]
matprint(r,OUTPUT)
if STATEFUL:
if STATEFUL_BETWEEN_BATCHES:
pass
#Explicitly re-assigning states from the last batch isn't
# required as the model maintains inter-batch history.
#This is NOT the same behavior for TF.version < 2.0
#lstm.states[0].assign(NEXT_STATES[0].numpy())
#lstm.states[1].assign(NEXT_STATES[1].numpy())
else:
lstm.reset_states()
</code></pre>
<p>This is the output:</p>
<pre><code>BATCH_SIZE = 4 , T = 5 , stateful = True
STATEFUL_BETWEEN_BATCHES = True
Batch = 0
Batch Sample: 0 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 1 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 2 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch Sample: 3 , per-timestep output
[0.38041887 0.663519 0.79821336 0.84627265 0.8617684 ]
Batch = 1
Batch Sample: 0 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 1 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 2 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
Batch Sample: 3 , per-timestep output
[0.86686385 0.8686781 0.8693927 0.8697042 0.869853 ]
</code></pre>
|
<p>Everything appears to be working as intended - but the code's in need of much revision:</p>
<ul>
<li><code>Batch: 0</code> should be <code>Sample: 0</code>; your <code>batch_shape=(4, 5, 1)</code>, contains 4 <em>samples</em>, 5 <em>timesteps</em>, and 1 <em>feature</em> / <em>channel</em>. <code>I</code> in your case is the actual batch marker</li>
<li>Each sample is treated as an <em>independent sequence</em>, so it's like first feeding sample 1, then sample 2 - except during learning, batch sample losses are averaged to compute the gradient</li>
<li>Each one of your samples is <em>identical</em> - so it's sensible to get identical outputs for each batch; run <code>print(X_input)</code> to verify</li>
<li>Stateful works as intended: given the <em>same</em> input, <code>stateful=False</code> yields <em>same</em> outputs (because no internal state is maintained) - whereas <code>stateful=True</code> yields <em>different</em> outputs for different <code>I</code>, even though the inputs are same (due to memory)</li>
<li>As-is, your <code>lstm</code> is <em>not</em> learning, so weights are the same - and all <code>stateful=False</code> outputs will be exactly the same for same inputs</li>
<li>Initializing all weights to the same value is strongly discouraged - instead, use a <a href="https://stackoverflow.com/questions/58210700/are-these-models-equivalent/58210853#58210853">random seed</a></li>
</ul>
|
python|tensorflow|keras|lstm
| 3
|
8,438
| 58,309,845
|
Pandas re-arange flat hierarchy from bottom up to top down
|
<p>I am stuck with a challenge to re-arrange a flat unbalanced hierarchy that is built bottom up, i.e. mapping a child element to its parent and the parent's parent and so on, into a top-down structure, i.e. starting from the root and populating the structure downwards. Because the tree is unbalanced, some rows end at a lower hierarchy level than others.</p>
<p>Example:</p>
<p>Source:</p>
<pre><code>Child|Parent+0|Parent+1|Parent+2|Parent+3|Parent+4
Julia|Peter|Alice|Paul|Sara|Bianca
Chris|Jen|Bob|Fred|Bianca|NaN
Ben|John|Bianca|NaN|NaN|NaN
</code></pre>
<p>Target:</p>
<pre><code>Parent-0|Parent-1|Parent-2|Parent-3|Parent-4|Child
Bianca|Sara|Paul|Alice|Peter|Julia
Bianca|Fred|Bob|Jen|NaN|Chris
Bianca|John|NaN|NaN|NaN|Ben
</code></pre>
<p>I've tried different ideas but so far had no luck.
Appreciate your help or ideas.</p>
|
<p><code>set_index</code> and flip the column order. Then make use of the <a href="https://stackoverflow.com/a/47898659/4333359"><code>justify</code></a> function that cs95 modified from Divakar (a sketch of it is included at the end of this answer).</p>
<pre><code>df = df.set_index('Child').loc[:, ::-1]
pd.DataFrame(justify(df.to_numpy(), invalid_val=np.NaN),
index=df.index,
columns=[f'Parent-{i}' for i in range(0, df.shape[1])])
</code></pre>
<hr>
<pre><code> Parent-0 Parent-1 Parent-2 Parent-3 Parent-4
Child
Julia Bianca Sara Paul Alice Peter
Chris Bianca Fred Bob Jen NaN
Ben Bianca John NaN NaN NaN
</code></pre>
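<p>For completeness, here is a sketch of the <code>justify</code> helper referenced above, lightly adapted (<code>pd.notna</code> and an <code>object</code>-dtype output) so it also works on the string data in this question:</p>
<pre><code>import numpy as np
import pandas as pd

def justify(a, invalid_val=np.nan, axis=1, side='left'):
    # Push the valid (non-missing) entries of a 2D array to one side along an axis
    mask = pd.notna(a) if invalid_val is np.nan else (a != invalid_val)
    justified_mask = np.sort(mask, axis=axis)
    if side in ('up', 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val, dtype=object)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out
</code></pre>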
|
pandas
| 0
|
8,439
| 58,381,379
|
Looping over a dataframe and referencing a series
|
<p>I'm trying to iterate over a data frame in python and in my if statement I reference a couple of columns that happen to be a Series. When i run my code I get the following error:</p>
<pre><code>The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p><strong>Data</strong>:<br>
Taken from solution provided by @CypherX.</p>
<pre class="lang-py prettyprint-override"><code>template = ['some', 'abra', 'cadabra', 'juju', 'detail page', 'lulu', 'boo', 'honolulu', 'detail page']
prev = ['home', 'abra', 'cacobra', 'juju', 'detail page', 'lulu', 'booboo', 'picabo', 'detail here']
df = pd.DataFrame({'Template': template, 'Prev': prev})
</code></pre>
<pre><code> Template Prev
0 some home
1 abra abra
2 cadabra cacobra
3 juju juju
4 detail page detail page
5 lulu lulu
6 boo booboo
7 honolulu picabo
8 detail page detail here
</code></pre>
<hr>
<p>My code is the following:</p>
<pre><code>for row in s:
if (s['Template']=='detail page') and (s['Template']==s['Prev']):
s['Swipe']=1
else:
s['Swipe']=0
</code></pre>
<p>where <code>s</code> is my dataframe. </p>
<p>What can I do to fix this? Any ideas?</p>
|
<p>You could try setting the value of <code>s['Swipe']</code> using <code>np.where</code> instead:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
s['Swipe'] = np.where((s['Template'] == 'detail page') & (s['Template'] == s['Prev']), 1, 0)
</code></pre>
|
python|pandas|loops|numpy|dataframe
| 2
|
8,440
| 58,267,188
|
Dataframe filter rows based on comparison with another dataframe
|
<p>I want to filter one dataframe based on dates which fall between the dates of another dataframe.</p>
<p>I've tried the following code:</p>
<pre><code>df1 = pd.DataFrame({
'Start':['1/1/2016', '1/1/2016', '1/1/2016', '1/1/2016', '1/1/2016'],
'end':['1/12/2016', '1/12/2016', '1/12/2016', '1/12/2016', '1/12/2016'],
'Qty':[1, 2, 3, 4, 2],
})
df2 = pd.DataFrame({
'Start':['1/1/2016', '1/1/2016', '1/1/2016'],
'end':['1/6/2016', '1/6/2016', '1/6/2016'],
'Price':[11, 12, 31],
})
df2[(df2['Start']>=df1['Start']) & (df2['end']<=df1['end'])]
</code></pre>
<p>It should select all three rows of df2. But gives this error:</p>
<p><code>ValueError: Can only compare identically-labeled Series objects</code></p>
<p>P.S. Number of rows can't be same in my case.</p>
|
<p>You should have an equal number of rows in both data frames for this comparison. Here you have 5 rows in <code>df1</code> and 3 rows in <code>df2</code>.</p>
|
python|pandas|dataframe|filter
| 0
|
8,441
| 69,225,294
|
NumPy + PyTorch Tensor assignment
|
<p>Let's assume we have a <code>tensor</code> representing an image of the shape <code>(910, 270, 1)</code> which assigns a number (some index) to each pixel, with width=910 and height=270.</p>
<p>We also have a <code>numpy</code> array of size <code>(N, 3)</code> which maps a 3-tuple to an index.</p>
<p>I now want to create a new numpy array of shape <code>(920, 270, 3)</code> which has a 3-tuple based on the original tensor index and the mapping-3-tuple-numpy array. How do I do this assignment without for loops and other consuming iterations?</p>
<p>This would look something like:</p>
<pre><code>color_image = np.zeros((self._w, self._h, 3), dtype=np.int32)
self._colors = np.array(N,3) # this is already present
indexed_image = torch.tensor(920,270,1) # this is already present
#how do I assign it to this numpy array?
color_image[indexed_image.w, indexed_image.h] = self._colors[indexed_image.flatten()]
</code></pre>
|
<p>Assuming you have <code>_colors</code> and <code>indexed_image</code>, something that resembles:</p>
<pre><code>>>> indexed_image = torch.randint(0, 10, (920, 270, 1))
>>> _colors = np.random.randint(0, 255, (N, 3))
</code></pre>
<p>A common way of converting a dense map to an RGB map is to loop over the label set:</p>
<pre><code>>>> _colors = torch.FloatTensor(_colors)
>>> rgb = torch.zeros(indexed_image.shape[:-1] + (3,))
>>> for lbl in range(N):
... rgb[lbl == indexed_image[...,0]] = _colors[lbl]
</code></pre>
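<p>Assuming the indices stored in <code>indexed_image</code> are valid row indices into <code>_colors</code>, plain fancy indexing also avoids the Python loop entirely:</p>
<pre><code>>>> rgb = _colors[indexed_image[..., 0]]
>>> rgb.shape
torch.Size([920, 270, 3])
</code></pre>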
|
python|arrays|numpy|pytorch|tensor
| 1
|
8,442
| 69,124,087
|
How to add a Column named Key into a dictionary of multiple dataframes
|
<p>Given a dictionary with multiple dataframes in it, how can I add a column to each dataframe with all the rows in that df filled with the key name?</p>
<p><a href="https://i.stack.imgur.com/tDSxA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDSxA.png" alt="Dictionary Structure" /></a></p>
<p>I tried this code:</p>
<pre><code>for key, df in sheet_to_df_map.items():
df['sheet_name'] = key
</code></pre>
<p>This code does add the key column in each dataframe inside the dictionary, but also creates an additional dataframe.</p>
<p><a href="https://i.stack.imgur.com/s86ob.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s86ob.png" alt="DF" /></a></p>
<p>Can't this be done without creating an additional dataframe?</p>
<p>Furthermore, I want to separate the dataframes in the dictionary by number of columns: all the dataframes that have 10 columns concatenated together, the ones with 9 concatenated together, and so on. I don't know how to do this.</p>
|
<p>I could do it with the DataFrame method <code>assign()</code> and then replace the whole value in the dictionary, but I don't know if this is in fact what you want...</p>
<pre><code> for key, df in myDictDf.items():
myDictDf[key] = df.assign(sheet_name=[key for w in range(len(df.index))])
</code></pre>
<p>To sort your dictionary, I think you can use an OrderedDict with the columns property of the DataFrames.
By using len(df.columns) you can get the quantity of columns for each frame.</p>
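<p>For the second part of the question (concatenating the frames that share the same number of columns), a rough sketch reusing the <code>myDictDf</code> name from above could be:</p>
<pre><code>from collections import defaultdict
import pandas as pd

# Group the frames by their number of columns, then concatenate each group
by_ncols = defaultdict(list)
for key, df in myDictDf.items():
    by_ncols[len(df.columns)].append(df)

concatenated = {n: pd.concat(frames, ignore_index=True) for n, frames in by_ncols.items()}
</code></pre>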
<p>I think these links can be useful for you:</p>
<p><a href="https://note.nkmk.me/en/python-pandas-len-shape-size/" rel="nofollow noreferrer">https://note.nkmk.me/en/python-pandas-len-shape-size/</a></p>
<p><a href="https://www.geeksforgeeks.org/python-sort-python-dictionaries-by-key-or-value/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-sort-python-dictionaries-by-key-or-value/</a></p>
<p>I've found a related question too:
<a href="https://stackoverflow.com/questions/12555323/adding-new-column-to-existing-dataframe-in-python-pandas">Adding new column to existing DataFrame in Python pandas</a></p>
|
python|pandas|dataframe|dictionary
| 0
|
8,443
| 44,424,836
|
Filtering a Pandas Dataframe with a boolean mask
|
<p>How do I drop all dataframe rows that don't match a pair of conditions.</p>
<p>I did this:</p>
<pre><code>df = df[ ! ((df['FVID'] == 0) & (df['vstDelta'] == 0)) ]
</code></pre>
<p>but that was a syntax error. Hopefully it illustrates what I want to do, which is to drop all records containing these 2 conditions.</p>
|
<p>You should use <code>~</code> instead of <code>!</code> to get the negation of the condition.</p>
<pre><code>df = df[~((df['FVID'] == 0) & (df['vstDelta'] == 0))]
</code></pre>
|
python|pandas|dataframe
| 5
|
8,444
| 44,643,137
|
How do you use PyTorch PackedSequence in code?
|
<p>Can someone give a full working code (not a snippet, but something that runs on a variable-length recurrent neural network) on how would you use the PackedSequence method in PyTorch?</p>
<p>There do not seem to be any examples of this in the documentation, github, or the internet.</p>
<p><a href="https://github.com/pytorch/pytorch/releases/tag/v0.1.10" rel="noreferrer">https://github.com/pytorch/pytorch/releases/tag/v0.1.10</a></p>
|
<p>Not the most beautiful piece of code, but this is what I gathered for my personal use after going through PyTorch forums and docs. There can certainly be better ways to handle the sorting/restoring part, but I chose to keep it in the network itself.</p>
<p>EDIT: See answer from @tusonggao which makes torch utils take care of sorting parts</p>
<pre class="lang-py prettyprint-override"><code>class Encoder(nn.Module):
def __init__(self, vocab_size, embedding_size, embedding_vectors=None, tune_embeddings=True, use_gru=True,
hidden_size=128, num_layers=1, bidrectional=True, dropout=0.6):
super(Encoder, self).__init__()
self.embed = nn.Embedding(vocab_size, embedding_size, padding_idx=0)
self.embed.weight.requires_grad = tune_embeddings
if embedding_vectors is not None:
assert embedding_vectors.shape[0] == vocab_size and embedding_vectors.shape[1] == embedding_size
self.embed.weight = nn.Parameter(torch.FloatTensor(embedding_vectors))
cell = nn.GRU if use_gru else nn.LSTM
self.rnn = cell(input_size=embedding_size, hidden_size=hidden_size, num_layers=num_layers,
batch_first=True, bidirectional=True, dropout=dropout)
def forward(self, x, x_lengths):
sorted_seq_lens, original_ordering = torch.sort(torch.LongTensor(x_lengths), dim=0, descending=True)
ex = self.embed(x[original_ordering])
pack = torch.nn.utils.rnn.pack_padded_sequence(ex, sorted_seq_lens.tolist(), batch_first=True)
out, _ = self.rnn(pack)
unpacked, unpacked_len = torch.nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
indices = Variable(torch.LongTensor(np.array(unpacked_len) - 1).view(-1, 1)
.expand(unpacked.size(0), unpacked.size(2))
.unsqueeze(1))
last_encoded_states = unpacked.gather(dim=1, index=indices).squeeze(dim=1)
scatter_indices = Variable(original_ordering.view(-1, 1).expand_as(last_encoded_states))
encoded_reordered = last_encoded_states.clone().scatter_(dim=0, index=scatter_indices, src=last_encoded_states)
return encoded_reordered
</code></pre>
|
machine-learning|torch|recurrent-neural-network|pytorch
| 7
|
8,445
| 44,413,793
|
How to do nested iterrows in Pandas
|
<p>I am trying to take the data from the endResult dataframe'issues' column and put it into the 'Sprint' column in df. When I run this bit of code, it returns a dataframe that has the third entry from the 'issues' column inserted into each row of the 'Sprint' column in df. </p>
<pre><code>for i, r in endResult.iterrows():
j = endResult['issues'][i]['key']
for x, y in df.iterrows():
df['Sprint'][x] = j
</code></pre>
<p>What I'm getting:</p>
<p>Sprint<br>
0 SPGC-9445<br>
1 SPGC-9445<br>
2 SPGC-9445 </p>
<p>What I should be getting:</p>
<p>Sprint<br>
0 SPGC-14075<br>
1 SPGC-9456<br>
2 SPGC-9445</p>
<p>Entries are taken from endResult dataframe which contains json. </p>
<pre><code> issues
0 {u'key': u'SPGC-14075', u'fields': {u'status':...
1 {u'key': u'SPGC-9456', u'fields': {u'status': ...
2 {u'key': u'SPGC-9445', u'fields': {u'status': ...
</code></pre>
|
<p>Because you are assigning everything to <code>j</code> in the first loop, you overwrite this value on each loop. Then you assign each value in sprint to the value of <code>j</code>, which is going to be the last value in <code>issues</code>.</p>
<p>One simple change that fixes this is to change j to a list and append each value as you loop through. This also eliminates the second loop, since you can just make a column out of the created list:</p>
<pre><code>import pandas as pd
endResult = pd.DataFrame({'issues' : [{'key': 1},{'key': 2},{'key': 3}]})
df = pd.DataFrame()
j = []
for i, r in endResult.iterrows():
j.append(endResult['issues'][i]['key'])
df['Sprint'] = j
</code></pre>
|
python|pandas
| 0
|
8,446
| 60,873,683
|
Python - Filtering dataframe based on 3 columns potentially containing a sought after value
|
<p>I'm trying to take a query of recent customer transactions and match potential primary phone, cellphone and work phone matches against a particular list of customers I have.</p>
<p>Essentially, I am taking one dataframe column (the list of customers I am trying to see if they had transactions recently) against the overall universe of all recent transactions (the dataframe being transaction_data) and removing any row that does not have a match in either the primary phone, cellphone or workphone column.</p>
<p>Here is what I am currently trying to do, but it only returns False across each column header and does not filter the dataframe by rows as I had hoped:</p>
<pre><code>transaction_data[(transaction_data['phone'].isin(df['phone'])) | (transaction_data['cell'].isin(df['phone'])) | (transaction_data['workphone'].isin(df['phone']))].any()
</code></pre>
<p>I'm trying to return a dataframe containing rows of transactional records where there is a match on either primary phone, cellphone or workphone.</p>
<p>Is there a better way to do this perhaps? Or do I need a minor tweak on my code?</p>
|
<p>The thing here is that applying the <code>.isin()</code> method of a Series to another Series will return a boolean Series. </p>
<p>In your example <code>transaction_data['phone']</code> is a Series, and so is <code>df['phone']</code>. The return of this method will be a boolean Series containing the value <code>True</code> in each row where the value in <code>transaction_data['phone']</code> appears in <code>df['phone']</code>, and <code>False</code> otherwise. The same holds for all applications of the <code>isin()</code> method in your example.</p>
<p>And, good news! This boolean Series is exactly what is needed for slicing the dataframe. Therefore your code just needs a small tweak: delete the <code>.any()</code> at the end of the line.</p>
<pre><code>transaction_data[(transaction_data['phone'].isin(df['phone'])) | (transaction_data['cell'].isin(df['phone'])) | (transaction_data['workphone'].isin(df['phone']))]
</code></pre>
|
python|pandas
| 0
|
8,447
| 60,880,095
|
Appending elements to a numpy nd array
|
<p>I have initialized a numpy nd array like the following</p>
<pre><code>arr = np.zeros((6, 6))
</code></pre>
<p>This empty array is passed as an input argument to a function,</p>
<pre><code>def fun(arr):
arr.append(1) # this works for arr = [] initialization
return arr
for i in range(0,12):
fun(arr)
</code></pre>
<p>But append doesn't work for nd array. I want to fill up the elements of the nd array row-wise.
Is there any way to use a python scalar index for the nd array? I could increment this index every time <code>fun</code> is called and append elements to <code>arr</code> </p>
<p>Any suggestions? </p>
|
<pre><code>In [523]: arr = np.zeros((6,6),int)
In [524]: arr
Out[524]:
array([[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
In [525]: arr[0] = 1
In [526]: arr
Out[526]:
array([[1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
In [527]: arr[1] = [1,2,3,4,5,6]
In [528]: arr[2,3:] = 2
In [529]: arr
Out[529]:
array([[1, 1, 1, 1, 1, 1],
[1, 2, 3, 4, 5, 6],
[0, 0, 0, 2, 2, 2],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
</code></pre>
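<p>If you prefer the counter-based approach described in the question, a minimal sketch (the <code>row_idx</code> argument is just illustrative) could be:</p>
<pre><code>import numpy as np

arr = np.zeros((6, 6))

def fun(arr, row_idx, values):
    arr[row_idx] = values      # fill one whole row; values must have length arr.shape[1]
    return arr

for i in range(arr.shape[0]):
    fun(arr, i, np.arange(6) + i)
</code></pre>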
|
python|arrays|numpy|append
| 0
|
8,448
| 71,788,845
|
Remove duplicate data based on the same unix time
|
<p>There are multiple rows for the same date. I am trying to remove the duplicate rows and keep the data aligned based on the given unix time.
I tried using remove duplicates but it's not working.</p>
<pre><code> time x y
0 1648598400000 233 6758
1 1648598400000 234 6758
2 1648598403000 553 8678
3 1648598404000 987 8778
4 1648598405000 732 4535
5 1648598406000 234 7656
6 1648598406000 234 8977
7 1648598406000 465 7656
8 1648598406000 465 8977
</code></pre>
|
<p><code>df[ ~df['time'].duplicated() ]</code> (with <code>~</code>) works for me.</p>
<p>I use <code>io</code> only to simulate file - so everyone can copy it.</p>
<pre><code>data = ''' time x y
0 1648598400000 233 6758
1 1648598400000 234 6758
2 1648598403000 553 8678
3 1648598404000 987 8778
4 1648598405000 732 4535
5 1648598406000 234 7656
6 1648598406000 234 8977
7 1648598406000 465 7656
8 1648598406000 465 8977
'''
import pandas as pd
import io
df = pd.read_csv(io.StringIO(data), sep='\s+')
print('\n--- before ---\n')
print(df)
print('\n--- after ---\n')
print( df[ ~df['time'].duplicated() ] )
</code></pre>
<p>Result:</p>
<pre><code>--- before ---
time x y
0 1648598400000 233 6758
1 1648598400000 234 6758
2 1648598403000 553 8678
3 1648598404000 987 8778
4 1648598405000 732 4535
5 1648598406000 234 7656
6 1648598406000 234 8977
7 1648598406000 465 7656
8 1648598406000 465 8977
--- after ---
time x y
0 1648598400000 233 6758
2 1648598403000 553 8678
3 1648598404000 987 8778
4 1648598405000 732 4535
5 1648598406000 234 7656
</code></pre>
<hr />
<p>If I use <code>duplicated(keep='last')</code> then it gives</p>
<pre><code>
--- after ---
time x y
1 1648598400000 234 6758
2 1648598403000 553 8678
3 1648598404000 987 8778
4 1648598405000 732 4535
8 1648598406000 465 8977
</code></pre>
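<p>Equivalently, <code>drop_duplicates</code> restricted to the <code>time</code> column gives the same result:</p>
<pre><code>df.drop_duplicates(subset='time', keep='first')   # or keep='last'
</code></pre>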
|
python|pandas|dataframe
| 1
|
8,449
| 42,249,852
|
fetch values from csv with different number of columns in csv, numpy
|
<p>I am reading a csv with </p>
<pre><code>numpy.genfromtxt(csv_name, delimiter=',')
</code></pre>
<p>but I am unable to do so because my csv contains different no of columns for each row.</p>
<p>o/p:</p>
<pre><code>ValueError: Some errors were detected
Line #2 (got 8 columns instead of 7)
Line #3 (got 8 columns instead of 7)
Line #4 (got 8 columns instead of 7)
Line #6 (got 8 columns instead of 7)
Line #7 (got 5 columns instead of 7)
Line #8 (got 5 columns instead of 7)
Line #9 (got 5 columns instead of 7)
Line #10 (got 5 columns instead of 7)
</code></pre>
<p>Is is possible to do with numpy?</p>
|
<p>As described in the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer">genfromtxt documentation</a>, you can do it using the <code>filling_values</code> argument of <code>genfromtxt</code>.</p>
<p>Otherwise, you could use this answer: <a href="https://stackoverflow.com/questions/9823037/python-how-to-read-a-data-file-with-uneven-number-of-columns">Python: How to read a data file with uneven number of columns</a></p>
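<p>If neither option fits, a simple manual workaround (a sketch, assuming comma-separated numeric data) is to pad every row to the same width and build the array yourself:</p>
<pre><code>import numpy as np

with open(csv_name) as f:
    rows = [line.strip().split(',') for line in f if line.strip()]

n_cols = max(len(r) for r in rows)                        # widest row
padded = [r + ['nan'] * (n_cols - len(r)) for r in rows]  # pad short rows
data = np.array(padded, dtype=float)                      # 'nan' becomes np.nan
</code></pre>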
|
python|numpy
| 1
|
8,450
| 42,326,748
|
tensorflow on GPU: no known devices, despite cuda's deviceQuery returning a "PASS" result
|
<blockquote>
<p>Note : this question was initially <a href="https://github.com/tensorflow/tensorflow/issues/7648#issuecomment-280866214" rel="noreferrer">asked on github</a>, but it was asked to be here instead</p>
</blockquote>
<p>I'm having trouble running tensorflow on gpu, and it does not seems to be the usual cuda's configuration problem, because everything seems to indicate cuda is properly setup.</p>
<p>The main symptom: when running tensorflow, my gpu is not detected (<a href="https://gist.github.com/oelmekki/cafda411bf5c2ea695d984fa98e0995b" rel="noreferrer">the code being run</a>, and <a href="https://gist.github.com/oelmekki/77235c6b0dde99b3438f190eb557f40f" rel="noreferrer">its output</a>).</p>
<p>What differs from usual issues is that cuda seems properly installed and running <code>./deviceQuery</code> from cuda samples is successful (<a href="https://gist.github.com/oelmekki/fe65a15daec45aa90ec33b10b51d3aae" rel="noreferrer">output</a>).</p>
<p>I have two graphical cards:</p>
<ul>
<li>an old GTX 650 used for my monitors (I don't want to use that one with tensorflow)</li>
<li>a GTX 1060 that I want to dedicate to tensorflow</li>
</ul>
<p>I use:</p>
<ul>
<li><a href="https://pypi.python.org/pypi/tensorflow" rel="noreferrer">tensorflow-1.0.0</a></li>
<li>cuda-8.0 (<a href="https://gist.github.com/oelmekki/6e5e9d7d1ea871e1d73efae307efe9ce" rel="noreferrer">ls -l /usr/local/cuda/lib64/libcud*</a>)</li>
<li>cudnn-5.1.10</li>
<li>python-2.7.12</li>
<li>nvidia-drivers-375.26 (this was installed by cuda and replaced my distro driver package)</li>
</ul>
<p>I've tried:</p>
<ul>
<li>adding <code>/usr/local/cuda/bin/</code> to <code>$PATH</code></li>
<li>forcing gpu placement in tensorflow script using <code>with tf.device('/gpu:1'):</code> (and <code>with tf.device('/gpu:0'):</code> when it failed, for good measure)</li>
<li>whitelisting the gpu I wanted to use with <code>CUDA_VISIBLE_DEVICES</code>, in case the presence of my old unsupported card did cause problems</li>
<li>running the script with sudo (because why not)</li>
</ul>
<p>Here are the outputs of <a href="https://gist.github.com/oelmekki/7bdcb5cc2f791cea561a60f8b21e87b5" rel="noreferrer">nvidia-smi</a> and <a href="https://gist.github.com/oelmekki/b83a5a0a72e8924aeb44b70b3598f9b4" rel="noreferrer">nvidia-debugdump -l</a>, in case it's useful.</p>
<p>At this point, I feel like I have followed all the breadcrumbs and have no idea what I could try else. I'm not even sure if I'm contemplating a bug or a configuration problem. Any advice about how to debug this would be greatly appreciated. Thanks!</p>
<p><strong>Update</strong>: with the help of Yaroslav on github, I gathered more debugging info by raising log level, but it doesn't seem to say much about the device selection : <a href="https://gist.github.com/oelmekki/760a37ca50bf58d4f03f46d104b798bb" rel="noreferrer">https://gist.github.com/oelmekki/760a37ca50bf58d4f03f46d104b798bb</a></p>
<p><strong>Update 2</strong>: Using theano detects gpu correctly, but interestingly it complains about cuDNN being too recent, then fallback to cpu (<a href="https://gist.github.com/oelmekki/34b6e41a0ff2b17ff9f39bcf56d0635a" rel="noreferrer">code ran</a>, <a href="https://gist.github.com/oelmekki/11626d6b34058337dae64f1915e5a9fe" rel="noreferrer">output</a>). Maybe that could be the problem with tensorflow as well?</p>
|
<p>From the log output, it looks like you are running the CPU version of TensorFlow (PyPI: <a href="https://pypi.python.org/pypi/tensorflow" rel="noreferrer"><code>tensorflow</code></a>), and not the GPU version (PyPI: <a href="https://pypi.python.org/pypi/tensorflow-gpu" rel="noreferrer"><code>tensorflow-gpu</code></a>). Running the GPU version would either log information about the CUDA libraries, or an error if it failed to load them or open the driver.</p>
<p>If you run the following commands, you should be able to use the GPU in subsequent runs:</p>
<pre><code>$ pip uninstall tensorflow
$ pip install tensorflow-gpu
</code></pre>
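<p>After reinstalling, one quick way to confirm which devices the installed build can see (this API exists in the TF 1.x line used here) is:</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())   # a working GPU build should list a /gpu:0 device
</code></pre>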
|
tensorflow
| 78
|
8,451
| 69,750,333
|
How to show sliding windows of a numpy array with matplotlib FuncAnimation
|
<p>I am developing a simple algorithm for the detection of peaks in a signal. To troubleshoot my algorithm (and to showcase it), I would like to observe the signal and the detected peaks all along the signal duration (i.e. <code>20</code> minutes at <code>100Hz</code> = <code>20000</code> time-points).</p>
<p>I thought that the best way to do it would be to create an animated plot with <code>matplotlib.animation.FuncAnimation</code> that would continuously show the signal sliding by 1 time-points and its superimposed peaks within a time windows of <code>5</code> seconds (i.e. <code>500</code> time-points). The signal is stored in a 1D <code>numpy.ndarray</code> while the peaks information are stored in a 2D <code>numpy.ndarray</code> containing the <code>x</code> and <code>y</code> coordinates of the peaks.</p>
<p>This is a "still frame" of how the plot would look like.</p>
<p><a href="https://i.stack.imgur.com/MwpAH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MwpAH.png" alt="enter image description here" /></a></p>
<p>Now the problem is that I cannot wrap my head around the way of doing this with FuncAnimation.</p>
<p>If my understanding is correct I need three main pieces: the <code>init_func</code> parameter, a function that create the empty frame upon which the plot is drawn, the <code>func</code> parameter, that is the function that actually create the plot for each frame, and the parameter <code>frames</code> which is defined in the help as <code>Source of data to pass func and each frame of the animation</code>.</p>
<p>Looking at examples of plots with <code>FuncAnimation</code>, I can only find use-cases in which the data to plot are create on the go, like <a href="https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/" rel="nofollow noreferrer">here</a>, or <a href="https://riptutorial.com/matplotlib/example/23558/basic-animation-with-funcanimation" rel="nofollow noreferrer">here</a>, where the data to plot are <strong>created</strong> on the basis of the <code>frame</code>.</p>
<p>What I do not understand is how to implement this with data that are already there, but that are <strong>sliced</strong> on the basis of the frame. I would thus need the <code>frame</code> to work as a sort of sliding window, in which the first window goes from <code>0</code> to <code>499</code>, the second from <code>1</code>to <code>500</code> and so on until the end of the time-points in the <code>ndarray</code>, and an associated <code>func</code> that will select the points to plot on the basis of those <code>frames</code>. I do not know how to implement this.</p>
<p>I add the code to create a realistic signal, to simply detect the peaks and to plot the 'static' version of the plot I would like to animate:</p>
<pre><code>import neurokit2 as nk
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
#create realistic data
data = nk.ecg_simulate(duration = 50, sampling_rate = 100, noise = 0.05,\
random_state = 1)
#scale data
scaler = MinMaxScaler()
scaled_arr = scaler.fit_transform(data.reshape(-1,1))
#find peaks
peak = find_peaks(scaled_arr.squeeze(), height = .66,\
distance = 60, prominence = .5)
#plot
plt.plot(scaled_arr[0:500])
plt.scatter(peak[0][peak[0] < 500],\
peak[1]['peak_heights'][peak[0] < 500],\
color = 'red')
</code></pre>
|
<p>I've created an animation using the data you presented: the code extracts the signal in 500-point windows from the 5000-point series and updates the graph. To make the extraction easy, I built a list of index pairs, where id[0] is the start row and id[1] is the end row, and the example animates the first 50 of these windows (frames=50). This approach works, but the initial settings and dataset did not work for the scatter plot, so I draw the scatter plot directly inside the update loop.</p>
<pre><code>import neurokit2 as nk
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from scipy.signal import find_peaks
import numpy as np
#create realistic data
data = nk.ecg_simulate(duration = 50, sampling_rate = 100, noise = 0.05, random_state = 1)
#scale data
scaler = MinMaxScaler()
scaled_arr = scaler.fit_transform(data.reshape(-1,1))
#find peaks
peak = find_peaks(scaled_arr.squeeze(), height = .66, distance = 60, prominence = .5)
ymin, ymax = min(scaled_arr), max(scaled_arr)
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot([],[], lw=2)
scat = ax.scatter([], [], s=20, facecolor='red')
idx = [(s,e) for s,e in zip(np.arange(0,len(scaled_arr), 1), np.arange(499,len(scaled_arr)+1, 1))]
def init():
line.set_data([], [])
return line,
def animate(i):
id = idx[i]
#print(id[0], id[1])
line.set_data(np.arange(id[0], id[1]), scaled_arr[id[0]:id[1]])
x = peak[0][(peak[0] > id[0]) & (peak[0] < id[1])]
y = peak[1]['peak_heights'][(peak[0] > id[0]) & (peak[0] < id[1])]
#scat.set_offsets(x, y)
ax.scatter(x, y, s=20, c='red')
ax.set_xlim(id[0], id[1])
ax.set_ylim(ymin, ymax)
return line,scat
anim = FuncAnimation(fig, animate, init_func=init, frames=50, interval=50, blit=True)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ofeJx.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ofeJx.gif" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib|visualization
| 2
|
8,452
| 69,812,787
|
How can I use weighted labels in the knn algorithm?
|
<p>I am working on my own implementation of the weighted knn algorithm.</p>
<p>To simplify the logic, let's represent this as a predict method, which takes three parameters:</p>
<p>indices - matrix of nearest j neighbors from the training sample for object i (i=1...n, n objects in total). [i, j] - index of object from the training sample.
For example, for 4 objects and 3 neighbors:</p>
<pre><code>indices = np.asarray([[0, 3, 1],
[0, 3, 1],
[1, 2, 0],
[5, 4, 3]])
</code></pre>
<p>distances - matrix of distances from j nearest neighbors from the training sample to object i. (i=1...n, n objects in total). For example, for 4 objects and 3 neighbors:</p>
<pre><code>distances = np.asarray([[ 4.12310563, 7.07106781, 7.54983444],
[ 4.89897949, 6.70820393, 8.24621125],
[ 0., 1.73205081, 3.46410162],
[1094.09368886, 1102.55022561, 1109.62245832]])
</code></pre>
<p>labels - vector with true labels of classes for each object j of training sample. For example:</p>
<pre><code>labels = np.asarray([0, 0, 0, 1, 1, 2])
</code></pre>
<p>Thus, the function signature is:</p>
<pre><code> def predict(indices, distances, labels):
....
# return [np.bincount(x).argmax() for x in labels[indices]]
return predict
</code></pre>
<p>In the comment you can see the code that returns the prediction for the "non-weighted" knn method, which does not use distances. Can you please show how predictions can be calculated using the distance matrix? I found the algorithm, but now I'm completely stumped because I don't know how to implement it with numpy.</p>
<p>Thank you!</p>
|
<p>This should work:</p>
<pre><code># compute inverses of distances
# suppress division by 0 warning,
# replace np.inf with a very large number
with np.errstate(divide='ignore'):
dinv = np.nan_to_num(1 / distances)
# an array with distinct class labels
distinct_labels = np.array(list(set(labels)))
# an array with labels of neighbors
neigh_labels = labels[indices]
# compute the weighted score for each potential label
weighted_scores = ((neigh_labels[:, :, np.newaxis] == distinct_labels) * dinv[:, :, np.newaxis]).sum(axis=1)
# choose the label with the highest score
predictions = distinct_labels[weighted_scores.argmax(axis=1)]
</code></pre>
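<p>Wrapped into the <code>predict</code> signature from the question, a sketch looks like this:</p>
<pre><code>def predict(indices, distances, labels):
    with np.errstate(divide='ignore'):
        dinv = np.nan_to_num(1 / distances)
    distinct_labels = np.unique(labels)
    neigh_labels = labels[indices]
    weighted_scores = ((neigh_labels[:, :, np.newaxis] == distinct_labels)
                       * dinv[:, :, np.newaxis]).sum(axis=1)
    return distinct_labels[weighted_scores.argmax(axis=1)]

predict(indices, distances, labels)   # array([0, 0, 0, 1]) for the sample arrays above
</code></pre>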
|
python|numpy|knn
| 1
|
8,453
| 69,694,093
|
GPU is not available for Pytorch
|
<p>I installed Anaconda, CUDA, and PyTorch today, and I can't access my GPU (RTX 2070) in torch. I followed all of installation steps and PyTorch works fine otherwise, but when I try to access the GPU either in shell or in script I get</p>
<pre class="lang-py prettyprint-override"><code>>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.cuda.device_count()
0
>>> print(torch.version.cuda)
None
</code></pre>
<p>Running <code>conda list</code> shows this as my installed package</p>
<pre class="lang-sh prettyprint-override"><code>cudatoolkit 11.3.1 h59b6b97_2
</code></pre>
<p>and running <code>numba -s</code> in the conda environment shows</p>
<pre class="lang-sh prettyprint-override"><code>__CUDA Information__
CUDA Device Initialized : True
CUDA Driver Version : 11030
CUDA Detect Output:
Found 1 CUDA devices
id 0 b'NVIDIA GeForce RTX 2070' [SUPPORTED]
compute capability: 7.5
pci device id: 0
pci bus id: 1
Summary:
1/1 devices are supported
</code></pre>
<p>and all of the tests pass with <code>ok</code>. CUDA 11.3 is one of the supported compute platforms for PyTorch and by my GPU and that is the version that I installed.</p>
<p>I already tried reinstalling CUDA, I am on Windows 10, <code>nvcc --version</code> shows that CUDA is installed <code>Build cuda_11.3.r11.3/compiler.29745058_0</code></p>
<p>Any suggestions would be helpful</p>
<p>Edit: I am using PyTorch 1.10 installed from the generated command on <a href="https://pytorch.org/" rel="nofollow noreferrer">their website</a>. Using <code>python 3.9.7</code>. I also installed PyTorch again in a fresh conda environment and got the same problem.</p>
|
<p>Downgrading CUDA to 10.2 and using PyTorch LTS 1.8.2 lets PyTorch use the GPU now. Per the comment from @talonmies it seems like PyTorch 1.10 doesn't support CUDA</p>
|
python|pytorch|anaconda|conda
| 1
|
8,454
| 70,019,359
|
Code completion problems using numpy with collections
|
<p>Code completion in e.g. Visual Studio shows me, as in the screenshot below, what possibilities I have for completing my code.</p>
<p>For Python I now code on Linux with PyCharm. My problem is that the code completion shows me far fewer possibilities to complete my code than I would expect.
I would expect to get all the methods I can call on <code>axd['bottom']</code>. But for some reason the code completion only shows me unusable suggestions. Is there some feature to activate a more useful code completion in PyCharm, or is there perhaps another code editor that handles this better?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Some example data to display
x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
fig, axd = plt.subplot_mosaic([['left', 'right'],['bottom', 'bottom']],
constrained_layout=True)
playerax = fig.add_axes([0.20, 0.1, 0.64, 0.04])
axd['left'].plot(x, y, 'C0')
axd['right'].plot(x, y, 'C1')
axd['bottom'].plot(x, y, 'C2')
axd['bottom'].
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/mtJZs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mtJZs.png" alt="enter image description here" /></a></p>
|
<ol>
<li>Go to the Code menu.</li>
<li>Go to the Completion submenu of the Code menu.</li>
<li>The following code completion options are available: Basic, SmartType, Cyclic Expand Word, Cyclic Expand Word Backward.</li>
</ol>
<p>Pick each one in turn and test whether it gives you what you want:
e.g. pick Basic, then test; if not satisfied,
pick SmartType, and so on.</p>
|
python|numpy|pycharm|code-completion
| 0
|
8,455
| 72,400,813
|
How to create a ranking variable/function for different periods in a panel data?
|
<p>I have a dataset, <code>df</code>, that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Code</th>
<th>City</th>
<th>State</th>
<th>Population</th>
<th>Quantity</th>
<th>QTDPERCAPITA</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-01</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>100000</td>
<td>0.02</td>
</tr>
<tr>
<td>2020-02</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>125000</td>
<td>0.025</td>
</tr>
<tr>
<td>2020-03</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>135000</td>
<td>0.027</td>
</tr>
<tr>
<td>2020-01</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>150000</td>
<td>0.05</td>
</tr>
<tr>
<td>2020-02</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>100000</td>
<td>0.033</td>
</tr>
<tr>
<td>2020-03</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>200000</td>
<td>0.066</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2021-07</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5500499</td>
<td>340000</td>
<td>0.062</td>
</tr>
<tr>
<td>2021-07</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3250012</td>
<td>211000</td>
<td>0.065</td>
</tr>
</tbody>
</table>
</div>
<p>Where<code>QTDPERCAPITA</code> is simply <code>Quantity/Population</code>. I have multiple cities (4149 to be more precise).</p>
<p>The quantities change according to every month, and so does the population.</p>
<p>I would like to create a new variable that serve as a ranking, ranging from <code>[0,1]</code>, where <code>0</code> is the city with the lowest <code>QTDPERCAPITA</code> in that month, and <code>1</code> is the city with the most quantity per capita in that month. Essentially, I want to create a new column that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Code</th>
<th>City</th>
<th>State</th>
<th>Population</th>
<th>Quantity</th>
<th>QTDPERCAPITA</th>
<th>RANKING</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-01</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>100000</td>
<td>0.02</td>
<td>0</td>
</tr>
<tr>
<td>2020-02</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>125000</td>
<td>0.025</td>
<td>0</td>
</tr>
<tr>
<td>2020-03</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5000000</td>
<td>135000</td>
<td>0.027</td>
<td>0</td>
</tr>
<tr>
<td>2020-01</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>150000</td>
<td>0.05</td>
<td>1</td>
</tr>
<tr>
<td>2020-02</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>100000</td>
<td>0.033</td>
<td>1</td>
</tr>
<tr>
<td>2020-03</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3000000</td>
<td>200000</td>
<td>0.066</td>
<td>1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>2021-07</td>
<td>11001</td>
<td>Los Angeles</td>
<td>CA</td>
<td>5500499</td>
<td>340000</td>
<td>0.062</td>
<td>0</td>
</tr>
<tr>
<td>2021-07</td>
<td>12002</td>
<td>Houston</td>
<td>TX</td>
<td>3250012</td>
<td>211000</td>
<td>0.065</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>How can I create this column such that the <code>RANKING</code> changes every month? I was thinking of a <code>for</code> loop that extracts the <code>QTDPERCAPITA</code> for every city on every unique date, and creates a new column, <code>df['RANKING']</code> with the same <code>date</code> and <code>city</code>.</p>
|
<p>You can use:</p>
<pre><code># MinMax scaler: (rank - min) / (max - min)
ranking = lambda x: (x.rank() - 1) / (len(x) - 1)
# Rank between [0, 1] -> 0 the lowest, 1 the highest
df['RANKING'] = df.groupby('Date')['QTDPERCAPITA'].apply(ranking)
# Rank between [1, 4149] -> 1 the lowest, 4149 the highest
# df['RANKING'] = df.groupby('Date')['QTDPERCAPITA'].rank('dense')
</code></pre>
<p>Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Date</th>
<th style="text-align: right;">Code</th>
<th style="text-align: left;">City</th>
<th style="text-align: left;">State</th>
<th style="text-align: right;">Population</th>
<th style="text-align: right;">Quantity</th>
<th style="text-align: right;">QTDPERCAPITA</th>
<th style="text-align: right;">RANKING</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2020-01</td>
<td style="text-align: right;">11001</td>
<td style="text-align: left;">Los Angeles</td>
<td style="text-align: left;">CA</td>
<td style="text-align: right;">5000000</td>
<td style="text-align: right;">100000</td>
<td style="text-align: right;">0.02</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">2020-02</td>
<td style="text-align: right;">11001</td>
<td style="text-align: left;">Los Angeles</td>
<td style="text-align: left;">CA</td>
<td style="text-align: right;">5000000</td>
<td style="text-align: right;">125000</td>
<td style="text-align: right;">0.025</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">2020-03</td>
<td style="text-align: right;">11001</td>
<td style="text-align: left;">Los Angeles</td>
<td style="text-align: left;">CA</td>
<td style="text-align: right;">5000000</td>
<td style="text-align: right;">135000</td>
<td style="text-align: right;">0.027</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">2020-01</td>
<td style="text-align: right;">12002</td>
<td style="text-align: left;">Houston</td>
<td style="text-align: left;">TX</td>
<td style="text-align: right;">3000000</td>
<td style="text-align: right;">150000</td>
<td style="text-align: right;">0.05</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2020-02</td>
<td style="text-align: right;">12002</td>
<td style="text-align: left;">Houston</td>
<td style="text-align: left;">TX</td>
<td style="text-align: right;">3000000</td>
<td style="text-align: right;">100000</td>
<td style="text-align: right;">0.033</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2020-03</td>
<td style="text-align: right;">12002</td>
<td style="text-align: left;">Houston</td>
<td style="text-align: left;">TX</td>
<td style="text-align: right;">3000000</td>
<td style="text-align: right;">200000</td>
<td style="text-align: right;">0.066</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021-07</td>
<td style="text-align: right;">11001</td>
<td style="text-align: left;">Los Angeles</td>
<td style="text-align: left;">CA</td>
<td style="text-align: right;">5500499</td>
<td style="text-align: right;">340000</td>
<td style="text-align: right;">0.618</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021-07</td>
<td style="text-align: right;">12002</td>
<td style="text-align: left;">Houston</td>
<td style="text-align: left;">TX</td>
<td style="text-align: right;">3250012</td>
<td style="text-align: right;">211000</td>
<td style="text-align: right;">0.065</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|ranking-functions
| 3
|
8,456
| 72,379,260
|
How to Read Huge and Valid JSON File Line by Line in Python
|
<p>I've been trying to use this code to read a huge JSON file (It contains 80+ million records) line by line:</p>
<pre><code>import json
import pandas as pd
lines = []
with open('file_path','r') as f:
for line in f:
lines.append(json.loads(line))
df = pd.DataFrame(lines)
</code></pre>
<p>But this gives an error:</p>
<pre><code>JSONDecodeError: Expecting property name enclosed in double quotes
</code></pre>
<p>Then, I used replace function with below code,</p>
<pre><code>import json
import pandas as pd
lines = []
jstr = ""
with open('filepath','r') as f:
for line in f:
jstr = f'{jstr}{line}'
jstr = line.replace("'", '"')
lines.append(json.loads(jstr))
df = pd.DataFrame(lines)
</code></pre>
<p>But I can only read first six rows and then I got this error:</p>
<pre><code>JSONDecodeError: Expecting ',' delimiter
</code></pre>
<p>It is ensured that json is a valid format but I don't know what to do.</p>
<p>Would anyone help me how to handle this problem?</p>
|
<p>Maybe you are searching for this?</p>
<pre><code>import pandas as pd
df = pd.read_json('data/simple.json')
</code></pre>
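<p>If the file is newline-delimited JSON (one record per line), reading it in bounded chunks may be more practical for 80+ million records; a sketch:</p>
<pre><code>import pandas as pd

# lines=True expects one JSON object per line; chunksize keeps memory bounded
chunks = pd.read_json('file_path', lines=True, chunksize=1_000_000)
df = pd.concat(chunks, ignore_index=True)
</code></pre>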
|
python|json|pandas
| 0
|
8,457
| 50,261,076
|
Using pandas to add list elements together
|
<p>I have the following array of dicts:</p>
<pre><code>items = [
{
'FirstName': 'David',
'Language': ['en',]
},
{
'FirstName': 'David',
'Language': ['fr',]
},
{
'FirstName': 'David',
'Language': ['en',]
},
{
'FirstName': 'Bob',
'Language': ['en',]
}
]
</code></pre>
<p>Which I want to group by on FirstName and add the unique languages together, like so:</p>
<pre><code>items = [
{
'FirstName': 'David',
'Language': ['en', 'fr']
},
{
'FirstName': 'Bob',
'Language': ['en',]
}
]
</code></pre>
<p>The SQL I would use would be:</p>
<pre><code>SELECT FirstName, GROUP_CONCAT(DISTINCT Language ORDER BY Language)
FROM items
GROUP BY FirstName
</code></pre>
<p>Using pandas, how would I combine this and do a group by on FirstName and get an array of unique languages? Here is what I have so far:</p>
<pre><code>>>> df = pandas.DataFrame(items)
>>> df.groupby('FirstName')['Language']
.apply(lambda x: list(set(x))) # this line is off
.reset_index()
.to_dict(orient='records')
</code></pre>
|
<p>Aggregate all with sum, <code>transform</code> values to set and then <code>to_dict()</code></p>
<pre><code>>>> df.groupby('FirstName').sum()["Language"].transform(set).reset_index().to_dict(orient='records')
[{'FirstName': 'Bob', 'Language': {'en'}},
{'FirstName': 'David', 'Language': {'en', 'fr'}}]
</code></pre>
|
python|pandas
| 6
|
8,458
| 50,500,415
|
pandas new column from values in others
|
<p>I have a <code>df</code> that is populated with XY coordinates from different subjects. I want to create a new column that takes specified XY coordinates from those subjects. </p>
<p>This is achieved when the name of any subject is highlighted in the <code>'Person'</code> column. This returns the XY coordinates of that subject at that index. </p>
<pre><code>import pandas as pd
import numpy as np
import random
AA = 10, 20
k = 5
N = 10
df = pd.DataFrame({
'John Doe_X' : np.random.uniform(k, k + 100 , size=N),
'John Doe_Y' : np.random.uniform(k, k + 100 , size=N),
'Kevin Lee_X' : np.random.uniform(k, k + 100 , size=N),
'Kevin Lee_Y' : np.random.uniform(k, k + 100 , size=N),
'Liam Smith_X' : np.random.uniform(k, k + -100 , size=N),
'Liam Smith_Y' : np.random.uniform(k, k + 100 , size=N),
'Event' : ['AA', 'nan', 'BB', 'nan', 'nan', 'CC', 'nan','CC', 'DD','nan'],
'Person' : ['nan','nan','John Doe','John Doe','nan','Kevin Lee','nan','Liam Smith','John Doe','John Doe']})
df['X'] = df.apply(lambda row: row.get(row['Person']+'_X') if pd.notnull(row['Person']) else np.nan, axis=1)
df['Y'] = df.apply(lambda row: row.get(row['Person']+'_Y') if pd.notnull(row['Person']) else np.nan, axis=1)
</code></pre>
<p>Output:</p>
<pre><code> Event John Doe_X John Doe_Y Kevin Lee_X Kevin Lee_Y Liam Smith_X \
0 AA 75.047164 19.281168 28.064313 87.184248 -76.148559
1 nan 50.642782 68.308319 46.088057 64.132263 -83.109383
2 BB 9.965115 77.950894 48.864693 8.613132 0.106708
3 nan 44.726136 58.751520 69.904076 40.818433 -87.656064
4 nan 101.501119 99.156872 101.976300 93.539749 -57.026015
5 CC 87.778446 65.814911 7.302116 40.577156 -28.703879
6 nan 99.682139 91.715231 88.029451 82.309191 -66.444582
7 CC 38.248267 38.648960 76.065297 67.322639 -34.754868
8 DD 69.429353 61.252800 83.024358 58.038962 -62.001353
9 nan 9.522023 73.009883 41.873986 8.677565 -20.389939
Liam Smith_Y Person X Y
0 18.420494 nan NaN NaN
1 33.206289 nan NaN NaN
2 73.833204 John Doe 9.965115 77.950894
3 39.652071 John Doe 44.726136 58.751520
4 88.176561 nan NaN NaN
5 53.776995 Kevin Lee 7.302116 40.577156
6 95.025923 nan NaN NaN
7 26.851864 Liam Smith -34.754868 26.851864
8 102.771046 John Doe 69.429353 61.252800
9 28.633231 John Doe 9.522023 73.009883
</code></pre>
<p>I'm now hoping to use the <code>'Event'</code> column to refine the new <code>['X','Y']</code> column. Specifically, I want to return the coordinates of <code>AA (10,20)</code> when the value <code>'AA'</code> is in the <code>'Event'</code> column. Furthermore, I like to get the same coordinates until the next coordinates appear. </p>
<p>So the output would look like:</p>
<pre><code> Event John Doe_X John Doe_Y Kevin Lee_X Kevin Lee_Y Liam Smith_X \
0 AA 75.047164 19.281168 28.064313 87.184248 -76.148559
1 nan 50.642782 68.308319 46.088057 64.132263 -83.109383
2 BB 9.965115 77.950894 48.864693 8.613132 0.106708
3 nan 44.726136 58.751520 69.904076 40.818433 -87.656064
4 nan 101.501119 99.156872 101.976300 93.539749 -57.026015
5 CC 87.778446 65.814911 7.302116 40.577156 -28.703879
6 nan 99.682139 91.715231 88.029451 82.309191 -66.444582
7 CC 38.248267 38.648960 76.065297 67.322639 -34.754868
8 DD 69.429353 61.252800 83.024358 58.038962 -62.001353
9 nan 9.522023 73.009883 41.873986 8.677565 -20.389939
Liam Smith_Y Person X Y
0 18.420494 nan 10 20
1 33.206289 nan 10 20
2 73.833204 John Doe 9.965115 77.950894
3 39.652071 John Doe 44.726136 58.751520
4 88.176561 nan NaN NaN
5 53.776995 Kevin Lee 7.302116 40.577156
6 95.025923 nan NaN NaN
7 26.851864 Liam Smith -34.754868 26.851864
8 102.771046 John Doe 69.429353 61.252800
9 28.633231 John Doe 9.522023 73.009883
</code></pre>
<p>I have tried to write something like this:</p>
<pre><code>for value in df['Event']:
if value == 'AA' :
df['X', 'Y'] = AA
</code></pre>
<p>But get a ValueError: <code>ValueError: Length of values does not match length of index</code></p>
|
<p>If you want to iterate through rows you can try:</p>
<pre><code># iterate through rows
for index, row in df.iterrows():
# check Event value for the row
if row['Event'] == 'AA' :
# update dataframe
df.loc[index,('X', 'Y')] = AA
print(df)
</code></pre>
<p>Result:</p>
<pre><code> Event John Doe_X John Doe_Y Kevin Lee_X Kevin Lee_Y Liam Smith_X \
0 AA 12.603084 81.636376 25.997186 76.733337 -17.683132
1 nan 104.652839 104.064767 56.762357 83.599629 -34.714117
2 BB 69.724434 33.324135 98.452840 57.407782 -8.479175
3 nan 16.361719 51.290716 41.929234 46.494053 -81.882100
4 nan 30.874579 34.683986 95.434111 80.343098 -62.448286
5 CC 77.619875 70.164773 7.385376 40.142712 -55.590472
6 nan 31.214066 54.081010 36.249414 34.218611 -21.754019
7 CC 91.487647 28.307019 71.235864 48.915612 -37.196812
8 DD 45.036216 61.655465 50.231592 29.511502 -4.583804
9 nan 95.249002 25.649100 31.959114 10.234085 -93.106746
X NaN NaN NaN NaN NaN NaN
Liam Smith_Y Person X Y
0 86.267909 nan 10.000000 20.000000
1 43.090388 nan NaN NaN
2 56.330139 John Doe 69.724434 33.324135
3 65.648633 John Doe 16.361719 51.290716
4 16.349304 nan NaN NaN
5 5.528887 Kevin Lee 7.385376 40.142712
6 75.717007 nan NaN NaN
7 100.925457 Liam Smith -37.196812 100.925457
8 87.256541 John Doe 45.036216 61.655465
9 35.361163 John Doe 95.249002 25.649100
X NaN NaN NaN NaN
</code></pre>
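<p>If you only need the fixed <code>AA</code> coordinates wherever <code>Event</code> equals <code>'AA'</code>, a vectorized alternative (a sketch, without iterating) is:</p>
<pre><code>df.loc[df['Event'] == 'AA', ['X', 'Y']] = AA   # the (10, 20) tuple is broadcast to both columns
</code></pre>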
|
python|pandas|indexing|apply
| 1
|
8,459
| 45,705,474
|
Transform with group by in Pandas
|
<p>I am creating a Dataframe </p>
<pre><code>import pandas as pd
df1 = pd.DataFrame( {
    "Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] ,
    "City" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"] } )
df1.groupby( ["City"] )['Name'].transform(lambda x: ','.join(x)).drop_duplicates()
</code></pre>
<p>I want the output as:</p>
<pre><code>                 Name      City
Alice,Bob,Mallory,Bob   Seattle
      Mallory,Mallory  Portland
</code></pre>
<p>but I am getting only:</p>
<pre><code>Name
Alice,Bob,Mallory,Bob
Mallory,Mallory
</code></pre>
<p>This is an example with a small number of columns, but in my actual problem I have too many columns, so I cannot use:</p>
<pre><code>df1['Name'] = df1.groupby( ['City'] )['Name'].transform(lambda x: ','.join(x))
df1.groupby( ['City','Name'], as_index=False )
df1.drop_duplicates()
</code></pre>
<p>because for each column I have to write the same code.<br>
Is there any way to do it without writing transform for each column individually?</p>
|
<p><strong>1. column aggregation</strong></p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>apply</code></a> with <code>','.join</code>; then, to change the column order, use double <code>[[]]</code>:</p>
<pre><code>df = df1.groupby(["City"])['Name'].apply(','.join).reset_index()
df = df[['Name','City']]
print (df)
Name City
0 Mallory,Mallory Portland
1 Alice,Bob,Mallory,Bob Seattle
</code></pre>
<p>Because <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> creates a new column with aggregated values:</p>
<pre><code>df1['new'] = df1.groupby("City")['Name'].transform(','.join)
print (df1)
City Name new
0 Seattle Alice Alice,Bob,Mallory,Bob
1 Seattle Bob Alice,Bob,Mallory,Bob
2 Portland Mallory Mallory,Mallory
3 Seattle Mallory Alice,Bob,Mallory,Bob
4 Seattle Bob Alice,Bob,Mallory,Bob
5 Portland Mallory Mallory,Mallory
</code></pre>
<p><strong>2. columns and more aggregation</strong></p>
<p>If there are more columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>agg</code></a>, either specifying the columns in <code>[]</code> or with no column selection to join all string columns:</p>
<pre><code>df1 = pd.DataFrame( {
"Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] ,
"Name2": ["Alice1", "Bob1", "Mallory1", "Mallory1", "Bob1" , "Mallory1"],
"City" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle",
"Portland"] } )
print (df1)
City Name Name2
0 Seattle Alice Alice1
1 Seattle Bob Bob1
2 Portland Mallory Mallory1
3 Seattle Mallory Mallory1
4 Seattle Bob Bob1
5 Portland Mallory Mallory1
df = df1.groupby('City')[['Name', 'Name2']].agg(','.join).reset_index()
print (df)
City Name Name2
0 Portland Mallory,Mallory Mallory1,Mallory1
1 Seattle Alice,Bob,Mallory,Bob Alice1,Bob1,Mallory1,Bob1
</code></pre>
<p>And if you need to aggregate all columns:</p>
<pre><code>df = df1.groupby('City').agg(','.join).reset_index()
print (df)
City Name Name2
0 Portland Mallory,Mallory Mallory1,Mallory1
1 Seattle Alice,Bob,Mallory,Bob Alice1,Bob1,Mallory1,Bob1
</code></pre>
<hr>
<pre><code>df1 = pd.DataFrame( {
"Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] ,
"Name2": ["Alice1", "Bob1", "Mallory1", "Mallory1", "Bob1" , "Mallory1"],
"City" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"],
'Numbers':[1,5,4,3,2,1]} )
print (df1)
City Name Name2 Numbers
0 Seattle Alice Alice1 1
1 Seattle Bob Bob1 5
2 Portland Mallory Mallory1 4
3 Seattle Mallory Mallory1 3
4 Seattle Bob Bob1 2
5 Portland Mallory Mallory1 1
df = df1.groupby('City').agg({'Name': ','.join,
'Name2': ','.join,
'Numbers': 'max'}).reset_index()
print (df)
City Name Name2 Numbers
0 Portland Mallory,Mallory Mallory1,Mallory1 4
1 Seattle Alice,Bob,Mallory,Bob Alice1,Bob1,Mallory1,Bob1 5
</code></pre>
|
python|python-3.x|pandas|pandas-groupby
| 3
|
8,460
| 45,312,698
|
RuntimeError: Attempted to use a closed Session in tflearn
|
<p>I want to train my model with tflearn, but I get the error shown in the title.
Here is my training loop
(BTW, I split my training inputs into separate numpy files):</p>
<pre><code>for i in range(EPOCHS):
for file in filess:
file = np.load(file)
x = []
y = []
for a, b in file:
x.append(a)
y.append(b[0])
x = np.array(x).reshape(-1,WIDTH,HEIGHT,1)
for sd in range(len(y)):
idx = genres.index(y[sd])
y[sd] = idx
print(y)
y = np.array(y)
try:
model.load(MODEL_NAME)
except:
print("no model")
model.fit({'input': x}, {'targets': y}, n_epoch=1,
snapshot_step=500, show_metric=True, run_id=MODEL_NAME)
    model.save(MODEL_NAME)
</code></pre>
<p>Here is full error message:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 39, in <module>
model.fit({'input': x}, {'targets': y}, n_epoch=1, snapshot_step=500,
show_metric=True, run_id=MODEL_NAME)
File "D:\Anaconda3\envs\python35\lib\site-packages\tflearn\models\dnn.py",
line 215, in fit
callbacks=callbacks)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tflearn\helpers\trainer.py", line 356, in fit
self.train_ops = original_train_ops
File "D:\Anaconda3\envs\python35\lib\contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tensorflow\python\framework\ops.py", line 3625, in get_controller
yield default
File "D:\Anaconda3\envs\python35\lib\site-
packages\tflearn\helpers\trainer.py", line 336, in fit
show_metric)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tflearn\helpers\trainer.py", line 775, in _train
tflearn.is_training(True, session=self.session)
File "D:\Anaconda3\envs\python35\lib\site-packages\tflearn\config.py", line
95, in is_training
tf.get_collection('is_training_ops')[0].eval(session=session)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tensorflow\python\framework\ops.py", line 569, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tensorflow\python\framework\ops.py", line 3741, in
_eval_using_default_session
return session.run(tensors, feed_dict)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tensorflow\python\client\session.py", line 778, in run
run_metadata_ptr)
File "D:\Anaconda3\envs\python35\lib\site-
packages\tensorflow\python\client\session.py", line 914, in _run
raise RuntimeError('Attempted to use a closed Session.')
RuntimeError: Attempted to use a closed Session.
</code></pre>
<p>I really hope you can help me because I have tried for some time now, but I didn't find any solution.</p>
|
<p>I replaced <code>try/except</code> with <code>if os.path.exists(...)</code></p>
<p>But <code>save(MODEL_NAME)</code> doesn't create a single file named <code>MODEL_NAME</code> but several files named <code>"MODEL_NAME.meta"</code>, <code>"MODEL_NAME.index"</code>, <code>"MODEL_NAME.data-00000-of-00001"</code>, so <code>if os.path.exists(...)</code> has to check one of these files.</p>
<pre><code>import os
if os.path.exists(MODEL_NAME + ".meta"):
model.load(MODEL_NAME)
else:
model.fit(...)
model.save(MODEL_NAME)
</code></pre>
<hr>
<p>Created as answer to question: <a href="https://stackoverflow.com/questions/57632831/creating-an-ai-chatbot-but-getting-a-traceback-error">Creating an ai chatbot, but getting a traceback error</a></p>
|
python|numpy|tflearn
| 2
|
8,461
| 62,741,474
|
Is there a way of summing specific columns if they exist in a list?
|
<p>I'm trying to lookup a column in df2 and only sum the columns in df1 that exist in the df2 column</p>
<pre class="lang-py prettyprint-override"><code>df1 =
London, New York, Paris, LA, Chicago
1000, 2000, 5000, 10000, 3000
df2 =
US Cities
New York
Miami
LA
Chicago
Seattle
</code></pre>
<p>result:</p>
<pre class="lang-py prettyprint-override"><code>df1 =
London, New York, Paris, LA, Chicago, Sum of US Cities
1000, 2000, 5000, 10000, 3000, 15000
</code></pre>
|
<p>Here you go:</p>
<pre class="lang-py prettyprint-override"><code>df1['Sum of US Cities'] = df1.loc[:, df1.columns.isin(df2['US Cities'])].sum(axis=1)
</code></pre>
<p>Output</p>
<pre><code> London New York Paris LA Chicago Sum of US Cities
0 1000 2000 5000 10000 3000 15000
</code></pre>
|
python|pandas|list|sum|lookup
| 2
|
8,462
| 62,574,971
|
Convert horizontal values of a pandas dataframe into vertical values
|
<p>I created a pandas dataframe from a dictionary like this:</p>
<pre><code> dictionary={'cat': [B1, B2,B3,B4,B5,B6,B7,B8,B9,B10], 'Dog': [c1, c2,c3], 'Bird': [d1,d2,d3,d4,d5]}
</code></pre>
<p><code>df = pd.DataFrame(dictionary.items(), columns=['ID_1','ID_match'])</code></p>
<p>But I get a table looking like this:</p>
<p><a href="https://i.stack.imgur.com/LuGQJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LuGQJ.png" alt="![![enter image description here" /></a></p>
<p>And I would like to be this way:</p>
<p><a href="https://i.stack.imgur.com/kj7P2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kj7P2.png" alt="![enter image description here" /></a></p>
<p>So far I did this way:</p>
<pre><code>df_2_1=df .replace('', np.nan).set_index('ID_1').stack().reset_index(name='ID_match').drop('level_1',1)
</code></pre>
<p>But I get the second value as list...
Can someone point me in the right direction?</p>
<p><strong>Solution</strong>:</p>
<p>I just needed to expand the second column:
<code>df.explode('ID_match')</code></p>
|
<p>This solution should work. The first <code>.iloc</code> takes every other column starting with the first column, and the second takes every other column starting with the second column.</p>
<pre><code>df1 = df.iloc[:,::2].melt()
df1 = df1['variable']
df2 = df.iloc[:,1::2].melt()
df2 = df2['value']
df3 = pd.DataFrame({'col1':df1, 'col2':df2})
</code></pre>
|
python|pandas
| 0
|
8,463
| 62,532,042
|
Finding the Unique Arrays in an List of Arrays
|
<p>I have a list of arrays, say</p>
<pre><code>List = [A,B,C,D,E,...]
</code></pre>
<p>where each A,B,C etc. is an nxn array.</p>
<p>I wish to have the most efficient algorithm to find the unique nxn arrays in the list. That is, say if all entries of A and B are equal, then we discard one of them and generate the list</p>
<pre><code>UniqueList = [A,C,D,E,...]
</code></pre>
|
<p>Not sure if there is a faster way, but I think this should be pretty fast (using the built-in unique function of numpy and choosing axis=0 to look for nxn unique arrays. More detail in the <a href="https://numpy.org/devdocs/reference/generated/numpy.unique.html" rel="nofollow noreferrer">numpy doc</a>):</p>
<pre><code>[i for i in np.unique(np.array(List),axis=0)]
</code></pre>
<p>Example:</p>
<pre><code>A = np.array([[1,1],[1,1]])
B = np.array([[1,1],[1,2]])
List = [A,B,A]
[array([[1, 1],
[1, 1]]),
array([[1, 1],
[1, 2]]),
array([[1, 1],
[1, 1]])]
</code></pre>
<p>Output:</p>
<pre><code>[array([[1, 1],
[1, 1]]),
array([[1, 1],
[1, 2]])]
</code></pre>
|
python|arrays|list|numpy
| 1
|
8,464
| 54,349,604
|
Export Web Scraped Table to Excel
|
<p>I am having trouble getting pandas to export some web scraped data in the format I want.</p>
<p>I want to visit each URL in <code>URLs</code> and get the various elements from that page and put them into an Excel spreadsheet with the column names specified. I then want to visit the next URL in <code>URLs</code> and put this data on the next row of the Excel sheet so that I have an Excel sheet with 6 columns and three rows of data, one for each plant (each plant in on a separate URL).</p>
<p>Currently I have an error saying <code>ValueError: Length mismatch: Expected axis has 18 elements, new values have 6 elements</code> as the new records are being placed horizontally next to each other rather than on a new row in Excel and Pandas isn't expecting that. </p>
<p>Can someone help pls?
Thanks</p>
<pre><code>import csv
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile
import numpy as np
from urllib2 import urlopen
import bs4
from bs4 import BeautifulSoup
URLs = ["http://adbioresources.org/map/ajax-single/27881",
"http://adbioresources.org/map/ajax-single/27967",
"http://adbioresources.org/map/ajax-single/27880"]
mylist = []
for plant in URLs:
soup = BeautifulSoup(urlopen(plant),'lxml')
table = soup.find_all('td')
for td in table:
mylist.append(td.text)
heading2 = soup.find_all('h2')
for h2 in heading2:
mylist.append(h2.text)
para = soup.find_all('p')
for p in para:
mylist.append(p.text)
df = pd.DataFrame(mylist)
transposed_df = df.T
    transposed_df.columns = ['Status','Type','Capacity','Feedstock','Address1','Address2']
writer = ExcelWriter('Pandas-Example.xlsx')
transposed_df.to_excel(writer,'Sheet1',index=False)
writer.save()
</code></pre>
|
<pre><code># collect one sub-list of values per URL, so each plant becomes one row
masterlist = []
i = 0
for plant in URLs:
    sublist = []
    soup = BeautifulSoup(urlopen(plant),'lxml')
    table = soup.find_all('td')
    for td in table:
        sublist.append(td.text)
    heading2 = soup.find_all('h2')
    for h2 in heading2:
        sublist.append(h2.text)
    para = soup.find_all('p')
    for p in para:
        sublist.append(p.text)
    masterlist.append(sublist)   # append the finished row, not the individual values
    i = i + 1
    print i                      # progress counter (Python 2, as in the question)

# one row per plant, six columns per row
df = pd.DataFrame(masterlist)
df.columns = ['Status','Type','Capacity','Feedstock','Address1','Address2']
writer = ExcelWriter('Pandas-Example.xlsx')
df.to_excel(writer,'Sheet1',index=False)
writer.save()
</code></pre>
|
python|excel|pandas
| 1
|
8,465
| 73,724,397
|
How to get the most repated elements in a dataframe/array
|
<p>I compiled a list of the top artists for every year across 14 years, and I want to get the top 7 for the 14 years combined. My idea was to gather them all in a dataframe and then find the most repeated artists across these years, but it didn't work out.</p>
<pre><code>#Collecting the top 7 artists across the 14 years
artists = []
year = 2020
while year >= 2006:
TAChart = billboard.ChartData('Top-Artists', year = year)
artists.append(str(TAChart))
year -= 1
len(artists)
Artists = pd.DataFrame(artists)
n = 7
Artists.value_counts().index.tolist()[:n]
</code></pre>
|
<p>You're very close - you just need to flatten your list of lists into a single list, then call value_counts:</p>
<pre><code>artists_flat = [a for lst in artists for a in lst]
pd.Series(artists_flat).value_counts().head(n)
</code></pre>
<p>Your current code is counting the occurrences of entire lists (as strings), rather than individual artists.
Also, note that I used head(n) rather than indexing, as this is more robust in case there are ties for the nth place spot.</p>
|
python|python-3.x|pandas|dataframe|data-science
| 0
|
8,466
| 73,732,393
|
Combine excel files
|
<p>Can someone help with how to get the output in an Excel-readable format? I am getting the output as a dataframe, but the data is embedded as a string in rows 2 and 3.</p>
<pre><code>import pandas as pd
import os
input_path = 'C:/Users/Admin/Downloads/Test/'
output_path = 'C:/Users/Admin/Downloads/Test/'
excel_file_list = os.listdir(input_path)
df = pd.DataFrame()
for file in excel_file_list:
if file.endswith('.xlsx'):
df1 = pd.read_excel(input_path+file, sheet_name=None)
        df = df.append(df1, ignore_index=True)
writer = pd.ExcelWriter('combined.xlsx', engine='xlsxwriter')
for sheet_name in df.keys():
df[sheet_name].to_excel(writer, sheet_name=sheet_name, index=False)
writer.save()
</code></pre>
|
<p>Your issue may be in using <code>sheet_name=None</code>. If any of the files have multiple sheets, a dictionary will be returned by pd.read_excel() with {'sheet_name':dataframe} format.</p>
<p>To .append() with this, you can try something like this, using python's Dictionary.items() method:</p>
<pre class="lang-py prettyprint-override"><code>def combotime(dfinput):
df1 = pd.DataFrame()
for k, v in dfinput.items():
        df1 = df1.append(v)   # v is the DataFrame for sheet k
return df1
</code></pre>
<p>EDIT: <strong>If you mean to keep the sheets separate</strong> as implied by your <code>writer</code> loop, do not use a pd.DataFrame() object like your <code>df</code> to add the dictionary items. Instead, add to an existing dictionary:</p>
<pre class="lang-py prettyprint-override"><code>sheets = {}
sheets.update(df1)  # df1 is your read_excel dictionary; dict.update() modifies in place and returns None
for sheet in sheets.keys():
    sheets[sheet].to_excel(writer, sheet_name=sheet, index=False)
</code></pre>
|
python|excel|pandas|export-to-excel
| 0
|
8,467
| 73,626,874
|
How to search for substring in a pandas column from a given list efficiently?
|
<p>How can I search for substring in a column efficiently? If I use str.contains() method, it takes forever to search through the df.</p>
<pre><code>frame = pd.DataFrame({'a' : ['111,222,333,444', '11,44', '222,333,444','666,777','555']})
mylist = ['111', '222', '444','555']
pattern = '|'.join(mylist)
frame.loc[frame.a.str.contains(pattern)]
</code></pre>
<p>Is there a way to make this search faster? This works in a small dataframe but doesn't work if it's big (millions rows).</p>
<p>Thanks,
Sam</p>
|
<p>I think this is better:</p>
<pre><code>import pandas as pd
import re

frame = pd.DataFrame({'a' : ['111,222,333,444', '11,44', '222,333,444','666,777','555']})
mylist = ['111', '222', '444','555']
pattern = '|'.join(mylist)

def regex_filter(val):
    # True if any of the substrings is found, False otherwise (also for empty/missing values)
    if val:
        return bool(re.search(pattern, val))
    return False

frame[frame['a'].apply(regex_filter)]
</code></pre>
|
python|python-3.x|pandas
| 0
|
8,468
| 71,215,627
|
geopandas shape files coordinates
|
<p>I'm currently trying to create geojson files from a set of shape files.</p>
<pre><code>for shape_file in shape_files[1:]:
print(fileName(shape_file))
shp = geopandas.read_file(shape_file)
shp.to_crs(epsg = '4326')
file_name = shape_file[0:len(shape_file) - len('.shp')] + '.geojson'
print(file_name)
print('Adding to JSON file')
shp.to_file(file_name, driver = 'GeoJSON')
print(fileName(file_name) + ' JSON file created.')
print()
print('DONE')
</code></pre>
<p>One of the problems is that the coordinates are not in the format I would like to use.</p>
<p>To combat this I've altered the code to edit the coordinate system but I'm now getting this error.</p>
<p>RuntimeError: b'no arguments in initialization list'</p>
<p>Any suggestions?</p>
|
<p>The type you pass for the EPSG code is incorrect: if you use the <code>epsg</code> keyword it must be an int, not a string. So your code should look like this:</p>
<pre><code>shp.to_crs(epsg = 4326)
</code></pre>
<p>or</p>
<pre><code>shp.to_crs('epsg:4326')
</code></pre>
<p>Also note that <code>to_crs</code> returns a new GeoDataFrame rather than modifying <code>shp</code> in place, so assign the result back, e.g. <code>shp = shp.to_crs(epsg=4326)</code>.</p>
|
geopandas|shapefile|coordinate-systems
| 0
|
8,469
| 71,347,542
|
ValueError for sklearn, problem maybe caused by float32/float64 dtypes?
|
<p>So I want to check the feature importance in a dataset, but I get this error:</p>
<pre><code>ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
</code></pre>
<p>I checked the dataset and fair enough there were nan values. So I added a line to drop all nan rows. Now there are no nan values. I re-ran the code and still the same error. I checked the <code>.dtypes</code> and fair enough, it was all float64. So I added <code>.astype(np.float32)</code> to the columns that I pass to sklearn. But now I still have the same error. I scrolled through the entire dataframe manually and also used <code>data.describe()</code> and all values are between 0 and 5, so far away from infinity or large values.</p>
<p>What is causing the error here?</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt
data = pd.read_csv("data.csv")
data.dropna(inplace=True) #dropping all nan values
X = data.iloc[:,8:42]
X = X.astype(np.float32) #converting data from float64 to float32
y = data.iloc[:,4]
y = y.astype(np.float32) #converting data from float64 to float32
# feature importance
model = ExtraTreesClassifier()
model.fit(X,y)
print(model.feature_importances_)
</code></pre>
|
<p>Your data falls into the third case (a value too large for the dtype), which then turns into the second case (infinity) after the downcast to float32:</p>
<p>Demo:</p>
<pre><code>import numpy as np
a = np.array(np.finfo(np.float64).max)
# array(1.79769313e+308)
b = a.astype('float32')
# array(inf, dtype=float32)
</code></pre>
<p>How to debug? Suppose the following array:</p>
<pre><code>a = np.array([np.finfo(np.float32).max, np.finfo(np.float64).max])
# array([3.40282347e+038, 1.79769313e+308])
a[a > np.finfo(np.float32).max]
# array([1.79769313e+308])
</code></pre>
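<p>Applied to the DataFrame from the question, a quick way to locate the offending rows before the <code>.astype(np.float32)</code> cast (a sketch reusing the same <code>X</code>):</p>
<pre><code>too_large = (X.abs() > np.finfo(np.float32).max).any(axis=1)
print(X[too_large])
</code></pre>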
|
pandas|scikit-learn|sklearn-pandas
| 1
|
8,470
| 71,111,735
|
Python Pandas: Get number of NaN before first non NaN value
|
<p>I have the following DataFrame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<td>NaN</td>
</tr>
<tr>
<td>NaN</td>
</tr>
<tr>
<td>5</td>
</tr>
<tr>
<td>NaN</td>
</tr>
<tr>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to write a function that will return the number of NaN values before the first non-NaN value. Given the above example, the function should return the value 2.</p>
<p>I tried to solve my problem using <a href="https://stackoverflow.com/questions/46926618/number-of-nan-values-before-first-non-nan-value-python-dataframe">this question</a>, but it did not help me much.</p>
<p>Edit: The values always start with a NaN. If the column is all NaN, the function should return the column length.</p>
|
<p>You could use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>isna</code></a> to get True/1 on the NaN values and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.cumprod.html" rel="nofollow noreferrer"><code>cumprod</code></a> to get rid of all values that follow a non-NaN. Then <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.sum.html" rel="nofollow noreferrer"><code>sum</code></a>:</p>
<pre><code>df['y'].isna().cumprod().sum()
</code></pre>
<p>output: <code>2</code></p>
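<p>For example, with the column from the question (and the all-NaN edge case from the edit):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'y': [np.nan, np.nan, 5, np.nan, 7]})
print(df['y'].isna().cumprod().sum())                    # 2

print(pd.Series([np.nan] * 4).isna().cumprod().sum())    # 4 (the column length)
</code></pre>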
|
python|pandas|dataframe
| 3
|
8,471
| 71,139,210
|
Implementation of stack using numpy in python
|
<p>Just as there are implementations of a stack using an array in C++, I was wondering if the same can be done in Python using numpy. This was my take on it:</p>
<pre class="lang-py prettyprint-override"><code>from turtle import shape
import numpy as np
class stack:
def __init__(self):
self.stack = np.empty(shape=(1,100),like=np.empty_like)
self.n = 100
self.top = -1
def push(self, element):
if (self.top >= self.n - 1):
print("Stack overflow")
else:
self.top = self.top + 1
self.stack[self.top] = element
def pop(self):
if (self.top <= -1):
print("Stack Underflow")
else:
print("Popped element: ", self.stack[self.top])
self.top = self.top - 1
def display(self):
if (self.top >= 0):
print("Stack elements are: ")
i = self.top
while i >= 0:
print(self.stack[i], end=", ")
i = i - 1
else:
print("The stack is empty")
def gettop(self):
if (self.top <= -1):
print("Empty stack")
else:
print("Top: ", self.stack[self.top])
def isEmpty(self):
if (self.top == -1):
print("Stack is Empty")
else:
print("Stack is not empty")
if __name__ == "__main__":
s = stack()
ch = 0
val = 0
print("1) Push in stack")
print("2) Pop from stack")
print("3) Get Top")
print("4) Check if Empty")
print("5) Display Stack")
print("6) Exit")
while (ch != 6):
ch = int(input("Enter Choice: "))
print(ch)
if (ch == 1):
val = input("Enter the value to be pushed: ")
s.push(val)
elif (ch == 2):
s.pop()
elif (ch == 3):
s.gettop()
elif (ch == 4):
s.isEmpty()
elif (ch == 5):
s.display()
elif (ch == 6):
print("Exit")
else:
print("Invalid Choice")
</code></pre>
<p>But I am stuck at the creation of stack at the start. It produces a stack with 12 all over when I try to push any element into the array.</p>
<p>And I do know that there are much simpler implementation of the same in python but I was curious if it is possible or not.</p>
|
<p>I messed around with the code for a while and I found a way to do it, and here it is:</p>
<pre class="lang-py prettyprint-override"><code>"""
Time: 15:08
Date: 16-02-2022
"""
import numpy as np
class stack:
def __init__(self):
self.stack = np.array([0,0,0,0,0,0,0,0,0,0])
self.n = 10
self.top = -1
def push(self, element):
if (self.top >= self.n - 1):
print("Stack overflow")
else:
self.top = self.top + 1
self.stack[self.top] = element
def pop(self):
if (self.top <= -1):
print("Stack Underflow")
else:
print("Popped element: ", self.stack[self.top])
self.top = self.top - 1
def display(self):
if (self.top >= 0):
print("Stack elements are: ")
i = self.top
while i >= 0:
print(self.stack[i], end=", ")
i = i - 1
print("")
else:
print("The stack is empty")
def gettop(self):
if (self.top <= -1):
print("Empty stack")
else:
print("Top: ", self.stack[self.top])
def isEmpty(self):
if (self.top == -1):
print("Stack is Empty")
else:
print("Stack is not empty")
if __name__ == "__main__":
s = stack()
ch = 0
val = 0
print("1) Push in stack")
print("2) Pop from stack")
print("3) Get Top")
print("4) Check if Empty")
print("5) Display Stack")
print("6) Exit")
while (ch != 6):
ch = int(input("Enter Choice: "))
print(ch)
if (ch == 1):
val = input("Enter the value to be pushed: ")
s.push(val)
elif (ch == 2):
s.pop()
elif (ch == 3):
s.gettop()
elif (ch == 4):
s.isEmpty()
elif (ch == 5):
s.display()
elif (ch == 6):
print("Exit")
else:
print("Invalid Choice")
</code></pre>
|
python|arrays|numpy|stack
| 0
|
8,472
| 71,363,689
|
converting pandas dataframe to xarray dataset
|
<pre><code> Unnamed: 0 index datetime ... cVI Reg average_temp
0 0 2000-01-01 2000-01-01 ... NaN Central -5.883996
1 1 2000-01-02 2000-01-02 ... NaN Central -6.715087
2 2 2000-01-03 2000-01-03 ... NaN Central -6.074254
3 3 2000-01-04 2000-01-04 ... NaN Central -4.222387
4 4 2000-01-05 2000-01-05 ... NaN Central -0.994825
</code></pre>
<p>I want to convert the dataframe above to an xarray dataset, with <code>datetime</code> as the index. I do this:</p>
<pre><code>ds = xr.Dataset.from_dataframe(df)
</code></pre>
<p>but I am not able to get the <code>datetime</code> column as index. How do I do that?</p>
|
<p>xarray will treat the index in a dataframe as the dimensions of the resulting dataset. A MultiIndex will be unstacked such that each level will form a new orthogonal dimension in the result.</p>
<p>To convert your data to xarray, first set the datetime as index in pandas, with <code>df.set_index('datetime')</code>.</p>
<pre class="lang-py prettyprint-override"><code>ds = df.set_index('datetime').to_xarray()
</code></pre>
<p>Alternatively, you could promote it afterwards, with <code>ds.set_coords('datetime')</code> and then swap the indexing dimension with <a href="https://xarray.pydata.org/en/stable/generated/xarray.Dataset.swap_dims.html" rel="nofollow noreferrer"><code>ds.swap_dims</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>ds = df.to_xarray()
ds.set_coords('datetime').swap_dims({'index': 'datetime'})
</code></pre>
<p>I'd recommend the first option but the second also works if you already have your data as a Dataset and want to swap the index.</p>
|
python|pandas|dataframe|python-xarray
| 1
|
8,473
| 71,300,132
|
Count of active items on day given start and stop date
|
<p>I have a dataframe with 2 columns similar to below.</p>
<pre><code>+------+-------------+------------+
| id | start_date | stop_date |
+------+-------------+------------+
| Foo | 2019-06-01 | 2019-06-03 |
| Bar | 2019-06-07 | 2019-06-10 |
| Pop | 2019-06-09 | 2019-06-11 |
| Bob | 2019-06-13 | |
| Tom | 2019-06-01 | 2019-06-05 |
| Tim | 2019-06-04 | 2019-06-05 |
| Ben | 2019-06-07 | 2019-06-09 |
| Ted | 2019-06-08 | 2019-06-09 |
+------+------------+-------------+
</code></pre>
<p>I need to return 2 df's, one with the count of active items within the date range (example below)</p>
<pre><code>+------------+-------+
| Day |Active |
+------------+-------+
| 2019-06-01 | 2 |
| 2019-06-02 | 2 |
| 2019-06-03 | 2 |
| 2019-06-04 | 2 |
| 2019-06-05 | 2 |
| 2019-06-06 | 0 |
| 2019-06-07 | 2 |
| 2019-06-08 | 3 |
| 2019-06-09 | 4 |
| 2019-06-10 | 2 |
| 2019-06-11 | 1 |
| 2019-06-12 | 0 |
| 2019-06-13 | 1 |
| 2019-06-14 | 1 |
| 2019-06-15 | 1 |
+------------+-------+
</code></pre>
<p>and another that returns a df containing the active items for a given date, e.g.
2019-06-10 returns:</p>
<pre><code> | Bar | 2019-06-07 | 2019-06-10 |
| Pop | 2019-06-09 | 2019-06-11 |
</code></pre>
<p>So far I have tried to return the the second example:</p>
<pre><code>active_date = pd.Timestamp('2019-06-10')
df_active = df[(df['start_date'] <= active_date) & ((df["stop_date"].isnull()) | (df["stop_date"] > active_date))]
</code></pre>
<p>Any help is appreciated!</p>
|
<p>You can do this:</p>
<pre><code>df[["start_date", "stop_date"]] = df[["start_date", "stop_date"]].apply(pd.to_datetime)
df = df.ffill(axis=1)
df["days"] = [
pd.date_range(s, e, freq="D") for s, e in zip(df["start_date"], df["stop_date"])
]
df2 = (
df.explode("days")
.groupby("days")["id"]
.nunique()
.reindex(pd.date_range(df["start_date"].min(), df["stop_date"].max()), fill_value=0)
)
</code></pre>
<p>Output:</p>
<pre><code>2019-06-01 2
2019-06-02 2
2019-06-03 2
2019-06-04 2
2019-06-05 2
2019-06-06 0
2019-06-07 2
2019-06-08 3
2019-06-09 4
2019-06-10 2
2019-06-11 1
2019-06-12 0
2019-06-13 1
Freq: D, Name: id, dtype: int64
</code></pre>
<p>And, use pd.IntervalIndex:</p>
<pre><code>active_date = pd.Timestamp('2019-06-10')
df[
pd.IntervalIndex.from_arrays(df["start_date"], df["stop_date"]).contains(
active_date
)
].drop("days", axis=1)
</code></pre>
<p>Output:</p>
<pre><code> id start_date stop_date
1 Bar 2019-06-07 2019-06-10
2 Pop 2019-06-09 2019-06-11
</code></pre>
|
python|pandas|datetime|date-range
| 2
|
8,474
| 52,240,476
|
Delete array from 2D array
|
<p>I have a 2D array like this:</p>
<pre><code> [array([71, 35, 44, 0])
array([56, 55, 0])
array([32, 90, 11])
array([ 0, 3, 81, 9, 20])
array([0, 0]) array([0, 0]) array([0, 0]) array([ 5, 89])]
</code></pre>
<p>and I want to remove <code>[0, 0]</code></p>
<p>I try to </p>
<p><code>myarray = np.delete(myarray, np.where(myarray == [0, 0]), axis=0)</code></p>
<p>but it doesnt work.</p>
<p>How can I remove <code>[0, 0]</code> ?</p>
|
<p>Use a list comprehension with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.array_equal.html" rel="nofollow noreferrer"><code>np.array_equal</code></a>:</p>
<pre><code>>>> [i for i in arr if not np.array_equal(i, [0,0])]
</code></pre>
<p></p>
<pre><code>[array([71, 35, 44, 0]),
array([56, 55, 0]),
array([32, 90, 11]),
array([ 0, 3, 81, 9, 20]),
array([ 5, 89])]
</code></pre>
<p>However, it is best to not work with jagged numpy arrays, as numpy does not behave well with such arrays.</p>
|
python|python-3.x|numpy
| 3
|
8,475
| 52,166,594
|
How to use Padding in conv2d layer of specific size
|
<p>My input image size is:
<code>256 * 256</code></p>
<p>Conv2d kernel size: <code>4*4</code> with strides of <code>2*2</code>.</p>
<p>The output will be <code>127*127</code>.
I want to pass this to a max-pool layer; for that I want to apply padding to make it <code>128*128</code> so that pooling works well, and the pooling output will be used in other layers.</p>
<p>How can I apply padding for this conv?</p>
<pre><code>conv1 = tf.layers.conv2d(x, 32, (4,4),strides=(2,2), activation=tf.nn.relu)
</code></pre>
|
<p><code>tf.layers.conv2d</code> has a <code>padding</code> parameter that you can use to do this. The default is <code>"valid"</code> which means no padding is done, so each convolution will slightly shrink the input. You can pass <code>padding="same"</code> instead. This will apply padding such that the output of the convolution is equal in size to the input. This is <em>before</em> strides, so using a stride of 2 will still downsample by a factor 2. In your example, using <code>padding="same"</code> should result in the convolution output to have size 128x128.</p>
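<p>Applied to the layer from the question, that just means adding the <code>padding</code> argument:</p>
<pre><code>conv1 = tf.layers.conv2d(x, 32, (4,4), strides=(2,2), padding="same", activation=tf.nn.relu)
</code></pre>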
|
python|tensorflow|image-processing|padding|conv-neural-network
| 0
|
8,476
| 52,387,191
|
Reading large CSV file with Pandas freezes computer
|
<p>I am working with a relatively large CSV file in Python. I am using the pandas <code>read_csv</code> function to import it. The data is on a shared folder at work and around 25 GB.</p>
<p>I have 2x8 GB RAM and an Intel Core i5 processor and I am using a Jupyter notebook. While loading the file the RAM usage goes up to 100%. It stays at 100% or 96% for some minutes, then my computer clock stops and my screen freezes. Even if I wait 2 hours the computer is unusable, so I have to restart.</p>
<p>My question is:
Do I need to split the data? Would it help? Or is it a general performance problem with my laptop?</p>
<p>It is the first time that I am working with such a 'large' dataset (I still think 25 GB is not too much.)</p>
|
<p>For large files, pandas can read them in chunks.</p>
<pre><code>chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
process(chunk)
</code></pre>
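<p>For example, to build a filtered result without ever holding the whole 25 GB in memory (the column name here is hypothetical):</p>
<pre><code>chunks = []
for chunk in pd.read_csv(filename, chunksize=10 ** 6):
    # keep only the rows you actually need from each chunk
    chunks.append(chunk[chunk['some_column'] > 0])
result = pd.concat(chunks, ignore_index=True)
</code></pre>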
|
python|pandas|csv
| 4
|
8,477
| 52,129,486
|
Python: Find the nearest neighbor pairs in a list of point coordinates
|
<p>I have a list of coordinates. The first element of tuple is the slice number. The 2nd and 3rd are the xy coordinate. Now I want to find the set of points which are nearest. So If I have 6 slices, there must be a return list with pairs of 6 coordinates which belong to each other.</p>
<p>Example dataset:</p>
<pre><code>array=[(0.0, 333.56146977926664, 3008.7982175915004),
(0.0, 172.37833716058688, 1649.3905288621663),
(0.0, 77.50966744006188, 2283.89997422016),
(0.0, 116.57814612355257, 581.9223534943867),
(0.0, 340.0307776756536, 2184.111532373366),
(0.0, 313.93527847976634, 318.1736354983754),
(0.0, 547.9143957324791, 1097.8102318962867),
(0.0, 432.92846166979683, 215.97046269421205),
(0.0, 449.53987956249233, 336.3028143050264),
(0.0, 503.6196838777661, 167.45041095890411),
(0.0, 503.75169204737733, 343.3688663282572),
(0.0, 636.3648922234131, 2193.3168988924617),
(0.0, 717.4529664732162, 3457.784309632166),
(0.0, 846.2564443852878, 603.3166955055681),
(0.0, 1535.2396900242939, 3131.323672911694),
(0.0, 1469.1578125128474, 1707.8372376026773),
(0.0, 1396.379645056139, 92.53690691778341),
(0.0, 1415.425816023739, 2627.336980712166),
(0.0, 1486.3829039812647, 2626.749414519906),
(0.0, 1646.0350180728474, 507.73284624177455),
(0.0, 1604.0196393706212, 901.6203629263811),
(0.0, 1842.0375522573343, 1814.7994048597038),
(0.0, 2007.165137614679, 979.321865443425),
(0.0, 2147.671282186245, 1843.2944622254672),
(0.0, 2070.921192867391, 3782.243082598893),
(0.0, 2065.1681055155877, 478.74772182254196),
(0.0, 2104.848780487805, 533.6113821138212),
(0.0, 2380.856736123359, 1090.9250925949254),
(0.0, 2279.9033647854726, 745.2587146163432),
(0.0, 2292.8939526730937, 831.2670902716915),
(0.0, 2474.9004065643826, 3268.4603377155236),
(0.0, 2453.183810923341, 3408.1759735488613),
(0.0, 2585.0363084532373, 2374.9155425659474),
(0.0, 2773.128683566231, 725.2908866474347),
(0.0, 2624.9709841731856, 915.2607786065126),
(0.0, 2714.8521421572996, 971.4124877527389),
(0.0, 2798.4276574009355, 2330.6135276090113),
(0.0, 2783.3074825567405, 934.502365867351),
(0.0, 2865.471879033745, 3063.324635810437),
(0.0, 2908.2407809110628, 1320.8433839479392),
(1.0, 154.6280574750466, 606.6849292530437),
(1.0, 376.7177563593005, 2208.945026828299),
(1.0, 364.8949263599067, 344.15018742067826),
(1.0, 583.3997824789169, 1120.0000411711828),
(1.0, 468.60350318471336, 243.40191082802548),
(1.0, 538.9199157860642, 190.7942831693591),
(1.0, 540.7046009389671, 364.90441314553993),
(1.0, 679.3226418205804, 3433.8067694591027),
(1.0, 673.7542547115934, 2217.11924614506),
(1.0, 889.9705312329176, 620.7821398399841),
(1.0, 1499.330283784636, 3113.874643753002),
(1.0, 1121.688416543088, 3732.0913165266106),
(1.0, 1501.9564873611191, 1734.5836790922503),
(1.0, 1433.621833357739, 118.40620881534632),
(1.0, 1682.7643898580789, 528.6640451282664),
(1.0, 1642.755639624492, 926.4499789827659),
(1.0, 1882.9388528959287, 1840.9367077636455),
(1.0, 2185.5049122512387, 1865.852288185406),
(1.0, 2101.435029585799, 498.7003550295858),
(1.0, 2139.9415746348413, 557.1467759173495),
(1.0, 2413.369506990553, 1121.0576549175898),
(1.0, 2317.6752836026126, 763.9984530766586),
(1.0, 2329.424714229405, 851.9824595979503),
(1.0, 2438.439804695262, 3244.4834643881113),
(1.0, 2420.7388542963886, 3385.5640099626403),
(1.0, 2623.1869865740236, 2399.7963666630476),
(1.0, 2675.525004799386, 812.9451430216932),
(1.0, 2657.7694468189034, 948.9477347661497),
(1.0, 2737.3965619442797, 977.0773562537048),
(1.0, 2758.0705197325856, 1038.3683416001725),
(1.0, 2759.6937973617737, 669.6233511086164),
(1.0, 2836.7935110202584, 2354.228267421443),
(1.0, 2812.892162448116, 961.1602506714414),
(1.0, 428.61883088048717, 1787.5984283811215),
(1.0, 125.98787227509979, 2312.208090267117),
(2.0, 337.10235964928484, 3008.4183779818836),
(2.0, 172.18748968333261, 1650.019251987316),
(2.0, 79.05630596448627, 2285.681286993474),
(2.0, 120.59550561797752, 583.0968845760981),
(2.0, 341.2470033058268, 2182.697769197769),
(2.0, 311.12108963690093, 318.82918379131144),
(2.0, 545.8929083744167, 1095.6975712707547),
(2.0, 429.5308275144852, 221.87297578368742),
(2.0, 448.40031011758623, 337.2594650471637),
(2.0, 500.984451750145, 164.64977760587894),
(2.0, 636.0321566075593, 2195.677728973483),
(2.0, 716.7293186080433, 3460.95140654303),
(2.0, 845.8828675783573, 602.3421387921516),
(2.0, 1535.26017827827, 3127.550027046247),
(2.0, 1158.3666919050597, 3758.0845047239686),
(2.0, 1464.3599847675866, 1709.9265908748323),
(2.0, 1395.8473628318584, 96.62223008849557),
(2.0, 1417.4062823610632, 2615.9252675181224),
(2.0, 1487.533211456429, 2615.8653260207193),
(2.0, 1645.2234663632826, 502.6575851022424),
(2.0, 1609.6065007403229, 898.9125008813368),
(2.0, 1855.3184030907921, 1815.9022279459111),
(2.0, 1870.8715660298424, 1979.6834472179612),
(2.0, 2004.820143884892, 975.848201438849),
(2.0, 2152.2386336712934, 1845.8399522879715),
(2.0, 2064.784335981839, 474.4242905788876),
(2.0, 2101.247747747748, 532.5915915915916),
(2.0, 2378.1860178079924, 1098.1421760767212),
(2.0, 2279.763305997749, 736.8652194886638),
(2.0, 2292.220562217417, 828.8089247100452),
(2.0, 2475.482818773363, 3265.9790945590476),
(2.0, 2460.545879086205, 3408.4455383061972),
(2.0, 2587.7001645639057, 2374.7207167672336),
(2.0, 2777.0123269458654, 733.5197693574959),
(2.0, 2623.891525049658, 918.2103288457295),
(2.0, 2707.5055488701696, 948.1838481080358),
(2.0, 2728.9928986442865, 1012.8095545513235),
(2.0, 2801.1314431698643, 2329.664604518359),
(2.0, 2777.031130820399, 920.7952993348115),
(2.0, 2866.286997193639, 3064.177923292797),
(2.0, 2909.7604712041884, 1317.9070680628272),
(3.0, 157.26388004449086, 1633.8971530944073),
(3.0, 63.67491895078102, 2270.573312702623),
(3.0, 106.90561963190184, 566.6600245398773),
(3.0, 367.2937136901629, 3023.2236402191747),
(3.0, 324.9559239582667, 2168.772604493375),
(3.0, 309.9233307424951, 305.357814310226),
(3.0, 529.6920537916766, 1079.0594923211586),
(3.0, 414.9144941240038, 209.41901931649332),
(3.0, 486.1743528597254, 147.9357282972422),
(3.0, 620.9823211314476, 2181.4336522462563),
(3.0, 747.0673195115955, 3476.260099975416),
(3.0, 830.2220659995038, 587.1107435282441),
(3.0, 1564.271350026497, 3143.328428533576),
(3.0, 1447.2135826080168, 1694.8505366691386),
(3.0, 1379.2560933268524, 82.5819734740643),
(3.0, 1414.0898379970545, 2637.6224840451646),
(3.0, 1458.7942639742464, 2632.1287679250804),
(3.0, 1517.2141613464887, 2632.6935577481136),
(3.0, 1628.7177629761502, 486.4310598852021),
(3.0, 1592.5086746302616, 884.4677901023891),
(3.0, 1846.0530963067265, 1798.3329519786441),
(3.0, 1854.721832632071, 1959.0901622921258),
(3.0, 2140.705506419401, 1827.9028408396168),
(3.0, 2046.6188630490956, 459.6313522825151),
(3.0, 2100.54031783402, 3798.304787129684),
(3.0, 2089.9236276849642, 518.0429594272076),
(3.0, 2362.4917481532502, 1082.4142175546672),
(3.0, 2263.1660548213413, 717.3157427802252),
(3.0, 2275.2605606758834, 812.102534562212),
(3.0, 2502.8201961695036, 3282.5997377120843),
(3.0, 2492.2876712328766, 3424.5829975825945),
(3.0, 2572.046443965517, 2362.7854166666666),
(3.0, 2618.7545978589073, 753.6802834826442),
(3.0, 2600.3548616882345, 902.7820254066247),
(3.0, 2689.188485206103, 918.132850037933),
(3.0, 2822.238917763738, 692.1082028778023),
(3.0, 2713.429400386847, 994.9756838905776),
(3.0, 2785.0664297628546, 2313.4155002779994),
(3.0, 2769.067081895463, 901.3574614931404),
(3.0, 2895.185851318945, 3078.8379450285925),
(3.0, 2894.081445993031, 1303.1498257839721),
(4.0, 323.2980014095852, 2995.5082158679015),
(4.0, 189.00062107109585, 1661.3407749960134),
(4.0, 94.69584245076587, 2304.1963718500742),
(4.0, 138.99071904003, 594.3969250960907),
(4.0, 356.0687429605538, 2199.471494197598),
(4.0, 339.94278226043934, 335.9263649213735),
(4.0, 561.0918771562345, 1107.1995040660424),
(4.0, 444.95715341049004, 240.15735040630386),
(4.0, 515.9579145492189, 174.8035761340109),
(4.0, 521.9724964739069, 361.19340620592385),
(4.0, 652.1437428698973, 2213.3314815733347),
(4.0, 703.7371733205066, 3448.0016033349366),
(4.0, 863.0894711992446, 615.1586402266289),
(4.0, 1519.4442919707187, 3125.3200876862925),
(4.0, 1477.601623813873, 1724.884220058513),
(4.0, 1413.3926438653637, 112.9267779587405),
(4.0, 1473.6105100463678, 2606.236991241628),
(4.0, 1656.7474928951754, 515.9234802208213),
(4.0, 1621.8129663859793, 915.8049565276922),
(4.0, 1876.2702648647105, 1826.619646275608),
(4.0, 1891.8117573483428, 1991.976235146967),
(4.0, 2173.9414860981046, 1854.8787176671942),
(4.0, 2056.983448913546, 3768.833934350439),
(4.0, 2075.197315150224, 487.019603665033),
(4.0, 2117.8430769230768, 548.3425641025641),
(4.0, 2393.594353853348, 1111.8233033653628),
(4.0, 2292.022869692533, 742.7756954612006),
(4.0, 2304.9634436117713, 839.7576311460427),
(4.0, 2459.5847136835887, 3257.187065424575),
(4.0, 2448.2789465232054, 3396.4231572185645),
(4.0, 2601.3766628260187, 2390.8952201389616),
(4.0, 2652.0092467353616, 780.6522145439343),
(4.0, 2645.946173800259, 919.6041319251436),
(4.0, 2736.3547049441786, 956.7454545454545),
(4.0, 2824.591325417979, 814.5812454567482),
(4.0, 2752.9186953438902, 1025.5029543843063),
(4.0, 2815.841896499733, 2343.718099788917),
(4.0, 2799.1525048681183, 923.3706850770047),
(4.0, 2876.0865684798177, 644.4555588202887),
(4.0, 2849.6915052160953, 3051.0775894187777),
(4.0, 2926.4961904761903, 1333.1619047619047),
(5.0, 342.7083988173585, 3010.581102783726),
(5.0, 172.66986024652678, 1644.3937337675723),
(5.0, 79.33176646910553, 2287.3308808501943),
(5.0, 123.69916165562041, 579.1842163016285),
(5.0, 341.2068281113469, 2186.740931219573),
(5.0, 324.24324719150366, 322.68505397864226),
(5.0, 545.4950787372192, 1091.116850511232),
(5.0, 429.0323831242873, 226.10558722919043),
(5.0, 499.3247200058561, 158.63070053436792),
(5.0, 506.5545222465354, 352.4772064186725),
(5.0, 637.0303665431858, 2199.3088808734647),
(5.0, 719.7670217505772, 3464.994248450727),
(5.0, 845.3587260761026, 601.5611619638399),
(5.0, 1536.9222192362013, 3135.167320414663),
(5.0, 1462.9025670193, 1712.0848974329806),
(5.0, 1397.2887252583935, 96.53935013173006),
(5.0, 1389.9905660377358, 2628.7861635220124),
(5.0, 1433.833930046819, 2622.1148443954835),
(5.0, 1491.3909811694748, 2623.4930624380577),
(5.0, 1640.366485013624, 500.02659137914054),
(5.0, 1603.046150785757, 902.7337709700948),
(5.0, 1874.4067119887775, 1811.3451494550557),
(5.0, 2163.080461618503, 1840.8093648015893),
(5.0, 2074.5243486073673, 3782.4163522012577),
(5.0, 2056.7912423625253, 470.181466395112),
(5.0, 2105.2317497103127, 524.8532251834686),
(5.0, 2375.7653644855686, 1097.8645923046913),
(5.0, 2274.869907197827, 725.5888411045722),
(5.0, 2286.78021978022, 823.1176669484362),
(5.0, 2477.5736712443654, 3273.0428766118043),
(5.0, 2461.705685618729, 3413.5393416651996),
(5.0, 2585.4316079444975, 2374.4842538430144),
(5.0, 2722.678243517861, 781.2931102024827),
(5.0, 2630.109522052039, 902.0422587193209),
(5.0, 2721.1595322390303, 938.8852054500591),
(5.0, 2739.7317179655097, 1009.6524776249727),
(5.0, 2799.6063686385432, 2328.704373190728),
(5.0, 2783.330575692964, 904.1339019189766),
(5.0, 2858.7568411552347, 622.506119133574),
(5.0, 2866.938534507792, 3064.189139483109),
(5.0, 2910.775520077407, 1317.6782776971456)]
</code></pre>
<p><strong>So a return could be: [(coordinate0,coordinate1,coordinate2,coordinate3,coordinate4,coordinate5),(coordinate0,coordinate1....]</strong></p>
<p><strong>Minimal Example, where coordinates do not change through the slices:</strong></p>
<pre><code>list=[(0,1,1),
(0,2,2),
(1,1,1),
(1,2,2),
(2,1,1),
(2,2,2),
(3,1,1),
(3,2,2),
(4,1,1),
(4,2,2)]
</code></pre>
<p><strong>return</strong></p>
<pre><code>coordinates=[[(0,1,1),(1,1,1),(2,1,1),(3,1,1),(4,1,1),[(0,2,2),(1,2,2),(2,2,2),(3,2,2),(4,2,2)]
</code></pre>
<p>There is also the possibility that in the slices are not the same number of coordinates</p>
|
<p>Here is some code, looking for the nearest point from one slice to the next:</p>
<pre><code>import numpy as np
from scipy.spatial import KDTree
import matplotlib.pylab as plt

# `array` is the list of (slice, x, y) tuples from the question
slices = np.array(array)
def get_points_on_slice(i):
return slices[ slices[:, 0] == i ][:, (1, 2)]
# Look for the nearest point slice by slice:
n_last_slice = int( np.max(slices[:, 0]) )
start_points = get_points_on_slice(0)
path_through = np.arange(start_points.shape[0]).reshape(1, -1)
for i in range(1, n_last_slice+1):
get_nearest = KDTree(get_points_on_slice(i))
previous_points = get_points_on_slice(i-1)[path_through[-1, :]]
distance, idx_nearest = get_nearest.query(previous_points)
path_through = np.vstack((path_through, idx_nearest))
# `path_through` is a (nbre slices x nbr points on the first slice) array
# with the index of the nearest point on the corresponding slice
# Graph
plt.figure(figsize=(6, 6))
for path_idx in path_through.T:
path_coords = [get_points_on_slice(i)[idx] for i, idx in enumerate(path_idx)]
plt.plot(*zip(*path_coords), 'x-', alpha=.8);
plt.axis('equal');
</code></pre>
<p>The result is:</p>
<p><a href="https://i.stack.imgur.com/CRXAH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CRXAH.png" alt="graph of the merged points"></a></p>
|
python|numpy|scipy
| 1
|
8,478
| 60,357,116
|
Command to print top 10 rows of python pandas dataframe without index?
|
<p><code>head()</code> prints the indexes.
<code>dataframe.to_string(index=False,max_rows=10)</code> prints the first 5 and last 5 rows.</p>
|
<p>You should try this : </p>
<pre><code>print(df.head(n=10).to_string(index=False))
</code></pre>
<p>This will work because <code>df.head</code> return a <code>Dataframe</code> object so you can apply the <code>to_string</code> method to it and get rid of that index ^^.</p>
|
python|pandas|dataframe
| 5
|
8,479
| 60,585,948
|
Convert model.fit_generator to model.fit
|
<p>I have codes in the following, </p>
<pre><code>train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'data/train',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
'data/validation',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
</code></pre>
<p>Now <code>model.fit_generator</code> is defined as following:</p>
<pre><code>model.fit_generator(
train_generator,
steps_per_epoch=2000,
epochs=50,
validation_data=validation_generator,
validation_steps=800)
</code></pre>
<p>Now <code>model.fit_generator</code> is deprecated, what is the proper way to change <code>model.fit_generator</code> to <code>model.fit</code> in this case?</p>
|
<p>You just have to change <code>model.fit_generator()</code> to <code>model.fit()</code>.</p>
<p>As of TensorFlow 2.1, <code>model.fit()</code> also accepts generators as input. As simple as that.</p>
<p>From TensorFlow's official documentation: </p>
<blockquote>
<p>Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future
version. Instructions for updating: Please use Model.fit, which
supports generators.</p>
</blockquote>
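<p>Applied to the code in the question, only the method name changes:</p>
<pre><code>model.fit(
        train_generator,
        steps_per_epoch=2000,
        epochs=50,
        validation_data=validation_generator,
        validation_steps=800)
</code></pre>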
|
keras|tensorflow2.0|tensorflow2.x
| 6
|
8,480
| 32,333,179
|
Remove seconds from date Pandas
|
<p>I have a dataframe that contains a column with a date (StartTime) in the following format: <strong>28-7-2015 0:09:00</strong> the same dataframe contains also a column that contains the number of seconds (SetupDuration1). </p>
<p>I would like to create a new column that subtracts the number of seconds from the datefield, </p>
<pre><code>dftask['Start'] = dftask['StartTime'] - dftask['SetupDuration1']
</code></pre>
<p>The SetupDuration1 column is a numeric column and must stay a numeric column because I do different operations on this column, take absolute value etc. </p>
<p>So how should I subtract the number of seconds in the correct way. ?</p>
|
<p><code>apply</code> a lambda to convert to timedelta and then subtract:</p>
<pre><code>In [88]:
df = pd.DataFrame({'StartTime':pd.date_range(start=dt.datetime(2015,1,1), end = dt.datetime(2015,2,1)), 'SetupDuration1':np.random.randint(0, 59, size=32)})
df
Out[88]:
SetupDuration1 StartTime
0 14 2015-01-01
1 55 2015-01-02
2 21 2015-01-03
3 50 2015-01-04
4 21 2015-01-05
5 6 2015-01-06
6 6 2015-01-07
7 2 2015-01-08
8 10 2015-01-09
9 3 2015-01-10
10 11 2015-01-11
11 32 2015-01-12
12 53 2015-01-13
13 45 2015-01-14
14 48 2015-01-15
15 23 2015-01-16
16 7 2015-01-17
17 5 2015-01-18
18 18 2015-01-19
19 26 2015-01-20
20 48 2015-01-21
21 8 2015-01-22
22 58 2015-01-23
23 24 2015-01-24
24 47 2015-01-25
25 10 2015-01-26
26 32 2015-01-27
27 26 2015-01-28
28 36 2015-01-29
29 36 2015-01-30
30 40 2015-01-31
31 18 2015-02-01
In [94]:
df['Start'] = df['StartTime'] - df['SetupDuration1'].apply(lambda x: pd.Timedelta(x, 's'))
df
Out[94]:
SetupDuration1 StartTime Start
0 14 2015-01-01 2014-12-31 23:59:46
1 55 2015-01-02 2015-01-01 23:59:05
2 21 2015-01-03 2015-01-02 23:59:39
3 50 2015-01-04 2015-01-03 23:59:10
4 21 2015-01-05 2015-01-04 23:59:39
5 6 2015-01-06 2015-01-05 23:59:54
6 6 2015-01-07 2015-01-06 23:59:54
7 2 2015-01-08 2015-01-07 23:59:58
8 10 2015-01-09 2015-01-08 23:59:50
9 3 2015-01-10 2015-01-09 23:59:57
10 11 2015-01-11 2015-01-10 23:59:49
11 32 2015-01-12 2015-01-11 23:59:28
12 53 2015-01-13 2015-01-12 23:59:07
13 45 2015-01-14 2015-01-13 23:59:15
14 48 2015-01-15 2015-01-14 23:59:12
15 23 2015-01-16 2015-01-15 23:59:37
16 7 2015-01-17 2015-01-16 23:59:53
17 5 2015-01-18 2015-01-17 23:59:55
18 18 2015-01-19 2015-01-18 23:59:42
19 26 2015-01-20 2015-01-19 23:59:34
20 48 2015-01-21 2015-01-20 23:59:12
21 8 2015-01-22 2015-01-21 23:59:52
22 58 2015-01-23 2015-01-22 23:59:02
23 24 2015-01-24 2015-01-23 23:59:36
24 47 2015-01-25 2015-01-24 23:59:13
25 10 2015-01-26 2015-01-25 23:59:50
26 32 2015-01-27 2015-01-26 23:59:28
27 26 2015-01-28 2015-01-27 23:59:34
28 36 2015-01-29 2015-01-28 23:59:24
29 36 2015-01-30 2015-01-29 23:59:24
30 40 2015-01-31 2015-01-30 23:59:20
31 18 2015-02-01 2015-01-31 23:59:42
</code></pre>
<p><strong>Timings</strong></p>
<p>Actually it looks quicker to just construct a Timedeltaindex inplace:</p>
<pre><code>In [99]:
%timeit df['Start'] = df['StartTime'] - pd.TimedeltaIndex(df['SetupDuration1'], unit='s')
1000 loops, best of 3: 837 µs per loop
In [100]:
%timeit df['Start'] = df['StartTime'] - df['SetupDuration1'].apply(lambda x: pd.Timedelta(x, 's'))
100 loops, best of 3: 1.97 ms per loop
</code></pre>
<p>So I'd just do:</p>
<pre><code>In [101]:
df['Start'] = df['StartTime'] - pd.TimedeltaIndex(df['SetupDuration1'], unit='s')
df
Out[101]:
SetupDuration1 StartTime Start
0 14 2015-01-01 2014-12-31 23:59:46
1 55 2015-01-02 2015-01-01 23:59:05
2 21 2015-01-03 2015-01-02 23:59:39
3 50 2015-01-04 2015-01-03 23:59:10
4 21 2015-01-05 2015-01-04 23:59:39
5 6 2015-01-06 2015-01-05 23:59:54
6 6 2015-01-07 2015-01-06 23:59:54
7 2 2015-01-08 2015-01-07 23:59:58
8 10 2015-01-09 2015-01-08 23:59:50
9 3 2015-01-10 2015-01-09 23:59:57
10 11 2015-01-11 2015-01-10 23:59:49
11 32 2015-01-12 2015-01-11 23:59:28
12 53 2015-01-13 2015-01-12 23:59:07
13 45 2015-01-14 2015-01-13 23:59:15
14 48 2015-01-15 2015-01-14 23:59:12
15 23 2015-01-16 2015-01-15 23:59:37
16 7 2015-01-17 2015-01-16 23:59:53
17 5 2015-01-18 2015-01-17 23:59:55
18 18 2015-01-19 2015-01-18 23:59:42
19 26 2015-01-20 2015-01-19 23:59:34
20 48 2015-01-21 2015-01-20 23:59:12
21 8 2015-01-22 2015-01-21 23:59:52
22 58 2015-01-23 2015-01-22 23:59:02
23 24 2015-01-24 2015-01-23 23:59:36
24 47 2015-01-25 2015-01-24 23:59:13
25 10 2015-01-26 2015-01-25 23:59:50
26 32 2015-01-27 2015-01-26 23:59:28
27 26 2015-01-28 2015-01-27 23:59:34
28 36 2015-01-29 2015-01-28 23:59:24
29 36 2015-01-30 2015-01-29 23:59:24
30 40 2015-01-31 2015-01-30 23:59:20
31 18 2015-02-01 2015-01-31 23:59:42
</code></pre>
|
python|pandas
| 1
|
8,481
| 40,706,338
|
Adding extra entry in a multi-indexed pandas dataframe from another multi-indexed pandas dataframe
|
<p>I have a multi-indexed pandas dataframe that I have used the <code>groupby</code> method followed by the <code>describe</code> method on to give me the following:</p>
<pre><code> grouped= self.HK_data.groupby(level=[0,1])
summary= grouped.describe()
</code></pre>
<p>which gives: </p>
<pre><code>Antibody Time
Customer_Col1A2 0 count 3.000000
mean 0.757589
std 0.188750
min 0.639933
25% 0.648732
50% 0.657532
75% 0.816417
max 0.975302
10 count 3.000000
mean 0.716279
std 0.061939
min 0.665601
25% 0.681757
50% 0.697913
75% 0.741618
max 0.785324
... .........
</code></pre>
<p>I have calculated the <code>SEM</code> using:</p>
<pre><code> SEM=grouped.mean()/(numpy.sqrt(grouped.count()))
</code></pre>
<p>Giving:</p>
<pre><code>Antibody Time
Customer_Col1A2 0 0.437394
10 0.413544
120 0.553361
180 0.502792
20 0.512797
240 0.514609
30 0.505618
300 0.481021
45 0.534658
5 0.425800
60 0.430633
90 0.525115
... .........
</code></pre>
<p>How do I <code>concat</code> these two frames such that the SEM's become another entry of the summary statistics? </p>
<p>So something like:</p>
<pre><code>Antibody Time
Customer_Col1A2 0 count 3.000000
mean 0.757589
std 0.188750
min 0.639933
25% 0.648732
50% 0.657532
75% 0.816417
max 0.975302
SEM 0.437394
10 count 3.000000
mean 0.716279
std 0.061939
min 0.665601
25% 0.681757
50% 0.697913
75% 0.741618
max 0.785324
SEM 0.413544
</code></pre>
<p>I've tried <code>pandas.concat</code> but this didn't give me what I want. </p>
<p>Thanks!</p>
|
<p>I think you first need to add a third level to the <code>MultiIndex</code>, assign the new index with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_tuples.html" rel="nofollow noreferrer"><code>MultiIndex.from_tuples</code></a>, and finally use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a>:</p>
<pre><code>HK_data = pd.DataFrame({'Antibody':['Customer_Col1A2','Customer_Col1A2','Customer_Col1A2'],
'Time':[0,10,10],
'Col':[7,8,9]})
HK_data = HK_data.set_index(['Antibody','Time'])
print (HK_data)
Col
Antibody Time
Customer_Col1A2 0 7
10 8
10 9
</code></pre>
<pre><code>grouped= HK_data.groupby(level=[0,1])
summary= grouped.describe()
print (summary)
Col
Antibody Time
Customer_Col1A2 0 count 1.000000
mean 7.000000
std NaN
min 7.000000
25% 7.000000
50% 7.000000
75% 7.000000
max 7.000000
10 count 2.000000
mean 8.500000
std 0.707107
min 8.000000
25% 8.250000
50% 8.500000
75% 8.750000
max 9.000000
SEM=grouped.mean()/(np.sqrt(grouped.count()))
#change multiindex
new_index = list(zip(SEM.index.get_level_values('Antibody'),
SEM.index.get_level_values('Time'),
['SEM'] * len(SEM.index)))
SEM.index = pd.MultiIndex.from_tuples(new_index, names=('Antibody','Time', None))
print (SEM)
Col
Antibody Time
Customer_Col1A2 0 SEM 7.000000
10 SEM 6.010408
</code></pre>
<pre><code>df = pd.concat([summary, SEM]).sort_index()
print (df)
Col
Antibody Time
Customer_Col1A2 0 25% 7.000000
50% 7.000000
75% 7.000000
SEM 7.000000
count 1.000000
max 7.000000
mean 7.000000
min 7.000000
std NaN
10 25% 8.250000
50% 8.500000
75% 8.750000
SEM 6.010408
count 2.000000
max 9.000000
mean 8.500000
min 8.000000
std 0.707107
</code></pre>
|
python|pandas|multi-index
| 2
|
8,482
| 40,372,026
|
Importing TensorFlow graph fails for uninitialized variables
|
<p>I'm trying to export the multi layer perceptron example as a .pb graph.
In order to do it, I have named the input variables and output operation and added the following line:</p>
<pre><code>tf.train.write_graph(sess.graph_def, "./", "graph.pb", False)
</code></pre>
<p>To import, I did the following:</p>
<pre><code>with gfile.FastGFile("graph.pb",'rb') as f:
print("load graph")
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
with tf.Session() as persisted_sess:
persisted_result = persisted_sess.graph.get_tensor_by_name("output:0")
avd = persisted_sess.run(persisted_result, feed_dict={"input_x:0": features_t})
print ("Result:", str(avd))
</code></pre>
<p>It does import fine but throws an error for the "run" line.</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 972, in _do_call
return fn(*args)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 954, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_3
[[Node: Variable_3/read = Identity[T=DT_FLOAT, _class=["loc:@Variable_3"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_3)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "teste.py", line 56, in <module>
avd = persisted_sess.run(persisted_result, feed_dict={"input_x:0": features_t})
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 915, in _run
feed_dict_string, options, run_metadata)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _do_run
target_list, options, run_metadata)
File "/usr/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 985, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value Variable_3
[[Node: Variable_3/read = Identity[T=DT_FLOAT, _class=["loc:@Variable_3"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_3)]]
Caused by op 'Variable_3/read', defined at:
File "teste.py", line 37, in <module>
_ = tf.import_graph_def(graph_def, name='')
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 285, in import_graph_def
op_def=op_def)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
self._traceback = _extract_stack()
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Variable_3
[[Node: Variable_3/read = Identity[T=DT_FLOAT, _class=["loc:@Variable_3"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable_3)]]
</code></pre>
<p>I have tried to initialize all variables but it does not work.</p>
|
<p>TensorFlow splits saving the Graph definition and the Variable values in different files (graph and checkpoint respectively).</p>
<p>You want to use the TF Saver. </p>
<p>See this answer for details:
<a href="https://stackoverflow.com/a/33762168/4120005">https://stackoverflow.com/a/33762168/4120005</a></p>
<p>Or the documentation here:
<a href="https://www.tensorflow.org/versions/r0.11/how_tos/variables/index.html#saving-variables" rel="nofollow noreferrer">https://www.tensorflow.org/versions/r0.11/how_tos/variables/index.html#saving-variables</a></p>
<p>If you really need to restore just from the graphdef file (*.pb), to load it from C++ for instance, you will need to use the freeze_graph.py script from here:
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py</a></p>
<p>This script takes a graphdef (*.pb) and a checkpoint (*.ckpt) file as input and outputs a graphdef file which contains the weights in the form of constants (you can read the docs on the script for more details).</p>
|
python|tensorflow
| 2
|
8,483
| 62,009,696
|
How to get the row position based on column value if dataframe has been resorted?
|
<p>My dataframe has been resorted so the index values have lost their sequence.</p>
<p>In this example below, when I select for df_ranking.Ticker == 'WRB ', I want to get 0 instead of 478.</p>
<pre><code>In [113]: df_ranking.head()
Out[113]:
Ticker TrendScoreStr TrendScoreNum
478 WRB GCAGA 2000100000200
259 ISRG CMAMA 2000100000000
18 ALGN DGAGA 2000001000200
106 CINF GADMA 2000001000100
450 TRV GADMA 2000001000100
</code></pre>
|
<p>Do this:</p>
<pre><code>In [1788]: df_ranking.reset_index(drop=True, inplace=True)
In [1789]: df_ranking
Out[1789]:
Ticker TrendScoreStr TrendScoreNum
0 WRB GCAGA 2000100000200
1 ISRG CMAMA 2000100000000
2 ALGN DGAGA 2000001000200
3 CINF GADMA 2000001000100
4 TRV GADMA 2000001000100
In [1790]: df_ranking[df_ranking.Ticker.eq('WRB')]
Out[1790]:
Ticker TrendScoreStr TrendScoreNum
0 WRB GCAGA 2000100000200
</code></pre>
|
python|pandas|dataframe
| 1
|
8,484
| 61,850,718
|
How to fill multiple values into One Column in pandas dataframe? (without using strings) Python
|
<p>so far I managed to do this by using a string and splitting it up later.</p>
<pre><code>print(df)
a b c z
0 0 0 0 "23,8,100"
1 1 1 1 "23,2,100"
2 2 2 2 "1,8,100"
3 3 3 3 "23,5,300"
4 4 4 4 "23,8,7"
# converting column to list
x_list = df["z"].tolist()
# splitting via list comprehension
[[float(x) for x in xstring.split(",")] for xstring in xlist]
</code></pre>
<p>But I wonder if there is a faster way to put a small <strong>list</strong> [23,8,100] into one single column and receiving a <strong>list</strong> back when calling the index in dataframe.
(or even better: calling the whole column as a list of lists)</p>
<p>(The number of elements in the list depends on a static input,
so when I have 3 elements it will always be lists of 3,
but I could also enter 100, and then every list will have 100 elements.)</p>
|
<p>IIUC you want to convert string to list<br>
First, remove extra <code>"</code> quotes using <code>strip</code> then split string into list </p>
<pre class="lang-py prettyprint-override"><code>df.z.str.strip('"').str.split(',')
0 [23, 8, 100]
1 [23, 2, 100]
2 [1, 8, 100]
3 [23, 5, 300]
4 [23, 8, 7]
Name: z, dtype: object
</code></pre>
<p>And use <code>map</code> to convert string to <code>float</code> with <code>apply</code></p>
<pre class="lang-py prettyprint-override"><code>df.z.str.strip('"').str.split(',').apply(lambda x:list(map(float,x)))
0 [23.0, 8.0, 100.0]
1 [23.0, 2.0, 100.0]
2 [1.0, 8.0, 100.0]
3 [23.0, 5.0, 300.0]
4 [23.0, 8.0, 7.0]
Name: z, dtype: object
</code></pre>
<p>or you can assign back</p>
<pre class="lang-py prettyprint-override"><code>df.z = df.z.str.strip('"').str.split(',').apply(lambda x:list(map(float,x)))
</code></pre>
|
python|arrays|pandas|list|multiple-columns
| 0
|
8,485
| 57,846,434
|
Difference between sample step and time step in LSTM (Keras)
|
<p>I'm trying to understand how the state progresses in an LSTM-layer. If I have the following code</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(LSTM(2, return_sequences=True,input_shape=(4,2),stateful=False,batch_size=4))
yp=model.predict(np.array([ [[0,0],[0,1],[0,0],[1,1]],
[[0,1],[0,0],[1,1],[0,0]],
[[0,0],[1,1],[0,0],[0,1]],
[[1,1],[0,0],[0,1],[0,0]],
]))
print(yp)
</code></pre>
<p>why do I not get yp[0,:,:] equal yp[:,0,:]?</p>
|
<p>Your input is an array of <code>shape=(4,4,2)</code>, where the first 4 is batch size (4 samples), second 4 is time_step, and the last 2 is the input_dim of each time_step. </p>
<h2>Difference between sample step and time step in LSTM</h2>
<p>Each sample can have multiple time steps, and different samples may even have different numbers of time steps.</p>
<h2>Why do I not get yp[0,:,:] equal yp[:,0,:]?</h2>
<p>You need to understand that RNN blocks such as LSTM/GRU can <strong>capture the dependency between time steps.</strong> More concretely, an LSTM has different gates that carry information from the previous time step into the current one.<br>
In your example, when the first sample <code>[[0,0],[0,1],[0,0],[1,1]]</code> goes into the model, the LSTM block actually <strong>processes the time steps one after another, from [0,0]->[0,1]->...[1,1].</strong> This is how forward propagation in an RNN is done. Therefore, the output at a given time step depends on all earlier time steps of that sample, so yp[0,:,:] and yp[:,0,:] will generally differ.</p>
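<p>A minimal sketch of this effect (assuming a standard Keras import; the exact numbers depend on the random weight initialization):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(2, return_sequences=True, input_shape=(4, 2)))

# the same input vector [0,0] appears at t=0 and at t=3,
# but by t=3 the hidden state already "remembers" the [1,1] seen at t=2
x = np.array([[[0, 0], [0, 0], [1, 1], [0, 0]]], dtype=float)
y = model.predict(x)
print(y[0, 0], y[0, 3])  # generally different
</code></pre>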
<p>Here is a <a href="https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21" rel="nofollow noreferrer">blog</a> gives good explaination of LSTM/GRU with visualisation.</p>
|
tensorflow|keras|lstm
| 0
|
8,486
| 57,828,028
|
How can I add a column of one data frame to another based on the nearest identifier?
|
<p>Problem:</p>
<ol>
<li><p>I have a data frame <code>foo</code> that contains measurements and a <code>common_step</code> column, which contains integers indicating when each row was measured.</p></li>
<li><p>I have a second data frame that also contains a <code>common_step</code> column and a <code>bar_step</code> column. It translates between the two integer steps.</p></li>
<li><p>I would like to add <code>bar_step</code> as a column to <code>foo</code>. However, the <code>common_step</code> values of both data frames are not aligned.</p></li>
<li><p>Thus, for each row in <code>foo</code>, I would like to find the row in <code>bar</code> with the nearest <code>global_step</code> and add its <code>bar_step</code> to the <code>foo</code> row.</p></li>
<li><p>I have found a way to do this. However, the solution is very slow. This is because for every row in <code>foo</code>, it searches through all rows in <code>bar</code> to find the one with closest <code>global_step</code>.</p></li>
</ol>
<pre class="lang-py prettyprint-override"><code>foo.sort_values('common_step', inplace=True)
bar.sort_values('common_step', inplace=True)
def find_nearest(foo_row):
index = abs(bar.common_step - foo_row.common_step).idxmin()
return bar.loc[index].bar_step
foo['bar_step'] = scores.apply(find_nearest, axis=1)
</code></pre>
<p>Questions:</p>
<ul>
<li>How I can add the closest match for <code>bar_step</code> to the <code>foo</code> data table with sub quadratic run time?</li>
<li>Moreover, it would be ideal to have a flag that chooses the row with the closest but <em>smaller</em> <code>global_step</code>.</li>
</ul>
|
<p>As @QuangHoang suggested in the comment, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer">merge_asof</a> does this. Moreover, the second data frame should contain no other columns to not interfere with existing columns in the first one:</p>
<pre class="lang-py prettyprint-override"><code>foo.sort_values('common_step', inplace=True)
bar.sort_values('common_step', inplace=True)
bar = bar[['bar_step', 'common_step']]
foo = pandas.merge_asof(foo, bar, on='common_step', direction='backward')
</code></pre>
<p>The <code>direction</code> parameter specifies whether to use the nearest lower match, nearest higher match, or nearest match considering both directions. From the documentation:</p>
<blockquote>
<ul>
<li>A “backward” search selects the last row in the right DataFrame whose ‘on’ key is less than or equal to the left’s key.</li>
<li>A “forward” search selects the first row in the right DataFrame whose ‘on’ key is greater than or equal to the left’s key.</li>
<li>A “nearest” search selects the row in the right DataFrame whose ‘on’ key is closest in absolute distance to the left’s key.</li>
</ul>
</blockquote>
|
python|pandas
| 0
|
8,487
| 58,049,454
|
Is this TF training curve overfitting or underfitting?
|
<p>In the case of overfitting, to my knowledge the <code>val_loss</code> has to soar as opposed to the <code>train_loss</code>.
But how about the case below (<code>val_loss</code> remains low)? Is this model underfitting horribly? Or is it some completely different case?
Previously my models would overfit badly so I added the dropout of 0.3 (4 CuDNNGRU layers with 64 neurons and one Dense layer and batchsize of 64), so should I reduce the dropout?</p>
<p><a href="https://i.stack.imgur.com/1SoVK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1SoVK.png" alt="train_loss vs. validation_loss" /></a></p>
|
<p>This is neither overfitting nor underfitting. Some people refer to it as <em><a href="https://stats.stackexchange.com/questions/187335/validation-error-less-than-training-error">Unknown fit</a></em>. Validation << training loss happens when you apply regularization (L1, L2, Dropout, ...) in keras because they are applied to training only and not on testing (validating). So it makes sense that your training loss is bigger (not all neurons are available for feed forward due to dropout for example).</p>
<p>But what is clear is that your model is not being optimized for your validation set, (almost a flat line). This can be due to many things:</p>
<ul>
<li>Your validation set is not representative of your dataset: it may be very small or contain examples that are very easy to predict.</li>
<li>decrease learning rate or add more regularization (recurrent_regularization since you are using CuDNNGRU)</li>
<li>Your loss function is not appropriate for the problem you are trying to solve.</li>
</ul>
<p>Hope these tips help you out.</p>
|
python|tensorflow|keras|deep-learning
| 2
|
8,488
| 58,082,903
|
python Panda float number get rounded while converting to string
|
<p>I have this CSV file</p>
<pre><code>id,adset_id,source
1,,google
2,23843814084680281,facebook
3,,google
4,23843814088700279,facebook
5,23843704830370464,facebook
</code></pre>
<p>My problem is that when I try to read it with pandas, since I cannot pass a schema, pandas infers the dtype of the <code>adset_id</code> column as float64 (because of the NaN values).</p>
<p>So if I write this</p>
<pre><code>import pandas as pd
df = pd.read_csv('/Users/test/Desktop/float.csv')
print(df)
</code></pre>
<p>I will get scientific notation for <code>adset_id</code>
result:</p>
<pre><code> id adset_id source
0 1 NaN google
1 2 2.384381e+16 facebook
2 3 NaN google
3 4 2.384381e+16 facebook
4 5 2.384370e+16 facebook
</code></pre>
<p>I could not find any way to fix this so I tried to do a hack and convert this number to String. But in order to do that, I need to convert it to <code>int64</code> first and after that convert it to string.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('/Users/test/Desktop/float.csv')
df = df.fillna({'adset_id':-1})
df['adset_id'] = df['adset_id'].astype('int64')
df['adset_id'] = df['adset_id'].astype('str')
df['adset_id'].replace('-1', np.NaN, inplace=True)
print(df)
</code></pre>
<p>The result is:</p>
<pre><code> id adset_id source
0 1 NaN google
1 2 23843814084680280 facebook
2 3 NaN google
3 4 23843814088700280 facebook
4 5 23843704830370464 facebook
</code></pre>
<p>As you can see 2 of my <code>adset_id</code> get rounded:<br>
<code>23843814084680281</code> -> <code>23843814084680280</code><br>
<code>23843814088700279</code> -> <code>23843814088700280</code></p>
<p>I just want to be able to read this CSV to panda data frame and don't get <code>adset_id</code> as scientific notation, any solution would be appreciated</p>
|
<p>Within <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pd.read_csv</code></a>, look at the <code>dtype</code> argument: you can pass a dictionary of dtypes to ensure the column is read as a string.</p>
<pre><code>df = pd.read_csv('PATH_TO_CSV.csv', dtype={'adset_id':str})
</code></pre>
<p>You can also look at the <code>na_values</code>, <code>keep_default_na</code>, and <code>na_filter</code> arguments to help with handling NULLs</p>
|
python|pandas|csv|scientific-notation
| 1
|
8,489
| 57,777,003
|
Column <x> has dtype object, cannot use method 'nsmallest' with this dtype
|
<p>Any query I try on my table complains that</p>
<pre><code>Column 'time' has dtype object, cannot use method 'nsmallest' with this dtype
</code></pre>
<p>However, when I look at the table source that I queried, the schema reports types of long. In this case, I am querying Treasure Data.</p>
<p>How then do I sort my dataframe by a column value and the the one with the least?</p>
|
<p>You sort it by:</p>
<pre><code>df.sort_values(['time'], inplace=True)
</code></pre>
<p>If you use:</p>
<pre><code>df['time'].sort_values(ascending=True).head(n)
</code></pre>
<p>You should get the first n smallest values as well.</p>
|
pandas|dataframe
| 1
|
8,490
| 57,786,652
|
pandas merge two dataframe to form a multiindex
|
<p>I'm playing around with Pandas to see if I can do some stock calculations better/faster than with other tools. If I have a single stock it's easy to create a daily calculation:</p>
<pre><code>df['mystuff'] = df['Close']+1
</code></pre>
<p>If I download more than one ticker it gets complicated:</p>
<pre><code>df = df.stack()
df['mystuff'] = df['Close']+1
df = df.unstack()
</code></pre>
<p>If I want to use the previous day's "Close" it gets too complex for me. I thought I might go back to fetching a single ticker, do any operation with iloc[i-1] or something similar (I haven't figured it out yet), and then merge the dataframes.</p>
<p>How do I merge two dataframes of single tickers to get a MultiIndex? So that:</p>
<pre><code>f1 = web.DataReader('AAPL', 'yahoo', start, end)
f2 = web.DataReader('GOOG', 'yahoo', start, end)
</code></pre>
<p>is like</p>
<pre><code>f = web.DataReader(['AAPL','GOOG'], 'yahoo', start, end)
</code></pre>
<p>Edit:
This is the nearest thing to f I can create. It's not exactly the same so I'm not sure I can use it instead of f.</p>
<pre><code>f_f = pd.concat({'AAPL':f1,'GOOG':f2},axis=1)
</code></pre>
<p>Maybe I should experiment with operations working on a multiindex instead of splitting work on simpler dataframes.</p>
<p>Full Code:</p>
<pre><code>import pandas_datareader.data as web
import pandas as pd
from datetime import datetime
start = datetime(2001, 9, 1)
end = datetime(2019, 8, 31)
a = web.DataReader('AAPL', 'yahoo', start, end)
g = web.DataReader('GOOG', 'yahoo', start, end)
# here are shift/diff calculations that I don't know how to do with a multiindex
a_g = web.DataReader(['AAPL','GOOG'], 'yahoo', start, end)
merged = pd.concat({'AAPL':a,'GOOG':g},axis=1)
a_g.to_csv('ag.csv')
merged.to_csv('merged.csv')
import code; code.interact(local=locals())
</code></pre>
<p>Side note: I don't know how to compare the two CSV files.</p>
|
<p>This is not exactly the same, but it returns a MultiIndex you can use as in the <code>a_g</code> case:</p>
<pre class="lang-py prettyprint-override"><code>import pandas_datareader.data as web
import pandas as pd
from datetime import datetime
start = datetime(2019, 7, 1)
end = datetime(2019, 8, 31)
out = []
for tick in ["AAPL", "GOOG"]:
d = web.DataReader(tick, 'yahoo', start, end)
cols = [(col, tick) for col in d.columns]
d.columns = pd.MultiIndex\
.from_tuples(cols,
names=['Attributes', 'Symbols'] )
out.append(d)
df = pd.concat(out, axis=1)
</code></pre>
<p><strong>Update</strong></p>
<p>If you want to calculate and add a new column when you already have MultiIndex columns, you can follow this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas_datareader.data as web
import pandas as pd
from datetime import datetime
start = datetime(2019, 7, 1)
end = datetime(2019, 8, 31)
ticks = ['AAPL','GOOG']
df = web.DataReader(ticks, 'yahoo', start, end)
names = list(df.columns.names)
df1 = df["Close"].shift()
cols = [("New", col) for col in df1.columns]
df1.columns = pd.MultiIndex.from_tuples(cols,
names=names)
df = df.join(df1)
</code></pre>
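<p>For a quick check that the shifted column landed where expected (a hypothetical access pattern, selecting the <code>"New"</code> attribute level first):</p>
<pre><code># Previous day's Close for AAPL, pulled out of the MultiIndex columns.
print(df["New"]["AAPL"].head())
</code></pre>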
|
pandas|pandas-datareader
| 1
|
8,491
| 36,715,067
|
Scipy's leastsq with complex numbers
|
<p>I'm trying to use scipy.optimize.leastsq with complex numbers. I know there are already some questions about this, but I still can't get my simple example working; it complains about casting from complex to real numbers.</p>
<p>If I did it right the solution to the below should be <code>x=[1+1j,2j]</code>:</p>
<pre><code>import numpy as np
from scipy.optimize import leastsq
def cost_cpl(x,A,b):
return np.abs(np.dot(A,x)-b)
A=np.array([[1,1],[2j,0]],dtype=np.complex128)
b=np.array([1+3j,-2+2j],dtype=np.complex128)
x,r=leastsq(cost_cpl,np.array([0+0j,0+0j]),args=(A,b))
print(x)
print(r)
</code></pre>
<p>But I'm getting</p>
<pre><code>TypeError: Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'
</code></pre>
<p>EDIT: If I change the first guess from <code>np.array([0+0j,0+0j])</code> to <code>np.array([0,0])</code> the function runs but I get the wrong answer (a real one).</p>
|
<p>Since <code>leastsq()</code> can only accept real numbers, you need the <code>.view()</code> method to reinterpret the complex array as a real array and back; each complex number becomes two doubles, which is why the initial guess below has four entries.</p>
<pre><code>import numpy as np
from scipy.optimize import leastsq
def cost_cpl(x, A, b):
return (np.dot(A, x.view(np.complex128)) - b).view(np.double)
A = np.array([[1,1],[2j,0]],dtype=np.complex128)
b = np.array([1+3j,-2+2j],dtype=np.complex128)
init = np.array([0.0, 0.0, 0.0, 0.0])
x, r = leastsq(cost_cpl, init, args=(A, b))
print(x.view(np.complex128))
</code></pre>
<p>output:</p>
<pre><code>array([ 1.00000000e+00+1.j, 4.96506831e-16+2.j])
</code></pre>
|
python|numpy|scipy|complex-numbers|least-squares
| 3
|
8,492
| 54,816,225
|
Multi-hot labels encoding
|
<p>I'm new to TensorFlow. I have an image dataset with several labels per image. As far as I understand, I need to use <code>tf.losses.sigmoid_cross_entropy()</code>. I tried to apply <code>tf.one_hot</code> to the labels, but when I pass them into the loss function I get a "shapes incompatible" error. How can I fix this?</p>
|
<p>You're right about <code>tf.losses.sigmoid_cross_entropy</code>. All you need to do is wrap <code>tf.one_hot</code> with <code>tf.reduce_max</code> to reduce dimensionality like this. </p>
<pre><code>tf.reduce_max(tf.one_hot(labels, num_classes, dtype=tf.int32), axis=0)
</code></pre>
<p>That should return a tensor of shape <code>(num_classes,)</code>, exactly what is needed for your loss function.</p>
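<p>For instance, a small sketch (assuming <code>num_classes = 5</code> and one image labelled with classes 1 and 3; run eagerly or evaluate in a session):</p>
<pre><code>import tensorflow as tf

labels = tf.constant([1, 3])  # the classes present in one image
num_classes = 5

# one_hot gives one row per label; reduce_max collapses them into a multi-hot vector.
multi_hot = tf.reduce_max(tf.one_hot(labels, num_classes, dtype=tf.int32), axis=0)
# multi_hot evaluates to [0, 1, 0, 1, 0]
</code></pre>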
|
python|tensorflow
| 3
|
8,493
| 54,915,755
|
ANSI codes not working in an ndarray of strings
|
<p>I think the best way to explain my problem is to just show it:</p>
<pre><code>import numpy as np
coloured_letters = np.ndarray(shape=(2, 2), dtype="<U100")
print("\033[1;32;40m A test \033[30m")
def fill(ndarray):
    # write the coloured string into every cell of the 2x2 array
    for y in range(2):
        for x in range(2):
            ndarray[y][x] = "\033[1;32;40m A test \033[30m"
fill(coloured_letters)
print(coloured_letters)
</code></pre>
<p>Outputs:</p>
<pre><code> A test
[['\x1b[1;32;40m A test \x1b[30m' '\x1b[1;32;40m A test \x1b[30m']
['\x1b[1;32;40m A test \x1b[30m' '\x1b[1;32;40m A test \x1b[30m']]
</code></pre>
<p>Where the "A test" is in bright green with a white background.</p>
|
<p>Numpy is storing exactly the values you want. However, when you print the variable <code>coloured_letters</code>, numpy calls the array's <code>__repr__</code>/<code>__str__</code>, which escapes each string into a printable representation using plain ASCII characters, so the terminal never sees the raw escape codes.</p>
<p>If you print any single element of <code>coloured_letters</code>, it will render correctly. If you still want the array-like layout, you can print each row yourself with brackets around it:</p>
<pre><code>for row in coloured_letters:
    print("[" + ",".join(row) + "]")
</code></pre>
<p>That will print something like the following with each <code>A test</code> being green on white.</p>
<pre><code>[A test, A test]
[A test, A test]
</code></pre>
|
python|numpy
| 1
|
8,494
| 54,732,675
|
Python Keras Prediction returning nan
|
<p>I am having problems understanding how Keras works with data and why my model is not working as expected. I am trying to build a small model that can predict cities from longitude and latitude input.</p>
<p>What I would like to see is that when I make a prediction with, for example, the coordinates at index zero of the cities array, index zero of the output array has the largest activation value.</p>
<p><strong>My current model with Keras & Tensorflow</strong></p>
<p><strong>Data</strong>
The latitude and longitude data is normalized between 0 and 1.</p>
<pre><code>cities = [];
cities.append([60.1695213,24.9354496]); #1
cities.append([60.2052002,24.6522007]); #2
cities.append([61.4991112,23.7871208]); #3
cities.append([64.222176,27.72785]); #4
cities.append([60.4514809,22.2686901]); #5
cities.append([65.0123596,25.4681606]); #6
cities.append([60.9826698,25.6615105]); #7
cities.append([62.8923798,27.6770306]); #8
cities.append([62.2414703,25.7208805]); #9
cities.append([61.4833298,21.7833309]); #10
cities.append([61.0587082,28.1887093]); #11
cities.append([63.0960007,21.6157703]); #12
cities.append([60.4664001,26.9458199]); #13
cities.append([62.601181,29.7631607]); #14
cities.append([60.9959602,24.4643402]); #15
cities.append([60.3923302,25.6650696]); #16
cities.append([61.6885681,27.2722702]); #17
cities.append([65.579287,24.196943]); #18
cities.append([65.986503,28.692848]); #19
cities.append([61.1272392,21.5112705]); #20
train_cities = np.array(cities);
for i in train_cities:
i[0] = normalize(i[0],65.986503,60.1695213,0.99,0.01)
i[1] = normalize(i[1],29.7631607,21.5112705,0.99,0.01)
train_labels = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20];
</code></pre>
<p><strong>Normalized longitude/latitude</strong></p>
<pre><code>[[0.01168472 0.41784541]
[0.01769563 0.38420658]
[0.23568373 0.28146911]
[0.69444458 0.74947275]
[0.05918709 0.10113927]
[0.82756859 0.48111052]
[0.14867768 0.50407289]
[0.47041082 0.7434374 ]
[0.36075063 0.51112371]
[0.233025 0.04349768]
[0.16148804 0.80420471]
[0.50471529 0.02359807]
[0.06170056 0.65659833]
[0.42135191 0.99118761]
[0.15091674 0.36189614]
[0.04922184 0.50449557]
[0.26760196 0.69536778]
[0.92308013 0.33013987]
[0.99168472 0.86407655]
[0.17303361 0.01118761]]
</code></pre>
<p><strong>Model</strong></p>
<pre><code>model = keras.Sequential([
keras.layers.Dense(10, activation=tf.nn.relu, input_shape = (2,)),
keras.layers.Dense(20, activation=tf.nn.softmax)
]);
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_cities, train_labels, epochs=50)
</code></pre>
<p><strong>Prediction</strong></p>
<pre><code>predictions = model.predict(train_cities[:1])
</code></pre>
<p>What I would like to do with this data is simply feed one of the city coordinate arrays into the network and get the corresponding label back.</p>
<p>Instead, I am getting an output array full of NaN values:</p>
<pre><code>array([[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan]], dtype=float32)
</code></pre>
<p>Also, it seems that the network is not actually learning, for reasons I can't figure out.</p>
<pre><code>Epoch 50/50
20/20 [==============================] - 0s 200us/step - loss: nan - acc: 0.0000e+00
</code></pre>
<p>Any help would be appreciated.</p>
<p>Normalisation function</p>
<pre><code>def normalize(value,maxValue,minValue,maxRange,minRange):
return ((value - (minValue - 0.01)) * (maxRange - (minRange))) / ((maxValue - 0.01) - (minValue - 0.01)) + (minRange)
</code></pre>
|
<p>The problem is likely your labels: with <code>sparse_categorical_crossentropy</code>, Keras expects labels in the range <code>0</code> to <code>num_classes - 1</code>. Since <code>train_labels</code> runs from <code>1</code> to <code>20</code>, you either need <code>21</code> units in the last layer or must redefine your labels to run from <code>0</code> to <code>19</code>; an out-of-range label can produce exactly the NaN loss you see. Otherwise your code is OK and works on my PC; I got <code>100%</code> accuracy after <code>~1900</code> epochs.</p>
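<p>A minimal sketch of the label fix (assuming you take the relabelling route):</p>
<pre><code>import numpy as np

# Shift the labels from 1..20 down to 0..19 so they match the 20 softmax outputs.
train_labels = np.array(train_labels) - 1
model.fit(train_cities, train_labels, epochs=50)
</code></pre>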
|
python|tensorflow|keras|neural-network
| 4
|
8,495
| 49,523,140
|
Spectral norm 2x2 matrix in tensorflow
|
<p>I've got a 2x2 matrix defined by the variables <code>J00, J01, J10, J11</code> coming in from other inputs. Since the matrix is small, I was able to compute the spectral norm by first computing the trace and determinant</p>
<pre><code>J_T = tf.reduce_sum([J00, J11])
J_ad = tf.reduce_prod([J00, J11])
J_cb = tf.reduce_prod([J01, J10])
J_det = tf.reduce_sum([J_ad, -J_cb])
</code></pre>
<p>and then solving the quadratic</p>
<pre><code>L1 = J_T/2.0 + tf.sqrt(J_T**2/4.0 - J_det)
L2 = J_T/2.0 - tf.sqrt(J_T**2/4.0 - J_det)
spectral_norm = tf.maximum(L1, L2)
</code></pre>
<p>This works, but it looks rather ugly and it isn't generalizable to larger matrices. Is there a cleaner way (maybe a method call that I'm missing) to compute <code>spectral_norm</code>?</p>
|
<p>The spectral norm of a matrix <code>J</code> equals the largest <a href="https://en.wikipedia.org/wiki/Singular-value_decomposition" rel="nofollow noreferrer">singular value</a> of the matrix.</p>
<p>Therefore you can use <a href="https://www.tensorflow.org/api_docs/python/tf/svd" rel="nofollow noreferrer"><code>tf.svd()</code></a> to perform the singular value decomposition, and take the largest singular value:</p>
<pre><code>spectral_norm = tf.svd(J,compute_uv=False)[...,0]
</code></pre>
<p>where <code>J</code> is your matrix.</p>
<p>Notes:</p>
<ul>
<li>I use <code>compute_uv=False</code> since we are interested only in singular values, not singular vectors.</li>
<li><code>J</code> does not need to be square.</li>
<li>This solution works also for the case where <code>J</code> has any number of batch dimensions (as long as the two last dimensions are the matrix dimensions).</li>
<li>The ellipsis <code>...</code> operation <a href="https://stackoverflow.com/a/12116854/5737630">works as in NumPy</a>.</li>
<li>I take the <code>0</code> index because we are interested only in the largest singular value.</li>
</ul>
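<p>Applied to the question's four scalars, a sketch (assuming <code>J00</code>–<code>J11</code> are scalar tensors; in TF 2.x the call is <code>tf.linalg.svd</code>):</p>
<pre><code>import tensorflow as tf

# Pack the four scalars into a 2x2 matrix, then take the largest
# singular value, which equals the spectral norm.
J = tf.reshape(tf.stack([J00, J01, J10, J11]), (2, 2))
spectral_norm = tf.svd(J, compute_uv=False)[..., 0]
</code></pre>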
|
python|matrix|tensorflow
| 4
|
8,496
| 27,955,727
|
How to display just the mesh of meshgrid
|
<p>The following four lines will create a rectangular meshgrid with bottom-left corner at (-5,-5) and top-right corner at (5,5). The width of each cell in the meshgrid will be 0.55 and the height will be 0.5. Is it possible to just display this created mesh in Python, that is, without superimposing any plot of a function on it? </p>
<pre><code>import numpy as np
x = np.arange(-5, 5, 0.55)
y = np.arange(-5, 5, 0.5)
xx, yy = np.meshgrid(x, y)
</code></pre>
<p>I will be grateful for help. Thanks.</p>
|
<p>You can use <code>matplotlib</code>'s <code>plot</code> to put a point at each point of the grid.
<img src="https://i.stack.imgur.com/dJoX0.png" alt="enter image description here"></p>
<pre><code>import matplotlib.pyplot as plt

plt.plot(xx, yy, ".k")
plt.show()
</code></pre>
<p>Here, this is actually plotting each column as a separate plot, and would give each a separate color, which is why I set <code>".k"</code>, where the <code>k</code> makes every point black. If you don't like this behavior you could do:</p>
<pre><code>plt.plot(xx.flat, yy.flat, ".")
</code></pre>
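<p>If you want the grid lines rather than just the nodes, a sketch reusing the same arrays:</p>
<pre><code># Each column of xx/yy traces a vertical line; the transposes trace the horizontal ones.
plt.plot(xx, yy, "k-")
plt.plot(xx.T, yy.T, "k-")
plt.show()
</code></pre>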
|
numpy|matplotlib
| 7
|
8,497
| 28,199,056
|
Pandas: Convert all strings in column to 1
|
<p>I have a df with a column df.open. I want to check this column for strings. If there's a string, I'd like to convert it to a 1. There are already a lot of 1s and 0s in the column. </p>
<p>So, suppose the values in the column are as follows: <0,0,1,0,text,1,open,0,0,xyz,1>. I'd like to go through the column and turn all strings (i.e. 'text', 'open', 'xyz') into 1s.</p>
<p>One thing I was thinking was to convert to numeric. Then, wherever there's a NaN, convert it to a 1. But that seems silly...</p>
|
<p>Convert all to boolean then to int:</p>
<pre><code>df.open = df.open.astype(bool).astype(int)
</code></pre>
<p><code>1</code> and any non-empty string are truthy, <code>0</code> is <code>False</code>; <code>True</code> then becomes <code>1</code> and <code>False</code> becomes <code>0</code>. Note this assumes the zeros are numeric: the string <code>'0'</code> is non-empty and would also map to <code>1</code>.</p>
|
python|pandas
| 2
|
8,498
| 28,174,580
|
Sort csv-data while reading, using pandas
|
<p>I have a csv-file with entries like this:</p>
<pre><code>1,2014 1 1 0 1,5
2,2014 1 1 0 1,5
3,2014 1 1 0 1,5
4,2014 1 1 0 1,6
5,2014 1 1 0 1,6
6,2014 1 1 0 1,12
7,2014 1 1 0 1,17
8,2014 5 7 1 5,4
</code></pre>
<p>The first column is the ID, the second the arrival-date (example of last entry: may 07, 1:05 a.m.) and the last column is the duration of work (in minutes).</p>
<p>Actually, I read in the data using pandas and the following function:</p>
<pre><code>import pandas as pd
def convert_data(csv_path):
store = pd.HDFStore(data_file)
print('Loading CSV File')
df = pd.read_csv(csv_path, parse_dates=True)
print('CSV File Loaded, Converting Dates/Times')
df['Arrival_time'] = map(convert_time, df['Arrival_time'])
df['Rel_time'] = (df['Arrival_time'] - REF.timestamp)/60.0
print('Conversion Complete')
store['orders'] = df
</code></pre>
<p>My question is: How can I sort the entries according to their duration, but considering the arrival-date? So, I'd like to sort the csv-entries according to "arrival-date + duration". How is this possible?</p>
<p>Thanks for any hint! Best regards, Stan.</p>
|
<p>OK, the following shows you can convert the date times and then shows how to add the minutes:</p>
<pre><code>In [79]:
df['Arrival_Date'] = pd.to_datetime(df['Arrival_Date'], format='%Y %m %d %H %M')
df
Out[79]:
ID Arrival_Date Duration
0 1 2014-01-01 00:01:00 5
1 2 2014-01-01 00:01:00 5
2 3 2014-01-01 00:01:00 5
3 4 2014-01-01 00:01:00 6
4 5 2014-01-01 00:01:00 6
5 6 2014-01-01 00:01:00 12
6 7 2014-01-01 00:01:00 17
7 8 2014-05-07 01:05:00 4
In [80]:
import datetime as dt
df['Arrival_and_Duration'] = df['Arrival_Date'] + df['Duration'].apply(lambda x: dt.timedelta(minutes=int(x)))
df
Out[80]:
ID Arrival_Date Duration Arrival_and_Duration
0 1 2014-01-01 00:01:00 5 2014-01-01 00:06:00
1 2 2014-01-01 00:01:00 5 2014-01-01 00:06:00
2 3 2014-01-01 00:01:00 5 2014-01-01 00:06:00
3 4 2014-01-01 00:01:00 6 2014-01-01 00:07:00
4 5 2014-01-01 00:01:00 6 2014-01-01 00:07:00
5 6 2014-01-01 00:01:00 12 2014-01-01 00:13:00
6 7 2014-01-01 00:01:00 17 2014-01-01 00:18:00
7 8 2014-05-07 01:05:00 4 2014-05-07 01:09:00
In [81]:
df.sort_values('Arrival_and_Duration')
Out[81]:
ID Arrival_Date Duration Arrival_and_Duration
0 1 2014-01-01 00:01:00 5 2014-01-01 00:06:00
1 2 2014-01-01 00:01:00 5 2014-01-01 00:06:00
2 3 2014-01-01 00:01:00 5 2014-01-01 00:06:00
3 4 2014-01-01 00:01:00 6 2014-01-01 00:07:00
4 5 2014-01-01 00:01:00 6 2014-01-01 00:07:00
5 6 2014-01-01 00:01:00 12 2014-01-01 00:13:00
6 7 2014-01-01 00:01:00 17 2014-01-01 00:18:00
7 8 2014-05-07 01:05:00 4 2014-05-07 01:09:00
</code></pre>
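<p>A shorter alternative for the timedelta step, a sketch using <code>pd.to_timedelta</code> instead of the lambda:</p>
<pre><code># unit='m' interprets the Duration column as minutes.
df['Arrival_and_Duration'] = df['Arrival_Date'] + pd.to_timedelta(df['Duration'], unit='m')
df = df.sort_values('Arrival_and_Duration')
</code></pre>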
|
python|sorting|csv|pandas
| 0
|
8,499
| 73,387,241
|
Automatically extracting data from csv file into specific matrix position
|
<p>I have a rather large csv file that I need the program to read, then input the data into the correct position of a zero matrix. Sample of csv block (also attached file):</p>
<pre><code> Sector,Service,Data_Point
Bio,Electricity NonEmitting,0
NEElectricity,Electricity NonEmitting,0.5
RE,Electricity NonEmitting,0
Electricity,Electricity NonEmitting,-1
Bio,Electricity Bio,0.8
NEElectricity,Electricity Bio,0
RE,Electricity Bio,0.04
Electricity,Electricity Bio,-2
Bio,Electricity BECCS,0.84
NEElectricity,Electricity BECCS,0
RE,Electricity BECCS,0.4
Electricity,Electricity BECCS,-1
Bio,Ammonia HB,0
Electricity,Ammonia HB,2.8
RE,Ammonia HB,0.06
Ammonia,Ammonia HB,-1
Bio,Biofuel TBD,0.30
Electricity,Biofuel TBD,0.02
RE,Biofuel TBD,0.012
Electricity,CarUse BEV,0.5
RE,CarUse BEV,0
CarUse,CarUse BEV,-1
Hydrogen,CarUse HFCEV,0.2
RE,CarUse HFCEV,0
CarUse,CarUse HFCEV,-1
Bio,NET DAC,0
NEElectricity,NET DAC,10.5
RE,NET DAC,-1
</code></pre>
<p>The problem is that I need it to be able to sort the data based on the Sector and Service columns. I.e. Sector = rows, Service = columns in the matrix. So if the program reads Sector as Bio: row = 1, and Service as Electricity NonEmitting: column 1, it inputs the corresponding number from Data_Point (in this case Data_Point is '0') into row 1 column 1 of the matrix. Or if it reads Sector as NEElectricity: row = 2, but service as Electricity NonEmitting again: column 1, the corresponding Data_Point '0.5' is inputted into row 2 column 1 of the matrix.</p>
<p>Below I have written code that automatically generates a zero matrix based on the number of unique elements in the Sector and Service columns. I just cannot figure out how to sort the values into the correct matrix position, so any help would be greatly appreciated.</p>
<pre><code>import csv
import numpy as np
import pandas as pd
sector = pd.read_csv('Coeff_Sample.csv', usecols=["Sector"])
matrix_column = int(sector.nunique())
service = pd.read_csv('Coeff_Sample.csv', usecols=["Service"])
matrix_row = int(service.nunique())
coeff_matrix = np.zeros((matrix_row, matrix_column))
</code></pre>
<p>Best regards</p>
|
<p><a href="https://i.stack.imgur.com/XrRzb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XrRzb.png" alt="Matrix" /></a></p>
<p>Is that the kind of matrix you wanted to create?</p>
<p>I created this matrix without pandas with the following source code:</p>
<pre><code>import csv
import numpy as np
rows = []
columns = []
all_rows = []
with open('test.csv', 'r') as read_obj:
csv_dict_reader = csv.DictReader(read_obj)
for row in csv_dict_reader:
columns.append(row['Sector'])
rows.append(row['Service'])
all_rows.append(row)
rows_set = set(rows)
columns_set = set(columns)
coeff_matrix = np.full((len(rows_set)+1, len(columns_set)+1), 0).tolist()
row_list = list(rows_set)
columns_list = list(columns_set)
for idx, x in enumerate(columns_list):
coeff_matrix[0][idx+1] = x
for idy, y in enumerate(row_list):
coeff_matrix[idy+1][0] = y
for e in all_rows:
sector = e['Sector']
service = e['Service']
value = e['Data_Point']
for row_idx, row in enumerate(coeff_matrix):
if row[0] == service:
row_index = row_idx
for column_idx, column in enumerate(coeff_matrix[0]):
if column == sector:
column_index = column_idx
coeff_matrix[row_index][column_index] = value
np_coeff_matrix = np.asarray(coeff_matrix)
</code></pre>
<p>But it has a lot of loops inside. There are probably faster ways to do this with pandas or list/np.array functions.</p>
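<p>Indeed, a much shorter sketch using a pandas pivot (assuming each Sector/Service pair appears at most once, as in the sample; use <code>pivot_table</code> if there can be duplicates):</p>
<pre><code>import pandas as pd

# Rows = Service, columns = Sector; missing combinations become 0 instead of NaN.
df = pd.read_csv('Coeff_Sample.csv')
coeff_matrix = df.pivot(index='Service', columns='Sector', values='Data_Point').fillna(0)
print(coeff_matrix)
</code></pre>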
|
python|python-3.x|pandas|numpy|csv
| 2
|