Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15-150 chars) | question (string, 37-64.2k chars) | answer (string, 37-44.1k chars) | tags (string, 5-106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
1,800
| 42,804,939
|
Functional API Keras alternate solution for predict_classes()
|
<p>Please refer <a href="https://stackoverflow.com/questions/42440274/training-on-the-merged-layer-in-keras">here</a> to my previous question for background information. As per the <a href="https://stackoverflow.com/a/42456188/7103753">answer</a> suggested by <a href="https://stackoverflow.com/users/7137636/nassim-ben">Nassim Ben</a>, I trained a two-path-architecture model using the functional API. Now I am stuck, as I need to predict the class of each pixel. Here is the code for that:</p>
<pre><code>imgs = io.imread(test_img).astype('float').reshape(5,240,240)
plist = []
# create patches from an entire slice
for img in imgs[:-1]:
    if np.max(img) != 0:
        img /= np.max(img)
    p = extract_patches_2d(img, (33,33))
    plist.append(p)
patches = np.array(zip(np.array(plist[0]), np.array(plist[1]), np.array(plist[2]), np.array(plist[3])))
# predict classes of each pixel based on model
full_pred = self.model_comp.predict_classes(patches)
fp1 = full_pred.reshape(208,208)
</code></pre>
<p>But according to the <a href="https://github.com/fchollet/keras/issues/2524" rel="nofollow noreferrer">GitHub issue</a>, predict_classes() is unavailable. So my question is: is there any other alternative I can try?</p>
|
<p>Nassim's answer is great, but I want to share the experience I have with similar tasks:</p>
<ol>
<li><strong>Never use Keras' <code>predict_proba</code></strong>. <a href="https://stackoverflow.com/questions/41716380/keras-output-of-model-predict-proba/41729210#41729210">Here</a> you can find out why.</li>
<li>Most methods used for turning predictions into classes don't take your data statistics into account. In image segmentation, detecting an object is very often more important than detecting the background. For this reason I advise you to use a threshold obtained from a <a href="http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html" rel="nofollow noreferrer">precision-recall</a> curve for each class. In this case you need to find the threshold value for which <code>precision == recall</code> (or they are as close as possible). After you obtain the thresholds, you need to write your own custom function for class prediction (see the sketch after this list).</li>
</ol>
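<p>As a minimal sketch of that last step (the names <code>y_true</code> and <code>y_prob</code> are assumptions here: per-class binary ground truth and predicted probabilities, flattened over pixels), picking and applying the threshold could look like this:</p>
<pre><code>import numpy as np
from sklearn.metrics import precision_recall_curve

# precision and recall have one more entry than thresholds, so drop the last one
precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
i = np.argmin(np.abs(precision[:-1] - recall[:-1]))
threshold = thresholds[i]

def predict_class(probabilities, threshold):
    # custom prediction: assign the class wherever the probability clears the threshold
    return (probabilities >= threshold).astype(int)
</code></pre>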
|
python|tensorflow|deep-learning|keras|convolution
| 2
|
1,801
| 42,818,202
|
TensorFlow GPU Support Mac - OpenCL
|
<p>Does <code>TensorFlow</code> have GPU support for a late 2015 <code>mac</code> running an <code>AMD Radeon R9 M370X</code>?</p>
<pre><code>AMD Radeon R9 M370X:
Chipset Model: AMD Radeon R9 M370X
Type: GPU
Bus: PCIe
PCIe Lane Width: x8
VRAM (Total): 2048 MB
Vendor: ATI (0x1002)
Device ID: 0x6821
Revision ID: 0x0083
ROM Revision: 113-C5670E-777
Automatic Graphics Switching: Supported
gMux Version: 4.0.20 [3.2.8]
EFI Driver Version: 01.00.777
Metal: Supported
</code></pre>
|
<p>OpenCL is currently not supported.</p>
<p>You can follow the progress here: <a href="https://github.com/tensorflow/tensorflow/issues/22" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/22</a></p>
|
macos|tensorflow
| 1
|
1,802
| 42,624,350
|
Adding two matrices with different dimensions
|
<p>Let <code>A</code> be a matrix with <code>[m x n]</code> elements and <code>B</code> another matrix with <code>[m x n x o]</code> elements.
Is there any linear algebraic way to add both matrices such that <code>C = A + B</code> where <code>C</code> will be in <code>[m x n x o]</code> without any sort of looping along the <code>o</code> dimension?</p>
<p>Example:</p>
<p>Let </p>
<p>A =</p>
<pre><code> 1 2 5 6
1 2 3 4
1 5 8 9
</code></pre>
<p>and</p>
<p>B(:,:,1) =</p>
<pre><code> 1 1 1 1
1 1 1 1
1 1 1 1
</code></pre>
<p>B(:,:,2) =</p>
<pre><code> 1 1 1 1
1 1 1 1
1 1 1 1
</code></pre>
<p>B(:,:,3) =</p>
<pre><code> 1 1 1 1
1 1 1 1
1 1 1 1
</code></pre>
<p>C = A + B </p>
<p>C(:,:,1) =</p>
<pre><code> 2 3 6 7
2 3 4 5
2 6 9 10
</code></pre>
<p>C(:,:,2) =</p>
<pre><code> 2 3 6 7
2 3 4 5
2 6 9 10
</code></pre>
<p>C(:,:,3) =</p>
<pre><code> 2 3 6 7
2 3 4 5
2 6 9 10
</code></pre>
|
<p>In MATLAB this can be done using implicit expansion (R2016b onward) or <code>bsxfun(@plus,...)</code>. </p>
<p>The following would work in all recent MATLAB versions:</p>
<pre><code>C = bsxfun(@plus,A,B);
</code></pre>
<p>In NumPy, this behavior is known as "broadcasting".</p>
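<p>As a rough NumPy sketch of the same idea (the arrays below are toy placeholders, not the original data): adding a trailing axis to <code>A</code> lets it broadcast against <code>B</code> without looping over the third dimension.</p>
<pre><code>import numpy as np

A = np.arange(12).reshape(3, 4)      # shape (3, 4)
B = np.ones((3, 4, 3))               # shape (3, 4, 3)
C = A[:, :, np.newaxis] + B          # shape (3, 4, 3), no explicit loop over o
</code></pre>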
|
python|matlab|numpy|multidimensional-array|linear-algebra
| 5
|
1,803
| 27,321,489
|
Trouble accessing column in pandas dataframe
|
<p>I created a DataFrame like so:</p>
<pre><code>stock_data = pd.read_csv('http://www.google.com/finance/historical?output=csv&q=AAPL')
</code></pre>
<p>It has a <code>Date</code> column but when I call <code>stock_data['Date']</code> I get a key error.
How do I access the date for each row?</p>
|
<p>It looks like some junk (in particular, a UTF-8 BOM) found its way into that column name:</p>
<pre><code>In [16]: stock_data = pd.read_csv('http://www.google.com/finance/historical?output=csv&q=AAPL')
In [17]: stock_data.columns
Out[17]: Index([u'Date', u'Open', u'High', u'Low', u'Close', u'Volume'], dtype='object')
In [18]: stock_data.columns[0]
Out[18]: '\xef\xbb\xbfDate'
</code></pre>
<p>which is why it isn't working. One workaround:</p>
<pre><code>In [19]: stock_data.columns = [col.decode("utf-8-sig") for col in stock_data.columns]
In [20]: stock_data.columns[0]
Out[20]: u'Date'
In [21]: stock_data["Date"].head()
Out[21]:
0 4-Dec-14
1 3-Dec-14
2 2-Dec-14
3 1-Dec-14
4 28-Nov-14
Name: Date, dtype: object
</code></pre>
|
python|python-2.7|pandas
| 2
|
1,804
| 27,284,749
|
Timedelta error; version 0.15.1-np19py27_0
|
<p>This problem was asked at:</p>
<p><a href="https://stackoverflow.com/questions/15149265/pandas-timedelta-error">pandas Timedelta error</a></p>
<p>However, the solution (to get the latest version of pandas) did not work for me. </p>
<p>I've got the same problem (installed using anaconda, on Windows 7), and trying that gives the same error.</p>
<p>Running from ipython:</p>
<pre><code>In [1]: import pandas as pd
-----------------------------------------
ImportError
<ipython-input-1-af55e7023913> in <module
----> 1 import pandas as pd
C:\Anaconda\lib\site-packages\pandas\__in
45
46 # let init-time option registrati
---> 47 import pandas.core.config_init
48
49 from pandas.core.api import *
C:\Anaconda\lib\site-packages\pandas\core
15 i
16 g
---> 17 from pandas.core.format import de
18
19
C:\Anaconda\lib\site-packages\pandas\core
7 from pandas.core.base import Pand
8 from pandas.core.common import ad
----> 9 from pandas.core.index import Ind
10 from pandas import compat
11 from pandas.compat import(StringI
C:\Anaconda\lib\site-packages\pandas\core
13 import pandas.algos as _algos
14 import pandas.index as _index
---> 15 from pandas.lib import Timestamp,
16 from pandas.core.base import Pand
17 from pandas.util.decorators impor
ImportError: cannot import name Timedelta
</code></pre>
<p>I've checked the pandas version, it is 0.15.1-np19py27_0.</p>
<p>nosetests pandas also returns problems:</p>
<pre><code>PS R:\data\python_testing\ipython_notebooks> nosetests pandas
E
======================================================================
ERROR: Failure: ImportError (cannot import name Timedelta)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\nose\loader.py", line 403, in loadTestsFromName
module = resolve_name(addr.module)
File "C:\Anaconda\lib\site-packages\nose\util.py", line 311, in resolve_name
module = __import__('.'.join(parts_copy))
File "C:\Anaconda\lib\site-packages\pandas\__init__.py", line 47, in <module>
import pandas.core.config_init
File "C:\Anaconda\lib\site-packages\pandas\core\config_init.py", line 17, in <module>
from pandas.core.format import detect_console_encoding
File "C:\Anaconda\lib\site-packages\pandas\core\format.py", line 9, in <module>
from pandas.core.index import Index, MultiIndex, _ensure_index
File "C:\Anaconda\lib\site-packages\pandas\core\index.py", line 15, in <module>
from pandas.lib import Timestamp, Timedelta, is_datetime_array
ImportError: cannot import name Timedelta
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
</code></pre>
<p>This issue is discussed at github:</p>
<p><a href="https://github.com/pydata/pandas/issues/8862" rel="nofollow noreferrer">https://github.com/pydata/pandas/issues/8862</a></p>
|
<p>I had this problem recently, and it turned out to be because I had recently used conda to install some packages from the command prompt, but had forgotten to launch the command prompt as administrator.</p>
<p>In my case I was able to fix the problem by launching command prompt as administrator, and reinstalling the packages concerned with "conda install -f ".</p>
<p>In your case you could try "conda install -f pandas".
It is possible you might have this problem with more than one package.</p>
|
pandas|timedelta
| 2
|
1,805
| 27,263,805
|
Pandas column of lists, create a row for each list element
|
<p>I have a dataframe where some cells contain lists of multiple values. Rather than storing multiple
values in a cell, I'd like to expand the dataframe so that each item in the list gets its own row (with the same values in all other columns). So if I have:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{'trial_num': [1, 2, 3, 1, 2, 3],
'subject': [1, 1, 1, 2, 2, 2],
'samples': [list(np.random.randn(3).round(2)) for i in range(6)]
}
)
df
Out[10]:
samples subject trial_num
0 [0.57, -0.83, 1.44] 1 1
1 [-0.01, 1.13, 0.36] 1 2
2 [1.18, -1.46, -0.94] 1 3
3 [-0.08, -4.22, -2.05] 2 1
4 [0.72, 0.79, 0.53] 2 2
5 [0.4, -0.32, -0.13] 2 3
</code></pre>
<p>How do I convert to long form, e.g.:</p>
<pre><code> subject trial_num sample sample_num
0 1 1 0.57 0
1 1 1 -0.83 1
2 1 1 1.44 2
3 1 2 -0.01 0
4 1 2 1.13 1
5 1 2 0.36 2
6 1 3 1.18 0
# etc.
</code></pre>
<p>The index is not important, it's OK to set existing
columns as the index and the final ordering isn't
important.</p>
|
<h2>Pandas >= 0.25</h2>
<p>Series and DataFrame methods define a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode" rel="noreferrer"><strong><code>.explode()</code></strong></a> method that explodes lists into separate rows. See the docs section on <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html#exploding-a-list-like-column" rel="noreferrer">Exploding a list-like column</a>.</p>
<pre><code>df = pd.DataFrame({
'var1': [['a', 'b', 'c'], ['d', 'e',], [], np.nan],
'var2': [1, 2, 3, 4]
})
df
var1 var2
0 [a, b, c] 1
1 [d, e] 2
2 [] 3
3 NaN 4
df.explode('var1')
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
2 NaN 3 # empty list converted to NaN
3 NaN 4 # NaN entry preserved as-is
# to reset the index to be monotonically increasing...
df.explode('var1').reset_index(drop=True)
var1 var2
0 a 1
1 b 1
2 c 1
3 d 2
4 e 2
5 NaN 3
6 NaN 4
</code></pre>
<p>Note that this also handles mixed columns of lists and scalars, as well as empty lists and NaNs appropriately (this is a drawback of <code>repeat</code>-based solutions).</p>
<p>However, you should note that <strong><code>explode</code> only works on a single column</strong> (for now).</p>
<p>P.S.: if you are looking to explode a column of <em>strings</em>, you need to split on a separator first, then use <code>explode</code>. See this (very much) <a href="https://stackoverflow.com/questions/12680754/split-explode-pandas-dataframe-string-entry-to-separate-rows/57122617#57122617">related answer by me.</a></p>
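<p>As a small sketch of that string case (toy data, assuming comma-separated values):</p>
<pre><code>df = pd.DataFrame({'var1': ['a,b,c', 'd,e'], 'var2': [1, 2]})
df.assign(var1=df['var1'].str.split(',')).explode('var1')

  var1  var2
0    a     1
0    b     1
0    c     1
1    d     2
1    e     2
</code></pre>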
|
python|pandas|list
| 148
|
1,806
| 27,061,168
|
Inverse of a symmetric matrix
|
<p>An inverse of a real symmetric matrix should in theory return a real symmetric matrix (the same is valid for Hermitian matrices). However, when I compute the inverse with numpy or scipy the returned matrix is asymmetric. I understand that this is due to numerical error.</p>
<p>What is the best way to avoid this asymmetry? I want it to be valid mathematically, in order that it does not propagate the error further when I use it in my computations.</p>
<pre><code>import numpy as np
n = 1000
a =np.random.rand(n, n)
a_symm = (a+a.T)/2
a_symm_inv = np.linalg.inv(a_symm)
if (a_symm_inv == a_symm_inv.T).all():
    print("Inverse of matrix A is symmetric") # This does not happen!
else:
    print("Inverse of matrix A is asymmetric")
    print("Max. asymm. value: ", np.max(np.abs((a_symm_inv-a_symm_inv.T)/2)))
</code></pre>
<p><strong>EDIT</strong></p>
<p>This is my solution to the problem:</p>
<pre><code>math_symm = (np.triu_indices(len(a_symm_inv), 1))
a_symm_inv[math_symm]=np.tril(a_symm_inv, -1).T[math_symm]
</code></pre>
|
<p>Luckily for you, this inverse is symmetric. Unluckily for you, you can't compare floating-point numbers this way:</p>
<pre><code>>>> import numpy as np
>>>
>>> n = 1000
>>> a =np.random.rand(n, n)
>>> a_symm = (a+a.T)/2
>>>
>>> a_symm_inv = np.linalg.inv(a_symm)
>>> a_symm_inv_T = a_symm_inv.T
>>> print a_symm_inv[2,1]
0.0505944152801
>>> print a_symm_inv_T[2,1]
0.0505944152801
>>> print a_symm_inv[2,1] == a_symm_inv_T[2,1]
False
</code></pre>
<p>Luckily for you, you can use NumPy's <code>allclose</code> to solve this problem: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html</a></p>
<pre><code>>>> np.allclose(a_symm_inv, a_symm_inv_T)
True
</code></pre>
<p>Looks like it's your lucky day.</p>
<p>Edit: Wow, I am quite surprised that cel's answer looks to be faster than this:</p>
<pre><code>>>> import timeit
>>> setup = """import numpy as np
... a = np.random.rand(1000, 1000)
... b = np.random.rand(1000, 1000)
... def cool_comparison_function(matrix1, matrix2):
... epsilon = 1e-9
... if (np.abs(matrix1 - matrix2) < epsilon).all():
... return True
... else:
... return False
... """
>>> timeit.Timer("cool_comparison_function(a,b)",setup).repeat(1, 1000)
[2.6709160804748535]
>>> timeit.Timer("np.allclose(a,b)",setup).repeat(1, 1000)
[11.295115947723389]
</code></pre>
|
python|arrays|numpy|scipy
| 1
|
1,807
| 30,691,797
|
Writing and Reading numpy objects as plain text
|
<p>In Python, I would like to store numpy arrays, matrices and possibly later other objects in plain text format.</p>
<p>My idea was to use ConfigParser and define parsers <code>array2string</code>, <code>matrix2string</code>, <code>string2array</code> and <code>string2matrix</code> (there is <code>numpy.array2string</code>, and <code>matrix2string</code> could be implemented based on that, but I couldn't find functions for the reverse). Then writing looks like:</p>
<pre><code>config.set('calibration', 'center', array2string(center))
config.set('calibration', 'trans_matrix', matrix2string(trans_matrix))
</code></pre>
<p>and reading like:</p>
<pre><code>center = string2array(config.get('calibration', 'center'))
trans_matrix = string2matrix(config.get('calibration', 'trans_matrix'))
</code></pre>
<p>What is the best way to write and read the numpy objects?</p>
|
<p>The answer in this post gives a nice function which works well:</p>
<p><a href="https://stackoverflow.com/questions/35612235/how-to-read-numpy-2d-array-from-string">how to read numpy 2D array from string?</a></p>
<pre><code>import configparser
import re
import ast
import numpy as np
def str2array(s):
    # Remove space after [
    s = re.sub('\[ +', '[', s.strip())
    # Replace commas and spaces
    s = re.sub('[,\s]+', ', ', s)
    return np.array(ast.literal_eval(s))
config = configparser.ConfigParser()
config.read("config.ini")
var_a = config.get("myvars", "var_a")
var_b = config.get("myvars", "var_b")
var_c = config.get("myvars", "var_c")
var_d=config.get("myvars", "var_d")
var_e=config.get("myvars", "var_e")
q=str2array(var_d)
r=str2array(var_e)
</code></pre>
<p>Where the config.ini file is:</p>
<pre><code># Variable tests
#
#
[myvars]
var_a='home'
var_b='car'
var_c=15.5
var_d=[2.1 3.4 4.6]
var_e=[[ 1.1 2.2 2.233]
[ 1.00000000e+02 1.17809494e+03 4.9410e+02]
[ 2.00000000e+04 1.20e+01 1.2323e+04]]
</code></pre>
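<p>For the writing side (which the question also asks about), a minimal sketch could rely on <code>numpy.array2string</code>, since it produces the same bracketed, space-separated form that <code>str2array</code> above parses back; the section and option names here are just examples, following the Python 3 <code>configparser</code> used above:</p>
<pre><code>import configparser
import numpy as np

arr = np.array([2.1, 3.4, 4.6])
config = configparser.ConfigParser()
config["myvars"] = {}
# array2string gives e.g. '[2.1 3.4 4.6]', which str2array can read back
config["myvars"]["var_d"] = np.array2string(arr)
with open("config.ini", "w") as f:
    config.write(f)
</code></pre>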
|
python|python-2.7|numpy
| 0
|
1,808
| 39,111,347
|
How do I convert a MultiIndex to type string
|
<p>consider the MultiIndex <code>idx</code></p>
<pre><code>idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)])
</code></pre>
<p>When I do</p>
<pre><code>idx.to_series().str.join(' ')
</code></pre>
<p>I get</p>
<pre><code>2013 1 NaN
2 NaN
3 NaN
4 NaN
2014 1 NaN
2 NaN
3 NaN
4 NaN
2015 1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
</code></pre>
<p>This happens because the dtypes of the different levels are <code>int</code> and not <code>str</code>. <code>join</code> expects a <code>str</code>. How do I convert the whole <code>idx</code> to <code>str</code>?</p>
<p>I've done</p>
<pre><code>join = lambda x, delim=' ': delim.join([str(y) for y in x])
idx.to_series().apply(join, delim=' ')
2013 1 2013 1
2 2013 2
3 2013 3
4 2013 4
2014 1 2014 1
2 2014 2
3 2014 3
4 2014 4
2015 1 2015 1
2 2015 2
3 2015 3
4 2015 4
dtype: object
</code></pre>
<p>I expect there is a simpler way that I'm overlooking.</p>
|
<p>Something like this?</p>
<pre><code>idx.to_series().apply(lambda x: '{0}-{1}'.format(*x))
</code></pre>
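<p>If you prefer to stay on the index itself, a rough alternative (assuming a reasonably recent pandas, where <code>MultiIndex.map</code> passes each tuple to the callable) is:</p>
<pre><code>idx.map(lambda tup: ' '.join(str(level) for level in tup))
</code></pre>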
|
python|pandas|multi-index
| 5
|
1,809
| 39,253,672
|
pandas.DataFrame.query keeping original multiindex
|
<p>I have a dataframe with multiindex:</p>
<pre><code>>>> df = pd.DataFrame(np.random.randint(0,5,(6, 2)), columns=['col1','col2'])
>>> df['ind1'] = list('AAABCC')
>>> df['ind2'] = range(6)
>>> df.set_index(['ind1','ind2'], inplace=True)
>>> df
col1 col2
ind1 ind2
A 0 2 0
1 2 2
2 1 2
B 3 2 2
C 4 4 0
5 1 4
</code></pre>
<p>when I select data using <code>.loc[]</code> on one of the index levels and apply <code>.query()</code> afterwards, the resulting index is "shrunk" as expected to match only those values contained in the resulting dataframe:</p>
<pre><code>>>> df.loc['A'].query('col2 == 2')
col1 col2
ind2
1 2 2
2 1 2
>>> df.loc['A'].query('col2 == 2').index
Int64Index([1, 2], dtype='int64', name='ind2')
</code></pre>
<p>however, when I try to receive the same result using just <code>.query()</code>, pandas keeps the same index as the original dataframe (despite the fact that it didn't behave like that above: in the single-index case the resulting index went from <code>[0,1,2]</code> to <code>[1,2]</code>, matching only the <code>col2 == 2</code> rows):</p>
<pre><code>>>> df.query('ind1 == "A" & col2 == 2')
col1 col2
ind1 ind2
A 1 2 2
2 1 2
>>> df.query('ind1 == "A" & col2 == 2').index
MultiIndex(levels=[['A', 'B', 'C'], [0, 1, 2, 3, 4, 5]],
labels=[[0, 0], [1, 2]],
names=['ind1', 'ind2'])
</code></pre>
<p>Is it a bug or a feature? If it is a feature, could you please explain such behavior?</p>
<p>EDIT1:
I would expect following index instead:</p>
<pre><code>MultiIndex(levels=[['A'], [1, 2]],
labels=[[0, 0], [0, 1]],
names=['ind1', 'ind2'])
</code></pre>
<p>EDIT2:
as explained in <a href="https://stackoverflow.com/questions/32585009/dataframe-slice-does-not-remove-index-values">Dataframe Slice does not remove Index Values</a> index values shouldn't be removed at all when slicing DF; such behavior should give following result: </p>
<pre><code>>>> df.loc['A'].query('col2 == 2')
col1 col2
ind2
1 2 2
2 1 2
>>> df.loc['A'].query('col2 == 2').index
EXPECTATION: Int64Index([0, 1, 2], dtype='int64', name='ind2')
REALITY: Int64Index([1, 2], dtype='int64', name='ind2')
</code></pre>
|
<p><code>df.loc['A']</code> returns you a DF (or a "view") with a regular ("single") index:</p>
<pre><code>In [12]: df.loc['A']
Out[12]:
col1 col2
ind2
0 1 1
1 0 3
2 1 2
</code></pre>
<p>so <code>.query()</code> will be applied on that DF with a regular index...</p>
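<p>If you want the filtered MultiIndex to also drop the unused level values (matching the expectation in EDIT1), a small sketch, assuming pandas 0.20 or newer, is to call <code>remove_unused_levels()</code> afterwards:</p>
<pre><code>res = df.query('ind1 == "A" & col2 == 2')
res.index = res.index.remove_unused_levels()
res.index
# MultiIndex(levels=[['A'], [1, 2]],
#            labels=[[0, 0], [0, 1]],
#            names=['ind1', 'ind2'])
</code></pre>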
|
python|pandas|dataframe|multi-index
| 1
|
1,810
| 39,335,149
|
if negative then with weighted average
|
<p>I have a DataFrame:</p>
<pre><code>a = {'Price': [10, 15, 20, 25, 30], 'Total': [10000, 12000, 15000, 14000, 10000],
'Previous Quarter': [0, 10000, 12000, 15000, 14000]}
a = pd.DataFrame(a)
print (a)
</code></pre>
<p>With this raw data, i have added a number of additional columns including a weighted average price (WAP)</p>
<pre><code>a['Change'] = a['Total'] - a['Previous Quarter']
a['Amount'] = a['Price']*a['Change']
a['Cum Sum Amount'] = np.cumsum(a['Amount'])
a['WAP'] = a['Cum Sum Amount'] / a['Total']
</code></pre>
<p>This is fine, however as the total starts to decrease this brings down the weighted average price.</p>
<p>My question is: if Total decreases, how would I get WAP to reflect the row above? For instance in row 3, Total is 14000, which is lower than in row 2. This brings WAP down from 12.6 to 11.78, but I would like it to say 12.6 instead of 11.78.</p>
<p>I have tried looping through a['Total'] < 0 and then setting a['WAP'] = 0, but this impacts the whole column.</p>
<p>Ultimately I am looking for a WAP column which reads:
10, 10.83, 12.6, 12.6, 12.6</p>
|
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cummax.html" rel="nofollow"><code>cummax</code></a>: </p>
<pre><code>a['WAP'] = (a['Cum Sum Amount'] / a['Total']).cummax()
print (a['WAP'])
0 10.000000
1 10.833333
2 12.666667
3 12.666667
4 12.666667
Name: WAP, dtype: float64
</code></pre>
|
python|pandas
| 3
|
1,811
| 38,965,471
|
How to slice pandas DataFrame by disjunction statement (logical "or")?
|
<p>I would like to slice a <code>pandas.DataFrame</code> which satisfies condition A <strong>or</strong> condition B. Most of the search results only show how to slice dataframe using <strong>"and"</strong>. So I wonder if it is possible to use <strong>"or"</strong> operator without converting (A and B) to (not (not A and not B))? Because sometimes there are many "or" conditions needed, and converting might be troublesome.</p>
<p>I tried to use:</p>
<pre><code>df[(df['c1']==x1) or (df['c2']==x2)]
</code></pre>
<p>but it does not work.</p>
|
<p>You need to use the logical or symbol <code>|</code></p>
<pre><code>df[(df['c1'] == x1) | (df['c2'] == x2)]
</code></pre>
<p>For <code>and</code>, you would need to use <code>&</code> </p>
<pre><code>df[(df['c1'] == x1) & (df['c2'] == x2)]
</code></pre>
|
python|pandas|dataframe
| 11
|
1,812
| 33,695,157
|
.eigenvals creates a new variable
|
<p>I'm calculating the eigenvalues of a matrix with the .eigenvals() function. When I do so for my matrix, a new variable that I never declared occurs in the solution; I don't know where it comes from, nor did I expect it to happen, but it definitely influences the solution.
I have the problem with numpy and sympy.
Here is my code for sympy:</p>
<pre><code>from sympy import *
D,Bm,Bp,Bz,l=symbols('D Bm Bp Bz l')
H=Matrix(([D+Bz,Bm,0],[Bp,0,Bm],[0,Bp,D-Bz]))
ev=H.eigenvals()
sol=ev.keys()
print sol[0]
print
print sol[1]
print
print sol[2]
</code></pre>
<p>The solutions look like this, with this strange 'I' in there. When I want to use the calculated eigenvalues, I have to define, what 'I' is, otherwise it won't solve my formulas.</p>
<pre><code>2*D/3 + (-2*Bm*Bp/3 - Bz**2/3 - D**2/9)/(Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3) - (Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3)
2*D/3 + (-2*Bm*Bp/3 - Bz**2/3 - D**2/9)/((-1/2 - sqrt(3)*I/2)*(Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3)) - (-1/2 - sqrt(3)*I/2)*(Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3)
2*D/3 + (-2*Bm*Bp/3 - Bz**2/3 - D**2/9)/((-1/2 + sqrt(3)*I/2)*(Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3)) - (-1/2 + sqrt(3)*I/2)*(Bm*Bp*D - 8*D**3/27 + D*(-2*Bm*Bp - Bz**2 + D**2)/3 + sqrt((-2*Bm*Bp/3 - Bz**2/3 - D**2/9)**3 + (2*Bm*Bp*D - 16*D**3/27 + 2*D*(-2*Bm*Bp - Bz**2 + D**2)/3)**2/4))**(1/3)
</code></pre>
<p>I can also do it numerically with the result, that all my symbols are numbers then, but the 'I' stays in the solution at the same point.</p>
<p>Has anyone ever seen this before, or does anyone know what Python is doing here or what this 'I' stands for? It would be a great help to know what is happening, since the calculated eigenvalues do not fully behave as I expected, and I blame the terms including that 'I'.
Thanks for any comments in advance.</p>
|
<p><code>I</code> is the imaginary unit <code>sqrt(-1)</code>.</p>
<pre><code>>>> from sympy import I
>>> complex(I)
1j
</code></pre>
<p>For example,</p>
<pre><code>>>> from sympy import poly
>>> from sympy.abc import x
>>> p = poly(x**2 + 1)
>>> p.root(0)
-I
>>> p.root(1)
I
</code></pre>
|
python|python-2.7|numpy|sympy|eigenvalue
| 2
|
1,813
| 22,592,163
|
solve an integral equation by python
|
<p>I need to solve an integral equation by python 3.2 in win7.</p>
<p>I want to find an initial guess solution first and then use "fsolve()" to solve it in python.</p>
<p>This is the code:</p>
<pre><code>import numpy as np
from scipy.optimize.minpack import fsolve
from cmath import cos, exp
from scipy.integrate.quadpack import quad
def integrand2(x, b):
    return exp(-x)/b

def intergralFunc2(b):
    integral, err = quad(integrand2, 0, 10, args=(b))  # **error here**
    return 0.01 - integral

import matplotlib.pyplot as plt

def findGuess():
    vfunc = np.vectorize(intergralFunc2)
    f = np.linspace(-20, 20, 10)
    plt.plot(f, vfunc(f))
    plt.xlabel('guess value')
    plt.show()

def solveFunction():
    y = fsolve(intergralFunc2, 10)
    return y

if __name__ == '__main__':
    findGuess()
    solution = solveFunction()
    print("solution is ", solution)
</code></pre>
<p>I got error: </p>
<pre><code> quadpack.error: Supplied function does not return a valid float.
</code></pre>
<p>Any help would be appreciated. </p>
|
<p>Just made the following change and it should work (it worked for me).</p>
<p>remove:</p>
<pre><code>from cmath import exp, cos
</code></pre>
<p>include:</p>
<pre><code>from numpy import exp, cos
</code></pre>
<p>as explained in the comments, the <code>cmath</code> functions accept only <code>float</code> inputs, not arrays.</p>
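<p>As a minimal sketch of the corrected integrand (the value of <code>b</code> below is an arbitrary example): <code>quad</code> expects the integrand to return a plain float, which <code>numpy.exp</code> does for scalar input, whereas <code>cmath.exp</code> returns a complex number.</p>
<pre><code>import numpy as np
from scipy.integrate import quad

def integrand2(x, b):
    return np.exp(-x) / b

integral, err = quad(integrand2, 0, 10, args=(5.0,))
</code></pre>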
|
python|python-3.x|numpy|scipy
| 3
|
1,814
| 62,417,486
|
Count ids by occurrence and sequential order in Pandas
|
<p>Current dataset:</p>
<pre><code>month ID Bool
1 333 0
2 444 0
3 111 0
4 222 0
5 999 0
6 111 1
7 111 1
8 111 1
9 222 1
10 555 1
11 666 1
12 777 1
</code></pre>
<p>Two things need to be defined in one new column named level:</p>
<ol>
<li>Count the total occurrences of each ID with Bool 1 and add that count to the new column on the row with the same ID and Bool 0.</li>
<li>For every Bool 1, count the IDs that come underneath it based on month. When it is the last one, show 0.</li>
</ol>
<p>Required result is a new column named level:</p>
<pre><code>month ID Bool Level
1 333 0 0
2 444 0 0
3 111 0 4
4 222 0 1
5 999 0 0
6 111 1 3
7 111 1 2
8 111 1 1
9 222 1 0
10 555 1 0
11 111 1 0
12 777 1 0
</code></pre>
|
<p>You can do <code>cumcount</code> with the order reversed:</p>
<pre><code>df['level']=df.iloc[::-1].groupby('ID').cumcount()
df
Out[66]:
month ID Bool Level level
0 1 333 0 0 0
1 2 444 0 0 0
2 3 111 0 4 4
3 4 222 0 1 1
4 5 999 0 0 0
5 6 111 1 3 3
6 7 111 1 2 2
7 8 111 1 1 1
8 9 222 1 0 0
9 10 555 1 0 0
10 11 111 1 0 0
11 12 777 1 0 0
</code></pre>
|
python|pandas
| 1
|
1,815
| 62,345,730
|
Python: I have 1000 x values and need corresponding y values
|
<p>I have this vector <code>x=np.linspace(0,100,1000)</code>, and a function: </p>
<pre><code>(-1/(math.log(2)/5700))*math.log(x/100)
</code></pre>
<p>How can I calculate the corresponding <code>y</code> values and put them in a vector?</p>
|
<p>Use <a href="https://numpy.org/doc/1.18/reference/generated/numpy.log.html" rel="nofollow noreferrer"><code>np.log</code></a> instead of <code>math.log</code> for a vectorised approach, specially given that you're already using numpy:</p>
<pre><code>y = (-1/(np.log(2)/5700))*np.log(x/100)
</code></pre>
|
python|list|function|numpy|vector
| 1
|
1,816
| 51,213,544
|
Multiple Axes and Plots
|
<p>Sorry if the post is not that good. It's my first one on Stack Overflow.
I have datasets in the following structure:</p>
<pre><code> Revolution1 Position1 Temperature1 Revolution2 Position2 Temperature2
1/min mm C 1/min m C
datas....
</code></pre>
<p>I plot these against time. Now I want a new y-axis for every different unit. So I looked at the matplotlib example and wrote something like this, where x is the x-values and d is the pandas dataframe:</p>
<pre><code>fig,host=plt.subplots()
fig.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
uni_units = np.unique(units[1:])
par2.spines["right"].set_position(("axes", 1.2))
make_patch_spines_invisible(par2)
# Second, show the right spine.
par2.spines["right"].set_visible(True)
for i,v in enumerate(header[1:]):
    if d.loc[0,v] == uni_units[0]:
        y=d.loc[an:en,v].values
        host.plot(x,y,label=v)
    if d.loc[0,v] == uni_units[1]:
        (v,ct_yax[1]))
        y=d.loc[an:en,v].values
        par1.plot(x,y,label=v)
    if d.loc[0,v] == uni_units[2]:
        y=d.loc[an:en,v].values
        par2.plot(x,y,label=v)
</code></pre>
<p><img src="https://i.stack.imgur.com/6hmzV.jpg" alt="see the plot i get here..."></p>
<p>EDIT: Okay, I really forgot to ask the actual question (maybe I was nervous, because it was the first time posting here):</p>
<p>I actually wanted to ask why it does not work, since I only saw 2 plots. But by zooming in I saw it actually plots every curve...</p>
<p>Sorry!</p>
|
<p>If I understand correctly, what you want is to get subplots from the <code>DataFrame</code>.</p>
<p>You can achieve this using the <code>subplots</code> parameter of the <code>plot</code> function available on the <code>DataFrame</code> object.</p>
<p>The toy sample below gives a better idea of how to achieve this:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame({"y1":[1,5,3,2],"y2":[10,12,11,15]})
df.plot(subplots=True)
plt.show()
</code></pre>
<p>Which produces below figure:
<a href="https://i.stack.imgur.com/Jb0az.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jb0az.png" alt="enter image description here"></a></p>
<p>You may check <a href="https://pandas.pydata.org/pandas-docs/stable/visualization.html#subplots" rel="nofollow noreferrer">documentation about subplots for <code>pandas</code> <code>Dataframe</code></a>.</p>
|
pandas|matplotlib
| 2
|
1,817
| 51,290,347
|
Pandas is trying to convert my path to a float when inputting it to a csv cell?
|
<p>I'm trying to save a path into a cell of a csv file for reference, but it's giving me the error: ValueError: could not convert string to float: <em>[path]</em></p>
<pre><code>class Program:
    def __init__(self, master):
        self.data = pd.read_csv("SourcingPython\\ProgramData.csv")
        if pd.isnull(self.data.User[0]):
            path = os.getcwd()
            fair = path.split("\\")
            for i in fair:
                if i.isdigit():
                    link = i
            self.data.to_csv("SourcingPython\\ProgramData.csv", index=False)

    def inputSelect(self):
        self.data.Input.at[0] = askopenfilename()
        print(self.data.Input[0])
        self.data.to_csv("SourcingPython\\ProgramData.csv", index=False)
        self.inLabel = Label(text=self.data.Input[0], relief=SUNKEN, width=50).grid(row=1, column=1)
        root.update()
</code></pre>
|
<p>I was able to resolve this issue. For null values, it appears that you don't need to include <code>.at</code> to input the values.</p>
<p>so a simple:</p>
<pre><code>self.data.Input= askopenfilename()
</code></pre>
<p>was sufficient for this.</p>
<p>Thank you</p>
|
python|pandas
| 0
|
1,818
| 48,371,569
|
merge pandas MultiIndex is very slow
|
<p>I notice that pandas is very slow when merging DataFrames based on a MultiIndex. Assigning values is also sometimes slow.</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
import numpy as np
from pandas_datareader import data
import datetime
import string
import random
start = datetime.datetime(2002, 1, 1)
end = datetime.datetime(2018, 1, 1)
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))
columns = [id_generator() for i in range(1000)]
dateindex = pd.date_range(start, end)
df = pd.DataFrame(np.random.randint(1, 100, (len(dateindex), len(columns))), columns=columns, index=dateindex)
df.columns = df.columns.rename('Name')
df.index = df.index.rename('Date')
df1 = df.pct_change(1).stack().rename('change1').to_frame()
df2 = df.pct_change(2).stack().rename('change2').to_frame()
df3 = df1.reset_index()
df4 = df2.reset_index()
%timeit pd.merge(df1, df2, left_index=True, right_index=True)
In [11]: 46.7 s Β± 656 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit pd.merge(df3, df4, on=['Date', 'Name'])
In [12]: 3.17 s Β± 168 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>The speed is more than 10 times slower. Does anyone know what is going on? Is it always better to reset the index and join on columns instead of the MultiIndex?</p>
|
<p>Let's use <code>join</code>:</p>
<pre><code>%timeit df1.join(df2)
1 loop, best of 3: 647 ms per loop
</code></pre>
|
python|pandas|merge
| 0
|
1,819
| 48,195,277
|
Writing lognormal function in python
|
<p>I'm trying to write an inverse lognormal function in python:</p>
<pre><code>import numpy as np
import scipy.stats as sp
from scipy.optimize import curve_fit
def lognorm1(x, s, scale):
    ANS = sp.lognorm(s, scale=scale).ppf(x)
    return ANS
curve_fit(lognorm1,x,y)
</code></pre>
<p>I have no trouble fitting the curve; however, the scale parameter is the exponential of what the LOGNORM.INV function uses in Excel. I know I can just log the scale parameter at the end, but is there any way to rewrite the function so that I don't have to do this every time?</p>
|
<p>Indeed, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html" rel="nofollow noreferrer">SciPy documentation says</a></p>
<blockquote>
<p>A common parametrization for a lognormal random variable Y is in terms of the mean, mu, and standard deviation, sigma, of the unique normally distributed random variable X such that exp(X) = Y. This parametrization corresponds to setting s = sigma and scale = exp(mu).</p>
</blockquote>
<p>So let's set it as such: </p>
<pre><code>def lognorm1(x, mu, sigma):
    ANS = sp.lognorm(s=sigma, scale=np.exp(mu)).ppf(x)
    return ANS

curve_fit(lognorm1, x, y)
</code></pre>
<p>Now the parameters returned by <code>curve_fit</code> have the meaning of the mean and standard deviation of the underlying normal distribution.</p>
|
python|numpy|scipy
| 0
|
1,820
| 48,299,174
|
How to run predict_generator on large dataset with limited memory?
|
<p>Currently I am feeding all the images at once to predict_generator. <strong>I want to be able to feed a small set of images that are stored in the validation_generator and make predictions on them, so that there are no memory issues with large datasets</strong>. How should I change the following code?</p>
<pre><code>top_model_weights_path = '/home/rehan/ethnicity.071217.23-0.28.hdf5'
path = "/home/rehan/countries/pakistan/guys/"
img_width, img_height = 139, 139
confidence = 0.8
model = applications.InceptionResNetV2(include_top=False, weights='imagenet',
input_shape=(img_width, img_height, 3))
print("base pretrained model loaded")
validation_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(path, target_size=(img_width, img_height),
batch_size=32,shuffle=False)
print("validation_generator")
features = model.predict_generator(validation_generator,steps=10)
</code></pre>
|
<p>I ran a loop over the generator object and stored the data in a list to get rid of the memory issues.</p>
<pre><code>validation_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(path, target_size=(img_width, img_height),
                                                                               batch_size=32, shuffle=False)
prediction_proba1 = []
prediction_classes1 = []
print("validation_generator")
print(len(validation_generator))
for i in range(len(validation_generator)):
    print(" array coming...")
    #print(validation_generator[i])
    kl = validation_generator[i]
    print(kl)
    print("numpy array")
    print(kl[0])
    features = model.predict_on_batch(kl[0])
    print("features")
    print(features)
    prediction_proba = model1.predict_proba(features)
    prediction_classes = model1.predict_classes(features)
    prediction_classes1.extend(prediction_classes)
    prediction_proba1.extend(prediction_proba)
    #print(prediction_proba1)
print(prediction_classes1)
</code></pre>
|
tensorflow|machine-learning|keras|imagenet
| 1
|
1,821
| 48,039,409
|
pandas getattr(df, "mean") doesn't work like df.mean()
|
<p>When I use <code>getattr()</code> to dynamically access the mean of a pandas dataframe or series, it returns a Series.mean object. However, when I use <code>df.mean()</code> to access the mean, it returns a float.</p>
<p>Why doesn't <code>getattr()</code> return the same thing that the normal method does?</p>
<p>Minimal reproducible code:</p>
<pre><code>import pandas as pd
import numpy as np
s = pd.Series(np.random.randn(5))
print(getattr(s, "mean"))
>>> <bound method Series.mean of
>>> 0 1.158042
>>> 1 -0.586821
>>> 2 -1.976764
>>> 3 1.722072
>>> 4 1.129570
print(s.mean())
>>> dtype: float64>
>>> 0.28921963496328584
</code></pre>
<p>I have used <code>dir(getattr(s, "mean"))</code> to attempt to get the mean value, but can't figure out which attribute, if any, will get me the float mean value.</p>
|
<p>Since the attribute returned by <code>getattr</code> is the bound method itself (which is callable), call it:</p>
<pre><code>print(getattr(s, "mean")())
</code></pre>
|
python|pandas|numpy|dynamic
| 6
|
1,822
| 48,669,802
|
After doing ML how to save predicted output to CSV files using PANDAS lib or CSV lib
|
<p>After doing machine learning in Python 3.5.x, how do I save the predicted output to CSV files using the pandas library or the csv library?</p>
|
<p>If you have a pandas DataFrame <code>df</code>, it can be saved to a CSV file using the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">to_csv</a> function.</p>
<pre><code>df.to_csv("some_file.csv")
</code></pre>
|
python|pandas|csv|machine-learning|scikit-learn
| 2
|
1,823
| 48,692,997
|
Loading sqlite table in a Pandas DataFrame gives AttributeError
|
<p>I'm trying to import data from sqlite database and load it into a pandas <code>DataFrame</code>. Sounds easy right?</p>
<pre><code>import pandas as pd
import sqlite3 as sql
pd.set_option('precision',7)
db = sql.connect('D:\db\crypto_db.db')
cursor = db.cursor()
cursor.execute('''select * from price''')
data_sql = cursor.fetchall()
data_pd = pd.DataFrame(data_sql)
print(data_pd)
</code></pre>
<p>The import works fine and gives a list of tuples. But as the <code>dataframe</code> gets loaded with <code>data_sql</code>, I get the following error:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'name'</p>
</blockquote>
<p>In my code I do not 'attribute' a list at all. I suppose it is in the underlying code of Pandas, but I cannot figure it out. </p>
<p>Please help and make my day :)</p>
<p>First row looks like this:</p>
<pre><code>(1, '2018-01-22 20:57:37.952722', 4.779e-05, 8.567e-05, 0.14788838, 5.5e-07, 8.455e-05, 2.437e-05, 4.662e-05, 0.015833, 0.00016368, 2.958e-05, 4.16e-06, 0.00070031, 6.174e-05, 0.07073803, 0.00877186, 4.75e-06, 6.2e-07, 5.252e-05, 0.00267409, 0.09210015, 0.00045663, 0.00421175, 3.51e-06, 1.532e-05, 0.00031989, 0.00417836, 0.0189958, 5.415e-05, 8.64e-06, 2.335e-05, 5.801e-05, 0.00193177, 0.01658499, 6.69e-05, 0.00025703, 0.00069553, 0.0003491, 3.828e-05, 3.603e-05, 0.00141013, 0.00490978, 0.0002373, 4.37e-06, 2.012e-05, 0.00046392, 0.00086303, 0.00769998, 2.241e-05, 0.00060928, 3.45e-06, 0.00038887, 0.000126, 4.374e-05, 0.00109395, 5.831e-05, 0.00034156, 0.000115, 0.00039507, 0.00938508, 0.00523241, 8.691e-05, 0.02860075, 9.441e-05, 0.00011479, 5.874e-05, 0.040542, 0.00015748, 1.59910467, 0.00066517, 0.02921625, 0.04512196, 0.20561388, 0.00058784, 0.02107231, 0.01526, 0.083361, 0.00414773, 0.44203455, 0.00170822, 1552.20855, 10512.99999903, 745.17442254, 28.31224896, 968.53360056, 176.02677305, 0.3806, 81.99986775, 0.46, 298.601292, 1.202, 432.5895432, 1.972e-05, 0.00159783, 0.55736974, 2.44854735, 0.57825471, 0.00233897, 0.00125385, 1.41373884)
</code></pre>
<p>I also tried:</p>
<pre><code>import pandas as pd
import sqlite3 as sql
conn = sql.connect('D:\db\crypto_db.db')
data_pd = pd.read_sql('select * from price',conn)
</code></pre>
<p>but this gives the next error:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'is_unique'</p>
</blockquote>
|
<p><code>pandas</code> can actually do extract + convert to dataframe at the same time.
You can do:</p>
<pre><code>pd.read_sql("select * from price", con=db)  # con must be the connection object, not the cursor
</code></pre>
<p>Reference: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html</a></p>
|
python|pandas
| 1
|
1,824
| 48,599,009
|
Python error setting an array element with a sequence for different loops
|
<p>My code is as follows (there are about 100 lines of value-setting before the loop, which seem to be working, so I've just included the necessary values):</p>
<pre><code>fraction=np.array([0.5, 0.3, 0.2])
tauC=np.array([30.,300.,100000.])
dC_memory=np.zeros((1,3))
dC_frac=np.zeros((1,3))
for j in range(0,ens_num):
    dC_memory=np.zeros((1,3))
    for n in range(0,N-1):
        # CO2 Concentration
        for m in range(0,3):
            dC_frac[m]=fraction[m]*E[j,n+1]-dC_memory[m]/(tauC[m])
            dC_memory[m]=dC_memory[m]+dC_frac[m]*dt
            dC[j,n]=dC[j,n]+dC_frac[m]*dt
        C[j,n+1]=C[j,n]+dC[j,n]
        # Temperature
        dT[j,n]=((T2eq*math.log(C[j,n+1]/Cpi)/math.log(2))-T[j,n])*(dt/tauT)
        T[j,n+1]=T[j,n]+dT[j,n]
        # Adaptation
        dTadp[j,n]=(T[j,n]-Tadp[j,n])*dt/tauA
        Tadp[j,n+1]=Tadp[j,n]+dTadp[j,n]
        Tdiff[j,n+1]=0.5*(abs(T[j,n]-Tadp[j,n+1])+T[j,n]-Tadp[j,n+1])
        if yi[j,n+1]+xi0[k]<=mu:
            count[j]=count[j]+1/N
</code></pre>
<p>When I run this I get the error on the line <code>dC[j,n]=dC[j,n]+dC_frac[m]*dt</code> saying</p>
<blockquote>
<p>ValueError: setting an array element with a sequence. </p>
</blockquote>
<p>I'm new to Python. I know that Python indexing starts from 0, but I can't understand why this code stops here.</p>
|
<p>Your example code is not complete. But I think the bug is clear.</p>
<p>By defining</p>
<pre><code> dC_frac=np.zeros((1,3))
</code></pre>
<p>Your <code>dC_frac</code> is a <em>multidimensional</em> array of shape <code>(1, 3)</code>. If you check <code>dC_frac.shape</code> you will find it's <code>(1, 3)</code>, <em>not</em> <code>(3,)</code>.</p>
<p>Thus in </p>
<pre><code>for m in range(0,3):
    dC_frac[m]=fraction[m]*E[j,n+1]-dC_memory[m]/(tauC[m])
    ...
</code></pre>
<p>Your <code>dC_frac[m]</code> is an array of 3 elements, <em>not</em> a scalar.</p>
<p>If your <code>dC[j, n]</code> and <code>dt</code> are scalars, </p>
<pre><code>dC[j,n]=dC[j,n]+dC_frac[m]*dt
</code></pre>
<p>This will assign an array of 3 elements to an entry. Thus the error.</p>
<p>To fix, just use</p>
<pre><code>dC_memory=np.zeros(3)
dC_frac=np.zeros(3)
</code></pre>
|
python|arrays|loops|numpy|indexing
| 2
|
1,825
| 48,694,286
|
Python - pandas xls import - difficulties removing certain row +
|
<p>[miniconda, python 3]</p>
<p>my data .xls to download: (password: stack)
<a href="http://download.hellshare.cz/file-xls/66813293/" rel="nofollow noreferrer">Download .xls</a></p>
<p>0)
You can notice that my xls file has a big merged cell in the first row and also some merged cells in rows 2 and 3. Is this a problem? If it is, can I unmerge them somehow?</p>
<p>1)
I want to remove the first row of this xls as there is no important info in it for me. I guess the problem is that the row is merged? I wanted to use df = df.drop([0]) for that, but instead of removing this huge first row, it removes the row with the column headers (starting with "ID klienta"). Why is that?</p>
<p>2)
After I get rid of the first row, I'd like to process some numbers from various columns (in my example I want to separate the data from the "Stav" column). How do I do that? I have seen somewhere that it is possible to index rows/columns just by the header name (the string). For example, I wanted to separate the data from the column with header "Stav" using: Stav = df['Stav']</p>
<p>My code so far is:</p>
<pre><code>import pandas as pd
import numpy as np
print("\n\n*********************************************")
print("My xls processing script\n")
print("*********************************************\n")
#load data
df = pd.read_excel("file.xls")
#My unsucessful attempt to get rid of first row
#uncomment this and it will remove the second row instead of the first row
#df = df.drop([0])
#print preview of 6 rows 5 columnts
print(df.iloc[0:5, 0:4])
print("\n\n")
#My unsuccessful attempt to get column date with header 'ID'
Stav = df['Stav']
print(Stav)
</code></pre>
<p>Output on the console:</p>
<pre><code>(xls_env) C:\Users\Slavek\Documents\PythonScripts>python xld_proj.py
*********************************************
My xls processing script
*********************************************
Lidé, které jsem podpořil Unnamed: 1 Unnamed: 2 Unnamed: 3
0 ID klienta NΓ‘zev Stav ID příběhu
1 NaN NaN NaN NaN
2 zonky214882 Jeep na cestě 181187
3 zonky235862 Notebook k práci i relaxu na cestě 206317
4 zonky230378 Dětský pokoj v pořádku 199686
Traceback (most recent call last):
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\indexes\base.py", line 2525, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Stav'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "xld_proj.py", line 20, in <module>
Stav = df['Stav']
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\frame.py", line 2139, in __getitem__
return self._getitem_column(key)
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\frame.py", line 2146, in _getitem_column
return self._get_item_cache(key)
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\generic.py", line 1842, in _get_item_cache
values = self._data.get(item)
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\internals.py", line 3843, in get
loc = self.items.get_loc(item)
File "C:\miniconda\envs\xls_env\lib\site-packages\pandas\core\indexes\base.py", line 2527, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Stav'
</code></pre>
|
<p>I think you want the <code>header</code> option on read-in:</p>
<pre><code>df = pd.read_excel("file.xls", header =[0,1,2])
</code></pre>
<p>Then you can drop the headers you don't want:</p>
<pre><code> df.columns = df.columns.droplevel([0,1])
</code></pre>
<p>or something along those lines. The sheet is a little messy since the variable names are scattered across the two sub headers. I'd clean it up so they are all on the same line.</p>
<p>or keep all the headers and see here:
<a href="https://stackoverflow.com/questions/37369317/how-do-i-change-or-access-pandas-multiindex-column-headers">How do I change or access pandas MultiIndex column headers?</a></p>
|
python|excel|pandas|xls
| 1
|
1,826
| 48,696,016
|
Find flattened indices of symmetric elements of a 2D array
|
<p>I have a 5 x 5 numpy array:</p>
<pre><code>a = np.arange(25).reshape(5, 5)
</code></pre>
<p>If <code>a</code> is flattened:</p>
<pre><code> b = a.flatten()
>> [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24]
</code></pre>
<p>I am trying to find the indices of symmetry of <code>b</code> for <code>a</code> where 'symmetry' is defined with respect to <code>a</code> shown below:</p>
<pre><code>a = np.array([
[a, d, c, d, a],
[b, e, f, e, b],
[c, f, g, f, c],
[b, e, f, e, b],
[a, d, c, d, a]
])
</code></pre>
<p>where the elements are 'symmetric' in a square pattern about <code>g</code>. The elements shown in the array above are merely placeholders to show the corresponding indices/locations that should be returned. </p>
<p>So, for a given <em>n</em> x <em>m</em> array the function will return all of the corresponding indices like so:</p>
<pre><code>[ 0 4 20 24]
[ 1 3 21 23]
[ 2 10 14 22]
[ 5 9 15 19]
[ 6 8 16 18]
[ 7 11 13 17]
[12]
</code></pre>
<p>where the output above corresponds to the 'symmetry' like so:</p>
<pre><code>[a a a a]
[d d d d]
[c c c c ]
... etc
</code></pre>
<p>All help is appreciated, thanks in advance!</p>
|
<p>I'll take a shot for square matrices. It will not be too hard to generalize the algorithm to non-square matrices if you know how the symmetry is defined for those cases.</p>
<p>First, create a 2D array of flattened indices:</p>
<pre><code>ind = np.arange(a.size).reshape(a.shape)
mid = a.shape[0] // 2
odd = a.shape[0] % 2
</code></pre>
<p>Now create a 3D array of the above array flipped along vertical and horizontal axes stacked along 3rd axis:</p>
<pre><code>idx = np.dstack([ind, np.fliplr(ind), np.flipud(ind), np.flipud(np.fliplr(ind))])
idx = idx[:mid, :mid].reshape((-1, 4)).tolist()
</code></pre>
<p>Now, <strong>only if</strong> the size of the array is odd, find the elements sitting/located on middle row and column:</p>
<pre><code>if odd:
    idx += np.dstack([ind[:mid, mid], ind[-1:mid:-1, mid],
                      ind[mid, :mid], ind[mid, -1:mid:-1]]).reshape(-1, 4).tolist()
    idx += [[ind[mid, mid]]]
</code></pre>
<p>Finally, if you want the result sorted at least within a set of indices:</p>
<pre><code>idx = sorted(map(sorted, idx))
</code></pre>
<p>Then, for a <code>5x5</code> array one gets:</p>
<pre><code>>>> print(idx)
[[0, 4, 20, 24],
[1, 3, 21, 23],
[2, 10, 14, 22],
[5, 9, 15, 19],
[6, 8, 16, 18],
[7, 11, 13, 17],
[12]]
</code></pre>
|
python|arrays|numpy|indexing
| 1
|
1,827
| 70,837,552
|
Return elements of a column based on a different column in Pandas
|
<p>I want to write a program that returns names in a list based on the number of reports, in descending order,
like ['Jack', 'Joe', 'Rick'....]</p>
<pre><code>df=
Number_of_reports Name
5 Rick
4 Amanda
7 Joe
8 Jack
2 Ryan
</code></pre>
<pre><code>mylist=[]
greater_value=0
for i in df['Number_of_Reports']:
    if greater_value > i:
        mylist.append(df['Name'])
</code></pre>
<p>Any help will be greatly appreciated</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.tolist.html" rel="nofollow noreferrer"><code>to_list</code></a>:</p>
<p><code>names = df.sort_values(by='Number_of_reports', ascending=False)['Name'].tolist()</code></p>
<p>Which gives:</p>
<p><code>['Jack', 'Joe', 'Rick', 'Amanda', 'Ryan']</code></p>
|
python|python-3.x|pandas
| 3
|
1,828
| 70,906,013
|
Best way to replicate SQL "update case when..." with Pandas?
|
<p>I have this sample data set</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>City</th>
</tr>
</thead>
<tbody>
<tr>
<td>LAL</td>
</tr>
<tr>
<td>NYK</td>
</tr>
<tr>
<td>Dallas</td>
</tr>
<tr>
<td>Detroit</td>
</tr>
<tr>
<td>SF</td>
</tr>
<tr>
<td>Chicago</td>
</tr>
<tr>
<td>Denver</td>
</tr>
<tr>
<td>Phoenix</td>
</tr>
<tr>
<td>Toronto</td>
</tr>
</tbody>
</table>
</div>
<p>And what I want to do is update certain values with specific values, leaving the rest as they are.</p>
<p>So, with SQL I would do something like this:</p>
<pre><code>update table1
set city = case
when city='LAL' then 'Los Angeles'
when city='NYK' then 'New York'
Else city
end
</code></pre>
<p>What would be the best way to do this in Pandas?</p>
|
<p>You can directly replace the values like this:</p>
<pre><code>replacement_dict = {"LAL": "Los Angeles", "NYK": "New York"}
for key, value in replacement_dict.items():
    df['City'][df['City'] == key] = value
</code></pre>
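<p>A possible alternative worth noting: <code>Series.replace</code> accepts the whole mapping at once and, like the SQL <code>CASE ... ELSE city END</code>, leaves values that are not in the mapping unchanged. It also avoids the chained assignment in the loop above, which pandas may warn about.</p>
<pre><code>df['City'] = df['City'].replace({'LAL': 'Los Angeles', 'NYK': 'New York'})
</code></pre>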
|
python|pandas|dataframe|str-replace
| 1
|
1,829
| 70,979,489
|
Why do I get negative dimensions are not allowed merging rasterio datasets?
|
<p>I am trying to use Python's rasterio library to analyze GIS wind data available <a href="https://www.nrel.gov/gis/assets/images/us-wind-data.zip" rel="nofollow noreferrer">here</a>. I've written this reduced program:</p>
<pre><code>import numpy as np
import rasterio as rio
from rasterio.merge import merge
file1 = 'us-wind-data/wtk_conus_120m_mean_masked.tif'
file2 = 'us-wind-data/wtk_conus_140m_mean_masked.tif'
def f(old_data, new_data, old_nodata, new_nodata, index=None, roff=None, coff=None):
    old_data[:] = np.maximum(old_data, new_data)

with rio.open(file1, dtype=np.float64) as test1, rio.open(file2, dtype=np.float64) as test2:
    mosaic, out_trans = merge([test1, test2], method=f)
    print(mosaic)
</code></pre>
<p>and when running this I see:</p>
<pre><code>Traceback (most recent call last):
File "merge5.py", line 16, in <module>
mosaic, out_trans = merge([test1, test2], method=f)
File "/home/mmachenry/.local/lib/python3.8/site-packages/rasterio/merge.py", line 261, in merge
dest = np.zeros((output_count, output_height, output_width), dtype=dt)
ValueError: negative dimensions are not allowed
</code></pre>
<p>There are a lot of examples of this error on Stack Overflow, but none that I've found seem to match what I'm doing. It is, after all, a low-level numpy error. In my research I've found people experiencing issues with reading the data as the default float16 and then getting math overflow errors that produce negatives. So I bumped my dtype up to float64, and it did not improve the situation.</p>
<p>I'm attempting to write a simple function that will take several of these tif files with a wind speed at each data point and create a new tif file with my data point that is a function of all of the wind speeds.</p>
|
<p>What's so mysterious about it?</p>
<pre><code>In [142]: np.zeros((10,-2))
Traceback (most recent call last):
File "<ipython-input-142-c7db7030b12c>", line 1, in <module>
np.zeros((10,-2))
ValueError: negative dimensions are not allowed
</code></pre>
<p>One or more of these must be negative:</p>
<pre><code>(output_count, output_height, output_width)
</code></pre>
<p>But not knowing anything about <code>merge</code> or <code>rasterio</code> I don't know where those dimensions come from.</p>
<p>Can you tell us anything about <code>test1, test2</code> when this error occurs?</p>
<p>As written, this question is not reproducible - it involves files that we don't have access to (and probably don't want either).</p>
|
pandas|numpy|rasterio
| 0
|
1,830
| 51,867,278
|
How to understand this python code, thanks a lot
|
<pre><code>import numpy as np
p = np.array([[1,2,3]])
print(p[np.array([0]), np.array([1,0,0])])
# output:[2,1,1]
</code></pre>
<p>I am trying to understand why this is the output.</p>
|
<p><code>p</code> is a (1,3) shape array. The indexing, which can also be written as</p>
<pre><code>p[ 0, [1,0,0]]
</code></pre>
<p>selects <code>p[0,1]</code>, <code>p[0,0]</code> and <code>p[0,0]</code>, that is, the 2 and the 1 (twice).</p>
<p>It's straightforward indexing with a list or array, also called advanced indexing.</p>
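<p>A small sketch of the equivalence (toy code, same array as above):</p>
<pre><code>import numpy as np

p = np.array([[1, 2, 3]])
rows = np.array([0])                      # broadcasts against the three column indices
cols = np.array([1, 0, 0])
print(p[rows, cols])                      # [2 1 1]
print([p[0, 1], p[0, 0], p[0, 0]])        # the same elements picked one by one
</code></pre>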
|
python-3.x|numpy
| 1
|
1,831
| 51,684,220
|
google.protobuf.text_format.ParseError: 9:18 : Couldn't parse integer: 03
|
<p>I'm using Python 3.6 and TensorFlow 1.5, and I'm following the link.</p>
<p>But I got the error:</p>
<pre><code>doe@doe:~/anaconda3/envs/tensorflow/models/research/object_detection$ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config
WARNING:tensorflow:From /home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py:124: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1500, in _ParseAbstractInteger
    return int(text, 0)
ValueError: invalid literal for int() with base 0: '03'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1449, in _ConsumeInteger
    result = ParseInteger(tokenizer.token, is_signed=is_signed, is_long=is_long)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1471, in ParseInteger
    result = _ParseAbstractInteger(text, is_long=is_long)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1502, in _ParseAbstractInteger
    raise ValueError('Couldn\'t parse integer: %s' % text)
ValueError: Couldn't parse integer: 03

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 184, in <module>
    tf.app.run()
  File "/home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
    _sys.exit(main(argv))
  File "/home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 136, in new_func
    return func(*args, **kwargs)
  File "train.py", line 93, in main
    FLAGS.pipeline_config_path)
  File "/home/doe/anaconda3/envs/tensorflow/models/research/object_detection/utils/config_util.py", line 94, in get_configs_from_pipeline_file
    text_format.Merge(proto_str, pipeline_config)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 536, in Merge
    descriptor_pool=descriptor_pool)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 590, in MergeLines
    return parser.MergeLines(lines, message)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 623, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 638, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
    merger(tokenizer, message, field)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 837, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
    merger(tokenizer, message, field)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 837, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
    merger(tokenizer, message, field)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 871, in _MergeScalarField
    value = _ConsumeInt32(tokenizer)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1362, in _ConsumeInt32
    return _ConsumeInteger(tokenizer, is_signed=True, is_long=False)
  File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1451, in _ConsumeInteger
    raise tokenizer.ParseError(str(e))
google.protobuf.text_format.ParseError: 9:18 : Couldn't parse integer: 03
</code></pre>
|
<p>I faced a similar problem relating to label_map_path when running on a local machine. I solved it by removing blank lines and stray spaces in the label map .pbtxt file. Please check the pipeline config file as well.</p>
|
python-3.x|ubuntu|tensorflow|video-processing|object-detection-api
| 0
|
1,832
| 41,708,059
|
Python Pandas: selecting 1st element in array in all cells
|
<p>What I am trying to do is select the 1st element of each cell regardless of the number of columns or rows (they may change based on user defined criteria) and make a new pandas dataframe from the data. My actual data structure is similar to what I have listed below.</p>
<pre><code> 0 1 2
0 [1, 2] [2, 3] [3, 6]
1 [4, 2] [1, 4] [4, 6]
2 [1, 2] [2, 3] [3, 6]
3 [4, 2] [1, 4] [4, 6]
</code></pre>
<p>I want the new dataframe to look like:</p>
<pre><code> 0 1 2
0 1 2 3
1 4 1 4
2 1 2 3
3 4 1 4
</code></pre>
<p>The code below generates a data set similar to mine and attempts to do what I want to do in my code without success (d), and mimics what I have seen in a similar question with success(c ; however, only one column). The link to the similar, but different question is here :<a href="https://stackoverflow.com/questions/26069235/python-pandas-selecting-element-in-array-column">Python Pandas: selecting element in array column</a></p>
<pre><code>import pandas as pd
zz = pd.DataFrame([[[1,2],[2,3],[3,6]],[[4,2],[1,4],[4,6]],
[[1,2],[2,3],[3,6]],[[4,2],[1,4],[4,6]]])
print(zz)
x= zz.dtypes
print(x)
a = pd.DataFrame((zz.columns.values))
b = pd.DataFrame.transpose(a)
c =zz[0].str[0] # this will give the 1st value for each cell in columns 0
d= zz[[b[0]].values].str[0] #attempt to get 1st value for each cell in all columns
</code></pre>
|
<p>You can use <code>apply</code> and for selecting first value of list use <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str" rel="noreferrer">indexing with str</a>:</p>
<pre><code>print (zz.apply(lambda x: x.str[0]))
0 1 2
0 1 2 3
1 4 1 4
2 1 2 3
3 4 1 4
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="noreferrer"><code>stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="noreferrer"><code>unstack</code></a>:</p>
<pre><code>print (zz.stack().str[0].unstack())
0 1 2
0 1 2 3
1 4 1 4
2 1 2 3
3 4 1 4
</code></pre>
|
arrays|python-3.x|pandas|dataframe
| 11
|
1,833
| 42,087,542
|
Pip help install
|
<p>I am trying to install tensorflow on ubuntu from: <a href="https://www.tensorflow.org/get_started/os_setup#virtualenv_installation" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/os_setup#virtualenv_installation</a>
But when I get to the step: <code>pip install --upgrade $ TF_BINARY_URL</code>
I get this error:</p>
<blockquote>
<p><code>You must give at least one requirement to install (see "pip help install").</code></p>
</blockquote>
<p>Any help please</p>
|
<p>That's because the environment variable <code>$TF_BINARY_URL</code> is not set. You must export it first, as described in the docs you provided.</p>
<blockquote>
<pre><code># Ubuntu/Linux 64-bit, CPU only, Python 2.7
(tensorflow)$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl
</code></pre>
</blockquote>
<p>(or any other url specific to arch/platform)</p>
|
python|tensorflow|pip|ubuntu-16.04
| 0
|
1,834
| 64,331,477
|
Optimize a double loop with mesh grids involved
|
<p>I am doing a double loop to sum a function that has mesh grids as an input. The problem is that it runs very slow... I want to optimize the code with an alternative procedure, maybe using vectorize function of numpy, but I don't see how can be implemented. I show you the code that I have:</p>
<pre><code>import numpy as np
import time
Lxx = 2.
Lyy = 1.0
dxx = dyy = 0.01
nxx = 100
nyy = 100
XX, YY = np.meshgrid(np.arange(0, Lxx+dxx, dxx), np.arange(0, Lyy+dyy, dyy)) #mesh grid
def solution(xx,yy,nnmax,mmmax):
sol = 0.
for m in range(nnmax):
for n in range(mmmax):
sol = sol+np.sin(XX*0.356*n)+np.cos(YY*2.3*m)
return sol
start = time.time()
solution(XX,YY,nxx,nyy)
end = time.time()
print ("TIME", end-start)
</code></pre>
<p>What I want is to make the sum for large values in nxx, nyy. But of course then it takes a lot of time...This is the reason why I want optimize the code.</p>
|
<p>If you notice, the terms of the sum are completely separable: each term depends on only one loop variable. You can therefore create independent (smaller) arrays for the sum over <code>XX</code>, <code>n</code> and <code>YY</code>, <code>m</code>, take the trig functions and sum those, and multiply each partial sum by the number of iterations of the other loop (since each term is repeated that many times). The final grid can be accumulated by broadcasting.</p>
<p>To begin, don't bother making the grid:</p>
<pre><code>x = np.arange(0, Lxx+dxx, dxx)
y = np.arange(0, Lyy+dyy, dyy)
</code></pre>
<p>Compute a single sum using broadcasting:</p>
<pre><code>n = np.arange(nyy)[:, None]
m = np.arange(nxx)[:, None]
sumx = nxx * np.sin(x * 0.356 * n).sum(0)  # sin term from the question, summed over n and repeated once per m
sumy = nyy * np.cos(y * 2.3 * m).sum(0)    # cos term, summed over m and repeated once per n
</code></pre>
<p>You can use the same broadcasting trick to get the final sum on a grid with the same shape as the original meshgrid:</p>
<pre><code>result = sumy[:, None] + sumx   # shape (len(y), len(x)), same as XX and YY
</code></pre>
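<p>As a quick sanity check (a sketch that reuses <code>x</code> and <code>y</code> from above and deliberately small loop counts, purely so the brute-force reference stays fast):</p>
<pre><code>nx_s, ny_s = 7, 5
XX, YY = np.meshgrid(x, y)
# brute-force double loop, exactly as in the question
brute = sum(np.sin(XX * 0.356 * n_) + np.cos(YY * 2.3 * m_)
            for m_ in range(nx_s) for n_ in range(ny_s))
# separable version with the repetition factors
fast = (ny_s * np.cos(y * 2.3 * np.arange(nx_s)[:, None]).sum(0)[:, None]
        + nx_s * np.sin(x * 0.356 * np.arange(ny_s)[:, None]).sum(0))
print(np.allclose(brute, fast))  # expected: True
</code></pre>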
|
python|numpy|optimization
| 0
|
1,835
| 64,474,589
|
RASA --- ERROR: Could not find a version that satisfies the requirement tensorflow
|
<p>I am trying to install rasa and there is a problem with the tensorflow (Windows 10)</p>
<p>As a pre-requisite, I have installed Anaconda, VC++</p>
<p>Steps -</p>
<ol>
<li>Open Anaconda with admin rights</li>
<li>activate rasa</li>
<li>pip install rasa-x --extra index url <a href="https://pypi.rasa.com/simple" rel="nofollow noreferrer">https://pypi.rasa.com/simple</a></li>
<li>pip install rasa</li>
</ol>
<p>Error - ERROR: Could not find a version that satisfies the requirement tensorflow</p>
<hr />
<p>I tried to install tensorflow before installing rasa, apparently the error remains the same even for installing tensorflow .... Need some pointers to install rasa and tensorflow so that I can move ahead.</p>
|
<p>You need to use Python version 3.6 or 3.7. See the screenshot below:
<a href="https://i.stack.imgur.com/RkSTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RkSTZ.png" alt="enter image description here" /></a></p>
|
tensorflow|rasa
| 0
|
1,836
| 64,292,736
|
Pandas dataframe conditional cumulative sum based on date range
|
<p>I have a pandas dataframe:</p>
<pre><code> Date Party Status
-------------------------------------------
0 01-01-2018 John Sent
1 13-01-2018 Lisa Received
2 15-01-2018 Will Received
3 19-01-2018 Mark Sent
4 02-02-2018 Will Sent
5 28-02-2018 John Received
</code></pre>
<p>I would like to add new columns that perform a <code>.cumsum()</code>, but it is conditional on the dates. It would look like this:</p>
<pre><code> Num of Sent Num of Received
Date Party Status in Past 30 Days in Past 30 Days
-----------------------------------------------------------------------------------
0 01-01-2018 John Sent 1 0
1 13-01-2018 Lisa Received 1 1
2 15-01-2018 Will Received 1 2
3 19-01-2018 Mark Sent 2 2
4 02-02-2018 Will Sent 2 2
5 28-02-2018 John Received 1 1
</code></pre>
<p>I managed to implement what I need by writing the following code:</p>
<pre><code>def inner_func(date_var, status_var, date_array, status_array):
sent_increment = 0
received_increment = 0
for k in range(0, len(date_array)):
if((date_var - date_array[k]).days <= 30):
if(status_array[k] == "Sent"):
sent_increment += 1
elif(status_array[k] == "Received"):
received_increment += 1
return sent_increment, received_increment
</code></pre>
<pre><code>import pandas as pd
import time
df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"]),
"Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
"Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df = df.sort_values("Date")
date_array = []
status_array = []
for i in range(0, len(df)):
date_var = df.loc[i,"Date"]
date_array.append(date_var)
status_var = df.loc[i,"Status"]
status_array.append(status_var)
sent_count, received_count = inner_func(date_var, status_var, date_array, status_array)
df.loc[i, "Num of Sent in Past 30 days"] = sent_count
df.loc[i, "Num of Received in Past 30 days"] = received_count
</code></pre>
<p>However, the process is computationally expensive and painfully slow when <code>df</code> is large, since the nested loops go through the dataframe twice. Is there a more pythonic way to implement what I am trying to achieve without iterating through the dataframe in the way I am doing?</p>
<p><strong>Update 2</strong></p>
<p>Michael has provided the solution to what I am looking for: <a href="https://stackoverflow.com/a/64293354/10199500">here</a>. Let's assume that I want to apply the solution on <code>groupby</code> objects. For example, using the rolling solution to compute the cumulative sums for each party:</p>
<pre><code> Sent past 30 Received past 30
Date Party Status days by party days by party
-----------------------------------------------------------------------------------
0 01-01-2018 John Sent 1 0
1 13-01-2018 Lisa Received 0 1
2 15-01-2018 Will Received 0 1
3 19-01-2018 Mark Sent 1 0
4 02-02-2018 Will Sent 1 1
5 28-02-2018 John Received 0 1
</code></pre>
<p>I have attempted to regenerate the solution for the using the <code>groupby</code> method below:</p>
<pre><code>l = []
grp_obj = df.groupby("Party")
grp_obj.rolling('30D', min_periods=1)["dummy"].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
</code></pre>
<p>But I ended up with incorrect values. I know that it is happening because the <code>concat</code> method is combining the dataframes without condsidering their indices, since <code>groupby</code> orders the data differently. Is there a way I can modify the list appending to include the original index, such that I can merge/join the value_counts dataframe to the original one?</p>
|
<p>If you set <code>Date</code> as index and convert <code>Status</code> temporary to a categorical you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer"><code>pd.rolling</code></a> with a little trick</p>
<pre><code>df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category',copy=False).cat.codes
l = []
df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
pd.concat(
[df,
(pd.DataFrame(l)
.rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
.fillna(0)
.astype('int'))
], axis=1).drop('dummy', 1)
</code></pre>
<p>Out:</p>
<pre><code> Date Party Status Received past 30 Days Sent past 30 Days
0 2018-01-01 John Sent 0 1
1 2018-01-13 Lisa Received 1 1
2 2018-01-15 Will Received 2 1
3 2018-01-19 Mark Sent 2 2
4 2018-02-02 Will Sent 2 2
5 2018-02-28 John Received 1 1
</code></pre>
<hr />
<h2>Maintaining an original index to allow subsequent merging</h2>
<p>Slightly adjust the data to have different sequences in <code>Date</code> and <code>index</code></p>
<pre><code>df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "03-01-2018", "19-01-2018", "08-02-2018", "22-02-2018"]),
"Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
"Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df
</code></pre>
<p>Out:</p>
<pre><code> Date Party Status
0 2018-01-01 John Sent
1 2018-01-13 Lisa Received
2 2018-03-01 Will Received
3 2018-01-19 Mark Sent
4 2018-08-02 Will Sent
5 2018-02-22 John Received
</code></pre>
<p>Store the original index after sorting by <code>Date</code> and reindex after operationing on the dataframe sorted by <code>Date</code></p>
<pre><code>df = df.sort_values('Date')
df = df.reset_index()
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category',copy=False).cat.codes
l = []
df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
df = pd.concat(
[df,
(pd.DataFrame(l)
.rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
.fillna(0)
.astype('int'))
], axis=1).drop('dummy', 1)
df.set_index('index')
</code></pre>
<p>Out:</p>
<pre><code> Date Party Status Received past 30 Days Sent past 30 Days
index
0 2018-01-01 John Sent 0 1
1 2018-01-13 Lisa Received 1 1
3 2018-01-19 Mark Sent 1 2
5 2018-02-22 John Received 1 0
2 2018-03-01 Will Received 2 0
4 2018-08-02 Will Sent 0 1
</code></pre>
<hr />
<h2>Counting values in groups</h2>
<p>Sort by <code>Party</code> and <code>Date</code> first to get the right order to append the grouped counts</p>
<pre><code>df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"]),
"Party": ["John", "Lisa", "Will", "Mark", "Will", "John"],
"Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]})
df = df.sort_values(['Party','Date'])
</code></pre>
<p>After that reindex before <code>concat</code> to append to the right rows</p>
<pre><code>df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category',copy=False).cat.codes
l = []
df.groupby('Party').rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
pd.concat(
[df,
(pd.DataFrame(l)
.rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
.fillna(0)
.astype('int'))
], axis=1).drop('dummy', 1).sort_values('Date')
</code></pre>
<p>Out:</p>
<pre><code> Date Party Status Received past 30 Days Sent past 30 Days
0 2018-01-01 John Sent 0 1
2 2018-01-13 Lisa Received 1 0
4 2018-01-15 Will Received 1 0
3 2018-01-19 Mark Sent 0 1
5 2018-02-02 Will Sent 1 1
1 2018-02-28 John Received 1 0
</code></pre>
<h2>Micro-Benchmark</h2>
<p>As this solution is also iterating over the dataset I compared the running times of both approaches. Only very small datasets were used because the original solution's runtime was increasing fast.</p>
<p>Results</p>
<p><a href="https://i.stack.imgur.com/WVU64.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WVU64.png" alt="benchmark results" /></a></p>
<p>Code to reproduce the benchmark</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import perfplot
def makedata(n=1):
df = pd.DataFrame({"Date": pd.to_datetime(["01-01-2018", "13-01-2018", "15-01-2018", "19-01-2018", "02-02-2018", "28-02-2018"]*n),
"Party": ["John", "Lisa", "Will", "Mark", "Will", "John"]*n,
"Status": ["Sent", "Received", "Received", "Sent", "Sent", "Received"]*n})
return df.sort_values("Date")
def rolling(df):
df = df.set_index('Date')
df['dummy'] = df['Status'].astype('category',copy=False).cat.codes
l = []
df.rolling('30D', min_periods=1)['dummy'].apply(lambda x: l.append(x.value_counts()) or 0)
df.reset_index(inplace=True)
return pd.concat(
[df,
(pd.DataFrame(l)
.rename(columns={1.0: "Sent past 30 Days", 0.0: "Received past 30 Days"})
.fillna(0)
.astype('int'))
], axis=1).drop('dummy', 1)
def forloop(df):
date_array = []
status_array = []
def inner_func(date_var, status_var, date_array, status_array):
sent_increment = 0
received_increment = 0
for k in range(0, len(date_array)):
if((date_var - date_array[k]).days <= 30):
if(status_array[k] == "Sent"):
sent_increment += 1
elif(status_array[k] == "Received"):
received_increment += 1
return sent_increment, received_increment
for i in range(0, len(df)):
date_var = df.loc[i,"Date"]
date_array.append(date_var)
status_var = df.loc[i,"Status"]
status_array.append(status_var)
sent_count, received_count = inner_func(date_var, status_var, date_array, status_array)
df.loc[i, "Num of Sent in Past 30 days"] = sent_count
df.loc[i, "Num of Received in Past 30 days"] = received_count
return df
perfplot.show(
setup=makedata,
kernels=[forloop, rolling],
n_range=[x for x in range(5, 105, 5)],
equality_check=None,
xlabel='len(df)'
)
</code></pre>
|
python|pandas|dataframe
| 2
|
1,837
| 64,309,862
|
Perform numpy product over non-zero elements of a row
|
<p>I have a 2d array <code>r</code>. What I want to do is to take the product of each row (excluding the zero elements in that row). For example if I have:</p>
<pre><code>r = [[1 2 0 3 4],
[0 2 5 0 1],
[1 2 3 4 0]]
</code></pre>
<p>Then what I want is to have another 2d array <code>result</code> such that:</p>
<pre><code>result = [[24],
[10],
[24]]
</code></pre>
<p>How can I achieve this using numpy.prod?</p>
|
<p>I think I figured it out:</p>
<pre><code>np.prod(r, axis = 1, where = r > 0, keepdims = True)
</code></pre>
<p>Output:</p>
<pre><code>array([[24],
[10],
[24]])
</code></pre>
|
python|numpy
| 1
|
1,838
| 64,481,705
|
how to get row total in pandas
|
<p><a href="https://i.stack.imgur.com/IZebn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IZebn.png" alt="enter image description here" /></a></p>
<p>I am trying to get the row total and column total from my dataframe. I have no issue with the column total. However, my row total is adding up all the job descriptions rather than showing the total.</p>
<p>here's my code:</p>
<pre><code>Newdata= data.groupby(['Job Description','AgeBand'])['AgeBand'].count().reset_index(name="count")
Newdata= Newdata.sort_values(by = ['AgeBand'],ascending=True)
df=Newdata.pivot_table(index='Job Description', values = 'count', columns = 'AgeBand').reset_index()
df.loc['Total',:]= df.sum(axis=0)
df.loc[:,'Total'] = df.sum(axis=1)
df=df.fillna(0).astype(int, errors='ignore')
df
</code></pre>
|
<p>First preselect the columns you wish to add row-wise, then use <code>df.sum(axis=1)</code>.</p>
<p>I think you're after:</p>
<pre><code>df.loc[:,'Total'] = df.loc[:,'20-29':'UP TO 20'].sum(axis=1)
</code></pre>
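<p>A variant that avoids hard-coding the age-band column names (a sketch, assuming 'Job Description' is the only non-numeric column) would be:</p>
<pre><code>df.loc[:, 'Total'] = df.select_dtypes('number').sum(axis=1)
</code></pre>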
|
pandas|dataframe|plotly
| 1
|
1,839
| 47,972,344
|
How to update GAN Generator and Discriminator asynchronously in Tensorflow?
|
<p>I want to develop a GAN with Tensorflow, with the Generator being an autoencoder and the Discriminator a Convolutional Neural Net with binary output. There is no problem developing the autoencoder and the CNN, but my idea is to train 1 epoch for each of the components (Discriminator and Generator) and repeat this cycle for 1000 epochs, keeping the results (weights) of the previous training epoch for the next one. How can I operationalize this?</p>
|
<p>If you have two ops called <code>train_step_generator</code> and <code>train_step_discriminator</code> (each of which are, for example, of the form <code>tf.train.AdamOptimizer().minimize(loss)</code> with an appropriate loss for each), then your training loop should be something similar to the following structure:</p>
<pre><code>with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(1000):
if epoch%2 == 0: # train discriminator on even epochs
            for i in range(training_set_size // batch_size):  # integer division so range() receives an int
z_ = np.random.normal(0,1,batch_size) # this is the input to the generator
batch = get_next_batch(batch_size)
sess.run(train_step_discriminator,feed_dict={z:z_, x:batch})
else: # train generator on odd epochs
            for i in range(training_set_size // batch_size):
z_ = np.random.normal(0,1,batch_size) # this is the input to the generator
sess.run(train_step_generator,feed_dict={z:z_})
</code></pre>
<p>The weights will persist between iterations.</p>
|
tensorflow|generative|adversarial-machines
| 2
|
1,840
| 48,968,675
|
Tensorflow GPU import error
|
<p>I have CUDA 8.0, and I can download cuDNN. Currently, I have cuDNN version 7.0.5 for Linux. </p>
<p>I do not have administrator privileges. </p>
<p>When I tried to install TensorFlow version 1.4 for GPU, I got this error:</p>
<pre><code> ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
</code></pre>
<p>I figured this was due to the absence of cuDNN on my machine. I downloaded version 7.0.5, at the advice of the sysadmin, which is of course not the version the error message wanted me to get (it wanted version 6). </p>
<p>So I thought, I'll try Tensorflow version 1.5 for GPU. I got this error:</p>
<pre><code>ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
</code></pre>
<p>What should I do? Is there a way to download older versions of cuDNN? Or a way to download cublas 9.0 somewhere?</p>
|
<p>Yes, if you register at nvidia you can also download older versions of cuDNN. It's a little hidden though. Make sure you download the right version which is compatible with your cuda version. Also don't forget to set the CUDA_HOME environment variable for tensorflow to find your GPU.</p>
|
tensorflow
| 0
|
1,841
| 58,615,866
|
Iterative Creation and Naming of DataFrames
|
<p>Posting in continuation of <a href="https://stackoverflow.com/questions/58609931/pandas-multiple-dataframes-from-other-dataframes/58612808?noredirect=1#comment103540713_58612808">Pandas Multiple DataFrames from other DataFrames</a>. </p>
<p>Managed to iterate over multiple smaller dataframes (please note that the supermarket names have been added manually to aid in the understanding of the question; the dataframe names do not exist as a property of the dataframes):</p>
<pre><code> Loblaws
Summer Winter
Milk -7800.0 -3600.0
Salt -9000.0 -4500.0
Pear -15300.0 -11700.0
Wal-Mart
Summer Winter
Milk -14700.0 -10200.0
Salt -7500.0 -4800.0
Pear -3000.0 -9600.0
Whole Foods
Summer Winter
Milk -11500.0 -7500.0
Salt -7000.0 -8500.0
Pear -1000.0 -6500.0
</code></pre>
<p>and merge with the "base" dataframe on "Seasons":</p>
<pre><code>for df in locationlist:
df = df.transpose()
merged_dataframe = pd.merge(dfs, df, left_on='Season',right_index = True)
merged_dataframe.name = str(df)
merged_dataframes.append(merged_dataframe)
display(merged_dataframe)
</code></pre>
<p>by transposing such that the output looks like:</p>
<pre><code> Season Milk Salt Pear
Date
2018-01-24 Winter -7500.0 -8500.0 -6500.0
2018-01-25 Winter -7500.0 -8500.0 -6500.0
2018-01-26 Winter -7500.0 -8500.0 -6500.0
2018-01-27 Winter -7500.0 -8500.0 -6500.0
2018-01-28 Winter -7500.0 -8500.0 -6500.0
... ... ... ... ...
</code></pre>
<p>However, trying to return the name as a property using:</p>
<pre><code>for dfs in merged_dataframes:
print(dfs.name)
</code></pre>
<p>prints out the individual dataframes for each supermarket in their pre-merged format like:</p>
<pre><code> Milk Salt Pear
Summer -7800.0 -9000.0 -15300.0
Winter -3600.0 -4500.0 -11700.0
Milk Salt Pear
Summer -14700.0 -7500.0 -3000.0
Winter -10200.0 -4800.0 -9600.0
Milk Salt Pear
Summer -11500.0 -7000.0 -1000.0
Winter -7500.0 -8500.0 -6500.0
</code></pre>
|
<p>...continuing from the previous question...
I see what's happening here:
when you do <code>merged_dataframe.name = str(df)</code> you seem to want the name of the variable from which the dataframe came.
What actually happens is that you take the whole dataframe that df refers to (an original supermarket dataframe), turn it all into a string (using the str method), and assign that whole dataframe as the name.</p>
<p>Once you actually print the names you get the "name", which is the whole dataframe…</p>
<p>I have good news and bad news: </p>
<p>The bad news: You cannot recover the name of the variable that referred to the dataframe originally, when you wrote this line: <code>supermarkets = [loblaws, wal_mart, whole_foods]</code> That variable name is not accessible from within the loop.</p>
<p>The good news: You can work around this by assigning a name variable to the original dataframes by doing something like this before the for loop:</p>
<pre><code>merged_dataframes = []
# first assign a name attribute to each original dataframe, then put them all in a list
loblaws.name = "Loblaws"
wal_mart.name = "Wal-Mart"
whole_foods.name = "Whole Foods"
locationlist = [loblaws, wal_mart, whole_foods]

for df in locationlist:
    name_str = df.name
    df = df.transpose()
    merged_dataframe = pd.merge(dfs, df, left_on='Season', right_index=True)
    merged_dataframe.name = name_str
    merged_dataframes.append(merged_dataframe)
</code></pre>
<p>I hope this works for you! Let me know how it goes!</p>
|
python|pandas|dataframe
| 1
|
1,842
| 58,869,684
|
using model prediction inside another model
|
<p>How can one use model.predict inside another model? I need to add a layer at the end of a model that uses predictions from another model.</p>
<p>I get this error:</p>
<pre><code>ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 10)
</code></pre>
<p>when trying the following: </p>
<pre><code>...
model1_outputs = model1.predict(model1_inputs)
model2 = Model(inputs=model2_inputs, outputs=model1_outputs)
</code></pre>
|
<p>I think I found it, I am using this instead: </p>
<pre><code>model1_outputs = model1(model1_inputs)
model2 = Model(inputs=model2_inputs, outputs=model1_outputs)
</code></pre>
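<p>A minimal self-contained sketch of the pattern (the input and layer sizes here are made-up placeholders): calling a <code>Model</code> on symbolic tensors nests it as a layer, whereas <code>.predict</code> returns NumPy arrays that cannot be used to build a graph.</p>
<pre><code>from tensorflow.keras import layers, Model

inp1 = layers.Input(shape=(10,))
model1 = Model(inp1, layers.Dense(5)(inp1))

inp2 = layers.Input(shape=(10,))
outputs = model1(inp2)                       # model1 used as a layer, stays symbolic
model2 = Model(inputs=inp2, outputs=outputs)
model2.summary()
</code></pre>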
|
python|tensorflow|keras|deep-learning
| 0
|
1,843
| 70,125,424
|
Python: convert output of confidence interval to excel
|
<p>I calculated a 95% confidence interval in Python with this code:</p>
<pre><code>d = st.t.interval(alpha=0.95, df=len(df_efw)-1, loc=np.mean(df_efw).mean(), scale=st.sem(df_efw.stack()))
</code></pre>
<p>My output is: <code>(2540.3603658087004, 2640.3233923612343)</code></p>
<p>I want to convert this into an exisiting excel sheet with this code:</p>
<pre><code>ws.cell(row=4, column= 3).value = d
</code></pre>
<p>But my error is: <code>ValueError: Cannot convert (2540.3603658087004, 2640.3233923612343) to Excel</code></p>
<p>How can I convert it to Excel? I would prefer to write each value to its own cell (2540.36 in C4 and 2640.32 in C5).</p>
|
<p>Using the row and column notation, access the tuple items individually and increment the row when writing with <code>ws.cell</code>.</p>
<pre class="lang-py prettyprint-override"><code>ws.cell(row=4, column= 3).value = d[0] # C4
ws.cell(row=5, column= 3).value = d[1] # C5
</code></pre>
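<p>If the tuple could have more than two elements, a small generalization (hypothetical, but using the same <code>ws.cell</code> call as above) is to enumerate it:</p>
<pre><code># write each element of d to consecutive rows of column C, starting at C4
for offset, value in enumerate(d):
    ws.cell(row=4 + offset, column=3).value = value
</code></pre>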
|
python|excel|pandas
| 1
|
1,844
| 70,363,964
|
How to add a array to a column of a matrix? (python numpy)
|
<p>Like this:</p>
<pre><code>import numpy as np
a = np.zeros((3,3))
b = np.ones((3,1))
a[:,2] += b
</code></pre>
<p>expected:</p>
<pre><code>a =
0,0,1
0,0,1
0,0,1
</code></pre>
<p>in fact:</p>
<pre><code>ValueError: non-broadcastable output operand with shape (3,) doesn't match the broadcast shape (3,3)
</code></pre>
<p>What should I do?</p>
|
<p>Specifying a column range (a slice) is required: <code>a[:,2]</code> has shape <code>(3,)</code>, which cannot broadcast with <code>b</code> of shape <code>(3,1)</code>, while a slice such as <code>a[:,2:]</code> keeps the shape <code>(3,1)</code>.</p>
<p>e.g. <code>a[:,0:1]</code> for column 0, <code>a[:,1:2]</code> for column 1, and <code>a[:,2:]</code> for column 2.</p>
<pre><code>import numpy as np
a = np.zeros((3,3))
b = np.ones((3,1))
a[:,2:] += b
</code></pre>
<p>output:</p>
<blockquote>
<pre><code>array([[0., 0., 1.],
[0., 0., 1.],
[0., 0., 1.]])
</code></pre>
</blockquote>
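<p>Alternatively, the right-hand side can be reduced to one dimension so it matches the 1-D column view (a small sketch using the arrays defined above):</p>
<pre><code>a[:, 2] += b[:, 0]   # or equivalently: a[:, 2] += b.ravel()
</code></pre>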
|
python|numpy
| 1
|
1,845
| 70,208,833
|
Add/Substract datetime in pyspark.pandas
|
<p>I got an error in calculating the date using pyspark.pandas.
Is there any way to calculate the date with pyspark.pandas?</p>
<pre><code>import pyspark.pandas as ps
import pandas as pd
df = pd.DataFrame({'year': [2015, 2016],
'month': [2, 3],
'day': [4, 5]})
df = ps.DataFrame(df)
srs = ps.to_datetime(df)
srs + timedelta(days=3)
# this yields the same error
srs.add(timedelta(days=3))
# this yields the same error
srs + pd.TimeDelta(days=3)
</code></pre>
<p>this yields this error <code>TypeError: Addition can not be applied to datetimes.</code></p>
<p>while below works</p>
<pre><code>srs = srs.to_pandas()
srs + timedelta(days=3)
</code></pre>
|
<p>I did have a similar problem on <code>pyspark==3.2.1</code>, and this seemed to be the only workaround:</p>
<pre><code>(
ps.to_datetime(pd.Series(['2015-02-04', '2016-03-05']))
.apply(lambda single_dt: single_dt + pd.Timedelta(days=3))
)
</code></pre>
<p>Newer versions of Pyspark have <code>to_timedelta</code> function which solves this problem nicely too. As per <a href="https://spark.apache.org/docs/3.3.0/api/python/reference/pyspark.pandas/api/pyspark.pandas.to_timedelta.html?highlight=to_timedelta#pyspark.pandas.to_timedelta" rel="nofollow noreferrer">here</a></p>
|
python|pandas|datetime|pyspark|databricks
| 0
|
1,846
| 70,357,808
|
How can I get the sum of one column based on year, which is stored in another column?
|
<p>I have this code.</p>
<pre><code>cheese_sums = []
for year in milk_products.groupby(milk_products['Date']):
total = milk_products[milk_products['Date'] == year]['Cheddar Cheese Production (Thousand Tonnes)'].sum()
cheese_sums.append(total)
print(cheese_sums)
</code></pre>
<p>I am trying to sum all the Cheddar Cheese Production, which are stored as floats in the milk_products data frame. The Date column is a datetime object that holds only the year, but has 12 values representing each month. As it's written now, I can only print a list of six 0.0's.</p>
|
<p>I got it. It should be:</p>
<pre><code>cheese_sums = []
for year in milk_products['Date']:
total = milk_products[milk_products['Date'] == year]['Cheddar Cheese Production (Thousand Tonnes)'].sum()
if total not in cheese_sums:
cheese_sums.append(total)
print(cheese_sums)
</code></pre>
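<p>For reference, a more idiomatic sketch without the loop (assuming 'Date' really is a datetime column, so the year can be extracted with <code>.dt.year</code>):</p>
<pre><code>totals = (milk_products
          .groupby(milk_products['Date'].dt.year)['Cheddar Cheese Production (Thousand Tonnes)']
          .sum())
print(totals.tolist())
</code></pre>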
|
python|pandas
| 1
|
1,847
| 56,181,967
|
Group dataframe by week and get min and max dates within a week to new column
|
<p>I have a dataframe which includes columns "call_date", which is date of call and "call_week", which is number of week (week does not necessarily start on Monday or Sunday and does not necessarily last exactly 7 days):</p>
<p><a href="https://i.stack.imgur.com/5Kor0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Kor0.png" alt="enter image description here"></a></p>
<p>What I want to do is add new column to dataframe which would contain boundary dates of week separated by " - ". For example, if we have a <code>WEEK</code> 68 which has minimum <code>CALL_DATE</code> of <code>2019-04-25</code> and maximum <code>CALL_DATE</code> of <code>2019-04-30</code>, new column should contain value <code>2019-04-25 - 2019-04-30</code>.</p>
<p>I tried:
<code>dfg = df.groupby('WEEK')['CALL_DATE'].agg(['min', 'max']).reset_index()</code></p>
<p><code>dfg</code>:</p>
<p><a href="https://i.stack.imgur.com/7raFK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7raFK.png" alt="enter image description here"></a></p>
<p>Then I added those min and max columns to <code>df</code> via <code>join</code>:</p>
<p><code>df = df.join(dfg, lsuffix = 'WEEK', rsuffix = 'WEEK')</code></p>
<p>Now I am trying to apply <code>lambda</code> function to concatenate those columns in one which contains the result:</p>
<p><code>df['WEEK_TEXT'] = df.apply(lambda x : x['min'].strftime("%d.%m.%Y") + ' - ' + x['max'].strftime("%d.%m.%Y"))</code></p>
<p>But I get an error: <code>KeyError: ('min', 'occurred at index CONTACT_ID')</code></p>
<p>How do I fix that?</p>
|
<p>Better is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>Series.dt.strftime</code></a>:</p>
<pre><code>df['WEEK_TEXT'] = df['min'].dt.strftime("%d.%m.%Y") + ' - ' + df['max'].dt.strftime("%d.%m.%Y")
</code></pre>
<p>In your solution is necessary <code>axis=1</code> for processes by rows:</p>
<pre><code>f = lambda x : x['min'].strftime("%d.%m.%Y") + ' - ' + x['max'].strftime("%d.%m.%Y")
df['WEEK_TEXT'] = df.apply(f, axis=1)
</code></pre>
|
python-3.x|pandas|pandas-groupby
| 1
|
1,848
| 56,061,579
|
Inferencing from tflite model in Java
|
<p>I have exported a <code>tflite</code> model and using Python code on <a href="https://www.tensorflow.org/lite/convert/python_api?fbclid=IwAR1ie4Fq6dvKbCocYCQ2WG_l9x2XSs1Nr0_2ECyXhmrGC_TZUNOMOLrS0po#using_the_interpreter_from_model_data_" rel="nofollow noreferrer">this</a> link, I am able to do inferencing from this model. However, now I am trying to do inferencing in Android app using Java. I have been following official documentation <a href="https://www.tensorflow.org/lite/guide/inference?fbclid=IwAR2xUVdNSQpH2nMxQu020nw_QaXAB6owDaFM_1jpLVQI61SA-l1jYOzAtE0#java_2" rel="nofollow noreferrer">here</a> but I am unable to get it work. Can someone guide me how to get it done? All I need is following steps.</p>
<ol>
<li>Read <code>tflite</code> model.</li>
<li>Generate dummy record of shape [1,200,3]</li>
<li>Get inference from <code>tflite</code> model and print it.</li>
</ol>
<p>I have been reading tflite demos but still could not get around it. To load model, I use </p>
<pre><code>Interpreter interpreter = new Interpreter(file_of_a_tensorflowlite_model)
</code></pre>
<p>from official document and get following error:</p>
<pre><code>error: no suitable constructor found for Interpreter(String)
constructor Interpreter.Interpreter(File) is not applicable
(argument mismatch; String cannot be converted to File)
</code></pre>
<p>I am unable to resolve it. How do I do this simple task?</p>
|
<p>You can paste the TFLite model in your assets folder of your app. And then, use this code to load its <code>MappedByteBuffer</code>.</p>
<pre><code>private MappedByteBuffer loadModelFile() throws IOException {
    String MODEL_ASSETS_PATH = "recog_model.tflite";
    AssetFileDescriptor assetFileDescriptor = context.getAssets().openFd(MODEL_ASSETS_PATH) ;
    FileInputStream fileInputStream = new FileInputStream( assetFileDescriptor.getFileDescriptor() ) ;
    FileChannel fileChannel = fileInputStream.getChannel() ;
    long startoffset = assetFileDescriptor.getStartOffset() ;
    long declaredLength = assetFileDescriptor.getDeclaredLength() ;
    return fileChannel.map( FileChannel.MapMode.READ_ONLY , startoffset , declaredLength ) ;
}
</code></pre>
<p>And then call it in the constructor.</p>
<pre><code>Interpreter interpreter = new Interpreter( loadModelFile() )
</code></pre>
|
java|tensorflow
| 2
|
1,849
| 56,110,001
|
Pandas Dataframe resample week, starting first day of the year
|
<p>I have a dataframe containing hourly data, i want to get the max for each week of the year, so i used resample to group data by week</p>
<pre><code>weeks = data.resample("W").max()
</code></pre>
<p>the problem is that week max is calculated starting the first monday of the year, while i want it to be calculated starting the first day of the year.</p>
<p>I obtain the following result, where you can notice that there is 53 weeks, and the last week is calculated on the next year while 2017 doesn't exist in the data</p>
<pre><code>Date dots
2016-01-03 0.647786
2016-01-10 0.917071
2016-01-17 0.667857
2016-01-24 0.669286
2016-01-31 0.645357
Date dots
2016-12-04 0.646786
2016-12-11 0.857714
2016-12-18 0.670000
2016-12-25 0.674571
2017-01-01 0.654571
</code></pre>
<p>is there a way to calculate week for pandas dataframe starting first day of the year?</p>
|
<p>Find the starting day of the year, for example let's say it's Friday, and then you can specify an <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#anchored-offsets" rel="nofollow noreferrer">anchoring suffix</a> to resample in order to calculate weeks starting from the first day of the year:
<code>weeks = data.resample("W-FRI").max()</code></p>
|
python|pandas|dataframe|resampling|week-number
| 3
|
1,850
| 56,109,029
|
How to add label to numpy.ndarray?
|
<p>I am just trying to add a label to a numpy.ndarray.</p>
<pre><code>numpy.ndarray's shape is (?, 1, 100, 100)
[[[ 1 1 1 ... 1 1 1]
...
[ 1 1 1 ... 1 1 1]]]
</code></pre>
<p>and label is <code>[1,0]</code> or <code>[0,1]</code></p>
<p>so i want shape like this</p>
<pre><code>[[[ 1 1 1 ... 1 1 1]
...
[ 1 1 1 ... 1 1 1]]] , [1,0]
</code></pre>
<p>or</p>
<pre><code>[[[ 1 1 1 ... 1 1 1]
...
[ 1 1 1 ... 1 1 1]]] , [0,1]
</code></pre>
<p>how to make this shape???</p>
<p>I tried like this but doesn't work.</p>
<pre class="lang-py prettyprint-override"><code>data_train = []
for i in range(len(true_data.tolist())):
true_data[i].append([1,0])
data_train.append(true_data[i])
</code></pre>
|
<p>I found the answer by myself!</p>
<p>this is my code</p>
<pre class="lang-py prettyprint-override"><code>data_train = []
true_list = true_data.tolist()
for index in range(len(true_list)):
true_list[index].append([1,0])
temp = np.asarray(true_list[index])
data_train.append(temp)
</code></pre>
|
python|numpy|tensorflow
| 0
|
1,851
| 55,838,535
|
How to add border in pandas dataframe to html table row header?
|
<p>Pandas style options let me format the data, but I want to add a border to the column headers, that is, row 0.</p>
<pre><code>htmlFooterOutdatedList = dfServerOutdated[['System Name','IP Address','Last Communication','DAT (VSE)','OS Type','Status']].style.hide_index().set_properties(**{'font-size': '10pt','background-color': '#edeeef','border-color': 'black','border-style' :'solid' ,'border-width': '1px','border-collapse':'collapse', 'padding': '5px'}).applymap(datversion,subset = 'DAT (VSE)').applymap(LTSCDate,subset = 'Last Communication').background_gradient(cmap='PuBu', low=0, high=0, axis=0, subset='DAT (VSE)', text_color_threshold=0.458).set_table_styles([{'selector': 'th', 'props': [('font-size', '12pt')]}]).set_properties(subset=['Last Communication'], **{'width': '180px'}).set_properties(subset=['System Name'], **{'width': '30px'}).set_properties(subset=['Status'], **{'width': '90px'}).render()
</code></pre>
<p><a href="https://i.stack.imgur.com/DByem.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DByem.png" alt="Whi the rows are formatted, the column header border is missing which I want to add."></a></p>
<p>Can I add borders to column headers ?</p>
|
<p>Inside your (admittedly very long) line of code, you already have:</p>
<pre><code>.set_table_styles([{'selector': 'th', 'props': [('font-size', '12pt')]}])
</code></pre>
<p>which you can expand by common CSS attributes, e.g.:</p>
<pre><code>.set_table_styles([{'selector': 'th', 'props': [('font-size', '12pt'),('border-style','solid'),('border-width','1px')]}])
</code></pre>
|
python|html|python-3.x|pandas
| 9
|
1,852
| 64,922,890
|
Plotly vs Plotly Dash & Performance Issues
|
<p>I have been wondering what are the actual differences between Plotly and Plotly Dash in terms of performance. For an example, there is a functionality called "webgl" which allows GPU to render the data points on the graph in stead of a traditional SVG ("webgl" can be used both on Plotly & Plotly Dash). The problem with the "webgl", it can only be used on scatter points (bars, candlesticks, etc).</p>
<p>If I were to pull candlestick data (100,000 candles or more) on either Plotly or Plotly Dash, I see some performance issues such as significantly reduced interactivity and lag.</p>
<p>Is there any difference between Plotly and Plotly Dash? If there is, then what are the ways to increase the performance issue?</p>
|
<p>Well plotly dash is a deployment platform for analytical applications. Vanilla plotly is a graphing library.</p>
<p>It's sort of difficult to compare the two in terms of performance because they serve different purposes. Obviously the overhead of dash is more intensive because it's hosting a web server that will most likely have automated update and interaction features.</p>
<p>From my personal use, vanilla plotly graphs can be embedded in html manually for an offline scenario, but the ability for complete interaction between multiple features is not possible.</p>
<p>As for improving the performance of either, that mainly comes down to optimization techniques within your own code.</p>
<p>Plotly's documentation and forums are surprisingly good if you have specific queries.
<a href="https://plotly.com/python/" rel="nofollow noreferrer">https://plotly.com/python/</a>
<a href="https://dash.plotly.com/introduction" rel="nofollow noreferrer">https://dash.plotly.com/introduction</a></p>
<p>Edit: I also forgot to mention that dash enterprise (plotly's paid service) has GPU acceleration support, but the license is pretty expensive.</p>
|
python|pandas|plot|plotly|plotly-dash
| 2
|
1,853
| 64,848,149
|
split dataset into train and test using tensorflow
|
<p>I want to split my full dataset (every raw record has multiple features) into train and test sets. Rather than using scikit-learn's train-test-split, is there any other proper way to split my data? I also need to shuffle my data when splitting.
(If the suggested method is based on tensorflow, that's even better.)</p>
|
<p>Try this code:</p>
<pre><code>import tensorflow as tf
input = tf.random.uniform([100, 5], 0, 10, dtype=tf.int32)
input = tf.random.shuffle(input)
train_ds = input[:90]
test_ds = input[-10:]
</code></pre>
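<p>If features and labels need to stay paired while shuffling, a <code>tf.data</code> sketch (the names <code>features</code>/<code>labels</code> and the 90/10 split sizes are assumptions) could be:</p>
<pre><code>ds = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(buffer_size=1000, seed=0)
train_ds = ds.take(90)
test_ds = ds.skip(90)
</code></pre>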
|
tensorflow|machine-learning|train-test-split
| 1
|
1,854
| 64,808,971
|
TypeError: Cannot convert value <tensorflow.python.keras.losses.CategoricalCrossentropy object ...> to a TensorFlow DType
|
<p>I want to implement a Word2Vec using negative sampling with pure TensorFlow 2. When I want to compute the gradient I get this error in the last line. I'm struggling to find the problem.</p>
<p>the code is fairly simple:</p>
<pre><code>import tensorflow as tf
import numpy as np
x, y = (('self', 'the'), ('self', 'violent'), ('self', 'any')), (1, 0, 0)
y = tf.convert_to_tensor(y, dtype='float32')
embeding_tensor = tf.keras.layers.Embedding(len(words_lst), embeding_size)
context_tensor = tf.keras.layers.Embedding(len(words_lst), embeding_size)
with tf.GradientTape() as t:
middle = embeding_tensor(word2index[x[0][0]])
neighbor_choices = context_tensor(np.asarray([[word2index[i[1]] for i in x]]))
scores = tf.tensordot(neighbor_choices, middle, 1)
prediction = tf.nn.sigmoid(scores)
loss = tf.keras.losses.CategoricalCrossentropy(y, prediction)
g_embed, g_context = t.gradient(loss, [middle, neighbor_choices])
</code></pre>
<hr />
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-40-fba4cda17cff> in <module>()
17 loss = tf.keras.losses.CategoricalCrossentropy(y, prediction)
18
---> 19 g_embed, g_context = t.gradient(loss, [middle, neighbor_choices])
2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py in as_dtype(type_value)
648
649 raise TypeError("Cannot convert value %r to a TensorFlow DType." %
--> 650 (type_value,))
TypeError: Cannot convert value <tensorflow.python.keras.losses.CategoricalCrossentropy object at 0x7f3ec9be28d0> to a TensorFlow DType.
</code></pre>
|
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy" rel="nofollow noreferrer"><code>tf.keras.losses.CategoricalCrossentropy</code></a> needs to be instantiated before being called:</p>
<pre><code>loss = tf.keras.losses.CategoricalCrossentropy()(y, prediction)
</code></pre>
<p>You could also just use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/categorical_crossentropy" rel="nofollow noreferrer"><code>tf.keras.losses.categorical_crossentropy</code></a>:</p>
<pre><code>loss = tf.keras.losses.categorical_crossentropy(y, prediction)
</code></pre>
|
python|tensorflow|keras|tensorflow2.0|gradient
| 3
|
1,855
| 64,679,220
|
Keep groups where at least one element satisfies condition in pyspark
|
<p>I've been trying to reproduce in pyspark something that is fairly easy to do in Pandas, but I've been struggling for a while now.
Say I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,2,1,1,2], 'b':[12,5,1,19,2,7]})
print(df)
a b
0 1 12
1 2 5
2 2 1
3 1 19
4 1 2
5 2 7
</code></pre>
<p>And the list</p>
<pre><code>l = [5,1]
</code></pre>
<p>What I'm trying to do, is to group by <code>a</code>, and if any of the elements in <code>b</code> are in the list, then return <code>True</code> for all values in the group. Then we could use the result to index the dataframe. The Pandas equivalent of this, would be:</p>
<pre><code>df[df.b.isin(l).groupby(df.a).transform('any')]
a b
1 2 5
2 2 1
5 2 7
</code></pre>
<p>Reproducible dataframe in pyspark:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = pd.DataFrame({'a':[1,2,2,1,1,2], 'b':[12,5,1,19,2,7]})
sparkdf = spark.createDataFrame(df)
</code></pre>
<p>I was currently going in the direction of grouping by <code>a</code> and applying a pandasUDF, though there's surely a better way to do this using spark only.</p>
|
<p>I've figured out a simple enough solution. The first step is to keep only the rows where the values in <code>b</code> are in the list, using <code>isin</code> and <code>filter</code>, and then take the distinct grouping keys (<code>a</code>).</p>
<p>Then by merging back with the dataframe on <code>a</code> we keep groups contained in the list:</p>
<pre><code>unique_a = (sparkdf.filter(f.col('b').isin(l))
.select('a').distinct())
sparkdf.join(unique_a, 'a').show()
+---+---+
| a| b|
+---+---+
| 2| 5|
| 2| 1|
| 2| 7|
+---+---+
</code></pre>
|
python|pandas|pyspark
| 2
|
1,856
| 64,734,190
|
How to return NumPy array from Pandas MultiIndexed Dataframe?
|
<p>I have a MultiIndexed pandas Dataframe and I would like to convert this into a numpy array where each element in the first level of the MultiIndex corresponds to a row of the matrix. So given the dataframe below :</p>
<pre><code>df = pd.DataFrame(np.array([[1, 2, 3, 4 ], [2 ,1, 2, 3], [1, 3, 4 , 7], [1, 3, 5 , 7], [ 2 , 3 , 4 , 5]]),
columns=['a', 'b', 'c', 'd'])
df = df.set_index(['a','b']).sort_index()
df
</code></pre>
<p><a href="https://i.stack.imgur.com/A6Zii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A6Zii.png" alt="enter image description here" /></a></p>
<p>I would like to return :</p>
<pre><code>[ [ [3 , 4] , [4 , 7] , [5 , 7] ] , [ [2 , 3] , [4 , 5] ] ]
</code></pre>
<p>I have tried using <code> df.unstack().values</code> but am having no success so far. Any tips or pointers in the right directions would be much appreciated.</p>
|
<p>Try this:</p>
<pre><code>[i.to_numpy().tolist() for _, i in df.groupby('a')]
</code></pre>
<p>Output:</p>
<pre><code>[[[3, 4], [4, 7], [5, 7]], [[2, 3], [4, 5]]]
</code></pre>
<p>Use list comprehension with <code>groupby</code> level 0 or 'a' in this dataframe.</p>
|
python|pandas|numpy|dataframe
| 2
|
1,857
| 44,116,787
|
Search for treshold values based on key from three columns(or more)
|
<p>I need help with dataset that looks like this:</p>
<pre><code>Name1 Name2 Name3 Temp Height
Alon Walon Balon 105 34 ]
Alon Walon Balon 106 42 |
Alon Walon Balon 105 33 ]-- Samples of Spot: Alon-Walon-Balon
Alon Walon Kalon 101 11 ]
Alon Walon Kalon 102 32 ]-- Samples of Spot: Alon-Walon-Kalon
Alon Talon Balon 111 12 ]-- Samples of Spot: Alon-Talon-Balon
Alon Talon Calon 121 10 ]-- Samples of Spot: Alon-Talon-Calon
</code></pre>
<p>What I want to achieve?</p>
<p>I have samples for one point in space; each point is described by three words, for example Alon-Walon-Balon.
I want to compare each value of Temp to a threshold such as 105, and if the value is higher than 105, record that in another column.
The same goes for Height.</p>
<p>How am I doing this right now?</p>
<pre><code>df = df.groupby[['Name1','Name2','Name3','Temp','Height']].size().reset_index()
visited = ()
cntSpot = 0
overValTemp = 0
overValHeight = 0
for i in len(df):
name1 = str(df.get_value(i,'Name1'))
name2 = str(df.get_value(i,'Name2'))
name3 = str(df.get_value(i,'Name3'))
if str(name1+name2+name3) in visited:
cntSpot+=1
if df.get_value(i,'Temp')>105:
overValTemp+=1
if df.get_value(i,'Height)<13:
overValHeight+=1
a = str(name1+name2+name3)
visited.update({a:cntSpot,overValemp,overValHeight})
</code></pre>
<p>Now I have a set of dictionaries with information about how many times each spot exceeds certain values.
This is the information I need: how many times the case occurred for each spot.
Where is the catch?
The csv files are more than 2GB and I need to process them incredibly fast.</p>
|
<p>Here is a solution that uses pandas groupby and is definitely more efficient than the loop.</p>
<pre><code>grouped = df.groupby(('Name1', 'Name2', 'Name3'))
count = grouped.size()
temp = grouped.apply(lambda x: x[x['Temp']>105].shape[0])
height = grouped.apply(lambda x: x[x['Height']<13].shape[0])
result = pd.concat([count, temp, height],
keys = ['Count', 'overValTemp', 'overValHeight'],
axis = 1)
result.index = map(lambda x: "-".join(x), result.index.tolist())
</code></pre>
<p>The result is the following:</p>
<pre><code> Count overValTemp overValHeight
Alon-Talon-Balon 1 1 1
Alon-Talon-Calon 1 1 1
Alon-Walon-Balon 3 1 0
Alon-Walon-Kalon 2 0 1
</code></pre>
|
python|excel|csv|pandas
| 1
|
1,858
| 44,104,584
|
Slice a dataframe based on one column starting with the value of another column
|
<p>I have a dataframe called <code>data</code>, that looks like this:</p>
<p><code>|...|category|...|ngram|...|</code></p>
<p>I need to drop from this dataframe the instances where <code>category</code> starts with the value of <code>ngram</code>. So for example, if I had the following instance:</p>
<ul>
<li>category: beds</li>
<li>ngram: bed</li>
</ul>
<p>then that instance should be dropped from the resulting dataframe.</p>
<p>In T-SQL, I use the following query (which may not be the best way, but it works):</p>
<pre><code>SELECT
*
FROM mytable
WHERE category NOT LIKE ngram+'%';
</code></pre>
<p>I have read up on this a bit, and my best attempt is:</p>
<pre><code>data[data.category.str.startswith(data.ngram.str) == True]
</code></pre>
<p>But this does not return any rows, nor does the inverse (using <code>== False</code>).</p>
|
<pre><code>#use df.apply to build a boolean mask of rows whose category starts with ngram,
#then negate it to drop those rows (the equivalent of SQL NOT LIKE)
data[~data.apply(lambda x: x.category.startswith(x.ngram), axis=1)]
</code></pre>
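<p>A quick demonstration on a toy frame (the values here are made up) shows which rows survive the filter:</p>
<pre><code>import pandas as pd

data = pd.DataFrame({'category': ['beds', 'sofas', 'lamp shades'],
                     'ngram':    ['bed',  'drawer', 'lamp']})
mask = data.apply(lambda x: x.category.startswith(x.ngram), axis=1)
print(data[~mask])   # only the 'sofas' row remains, as with the SQL NOT LIKE
</code></pre>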
|
python|pandas|sql-like|string-matching
| 0
|
1,859
| 69,605,807
|
pandas barplot choose color for each variable
|
<p>I usually use matplotlib, but was playing with pandas plotting and experienced unexpected behaviour. I was assuming the following would return red and green edges rather than alternating. What am I missing here?</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({"col1":[1,2,4,5,6], "col2":[4,5,1,2,3]})
def amounts(df):
fig, ax = plt.subplots(1,1, figsize=(3,4))
(df.filter(['col1','col2'])
.plot.bar(ax=ax,stacked=True, edgecolor=["red","green"],
fill=False,linewidth=2,rot=0))
ax.set_xlabel("")
plt.tight_layout()
plt.show()
amounts(df)
</code></pre>
|
<p>I think plotting each column separately and setting the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar.html" rel="nofollow noreferrer"><code>bottom</code></a> argument to stack the bars provides the output you desire.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({"col1":[1,2,4,5,6], "col2":[4,5,1,2,3]})
def amounts(df):
fig, ax = plt.subplots(1,1, figsize=(3,4))
df['col1'].plot.bar(ax=ax, linewidth=2, edgecolor='green', rot=0, fill=False)
df['col2'].plot.bar(ax=ax, bottom=df['col1'], linewidth=2, edgecolor='red', rot=0, fill=False)
plt.legend()
plt.tight_layout()
plt.show()
amounts(df)
</code></pre>
<p><a href="https://i.stack.imgur.com/r2gTB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r2gTB.png" alt="enter image description here" /></a></p>
|
python|pandas
| 1
|
1,860
| 69,564,472
|
Python: how to get all the first values row-wise from a 2D numpy array when using a 2D boolean mask
|
<p>I have two large 2D arrays, one with values and the other one with a mask of "valid" values.</p>
<pre><code>vals = np.array([
[5, 2, 4],
[7, 8, 9],
[1, 3, 2],
])
valid = np.array([
[False, True, True],
[False, False, True],
[False, True, True],
])
</code></pre>
<p>My goal is to get, for each row, the first value when <code>valid==True</code>, and obtain a vector of that sort: <code>[2, 9, 3]</code>, in the fastest possible way.</p>
<p>I tried applying the mask and querying from it, but it destroys the structure:</p>
<pre><code>vals[valid]
> array([2, 4, 9, 3, 2])
</code></pre>
<p>I tried looping through all the indices, but I am wondering if there is a faster and vectorized way of doing that. Thank you!</p>
|
<p>Try:</p>
<pre><code>vals[np.arange(len(vals)), np.argmax(valid,axis=1)]
</code></pre>
<p>Or use <a href="https://numpy.org/doc/stable/reference/generated/numpy.take_along_axis.html#numpy.take_along_axis" rel="nofollow noreferrer"><code>np.take_along_axis</code></a>:</p>
<pre><code>np.take_along_axis(vals, np.argmax(valid,axis=1)[:,None], axis=1).ravel()
</code></pre>
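<p>For the arrays in the question, a quick check of the result:</p>
<pre><code>idx = np.argmax(valid, axis=1)            # index of the first True per row: [1 2 1]
print(vals[np.arange(len(vals)), idx])    # [2 9 3]
</code></pre>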
|
python|arrays|numpy|performance|vectorization
| 2
|
1,861
| 41,065,945
|
Tensorflow Install from Source ImportError
|
<p>I am trying to install tensorflow directly from the source using</p>
<p><code>git clone https://github.com/tensorflow/tensorflow</code> and following the provided tutorial to build a wheel. Here is a full list of my commands used (bazel already installed) :</p>
<pre><code>git clone https://github.com/tensorflow/tensorflow
sudo pip3 install dev
sudo pip3 install numpy
sudo pip3 install wheel
./configure (at tensorflow source directory)
bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip3 install /tmp/tensorflow_pkg/tensorflow-0.12.0rc0-cp35-none-any.whl
</code></pre>
<p>Up until this point everything works without error and the wheel file appears to be successfully installed as a module. However, when I try to import tensorflow in a python3 session, I received this error:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Symbol not found: _PyCObject_Type
Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 60, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): Symbol not found: _PyCObject_Type
Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow.so
Error importing tensorflow. Unless you are using bazel,
you should not try to import tensorflow from its source directory;
please exit the tensorflow source tree, and relaunch your python interpreter
from there.
</code></pre>
<p>Any suggestions appreciated, thanks!</p>
|
<p>The missing symbol <code>_PyCObject_Type</code> suggests that TensorFlow's C++ Python extension was compiled against a different version of Python from the one that built the PIP package. When you run <code>./configure</code> before the <code>bazel build</code>, make sure that you answer the following prompt:</p>
<pre><code>Please specify the location of python. [Default is /usr/bin/python]:
</code></pre>
<p>...with the correct path to your <code>python</code> 3.5 executable.</p>
|
git|installation|tensorflow|importerror|python-wheel
| 2
|
1,862
| 40,841,019
|
My code shows invalid literal for float()
|
<p><strong>This is my code in editor:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x,y = np.loadtxt('D:\Tanjil\Python\directory\Matplot_trial1.csv',
unpack=True , delimiter='\s')
plt.plot(x,y,'r',label='angle=30 Degree'),
plt.ylabel('Power Input (kW)'),
plt.xlabel('Speed(rpm)'),
plt.axis([750.0, 1400.0, 3.3,3.8])
plt.title('Power Input vs. Speed curve')
plt.legend()
plt.show()
</code></pre>
<p><strong>then it shows this:</strong></p>
<pre><code> File "<ipython-input-19-abec5f4efd27>", line 6, in <module>
unpack=True , delimiter='\s')
File "C:\Users\bad_tanjil\Anaconda\lib\site-packages\numpy\lib\npyio.py", line 860, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: invalid literal for float(): 1350,3.64
</code></pre>
|
<p>Note that the traceback actually comes from <code>np.loadtxt</code>: the file is comma-separated, but <code>delimiter='\s'</code> was passed, so the whole field <code>1350,3.64</code> cannot be converted to a float; use <code>delimiter=','</code> there. Separately, you should call <code>plt.axis()</code> with a list of integers like this:</p>
<pre><code>plt.axis([750, 1400, 3, 4])
</code></pre>
|
python|numpy|matplotlib
| 1
|
1,863
| 53,894,900
|
Why doesn't tensorflow on google deep learning VM use GPU?
|
<p>I am using a Google deep learning VM from the Google marketplace and I opted for an Nvidia K80 GPU. I am trying to train an object detection model using the object detection API. However, I notice that tensorflow is not using the GPU by default (code to check is below).</p>
<p>My assumption here is that this instance comes with all the required NVIDIA drivers so it's not a driver related problem.</p>
<p>Further investigation showed that I had 2 installations of Tensorflow (tensorflow 1.12.0 and tensorflow-GPU 1.12.0). So I uninstalled the CPU version. However it still does not help.</p>
<p>I used the code below to check if tensorflow is using GPU</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>For reference, I am using the below code for object detection training which is running fine on the deep learning VM but is not using GPU. </p>
<pre><code>python $Tensor_path/legacy/train.py --logtostderr --
train_dir=$Train_path/training/ --
pipeline_config_path=$Train_path/training/
ssd_inception_v2_pets.config
</code></pre>
<p>Output (I would have expected the specifics of the GPU device being used):</p>
<pre><code>[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 18292259467280600161
]
</code></pre>
|
<p>I was able to resolve this by deleting the old instance and starting fresh with a new instance. My guess is the tensorflow GPU installation got corrupted while installing object detection API. Followed the steps here to install <a href="https://cloud.google.com/solutions/creating-object-detection-application-tensorflow" rel="nofollow noreferrer">https://cloud.google.com/solutions/creating-object-detection-application-tensorflow</a></p>
<p>And most likely this line is the culprit</p>
<pre><code>pip install --upgrade
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp27-none-
linux_x86_64.whl
</code></pre>
|
tensorflow|gpu|google-dl-platform
| 2
|
1,864
| 66,027,370
|
Make all entries before a zero to zeroes in a dataframe column
|
<p>I would like to convert all non-zero values to zeros up to the last zero occurrence in a python dataframe column, for each group.</p>
<pre><code>group | value | Result
a | 1 | 0
a | 2 | 0
a | 0 | 0
a | 1 | 0
a | 0 | 0
a | 1 | 1
a | 2 | 2
b | 1 | 0
b | 0 | 0
b | 2 | 2
</code></pre>
<p>One way I could think of achieving this is by reversing the <code>value</code> column and cumulatively multiplying the elements within each group, however I do not know how to do that with a pandas dataframe.
Any help appreciated.</p>
|
<p>You can flag all values up to the last <code>0</code> by comparing the values to <code>0</code>, reversing the order with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>Series.iloc</code></a>, taking a per-group <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumsum.html" rel="nofollow noreferrer"><code>GroupBy.cumsum</code></a>, and finally testing for not equal to <code>0</code>; the resulting mask is passed to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>Series.mask</code></a>:</p>
<pre><code>m = df['value'].eq(0).iloc[::-1].groupby(df['group']).cumsum().ne(0)
df['New'] = df['value'].mask(m, 0)
</code></pre>
<p>Similar solution with swapping back for original order:</p>
<pre><code>df1 = df.iloc[::-1]
m = df1['value'].eq(0).groupby(df1['group']).cumsum().ne(0).iloc[::-1]
df['New'] = df['value'].mask(m, 0)
print (df)
group value Result New
0 a 1 0 0
1 a 2 0 0
2 a 0 0 0
3 a 1 0 0
4 a 0 0 0
5 a 1 1 1
6 a 2 2 2
7 b 1 0 0
8 b 0 0 0
9 b 2 2 2
</code></pre>
|
python|pandas|dataframe|filter
| 1
|
1,865
| 66,266,170
|
Save scraped list object text as column to pandas dataframe
|
<p>I want to scrape text from webpages and put it into a pandas dataframe. Scraping a table is no problem, but this page has no table, which is giving me trouble.</p>
<pre><code>driver = webdriver.Firefox()
driver.get('https://example.com/')
time.sleep(3)
number = driver.find_elements_by_xpath("//span[@class='blaaal']")
name = driver.find_elements_by_xpath("//span[@class='blaaal2']")
for count in number:
print(count.text)
for names in name:
print(names.text)
df = pd.DataFrame(columns=['name','number'])
</code></pre>
<p>That way I can print the data, but if I pass the <code>count</code>/<code>names</code> objects to the dataframe directly, without the for loop, I get an error like "list object has no attribute text".</p>
<p>I don't know how to put it into the dataframe; I think I have to append the results of the for loop as a column, right? I couldn't find any post here on Stack Overflow or any tutorial on Google.</p>
|
<p>Create list first for both <code>number</code> and <code>name</code> and then pass into pandas.</p>
<pre><code>number =[count.text for count in driver.find_elements_by_xpath("//span[@class='blaaal']")]
name = [names.text for names in driver.find_elements_by_xpath("//span[@class='blaaal2']")]
df = pd.DataFrame({"name":name,"number":number})
</code></pre>
|
python|pandas|dataframe|selenium|selenium-webdriver
| 1
|
1,866
| 66,100,040
|
Filter Numpy Array with optional argument
|
<p>I am building a function which should prepare my data depending on the input. The variable <code>x_imp</code> contains indices on which features are important. However sometimes I still need all features so if 'x_imp = None' nothing should happen.</p>
<p>My solution was this (this is not the whole function just the inputs):</p>
<pre><code>def get_train_data(x_cat, x_num,x_imp = None):
x_cat = x_cat[:,x_imp]
x_num = x_num[:,x_imp]
return x_train
</code></pre>
<p>But this changes the shape of the data.
For example if <code>data.shape = (4, 5)</code> then <code>data[:,None].shape = (4, 1, 5)</code></p>
<p>How do I avoid this problem?</p>
|
<p>This happens because indexing with <code>None</code> is an alias for <code>np.newaxis</code>, which inserts a new axis. Is there a reason not to just add an explicit <code>if</code> statement?</p>
<pre><code>def get_train_data(x_cat, x_num,x_imp = None):
if x_imp is not None:
x_cat = x_cat[:,x_imp]
x_num = x_num[:,x_imp]
return x_train
</code></pre>
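<p>If you prefer to avoid the branch, one alternative sketch is to fall back to <code>slice(None)</code>, which selects every column (the undefined <code>x_train</code> from the question is left aside here):</p>
<pre><code>def get_train_data(x_cat, x_num, x_imp=None):
    cols = slice(None) if x_imp is None else x_imp   # slice(None) keeps all columns
    x_cat = x_cat[:, cols]
    x_num = x_num[:, cols]
    return x_cat, x_num
</code></pre>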
|
python|numpy|indexing
| 0
|
1,867
| 66,275,500
|
Cannot find the source code for `tf.quantization.fake_quant_with_min_max_args`
|
<p>Where one can find the github source code for <code>tf.quantization.fake_quant_with_min_max_args</code>. Checking the <a href="https://www.tensorflow.org/api_docs/python/tf/quantization/fake_quant_with_min_max_args" rel="nofollow noreferrer">TF API documentation</a>, there is no link to the github source file, and I could not find one on github.</p>
|
<p>The kernel for this op is defined here:</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/ac74e1746a28b364230072d4dac5a45077326dc2/tensorflow/core/kernels/fake_quant_ops.cc#L63-L98" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/ac74e1746a28b364230072d4dac5a45077326dc2/tensorflow/core/kernels/fake_quant_ops.cc#L63-L98</a></p>
|
tensorflow|tensorflow2.0
| 2
|
1,868
| 52,772,757
|
Analyzing data flow of Dask dataframes
|
<p>I have a dataset stored in a tab-separated text file. The file looks as follows:</p>
<pre><code>date time temperature
2010-01-01 12:00:00 10.0000
...
</code></pre>
<p>where the <code>temperature</code> column contains values in degrees Celsius (°C).
I compute the daily average temperature using Dask. Here is my code:</p>
<pre><code>from dask.distributed import Client
import dask.dataframe as dd
client = Client("<scheduler URL")
inputDataFrame = dd.read_table("<input file>").drop('time', axis=1)
groupedData = inputDataFrame.groupby('date')
meanDataframe = groupedData.mean()
result = meanDataframe.compute()
result.to_csv('result.out', sep='\t')
client.close()
</code></pre>
<p>In order to improve the performance of my program, I would like to understand the data flow caused by Dask data frames.</p>
<ol>
<li>How is the text file read into a data frame by <code>read_table()</code>? Does the client read the whole text file and send the data to the scheduler, which partitions the data and sends it to the workers? Or does each worker read the data partitions it works on directly from the text file?</li>
<li>When an intermediate data frame is created (e.g. by calling <code>drop()</code>) is the whole intermediate data frame sent back to the client and then sent to the workers for further processing?</li>
<li>The same question for groups: where is the data for a group object create and stored? How does it flow between client, scheduler and workers?</li>
</ol>
<p>The reason for my question is that if I run a similar program using Pandas, the computation is roughly two times faster, and I am trying to understand what causes the overhead in Dask. Since the size of the result data frame is very small compared to the size of the input data, I suppose there is quite some overhead caused by moving the input and intermediate data between client, scheduler and workers.</p>
|
<p>1) The data are read by the workers. The client does read a little ahead of time, to figure out the column names and types and, optionally, to find line-delimiters for splitting files. Note that all workers must be able to reach the file(s) of interest, which can require some shared file-system when working on a cluster.</p>
<p>2), 3) In fact, the <code>drop</code>, <code>groupby</code> and <code>mean</code> methods do not generate intermediate data-frames at all, they just accumulate a graph of operations to be executed (i.e., they are <em>lazy</em>). You could time these steps and see they are fast. During execution, intermediates are made on workers, copied to other workers as required, and discarded as soon as possible. There are never copies to the scheduler or client, unless you explicitly request it.</p>
<p>So, to the root of your question: you can investigate the performance of your operation best by looking at the <a href="http://docs.dask.org/en/latest/diagnostics-distributed.html#dashboard" rel="nofollow noreferrer">dashboard</a>. </p>
<p>There are many factors that govern how quickly things will progress: the processes may be sharing an IO channel; some tasks do not release the GIL, and so parallelise poorly in threads; the number of groups will greatly affect the amount of shuffling of data into groups... plus there is <em>always</em> some overhead for every task executed by the scheduler.</p>
<p>Since Pandas is efficient, it is not surprising that for the case where data fits easily into memory, it performs well compared to Dask.</p>
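<p>To convince yourself that nothing heavy happens until <code>compute()</code>, here is a small sketch (assuming the same file and columns as in your snippet):</p>
<pre><code>ddf = dd.read_table("<input file>")                       # lazy: only metadata is read here
print(ddf.npartitions)                                    # number of pieces the workers will read
lazy = ddf.drop('time', axis=1).groupby('date').mean()    # still just building the task graph
result = lazy.compute()                                   # the actual work happens here
</code></pre>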
|
pandas|dask|dask-distributed
| 1
|
1,869
| 58,259,518
|
Groupy all columns and keep the non numeric
|
<p>I have a dataset with almost 200 columns. All of those columns are numeric. However I have 3 columns which are not numeric and I want to keep them - I don't want to group them. </p>
<p>Example:</p>
<pre><code>team_ref num_1 num_2 num_3 matchday match_id season_id
a 1 1 1 A AeD 2018
a 2 2 2 B AbD 2018
b 3 1 1 A AeD 2018
b 4 2 2 B AbD 2018
</code></pre>
<p>I want to group by team_ref and do the mean of <code>num1, num2, num3</code> but I want to keep the matchday, match_id and season_id of that event.</p>
<p>Since I have dozens of columns that I want to groupby, using the agg is not the best idea. </p>
<p>Any suggestion on how to do this?</p>
<p>regards</p>
|
<p>We can do </p>
<pre><code>df.groupby('team_ref').agg(lambda x : x.mean() if x.dtype!= 'object' else ','.join(x))
Out[26]:
num_1 num_2 num_3 matchday match_id season_id
team_ref
a 1.5 1.5 1.5 A,B AeD,AbD 2018
b 3.5 1.5 1.5 A,B AeD,AbD 2018
</code></pre>
|
python|python-3.x|pandas|dataframe
| 3
|
1,870
| 58,231,987
|
How to Find Year-wise Mean from Date-wise CSV Data In Pandas For Plotting bar chart
|
<p>I have Sample Data as</p>
<pre><code>Company,Date,Open,High,Low,Close,Adj Close,Volume
ADANIPORTS,5/6/2008,150,153.570007,147.820007,151.149994,134.313477,1782030
ADANIPORTS,5/7/2008,152,154.460007,150.240005,153.309998,136.232864,1180015
ADANIPORTS,5/8/2008,152.19996.759995,150.199997,155.889999,138.525497,1856245
ADANIPORTS,5/9/2008,155,160.600006,154.210007,156.520004,139.085312,3251375
</code></pre>
<p>This is for different companies and runs until 2018.
Now I want to find the year-wise mean of Open and Close in order to plot a bar chart.</p>
|
<p>This will group by year and take the means:</p>
<pre><code>df.groupby(pd.Grouper(key='date', freq='Y'))['Open','Close'].mean()
</code></pre>
<p>Alternatively, you can use the <code>resample</code> method:</p>
<pre><code>df.set_index('date').resample('Y')['Open','Close'].mean()
</code></pre>
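<p>If the date column is still a plain string as read from the CSV, convert it first; a small sketch assuming the column name <code>Date</code> from your sample, ending with the bar chart you asked for:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
yearly = df.groupby(df['Date'].dt.year)[['Open', 'Close']].mean()
yearly.plot.bar()
</code></pre>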
|
python|pandas|analysis
| 1
|
1,871
| 58,406,428
|
numpy: combine image mask with RGB to get colored image mask
|
<p>how can I combine a binary mask image array (<code>this_mask</code> - shape:4,4) with a predefined color array (<code>mask_color</code>, shape:3)</p>
<pre><code>this_mask = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,0],
[0,0,0,0],
])
this_mask.shape # (4,4)
mask_color = np.array([128, 128, 64])
mask_color.shape # (3)
</code></pre>
<p>to get a new color mask image array (<code>this_mask_colored</code>, shape:4,4,3)?</p>
<pre><code>this_mask_colored = # do something with `this_mask` and `mask_color`
# [
# [
# [0,128,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# [
# [0,128,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# [
# [0,64,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# ]
this_mask_colored.shape # (4,4,3)
</code></pre>
<p>I tried a for loop going pixel by pixel, but it is slow when the image is 225x225; what is the best way to do this?</p>
<p>For each image, I have multiple mask layers, and each mask layer needs to have a different predefined color. </p>
|
<p>This might work:</p>
<pre><code>this_mask = np.array([
    [0,1,0,0],
    [0,0,0,0],
    [0,0,0,0],
    [0,0,0,0],
])
mask_color = np.array([128, 128, 64])

res = []
for row in this_mask:              # the original snippet looped over "new", which is undefined
    tmp = []
    for col in row:
        tmp.append(np.array([1,1,1]) * col)   # 1 -> [1,1,1], 0 -> [0,0,0]
    res.append(np.array(tmp))

res = res * mask_color             # broadcasting (4,4,3) * (3,)
</code></pre>
<p>For each entry, 1 will be converted to [1, 1, 1] and 0 is [0, 0, 0]</p>
<p>I do this because I want to use the benefit of the operation * (element-wise multiplication)</p>
<p>This works:</p>
<pre><code> test = np.array([[0, 0, 0],
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]])
test * np.array([128, 128, 64])
</code></pre>
<p>We'll get</p>
<pre><code> array([[ 0, 0, 0],
[128, 128, 64],
[ 0, 0, 0],
[ 0, 0, 0]])
</code></pre>
<p>And we want to push all the calculation to numpy's side, so we loop through the array only for the conversion and leave the rest to numpy.</p>
<p>This takes about 0.2 seconds for a 255x255 mask with one mask_color and about 2 seconds for 1000x1000.</p>
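<p>For reference, the loop can likely be avoided entirely with broadcasting, which should be much faster for 225x225 masks (a sketch using the arrays from the question):</p>
<pre><code>this_mask_colored = this_mask[:, :, None] * mask_color   # (4,4,1) * (3,) -> (4,4,3)
</code></pre>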
|
python|numpy|image-processing
| 1
|
1,872
| 58,327,692
|
Updating a slice in rank3 tensorflow tensor along the third axis (Z) given a location (X,Y)
|
<p>I am trying to re-implement the below function (written in numpy) using <code>Tensorflow 1.9.0</code>. </p>
<pre class="lang-py prettyprint-override"><code>def lateral_inhibition2(conv_spikes,SpikesPerNeuronAllowed):
vbn = np.where(SpikesPerNeuronAllowed==0)
conv_spikes[vbn[0],vbn[1],:]=0
return conv_spikes
</code></pre>
<p><code>conv_spikes</code> is a binary tensor of rank <code>3</code> and <code>SpikesPerNeuronAllowed</code> is tensor of rank <code>2</code>. <code>conv_spikes</code> is a variable that indicates if a neuron in a specific location has spiked if the location contains <code>1</code> and a <code>0</code> indicates that neuron in that location hasn't spiked. <code>SpikesPerNeuronAllowed</code> variable indicates if all the neurons in a <code>X-Y</code> location along the <code>Z</code> axis are allowed to spike or not. A <code>1</code> in <code>SpikesPerNeuronAllowed</code> indicates that neurons at the corresponding <code>X-Y</code> location in <code>conv_spikes</code> and along the <code>Z</code> axis are allowed to spike. A <code>0</code> indicates that neurons at the corresponding <code>X-Y</code> location in <code>conv_spikes</code> and along the <code>Z</code> axis are not allowed to spike.</p>
<pre class="lang-py prettyprint-override"><code>conv_spikes2 = (np.random.rand(5,5,3)>=0.5).astype(np.int16)
temp2 = np.random.choice([0, 1], size=(25,), p=[3./4, 1./4])
SpikesPerNeuronAllowed2 = temp2.reshape(5,5)
print(conv_spikes2[:,:,0])
print
print(conv_spikes2[:,:,1])
print
print(conv_spikes2[:,:,2])
print
print(SpikesPerNeuronAllowed2)
</code></pre>
<p>produces the following output</p>
<pre class="lang-py prettyprint-override"><code>##First slice of conv_spikes across Z-axis
[[0 0 1 1 1]
[1 0 0 1 1]
[1 0 1 1 0]
[0 1 0 1 1]
[0 1 0 0 0]]
##Second slice of conv_spikes across Z-axis
[[0 0 1 0 0]
[0 0 1 0 1]
[0 0 1 1 1]
[0 0 0 1 0]
[1 1 1 1 1]]
##Third slice of conv_spikes across Z-axis
[[0 1 1 0 0]
[0 0 1 0 0]
[0 1 1 0 0]
[0 0 0 1 0]
[1 0 1 1 1]]
##SpikesPerNeuronAllowed2
[[0 0 0 0 1]
[0 0 0 0 0]
[0 0 0 0 0]
[1 1 0 0 0]
[0 0 0 1 0]]
</code></pre>
<p>Now, when the function is called</p>
<pre class="lang-py prettyprint-override"><code>conv_spikes2 = lateral_inhibition2(conv_spikes2,SpikesPerNeuronAllowed2)
print(conv_spikes2[:,:,0])
print
print(conv_spikes2[:,:,1])
print
print(conv_spikes2[:,:,2])
</code></pre>
<p>produces the following output</p>
<pre><code>##First slice of conv_spikes across Z-axis
[[0 0 0 0 1]
[0 0 0 0 0]
[0 0 0 0 0]
[0 1 0 0 0]
[0 0 0 0 0]]
##Second slice of conv_spikes across Z-axis
[[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 1 0]]
##Third slice of conv_spikes across Z-axis
[[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 0 0]
[0 0 0 1 0]]
</code></pre>
<p>I tried to repeat the same in Tensorflow as belows</p>
<pre class="lang-py prettyprint-override"><code>conv_spikes_tf = tf.Variable((np.random.rand(5,5,3)>=0.5).astype(np.int16))
a_placeholder = tf.placeholder(tf.float32,shape=(5,5))
b_placeholder = tf.placeholder(tf.float32)
inter2 = tf.where(tf.equal(a_placeholder,b_placeholder))
output = sess.run(inter2, feed_dict={a_placeholder: SpikesPerNeuronAllowed2, b_placeholder: 0})
print(output)
</code></pre>
<p>produces the below output</p>
<pre class="lang-py prettyprint-override"><code>[[0 0]
[0 1]
[0 2]
[0 3]
[1 0]
[1 1]
[1 2]
[1 3]
[1 4]
[2 0]
[2 1]
[2 2]
[2 3]
[2 4]
[3 2]
[3 3]
[3 4]
[4 0]
[4 1]
[4 2]
[4 4]]
</code></pre>
<p>Trying to update <code>conv_spikes_tf</code> with the code below results in an error; I tried going through the manual for <code>scatter_nd_update</code> but I don't think I understood it very well.</p>
<pre class="lang-py prettyprint-override"><code>update = tf.scatter_nd_update(conv_spikes_tf, output, np.zeros(output.shape[0]))
sess.run(update)
ValueError: The inner 1 dimensions of input.shape=[5,5,3] must match the inner 1 dimensions of updates.shape=[21,2]: Dimension 0 in both shapes must be equal, but are 3 and 2. Shapes are [3] and [2]. for 'ScatterNdUpdate_8' (op: 'ScatterNdUpdate') with input shapes: [5,5,3], [21,2], [21,2].
</code></pre>
<p>I don't understand the error message; specifically, what does <code>inner 1 dimensions</code> mean, and how can I achieve the above <code>numpy</code> functionality with tensorflow?</p>
|
<p>The last dim of <code>updates</code> in <code>tf.scatter_nd_update</code> should be 3, which is equal to the last dim of <code>ref</code>. </p>
<pre><code>update = tf.scatter_nd_update(conv_spikes_tf, output, np.zeros((output.shape[0], 3)))
</code></pre>
<p>If I understand correctly, you want to apply <code>SpikesPerNeuronAllowed2</code> (the mask) to conv_spikes. An easier way is to reshape <code>conv_spikes</code> to (3,5,5) and multiply by <code>SpikesPerNeuronAllowed2</code>. </p>
<p>I use a constant example to show the result. You can change it to <code>tf.Variable</code> as well. </p>
<pre class="lang-py prettyprint-override"><code>conv = (np.random.rand(3,5,5)>=0.5).astype(np.int32)
tmp = np.random.choice([0, 1], size=(25,), p=[3./4, 1./4])
mask = tmp.reshape(5,5)
# array([[[1, 1, 0, 0, 0],
# [0, 1, 0, 0, 1],
# [0, 1, 0, 0, 1],
# [1, 0, 0, 0, 1],
# [1, 0, 0, 1, 0]],
# [[1, 0, 0, 0, 1],
# [1, 0, 1, 1, 1],
# [0, 0, 1, 0, 1],
# [0, 0, 0, 1, 1],
# [0, 0, 0, 1, 1]],
# [[0, 0, 0, 1, 0],
# [0, 1, 1, 0, 1],
# [0, 1, 1, 0, 1],
# [1, 1, 1, 1, 0],
# [1, 1, 1, 0, 1]]], dtype=int32)
# array([[0, 0, 0, 1, 1],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 1, 0, 1, 0],
# [0, 0, 1, 0, 1]])
tf_conv = tf.constant(conv, dtype=tf.int32)
tf_mask = tf.constant(mask, dtype=tf.int32)
res = tf_conv * tf_mask
sess = tf.InteractiveSession()
sess.run(res)
# array([[[0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0]],
# [[0, 0, 0, 0, 1],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 1, 0],
# [0, 0, 0, 0, 1]],
# [[0, 0, 0, 1, 0],
# [0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0],
# [0, 1, 0, 1, 0],
# [0, 0, 1, 0, 1]]], dtype=int32)
</code></pre>
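<p>If you would rather keep the original <code>(5,5,3)</code> layout instead of reshaping to <code>(3,5,5)</code>, broadcasting the mask over the channel axis should work the same way (a sketch, not tested against your full pipeline):</p>
<pre><code>res = tf_conv * tf.expand_dims(tf_mask, axis=-1)   # (5,5,3) * (5,5,1)
</code></pre>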
|
python|python-2.7|tensorflow
| 1
|
1,873
| 69,135,257
|
train a model which is instantiated in another model ( Pytorch)
|
<p>I have two neural network classes, one a GNN and the other a simple linear network; the latter is instantiated inside the first. How can I train both at the same time?
Here is an example:</p>
<pre><code>class linear_NN(nn.Module):
def __init__(self, input_dim, out_dim...):
super().__init__()
def forward(self, x, dim = 0):
'''Forward pass'''
return x
</code></pre>
<p>the main class or the large class</p>
<pre><code>class GNN(nn.Module):
def __init__(self, input_dim, n-hidden, out_dim...):
super().__init__()
def forward(self, h, dim = 0):
'''Forward pass'''
model=linear_NN(input, out..)
model(h, dim)
return h
</code></pre>
|
<p>You must declare it in the <code>__init__(...)</code>:</p>
<pre class="lang-py prettyprint-override"><code>class GNN(nn.Module):
    def __init__(self, input_dim, n_hidden, out_dim, ...):
        super().__init__()
        self.linear = linear_NN(input, out..)

    def forward(self, h, dim = 0):
        '''Forward pass'''
        h = self.linear(h, dim)
        return h
</code></pre>
<p>Then, the <code>self.linear</code> model will be registered to your <code>GNN</code> main model, and if you get <code>GNN(...).parameters()</code>, you'll see the linear parameters there.</p>
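<p>A minimal training sketch showing that a single optimizer then updates both networks (the constructor arguments, <code>criterion</code> and <code>target</code> are placeholders from your example, not real names):</p>
<pre><code>model = GNN(input_dim, n_hidden, out_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # covers GNN and the nested linear_NN

loss = criterion(model(h), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>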
|
python|neural-network|pytorch
| 1
|
1,874
| 69,009,968
|
Reading a JSON file using pandas in a desired format
|
<p>I have a JSON file that contains:</p>
<pre><code>{
"getYearsListOverview": {
"sp_name": "analytics.year_overview_drop_down",
"sp_input_params": {
"req_url_query_params": [],
"req_body_params": []
},
"sp_output_datasets": [],
"page_name": "home"
},
"getRankingsDataPerformanceReport": {
"sp_name": "analytics.get_performance_ranking_data",
"sp_input_params": {
"req_url_query_params": [
["@scroll_index", "index"]
],
"req_body_params": [
["@event_type_id", "event_type_id"],
["@season", "season"],
["@athlete_guid", "athlete_guid"]
]
},
"sp_output_datasets": [],
"number_of_output_datasets_for_customized_template": 4,
"customised_response_template": {
"performance_value_list": [],
"rankings_table": [],
"level_values": []
},
"page_name": "performancereport"
}
}
</code></pre>
<p>I want this to get converted into a <code>pandas</code> dataframe. The dataframe should have the following columns like:</p>
<pre><code>sp_name
req_url_query_params
req_body_params
sp_output_datasets
number_of_output_datasets_for_customized_template
performance_value_list
rankings_table
level_values
page_name
</code></pre>
<p>I am using the following code snippets:</p>
<pre><code>with open(os.path.join(filepath,json_file),'r',encoding="utf-8") as json_file:
j_f = json.load(json_file)
df_1 = json_normalize(j_f)
</code></pre>
<p>But this is giving me a dataframe whose columns are like below:</p>
<pre><code>getYearsListOverview.sp_name
getYearsListOverview.sp_input_params.req_url_query_params
getYearsListOverview.sp_input_params.req_body_params
</code></pre>
<p>And the values are being populated as per JSON.</p>
<p>How to get the desired column names here. A small change in JSON data structure is acceptable.</p>
|
<p>To simply rename the columns</p>
<pre><code>for col in df_1.columns:
new_col_name = col.split('.')[-1]
df_1.rename(columns = {col: new_col_name},inplace=True)
print(df_1.columns)
</code></pre>
<p>Output:</p>
<pre><code>Index(['sp_name', 'req_url_query_params', 'req_body_params',
'sp_output_datasets', 'page_name', 'sp_name', 'req_url_query_params',
'req_body_params', 'sp_output_datasets',
'number_of_output_datasets_for_customized_template',
'performance_value_list', 'rankings_table', 'level_values',
'page_name'],
dtype='object')
</code></pre>
<p>So your JSON file has two top-level elements that contain the same variables, which creates conflicts in column naming when you create one dataframe from the whole file. What you can do is split the JSON object into two.</p>
<pre><code>dict0 = dict(list(j_f.items())[:1])
dict1 = dict(list(j_f.items())[1:])
</code></pre>
<p>And then turn them into dataframes:</p>
<pre><code>df_0 = pd.json_normalize(dict0)
df_1 = pd.json_normalize(dict1)
</code></pre>
<p>And work with them individually, resulting in two dataframes with no column collisions.</p>
|
python|json|pandas
| 0
|
1,875
| 60,982,927
|
Append contents of previous row to the next one
|
<p>A bit stuck here. Seems easy but for some reason can't seem to get it to work.</p>
<p>I have a csv file that I need to read from and then add contents of the previous row to the next one. So for example if original data looks like this:</p>
<pre><code> 0
0 a
1 b
2 c
3 d
</code></pre>
<p>Then I need to get it to be like this:</p>
<pre><code> a b c
0 a 0 0
1 b a 0
2 c b a
3 d c b
</code></pre>
<p>I tried with Pandas first, but then quickly got lost in trying to find a simple and quick way of iterating over rows/columns. </p>
<p>After all this didn't quite work I decided to simply read the csv line-by-line and then recursively add data to the previous row's contents, but been unsuccessful so far in it constantly running into recursion limit issues and such.</p>
<p>What would be the best way to approach the problem? </p>
|
<p>Just a for loop would do:</p>
<pre><code>for i in range(1,3):
# may need to replace '0' with 0 or the actual column name
# also i with f'{i}' if you want column name as string
df[i] = df['0'].shift(i, fill_value=0)
# another column to shift:
df[f'other_col_{i}'] = df['other_col'].shift(i, fill_value=0)
</code></pre>
<p>If you have even more than two columns, maybe something similar to ALollz's excellent deleted answer:</p>
<pre><code>cols = ['col1', 'col2', 'col3']
new_df = pd.concat([df[cols].shift(i, fill_value=0).add_suffix(f'_{i}')
                    for i in range(3)], axis=1)
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2
0 a 0 0
1 b a 0
2 c b a
3 d c b
</code></pre>
|
python|pandas|dataframe|recursion
| 2
|
1,876
| 60,850,984
|
Using a loaded tensorflow model outside of session
|
<p>I want to load a TensorFlow model (checkpoint) and use in in a while loop. </p>
<p>Loading the model takes some time, so I want to do that before the while loop.
If I use:</p>
<pre><code>with tf.Graph().as_default():
with tf.Session() as sess:
print("loading checkpoint ...")
saver = tf.train.import_meta_graph(str(modelpath / 'mfn.ckpt.meta'))
saver.restore(sess, str(modelpath / 'mfn.ckpt'))
while:
...
</code></pre>
<p>the problem is that the session is closed after the end of while.
I saw now <a href="https://stackoverflow.com/questions/60850866/error-while-trying-to-save-tensorflow-checkpoint-with-savedmodelbuilder-for-tens">this post</a> about a similar problem.</p>
<p>The answer seemed to be using TensorFlow Serving. Unfortunately, therefore the model has to be in the format of SavedModel class. I do not have a SavedModel but only the checkpoints.</p>
<p>I tried saving the loaded checkpoint with the <code>tf.saved_model.builder.SavedModelBuilder()</code>
but ran into some issues. I made a post about those issues separately <a href="https://stackoverflow.com/questions/60850866/error-while-trying-to-save-tensorflow-checkpoint-with-savedmodelbuilder-for-tens">here</a>. </p>
<p><strong>Is there another way of running a loaded model (as in the code above) outside of a session?</strong> </p>
|
<p>From your illustration code, I guess you're using TensorFlow version 1.x (with tf.Graph, tf.Session ...). Is this right?</p>
<p>So, about your question: "Is there another way of running a loaded model (as in the code above) outside of a session?",</p>
<p>I have a suggestion: have you ever tried to convert your code to TensorFlow version 2.x?</p>
<p>If you can do this, after that:</p>
<ul>
<li><p>You can easily save and load your TF model using the tf.saved_model.save() and tf.saved_model.load() methods (see the sketch after this list),</p></li>
<li><p>Then, use the loaded model easily in a while loop.</p></li>
</ul>
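<p>A minimal TF2 sketch of that save/load round trip (the path <code>"export/mfn"</code> is just an example, and <code>model</code> is assumed to be a TF2 object such as a <code>tf.Module</code> or Keras model rebuilt from your checkpoint):</p>
<pre><code>import tensorflow as tf

tf.saved_model.save(model, "export/mfn")
loaded = tf.saved_model.load("export/mfn")

while True:
    # call `loaded` (or one of its signatures) here, no Session needed
    ...
</code></pre>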
|
python|tensorflow|tensorflow-serving
| 0
|
1,877
| 61,126,560
|
keras load_model not work in google colab
|
<p>I tried to load a model that I created on my local machine, so first I uploaded my model (.h5) to Google Drive and then accessed it in Colab using the following code:</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')
</code></pre>
<p>then i tried with following code</p>
<pre><code>from keras.models import load_model
classifier = load_model('/content/drive/My Drive/Colab Notebooks/face_shape_recog_model.h5')
</code></pre>
<p>After running the above code I got the following error:</p>
<pre><code>AttributeError: module 'tensorflow' has no attribute 'placeholder'
</code></pre>
<p>I tried uninstalling and reinstalling tensorflow and keras but still face the same issue,
and I also tried the solutions mentioned in this <a href="https://github.com/keras-team/keras/issues/9501" rel="nofollow noreferrer">github issue</a>.</p>
<p>thank you </p>
|
<p>I suspect this is due to an incompatiblity between keras 2.2 and tensorflow 2.x. You should be able to fix the issue by updating to keras 2.3 or newer:</p>
<pre><code>!pip install -U keras
</code></pre>
<p><em>Edit 2020-04-10: it looks like Keras 2.3 is now the default in Colab, so the above fix is no longer necessary.</em></p>
|
tensorflow|machine-learning|keras|deep-learning|google-colaboratory
| 2
|
1,878
| 71,687,578
|
how to get unique value in the pandas column?
|
<p>I have 2 dataframe as below:</p>
<pre><code>df.head(10)
key program
0 A emp
1 A dep
2 A emp
3 A dep
4 A dep
5 B emp
6 B dep
7 B emp
8 B emp
9 B emp
df1.head()
key program value1 value2
0 A emp 10000 100000
1 A dep 5000 30000
2 B emp 20000 40000
3 B dep 3000 6000
</code></pre>
<p>then I merge 2 df by 'key' and 'program'</p>
<pre><code>df_merge = df.merge(df1,how='left',left_on=['key','program'],right_on=['key','program'])
df_merge.head(10)
key program value1 value2
0 A emp 10000 100000
1 A dep 5000 30000
2 A emp 10000 100000
3 A dep 5000 30000
4 A dep 5000 30000
5 B emp 20000 40000
6 B dep 3000 6000
7 B emp 20000 40000
8 B emp 20000 40000
9 B emp 20000 40000
</code></pre>
<p>I would like to keep only the first (unique) values in columns 'value1' and 'value2' per 'key' and 'program';
could you please assist with how I can do that?
The expected output is below:</p>
<pre><code> key program value1 value2
0 A emp 10000 100000
1 A dep 5000 30000
2 A emp
3 A dep
4 A dep
5 B emp 20000 40000
6 B dep 3000 6000
7 B emp
8 B emp
9 B emp
</code></pre>
|
<p>You can modify your <code>merge</code> by creating a new index column:</p>
<pre><code>df_merge = (
df.merge(df1, how='left',
left_on=['key', 'program', df.groupby(['key', 'program']).cumcount()],
right_on=['key', 'program', df1.groupby(['key', 'program']).cumcount()])
.drop(columns='key_2')
)
</code></pre>
<p>Output:</p>
<pre><code>>>> df_merge
key program value1 value2
0 A emp 10000.0 100000.0
1 A dep 5000.0 30000.0
2 A emp NaN NaN
3 A dep NaN NaN
4 A dep NaN NaN
5 B emp 20000.0 40000.0
6 B dep 3000.0 6000.0
7 B emp NaN NaN
8 B emp NaN NaN
9 B emp NaN NaN
</code></pre>
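<p>If you want the blanks from your expected output instead of <code>NaN</code>, you could follow up with something like:</p>
<pre><code>df_merge[['value1', 'value2']] = df_merge[['value1', 'value2']].fillna('')
</code></pre>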
|
python|pandas
| 0
|
1,879
| 71,495,344
|
Splitting the array in python on the based of position
|
<p>I have an array in python:</p>
<pre><code>newarray=['Title',
'Salary USD',
'Equity %',
'Equity USD',
'Work location',
'Years of Experience',
'Years at Startup',
'Stage',
'Size',
'Staff electrical engineer',
'$226,000',
'0.002%',
'$650,000',
'San Francisco',
'8.0',
'3.0',
'Series H',
'1001-5000 employees',
'Sales development representative',
'$95,000',
'0.0%',
'-',
'Remote',
'1.0',
'1.0',
'Series H',
'1001-5000 employees',
'Product manager',
'$286,000',
'0.002%',
'$1,460,000',
'Remote US',
'10.0',
'1.0',
'Series H',
'1001-5000 employees',
'Data analytics manager',
'$190,000',
'0.01%',
'$126,000',
'Remote',
'6.0',
'4.0',
'Series H',
'201-500 employees']
</code></pre>
<p>There are thousands of entries in this newarray; I have only illustrated some. I want to group every <code>9</code> elements into a <strong>subarray</strong>. My expected output is:</p>
<pre><code>newarray=[['Title',
'Salary USD',
'Equity %',
'Equity USD',
'Work location',
'Years of Experience',
'Years at Startup',
'Stage',
'Size'],
['Staff electrical engineer',
'$226,000',
'0.002%',
'$650,000',
'San Francisco',
'8.0',
'3.0',
'Series H',
'1001-5000 employees'],
['Sales development representative',
'$95,000',
'0.0%',
'-',
'Remote',
'1.0',
'1.0',
'Series H',
'1001-5000 employees'],
['Product manager',
'$286,000',
'0.002%',
'$1,460,000',
'Remote US',
'10.0',
'1.0',
'Series H',
'1001-5000 employees'],
['Data analytics manager',
'$190,000',
'0.01%',
'$126,000',
'Remote',
'6.0',
'4.0',
'Series H',
'201-500 employees']]
</code></pre>
<p>I just want to make a subarray out of every 9 items.</p>
<p>I tried splitting from index 0 to 8 but it didn't work:</p>
<pre><code>import numpy as np
arr = np.array(newarray)
newarray= np.array_split(newarray, 8)
print(newarray)
</code></pre>
|
<p>IIUC, you want a single 2D array? Then <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer"><code>reshape</code></a>:</p>
<pre><code>out = arr.reshape((-1,9))
</code></pre>
<p><em>NB. be careful, <code>reshape</code> requires that you have a multiple of the dimensions, here you need n*9 items in the initial array.</em></p>
<p>output:</p>
<pre><code>array([['Title', 'Salary USD', 'Equity %', 'Equity USD', 'Work location',
'Years of Experience', 'Years at Startup', 'Stage', 'Size'],
['Staff electrical engineer', '$226,000', '0.002%', '$650,000',
'San Francisco', '8.0', '3.0', 'Series H', '1001-5000 employees'],
['Sales development representative', '$95,000', '0.0%', '-',
'Remote', '1.0', '1.0', 'Series H', '1001-5000 employees'],
['Product manager', '$286,000', '0.002%', '$1,460,000',
'Remote US', '10.0', '1.0', 'Series H', '1001-5000 employees'],
['Data analytics manager', '$190,000', '0.01%', '$126,000',
'Remote', '6.0', '4.0', 'Series H', '201-500 employees']],
dtype='<U32')
</code></pre>
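<p>If the list length is not an exact multiple of 9, <code>reshape</code> will raise; a simple sketch that drops any trailing partial record first:</p>
<pre><code>arr = np.array(newarray)
out = arr[:len(arr) // 9 * 9].reshape(-1, 9)
</code></pre>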
|
python|python-3.x|numpy
| 1
|
1,880
| 71,486,886
|
Batched input shows 3d, but got 2d, 2d tensor
|
<p>I have this training loop</p>
<pre class="lang-py prettyprint-override"><code>def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = torch.stack(X).to(device), torch.stack(y).to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
</code></pre>
<p>and this lstm:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import pandas as pd
import numpy as np
class BELT_LSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers):
super (BELT_LSTM, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.input_size = input_size
self.BELT_LSTM = nn.LSTM(input_size, hidden_size, num_layers)
def forward(self, x):
# receive an input, create a new hidden state, return output?
# reset the hidden state?
hidden = (torch.zeros(self.num_layers, self.hidden_size), torch.zeros(self.num_layers, self.hidden_size))
x, _ = self.BELT_LSTM(x, hidden)
#since our observation has several sequences, we only want the output after the last sequence of the observation'''
x = x[:, -1]
return x
</code></pre>
<p>and here's the dataset class:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import print_function, division
import os
import torch
import pandas as pd
import numpy as np
import math
from torch.utils.data import Dataset, DataLoader
class rcvLSTMDataSet(Dataset):
"""rcv dataset."""
TIMESTEPS = 10
def __init__(self, csv_data_file, annotations_file):
"""
Args:
csv_data_file (string): Path to the csv file with the training data
annotations_file (string): Path to the file with the annotations
"""
self.csv_data_file = csv_data_file
self.annotations_file = annotations_file
self.labels = pd.read_csv(annotations_file)
self.data = pd.read_csv(csv_data_file)
def __len__(self):
return math.floor(len(self.labels) / 10)
def __getitem__(self, idx):
"""
pytorch expects whatever data is returned is in the form of a tensor. Included, it expects the label for the data.
Together, they make a tuple.
"""
# convert every ten indexes and label into one observation
Observation = []
counter = 0
start_pos = self.TIMESTEPS *idx
avg_avg_1 = 0
avg_avg_2 = 0
avg_avg_3 = 0
while counter < self.TIMESTEPS:
Observation.append(self.data.iloc[idx + counter].values)
avg_avg_1 += self.labels.iloc[idx + counter][2]
avg_avg_2 += self.labels.iloc[idx + counter][1]
avg_avg_3 += self.labels.iloc[idx + counter][0]
counter += 1
#average the avg_1, avg_2, avg_3 for TIMESTEPS length
avg_avg_1 = avg_avg_1 / self.TIMESTEPS
avg_avg_2 = avg_avg_2 / self.TIMESTEPS
avg_avg_3 = avg_avg_3 / self.TIMESTEPS
current_labels = [avg_avg_1, avg_avg_2, avg_avg_3]
print(current_labels)
return Observation, current_labels
def main():
loader = rcvDataSet('foo','foo2.csv')
j = 0
while j < len(loader.data % loader.TIMESTEPS):
print(loader.__getitem__(j))
j += 1
if "__main__" == __name__:
main()
</code></pre>
<p>When running this, i get:</p>
<pre><code> File "module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "lstm.py", line 21, in forward
x, _ = self.BELT_LSTM(x, hidden)
File "module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "rnn.py", line 747, in forward
raise RuntimeError(msg)
RuntimeError: For batched 3-D input, hx and cx should also be 3-D but got (2-D, 2-D) tensors
</code></pre>
<p>but as far as i can tell, i've followed the nn.LSTM instructions for both setting up the layers, and shaping the data properly. What am i doing wrong?</p>
<p>For reference, the incoming data is rows from a csv file, 12 columns wide, and i serve 10 rows per observation</p>
<p>Thanks</p>
|
<p>Your code:</p>
<pre class="lang-py prettyprint-override"><code>hidden = (torch.zeros(self.num_layers, self.hidden_size), torch.zeros(self.num_layers, self.hidden_size))
x, _ = self.BELT_LSTM(x, hidden)
</code></pre>
<p>Here hx and cx are both 2-D tensors. The correct way should be:</p>
<pre class="lang-py prettyprint-override"><code>h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size)
c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size)
x, _ = self.BELT_LSTM(x, (h_0, c_0))
</code></pre>
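<p>For a unidirectional LSTM, <code>num_directions</code> is 1, and with the default <code>batch_first=False</code> layout the batch size can be taken from the input itself. A sketch of the forward pass under those assumptions, returning the full LSTM output:</p>
<pre><code>def forward(self, x):
    batch_size = x.size(1)   # default nn.LSTM layout is (seq_len, batch, features)
    h_0 = torch.zeros(self.num_layers, batch_size, self.hidden_size)
    c_0 = torch.zeros(self.num_layers, batch_size, self.hidden_size)
    x, _ = self.BELT_LSTM(x, (h_0, c_0))
    return x
</code></pre>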
|
pytorch|tensor
| 2
|
1,881
| 71,651,439
|
Error in creating new dataframe from comparison of 2 dataframe in puthon
|
<p>I have 2 dataframes whose samples are below:</p>
<p>df1:</p>
<pre><code> Table Field
0 AOI AEDAT
1 AEI AEDTZ
2 AOI AEENR
3 AEO AENAM
4 AEO AEOST
</code></pre>
<p>df2:</p>
<pre><code> View Field
0 Accounting 1 AEDAT
1 Accounting 1 AEDAT
2 Accounting 1 AEOST
3 Accounting 1 AEOST
</code></pre>
<p>What I want is to compare the <code>Field</code> columns of the 2 dataframes; if they match, add the <code>View</code> field from <code>df2</code> to a third dataframe, otherwise add <code>NA</code> for that row.</p>
<p>Here is what I wrote so far:</p>
<pre><code>df3 = pd.DataFrame(columns=['view'])
for index, row in df1.iterrows():
for index2, row2 in df2.iterrows():
if row['Field'] == row2['Field']:
df3['view'].append(row2['View'])
</code></pre>
<p>When I run this code I get following error: <code>TypeError: cannot concatenate object of type '<class 'str'>'; only Series and DataFrame objs are valid</code></p>
<p>How do I correct this?</p>
|
<p>Check with <code>merge</code></p>
<pre><code>df3 = df1.merge(df2, how = 'left', on = 'Field')
</code></pre>
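<p>Rows of <code>df1</code> with no match in <code>df2</code> come back as <code>NaN</code>; if you literally want the string <code>NA</code> as in your description, you could add:</p>
<pre><code>df3['View'] = df3['View'].fillna('NA')
</code></pre>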
|
python|pandas|dataframe
| 2
|
1,882
| 42,369,953
|
How to apply several functions to a single pandas dataframe column?
|
<p>I am curious about if it is possible to apply several functions to a single pandas dataframe column. For example, let's say that I have three functions:</p>
<p>In:</p>
<pre><code>def foo(col):
if 'hi' in col:
return 'TRUE'
def bar(col):
if 'bye' in col:
return 'TRUE'
def baz(col):
if 'ok' in col:
return 'TRUE'
</code></pre>
<p>And the following dataframe:</p>
<pre><code>dfs = pd.DataFrame({'col':['The quick hi brown fox hi jumps over the lazy dog',
'The quick hi brown fox bye jumps over the lazy dog',
'The NO quick brown fox ok jumps bye over the lazy dog']})
</code></pre>
<p>If I would like to apply each function to <code>col</code>, typically I will use the pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply</a> function:</p>
<pre><code>dfs['new_col1'] = dfs['col'].apply(foo)
dfs['new_col2'] = dfs['col'].apply(bar)
dfs['new_col3'] = dfs['col'].apply(baz)
dfs
</code></pre>
<p>Out:</p>
<pre><code> col new_col1 new_col2 new_col3
0 The quick hi brown fox hi jumps over the lazy dog TRUE None None
1 The quick hi brown fox bye jumps over the lazy... TRUE TRUE None
2 The NO quick brown fox ok jumps bye over the l... None TRUE TRUE
</code></pre>
<p>However, as you can see I created 3 columns. Thus, <strong>my question is how to apply efficiently in large dataframes the above 3 functions at the same time to an specific column?</strong>, the expected result should be:</p>
<pre><code> col new_col
0 The quick hi brown fox hi jumps over the lazy dog TRUE
1 The quick hi brown fox bye jumps over the lazy... TRUE, TRUE
2 The NO quick brown fox ok jumps bye over the l... TRUE, TRUE
</code></pre>
<p>Note that I know that I can merge the 3 columns in a single one. Nevertheless, I would like to know if the above question is possible.</p>
|
<p>Why not lump all functions into one giant function?</p>
<pre><code>def oneGiantFunc(col):
def foo(col):
if 'hi' in col:
return 'TRUE'
def bar(col):
if 'bye' in col:
return 'TRUE'
def baz(col):
if 'ok' in col:
return 'TRUE'
a = foo(col)
b = bar(col)
c = baz(col)
return '{} {} {}'.format(a, b, c)
df['new_col'] = df['col'].apply(oneGiantFunc)
</code></pre>
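<p>If you would rather keep the functions separate and get the <code>TRUE, TRUE</code> style output from the question (without literal <code>None</code> strings), one sketch is to loop over a list of functions and join only the truthy results:</p>
<pre><code>funcs = [foo, bar, baz]
df['new_col'] = df['col'].apply(
    lambda c: ', '.join(res for res in (f(c) for f in funcs) if res))
</code></pre>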
|
python|python-3.x|pandas|map-function
| 4
|
1,883
| 42,241,963
|
fill in missing DataFrame indices
|
<p>Given two pandas dataframes <code>dfa</code> and <code>dfb</code>, how can I ensure the MultiIndex of each DataFrame contains all rows from the other?</p>
<pre><code>In [147]: dfa
Out[147]:
c
a b
0 5 10.0
1 6 11.0
2 7 12.0
3 8 13.5
4 9 14.0
In [148]: dfb
Out[148]:
c
a b
0 5 10
2 7 12
3 8 13
4 9 14
</code></pre>
<p>Here, <code>dfb</code> lacks index (1, 6):</p>
<pre><code>In [149]: dfa - dfb
Out[149]:
c
a b
0 5 0.0
1 6 NaN
2 7 0.0
3 8 0.5
4 9 0.0
</code></pre>
<p>... but <code>dfa</code> may also lack indices from <code>dfb</code>. The value should be <code>0</code> where we insert a missing index in each dataframe.</p>
<p>In other words, each DataFrame's index should be the union of the two MultiIndexes, where the added row gets a value of 0.</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html" rel="nofollow noreferrer"><code>DataFrame.sub</code></a> with parameter <code>fill_value</code> if need replace <code>NaN</code> to some value:</p>
<pre><code>df = dfa.sub(dfb, fill_value=0)
print (df)
c
a b
0 5 0.0
1 6 11.0
2 7 0.0
3 8 0.5
4 9 0.0
</code></pre>
<pre><code>df = dfb.sub(dfa, fill_value=0)
print (df)
c
a b
0 5 10
1 6 0
2 7 12
3 8 13
4 9 14
</code></pre>
<p>Or if need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.union.html" rel="nofollow noreferrer"><code>union</code></a> of indexes add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>:</p>
<pre><code>mux = dfa.index.union(dfb.index)
print (mux)
MultiIndex(levels=[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]],
labels=[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]],
names=['a', 'b'],
sortorder=0)
print (dfa.reindex(mux, fill_value=0))
c
a b
0 5 10.0
1 6 11.0
2 7 12.0
3 8 13.5
4 9 14.0
print (dfb.reindex(mux, fill_value=0))
c
a b
0 5 10
1 6 0
2 7 12
3 8 13
4 9 14
</code></pre>
|
pandas|dataframe|union|nan|multi-index
| 1
|
1,884
| 69,837,581
|
Replace duplicated time index and fullfilling by time interpolation
|
<p>I have a dataframe with a wrong time stamp</p>
<p>The time index is wrong, instead of being sampled in periods of 1 min contains duplicated indexes with multiples of 10minutes</p>
<pre><code>2021-08-01 00:00:00
2021-08-01 00:00:00
2021-08-01 00:00:00
2021-08-01 00:00:00
...
2021-08-01 00:10:00
2021-08-01 00:10:00
....
2021-08-01 00:20:00
2021-08-01 00:20:00
... and so on
</code></pre>
<p>The desired result after the postprocessing should be</p>
<pre><code>2021-08-01 00:00:00
2021-08-01 00:01:00
2021-08-01 00:02:00
2021-08-01 00:03:00
...
2021-08-01 00:10:00
2021-08-01 00:11:00
...and so on
</code></pre>
<p>I have been trying with pandas index functions to fill the duplicated indexes with NaNs and then interpolate to 1 minute, but without success.</p>
<p>Any hint?</p>
|
<p>You can add 1-minute timedeltas, using a counter over the duplicated indices built with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> and converted with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p>
<pre><code>print (df)
b
a
2021-08-01 00:00:00 1
2021-08-01 00:00:00 1
2021-08-01 00:00:00 1
2021-08-01 00:00:00 1
2021-08-01 00:10:00 1
2021-08-01 00:10:00 1
2021-08-01 00:20:00 1
2021-08-01 00:20:00 1
df.index = pd.to_datetime(df.index)
df.index += pd.to_timedelta(df.groupby(level=0).cumcount(), 'Min')
print (df)
b
2021-08-01 00:00:00 1
2021-08-01 00:01:00 1
2021-08-01 00:02:00 1
2021-08-01 00:03:00 1
2021-08-01 00:10:00 1
2021-08-01 00:11:00 1
2021-08-01 00:20:00 1
2021-08-01 00:21:00 1
</code></pre>
|
python|pandas|datetime|reindex
| 1
|
1,885
| 70,019,109
|
Finding similarities between two columns in two datasets in Python: optimization of approach
|
<p>Imagine i have the following datasets:</p>
<pre><code>import difflib as dl
import numpy as np
import pandas as pd
df1 = pd.DataFrame([[1,'one'],[2,'two'],[3,'three'],[4,'four'],[5,'five'],[7,'seven']], columns=['number', 'name'])
df2 = pd.DataFrame([[1,'one'],[2,'two'],[3,'three'],[4,'four'],[5,'five'],[55,'five'],[555,'five'],[6,'six'],[7,'seven'],[77,'seven'],[777,'seven'],[8,'eight']], columns=['number', 'name'])
</code></pre>
<p>I want to find similarities between name columns of the two datasets and for this i use <code>difflib</code> library and <code>apply</code> method from pandas:</p>
<pre><code>df1['duplicates'] = df1['name'].apply(lambda x: dl.get_close_matches(x, df2['name'], cutoff=0.75, n=5))
</code></pre>
<p>after this i use <code>explode</code>, in order to expand lists into separate records:</p>
<pre><code>df1 = df1.explode('duplicates').reset_index(drop=True).drop_duplicates(subset=['duplicates'],keep="last")
df1.reset_index(drop=True,inplace = True)
</code></pre>
<p>At the end I got the following result:</p>
<pre><code> number name duplicates
0 1 one one
1 2 two two
2 3 three three
3 4 four four
4 5 five five
5 7 seven seven
</code></pre>
<p>But in the final dataframe I want to get the duplicate IDs as well (from df_2).</p>
<p>Of course, I can use the <code>pd.merge</code> function, but for big datasets (10k records) it's slow, and I suppose there is a better approach for achieving my goal.</p>
<p>Can we return the duplicate IDs in the <code>apply</code> function?
Is there a better approach than <code>pd.merge</code>?</p>
|
<p>Not sure exactly what output you are expecting, but you can use the <code>.isin()</code> method (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer">see the pandas docs</a>).</p>
<p>This would be a lot faster than doing a merge.</p>
<p>So something like</p>
<pre class="lang-py prettyprint-override"><code>duplicates = df2[df2['name'].isin(df1['name'])]
</code></pre>
<p>This outputs the following, which gives you the IDs from df2, where the name is duplicated in df1.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>number</th>
<th>name</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>one</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>two</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>three</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>four</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>five</td>
</tr>
<tr>
<td>5</td>
<td>55</td>
<td>five</td>
</tr>
<tr>
<td>6</td>
<td>555</td>
<td>five</td>
</tr>
<tr>
<td>7</td>
<td>6</td>
<td>six</td>
</tr>
<tr>
<td>8</td>
<td>7</td>
<td>seven</td>
</tr>
<tr>
<td>9</td>
<td>77</td>
<td>seven</td>
</tr>
<tr>
<td>10</td>
<td>777</td>
<td>seven</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas
| 1
|
1,886
| 43,050,898
|
How to use .apply function on a pandas DataFrame that has been filtered by regex?
|
<p>I have a pandas DataFrame with data scraped from a couple Wiki tables. The DataFrame has a column for names and some of these names are followed by "\r\n(head coach)". I would like to remove that and so I tried this:</p>
<pre><code>df['name'][df.name.str.contains(r'coach')] =\
df['name'][df.name.str.contains(r'coach')].apply(lambda x: x[0:-14])
</code></pre>
<p>When this runs, I get a SettingWithCopyWarning. I tried using .loc as suggested in this <a href="https://stackoverflow.com/questions/28002197/pandas-proper-way-to-set-values-based-on-condition-for-subset-of-multiindex-da">SO Q&A</a>:</p>
<pre><code> mask = df.loc[:,'name'] == df['name'].str.contains(r'coach')
</code></pre>
<p>But every value returns as False and so I get an empty Series when I use this with my DataFrame.</p>
<p>I'm not sure where I am going wrong with this. Any pointers?</p>
|
<p>You can try this:</p>
<pre><code>mask = df.name.str.contains(r'coach')
df.loc[mask, 'name'] = df.loc[mask, 'name'].str[:-14]
</code></pre>
<p>Or as @piRSquared commented, this simple line should also work:</p>
<pre><code>df.loc[mask, 'name'] = df.name.str[:-14]
</code></pre>
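<p>If the suffix length ever varies, slicing a fixed 14 characters becomes fragile; a regex-based sketch (the exact pattern is an assumption about your data, and the <code>regex=</code> keyword needs a reasonably recent pandas) that avoids the mask entirely:</p>
<pre><code>df['name'] = df['name'].str.replace(r'\r\n\(head coach\)$', '', regex=True)
</code></pre>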
|
python|regex|pandas|dataframe
| 3
|
1,887
| 50,458,413
|
Transposing a specific column into row in python dataframe
|
<p>I am trying to transpose a dataframe with a specific format.
Here is my current dataframe, called df:
<a href="https://i.stack.imgur.com/0c7Rd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0c7Rd.jpg" alt="enter image description here"></a> </p>
<p>and the result of the transpose should be:
<a href="https://i.stack.imgur.com/gXAvP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gXAvP.jpg" alt="enter image description here"></a></p>
<p>Thanks in advance.</p>
|
<p>You can try using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pd.pivot_table</code></a>:</p>
<pre><code>res = df.pivot_table(index=['pid', 'Accuracy'], columns=['TreeFeatures'],
values='Importance 1', aggfunc='first', fill_value=0)
</code></pre>
<p>If you need to elevate index to columns, reset index via <code>res.reset_index()</code>. </p>
|
python|python-3.x|pandas|dataframe
| 2
|
1,888
| 50,300,972
|
matmul function for vector with tensor multiplication in tensorflow
|
<p>In general when we multiply a vector <code>v</code> of dimension <code>1*n</code> with a tensor <code>T</code> of dimension <code>m*n*k</code>, we expect to get a matrix/tensor of dimension <code>m*k</code>/<code>m*1*k</code>. This means that our tensor has <code>m</code> slices of matrices with dimension <code>n*k</code>, and <code>v</code> is multiplied to each matrix and the resulting vectors are stacked together. In order to do this multiplication in <code>tensorflow</code>, I came up with the following formulation. I am just wondering if there is any built-in function that does this standard multiplication straightforward?</p>
<pre><code>T = tf.Variable(tf.random_normal((m,n,k)), name="tensor")
v = tf.Variable(tf.random_normal((1,n)), name="vector")
c = tf.stack([v,v]) # m times, here set m=2
output = tf.matmul(c,T)
</code></pre>
|
<p>You can do it with:</p>
<pre><code>tf.reduce_sum(tf.expand_dims(v,2)*T,1)
</code></pre>
<p>Code:</p>
<pre><code>m, n, k = 2, 3, 4
T = tf.Variable(tf.random_normal((m,n,k)), name="tensor")
v = tf.Variable(tf.random_normal((1,n)), name="vector")
c = tf.stack([v,v]) # m times, here set m=2
out1 = tf.matmul(c,T)
out2 = tf.reduce_sum(tf.expand_dims(v,2)*T,1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
n_out1 = sess.run(out1)
n_out2 = sess.run(out2)
#both n_out1 and n_out2 matches
</code></pre>
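<p>As far as I can tell, <code>tf.einsum</code> expresses the same contraction in one call (note that <code>out1</code> above has shape <code>(m, 1, k)</code> while the reduce-sum version has shape <code>(m, k)</code>, so squeeze or reshape as needed):</p>
<pre><code># v has shape (1, n); drop the leading 1 so the equation reads naturally
out3 = tf.einsum('n,mnk->mk', tf.reshape(v, [-1]), T)  # result has shape (m, k)
</code></pre>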
|
tensorflow|tensor|vector-multiplication
| 1
|
1,889
| 45,626,789
|
TensorFlow Estimator restoring all variables properly, but loss spikes up afterwards
|
<p>I am using TensorFlow 1.2.1 on Windows 10 with the Estimator API. Everything runs without any errors, but whenever I have to restore the parameters from a checkpoint, some aspect of it doesn't work. I've checked that the values of every variable in classifier.get_variable_names() do not change after an evaluation; however, the loss spikes back up to near where it started, followed by continued learning, each time faster than the last.</p>
<p>This happens within one TensorFlow run, when a validation or evaluation run happens, or when I rerun the python file to continue training.</p>
<p>The following graphs are one example of this problem, they are restoring the variables every 2500 steps:</p>
<p><a href="https://imgur.com/6q9Wuat" rel="nofollow noreferrer">http://imgur.com/6q9Wuat</a></p>
<p><a href="https://imgur.com/CQ2hdR8" rel="nofollow noreferrer">http://imgur.com/CQ2hdR8</a></p>
<p>The following code is a significantly reduced version of my code, which still replicates the error:</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
tf.logging.set_verbosity(tf.logging.INFO)
sess = tf.InteractiveSession()
def cnn_model_fn(features, labels, mode):
dense_layer1 = tf.layers.dense(inputs=features, units=512, activation=tf.nn.relu, name="FC_1")
dense_layer2 = tf.layers.dense(inputs=dense_layer1, units=1024, activation=tf.nn.relu, name="FC_2")
dense_layer3 = tf.layers.dense(inputs=dense_layer2, units=2048, activation=tf.nn.relu, name="FC_3")
dense_layer4 = tf.layers.dense(inputs=dense_layer3, units=512, activation=tf.nn.relu, name="FC_4")
logits = tf.layers.dense(inputs=dense_layer4, units=2, name="logit_layer")
loss = None
train_op = None
if mode != learn.ModeKeys.INFER:
loss = tf.losses.softmax_cross_entropy(
onehot_labels=labels, logits=logits)
if mode == learn.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=.001,
optimizer="SGD")
predictions = {
"classes": tf.argmax(input=logits, axis=1),
"probabilities": tf.nn.softmax(
logits, name="softmax_tensor")}
return model_fn_lib.ModelFnOps(
mode=mode,
predictions=predictions,
loss=loss,
train_op=train_op)
def main(unused_param):
def data_pipeline(filenames, batch_size, num_epochs=None, min_after_dequeue=10000):
with tf.name_scope("data_pipeline"):
filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs)
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)
row = tf.decode_csv(value, record_defaults=[[0.0] for _ in range(66)])
example_op, label_op = tf.stack(row[:len(row)-2]), tf.stack(row[len(row)-2:])
capacity = min_after_dequeue + 3 * batch_size
example_batch, label_batch = tf.train.shuffle_batch(
[example_op, label_op],
batch_size=batch_size,
capacity=capacity,
min_after_dequeue=min_after_dequeue)
return example_batch, label_batch
def input_data_fn(data_getter_ops):
batch, labels = sess.run(data_getter_ops)
return tf.constant(batch, dtype=tf.float32), tf.constant(labels, dtype=tf.float32)
NUM_EPOCHS = 6
BATCHES_IN_TRAINING_EPOCH = 8000
training_data_pipe_ops = data_pipeline(
filenames=["train_data.csv"],
batch_size=500,
min_after_dequeue=10000)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
classifier = tf.contrib.learn.Estimator(
model_fn=cnn_model_fn,
model_dir="/tmp/bug_finder")
for j in range(NUM_EPOCHS):
classifier.fit(
input_fn=lambda: input_data_fn(training_data_pipe_ops),
steps = BATCHES_IN_TRAINING_EPOCH)
print("Epoch", str(j+1), "training completed.")
coord.request_stop()
coord.join(threads)
if __name__ == "__main__":
tf.app.run()
</code></pre>
|
<p>I figured out the issue: I was creating data pipelines with the interactive session I created, and then having my input function evaluate the examples (like a feed dictionary). The reason this is an issue is that the Estimator class creates its own session (a MonitoredTrainingSession), and since the graph operations weren't being created from within a call from the Estimator class (and thus with its session), they were not being saved. Using an input function to create the graph operations, and returning the final graph operation (the batching), has resulted in everything working smoothly.</p>
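<p>For reference, a minimal sketch of the fix (names taken from the code in the question, details omitted): build the queue ops <em>inside</em> the input function so they live in the graph and session that the Estimator owns.</p>
<pre><code>def train_input_fn():
    # Created lazily inside the Estimator's own graph, so these ops are
    # saved and restored together with the model variables.
    example_batch, label_batch = data_pipeline(
        filenames=["train_data.csv"],
        batch_size=500,
        min_after_dequeue=10000)
    return example_batch, label_batch

classifier = tf.contrib.learn.Estimator(model_fn=cnn_model_fn,
                                        model_dir="/tmp/bug_finder")
classifier.fit(input_fn=train_input_fn,
               steps=NUM_EPOCHS * BATCHES_IN_TRAINING_EPOCH)
</code></pre>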
|
tensorflow
| 0
|
1,890
| 45,687,352
|
Training neural network by making batch_size increase to avoid shocking
|
<pre><code>... build the graph ...
train_step =
tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
batch_xs, batch_ys = data.next_batch(batch_size)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>In typical tensorflow neural network training, we usually make the <code>learning_rate</code> decay, but rarely make the <code>batch_size</code> increase. I think increasing the <code>batch_size</code> could also make the neural network converge and avoid oscillation. This is my suggestion for training a neural network. Do you think it could be useful?</p>
|
<p><strong>If your descent is noisy:</strong></p>
<p>Increasing the <code>batch_size</code> will stabilise the fluctuations, as the gradient will be averaged over a higher number of samples. </p>
<p>The effect of halving the <code>learning_rate</code> is similar to that of doubling the <code>batch_size</code>, but not the same (think vector-wise about how this differs). Halving the <code>learning_rate</code> is better from a mathematical point of view, but doubling the <code>batch_size</code> might(!) be computationally more convenient.</p>
<p><strong>In case of low noise:</strong></p>
<p>Reducing the <code>learning_rate</code> is the only viable option. If your gradient direction is not noisy, then increasing the <code>batch_size</code> is not going to change the situation much. A smaller <code>learning_rate</code> could however be useful, as a big step could make the gradient direction not representative, and you could exit the "valley".</p>
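<p>As a rough sketch of the idea, reusing the feed-dict loop from the question, you could grow <code>batch_size</code> on a simple schedule; whether this actually helps depends on how noisy your gradients are:</p>
<pre><code>batch_size = 64
for step in range(1000):
    batch_xs, batch_ys = data.next_batch(batch_size)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    # e.g. double the batch size every 250 steps, capped at 512
    if step > 0 and step % 250 == 0:
        batch_size = min(batch_size * 2, 512)
</code></pre>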
|
tensorflow|neural-network
| 1
|
1,891
| 62,847,023
|
Assign value to column based on lookup table using pandas
|
<p>I've the following matrix:</p>
<pre><code>destinations = ["DC","NY","SF","AL"]
workinDays = [[3, 5, 7, 7],
[5, 5, 7, 7],
[7, 7, 7, 7],
[7, 7, 7, 7]]
working_days_df = pd.DataFrame(data=workinDays, columns=destinations,
index=destinations).astype(str) + " working days"
</code></pre>
<p>Based on the above matrix (running the code above gives the matrix as a DataFrame), I want to assign values to another DataFrame <code>other_df</code>, which has up to 100 rows:</p>
<pre><code>dest1 dest2
DC DC
NY AL
...
</code></pre>
<p>So I want to add a new column which reads the correct value from the matrix above. For example, in row 2 <code>dest1</code> is NY and <code>dest2</code> is AL, so based on the matrix its value should be 7. How can I do that?</p>
|
<p>IIUC, you can perform a lookup:</p>
<pre><code>df_other['new'] = working_days_df.lookup(df_other['dest1'], df_other['dest2'])
</code></pre>
<p>Here, <code>working_days_df</code> is your matrix DataFrame, while <code>df_other</code> is the one you'd like to lookup values for.</p>
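<p>Note that <code>DataFrame.lookup</code> was deprecated in pandas 1.2 and removed in later releases; on newer versions an equivalent sketch uses positional indexing on the underlying array:</p>
<pre><code>rows = working_days_df.index.get_indexer(df_other['dest1'])
cols = working_days_df.columns.get_indexer(df_other['dest2'])
df_other['new'] = working_days_df.to_numpy()[rows, cols]
</code></pre>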
|
python|pandas
| 1
|
1,892
| 62,766,853
|
Determine all the possible combinations between the main elements of the parent list
|
<p>I'm working on designing a dataset and I'm facing an issue with a specific part of it. I provided the example below to simplify and relate to my issue.</p>
<p>I have a list of lists</p>
<pre><code>list = ['b',['c','g','d'],['h','l']]
</code></pre>
<p>and I'm interested in a <strong>general solution</strong> to determine all the possible combinations between the main elements of the parent list</p>
<p>Solution needed:</p>
<pre><code>['b','c','h']
['b','c','l']
['b','g','h']
['b','g','l']
['b','d','h']
['b','d','l']
</code></pre>
|
<p>You can use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>itertools.product()</code></a>:</p>
<pre><code>import itertools
my_list = ['b', ['c','g','d'], ['h','l']]
print(list(itertools.product(*my_list)))
</code></pre>
<p>output:</p>
<pre><code>[('b', 'c', 'h'), ('b', 'c', 'l'), ('b', 'g', 'h'),
('b', 'g', 'l'), ('b', 'd', 'h'), ('b', 'd', 'l')]
</code></pre>
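<p>One caveat: <code>product</code> iterates over each element, so a bare multi-character string such as <code>'bc'</code> would be split into single characters. If your parent list can contain such scalars, wrap them first (a small sketch):</p>
<pre><code># Wrap every non-list element in a one-element list before taking the product
normalized = [x if isinstance(x, list) else [x] for x in my_list]
print(list(itertools.product(*normalized)))
</code></pre>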
|
python-3.x|pandas|numpy
| 0
|
1,893
| 62,800,305
|
Repeat pandas rows based on content of a list
|
<p>I have a large pandas dataframe df as:</p>
<pre><code>Col1 Col2
2 4
3 5
</code></pre>
<p>I have a large list as:</p>
<pre><code>['2020-08-01', '2021-09-01', '2021-11-01']
</code></pre>
<p>I am trying to achieve the following:</p>
<pre><code>Col1 Col2 StartDate
2 4 8/1/2020
3 5 8/1/2020
2 4 9/1/2021
3 5 9/1/2021
2 4 11/1/2021
3 5 11/1/2021
</code></pre>
<p>Basically, tile the dataframe <code>df</code> while adding the elements of the list as a new column. I am not sure how to approach this.</p>
|
<p>Let's use a list comprehension with <code>assign</code> and <code>pd.concat</code>:</p>
<pre><code>l = ['2020-08-01', '2021-09-01', '2021-11-01']
pd.concat([df.assign(startDate=i) for i in l], ignore_index=True)
</code></pre>
<p>Output:</p>
<pre><code> Col1 Col2 startDate
0 2 4 2020-08-01
1 3 5 2020-08-01
2 2 4 2021-09-01
3 3 5 2021-09-01
4 2 4 2021-11-01
5 3 5 2021-11-01
</code></pre>
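<p>On pandas 1.2 or newer, a cross join gives the same result without an explicit loop (just a sketch; reorder the columns at the end if you care about their order):</p>
<pre><code>dates = pd.DataFrame({'StartDate': ['2020-08-01', '2021-09-01', '2021-11-01']})
out = dates.merge(df, how='cross')[['Col1', 'Col2', 'StartDate']]
</code></pre>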
|
pandas|python-3.8
| 4
|
1,894
| 62,602,528
|
Splitting an excel file into dataframes and then creating two new files from them in Python
|
<p>Sorry about the poorly worded title. On this project I am bringing in multiple excel files that need to be manipulated and then sent back out as multiple csv files (eventually heading into BigQuery). Among other things, I am trying to eliminate the final 6 rows (a watermark that is not needed) and then create two separate csv files. The excel files come in looking something like this:</p>
<p><a href="https://i.stack.imgur.com/Xjwo6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xjwo6.png" alt="enter image description here" /></a></p>
<p>I am removing the final 6 rows with a skipfooter and then am creating the first dataframe with columns 1-181 and the second dataframe with columns 182-225. I can split them out but have had issues using an append or merge (probably doing it incorrectly). What I want is for the second csv to have the PID inserted as a new first column and filled in for every row, something like this:</p>
<p><a href="https://i.stack.imgur.com/SmWeY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SmWeY.png" alt="enter image description here" /></a></p>
<p>My big questions are: how do I correctly attach (append) that PID to all the rows that need it, and how do I loop through the hundreds of excel files I am bringing in so that the correct PID goes with the correct Record# tests? At this time I am just working with one file so I can see if it works correctly. In the code below my append will append index_df to second_df, but I am unsure how to fill the rest of the rows with that same PID.</p>
<pre><code>import os
import pandas as pd
import csv
raw_data_frame = pd.read_excel('\\\\file01\\incoming\\mat\ID5.xlsx', skipfooter=6)
first_df = raw_data_frame.iloc[:, 1:182]
second_df = raw_data_frame.iloc[:, 182:225]
index_df = raw_data_frame.iloc[0:1, 4:5]
df_combine = index_df.append(second_df)
</code></pre>
|
<p>You said the split part of the problem was already working well, and you can use append and merge to accomplish this, but I think <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>df.insert</code></a> (<code>df.insert(0, column="PID", value=df["PID"])</code>) followed by a <code>ffill</code> works better in this case. For the iteration over the <code>xls</code> files you can use a for loop with <a href="https://docs.python.org/3.6/library/glob.html#glob.glob" rel="nofollow noreferrer"><code>glob.glob</code></a> to find all the documents in a pre-defined folder. The way the output files are generated will have to be adapted to your problem; here I opted to place each <code>csv</code> file pair inside a new folder with the respective <code>PID</code> number.</p>
<pre><code>import pandas as pd
import glob
import os
INPUT_FOLDER = "input_xls"
OUTPUT_FOLDER = "output_xls"
for excel_file in glob.glob(os.path.join(INPUT_FOLDER, '*.xls')):
df = pd.read_excel(excel_file, skipfooter=6, dtype=str)
print(df)
# change to 182 here
COL_SPLIT = 5
first_df = df.iloc[:,:COL_SPLIT]
first_df = first_df.dropna(how="all")
second_df = df.iloc[:, COL_SPLIT:]
second_df = second_df.dropna(how="all")
second_df.insert(0, column="PID", value=df["PID"])
second_df["PID"].ffill(inplace=True)
print(first_df)
print(second_df)
pid = first_df.loc[0, "PID"]
out_path = os.path.join(OUTPUT_FOLDER, f'PID-{pid}')
os.makedirs(out_path, exist_ok=True)
first_df.to_csv(os.path.join(out_path,"first.csv"), index=False)
second_df.to_csv(os.path.join(out_path,"second.csv"), index=False)
</code></pre>
<pre><code>.
├── input_xls
│   ├── excel_with_PID111.xls
│   ├── excel_with_PID1765.xls
│   ├── excel_with_PID232.xls
│   └── excel_with_PID67867.xls
└── output_xls
    ├── PID-111
    │   ├── first.csv
    │   └── second.csv
    ├── PID-1765
    │   ├── first.csv
    │   └── second.csv
    ├── PID-232
    │   ├── first.csv
    │   └── second.csv
    └── PID-67867
        ├── first.csv
        └── second.csv
</code></pre>
<p>Dataframe <strong>first_df</strong></p>
<pre><code> PID Last First Gender Age
0 111 Guy Some M 35
</code></pre>
<p>Dataframe <strong>second_df</strong></p>
<pre><code> PID Record# test1 test2 test3
0 111 222 378 24 371
1 111 223 319 28 311
2 111 224 207 20 210
3 111 225 100 30 200
</code></pre>
|
python|pandas|dataframe
| 0
|
1,895
| 54,446,886
|
how to keep value and value below shift (1)
|
<p>I want to know how to keep a value together with the value below it when that value equals "NaN". Thank you. Example:</p>
<pre><code>df = pd.DataFrame ({'list': ["juan", "NaN", "Maria", "NaN", "juan", "juanita", "juan", "NaN"]})
</code></pre>
<p>I just want to keep the following:</p>
<pre><code>df = pd.DataFrame ({'list': ["juan", "NaN", "juan", "NaN"]})
</code></pre>
<p>only when the value is "juan" and the value below is "NaN". But I do not want to use a "for" loop ... I think something like "shift(1)" could work.</p>
|
<p>First, we'll get the indices of each row that contains "juan" and has a row below it that contains "NaN":</p>
<pre><code>import numpy as np
import pandas as pd

cond1 = df['list'] == 'juan'
cond2 = df['list'].shift(-1) == 'NaN'
idxs = cond1 & cond2
idxs = idxs[idxs]   # keep only the True entries
</code></pre>
<p>We're almost done, but since you want to include the subsequent "NaN" rows in your final output as well, we will need to include their indices:</p>
<pre><code>idxs = np.array([[i,i+1] for i in idxs.index.values]).flatten()
</code></pre>
<p>To get the desired output, we just select these indices from the original df:</p>
<pre><code>output = df.loc[idxs]
</code></pre>
<p>Which gives us:</p>
<pre><code> list
0 juan
1 NaN
6 juan
7 NaN
</code></pre>
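<p>If you prefer not to build the index array explicitly, the same selection can be expressed with boolean masks only (a sketch):</p>
<pre><code>is_juan_before_nan = (df['list'] == 'juan') & (df['list'].shift(-1) == 'NaN')
# keep each matching "juan" row plus the "NaN" row directly below it
keep = is_juan_before_nan | is_juan_before_nan.shift(1, fill_value=False)
output = df[keep]
</code></pre>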
|
database|pandas|numpy|dataframe|data-science
| 1
|
1,896
| 54,669,673
|
Change the cell values , using Pandas (Python)
|
<p>I need some help with a pandas dataframe.
Look at the image:
<a href="https://i.stack.imgur.com/XfnCw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XfnCw.png" alt="enter image description here"></a></p>
<p>As you can see, I have some rows where the values are equal, for example <strong>"Type address"</strong> or <strong>"Public Place"</strong>.</p>
<p>But I want to transform these excel rows into columns.</p>
<p>Using the following code:</p>
<pre><code>import numpy as np
import pandas as pd
import openpyxl
df = pd.read_excel('myfile.xlsx')
tester = df.values.tolist()
keys = list(zip(*tester))[0]
seen = set()
seen_add = seen.add
keysu= [x for x in keys if not (x in seen or seen_add(x))]
values = list(zip(*tester))[1]
a = np.array(values).reshape(int(len(values)/len(keysu)),len(keysu))
list1 = [keysu]
for i in a:
list1.append(list(i))
df = pd.DataFrame(list1)
df.to_excel('output.xlsx',index=False,header=False)
</code></pre>
<p>However, the equal values aren't handled as well as I want.</p>
<p>What I want: </p>
<p>change the repeated <strong>"Type address"</strong> values, for example, into <strong>"Type address 1"</strong>, <strong>"Type address 2"</strong>, <strong>"Type address 3"</strong>, depending on the repetition.</p>
<p>But how can I do that? Can somebody help me?</p>
|
<p>You could iterate over the column and replace them as needed. Something like this maybe:</p>
<pre><code>counter = 1
result = []
for i in df.iloc[:, 0]:
    if i == "Type address":
        # number this occurrence and move the counter forward
        result.append(f"{i} {counter}")
        counter += 1
    else:
        result.append(i)
df.iloc[:, 0] = result
</code></pre>
<p>Above I use f-strings (Python 3.6 or above); if you are on an older version of Python, you can replace it with <code>"{} {}".format(i, counter)</code>.</p>
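<p>A loop-free variant of the same idea, numbering only the repeated label with a cumulative count of its occurrences (sketch, assuming the column of interest is the first one):</p>
<pre><code>col = df.iloc[:, 0]
is_repeat = col.eq("Type address")
occurrence = is_repeat.cumsum()  # 1, 2, 3, ... at each "Type address" row
df.iloc[:, 0] = col.where(~is_repeat, col + " " + occurrence.astype(str))
</code></pre>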
|
python-3.x|pandas
| 1
|
1,897
| 54,495,737
|
How to get prediction when computing loss function in convolutional neural network (tensorflow)?
|
<p>I built a convolutional neural network with tensorflow by following these steps:
<a href="https://www.tensorflow.org/tutorials/estimators/cnn" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/estimators/cnn</a></p>
<p>I want to compute the loss with my own loss function and therefore need to get the predicted probabilities of each class in each training step.
From the TensorFlow tutorial I know that I can get these probabilities with "tf.nn.softmax(logits)"; however, this returns a tensor and I don't know how to extract the actual probabilities from it. Can anyone please tell me how I can get these probabilities, so I can compute my loss function?</p>
|
<p>This is how you compute the softmax and get the probabilities afterwards:</p>
<pre><code># Probabilities for each element in the batch for each class.
softmax = tf.nn.softmax(logits, axis=1)
# For each element in the batch, return the class that has the maximal probability
predictions = tf.argmax(softmax, axis=1)
</code></pre>
<p>However, please note that you don't need the predictions in order to compute the loss function; you need the actual probabilities. In case you want to compute other metrics, you can use the predictions (metrics such as accuracy, precision, recall, etc.). The softmax tensor contains the actual probabilities for each of your classes. For example, assuming that you have 2 elements in a batch, and you are trying to predict one out of three classes, the softmax will give you the following:</p>
<pre><code># Logits with random numbers
logits = np.array([[24, 23, 50], [50, 30, 32]], dtype=np.float32)
softmax = tf.nn.softmax(logits, axis=1)
# The softmax returns
# [[5.1090889e-12 1.8795289e-12 1.0000000e+00]
# [1.0000000e+00 2.0611537e-09 1.5229979e-08]]
# If we sum the probabilites for each batch they should sum up to one
tf.reduce_sum(softmax, axis=1)
# [1. 1.]
</code></pre>
<p>Based on how you imagine your loss function to be this should be correct:</p>
<pre><code>first_second = tf.nn.l2_loss(softmax[0] - softmax[1])
first_third = tf.nn.l2_loss(softmax[0] - softmax[2])
divide_and_add_m = tf.divide(first_second, first_third) + m
loss = tf.maximum(0.0, 1 - tf.reduce_sum(divide_and_add_m))
</code></pre>
|
python|tensorflow|conv-neural-network|prediction|loss-function
| 0
|
1,898
| 73,676,483
|
Can Horovod with TensorFlow work on non-GPU instances in Amazon SageMaker?
|
<p>I want to perform <strong>distributed training</strong> on <strong>Amazon SageMaker</strong>. The code is written with <strong>TensorFlow</strong> and is similar to the following code, where I think a CPU instance should be enough:
<a href="https://github.com/horovod/horovod/blob/master/examples/tensorflow_word2vec.py" rel="nofollow noreferrer">https://github.com/horovod/horovod/blob/master/examples/tensorflow_word2vec.py</a></p>
<p>Can <strong>Horovod with TensorFlow</strong> work on <strong>non-GPU</strong> instances in Amazon SageMaker?</p>
|
<p>Yes, you should be able to use both CPUs and GPUs with Horovod on Amazon SageMaker. Please follow the example below:</p>
<p><a href="https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/tensorflow_script_mode_horovod/tensorflow_script_mode_horovod.ipynb" rel="nofollow noreferrer">https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/tensorflow_script_mode_horovod/tensorflow_script_mode_horovod.ipynb</a></p>
|
tensorflow|amazon-sagemaker|distributed-training|horovod
| 0
|
1,899
| 73,818,490
|
tensorflow keras load models weights
|
<p>I have recently saved some models which I trained on another machine, but I didn't save them with the <code>h5</code> extension as I have seen with other models. I don't yet grasp how to load the weights. I can load the model, but without the weights it is of little use. Please help :-)</p>
<pre><code>from keras.models import load_model
from keras.models import model_from_json
model_LSTM_rendimiento = keras.models
model_LSTM_super = keras.models
model_LSTM_primero = keras.models
model_LSTM_rendimiento.load_model('../model_LSTM_rendimiento')
model_LSTM_super.load_model('../model_LSTM_super')
model_LSTM_primero.load_model('../model_LSTM_primero')
model_LSTM_primero.load_weights('../model_LSTM_primero_weights')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_186379/3422008780.py in <module>
12 # model_LSTM_super.load_weights('../model_LSTM_super_weights')
13 model_LSTM_primero.load_model('../model_LSTM_primero')
---> 14 model_LSTM_primero.load_weights('../model_LSTM_primero_weights')
AttributeError: module 'keras.models' has no attribute 'load_weights'
</code></pre>
|
<p>Since you haven't saved the model in the h5 format, I'll assume you used the SavedModel format like this:</p>
<pre><code>model.save('path/to/location')
</code></pre>
<p>If this is what you did, then loading the model like this is enough:</p>
<pre><code>model = keras.models.load_model('path/to/location')
</code></pre>
<p>You don't have to load the weights separately; from the SavedModel <a href="https://www.tensorflow.org/guide/keras/save_and_serialize?hl=en#savedmodel_format" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>SavedModel is the more comprehensive save format that saves the model
architecture, weights, and the traced Tensorflow subgraphs of the call
functions. This enables Keras to restore both built-in layers as well
as custom objects.</p>
</blockquote>
<p>Your code:</p>
<pre><code>from tensorflow import keras
model_LSTM_rendimiento = keras.models.load_model('../model_LSTM_rendimiento')
model_LSTM_super = keras.models.load_model('../model_LSTM_super')
model_LSTM_primero = keras.models.load_model('../model_LSTM_primero')
</code></pre>
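<p>If you also saved weights on their own with <code>model.save_weights(...)</code>, call <code>load_weights</code> on a model <em>instance</em>, not on the <code>keras.models</code> module (which is what raised your <code>AttributeError</code>):</p>
<pre><code># Only needed if '../model_LSTM_primero_weights' was produced by model.save_weights()
model_LSTM_primero.load_weights('../model_LSTM_primero_weights')
</code></pre>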
|
tensorflow|keras
| 0
|