Dataset columns: Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k)
|---|---|---|---|---|---|---|
374,200
| 43,717,113
|
Keras: ValueError: "concat" mode can only merge layers with matching output shapes
|
<p>I am facing this error in Keras 2. How can I resolve it?
I have imported</p>
<pre><code>from keras.layers import Input, merge
</code></pre>
<p>[...]</p>
<pre><code> up1 = merge([UpSampling2D(size=(2, 2))(conv3), conv2], mode='concat', concat_axis=1)
/usr/local/python/3.5.2-gcc4/externalmodules/lib/python3.5/site-packages/keras/legacy/layers.py:456: UserWarning: The `Merge` layer is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
name=name)
Traceback (most recent call last):
File "./src/retinaNN_training.py", line 171, in <module>
model = get_unet(n_ch, patch_height, patch_width) #the U-net model
File "./src/retinaNN_training.py", line 53, in get_unet
up1 = merge([UpSampling2D(size=(2, 2))(conv3), conv2], mode='concat', concat_axis=1)
File "/usr/local/python/3.5.2-gcc4/externalmodules/lib/python3.5/site-packages/keras/legacy/layers.py", line 456, in merge
name=name)
File "/usr/local/python/3.5.2-gcc4/externalmodules/lib/python3.5/site-packages/keras/legacy/layers.py", line 107, in __init__
node_indices, tensor_indices)
File "/usr/local/python/3.5.2-gcc4/externalmodules/lib/python3.5/site-packages/keras/legacy/layers.py", line 187, in _arguments_validation
'Layer shapes: %s' % (input_shapes))
ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 2, 24, 128), (None, 1, 24, 64)]
</code></pre>
<p>How can I convert this <code>merge</code> call so that it works on Keras 2.x? Thanks</p>
|
<p>It seems you are using a Keras version newer than <code>1.0.3</code>. Versions after 1.0.3 use TensorFlow as the default backend (see this <a href="https://github.com/orobix/retina-unet/issues/6" rel="nofollow noreferrer">GitHub issue</a>). So you can do one of two things:</p>
<ol>
<li>Install <code>1.0.3</code> Version of Keras. [OR]</li>
</ol>
<p><code>pip install https://github.com/fchollet/keras/tarball/1.0.3</code> <br></p>
<ol start="2">
<li>Modify <code>~/.keras/keras.json</code> so that it looks like</li>
</ol>
<p><code>{ "image_dim_ordering": "th", "epsilon": 1e-07, "floatx": "float32", "backend": "tensorflow" }</code></p>
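<p>As a footnote on the actual question asked: in Keras 2 the legacy call is typically rewritten with <code>concatenate</code> from <code>keras.layers</code>, e.g. <code>up1 = concatenate([UpSampling2D(size=(2, 2))(conv3), conv2], axis=1)</code>. Note that the traceback also reports a genuine shape mismatch: <code>(None, 2, 24, 128)</code> and <code>(None, 1, 24, 64)</code> differ on axes other than the concat axis, which no API change will fix. The NumPy sketch below (NumPy stands in for Keras here only to illustrate the rule) shows the same constraint:</p>

```python
import numpy as np

# The shapes from the traceback, with the batch dim set to 1 for the demo
a = np.zeros((1, 2, 24, 128))
b = np.zeros((1, 1, 24, 64))

# Concatenating on axis=1 requires every *other* axis to match;
# axis 3 differs here (128 vs 64), so this raises ValueError.
try:
    np.concatenate([a, b], axis=1)
except ValueError:
    print("cannot concat", a.shape, "with", b.shape)

# Once the non-concat axes agree, concatenation succeeds
b_ok = np.zeros((1, 1, 24, 128))
out = np.concatenate([a, b_ok], axis=1)
print(out.shape)  # (1, 3, 24, 128)
```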
|
tensorflow|keras|theano|keras-layer
| 0
|
374,201
| 43,508,767
|
pandas dataframe to nested dict
|
<p>I have a dataframe like this:</p>
<pre><code> aa phel ri_s
no
1 a 21 76
2 s 32 87
3 d 43 98
4 f 54 25
5 g 65 37
</code></pre>
<p>and I would like to create a dictionary that looks like this:</p>
<pre><code>{1: {aa: a, phel: 21, ri_s: 76}, 2: {aa: s, phel: 32, ri_s:87}...}
</code></pre>
<p>but instead, I am getting this:</p>
<pre><code>{'a': {(0, 'a'): {'phel': 21, 'ri_s': 76}}, 'd': {(2, 'd'): {'phel': 43, 'ri_s': 98}}, 'f': {(3, 'f'): {'phel': 54, 'ri_s': 25}}, 'g': {(4, 'g'): {'phel': 65, 'ri_s': 37}}, 's': {(1, 's'): {'phel': 32, 'ri_s': 87}}}
</code></pre>
<p>my current code is:</p>
<pre><code>tsv_in = tsv_in.groupby('aa')['aa','phel', 'ri_s'].apply(
lambda x: x.set_index('aa', 'phel', 'ri_s').to_dict(orient='index')).to_dict()
</code></pre>
<p>Any suggestions?</p>
|
<p>You can zip the index and the rows as dictionaries together, and run a dictionary comprehension:</p>
<pre><code>{i:row for i,row in zip(df.index, df.to_dict(orient='row'))}
# returns
{1: {'aa': 'a', 'phel': 21, 'ri_s': 76},
2: {'aa': 's', 'phel': 32, 'ri_s': 87},
3: {'aa': 'd', 'phel': 43, 'ri_s': 98},
4: {'aa': 'f', 'phel': 54, 'ri_s': 25},
5: {'aa': 'g', 'phel': 65, 'ri_s': 37}}
</code></pre>
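<p>A shorter route to the same result, sketched on a reconstruction of the question's frame: <code>to_dict(orient='index')</code> already keys each row dictionary by the index label, so the zip is not needed:</p>

```python
import pandas as pd

# Reconstruction of the first two rows of the question's frame
df = pd.DataFrame({'aa': ['a', 's'], 'phel': [21, 32], 'ri_s': [76, 87]},
                  index=pd.Index([1, 2], name='no'))

# orient='index' keys each row dict by the index label directly
result = df.to_dict(orient='index')
print(result)
# {1: {'aa': 'a', 'phel': 21, 'ri_s': 76}, 2: {'aa': 's', 'phel': 32, 'ri_s': 87}}
```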
|
python|pandas|dictionary|nested
| 2
|
374,202
| 43,855,086
|
Vectorized assignment for numpy array with repeated indices (d[i,j,i,j] = s[i,j])
|
<p>How can I set</p>
<pre><code>d[i,j,i,j] = s[i,j]
</code></pre>
<p>using "NumPy" and without for loop?</p>
<p>I've tried the follow:</p>
<pre><code>l1=range(M)
l2=range(N)
d[l1,l2,l1,l2] = s[l1,l2]
</code></pre>
|
<p>If you think about it, that would be same as creating a <code>2D</code> array of shape <code>(m*n, m*n)</code> and assigning the values from <code>s</code> into the diagonal places. To have the final output as <code>4D</code>, we just need a reshape at the end. That's basically being implemented below -</p>
<pre><code>m,n = s.shape
d = np.zeros((m*n,m*n),dtype=s.dtype)
d.ravel()[::m*n+1] = s.ravel()
d.shape = (m,n,m,n)
</code></pre>
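<p>A quick sanity check with small <code>m, n</code> that the flat-diagonal trick really implements <code>d[i,j,i,j] = s[i,j]</code>: the flat index of element <code>(i,j,i,j)</code> in an <code>(m,n,m,n)</code> array is <code>(i*n+j)*(m*n+1)</code>, which is exactly the stride-<code>m*n+1</code> diagonal:</p>

```python
import numpy as np

m, n = 3, 4
s = np.arange(m * n, dtype=float).reshape(m, n)

# Flat-diagonal assignment from the answer
d = np.zeros((m * n, m * n), dtype=s.dtype)
d.ravel()[::m * n + 1] = s.ravel()
d.shape = (m, n, m, n)

# Reference: the explicit loop version of d[i,j,i,j] = s[i,j]
expected = np.zeros((m, n, m, n))
for i in range(m):
    for j in range(n):
        expected[i, j, i, j] = s[i, j]

print(np.array_equal(d, expected))  # True
```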
<p><strong>Runtime test</strong></p>
<p>Approaches -</p>
<pre><code># @MSeifert's solution
def assign_vals_ix(s):
    d = np.zeros((m, n, m, n), dtype=s.dtype)
    l1 = range(m)
    l2 = range(n)
    d[np.ix_(l1, l2)*2] = s[np.ix_(l1, l2)]
    return d

# Proposed in this post
def assign_vals(s):
    m, n = s.shape
    d = np.zeros((m*n, m*n), dtype=s.dtype)
    d.ravel()[::m*n+1] = s.ravel()
    return d.reshape(m, n, m, n)

# Using a strides based approach
def assign_vals_strides(a):
    m, n = a.shape
    p, q = a.strides
    d = np.zeros((m, n, m, n), dtype=a.dtype)
    out_strides = (q*(n*m*n + n), (m*n + 1)*q)
    d_view = np.lib.stride_tricks.as_strided(d, (m, n), out_strides)
    d_view[:] = a
    return d
</code></pre>
<p>Timings -</p>
<pre><code>In [285]: m,n = 10,10
...: s = np.random.rand(m,n)
...: d = np.zeros((m,n,m,n))
...:
In [286]: %timeit assign_vals_ix(s)
10000 loops, best of 3: 21.3 µs per loop
In [287]: %timeit assign_vals_strides(s)
100000 loops, best of 3: 9.37 µs per loop
In [288]: %timeit assign_vals(s)
100000 loops, best of 3: 4.13 µs per loop
In [289]: m,n = 20,20
...: s = np.random.rand(m,n)
...: d = np.zeros((m,n,m,n))
In [290]: %timeit assign_vals_ix(s)
10000 loops, best of 3: 60.2 µs per loop
In [291]: %timeit assign_vals_strides(s)
10000 loops, best of 3: 41.8 µs per loop
In [292]: %timeit assign_vals(s)
10000 loops, best of 3: 35.5 µs per loop
</code></pre>
|
python|numpy|multidimensional-array|indexing
| 1
|
374,203
| 43,885,090
|
Comparing NumPy object references
|
<p>I want to understand the NumPy behavior.</p>
<p>When I try to get the reference of an inner array of a NumPy array, and then compare it to the object itself, I get as returned value <code>False</code>.</p>
<p>Here is the example:</p>
<pre><code>In [198]: x = np.array([[1,2,3], [4,5,6]])
In [201]: x0 = x[0]
In [202]: x0 is x[0]
Out[202]: False
</code></pre>
<p>While on the other hand, with Python native objects, the returned is <code>True</code>.</p>
<pre><code>In [205]: c = [[1,2,3],[1]]
In [206]: c0 = c[0]
In [207]: c0 is c[0]
Out[207]: True
</code></pre>
<p>My question, is that the intended behavior of NumPy? If so, what should I do if I want to create a reference of inner objects of NumPy arrays.</p>
|
<h2>2d slicing</h2>
<p>When I first wrote this I constructed and indexed a 1d array. But the OP is working with a 2d array, so <code>x[0]</code> is a 'row', a slice of the original.</p>
<pre><code>In [81]: arr = np.array([[1,2,3], [4,5,6]])
In [82]: arr.__array_interface__['data']
Out[82]: (181595128, False)
In [83]: x0 = arr[0,:]
In [84]: x0.__array_interface__['data']
Out[84]: (181595128, False) # same databuffer pointer
In [85]: id(x0)
Out[85]: 2886887088
In [86]: x1 = arr[0,:] # another slice, different id
In [87]: x1.__array_interface__['data']
Out[87]: (181595128, False)
In [88]: id(x1)
Out[88]: 2886888888
</code></pre>
<p>What I wrote earlier about slices still applies. Indexing an individual element, as with <code>arr[0,0]</code>, works the same as with a 1d array.</p>
<p>This 2d arr has the same databuffer as the 1d <code>arr.ravel()</code>; the shape and strides are different. And the distinction between <code>view</code>, <code>copy</code> and <code>item</code> still applies.</p>
<p>A common way of implementing 2d arrays in C is to have an array of pointers to other arrays. <code>numpy</code> takes a different, <code>strided</code> approach, with just one flat array of data, and uses <code>shape</code> and <code>strides</code> parameters to implement the traversal. So a subarray requires its own <code>shape</code> and <code>strides</code> as well as a pointer to the shared databuffer.</p>
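<p>This view relationship can be checked directly via <code>ndarray.base</code>, which points at the array owning the shared buffer (a small sketch):</p>

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]])
x0 = arr[0]   # slicing a row creates a *new* view object...
x1 = arr[0]   # ...every time, so `is` compares distinct wrappers
print(x0 is x1)        # False
print(x0.base is arr)  # True: both views share arr's buffer

x0[0] = 99             # writing through the view mutates arr
print(arr[0, 0])       # 99
```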
<h2>1d array indexing</h2>
<p>I'll try to illustrate what is going on when you index an array:</p>
<pre><code>In [51]: arr = np.arange(4)
</code></pre>
<p>The array is an object with various attributes such as shape, and a data buffer. The buffer stores the data as bytes (in a C array), not as Python numeric objects. You can see information on the array with:</p>
<pre><code>In [52]: np.info(arr)
class: ndarray
shape: (4,)
strides: (4,)
itemsize: 4
aligned: True
contiguous: True
fortran: True
data pointer: 0xa84f8d8
byteorder: little
byteswap: False
type: int32
</code></pre>
<p>or</p>
<pre><code>In [53]: arr.__array_interface__
Out[53]:
{'data': (176486616, False),
'descr': [('', '<i4')],
'shape': (4,),
'strides': None,
'typestr': '<i4',
'version': 3}
</code></pre>
<p>One has the data pointer in hex, the other decimal. We usually don't reference it directly.</p>
<p>If I index an element, I get a new object:</p>
<pre><code>In [54]: x1 = arr[1]
In [55]: type(x1)
Out[55]: numpy.int32
In [56]: x1.__array_interface__
Out[56]:
{'__ref': array(1),
'data': (181158400, False),
....}
In [57]: id(x1)
Out[57]: 2946170352
</code></pre>
<p>It has some properties of an array, but not all. For example you can't assign to it. Notice also that its <code>data</code> value is totally different.</p>
<p>Make another selection from the same place - different id and different data:</p>
<pre><code>In [58]: x2 = arr[1]
In [59]: id(x2)
Out[59]: 2946170336
In [60]: x2.__array_interface__['data']
Out[60]: (181143288, False)
</code></pre>
<p>Also if I change the array at this point, it does not affect the earlier selections:</p>
<pre><code>In [61]: arr[1] = 10
In [62]: arr
Out[62]: array([ 0, 10, 2, 3])
In [63]: x1
Out[63]: 1
</code></pre>
<p><code>x1</code> and <code>x2</code> don't have the same <code>id</code>, and thus won't match with <code>is</code>, and they don't use the <code>arr</code> data buffer either. There's no record that either variable was derived from <code>arr</code>.</p>
<p>With <code>slicing</code> it is possible get a <code>view</code> of the original array,</p>
<pre><code>In [64]: y = arr[1:2]
In [65]: y.__array_interface__
Out[65]:
{'data': (176486620, False),
'descr': [('', '<i4')],
'shape': (1,),
....}
In [66]: y
Out[66]: array([10])
In [67]: y[0]=4
In [68]: arr
Out[68]: array([0, 4, 2, 3])
In [69]: x1
Out[69]: 1
</code></pre>
<p>Its data pointer is 4 bytes past <code>arr</code>'s - that is, it points into the same buffer, just at a different spot. And changing <code>y</code> does change <code>arr</code> (but not the independent <code>x1</code>).</p>
<p>I could even make a 0d view of this item</p>
<pre><code>In [71]: z = y.reshape(())
In [72]: z
Out[72]: array(4)
In [73]: z[...]=0
In [74]: arr
Out[74]: array([0, 0, 2, 3])
</code></pre>
<p>In Python code we normally don't work with objects like this. When we use the <code>c-api</code> or <code>cython</code> it is possible to access the data buffer directly. <code>nditer</code> is an iteration mechanism that works with 0d objects like this (either in Python or the c-api). In <code>cython</code>, <code>typed memoryviews</code> are particularly useful for low-level access.</p>
<p><a href="http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html" rel="nofollow noreferrer">http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/arrays.nditer.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/arrays.nditer.html</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/c-api.iterator.html#c.NpyIter" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/c-api.iterator.html#c.NpyIter</a></p>
<h2>elementwise ==</h2>
<p>In response to comment, <a href="https://stackoverflow.com/questions/43885090/comparing-numpy-object-references/43898587#comment74809004_43885090">Comparing NumPy object references</a></p>
<blockquote>
<p>np.array([1]) == np.array([2]) will return array([False], dtype=bool)</p>
</blockquote>
<p><code>==</code> is defined for arrays as an elementwise operation. It compares the values of the respective elements and returns a matching boolean array.</p>
<p>If such a comparison needs to be used in a scalar context (such as an <code>if</code>) it needs to be reduced to a single value, as with <code>np.all</code> or <code>np.any</code>.</p>
<p>The <code>is</code> test compares object id's (not just for numpy objects). It has limited value in practical coding. I used it most often in expressions like <code>is None</code>, where <code>None</code> is an object with a unique id, and which does not play nicely with equality tests.</p>
|
python|arrays|numpy|identity
| 4
|
374,204
| 43,583,869
|
Transform from 2-level MultiIndex to 3-level MultiIndex
|
<p>I have something with the following data structure:</p>
<pre><code> foo year
par chi
10.0 900 0.024096 1983
901 0.200000 1983
902 0.300000 1983
900 0.027473 1984
901 0.023256 1984
902 0.400000 1984
900 0.018182 1985
</code></pre>
<p>That is, for each parent-child-year combination I have some observation of <code>foo</code>. Now, for each parent, I would like to compute the Covariance between each <code>chi</code> and each other <code>chi</code> (in this data set, 900 and 901), over time (that is, how do entries of <code>foo</code> in <code>chi_1</code> and <code>chi_2</code> covary over time, for a given <code>par</code>?).</p>
<p>I suppose the "easiest" way is to introduce <code>chi</code> a second time as a third-level index into the dataset, but all I got was:</p>
<pre><code>index = pd.MultiIndex.from_product([par, chi, chi])
</code></pre>
<p>where <code>par</code>, <code>chi</code> are the unique values of the index. However, I couldn't find a way of reindexing my data with that in a way that is useful to the exercise. How would I proceed with this?</p>
|
<p>Solution plan:</p>
<ul>
<li>start with a dataframe with four columns (reset index if necessary)</li>
<li>for each <code>par</code> group apply a function that calculates child covariances</li>
<li>in the function unstack group so that its index is <code>year</code> and values of <code>foo</code> for each child are in separate columns</li>
<li>compute covariances and melt the result so that you get a row per <code>chi</code> and <code>chi_other</code> combination.</li>
</ul>
<p>Example:</p>
<pre><code>df = pd.DataFrame({'chi': [900, 901, 902, 900, 901, 902, 900],
'foo': [0.024096, 0.2, 0.3, 0.027473, 0.023256, 0.4, 0.018182],
'par': [10, 10, 10, 10, 10, 10, 10],
'year': [1983, 1983, 1983, 1984, 1984, 1984, 1985]})
def child_covariances(group):
    # wide frame: index = year, one column of foo per child
    x = group.set_index(['year', 'chi'])['foo'].unstack()
    # covariance matrix -> long format, keeping each pair once
    x = pd.melt(x.cov().reset_index(), id_vars=['chi'],
                var_name='chi_other', value_name='foo_cov')\
        .set_index(['chi', 'chi_other'])\
        .query('chi <= chi_other').sort_index()
    return x

res = df.groupby('par').apply(child_covariances)
# foo_cov
# par chi chi_other
# 10 900 900 0.000022
# 901 -0.000298
# 902 0.000169
# 901 901 0.015619
# 902 -0.008837
# 902 902 0.005000
</code></pre>
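<p>One detail worth noting, sketched on a toy frame: <code>DataFrame.cov</code> computes <em>pairwise</em> covariances, dropping rows where either column is NaN, which is what keeps the unstacked year-by-child frame usable when some children are missing years:</p>

```python
import numpy as np
import pandas as pd

x = pd.DataFrame({'a': [1.0, 2.0, 3.0],
                  'b': [2.0, 4.0, np.nan]})  # 'b' missing its last observation
c = x.cov()
# cov(a, b) is computed over the two rows where both are observed
print(c.loc['a', 'b'])  # 1.0
```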
|
python|pandas
| 2
|
374,205
| 43,839,112
|
Format the color of a cell in a pandas dataframe according to multiple conditions
|
<p>I am trying to format the color of a cell of a specific column in a data frame, but I can't manage to do it according to multiple conditions.</p>
<p>This is my dataframe (df):</p>
<pre><code> Name ID Cel Date
0 Diego b000000005 7878 2565-05-31 20:53:00
1 Luis b000000015 6464 2017-05-11 20:53:00
2 Vidal b000000002 1100 2017-05-08 20:53:00
3 John b000000011 4545 2017-06-06 20:53:00
4 Yusef b000000013 1717 2017-06-06 20:53:00
</code></pre>
<p>I want the values in the "Date" column to change color according to the following conditions:</p>
<pre><code>if date < datetime.now():
    color = 'green'
elif date > datetime.now():
    color = 'yellow'
elif date > (datetime.now() + timedelta(days=60)):
    color = 'red'
</code></pre>
<p>This is my current code:</p>
<pre><code>def color(val):
    if val < datetime.now():
        color = 'green'
    elif val > datetime.now():
        color = 'yellow'
    elif val > (datetime.now() + timedelta(days=60)):
        color = 'red'
    return 'background-color: %s' % color

df.style.apply(color, subset = ['Fecha'])
</code></pre>
<p>I am getting the following error:</p>
<blockquote>
<p>ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', 'occurred at index Fecha')</p>
</blockquote>
<p>The output is:</p>
<pre><code>Out[65]: <pandas.formats.style.Styler at 0x1e3ab8dec50>
</code></pre>
<p>Any help will be appreciated.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/style.html" rel="noreferrer"><code>applymap</code></a>:</p>
<pre><code>from datetime import datetime, timedelta
import pandas as pd
name = ['Diego', 'Luis', 'Vidal', 'John', 'Yusef']
id = ['b000000005', 'b000000015', 'b000000002', 'b000000011', 'b000000013']
cel = [7878, 6464, 1100, 4545, 1717]
date = pd.to_datetime(['2017-05-31 20:53:00', '2017-05-11 20:53:00', '2017-05-08 20:53:00',
'2017-06-06 20:53:00', '2017-06-06 20:53:00'])
df = pd.DataFrame({'Name':name,'ID':id,'Cel':cel,'Date':date})
def color(val):
    # Check the farthest-future condition first; otherwise the 'red'
    # branch is unreachable (a date > now + 60 days is also > now and
    # would be caught by the 'yellow' test)
    if val > datetime.now() + timedelta(days=60):
        color = 'red'
    elif val > datetime.now():
        color = 'yellow'
    else:
        color = 'green'
    return 'background-color: %s' % color
df.style.applymap(color, subset=['Date'])
</code></pre>
<p><a href="https://i.stack.imgur.com/vY9qN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vY9qN.png" alt="pandas styler output"></a></p>
<p>Screenshot from Jupyter notebook. If you <code>print</code> the output instead, you'll just get a reference to the <code>Styler</code> object:</p>
<pre><code>print(df.style.applymap(color, subset=['Date']))
<pandas.formats.style.Styler object at 0x116db43d0>
</code></pre>
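<p>For larger frames, the same three-way date classification can be vectorized with <code>numpy.select</code> (a sketch; the sample dates are made up). The conditions are evaluated in order, so the strictest one, more than 60 days out, has to come first:</p>

```python
import numpy as np
import pandas as pd
from datetime import timedelta

now = pd.Timestamp.now()
dates = pd.DatetimeIndex([pd.Timestamp('2017-05-08'),  # in the past
                          now + timedelta(days=90),    # far future
                          now + timedelta(days=1)])    # near future

colors = np.select(
    [dates > now + timedelta(days=60),  # > 60 days out -> red
     dates > now],                      # otherwise future -> yellow
    ['red', 'yellow'],
    default='green')                    # past -> green
print(colors.tolist())  # ['green', 'red', 'yellow']
```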
|
python|pandas|formatting
| 17
|
374,206
| 43,892,150
|
Tensorflow on GPU
|
<p>I've been able to work with TensorFlow on CPU, Now I need to run it on a GPU device with the following specs: </p>
<p><strong>CPU: Intel Xeon(E5-2670) and win7 64bit and NVIDIA GeForce GTX 980 Ti</strong></p>
<p>I've installed Python 3.5 and TensorFlow for GPU just as described on the TF homepage. Here is what I get when I try to <code>import tensorflow</code>:</p>
<pre><code>C:\Users\Engine>python
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AM
D64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_h
elper
return importlib.import_module(mname)
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\importlib
\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 914, in create_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_h
elper
return importlib.import_module('_pywrap_tensorflow_internal')
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\importlib
\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import *
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\__init__.py", line 51, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow.py", line 52, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_h
elper
return importlib.import_module(mname)
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\importlib
\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 914, in create_module
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\site-pack
ages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_h
elper
return importlib.import_module('_pywrap_tensorflow_internal')
File "C:\Users\Engine\AppData\Local\Programs\Python\Python35\lib\importlib
\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: No module named '_pywrap_tensorflow_internal'
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_probl
ems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>Does anyone have a hint how this can be solved?</p>
|
<ol>
<li>cuDNN might be one of the reasons: cuDNN v6.0 does not work for many setups. Try cuDNN v5.1 instead.</li>
<li>Importing TensorFlow from inside the cloned Git source folder is another common cause.</li>
</ol>
<p>Good to know option 1 worked for you.</p>
|
python|tensorflow|gpu
| 1
|
374,207
| 43,898,414
|
find numeric column names in Pandas
|
<p>I need to select the columns in pandas whose names are numeric, for example:</p>
<pre><code>df=
0 1 2 3 4 window_label next_states ids
0 17.0 18.0 16.0 15.0 15.0 ddddd d 13.0
1 18.0 16.0 15.0 15.0 16.0 ddddd d 13.0
2 16.0 15.0 15.0 16.0 15.0 ddddd d 13.0
3 15.0 15.0 16.0 15.0 17.0 ddddd d 13.0
4 15.0 16.0 15.0 17.0 NaN ddddd d 13.0
</code></pre>
<p>so I need to select only the first five columns. Something like:</p>
<pre><code>df[df.columns.isnumeric()]
</code></pre>
<p><strong>EDIT</strong></p>
<p>I came up with the solution:</p>
<pre><code>digit_column_names = [num for num in list(df.columns) if isinstance(num, (int,float))]
df_new = df[digit_column_names]
</code></pre>
<p>not very pythonic or pandasian, but it works.</p>
|
<p>Try</p>
<pre><code>df.ids = df.ids.astype('object')
new_df = df.select_dtypes([np.number])
0 1 2 3 4
0 17.0 18.0 16.0 15.0 15.0
1 18.0 16.0 15.0 15.0 16.0
2 16.0 15.0 15.0 16.0 15.0
3 15.0 15.0 16.0 15.0 17.0
4 15.0 16.0 15.0 17.0 NaN
</code></pre>
<p>EDIT:
If you are interested in selecting column names that are numeric, here is something that you can do.</p>
<pre><code>df = pd.DataFrame({0: [1,2], '1': [3,4], 'blah': [5,6], 2: [7,8]})
df.columns = pd.to_numeric(df.columns, errors = 'coerce')
df[df.columns.dropna()]
</code></pre>
<p>You get</p>
<pre><code> 0.0 1.0 2.0
0 1 3 7
1 2 4 8
</code></pre>
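<p>If the goal is to keep the original column labels rather than coercing them to floats (which <code>pd.to_numeric</code> does, turning <code>0</code> into <code>0.0</code>), a comprehension over the labels also works; a sketch with mixed int and string labels:</p>

```python
import pandas as pd

df = pd.DataFrame({0: [1, 2], '1': [3, 4], 'blah': [5, 6], 2: [7, 8]})

# Keep labels that are numbers, or strings made of digits, unchanged
num_cols = [c for c in df.columns
            if isinstance(c, (int, float)) or str(c).isdigit()]
print(num_cols)            # [0, '1', 2]
print(df[num_cols].shape)  # (2, 3)
```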
|
python|pandas|dataframe
| 10
|
374,208
| 2,298,390
|
Fitting a line in 3D
|
<p>Are there any algorithms that will return the equation of a straight line from a set of 3D data points? I can find plenty of sources which will give the equation of a line from 2D data sets, but none in 3D.</p>
<p>Thanks.</p>
|
<p>If you are trying to predict one value from the other two, then you should use <code>lstsq</code> with the <code>a</code> argument as your independent variables (plus a column of 1's to estimate an intercept) and <code>b</code> as your dependent variable. </p>
<p>If, on the other hand, you just want to get the best fitting line to the data, i.e. the line which, if you projected the data onto it, would minimize the squared distance between the real point and its projection, then what you want is the first principal component. </p>
<p>One way to define it is the line whose direction vector is the eigenvector of the covariance matrix corresponding to the largest eigenvalue, that passes through the mean of your data. That said, <code>eig(cov(data))</code> is a really bad way to calculate it, since it does a lot of needless computation and copying and is potentially less accurate than using <code>svd</code>. See below:</p>
<pre><code>import numpy as np
# Generate some data that lies along a line
x = np.mgrid[-2:5:120j]
y = np.mgrid[1:9:120j]
z = np.mgrid[-5:3:120j]
data = np.concatenate((x[:, np.newaxis],
y[:, np.newaxis],
z[:, np.newaxis]),
axis=1)
# Perturb with some Gaussian noise
data += np.random.normal(size=data.shape) * 0.4
# Calculate the mean of the points, i.e. the 'center' of the cloud
datamean = data.mean(axis=0)
# Do an SVD on the mean-centered data.
uu, dd, vv = np.linalg.svd(data - datamean)
# Now vv[0] contains the first principal component, i.e. the direction
# vector of the 'best fit' line in the least squares sense.
# Now generate some points along this best fit line, for plotting.
# I use -7, 7 since the spread of the data is roughly 14
# and we want it to have mean 0 (like the points we did
# the svd on). Also, it's a straight line, so we only need 2 points.
linepts = vv[0] * np.mgrid[-7:7:2j][:, np.newaxis]
# shift by the mean to get the line in the right place
linepts += datamean
# Verify that everything looks right.
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d as m3d
ax = m3d.Axes3D(plt.figure())
ax.scatter3D(*data.T)
ax.plot3D(*linepts.T)
plt.show()
</code></pre>
<p>Here's what it looks like: <img src="https://i.imgur.com/ukDs0l.png" alt="a 3d plot of a fitted line"></p>
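<p>To convince yourself that <code>vv[0]</code> recovers the line direction, here is a check on noiseless synthetic data (a sketch; the fitted vector is only defined up to sign):</p>

```python
import numpy as np

t = np.linspace(0, 1, 50)
true_dir = np.array([1.0, 2.0, -1.0])
true_dir /= np.linalg.norm(true_dir)

# Points lying exactly on a line through (5, -3, 2)
data = np.outer(t, true_dir) + np.array([5.0, -3.0, 2.0])

datamean = data.mean(axis=0)
_, _, vv = np.linalg.svd(data - datamean)

# |cos(angle)| between fitted and true direction should be ~1
print(abs(vv[0] @ true_dir))
```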
|
python|numpy|linear-algebra|curve-fitting
| 59
|
374,209
| 73,090,197
|
How to create pandas columns and fill with values according to values in another column
|
<p>I have this dataframe:</p>
<pre><code>text sentiment
asdasda positive
fsdfsdfs negative
sdfsdfs neutral
dfsdsd mixed
</code></pre>
<p><strong>and I want this outupu:</strong></p>
<pre><code>text positive negative neutral mixed
asdasda 1 0 0 0
fsdfsdfs 0 1 0 0
sdfsdfs 0 0 1 0
dfsdsd 0 0 0 1
</code></pre>
<p>How can I do it?</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>pandas.get_dummies</code></a>, but first you need to set the column <code>text</code> as the index; afterwards, rename the columns from <code>sentiment_positive</code> to <code>positive</code>, <code>sentiment_negative</code> to <code>negative</code>, and so on.</p>
<pre><code>import pandas as pd
# df <- your_df
res = pd.get_dummies(df.set_index('text'))
# rename sentiment_positive -> positive, sentiment_negative -> negative, ...
res = res.rename(columns=lambda x: x.split('_')[1])
print(res)
</code></pre>
<hr />
<pre class="lang-none prettyprint-override"><code> mixed negative neutral positive
text
asdasda 0 0 0 1
fsdfsdfs 0 1 0 0
sdfsdfs 0 0 1 0
dfsdsd 1 0 0 0
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
374,210
| 73,119,443
|
compare sums of individuals columns and return name of max and min column pandas
|
<p>I have a df something like this</p>
<pre><code> date mon tue wed thu fri sat sun
01-01-2022 2 3 5 7 8 1 0
02-01-2022 3 4 7 6 3 0 4
03-01-2022 4 8 7 9 1 2 5
04-01-2022 5 2 1 1 8 1 2
05-01-2022 6 1 9 3 7 1 1
</code></pre>
<p>my task is to find the sum of each column, compare them, and return the names of the columns with the highest and lowest sums. In this case, the wed column has the highest sum (29) and sat has the lowest (5). So my expected output is printing this information:</p>
<pre><code> max number is seen on wed and min number is seen on sat.
</code></pre>
<p>can someone please help me with an efficient way of doing this? Much appreciated</p>
|
<p>You can use <code>set_index</code>, <code>sum</code> and <code>agg</code>:</p>
<pre><code>df.set_index('date').sum().agg(['idxmin', 'idxmax'])
</code></pre>
<p>output:</p>
<pre><code>idxmin sat
idxmax wed
dtype: object
</code></pre>
<p>As a string:</p>
<pre><code>s = df.set_index('date').sum().agg(['idxmin', 'idxmax'])
print(f"max number is seen on {s['idxmax']} and min number is seen on {s['idxmin']}.")
</code></pre>
<p>output:</p>
<pre><code>max number is seen on wed and min number is seen on sat.
</code></pre>
|
pandas|max|multiple-columns
| 2
|
374,211
| 73,116,292
|
Pandas group by and row count by category
|
<p>I have a pandas df as follows:</p>
<pre><code>User Amount Type
100 10 Check
100 20 Cash
100 30 Paypal
200 50 Venmo
200 50 Cash
200 50 Check
300 20 Zelle
300 15 Zelle
300 15 Zelle
</code></pre>
<p>I want to organize it such that my end result is as follows:</p>
<pre><code>User Cash Check Paypal Venmo Zelle
100 1 1 1
200 1 1 1
300 3
</code></pre>
<p>I am looking to count the number of times a user has transacted through each unique method.
If a user didn't transact, I want to either leave it blank or set it to 0.
How can I do this? I tried a <code>pd.groupby()</code> but am not sure of the next step...
Thanks!</p>
|
<p>You are looking for <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a>:</p>
<pre><code>pd.crosstab(df['User'], df['Type']).reset_index().rename_axis('',axis=1)
</code></pre>
<p>output:</p>
<pre><code> User Cash Check Paypal Venmo Zelle
0 100 1 1 1 0 0
1 200 1 1 0 1 0
2 300 0 0 0 0 3
</code></pre>
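<p>An equivalent spelling with <code>groupby</code>, in case you want to stay closer to the <code>pd.groupby()</code> attempt from the question (a sketch on a reconstruction of the data):</p>

```python
import pandas as pd

df = pd.DataFrame({'User': [100, 100, 100, 200, 300, 300],
                   'Type': ['Check', 'Cash', 'Paypal', 'Cash', 'Zelle', 'Zelle'],
                   'Amount': [10, 20, 30, 50, 20, 15]})

# Count rows per (User, Type), then spread Type out into columns
out = df.groupby(['User', 'Type']).size().unstack(fill_value=0)
print(out.loc[300, 'Zelle'])  # 2
print(out.loc[200, 'Check'])  # 0
```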
|
pandas
| 1
|
374,212
| 72,860,430
|
How can I group by in python and create columns with information of a column if another column has a specific value?
|
<p>I have a data frame with "Team", "HA" (home away), "attack", "defense"</p>
<p><a href="https://i.stack.imgur.com/ceuEM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ceuEM.png" alt="input" /></a></p>
<p>And what I need to have is a table, grouped by Team with 4 columns like this</p>
<p><a href="https://i.stack.imgur.com/prlmD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/prlmD.png" alt="output" /></a></p>
<p>I guess it could be done with an aggregate function, but I don't really know how</p>
<pre><code>df_ad=df_calc.groupby(['Team','Liga']).agg...
</code></pre>
<p>The equivalent in SQL would be</p>
<pre><code>SELECT Team, CASE
WHEN HA='Home' THEN attack
END AS Home_attack, CASE
WHEN HA='Home' THEN defense
END AS Home_defense, CASE
WHEN HA='Away' THEN attack
END AS Away_attack, CASE
WHEN HA='Away' THEN defense
END AS Away_defense
FROM df_calc;
</code></pre>
|
<p>Just pivot your dataframe:</p>
<pre><code>out = df.pivot('Team', 'HA', ['attack', 'defense'])
out.columns = out.columns.swaplevel().to_flat_index().map(' '.join)
out = out.reset_index()
print(out)
# Output
Team Away attack Home attack Away defense Home defense
0 A. San Luis 1 3 2 4
1 AC Milan 5 7 6 8
2 AS Roma 9 11 10 12
</code></pre>
<p>For the first part, you can read: <a href="https://stackoverflow.com/questions/47152691/how-can-i-pivot-a-dataframe">How can I pivot a dataframe?</a></p>
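<p>A minimal, self-contained version of the above (the sample values are made up for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Team': ['A. San Luis', 'A. San Luis', 'AC Milan', 'AC Milan'],
    'HA': ['Home', 'Away', 'Home', 'Away'],
    'attack': [3, 1, 7, 5],
    'defense': [4, 2, 8, 6],
})

# Spread Home/Away into columns for both measures, then flatten the header
out = df.pivot(index='Team', columns='HA', values=['attack', 'defense'])
out.columns = out.columns.swaplevel().to_flat_index().map(' '.join)
out = out.reset_index()
print(out)
```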
|
python|sql|pandas
| 0
|
374,213
| 72,935,266
|
Detecting anomalies among several thousand users
|
<p>I have this issue where I record a daily entry for all users in my system (several thousands, even 100.000+). These entries have 3 main features, "date", "file_count", "user_id".</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>file_count</th>
<th>user_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-09-28</td>
<td>200</td>
<td>5</td>
</tr>
<tr>
<td>2021-09-28</td>
<td>10</td>
<td>7</td>
</tr>
<tr>
<td>2021-09-29</td>
<td>210</td>
<td>5</td>
</tr>
<tr>
<td>2021-09-29</td>
<td>50</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>Where I am in doubt is how to run an anomaly detection algorithm efficiently on all these users.
My goal is to be able to report whether a user has some abnormal behavior each day.</p>
<p>In this example, user 7 should be flagged as an anomaly because the file_count suddenly is x5 higher than "normal".</p>
<p>My idea was firstly to create a model for each user but since there are so many users this might not be feasible.</p>
<p>Could you help explain me how to do this in an efficient manner if you know an algorithm that could solve this problem?</p>
<p>Any help is greatly appreciated!</p>
|
<p>Many articles on anomaly detection in audit data can be found on the Internet.
One simple article with many examples/approaches is available in its original (Czech) language here: <a href="https://blog.root.cz/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/" rel="nofollow noreferrer">https://blog.root.cz/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/</a> or translated using google technology: <a href="https://blog-root-cz.translate.goog/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=sk&_x_tr_pto=wapp" rel="nofollow noreferrer">https://blog-root-cz.translate.goog/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=sk&_x_tr_pto=wapp</a></p>
<p>PS: Clustering (Clustering Based Unsupervised Approach) can be way to go, when searching for simple algorithm.</p>
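<p>As a concrete starting point (my own sketch, not taken from the linked article), you can compute a per-user z-score of <code>file_count</code> and flag days that deviate strongly from that user's own history; no per-user model is needed, just one <code>groupby</code>. The threshold of 1.5 is an illustrative choice:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'user_id':    [5, 5, 5, 5, 5, 7, 7, 7, 7, 7],
    'file_count': [200, 210, 205, 195, 202, 10, 12, 11, 9, 50],
})

# z-score of each observation relative to that user's own mean/std
g = df.groupby('user_id')['file_count']
z = (df['file_count'] - g.transform('mean')) / g.transform('std')
df['anomaly'] = z.abs() > 1.5   # illustrative threshold

print(df[df['anomaly']])   # user 7's jump to 50 is flagged
```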
|
tensorflow|deep-learning|time-series|outliers|anomaly-detection
| 1
|
374,214
| 73,070,654
|
How to categorize one column value based on another column value
|
<p>I have a dataframe with 2 columns like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ColA</th>
<th>COLB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABC</td>
<td>Null</td>
</tr>
<tr>
<td>Null</td>
<td>a</td>
</tr>
<tr>
<td>Null</td>
<td>b</td>
</tr>
<tr>
<td>DEF</td>
<td>Null</td>
</tr>
<tr>
<td>Null</td>
<td>c</td>
</tr>
<tr>
<td>Null</td>
<td>d</td>
</tr>
<tr>
<td>Null</td>
<td>e</td>
</tr>
<tr>
<td>GHI</td>
<td>Null</td>
</tr>
<tr>
<td>IJK</td>
<td>f</td>
</tr>
</tbody>
</table>
</div>
<p>I want to categories the “COLB” based on the “COLA” so that the final output look like :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ColA</th>
<th>COLB</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABC</td>
<td>a,b</td>
</tr>
<tr>
<td>DEF</td>
<td>c,d,e</td>
</tr>
<tr>
<td>GHI</td>
<td>Empty</td>
</tr>
<tr>
<td>IJK</td>
<td>f</td>
</tr>
</tbody>
</table>
</div>
<p>How can I do this using pandas ?</p>
|
<p>Let's start by creating the DataFrame:</p>
<pre><code>df1 = pd.DataFrame({'ColA':['ABC',np.NaN,np.NaN,'DEF',np.NaN,np.NaN,np.NaN,'GHI','IJK'],'ColB':[np.NaN,'a','b',np.NaN,'c','d','e',np.NaN,'f']})
</code></pre>
<p>Next we fill all NaN values with the previous occurrence:</p>
<pre><code>df1.ColA.fillna(method='ffill',inplace=True)
</code></pre>
<p>Then we identify the groups whose ColB is entirely empty:</p>
<pre><code>t1 = df1.groupby('ColA').count()
fill_list = t1[t1['ColB'] == 0].index
df1.loc[df1.ColA.isin(fill_list),'ColB'] = 'Empty'
</code></pre>
<p>Finally group by and join colB:</p>
<pre><code>df1 = df1.dropna()
df1.groupby('ColA').apply(lambda x: ','.join(x.ColB))
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/L28Hd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L28Hd.png" alt="enter image description here" /></a></p>
|
python-3.x|pandas
| 1
|
374,215
| 72,899,364
|
Output the confidence / probability for a class of a CNN neural network
|
<p>I have a problem. I want to get the confidence / probability for my prediction. How could I get it? I looked at <a href="https://stackoverflow.com/questions/38133707/how-can-i-implement-confidence-level-in-a-cnn-with-tensorflow">How can I implement confidence level in a CNN with tensorflow?</a>, but I do not understand how I could get the probability of the prediction.</p>
<pre class="lang-py prettyprint-override"><code>class CNN_Text:
def __init__(self, x, y):
self.x =x
self.y = y
def forward(self):
filter_sizes = [1,2,3,5]
num_filters = 32
inp = Input(shape=(maxlen, ))
x = Embedding(embedding_matrix.shape[0], 300, weights=[embedding_matrix], trainable=False)(inp)
x = SpatialDropout1D(0.4)(x)
x = Reshape((maxlen, embed_size, 1))(x)
conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embed_size), kernel_initializer='normal',
activation='elu')(x)
conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embed_size), kernel_initializer='normal',
activation='elu')(x)
conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embed_size), kernel_initializer='normal',
activation='elu')(x)
conv_3 = Conv2D(num_filters, kernel_size=(filter_sizes[3], embed_size), kernel_initializer='normal',
activation='elu')(x)
maxpool_0 = MaxPool2D(pool_size=(maxlen - filter_sizes[0] + 1, 1))(conv_0)
maxpool_1 = MaxPool2D(pool_size=(maxlen - filter_sizes[1] + 1, 1))(conv_1)
maxpool_2 = MaxPool2D(pool_size=(maxlen - filter_sizes[2] + 1, 1))(conv_2)
maxpool_3 = MaxPool2D(pool_size=(maxlen - filter_sizes[3] + 1, 1))(conv_3)
z = Concatenate(axis=1)([maxpool_0
, maxpool_1
, maxpool_2
, maxpool_3
])
# z = Dropout(0.3)(z)
z = Flatten()(z)
z = Dropout(0.3)(z)
outp = Dense(53, activation="softmax")(z)
model = Model(inputs=inp, outputs=outp)
model.summary()
return model
p1 = CNN_Text(...)
model = p1.forward()
model.compile(...)
history = model.fit(...)
pred = model.predict(...)
</code></pre>
<p>How I predict a class</p>
<pre class="lang-py prettyprint-override"><code>x = x.lower()
x = remove_URL(x)
x = remove_punct(x)
x = remove_stopwords(x)
x = tokenizer.texts_to_sequences([x])
x = pad_sequences(x, maxlen=maxlen)
pred = model.predict(x)
pred = pred.argmax(axis=1)
pred = le.classes_[pred]
return pred[0]
</code></pre>
|
<p>The Softmax activation function normalises the output of the network, giving you the predicted probability of each of the 53 classes for a given sample.</p>
<pre><code>pred = pred.argmax(axis=1)
</code></pre>
<p>This line gives you the index of the node with the highest predicted probability.</p>
<pre><code>pred = pred.max(axis=1)
</code></pre>
<p>This will give you the corresponding probability instead (as long as you don't overwrite <code>pred</code> with the argmax result first).</p>
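<p>For example, using a toy array in place of real model output (softmax rows just need to sum to 1):</p>

```python
import numpy as np

# Stand-in for pred = model.predict(x): one row per sample, one column per class
pred = np.array([[0.1, 0.7, 0.2],
                 [0.5, 0.3, 0.2]])

classes = pred.argmax(axis=1)     # index of the most likely class per sample
confidence = pred.max(axis=1)     # probability of that class

print(classes)      # [1 0]
print(confidence)   # [0.7 0.5]
```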
|
python|tensorflow|deep-learning|nlp|conv-neural-network
| 1
|
374,216
| 72,894,227
|
Why is np nan convertible to int by `astype` (but not by `int`)?
|
<p>This question comes from a finding that is very much not intuitive to me. If one tries the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
print(np.array([np.nan]).astype(int))
print(int(np.array([np.nan])))
</code></pre>
<p>then the output of the first is <code>[-9223372036854775808]</code>, and the second raises <code>ValueError: cannot convert float NaN to integer</code>. I'd expect the later behaviour, and I'd definitely not expect that one can convert <code>np.nan</code> to an int. Why is this like that? Why can one use <code>astype</code> to convert <code>np.nan</code> to int? Does it have any functionality or meaning?</p>
|
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.astype.html" rel="nofollow noreferrer"><code>.astype</code></a> has optional argument <code>casting</code> whose default value is <code>'unsafe'</code>. Following values are allowed</p>
<ul>
<li>‘no’ means the data types should not be cast at all.</li>
<li>‘equiv’ means only byte-order changes are allowed.</li>
<li>‘safe’ means only casts which can preserve values are allowed.</li>
<li>‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.</li>
<li>‘unsafe’ means any data conversions may be done.</li>
</ul>
<p>When one attempts to do</p>
<pre><code>import numpy as np
print(np.array([np.nan]).astype(int, casting="safe"))
</code></pre>
<p>one gets following error</p>
<pre><code>TypeError: Cannot cast array from dtype('float64') to dtype('int32') according to the rule 'safe'
</code></pre>
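<p>In practice, if you want meaningful integers you can drop (or replace) the NaNs before casting. A sketch; note that the sentinel value produced by the unsafe cast is platform-dependent:</p>

```python
import numpy as np

arr = np.array([np.nan, 1.5, 3.0])

unsafe = arr.astype(int)                  # NaN becomes an arbitrary sentinel int
safe = arr[~np.isnan(arr)].astype(int)    # drop NaNs first, then cast

print(safe)   # [1 3]
```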
|
python|numpy|integer|nan
| 2
|
374,217
| 72,940,115
|
Applying a Filter to a Multi-Dimensional Numpy Array, e.g. Cifar10 Data
|
<pre><code>from keras.datasets import cifar10
# load dataset
(trainX, trainy), (testX, testy) = cifar10.load_data()
# summarize loaded dataset
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
</code></pre>
<blockquote>
<p>Train: X=(50000, 32, 32, 3), y=(50000, 1)</p>
</blockquote>
<blockquote>
<p>Test: X=(10000, 32, 32, 3), y=(10000, 1)</p>
</blockquote>
<pre><code>trainMask = (trainy == 1) | (trainy == 8) | (trainy == 9)
testMask = (testy == 1) | (testy == 8) | (testy == 9)
</code></pre>
<h1>How can I filter the train and test sets based on these masks?</h1>
<p>I would like something like <code>trainX = trainX[trainMask]</code> and <code>testX = testX[testMask]</code>.</p>
<p><code>trainy = trainy[trainMask]</code> works, since the labels are effectively one-dimensional, but <code>trainX = trainX[trainMask]</code> does not.</p>
|
<p>What you're looking for is <code>np.where()</code>. Because the mask has shape <code>(N, 1)</code>, take only the row indices (index <code>[0]</code> of the tuple that <code>np.where</code> returns), otherwise you would be fancy-indexing the first two axes of the image array:</p>
<pre><code>trainX = trainX[np.where(trainMask)[0]]
testX = testX[np.where(testMask)[0]]
</code></pre>
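<p>A self-contained sketch with made-up shapes, showing why only the row indices are used (the labels, and hence the mask, have shape <code>(N, 1)</code>):</p>

```python
import numpy as np

trainX = np.random.rand(6, 32, 32, 3)                # fake images
trainy = np.array([[1], [0], [8], [3], [9], [1]])    # fake labels, shape (6, 1)

trainMask = (trainy == 1) | (trainy == 8) | (trainy == 9)
rows = np.where(trainMask)[0]    # row indices where the mask is True

trainX_f = trainX[rows]
trainy_f = trainy[rows]
print(trainX_f.shape)   # (4, 32, 32, 3)
```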
|
python|numpy
| 1
|
374,218
| 73,116,821
|
How to replace only first Nan value in Pandas DataFrame?
|
<p>I am trying to replace NaN with a list of numbers generated by a random seed. This means each NaN value needs to be replaced by a unique integer. Items in the columns are unique, but the rows just seem to be replicating themselves. Any suggestions would be welcome.</p>
<pre><code>np.random.seed(56)
rs=np.random.randint(1,100, size=total)
df=pd.DataFrame(index=np.arange(rows), columns=np.arange(columns))
for i in rs:
df=df.fillna(value=i, limit=1)
</code></pre>
|
<p>With <code>limit=1</code>, <code>df.fillna()</code> replaces at most one NaN <em>per column</em> on each call, and each call writes the same value <code>i</code> into every column. That is why your rows look replicated.</p>
<p>You can use the <code>applymap</code> function to iterate through every cell and fill each NaN with its own randomly generated value (note <code>applymap</code> returns a new frame, so assign it back):</p>
<pre><code>df = df.applymap(lambda l: l if not np.isnan(l) else np.random.randint(1, 100))
</code></pre>
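<p>A vectorized alternative (my own sketch) that fills every NaN with its own random integer in one step, using <code>DataFrame.mask</code> with a same-shaped array of candidates:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(56)
df = pd.DataFrame(np.nan, index=range(3), columns=range(4))

fill = np.random.randint(1, 100, size=df.shape)   # one candidate per cell
df = df.mask(df.isna(), fill)                     # replace NaNs cell-by-cell
print(df)
```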
|
python|pandas|dataframe|numpy
| 0
|
374,219
| 73,126,473
|
Can I visualize the content of a datasets.Dataset?
|
<p>I am using the Huggingface <code>datasets</code> library to load a dataset from a pandas dataframe.
The code is something similar to this:</p>
<pre><code>from datasets import Dataset
import pandas as pd
df = pd.DataFrame({"a": [1], "b":[1]})
dataset = Dataset.from_pandas(df)
</code></pre>
<p>Everything went smoothly, however, I wanted to double check the content of the loaded <code>Dataset</code>. I was looking for something similar to a <code>df.head()</code> like we have in Pandas, but I found nothing on the official <a href="https://huggingface.co/docs/datasets/v1.2.1/loading_datasets.html" rel="nofollow noreferrer">Huggingface documentation</a>. Is there a way to "read" even partially the content of the loaded dataset?</p>
<p>Doing a simple <code>print(dataset)</code> does not show the content, but only some high-level information:</p>
<pre><code>Dataset({
features: ['a', 'b'],
num_rows: 1
})
</code></pre>
|
<p>The answer is simpler than you think. Just do</p>
<pre><code>print(dataset[i])
</code></pre>
<p>where <code>i</code> is the number of the row (first is 0).</p>
<p>The output will be a dictionary with the features as keys and the content of the row as values.</p>
<pre><code>print(dataset[0])
<<< {'a': 1, 'b': 1}
</code></pre>
|
python|pandas|huggingface-datasets
| 1
|
374,220
| 72,849,150
|
How do I get the year to change with each tournament?
|
<p>I don't understand why I am struggling with this, but how do I get the year to change for each iteration? So when it goes through season 2020, all those tournaments and ids should say 2020, but it's only saying the last iteration ran.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import pandas as pd
seasonid = ['2021', '2020', '2019']
tournament = []
tid = []
for season in seasonid:
url = f'https://www.espn.com/golf/schedule/_/season/{season}'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
for link in soup.find_all('div', class_='eventAndLocation__innerCell'):
for link2 in link.find_all('a'):
data = link2.get('href')
ndata = data.strip('https://www.espn.com/golf/leaderboard?tournamentId=')
tid.append(ndata)
for link in soup.find_all('div', class_='eventAndLocation__innerCell'):
for link2 in link.find_all('a'):
tournamentn = link2.text
tournament.append(tournamentn)
year = season
df1 = pd.DataFrame(tournament)
df2 = pd.DataFrame(tid)
df = pd.concat([df1, df2], axis = 1)
df['year'] = year
</code></pre>
|
<p>You can create list with all information as tuples and at the end create final dataframe. For example:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
seasonid = ["2021", "2020", "2019"]
all_data = []
for season in seasonid:
url = f"https://www.espn.com/golf/schedule/_/season/{season}"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, "html.parser")
for link in soup.find_all("div", class_="eventAndLocation__innerCell"):
for link2 in link.find_all("a"):
                tid = link2.get("href").split("tournamentId=")[-1]
tournament = link2.text
all_data.append((season, tournament, tid))
df = pd.DataFrame(all_data, columns=["year", "tournament", "tid"])
print(df.head().to_markdown(index=False))
</code></pre>
<p>Prints:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">year</th>
<th style="text-align: left;">tournament</th>
<th style="text-align: right;">tid</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">2021</td>
<td style="text-align: left;">Safeway Open</td>
<td style="text-align: right;">401219793</td>
</tr>
<tr>
<td style="text-align: right;">2021</td>
<td style="text-align: left;">U.S. Open</td>
<td style="text-align: right;">401219333</td>
</tr>
<tr>
<td style="text-align: right;">2021</td>
<td style="text-align: left;">Corales Puntacana Resort & Club Championship</td>
<td style="text-align: right;">401219480</td>
</tr>
<tr>
<td style="text-align: right;">2021</td>
<td style="text-align: left;">Sanderson Farms Championship</td>
<td style="text-align: right;">401219794</td>
</tr>
<tr>
<td style="text-align: right;">2021</td>
<td style="text-align: left;">Shriners Hospitals for Children Open</td>
<td style="text-align: right;">401219795</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|dataframe|loops|beautifulsoup
| 1
|
374,221
| 73,160,855
|
How do I add data to a column only if a certain value exists in previous column using Python and Faker?
|
<p>I'm pretty new to Python and not sure what to even google for this. What I am trying to do is create a Pandas DataFrame that is filled with fake data by using Faker. The problem I am having is each column is generating fake data in a silo. I want to be able to have fake data created based on something that exists in a prior column.</p>
<p>So in my example below, I have <code>pc_type ["PC", "Apple]</code> From there I have the operating system and the options are Windows 10, Windows 11, and MacOS. Now I want only where <code>pc_type = "Apple"</code> to have the columns fill with the value of MacOS. Then for everything that is type PC, it's 50% Windows 10 and 50% Windows 11.</p>
<p>How would I write this code so that in the function body I can make that distinction clear and the results will reflect that?</p>
<pre><code>from faker import Faker
from faker.providers import BaseProvider, DynamicProvider
import numpy as np
import pandas as pd
from datetime import datetime
import random
pc_type = ['PC', 'Apple']
fake = Faker()
def create_data(x):
project_data = {}
for i in range(0, x):
project_data[i] = {}
project_data[i]['Name'] = fake.name()
project_data[i]['PC Type'] = fake.random_element(pc_type)
project_data[i]['With Windows 10'] = fake.boolean(chance_of_getting_true=25)
project_data[i]['With Windows 11 '] = fake.boolean(chance_of_getting_true=25)
project_data[i]['With MacOS'] = fake.boolean(chance_of_getting_true=50)
return project_data
df = pd.DataFrame(create_data(10)).transpose()
df
</code></pre>
|
<p>I'd slightly change the approach and generate a column <code>OS</code>. This column you can then transform into <code>With MacOS</code> etc. if needed.</p>
<p>With this approach it's easier to get the 0.5 / 0.5 split within Windows right:</p>
<pre class="lang-py prettyprint-override"><code>from faker import Faker
from faker.providers import BaseProvider, DynamicProvider
import numpy as np
import pandas as pd
from datetime import datetime
import random
from collections import OrderedDict
pc_type = ['PC', 'Apple']
wos_type = OrderedDict([('With Windows 10', 0.5), ('With Windows 11', 0.5)])
fake = Faker()
def create_data(x):
project_data = {}
for i in range(x):
project_data[i] = {}
project_data[i]['Name'] = fake.name()
project_data[i]['PC Type'] = fake.random_element(pc_type)
if project_data[i]['PC Type'] == 'PC':
project_data[i]['OS'] = fake.random_element(elements = wos_type)
else:
project_data[i]['OS'] = 'MacOS'
return project_data
df = pd.DataFrame(create_data(10)).transpose()
df
</code></pre>
<p>Output</p>
<pre class="lang-py prettyprint-override"><code> Name PC Type OS
0 Nicholas Walker Apple MacOS
1 Eric Hull PC With Windows 10
2 Veronica Gonzales PC With Windows 11
3 Mrs. Krista Richardson Apple MacOS
4 Anne Craig PC With Windows 10
5 Joseph Hayes PC With Windows 10
6 Mary Nelson Apple MacOS
7 Jill Hunt Apple MacOS
8 Mark Taylor PC With Windows 11
9 Kyle Thompson PC With Windows 10
</code></pre>
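<p>If you do need the original boolean columns afterwards, the single <code>OS</code> column can be expanded back with <code>pd.get_dummies</code> (a sketch on a small hand-made frame):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Nicholas Walker', 'Eric Hull', 'Mark Taylor'],
    'PC Type': ['Apple', 'PC', 'PC'],
    'OS': ['MacOS', 'With Windows 10', 'With Windows 11'],
})

flags = pd.get_dummies(df['OS'])              # one indicator column per OS value
out = pd.concat([df[['Name', 'PC Type']], flags], axis=1)
print(out)
```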
|
python|pandas|function|faker
| 1
|
374,222
| 73,036,681
|
Counting elements in specified column of a .csv file
|
<p>I am programming in Python.
I want to count how many times each word appears in a column. Column 4 of my .csv file contains ca. 7 different words and I need to know how many times each one appears. E.g. there are 700 lines and I need to count how many times the phrase HelloWorld appears in column 4.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>pandas.Series.value_counts()</code></a> on the column you want. Since you mentioned it's the fourth column, you can get it by index using <code>iloc</code> as well. Of course you have to install pandas as it's not from the standard library, e.g. using pip with <code>pip install pandas</code> if you haven't already.
An example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv("path/to/file.csv")
fourth_column = df.iloc[:, 3] # Gets all rows for the fourth column (index starts at 0)
counts = fourth_column.value_counts()
print(counts) # You'll see the number of times each string appears in the column
# The keys are the strings and the values are the number of times they appear
hello_world_counts = counts["HelloWorld"]
</code></pre>
|
python|pandas|dataframe
| 0
|
374,223
| 73,061,746
|
Local data that I can't save again gives "UnpicklingError: pickle data was truncated" while opening
|
<p>I have a pandas dataframe that I pickled to backup some data on a server.
Then I imported everything to my local machine using VSCode. Now the server is off and there is no way I can access the data again.</p>
<p>I pickled the data using pandas:</p>
<pre><code>import pandas as pd
congestion.to_pickle('/home/tugba/Emissions_Research/DATA2/congestion_sensitivity_short.pkl')
</code></pre>
<p>However, when I try to open I get the "UnpicklingError: pickle data was truncated" error. There is no way I can save the data in another format or generate it again. Most of the solutions that I found suggest saving it in different ways and I can't do that. Is there a way to open this file or some part of it at least?</p>
<p>I tried the below methods and they all have the same error:</p>
<pre><code>import pickle
congestion = '/Users/aysetugbaozturk/Desktop/tugba/Emissions_Research/DATA2/congestion_sensitivity.pkl'
with open(congestion, 'rb') as f: # jupyter notebook saved
corpus = pickle.load(f)
data_arr = pickle.loads(congestion)
print (data_arr)
congestion = pd.read_pickle('/Users/aysetugbaozturk/Desktop/tugba/Emissions_Research/DATA2/congestion_sensitivity.pkl')
</code></pre>
|
<p>When you use <code>pandas.to_pickle()</code> to pickle a dataframe, you should preferably use <code>pandas.read_pickle()</code> to unpickle.</p>
|
pandas|pickle
| 0
|
374,224
| 72,972,933
|
Convert a Tensorflow dataset containing inputs and labels to two NumPy arrays
|
<p>I'm using Tensorflow 2.9.1. I have a <code>test_dataset</code> object of class <code>tf.data.Dataset</code>, which stores both inputs and labels. The inputs are 4-dimensional Tensors, and the labels are 3-dimensional Tensors:</p>
<pre class="lang-py prettyprint-override"><code>print(tf.data.Dataset)
<PrefetchDataset element_spec=(TensorSpec(shape=(64, 5, 548, 1), dtype=tf.float64, name=None), TensorSpec(shape=(64, 1, 1), dtype=tf.float64, name=None))>
</code></pre>
<p>The first dimension is the minibatch size. I need to convert this Tensorflow Dataset to two NumPy arrays, <code>X_test</code> containing the inputs, and <code>y_test</code> containing the labels, <strong>ordered in the same way</strong>. In other words, <code>(X_test[0], y_test[0])</code> must correspond to the first sample from <code>test_dataset</code>. <strong>Since the first dimension of my tensors is the minibatch size, I want to concatenate the results along that first dimension.</strong></p>
<p>How can I do that? I've seen two approaches:</p>
<h3>np.concatenate</h3>
<pre><code>X_test = np.concatenate([x for x, _ in test_dataset], axis=0)
y_test = np.concatenate([y for _, y in test_dataset], axis=0)
</code></pre>
<p>But I don't like it for two reasons:</p>
<ol>
<li><p>it seems wasteful to iterate twice on the same dataset</p>
</li>
<li><p><code>X_test</code> and <code>y_test</code> are probably not ordered in the same way. If I run</p>
<pre><code>X_test = np.concatenate([x for x, _ in test_dataset], axis=0)
X_test2 = np.concatenate([x for x, _ in test_dataset], axis=0)
</code></pre>
</li>
</ol>
<p><code>X_test</code> and <code>X_test2</code> are different arrays, though of identical shape. I suspect the dataset is being shuffled after I itera through it once. <em>However</em>, this implies that also <code>X_test</code> and <code>y_test</code>, in my snippet above, won't be ordered in the same way. How can I fix that?</p>
<h3>tfds.as_numpy</h3>
<p><a href="https://www.tensorflow.org/datasets/api_docs/python/tfds/as_numpy" rel="nofollow noreferrer">tfds.as_numpy</a> can be used to convert a Tensorflow Dataset to an iterable of NumPy arrays:</p>
<pre><code>import tensorflow_datasets as tfds
np_test_dataset = tfds.as_numpy(test_dataset)
print(np_test_dataset)
<generator object _eager_dataset_iterator at 0x7fee81fd8b30>
</code></pre>
<p>However, I don't know how to proceed from here: how do I convert this iterable of NumPy arrays, to two NumPy arrays of the right shapes?</p>
|
<p>Instead of iterating over the dataset twice, you can unpack the dataset and concatenate the arrays inside the resulting tuples to get the final result.</p>
<p>The <code>zip(*ds)</code> is used to separate the dataset into two separate sequences (<code>X</code>'s and <code>y</code>'s). <code>X</code> and <code>y</code> each becomes a tuple of arrays and you then concatenate those arrays. You can read more about how <code>zip(*iterables)</code> works <a href="https://realpython.com/python-zip-function/#unzipping-a-sequence" rel="nofollow noreferrer">here</a>.</p>
<p>Here is an example with <code>mnist</code> data:</p>
<pre><code>import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
ds = tfds.load('mnist', split='train', as_supervised=True)
print(ds)
# <PrefetchDataset element_spec=(TensorSpec(shape=(28, 28, 1), dtype=tf.uint8, name=None), TensorSpec(shape=(), dtype=tf.int64, name=None))>
X, y = zip(*ds)
print(type(X), type(y))
# <class 'tuple'> <class 'tuple'>
print(len(X), len(y))
# 60000 60000
X_arr = np.concatenate(X)
print(X_arr.shape)
# (1680000, 28, 1)
</code></pre>
<p>You would do the same concatenation with your <code>y</code>'s. I am not showing it here because this dataset has different dimensionality. <code>np.concatenate</code> is used here since you want to join arrays on the first existing axis.</p>
<p>If needed, unpacking could also be done on the iterable created by the <code>tfds.as_numpy()</code> method:</p>
<pre><code>X, y = zip(*tfds.as_numpy(ds))
</code></pre>
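<p>The same unpack-then-concatenate pattern, shown framework-free with a plain list of <code>(x_batch, y_batch)</code> pairs standing in for the batched dataset, which also demonstrates that the ordering of inputs and labels stays aligned:</p>

```python
import numpy as np

# Mimic a batched dataset: 4 batches of (inputs, labels), batch size 2
ds = [(np.full((2, 3), i), np.full((2, 1), i)) for i in range(4)]

X, y = zip(*ds)                       # separate inputs and labels, same order
X_test = np.concatenate(X, axis=0)    # join along the batch axis
y_test = np.concatenate(y, axis=0)

print(X_test.shape, y_test.shape)   # (8, 3) (8, 1)
```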
|
python|arrays|numpy|tensorflow2.0|tensorflow-datasets
| 1
|
374,225
| 72,988,605
|
Pandas convert one dataframe to another
|
<p>I am having trouble with this. How can I convert the following raw data into the result below? Thanks.</p>
<p>Raw data</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Board</th>
<th>Slot No.</th>
</tr>
</thead>
<tbody>
<tr>
<td>55</td>
<td>WD22UBBPe4</td>
<td>3</td>
</tr>
<tr>
<td>14</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPF4</td>
<td>3</td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>14</td>
<td>WD22LBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>4</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>3</td>
</tr>
<tr>
<td>16</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>72</td>
<td>WD22LBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>72</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Slot 0</th>
<th>Slot 1</th>
<th>Slot 2</th>
<th>Slot 3</th>
<th>Slot 4</th>
<th>Slot 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>55</td>
<td></td>
<td></td>
<td></td>
<td>WD22UBBPe4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPD2</td>
<td>WD22LBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPF4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>QWL1WBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPD2</td>
<td>QWL1WBBPD2</td>
<td></td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>WD22LBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPD2</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
|
<p>Here is one way to do it</p>
<pre><code>df.pivot(index='Name', columns='Slot No.').add_prefix('Slot ').fillna('').reset_index()
</code></pre>
<pre><code>
Name Slot Board
Slot No. Slot 0 Slot 1 Slot 2 Slot 3 Slot 4
0 14 QWL1WBBPD2 WD22LBBPD2 WD22UBBPd6 QWL1WBBPF4
1 16 QWL1WBBPD2 QWL1WBBPD2 WD22UBBPd6 QWL1WBBPD2 QWL1WBBPD2
2 55 WD22UBBPe4
3 72 QWL1WBBPD2 WD22LBBPD2 WD22UBBPd6 QWL1WBBPD2
</code></pre>
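<p>Passing a single <code>values</code> column keeps the column index flat, which avoids the nested header shown above (a sketch on a subset of the data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': [55, 14, 14],
    'Board': ['WD22UBBPe4', 'WD22UBBPd6', 'QWL1WBBPF4'],
    'Slot No.': [3, 2, 3],
})

out = (df.pivot(index='Name', columns='Slot No.', values='Board')
         .add_prefix('Slot ')    # 2 -> 'Slot 2', 3 -> 'Slot 3', ...
         .fillna('')
         .reset_index())
print(out)
```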
|
python|pandas
| 2
|
374,226
| 73,113,834
|
How to rewrite Pandas frame.append with concat
|
<p>I have the following code which works perfectly putting in subtotals and grand totals. With the frame.append method deprecated how should this be rewritten?</p>
<pre><code>pvt = pd.concat([y.append(y.sum()
.rename((x, 'Total')))
for x, y in table.groupby(level=0)
]).append(table.sum()
.rename(('Grand', 'Total')))
</code></pre>
<p>Prior to this, I created a pivot table. So I'm looking for the totals to be stacked, not added as another column</p>
<pre><code>pivot = pd.pivot_table(data=df2,
index=['date_created','BuyerName'],
aggfunc='sum').round()
</code></pre>
<p>I get the following error with one of the suggested rewrites:</p>
<pre><code>---> 17 pvt = pd.concat([x for _, y in table.groupby(level=0) for x in (y, y.sum().rename((x, 'Total')))] +
     18                 [table.sum().rename(('Grand', 'Total'))])
UnboundLocalError: local variable 'x' referenced before assignment
</code></pre>
|
<p>Use a throwaway name for the inner loop variable (reusing <code>x</code> shadows the group key, which is what raises the <code>UnboundLocalError</code>), and turn each summed <code>Series</code> into a one-row frame so that <code>concat</code> appends it as a row:</p>
<pre class="lang-py prettyprint-override"><code>pvt = pd.concat([z for x, y in table.groupby(level=0)
                   for z in (y, y.sum().rename((x, 'Total')).to_frame().T)] + \
                [table.sum().rename(('Grand', 'Total')).to_frame().T])
</code></pre>
|
python|pandas|concatenation|deprecated
| 0
|
374,227
| 72,929,317
|
Concatenating two dataframes with no common columns but same row dimension
|
<p>I have two dataframes <strong>df1</strong> <em>(dimension: 2x3)</em> and <strong>df2</strong> <em>(dimension: 2x239)</em> taken for example - each having the same number of rows but a different number of columns.</p>
<p>I need to concatenate them to get a new dataframe <strong>df3</strong> <em>(dimension 2x242)</em>.</p>
<p>I used the <em>concat</em> function but ended up getting a (4x242) which has NaN values.</p>
<p>Need help.</p>
<p>Screenshot attached. <a href="https://i.stack.imgur.com/By2Et.png" rel="nofollow noreferrer">jupyter notebook screenshot</a></p>
|
<p>You need to set the <code>axis=1</code> parameter, and reset the indexes so the rows align (with <code>drop=True</code> so the old indexes don't become extra columns):</p>
<pre><code>pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
</code></pre>
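<p>A minimal sketch with two small frames of matching row count (the example data is made up; your frames would be 2x3 and 2x239):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})   # 2 x 3
df2 = pd.DataFrame({'x': [7, 8], 'y': [9, 10]})               # 2 x 2

# Side-by-side concatenation; drop=True avoids adding the old index as a column
df3 = pd.concat([df1.reset_index(drop=True),
                 df2.reset_index(drop=True)], axis=1)
print(df3.shape)   # (2, 5)
```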
|
python|pandas|dataframe|concatenation
| 0
|
374,228
| 72,861,164
|
Plot each value in a time series dataframe
|
<p>I have a CSV file that shows the price of a product's barcode for several supermarkets during the COVID pandemic.
The <code>dataframe.head()</code> looks like this:</p>
<pre><code> BARCODE AC BFRESH LIDL SUPERM
Date
2020-01-03 5201263086618 6.36 7.97 0 0 8.31
2020-01-03 5201263086625 7.58 9.53 0 0 9.91
2020-01-03 7322540574852 18.11 18.34 0 0 8.86
2020-01-03 7322540647136 18.8 18.95 0 0 18.9
2020-01-03 7322540587555 18.22 18.98 0 0 9.21
</code></pre>
<p>In the dataset, there are 968 unique barcodes and the dataset also has 42592 entries, which means that some barcodes are present more than once in the dataset, so there are fluctuations in the product's prices.</p>
<p>I want to plot them somehow so that in one plot (preferably a line plot) all of the products are present clearly showing the ups and downs of the prices in different supermarkets (if any). Do you think there's a way to do that? I haven't found anything that matches my requirements just yet.</p>
<hr />
<p>Following a few comments around my decision on plotting 968 barcodes, I agree that it can be messy. It is just my first attempt/ approach to "look" at the data and maybe analyze the different prices on a few products and how they fluctuated during the COVID pandemic. I am open to suggestions on how to approach this.</p>
|
<p>It is technically possible, but are you sure that's what you want?</p>
<p>You're asking for (968 unique barcodes * 4 shopping centers) 3872 individual lines on a single plot. It would be impossible to interpret. What is the question you're trying to answer? There are a few better ways to generate a meaningful plot, e.g. try plotting</p>
<ul>
<li>average price of all products per supermarket over time. That gives you 4 lines. This gives an idea of how prices are trending, or which stores changed products/prices during the pandemic</li>
</ul>
<p><code>dataframe.groupby(["Date"])[['AC','BFRESH','LIDL','SUPERM']].mean().plot()</code></p>
<p><a href="https://i.stack.imgur.com/nUMFk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nUMFk.png" alt="Plot showing supermarket trends over all products (Extended the dataset randomly by 2 days)" /></a></p>
<p>Or you could look for the most common product, filter for only it, and plot its price over time (here mean is selected because of how groupbys work, but there's only one barcode in the dataframe, so you could do .max(), .min(), etc. with the same result).</p>
<pre><code>df = dataframe
df = df[df.BARCODE == df.BARCODE.value_counts().idxmax()]
df.groupby(["Date"])[['AC','BFRESH','LIDL','SUPERM']].mean().plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/V2Llr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V2Llr.png" alt="Plot showing supermarket trends for the most common product (Extended the dataset randomly by 2 days)" /></a></p>
<p><em>Artificially extended dataset in csv format below</em></p>
<pre><code> Date,BARCODE,AC,BFRESH,LIDL,SUPERM
2020-01-03,5201263086618, 2.36,2.97,0, 8.31
2020-01-03,5201263086625, 3.58,9.53,0, 9.91
2020-01-03,7322540574852, 12.11,10.34, 0, 8.86
2020-01-03,7322540647136, 18.8,18.95, 0, 18.9
2020-01-03,7322540587555, 18.22,18.98, 0, 9.21
2020-01-05,5201263086618, 6.36,7.97,0, 3.31
2020-01-05,5201263086625, 7.58,9.53,0, 9.91
2020-01-05,7322540574852, 18.11,18.34, 0, 4.86
2020-01-05,7322540647136, 18.8,18.95, 0, 15.9
2020-01-05,7322540587555, 18.22,18.98, 0, 9.21
2020-01-08,5201263086618, 2.36,7.97,0, 3.31
2020-01-08,5201263086625, 2.58,9.53,0, 9.91
2020-01-08,7322540574852, 18.11,18.34, 0, 4.86
2020-01-08,7322540647136, 12.8,18.95, 0, 15.9
</code></pre>
|
python|pandas
| 1
|
374,229
| 72,962,185
|
How to create a new column from a constant value in a DataFrame
|
<pre><code>s = Service(executable_path=r'D:\Python3104\chromedriver.exe')
driver = webdriver.Chrome(service=s)
driver.maximize_window()
url = '''http://racing.hkjc.com/racing/information/Chinese/Reports/CORunning.aspx?
Date=20220701&RaceNo=2'''
driver.get(url)
time.sleep(3)
#Got the RaceNo from URL
soup = BeautifulSoup(driver.page_source, "html.parser")
RACENo = ((driver.current_url.split("RaceNo=")[1]))
#Change the RaceNO into Series
RACENo = pd.Series(RACENo)
#Get a DataFrame From HTML
df = pd.read_html(
str(soup.find("table", class_="table_bd f_fs13")))
#Add a column to DataFrame using RACEno
df = df[:,"RaceNo": RACENo]
</code></pre>
<p><a href="https://i.stack.imgur.com/D2sfv.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>But it told me: "DataFrame constructor not properly called!"
Can anyone tell me what I did wrong?</p>
|
<p>The code below should work. You just need to assign <code>RACENo</code> as a <code>str</code> (not a <code>pandas.Series</code>) to the new column, i.e. there is no need to convert <code>RACENo</code> to a <code>pandas.Series</code>.</p>
<p>There are some ways to do it:</p>
<ol>
<li><code>df['RaceNo'] = RACENo</code> - Insert/replace a column with the given value to all rows;</li>
<li><code>df = df.assign(RaceNo=RACENo)</code> - <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html#pandas.DataFrame.assign" rel="nofollow noreferrer"><code>assign</code></a> a new column to the DataFrame</li>
<li><code>df.insert(0, 'RaceNo', RACENo)</code> - Insert a column at a specified position</li>
<li><code>df.loc[:, 'RaceNo'] = RACENo</code> - Create a new column and assign the new value</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import time
from numpy import NaN
import pandas as pd
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from bs4 import BeautifulSoup
s = Service(executable_path=r'D:\Python3104\chromedriver.exe')
driver = webdriver.Chrome(service=s)
driver.maximize_window()
url = '''http://racing.hkjc.com/racing/information/Chinese/Reports/CORunning.aspx?
Date=20220701&RaceNo=2'''
driver.get(url)
time.sleep(10)
#Got the RaceNo from URL
soup = BeautifulSoup(driver.page_source, "html.parser")
RACENo = ((driver.current_url.split("RaceNo=")[1]))
#Get a DataFrame From HTML
df = pd.read_html(
str(soup.find("table", class_="table_bd f_fs13")))[0]
#Add a column to DataFrame using RACEno
df['RaceNo'] = RACENo
</code></pre>
|
python|pandas
| 2
|
374,230
| 73,039,289
|
Weird - Empty pd.dataframe after Excel import. But why?
|
<p>I'm afraid I'll despair soon.
I am importing an Excel file, and this has always worked this way for me. But now I am getting an empty dataframe, and I don't know why.</p>
<p>My demo code looks like this:</p>
<pre><code>import pandas as pd
data = pd.read_excel ('import.xlsx', sheet_name=["Sheet 1", "Sheet 2"], dtype=str)
print (data)
df = pd.DataFrame(data, columns=["Header A", "Header B"] , dtype=str)
print (df)
</code></pre>
<p>My excel file consists of two sheets, and each sheet looks like this:</p>
<pre><code>| Header A | Header B |
|:--------:|:--------:|
| 1 | A |
| 2 | B |
| 3 | C |
</code></pre>
<p><a href="https://i.stack.imgur.com/9KHGt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9KHGt.png" alt="enter image description here" /></a></p>
<p>And this it the ouput:</p>
<pre><code>{'Sheet 1': Header A Header B
0 1 A
1 2 B
2 3 C, 'Sheet 2': Header A Header B
0 1 A
1 2 B
2 3 C}
Empty DataFrame
Columns: [A, B]
Index: []
</code></pre>
<p>Import works well, because when I print the data dict the data is there. But the DataFrame is empty. I looked for white spaces in the column names, etc. Even if I import only one sheet, the dataframe remains empty. But why?</p>
|
<p>When <code>sheet_name</code> is a list, <code>read_excel</code> returns a dictionary with one DataFrame per sheet. If you want to work with a single sheet (a single DataFrame), get it directly from that dictionary:</p>
<pre><code>df = data["Sheet 1"]
</code></pre>
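<p>A short sketch of the general pattern (the dict here stands in for what <code>read_excel</code> returns; the sheet names are taken from the question): you can pick one sheet or stack all of them into a single frame with <code>concat</code>:</p>

```python
import pandas as pd

# read_excel(..., sheet_name=["Sheet 1", "Sheet 2"]) returns a dict like this
data = {
    "Sheet 1": pd.DataFrame({"Header A": ["1", "2"], "Header B": ["A", "B"]}),
    "Sheet 2": pd.DataFrame({"Header A": ["3", "4"], "Header B": ["C", "D"]}),
}

df_one = data["Sheet 1"]                                  # a single sheet
df_all = pd.concat(list(data.values()), ignore_index=True)  # all sheets stacked
print(df_all.shape)  # (4, 2)
```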
|
python|excel|pandas|dataframe
| 1
|
374,231
| 73,053,240
|
Pytorch: Disable only nn.Dropout() without using model.eval()
|
<p>nn.Dropout() can be disabled by using model.eval().<br>However, by using .eval(), nn.BatchNorm1d() is also disabled. Because the distributions of the train and test sets are different, I'd like to disable only Dropout when generating data with the GAN.<br>
Is there any way to disable only Dropout after training?<br>
Here is the generator model in my GAN.</p>
<pre><code>class Generator(nn.Module):
def __init__(self, num_input=2, noise_dim=1, num_output=5, hidden_size=128):
super(Generator, self).__init__()
self.fc_in = nn.Linear(num_input+noise_dim, hidden_size)
self.fc_mid = nn.Linear(hidden_size+num_input+noise_dim, hidden_size)
self.fc_out = nn.Linear(2*hidden_size+num_input+noise_dim, num_output)
self.bn_in = nn.BatchNorm1d(hidden_size)
self.bn_mid = nn.BatchNorm1d(hidden_size)
self.dropout = nn.Dropout()
self.relu = nn.ReLU()
def forward(self, y, z):
h0 = torch.concat([y,z],axis=1)
h1 = self.relu(self.bn_in(self.fc_in(h0)))
h1 = self.dropout(h1)
h1 = torch.concat([h0,h1],axis=1)
h2 = self.relu(self.bn_mid(self.fc_mid(h1)))
h2 = self.dropout(h2)
h2 = torch.concat([h1,h2],axis=1)
x = self.fc_out(h2)
return x
</code></pre>
|
<p>The answer in the comment is right, you can run <code>eval()</code> on single modules.</p>
<p>But... why do you think you need to keep the BatchNorm active after training? By default, in eval mode it will use the running average/std computed during training (which is a good thing, and makes the model give the same outputs regardless of the batch size or the content of the batch). The outputs should be closer to the ones in training when the batchnorm is in eval mode. Sometimes it's beneficial to recompute the batch norm mean/std after the training (but still later using the whole model in eval mode afterwards). There are some rare cases when you might want to use it in train mode, but make sure that it's what you want.</p>
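<p>A minimal sketch of the per-module pattern (the toy model here is illustrative, not the generator from the question): keep the model in train mode and switch only the <code>Dropout</code> submodules to eval:</p>

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8), nn.Dropout(0.5))
model.train()  # everything in train mode first

# Disable only the Dropout layers; BatchNorm keeps its train-mode behaviour
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.eval()
```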
|
python|pytorch|generative-adversarial-network|batch-normalization|dropout
| 0
|
374,232
| 72,920,150
|
Why am I getting an error on trying to assign arrays to each element of a list?
|
<p>I am trying to create a list which should contain numeric values which I am trying to extract from a dataframe. The following is my code:</p>
<pre><code>list_values = []
j = 0
for i in country_list:
list_values[j] = df6['positioning'][i].to_numpy()
j = j + 1
print(list_values)
</code></pre>
<p>When I am trying to just print the values directly without storing them in the list I can do that, but somehow I am not able to store them in the list or a 2d numpy array. What is going wrong?</p>
<p>The following is the error:</p>
<blockquote>
<p>IndexError: list assignment index out of range</p>
</blockquote>
|
<p>At the beginning, <code>list_values</code> is an empty list and <code>j</code> is 0.</p>
<p>So if you use <code>list_values[j]</code>, 0 is an invalid index for an empty list. Therefore you get this error.</p>
<p>You cannot ever grow a list by using index assignment. Index assignment can only replace list items that already exist. In order to grow a list, you can for example use the <code>append</code> method to add an item to the end of the list.</p>
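<p>A minimal sketch of the fix (with a toy DataFrame and country list standing in for <code>df6</code> and <code>country_list</code>):</p>

```python
import pandas as pd

# Toy stand-ins for df6 and country_list
df6 = pd.DataFrame({'positioning': [1.0, 2.0, 3.0, 4.0]},
                   index=['US', 'US', 'DE', 'DE'])
country_list = ['US', 'DE']

list_values = []
for country in country_list:
    # append grows the list; index assignment cannot
    list_values.append(df6['positioning'][country].to_numpy())

print(len(list_values))  # 2
```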
|
python|arrays|list|numpy|for-loop
| 1
|
374,233
| 73,069,965
|
Converting column of floats to datetime
|
<p>I have a column of my dataframe that is made up of the following:</p>
<p><code>df['Year'] = [2025, 2024, NaN, 2023, 2026, NaN]</code> (these are type <code>float64</code>)</p>
<p>How can I convert these years to something in datetime format? Since there are no months or days included I feel like they have to output as <code>[01-01-2025, 01-01-2024, NaT, 01-01-2023, 01-01-2026, NaT]</code> by default.</p>
<p>But if there was a way to still have the column as <code>[2025, 2024, NaT, 2023, 2026, NaT]</code> then that would work well too.</p>
<p>Using <code>df['Year'] = pd.DatetimeIndex(df['Year']).year</code> just output <code>[1970, 1970, NaN, 1970, 1970, NaN]</code>.</p>
<p>Thank you very much.</p>
|
<p>You can use pandas' <code>to_datetime()</code> and set <code>errors='coerce'</code> to take care of the NaNs (-> NaT)</p>
<pre><code>df['Year'] = pd.to_datetime(df['Year'], format='%Y', errors='coerce')
</code></pre>
<p>The output is going to be like <code>01-01-2025, 01-01-2024 ...</code></p>
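<p>A small sketch of the round trip (note that with <code>NaN</code> present the column is float, so one way to make the <code>format='%Y'</code> parsing robust is to go through pandas' nullable integer type first; this detour is an assumption on my part, not part of the original answer):</p>

```python
import numpy as np
import pandas as pd

years = pd.Series([2025.0, 2024.0, np.nan, 2023.0])

# Float -> nullable Int -> str, then parse; unparseable entries become NaT
dt = pd.to_datetime(years.astype('Int64').astype(str),
                    format='%Y', errors='coerce')
print(dt.iloc[0])          # 2025-01-01 00:00:00
print(dt.dt.year.iloc[0])  # back to plain years
```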
|
python|pandas|datetime
| 2
|
374,234
| 72,951,491
|
Python: Why can't I add a 3x1 array to one column of a 3x100 array?
|
<p>Variable <code>a</code> has the shape (3,1) and variable <code>b</code> has the shape (3,100). Now, I want to add variable <code>a</code> to just one column of variable <code>b</code>, meaning:</p>
<pre><code>x[:,ii] = a + b[:,ii]
</code></pre>
<p>However, I get this message:</p>
<pre><code>could not broadcast input array from shape (3,3) into shape (3,)
</code></pre>
<p>What am I missing?</p>
|
<p>You need to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.ravel.html" rel="nofollow noreferrer"><code>numpy.ravel()</code></a> Because <code>a.shape</code> is <code>(3,1)</code> and you need <code>(3,)</code>.</p>
<pre><code>x[:,ii] = a.ravel() + b[:,ii]
</code></pre>
|
python|arrays|numpy|indexing
| 1
|
374,235
| 73,040,420
|
Sum of values from multiple dicts
|
<p>I am iterating some code over a directory and I want to sum the values of same keys from dictionaries that I get.</p>
<p>The code is counting how many times a word appears in a column of a .csv file. It does that with every .csv file in the given folder.</p>
<p>I want an output of added values of same keys. Eg. first file had the word "dog" three times and the second had it 4 times. I want to add these two numbers.</p>
<p>The code to count words:</p>
<pre class="lang-py prettyprint-override"><code>for file in os.scandir(directory):
if file.path.endswith(ext):
df = pd.read_csv(file)
klasifikacija = df.iloc[:, 4] # Gets all rows for the fourth column (index starts at 0)
napake = klasifikacija.value_counts()
dict_napake = dict(napake)
</code></pre>
|
<p>You can make a list of all the <code>dict_napake</code> dictionaries you have collected while iterating and do the following:</p>
<pre><code>import collections
import functools
import operator
dict_napake_1 = {'a': 5, 'b': 1, 'c': 2}
dict_napake_2 = {'a': 2, 'b': 5}
dict_napake_3 = {'a': 10, 'c': 10}
master_dict = [dict_napake_1,
dict_napake_2,
dict_napake_3]
# sum the values with same keys
res = dict(functools.reduce(operator.add,
map(collections.Counter, master_dict)))
print("New dict : ", res)
</code></pre>
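<p>As a usage note, the same key-wise reduction can be written more compactly with <code>sum</code> and a <code>Counter</code> start value (keep in mind that <code>Counter</code> addition drops non-positive counts, which is fine for word counts):</p>

```python
from collections import Counter

dicts = [{'a': 5, 'b': 1, 'c': 2}, {'a': 2, 'b': 5}, {'a': 10, 'c': 10}]
# Counter() as the start value lets sum() add the counters pairwise
res = dict(sum(map(Counter, dicts), Counter()))
print(res)  # {'a': 17, 'b': 6, 'c': 12}
```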
|
python|pandas|dataframe|dictionary
| 1
|
374,236
| 73,034,118
|
Efficient computation of entropy-like formula (sum(xlogx)) in Python
|
<p>I'm looking for an efficient way to compute the entropy of vectors, without normalizing them and while ignoring any non-positive value.</p>
<p>Since the vectors aren't probability vectors, and shouldn't be normalized, I can't use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html" rel="nofollow noreferrer">scipy's entropy</a> function.</p>
<p>So far I couldn't find a single numpy or scipy function to obtain this, and as a result my alternatives involve breaking the computation into 2 steps, which involve intermediate arrays and slow down the run time. If anyone can think of a single function for this computation it will be interesting.</p>
<p>Below is a timeit script for measuring several alternatives at I tried. I'm using a pre-allocated array to avoid repeated allocations and deallocations during run-time. It's possible to select which alternative to run by setting the value of <code>func_code</code>. I included the nansum offered by one of the answers. The measurements on My MacBook Pro 2019 are:
matmul: 16.720187613
xlogy: 17.296380516
nansum: 20.059866123000003</p>
<pre><code>import timeit
import numpy as np
from scipy import special
def matmul(arg):
a, log_a = arg
log_a.fill(0)
np.log2(a, where=a > 0, out=log_a)
return (a[:, None, :] @ log_a[..., None]).ravel()
def xlogy(arg):
a, log_a = arg
a[a < 0] = 0
return np.sum(special.xlogy(a, a), axis=1) * (1/np.log(2))
def nansum(arg):
a, log_a = arg
return np.nansum(a * np.log2(a, out=log_a), axis=1)
def setup():
a = np.random.rand(20, 1000) - 0.1
log = np.empty_like(a)
return a, log
setup_code = """
from __main__ import matmul, xlogy, nansum, setup
data = setup()
"""
func_code = "matmul(data)"
print(timeit.timeit(func_code, setup=setup_code, number=100000))
</code></pre>
|
<p>On my machine the computation of the logarithms takes about 80% of the time of <code>matmul</code>, so it is definitely the bottleneck, and optimizing the other functions will result in a negligible speed-up.</p>
<p>The bad news is that the default implementation of <code>np.log</code> is not yet optimized on most platforms. Indeed, it is <strong>not vectorized</strong> by default, except on <a href="https://github.com/numpy/numpy/commit/1eff1c543a8f1e9d7ea29182b8c76db5a2efc3c2" rel="nofollow noreferrer">recent x86 Intel processors supporting AVX-512</a> (ie. basically Skylake processors on servers and IceLake processors on PCs, not recent AlderLake though). This means the computation could be significantly faster once vectorized. AFAIK, the closed-source SVML library does support AVX/AVX2 and could speed it up (on x86-64 processors only). SVML is supported by Numexpr and Numba, which can be faster because of that, assuming you have access to the non-free SVML, which is part of the Intel tools often available on HPC machines (eg. MKL, OneAPI, etc.).</p>
<p>If you do not have access to the SVML there are two possible remaining options:</p>
<ul>
<li>Implement your <strong>own optimized SIMD log2</strong> function, which is possible but hard since it requires a good understanding of the hardware SIMD units and certainly requires writing C or Cython code. This solution consists of computing the log2 function as an <strong><code>n</code>-degree polynomial approximation</strong> (it can be exact to 1 ULP with a large <code>n</code>, though one generally does not need that). Naive approximations (eg. n=1) are much simpler to implement but often too inaccurate for scientific use.</li>
<li>Implement a <strong>multi-threaded</strong> log computation typically using Numba/Cython. This is a desperate solution as multithreading can slow things down if the input data is not large enough.</li>
</ul>
<hr />
<p>Here is an example of multi-threaded Numba solution:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
@nb.njit('(UniTuple(f8[:,::1],2),)', parallel=True)
def matmul(arg):
a, log_a = arg
result = np.empty(a.shape[0])
for i in nb.prange(a.shape[0]):
s = 0.0
for j in range(a.shape[1]):
if a[i, j] > 0:
s += a[i, j] * np.log2(a[i, j])
result[i] = s
return result
</code></pre>
<p>This is about <strong>4.3 times faster</strong> on my 6-core PC (200 us VS 46.4 us). However, you should be careful if you run this on a server with many cores on such small dataset as it can actually be slower on some platforms.</p>
|
python|numpy|scipy|entropy
| 1
|
374,237
| 73,126,923
|
select rows based on a combination of strings without order in strings
|
<p>If I have the following code:</p>
<pre><code>df_ = df_[df_['summary'].str.contains('slow delivery', na=False)]
df_ = df_['summary']
print(df_)
</code></pre>
<p>And the following list:</p>
<pre><code>df_ = ['May be great product, but slow delivery is annoying',
'May be great product, but slow delivery is annoying',
'slow delivery',
'Great product, slow delivery',
'smewhat slow delivery but accurate and wellpackeged. thank you!']
</code></pre>
<p>But I want to select all the items in the list that contain a combination of slow and delivery, instead of 'slow delivery'.</p>
<p>How does the above need to be adjusted?</p>
<p>Thanks in advance.</p>
|
<p>Might as well just make separate masks for both words in this case. If you have a longer list of words, there are better solutions.</p>
<pre><code>df_ = df_[df_['summary'].str.contains('slow') & df_['summary'].str.contains('delivery')]
</code></pre>
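<p>For a longer word list, one sketch (my own suggestion, not part of the original answer) is to build one contains-mask per word and combine them with <code>numpy.logical_and.reduce</code>:</p>

```python
import numpy as np
import pandas as pd

df_ = pd.DataFrame({'summary': ['slow delivery',
                                'Great product, slow delivery',
                                'fast delivery',
                                'slow service']})
words = ['slow', 'delivery']

# One boolean mask per word, AND-ed together across the list
mask = np.logical_and.reduce([df_['summary'].str.contains(w, na=False)
                              for w in words])
result = df_[mask]
print(len(result))  # 2
```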
|
pandas
| 1
|
374,238
| 73,133,136
|
Pandas column name missing depending on adressing
|
<pre><code>df.speed # so nice cause of autocomplete...
df['speed']
df.loc[:,'speed']
</code></pre>
<p>are returning my data like omitting the selected column name</p>
<pre><code>Time
2022-07-27 11:33:16.279157 45.000000
2022-07-27 11:33:16.628157 44.928571
2022-07-27 11:33:17.093157 44.857143
2022-07-27 11:33:17.449157 44.785714
</code></pre>
<p>Why is that missing??</p>
<p>I want it like</p>
<pre><code>df.filter(regex="speed")
</code></pre>
<p>which returns</p>
<pre><code> speed
Time
2022-07-27 11:33:16.279157 45.000000
2022-07-27 11:33:16.628157 44.928571
2022-07-27 11:33:17.093157 44.857143
2022-07-27 11:33:17.449157 44.785714
2022-07-27 11:33:17.885157 44.714286
</code></pre>
<p>which means only this plots nicely with correct naming of the value axis</p>
<pre><code>df.filter(regex="speed").plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/3qIIz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3qIIz.png" alt="enter image description here" /></a></p>
<p>whereas</p>
<pre><code>df.speed.plot(label="speed")
</code></pre>
<p>only works by using <code>plt.legend()</code></p>
<p>Is there a convenient way to do it?</p>
|
<ul>
<li><code>df['speed']</code> and <code>df.speed</code> return a Series.</li>
<li><code>df[['speed']]</code> returns a DataFrame, which is what you're expecting.</li>
</ul>
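<p>A quick sketch of the difference with a toy frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'speed': [45.0, 44.9]},
                  index=pd.Index(['t0', 't1'], name='Time'))

# Single brackets give a Series (no column header when printed/plotted);
# double brackets give a one-column DataFrame that keeps the name 'speed'
print(type(df['speed']).__name__)    # Series
print(type(df[['speed']]).__name__)  # DataFrame
```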
|
python|pandas|matplotlib
| 1
|
374,239
| 10,828,477
|
Is there a way I can vectorize fsolve?
|
<p>I'm trying apply fsolve to an array:</p>
<pre><code>from __future__ import division
from math import fsum
from numpy import *
from scipy.optimize import fsolve
from scipy.constants import pi
nu = 0.05
cn = [0]
cn.extend([pi*n - pi/4 for n in range(1, 5 +1)])
b = linspace(200, 600, 400)
a = fsolve(lambda a: 1/b + fsum([1/(b+cn[i]) for i in range(1, 5 +1)]) - nu*(sqrt(a) - 1)**2, 3)
</code></pre>
<p>It is not allowed by default:</p>
<pre><code>TypeError: only length-1 arrays can be converted to Python scalars
</code></pre>
<p>is there a way I can apply fsolve to an array?</p>
<p><strong>Edit</strong>:</p>
<pre><code>#!/usr/bin/env python
from __future__ import division
import numpy as np
from scipy.optimize import fsolve
nu = np.float64(0.05)
cn = np.pi * np.arange(6) - np.pi / 4.
cn[0] = 0.
b = np.linspace(200, 600, 400)
cn.shape = (6,1)
cn_grid = np.repeat(cn, b.size, axis=1)
K = np.sum(1/(b + cn_grid), axis=0)
f = lambda a: K - nu*(np.sqrt(a) - 1)**2
a0 = 3. * np.ones(K.shape)
a = fsolve(f, a0)
print a
</code></pre>
<p>solves it.</p>
|
<p><code>fsum</code> is for python scalars, so you should look to numpy for vectorisation. Your method is probably failing because you're trying to sum a list of five numpy arrays, rather than five numbers or a single numpy array.</p>
<p>First I would recalculate <code>cn</code> using numpy:</p>
<pre><code>import numpy as np
cn = np.pi * np.arange(6) - np.pi / 4.
cn[0] = 0.
</code></pre>
<p>Next I would compute the previous fsum result separately, since it's a constant vector. This is one way, although there may be more efficient ways:</p>
<pre><code>cn.shape = (6,1)
cn_grid = np.repeat(cn, b.size, axis=1)
K = np.sum(1/(b + cn_grid), axis=0)
</code></pre>
<p>Redefining your function in terms of <code>K</code> should now work:</p>
<pre><code>f = lambda a: K - nu*(np.sqrt(a) - 1)**2
</code></pre>
<hr>
<p>To use <code>fsolve</code> to find the solution, provide it with an appropriate initial vector to iterate against. This uses the zero vector:</p>
<pre><code>a0 = np.zeros(K.shape)
a = fsolve(f, a0)
</code></pre>
<p>or you can use <code>a0 = 3</code>:</p>
<pre><code>a0 = 3. * np.ones(K.shape)
a = fsolve(f, a0)
</code></pre>
<p>This function is invertible, so you can check <code>f(a) = 0</code> against the two exact solutions:</p>
<pre><code>a = (1 - np.sqrt(K/nu))**2
</code></pre>
<p>or</p>
<pre><code>a = (1 + np.sqrt(K/nu))**2
</code></pre>
<p><code>fsolve</code> seems to be picking up the first solution when starting from <code>a0 = 0</code>, and the second one for <code>a0 = 3</code>.</p>
|
numpy|scipy
| 3
|
374,240
| 3,718,791
|
What is the reason for an unhandled win32 exception in an installer program?
|
<p>I got the following message:</p>
<p><code>An unhandled win32 exception occurred in numpy-1.5.0-sse3.exe [3324].</code></p>
<p>The exception occurred in the Numpy installer for Python 2.7---I have the latter on the machine.</p>
<p>When I clicked "Yes" for using the selected debugger, I got the following message:</p>
<p><code>The Application Data folder for Visual Studio could not be created.</code></p>
<p>I don't have admin privileges.</p>
<p>Is this the reason for the exception, or is there some other reason?</p>
|
<p>This is not the reason for the exception. Instead, when you tried to debug the original problem, you encountered a problem with Visual Studio. The Visual Studio error is resolved in the following manner:</p>
<p>Check your registry under the following path:</p>
<pre><code>HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
</code></pre>
<p>Under there should be an entry for AppData. The key value should be like so:</p>
<pre><code>%USERPROFILE%\AppData\Roaming
</code></pre>
<p>If it isn't, change it to the above value and your program should now work.</p>
<p>As for the original error, we really don't have enough information to comment intelligently.</p>
|
python|windows|visual-studio-2008|winapi|numpy
| 0
|
374,241
| 3,854,665
|
Indexing with Masked Arrays in numpy
|
<p>I have a bit of code that attempts to find the contents of an array at indices specified by another, that may specify indices that are out of range of the former array.</p>
<pre><code>input = np.arange(0, 5)
indices = np.array([0, 1, 2, 99])
</code></pre>
<p>What I want to do is this:</p>
<pre><code>print input[indices]
</code></pre>
<p>and get</p>
<pre><code>[0 1 2]
</code></pre>
<p>But this yields an exception (as expected):</p>
<pre><code>IndexError: index 99 out of bounds 0<=index<5
</code></pre>
<p>So I thought I could use masked arrays to hide the out of bounds indices:</p>
<pre><code>indices = np.ma.masked_greater_equal(indices, 5)
</code></pre>
<p>But still:</p>
<pre><code>>print input[indices]
IndexError: index 99 out of bounds 0<=index<5
</code></pre>
<p>Even though:</p>
<pre><code>>np.max(indices)
2
</code></pre>
<p>So I'm having to fill the masked array first, which is annoying, since I don't know what fill value I could use to not select any indices for those that are out of range:</p>
<blockquote>
<p>print input[np.ma.filled(indices, 0)]</p>
</blockquote>
<pre><code>[0 1 2 0]
</code></pre>
<p>So my question is: how can you use numpy efficiently to select indices safely from an array without overstepping the bounds of the input array?</p>
|
<p>Without using masked arrays, you could remove the indices greater or equal to 5 like this:</p>
<pre><code>print input[indices[indices<5]]
</code></pre>
<p>Edit: note that if you also wanted to discard negative indices, you could write:</p>
<pre><code>print input[indices[(0 <= indices) & (indices < 5)]]
</code></pre>
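<p>Putting it together as a small self-contained sketch on the arrays from the question:</p>

```python
import numpy as np

inp = np.arange(0, 5)
indices = np.array([0, 1, 2, 99])

# Keep only in-bounds indices, then index normally
safe = indices[(0 <= indices) & (indices < inp.size)]
print(inp[safe])  # [0 1 2]
```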
|
python|numpy|indexing
| 5
|
374,242
| 3,734,776
|
Translate matlab to python/numpy
|
<p>I am looking for an automatic code translator for Matlab to Python.
I downloaded and installed <a href="http://sourceforge.net/projects/libermate/" rel="nofollow noreferrer">LiberMate</a> but it is <strong>not</strong> documented anywhere and I wasn't able to make it work.</p>
<p>Has anybody dealt with this kind of challenge before? Any advice welcome.</p>
|
<p>I've done it manually. Check <a href="http://www.scipy.org/NumPy_for_Matlab_Users" rel="nofollow noreferrer">this</a>.</p>
<p><strong>[EDIT]</strong></p>
<p>You can also try to call your MATLAB code from Python using <a href="http://mlabwrap.sourceforge.net/" rel="nofollow noreferrer">Mlabwrap</a>, a high-level Python to MATLAB bridge that lets MATLAB look like a normal Python library.</p>
<p>For example:</p>
<pre><code>from mlabwrap import mlab
mlab.plot([1,2,3], '-o')
</code></pre>
|
python|matlab|numpy|scipy|code-translation
| 9
|
374,243
| 3,650,194
|
Are NumPy's math functions faster than Python's?
|
<p>I have a function defined by a combination of basic math functions (abs, cosh, sinh, exp, ...).</p>
<p>I was wondering if it makes a difference (in speed) to use, for example,
<code>numpy.abs()</code> instead of <code>abs()</code>?</p>
|
<p>Here are the timing results:</p>
<pre><code>lebigot@weinberg ~ % python -m timeit 'abs(3.15)'
10000000 loops, best of 3: 0.146 usec per loop
lebigot@weinberg ~ % python -m timeit -s 'from numpy import abs as nabs' 'nabs(3.15)'
100000 loops, best of 3: 3.92 usec per loop
</code></pre>
<p><code>numpy.abs()</code> is slower than <code>abs()</code> because it also handles Numpy arrays: it contains additional code that provides this flexibility.</p>
<p>However, Numpy <em>is</em> fast on arrays:</p>
<pre><code>lebigot@weinberg ~ % python -m timeit -s 'a = [3.15]*1000' '[abs(x) for x in a]'
10000 loops, best of 3: 186 usec per loop
lebigot@weinberg ~ % python -m timeit -s 'import numpy; a = numpy.empty(1000); a.fill(3.15)' 'numpy.abs(a)'
100000 loops, best of 3: 6.47 usec per loop
</code></pre>
<p>(PS: <code>'[abs(x) for x in a]'</code> is slower in Python 2.7 than the better <code>map(abs, a)</code>, which is about 30 % faster—which is still much slower than NumPy.)</p>
<p>Thus, <code>numpy.abs()</code> does not take much more time for 1000 elements than for 1 single float!</p>
|
python|performance|numpy
| 87
|
374,244
| 70,565,279
|
While applying qcut to a data series I am getting the below mentioned Index error
|
<p>Hi can anyone tell me what is the issue here?</p>
<pre><code>#How to bin a numeric series to 10 groups of equal size?
ser = pd.Series(np.random.random(20))
s=[0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
label=['1st','2nd','3rd','4th','5th','6th','7th','8th','9th','10th']
k=pd.qcut(ser,q=s,labels=label)
</code></pre>
<blockquote>
<p>IndexError: index 76 is out of bounds for axis 0 with size 20</p>
</blockquote>
<p>The solution provided has the list passed directly in the qcut method rather than defining them as a variable</p>
|
<p>Which pandas version do you have? In 1.3.5 everything works fine:</p>
<pre><code>ser = pd.Series(np.random.random(10))
s=[0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
label=['1st','2nd','3rd','4th','5th','6th','7th','8th','9th','10th']
k=pd.qcut(ser,q=s,labels=label)
</code></pre>
<p>Output:</p>
<pre><code>0 6th
1 3rd
2 3rd
3 5th
4 1st
5 1st
6 10th
7 9th
8 2nd
9 5th
10 8th
dtype: category
Categories (10, object): ['1st' < '2nd' < '3rd' < '4th' ... '7th' < '8th' < '9th' < '10th']
</code></pre>
|
python|pandas
| 0
|
374,245
| 70,392,819
|
Problem with plotting/calculating exponential curve (python, matplotlib, pandas)
|
<p>I have some data that forms an exponential curve and I'm trying to fit that curve to the data.</p>
<p>Unfortunately everything I have tried didn't work (I will spare you the madness of the code).</p>
<p>The thing is that it works when I use <code>a*x**2 +b*x + c</code> or <code>a*x**3 + b*x**2 +c*x + d</code> with what I found on the internet (using implementation(s) of <code>from scipy.optimize import curve_fit</code>). Again I will spare you my iterations of the exp function.</p>
<p>Here is the data:</p>
<pre><code>x,y
0.48995590396864286,8.109516054921031e-09
0.48995590396864286,8.09818090049968e-09
0.48995590396864286,8.103734197035667e-09
0.48995590396864286,8.110736963480639e-09
0.48995590396864286,8.09118823654877e-09
0.48995590396864286,8.12135991705394e-09
0.48995590396864286,8.122079043957364e-09
0.48995590396864286,8.128376050930522e-09
0.48995590396864286,8.157919899241163e-09
0.48661800486618,8.198100087712926e-09
0.48426150121065376,8.22138382076506e-09
0.48192771084337344,8.281557310731435e-09
0.4793863854266539,8.27420119872003e-09
0.47709923664122134,8.321514715516415e-09
0.47483380816714155,8.3552316463302e-09
0.47483380816714155,8.378564235036926e-09
0.47192071731949026,8.401917724613532e-09
0.4703668861712136,8.425994519752875e-09
0.4681647940074906,8.45965504646707e-09
0.4659832246039143,8.496218480906607e-09
0.46382189239332094,8.551849768778838e-09
0.46168051708217916,8.54285497435508e-09
0.46168051708217916,8.583748312156053e-09
0.46168051708217916,8.646661429014719e-09
0.4568296025582458,8.733501981255873e-09
0.45475216007276037,8.765708849715661e-09
0.45004500450045004,8.8589473576661e-09
0.44385264092321347,8.991513675928626e-09
0.4397537379067722,9.130861147033911e-09
0.43308791684711995,9.301055589581911e-09
0.4269854824935952,9.533957982742729e-09
0.42052144659377627,9.741467401775447e-09
0.41476565740356697,9.942960683024683e-09
0.4088307440719542,1.0205883938061429e-08
0.40176777822418647,1.0447121052453653e-08
0.3947887879984209,1.0747232046538825e-08
0.3895597974289053,1.1089181777589068e-08
0.3829950210647261,1.1466586145307001e-08
0.37664783427495296,1.1898726912256124e-08
0.3707823507601038,1.2248924384552248e-08
0.362844702467344,1.2806614625543388e-08
0.35676061362825545,1.3206507000963428e-08
0.35385704175513094,1.3625333143433576e-08
0.3460207612456747,1.4205592733074004e-08
0.34002040122407345,1.4793868231688043e-08
0.3348961821835231,1.545475512236522e-08
0.3287310979618672,1.6141630273450685e-08
0.32185387833923396,1.698004473312357e-08
0.3162555344718533,1.7677811603552503e-08
0.3111387678904792,1.858017339865837e-08
0.3037667071688943,1.9505998651376402e-08
0.29886431560071725,2.022694254385094e-08
0.2910360884749709,2.1353523243307723e-08
0.28457598178713717,2.2277591448622187e-08
0.2770083102493075,2.302804705798657e-08
0.2727024815925825,2.299784512552745e-08
</code></pre>
|
<p>If you believe this is an exponential curve, I would fit a line to the log of the data.</p>
<pre><code># your data in a Dataframe
import pandas as pd
import numpy as np
df = pd.read_csv("data.csv", sep=",")
# get log of your data
log_y = np.log(df["y"])
# linear fit of the log (since y = exp(a*x + b) implies ln(y) = a*x + b)
a, b = np.polyfit(df.x, log_y, 1)
# plot the fit
import matplotlib.pyplot as plt
plt.scatter(df.x, df.y, label="raw_data")
plt.plot(df.x, np.exp(a*df.x + b), label="fit")
plt.legend()
</code></pre>
<p><a href="https://i.stack.imgur.com/JrBOe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JrBOe.png" alt="plot of the fit" /></a></p>
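<p>If more control is wanted, the log-linear estimate also makes a good starting guess for <code>scipy.optimize.curve_fit</code>. The sketch below uses synthetic data standing in for the CSV, and the decay constants (-10, -12) are made up for illustration:</p>

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, a, b):
    # y = exp(a*x + b), the exponential implied by the log-linear fit
    return np.exp(a * x + b)

# synthetic stand-in for the CSV: decaying exponential with mild noise
rng = np.random.default_rng(0)
x = np.linspace(0.27, 0.49, 50)
y = np.exp(-10.0 * x - 12.0) * (1 + 0.01 * rng.standard_normal(50))

# seed curve_fit with the log-linear estimate so it converges reliably
p0 = np.polyfit(x, np.log(y), 1)
params, _ = curve_fit(exp_model, x, y, p0=p0)
a_hat, b_hat = params
```

<p>The refinement matters most when the noise is multiplicative in y rather than in log(y); for this data either approach gives essentially the same curve.</p>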
|
python|pandas|matplotlib|scipy|scipy-optimize
| 1
|
374,246
| 70,647,541
|
Is there a way to reduce expected conv2d_Conv2D1_input from 4 dimensions to 3?
|
<p><strong>Problem</strong>:</p>
<ul>
<li>a ValueError is saying that <code>conv2d_Conv2D1_input</code> is expecting to have 4 dimension(s), but got array with shape [475,475,3]</li>
</ul>
<p>However:</p>
<ul>
<li>The inputShape is set to [475,475,3]</li>
<li>when logged, tensors have the shape [475,475,3]</li>
</ul>
<p>Error: <code>ValueError: Error when checking : expected conv2d_Conv2D1_input to have 4 dimension(s), but got array with shape [475,475,3]</code></p>
<p>Tensor:</p>
<pre><code>Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 475, 475, 3 ],
dtype: 'int32',
size: 676875,
strides: [ 1425, 3 ],
dataId: {},
id: 8,
rankType: '3',
scopeId: 4
}
</code></pre>
<p>Complete Code:</p>
<pre class="lang-js prettyprint-override"><code>var tf = require('@tensorflow/tfjs');
var tfnode = require('@tensorflow/tfjs-node');
var fs = require(`fs`)
const main = async () => {
  const loadImage = async (file) => {
    const imageBuffer = await fs.readFileSync(file)
    const tensorFeature = await tfnode.node.decodeImage(imageBuffer, 3)
    return tensorFeature;
  }
  const tensorFeature = await loadImage(`./1.png`)
  const tensorFeature2 = await loadImage(`./4.png`)
  const tensorFeature3 = await loadImage(`./7.png`)
  console.log(tensorFeature)
  console.log(tensorFeature2)
  console.log(tensorFeature3)
  tensorFeatures = [tensorFeature, tensorFeature2, tensorFeature3]
  labelArray = [0, 1, 2]
  tensorLabels = tf.oneHot(tf.tensor1d(labelArray, 'int32'), 3);
  const model = tf.sequential();
  model.add(tf.layers.conv2d({
    inputShape: [475, 475, 3],
    filters: 32,
    kernelSize: 3,
    activation: 'relu',
  }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({units: 3, activation: 'softmax'}));
  model.compile({
    optimizer: 'sgd',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy']
  });
  model.summary()
  model.fit(tf.stack(tensorFeatures), tensorLabels)
  const im = await loadImage(`./2.png`)
  model.predict(im)
}
main()
</code></pre>
|
<p>The batch dimension is missing. It can be added by using <code>expandDims()</code>. Note that the promise returned by <code>loadImage</code> must be awaited before calling <code>expandDims()</code>:</p>
<pre><code>const im = (await loadImage(`./2.png`)).expandDims()
model.predict(im)
</code></pre>
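<p>Conceptually, the conv layer expects NHWC input, that is, a batch axis in front of each HWC image. The same reshaping can be illustrated with NumPy (an analogue for clarity, not tfjs itself):</p>

```python
import numpy as np

# a single HWC image, shape (475, 475, 3)
img = np.zeros((475, 475, 3), dtype=np.float32)

# prepend the batch axis the conv layer expects: (1, 475, 475, 3)
batched = np.expand_dims(img, axis=0)
```
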
|
javascript|node.js|conv-neural-network|tensor|tensorflow.js
| 1
|
374,247
| 70,515,094
|
pandas: how to add date column into groupby result
|
<p>I have a csv file that is user behavior data in a web page. here is the sample data:</p>
<pre><code>_time,dataCenter,customer,user,SID,ACT,
2021-11-25T13:45:42.139+0000,dc1,customer1,user1,sid1,open_page,
2021-11-25T13:45:50.139+0000,dc1,customer1,user1,sid1,create_form,
2021-11-25T13:46:51.139+0000,dc1,customer1,user1,sid1,save_form,
2021-11-25T13:50:50.139+0000,dc1,customer2,user2,sid2,open_page,
2021-11-25T13:51:20.139+0000,dc1,customer2,user2,sid2,open_form_detail,
2021-11-25T13:53:50.139+0000,dc1,customer2,user2,sid2,back_to_form_list,
2021-11-25T23:59:50.139+0000,dc3,customer3,user3,sid3,open_page,
2021-11-26T00:02:50.139+0000,dc3,customer3,user3,sid3,show_more,
......
......
......
</code></pre>
<p>I want to do below data transformation:</p>
<ol>
<li>group ACT by dataCenter,customer,user and SID</li>
<li>extract date from _time column and assign to groupby result.</li>
</ol>
<p>Here is the expected Result:</p>
<pre><code>date,dataCenter,customer,user,SID,ACT,
2021-11-25,dc1,customer1,user1,sid1,"open_page,create_form,save_form",
2021-11-25,dc1,customer2,user2,sid2,"open_page,open_form_detail,back_to_form_list"
2021-11-25,dc3,customer3,user3,sid3,"open_page,show_more"
......
......
......
</code></pre>
<p>what i have tried:</p>
<pre><code>df= df.groupby(['dataCenter','customer','user','sid'])['ACT'].apply(','.join)
</code></pre>
<p>but i am not sure how to add <code>date</code> as a column in <code>groupby</code> result.</p>
<p>Can you please help advice?</p>
<p>Thanks
Cherie</p>
|
<p>IIUC:</p>
<pre><code>df = df.groupby(['dataCenter', 'customer', 'user', 'SID']).agg(date = ('_time', 'first'),
ACT= ('ACT', ','.join)).reset_index()
df['date'] = pd.to_datetime(df['date']).dt.date
</code></pre>
<p><code>OUTPUT</code></p>
<pre><code> dataCenter customer user SID date ACT
0 dc1 customer1 user1 sid1 2021-11-25 open_page,create_form,save_form
1 dc1 customer2 user2 sid2 2021-11-25 open_page,open_form_detail,back_to_form_list
2 dc3 customer3 user3 sid3 2021-11-25 open_page,show_more
</code></pre>
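<p>For reference, a self-contained version of the same pipeline on a few of the question's sample rows:</p>

```python
import pandas as pd

# a few of the sample rows from the question
df = pd.DataFrame({
    "_time": ["2021-11-25T13:45:42.139+0000",
              "2021-11-25T13:45:50.139+0000",
              "2021-11-25T13:50:50.139+0000"],
    "dataCenter": ["dc1", "dc1", "dc1"],
    "customer": ["customer1", "customer1", "customer2"],
    "user": ["user1", "user1", "user2"],
    "SID": ["sid1", "sid1", "sid2"],
    "ACT": ["open_page", "create_form", "open_page"],
})

# named aggregation: take the first timestamp per group and join the actions
out = (df.groupby(["dataCenter", "customer", "user", "SID"])
         .agg(date=("_time", "first"), ACT=("ACT", ",".join))
         .reset_index())
out["date"] = pd.to_datetime(out["date"]).dt.date
```
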
|
pandas|dataframe|group-by
| 1
|
374,248
| 70,605,545
|
Position of legend in matplot with secondary y-axis (python)
|
<p>I am trying to create a plot in Python with matplotlib consisting of two plots, one of which has a secondary y-axis. The first is a scatter and the second a line plot.
Now, I want to move the legend somewhere else, but whenever I use <code>ax.legend()</code> only the labels of the first axis appear and those of the second vanish.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# the data:
data = pd.DataFrame(
{'Date': {0: '2021-01-01 18:00:00', 1: '2021-01-02 20:00:00', 2: '2021-01-03 19:00:00', 3: '2021-01-04 17:00:00', 4: '2021-01-05 18:00:00', 5: '2021-01-06 18:00:00', 6: '2021-01-07 21:00:00', 7: '2021-01-08 16:00:00'},
'Value A': {0: 107.6, 1: 86.0, 2: 17.3, 3: 56.8, 4: 30.0, 5: 78.3, 6: 110.1, 7: 59.0},
'Value B': {0: 17.7, 1: 14.1, 2: 77.4, 3: 4.9, 4: 44.4, 5: 28.3, 6: 22.9, 7: 83.2},
'Value C': {0: 0.0, 1: 0.5, 2: 0.3, 3: 0.6, 4: 0.9, 5: np.nan, 6: 0.1, 7: 0.8},
'Value D': {0: 0.5, 1: 0.7, 2: 0.1, 3: 0.5, 4: 0.4, 5: 0.8, 6: 0.8, 7: 0.8},
'Flag': {0: 1, 1: np.nan, 2: np.nan, 3: np.nan, 4: np.nan, 5: np.nan, 6: 1, 7: np.nan}})
data["Date"] = pd.to_datetime(data["Date"])
# plot the flagged points as dots
data_flag = data[data["Flag"] == True]
ax = data_flag.plot.scatter(x="Date",
y="Value A",
c="black",
label="Outlier")
# plot the series as lines
ax = data.plot(x="Date",
y=["Value A",
"Value B",
"Value C",
"Value D"
],
secondary_y=["Value C", "Value D"],
ax=ax
)
ax.legend(bbox_to_anchor=(1.1, 1.05))
plt.show()
</code></pre>
<p>The output looks like this with missing labels in the legend for the secondary axis</p>
<p><a href="https://i.stack.imgur.com/X3g4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X3g4l.png" alt="enter image description here" /></a></p>
<p>Without the <code>ax.legend()</code>, the labels for "Value C" and "Value D" are part of the legend.</p>
<p>What is the best way to move all labels at once?</p>
|
<p>This approach (using your provided data):</p>
<pre class="lang-py prettyprint-override"><code># plot the flagged points as dots
data_flag = data[data["Flag"] == True]
ax = data_flag.plot.scatter(x="Date",
y="Value A",
c="black",
label="Outlier")
# plot the series as lines
ax = data.plot(x="Date",
y=["Value A",
"Value B",
],
secondary_y=False,
ax=ax
)
ax2 = data.plot(x="Date",
y=["Value C",
"Value D"
],
secondary_y=True,
ax=ax
)
ax.legend(bbox_to_anchor=(1.1, 1.1), loc="upper left")
ax.set_ylabel("Value A, Value B")
ax2.legend(bbox_to_anchor=(1.1, 0.85), loc="upper left")
ax2.set_ylabel("Value C, Value D")
plt.show()
</code></pre>
<p>Three plots are generated, one for the flagged outlier, one for <code>Value A, Value B</code> and the last one with a secondary y-axis for the <code>Value C, Value D</code>. This looks like the following:</p>
<p><a href="https://i.stack.imgur.com/nIK0t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nIK0t.png" alt="enter image description here" /></a></p>
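<p>An alternative to two separate legends is to collect the line handles from both axes into a single <code>ax.legend</code> call. A minimal sketch, not using the question's data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax2 = ax.twinx()  # secondary y-axis

line_a, = ax.plot([0, 1], [1, 2], label="Value A")
line_c, = ax2.plot([0, 1], [0.1, 0.9], label="Value C")

# one legend fed with handles from both axes, placed outside the plot
handles = [line_a, line_c]
ax.legend(handles, [h.get_label() for h in handles],
          bbox_to_anchor=(1.1, 1.05), loc="upper left")
```

<p>This keeps a single legend box, so <code>bbox_to_anchor</code> only has to be set once.</p>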
|
python|pandas|matplotlib|plot
| 0
|
374,249
| 70,695,599
|
How to impute nan values in a Pandas dataframe from a multi-index dataframe?
|
<p>I have the following dataframe:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 2, 20, 4],
[3, 1, np.nan, 1],
[3, 1, 15, 1],
[np.nan, 1, np.nan, 1],
[10, 1, 30, 4],
[50, 2, 35, 4],
[10, 1, 37, 4],
[40, 2, 30, 1]],
columns=list("ABCD"))
</code></pre>
<p>I want to fill the Nan values with their group means.
Towards that purpose, I run the following:</p>
<pre><code>df_mean = df.groupby(["B","D"]).mean()
df_mean
        A     C
B D
1 1   3.0  15.0
  4  10.0  33.5
2 1  40.0  30.0
  4  50.0  27.5
</code></pre>
<p>Is there a way to fill the dataframe df with the values computed in df_mean?</p>
<p>One way to do this would be as in <a href="https://datascience.stackexchange.com/a/107028/51858">this answer</a></p>
<pre><code>df[["A", "C"]] = (
df
# create groups
.groupby(["B", "D"])
# transform the groups by filling na values with the group mean
.transform(lambda x: x.fillna(x.mean()))
)
</code></pre>
<p>However, for a few millions of rows, where the simple groupby([...]).mean() would take a few seconds, take too long...</p>
<p>It there a quicker way to solve this?</p>
|
<p>You can use <code>combine_first</code>:</p>
<pre><code>out = df.combine_first(df.groupby(['B', 'D']).transform('mean'))
print(out)
# Output
A B C D
0 50.0 2 20.0 4
1 3.0 1 15.0 1
2 3.0 1 15.0 1
3 3.0 1 15.0 1
4 10.0 1 30.0 4
5 50.0 2 35.0 4
6 10.0 1 37.0 4
7 40.0 2 30.0 1
</code></pre>
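<p>The behaviour can be checked end-to-end by rebuilding the question's frame: <code>transform('mean')</code> broadcasts each group mean back to row shape, and <code>combine_first</code> keeps the original values wherever they are not NaN:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 2, 20, 4],
                   [3, 1, np.nan, 1],
                   [3, 1, 15, 1],
                   [np.nan, 1, np.nan, 1],
                   [10, 1, 30, 4],
                   [50, 2, 35, 4],
                   [10, 1, 37, 4],
                   [40, 2, 30, 1]],
                  columns=list("ABCD"))

# group means broadcast to row shape; combine_first fills only the NaNs
out = df.combine_first(df.groupby(["B", "D"]).transform("mean"))
```
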
|
python|pandas
| 1
|
374,250
| 70,539,674
|
Train neural network model on multiple datasets
|
<p>What I have:</p>
<ol>
<li>A neural network model</li>
<li>10 identically structured datasets</li>
</ol>
<p>What I want:</p>
<ol>
<li>Train model on all the datasets separately</li>
<li>Save their models separately</li>
</ol>
<p>I can train the datasets separately and save the single models one at a time. But I want to load my 10 datasets and create 10 models with them in a single run. The solution may be obvious but I am fairly new to this. How do I achieve this?</p>
<p>Thanks in advance.</p>
|
<p>You can use one of the concepts of <code>concurrency and parallelism</code>, namely <a href="https://www.geeksforgeeks.org/multithreading-python-set-1/" rel="nofollow noreferrer"><code>Multi-Threading</code></a>, or in some cases, <a href="https://www.geeksforgeeks.org/multiprocessing-python-set-1/" rel="nofollow noreferrer"><code>Multi-Processing</code></a>, to achieve this.<br>
The easiest way to code this is with Python's <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow noreferrer"><code>concurrent.futures</code></a> module.<br></p>
<blockquote>
<p>You can submit the training call once per dataset to a ThreadPoolExecutor, which fires parallel threads performing the individual trainings.</p>
</blockquote>
<h2>Code can be somewhat like this:</h2>
<br>
<h5>Step 1: Necessary imports</h5>
<pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed
import tensorflow as tf
from tensorflow.keras.models import load_model, Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten
</code></pre>
<br>
<h5>Step 2: Creating and building model</h5>
<pre><code>def create_model(): # responsible for creating the model
    model = Sequential()
    model.add(Flatten()) # adding NN layers
    model.add(Dense(64))
    model.add(Activation('relu'))
    # ........ so on
    model.compile(optimizer='..', loss='..', metrics=[...]) # compiling the model
    return model # finally returning the model
</code></pre>
<br>
<h5>Step 3: Define fit function (performs model training)</h5>
<pre><code>def fit(model, XY_train): # performs model.fit(...parameters...)
    model.fit(XY_train[0], XY_train[1], epochs=5, validation_split=0.3) # use your already defined x_train, y_train
    return model # finally returns trained model
</code></pre>
<br>
<h5>Step 4: Parallel trainer method, fires simultaneous training with TPE context manager</h5>
<pre><code># trains provided model on each dataset parallelly by using multi-threading
def parallel_trainer(model, XY_train_datasets : list[tuple]):
    with ThreadPoolExecutor(max_workers = len(XY_train_datasets)) as executor:
        futureObjs = [
            executor.submit(fit, model, ds) for ds in XY_train_datasets # submit one training job per dataset
        ]
        for i, obj in enumerate(as_completed(futureObjs)): # iterate through trained models
            (obj.result()).save(f"{i}.model") # save models
</code></pre>
<br>
<h5>Step 5: Create model, load dataset, call parallel trainer</h5>
<pre><code>model = create_model() # create the model
mnist = tf.keras.datasets.mnist # get dataset - for example :- mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data() # get (x_train, y_train), (x_test, y_test)
datasets = [(x_train, y_train)]*10 # list of dataset paths (in your case, same dataset used 10 times)
parallel_trainer(model, datasets) # call parallel trainer
</code></pre>
<p><br><br></p>
<h3>Whole program</h3>
<pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed
import tensorflow as tf
from tensorflow.keras.models import load_model, Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten

def create_model(): # responsible for creating the model
    model = Sequential()
    model.add(Flatten()) # adding NN layers
    model.add(Dense(64))
    model.add(Activation('relu'))
    # ........ so on
    model.compile(optimizer='..', loss='..', metrics=[...]) # compiling the model
    return model # finally returning the model

def fit(model, XY_train): # performs model.fit(...parameters...)
    model.fit(XY_train[0], XY_train[1], epochs=5, validation_split=0.3) # use your already defined x_train, y_train
    return model # finally returns trained model

# trains provided model on each dataset parallelly by using multi-threading
def parallel_trainer(model, XY_train_datasets : list[tuple]):
    with ThreadPoolExecutor(max_workers = len(XY_train_datasets)) as executor:
        futureObjs = [
            executor.submit(fit, model, ds) for ds in XY_train_datasets # submit one training job per dataset
        ]
        for i, obj in enumerate(as_completed(futureObjs)): # iterate through trained models
            (obj.result()).save(f"{i}.model") # save models

model = create_model() # create the model
mnist = tf.keras.datasets.mnist # get dataset - for example :- mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data() # get (x_train, y_train), (x_test, y_test)
datasets = [(x_train, y_train)]*10 # list of dataset paths (in your case, same dataset used 10 times)
parallel_trainer(model, datasets) # call parallel trainer
</code></pre>
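<p>The submit-one-job-per-dataset pattern itself can be sketched framework-free; <code>train_one</code> here is a hypothetical stand-in for building, fitting, and saving a model:</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def train_one(name, data):
    # stand-in for create_model() + fit() + save(): just "learn" the mean
    return name, sum(data) / len(data)

datasets = {f"ds{i}": [i + 1, i + 2, i + 3] for i in range(3)}

with ThreadPoolExecutor(max_workers=len(datasets)) as executor:
    futures = [executor.submit(train_one, name, data)
               for name, data in datasets.items()]  # one job per dataset
    results = dict(f.result() for f in as_completed(futures))
```

<p>Each call to <code>submit</code> schedules one independent training job, and <code>as_completed</code> yields the futures in whatever order they finish.</p>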
|
python|tensorflow|keras|deep-learning|neural-network
| 2
|
374,251
| 70,424,990
|
Drop a row in a tensor if the sum of the elements is lower than some threshold
|
<p>How can I drop rows in a tensor if the sum of the elements in each row is lower than the threshold -1? For example:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

tensor = tf.random.normal((3, 3))
tf.Tensor(
[[ 0.506158 0.53865975 -0.40939444]
[ 0.4917719 -0.1575156 1.2308844 ]
[ 0.08580616 -1.1503975 -2.252681 ]], shape=(3, 3), dtype=float32)
</code></pre>
<p>Since the sum of the last row is smaller than -1, I need to remove it and get the tensor (2, 3):</p>
<pre><code>tf.Tensor(
[[ 0.506158 0.53865975 -0.40939444]
[ 0.4917719 -0.1575156 1.2308844 ]], shape=(2, 3), dtype=float32)
</code></pre>
<p>I know how to use tf.reduce_sum, but I do not know how to delete rows from a tensor. Something like <code>df.drop</code> would be nice.</p>
|
<p><code>tf.boolean_mask</code> is all you need.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

tensor = tf.constant([
[ 0.506158, 0.53865975, -0.40939444],
[ 0.4917719, -0.1575156, 1.2308844 ],
[ 0.08580616, -1.1503975, -2.252681 ],
])
mask = tf.reduce_sum(tensor, axis=1) > -1
# <tf.Tensor: shape=(3,), dtype=bool, numpy=array([ True, True, False])>
tf.boolean_mask(
tensor=tensor,
mask=mask,
axis=0
)
# <tf.Tensor: shape=(2, 3), dtype=float32, numpy=
# array([[ 0.506158 , 0.53865975, -0.40939444],
# [ 0.4917719 , -0.1575156 , 1.2308844 ]], dtype=float32)>
</code></pre>
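<p>For comparison, the same row filter in plain NumPy, where boolean indexing plays the role of <code>tf.boolean_mask</code> (a NumPy analogue, not TensorFlow):</p>

```python
import numpy as np

arr = np.array([[0.506158, 0.53865975, -0.40939444],
                [0.4917719, -0.1575156, 1.2308844],
                [0.08580616, -1.1503975, -2.252681]])

mask = arr.sum(axis=1) > -1   # row sums: keep rows above the threshold
filtered = arr[mask]          # boolean indexing drops the last row
```
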
|
python|tensorflow
| 0
|
374,252
| 70,726,330
|
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
|
<p>I get the following error message, which I tried to deal with by throwing <code>.to(self.device)</code> everywhere, but it doesn't work.</p>
<pre><code> ab = torch.lgamma(torch.tensor(a+b, dtype=torch.float, requires_grad=True).to(device=local_device))
Traceback (most recent call last):
File "Script.py", line 923, in <module>
average_epoch_loss, out , elbo2 =train(epoch)
File "Script.py", line 848, in train
loss_dict = net.get_ELBO(X)
File "Script.py", line 546, in get_ELBO
elbo -= compute_kumar2beta_kld(self.kumar_a[:, k].to(self.device), self.kumar_b[:, k].to(self.device), self.prior, (self.K-1-k)* self.prior).mean().to(self.device)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
</code></pre>
<p>Here is a snippet of my script which is related to the error:</p>
<pre><code>import torch

# beta_fn is defined elsewhere in the script
def compute_kumar2beta_kld(a, b, alpha, beta):
    SMALL = 1e-16
    EULER_GAMMA = 0.5772156649015329
    ab = torch.mul(a,b) + SMALL
    a_inv = torch.pow(a + SMALL, -1)
    b_inv = torch.pow(b + SMALL, -1)
    kl = torch.mul(torch.pow(1+ab,-1), beta_fn(a_inv, b))
    for idx in range(10):
        kl += torch.mul(torch.pow(idx+2+ab,-1), beta_fn(torch.mul(idx+2., a_inv), b))
    kl = torch.mul(torch.mul(beta-1,b), kl)
    psi_b = torch.digamma(b+SMALL)
    kl += torch.mul(torch.div(a-alpha,a+SMALL), -EULER_GAMMA - psi_b - b_inv)
    kl += torch.log(ab) + torch.log(beta_fn(alpha, beta) + SMALL)
    kl += torch.div(-(b-1),b +SMALL)
    return kl
class VAE(GMMVAE):
    def __init__(self, hyperParams, K, nchannel, base_channels, z_dim, w_dim, hidden_dim, device, img_width, batch_size, include_elbo2):
        global local_device
        local_device = device
        super(VAE, self).__init__(K, nchannel, base_channels, z_dim, w_dim, hidden_dim, device, img_width, batch_size)
        self.prior = hyperParams['prior']
        self.K = hyperParams['K']
        self.z_dim = hyperParams['latent_d']
        self.hidden_dim = hyperParams['hidden_d']

    def get_ELBO(self, X):
        elbo = torch.tensor(0, dtype=torch.float)
        if self.include_elbo2:
            for k in range(self.K-1):
                elbo -= compute_kumar2beta_kld(self.kumar_a[:, k], self.kumar_b[:, k], self.prior, (self.K-1-k)* self.prior).mean().to(self.device)
</code></pre>
<p>I appreciate it if someone can suggest how to fix the error.</p>
|
<p>I am not sure if this is the "only" problem, but one of the device-related problems is this:</p>
<p><code> elbo = torch.tensor(0, dtype=torch.float)</code> <- this will create the elbo tensor on CPU</p>
<p>and when you do, <code>elbo -= <some result></code>,</p>
<p>The result is on cuda (or <code>self.device</code>). This will clearly cause a problem. To fix this, just do</p>
<p><code>elbo = torch.tensor(0, dtype=torch.float, device=self.device)</code></p>
|
deep-learning|pytorch|gpu
| 1
|
374,253
| 70,404,137
|
cannot concatenate object of type '<class 'numpy.ndarray'>'; only Series and DataFrame objs are valid
|
<p>I am intending to visualise the data using a pairplot after using StandardScaler, but my code is producing the following error:</p>
<pre><code> raise TypeError(msg)
TypeError: cannot concatenate object of type '<class 'numpy.ndarray'>'; only Series and DataFrame objs are valid
</code></pre>
<p>Full code</p>
<pre><code>from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
iris = sns.load_dataset('iris')
X = iris.drop(columns='species')
y = iris['species']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
X_train=StandardScaler().fit_transform(X_train)
sns.pairplot(data=pd.concat([X_train, y_train], axis=1), hue=y_train.name)
plt.show()
</code></pre>
|
<p>After using <code>StandardScaler</code>, your X_train (which was a <code>pd.DataFrame</code> before) has become a <code>numpy.ndarray</code>, so you cannot concat <code>X_train</code> and <code>y_train</code>: <code>X_train</code> is now a NumPy array while <code>y_train</code> is still a pandas Series.</p>
<p>To use concat, both <code>X_train</code> and <code>y_train</code> have to be pandas objects, so convert <code>X_train</code> back to a DataFrame using this code.</p>
<pre><code>X_train = StandardScaler().fit_transform(X_train)
X_train = pd.DataFrame(X_train, columns=X.columns, index=y_train.index)  # reuse y_train's index so concat aligns the rows
sns.pairplot(data=pd.concat([X_train, y_train], axis=1), hue=y_train.name)
plt.show()
</code></pre>
<p>It will work.</p>
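<p>One caveat worth adding (an illustrative sketch, not part of the original code): <code>train_test_split</code> leaves <code>y_train</code> with a shuffled original index, so the rebuilt DataFrame should reuse that index; otherwise <code>pd.concat</code> aligns on mismatched labels and produces NaN rows:</p>

```python
import pandas as pd

# after train_test_split the labels keep their shuffled original index
y_train = pd.Series(["a", "b", "c"], index=[4, 7, 2], name="species")
X_scaled = [[0.1], [0.2], [0.3]]  # what StandardScaler returns (an array)

# default RangeIndex 0..2 does NOT match y_train's index -> NaNs appear
bad = pd.concat([pd.DataFrame(X_scaled, columns=["f"]), y_train], axis=1)

# reusing y_train's index keeps rows aligned
good = pd.concat(
    [pd.DataFrame(X_scaled, columns=["f"], index=y_train.index), y_train],
    axis=1)
```
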
|
python|pandas|seaborn
| 2
|
374,254
| 70,552,597
|
How to shape and train multicolumn input and multicolumn output (many to many) with RNN LSTM model in TensorFlow?
|
<p>I am facing a problem with training an LSTM model with multicolumn input output. My code is below:</p>
<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

time_step = 60
#Create a data structure with n-time steps
X = []
y = []
for i in range(time_step + 1, len(training_set_scaled)):
    X.append(training_set_scaled[i-time_step-1:i-1, 0:len(training_set.columns)]) #take all columns into the set
    y.append(training_set_scaled[i, 0:len(training_set.columns)]) #take all columns into the set
X_train_arr, y_train_arr = np.array(X), np.array(y)
print(X_train_arr.shape) #(2494, 60, 5)
print(y_train_arr.shape) #(2494, 5)
#Split data
X_train_splitted = X_train_arr[:split]
y_train_splitted = y_train_arr[:split]
X_test_splitted = X_train_arr[split:]
y_test_splitted = y_train_arr[split:]
#Initialize the RNN
model = Sequential()
#Add the LSTM layers and some dropout regularization
model.add(LSTM(units= 50, activation = 'relu', return_sequences = True, input_shape = (X_train_arr.shape[1], X_train_arr.shape[2]))) #time_step/columns
model.add(Dropout(0.2))
model.add(LSTM(units= 40, activation = 'relu', return_sequences = True))
model.add(Dropout(0.2))
model.add(LSTM(units= 80, activation = 'relu', return_sequences = True))
model.add(Dropout(0.2))
#Add the output layer.
model.add(Dense(units = 1))
#Compile the RNN
model.compile(optimizer='adam', loss = 'mean_squared_error')
#Fit to the training set
model.fit(X_train_splitted, y_train_splitted, epochs=3, batch_size=32)
</code></pre>
<p>The idea is to train the model on the 60 steps preceding <code>i</code>, with the 5-column row at <code>i</code> as the target:</p>
<pre><code>for i in range(time_step + 1, len(training_set_scaled)):
    X.append(training_set_scaled[i-time_step-1:i-1, 0:len(training_set.columns)]) #take all columns into the set
    y.append(training_set_scaled[i, 0:len(training_set.columns)]) #take all columns into the set
</code></pre>
<p>So my x-train (feed) and y-train (targets) are:</p>
<pre><code>X_train_arr, y_train_arr = np.array(X), np.array(y)
print(X_train_arr.shape) #(2494, 60, 5)
print(y_train_arr.shape) #(2494, 5)
</code></pre>
<p>Unfortunately, when fitting the model:</p>
<pre><code>model.fit(X_train_splitted, y_train_splitted, epochs=3, batch_size=32)
</code></pre>
<p>I am getting an error:</p>
<blockquote>
<p>Dimensions must be equal, but are 60 and 5 for '{{node
mean_squared_error/SquaredDifference}} =
SquaredDifference[T=DT_FLOAT](mean_squared_error/remove_squeezable_dimensions/Squeeze,
IteratorGetNext:1)' with input shapes: [?,60], [?,5].</p>
</blockquote>
<p>I understand that <code>X_train_arr</code> and <code>y_train_arr</code> need to be the same. BUT when testing with case below, everyting is fine:</p>
<pre><code>X_train_arr, y_train_arr = np.array(X), np.array(y)
print(X_train_arr.shape) #(2494, 60, 5)
print(y_train_arr.shape) #(2494, 1)
</code></pre>
<p>The idea of having <code>print(y_train_arr.shape) #(2494, 5)</code> is to be able to predict n steps into the future, where each prediction iteration generates an entire new row of data with 5 column values.</p>
|
<p>Alright, after completing <a href="https://stackabuse.com/solving-sequence-problems-with-lstm-in-keras-part-2/" rel="nofollow noreferrer">this tutorial</a> I understood what should be done. Below is the final code with comments:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Bidirectional, RepeatVector, TimeDistributed

#Variables
future_prediction = 30
time_step = 60 #learning step
split_percent = 0.80 #train/test data split percent (80%)
split = int(split_percent*len(training_set_scaled)) #split percent multiplying by data rows
#Create a data structure with n-time steps
X = []
y = []
for i in range(time_step + 1, len(training_set_scaled)):
    X.append(training_set_scaled[i-time_step-1:i-1, 0:len(training_set.columns)]) #take all columns into the set, including time_step length
    y.append(training_set_scaled[i, 0:len(training_set.columns)]) #take all columns into the set
X_train_arr, y_train_arr = np.array(X), np.array(y) #must be numpy array for TF inputs
print(X_train_arr.shape) #(2494, 60, 5) <-- train data, having now 2494 rows, with 60 time steps, each row has 5 features (MANY)
print(y_train_arr.shape) #(2494, 5) <-- target data, having now 2494 rows, with 1 time step, but 5 features (TO MANY)
#Split data
X_train_splitted = X_train_arr[:split] #(80%) model train input data
y_train_splitted = y_train_arr[:split] #80%) model train target data
X_test_splitted = X_train_arr[split:] #(20%) test prediction input data
y_test_splitted = y_train_arr[split:] #(20%) test prediction compare data
#Reshaping to rows/time_step/columns
X_train_splitted = np.reshape(X_train_splitted, (X_train_splitted.shape[0], X_train_splitted.shape[1], X_train_splitted.shape[2])) #(samples, time-steps, features), by default should be already
y_train_splitted = np.reshape(y_train_splitted, (y_train_splitted.shape[0], 1, y_train_splitted.shape[1])) #(samples, time-steps, features)
X_test_splitted = np.reshape(X_test_splitted, (X_test_splitted.shape[0], X_test_splitted.shape[1], X_test_splitted.shape[2])) #(samples, time-steps, features), by default should be already
y_test_splitted = np.reshape(y_test_splitted, (y_test_splitted.shape[0], 1, y_test_splitted.shape[1])) #(samples, time-steps, features)
print(X_train_arr.shape) #(2494, 60, 5)
print(y_train_arr.shape) #(2494, 1, 5)
print(X_test_splitted.shape) #(450, 60, 5)
print(y_test_splitted.shape) #(450, 1, 5)
#Initialize the RNN
model = Sequential()
#Add Bidirectional LSTM, has better performance than stacked LSTM
model = Sequential()
model.add(Bidirectional(LSTM(100, activation='relu', input_shape = (X_train_splitted.shape[1], X_train_splitted.shape[2])))) #input_shape will be (2494-size, 60-shape[1], 5-shape[2])
model.add(RepeatVector(5)) #for 5 column of features in output, in other cases used for time_step in output
model.add(Bidirectional(LSTM(100, activation='relu', return_sequences=True)))
model.add(TimeDistributed(Dense(1)))
#Compile the RNN
model.compile(optimizer='adam', loss = 'mean_squared_error')
#Fit to the training set
model.fit(X_train_splitted, y_train_splitted, epochs=3, batch_size=32, validation_split=0.2, verbose=1)
#Test results
y_pred = model.predict(X_test_splitted, verbose=1)
print(y_pred.shape) #(450, 5, 1) - need to be reshaped for (450, 1, 5)
#Reshaping data for inverse transforming
y_test_splitted = np.reshape(y_test_splitted, (y_test_splitted.shape[0], 5)) #reshaping for (450, 1, 5)
y_pred = np.reshape(y_pred, (y_pred.shape[0], 5)) #reshaping for (450, 1, 5)
#Reversing transform to get proper data values
y_test_splitted = scaler.inverse_transform(y_test_splitted)
y_pred = scaler.inverse_transform(y_pred)
#Plot data
plt.figure(figsize=(14,5))
plt.plot(y_test_splitted[-time_step:, 3], label = "Real values") #I am interested only with display of column index 3
plt.plot(y_pred[-time_step:, 3], label = 'Predicted values') # #I am interested only with display of column index 3
plt.title('Prediction test')
plt.xlabel('Time')
plt.ylabel('Column index 3')
plt.legend()
plt.show()
#todo: future prediction
</code></pre>
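<p>The many-to-many windowing itself is independent of Keras. An illustrative NumPy sketch with made-up dimensions (8 rows of 5 features, time_step of 3) shows the (samples, time_steps, features) / (samples, features) shaping used above:</p>

```python
import numpy as np

# 8 "rows" of 5 features each, standing in for training_set_scaled
data = np.arange(40, dtype=float).reshape(8, 5)
time_step = 3

# each sample is the time_step rows before index i; the target is row i
X = np.array([data[i - time_step:i] for i in range(time_step, len(data))])
y = np.array([data[i] for i in range(time_step, len(data))])
```
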
|
python|tensorflow|many-to-many|lstm|recurrent-neural-network
| 0
|
374,255
| 70,719,212
|
Barplot of two columns based on specific condition
|
<p>I was given a task where I'm supposed to plot a element based on another column element.</p>
<p>For further information here's the code:</p>
<pre><code># TODO: Plot the Male employee first name on 'Y' axis while Male salary is on 'X' axis
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_excel("C:\\users\\HP\\Documents\\Datascience task\\Employee.xlsx")
print(data.head(5))
</code></pre>
<p>Output:</p>
<pre><code> First Name Last Name Gender Age Experience (Years) Salary
0 Arnold Carter Male 21 10 8344
1 Arthur Farrell Male 20 7 6437
2 Richard Perry Male 28 3 8338
3 Ellia Thomas Female 26 4 8870
4 Jacob Kelly Male 21 4 548
</code></pre>
<p>How do I plot the 'First Name' column vs the 'Salary' column for the first 5 rows where the 'Gender' is Male?</p>
|
<p>First, generate the male rows separately and extract first name and salary for plotting.</p>
<p>The code below identifies the first five male employees and extracts their first names and salaries as the x and y lists.</p>
<pre><code>x = list(df[df['Gender'] == "Male"][:5]['First Name'])
y = list(df[df['Gender'] == "Male"][:5]['Salary'])
print(x)
print(y)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>['Arnold', 'Arthur', 'Richard', 'Jacob']
[8344, 6437, 8338, 548]
</code></pre>
<p>Note that there are only 4 males available in the df.</p>
<p>Then we can plot any chart as we require;</p>
<pre><code>plt.bar(x, y, color = ['r', 'g', 'b', 'y']);
</code></pre>
<p><strong>Output:</strong>
<a href="https://i.stack.imgur.com/g7OcN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g7OcN.png" alt="enter image description here" /></a></p>
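<p>The selection step is runnable end-to-end with the question's sample rows (using the 'First Name' column name from the question's output):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "First Name": ["Arnold", "Arthur", "Richard", "Ellia", "Jacob"],
    "Gender": ["Male", "Male", "Male", "Female", "Male"],
    "Salary": [8344, 6437, 8338, 8870, 548],
})

# first five male rows, then the two columns to plot
males = df[df["Gender"] == "Male"].head(5)
x = males["First Name"].tolist()
y = males["Salary"].tolist()
```
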
|
python|pandas|matplotlib
| 1
|
374,256
| 70,406,840
|
Create columns in python data frame based on existing column-name and column-values
|
<p>I have a dataframe in pandas:</p>
<pre><code>import pandas as pd
# assign data of lists.
data = {'Gender': ['M', 'F', 'M', 'F','M', 'F','M', 'F','M', 'F','M', 'F'],
'Employment': ['R','U', 'E','R','U', 'E','R','U', 'E','R','U', 'E'],
'Age': ['Y','M', 'O','Y','M', 'O','Y','M', 'O','Y','M', 'O']
}
# Create DataFrame
df = pd.DataFrame(data)
df
</code></pre>
<p>What I want is to create for each category of each existing column a new column with the following format:</p>
<pre><code>Gender_M -> for when the gender equals M
Gender_F -> for when the gender equal F
Employment_R -> for when employment equals R
Employment_U -> for when employment equals U
and so on...
</code></pre>
<p>So far, I have created the below code:</p>
<pre><code>for i in range(len(df.columns)):
    curent_column=list(df.columns)[i]
    col_df_array = df[curent_column].unique()
    for j in range(col_df_array.size):
        new_col_name = str(list(df.columns)[i])+"_"+col_df_array[j]
        for index,row in df.iterrows():
            if(row[curent_column] == col_df_array[j]):
                df[new_col_name] = row[curent_column]
</code></pre>
<p>The problem is that even though I have managed to successfully create the column names, I am not able to get the correct column values.</p>
<p>For example the column Gender should be as below:</p>
<pre><code>data2 = {'Gender': ['M', 'F', 'M', 'F','M', 'F','M', 'F','M', 'F','M', 'F'],
'Gender_M': ['M', 'na', 'M', 'na','M', 'na','M', 'na','M', 'na','M', 'na'],
'Gender_F': ['na', 'F', 'na', 'F','na', 'F','na', 'F','na', 'F','na', 'F']
}
df2 = pd.DataFrame(data2)
</code></pre>
<p>Just to say, the <code>na</code> can be anything such as blanks or dots or NAN.</p>
|
<p>You're looking for <a href="https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>pd.get_dummies</code></a>.</p>
<pre><code>>>> pd.get_dummies(df)
Gender_F Gender_M Employment_E Employment_R Employment_U Age_M Age_O Age_Y
0 0 1 0 1 0 0 0 1
1 1 0 0 0 1 1 0 0
2 0 1 1 0 0 0 1 0
3 1 0 0 1 0 0 0 1
4 0 1 0 0 1 1 0 0
5 1 0 1 0 0 0 1 0
6 0 1 0 1 0 0 0 1
7 1 0 0 0 1 1 0 0
8 0 1 1 0 0 0 1 0
9 1 0 0 1 0 0 0 1
10 0 1 0 0 1 1 0 0
11 1 0 1 0 0 0 1 0
</code></pre>
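<p>If you specifically want the original labels with <code>NaN</code> (rather than 0/1 indicators), as in the question's <code>df2</code>, a small sketch using <code>Series.where</code> could look like this (column and value names taken from the question):</p>

```python
import pandas as pd

data = {'Gender': ['M', 'F', 'M', 'F'],
        'Employment': ['R', 'U', 'E', 'R']}
df = pd.DataFrame(data)

out = df.copy()
for col in df.columns:
    for val in df[col].unique():
        # keep the label where it matches, NaN elsewhere
        out[f"{col}_{val}"] = df[col].where(df[col] == val)

print(out[['Gender', 'Gender_M', 'Gender_F']])
```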
|
pandas|dataframe|iteration
| 2
|
374,257
| 70,557,782
|
Long initialization time for model.fit when using tensorflow dataset from generator
|
<p><em>This is my first question on stack overflow. I apologise in advance for the poor formatting and indentation due to my troubles with the interface.</em></p>
<p><strong>Environment specifications:</strong></p>
<p>Tensorflow version - 2.7.0 GPU (tested and working properly)</p>
<p>Python version - 3.9.6</p>
<p>CPU - Intel Core i7 7700HQ</p>
<p>GPU - NVIDIA GTX 1060 3GB</p>
<p>RAM - 16GB DDR4 2400MHz</p>
<p>HDD - 1TB 5400 RPM</p>
<p><strong>Problem Statement:</strong></p>
<p>I wish to train a TensorFlow 2.7.0 model to perform multilabel classification with six classes on CT scans stored as DICOM images. The dataset is from Kaggle, link <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/data" rel="nofollow noreferrer">here</a>. The training labels are stored in a CSV file, and the DICOM image names are of the format ID_"random characters".dcm. The images have a combined size of 368 GB.</p>
<p><strong>Approach used:</strong></p>
<ol>
<li><p>The CSV file containing the labels is imported into a pandas
DataFrame and the image filenames are set as the index.</p>
</li>
<li><p>A simple data generator is created to read the DICOM image and the
labels by iterating on the rows of the DataFrame. This generator is used to create a
training dataset using tf.data.Dataset.from_generator. The images are
pre-processed using bsb_window().</p>
</li>
<li><p>The training dataset is shuffled and split into a training(90%) and
validation set(10%)</p>
</li>
<li><p>The model is created using Keras Sequential, compiled, and fit using the training and validation datasets created earlier.</p>
</li>
</ol>
<p><strong>code:</strong></p>
<pre><code>def train_generator():
for row in df.itertuples():
image = pydicom.dcmread(train_images_dir + row.Index + ".dcm")
try:
image = bsb_window(image)
except:
image = np.zeros((256,256,3))
labels = row[1:]
yield image, labels
train_images = tf.data.Dataset.from_generator(train_generator,
output_signature =
(
tf.TensorSpec(shape = (256,256,3)),
tf.TensorSpec(shape = (6,))
)
)
train_images = train_images.batch(4)
TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
def create_model():
model = Sequential([
InceptionV3(include_top = False, input_shape = (256,256,3), weights = "imagenet"),
GlobalAveragePooling2D(name = "avg_pool"),
Dense(6, activation = "sigmoid", name = "dense_output"),
])
model.compile(loss = "binary_crossentropy",
optimizer = tf.keras.optimizers.Adam(5e-4),
metrics = ["accuracy", tf.keras.metrics.SpecificityAtSensitivity(0.8)]
)
return model
model = create_model()
history = model.fit(train_images,
batch_size=4,
epochs=5,
verbose=1,
validation_data=val_images
)
</code></pre>
<p><strong>Issue:</strong></p>
<p>When executing this code, there is a delay of a few hours of high disk usage (~30MB/s reads) before training begins. When a DataGenerator is made using tf.keras.utils.Sequence, training commences within seconds of calling model.fit().</p>
<p><strong>Potential causes:</strong></p>
<ol>
<li>Iterating over a pandas DataFrame in train_generator(). I am not sure how to avoid this issue.</li>
<li>The use of external functions to pre-process and load the data.</li>
<li>The usage of the take() and skip() methods to create training and validation datasets.</li>
</ol>
<p><strong>How do I optimise this code to run faster?</strong> I've heard splitting the data generator into label creation, image pre-processing functions and parallelising operations would improve performance. Still, I'm not sure how to apply those concepts in my case. Any advice would be highly appreciated.</p>
|
<p>I FOUND THE ANSWER</p>
<p>The problem was in the following code:</p>
<pre><code>TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
</code></pre>
<p>It takes an inordinate amount of time to split the dataset into training and validation datasets after loading the images. This step should be done early in the process, before loading any images. Hence, I split the image path loading and actual image loading, then parallelized the functions using the recommendations given <a href="https://www.tensorflow.org/guide/data_performance" rel="nofollow noreferrer">here</a>. The final optimized code is as follows</p>
<pre><code>def train_generator():
for row in df.itertuples():
image_path = f"{train_images_dir}{row.Index}.dcm"
labels = np.reshape(row[1:], (1,6))
yield image_path, labels
def test_generator():
for row in test_df.itertuples():
image_path = f"{test_images_dir}{row.Index}.dcm"
labels = np.reshape(row[1:], (1,6))
yield image_path, labels
def image_loading(image_path):
image_path = tf.compat.as_str_any(tf.strings.reduce_join(image_path).numpy())
dcm = pydicom.dcmread(image_path)
try:
image = bsb_window(dcm)
except:
image = np.zeros((256,256,3))
return image
def wrap_img_load(image_path):
return tf.numpy_function(image_loading, [image_path], [tf.double])
def set_shape(image, labels):
image = tf.reshape(image,[256,256,3])
labels = tf.reshape(labels,[1,6])
labels = tf.squeeze(labels)
return image, labels
train_images = tf.data.Dataset.from_generator(train_generator, output_signature = (tf.TensorSpec(shape=(), dtype=tf.string), tf.TensorSpec(shape=(None,6)))).prefetch(tf.data.AUTOTUNE)
test_images = tf.data.Dataset.from_generator(test_generator, output_signature = (tf.TensorSpec(shape=(), dtype=tf.string), tf.TensorSpec(shape=(None,6)))).prefetch(tf.data.AUTOTUNE)
TRAIN_NUM_FILES = 752803
train_images = train_images.shuffle(40)
val_size = int(TRAIN_NUM_FILES * 0.1)
val_images = train_images.take(val_size)
train_images = train_images.skip(val_size)
train_images = train_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
test_images = test_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
val_images = val_images.map(lambda image_path, labels: (wrap_img_load(image_path),labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
train_images = train_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
test_images = test_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
val_images = val_images.map(lambda image, labels: set_shape(image,labels), num_parallel_calls = tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
train_images = train_images.batch(4).prefetch(tf.data.AUTOTUNE)
test_images = test_images.batch(4).prefetch(tf.data.AUTOTUNE)
val_images = val_images.batch(4).prefetch(tf.data.AUTOTUNE)
def create_model():
model = Sequential([
InceptionV3(include_top = False, input_shape = (256,256,3), weights='imagenet'),
GlobalAveragePooling2D(name='avg_pool'),
Dense(6, activation="sigmoid", name='dense_output'),
])
model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(5e-4), metrics=["accuracy"])
return model
model = create_model()
history = model.fit(train_images,
epochs=5,
verbose=1,
callbacks=[checkpointer, scheduler],
validation_data=val_images
)
</code></pre>
<p>The CPU, GPU, and HDD are utilized very efficiently, and the training time is much faster than with a <code>tf.keras.utils.Sequence</code> data generator.</p>
|
python|pandas|tensorflow|keras
| 1
|
374,258
| 70,631,736
|
how to compare two csv file in python and flag the difference?
|
<p>I am new to Python; kindly help me.
Here I have two sets of CSV files. I need to compare them and output the differences, i.e. changed data / deleted data / added data. Here's my example:</p>
<pre><code>file 1:
Sn Name Subject Marks
1 Ram Maths 85
2 sita Engilsh 66
3 vishnu science 50
4 balaji social 60
file 2:
Sn Name Subject Marks
1 Ram computer 85 #subject name have changed
2 sita Engilsh 66
3 vishnu science 90 #marks have changed
4 balaji social 60
5 kishor chem 99 #added new line
Output - i need to get like this :
Changed Items:
1 Ram computer 85
3 vishnu science 90
Added item:
5 kishor chem 99
Deleted item:
.................
</code></pre>
<p>I imported csv and did the comparison via a for loop with <code>readlines</code>. I am not getting the desired output. <strong>What confuses me most is flagging the added and deleted items between file 1 and file 2 (the CSV files).</strong> Please suggest an effective approach.</p>
|
<p>The idea here is to flatten your dataframe with <code>melt</code> to compare each value:</p>
<pre><code>import numpy as np
import pandas as pd

# Load your csv files
df1 = pd.read_csv('file1.csv', ...)
df2 = pd.read_csv('file2.csv', ...)

# Select columns (not mandatory, it depends on your 'Sn' column)
cols = ['Name', 'Subject', 'Marks']

# Flat your dataframes
out1 = df1[cols].melt('Name', var_name='Item', value_name='Old')
out2 = df2[cols].melt('Name', var_name='Item', value_name='New')
out = pd.merge(out1, out2, on=['Name', 'Item'], how='outer')

# Flag the state of each item. The isna checks must come first:
# NaN != anything is True, so testing Old != New first would flag
# added/deleted rows as 'changed'.
condlist = [out['Old'].isna(),
            out['New'].isna(),
            out['Old'] != out['New']]
out['State'] = np.select(condlist, choicelist=['added', 'deleted', 'changed'],
                         default='unchanged')
</code></pre>
<p>Output:</p>
<pre><code>>>> out
     Name     Item      Old       New      State
0     Ram  Subject    Maths  computer    changed
1    sita  Subject  Engilsh   Engilsh  unchanged
2  vishnu  Subject  science   science  unchanged
3  balaji  Subject   social    social  unchanged
4     Ram    Marks       85        85  unchanged
5    sita    Marks       66        66  unchanged
6  vishnu    Marks       50        90    changed
7  balaji    Marks       60        60  unchanged
8  kishor  Subject      NaN      chem      added
9  kishor    Marks      NaN        99      added
</code></pre>
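<p>As a self-contained check on the question's data (note that the <code>isna</code> tests are placed before the inequality test, since <code>NaN != value</code> is <code>True</code> and would otherwise flag added rows as changed):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Name': ['Ram', 'sita', 'vishnu', 'balaji'],
                    'Subject': ['Maths', 'Engilsh', 'science', 'social'],
                    'Marks': [85, 66, 50, 60]})
df2 = pd.DataFrame({'Name': ['Ram', 'sita', 'vishnu', 'balaji', 'kishor'],
                    'Subject': ['computer', 'Engilsh', 'science', 'social', 'chem'],
                    'Marks': [85, 66, 90, 60, 99]})

out1 = df1.melt('Name', var_name='Item', value_name='Old')
out2 = df2.melt('Name', var_name='Item', value_name='New')
out = out1.merge(out2, on=['Name', 'Item'], how='outer')

cond = [out['Old'].isna(), out['New'].isna(), out['Old'] != out['New']]
out['State'] = np.select(cond, ['added', 'deleted', 'changed'],
                         default='unchanged')

print(out[out['State'] != 'unchanged'])
```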
|
python|python-3.x|pandas|csv|export-to-csv
| 0
|
374,259
| 70,502,488
|
How to get individual cell in table using Pandas?
|
<p>I have a table:</p>
<pre><code> -60 -40 -20 0 20 40 60
100 520 440 380 320 280 240 210
110 600 500 430 370 320 280 250
120 670 570 490 420 370 330 290
130 740 630 550 480 420 370 330
140 810 690 600 530 470 410 370
</code></pre>
<p>The headers along the top are a wind vector and the first col on the left is a distance. The actual data in the 'body' of the table is just a fuel additive.</p>
<p>I am very new to Pandas and Numpy so please excuse the simplicity of the question. What I would like to know is, how can I enter the table using the headers to retrieve one number? I have seen its possible using indexes, but I don't want to use that method if I don't have to.</p>
<p>for example:
I have a wind unit of <code>-60</code> and a distance of <code>120</code> so I need to retrieve the number <code>670</code>. How can I use Numpy or Pandas to do this?</p>
<p>Also, if I have a wind unit of say <code>-50</code> and a distance of <code>125</code>, is it then possible to interpolate these in a simple way?</p>
<p>EDIT:</p>
<p>Here is what I've tried so far:</p>
<pre><code>import pandas as pd
df = pd.read_table('fuel_adjustment.txt', delim_whitespace=True, header=0,index_col=0)
print(df.loc[120, -60])
</code></pre>
<p>But get the error:</p>
<pre><code>line 3083, in get_loc raise KeyError(key) from err
KeyError: -60
</code></pre>
|
<p>You can select any cell from existing indices using:</p>
<pre><code>df.loc[120,-60]
</code></pre>
<p>The type of the indices needs however to be integer. If not, you can fix it using:</p>
<pre><code>df.index = df.index.map(int)
df.columns = df.columns.map(int)
</code></pre>
<p>For interpolation, you need to add the empty new rows/columns using <code>reindex</code>, then apply <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html" rel="nofollow noreferrer"><code>interpolate</code></a> on each dimension.</p>
<pre><code>(df.reindex(index=sorted(df.index.to_list() + [125]),
            columns=sorted(df.columns.to_list() + [-50]))
.interpolate(axis=1, method='index')
.interpolate(method='index')
)
</code></pre>
<p>Output:</p>
<pre><code> -60 -50 -40 -20 0 20 40 60
100 520.0 480.0 440.0 380.0 320.0 280.0 240.0 210.0
110 600.0 550.0 500.0 430.0 370.0 320.0 280.0 250.0
120 670.0 620.0 570.0 490.0 420.0 370.0 330.0 290.0
125 705.0 652.5 600.0 520.0 450.0 395.0 350.0 310.0
130 740.0 685.0 630.0 550.0 480.0 420.0 370.0 330.0
140 810.0 750.0 690.0 600.0 530.0 470.0 410.0 370.0
</code></pre>
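<p>Putting it together as a runnable sketch (rebuilding the table inline), the interpolated query then becomes a plain <code>.loc</code> lookup:</p>

```python
import pandas as pd

df = pd.DataFrame(
    [[520, 440, 380, 320, 280, 240, 210],
     [600, 500, 430, 370, 320, 280, 250],
     [670, 570, 490, 420, 370, 330, 290],
     [740, 630, 550, 480, 420, 370, 330],
     [810, 690, 600, 530, 470, 410, 370]],
    index=[100, 110, 120, 130, 140],          # distance
    columns=[-60, -40, -20, 0, 20, 40, 60])   # wind

dfi = (df.reindex(index=sorted(df.index.to_list() + [125]),
                  columns=sorted(df.columns.to_list() + [-50]))
         .interpolate(axis=1, method='index')
         .interpolate(method='index'))

print(dfi.loc[120, -60])   # 670.0 (exact cell)
print(dfi.loc[125, -50])   # 652.5 (interpolated)
```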
|
python|pandas|dataframe|numpy
| 3
|
374,260
| 70,606,281
|
Can't get summary or weights from loaded keras model
|
<p>I saved a keras model using model.save(model_path). Now when I try to load it and apply model.summary() or model.get_weights() function I am getting following error:</p>
<pre><code>AttributeError: '_UserObject' object has no attribute 'summary'
</code></pre>
<p>Tried printing the data type of the model and got following whereas it was a sequential keras model when I had saved it:</p>
<pre><code><class 'tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject'>
</code></pre>
<p>I am using tensorflow 2.4.1(cpu). Below sample code can help recreating the error:</p>
<pre><code>def save_model():
model = tf.keras.Sequential()
model_path = "https://tfhub.dev/google/universal-sentence-encoder/4"
hub_layer = hub.KerasLayer(model_path, input_shape=[], dtype=tf.string, trainable=False)
model.add(hub_layer)
model.save('/home/pcadmin/data/models/sentence_embedding/use-4-pre-trained/')
model = tf.keras.models.load_model(model_path)
print(model.summary())
model.get_weights()
</code></pre>
|
<p>The reproducible code above isn't complete: note that it saves the model to one path but then calls <code>load_model</code> with the TF Hub URL. You need to change the code as follows to make it run. (I've tested on CPU/GPU with <code>tf 2.4/2.7</code>.)</p>
<pre><code>model_path = "https://tfhub.dev/google/universal-sentence-encoder/4"
def save_model(model_path):
model = tf.keras.Sequential()
hub_layer = hub.KerasLayer(model_path,
input_shape=[],
dtype=tf.string, trainable=False)
model.add(hub_layer)
model.save('saved/')
save_model(model_path)
model = tf.keras.models.load_model('saved/')
model.summary() # OK
model.get_weights() # OK
</code></pre>
|
tensorflow|keras|tensorflow2.0|tf.keras
| 1
|
374,261
| 70,657,069
|
How to plot a list of Points and LINESTRING?
|
<p>Hello all, is there a way to plot a list of LINESTRINGs and a list of Points?</p>
<p>for example I have</p>
<pre><code>line_string = [LINESTRING (-1.15.12 9.9, -1.15.13 9.93), LINESTRING (-2.15.12 8.9, -2.15.13 8.93)]
point = [POINT (5.41 3.9), POINT (6.41 2.9)]
</code></pre>
<p>My goal is to have a map or graph that shows me where the points connect with the LINESTRINGs.</p>
<p>Thank you in advance</p>
<p><strong>EDIT</strong></p>
<p>Thank you all for your answers. Sadly, when I plot it, it looks like the image below. I think the issue is that some LINESTRINGs have 4 points (LINESTRING (-1.15.12 9.9, -1.15.13 9.93, -5.15.13 5.53, -3.15.13 2.23)) and some have 3 points. Is there a way to plot these better?</p>
<p><a href="https://i.stack.imgur.com/tsHio.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsHio.png" alt="enter image description here" /></a></p>
|
<p>You can access matplotlib easily using geopandas scripting layer.</p>
<pre><code>from shapely.geometry import LineString, Point
import geopandas as gpd
line_strings = [LineString([(-1.15, 0.12), (9.9, -1.15), (0.13, 9.93)]),
LineString([(-2.15, 0.12), (8.9, -2.15), (0.13 , 8.93)])]
points = [Point(5.41, 3.9), Point (6.41, 2.9)]
geom = line_strings + points
gdf = gpd.GeoDataFrame(geometry=geom)
gdf.plot()
</code></pre>
<p>Edit, based upon your comment. You can make an interactive plot with Bokeh if you want to zoom in on certain areas.</p>
<pre><code>from bokeh.plotting import figure, show
p = figure(title="interactive plot example", x_axis_label='x', y_axis_label='y')
for ls in line_strings:
x, y = ls.coords.xy
p.line(x, y, legend_label="lines", color="blue", line_width=2)
for point in points:
p.circle(point.x, point.y, legend_label="points", size=5, color="red", alpha=0.5)
show(p)
</code></pre>
<p><a href="https://i.stack.imgur.com/Zn2yF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zn2yF.jpg" alt="matplotlib sample" /></a></p>
<p><a href="https://i.stack.imgur.com/AgWKU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AgWKU.jpg" alt="bokeh sample" /></a></p>
|
python|geometry|geopandas|geo|shapely
| 4
|
374,262
| 70,716,860
|
How to analyze external data using command line arguments
|
<p>I started to write a program that analyzes external data. This is done with the help of command line arguments. However, I can't execute the program. Is it wrong, or am I on the right track?</p>
<pre class="lang-py prettyprint-override"><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("statistic", choices=["avg", "max"], help="Which statistic should be run?")
parser.add_argument("variable", choices=["distance", "delay"], help="What variable should be used for the calculation?")
parser.add_argument("tsvfile", help="Name of data file to be analyzed")
import pandas as pd
df = pd.read_csv("flights.tsv", sep="\t")
args = parser.parse_args()
s = args.statistic
v = args.variable
t = args.tsvfile
if s == "avg" and v == "distance" and t == "flights.tsv":
print(df["DISTANCE"].mean())
elif s == "avg" and v == "delay" and t == "flights.tsv":
print(df["DEPATURE_DELAY"].mean())
elif s == "max" and v == "delay":
print(df["DEPATURE_DELAY"].max())
elif s == "max" and v == "distance" and t == "flights.tsv":
print(df["DISTANCE"].max())
</code></pre>
<p><a href="https://i.stack.imgur.com/qQq8n.png" rel="nofollow noreferrer">This is the exception I got</a></p>
<p>I would really love some help</p>
|
<p>If you were running the script from the command line, you would need to add arguments (e.g., <code>python3 tom_script.py avg delay flights.tsv</code>):</p>
<pre><code>$ python3 tom_script.py # Incorrect
usage: tom_script.py [-h] {avg,max} {distance,delay} tsvfile
tom_script.py: error: the following arguments are required: statistic, variable, tsvfile
$ python3 tom_script.py avg delay flights.tsv # Correct
</code></pre>
<p>However, to run the code in Jupyter Notebook, you have to provide values for the arguments within a cell (e.g., <code>args = parser.parse_args(args=['avg', 'delay', 'flights.tsv', ])</code>):</p>
<blockquote>
<p><strong>NOTE</strong> - Tested using Ubuntu 20.04, Python 3.8, IPython 7.13, Firefox 95.0, and Jupyter 6.0</p>
</blockquote>
<pre><code>in [1]: import argparse
in [2]: parser = argparse.ArgumentParser()
in [3]: parser.add_argument("statistic", choices=["avg", "max", ], help="Which statistic should be run?")
parser.add_argument("variable", choices=["distance", "delay", ],
help="What variable should be used for the calculation?")
parser.add_argument("tsvfile", help="Name of data file to be analyzed")
out[3]: _StoreAction(option_strings=[], dest='tsvfile', nargs=None, const=None, default=None, type=None, choices=None, help='Name of data file to be analyzed', metavar=None)
in [4]: # This does not work
args = parser.parse_args()
usage: ipykernel_launcher.py [-h] {avg,max} {distance,delay} tsvfile
ipykernel_launcher.py: error: argument statistic: invalid choice: '/home/stack/.local/share/jupyter/runtime/kernel-66ce9d80-ab79-4f2f-9836-c98cdbcd20c5.json' (choose from 'avg', 'max')
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
...
in [5]: # This does not work either (this also caused your exception)
args = parser.parse_args(args=[])
usage: ipykernel_launcher.py [-h] {avg,max} {distance,delay} tsvfile
ipykernel_launcher.py: error: the following arguments are required: statistic, variable, tsvfile
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
in [6]: # This works
args = parser.parse_args(args=['avg', 'delay', 'flights.tsv', ])
in [7]: s = args.statistic
v = args.variable
t = args.tsvfile
in [8]: print(s, v, t)
avg delay flights.tsv
</code></pre>
|
python|pandas|command-line-arguments
| 0
|
374,263
| 70,390,466
|
Calculating correlation between points where each points has a timeseries
|
<p>I could use some advice how to make a faster code to my problem. I'm looking into how to calculate the correlation between points in space (X,Y,Z) where for each point I have velocity data over time and ideally I would like for each point P1 to calculate the velocity correlation with all other points.</p>
<p>In the end I would like to have a matrix where for each pair of coordinates (X1,Y1,Z1), (X2,Y2,Z2) I get the Pearson correlation coefficient. I'm not entirely sure how to organize this best in Python. What I have done so far is define lines of points in different directions, and for each line I calculate the correlation between points. This works for the analysis, but I end up doing loops that take a very long time to execute, and I think it would be nicer to just calculate the correlation between all points. Right now I'm using a pandas DataFrame and <code>stats.pearsonr(point_X_time.Vx, point_Y_time.Vx)</code> to do the correlation, which works, but I don't know how to parallelize it efficiently.</p>
<p>I have all the data now in a DataFrame where the head looks like:</p>
<pre>
Velocity X Y Z Time
0 -12.125850 2.036 0 1.172 10.42
1 -12.516033 2.036 0 1.164 10.42
2 -11.816067 2.028 0 1.172 10.42
3 -10.722124 2.020 0 1.180 10.42
4 -10.628474 2.012 0 1.188 10.42
</pre>
<p>and the number of rows is ~300 000 rows but could easily be increased if the code would be faster.</p>
|
<p><strong>Solution 1:</strong></p>
<pre><code>groups = df.groupby(["X", "Y", "Z"])
</code></pre>
<p>You group the data by the points in space.</p>
<p>Then you iterate through all the combinations of points and calculate the correlation:</p>
<pre><code>import itertools
import numpy as np
for combinations in itertools.combinations(groups.groups.keys(),2):
first = groups.get_group(combinations[0])["Velocity"]
second = groups.get_group(combinations[1])["Velocity"]
if len(first) == len(second):
print(f"{combinations} {np.corrcoef(first, second)[0,1]:.2f}")
</code></pre>
<p><strong>Solution 2:</strong></p>
<pre><code>df["cc"] = df.groupby(["X", "Y", "Z"]).cumcount()
df = df.set_index(["cc", "X", "Y", "Z"])   # assign back: set_index is not in-place
df.unstack(level=[1, 2, 3])["Velocity"].corr()
</code></pre>
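<p>A minimal runnable sketch of the second approach on toy data: two points with five velocity samples each, the second series an exact multiple of the first, so the off-diagonal correlation is 1.0:</p>

```python
import pandas as pd

# toy data: two spatial points, five velocity samples each
df = pd.DataFrame({
    'Velocity': [1.0, 2.0, 3.0, 4.0, 5.0, 2.0, 4.0, 6.0, 8.0, 10.0],
    'X': [0.0] * 5 + [1.0] * 5,
    'Y': 0.0,
    'Z': 0.0,
})

# number the samples within each point, then pivot to one column per point
df['cc'] = df.groupby(['X', 'Y', 'Z']).cumcount()
wide = df.set_index(['cc', 'X', 'Y', 'Z'])['Velocity'].unstack(level=[1, 2, 3])

corr = wide.corr()                 # point-to-point correlation matrix
print(round(corr.iloc[0, 1], 6))   # 1.0
```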
|
python|pandas|numpy|statsmodels
| 0
|
374,264
| 70,661,024
|
using warnings.filterwarnings() to convert a warning into an exception
|
<p>I want to programatically capture when statsmodels.api.OLS raises its "The smallest eigenvalue is ..." warning</p>
<p>This would enable me to filter a large number of OLS systems by whether or not they raise this warning</p>
<p>Ideally, I would like to pick off just particular warnings instead of a blanket filter for any/all warnings</p>
<p>My attempt (below) attempts a blanket filter using warnings.filterwarnings() , it doesn't work</p>
<p>How do I get warnings.filterwarnings() to work? Or is there some other module I should be looking at instead?</p>
<pre><code>import statsmodels.api as sm
import numpy as np
import pandas as pd
import warnings
np.random.seed(123)
nrows = 100
colA = np.random.uniform(0.0, 1.0, nrows)
colB = np.random.uniform(0.0 ,1.0, nrows)
colC = colA + colB # multicolinear data to generate ill-conditioned system
y = colA + 2 * colB + np.random.uniform(0.0, 0.1, nrows)
X = pd.DataFrame({'colA': colA, 'colB': colB, 'colC': colC})
warnings.filterwarnings('error') # achieves nothing
warnings.simplefilter('error')
# from https://stackoverflow.com/questions/59961735/cannot-supress-python-warnings-with-warnings-filterwarningsignore
# also achieves nothing
try:
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
except:
print('warning successfully captured in try-except')
</code></pre>
|
<p>Note that the message is a note appended to the model summary, not a Python warning, which is why <code>warnings.filterwarnings</code> has no effect. You can get the smallest eigenvalue yourself using <code>model.eigenvals[-1]</code> and raise an exception when it falls below <code>1e-10</code>, the threshold that triggers the note. Here's the <a href="https://www.statsmodels.org/dev/_modules/statsmodels/regression/linear_model.html#OLS" rel="nofollow noreferrer">source</a> that generates the note:</p>
<pre class="lang-py prettyprint-override"><code>errstr = ("The smallest eigenvalue is %6.3g. This might indicate that there are\n"
"strong multicollinearity problems or that the design matrix is singular.")
try:
model = sm.OLS(y, sm.add_constant(X)).fit()
assert model.eigenvals[-1] >= 1e-10, (errstr % model.eigenvals[-1])
print(model.summary())
except AssertionError as e:
print('warning successfully captured in try-except. Message below:')
print(e)
</code></pre>
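<p>For intuition, the quantity behind the note can be reproduced with plain NumPy: the smallest eigenvalue of <code>X'X</code> is numerically zero when the design matrix contains an exact linear dependence. (This is a sketch of the idea, not statsmodels' exact computation.)</p>

```python
import numpy as np

rng = np.random.default_rng(123)
colA = rng.uniform(0.0, 1.0, 100)
colB = rng.uniform(0.0, 1.0, 100)
colC = colA + colB                      # exact multicollinearity

X = np.column_stack([np.ones(100), colA, colB, colC])
eigvals = np.linalg.eigvalsh(X.T @ X)   # sorted ascending

print(eigvals[0] < 1e-10)               # True: singular design
```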
|
python|pandas|numpy
| 0
|
374,265
| 70,675,210
|
Pandas: Count the occurrences of specific value in Column B...and display it in column C
|
<p>I have a Pandas dataframe where one of the columns tracks the state an accident occurred in. I want to add a column that totals the number of accidents from that state. For instance, if one of my rows shows an accident that happened in Utah, I want the last column to count the number of accidents in the dataframe that occurred in Utah.</p>
<pre><code>model state total count
----- ----- -----------
Ford Maine 2
Dodge Maine 2
Chevy Utah 1
Fiat Texas 1
</code></pre>
<p>The code so far, along with failed attempts:</p>
<pre><code>import pandas as pd
df = pd.read_csv("faainfo.csv")
cols = df.columns
df.drop(columns=cols[15:43], inplace=True)
df = df.loc[df['LOC_CNTRY_NAME'] == 'UNITED STATES']
# LOC_STATE_NAME
states_list = {
"ALABAMA": "AL",
"ALASKA": "AK",
"ARIZONA": "AZ",
"ARKANSAS": "AR",
"CALIFORNIA": "CA",
"COLORADO": "CO",
"CONNECTICUT": "CT",
"DELAWARE": "DE",
"FLORIDA": "FL",
"GEORGIA": "GA",
"HAWAII": "HI",
"IDAHO": "ID",
"ILLINOIS": "IL",
"INDIANA": "IN",
"IOWA": "IA",
"KANSAS": "KS",
"KENTUCKY": "KY",
"LOUISIANA": "LA",
"MAINE": "ME",
"MARYLAND": "MD",
"MASSACHUSETTS": "MA",
"MICHIGAN": "MI",
"MINNESOTA": "MN",
"MISSISSIPPI": "MS",
"MISSOURI": "MO",
"MONTANA": "MT",
"NEBRASKA": "NE",
"NEVADA": "NV",
"NEW HAMPSHIRE": "NH",
"NEW JERSEY": "NJ",
"NEW MEXICO": "NM",
"NEW YORK": "NY",
"NORTH CAROLINA": "NC",
"NORTH DAKOTA": "ND",
"OHIO": "OH",
"OKLAHOMA": "OK",
"OREGON": "OR",
"PENNSYLVANIA": "PA",
"RHODE ISLAND": "RI",
"SOUTH CAROLINA": "SC",
"SOUTH DAKOTA": "SD",
"TENNESSEE": "TN",
"TEXAS": "TX",
"UTAH": "UT",
"VERMONT": "VT",
"VIRGINIA": "VA",
"WASHINGTON": "WA",
"WEST VIRGINIA": "WV",
"WISCONSIN": "WI",
"WYOMING": "WY"}
df['states'] = df.LOC_STATE_NAME.map(states_list)
f = df['LOC_STATE_NAME'] == 'ARIZONA'
x = f.sum()
print(x)
# df['FLORIDA'].value_counts()
# counts = df['LOC_STATE_NAME']
# df['stcnt'] = (df['LOC_STATE_NAME'])
df
</code></pre>
|
<p>You will want to read pandas docs on <code>assign</code> and <code>transform</code> to understand fully what each is doing.</p>
<p><code>transform</code> returns a series of counts in this case and <code>assign</code> will create your new column.</p>
<p>If your data is just columns model and state, try:</p>
<pre><code>df = df.assign(total_count = df.groupby('state')['state'].transform('count'))
model state total_count
1 Ford Maine 2
2 Dodge Maine 2
3 Chevy Utah 1
4 Fiat Texas 1
</code></pre>
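<p>A runnable check on the sample data (selecting the <code>state</code> column inside the groupby keeps this working even when the frame has more columns than just <code>model</code> and <code>state</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({'model': ['Ford', 'Dodge', 'Chevy', 'Fiat'],
                   'state': ['Maine', 'Maine', 'Utah', 'Texas']})

# broadcast the per-state row count back onto every row
df['total_count'] = df.groupby('state')['state'].transform('count')
print(df)
```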
|
python|pandas
| 0
|
374,266
| 70,656,917
|
Read excel and get data of 1 row as a object
|
<p>I want to read an Excel file using pandas and get each row of the file as an object like</p>
<pre><code>{2, 3,'test data' , 1}
</code></pre>
<p>I am reading pandas file like</p>
<pre><code>excel_data = pd.read_excel(upload_file_url , index_col=None, header=None)
for name in excel_data:
print(name)
</code></pre>
<p>but on printing I am getting just plain text. How can I achieve this with pandas?</p>
|
<p>The <code>iterrows()</code> method might help to get individual rows from the dataframe.</p>
<p>Consider the following crude solution</p>
<pre><code>excel_data = pd.read_excel(upload_file_url , index_col=None, header=None)
for name in excel_data.iterrows():
print(str(name[1].tolist()).replace("[","{").replace("]","}"))
</code></pre>
<p>Here we use <code>iterrows()</code> to get individual rows of the spreadsheet as tuples.
Each tuple contains the row data at position 1 as a Pandas Series.</p>
<p>In order to convert Pandas Series to <code>{2, 3,'test data' , 1}</code> you might just convert it to list and replace square brackets with curly brackets.</p>
<p><strong>Update:</strong> If you could print the data as dict instead of a list in curly brackets, the code would be simplified.</p>
<pre><code>excel_data = pd.read_excel(upload_file_url , index_col=None, header=None)
for name in excel_data.iterrows():
print(name[1].to_dict())
</code></pre>
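<p>If a dict per row is acceptable, <code>to_dict(orient='records')</code> does the whole conversion in one call (toy frame shown here, since the question's file isn't available):</p>

```python
import pandas as pd

df = pd.DataFrame([[2, 3, 'test data', 1],
                   [5, 6, 'more data', 0]])

rows = df.to_dict(orient='records')   # one dict per row
print(rows[0][2])                     # 'test data'
```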
|
python|pandas
| 1
|
374,267
| 70,584,848
|
How to make output file name reflect for loop list in python?
|
<p>I have a function that does an action for every item in a list and then outputs a final product file for each item in that list. What I am trying to do is append a string to each of the output files that reflects the items in the initial list. I will explain more in detail here.</p>
<p>I have the following code:</p>
<pre><code>list = ['Blue', 'Red', 'Green']
df = pd.read_csv('data.csv')
for i in list:
df_new = (df[i] * 2)
df_new.to_csv('Path/to/My/Folder/Name.csv')
</code></pre>
<p>Ok, so what is going on here is that I have a file 'data.csv' that has 3 columns with unique values, the 'Blue' column, the 'Red' column, and the 'Green' column. From all of this code, I want to produce 3 separate output .csv files, one file where the 'Blue' column is multiplied by 2, the next where the 'Red' column is multiplied by 2, and lastly a file where the 'Green' column is multiplied by 2. So what My code does is first write a list of all the column names, then open the .csv file as a dataframe, and then FOR EACH color in the list, multiply that column by 2, and send each product to a new dataframe.</p>
<p>What I am confused about is how to go about naming the output .csv files so I know which is which, specifically which column was multiplied by 2. Specifically I simply want my files named as such: 'Name_Blue.csv', 'Name_Red.csv', and 'Name_Green.csv'. How can I do this so that it works with my for loop code? I am not sure what this iterative naming process would even be called.</p>
|
<p>You need to use a formatted string (a string literal with an <em>f</em> prefix). For example:</p>
<pre><code>name = "foo"
greeting = f'Hello, {name}!'
</code></pre>
<p>Inside those curly brackets is the variable you want to put in the string. So here's the modified code:</p>
<pre><code>colors = ['Blue', 'Red', 'Green']
df = pd.read_csv('data.csv')
for i in colors:
df_new = (df[i] * 2)
df_new.to_csv(f'Path/to/My/Folder/Name_{i}.csv')
</code></pre>
<p>Now the formatted string in the last line will input i (the item in the list) as the name of the file!</p>
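<p>If the output directory may vary, <code>pathlib</code> pairs nicely with the same f-string (the folder name here is just a placeholder, as in the answer):</p>

```python
from pathlib import Path

colors = ['Blue', 'Red', 'Green']
out_dir = Path('Path/to/My/Folder')        # placeholder directory

for i in colors:
    out_path = out_dir / f'Name_{i}.csv'   # e.g. .../Name_Blue.csv
    print(out_path.name)
```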
|
python|pandas|loops|csv
| 2
|
374,268
| 70,443,472
|
How to store after groupby
|
<p>How am I able to use the column after using groupby?
Let's say</p>
<pre><code>x = df_new.groupby('industry')['income'].mean().sort_values(ascending = False)
</code></pre>
<p>would give:</p>
<pre><code>"industry"
telecommunications 330
crypto. 100
gas 100
.
.
.
</code></pre>
<p>I would like to store the top most income's industry name to some variable and use it for other queries, so here it would be <code>"telecommunications"</code>.
But doing x[0] would give 330...</p>
<p>also pls recommend a better wording for this question... don't know the right terminologies</p>
|
<p><code>groupby(...).XXX()</code> (where <code>XXX</code> is some support method, e.g. <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.mean.html" rel="nofollow noreferrer"><code>mean</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a>, etc.) typical return <strong>Series</strong> objects, where the <strong>index</strong> contains the values that were grouped by. So you can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.index.html" rel="nofollow noreferrer"><code>x.index</code></a>:</p>
<pre><code>>>> x.index
Index(['telecommunication', 'crypto', 'gas'], dtype='object', name='industry')
</code></pre>
<p>If you want to get index for the max/min value, you can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.idxmax.html" rel="nofollow noreferrer"><code>idxmax</code></a>/<a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.idxmin.html" rel="nofollow noreferrer"><code>idxmin</code></a>:</p>
<pre><code>>>> x.idxmax()
'telecommunication'
>>> x.idxmin()
'crypto'
>>> x.index[2]
'gas'
</code></pre>
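<p>A self-contained version of the example above, with made-up values matching the question:</p>

```python
import pandas as pd

# A small Series shaped like the groupby result in the question
# (the numbers are assumptions taken from the example output)
x = pd.Series([330, 100, 100],
              index=pd.Index(['telecommunications', 'crypto', 'gas'],
                             name='industry'))
top_industry = x.idxmax()  # index label of the largest value
print(top_industry)        # telecommunications
```

<p><code>idxmax</code> returns the index label, while <code>x[0]</code> (positional access) returns the value 330, which is exactly the confusion in the question.</p>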
|
python|pandas|pandas-groupby
| 1
|
374,269
| 70,517,273
|
LSTM model training accuracy and loss not changing
|
<p>I am doing Sepsis Forecasting using a Multivariate LSTM. The target variable is SepsisLabel. The time series data look like this, where each row represents an hour, across 5864 patients (P_ID = 1 means the rows belong to patient 1):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">HR</th>
<th style="text-align: center;">O2Sat</th>
<th style="text-align: center;">Temp</th>
<th style="text-align: center;">SBP</th>
<th style="text-align: center;">DBP</th>
<th style="text-align: center;">Resp</th>
<th style="text-align: center;">Age</th>
<th style="text-align: center;">Gender</th>
<th style="text-align: center;">ICULOS</th>
<th style="text-align: center;">SepsisLabel</th>
<th style="text-align: center;">P_ID</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">72.0</td>
<td style="text-align: center;">97.4</td>
<td style="text-align: center;">36.33</td>
<td style="text-align: center;">108.5</td>
<td style="text-align: center;">52.7</td>
<td style="text-align: center;">16.7</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">78.0</td>
<td style="text-align: center;">97.0</td>
<td style="text-align: center;">36.53</td>
<td style="text-align: center;">169.0</td>
<td style="text-align: center;">82.0</td>
<td style="text-align: center;">14.5</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">68.0</td>
<td style="text-align: center;">97.0</td>
<td style="text-align: center;">36.20</td>
<td style="text-align: center;">150.0</td>
<td style="text-align: center;">71.0</td>
<td style="text-align: center;">14.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">70.0</td>
<td style="text-align: center;">97.0</td>
<td style="text-align: center;">36.21</td>
<td style="text-align: center;">149.0</td>
<td style="text-align: center;">71.0</td>
<td style="text-align: center;">14.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">67.0</td>
<td style="text-align: center;">98.0</td>
<td style="text-align: center;">36.11</td>
<td style="text-align: center;">157.0</td>
<td style="text-align: center;">73.0</td>
<td style="text-align: center;">14.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">73.0</td>
<td style="text-align: center;">98.0</td>
<td style="text-align: center;">36.18</td>
<td style="text-align: center;">162.0</td>
<td style="text-align: center;">78.0</td>
<td style="text-align: center;">15.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">69.0</td>
<td style="text-align: center;">99.0</td>
<td style="text-align: center;">36.63</td>
<td style="text-align: center;">156.0</td>
<td style="text-align: center;">73.0</td>
<td style="text-align: center;">13.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">70.0</td>
<td style="text-align: center;">100.0</td>
<td style="text-align: center;">36.00</td>
<td style="text-align: center;">167.0</td>
<td style="text-align: center;">79.0</td>
<td style="text-align: center;">12.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">78.0</td>
<td style="text-align: center;">98.0</td>
<td style="text-align: center;">37.13</td>
<td style="text-align: center;">177.0</td>
<td style="text-align: center;">79.0</td>
<td style="text-align: center;">17.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">9</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">73.0</td>
<td style="text-align: center;">98.0</td>
<td style="text-align: center;">36.78</td>
<td style="text-align: center;">152.0</td>
<td style="text-align: center;">71.0</td>
<td style="text-align: center;">13.5</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">79.5</td>
<td style="text-align: center;">96.5</td>
<td style="text-align: center;">37.17</td>
<td style="text-align: center;">185.0</td>
<td style="text-align: center;">94.0</td>
<td style="text-align: center;">23.2</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">73.0</td>
<td style="text-align: center;">96.0</td>
<td style="text-align: center;">36.72</td>
<td style="text-align: center;">190.0</td>
<td style="text-align: center;">96.0</td>
<td style="text-align: center;">24.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">12</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">101.0</td>
<td style="text-align: center;">95.0</td>
<td style="text-align: center;">37.13</td>
<td style="text-align: center;">188.0</td>
<td style="text-align: center;">91.0</td>
<td style="text-align: center;">26.5</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">13</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">88.0</td>
<td style="text-align: center;">95.0</td>
<td style="text-align: center;">37.56</td>
<td style="text-align: center;">145.0</td>
<td style="text-align: center;">68.0</td>
<td style="text-align: center;">24.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">14</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">92.0</td>
<td style="text-align: center;">95.0</td>
<td style="text-align: center;">37.23</td>
<td style="text-align: center;">172.0</td>
<td style="text-align: center;">81.0</td>
<td style="text-align: center;">24.0</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">15</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2</td>
</tr>
</tbody>
</table>
</div>
<p>The code:</p>
<pre><code>X = df.drop('SepsisLabel',axis=1)
y = df[['SepsisLabel']]
x_train, x_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=42)
# reshape input to be [samples, time steps, features]
x_train = np.array(x_train).reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = np.array(x_test).reshape(x_test.shape[0], x_test.shape[1], 1)
# Model
model = Sequential()
model.add(LSTM(128, activation = 'relu', input_shape = (10,1), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(100, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(50,activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.summary()
model.compile(optimizer='adam', loss='mse', metrics = ['accuracy'])
loss_plot = PlotLossesKeras()
model.fit(x_train, y_train, epochs=15, batch_size=64, verbose=1, validation_split=0.2, shuffle=True, callbacks = [loss_plot])
</code></pre>
<p>The Results:
<a href="https://i.stack.imgur.com/Aavga.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aavga.png" alt="Loss and accuracy of model training" /></a></p>
<pre><code>accuracy
training (min: 0.899, max: 0.900, cur: 0.900)
validation (min: 0.900, max: 0.900, cur: 0.900)
Loss
training (min: 0.100, max: 0.101, cur: 0.100)
validation (min: 0.100, max: 0.100, cur: 0.100)
</code></pre>
<p>I initially ran 500 epochs but the result was the same; here, I used 15 epochs.
How can I improve the model and get the best results? Open to critiques and suggestions. Thanks.</p>
|
<p>I tried the same code to reproduce the issue, but I got this output instead:</p>
<pre><code>model.compile(optimizer='adam', loss='mse', metrics = ['accuracy'])
from livelossplot import PlotLossesKeras
loss_plot = PlotLossesKeras()
model.fit(x_train, y_train, epochs=50, batch_size=8, verbose=1, validation_split=0.2, shuffle=True, callbacks = [loss_plot])
</code></pre>
<p>Output:</p>
<pre><code>accuracy
training (min: 0.333, max: 0.778, cur: 0.778)
validation (min: 0.000, max: 1.000, cur: 0.333)
Loss
training (min: 0.169, max: 0.522, cur: 0.251)
validation (min: 0.179, max: 0.570, cur: 0.262)
2/2 [==============================] - 0s 473ms/step - loss: 0.2506 - accuracy: 0.7778 - val_loss: 0.2622 - val_accuracy: 0.3333
<keras.callbacks.History at 0x7f8c22ee4910>
</code></pre>
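<p>One hedged possibility, which is an assumption and not part of the original answer: the flat ~0.90 accuracy and ~0.10 loss in the question are consistent with the model always predicting the majority class on an imbalanced SepsisLabel. A common mitigation is to weight the classes during training; the weights can be computed like this and passed as <code>class_weight=</code> to <code>model.fit</code>:</p>

```python
import numpy as np

# Toy labels with a 9:1 imbalance, standing in for SepsisLabel
y_train = np.array([0] * 90 + [1] * 10)
counts = np.bincount(y_train)  # array([90, 10])

# "balanced" weighting: n_samples / (n_classes * count_per_class)
n, k = len(y_train), len(counts)
class_weight = {i: n / (k * c) for i, c in enumerate(counts)}
print(class_weight)  # {0: 0.5555..., 1: 5.0}
# model.fit(..., class_weight=class_weight)
```

<p>With weights like these, errors on the rare positive class cost roughly nine times more than errors on the majority class, so the model can no longer score well by ignoring the positives.</p>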
|
python|pandas|tensorflow|keras|lstm
| 0
|
374,270
| 70,700,626
|
This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
|
<p>I encountered this error, how do I resolve it?</p>
<pre><code>NotImplementedError: Cannot convert a symbolic Tensor (lstm_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
</code></pre>
<pre><code># train the model
model = define_model(vocab_size, max_length)
# train the model, run epochs manually and save after each epoch
epochs = 20
steps = len(train_descriptions)
for i in range(epochs):
# create the data generator
generator = data_generator(train_descriptions, train_features, tokenizer, max_length)
# fit for one epoch
model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
# save model
model.save('model_' + str(i) + '.h5')
</code></pre>
|
<p>As others have indicated elsewhere, this is due to an incompatibility between specific tensorflow and numpy versions.</p>
<p>conda version 4.11.0</p>
<p>Commands to setup working environment:</p>
<pre><code>conda activate base
conda env remove -y --name myenv
conda create -y --name myenv tensorflow=2.4.0
conda activate myenv
conda install -y numpy=1.19.2
conda install -y keras
</code></pre>
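<p>To confirm which versions actually ended up in the environment (the usual culprit is a tensorflow/numpy mismatch), a quick check that works whether or not the packages are installed:</p>

```python
from importlib.metadata import version, PackageNotFoundError

# Collect the installed version of each package, or None if absent
versions = {}
for pkg in ("tensorflow", "numpy"):
    try:
        versions[pkg] = version(pkg)
    except PackageNotFoundError:
        versions[pkg] = None
print(versions)
```

<p>For the environment above you would expect tensorflow 2.4.0 and numpy 1.19.2; anything else suggests conda resolved different versions and the error may persist.</p>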
<p>System Information</p>
<pre><code>System: Kernel: 5.4.0-100-generic x86_64 bits: 64 compiler: gcc v: 9.3.0
Desktop: Cinnamon 5.2.7 wm: muffin dm: LightDM Distro: Linux Mint 20.3 Una
base: Ubuntu 20.04 focal
Machine: Type: Laptop System: LENOVO product: 20308 v: Lenovo Ideapad Flex 14 serial: <filter>
Chassis: type: 10 v: Lenovo Ideapad Flex 14 serial: <filter>
Mobo: LENOVO model: Strawberry 4A v: 31900059Std serial: <filter> UEFI: LENOVO
v: 8ACN30WW date: 12/06/2013
CPU: Topology: Dual Core model: Intel Core i5-4200U bits: 64 type: MT MCP arch: Haswell
rev: 1 L2 cache: 3072 KiB
flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 18357
Speed: 798 MHz min/max: 800/2600 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 799
Graphics: Device-1: Intel Haswell-ULT Integrated Graphics vendor: Lenovo driver: i915 v: kernel
bus ID: 00:02.0 chip ID: 8086:0a16
Display: x11 server: X.Org 1.20.13 driver: modesetting unloaded: fbdev,vesa
resolution: 1366x768~60Hz
OpenGL: renderer: Mesa DRI Intel HD Graphics 4400 (HSW GT2) v: 4.5 Mesa 21.2.6
compat-v: 3.0 direct render: Yes
</code></pre>
|
python|numpy|tensorflow|tensorflow-datasets
| 0
|
374,271
| 70,412,894
|
Python Running Error on Macbook pro m1 max (Running on Tensorflow)
|
<p>I am trying to run this code from GitHub, <a href="https://github.com/ItamarRocha/binary-bot" rel="nofollow noreferrer">binary-bot</a>, on my new MacBook Pro with the M1 Max chip:</p>
<p>Metal device set to:</p>
<pre><code>Apple M1 Max
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
</code></pre>
<p>And I am getting the following error. Any suggestions?</p>
<pre><code>2021-12-19 17:26:25.248041: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-19 17:26:25.248181: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
1 Physical GPUs, 1 Logical GPUs
1 Physical GPUs, 1 Logical GPUs
Trying to connect to IqOption
Successfully Connected!
/Users/abdallahmohamed/Downloads/binary-bot-master/training.py:35: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
df = df.drop("future", 1)
/Users/abdallahmohamed/Downloads/binary-bot-master/training.py:35: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
df = df.drop("future", 1)
train data: 836 validation: 68
sells: 418, buys: 418
VALIDATION sells: 34, buys : 34
0.001-5-SEQ-2-40-16-PRED-1639927591
1 Physical GPUs, 1 Logical GPUs
2021-12-19 17:26:32.262259: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
Epoch 1/40
2021-12-19 17:26:33.223096: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.592036: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.646351: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.684523: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.786763: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.866171: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:33.932667: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
53/53 [==============================] - ETA: 0s - loss: 0.8687 - accuracy: 0.52272021-12-19 17:26:35.527128: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:35.652789: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:35.681668: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:26:35.710022: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 4s 34ms/step - loss: 0.8687 - accuracy: 0.5227 - val_loss: 0.6928 - val_accuracy: 0.5000
Epoch 2/40
53/53 [==============================] - ETA: 0s - loss: 0.7844 - accuracy: 0.5335WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7844 - accuracy: 0.5335 - val_loss: 0.6890 - val_accuracy: 0.5000
Epoch 3/40
51/53 [===========================>..] - ETA: 0s - loss: 0.7257 - accuracy: 0.5588WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7293 - accuracy: 0.5562 - val_loss: 0.6836 - val_accuracy: 0.5735
Epoch 4/40
52/53 [============================>.] - ETA: 0s - loss: 0.7421 - accuracy: 0.5649WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7411 - accuracy: 0.5658 - val_loss: 0.7035 - val_accuracy: 0.4412
Epoch 5/40
52/53 [============================>.] - ETA: 0s - loss: 0.7205 - accuracy: 0.5565WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7193 - accuracy: 0.5586 - val_loss: 0.7327 - val_accuracy: 0.4412
Epoch 6/40
52/53 [============================>.] - ETA: 0s - loss: 0.7233 - accuracy: 0.5637WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7228 - accuracy: 0.5634 - val_loss: 0.7023 - val_accuracy: 0.5441
Epoch 7/40
51/53 [===========================>..] - ETA: 0s - loss: 0.7192 - accuracy: 0.5588WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7187 - accuracy: 0.5586 - val_loss: 0.8523 - val_accuracy: 0.4559
Epoch 8/40
51/53 [===========================>..] - ETA: 0s - loss: 0.7111 - accuracy: 0.5613WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.7105 - accuracy: 0.5634 - val_loss: 0.7727 - val_accuracy: 0.4559
Epoch 9/40
53/53 [==============================] - ETA: 0s - loss: 0.7151 - accuracy: 0.5514WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 25ms/step - loss: 0.7151 - accuracy: 0.5514 - val_loss: 0.7105 - val_accuracy: 0.5147
Epoch 10/40
53/53 [==============================] - ETA: 0s - loss: 0.7046 - accuracy: 0.5371WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 25ms/step - loss: 0.7046 - accuracy: 0.5371 - val_loss: 0.6940 - val_accuracy: 0.5588
Epoch 11/40
53/53 [==============================] - ETA: 0s - loss: 0.7064 - accuracy: 0.5455WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 25ms/step - loss: 0.7064 - accuracy: 0.5455 - val_loss: 0.7433 - val_accuracy: 0.3971
Epoch 12/40
51/53 [===========================>..] - ETA: 0s - loss: 0.6991 - accuracy: 0.5784WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 25ms/step - loss: 0.6988 - accuracy: 0.5778 - val_loss: 0.6902 - val_accuracy: 0.5147
Epoch 13/40
52/53 [============================>.] - ETA: 0s - loss: 0.6812 - accuracy: 0.5757WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.6818 - accuracy: 0.5754 - val_loss: 0.8100 - val_accuracy: 0.4118
Epoch 14/40
52/53 [============================>.] - ETA: 0s - loss: 0.6876 - accuracy: 0.5673WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.6888 - accuracy: 0.5658 - val_loss: 0.7208 - val_accuracy: 0.5294
Epoch 15/40
52/53 [============================>.] - ETA: 0s - loss: 0.6815 - accuracy: 0.5505WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 23ms/step - loss: 0.6809 - accuracy: 0.5502 - val_loss: 0.6965 - val_accuracy: 0.5441
Epoch 16/40
51/53 [===========================>..] - ETA: 0s - loss: 0.6886 - accuracy: 0.5711WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.6900 - accuracy: 0.5670 - val_loss: 0.6529 - val_accuracy: 0.6029
Epoch 17/40
53/53 [==============================] - ETA: 0s - loss: 0.6959 - accuracy: 0.5598WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.6959 - accuracy: 0.5598 - val_loss: 0.7832 - val_accuracy: 0.4118
Epoch 18/40
52/53 [============================>.] - ETA: 0s - loss: 0.7002 - accuracy: 0.5325WARNING:tensorflow:Can save best model only with val_acc available, skipping.
53/53 [==============================] - 1s 24ms/step - loss: 0.6999 - accuracy: 0.5335 - val_loss: 0.7270 - val_accuracy: 0.3676
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_5 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Trying to connect to IqOption
Successfully Connected!
2021-12-19 17:27:01.637577: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
2021-12-19 17:27:01.792362: F tensorflow/core/framework/tensor.cc:681] Check failed: IsAligned() ptr = 0x2d0efcee0
zsh: abort /Users/abdallahmohamed/miniforge3/envs/env_tensorflow/bin/python
</code></pre>
|
<p>It worked after I deleted the previously saved model and saved a new one.</p>
|
python|tensorflow|deep-learning|lstm|apple-m1
| 1
|
374,272
| 70,685,859
|
How can I save an image (nii.gz) after reshaping it?
|
<p>I'm trying to save an image after reshaping it, but I'm running into problems at the saving step. Here's the code I'm trying to run:</p>
<pre><code>import nibabel as nib
import numpy as np
from nibabel.testing import data_path
import os
example_filename = os.path.join("D:/Volumes convertidos LIDC",
'teste001converted.nii.gz')
img = nib.load('teste001converted.nii.gz')
print (img.shape)
newimg = img.get_fdata().reshape(332,360*360)
print (newimg.shape)
final_img = nib.Nifti1Image(newimg, img.affine)
nib.save(final_img, os.path.join("D:/Volumes convertidos LIDC",
'test2d.nii.gz'))
</code></pre>
<p>And I'm getting an error:</p>
<pre><code>Traceback (most recent call last):
  File "d:\Volumes convertidos LIDC\reshape.py", line 17, in <module>
    final_img = nib.Nifti1Image(newimg, img.affine)
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1756, in __init__
    super(Nifti1Pair, self).__init__(dataobj,
  File "C:\Python39\lib\site-packages\nibabel\analyze.py", line 918, in __init__
    super(AnalyzeImage, self).__init__(
  File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 469, in __init__
    self.update_header()
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 2032, in update_header
    super(Nifti1Image, self).update_header()
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1795, in update_header
    super(Nifti1Pair, self).update_header()
  File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 496, in update_header
    hdr.set_data_shape(shape)
  File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 880, in set_data_shape
    super(Nifti1Header, self).set_data_shape(shape)
  File "C:\Python39\lib\site-packages\nibabel\analyze.py", line 633, in set_data_shape
    raise HeaderDataError(f'shape {shape} does not fit in dim datatype')
nibabel.spatialimages.HeaderDataError: shape (332, 129600) does not fit in dim datatype
</code></pre>
<p>Is there any way to solve it?</p>
|
<p>You are trying to save a <code>numpy</code> array, whereas the <code>nib.save</code> expects a <code>SpatialImage</code> object.</p>
<p>You should convert the <code>numpy</code> array to a <code>SpatialImage</code>:</p>
<pre><code>final_img = nib.Nifti1Image(newimg, img.affine)
</code></pre>
<p>After which you can save the image:</p>
<pre><code>nib.save(final_img, os.path.join("D:/Volumes convertidos LIDC", 'test4d.nii.gz'))
</code></pre>
<p>See the <a href="https://nipy.org/nibabel/reference/nibabel.loadsave.html#nibabel.loadsave.save" rel="nofollow noreferrer">documentation</a> and this <a href="https://stackoverflow.com/a/56635988/11919175">answer</a> for more explanation.</p>
<p><strong>Edit:</strong> This will not work if <code>newimg</code> is a 2D image.</p>
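<p>For context (this explanation is an addition, not part of the original answer): the header error appears to come from the NIfTI-1 format itself. Each dimension is stored as a signed 16-bit integer, so no single axis can exceed 32767, and the flattened axis of 129600 overflows that limit regardless of how the image object is built. A quick plain-Python check using the shapes from the question:</p>

```python
# Shapes taken from the question; DIM_MAX is the NIfTI-1 per-axis limit
shape_3d = (332, 360, 360)
shape_2d = (332, 360 * 360)      # the reshape that triggers the error
DIM_MAX = 32767                  # dims are stored as signed 16-bit integers

fits_3d = all(axis <= DIM_MAX for axis in shape_3d)
fits_2d = [axis <= DIM_MAX for axis in shape_2d]
print(fits_3d)  # True  -> the original 3-D shape is storable
print(fits_2d)  # [True, False] -> the 129600 axis overflows the header
```

<p>So the 2-D layout cannot be written as NIfTI with these dimensions; keeping the image 3-D, or saving the flattened array in another format (e.g. <code>.npy</code>), avoids the overflow.</p>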
|
python|image|numpy
| 3
|
374,273
| 70,708,241
|
vectorize a dataframe in pandas
|
<p>Hi I have a dataframe in the tidy format such as</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c',3: 'a', 4: 'b', 5: 'c'},
'B': {0: 1, 1: 3, 2: 5,3: 1, 4: 3, 5: 5},
'C': {0: 2, 1: 4, 2: 6,3: 2, 4: 4, 5: 6}})
</code></pre>
<p>I made a function that in this case would map the codes 'a', 'b', 'c' in column 'A' to the observables in columns 'B' and 'C'.</p>
<p>The function is</p>
<pre><code>def vectorize(df):
indexdict={}
for code in df['A'].unique():
indexdict.update({'A':code})
transpose = df.T
value_dict ={}
for item in transpose.iloc[1]:
for value in transpose.iloc[2]:
value_dict.update({item:value})
indexdict.update(value_dict)
indexdict = {str(key):value for key,value in indexdict.items()}
df = pd.DataFrame(indexdict,index=[0])
df.set_index('A', inplace=True)
return df
</code></pre>
<p>I want to get a dataframe with all the a, b, c codes and their observables. However, when I run it, the function only returns the last entry, like this:</p>
<p><a href="https://i.stack.imgur.com/CD1M6.png" rel="nofollow noreferrer">output</a></p>
<p>What am I doing wrong, and is there a better way of doing this? The output format is what I want, but instead of just a single value I want all the values for a, b, and c.</p>
<p>Thanks</p>
|
<p>I think I figured it out by adapting the solution from @natnij for strings:</p>
<pre><code>df.pivot_table(index='A',columns='B',values='C',aggfunc=lambda x: ' '.join(x))
</code></pre>
<p>thank you</p>
|
python|pandas|dataframe
| 0
|
374,274
| 70,449,461
|
Add extra instances to my x_test and y_test after using train_test_split()
|
<p>I'm working on a multi-class classification problem in which I have my data categorized into 8 classes.</p>
<p>What I want to do is extract all the instances belonging to one class from my training dataset and include them in my testing dataset.</p>
<p>What I did until now is this:</p>
<pre><code># Generate some data
df = pd.DataFrame({
'x1': np.random.normal(0, 1, 100),
'x2': np.random.normal(2, 3, 100),
'x3': np.random.normal(4, 5, 100),
'y': np.random.choice([0, 1, 2, 3, 4, 5, 6, 7], 100)})
df.head(10)
# Output is as follows
# x1 x2 x3 y
# 0 -0.742347 -2.064889 2.979338 6
# 1 0.182298 6.366811 7.435432 7 <-- Instance no. 1 will be stored in (filtered_df) in the next step
# 2 -1.015937 -3.214670 8.544494 4
# 3 0.688138 1.938480 4.028213 6
# 4 0.397756 0.064590 9.186234 5
# 5 0.095368 -3.255433 1.010394 1
# 6 0.609087 6.783653 4.390600 6
# 7 -0.017803 -1.571393 6.539134 5
# 8 0.814820 4.535381 2.175285 0
# 9 -0.573918 -0.672416 0.826967 6
# Taking out instances that are classified as no "7" from the dataset
filtered_df = df[df['y']==7]
df.drop(df[df['y']==7].index, inplace=True)
df.head(10)
# Output is as follows
# x1 x2 x3 y
# 0 -0.742347 -2.064889 2.979338 6
# 2 -1.015937 -3.214670 8.544494 4 <-- Instance no. 1 is stored in (filtered_df) now
# 3 0.688138 1.938480 4.028213 6
# 4 0.397756 0.064590 9.186234 5
# 5 0.095368 -3.255433 1.010394 1
# 6 0.609087 6.783653 4.390600 6
# 7 -0.017803 -1.571393 6.539134 5
# 8 0.814820 4.535381 2.175285 0
# 9 -0.573918 -0.672416 0.826967 6
# 11 0.044094 2.581373 1.368575 5
# Extract the features and target
X = df.iloc[:, 0:3]
y = df.iloc[:, 3]
# Spliting the dataset into train, test and validate for binary classification
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)
## Not sure how to add (filtered_df) to X_test and y_test now ?
</code></pre>
<p>I'm not sure how to continue from here. How can I add the instances stored in <code>filtered_df</code> to <code>X_test</code> and <code>y_test</code>?</p>
|
<p>IIUC:</p>
<pre><code>for klass in df['y'].unique():
    m = df['y'] != klass
    X = df.loc[m, df.columns[:3]]
    y = df.loc[m, df.columns[-1]]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)
    # DataFrame.append was removed in pandas 2.0; use pd.concat instead
    X_test = pd.concat([X_test, df.loc[~m, df.columns[:3]]])
    y_test = pd.concat([y_test, df.loc[~m, df.columns[-1]]])
    # do stuff here
    ...
</code></pre>
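<p>For the single-class case in the question, the key step is just concatenating the held-out rows onto the test side with <code>pd.concat</code>. A small sketch with made-up stand-in frames (the values are assumptions, not the question's data):</p>

```python
import pandas as pd

# Stand-ins for an existing test split and the held-out class-7 rows
X_test = pd.DataFrame({'x1': [0.1], 'x2': [0.2], 'x3': [0.3]})
y_test = pd.Series([3], name='y')
filtered_df = pd.DataFrame({'x1': [1.0], 'x2': [2.0], 'x3': [3.0], 'y': [7]})

# Append the held-out rows to the test side only
X_test = pd.concat([X_test, filtered_df.iloc[:, 0:3]], ignore_index=True)
y_test = pd.concat([y_test, filtered_df.iloc[:, 3]], ignore_index=True)
print(len(X_test), list(y_test))  # 2 [3, 7]
```

<p>Because the class-7 rows were dropped before <code>train_test_split</code>, they never leak into the training set; they only ever appear in the test split.</p>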
|
python|pandas|scikit-learn|classification
| 0
|
374,275
| 70,636,079
|
Why would a much lighter Keras model run at the same speed at inference as the much larger original model?
|
<p>I trained a Keras model with the following architecture:</p>
<pre><code>def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = inputs
# Entry block
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
for size in [128, 256, 512, 728]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(size, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
x = layers.SeparableConv2D(1024, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
</code></pre>
<p>And that model has over 2 million trainable parameters.</p>
<p>I then trained a much lighter model with only 300,000. trainable parameters:</p>
<pre><code>def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = inputs
# Entry block
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(64, kernel_size=(7, 7), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(192, kernel_size=(3, 3), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.Conv2D(128, kernel_size=(1, 1), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(128, kernel_size=(3, 3), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Dropout(0.5)(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
</code></pre>
<p>However, the last model (which is much lighter and even accepts a smaller input size) seems to run at the same speed, only classifying at 2 images per second. Shouldn't there be a difference in speed being it's a smaller model? Looking at the code, is there a glaring reason why that wouldn't be the case?</p>
<p>I'm using the same method at inference in both cases:</p>
<pre><code>image_size = (180, 180)
batch_size = 32
model = keras.models.load_model('model_13.h5')
t_end = time.time() + 10
iters = 0
while time.time() < t_end:
img = keras.preprocessing.image.load_img(
"test2.jpg", target_size=image_size
)
img_array = image.img_to_array(img)
#print(img_array.shape)
img_array = tf.expand_dims(img_array, 0) # Create batch axis
predictions = model.predict(img_array)
score = predictions[0]
print(score)
iters += 1
if score < 0.5:
print('Fire')
else:
print('No Fire')
print('TOTAL: ', iters)
</code></pre>
|
<p>The number of parameters is at best an indication of how fast a model trains or runs inference. Speed depends on many other factors.</p>
<p>Here are some examples of what might influence the throughput of your model:</p>
<ol>
<li>The activation function: ReLU activations are faster than e.g. ELU or GELU, which contain exponential terms. Not only is computing an exponential slower than a linear operation, but the gradient is also more complex to compute, whereas in the case of ReLU it is simply a constant, the slope of the activation (e.g. 1).</li>
<li>The bit precision used for your data. Some HW accelerators can compute faster in float16 than in float32, and reading fewer bits also decreases latency.</li>
<li>Some layers have no parameters but still perform fixed calculations. Even though no parameters are added to the network's weights, computation is still performed.</li>
<li>The architecture of your training HW. Certain filter sizes and batch sizes can be computed more efficiently than others.</li>
<li>Sometimes the speed of the computing HW is not the bottleneck, but rather the input pipeline that loads and preprocesses your data.</li>
</ol>
<p>It's hard to tell without testing, but in your particular example I would guess that the following might slow down your inference:</p>
<ol>
<li>the large receptive field of the 7x7 conv</li>
<li>leaky_relu is slightly slower than relu</li>
<li>Probably your data input pipeline is the bottleneck, not the inference speed. If the inference speed is much faster than the data preparation, it might appear that both models have the same speed. But in reality the HW is idle and waits for data.</li>
</ol>
<p>To understand what's going on, you could either change some parameters and evaluate the speed, or you could analyze your input pipeline by tracing your hardware using tensorboard. Here is a small guide: <a href="https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras" rel="nofollow noreferrer">https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras</a></p>
<p>Best,
Sascha</p>
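One cheap way to test the input-pipeline hypothesis from the last point is to time the data-loading step and the full load-plus-predict step separately. A minimal sketch follows; the two placeholder functions are stand-ins for the real `load_img`/`model.predict` calls (assumptions for illustration, not the actual Keras API):

```python
import time

def benchmark(fn, n_iters=50):
    """Return average seconds per call of fn (a rough throughput probe)."""
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    return (time.perf_counter() - start) / n_iters

# Placeholder stand-ins for the real pipeline stages: time each stage
# separately to see whether data preparation or inference dominates.
def load_and_preprocess():
    return [0.0] * (180 * 180)   # stands in for load_img + img_to_array

def predict(batch=None):
    return sum(range(1000))      # stands in for model.predict(...)

t_data = benchmark(load_and_preprocess)
t_full = benchmark(lambda: predict(load_and_preprocess()))
print(f"data only: {t_data:.6f}s/iter, data+model: {t_full:.6f}s/iter")
```

If `t_data` is close to `t_full` for both models, the pipeline, not the network, sets the throughput.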
|
python|tensorflow|machine-learning|keras|computer-vision
| 2
|
374,276
| 70,633,113
|
Collecting features from network.foward() in TensorFlow
|
<p>So basically I want to achieve the same goal as in this code but in TensorFlow</p>
<pre><code>def get_function(network, loader):
''' Collect function (features) from the self.network.module.forward_features() routine '''
features = []
for batch_idx, (inputs, targets) in enumerate(loader):
inputs, targets = inputs.to('cpu'), targets.to('cpu')
features.append([f.cpu().data.numpy().astype(np.float16) for f in network.forward_features(inputs)])
return [np.concatenate(list(zip(*features))[i]) for i in range(len(features[0]))]
</code></pre>
<p>Are there any clean ways to do this with TensorFlow iterator? Here is the torch code that I want to replicate in TensorFlow. <a href="https://pastecode.io/s/b03cpoyv" rel="nofollow noreferrer">https://pastecode.io/s/b03cpoyv</a></p>
|
<p>To answer your question, I just need to ensure that you understand your original <a href="/questions/tagged/torch" class="post-tag" title="show questions tagged 'torch'" rel="tag">torch</a> <a href="https://pastecode.io/s/b03cpoyv" rel="nofollow noreferrer">code</a> properly. So, here's your workflow</p>
<pre><code>class LeNet(nn.Module):
def forward:
# few bunch of layers
return output
def forward_features:
# same as forward function
return [each layer output]
</code></pre>
<p>Now, next, you use the <code>torch_get_function</code> method and retrieve all layer outputs from the <code>forward_features</code> function defined in your model. The <code>torch_get_function</code> gives a total of 4 outputs as a list, and you pick only the first feature and concatenate across the batches at the end.</p>
<pre><code>def torch_get_function(network, loader):
features = []
for batch_idx, (inputs, targets) in enumerate(loader):
print('0', network.forward_features(inputs)[0].shape)
print('1', network.forward_features(inputs)[1].shape)
print('2', network.forward_features(inputs)[2].shape)
print('3', network.forward_features(inputs)[3].shape)
print()
features.append([f... for f in network.forward_features(inputs)])
return [np.concatenate(list(zip(*features))[i]) for i in range(len(features[0]))]
for epoch in epochs:
dataset = torchvision.datasets.MNIST...
dataset = torch.utils.data.Subset(dataset, list(range(0, 1000)))
functloader = torch.utils.data.DataLoader(...)
# for x , y in functloader:
# print('a ', x.shape, y.shape)
# a torch.Size([100, 1, 28, 28]) torch.Size([100])
activs = torch_get_function(net, functloader)
print(activs[0].shape)
break
</code></pre>
<p>That's why if when I ran your code, I got</p>
<pre><code># These are the 4 output that returned by forward_features(inputs)
0 torch.Size([100, 10, 12, 12])
1 torch.Size([100, 320])
2 torch.Size([100, 50])
3 torch.Size([100, 10])
...
# In the return statement of forward_features -
# You take only the first index feature and concatenate across batches.
(1000, 10, 12, 12)
</code></pre>
<p>So, the input size of your model is <code>(batch_size, 1, 28, 28)</code> and the final output is like <code>(1000, 10, 12, 12)</code>.</p>
<hr />
<p>Let's do the same in <a href="/questions/tagged/tensorflow" class="post-tag" title="show questions tagged 'tensorflow'" rel="tag">tensorflow</a>, step by step.</p>
<pre><code>import numpy as np
from tqdm import tqdm
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import (Conv2D, Dropout, MaxPooling2D,
Dense, Flatten)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
dataset = dataset.shuffle(buffer_size=1024).batch(100)
# it's like torch.utils.data.Subset
dataset = dataset.take(1000)
dataset
<TakeDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.uint8)>
</code></pre>
<p>Let's now build the model. To make it familiar to you, I'm writing it in the sub-classing API.</p>
<pre><code>class LeNet(keras.Model):
def __init__(self, num_classes, input_size=28):
super(LeNet, self).__init__()
self.conv1 = Conv2D(10, (5, 5))
self.conv2 = Conv2D(20, (5, 5))
self.conv2_drop = Dropout(rate=0.5)
self.fc1 = Dense(50)
self.fc2 = Dense(num_classes)
def call(self, inputs, training=None):
x1 = tf.nn.relu(MaxPooling2D(2)(self.conv1(inputs)))
x2 = tf.nn.relu(MaxPooling2D(2)(self.conv2_drop(self.conv2(x1))))
x2 = Flatten()(x2)
x3 = tf.nn.relu(self.fc1(x2))
x4 = tf.nn.softmax(self.fc2(x3), axis=1)
# in tf/keras, when we will call model.fit / model.evaluate
# to train the model only x4 will return
if training:
            return x4
else: # but when model(input)/model.predict(), we can return many :)
return [x1, x2, x3, x4]
lenet = LeNet(10)
lenet.build(input_shape=(None, 28, 28, 1))
</code></pre>
<p>Get the desired features</p>
<pre><code>features = []
for input, target in tqdm(dataset):
# lenet(...) will give 4 output as we model
# but as we're interested on the first index feature...
features.append(lenet(input, training=False)[0])
print(len(features))
features = np.concatenate(features, axis=0)
features.shape
(10000, 12, 12, 10)
</code></pre>
<p>In tensorflow, the channel axis is set to last by default, as opposed to torch. In torch you received <code>(1000, 10, 12, 12)</code> and in tensorflow it gives you <code>(10000, 12, 12, 10)</code>, but you can change it of course (<a href="https://stackoverflow.com/a/46965196/9215780">how</a>). Here is the working <a href="https://colab.research.google.com/drive/1-C9j7y3r_k0GmyAaE1M1baUU_BWNtWgo?usp=sharing" rel="nofollow noreferrer">colab</a>.</p>
|
python|tensorflow|machine-learning|keras|pytorch
| 1
|
374,277
| 42,888,300
|
can we pass non tensor to tf.py_func input?
|
<pre><code>def np_function( np_array1, float_value):
np_array2 = ...
return np_array2
#tensorflow customised op
def tf_function( tf_tensor_in_gpu, float_value):
return \
tf.py_func(np_function,[tf_tensor_in_gpu, float_value],[tf.float32])
</code></pre>
<p>I want to make a customized tensorflow op from my function, using "tf.py_func". How can I pass non-tensor input (e.g. "float_value" in the above code) to my functions. Is my code correct? It is giving errors at runtime when I called session.run. </p>
|
<p>I solved this by:</p>
<pre><code>def np_function_generator(float_value):
def np_function(np_array1):
np_array2 = ...
... you can use float_value here ...
return np_array2
return np_function
#tensorflow customised op
def tf_function( tf_tensor_in_gpu, float_value):
  np_function = np_function_generator(float_value)
return \
tf.py_func(np_function,[tf_tensor_in_gpu],[tf.float32])
</code></pre>
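An equivalent way to bind the extra non-tensor argument, instead of a hand-written closure factory, is `functools.partial`. Here is a sketch with a plain NumPy function (the multiplication body is a made-up example; the `tf.py_func` wiring stays the same):

```python
import functools
import numpy as np

def np_function(np_array1, float_value):
    # float_value is now an ordinary keyword argument
    return np_array1 * float_value

# Bind the non-tensor argument up front; tf.py_func would then only
# need to pass the tensor: tf.py_func(bound, [tf_tensor], [tf.float32])
bound = functools.partial(np_function, float_value=2.5)

result = bound(np.array([1.0, 2.0]))
print(result)
```

`partial` avoids nesting a function definition per call site and makes the bound value visible in one place.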
|
tensorflow
| 2
|
374,278
| 42,975,789
|
Installing python packages without dependencies
|
<p>I am trying to install a specific version of a python package into a pre-installed python environment. The package is <a href="https://github.com/laspy/laspy" rel="nofollow noreferrer">laspy</a> and the version is an old one (1.2.5). The package is supposed to work with Python version 2.7, but I am trying to install it against version 3.5, as I saw <a href="https://github.com/laspy/laspy/issues/40" rel="nofollow noreferrer">here</a> that it should work on Python 3.4.</p>
<p>The real reason why I am doing so is that this specific Python is shipped with <a href="http://pro.arcgis.com/en/pro-app/arcpy/get-started/installing-python-for-arcgis-pro.htm" rel="nofollow noreferrer">ArcGIS Pro</a>, and I need the <code>arcpy</code> module which is present only in this installation.</p>
<p>I've been able to download the <code>laspy</code> package using <code>pip download</code>. This module depends on the module <code>numpy</code>, which is already present in the Python environment. This is causing the <code>pip install</code> to fail with error:</p>
<pre><code>PermissionError: [WinError 5] Accesso negato: 'C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\Lib\\site-packages\\numpy'
</code></pre>
<p>which I kind of understand (it cannot overwrite the already installed <code>numpy</code>).</p>
<p>Here comes my big doubt: would installing <code>laspy</code> with <code>pip</code> and <code>--no-dependencies</code> option "break" my python installation?</p>
|
<p>So stupid... The error message <code>PermissionError</code> was just because I opened cmd without administrative privileges...</p>
<p>Just installed <code>laspy</code> with <code>pip install laspy==1.2.5</code>. Hopefully it will work with this 64bit version of Python shipped with ArcGIS Pro (I was actually using it with the python 2.7 shipped with ArcGIS 10.x but it's 32bit and with LAS files it is easy to receive "out of memory" messages...).</p>
<p>Will edit this answer to give some news on the compatibility.</p>
<p><strong>UPDATE</strong></p>
<p>Seems like I was just able to import laspy, but all other submodules of it didn't work...</p>
<p>e.g. <code>import laspy</code> works, but <code>from laspy.File import file</code> throws <code>No module named 'laspy.File'</code>.</p>
<p>I am now switching to a fork (<a href="https://github.com/GeospatialPython/laspy" rel="nofollow noreferrer">this one</a>), which should be compatible hopefully.</p>
|
python|python-3.x|numpy|arcgis|arcpy
| 1
|
374,279
| 42,902,944
|
python pandas - how to merge date from one and time from another column and create new column
|
<p>I have a dataframe that comes from a database like this:
Both FltDate and ESTAD2 are datetime64[ns]</p>
<pre><code>>>> print df[['Airport', 'FltDate', 'Carrier', 'ESTAD2']]
Airport FltDate Carrier ESTAD2
0 EDI 2017-06-18 BACJ 1899-12-30 05:35:00
1 EDI 2017-06-18 BA 1899-12-30 06:40:00
2 EDI 2017-06-18 BACJ 1899-12-30 07:00:00
3 EDI 2017-06-18 BA 1899-12-30 07:05:00
4 EDI 2017-06-18 BA 1899-12-30 09:00:00
5 EDI 2017-06-18 I2 1899-12-30 11:05:00
6 EDI 2017-06-18 BA 1899-12-30 11:25:00
7 EDI 2017-06-18 BA 1899-12-30 13:45:00
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 214 entries, 0 to 213
Data columns (total 12 columns):
Airport 214 non-null object
FltDate 214 non-null datetime64[ns]
<snip>
ESTAD2 214 non-null datetime64[ns]
<snip>
dtypes: datetime64[ns](4), object(8)
memory usage: 20.1+ KB
</code></pre>
<p>How to get the <strong>Date Part from FltDate</strong> and <strong>Time Part from ESTAD2.</strong></p>
<p>Result should be like this say for Row 0</p>
<pre><code>2017-06-18 05:35:00 (that is 18Jun2017 of FltDate + 05:35 of ESTAD2)
</code></pre>
<p>I may replace ESTAD2 with above result.. or create a new column as FltDateTime.</p>
<p>Tried various ways and failed... like below... adding was unsuccessful.</p>
<pre><code>>>> df.FltDate.dt.date
0 2017-06-18
1 2017-06-18
2 2017-06-18
>>> df.ESTAD2.dt.time
0 05:35:00
1 06:40:00
2 07:00:00
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>strftime</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>:</p>
<pre><code>df['date'] = pd.to_datetime(df.FltDate.dt.strftime('%Y-%m-%d ') +
df.ESTAD2.dt.strftime('%H:%M:%S'))
print (df)
Airport FltDate Carrier ESTAD2 date
0 EDI 2017-06-18 BACJ 1899-12-30 05:35:00 2017-06-18 05:35:00
1 EDI 2017-06-18 BA 1899-12-30 06:40:00 2017-06-18 06:40:00
2 EDI 2017-06-18 BACJ 1899-12-30 07:00:00 2017-06-18 07:00:00
3 EDI 2017-06-18 BA 1899-12-30 07:05:00 2017-06-18 07:05:00
4 EDI 2017-06-18 BA 1899-12-30 09:00:00 2017-06-18 09:00:00
5 EDI 2017-06-18 I2 1899-12-30 11:05:00 2017-06-18 11:05:00
6 EDI 2017-06-18 BA 1899-12-30 11:25:00 2017-06-18 11:25:00
7 EDI 2017-06-18 BA 1899-12-30 13:45:00 2017-06-18 13:45:00
</code></pre>
<p>Alternative solution:</p>
<pre><code>df['date'] = pd.to_datetime(df.FltDate.dt.strftime('%Y-%m-%d ') +
df.ESTAD2.astype(str).str.split().str[1])
print (df)
Airport FltDate Carrier ESTAD2 date
0 EDI 2017-06-18 BACJ 1899-12-30 05:35:00 2017-06-18 05:35:00
1 EDI 2017-06-18 BA 1899-12-30 06:40:00 2017-06-18 06:40:00
2 EDI 2017-06-18 BACJ 1899-12-30 07:00:00 2017-06-18 07:00:00
3 EDI 2017-06-18 BA 1899-12-30 07:05:00 2017-06-18 07:05:00
4 EDI 2017-06-18 BA 1899-12-30 09:00:00 2017-06-18 09:00:00
5 EDI 2017-06-18 I2 1899-12-30 11:05:00 2017-06-18 11:05:00
6 EDI 2017-06-18 BA 1899-12-30 11:25:00 2017-06-18 11:25:00
7 EDI 2017-06-18 BA 1899-12-30 13:45:00 2017-06-18 13:45:00
</code></pre>
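A string-free alternative is to stay in datetime arithmetic: subtracting `ESTAD2`'s normalized (midnight) value from `ESTAD2` yields a `Timedelta` holding just the time of day, which can be added directly to `FltDate`. A small sketch on two of the rows from the question:

```python
import pandas as pd

df = pd.DataFrame({
    'FltDate': pd.to_datetime(['2017-06-18', '2017-06-18']),
    'ESTAD2': pd.to_datetime(['1899-12-30 05:35:00',
                              '1899-12-30 06:40:00']),
})

# dt.normalize() floors each timestamp to midnight, so the subtraction
# leaves only the time-of-day component as a Timedelta.
df['date'] = df['FltDate'] + (df['ESTAD2'] - df['ESTAD2'].dt.normalize())
print(df['date'])
```

This avoids the round-trip through `strftime` strings entirely.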
|
python|pandas
| 2
|
374,280
| 42,837,067
|
Python Pandas: Use regex to replace strings with hyperlink
|
<p>Beginner's question. </p>
<p>I'm scraping housing ads with BS4 and analyse the subsequent data with Pandas. </p>
<p>I have a DataFrame with several columns. This issue considers only one of the columns, which looks like, </p>
<pre><code>district | ... |
----------------
A | ... |
B | ... |
C | ... |
... | ... |
</code></pre>
<p>I have a list of links related to the districts. For e.g. district A, the link looks like <code>www.site.com/city/district-A/</code>. </p>
<p>I want to replace each district name in the column (e.g. "A") with <code><a href="www.site.com/city/district-A/">A</a></code>. Preferably I do this replacement using regular expressions, since I have a large variety of district names and district links. </p>
<p>To make it more difficult, the district names are non-ASCII, while the links are ASCII. </p>
<p>How do I go about?</p>
|
<p>It seems you need to <code>apply</code> <code>format</code>:</p>
<pre><code>df = pd.DataFrame({'district':['A','B','C']})
df['url'] = df.district.apply('<a href="www.site.com/city/district-{0}/">{0}</a>'.format)
print (df)
district url
0 A <a href="www.site.com/city/district-A/">A</a>
1 B <a href="www.site.com/city/district-B/">B</a>
2 C <a href="www.site.com/city/district-C/">C</a>
</code></pre>
|
python|regex|pandas|replace|hyperlink
| 1
|
374,281
| 42,896,453
|
Can numpy argsort return lower index for ties?
|
<p>I have a numpy array:</p>
<pre><code>foo = array([3, 1, 4, 0, 1, 0])
</code></pre>
<p>I want the top 3 items. Calling</p>
<pre><code>foo.argsort()[::-1][:3]
</code></pre>
<p>returns </p>
<pre><code>array([2, 0, 4])
</code></pre>
<p>Notice values <code>foo[1]</code> and <code>foo[4]</code> are equal, so <code>numpy.argsort()</code> handles the tie by returning the index of the item which appears last in the array; i.e. index 4.</p>
<p>For my application I want the tie breaking to return the index of the item which appears first in the array (index 1 here). How do I implement this efficiently?</p>
|
<p>What about simply this?</p>
<pre><code>(-foo).argsort(kind='mergesort')[:3]
</code></pre>
<p>Why this works:</p>
<p>Argsorting in descending order (not what <code>np.argsort</code> does) is the same as argsorting the negated values in ascending order (what <code>np.argsort</code> does). You then just need to pick the first 3 sorted indices. Now all you need is to make sure that the sort is stable, meaning that in case of ties the first index stays first.
NOTE: I thought the default <code>kind=quicksort</code> was stable, but from the doc it appears only <code>kind=mergesort</code> is guaranteed to be stable: (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html</a>)</p>
<blockquote>
<p>The various sorting algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The three available algorithms have the following properties:</p>
<pre><code>kind         speed  worst case    work space  stable
'quicksort'    1    O(n^2)            0        no
'mergesort'    2    O(n*log(n))     ~n/2       yes
'heapsort'     3    O(n*log(n))       0        no
</code></pre>
</blockquote>
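Applied to the array from the question, the stable mergesort breaks the tie between indices 1 and 4 in favor of the earlier one:

```python
import numpy as np

foo = np.array([3, 1, 4, 0, 1, 0])

# Stable mergesort on the negated values: ties keep their original order,
# so index 1 comes before index 4 for the tied value 1.
top3_stable = (-foo).argsort(kind='mergesort')[:3]
print(top3_stable)  # [2 0 1]
```

Compare with the question's `foo.argsort()[::-1][:3]`, which returned `[2, 0, 4]`: the reversal turns a stable ascending sort into one that prefers the *last* tied index.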
|
python|arrays|numpy
| 5
|
374,282
| 42,671,418
|
Calculating Cumulative Compounded Returns in Pandas
|
<p>I have a series of daily percentage returns <code>returns</code>:</p>
<pre><code> Returns
Date
2003-03-03 0.0332
2003-03-04 0.0216
2003-03-05 0.0134
...
2010-12-29 0.0134
2010-12-30 0.0133
2010-12-31 -0.0297
</code></pre>
<p>I can calculate a return index by setting the value of the initial value to 1 and using <code>cumprod()</code></p>
<pre><code>ret_index = (1 + returns).cumprod()
ret_index[0] = 1
</code></pre>
<p>which gives me something like this:</p>
<pre><code>Date
2003-03-03 1.0000
2003-03-04 1.0123
2003-03-05 1.1334
...
2010-12-29 2.3344
2010-12-30 2.3544
2010-12-31 2.3643
</code></pre>
<p>So my cumulative compounded percentage return for the whole series is about 236%.</p>
<p>My question: I want to calculate cumulative compound percentage return for each year in the series (2003, 2004...2010). </p>
<p>The only way I can think of doing it is to iterate through my initial series, slice it by year, set the first element to 1, and calculate the return for each year. I would think there is an easier way using datetime (the index is a Datetimeindex) and resampling.</p>
<p>Can anyone help?</p>
|
<p>For me it returns slightly different results, but I think you need <code>groupby</code>:</p>
<pre><code>a = df.add(1).cumprod()
a.Returns.iat[0] = 1
print (a)
Returns
Date
2003-03-03 1.000000
2003-03-04 1.055517
2003-03-05 1.069661
2010-12-29 1.083995
2010-12-30 1.098412
2010-12-31 1.065789
def f(x):
#print (x)
a = x.add(1).cumprod()
a.Returns.iat[0] = 1
return a
print (df.groupby(df.index.year).apply(f))
Returns
Date
2003-03-03 1.000000
2003-03-04 1.055517
2003-03-05 1.069661
2010-12-29 1.000000
2010-12-30 1.026878
2010-12-31 0.996380
</code></pre>
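If only the total compounded return per year is needed, rather than a running index, it reduces to one `groupby` line: compound within each calendar year as `prod(1 + r) - 1`. Sketched here on a tiny synthetic series (made-up numbers, not the question's data):

```python
import pandas as pd

idx = pd.to_datetime(['2003-03-03', '2003-03-04',
                      '2010-12-30', '2010-12-31'])
returns = pd.Series([0.10, 0.20, 0.05, -0.05], index=idx, name='Returns')

# Compound the daily returns within each calendar year.
yearly = returns.groupby(returns.index.year).apply(lambda r: (1 + r).prod() - 1)
print(yearly)
```

For 2003 this gives `1.10 * 1.20 - 1 = 0.32`, i.e. a 32% compounded return for that year.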
|
python|pandas|dataframe
| 5
|
374,283
| 43,014,503
|
save_npz method missing from scipy.sparse
|
<p>I am using 0.17 version of <code>scipy</code> library on ubuntu 16.04 64-bit system in <code>python v3.5</code>. I am unable to find <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.save_npz.html#scipy.sparse.save_npz" rel="nofollow noreferrer">scipy.sparse.save_npz</a> operation in the library, although it is mentioned in the latest documentation.</p>
<p>On listing the callable methods from <code>scipy.sparse</code> object I am getting the following output:</p>
<pre><code>['SparseEfficiencyWarning',
'SparseWarning',
'Tester',
'bench',
'block_diag',
'bmat',
'bsr_matrix',
'coo_matrix',
'cs_graph_components',
'csc_matrix',
'csr_matrix',
'dia_matrix',
'diags',
'dok_matrix',
'eye',
'find',
'hstack',
'identity',
'issparse',
'isspmatrix',
'isspmatrix_bsr',
'isspmatrix_coo',
'isspmatrix_csc',
'isspmatrix_csr',
'isspmatrix_dia',
'isspmatrix_dok',
'isspmatrix_lil',
'kron',
'kronsum',
'lil_matrix',
'rand',
'random',
'spdiags',
'spmatrix',
'test',
'tril',
'triu',
'vstack']
</code></pre>
<p>The list should have contained <code>save_npz</code> method but it is not there. If the method has been deprecated then please tell me some good alternatives for saving and loading sparse matrices. </p>
|
<p>Yes, <code>scipy.sparse.save_npz / load_npz</code> are new in version 0.19.0 <a href="http://scipy.github.io/devdocs/release.0.19.0.html" rel="nofollow noreferrer">http://scipy.github.io/devdocs/release.0.19.0.html</a></p>
|
python|numpy|scipy
| 3
|
374,284
| 42,921,854
|
How to check if a particular cell in pandas DataFrame isnull?
|
<p>I have the following <code>df</code> in pandas.</p>
<pre><code>0 A B C
1 2 NaN 8
</code></pre>
<p>How can I check if <code>df.iloc[1]['B']</code> is NaN?</p>
<p>I tried using <code>df.isnan()</code> and I get a table like this:</p>
<pre><code>0 A B C
1 false true false
</code></pre>
<p>but I am not sure how to index the table and if this is an efficient way of performing the job at all?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="noreferrer"><code>pd.isnull</code></a>, for select use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="noreferrer"><code>loc</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="noreferrer"><code>iloc</code></a>:</p>
<pre><code>print (df)
0 A B C
0 1 2 NaN 8
print (df.loc[0, 'B'])
nan
a = pd.isnull(df.loc[0, 'B'])
print (a)
True
print (df['B'].iloc[0])
nan
a = pd.isnull(df['B'].iloc[0])
print (a)
True
</code></pre>
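For a single scalar lookup, the `at`/`iat` accessors are a slightly faster alternative to `loc`/`iloc`, and the same null check applies. A small sketch on the question's one-row frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [2], 'B': [np.nan], 'C': [8]})

# df.at[row_label, column_label] / df.iat[row_position, column_position]
print(pd.isnull(df.at[0, 'B']))   # True  (B is NaN)
print(pd.isnull(df.iat[0, 2]))    # False (C is 8)
```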
|
python|pandas|dataframe
| 40
|
374,285
| 42,929,997
|
How to replace non integer values in a pandas Dataframe?
|
<p>I have a dataframe consisting of two columns, Age and Salary</p>
<pre><code>Age Salary
21 25000
22 30000
22 Fresher
23 2,50,000
24 25 LPA
35 400000
45 10,00,000
</code></pre>
<p>How to handle outliers in Salary column and replace them with an integer? </p>
|
<p>If need replace non numeric values use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="noreferrer"><code>to_numeric</code></a> with parameter <code>errors='coerce'</code>:</p>
<pre><code>df['new'] = pd.to_numeric(df.Salary.astype(str).str.replace(',',''), errors='coerce')
.fillna(0)
.astype(int)
print (df)
Age Salary new
0 21 25000 25000
1 22 30000 30000
2 22 Fresher 0
3 23 2,50,000 250000
4 24 25 LPA 0
5 35 400000 400000
6 45 10,00,000 1000000
</code></pre>
|
python|pandas|dataframe
| 14
|
374,286
| 42,983,906
|
How to use `apply()` or other vectorized approach when previous value matters
|
<p>Assume I have a DataFrame of the following form where the first column is a random number, and the other columns will be based on the value in the previous column.</p>
<p><a href="https://i.stack.imgur.com/sDcvN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sDcvN.png" alt="enter image description here"></a></p>
<p>For ease of use, let's say I want each number to be the previous one squared. So it would look like the below.</p>
<p><a href="https://i.stack.imgur.com/PmvLK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PmvLK.png" alt="enter image description here"></a></p>
<p>I know I can write a pretty simple loop to do this, but I also know looping is not usually the most efficient in python/pandas. How could this be done with <code>apply()</code> or <code>rolling_apply()</code>? Or, otherwise be done more efficiently?</p>
<p>My (failed) attempts below:</p>
<pre><code>In [12]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})
In [13]: a
Out[13]:
0 1 2 3
0 1 0 0 0
1 2 0 0 0
2 3 0 0 0
3 4 0 0 0
4 5 0 0 0
In [14]: a = a.apply(lambda x: x**2)
In [15]: a
Out[15]:
0 1 2 3
0 1 0 0 0
1 4 0 0 0
2 9 0 0 0
3 16 0 0 0
4 25 0 0 0
In [16]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})
In [17]: pandas.rolling_apply(a,1,lambda x: x**2)
C:\WinPython64bit\python-3.5.2.amd64\lib\site-packages\spyderlib\widgets\externalshell\start_ipython_kernel.py:1: FutureWarning: pd.rolling_apply is deprecated for DataFrame and will be removed in a future version, replace with
DataFrame.rolling(center=False,window=1).apply(args=<tuple>,kwargs=<dict>,func=<function>)
# -*- coding: utf-8 -*-
Out[17]:
0 1 2 3
0 1.0 0.0 0.0 0.0
1 4.0 0.0 0.0 0.0
2 9.0 0.0 0.0 0.0
3 16.0 0.0 0.0 0.0
4 25.0 0.0 0.0 0.0
In [18]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})
In [19]: a = a[:-1]**2
In [20]: a
Out[20]:
0 1 2 3
0 1 0 0 0
1 4 0 0 0
2 9 0 0 0
3 16 0 0 0
In [21]:
</code></pre>
<p>So, my issue is mostly how to refer to the previous column value in my DataFrame calculations.</p>
|
<p>What you're describing is a recurrence relation, and I don't think there is currently any non-loop way to do that. Things like <code>apply</code> and <code>rolling_apply</code> still rely on having all the needed data available before they begin, and outputting all the result data at once at the end. That is, they don't allow you to compute the next value using earlier values of <em>the same series</em>. See <a href="https://stackoverflow.com/questions/21336794/python-recursive-vectorization-with-timeseries/21338665">this question</a> and <a href="https://stackoverflow.com/questions/23996260/python-pandas-using-the-previous-value-in-dataframe/23997030#23997030">this one</a> as well as <a href="https://github.com/pandas-dev/pandas/issues/4567" rel="nofollow noreferrer">this pandas issue</a>.</p>
<p>In practical terms, for your example, you only have three columns you want to fill in, so doing a three-pass loop (as shown in some of the other answers) will probably not be a major performance hit.</p>
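The three-pass loop mentioned above is short for the question's example, since each column depends only on the previous column; the loop runs over the few columns while each column update stays vectorized across the many rows. A sketch:

```python
import pandas as pd

a = pd.DataFrame({0: [1, 2, 3, 4, 5],
                  1: [0] * 5, 2: [0] * 5, 3: [0] * 5})

# Fill each column from the previous one (here: previous value squared).
for col in range(1, 4):
    a[col] = a[col - 1] ** 2

print(a)
```

The cost is O(number of dependent columns) passes, which is cheap when the recurrence runs across columns rather than down a long row axis.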
|
python|python-3.x|pandas
| 4
|
374,287
| 42,981,493
|
Weights and biases in tf.layers module in TensorFlow 1.0
|
<p>How do you access the weights and biases when using tf.layers module in TensorFlow 1.0? The advantage of tf.layers module is that you don't have to separately create the variables when making a fully connected layer or convolution layer. </p>
<p>I couldn't not find anything in the documentation regarding accessing them or adding them in summaries after they are created.</p>
|
<p>I don't think <code>tf.layers</code> (i.e. TF core) supports summaries yet. Rather, you have to use what's in contrib, knowing that stuff in contrib may eventually move into core but that the current API may change:</p>
<blockquote>
<p>The layers module defines convenience functions summarize_variables,
summarize_weights and summarize_biases, which set the collection
argument of summarize_collection to VARIABLES, WEIGHTS and BIASES,
respectively.</p>
</blockquote>
<p>Checkout: </p>
<ul>
<li><p><a href="https://www.tensorflow.org/api_guides/python/contrib.layers#Summaries" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/contrib.layers#Summaries</a></p></li>
<li><p><a href="https://github.com/tensorflow/tensorflow/blob/131c3a67a7b8d27fd918e0bc5bddb3cb086de57e/tensorflow/python/layers/layers.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/131c3a67a7b8d27fd918e0bc5bddb3cb086de57e/tensorflow/python/layers/layers.py</a></p></li>
</ul>
|
tensorflow
| 2
|
374,288
| 42,796,332
|
Run Apriori algorithm in python 2.7
|
<p>I have a DataFrame in python by using pandas which has 3 columns and 80.000.000 rows.</p>
<p>The Columns are: {event_id,device_id,category}.
<a href="https://i.stack.imgur.com/mXKW9.png" rel="nofollow noreferrer">here is the first 5 rows of my df</a></p>
<p>each device has many events and each event can have more than one category.</p>
<p>I want to run Apriori algorithm to find out which categories seem together.</p>
<p>My idea is to create a list of lists[[]]: to save the categories which are in the same event for each device. like: [('a'),('a','b')('d'),('s','a','b')] then giving the list of lists as transactions to the algorithm.
I need help to create the list of lists. </p>
<p>If you have better idea please tell me because I am new in Python and this was the only way I found out.</p>
|
<p>A little bit of a late response here, but to me it seems like apriori might not be the right choice for your data. Traditional apriori looks at binary data (either "in the cart" or "not in the cart" for the classic market basket example), for a list of transactions that are all of the same type. What you seem to have is a multilevel/hierarchical association question that might be better suited to a more scalable algorithm.</p>
<p>That said, answering your formatting question, your first step would be to pivot your data so your transactions reflect rows, and the columns represent possible items to appear in each transaction. This can be achieved with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">DataFrame.pivot</a>, and would look something like this (code from the docs, posted here for convenience):</p>
<pre><code>df = pd.DataFrame({'foo': ['one','one','one','two','two','two'],
'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
'baz': [1, 2, 3, 4, 5, 6]})
>>> df
foo bar baz
0 one A 1
1 one B 2
2 one C 3
3 two A 4
4 two B 5
5 two C 6
df.pivot(index='foo', columns='bar', values='baz')
A B C
one 1 2 3
two 4 5 6
</code></pre>
<p>From there you can create a list of lists from the dataframe using:</p>
<pre><code>df.values.tolist()
</code></pre>
<p>That question was previously answered <a href="https://stackoverflow.com/questions/28006793/pandas-dataframe-to-list-of-lists">here</a>.</p>
<p>If you end up using apriori, there's already a package that has implemented it, which could save you some time, called <a href="https://pypi.org/project/apyori/" rel="nofollow noreferrer">apyori</a>.</p>
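<p>For the device/event/category layout in the question, a groupby-based sketch may be even more direct than pivoting; the column names below are assumed from the question's screenshot:</p>

```python
import pandas as pd

# Toy frame standing in for the 80M-row original; column names are
# assumed from the question (event_id, device_id, category)
df = pd.DataFrame({
    'event_id':  [1, 1, 2, 3, 3, 3],
    'device_id': ['d1', 'd1', 'd1', 'd2', 'd2', 'd2'],
    'category':  ['a', 'b', 'd', 's', 'a', 'b'],
})

# One transaction per (device, event): the categories seen together
transactions = (df.groupby(['device_id', 'event_id'])['category']
                  .apply(list)
                  .tolist())
```

<p>Each inner list is then one transaction to feed to the association-rule algorithm.</p>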
|
python-2.7|pandas|dataframe|transactions|apriori
| 0
|
374,289
| 42,683,189
|
pandas series extractall error
|
<p>I have a pandas series (named df) in the following format:</p>
<pre><code> col1
a GEOS 13100
b MATH 13100-MATH 13200
c MATH 19100-19200
d SPAN 10300 or 20300
e EGPT 10101-10102-10103
f MOGK 10100/30100
g PHSC 12600 must be taken before PHSC 12620
</code></pre>
<p>I want to extract all courses ("[A-Z]{4}\s*\d{5}" or "\d{5}") from col1. The desired data set will be in the following format:</p>
<pre><code> col1 col2 col3 col4 col5
a GEOS 13100
b MATH 13100 - MATH 13200
c MATH 19100 - 19200
d SPAN 10300 or 20300
e EGPT 10101 - 10102 - 10103
f MOGK 10100 / 30100
g PHSC 12600 PHSC 12620
</code></pre>
<p>I tried </p>
<pre><code>df.col1.str.extract('(([A-Z]{4}\s*\d{5}?)|(\d{5}?)).*?(and|\-|or|\, or|\:|\/|\.|\;|\(|\s?)')
</code></pre>
<p>and got the first matched pattern.</p>
<p>I tried </p>
<pre><code>df.col1.str.extractall('(([A-Z]{4}\s*\d{5}?)|(\d{5}?)).*?(and|\-|or|\, or|\:|\/|\.|\;|\(|\s?)')
</code></pre>
<p>but got the following error:</p>
<pre><code>Length of names must match number of levels in MultiIndex.
</code></pre>
<p>Anyone has any idea what I should do?</p>
|
<p>If you are using an older version of pandas, you might have run into something like <a href="https://github.com/pandas-dev/pandas/pull/13156" rel="nofollow noreferrer">this issue</a> (although your indices appear to not be of the problematic form). In version 0.19.0, both cases run without errors:</p>
<pre><code>In [25]: df = pd.DataFrame({'col1': ['SPAN 10300 or 20300', 'SPAN 10300 or 20301', 'MOGK 10100/30100', 'PHSC 12600 must be taken before PHSC 12620']})
In [26]: df.index = ['a', 'b', 'c', 'd']
In [27]: df
Out[27]:
col1
a SPAN 10300 or 20300
b SPAN 10300 or 20301
c MOGK 10100/30100
d PHSC 12600 must be taken before PHSC 12620
In [28]: df.col1.str.extract('(([A-Z]{4}\s*\d{5}?)|(\d{5}?)).*?(and|\-|or|\, or|\:|\/|\.|\;|\(|\s?)')
/usr/local/bin/ipython:1: FutureWarning: currently extract(expand=None) means expand=False (return Index/Series/DataFrame) but in a future version of pandas this will be changed to expand=True (return DataFrame)
#!/usr/local/bin/python3.5
Out[28]:
0 1 2 3
a SPAN 10300 SPAN 10300 NaN
b SPAN 10300 SPAN 10300 NaN
c MOGK 10100 MOGK 10100 NaN /
d PHSC 12600 PHSC 12600 NaN
In [29]: df.col1.str.extractall('(([A-Z]{4}\s*\d{5}?)|(\d{5}?)).*?(and|\-|or|\, or|\:|\/|\.|\;|\(|\s?)')
Out[29]:
0 1 2 3
match
a 0 SPAN 10300 SPAN 10300 NaN
1 20300 NaN 20300 NaN
b 0 SPAN 10300 SPAN 10300 NaN
1 20301 NaN 20301 NaN
c 0 MOGK 10100 MOGK 10100 NaN /
1 30100 NaN 30100 NaN
d 0 PHSC 12600 PHSC 12600 NaN
1 PHSC 12620 PHSC 12620 NaN NaN
</code></pre>
|
python|pandas
| 0
|
374,290
| 42,685,994
|
How to get a tensorflow op by name?
|
<p>You can get a tensor by name with <code>tf.get_default_graph().get_tensor_by_name("tensor_name:0")</code></p>
<p>But can you get an operation, such as <code>Optimizer.minimize</code>, or an <code>enqueue</code> operation on a queue?</p>
<p>In my first model I returned all tensors and ops I would need from a <code>build_model</code> function. But the list of tensors got ugly. In later models I tossed all tensors and ops in a dictionary for easier access. This time around I thought I'd just look up tensors by name as I needed them, but I don't know how to do that with ops.</p>
<p>Or is there a better way to do this? I find various tensors and ops are needed all over the place. Training, inference code, test cases, hence the desire for a nice standard way of accessing the various parts of the graph without passing variables all over the place.</p>
|
<p>You can use the <a href="https://www.tensorflow.org/api_docs/python/tf/Graph#get_operation_by_name" rel="noreferrer"><code>tf.Graph.get_operation_by_name()</code></a> method to get a <code>tf.Operation</code> by name. For example, to get an operation called <code>"enqueue"</code> from the default graph:</p>
<pre><code>op = tf.get_default_graph().get_operation_by_name("enqueue")
</code></pre>
|
python|tensorflow
| 31
|
374,291
| 43,004,991
|
add one row from another dataframe in pandas
|
<p>Here's the thing: I need to put one row from another dataframe at the top of my main dataframe in pandas, above the first row where the columns are named.</p>
<p>Sample : </p>
<pre><code> 1value 2value 3value 4value 5value
acity 4 3 6 2 6
bcity 2 6 6 4 1
ccity 5 11 53 6 3
dcity 5 1 4 6 3
gcity 6 4 2 7 4
</code></pre>
<p>And the other sample:</p>
<pre><code>1value 2value 3value 4value 5value
2 5 2 6 3
</code></pre>
<p>And now I need to add the values of the second sample to the top of the first sample. Desired output: </p>
<pre><code> 2 5 2 6 3
1value 2value 3value 4value 5value
acity 4 3 6 2 6
bcity 2 6 6 4 1
ccity 5 11 53 6 3
dcity 5 1 4 6 3
gcity 6 4 2 7 4
</code></pre>
<p>And just to mention, I have about 3000 rows and 250 columns in this sample dataframe.</p>
<p>I don't have any code yet; I tried to find something here...</p>
|
<p>Not sure if this is what you need, but a multi index data frame looks like the output:</p>
<p><em>df1 or second sample</em>:</p>
<p><a href="https://i.stack.imgur.com/HeKVW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeKVW.png" alt="enter image description here"></a></p>
<p><em>df or the first sample</em>:</p>
<p><a href="https://i.stack.imgur.com/zMRpJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zMRpJ.png" alt="enter image description here"></a></p>
<p>Rename the columns with a multi-index columns:</p>
<pre><code>df.columns = pd.MultiIndex.from_arrays([df1.values[0], df.columns])
</code></pre>
<p><a href="https://i.stack.imgur.com/tzRpH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzRpH.png" alt="enter image description here"></a></p>
|
python|pandas|dataframe
| 1
|
374,292
| 42,810,531
|
NumPy doesn't recognize well array shape
|
<p>I have a code which is as follows:</p>
<pre><code>data = np.array([[[i, j], i * j] for i in range(10) for j in range(10)])
print(data)
x = np.array(data[:,0])
x1 = x[:,0]
x2 = x[:,1]
print(x)
</code></pre>
<p><code>data</code> correctly outputs <code>[[[0,0],0],[[0,1],0],[[0,2],0],...,[[9,9],81]]</code> which is, by the way, the multiplication table and its results.</p>
<p>So, the first column of <code>data</code> (which is <code>x</code>) must be separated into <code>x1</code> and <code>x2</code>, which are its first and last columns respectively. I think I did this right, but it raises an error saying <code>too many indices for array</code>. What am I doing wrong?</p>
|
<p><code>data.dtype</code> is <code>object</code> because the elements of <code>[[i,j],k]</code> are not homogeneous. A workaround for you:</p>
<pre><code>data = np.array([(i, j, i * j) for i in range(10) for j in range(10)])
print(data)
x1 = data[:,:2]
x2 = data[:,2]
</code></pre>
<p><code>data.shape</code> is now <code>(100,3)</code>, <code>data.dtype</code> is <code>int</code> and <code>x1</code> and <code>x2</code> what you want.</p>
|
python-3.x|numpy
| 1
|
374,293
| 42,747,032
|
Python linear least squares function not working
|
<p>Ok, so I'm writing a function for linear least squares in Python and it's pretty much just one equation. Yet for some reason, I'm getting a ValueError. My best guess is it has something to do with the <code>.reshape</code> function, since in this question I had <a href="https://stackoverflow.com/questions/42737252/matlab-to-python-translation-of-design-matrix-function">a very similar problem</a> and reshaping was the solution. I've read up on it and from what I gather, w in my function has shape (n,) and the result would be (n,1), as in my previously mentioned question. I tried reshaping <code>x_train</code> and <code>y_train</code>, but I only got an error that I can't change the size of the array, so I guess my parameters were set wrong. Right now I'm lost, and I have many more functions like these to go through -- I wish I could understand what I am missing in my code. The equation seems to be in order, so I suppose there's something I should be adding every time, possibly the <code>reshape</code> function, because I'm still using the same models as in the last situation. I hope this is the right place to post this question; I don't know what else to do, but I really want to understand so I won't have these problems in the future. Thank you.</p>
<p>Code (np. stands for numpy):</p>
<pre><code>def least_squares(x_train, y_train, M):
'''
:param x_train: training input vector Nx1
:param y_train: training output vector Nx1
:param M: polynomial degree
:return: tuple (w,err), where w are model parameters and err mean squared error of fitted polynomial
'''
    w = np.linalg.inv(design_matrix(x_train, M).T * design_matrix(x_train, M)) * design_matrix(x_train, M).T * y_train
err = mean_squared_error(x_train, y_train, w)
return (w, err)
</code></pre>
<p><code>design_matrix</code> and <code>mean_squared_error</code> are working just fine.
Traceback:</p>
<pre><code>ERROR: test_least_squares_err (test.TestLeastSquares)
----------------------------------------------------------------------
Traceback (most recent call last):
File "\content.py", line 48, in least_squares
w = np.linalg.inv(design_matrix(x_train, M).T * design_matrix(x_train, M)) * design_matrix(x_train, M).T * y_train
ValueError: operands could not be broadcast together with shapes (7,20) (20,7)
</code></pre>
|
<p>Assuming that <code>design_matrix</code> returns a matrix, this code</p>
<pre><code>design_matrix(x_train, M).T * design_matrix(x_train, M)
</code></pre>
<p>most likely does not do what is intended since <code>*</code> is performing element-wise multiplication (Hadamard product of two matrices). Because your matrices are not square, it thus complains about incompatible shape.</p>
<p>To obtain matrix-matrix product, one might do (assuming numpy was imported as <code>import numpy as np</code>):</p>
<pre><code>np.dot(design_matrix(x_train, M).T, design_matrix(x_train, M))
</code></pre>
<p>Similar reasoning then applies to the rest of the statement <code>* design_matrix(x_train, M).T * y_train</code>...</p>
<p>Also, you might want to evaluate <code>design_matrix</code> only once, e.g., to put something like</p>
<pre><code>mat = design_matrix(x_train, M)
</code></pre>
<p>before the line calculating <code>w</code> and then use merely <code>mat</code>.</p>
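<p>Putting the pieces together, a minimal sketch of the whole estimate; it uses <code>np.linalg.solve</code> on the normal equations instead of an explicit inverse (numerically preferable), with <code>mat</code> standing in for the design matrix:</p>

```python
import numpy as np

# Toy design matrix and targets standing in for design_matrix(x_train, M)
# and y_train; here y is exactly linear in x, so the fit is exact
mat = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])   # N x (M+1)
y = np.array([1.0, 3.0, 5.0])  # N

# Solve the normal equations (mat^T mat) w = mat^T y for the parameters
w = np.linalg.solve(mat.T.dot(mat), mat.T.dot(y))
```

<p>Since y = 1 + 2x exactly, the recovered parameters are [1, 2].</p>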
|
python|numpy|linear-regression
| 2
|
374,294
| 42,789,715
|
How do I improve the performance in parallel computing with dask
|
<p>I have a pandas dataframe and converted to dask dataframe</p>
<p>df.shape = (60893, 2)</p>
<p>df2.shape = (7254909, 2)</p>
<pre><code>df['name_clean'] = df['Name'].apply(lambda x :re.sub('\W+','',x).lower(),meta=('x', 'str'))
names = df['name_clean'].drop_duplicates().values.compute()
df2['found'] = df2['name_clean2'].apply(lambda x: any(name in x for name in names),meta=('x','str')) ~ takes 834 ms
df2.head(10) ~ takes 3 min 54 sec
</code></pre>
<p>How can I see the shape of a dask dataframe?</p>
<p>Why does <code>.head()</code> take so much time? Am I doing this the right way?</p>
|
<p>You cannot iterate over a dask.dataframe or dask.array. You need to call the <code>.compute()</code> method to turn it into a Pandas dataframe/series or NumPy array first.</p>
<p>Note that just calling the <code>.compute()</code> method and then forgetting the result doesn't do anything. You need to save the result as a variable.</p>
<pre><code>dask_series = df.Name.apply(lambda x: re.sub('\W+', '', x).lower(),
                            meta=('x', 'str'))
pandas_series = dask_series.compute()
for name in pandas_series:
...
</code></pre>
|
python|list|pandas|dask
| 3
|
374,295
| 42,897,010
|
Error while building list in python
|
<p>I am trying to build a list in Python. The list contains lists; a single inner list consists of various features of an audio signal, like standard deviation, mean frequency etc. But when I print the outer list I get a blank list. Here is my code.</p>
<pre><code>from scipy.io.wavfile import read # to read wavfiles
import matplotlib.pyplot as plotter
from sklearn.tree import DecisionTreeClassifier as dtc
import numpy as np
import os
import scipy
import math
np.set_printoptions(precision=4)
def __init__(self, criterion="gini", splitter="best", max_depth=None, min_samples_split=10, min_samples_leaf=1, min_weight_fraction_leaf=0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_split=1e-7, class_weight=None, presort=False):
super(DecisionTreeClassifier, self).__init__(criterion=criterion, splitter=splitter, max_depth=max_depth, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf, min_weight_fraction_leaf=min_weight_fraction_leaf, max_features=max_features, max_leaf_nodes=max_leaf_nodes, class_weight=class_weight, random_state=random_state, min_impurity_split=min_impurity_split, presort=presort)
fList = [] #feature list
mfList = [] #main feature list
labels = ["angry", "angry", "angry", "angry", "angry", "angry", "fear", "fear", "happy", "happy", "happy", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad", "sad"]
label = [1,2,3,4,5,6,7,8,9,10]
def stddev(lst,mf):
sum1 = 0
len1 = len(lst)-1
for i in range(len(lst)):
sum1 += pow((lst[i]-mf),2)
sd = np.sqrt(sum1/len1)
fList.append(sd)
def find_iqr(num,num_array=[],*args):
num_array.sort()
l=int((int(num)+1)/4)
m=int((int(num)+1)/2)
med=num_array[m]
u=int(3*(int(num)+1)/4)
fList.append(num_array[l]) #first quantile
fList.append(med) #median
fList.append(num_array[u]) #third quantile
fList.append(num_array[u]-num_array[l]) #inter quantile range
def build(path1):
dirlist=os.listdir(path1)
n=1
mf=0
for name in dirlist:
path=path1+name
print ("File ",n)
fs, x = read(path) #fs will have sampling rate and x will have sample #
#print ("The sampling rate: ",fs)
#print ("Size: ",x.size)
#print ("Duration: ",x.size/float(fs),"s")
'''
plotter.plot(x)
plotter.show() #x-axis is in samples
t = np.arange(x.size)/float(fs) #creating an array with values as time w.r.t samples
plotter.plot(t) #plot t w.r.t x
plotter.show()
y = x[100:600]
plotter.plot(y)
plotter.show() # showing close-up of samples
'''
j=0
med=0
for i in x:
j=j+1
mf=mf+i
mf=mf/j
fList.append(np.max(abs(x))) #amplitude
fList.append(mf) #mean frequency
find_iqr(j,x)
fList.append((3*med)-(2*mf)) #mode
stddev(x,mf)
#fftc = np.fft.rfft(x).tolist()
#mr = 20*scipy.log10(scipy.absolute(x)).tolist()
#fList.append(fftc) #1D dft
#fList.append(mr) #magnitude response
mfList.append(fList)
fList[:] = []
n=n+1
path1 = '/home/vishnu/Desktop/Trainingsamples/'
path2 = '/home/vishnu/Desktop/TestSamples/'
clf = dtc() # this class is used to make decision tree
build(path1)
print(mfList)
clf.fit(mfList,label)
mfList[:] = [] #clear mflist
tlist = build(path2)
res = clf.predict(tlist)
print(res)
</code></pre>
<p>The following is my output screen:</p>
<pre><code>('File ', 1)
SA1.py:50: RuntimeWarning: invalid value encountered in sqrt
sd = np.sqrt(sum1/len1)
('File ', 2)
('File ', 3)
('File ', 4)
('File ', 5)
('File ', 6)
('File ', 7)
('File ', 8)
('File ', 9)
('File ', 10)
[[], [], [], [], [], [], [], [], [], []]
Traceback (most recent call last):
File "SA1.py", line 111, in <module>
clf.fit(mfList,label)
File "/home/vishnu/.local/lib/python2.7/site-packages/sklearn/tree/tree.py", line 739, in fit
X_idx_sorted=X_idx_sorted)
File "/home/vishnu/.local/lib/python2.7/site-packages/sklearn/tree/tree.py", line 122, in fit
X = check_array(X, dtype=DTYPE, accept_sparse="csc")
File "/home/vishnu/.local/lib/python2.7/site-packages/sklearn/utils/validation.py", line 424, in check_array
context))
ValueError: Found array with 0 feature(s) (shape=(10, 0)) while a minimum of 1 is required.
</code></pre>
<p>As one can see, the line <code>print(mfList)</code> prints <code>[[], [], [], [], [], [], [], [], [], []]</code>, a list of empty lists. Where does my mistake lie? Please guide.</p>
|
<p>The problem comes from the <code>fList[:] = []</code> you call at the end. I did a small example to test it:</p>
<pre><code>l = []
ml = []
def f(x):
for i in range(0, x):
l.append(i)
ml.append(l)
l[:] = []
f(10)
f(5)
print(ml)
</code></pre>
<p>This prints <code>ml</code> containing two empty lists:</p>
<pre><code>>>> [[], []]
</code></pre>
<p>If I remove the <code>l[:]=[]</code> and replace it with <code>l = []</code> I get the two lists with their contents inside <code>ml</code>:</p>
<pre><code>>>> [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4]]
</code></pre>
<p>The <code>fList[:]=[]</code> means "Replace all items inside <code>fList</code> with an empty item". You are working with references here and just because you have attached <code>fList</code> to <code>mfList</code> inside that scope doesn't mean you can't still access those elements through <code>fList</code>. So if you replace the elements in <code>fList</code> with new ones (in this case <code>[]</code>), it will also affect <code>mfList</code>.</p>
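<p>A minimal sketch of the fix, then, is to append a copy so that clearing <code>fList</code> no longer empties what was already stored:</p>

```python
fList = []
mfList = []

for n in range(3):
    fList.append(n)             # stand-in for building one file's features
    mfList.append(list(fList))  # store a copy, not a reference
    fList[:] = []               # safe now: only the working list is cleared
```

<p>After the loop, <code>mfList</code> keeps each file's feature list intact instead of ten references to the same emptied list.</p>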
|
python|list|numpy|nested-lists
| 0
|
374,296
| 42,626,676
|
Comparing between two rows in pandas
|
<p>Is there any way I can compare two rows in pandas? </p>
<p>I would do something like this in SQL:</p>
<pre><code>Select * from table t1, table t2 where t1.price - t2.price > 10 and t1.type = 'abc' and t2.type = 'def'
</code></pre>
<p>The best way I can think of is subtracting rows in a pandas DataFrame, based on this: </p>
<pre><code>abc = df[df['type'] == 'abc']
def_ = df[df['type'] == 'def']  # note: 'def' itself is a reserved keyword
</code></pre>
<p>And I am stuck on how to go about doing this. </p>
<p>Something like </p>
<pre><code>price type
10 abc
10 def
30 abc
15 def
</code></pre>
<p>It should return rows</p>
<pre><code>10 def
30 abc
15 def
</code></pre>
|
<p>Aceminer, I'm still not totally clear what you're looking for on the comparison, but I set something similar up. The one caveat here is that this code compares one line to the following so that the last line you have doesn't show up since there is nothing to compare it to.</p>
<pre><code>df = pd.DataFrame({'price':[10,10,30,15], 'type':['abc','def','abc','def']})
price type
0 10 abc
1 10 def
2 30 abc
3 15 def
</code></pre>
<p>Filter:</p>
<pre><code>df = df[(df['type'] != df['type'].shift(-1)) & (abs(df['price'] - df['price'].shift(-1)) > 10)]
</code></pre>
<p>Results in:</p>
<pre><code> price type
1 10 def
2 30 abc
</code></pre>
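<p>If the goal really is the SQL-style self-join from the question (every <code>abc</code> row against every <code>def</code> row, not just adjacent lines), a merge-based sketch works as well; note that <code>how='cross'</code> needs pandas 1.2 or later:</p>

```python
import pandas as pd

df = pd.DataFrame({'price': [10, 10, 30, 15],
                   'type': ['abc', 'def', 'abc', 'def']})

t1 = df[df['type'] == 'abc']
t2 = df[df['type'] == 'def']

# Cartesian product of the two subsets, then the price condition from the SQL
pairs = t1.merge(t2, how='cross', suffixes=('_1', '_2'))
result = pairs[pairs['price_1'] - pairs['price_2'] > 10]
```

<p>This mirrors <code>t1.price - t2.price &gt; 10</code> over all row pairs, keeping (30, abc) against both def rows.</p>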
|
python|pandas
| 0
|
374,297
| 27,205,133
|
Python numpy nan compare to true in arrays with strings
|
<p>I am trying to compare two numpy arrays which contain numbers, string and nans. I want to know how many items in the array are equal.</p>
<p>When comparing these two arrays:</p>
<pre><code>c =np.array([1,np.nan]);
d =np.array([2,np.nan]);
print (c==d)
[False False]
</code></pre>
<p>Which is the expected behaviour.</p>
<p>However, when comparing:</p>
<pre><code>a =np.array([1,'x', np.nan]);
b =np.array([1,'x', np.nan]);
print (a==b)
[ True True True]
</code></pre>
<p>That makes no sense to me, how can adding a string to the array change the way nans are compared? Any ideas?</p>
<p>Thanks!</p>
|
<p>If you examine the arrays, you'll see that <code>np.nan</code> has been converted to string (<code>'n'</code>):</p>
<pre><code>In [48]: a = np.array([1, 'x', np.nan])
In [49]: a
Out[49]:
array(['1', 'x', 'n'],
dtype='|S1')
</code></pre>
<p>And <code>'n' == 'n'</code> is <code>True</code>.</p>
<p>What I don't understand is why changing the array's <code>dtype</code> to <code>object</code> doesn't change the result of the comparison:</p>
<pre><code>In [72]: a = np.array([1, 'x', np.nan], dtype=object)
In [73]: b = np.array([1, 'x', np.nan], dtype=object)
In [74]: a == b
Out[74]: array([ True, True, True], dtype=bool)
In [75]: a[2] == b[2]
Out[75]: False
In [76]: type(a[2])
Out[76]: float
In [77]: type(b[2])
Out[77]: float
</code></pre>
<p>It's almost as if the two NaN objects are compared by reference rather than by value:</p>
<pre><code>In [79]: id(a[2])
Out[79]: 26438340
In [80]: id(b[2])
Out[80]: 26438340
</code></pre>
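<p>The identity shortcut is easy to demonstrate in plain Python: container comparison tests <code>is</code> before falling back to <code>==</code>, so the very same NaN object compares equal inside a list even though comparing it to itself directly is False:</p>

```python
import math

nan = float('nan')

direct = (nan == nan)         # IEEE 754: NaN is never equal to anything
contained = ([nan] == [nan])  # identity shortcut fires before == is tried

# Both lists hold the identical object, so the comparison reports equality
```

<p>This is consistent with the two object arrays above sharing the same NaN object.</p>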
|
python|arrays|numpy
| 1
|
374,298
| 27,092,768
|
Plotting sectionwise defined function with python/matplotlib
|
<p>I'm new to Python and SciPy. Currently I am trying to plot a p-type transistor transfer curve in matplotlib. It is sectionwise defined and I am struggling to find a good way to get the resulting curve. What I have so far is:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.constants import epsilon_0
V_GS = np.linspace(-15, 10, 100) # V
V_th = 1.9 # V
V_DS = -10 # V
mu_p = 0.1e-4 # m²/Vs
epsilon_r = 7.1
W = 200e-6 # m
L = 10e-6 # m
d = 70e-9 # m
C_G = epsilon_0*epsilon_r/d
beta = -mu_p*C_G*W/L
Ids_cutoff = np.empty(100); Ids_cutoff.fill(-1e-12)
Ids_lin = beta*((V_GS-V_th)*V_DS-V_DS**2/2)
Ids_sat = beta*1/2*(V_GS-V_th)**2
plt.plot(V_GS, Ids_lin, label='lin')
plt.plot(V_GS, Ids_sat, label='sat')
plt.plot(V_GS, Ids_cutoff, label='cutoff')
plt.xlabel('V_GS [V]')
plt.ylabel('I [A]')
plt.legend(loc=0)
plt.show()
</code></pre>
<p>This gives me the three curves over the complete V_GS range. Now I would like to define</p>
<pre><code>Ids = Ids_cutoff for V_GS >= V_th
Ids = Ids_lin for V_GS < V_th; V_DS >= V_GS - V_th
Ids = Ids_sat for V_GS < V_th; V_DS < V_GS - V_th
</code></pre>
<p>I found an example for np.vectorize() but somehow I am struggling to understand how to work with these arrays. I could create a for loop that goes through all the values, but I am pretty sure that there are more effective ways to do this.</p>
<p>Besides deriving a list of values for Ids and plotting it vs V_GS, is there also a possibility to just sectionwise plot the three equations with matplotlib as one curve?</p>
|
<p>Do you want to fill the array <code>Vds</code> according to your selectors?</p>
<pre><code>Vds = np.zeros_like(V_GS)  # same shape as V_GS
cut = V_GS >= V_th
lin = (V_GS < V_th) & (V_DS >= V_GS - V_th)
sat = (V_GS < V_th) & (V_DS < V_GS - V_th)
Vds[cut] = Ids_cutoff[cut]  # index both sides so the shapes match
Vds[lin] = Ids_lin[lin]
Vds[sat] = Ids_sat[sat]
</code></pre>
<p>By plotting sectionwise, you mean leaving out a certain range? You can use np.nan for that:</p>
<pre><code>plt.plot([0,1,2,3,np.nan,10,11], np.arange(7))
</code></pre>
<p>results in:</p>
<p><img src="https://i.stack.imgur.com/BSeEk.png" alt="enter image description here"></p>
<p>As <em>Not a Number</em> is not plottable, no line will be drawn.</p>
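<p>For the sectionwise definition itself, <code>np.select</code> takes the condition list and choice list in one call, which reads closer to the piecewise spec; a sketch with the question's conditions and a placeholder prefactor:</p>

```python
import numpy as np

V_GS = np.linspace(-15, 10, 100)
V_th = 1.9
V_DS = -10.0
beta = -1.0  # placeholder for -mu_p*C_G*W/L from the question

conditions = [V_GS >= V_th,
              (V_GS < V_th) & (V_DS >= V_GS - V_th),
              (V_GS < V_th) & (V_DS < V_GS - V_th)]
choices = [np.full_like(V_GS, -1e-12),                    # cutoff
           beta * ((V_GS - V_th) * V_DS - V_DS**2 / 2),   # linear
           beta * 0.5 * (V_GS - V_th)**2]                 # saturation
Ids = np.select(conditions, choices, default=np.nan)
```

<p>The result is one array that can be plotted as a single curve with <code>plt.plot(V_GS, Ids)</code>.</p>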
|
python|numpy|matplotlib|plot|scipy
| 1
|
374,299
| 27,321,299
|
pandas function, use previously computed value
|
<p>Had a simple question that I have not found a simple answer to. As an example this data frame can be used: </p>
<pre><code>A = pd.Series([0.1,-0.2,0.14,0.12,-0.11])
B = pd.Series([1.0,3.0,2.0,6.0,9.0])
df = pd.DataFrame({'A':A,'B':B})
</code></pre>
<p>I now would like to create a column C as follows:</p>
<pre><code>C_i = A_i*(B_i+C_{i-1})
</code></pre>
<p>i.e. to compute the value of C I need the previously computed value of C. This can be done by a simple for loop, but I would like to use map, apply or some other pandas functionality. Can this be done in a simple manner?</p>
<p>I tested it in a spreadsheet and this what I am looking for:</p>
<pre><code>A B C
0,1 1 0,10000
−0,2 3 −0,62000
0,14 2 0,19320
0,12 6 0,74318
−0,11 9 −1,07175
</code></pre>
|
<p>One way to get this:</p>
<pre><code>df['C'] = df.A * df.B
df['C'] = df.C + (df.A * df.C.shift().fillna(0))
df
</code></pre>
<p>Which yields:</p>
<pre><code> A B C
0 0.10 1 0.100
1 -0.20 3 -0.620
2 0.14 2 0.196
3 0.12 6 0.754
4 -0.11 9 -1.069
</code></pre>
<p>Which looks like what you wanted.</p>
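<p>Note that the values differ slightly from the spreadsheet because the shift-based formula only carries one step of history. The exact recurrence C_i = A_i * (B_i + C_{i-1}) can be computed with a running accumulation, sketched here with <code>itertools.accumulate</code> (its <code>initial</code> argument needs Python 3.8+):</p>

```python
import itertools
import pandas as pd

A = pd.Series([0.1, -0.2, 0.14, 0.12, -0.11])
B = pd.Series([1.0, 3.0, 2.0, 6.0, 9.0])
df = pd.DataFrame({'A': A, 'B': B})

# C_i = A_i * (B_i + C_{i-1}), with C_{-1} taken as 0
def step(c_prev, ab):
    a, b = ab
    return a * (b + c_prev)

df['C'] = list(itertools.accumulate(zip(df.A, df.B), step, initial=0.0))[1:]
```

<p>This reproduces the spreadsheet values (0.10000, -0.62000, 0.19320, 0.74318, -1.07175).</p>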
|
pandas
| 0
|