Unnamed: 0
| id
| title
| question
| answer
| tags
| score
|
|---|---|---|---|---|---|---|
377,300
| 17,702,153
|
Numpy, dot products on multidimensional arrays
|
<p>I have some doubts about the numpy.dot product.</p>
<p>I define a 6x6 matrix like:</p>
<pre><code>C=np.zeros((6,6))
C[0,0], C[1,1], C[2,2] = 129.5, 129.5, 129.5
C[3,3], C[4,4], C[5,5] = 25, 25, 25
C[0,1], C[0,2] = 82, 82
C[1,0], C[1,2] = 82, 82
C[2,0], C[2,1] = 82, 82
</code></pre>
<p>Then I recast it as a rank-4 tensor using a multidimensional array:</p>
<pre><code>def long2short(m, n):
    """
    Given two indices m and n of the stiffness tensor, the function
    returns i, the index of the Voigt matrix:
    i = long2short(m, n)
    """
    if m == n:
        i = m
    elif (m == 1 and n == 2) or (m == 2 and n == 1):
        i = 3
    elif (m == 0 and n == 2) or (m == 2 and n == 0):
        i = 4
    elif (m == 0 and n == 1) or (m == 1 and n == 0):
        i = 5
    return i

c = np.zeros((3,3,3,3))
for m in range(3):
    for n in range(3):
        for o in range(3):
            for p in range(3):
                i = long2short(m, n)
                j = long2short(o, p)
                c[m, n, o, p] = C[i, j]
</code></pre>
<p>And then I would like to change the coordinate reference system of the tensor by using the rotation matrix that I define like:</p>
<pre><code>Q=np.array([[sqrt(2.0/3), 0, 1.0/sqrt(3)], [-1.0/sqrt(6), 1.0/sqrt(2), 1.0/sqrt(3)], [-1.0/sqrt(6), -1.0/sqrt(2), 1.0/sqrt(3)]])
Qt = Q.transpose()
</code></pre>
<p>The matrix is orthogonal (although the numerical precision is not perfect):</p>
<pre><code>In [157]: np.dot(Q, Qt)
Out[157]:
array([[ 1.00000000e+00, 4.28259858e-17, 4.28259858e-17],
[ 4.28259858e-17, 1.00000000e+00, 2.24240114e-16],
[ 4.28259858e-17, 2.24240114e-16, 1.00000000e+00]])
</code></pre>
<p>But why then if I perform:</p>
<pre><code>In [158]: a=np.dot(Q,Qt)
In [159]: c_mat=np.dot(a, c)
In [160]: a1 = np.dot(Qt, c)
In [161]: c_mat1=np.dot(Q, a1)
</code></pre>
<p>I get the expected value for c_mat (=c) but not for c_mat1? Is there some subtlety to using dot on multidimensional arrays?</p>
|
<p>The issue is that <code>np.dot(a,b)</code> for multidimensional arrays makes the dot product of the last dimension of <code>a</code> with the second-to-last dimension of <code>b</code>:</p>
<pre><code>np.dot(a, b) == np.tensordot(a, b, axes=([-1], [-2]))
</code></pre>
<p>As you see, it does not work as a matrix multiplication for multidimensional arrays. Using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html" rel="noreferrer"><code>np.tensordot()</code></a> allows you to control in which <code>axes</code> from each input you want to perform the dot product. For example, to get the same result in <code>c_mat1</code> you can do:</p>
<pre><code>c_mat1 = np.tensordot(Q, a1, axes=([-1],[0]))
</code></pre>
<p>Which is forcing a matrix multiplication-like behavior.</p>
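<p>For reference, a minimal sketch of the full rank-4 coordinate transform (assuming the names <code>Q</code> and <code>c</code> from the question; this applies <code>Q</code> to all four indices, which is the usual transformation rule for a stiffness tensor):</p>
<pre><code>import numpy as np
# Q and c as defined in the question
c_rot = np.einsum('ia,jb,kc,ld,abcd->ijkl', Q, Q, Q, Q, c)
</code></pre>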
|
python|arrays|numpy|matrix
| 14
|
377,301
| 18,103,032
|
load a float + string table
|
<p>I have a table which contains both floats and strings. When I try to load it with <code>np.loadtxt(file.txt)</code>, I get an error like:</p>
<pre><code>could not convert string to float: \Omega_b
</code></pre>
<p>How can I fix it?</p>
|
<p>You can load using the <code>dtype</code> option to create a <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured array</a>:</p>
<pre><code>np.loadtxt(fname, dtype=[('col1_name', '|S10'), ('col2_name', float)])
</code></pre>
<p>Or if you don't want to specify which dtypes it should use you can go for what was suggested by @atomh33ls: <code>dtype=None</code>.</p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow">See additional options for <code>np.loadtxt</code></a> so that you can tune it to your needs.</p>
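<p>A minimal, self-contained sketch of the structured-dtype approach (the file contents here are made up to mirror the <code>\Omega_b</code> error in the question):</p>
<pre><code>import io
import numpy as np

f = io.StringIO("\\Omega_b 0.0486\n\\Omega_m 0.3089\n")
table = np.loadtxt(f, dtype=[('param', 'U16'), ('value', float)])
print(table['param'])   # ['\Omega_b' '\Omega_m']
print(table['value'])   # [0.0486 0.3089]
</code></pre>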
|
python|numpy
| 3
|
377,302
| 18,061,711
|
Converting an algorithm using numpy.irfft to JavaScript
|
<p>I am trying to convert an algorithm initially written with numpy to JavaScript, and I don't manage to reproduce the results from a reverse FFT.</p>
<p>The original algorithm uses <code>numpy.fft.rfft</code> and <code>numpy.fft.irfft</code>:</p>
<pre><code># Get the amplitude
amplitudes = abs(np.fft.rfft(buf, axis=0))
# Randomize phases
ph = np.random.uniform(0, 2*np.pi, (amplitudes.shape[0], 1)) * 1j
amplitudes = amplitudes * np.exp(ph)
# do the inverse FFT
buf = np.fft.irfft(amplitudes, axis=0)
</code></pre>
<p>I have found <a href="https://npmjs.org/package/ndfft" rel="nofollow">a JavaScript library</a> that seems to do the job for the FFT, and I am using <a href="https://github.com/josdejong/mathjs" rel="nofollow">mathjs</a> for the matrix/vector work.</p>
<p>I have made a lot of tries, the thing is I don't know what I should do to imitate <code>numpy.fft.irfft</code>.</p>
<p>Differences between the 2 FFTs :</p>
<ul>
<li><p>The JavaScript FFT function returns a complex output with the negative frequencies, so it contains twice as many points as the result obtained with <code>numpy.fft.rfft</code>. Though the amplitudes in the positive frequencies <code>[0, WIN/2]</code> seem to match.</p></li>
<li><p>The JavaScript iFFT returns a complex output, while <code>numpy.fft.irfft</code> returns a real output.</p></li>
</ul>
<p><strong>ANSWER</strong></p>
<p>Thanks to @hotpaw2 I managed to solve my problem.</p>
<p>The spectrum of a real signal is symmetric and <code>numpy.fft.rfft</code> returns only the unique components of this spectrum. So for a block of 128 samples, <code>numpy.fft.rfft</code> returns a spectrum containing <code>128/2 + 1</code> values, i.e. <code>65</code> values.</p>
<p>Therefore, if I want to do the same, I need to discard all the symmetric values from my amplitudes and then apply the phase change.</p>
<p>For the reverse FFT : "to get a real-only output from a full length IFFT, the input has to be complex-conjugate symmetric". So I need to rebuild the spectrum by making the real part symmetric, and the imaginary part in mirror symmetry.</p>
<p>Here is the algorithm :</p>
<pre><code>fft(1, re, im)
amplitudes = math.select(re)
    .subset([math.range(0, frameCount / 2)]) // get only the unique part
    .abs().done() // input signal is real, so abs value of `re` is the amplitude
// Apply the new phases
re = math.emultiply(math.cos(phases), amplitudes)
im = math.emultiply(math.sin(phases), amplitudes)
// Rebuild `re` and `im` by adding the symmetric part
re = math.concat(re, math.subset(re, [symRange]).reverse())
im = math.concat(im, math.select(im).subset([symRange]).emultiply(-1).done().reverse())
// do the inverse FFT
fft(-1, re, im)
</code></pre>
|
<p>To get a real-only output from a full length IFFT, the input has to be complex-conjugate symmetric (real components the same and imaginary components negated in mirror symmetry for the upper or negative other half of frequency inputs).</p>
<p>With complex conjugate input, the forward or inverse FFT computation should only end up with tiny near zero numerical noise values (due to finite precision rounding) in the imaginary components of the result.</p>
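<p>A short numpy sketch of that symmetry property (n = 8 here is arbitrary): rebuilding the full spectrum from the unique half with conjugate symmetry makes the inverse FFT real up to rounding noise.</p>
<pre><code>import numpy as np

n = 8
half = np.fft.rfft(np.random.rand(n))                  # n//2 + 1 unique bins
full = np.concatenate([half, np.conj(half[-2:0:-1])])  # conjugate-symmetric spectrum
out = np.fft.ifft(full)
print(np.max(np.abs(out.imag)))                        # ~1e-16, i.e. numerical noise only
</code></pre>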
|
javascript|python|numpy|fft|ifft
| 3
|
377,303
| 17,950,835
|
Pandas: DataFrame filtering using groupby and a function
|
<p>Using Python 3.3 and Pandas 0.10</p>
<p>I have a DataFrame that is built from concatenating multiple CSV files. First, I filter out all values in the Name column that contain a certain string. The result looks something like this (shortened for brevity's sake; actually there are more columns):</p>
<pre><code>Name ID
'A' 1
'B' 2
'C' 3
'C' 3
'E' 4
'F' 4
... ...
</code></pre>
<p>Now my issue is that I want to remove a special case of 'duplicate' values. I want to remove all ID duplicates (the entire row, actually) where the Name values mapped to that ID are <strong>not</strong> identical. In the example above I would like to keep the rows with ID 1, 2 and 3. Where ID=4 the Name values are unequal, and I want to remove those rows.</p>
<p>I tried to use the following line of code (based on the suggestion here: <a href="https://stackoverflow.com/questions/13446480/python-pandas-remove-entries-based-on-the-number-of-occurrences#comment18556837_13447176">Python Pandas: remove entries based on the number of occurrences</a>).</p>
<p>Code:</p>
<pre><code>df[df.groupby('ID').apply(lambda g: len({x for x in g['Name']})) == 1]
</code></pre>
<p>However that gives me the error:
<code>ValueError: Item wrong length 51906 instead of 109565!</code></p>
<p>Edit:</p>
<p>Instead of using <code>apply()</code> I have also tried using <code>transform()</code>, however that gives me the error: <code>AttributeError: 'int' object has no attribute 'ndim'</code>. An explanation on why the error is different per function is very much appreciated!</p>
<p>Also, I want to keep all rows where ID = 3 in the above example.</p>
<p>Thanks in advance,
Matthijs</p>
|
<p>Instead of length <code>len</code>, I think you want to consider the number of unique values of Name in each group. Use <code>nunique()</code>, and check out this neat recipe for filtering groups.</p>
<pre><code>df[df.groupby('ID').Name.transform(lambda x: x.nunique() == 1).astype('bool')]
</code></pre>
<p>If you upgrade to pandas 0.12, you can use the new <code>filter</code> method on groups, which makes this more succinct and straightforward.</p>
<pre><code>df.groupby('ID').filter(lambda x: x.Name.nunique() == 1)
</code></pre>
<p>A general remark: Sometimes, of course, you do want to know the length of the group, but I find that <code>size</code> is a safer choice than <code>len</code>, which has been troublesome for me in some cases.</p>
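<p>A quick demonstration on the data from the question (group 4 has two different names and is dropped):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['A', 'B', 'C', 'C', 'E', 'F'],
                   'ID':   [1, 2, 3, 3, 4, 4]})
print(df.groupby('ID').filter(lambda g: g.Name.nunique() == 1))
#   Name  ID
# 0    A   1
# 1    B   2
# 2    C   3
# 3    C   3
</code></pre>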
|
python|python-3.x|pandas
| 5
|
377,304
| 18,203,915
|
Pandas read_clipboard broken in pandas 0.12?
|
<p>Since I updated pandas from version 0.11 to 0.12, read_clipboard doesn't seem to work anymore:</p>
<pre><code>import pandas as pd
df = pd.read_clipboard()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-6dead334eb54> in <module>()
----> 1 df = pd.read_clipboard()
C:\Python33\lib\site-packages\pandas\io\clipboard.py in read_clipboard(**kwargs)
16 from pandas.io.parsers import read_table
17 text = clipboard_get()
---> 18 return read_table(StringIO(text), **kwargs)
19
20
TypeError: initial_value must be str or None, not bytes
</code></pre>
<p>What I did was:</p>
<ul>
<li><p>Open a csv file in Excel 2010</p></li>
<li><p>Copy a range of cells, including headers</p></li>
<li><p>Perform read_clipboard in iPython Qt console as described in above code block</p></li>
</ul>
<p>After downgrading to 0.11, this procedure worked fine again. I'm using pandas for python 3.3 Win7 32 bit.</p>
<p>Is this a bug in pandas? Any suggestions on how to resolve this issue?</p>
|
<p>It's a bug in how the string is presented to py3; I'll fix it in master, but you can do this local edit.</p>
<p>in <code>C:\python33\Lib\site-packages\pandas\io\clipboard.py</code></p>
<p>after <code>text = clipboard_get()</code></p>
<p>add <code>text = text.decode('UTF-8')</code></p>
<p>apparently the clipboard routine gives you back bytes (and not a string) in py3</p>
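<p>A sketch of what the patched section of <code>clipboard.py</code> looks like with that edit applied (surrounding code abridged from the traceback above):</p>
<pre><code>text = clipboard_get()
text = text.decode('UTF-8')   # py3: the clipboard routine returns bytes, not str
return read_table(StringIO(text), **kwargs)
</code></pre>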
|
python|pandas
| 2
|
377,305
| 18,048,646
|
Python array to 1-D Vector
|
<p>Is there a pythonic way to convert a structured array to a 1-D vector?</p>
<p>For example:</p>
<p>I'm trying to convert an array like:</p>
<pre><code>[(9,), (1,), (1, 12), (9,), (8,)]
</code></pre>
<p>to a vector like:</p>
<pre><code>[9,1,1,12,9,8]
</code></pre>
|
<pre><code>In [15]: import numpy as np
In [16]: x = np.array([(9,), (1,), (1, 12), (9,), (8,)])
In [17]: np.concatenate(x)
Out[17]: array([ 9, 1, 1, 12, 9, 8])
</code></pre>
<p>Another option is <code>np.hstack(x)</code>, but for this purpose, <code>np.concatenate</code> is faster:</p>
<pre><code>In [14]: x = [tuple(np.random.randint(10, size=np.random.randint(10))) for i in range(10**4)]
In [15]: %timeit np.hstack(x)
10 loops, best of 3: 40.5 ms per loop
In [16]: %timeit np.concatenate(x)
100 loops, best of 3: 13.6 ms per loop
</code></pre>
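<p>For a plain list of tuples the same flattening also works without NumPy, via <code>itertools</code>:</p>
<pre><code>from itertools import chain

x = [(9,), (1,), (1, 12), (9,), (8,)]
print(list(chain.from_iterable(x)))   # [9, 1, 1, 12, 9, 8]
</code></pre>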
|
python|numpy
| 12
|
377,306
| 17,835,302
|
How to update matplotlib's imshow() window interactively?
|
<p>I'm working on some computer vision algorithm and I'd like to show how a numpy array changes in each step.</p>
<p>What works now is that if I have a simple <code>imshow( array )</code> at the end of my code, the window displays and shows the final image.</p>
<p>However what I'd like to do is to update and display the imshow window as the image changes in each iteration. </p>
<p>So for example I'd like to do:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import time
array = np.zeros( (100, 100), np.uint8 )
for i in xrange( 0, 100 ):
    for j in xrange( 0, 50 ):
        array[j, i] = 1
    #_show_updated_window_briefly_
    plt.imshow( array )
    time.sleep(0.1)
</code></pre>
<p>The problem is that this way, the Matplotlib window doesn't get activated, only once the whole computation is finished.</p>
<p>I've tried both native matplotlib and pyplot, but the results are the same. For plotting commands I found an <code>.ion()</code> switch, but here it doesn't seem to work.</p>
<p>Q1. What is the best way to continuously display updates to a numpy array (actually a uint8 greyscale image)?</p>
<p>Q2. Is it possible to do this with an animation function, like in the <a href="http://matplotlib.org/examples/animation/dynamic_image.html">dynamic image example</a>? I'd like to call a function inside a loop, thus I don't know how to achieve this with an animation function.</p>
|
<p>You don't need to call <code>imshow</code> all the time. It is much faster to use the object's <code>set_data</code> method:</p>
<pre><code>myobj = imshow(first_image)
for pixel in pixels:                # pseudocode: iterate over your computation steps
    addpixel(pixel)                 # placeholder for your per-step update
    myobj.set_data(segmentedimg)    # push the updated array into the existing image
    draw()                          # ask the backend to repaint
</code></pre>
<p>The <code>draw()</code> should make sure that the backend updates the image.</p>
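<p>For reference, a runnable version of that loop (a sketch assuming an interactive backend; <code>plt.pause</code> redraws and gives the GUI time to process events):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

plt.ion()                               # interactive mode: windows update without blocking
img = np.zeros((100, 100))
myobj = plt.imshow(img, vmin=0, vmax=1)
for i in range(100):
    img[:50, i] = 1                     # some incremental computation
    myobj.set_data(img)
    plt.pause(0.01)                     # redraw and briefly yield to the GUI
</code></pre>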
<p><strong>UPDATE:</strong> your question was significantly modified. In such cases it is better to ask another question. Here is a way to deal with your second question:</p>
<p>Matplotlib's animation only deals with one increasing dimension (time), so your double loop won't do. You need to convert your indices to a single index. Here is an example:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
nx = 150
ny = 50
fig = plt.figure()
data = np.zeros((nx, ny))
im = plt.imshow(data, cmap='gist_gray_r', vmin=0, vmax=1)
def init():
    im.set_data(np.zeros((nx, ny)))

def animate(i):
    xi = i // ny
    yi = i % ny
    data[xi, yi] = 1
    im.set_data(data)
    return im

anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nx * ny,
                               interval=50)
plt.show()
</code></pre>
|
python|numpy|matplotlib|spyder
| 52
|
377,307
| 4,152,457
|
parameters for low pass fir filter using scipy
|
<p>I am trying to write a simple low pass filter using scipy, but I need help defining the parameters.</p>
<p>I have 3.5 million records in the time series data that needs to be filtered, and the data is sampled at 1000 hz.</p>
<p>I am using signal.firwin and signal.lfilter from the scipy library.</p>
<p>The parameters I am choosing in the code below do not filter my data at all. Instead, the code below simply produces something that graphically looks like the same exact data except for a time phase distortion that shifts the graph to the right by slightly less than 1000 data points (1 second).</p>
<p>In another software program, running a low pass fir filter through graphical user interface commands produces output that has similar means for each 10 second (10,000 data point) segment, but that has drastically lower standard deviations so that we essentially lose the noise in this particular data file and replace it with something that retains the mean value while showing longer term trends that are not polluted by higher frequency noise. The other software's parameters dialog box contains a check box that allows you to select the number of coefficients so that it "optimizes based on sample size and sampling frequency." (Mine are 3.5 million samples collected at 1000 hz, but I would like a function that uses these inputs as variables.)</p>
<p><strong>Can anyone show me how to adjust the code below so that it removes all frequencies above 0.05 Hz?</strong> I would like to see smooth waves in the graph rather than just the time distortion of the same identical graph that I am getting from the code below now.</p>
<pre><code>class FilterTheZ0():
    def __init__(self, ZSmoothedPylab):
        #------------------------------------------------------
        # Set the order and cutoff of the filter
        #------------------------------------------------------
        self.n = 1000
        self.ZSmoothedPylab = ZSmoothedPylab
        self.l = len(ZSmoothedPylab)
        self.x = arange(0, self.l)
        self.cutoffFreq = 0.05

        #------------------------------------------------------
        # Run the filter
        #------------------------------------------------------
        self.RunLowPassFIR_Filter(self.ZSmoothedPylab, self.n, self.l
                                  , self.x, self.cutoffFreq)

    def RunLowPassFIR_Filter(self, data, order, l, x, cutoffFreq):
        #------------------------------------------------------
        # Set a to be the denominator coefficient vector
        #------------------------------------------------------
        a = 1

        #----------------------------------------------------
        # Create the low pass FIR filter
        #----------------------------------------------------
        b = signal.firwin(self.n, cutoff = self.cutoffFreq, window = "hamming")

        #---------------------------------------------------
        # Run the same data set through each of the various
        # filters that were created above.
        #---------------------------------------------------
        response = signal.lfilter(b, a, data)
        responsePylab = p.array(response)

        #--------------------------------------------------
        # Plot the input and the various outputs that are
        # produced by running each of the various filters
        # on the same inputs.
        #--------------------------------------------------
        plot(x[10000:20000], data[10000:20000])
        plot(x[10000:20000], responsePylab[10000:20000])
        show()
        return
</code></pre>
|
<p>Cutoff is normalized to the Nyquist frequency, which is half the sampling rate. So with FS = 1000 and FC = 0.05, you want cutoff = 0.05/500 = 1e-4. </p>
<pre><code>from numpy import arange, cos, pi
from numpy.random import rand
from matplotlib.pyplot import plot, grid
from scipy import signal

FS = 1000.0                                        # sampling rate
FC = 0.05/(0.5*FS)                                 # cutoff frequency at 0.05 Hz, normalized to Nyquist
N = 1001                                           # number of filter taps
a = 1                                              # filter denominator
b = signal.firwin(N, cutoff=FC, window='hamming')  # filter numerator

M = int(FS*60)                                     # number of samples (60 seconds)
n = arange(M)                                      # time index
x1 = cos(2*pi*n*0.025/FS)                          # signal at 0.025 Hz
x = x1 + 2*rand(M)                                 # signal + noise
y = signal.lfilter(b, a, x)                        # filtered output

plot(n/FS, x); plot(n/FS, y, 'r')                  # output in red
grid()
</code></pre>
<p><img src="https://i.stack.imgur.com/gnXK8.png" alt="Output">
The filter output is delayed half a second (the filter is centered on tap 500). Note that the DC offset added by the noise is preserved by the low-pass filter. Also, 0.025 Hz is well within the pass range, so the output swing from peak to peak is approximately 2.</p>
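<p>If the half-second delay matters, a zero-phase variant (a sketch) is to run the same FIR taps forward and backward with <code>scipy.signal.filtfilt</code>:</p>
<pre><code>from scipy.signal import filtfilt
y0 = filtfilt(b, [1.0], x)   # same passband, no group delay
</code></pre>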
|
numpy|signal-processing|scipy|scientific-computing|matplotlib
| 27
|
377,308
| 4,823,223
|
Numpy.eig and the percentage of variance in PCA
|
<p><a href="https://stackoverflow.com/questions/4801259/whats-wrong-with-my-pca">Picking up from where we left...</a></p>
<p>So I can use linalg.eig or linalg.svd to compute the PCA. Each one returns different Principal Components/Eigenvectors and Eigenvalues when they're fed the same data (I'm currently using the Iris dataset).</p>
<p>Looking <a href="http://onlinecourses.science.psu.edu/stat857/book/export/html/11" rel="nofollow noreferrer">here</a> or any other tutorial with the PCA applied to the Iris dataset, I'll find that the Eigenvalues are <code>[2.9108 0.9212 0.1474 0.0206]</code>. The <code>eig</code> method gives me a different set of eigenvalues/vectors to work with which I don't mind, except that these eigenvalues, once summed, equal the number of dimensions (4) and can be used to find how much each component contributes to the total variance.</p>
<p>Taking the eigenvalues returned by <code>linalg.eig</code> I can't do that. For example, the values returned are <code>[9206.53059607 314.10307292 12.03601935 3.53031167]</code>. The proportion of variance in this case would be <code>[0.96542969 0.03293797 0.00126214 0.0003702]</code>. <a href="http://www.sjsu.edu/faculty/watkins/princmp.htm" rel="nofollow noreferrer">This other page</a> says that "The proportion of the variation explained by a component is just its eigenvalue divided by the sum of the eigenvalues."</p>
<p>Since the variance explained by each dimension should be constant (I think), these proportions are wrong. So, if I use the values returned by <code>svd()</code>, which are the values used in all tutorials, I can get the correct percentage of variation from each dimension, but I'm wondering why the values returned by <code>eig</code> can't be used like that. </p>
<p>I assume the results returned are still a valid way to project the variables, so is there a way to transform them so that I can get the correct proportion of variance explained by each variable? In other words, can I use the <code>eig</code> method and still have the proportion of variance for each variable? Additionally, could this mapping be done only in the eigenvalues so that I can have both the real eigenvalues and the normalized ones?</p>
<p>Sorry for the long writeup btw. Here's a <code>(::)</code> for having gotten this far. Assuming you didn't just read this line.</p>
|
<p>Taking <a href="https://stackoverflow.com/questions/4801259/whats-wrong-with-my-pca/4803141#4803141">Doug's answer to your previous question</a> and implementing the following two functions, I get the output shown below:</p>
<pre><code>from numpy import array, corrcoef, linalg

def pca_eig(orig_data):
    data = array(orig_data)
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    C = corrcoef(data, rowvar=0)
    w, v = linalg.eig(C)
    print "Using numpy.linalg.eig"
    print w
    print v

def pca_svd(orig_data):
    data = array(orig_data)
    data = (data - data.mean(axis=0)) / data.std(axis=0)
    C = corrcoef(data, rowvar=0)
    u, s, v = linalg.svd(C)
    print "Using numpy.linalg.svd"
    print u
    print s
    print v
</code></pre>
<p>Output:</p>
<pre><code>Using numpy.linalg.eig
[ 2.91081808 0.92122093 0.14735328 0.02060771]
[[ 0.52237162 -0.37231836 -0.72101681 0.26199559]
[-0.26335492 -0.92555649 0.24203288 -0.12413481]
[ 0.58125401 -0.02109478 0.14089226 -0.80115427]
[ 0.56561105 -0.06541577 0.6338014 0.52354627]]
Using numpy.linalg.svd
[[-0.52237162 -0.37231836 0.72101681 0.26199559]
[ 0.26335492 -0.92555649 -0.24203288 -0.12413481]
[-0.58125401 -0.02109478 -0.14089226 -0.80115427]
[-0.56561105 -0.06541577 -0.6338014 0.52354627]]
[ 2.91081808 0.92122093 0.14735328 0.02060771]
[[-0.52237162 0.26335492 -0.58125401 -0.56561105]
[-0.37231836 -0.92555649 -0.02109478 -0.06541577]
[ 0.72101681 -0.24203288 -0.14089226 -0.6338014 ]
[ 0.26199559 -0.12413481 -0.80115427 0.52354627]]
</code></pre>
<p>In both cases, I get the desired eigenvalues.</p>
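<p>From there, the proportion of variance explained follows directly (using the imports above; eigenvalues of a 4-variable correlation matrix sum to 4):</p>
<pre><code>w = array([2.91081808, 0.92122093, 0.14735328, 0.02060771])
print(w / w.sum())   # [0.7277 0.2303 0.0368 0.0052]
</code></pre>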
|
python|math|numpy|pca
| 4
|
377,309
| 8,916,302
|
selecting across multiple columns with pandas
|
<p>I have a dataframe <code>df</code> in pandas that was built using <code>pandas.read_table</code> from a csv file. The dataframe has several columns and it is indexed by one of the columns (which is unique, in that each row has a unique value for that column used for indexing.) </p>
<p>How can I select rows of my dataframe based on a "complex" filter applied to multiple columns? I can easily select out the slice of the dataframe where column <code>colA</code> is greater than 10 for example:</p>
<pre><code>df_greater_than10 = df[df["colA"] > 10]
</code></pre>
<p>But what if I wanted a filter like: select the slice of <code>df</code> where <em>any</em> of the columns are greater than 10? </p>
<p>Or where the value for <code>colA</code> is greater than 10 but the value for <code>colB</code> is less than 5?</p>
<p>How are these implemented in pandas?
Thanks.</p>
|
<p>I encourage you to pose these questions on the <a href="http://groups.google.com/group/pystatsmodels" rel="noreferrer">mailing list</a>, but in any case, it's still very much a low-level affair working with the underlying NumPy arrays. For example, to select rows where the value in any column exceeds, say, 1.5:</p>
<pre><code>In [11]: df
Out[11]:
A B C D
2000-01-03 -0.59885 -0.18141 -0.68828 -0.77572
2000-01-04 0.83935 0.15993 0.95911 -1.12959
2000-01-05 2.80215 -0.10858 -1.62114 -0.20170
2000-01-06 0.71670 -0.26707 1.36029 1.74254
2000-01-07 -0.45749 0.22750 0.46291 -0.58431
2000-01-10 -0.78702 0.44006 -0.36881 -0.13884
2000-01-11 0.79577 -0.09198 0.14119 0.02668
2000-01-12 -0.32297 0.62332 1.93595 0.78024
2000-01-13 1.74683 -1.57738 -0.02134 0.11596
2000-01-14 -0.55613 0.92145 -0.22832 1.56631
2000-01-17 -0.55233 -0.28859 -1.18190 -0.80723
2000-01-18 0.73274 0.24387 0.88146 -0.94490
2000-01-19 0.56644 -0.49321 1.17584 -0.17585
2000-01-20 1.56441 0.62331 -0.26904 0.11952
2000-01-21 0.61834 0.17463 -1.62439 0.99103
2000-01-24 0.86378 -0.68111 -0.15788 -0.16670
2000-01-25 -1.12230 -0.16128 1.20401 1.08945
2000-01-26 -0.63115 0.76077 -0.92795 -2.17118
2000-01-27 1.37620 -1.10618 -0.37411 0.73780
2000-01-28 -1.40276 1.98372 1.47096 -1.38043
2000-01-31 0.54769 0.44100 -0.52775 0.84497
2000-02-01 0.12443 0.32880 -0.71361 1.31778
2000-02-02 -0.28986 -0.63931 0.88333 -2.58943
2000-02-03 0.54408 1.17928 -0.26795 -0.51681
2000-02-04 -0.07068 -1.29168 -0.59877 -1.45639
2000-02-07 -0.65483 -0.29584 -0.02722 0.31270
2000-02-08 -0.18529 -0.18701 -0.59132 -1.15239
2000-02-09 -2.28496 0.36352 1.11596 0.02293
2000-02-10 0.51054 0.97249 1.74501 0.20525
2000-02-11 0.10100 0.27722 0.65843 1.73591
In [12]: df[(df.values > 1.5).any(1)]
Out[12]:
A B C D
2000-01-05 2.8021 -0.1086 -1.62114 -0.2017
2000-01-06 0.7167 -0.2671 1.36029 1.7425
2000-01-12 -0.3230 0.6233 1.93595 0.7802
2000-01-13 1.7468 -1.5774 -0.02134 0.1160
2000-01-14 -0.5561 0.9215 -0.22832 1.5663
2000-01-20 1.5644 0.6233 -0.26904 0.1195
2000-01-28 -1.4028 1.9837 1.47096 -1.3804
2000-02-10 0.5105 0.9725 1.74501 0.2052
2000-02-11 0.1010 0.2772 0.65843 1.7359
</code></pre>
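<p>(A small aside: on a purely numeric frame the same filter can stay at the DataFrame level, e.g. <code>df[(df > 1.5).any(axis=1)]</code>, avoiding the drop to <code>.values</code>.)</p>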
<p>Multiple conditions have to be combined using <code>&</code> or <code>|</code> (and parentheses!):</p>
<pre><code>In [13]: df[(df['A'] > 1) | (df['B'] < -1)]
Out[13]:
A B C D
2000-01-05 2.80215 -0.1086 -1.62114 -0.2017
2000-01-13 1.74683 -1.5774 -0.02134 0.1160
2000-01-20 1.56441 0.6233 -0.26904 0.1195
2000-01-27 1.37620 -1.1062 -0.37411 0.7378
2000-02-04 -0.07068 -1.2917 -0.59877 -1.4564
</code></pre>
<p>I'd be very interested to have some kind of query API to make these kinds of things easier.</p>
|
python|pandas
| 49
|
377,310
| 8,957,175
|
DataFrame to Panel indexed by nonunique column with Pandas
|
<p>The following code should do what I want, but it takes 10 GB of RAM by the time it is 20% done with the loop.</p>
<pre><code># In [4]: type(pd)
# Out[4]: pandas.sparse.frame.SparseDataFrame
memid = unique(pd.Member)
pan = {}
for mem in memid:
    pan[mem] = pd[pd.Member==mem]
goal = pandas.Panel(pan)
</code></pre>
|
<p>I created a GitHub issue here. </p>
<p><a href="https://github.com/wesm/pandas/issues/663" rel="nofollow">https://github.com/wesm/pandas/issues/663</a></p>
<p>I'm pretty sure I identified a circular reference between NumPy ndarray views causing a memory leak. Just committed a fix:</p>
<p><a href="https://github.com/wesm/pandas/commit/4c3916310a86c3e4dab6d30858a984a6f4a64103" rel="nofollow">https://github.com/wesm/pandas/commit/4c3916310a86c3e4dab6d30858a984a6f4a64103</a></p>
<p>Can you install from source and let me know if that fixes your problem? </p>
<p>BTW you might try using SparsePanel instead of Panel because Panel will convert all of the sub-DataFrames to dense form.</p>
<p>Lastly, you might consider using groupby as an alternative to the <code>O(N * M)</code> chopping-up of the SparseDataFrame. It's even shorter:</p>
<pre><code>pan = dict(pd.groupby('Member'))
</code></pre>
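<p>A toy illustration of that groupby alternative (a plain DataFrame with hypothetical columns; each group key maps to its sub-frame):</p>
<pre><code>import pandas
df = pandas.DataFrame({'Member': ['a', 'a', 'b'], 'val': [1, 2, 3]})
pan = dict(df.groupby('Member'))
print(pan['a'])      # the two rows with Member == 'a'
</code></pre>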
|
python|dataframe|panels|pandas
| 2
|
377,311
| 8,940,802
|
How to integrate SQLAlchemy and a subclassed Numpy.ndarray smoothly and in a pythonic way?
|
<p>I would like to store NumPy arrays with annotations (like <code>name</code>) via SQLAlchemy within a relational database. To do so, </p>
<ul>
<li>I separate the NumPy array from its data via a data transfer object (<code>DTONumpy</code> as part of <code>MyNumpy</code>).</li>
<li>NumPy objects are collected with <code>Container</code>.</li>
</ul>
<p>What would be a nice and pythonic way to modify <code>Container</code> (from the example below) so that it directly provides a list of <code>MyNumpy</code> objects instead of the <code>DTONumpy</code> objects provided by SQLAlchemy?</p>
<p>Here is an illustration of the problem:</p>
<pre><code>import numpy as np
import zlib

import sqlalchemy as sa
from sqlalchemy.orm import relationship, scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import TypeDecorator, CHAR

DBSession = scoped_session(sessionmaker())
Base = declarative_base()

#### New SQLAlchemy-Type #####################
class NumpyType (sa.types.TypeDecorator):
    impl = sa.types.LargeBinary

    def process_bind_param(self, value, dialect):
        return zlib.compress(value.dumps(), 9)

    def process_result_value(self, value, dialect):
        return np.loads(zlib.decompress(value))
##############################################

class DTONumpy(Base):
    __tablename__ = 'dtos_numpy'
    id = sa.Column(sa.Integer, primary_key=True)
    amount = sa.Column('amount', NumpyType)
    name = sa.Column('name', sa.String, default='')
    container_id = sa.Column(sa.ForeignKey('containers.id'))
    container_object = relationship(
        "Container",
        uselist=False,
        backref='dto_numpy_objects'
    )

    def __init__(self, amount, name=None):
        self.amount = np.array(amount)
        self.name = name

class Container(Base):
    __tablename__ = 'containers'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String, unique=True)
    # HERE: how to access DTONumpy BUT as MyNumpy objects in a way that MyNumpy
    # is smoothly integrated into SQLAlchemy?

class MyNumpy(np.ndarray):
    _DTO = DTONumpy

    def __new__(cls, amount, name=''):
        dto = cls._DTO(amount=amount, name=name)
        return cls.newByDTO(dto)

    @classmethod
    def newByDTO(cls, dto):
        obj = np.array(dto.amount).view(cls)
        obj.setflags(write=False)  # Immutable
        obj._dto = dto
        return obj

    @property
    def name(self):
        return self._dto.name

if __name__ == '__main__':
    engine = sa.create_engine('sqlite:///:memory:', echo=True)
    DBSession.configure(bind=engine)
    Base.metadata.create_all(engine)
    session = DBSession()

    mn1 = MyNumpy([1,2,3], "good data")
    mn2 = MyNumpy([2,3,4], "bad data")

    # Save MyNumpy objects
    c1 = Container()
    c1.name = "Test-Container"
    c1.dto_numpy_objects += [mn1._dto, mn2._dto]  # not a good ui
    session.add(c1)
    session.commit()

    # Load MyNumpy objects
    c2 = session.query(Container).filter_by(name="Test-Container").first()

    # Ugly UI:
    mn3 = MyNumpy.newByDTO(c2.dto_numpy_objects[0])
    mn4 = MyNumpy.newByDTO(c2.dto_numpy_objects[1])
    name3 = mn3._dto.name
    name4 = mn4._dto.name
</code></pre>
<p><code>Container</code> should now provide a list of <code>MyNumpy</code> objects and <code>MyNumpy</code> a reference to the according <code>Container</code> object (the list and the reference would have to take the SQLAlchemy mapping into account):</p>
<pre><code>type(c2.my_numpy_objects[0]) == MyNumpy
>>> True

c2.my_numpy_objects.append(MyNumpy([7,2,5,6], "new data"))
print c2.dto_numpy_objects[-1].name
>>> "new data"
</code></pre>
|
<p>Using the <code>ListView</code>-answer from <a href="https://stackoverflow.com/questions/8984692/how-can-i-change-in-python-the-return-input-type-of-a-list-that-is-implemented-a/" title="Stackoverflow Question 8984692">that</a> question, I came up with the following solution:</p>
<p>First, modify <code>Container</code> by adding a <code>ListView</code>-property on top of the SQLAlchemy-property <code>dto_numpy_objects</code>:</p>
<pre><code>    def __init__(self, name):
        self.name = name
        """
        At this point, the following code doesn't work:
        ---------------------
        self.my_numpies = ListView(
            self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
            MyNumpy.newByDTO,
            MyNumpy.getDTO)
        ---------------------
        SQLAlchemy seems to change the `dto_numpy_objects` object after the
        init-call. Thus, `my_numpies._data` doesn't reference `dto_numpy_objects`
        anymore. One solution is to implement a property that initializes `ListView`
        on first access. See below, property `Container.my_numpies`.
        """

    @property
    def my_numpies(self):
        if not hasattr(self, '_my_numpies'):
            # Build the ListView lazily; this cannot be done in __init__
            # (see the note above).
            self._my_numpies = ListView(
                self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
                MyNumpy.newByDTO,
                MyNumpy.getDTO)
        return self._my_numpies
</code></pre>
<p>Second, add a method <code>getDTO</code> to <code>MyNumpy</code>, which can be used as the <code>new2raw</code> <em>converter</em>:</p>
<pre><code>    def getDTO(self):
        return self._dto
</code></pre>
<p>In order to use the <em>backref</em> <code>container_object</code> also from <code>MyNumpy</code>, make <code>MyNumpy</code> wrap its DTO by adding the following method:</p>
<pre><code>    def __getattr__(self, attr):
        return getattr(self._dto, attr)
</code></pre>
<p>All together, the code looks like this:</p>
<pre><code>import numpy as np
import zlib

import sqlalchemy as sa
from sqlalchemy.orm import relationship, scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import TypeDecorator, CHAR

DBSession = scoped_session(sessionmaker())
Base = declarative_base()

class ListView(list):
    def __init__(self, raw_list, raw2new, new2raw):
        self._data = raw_list
        self.converters = {'raw2new': raw2new,
                           'new2raw': new2raw}

    def __repr__(self):
        repr_list = [self.converters['raw2new'](item) for item in self._data]
        repr_str = "["
        for element in repr_list:
            repr_str += element.__repr__() + ",\n "
        repr_str = repr_str[:-3] + "]"
        return repr_str

    def append(self, item):
        self._data.append(self.converters['new2raw'](item))

    def pop(self, index):
        self._data.pop(index)

    def __getitem__(self, index):
        return self.converters['raw2new'](self._data[index])

    def __setitem__(self, key, value):
        self._data.__setitem__(key, self.converters['new2raw'](value))

    def __delitem__(self, key):
        return self._data.__delitem__(key)

    def __getslice__(self, i, j):
        return ListView(self._data.__getslice__(i,j), **self.converters)

    def __contains__(self, item):
        return self._data.__contains__(self.converters['new2raw'](item))

    def __add__(self, other_list_view):
        assert self.converters == other_list_view.converters
        return ListView(
            self._data + other_list_view._data,
            **self.converters)

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        return iter([self.converters['raw2new'](item) for item in self._data])

    def __eq__(self, other):
        return self._data == other._data

#### New SQLAlchemy-Type #####################
class NumpyType (sa.types.TypeDecorator):
    impl = sa.types.LargeBinary

    def process_bind_param(self, value, dialect):
        return zlib.compress(value.dumps(), 9)

    def process_result_value(self, value, dialect):
        return np.loads(zlib.decompress(value))
##############################################

class DTONumpy(Base):
    __tablename__ = 'dtos_numpy'
    id = sa.Column(sa.Integer, primary_key=True)
    amount = sa.Column('amount', NumpyType)
    name = sa.Column('name', sa.String, default='')
    container_id = sa.Column(sa.ForeignKey('containers.id'))
    container_object = relationship(
        "Container",
        uselist=False,
        backref='dto_numpy_objects'
    )

    def __init__(self, amount, name=None):
        self.amount = np.array(amount)
        self.name = name

    def reprInitParams(self):
        return "(%r, %r)" %(self.amount, self.name)

    def __repr__(self):
        return "%s%s" %(
            self.__class__.__name__,
            self.reprInitParams())

class Container(Base):
    __tablename__ = 'containers'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String, unique=True)

    def __init__(self, name):
        self.name = name
        super(Container, self).__init__()

    @property
    def my_numpies(self):
        if not hasattr(self, '_my_numpies'):
            # Build the ListView lazily; this cannot be done in __init__
            # (see the explanation above).
            self._my_numpies = ListView(
                self.dto_numpy_objects,  # see `DTO_Numpy.container_object`
                MyNumpy.newByDTO,
                MyNumpy.getDTO)
        return self._my_numpies

class MyNumpy(np.ndarray):
    _DTO = DTONumpy

    def __new__(cls, amount, name=''):
        dto = cls._DTO(amount=amount, name=name)
        return cls.newByDTO(dto)

    @classmethod
    def newByDTO(cls, dto):
        obj = np.array(dto.amount).view(cls)
        obj.setflags(write=False)  # Immutable
        obj._dto = dto
        return obj

    @property
    def name(self):
        return self._dto.name

    def getDTO(self):
        return self._dto

    def __getattr__(self, attr):
        return getattr(self._dto, attr)

    def __repr__(self):
        return "%s%s" %(
            self.__class__.__name__,
            self._dto.reprInitParams())

if __name__ == '__main__':
    engine = sa.create_engine('sqlite:///:memory:', echo=True)
    DBSession.configure(bind=engine)
    Base.metadata.create_all(engine)
    session = DBSession()

    mn1 = MyNumpy([1,2,3], "good data")
    mn2 = MyNumpy([2,3,4], "bad data")

    # Save MyNumpy-Objects
    c1 = Container("Test-Container")
    c1.my_numpies.append(mn1)
    c1.my_numpies.append(mn2)
    session.add(c1)
    session.commit()

    # Load MyNumpy-Objects
    c2 = session.query(Container).filter_by(name="Test-Container").first()
    mn3 = c2.my_numpies[0]
    mn4 = c2.my_numpies[1]
</code></pre>
<p>For better representation I added</p>
<ul>
<li><code>DTONumpy.reprInitParams</code></li>
<li><code>DTONumpy.__repr__</code></li>
<li><code>MyNumpy.__repr__</code></li>
</ul>
<p>One thing that still doesn't work:</p>
<pre><code> c1.my_numpies += [mn1, mn2.dto]
</code></pre>
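<p>A possible fix (a sketch, assuming all items being added are <code>MyNumpy</code> instances): give <code>ListView</code> an <code>__iadd__</code> that routes through the converting <code>append</code>, and give the <code>my_numpies</code> property a setter, since <code>+=</code> rebinds the attribute after mutating it:</p>
<pre><code>    # on ListView
    def __iadd__(self, other):
        for item in other:
            self.append(item)      # append() applies the new2raw converter
        return self

    # on Container
    @my_numpies.setter
    def my_numpies(self, value):
        self._my_numpies = value   # accept the same ListView back after `+=`
</code></pre>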
|
python|numpy|sqlalchemy
| 6
|
377,312
| 8,991,709
|
Why were pandas merges in python faster than data.table merges in R in 2012?
|
<p>I recently came across the <a href="http://pandas.sourceforge.net/" rel="noreferrer">pandas</a> library for python, which according to <a href="http://wesmckinney.com/blog/some-pandas-database-join-merge-benchmarks-vs-r-basemerge/" rel="noreferrer">this benchmark</a> performs very fast in-memory merges. It's even faster than the <a href="http://cran.r-project.org/web/packages/data.table/index.html" rel="noreferrer">data.table</a> package in R (my language of choice for analysis).</p>
<p>Why is <code>pandas</code> so much faster than <code>data.table</code>? Is it because of an inherent speed advantage python has over R, or is there some tradeoff I'm not aware of? Is there a way to perform inner and outer joins in <code>data.table</code> without resorting to <code>merge(X, Y, all=FALSE)</code> and <code>merge(X, Y, all=TRUE)</code>?</p>
<p><img src="https://i.stack.imgur.com/0pCvh.png" alt="Comparison"></p>
<p>Here's the <a href="https://github.com/wesm/pandas/blob/master/bench/bench_merge.R" rel="noreferrer">R code</a> and the <a href="https://github.com/wesm/pandas/blob/master/bench/bench_merge.py" rel="noreferrer">Python code</a> used to benchmark the various packages.</p>
|
<p>The reason pandas is faster is because I came up with a better algorithm, which is implemented very carefully using <a href="https://github.com/attractivechaos/klib" rel="noreferrer">a fast hash table implementation - klib</a> and in C/<a href="http://cython.org/" rel="noreferrer">Cython</a> to avoid the Python interpreter overhead for the non-vectorizable parts. The algorithm is described in some detail in my presentation: <a href="http://wesmckinney.com/blog/nycpython-1102012-a-look-inside-pandas-design-and-development/" rel="noreferrer"><em>A look inside pandas design and development</em></a>.</p>
<p>The comparison with <code>data.table</code> is actually a bit interesting because the whole point of R's <code>data.table</code> is that it contains <em>pre-computed indexes</em> for various columns to accelerate operations like data selection and merges. In this case (database joins) pandas' DataFrame contains <em>no pre-computed information</em> that is being used for the merge, so to speak it's a "cold" merge. If I had stored the factorized versions of the join keys, the join would be significantly faster - as factorizing is the biggest bottleneck for this algorithm.</p>
<p>I should also add that the internal design of pandas' DataFrame is much more amenable to these kinds of operations than R's data.frame (which is just a list of arrays internally).</p>
|
python|r|join|data.table|pandas
| 191
|
377,313
| 55,458,987
|
ValueError from tensorflow estimator RNNClassifier with gcloud ml-engine job
|
<p>I am working on the task.py file for submitting a gcloud MLEngine job. Previously I was using tensorflow.estimator.DNNClassifier successfully to submit jobs with my data (which consists solely of 8 columns of sequential numerical data for cryptocurrency prices & volume; no categorical).</p>
<p>I have now switched to the tensorflow contrib estimator RNNClassifier. This is my current code for the relevant portion:</p>
<pre><code>def get_feature_columns():
    return [
        tf.feature_column.numeric_column(feature, shape=(1,))
        for feature in column_names[:len(column_names)-1]
    ]

def build_estimator(config, learning_rate, num_units):
    return tf.contrib.estimator.RNNClassifier(
        sequence_feature_columns=get_feature_columns(),
        num_units=num_units,
        cell_type='lstm',
        rnn_cell_fn=None,
        optimizer=tf.train.AdamOptimizer(learning_rate=learning_rate),
        config=config)

estimator = build_estimator(
    config=run_config,
    learning_rate=args.learning_rate,
    num_units=[32, 16])

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
</code></pre>
<p>However, I'm getting the following ValueError:</p>
<pre><code>ValueError: All feature_columns must be of type _SequenceDenseColumn. You can wrap a sequence_categorical_column with an embedding_column or indicator_column. Given (type <class 'tensorflow.python.feature_column.feature_column_v2.NumericColumn'>): NumericColumn(key='LTCUSD_close', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None)
</code></pre>
<p>I don't understand this, as the data is not categorical.</p>
|
<p>As @Ben7 pointed out, <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/RNNClassifier#__init__" rel="nofollow noreferrer">sequence_feature_columns</a> accepts columns like <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/feature_column/sequence_numeric_column" rel="nofollow noreferrer">sequence_numeric_column</a>. However, according to the documentation, <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/RNNClassifier#class_rnnclassifier" rel="nofollow noreferrer">RNNClassifier</a>'s sequence_feature_columns expects SparseTensors, while sequence_numeric_column produces a dense tensor. This seems to be contradictory.</p>
<p>Here is a workaround I used to solve this issue (I took the to_sparse_tensor function from <a href="https://stackoverflow.com/a/48203570/7517757">this answer</a>):</p>
<pre class="lang-py prettyprint-override"><code>def to_sparse_tensor(dense):
    # sequence_numeric_column default is float32
    zero = tf.constant(0.0, dtype=tf.dtypes.float32)
    where = tf.not_equal(dense, zero)
    indices = tf.where(where)
    values = tf.gather_nd(dense, indices)
    return tf.SparseTensor(indices, values, tf.shape(dense, out_type=tf.dtypes.int64))

def get_feature_columns():
    return [
        tf.feature_column.sequence_numeric_column(feature, shape=(1,), normalizer_fn=to_sparse_tensor)
        for feature in column_names[:len(column_names)-1]
    ]
</code></pre>
|
tensorflow|lstm|gcloud|recurrent-neural-network|tensorflow-estimator
| 1
|
377,314
| 55,365,495
|
Plotting data binned in a pandas dataframe in a scatterplot
|
<p>I've got a large amount of astronomical data that I need to plot in a scatterplot. I've binned the data according to distance, and I want to plot 4 scatterplots, side by side.</p>
<p>For the purposes of asking this question, I've constructed a MWE based, obviously with different data, on what I've got so far:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
data = {'Name':['Tom', 'Jack', 'Steve', 'Ricky', 'Jim', 'Lee', 'Rob', 'Dave',
'Jane', 'Bronwyn', 'Karen', 'Liz', 'Claire', 'Chris', 'Jan', 'Ruby'],
'Age':[28,34,29,42,14,16,75,68,
27,3,2,19,17,32,71,45],
'Weight':[60,75,73,82,54,55,98,82,45,9,8,47,54,62,67,67]}
stages = ['Toddler', 'Teen', 'Young Adult', 'Adult']
ages = [0,4,20,40,100]
df = pd.DataFrame(data)
df['binned'] = pd.cut(df['Age'], bins=ages, labels=stages)
fig=plt.figure()
fig.subplots_adjust(hspace=0)
fig.subplots_adjust(wspace=0)
gridsize = 1,4
ax1 = plt.subplot2grid(gridsize, (0,0))
ax1.scatter(df['Name'], df['Weight'], alpha = 0.5)
ax1.set_ylabel('Weight, kg', fontsize=20)
ax1.set_xlabel('Name', fontsize=20)
ax2 = plt.subplot2grid(gridsize, (0,1), sharey=ax1, sharex = ax1)
plt.setp(ax2.get_yticklabels(), visible=False)
ax2.scatter(df['Name'], df['Weight'], alpha = 0.5)
ax2.set_xlabel('Name', fontsize=20)
ax3 = plt.subplot2grid(gridsize, (0,2), sharey=ax1, sharex = ax1)
plt.setp(ax3.get_yticklabels(), visible=False)
ax3.scatter(df['Name'], df['Weight'], alpha = 0.5)
ax3.set_xlabel('Name', fontsize=20)
ax4 = plt.subplot2grid(gridsize, (0,3), sharey=ax1, sharex = ax1)
plt.setp(ax4.get_yticklabels(), visible=False)
ax4.scatter(df['Name'], df['Weight'], alpha = 0.5)
ax4.set_xlabel('Name', fontsize=20)
</code></pre>
<p>This plots four graphs as expected:
<a href="https://i.stack.imgur.com/iEv2j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iEv2j.png" alt="enter image description here"></a>
but how do I get each graph to plot only the data from one of each of the bins? In other words, how do I plot just one of the bins?</p>
<p>I'm not worried about the scrunching up of the names on the x axis, those are just for this MWE. They'll be numbers in my actual plots.</p>
<p>Just for clarification, my actual data is binned like</p>
<pre><code>sources['z bins']=pd.cut(sources['z'], [0,1,2,3, max(z)],
labels = ['z < 1', '1 < z < 2', '2 < z < 3', 'z > 3'])
</code></pre>
|
<p>What if you grouped the dataframe by <code>binned</code> and then plotted each group?</p>
<p>For example:</p>
<pre><code>fig = plt.figure()
fig.subplots_adjust(hspace=0)
fig.subplots_adjust(wspace=0)
gridsize = 1,4

for i, (name, frame) in enumerate(df.groupby('binned')):
    ax = plt.subplot2grid(gridsize, (0,i))
    ax.scatter(frame['Name'], frame['Weight'], alpha = 0.5)
    ax.set_xlabel(name, fontsize=20)
</code></pre>
<p><a href="https://i.stack.imgur.com/u6yGB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u6yGB.png" alt="enter image description here"></a></p>
<p>I realize you will likely want to clean up the labels a bit, but this at least puts the different bins on a different axes object.</p>
<p>You can iterate over a groupby object; it yields the name of each group and the dataframe of that group. Here I am using <code>enumerate</code> in order to increment the position of the axes object.</p>
<p>Alternatively if you do not want to use a for loop you can access each group with the <code>get_group</code> method of a groupby object.</p>
<pre><code>grouped = df.groupby('binned')
ax1 = plt.subplot2grid(gridsize, (0,0))
ax1.scatter(grouped.get_group('Toddler')['Name'],
grouped.get_group('Toddler')['Weight'],
alpha = 0.5)
ax1.set_ylabel('Weight, kg', fontsize=20)
ax1.set_xlabel('Name', fontsize=20)
</code></pre>
|
pandas|dataframe|matplotlib|plot|binning
| 1
|
377,315
| 55,348,940
|
Getting the filter values from CNN layers
|
<p>I have the following model (for example)</p>
<pre><code>input_img = Input(shape=(224,224,1)) # size of the input image
x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='same')(input_img)
</code></pre>
<p>I have several layers of such in my autoencoder model. I am particularly interested in the filters of the first layer. There are 64 filters each of size 3x3.</p>
<p>To get the filters, I tried using the following code:</p>
<pre><code>x.layers[0].get_weights()[0]
</code></pre>
<p>but I am getting the error as follows:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-166-96506292d6d7> in <module>()
4 x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='same')(input_img)
5
----> 6 x.layers[0].get_weights()[0]
AttributeError: 'Tensor' object has no attribute 'layers'
</code></pre>
<p>I am not using the sequential model. My model will be formed using the following command after several such layers.</p>
<pre><code> model = Model()
</code></pre>
<p>I am new to CNNs and I don't even know if the get_weights function can help me get the filter values. How do I get the values of the filters?</p>
|
<p>At the moment your code is calling <code>layers</code> on a tensor (the output of a layer), not on a model.</p>
<p>The model first needs to be built; then you can use the <code>layers</code> attribute on the model to retrieve the weights of a specific layer.</p>
<p>In your case:</p>
<pre><code>weights = model.layers[1].get_weights()
</code></pre>
<p>will give you the set of weights of the 1st convolutional layer</p>
<p>Which you can use after building the model:</p>
<pre><code>model = Model(inputs=input_img, outputs=b)
</code></pre>
<p>Where <code>b</code> refers to the last layer in your model.</p>
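<p>Putting it together for the model in the question (a minimal sketch; layer 0 is the <code>Input</code>, so the first <code>Conv2D</code> is <code>layers[1]</code>):</p>
<pre><code>from keras.layers import Input, Conv2D
from keras.models import Model

input_img = Input(shape=(224, 224, 1))
x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='same')(input_img)
model = Model(inputs=input_img, outputs=x)

filters, biases = model.layers[1].get_weights()
print(filters.shape)   # (3, 3, 1, 64): 64 filters of size 3x3 over 1 input channel
</code></pre>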
|
python|tensorflow|conv-neural-network|keras-layer|autoencoder
| 0
|
377,316
| 55,246,739
|
How do I set the x-coordinate in a swarmplot in seaborn?
|
<p>Trying to do a swarmplot with 3 different vectors in seaborn. I'd like to have each vector at a different x-coordinate and with a different colour. </p>
<p>Unfortunately all the tutorials have the data in some format I can't really find an explanation / manual for... This is what I've got so far:</p>
<pre><code>#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import pandas as pd
# Create figure
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_ylabel('Evaluations')
colors = [
'tab:orange',
'tab:green',
'tab:red',
]
labels = [
'Method 2',
'Method 3',
'Method 4',
]
data = [
[1, 2.1, 3.2, 4.5, 3.6, 2.7, 1.4],
[2.2, 4.7, 5.1, 4.4, 3.8, 5, 3.4],
[8.4, 7.2, 6.1, 5.4, 8.1, 7.4, 6.8],
]
data = np.array(data).T
df = pd.DataFrame(data, columns=labels)
seaborn.swarmplot(data=df, y='Method 2', color=colors[0])
seaborn.swarmplot(data=df, y='Method 3', color=colors[1])
seaborn.swarmplot(data=df, y='Method 4', color=colors[2])
plt.show()
</code></pre>
<p>This almost works, but plots everything on the same axis:</p>
<p><a href="https://i.stack.imgur.com/ovYR1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ovYR1.png" alt="enter image description here"></a></p>
<p>Also, the label should be on the x-axis not the y-axis. Pretty sure I'm missing something really basic here. Anyone?</p>
|
<p>Reshape the frame to long format with <code>melt</code>, then let seaborn split on the <code>variable</code> column:</p>
<pre><code>import seaborn as sns

df2 = df.melt()
colors = ["orange", "green", "red"]
sns.swarmplot(data=df2, x='variable', y='value', palette=colors)
plt.show()
</code></pre>
<p><code>melt</code> stacks the three columns into two: <code>variable</code> (the method name, which becomes the x position) and <code>value</code> (the measurement), which is the long format that seaborn's <code>x</code>/<code>y</code> interface expects.</p>
|
pandas|seaborn|swarmplot
| 1
|
377,317
| 55,223,935
|
Compare two DataFrames
|
<p>Problem:
I have 2 DataFrames:</p>
<pre><code>Df1:              Df2:
Name  B           Worker  B
A4    True        A4      True
A5    True        A6      False
A6    True        C4      False
A7    False       C7      True
</code></pre>
<p>I want to output the "Name" values where Df1.B == True and Df2.B == False.</p>
|
<p>Check with <code>isin</code> </p>
<pre><code>df1.loc[(df1.B)&(~df1.name.isin(df2.Worker)),'name']
</code></pre>
<p>Update </p>
<pre><code>df1.loc[(df1.B)&(~df2.B),'name']
</code></pre>
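<p>An explicit-merge alternative (a sketch assuming the two frames are as shown, with columns <code>Name</code>/<code>B</code> and <code>Worker</code>/<code>B</code>):</p>
<pre><code>merged = df1.merge(df2, left_on='Name', right_on='Worker', suffixes=('_1', '_2'))
print(merged.loc[merged.B_1 & ~merged.B_2, 'Name'])   # names True in df1, False in df2
</code></pre>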
|
python|pandas|dataframe
| 1
|
377,318
| 55,461,931
|
How to merge multiple parquet files in Glue
|
<p>I have a Glue job which writes parquet files to S3 every 6 seconds, and S3 has a folder for each hour. At the end of the hour I want to merge all the files in that hour's partition and put the result in the same location. I don't want to use Athena tables because the job becomes slow. I am trying to use a Python Shell job, but so far I have not found a correct solution. Can someone help me with this?</p>
<p>The files are also snappy-compressed.</p>
|
<p>Depending on how big your Parquet files are, and what the target size is – here's an idea to do this without Glue:</p>
<ol>
<li>Set up an hourly Cloudwatch cron rule that looks in the previous hour's directory and invokes a Lambda function.</li>
<li>Open each Parquet file, and write them to a new parquet file (see the sketch at the end of this answer).</li>
<li>Write the resulting Parquet file to the S3 key and remove the parts.</li>
</ol>
<p>Note there are some limitations/considerations with this design:</p>
<ol>
<li>Your Parquet files need to stay within the limits of your Lambda's memory capacity. If you aim for parts around 128 MB, you should be able to achieve this.</li>
<li>Your separate Parquet schemas need to be identical for you to be reliably "merging" them. If they are not, you need to look into the Parquet file's metadata footer which contains the schema to ensure the schema has the metadata for all the column chunks.</li>
<li>Because the S3 operation is not atomic, you may have a brief moment in which the new S3 Parquet object is uploaded but the old ones haven't been removed. If you don't require to query it within this window, that shouldn't be a problem.</li>
</ol>
<p>If you require Glue specifically, you may be able to just invoke a Glue job from the Lambda as opposed to trying to do it yourself from within Lambda.</p>
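<p>A minimal sketch of step 2 with pyarrow (the bucket/prefix names are hypothetical, and it assumes all part files share one schema and fit in memory):</p>
<pre><code>import pyarrow as pa
import pyarrow.parquet as pq
import s3fs

fs = s3fs.S3FileSystem()
parts = fs.glob('my-bucket/data/hour=2019-04-01-13/*.parquet')

tables = []
for path in parts:
    with fs.open(path, 'rb') as f:
        tables.append(pq.read_table(f))

merged = pa.concat_tables(tables)
with fs.open('my-bucket/data/hour=2019-04-01-13/merged.parquet', 'wb') as f:
    pq.write_table(merged, f, compression='snappy')
</code></pre>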
|
pandas|boto3|aws-glue|pyarrow
| 0
|
377,319
| 55,197,355
|
Using a Loop to extract CSV data into an object
|
<p>My problem is quite difficult to explain and I'm unsure if it's even possible to do what I'm asking, but I will try my best to explain. </p>
<p>Basically, I have a CSV file with data, and I want to extract specific cells and set them as a value in an object. Each row in the CSV contains information about an individual item. Currently, I have it hard coded using the pandas library and by doing df.iloc[0][1], etc. However, I want to be able to loop through the whole CSV and extract individual cells and add them to multiple objects, so I don't have to hard code every row manually. </p>
<p>Hopefully, the code will help show what I mean:</p>
<pre><code>df = pd.read_csv('Options.csv')
</code></pre>
<p>My Class:</p>
<pre><code>class Option:
    def __init__(self, type, name, S, K):
        self.type = type
        self.name = name
        self.S = S
        self.K = K
</code></pre>
<p>Current Extraction from CSV: </p>
<pre><code>o1 = Option(df.iloc[0, 1], df.iloc[0][2], df.iloc[0][3], df.iloc[0][4])
o2 = Option(df.iloc[1, 1], df.iloc[1][2], df.iloc[1][3], df.iloc[1][4])
</code></pre>
<p>etc.</p>
<p>I still want to be able to select individual values of each option, for example, print(o1.name), o6.type, etc.</p>
|
<p>This will give you a list of of your Option objects:</p>
<pre><code>options = df.apply(lambda x: Option(x[1], x[2], x[3], x[4]), axis=1)
options_list = options.values.tolist()
</code></pre>
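<p>An equivalent loop-style sketch with <code>itertuples</code> (the positions mirror the <code>df.iloc[...]</code> indices above; element 0 of each tuple is the row index):</p>
<pre><code>options_list = [Option(row[2], row[3], row[4], row[5]) for row in df.itertuples()]
print(options_list[0].name)
</code></pre>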
|
python|pandas|loops|csv
| 1
|
377,320
| 55,357,754
|
How to groupby a column in dataframe which contains a column containing list of tuples
|
<p>I am trying to group my dataframe by values in one of the columns, 'category'. Although, one of the other columns 'prob' contains a list of tuples on each row. When I try to group-by 'category', the 'prob' column disappears. </p>
<p>My current df:</p>
<pre><code> category other: prob:
one val [(hi, hello), (jimbob, joe)]
one val2 [(this, not), (is, work), (now, any)]
two val2 [(bob, jones), (work, here)]
three val3 [(milk, coffee), (tea, bread)]
two val3 [(money, here), (job, money)]
</code></pre>
<p>Expected output:</p>
<pre><code> category: other:     prob:
 one       val, val2  [(hi, hello), (jimbob, joe), (this, not), (is, work), (now, any)]
 two       val2, val3 [(bob, jones), (work, here), (money, here), (job, money)]
 three     val3       [(milk, coffee), (tea, bread)]
</code></pre>
<p>What is the best way to do this? Apologies if I have mis-phrased this question; please let me know if you have any questions. Thank you!</p>
|
<p>You can aggregate data with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a>, using <code>join</code> for the string column and flattening the lists of tuples. Three alternatives are shown below; use <code>sum</code> only if the data is small and performance is not important:</p>
<pre><code>import functools
import operator
from itertools import chain
f = lambda x: [z for y in x for z in y]
#faster alternative
#f = lambda x: list(chain.from_iterable(x))
#faster alternative2
#f = lambda x: functools.reduce(operator.iadd, x, [])
#slow alternative
#f = lambda x: x.sum()
df = df.groupby('category', as_index=False).agg({'other':', '.join, 'prob':f})
print (df)
category other prob
0 one val, val2 [(hi, hello), (jimbob, joe), (this, not), (is,...
1 three val3 [(milk, coffee), (tea, bread)]
2 two val2, val3 [(bob, jones), (work, here), (money, here), (j...
</code></pre>
<p><strong>Performance</strong>:</p>
<p><a href="https://i.stack.imgur.com/MYa9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MYa9B.png" alt="pic"></a></p>
<p><strong>Code for testing</strong>:</p>
<pre><code>np.random.seed(2019)
import perfplot
import functools
import operator
from itertools import chain
default_value = 10
def iadd(df1):
f = lambda x: functools.reduce(operator.iadd, x, [])
d = {'other':', '.join, 'prob':f}
return df1.groupby('category', as_index=False).agg(d)
def listcomp(df1):
f = lambda x: [z for y in x for z in y]
d = {'other':', '.join, 'prob':f}
return df1.groupby('category', as_index=False).agg(d)
def from_iterable(df1):
f = lambda x: list(chain.from_iterable(x))
d = {'other':', '.join, 'prob':f}
return df1.groupby('category', as_index=False).agg(d)
def sum_series(df1):
f = lambda x: x.sum()
d = {'other':', '.join, 'prob':f}
return df1.groupby('category', as_index=False).agg(d)
def sum_groupby_cat(df1):
d = {'other':lambda x: x.str.cat(sep=', '), 'prob':'sum'}
return df1.groupby('category', as_index=False).agg(d)
def sum_groupby_join(df1):
d = {'other': ', '.join, 'prob': 'sum'}
return df1.groupby('category', as_index=False).agg(d)
def make_df(n):
a = np.random.randint(0, n / 10, n)
b = np.random.choice(list('abcdef'), len(a))
c = [tuple(np.random.choice(list(string.ascii_letters), 2)) for _ in a]
df = pd.DataFrame({"category":a, "other":b, "prob":c})
df1 = df.groupby(['category','other'])['prob'].apply(list).reset_index()
return df1
perfplot.show(
setup=make_df,
kernels=[iadd, listcomp, from_iterable, sum_series,sum_groupby_cat,sum_groupby_join],
n_range=[10**k for k in range(1, 8)],
logx=True,
logy=True,
equality_check=False,
xlabel='len(df)')
</code></pre>
|
python|pandas
| 4
|
377,321
| 55,251,319
|
Tensorflow initialization gives all ones
|
<p>tensorflow 1.12.0</p>
<p>In the code snippet below, it seems that wrapped_rv_val and seq_rv_val should be equivalent, but they are not. Instead, seq_rv_val is correctly initialized to the randomly generated init_val array, but wrapped_rv_val is set to all ones. What's going on here?</p>
<pre><code>import numpy as np
import tensorflow as tf
init_val = np.random.rand(1, 1, 16, 1).astype(np.dtype('float32'))
wrapped_rv = tf.nn.softmax(tf.get_variable('wrapped_rv', initializer=init_val))
var = tf.get_variable('seq_rv', initializer=init_val)
seq_rv = tf.nn.softmax(var, axis=2)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    wrapped_rv_val = sess.run(wrapped_rv)
    seq_rv_val = sess.run(seq_rv)

    print("seq_rv_val: {0}".format(seq_rv_val.flatten()))
    print("wrapped_rv_val: {0}".format(wrapped_rv_val.flatten()))
</code></pre>
<p>output:</p>
<p>seq_rv_val: [0.28422353 0.12556878 0.18170598 0.19684952 0.21165217]</p>
<p>wrapped_rv_val: [1. 1. 1. 1. 1.]</p>
|
<p>In fact, <code>seq_rv_val</code> and <code>wrapped_rv_val</code> will both be correctly initialized to the randomly generated <code>init_val</code> array when you do the following.</p>
<pre><code># change
wrapped_rv = tf.nn.softmax(tf.get_variable('wrapped_rv', initializer=init_val))
# to
wrapped_rv = tf.nn.softmax(tf.get_variable('wrapped_rv', initializer=init_val), axis=2)
</code></pre>
<p>Next I'll explain why <code>wrapped_rv</code> is initialized to 1. Let's look at the formula of <code>softmax</code>.
<a href="https://i.stack.imgur.com/ZMJPU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZMJPU.png" alt="enter image description here"></a></p>
<p>The denominator sums over 16 items when you set <code>axis=2</code>, but over just 1 item with the default <code>axis=-1</code>, because the last axis has size 1 here. So the numerator equals the denominator and every result is 1 when <code>axis=-1</code> is used.
You can run the following example to see the problem.</p>
<pre><code>import tensorflow as tf
y1 = tf.constant([[1],[2],[3]], dtype=tf.float32)
y2 = tf.constant([[1],[3],[7]], dtype=tf.float32)

softmax_var1 = tf.nn.softmax(logits=y1)
softmax_var2 = tf.nn.softmax(logits=y2)

with tf.Session() as sess:
    print(sess.run(softmax_var1))
    print(sess.run(softmax_var2))
[[1.]
[1.]
[1.]]
[[1.]
[1.]
[1.]]
</code></pre>
|
python|tensorflow
| 1
|
377,322
| 55,517,871
|
How to preprocess strings in Keras models Lambda layer?
|
<p>I have the problem that the value passed on to the Lambda layer (at compile time) is a placeholder generated by Keras (without values). When the model is compiled, the .eval() method throws the error:<br></p>
<blockquote>
<p>You must feed a value for placeholder tensor 'input_1' with dtype
string and shape [?, 1]</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>def text_preprocess(x):
strings = tf.keras.backend.eval(x)
vectors = []
for string in strings:
vector = string_to_one_hot(string.decode('utf-8'))
vectors.append(vector)
vectorTensor = tf.constant(np.array(vectors),dtype=tf.float32)
return vectorTensor
input_text = Input(shape=(1,), dtype=tf.string)
embedding = Lambda(text_preprocess)(input_text)
dense = Dense(256, activation='relu')(embedding)
outputs = Dense(2, activation='softmax')(dense)
model = Model(inputs=[input_text], outputs=outputs)
model.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy'])
model.summary()
model.save('test.h5')
</code></pre>
<p>If I pass a string array into the input layer statically, I can compile the model, but I get the same error if I want to convert the model to tflite.</p>
<pre class="lang-py prettyprint-override"><code>#I replaced this line:
input_text = Input(shape=(1,), dtype=tf.string)
#by this lines:
test = tf.constant(["Hello","World"])
input_text = Input(shape=(1,), dtype=tf.string, tensor=test)
#but calling this ...
converter = TFLiteConverter.from_keras_model_file('string_test.h5')
tfmodel = converter.convert()
#... still leads to this error:
</code></pre>
<blockquote>
<p>InvalidArgumentError: You must feed a value for placeholder tensor
'input_3' with dtype string and shape [2] [[{{node input_3}}]]</p>
</blockquote>
|
<p>Okay, I finally solved it this way:</p>
<pre class="lang-py prettyprint-override"><code>def text_preprocess(x):
b = tf.strings.unicode_decode(x,'UTF-8')
b = b.to_tensor(default_value=0)
#do things with decoded string
one_hot = K.one_hot(b,one_hot_size)
return one_hot
...
</code></pre>
|
python|tensorflow|keras|deep-learning
| 0
|
377,323
| 55,299,510
|
CPU usage and time until training starts increasing on each model.fit() in Keras
|
<p>I created an LSTM with the Keras API. Now I am facing an issue while trying out different values in it (for the learning rate, for example). Each time I change my values and define the model anew, the model somehow takes longer and longer until training starts, and while waiting the CPU usage is at 100%. Am I doing something wrong, so that older learning sessions affect the new models? </p>
<p>My code is structered as followed, in one file I call a evaluation with different values and many iterations like this:</p>
<pre><code>for i in range(0, 100):
    acc = model.create(xtrain, ytrain, hidden_units=hidden_size, batch_size=batch_size, learning_rate=learning_rate, l2_reg=l2_reg)
</code></pre>
<p>model is another file. In there I use the passed values to train a new model and pass back the accuracy to find the best batch size etc. The code for model creation is something like:</p>
<pre><code>def create(xtrain, ytrain, hidden_units, batch_size, learning_rate, l2_reg):
    # defining some layers from input to output
    # example: input = Input(shape=(20,)) ...

    # creating the model
    model = Model(inputs=[input], output=[outputs])
    model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['acc'])

    # calling model.fit
    es = EarlyStopping(monitor='val_loss', mode='min', patience=4, verbose=1)
    model.fit(xtrain, ytrain, epochs=100, batch_size=batch_size, validation_data=(some_xval_data, some_yval_data), callbacks=[es])

    ## In the end I evaluate the model on unseen data and return the accuracy
    loss, acc = model.evaluate(x_testdata, y_testdata, batch_size=batch_size)
    return acc
</code></pre>
<p>Now everytime the model starts to train the script prints:</p>
<pre><code>Epoch 1/100
</code></pre>
<p>On the first evaluation calls, the model instantly starts to train and I see the time each step takes. But after some time, after "Epoch 1/100" is printed, it suddenly takes a while until training starts, and that waiting time increases from call to call. While it is waiting for the training to really start, I can observe that my CPU usage is at 100%.</p>
<p>So am I doing it wrong by calling the method again every time? Is there some process where older calls of "create" affect newer ones? I just hope that older training is not affecting newer training in my code structure.</p>
|
<p>Thanks to @Fedor Petrov and @desertnaut.</p>
<p>They discussed in the comments of another answer that I have to call the function <code>clear_session</code>:</p>
<pre><code>from keras.backend import clear_session

def create():
    # do all the model stuff
    # evaluate the model
    clear_session()
    return
</code></pre>
<p>Now I can call <code>create()</code> as many times as I want without any memory leaks.</p>
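<p>Side note: if you use <code>tf.keras</code> instead of standalone Keras, the equivalent call should be:</p>
<pre><code>from tensorflow.keras import backend as K

K.clear_session()
</code></pre>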
|
python|tensorflow|keras
| 4
|
377,324
| 55,264,229
|
Print file name within generator of during Pandas Concat pd.concat
|
<p>I'm loading thousands of files that is supposed to have the same structure through pd.concat using a generator from the list of files in a given directory. </p>
<p>Is there any way I can print f within this generator for debugging purposes? I'd like to know which file causes the failure. Thank you all in advance!</p>
<pre><code>files = glob.glob(input_dir + "/*.csv")
df = pd.concat((pd.read_csv(f) for f in files))
</code></pre>
|
<p>You can use a <code>try..except</code> to properly handle loading the file and printing the potential error. Here's an example:</p>
<pre><code>files = glob.glob(input_dir + "/*.csv")

def load_file(f):
    """Loads a csv file into a dataframe"""
    try:
        # load the file if there is no problem
        return pd.read_csv(f)
    except Exception as e:
        # if there is a problem, print an error message with the name
        # of the file (e.message does not exist in Python 3, so use e)
        print("Loading file {} failed with error: {}".format(f, e))
        # return an empty dataframe so the pd.concat won't fail
        return pd.DataFrame()

df = pd.concat((load_file(f) for f in files))
</code></pre>
|
python|python-3.x|pandas
| 2
|
377,325
| 55,463,536
|
Select rows from a DataFrame using .loc and multiple conditions and then show the row corresponding to the min/max of one column
|
<p>I know how to select data using .loc and multiple conditions, like so: </p>
<pre><code>df.loc[(df['A'] == True)&(df['B'] == 'Tuesday')]
</code></pre>
<p>But from the result of this I can't figure out how to show the entire row corresponding to the min (or max) taken on one other column of numbers, 'C'. How do I do this?</p>
|
<p>Use this:</p>
<pre><code>df2 = df.loc[(df['A'] == True)&(df['B'] == 'Tuesday')]
df2.loc[df2.C == df2.C.min(), :]
</code></pre>
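<p>Alternatively, <code>idxmin</code> (or <code>idxmax</code>) gives you the index label of the extreme value directly, so you can pull out just that row:</p>
<pre><code>df2 = df.loc[(df['A'] == True)&(df['B'] == 'Tuesday')]
df2.loc[[df2['C'].idxmin()]]  # double brackets keep the result a DataFrame
</code></pre>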
|
python|pandas|dataframe
| 3
|
377,326
| 55,494,824
|
Replacing Duplicate Strings in Pandas Dataframe
|
<p>I have a dataframe df</p>
<pre><code>Name Reagent
0 Experiment1 water
1 Experiment1 oil
2 Experiment1 water
3 Experiment1 milk
4 Experiment1 water
5 Experiment1 tea
6 Experiment1 water
7 Experiment1 coffee
8 Experiment2 water
9 Experiment2 coffee
</code></pre>
<p>I want to replace duplicate names within the same experiment with a differentiator of some sort. In the example only water is duplicated within a given experiment.</p>
<p>e.g</p>
<pre><code> Name Reagent
0 Experiment1 water1
1 Experiment1 oil
2 Experiment1 water2
3 Experiment1 milk
4 Experiment1 water3
5 Experiment1 tea
6 Experiment1 water4
7 Experiment1 coffee
8 Experiment2 water
9 Experiment2 coffee
</code></pre>
<p>Thanks for any help</p>
|
<p>Solution: append all values with the <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> as a counter (and replace <code>0</code> values with empty strings to ignore each first dupe):</p>
<pre><code>df['Reagent'] += df.groupby(['Name','Reagent']).cumcount().astype(str).replace('0','')
print (df)
Name Reagent
0 Experiment1 water
1 Experiment1 oil
2 Experiment1 water1
3 Experiment1 milk
4 Experiment1 water2
5 Experiment1 tea
6 Experiment1 water3
7 Experiment1 coffee
8 Experiment2 water
9 Experiment2 coffee
</code></pre>
<p>If you need to number only the duplicated values, filter rows with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>DataFrame.duplicated</code></a> checked over both columns and add <code>1</code> to the counter:</p>
<pre><code>mask = df.duplicated(['Name','Reagent'], keep=False)
df.loc[mask, 'Reagent'] += df[mask].groupby(['Name','Reagent']).cumcount().add(1).astype(str)
print (df)
Name Reagent
0 Experiment1 water1
1 Experiment1 oil
2 Experiment1 water2
3 Experiment1 milk
4 Experiment1 water3
5 Experiment1 tea
6 Experiment1 water4
7 Experiment1 coffee
8 Experiment2 water
9 Experiment2 coffee
</code></pre>
|
pandas|dataframe
| 3
|
377,327
| 55,481,140
|
tensorflow.python.framework.errors_impl.ResourceExhaustedError
|
<p>I'm using an object detection module for classifying images. My specs are as follows:</p>
<ul>
<li>OS: Ubuntu 18.04 LTS</li>
<li>Python: 3.6.7</li>
<li>VirtualEnv: Version: 16.4.3</li>
<li>Pip3 version inside virtualenv: 19.0.3</li>
<li>TensorFlow Version: 1.13.1</li>
<li>Protoc Version: 3.0.0-9</li>
</ul>
<p>I'm working on Windows virtualenv and google-colab. This is the error message I get:</p>
<pre><code>python3 legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
INFO:tensorflow:global step 1: loss = 18.5013 (48.934 sec/step)
INFO:tensorflow:Finished training! Saving model to disk.
/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/summary/writer/writer.py:386: UserWarning: Attempting to use a closed FileWriter. The operation will be a noop unless the FileWriter is explicitly reopened.
warnings.warn("Attempting to use a closed FileWriter. "
Traceback (most recent call last):
File "legacy/train.py", line 184, in <module>
tf.app.run()
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "legacy/train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "/home/priyank/venv/models-master/research/object_detection/legacy/trainer.py", line 416, in train
saver=saver)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 785, in train
ignore_live_threads=ignore_live_threads)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 832, in stop
ignore_live_threads=ignore_live_threads)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/home/priyank/venv/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/queue_runner_impl.py", line 257, in _run
enqueue_callable()
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1257, in _single_operation_run
self._call_tf_sessionrun(None, {}, [], target_list, None)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
<b>tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[15,1,1755,2777,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node batch}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.</b>
</code></pre>
|
<p>You can try the following fixes:<br>
1. Reducing the image dimension in case you are using very high image resolution<br>
2. Try reducing the batch size<br>
3. Check if any other process is using up your memory</p>
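<p>For reference, both the batch size and the input resolution live in the pipeline config. A sketch with illustrative values (adjust to your own config):</p>
<pre><code># in ssd_mobilenet_v1_pets.config (illustrative values)
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
  }
}
train_config {
  batch_size: 8  # lower this further if memory is still exhausted
}
</code></pre>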
<p>Could you also please share your config file</p>
|
tensorflow|object-detection|object-detection-api
| 3
|
377,328
| 55,267,201
|
Interpreting results of tensorflow benchmark tool
|
<p>TensorFlow has a few benchmark tools:</p>
<p>For <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/benchmark" rel="nofollow noreferrer">.pb model</a> and for <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark" rel="nofollow noreferrer">.tflite model</a></p>
<p>I have a few questions regarding the parameters of the .pb benchmark tool:</p>
<ol>
<li>Is <code>num_threads</code> related to number of parallel runs of single threaded experiments or to internal threads used by tensorflow?</li>
<li>Is it possible to use GPU when tool build for desktop, i.e. not for mobile? if it so, how to ensure that GPU is not used?</li>
</ol>
<p>Also few questions regarding result interpretation:</p>
<ol>
<li>What is <code>count</code> in result output? How <code>Timings (microseconds): count=</code> related to <code>--max_num_runs</code> parameter?</li>
</ol>
<p>Example:</p>
<pre><code>Run --num_threads=-1 --max_num_runs=1000:
2019-03-20 14:30:33.253584: I tensorflow/core/util/stat_summarizer.cc:85] Timings (microseconds): count=1000 first=3608 curr=3873 min=3566 max=8009 avg=3766.49 std=202
2019-03-20 14:30:33.253584: I tensorflow/core/util/stat_summarizer.cc:85] Memory (bytes): count=1000 curr=3301344(all same)
2019-03-20 14:30:33.253591: I tensorflow/core/util/stat_summarizer.cc:85] 207 nodes observed
2019-03-20 14:30:33.253597: I tensorflow/core/util/stat_summarizer.cc:85]
2019-03-20 14:30:33.378352: I tensorflow/tools/benchmark/benchmark_model.cc:636] FLOPs estimate: 116.65M
2019-03-20 14:30:33.378390: I tensorflow/tools/benchmark/benchmark_model.cc:638] FLOPs/second: 46.30B
Run --num_threads=1 --max_num_runs=1000:
2019-03-20 14:32:25.591915: I tensorflow/core/util/stat_summarizer.cc:85] Timings (microseconds): count=1000 first=7502 curr=7543 min=7495 max=7716 avg=7607.22 std=34
2019-03-20 14:32:25.591934: I tensorflow/core/util/stat_summarizer.cc:85] Memory (bytes): count=1000 curr=3301344(all same)
2019-03-20 14:32:25.591952: I tensorflow/core/util/stat_summarizer.cc:85] 207 nodes observed
2019-03-20 14:32:25.591970: I tensorflow/core/util/stat_summarizer.cc:85]
2019-03-20 14:32:25.805970: I tensorflow/tools/benchmark/benchmark_model.cc:636] FLOPs estimate: 116.65M
2019-03-20 14:32:25.806007: I tensorflow/tools/benchmark/benchmark_model.cc:638] FLOPs/second: 15.46B
Run --num_threads=-1 --max_num_runs=10000:
2019-03-20 14:38:48.045824: I tensorflow/core/util/stat_summarizer.cc:85] Timings (microseconds): count=3570 first=3961 curr=3899 min=3558 max=6997 avg=3841.2 std=175
2019-03-20 14:38:48.045829: I tensorflow/core/util/stat_summarizer.cc:85] Memory (bytes): count=3570 curr=3301344(all same)
2019-03-20 14:38:48.045833: I tensorflow/core/util/stat_summarizer.cc:85] 207 nodes observed
2019-03-20 14:38:48.045837: I tensorflow/core/util/stat_summarizer.cc:85]
2019-03-20 14:38:48.169368: I tensorflow/tools/benchmark/benchmark_model.cc:636] FLOPs estimate: 116.65M
2019-03-20 14:38:48.169412: I tensorflow/tools/benchmark/benchmark_model.cc:638] FLOPs/second: 48.66B
Run --num_threads=1 --max_num_runs=10000:
2019-03-20 14:35:50.826722: I tensorflow/core/util/stat_summarizer.cc:85] Timings (microseconds): count=1254 first=7496 curr=7518 min=7475 max=7838 avg=7577.23 std=50
2019-03-20 14:35:50.826735: I tensorflow/core/util/stat_summarizer.cc:85] Memory (bytes): count=1254 curr=3301344(all same)
2019-03-20 14:35:50.826746: I tensorflow/core/util/stat_summarizer.cc:85] 207 nodes observed
2019-03-20 14:35:50.826757: I tensorflow/core/util/stat_summarizer.cc:85]
2019-03-20 14:35:51.053143: I tensorflow/tools/benchmark/benchmark_model.cc:636] FLOPs estimate: 116.65M
2019-03-20 14:35:51.053180: I tensorflow/tools/benchmark/benchmark_model.cc:638] FLOPs/second: 15.55B
</code></pre>
<p>i.e. when <code>--max_num_runs=10000</code> is used, count is <code>count=3570</code> and <code>count=1254</code>; what does that mean?</p>
<p>For <code>.tflite</code> benchmark tool:</p>
<pre><code>--num_threads=1 --num_runs=10000
Initialized session in 0.682ms
Running benchmark for at least 1 iterations and at least 0.5 seconds
count=54 first=23463 curr=8019 min=7911 max=23463 avg=9268.5 std=2995
Running benchmark for at least 1000 iterations and at least 1 seconds
count=1000 first=8022 curr=6703 min=6613 max=10333 avg=6766.23 std=337
Average inference timings in us: Warmup: 9268.5, Init: 682, no stats: 6766.23
</code></pre>
<p>What does <code>no stats: 6766.23</code> mean?</p>
|
<p>After digging in the code a bit I've found the following (all times are in microseconds). Note that the benchmark loop stops at whichever of <code>max_num_runs</code> or the benchmark's time budget (<code>--max_time</code>) is reached first, which is why <code>count</code> can be lower than the requested 10000:</p>
<ul>
<li><code>count</code>: number of actual runs</li>
<li><code>first</code>: time the first iteration took</li>
<li><code>curr</code>: time the last iteration took</li>
<li><code>min</code>: minimum time an iteration took</li>
<li><code>max</code>: maximum time an iteration took</li>
<li><code>avg</code>: mean time an iteration took</li>
<li><code>std</code>: standard deviation of timing over all runs</li>
<li><code>Warmup</code>: warm up run average</li>
<li><code>Init</code>: start up time (should always be same as the <code>Initialized session in</code>)</li>
<li><code>no stats</code>: Is the very poorly named average run time (matches the <code>avg=</code> in the previous line)</li>
<li><code>num_threads</code>: This is used to set <code>intra_op_parallelism_threads</code> and <code>inter_op_parallelism_threads</code> (more info <a href="https://stackoverflow.com/a/41233901/1097517">here</a>)</li>
</ul>
<p>The relevant files (linked to the proper lines) are:</p>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/stats_calculator.h#L36" rel="noreferrer"><code>stats_calculator.h</code></a> - The code that actually tracks the runtimes</li>
<li><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/benchmark/benchmark_model.cc#L64" rel="noreferrer"><code>benchmark_model.cc</code></a>(tflite) - The weird "no stats" name</li>
<li><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/benchmark/benchmark_model.cc#L256" rel="noreferrer"><code>benchmark_model.cc</code></a>(pb) - Using <code>num_threads</code></li>
</ul>
<p>I'm not so sure about the using GPU vs not using GPU. If you are using the <code>freeze_graph</code> to export the <code>.pb</code> file then it will store the device of each node in the graph. You can use device placement to do this before exporting. If you need to change it after you could try setting the environment variable <code>CUDA_VISIBLE_DEVICES=""</code> to make sure the GPU is not used.</p>
|
tensorflow|benchmarking|tensorflow-lite
| 6
|
377,329
| 55,394,339
|
"Request payload size exceeds the limit" in google cloud json prediction request
|
<p>I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using <a href="https://github.com/lengstrom/fast-style-transfer" rel="nofollow noreferrer">fast-style-transfer</a> and saved it in my Google Cloud ML Engine's models section. The input is float32, so I had to convert my image to this format.</p>
<pre><code>image = tf.image.convert_image_dtype(im, dtypes.float32)
matrix_test = image.eval()
</code></pre>
<p>Then I generated my json file for the request:</p>
<pre><code>js = json.dumps({"image": matrix_test.tolist()})
</code></pre>
<p>Using the following code:</p>
<p><code>gcloud ml-engine predict --model {model-name} --json-instances request.json</code></p>
<p>The following error is returned: </p>
<pre><code>ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 1572864 bytes.",
"status": "INVALID_ARGUMENT"
}
}
</code></pre>
<p>I would like to know if I can increase this limit and, if not, whether there is a way to fix it with a workaround... thanks in advance!</p>
|
<p>This is a hard limit for the Cloud Machine Learning Engine API. There's a <a href="https://issuetracker.google.com/issues/123314535" rel="nofollow noreferrer">feature request</a> to increase this limit. You could post a comment there asking for an update. Moreover, you could try <a href="https://stackoverflow.com/a/47523085">the following solution</a> in the meantime.</p>
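<p>In the meantime, a sketch of the usual workaround: downscale/compress the image and send it base64-encoded. This assumes your model is exported to accept an encoded image string whose input name ends in <code>_bytes</code>:</p>
<pre><code>import base64
import json

with open('small_image.jpg', 'rb') as f:
    b64 = base64.b64encode(f.read()).decode('utf-8')

# hypothetical input name "image_bytes"
with open('request.json', 'w') as f:
    json.dump({'image_bytes': {'b64': b64}}, f)
</code></pre>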
<p>Hope it helps</p>
|
python-2.7|tensorflow|google-cloud-ml
| 2
|
377,330
| 55,547,287
|
how to get last column of pandas series
|
<p>I am trying to count the frequencies of an array.
I've read this <a href="https://stackoverflow.com/questions/40144769/how-to-select-the-last-column-of-dataframe">post</a>; I am using a DataFrame and get a Series back.</p>
<pre><code>>>> a = np.array([1, 1, 5, 0, 1, 2, 2, 0, 1, 4])
>>> df = pd.DataFrame(a, columns=['a'])
>>> b = df.groupby('a').size()
>>> b
a
0 2
1 4
2 2
4 1
5 1
dtype: int64
>>> b.iloc[:,-1]
</code></pre>
<p>When I try to get the last column, I get this error.</p>
<pre><code>Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/pan/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 1472, in __getitem__
return self._getitem_tuple(key) File "/Users/pan/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 2013, in _getitem_tuple
self._has_valid_tuple(tup) File "/Users/pan/anaconda3/lib/python3.6/site-packages/pandas/core/indexing.py", line 220, in _has_valid_tuple
raise IndexingError('Too many indexers') pandas.core.indexing.IndexingError: Too many indexers
</code></pre>
<p>How do I get the last column of <code>b</code>?</p>
|
<p>Since <code>pandas.Series</code> is a </p>
<blockquote>
<p>One-dimensional ndarray with axis labels</p>
</blockquote>
<p>it has no columns at all, so two-dimensional indexing like <code>b.iloc[:, -1]</code> raises <code>Too many indexers</code>; <code>b.iloc[-1]</code> gives you the last single value. If you want all the frequencies, i.e. the values of
your series, use:</p>
<pre><code>b.tolist()
</code></pre>
<p>or, alternatively:</p>
<pre><code>b.to_dict()
</code></pre>
<p>to keep both labels and frequencies.</p>
<p>P.S:</p>
<p>For your specific task consider also <code>collections</code> package:</p>
<pre><code>>>> from collections import Counter
>>> a = [1, 1, 5, 0, 1, 2, 2, 0, 1, 4]
>>> c = Counter(a)
>>> list(c.values())
[2, 4, 2, 1, 1]
</code></pre>
|
python|pandas
| 2
|
377,331
| 55,434,653
|
Batch Normalization doesn't have gradient in tensorflow 2.0?
|
<p>I am trying to make a simple GAN to generate digits from the MNIST dataset. However, when I get to training (which is custom) I get this annoying warning that I suspect is the cause of the model not training like I'm used to.</p>
<p>Keep in mind this is all in tensorflow 2.0 using it's default eager execution.</p>
<p>GET THE DATA(not that important)</p>
<pre><code>(train_images,train_labels),(test_images,test_labels) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
train_dataset = tf.data.Dataset.from_tensor_slices((train_images,train_labels)).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
</code></pre>
<p>GENERATOR MODEL(This is where the Batch Normalization is at)</p>
<pre><code>def make_generator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())

    model.add(tf.keras.layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size

    model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())

    model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())

    model.add(tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model
</code></pre>
<p>DISCRIMINATOR MODEL (likely not that important)</p>
<pre><code>def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dropout(0.3))

    model.add(tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dropout(0.3))

    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(1))
    return model
</code></pre>
<p>INSTANTIATE THE MODELS(likely not that important)</p>
<pre><code>generator = make_generator_model()
discriminator = make_discriminator_model()
</code></pre>
<p>DEFINE THE LOSSES(maybe the generator loss is important since that is where the gradient comes from)</p>
<pre><code>def generator_loss(generated_output):
    return tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(generated_output), logits=generated_output)

def discriminator_loss(real_output, generated_output):
    # [1,1,...,1] with real output since it is true and we want our generated examples to look like it
    real_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(real_output), logits=real_output)

    # [0,0,...,0] with generated images since they are fake
    generated_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(generated_output), logits=generated_output)

    total_loss = real_loss + generated_loss
    return total_loss
</code></pre>
<p>MAKE THE OPTIMIZERS(likely not important)</p>
<pre><code>generator_optimizer = tf.optimizers.Adam(1e-4)
discriminator_optimizer = tf.optimizers.Adam(1e-4)
</code></pre>
<p>RANDOM NOISE FOR THE GENERATOR(likely not important)</p>
<pre><code>EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We'll re-use this random vector used to seed the generator so
# it will be easier to see the improvement over time.
random_vector_for_generation = tf.random.normal([num_examples_to_generate,
noise_dim])
</code></pre>
<p>A SINGLE TRAIN STEP(This is where I get the error</p>
<pre><code>def train_step(images):
    # generating noise from a normal distribution
    noise = tf.random.normal([BATCH_SIZE, noise_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images[0], training=True)
        generated_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(generated_output)
        disc_loss = discriminator_loss(real_output, generated_output)

    # This line >>>>>
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
    # <<<<< This line
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))
</code></pre>
<p>THE FULL TRAIN(not important except that it calls train_step)</p>
<pre><code>def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()

        for images in dataset:
            train_step(images)

        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 random_vector_for_generation)

        # saving (checkpoint) the model every 15 epochs
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix = checkpoint_prefix)

        print ('Time taken for epoch {} is {} sec'.format(epoch + 1,
                                                          time.time()-start))
    # generating after the final epoch
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             random_vector_for_generation)
</code></pre>
<p>BEGIN TRAINING </p>
<pre><code>train(train_dataset, EPOCHS)
</code></pre>
<p>The error I get is as follows, </p>
<pre><code>W0330 19:42:57.366302 4738405824 optimizer_v2.py:928] Gradients does
not exist for variables ['batch_normalization_v2_54/moving_mean:0',
'batch_normalization_v2_54/moving_variance:0',
'batch_normalization_v2_55/moving_mean:0',
'batch_normalization_v2_55/moving_variance:0',
'batch_normalization_v2_56/moving_mean:0',
'batch_normalization_v2_56/moving_variance:0'] when minimizing the
loss.
</code></pre>
<p>And I get an image from the generator which looks like this:
<a href="https://i.stack.imgur.com/N7RiB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N7RiB.png" alt="enter image description here"></a></p>
<p>which is kinda what I would expect without the normalization. Everything would clump to one corner because there are extreme values.</p>
|
<p>The problem is here:</p>
<pre><code>gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
</code></pre>
<p>You should only be getting gradients for the <em>trainable</em> variables. So you should change it to </p>
<pre><code>gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
</code></pre>
<p>The same goes for the three lines following. The <code>variables</code> field includes stuff like the running averages batch norm uses during inference. Because they are not used during training, there are no sensible gradients defined and trying to compute them will lead to a crash.</p>
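<p>As a quick sanity check, you can compare the two collections; the difference is exactly the non-trainable moving statistics of the batch norm layers:</p>
<pre><code># variables includes moving_mean/moving_variance, trainable_variables does not
print(len(generator.variables))
print(len(generator.trainable_variables))
</code></pre>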
|
python-3.x|tensorflow|batch-normalization|tensorflow2.0
| 10
|
377,332
| 55,274,628
|
Install Tensorflow gpu on a remote pc without sudo
|
<p>I don't have <strong>sudo</strong> access to the remote pc where <strong>cuda</strong> is already installed. Now, I have to install <strong>tensorflow-gpu</strong> on that system. Please give me the step by step guide to install it without sudo. </p>
<p>Operating System : Ubuntu 18.04</p>
|
<p>I had to do this before. Basically, I installed miniconda (you can also use anaconda, same thing and installation works without sudo), and installed everything using conda.</p>
<p>Create my environment and activate it:</p>
<pre><code>conda create --name myenv python=3.6.8
conda activate myenv
</code></pre>
<p>Install the CUDA things and Tensorflow</p>
<pre><code>conda install cudatoolkit=9.0 cudnn=7.1.2 tensorflow-gpu
</code></pre>
<p>Depending on your system, you may need to change version numbers.</p>
<p>Not sure how familiar you are with conda - it is basically a package-manager/repository and environment manager like pip/venv with the addition that it can handle non-python things as well (such as cudnn for example). As a note - if a package is not available through conda, you can still use pip as a fallback.</p>
<p><strong>Untested with pip</strong>
I previously tried to do it without conda and using pip (I ended up failing due to some version conflicts, got frustrated with the process and moved to conda). It gets a little more complicated since you need to manually install it. So first, download cudnn from nvidia and unpack it anywhere you want. Then, you need to add it to the LD_LIBRARY_PATH:</p>
<pre><code>export LD_LIBRARY_PATH=/path/to/cuda/lib64:/path/to/cudnn/lib64:${LD_LIBRARY_PATH}
</code></pre>
|
tensorflow|ubuntu-18.04
| 3
|
377,333
| 55,474,457
|
How can I create a pandas dataframe column for each part-of-speech tag?
|
<p>I have a dataset that consists of tokenized, POS-tagged phrases as one column of a dataframe: </p>
<p><a href="https://i.stack.imgur.com/3TvQU.png" rel="nofollow noreferrer">Current Dataframe</a></p>
<p>I want to create a new column in the dataframe, consisting only of the proper nouns in the previous column:</p>
<p><a href="https://i.stack.imgur.com/Chhmw.png" rel="nofollow noreferrer">Desired Solution</a></p>
<p>Right now, I'm trying something like this for a single row: </p>
<pre><code>if 'NNP' in df['Description_POS'][96][0:-1]:
    df['Proper Noun'] = df['Description_POS'][96]
</code></pre>
<p>But then I don't know how to loop this for each row, and how to obtain the tuple which contains the proper noun.
I'm very new right now and at a loss for what to use, so any help would be really appreciated! </p>
<p>Edit: I tried the solution recommended, and it seems to work, but there is an issue. </p>
<p>This was my dataframe:
<a href="https://i.stack.imgur.com/9EPZD.png" rel="nofollow noreferrer">Original dataframe</a></p>
<p>After implementing the recommended code</p>
<pre><code>df['Proper Nouns'] = df['POS_Description'].apply(
lambda row: [i[0] for i in row if i[1] == 'NNP'])
</code></pre>
<p>it looks like this:
<a href="https://i.stack.imgur.com/0Z5wb.png" rel="nofollow noreferrer">Dataframe after creating a proper nouns column</a></p>
|
<p>You can use the apply method, which, as the name suggests, will apply the given function to every row of the dataframe or series. This will return a series, which you can add as a new column to your dataframe:</p>
<pre><code>df['Proper Nouns'] = df['POS_Description'].apply(
lambda row: [i[0] for i in row if i[1] == 'NNP'])
</code></pre>
<p>I am assuming the POS_Description dtype to be a list of tuples. </p>
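<p>One thing to check if the new column comes out wrong: if the dataframe was read back from a CSV, the POS_Description column may hold string representations rather than actual lists of tuples. In that case, parse it first (a sketch):</p>
<pre><code>import ast

# convert the string representation back into a list of tuples
df['POS_Description'] = df['POS_Description'].apply(ast.literal_eval)
</code></pre>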
|
python|pandas|nltk|pos-tagger
| 1
|
377,334
| 55,475,655
|
ModuleNotFoundError: No module named 'nets'
|
<p>Hi guys, I'm getting an error when I try to run the training using this command:</p>
<pre><code>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config
</code></pre>
<blockquote>
<pre><code>Traceback (most recent call last):
  File "train.py", line 51, in <module>
    from object_detection.builders import model_builder
  File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 35, in <module>
    from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
  File "C:\tensorflow1\models\research\object_detection\models\faster_rcnn_inception_resnet_v2_feature_extractor.py", line 28, in <module>
    from nets import inception_resnet_v2
ModuleNotFoundError: No module named 'nets'
</code></pre>
</blockquote>
|
<p>The <code>nets</code> module is in the <code>slim</code> folder; you need to add the <code>slim</code> library to <code>PYTHONPATH</code>:</p>
<pre><code># From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
</code></pre>
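<p>Since your traceback shows Windows paths, the Windows equivalent should be along these lines (paths taken from your traceback):</p>
<pre><code>:: from a command prompt
set PYTHONPATH=%PYTHONPATH%;C:\tensorflow1\models\research;C:\tensorflow1\models\research\slim
</code></pre>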
|
tensorflow|object-detection-api
| 0
|
377,335
| 55,385,927
|
joining two dataframes and extending the contents of one of them
|
<pre><code>df1 = pd.DataFrame([1,2,3],columns=['x'])
df2 = pd.DataFrame([[100,200,300]],columns=['y','w','z'])
</code></pre>
<p>How can I join both dataframes by extending <code>df2</code> to match the rows of <code>df1</code>?</p>
<p>This is what I'm trying to achieve</p>
<pre><code>x y w z
1 100 200 300
2 100 200 300
3 100 200 300
</code></pre>
|
<p>You can use join, as @Vaishali mentions, if the indices of your dataframes are the default range index, i.e. the index of your second dataframe, df2, lines up with the start of df1's index. However, creating a temporary key and doing a merge to produce a cartesian product is a little more robust.</p>
<pre><code>df1.assign(key=1).merge(df2.assign(key=1)).drop('key', axis=1)
</code></pre>
<p>Output:</p>
<pre><code> x y w z
0 1 100 200 300
1 2 100 200 300
2 3 100 200 300
</code></pre>
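<p>As a side note, newer pandas versions (1.2+) support this cross join directly, which should give the same result:</p>
<pre><code>df1.merge(df2, how='cross')
</code></pre>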
|
python|pandas
| 3
|
377,336
| 55,174,890
|
one Column value based add value of corresponding column in new column
|
<p>I have two data frames, df1 and df2; df2 has 4 columns. If the value in df2's first column is 0, the code should add the corresponding 3 column values to df1 under the column names col2_0, col3_0 and col4_0 (note: the same process also needs to be done for the values -1, -2, -3, -4, -5). This can be done with if/else, but I am looking for an easy and fast pandas way to handle the problem. </p>
<p><strong>Here is df2</strong></p>
<p><a href="https://i.stack.imgur.com/12CIN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/12CIN.png" alt="enter image description here"></a> </p>
|
<p>I'll use an initially empty df1 with some extra rows for this example:</p>
<pre><code>df2 = pd.DataFrame({'#timestamp':[-5,-4,-3,-2,-1,0],
'grid_U1': [413.714,413.797,413.926,414.037,414.066,414.064],
'grid_U2': [415.796,415.909,416.117,416.093,416.163,416.183],
'grid_U3': [416.757,416.853,417.09,417.158,417.175,417.085]})
df1 = pd.DataFrame(index=range(0,10), columns=['col2_0','col3_0','col4_0'])
</code></pre>
<p>If you want to match row indices (copy from a given row number in df2 to the same row number in df1), then you can use this:</p>
<pre><code>In [403]: df1[['col2_0','col3_0','col4_0']] = df2[df2['#timestamp'].isin(range(-5,1))][['grid_U1','grid_U2','grid_U3']]
In [404]: df1
Out[404]:
col2_0 col3_0 col4_0
0 413.714 415.796 416.757
1 413.797 415.909 416.853
2 413.926 416.117 417.090
3 414.037 416.093 417.158
4 414.066 416.163 417.175
5 414.064 416.183 417.085
6 NaN NaN NaN
7 NaN NaN NaN
8 NaN NaN NaN
9 NaN NaN NaN
</code></pre>
<p>I'll confirm this is matching row numbers by selecting for timestamp values that don't occur at the top:</p>
<pre><code>In [405]: df1[['col2_0','col3_0','col4_0']] = df2[df2['#timestamp'].isin([-3,-1])][['grid_U1','grid_U2','grid_U3']]
In [406]: df1
Out[406]:
col2_0 col3_0 col4_0
0 NaN NaN NaN
1 NaN NaN NaN
2 413.926 416.117 417.090
3 NaN NaN NaN
4 414.066 416.163 417.175
5 NaN NaN NaN
6 NaN NaN NaN
7 NaN NaN NaN
8 NaN NaN NaN
9 NaN NaN NaN
</code></pre>
<p>If you want to instead fill in from the top of df1, you can tack a call to reset_index on the end (you need drop=True to avoid adding an extra index column in):</p>
<pre><code>In [412]: df1[['col2_0','col3_0','col4_0']] = df2[df2['#timestamp'].isin([-3,-1])][['grid_U1','grid_U2','grid_U3']].reset_index(drop=True)
In [413]: df1
Out[413]:
col2_0 col3_0 col4_0
0 413.926 416.117 417.090
1 414.066 416.163 417.175
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 NaN NaN NaN
7 NaN NaN NaN
8 NaN NaN NaN
9 NaN NaN NaN
</code></pre>
|
python-3.x|pandas|pandas-groupby
| 1
|
377,337
| 55,447,630
|
Python - Pandas Groupby and filter
|
<p>I have this as a CSV I'm working with in pandas. A simplified df is as follows:</p>
<pre><code> permno price mv yearmonth
1752 10057 18.1250 7.898875e+04 198301
4732 10137 23.7500 1.130191e+06 198301
6144 10153 9.7500 1.226550e+05 198302
7869 10225 45.8750 2.530740e+06 198302
8267 10233 57.6250 1.670894e+06 198303
8692 10241 30.8750 5.742132e+06 198303
</code></pre>
<p>I would like to group by yearmonth and sort according to mv, splitting each yearmonth into 5 quantile groups, to get the expected result:</p>
<pre><code>yearmonth:198301, quantile:quantile(0.2)
permno price mv yearmonth
1752 10057 18.1250 7.898875e+04 198301
yearmonth:198301, quantile:quantile(0.4)
4732 10137 23.7500 1.130191e+06 198301
yearmonth:198302, quantile:quantile(0.2)
permno price mv yearmonth
6144 10057 9.7500 1.226550e+05 198302
yearmonth:198302, quantile:quantile(0.4)
permno price mv yearmonth
7869 10137 45.8750 2.530740e+06 198302
yearmonth:198303, quantile:quantile(0.2)
permno price mv yearmonth
8267 10057 57.6250 1.670894e+06 198303
yearmonth:198303, quantile:quantile(0.4)
permno price mv yearmonth
8692 10137 30.8750 5.742132e+06 198303
</code></pre>
<p>Some code that I've tried:</p>
<pre><code>q20=data.groupby("yearmonth")["mv"].quantile(0.2)
q40=data.groupby("yearmonth")["mv"].quantile(0.4)
q60=data.groupby("yearmonth")["mv"].quantile(0.6)
q80=data.groupby("yearmonth")["mv"].quantile(0.8)
for yearmonth, y in data.groupby(["yearmonth"]):
    data_q20 = y[y["mv"] <= q20[yearmonth]]
    data_q40 = y[y["mv"] <= q40[yearmonth]]
    data_q40 = data_q40[data_q40["mv"] > q20[yearmonth]]
    data_q60 = y[y["mv"] <= q60[yearmonth]]
    data_q60 = data_q60[data_q60["mv"] > q40[yearmonth]]
    data_q80 = y[y["mv"] > q60[yearmonth]]
    data_q80 = data_q80[data_q80["mv"] <= q80[yearmonth]]
    data_q100 = y[y["mv"] > q80[yearmonth]]
</code></pre>
<p>I am not sure how to map the "mv" values of each yearmonth to the corresponding quantile using apply. Any hint on that?</p>
<p>The ultimate goal of this sorting is to calculate the mean return in each yearmonth.</p>
|
<p>I think you may want to use cut or qcut to get your desired results. Cut will create evenly spaced ranges while qcut will create an even number of items per bin. Qcut is more consistent with quantiles.</p>
<p>Here's my code:</p>
<pre><code>#Recreate your dataset
df = pd.DataFrame(
{
'permno':[10057, 10137,10153, 10225, 10233, 10241],
'price':[18.125, 23.75,9.75, 45.875,57.625, 30.875],
'mv':[7.898875e+04, 1.130191e+06, 1.226550e+05,2.530740e+06,1.670894e+06, 5.742132e+06 ],
'yearmonth':[198301, 198301,198302,198302, 198303,198303]
},
index=[1752, 4732, 6144, 7869, 8267, 8692]
)
#Create a column for the classification.
df['Quantiles']= df.groupby(['yearmonth'])['mv'].transform(
lambda x: pd.qcut(x, 5, labels=(0.2, 0.4,.6,.8,1.0))
)
</code></pre>
<p>From here, you could filter the transactions. I think that the dataset you provided is too small but on a larger dataset, this code should work fine.</p>
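<p>From there, the mean per yearmonth and quantile bucket would be a one-liner. Assuming a hypothetical <code>ret</code> column holding the monthly returns (not in the sample data):</p>
<pre><code># hypothetical 'ret' column
mean_ret = df.groupby(['yearmonth', 'Quantiles'])['ret'].mean()
</code></pre>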
|
python|pandas|dataframe|group-by
| 0
|
377,338
| 55,459,114
|
Merging Two Columns in DataFrame With Variable Column Names
|
<p>Editing my original post to hopefully simplify my question... I'm merging multiple DataFrames into one, SomeData.DataFrame, which gives me the following: </p>
<pre><code> Key 2019-02-17 2019-02-24_x 2019-02-24_y 2019-03-03
0 A 80 NaN NaN 80
1 B NaN NaN 45 36
2 C 44 NaN 39 NaN
3 D 80 NaN NaN 12
4 E 49 2 NaN NaN
</code></pre>
<p>What I'm trying to do now is efficiently merge the columns ending in "_x" and "_y" while keeping everything else in place so that I get:</p>
<pre><code> Key 2019-02-17 2019-02-24 2019-03-03
0 A 80 NaN 80
1 B NaN 45 36
2 C 44 39 NaN
3 D 80 NaN 12
4 E 49 2 NaN
</code></pre>
<p>The other issue I'm trying to account for is that the data contained in SomeData.DataFrame changes weekly so that my column headers are unpredictable. Meaning, some weeks I may not have the above issue at all and other weeks, there may be multiple instances for example:</p>
<pre><code> Key 2019-02-17 2019-02-24_x 2019-02-24_y 2019-03_10_x 2019-03-10_y
0 A 80 NaN NaN 80 NaN
1 B NaN NaN 45 36 NaN
2 C 44 NaN 39 NaN 12
3 D 80 NaN NaN 12 NaN
4 E 49 2 NaN NaN 17
</code></pre>
<p>So that again the desired result would be:</p>
<pre><code> Key 2019-02-17 2019-02-24 2019-03_10
0 A 80 NaN 80
1 B NaN 45 36
2 C 44 39 12
3 D 80 NaN 12
4 E 49 2 17
</code></pre>
<p>Is what I'm asking reasonable or am I venturing outside the bounds of Pandas' limits? I can't find anyone trying to do anything similar so I'm not sure anymore. Thank you in advance! </p>
|
<p>Edited answer to updated question:</p>
<pre><code>df = df.set_index('Key')
df.groupby(df.columns.str.split('_').str[0], axis=1).sum()
</code></pre>
<p>Output:</p>
<pre><code>     2019-02-17  2019-02-24  2019-03-03
Key
A          80.0         0.0        80.0
B           0.0        45.0        36.0
C          44.0        39.0         0.0
D          80.0         0.0        12.0
E          49.0         2.0         0.0
</code></pre>
<p>For the second dataframe, the same call</p>
<pre><code>df.groupby(df.columns.str.split('_').str[0], axis=1).sum()
</code></pre>
<p>gives:</p>
<pre><code>     2019-02-17  2019-02-24  2019-03-10
Key
A          80.0         0.0        80.0
B           0.0        45.0        36.0
C          44.0        39.0        12.0
D          80.0         0.0        12.0
E          49.0         2.0        17.0
</code></pre>
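<p>Note that <code>sum</code> turns all-NaN groups into <code>0.0</code>. If you want to keep <code>NaN</code> as in your desired output, newer pandas versions accept <code>min_count</code>:</p>
<pre><code>df.groupby(df.columns.str.split('_').str[0], axis=1).sum(min_count=1)
</code></pre>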
<hr>
<p>You could try something like this:</p>
<pre><code>df_t = df.T
df_t.set_index(df_t.groupby(level=0).cumcount(), append=True)\
    .unstack().T\
    .sort_values(df.columns[0])[df.columns.unique()]\
    .reset_index(drop=True)
</code></pre>
<p>Output:</p>
<pre><code> val03-20 03-20 val03-24 03-24
0 a 1 d 5
1 b 6 e 7
2 c 4 f 10
3 NaN NaN g 5
4 NaN NaN h 6
5 NaN NaN i 1
</code></pre>
|
python|pandas|dataframe
| 0
|
377,339
| 55,156,019
|
Split key value string in python and move it in a df column
|
<p>Here's the column that I have; I want to split it into key-value pairs and store them in new columns in a pandas df.</p>
<pre><code>{"FontStyle"=>"Gill Sans Standard", "FontSize"=>"Medium (3mm)"}
{"Font Style"=>"Gill Sans Standard","Font Size"=>"Medium (3mm)"}
{"Font Style":"Script","Font Size":"Medium (3mm)"}
{"Font Style"=>"Gill Sans Standard","Font Size"=>"Medium (3mm)"}
{"Font Style":"Gill Sans Standard","Font Size":"Medium (3mm)"}
</code></pre>
<p>The main issue is that some of them have '=>' while some have a colon.</p>
<p>I want two new columns in the df, one for Font Style and another for Font Size, with the respective values in them.</p>
<p>If anyone can help me achieve this it would be great; also, if you could recommend a book/tutorial for regex, that would be great too.</p>
<p>Thank you</p>
|
<p>This is by far not the most efficient code, but it does the job.</p>
<pre><code>import pandas as pd
import ast
text = '''{"FontStyle"=>"Gill Sans Standard", "FontSize"=>"Medium (3mm)"}
{"Font Style"=>"Gill Sans Standard","Font Size"=>"Medium (3mm)"}
{"Font Style"=>"Script","Font Size"=>"Medium (3mm)"}
{"Font Style"=>"Gill Sans Standard","Font Size"=>"Medium (3mm)"}'''
my_list = []
text = text.replace("FontStyle", "Font Style")
text = text.replace("FontSize", "Font Size")
text = text.replace("=>", ":")
text = text.split("\n")
for one_dict in text:
my_list.append(ast.literal_eval(one_dict))
df = pd.DataFrame(my_list)
print(df)
</code></pre>
<p>The output for the above code:</p>
<pre><code> Font Size Font Style
0 Medium (3mm) Gill Sans Standard
1 Medium (3mm) Gill Sans Standard
2 Medium (3mm) Script
3 Medium (3mm) Gill Sans Standard
</code></pre>
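<p>To apply the same idea to a dataframe column directly, a sketch (the column name <code>options</code> is assumed; the "FontStyle"/"Font Style" key variants would need the same normalization as above):</p>
<pre><code>import ast
import pandas as pd

df['options'] = df['options'].str.replace('=>', ':', regex=False)
parsed = df['options'].apply(ast.literal_eval)
df = df.join(pd.DataFrame(parsed.tolist(), index=df.index))
</code></pre>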
<p>I hope this helps. :-) Let me know if it does.</p>
|
python|regex|pandas|split
| 2
|
377,340
| 55,353,624
|
improve speed of extracting information from pandas columns
|
<p>I have a dataframe with around 200,000 datapoints and a column which looks like this (example for 1 datapoint):</p>
<pre><code>'{"id":342,"name":"Web","slug":"technology/web","position":15,"parent_id":16,"color":6526716,"urls":{"web":{"discover":"http://www.kickstarter.com/discover/categories/technology/web"}}}'
</code></pre>
<p>I want to extract information about the name and slug. I did the following:</p>
<pre><code>df["cat"], df["slug"] = np.nan, np.nan
for i in range(0, len(df.category)):
df["cat"][i] = df.category.iloc[i].split('"name":"')[1].split('"')[0]
df["slug"][i] = df.category.iloc[i].split('"name":"')[1].split('"')[4]
</code></pre>
<p>This works perfectly fine, but it takes around 4 hours. Is there any way to make this faster?</p>
|
<p>Instead of manipulating a DataFrame directly, try using simple data types and create the dataframe in one go. Here is another solution besides jezrael's (note the JSON keys are <code>name</code> and <code>slug</code>):</p>
<pre><code>import json
import pandas as pd

cat, slug = [], []
for row in df.category:
    d = json.loads(row)
    cat.append(d['name'])
    slug.append(d['slug'])
df = pd.DataFrame({'cat': cat, 'slug': slug})
</code></pre>
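<p>If your pandas version provides it, <code>json_normalize</code> can replace the manual loop (the strings must be valid JSON, which they appear to be here):</p>
<pre><code>import json
import pandas as pd

parsed = df['category'].apply(json.loads)
out = pd.json_normalize(parsed.tolist())[['name', 'slug']]
</code></pre>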
|
python|pandas|dictionary
| 1
|
377,341
| 55,523,474
|
How to apply np.ceil to a structured numpy array
|
<p>I'm trying to use the np.ceil function on a structured numpy array, but all I get is the error message:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: ufunc 'ceil' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>Here's a simply example of what that array would look like:</p>
<pre class="lang-py prettyprint-override"><code>arr = np.array([(1.4,2.3), (3.2,4.1)], dtype=[("x", "<f8"), ("y", "<f8")])
</code></pre>
<p>When I try</p>
<pre class="lang-py prettyprint-override"><code>np.ceil(arr)
</code></pre>
<p>I get the above mentioned error. When I just use one column, it works:</p>
<pre class="lang-py prettyprint-override"><code>In [77]: np.ceil(arr["x"])
Out[77]: array([ 2., 4.])
</code></pre>
<p>But I need to get the entire array. Is there any way other than going column by column, or not using structured arrays altogether?</p>
|
<p>Here's a dirty solution based on viewing the array without its structure, taking the ceiling, and then converting it back to a structured array.</p>
<pre><code># sample array
arr = np.array([(1.4,2.3), (3.2,4.1)], dtype = [("x", "<f8"), ("y", "<f8")])
# remove struct and take the ceiling
arr1 = np.ceil(arr.view((float, len(arr.dtype.names))))
# coerce it back into the struct
arr = np.array(list(tuple(t) for t in arr1), dtype = arr.dtype)
# kill the intermediate copy
del arr1
</code></pre>
<p>and here it is as an unreadable one-liner but without assigning the intermediate copy <code>arr1</code></p>
<pre><code>arr = np.array(
list(tuple(t) for t in np.ceil(arr.view((float, len(arr.dtype.names))))),
dtype = arr.dtype
)
# array([(2., 3.), (4., 5.)], dtype=[('x', '<f8'), ('y', '<f8')])
</code></pre>
<p>I don't claim this is a great solution, but it should help you move on with your project until something better is proposed.</p>
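<p>That said, if iterating over the handful of named fields is acceptable after all, an in-place sketch is much simpler:</p>
<pre><code># apply ceil field by field, in place
for name in arr.dtype.names:
    arr[name] = np.ceil(arr[name])
</code></pre>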
|
python|numpy|ceil|structured-array
| 0
|
377,342
| 55,513,199
|
How to deal with unicode values dict in a column
|
<p>I have a column ('discount') in a df in which every value has this format:</p>
<pre><code>{u'customer': u'xdawd', u'end': None, u'coupon': {u'object': u'coupon', u'name': u'Black Friday', u'percent_off': None, u'created': 213213, u'times_redeemed': 10, u'amount_off': 2500, u'currency': u'gbp'}, u'object': u'discount', u'start': 1543327380, u'subscription': u'uiodsjciosdj'}
</code></pre>
<p>I want to return the percent_off value or the amount_off value (only one of the two appears) in a new column, so I have to get the one whose value is not None.</p>
<p>Just an example of how it looks in Excel:
<a href="https://i.imgur.com/Dt2fj8i.png" rel="nofollow noreferrer">https://i.imgur.com/Dt2fj8i.png</a></p>
|
<p>With a <code>lambda function</code> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>Series.apply</code></a>:</p>
<pre><code>df['discount'].apply(lambda x: x['coupon'].get('percent_off') or x['coupon'].get('amount_off'))
</code></pre>
<p>[out]</p>
<pre><code>0 2500
Name: discount, dtype: int64
</code></pre>
<p>Or if you prefer to be more explicit as per @lenz suggestion:</p>
<pre><code>def extract_discount(x):
    return x['coupon'].get('percent_off') or x['coupon'].get('amount_off')

df['discount'].apply(extract_discount)
</code></pre>
|
python|pandas|unicode|apply
| 2
|
377,343
| 55,350,221
|
multi indexing with sort by values - Pandas
|
<p>I am trying to return the <code>max</code> value based on two <code>Columns</code> in a <code>pandas</code> <code>df</code>. I want to group by and sort these values so all are displayed from <code>max</code> to <code>min</code>.</p>
<p>Here is my attempt:</p>
<pre><code>import pandas as pd
d = ({
'Day' : ['Mon','Wed','Sat','Mon','Wed','Sat','Mon','Wed','Sat','Mon','Wed','Sat'],
'Object' : ['X','X','X','Y','Y','Y','X','X','X','Y','Y','Y'],
'Value' : [1,1,1,2,2,2,3,3,3,4,4,4],
})
df = pd.DataFrame(data = d)
df = df.groupby(['Day','Object']).Value.max()
df.groupby('Day').transform(pd.Series.sort_values,ascending=False)
</code></pre>
<p>Out:</p>
<pre><code>Day Object
Mon X 3
Y 4
Sat X 3
Y 4
Wed X 3
Y 4
</code></pre>
<p>Intended:</p>
<pre><code>Day Object
Mon Y 4
X 3
Y 2
X 1
Sat Y 4
X 3
Y 2
X 1
Wed Y 4
X 3
Y 2
X 1
</code></pre>
|
<pre><code>df = pd.DataFrame({
'Day': ['Mon', 'Mon', 'Mon', 'Mon', 'Wed', 'Wed', 'Wed', 'Wed', 'Sat', 'Sat', 'Sat', 'Sat'],
'Object': ['X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y'],
'Value': [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
})
df = df.sort_values(['Day', 'Value'], ascending=[1, 0])
df = df.set_index(['Day', 'Object'])
print(df)
</code></pre>
<p>output</p>
<pre><code> Value
Day Object
Mon Y 4
X 3
Y 2
X 1
Sat Y 4
X 3
Y 2
X 1
Wed Y 4
X 3
Y 2
X 1
</code></pre>
|
python|pandas|group-by|duplicates
| 3
|
377,344
| 55,258,435
|
How to fill values in a pandas Series in positions between specific "start" and "stop" markers?
|
<p>I have a DataFrame that is a follows:</p>
<pre><code>df[16820:16830]
data0 start_stop
16820 1 0
16821 1 1
16822 1 0
16823 1 0
16824 1 0
16825 1 -1
16826 0 0
16827 0 0
16828 1 1
16829 0 0
16830 1 -1
</code></pre>
<p>What I need to do is mark values between 1 and -1 in the start_stop column as valid (1 means 'start', -1 means 'stop') and values between -1 and 1 as invalid (rubbish that I will later discard).
Is there any efficient way to do this instead of iterating with loops over the whole dataframe?</p>
<p>The end result would look like this:</p>
<pre><code> data0 start_stop valid
16820 1 0 False
16821 1 1 True
16822 1 0 True
16823 1 0 True
16824 1 0 True
16825 1 -1 False
16826 0 0 False
16827 0 0 False
16828 1 1 True
16829 0 0 True
16830 1 -1 False
...
</code></pre>
<p>The relevant loop that would achieve it is, I think, this:</p>
<pre><code>df = df.reset_index(drop=True)
value = False

for i in range(0, df.shape[0]):
    if df.loc[i, 'start_stop'] == 1:
        df.loc[i, 'valid'] = True
        value = True
    elif df.loc[i, 'start_stop'] == -1:
        df.loc[i, 'valid'] = False
        value = False
    if df.loc[i, 'start_stop'] == 0:
        df.loc[i, 'valid'] = value
</code></pre>
<p>Thanks!</p>
|
<p>This should work</p>
<pre><code>df['valid'] = df.start_stop.cumsum()
</code></pre>
<p>Then</p>
<pre><code>df['valid'] = df['valid'].apply(lambda x: True if x==1 else False)
df
start_stop valid
0 0 False
1 1 True
2 0 True
3 0 True
4 0 True
5 -1 False
6 0 False
7 0 False
8 1 True
9 0 True
10 -1 False
</code></pre>
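<p>If the 1/-1 markers always alternate, the two steps above can be collapsed into one line (a sketch of the same cumsum idea):</p>
<pre><code>df['valid'] = df['start_stop'].cumsum().eq(1)
</code></pre>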
|
python|pandas
| 6
|
377,345
| 55,322,358
|
Python Numpy arrays, Vlookup like function
|
<p>I would like to ask for your help. The problem in steps.
1. Import two excel files into Python Data frames - so far no problem
2. Transferring the data frames into numpy arrays.
3. Create a VLOOKUP function in Python with the arrays. Both arrays have a key in the first column, which is unique and can be used for matching. The two tables include data which is correct in one table but not in the other. I would like to overwrite the wrong values in one table with the correct values from the other (I know which table has the right values...).</p>
<p>Is there a more numpy-like way to do it?</p>
<h1>So far the code I wrote:</h1>
<pre><code>import pandas as pd
df=pd.DataFrame()
s = pd.read_excel("C:\a.xlsx")
r = pd.read_excel("C:\b.xlsx")
z=s.values
t = r.values
</code></pre>
<h1>Here matching the two arrays, and overwriting the value</h1>
<pre><code>for i in z:
for j in t:
if z[i, 0] == t[j, 0]:
t[i, 41] = z[j, 5]
</code></pre>
|
<p>If both tables are the same length, use <code>pd.merge</code>; it acts like VLOOKUP:</p>
<pre><code>newdf = s.merge(r, on ='same_key')
</code></pre>
<p>newdf will have all the columns from both data frames. You can now access the individual columns you need to update:</p>
<pre><code>newdf['wrongcolumn'] = newdf['rightcolumn']
</code></pre>
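<p>A minimal sketch of that idea (the key column <code>'id'</code> and the column names <code>'rightcolumn'</code>/<code>'wrongcolumn'</code> below are placeholders for your real ones):</p>
<pre><code>import pandas as pd

s = pd.read_excel('a.xlsx')   # table with the correct values
r = pd.read_excel('b.xlsx')   # table with the wrong values

# bring the correct column over via the key, then overwrite the wrong one
newdf = r.merge(s[['id', 'rightcolumn']], on='id', how='left')
newdf['wrongcolumn'] = newdf['rightcolumn']
</code></pre>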
|
python|numpy|vlookup
| 0
|
377,346
| 55,153,394
|
Merge remaining part of split array
|
<p>I have framed the following code to split the array into 4 parts and obtain the first part separately. Now, I need to obtain the remaining parts as a joined, separate array.</p>
<pre><code>test = [(0,1,2),(9,0,1),(0,1,3),(0,1,8)]
print(test)
test_np = np.array_split(test,4)
np2 = test_np[2]
</code></pre>
<p>Then I can merge the other 3 parts into a new array <code>np_new = [(0,1,2),(0,1,3),(0,1,8)]</code></p>
<p>I can't figure out how to do it. The same should work even if I choose the 2nd part and want to merge the 1st, 3rd and 4th parts.</p>
|
<p>In your case, 'test' is a list of tuples, so you don't need numpy:</p>
<pre><code>import numpy as np
test = [(0,1,2),(9,0,1),(0,1,3),(0,1,8)]
t_0 = test[:1]
t_1 = test[1]
new_test= t_0+test[2:]
print(new_test)
# as np.array:
np_test=np.array(test)
</code></pre>
<p>If you have a numpy array in the first place:</p>
<pre><code>import numpy as np
np_test = np.array([(0,1,2),(9,0,1),(0,1,3),(0,1,8)])
new_np_test = np.vstack((np_test[0], np_test[2:]))
</code></pre>
|
python|arrays|numpy
| 1
|
377,347
| 55,142,337
|
KeyError when parsing CSV file
|
<pre><code>mtu,dap
06.01.2015 00:00 - 06.01.2015 01:00,36.90
</code></pre>
<p>I am trying to work the comma-delimited data shown above into pandas for further analysis with the following bit of code:</p>
<pre><code>import pandas as pd
DAP = pd.read_csv('xx.csv',
index_col = 'mtu',
sep = ',',
encoding="utf-8-sig")
#DAP = DAP.set_index('mtu')
date_time = DAP['mtu']
Hourly_DAP = DAP['dap']
</code></pre>
<p>However, it keeps giving me the following error, both with set_index enabled and with index_col. I have tried other solutions that can be found online, but none seem to solve this issue:</p>
<pre><code>KeyError: 'mtu'
</code></pre>
<p>Would anyone be able to solve this issue? </p>
<p>I have updated the code according to the duplicate question to the following; however, now I get a NameError that index is not defined. The answer to the duplicate question is very brief, so I cannot figure it out. The updated code is as follows; can anyone spot the mistake?</p>
<pre><code>import pandas as pd
DAP = pd.read_csv('xx.csv',
sep = ',',
encoding="utf-8-sig")
DAP = DAP.set_index('mtu','dap')
print(DAP.index)
index(['mtu', 'dap'], dtype='object', name='TweetID')
</code></pre>
|
<p>This is what I spent my day on; I will definitely give you a good review if you can help out. I feel like a noob, but you have to start somewhere. Thanks for your help anyway!</p>
<pre><code>af = act_freq['actual_freq']
datetime = act_freq['datetime']
act_freq = pd.read_csv('xx.csv',
sep = ',',
encoding="utf-8-sig")
act_freq['datetime'] = pd.to_datetime(act_freq['datetime'],
infer_datetime_format=True)
act_freq.set_index=act_freq['datetime']
grid_freq_des = 50
</code></pre>
<p>The following function gave me what I wanted, however I would like to do it for the whole file </p>
<pre><code>print(sum(abs(grid_freq_des-(af.head(150)))))
</code></pre>
<p>So I spent my day setting up something like this, which I cannot get to work:</p>
<pre><code> for af in range (act_freq['actual_frequency']):
freq_dev = grid_freq_des - af
print(sum(freq_dev))
</code></pre>
<p>So, summing up: I cannot set the datetime as the index (pandas keeps using its own index), and I would like to set up a function (freq_dev in this case) that runs over the values of 'actual_freq' in the csv:</p>
<pre><code> datetime actual_freq
0 2019-01-01 00:00:00 50.038
1 2019-01-01 00:00:10 50.021
2 2019-01-01 00:00:20 50.013
3 2019-01-01 00:00:30 50.004
</code></pre>
|
pandas|csv|parsing|keyerror
| 0
|
377,348
| 55,227,825
|
Create a TensorFlow Dataset for a CNN from a local dataset
|
<p>I have a big dataset of B/W images with two classes where the name of the directory is the name of the class:</p>
<ul>
<li>the directory <code>SELECTION</code> contains all images with label = selection;</li>
<li>the directory <code>NEUTRAL</code> contains all images with label = neutral.</li>
</ul>
<p>I need to load all these images into a TensorFlow dataset to replace the MNIST dataset in <a href="https://www.tensorflow.org/tutorials/estimators/cnn" rel="nofollow noreferrer">this</a> tutorial.</p>
<p>I've tried to follow <a href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb" rel="nofollow noreferrer">this</a> guide and it looks good, but there are some problems that I don't know how to fix. Following the guide I have arrived here:</p>
<pre class="lang-python prettyprint-override"><code> from __future__ import absolute_import, division, print_function
import os
import pathlib
import IPython.display as display
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(threshold=np.nan)
tf.enable_eager_execution()
tf.__version__
os.system('clear')
#### some tries for the SELECTION dataset ####
data_root = pathlib.Path('/Users/matteo/Desktop/DATASET_X/SELECTION/TRAIN_IMG')
all_image_paths = []
all_image_labels = []
for item in data_root.iterdir():
item_tmp = str(item)
if 'selection.png' in item_tmp:
all_image_paths.append(str(item))
all_image_labels.append(0)
image_count = len(all_image_paths)
label_names = ['selection', 'neutral']
label_to_index = dict((name, index) for index, name in enumerate(label_names))
img_path = all_image_paths[0]
img_raw = tf.read_file(img_path)
img_tensor = tf.image.decode_png(
contents=img_raw,
channels=1
)
print(img_tensor.numpy().min())
print(img_tensor.numpy().max())
#### it works fine till here ####
#### trying to make a function ####
#### problems from here ####
def load_and_decode_image(path):
print('[LOG:load_and_decode_image]: ' + str(path))
image = tf.read_file(path)
image = tf.image.decode_png(
contents=image,
channels=3
)
return image
image_path = all_image_paths[0]
label = all_image_labels[0]
image = load_and_decode_image(image_path)
print('[LOG:image.shape]: ' + str(image.shape))
path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
print('shape: ', repr(path_ds.output_shapes))
print('type: ', path_ds.output_types)
print()
print('[LOG:path_ds]:' + str(path_ds))
</code></pre>
<p>If I load only one item it works but when I try to do:</p>
<pre><code>path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
</code></pre>
<p>if I print <code>path_ds.shape</code> it returns <code>shape: TensorShape([])</code>, so it seems that it doesn't work. If I try to continue to follow the tutorial with this block </p>
<pre class="lang-python prettyprint-override"><code>image_ds = path_ds.map(load_and_decode_image, num_parallel_calls=AUTOTUNE)
plt.figure(figsize=(8, 8))
for n, image in enumerate(image_ds.take(4)):
print('[LOG:n, image]: ' + str(n) + ', ' + str(image))
plt.subplot(2, 2, n+1)
plt.imshow(image)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.xlabel(' selection'.encode('utf-8'))
plt.title(label_names[label].title())
plt.show()
</code></pre>
<p>it give me the following error:</p>
<pre><code>It's not possible open ' < string >': The file was not found (file: // /Users/matteo/Documents/GitHub/Cnn_Genetic/cnn_genetic/<string > ).
</code></pre>
<p>but the problem is that I don't know what this file is and why it goes looking for it. I don't need to plot my images, but I want to understand why it doesn't work. If I copy/paste the tutorial code I have the same problem, so I think there's a problem with the new TF version.</p>
<p>So, if anyone can tell me where I'm going wrong, I'd be very grateful.
Thanks for your time.</p>
|
<p>Your issue is that path_ds should be the image paths as strings, but you try to convert them to a list of tensors. </p>
<p>So to get the tensors you only need:</p>
<pre><code>image_ds = all_image_paths.map(load_and_decode_image, num_parallel_calls=AUTOTUNE)
</code></pre>
|
python|tensorflow|dataset|load|local
| 0
|
377,349
| 55,493,358
|
Is it possible to assign an operation to the same variable, which is used in operation?
|
<p>I have an operation in tensorflow which looks like follows:</p>
<p><code>x = tf.where(tf.is_nan(x), tf.zeros_like(x), x)</code></p>
<p>Is this possible, as the operation changes the new variable x continuously, while simultaneously using it for code execution?</p>
|
<p>Yes, it is possible to reuse variable names in Python, regardless of whether TensorFlow is used or not. (The previous tensor associated with <code>x</code> still exists, but cannot be accessed in code via <code>x</code> anymore, as it has been assigned a new value. The <code>x</code> to the left of <code>=</code> is really a different tensor from the <code>x</code> used on the right.)</p>
<p>You could rewrite your code as</p>
<pre><code>y = tf.where(tf.is_nan(x), tf.zeros_like(x), x)
</code></pre>
<p>and it would have the same meaning, as long as, for the rest of the code, you use <code>y</code> instead of <code>x</code>.</p>
|
python|tensorflow
| 0
|
377,350
| 55,307,233
|
Adding name of weekday in pandas plot?
|
<p>here is my code:</p>
<pre><code>df["Created"] = pd.to_datetime(df["Created"])
df.groupby(df.Created.dt.weekday).size().plot(linewidth = 0.4, x_compat=True)
</code></pre>
<p>I would like to show the name of the day on the graph, and also which day the week starts on in pandas.</p>
<p><a href="https://i.stack.imgur.com/CFdwO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CFdwO.png" alt="enter image description here" /></a></p>
|
<p>Try <code>dt.day_name()</code> instead of <code>weekday</code>:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.day_name.html#pandas-series-dt-day-name" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.day_name.html#pandas-series-dt-day-name</a></p>
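<p>For example, a sketch of the grouping with named weekdays, reindexed so the week starts on Monday (this assumes <code>Created</code> has already been converted with <code>pd.to_datetime</code> as in the question):</p>
<pre><code>order = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
counts = df.groupby(df.Created.dt.day_name()).size().reindex(order)
counts.plot(linewidth=0.4)
</code></pre>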
|
python|pandas
| 0
|
377,351
| 55,287,338
|
Extract dataframes from python dictionary
|
<p>I have a python dictionary containing 3 dataframes and nothing else. I need to call each dataframe by dataframe name without using d['']; for example, with the dataframe loopdata1, I need to call it without doing d['loopdata1']. Here's the dictionary with the 3 dataframes loopdata1, loopdata2, and loopdata3:</p>
<pre><code>dict_items([('loopdata1', index id name age sex sterilized
0 0 A006100 Scamp 120 0 1
1 1 A047759 Oreo 120 0 1
2 2 A134067 Bandit 192 0 1
3 3 A141142 Bettie 180 1 1
4 4 A163459 Sasha 180 1 0
5 5 A165752 Pep 180 0 1
6 6 A178569 Boti 180 0 1
7 7 A189592 Ophelia 216 1 1
8 8 A191351 Bri-Bri 192 1 0
9 9 A197810 Sassafrass 168 1 1),
('loopdata2', index id name age sex sterilized
0 0 A200922 Carlos 192 0 1
1 1 A208755 Kootrie 168 0 1
2 2 A210457 Caleb 204 0 1
3 3 A212672 Cujo 156 1 0
4 4 A214991 Prissy 228 1 1
5 5 A215368 Guiness 156 0 1
6 6 A218622 Oliver 180 0 1
7 7 A218624 Cookie 180 0 1
8 8 A221174 Lippy 216 1 1
9 9 A221327 Jamie 192 1 1),
('loopdata3', index id name age sex sterilized
0 0 A249087 *Polly 180 1 1
1 1 A251095 Beauty 168 1 1
2 2 A251214 Rex 144 0 1
3 3 A251268 Sully 204 0 1
4 4 A251402 Amy 216 1 1
5 5 A253939 Dirty 144 1 1
6 6 A254503 Daisy 204 1 1
7 7 A256412 Beau 192 0 0
8 8 A258441 Spring 168 1 1
9 9 A260631 Popki 168 0 1)])
</code></pre>
<p>Here's the code that generated the dictionary -- I'm importing excel files that have the same names as the dataframes and stripping off the '.xlsx':</p>
<pre><code>import os
import glob
import pandas as pd
my_dir = '../test/'
os.chdir( my_dir )
filelist = []
for files in glob.glob( '*.xlsx' ) :
filelist.append(files)
lst = [os.path.splitext(x)[0] for x in filelist]
lst
d = {}
for dfname in lst:
d[dfname] = pd.read_excel(dfname + '.xlsx')
</code></pre>
<p>I've tried <a href="https://stackoverflow.com/questions/42466639/convert-a-dictionary-to-a-pandas-dataframe">Convert a dictionary to a pandas dataframe</a> and <a href="https://stackoverflow.com/questions/34933044/extracting-dataframes-from-a-dictionary-of-dataframes">Extracting dataframes from a dictionary of dataframes</a> with no luck. Thanks for taking a look!</p>
|
<p>You could append the incoming dataframes together instead of adding them to a dict:</p>
<pre><code>first = True
for dfname in lst:
    if first:
        main_df = pd.read_excel(dfname + '.xlsx')
        first = False
    else:
        appending_df = pd.read_excel(dfname + '.xlsx')
        main_df = main_df.append(appending_df)
</code></pre>
|
python|pandas|dataframe|dictionary|for-loop
| 0
|
377,352
| 55,546,832
|
multi task learning using estimates as features
|
<p>I have a multi-task network that has 3 classification heads <code>[A, B, C]</code>.
I want to use the output of head <code>A</code> as input to the first dense layers of <code>B and C</code>.</p>
<p>Does something special need to be done for backpropagation? I think the gradients from <code>B and C</code> shouldn't flow back to <code>A</code>, as it was already calculated and should be treated as a constant.</p>
<p>Does anyone have a code example for something like this?</p>
|
<p>you can try:</p>
<pre><code>A_layer = tf.keras.layers.Dense(5)(x)
A_head= tf.keras.layers.Dense(5)(A_layer)
A_logic = tf.keras.layers.Dense(1)(A_head)
A_loss = tf.losses.sigmoid_cross_entropy(A_y,A_logic)
B_layer = tf.keras.layers.Dense(5)(tf.stop_gradient(A_logic))
B_head= tf.keras.layers.Dense(5)(B_layer)
B_logic = tf.keras.layers.Dense(1)(B_head)
B_loss = tf.losses.sigmoid_cross_entropy(B_y,B_logic)
C_layer = tf.keras.layers.Dense(5)(tf.stop_gradient(A_logic))
C_head= tf.keras.layers.Dense(5)(C_layer)
C_logic = tf.keras.layers.Dense(1)(C_head)
C_loss = tf.losses.sigmoid_cross_entropy(C_y,C_logic)
total_loss = A_loss + B_loss + C_loss
train_op = tf.train.AdamOptimizer().minimize(total_loss)
</code></pre>
|
tensorflow|keras|neural-network
| 0
|
377,353
| 55,552,107
|
How to extract first element of a tuple, which is a column in a dataframe in Python?
|
<p>I have a dataframe as follows,</p>
<pre><code>id text senti_score
1 text A (0.5,1)
2 text B (0.4,0.7)
3 Nan None
4 text c (0.2,0.4)
Expected output,
id text senti_score new_Score
1 text A (0.5,1) 0.5
2 text B (0.4,0.7) 0.4
3 Nan None None
4 text c (0.2,0.4) 0.2
</code></pre>
<p>Please note there are some records which do not have a senti_score and just have "None" in them.</p>
<p>Can someone please help me get this using Python? Thanks in advance.</p>
|
<p>Just use pandas <code>str</code> accessor + <code>.get</code></p>
<pre><code>df['senti_score'].str[0]
</code></pre>
<p>or</p>
<pre><code>df['senti_score'].str.get(0)
</code></pre>
|
python|pandas
| 4
|
377,354
| 55,299,674
|
Getting the list of dates given a string in python
|
<p>I have a string in python which is</p>
<pre><code>date="200601"
</code></pre>
<p>which represents in <code>2006 January</code></p>
<p>How do I get the list of dates in that particular month in the '<code>yyyy-mm-dd</code>' format</p>
<p>and also how do I iterate over <code>200601</code> to <code>201903</code>, i.e. getting all the values between the two, month-wise. </p>
<pre><code>example 200601,200602,200603...
</code></pre>
<p>Thanks and much appreciated.</p>
|
<p>To get all the <code>dates</code> in the <code>month</code> of that <code>year</code> in the format of <code>yyyy-mm-dd</code>:</p>
<pre><code>date = "200601"
import datetime, calendar
num_days = calendar.monthrange(int(date[:-2]), int(date[4:]))[1]
print([datetime.date(int(date[:-2]), int(date[4:]), day).strftime("%Y-%m-%d") for day in range(1, num_days+1)])
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code>['2006-01-01', '2006-01-02', '2006-01-03', '2006-01-04', '2006-01-05', '2006-01-06', '2006-01-07', '2006-01-08', '2006-01-09', '2006-01-10', '2006-01-11', '2006-01-12', '2006-01-13', '2006-01-14', '2006-01-15', '2006-01-16', '2006-01-17', '2006-01-18', '2006-01-19', '2006-01-20', '2006-01-21', '2006-01-22', '2006-01-23', '2006-01-24', '2006-01-25', '2006-01-26', '2006-01-27', '2006-01-28', '2006-01-29', '2006-01-30', '2006-01-31']
</code></pre>
<p>And for the second question:</p>
<p><a href="https://stackoverflow.com/questions/7274267/print-all-day-dates-between-two-dates">Print all day-dates between two dates
</a></p>
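<p>For the second part (iterating month by month from <code>200601</code> to <code>201903</code>), one possible sketch uses pandas' <code>period_range</code>:</p>
<pre><code>import pandas as pd

months = pd.period_range('2006-01', '2019-03', freq='M').strftime('%Y%m').tolist()
print(months[:3], '...', months[-1])   # ['200601', '200602', '200603'] ... '201903'
</code></pre>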
<p><strong>EDIT</strong>:</p>
<p>Continuing from the comments under this answer:</p>
<pre><code>print([datetime.date(int(date[:-2]), int(date[4:]), day).strftime("%Y%d") for day in range(1, num_days+1)])
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code>['200601', '200602', '200603', '200604', '200605', '200606', '200607', '200608', '200609', '200610', '200611', '200612', '200613', '200614', '200615', '200616', '200617', '200618', '200619', '200620', '200621', '200622', '200623', '200624', '200625', '200626', '200627', '200628', '200629', '200630', '200631']
</code></pre>
|
python|pandas|datetime
| 1
|
377,355
| 55,339,923
|
Column names of xlsx file are not retained in the converted csvs
|
<p>I am fetching data from a multisheet xlsx file and storing the data in separate csv files. The first rows of all the sheets in the xlsx are stored in the first csv, the 2nd rows of all the sheets are stored in the 2nd csv, and so on. For that I wrote the following code, which works:</p>
<pre><code>xls = xlrd.open_workbook(r'Smallys ORDER.xlsx', on_demand=True)
df_list = []
names = xls.sheet_names()
#print(names)
#print('-'*80)
names.remove('EVENT')
#print(names)
for i in range(191):
rows = []
for name in names:
count = 0
prod = pd.read_excel('Smallys ORDER.xlsx', name, index_col=None)
#print(prod)
try:
item = prod.iloc[i]
print(item)
rows.append(item)
#items = item.concat(item)
#print(items)
#prod.to_csv(item + '.csv', encoding='utf-8', index=False)
#print('-'*80)
except:
print('Row finished !!!')
writer = csv.writer(open('/home/hp/products/' + 'prod['+str(i)+'].csv', 'w'))
writer.writerows(rows)
</code></pre>
<p><strong>This code does not retain the column names of the xlsx file (same for all the sheets) in the csvs.</strong></p>
|
<p>You have to explicitly write the column names when you use a <code>csv.writer</code>. It is enough to use the column names from the last sheet:</p>
<pre><code>writer = csv.writer(open('/home/hp/products/' + 'prod['+str(i)+'].csv', 'w'))
writer.writerow(prod.columns.tolist())
writer.writerows(rows)
</code></pre>
|
python|pandas|csv|dataframe|xlsx
| 2
|
377,356
| 55,438,811
|
OSError: [WinError 193] %1 is not a valid Win32 application Unable to Get Python to Import Libraries
|
<p>I have tried for 2 days to install Python 64-bit on my 64-bit Windows
10 PC. However, I tried Python, Anaconda, one user/ all users,
installation in C: Root/ Program Files/ etc. But I am unable to get
around this error. After online research it is some issue related to
Python not finding the 64-bit DLLs, but I couldn't locate how to
resolve this.</p>
<pre><code>import tensorflow as tf Traceback (most recent call last): File "<stdin>", line 1, in <module> File
"C:\Anaconda\envs\mango\lib\site-packages\tensorflow\__init__.py",
line 24, in <module> from tensorflow.python import pywrap_tensorflow
pylint: disable=unused-import File "C:\Anaconda\envs\mango\lib\site-packages\tensorflow\python\__init__.py",
line 47, in <module> import numpy as np File "C:\Users\Abhinav
Pandey\AppData\Roaming\Python\Python36\site-packages\numpy\__init__.py",
line 142, in <module> from . import core File "C:\Users\Abhinav
Pandey\AppData\Roaming\Python\Python36\site-packages\numpy\core\__init__.py",
line 23, in <module> WinDLL(os.path.abspath(filename))File
"C:\Anaconda\envs\mango\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode) OSError: [WinError 193] %1 is
not a valid Win32 application
</code></pre>
<p>Python works fine (mango) </p>
<pre><code>C:\Users\Abhinav Pandey>python Python 3.6.8 |Anaconda, Inc.| (default, Feb 21 2019, 18:30:04) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for
more information.
</code></pre>
<p>pip install works fine; I could install pandas, but when I import it, it gives the same error (mango) </p>
<pre><code>C:\Users\Abhinav Pandey>pip install pandas
Collecting pandas Using cached
https://files.pythonhosted.org/packages/d0/4e/9db3468e504ac9aeadb37eb32bcf0a74d063d24ad1471104bd8a7ba20c97/pandas-0.24.2-cp36-cp36m-win_amd64.whl
Collecting pytz>=2011k (from pandas) Using cached
https://files.pythonhosted.org/packages/61/28/1d3920e4d1d50b19bc5d24398a7cd85cc7b9a75a490570d5a30c57622d34/pytz-2018.9-py2.py3-none-any.whl
Collecting python-dateutil>=2.5.0 (from pandas) Using cached
https://files.pythonhosted.org/packages/41/17/c62faccbfbd163c7f57f3844689e3a78bae1f403648a6afb1d0866d87fbb/python_dateutil-2.8.0-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.12.0 in c:\users\abhinav
pandey\appdata\roaming\python\python36\site-packages (from pandas)
(1.16.2) Requirement already satisfied: six>=1.5 in
c:\anaconda\envs\mango\lib\site-packages (from
python-dateutil>=2.5.0->pandas) (1.12.0) Installing collected
packages: pytz, python-dateutil, pandas Successfully installed
pandas-0.24.2 python-dateutil-2.8.0 pytz-2018.9
import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> File
"C:\Anaconda\envs\mango\lib\site-packages\pandas\__init__.py", line
13, in ?<module>
__import__(dependency) File "C:\Users\Abhinav Pandey\AppData\Roaming\Python\Python36\site-packages\numpy\__init__.py",
line 142, in <module> from . import core File "C:\Users\Abhinav
Pandey\AppData\Roaming\Python\Python36\site-packages\numpy\core\__init__.py",
line 23, in <module> WinDLL(os.path.abspath(filename)) File
"C:\Anaconda\envs\mango\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode) OSError: [WinError 193] %1 is
not a valid Win32 application
</code></pre>
|
<p>I had the same issue.
I wasn't using Anaconda, just the latest distribution. But re-installing Python in a different location ( I had it on C:/Python37) fixed the issue.</p>
<p>Not sure why, but I tried on the predefined location for the installer and in C:/Program Files/Python37, and both worked for me.</p>
|
python|tensorflow|windows-10|anaconda|importerror
| 0
|
377,357
| 55,189,818
|
Getting unique values from pandas column of 2d array cells
|
<p>I have a pandas DataFrame where each cell in a column is a 2d array of items.</p>
<p>EX: Observation 1 has column <code>items</code> with values <code>['Baseball', 'Glove','Snack']</code></p>
<p>When I use <code>.unique</code> on the individual cells, each cell gets analyzed based on the whole array's value, not the individual values in the array.</p>
<p>How can I iterate through each array in each cell to determine the true unique amount of items in the column? Thanks</p>
<pre><code> Items
0 ['Baseball', 'Hockey Stick', 'Mit']
1 ['Mit', 'Tennis Racket']
2 ['Baseball', 'Helmet']
</code></pre>
<p>These all return as unique values, I would like to get the unique count for each value in each list.</p>
|
<p>I would use <code>chain</code> from <code>itertools</code> together with <code>set</code>s to solve the problem as follows.</p>
<pre><code># you have a dataframe called data with the column items.
from itertools import chain
unique_lists_in_items = data.items.unique().tolist()
set_of_items = set(chain(*unique_lists_in_items))
</code></pre>
<p><code>set_of_items</code> is what you want.</p>
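<p>If the cells are plain Python lists (which are unhashable, so calling <code>.unique()</code> on the column may raise a TypeError), a variant that skips <code>.unique()</code> and also gives a per-item count could look like this, assuming the column is named <code>'Items'</code> as in the question:</p>
<pre><code>from collections import Counter
from itertools import chain

counts = Counter(chain.from_iterable(data['Items']))   # count of every individual item
unique_items = set(counts)                              # the true set of unique items
print(len(unique_items), counts.most_common(3))
</code></pre>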
|
python|pandas
| 0
|
377,358
| 55,207,756
|
RuntimeWarning: Invalid value encountered in less xa[xa < 0] = -1 (Geopandas)
|
<h3>Problem</h3>
<p>I'm trying to plot crime data in each district using <strong>geopandas</strong>. I have merged <code>shapefile</code> data and crime data:</p>
<pre><code>merged = merged[['geometry','Extortion']]
merged.head()
</code></pre>
<p><a href="https://i.stack.imgur.com/MILS3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MILS3.jpg" alt="merged data"></a></p>
<h3>Attempt</h3>
<p>Then, I tried to plot crime data on top of the map:</p>
<pre><code>variable = 'Extortion'
vmin, vmax = 120, 220
fig, ax = plt.subplots(1, figsize=(20, 10))
merged.plot(variable, cmap='Blues', linewidth=0.8, ax=ax, edgecolor='0.8')
</code></pre>
<h3>Error</h3>
<pre><code>C:\Users\Navoda\Anaconda3\lib\site-packages\matplotlib\colors.py:504:
RuntimeWarning: invalid value encountered in less
xa[xa < 0] = -1
</code></pre>
<p>Without the parameter 'variable', it loads the base map, so the problem is with the variable. I tried switching off warnings as most of the posts suggested; it still didn't load the crime data. </p>
<p>I checked the error location. But, I couldn't figure out the reason.</p>
<h3>Code</h3>
<pre><code>if xa.dtype.kind == "f":
xa *= self.N
# Negative values are out of range, but astype(int) would truncate
# them towards zero.
xa[xa < 0] = -1
# xa == 1 (== N after multiplication) is not out of range.
xa[xa == self.N] = self.N - 1
# Avoid converting large positive values to negative integers.
np.clip(xa, -1, self.N, out=xa)
xa = xa.astype(int)
</code></pre>
<p>Note: Extortion column does not have <code>NaN</code> values.</p>
<p>How do I solve this problem? </p>
|
<p>I got this same error, and on examination it was caused by polygons in my shapefile not being represented in the data, so it just required replacement of the NaNs which were created on the merge, e.g.</p>
<pre><code>merged['Extortion']=merged['Extortion'].fillna(0)
</code></pre>
|
python|maps|geopandas
| 1
|
377,359
| 9,792,925
|
how to speed up enumerate for numpy array / how to enumerate over numpy array efficiently?
|
<p>I need to generate a lot of random numbers. I've tried using <code>random.random</code> but this function is quite slow. Therefore I switched to <code>numpy.random.random</code> which is way faster! So far so good. The generated random numbers are actually used to calculate some thing (based on the number). I therefore <code>enumerate</code> over each number and replace the value. This seems to kill all my previously gained speedup. Here are the stats generated with <code>timeit()</code>:</p>
<pre><code>test_random - no enumerate
0.133111953735
test_np_random - no enumerate
0.0177130699158
test_random - enumerate
0.269361019135
test_np_random - enumerate
1.22525310516
</code></pre>
<p>As you can see, generating the numbers is almost 10 times faster using numpy, but enumerating over those numbers gives me equal run times.</p>
<p>Below is the code that I'm using:</p>
<pre><code>import numpy as np
import timeit
import random
NBR_TIMES = 10
NBR_ELEMENTS = 100000
def test_random(do_enumerate=False):
y = [random.random() for i in range(NBR_ELEMENTS)]
if do_enumerate:
for index, item in enumerate(y):
# overwrite the y value, in reality this will be some function of 'item'
y[index] = 1 + item
def test_np_random(do_enumerate=False):
y = np.random.random(NBR_ELEMENTS)
if do_enumerate:
for index, item in enumerate(y):
# overwrite the y value, in reality this will be some function of 'item'
y[index] = 1 + item
if __name__ == '__main__':
from timeit import Timer
t = Timer("test_random()", "from __main__ import test_random")
print "test_random - no enumerate"
print t.timeit(NBR_TIMES)
t = Timer("test_np_random()", "from __main__ import test_np_random")
print "test_np_random - no enumerate"
print t.timeit(NBR_TIMES)
t = Timer("test_random(True)", "from __main__ import test_random")
print "test_random - enumerate"
print t.timeit(NBR_TIMES)
t = Timer("test_np_random(True)", "from __main__ import test_np_random")
print "test_np_random - enumerate"
print t.timeit(NBR_TIMES)
</code></pre>
<p>What's the best way to speed this up and why does <code>enumerate</code> slow things down so dramatically?</p>
<p><strong>EDIT:</strong> the reason I use <code>enumerate</code> is because I need both the index and the value of the current element.</p>
|
<p>To take full advantage of numpy's speed, you want to create <a href="http://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="nofollow noreferrer">ufuncs</a> whenever possible. Applying <code>vectorize</code> to a function as <a href="https://stackoverflow.com/a/9793070/577088">mgibsonbr</a> suggests is one way to do that, but a better way, if possible, is simply to construct a function that takes advantage of numpy's built-in ufuncs. So something like this:</p>
<pre><code>>>> import numpy
>>> a = numpy.random.random(10)
>>> a + 1
array([ 1.29738145, 1.33004628, 1.45825441, 1.46171177, 1.56863326,
1.58502855, 1.06693054, 1.93304272, 1.66056379, 1.91418473])
>>> (a + 1) * 0.25 / 4
array([ 0.08108634, 0.08312789, 0.0911409 , 0.09135699, 0.09803958,
0.09906428, 0.06668316, 0.12081517, 0.10378524, 0.11963655])
</code></pre>
<p>What is the nature of the function you want to apply across the numpy array? If you tell us, perhaps we can help you come up with a version that uses only numpy ufuncs.</p>
<p>It's also possible to generate an array of indices without using <code>enumerate</code>. Numpy provides <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndenumerate.html" rel="nofollow noreferrer"><code>ndenumerate</code></a>, which is an iterator, and probably slower, but it also provides <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.indices.html" rel="nofollow noreferrer"><code>indices</code></a>, which is a very quick way to generate the indices corresponding to the values in an array. So...</p>
<pre><code>>>> numpy.indices(a.shape)
array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
</code></pre>
<p>So to be more explicit, you can use the above and combine them using <code>numpy.rec.fromarrays</code>:</p>
<pre><code>>>> a = numpy.random.random(10)
>>> ind = numpy.indices(a.shape)
>>> numpy.rec.fromarrays([ind[0], a])
rec.array([(0, 0.092473494150913438), (1, 0.20853257641948986),
(2, 0.35141455604686067), (3, 0.12212258656960817),
(4, 0.50986868372639049), (5, 0.0011439325711705139),
(6, 0.50412473457942508), (7, 0.28973489788728601),
(8, 0.20078799423168536), (9, 0.34527678271856999)],
dtype=[('f0', '<i8'), ('f1', '<f8')])
</code></pre>
<p>It's starting to sound like your main concern is performing the operation in-place. That's harder to do using <code>vectorize</code> but it's easy with the ufunc approach:</p>
<pre><code>>>> def somefunc(a):
... a += 1
... a /= 15
...
>>> a = numpy.random.random(10)
>>> b = a
>>> somefunc(a)
>>> a
array([ 0.07158446, 0.07052393, 0.07276768, 0.09813235, 0.09429439,
0.08561703, 0.11204622, 0.10773558, 0.11878885, 0.10969279])
>>> b
array([ 0.07158446, 0.07052393, 0.07276768, 0.09813235, 0.09429439,
0.08561703, 0.11204622, 0.10773558, 0.11878885, 0.10969279])
</code></pre>
<p>As you can see, numpy performs these operations in-place.</p>
|
python|numpy
| 6
|
377,360
| 10,181,151
|
trying to get reasonable values from scipy powerlaw fit
|
<p>I'm trying to fit some data from a simulation code I've been running in order to figure out a power law dependence. When I plot a linear fit, the data does not fit very well. </p>
<p>Here's the python script I'm using to fit the data:</p>
<pre><code>#!/usr/bin/env python
from scipy import optimize
import numpy
xdata=[ 0.00010851, 0.00021701, 0.00043403, 0.00086806, 0.00173611, 0.00347222]
ydata=[ 29.56241016, 29.82245508, 25.33930469, 19.97075977, 12.61276074, 7.12695312]
fitfunc = lambda p, x: p[0] + p[1] * x ** (p[2])
errfunc = lambda p, x, y: (y - fitfunc(p, x))
out,success = optimize.leastsq(errfunc, [1,-1,-0.5],args=(xdata, ydata),maxfev=3000)
print "%g + %g*x^%g"%(out[0],out[1],out[2])
</code></pre>
<p>the output I get is:
-71205.3 + 71174.5*x^-9.79038e-05</p>
<p>While on the plot the fit looks about as good as you'd expect from a leastsquares fit, the form of the output bothers me. I was hoping the constant would be close to where you'd expect the zero to be (around 30). And I was expecting to find a power dependence of a larger fraction than 10^-5. </p>
<p>I've tried rescaling my data and playing with the parameters to optimize.leastsq with no luck. Is what I'm trying to accomplish possible or does my data just not allow it? The calculation is expensive, so getting more data points is non-trivial.</p>
<p>Thanks!</p>
|
<p>It is much better to first take the logarithm, then use <code>leastsq</code> to fit to this linear equation, which will give you a much better fit. There is a great example in the <a href="http://www.scipy.org/Cookbook/FittingData">scipy cookbook</a>, which I've adapted below to fit your code.</p>
<p>The best fits like this are: amplitude = 0.8955, and index = -0.40943265484</p>
<p>As we can see from the graph (and your data), if its a power law fit we would not expect the amplitude value to be near <code>30</code>. As in the power law equation <code>f(x) == Amp * x ** index</code>, so with a negative index: <code>f(1) == Amp</code> and <code>f(0) == infinity</code>.</p>
<p><img src="https://i.stack.imgur.com/bWiVi.png" alt="enter image description here"></p>
<pre><code>from pylab import *
from scipy import *
from scipy import optimize
xdata=[ 0.00010851, 0.00021701, 0.00043403, 0.00086806, 0.00173611, 0.00347222]
ydata=[ 29.56241016, 29.82245508, 25.33930469, 19.97075977, 12.61276074, 7.12695312]
logx = log10(xdata)
logy = log10(ydata)
# define our (line) fitting function
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y: (y - fitfunc(p, x))
pinit = [1.0, -1.0]
out = optimize.leastsq(errfunc, pinit,
args=(logx, logy), full_output=1)
pfinal = out[0]
covar = out[1]
index = pfinal[1]
amp = 10.0**pfinal[0]
print 'amp:',amp, 'index', index
powerlaw = lambda x, amp, index: amp * (x**index)
##########
# Plotting data
##########
clf()
subplot(2, 1, 1)
plot(xdata, powerlaw(xdata, amp, index)) # Fit
plot(xdata, ydata)#, yerr=yerr, fmt='k.') # Data
text(0.0020, 30, 'Ampli = %5.2f' % amp)
text(0.0020, 25, 'Index = %5.2f' % index)
xlabel('X')
ylabel('Y')
subplot(2, 1, 2)
loglog(xdata, powerlaw(xdata, amp, index))
plot(xdata, ydata)#, yerr=yerr, fmt='k.') # Data
xlabel('X (log scale)')
ylabel('Y (log scale)')
savefig('power_law_fit.png')
show()
</code></pre>
|
python|numpy|scipy|curve-fitting|least-squares
| 8
|
377,361
| 7,160,162
|
Left Matrix Division and Numpy Solve
|
<p>I am trying to convert code that contains the \ operator from Matlab (Octave) to Python. Sample code</p>
<pre class="lang-matlab prettyprint-override"><code>B = [2;4]
b = [4;4]
B \ b
</code></pre>
<p>This works and produces 1.2 as an answer. Using this web page</p>
<p><a href="http://mathesaurus.sourceforge.net/matlab-numpy.html" rel="nofollow noreferrer">http://mathesaurus.sourceforge.net/matlab-numpy.html</a></p>
<p>I translated that as:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print lin.solve(B,b)
</code></pre>
<p>This gave me an error:</p>
<pre><code>numpy.linalg.linalg.LinAlgError: Array must be square
</code></pre>
<p>How come Matlab \ works with non square matrix for B?</p>
<p>Any solutions for this?</p>
|
<p>From <a href="http://www.mathworks.com/help/techdoc/ref/mldivide.html" rel="noreferrer">MathWorks documentation</a> for left matrix division:</p>
<blockquote>
<p>If A is an m-by-n matrix with m ~= n and B is a column vector with m
components, or a matrix with several such columns, then X = A\B is the
solution in the least squares sense to the under- or overdetermined
system of equations AX = B. In other words, X minimizes norm(A*X - B),
the length of the vector AX - B.</p>
</blockquote>
<p>The equivalent in numpy is <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html#numpy-linalg-lstsq" rel="noreferrer">np.linalg.lstsq</a>:</p>
<pre><code>In [15]: B = np.array([[2],[4]])
In [16]: b = np.array([[4],[4]])
In [18]: x,resid,rank,s = np.linalg.lstsq(B,b)
In [19]: x
Out[19]: array([[ 1.2]])
</code></pre>
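<p>If you want the single scalar that Octave printed, it is just the one entry of <code>x</code> (using the example above):</p>
<pre><code>print(x[0, 0])   # 1.2, the same value as B \ b in Octave
</code></pre>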
|
python|matlab|numpy|octave|linear-algebra
| 20
|
377,362
| 56,793,012
|
Explanation of an implementation of the categorical_crossentropy
|
<p>The formula for the categorical cross-entropy is the following. </p>
<p><a href="https://i.stack.imgur.com/arjPt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/arjPt.png" alt="enter image description here"></a></p>
<p>What should the output of the last layer be? Should it be the probabilities of classes from a softmax layer? </p>
<p>What is the target? </p>
<p>How does the following code implement 1/N, the summation and p<sub>i,j</sub>?</p>
<pre><code>def categorical_crossentropy(output, target, from_logits=False):
"""Categorical crossentropy between an output tensor and a target tensor.
# Arguments
output: A tensor resulting from a softmax
(unless `from_logits` is True, in which
case `output` is expected to be the logits).
target: A tensor of the same shape as `output`.
from_logits: Boolean, whether `output` is the
result of a softmax, or is a tensor of logits.
# Returns
Output tensor.
"""
# Note: tf.nn.softmax_cross_entropy_with_logits
# expects logits, Keras expects probabilities.
if not from_logits:
# scale preds so that the class probas of each sample sum to 1
output /= tf.reduce_sum(output,
reduction_indices=len(output.get_shape()) - 1,
keep_dims=True)
# manual computation of crossentropy
epsilon = _to_tensor(_EPSILON, output.dtype.base_dtype)
output = tf.clip_by_value(output, epsilon, 1. - epsilon)
return - tf.reduce_sum(target * tf.log(output),
reduction_indices=len(output.get_shape()) - 1)
else:
return tf.nn.softmax_cross_entropy_with_logits(labels=target,
logits=output)
</code></pre>
|
<blockquote>
<p>What should the output of the last layer be? Should it be the probabilities of classes from a softmax layer?</p>
</blockquote>
<p>It can be either the output of the softmax layer or the raw <a href="https://en.wikipedia.org/wiki/Logit" rel="nofollow noreferrer">logits</a> (input to the softmax layer). The output vector of the softmax layer are the probabilities of each class. If <code>output</code> is the output of softmax then set <code>from_logits=False</code>. If <code>output</code> are the logits then you want to set <code>from_logits=True</code>. You can see internally that <a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>tf.nn.softmax_cross_entropy_with_logits</code></a> is called, which computes the softmax probabilities and the cross-entropy function at the same time. Computing them together allows for some math tricks for numerical stability.</p>
<blockquote>
<p>What is the target?</p>
</blockquote>
<p>The target is a one-hot vector. This means that a number <code>n</code> is represented by a vector <code>v</code> where <code>v[n] = 1</code> and <code>0</code> everywhere else. Here <code>n</code> is the class of the label. There is a function to get this encoding in TensoFlow called <a href="https://www.tensorflow.org/api_docs/python/tf/one_hot" rel="nofollow noreferrer"><code>tf.one_hot</code></a>. For example <code>tf.one_hot([3],5)</code> would result in the vector <code>[0, 0, 1, 0, 0]</code>.</p>
<blockquote>
<p>How does the following code implement 1/N, the summation and pi,j?</p>
</blockquote>
<p>The code above does not average over all the inputs (no need for the "1/N"). For example, if the input is shaped <code>[10, 5]</code> the output would be shaped <code>[10]</code>. You would have to call <code>tf.reduce_mean</code> on the result. So the equation is essentially: </p>
<p><a href="https://i.stack.imgur.com/jCDtF.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jCDtF.gif" alt="modified equation"></a></p>
<p>The above equation is implemented in the line</p>
<pre><code>return - tf.reduce_sum(target * tf.log(output),
reduction_indices=len(output.get_shape()) - 1)
</code></pre>
<p>The "Σ" is <code>tf.reduce_sum</code>. "pi,j" is <code>output</code>, the indicator function (i.e. the bolded 1) is the one-hot encoded <code>target</code>.</p>
<h3>Side Note</h3>
<p>You should use the <a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2" rel="nofollow noreferrer"><code>tf.softmax_cross_entropy_with_logits_v2</code></a>, because the code you provided (when setting <code>from_logits=False</code>) could result in numerical errors. The combined function takes care of all of those numerical issues.</p>
|
tensorflow|keras
| 3
|
377,363
| 56,530,706
|
How to change strings in dataframe to date time values?
|
<p>I have a column in a pandas dataframe of strings representing dates in the form</p>
<pre><code> Year-day hour:minute:second.microsecond
</code></pre>
<p>Except the day is written as a single number from 0-364. For example, the date<code>2019-040 04:00:00:000000</code> represents 4 am on february 9, 2019. How do I convert these values to date time instances?</p>
|
<p>You can use <code>datetime</code> and <code>strptime</code> to achieve this. The <code>%j</code> <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">directive</a> allows you to enter the zero-padded day number of the year:</p>
<pre class="lang-py prettyprint-override"><code>import datetime as dt
date_time = '2019-040 04:00:00:000000'
dt.datetime.strptime(date_time, '%Y-%j %H:%M:%S:%f')
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>datetime.datetime(2019, 2, 9, 4, 0)
</code></pre>
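<p>To convert a whole DataFrame column at once, <code>pd.to_datetime</code> accepts the same format string. A sketch, assuming the strings live in a (hypothetical) column called <code>'date_str'</code>:</p>
<pre><code>import pandas as pd

df['date'] = pd.to_datetime(df['date_str'], format='%Y-%j %H:%M:%S:%f')
</code></pre>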
|
python|pandas
| 2
|
377,364
| 56,605,403
|
Getting value from dataframe based on Series with column name
|
<p>I have a DataFrame with date indexes and column names.</p>
<pre><code>df1 =
date col1 col2 col3
20190101 1 2 3
20190102 6 5 4
20190103 -7 -9 -8
20190104 4 9 8
</code></pre>
<p>and then I have a Series with a subset of the indexes, whose values are the name of the column with the highest value above 0 for each row. Like this.</p>
<pre><code>max_col =
20190101 col3
20190102 col1
20190104 col2
</code></pre>
<p>I would like a result with the value of the column names in max_col.</p>
<pre><code>max_val =
20190101 3
20190102 6
20190104 9
</code></pre>
<p>I have tried <code>df1.loc[max_col]</code> and <code>df1.at[max_col]</code>, with the closest being <code>df1[max_col]</code>. But this creates a matrix instead of a vector response. </p>
<p>Any ideas?</p>
<p>EDIT: </p>
<p>The values in <code>max_col</code> comes from another DataFrame <code>df2</code>.
So the <code>max_val</code> does not have to be the max values from df1. My mistake. </p>
<p>The solution with <code>df1.lookup(max_col.index, max_col)</code> worked great. </p>
|
<p>In your initial df, just use <code>df.max(1)</code>. Then, filter out all values that are negative.</p>
<pre><code>s = df.max(1)
s[s>0]
</code></pre>
<p>returns</p>
<pre><code>date
20190101 3
20190102 6
20190104 9
dtype: int64
</code></pre>
<hr>
<p>But if you really want to use your <code>max_col</code> series, you are looking at the definition of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.lookup.html" rel="nofollow noreferrer"><code>df.lookup</code></a></p>
<pre><code>df.lookup(max_col.index, max_col)
</code></pre>
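<p>Applied to the example frames above, the lookup returns one value per entry of <code>max_col</code>; you can wrap it back into a Series if you want to keep the dates as the index (a sketch):</p>
<pre><code>max_val = pd.Series(df1.lookup(max_col.index, max_col), index=max_col.index)
# 20190101    3
# 20190102    6
# 20190104    9
</code></pre>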
|
python|pandas|dataframe
| 0
|
377,365
| 56,801,681
|
How to create chunking of numpy.linspace() for large scale data
|
<p>I'm getting a memory error because the resulting data is too large, since I am using a size of over a billion.</p>
<p>What approach can I use to chunk the data?</p>
|
<p>If what you want is to store all the resulting data, you can first store the chunked data with h5py; for reference see <a href="http://docs.h5py.org/en/stable/" rel="nofollow noreferrer">http://docs.h5py.org/en/stable/</a>. Please elaborate on your question.</p>
<p>First, try to create a list of chunk sizes covering the total size of your linspace.</p>
<p>Sample code, where size is the total length of your linspace and limit is the chunk size:</p>
<pre><code>def create_list_sample_size(size, limit):
    # split the total size into chunks of at most `limit` elements
    list_sample_size = []
    while True:
        if size > limit:
            list_sample_size.append(limit)
            size = size - limit
        else:
            list_sample_size.append(size)
            break
    return list_sample_size
</code></pre>
<p>Then create your own linspace method where you compute the chunked version:</p>
<pre><code>import numpy as np

def generate_linspace(list_sample_size, start, stop, dtype=float):
    # total number of steps across all chunks (mirrors np.linspace with endpoint=True)
    length = sum(list_sample_size) - 1
    delta = stop - start
    low_range = 0
    for sample in list_sample_size:
        high_range = low_range + sample
        _sample = np.arange(low_range, high_range, dtype=dtype)
        step = delta / length
        if step == 0:
            _sample *= delta
        else:
            _sample *= step
        yield start + _sample  # one chunk of the full linspace
        low_range = high_range
</code></pre>
|
python-3.x|numpy
| 1
|
377,366
| 56,452,840
|
French Character Turn Into Question Marks; Pandas
|
<p>I have a csv file which contains French characters/accents, including É, ê, è etc., referring to some French city and street names. I have tried several encoding options on the read_csv and to_csv functions in Pandas, including:</p>
<pre><code> df=pd.read_csv(FilePath, encoding='latin-1' )
</code></pre>
<p>also:</p>
<pre><code>encoding='utf-8'
encoding='latin-1'
encoding='utf-8-sig'
encoding='iso-8859-1'
</code></pre>
<p>I have also tried not specifying any encoding.</p>
<p>I am using Python 2.7 and the Pandas Module. I have read that Python 3 does better with encoding but that is not currently an option.</p>
<p>The French characters turn into question marks (?) when the output file is opened in Excel or Notepad++, and now, due to trying to fix that issue, they already start as question marks when I read in the original file or when I open that original file in Excel or Notepad++. Before, they showed up as normal French characters.</p>
<p>Example data and code:</p>
<pre><code>City Address1_Particule Address1_Street Address1_StreetType
Montr? V Des BRISES DU FLEUVE ALL?
Montr? V Des BRISES DU FLEUVE ALL?
Montr? V Des BRISES DU FLEUVE ALL?
Montr? V Des BRISES DU FLEUVE ALL?
#create dataframe
df=pd.read_csv(FilePath, encoding='latin-1' )
for streetType in StreetTypeList:
for pretype in StreePreTypeList:
df[pretype]=''
# Change street type french from short to long form and into new column
df.loc[dfCAS[streetType]=='AV', [pretype]]='AVENUE'
df.loc[dfCAS[streetType]=='AVE', [pretype]]='AVENUE'
df.loc[dfCAS[streetType]=='BOUL', [pretype]]='BOULEVARD'
df.loc[dfCAS[streetType]=='CH', [pretype]]='CHEMIN'
df.to_csv(OutputPath, encoding='latin-1')
</code></pre>
<p>I hope to create an output csv file where the French characters display properly.</p>
<p>Thank you for any help!</p>
|
<p>This should work</p>
<pre><code>df = pd.read_excel(FilePath, encoding='latin1')
</code></pre>
|
python|pandas|encoding|character-encoding
| 1
|
377,367
| 56,699,048
|
How to get the filename of a sample from a DataLoader?
|
<p>I need to write a file with the results of testing a Convolutional Neural Network that I trained. The data come from a speech data collection. The file format needs to be "file name, prediction", but I am having a hard time extracting the file name. I load the data like this:</p>
<pre class="lang-py prettyprint-override"><code>import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
TEST_DATA_PATH = ...
trans = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
test_dataset = torchvision.datasets.MNIST(
root=TEST_DATA_PATH,
train=False,
transform=trans,
download=True
)
test_loader = DataLoader(dataset=test_dataset, batch_size=1, shuffle=False)
</code></pre>
<p>and I am trying to write to the file as follows:</p>
<pre><code>f = open("test_y", "w")
with torch.no_grad():
for i, (images, labels) in enumerate(test_loader, 0):
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
file = os.listdir(TEST_DATA_PATH + "/all")[i]
format = file + ", " + str(predicted.item()) + '\n'
f.write(format)
f.close()
</code></pre>
<p>The problem with <code>os.listdir(TEST_DATA_PATH + "/all")[i]</code> is that it is not synchronized with the order of the files loaded by <code>test_loader</code>. What can I do?</p>
|
<p>Well, it depends on how your <code>Dataset</code> is implemented. For instance, in the <code>torchvision.datasets.MNIST(...)</code> case, you cannot retrieve the filename simply because there is no such thing as the filename of a single sample (MNIST samples are <a href="https://github.com/pytorch/vision/blob/master/torchvision/datasets/mnist.py#L79-L99" rel="noreferrer">loaded in a different way</a>).</p>
<p>As you did not show your <code>Dataset</code> implementation, I'll tell you how this could be done with the <code>torchvision.datasets.ImageFolder(...)</code> (or any <a href="https://github.com/pytorch/vision/blob/ec203153095ad3d2e79fbf2865d80fe6076618fa/torchvision/datasets/folder.py#L57-L147" rel="noreferrer"><code>torchvision.datasets.DatasetFolder(...)</code></a>):</p>
<pre class="lang-py prettyprint-override"><code>f = open("test_y", "w")
with torch.no_grad():
for i, (images, labels) in enumerate(test_loader, 0):
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
sample_fname, _ = test_loader.dataset.samples[i]
f.write("{}, {}\n".format(sample_fname, predicted.item()))
f.close()
</code></pre>
<p>You can see that the path of the file is retrieved during the <a href="https://github.com/pytorch/vision/blob/ec203153095ad3d2e79fbf2865d80fe6076618fa/torchvision/datasets/folder.py#L129" rel="noreferrer"><code>__getitem__(self, index)</code></a>, specifically <a href="https://github.com/pytorch/vision/blob/ec203153095ad3d2e79fbf2865d80fe6076618fa/torchvision/datasets/folder.py#L137" rel="noreferrer">here</a>.</p>
<p>If you implemented your own <code>Dataset</code> (and perhaps would like to support <code>shuffle</code> and <code>batch_size > 1</code>), then I would return the <code>sample_fname</code> on the <code>__getitem__(...)</code> call and do something like this:</p>
<pre class="lang-py prettyprint-override"><code>for i, (images, labels, sample_fname) in enumerate(test_loader, 0):
# [...]
</code></pre>
<p>This way you wouldn't need to care about <code>shuffle</code>. And if the <code>batch_size</code> is greater than 1, you would need to change the content of the loop for something more generic, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>f = open("test_y", "w")
for i, (images, labels, samples_fname) in enumerate(test_loader, 0):
outputs = model(images)
pred = torch.max(outputs, 1)[1]
f.write("\n".join([
", ".join(x)
for x in zip(map(str, pred.cpu().tolist()), samples_fname)
]) + "\n")
f.close()
</code></pre>
|
python|machine-learning|pytorch|torchvision
| 8
|
377,368
| 56,763,226
|
Finding the mean on multiple fields
|
<p>I am trying to figure out a way to code something specific in Python. I am working with a csv data set that has the columns: age, sex, bmi, charges, smoker, number of children. My question is: is there a way to find the mean of BMI where the sex is equal to male or female? </p>
<p>I understand that using pandas the following will give me the mean of all columns:</p>
<pre class="lang-py prettyprint-override"><code>mean_age = df["age"].mean()
</code></pre>
<p>I have tried (which I did not think would work):</p>
<pre class="lang-py prettyprint-override"><code>mean_age = df["age"].mean(on "sex" = "male")
</code></pre>
<p>as well as </p>
<pre class="lang-py prettyprint-override"><code>mean_age = df["age"].mean("sex" = "male")
</code></pre>
<p>and </p>
<pre class="lang-py prettyprint-override"><code>mean_age = df["age"].mean(where( "sex") = "male")
</code></pre>
<p>I was wondering if I could code something along the lines of the mean on other columns.</p>
|
<p>I have found a way to group that gives me mean and counts on multiple fields:</p>
<pre><code>df.groupby(["sex"]).agg(["mean", "count"])
</code></pre>
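<p>If you only need the mean BMI per sex rather than every column, a smaller sketch would be:</p>
<pre><code>df.groupby("sex")["bmi"].mean()
</code></pre>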
|
python|pandas|pandas-groupby
| 0
|
377,369
| 56,854,774
|
Remove Duplicates and Filter Dataframe
|
<p>I'm working on a simulator that has a marketplace where providers put offers and consumers bid. The concept is rather simple.</p>
<p>Based on consumers and providers offers and preferences, I create a dataframe sorted by the Euclidean distance between the consumer preference and the provider offer to maximize the consumer utility. This is one example: (the last column is not part of the dataframe)</p>
<pre><code> consumerId providerId capacity price quality distance
9 1003 2001 1 0.815317 0.814237 0.769884 <-
4 1002 2001 1 0.815317 0.814237 0.586566 dup p,q
8 1003 2000 1 0.278722 0.064698 0.551566 dup Id
14 1003 2002 1 0.342255 0.069247 0.488291 dup Id
6 1003 2000 1 0.710141 0.503366 0.474249 dup Id
12 1003 2002 1 0.386136 0.062411 0.444144 dup Id
20 1005 2001 1 0.815317 0.814237 0.402990 dup p,q
13 1003 2002 1 0.467643 0.073472 0.363433 dup Id
15 1003 2002 1 0.527181 0.192858 0.337139 dup Id
21 1005 2002 1 0.951580 0.761860 0.319450 <-
7 1003 2000 1 0.611682 0.267618 0.312109 dup Id
1 1002 2000 1 0.710141 0.503366 0.310783 <-
5 1003 2000 1 0.725587 0.334001 0.307735 dup Id
17 1004 2000 1 0.710141 0.503366 0.305369 dup p,q
19 1005 2000 1 0.710141 0.503366 0.269010 dup Id
2 1002 2000 1 0.611682 0.267618 0.247648 dup Id
10 1003 2001 1 0.619495 0.082655 0.213857 dup Id
11 1003 2001 1 0.654035 0.163907 0.212591 dup Id
18 1004 2000 1 0.611682 0.267618 0.205169 <-
3 1002 2000 1 0.843739 0.410850 0.182180 dup Id
0 1002 2000 1 0.725587 0.334001 0.167611 dup Id
16 1004 2000 1 0.725587 0.334001 0.146053 dup Id
22 1009 2000 1 0.710141 0.503366 0.071535 dup p,q
</code></pre>
<p>Consumers can only buy one offer. Offers can only be bought once. I need to remove duplicates from the dataframe above, resulting on the follow:</p>
<pre><code> consumerId providerId capacity price quality distance
9 1003 2001 1 0.815317 0.814237 0.769884
21 1005 2002 1 0.951580 0.761860 0.319450
3 1002 2000 1 0.843739 0.410850 0.182180
18 1004 2000 1 0.611682 0.267618 0.205169
</code></pre>
<p>The closest I got to doing this is by using <code>df.drop_duplicates(subset=['price', 'quality'], keep='first')</code> and then doing the same to remove duplicate <code>consumerId</code>. </p>
<p>However, this method would not include the last line <br>
<code>18 1004 2000 1 0.611682 0.267618 0.205169</code><br> since that offer would have been removed by the first dedup operation.</p>
<p>What's the best way to accomplish this filtering?</p>
|
<p>So I solved my issue by creating a new dataframe, iterating through each row, adding to the new df, and deduplicating at each step:</p>
<pre><code>df = pd.DataFrame()
for idx, row in offers.iterrows():
df = df.append(row)
df = df.drop_duplicates(subset=['consumerId'], keep='first')
df = df.drop_duplicates(subset=['price', 'quality'], keep='first')
</code></pre>
|
python|pandas|dataframe
| 0
|
377,370
| 56,721,487
|
Drop row keep similiar value column data
|
<p>Based on this question <a href="https://stackoverflow.com/questions/56519823/drop-row-based-on-two-columns-conditions">Drop row based on two columns conditions</a>, but this time the other way around: I want to eliminate the rows whose values differ within a group.</p>
<p>I have <code>dataframe</code> looks like this:</p>
<pre><code>df
Data1 Data2 Data3
A XX AA
A YY AA
B XX BB
B YY CC
C XX DD
C YY DD
D XX EE
D YY FF
</code></pre>
<p>my expected result looks like this:</p>
<pre><code>Data1 Data2 Data3
A XX AA
A YY AA
C XX DD
C YY DD
</code></pre>
<p>how to do it?</p>
|
<p>You can use <code>groupby</code>:</p>
<pre><code>df[df.groupby('Data1')['Data3'].transform('nunique').eq(1)]
</code></pre>
<p>Or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>duplicated()</code></a>:</p>
<pre><code>df[df.duplicated(['Data1','Data3'],keep=False)]
</code></pre>
<hr>
<pre><code> Data1 Data2 Data3
0 A XX AA
1 A YY AA
4 C XX DD
5 C YY DD
</code></pre>
|
python|pandas|row
| 2
|
377,371
| 56,612,609
|
Pandas accumulate data for linear regression
|
<p>I'm trying to adjust my data so that <code>total_gross</code> per day is accumulated, e.g.:</p>
<pre><code>`Created` `total_gross` `total_gross_accumulated`
Day 1 100 100
Day 2 100 200
Day 3 100 300
Day 4 100 400
</code></pre>
<p>Any idea how I have to change my code to make <strong>total_gross_accumulated</strong> available?</p>
<p><a href="https://docs.google.com/spreadsheets/d/1Ym8SX2wSFUtHI5bnZd89BCtdHr-IKyK3cqduhsEkdT4/edit?usp=sharing" rel="nofollow noreferrer">Here</a> is my data.</p>
<p>my code:</p>
<pre><code>from sklearn import linear_model
def load_event_data():
df = pd.read_csv('sample-data.csv', usecols=['created', 'total_gross'])
df['created'] = pd.to_datetime(df.created)
return df.set_index('created').resample('D').sum().fillna(0)
event_data = load_event_data()
X = event_data.index
y = event_data.total_gross
plt.xticks(rotation=90)
plt.plot(X, y)
plt.show()
</code></pre>
|
<p><em>A list comprehension works here, but pandas' built-in <code>cumsum</code> is both simpler and much faster.</em></p>
<p><strong>SHORT answer:</strong></p>
<p>This should give you the new column that you want:</p>
<pre><code>n = event_data.shape[0]
# skip line 0 and start by accumulating from 1 until the end
total_gross_accumulated =[event_data['total_gross'][:i].sum() for i in range(1,n+1)]
# add the new variable in the initial pandas dataframe
event_data['total_gross_accumulated'] = total_gross_accumulated
</code></pre>
<p><strong>Or, faster:</strong></p>
<pre><code>event_data['total_gross_accumulated'] = event_data['total_gross'].cumsum()
</code></pre>
<hr>
<p><strong>LONG answer:</strong>
Full code using your data:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
def load_event_data():
df = pd.read_csv('sample-data.csv', usecols=['created', 'total_gross'])
df['created'] = pd.to_datetime(df.created)
return df.set_index('created').resample('D').sum().fillna(0)
event_data = load_event_data()
n = event_data.shape[0]
# skip line 0 and start by accumulating from 1 until the end
total_gross_accumulated =[event_data['total_gross'][:i].sum() for i in range(1,n+1)]
# add the new variable in the initial pandas dataframe
event_data['total_gross_accumulated'] = total_gross_accumulated
</code></pre>
<hr>
<p>Results:</p>
<pre><code>event_data.head(6)
# total_gross total_gross_accumulated
#created
#2019-03-01 3481810 3481810
#2019-03-02 4690 3486500
#2019-03-03 0 3486500
#2019-03-04 0 3486500
#2019-03-05 0 3486500
#2019-03-06 0 3486500
X = event_data.index
y = event_data.total_gross_accumulated
plt.xticks(rotation=90)
plt.plot(X, y)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/G1l2a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G1l2a.png" alt="enter image description here"></a></p>
|
pandas|matplotlib|machine-learning|linear-regression
| 2
|
377,372
| 56,803,686
|
How to extract adjacent rows?
|
<p>I have the code below, which extracts all the rows and columns that contain the string opened.</p>
<pre><code>opened = door[door.Text4.str.contains('opened')]
</code></pre>
<p>In addition to the above I also need to extract the next row.</p>
<pre><code> A Text4 C D
5 foo opened 0 0
6 bar 1 2
7 bar closed 3 6
8 foo 6 12
9 foo opened 7 14
10 foo 7 14
</code></pre>
<p>So I will end up with a data frame like</p>
<pre><code>5 foo opened 0 0
6 bar 1 2
9 foo opened 7 14
10 foo 7 14
</code></pre>
<p>How can I achieve this?</p>
|
<p>You can select the row after each one containing <code>'opened'</code> by shifting the boolean mask with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shift(1)</code></a>. To keep both the matching row and the one after it, combine the two masks with the <code>|</code> operator:</p>
<pre><code>opened = door[door.Text4.str.contains('opened') | door.Text4.str.contains('opened').shift(1)]
</code></pre>
<p>Result:</p>
<pre><code> A Text4 C D
5 foo opened 0 0
6 bar 1 2
9 foo opened 7 14
10 foo 7 14
</code></pre>
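<p>To avoid evaluating <code>str.contains</code> twice, the mask can also be computed once and shifted; a small variant, assuming pandas >= 0.24 for the <code>fill_value</code> argument:</p>
<pre><code>mask = door.Text4.str.contains('opened')
# keep the matching rows and the rows immediately after them
opened = door[mask | mask.shift(1, fill_value=False)]
</code></pre>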
|
pandas
| 3
|
377,373
| 56,798,250
|
Pandas Dataframe to nested data structure
|
<p>I have a data frame with this structure:</p>
<pre><code>>>> df
ID Class Type
0 1 Math Calculus
1 1 Math Algebra
2 1 Science Physics
3 1 History American
4 2 Math Factorization
5 2 History European
6 2 Science Chemistry
7 2 Science Biology
8 3 Math Computation
9 3 Science Biology
</code></pre>
<p>Desired output is a structure that maps the ID to the Class and the Class to the Type for each ID.</p>
<p>for example:</p>
<pre><code>{
1: {Math: [Calculus, Algebra], Science: [Physics], History: [American]}
2: {Math: [Factorization], History: [European], Science: [Chemistry, Biology]}
3: {Math: [Computation], Science: [Biology]}
}
</code></pre>
<p>I am able to accomplish this with a for loop, but the data set is very large (approximately 30 million rows), so I would like to accomplish this with Pandas.</p>
<p>I was able to get the output for a single ID formatted correctly like this</p>
<pre><code>>>> df.groupby(['ID', 'Class'])['Type'].apply(lambda x: x.to_dict())[1].groupby('Class').apply(lambda x: x.to_list()).to_dict()
{'History': ['American'], 'Math': ['Calculus', 'Algebra'], 'Science': ['Physics']}
>>> df.groupby(['ID', 'Class'])['Type'].apply(lambda x: x.to_dict())[2].groupby('Class').apply(lambda x: x.to_list()).to_dict()
{'History': ['European'], 'Math': ['Factorization'], 'Science': ['Chemistry', 'Biology']}
</code></pre>
<p>How can I apply the logic above to all the IDs, and is there an easier way to do this? I think I nested too many groupbys and overcomplicated the problem, but I'm not sure how to do this more efficiently.</p>
|
<p>IIUC, you can try starting from this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
txt="""0 1 Math Calculus
1 1 Math Algebra
2 1 Science Physics
3 1 History American
4 2 Math Factorization
5 2 History European
6 2 Science Chemistry
7 2 Science Biology
8 3 Math Computation
9 3 Science Biology"""
txt = [list(filter(lambda a: a != '', t.split(" ")))[1:]
for t in txt.split("\n")]
df = pd.DataFrame(txt, columns=["ID", 'Class', 'Type'])
df["ID"] = df["ID"].astype(int)
out = df.groupby("ID")\
.apply(lambda x: x.groupby("Class")\
.apply(lambda y:y["Type"].tolist()).to_dict())
</code></pre>
<p>which returns</p>
<pre class="lang-sh prettyprint-override"><code>ID
1 {'History': ['American'], 'Math': ['Calculus',...
2 {'History': ['European'], 'Math': ['Factorization',...
3 {'Math': ['Computation'], 'Science': ['Biology']}
dtype: object
</code></pre>
<p>Now you have access to your data via (for example) <code>out[1]["Math"]</code> which returns <code>['Calculus', 'Algebra']</code></p>
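<p>A more compact variant of the same idea, iterating over the <code>ID</code> groups directly (this builds a plain dict instead of a Series, but the lookups work the same way):</p>
<pre class="lang-py prettyprint-override"><code>out = {i: g.groupby("Class")["Type"].agg(list).to_dict()
       for i, g in df.groupby("ID")}
out[1]["Math"]  # ['Calculus', 'Algebra']
</code></pre>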
|
python|pandas
| 1
|
377,374
| 56,468,112
|
Pandas dataframe slicing and manipulation
|
<p>I have dataframe <code>df1</code> as follows</p>
<pre><code>+------+----------+-----+
| Date | Location | Key |
+------+----------+-----+
| | a | 1 |
| | a | 2 |
| | b | 3 |
| | b | 3 |
| | b | 3 |
| | c | 4 |
| | c | 4 |
| | b | 5 |
| | b | 6 |
| | d | 7 |
| | b | 8 |
| | b | 8 |
| | b | 8 |
| | b | 9 |
+------+----------+-----+
</code></pre>
<p>and <code>df2</code> below is sliced from that.</p>
<pre><code>+------+----------+-----+
| Date | Location | Key |
+------+----------+-----+
| | b | 3 |
| | b | 3 |
| | b | 3 |
| | b | 5 |
| | b | 6 |
| | b | 8 |
| | b | 8 |
| | b | 9 |
| | b | 9 |
+------+----------+-----+
</code></pre>
<p>The goal is to find the time differences between the <code>Key</code> changes in <code>df2</code> (like from the last 3 to 5, 5 to 6, 6 to the first 8, the last 8 to the first 9 and so on), add them up, repeat this for every <code>Location</code>, and average them.</p>
<p>Can this process be vectorized, or do we need to slice the dataframe for every machine and manually compute the average? </p>
<p>[EDIT]: </p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-1142-b85a122735aa>", line 1, in <module>
s = temp.groupby('SSCM_ Location').apply(lambda x: x[x['Key'].diff().ne(0)]['Execution Date'].diff().mean())
File "C:\Users\dbhadra\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 930, in apply
return self._python_apply_general(f)
File "C:\Users\dbhadra\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 936, in _python_apply_general
self.axis)
File "C:\Users\dbhadra\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\groupby\groupby.py", line 2273, in apply
res = f(group)
File "<ipython-input-1142-b85a122735aa>", line 1, in <lambda>
s = temp.groupby('SSCM_ Location').apply(lambda x: x[x['Key'].diff().ne(0)]['Execution Date'].diff().mean())
File "C:\Users\dbhadra\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py", line 1995, in diff
result = algorithms.diff(com._values_from_object(self), periods)
File "C:\Users\dbhadra\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\algorithms.py", line 1823, in diff
out_arr[res_indexer] = arr[res_indexer] - arr[lag_indexer]
TypeError: unsupported operand type(s) for -: 'str' and 'str'
</code></pre>
|
<p>You can try:</p>
<pre><code>import numpy as np

# obviously we will group by Location
groups = df1.groupby('Location')
# record the changes and mark the unchanged rows with NaN
df1['changes'] = groups.Key.diff().replace({0: np.nan})
# average the changes by location, ignoring the NaNs (unchanged rows)
groups.changes.mean()
</code></pre>
<p>Output:</p>
<pre><code>Location
a 1.0
b 1.5
c NaN
d NaN
Name: changes, dtype: float64
</code></pre>
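<p>As for the traceback in the edit: it comes from calling <code>diff()</code> on string dates, so the date column has to be converted first (assuming the dates parse with pandas' default format inference):</p>
<pre><code>temp['Execution Date'] = pd.to_datetime(temp['Execution Date'])
</code></pre>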
|
python|pandas|vectorization
| 0
|
377,375
| 56,589,371
|
Problem with callback error during Dash app implementation
|
<p>I am trying to set up a simple Dash app that returns a value from a dataframe used as a "look-up table"; a screenshot of a sample test table is included here<a href="https://i.stack.imgur.com/bag5l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bag5l.png" alt="enter image description here"></a></p>
<p>Within the app, a user can enter one of many states (this is the first column in the above table), and the app should return the corresponding two column entries from the same row of this dataframe. For example: if a user selected LA, the app should return "med" and "radio".</p>
<p>Unfortunately, I am getting a callback error after a user selection. My code is attached below. Can somebody please guide me on the way to resolve this? This is my first time using Dash, so I appreciate any guidance!</p>
<pre><code>app = dash.Dash(__name__)
server = app.server
df=pd.read_csv("test_lookup_table.csv", delimiter=',', encoding="utf-8-sig")
df.set_index('state')
app.layout = html.Div([
html.H1('On demand look up '),
html.H3('Select from list of states below'),
dcc.Dropdown(
id='dropdown',
options=[{'label': i, 'value': i} for i in df.state],
value='LALA'
),
html.Div(id='display-value')
])
@app.callback(
Output('dropdown','options'),
[Input('dropdown', 'value')])
def callback_a(i):
return df.loc[i, 1:3]
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
|
<p>You did not mention the specific error, but I think I can see the problem. Your callback is outputting (<code>Output</code>) to the dropdown component's <code>options</code> prop. It sounds like you would want to output to something else, like a <code>div</code> or <code>p</code> component. You have a <code>display-value</code> ID for one <code>div</code>, so maybe that's where you want it to go? Try changing your callback like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.callback(
Output('display-value', 'children'),
[Input('dropdown', 'value')])
def callback_a(i):
return df.loc[i, 1:3]
</code></pre>
<p><strong>EDIT:</strong></p>
<p>One problem is coming from setting the index to use the <code>state</code> column values. <code>set_index</code> returns a value, but you haven't assigned it, so nothing ends up happening. Use <code>df = df.set_index('state')</code> or include the <code>inplace=True</code> flag inside the method call.</p>
<p>That is causing the key error, because the <code>df</code> doesn't have the state values as indices, so you can't look for them. Now you can use <code>df.loc[i]</code> in your callback and it will be able to find the right row. You cannot use <code>[1:3]</code> here, though, because <code>loc</code> references by name, not value of the row/col, and you have no columns named with integers. You could do <code>df.loc[i].tolist()[1:3]</code>. However, with <code>state</code> as an index, you only have two values in each row (indices 0 and 1), so the <code>:3]</code> part is never going to get anything. You'll just get what's at index 1.</p>
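<p>Putting the pieces together, a minimal sketch of the corrected setup (the joined-string return value is just an illustration, not the only option):</p>
<pre class="lang-py prettyprint-override"><code>df = df.set_index('state')  # assign the result back

@app.callback(
    Output('display-value', 'children'),
    [Input('dropdown', 'value')])
def callback_a(i):
    row = df.loc[i]  # look the selected state up by index
    return ', '.join(row.astype(str).tolist())
</code></pre>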
<p>Lastly, and I should have mentioned this before, you've got the <code>df</code> set up as a global variable, which is generally not best practice. It would be better to use a callback to load the <code>df</code> in from your CSV file and add it to some component, like a <code>dash_table.DataTable</code>. Then you can have other callbacks pull values from that table instead of referencing a global variable.</p>
|
python|pandas|plotly-dash
| 1
|
377,376
| 56,535,640
|
In Python/Pandas, Check if a comma separated string contains any value in a list
|
<p>I have a column in pandas dataframe that looks like this:</p>
<pre><code>Code
----
ABC,DEF,XYZ
ABC,XYZ
...
...
CBA,FED,ABC
</code></pre>
<p>I'm trying to check if this series of comma separated string contains any string in my below list:</p>
<p>["UVW","XYZ"]</p>
<p>I know we can check a single value like "XYZ" in df["Code"], but how can we do it for a list of values in Python, or are there any special functions in pandas?</p>
|
<p>Use <code>pd.Series.str.contains</code> with <code>regex=True</code>:</p>
<p>Given <code>Series</code>, <code>s</code> and target list <code>l</code>:</p>
<pre><code>s
0 ABC,DEF,XYZ
1 ABC,XYZ
2 CBA,FED,ABC
l = ["UVW","XYZ"]
s.str.contains('|'.join(l))
</code></pre>
<p>Output:</p>
<pre><code>0 True
1 True
2 False
dtype: bool
</code></pre>
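<p>One caveat: <code>str.contains</code> treats the pattern as a regex by default, so if the list values may contain regex metacharacters you should escape them first:</p>
<pre><code>import re
s.str.contains('|'.join(map(re.escape, l)))
</code></pre>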
|
python|pandas
| 0
|
377,377
| 56,760,725
|
pandas dataframe - ungroup concatenated column
|
<p>I am trying to ungroup a concatenated column in a dataframe. In particular, I am trying to convert</p>
<pre><code> a b c
i0 1 a k1;k2
i1 2 b k3
i2 3 c k4;k5;k6
i3 4 d k7
</code></pre>
<p>into</p>
<pre><code> a b c
i0 1 a k1
i0 1 a k2
i1 2 b k3
i2 3 c k4
i2 3 c k5
i2 3 c k6
i3 4 d k7
</code></pre>
<p>I managed to do this using the code</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = pd.DataFrame({'a':[1,2,3,4],'b':list('abcd'),'c':['k1;k2','k3','k4;k5;k6','k7']},
index=['i'+str(i) for i in range(4)])
tmp = data['c'].str.split(';', expand=True).stack().reset_index(level=1, drop=True)
tmp.name = 'c'
data.drop('c',axis='columns',inplace=True)
data = data.join(tmp)
</code></pre>
<p>but it seems an incredibly convoluted way of doing something that is so simple. Is there a better way to do this using pandas?</p>
|
<p>Here's an answer that is not in the linked (unnest) question:</p>
<pre><code>(df.reset_index()
.set_index(['index','a','b'])
.c.str
.split(';',expand=True)
.stack()
.reset_index(level=-1,drop=True)
.reset_index(level=(1,2))
)
</code></pre>
<p>Output:</p>
<pre><code> a b 0
index
i0 1 a k1
i0 1 a k2
i1 2 b k3
i2 3 c k4
i2 3 c k5
i2 3 c k6
i3 4 d k7
</code></pre>
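<p>With pandas 0.25 or later, <code>DataFrame.explode</code> makes this even shorter (using <code>data</code> from the question):</p>
<pre><code>data.assign(c=data['c'].str.split(';')).explode('c')
</code></pre>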
|
pandas
| 0
|
377,378
| 56,865,344
|
How do I calculate the matthews correlation coefficient in tensorflow
|
<p>So I made a model with tensorflow keras and it seems to work ok. However, my supervisor said it would be useful to calculate the Matthews correlation coefficient, as well as the accuracy and loss it already calculates. </p>
<p>my model is very similar to the code in the tutorial here (<a href="https://www.tensorflow.org/tutorials/keras/basic_classification" rel="noreferrer">https://www.tensorflow.org/tutorials/keras/basic_classification</a>) except with a much smaller dataset.</p>
<p>Is there a prebuilt function, or would I have to get the prediction for each test sample and calculate it by hand?</p>
|
<p>There is nothing out of the box, but we can calculate it from the formula in a custom metric.</p>
<p>The basic classification link you supplied is for a multi-class categorisation problem whereas the Matthews Correlation Coefficient is specifically for <strong>binary</strong> classification problems.</p>
<p>Assuming your model is structured in the "normal" way for such problems (i.e. <code>y_pred</code> is a number between 0 and 1 for each record representing predicted probability of a "True" and labels are each exactly a <code>0</code> or <code>1</code> representing ground truth "False" and "True" respectively) then we can add in an MCC metric as follows:</p>
<pre class="lang-py prettyprint-override"><code># if y_pred > threshold we predict true.
# Sometimes we set this to something different to 0.5 if we have unbalanced categories
threshold = 0.5
def mcc_metric(y_true, y_pred):
predicted = tf.cast(tf.greater(y_pred, threshold), tf.float32)
true_pos = tf.math.count_nonzero(predicted * y_true)
true_neg = tf.math.count_nonzero((predicted - 1) * (y_true - 1))
false_pos = tf.math.count_nonzero(predicted * (y_true - 1))
false_neg = tf.math.count_nonzero((predicted - 1) * y_true)
x = tf.cast((true_pos + false_pos) * (true_pos + false_neg)
* (true_neg + false_pos) * (true_neg + false_neg), tf.float32)
return tf.cast((true_pos * true_neg) - (false_pos * false_neg), tf.float32) / tf.sqrt(x)
</code></pre>
<p>which we can include in our <code>model.compile</code> call:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(optimizer='adam',
loss=tf.keras.losses.binary_crossentropy,
metrics=['accuracy', mcc_metric])
</code></pre>
<h3>Example</h3>
<p>Here is a complete worked example where we categorise mnist digits depending on whether they are greater than 4:</p>
<pre class="lang-py prettyprint-override"><code>mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = 0 + (y_train > 4), 0 + (y_test > 4)
def mcc_metric(y_true, y_pred):
predicted = tf.cast(tf.greater(y_pred, 0.5), tf.float32)
true_pos = tf.math.count_nonzero(predicted * y_true)
true_neg = tf.math.count_nonzero((predicted - 1) * (y_true - 1))
false_pos = tf.math.count_nonzero(predicted * (y_true - 1))
false_neg = tf.math.count_nonzero((predicted - 1) * y_true)
x = tf.cast((true_pos + false_pos) * (true_pos + false_neg)
* (true_neg + false_pos) * (true_neg + false_neg), tf.float32)
return tf.cast((true_pos * true_neg) - (false_pos * false_neg), tf.float32) / tf.sqrt(x)
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.binary_crossentropy,
metrics=['accuracy', mcc_metric])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
</code></pre>
<p>output:</p>
<pre><code>Epoch 1/5
60000/60000 [==============================] - 7s 113us/sample - loss: 0.1391 - acc: 0.9483 - mcc_metric: 0.8972
Epoch 2/5
60000/60000 [==============================] - 6s 96us/sample - loss: 0.0722 - acc: 0.9747 - mcc_metric: 0.9495
Epoch 3/5
60000/60000 [==============================] - 6s 97us/sample - loss: 0.0576 - acc: 0.9797 - mcc_metric: 0.9594
Epoch 4/5
60000/60000 [==============================] - 6s 96us/sample - loss: 0.0479 - acc: 0.9837 - mcc_metric: 0.9674
Epoch 5/5
60000/60000 [==============================] - 6s 95us/sample - loss: 0.0423 - acc: 0.9852 - mcc_metric: 0.9704
10000/10000 [==============================] - 1s 58us/sample - loss: 0.0582 - acc: 0.9818 - mcc_metric: 0.9639
[0.05817381642502733, 0.9818, 0.9638971]
</code></pre>
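<p>To sanity-check the custom metric after training, one can compare it against scikit-learn's implementation on the test set; a quick check, assuming <code>x_test</code>/<code>y_test</code> as above:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.metrics import matthews_corrcoef

y_pred = (model.predict(x_test) > 0.5).astype('int32').ravel()
print(matthews_corrcoef(y_test, y_pred))
</code></pre>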
|
python-2.7|tensorflow|machine-learning|tf.keras
| 9
|
377,379
| 56,695,125
|
How do I compute one hot encoding using tf.one_hot?
|
<p>I'm trying to build a one hot encoding of y_train of the <strong>mnist</strong> dataset using <strong>tensorflow</strong>, but I can't figure out how to do it.</p>
<pre><code># unique values 0 - 9
y_train = array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)
</code></pre>
<p>In <code>keras</code> we'll do something like</p>
<pre><code># this converts it into one hot encoding
one hot_encoding = tf.keras.utils.to_categorical(y_train)
</code></pre>
<p>Whereas in <code>tf.one_hot</code>, what should my inputs to the <code>indices</code> & <code>depth</code> parameters be? After doing the one hot encoding, how can I convert the <strong>2d-tensor</strong> back to a <strong>numpy</strong> array?</p>
|
<p>I'm not familiar with Tensorflow but after some tests, this is what I've found:</p>
<p><code>tf.one_hot()</code> takes an <code>indices</code> and a <code>depth</code>. The <code>indices</code> are the values to actually convert to a one-hot encoding. <code>depth</code> refers to the maximum value to utilize.</p>
<p>For example, take the following code:</p>
<pre><code>y = [1, 2, 3, 2, 1]
print(tf.keras.utils.to_categorical(y))
sess = tf.Session()
with sess.as_default():
print(tf.one_hot(y, 2).eval())
print(tf.one_hot(y, 4).eval())
print(tf.one_hot(y, 6).eval())
</code></pre>
<p><code>tf.keras.utils.to_categorical(y)</code> Returns the following:</p>
<pre><code>array([[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.],
[0., 0., 1., 0.],
[0., 1., 0., 0.]], dtype=float32)
</code></pre>
<p>In contrast, the <code>tf.one_hot()</code> options (2, 4, and 6) do the following:</p>
<pre><code>[[0. 1.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 1.]]
[[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]]
[[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]]
</code></pre>
<p>As can be seen here, to mimic <code>tf.keras.utils.to_categorical()</code> using <code>tf.one_hot()</code>, the <code>depth</code> parameter should be the maximum value present in the array plus 1, to account for the value 0. In this case, the maximum value is 3, so there are four possible values in the encoding (0, 1, 2, and 3), and a depth of 4 is required to represent all of them in the one-hot encoding.</p>
<p>As for conversion to numpy, as shown above, using a Tensorflow session, running <code>eval()</code> on a tensor converts it to a numpy array. For methods on doing this, refer to <a href="https://stackoverflow.com/questions/34097281">How can I convert a tensor into a numpy array in TensorFlow?</a>.</p>
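<p>For example, for the MNIST labels from the question, a TF1-style conversion could look like this (a sketch, assuming <code>y_train</code> as above):</p>
<pre><code>one_hot_tensor = tf.one_hot(y_train, 10)
with tf.Session() as sess:
    one_hot_array = one_hot_tensor.eval()  # numpy array of shape (n, 10)
</code></pre>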
<p>I hope this helps.</p>
<p>Note: for the purposes of MNIST, a depth of 10 should be sufficient.</p>
|
python|tensorflow
| 4
|
377,380
| 56,793,367
|
How to train tiny yolov2 with tensorflow?
|
<p>I really don't know much about machine learning. I just downloaded the TensorFlowSharp plugin for Unity and tried it with a pre-trained yolov2 model. Now, I want to train my own model to detect a certain kind of object.</p>
<p>I really feel like an alien. What should I do? Do I have to learn 'tensorflow'? What does "training yolov2 with tensorflow" really mean? </p>
<p>I found a good article here: <a href="https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/" rel="nofollow noreferrer">https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/</a></p>
<p>But if I'm not wrong, it trains with darknet, not tensorflow. So I think I can't use the output with the tensorflowsharp plugin. I couldn't find any straightforward tutorial about the topic. Any help will be appreciated.</p>
|
<p>Ok. For newbies like me, here is what you have to do: </p>
<p>The YoloV2 algorithm is written in Darknet. Darknet is an open source neural network framework written in C and CUDA. If you want to use YoloV2 with the Unity tensorflowsharp plugin, you need a Tensorflow implementation of YoloV2. </p>
<p>And <a href="https://github.com/thtrieu/darkflow" rel="nofollow noreferrer">darkflow</a> (Darknet + Tensorflow = Darkflow, funny huh?) does the job. So, here is the outline of what you should do to train your own yolov2 model to use in unity with tensorflow: </p>
<p>1-) Install anaconda and python environment with tensorflow
2-) Download darkflow from github
3-) Train yolov2 with darkflow
4-) Convert training files to .pb, then .bytes
5-) Use .bytes with tensorflowsharp</p>
<p>For the first 3 steps, I strongly recommend video series starting with this one: <a href="https://www.youtube.com/watch?v=PyjBd7IDYZs" rel="nofollow noreferrer">https://www.youtube.com/watch?v=PyjBd7IDYZs</a> </p>
<p>Hope it helps. Feel free to comment if you get stuck. </p>
|
tensorflow|yolo|tensorflowsharp
| 1
|
377,381
| 56,598,749
|
Input shape in keras (This loss expects targets to have the same shape as the output)
|
<p>This is my first time using keras. I'm trying to follow a tutorial I found online and fit my own data to it. I have a matrix and binary labels.</p>
<pre><code>> str(d_train)
num [1:1062, 1:180] -0.04748 0.04607 -0.05429 -0.0126 -0.00219 ...
> str(trainlabels)
num [1:1062, 1:2] 0 0 0 0 0 0 1 0 0 0 ...
</code></pre>
<p>my code:</p>
<pre><code>model = keras_model_sequential()
model %>%
layer_dense(units = 8, activation = 'relu', input_shape = c(180)) %>%
layer_dense(units = 3, activation = "softmax")
summary(model)
## Compile
model %>%
compile(loss = "binary_crossentropy",
optimizer = "adam",
metrics = "accuracy")
## Fit model
history = model %>%
fit(d_train,
trainlabels,
epoch=200,
batch_size=32,
validation_split=0.2)
</code></pre>
<p>I can't seem to fit the model; I'm getting this error message:</p>
<pre><code>Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: A target array with shape (1062, 2) was passed for an output of shape (None, 3) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output.
</code></pre>
<p>Based on the error message, which asks for a different shape of my target array, I tried to change the dimensions around, with no luck.</p>
|
<p>I am not an R expert, but here:</p>
<pre><code>layer_dense(units = 3, activation = "softmax")
</code></pre>
<p>You are telling Keras that the output of your network has three classes. Your labels have shape <code>(1062, 2)</code>, which suggests there are two classes, hence the inconsistency.</p>
<p>You could just change <code>units = 2</code> in your last dense layer and it should work. Also note that you are using the <code>softmax</code> activation; in that case you should prefer the <code>categorical_crossentropy</code> loss.</p>
<p>To use <code>binary_crossentropy</code> for binary classification, you should have <code>units = 1</code>, <code>sigmoid</code> activation, and labels should be <code>(1062, 1)</code> or <code>(1062,)</code>, which means they are 0-1 encoded.</p>
|
r|tensorflow|keras|neural-network
| 17
|
377,382
| 56,727,860
|
How to fix the fetch argument error in implementing Bayesian Neural Network with tenssorflow
|
<pre><code>placeholder_X = tf.placeholder(tf.float32, shape = [None, 19])
placeholder_y = tf.placeholder(tf.float32, shape = [None,1])
#Build an iterator over training batches
#training_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
training_dataset = tf.data.Dataset.from_tensor_slices((placeholder_X, placeholder_y))
#Shuffle the dataset (note shuffle argument much larger than training size).learning_rate # shuffling of data
# and form batches of size batch_size
training_batches = training_dataset.shuffle(20000, reshuffle_each_iteration =True).repeat().batch(FLAGS.batch_size)
#training_iterator = tf.data.make_one_shot_iterator(training_batches)
#Building iterator over the heldout set with batch_size = heldout_size,
# i.e., return the entire heldout set as a constant.
val_dataset = tf.data.Dataset.from_tensor_slices((placeholder_X, placeholder_y))
val_batches = val_dataset.repeat().batch(500)
#heldout_iterator = tf.data.make_one_shot_iterator(heldout_batches)
test_dataset = tf.data.Dataset.from_tensor_slices((X_test,y_test))
test_dataset = test_dataset.batch(500)
#Combine these into a feasible iterator that can switch between training
# and validation inputs.
# Here should be minibatch increment be defined
handle = tf.placeholder(tf.string, shape = [])
feedable_iterator = tf.data.Iterator.from_string_handle(handle, training_batches.output_types, training_batches.output_shapes)
features_final, labels_final = feedable_iterator.get_next()
#create Reinitializable iterator for Train and Validation, one hot iterator for Test
train_val_iterator = tf.data.Iterator.from_structure(training_batches.output_types, training_batches.output_shapes)
training_iterator = train_val_iterator.make_initializer(training_batches)
val_iterator = train_val_iterator.make_initializer(val_batches)
test_iterator = test_dataset.make_one_shot_iterator()
def main(argv):
# extract the activation function from the hyperopt spec as an attribute from the tf.nn module
#activation = getattr(tf.nn, FLAGS.activation_function)
# define the graph
#with tf.Graph().as_default():
# Building the Bayesian Neural Network
# we are Gaussian Reparametrization Trick
# to compute the stochastic gradients as described in the paper
with tf.compat.v1.name_scope("bayesian_neural_net", values =[features_final]):
neural_net = tf.keras.Sequential()
for i in range(FLAGS.num_hidden_layers):
layer = tfp.layers.DenseReparameterization(
units = 10,
activation = tf.nn.relu,
trainable = True,
kernel_prior_fn=tfp.layers.default_multivariate_normal_fn, # NormalDiag
kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
#kernel_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(), # softplus(sigma)
kernel_posterior_tensor_fn=lambda x: x.sample(),
bias_prior_fn=tfp.layers.default_multivariate_normal_fn, # NormalDiag
bias_posterior_fn=tfp.layers.default_mean_field_normal_fn(), # softplus(sigma)
bias_posterior_tensor_fn=lambda x: x.sample()
)
neural_net.add(layer)
neural_net.add(tfp.layers.DenseReparameterization(
units=2, # one dimensional output
activation= tf.nn.softmax, # since regression (outcome not bounded)
trainable=True, # i.e subject to optimization
kernel_prior_fn=tfp.layers.default_multivariate_normal_fn, # NormalDiag with hyperopt sigma
kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(), # softplus(sigma)
kernel_posterior_tensor_fn=lambda x: x.sample(),
bias_prior_fn =tfp.layers.default_multivariate_normal_fn, # NormalDiag with hyperopt sigma
bias_posterior_fn=tfp.layers.default_mean_field_normal_fn(), # softplus(sigma)
bias_posterior_tensor_fn=lambda x: x.sample()
))
logits = neural_net(features_final)
#labels_distribution = tfd.Bernoulli(logits=logits)
labels_distribution = tfd.Categorical(logits=logits)
#labels_distribution = tfd.Bernoulli(logits=logits)
# Perform KL annealing. The optimal number of annealing steps
# depends on the dataset and architecture.
t = tf.Variable(0.0)
kl_regularizer = t / (FLAGS.kl_annealing * len(X_train) / FLAGS.batch_size)
#Compute the -ELBO as the loss. The kl term is annealed from 1 to 1 over
# the epochs specified by the kl_annealing flag.
log_likelihood = labels_distribution.log_prob(labels_final)
#neg_log_likelihood = tf.reduce_mean(tf.squared_difference(logits,labels_final))
neg_log_likelihood = -tf.reduce_mean(input_tensor = log_likelihood)
kl = sum(neural_net.losses)/len(X_train) * tf.minimum(1.0, kl_regularizer)
elbo_loss = neg_log_likelihood + kl
# Build metrics for evaluation. Predictions are formed from single forward
# pass of the probablisitic layers . They are cheap but noisy predictions
predictions = tf.argmax(input = logits, axis=1)
predictions = tf.cast(predictions, tf.float32)
# TP, TN, FP, FN
TP = tf.count_nonzero(predictions * labels_final)
TN = tf.count_nonzero((predictions - 1) * (labels_final - 1))
FP = tf.count_nonzero(predictions * (labels_final - 1))
FN = tf.count_nonzero((predictions - 1) * labels_final)
# precision, recall, f1
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
tpr = TP/(TP+FN)
fpr = FP/(TP+FN)
#create Reinitializable iterator for Train and Validation, one hot iterator for Test
train_val_iterator = tf.data.Iterator.from_structure(training_batches.output_types, training_batches.output_shapes)
training_iterator = train_val_iterator.make_initializer(training_batches)
val_iterator = train_val_iterator.make_initializer(val_batches)
test_iterator = test_dataset.make_one_shot_iterator()
with tf.compat.v1.name_scope("train"):
train_accuracy, train_accuracy_update_op = tf.metrics.accuracy(labels=labels_final,predictions =predictions)
opt = tf.train.AdamOptimizer(FLAGS.learning_rate)
train_op = opt.minimize(elbo_loss)
update_step_op = tf.assign(t, t+1)
with tf.compat.v1.name_scope("valid"):
valid_accuracy, validation_accuracy_update_op = tf.metrics.accuracy(labels= labels_final,predictions = predictions)
with tf.compat.v1.name_scope("test"):
test_accuracy, test_accuracy_update_op = tf.metrics.accuracy(labels = labels_final,predictions = predictions)
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
saver = tf.train.Saver()
stream_vars_valid = [ v for v in tf.local_variables() if "valid" in v.name]
reset_valid_op = tf.variables_initializer(stream_vars_valid)
valid_accuracy_summary = []
stop_early =0
with tf.compat.v1.Session() as sess:
sess.run(init_op)
# Run the training loop
train_val_string, test_string = sess.run([
train_val_iterator.string_handle(),
test_iterator.string_handle()])
training_steps = int(round(FLAGS.epochs * (len(X_train) / FLAGS.batch_size)))
for step in range(training_steps):
#start reininitializable's train iterator
sess.run(training_iterator, feed_dict = {placeholder_X:X_train, placeholder_y:y_train})
#
_ = sess.run([train_op,train_accuracy_update_op, update_step_op],feed_dict={handle: train_val_string})
# Manually print the frequency
if step % 100 == 0:
save_path = saver.save(sess, "/tmp/my_model.ckpt")
loss_value, accuracy_value, kl_value = sess.run([elbo_loss, train_accuracy, kl], feed_dict= {handle: train_val_string})
print("Step:{:>3d} loss : {:.3f} KL: {:.3f}" .format(step , loss_value, accuracy_value, kl_value))
if (step +1) % FLAGS.eval_freq ==0:
# Compute log prob of heldout set by averaging draws from the model:
# p(heldout | train) = int_model p(heldout|model) p(model|train) ~= 1/n * sum_{i=1}^n p(heldout | model_i)
# where model_i is a draw from the posterior
#p(model|train)
probs = np.asarray([sess.run((labels_distribution.probs),
feed_dict ={handle: train_val_string})
for _ in range(FLAGS.num_monte_carlo)])
mean_probs = np.mean(probs, axis =0).astype(np.int32)
print(mean_probs.dtype)
_, label_vals = sess.run((features_final, labels_final), feed_dict = {handle: train_val_string})
label_vals = (label_vals).astype(np.int32)
heldout_lp = np.mean(np.log(mean_probs[np.arange(mean_probs.shape[0]), label_vals]))
print(" ...Held_out nats: {:.3f}".format(heldout_lp))
# Calculate validation accuracy
for step in range(10):
#start reinitializable's validation iterator
sess.run(val_iterator, feed_dict = {placeholder_X:X_val, placeholder_y:y_val})
sess.run(validation_accuracy_update_op, feed_dict={handle:train_val_string})
valid_value = sess.run(valid_accuracy, feed_dict={handle:train_val_string})
valid_accuracy_summary.append(valid_value)
if valid_value < max(valid_accuracy_summary) and step > 100:
stop_early += 1
if stop_early == 40:
break
else:
stop_early = 0
print("Validation Accuracy: {:.3f}".format(valid_value))
sess.run(reset_valid_op)
#Feed to r=feedable iterator the string handle
test_value, precision_value, recall_value, fpr_value, tpr_value,f1 = sess.run([test_accuracy, precision, recall, fpr, tpr,f1],feed_dict={handle: test_string})
print("Step: {:>3d} test Accuracy: {:.3f} Precision: {:.3f} Recall: {:.3f} ".format(step, test_value, precision_value, recall_value))
print("Step: {:>3d} fpr: {:.3f} tpr: {:.3f} f1_1: {:.3f}".format( step, fpr_value, tpr_value,f1))
if __name__ == "__main__":
tf.compat.v1.app.run()
</code></pre>
<p>I expect the output to progress, but it gives this error: </p>
<pre><code>Step: 0 loss : 0.646 KL: 0.875
Step:100 loss : 0.654 KL: 0.904
Step:200 loss : 0.657 KL: 0.906
Step:300 loss : 0.648 KL: 0.906
int32
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:137: RuntimeWarning: divide by zero encountered in log
...Held_out nats: -inf
Validation Accuracy: 0.914
Step: 9 test Accuracy: 0.000 Precision: 0.910 Recall: 1.000
Step: 9 fpr: 0.099 tpr: 1.000 f1_1: 0.953
Step:400 loss : 0.624 KL: 0.906
Step:500 loss : 0.641 KL: 0.906
Step:600 loss : 0.612 KL: 0.906
Step:700 loss : 0.579 KL: 0.906
int32
...Held_out nats: -inf
Validation Accuracy: 0.914
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __init__(self, fetches, contraction_fn)
302 self._unique_fetches.append(ops.get_default_graph().as_graph_element(
--> 303 fetch, allow_tensor=True, allow_operation=True))
304 except TypeError as e:
14 frames
TypeError: Can not convert a float64 into a Tensor or Operation.
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __init__(self, fetches, contraction_fn)
305 raise TypeError('Fetch argument %r has invalid type %r, '
306 'must be a string or Tensor. (%s)' %
--> 307 (fetch, type(fetch), str(e)))
308 except ValueError as e:
309 raise ValueError('Fetch argument %r cannot be interpreted as a '
</code></pre>
|
<p>The exception arises because you reuse the name <code>f1</code> on the left-hand side of the assignment: after the first <code>sess.run</code>, <code>f1</code> no longer refers to the tensor but to the returned <code>float64</code> value, so the next call passes a plain float as a fetch. Rename the variable on the left side.</p>
<pre><code> test_value, precision_value, recall_value, fpr_value, tpr_value,f1 = sess.run([test_accuracy, precision, recall, fpr, tpr,f1],feed_dict={handle: test_string})
</code></pre>
<p>change the line to </p>
<pre><code>test_value, precision_value, recall_value, fpr_value, tpr_value,f1_value = sess.run([test_accuracy, precision, recall, fpr, tpr,f1],feed_dict={handle: test_string})
</code></pre>
<p>Hopefully, this will work. </p>
|
tensorflow
| 0
|
377,383
| 56,623,173
|
How can I train a neural network with weights constrained to specific values?
|
<p>I am trying to train a network with weights that can only have certain values. However, the way that I am doing this takes a very long time, e.g. 5h per epoch for a 3-layered fully connected network on MNIST. Is there a faster way to do this?</p>
<p>I am using tf.keras for building my network. I added a custom tf.constraint that does a binary search on the list of possible weight values when updating the weights. I found a binary search code from <a href="https://stackoverflow.com/questions/45884453/binary-search-and-interpolation-in-tensorflow">here</a> that I adapted for my application. In order to apply the binary search function to all the parameters, I use "tf.map_fn".</p>
<p>Here is the Constraint class:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.python.keras.constraints import Constraint
import tensorflow as tf
# binary search function
def find(weights, query, shape):
vals = tf.map_fn(lambda x: weights[tf.argmin(tf.cast(x >= weights, dtype=tf.int32)[1:] - tf.cast(x >= weights, dtype=tf.int32)[:-1])], tf.reshape(query,[-1]))
return tf.reshape(vals, shape)
class WeightQuantizeClip(Constraint):
# weights parameter holds the possible weight values
def __init__(self, weights = []):
self.weights = tf.convert_to_tensor(weights)
def __call__(self, p):
p = find(self.weights, p, p.shape)
return p
def get_config(self):
return {'name': self.__class__.__name__}
</code></pre>
<p>When I train a network with the above constraint, the weights take only the allowed values, but the training time increases dramatically. Without the binary search function my GPU is fully utilized, but when I train with the binary search function the utilization drops to 2%. Can anyone help me with this?</p>
|
<p>From your description it seems that some part of the clipping op gets executed on the CPU, which requires RAM-VRAM communication and is <em>extremely</em> slow.</p>
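<p>If snapping each weight to the <em>nearest</em> allowed value is acceptable, the lookup can be fully vectorized with broadcasting, which avoids <code>tf.map_fn</code> entirely; a minimal sketch (note it materializes a |weights| x |values| distance matrix, so the allowed-value list should be small, and both tensors must share a dtype):</p>
<pre class="lang-py prettyprint-override"><code>def snap(p, allowed):
    # p: weight tensor of any shape, allowed: 1-D tensor of permitted values
    flat = tf.reshape(p, [-1, 1])
    idx = tf.argmin(tf.abs(flat - allowed), axis=1)  # nearest allowed value per weight
    return tf.reshape(tf.gather(allowed, idx), tf.shape(p))
</code></pre>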
<p>However, if you are trying to do traditional NN quantization, there is actually a whole TF module built for this purpose; you may want to check it out, maybe it covers your use case.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/quantization/quantize" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/quantization/quantize</a></p>
|
tensorflow
| 0
|
377,384
| 56,767,246
|
How to wrap tf.cond function with keras.layers.Lambda?
|
<p>I'm trying to define a custom layer in keras, but I can't find a way to wrap <code>tf.cond</code> with the <code>layers.Lambda</code> function:</p>
<pre class="lang-py prettyprint-override"><code> matches = tf.cond(
tf.greater(N, 0),
lambda: match_boxes(
anchors, groundtruth_boxes,
positives_threshold=positives_threshold,
negatives_threshold=negatives_threshold,
force_match_groundtruth=True
),
lambda: only_background
)
</code></pre>
|
<p>Since the body of your true function is very big, you could create a custom layer like this:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
class CustomLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(CustomLayer, self).__init__()
self.pred = kwargs.get('pred', False)
def call(self, inputs):
def true_fn(x):
return x + 1.
return tf.cond(self.pred,
true_fn=lambda: true_fn(inputs),
false_fn=lambda: tf.identity(inputs))
</code></pre>
<p>Testing:</p>
<pre class="lang-py prettyprint-override"><code>inputs = tf.placeholder(tf.float32, shape=(None, 1))
pred = tf.placeholder(tf.bool, shape=())
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(1, kernel_initializer=tf.initializers.ones))
model.add(CustomLayer(pred=pred))
outputs = model(inputs)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(outputs.eval({inputs: [[1.]], pred: False})) # [[1.]]
print(outputs.eval({inputs: [[1.]], pred: True})) # [[2.]]
</code></pre>
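<p>Alternatively, if you want the literal <code>Lambda</code> wrapping from the question, the <code>tf.cond</code> call can be placed inside it directly; a sketch, assuming <code>N</code>, <code>match_boxes</code> and <code>only_background</code> are defined as in your snippet:</p>
<pre class="lang-py prettyprint-override"><code>matches = tf.keras.layers.Lambda(
    lambda a: tf.cond(
        tf.greater(N, 0),
        lambda: match_boxes(
            a, groundtruth_boxes,
            positives_threshold=positives_threshold,
            negatives_threshold=negatives_threshold,
            force_match_groundtruth=True
        ),
        lambda: only_background
    ))(anchors)
</code></pre>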
|
python|tensorflow|keras
| 1
|
377,385
| 56,855,877
|
Create interactive plot by IPywidgets in Jupyter notebook
|
<p>I am trying to create an interactive bar plot.
I have the following dataframe:</p>
<pre><code>CustID  Age    Gender  Smoking_history   Alcohol_history
1       18-24  M       Non-smoker        <21 units per week
2       43-48  F       Non-smoker        <21 units per week
3       37-42  M       Unknown           <21 units per week
4       18-24  F       Unknown           Unknown
5       43-48  M       Previous smoker   <21 units per week
</code></pre>
<p>I want to create an interactive plot where I can select columns, and it creates a bar plot based on a group-by of the selected columns, i.e. totals obtained by counting rows in each group.</p>
<p>It shows two drop-down lists where the desired columns can be selected. </p>
<pre><code>from plotly.offline import iplot, init_notebook_mode
import plotly.graph_objs as go
init_notebook_mode()
import plotly

@interact
def bar_plot(x=list(['Age', 'Gender', 'Smoking history', 'Alcohol history']),
             y=list(df[['Gender', 'Smoking history', 'Alcohol history']])):
    df.iplot(kind='bar', x=x, y=y,
             xTitle=x.title(), yTitle=y.title(),
             title=f'{y.title()} vs {x.title()}')
</code></pre>
<p>But it does not create any output. Instead it shows this warning:</p>
<pre><code>C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\display.py:689:
UserWarning: Consider using IPython.display.IFrame instead
</code></pre>
<p>Any idea how to resolve it?</p>
|
<p>I believe you just need to add connected=True inside of init_notebook_mode: <a href="https://plot.ly/python/offline/" rel="nofollow noreferrer">plotly offline docs</a></p>
<pre><code>init_notebook_mode(connected=True)
</code></pre>
|
python|pandas|jupyter-notebook|plotly|ipywidgets
| 0
|
377,386
| 56,625,762
|
Issues with custom scorer
|
<p>I am doing some online lessons in machine learning, and we use the following scoring function in our DNN models for regression:</p>
<pre><code> def r_squared(y_true, y_pred):
     # R^2 = 1 - sum((y_i - y_hat_i)^2) / sum((y_i - y_bar)^2)
     numerator = tf.reduce_sum(tf.square(tf.subtract(y_true, y_pred)))
     denominator = tf.reduce_sum(tf.square(tf.subtract(y_true, tf.reduce_mean(y_true))))
     r2 = tf.clip_by_value(tf.subtract(1.0, tf.div(numerator, denominator)), clip_value_min = 0.0, clip_value_max = 1.0)
     return r2
... later ...
model.compile(loss = "mse", # mean-square-error,
optimizer = optimizer(lr = learning_rate),
metrics = [r_squared])
</code></pre>
<p>Now while the model and all is working, I wanted to conduct a gridsearch to determine the best parameters for my model. However, when trying to use the <code>r_squared</code> function with the gridsearch as scorer, I get several errors:</p>
<pre><code>
grid = GridSearchCV(estimator = estimator,
param_grid = param_grid,
n_jobs = 1,
verbose = 1,
cv = folds,
scoring = make_scorer(FeedForward.r_squared, greater_is_better=True))
</code></pre>
<p>results in:</p>
<pre><code>TypeError: Input 'y' of 'Sub' Op has type float64 that does not match type float32 of argument 'x'.
</code></pre>
<p>around here:</p>
<pre><code>r2 = tf.clip_by_value(tf.subtract(1.0, tf.div(numerator, denominator)), clip_value_min = 0.0, clip_value_max = 1.0)
</code></pre>
<p>Thus, I changed the line as follows:</p>
<pre><code>r2 = tf.clip_by_value(tf.subtract(1.0, tf.div(tf.cast(numerator, tf.float32), tf.cast(denominator, tf.float32))), clip_value_min = 0.0, clip_value_max = 1.0)
</code></pre>
<p>which then results in:</p>
<pre><code>ValueError: scoring must return a number, got Tensor("mul:0", shape=(), dtype=float32) (<class 'tensorflow.python.framework.ops.Tensor'>) instead. (scorer=score)
</code></pre>
<p>While I understand the error and can confirm it in the debugger, I find myself unable to resolve the issue even after googling for the error. This might be due to - needless to say - me not being familiar enough with tensorflow yet.</p>
<p>So how do I get the value out of the tensor? And am I even doing the right thing here, or is something else wrong?</p>
|
<p>The problem is mixing the usage of TensorFlow/Keras and scikit-learn. A Keras metric needs to be implemented using <code>keras.backend</code> functions, but scikit-learn functions are not symbolic and have to be implemented using numpy.</p>
<p>Fortunately scikit-learn already has an implementation of the R^2 score as <code>sklearn.metrics.r2_score</code>, so you can use it like this:</p>
<pre><code>from sklearn.metrics import r2_score
grid = GridSearchCV(estimator = estimator,
param_grid = param_grid,
n_jobs = 1,
verbose = 1,
cv = folds,
scoring = make_scorer(r2_score, greater_is_better=True))
</code></pre>
<p>Your Keras metric needs no change; it's a bit odd that you have to keep two implementations of the metric, but that's how it is.</p>
|
python|tensorflow|keras|scikit-learn|deep-learning
| 1
|
377,387
| 56,464,356
|
What is the equivalent in Tensorflow 2.0 of tf.contrib.framework.nest.flatten_dict_items()?
|
<p>I'm upgrading TF1 code to TF2 with the tf_upgrade_v2, and I found this message:</p>
<pre><code>tf.contrib.framework.nest.flatten_dict_items(dict)
AttributeError: module 'tensorflow' has no attribute 'contrib'
</code></pre>
<p>How I should update the code? I didn't find a solution.</p>
|
<p>This one is a little odd (hard to find) because it's not exported in the same way as the core functionality.</p>
<p>cs95 is correct in his comment insofar as it lives in <code>tensorflow.python.util.nest</code>, but one cannot simply do:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
tf.python.util.nest.flatten_dict_items(my_dict)
</code></pre>
<p>Instead, we need to import the <code>nest</code> module itself with something like:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.python.util import nest
nest.flatten_dict_items(my_dict)
</code></pre>
|
python|tensorflow|tensorflow2.0
| 1
|
377,388
| 56,822,357
|
How to shortern numpy code used for extraction
|
<p>I am writing 29 lines of extraction code for my data extraction. Is there any way I can shorten my code? </p>
<pre><code>import numpy as np
from numpy.lib.recfunctions import append_fields
import matplotlib.pyplot as plt
data_y = np.genfromtxt('data/housing-and-development-board-resale-price-index-1q2009-100-quarterly.csv',
names=True,
dtype=None,
delimiter=",",
missing_values='na,-',
filling_values=-1,
encoding=None)
# check if data load correctly
print(data_y)
years = []
for quarter in data_y['quarter']:
year, q = quarter.split('-') # the new column name is year
years.append(int(year))
years = np.array(years)
data_y = append_fields(data_y, 'year', years)
print(data_y)
# is there a way to make the following of 29 extractions more elegant?
data_1990 = data_y[data_y['year']==1990]
data_1991 = data_y[data_y['year']==1991]
data_1992 = data_y[data_y['year']==1992]
data_1993 = data_y[data_y['year']==1993]
data_1994 = data_y[data_y['year']==1994]
data_1995 = data_y[data_y['year']==1995]
data_1996 = data_y[data_y['year']==1996]
data_1997 = data_y[data_y['year']==1997]
data_1998 = data_y[data_y['year']==1998]
data_1999 = data_y[data_y['year']==1999]
data_2000 = data_y[data_y['year']==2000]
data_2001 = data_y[data_y['year']==2001]
data_2002 = data_y[data_y['year']==2002]
data_2003 = data_y[data_y['year']==2003]
data_2004 = data_y[data_y['year']==2004]
data_2005 = data_y[data_y['year']==2005]
data_2006 = data_y[data_y['year']==2006]
data_2007 = data_y[data_y['year']==2007]
data_2008 = data_y[data_y['year']==2008]
data_2009 = data_y[data_y['year']==2009]
data_2010 = data_y[data_y['year']==2010]
data_2011 = data_y[data_y['year']==2011]
data_2012 = data_y[data_y['year']==2012]
data_2013 = data_y[data_y['year']==2013]
data_2014 = data_y[data_y['year']==2014]
data_2015 = data_y[data_y['year']==2015]
data_2016 = data_y[data_y['year']==2016]
data_2017 = data_y[data_y['year']==2017]
data_2018 = data_y[data_y['year']==2018]
# is there a way to make the following of 29 extractions more elegant?
data_90 = data_1990['index']
data_91 = data_1991['index']
data_92 = data_1992['index']
data_93 = data_1993['index']
data_94 = data_1994['index']
data_95 = data_1995['index']
data_96 = data_1996['index']
data_97 = data_1997['index']
data_98 = data_1998['index']
data_99 = data_1999['index']
data_00 = data_2000['index']
data_01 = data_2001['index']
data_02 = data_2002['index']
data_03 = data_2003['index']
data_04 = data_2004['index']
data_05 = data_2005['index']
data_06 = data_2006['index']
data_07 = data_2007['index']
data_08 = data_2008['index']
data_09 = data_2009['index']
data_10 = data_2010['index']
data_11 = data_2011['index']
data_12 = data_2012['index']
data_13 = data_2013['index']
data_14 = data_2014['index']
data_15 = data_2015['index']
data_16 = data_2016['index']
data_17 = data_2017['index']
data_18 = data_2018['index']
data_combined = np.empty([len(data_90), 29])
for i in range(len(data_90)):
data_combined[i] = np.array([data_90[i], data_91[i], data_92[i], data_93[i], data_94[i], data_95[i], data_96[i],
data_97[i], data_98[i], data_99[i], data_00[i], data_01[i], data_02[i], data_03[i],
data_04[i], data_05[i], data_06[i], data_07[i], data_08[i], data_09[i], data_10[i],
data_11[i], data_12[i], data_13[i], data_14[i], data_15[i], data_16[i], data_17[i],
data_18[i]])
# is there a way to make the following of 29 extractions of labels more elegant?
labels = np.array(['1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001',
'2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', ' 2011', '2012', '2013',
'2014', '2015', '2016', '2017', '2018'])
boxprops = dict(linestyle='-', linewidth=2, color='blue')
flierprops = dict(marker='o', markerfacecolor='green', markersize=8)
medianprops = dict(linewidth=2, color='red')
plt.figure(figsize=(60, 60))
plt.title('Movement of Resale Price Index (RPI)', fontsize=15, weight='bold')
plt.boxplot(data_combined, labels=labels, flierprops=flierprops, medianprops=medianprops, boxprops=boxprops)
plt.ylabel('Resale Price Index (RPI)', labelpad=20, fontsize=12)
plt.xlabel('Years', labelpad=20, fontsize=12)
plt.show()
</code></pre>
|
<pre><code>year_data = {year: data_y[data_y['year']==year] for year in np.unique(data_y['year'])}
</code></pre>
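<p>Building on that dict, the per-year <code>index</code> arrays, the combined matrix and the labels follow the same pattern; a sketch, assuming every year has the same number of quarters:</p>
<pre><code>years = sorted(year_data)
data_combined = np.column_stack([year_data[y]['index'] for y in years])
labels = [str(y) for y in years]
</code></pre>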
|
python|python-3.x|numpy
| 1
|
377,389
| 56,734,378
|
Difference between weighted accuracy metric of Keras and Scikit-learn
|
<h3>Intro</h3>
<p>Hey everyone, </p>
<p>I am working on my diploma thesis and I face a binary classification problem with imbalanced class contribution. I have around 10 times more negative ("0") labels than positive ("1") labels. For that reason I considered observing not only accuracy and ROC-AUC, but also weighted/balanced accuracy and Precision-Recall-AUC. </p>
<p>I already asked the question on GitHub (<a href="https://github.com/keras-team/keras/issues/12991" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/12991</a>) but the issue has not been answered yet so I thought this platform here might be the better place! </p>
<h3>Issue description</h3>
<p>During some calculations on the validation set in a custom callback I noticed, more or less by coincidence, that the weighted accuracy is always <strong>different</strong> from my results using <em>sklearn.metrics.accuracy_score()</em>. </p>
<p>Using Keras, weighted accuracy has to be declared in <em>model.compile()</em>; it is a key in the logs{} dictionary after every epoch (and is also written to the log file by the CSVLogger callback and to the history object), and is returned as a value in a list by <em>model.evaluate()</em>:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'],
weighted_metrics=['accuracy'])
</code></pre>
<p>I calculate the val_sample_weights vector based on the class contribution of the training set with the Sklearn function <em>class_weight.compute_sample_weight()</em> and with the help of <em>class_weight.compute_class_weight()</em> (both from <em>sklearn.utils</em>).</p>
<pre class="lang-py prettyprint-override"><code>cls_weights = class_weight.compute_class_weight('balanced', np.unique(y_train._values),
y_train._values)
cls_weight_dict = {0: cls_weights[0], 1: cls_weights[1]}
val_sample_weights = class_weight.compute_sample_weight(cls_weight_dict, y_test._values)
</code></pre>
<p>In <em>model.fit()</em> I pass this vector together with the validation data, and to <em>sklearn.metrics.accuracy_score()</em> I pass it via the parameter named <em>sample_weight</em>, to compare the results on the same basis. </p>
<pre class="lang-py prettyprint-override"><code>model_output = model.fit(x_train, y_train, epochs=500, batch_size=32, verbose=1,
validation_data=(x_test, y_test, val_sample_weights))
</code></pre>
<p>Furthermore, I derived from several easy examples the equation by which Scikit-learn computes the weighted accuracy, and it seems quite reasonable to me: </p>
<p><a href="https://chart.googleapis.com/chart?cht=tx&chl=Acc_%7Bweighted%7D%3D%5Cfrac%7Bw_pTP%2Bw_nTN%7D%7Bw_pTP%2Bw_pFN%2Bw_nTN%2Bw_nFP%7D%3D%5Cfrac%7Bw_pTP%2Bw_nTN%7D%7BTP%2BFN%2BTN%2BFP%7D" rel="nofollow noreferrer">LaTeX equation</a></p>
<p>TP, TN, FP and FN are the values reported in the confusion matrix and w_p and w_n are the class weights of the positive and negative class respectively. </p>
<p>An easy example to test it can be found here: </p>
<p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html</a></p>
<p>Just for the sake of completeness, <em>sklearn.metrics.accuracy_score(..., sample_weight=)</em> returns the same result as <em>sklearn.metrics.balanced_accuracy_score()</em>. </p>
<h3>System Information</h3>
<ul>
<li>GeForce RTX 2080 Ti</li>
<li>Keras 2.2.4</li>
<li>Tensorflow-gpu 1.13.1</li>
<li>Sklearn 0.19.2</li>
<li>Python 3.6.8</li>
<li>CUDA Version 10.0.130</li>
</ul>
<h3>Code example</h3>
<p>I looked for an easy example to make the issue easy to reproduce, even if the class imbalance here is weaker (1:2 instead of 1:10). It's based on the introductory tutorial to Keras which can be found here:</p>
<p><a href="https://towardsdatascience.com/k-as-in-keras-simple-classification-model-a9d2d23d5b5a" rel="nofollow noreferrer">https://towardsdatascience.com/k-as-in-keras-simple-classification-model-a9d2d23d5b5a</a></p>
<p>The <em>Pima Indians onset diabetes</em> dataset will be downloaded, as done in the link above, from the repository of Jason Brownlee, the creator of the website Machine Learning Mastery. But I guess it can also be downloaded from various other sites. </p>
<p>So finally here's the code: </p>
<pre class="lang-py prettyprint-override"><code>from keras.layers import Dense, Dropout
from keras.models import Sequential
from keras.regularizers import l2
import pandas as pd
import numpy as np
from sklearn.utils import class_weight
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
file = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/' \
'pima-indians-diabetes.data.csv'
# Load csv data from file to data using pandas
data = pd.read_csv(file, names=['pregnancies', 'glucose', 'diastolic', 'triceps', 'insulin',
'bmi', 'dpf', 'age', 'diabetes'])
# Process data
data.head()
x = data.drop(columns=['diabetes'])
y = data['diabetes']
# Split into train and test
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=0)
# define a sequential model
model = Sequential()
# 1st hidden layer
model.add(Dense(100, activation='relu', input_dim=8, kernel_regularizer=l2(0.01)))
model.add(Dropout(0.3))
# 2nd hidden layer
model.add(Dense(100, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.3))
# Output layer
model.add(Dense(1, activation='sigmoid'))
# Compilation with weighted metrics
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'],
weighted_metrics=['accuracy'])
# Calculate validation _sample_weights_ based on the class distribution of train labels and
# apply it to test labels using Sklearn
cls_weights = class_weight.compute_class_weight('balanced', np.unique(y_train._values),
y_train._values)
cls_weight_dict = {0: cls_weights[0], 1: cls_weights[1]}
val_sample_weights = class_weight.compute_sample_weight(cls_weight_dict, y_test._values)
# Train model
model_output = model.fit(x_train, y_train, epochs=500, batch_size=32, verbose=1,
validation_data=(x_test, y_test, val_sample_weights))
# Predict model
y_pred = model.predict(x_test, batch_size=32, verbose=1)
# Classify predictions based on threshold at 0.5
y_pred_binary = (y_pred > 0.5) * 1
# Sklearn metrics
sklearn_accuracy = accuracy_score(y_test, y_pred_binary)
sklearn_weighted_accuracy = accuracy_score(y_test, y_pred_binary,
sample_weight=val_sample_weights)
# metric_list has 3 entries: [0] val_loss weighted by val_sample_weights, [1] val_accuracy
# [2] val_weighted_accuracy
metric_list = model.evaluate(x_test, y_test, batch_size=32, verbose=1,
sample_weight=val_sample_weights)
print('sklearn_accuracy=%.3f' %sklearn_accuracy)
print('sklearn_weighted_accuracy=%.3f' %sklearn_weighted_accuracy)
print('keras_evaluate_accuracy=%.3f' %metric_list[1])
print('keras_evaluate_weighted_accuracy=%.3f' %metric_list[2])
</code></pre>
<h3>Results and summary</h3>
<p>For example I get:</p>
<p><code>sklearn_accuracy=0.792</code></p>
<p><code>sklearn_weighted_accuracy=0.718</code></p>
<p><code>keras_evaluate_accuracy=0.792</code></p>
<p><code>keras_evaluate_weighted_accuracy=0.712</code></p>
<p>The "unweighted" accuracy value is the same, both for Sklearn as for Keras. The difference isn't really big, but it grows bigger as the dataset becomes more imbalanced. For example for my task it always differs around 5% from each other! </p>
<p>Maybe I'm missing something and it's supposed to be like that, but anyway it's confusing that Keras and Sklearn provide different values, especially since the whole class_weights and sample_weights topic is hard to get into. Unfortunately I'm not familiar enough with Keras to search the Keras code on my own.</p>
<p>I would really appreciate receiving any answers! </p>
|
<p>I repeated your exact toy example and actually found that <code>sklearn</code> and <code>keras</code> do give the same results. I repeated the experiment 5 times to ensure it wasn't by chance and indeed the results were identical each time. For one of the runs for example:</p>
<pre><code>sklearn_accuracy=0.831
sklearn_weighted_accuracy=0.800
keras_evaluate_accuracy=0.831
keras_evaluate_weighted_accuracy=0.800
</code></pre>
<p>FYI I'm using <code>sklearn</code> and <code>keras</code> versions:</p>
<pre><code>0.20.3
2.3.1
</code></pre>
<p>respectively. See this google colab example: <a href="https://colab.research.google.com/drive/1b5pqbp9TXfKiY0ucEIngvz6_Tc4mo_QX" rel="nofollow noreferrer">https://colab.research.google.com/drive/1b5pqbp9TXfKiY0ucEIngvz6_Tc4mo_QX</a></p>
|
python|tensorflow|keras|deep-learning
| 0
|
377,390
| 56,790,402
|
Unsatisfiable error while installing tensorflow-gpu in Anaconda
|
<p>I have installed Anaconda Python 3.7 in Ubuntu 18.04 and then executed the commands:</p>
<pre><code>conda update --all
conda install cudnn
</code></pre>
<p>Now when I try to install tensorflow-gpu using the command <code>conda install tensorflow-gpu</code>, I get Unsatisfiable error like this:</p>
<blockquote>
<p>UnsatisfiableError: The following specifications were found to be
incompatible with each other:</p>
<ul>
<li>pkgs/main/linux-64::_ipyw_jlab_nb_ext_conf==0.1.0=py37_0 -> ipywidgets -> widgetsnbextension[version='>=3.4.0,<3.5.0'] ->
notebook[version='>=4.4.1'] -> nbconvert -> bleach</li>
<li>pkgs/main/linux-64::bleach==3.1.0=py37_0</li>
<li>pkgs/main/linux-64::ipywidgets==7.4.2=py37_0 -> widgetsnbextension[version='>=3.4.0,<3.5.0'] ->
notebook[version='>=4.4.1'] -> nbconvert -> bleach</li>
<li>pkgs/main/linux-64::jupyterlab==0.35.5=py37hf63ae98_0 -> jupyterlab_server[version='>=0.2.0,<0.3.0'] -> notebook -> nbconvert
-> bleach</li>
<li>pkgs/main/linux-64::jupyterlab_server==0.2.0=py37_0 -> notebook -> nbconvert -> bleach</li>
<li>pkgs/main/linux-64::notebook==5.7.8=py37_0 -> nbconvert -> bleach</li>
<li>pkgs/main/linux-64::widgetsnbextension==3.4.2=py37_0 -> notebook[version='>=4.4.1'] -> nbconvert -> bleach</li>
<li>pkgs/main/noarch::nbconvert==5.5.0=py_0 -> bleach</li>
</ul>
</blockquote>
<p>As multiple important packages are involved here, I am confused about what to do. I have an NVIDIA GTX 1070 Max-Q in my PC, so tensorflow-gpu should work fine. </p>
|
<p>You can install Tensorflow GPU using the below code:</p>
<ol>
<li><p>Create a New Virtual Environment</p>
<p><code>conda create -n tensorflow_gpu pip python=3.6</code></p></li>
<li><p>Activate the Virtual Environment</p>
<p><code>conda activate tensorflow_gpu</code></p></li>
<li><p>Install CUDA Toolkit using </p>
<p><code>conda install -c anaconda cudatoolkit</code></p></li>
<li><p>Install Tensorflow GPU</p>
<p><code>pip install --ignore-installed --upgrade tensorflow-gpu==1.15</code></p></li>
</ol>
<p>For more information, please refer this <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tensorflow-gpu" rel="nofollow noreferrer">Tensorflow Blog</a>.</p>
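<p>A quick sanity check after the installation (a minimal sketch for the TF 1.x build installed above) is to confirm that TensorFlow can actually see the GPU:</p>
<pre><code>import tensorflow as tf

# Should print True if the GPU build and the CUDA/cuDNN libraries are set up correctly
print(tf.test.is_gpu_available())

# Lists the devices TensorFlow can use, including the GTX 1070 Max-Q if detected
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>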
|
python|tensorflow|anaconda
| 0
|
377,391
| 56,571,746
|
I need some help for a keras image classifier project
|
<p>I've been working on this image classifier and it does not run. The training script works fine, but the script that predicts the image does not.</p>
<p>It consists of an image classifier based on these videos.</p>
<p><a href="https://www.youtube.com/watch?v=EAqb20_4Rdg&t=450s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=EAqb20_4Rdg&t=450s</a></p>
<p><a href="https://www.youtube.com/watch?v=FWz0N4FFL0U&t=53s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=FWz0N4FFL0U&t=53s</a></p>
<p>THIS IS THE TRAINING CODE (IT WORKS PERFECTLY)</p>
<pre class="lang-py prettyprint-override"><code>import sys # Lo vamos a hacer con TensorFLow pero dentro de este tenemos la posibilidad de usar Keras.
import os # Vamos a importar las librerias que nos van a permitir movernos entre los directorios de nuestro sistema.
import tensorflow as tf
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator # Libreria que nos va a pre-procesar las imagenes para entrenar a nuestro algoritmo:
from tensorflow.python.keras import optimizers # Importo los optimizadores:
from tensorflow.python.keras.models import Sequential # Importo la libreria que nos permite hacer redes neurnales secueniales (Cada una de las capas está ordenada):
from tensorflow.python.keras.layers import Dropout, Flatten, Dense, Activation
from tensorflow.python.keras.layers import Convolution2D, MaxPooling2D # Importo las capas en las cuales vamos a estar haciendo nuestras convoluciones y max-pooling:
from tensorflow.python.keras import backend as K # Importo la libreria que nos va a ayudar a que si hay una sesión de queras que está corriendo en segundo plano, matarlo y empezar de 0:
K.clear_session() # Empiezo una sesión nueva:
data_entrenamiento = r'C:\Users\JOSEA\Downloads\data\train' # La "r" al principio de la string la va a transformar en uns RAW string para que no detecte el comando especial "\".
data_validacion = r'C:\Users\JOSEA\Downloads\data\validation'
##Parametros:
epocas = 20 # Veces que vamos a iterar nuestro DataSet.
altura, longitud = 100, 100 # Tamaño al que vamos a procesar nuestras imágenes.
batch_size = 32 # Número de imágenes que vamos a enviar para procesar en cada uno de las iteraciones. (Evitar sobrecarga de RAM).
pasos = 1000 # Número de veces que se va a procesar la información en cada una de las épocas.
pasos_validacion = 200 # Al final de cada época se van a ejecutar 200 pasos con nuestro DataSet de validación (Comprobación).
filtrosConv1 = 32
filtrosConv2 = 64 # Número de filtros que va a haber en cada convolución. Profundidad de las capas ocultas.
tamano_filtro1 = (3,3)
tamano_filtro2 = (2,2) # Filtro(altura x longitud)
tamano_pool = (2,2) # Tamaño del filtro en MaxPooling
clases = 2 # "gato", "perro", "gorila"
lr = 0.0005 # Learning rate. Que tan grande va a ser el ajuste para acercarse a una solución óptima (numero pequeño)
##Pre-procesamiento de imagenes:
# Creo un generador al que le voy a indicar como preprocesar la información y después voy a hacer la transformación de las imágenenes.
# ----------------------------------------------------------------------------------------------------------------------
entrenamiento_datagen = ImageDataGenerator(
rescale = 1./255, # Transforma cada pixel de un RANGO de 0-255 a un rango de 0-1 (Downnscale)
shear_range = 0.3, # Va a INCLINAR y rotar cada imagen para que el algoritmo aprenda como es el objeto desde todas las perspectivas
zoom_range = 0.3, # Va a hacer ZOOM a cada imagen para entrenar al algoritmo de forma alterna.
horizontal_flip = True # Va a INVERTIR la imagen.
)
validacion_datagen = ImageDataGenerator(
rescale = 1./255 # Solo REESCALO estas imágenes de test para comparar con los resultados de entrenamiento.
)
# Creo las dos variables que van a contener a las imágenes procesadas de Training y Testing.
# ----------------------------------------------------------------------------------------------------------------------
imagen_entrenamiento = entrenamiento_datagen.flow_from_directory(
data_entrenamiento, # Va a entrar al directorio "data".
target_size = (altura, longitud), # Va a preprocesar todas las imágenes que se encuentre a una altura y longitud (definidas arriba).
batch_size = batch_size, # Va a tomar una cantidad de 32 imágenes para cada iteración.
class_mode = 'categorical' # La clasificación va a ser categórica ["perro","gato","gorila"]
)
imagen_validacion = validacion_datagen.flow_from_directory(
data_validacion,
target_size = (altura, longitud),
batch_size = batch_size,
class_mode = 'categorical'
)
# Creo nuestra red neuronal convolucional:
#-----------------------------------------------------------------------------------------------------------------------
cnn = Sequential() # Le vamos a decir que la Red sea seuencial, es decir, varias capas apiladas entre ellas.
cnn.add(Convolution2D(filtrosConv1, tamano_filtro1, padding='same', input_shape=(altura, longitud,3), activation='relu')) # Convolución(Filtros, Tamaño_Filtro, Filtro_esquinas, Tamaño_Imagen, Función_de_activación=RELU).
cnn.add(MaxPooling2D(pool_size = tamano_pool)) # Después de la capa de Convolución vamos a tener una capa de MaxPooling que va a tener un tamaño de (2,2) pixels.
cnn.add(Convolution2D(filtrosConv2, tamano_filtro2, padding='same', activation='relu')) # Convolución(Filtros, Tamaño_Filtro, Filtro_esquinas, Función_de_activación=RELU).
cnn.add(MaxPooling2D(pool_size = tamano_pool)) # Después de la capa de Convolución vamos a tener una capa de MaxPooling que va a tener un tamaño de (2,2) pixels.
cnn.add(Flatten()) # Vamos a transformar la imagen que es muy profunda y pequeña a una muy grando plana de una sola dimensión.
cnn.add(Dense(256, activation = 'relu')) # Añade una capa de 256 neuronas donde van a estar todas las neuronas de las capas anteriores. Función de activación es relu.
cnn.add(Dropout(0.5)) # Le voy a "apagar" el 50% de las neuronas para que no esté demasiado ajustado. No quiero que encuentre un único camino para "perros" sino que encuentre varios.
cnn.add(Dense(clases, activation = 'softmax'))
cnn.compile(loss='categorical_crossentropy', optimizer=optimizers.Adam(lr=lr), metrics=['accuracy'])
cnn.fit_generator(imagen_entrenamiento, steps_per_epoch=pasos, epochs=epocas, validation_data=imagen_validacion, validation_steps=pasos_validacion)
dir = r'C:\Users\JOSEA\Downloads\data\modelos'
if not os.path.exists(dir):
os.mkdir(dir)
cnn.save(r'C:\Users\JOSEA\Downloads\data\modelos\modelo.h5')
cnn.save_weights(r'C:\Users\JOSEA\Downloads\data\modelos\pesos.h5')
</code></pre>
<p>THIS IS THE VALIDATION CODE (IT DOESN'T RUN)</p>
<p>ERROR ON THE BOTTOM</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import tensorflow as tf
import numpy as np
import keras
from keras.preprocessing.image import load_img, img_to_array
from keras.models import load_model
longitud, altura = 150, 150
modelo=r'C:\Users\JOSEA\Downloads\data\modelos\modelo.h5'
pesos=r'C:\Users\JOSEA\Downloads\data\modelos\pesos.h5'
cnn = load_model(modelo)
cnn.load_weights(pesos)
def predict(file):
x = load_img(file, target_size=(longitud, altura)) # Le paso a la x nuestro valor de longitud y altura de la imagen.
x = img_to_array(x) # Convierto la imagen en un array de valores.
x = np.expand_dims(x, axis=0) # En el eje 0 (primera dimensión del array) quiero que me añada otra dimensión para procesar la imagen sin problema.
arreglo = cnn.predict(x) ## [[1, 0, 0]] # Llamo a la red neuronal para haga una predicción y nos da un array de dos dimensiones tal que #[[1, 0, 0]] de la que solo se va a tomar el 1 como acierto.
resultado = arreglo[0] ## [[0, 0, 1]] # Solo me interesa la primera dimensión como resultado.
respuesta = np.argmax(resultado) ## 2 # Va a tomar como output la posición en el vector del valor mas alto.
if respuesta == 0:
print('perro')
if respuesta == 1:
print('gato')
return respuesta
predict(r'C:\Users\JOSEA\Downloads\data\Extra_Dataset\cat.1440')
</code></pre>
<p>Error:</p>
<pre><code>C:\Users\JOSEA\Anaconda3\envs\Ajedrez\python.exe C:/Users/JOSEA/PycharmProjects/Image_Classifier_TF/predicciones.py
Using TensorFlow backend.
Traceback (most recent call last):
File "C:/Users/JOSEA/PycharmProjects/Image_Classifier_TF/predicciones.py", line 12, in <module>
cnn = load_model(modelo)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\engine\saving.py", line 225, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\engine\saving.py", line 458, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize
printable_module_name='layer')
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\utils\generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\engine\sequential.py", line 300, in from_config
custom_objects=custom_objects)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize
printable_module_name='layer')
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\utils\generic_utils.py", line 147, in deserialize_keras_object
return cls.from_config(config['config'])
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\engine\base_layer.py", line 1109, in from_config
return cls(**config)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\layers\convolutional.py", line 490, in __init__
**kwargs)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\layers\convolutional.py", line 117, in __init__
self.kernel_initializer = initializers.get(kernel_initializer)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\initializers.py", line 508, in get
return deserialize(identifier)
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\initializers.py", line 503, in deserialize
printable_module_name='initializer')
File "C:\Users\JOSEA\Anaconda3\envs\Ajedrez\lib\site-packages\keras\utils\generic_utils.py", line 138, in deserialize_keras_object
': ' + class_name)
ValueError: Unknown initializer: GlorotUniform
Process finished with exit code 1
</code></pre>
|
<p>You are mixing <code>keras</code> and <code>tf.keras</code> by training your model using <code>tf.keras</code> and then loading it in <code>keras</code>. This won't work because both frameworks are not compatible in that way.</p>
<p>Choose one implementation and use it completely, do not mix them.</p>
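<p>For example, since the model was saved with <code>tf.keras</code>, a minimal sketch of the prediction script's imports using only <code>tf.keras</code> (same file paths as in the question) would be:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import load_model

cnn = load_model(r'C:\Users\JOSEA\Downloads\data\modelos\modelo.h5')
cnn.load_weights(r'C:\Users\JOSEA\Downloads\data\modelos\pesos.h5')
</code></pre>
<p>Alternatively, retrain and save the model with plain <code>keras</code> imports and keep the prediction script as it is.</p>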
|
python|image|tensorflow|keras|classification
| 1
|
377,392
| 56,863,821
|
For loops to iterate through two lists in an sql query, one of these lists is made up of smaller lists
|
<p>I have large databases for which I am using an SQL query in Python to write the data to CSV files. In the SQL database each row is a series of spatial information for a finger ID. I can parameterize the query to get the information and write the files I need for each finger. However, the problem arises in creating a working for loop that iterates over each ID for all the indexes in the list.</p>
<pre><code>INDEX = ([44,48,50,55,56,57], [49,54,57,61,62,64])
FINGER = ('rt100', 'rt101')
d = {}
newdf = {}
for Y in FINGER:
for X in INDEX:
for x in X:
d[x] = pd.read_sql ("SELECT x,y, CAST( (direction*180/3.142)as INT),CAST(quality*100 as INT) from UTS_7_fingerprints where finger like ? and ind = ?", conn, params=(Y,x))
newdf[Y] = pd.concat(d)
</code></pre>
<p>The script above runs the sql query and creates a dictionary of concatenated dataframes successfully. However for each FINGER value it is iterating over the entire INDEX list.</p>
<p>Looking like this:</p>
<pre><code>{'rt100': finger ind ... CAST( (direction*180/3.142)as INT) CAST(quality*100 as INT)
44 0 rt100 44 ... 281 93
48 0 rt100 48 ... 303 32
49 0 rt100 49 ... 281 13
50 0 rt100 50 ... 123 82
54 0 rt100 54 ... 281 14
55 0 rt100 55 ... 314 67
56 0 rt100 56 ... 123 88
57 0 rt100 57 ... 314 71
61 0 rt100 61 ... 326 11
</code></pre>
<p>This is an example for one of the FINGER values. I need it to iterate over only [44,48,50,55,56,57] for 'rt100' and [49,54,57,61,62,64] for 'rt101'. Currently it is iterating through all the values within INDEX.
In reality I have many more similar correspondences, hence the need for a query that takes these parameters.</p>
<p>To be more specific, I'm looking for a way to restrict how this loop runs in order to write each query for each FINGER and its matching INDEX to separate .csv files that look like this:</p>
<pre><code>372,402,281,83
394,303,303,97
415,422,123,86
458,328,292,95
464,487,112,96
483,389,303,95
</code></pre>
<p>Where each line is information:</p>
<pre><code>'x,y, CAST( (direction*180/3.142)as INT),CAST(quality*100 as INT'
</code></pre>
<p>for each INDEX within each FINGER.</p>
|
<p>The problem is here:</p>
<blockquote>
<p>However for each FINGER value it is iterating over the entire INDEX list</p>
</blockquote>
<p>And it is caused by these loops:</p>
<pre><code>for Y in FINGER:
for X in INDEX:
# whatever
</code></pre>
<p>In this case <code>whatever</code> will be executed for all combinations of values from <code>FINGER</code> and <code>INDEX</code>. If each has <code>N</code> values the <code>whatever</code> line is executed <code>N*N</code> times.</p>
<p>But you want it to only execute for some of them, namely given <code>FINGER = [f1, f2, ..., fN]</code> and <code>INDEX = [i1, i2, ..., iN]</code> there should be exactly <code>N</code> iterations for values <code>(f1, i1)</code>, <code>(f2, i2)</code>, ... , <code>(fN, iN)</code>.</p>
<p>To do this change the loop:</p>
<pre><code>for Y, X in zip(FINGER, INDEX):
# whatever
</code></pre>
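<p>Applied to the code in the question, a sketch (it assumes one output CSV per finger, named after the finger value) would look like this:</p>
<pre><code>for finger, index_list in zip(FINGER, INDEX):
    frames = []
    for ind in index_list:
        frames.append(pd.read_sql(
            "SELECT x, y, CAST((direction*180/3.142) as INT), CAST(quality*100 as INT) "
            "FROM UTS_7_fingerprints WHERE finger LIKE ? AND ind = ?",
            conn, params=(finger, ind)))
    # One CSV per finger, containing only the rows for that finger's own indexes
    pd.concat(frames).to_csv('{}.csv'.format(finger), header=False, index=False)
</code></pre>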
|
python|pandas|python-2.7|sqlite
| 0
|
377,393
| 56,813,078
|
Fill column with conditional mode of another column
|
<p>Given the table below, I'd like to fill in the 'Color Guess' column with the mode of the 'Color' column, conditional on 'Type' and 'Size' and ignoring NULL, #N/A, etc.</p>
<p>For example, what's the most common color for SMALL CATS, what's the most common color for MEDIUM DOGS, etc.</p>
<blockquote>
<pre><code>Type Size Color Color Guess
Cat small brown
Dog small black
Dog large black
Cat medium white
Cat medium #N/A
Dog large brown
Cat large white
Cat large #N/A
Dog large brown
Dog medium #N/A
Cat small #N/A
Dog small white
Dog small black
Dog small brown
Dog medium white
Dog medium #N/A
Cat large brown
Dog small white
Dog large #N/A
</code></pre>
</blockquote>
|
<p>As BarMar already stated in the comments, we can use <code>pd.Series.mode</code> here from the linked answer. The only trick is that we have to use <code>groupby.transform</code>, since we want the data back in the same shape as your dataframe:</p>
<pre><code>df['Color Guess'] = df.groupby(['Type', 'Size'])['Color'].transform(lambda x: pd.Series.mode(x)[0])
</code></pre>
<hr>
<pre><code> Type Size Color Color Guess
0 Cat small brown brown
1 Dog small black black
2 Dog large black brown
3 Cat medium white white
4 Cat medium NaN white
5 Dog large brown brown
6 Cat large white brown
7 Cat large NaN brown
8 Dog large brown brown
9 Dog medium NaN white
10 Cat small NaN brown
11 Dog small white black
12 Dog small black black
13 Dog small brown black
14 Dog medium white white
15 Dog medium NaN white
16 Cat large brown brown
17 Dog small white black
18 Dog large NaN brown
</code></pre>
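<p>Note that <code>pd.Series.mode</code> ignores NaN by default, so the '#N/A' entries only need to be parsed as NaN (pandas' <code>read_csv</code> treats '#N/A' as missing by default) to be excluded from the mode. A quick check, as a sketch:</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series(['brown', np.nan, 'brown', 'white'])
print(pd.Series.mode(s))  # 'brown'; the NaN is ignored
</code></pre>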
|
python|pandas
| 5
|
377,394
| 56,860,202
|
Converting numpy64 objects to Pandas datetime
|
<p>Question is pretty self-explanatory. I am finding that <code>pd.to_datetime</code> isn't changing anything about the object type, and using <code>pd.Timestamp()</code> directly is bombing out.</p>
<p>Before this is marked a duplicate of <a href="https://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64">Converting between datetime, Timestamp and datetime64</a>, I am struggling to change an entire column of a dataframe, not just one datetime object. Perhaps that was in the article but I didn't see it in the top answer.</p>
<p>I will add that my error occurs when I try to get unique values from the dataframe's column. Is using unique converting the dtype to something unwanted?</p>
|
<p>The method you mentioned, <code>pandas.to_datetime()</code>, works on scalars, Series and whole DataFrames if you need, so:</p>
<pre><code>dataFrame['column_date_converted'] = pd.to_datetime(dataFrame['column_to_convert'])
</code></pre>
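<p>Regarding <code>unique()</code>: <code>Series.unique()</code> on a datetime column returns a plain NumPy array whose elements are <code>numpy.datetime64</code>, not pandas Timestamps, which may be the source of the error. A small sketch (with made-up column names) illustrating this:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'column_to_convert': ['2019-07-01', '2019-07-02', '2019-07-01']})
df['column_date_converted'] = pd.to_datetime(df['column_to_convert'])

u = df['column_date_converted'].unique()
print(type(u[0]))                  # numpy.datetime64
print(type(pd.to_datetime(u)[0]))  # pandas Timestamp again
</code></pre>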
|
python|pandas|numpy
| 1
|
377,395
| 56,474,361
|
Eigen decomposition of two square matrix in python
|
<p>In MATLAB we have the option to compute the eigendecomposition of two matrices, regardless of whether their product is symmetric or non-symmetric, such as:</p>
<pre><code>A = [1 3; 4 9];
B = [4 7; 9 16];
[Vec,Val]=eig(A,B)
</code></pre>
<p>The eigenvectors are</p>
<pre><code>`[-1,-1;0.54,0.85]`
</code></pre>
<p>and the eigenvalues are</p>
<pre><code>[-3.79,0;0,0.79]
</code></pre>
<p>I have checked <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigvals.html" rel="nofollow noreferrer">numpy.linalg</a> in Python, but there is no such option. All of the eig variations accept only one parameter. Is there a way to deal with this in Python?</p>
|
<p>You can use <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.eig.html" rel="nofollow noreferrer">scipy.linalg.eig</a>: </p>
<pre><code>from scipy import linalg
linalg.eig(A, B)
</code></pre>
<p>where <code>A = [[1,3],[4,9]]</code> and <code>B = [[4,7], [9,16]]</code> are your two matrices.</p>
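<p>A self-contained sketch reproducing the MATLAB call from the question:</p>
<pre><code>import numpy as np
from scipy import linalg

A = np.array([[1, 3], [4, 9]])
B = np.array([[4, 7], [9, 16]])

# Generalized eigenvalue problem A v = lambda B v
vals, vecs = linalg.eig(A, B)
print(vals)  # approximately [-3.79, 0.79], returned as complex numbers
print(vecs)  # columns are the generalized eigenvectors
</code></pre>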
|
python|numpy|eigenvalue|eigenvector
| 1
|
377,396
| 56,607,794
|
tf.set_random_seed doesn't seem to set the seed correctly
|
<p>I encountered a problem where <code>tf.set_random_seed</code> is unable to generate repeatable values when programming with TensorFlow in Python. To be specific: </p>
<pre><code>import tensorflow as tf
sd = 1
tf.set_random_seed(seed = sd)
tf.reset_default_graph()
sess = tf.InteractiveSession()
print(sess.run(tf.random_normal(shape=[1], mean=0, stddev=1)))
</code></pre>
<p>The code above outputs <code>[1.3086201]</code>. Then I ran the whole piece of code again, and it doesn't output the expected value <code>[1.3086201]</code> but gives a new value, <code>[-2.1209881]</code>. </p>
<p>Why would this happen, and how do I set a TensorFlow seed correctly?</p>
|
<p><a href="https://www.tensorflow.org/api_docs/python/tf/random/set_random_seed" rel="nofollow noreferrer">According to the docs</a>, there are two types of seeds you can set when defining graph operations:</p>
<ol>
<li>The graph-level seed, which is set by <code>tf.set_random_seed</code>, and </li>
<li>operation-level seeds which are placed in a variable initializer</li>
</ol>
<p>Tensorflow then uses an elaborate set of rules (see the docs) to generate unique values for operations that rely on a seed. You can reproduce random output with a graph-level seed once you've initialized the variable, but <strong>subsequent re-initializations of that variable will usually yield different values.</strong></p>
<p>In your code, you are re-initializing <code>tf.random_normal</code> each time you run your code, so it is different.</p>
<hr>
<p>If you want <code>tf.random_normal</code> to generate the same unique sequence, no matter how many times it is re-initialized, you should <strong>set the operation-level seed instead</strong>,</p>
<pre><code>sess = tf.InteractiveSession()
print(sess.run(tf.random_normal(shape=[1], mean=0, stddev=1, seed=1)))
# -0.8113182
print(sess.run(tf.random_normal(shape=[1], mean=0, stddev=1, seed=1)))
# -0.8113182
</code></pre>
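<p>Note also that in the snippet from the question <code>tf.reset_default_graph()</code> is called <em>after</em> <code>tf.set_random_seed()</code>, which discards the graph-level seed. If you prefer the graph-level approach, setting the seed after resetting the graph should make the whole script reproducible across runs as well. A minimal sketch:</p>
<pre><code>import tensorflow as tf

tf.reset_default_graph()   # build a fresh graph first
tf.set_random_seed(1)      # then attach the graph-level seed to it

sess = tf.InteractiveSession()
print(sess.run(tf.random_normal(shape=[1], mean=0, stddev=1)))
# Re-running the whole script prints the same value each time
</code></pre>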
|
python|tensorflow|random
| 2
|
377,397
| 56,571,609
|
Downsample numpy array while preserving distribution
|
<p>I'm trying to write a function that can randomly sample a <code>numpy.ndarray</code> that has floating point numbers while preserving the distribution of the numbers in the array. I have this function for now:</p>
<pre><code>import random
from collections import Counter
def sample(A, N):
population = np.zeros(sum(A))
counter = 0
for i, x in enumerate(A):
for j in range(x):
population[counter] = i
counter += 1
sampling = population[np.random.choice(0, len(population), N)]
return np.histogram(sampling, bins = np.arange(len(A)+1))[0]
</code></pre>
<p>So I would like the function to work something like this (this example doesn't account for the distribution):</p>
<pre><code>a = np.array([1.94, 5.68, 2.77, 7.39, 2.51])
new_a = sample(a,3)
new_a
array([1.94, 2.77, 7.39])
</code></pre>
<p>However, when I apply the function to an array like this I'm getting:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-74-07e3aa976da4> in <module>
----> 1 sample(a, 3)
<ipython-input-63-2d69398e2a22> in sample(A, N)
3
4 def sample(A, N):
----> 5 population = np.zeros(sum(A))
6 counter = 0
7 for i, x in enumerate(A):
TypeError: 'numpy.float64' object cannot be interpreted as an integer
</code></pre>
<p>Any help on modifying this function, or creating one that works for this, would be really appreciated!</p>
|
<pre><code>In [67]: a = np.array([1.94, 5.68, 2.77, 7.39, 2.51])
In [68]: np.zeros(sum(a))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-68-263779bc977b> in <module>
----> 1 np.zeros(sum(a))
TypeError: 'numpy.float64' object cannot be interpreted as an integer
</code></pre>
<p>sum on the shape does not produce this error:</p>
<pre><code>In [69]: np.zeros(sum(a.shape))
Out[69]: array([0., 0., 0., 0., 0.])
</code></pre>
<p>But you shouldn't need to use sum:</p>
<pre><code>In [70]: a.shape
Out[70]: (5,)
In [71]: np.zeros(a.shape)
Out[71]: array([0., 0., 0., 0., 0.])
</code></pre>
<p>In fact if <code>a</code> is 2d, and you want a 1d array with the same number of items, you want the product of the shape, not the sum.</p>
<p>But do you want to return an array exactly the same size as <code>A</code>? I thought you were trying to downsize.</p>
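<p>If the goal is simply to draw N values from the array so that their empirical distribution matches the original, a minimal sketch (assuming sampling without replacement is acceptable) is to let <code>np.random.choice</code> pick directly from the values:</p>
<pre><code>import numpy as np

a = np.array([1.94, 5.68, 2.77, 7.39, 2.51])

def sample(A, N):
    # A uniform draw without replacement preserves the empirical distribution in expectation
    return np.random.choice(A, size=N, replace=False)

print(sample(a, 3))  # e.g. [1.94 2.77 7.39]
</code></pre>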
|
python|python-3.x|numpy|random|probability
| 1
|
377,398
| 56,675,101
|
Python pandas: Setting index value of dataframe to another dataframe as a column using multiple column conditions
|
<p>I have two dataframes: <code>data_df</code> and <code>geo_dimension_df</code>. </p>
<p>I would like to take the index of <code>geo_dimension_df</code>, which I renamed to <code>id</code>, and make it a column on <code>data_df</code> called <code>geo_id</code>.</p>
<p>I'll be inserting both of these dataframes as tables into a database, and the <code>id</code> columns will be their primary keys while <code>geo_id</code> is a foreign key that will link <code>data_df</code> to <code>geo_dimension_df</code>.</p>
<p>As can be seen, the <code>cbsa</code> and <code>name</code> values can change over time. (Yuba City, CA -> Yuba City-Marysville, CA). Therefore, the <code>geo_dimension_df</code> is all the unique combinations of <code>cbsa</code> and <code>name</code>.</p>
<p>I need to compare the <code>cbsa</code> and <code>name</code> values on both dataframes and then when matching set <code>geo_dimension_df.id</code> as the <code>data_df.geo_id</code> value.</p>
<p>I tried using <code>merge</code> for a bit, but got confused, so now I'm trying with <code>apply</code> and looking at it like an Excel vlookup across multiple column values, but having no luck. The following is my attempt, but it's a bit gibberish...</p>
<pre><code>data_df['geo_id'] = data_df[['cbsa', 'name']]
.apply(
lambda x, y:
geo_dimension_df
.index[geo_dimension_df[['cbsa', 'name]]
.to_list()
== [x,y])
</code></pre>
<p>Below are the two original dataframes followed by the desired result. Thank you.</p>
<p>geo_dimension_df:</p>
<pre><code> cbsa name
id
1 10180 Abilene, TX
2 10420 Akron, OH
3 10500 Albany, GA
4 10540 Albany, OR
5 10540 Albany-Lebanon, OR
...
519 49620 York-Hanover, PA
520 49660 Youngstown-Warren-Boardman, OH-PA
521 49700 Yuba City, CA
522 49700 Yuba City-Marysville, CA
523 49740 Yuma, AZ
</code></pre>
<p>data_df:</p>
<pre><code> cbsa name month year units_total
id
1 10180 Abilene, TX 1 2004 22
2 10180 Abilene, TX 2 2004 12
3 10180 Abilene, TX 3 2004 44
4 10180 Abilene, TX 4 2004 32
5 10180 Abilene, TX 5 2004 21
...
67145 49740 Yuma, AZ 12 2018 68
67146 49740 Yuma, AZ 1 2019 86
67147 49740 Yuma, AZ 2 2019 99
67148 49740 Yuma, AZ 3 2019 99
67149 49740 Yuma, AZ 4 2019 94
</code></pre>
<p>Desired Outcome:
<br>
data_df (with geo_id foreign key column added):</p>
<pre><code> cbsa name month year units_total geo_id
id
1 10180 Abilene, TX 1 2004 22 1
2 10180 Abilene, TX 2 2004 12 1
3 10180 Abilene, TX 3 2004 44 1
4 10180 Abilene, TX 4 2004 32 1
5 10180 Abilene, TX 5 2004 21 1
...
67145 49740 Yuma, AZ 12 2018 68 523
67146 49740 Yuma, AZ 1 2019 86 523
67147 49740 Yuma, AZ 2 2019 99 523
67148 49740 Yuma, AZ 3 2019 99 523
67149 49740 Yuma, AZ 4 2019 94 523
</code></pre>
<p>Note: I'll be dropping <code>cbsa</code> and <code>name</code> from <code>data_df</code> after this, in case anybody is curious as to why I'm duplicating data.</p>
|
<p>First, because the index is not a proper column, make it a column so that it can be used in a later <code>merge</code>:</p>
<pre><code>geo_dimension_df['geo_id'] = geo_dimension_df.index
</code></pre>
<p>Next, join <code>data_df</code> and <code>geo_dimension_df</code></p>
<pre><code>data_df = pd.merge(data_df,
                   geo_dimension_df[['cbsa', 'name', 'geo_id']],
on=['cbsa', 'name'],
how='left')
</code></pre>
<p>Finally, drop the column you added to the <code>geo_dimension_df</code> at the start:</p>
<pre><code>geo_dimension_df.drop('geo_id', axis=1, inplace=True)
</code></pre>
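<p>A variant that avoids temporarily mutating <code>geo_dimension_df</code> (a sketch assuming its index is named <code>id</code>, as shown above) is to merge on <code>reset_index()</code> directly:</p>
<pre><code>data_df = pd.merge(data_df,
                   geo_dimension_df.reset_index().rename(columns={'id': 'geo_id'}),
                   on=['cbsa', 'name'],
                   how='left')
</code></pre>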
<p>After doing this, <code>geo_dimension_df</code>'s index column, <code>id</code>, will now appear on <code>data_df</code> under the column <code>geo_id</code>:</p>
<p>data_df:</p>
<pre><code> cbsa name month year units_total geo_id
id
1 10180 Abilene, TX 1 2004 22 1
2 10180 Abilene, TX 2 2004 12 1
3 10180 Abilene, TX 3 2004 44 1
4 10180 Abilene, TX 4 2004 32 1
5 10180 Abilene, TX 5 2004 21 1
...
67145 49740 Yuma, AZ 12 2018 68 523
67146 49740 Yuma, AZ 1 2019 86 523
67147 49740 Yuma, AZ 2 2019 99 523
67148 49740 Yuma, AZ 3 2019 99 523
67149 49740 Yuma, AZ 4 2019 94 523
</code></pre>
|
python|python-3.x|pandas
| 1
|
377,399
| 56,712,422
|
How to generate every combination of a given pattern in numpy array?
|
<p>I have a constraint on the run length, based on which I need to generate every possible combination as the rows of a numpy array.</p>
<pre><code> length = 12
x >= 4 , x <= 7
Solution:
array([[0,0,0,0,0,0,0,0,1,1,1,1],
[0,0,0,1,1,1,1,1,1,0,0,0],
..... <every possible combination>
])
## I tried the below way but I am not sure how to obtain the desired result
np.tril(np.ones((12,12),int))
</code></pre>
<p>The sum of 1s within an array should be between 4 and 7. The length of the one-dimensional array or list should be 12, and the 1s should not be disjoint, i.e. [0,1,0,1,1,1,0,0,0,0,0,0] is not valid since the run of 1s is interrupted by a 0. This one is valid: [0,1,1,1,1,1,0,0,0,0,0,0]. </p>
<p>I need to do this in the most efficient manner. Can someone please guide me? Thanks.</p>
|
<p>I don't know of any special function, but test(...), below, runs in 149 µs on my machine. If you use the result a lot, save it and copy from it as required.</p>
<pre><code>def n_ones_in_len( n_ones, length ):
""" Returns a diagonal with n ones offset by one column in each row. """
n_rows = length - n_ones + 1
res = np.zeros((n_rows, length), dtype = np.int)
for start in range(n_rows):
res[ start, start : start + n_ones] = 1
return res
n_ones_in_len(4,12)
Out[5]:
array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])
</code></pre>
<p>Use this function to define a function to generate all the required quantities of ones.</p>
<pre><code>def test(lo, hi, length):
""" Returns a numpy array with diagonals of lo to hi-1 ones in rows of length columns """
res = np.empty((0,length), dtype = np.int) # Initialise res
for ones in range(lo, hi):
        res = np.vstack((res, n_ones_in_len(ones, length)))
# Stack the new results to the res array
return res
test(4, 8, 12) # Note half open range.
Out[8]:
array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0],
...
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]])
</code></pre>
<p>There will be other ways of doing this which may be faster but this should be reasonably easy to follow.</p>
|
python|python-3.x|numpy
| 0
|