| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
5,900
| 4,092,927
|
Generating movie from python without saving individual frames to files
|
<p>I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie.</p>
<p>In examples on the web [eg. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated from matplotlib and pipe it directly to ffmpeg, generating no intermediate files? </p>
<p>Programming with ffmpeg's C-api is too difficult for me [eg. 2]. Also, I need an encoding that has good compression such as x264 as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder/ffmpeg/x264.</p>
<p>Is there something that can be done with pipes [3]?</p>
<p>[1] <a href="http://matplotlib.sourceforge.net/examples/animation/movie_demo.html" rel="noreferrer">http://matplotlib.sourceforge.net/examples/animation/movie_demo.html</a></p>
<p>[2] <a href="https://stackoverflow.com/questions/2940671">How does one encode a series of images into H264 using the x264 C API?</a></p>
<p>[3] <a href="http://www.ffmpeg.org/ffmpeg-doc.html#SEC41" rel="noreferrer">http://www.ffmpeg.org/ffmpeg-doc.html#SEC41</a></p>
|
<p>This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the <code>MovieWriter</code> class and its sub-classes in the <code>animation</code> module. You also need to have <code>ffmpeg</code> installed in advance.</p>
<pre><code>import matplotlib.animation as animation
import numpy as np
from pylab import *

dpi = 100

def ani_frame():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(rand(300, 300), cmap='gray', interpolation='nearest')
    im.set_clim([0, 1])
    fig.set_size_inches([5, 5])

    tight_layout()

    def update_img(n):
        tmp = rand(300, 300)
        im.set_data(tmp)
        return im

    #legend(loc=0)
    ani = animation.FuncAnimation(fig, update_img, 300, interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4', writer=writer, dpi=dpi)
    return ani
</code></pre>
<p><a href="http://matplotlib.org/api/animation_api.html#animation" rel="noreferrer">Documentation for <code>animation</code></a></p>
|
python|numpy|ffmpeg|matplotlib|x264
| 57
|
5,901
| 8,708,758
|
Can I force a numpy ndarray to take ownership of its memory?
|
<p>I have a C function that mallocs() and populates a 2D array of floats. It "returns" that address and the size of the array. The signature is</p>
<pre><code>int get_array_c(float** addr, int* nrows, int* ncols);
</code></pre>
<p>I want to call it from Python, so I use ctypes.</p>
<pre><code>import ctypes
mylib = ctypes.cdll.LoadLibrary('mylib.so')
get_array_c = mylib.get_array_c
</code></pre>
<p>I never figured out how to specify argument types with ctypes. I tend to just write a python wrapper for each C function I'm using, and make sure I get the types right in the wrapper. The array of floats is a matrix in column-major order, and I'd like to get it as a numpy.ndarray. But it's pretty big, so I want to use the memory allocated by the C function, not copy it. (I just found this PyBuffer_FromMemory stuff in this StackOverflow answer: <a href="https://stackoverflow.com/a/4355701/3691">https://stackoverflow.com/a/4355701/3691</a>)</p>
<pre><code>buffer_from_memory = ctypes.pythonapi.PyBuffer_FromMemory
buffer_from_memory.restype = ctypes.py_object

import numpy

def get_array_py():
    nrows = ctypes.c_int()
    ncols = ctypes.c_int()
    addr_ptr = ctypes.POINTER(ctypes.c_float)()
    get_array_c(ctypes.byref(addr_ptr), ctypes.byref(nrows), ctypes.byref(ncols))
    # use .value to get plain ints out of the ctypes c_int objects
    buf = buffer_from_memory(addr_ptr, 4 * nrows.value * ncols.value)
    return numpy.ndarray((nrows.value, ncols.value), dtype=numpy.float32, order='F',
                         buffer=buf)
</code></pre>
<p>This seems to give me an array with the right values. But I'm pretty sure it's a memory leak.</p>
<pre><code>>>> a = get_array_py()
>>> a.flags.owndata
False
</code></pre>
<p>The array doesn't own the memory. Fair enough; by default, when the array is created from a buffer, it shouldn't. But in this case it should. When the numpy array is deleted, I'd really like python to free the buffer memory for me. It seems like if I could force owndata to True, that should do it, but owndata isn't settable.</p>
<p>Unsatisfactory solutions:</p>
<ol>
<li><p>Make the caller of get_array_py() responsible for freeing the memory. That's super annoying; the caller should be able to treat this numpy array just like any other numpy array.</p></li>
<li><p>Copy the original array into a new numpy array (with its own, separate memory) in get_array_py, delete the first array, and free the memory inside get_array_py(). Return the copy instead of the original array. This is annoying because it's an ought-to-be unnecessary memory copy.</p></li>
</ol>
<p>Is there a way to do what I want? I can't modify the C function itself, although I could add another C function to the library if that's helpful.</p>
|
<p>I just stumbled upon this question, which is still an issue in August 2013. Numpy is really picky about the <code>OWNDATA</code> flag: There is no way it can be modified on the Python level, so ctypes will most likely not be able to do this. On the numpy C-API level - and now we are talking about a completely different way of making Python extension modules - one has to explicitly set the flag with:</p>
<pre><code>PyArray_ENABLEFLAGS(arr, NPY_ARRAY_OWNDATA);
</code></pre>
<p>On numpy < 1.7, one had to be even more explicit:</p>
<pre><code>((PyArrayObject*)arr)->flags |= NPY_OWNDATA;
</code></pre>
<p>If one has any control over the underlying C function/library, the best solution is to pass it an empty numpy array of the appropriate size from Python to store the result in. The basic principle is that memory allocation should always be done on the highest level possible, in this case on the level of the Python interpreter.</p>
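<p>A minimal sketch of that pattern with ctypes (the function name here is hypothetical: it assumes a C function <code>void fill_array_c(float* buf, int nrows, int ncols)</code> that writes into caller-owned memory):</p>
<pre><code>import ctypes
import numpy

nrows, ncols = 300, 400
out = numpy.empty((nrows, ncols), dtype=numpy.float32, order='F')  # Python owns this
mylib.fill_array_c(out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
                   ctypes.c_int(nrows), ctypes.c_int(ncols))
# 'out' is freed by numpy's normal reference counting; no ownership juggling needed
</code></pre>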
<hr>
<p>As kynan commented below, if you use <code>Cython</code>, you have to expose the function <code>PyArray_ENABLEFLAGS</code> manually, see this post <a href="https://stackoverflow.com/questions/23872946/force-numpy-ndarray-to-take-ownership-of-its-memory-in-cython">Force NumPy ndarray to take ownership of its memory in Cython</a>.</p>
<p>The relevant documentation is <a href="https://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#c.PyArrayObject.flags" rel="nofollow noreferrer">here</a>
and <a href="https://docs.scipy.org/doc/numpy/reference/c-api.array.html#c.NPY_ARRAY_OWNDATA" rel="nofollow noreferrer">here</a>.</p>
|
python|c|numpy|free|ctypes
| 6
|
5,902
| 55,368,594
|
How to get indices list for rows starting with lower case letter?
|
<p>I have a dataframe with one of the columns being df['Names']. How can I locate all the rows whose names start with a lower case letter?</p>
<pre><code>col1 Names
1564 abby
2289 Barry
</code></pre>
<p>etc.</p>
<p>I'm trying to accomplish this using regex with no luck.</p>
|
<p>One way, using <code>str.lower</code>:</p>
<pre><code>df[df.Names.str[0]==df.Names.str[0].str.lower()]
Out[173]:
col1 Names
0 1564 abby
</code></pre>
<p>Another way, using <code>islower</code>:</p>
<pre><code>df[df.Names.str[0].str.islower()]
Out[174]:
col1 Names
0 1564 abby
</code></pre>
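<p>Since the question mentions regex, a pattern-based variant with <code>str.match</code> should give the same rows:</p>
<pre><code>df[df.Names.str.match(r'^[a-z]')]
   col1 Names
0  1564  abby
</code></pre>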
|
python|regex|pandas
| 2
|
5,903
| 56,841,702
|
How do I group a time series by hour of day?
|
<p>I have a time series and I want to group the rows by hour of day (regardless of date) and visualize these as boxplots. So I'd want 24 boxplots starting from hour 1, then hour 2, then hour 3 and so on.</p>
<p>The way I see this working is splitting the dataset up into 24 series (1 for each hour of the day), creating a boxplot for each series and then plotting this on the same axes.</p>
<p>The only way I can think of to do this is to manually select all the values between each hour; is there a faster way?</p>
<p>some sample data:</p>
<pre><code>Date Actual Consumption
2018-01-01 00:00:00 47.05
2018-01-01 00:15:00 46
2018-01-01 00:30:00 44
2018-01-01 00:45:00 45
2018-01-01 01:00:00 43.5
2018-01-01 01:15:00 43.5
2018-01-01 01:30:00 43
2018-01-01 01:45:00 42.5
2018-01-01 02:00:00 43
2018-01-01 02:15:00 42.5
2018-01-01 02:30:00 41
2018-01-01 02:45:00 42.5
2018-01-01 03:00:00 42.04
2018-01-01 03:15:00 41.96
2018-01-01 03:30:00 44
2018-01-01 03:45:00 44
2018-01-01 04:00:00 43.54
2018-01-01 04:15:00 43.46
2018-01-01 04:30:00 43.5
2018-01-01 04:45:00 43
2018-01-01 05:00:00 42.04
</code></pre>
<p>This is what i've tried so far:</p>
<pre class="lang-py prettyprint-override"><code>zero = df.between_time('00:00', '00:59')
one = df.between_time('01:00', '01:59')
two = df.between_time('02:00', '02:59')
</code></pre>
<p>and then I would plot a boxplot for each of these on the same axes. However it's very tedious to do this for all 24 hours in a day.</p>
<p>This is the kind of output I want:
<a href="https://www.researchgate.net/figure/Boxplot-of-the-NOx-data-by-hour-of-the-day_fig1_24054015" rel="nofollow noreferrer">https://www.researchgate.net/figure/Boxplot-of-the-NOx-data-by-hour-of-the-day_fig1_24054015</a></p>
|
<p>There are two steps to achieve this:</p>
<ol>
<li><p>Convert <code>Actual</code> to datetime:</p>
<pre><code>df.Actual = pd.to_datetime(df.Actual)
</code></pre></li>
<li><p>Group by the hour:</p>
<pre><code>df.groupby([df.Date, df.Actual.dt.hour+1]).Consumption.sum().reset_index()
</code></pre></li>
</ol>
<p>I assumed you wanted to sum the Consumption (if you want the mean or something else, just change it). One note: <code>hour+1</code> makes the hours start from 1 rather than 0 (remove it if you want 0 to be midnight).</p>
<p>desired result:</p>
<pre><code> Date Actual Consumption
0 2018-01-01 1 182.05
1 2018-01-01 2 172.50
2 2018-01-01 3 169.00
3 2018-01-01 4 172.00
4 2018-01-01 5 173.50
5 2018-01-01 6 42.04
</code></pre>
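<p>For the boxplots themselves (the original goal), pandas can group the plot directly by that hour. A short sketch, following the same column naming as above:</p>
<pre><code>df['hour'] = df.Actual.dt.hour + 1
df.boxplot(column='Consumption', by='hour')  # one box per hour of day
</code></pre>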
|
python|pandas|dataframe|time-series
| 2
|
5,904
| 56,633,010
|
CppFlow on windows 10
|
<p>I found this interesting project on Github (<a href="https://github.com/serizba/cppflow" rel="nofollow noreferrer">https://github.com/serizba/cppflow</a>), which is a C++ wrapper for the TensorFlow API written in C.</p>
<p>However, I have some issues when installing it.</p>
<p>I have installed a c++ compiler, but when I try to build it in visual studio, I get the following error:</p>
<p>cannot convert from 'T *' to 'std::vector>'</p>
<p>Does anybody have experience with this wrapper or recognizes the error message? My initial thought was that I am using the wrong C++ compiler.</p>
|
<p>Well I'm not enough of a language lawyer to know where the fault lies (the code or the compiler) but it's clear that the intent of the line of code causing the problems is this</p>
<pre><code>return std::vector<T>(T_data, T_data + size);
</code></pre>
<p>With that more old-fashioned style of code it compiles for me.</p>
|
c++|tensorflow
| 1
|
5,905
| 26,283,127
|
Removing numpy meshgrid points outside of a Shapely polygon
|
<p>I have a 10 x 10 grid from which I would like to remove the points that lie outside of a shapely Polygon:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Polygon, Point
from descartes import PolygonPatch

gridX, gridY = np.mgrid[0.0:10.0, 0.0:10.0]
poly = Polygon([[1,1],[1,7],[7,7],[7,1]])

#plot original figure
fig = plt.figure()
ax = fig.add_subplot(111)
polyp = PolygonPatch(poly)
ax.add_patch(polyp)
ax.scatter(gridX,gridY)
plt.show()
</code></pre>
<p>Here is the resulting figure:
<img src="https://i.stack.imgur.com/gtgDb.png" alt="original fig"></p>
<p>And what I want the end result to look like:
<img src="https://i.stack.imgur.com/qf2wf.png" alt="end"></p>
<p>I know that I can reshape the array to a 100 x 2 array of grid points:</p>
<pre><code>stacked = np.dstack([gridX,gridY])
reshaped = stacked.reshape(100,2)
</code></pre>
<p>I can see if the point lies within the polygon easily:</p>
<pre><code>for i in reshaped:
    if Point(i).within(poly):
        print True
</code></pre>
<p>But I am having trouble taking this information and modifying the original grid.</p>
|
<p>You're pretty close already; instead of printing True, you could just append the points to a list.</p>
<pre><code>output = []
for i in reshaped:
    if Point(i).within(poly):
        output.append(i)
output = np.array(output)
x, y = output[:, 0], output[:, 1]
</code></pre>
<p>It seems that <code>Point.within</code> doesn't consider points that lie on the edge of the polygon to be "within" it though.</p>
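<p>If points on the boundary should be kept, <code>intersects</code> includes them; a small variation on the loop above:</p>
<pre><code>output = np.array([i for i in reshaped if poly.intersects(Point(i))])
</code></pre>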
|
numpy|shapely
| 2
|
5,906
| 67,027,964
|
How to select a subset of rows in pandas with a certain starting value and certain ending value
|
<p>In pandas, it's possible to return subsets of rows like this:</p>
<p><code>df[:6]</code></p>
<p>which would with the dataset I'm using return:</p>
<pre><code>weekday CO_level ...
0 Monday Very high
1 Tuesday Low
2 Wednesday Low
3 Saturday Medium
4 Sunday High
5 Thursday Low
</code></pre>
<p>I did a bit of data cleaning and removed all rows w/ null values which resulted in the rows having some missing weekdays but I want to visualize the CO_level for one entire week Monday - Sunday.</p>
<p>My question is: how can I go through the rows and return the first instance or all instance (doesn't really matter) of 7 <em>consecutive</em> rows with Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday values?</p>
<p>So that it could look something like this:</p>
<pre><code>weekday CO_level ...
345 Monday Very high
346 Tuesday Low
347 Wednesday Low
348 Thursday Medium
349 Friday High
350 Saturday Low
351 Sunday Low
</code></pre>
|
<p>@Yefet's answer looks good. Here's a different approach:</p>
<pre><code>days = ['Monday',
        'Tuesday',
        'Wednesday',
        'Thursday',
        'Friday',
        'Saturday',
        'Sunday']

for i in range(len(df)):
    test_days = df['weekday'][i:i+7].to_list()
    if test_days == days:
        week_df = df.iloc[i:i+7, :]
        break
</code></pre>
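<p>A vectorized sketch of the same search, assuming numpy &gt;= 1.20 for <code>sliding_window_view</code>:</p>
<pre><code>import numpy as np

codes = pd.Categorical(df['weekday'], categories=days, ordered=True).codes
windows = np.lib.stride_tricks.sliding_window_view(codes, 7)
hits = np.where((windows == np.arange(7)).all(axis=1))[0]
if len(hits):
    week_df = df.iloc[hits[0]:hits[0] + 7]
</code></pre>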
|
python|pandas|dataframe
| 1
|
5,907
| 67,067,203
|
How to find difference between rows in a pandas multiIndex, by level 1
|
<p>Suppose we have a DataFrame like this, only with many, many more index A values:</p>
<pre><code>df = pd.DataFrame([[1,2,1,2],
[1,1,2,2],
[2,2,1,0],
[1,2,1,2],
[2,1,1,2] ], columns=['A','B','c1','c2'])
df.groupby(['A','B']).sum()
## result
c1 c2
A B
1 1 2 2
2 2 4
2 1 1 2
2 1 0
</code></pre>
<p>How can I get a data frame that consists of the difference between rows, by the second level of the index, level B?
The output here would be</p>
<pre><code>A c1 c2
1 0 -2
2 0 2
</code></pre>
<p><strong>Note</strong> In my particular use case, I have a lot of column A values, so I can't write out the value for A explicitly.</p>
|
<p>Check <code>diff</code> and <code>dropna</code>:</p>
<pre><code>g = df.groupby(['A','B'])[['c1','c2']].sum()
g = g.groupby(level=0).diff().dropna()
g
Out[25]:
c1 c2
A B
1 2 0.0 2.0
2 2 0.0 -2.0
</code></pre>
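<p>If you also want to drop the leftover <code>B</code> level so only <code>A</code> remains (as in the requested output), on pandas &gt;= 0.24 you can do:</p>
<pre><code>g = g.droplevel('B')
</code></pre>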
|
pandas
| 0
|
5,908
| 66,953,754
|
How to insert column from a file to another file at multiple places
|
<p>I would like to insert columns no. 1 and 2 from file no. 2 into file no. 1 after every second column and till the last column.</p>
<p>File1.txt (tab-separated, column range from 1-2400 and cell range from 1-4500)</p>
<pre><code> ID IMPACT ID IMPACT ID IMPACT
51 0.288 128 0.4557 156 0.85
625 0.858 15 -0.589 51 0.96
8 0.845 7 0.5891
</code></pre>
<p>File2.txt (consists of only two tab-separated columns with 19000 rows)</p>
<pre><code> ID IMPACT
18 -1
165 -1
41 -1
11 -1
</code></pre>
<p>Output file</p>
<pre><code> ID IMPACT ID IMPACT ID IMPACT ID IMPACT ID IMPACT ID IMPACT
51 0.288 18 -1 128 0.4557 18 -1 156 0.85 18 -1
625 0.858 165 -1 15 -0.589 165 -1 51 0.96 165 -1
8 0.845 41 -1 7 0.5891 41 -1 41 -1
11 -1 11 -1 11 -1
</code></pre>
<p>I tried the below commands but it's not working</p>
<pre><code>paste <(cut -f 1,2 File1.txt) <(cut -f 1,2 File2.txt) <(cut -f 3,4 File1.txt) <(cut -f 1,2 File2.txt)......... > File3
</code></pre>
<p>Prob: It starts shifting the File2.txt column values into different columns after the highest cell of File1.txt</p>
<pre><code>paste File1.txt File2.txt > File3.txt
awk '{print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $5 "\t" $6 "\t" $3 "\t" $4....}' File3.txt > File4.txt
</code></pre>
<p>This does the job; however, it mixes up the values of File1.txt from one column to another.
I tried everything but did not succeed.
Any help would be appreciated; however, bash or pandas would be better. Thanks in advance.</p>
|
<pre><code>$ awk '
BEGIN {
    FS=OFS="\t"                      # tab-separated data
}
NR==FNR {                            # hash fields of file2
    a[FNR]=$1                        # index with record numbers FNR
    b[FNR]=$2
    next
}
{                                    # print file1 records with file2 fields
    print $1,$2,a[FNR],b[FNR],$3,$4,a[FNR],b[FNR],$5,$6,a[FNR],b[FNR]
}
END {                                # in the end
    for(i=(FNR+1);(i in a);i++)      # deal with extra records of file2
        print "","",a[i],b[i],"","",a[i],b[i],"","",a[i],b[i]
}' file2 file1
</code></pre>
<p>Output:</p>
<pre><code>ID IMPACT ID IMPACT ID IMPACT ID IMPACT ID IMPACT ID IMPACT
51 0.288 18 -1 128 0.4557 18 -1 156 0.85 18 -1
625 0.858 165 -1 15 -0.589 165 -1 51 0.96 165 -1
8 0.845 41 -1 7 0.5891 41 -1 41 -1
11 -1 11 -1 11 -1
</code></pre>
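<p>Since the question says pandas would also be fine, here is a rough pandas sketch of the same interleaving. Note two assumptions: <code>concat</code> aligns on the row index, so File2's extra rows come out with blank File1 cells, and pandas renames duplicate headers on read (the second <code>ID</code> becomes <code>ID.1</code>), so the header row may need fixing afterwards:</p>
<pre><code>import pandas as pd

f1 = pd.read_csv('File1.txt', sep='\t')
f2 = pd.read_csv('File2.txt', sep='\t')

# insert file2's two columns after every ID/IMPACT pair of file1
pieces = []
for i in range(0, f1.shape[1], 2):
    pieces.append(f1.iloc[:, i:i+2])
    pieces.append(f2)
pd.concat(pieces, axis=1).to_csv('File3.txt', sep='\t', index=False)
</code></pre>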
|
pandas|bash
| 2
|
5,909
| 66,959,215
|
Drop duplicates with condition
|
<p>I have the following pandas dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>so_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>390</td>
</tr>
<tr>
<td>10</td>
<td>395</td>
</tr>
<tr>
<td>10</td>
<td>405</td>
</tr>
<tr>
<td>11</td>
<td>390</td>
</tr>
<tr>
<td>11</td>
<td>395</td>
</tr>
<tr>
<td>11</td>
<td>405</td>
</tr>
<tr>
<td>12</td>
<td>390</td>
</tr>
<tr>
<td>12</td>
<td>395</td>
</tr>
<tr>
<td>12</td>
<td>405</td>
</tr>
</tbody>
</table>
</div>
<p>The desired output would be the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>so_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>390</td>
</tr>
<tr>
<td>11</td>
<td>395</td>
</tr>
<tr>
<td>12</td>
<td>405</td>
</tr>
</tbody>
</table>
</div>
<p>Basically my goal is to drop duplicates on the column 'index' while keeping a different ascending value for the column 'so_id'.</p>
|
<p>We can do it, but importantly, pay attention to the comment above.</p>
<p>Sort <code>df</code> by <code>so_id</code>:</p>
<pre><code>df = df.sort_values(by=['so_id'])
</code></pre>
<p>Create a temporary column <code>t</code>, which is a classification of <code>so_id</code>, and re-sort <code>df</code> back to its original order:</p>
<pre><code>df = df.assign(t=df['so_id'].ne(df['so_id'].shift(1)).cumsum()).sort_values(by='index')
</code></pre>
<p>Create a temporary classification <code>t1</code> of <code>index</code>:</p>
<pre><code>df = df.assign(t1=df['index'].ne(df['index'].shift(1)).cumsum())
</code></pre>
<p>Select the rows where the two classifications match, then drop the helper columns:</p>
<pre><code>df = df[df['t'] == df['t1']].drop(columns=['t', 't1'])
print(df)

   index  so_id
0     10    390
4     11    395
8     12    405
</code></pre>
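<p>A more compact variant of the same idea, using dense ranks (a sketch; it relies on each <code>index</code> group containing the same set of <code>so_id</code> candidates, as in the example):</p>
<pre><code>r_idx = df['index'].rank(method='dense')
r_so = df['so_id'].rank(method='dense')
print(df[r_idx == r_so])

   index  so_id
0     10    390
4     11    395
8     12    405
</code></pre>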
|
python|pandas
| 1
|
5,910
| 67,169,344
|
Unknown error/crash - TensorFlow LSTM with GPU (no output after start of 1st epoch)
|
<p><strong>I'm trying to train a model using LSTM layers. I'm using a GPU and all needed libraries are loaded.</strong></p>
<p>When I'm building the model this way:</p>
<pre><code>model = keras.Sequential()
model.add(layers.LSTM(256, activation="relu", return_sequences=False)) # note the activation function
model.add(layers.Dropout(0.2))
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1))
model.add(layers.Activation(activation="sigmoid"))
model.compile(
    loss=keras.losses.BinaryCrossentropy(),
    optimizer="adam",
    metrics=["accuracy"]
)
</code></pre>
<p>It works. But it's using <code>activation="relu"</code> on the LSTM layer, so it's not CuDNNLSTM - that's automatically chosen when the activation function is <strong>tanh</strong> (default) - if I'm not wrong.</p>
<p>So, it's painfully slow and I would like to run the faster CuDNNLSTM. My code for that:</p>
<pre><code>model = keras.Sequential()
model.add(layers.LSTM(256, return_sequences=False))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1))
model.add(layers.Activation(activation="sigmoid"))
model.compile(
    loss=keras.losses.BinaryCrossentropy(),
    optimizer="adam",
    metrics=["accuracy"]
)
</code></pre>
<p>It's basically the same, only without the activation function provided, so <strong>tanh</strong> will be used.
But now it's not training, and the end of output looks like this:</p>
<pre><code>2021-04-19 22:41:46.046218: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-19 22:41:46.046426: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-19 22:41:46.046642: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-04-19 22:41:46.046942: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-04-19 22:41:46.047124: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-04-19 22:41:46.047312: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-04-19 22:41:46.047489: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-04-19 22:41:46.047663: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-04-19 22:41:46.047936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-04-19 22:41:46.665456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-19 22:41:46.665712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-04-19 22:41:46.665876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N
2021-04-19 22:41:46.666186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2982 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-04-19 22:41:46.667505: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-19 22:42:07.374456: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/50
2021-04-19 22:42:08.922891: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-19 22:42:09.272264: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-04-19 22:42:09.302667: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
Process finished with exit code -1073740791 (0xC0000409)
</code></pre>
<p><strong>It just starts the first epoch, then freezes for a minute and exits with this weird exit code.</strong></p>
<ul>
<li>Shape of the input data: <code>tf.Tensor([50985 29 7], shape=(3,), dtype=int32)</code></li>
<li>My GPU: <code>Nvidia GTX 1050 Ti</code></li>
<li>CUDA: <code>v11.3</code></li>
<li>OS: <code>Windows 10</code></li>
<li>IDE: <code>PyCharm</code></li>
</ul>
<p>Finding solutions to this problem is a bit challenging, as I don't have any error output. Am I doing something wrong? Has anyone encountered a similar issue? What could help?</p>
<p><strong>// Edit; I tried:</strong></p>
<ul>
<li>running this model with much fewer units (2 instead of 256) and lower batch_size</li>
<li>downgrading tensorflow to <code>2.4.0</code>, CUDA to <code>11.0</code> and cudnn to <code>8.0.1</code> with python <code>3.7.1</code> (this should be a right combination according to <a href="https://www.tensorflow.org/install/source#gpu" rel="nofollow noreferrer">this list from TensorFlow website</a>)</li>
<li>restarting my PC :)</li>
</ul>
|
<p><strong>I found the solution... kinda.</strong></p>
<p>So it works as it should when I downgraded tensorflow to <code>2.1.0</code>, CUDA to <code>10.1</code> and cudnn to <code>7.6.5</code> (at the time 4th combination from <a href="https://www.tensorflow.org/install/source#gpu" rel="nofollow noreferrer">this list on TensorFlow website</a>)</p>
<p>I don't know why it didn't work on the newest version, or on the valid combination for tensorflow <code>2.4.0</code>.</p>
<p>It's working well, so my issue is solved. Nonetheless, it would be nice to know why using LSTM with cudnn on higher versions didn't work for me, as I haven't found this issue reported anywhere.</p>
|
python|tensorflow|keras|lstm
| 0
|
5,911
| 47,086,599
|
parallelising tf.data.Dataset.from_generator
|
<p>
I have a non trivial input pipeline that <code>from_generator</code> is perfect for...</p>
<pre class="lang-py prettyprint-override"><code>dataset = tf.data.Dataset.from_generator(complex_img_label_generator,
(tf.int32, tf.string))
dataset = dataset.batch(64)
iter = dataset.make_one_shot_iterator()
imgs, labels = iter.get_next()
</code></pre>
<p>Where <code>complex_img_label_generator</code> dynamically generates images and returns a numpy array representing a <code>(H, W, 3)</code> image and a simple <code>string</code> label. The processing is not something I can represent as reading from files and <code>tf.image</code> operations.</p>
<p>My question is: how can I parallelise the generator? How can I have N of these generators running in their own threads?</p>
<p>One thought was to use <code>dataset.map</code> with <code>num_parallel_calls</code> to handle the threading; but the map operates on tensors... Another thought was to create multiple generators, each with its own <code>prefetch</code>, and somehow join them, but I can't see how I'd join N generator streams?</p>
<p>Any canonical examples I could follow?</p>
|
<p>Turns out I can use <code>Dataset.map</code> if I make the generator super lightweight (only generating metadata) and then move the actual heavy lifting into a stateless function. This way I can parallelise just the heavy lifting part with <code>.map</code> using a <code>py_func</code>.</p>
<p>Works; but feels a tad clumsy... Would be great to be able to just add <code>num_parallel_calls</code> to <code>from_generator</code> :)</p>
<pre class="lang-py prettyprint-override"><code>def pure_numpy_and_pil_complex_calculation(metadata, label):
# some complex pil and numpy work nothing to do with tf
...
dataset = tf.data.Dataset.from_generator(lightweight_generator,
output_types=(tf.string, # metadata
tf.string)) # label
def wrapped_complex_calulation(metadata, label):
return tf.py_func(func = pure_numpy_and_pil_complex_calculation,
inp = (metadata, label),
Tout = (tf.uint8, # (H,W,3) img
tf.string)) # label
dataset = dataset.map(wrapped_complex_calulation,
num_parallel_calls=8)
dataset = dataset.batch(64)
iter = dataset.make_one_shot_iterator()
imgs, labels = iter.get_next()
</code></pre>
|
tensorflow|tensorflow-datasets
| 29
|
5,912
| 47,156,680
|
Regex for amounts in euro
|
<p>I need to find a regex that selects only the amounts (in euros), so the value needs to be followed by a <code>€</code> or <code>euros</code>, and after the <code>,</code> come the cents; there can be spaces or dots in the number as well.</p>
<pre><code>7 967 59 €
- 9847, 48 euros à titre de rappel de salaire sur le bonus de l'année 2012,
- 1929, 78 euros à titre de rappel de salaire sur le bonus de l'année 2013,
- 129 689, 78 euros à titre de solde d'indemnité conventionnelle de licenciement,
- 1098 euros au titre du paiement du DIF,
é à 20 892, 05 euros, il ressort des pi
le de 27 084, 26 euros
ée à 26 395, 10 euros, hors bo
de 129 689, 78 euros,
6.000 € au titre des dommages et intérêts pour licenciement sans cause réelle et sérieuse,
1.510 € au titre de l'indemnité compensatrice de préavis,
151 € au titre des congés payés y afférents, 739 € au titre de l'indemnité de licenciement,
656,19 € au titre de l'indemnité due au titre de la non rémunération de la période de mise à pied conservatoire,
65,61 € au titre des congés payés afférents,
2.000 € au titre de 59 € au titre de <span class="highlight_underline">l'indemnité légale de licenciement</span>
2014,7 967, 59 € au titre de <span class="highlight_underline">l'indemnité légale de licenciement</span>
rappel de salaires de janvier 2007 au 7 mars 2007 3.708,34 €
SECTION B N° 419 425 426 427 428 429 430 432 433 434 436 441 442 443 444 446 467 571 572
</code></pre>
<p>I came up with this:</p>
<pre><code>(\d.+\d+)(?:\s(?:euros?|€))
</code></pre>
<p>But it isn't as accurate as it should be.</p>
<p>Can someone help me?</p>
<p>EDIT:</p>
<p>@Wiktor Stribiżew gave me:</p>
<pre><code>(\d[\d.\s,]*)(?:\s(?:euro|€))
</code></pre>
<p>which is close but with this examples:</p>
<pre><code>2014,7 967, 59 €
</code></pre>
<p>it also takes the <code>2014,</code></p>
<p>and with <code>49715 11000158926 101,30 €</code></p>
<p>it takes <code>49715 11000158926</code>. Numbers are limited to groups of 3.</p>
<p>and with <code>2007 3.708,34 €</code></p>
<p>it shouldn't take the <code>2007</code> as well</p>
<p>Edit 2:</p>
<p>Thanks for the answer, but it does not seem to work in my python script:</p>
<pre><code>import regex
import pandas as pd

sentences_pd = pd.read_csv('sampled_amounts.csv', names=["text"])
sentences_pd.head()
print([(regex.findall("\b((?:\d+|\d{1,3}(?:[,.\s]\d{3})*)(?:[,.\s]*\d+)?)\s(?:euros?|€)", x)) for x in sentences_pd['text']])
</code></pre>
<p>the text column looks like:</p>
<p><a href="https://i.stack.imgur.com/nRVsy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nRVsy.png" alt="enter image description here"></a></p>
<p>It gives me an empty array</p>
<pre><code>[[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]
</code></pre>
|
<p>You may use</p>
<pre><code>\b((?:\d+|\d{1,3}(?:[,.\s]\d{3})*)(?:[,.\s]*\d+)?)\s(?:euros?|€)
</code></pre>
<p>See the <a href="https://regex101.com/r/0LPXcI/7" rel="nofollow noreferrer">regex demo</a></p>
<p><strong>Details</strong></p>
<ul>
<li><code>\b</code> - a word boundary</li>
<li><code>((?:\d+|\d{1,3}(?:[,.\s]\d{3})*)(?:[,.\s]*\d+)?)</code> - Group 1
<ul>
<li><code>(?:</code> - an alternation group start
<ul>
<li><code>\d+</code> - 1+ digits</li>
<li><code>|</code> - or </li>
<li><code>\d{1,3}</code> - 1 to 3 digits</li>
<li><code>(?:[,.\s]\d{3})*</code> - 0+ sequences of
<ul>
<li><code>[,.\s]</code> - 1 whitespace, <code>,</code> or <code>.</code></li>
<li><code>\d{3}</code> - 3 digits</li>
</ul></li>
</ul></li>
<li><code>)</code> - end of the alternation group</li>
<li><code>(?:[,.\s]*\d+)?</code> - an optional group of
<ul>
<li><code>[,.\s]*</code> - 0+ whitespaces, <code>,</code> or <code>.</code></li>
<li><code>\d+</code> - 1 or more digits</li>
</ul></li>
</ul></li>
<li><code>\s</code> - a whitespace</li>
<li><code>(?:euros?|€)</code> - either <code>euro</code>, <code>euros</code> or <code>€</code></li>
</ul>
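<p>One note on Edit 2: in a plain Python string literal, <code>"\b"</code> is a backspace character, not a word boundary, which would explain the empty lists. Passing the pattern as a raw string should fix it (a quick check, works with the standard <code>re</code> module too):</p>
<pre><code>import re

pattern = r"\b((?:\d+|\d{1,3}(?:[,.\s]\d{3})*)(?:[,.\s]*\d+)?)\s(?:euros?|€)"
print(re.findall(pattern, "le de 27 084, 26 euros"))
# ['27 084, 26']
</code></pre>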
|
python|regex|pandas
| 3
|
5,913
| 11,210,677
|
Need help to parallelize a loop in python
|
<p>I have a huge data set and I have to compute, for every point of it, a series of properties. My code is really slow and I would like to make it faster by somehow parallelizing the loop. I would like each processor to compute the "series of properties" for a limited subsample of my data and then join all the properties together in one array.
I'll try to explain what I have to do with an example.</p>
<p>Let's say that my data set is the array <code>x</code>:</p>
<pre><code>x = linspace(0,20,10000)
</code></pre>
<p>The "property" I want to get is, for instance, the square root of <code>x</code>:</p>
<pre><code>prop = []
for i in arange(0, len(x)):
    prop.append(sqrt(x[i]))
</code></pre>
<p>The question is: how can I parallelize the above loop? Let's assume I have 4 processors and I would like each of them to compute the sqrt of 10000/4 = 2500 points.</p>
<p>I tried looking at some python modules like <code>multiprocessing</code> and <code>mpi4py</code> but from the guides I couldn't find the answer to such a simple question.</p>
<p><strong>EDITS</strong></p>
<p>I thank you all for the precious comments and links you provided. However, I would like to clarify my question. I'm not interested in the <code>sqrt</code> function whatsoever.
I am doing a series of operations within a loop. I know perfectly well that loops are bad and vectorized operations are always preferable, but in this case I really have to use a loop. I won't go into the details of my problem because that would add unnecessary complication to this question.
I would like to split my loop so that each processor does a part of it, meaning that I could run my code 40 times with 1/40 of the loop each and then merge the results, but that would be stupid.</p>
This is a brief example</p>
<pre><code>for i in arange(0, len(x)):
    # do some complicated stuff
</code></pre>
<p>What I want is use 40 cpus to do this:</p>
<pre><code>for ncpu in arange(0, 40):
    for i in arange(len(x)/40*ncpu, len(x)/40*(ncpu+1)):
        # do some complicated stuff
</code></pre>
<p>Is that possible or not with python?</p>
|
<p>Parallelizing is not trivial, however you might find <a href="https://code.google.com/p/numexpr/" rel="nofollow">numexpr</a> useful.</p>
<p><strong>For numerical work</strong>, you really should look into the utilities numpy gives you (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow">vectorize</a> and similar); these usually give you a good speedup as a basis to work on.</p>
<p><strong>For more complicated, non-numerical cases</strong>, you may use <code>multiprocessing</code> (see comments).</p>
<hr>
<p>On a sidenote, multithreading is even more non-trivial with python than with other languages, in that CPython has the <a href="http://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">Global Interpreter Lock (GIL)</a>, which disallows two sections of python code from running in the same interpreter at the same time (i.e. there is no real multithreaded pure python code). For I/O and heavy calculations, however, third party libraries tend to release that lock, so that limited multithreading is possible.</p>
<p>This adds to the usual multithreading nuisances of having to mutex shared data accesses and similar.</p>
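<p>A minimal sketch of the <code>multiprocessing</code> route, with <code>sqrt</code> standing in for the "complicated stuff" done per point (the worker function must be defined at module level so it can be pickled):</p>
<pre><code>from multiprocessing import Pool
import numpy as np

def do_complicated_stuff(xi):
    # stand-in for the real per-point work
    return np.sqrt(xi)

if __name__ == '__main__':
    x = np.linspace(0, 20, 10000)
    pool = Pool(processes=4)
    # each worker gets a contiguous chunk of roughly len(x)/4 points
    prop = pool.map(do_complicated_stuff, x, chunksize=len(x) // 4)
    pool.close()
    pool.join()
</code></pre>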
|
python|numpy|parallel-processing|multiprocessing
| 3
|
5,914
| 68,212,240
|
Pandas DataFrame and NumPy array weirdness - df.to_numpy(), np.asarray(df), and np.array(df) give different memory usages
|
<p>I am working on converting an existing Pandas Dataframe to a Numpy array. The dataframe has no <code>NaN</code> values and is not sparsely populated (read in from a <code>.csv</code> file). In addition, in order to see memory usage, I performed the following:</p>
<p><code>sum(df.memory_usage)</code></p>
<pre><code>2400128
</code></pre>
<p><code>sys.getsizeof(df)</code></p>
<pre><code>2400144
</code></pre>
<p>The above small 16-byte difference is negligible and understood, since the size calculation and the accounting of memory overhead differ between <code>sys.getsizeof</code> and summing <code>df.memory_usage</code> (for reference, <code>df.info()</code> or the <code>pandas_profiling</code> library also report memory usage).</p>
<p>Now, when converting this to a Numpy array, there seems to be a huge discrepancy in memory usage:</p>
<p><code>sys.getsizeof(np.array(df))</code></p>
<pre><code>2400120
</code></pre>
<p><code>sys.getsizeof(df.to_numpy())</code></p>
<pre><code>120
</code></pre>
<p>To me, this does not make any sense, since both arrays are of the same type, size, and data:</p>
<p><code>np.array(df)</code></p>
<pre><code>array([[1.0000e+00, 2.0000e+04, 2.0000e+00, ..., 0.0000e+00, 0.0000e+00,
1.0000e+00],
[2.0000e+00, 1.2000e+05, 2.0000e+00, ..., 0.0000e+00, 2.0000e+03,
1.0000e+00],
[3.0000e+00, 9.0000e+04, 2.0000e+00, ..., 1.0000e+03, 5.0000e+03,
0.0000e+00],
...,
[1.1998e+04, 9.0000e+04, 1.0000e+00, ..., 3.0000e+03, 4.0000e+03,
0.0000e+00],
[1.1999e+04, 2.8000e+05, 1.0000e+00, ..., 3.5000e+02, 2.0950e+03,
0.0000e+00],
[1.2000e+04, 2.0000e+04, 1.0000e+00, ..., 0.0000e+00, 0.0000e+00,
1.0000e+00]])
</code></pre>
<p><code>df.to_numpy() # or similarly, np.asarray(df)</code></p>
<pre><code>array([[1.0000e+00, 2.0000e+04, 2.0000e+00, ..., 0.0000e+00, 0.0000e+00,
1.0000e+00],
[2.0000e+00, 1.2000e+05, 2.0000e+00, ..., 0.0000e+00, 2.0000e+03,
1.0000e+00],
[3.0000e+00, 9.0000e+04, 2.0000e+00, ..., 1.0000e+03, 5.0000e+03,
0.0000e+00],
...,
[1.1998e+04, 9.0000e+04, 1.0000e+00, ..., 3.0000e+03, 4.0000e+03,
0.0000e+00],
[1.1999e+04, 2.8000e+05, 1.0000e+00, ..., 3.5000e+02, 2.0950e+03,
0.0000e+00],
[1.2000e+04, 2.0000e+04, 1.0000e+00, ..., 0.0000e+00, 0.0000e+00,
1.0000e+00]])
</code></pre>
<p>I found out that <code>df.to_numpy()</code> uses <code>np.asarray</code> in order to perform the conversion, so I also tried this:</p>
<p><code>sys.getsizeof(np.asarray(df))</code></p>
<pre><code>120
</code></pre>
<p>Both <code>np.asarray(df)</code> and <code>df.to_numpy()</code> gives a total of <strong>120</strong> bytes of use, while <code>np.array(df)</code> is <strong>2400120</strong> bytes! This does not make any sense!</p>
<p>Neither array is stored as a sparse array, and as shown above, they have the same exact output (and, checking types, the same type).</p>
<p>No idea how to resolve this issue, since this doesn't seem to make any sense from a memory perspective. I'm trying to understand this huge discrepancy in the memory usage, since all the values are integers or floats in the <code>.csv</code> file, and no missing or <code>NaN</code> values are present. Perhaps <code>np.asarray(df)</code> (and hence <code>df.to_numpy()</code>) is doing something differently from <code>np.array(df)</code>, or <code>sys.getsizeof</code> is doing something weird, but I cannot seem to resolve this issue.</p>
|
<p>A <code>numpy</code> array has attributes like <code>shape</code> and <code>dtype</code>, and a <code>data buffer</code>, which is a flat C array that stores the values.</p>
<pre><code>arr.nbytes # 2400000
</code></pre>
<p>is telling you the size of that data buffer. So if the array is (300,10000) float dtype, that would be <code>300*1000*8</code> bytes.</p>
<p><code>getsizeof</code> 2400120 is reporting on that buffer plus 120 bytes used for the array object itself, the shape tuple and dtype, etc.</p>
<p>But an array may be a <code>view</code> of another. It will have its own 120-byte 'overhead', but reference the data buffer of another array. <code>getsizeof</code> only reports on that 120, not the shared memory. In effect it tells us how much extra memory that view is consuming.</p>
<p>A dataframe is a complex object with index arrays, column name list (or array),etc. How the data is stored depends on column dtypes. Columns may be viewed as <code>Series</code>, or groups of columns of like dtype. I think in your case all columns have the same dtype, so the data is stored in a 2d <code>numpy</code> array. It's the data buffer of that array that the dataframe getsizeof is reporting.</p>
<pre><code>df.values
df.to_numpy()
</code></pre>
<p>return a <code>view</code> of that data array. Thus <code>getsizeof</code> only reports 120.</p>
<p><code>np.array(df)</code> returns a copy of that array, which has its own data buffer, and thus the full size. Read its <code>docs</code>.</p>
<p><code>np.asarray(df)</code> has a <code>copy=False</code> parameter, and thus returns a <code>view</code> if possible.</p>
<p>In sum, the concept of a <code>view</code> is key to understanding the differences you see. <code>sys.getsizeof</code> is not that useful of a measure, unless you already understand how objects are organized. It's a good idea to check the documentation of functions that you use, including <code>np.array</code>, <code>np.asarray</code>, and <code>.to_numpy</code>.</p>
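<p>A small demonstration of the view/copy distinction with plain numpy (the exact byte counts vary by platform; the contrast is the point):</p>
<pre><code>import sys
import numpy as np

a = np.zeros((300, 1000))   # owns its ~2.4 MB buffer
v = a[:]                    # basic slicing returns a view into a's buffer

print(v.base is a)          # True: v shares a's memory
print(sys.getsizeof(a))     # roughly 2400000 plus small object overhead
print(sys.getsizeof(v))     # small object overhead only
</code></pre>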
|
pandas|dataframe|numpy|memory|numpy-ndarray
| 2
|
5,915
| 68,177,394
|
Why we need to save pytorch models with .net extension?
|
<p>I'm new to Pytorch and I am working on a Character_Level_LSTM_Exercise.</p>
<p>Why do they save the model with a .net extension in the model name?</p>
<p>I searched for an explanation but didn't find a good one.</p>
<pre><code># change the name, for saving multiple files
model_name = 'rnn_x_epoch.net'

checkpoint = {'n_hidden': net.n_hidden,
              'n_layers': net.n_layers,
              'state_dict': net.state_dict(),
              'tokens': net.chars}

with open(model_name, 'wb') as f:
    torch.save(checkpoint, f)
</code></pre>
|
<p>You can use whatever extension you like! Just make sure to be consistent.</p>
<p>The docs recommend using the <code>.pt</code> extension.</p>
<p><a href="https://pytorch.org/docs/stable/generated/torch.save.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.save.html</a></p>
<p>For more explanation and more extension options see <a href="https://github.com/pytorch/pytorch/issues/14864#issuecomment-477195843" rel="nofollow noreferrer">Soumith Chintala's comment</a>.</p>
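<p>The extension changes nothing about the calls themselves; for example:</p>
<pre><code>torch.save(checkpoint, 'rnn_x_epoch.pt')   # same call, conventional extension
checkpoint = torch.load('rnn_x_epoch.pt')
</code></pre>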
|
python|pytorch
| 1
|
5,916
| 59,382,676
|
Pandas chain.from_iterable: Error object of type 'itertools.chain' has no len()
|
<p>Having a dataframe as the following:</p>
<pre><code>df_data=pd.DataFrame({'name':[['ABC','DOS','TRES'],['XYZ','MORTGAGE','SOLUTIONS']],
'original': ['ABC DOS TRES','XYZ MORTGAGE SOLUTIONS']})
</code></pre>
<p>I am using chain.from_iterable to extract every item in a list and add the result to a dataframe:</p>
<pre><code>s = pd.DataFrame(chain.from_iterable(df_data['name']),columns=['word'])
</code></pre>
<p>How can I do something like this:</p>
<pre><code>t = pd.DataFrame({'word': chain.from_iterable(df_data['name'])})
</code></pre>
<p>The last dataframe creation gives an error: <code>TypeError: object of type 'itertools.chain' has no len()</code>. What is the difference between the two dataframe creations? How can the error in the last one be fixed?</p>
<p>Thanks :)</p>
|
<p>Using <code>chain.from_iterable</code> returns an iterator, not a list/sequence. Older versions of Pandas need the objects you pass to the data frame constructor to have a <code>len</code>, so they know what size array to allocate on the backend. The <code>chain</code> object does not supply that (nor should it).</p>
<p>You can wrap it in <code>list</code> to solve your issue:</p>
<pre><code>t = pd.DataFrame({'word': list(chain.from_iterable(df_data['name']))})
</code></pre>
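<p>On pandas &gt;= 0.25 there is also <code>Series.explode</code>, which avoids <code>itertools</code> entirely:</p>
<pre><code>t = df_data['name'].explode().reset_index(drop=True).to_frame('word')
</code></pre>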
|
python|pandas|dataframe
| 3
|
5,917
| 59,402,788
|
How to make a custom loss function in Keras properly
|
<p>I am making a model whose prediction is a matrix from a conv layer.
My loss function is:</p>
<pre><code>def custom_loss(y_true, y_pred):
    print("in loss...")
    final_loss = float(0)
    print(y_pred.shape)
    print(y_true.shape)
    for i in range(7):
        for j in range(14):
            tl = float(0)
            gt = y_true[i, j]
            gp = y_pred[i, j]
            if gt[0] == 0:
                tl = K.square(gp[0] - gt[0])
            else:
                for l in range(5):
                    tl = tl + K.square(gp[l] - gt[l]) / 5
            final_loss = final_loss + tl / 98
    return final_loss
</code></pre>
<p>The shapes printed for the arguments are:</p>
<p>(?, 7, 14, 5)</p>
<p>(?, ?, ?, ?) </p>
<p>The labels have the shape 7x14x5.</p>
<p>It seems like the loss function gets called for a batch of predictions instead of one prediction at a time. I am relatively new to Keras and don't really understand how these things work.</p>
<p>this is my model </p>
<pre><code>model = Sequential()
input_shape=(360, 640, 1)
model.add(Conv2D(24, (5, 5), strides=(1, 1), input_shape=input_shape))
model.add(MaxPooling2D((2,4), strides=(2, 2)))
model.add(Conv2D(48, (5, 5), padding="valid"))
model.add(MaxPooling2D((2,4), strides=(2, 2)))
model.add(Conv2D(48, (5, 5), padding="valid"))
model.add(MaxPooling2D((2,4), strides=(2, 2)))
model.add(Conv2D(24, (5, 5), padding="valid"))
model.add(MaxPooling2D((2,4), strides=(2, 2)))
model.add(Conv2D(5, (5, 5), padding="valid"))
model.add(MaxPooling2D((2,4), strides=(2, 2)))
model.compile(
    optimizer="Adam",
    loss=custom_loss,
    metrics=['accuracy'])
print(model.summary())
</code></pre>
<p>I am getting an error like </p>
<p>ValueError: slice index 7 of dimension 1 out of bounds. for 'loss/max_pooling2d_5_loss/custom_loss/strided_slice_92' (op: 'StridedSlice') with input shapes: [?,7,14,5], [2], [2], [2] and with computed input tensors: input[1] = <0 7>, input[2] = <1 8>, input[3] = <1 1>.</p>
<p>I think I know this is because the arguments to the loss function are given as many predictions at a time, in 4D.</p>
<p>How can I fix this? Is the problem in the way I assign the loss function, or in the loss function itself?
For now, the output of the loss function is a float, but what is it supposed to be?</p>
|
<p>To answer some of your concerns,</p>
<blockquote>
<p>I don't see anyone use loops in the loss function</p>
</blockquote>
<p>Usually it's pretty bad practice. Deep nets typically train on millions of samples. Having loops instead of vectorized operations will therefore really bring down your model's performance.</p>
<h2>Implementing without loops.</h2>
<p>I'm not sure if I've exactly captured what you wanted in your loss function. But I'm quite sure it's very close (if not this is what you needed). I could have compared your loss against mine with fixed random seeds to see if I get exactly the result given by your loss function. However, since your loss is not working, I can't do that.</p>
<pre><code>def custom_loss_v2(y_true, y_pred):
    # We create MSE loss that captures what's in the else condition -> shape [batch_size, height, width]
    mse = tf.reduce_mean((y_true - y_pred)**2, axis=-1)

    # We create pred_first_ch tensor that captures what's in the if condition -> shape [batch, height, width]
    pred_first_ch = tf.gather(tf.transpose(y_pred**2, [3, 0, 1, 2]), 0)

    # We create this to get a boolean array that satisfies the conditions in the if else statement
    true_first_zero_mask = tf.equal(tf.gather(tf.transpose(y_true, [3, 0, 1, 2]), 0), 0)

    # Then we use tf.where with reduce_mean to get the final loss
    res = tf.where(true_first_zero_mask, pred_first_ch, mse)
    return tf.reduce_mean(res)
</code></pre>
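<p>It should plug into the model the same way as the original loss:</p>
<pre><code>model.compile(optimizer="Adam", loss=custom_loss_v2, metrics=['accuracy'])
</code></pre>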
|
python|tensorflow|keras|conv-neural-network|loss-function
| 1
|
5,918
| 13,815,719
|
Creating Grid with Numpy Performance
|
<p>So here is what I try to do,</p>
<pre><code>import numpy as np

N = 1000
x = np.arange(0, 1, 1./float(N))  # note: len(N) would fail, since N is an int
XX, YY = np.meshgrid(x, x)
l = len(XX)
grid = np.array([([XX[i,i], YY[j,j], 0.]) for i in xrange(l) for j in xrange(l)])
</code></pre>
<p>The numpy routine is rather fast, but I need the grid to be in this different form, and building it takes quite long (I guess because of the element-by-element indexing of the numpy arrays).</p>
<p>Thanks for any suggestions : )</p>
<p>Cheers</p>
|
<p>Take advantage of <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcasting</a>:</p>
<pre><code>z = np.zeros([N, N, 3])
z[:,:,0] = x.reshape(-1,1)
z[:,:,1] = x
fast_grid = z.reshape(N*N, 3)
print np.all( grid == fast_grid )
True
</code></pre>
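<p>Equivalently, the same layout can be built without the intermediate 3D array, using <code>repeat</code> and <code>tile</code>:</p>
<pre><code>fast_grid2 = np.column_stack([np.repeat(x, N), np.tile(x, N), np.zeros(N * N)])
</code></pre>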
|
python|numpy|grid
| 3
|
5,919
| 14,298,593
|
Scipy arpack eigs versus eigsh number of eigenvalues
|
<p>In scipy's ARPACK bindings, one cannot calculate all of the eigenvalues of a matrix. However, I find that eigsh is able to calculate n - 1 eigenvalues, while eigs is only able to calculate n - 2 eigenvalues. Can anyone verify that this is in fact a fundamental limitation of ARPACK and not a bug in scipy? </p>
<p>Here is example code:</p>
<pre><code>import scipy.sparse, scipy.sparse.linalg
t = scipy.sparse.eye(3,3).tocsr()
l,v = scipy.sparse.linalg.arpack.eigs(t,k=2)
l,v = scipy.sparse.linalg.arpack.eigsh(t,k=2)
</code></pre>
|
<p>It's an ARPACK limitation:</p>
<p><a href="http://forge.scilab.org/index.php/p/arpack-ng/source/tree/master/SRC/dnaupd.f" rel="nofollow">http://forge.scilab.org/index.php/p/arpack-ng/source/tree/master/SRC/dnaupd.f</a></p>
<p><a href="http://forge.scilab.org/index.php/p/arpack-ng/source/tree/master/SRC/dsaupd.f" rel="nofollow">http://forge.scilab.org/index.php/p/arpack-ng/source/tree/master/SRC/dsaupd.f</a></p>
<p>Would be a strange bug to get this wrong...</p>
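<p>If you actually need the full spectrum of a small matrix, a dense solver is the usual workaround:</p>
<pre><code>import numpy as np
l = np.linalg.eigvalsh(t.toarray())  # all eigenvalues (symmetric case)
</code></pre>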
|
math|numpy|scipy|arpack
| 2
|
5,920
| 44,859,550
|
Large outputs predicted for the MNIST database in tensorflow
|
<p>I can't get a sensible result from the network on a test example after training.
It is a standard example from the help, multilayer_perceptron.py.</p>
<p>I try to get the result this way:</p>
<pre><code>examples_to_show = 5
y_result = sess.run(y_pred, feed_dict={x:mnist.test.images[:examples_to_show]})
print("y_result=",y_result)
</code></pre>
<p>I receive unclear figures instead of [ 0 0 1 0 0 0 0 0 0 0 ]</p>
<pre><code>'''
A Multilayer Perceptron implementation example using TensorFlow library.
This example is using the MNIST database of handwritten digits
Author: Aymeric Damien
'''

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# The designer for predictions!!!
# Prediction
y_pred = pred

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    # We will try to receive the result of training!!!
    examples_to_show = 5
    y_result = sess.run(y_pred, feed_dict={x: mnist.test.images[:examples_to_show]})
    print("y_result=", y_result)
Epoch: 0001 cost= 142.664078834
Epoch: 0002 cost= 37.176684845
Epoch: 0003 cost= 23.608409217
Epoch: 0004 cost= 16.678811304
Epoch: 0005 cost= 12.175642554
Epoch: 0006 cost= 9.083989911
Epoch: 0007 cost= 6.624555320
Epoch: 0008 cost= 4.970751049
Epoch: 0009 cost= 3.595181121
Epoch: 0010 cost= 2.671157273
Epoch: 0011 cost= 2.032964239
Epoch: 0012 cost= 1.588672840
Epoch: 0013 cost= 1.133152580
Epoch: 0014 cost= 0.805134769
Epoch: 0015 cost= 0.689760053
Optimization Finished!
Accuracy: 0.941
y_result= [
[ -203.50767517 -437.82525635 186.90861511 590.15588379
-471.18536377 -283.88424683 -1150.14709473 1022.75799561 -391.6159668
432.9206543 ]
[ -855.87487793 6.88715792 903.70776367 252.00227356
-1407.09313965 441.29104614 344.09405518 -1691.98535156
40.62039566 -1391.43688965]
[ -244.32698059 618.91705322 12.79210854 -36.14464951
-8.12554073 183.12348938 50.32661057 147.05378723 152.9332428
-210.40829468]
[ 1091.7199707 -919.26574707 -333.54571533 -953.7399292 -1072.82226562
73.99294281 305.2588501 -166.91053772 -985.14654541
452.14318848]
[ 200.62698364 89.34638214 -280.01904297 -342.19534302 1240.4128418
229.24633789 -424.91091919 298.81100464 -194.70623779
934.27703857]]
</code></pre>
<p>The result should be y_result= [ 0 0 1 0 0 0 0 0 0 0 ]. Why isn't it?</p>
|
<p>Your <code>y_result</code> is calculated here: <code>out_layer = tf.matmul(layer_2, weights['out']) + biases['out']</code>. It is clear that it will not be a one-hot vector but either a matrix or a vector (depending on your <code>layer_2</code> and <code>weights['out']</code>). Looking at your results, it is a matrix.</p>
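<p>To get something close to the one-hot vector you expected, run the logits through <code>softmax</code> (for probabilities) or <code>argmax</code> (for the predicted class); a sketch using the tensors from the question, inside the same session:</p>
<pre><code>probs = tf.nn.softmax(pred)   # per-class probabilities
classes = tf.argmax(pred, 1)  # predicted digit per example
y_probs, y_classes = sess.run([probs, classes],
                              feed_dict={x: mnist.test.images[:examples_to_show]})
</code></pre>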
|
tensorflow
| 0
|
5,921
| 44,863,067
|
pandas TimedeltaIndex.join does not take sort arg
|
<p>The latest pandas 0.20.2 TimedeltaIndex.join method does not take a 'sort=...' kwarg, so it cannot be used as a join index as required in pandas/core/reshape/merge.py line 722. Here's how to reproduce:</p>
<pre><code>import pandas as pd
import numpy as np
tx = pd.timedelta_range('09:30:00', '10:00:00', freq='30s')
df0 = pd.DataFrame(np.random.randn(len(tx), 3), index=tx, columns=['a','b','c'])
df1 = pd.DataFrame(np.random.randn(len(tx), 2), index=tx, columns=['d','e'])
df0.join(df1)
</code></pre>
<p>The exception is thrown at:</p>
<blockquote>
<pre><code>/opt/anaconda/lib/python2.7/site-packages/pandas/core/reshape/merge.pyc in _get_join_info(self)
720 join_index, left_indexer, right_indexer = \
721 left_ax.join(right_ax, how=self.how, return_indexers=True,
--> 722 sort=self.sort)
723 elif self.right_index and self.how == 'left':
724 join_index, left_indexer, right_indexer = \
TypeError: join() got an unexpected keyword argument 'sort'
</code></pre>
</blockquote>
<p>Version 0.19.2 works ok.
Is this a bug or something else?</p>
|
<p>This is a known issue. There is an issue report (<a href="https://github.com/pandas-dev/pandas/issues/16541" rel="nofollow noreferrer">here</a>) and a Pull Request that is being worked on (<a href="https://github.com/pandas-dev/pandas/pull/16586" rel="nofollow noreferrer">here</a>) with the hope to complete for <a href="https://github.com/pandas-dev/pandas/milestone/51" rel="nofollow noreferrer">0.20.3</a></p>
<p><strong><em>Update:</em></strong></p>
<p>The fix made it into (<a href="https://github.com/pandas-dev/pandas/milestone/51?closed=1" rel="nofollow noreferrer">0.20.3</a>)</p>
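<p>Until you can upgrade, one workaround (a sketch, relying on the unnamed index becoming a column called <code>'index'</code> after <code>reset_index</code>) is to route the join through <code>merge</code> on a plain column:</p>
<pre><code>out = (df0.reset_index()
          .merge(df1.reset_index(), on='index')
          .set_index('index'))
</code></pre>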
|
python|pandas|join
| 1
|
5,922
| 45,149,446
|
Stop Tensoflow from running on the GPU after a computation
|
<p>I am running a REST Server in Python, with an access point to retrieve an image and use a tensorflow model to predict what is on that image. After starting the server, I am sending images to the REST endpoint. The model loaded is an Inception model that I trained myself. It is loaded from a tensorflow checkpoint file to restore the weights. Here is the function that builds the graph and executes the classification:</p>
<pre><code>import os
import tensorflow as tf
from cnn_server.server import file_service as dirs
from slim.datasets import dataset_utils
from slim.nets import nets_factory as network_factory
from slim.preprocessing import preprocessing_factory as preprocessing_factory
def inference_on_image(bot_id, image_file, network_name='inception_v4', return_labels=1):
    model_path = dirs.get_model_data_dir(bot_id)

    # Get number of classes to predict
    protobuf_dir = dirs.get_protobuf_dir(bot_id)
    number_of_classes = dataset_utils.get_number_of_classes_by_labels(protobuf_dir)

    # Get the preprocessing and network construction functions
    preprocessing_fn = preprocessing_factory.get_preprocessing(network_name, is_training=False)
    network_fn = network_factory.get_network_fn(network_name, number_of_classes)

    # Process the temporary image file into a Tensor of shape [width, height, channels]
    image_tensor = tf.gfile.FastGFile(image_file, 'rb').read()
    image_tensor = tf.image.decode_image(image_tensor, channels=0)

    # Perform preprocessing and reshape into [network.default_width, network.default_height, channels]
    network_default_size = network_fn.default_image_size
    image_tensor = preprocessing_fn(image_tensor, network_default_size, network_default_size)

    # Create an input batch of size one from the preprocessed image
    input_batch = tf.reshape(image_tensor, [1, 299, 299, 3])

    # Create the network up to the Predictions endpoint
    logits, endpoints = network_fn(input_batch)

    restorer = tf.train.Saver()

    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        # Restore the variables of the network from the last checkpoint and run the graph
        restorer.restore(sess, tf.train.latest_checkpoint(model_path))
        sess.run(endpoints)

        # Get the numpy array of predictions out of the graph
        predictions = endpoints['Predictions'].eval()[0]

    sess.close()
    return map_predictions_to_labels(protobuf_dir, predictions, return_labels)
</code></pre>
<p>To build the graph of the Inception V4 model I used <code>tf.model.slim</code>, a collection of tensorflow implementations of state-of-the-art CNNs. The inception model is built here: <a href="https://github.com/tensorflow/models/blob/master/slim/nets/inception_v4.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/slim/nets/inception_v4.py</a> and provided via a factory method: <a href="https://github.com/tensorflow/models/blob/master/slim/nets/nets_factory.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/slim/nets/nets_factory.py</a></p>
<p>For the first image everythig works as expected:</p>
<pre><code>2017-07-17 18:00:43.831365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-07-17 18:00:43.831371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-07-17 18:00:43.831384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
192.168.0.192 - - [17/Jul/2017 18:00:46] "POST /classify/4 HTTP/1.1" 200 -
</code></pre>
<p>The second image creates the following Error:</p>
<pre><code>ValueError: Variable InceptionV4/Conv2d_1a_3x3/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
</code></pre>
<p>My understanding of this is that the graph is created initially and then keeps on existing somewhere. Sending a second image results in calling the function again, the attempt to recreate the existing graph and then in the error. Now I have tried some things:</p>
<p><strong>Stopping Tensorflow overall:</strong>
I tried to stop tensorflow overall and recreate the device each time on the GPU. That would be the best solution, since this way the GPU is not occupied by Tensorflow when the server is running. I tried to do that with <code>sess.close()</code>, which did not work. <code>nvidia-smi</code> still shows the process on the GPU after processing the first image. Then I tried to access the devices somehow, but all I could get is the list of available devices via <code>device_lib.list_local_devices()</code>. However, this did not result in any option to manipulate the tensorflow processes on the GPU. Stopping the server, i.e. the initial python script that started the tensorflow session also kills off tensorflow on the GPU. Restarting the server after each classification is not an elegant solution.</p>
<p><strong>Reset or Delete the Graph</strong>
I tried to reset the Graph in several ways. One way is to retrieve the Graph from the tensor I am running, iterate over all collections and clearing them:</p>
<pre><code>graph = endpoints['Predictions'].graph
for key in graph.get_all_collection_keys():
graph.clear_collection(key)
</code></pre>
<p>Debugging shows that the graph collections are empty afterwards, however the error remains the same. The other way is to set the graph from the endpoint as default graph as <code>with graph.as_default:</code>, since the graph has been created before I didn't have much hope that this would delete the graph after the computation. It didn't.</p>
<p><strong>Set the variable scope to <code>reuse=true</code></strong>
The variable scope has an option reuse, which you can set in the <code>inception_v4.py</code>.</p>
<pre><code>def inception_v4(inputs, num_classes=1001, is_training=True,
dropout_keep_prob=0.8,
reuse=None,
scope='InceptionV4',
create_aux_logits=True):
</code></pre>
<p>Setting it to true, results in an error creating the graph initially, saying that the Variables do not exist.</p>
<p><strong>Loading the model once, then reusing it</strong>
Another way I thought of is to create the model once and then just reuse it, i.e. avoid calling the network factory a second time. This is problematic, since the server holds several models, each working on a different number of classes. That means I would have to create the graph for each of these models, keep them all alive and maintain them somehow. While this is possible, it causes a lot of overhead and is somewhat redundant, since the model is always the same; just the weights and the final layer differ. The weights are stored in checkpoint files already, and the implementations in <code>tf.model.slim</code> make it easy to create a graph with a different number of classes for the output.</p>
<p>I am out of ideas here. The most desirable solution would of course be to completely terminate tensorflow on the GPU and recreate the device from scratch each time the function is called.</p>
<p>Hope anybody can help here.</p>
<p>Thanks in advance.</p>
|
<p>Let's go over your problems one by one.</p>
<p>First, the error about the variable already existing comes from you reusing an existing graph and rerunning the model creation code on every request. Either create a graph per request by adding a <code>with tf.Graph().as_default():</code> context manager inside your <code>inference_on_image</code> function, or (strongly recommended) reuse the graph, by separating the part of that function which does <code>session.run</code> on the network from model building and weight loading.</p>
<p>For the second issue, there is no way to have tensorflow reset its GPU state without killing off the entire process.</p>
<p>For the third issue, clearing the graph collections won't do much. You can use a new graph per request but this will still share the state of the variables by default, as they will reside on the GPU. You can use session.reset to clear that state, but this won't get you your ram back.</p>
<p>To reuse the model with the different number of classes while sharing weights it sounds like you need to have a function which constructs all of them. I think the best way to do that is to change the implementation of the slim method to return up to the last layer, and then have your own code add the fully connected layers with the right numbers of classes on top of that.</p>
<p>Of course, you'd probably still want different parameter values for the rest of the network, unless you train all your models together.</p>
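<p>A minimal sketch of the recommended structure (the names here are illustrative, not from your codebase): build the graph and restore the weights once at server startup, then let each request do nothing but <code>session.run</code>:</p>
<pre><code># at server startup (runs once)
graph = tf.Graph()
with graph.as_default():
    image_ph = tf.placeholder(tf.float32, [1, 299, 299, 3])
    logits, endpoints = network_fn(image_ph)
    saver = tf.train.Saver()
session = tf.Session(graph=graph)
saver.restore(session, tf.train.latest_checkpoint(model_path))

# per request (cheap: no graph construction, no restore)
def classify(image_batch):
    return session.run(endpoints['Predictions'],
                       feed_dict={image_ph: image_batch})
</code></pre>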
|
python|tensorflow
| 2
|
5,923
| 45,254,975
|
tensorflow object detection API ImportError when generating PASCAL TFRecord files
|
<p>I'm trying to use the Tensorflow Object Detection API and I've successfully tested the installation,but we I try to generate the PASCAL VOC TFRecord files with the given command</p>
<pre><code>python object_detection/create_pascal_tf_record.py \
--label_map_path=object_detection/data/pascal_label_map.pbtxt \
--data_dir=VOCdevkit --year=VOC2012 --set=train \
--output_path=pascal_train.record
</code></pre>
<p>I encountered the following error:</p>
<pre><code>Traceback (most recent call last):
  File "object_detection/create_pascal_tf_record.py", line 36, in <module>
    from object_detection.utils import dataset_util
ImportError: No module named object_detection.utils
<p>my PYTHONPATH is:</p>
<pre><code>:/usr/local/lib/python2.7/dist-packages/tensorflow/models:/usr/local/lib/python2.7/dist-packages/tensorflow/models/slim
</code></pre>
<p>and I'm running the above command in the /models directory,anyone who knows how to fix this problem?</p>
|
<p>I had the same problem and I solved it by adding :</p>
<pre><code>import os
import sys
sys.path.append(os.path.abspath("./object_detection"))
</code></pre>
<p>and </p>
<pre><code>from object_detection.utils import dataset_util
</code></pre>
<p>becomes</p>
<pre><code>from utils import dataset_util
</code></pre>
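<p>Alternatively, if you prefer not to edit the script, make sure the directory that actually contains the <code>object_detection</code> package is on your PYTHONPATH. Per the installation instructions of that era (run from the <code>models</code> directory):</p>
<pre><code>export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
</code></pre>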
|
tensorflow|importerror|object-detection
| 0
|
5,924
| 45,121,382
|
Tensorflow Autoencoder with custom training examples from binary file
|
<p>I'm trying to adapt the Tensorflow Autoencoder code found here (<a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py" rel="nofollow noreferrer">https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py</a>) to use my own training examples. My training examples are single channel 29*29 (gray level) images saved as UINT8 values continuously in a binary file. I have created a module which creates data_batches which will guide the training. This is the module:</p>
<pre><code>import tensorflow as tf
# various initialization variables
BATCH_SIZE = 128
N_FEATURES = 9
def batch_generator(filenames, record_bytes):
    """ filenames is the list of files you want to read from.
    In this case, it contains only heart.csv
    """
    record_bytes = 29**2  # 29x29 images per record

    filename_queue = tf.train.string_input_producer(filenames)
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)  # skip the first line in the file
    _, value = reader.read(filename_queue)
    print(value)

    # record_defaults are the default values in case some of our columns are empty
    # This is also to tell tensorflow the format of our data (the type of the decode result)
    # for this dataset, out of 9 feature columns,
    # 8 of them are floats (some are integers, but to make our features homogenous,
    # we consider them floats), and 1 is string (at position 5)
    # the last column corresponds to the label and is an integer
    #record_defaults = [[1.0] for _ in range(N_FEATURES)]
    #record_defaults[4] = ['']
    #record_defaults.append([1])

    # read in the 10 columns of data
    content = tf.decode_raw(value, out_type=tf.uint8)
    #print(content)

    # convert the 5th column (present/absent) to the binary value 0 and 1
    #condition = tf.equal(content[4], tf.constant('Present'))
    #content[4] = tf.where(condition, tf.constant(1.0), tf.constant(0.0))

    # pack all UINT8 values into a tensor
    features = tf.stack(content)
    #print(features)

    # assign the last column to label
    #label = content[-1]

    # The bytes read represent the image, which we reshape
    # from [depth * height * width] to [depth, height, width].
    depth_major = tf.reshape(
        tf.strided_slice(content, [0], [record_bytes]),
        [1, 29, 29])

    # Convert from [depth, height, width] to [height, width, depth].
    uint8image = tf.transpose(depth_major, [1, 2, 0])

    # minimum number of elements in the queue after a dequeue, used to ensure
    # that the samples are sufficiently mixed
    # I think 10 times the BATCH_SIZE is sufficient
    min_after_dequeue = 10 * BATCH_SIZE

    # the maximum number of elements in the queue
    capacity = 20 * BATCH_SIZE

    # shuffle the data to generate BATCH_SIZE sample pairs
    data_batch = tf.train.shuffle_batch([uint8image], batch_size=BATCH_SIZE,
                                        capacity=capacity, min_after_dequeue=min_after_dequeue)
    return data_batch
</code></pre>
<p>I then adapt the Autoencoder code to load batch_xs from my input batch feeding code:</p>
<pre><code>from __future__ import division, print_function, absolute_import

# Various initialization variables
DATA_PATH1 = 'data/building_extract_train.bin'

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# custom imports
import data_reader

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 20
batch_size = 256
display_step = 1
examples_to_show = 10

# Network Parameters
n_hidden_1 = 256  # 1st layer num features
n_hidden_2 = 128  # 2nd layer num features
#n_input = 784  # edge-data input (img shape: 28*28)
n_input = 841  # edge-data input (img shape: 29*29)

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, n_input])

# create the data batches (queue)
# Accepts two parameters: the tensor containing the binary files and the size of a record
data_batch = data_reader.batch_generator([DATA_PATH1], 29**2)

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}

# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2

# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)

# Prediction
y_pred = decoder_op
# Targets (Labels) are the input data.
y_true = X

# Define loss and optimizer, minimize the squared error
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    sess.run(init)
    total_batch = int(mnist.train.num_examples/batch_size)
    # Training cycle
    for epoch in range(training_epochs):
        # Loop over all batches
        for i in range(total_batch):
            #batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            batch_xs = sess.run([data_batch])
            #print(batch_xs)
            #batch_xs = tf.reshape(batch_xs, [-1, n_input])
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1),
                  "cost=", "{:.9f}".format(c))
    coord.request_stop()
    coord.join(threads)
    print("Optimization Finished!")
</code></pre>
<p>Unfortunately, when running the code I get this error:
<strong>ValueError: Cannot feed value of shape (1, 128, 29, 29, 1) for Tensor 'Placeholder:0', which has shape '(?, 841)'</strong></p>
<p>My first question is why do I have Tensors of shape (1, 128, 29, 29, 1) when I was expecting (128,29,29,1)? Am I missing something here?</p>
<p>I also don't understand the following code and how can I alter it in order to compare it with my dataset:</p>
<pre><code> # Applying encode and decode over test set
encode_decode = sess.run(
y_pred, feed_dict={X: mnist.test.images[:examples_to_show]})
</code></pre>
<p>As I understand it, this code executes the y_pred part of the graph and passes the first 10 test images to the placeholder X, previously defined. If I were to use a second data queue for my test images (29x29) how would I input these into the above dictionary?</p>
<p>For example, using my code I could define a data_batch_eval as follows:</p>
<pre><code>data_batch_eval = data_reader.batch_generator([DATA_PATH_EVAL],29**2) # eval set
</code></pre>
<p>Nonetheless, how would I extract the first 10 test images to feed the dictionary?</p>
|
<blockquote>
<p>My first question is why do I have Tensors of shape (1, 128, 29, 29,
1) when I was expecting (128,29,29,1)? Am I missing something here?</p>
</blockquote>
<p>You need to remove the bracket in sess.run:</p>
<pre><code> batch_xs = sess.run(data_batch)
</code></pre>
<blockquote>
<p>Unfortunately, when running the code I get this error: ValueError:
Cannot feed value of shape (1, 128, 29, 29, 1) for Tensor
'Placeholder:0', which has shape '(?, 841)'</p>
</blockquote>
<p>You have declared your placeholder X with shape [None, 841], but you are feeding it an input of shape [128, 29, 29, 1]:</p>
<pre><code> X = tf.placeholder("float", [None, n_input])
</code></pre>
<p>Either change your feed input or your placeholder, so that both have the same size.</p>
<p>Note: your handling of the queues is inefficient. You could pass <code>data_batch</code> directly as the input to your network instead of round-tripping it through Python and the <code>feed_dict</code> mechanism.</p>
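<p>A minimal sketch of one way to make both sides agree, reshaping the dequeued batch in numpy before feeding (the scaling to [0, 1] is an assumption of mine, but usual for sigmoid autoencoders):</p>
<pre><code>batch_xs = sess.run(data_batch)  # shape (128, 29, 29, 1), dtype uint8
batch_xs = batch_xs.reshape(-1, 29 * 29).astype(np.float32) / 255.0  # -> (128, 841)
_, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
</code></pre>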
|
python|tensorflow|autoencoder
| 0
|
5,925
| 45,226,353
|
python pandas Transfer the format of the dataframe
|
<p>I have a dataframe named df like this (there are no duplicate rows in df):</p>
<pre><code>a_id b_id
111111 18
111111 17
222222 18
333333 14
444444 13
555555 18
555555 24
222222 13
222222 17
333333 17
</code></pre>
<p>And I want to transform it into a dataframe df_2 like this:</p>
<pre><code>a_one a_two b_list number_of_b
222222 444444 13 1
111111 222222 17,18 2
111111 333333 17 1
111111 222222 17 1
222222 333333 17 1
111111 555555 18 1
222222 555555 18 1
</code></pre>
<p>If two a_id values share the same b_id, they become a pair in df_2;</p>
<p>the b_list of df_2 holds the corresponding b_id values;</p>
<p>number_of_b is the length of b_list.</p>
|
<p>I have a solution:
First, make the combinations of <code>a_id</code> that have the same <code>b_id</code>:</p>
<pre><code>from itertools import combinations
df = df.groupby("b_id").apply(lambda x: list(combinations(x["a_id"], 2))).apply(pd.Series).stack()
</code></pre>
<p><code>df</code> now is:</p>
<pre><code> b_id
13 0 (444444, 222222)
17 0 (111111, 222222)
1 (111111, 333333)
2 (222222, 333333)
18 0 (111111, 222222)
1 (111111, 555555)
2 (222222, 555555)
</code></pre>
<p>Then split the Series, reset the index and concat the appearance of <code>b_id</code>:</p>
<pre><code>df = df.apply(pd.Series).reset_index().groupby([0,1])["b_id"].apply(lambda x:x.values).reset_index()
</code></pre>
<p>Now we get:</p>
<pre><code> 0 1 b_id
0 111111 222222 [17, 18]
1 111111 333333 [17]
2 111111 555555 [18]
3 222222 333333 [17]
4 222222 555555 [18]
5 444444 222222 [13]
</code></pre>
<p>This is almost what you need.
And for the exact results:</p>
<pre><code>df.columns = ["a_one", "a_two", "b_list"]
df["number_of_b"] = df.b_list.apply(len)
</code></pre>
<p>The final results:</p>
<pre><code> a_one a_two b_list number_of_b
0 111111 222222 [17, 18] 2
1 111111 333333 [17] 1
2 111111 555555 [18] 1
3 222222 333333 [17] 1
4 222222 555555 [18] 1
5 444444 222222 [13] 1
</code></pre>
<p>The whole code for clarity:</p>
<pre><code>from itertools import combinations
df = df.groupby("b_id").apply(lambda x: list(combinations(x["a_id"], 2))).apply(pd.Series).stack()
df = df.apply(pd.Series).reset_index().groupby([0,1])["b_id"].apply(lambda x:x.values).reset_index()
df.columns = ["a_one", "a_two", "b_list"]
df["number_of_b"] = df.b_list.apply(len)
</code></pre>
<p>This is not that fancy. Looking forward to better solutions!</p>
|
python|pandas|dataframe
| 1
|
5,926
| 44,828,514
|
Changing Tensorflow number of convolutional and pooling layers using MNIST dataset
|
<p>I am using Windows 10 pro, python 3.6.2rc1, Visual Studio 2017, and Tensorflow. I am working with Tensorflow example in its tutorial in the following link:</p>
<p><a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/layers</a></p>
<p>I have added another layer of convolution and pooling before flattening the last layer (3rd layer) to see if the accuracy changes. </p>
<p>The code I have added is as follows:</p>
<pre><code>## Input Tensor Shape: [batch_size, 7, 7, 64]
## Output Tensor Shape: [batch_size, 7, 7, 64]
conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[3, 3],
    padding=1,
    activation=tf.nn.relu)

pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])
</code></pre>
<p>The reason I have changed padding to 1 and stride to 1 is to make sure the size of output is the same as input. But after adding this new layer I get the following warnings and without showing any result the program ends:</p>
<blockquote>
<blockquote>
<p>Estimator is decoupled from Scikit Learn interface by moving into
separate class SKCompat. Arguments x, y and batch_size are only
available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
est = Estimator(...) -> est = SKCompat(Estimator(...))
WARNING:tensorflow:From E:\Apps\DA2CNNTest\TFHWDetection WIth More Layers\TFClassification\TFClassification\TFClassification.py:179: calling BaseEstimator.fit (from tensorflow.contrib.learn.python.learn.estimators.estimator) with batch_size is deprecated and will be removed after 2016-12-01.
Instructions for updating:
Estimator is decoupled from Scikit Learn interface by moving into
separate class SKCompat. Arguments x, y and batch_size are only
available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
est = Estimator(...) -> est = SKCompat(Estimator(...))
The thread 'MainThread' (0x5c8) has exited with code 0 (0x0).
The program '[13468] python.exe' has exited with code 1 (0x1).</p>
</blockquote>
</blockquote>
<p>Without adding this layer it works properly. In order to solve this problem I changed the conv3 and pool3 as follows:</p>
<pre><code>conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)

# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 3, 3, 64]
pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)
pool3_flat = tf.reshape(pool3, [-1, 3 * 3 * 64])
</code></pre>
<p>but then I got a different error at</p>
<pre><code>mnist_classifier.fit(
    x=train_data,
    y=train_labels,
    batch_size=100,
    steps=20000,
    monitors=[logging_hook])
</code></pre>
<p>which is as follows:</p>
<blockquote>
<blockquote>
<p>tensorflow.python.framework.errors_impl.NotFoundError: Key conv2d_2/bias not found in checkpoint
[[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]</p>
</blockquote>
</blockquote>
<p>The error refers exactly to monitors=[logging_hook].</p>
<p>My whole code is as follow and as you see I have commented the previous one with padding=1.</p>
<p>I would really appreciate guidance on what my mistake is and why it happens. Also, are the dimensions of my inputs and outputs in the 3rd layer correct?</p>
<p>Complete code:</p>
<pre><code>"""Convolutional Neural Network Estimator for MNIST, built with tf.layers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
tf.logging.set_verbosity(tf.logging.INFO)
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
input_layer = tf.reshape(features, [-1, 28, 28, 1])
# Input Tensor Shape: [batch_size, 28, 28, 1]
# Output Tensor Shape: [batch_size, 28, 28, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Input Tensor Shape: [batch_size, 28, 28, 32]
# Output Tensor Shape: [batch_size, 14, 14, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2
# Input Tensor Shape: [batch_size, 14, 14, 32]
# Output Tensor Shape: [batch_size, 14, 14, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2
# Input Tensor Shape: [batch_size, 14, 14, 64]
# Output Tensor Shape: [batch_size, 7, 7, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
'''Adding a new layer of conv and pool'''
## Input Tensor Shape: [batch_size, 7, 7, 32]
## Output Tensor Shape: [batch_size, 7, 7, 64]
#conv3 = tf.layers.conv2d(
# inputs=pool2,
# filters=64,
# kernel_size=[3, 3],
# padding=1,
# activation=tf.nn.relu)
## Input Tensor Shape: [batch_size, 7, 7, 64]
## Output Tensor Shape: [batch_size, 7, 7, 64]
#pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
#pool3_flat = tf.reshape(pool3, [-1, 7* 7 * 64])
# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 7, 7, 64]
conv3 = tf.layers.conv2d(
inputs=pool2,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 3, 3, 64]
pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)
'''End of manipulation'''
# Input Tensor Shape: [batch_size, 3, 3, 64]
# Output Tensor Shape: [batch_size, 3 * 3 * 64]
pool3_flat = tf.reshape(pool3, [-1, 3* 3 * 64])
# Input Tensor Shape: [batch_size, 3 * 3 * 64]
# Output Tensor Shape: [batch_size, 1024]
# dense(). Constructs a dense layer. Takes number of neurons and activation function as arguments.
dense = tf.layers.dense(inputs=pool3_flat, units=1024, activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == learn.ModeKeys.TRAIN)
logits = tf.layers.dense(inputs=dropout, units=10)
loss = None
train_op = None
# Calculate Loss (for both TRAIN and EVAL modes)
if mode != learn.ModeKeys.INFER:
onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == learn.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=0.001,
optimizer="SGD")
# Generate Predictions
# The logits layer of our model returns our predictions as raw values in a [batch_size, 10]-dimensional tensor.
predictions = {
"classes": tf.argmax(
input=logits, axis=1),
"probabilities": tf.nn.softmax(
logits, name="softmax_tensor")
}
# Return a ModelFnOps object
return model_fn_lib.ModelFnOps(
mode=mode, predictions=predictions, loss=loss, train_op=train_op)
def main(unused_argv):
# Load training and eval data
mnist = learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
# Create the Estimator
mnist_classifier = learn.Estimator(
model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
# Set up logging for predictions
# Log the values in the "Softmax" tensor with label "probabilities"
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
# Train the model
mnist_classifier.fit(
x=train_data,
y=train_labels,
batch_size=100,
steps=20000,
monitors=[logging_hook])
# Configure the accuracy metric for evaluation
#change metrics variable name
metricss = {
"accuracy":
learn.MetricSpec(
metric_fn=tf.metrics.accuracy, prediction_key="classes"),
}
#Evaluate the model and print results
#for i in range(100)
eval_results = mnist_classifier.evaluate(
x=eval_data[0:100], y=eval_labels[0:100], metrics=metricss)
print(eval_results)
if __name__ == "__main__":
tf.app.run()
</code></pre>
|
<p>The <code>NotFoundError: Key conv2d_2/bias not found in checkpoint</code> appears because <code>/tmp/mnist_convnet_model</code> still contains a checkpoint saved by your earlier two-conv-layer run; the new variables of the third layer cannot be restored from it. A simple fix is to give the changed model a fresh checkpoint directory (or delete the old one), for example:</p>
<pre><code>mnist_classifier = learn.Estimator(
    model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model_v2")
</code></pre>
<p>This fixes the problem with the MNIST example and also gives you a location where you can control checkpoints.</p>
|
python|tensorflow
| 0
|
5,927
| 45,267,202
|
How to improve curve fitting in matplotlib?
|
<p>I have a set of data, y is angular orientation, and x is the timestamp for each point of y.</p>
<p>The entire data set has many segments of angular orientation. In order to do curve fitting, I have split the data into its respective segments, storing each segment as a numpy array.</p>
<p>I then fit each segment with numpy.polyfit. However, because my data is purely experimental, I have no knowledge of which polynomial degree to use for numpy.polyfit, so I iterate through a range of polynomial degrees until I find the highest degree possible.</p>
<p>This is my code:</p>
<pre><code># Experimental data stored in lists: time_aralist and orienrad_aralist
# List stores the segments as arrays
fig = plt.figure()

# Finding curve fit
fittime_aralist, fitorienrad_aralist, fitorienrad_funclist = [], [], []
for j in range(len(time_aralist)):
    z, res, _, _, _ = np.polyfit(time_aralist[j], orienrad_aralist[j], 200, full=True)
    orienrad_func = np.poly1d(z)
    fittime_ara = np.linspace(time_aralist[j][0], time_aralist[j][-1], 10000)
    fitorienrad_ara = orienrad_func(fittime_ara)
    # Appending to list
    fittime_aralist.append(fittime_ara)
    fitorienrad_aralist.append(fitorienrad_ara)
    fitorienrad_funclist.append(orienrad_func)

# Plotting experimental data
for j in range(len(time_aralist)):
    plt.plot(time_aralist[j], orienrad_aralist[j], 'ro')
for j in range(len(fittime_aralist)):
    plt.plot(fittime_aralist[j], fitorienrad_aralist[j], 'k-')
</code></pre>
<p>This is what my plot looks like. Centered in the plot is one segment.</p>
<p>The black lines give the attempted curve fit, and the red dots are the experimental points.
<a href="https://i.stack.imgur.com/xhL9J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xhL9J.png" alt="enter image description here"></a></p>
<p>As seen in the diagram, the black curve fitted lines do not really fit the data points very well. My region of interest is the middle region of the segment, however the region is not fitted well as well, despite using the highest polynomial degree possible.</p>
<p>Could anyone provide me with any supplementary techniques, or an alternative code that could fit the data better?</p>
|
<h3>Cubic interpolation</h3>
<p>What about a cubic interpolation of the data?</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
x = np.linspace(6, 13, num=40)
y = 3 + 2.*x+np.sin(x)/2.+np.sin(4*x)/3.+ np.random.rand(len(x))/3.
fig, ax = plt.subplots()
ax.scatter(x,y, s=5, c="crimson")
f = interp1d(x, y, kind='cubic')
xdens = np.linspace(6, 13, num=400)
ydens = f(xdens)
ax.plot(xdens, ydens, label="interpolation")
ax.legend()
ax2 = ax.twinx()
yderiv = np.diff(ydens)/np.diff(xdens)
ax2.plot(xdens[:-1],yderiv, color="C2", label="derivative")
ax2.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/X8M6X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X8M6X.png" alt="enter image description here" /></a></p>
<h3>Spline interpolation</h3>
<p>The same can be achieved with a spline interpolation. The advantage is that <code>scipy.interpolate.splrep</code> lets you use the parameter <code>s</code> to smooth the result, and it also lets you evaluate the derivative of the spline directly.</p>
<pre><code>import numpy as np
import scipy.interpolate
import matplotlib.pyplot as plt
x = np.linspace(6, 13, num=40)
y = 3 + 2.*x+np.sin(x)/2.+np.sin(4*x)/3.+ np.random.rand(len(x))/3.
fig, ax = plt.subplots()
ax.scatter(x,y, s=5, c="crimson")
tck = scipy.interpolate.splrep(x, y, s=0.4)
xdens = np.linspace(6, 13, num=400)
ydens = scipy.interpolate.splev(xdens, tck, der=0)
ax.plot(xdens, ydens, label="spline interpolation")
ax.legend(loc=2)
ax2 = ax.twinx()
yderiv = scipy.interpolate.splev(xdens, tck, der=1)
ax2.plot(xdens, yderiv, color="C2", label="derivative")
ax2.legend(loc=4)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/sK3Je.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sK3Je.png" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib
| 1
|
5,928
| 57,277,290
|
Looping through two separate dataFrames, Haversine function, store the values
|
<p>I have two dataFrames that I want to loop through, apply a Haversine function to, and structure the results within a new array. I want to grab the lat, lng coordinates for the first restaurant in da_store, apply the Haversine function against all the lat, lng pairs of da_univ, store the results, and grab the minimum value. Ultimately, for each store, calculate the great-circle distance to all the da_univ coordinates and find the closest located university or college.</p>
<p>Here is my attempt so far with a nested for loop; I am struggling to save the results in the proper format and to find the min.</p>
<pre><code>for index_store, row_store in da_store.iterrows():
    store_lat = row_store['lat']
    store_lon = row_store['lon']
    store_list = []
    for index_univ, row_univ in da_univ.iterrows():
        univ_lat = row_univ['LATITUDE']
        univ_lon = row_univ['LONGITUDE']
        distance = haversine_np(store_lon, store_lat, univ_lon, univ_lat)
        print(distance)
</code></pre>
<p>Data Frame 1: da_store</p>
<pre><code>In [203]: da_store.head()
Out[203]:
Restaurant # Restaurant Name Address City State Zip Code lat lon
0 3006 Weymouth Dual Riverway Plaza, Weymouth, MA Weymouth MA 2191 42.244559 -70.936438
1 3009 Somerset Dual Somerset Plaza, Rt. 6, Somerset, MA Somerset MA 2725 41.734643 -71.152320
2 3502 Westboro Mass Pike West Mile Post 105; Mass Turnpike W., Westboro, MA Westboro MA 1581 42.253973 -71.663506
3 3503 Charlton Mass Pike East Mile Post 81; Mass Turnpike E., Charlton, MA Charlton MA 1507 42.101589 -72.018530
4 3504 Charlton Mass Pike West Mile Post 89; Mass Turnpike W., Charlton City, MA Charlton City MA 1508 42.101497 -72.018247
</code></pre>
<p>Data Frame 2: da_univ</p>
<pre><code>In [204]: da_univ.head()
Out[204]:
INSTNM ZIP CITY STABBR LATITUDE LONGITUDE
0 Hult International Business School 02141-1805 Cambridge MA 42.369968 -71.070645
1 New England College of Business and Finance 2110 Boston MA 42.353619 -71.056671
2 Assumption College 01609-1296 Worcester MA 42.294226 -71.828991
3 Bancroft School of Massage Therapy 1604 Worcester MA 42.268973 -71.778113
4 Bay State College 2116 Boston MA 42.351760 -71.076991
</code></pre>
<p>Haversine Function: haversine_np</p>
<pre><code>import numpy as np  # needed: the function below uses np, not the math imports
from math import radians, cos, sin, asin, sqrt

def haversine_np(lon1, lat1, lon2, lat2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)

    All args must be of equal length.
    """
    lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])

    dlon = lon2 - lon1
    dlat = lat2 - lat1

    a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2

    c = 2 * np.arcsin(np.sqrt(a))
    km = 6367 * c
    return km
</code></pre>
<p>I need to practice my basic programming. Thank you for your help!</p>
|
<p>If you use loops with pandas and numpy, chances are high that you are doing it wrong. Learn and apply the vectorized functions that these libraries provide:</p>
<pre><code># Build an index that contains every pairing of Store - University
idx = pd.MultiIndex.from_product([da_store.index, da_univ.index], names=['Store', 'Univ'])

# Pull the coordinates of the stores and the universities together
# We don't need their names here
df = pd.DataFrame(index=idx) \
    .join(da_store[['lat', 'lon']], on='Store') \
    .join(da_univ[['LATITUDE', 'LONGITUDE']], on='Univ')

def haversine_np(lon1, lat1, lon2, lat2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)

    All args must be of equal length.
    """
    lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])

    dlon = lon2 - lon1
    dlat = lat2 - lat1

    a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2

    c = 2 * np.arcsin(np.sqrt(a))
    km = 6367 * c
    return km

df['Distance'] = haversine_np(*df[['lat', 'lon', 'LATITUDE', 'LONGITUDE']].values.T)

# The closest university to each store
min_distance = df.loc[df.groupby('Store')['Distance'].idxmin(), 'Distance']

# Pulling everything together
min_distance.to_frame().join(da_store, on='Store').join(da_univ, on='Univ') \
    [['Restaurant Name', 'INSTNM', 'Distance']]
</code></pre>
<p>Result:</p>
<pre><code> Restaurant Name INSTNM Distance
Store Univ
0 1 Weymouth Dual New England College of Business and Finance 15.651923
1 4 Somerset Dual Bay State College 68.921108
2 3 Westboro Mass Pike West Bancroft School of Massage Therapy 9.580468
3 2 Charlton Mass Pike East Assumption College 26.514269
4 2 Charlton Mass Pike West Assumption College 26.508821
</code></pre>
|
python|pandas|dataframe
| 1
|
5,929
| 57,246,380
|
How to separate Quarter and year data
|
<p>Need to separate the quarter and year into separate columns</p>
<pre><code>df.head()
Period
Q1/2012
Q2/2012
Q3/2012
Q4/2012
Q1/2013
</code></pre>
<p>Want to have the column displayed as:</p>
<pre><code>Period Year
Q1 2012
Q2 2012
Q3 2012
Q4 2012
Q1 2013
</code></pre>
|
<pre><code>import pandas as pd
import numpy as np
df
Period
0 Q1/2012
1 Q2/2012
2 Q3/2012
3 Q4/2012
4 Q1/2013
df['Year'] = df['Period'].str.extract(r'(\w{4})', expand=False)
df['Period'] = df['Period'].str.extract(r'(.\d{1})',expand=False)
df
Period Year
0 Q1 2012
1 Q2 2012
2 Q3 2012
3 Q4 2012
4 Q1 2013
</code></pre>
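<p>Since the column is consistently <code>Qx/yyyy</code>, a simpler alternative is to split on the slash:</p>
<pre><code>df[['Period', 'Year']] = df['Period'].str.split('/', expand=True)
</code></pre>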
|
pandas|numpy|jupyter-notebook
| 0
|
5,930
| 45,905,103
|
Insert a predefined array onto large array and shift the position of the smaller array iteratively
|
<p>I would like to insert an array of size 2*2 filled with zeros into a larger array. Further, I would like to shift the position of the zero array left to right, top to bottom, iteratively.</p>
<pre><code>zero_array =[0 0
0 0]
large_array =[ 1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16]
</code></pre>
<p>Required result:</p>
<pre><code>1st iteration
[ 0 0 3 4
0 0 7 8
9 10 11 12
13 14 15 16]
2nd iteration
[ 1 0 0 4
5 0 0 8
9 10 11 12
13 14 15 16]
3rd iteration
[ 1 2 0 0
5 6 0 0
9 10 11 12
13 14 15 16]
4th iteration
[ 1 2 3 4
0 0 7 8
0 0 11 12
13 14 15 16]
</code></pre>
<p>And so on...</p>
|
<pre><code>import copy
import numpy as np

la = np.array(<insert array here>)
za = np.zeros((2, 2))
ma = copy.deepcopy(la)
for i in range(len(la) - len(za) + 1):
    for j in range(len(la) - len(za) + 1):
        la = copy.deepcopy(ma)
        la[i:i+len(za), j:j+len(za)] = za
        print la

# la = large array
# za = zero array
</code></pre>
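<p>Since these are plain numpy arrays, <code>ndarray.copy()</code> does the same job as <code>copy.deepcopy</code> with less machinery. A self-contained sketch of the same sliding-zero-block loop:</p>
<pre><code>import numpy as np

large = np.arange(1, 17).reshape(4, 4)  # the 4x4 example array
k = 2                                   # size of the zero block
for i in range(large.shape[0] - k + 1):
    for j in range(large.shape[1] - k + 1):
        out = large.copy()
        out[i:i+k, j:j+k] = 0
        print(out)
</code></pre>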
|
python|arrays|numpy|insert|iteration
| 0
|
5,931
| 45,968,411
|
Creating dataframe from tempfile
|
<p>Trying to load a temporary file into a pandas dataframe and getting an error. I'm not sure how to get the parsed data from the temp file into a dataframe to use later on.</p>
<pre><code>line = []
for x in readMe:
    line.append(" ".join(x.split()))

with tempfile.NamedTemporaryFile() as temp:
    for i in line:
        " ".join(i.split(None))
        temp.write("%s\n" % i)
    df = pd.read_csv(temp.name, sep=' ', names=curves, skiprows=dataStart, header=None)
</code></pre>
<pre><code>Traceback (most recent call last):
  File "C:/LAS Load.py", line 42, in <module>
    ...
    return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
|
<p>You didn't show us what <code>readMe</code> contains, in particular what type it causes <code>i</code> to have. Note that the TypeError itself points at Python 3: <code>NamedTemporaryFile</code> opens in binary mode (<code>'w+b'</code>) by default, so <code>temp.write()</code> expects bytes, not str. A trivial <code>temp.write(b'hello')</code> would confirm the file descriptor is writable. It's still not obvious what side effects line 41 has:</p>
<pre><code> " ".join(i.split(None))
</code></pre>
<p>The <code>'%s\n' % i</code> is essentially taking <code>str(i)</code>. Perhaps what we're looking for is <code>temp.write(('%s\n' % i).encode('utf8'))</code>, or opening the temp file in text mode, e.g. <a href="https://docs.python.org/3/library/tempfile.html" rel="nofollow noreferrer"><code>tempfile.NamedTemporaryFile(mode='w')</code></a>.</p>
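<p>A self-contained sketch of the binary-mode fix (note that reopening a <code>NamedTemporaryFile</code> by name while it is still open works on POSIX but not on Windows; there you would need <code>delete=False</code> and manual cleanup):</p>
<pre><code>import tempfile
import pandas as pd

lines = ['1 2 3', '4 5 6']  # illustrative stand-in for the parsed data
with tempfile.NamedTemporaryFile() as temp:
    for i in lines:
        temp.write(('%s\n' % i).encode('utf8'))  # bytes for the binary-mode file
    temp.flush()  # push buffered writes to disk before read_csv opens the file
    df = pd.read_csv(temp.name, sep=' ', header=None)
print(df)
</code></pre>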
|
python|pandas
| 0
|
5,932
| 45,953,344
|
Getting error on ML-Engine predict but local predict works fine
|
<p>I have searched a lot here but unfortunately could not find an answer.</p>
<p>I am running <code>TensorFlow 1.3</code> (installed via pip on macOS) on my local machine, and have created a model using the <a href="https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">provided</a> "<code>ssd_mobilenet_v1_coco</code>" checkpoints.</p>
<p>I managed to train locally and on the ML-Engine (Runtime 1.2), and successfully deployed my savedModel to the ML-Engine.</p>
<p>Local predictions (below code) work fine and I get the model results</p>
<pre><code>gcloud ml-engine local predict --model-dir=... --json-instances=request.json
FILE request.json: {"inputs": [[[242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 239], [242, 240, 23]]]}
</code></pre>
<p>However when deploying the model and trying to run on the ML-ENGINE for remote predictions with the code below:</p>
<pre><code>gcloud ml-engine predict --model "testModel" --json-instances request.json(SAME JSON FILE AS BEFORE)
</code></pre>
<p>I get this error:</p>
<pre><code>{
"error": "Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"NodeDef mentions attr 'data_format' not in Op<name=DepthwiseConv2dNative; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=padding:string,allowed=[\"SAME\", \"VALID\"]>; NodeDef: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, _output_shapes=[[-1,150,150,32]], data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], _device=\"/job:localhost/replica:0/task:0/cpu:0\"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)\n\t [[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise = DepthwiseConv2dNative[T=DT_FLOAT, _output_shapes=[[-1,150,150,32]], data_format=\"NHWC\", padding=\"SAME\", strides=[1, 1, 1, 1], _device=\"/job:localhost/replica:0/task:0/cpu:0\"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6, FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read)]]\")"
}
</code></pre>
<p>I saw something similar here: <a href="https://github.com/tensorflow/models/issues/1581" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/1581</a></p>
<p>That thread suggests the problem is with the "data_format" parameter.
But unfortunately I could not use that solution since I am already on TensorFlow 1.3.</p>
<p>It also seems that it might be a problem with MobilenetV1: <a href="https://github.com/tensorflow/models/issues/2153" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/2153</a></p>
<p>Any ideas?</p>
|
<p>I had a similar <a href="https://stackoverflow.com/q/46889203/7665986">issue</a>. It is caused by a mismatch between the Tensorflow versions used for training and inference. I solved it by using Tensorflow 1.4 for both training and inference.</p>
<p>Please refer to <a href="https://stackoverflow.com/a/47493624/7665986">this</a> answer.</p>
|
tensorflow|google-prediction
| 3
|
5,933
| 35,561,665
|
Tensorflow sequence2sequence model padding
|
<p>In the seq2seq models, paddings are applied to make all sequences in a bucket have the same lengths. And apart from this, it looks like no special handling is applied to the paddings:</p>
<ul>
<li>the encoder encodes the paddings as well</li>
<li>the basic decoder w/o attention decodes using the last encoding, which encodes the paddings</li>
<li>the decoder with attention attends to the hidden states of the padding inputs too</li>
</ul>
<p>It would be really helpful if this could be clarified: is it true that, basically, the paddings are just a special id/embedding, and the current seq2seq implementation treats them just like other embeddings? And no special mechanism is needed to ignore these paddings, for example when encoding a sequence containing paddings, or when decoding a sequence containing paddings with the attention-based decoder? So after padding, nothing special is done to the paddings, and we can just pretend a padding is just another embedding (apart from maybe when doing the weighted x-entropy using target_weights)?</p>
<p>If the above is true, then when testing a trained model, is padding needed at all (since at test time, each sentence is decoded separately and not in a batch)? --- It looks like from the code, at test time an input sentence is still bucketed first and then padded?</p>
|
<p>I think your basic premise is correct: the model does not treat the padding symbol differently than any other symbol. However, when packing the data tensors the padding always shows up at the end of decoder training examples after the 'EOS' symbol, and at the beginning of encoder training examples (because the encoder sequences get reversed.) </p>
<p>Presumably the model will learn that padding on the encoder side carries no real semantic information, as it wouldn't correlate with any of the other words... I suppose it could convey something about the length of the sequence once attention is factored in, but that wouldn't be a bad thing. The bucket approach is an attempt to limit the amount of excess padding.</p>
<p>On the decoder side the model will quickly learn that padding always occurs after the 'EOS' symbol so you ignore everything after the 'EOS' symbol is emitted anyway. </p>
<p>Padding is mainly useful because when running tensors in batches, the sequences all have to be the same size...so when testing one-at-a-time you really don't need to pad. However, when testing large validation sets it is still useful to run in batches...with padding.</p>
<p>(I have more questions and concerns about the <em>UNK</em> symbol myself.)</p>
|
tensorflow
| 1
|
5,934
| 35,345,283
|
Tensorflow Race conditions when chaining multiple queues
|
<p>I'd like to compute the mean of each of the RGB channels of a set of images in a multithreaded manner.</p>
<p>My idea was to have a <code>string_input_producer</code> that fills a <code>filename_queue</code> and then have a second <code>FIFOQueue</code> that is filled by <code>num_threads</code> threads that load images from the filenames in <code>filename_queue</code>, perform some ops on them and then enqueue the result.</p>
<p>This second queue is then accessed by one single thread (the main thread) that sums up all the values from the queue.</p>
<p>This is the code I have:</p>
<pre><code># variables for storing the mean and some intermediate results
mean = tf.Variable([0.0, 0.0, 0.0])
total = tf.Variable(0.0)
# the filename queue and the ops to read from it
filename_queue = tf.train.string_input_producer(filenames, num_epochs=1)
reader = tf.WholeFileReader()
_, value = reader.read(filename_queue)
image = tf.image.decode_jpeg(value, channels=3)
image = tf.cast(image, tf.float32)
sum = tf.reduce_sum(image, [0, 1])
num = tf.mul(tf.shape(image)[0], tf.shape(image)[1])
num = tf.cast(num, tf.float32)
# the second queue and its enqueue op
queue = tf.FIFOQueue(1000, dtypes=[tf.float32, tf.float32], shapes=[[3], []])
enqueue_op = queue.enqueue([sum, num])
# the ops performed by the main thread
img_sum, img_num = queue.dequeue()
mean_op = tf.add(mean, img_sum)
total_op = tf.add(total, img_num)
# adding new queue runner that performs enqueue_op on num_threads threads
qr = tf.train.QueueRunner(queue, [enqueue_op] * num_threads)
tf.train.add_queue_runner(qr)
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# the main loop being executed until the OutOfRangeError
# (when filename_queue does not yield elements anymore)
try:
    while not coord.should_stop():
        mean, total = sess.run([mean_op, total_op])
except tf.errors.OutOfRangeError:
    print 'All images processed.'
finally:
    coord.request_stop()
    coord.join(threads)
# some additional computations to get the mean
total_3channel = tf.pack([total, total, total])
mean = tf.div(mean, total_3channel)
mean = sess.run(mean)
print mean
</code></pre>
<p>The problem is each time I'm running this function I get different results, for example:</p>
<pre><code>[ 99.35347748 58.35261154 44.56705856]
[ 95.91153717 92.54192352 87.48269653]
[ 124.991745 121.83417511 121.1891861 ]
</code></pre>
<p>I blame this to race conditions. But where do those race conditions come from? Can someone help me out?</p>
|
<p>Your <code>QueueRunner</code> will start <code>num_threads</code> threads which will race to access your <code>reader</code> and push the result onto the queue. The order of images on <code>queue</code> will vary depending on which thread finishes first.</p>
<p><strong>Update Feb 12</strong></p>
<p>A simple example of chaining two queues, and summing up values from the second queue. When using <code>num_threads</code> > 1, there's some non-determinism in the intermediate values, but the final value will always be <code>30</code>. When <code>num_threads=1</code>, everything is deterministic</p>
<pre><code>tf.reset_default_graph()
queue_dtype = np.int32
# values_queue is a queue that will be filled with 0,1,2,3,4
# range_input_producer creates the queue and registers its queue_runner
value_queue = tf.range_input_producer(limit=5, num_epochs=1, shuffle=False)
value = value_queue.dequeue()
# value_squared_queue will be filled with 0,1,4,9,16
value_squared_queue = tf.FIFOQueue(capacity=50, dtypes=queue_dtype)
value_squared_enqueue = value_squared_queue.enqueue(tf.square(value))
value_squared = value_squared_queue.dequeue()
# value_squared_sum keeps running sum of squares of values
value_squared_sum = tf.Variable(0)
value_squared_sum_update = value_squared_sum.assign_add(value_squared)
# register queue_runner in the global queue runners collection
num_threads = 2
qr = tf.train.QueueRunner(value_squared_queue, [value_squared_enqueue] * num_threads)
tf.train.queue_runner.add_queue_runner(qr)
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
tf.start_queue_runners()
for i in range(5):
    sess.run([value_squared_sum_update])
    print sess.run([value_squared_sum])
</code></pre>
<p>You should see:</p>
<pre><code>[0]
[1]
[5]
[14]
[30]
</code></pre>
<p>Or sometimes (when the order of first 2 values is flipped)</p>
<pre><code>[1]
[1]
[5]
[14]
[30]
</code></pre>
|
python|multithreading|queue|race-condition|tensorflow
| 2
|
5,935
| 35,763,048
|
Comparing DataFrames of different lengths
|
<p>I'm trying to filter one DataFrame by the values of another DataFrame, but can't get it to work as the filter-by-DataFrame has a different size than the to-be-filtered DataFrame. I thought I need to use <code>set_index</code> to align both DataFrames somehow, but that may be wrong.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'a': [1, 1, 2, 3, 3, 4], 'b': [5, 3, 6, 2, 6, 4]})
df2 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [3, 5, 6, 3]})
dfa = df1.set_index('a')
>>> dfa
b
a
1 5
1 3
2 6
3 2
3 6
4 4
dfb = df2.set_index('a')
>>> dfa[dfa['b'] <= dfb['b']]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/ops.py", line 699, in wrapper
raise ValueError('Series lengths must match to compare')
ValueError: Series lengths must match to compare
</code></pre>
<p>The expected DataFrame would be <code>pd.DataFrame({'a': [1, 3, 3], 'b': [3, 2, 6]})</code>:</p>
<pre><code> a b
0 1 3
1 3 2
2 3 6
</code></pre>
<p>(all <code><a, b></code> rows disappear from <code>df1</code> for which the <code>b</code> value in <code>df2</code> is smaller than the <code>b</code> value in <code>df1</code>, where the <code>a</code> values match between <code>df1</code> and <code>df2</code>).</p>
<p><strong>Update</strong></p>
<p>A more naive way doesn't work either...</p>
<pre><code>>>> df1[(df1['a'] == df2['a']) & (df1['b'] <= df2['b'])]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/core/ops.py", line 699, in wrapper
raise ValueError('Series lengths must match to compare')
ValueError: Series lengths must match to compare
</code></pre>
|
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex_like.html" rel="nofollow"><code>reindex_like</code></a> to align your second dataframe to <code>df1</code>'s size, and then use your attempt with the addition of the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> method instead of comparing <code>df1['a']</code> with <code>df2['a']</code>:</p>
<pre><code>df3 = df2.reindex_like(df1)
In [93]: df1[(df3['a'].isin(df1['a'])) & (df1['b'] <= df3['b'])]
Out[93]:
a b
1 1 3
2 2 6
3 3 2
</code></pre>
|
python|pandas
| 2
|
5,936
| 35,589,820
|
Python 'map' function inserting NaN, possible to return original values instead?
|
<p>I am passing a dictionary to the <code>map</code> function to recode values in the column of a Pandas dataframe. However, I noticed that if there is a value in the original series that is not explicitly in the dictionary, it gets recoded to <code>NaN</code>. Here is a simple example:</p>
<p>Typing...</p>
<pre><code>s = pd.Series(['one','two','three','four'])
</code></pre>
<p>...creates the series</p>
<pre><code>0 one
1 two
2 three
3 four
dtype: object
</code></pre>
<p>But applying the map...</p>
<pre><code>recodes = {'one':'A', 'two':'B', 'three':'C'}
s.map(recodes)
</code></pre>
<p>...returns the series</p>
<pre><code>0 A
1 B
2 C
3 NaN
dtype: object
</code></pre>
<p>I would prefer that if any element in series <code>s</code> is not in the <code>recodes</code> dictionary, it remains unchanged. That is, I would prefer to return the series below (with the original <code>four</code> instead of <code>NaN</code>).</p>
<pre><code>0 A
1 B
2 C
3 four
dtype: object
</code></pre>
<p>Is there an easy way to do this, for example an option to pass to the <code>map</code> function? The challenge I am having is that I can't always anticipate all possible values that will be in the series I'm recoding - the data will be updated in the future and new values could appear. </p>
<p>Thanks!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html"><code>replace</code></a> instead of <code>map</code>:</p>
<pre><code>>>> s = pd.Series(['one','two','three','four'])
>>> recodes = {'one':'A', 'two':'B', 'three':'C'}
>>> s.map(recodes)
0 A
1 B
2 C
3 NaN
dtype: object
>>> s.replace(recodes)
0 A
1 B
2 C
3 four
dtype: object
</code></pre>
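<p>If you specifically want to keep <code>map</code> (for example, for its speed with a dict lookup on a large Series), a sketch of an equivalent is to map first and then fill the unmapped entries back in from the original:</p>
<pre><code>>>> s.map(recodes).fillna(s)
0       A
1       B
2       C
3    four
dtype: object
</code></pre>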
|
python|pandas|map-function
| 58
|
5,937
| 11,971,089
|
Adding a vector to matrix rows in numpy
|
<p>Is there a fast way in numpy to add a vector to every row or column of a matrix.</p>
<p>Lately, I have been tiling the vector to the size of the matrix, which can use a lot of memory. For example</p>
<pre><code> mat=np.arange(15)
mat.shape=(5,3)
vec=np.ones(3)
mat+=np.tile(vec, (5,1))
</code></pre>
<p>The other way I can think of is using a python loop, but loops are slow:</p>
<pre><code> for i in xrange(len(mat)):
mat[i,:]+=vec
</code></pre>
<p>Is there a fast way to do this in numpy without resorting to C extensions?</p>
<p>It would be nice to be able to virtually tile a vector, like a more flexible version of broadcasting. Or to be able to iterate an operation row-wise or column-wise, which you may almost be able to do with some of the ufunc methods.</p>
|
<p>For adding a 1d array to every row, broadcasting already takes care of things for you:</p>
<pre><code>mat += vec
</code></pre>
<p>However more generally you can use <code>np.newaxis</code> to coerce the array into a broadcastable form. For example:</p>
<pre><code>mat + np.ones(3)[np.newaxis,:]
</code></pre>
<p>While not necessary for adding the array to every row, this is necessary to do the same for column-wise addition:</p>
<pre><code>mat + np.ones(5)[:,np.newaxis]
</code></pre>
<p><strong>EDIT:</strong> as Sebastian mentions, for row addition, <code>mat + vec</code> already handles the broadcasting correctly. It is also faster than using <code>np.newaxis</code>. I've edited my original answer to make this clear.</p>
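<p>A minimal end-to-end sketch using the arrays from the question (cast to float up front so the in-place add doesn't fight the integer dtype):</p>
<pre><code>import numpy as np

mat = np.arange(15, dtype=float).reshape(5, 3)
row_vec = np.ones(3)
col_vec = np.ones(5)

mat += row_vec                 # adds row_vec to every row (plain broadcasting)
mat += col_vec[:, np.newaxis]  # adds col_vec to every column
</code></pre>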
|
vector|matrix|numpy
| 38
|
5,938
| 28,482,463
|
Numpy: Convert array of keys into array of values
|
<p>I have an array and a dict whose keys are the array's entries. How can I get an array whose entries are the values corresponding to the keys in the first array? What is the pythonic way, without using plain loops?</p>
<p>For example, I have an array:</p>
<pre><code> a = np.array([['1','2','3'],['10','4','5'],['9','34','6']],dtype=np.object)
[['1' '2' '3']
['10' '4' '5']
['9' '34' '6']]
</code></pre>
<p>and dict:</p>
<pre><code>d = {'1':23,'2':13,'3':3,'4':43,'5':230,'6':893,'7':98,'8':665,'9':33,'10':8797}
</code></pre>
<p>I want to get this array (for keys not in the dict, e.g. '34', I should get 0):</p>
<pre><code> b = np.array([[23,13,3],[8797,43,230],[33,0,893]])
[[ 23 13 3]
[8797 43 230]
[ 33 0 893]]
</code></pre>
|
<p>OK found the answer:</p>
<pre><code>a_flat = numpy.ndarray.flatten(a)
b = [d[x] if d.has_key(x) else 0 for x in a_flat]
b = numpy.reshape(c,a.shape)
print b
</code></pre>
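<p>An alternative sketch that avoids the explicit list comprehension, using <code>np.vectorize</code> to apply the dictionary lookup element-wise (this keeps the original shape automatically):</p>
<pre><code>import numpy as np

# d.get(key, 0) returns 0 for keys missing from the dict
b = np.vectorize(lambda k: d.get(k, 0))(a)
</code></pre>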
|
python|arrays|numpy|dictionary
| 0
|
5,939
| 28,834,341
|
why python failed to use or upgrade package installed by pip?
|
<p>This problem may seem simple to most of you but I'm really confused. I tried to install numpy & pandas using pip. So initially I just did:</p>
<pre><code>sudo pip install pandas.
</code></pre>
<p>It installed successfully, but when I tried <code>import pandas</code> I got this error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/pandas/__init__.py", line 7, in <module>
from . import hashtable, tslib, lib
File "pandas/src/numpy.pxd", line 157, in init pandas.hashtable (pandas/hashtable.c:22984)
ValueError: numpy.dtype has the wrong size, try recompiling
</code></pre>
<p>Then I assumed the numpy version was wrong (even though pandas said it installed a newer numpy). I tried to upgrade numpy using pip, but the system told me the requirement was already satisfied.</p>
<p>As I checked, all my pip-installed Python packages are in <code>/usr/local/lib/python2.7/site-packages</code>, in which the numpy version is 1.9.1 and pandas 0.15.1.</p>
<p>When I do <code>which python</code>, it shows the path <code>/usr/local/bin</code>, so I assume it's using that Python and that pip installed all the packages accordingly.</p>
<p>But when I typed in "python" in console, and tried:</p>
<pre><code>import numpy as np
np.version.version
</code></pre>
<p>It showed 1.6.1 instead of 1.9.1.
It seems numpy never got upgraded, or Python failed to use the numpy that pip installed.</p>
<p>How should I fix this?
Thanks</p>
|
<p>Uninstall numpy and then re-install the newest version of it.</p>
<pre><code>pip uninstall numpy
pip install numpy
</code></pre>
<p>I too was facing this problem earlier.</p>
|
python|numpy|pandas|path|installation
| 1
|
5,940
| 50,947,383
|
Python - How to fill string value with the modal value for the group
|
<p>I have a dataset like the one below. I want to populate the missing text with the value that is typical for the group. I have tried using ffill, but that doesn't help the entries that are blank at the start, and bfill similarly fails for the end. How can I do this?</p>
<pre><code>Group Name
1 Annie
2 NaN
3 NaN
4 David
1 NaN
2 Bertha
3 Chris
4 NaN
</code></pre>
<p>Desired Output:</p>
<pre><code>Group Name
1 Annie
2 Bertha
3 Chris
4 David
1 Annie
2 Bertha
3 Chris
4 David
</code></pre>
|
<p>Using <code>collections.Counter</code> to create a modal mapping by group:</p>
<pre><code>from collections import Counter
s = df.dropna(subset=['Name'])\
.groupby('Group')['Name']\
.apply(lambda x: Counter(x).most_common()[0][0])
df['Name'] = df['Name'].fillna(df['Group'].map(s))
print(df)
Group Name
0 1 Annie
1 2 Bertha
2 3 Chris
3 4 David
4 1 Annie
5 2 Bertha
6 3 Chris
7 4 David
</code></pre>
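<p>A sketch of an equivalent without <code>Counter</code>, using pandas' own <code>Series.mode</code> (assuming every group has at least one non-null name):</p>
<pre><code>modes = (df.dropna(subset=['Name'])
           .groupby('Group')['Name']
           .agg(lambda x: x.mode().iloc[0]))
df['Name'] = df['Name'].fillna(df['Group'].map(modes))
</code></pre>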
|
python|pandas|dataframe|pandas-groupby
| 4
|
5,941
| 50,899,488
|
Tensorflow python: reshape input [batchsize] to tensor [batchsize, 2] with specific order
|
<p>I have a tensor (shape=[batchsize]). I want to reshape the tensor in a specific order and into shape=[-1,2]. But I want to have:</p>
<ol>
<li>Element at [0,0]</li>
<li>Element at [1,0]</li>
<li>Element at [0,1]</li>
<li>Element at [1,1]</li>
<li>Element at [0,2]</li>
<li>Element at [0,3]</li>
<li>Element at [2,1]</li>
<li>Element at [3,1] and so on for an unknow batchsize.</li>
</ol>
<p>Here is an example code with a tensor range=(0 to input=8).</p>
<pre><code>import tensorflow as tf
import numpy as np
batchsize = tf.placeholder(shape=[], dtype=tf.int32)
x = tf.range(0, batchsize, 1)
x = tf.reshape(x, shape=[2, -1])
y = tf.transpose(x)
z = tf.reshape(y, shape=[-1, 2])
input = 8
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
msg = sess.run([z], feed_dict={batchsize: input})
print(msg)
</code></pre>
<p>Now my output is: </p>
<pre><code>[array([[0, 4],
[1, 5],
[2, 6],
[3, 7]], dtype=int32)]
</code></pre>
<p>But I want the output to be:</p>
<pre><code>[array([[0, 2],
[1, 3],
[4, 6],
[5, 7]], dtype=int32)]
</code></pre>
<p>Keep in mind I do not know how big batchsize is; I just set input = 8 as an example. Furthermore, here I want to break the order after every 2nd element, and in the future I would also like this to be flexible. In my real code the tensor <code>x</code> is not a range array but complex random numbers, so you cannot sort by value in any way. I made this code purely for demonstration purposes.</p>
|
<p>You could try</p>
<pre><code> tf.reshape(tf.matrix_transpose(tf.reshape(x, [-1, 2, 2])), [-1, 2])
</code></pre>
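<p>A quick NumPy sketch of the same reshape/transpose/reshape trick, to check the ordering against the expected output (<code>transpose(0, 2, 1)</code> plays the role of <code>tf.matrix_transpose</code> here):</p>
<pre><code>import numpy as np

x = np.arange(8)
print(x.reshape(-1, 2, 2).transpose(0, 2, 1).reshape(-1, 2))
# [[0 2]
#  [1 3]
#  [4 6]
#  [5 7]]
</code></pre>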
|
python|tensorflow|reshape|tensor
| 0
|
5,942
| 50,914,759
|
does pandas have function similar to rowSums in R
|
<p>I have dataframe:</p>
<p>df:</p>
<pre><code>customer sample1 sample2 sample3 sample4
costprice1 10 21 32 43
costprice2 12 24 15 18
costprice3 1 2 15 8
costprice4 16 30 44 58
costprice5 18 33 48 63
costprice6 20 36 52 68
costprice7 22 39 56 73
costprice8 24 42 60 78
costprice9 26 45 64 83
costprice10 28 48 68 88
</code></pre>
<p>I would like to drop rows that have values less than 15 in more than 2 columns,</p>
<p>so this would be dropped </p>
<pre><code>costprice3 1 2 15 8
</code></pre>
<p>In R we can do </p>
<pre><code>df[rowSums(df < 15) <=2 , , drop = FALSE]
</code></pre>
<p>Can this be done in pandas? So far I have only used pandas <code>any</code> to filter rows:</p>
<pre><code>df_fitered = df[(df > threshold).any(1)]
</code></pre>
|
<pre><code>In [16]: df[df.select_dtypes(['number']).lt(15).sum(axis=1) < 3]
Out[16]:
customer sample1 sample2 sample3 sample4
0 costprice1 10 21 32 43
1 costprice2 12 24 15 18
3 costprice4 16 30 44 58
4 costprice5 18 33 48 63
5 costprice6 20 36 52 68
6 costprice7 22 39 56 73
7 costprice8 24 42 60 78
8 costprice9 26 45 64 83
9 costprice10 28 48 68 88
</code></pre>
<p>Bonus answer:</p>
<pre><code>mask = <condition1>
df[mask & (df.select_dtypes(['number']).lt(15).sum(axis=1) < 3)]
</code></pre>
|
python-2.7|pandas
| 3
|
5,943
| 50,719,981
|
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically
|
<pre><code>sql='delete a from sample a, TEMPLATE b where a.emailid=b.emailid '
df=psql.read_sql_query(sql,con=engine)
print df.head()
</code></pre>
<p>How do I delete common rows using pandas without reading the whole table or CSV?
Kindly suggest the best way; reading the table with <code>pd.read_sql_table</code> is taking a lot of time.</p>
|
<p>Pandas uses sqlalchemy under the hood; simply run your query using the engine.</p>
<pre><code>with engine.begin() as conn:
conn.execute(sql)
# safety checks go here, once the end of the with clause is reached the trans is committed.
</code></pre>
|
mysql|python-2.7|pandas
| 1
|
5,944
| 50,893,052
|
Pandas pivot_table with pd.grouper and Margins
|
<p><code>margins=True</code> will not work in pandas <code>pivot_table</code> when <code>columns</code> is set to a <code>pd.Grouper</code> over a datetime column. This is my code, which works as expected:</p>
<pre><code>p = df.pivot_table(values='Qty', index=['ItemCode', 'LineItem'],columns=pd.Grouper(key = 'Date', freq='W'), aggfunc=np.sum, fill_value=0)
</code></pre>
<p>but if I add <code>margins=True</code> to get a subtotal, I get an error saying:</p>
<blockquote>
<p>KeyError: "[TimeGrouper(key='In time', freq=, axis=0, sort=True, closed='left', label='left', how='mean', convention='e', base=0)] not in index"</p>
</blockquote>
|
<p>That looks strange! I wonder what causes the pivot table to use the TimeGrouper itself as the index. It seems like a bug, but I'm not sure. In any case, I think pivot tables aren't able to do sub-index margins, so here is a solution with groupby instead:</p>
<p><strong>Sample data</strong></p>
<pre><code>import pandas as pd
from random import randint, choice
from string import ascii_letters, ascii_lowercase
# Say we have a dataframe with 500 rows and 20 different items
df_len = range(500)
item_codes = [''.join([choice(ascii_letters) for _ in range(10)]) for __ in range(20)]
df = pd.DataFrame({
'ItemCode': [choice(item_codes) for __ in df_len],
'Date': [pd.datetime.today() - pd.Timedelta(randint(0, 28), 'D') for _ in df_len],
'Qty': [randint(1,10) for _ in df_len],
'LineItem': [choice(('a', 'b', 'c')) for _ in df_len],
})
df.head()
ItemCode Date Qty LineItem
0 IFaEmWGHTJ 2020-05-21 13:29:56.687412 8 a
1 jvLqoLfBcd 2020-05-23 13:29:56.687509 6 a
2 GOPFJEoSUm 2020-05-13 13:29:56.687550 1 a
3 qJqzzgDTaa 2020-05-03 13:29:56.687575 5 a
4 BCvRrgcpFD 2020-05-24 13:29:56.690114 8 b
</code></pre>
<p><strong>Solution</strong></p>
<pre><code>res = (df.groupby(['ItemCode', 'LineItem', pd.Grouper(key='Date', freq='W')])['Qty']
.count()
.unstack()
.fillna(0))
res.loc[('column_total', ''), :] = res.sum(axis=0)
res.loc[:,'row_total'] = res.sum(axis=1)
</code></pre>
<p><strong>Result</strong></p>
<pre><code>| | 2020-05-03 | 2020-05-10 | 2020-05-17 | 2020-05-24 | 2020-05-31 | row_total |
|:---------------------|-------------:|-------------:|-------------:|-------------:|-------------:|------------:|
| ('CtdClujjRF', 'a') | 1 | 2 | 2 | 0 | 0 | 5 |
| ('CtdClujjRF', 'b') | 0 | 3 | 1 | 1 | 1 | 6 |
| ('CtdClujjRF', 'c') | 1 | 1 | 2 | 2 | 1 | 7 |
| ('DnQcEbHoVL', 'a') | 0 | 2 | 1 | 1 | 1 | 5 |
| ('DnQcEbHoVL', 'b') | 1 | 1 | 1 | 2 | 2 | 7 |
... ... ... ... ... ... ...
| ('sxFnkCcSJu', 'c') | 0 | 2 | 2 | 3 | 0 | 7 |
| ('vOaWNHgOgm', 'a') | 0 | 5 | 1 | 7 | 1 | 14 |
| ('vOaWNHgOgm', 'b') | 1 | 0 | 1 | 3 | 4 | 9 |
| ('vOaWNHgOgm', 'c') | 1 | 2 | 2 | 5 | 1 | 11 |
| ('column_total', '') | 64 | 128 | 115 | 127 | 66 | 500 |
</code></pre>
|
python|pandas|pivot-table
| 1
|
5,945
| 50,965,581
|
Unexpected error when trying to concatenate dataframes with categorical data
|
<p>I've got two dataframes df1 and df2 that look like this:</p>
<pre><code>#df1
counts freqs
categories
automatic 13 0.40625
manual 19 0.59375
#df2
counts freqs
categories
Straight Engine 18 0.5625
V engine 14 0.4375
</code></pre>
<p>Could anyone explain why <code>pd.concat([df1, df2], axis = 0)</code> will not give me this:</p>
<pre><code> counts freqs
categories
automatic 13 0.40625
manual 19 0.59375
Straight Engine 18 0.5625
V engine 14 0.4375
</code></pre>
<hr>
<p><strong>Here is what I've tried:</strong></p>
<p>1 - Using <code>pd.concat()</code></p>
<p>I'm suspecting that the way I've built these dataframes may be the source of the issue.
And here is how I've ended up with these particular dataframes:</p>
<pre><code># imports
import pandas as pd
from pydataset import data # pip install pydataset to get datasets from R
# load data
df_mtcars = data('mtcars')
# recode dummy variables to more descriptive values (use .loc to avoid chained assignment)
df_mtcars.loc[df_mtcars['am'] == 0, 'am'] = 'manual'
df_mtcars.loc[df_mtcars['am'] == 1, 'am'] = 'automatic'
df_mtcars.loc[df_mtcars['vs'] == 0, 'vs'] = 'Straight Engine'
df_mtcars.loc[df_mtcars['vs'] == 1, 'vs'] = 'V engine'
# describe categorical variables
df1 = pd.Categorical(df_mtcars['am']).describe()
df2 = pd.Categorical(df_mtcars['vs']).describe()
</code></pre>
<p>I understand that 'categories' is what is causing the problems here, since <code>df_con = pd.concat([df1, df2], axis = 0)</code> raises this error:</p>
<blockquote>
<p>TypeError: categories must match existing categories when appending</p>
</blockquote>
<p>But it confuses me that this is ok: </p>
<pre><code># code
df_con = pd.concat([df1, df2], axis = 1)
# output:
counts freqs counts freqs
categories
automatic 13.0 0.40625 NaN NaN
manual 19.0 0.59375 NaN NaN
Straight Engine NaN NaN 18.0 0.5625
V engine NaN NaN 14.0 0.4375
</code></pre>
<p>2 - Using <code>df.append()</code> raises the same error as <code>pd.concat()</code></p>
<p>3 - Using <code>pd.merge()</code> sort of works, but I'm losing the indexes:</p>
<pre><code># Code
df_merge = pd.merge(df1, df2, how = 'outer')
# Output
counts freqs
0 13 0.40625
1 19 0.59375
2 18 0.56250
3 14 0.43750
</code></pre>
<p>4 - Using <code>pd.concat()</code> on transposed dataframes</p>
<p>Since <code>pd.concat()</code> worked with <code>axis = 1</code>, I thought I would get there using transposed dataframes.</p>
<pre><code># df1.T
categories automatic manual
counts 13.00000 19.00000
freqs 0.40625 0.59375
# df2.T
categories Straight Engine V engine
counts 18.0000 14.0000
freqs 0.5625 0.4375
</code></pre>
<p>But still no success:</p>
<pre><code># code
df_con = pd.concat([df1.T, df2.T], axis = 1)
>>> TypeError: categories must match existing categories when appending
</code></pre>
<p>By the way, what I was hoping for here is this:</p>
<pre><code>categories automatic manual Straight Engine V engine
counts 13.00000 19.00000 18.0000 14.0000
freqs 0.40625 0.59375 0.5625 0.4375
</code></pre>
<p>Still kind of works with <code>axis = 0</code> though:</p>
<pre><code># code
df_con = pd.concat([df1.T, df2.T], axis = 0)
# Output
categories automatic manual Straight Engine V engine
counts 13.00000 19.00000 NaN NaN
freqs 0.40625 0.59375 NaN NaN
counts NaN NaN 18.0000 14.0000
freqs NaN NaN 0.5625 0.4375
</code></pre>
<p>But that is still far from what I'm trying to accomplish.</p>
<p>Now I'm thinking that it would be possible to strip the 'category' info from df1 and df2, but I haven't been able to find out how to do that yet.</p>
<p>Thank you for any other suggestions!</p>
|
<p>Try this:</p>
<pre><code>pd.concat([df1.reset_index(),df2.reset_index()],ignore_index=True)
</code></pre>
<p>Output:</p>
<pre><code> categories counts freqs
0 automatic 13 0.40625
1 manual 19 0.59375
2 Straight Engine 18 0.56250
3 V engine 14 0.43750
</code></pre>
<p>To get the categories back as the index, follow this:</p>
<pre><code>pd.concat([df1.reset_index(),df2.reset_index()],ignore_index=True).set_index('categories')
</code></pre>
<p>Output:</p>
<pre><code> counts freqs
categories
automatic 13 0.40625
manual 19 0.59375
Straight Engine 18 0.56250
V engine 14 0.43750
</code></pre>
<p>For more details, see <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer">the docs</a>.</p>
|
python|pandas|concatenation|categorical-data
| 2
|
5,946
| 33,275,096
|
Using Pandas to merge 2 lists of dicts with common elements
|
<p>So I have 2 lists of dicts:</p>
<pre><code>list_yearly = [
{'name':'john',
'total_year': 107
},
{'name':'cathy',
'total_year':124
},
]
list_monthly = [
{'name':'john',
'month':'Jan',
'total_month': 34
},
{'name':'cathy',
'month':'Jan',
'total_month':78
},
{'name':'john',
'month':'Feb',
'total_month': 73
},
{'name':'cathy',
'month':'Feb',
'total_month':46
},
]
</code></pre>
<p>The goal is to get a final dataset which looks like this :</p>
<pre><code>{'name':'john',
'total_year': 107,
'trend':[{'month':'Jan', 'total_month': 34},{'month':'Feb', 'total_month': 73}]
},
{'name':'cathy',
'total_year':124,
'trend':[{'month':'Jan', 'total_month': 78},{'month':'Feb', 'total_month': 46}]
},
</code></pre>
<p>Since my dataset covers a large number of students across all 12 months of a particular year, I am using Pandas for the data munging. This is how I went about it:</p>
<p>First, combine both lists into a single dataframe using the <strong>name</strong> key.</p>
<pre><code>In [5]: df = pd.DataFrame(list_yearly).merge(pd.DataFrame(list_monthly))
In [6]: df
Out[6]:
name total_year month total_month
0 john 107 Jan 34
1 john 107 Feb 73
2 cathy 124 Jan 78
3 cathy 124 Feb 46
</code></pre>
<p>Then create a trend column as a dict</p>
<pre><code>ln [7]: df['trend'] = df.apply(lambda x: [x[['month', 'total_month']].to_dict()], axis=1)
In [8]: df
Out[8]:
name total_year month total_month \
0 john 107 Jan 34
1 john 107 Feb 73
2 cathy 124 Jan 78
3 cathy 124 Feb 46
trend
0 [{u'total_month': 34, u'month': u'Jan'}]
1 [{u'total_month': 73, u'month': u'Feb'}]
2 [{u'total_month': 78, u'month': u'Jan'}]
3 [{u'total_month': 46, u'month': u'Feb'}]
</code></pre>
<p>And, use <code>to_dict(orient='records')</code> method of selected columns to convert it back into list of dicts:</p>
<pre><code>In [9]: df[['name', 'total_year', 'trend']].to_dict(orient='records')
Out[9]:
[{'name': 'john',
'total_year': 107,
'trend': [{'month': 'Jan', 'total_month': 34}]},
{'name': 'john',
'total_year': 107,
'trend': [{'month': 'Feb', 'total_month': 73}]},
{'name': 'cathy',
'total_year': 124,
'trend': [{'month': 'Jan', 'total_month': 78}]},
{'name': 'cathy',
'total_year': 124,
'trend': [{'month': 'Feb', 'total_month': 46}]}]
</code></pre>
<p>As is evident, the final dataset is not exactly what I want. Instead of 2 dicts, each with both months in its trend, I get 4 dicts with the months separate. How can I fix this? I would prefer fixing it within Pandas itself rather than taking this final output and reducing it to the desired state afterwards.</p>
|
<p>Within pandas, try:</p>
<pre><code>df1 = pd.DataFrame(list_yearly)
df2 = pd.DataFrame(list_monthly)
df = df1.set_index('name').join(pd.DataFrame(df2.groupby('name').apply(\
lambda gp: gp.transpose().to_dict().values())))
</code></pre>
<p>Update: with the names removed from the inner dicts and the result converted to a list of dicts:</p>
<pre><code>df1 = pd.DataFrame(list_yearly)
df2 = pd.DataFrame(list_monthly)
keep_columns = [c for c in df2.columns if not c == 'name']
# within pandas
df = df1.set_index('name').join(pd.DataFrame(df2.groupby('name').apply(\
lambda gp: gp[keep_columns].transpose().to_dict().values()))) \
.reset_index()
data = [row.to_dict() for _, row in df.iterrows()]
</code></pre>
<p>It remains to rename '0' to 'trend'.</p>
|
python|list|pandas|dictionary|data-munging
| 1
|
5,947
| 66,577,309
|
How to find the `True` values' corresponding index and column in a large Pandas DataFrame?
|
<p>I have a large DataFrame <code>df</code> whose values are mostly <code>False</code>.</p>
<p>About 1% of the values of <code>df</code> are <code>True</code>.</p>
<p>How can I display the <code>True</code> values' corresponding index and column?</p>
<p>Here's the index of <code>df</code></p>
<pre><code>df.index
DatetimeIndex(['2007-04-23', '2007-04-24', '2007-04-25', '2007-04-26',
'2007-04-27', '2007-04-30', '2007-05-02', '2007-05-03',
'2007-05-04', '2007-05-07',
...
'2021-02-24', '2021-02-25', '2021-02-26', '2021-03-02',
'2021-03-03', '2021-03-04', '2021-03-05', '2021-03-08',
'2021-03-09', '2021-03-10'],
dtype='datetime64[ns]', name='date', length=3426, freq=None)
</code></pre>
<p>Here's the columns of <code>df</code></p>
<pre><code>df.columns
Index(['0015', '0050', '0051', '0052', '0053', '0054', '0055', '0056', '0057',
'0058',
...
'9944', '9945', '9946', '9949', '9950', '9951', '9955', '9958', '9960',
'9962'],
dtype='object', name='stock_id', length=1947)
</code></pre>
<p>And <code>df.shape</code> returns <code>(3426, 1947)</code>.</p>
<p>Suppose only the values of <code>df['1234']['2020-01-05']</code> and <code>df['4321']['2020-03-07']</code> are true.</p>
<p>How can I write a function whose input is <code>df</code> and whose output are <code>df['1234']['2020-01-05']</code> and <code>df['4321']['2020-03-07']</code>?</p>
|
<p>Suppose we have this:</p>
<pre><code># Test data
a b c
2010 True False False
2011 False False True
</code></pre>
<p>You can try <code>np.where</code>:</p>
<pre><code>x,y = np.where(df)
indexes = df.index[x]
columns = df.columns[y]
print(indexes, columns)
</code></pre>
<p>Output:</p>
<pre><code>Index(['2010', '2011'], dtype='object') Index(['a', 'c'], dtype='object')
</code></pre>
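<p>An alternative sketch that stays entirely in pandas: <code>stack</code> the boolean frame and index it by itself, which yields the <code>(index, column)</code> pairs of the <code>True</code> cells directly:</p>
<pre><code>s = df.stack()
true_cells = s[s].index.tolist()  # [('2010', 'a'), ('2011', 'c')] for the test data
</code></pre>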
|
python|pandas|dataframe
| 1
|
5,948
| 66,425,987
|
I get this error message: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
|
<p>Here is my code:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
import sympy as sp
sp.init_printing()
from sympy import *
from sympy import Symbol, Matrix
def make_Aij(m, n, a='a') :
from sympy import Symbol, Matrix # just in case they aren't already loaded
A = zeros(m, n)
for i in range(0, m) :
for j in range(0, n) :
s = a+'_'+str(i)+str(j)
exec (s + "= Symbol('" + s + "')") # go look up what "exec" does!
exec ("A[i, j] = " + s)
return A
C = make_Aij(1, 2, 'c')
C
z = C[0,0]
g = C[0,0]**2-C[0,1]**2
g
h = C[0,0]
h
#Function defined
def function(h):
return g
g
#Jacobian working with sympy
q = diff(g,C[0,0])
q
#Jacobian final
def jacobian(h):
return q
q
Hf = diff(q,C[0,0])
Hf
#Hessianf
def Hessianf(h):
return Hf
Hf
from scipy.optimize import Bounds
bounds = Bounds(-1, 1)
h0 = (0*C[0,1])
res = minimize(function, h0, method='trust-constr', jac=jacobian, hess=Hessianf,
options={'verbose': 1}, bounds=bounds)
</code></pre>
<p>Error message:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-41-94c28f7f32b2> in <module>
----> 1 res = minimize(function, h0, method='trust-constr', jac=jacobian, hess=Hessianf,
2
3 options={'verbose': 1}, bounds=bounds)
~\anaconda3\lib\site-packages\scipy\optimize\_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
626 constraints, callback=callback, **options)
627 elif meth == 'trust-constr':
--> 628 return _minimize_trustregion_constr(fun, x0, args, jac, hess, hessp,
629 bounds, constraints,
630 callback=callback, **options)
~\anaconda3\lib\site-packages\scipy\optimize\_trustregion_constr\minimize_trustregion_constr.py in _minimize_trustregion_constr(fun, x0, args, grad, hess, hessp, bounds, constraints, xtol, gtol, barrier_tol, sparse_jacobian, callback, maxiter, verbose, finite_diff_rel_step, initial_constr_penalty, initial_tr_radius, initial_barrier_parameter, initial_barrier_tolerance, factorization_method, disp)
507
508 elif method == 'tr_interior_point':
--> 509 _, result = tr_interior_point(
510 objective.fun, objective.grad, lagrangian_hess,
511 n_vars, canonical.n_ineq, canonical.n_eq,
~\anaconda3\lib\site-packages\scipy\optimize\_trustregion_constr\tr_interior_point.py in tr_interior_point(fun, grad, lagr_hess, n_vars, n_ineq, n_eq, constr, jac, x0, fun0, grad0, constr_ineq0, jac_ineq0, constr_eq0, jac_eq0, stop_criteria, enforce_feasibility, xtol, state, initial_barrier_parameter, initial_tolerance, initial_penalty, initial_trust_radius, factorization_method)
319 while True:
320 # Solve SQP subproblem
--> 321 z, state = equality_constrained_sqp(
322 subprob.function_and_constraints,
323 subprob.gradient_and_jacobian,
~\anaconda3\lib\site-packages\scipy\optimize\_trustregion_constr\equality_constrained_sqp.py in equality_constrained_sqp(fun_and_constr, grad_and_jac, lagr_hess, x0, fun0, grad0, constr0, jac0, stop_criteria, state, initial_penalty, initial_trust_radius, factorization_method, trust_lb, trust_ub, scaling)
80 Z, LS, Y = projections(A, factorization_method)
81 # Compute least-square lagrange multipliers
---> 82 v = -LS.dot(c)
83 # Compute Hessian
84 H = lagr_hess(x, v)
~\anaconda3\lib\site-packages\scipy\sparse\linalg\interface.py in dot(self, x)
416
417 if x.ndim == 1 or x.ndim == 2 and x.shape[1] == 1:
--> 418 return self.matvec(x)
419 elif x.ndim == 2:
420 return self.matmat(x)
~\anaconda3\lib\site-packages\scipy\sparse\linalg\interface.py in matvec(self, x)
230 raise ValueError('dimension mismatch')
231
--> 232 y = self._matvec(x)
233
234 if isinstance(x, np.matrix):
~\anaconda3\lib\site-packages\scipy\sparse\linalg\interface.py in _matvec(self, x)
528
529 def _matvec(self, x):
--> 530 return self.__matvec_impl(x)
531
532 def _rmatvec(self, x):
~\anaconda3\lib\site-packages\scipy\optimize\_trustregion_constr\projections.py in least_squares(x)
151 # lu_sol = [aux]
152 # [ z ]
--> 153 lu_sol = solve(v)
154 # return z = inv(A A.T) A x
155 return lu_sol[n:m+n]
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
</code></pre>
<p>I simply want to calculate the minimum along the c00 axis of the two-variable function f(c00, c01) with some bounds. Therefore, I should get the value of c00 as a function of c01 (c01 being a parameter here). I think the problem comes from mixing sympy functions with scipy.optimize.</p>
<p>Thanks in advance.</p>
|
<p>The symbolic <code>sympy</code> doesn't mix well with the numeric <code>scipy</code> and <code>numpy</code>. The numeric functions don't understand <code>sympy</code>'s symbols.</p>
<p>To get things to work together, all symbolic functions need to be converted to numpy equivalents. <code>sympy</code>'s <code>lambdify</code> can convert a sympy expression to a numpy function. In your code you could employ it as follows:</p>
<pre class="lang-py prettyprint-override"><code>np_function = sp.lambdify(h, function(h))
np_jacobian = sp.lambdify(h, jacobian(h))
np_hessian = sp.lambdify(h, Hessianf(h))
res = minimize(np_function, h0, method='trust-constr', jac=np_jacobian, hess=np_hessian,
options={'verbose': 1}, bounds=bounds)
</code></pre>
|
python|numpy|scipy|sympy
| 1
|
5,949
| 66,371,405
|
how do i Determine a Cut-Off or Threshold When Working With Fuzzymatcher in python
|
<p>Please help: the image is a screenshot of my output and code. How do I use <strong>best_match_score</strong>? I need to filter by the returned precision score; the column only appears after the merge (i.e. just return every row with <strong>best_match_score</strong> below -1.06).</p>
<pre><code>import fuzzymatcher
import pandas as pd
import os
# pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
REDCAP = pd.read_csv(r"C:\Users\Selamola\Desktop\PythonThings\FuzzyMatching\REDCAP Form A v1 and v2 23 Feb 211.csv")
covidSheet = pd.read_csv(r"C:\Users\Selamola\Desktop\PythonThings\FuzzyMatching\Cases missing REC ID 23 Feb 211.csv")
Data_merge = fuzzymatcher.fuzzy_left_join(covidSheet, REDCAP,
left_on=['Participant Name', 'Particfipant Surname', 'Screening Date',
'Screening Date', 'Hospital Number', 'Alternative Hospital Number'],
right_on=['Patient Name', 'Patient Surname', 'Date Of Admission',
'Date Of Sample Collection', 'Hospital Number', 'Hospital Number'])
# Merged_data = pd.merge(REDCAP, covidSheet, how='left',
# left_on=['Patient Name', 'Patient Surname'],
# right_on=['Participant Name', 'Particfipant Surname'])
# Data_merge.to_csv(r'C:\Users\Selamola\Desktop\PythonThings\FuzzyMatching\DataMacth.csv')
print(Data_merge)
</code></pre>
<p><a href="https://i.stack.imgur.com/X58tb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X58tb.jpg" alt="Image of WorkSpace" /></a></p>
|
<p>This seems very straightforward unless I'm missing something. Be sure to read the documentation about <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer">slicing data in pandas</a>.</p>
<pre><code>mask = Data_merge['best_match_score'] < -1.06
filtered_data = Data_merge[mask]
</code></pre>
|
python|pandas|dataframe|python-dataclasses|python-datamodel
| 1
|
5,950
| 66,375,749
|
Check comma from pandas columns and if it exist remove it and divided by 100 Python Pandas
|
<p>Let us say I have the following simple data frame:</p>
<p>df</p>
<pre><code>Test
3454,23
65
98,50
</code></pre>
<p>I want to check whether a comma or dot (.) exists and, if it does, remove it and divide the number by 100.</p>
<p>The result seems like below.</p>
<pre><code>Test
3454.23
65
98.50
</code></pre>
<p>I have tried this.</p>
<pre><code>df['Test'] = (df['Test'].str.replace(',', '')/100)
</code></pre>
<p>But it doesn't work.</p>
<p>Can anyone help with this in Pandas? If there is an existing solution for something similar, a link would be appreciated.</p>
|
<p>Use <code>Series.str.contains</code> to define the condition and then</p>
<pre><code>np.where(condition, outcome if condition true, outcome if contion false)
</code></pre>
<p>code below:</p>
<pre><code>df['Test']=np.where(df['Test'].str.contains('\,'),df['Test'].str.replace(',','').astype(int)/100,df['Test'])
Test
0 3454.23
1 65
2 98.5
</code></pre>
<p>Or use pandas <code>mask</code> to isolate rows where the condition holds and apply the division:</p>
<pre><code>df['Test']=df.Test.mask(df['Test'].str.contains('\,'),df['Test'].str.replace(',','').astype(int)/100)
</code></pre>
<p>Or use pandas <code>where</code>:</p>
<pre><code>df['Test']=df.Test.where(~df['Test'].str.contains('\,'),df['Test'].str.replace(',','').astype(int)/100)
</code></pre>
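<p>If a uniform float column is acceptable (note this also turns <code>65</code> into <code>65.0</code>), a simpler sketch treats the comma as a decimal separator:</p>
<pre><code>df['Test'] = df['Test'].str.replace(',', '.').astype(float)
</code></pre>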
|
python|pandas
| 2
|
5,951
| 57,512,400
|
Update a string in a column based on conditions from a function in a Pandas Dataframe
|
<p>I'm trying to clean up a column that contains strings with more information than necessary. I tried searching for substrings or keywords and, if found, replacing them with a new string or keyword.</p>
<p>This is my df.</p>
<pre><code>var1 = [('Car 1',1),
('Book',2),
('Apple cake',3),
('Tree',4),
('Horse',5),
('Car',1),
('Apple Tree',3),
('Book shelf',2),
('Books',2),
('Trees',4)]
df = pd.DataFrame(var1, columns = ['Item' , 'Code'])
</code></pre>
<p>What I'm trying to do is loop through each row in a column, check if a substring exists, and if so, replace it with a new string. I cannot specify the exact content of the string because it varies, and I cannot use the Code value because in many cases the code is absent.</p>
<p>This is the code I'm using</p>
<pre><code>def item_check(string):
if 'Car' in string:
return 'Car'
elif 'Book' in string:
return 'Book'
elif 'Apple' in string:
return 'Apple'
elif 'Tree' in string:
return 'Tree'
elif 'Horse' in string:
return 'Horse'
else:
return ''
df['Item'] = df.apply(lambda x: item_check(df['Item']))
</code></pre>
<p>I expect the Item column to contain updated values:</p>
<pre><code>Car
Book
Apple
Tree
Horse
Car
Apple
Book
Book
Tree
</code></pre>
<p>However, I get NaN.</p>
|
<p>You need to <code>apply</code> the method to the <code>Item</code> column. Hence do:</p>
<pre><code>df['Item'] = df['Item'].apply(item_check)
</code></pre>
<p>Output:</p>
<pre><code> Item Code
0 Car 1
1 Book 2
2 Apple 3
3 Tree 4
4 Horse 5
5 Car 1
6 Apple 3
7 Book 2
8 Book 2
9 Tree 4
</code></pre>
|
pandas|dataframe
| 0
|
5,952
| 57,318,578
|
How to iterate over the third array dimension, returning the 2d array
|
<p>I have a 3D numpy array. What is the best way to iterate over the third dimension in a for loop, returning the 2D array of the current iteration?</p>
|
<p>Just loop over its third dimension:</p>
<pre><code>import numpy as np
a = np.arange(24).reshape((2,3,4))
for i in range(a.shape[2]): # index 2 is for 3rd dimension
print(a[:, :, i])
# or
print(a[..., i])
</code></pre>
<p>and you've got it.</p>
<p>But looping over a numpy array is costly; you should <a href="https://realpython.com/numpy-array-programming/" rel="nofollow noreferrer">get used</a> to broadcasting, vectorization, indexing...</p>
|
python|arrays|numpy|matrix
| 0
|
5,953
| 24,394,507
|
empty strings after genfromtxt numpy
|
<p>Thanks for your patience, as I'm pretty new to python. The input file is a tab-delimited table.</p>
<pre><code>import numpy as np
#from StringIO import StringIO
inputfile=raw_input('Filepath please: ')
fieldnames='Reference Position, Type, Length, Reference, Allele, Linkage, Zygosity, \
Count, Coverage, Frequency, Hyper-allelic, Forward/reverse balance, Average quality, \
Overlapping annotations, Coding region change, Amino acid change'
fieldtypes='int,str,int,str,str,str,str,int,int,float,str,float,float,str,str,str'
with open(inputfile) as f:
storage=np.genfromtxt(f, skip_header=1, delimiter='\t', names=fieldnames, dtype=fieldtypes)
print storage
</code></pre>
<p>I get a ValueError: size of tuple must match number of fields.</p>
<p>Help?</p>
<hr>
<p>EDIT:</p>
<p>Well, after implementing @Wooble's suggestions, no more error…</p>
<p>EDIT2:</p>
<p>But the problem now is that after I print storage, all the cells that are of dtype str are empty strings (''). Why is this?</p>
<p>EDIT3:
I solved the empty string problem by changing the str types above to |S#, where # is an integer.</p>
|
<p>I solved the empty string problem by changing the <code>str</code> types above to <code>|S#</code>, where # is an integer. With a plain <code>str</code> dtype, genfromtxt allocates zero-width strings, so every string field comes back empty; a fixed-width <code>|S#</code> dtype tells it how many characters to keep.</p>
<p>EDIT4: See comment from Jinan Dangor below.</p>
|
python|string|numpy
| -1
|
5,954
| 43,873,620
|
pivot_table on multi-indexed dataframe
|
<p>How can I apply pandas.pivot_table to the dataframe:</p>
<pre><code>df = pd.DataFrame(
[
{'o1_pkid': 645, 'o2_pkid': 897, 'colname': 'col1', 'colvalue': 'sfjdka'},
{'o1_pkid': 645, 'o2_pkid': 897, 'colname': 'col2', 'colvalue': 25},
{'o1_pkid': 645, 'o2_pkid': 159, 'colname': 'col1', 'colvalue': 'laksjd'},
{'o1_pkid': 645, 'o2_pkid': 159, 'colname': 'col2', 'colvalue': 26}
]
)
</code></pre>
<p>to get a multi-indexed result (indexed by o1_pkid and o2_pkid), where the columns come from <strong>colname</strong> and the values come from <strong>colvalue</strong>? I am looking to get a result something like:</p>
<pre><code>colname col1 col2
o1_pkid o2_pkid
645 897 'sfjdka' 25
159 'laksjd' 26
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>df = df.set_index(['o1_pkid', 'o2_pkid', 'colname'])['colvalue'].unstack()
print (df)
colname col1 col2
o1_pkid o2_pkid
645 159 laksjd 26
897 sfjdka 25
</code></pre>
<p>But if get error:</p>
<blockquote>
<p>ValueError: Index contains duplicate entries, cannot reshape</p>
</blockquote>
<p>need:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a> with some aggregate function like <code>sum</code>:</p>
<pre><code>df = pd.DataFrame(
[
{'o1_pkid': 645, 'o2_pkid': 897, 'colname': 'col1', 'colvalue': 'sfjdka'},
{'o1_pkid': 645, 'o2_pkid': 897, 'colname': 'col2', 'colvalue': 25},
{'o1_pkid': 645, 'o2_pkid': 159, 'colname': 'col1', 'colvalue': 'laksjd'},
{'o1_pkid': 645, 'o2_pkid': 159, 'colname': 'col2', 'colvalue': 10},
{'o1_pkid': 645, 'o2_pkid': 159, 'colname': 'col2', 'colvalue': 26}
])
df = df.pivot_table(index=['o1_pkid', 'o2_pkid'],
columns='colname',
values='colvalue',
aggfunc='sum')
print (df)
colname col1 col2
o1_pkid o2_pkid
645 159 laksjd 36
897 sfjdka 25
</code></pre>
<p>or <code>groupby</code> + <code>aggregate function</code> + <code>unstack</code>:</p>
<pre><code>df = df.groupby(['o1_pkid', 'o2_pkid', 'colname'])['colvalue'].sum().unstack()
print (df)
colname col1 col2
o1_pkid o2_pkid
645 159 laksjd 36
897 sfjdka 25
</code></pre>
|
pandas|pivot-table
| 1
|
5,955
| 43,877,196
|
How can I add together the data of two dataframes?
|
<p>I want to add together data from two dataframes in this way:</p>
<pre><code> >>> df1 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 2],
'col3': ['aaa', 'bbb', 'ccc']})
>>> df1
col1 col2 col3
0 1 2 aaa
1 2 3 bbb
2 3 2 ccc
>>> df2 = pd.DataFrame({'col1': [4, 4, 5], 'col2': [4, 4, 5],
'col3': ['some', 'more', 'third']})
>>> df2
col1 col2 col3
0 4 4 some
1 4 4 more
2 5 5 third
</code></pre>
<p>I would like the result to be:</p>
<pre><code>>>> result
col1 col2 col3
0 4 4 some
1 4 4 more
2 9 7 third
3 1 2 aaa
4 2 3 bbb
</code></pre>
<p>That is: if a <code>col3</code> value exists in both frames, then col1 and col2 for that entry shall be added together.
If it doesn't exist in both, the rows should just be concatenated.
The order of the rows doesn't matter, and I don't need to keep df1 and df2; I just care about the result afterwards.</p>
<p>What is the best way to achieve this?</p>
<p>The data was just loaded from different CSV files that look exactly like this, so maybe there is an alternative way to do it as well?
I just want to save the result again as a CSV file that looks like the above.</p>
|
<p>Let's use <code>pd.concat</code> and <code>groupby</code> to sum values.</p>
<pre><code>pd.concat([df1,df2]).groupby('col3').sum().reset_index().reindex_axis(['col1','col2','col3'],axis=1)
</code></pre>
<p>Output:</p>
<pre><code> col1 col2 col3
0 1 2 aaa
1 2 3 bbb
2 4 4 more
3 4 4 some
4 9 7 third
</code></pre>
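<p>Note that <code>reindex_axis</code> has since been deprecated and removed in modern pandas; a sketch of the same result on newer versions, reordering with plain column selection:</p>
<pre><code>result = (pd.concat([df1, df2])
            .groupby('col3', as_index=False)[['col1', 'col2']]
            .sum()[['col1', 'col2', 'col3']])
</code></pre>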
|
python|pandas
| 2
|
5,956
| 43,856,701
|
Pandas - inplace, view, copy confusion
|
<p>I'm having an issue with Pandas dataframes.
It seems that Pandas/Python generates a copy of the DF somewhere in my code instead of performing the modifications on the original DF.</p>
<p>In the code below, "update_df" still sees the DF with a "file_exists" column, which should have been removed by the previous function.</p>
<p>MAIN:</p>
<pre><code>if __name__ == '__main__':
df_main = load_df()
clean_df2(df_main)
update_df(df_main, image_path_main)
.....
</code></pre>
<p>clean_df2</p>
<pre><code>def clean_df2(df): #remove non-existing files from DF
df['file_exists'] = True # add column, set all to True?
.....
df = df[df['file_exists'] != False] #Keep only records that exist
df.drop('file_exists', 1, inplace=True) # delete the temporary column
df.reset_index(drop=True, inplace = True) # reindex if source has gaps
</code></pre>
<p>update_df:</p>
<pre><code>def update_df(df, image_path): #add DF rows for files not yet in DF
print(df)
....
</code></pre>
|
<p>I think when you do:</p>
<pre><code>df = df[df['file_exists'] != False]
</code></pre>
<p>You've created a copy of the original df.</p>
<p>To make it work, you can change your function to:</p>
<pre><code>def clean_df2(df): #remove non-existing files from DF
df['file_exists'] = True # add column, set all to True?
.....
return df
</code></pre>
<p>And when you call clean_df2(df), do the following:</p>
<pre><code>df = clean_df2(df)
</code></pre>
|
python|pandas|dataframe
| 1
|
5,957
| 43,703,566
|
Pandas - Group By ID, Get Percentage
|
<p>Say I have a dataframe like so:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': ['3a2b', '2a2b', '1a2b', '1a2b'],
'label': [2, 2, 1, 0]})
</code></pre>
<p>df visualized:</p>
<pre><code> ID label
3a2b 2
2a2b 2
1a2b 1
1a2b 0
</code></pre>
<p>Now I'd like to groupby ID and display, for each ID, what percentage of its labels are 0, 1, and 2.</p>
<p>Desired output visualized:</p>
<pre><code>ID label 0 label 1 label 2
1a2b 50% 50% 0%
2a2b 0% 0% 100%
3a2b 0% 0% 100%
</code></pre>
<p>I've tried:</p>
<pre><code> df.groupby(['ID']).agg({'label': 'sum'})
</code></pre>
<p>but it doesn't quite work. </p>
<p>The denominator for each column can be found using:</p>
<pre><code>df1 = df.groupby(['ID']).agg({'label': 'count'})
</code></pre>
<p>which outputs:</p>
<pre><code>ID . label
1a2b . 2
2a2b . 1
3a2b . 1
</code></pre>
<p>Any help is much appreciated!</p>
|
<p>Use <code>get_dummies</code> on <code>label</code>, and groupby on <code>ID</code>, then <code>sum</code>, and apply row-wise percentage calculation.</p>
<pre><code>In [48]: (pd.get_dummies(df['label'], prefix='label')
.groupby(df['ID'])
.sum()
.apply(lambda x: x / x.sum() * 100, axis=1)
)
Out[48]:
label_0 label_1 label_2
ID
1a2b 50.0 50.0 0.0
2a2b 0.0 0.0 100.0
3a2b 0.0 0.0 100.0
</code></pre>
<p>Details</p>
<pre><code>In [49]: pd.get_dummies(df['label'], prefix='label')
Out[49]:
label_0 label_1 label_2
0 0.0 0.0 1.0
1 0.0 0.0 1.0
2 0.0 1.0 0.0
3 1.0 0.0 0.0
In [50]: pd.get_dummies(df['label'], prefix='label').groupby(df['ID']).sum()
Out[50]:
label_0 label_1 label_2
ID
1a2b 1.0 1.0 0.0
2a2b 0.0 0.0 1.0
3a2b 0.0 0.0 1.0
</code></pre>
|
python|python-2.7|pandas
| 0
|
5,958
| 43,534,719
|
how to convert pd.to_timedelta() to time() object?
|
<p>I need to get from <code>0 days 08:00:00</code> to <code>08:00:00</code>.</p>
<p>code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'Slot_no':[1,2,3,4,5,6,7],
'start_time':['0:01:00','8:01:00','10:01:00','12:01:00','14:01:00','18:01:00','20:01:00'],
'end_time':['8:00:00','10:00:00','12:00:00','14:00:00','18:00:00','20:00:00','0:00:00'],
'location_type':['not considered','Food','Parks & Outdoors','Food',
'Arts & Entertainment','Parks & Outdoors','Food']})
df = df.reindex_axis(['Slot_no','start_time','end_time','location_type','loc_set'], axis=1)
df['start_time'] = pd.to_timedelta(df['start_time'])
df['end_time'] = pd.to_timedelta(df['end_time'].replace('0:00:00', '24:00:00'))
</code></pre>
<p>output:</p>
<pre><code>print (df)
Slot_no start_time end_time location_type loc_set
0 1 00:01:00 0 days 08:00:00 not considered NaN
1 2 08:01:00 0 days 10:00:00 Food NaN
2 3 10:01:00 0 days 12:00:00 Parks & Outdoors NaN
3 4 12:01:00 0 days 14:00:00 Food NaN
4 5 14:01:00 0 days 18:00:00 Arts & Entertainment NaN
5 6 18:01:00 0 days 20:00:00 Parks & Outdoors NaN
6 7 20:01:00 1 days 00:00:00 Food NaN
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.time.html" rel="nofollow noreferrer"><code>dt.time</code></a>:</p>
<pre><code>df['end_time_times'] = pd.to_datetime(df['end_time']).dt.time
print (df)
Slot_no start_time end_time location_type loc_set \
0 1 00:01:00 0 days 08:00:00 not considered NaN
1 2 08:01:00 0 days 10:00:00 Food NaN
2 3 10:01:00 0 days 12:00:00 Parks & Outdoors NaN
3 4 12:01:00 0 days 14:00:00 Food NaN
4 5 14:01:00 0 days 18:00:00 Arts & Entertainment NaN
5 6 18:01:00 0 days 20:00:00 Parks & Outdoors NaN
6 7 20:01:00 1 days 00:00:00 Food NaN
end_time_times
0 08:00:00
1 10:00:00
2 12:00:00
3 14:00:00
4 18:00:00
5 20:00:00
6 00:00:00
</code></pre>
|
python|pandas|time|timedelta
| 1
|
5,959
| 1,420,235
|
How can I generate a complete histogram with numpy?
|
<p>I have a very long list in a <code>numpy.array</code>. I want to generate a histogram for it. However, Numpy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html" rel="nofollow noreferrer" title="numpy.histogram reference">built in histogram</a> requires a pre-defined number of bins. What's the best way to generate a full histogram with one bin for each value?</p>
|
<p>If you have an array of integers and the max value isn't too large you can use numpy.bincount:</p>
<pre><code>hist = dict((key,val) for key, val in enumerate(numpy.bincount(data)) if val)
</code></pre>
<p>Edit:
If you have float data, or data spread over a huge range you can convert it to integers by doing:</p>
<pre><code>bins = numpy.unique(data)
bincounts = numpy.bincount(numpy.digitize(data, bins) - 1)
hist = dict(zip(bins, bincounts))
</code></pre>
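<p>On NumPy 1.9 and later, a compact sketch that handles floats and widely spread values in one step:</p>
<pre><code>values, counts = np.unique(data, return_counts=True)
hist = dict(zip(values, counts))
</code></pre>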
|
python|numpy|histogram
| 8
|
5,960
| 73,168,373
|
Ungroup/Unpivot 1 column in pandas
|
<p>My data is like this</p>
<pre><code>g1 g2 g3 value1 value2
A X True 1 2
A X False 3 4
B Y True 5 6
</code></pre>
<p>It was grouped by (g1, g2, g3) and then <code>reset_index</code>. What I am trying to do is to ungroup/unpivot the column <code>g3</code> so that the output looks something like this?</p>
<pre><code>g1 g2 value1_True value2_True value1_False value2_False
A X 1 2 3 4
B Y 5 6 0 0
</code></pre>
<p>I have tried to search online but could not find any answer for my particular case. Appreciate any help.</p>
|
<pre><code># Make the True/False into strings, this helps later.
df.g3 = df.g3.astype(str)
# Pivot your dataframe.
df = df.pivot(index=['g1', 'g2'], columns='g3')
# flatten the multiindex columns and join like you wanted.
df.columns = df.columns.to_flat_index().str.join('_')
print(df.reset_index().fillna(0))
</code></pre>
<p>Output:</p>
<pre><code> g1 g2 value1_False value1_True value2_False value2_True
0 A X 3.0 1.0 4.0 2.0
1 B Y 0.0 5.0 0.0 6.0
</code></pre>
|
pandas
| 2
|
5,961
| 72,999,041
|
How to rename columns in Pandas automatically?
|
<p>I have a Dataframe with 240 columns. But they are named by number from 0 to 239.</p>
<p>How can I rename it to respectively 'column_1', 'column_2', ........, 'column_239', 'column_240' automatically?
<a href="https://i.stack.imgur.com/9pBVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9pBVT.png" alt="enter image description here" /></a></p>
|
<p>You can use:</p>
<pre><code>df.columns = df.columns.map(lambda x: f'column_{x+1}')
</code></pre>
<p>Example output:</p>
<pre><code> column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8 column_9 column_10
0 0 1 2 3 4 5 6 7 8 9
</code></pre>
<p>Used input:</p>
<pre><code>df = pd.DataFrame([range(10)])
</code></pre>
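<p>An equivalent sketch with <code>rename</code>, which also accepts a function for the column labels:</p>
<pre><code>df = df.rename(columns=lambda x: f'column_{x + 1}')
</code></pre>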
|
python|pandas
| 4
|
5,962
| 73,046,416
|
torch geometric error: FileNotFound: Could not find module '...\.conda\envs\...\Lib\site-packages\torch_sparse\_convert_cuda.pyd'
|
<p>torch geometric error</p>
<pre><code>FileNotFoundError: Could not find module '...\.conda\envs\urop\Lib\site-packages\torch_sparse\_convert_cuda.pyd' Try using the full path with constructor syntax.
</code></pre>
<p>Versions:</p>
<p>torch_geometric==2.0.4</p>
<pre><code>pytorch 1.11.0 py3.8_cpu_0 pytorch
pytorch-cluster 1.6.0 py38_torch_1.11.0_cu113 pyg
pytorch-mutex 1.0 cpu pytorch
pytorch-scatter 2.0.9 py38_torch_1.11.0_cu113 pyg
pytorch-sparse 0.6.14 py38_torch_1.11.0_cu113 pyg
pytorch-spline-conv 1.2.1 py38_torch_1.11.0_cu113 pyg
torchvision 0.12.0 py38_cpu pytorch
</code></pre>
|
<p><strong>I solved</strong> my problem with this error. I simply had an <em>old version of Torch</em> and installed torch-scatter and torch-sparse pointing to a wheel with a newer PyTorch version with the -f pip flag (pip install -v torch-scatter -f <a href="https://pytorch-geometric.com/whl/torch-1.12.1+cu116.html" rel="nofollow noreferrer">https://pytorch-geometric.com/whl/torch-1.12.1+cu116.html</a>).</p>
<p>Creating a new environment, installing the newest PyTorch version, and pointing to the correct wheel worked for me.</p>
|
python|pytorch|anaconda|pytorch-geometric
| 0
|
5,963
| 70,567,886
|
How to make a slice in a pandas dataframe?
|
<p>I want to slice a dataframe column on the string "PP" and get just the numbers that come after it:</p>
<p>Dataframe:</p>
<pre><code>data = {'Serie':['28PP3097', '23228PP3097', '1822343218PP3097', '43642183097'],
'FooBar':["foo", "bar", "foo", "bar"]}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.stack.imgur.com/yqsk6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yqsk6.png" alt="enter image description here" /></a></p>
<p>Expected Result:</p>
<p><a href="https://i.stack.imgur.com/9bel0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9bel0.png" alt="enter image description here" /></a></p>
<p>I try:</p>
<pre><code>df["Serie"] = np.where(df["Serie"].str.contains("PP"), df["Serie"][df["Serie"].str.find('PP')+1:],df["Serie"])
</code></pre>
<p>on this df, but it gives me the error <code>cannot do slice indexing on RangeIndex with these indexers</code>.</p>
|
<p>You can do that by splitting and getting the last item after "PP":</p>
<pre><code>data = {'Serie':['28PP3097', '23228PP3097', '1822343218PP3097', '43642183097'],
'FooBar':["foo", "bar", "foo", "bar"]}
df = pd.DataFrame(data)
df['Serie']=[i.split('PP')[-1] for i in df['Serie']]
</code></pre>
<p>Result</p>
<pre><code> Serie FooBar
0 3097 foo
1 3097 bar
2 3097 foo
3 43642183097 bar
</code></pre>
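<p>A vectorized sketch of the same idea using pandas string methods; rows without <code>'PP'</code> pass through unchanged, since splitting then yields a single-element list:</p>
<pre><code>df['Serie'] = df['Serie'].str.split('PP').str[-1]
</code></pre>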
|
python|pandas
| 1
|
5,964
| 70,448,690
|
Logits and labels must be broadcastable: logits_size=[400,3] labels_size=[16,3]
|
<p>I am trying to build a model that predicts the facial expression. The model I used: <a href="https://www.kaggle.com/nightfury007/fercustomdataset-3classes" rel="nofollow noreferrer">link</a>.</p>
<p>I adjusted the data so that it has three folders: train, test, validation. Each folder contains three subfolders named: disappointed, interested, neutral.</p>
<p>This is how I ran the code.</p>
<pre><code>image_gen=ImageDataGenerator(rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1/255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_image_gen=image_gen.flow_from_directory(train_dir,
batch_size=batch_size,
class_mode='categorical') #implemented the same code for test and validation dirs.
</code></pre>
<p>This is the model itself:</p>
<pre><code>model.add(Conv2D(32, kernel_size=(3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size = (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size = (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation="softmax"))
model.compile(loss='categorical_crossentropy',
optimizer=keras.optimizers.Adam(lr=0.001),
metrics=['accuracy'])
model.fit(train_image_gen,epochs=1,steps_per_epoch= nb_train_samples/16,
validation_data=valid_image_gen,validation_steps=nb_valid_samples//16)
</code></pre>
<p>When I run <code>model.fit</code>, it gives me the following error</p>
<pre><code>InvalidArgumentError: logits and labels must be broadcastable: logits_size=[400,3] labels_size=[16,3]
[[node categorical_crossentropy/softmax_cross_entropy_with_logits
(defined at /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/keras/backend.py:5009)
]] [Op:__inference_train_function_7946]
Errors may have originated from an input operation.
Input Source operations connected to node categorical_crossentropy/softmax_cross_entropy_with_logits:
In[0] categorical_crossentropy/softmax_cross_entropy_with_logits/Reshape:
In[1] categorical_crossentropy/softmax_cross_entropy_with_logits/Reshape_1:
Operation defined at: (most recent call last)
>>> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 193, in _run_module_as_main
>>> return _run_code(code, main_globals, None,
>>>
>>> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 86, in _run_code
>>> exec(code, run_globals)
>>>
>>> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ipykernel_launcher.py", line 16, in <module>
>>> app.launch_new_instance()
>>>
>>> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/traitlets/config/application.py", line 846, in launch_instance
>>> app.start()
>>>
>>> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/ipykernel/kernelapp.py", line 677, in start
>>> self.io_loop.start()
</code></pre>
<p>I have tried a lot of different ways and codes but I keep having the same error.</p>
|
<p>To get more information about the error, you can run in <em>eager</em> mode:</p>
<pre><code>model.compile(loss='categorical_crossentropy',
optimizer=keras.optimizers.Adam(lr=0.001),
metrics=['accuracy'], run_eagerly=True)
</code></pre>
<p>In fact, there was an error with the input shape, which should be <code>(256, 256, 3)</code>.</p>
|
python|tensorflow|keras
| 0
|
5,965
| 70,653,215
|
Add new column to pandas data frame based on string + value from another column in the data frame
|
<p>I have created a data frame using the code below:</p>
<pre><code>bins = [['0', '50'], ['0', '100'], ['0', '150'], ['0', '200'], ['0', '250'], ['0', '300'], ['0', '350'], ['0', '400']]
bins = pd.DataFrame(bins, columns = ['start', 'end'])
bins['range'] = bins[['start', 'end']].agg('-'.join, axis=1)
bins.start = pd.to_numeric(bins.start)
bins.end = pd.to_numeric(bins.end)
</code></pre>
<p>data frame looks like this:</p>
<pre><code> start end range
0 0 50 0-50
1 50 100 50-100
2 100 150 100-150
3 150 200 150-200
4 200 250 200-250
5 250 300 250-300
6 300 350 300-350
7 350 400 350-400
</code></pre>
<p>I am trying to add a new column named 'axis' that will include the string 'up to' and then the value from 'end' column, e.g. row 1 'axis' column will show 'up to 50' and row 2 will show 'up to 100'</p>
<p>What is the best way of doing this?</p>
<p>Thanks in advance</p>
|
<p>Use:</p>
<pre><code>df['axis'] = 'up to ' + df['end'].astype(str)
</code></pre>
|
python|pandas
| 2
|
5,966
| 42,800,377
|
Multilabel Classification with Tensorflow
|
<p>I have the code below for a multilabel classification:</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
X, Y = make_multilabel_classification(n_samples=10000, n_features=200, n_classes=10, n_labels=2,
allow_unlabeled=False, random_state=1)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=2)
#.........................................................................
learning_rate = 0.001
training_epochs = 5000
display_step = 50
num_input = x_train.shape[1]
num_classes = y_train.shape[1]
def init_weights(shape):
return tf.Variable(tf.random_normal(shape, stddev=0.01))
def model(X, w_h, w_h2, w_h3, w_o, p_keep_input, p_keep_hidden):  # pass w_h3 explicitly instead of relying on the global
X = tf.nn.dropout(X, p_keep_input)
h = tf.nn.relu(tf.matmul(X, w_h))
h = tf.nn.dropout(h, p_keep_hidden)
h2 = tf.nn.relu(tf.matmul(h, w_h2))
h2 = tf.nn.dropout(h2, p_keep_hidden)
h3 = tf.nn.relu(tf.matmul(h2, w_h3))
h3 = tf.nn.dropout(h3, p_keep_hidden)
return tf.nn.sigmoid(tf.matmul(h3, w_o))
x = tf.placeholder("float", [None, num_input])
y = tf.placeholder("float", [None, num_classes])
w_h = init_weights([num_input, 500])
w_h2 = init_weights([500, 500])
w_h3 = init_weights([500, 500])
w_o = init_weights([500, num_classes])
p_keep_input = tf.placeholder("float")
p_keep_hidden = tf.placeholder("float")
pred = model(x, w_h, w_h2, w_h3, w_o, p_keep_input, p_keep_hidden)
#cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y))
cost = -tf.reduce_sum( ( (y*tf.log(pred + 1e-9)) + ((1-y) * tf.log(1 - pred + 1e-9)) ) , name='xentropy' )
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
#optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
#--------------------------------------------------------------------------------
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
sess.run(tf.local_variables_initializer())
for epoch in range(training_epochs):
sess.run(optimizer, feed_dict = {x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
avg_cost = sess.run(cost, feed_dict = {x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
if epoch % display_step == 0:
training_acc = accuracy.eval({x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
print("Epoch:", '%03d' % (epoch), "Training Accuracy:", '%.5f' % (training_acc), "cost=", "{:.10f}".format(avg_cost))
print("Optimization Complete!")
a = tf.cast(tf.argmax(pred, 1),tf.float32)
b = tf.cast(tf.argmax(y,1),tf.float32)
roc_score = tf.metrics.auc(b, a)
cm = tf.confusion_matrix(b, a)
sess.run(tf.local_variables_initializer())
print(sess.run(cm, feed_dict={x : x_test, y : y_test, p_keep_input: 1.0, p_keep_hidden: 1.0}))
print(sess.run(roc_score, feed_dict={x : x_test, y : y_test, p_keep_input: 1.0, p_keep_hidden: 1.0}))
</code></pre>
<p>And the output is below:</p>
<pre><code>Epoch: 000 Training Accuracy: 0.31500 cost= 62297.6406250000
Epoch: 050 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 100 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 150 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 200 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 250 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 300 Training Accuracy: 0.30722 cost= 433502.8125000000
Epoch: 350 Training Accuracy: 0.30722 cost= 433502.8125000000
...
Epoch: 5000 Training Accuracy: 0.30722 cost= 433502.8125000000
</code></pre>
<p>As above, the training accuracy remains almost the same throughout the training process. I varied the number of hidden layers and tried learning rates of 0.001, 0.01 and 0.1, and the trend was still the same.</p>
<p>I'd appreciate some help on what I may be doing wrong.</p>
|
<p>The main problem with your code is that you are not using mini-batch gradient descent; instead you are using the whole training set for each gradient-descent update. Additionally, I think 5000 epochs is too many, and I guess 50-100 will be enough (you can verify by experiment). Also, of the following two lines, the second is redundant: you are running the graph twice in each iteration when you only need to run it once:</p>
<pre><code>sess.run(optimizer, feed_dict = {x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
avg_cost = sess.run(cost, feed_dict = {x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
</code></pre>
<p>Correct form:</p>
<pre><code>_, avg_cost= sess.run([optimizer,cost], feed_dict = {x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
</code></pre>
<p>The following is the modified code (the lines I have added are marked with the comment <code># ADDED #</code>):</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split

X, Y = make_multilabel_classification(n_samples=10000, n_features=200, n_classes=10, n_labels=2,
                                      allow_unlabeled=False, random_state=1)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=2)

batch_size = 100 # ADDED #
num_batches = x_train.shape[0]/batch_size # ADDED #

learning_rate = 0.001
training_epochs = 5000
display_step = 1
num_input = x_train.shape[1]
num_classes = y_train.shape[1]

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def model(X, w_h, w_h2, w_o, p_keep_input, p_keep_hidden):
    X = tf.nn.dropout(X, p_keep_input)
    h = tf.nn.relu(tf.matmul(X, w_h))
    h = tf.nn.dropout(h, p_keep_hidden)
    h2 = tf.nn.relu(tf.matmul(h, w_h2))
    h2 = tf.nn.dropout(h2, p_keep_hidden)
    h3 = tf.nn.relu(tf.matmul(h2, w_h3))
    h3 = tf.nn.dropout(h3, p_keep_hidden)
    return tf.nn.sigmoid(tf.matmul(h3, w_o))

x = tf.placeholder("float", [None, num_input])
y = tf.placeholder("float", [None, num_classes])
w_h = init_weights([num_input, 500])
w_h2 = init_weights([500, 500])
w_h3 = init_weights([500, 500])
w_o = init_weights([500, num_classes])
p_keep_input = tf.placeholder("float")
p_keep_hidden = tf.placeholder("float")
pred = model(x, w_h, w_h2, w_o, p_keep_input, p_keep_hidden)
cost = -tf.reduce_sum( ( (y*tf.log(pred + 1e-9)) + ((1-y) * tf.log(1 - pred + 1e-9)) ) , name='xentropy' )
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(tf.local_variables_initializer())
    for epoch in range(training_epochs):
        for i in xrange(num_batches): # ADDED #
            indices = xrange(i*batch_size, (i+1)*batch_size) # ADDED #
            _, avg_cost = sess.run([optimizer,cost], feed_dict = {x : x_train[indices], y : y_train[indices], p_keep_input: 1.0, p_keep_hidden: 1.0}) # ADDED #
        if epoch % display_step == 0:
            training_acc = accuracy.eval({x : x_train, y : y_train, p_keep_input: 1.0, p_keep_hidden: 1.0})
            print("Epoch:", '%03d' % (epoch), "Training Accuracy:", '%.5f' % (training_acc), "cost=", "{:.10f}".format(avg_cost))
    print("Optimization Complete!")

    a = tf.cast(tf.argmax(pred, 1),tf.float32)
    b = tf.cast(tf.argmax(y,1),tf.float32)
    roc_score = tf.metrics.auc(b, a)
    cm = tf.confusion_matrix(b, a)
    sess.run(tf.local_variables_initializer())
    print(sess.run(cm, feed_dict={x : x_test, y : y_test, p_keep_input: 1.0, p_keep_hidden: 1.0}))
    print(sess.run(roc_score, feed_dict={x : x_test, y : y_test, p_keep_input: 1.0, p_keep_hidden: 1.0}))
</code></pre>
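<p>One refinement worth considering (my own addition, not part of the modified code above): shuffling the training set at the start of each epoch, so the mini-batches differ between epochs. A minimal sketch, placed just inside the epoch loop:</p>
<pre><code>perm = np.random.permutation(x_train.shape[0])   # random order of the sample indices
x_train, y_train = x_train[perm], y_train[perm]  # reorder features and labels together
</code></pre>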
|
tensorflow|gradient-descent|multilabel-classification
| 1
|
5,967
| 42,668,549
|
How can I plot the duration of a program in python
|
<p>I'm trying to plot the duration of some programs that run during the night. I export the program duration data to a CSV file so that it can be analyzed later on (something like this). </p>
<p><a href="https://i.stack.imgur.com/wZGiK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wZGiK.png" alt="example"></a></p>
<p>Here are my code and CSV examples: </p>
<p>CSV:</p>
<pre><code>na,programName,totaal,na,startDate,endDate,Date
?,"to/check.apl",54006,?,2017-02-27T20:04:07.233,2017-02-27T20:05:01.239,2017-02-27T00:00:00.000
?,"to/ibx.apl",143887,?,2017-02-27T20:07:55.627,2017-02-27T20:10:19.514,2017-02-27T00:00:00.000
?,"to/checker.apl",2039600,?,2017-02-27T20:14:37.662,2017-02-27T20:48:37.262,2017-02-27T00:00:00.000
</code></pre>
<p>python code:</p>
<pre><code>import matplotlib
from pandas import *
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

matplotlib.style.use('ggplot')

data = "miFile.csv"
df = pd.DataFrame.from_csv(data)
df = df.set_index('totaal')
newDf = df[['programName','startDate','endDate']]
</code></pre>
<p>So far I get a datetime error, so I tried to fix it like this (also no luck plotting):</p>
<pre><code>newDf['startDate'] = pd.to_datetime(newDf['startDate'])
newDf['endDate'] = pd.to_datetime(newDf['endDate'])
#pd.to_datetime(pd.Series(["2017-02-27T20:04:07.233"]) format= "%d, %m, %y, %H: %M: %S")
newDf.plot('programName','startDate','endDate')
plt.show()
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> to create the <code>df</code>, then take the difference of the two datetime columns and <a href="http://pandas.pydata.org/pandas-docs/stable/timedeltas.html#frequency-conversion" rel="nofollow noreferrer">convert the timedelta</a> to <code>minutes</code> for the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>plot</code></a>:</p>
<pre><code>temp=u"""na,programName,totaal,na,startDate,endDate,Date
?,"to/check.apl",54006,?,2017-02-27T20:04:07.233,2017-02-27T20:05:01.239,2017-02-27T00:00:00.000
?,"to/ibx.apl",143887,?,2017-02-27T20:07:55.627,2017-02-27T20:10:19.514,2017-02-27T00:00:00.000
?,"to/checker.apl",2039600,?,2017-02-27T20:14:37.662,2017-02-27T20:48:37.262,2017-02-27T00:00:00.000"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp), index_col=[2], parse_dates=[4,5,6])
print (df.dtypes)
na object
programName object
na.1 object
startDate datetime64[ns]
endDate datetime64[ns]
Date datetime64[ns]
dtype: object
</code></pre>
<pre><code>df['duration'] = (df['endDate'] - df['startDate']).astype('timedelta64[m]')
newDf = df[['programName','duration']]
print (newDf)
programName duration
totaal
54006 to/check.apl 0.0
143887 to/ibx.apl 2.0
2039600 to/checker.apl 33.0
newDf.plot()
plt.show()
</code></pre>
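<p>If the goal is the per-program chart shown in the question, a horizontal bar plot may be closer to it. A minimal sketch, assuming the <code>newDf</code> from above:</p>
<pre><code>newDf.plot.barh(x='programName', y='duration', legend=False)
plt.xlabel('duration [min]')
plt.show()
</code></pre>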
|
python|csv|pandas|matplotlib|dataframe
| 2
|
5,968
| 42,882,721
|
Preserving a Month and Day as Date Format in Python Pandas
|
<p>I'm trying to take a column in yyyy-mm-dd format and convert it to mm-dd format (or MON DD, that works too), while preserving a date or numeric format. I've tried to use pd.to_datetime, but that doesn't seem to work because it requires the year, so it ends up padding the new columns with the year 1900. I'm not looking for a conversion in which the new column is an object, because I need to use the column to plot later on. What's the best approach? The data frame is pretty small.</p>
<pre><code>OldDate NewDate1 NewDate2 NewDate3
2017-01-02 01-02 01/02 Jan 2
2015-05-14 05-14 05/14 May 14
</code></pre>
|
<p>Let's say you have:</p>
<pre><code>df = pd.DataFrame({"OldDate":["2017-01-02","2015-05-14"]})
df
OldDate
0 2017-01-02
1 2015-05-14
</code></pre>
<p>Then you can do:</p>
<pre><code>from datetime import datetime as dt
df['OldDate'] = df.OldDate.apply(lambda s: dt.strptime(s, "%Y-%m-%d"))
df['NewDate1'] = df.OldDate.dt.strftime("%m-%d")
df['NewDate2'] = df.OldDate.dt.strftime("%m/%d")
df['NewDate3'] = df.OldDate.dt.strftime("%b %d")
df
OldDate NewDate1 NewDate2 NewDate3
0 2017-01-02 01-02 01/02 Jan 02
1 2015-05-14 05-14 05/14 May 14
</code></pre>
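<p>As a side note, the <code>apply</code>/<code>strptime</code> step can be replaced by <code>pd.to_datetime</code>, which does the same parsing in one vectorized call:</p>
<pre><code>df['OldDate'] = pd.to_datetime(df['OldDate'], format="%Y-%m-%d")
</code></pre>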
|
python|date|pandas
| 1
|
5,969
| 42,927,841
|
Trouble reading mnist with tensorflow
|
<p>So apparently Yann LeCun's website is down, so the following lines for reading MNIST with TensorFlow don't seem to be working:</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
</code></pre>
<p>Any ideas how I can read MNIST without using the lines above?</p>
|
<p>You can access the website here: <a href="https://web.archive.org/web/20160117040036/http://yann.lecun.com/exdb/mnist/" rel="nofollow noreferrer">https://web.archive.org/web/20160117040036/http://yann.lecun.com/exdb/mnist/</a> - download the data, and read it in from a local copy...</p>
<p><strong>Edit</strong>
<a href="https://github.com/ischlag/tensorflow-input-pipelines/blob/master/datasets/mnist.py" rel="nofollow noreferrer">Here</a> is a code example for reading a local mnist dataset</p>
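<p>As a side note: <code>read_data_sets</code> only downloads files that are not already present, so if you place the four <code>*-ubyte.gz</code> archives from the mirror into <code>MNIST_data/</code> by hand, the original two lines should then work offline (a sketch, assuming that download behaviour):</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # skips the download if the files exist
</code></pre>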
|
tensorflow|mnist
| 3
|
5,970
| 42,823,357
|
return a list in a method that is referencing a PD Dataframe
|
<p>Is there any way to return a list or tuple when referencing a pandas DF? get_df() is a pandas column with a couple hundred float values. The code below asks to return the values greater than 6000 and less than 7000. Can I return a list from my method? (I know I can print this, but that is not what I am trying to do.) </p>
<pre><code>def mass_needed(numb_one, numb_two):
    for i in get_df():
        if i > numb_one and i < numb_two:
            return(i)

print(mass_needed(6000, 7000))
</code></pre>
<p>What I am trying to accomplish is to be able to call mass_needed() and get a list of values that I can print or manipulate just like a normal list.</p>
|
<p>In case anyone cares, I figured it out. I had to append the values as they were being iterated through.</p>
<pre><code>def mass_needed(numb_one, numb_two):
    li = []
    for i in get_df():
        if i > numb_one and i < numb_two:
            li.append(i)
    return li

x = pd.DataFrame(mass_needed(6000, 7000))
print(x)
</code></pre>
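<p>For completeness, a pandas-native sketch that avoids the explicit loop, assuming <code>get_df()</code> returns a Series of floats:</p>
<pre><code>def mass_needed(numb_one, numb_two):
    s = get_df()
    # boolean mask keeps only the values strictly between the two bounds
    return s[(s > numb_one) & (s < numb_two)].tolist()
</code></pre>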
|
python|pandas
| 0
|
5,971
| 14,431,646
|
How to write Pandas dataframe to sqlite with Index
|
<p>I have a list of stockmarket data pulled from Yahoo in a pandas DataFrame (see format below). The date is serving as the index in the DataFrame. I want to write the data (including the index) out to a SQLite database. </p>
<pre><code> AAPL GE
Date
2009-01-02 89.95 14.76
2009-01-05 93.75 14.38
2009-01-06 92.20 14.58
2009-01-07 90.21 13.93
2009-01-08 91.88 13.95
</code></pre>
<p>Based on my reading of the write_frame code for Pandas, it <a href="https://github.com/pydata/pandas/blob/master/pandas/io/sql.py#L163">does not currently support writing the index</a>. I've attempted to use to_records instead, but ran into the <a href="https://github.com/pydata/pandas/issues/1908">issue with Numpy 1.6.2 and datetimes</a>. Now I'm trying to write tuples using .itertuples, but SQLite throws an error that the data type isn't supported (see code and result below). I'm relatively new to Python, Pandas and Numpy, so it is entirely possible I'm missing something obvious. I think I'm running into a problem trying to write a datetime to SQLite, but I think I might be overcomplicating this. </p>
<p>I think I <em>may</em> be able to fix the issue by upgrading to Numpy 1.7 or the development version of Pandas, which has a fix posted on GitHub. I'd prefer to develop using release versions of software - I'm new to this and I don't want stability issues confusing matters further. </p>
<p>Is there a way to accomplish this using Python 2.7.2, Pandas 0.10.0, and Numpy 1.6.2? Perhaps cleaning the datetimes somehow? I'm in a bit over my head, any help would be appreciated. </p>
<p><strong>Code:</strong></p>
<pre><code>import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import sqlite3 as db

# download data from yahoo
all_data = {}
for ticker in ['AAPL', 'GE']:
    all_data[ticker] = pd.io.data.get_data_yahoo(ticker, '1/1/2009','12/31/2012')

# create a data frame
price = DataFrame({tic: data['Adj Close'] for tic, data in all_data.iteritems()})

# get output ready for database export
output = price.itertuples()
data = tuple(output)

# connect to a test DB with one three-column table titled "Demo"
con = db.connect('c:/Python27/test.db')
wildcards = ','.join(['?'] * 3)
insert_sql = 'INSERT INTO Demo VALUES (%s)' % wildcards
con.executemany(insert_sql, data)
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>---------------------------------------------------------------------------
InterfaceError Traceback (most recent call last)
<ipython-input-15-680cc9889c56> in <module>()
----> 1 con.executemany(insert_sql, data)
InterfaceError: Error binding parameter 0 - probably unsupported type.
</code></pre>
|
<p>In recent pandas the index will be saved in the database (you used to have to <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.reset_index.html" rel="noreferrer"><code>reset_index</code></a> first).</p>
<p>Following the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-sql" rel="noreferrer">docs</a> (setting a SQLite connection in memory):</p>
<pre><code>import sqlite3
# Create your connection.
cnx = sqlite3.connect(':memory:')
</code></pre>
<p><em>Note: You can also pass a SQLAlchemy engine here (see end of answer).</em></p>
<p>We can save <code>price2</code> to <code>cnx</code>:</p>
<pre><code>price2.to_sql(name='price2', con=cnx)
</code></pre>
<p>We can retrieve via <code>read_sql</code>:</p>
<pre><code>p2 = pd.read_sql('select * from price2', cnx)
</code></pre>
<p>However, when stored (and retrieved) <strong>dates are <code>unicode</code></strong> rather than <code>Timestamp</code>. To convert back to what we started with we can use <code>pd.to_datetime</code>:</p>
<pre><code>p2.Date = pd.to_datetime(p2.Date)
p = p2.set_index('Date')
</code></pre>
<p>We get back the same DataFrame as <code>prices</code>:</p>
<pre><code>In [11]: p2
Out[11]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1006 entries, 2009-01-02 00:00:00 to 2012-12-31 00:00:00
Data columns:
AAPL 1006 non-null values
GE 1006 non-null values
dtypes: float64(2)
</code></pre>
<hr>
<p>You can also use a <a href="https://docs.sqlalchemy.org/en/latest/core/engines.html" rel="noreferrer">SQLAlchemy engine</a>:</p>
<pre><code>from sqlalchemy import create_engine
e = create_engine('sqlite://') # pass your db url
price2.to_sql(name='price2', con=e)
</code></pre>
<p>This allows you to use <code>read_sql_table</code> (which can only be used with SQLAlchemy):</p>
<pre><code>pd.read_sql_table(table_name='price2', con=e)
# Date AAPL GE
# 0 2009-01-02 89.95 14.76
# 1 2009-01-05 93.75 14.38
# 2 2009-01-06 92.20 14.58
# 3 2009-01-07 90.21 13.93
# 4 2009-01-08 91.88 13.95
</code></pre>
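<p>As a side note, recent versions of <code>read_sql</code> can handle the date parsing and the index in one step, so the manual <code>to_datetime</code>/<code>set_index</code> round trip above becomes:</p>
<pre><code>p = pd.read_sql('select * from price2', cnx, parse_dates=['Date'], index_col='Date')
</code></pre>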
|
python|sqlite|pandas
| 62
|
5,972
| 30,399,147
|
What's the most pythonic way to load a matrix in ijv/coo/triplet format?
|
<p>My input file is in ijv/coo/triplet format with string column names, eg:</p>
<pre><code>Apple,Google,1
Apple,Banana,5
Microsoft,Orange,2
</code></pre>
<p>Should result in this 2x3 matrix:</p>
<pre><code>[[1,5,0], [0,0,2]]
</code></pre>
<p>I can read it manually by putting the column names to dictionaries and create a scipy sparse coo_matrix with that dict mapping to IDs. I would like to get it in scipy sparse or pandas dataframe in the end.</p>
<p>Is there a more pythonic way to do that? Pandas can only read CSV, and there is <code>scipy.io</code>, but it doesn't have a reader for this coo format either. So if there is no library, what would be the most pythonic way to get it into <code>scipy.coo_matrix</code> or <code>pandas.DataFrame</code>? </p>
|
<p>You need to define an unambiguous mapping from the row/column names to some indices (it is not important whether "Apple" is "0", or "1", just that it is represented by a number, hence this won't exactly match your result, but it should not matter). In this example, <code>'info.txt'</code> contains </p>
<pre><code>Apple,Google,1
Apple,Banana,5
Microsoft,Orange,2
</code></pre>
<p>Here is one way to achieve a coordinate matrix:</p>
<pre><code>import numpy as np
from scipy.sparse import coo_matrix
input = np.loadtxt( 'info.txt', delimiter=',' , dtype=str)
rows,cols,data = input.T
map_rows = { val:ind for ind,val in enumerate( np.unique(rows) ) }
map_cols = { val:ind for ind,val in enumerate( np.unique(cols) ) }
result = coo_matrix( (data.astype(float),( [map_rows[x] for x in rows], [map_cols[x] for x in cols]) ) )
</code></pre>
<p>Now you have the mappings and result</p>
<pre><code>print map_rows
#{'Apple': 0, 'Microsoft': 1}
print map_cols
#{'Banana': 0, 'Google': 1, 'Orange': 2}
print result.toarray()
#array([[ 5., 1., 0.],
# [ 0., 0., 2.]])
</code></pre>
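<p>Since the question also mentions pandas: a sketch of the same idea with <code>pd.crosstab</code>, which builds the dense, labelled matrix directly (assuming the same <code>info.txt</code>):</p>
<pre><code>import pandas as pd

df = pd.read_csv('info.txt', header=None, names=['row', 'col', 'val'])
dense = pd.crosstab(df['row'], df['col'], values=df['val'], aggfunc='sum').fillna(0)
print(dense)
#col        Banana  Google  Orange
#row
#Apple         5.0     1.0     0.0
#Microsoft     0.0     0.0     2.0
</code></pre>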
|
python|pandas|scipy|scikit-learn
| 1
|
5,973
| 39,102,051
|
Read the picture as a grayscale numpy array, and save it back
|
<p>I tried the following, expecting to see the grayscale version of source image:</p>
<pre><code>from PIL import Image
import numpy as np
img = Image.open("img.png").convert('L')
arr = np.array(img.getdata())
field = np.resize(arr, (img.size[1], img.size[0]))
out = field
img = Image.fromarray(out, mode='L')
img.show()
</code></pre>
<p>But for some reason, the whole image is pretty much a lot of dots with black in between. Why does it happen?</p>
|
<p>When you are creating the <code>numpy</code> array using the image data from your Pillow object, be advised that the default precision of the array is <code>int32</code>. I'm assuming that your data is actually <code>uint8</code> as most images seen in practice are this way. Therefore, you must explicitly ensure that the array is the same type as what was seen in your image. Simply put, ensure that the array is <code>uint8</code> after you get the image data, so that would be the fourth line in your code<sup>1</sup>.</p>
<pre><code>arr = np.array(img.getdata(), dtype=np.uint8) # Note the dtype input
</code></pre>
<hr>
<p><sup>1. Take note that I've added two more lines in your code at the beginning to import the necessary packages for this code to work (albeit with an image offline).</sup></p>
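<p>Putting it together, the question's script with only that <code>dtype</code> change (a minimal sketch):</p>
<pre><code>from PIL import Image
import numpy as np

img = Image.open("img.png").convert('L')
arr = np.array(img.getdata(), dtype=np.uint8)  # keep 8-bit precision
field = np.resize(arr, (img.size[1], img.size[0]))
Image.fromarray(field, mode='L').show()
</code></pre>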
|
image|numpy|image-processing|python-3.3|pillow
| 4
|
5,974
| 39,050,539
|
How to add multiple columns to pandas dataframe in one assignment?
|
<p>I'm new to pandas and trying to figure out how to add multiple columns to pandas simultaneously. Any help here is appreciated. Ideally I would like to do this in one step rather than multiple repeated steps...</p>
<pre><code>import pandas as pd
df = {'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
df[[ 'column_new_1', 'column_new_2','column_new_3']] = [np.nan, 'dogs',3] #thought this would work here...
</code></pre>
|
<p>I would have expected your syntax to work too. The problem arises because when you create new columns with the column-list syntax (<code>df[[new1, new2]] = ...</code>), pandas requires that the right hand side be a DataFrame (note that it doesn't actually matter if the columns of the DataFrame have the same names as the columns you are creating). </p>
<p>Your syntax works fine for assigning scalar values to <em>existing</em> columns, and pandas is also happy to assign scalar values to a new column using the single-column syntax (<code>df[new1] = ...</code>). So the solution is either to convert this into several single-column assignments, or create a suitable DataFrame for the right-hand side.</p>
<p>Here are several approaches that <em>will</em> work:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]
})
</code></pre>
<p>Then one of the following:</p>
<h2>1) Three assignments in one, using list unpacking:</h2>
<pre><code>df['column_new_1'], df['column_new_2'], df['column_new_3'] = [np.nan, 'dogs', 3]
</code></pre>
<h2>2) <code>DataFrame</code> conveniently expands a single row to match the index, so you can do this:</h2>
<pre><code>df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index)
</code></pre>
<h2>3) Make a temporary data frame with new columns, then combine with the original data frame later:</h2>
<pre><code>df = pd.concat(
[
df,
pd.DataFrame(
[[np.nan, 'dogs', 3]],
index=df.index,
columns=['column_new_1', 'column_new_2', 'column_new_3']
)
], axis=1
)
</code></pre>
<h2>4) Similar to the previous, but using <code>join</code> instead of <code>concat</code> (may be less efficient):</h2>
<pre><code>df = df.join(pd.DataFrame(
[[np.nan, 'dogs', 3]],
index=df.index,
columns=['column_new_1', 'column_new_2', 'column_new_3']
))
</code></pre>
<h2>5) Using a dict is a more "natural" way to create the new data frame than the previous two, but the new columns will be sorted alphabetically (at least <a href="https://stackoverflow.com/q/39980323/3830997">before Python 3.6 or 3.7</a>):</h2>
<pre><code>df = df.join(pd.DataFrame(
{
'column_new_1': np.nan,
'column_new_2': 'dogs',
'column_new_3': 3
}, index=df.index
))
</code></pre>
<h2>6) Use <code>.assign()</code> with multiple column arguments.</h2>
<p>I like this variant on @zero's answer a lot, but like the previous one, the new columns will always be sorted alphabetically, at least with early versions of Python:</p>
<pre><code>df = df.assign(column_new_1=np.nan, column_new_2='dogs', column_new_3=3)
</code></pre>
<h2>7) This is interesting (based on <a href="https://stackoverflow.com/a/44951376/3830997">https://stackoverflow.com/a/44951376/3830997</a>), but I don't know when it would be worth the trouble:</h2>
<pre><code>new_cols = ['column_new_1', 'column_new_2', 'column_new_3']
new_vals = [np.nan, 'dogs', 3]
df = df.reindex(columns=df.columns.tolist() + new_cols) # add empty cols
df[new_cols] = new_vals # multi-column assignment works for existing cols
</code></pre>
<h2>8) In the end it's hard to beat three separate assignments:</h2>
<pre><code>df['column_new_1'] = np.nan
df['column_new_2'] = 'dogs'
df['column_new_3'] = 3
</code></pre>
<p>Note: many of these options have already been covered in other answers: <a href="https://stackoverflow.com/q/43415798/3830997">Add multiple columns to DataFrame and set them equal to an existing column</a>, <a href="https://stackoverflow.com/q/19866377/3830997">Is it possible to add several columns at once to a pandas DataFrame?</a>, <a href="https://stackoverflow.com/q/30926670/3830997">Add multiple empty columns to pandas DataFrame</a></p>
|
python|pandas|dataframe
| 349
|
5,975
| 19,732,097
|
Conversion of 2D cvMat to 1D
|
<p>How can I convert a 2D <code>cvMat</code> to 1D? I have tried converting the 2D <code>cvMat</code> to a Numpy array and then used <code>ravel()</code> (I want that kind of resultant matrix). When I tried converting it back to <code>cvMat</code> using <code>cv.fromarray()</code>, it gave an error that the matrix must be 2D or 3D. </p>
|
<p>Use <code>matrix.reshape((-1, 1))</code> to turn the <em>n</em>-element 1D matrix into an <em>n</em>-by-1 2D one before converting it.</p>
|
python|opencv|numpy
| 0
|
5,976
| 29,140,233
|
Comparison of L value in grayscale image with Y value in YUV image
|
<p>In some comments on previous questions, people told me that the <code>Y</code> value of a <code>YUV</code> image converted using:</p>
<pre><code>image_in_yuv=cv2.cvtColor(image_in_bgr,cv2.COLOR_BGR2YUV)
</code></pre>
<p>is the same as the <code>L</code> value of the same image in its grayscale color space converted using </p>
<pre><code>image_in_grayscale=cv2.imread('image.png',cv2.IMREAD_GRAYSCALE)
</code></pre>
<p>I wonder how this is true, because on my side when I run, for example:</p>
<pre><code>print image_in_yuv[200,200,0] # Y will be printed
print image_in_grayscale[200,200] # L will be printed
</code></pre>
<p>I get <strong>different</strong> values of <code>Y</code> and <code>L</code> for the pixel (200,200)</p>
<p>So did I misunderstand something ?</p>
|
<p>The conversion of an RGB image into grayscale and YUV uses different numerical values. The <code>Y</code> channel <em>is</em> the "grayscale component" in the image, only in the sense that it denotes brightness. In fact, if I recall correct, the range of <code>Y</code> is 16-235.</p>
<p>Check out <a href="http://www.equasys.de/colorconversion.html" rel="nofollow">colour conversion</a> and
<a href="http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/" rel="nofollow">grayscale</a>.</p>
|
python|opencv|numpy
| 0
|
5,977
| 28,899,920
|
Numpy : The truth value of an array with more than one element is ambiguous
|
<p>I am really confused on why this error is showing up. Here is my code:</p>
<pre><code>import numpy as np
x = np.array([0, 0])
y = np.array([10, 10])
a = np.array([1, 6])
b = np.array([3, 7])
points = [x, y, a, b]
max_pair = [x, y]
other_pairs = [p for p in points if p not in max_pair]
>>>ValueError: The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all()
(a not in max_paix)
>>>ValueError: The truth ...
</code></pre>
<p>What confuses me is that the following works fine:</p>
<pre><code>points = [[1, 2], [3, 4], [5, 7]]
max_pair = [[1, 2], [5, 6]]
other_pairs = [p for p in points if p not in max_pair]
>>>[[3, 4], [5, 7]]
([5, 6] not in max_pair)
>>>False
</code></pre>
<p>Why is this happening when using numpy arrays? Is <code>not in/in</code> ambiguous for existence?<br>
What is the correct syntax using <code>any()/all()</code>?</p>
|
<p>Numpy arrays define a custom equality operator, i.e. they are objects that implement the <code>__eq__</code> magic function. Accordingly, the <code>==</code> operator and all other functions/operators that rely on such an equality call this custom equality function. </p>
<p>Numpy's equality is based on element-wise comparison of arrays. Thus, in return you get another numpy array with boolean values. For instance:</p>
<pre><code>x = np.array([1,2,3])
y = np.array([1,4,5])
x == y
</code></pre>
<p>returns</p>
<pre><code>array([ True, False, False], dtype=bool)
</code></pre>
<p>However, the <code>in</code> operator in combination with <em>lists</em> requires equality comparisons that only return a <strong>single</strong> boolean value. This is the reason why the error asks for <code>all</code> or <code>any</code>. For instance:</p>
<pre><code>any(x==y)
</code></pre>
<p>returns <code>True</code> because at least one value of the resulting array is <code>True</code>.
In contrast </p>
<pre><code>all(x==y)
</code></pre>
<p>returns <code>False</code> because <strong>not</strong> all values of the resulting array are <code>True</code>.</p>
<p>So in your case, a way around the problem would be the following:</p>
<pre><code>other_pairs = [p for p in points if all(any(p!=q) for q in max_pair)]
</code></pre>
<p>and <code>print other_pairs</code> prints the expected result</p>
<pre><code>[array([1, 6]), array([3, 7])]
</code></pre>
<p>Why so? Well, we look for an item <em>p</em> from <em>points</em> where <strong>any</strong> of its entries are unequal to the entries of <strong>all</strong> items <em>q</em> from <em>max_pair</em>. </p>
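<p>For larger inputs, the same membership test can be done with broadcasting instead of a Python-level double loop. A sketch, assuming all pairs have the same length:</p>
<pre><code>pts = np.array(points)    # shape (4, 2)
mp = np.array(max_pair)   # shape (2, 2)
# rows of pts that equal some row of mp, element-wise
mask = (pts[:, None, :] == mp[None, :, :]).all(-1).any(-1)
other_pairs = [p for p, m in zip(points, mask) if not m]
</code></pre>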
|
python|numpy
| 8
|
5,978
| 29,224,987
|
Unit Test with Pandas Dataframe to read *.csv files
|
<p>I am often vertically concatenating many *.csv files in Pandas. So, everytime I do this, I have to check that all the files I am concatenating have the same number of columns. This became quite cumbersome since I had to figure out a way to ignore the files with more or less columns than what I tell it I need. eg. the first 10 files have 4 columns but then file #11 has 8 columns and file #54 has 7 columns. This means I have to load all files - even the files that have the wrong number of columns. I want to avoid loading those files and then trying to concatenate them vertically - I want to skip them completely.</p>
<p>So, I am trying to write a Unit Test with Pandas that will:
a. check the size of all the *.csv files in some folder
b. ONLY read in the files that have a pre-determined number of columns
c. print a message indicating the names of the *.csv files that have the wrong number of columns</p>
<p>Here is what I have (I am working in the folder C:\Users\Downloads):</p>
<pre><code>import unittest
import pandas as pd
import numpy as np
from os import listdir

# Create csv files:
df1 = pd.DataFrame(np.random.rand(10,4), columns = ['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.rand(10,3), columns = ['A', 'B', 'C'])
df1.to_csv('test1.csv')
df1.to_csv('test2.csv')

class Conct(unittest.TestCase):
    """Tests for `primes.py`."""

    TEST_INP_DIR = 'C:\Users\Downloads'
    fns = listdir(TEST_INP_DIR)
    t_fn = [ fn for fn in fns if fn.endswith(".csv") ]
    print t_fn
    dfb = pd.DataFrame()

    def setUp(self):
        for elem in Conct.t_fn:
            print elem
            fle = pd.read_csv(elem)
            try:
                pd.concat([Conct.dfb,fle],axis = 0, join='outer', join_axes=None, ignore_index=True, verify_integrity=False)
            except IOError:
                print 'Error: unable to concatenate a file with %s columns.' % fle.shape[1]
                self.err_file = fle

    def tearDown(self):
        del self.err_fle

if __name__ == '__main__':
    unittest.main()
</code></pre>
<p>Problem:
I am getting this output:</p>
<pre><code>['test1.csv', 'test2.csv']
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
</code></pre>
<p>The first print statement works - it is printing a list of *.csv files, as expected. But, for some reason, the second and third print statements do not work.</p>
<p>Also, the concatenation should not have gone through - the second file has 3 columns but the first one has got 4 columns. The IOerror line does not seem to be printing.</p>
<p>How can I use a <code>Python unittest</code> to check each of the *.csv files to make sure that they have the same number of columns before concatenation? And how can I print the appropriate error message at the correct time?</p>
|
<p>On second thought, instead of chunksize, just read in the first row and count the number of columns, then read and append everything with the correct number of columns. In short:</p>
<pre><code>for f in files:
    test = pd.read_csv( f, nrows=1 )
    if len( test.columns ) == 4:
        df = df.append( pd.read_csv( f ) )
</code></pre>
<p>Here's the full version:</p>
<pre><code>df1 = pd.DataFrame(np.random.rand(2,4), columns = ['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.rand(2,3), columns = ['A', 'B', 'C'])
df3 = pd.DataFrame(np.random.rand(2,4), columns = ['A', 'B', 'C', 'D'])

df1.to_csv('test1.csv',index=False)
df2.to_csv('test2.csv',index=False)
df3.to_csv('test3.csv',index=False)

files = ['test1.csv', 'test2.csv', 'test3.csv']

df = pd.DataFrame()
for f in files:
    test = pd.read_csv( f, nrows=1 )
    if len( test.columns ) == 4:
        df = df.append( pd.read_csv( f ) )

In [54]: df
Out [54]:
          A         B         C         D
0  0.308734  0.242331  0.318724  0.121974
1  0.707766  0.791090  0.718285  0.209325
0  0.176465  0.299441  0.998842  0.077458
1  0.875115  0.204614  0.951591  0.154492
<p>(<strong>Edit to add</strong>) Regarding the use of <code>nrows</code> for the <code>test...</code> line: The only point of the test line is to read in enough of the CSV so that on the next line we check if it has the right number of columns before reading in. In this test case, reading in the first row is sufficient to figure out if we have 3 or 4 columns, and it's inefficient to read in more than that, although there is no harm in leaving off the <code>nrows=1</code> besides reduced efficiency.</p>
<p>In other cases (e.g. no header row and varying numbers of columns in the data), you might <strong>need</strong> to read in the whole CSV. In that case, you'd be better off doing it like this:</p>
<pre><code>for f in files:
    test = pd.read_csv( f )
    if len( test.columns ) == 4:
        df = df.append( test )
</code></pre>
<p>The only downside of that way is that you completely read in the datasets with 3 columns that you don't want to keep, but you also don't read in the good datasets twice that way. So that's definitely a better way if you don't want to use <code>nrows</code> at all. Ultimately, depends on what your actual data looks like as to which way is best for you, of course.</p>
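<p>One more variant: if the files always have a header row, <code>nrows=0</code> reads just that header, so you never load any data rows during the check (assuming a pandas version that accepts <code>nrows=0</code>):</p>
<pre><code>for f in files:
    header = pd.read_csv( f, nrows=0 )   # columns only, no data
    if len( header.columns ) == 4:
        df = df.append( pd.read_csv( f ) )
</code></pre>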
|
unit-testing|python-2.7|pandas|dataframe
| 1
|
5,979
| 33,772,398
|
sum two pandas dataframe columns, keep non-common rows
|
<p><a href="https://stackoverflow.com/questions/33771675/pandas-concat-merge-and-sum-one-column#33771793">I just asked a similar question</a> but then
realized, it wasn't the <em>right</em> question.</p>
<p>What I'm trying to accomplish is to combine two data frames that actually have the same columns, but may or may not have common rows (indices of a MultiIndex). I'd like to combine them taking the sum of one of the columns, but leaving the other columns.</p>
<p>According to the accepted answer, the approach may be something like:</p>
<pre><code>def mklbl(prefix,n):
    try:
        return ["%s%s" % (prefix,i) for i in range(n)]
    except:
        return ["%s%s" % (prefix,i) for i in n]

mi1 = pd.MultiIndex.from_product([mklbl('A',4), mklbl('C',2)])
mi2 = pd.MultiIndex.from_product([mklbl('A',[2,3,4]), mklbl('C',2)])

df2 = pd.DataFrame({'a':np.arange(len(mi1)), 'b':np.arange(len(mi1)),'c':np.arange(len(mi1)), 'd':np.arange(len( mi1))[::-1]}, index=mi1).sort_index().sort_index(axis=1)
df1 = pd.DataFrame({'a':np.arange(len(mi2)), 'b':np.arange(len(mi2)),'c':np.arange(len(mi2)), 'd':np.arange(len( mi2))[::-1]}, index=mi2).sort_index().sort_index(axis=1)

df1 = df1.add(df2.pop('b'))
</code></pre>
<p>but the problem is this will fail as the indices don't align.</p>
<p>This is close to what I'm trying to achieve, except that I lose rows that are not common to the two dataframes:</p>
<pre><code>df1['b'] = df1['b'].add(df2['b'], fill_value=0)
</code></pre>
<p>But this gives me:</p>
<pre><code>Out[197]:
a b c d
A2 C0 0 4 0 5
C1 1 6 1 4
A3 C0 2 8 2 3
C1 3 10 3 2
A4 C0 4 4 4 1
C1 5 5 5 0
</code></pre>
<p>When I want:</p>
<pre><code>In [197]: df1
Out[197]:
a b c d
A0 C0 0 0 0 7
C1 1 2 1 6
A1 C0 2 4 2 5
C1 3 6 3 4
A2 C0 0 4 0 5
C1 1 6 1 4
A3 C0 2 8 2 3
C1 3 10 3 2
A4 C0 4 4 4 1
C1 5 5 5 0
</code></pre>
<p>Note: in response to @RandyC's comment about the XY problem... the <em>specific</em> problem is that I have a class which reads data and returns a dataframe of 1e9 rows. The columns of the data frame are <code>latll, latur, lonll, lonur, concentration, elevation</code>. The data frame is indexed by a <code>MultiIndex</code> (lat, lon, time) where time is a datetime. The rows of the two dataframes may/may not be the same (IF they exist for a given date, the lat/lon will be the same... they are grid cell centers). <code>latll, latur, lonll, lonur</code> are calculated from lat/lon. I want to sum the <code>concentration</code> column as I add two data frames, but not change the others.</p>
|
<p>Self-answering: there was an error in the comment above that caused double adding. This is correct:</p>
<pre><code>newdata = df2.pop('b')
result = df1.combine_first(df2)
result['b']= result['b'].add(newdata, fill_value=0)
</code></pre>
<p>seems to provide the solution to my use-case.</p>
|
python|pandas|dataframe
| 0
|
5,980
| 23,659,234
|
How to move my pandas dataframe to d3?
|
<p>I am new to Python and have worked my way through a few books on it. Everything is great, except visualizations. I really dislike matplotlib and Bokeh requires too heavy of a stack.</p>
<p>The workflow I want is:</p>
<p>Data munging analysis using pandas in ipython notebook -> visualization using d3 in sublimetext2</p>
<p>However, being new to both Python and d3, I don't know the best way to export my pandas dataframe to d3. Should I just have it as a csv? JSON? Or is there a more direct way?</p>
<p>Side question: Is there any (reasonable) way to do everything in an ipython notebook instead of switching to sublimetext?</p>
<p>Any help would be appreciated.</p>
|
<p>Basically there is no single best format that will fit all your visualization needs.</p>
<p>It really depends on the visualizations you want to obtain.</p>
<p>For example, a <a href="http://bl.ocks.org/mbostock/raw/3886208/" rel="nofollow noreferrer">Stacked Bar Chart</a> takes as input a CSV file, and an <a href="http://bost.ocks.org/mike/miserables" rel="nofollow noreferrer">adjacency matrix vizualisation</a> takes a JSON format.</p>
<p>From my experience:</p>
<ul>
<li>to display relations beetween items, like <a href="http://bost.ocks.org/mike/miserables" rel="nofollow noreferrer">adjacency matrix</a> or <a href="http://bl.ocks.org/mbostock/4062006" rel="nofollow noreferrer">chord diagram</a>, one will prefer a JSON format that will allow to describe only existing relations. Data are stored like in a sparse matrix, and several data can be nested using dictionary. Moreover this format can directly be parsed in Python.</li>
<li>to display properties of an array of items, a CSV format can be fine. A perfect example can be found <a href="http://exposedata.com/parallel" rel="nofollow noreferrer">here</a> with a parallel chart display.</li>
<li>to display hierarchical data, like a tree, JSON is best suited.</li>
</ul>
<p>The best thing to do to help you figure out what best format you need, is to have a look at this <a href="https://observablehq.com/@d3/gallery" rel="nofollow noreferrer">d3js gallery</a></p>
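<p>Whichever format you pick, the export itself is a one-liner in pandas (a minimal sketch, assuming <code>df</code> is your data frame):</p>
<pre><code>df.to_csv('data.csv')                       # for CSV-driven charts, loaded with d3.csv()
df.to_json('data.json', orient='records')   # row-oriented JSON, loaded with d3.json()
</code></pre>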
|
pandas|d3.js|ipython|data-munging
| 5
|
5,981
| 22,726,498
|
What's the corresponding multielement operator version of "numpy.logical_or"?
|
<p>To sum elements up, we have binary operator <code>np.add</code>, and moreover <code>np.sum</code> dealing with multiple elements. Likewise, we have <code>np.multiply</code> and <code>np.product</code> to do the multiplication.</p>
<p>But for <code>np.logical_or</code>, what's the corresponding multielement operator? Assuming I have the following array:</p>
<pre><code>In [29]: a
Out[29]:
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
</code></pre>
<p>I want to have a method like <code>np.logical_or(a, axis=0)</code>, so that I could get such an array <code>[ True, True, True]</code>. Now the only way I could come up with is:</p>
<pre><code>In [31]: a.sum(0).astype(bool)
Out[31]: array([ True, True, True], dtype=bool)
</code></pre>
<p>But that's not a good way because it will fail on an array like:</p>
<pre><code>array([[-1, -1],
[ 1, 1]])
</code></pre>
|
<p>You're thinking of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html" rel="nofollow"><code>np.all</code></a> (for <code>logical_and</code>) or <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.any.html" rel="nofollow"><code>np.any</code></a> (for <code>logical_or</code>).</p>
<pre><code>In [11]: a.any(axis=1)
Out[11]: array([ True, True, True], dtype=bool)
</code></pre>
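<p>And if you want the literal "multielement" form of the binary operator, every numpy ufunc exposes <code>reduce</code>:</p>
<pre><code>In [12]: np.logical_or.reduce(a, axis=0)
Out[12]: array([ True,  True,  True], dtype=bool)
</code></pre>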
|
python|numpy|operators
| 1
|
5,982
| 22,835,914
|
Select two random rows from numpy array
|
<p>I have a numpy array as</p>
<pre><code>[[ 5.80084178e-05 1.20779787e-02 -2.65970238e-02]
[ -1.36810406e-02 6.85722519e-02 -2.60280724e-01]
[ 4.21996519e-01 -1.43644036e-01 2.12904690e-01]
[ 3.03098198e-02 1.50170659e-02 -1.09683402e-01]
[ -1.50776089e-03 7.22369575e-03 -3.71181228e-02]
[ -3.04448275e-01 -3.66987035e-01 1.44618682e-01]
[ -1.46744916e-01 3.47112167e-01 3.09550267e-01]
[ 1.16567762e-03 1.72858807e-02 -9.39297514e-02]
[ 1.25896836e-04 1.61310167e-02 -6.00253128e-02]
[ 1.65062798e-02 1.96933143e-02 -4.26540031e-02]
[ -3.78020965e-03 7.51770012e-03 -3.67852984e-02]]
</code></pre>
<p>And I want to select any two random rows from these so the output will be-</p>
<pre><code>[[ -1.36810406e-02 6.85722519e-02 -2.60280724e-01]
[ 1.16567762e-03 1.72858807e-02 -9.39297514e-02]]
</code></pre>
|
<p>I believe you are simply looking for:</p>
<pre><code>#Create a random array
>>> a = np.random.random((5,3))
>>> a
array([[ 0.26070423, 0.85704248, 0.82956827],
[ 0.26840489, 0.75970263, 0.88660498],
[ 0.5572771 , 0.29934986, 0.04507683],
[ 0.78377012, 0.66445244, 0.08831775],
[ 0.75533819, 0.05128844, 0.49477196]])
#Select random rows based on the rows in your array
#The 2 value can be changed to select any number of rows
>>> b = np.random.randint(0,a.shape[0],2)
>>> b
array([1, 2])
#Slice array a
>>> a[b]
array([[ 0.26840489, 0.75970263, 0.88660498],
[ 0.5572771 , 0.29934986, 0.04507683]])
</code></pre>
<p>Or simply:</p>
<pre><code>a[np.random.randint(0,a.shape[0],2)]
</code></pre>
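<p>Note that <code>randint</code> can pick the same row twice. If the two rows must be distinct, <code>np.random.choice</code> with <code>replace=False</code> does that (available in numpy >= 1.7):</p>
<pre><code>rows = np.random.choice(a.shape[0], size=2, replace=False)
a[rows]
</code></pre>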
|
python|arrays|random|numpy
| 10
|
5,983
| 29,655,929
|
Apply function to multilevel columns
|
<p>Given a <code>pandas</code> dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({
'clients': pd.Series(['A', 'A', 'A', 'B', 'B']),
'x': pd.Series([1.0, 1.0, 2.0, 1.0, 2.0]),
'y': pd.Series([6.0, 7.0, 8.0, 9.0, 10.0]),
'z': pd.Series([3, 2, 1, 0, 0])
})
grpd = df.groupby(['clients']).agg({
'x': [np.sum, np.average],
'y': [np.sum, np.average],
'z': [np.sum, np.average]
})
In[55]: grpd
Out[53]:
y x z
sum average sum average sum average
clients
A 21 7.0 4 1.333333 6 2
B 19 9.5 3 1.500000 0 0
</code></pre>
<p>how can I create a new column applying a function to a selected sub-column?</p>
<p>The desired result is:</p>
<pre><code> y x z new_col
sum average sum average sum average
clients
A 21 7.0 4 1.333333 6 2 0.19
B 19 9.5 3 1.500000 0 0 0.15
</code></pre>
<p>I had something like this in mind:</p>
<pre><code>grpd['new_col'] = grpd[['x', 'y']].apply(lambda x: x[0]['sum'] / x[1]['sum'], axis=1)
</code></pre>
|
<p>You can do vectorized versions of the operation:</p>
<pre><code>grpd['new_col'] = grpd[('x', 'sum')]/grpd[('y', 'sum')]
</code></pre>
<p>Or, for consistency (makes the second-level index for <code>new_col</code> <code>sum</code> like it is for <code>x</code> and <code>y</code>):</p>
<pre><code>grpd[('new_col','sum')] = grpd[('x', 'sum')]/grpd[('y', 'sum')]
</code></pre>
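<p>If you need several of the <code>sum</code> sub-columns at once, a cross-section keeps it readable (a sketch of the same computation):</p>
<pre><code>sums = grpd.xs('sum', axis=1, level=1)   # all 'sum' sub-columns as a flat frame
grpd[('new_col', 'sum')] = sums['x'] / sums['y']
</code></pre>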
|
python|pandas|multi-level
| 0
|
5,984
| 29,806,080
|
Numpy - constructing matrix of Jaro (or Levenshtein) distances using numpy.fromfunction
|
<p>I am doing some text analysis right now, and as part of it I need to get a matrix of Jaro distances between all of the words in a specific list (so a pairwise distance matrix), like this one:</p>
<pre><code> │CHEESE CHORES GEESE GLOVES
───────┼───────────────────────────
CHEESE │ 0 0.222 0.177 0.444
CHORES │0.222 0 0.422 0.333
GEESE │0.177 0.422 0 0.300
GLOVES │0.444 0.333 0.300 0
</code></pre>
<p>So, I tried to construct it using <code>numpy.fromfunction</code>. Per documentation and examples it passes coordinates to the function, gets its results, constructs the matrix of results.</p>
<p>I tried the below approach:</p>
<pre><code>from jellyfish import jaro_distance
def distance(i, j):
return 1 - jaro_distance(feature_dict[i], feature_dict[j])
feature_dict = 'CHEESE CHORES GEESE GLOVES'.split()
distance_matrix = np.fromfunction(distance, shape=(len(feature_dict),len(feature_dict)))
</code></pre>
<p>Notice: jaro_distance just accepts 2 strings and returns a float.</p>
<p>And I got a error:</p>
<pre><code>File "<pyshell#26>", line 4, in distance
return 1 - jaro_distance(feature_dict[i], feature_dict[j])
TypeError: only integer arrays with one element can be converted to an index
</code></pre>
<p>I added <code>print(i)</code>, <code>print(j)</code> at the beginning of the function and found that instead of scalar coordinates, something odd is passed:</p>
<pre><code>[[ 0. 0. 0. 0.]
[ 1. 1. 1. 1.]
[ 2. 2. 2. 2.]
[ 3. 3. 3. 3.]]
[[ 0. 1. 2. 3.]
[ 0. 1. 2. 3.]
[ 0. 1. 2. 3.]
[ 0. 1. 2. 3.]]
</code></pre>
<p>Why? The <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html" rel="nofollow">examples</a> on numpy site clearly show that just two integers are passed, nothing else.</p>
<p>I tried to exactly reproduce their example using a <code>lambda</code> function, but I get exactly same error:</p>
<pre><code>distance_matrix = np.fromfunction(lambda i, j: 1 - jaro_distance(feature_dict[i], feature_dict[j]), shape=(len(feature_dict),len(feature_dict)))
</code></pre>
<p>Any help is appreciated - I assume I misunderstood it somehow.</p>
|
<p>As suggested by @xnx, I investigated the <a href="https://stackoverflow.com/questions/18702105/parameters-to-numpys-fromfunction">question</a> and found out that <code>fromfunction</code> does not pass coordinates one by one, but actually passes all of the indices at the same time. This means that if the shape of the array were (2,2), numpy would not perform <code>f(0,0), f(0,1), f(1,0), f(1,1)</code>, but rather would perform:</p>
<pre><code>f([[0., 0.], [1., 1.]], [[0., 1.], [0., 1.]])
</code></pre>
<p>But it looks like my specific function can be vectorized and will produce the needed results. The code to achieve this is below:</p>
<pre><code>from jellyfish import jaro_distance
import numpy as np

def distance(i, j):
    return 1 - jaro_distance(feature_dict[i], feature_dict[j])

feature_dict = 'CHEESE CHORES GEESE GLOVES'.split()
funcProxy = np.vectorize(distance)
distance_matrix = np.fromfunction(funcProxy, shape=(len(feature_dict),len(feature_dict)))
</code></pre>
<p>And it works fine.</p>
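<p>For what it's worth, <code>np.vectorize</code> is itself a Python-level loop, so an explicit comprehension is an equivalent (and arguably clearer) sketch of the same computation:</p>
<pre><code>n = len(feature_dict)
distance_matrix = np.array([[distance(i, j) for j in range(n)] for i in range(n)])
</code></pre>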
|
python|arrays|numpy|matrix
| 1
|
5,985
| 62,450,156
|
numpy set value with another multiple dimension array as index
|
<p>Assume there is a 4-dimensional array idx1 that stores the 5th-dimension index for another 5-dimensional array zeros1, like:</p>
<pre><code>N,T,H,W = idx1.shape
zeros1 = np.zeros( (N,T,H,W, 256) )
# it is guaranteed that idx1's value <256
</code></pre>
<p>I want to realize</p>
<pre><code>for n in range(N):
    for t in range(T):
        for h in range(H):
            for w in range(W):
                x = idx1[ n,t,h,w ]
                zeros1[n,t,h,w,x] = 1
</code></pre>
<p>How can I do it with numpy advanced indexing.</p>
|
<p>Use open-range arrays and index to assign -</p>
<pre><code>out = np.zeros( (N,T,H,W, 256) )
i,j,k,l = np.ogrid[:N,:T,:H,:W]
out[i,j,k,l,idx1] = 1
</code></pre>
<p>Alternatively, in one-line -</p>
<pre><code>out[tuple((np.ogrid[:N,:T,:H,:W]+[idx1]))] = 1
</code></pre>
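<p>For this particular pattern (one-hot encoding along the last axis), indexing an identity matrix is another common shortcut, at the cost of materialising <code>np.eye(256)</code>:</p>
<pre><code>out = np.eye(256)[idx1]   # shape (N, T, H, W, 256), one-hot along the last axis
</code></pre>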
|
python|numpy
| 2
|
5,986
| 62,313,294
|
Get indices in pandas series while using str.findall
|
<p>I am working on finding the rows which contain a particular string. The dataset has close to 1 million rows. Here is a simple example: </p>
<pre><code>text=['abc USER@xxx.com 123 any@www foo @ bar 78@ppp @5555 aa@111www','anontalk.com']
text=pd.Series(text)
srhc=text.str.findall('www')
srhc
</code></pre>
<p>And the output is;</p>
<pre><code>0 [www, www]
1 []
dtype: object
</code></pre>
<p>Is it possible to efficiently (i.e. programmatically) obtain just the list of indices whose entries contain the text <code>www</code>? Help is appreciated.</p>
|
<p>We can do <code>str</code> <code>contains</code> with <code>nonzero</code></p>
<pre><code>srhc=text.str.contains('www').to_numpy().nonzero()[0]
srhc
Out[66]: array([0], dtype=int64)
</code></pre>
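<p>If you want the actual index labels rather than positional offsets, the boolean mask also works directly against the index:</p>
<pre><code>text.index[text.str.contains('www')]
# Int64Index([0], dtype='int64')
</code></pre>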
|
python|pandas
| 1
|
5,987
| 62,428,492
|
how to convert np.double or np.float64 values to real value
|
<p>I have a binary file that consists of data from multiple sensors and is written by Catman software (HBM).
I am reading that file using the guidelines given by the Catman software.</p>
<p><a href="https://docs.google.com/spreadsheets/d/1dZOw9L6_ukHNYlcR-n64DuRZ702nBq6jjA169q-aCz0/edit?usp=sharing" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1dZOw9L6_ukHNYlcR-n64DuRZ702nBq6jjA169q-aCz0/edit?usp=sharing</a></p>
<p>Please check the link for the data-writing format.
I am getting all the parameters correctly, but not the data. Please help.</p>
<pre><code>Actual sensor data showing in Catman software
-0.05625
-0.07083
-0.07500
-0.07396
-0.07708
-0.07188
-0.06563
-0.04896
These are the values i got while reading same those values in double
-3.01998798238334e-13
1.68682055390325e-77
-1.07637219868070e-141
6.61923789516606e-206
-3.69718906070881e-270
-7.62423663153382e+282
4.68858598328491e+218
-2.61881942919867e+154
fid=open(file,"rb")
data = np.frombuffer(fid.read(8), dtype=np.double)
</code></pre>
<p>Please help me to get real values</p>
<p><a href="https://in.mathworks.com/matlabcentral/fileexchange/6780-catman-file-importer?focused=54427cf8-d142-8e8a-812e-79e73b54b4fb&tab=function" rel="nofollow noreferrer">https://in.mathworks.com/matlabcentral/fileexchange/6780-catman-file-importer?focused=54427cf8-d142-8e8a-812e-79e73b54b4fb&tab=function</a>
MATLAB LINK FO REFERENCE</p>
|
<p>This should read the first data value into a numpy array for you:</p>
<pre class="lang-py prettyprint-override"><code>with open(file, 'rb') as f:
    file_id = int.from_bytes(f.read(2), byteorder='little')
    data_offset = int.from_bytes(f.read(4), byteorder='little')
    f.seek(data_offset, 0)
    first_data_value = np.frombuffer(f.read(8), dtype='<f8')
</code></pre>
<p>You will also need the number of channels and length of each channel from the channel info section to know how many 8-byte blocks to read.</p>
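<p>A sketch of reading one whole channel, continuing inside the same <code>with</code> block; <code>channel_length</code> here is a placeholder for the value taken from that channel-info section:</p>
<pre class="lang-py prettyprint-override"><code>    # channel_length is hypothetical - read it from the channel info first
    channel = np.frombuffer(f.read(8 * channel_length), dtype='<f8')
</code></pre>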
|
python|numpy|file|binary
| 0
|
5,988
| 62,147,370
|
AttributeError: 'Model' object has no attribute 'trainable_variables' when model is <class 'keras.engine.training.Model'>
|
<p>I've just started to learn Tensorflow (2.1.0), Keras (2.3.1) and Python 3.7.7.</p>
<p>By the way, I'm running all my code on an Anaconda Environment on Windows 7 64bit. I have also tried on an Anaconda Environment on Linux and I get the same error.</p>
<p>I'm following this Tensorflow's tutorial: "<a href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough" rel="nofollow noreferrer">Custom training: walkthrough</a>".</p>
<p>Everything is ok, but when I typed this piece of code:</p>
<pre><code>def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets, training=True)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>Instance of 'Model' has no 'trainable_variables' member</p>
</blockquote>
<p>This is my model, with all of its imports:</p>
<pre><code>import keras
from keras.models import Input, Model
from keras.layers import Dense, Conv2D, Conv2DTranspose, UpSampling2D, MaxPooling2D, Flatten, ZeroPadding2D
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
import numpy as np
import tensorflow as tf
def vgg16_encoder_decoder(input_size = (200,200,1)):
    #################################
    # Encoder
    #################################
    inputs = Input(input_size, name = 'input')

    conv1 = Conv2D(64, (3, 3), activation = 'relu', padding = 'same', name ='conv1_1')(inputs)
    conv1 = Conv2D(64, (3, 3), activation = 'relu', padding = 'same', name ='conv1_2')(conv1)
    pool1 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_1')(conv1)

    conv2 = Conv2D(128, (3, 3), activation = 'relu', padding = 'same', name ='conv2_1')(pool1)
    conv2 = Conv2D(128, (3, 3), activation = 'relu', padding = 'same', name ='conv2_2')(conv2)
    pool2 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_2')(conv2)

    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_1')(pool2)
    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_2')(conv3)
    conv3 = Conv2D(256, (3, 3), activation = 'relu', padding = 'same', name ='conv3_3')(conv3)
    pool3 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_3')(conv3)

    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_1')(pool3)
    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_2')(conv4)
    conv4 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv4_3')(conv4)
    pool4 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_4')(conv4)

    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_1')(pool4)
    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_2')(conv5)
    conv5 = Conv2D(512, (3, 3), activation = 'relu', padding = 'same', name ='conv5_3')(conv5)
    pool5 = MaxPooling2D(pool_size = (2,2), strides = (2,2), name = 'pool_5')(conv5)

    #################################
    # Decoder
    #################################
    #conv1 = Conv2DTranspose(512, (2, 2), strides = 2, name = 'conv1')(pool5)
    upsp1 = UpSampling2D(size = (2,2), name = 'upsp1')(pool5)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv6_1')(upsp1)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv6_2')(conv6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv6_3')(conv6)

    upsp2 = UpSampling2D(size = (2,2), name = 'upsp2')(conv6)
    conv7 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv7_1')(upsp2)
    conv7 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv7_2')(conv7)
    conv7 = Conv2D(512, 3, activation = 'relu', padding = 'same', name = 'conv7_3')(conv7)
    zero1 = ZeroPadding2D(padding = ((1, 0), (1, 0)), data_format = 'channels_last', name='zero1')(conv7)

    upsp3 = UpSampling2D(size = (2,2), name = 'upsp3')(zero1)
    conv8 = Conv2D(256, 3, activation = 'relu', padding = 'same', name = 'conv8_1')(upsp3)
    conv8 = Conv2D(256, 3, activation = 'relu', padding = 'same', name = 'conv8_2')(conv8)
    conv8 = Conv2D(256, 3, activation = 'relu', padding = 'same', name = 'conv8_3')(conv8)

    upsp4 = UpSampling2D(size = (2,2), name = 'upsp4')(conv8)
    conv9 = Conv2D(128, 3, activation = 'relu', padding = 'same', name = 'conv9_1')(upsp4)
    conv9 = Conv2D(128, 3, activation = 'relu', padding = 'same', name = 'conv9_2')(conv9)

    upsp5 = UpSampling2D(size = (2,2), name = 'upsp5')(conv9)
    conv10 = Conv2D(64, 3, activation = 'relu', padding = 'same', name = 'conv10_1')(upsp5)
    conv10 = Conv2D(64, 3, activation = 'relu', padding = 'same', name = 'conv10_2')(conv10)

    conv11 = Conv2D(1, 3, activation = 'relu', padding = 'same', name = 'conv11')(conv10)

    model = Model(inputs = inputs, outputs = conv11, name = 'vgg-16_encoder_decoder')

    return model
</code></pre>
<p>I haven't found any reference to that attribute in the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow noreferrer">Tensorflow Keras Model</a> documentation.</p>
<p>On "<a href="https://www.tensorflow.org/guide/migrate#2_use_python_objects_to_track_variables_and_losses" rel="nofollow noreferrer">Migrate your TensorFlow 1 code to TensorFlow 2 - 2. Make the code 2.0-native</a>", say:</p>
<blockquote>
<p>If you need to aggregate lists of variables (like
tf.Graph.get_collection(tf.GraphKeys.VARIABLES)), use the .variables
and .trainable_variables attributes of the Layer and Model objects.</p>
</blockquote>
<p>The network in Tensorflow's tutorial "<a href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough" rel="nofollow noreferrer">Custom training: walkthrough</a>" is:</p>
<pre><code>model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
</code></pre>
<p>When I do:</p>
<pre><code>print(type(model))
</code></pre>
<p>I get:</p>
<pre><code><class 'tensorflow.python.keras.engine.sequential.Sequential'>
</code></pre>
<p>But if I print the type of my network, <code>vgg16_encoder_decoder</code>, I get:</p>
<pre><code><class 'keras.engine.training.Model'>
</code></pre>
<p>So, <strong>the problem is</strong> the type of the network. I hadn't seen the above class, <code>'keras.engine.training.Model'</code>, before.</p>
<p>How can I fix this problem to let me use the attribute <code>trainable_variables</code>?</p>
|
<p>The problem is that you are using the <code>keras</code> library instead of <code>tensorflow.keras</code>.
When using tensorflow, it is highly recommended to use its own keras implementation. </p>
<p>This code should work: </p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, Conv2DTranspose, UpSampling2D, MaxPooling2D, Flatten, ZeroPadding2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import numpy as np
def vgg16_encoder_decoder(input_size = (200,200,1)):
# Your code here (no change needed)
model = vgg16_encoder_decoder()
model.trainable_variables
</code></pre>
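<p>As a quick sanity check that the attribute exists on <code>tf.keras</code> models, here is a minimal stand-in sketch (the layers are placeholders, not your architecture):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
out = Dense(3)(inp)
m = Model(inputs=inp, outputs=out)
print(type(m))                     # a tensorflow.python.keras... class
print(len(m.trainable_variables))  # 2: kernel + bias of the Dense layer
</code></pre>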
|
python|tensorflow|keras
| 3
|
5,989
| 62,116,344
|
Convert a series of 2D XY-line plots into a 2D heatmap plot
|
<p>I'm pretty new to python coding, so apologies if this question has been asked before.</p>
<p>I have a piece of code, written by another person, which I cannot show here, but it produces a series of line plots (like this <a href="https://i.stack.imgur.com/kQdVl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQdVl.png" alt="X-Y line plots"></a>).</p>
<p>What I would like to do is basically collapse these plots into one heatmap plot, wherein the colour shows the density of the lines in that location.</p>
<p>Could anyone point me in the right direction for producing these plots?</p>
|
<p><img src="https://i.stack.imgur.com/ZXz2h.png" width="100" height="100">It looks like you are trying to draw a <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist2d.html#matplotlib-pyplot-hist2d" rel="nofollow noreferrer">2d histogram</a>; would you like help with that?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

N = 200
t = np.linspace(0, 2*np.pi, N)
r = 5
x = r*np.cos(t)
y = r*np.sin(t)
x+=np.random.random(N)
y+=np.random.random(N)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(8,4))
ax1.plot(x,y, 'c-')
ax2.hist2d(x,y)
</code></pre>
<p><a href="https://i.stack.imgur.com/7XBUp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7XBUp.png" alt="enter image description here"></a></p>
<p>Alternatively, you can use a <a href="https://seaborn.pydata.org/generated/seaborn.kdeplot.html" rel="nofollow noreferrer">2D kernel density estimate plot</a> (provided by seaborn)</p>
<pre><code>import seaborn as sns

fig, (ax1, ax2) = plt.subplots(1,2, figsize=(8,4))
ax1.plot(x,y, 'c-')
sns.kdeplot(x, y, ax=ax2, shade=True, cmap='viridis')
</code></pre>
<p><a href="https://i.stack.imgur.com/CuV6Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CuV6Y.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib|plot
| 0
|
5,990
| 62,441,606
|
ValueError: could not broadcast input array from shape (424,16,3) into shape (128,160,3)
|
<p>I was working with the code <a href="https://github.com/coxlab/prednet/blob/master/process_kitti.py" rel="nofollow noreferrer"><code>process_kitti.py</code></a> by coxlab from GitHub in an Anaconda environment. Some of the functions it uses are deprecated in my Python 3.6 environment. Therefore I changed the following line:</p>
<pre><code>im = imresize(im, (desired_sz[0], int(np.round(target_ds * im.shape[1]))))
</code></pre>
<p>into</p>
<pre><code>from PIL import Image
im = np.array(Image.fromarray(im).resize((desired_sz[0], int(np.round(target_ds * im.shape[1])))))
</code></pre>
<p>Everything else remained the same. </p>
<p>Interestingly, while running this code, I get the following error:</p>
<pre><code>Creating train data: 41396 images
Traceback (most recent call last):
File "process_kitti.py", line 104, in <module>
process_data()
File "process_kitti.py", line 84, in process_data
X[i] = process_im(im, desired_im_sz)
ValueError: could not broadcast input array from shape (424,16,3) into shape (128,160,3)
</code></pre>
<p>I am a bit confused about the cause of this error. Thanks a lot for the help.</p>
|
<p>Do note that the <code>size</code> parameter in the <code>imresize</code> function from <code>scipy</code> is a 2-tuple of <code>(height, width)</code>, while in the <code>Pillow</code> package it is <code>(width, height)</code>, so you might need to reverse the order.</p>
<p>Source: </p>
<p><a href="https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.imresize.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.imresize.html</a></p>
<p><a href="https://pillow.readthedocs.io/en/stable/reference/Image.html" rel="nofollow noreferrer">https://pillow.readthedocs.io/en/stable/reference/Image.html</a></p>
|
python|numpy
| 1
|
5,991
| 62,185,353
|
Fill column with nan if sum of multiple columns is 0
|
<p><strong>Task</strong></p>
<p>I have a <code>df</code> where I do some ratios that are groupby <code>date</code> and <code>id</code>. I want to fill column <code>c</code> with <code>NaN</code> if the sum of <code>a</code> and <code>b</code> is 0. Any help would be awesome!!</p>
<p><strong>df</strong></p>
<pre><code> date id a b c
0 2001-09-06 1 3 1 1
1 2001-09-07 1 3 1 1
2 2001-09-08 1 4 0 1
3 2001-09-09 2 6 0 1
4 2001-09-10 2 0 0 2
5 2001-09-11 1 0 0 2
6 2001-09-12 2 1 1 2
7 2001-09-13 2 0 0 2
8 2001-09-14 1 0 0 2
</code></pre>
|
<p>Try this:</p>
<pre><code>df['new_c'] = df.c.where(df[['a','b']].sum(1).ne(0))
Out[75]:
date id a b c new_c
0 2001-09-06 1 3 1 1 1.0
1 2001-09-07 1 3 1 1 1.0
2 2001-09-08 1 4 0 1 1.0
3 2001-09-09 2 6 0 1 1.0
4 2001-09-10 2 0 0 2 NaN
5 2001-09-11 1 0 0 2 NaN
6 2001-09-12 2 1 1 2 2.0
7 2001-09-13 2 0 0 2 NaN
8 2001-09-14 1 0 0 2 NaN
</code></pre>
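<p><code>where</code> keeps <code>c</code> where the condition holds and inserts <code>NaN</code> elsewhere; an equivalent NumPy sketch on the same frame would be:</p>
<pre><code>import numpy as np

df['new_c'] = np.where(df['a'] + df['b'] != 0, df['c'], np.nan)
</code></pre>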
|
python|pandas|dataframe|fill
| 2
|
5,992
| 51,442,273
|
Assign different values into a new column based on dataframe chunk
|
<p>I have a dataframe:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3,4,5,6,7,8,9,10],'b':[100,100,100,100,100,100,100,100,100,100]})
a b
0 1 100
1 2 100
2 3 100
3 4 100
4 5 100
5 6 100
6 7 100
7 8 100
8 9 100
9 10 100
</code></pre>
<p>I want to create a column <code>c</code>, such that for every chunk of 3 rows it assigns a value.</p>
<p>The output i'm looking for:</p>
<pre><code> a b c
0 1 100 1
1 2 100 1
2 3 100 1
3 4 100 2
4 5 100 2
5 6 100 2
6 7 100 3
7 8 100 3
8 9 100 3
9 10 100 4
</code></pre>
<p>I tried to iterate through the dataframe and then using .loc to assign column values.</p>
<p>Is there any better/quick way of doing this?</p>
|
<p>If your index is a RangeIndex you can use it to create your values for your c column:</p>
<pre><code>df['c'] = df.index // 3 + 1
</code></pre>
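<p>If the index is not a clean <code>RangeIndex</code> (e.g. after filtering rows), a sketch that derives positions explicitly works the same way:</p>
<pre><code>import numpy as np

df['c'] = np.arange(len(df)) // 3 + 1
</code></pre>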
|
python|pandas
| 1
|
5,993
| 51,275,683
|
ImportError: No module named pandas in Zeppelin (EMR)
|
<p>I have an EMR cluster with Spark/Hive/Zeppelin. In my Zeppelin notebook, I tried to import pandas:</p>
<pre><code>import pandas as pd
</code></pre>
<p>But I got this error:</p>
<pre><code>ImportError: No module named pandas
</code></pre>
<p>How can I resolve this issue? Is this because pandas not installed in the EMR?</p>
|
<p>It was a matter of installing pandas on the master node:</p>
<pre><code>sudo pip install pandas
</code></pre>
|
pandas|amazon-emr|apache-zeppelin
| 4
|
5,994
| 51,359,731
|
Tensorflow While loop with Variable Creation
|
<p>Here is the code: </p>
<pre><code>x = tf.Variable(100)
c = tf.constant(2)
n = 100
loops = 50
l1 = tf.Variable(np.random.random(n))
c1 = tf.Variable(np.random.random(n))
x = tf.multiply(c1,tf.exp(-(x-l1)/c))
l2 = tf.Variable(np.random.random(n))
c2 = tf.Variable(np.random.random(n))
x = tf.multiply(c2,tf.exp(-(x-l2)/c))
l3 = tf.Variable(np.random.random(n))
c3 = tf.Variable(np.random.random(n))
x = tf.multiply(c3,tf.exp(-(x-l3)/c))
.....
.....
l50 = tf.Variable(np.random.random(n))
c50 = tf.Variable(np.random.random(n))
x = tf.multiply(c50,tf.exp(-(x-l50)/c))
</code></pre>
<p>So, I want to do that in while loop in tensorflow as:</p>
<pre><code>while loop(i from 1 to 50):
l[i] = tf.Variable(np.random.random(n))
c[i] = tf.Variable(np.random.random(n))
x = tf.multiply(c[i],tf.exp(-(x-l[i])/c))
</code></pre>
<p>How can I achieve this in TensorFlow?
Thank you.</p>
|
<p>Can you feed it like this?</p>
<pre><code>import numpy as np
import tensorflow as tf

l = tf.placeholder(tf.float32, shape=[None, ])
c = tf.placeholder(tf.float32, shape=[None, ])
x = tf.multiply(l, c)  # assume a formula

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(50):
    arr = np.random.random_sample((i,))
    print(arr)
    sess.run(x, feed_dict={l: arr, c: arr})
</code></pre>
<p>Based on your comment, here is another attempt. This isn't an exact answer.</p>
<pre><code>import numpy as np
import tensorflow as tf

lvalues = tf.TensorArray(dtype=tf.float64, size=50)
cvalues = tf.TensorArray(dtype=tf.float64, size=50)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def my_func(x):
    return np.random.random_sample((x,))

input = tf.placeholder(tf.int32)
y = tf.py_func(my_func, [input], tf.float64)

for i in range(50):
    r = sess.run(y, feed_dict={input: i})
    # TensorArray.write returns a new TensorArray, so the result
    # must be kept or the write has no effect
    lvalues = lvalues.write(i, y)
    cvalues = cvalues.write(i, y)
    print(r)
</code></pre>
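<p>If the goal is just to create the 50 variable pairs and chain the multiplies, a plain Python loop at graph-construction time is usually enough; a minimal sketch (assuming TF 1.x, float64 throughout):</p>
<pre><code>import numpy as np
import tensorflow as tf

n = 100
c = tf.constant(2.0, dtype=tf.float64)
x = tf.constant(100.0, dtype=tf.float64)

# the Python loop runs once while building the graph,
# creating the 50 (l, c) variable pairs
for _ in range(50):
    l_i = tf.Variable(np.random.random(n))
    c_i = tf.Variable(np.random.random(n))
    x = tf.multiply(c_i, tf.exp(-(x - l_i) / c))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(x))
</code></pre>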
|
python|tensorflow
| 1
|
5,995
| 51,420,917
|
"Upsampling data" of data in Python
|
<p>I'm newish to Python and brand new to data-science.</p>
<p>I've got a large data set that I've been using supervised machine learning (CART with scikit-learn) to classify. I'm using pandas data-frames, for the most part, to operate on the data. The data looks like this:</p>
<pre><code>| F00 F01 F02 F03 ... C0 |
| ... .. .. ... ... .....|
| FN0 FN1 FN2 FN3... CN |
</code></pre>
<p>where Fij is the j'th feature of the i'th row, and Ck is the true class of that row/instance.</p>
<p>The problem is one of the 6 classes has a much larger proportion of the training samples. I've looked up up-sampling, but this seems to refer to the case of (unsurprisingly) sampling the data randomly as you would do with an extremely large data set.</p>
<p>What I want is to upscale rather than upsample - that is, copy random instances of the minority classes with replacement, adding them into the dataset until the sizes of all classes match.</p>
<p>I've had no luck using pandas to do this so far, I was wondering if you might be able to help?</p>
|
<p>Original asker here:</p>
<p>For anyone interested, I did the following using the imblearn package:</p>
<pre><code>from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN
def organize_data(data, upsample=False, upmethod=None):  # entire organizing, cleaning data function
    ...
    if upsample:
        upsampler = upmethod()
        features, true_class = upsampler.fit_sample(features, true_class)
</code></pre>
<p>Just using RandomOverSampler as a naive approach to expanding my minority classes (as was appropriate with my data).</p>
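<p>For completeness, a pure-pandas sketch of the same naive oversampling (the label column name <code>'C0'</code> is an assumption based on the layout in the question):</p>
<pre><code>import pandas as pd

def upsample_to_majority(df, label_col='C0', random_state=0):
    """Resample each minority class with replacement until every
    class count matches the largest class."""
    counts = df[label_col].value_counts()
    target = counts.max()
    parts = []
    for cls, cnt in counts.items():
        subset = df[df[label_col] == cls]
        if cnt < target:
            extra = subset.sample(n=target - cnt, replace=True,
                                  random_state=random_state)
            subset = pd.concat([subset, extra])
        parts.append(subset)
    return pd.concat(parts).reset_index(drop=True)
</code></pre>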
|
python|pandas|machine-learning|scikit-learn|data-science
| 1
|
5,996
| 48,273,907
|
Pandas- set values to an empty dataframe
|
<p>I have initialized an empty pandas dataframe that I am now trying to fill but I keep running into the same error. This is the (simplified) code I am using</p>
<pre><code>import pandas as pd
cols = list("ABC")
df = pd.DataFrame(columns=cols)
# set the values for the first two rows
df.loc[0:2,:] = [[1,2],[3,4],[5,6]]
</code></pre>
<p>On running the above code I get the following error: </p>
<pre><code>ValueError: cannot copy sequence with size 3 to array axis with dimension 0
</code></pre>
<p>I am not sure what's causing this. I tried the same using a single row at a time and it works (<code>df.loc[0,:] = [1,2,3]</code>). I thought this should be the logical expansion when I want to handle more than one row. But clearly, I am wrong. What's the correct way to do this? I need to enter values for multiple rows and columns at once. I can do it using a loop but that's not what I am looking for.</p>
<p>Any help would be great. Thanks</p>
|
<p>Since you have the columns from the empty dataframe, use them in the DataFrame constructor, i.e. </p>
<pre><code>import numpy as np
import pandas as pd
cols = list("ABC")
df = pd.DataFrame(columns=cols)
df = pd.DataFrame(np.array([[1,2],[3,4],[5,6]]).T,columns=df.columns)
A B C
0 1 3 5
1 2 4 6
</code></pre>
<p>Well, if you want to use <code>loc</code> specifically, reindex the dataframe first and then assign, i.e.</p>
<pre><code>arr = np.array([[1,2],[3,4],[5,6]]).T
df = df.reindex(np.arange(arr.shape[0]))
df.loc[0:arr.shape[0],:] = arr
A B C
0 1 3 5
1 2 4 6
</code></pre>
|
python|pandas
| 5
|
5,997
| 48,112,174
|
TensorFlow Lite: Error converting to .tflite using toco
|
<p>I am trying to convert my TensorFlow frozen model to a tflite model. When I run toco, I get the error message below:</p>
<pre><code>F tensorflow/contrib/lite/toco/graph_transformations/propagate_fixed_sizes.cc:982] Check failed: input_dims.size() == 4 (2 vs. 4)
</code></pre>
<p>Here is how I call toco:</p>
<pre><code>bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_format=TENSORFLOW_GRAPHDEF \
--input_file=/tmp/output_graph.pb \
--output_format=TFLITE \
--output_file=/tmp/my_model.lite \
--inference_type=FLOAT \
--inference_input_type=FLOAT \
--input_arrays=input_layer \
--output_arrays=classes_tensor \
--input_shapes=1,227,227,3
</code></pre>
<p>Here is my terminal printout during the operation: </p>
<pre><code>INFO: Analysed 0 targets (4 packages loaded).
INFO: Found 0 targets...
INFO: Elapsed time: 5.267s, Critical Path: 0.03s
INFO: Build completed successfully, 1 total action
2018-01-05 10:24:23.011483: W tensorflow/contrib/lite/toco/toco_cmdline_flags.cc:178] --input_type is deprecated. It was an ambiguous flag that set both --input_data_types and --inference_input_type. If you are trying to complement the input file with information about the type of input arrays, use --input_data_type. If you are trying to control the quantization/dequantization of real-numbers input arrays in the output file, use --inference_input_type.
2018-01-05 10:24:25.853112: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1122] Converting unsupported operation: IsVariableInitialized
2018-01-05 10:24:25.853197: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1122] Converting unsupported operation: RefSwitch
2018-01-05 10:24:25.853241: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1122] Converting unsupported operation: RandomShuffleQueueV2
2018-01-05 10:24:25.853268: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1122] Converting unsupported operation: QueueDequeueUpToV2
2018-01-05 10:24:26.207160: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 64 operators, 90 arrays (0 quantized)
2018-01-05 10:24:27.327055: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 15 operators, 33 arrays (0 quantized)
2018-01-05 10:24:27.327262: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 15 operators, 34 arrays (0 quantized)
2018-01-05 10:24:27.327356: F tensorflow/contrib/lite/toco/graph_transformations/propagate_fixed_sizes.cc:982] Check failed: input_dims.size() == 4 (2 vs. 4)
/home/olu/Dev/scratch_train_sign/freeze_graph_tf.sh: line 28: 8881 Aborted
</code></pre>
<p>I went into the propagate_fixed_sizes.cc file, and around line 982 I found the comment below:</p>
<pre><code>// The current ArgMax implementation only supports 4-dimensional inputs with
// the last dimension as the axis to perform ArgMax for.
</code></pre>
<p>The only place in my training code where I used ArgMax is as below:</p>
<pre><code> predictions = { "classes": tf.argmax(input=logits, axis=1, name="classes_tensor"), "probabilities": tf.nn.softmax(logits, name="softmax_tensor") }
</code></pre>
<p>Do you know what the solution to this is? Any help will be much appreciated.</p>
|
<p>After raising this as an issue on the TensorFlow bug tracker on GitHub, the answer boiled down to the fact that TFLite for now doesn't completely support ArgMax. <a href="https://github.com/tensorflow/tensorflow/issues/15948" rel="nofollow noreferrer">Link</a></p>
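<p>A common workaround (an assumption on my part, not from the linked issue) is to convert the graph only up to the softmax output, i.e. pass <code>--output_arrays=softmax_tensor</code> to toco instead of <code>classes_tensor</code>, and recover the class index on the host after inference:</p>
<pre><code>import numpy as np

# stand-in for the interpreter's softmax output tensor
probabilities = np.array([[0.1, 0.7, 0.2]])
predicted_class = int(np.argmax(probabilities, axis=1)[0])
print(predicted_class)  # 1
</code></pre>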
|
tensorflow|tensorflow-serving|tensorflow-lite
| 2
|
5,998
| 48,038,417
|
keras stops working on first epoch
|
<p>I am running an image classification model. This is where I got stuck. I tried downgrading the keras version to 1.0.2 and running the script again, but that didn't work.</p>
<p>The Jupyter notebook just keeps processing and doesn't run anything after the first epoch starts. I am running the code on keras 1.2 with python 3.5.</p>
<p>OUTPUT:</p>
<pre><code>/anaconda/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:19: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(101, activation="softmax", kernel_initializer="glorot_uniform", kernel_regularizer=<keras.reg...)`
/anaconda/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:21: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=Tensor("de..., inputs=Tensor("in...)`
/anaconda/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:44: UserWarning: The semantics of the Keras 2 argument `steps_per_epoch` is not the same as the Keras 1 argument `samples_per_epoch`. `steps_per_epoch` is the number of batches to draw from the generator at each epoch. Basically steps_per_epoch = samples_per_epoch/batch_size. Similarly `nb_val_samples`->`validation_steps` and `val_samples`->`steps` arguments have changed. Update your method calls accordingly.
/anaconda/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:44: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<image_gen..., verbose=2, epochs=32, validation_steps=25250, validation_data=<image_gen..., steps_per_epoch=1183, callbacks=[<keras.ca...)`
Epoch 1/32
</code></pre>
<p>INPUT:</p>
<pre><code>%%time
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D, GlobalAveragePooling2D, AveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, CSVLogger, LearningRateScheduler, ReduceLROnPlateau
from keras.optimizers import SGD
from keras.regularizers import l2
import keras.backend as K
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input
import math
K.clear_session()
base_model = InceptionV3(weights='imagenet', include_top=False, input_tensor=Input(shape=(299, 299, 3)))
x = base_model.output
x = AveragePooling2D(pool_size=(8, 8))(x)
x = Dropout(.4)(x)
x = Flatten()(x)
predictions = Dense(n_classes, kernel_initializer='glorot_uniform', W_regularizer=l2(.0005), activation='softmax')(x)
model = Model(input=base_model.input, output=predictions)
opt = SGD(lr=.01, momentum=.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='model4.{epoch:02d}-{val_loss:.2f}.hdf5', verbose=1, save_best_only=True)
csv_logger = CSVLogger('model4.log')
def schedule(epoch):
    if epoch < 15:
        return .01
    elif epoch < 28:
        return .002
    else:
        return .0004
lr_scheduler = LearningRateScheduler(schedule)
model.fit_generator(train_generator,
                    validation_data=test_generator,
                    nb_val_samples=X_test.shape[0],
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=32,
                    verbose=2,
                    callbacks=[lr_scheduler, csv_logger, checkpointer])
</code></pre>
|
<p>Try with <code>verbose = 1</code> in your <code>fit_generator</code> call; it will print the progress bar. It is probably working, but due to the value of 2 given to the verbose parameter, it will only print one line of output AFTER the epoch has ended, which might take some time depending on your CPU/GPU and quantity of data.</p>
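<p>In this run the warning shows <code>steps_per_epoch=1183</code>, so with <code>verbose=2</code> nothing is printed until all 1183 batches have finished. A sketch of the change, keeping the same (Keras 1 style) arguments as the question:</p>
<pre><code>model.fit_generator(train_generator,
                    validation_data=test_generator,
                    nb_val_samples=X_test.shape[0],
                    samples_per_epoch=X_train.shape[0],
                    nb_epoch=32,
                    verbose=1,  # per-batch progress bar
                    callbacks=[lr_scheduler, csv_logger, checkpointer])
</code></pre>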
|
tensorflow|keras
| 4
|
5,999
| 48,389,563
|
Simple Conv1D as first layer in keras
|
<p>Here is my input</p>
<pre><code>x_train.shape # (12, 7) 12 observations each of length 7
x_train # dtype('int32')
</code></pre>
<p>Here's the architecture I'd like to achieve:</p>
<p><a href="https://i.stack.imgur.com/RgbTB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RgbTB.jpg" alt="enter image description here"></a></p>
<p>I'd like a kernel of size 3 convolved over the sequence. From keras documentation at <a href="https://keras.io/layers/convolutional/" rel="nofollow noreferrer">https://keras.io/layers/convolutional/</a></p>
<p>"When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None, e.g. (10, 128) for sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for variable-length sequences of 128-dimensional vectors."</p>
<p>Honestly I'm having a hard time understanding their logic. Here's my attempt</p>
<pre><code>from keras.layers import Input, Conv1D, Dense
from keras.models import Model

docs_sequence = Input(shape=(7,), dtype='float32') # Longest document is 7 words
convolution = Conv1D(filters = 1, # only 1 convolution
kernel_size = 3, # tri grams
strides = 1,
input_shape = (1, 7),
padding = 'valid',
activation = 'relu')(docs_sequence)
output = Dense(1, activation='sigmoid')(convolution)
cnn_model = Model(inputs = docs_sequence, outputs = [output])
cnn_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>and I'm consistently getting </p>
<p>ValueError: Input 0 is incompatible with layer conv1d_30: expected ndim=3, found ndim=2</p>
|
<p>As the error message says, your input is two dimensional while the convolutional layer expects a three dimensional input. </p>
<p>With the following</p>
<pre><code>docs_sequence = Input(shape=(7,1), ...
</code></pre>
<p>instead of </p>
<pre><code>docs_sequence = Input(shape=(7,), ...
</code></pre>
<p>Keras accepts the model. Basically this adds a dimension of size one to the input (the three dimensions from the error message include the minibatch dimension, which one can think of as being prepended to the <code>shape</code> argument above).</p>
<p><code>cnn_model.summary()</code> then gives:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 7, 1) 0
_________________________________________________________________
conv1d_1 (Conv1D) (None, 5, 1) 4
_________________________________________________________________
dense_1 (Dense) (None, 5, 1) 2
=================================================================
</code></pre>
<p>When preparing the actual input data, you may have to add this dimension of size one to your input data. You may want to use <code>numpy.atleast_2d()</code> or <code>numpy.atleast_3d()</code> for this, possibly combined with taking the transpose, or use <code>numpy.expand_dims()</code>. </p>
<p>In your case, <code>np.atleast_3d(x_train)</code> gives a shape of <code>(12, 7, 1)</code>.</p>
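<p>Putting it together, a minimal runnable sketch of the corrected model, with synthetic data standing in for <code>x_train</code> and a <code>Flatten</code> added so the sigmoid produces one score per document (a choice not spelled out in the question):</p>
<pre><code>import numpy as np
from keras.layers import Input, Conv1D, Dense, Flatten
from keras.models import Model

x_train = np.atleast_3d(np.random.randint(0, 5, (12, 7))).astype('float32')
y_train = np.random.randint(0, 2, (12, 1))

docs_sequence = Input(shape=(7, 1))
convolution = Conv1D(filters=1, kernel_size=3, activation='relu')(docs_sequence)
flat = Flatten()(convolution)  # collapse (5, 1) before the final Dense
output = Dense(1, activation='sigmoid')(flat)

cnn_model = Model(inputs=docs_sequence, outputs=output)
cnn_model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
cnn_model.fit(x_train, y_train, epochs=1, verbose=0)
</code></pre>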
|
python|tensorflow|keras
| 5
|