Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
2,900
| 25,190,349
|
Non-reducing variant of the ANY() function that respects NaN
|
<p>Hard to explain in words, but the example should make it clear:</p>
<pre><code>df = DataFrame( { 'x':[0,1], 'y':[np.NaN,0], 'z':[0,np.NaN] }, index=['a','b'] )
x y z
a 0 NaN 0
b 1 0 NaN
</code></pre>
<p>I want to replace all non-NaN values with a '1', if there is a '1' anywhere in that row. Just like this:</p>
<pre><code> x y z
a 0 NaN 0
b 1 1 NaN
</code></pre>
<p>This sort of works, but unfortunately overwrites the NaN</p>
<p><code>df[ df.any(1) ] = 1</code></p>
<pre><code> x y z
a 0 NaN 0
b 1 1 1
</code></pre>
<p>I thought there might be some non-reducing form of any (like cumsum is a non-reducing form of sum), but I can't find anything like that so far...</p>
|
<p>You could combine a multiplication by zero (to give a frame of zeros that still remembers the NaN locations) with an <code>add</code> on <code>axis=0</code>:</p>
<pre><code>>>> df
x y z
a 0 NaN 0
b 1 0 NaN
>>> (df * 0).add(df.any(1), axis=0)
x y z
a 0 NaN 0
b 1 1 NaN
</code></pre>
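<p>An alternative sketch, not from the original answer, just a possibility worth checking: <code>DataFrame.where</code> can keep the NaN in place while broadcasting the row-wise <code>any</code> result:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [0, 1], 'y': [np.nan, 0], 'z': [0, np.nan]}, index=['a', 'b'])
# Where df is NaN keep the original value (NaN); elsewhere broadcast any() down the rows.
out = df.where(df.isna(), df.any(axis=1).astype(int), axis=0)
</code></pre>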
|
python|pandas
| 1
|
2,901
| 39,022,027
|
Address of last value in 1d NumPy array
|
<p>I have a 1d array with zeros scattered throughout. I'd like to create a second array which contains the position of the last zero, like so:</p>
<pre><code>>>> a = np.array([1, 0, 3, 2, 0, 3, 5, 8, 0, 7, 12])
>>> foo(a)
[0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3]
</code></pre>
<p>Is there a built-in NumPy function or broadcasting trick to do this without using a for loop or other iterator?</p>
|
<pre><code>>>> (a == 0).cumsum()
array([0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3])
</code></pre>
|
python|arrays|numpy
| 8
|
2,902
| 39,110,507
|
Call a C++ function from Python and convert a OpenCV Mat to a Numpy array
|
<p><strong>Background situation</strong></p>
<p>I'm trying to use the OpenCV Stitching module via the Python bindings, but I'm getting an error:</p>
<pre><code>import cv2
stitcher = cv2.createStitcher(False)
imageL = cv2.imread("imageL.jpg")
imageC = cv2.imread("imageC.jpg")
imageR = cv2.imread("imageR.jpg")
stitcher.stitch((imageL, imageC))
</code></pre>
<blockquote>
<p>error: /home/user/OpenCV3.1.0/opencv/modules/python/src2/cv2.cpp:163: error: (-215) The data should normally be NULL! in function allocate</p>
</blockquote>
<p>Others who have hit the same issue:</p>
<ul>
<li><a href="https://stackoverflow.com/a/36646256/1253729">https://stackoverflow.com/a/36646256/1253729</a></li>
<li><a href="https://stackoverflow.com/q/38914916/1253729">How to stitch images from a UAV using opencv python with Stitcher class</a></li>
<li><a href="https://github.com/opencv/opencv/issues/6969" rel="noreferrer">https://github.com/opencv/opencv/issues/6969</a></li>
</ul>
<p><strong>The problem at hand</strong> </p>
<p>So I decided to use an official C++ OpenCV stitching example and call it from Python using Boost.Python. However, I'm still unable to figure out how to properly use Boost.Python + <a href="https://github.com/spillai/numpy-opencv-converter" rel="noreferrer">numpy-opencv-converter</a> to handle the C++ Mat vs Numpy array conversion.</p>
<p><strong>How do I call the numpy-opencv-converter?</strong> I've only got Boost.Python in place, and when running my Python function to call the C++ code I got this (expected) outcome:</p>
<pre><code>$ python python_caller.py
Traceback (most recent call last):
File "python_caller.py", line 10, in <module>
visualize(A)
Boost.Python.ArgumentError: Python argument types in
testing.visualize(numpy.ndarray)
did not match C++ signature:
visualize(cv::Mat)
</code></pre>
<p>Thanks.</p>
<p>PS: I'm on Ubuntu 14.04, Python 2.7.4, using OpenCV 3.1.0 compiled from source and inside a virtualenv.</p>
<hr>
<p>These are the files I'm using.</p>
<p><strong>testing.cpp:</strong></p>
<pre><code>#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <boost/python.hpp>
using namespace cv;
int main(){}
Mat visualize(const cv::Mat input_image)
{
cv::Mat image;
image = input_image;
namedWindow("Display Image", WINDOW_AUTOSIZE );
imshow("Display Image", image);
waitKey(0);
return image;
}
using namespace boost::python;
BOOST_PYTHON_MODULE(testing) // file name
{
def("visualize", visualize); //function name
}
</code></pre>
<p><strong>python_caller.py:</strong></p>
<pre><code>import cv2
import numpy as np
from testing import visualize
A = cv2.imread("imageL.jpg")
visualize(A)
</code></pre>
<p><strong>Makefile:</strong></p>
<pre><code>CFLAGS=`pkg-config --cflags opencv`
LDFLAGS=`pkg-config --libs opencv`
testing.so: testing.o
g++ -shared -Wl,--export-dynamic -o testing.so testing.o -L/usr/lib -lboost_python -L/usr/lib/python2.7/config -lpython2.7 -L/usr/lib/x86_64-linux-gnu/ -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_ocl -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videostab
testing.o: testing.cpp
g++ -I/usr/include/python2.7 -I/usr/include -fPIC -c testing.cpp
</code></pre>
|
<p>You need to convert between the Python NDArray and the C++ cv::Mat. I can recommend this <a href="https://github.com/Algomorph/pyboostcvconverter" rel="noreferrer">GitHub repo</a>. It contains an example that should fit your needs. I am using the converter on Ubuntu 15.10 with Python 2.7/3.4 and OpenCV 3.1.</p>
|
python|c++|opencv|numpy|boost
| 7
|
2,903
| 29,298,757
|
Find Two Sets of Python Numpy Arrays on Common Column
|
<p>I'm trying to merge some data and I have the following two 2d numpy arrays (<strong>a</strong> and <strong>b</strong>)</p>
<pre><code>a = [[ 10 9.689474368e-04][ 20 6.88780375e-04]
[ 30 4.296339997e-04][ 40 -1.06232578e-03]
[ 50 -1.219884414e-03][ 60 -1.27936723e-03]]
b = [[ 30 6.687897368e-04][ 40 2.887890375e-04]
[ 50 3.293467897e-04][ 60 -8.067893578e-03]
[ 70 -7.213988414e-03][ 80 -8.278967323e-03]]
</code></pre>
<p>I would like to get 2 new 2d numpy arrays (<strong>c</strong> and <strong>d</strong>) where the first cols of <strong>a</strong> and <strong>b</strong> match like the following;</p>
<pre><code>c = [[ 30 4.296339997e-04][ 40 -1.06232578e-03]
[ 50 -1.219884414e-03][ 60 -1.27936723e-03]]
d = [[ 30 6.687897368e-04][ 40 2.887890375e-04]
[ 50 3.293467897e-04][ 60 -8.067893578e-03]]
</code></pre>
<p>Does anybody know of an optimised way of doing so?</p>
<p>I've tried simply looping through each item, however it's not fast enough, and I know it can be solved with a much more elegant approach. </p>
<p>I'm playing around with the following solution. It's much faster but not sure if it is the correct approach.</p>
<pre><code>aHash = map(tuple, a)
bHash = map(tuple, b)
aKey = {x[:1] for x in aHash}
bKey = {x[:1] for x in bHash}
c = np.array([x for x in bHash if x[:1] in aKey])
d = np.array([x for x in aHash if x[:1] in bKey])
</code></pre>
<p>Thanks</p>
|
<p>Here is a solution I would expect to be quite fast, especially on presorted data.</p>
<pre><code>import numpy as np

a = np.array([[20,  6.88780375e-04],
              [30,  4.296339997e-04], [40, -1.06232578e-03],
              [50, -1.219884414e-03], [60, -1.27936723e-03], [10, 9.689474368e-04]])
b = np.array([[30,  6.687897368e-04], [40,  2.887890375e-04],
              [50,  3.293467897e-04], [60, -8.067893578e-03],
              [70, -7.213988414e-03], [80, -8.278967323e-03]])

# Sort the rows by the key column while keeping each row intact
# (note: a.sort(axis=0) would sort every column independently and scramble the pairs).
a = a[a[:, 0].argsort()]
b = b[b[:, 0].argsort()]

def merge(a, b):
    c = []
    d = []
    ai = 0
    bi = 0
    while ai < len(a) and bi < len(b):
        av = a[ai]
        bv = b[bi]
        if av[0] == bv[0]:      # shared key: keep both rows
            c.append(av)
            d.append(bv)
            ai += 1
        elif av[0] < bv[0]:     # a is behind: advance a
            ai += 1
        else:                   # b is behind: advance b
            bi += 1
    return np.array(c), np.array(d)

print(merge(a, b))
</code></pre>
<p>Here is a comparison to the only other currently posted method. This uses the original array slightly unsorted (I wanted to apply some penalty to the sorting method)</p>
<pre><code>Full tests done 100,000 times
while_loop_method = 3.19426544412 sec
hash_map_method = 3.89232874699 sec
</code></pre>
<p>Here is a smaller scale comparison on a shuffled array 1000 times larger.</p>
<pre><code>Full tests done 1,000 times
while_loop_method = 24.1850584226
hash_map_method = 25.9077035996
</code></pre>
<p>My method appears to scale up fairly well, but it's not nearly as efficient on unsorted large arrays. I would expect my appending to the list to be the main culprit.</p>
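<p>For completeness, a fully vectorized sketch built on the <code>np.intersect1d</code> idea from the question (it assumes the keys in the first column are unique within each array; use <code>np.in1d</code> instead of <code>np.isin</code> on NumPy older than 1.13):</p>
<pre><code>common = np.intersect1d(a[:, 0], b[:, 0])   # keys present in both first columns
c = a[np.isin(a[:, 0], common)]             # rows of a whose key is shared
d = b[np.isin(b[:, 0], common)]             # rows of b whose key is shared
</code></pre>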
|
python|arrays|numpy|merge
| 1
|
2,904
| 22,554,116
|
How to use a list of values to select rows from a pandas Dataframe in specific order
|
<p>If I have a pandas Dataframe like this:</p>
<pre><code>>>> df = DataFrame({'A' : [5,6,3,4], 'B' : [1,2,3, 5]})
>>> df
A B
0 5 1
1 6 2
2 3 3
3 4 5
</code></pre>
<p>And I want to select rows by a list of values in a specific order. It may look like this:</p>
<pre><code>>>> df[df['A'].select_keeping_order([3, 4, 5])]
>>>
A B
2 3 3
3 4 5
0 5 1
</code></pre>
<p>I know there's a method called <code>isin</code>. But it selects the values in original order instead of "arguments order".</p>
<p>How can I do it?</p>
|
<pre><code>In [134]: df = DataFrame({'A' : [5,6,3,4], 'B' : [1,2,3, 5]}, index=list('abcd'))
In [135]: df
Out[135]:
A B
a 5 1
b 6 2
c 3 3
d 4 5
[4 rows x 2 columns]
In [138]: idx = pd.Index(df['A']).get_indexer([3,4,5]); idx
Out[138]: array([2, 3, 0])
In [136]: df.iloc[idx]
Out[136]:
A B
c 3 3
d 4 5
a 5 1
[3 rows x 2 columns]
</code></pre>
<p>I changed <code>df.index</code> to make it clear that this method does not rely on the index being integers in incremental order.</p>
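<p>A shorter sketch of the same idea, assuming the values in <code>A</code> are unique: make <code>A</code> the index and select by label order (note this discards the original index):</p>
<pre><code>out = df.set_index('A').loc[[3, 4, 5]].reset_index()
#    A  B
# 0  3  3
# 1  4  5
# 2  5  1
</code></pre>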
|
python|pandas
| 6
|
2,905
| 13,478,597
|
Arithmetic on date series (not an index) in Pandas
|
<p>(Python 2.7, Pandas 0.9)</p>
<p>This seems like a simple thing to do, but I can't figure out how to calculate the difference between two date columns in a dataframe using Pandas. This dataframe already has an index, so making either column into a DateTimeIndex is not desirable. </p>
<p>To convert each date column from strings I used:</p>
<pre><code>data.Date_Column = pd.to_datetime(data.Date_Column)
</code></pre>
<p>From there, to get elapsed time between 2 columns, I do:</p>
<pre><code>data.Closed_Date - data.Created_Date
</code></pre>
<p>which returns an error:</p>
<pre><code>TypeError: %d format: a number is required, not a numpy.timedelta64
</code></pre>
<p>Checking dtypes on both columns yields datetime64[ns] and the individual dates in the array are type timestamp.</p>
<p>What am I missing?</p>
<p>EDIT:</p>
<p>Here's an example where I can create separate DateTimeIndex objects and accomplish what I want, but when I try to do it in the context of a dataframe, it fails.</p>
<pre><code>Created_Date = pd.DatetimeIndex(data['Created_Date'], copy=True)
Closed_Date = pd.DatetimeIndex(data['Closed_Date'], copy=True)
Closed_Date.day - Created_Date.day
[Out] array([ -3, -16, 5, ..., 0, 0, 0])
</code></pre>
<p>Now the same but in a dataframe:</p>
<pre><code>data.Created_Date = pd.DatetimeIndex(data['Created_Date'], copy=True)
data.Closed_Date = pd.DatetimeIndex(data.Closed_Date, copy=True)
data.Created_Date.day - data.Created_Date.day
AttributeError: 'Series' object has no attribute 'day'
</code></pre>
<p>Here's some of the data if you want to play around with it:</p>
<pre><code>data['Created Date'][0:10].to_dict()
{0: '1/1/2009 0:00',
1: '1/1/2009 0:00',
2: '1/1/2009 0:00',
3: '1/1/2009 0:00',
4: '1/1/2009 0:00',
5: '1/1/2009 0:00',
6: '1/1/2009 0:00',
7: '1/1/2009 0:00',
8: '1/1/2009 0:00',
9: '1/1/2009 0:00'}
data['Closed Date'][0:10].to_dict()
{0: '1/7/2009 0:00',
1: nan,
2: '1/1/2009 0:00',
3: '1/1/2009 0:00',
4: '1/1/2009 0:00',
5: '1/12/2009 0:00',
6: '1/12/2009 0:00',
7: '1/7/2009 0:00',
8: '1/10/2009 0:00',
9: '1/7/2009 0:00'}
</code></pre>
|
<p>Update: A useful workaround is to just smash this with the DatetimeIndex constructor (which is usually much faster than an apply), for example:</p>
<pre><code>DatetimeIndex(df['Created_Date']).day
</code></pre>
<p>In 0.15 this will be available in the <code>dt</code> attribute (along with other datetime methods):</p>
<pre><code>df['Created_Date'].dt.day
</code></pre>
<hr>
<p>Your error comes from the syntax; although one might hope it would work, it doesn't:</p>
<pre><code>data.Created_Date.day - data.Created_Date.day
AttributeError: 'Series' object has no attribute 'day'
</code></pre>
<p>With more complicated selections like this one you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html?highlight=apply#pandas.DataFrame.apply" rel="noreferrer"><code>apply</code></a>:</p>
<pre><code>In [111]: df['sub'] = df.apply(lambda x: x['Created_Date'].day - x['Closed_Date'].day, axis=1)
In [112]: df[['Created_Date','Closed_Date','sub']]
Out[112]:
Created_Date Closed_Date sub
0 2009-01-07 00:00:00 2009-01-01 00:00:00 6
1 NaT 2009-01-01 00:00:00 9
2 2009-01-01 00:00:00 2009-01-01 00:00:00 0
3 2009-01-01 00:00:00 2009-01-01 00:00:00 0
4 2009-01-01 00:00:00 2009-01-01 00:00:00 0
5 2009-01-12 00:00:00 2009-01-01 00:00:00 11
6 2009-01-12 00:00:00 2009-01-01 00:00:00 11
7 2009-01-07 00:00:00 2009-01-01 00:00:00 6
8 2009-01-10 00:00:00 2009-01-01 00:00:00 9
9 2009-01-07 00:00:00 2009-01-01 00:00:00 6
</code></pre>
<p><strong>Be wary</strong>: you probably ought to do something separately with these <code>NaT</code>s:</p>
<pre><code>In [114]: df.ix[1][1].day # NaT.day
Out[114]: -1
</code></pre>
<p><em>Note: there is similarly strange behaviour using <code>.days</code> on a timedelta with <code>NaT</code>:</em></p>
<pre><code>In [115]: df['sub2'] = df.apply(lambda x: (x['a'] - x['b']).days, axis=1)
In [116]: df['sub2'][1]
Out[116]: 92505
</code></pre>
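<p>In recent pandas versions the direct vectorized subtraction simply works and handles <code>NaT</code> gracefully, so as a hedged sketch:</p>
<pre><code>delta = data.Closed_Date - data.Created_Date   # timedelta64[ns] Series, NaT where a side is missing
days = delta.dt.days                           # float Series with NaN for the NaT rows
</code></pre>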
|
python|pandas
| 6
|
2,906
| 29,620,694
|
Matlab freqz function in Python
|
<p>I am trying to implement a Python equivalent for the Matlab frequency response function</p>
<pre><code>[h,f] = freqz(b, 1, 512, 12.5)
</code></pre>
<p>described in <a href="http://se.mathworks.com/help/signal/ug/frequency-response.html" rel="nofollow">here</a>. My current attempt</p>
<pre><code>f, h = scipy.signal.freqz(b, 1)
</code></pre>
<p>does not give the intended result. Trying the parameters <code>worN</code> and <code>whole</code> (see <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.freqz.html" rel="nofollow">here</a>) do not seem to fix the issue.</p>
<p>How should it be done?</p>
<p><strong>Edit:</strong></p>
<p>Matlab example:</p>
<pre><code>>> [h, f] = freqz(1:5, 1, 512, 12.5)
h =
15.0000 + 0.0000i
14.9976 - 0.2454i
14.9902 - 0.4907i
14.9780 - 0.7358i
14.9609 - 0.9806i
14.9389 - 1.2250i
...
f =
0
0.0122
0.0244
0.0366
0.0488
0.0610
...
</code></pre>
<p>Python example:</p>
<pre><code>>>> f, h = scipy.signal.freqz(range(1,6), 1)
>>> h
array([ 15.00000000 +0.j , 14.99755288 -0.24541945j,
14.99021268 -0.49073403j, 14.97798292 -0.73583892j,
14.96086947 -0.98062944j, 14.93888050 -1.22500102j,
...])
>>> f
array([ 0. , 0.00613592, 0.01227185, 0.01840777, 0.02454369,
0.03067962, ...])
</code></pre>
<p>In other words, the Scipy function gives good values for <code>h</code>, but the values of <code>f</code> do not match.</p>
|
<p>In both languages <code>freqz</code> expects numerator coefficients <code>b</code> for the first argument, not <code>a</code> like you wrote. Should be</p>
<p><code>freqz(b, a, ...)</code></p>
<p>Looks like you are trying to find the response of an FIR filter, for which there are only numerator coefficients and <code>a</code> is always 1.</p>
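<p>Regarding the mismatched <code>f</code> values shown in the edit, a sketch of the scaling: Matlab's <code>freqz(b, 1, 512, 12.5)</code> returns frequencies in Hz for the sampling rate 12.5, while <code>scipy.signal.freqz</code> returns angular frequencies in radians/sample, so the two differ by a factor of <code>fs / (2*pi)</code>:</p>
<pre><code>import numpy as np
import scipy.signal

fs = 12.5
w, h = scipy.signal.freqz(range(1, 6), 1, worN=512)
f = w * fs / (2 * np.pi)   # 0, 0.0122, 0.0244, ... matching the Matlab output
</code></pre>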
|
python|matlab|numpy|scipy|signal-processing
| 1
|
2,907
| 29,352,705
|
python pandas TimeStamps to local time string with daylight saving
|
<p>I have a dataframe with a TimeStamps column. I want to convert it to strings of local time, i.e. with daylight saving. </p>
<p>So I want to convert ts[0] below to "2015-03-30 <strong>03</strong>:55:05". Pandas seems to be aware of DST, but only when you call .values on the series.</p>
<p>Thanks</p>
<pre><code>(Pdb) ts = df['TimeStamps']
(Pdb) ts
0 2015-03-30 02:55:05.993000
1 2015-03-30 03:10:20.937000
2 2015-03-30 10:09:19.947000
Name: TimeStamps, dtype: datetime64[ns]
(Pdb) ts[0]
Timestamp('2015-03-30 02:55:05.993000')
(Pdb) ts.values
array(['2015-03-30T03:55:05.993000000+0100',
'2015-03-30T04:10:20.937000000+0100',
'2015-03-30T11:09:19.947000000+0100'], dtype='datetime64[ns]')
</code></pre>
|
<p>DST is relative to your location (e.g. London DST began a few weeks after NY). You first need to make the timestamp timezone aware: </p>
<pre><code>import datetime as dt
import pandas as pd
from pytz import UTC, timezone

ts = pd.Timestamp(dt.datetime(2015, 3, 31, 15, 47, 25, 901597))
# or...
ts = pd.Timestamp('2015-03-31 15:47:25.901597')

# ts is a Timestamp, but it has no idea where in the world it is...
>>> ts.tzinfo is None
True

# So the timestamp needs to be localized. Assuming it was originally a UTC timestamp, it can be localized to UTC.
ts_utc = ts.tz_localize(UTC)

# Once localized, it can be expressed in other timezone regions, e.g.:
eastern = timezone('US/Eastern')
ts_eastern = ts_utc.astimezone(eastern)

# And to convert it to an ISO string of local time (e.g. eastern):
>>> ts_eastern.isoformat()
'2015-03-31T11:47:25.901597-04:00'
</code></pre>
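<p>For the whole column at once, a hedged sketch (assuming the stored times are UTC and, given the +0100 offsets shown above, that the local zone is Europe/London):</p>
<pre><code>local = df['TimeStamps'].dt.tz_localize('UTC').dt.tz_convert('Europe/London')
df['LocalTime'] = local.dt.strftime('%Y-%m-%d %H:%M:%S')   # e.g. '2015-03-30 03:55:05'
</code></pre>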
<p>See <a href="http://pytz.sourceforge.net" rel="noreferrer">pytz</a> or <a href="https://docs.python.org/3/library/datetime.html" rel="noreferrer">datetime</a> for more information.</p>
|
python|pandas|timestamp|dst
| 13
|
2,908
| 62,387,610
|
Common values between multiple dataframes with different length
|
<p>I have 3 huge dataframes that have different numbers of values.</p>
<p>For example:</p>
<pre><code>A B C
2981 2952 1287
2759 2295 2952
1284 2235 1284
1295 1928 0887
2295 1284 1966
1567 1928
1287 2374
2846
2578
</code></pre>
<p>I want to find the common values between the three columns like this</p>
<pre><code>A B C Common
2981 2952 1287 1284
2759 2295 2952 2295
1284 2235 1284
1295 1928 0887
2295 1284 1966
1567 2295
1287 2374
2846
2578
</code></pre>
<p>I tried (from <a href="https://stackoverflow.com/questions/46556169/finding-common-elements-between-multiple-dataframe-columns">here</a>)</p>
<pre><code>df1['Common'] = np.intersect1d(df1.A, np.intersect1d(df2.B, df3.C))
</code></pre>
<p>but I get this error, <code>ValueError: Length of values does not match length of index</code></p>
|
<p>The idea is to create a <code>Series</code> whose index is trimmed to the length of the array:</p>
<pre><code>a = np.intersect1d(df1.A, np.intersect1d(df2.B, df3.C))
df1['Common'] = pd.Series(a, index=df1.index[:len(a)])
</code></pre>
<p>If same DataFrame:</p>
<pre><code>a = np.intersect1d(df1.A, np.intersect1d(df1.B, df1.C))
df1['Common'] = pd.Series(a, index=df1.index[:len(a)])
print (df1)
A B C Common
0 2981.0 2952.0 1287 1284.0
1 2759.0 2295.0 2952 2295.0
2 1284.0 2235.0 1284 NaN
3 1295.0 1928.0 887 NaN
4 2295.0 1284.0 1966 NaN
5 NaN 1567.0 2295 NaN
6 NaN 1287.0 2374 NaN
7 NaN NaN 2846 NaN
8 NaN NaN 2578 NaN
</code></pre>
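<p>As a side note, a sketch that generalizes the nested <code>np.intersect1d</code> calls to any number of columns:</p>
<pre><code>from functools import reduce

a = reduce(np.intersect1d, [df1.A, df2.B, df3.C])
df1['Common'] = pd.Series(a, index=df1.index[:len(a)])
</code></pre>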
|
python|python-3.x|pandas
| 2
|
2,909
| 62,411,460
|
Python Conditional Statement
|
<p>Let's say I have 3 columns. They are 'Word', 'Word Count', and 'Positive'. The column 'Positive' is categorical by year. I need to find the most frequent words that are categorized by 'Positive'. When I use this code:</p>
<pre class="lang-py prettyprint-override"><code>df.sort_values(by=['Positive', 'Word Count', 'Word'], ascending=False, axis=0).head(5)[['Word', 'Word Count', 'Positive']]
</code></pre>
<p>it gives me this output:</p>
<pre><code>Word Word Count Positive
BEST 2654899 2012
INNOVATIVENESS 541 2011
EFFECTIVE 16420419 2009
BENEFIT 9902500 2009
ABLE 4090099 2009
</code></pre>
<p>As you can see it takes in to account the years before the Word Count. If I switch them, then I just get the most frequent words overall. My solution to this is to subset the 'Positive' column, by only taking into account values >=0 and then sort by Word Count. My problem is being able to subset the Positive column without making it into a boolean, and then being able to put it into my function.</p>
|
<p>I can't easily provide a tested example without a sample of your data structure, but I think what you're looking for is a combination of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>pd.groupby()</code></a> to group everything by year, and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mode.html" rel="nofollow noreferrer"><code>pd.Series.mode</code></a> to find the most frequent value, or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.max.html" rel="nofollow noreferrer"><code>.max()</code></a> if you are trying to find the largest number in the <code>Word Count</code> column.
It might look something like: </p>
<pre><code>df.groupby('Positive').max()
</code></pre>
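<p>A hedged sketch of that idea, with the column names taken from the question: sort by count once, then keep the top row per year.</p>
<pre><code>top_words = (df.sort_values('Word Count', ascending=False)
               .groupby('Positive')
               .head(1))   # the single highest-count word for each year
</code></pre>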
|
python|pandas|dataframe|subset
| 0
|
2,910
| 62,156,152
|
Why do i get different accuracies on sparse_categorical_accuracy and val_sparse_categorical_accuracy when i pass in the same data
|
<p>I used the same dataset for training and validating my model, and yet I get different training and validation accuracy/loss. Shouldn't the accuracy/loss be the same since I'm using the same data?</p>
<p>Here is the code:</p>
<pre><code>def create_model(dataset):
model = tf.keras.models.Sequential([tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', ),
tf.keras.layers.MaxPool2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPool2D(2, 2),
tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
tf.keras.layers.MaxPool2D(2, 2),
tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
tf.keras.layers.MaxPool2D(2, 2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(2, activation='softmax')])
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
model.fit(dataset, validation_data=dataset)
return model
</code></pre>
<p>I get out of this:</p>
<pre><code>100/100 [==============================] - 178s 2s/step - loss: 0.6487 - sparse_categorical_accuracy: 0.6212 - val_loss: 0.5866 - val_sparse_categorical_accuracy: 0.7001
</code></pre>
<p>Note that I only went through one epoch.</p>
|
<p>This is because <code>Dropout</code> layers are disabled during validation. Also, the <em>training accuracy</em> is a running average over the batch accuracies seen while the weights are still changing, whereas the <em>validation accuracy</em> is computed once over the whole dataset with the weights at the end of the epoch.</p>
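<p>A quick way to confirm this (a sketch): evaluate the trained model on the same dataset after <code>fit</code>; the numbers should match the reported validation metrics rather than the running training ones.</p>
<pre><code>loss, acc = model.evaluate(dataset)   # Dropout disabled, final weights, whole dataset
</code></pre>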
|
python|tensorflow|machine-learning|keras
| 1
|
2,911
| 62,393,656
|
One Hot Encoding with Sparse categorical entropy throwing error
|
<p>So I am doing the MNIST Fashion example for Keras. In the program I wrote for it, I didn't need to use "to_categorical" to one hot encode my data, and it still worked. When I tried to one hot encode it, it did not work. I am confused why this happened, because usually one should one hot encode their outputs, right? If someone could help clarify this, that would be great!</p>
<pre><code>from tensorflow.keras.datasets import fashion_mnist
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Activation
from tensorflow.keras import backend as K
from kerastuner import RandomSearch
import time
from tensorflow.keras.utils import to_categorical
from kerastuner.engine.hyperparameters import HyperParameters
LOG_DIR = f"{int(time.time())}"
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
#
#
if K.image_data_format() == 'channels_first':
input_shape = (1, x_train.shape[0], 28, 28)
input_shap2 = (1, x_test.shape[0], 28, 28)
else:
input_shape = (x_train.shape[0], 28, 28, 1)
input_shap2 = (x_test.shape[0], 28, 28, 1)
y_train = to_categorical(y_train)<<<<<<<<<<<<<<<<<<<<<<
y_test = to_categorical(y_test) <<<<<<<<<<<<<<<<<<<<<<<<<<<
# plt.imshow(x_train[0], cmap='gray')
# plt.show()
x_train = x_train.reshape(input_shape)
x_test = x_test.reshape(input_shap2)
def build_model(hp):
model = keras.models.Sequential()
model.add(Conv2D(hp.Int("input_units", min_value=32, max_value=256, step=32), (3, 3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
for i in range(hp.Int("n_layers", 1, 4)):
model.add(Conv2D(hp.Int(f"conv_{i}_units", min_value=32, max_value=256, step=32), (3, 3)))
model.add(Activation('relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(10))
model.add(Activation("softmax"))
model.compile(optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
return model
# model=build_model()
# model.fit(x_train, y_train, batch_size=64, epochs=8, validation_data = (x_test, y_test))
tuner = RandomSearch(
build_model,
objective="val_accuracy",
max_trials=1,
executions_per_trial=1,
directory=LOG_DIR
)
tuner.search(
x=x_train,
y=y_train,
epochs=1,
batch_size=64,
validation_data=(x_test,y_test)
)
</code></pre>
|
<p>You were using sparse categorical crossentropy instead of categorical crossentropy. The sparse variant expects integer-encoded labels, whereas the other one expects one-hot encoded labels. You should either keep the sparse loss and not one-hot encode your labels, or switch to <code>categorical_crossentropy</code> and one-hot encode them.</p>
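<p>A minimal sketch of the two consistent setups (assuming 10 classes, as in Fashion-MNIST):</p>
<pre><code># 1) integer labels (0..9) + sparse loss: do NOT call to_categorical
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# 2) one-hot labels + dense loss: keep the to_categorical calls
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>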
|
tensorflow|keras|conv-neural-network|mnist|one-hot-encoding
| 0
|
2,912
| 51,252,460
|
Pivot dataframe with columns constant withing the index column
|
<p>Suppose I have the following dataframe, where both <code>Y</code> and <code>Z</code> are constant within <code>ID</code>:</p>
<pre><code> ID TYPE X Y Z
0 1 A 1 foo 10
1 1 B 2 foo 10
2 2 A 3 bar 20
3 2 B 4 bar 20
4 3 A 5 baz 30
5 3 B 6 baz 30
</code></pre>
<p>I would like to reshape the data from a "long" to "wide" format:</p>
<pre><code> ID XA XB Y Z
0 1 1 2 foo 10
1 2 3 4 bar 20
2 3 5 6 baz 30
</code></pre>
<p>However, if I use <code>pandas.DataFrame.pivot()</code>:</p>
<pre><code>df_new = df.pivot(index='ID', columns='TYPE')
</code></pre>
<p>I will get duplicates of <code>Y</code> and <code>Z</code>:</p>
<pre><code> X Y Z
TYPE A B A B A B
ID
1 1 2 foo foo 10 10
2 3 4 bar bar 20 20
3 5 6 baz baz 30 30
</code></pre>
<p>To get the desired output, I could do the following:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
'TYPE': ['A', 'B', 'A', 'B', 'A', 'B'],
'X': [1, 2, 3, 4, 5, 6],
'Y': ['foo', 'foo', 'bar', 'bar', 'baz', 'baz'],
'Z': [10, 10, 20, 20, 30, 30]})
def long_to_wide(df, i, j, varlist):
df_wide = df.pivot(index='ID', columns='TYPE')
df_wide.columns = [''.join(col).strip() for col in df_wide.columns.values]
df_wide.reset_index(inplace=True)
for var in varlist:
if pd.Series.equals(df_wide[var + 'A'], df_wide[var + 'B']):
df_wide.drop((var + 'B'), axis = 1, inplace = True)
else:
raise
# Error handling of some sort...
df_wide = df_wide.rename(columns={var + 'A': var})
return df_wide
df_new = long_to_wide(df, 'ID', 'TYPE', ['Y', 'Z'])
</code></pre>
<p>However, I feel that this must be unnecessarily complicated. For example, to get the desired output in Stata, one could run either: </p>
<pre><code>reshape wide X, i(ID) j(TYPE)
</code></pre>
<p>or</p>
<pre><code>reshape wide X, i(ID Y Z) j(TYPE)
</code></pre>
<p>This situation is quite common and I therefore thought there should be a built-in method to handle it. But after looking around at the <code>Pandas</code> documentation and also here at Stack Overflow, I haven't found a simpler solution. </p>
<p>Is there one?</p>
|
<p>I just had a better look at this and the function <code>pandas.DataFrame.pivot()</code> is actually performing as expected. Unlike Stata's <code>reshape</code>, which is a <em>command</em> and does quite a few things under the hood, <code>pivot()</code> simply re-arranges the data. </p>
<p>@Heleemur's solution is clever and works great, but usually it will be your responsibility to do the renaming or to get rid of the duplicates.</p>
<p>Here's an intuitive solution based on <code>pivot()</code> (or <code>pivot_table()</code>):</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
'TYPE': ['A', 'B', 'A', 'B', 'A', 'B'],
'X': [1, 2, 3, 4, 5, 6],
'Y': ['foo', 'foo', 'bar', 'bar', 'baz', 'baz'],
'Z': [10, 10, 20, 20, 30, 30]})
wanted = df.pivot(index='ID', columns='TYPE')[[('X','A'), ('X','B'), ('Y','A'), ('Z','A')]].reset_index()
wanted.columns = wanted.columns.get_level_values(0)
wanted.columns = ['ID', 'XA', 'XB', 'Y', 'Z']
wanted
ID XA XB Y Z
0 1 1 2 foo 10
1 2 3 4 bar 20
2 3 5 6 baz 30
</code></pre>
<p>Another way is also the following:</p>
<pre><code>wanted = df.pivot(index='ID', columns='TYPE').reset_index()
wanted.columns = [' '.join(col) for col in wanted.columns.values]
wanted = wanted.iloc[:, [0,2] + list(range(1, len(wanted.columns)-1, 2))]
wanted
ID X B X A Y A Z A
0 1 2 1 foo 10
1 2 4 3 bar 20
2 3 6 5 baz 30
wanted.columns = ['ID', 'XB', 'XA', 'Y', 'Z']
wanted
ID XB XA Y Z
0 1 2 1 foo 10
1 2 4 3 bar 20
2 3 6 5 baz 30
</code></pre>
<p>In a larger dataframe with more columns, you may want to keep the original names though.</p>
<hr>
<p><strong>EDIT:</strong></p>
<p>Here's an equivalent solution to the one from @Heleemur with <code>pivot_table()</code>:</p>
<pre><code>wanted = df.pivot_table(index=['ID', 'Y', 'Z'], columns='TYPE').reset_index()
wanted.columns = [''.join(c) for c in wanted.columns.values]
wanted
ID Y Z XA XB
0 1 foo 10 1 2
1 2 bar 20 3 4
2 3 baz 30 5 6
</code></pre>
|
python|pandas|dataframe|stata
| 3
|
2,913
| 51,119,246
|
create structured numpy array in python with strings and int
|
<p>I have this:</p>
<pre><code>>>> matriz
[['b8:27:eb:d6:e3:10', '0.428s', '198'],
 ['b8:27:eb:d6:e3:10', '0.428s', '232'],
 ['b8:27:eb:07:65:ad', '0.796s', '180'],
 ['b8:27:eb:07:65:ad', '0.796s', '255']],
 dtype='<U17'
</code></pre>
</code></pre>
<p>but I need the column</p>
<pre><code> `matriz[:, [2]] :
[['198'],
['232'],
['180'],
['255']]`
</code></pre>
<p>to be int and the other columns to be strings. I was trying with a structured numpy array, but I got these error messages:</p>
<pre><code> ValueError: invalid literal for int() with base 10: 'b8:27:eb:d6:e3:10'
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>I used</p>
<pre><code> matriz=np.array(matriz, dtype='U17,U17,i4')
</code></pre>
<p>I'm using numpy version '1.12.1' on a Raspberry Pi 3; I don't know what I'm doing wrong.
Thanks a lot.</p>
|
<pre><code>In [484]: x = np.array([['b8:27:eb:d6:e3:10', '0.428s', '198'],
...: ['b8:27:eb:d6:e3:10', '0.428s', '232'],
...: ['b8:27:eb:07:65:ad', '0.796s', '180'],
...: ['b8:27:eb:07:65:ad', '0.796s', '255']],
...: dtype='<U17')
...:
</code></pre>
<p>You could fetch the last column with an <code>astype</code> conversion:</p>
<pre><code>In [485]: x[:,2].astype(int)
Out[485]: array([198, 232, 180, 255])
In [486]: x[:,[2]].astype(int)
Out[486]:
array([[198],
[232],
[180],
[255]])
</code></pre>
<p>To construct a structured array, you need to provide a list of tuples. A list of lists or non-structured array with the compound dtype will produce your kind of error.</p>
<pre><code>In [487]: np.array([tuple(i) for i in x],'U17,U10,int')
Out[487]:
array([('b8:27:eb:d6:e3:10', '0.428s', 198),
('b8:27:eb:d6:e3:10', '0.428s', 232),
('b8:27:eb:07:65:ad', '0.796s', 180),
('b8:27:eb:07:65:ad', '0.796s', 255)],
dtype=[('f0', '<U17'), ('f1', '<U10'), ('f2', '<i8')])
In [488]: _['f2']
Out[488]: array([198, 232, 180, 255])
</code></pre>
<p>Fields of the structured array are fetched by name.</p>
|
python|arrays|python-3.x|numpy
| 1
|
2,914
| 48,425,964
|
How to convert a list of lists into a unique Pandas DataFrame column?
|
<p>For a list as:</p>
<pre><code>L = [[0,1,1,0],
[0,1,1,1],
[1,0,0,1],
[1,1,0,0],
]
</code></pre>
<p>And I want to make a <code>DataFrame</code> as:</p>
<pre><code> Column Name
0 [0,1,1,0]
1 [0,1,1,1]
2 [1,0,0,1]
3 [1,1,0,0]
</code></pre>
<p>The reason is that each individual list is an object by itself. </p>
|
<p>You can read the list into an indexed <code>dictionary</code> whose values are the inner lists:</p>
<pre><code>import pandas as pd
L = [[0,1,1,0],
[0,1,1,1],
[1,0,0,1],
[1,1,0,0],
]
Df = pd.DataFrame({i:[vals] for i,vals in enumerate(L)},index=['Column Name']).T
</code></pre>
<p>It returns:</p>
<pre><code> Column Name
0 [0, 1, 1, 0]
1 [0, 1, 1, 1]
2 [1, 0, 0, 1]
3 [1, 1, 0, 0]
</code></pre>
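<p>A shorter sketch that gives the same result: wrap the outer list in a <code>Series</code>, which stores each inner list as a single object:</p>
<pre><code>df = pd.DataFrame({'Column Name': pd.Series(L)})
</code></pre>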
|
python|pandas|dataframe|merge
| 0
|
2,915
| 48,138,218
|
tensorflow value error with sess.run()
|
<p>I tried to play with tensorflow a bit, but it seems like I am doing something wrong. Here is the little program I made:</p>
<pre><code>import tensorflow as tf
x = tf.placeholder(tf.float64)
y = tf.placeholder(tf.float64)
test = {"A":tf.Variable(tf.random_normal([20, 20])),
"B":tf.Variable(tf.random_normal([20, 20]))}
math_stuff = tf.matmul(x,y)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(math_stuff, feed_dict={x:test["A"], y:test["B"]}))
</code></pre>
<p>I want to see the result of <code>tf.matmul(x,y)</code> with the two 20x20 random arrays. The error that it throws at me:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Utilisateur\AppData\Local\Programs\Python\Python36\save\tensorflow_play.py",
line 15, in <module> print(sess.run(math_stuff, feed_dict={x:test["A"], y:test["B"]}))
File "C:\Users\Utilisateur\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py",
line 889, in run run_metadata_ptr)
File "C:\Users\Utilisateur\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py",
line 1089, in _run np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "C:\Users\Utilisateur\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\numeric.py",
line 531, in asarray return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
</code></pre>
|
<p>The <code>feed_dict</code> should contain numerical values, <strong>not</strong> <code>tf.Variable</code>. Replace your definition of <code>test</code> with:</p>
<pre><code>test = {"A":np.random.randn(20,20),
"B":np.random.randn(20,20)}
</code></pre>
<p>Also you should <code>import numpy as np</code> at the beginning, of course. The code then behaves as you want it to.</p>
<p>For a bit more explanation, you can think of the <code>feed_dict</code> as the <em>numerical values</em> you give to your computational graph, not <em>part</em> of the computational graph (as a <code>tf.Variable</code> would be). </p>
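<p>An alternative sketch: if the random matrices don't need to come from outside the graph, keep them as ops and skip the placeholders entirely:</p>
<pre><code>import tensorflow as tf

a = tf.random_normal([20, 20], dtype=tf.float64)
b = tf.random_normal([20, 20], dtype=tf.float64)
product = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(product))
</code></pre>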
|
python|numpy|tensorflow
| 3
|
2,916
| 48,388,616
|
Create a new data frame based on conditions from columns of a given dataframe
|
<p>I have following data frame,</p>
<p>df.head()</p>
<pre><code>UID Timestamp Weekday Business_hour
AAD 2017-07-11 09:31:44 TRUE TRUE
AAD 2017-07-11 23:24:43 TRUE FALSE
AAD 2017-07-12 13:24:43 TRUE TRUE
SAP 2017-07-23 14:24:34 FALSE FALSE
SAP 2017-07-24 16:58:49 TRUE TRUE
YAS 2017-07-31 21:10:35 TRUE FALSE
</code></pre>
<p>based on the following conditions,</p>
<p>Active: whether the same UID has repeated events, i.e. the same UID appears 2+ times on the same day.</p>
<p>Multiple_days: whether the same UID is active on multiple days (2+ days).</p>
<p>Busi_weekday: whether the same UID tends to occur during weekday business hours.</p>
<p>The aimed output should look like,</p>
<pre><code>UID Active Multiple_days Busi_weekday
AAD TRUE TRUE TRUE
SAP FALSE TRUE FALSE
YAS FALSE FALSE FALSE
</code></pre>
|
<p>You could calculate them one by one like this:</p>
<pre><code>data['Timestamp'] = pd.to_datetime(data['Timestamp'])
data['date'] = data['Timestamp'].dt.date

target_df = pd.DataFrame()
target_df['UID'] = data['UID'].unique()

# Active: the same UID appears 2+ times on the same day
a = data.groupby(['UID', 'date']).size()
a = a[a > 1]
target_df['Active'] = target_df['UID'].isin(a.reset_index()['UID'])

# Multiple_days: the same UID is active on 2+ distinct days
a = data.groupby('UID')['date'].nunique()
a = a[a > 1]
target_df['Multiple_days'] = target_df['UID'].isin(a.index)

# Busi_weekday: the UID occurs at least once during weekday business hours
a = data[(data['Weekday'] == True) & (data['Business_hour'] == True)]['UID'].unique()
target_df['Busi_weekday'] = target_df['UID'].isin(a)
target_df
</code></pre>
<p><a href="https://i.stack.imgur.com/LZiRs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LZiRs.png" alt="enter image description here"></a></p>
|
python|pandas|numpy|group-by
| 1
|
2,917
| 48,354,243
|
tensorflow estimator from_generator, how to set TensorShape?
|
<p>I am trying to use a generator to feed data into an estimator. The following is the code. However, when I try to run it, I got the following error:</p>
<p>Update2: I finally made it work. So the correct tensorshape is
([], [], [])</p>
<p>Update: I added the tensorshape ([None], [None], [None]), then I changed <code>ds.batch(10)</code> to an assignment <code>ds = ds.batch(10)</code>,</p>
<p>but still got an error:</p>
<pre><code>Traceback (most recent call last):
File "xyz.py", line 79, in <module>
tf.app.run(main=main, argv=None)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "xyz.py", line 67, in main
model.train(input_fn=lambda: input_fn(100))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 302, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 783, in _train_model
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 521, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 892, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 967, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 952, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1024, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 827, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: exceptions.ValueError: `generator` yielded an element of shape () where an element of shape (?,) was expected.
[[Node: PyFunc = PyFunc[Tin=[DT_INT64], Tout=[DT_INT64, DT_STRING, DT_FLOAT], token="pyfunc_1"](arg0)]]
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?], [?,?], [?,?]], output_types=[DT_INT64, DT_STRING, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
</code></pre>
<p>So my question: how do I set the TensorShape? <code>from_generator</code> takes a third argument for the TensorShape, but I cannot find any example/doc on how to set it. Any help?</p>
<p>Thanks,</p>
<pre><code>def gen(nn):
ii = 0
while ii < nn:
ii += 1
yield ii, 't{0}'.format(ii), ii*2
def input_fn(n):
ds = tf.data.Dataset.from_generator(lambda: gen(n), (tf.int64, tf.string, tf.float32), ([None], [None], [None]))
ds = ds.batch(10)
x, y, z = ds.make_one_shot_iterator().get_next()
return {'x': x, 'y': y}, tf.greater_equal(z, 10)
def build_columns():
x = tf.feature_column.numeric_column('x')
y = tf.feature_column.categorical_column_with_hash_bucket('y', hash_bucket_size=5)
return [x, y]
def build_estimator():
run_config = tf.estimator.RunConfig().replace(
session_config=tf.ConfigProto(device_count={'GPU': 0}))
return tf.estimator.LinearClassifier(model_dir=FLAGS.model_dir, feature_columns=build_columns(), config=run_config)
def main(unused):
# Clean up the model directory if present
shutil.rmtree(FLAGS.model_dir, ignore_errors=True)
model = build_estimator()
# Train and evaluate the model every `FLAGS.epochs_per_eval` epochs.
for n in range(FLAGS.train_epochs // FLAGS.epochs_per_eval):
model.train(input_fn=lambda: input_fn(100))
results = model.evaluate(input_fn=lambda: input_fn(20))
</code></pre>
|
<p>As mentioned by @FengTian in an update, the correct answer was to use shape <code>([], [], [])</code> as the output shape of the generator:</p>
<pre class="lang-py prettyprint-override"><code>tf.data.Dataset.from_generator(lambda: gen(n), (tf.int64, tf.string, tf.float32), ([], [], []))
</code></pre>
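<p>Putting it together, a sketch of the corrected <code>input_fn</code> (the generator yields scalars, hence the empty shapes; batching then adds the batch dimension):</p>
<pre class="lang-py prettyprint-override"><code>def input_fn(n):
    ds = tf.data.Dataset.from_generator(
        lambda: gen(n),
        (tf.int64, tf.string, tf.float32),
        (tf.TensorShape([]), tf.TensorShape([]), tf.TensorShape([])))
    ds = ds.batch(10)
    x, y, z = ds.make_one_shot_iterator().get_next()
    return {'x': x, 'y': y}, tf.greater_equal(z, 10)
</code></pre>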
|
tensorflow|generator|tensorflow-estimator
| 1
|
2,918
| 48,069,000
|
Writing a value to a given field in a csv file using pandas or the csv module
|
<p>Is there any way to write a value to a specific place in a given .csv file using pandas or the csv module?</p>
<p>I have tried using csv_reader to read the file and find the line that fits my requirements, but I couldn't figure out a way to replace the value in the file with mine.</p>
<p>What I am trying to achieve here is that I have a spreadsheet of names and values. I am using JSON to update the values from the server and after that I want to update my spreadsheet also.</p>
<p>The latest solution I came up with was to create a separate sheet from which I will get the updated data, but this one is not working either, since there is no guaranteed order in which the dict is written to the file.</p>
<pre><code>def updateSheet(fileName, aValues):
with open(fileName+".csv") as workingSheet:
writer = csv.DictWriter(workingSheet,aValues.keys())
writer.writeheader()
writer.writerow(aValues)
</code></pre>
<p>I will appreciate any guidance and tips.</p>
|
<p>You can try this approach to write the data out to the specified csv file:</p>
<pre><code>import pandas as pd
a = ['one','two','three']
b = [1,2,3]
english_column = pd.Series(a, name='english')
number_column = pd.Series(b, name='number')
predictions = pd.concat([english_column, number_column], axis=1)
save = pd.DataFrame({'english':a,'number':b})
save.to_csv('b.csv',index=False,sep=',')
</code></pre>
<p><img src="https://i.stack.imgur.com/7zcu4.png" alt="enter image description here"></p>
|
python|excel|pandas|csv
| 0
|
2,919
| 48,503,051
|
New docs representation in doc2vec Tensorflow
|
<p>I trained doc2vec model in TensorFlow. So now I have embeded vectors for words in dictionary and vectors for the documents. </p>
<p>In the paper </p>
<pre><code>"Distributed Representations of Sentences and Documents"
Quoc Le, Tomas Mikolov
</code></pre>
<p>authors write </p>
<blockquote>
<p>“the inference stage” to get paragraph vectors D for new paragraphs
(never seen before) by adding more columns in D and gradient
descending on D while holding W,U,b fixed.</p>
</blockquote>
<p>I have pretrained model so we have W, U and b as graph variables. Question is how to implement inference of D(new document) efficiently in Tensorflow? </p>
|
<p>For most neural networks, the output of the network (a class for classification problems, a number for regression, ...) is the value you are interested in. In those cases, inference means running the frozen network on some new data (forward propagation) to compute the desired output.</p>
<p>For those cases, several strategies can be used to deliver the desired output quickly for multiple new data points: scaling horizontally, reducing the computational cost through quantisation of the weights, optimising the frozen-graph computation (see <a href="https://devblogs.nvidia.com/tensorrt-3-faster-tensorflow-inference/" rel="nofollow noreferrer">https://devblogs.nvidia.com/tensorrt-3-faster-tensorflow-inference/</a>), ...</p>
<p>doc2vec (and word2vec) is however a different use case: the neural net is used to compute an output (a prediction of the next word), but the meaningful and useful data are the weights of the neural network after training. The inference stage is therefore different: you do not run the neural net forward to get a vector representation of a new document; instead you train the part of the neural net that provides the vector representation of your document, while the rest of the neural net (W, U, b) stays frozen.</p>
<p>How can you efficiently compute D (the document vector) in Tensorflow:</p>
<ul>
<li>Run experiments to find the optimal learning rate (a smaller value might be a better fit for shorter documents), as it determines how quickly your neural network converges to a representation of a document.</li>
<li>As the other parts of the neural net are frozen, you can scale the inference across multiple processes / machines.</li>
<li>Identify the bottlenecks: what is currently slow? Model computation? Text retrieval from disk or from an external data source? Storage of the results?</li>
</ul>
<p>Knowing more about your current issues, and the context might help.</p>
|
tensorflow|nlp|doc2vec
| 0
|
2,920
| 48,607,132
|
Timestamp fetched from websockets formatting
|
<p>I'm new here and need help in understanding how i can work with timestamps to datetime objects that are used in pandas. I saved some data using websockets in a csv file and loaded that csv file into a pandas dataframe. In my timestamp column i'm getting contents like <code>[2018-02-04T07:49:36.867Z, 2018-02-04T07:49:56.931Z and so on]</code>. </p>
<p>I have to manipulate the other data columns using the time data, e.g. re-sampling (using pandas) over certain durations, say 1 min, 3 min, etc.
But I can't apply the re-sampling, as the date and time are not in the correct format; I need them like this: <code>[20180204 07:49:56.931 and so on]</code>.</p>
<p>How do I achieve this transformation in pandas/python? Is it just simple string manipulation, where I remove the unwanted characters and then apply the datetime transformation? Any help on how to proceed would be appreciated.</p>
<p>I don't even know where to start as I have never come across this type of format. </p>
|
<p>This is one way, using <code>pandas</code>:</p>
<pre><code>import pandas as pd

d = '2018-02-04T07:49:36.867Z'
d_pd = pd.to_datetime(d)                          # Timestamp('2018-02-04 07:49:36.867000')
# %H:%M:%S instead of the platform-dependent %T; [:-3] trims microseconds to milliseconds
d_str = d_pd.strftime('%Y%m%d %H:%M:%S.%f')[:-3]  # '20180204 07:49:36.867'
</code></pre>
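<p>For a whole column the same idea vectorizes with the <code>.dt</code> accessor (a sketch; the column name <code>'timestamp'</code> is an assumption):</p>
<pre><code>df['timestamp'] = pd.to_datetime(df['timestamp'])
df['formatted'] = df['timestamp'].dt.strftime('%Y%m%d %H:%M:%S.%f').str[:-3]
</code></pre>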
|
python|string|python-3.x|pandas|datetime
| 0
|
2,921
| 70,742,561
|
Pivoting data and keeping only specific rows as per a condition
|
<p>I have a pandas dataframe with multiple columns, which looks like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">Year</th>
<th style="text-align: center;">Code</th>
<th style="text-align: center;">#Purchase</th>
<th style="text-align: right;">Mode</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">100</td>
<td style="text-align: center;">2018</td>
<td style="text-align: center;">ABC</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">100</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">DEF</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">100</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">GHI</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">102</td>
<td style="text-align: center;">2018</td>
<td style="text-align: center;">JKL</td>
<td style="text-align: center;">4</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">103</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">MNO</td>
<td style="text-align: center;">5</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">103</td>
<td style="text-align: center;">2020</td>
<td style="text-align: center;">PQR</td>
<td style="text-align: center;">6</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">102</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">PQR</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">104</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">LMN</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">104</td>
<td style="text-align: center;">2021</td>
<td style="text-align: center;">LMN</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">3</td>
</tr>
</tbody>
</table>
</div>
<p>I want to group rows w.r.t to ID and then pivot the results and would only like to keep IDs that have an entry against Mode_1 then Mode_2 and then Mode_3. The result should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">ID</th>
<th style="text-align: center;">Year_1</th>
<th style="text-align: center;">Code_1</th>
<th style="text-align: center;">#Purchase_1</th>
<th style="text-align: right;">Mode_1</th>
<th style="text-align: center;">Year_2</th>
<th style="text-align: center;">Code_2</th>
<th style="text-align: center;">#Purchase_2</th>
<th style="text-align: right;">Mode_2</th>
<th style="text-align: center;">Year_3</th>
<th style="text-align: center;">Code_3</th>
<th style="text-align: center;">#Purchase_3</th>
<th style="text-align: right;">Mode_3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">100</td>
<td style="text-align: center;">2018</td>
<td style="text-align: center;">ABC</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">DEF</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">GHI</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">103</td>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">MNO</td>
<td style="text-align: center;">5</td>
<td style="text-align: right;">1</td>
<td style="text-align: center;">2020</td>
<td style="text-align: center;">PQR</td>
<td style="text-align: center;">6</td>
<td style="text-align: right;">2</td>
<td style="text-align: center;">NaN</td>
<td style="text-align: center;">NaN</td>
<td style="text-align: center;">NaN</td>
<td style="text-align: right;">NaN</td>
</tr>
</tbody>
</table>
</div>
<p>In the example results, we can see that ID 102 is dropped because it doesn't have a <code>Mode</code> = 1, and ID 104 is dropped because the <code>Mode</code> value skips 2. So the possible combinations of <code>Mode</code> against an ID needed are 1, 1-->2, or 1-->2-->3.</p>
<p>It would be really appreciated if someone could help me with this example problem.
TIA</p>
|
<p>Use:</p>
<pre><code>#filter if difference is 1 per groups
m1 = df.groupby('ID')['Mode'].transform(lambda x: x.diff().iloc[1:].eq(1).all())
#filter if first value per group is 1
m2 = df.groupby('ID')['Mode'].transform('first').eq(1)
#pivoting all columns by ID per groups g created by copy Mode column
df = df[m1 & m2].assign(g = lambda x: x['Mode']).pivot(index='ID', columns='g').sort_index(axis=1, level=1)
#flatten MultiIndex
df.columns = df.columns.map(lambda x: f'{x[0]}_{x[1]}')
df = df.reset_index()
print (df)
ID #Purchase_1 Code_1 Mode_1 Year_1 #Purchase_2 Code_2 Mode_2 \
0 100 1.0 ABC 1.0 2018.0 2.0 DEF 2.0
1 103 5.0 MNO 1.0 2019.0 6.0 PQR 2.0
Year_2 #Purchase_3 Code_3 Mode_3 Year_3
0 2019.0 3.0 GHI 3.0 2019.0
1 2020.0 NaN NaN NaN NaN
</code></pre>
|
python|pandas
| 0
|
2,922
| 70,974,406
|
Remove all previous rows from primary dataframe based on condition from another dataframe
|
<p>I have two dataframe say df1 (primary dataframe) and df2. I want to drop all previous rows from df1 based on a condition from df2. My dataframe are like below:</p>
<p><strong>df2</strong></p>
<pre><code> tradingsymbol Time
0 BANKNIFTY2220339500CE 12:54:40
1 BANKNIFTY2220340000CE 12:53:33
2 BANKNIFTY2220340500CE 12:51:50
</code></pre>
<p><strong>df1</strong>.head(20)</p>
<pre><code> tradingsymbol Time last_price
0 BANKNIFTY2220339500CE 09:20:10 84.40
1 BANKNIFTY2220339500CE 09:20:10 85.95
2 BANKNIFTY2220339500CE 12:55:60 84.70 <-Valid Row
3 BANKNIFTY2220339500CE 13:22:10 86.35 <-Valid Row
4 BANKNIFTY2220339500CE 14:55:40 87.10 <-Valid Row
5 BANKNIFTY2220340000CE 09:20:13 88.95
6 BANKNIFTY2220340000CE 09:20:13 88.80
7 BANKNIFTY2220340000CE 09:20:14 88.30
8 BANKNIFTY2220340000CE 14:23:11 87.30 <-Valid Row
9 BANKNIFTY2220340500CE 09:20:15 90.15
10 BANKNIFTY2220340500CE 09:20:16 90.10
11 BANKNIFTY2220340500CE 09:20:17 91.05
12 BANKNIFTY2220340500CE 09:20:18 90.95
</code></pre>
<p>For each tradingsymbol, I want to remove from df1 all rows earlier than the time given in the <strong>Time column of df2</strong>. I want my result as below:</p>
<pre><code> tradingsymbol Time last_price
2 BANKNIFTY2220339500CE 12:55:60 84.70
3 BANKNIFTY2220339500CE 13:22:10 86.35
4 BANKNIFTY2220339500CE 14:55:40 87.10
8 BANKNIFTY2220340000CE 14:23:11 87.30
</code></pre>
|
<p>In case the column elements are not yet in datetime format, you can transform:</p>
<pre><code>df["Time"] = pd.to_datetime(df["Time"]).dt.time
</code></pre>
<p>Or, you can set this option directly while reading:</p>
<pre><code>df = pd.read_csv(
filename,
parse_dates=["Time"],
date_parser=lambda x: pd.to_datetime(x, format="%H:%M:%S").time()
)
</code></pre>
<p>Having done this for both dataframes, one way to filter the dataframe is to go through all the rows in df2 and, for each one, drop the rows that satisfy the condition in df1. So:</p>
<pre><code>for index, row in df2.iterrows():
df1.drop(
df1[(df1.tradingsymbol == row["tradingsymbol"]) & (df1.Time < row["Time"])].index,
inplace=True
)
</code></pre>
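<p>A vectorized alternative to the row loop (a sketch; it assumes both <code>Time</code> columns were converted as above and that each <code>tradingsymbol</code> has exactly one cutoff row in df2):</p>
<pre><code>m = df1.merge(df2, on='tradingsymbol', suffixes=('', '_cutoff'))
result = m[m['Time'] >= m['Time_cutoff']].drop(columns='Time_cutoff')
</code></pre>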
|
python|python-3.x|pandas|dataframe|group-by
| 1
|
2,923
| 70,992,167
|
Python Numpy CAR implementation - ValueError: shapes not aligned
|
<p>I am trying to implement a Common Average Reference (CAR) function in Python. The idea is to compute the average of the signal over all EEG channels and subtract it from the EEG signal at every channel, for every time point. The input of this function is a NumPy array called trials: a 3D array that contains EEG data in the form (trials x time x channels), for example:</p>
<pre><code>trials.shape is (240, 2048, 17)
</code></pre>
<p>The output will be the processed signal array. This is my current code:</p>
<pre><code># Common Average Reference
import numpy as np
def car(trials):
signal = []
for tr in trials:
tr = np.subtract(tr,(np.dot(np.mean(tr, axis=1), np.ones((0, np.size(tr, axis=1))))))
signal.append(tr)
return signal
</code></pre>
<p>And it returns this error:</p>
<pre><code>ValueError: shapes (2048,) and (0,17) not aligned: 2048 (dim 0) != 0 (dim 0)
</code></pre>
<p>Do you have any suggestions on how to solve this?
Thank you in advance!</p>
|
<p>WHOOPS - I didn't understand the definition of Common Average Reference. As pointed out in <a href="https://stackoverflow.com/users/1217358/warren-weckesser">Warren Weckesser's</a> <a href="https://stackoverflow.com/questions/70992167/python-numpy-car-implementation-valueerror-shapes-not-aligned/70993000#comment125500967_70993000">comment</a>, the CAR is the value at each electrode, not over time. So the average should be calculated over the channels dimension. Use <code>keepdims=True</code> to make the shape compatible so that the subtraction can still be done with broadcasting:</p>
<pre><code>>>> car = np.mean(trials, axis=2, keepdims=True)
>>> car.shape
(240, 2048, 1)
</code></pre>
<p>You can take advantage of NumPy's <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a> to subtract the reference in one step:</p>
<pre><code>>>> import numpy as np
>>> trials = np.random.random((240, 2048, 17))
>>> car = np.mean(trials, axis=2, keepdims=True)  # the reference at each (trial, time) point
>>> tnew = trials - car
>>> tnew.shape
(240, 2048, 17)
</code></pre>
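<p>Putting it together, the whole function collapses to a one-liner (a sketch, assuming <code>trials</code> is shaped <code>(trials, time, channels)</code>):</p>
<pre><code>def car(trials):
    # subtract the mean over channels at every (trial, time) point
    return trials - trials.mean(axis=2, keepdims=True)
</code></pre>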
|
python|numpy|signal-processing
| 1
|
2,924
| 70,834,135
|
invalid value encountered in log in Python
|
<p>I am trying to implement the random walk Metropolis-Hastings algorithm; my code is:</p>
<pre><code>import numpy as np
def rwmetrop(data,mu0=0,kappa0=1,alpha0=1,lambda0=1,nburn=1000,ndraw=10000,vmu=1,vomega=1):
n = len(data)
alpha1 = (n/2) + alpha0 - 1
stdvmu = np.sqrt(vmu)
stdvomega = np.sqrt(vomega)
mu = np.random.normal(mu0,
np.sqrt(1/kappa0),
size =1)
omega = np.random.gamma(shape = alpha0,
scale = 1/alpha0,
size = 1)
draws = np.empty([ndraw,2])
acceptmu = 0
acceptomega = 0
it = -nburn
while it < ndraw:
mucan = np.random.normal(mu,stdvmu,1)
logp = (kappa0/2)*((mu-mu0)**2-((mucan-mu0)**2)) + np.sum( np.log( 1+omega*( (data-mu)**2 ) - np.log(1+omega*(data-mucan)**2)))
u = np.random.uniform(0,1,1)
if np.log(u) < logp:
acceptmu = acceptmu + 1
mu = mucan
omegacan = np.random.normal(omega,stdvomega,1)
if omegacan > 0:
logp = alpha1*(np.log(omegacan)-np.log(omega)) + lambda0*(omega-omegacan)+ sum(np.log(1+omega*( ((data-mu)**2)))- np.log(1+omegacan*( (data-mu)**2)) )
u = np.random.uniform(0,1,1)
if np.log(u) < logp:
acceptomega = acceptomega + 1
omega = omegacan
if it>0:
draws[it,0] = mu
draws[it,1] = omega
it = it+1
return(draws)
from scipy.stats import cauchy
data = cauchy.rvs(loc=1, scale=1, size=100)
</code></pre>
<p>But when I run the created function I receive a warning that says:</p>
<pre><code>draws4=rwmetrop(data=data,mu0=0,kappa0=1,alpha0=1,lambda0=1,nburn=1000,ndraw=10000,vmu=1,vomega=1)
mu=draws4[:,0];mu
</code></pre>
<pre><code> RuntimeWarning: invalid value encountered in log
logp = (kappa0/2)*((mu-mu0)**2-((mucan-mu0)**2)) + np.sum( np.log( 1+omega*( (data-mu)**2 ) - np.log(1+omega*(data-mucan)**2)))
</code></pre>
<p>I checked the parentheses but they seem OK to me. Probably some 0 (zero) creates the warning?</p>
<p>What is wrong with my code?</p>
<p>I would appreciate any help from anybody.</p>
|
<p>The closing parenthesis of the first <code>np.log</code> in your <code>logp</code> line is misplaced: the second <code>np.log(...)</code> term ends up <em>inside</em> the argument of the first one. Whenever that inner term exceeds <code>1+omega*(data-mu)**2</code>, the overall argument becomes negative, and the logarithm of a negative number is out of range; that is exactly what the <code>invalid value encountered in log</code> warning reports.</p>
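<p>A sketch of the corrected line, with each <code>np.log</code> closed before the subtraction (since <code>omega</code> stays positive, both arguments are at least 1 and the warning disappears):</p>
<pre><code>logp = ((kappa0/2)*((mu-mu0)**2 - (mucan-mu0)**2)
        + np.sum(np.log(1 + omega*(data-mu)**2)
                 - np.log(1 + omega*(data-mucan)**2)))
</code></pre>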
|
python|function|numpy
| 1
|
2,925
| 70,759,325
|
Is there a way to (conditionally) forward fill values in a Pandas DF in a vectorized way based on multiple criteria?
|
<p>In the below dataframe, I'm trying to forward fill the <code>Pos</code> and <code>Stop</code> columns based on the following criteria:</p>
<ol>
<li>If (Prior <code>Pos</code> == -1) & (Current <code>High</code> < Prior <code>Stop</code>)</li>
<li>If (Prior <code>Pos</code> == 1 ) & (Current <code>Low</code> > Prior <code>Stop</code>)</li>
</ol>
<p>Once either of these conditions is violated, then the values should remain unchanged until the next non-zero instance of <code>Pos</code> and <code>Stop</code> at which point the above criteria should again be evaluated.</p>
<pre><code>import numpy as np
import pandas as pd
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
df
Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1 122.86
2022-01-04 125.23 127.72 125.10 127.31 0 0.00
2022-01-05 127.82 128.00 125.21 125.43 -1 128.00
2022-01-06 126.01 127.94 125.94 127.10 0 0.00
2022-01-07 127.82 128.32 126.32 126.90 0 0.00
2022-01-08 126.96 127.39 126.42 126.85 0 0.00
2022-01-09 126.44 127.64 125.08 125.28 0 0.00
2022-01-10 125.57 125.80 124.55 124.61 -1 125.80
2022-01-11 125.08 125.35 123.94 124.28 0 0.00
2022-01-12 124.28 125.24 124.05 125.06 1 124.05
2022-01-13 124.68 124.85 123.13 123.54 0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1 123.85
2022-01-15 126.17 126.32 124.83 125.90 0 0.00
2022-01-16 126.60 128.46 126.21 126.74 0 0.00
2022-01-17 127.21 127.75 126.52 127.13 0 0.00
</code></pre>
<p>The desired result is:</p>
<pre><code> Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1 122.86
2022-01-04 125.23 127.72 125.10 127.31 1 122.86
2022-01-05 127.82 128.00 125.21 125.43 -1 128.00
2022-01-06 126.01 127.94 125.94 127.10 -1 128.00
2022-01-07 127.82 128.32 126.32 126.90 0 0.00
2022-01-08 126.96 127.39 126.42 126.85 0 0.00
2022-01-09 126.44 127.64 125.08 125.28 0 0.00
2022-01-10 125.57 125.80 124.55 124.61 -1 125.80
2022-01-11 125.08 125.35 123.94 124.28 -1 125.80
2022-01-12 124.28 125.24 124.05 125.06 1 124.05
2022-01-13 124.68 124.85 123.13 123.54 0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1 123.85
2022-01-15 126.17 126.32 124.83 125.90 1 123.85
2022-01-16 126.60 128.46 126.21 126.74 1 123.85
2022-01-17 127.21 127.75 126.52 127.13 1 123.85
</code></pre>
<p>I've tried using the <code>groupby</code> and <code>where</code> methods, which produce a df that is close to the desired one but does not keep the values unchanged for the subsequent rows in a group once the criteria are breached.</p>
<pre><code>s = df[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
grouped = s.groupby(['Pos','Stop'])
df.update(grouped.apply(lambda g: g.where((s['Pos'] == 1) & (s['Stop'] <= df['Low']) | (s['Pos'] == -1) & (s['Stop'] >= df['High']))))
df
Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0.0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0.0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1.0 122.86
2022-01-04 125.23 127.72 125.10 127.31 1.0 122.86
2022-01-05 127.82 128.00 125.21 125.43 -1.0 128.00
2022-01-06 126.01 127.94 125.94 127.10 -1.0 128.00
2022-01-07 127.82 128.32 126.32 126.90 0.0 0.00
2022-01-08 126.96 127.39 126.42 126.85 -1.0 128.00
2022-01-09 126.44 127.64 125.08 125.28 -1.0 128.00
2022-01-10 125.57 125.80 124.55 124.61 -1.0 125.80
2022-01-11 125.08 125.35 123.94 124.28 -1.0 125.80
2022-01-12 124.28 125.24 124.05 125.06 1.0 124.05
2022-01-13 124.68 124.85 123.13 123.54 0.0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1.0 123.85
2022-01-15 126.17 126.32 124.83 125.90 1.0 123.85
2022-01-16 126.60 128.46 126.21 126.74 1.0 123.85
2022-01-17 127.21 127.75 126.52 127.13 1.0 123.85
</code></pre>
|
<p>I would suggest iterating over all rows and checking both conditions, skipping rows that carry a fresh non-zero signal so those stay unchanged. I think the following code does what you need:</p>
<pre><code>import numpy as np
import pandas as pd
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
# Produce iterable index
df = df.reset_index()
# Iterate over every row
for i in range(1, len(df)):
    curr_pos = df.loc[i, 'Pos']
    curr_low = df.loc[i, 'Low']
    curr_high = df.loc[i, 'High']
    prev_pos = df.loc[i-1, 'Pos']
    prev_stop = df.loc[i-1, 'Stop']
    # Check your conditions: only rows without a fresh signal are filled,
    # and only while the prior stop has not been breached
    if curr_pos == 0:
        if ((prev_pos == -1) and (curr_high < prev_stop)) or ((prev_pos == 1) and (curr_low > prev_stop)):
            df.loc[i, 'Pos'] = prev_pos
            df.loc[i, 'Stop'] = prev_stop
# Restore original index
df = df.set_index('index')
df
</code></pre>
<p>EDIT: Based on your solution, I tried to remove some operations that I think are costly. Would you mind testing the following code:</p>
<pre><code>Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
df_tmp = df.copy()
df_tmp[['Pos','Stop']] = df_tmp[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
df_tmp['tmp'] = (df_tmp['Pos'] == 1) & (df_tmp['Stop'] <= df_tmp['Low']) | (df_tmp['Pos'] == -1) & (df_tmp['Stop'] >= df_tmp['High'])
positions = df_tmp.groupby(['Pos', 'Stop'])['tmp'].cummin().eq(1)
df[['Pos','Stop']] = df_tmp[['Pos','Stop']].where(positions, 0)
print(df.round(2))
</code></pre>
|
python-3.x|pandas|numpy
| 1
|
2,926
| 51,644,257
|
Tensorflow Estimators : proper way to train image grids separately
|
<p>I am trying to train an object detection model as described in this <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8281079" rel="nofollow noreferrer">paper</a> </p>
<p>There are 3 fully connected layers with 512, 512, 25 neurons. The 16x55x55 feature map from the last convolutional layer is fed into the fully connected layers to retrieve the appropriate class. At this stage, every grid described by (16x1x1) is fed into the fully connected layers to classify the grid as belonging to one of the 25 classes. The structure can be seen in the pciture below </p>
<p><a href="https://i.stack.imgur.com/jpbwm.png" rel="nofollow noreferrer">fully connected layers</a></p>
<p>I am trying to adapt the code from TF MNIST classification tutorial, and I would like to know if it is okay to just sum the losses from each grid as in the code snippet below and use it to train the model weights.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>flat_fmap = tf.reshape(last_conv_layer, [-1, 16*55*55])
total_loss = 0
for grid in flat_fmap:
dense1 = tf.layers.dense(inputs=grid, units=512, activation=tf.nn.relu)
dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu)
logits = tf.layers.dense(inputs=dense2, units=25)
total_loss += tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=total_loss,
global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode=tf.estimator.ModeKeys.TRAIN, loss=total_loss, train_op=train_op)</code></pre>
<p>In the code above, I think at every iteration 3 new layers are being created. However, I would like the weights to be preserved when classifying one grid and then another.</p>
|
<p>Adding to the total_loss should be ok. </p>
<p>tf.losses.sparse_softmax_cross_entropy is also adding losses together. </p>
<p>It calculates a sparse softmax with the logits and then reduces the resulting array through a sum using math_ops.reduce_sum.
So you are adding them together, one way or another. </p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/r1.9/tensorflow/python/ops/losses/losses_impl.py" rel="nofollow noreferrer">As you can see in its source</a> </p>
<p>The for loop on the network declaration seems unusual; it probably makes more sense to build the network once and pass each grid through the feed_dict at run time. </p>
<pre><code># X and labels here are assumed placeholders; shape them to match your data
X = tf.placeholder(tf.float32, shape=[None, 16])
labels = tf.placeholder(tf.int32, shape=[None])

dense1 = tf.layers.dense(inputs=X, units=512, activation=tf.nn.relu)
dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu)
logits = tf.layers.dense(inputs=dense2, units=25)

loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)

init = tf.global_variables_initializer()
total_loss = 0
with tf.Session() as sess:
    sess.run(init)
    for grid in flat_fmap:
        # labels_for_grid: the targets matching this grid (an assumption about your data)
        _, l = sess.run([optimizer, loss], feed_dict={X: grid, labels: labels_for_grid})
        total_loss += l
</code></pre>
|
tensorflow|deep-learning|object-detection|tensorflow-estimator
| 0
|
2,927
| 51,643,452
|
How to examine the results of a tensorflow.data.Dataset based model.train input_fn
|
<p>I am learning how to use the tf.data.Dataset api. I am using the sample code provided by google for their coursera tensorflow class. Specifically I am working with the c_dataset.ipynb notebook <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/courses/machine_learning/deepdive/03_tensorflow" rel="nofollow noreferrer">here.</a></p>
<p>This notebook has a model.train routine which looks like this:</p>
<pre><code>model.train(input_fn = get_train(), steps = 1000)
</code></pre>
<p>The get_train() routine eventually calls code which uses the tf.data.Dataset api with this snippet of code:</p>
<pre><code>filenames_dataset = tf.data.Dataset.list_files(filename)
# read lines from text files
# this results in a dataset of textlines from all files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
# this does the decoder function for each textline
dataset = textlines_dataset.map(decode_csv)
</code></pre>
<p>The comments do a pretty explanation of what happens. Later this routine will return like so:</p>
<pre><code> # return the features and label as a tensorflow node, these
# will trigger file load operations progressively only when
# needed.
return dataset.make_one_shot_iterator().get_next()
</code></pre>
<p>Is there any way to evaluate the result for one iteration? I tried something like this, but it fails.</p>
<pre><code># Try to read what its using from the cvs file.
one_batch_the_csv_file = get_train()
with tf.Session() as sess:
result = sess.run(one_batch_the_csv_file)
print(one_batch_the_csv_file)
</code></pre>
<p>Per the suggestion from Ruben below, I added this</p>
<p>I moved on to the next set of labs in this class where they introduce tensorboard and I get some graphs but still no inputs or outputs. With that said, here is a more complete set of source.</p>
<pre><code># curious he did not do this
# I am guessing because the output is so verbose
tf.logging.set_verbosity(tf.logging.INFO) # putting back in since, tf.train.LoggingTensorHook mentions it
def train_and_evaluate(output_dir, num_train_steps):
# Added this while trying to get input vals from csv.
# This gives an error about scafolding
# summary_hook = tf.train.SummarySaverHook()
# SAVE_EVERY_N_STEPS,
# summary_op=tf.summary.merge_all())
# To convert a model to distributed train and evaluate do four things
estimator = tf.estimator.DNNClassifier( # 1. Estimator
model_dir = output_dir,
feature_columns = feature_cols,
hidden_units=[160, 80, 40, 20],
n_classes=2,
config=tf.estimator.RunConfig().replace(save_summary_steps=2) # 2. run config
# ODD. he mentions we need a run config in the videos, but it was missing in the lab
# notebook. Later I found the bug report which gave me this bit of code.
# I got a working TensorBoard when I changed this from save_summary_steps=10 to 2.
)#
# .. also need the trainspec to tell the estimator how to get training data
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN), # make sure you use the dataset api
max_steps = num_train_steps)
# training_hook=[summary_hook]) # Added this while trying to get input vals from csv.
# ... also need this
# serving and training-time inputs are often very different
exporter = tf.estimator.LatestExporter('exporter', serving_input_receiver_fn = serving_input_fn)
# .. also need an EvalSpec which controls the evaluation and
# the checkpointing of the model since they happen at the same time
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL), # make sure you use the dataset api
steps=None, # evals on 100 batches
start_delay_secs = 1, # start evaluating after N secoonds. orig was 1. 3 seemed to fail?
throttle_secs = 10, # eval no more than every 10 secs. Can not be more frequent than the checkpoint config specified in the run config.
exporters = exporter) # how to export the model for production.
tf.estimator.train_and_evaluate(
estimator,
train_spec, # 3. Train Spec
eval_spec) # 4. Eval Spec
OUTDIR = './model_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
TensorBoard().start(OUTDIR)
# need to let this complete before running next cell
# call the above routine
train_and_evaluate(OUTDIR, num_train_steps = 6000) # originally 2000. 1000 after reset shows only projectors
</code></pre>
|
<p>I do not know exactly what kind of information you want to extract. If you are interested in step N, as a general answer:</p>
<ol>
<li>If you want exactly the results, just run with <code>model.train(input_fn = get_train(), steps = N)</code>.</li>
<li>Check train module functions <a href="https://www.tensorflow.org/api_docs/python/tf/train" rel="nofollow noreferrer">here</a> for specific content in a determined step.</li>
</ol>
<p>If you search for step you will find different classes:</p>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/CheckpointSaverHook" rel="nofollow noreferrer">CheckpointSaverHook</a>: Saves checkpoints every N steps or seconds.</li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/LoggingTensorHook" rel="nofollow noreferrer">LoggingTensorHook</a>: Prints the given tensors every N local steps, every N seconds, or at end.</li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/ProfilerHook" rel="nofollow noreferrer">ProfilerHook</a>: Captures CPU/GPU profiling information every N steps or seconds.</li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/train/SummarySaverHook" rel="nofollow noreferrer">SummarySaverHook</a>: Saves summaries every N steps.</li>
<li>Etc. (there are more, just check what can be useful for you). </li>
</ul>
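<p>As a concrete sketch of evaluating a single iteration (assuming <code>get_train()</code> returns the <code>(features, labels)</code> tensors from <code>make_one_shot_iterator().get_next()</code> as in the notebook), run the tensors inside a session and print the result rather than the tensor handles:</p>
<pre><code>features, labels = get_train()
with tf.Session() as sess:
    f, l = sess.run([features, labels])  # pulls exactly one batch through the CSV pipeline
    print(f)  # dict of feature arrays
    print(l)  # label array
</code></pre>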
|
tensorflow|google-cloud-datalab
| 0
|
2,928
| 51,638,613
|
How do I multiply a pandas column with a part of a multi index dataframe
|
<p>I have a data frame with a multi index and one column.</p>
<p>Index fields are <code>type</code> and <code>amount</code>, the column is called <code>count</code></p>
<p>I would like to add a column that multiplies <code>amount</code> and <code>count</code></p>
<pre><code>df2 = df.groupby(['type','amount']).count().copy()
# I then dropped all columns but one and renamed it to "count"
df2['total_amount'] = df2['count'].multiply(df2['amount'], axis='index')
</code></pre>
<p>doesn't work. I get a key error on <code>amount</code>.</p>
<p>How do I access a part of the multi index to use it in calculations?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for <code>Series</code> with same size as original <code>df</code> with aggregated values, so possible <code>multiple</code>:</p>
<pre><code>count = df.groupby(['type','amount'])['type'].transform('count')
df['total_amount'] = df['amount'].multiply(count, axis='index')
print (df)
A amount C D E type total_amount
0 a 4 7 1 5 a 8
1 b 5 8 3 3 a 5
2 c 4 9 5 6 a 8
3 d 5 4 7 9 b 10
4 e 5 2 1 2 b 10
5 f 4 3 0 4 b 4
</code></pre>
<p>Or:</p>
<pre><code>df = pd.DataFrame({'A':list('abcdef'),
'amount':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],
'type':list('aaabbb')})
print (df)
A amount C D E type
0 a 4 7 1 5 a
1 b 5 8 3 3 a
2 c 4 9 5 6 a
3 d 5 4 7 9 b
4 e 5 2 1 2 b
5 f 4 3 0 4 b
df2 = df.groupby(['type','amount'])['type'].count().to_frame('count')
df2['total_amount'] = df2['count'].mul(df2.index.get_level_values('amount'))
print (df2)
count total_amount
type amount
a 4 2 8
5 1 5
b 4 1 4
5 2 10
</code></pre>
|
python|python-3.x|pandas|multiplication
| 1
|
2,929
| 64,463,810
|
Convert from Keras to Pytorch - conv2d
|
<p>I am trying to convert the following Keras code into PyTorch.</p>
<pre><code> tf.keras.Sequential([
Conv2D(128, 1, activation=tf.nn.relu),
Conv2D(self.channel_n, 1, activation=None),
])
</code></pre>
<p>When creating the model summary with self.channels=16 i get the following summary.</p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (1, 3, 3, 128) 6272
_________________________________________________________________
conv2d_1 (Conv2D) (1, 3, 3, 16) 2064
=================================================================
Total params: 8,336
Trainable params: 8,336
Non-trainable params: 0
</code></pre>
<p>How would one convert?</p>
<p>I have attempted it as such:</p>
<pre><code>import torch
from torch import nn
class CellCA(nn.Module):
def __init__(self, channels, dim=128):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(in_channels=channels,out_channels=dim, kernel_size=1),
nn.ReLU(),
nn.Conv2d(in_channels=dim, out_channels=channels, kernel_size=1),
)
def forward(self, x):
return self.net(x)
</code></pre>
<p>However, I get 4240 params</p>
|
<p>The attempt above is correct once the channel arguments are configured correctly. The Keras summary implies 48 input channels (48*128 + 128 = 6272 parameters for the first convolution) and 16 output channels (128*16 + 16 = 2064 for the second), for 8,336 parameters in total.</p>
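<p>For reference, a sketch with the input and output channels split into separate arguments (the 48 is inferred from the parameter counts and is an assumption about your data):</p>
<pre><code>import torch
from torch import nn

class CellCA(nn.Module):
    def __init__(self, in_channels=48, out_channels=16, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, dim, kernel_size=1),   # 48*128 + 128 = 6272 params
            nn.ReLU(),
            nn.Conv2d(dim, out_channels, kernel_size=1),  # 128*16 + 16 = 2064 params
        )

    def forward(self, x):
        return self.net(x)

model = CellCA()
print(sum(p.numel() for p in model.parameters()))  # 8336, matching the Keras summary
</code></pre>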
|
keras|pytorch
| 0
|
2,930
| 64,393,002
|
Find rows which have one column's value as substring in another column along with other OR conditions in pandas
|
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'A':[1,2,3,4], 'B':['abc', 'def', 'pqr', 'xyz'], 'C':['a', 'h', 'm', 'z'], 'D':['g', 'h', 'i', 'j']})
</code></pre>
<p>I want to print rows where C is a substring of B OR C is equal to D.
Something like:</p>
<pre><code>result_df = mydf[(mydf['C']==mydf['D']) | (mydf.B.str.contains(mydf.C, na = False))]
</code></pre>
<p>Any help is appreciated!</p>
|
<p>You can do this by going through columns B and C together, combining the substring test with the equality check:</p>
<pre><code>print (df.loc[[y in x for x, y in zip(df["B"], df["C"])]|df["C"].eq(df["D"])])
A B C D
0 1 abc a g
1 2 def h h
3 4 xyz z j
</code></pre>
|
python|pandas
| 1
|
2,931
| 64,565,717
|
Appending pandas series to the left of index zero
|
<p>I'm trying to select portions of a pandas data series <code>yf</code> according to a left limit <code>a0</code> and a right limit <code>b0</code>.<br/></p>
<p>If the left limit is negative, I want to pad the difference with zeros so the resulting series would have the desired length, like this:</p>
<pre><code>if a0<0: ycr = pd.Series([0]*(abs(a0))).append(yf[:b0])
</code></pre>
<p>but this is returning:</p>
<pre><code>Series([], Name: 1, dtype: float64)
</code></pre>
<p>and no more information is given.</p>
|
<p>I created the source <em>Series</em> as:</p>
<pre><code>lst = np.arange(10,20)
yf = pd.Series(lst + 5, index = lst)
</code></pre>
<p>so that it contains:</p>
<pre><code>10 15
11 16
12 17
13 18
14 19
15 20
16 21
17 22
18 23
19 24
dtype: int32
</code></pre>
<p>(the left column is the index, and the right - actual values).</p>
<p>Then, to create an output <em>Series</em> composed of 3 zeroes and
then 5 initial elements of <em>yf</em> I ran:</p>
<pre><code>a0 = -3; b0 = 5
ycr = pd.Series([0]*(abs(a0))).append(yf[:b0])
</code></pre>
<p>and got:</p>
<pre><code>0 0
1 0
2 0
10 15
11 16
12 17
13 18
14 19
dtype: int64
</code></pre>
<p>Then I performed another test, on the source <em>Series</em> created
with the default index (consecutive integers from <em>0</em>):</p>
<pre><code>yf = pd.Series(lst + 5)
</code></pre>
<p>This time the result is:</p>
<pre><code>0 0
1 0
2 0
0 15
1 16
2 17
3 18
4 19
dtype: int64
</code></pre>
<p>(the only difference is in the index column, as I expected).</p>
<p>So, as you can see, your code works as expected.
Probably there is something wrong with your source data.</p>
|
python|pandas|append|series|pad
| 1
|
2,932
| 64,264,707
|
character vectorization
|
<p>I am trying to follow the Tensorflow Beginner Example to load text data by using "text_dataset_from_directory" and tokenize those data with "TextVectorization". (<a href="https://www.tensorflow.org/tutorials/keras/text_classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/text_classification</a>)</p>
<p>Is there any easy way to do character vectorization?</p>
|
<p>The easiest way is to keep two versions of each text file: a character-level copy, where a space is inserted between every character so that each character becomes its own token, and the original word-level copy. The whitespace-splitting <code>TextVectorization</code> then works unchanged on the character version.</p>
<pre><code>Example:
Char Version: I g o t o s c h o o l b y b u s
Word Version: I go to school by bus
</code></pre>
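<p>Alternatively, a minimal sketch that vectorizes at the character level directly, assuming a TF 2.x version whose <code>TextVectorization</code> accepts a callable <code>split</code> (in older releases it lives under <code>tf.keras.layers.experimental.preprocessing</code>):</p>
<pre><code>import tensorflow as tf

# split each string into unicode characters instead of whitespace tokens
char_vectorizer = tf.keras.layers.TextVectorization(
    split=lambda text: tf.strings.unicode_split(text, "UTF-8"),
    output_mode="int",
)
texts = tf.constant(["I go to school by bus"])
char_vectorizer.adapt(texts)
print(char_vectorizer(texts))
</code></pre>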
|
python|tensorflow|keras|tokenize
| 0
|
2,933
| 64,381,552
|
Python - Calling a Function Inside a For Loop - Changing Input Argument Without Overwriting It
|
<p>Novice Python user here, really stumped on this one. I have a 3x3 array that stores coordinates in xyz format, where the rows are the number of atoms and columns correspond to x,y and z. For every element that is not in the z direction I wish to add some scalar dr to it. Ultimately I would like to generate a dictionary of 6 geometries, where in each instance one of the elements x0,y0 x0,y1, x1,y0 etc. have the scalar added. For now I am just trying to write a function to do this and print out the geometries at each iteration.</p>
<p>This is a simpler version of the function I have written. Here I loop over the rows and columns, call a the function with the arguments of reference geometry (geom), and the two indexes for the rows and columns. For each X and Y coordinate the function then adds dr to geometry and returns its value.</p>
<pre><code>import numpy as np
dr = 0.1
principle_axes = 'Z'
def displace(coords, row, col):
if principle_axes == 'Z':
if col != 2:
new_coords = coords
new_coords[row, col] = new_coords[row, col] + dr
return new_coords
geom = np.array([[0, 0, 0.1435], [0, 0, 2.992], [0, 0, -2.8993]])
[nR, nC] = np.shape(geom)
if nR == 3 and nC == 3:
import pdb; pdb.set_trace() # XXX BREAKPOINT
for i in range(nR):
for j in range(nC):
print(geom)
displaced_geom = displace(geom, i, j)
print(displaced_geom)
</code></pre>
<p>With each iteration of the loop the function takes the returned geometry value of the last iteration, even though the argument (geom) that is called does not get reassigned during the loop. Which gives me this example output for geom...</p>
<pre><code>[[ 0. 0. 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
[[ 0.1 0. 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
[[ 0.1 0.1 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
</code></pre>
<p>The output that results from printing displaced_geom is the same. I wish the resulting output to be:</p>
<pre><code>[[ 0. 0. 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
[[ 0.1 0. 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
[[ 0. 0.1 0.1435]
[ 0. 0. 2.992 ]
[ 0. 0. -2.8993]]
[[ 0. 0. 0.1435]
[ 0.1 0. 2.992 ]
[ 0. 0. -2.8993]]
</code></pre>
<p>I can then figure out how to store the result from each iteration in a dictionary and use that to do stuff later on in my code. FYI I am running Python3.5.2 on Ubuntu 16.04.6 LTS. If anyone could help me understand where I have gone wrong and point me in the right direction that would be great.</p>
|
<pre><code>def displace(coords, row, col):
if principle_axes == 'Z':
if col != 2:
new_coords = coords
new_coords[row, col] = new_coords[row, col] + dr
return new_coords
</code></pre>
<p><code>new_coords = coords</code> only binds a second name to the same NumPy array; it does not copy the data, so mutating <code>new_coords</code> mutates <code>geom</code> as well.</p>
<p>You can instead do <code>new_coords = coords.copy()</code> so that you don't overwrite.</p>
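<p>A small demonstration of the difference:</p>
<pre><code>import numpy as np

a = np.zeros((2, 2))
alias = a          # second name for the same buffer
copy = a.copy()    # independent buffer
alias[0, 0] = 1.0
print(a[0, 0])     # 1.0 -- changed through the alias
print(copy[0, 0])  # 0.0 -- the copy is untouched
</code></pre>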
|
python|python-3.x|function|numpy|for-loop
| 0
|
2,934
| 47,745,373
|
Counting uneven bins in Panda
|
<pre><code>pd.DataFrame({'email':["a@gmail.com", "b@gmail.com", "c@gmail.com", "d@gmail.com", "e@gmail.com",],
'one':[88, 99, 11, 44, 33],
'two': [80, 80, 85, 80, 70],
'three': [50, 60, 70, 80, 20]})
</code></pre>
<p>Given this DataFrame, I would like to compute, for each column, one, two and three, how many values are in certain ranges.</p>
<p>The ranges are for example: 0-70, 71-80, 81-90, 91-100</p>
<p>So the result would be:</p>
<pre><code>out = pd.DataFrame({'colname': ["one", "two", "three"],
'b0to70': [3, 1, 4],
'b71to80': [0, 3, 1],
'b81to90': [1, 1, 0],
'b91to100': [1, 0, 0]})
</code></pre>
<p>What would be a nice idiomatic way to do this?</p>
|
<p>This would do it:</p>
<pre><code>out = pd.DataFrame()
for name in ['one','two','three']:
out[name] = pd.cut(df[name], bins=[0,70,80,90,100]).value_counts()
out.sort_index(inplace=True)
</code></pre>
<p>Returns:</p>
<pre><code> one two three
(0, 70] 3 1 4
(70, 80] 0 3 1
(80, 90] 1 1 0
(90, 100] 1 0 0
</code></pre>
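<p>An equivalent loop-free variant, applying the same binning to every column at once:</p>
<pre><code>out = df[['one', 'two', 'three']].apply(
    lambda s: pd.cut(s, bins=[0, 70, 80, 90, 100]).value_counts().sort_index()
)
</code></pre>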
|
python|pandas
| 7
|
2,935
| 47,692,054
|
Tensorflow - Linear Regression
|
<p>I code tensorflow program for linear regression. I am using Gradient Descent algorithm for optimizing(Minimising) loss function. But value of loss function is increasing while executing the program. My program and output is in follow.</p>
<pre><code> import tensorflow as tf
W = tf.Variable([.3],dtype=tf.float32)
b = tf.Variable([-.3],dtype=tf.float32)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
sess = tf.Session()
    init = tf.global_variables_initializer()
sess.run(init)
lm = W*X + b
delta = tf.square(lm-Y)
loss = tf.reduce_sum(delta)
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
for i in range(8):
print(sess.run([W, b]))
print("loss= %f" %sess.run(loss,{X:[10,20,30,40],Y:[1,2,3,4]}))
sess.run(train, {X: [10,20,30,40],Y: [1,2,3,4]})
sess.close()
</code></pre>
<p>Output for my program is </p>
<pre><code>2017-12-07 14:50:10.517685: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
[array([ 0.30000001], dtype=float32), array([-0.30000001],dtype=float32)]
loss= 108.359993
[array([-11.09999943], dtype=float32), array([-0.676], dtype=float32)]
loss= 377836.000000
[array([ 662.25195312], dtype=float32), array([ 21.77807617], dtype=float32)]
loss= 1318221568.000000
[array([-39110.421875], dtype=float32), array([-1304.26794434], dtype=float32)]
loss= 4599107289088.000000
[array([ 2310129.25], dtype=float32), array([ 77021.109375], dtype=float32)]
loss= 16045701465112576.000000
[array([ -1.36451664e+08], dtype=float32), array([-4549399.], dtype=float32)]
loss= 55981405829796462592.000000
[array([ 8.05974733e+09], dtype=float32), array([ 2.68717856e+08], dtype=float32)]
loss= 195312036582209632600064.000000
</code></pre>
<p>Please provide me a answer why value of loss is increasing instead of decreasing.</p>
|
<p>Did you try changing the learning rate? Using a lower learning rate (~1e-4) and more iterations should work. </p>
<p>More justification as to why a lower learning rate might be required. Note that your loss function is </p>
<p>L = \sum (Wx+b-Y)^2</p>
<p>and dL/dW = \sum 2(Wx+b-Y)*x</p>
<p>and hessian d^2L/d^2W = \sum 2x*x = 2*(10^2 + 20^2 + 30^2 + 40^2) = 6000</p>
<p>Now, your loss is diverging because the learning rate (0.01) is larger than the inverse of the hessian, which is roughly 1/(2*3000), i.e. about 1.7e-4. So you should try and decrease the learning rate here. </p>
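<p>A sketch of the suggested change, keeping the rest of the script as-is:</p>
<pre><code>optimizer = tf.train.GradientDescentOptimizer(1e-4)  # below the ~1.7e-4 bound derived above
train = optimizer.minimize(loss)

for i in range(1000):  # more iterations to compensate for the smaller steps
    sess.run(train, {X: [10, 20, 30, 40], Y: [1, 2, 3, 4]})
</code></pre>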
<p>Note: I wasn't sure how to add math to StackOverflow answer so I had to add it this way. </p>
|
python|machine-learning|tensorflow|deep-learning|linear-regression
| 1
|
2,936
| 47,598,233
|
How to add/delete an index in a multi index dataframe Python
|
<p>I have a multi-index dataframe where I'd like to add a "three" index with 6 values, 2 for a, b and c in columns X and Y.</p>
<pre><code> import pandas as pd, numpy as np
np.arrays = [["one", "one", "one", "two", "two", "two"], ["a", "b", "c", "a", "b", "c"]]
df = pd.DataFrame(np.random.randn(6,2),
index = pd.MultiIndex.from_tuples(list(zip(*np.arrays))),
columns = ["X", "Y"])
df = round(abs(df), 3)
df
X Y
one a 1.521 0.048
b 1.595 1.783
c 0.286 1.042
two a 1.480 1.071
b 0.807 1.058
c 1.730 1.233
</code></pre>
<p>What I'd like is:</p>
<pre><code> X Y
one a 1.521 0.048
b 1.595 1.783
c 0.286 1.042
two a 1.480 1.071
b 0.807 1.058
c 1.730 1.233
three a 1.2 5.5
b 4.2 2.2
c 7.8 3.4
</code></pre>
<p>Also, how do I delete an index? I tried the following code, but it gave an AttributeError: <strong>delitem</strong></p>
<pre><code>del df.loc["one"]
</code></pre>
<p>Any help would be awesome.</p>
|
<p>One approach is to build the new rows with a matching <code>MultiIndex</code> and append them with <code>pd.concat</code>, i.e.</p>
<pre><code>idx = pd.MultiIndex.from_tuples(list(zip(['three']*3,list('abc'))))
new = pd.DataFrame(np.random.randn(3,2), index=idx, columns= df.columns)
new_df = pd.concat([df,new])
X Y
one a 0.270000 0.299000
b 0.644000 0.073000
c 1.224000 0.656000
two a 0.202000 0.097000
b 2.750000 0.373000
c 0.421000 0.939000
three a 1.392999 -0.870480
b -1.899386 -0.249068
c -0.609149 0.164459
</code></pre>
<p>And for deleting, use <code>drop</code> with <code>level=0</code>, i.e.</p>
<pre><code>new_df = new_df.drop('one',level=0)
</code></pre>
|
python|pandas|dataframe|multi-index
| 1
|
2,937
| 48,952,733
|
What is the purpose of numpy masked array in this function?
|
<p>My code</p>
<pre><code>def to_norm(self, x):
if isinstance(x, np.ma.MaskedArray):
data = x.filled()
mask = x.mask
else:
data = x
mask = None
</code></pre>
<p>As I understand, <code>isinstance</code> is checking the type, and the appropriate array elements are going to be filled. But what about masking: how does it work and why?</p>
|
<p><code>np.ma.MaskedArray</code> is a subclass of the regular numpy <code>ndarray</code>. You can read all about it in the docs.</p>
<p>This method apparently tries to handle the argument <code>x</code> in a consistent manner, regardless of whether it is an <code>ndarray</code> or a <code>MaskedArray</code>.</p>
<p>A masked array has a <code>data</code> attribute and <code>mask</code> attribute. <code>x.filled()</code> returns the <code>data</code> with the masked values filled with the <code>x.fill_value</code>. </p>
<p>A masked array is used when you don't want to use some of the values of an array, for example if they are <code>np.nan</code> or some integer equivalent. <code>x.filled</code> is one way of using that data with the 'bad values' replaced with something useful or innocuous.</p>
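<p>For instance:</p>
<pre><code>import numpy as np

x = np.ma.MaskedArray([1.0, 2.0, 3.0], mask=[False, True, False], fill_value=0.0)
print(x.filled())   # [1. 0. 3.] -- the masked entry is replaced by fill_value
print(x.mask)       # [False  True False]
</code></pre>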
<p>My description will make a lot more sense if you read the <code>MaskedArray</code> docs.</p>
|
python|numpy
| 3
|
2,938
| 48,994,761
|
How to append from multiple dictionaries in a List to another list with specific parts of the "inner" dictionary?
|
<p>I've got dictionaries in a list:</p>
<pre><code>fit_statstest = [{'activities-heart': [{'dateTime': '2018-02-01',
'value': {'customHeartRateZones': [],
'heartRateZones': [{'caloriesOut': 2119.9464,
'max': 96,
'min': 30,
'minutes': 1232,
'name': 'Out of Range'},
{'caloriesOut': 770.2719,
'max': 134,
'min': 96,
'minutes': 120,
'name': 'Fat Burn'},
{'caloriesOut': 0,
'max': 163,
'min': 134,
'minutes': 0,
'name': 'Cardio'},
{'caloriesOut': 0,
'max': 220,
'min': 163,
'minutes': 0,
'name': 'Peak'}],
'restingHeartRate': 64}}],
'activities-heart-intraday': {'dataset': [{'time': '00:00:00', 'value': 57},
{'time': '00:00:10', 'value': 56},
{'time': '00:00:20', 'value': 59},
{'time': '00:00:35', 'value': 59},
{'time': '02:54:10', 'value': 85},
{'time': '02:54:20', 'value': 71},
{'time': '02:54:30', 'value': 66},
...],'datasetInterval': 1,
'datasetType': 'second'}},
{'activities-heart': [{'dateTime': '2018-02-02',
'value': {'customHeartRateZones': [],
'heartRateZones': [{'caloriesOut': 2200.61802,
'max': 96,
'min': 30,
'minutes': 1273,
'name': 'Out of Range'},
{'caloriesOut': 891.9588,
'max': 134,
'min': 96,
'minutes': 133,
'name': 'Fat Burn'},
{'caloriesOut': 35.8266,
'max': 163,
'min': 134,
'minutes': 3,
'name': 'Cardio'},
{'caloriesOut': 0,
'max': 220,
'min': 163,
'minutes': 0,
'name': 'Peak'}],
'restingHeartRate': 67}}],
'activities-heart-intraday': {'dataset': [{'time': '00:00:10', 'value': 80},
{'time': '00:00:15', 'value': 79},
{'time': '00:00:20', 'value': 74},
{'time': '00:00:25', 'value': 72},
{'time': '03:04:10', 'value': 61},
{'time': '03:04:25', 'value': 61},
{'time': '03:04:40', 'value': 61},
...],
'datasetInterval': 1,
'datasetType': 'second'}}]
</code></pre>
<p>I'm trying to append the 'time': 'hh:mm:ss' and 'value': Int to a DataFrame.</p>
<p>This is how I did it for a single dictionary (which worked like a charm):</p>
<pre><code>time_list = []
val_list = []
for i in fit_statsHR['activities-heart-intraday']['dataset']:
val_list.append(i['value'])
time_list.append(i['time'])
</code></pre>
<p>And this is how I tried doing it for the multi-level dictionary list:</p>
<pre><code>time_test = []
val_test = []
for i in fit_statstest:
val_test.append(i['activities-heart-intraday']['dataset']['value'])
time_test.append(i['activities-heart-intraday']['dataset']['time'])
heartdftest = pd.DataFrame({'Heart Rate':val_test,'Time':time_test})
</code></pre>
<p>I get this error: list indices must be integers or slices, not str; and am not quite sure how to go about solving this problem.</p>
<p>I tried using the .copy() method but had no joy with that either.</p>
<p>UPDATE:
@Phydeaux: Cheers for this! I tried this:</p>
<pre><code>time_test = []
val_test = []
j = np.arange(0,len(fit_statstest))
for i in fit_statstest[j]['activities-heart-intraday']['dataset']:
val_test.append(i['value'])
time_test.append(i['time'])
</code></pre>
<p>I get this error now:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-184-f3e7484e1cfc> in <module>()
3 j = np.arange(0,len(fit_statstest))
4
----> 5 for i in fit_statstest[j]['activities-heart-intraday']['dataset']:
6 val_test.append(i['value'])
7 time_test.append(i['time'])
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>Only integer scalar arrays can be converted to a scalar index. Not sure if I'm on the right track though!</p>
|
<p>Here is one solution via a single list comprehension:</p>
<pre><code>import pandas as pd
time_values = [(d['time'], d['value']) for day in fit_statstest \
for d in day['activities-heart-intraday']['dataset']]
df = pd.DataFrame(time_values, columns=['time', 'value'])
</code></pre>
<p><strong>Result</strong></p>
<pre><code> time value
0 00:00:00 57
1 00:00:10 56
2 00:00:20 59
3 00:00:35 59
4 02:54:10 85
5 02:54:20 71
6 02:54:30 66
7 00:00:10 80
8 00:00:15 79
9 00:00:20 74
10 00:00:25 72
11 03:04:10 61
12 03:04:25 61
13 03:04:40 61
</code></pre>
|
python|list|pandas|dictionary|append
| 1
|
2,939
| 49,051,805
|
pandas fillna: How to fill only leading NaN from beginning of series until first value appears?
|
<p>I have several <code>pd.Series</code> that usually start with some NaN values until the first real value appears. I want to pad these leading NaNs with 0, but not any NaNs that appear later in the series.</p>
<pre><code>pd.Series([np.nan, np.nan, 4, 5, np.nan, 7])
</code></pre>
<p>should become</p>
<pre><code>ps.Series([0, 0, 4, 5, nan, 7])
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.first_valid_index.html" rel="nofollow noreferrer"><code>first_valid_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.loc.html" rel="nofollow noreferrer"><code>loc</code></a>:</p>
<pre><code>s.loc[:s.first_valid_index()] = 0
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow noreferrer"><code>mask</code></a> with <code>isnull</code> and forward filling <code>NaN</code>s:</p>
<pre><code>s = s.mask(s.ffill().isnull(), 0)
</code></pre>
<hr />
<pre><code>print (s)
0 0.0
1 0.0
2 4.0
3 5.0
4 NaN
5 7.0
dtype: float64
</code></pre>
<p>EDIT: For function per groups use:</p>
<pre><code>def func(x):
x['col1'] = x['col1'].mask(x['col1'].ffill().isnull(), 0)
return x
df = df.groupby('col').apply(func)
</code></pre>
|
python|pandas
| 5
|
2,940
| 49,140,425
|
How to insert values from numpy into sql database with given columns?
|
<p>I need to insert some columns into a table in my Mariadb. The table name is Customer and has 6 columns, A,B,C,D,E,F. The primary keys are in the first column, column B has an address, C,D, and E contain None values and F the zip code. </p>
<p>I have an pandas dataframe that follows similar format. I converted it to numpy array by doing the following:</p>
<pre><code>data = df.iloc[:,1:4].values
</code></pre>
<p>Hence data is a numpy array containing 3 columns, and I need this inserted into C, D and E. I tried: </p>
<pre><code>query = """
Insert Into Customer (C,D,E) VALUES (?,?,?)
"""
cur.executemany(query,data)
cur.commit()
</code></pre>
<p>But I get an error:</p>
<pre><code>The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
|
<p>I solved it. Although very slow...</p>
<pre><code>query = """
Alter Customer SET
C = %s
D = %s
E = %s
where A = %s
"""
for row in data:
cur.execute(query,args=(row[1],row[2],row[3],row[0])
con.commit()
</code></pre>
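<p>To speed this up, <code>executemany</code> also works once the NumPy rows are converted to plain tuples (the original error came from passing NumPy arrays directly):</p>
<pre><code>params = [(row[1], row[2], row[3], row[0]) for row in data.tolist()]
cur.executemany(query, params)
con.commit()
</code></pre>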
|
python-3.x|pandas|numpy|mysql-python
| 0
|
2,941
| 58,924,899
|
how to remove duplicated elements from a list without using set()?
|
<p>Let </p>
<pre><code>a = np.array([1, 1, 1,1,1,1])
b = np.array([2,2,2])
</code></pre>
<p>be two numpy arrays. Then let</p>
<pre><code>c = [a]+[b]+[b]
</code></pre>
<p>clearly, <code>c</code> has duplicated elements <code>b</code>. Now I wish to remove one array <code>b</code> from <code>c</code> so that <code>c</code> only contains one <code>a</code> and one <code>b</code></p>
<p>For removing duplicated elements in a list I usually use <code>set()</code>. However, this time, if I do </p>
<pre><code>set(c)
</code></pre>
<p>I receive an error like:</p>
<pre><code>TypeError: unhashable type: 'numpy.ndarray'
</code></pre>
<p>My understanding is that <code>numpy.ndarray</code> is not hashable. </p>
<p>The list <code>c</code> above is just an example, in fact my <code>c</code> could be very long. So, is there any good way to remove duplicated elements from a list of numpy.array?</p>
<p>Thx!</p>
<hr>
<p>edit: I would expect my return to be <code>c = [a]+[b]</code></p>
|
<p>Arrays are unhashable, but tuples are. So convert each array to a tuple, deduplicate, and convert back:</p>
<pre><code>seen = dict.fromkeys(tuple(arr.tolist()) for arr in c)
c = [np.array(t) for t in seen]
</code></pre>
<p>This leaves <code>c</code> as <code>[a] + [b]</code>, preserving the original order.</p>
|
python|list|numpy
| 1
|
2,942
| 58,819,435
|
How to split text of a dataframe column into multiple columns?
|
<p>I'm trying to retrieve a string from an Excel sheet, split it into words, then print it or write it back into a new string. But when I retrieve the data using pandas and try to split it, an error occurs saying the DataFrame doesn't support the split function. </p>
<p><strong>the excel sheet has this line in it:</strong> </p>
<p><a href="https://i.stack.imgur.com/ZAZDR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZAZDR.png" alt="enter image description here"></a></p>
<p><strong>I expect and output like this:</strong> </p>
<p><a href="https://i.stack.imgur.com/0rOAg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0rOAg.png" alt="enter image description here"></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy
import pandas as pd
df = pd.read_excel('eng.xlsx')
txt = df
x = txt.split()
print(x)
AttributeError: 'DataFrame' object has no attribute 'split'
</code></pre>
|
<p>That's because you are calling <code>split()</code> on a whole DataFrame; <code>split</code> is a string method, so it has to be applied to each cell.</p>
<pre><code>import pandas as pd
import numpy as np
def append_nan(x, max_len):
"""
Function to append NaN value into a list based on a max length
"""
if len(x) < max_len:
x += [np.nan]*(max_len - len(x))
return x
# I define here a dataframe for the example
#df = pd.DataFrame(['This is my first sentence', 'This is a second sentence with more words'])
df = pd.read_excel('your_file.xlsx', header=None)
col_names = df.columns.values.tolist()
df_output = df.copy()
# Split your strings
df_output[col_names[0]] = df[col_names[0]].apply(lambda x: x.split(' '))
# Get the maximum length of all yours sentences
max_len = max(map(len, df_output[col_names[0]]))
# Append NaN value to have the same number for all column
df_output[col_names[0]] = df_output[col_names[0]].apply(lambda x: append_nan(x, max_len))
# Create columns names and build your dataframe
column_names = ["word_"+str(d) for d in range(max_len)]
df_output = pd.DataFrame(list(df_output[col_names[0]]), columns=column_names)
# Then you can save it
df_output.to_excel('output.xlsx')
</code></pre>
|
python|excel|pandas|dataframe
| 3
|
2,943
| 58,962,701
|
What Network should beused for Gesture Detection
|
<p>I am trying to create a gesture detection application with Keras and python.</p>
<p>I have training and testing images like this one:</p>
<p><a href="https://imgur.com/1uUujOi" rel="nofollow noreferrer"><img src="https://i.imgur.com/1uUujOi.jpg" title="source: imgur.com" /></a>
<a href="https://imgur.com/jajU59t" rel="nofollow noreferrer"><img src="https://i.imgur.com/jajU59t.jpg" title="source: imgur.com" /></a>
<a href="https://imgur.com/1uUujOi" rel="nofollow noreferrer"><img src="https://i.imgur.com/1uUujOi.jpg" title="source: imgur.com" /></a></p>
<p>There are 3 different gestures so far with about 1000 training images each.</p>
<p>My script already runs without errors, but the accuracy is very low.</p>
<p>So I am going to add my whole code because I was advised to:</p>
<pre class="lang-py prettyprint-override"><code> #vgg16
import keras
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD
import cv2, numpy as np
    #own helpers
import os
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
class main:
def __init__(self):
self.paths = ['C:\\Users\\simon\\Dev\\opencsv\\tensor\\projekt\\zu heftig\\meiii\\ASL-Finger-Spelling-Recognition-master\\eigenes']
            #path(s) to the training images
self.x_train = []
self.y_train = []
            self.batchsize = 1 #settings
self.epochs = 1
            self.anzahlangesten = 3 #number of gestures
def load_training_images(self):
for path in self.paths:
for path, subdirs, files in os.walk(os.path.join(os.getcwd(), "eigenes")):
for filename in files:
fullpath = os.path.join(path, filename)
img = cv2.imread(fullpath)
mean_pixel = [103.939, 116.779, 123.68]
img = img.astype(np.float32, copy=False)
#for c in range(3):
# img[:, :, c] = img[:, :, c] - mean_pixel[c]
#img = img.transpose((2,0,1))
#img = np.expand_dims(img, axis=0)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
self.x_train.append(img)
self.y_train.append((os.path.dirname(fullpath)).split("\\")[-1])
#print(self.y_train[len(self.y_train) - 1])
self.X_train = np.array(self.x_train)
self.Y_train = np.array(self.y_train)
self.X_train = np.repeat(self.X_train[..., np.newaxis], 3, -1) #shape stuff
self.Y_train = to_categorical(self.Y_train)
self.X_train, self.X_test, self.Y_train, self.Y_test = train_test_split(self.X_train, self.Y_train, test_size=0.1, random_state=42)
print(self.Y_train.shape, self.Y_train)
def modelarchitektur(self):
self.model = Sequential()
self.model.add(ZeroPadding2D((1,1),input_shape=(215,240,3)))
self.model.add(Conv2D(64, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(64, (3, 3), activation='relu', padding="same"))
self.model.add(MaxPooling2D((2,2), strides=(2,2)))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
self.model.add(MaxPooling2D((2,2), strides=(2,2)))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(256, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(256, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(256, (3, 3), activation='relu', padding="same"))
self.model.add(MaxPooling2D((2,2), strides=(2,2)))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(MaxPooling2D((2,2), strides=(2,2)))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(ZeroPadding2D((1,1)))
self.model.add(Conv2D(512, (3, 3), activation='relu', padding="same"))
self.model.add(MaxPooling2D((2,2), strides=(2,2)))
self.model.add(Flatten())
self.model.add(Dense(4096, activation='relu'))
self.model.add(Dropout(0.5))
self.model.add(Dense(4096, activation='relu'))
self.model.add(Dropout(0.5))
self.model.add(Dense(self.anzahlangesten, activation='softmax'))
def createmodel(self):
self.X_train = self.X_train / 255.0
#train
#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#self.model.compile(optimizer=sgd, loss='categorical_crossentropy')
#self.model.fit(self.X_train, self.Y_train, batch_size=self.batchsize, epochs=self.epochs)
#self.model.save('projekt.model')
#load
self.model = keras.models.load_model('projekt.model')
def test_accuracy(self):
preds = self.model.predict(self.X_test)
preds = np.argmax(preds, axis=1)
self.Y_test = np.argmax(self.Y_test, axis=1)
print(accuracy_score(preds, self.Y_test, normalize=True))
print()
print()
print()
print(accuracy_score(preds, self.Y_test, normalize=False))
main = main()
main.load_training_images()
main.modelarchitektur()
main.createmodel()
main.test_accuracy()
</code></pre>
<p>The accuracy is about 0.37873 of 1.</p>
<p>How could I increase accuracy?
What should be changed? The architecture or something else?
Or should I remove the repeat of the grayscale image and change the input of the model again?</p>
|
<p>Deep learning always requires a lot of data; 60 images per class is a very small number. I would suggest you first increase the dataset. The simplest way is to augment the images you have: you can invert (flip) and rotate them to easily multiply the data. Best of luck and cheers.
Do the following to each image:</p>
<ul>
<li>Invert it and add it to the dataset</li>
<li>Rotate each image and save it in the dataset after each rotation</li>
<li>Rotate the inverted image and save it in the dataset after each rotation</li>
</ul>
<p>This way you can transform each image into 8 images and hence get a dataset of 480 images per class.</p>
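<p>A minimal sketch of this augmentation with Keras' <code>ImageDataGenerator</code> (the ranges are illustrative, and <code>X_train</code>/<code>Y_train</code> are assumed to be your training arrays):</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
)
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
                    steps_per_epoch=len(X_train) // 32,
                    epochs=10)
</code></pre>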
|
python|tensorflow|keras
| 0
|
2,944
| 58,901,422
|
Heat map from pandas DataFrame - 2D array
|
<p>I have a data visualisation-based question. I basically want to create a heatmap from a pandas DataFrame, where I have the x,y coordinates and the corresponding z value. The data can be created with the following code -</p>
<pre><code>data = ([[0.2,0.2,24],[0.2,0.6,8],[0.2,2.4,26],[0.28,0.2,28],[0.28,0.6,48],[0.28,2.4,55],[0.36,0.2,34],[0.36,0.6,46],[0.36,2.4,55]])
data=np.array(data)
df=pd.DataFrame(data,columns=['X','Y','Z'])
</code></pre>
<p>Please note that I have converted an array into a DataFrame just so that I can give an example of an array. My actual data set is quite large and I import into python as a DataFrame. After processing the DataFrame, I have it available as the format given above.</p>
<p>I have seen the other questions based on the same problem, but they do not seem to be working for my particular problem. Or maybe I am not applying them correctly. I want my results to be similar to what is given here <a href="https://plot.ly/python/v3/ipython-notebooks/cufflinks/#heatmaps" rel="nofollow noreferrer">https://plot.ly/python/v3/ipython-notebooks/cufflinks/#heatmaps</a></p>
<p>Any help would be welcome.</p>
<p>Thank you!</p>
|
<p>Found one way of doing this - </p>
<p>Using Seaborn.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns

data = ([[0.2,0.2,24],[0.2,0.6,8],[0.2,2.4,26],[0.28,0.2,28],[0.28,0.6,48],[0.28,2.4,55],[0.36,0.2,34],[0.36,0.6,46],[0.36,2.4,55]])
data = np.array(data)
df = pd.DataFrame(data, columns=['X','Y','Z'])
df = df.pivot('X','Y','Z')
display_df = sns.heatmap(df)
</code></pre>
<p>Returns the following image - </p>
<p><a href="https://i.stack.imgur.com/oQlDo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oQlDo.png" alt="Heatmap"></a></p>
<p>sorry for creating another question.</p>
<p>Also, thank you for the link to a related post.</p>
|
python|pandas|dataframe|heatmap
| 2
|
2,945
| 58,916,413
|
Return rows for customers only where values in a certain column are either x or y
|
<p>I have a list of customer emails, and the status of their account at different dates. </p>
<pre><code>df = pd.DataFrame({'email': pd.Series(['john@email.com', 'john@email.com', 'mary@email.com', 'mary@email.com', 'patrick@email.com', 'patrick@email.com', 'foo@email.com', 'foo@email.com'],dtype='object',index=pd.RangeIndex(start=0, stop=8, step=1)), 'date_created': pd.Series(['18/04/2018', '19/04/2018', '18/04/2018', '18/05/2018', '12/05/2019', '15/05/2019', '12/08/2019', '15/08/2019'],dtype='object',index=pd.RangeIndex(start=0, stop=8, step=1)), 'status': pd.Series(['Account Open ', 'Account Closed', 'Lead', 'Account Open ', 'Account Open ', 'Account Closed', 'Lead', 'Account Open '],dtype='object',index=pd.RangeIndex(start=0, stop=8, step=1))}, index=pd.RangeIndex(start=0, stop=8, step=1))
email date_created status
0 john@email.com 18/04/2018 Account Open
1 john@email.com 19/04/2018 Account Open
2 mary@email.com 18/04/2018 Lead
3 mary@email.com 18/05/2018 Account Open
4 patrick@email.com 12/05/2019 Account Open
5 patrick@email.com 15/05/2019 Account Closed
6 foo@email.com 12/08/2019 Lead
7 foo@email.com 15/08/2019 Account Open
</code></pre>
<p>I am interested in finding the time between when they were a <code>Lead</code> to when the status changed to <code>Account Open</code>. </p>
<p>Therefore I only want customers who have both these two statuses, in this case <code>foo@email.com</code> and <code>mary@email.com</code> above.</p>
<p>How can I extract the customers who have these two statuses only? I want to disregard everybody else. So my expected outcome would be:</p>
<pre><code> email date_created status
2 mary@email.com 18/04/2018 Lead
3 mary@email.com 18/05/2018 Account Open
6 foo@email.com 12/08/2019 Lead
7 foo@email.com 15/08/2019 Account Open
</code></pre>
<p>I tried:</p>
<pre><code>df[df['status'].str.contains('|'.join(['Lead','Account Open']),na=False)]
</code></pre>
<p>But I end up with <code>john@email.com</code> in the list because he has an <code>Account Open</code> status. He doesn't have a <code>Lead</code> record so I want to ignore him. </p>
<p>Any advice?</p>
<p><strong>edit</strong> Just to clarify, maybe I should have started like this.</p>
<p>I have a dataframe with fruit purchases.</p>
<pre><code> id date fruit
0 1 01/01/2019 apple
1 1 02/01/2019 banana
2 2 03/01/2019 orange
3 2 04/01/2019 pineapple
4 3 05/01/2019 tomoato
5 3 06/01/2019 lemon
6 4 07/01/2019 apple
7 4 08/01/2019 banana
8 5 09/01/2019 melon
9 5 10/01/2019 apple
</code></pre>
<p>I want to extract all the customer records who bought an apple and banana only. In this case <code>id 1</code> and <code>id 4</code>. I would like to extract their records like so:</p>
<pre><code> id date fruit
0 1 01/01/2019 apple
1 1 02/01/2019 banana
6 4 07/01/2019 apple
7 4 08/01/2019 banana
</code></pre>
<p>This will allow me to measure the time difference between purchases of apples and bananas. </p>
|
<p>The idea is to test the first occurrence of each email for <code>Lead</code> and the duplicated occurrence for <code>Account Open</code>, chain the conditions with <code>&</code> for AND and <code>|</code> for OR, and filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>m1 = df['status'].str.contains('Lead',na=False)
m2 = df['status'].str.contains('Account Open', na=False)
</code></pre>
<p>Or test by <code>==</code>:</p>
<pre><code>m1 = df['status'] == 'Lead'
m2 = df['status'].str.strip() == 'Account Open'
mask = df['email'].duplicated()
df = df[(~mask & m1) | (mask & m2)]
print (df)
email date_created status
2 mary@email.com 18/04/2018 Lead
3 mary@email.com 18/05/2018 Account Open
6 foo@email.com 12/08/2019 Lead
7 foo@email.com 15/08/2019 Account Open
</code></pre>
<p>If an email can appear in more than 2 rows and it is necessary to keep only groups with exactly 2 rows, add another mask:</p>
<pre><code>mask1 = df.groupby('email')['email'].transform('size').eq(2)
df = df[((~mask & m1) | (mask & m2) ) & mask1]
</code></pre>
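<p>For the fruit example in the edit, a sketch of the same idea with <code>groupby.transform</code> (assuming exactly the column names shown; the mask broadcasts a per-group boolean to every row):</p>
<pre><code>wanted = {'apple', 'banana'}
m = df.groupby('id')['fruit'].transform(lambda s: wanted.issubset(set(s)))
df[m & df['fruit'].isin(wanted)]
</code></pre>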
|
python|pandas
| 1
|
2,946
| 70,319,654
|
pandas Dataframe create new column
|
<p>I have this snippet of code working with a pandas DataFrame. I am trying to use the apply function to create a new column called STDEV_TV, but I keep running into the error below. All the columns I am working with are of type float.</p>
<pre><code>TypeError: ("'float' object is not iterable", 'occurred at index 0')
</code></pre>
<p>Can someone help me understand why i keep getting this error</p>
<pre><code>def sigma(df):
val = df.volume2Sum / df.volumeSum - df.vwap * df.vwap
return math.sqrt(max(val))
df['STDEV_TV'] = df.apply(sigma, axis=1)
</code></pre>
|
<p>With <code>axis=1</code>, <code>apply</code> passes each row to <code>sigma</code> as a Series, so <code>val</code> is a single float; calling <code>max(val)</code> then tries to iterate over that float, which raises the error. Drop <code>max</code> and guard against negative values before taking the square root:</p>
<pre><code>import pandas as pd
import numpy as np
import math
df = pd.DataFrame(np.random.randint(1, 10, (5, 3)),
columns=['volume2Sum', 'volumeSum', 'vwap'])
def sigma(df):
val = df.volume2Sum / df.volumeSum - df.vwap * df.vwap
return math.sqrt(val) if val >= 0 else val
df['STDEV_TV'] = df.apply(sigma, axis=1)
</code></pre>
<p>Output:</p>
<pre><code>>>> df
volume2Sum volumeSum vwap STDEV_TV
0 4 5 8 -63.200000
1 2 8 4 -15.750000
2 3 3 3 -8.000000
3 8 3 4 -13.333333
4 4 2 3 -7.000000
</code></pre>
|
python|pandas|dataframe
| 2
|
2,947
| 70,222,250
|
Pandas group by rolling window function on a timestamp field
|
<p>I want to add dates and days that are contained in a column after grouping by an ID column.</p>
<p>The following generates an example df:</p>
<pre><code>df = pd.DataFrame(
{
"ID":[1,1,1,1,2,2,2,3,3,3,3,3,3],
"Date":list(pd.date_range("2018-1-1", "2018-4-10", periods=4)) + list(pd.date_range("2018-6-6", "2018-7-30", periods=3)) + list(pd.date_range("2018-1-1", "2020-1-1", periods=6))
}
)
df['date_intervals'] = df.groupby('ID').Date.diff()
df['new_date_intermediate'] = df.date_intervals.mask(pd.isnull, df['Date'])
</code></pre>
<p>This results in this df:
<a href="https://i.stack.imgur.com/x3kxG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x3kxG.png" alt="enter image description here" /></a></p>
<p>Grouped by the ID field, I want a cumulative sum returning dates.</p>
<p>For example, for ID = 1, I want a vector of the first row + the second row, this would be 2018-01-01 + 33 days, followed by the result of that sum plus the third row which is adding another 33 days.</p>
|
<p>You can just do <code>cumsum</code></p>
<pre><code>df['new_date_intermediate'] = df.groupby('ID')['new_date_intermediate'].apply(lambda x :x.cumsum())
df
ID Date date_intervals new_date_intermediate
0 1 2018-01-01 NaT 2018-01-01 00:00:00
1 1 2018-02-03 33 days 2018-02-03 00:00:00
2 1 2018-03-08 33 days 2018-03-08 00:00:00
3 1 2018-04-10 33 days 2018-04-10 00:00:00
4 2 2018-06-06 NaT 2018-06-06 00:00:00
5 2 2018-07-03 27 days 2018-07-03 00:00:00
6 2 2018-07-30 27 days 2018-07-30 00:00:00
7 3 2018-01-01 NaT 2018-01-01 00:00:00
8 3 2018-05-27 146 days 2018-05-27 00:00:00
9 3 2018-10-20 146 days 2018-10-20 00:00:00
10 3 2019-03-15 146 days 2019-03-15 00:00:00
11 3 2019-08-08 146 days 2019-08-08 00:00:00
12 3 2020-01-01 146 days 2020-01-01 00:00:00
</code></pre>
|
python|pandas
| 1
|
2,948
| 70,344,616
|
KeyError: 337 when training a hugging face model using pytorch
|
<p>I am training a simple binary classification model using <code>Hugging Face</code> models with <code>PyTorch</code>.</p>
<p>Bert PyTorch HuggingFace.</p>
<p>Here is the code:</p>
<pre><code>import transformers
from transformers import TFAutoModel, AutoTokenizer
from tokenizers import Tokenizer, models, pre_tokenizers, decoders, processors
from transformers import AutoTokenizer
from transformers import AdamW
from transformers import get_linear_schedule_with_warmup
from transformers import BertTokenizerFast as BertTokenizer, BertModel, AdamW, get_linear_schedule_with_warmup,BertConfig
</code></pre>
<p>I am reading text data and classifying it as toxic or non-toxic. I have downloaded the model and saved it at a local path.</p>
<pre><code>BERT_MODEL_NAME = '/home/pch/conv-bert-base'
MODEL_PATHS = {'conv-bert-base': '/home/pch/conv-bert-base/'}
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL_NAME)
TRANSFORMERS = {"conv-bert-base": (BertModel, BertTokenizer, "conv-bert-base")}
class SEDataset(Dataset):
"""
Sexually Explicit dataset for the hate speech.
"""
def __init__(self, df,tokenizer: BertTokenizer, max_token_len: int = 512):
"""
Constructor
Arguments:
df {pandas dataframe} -- Dataframe where the data is.
"""
super().__init__()
self.df = df
self.tokenizer = tokenizer
self.max_token_len = max_token_len
try:
self.y = df['toxic'].values
except KeyError: # test data
self.y = np.zeros(len(df))
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
data_row = self.df[idx]
text_data = data_row['text']
encoding = tokenizer.encode_plus(
text_data,
add_special_tokens=True,
max_length=512,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors='pt',)
self.word_ids = encoding["input_ids"]
self.attention_mask=encoding["attention_mask"]
return self.word_ids[idx], torch.tensor(self.y[idx]), self.attention_mask[idx]
class Transformer(nn.Module):
def __init__(self, model, num_classes=1):
"""
Constructor
Arguments:
model {string} -- Transformer to build the model on. Expects "conv-bert-base".
num_classes {int} -- Number of classes (default: {1})
"""
super().__init__()
self.name = model
model_class, tokenizer_class, pretrained_weights = TRANSFORMERS[model]
bert_config = BertConfig.from_json_file(MODEL_PATHS[model] + 'config.json')
bert_config.output_hidden_states = True
self.transformer = BertModel(bert_config)
self.nb_features = self.transformer.pooler.dense.out_features
self.pooler = nn.Sequential(
nn.Linear(self.nb_features, self.nb_features),
nn.Tanh(),
)
self.logit = nn.Linear(self.nb_features, num_classes)
def forward(self, tokens):
"""
Usual torch forward function
Arguments:
tokens {torch tensor} -- Sentence tokens
Returns:
torch tensor -- Class logits
"""
_, _, hidden_states = self.transformer(
tokens, attention_mask=(tokens > 0).long()
)
hidden_states = hidden_states[-1][:, 0] # Use the representation of the first token of the last layer
ft = self.pooler(hidden_states)
return self.logit(ft)
def fit(model, train_dataset, val_dataset, epochs=1, batch_size=32, warmup_prop=0, lr=5e-5):
device = torch.device('cuda')
model.to(device)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
optimizer = AdamW(model.parameters(), lr=lr)
num_warmup_steps = int(warmup_prop * epochs * len(train_loader))
num_training_steps = epochs * len(train_loader)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)
loss_fct = nn.BCEWithLogitsLoss(reduction='mean').to(device)
for epoch in range(epochs):
model.train()
start_time = time.time()
optimizer.zero_grad()
avg_loss = 0
for step, (x, y_batch) in tqdm(enumerate(train_loader), total=len(train_loader)):
y_pred = model(x.to(device))
loss = loss_fct(y_pred.view(-1).float(), y_batch.float().to(device))
loss.backward()
avg_loss += loss.item() / len(train_loader)
xm.optimizer_step(optimizer, barrier=True)
scheduler.step()
model.zero_grad()
optimizer.zero_grad()
model.eval()
preds = []
truths = []
avg_val_loss = 0.
with torch.no_grad():
for x, y_batch in val_loader:
y_pred = model(x.to(device))
loss = loss_fct(y_pred.detach().view(-1).float(), y_batch.float().to(device))
avg_val_loss += loss.item() / len(val_loader)
probs = torch.sigmoid(y_pred).detach().cpu().numpy()
preds += list(probs.flatten())
truths += list(y_batch.numpy().flatten())
score = roc_auc_score(truths, preds)
dt = time.time() - start_time
lr = scheduler.get_last_lr()[0]
print(f'Epoch {epoch + 1}/{epochs} \t lr={lr:.1e} \t t={dt:.0f}s \t loss={avg_loss:.4f} \t val_loss={avg_val_loss:.4f} \t val_auc={score:.4f}')
model = Transformer("conv-bert-base")
epochs = 1 # 1 epoch seems to be enough
batch_size = 32
warmup_prop = 0.1
lr = 2e-5 # Important parameter to tweak
train_dataset = SEDataset(df3,tokenizer)
val_dataset = SEDataset(val_data,tokenizer)
fit(model, train_dataset, val_dataset, epochs=epochs, batch_size=batch_size, warmup_prop=warmup_prop, lr=lr)
</code></pre>
<p>I have attached all the code above.</p>
<p><strong>Error:</strong></p>
<pre><code>0%|          | 0/29 [00:00<?, ?it/s]
KeyError: 337
</code></pre>
|
<p>For me this error was happening when passing a pandas DataFrame with values missing in the index, e.g. 0, 1, 2, 4. Changing the index to 0, 1, 2, 3 fixed the problem: <code>__getitem__</code> receives positional indices from the loader, while plain bracket/label lookup on a DataFrame raises a <code>KeyError</code> for any position that is not a matching label.</p>
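<p>A minimal sketch of the two usual fixes, both hedged against the dataset code in the question: either make the index contiguous before constructing the dataset, or index positionally inside <code>__getitem__</code>:</p>
<pre><code># Option 1: reset the index before building SEDataset
df3 = df3.reset_index(drop=True)

# Option 2: use positional indexing inside SEDataset.__getitem__
def __getitem__(self, idx):
    data_row = self.df.iloc[idx]  # .iloc avoids KeyError on an index with gaps
    ...
</code></pre>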
|
python-3.x|nlp|text-classification|huggingface-transformers
| 1
|
2,949
| 56,307,591
|
Why is the result a recurring number when it should be an integer?
|
<p>Hi, so I'm writing some code for class, and this is for linear regression.<br>
The values calculated by hand are a=1.7 and b=1.6 for the data you can see in the code.</p>
<p>I've tried separating different parts of the formula into different variables but the answer remains the same (1.6999999999999993).</p>
<pre><code>import numpy as np
x=np.array([2,3,5,6])
y=np.array([4.5,7.2,9.2,11.5])
b=(np.sum((y-np.mean(y))*x))/(np.sum((x-np.mean(x))*x))
a=np.mean(y)-(b*(np.mean(x)))
print(a)
print(b)
</code></pre>
<p>The expected result is a=1.7 and b=1.6, but the output is a=1.6999999999999993.</p>
|
<p>It's because you are using floating-point numbers; binary floating-point math works like this. In most programming languages, it is based on the <a href="https://en.wikipedia.org/wiki/IEEE_754#Basic_formats" rel="nofollow noreferrer">IEEE 754 standard</a>.</p>
<p>see <a href="https://stackoverflow.com/questions/588004/is-floating-point-math-broken">Is floating point math broken?</a></p>
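<p>If you just need the printed values to match the hand calculation, a quick sketch of rounding for display (one decimal place is an assumption based on the expected a=1.7):</p>
<pre><code>print(round(a, 1))  # 1.7
print(round(b, 1))  # 1.6
</code></pre>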
|
python|numpy
| 0
|
2,950
| 55,753,303
|
Safely downcast a float to the smallest possible integer type
|
<p>Pandas and numpy have a variety of ways to change numerical types but I couldn't find an automated way to safely convert a float to the smallest possible integer, given that no numerical info can be lost.</p>
<p>For example:</p>
<pre><code>1.0 (float32) -> 1 (int32) # OK, 1 == 1.0
1.0 (float32) -> 1 (int8) # also OK, just more compact storage
1.4 (float32) -> 1 (int8)   # not OK, 1 != 1.4
</code></pre>
<p>Here's some sample data:</p>
<pre><code>df=pd.DataFrame({ 'i':[1.,333,555_666_777_888],
'j':[1.,333,555_666],
'x':np.random.randn(3) })
</code></pre>
<p>Looks like this (dtypes are all float64):</p>
<pre><code> i j x
0 1.000000e+00 1.0 0.852965
1 3.330000e+02 333.0 -0.955869
2 5.556668e+11 555666.0 -0.023493
</code></pre>
<p>Desired conversion</p>
<pre><code> i j x
0 1 1 -2.304234
1 333 333 -0.652469
2 555666777888 555666 -1.218302
</code></pre>
<p>with dtypes:</p>
<pre><code>i int64
j int32
x float64
</code></pre>
<p>I have a simple approach that I'll offer as an answer, but perhaps there are better ways or perhaps this is already part of pandas or numpy and I wasn't aware of it.</p>
<p>Also I'm punting on missing values (NaNs) in the answer, as I don't have the latest version of pandas (0.24.x) which allows integers to be NaN, so maybe someone would like to address that in an answer.</p>
<p>Note that there are few ways to convert floats to ints mentioned in this question: <a href="https://stackoverflow.com/questions/21291259/convert-floats-to-ints-in-pandas/55753621#55753621">Convert floats to ints in Pandas?</a>, but none of them addresses the potential loss of numerical precision in converting something like 2.1 to 2.</p>
|
<p>Here's a simple function:</p>
<pre><code>def float_to_int(s):
    # only convert when every value survives the round trip to int64 exactly
    if (s.astype(np.int64) == s).all():
        # let pandas pick the smallest integer dtype that fits
        return pd.to_numeric(s, downcast='integer')
    else:
        return s

df.apply(float_to_int)
</code></pre>
<p>Output:</p>
<pre><code> i j x
0 1 1 -2.304234
1 333 333 -0.652469
2 555666777888 555666 -1.218302
</code></pre>
<p>With dtypes:</p>
<pre><code>i int64
j int32
x float64
</code></pre>
<p>Explanation:</p>
<p>I couldn't find an automated safe way to cast from float to integer, so I just check whether converting a column to an integer changes the values. If not, I allow the cast from float to int. Note that <code>int</code> can default to <code>np.int32</code> on some platforms, so using <code>np.int64</code> gives this a better chance of casting from float to int.</p>
<p>After that, pandas does all the work with <code>to_numeric()</code> as it will automatically convert to the smallest possible integer type.</p>
|
python|pandas|numpy
| 0
|
2,951
| 55,672,605
|
Reading the json file correctly
|
<p>The following statement reads the JSON file, but it does not split the columns correctly.</p>
<p>df = pd.read_json('<a href="https://s3.amazonaws.com/todel162/config1.json" rel="nofollow noreferrer">https://s3.amazonaws.com/todel162/config1.json</a>', orient='index')</p>
<p>Is there a way to read this JSON into a pandas DataFrame with the columns split correctly?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.json.json_normalize.html" rel="nofollow noreferrer"><code>json_normalize</code></a>:</p>
<pre><code>import json
from pandas.io.json import json_normalize
with open('config1.json') as f:
data = json.load(f)
df = json_normalize(data, 'configurationItems', ['fileVersion'])
print (df)
ARN awsAccountId awsRegion \
0 arn:aws:cloudtrail:us-east-1:513469704633:trai... 513469704633 us-east-1
1 arn:aws:cloudtrail:us-east-1:513469704633:trai... 513469704633 us-east-1
configurationItemCaptureTime configurationItemStatus \
0 2018-07-27T11:52:53.795Z ResourceDeleted
1 2018-07-27T11:52:53.791Z ResourceDeleted
configurationItemVersion configurationStateId configurationStateMd5Hash \
0 1.3 1532692373795
1 1.3 1532692373791
relatedEvents relationships resourceId \
0 [] [] AWSMacieTrail-DO-NOT-EDIT
1 [] [] test01
resourceType supplementaryConfiguration tags fileVersion
0 AWS::CloudTrail::Trail {} {} 1.0
1 AWS::CloudTrail::Trail {} {} 1.0
</code></pre>
|
pandas
| 1
|
2,952
| 55,657,732
|
How to see tensorflow build configuration?
|
<p>I am trying to build tensorflow from source on a remote server (with no superuser privileges) because I got this error when I simply installed with pip:</p>
<pre><code>Loaded runtime CuDNN library: 7.1.2 but source was compiled with: 7.4.2. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
</code></pre>
<p>I completed all the steps listed <a href="https://www.tensorflow.org/install/source" rel="nofollow noreferrer">here</a> successfully, but I still get the same error as above, despite setting CudNN version as 7.1.2 before building.</p>
<p>Is there any way I can see the configurations to verify that they have been set properly?</p>
|
<p>A file named <code>.tf_configure.bazelrc</code> is generated after running <code>./configure</code>; you can inspect that file to verify the build configuration.</p>
|
tensorflow
| 0
|
2,953
| 55,798,536
|
predicting using pre-trained model becomes slower and slower
|
<p>I'm using a very naive way to make predictions with a pre-trained model in Keras, but it becomes much slower over time. Does anyone know why? I'm very new to TensorFlow.</p>
<pre class="lang-py prettyprint-override"><code>count = 0
first = True
for nm in image_names:
img = image.load_img(TEST_PATH + nm, target_size=(299, 299))
img = image.img_to_array(img)
image_batch = np.expand_dims(img, axis=0)
processed_image = inception_v3.preprocess_input(image_batch.copy())
prob = inception_model.predict(processed_image)
df1 = pd.DataFrame({'photo_id': [nm]})
df2 = pd.DataFrame(prob, columns=['feat' + str(j + 1) for j in range(prob.shape[1])])
df = pd.concat([df1, df2], axis=1)
header = first
mode = 'w' if first else 'a'
df.to_csv(outfile, index=False, header=header, mode=mode)
first = False
count += 1
if count % 100 == 0:
print('%d processed' % count)
</code></pre>
|
<p>I doubt that TF itself is slowing down. However, there is another Stack Overflow question showing that <code>to_csv</code> slows down on append.</p>
<p><a href="https://stackoverflow.com/questions/29271257/performance-python-pandas-dataframe-to-csv-append-becomes-gradually-slower">Performance: Python pandas DataFrame.to_csv append becomes gradually slower</a></p>
<p>If the images come batched you may also benefit from making larger batches rather than predicting one image at a time.</p>
<p>You can also explore tf.data for better data pipelining.</p>
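<p>A minimal sketch combining both suggestions — batch the predictions and write the CSV once at the end (it reuses <code>image_names</code>, <code>TEST_PATH</code>, <code>inception_model</code> and <code>outfile</code> from the question; the batch size is an assumption):</p>
<pre><code>import numpy as np
import pandas as pd

batch_size = 32
rows = []
for start in range(0, len(image_names), batch_size):
    names = image_names[start:start + batch_size]
    batch = np.stack([image.img_to_array(image.load_img(TEST_PATH + nm, target_size=(299, 299)))
                      for nm in names])
    probs = inception_model.predict(inception_v3.preprocess_input(batch))
    for nm, prob in zip(names, probs):
        rows.append([nm] + prob.tolist())

cols = ['photo_id'] + ['feat' + str(j + 1) for j in range(len(rows[0]) - 1)]
pd.DataFrame(rows, columns=cols).to_csv(outfile, index=False)
</code></pre>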
|
tensorflow
| 0
|
2,954
| 55,689,915
|
Python: how to groupby a given percentile?
|
<p>I have a dataframe <code>df</code></p>
<pre><code>df
User City Job Age
0 A x Unemployed 33
1 B x Student 18
2 C x Unemployed 27
3 D y Data Scientist 28
4 E y Unemployed 45
5 F y Student 18
</code></pre>
<p>I want to <code>groupby</code> the <code>City</code> and do some stat. If I have to compute the mean, I can do the following:</p>
<pre><code>tmp = df.groupby(['City']).mean()
</code></pre>
<p>I would like to do same by a specific quantile. Is it possible?</p>
|
<pre><code>def q1(x):
return x.quantile(0.25)
def q2(x):
return x.quantile(0.75)
fc = {'Age': [q1,q2]}
temp = df.groupby('City').agg(fc)
temp
Age
q1 q2
City
x 22.5 30.0
y 23.0 36.5
</code></pre>
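<p>As a side note, <code>GroupBy.quantile</code> also accepts a list of quantiles, so the same numbers come from a one-liner (the result carries a MultiIndex instead of two columns):</p>
<pre><code>df.groupby('City')['Age'].quantile([0.25, 0.75])
</code></pre>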
|
python|pandas|group-by
| 4
|
2,955
| 64,690,224
|
how can I add different size of the values into a pandas data frame at a time
|
<p>I need to fill a data frame gradually. In each step, I have data like this:</p>
<pre><code>pubid = 1
keywords = [2, 2,3]
</code></pre>
<p>Knowing that the lengths of the value lists are not equal, how can I form a data frame like this:</p>
<pre><code>pubid keyword
1 2
1 2
1 3
</code></pre>
<p>So next time when the new data is coming and is like this:</p>
<pre><code>pubid = 6
keywords = [10, 11]
</code></pre>
<p>my data frame becomes:</p>
<pre><code>pubid keyword
1 2
1 2
1 3
6 10
6 11
</code></pre>
<p>I tried to create a temp dataframe at the beginning and add the values like this:</p>
<pre><code>data = {'pubid': 1, 'keywords':[1]}
df = pd.DataFrame(data)
pubid = 3
keyword = [2, 3]
df['pubid'] = 3
df["keywords"] = df["pubid"].apply(lambda x: i for i in keyword)
</code></pre>
<p>It does not work this way, but I don't know how to solve it.</p>
|
<pre><code>pubid = 1
keywords = [2, 2,3]
df = pd.DataFrame({'pubid': pubid, 'keywords': keywords})
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> pubid keywords
0 1 2
1 1 2
2 1 3
</code></pre>
<p>Then you can use <code>pd.concat</code> to add data to existing DataFrame:</p>
<pre><code>pubid = 6
keywords = [10, 11]
df = pd.concat([df, pd.DataFrame({'pubid': pubid, 'keywords': keywords})]).reset_index(drop=True)
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> pubid keywords
0 1 2
1 1 2
2 1 3
3 6 10
4 6 11
</code></pre>
|
python|pandas|dataframe
| 1
|
2,956
| 64,744,358
|
Grouping by classes
|
<p>I would like to see how many times a url is labelled with 1 and how many times it is labelled with 0.
My dataset is</p>
<pre><code> Label URL
0 0.0 www.nytimes.com
1 0.0 newatlas.com
2 1.0 www.facebook.com
3 1.0 www.facebook.com
4 0.0 issuu.com
... ... ...
3572 0.0 www.businessinsider.com
3573 0.0 finance.yahoo.com
3574 0.0 www.cnbc.com
3575 0.0 www.ndtv.com
3576 0.0 www.baystatehealth.org
</code></pre>
<p>I tried <code>df.groupby("URL")["Label"].count()</code> but it does not return the expected output:</p>
<pre><code> Label URL Freq
0 0.0 www.nytimes.com 1
0 1.0 www.nytimes.com 0
1 0.0 newatlas.com 1
1 1.0 newatlas.com 0
2 1.0 www.facebook.com 2
2 0.0 www.facebook.com 0
4 0.0 issuu.com 1
4 1.0 issuu.com 0
... ... ...
</code></pre>
<p>What fields should I use in the group by to get something like the above df (expected output)?</p>
|
<p>Since pandas 1.1 you can use <code>DataFrame.value_counts</code> directly:</p>
<pre><code>df.value_counts(["URL","Label"])
</code></pre>
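<p>To reshape that into a frame like the expected output, reset the index and name the count column:</p>
<pre><code>df.value_counts(["URL", "Label"]).reset_index(name="Freq")
</code></pre>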
|
python|pandas
| 1
|
2,957
| 39,967,460
|
Using Pandas to Create DateOffset of Paydays
|
<p>I'm trying to use Pandas to create a time index in Python with entries corresponding to a recurring payday. Specifically, I'd like to have the index correspond to the first and third Friday of the month. Can somebody please give a code snippet demonstrating this?</p>
<p>Something like:</p>
<pre><code>import pandas as pd
idx = pd.date_range("2016-10-10", periods=26, freq=<offset here?>)
</code></pre>
|
<p>try this:</p>
<pre><code>In [6]: pd.date_range("2016-10-10", periods=26, freq='WOM-1FRI').union(pd.date_range("2016-10-10", periods=26, freq='WOM-3FRI'))
Out[6]:
DatetimeIndex(['2016-10-21', '2016-11-04', '2016-11-18', '2016-12-02', '2016-12-16', '2017-01-06', '2017-01-20', '2017-02-03', '2017-02-17',
'2017-03-03', '2017-03-17', '2017-04-07', '2017-04-21',
'2017-05-05', '2017-05-19', '2017-06-02', '2017-06-16', '2017-07-07', '2017-07-21', '2017-08-04', '2017-08-18', '2017-09-01',
'2017-09-15', '2017-10-06', '2017-10-20', '2017-11-03',
'2017-11-17', '2017-12-01', '2017-12-15', '2018-01-05', '2018-01-19', '2018-02-02', '2018-02-16', '2018-03-02', '2018-03-16',
'2018-04-06', '2018-04-20', '2018-05-04', '2018-05-18',
'2018-06-01', '2018-06-15', '2018-07-06', '2018-07-20', '2018-08-03', '2018-08-17', '2018-09-07', '2018-09-21', '2018-10-05',
'2018-10-19', '2018-11-02', '2018-11-16', '2018-12-07'],
dtype='datetime64[ns]', freq=None)
</code></pre>
|
python|pandas
| 2
|
2,958
| 40,878,053
|
what is the quickest way to drop zeros from a series
|
<p>I've encountered this problem several times and do something different each time. What do others do?</p>
<p>Consider the series <code>s</code></p>
<pre><code>s = pd.Series([1, 0, 2], list('abc'), name='s')
</code></pre>
<p>What is the quickest way to produce</p>
<pre><code>a 1
c 2
Name: s, dtype: int64
</code></pre>
|
<p>Boolean slicing is probably the easiest way:</p>
<pre><code>In [1]: s = pd.Series([1, 0, 2], list('abc'), name='s')
In [2]: s[s != 0]
Out[2]:
a 1
c 2
Name: s, dtype: int64
</code></pre>
|
python|pandas|numpy
| 3
|
2,959
| 41,101,348
|
Share variables - two queues
|
<p>Thanks to <a href="https://stackoverflow.com/questions/40803697/tensorflow-multithreading-image-loading">Tensorflow multithreading image loading</a>, I have this data-loading function which, given a csv file (e.g. a training csv file), creates some data nodes:</p>
<pre><code> 34 def loadData(csvPath,shape, batchSize=10,batchCapacity=40,nThreads=16):
35 path, label = readCsv(csvPath)
36 labelOh = oneHot(idx=label)
37 pathRe = tf.reshape(path,[1])
38
39 # Define subgraph to take filename, read filename, decode and enqueue
40 image_bytes = tf.read_file(path)
41 decoded_img = tf.image.decode_jpeg(image_bytes)
42 decoded_img = prepImg(decoded_img,shape=shape)
43 imageQ = tf.FIFOQueue(128,[tf.float32,tf.float32,tf.string], shapes = [shape,[447],[1]])
44 enQ_op = imageQ.enqueue([decoded_img,labelOh,pathRe])
45
46 NUM_THREADS = nThreads
47 Q = tf.train.QueueRunner(
48 imageQ,
49 [enQ_op]*NUM_THREADS,
50 imageQ.close(),
51 imageQ.close(cancel_pending_enqueues=True)
52 )
53
54 tf.train.add_queue_runner(Q)
55 dQ = imageQ.dequeue()
56 X, Y, Ypaths = tf.train.batch(dQ, batch_size = batchSize, capacity = batchCapacity)
57 return X, Y, Ypaths
</code></pre>
<p>I then call it and have the standard model, loss, train subgraphs such as;</p>
<pre><code>xTr, yTr, yPathsTr = loadData(trainCsvPath, *args)
yPredTr = model1(xTr,*args)
ce = ... # some loss function
learningRate = tf.placeholder(tf.float32)
trainStep = tf.train.AdamOptimizer(learningRate).minimize(ce)
</code></pre>
<p>I then proceed to train the weights in model. As I understand so far I don't need to have data fed into <code>feed_dict</code> as it is already defined.</p>
<pre><code>with tf.Session() as sess:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess,coord=coord)
while not coord.should_stop():
sess.run([trainStep],feed_dict={learningRate:lr})
</code></pre>
<p>My question is now;</p>
<p>What's the best way to incorporate a train/test process? I.e. once the threads have finished the train csv file they then read the test csv file and I run another session where I have something like;</p>
<pre><code>xTe, yTe, yPathsTe = loadData(csvPathTe, *args)
yPredTe = model1(xTe,*args) # will this use the same weights as the trained network? Or am I defining another separate subgraph?
ce = ... # redefined for yPredTe
while not coord.should_stop():
ce.eval() # return losses
</code></pre>
<p>which runs until the test csv file has finished.</p>
<p>How would I then rinse and repeat these steps (possibly shuffling my training set) for a set number of epochs? Should I have a csv queue as well?</p>
|
<p>Alas, currently there is no good answer to this question. The typical evaluation workflow involves running a <strong>separate process</strong> that periodically does the following (e.g. <a href="https://github.com/tensorflow/models/blob/12f279d6f4cb33574bc20109b41eb8a59f40cfd1/tutorials/image/cifar10/cifar10_eval.py#L116" rel="nofollow noreferrer"><code>evaluate()</code> in <code>cifar10_eval.py</code></a>):</p>
<ol>
<li>Build a graph that includes an input pipeline that knows about the evaluation set, a copy of the model, the evaluation ops (if any), and a <code>tf.train.Saver</code>.</li>
<li>Create a new session.</li>
<li>Restore the latest checkpoint written by the training process in that session.</li>
<li>Run the test op (e.g. <code>ce</code> in your question) and accumulate the results in Python, until you get a <code>tf.errors.OutOfRangeError</code>.</li>
</ol>
<p>We're currently working on improved input pipelines that will make it easier to iterate over files many times, and reuse the same session.</p>
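<p>For concreteness, a minimal sketch of steps 1-4, assuming <code>loadData</code> and <code>model1</code> from the question and a hypothetical <code>checkpoint_dir</code> written by the training process:</p>
<pre><code>with tf.Graph().as_default():
    xTe, yTe, yPathsTe = loadData(csvPathTe, shape)
    yPredTe = model1(xTe)  # same variable names, so the checkpoint restores these weights
    ce = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=yTe, logits=yPredTe))
    saver = tf.train.Saver()

    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        losses = []
        try:
            while not coord.should_stop():
                losses.append(sess.run(ce))
        except tf.errors.OutOfRangeError:
            pass  # the input queue ran dry: one pass over the test set is done
        finally:
            coord.request_stop()
            coord.join(threads)
</code></pre>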
|
tensorflow
| 3
|
2,960
| 41,035,942
|
Create numPy array using list comprehension
|
<p>Let's say I have two NumPy arrays <code>arr1</code> and <code>arr2</code>:</p>
<pre><code>arr1 = np.random.randint(3, size = 100)
arr2 = np.random.randint(3, size = 100)
</code></pre>
<p>I would like to build a matrix that contains the number of joint occurrences.
In other words, for all the values of <code>arr1</code> that are 0, find the elements in <code>arr2</code> that are also 0 and are located at the same position. And so, I would like to get the following matrix: </p>
<pre><code>M = [[p(0,0), p(0,1), p(0,2)],
[p(1,0), p(1,1), p(1,2)],
[p(2,0), p(2,1), p(2,2)]]
</code></pre>
<p>Where <code>p(0,0)</code> stands for the number of occurrences that are 0 in <code>arr1</code> and 0 in <code>arr2</code>.</p>
<p><strong>First Attempt:</strong> </p>
<p>As a first attempt I have tried the following: </p>
<pre><code>[[sum(arr1[arr2 == y] == x) for x in np.arange(0,3)] for y in np.arange(0,3)]
</code></pre>
<p>But python throws the following error: </p>
<pre><code>NameError: name 'arr1' is not defined
</code></pre>
<p><strong>Second Attempt:</strong></p>
<p>I tried to dig into this error by making use of for-loops:</p>
<pre><code>M = np.array([])
for x in np.arange(0,dim):
result = np.array([])
for y in np.arange(0,dim):
result_temp = sum(arr1[arr2 == x] == y)
result = np.append(result, result_temp)
M = np.append(M,result)
</code></pre>
<p>In this case Python does not throw the previous Error, but instead of getting a 3x3 array, I get a 1x9 array, and I am not able to get the desired 3x3 array. </p>
<p>Thanks in advance. </p>
|
<p>Your first list comprehension works. You won't get a <code>NameError</code> if <code>arr1</code> is defined:</p>
<pre><code>import numpy as np
np.random.seed(2016)
arr1 = np.random.randint(3, size = 100)
arr2 = np.random.randint(3, size = 100)
result = [[sum(arr1[arr2 == y] == x) for x in np.arange(0,3)]
for y in np.arange(0,3)]
print(result)
# [[10, 9, 10], [8, 13, 15], [18, 8, 9]]
</code></pre>
<p>But you could instead use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html" rel="nofollow noreferrer"><code>np.histogram2d</code></a>:</p>
<pre><code>result2, xedges, yedges = np.histogram2d(arr2, arr1, bins=range(4))
print(result2)
</code></pre>
<p>yields</p>
<pre><code>[[ 10. 9. 10.]
[ 8. 13. 15.]
[ 18. 8. 9.]]
</code></pre>
|
python|arrays|numpy|list-comprehension
| 3
|
2,961
| 41,155,504
|
H5PY/Numpy - Setting the inner shape of a numpy arrays (for h5py)
|
<p>I am trying to use h5py to store data as a list of tuples of (images, angles). Images are numpy arrays of size (240,320,3) of type uint8 from OpenCV while angles are just a number of type float16.</p>
<p>When using h5py, you need a predetermined shape in order to maintain a usable read/write speed. H5py preallocates the entire dataset with arbitrary values, which you can index into later and set to whatever you would like.</p>
<p>I would like to know how to set the shape of an inner numpy array when initializing the shape of a dataset for h5py. I believe the same solution would apply for numpy as well.</p>
<pre><code>import h5py
import numpy as np
dset_length = 100
# fake data of same shape
images = np.ones((dset_length,240,320,3), dtype='uint8') * 255
# fake data of same shape
angles = np.ones(dset_length, dtype='float16') * 90
f = h5py.File('dataset.h5', 'a')
dset = f.create_dataset('dset1', shape=(dset_length,2))
for i in range(dset_length):
# does not work since the shape of dset[0][0] is a number,
# and can't store an array datatype
dset[i] = np.array((images[i],angles[i]))
</code></pre>
<p>Recreating the problem in numpy looks like this:</p>
<pre><code>import numpy as np
a = np.array([
[np.array([0,0]), 0],
[np.array([0,0]), 0],
[np.array([0,0]), 0]
])
a.shape # (3, 2)
b = np.empty((3,2))
b.shape # (3, 2)
a[0][0] = np.array([1,1])
b[0][0] = np.array([1,1]) # ValueError: setting an array element with a sequence.
</code></pre>
|
<p>In numpy, you can store that data with structured arrays:</p>
<pre><code>import numpy as np

dtype = np.dtype([('angle', np.float16), ('image', np.uint8, (240, 320, 3))])
data = np.empty(10, dtype=dtype)
data[0]['angle'] = ...  # etc
</code></pre>
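<p>The same compound dtype carries over to h5py; a minimal sketch reusing the <code>images</code>, <code>angles</code> and <code>dset_length</code> names from the question:</p>
<pre><code>import h5py

with h5py.File('dataset.h5', 'w') as f:
    dset = f.create_dataset('dset1', shape=(dset_length,), dtype=dtype)
    for i in range(dset_length):
        dset[i] = (angles[i], images[i])  # a tuple in field order: (angle, image)
</code></pre>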
|
python|numpy|h5py
| 2
|
2,962
| 53,889,936
|
Error when trying to trim a string in pandas
|
<p>I have the following code to trim off dangling line separators in pandas:</p>
<pre><code>for idx, value in enumerate(df.loc[0]):
if str(value).strip() != str(value):
print ('AAAAAAAAAA', repr(value))
df[idx] = df[idx].str.strip()
print ('BBBBBBBBBB')
</code></pre>
<p>Here is what happens when I run it:</p>
<pre><code>AAAAAAAAAA '325NYRQA82ZPP83EW9LJB3CXOZPDZM\r'
Traceback (most recent call last):
File "/Users/david/Desktop/V/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 3078, in get_loc
return self._engine.get_loc(key)
KeyError: 8
</code></pre>
<p>It seems that instead of calling the index number, I need to call the column name.</p>
|
<p>It looks like you're indexing by position where you need to be using the column name. Here is how you would adjust it:</p>
<pre><code>for idx_name, value in df.loc[0].items():
if str(value).strip() != str(value):
df[idx_name] = df[idx_name].str.strip()
</code></pre>
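<p>A sketch of a vectorized alternative that strips every string column at once (it assumes all object-dtype columns hold strings):</p>
<pre><code>obj_cols = df.select_dtypes(include='object').columns
df[obj_cols] = df[obj_cols].apply(lambda c: c.str.strip())
</code></pre>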
|
python|pandas
| 0
|
2,963
| 54,160,084
|
DataFrame repeated dictionaries in a list
|
<p>I have a JSON file that contains a list of nested dictionaries- <strong>(Json Sample)</strong>:</p>
<pre><code>{"posts": [{"url": "http://twitter.com/AkEl_Saruman/status/1084067040481169408", "title": "", "type": "Twitter", "language": "tr", "assignedCategoryId": 19058723389, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.58}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.01}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.41}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067040481169408", "parentGuid": "1083973777493512192", "engagementType": "RETWEET", "documentsUrls": ["https://www.yenisafak.com/dunya/firatin-dogusunda-turkiyeye-sabotaj-3430594"]}, {"url": "http://twitter.com/eazngl2/status/1084067263895007232", "title": "", "type": "Twitter", "location": "SAU", "geolocation": {"id": "SAU", "name": "Saudi Arabia", "country": "SAU"}, "language": "ar", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.01}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.39}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.6}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067263895007232", "parentGuid": "1084044740461502465", "engagementType": "RETWEET", "mediaUrls": ["http://pbs.twimg.com/media/DwtM8PYW0AEE-oR.jpg"]}, {"url": "http://twitter.com/h_7hm/status/1084067289723535360", "title": "", "type": "Twitter", "location": "SAU", "geolocation": {"id": "SAU", "name": "Saudi Arabia", "country": "SAU"}, "language": "ar", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.17}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.01}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.81}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067289723535360", "parentGuid": "1083854364547207175", "engagementType": "RETWEET"}, {"url": "http://twitter.com/BeooutQ_2BQ/status/1084067316311224325", "title": "", "type": "Twitter", "location": "Bogota, Bogota, COL", "geolocation": {"id": "COL.Bogota.Bogota", "name": "Bogota", "country": "COL", "state": "Bogota", "city": "Bogota"}, "language": "ar", "assignedCategoryId": 19058723389, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.52}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.24}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.24}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067316311224325", "parentGuid": "1084066998399758336", "engagementType": "REPLY"}, {"url": "http://twitter.com/ekspreshbrajans/status/1084067335680471040", "title": "", "type": "Twitter", "location": "Adana, Mediterranean Region, TUR", "geolocation": {"id": "TUR.Mediterranean Region.Adana", "name": "Adana", "country": "TUR", "state": "Mediterranean Region", "city": "Adana"}, "language": "tr", "authorGender": "M", "assignedCategoryId": 19058723389, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.57}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.04}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.39}], "emotionScores": [], 
"imageInfo": [], "monitorId": 19058723386, "guid": "1084067335680471040", "documentsUrls": ["http://ekspreshaberajansi.com/2019/01/12/fuat-ugur-feto-neyse-sozcu-de-o/"]}, {"url": "http://twitter.com/mualitass/status/1084067769094754305", "title": "", "type": "Twitter", "location": "Istanbul, Marmara Region, TUR", "geolocation": {"id": "TUR.Marmara Region.Istanbul", "name": "Istanbul", "country": "TUR", "state": "Marmara Region", "city": "Istanbul"}, "language": "tr", "authorGender": "M", "assignedCategoryId": 19058723389, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.96}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.03}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.01}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067769094754305", "parentGuid": "1084020709586845696", "engagementType": "RETWEET"}, {"url": "http://twitter.com/smail_gomra/status/1084067900732907520", "title": "", "type": "Twitter", "language": "ar", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.0}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.32}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.68}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067900732907520", "parentGuid": "1084062244781113347", "engagementType": "RETWEET", "mediaUrls": ["https://video.twimg.com/ext_tw_video/1084060799595933698/pu/vid/640x480/bZ1hcR-mCViaG2vQ.mp4?tag=8"]}, {"url": "http://twitter.com/taliphan_197878/status/1084067941556068352", "title": "", "type": "Twitter", "location": "Izmir, Aegean Region, TUR", "geolocation": {"id": "TUR.Aegean Region.Izmir", "name": "Izmir", "country": "TUR", "state": "Aegean Region", "city": "Izmir"}, "language": "tr", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.24}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.05}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.72}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067941556068352", "parentGuid": "1084016623294525440", "engagementType": "RETWEET", "documentsUrls": ["https://m.turkiyegazetesi.com.tr/yazarlar/fuat-ugur/606039.aspx"]}, {"url": "http://twitter.com/spor26/status/1084067995326963714", "title": "", "type": "Twitter", "location": "Eskisehir, Central Anatolian Region, TUR", "geolocation": {"id": "TUR.Central Anatolian Region.Eskisehir", "name": "Eskisehir", "country": "TUR", "state": "Central Anatolian Region", "city": "Eskisehir"}, "language": "tr", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.11}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.04}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.85}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067995326963714", "documentsUrls": ["http://dlvr.it", "http://www.spor26.com/haberdetay/10980-mehmet-ozcana-fransadan-talip.html?utm_source=dlvr.it&utm_medium=twitter"], "mediaUrls": ["http://pbs.twimg.com/media/DwtiGMAUwAIlqAH.jpg"]}, {"url": 
"http://twitter.com/xcV44s101gjyOn3/status/1084067920781733888", "title": "", "type": "Twitter", "location": "DZA", "geolocation": {"id": "DZA", "name": "Algeria", "country": "DZA"}, "language": "ar", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.0}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.32}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.68}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084067920781733888", "parentGuid": "1084062244781113347", "engagementType": "RETWEET", "mediaUrls": ["https://video.twimg.com/ext_tw_video/1084060799595933698/pu/vid/640x480/bZ1hcR-mCViaG2vQ.mp4?tag=8"]}, {"url": "http://twitter.com/Sidar66187750/status/1084068273380093955", "title": "", "type": "Twitter", "location": "Diyarbakir, Southeastern Anatolian Region, TUR", "geolocation": {"id": "TUR.Southeastern Anatolian Region.Diyarbakir", "name": "Diyarbakir", "country": "TUR", "state": "Southeastern Anatolian Region", "city": "Diyarbakir"}, "language": "tr", "assignedCategoryId": 19058723388, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.02}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.94}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.05}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084068273380093955", "parentGuid": "1083705784817643521", "engagementType": "RETWEET"}, {"url": "http://twitter.com/ahmetAcer14/status/1084068082673483776", "title": "", "type": "Twitter", "location": "Adana, Mediterranean Region, TUR", "geolocation": {"id": "TUR.Mediterranean Region.Adana", "name": "Adana", "country": "TUR", "state": "Mediterranean Region", "city": "Adana"}, "language": "und", "assignedCategoryId": 0, "assignedEmotionId": 0, "categoryScores": [], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084068082673483776", "parentGuid": "1084054837183029249", "engagementType": "RETWEET", "documentsUrls": ["https://www.turkiyegazetesi.com.tr/yazarlar/fuat-ugur/606038.aspx", "https://twitter.com/gencosmansarper/status/1083956208309018625"]}, {"url": "http://twitter.com/xcV44s101gjyOn3/status/1084068442758631425", "title": "", "type": "Twitter", "location": "DZA", "geolocation": {"id": "DZA", "name": "Algeria", "country": "DZA"}, "language": "ar", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.02}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.35}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.63}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084068442758631425", "parentGuid": "1084049278119612421", "engagementType": "RETWEET", "mediaUrls": ["http://pbs.twimg.com/media/DwtRDo0WsAER1-T.jpg", "http://pbs.twimg.com/media/DwtRCGuXQAAm35S.jpg", "http://pbs.twimg.com/media/DwtRC3KW0AA5RA3.jpg", "http://pbs.twimg.com/media/DwtRERfW0AA7sIN.jpg"]}, {"url": "http://twitter.com/Antaakya_Akgenc/status/1084066851242557440", "title": "", "type": "Twitter", "location": "Mediterranean Region, TUR", "geolocation": {"id": "TUR.Mediterranean Region", "name": "Mediterranean Region", "country": "TUR", "state": "Mediterranean Region"}, "language": "tr", "assignedCategoryId": 19058723391, 
"assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.37}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.17}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.46}], "emotionScores": [], "imageInfo": [{"url": "http://pbs.twimg.com/media/DwthDPkX0AAxAWJ.jpg", "brands": [], "objects": [{"score": 0.89882, "classId": 3324, "className": "Demonstration/protest"}]}, {"url": "http://pbs.twimg.com/media/DwthDPkXQAAf_0t.jpg", "brands": []}, {"url": "http://pbs.twimg.com/media/DwthDPcXcAETe37.jpg", "brands": [], "objects": [{"score": 0.96125, "classId": 3119, "className": "Sitting"}, {"score": 0.83501, "classId": 3488, "className": "People"}]}], "monitorId": 19058723386, "guid": "1084066851242557440", "mediaUrls": ["http://pbs.twimg.com/media/DwthDPkX0AAxAWJ.jpg", "http://pbs.twimg.com/media/DwthDPkXQAAf_0t.jpg", "http://pbs.twimg.com/media/DwthDPcXcAETe37.jpg"]}, {"url": "http://twitter.com/Charles70067222/status/1084068597453017090", "title": "", "type": "Twitter", "language": "tr", "authorGender": "M", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.33}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.39}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.27}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084068597453017090", "parentGuid": "1084068468910161920", "engagementType": "REPLY"}, {"url": "http://twitter.com/fbardia/status/1084068551059783681", "title": "", "type": "Twitter", "language": "es", "authorGender": "M", "assignedCategoryId": 19058723391, "assignedEmotionId": 0, "categoryScores": [{"categoryId": 19058723389, "categoryName": "Basic Negative", "score": 0.04}, {"categoryId": 19058723388, "categoryName": "Basic Positive", "score": 0.09}, {"categoryId": 19058723391, "categoryName": "Basic Neutral", "score": 0.87}], "emotionScores": [], "imageInfo": [], "monitorId": 19058723386, "guid": "1084068551059783681", "parentGuid": "1083916530784657408", "engagementType": "REPLY"}], "totalPostsAvailable": 16, "status": "success"}]
</code></pre>
<p>When I load this JSON file into a DataFrame it looks like this:
<a href="https://i.stack.imgur.com/zk2My.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zk2My.png" alt="enter image description here"></a></p>
<p>I need to put the strings in the "posts" column into a DataFrame. I tried this code:</p>
<pre><code>with open('test.json') as data_file:
d = json.loads(data_file.read())
df= pd.DataFrame(d[0]['posts'])
df
</code></pre>
<p><strong>df contains only the values of the first row, i.e. index [0], which holds 84 urls</strong>, as in this picture:
<a href="https://i.stack.imgur.com/4Tegs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Tegs.png" alt="enter image description here"></a></p>
<p><strong>What I need:</strong></p>
<p>Is there a way to put the dictionaries from all the indexes into a DataFrame?</p>
<p>Thanks in advance!</p>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.json.json_normalize.html" rel="nofollow noreferrer"><code>json_normalize</code></a>:</p>
<pre><code>import json
from pandas.io.json import json_normalize
with open('test.json') as file:
j = json.load(file)
df = json_normalize(j, 'posts', ['totalPostsAvailable','status'])
print (df)
</code></pre>
|
python|json|python-3.x|pandas|dataframe
| 3
|
2,964
| 54,212,645
|
How to configure tensorflow legacy/train.py model.cpk output interval
|
<p>I am trying to address an issue caused by overfitting of a model. Unfortunately, I don't know how to change the interval at which <code>legacy/train.py</code> writes <code>model.cpk</code> files during training. Is there a way to reduce the time between each save of <code>model.cpk</code> and to disable their deletion? I am training small models and can afford the increased storage requirement.</p>
|
<p>For save intervals and number of checkpoints to keep, have a look here:
<a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/train/Saver</a></p>
<p>From the link above <br>
-> max_to_keep <br>
-> keep_checkpoint_every_n_hours</p>
<blockquote>
<p>Additionally, optional arguments to the Saver() constructor let you
control the proliferation of checkpoint files on disk:</p>
<p>max_to_keep indicates the maximum number of recent checkpoint files to
keep. As new files are created, older files are deleted. If None or 0,
no checkpoints are deleted from the filesystem but only the last one
is kept in the checkpoint file. Defaults to 5 (that is, the 5 most
recent checkpoint files are kept.)</p>
<p>keep_checkpoint_every_n_hours: In addition to keeping the most recent
max_to_keep checkpoint files, you might want to keep one checkpoint
file for every N hours of training. This can be useful if you want to
later analyze how a model progressed during a long training session.
For example, passing keep_checkpoint_every_n_hours=2 ensures that you
keep one checkpoint file for every 2 hours of training. The default
value of 10,000 hours effectively disables the feature.</p>
</blockquote>
<p>I believe that you can reference this in the training config if you use one. Check out the trainer.py file in the same legacy directory. Around line 375, it references keep_checkpoint_every_n_hours -> <br> </p>
<pre><code># Save checkpoints regularly.
keep_checkpoint_every_n_hours = train_config.keep_checkpoint_every_n_hours
saver = tf.train.Saver(keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
</code></pre>
<p>What it doesn't reference is the max_to_keep line, which may need to be added to that script. That said, in closing, while it's difficult to be certain without all the information, I cannot help but think you are going about this the wrong way. Collecting and reviewing every checkpoint doesn't seem to be the right way to deal with overfitting. Run TensorBoard and check the results of your training there. Additionally, evaluating the model on held-out evaluation data will provide a great deal of insight into what your model is doing. </p>
<p>All the best with your training! </p>
|
python|tensorflow
| 1
|
2,965
| 54,124,278
|
Change the sign of the number in the pandas series
|
<p>How to change the sign in the series, if I have:</p>
<blockquote>
<p>1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13</p>
</blockquote>
<p>and need to get:</p>
<blockquote>
<p>1, 2, 3, -4, -5, -6, 8, 9, 10, -11, -12, -13</p>
</blockquote>
<p>I need to be able to set the period (now it is equal to 3) and the index from which the function starts (now it is equal to 3).</p>
<p>For example, if I specify 2 as the index, I get</p>
<blockquote>
<p>1, 2, -3, -4, -5, 6, 8, 9, -10, -11, -12, 13</p>
</blockquote>
<p>I need to apply this function sequentially to each column, since applying to the entire DataFrame leads to a memory error.</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with integer division (<code>//</code>) and modulo (<code>%</code>) to build a boolean mask:</p>
<pre><code>s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
N = 3
#if default RangeIndex
m = (s.index // N) % 2 == 1
#general index
#m = (np.arange(len(s.index)) // N) % 2 == 1
s = pd.Series(np.where(m, -s, s))
print (s)
0 1
1 2
2 3
3 -4
4 -5
5 -6
6 7
7 8
8 9
9 -10
10 -11
11 -12
12 13
dtype: int64
</code></pre>
<p>EDIT:</p>
<pre><code>N = 3
M = 1
m = np.concatenate([np.repeat(False, M),
(np.arange(len(s.index) - M) // N) % 2 == 0])
s = pd.Series(np.where(m, -s, s))
print (s)
0 1
1 -2
2 -3
3 -4
4 5
5 6
6 7
7 -8
8 -9
9 -10
10 11
11 12
12 13
dtype: int64
</code></pre>
|
python|pandas|time-series|series
| 3
|
2,966
| 66,294,709
|
Creating new dataframe by selecting specific columns from other dataframe
|
<p>This seems to be a simple question, and it could still be, on how to create new dataframes by selecting specific columns from other dataframes.
Let's illustrate it with three dummy dataframes df1, df2, df3, where "Position" is a common column in all of them.</p>
<pre><code>df1 = pd.DataFrame({"Position": ["A", "B", "C"], "Team1": ["xyz", "xyy", "xxy"],"Team2": ["xxz", "yyx", "yxy"],"Team3": ["xzy", "zzy", "zxz"]})
df2 = pd.DataFrame({"Position": ["A", "B", "C"],"T1": ["1", "2", "4"],"T2": ["3", "5", "2"],"T3":["2","1","4"] }, index=[0, 1, 2], )
df3 = pd.DataFrame({"Position": ["A", "B", "C"],"T_1": ["IN", "IN", "OUT"],"T2": ["IN", "OUT", "OUT"],"T3":["OUT","IN","IN"] }, index=[0, 1, 2], )
</code></pre>
<p>I need to create, in this instance, three dataframes where I merge Team1, T1 and T_1 on "Position". Now the catch: I do not know in advance how many teams there are. df1, df2 and df3 will all have the same number of teams, but that number can vary (here I made it three, but in the actual scenario it can be a variable, say N). Can some iteration be performed to create the dataframes based on the number of teams?</p>
<p>Here is the graphical way of looking at the Inputs (teams are defined for this example, but actually it is variable) and Expected output</p>
<p><a href="https://i.stack.imgur.com/6xlqb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6xlqb.png" alt="enter image description here" /></a></p>
|
<p>You could just concat the relevant columns horizontally:</p>
<pre><code>new_dfs = [pd.concat((df.set_index('Position').iloc[:,i] for df in (
df1, df2, df3)), axis=1).reset_index() for i in range(3)]
</code></pre>
<p>It gives:</p>
<pre><code>for i in new_dfs:
print(i)
Position Team1 T1 T_1
0 A xyz 1 IN
1 B xyy 2 IN
2 C xxy 4 OUT
Position Team2 T2 T2
0 A xxz 3 IN
1 B yyx 5 OUT
2 C yxy 2 OUT
Position Team3 T3 T3
0 A xzy 2 OUT
1 B zzy 1 IN
2 C zxz 4 IN
</code></pre>
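<p>Since the number of teams can vary, a small sketch that derives it from the frame itself instead of hard-coding <code>range(3)</code>:</p>
<pre><code>n_teams = df1.shape[1] - 1  # every column except 'Position'
new_dfs = [pd.concat((df.set_index('Position').iloc[:, i] for df in (df1, df2, df3)),
                     axis=1).reset_index() for i in range(n_teams)]
</code></pre>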
|
python|pandas|dataframe
| 2
|
2,967
| 65,955,235
|
Elegant way to add one row DataFrame to another DataFrame
|
<p>I have two <code>DataFrames</code> and one of them is a single row <code>DataFrame</code>. I want to add the one row <code>dataframe</code> across all the rows of the bigger one. I can solve it, but I am looking for a simpler solution:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'C':['car'],'D':['bus']})
print(df1)
C D
0 car bus
df2 = pd.DataFrame({'A':[1,2,3],'B':[8,2,0]})
print(df2)
A B
0 1 8
1 2 2
2 3 0
</code></pre>
<p>I want to join the one-row DataFrame across the bigger one. The result should be:</p>
<pre><code> A B C D
0 1 8 car bus
1 2 2 car bus
2 3 0 car bus
</code></pre>
<p><strong>My attempt:</strong> I created a <code>dummy</code> column in both DataFrames and did a <code>left join</code>, but that's inelegant. I am sure there are simpler solutions.</p>
|
<p>I think the most elegant way is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a>, but string column names are necessary:</p>
<pre><code>df2 = df2.assign(**df1.iloc[0])
print (df2)
A B C D
0 1 8 car bus
1 2 2 car bus
2 3 0 car bus
df1.columns=[1,8]
print (df1)
1 8
0 car bus
df2 = df2.assign(**df1.iloc[0])
print (df2)
</code></pre>
<blockquote>
<p>TypeError: assign() keywords must be strings</p>
</blockquote>
<p>Another solution, which works if the first index values match:</p>
<pre><code>df2 = df2.join(df1.reindex(df2.index, method='ffill'))
print (df2)
A B C D
0 1 8 car bus
1 2 2 car bus
2 3 0 car bus
</code></pre>
|
python|pandas|dataframe
| 5
|
2,968
| 66,144,350
|
How to assign a value to a new column with a string condition in pandas dataframe
|
<p>I am trying to assign values to a new column in the dataframe based on a condition: whether the first column contains a certain letter or not. If the first column only contains single letters, I use the dummy-variable function. But what about when the first column contains numbers, strings, and NaN?</p>
<p>Here is an example:</p>
<pre class="lang-py prettyprint-override"><code># Before
c1
0 a
1 2
2 b
3 c
4 ab
5 bc
6 NaN
#After
c1 a b c
0 a 1 0 0
1 2 0 0 0
2 b 0 1 0
3 c 0 0 1
4 ab 1 1 0
5 bc 0 1 1
6 NaN 0 0 0
</code></pre>
<p>I try <code>str.contains()</code> to assign, but I get an error:</p>
<pre class="lang-py prettyprint-override"><code>x['a'] = 1 if x.c1.str.contains('a') else 0
The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>You <em>could</em> do something like this:</p>
<pre><code>df['a'] = df['c1'].str.contains('a').astype(int)
</code></pre>
<p>... but this raises a <code>ValueError</code> if you have any <code>NaN</code> values in <code>df['c1']</code> (as you do in your example).</p>
<p>Here's an alternative using <code>df.apply</code>:</p>
<pre><code>df['a'] = df['c1'].apply(lambda x: int('a' in x) if isinstance(x, str) else 0)
</code></pre>
<p>This approach also deals with columns that are composed of multiple types: it returns 1 only when a given row is a string, in addition to having the appropriate character inside.</p>
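<p>As a hedged alternative, <code>str.contains</code> accepts an <code>na</code> argument, so you can keep the vectorized form and count missing (and non-string) entries as non-matches; a sketch covering all three target columns:</p>
<pre><code>for ch in ['a', 'b', 'c']:
    df[ch] = df['c1'].str.contains(ch, na=False).astype(int)
</code></pre>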
|
python|pandas|dataframe
| 2
|
2,969
| 66,049,765
|
filter a dataframe based on a specific value for each category in pandas
|
<p>I have a dataframe</p>
<pre><code>df = url browser loadtime
A safari 1500
A safari 1650
A Chrome 2800
B IE 3150
B safari 3300
C Chrome 2650
. . .
. . .
</code></pre>
<p>I need to compute the upper outlier of the load time per app using the 3×IQR rule of thumb, and then filter df, keeping only rows where, for each app, loadtime is less than that app's upper-outlier threshold.</p>
<p>This is how I proceed.</p>
<ol>
<li>I compute the upper outlier using the 3×IQR rule of thumb</li>
</ol>
<pre><code>def upper_outlier(x):
return np.percentile(x, 75) + 3*(np.percentile(x,75)-np.percentile(x,25))
## Find the upper outlier threshold per app
df_grouped = df.groupby("app")['loadtime'].agg([('upper_outlier', lambda x : upper_outlier(x))])
</code></pre>
<p>This way for each app I have the corresponding upper outlier</p>
<ol start="2">
<li>I filter <code>df</code> using <code>df_grouped</code></li>
</ol>
<pre><code>df_new = pd.DataFrame()
for app in df.app.unique():
df_new = pd.concat([df_new,df.loc[(df.app==app)&(df.loadtime<df_grouped.loc[app, 'upper_outlier'])]], axis = 0).reset_index(drop=True)
</code></pre>
<p>The for loop takes long as I have a lot of data. Is there a cleaner pythonic way of doing this?</p>
|
<p>You can try to merge your calculation with the original dataframe</p>
<pre><code>df_grouped = df.groupby("app")['loadtime'].agg([('upper_outlier', lambda x : upper_outlier(x))]).reset_index()
dfmerged = df.merge(df_grouped, on = 'app', how = 'left')
</code></pre>
<p>and then filter</p>
<pre><code>dfmerged[dfmerged.loadtime<dfmerged.upper_outlier]
</code></pre>
<p>Not sure if this is more efficient, but it seems more straightforward.</p>
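<p>A more pythonic route that skips the merge entirely is <code>groupby().transform</code>, which broadcasts the per-app threshold back onto the original rows (a sketch reusing the <code>upper_outlier</code> function above):</p>
<pre><code>thresh = df.groupby('app')['loadtime'].transform(upper_outlier)
df_new = df[df['loadtime'] < thresh]
</code></pre>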
|
python|pandas|dataframe|filter|outliers
| 2
|
2,970
| 52,518,283
|
In Python Pandas, how to search if column elements contains the first 2 digits
|
<p>I am fairly new to Python. I am trying to build a function that checks the first 2 digits of the elements in a column and, if they match, returns the result in a new column such as Region.</p>
<p>For example, </p>
<pre><code> Adres AreaCode Region
0 SArea 123191 A
1 BArea 122929 A
2 AArea 132222 B
</code></pre>
<p>I want the function to look at just the first 2 digits of the AreaCode and produce a new Region column which classifies the region based on those first 2 digits.
So in this case 12 would give me A and 13 would give me B.</p>
<p>I already tried this</p>
<pre><code>df.loc[df.AreaCode == 123191, 'Region'] = 'A'
</code></pre>
<p>and this worked for the entire AreaCode but I have no idea how to modify it so that I would be able to search based on the first 2 digits.</p>
<p>and I tried this</p>
<pre><code>df.loc[df.AreaCode.str.contains == 12, 'Region' ] = 'A'
</code></pre>
<p>but it gives me the error:</p>
<pre><code>AttributeError: Can only use .str accessor with string values,
which use np.object_ dtype in pandas
</code></pre>
<p>How do I fix this and thanks a lot for helping!</p>
|
<blockquote>
<p>I tried this df.loc[df.AreaCode.str.contains == 12, 'Region' ] = 'A'
but it gives me the error: AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</p>
</blockquote>
<p>You could simply convert it to a string, then use the same code:</p>
<pre><code>df.loc[df.AreaCode.astype(str).str.startswith('12'), 'Region' ] = 'A'
</code></pre>
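<p>If you have several region prefixes, a variant is to slice the first two characters and map them, so each prefix gets its region in a single pass (the mapping below just mirrors your example):</p>
<pre><code>df['Region'] = df.AreaCode.astype(str).str[:2].map({'12': 'A', '13': 'B'})
</code></pre>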
|
python|pandas
| 2
|
2,971
| 52,737,636
|
BeautifulSoup Python to Dataframe
|
<p>I'm trying to convert scraped data to a pandas dataframe (table).
The info is retrieved via BeautifulSoup from different tags (a, span, div):</p>
<pre><code>for ul in soup_level1.find('ul', {'class': "fix3"}):
    divjt = ul.find('div', {'class': "topb"})
    a = divjt.find('a')
    trajectory = a.text.strip()
    divloc = ul.find('div', {'class': "under"})
    d = divloc.find('div')
    sp = ul.find('span', {'class': "blk"})
    object = sp.text.strip()
    try:
        sas = ul.find_all('span', {'class': "f1"})
        timex = sas[0].text
    except IndexError:
        timex = ''
    datalist.append([trajectory, object, timex])
    headers = ['Traj', 'Object', 'Time']
    A = [trajectory]
    B = [object]
    C = [timex]
    datac = A + B + C
    df = pd.DataFrame(datac)
    print(df)
</code></pre>
<p>The result I am getting right now is </p>
<pre><code> 0
0 BRD - TWD
1 MER
2 11/10/2018
0
0 SFX - NYT
1 MER
2 10/05/2016
0
0 GER - BEN
1 MER
2 05/06/2016
</code></pre>
<p>I would basically want to "dump" those results in a proper dataframe table
where each row is printed to excel accordingly.</p>
<pre><code>0 BRD - TWD MER 11/10/2018
1 SFX - NYT MER 10/05/2016
2 GER - BEN MER 05/06/2016
</code></pre>
<p>Thank you!</p>
|
<p>If you want the data in Excel, write a CSV file instead; a CSV file can be opened in Excel/LibreOffice to get the required result.</p>
<pre><code>const row = value1 + "," + value2 + "," + value3;  // comma-separated so Excel splits the columns
await fs.promises.appendFile('file_name.csv', row + os.EOL);
</code></pre>
<p>This is how I did it in JavaScript.</p>
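<p>Since the question is tagged pandas, here is a Python sketch of the same idea: collect one row per <code>ul</code> into a plain list and build the DataFrame once, after the loop (the extraction variables follow the question's code and are assumed to be filled in as there):</p>
<pre><code>rows = []
for ul in soup_level1.find('ul', {'class': "fix3"}):
    # ... extract trajectory, object, timex exactly as in the question ...
    rows.append([trajectory, object, timex])

df = pd.DataFrame(rows, columns=['Traj', 'Object', 'Time'])
df.to_csv('results.csv', index=False)  # openable in Excel; df.to_excel needs openpyxl
</code></pre>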
|
python|pandas|beautifulsoup
| 0
|
2,972
| 46,568,021
|
Install Tensorflow with SYCL support
|
<p>I am trying to use gdb to trace the Tensorflow operation kernel implementation with Eigen SYCL support.
However, when I try to install the <code>.whl</code> package, some error messages about <code>fglrx</code> pop up. </p>
<h3>Error message</h3>
<pre><code>Compiling /tmp/pip-1vfYDJ-build/tensorflow-1.0.1.data/purelib/tensorflow/contrib/cudnn_rnn/ops/gen_cudnn_rnn_ops.py ...
File "/tmp/pip-1vfYDJ-build/tensorflow-1.0.1.data/purelib/tensorflow/contrib/cudnn_rnn/ops/gen_cudnn_rnn_ops.py", line 1
Error: Fail to load fglrx kernel module!
^
SyntaxError: invalid syntax ....
</code></pre>
<h3>Configuration</h3>
<ul>
<li>CPU: Intel Core i7-4790 </li>
<li>GPU: GeForce GT 630/PCIe/SSE2</li>
<li>OS: ubuntu 15.04</li>
<li>Driver: <a href="https://i.stack.imgur.com/6rlUk.png" rel="nofollow noreferrer">Nvidia binary driver nvidia-352</a></li>
</ul>
<h3>Command</h3>
<p>Here are commands I used according to the tutorial.</p>
<p><a href="https://www.codeplay.com/portal/03-30-17-setting-up-tensorflow-with-opencl-using-sycl" rel="nofollow noreferrer">https://www.codeplay.com/portal/03-30-17-setting-up-tensorflow-with-opencl-using-sycl</a></p>
<ul>
<li><p>configuration</p>
<pre><code>Please specify the location of python. [Default is /usr/local/bin/python]:
Please specify optimization flags to use during compilation [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? (Linux only) [Y/n]
jemalloc enabled on Linux
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Found possible Python library paths: /usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] y
OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N]
No CUDA support will be enabled for TensorFlow
Please specify which C++ compiler should be used as the host C++ compiler. [Default is /usr/bin/clang++-3.6]:
Please specify which C compiler should be used as the host C compiler. [Default is /usr/bin/clang-3.6]:
</code></pre></li>
<li><p>bazel build: </p></li>
</ul>
<blockquote>
<p>bazel build -c dbg --config=sycl
//tensorflow/tools/pip_package:build_pip_package</p>
</blockquote>
<ul>
<li>build pip package:</li>
</ul>
<blockquote>
<p>bazel-bin/tensorflow/tools/pip_package/build_pip_package</p>
</blockquote>
<ul>
<li>install pip package</li>
</ul>
<blockquote>
<p>sudo pip install:
/tmp/tensorflow_pkg/tensorflow-1.0.1-cp27-none-linux_x86_64.whl</p>
</blockquote>
<p>Please help me in resolving the issues.</p>
|
<p>You can't (currently) use SYCL with TensorFlow on Intel GPUs. However, it is coming soon. There are a few fixes you will need and then it will work correctly. You will need to wait for a new Intel OpenCL GPU driver and then for a few compatibility commits to TensorFlow before it will work on Intel GPU. You may also want to wait a little longer for some performance improvements, because we have been focusing more on correctness, first, with performance coming a little later.</p>
|
python|ubuntu|tensorflow|opencl|nvidia
| 1
|
2,973
| 58,175,041
|
Append data to DF with column names stored in list
|
<p>Long time listener, first time caller...</p>
<p>I'm new to python struggling to understand how to process lists for different purposes. In this case, I have what will ultimately be a long list of float objects that I'd like to arrange into a dataframe, appending new rows with each loop.</p>
<pre><code>cols = ['col1', 'col2']  # ...etc...
df = pd.DataFrame(columns=cols)
for symbol in symbolList:
    col1 = ...  # some floating point calc
    col2 = ...  # another
    # ...etc...
    df = df.append(pd.DataFrame(columns=cols, data=[[','.join(cols)]]))
</code></pre>
<p>This appends the joined column names as a single string, even when I attempt different methods of manipulating the resulting string in an effort to persuade Python to treat it as a list of objects rather than a string.</p>
<p>I'm sure there's not only a relatively simple way of modifying the data= argument so this runs, but an overall more pythonic way of achieving my end result. I'd be happy to hear about the latter, but for my edification, if there is a solution to my specific question, I'd greatly appreciate your help.</p>
|
<p>You can try using <code>loc</code></p>
<pre><code>symbolList = [*'ABCDEFG']
for symbol in symbolList:
col1 = np.random.randn()
col2 = np.random.randn()
df.loc[symbol] = [col1, col2]
</code></pre>
<p>However, if you define your dataframe as <code>float</code> you can use <code>at</code>:</p>
<pre><code>cols = ['col1','col2']
df= pd.DataFrame(columns=cols, dtype=float)
symbolList = [*'ABCDEFG']
for symbol in symbolList:
col1 = np.random.randn()
col2 = np.random.randn()
df.at[symbol] = [col1, col2]
df
col1 col2
A -0.099437 -0.264494
B 0.460220 -0.250704
C -0.127054 -0.372424
D -0.909770 0.835610
E -0.442418 0.298014
F -0.468022 0.111015
G 0.080260 1.768664
</code></pre>
<p>Truth is, you could have used <code>at</code> from the start, but when you define the dataframe its column dtypes default to <code>object</code>, and <code>at</code> does nothing to change that when you add new rows of <code>float</code> values.</p>
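<p>A common, arguably more pythonic pattern avoids growing the DataFrame row by row at all: collect the rows in a plain list and build the frame once at the end (a sketch with the same random placeholders):</p>
<pre><code>rows = []
for symbol in symbolList:
    col1 = np.random.randn()
    col2 = np.random.randn()
    rows.append({'col1': col1, 'col2': col2})

df = pd.DataFrame(rows, index=symbolList)
</code></pre>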
|
python-3.x|pandas
| 0
|
2,974
| 58,185,660
|
tensorflow select list of indices along dimension
|
<p>In order to select a list of columns in a matrix I am doing the following:</p>
<pre><code>sel = tf.concat([tf.slice(mat, [0, i], [-1, 1]) for i in list_columns],
axis=1)
</code></pre>
<p>I wonder if there is a more efficient manner.</p>
|
<p><code>tf.gather</code> will be more efficient and concise. Let <code>axis=1</code>, then you can select columns in specified indices. </p>
<pre class="lang-py prettyprint-override"><code>mat = tf.constant(np.arange(12).reshape(2,6))
#[[ 0, 1, 2, 3, 4, 5],
# [ 6, 7, 8, 9, 10, 11]]
list_columns = [0,2,4]
res = tf.gather(mat, [0,2,4], axis=1)
#[[ 0, 2, 4],
# [ 6, 8, 10]]
</code></pre>
|
tensorflow
| 1
|
2,975
| 58,309,532
|
How to get distinct value, count of a column in dataframe and store in another dataframe as (k,v) pair using Spark2 and Scala
|
<p>I want to get the distinct values and their respective counts of every column of a dataframe and store them as (k,v) in another dataframe.
Note: my columns are not static, they keep changing, so I cannot hardcode the column names; instead I should loop through them.</p>
<p>For Example, below is my dataframe </p>
<pre><code>+----------------+-----------+------------+
|name |country |DOB |
+----------------+-----------+------------+
| Blaze | IND| 19950312|
| Scarlet | USA| 19950313|
| Jonas | CAD| 19950312|
| Blaze | USA| 19950312|
| Jonas | CAD| 19950312|
| mark | USA| 19950313|
| mark | CAD| 19950313|
| Smith | USA| 19950313|
| mark | UK | 19950313|
| scarlet | CAD| 19950313|
</code></pre>
<p>My final result should be created in a new dataframe as (k,v) where k is the distinct record and v is the count of it.</p>
<pre><code>+----------------+-----------+------------+
|name |country |DOB |
+----------------+-----------+------------+
| (Blaze,2) | (IND,1) |(19950312,3)|
| (Scarlet,2) | (USA,4) |(19950313,6)|
| (Jonas,3) | (CAD,4) | |
| (mark,3) | (UK,1) | |
| (smith,1) | | |
</code></pre>
<p>Can anyone please help me with this, I'm using Spark 2.4.0 and Scala 2.11.12</p>
<p><strong>Note: My columns are dynamic, so I can't hardcode the columns and do a groupby on them.</strong></p>
|
<p>I don't have an exact solution to your query, but I can provide some help that should get you started on your issue.</p>
<p>Create dataframe</p>
<pre><code>scala> val df = Seq(("Blaze ","IND","19950312"),
| ("Scarlet","USA","19950313"),
| ("Jonas ","CAD","19950312"),
| ("Blaze ","USA","19950312"),
| ("Jonas ","CAD","19950312"),
| ("mark ","USA","19950313"),
| ("mark ","CAD","19950313"),
| ("Smith ","USA","19950313"),
| ("mark ","UK ","19950313"),
| ("scarlet","CAD","19950313")).toDF("name", "country","dob")
</code></pre>
<p>Next, calculate the count of each distinct value per column:</p>
<pre><code>scala> val distCount = df.columns.map(c => df.groupBy(c).count)
</code></pre>
<p>Create a range to iterate over distCount </p>
<pre><code>scala> val range = Range(0,distCount.size)
range: scala.collection.immutable.Range = Range(0, 1, 2)
</code></pre>
<p>Aggregate your data</p>
<pre><code>scala> val aggVal = range.toList.map(i => distCount(i).collect().mkString).toSeq
aggVal: scala.collection.immutable.Seq[String] = List([Jonas ,2][Smith ,1][Scarlet,1][scarlet,1][mark ,3][Blaze ,2], [CAD,4][USA,4][IND,1][UK ,1], [19950313,6][19950312,4])
</code></pre>
<p>Create data frame:</p>
<pre><code>scala> Seq((aggVal(0),aggVal(1),aggVal(2))).toDF("name", "country","dob").show()
+--------------------+--------------------+--------------------+
| name| country| dob|
+--------------------+--------------------+--------------------+
|[Jonas ,2][Smith...|[CAD,4][USA,4][IN...|[19950313,6][1995...|
+--------------------+--------------------+--------------------+
</code></pre>
<p>I hope this helps you in some way. </p>
|
pandas|scala|apache-spark|machine-learning|apache-spark-sql
| 1
|
2,976
| 69,129,338
|
Grouping a DataFrame, counting occurrences in one column, putting other column values in sets
|
<p>I have a dataframe, let's call it 'data', as follows:</p>
<pre><code>index ID name
0 23 aaa
1 42 bbb
2 23 aab
3 42 bbb
4 42 bbb
...
</code></pre>
<p>I want to count the occurrences of ID and create an extra column for that by which I can sort. Additionally I want to add the names to sets, because they might differ. Something like this (additional index optional):</p>
<pre><code>count ID name
3 42 {bbb}
2 23 {aaa, aab}
</code></pre>
<p>I know the solution is somewhere in the <code>groupby()</code> function. I can put the names into sets with <code>data.groupby('ID')['name'].apply(set).reset_index()</code> but how do I additionally count the occurrences of ID and add the numbers to the DataFrame correctly? I'm standing on the hose, as the German says. Thanks a lot!</p>
|
<p>You can use <code>.agg</code> with multiple parameters:</p>
<pre class="lang-py prettyprint-override"><code>x = df.groupby("ID", as_index=False).agg(
count=("ID", "size"), name=("name", set)
)
print(x)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> ID count name
0 23 2 {aaa, aab}
1 42 3 {bbb}
</code></pre>
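<p>Since you also want to sort by the count, you can chain a <code>sort_values</code> on the result:</p>
<pre><code>x = x.sort_values('count', ascending=False)
</code></pre>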
|
python|pandas|dataframe|pandas-groupby
| 1
|
2,977
| 69,027,228
|
All possible concatenations of two tensors in PyTorch
|
<p>Suppose I have two tensors <code>S</code> and <code>T</code> defined as:</p>
<pre><code>S = torch.rand((3,2,1))
T = torch.ones((3,2,1))
</code></pre>
<p>We can think of these as containing batches of tensors with shapes <code>(2, 1)</code>. In this case, the batch size is <code>3</code>.</p>
<p>I want to concatenate all possible pairings between batches. A single concatenation of batches produces a tensor of shape <code>(4, 1)</code>. And there are <code>3*3</code> combinations so ultimately, the resulting tensor <code>C</code> must have a shape of <code>(3, 3, 4, 1)</code>.</p>
<p>One solution is to do the following:</p>
<pre><code>for i in range(S.shape[0]):
for j in range(T.shape[0]):
C[i,j,:,:] = torch.cat((S[i,:,:],T[j,:,:]))
</code></pre>
<p>But the for loop doesn't scale well to large batch sizes. Is there a PyTorch command to do this?</p>
|
<p>In NumPy, this kind of all-pairs combination is built with <code>np.meshgrid</code>.</p>
<p><a href="https://stackoverflow.com/a/35608701/3259896">https://stackoverflow.com/a/35608701/3259896</a></p>
<p>So in pytorch, it would be</p>
<pre><code>torch.stack(
torch.meshgrid(x, y)
).T.reshape(-1,2)
</code></pre>
<p>Where x and y are your two lists. You can use any number. x, y , z, etc.</p>
<p>And then you reshape it to the number of lists you use.</p>
<p>So if you used three lists, use <code>.reshape(-1,3)</code>, for four use <code>.reshape(-1,4)</code>, etc.</p>
<p>So for 5 tensors, use</p>
<pre><code>torch.stack(
torch.meshgrid(a, b, c, d, e)
).T.reshape(-1,5)
</code></pre>
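<p>For the original batched-tensor question (shapes <code>(3, 2, 1)</code>), a broadcasting sketch that reproduces the double loop without Python-level iteration:</p>
<pre><code>Sx = S.unsqueeze(1).expand(-1, T.shape[0], -1, -1)  # (3, 3, 2, 1): S[i] repeated over j
Tx = T.unsqueeze(0).expand(S.shape[0], -1, -1, -1)  # (3, 3, 2, 1): T[j] repeated over i
C = torch.cat((Sx, Tx), dim=2)                      # (3, 3, 4, 1), C[i, j] == cat(S[i], T[j])
</code></pre>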
|
pytorch
| 0
|
2,978
| 68,951,354
|
interval in dataframe to start from the first row [python 3.6.0]
|
<p>The data below is at 5-minute intervals; I am trying to group it into 10-minute intervals.</p>
<p>Dataframe named <code>df</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>script_id</th>
<th>date_time</th>
<th>open</th>
<th>high</th>
<th>low</th>
<th>close</th>
<th>volume</th>
</tr>
</thead>
<tbody>
<tr>
<td>201</td>
<td>2019-01-01 10:45:00</td>
<td>1492.9</td>
<td>1493.85</td>
<td>1492.15</td>
<td>1492.9</td>
<td>7189</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 10:50:00</td>
<td>1492.9</td>
<td>1495.95</td>
<td>1492.2</td>
<td>1495.85</td>
<td>15440</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 10:55:00</td>
<td>1495.85</td>
<td>1495.95</td>
<td>1494</td>
<td>1494.5</td>
<td>8360</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 11:00:00</td>
<td>1494.5</td>
<td>1494.5</td>
<td>1492</td>
<td>1492.05</td>
<td>9910</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 11:05:00</td>
<td>1492.05</td>
<td>1493.9</td>
<td>1492</td>
<td>1493.35</td>
<td>14961</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 11:10:00</td>
<td>1493.4</td>
<td>1493.4</td>
<td>1488</td>
<td>1489.25</td>
<td>16493</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 11:15:00</td>
<td>1489.25</td>
<td>1492</td>
<td>1489.25</td>
<td>1490.6</td>
<td>14590</td>
</tr>
<tr>
<td>201</td>
<td>2019-01-01 11:20:00</td>
<td>1490.6</td>
<td>1491.65</td>
<td>1490</td>
<td>1491.5</td>
<td>3470</td>
</tr>
</tbody>
</table>
</div>
<p>While executing the below code:</p>
<pre><code>df_f = df.groupby(['script_id', pd.Grouper(key='date_time', freq=f'{tf}T')])\
.agg(open=pd.NamedAgg(column='open', aggfunc='first'),
high=pd.NamedAgg(column='high', aggfunc='max'),
low=pd.NamedAgg(column='low', aggfunc='min'),
close=pd.NamedAgg(column='close', aggfunc='last'),
volume=pd.NamedAgg(column='volume', aggfunc='sum'))\
.reset_index()
print(df_f)
</code></pre>
<p>The result is (have removed unwanted details from here):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date_time</th>
</tr>
</thead>
<tbody>
<tr>
<td>2019-01-01 10:40:00</td>
</tr>
<tr>
<td>2019-01-01 10:50:00</td>
</tr>
<tr>
<td>2019-01-01 11:00:00</td>
</tr>
<tr>
<td>2019-01-01 11:10:00</td>
</tr>
</tbody>
</table>
</div><hr />
<p>But it should be (have removed unwanted details from here):- <strong>(Expected Result)</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date_time</th>
</tr>
</thead>
<tbody>
<tr>
<td>2019-01-01 10:45:00</td>
</tr>
<tr>
<td>2019-01-01 10:55:00</td>
</tr>
<tr>
<td>2019-01-01 11:05:00</td>
</tr>
<tr>
<td>2019-01-01 11:15:00</td>
</tr>
</tbody>
</table>
</div>
|
<p>Seems like you just need to supply the offset argument when you call <code>pd.Grouper(... offset="5T")</code></p>
<pre><code>df_f = df.groupby(['script_id', pd.Grouper(key='date_time', freq='10T', offset="5T")])\
.agg(open=pd.NamedAgg(column='open', aggfunc='first'),
high=pd.NamedAgg(column='high', aggfunc='max'),
low=pd.NamedAgg(column='low', aggfunc='min'),
close=pd.NamedAgg(column='close', aggfunc='last'),
volume=pd.NamedAgg(column='volume', aggfunc='sum'))\
.reset_index()
print(df_f)
script_id date_time open high low close volume
0 201 2019-01-01 10:45:00 1492.90 1495.95 1492.15 1495.85 22629
1 201 2019-01-01 10:55:00 1495.85 1495.95 1492.00 1492.05 18270
2 201 2019-01-01 11:05:00 1492.05 1493.90 1488.00 1489.25 31454
3 201 2019-01-01 11:15:00 1489.25 1492.00 1489.25 1491.50 18060
</code></pre>
<hr />
<p>Older versions of <code>pandas.Grouper</code> objects use <code>base</code> instead of <code>offset</code>. <code>pd.Grouper(..., base=5)</code></p>
<pre><code>>>> df_f = df.groupby(['script_id', pd.Grouper(key='date_time', freq=f'10T', base=5)])\
.agg(open=pd.NamedAgg(column='open', aggfunc='first'),
high=pd.NamedAgg(column='high', aggfunc='max'),
low=pd.NamedAgg(column='low', aggfunc='min'),
close=pd.NamedAgg(column='close', aggfunc='last'),
volume=pd.NamedAgg(column='volume', aggfunc='sum'))\
.reset_index()
print(df_f)
script_id date_time open high low close volume
0 201 2019-01-01 10:45:00 1492.90 1495.95 1492.15 1495.85 22629
1 201 2019-01-01 10:55:00 1495.85 1495.95 1492.00 1492.05 18270
2 201 2019-01-01 11:05:00 1492.05 1493.90 1488.00 1489.25 31454
3 201 2019-01-01 11:15:00 1489.25 1492.00 1489.25 1491.50 18060
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
2,979
| 68,990,413
|
colors are not consistently applied to categories in subplots
|
<p>Given the following code, I can't understand why the information is not shown correctly (check columns c and f): both are drawn as if they had the same value, even though they are different.</p>
<p>What I need is to plot some of the columns in the df and share the legend between all subplots
(all columns have the same possible values, ["SI","NO"]).
(This is just sample code.)</p>
<pre><code>colnames=['a','b','c','d','e','f','g']
values=[
['SI','SI','NO','SI','SI','SI','NO'],
['SI','NO','NO','SI','NO','SI','NO'],
['SI','SI','NO','SI','SI','SI','NO'],
['SI','NO','NO','NO','NO','SI','SI'],
['SI','NO','NO','NO','NO','SI','NO']]
df=pd.DataFrame(values, columns=colnames)
def pieplotstest(df):
fig,ax =plt.subplots(2,3,facecolor=(1, 1, 1),figsize=(7.2,4.3))
plt.style.use('fivethirtyeight')
colors=["#172a3d","#e33e31"]
textprops=dict(color="w",weight='bold',size=5)
labels=['NO','SI']
ax[0,0].pie(df['b'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.7
)
ax[0,1].pie(df['c'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.75
)
ax[0,2].pie(df['d'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.75
)
ax[1,0].pie(df['e'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.75
)
ax[1,1].pie(df['f'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.75
)
ax[1,2].pie(df['g'].value_counts(),
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.75
)
ax[0,0].set_title('b',fontsize=10)
ax[0,1].set_title('c',fontsize=10)
ax[0,2].set_title('d',fontsize=10)
ax[1,0].set_title('e',fontsize=10)
ax[1,1].set_title('f',fontsize=10)
ax[1,2].set_title('g',fontsize=10)
fig.legend(labels,
loc=4,
fontsize=7
)
plt.suptitle('Como estan distribuidas tus ventas')
fig.tight_layout()
plt.savefig(f'orders3.png',dpi=600,transparent=True)
</code></pre>
<p>the result is :
<a href="https://i.stack.imgur.com/FKYcL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKYcL.png" alt="enter image description here" /></a></p>
|
<p>So the commenter was correct: when a column has only one distinct value, <code>value_counts()</code> runs into issues.</p>
<p>So I transformed the DF with:</p>
<pre class="lang-py prettyprint-override"><code>df = df.T.apply(pd.Series.value_counts, axis=1).fillna(0).reset_index()
df.columns = ('question', 'no', 'si')
</code></pre>
<p>Created a list of subplot indices we need to make the code cleaner:</p>
<pre class="lang-py prettyprint-override"><code>subplot_list = []
for i in range(2):
for j in range(3):
subplot_list.append([i,j])
</code></pre>
<p>and added a loop to go through each question (question being 'a', 'b', etc.):</p>
<pre class="lang-py prettyprint-override"><code> for index, row in new_df.iterrows():
if row['question'] == 'a':
pass
else:
current_sub = subplot_list[index-1]
row_num = current_sub[0]
column = current_sub[1]
ax[row_num,column].pie(
[row['no'], row['si']],
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.7
)
ax[row_num,column].set_title(row['question'],fontsize=10)
</code></pre>
<p>And the full code with output is below:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
colnames=['a','b','c','d','e','f','g']
values=[
['SI','SI','NO','SI','SI','SI','NO'],
['SI','NO','NO','SI','NO','SI','NO'],
['SI','SI','NO','SI','SI','SI','NO'],
['SI','NO','NO','NO','NO','SI','SI'],
['SI','NO','NO','NO','NO','SI','NO']]
df=pd.DataFrame(values,columns=colnames)
df = df.T.apply(pd.Series.value_counts, axis=1).fillna(0).reset_index()
df.columns = ('question', 'no', 'si')
subplot_list = []
for i in range(2):
for j in range(3):
subplot_list.append([i,j])
def pieplotstest(df):
fig,ax =plt.subplots(2,3,facecolor=(1, 1, 1),figsize=(7.2,4.3))
plt.style.use('fivethirtyeight')
colors=["#172a3d","#e33e31"]
textprops=dict(color="w",weight='bold',size=5)
labels=['NO','SI']
    for index, row in df.iterrows():
if row['question'] == 'a':
pass
else:
current_sub = subplot_list[index-1]
row_num = current_sub[0]
column = current_sub[1]
ax[row_num,column].pie(
[row['no'], row['si']],
colors=colors,
autopct = '%1.1f%%',
textprops=textprops,
wedgeprops=dict(width=0.5),
pctdistance=0.7
)
ax[row_num,column].set_title(row['question'],fontsize=10)
fig.legend(labels,
loc=4,
fontsize=7
)
plt.suptitle('Como estan distribuidas tus ventas')
plt.savefig(f'orders3.png',dpi=600,transparent=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/rillw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rillw.png" alt="Pie chart in a loop" /></a></p>
|
python|pandas|matplotlib
| 1
|
2,980
| 69,204,712
|
Delete rows above certain value once number is reached
|
<p>I have a large dataset where I am interested in the part where it shuts down and when it is shut down. However, the data also includes data of the startup which I want to filter out.</p>
<p>The data goes down to <0.2, stays there for a while and then goes up again >0.2. I want to delete the part where it has been <0.2 before and is going up to >0.2.</p>
<p>I have used a standard filter, but since I am still interested in the first part this does not seem to work. Just looking at the derivative is also not an option, since the value can go up and down in the beginning as well; the only difference with the latter part is that it has been <0.2 before.</p>
<p>How can I do this?</p>
<pre><code>import pandas as pd
data = {
"Date and Time": ["2020-06-07 00:00", "2020-06-07 00:01", "2020-06-07 00:02", "2020-06-07 00:03", "2020-06-07 00:04", "2020-06-07 00:05", "2020-06-07 00:06", "2020-06-07 00:07", "2020-06-07 00:08", "2020-06-07 00:09", "2020-06-07 00:10", "2020-06-07 00:11", "2020-06-07 00:12", "2020-06-07 00:13", "2020-06-07 00:14", "2020-06-07 00:15", "2020-06-07 00:16", "2020-06-07 00:17", "2020-06-07 00:18", "2020-06-07 00:19", "2020-06-07 00:20", "2020-06-07 00:21", "2020-06-07 00:22", "2020-06-07 00:23", "2020-06-07 00:24", "2020-06-07 00:25", "2020-06-07 00:26", "2020-06-07 00:27", "2020-06-07 00:28", "2020-06-07 00:29"],
"Value": [16.2, 15.1, 13.8, 12.0, 11.9, 12.1, 10.8, 9.8, 8.3, 6.2, 4.3, 4.2, 4.2, 3.3, 1.8, 0.1, 0.05, 0.15, 0.1, 0.18, 0.25, 1, 4, 8, 12.0, 12.0, 12.0, 12.0, 12.0, 12.0],
}
df = pd.DataFrame(data)
</code></pre>
<p>Required output:</p>
<pre><code>data = {
"Date and Time": ["2020-06-07 00:00", "2020-06-07 00:01", "2020-06-07 00:02", "2020-06-07 00:03", "2020-06-07 00:04", "2020-06-07 00:05", "2020-06-07 00:06", "2020-06-07 00:07", "2020-06-07 00:08", "2020-06-07 00:09", "2020-06-07 00:10", "2020-06-07 00:11", "2020-06-07 00:12", "2020-06-07 00:13", "2020-06-07 00:14", "2020-06-07 00:15", "2020-06-07 00:16", "2020-06-07 00:17", "2020-06-07 00:18", "2020-06-07 00:19"],
"Value": [16.2, 15.1, 13.8, 12.0, 11.9, 12.1, 10.8, 9.8, 8.3, 6.2, 4.3, 4.2, 4.2, 3.3, 1.8, 0.1, 0.05, 0.15, 0.1, 0.18],
}
</code></pre>
|
<p>You can identify the switching points (above 0.2 to under and vice versa) using <code>(df['Value'] < 0.2).diff()</code> and then use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a>. Note that the first element of <code>diff</code> is <code>NaN</code>, so fill it with <code>False</code> first, otherwise the first row gets dropped. To remove any parts of the dataframe after the value has been below 0.2 for any period of time, simply remove any rows with a cumsum of 2 or more.</p>
<pre><code>s = (df['Value'] < 0.2).diff().fillna(False).cumsum()
df.loc[s < 2]
</code></pre>
<p>Result:</p>
<pre><code> Date and Time Value
0 2020-06-07 00:00 16.20
1 2020-06-07 00:01 15.10
2 2020-06-07 00:02 13.80
3 2020-06-07 00:03 12.00
4 2020-06-07 00:04 11.90
5 2020-06-07 00:05 12.10
6 2020-06-07 00:06 10.80
7 2020-06-07 00:07 9.80
8 2020-06-07 00:08 8.30
9 2020-06-07 00:09 6.20
10 2020-06-07 00:10 4.30
11 2020-06-07 00:11 4.20
12 2020-06-07 00:12 4.20
13 2020-06-07 00:13 3.30
14 2020-06-07 00:14 1.80
15 2020-06-07 00:15 0.10
16 2020-06-07 00:16 0.05
17 2020-06-07 00:17 0.15
18 2020-06-07 00:18 0.10
19 2020-06-07 00:19 0.18
</code></pre>
|
python|pandas
| 3
|
2,981
| 44,524,901
|
How do I multiply matrices in PyTorch?
|
<p>With numpy, I can do a simple matrix multiplication like this:</p>
<pre><code>a = numpy.ones((3, 2))
b = numpy.ones((2, 1))
result = a.dot(b)
</code></pre>
<p>However, this does not work with PyTorch:</p>
<pre><code>a = torch.ones((3, 2))
b = torch.ones((2, 1))
result = torch.dot(a, b)
</code></pre>
<p>This code throws the following error:</p>
<blockquote>
<p>RuntimeError: 1D tensors expected, but got 2D and 2D tensors</p>
</blockquote>
<p>How do I perform matrix multiplication in PyTorch?</p>
|
<p>Use <a href="https://pytorch.org/docs/stable/generated/torch.mm.html" rel="nofollow noreferrer"><code>torch.mm</code></a>:</p>
<pre><code>torch.mm(a, b)
</code></pre>
<p><code>torch.dot()</code> behaves differently to <code>np.dot()</code>. There's been some discussion about what would be desirable <a href="https://github.com/pytorch/pytorch/issues/138" rel="nofollow noreferrer">here</a>. Specifically, <code>torch.dot()</code> treats both <code>a</code> and <code>b</code> as 1D vectors (irrespective of their original shape) and computes their inner product. The error is thrown because this behaviour makes your <code>a</code> a vector of length 6 and your <code>b</code> a vector of length 2; hence their inner product can't be computed. For matrix multiplication in PyTorch, use <code>torch.mm()</code>. Numpy's <code>np.dot()</code> in contrast is more flexible; it computes the inner product for 1D arrays and performs matrix multiplication for 2D arrays.</p>
<p><a href="https://pytorch.org/docs/master/torch.html#torch.matmul" rel="nofollow noreferrer"><code>torch.matmul</code></a> performs matrix multiplications if both arguments are <code>2D</code> and computes their dot product if both arguments are <code>1D</code>. For inputs of such dimensions, its behaviour is the same as <code>np.dot</code>. It also lets you do broadcasting or <code>matrix x matrix</code>, <code>matrix x vector</code> and <code>vector x vector</code> operations in batches.</p>
<pre><code># 1D inputs, same as torch.dot
a = torch.rand(n)
b = torch.rand(n)
torch.matmul(a, b) # torch.Size([])
# 2D inputs, same as torch.mm
a = torch.rand(m, k)
b = torch.rand(k, j)
torch.matmul(a, b) # torch.Size([m, j])
</code></pre>
|
python|matrix|pytorch|matrix-multiplication
| 117
|
2,982
| 44,458,434
|
Python: break up dataframe (one row per entry in column, instead of multiple entries in column)
|
<p>I have a solution to a problem that, to my despair, is somewhat slow, and I am seeking advice on how to speed it up (by vectorization or other clever methods). I have a dataframe that looks like this:</p>
<pre><code>toy = pd.DataFrame([[1,'cv','c,d,e'],[2,'search','a,b,c,d,e'],[3,'cv','d']],
columns=['id','ch','kw'])
</code></pre>
<p>Output is:</p>
<p><a href="https://i.stack.imgur.com/DRtJO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DRtJO.png" alt="enter image description here"></a></p>
<p>The task is to break up <code>kw</code> column into one (replicated) row per comma-separated entry in each string. Thus, what I wish to achieve is:</p>
<p><a href="https://i.stack.imgur.com/JG4LI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JG4LI.png" alt="enter image description here"></a></p>
<p>My initial solution is the following:</p>
<pre><code>data = pd.DataFrame()
for x in toy.itertuples():
id = x.id; ch = x.ch; keys = x.kw.split(",")
data = data.append([[id, ch, x] for x in keys], ignore_index=True)
data.columns = ['id','ch','kw']
</code></pre>
<p>Problem is: it is slow for larger dataframes. My hope is that someone has encountered a similar problem before, and knows how to optimize my solution. I'm using python 3.4.x and pandas 0.19+ if that is of importance.</p>
<p>Thank you!</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> for <code>list</code>s, then get <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.len.html" rel="nofollow noreferrer"><code>len</code></a> for <code>length</code>.</p>
<p>Last, create a new <code>DataFrame</code> via the constructor with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>numpy.repeat</code></a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer"><code>numpy.concatenate</code></a>:</p>
<pre><code>cols = toy.columns
splitted = toy['kw'].str.split(',')
l = splitted.str.len()
toy = pd.DataFrame({'id':np.repeat(toy['id'], l),
'ch':np.repeat(toy['ch'], l),
'kw':np.concatenate(splitted)})
toy = toy.reindex(columns=cols)  # reindex_axis was deprecated and later removed
print (toy)
id ch kw
0 1 cv c
0 1 cv d
0 1 cv e
1 2 search a
1 2 search b
1 2 search c
1 2 search d
1 2 search e
2 3 cv d
</code></pre>
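<p>On pandas 0.25+ there is also a built-in for this: split into lists, then <code>explode</code> the list column (usually slower than the NumPy construction above on large frames, but very readable):</p>
<pre><code>toy = toy.assign(kw=toy['kw'].str.split(',')).explode('kw')
</code></pre>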
|
performance|pandas|python-3.4
| 2
|
2,983
| 60,949,936
|
Why bilinear scaling of images with PIL and pytorch produces different results?
|
<p>In order to feed an image to the pytorch network I first need to downscale it to some fixed size. At first I did it using the PIL.Image.resize() method, with interpolation mode set to BILINEAR. Then I thought it would be more convenient to first convert a batch of images to a pytorch tensor and then use the torch.nn.functional.interpolate() function to scale the whole tensor at once on a GPU ('bilinear' interpolation mode as well). This led to a decrease in model accuracy because now during inference the type of scaling (torch) was different from the one used during training (PIL). After that, I compared the two methods of downscaling visually and found out that they produce different results. Pillow downscaling seems smoother. Do these methods perform different operations under the hood though both being bilinear? If so, I am also curious if there is a way to achieve the same result as Pillow image scaling with torch tensor scaling? </p>
<p><a href="https://i.stack.imgur.com/2o4Ay.png" rel="noreferrer">Original image</a> (the well-known Lenna image)</p>
<p>Pillow scaled image:</p>
<p><a href="https://i.stack.imgur.com/WOqWj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WOqWj.png" alt="Pillow scaled image"></a></p>
<p>Torch scaled image:</p>
<p><a href="https://i.stack.imgur.com/J9zhJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/J9zhJ.png" alt="Torch scaled image"></a></p>
<p>Mean channel absolute difference map:</p>
<p><a href="https://i.stack.imgur.com/ynTQD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ynTQD.png" alt="Mean channel absolute difference map"></a></p>
<p>Demo code:</p>
<pre><code>import numpy as np
from PIL import Image
import torch
import torch.nn.functional as F
from torchvision import transforms
import matplotlib.pyplot as plt
pil_to_torch = transforms.ToTensor()
res_shape = (128, 128)
pil_img = Image.open('Lenna.png')
torch_img = pil_to_torch(pil_img)
pil_image_scaled = pil_img.resize(res_shape, Image.BILINEAR)
torch_img_scaled = F.interpolate(torch_img.unsqueeze(0), res_shape, mode='bilinear').squeeze(0)
pil_image_scaled_on_torch = pil_to_torch(pil_image_scaled)
relative_diff = torch.abs((pil_image_scaled_on_torch - torch_img_scaled) / pil_image_scaled_on_torch).mean().item()
print('relative pixel diff:', relative_diff)
pil_image_scaled_numpy = pil_image_scaled_on_torch.cpu().numpy().transpose([1, 2, 0])
torch_img_scaled_numpy = torch_img_scaled.cpu().numpy().transpose([1, 2, 0])
plt.imsave('pil_scaled.png', pil_image_scaled_numpy)
plt.imsave('torch_scaled.png', torch_img_scaled_numpy)
plt.imsave('mean_diff.png', np.abs(pil_image_scaled_numpy - torch_img_scaled_numpy).mean(-1))
</code></pre>
<p>Python 3.6.6, requirements:</p>
<pre><code>cycler==0.10.0
kiwisolver==1.1.0
matplotlib==3.2.1
numpy==1.18.2
Pillow==7.0.0
pyparsing==2.4.6
python-dateutil==2.8.1
six==1.14.0
torch==1.4.0
torchvision==0.5.0
</code></pre>
|
<p>"Bilinear interpolation" is an interpolation method.</p>
<p>But downscaling an image is not necessarily only accomplished using interpolation.</p>
<p>It is possible to simply resample the image as a lower sampling rate, using an interpolation method to compute new samples that don't coincide with old samples. But this leads to aliasing (which is what you get when higher frequency components in the image cannot be represented at the lower sampling density, "aliasing" the energy of these higher frequencies onto lower frequency components; that is, new low frequency components appear in the image after the resampling).</p>
<p>To avoid aliasing, some libraries apply a low-pass filter (remove high frequencies that cannot be represented at the lower sampling frequency) before resampling. The subsampling algorithm in these libraries do much more than just interpolating.</p>
<p>The difference you see is because these two libraries take different approaches, one tries to avoid aliasing by low-pass filtering, the other doesn't.</p>
<p>To obtain the same results in Torch as in Pillow, you need to explicitly low-pass filter the image yourself. To get identical results you will have to figure out exactly how Pillow filters the image, there are different methods and different possible parameter settings. Looking at the source code is the best way to find out exactly what they do.</p>
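<p>As a practical note: more recent PyTorch versions (1.11+) expose an <code>antialias</code> flag on <code>interpolate</code> that applies such a low-pass filter before resampling and brings the result much closer to Pillow's:</p>
<pre><code>torch_img_scaled = F.interpolate(torch_img.unsqueeze(0), res_shape,
                                 mode='bilinear', antialias=True).squeeze(0)
</code></pre>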
|
python|image-processing|pytorch|python-imaging-library
| 8
|
2,984
| 60,791,997
|
How to check URL status for multiple URLs stored in a CSV file and save results to a new CSV file
|
<p>I'm new to python and currently trying to achieve the following:</p>
<p>I want to check HTTP response status codes for multiple URLs in my input.csv file:</p>
<pre><code>id url
1 https://www.google.com
2 https://www.example.com
3 https://www.testtesttest.com
...
</code></pre>
<p>and save results as an additional column 'status' flagging those URLs that are down or with some other issues in my output.csv file:</p>
<pre><code>id url status
1 https://www.google.com All good!
2 https://www.example.com All good!
3 https://www.testt75esttest.com Down
...
</code></pre>
<p>So far I have tried the following, unsuccessfully:</p>
<pre><code>import requests
import pandas as pd
import requests.exceptions
df = pd.read_csv('path/to/my/input.csv')
urls = df.T.values.tolist()[1]
try:
r = requests.get(urls)
r.raise_for_status()
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
print "Down"
except requests.exceptions.HTTPError:
print "4xx, 5xx"
else:
print "All good!"
</code></pre>
<p>I'm not sure how I could get the results for the above and save them as a new column in the output.csv file:</p>
<pre><code>df['status'] = #here the result
df.to_csv('path/to/my/output.csv', index=False)
</code></pre>
<p>Would someone be able to help with this? Thanks in advance! </p>
|
<pre><code>id url
1 https://www.google.com
2 https://www.example.com
3 https://www.testtesttest.com
</code></pre>
<p>Copy the above to clipboard. Then, run the below code. You need to loop through the urls and append the status to a list. Then, set the list as a new column.</p>
<pre><code>import requests
import pandas as pd
import requests.exceptions
df = pd.read_clipboard()
df
urls = df['url'].tolist()
status = []
for url in urls:
try:
        r = requests.get(url, timeout=10)  # without a timeout, the Timeout handler below can never fire
r.raise_for_status()
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
status.append("Down")
except requests.exceptions.HTTPError:
status.append("4xx, 5xx")
else:
status.append("All good!")
df['status'] = status
df.to_csv('path/to/my/output.csv', index=False)
</code></pre>
|
pandas|web-scraping|python-requests
| 2
|
2,985
| 71,580,700
|
Creating list from imported CSV file with pandas
|
<p>I am trying to create a list from a CSV. This CSV contains a 2 dimensional table [540 rows and 8 columns] and I would like to create a list that contains the values of an specific column, column 4 to be specific.</p>
<p>I tried <code>list(df.columns.values)[4]</code>, but that returns the name of the column; I'm trying to get the values from the rows of column 4 and make them a list.</p>
<pre><code>import pandas as pd
import urllib
#This is the empty list
company_name = []
#Uploading CSV file
df = pd.read_csv('Downloads\Dropped_Companies.csv')
#Extracting list of all companies name from column "Name of Stock"
companies_column=list(df.columns.values)[4] #This returns the name of the column.
</code></pre>
|
<pre><code>companies_column = list(df.iloc[:,4].values)
</code></pre>
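<p>If you prefer selecting by the header instead of the position, the same list can be built by name (assuming the column is called <code>'Name of Stock'</code> as in your comment):</p>
<pre><code>company_name = df['Name of Stock'].tolist()
</code></pre>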
|
python|pandas|csv
| 1
|
2,986
| 71,773,219
|
How to segment and get the time between two dates in pandas?
|
<p>I want to know how long it has been driven during certain time segments; in this case we want to look at one-hour segments.
To build the segments I use the following code:</p>
<pre><code>init = '2022-03-10 01:00:00'
end = '2022-03-10 06:00:00'
freq = '1h'
bucket = pd.DataFrame(
{'start_date': pd.date_range(start=init, end=end, freq=freq)}
)
bucket['end_date'] = bucket['start_date'] + pd.Timedelta(seconds=3600)
bucket
start_date end_date
0 2022-03-10 01:00:00 2022-03-10 02:00:00
1 2022-03-10 02:00:00 2022-03-10 03:00:00
2 2022-03-10 03:00:00 2022-03-10 04:00:00
3 2022-03-10 04:00:00 2022-03-10 05:00:00
4 2022-03-10 05:00:00 2022-03-10 06:00:00
5 2022-03-10 06:00:00 2022-03-10 07:00:00
</code></pre>
<p>From the database I get all the trips that have been made in the start and end time periods. For this example, the data obtained is:</p>
<pre><code>df = pd.DataFrame({
'start_date': ['2022-03-10 01:20:00', '2022-03-10 02:18:00', '2022-03-10 02:10:00', '2022-03-10 02:40:00', '2022-03-10 02:45:00', '2022-03-10 03:05:00', '2022-03-10 03:12:00', '2022-03-10 05:30:00'],
'end_date': ['2022-03-10 01:32:00', '2022-03-10 02:42:00', '2022-03-10 02:23:00', '2022-03-10 03:20:00', '2022-03-10 02:58:00', '2022-03-10 03:28:00', pd.NA, '2022-03-10 05:48:00'],
'number_of_trip': ["637hui", "384nfj", "102fiu", "948pvc", "473mds", "103fkd", "905783", "498wsq"],
'id': [1, 2, 3, 4, 5, 6, 7, 8]
})
df = df.replace(pd.NA, datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
df['start_date'] = pd.to_datetime(df['start_date'])
df['end_date'] = pd.to_datetime(df['end_date'])
start_date end_date number_of_trip id seconds
0 2022-03-10 01:20:00 2022-03-10 01:32:00 637hui 1 720
1 2022-03-10 02:18:00 2022-03-10 02:42:00 384nfj 2 1440
2 2022-03-10 02:10:00 2022-03-10 02:23:00 102fiu 3 780
3 2022-03-10 02:40:00 2022-03-10 03:20:00 948pvc 4 2400
4 2022-03-10 02:45:00 2022-03-10 02:58:00 473mds 5 780
5 2022-03-10 03:05:00 2022-03-10 03:28:00 103fkd 6 1380
6 2022-03-10 03:12:00 2022-04-06 17:04:15 905783 7 0
7 2022-03-10 05:30:00 2022-03-10 05:48:00 498wsq 8 1080
</code></pre>
<p>Now, I can't do a direct summation like this:</p>
<pre><code>times = df.resample(freq, on='start_date')['seconds'].sum()
start_date
2022-03-10 01:00:00 720
2022-03-10 02:00:00 5400
2022-03-10 03:00:00 1380
2022-03-10 04:00:00 0
2022-03-10 05:00:00 1080
</code></pre>
<p>The reason is that it only takes the start date but not the end date, this means that if a trip enters into two segments, it is only added in the first segment when in reality it should add one part in one segment and the other in another segment.</p>
<p>For the segment from <code>2:00:00 to 3:00:00</code> there are 4 rows that can enter, but not complete. The rows are:</p>
<pre><code>1 2022-03-10 02:18:00 2022-03-10 02:42:00 384nfj 2 1440
2 2022-03-10 02:10:00 2022-03-10 02:23:00 102fiu 3 780
3 2022-03-10 02:40:00 2022-03-10 03:20:00 948pvc 4 2400 # This is in two segments 2022-03-10 02:00:00 to 2022-03-10 03:00:00 (1200) and 2022-03-10 03:00:00 to 2022-03-10 04:00:00 (1200)
4 2022-03-10 02:45:00 2022-03-10 02:58:00 473mds 5 780
</code></pre>
<p>The resample code adds these 4 rows (<code>1440+780+2400+780</code>), giving <code>5400</code>. However, row 3 spans two segments, <code>2:00-3:00</code> and <code>3:00-4:00</code>, so for the segment <code>2:00-3:00</code> only the elapsed time inside it should be counted: from <code>2:40:00</code> to <code>3:00:00</code>, which is 1200 seconds. That makes the sum for the <code>2:00-3:00</code> segment <code>1440+780+1200+780 = 4200</code>.</p>
<p>I have some code that gives the values correctly, but it is very slow. is there a way in pandas to improve this?</p>
<p>CODE:</p>
<pre><code>from datetime import timedelta, datetime
import pandas as pd
def bucket_count(seconds_bucket, data, inicio, fin):
result = pd.DataFrame()
bucket = pd.DataFrame(
{'start_date': pd.date_range(start=inicio, end=fin, freq="H"),
"color": "#FF0000"}
)
bucket['end_date'] = bucket['start_date'] + pd.Timedelta(seconds=seconds_bucket)
for index, row_bucket in bucket.iterrows():
inicio = row_bucket['start_date']
fin = inicio + timedelta(seconds=seconds_bucket)
df = data[(
((inicio < data['start_date']) & (
fin > data['start_date'])) |
((data['end_date'] > inicio) & (data['end_date'] < fin)))
]
for index, row in df.iterrows():
counter = 0
if row['start_date'] > inicio < fin and row['end_date'] < fin > inicio:
seconds = int((row['end_date'] - row['start_date']).total_seconds())
counter += seconds
elif row['start_date'] < inicio and row['end_date'] < fin:
seconds = int((row['end_date'] - inicio).total_seconds())
counter += seconds
elif row['start_date'] > inicio and row['end_date'] > fin:
seconds = int((fin - row['start_date']).total_seconds())
counter += seconds
row['start_date'] = inicio
row['end_date'] = fin
row['seconds'] = counter
result = pd.concat([result, row.to_frame().T], ignore_index=True)
return result.groupby(['start_date'])["seconds"].apply(lambda x: x.astype(int).sum()).reset_index()
inicio = '2022-03-10 01:00:00'
fin = '2022-03-10 06:00:00'
df = pd.DataFrame({
'start_date': ['2022-03-10 01:20:00', '2022-03-10 02:18:00', '2022-03-10 02:10:00', '2022-03-10 02:40:00', '2022-03-10 02:45:00', '2022-03-10 03:05:00', '2022-03-10 03:12:00', '2022-03-10 05:30:00'],
'end_date': ['2022-03-10 01:32:00', '2022-03-10 02:42:00', '2022-03-10 02:23:00', '2022-03-10 03:20:00', '2022-03-10 02:58:00', '2022-03-10 03:28:00', pd.NA, '2022-03-10 05:48:00'],
'number_of_trip': ["637hui", "384nfj", "102fiu", "948pvc", "473mds", "103fkd", "905783", "498wsq"],
'id': [1, 2, 3, 4, 5, 6, 7, 8]
})
df = df.replace(pd.NA, datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
df['start_date'] = pd.to_datetime(df['start_date'])
df['end_date'] = pd.to_datetime(df['end_date'])
a = bucket_count(3600, df, inicio, fin)
</code></pre>
<p>This result is correct</p>
<pre><code> start_date seconds
0 2022-03-10 01:00:00 720
1 2022-03-10 02:00:00 4200
2 2022-03-10 03:00:00 5460
3 2022-03-10 05:00:00 1080
</code></pre>
|
<p>This is what I came up with.</p>
<p>First off the time on your method was:</p>
<pre><code>%%timeit
bucket_count(3600, df, inicio, fin)
>>> 44 ms ± 507 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Versus the method I came up with:</p>
<pre><code>%%timeit
bucket_count_new(3600, df, inicio, fin)
>>>17.1 ms ± 577 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>So about 50% faster. I think it will be even quicker on larger samples but I did not test this.</p>
<p>Additionally, I think your method has a bug in it, but I did not troubleshoot it. Here is what I came up with and the result.</p>
<pre><code>import pandas as pd

def bucket_count_new(seconds_bucket: int, data: pd.DataFrame, start: pd.Timestamp, end: pd.Timestamp):
# filter out rows that are not within the range of start to end
filtered_data = data.loc[data.start_date.lt(end) & data.end_date.gt(start)]
# cut records start and ending outside of range to time range
filtered_data.loc[filtered_data.start_date.lt(start), 'start_date'] = start
filtered_data.loc[filtered_data.end_date.gt(end) | filtered_data.end_date.isnull(), 'end_date'] = end
# add total trip seconds within date range
filtered_data['trip_seconds'] = (filtered_data.end_date - filtered_data.start_date).dt.total_seconds()
results = pd.DataFrame(
{'start_date': pd.date_range(start=start, end=end, freq=pd.Timedelta(seconds=seconds_bucket)),
'total_seconds': 0})
for ind, trip_start, trip_end, trip_num, trip_id, trip_seconds in filtered_data.to_records():
# get first bucket index after start
first_ind_after_start = results['start_date'].gt(trip_start).values.argmax()
# calculate first timedelta to get to next bucket start
first_timedelta = (results.loc[first_ind_after_start, 'start_date'] - trip_start).total_seconds()
# if the first timedelta is less than the total seconds for that trip add the trip_seconds else the timedelta
results.loc[
first_ind_after_start - 1, 'total_seconds'] += first_timedelta if first_timedelta <= trip_seconds else trip_seconds
# subtract time from trip added to first bucket for trip
trip_seconds -= first_timedelta
# iterate through next buckets subtracting time until gone
ind = first_ind_after_start
while trip_seconds >= seconds_bucket:
results.loc[ind, 'total_seconds'] += seconds_bucket
trip_seconds -= seconds_bucket
ind += 1
if trip_seconds > 0:
results.loc[ind, 'total_seconds'] = trip_seconds
return results
inicio = pd.Timestamp('2022-03-10 01:00:00')
fin = pd.Timestamp('2022-03-10 06:00:00')
df = pd.DataFrame({
'start_date': ['2022-03-10 01:20:00', '2022-03-10 02:18:00', '2022-03-10 02:10:00', '2022-03-10 02:40:00',
'2022-03-10 02:45:00', '2022-03-10 03:05:00', '2022-03-10 03:12:00', '2022-03-10 05:30:00'],
'end_date': ['2022-03-10 01:32:00', '2022-03-10 02:42:00', '2022-03-10 02:23:00', '2022-03-10 03:20:00',
'2022-03-10 02:58:00', '2022-03-10 03:28:00', pd.NA, '2022-03-10 05:48:00'],
'number_of_trip': ["637hui", "384nfj", "102fiu", "948pvc", "473mds", "103fkd", "905783", "498wsq"],
'id': [1, 2, 3, 4, 5, 6, 7, 8]
})
df['start_date'] = pd.to_datetime(df['start_date'])
df['end_date'] = pd.to_datetime(df['end_date'])
a = bucket_count_new(3600, df.copy(deep=True), start=inicio, end=fin)
</code></pre>
<p>Result:</p>
<pre><code>a
start_date total_seconds
0 2022-03-10 01:00:00 720
1 2022-03-10 02:00:00 4200
2 2022-03-10 03:00:00 2580
3 2022-03-10 04:00:00 0
4 2022-03-10 05:00:00 1080
5 2022-03-10 06:00:00 0
</code></pre>
<p>I believe this is correct unless I did my first filtering incorrectly, because the total seconds within the range of start to end is the same as the total in <code>a</code>. But I did not have time to debug it. Please let me know if this is the correct result.</p>
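<p>For a fully vectorized alternative (pandas 1.2+ for <code>how='cross'</code>), one sketch is to cross-join the trips with the question's <code>bucket</code> frame, clip each pair to its overlap and sum:</p>
<pre><code>import numpy as np

m = df.merge(bucket.rename(columns={'start_date': 'b_start', 'end_date': 'b_end'}),
             how='cross')
overlap = (np.minimum(m['end_date'], m['b_end'])
           - np.maximum(m['start_date'], m['b_start'])).dt.total_seconds()
result = (m.assign(seconds=overlap.clip(lower=0))
           .groupby('b_start')['seconds'].sum().reset_index())
</code></pre>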
|
python|pandas|dataframe|numpy
| 1
|
2,987
| 42,148,670
|
Python convert large numpy array to pandas dataframe
|
<p>I have a chunk of code that I received that only works with pandas dataframes as input. I currently have a pretty large numpy array. I need to convert this into a pandas dataframe. </p>
<p>The DataFrame will be 288 rows (289 counting the column names) and 1801 columns. I have an array of size 1801 that will be all of the column names in the dataframe. Then I have an array of size (288) which will fill the first column. Then I have an array of shape (1800, 288) that will fill columns 2-1801. Is there an easy way to turn this into a dataframe without individually defining all 1801 columns?</p>
<p>I know I could define columns like <code>column2=array[0,:]</code>, <code>column3=array[1,:]</code>, but that would be a lot of work for 1801 columns.</p>
|
<p>You can pass a numpy array directly to the DataFrame constructor:</p>
<pre><code>In [11]: a = np.random.rand(3, 5)
In [12]: a
Out[12]:
array([[ 0.46154984, 0.08813473, 0.57746049, 0.42924157, 0.34689139],
[ 0.29731858, 0.83300176, 0.15884604, 0.44753895, 0.56840054],
[ 0.02479636, 0.76544594, 0.24388046, 0.06679485, 0.94890838]])
In [13]: pd.DataFrame(a)
Out[13]:
0 1 2 3 4
0 0.461550 0.088135 0.577460 0.429242 0.346891
1 0.297319 0.833002 0.158846 0.447539 0.568401
2 0.024796 0.765446 0.243880 0.066795 0.948908
In [14]: pd.DataFrame(a.T)
Out[14]:
0 1 2
0 0.461550 0.297319 0.024796
1 0.088135 0.833002 0.765446
2 0.577460 0.158846 0.243880
3 0.429242 0.447539 0.066795
4 0.346891 0.568401 0.948908
</code></pre>
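<p>For the exact shapes in the question, a sketch (the variable names here are invented): with <code>col_names</code> of length 1801, <code>first_col</code> of shape <code>(288,)</code> and <code>rest</code> of shape <code>(1800, 288)</code>:</p>
<pre><code>data = np.column_stack([first_col, rest.T])  # (288, 1801)
df = pd.DataFrame(data, columns=col_names)
</code></pre>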
|
python|arrays|pandas|numpy|dataframe
| 5
|
2,988
| 43,212,725
|
Tensorflow multi-GPU training and variable scope
|
<p><strong>Context</strong></p>
<p>I'm working on a detector model on multiple GPUs using Tensorflow 1.0. As suggested <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="nofollow noreferrer">here</a>, gradients are computed on multiple GPUs individually and are averaged on the CPU. To share the trainable variables (e.g. weights and biases) across the GPU towers, the <code>reuse</code> flag is turned on using <code>tf.get_variable_scope().reuse_variables()</code>, as in the cifar10 example. The difference is that I am using an <code>AdamOptimizer</code> instead of <code>GradientDescentOptimizer</code>.</p>
<p><strong>Problem</strong></p>
<p>When I run the training job, it prints out a long stacktrace and raise the following error at <code>opt.apply_gradients()</code>:</p>
<p><code>ValueError: Variable conv1_1/kernel/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
</code></p>
<p><strong>Analysis</strong></p>
<p>Looking into the source code I found that the <code>AdamOptimizer</code> is creating a number of zero-initialized slots within the <code>_create_slots()</code> method, wherein it calls the <code>_zeros_slot()</code>. This calls a separate module called the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/slot_creator.py#L62" rel="nofollow noreferrer"><code>slot_creator</code></a> (source code linked).</p>
<p><strong>In <code>line 62</code> of the <code>slot_creator</code>, it uses <code>variable_scope.get_variable()</code>. This used to be <code>tf.Variable()</code> in 0.12.</strong></p>
<p>My understanding of variable scopes is that <code>variable_scope.get_variable()</code> would fail to create a variable <strong>if the <code>reuse</code> flag is on</strong>. See <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/variable_scope.py#L669" rel="nofollow noreferrer">here</a> for the source code.</p>
<p>But the cifar10 example by Tensorflow creators seems to suggest enabling reuse to share variables across the GPU towers using <code>tf.get_variable_scope().reuse_variables()</code>. This happens <strong>before</strong> we average and apply the gradients. It looks like Tensorflow 1.0 refuses to create variables for the <code>AdamOptimizer</code>.</p>
<p>This happens for all optimizers that directly or indirectly call the <code>slot_creator</code> module.</p>
<p><strong>Question</strong></p>
<p>As a quick fix, I added a custom function to the <code>VariableScope</code> class to disable the <code>_reuse</code> flag right before calling <code>opt.apply_gradients</code>. However, there is presumably a reason the <code>reuse</code> flag is designed to only ever be set to <code>True</code>, so I am not sure what the better workaround would be. Any suggestions?</p>
|
<p>In the cifar10 code, moving <code>grads = opt.compute_gradients(loss)</code> in front of the <code>tf.get_variable_scope().reuse_variables()</code> line should solve the problem.</p>
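<p>A minimal sketch of that reordering, assuming a cifar10-style tower loop (<code>tower_loss</code>, <code>num_gpus</code> and <code>tower_grads</code> are placeholders from that example, not part of this answer):</p>
<pre><code># Sketch only: names follow the cifar10 multi-GPU example.
tower_grads = []
with tf.variable_scope(tf.get_variable_scope()):
    for i in range(num_gpus):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('tower_%d' % i) as scope:
                loss = tower_loss(scope)
                # Compute the gradients first ...
                grads = opt.compute_gradients(loss)
                # ... then turn on reuse for the remaining towers.
                tf.get_variable_scope().reuse_variables()
                tower_grads.append(grads)
# reuse reverts when the enclosing variable_scope exits, so
# opt.apply_gradients() can still create the Adam slot variables.
</code></pre>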
|
tensorflow|computer-vision
| 0
|
2,989
| 72,202,826
|
python equivalent of group_by, mutate using cur_group() (i.e. value of grouping variable)
|
<p>If I have a frame <code>df</code> and a function <code>f()</code> in R that look like these:</p>
<pre><code>df = data.frame(
group=c("cat","fish","horse","cat","fish","horse","cat","horse"),
x = c(1,4,7,2,5,8,3,9)
)
f <- function(animal,x) {
nchar(animal) + mean(x)*(x+1)
}
</code></pre>
<p>applying <code>f()</code> to each group to add new column with the result of <code>f()</code> is straightforward:</p>
<pre><code>library(dplyr)
mutate(group_by(df,group),result=f(cur_group(),x))
</code></pre>
<p>Output:</p>
<pre><code> group x result
<chr> <dbl> <dbl>
1 cat 1 7
2 fish 4 26.5
3 horse 7 69
4 cat 2 9
5 fish 5 31
6 horse 8 77
7 cat 3 11
8 horse 9 85
</code></pre>
<p>What is the correct way to do the same in python if <code>d</code> is a <code>pandas.DataFrame</code>?</p>
<pre><code>import numpy as np
import pandas as pd
d = pd.DataFrame({"group":["cat","fish","horse","cat","fish","horse","cat","horse"], "x":[1,4,7,2,5,8,3,9]})
def f(animal,x):
return [np.mean(x)*(k+1) + len(animal) for k in x]
</code></pre>
<p>I know I can get the "correct" values like this:</p>
<pre><code>d.groupby("group").apply(lambda g: f(g.name,g.x))
</code></pre>
<p>and can "explode" that into a single <code>Series</code> using <code>.explode()</code>, but what is the correct way to get the values added to the frame, in the correct order, etc:</p>
<p>Expected Output (python)</p>
<pre><code> group x result
0 cat 1 7.0
1 fish 4 26.5
2 horse 7 69.0
3 cat 2 9.0
4 fish 5 31.0
5 horse 8 77.0
6 cat 3 11.0
7 horse 9 85.0
</code></pre>
|
<p>We have <code>transform</code>, which broadcasts the per-group mean back onto the original rows:</p>
<pre><code>d['out'] = d.groupby('group')['x'].transform('mean').mul(d['x'].add(1)) + d['group'].str.len()
Out[540]:
0 7.0
1 26.5
2 69.0
3 9.0
4 31.0
5 77.0
6 11.0
7 85.0
dtype: float64
</code></pre>
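<p>If you specifically want the <code>cur_group()</code> pattern (the group label available inside your own <code>f</code>), a sketch using <code>groupby.apply</code> with the original function; the group label comes through as <code>g.name</code>:</p>
<pre><code># Wrap f's list result in a Series indexed like the group so the
# values land back on the original rows in the original order.
d['result'] = (d.groupby('group', group_keys=False)
                .apply(lambda g: pd.Series(f(g.name, g.x), index=g.index)))
</code></pre>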
|
python|r|pandas|dataframe
| 1
|
2,990
| 50,601,134
|
How do I calculate the confusion matrix in PyTorch efficiently?
|
<p>I have a tensor that contains my predictions and a tensor that contains the actual labels for my binary classification problem. How can I calculate the confusion matrix efficiently?</p>
|
<p>After my first version using a for-loop has proven inefficient, this is the fastest solution I came up with so far, for two equal-dimensional tensors <code>prediction</code> and <code>truth</code>:</p>
<pre class="lang-python prettyprint-override"><code>def confusion(prediction, truth):
confusion_vector = prediction / truth
true_positives = torch.sum(confusion_vector == 1).item()
false_positives = torch.sum(confusion_vector == float('inf')).item()
true_negatives = torch.sum(torch.isnan(confusion_vector)).item()
false_negatives = torch.sum(confusion_vector == 0).item()
return true_positives, false_positives, true_negatives, false_negatives
</code></pre>
<p>Commented version and test-case at <a href="https://gist.github.com/the-bass/cae9f3976866776dea17a5049013258d" rel="nofollow noreferrer">https://gist.github.com/the-bass/cae9f3976866776dea17a5049013258d</a></p>
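<p>A quick hedged check with float 0/1 tensors (floats are needed so that 1/0 gives inf and 0/0 gives nan):</p>
<pre><code>import torch

prediction = torch.tensor([1., 1., 0., 0.])
truth      = torch.tensor([1., 0., 0., 1.])

print(confusion(prediction, truth))  # expected: (1, 1, 1, 1)
</code></pre>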
|
pytorch
| 1
|
2,991
| 45,539,117
|
How feed iterator of numpy array to tensorflow Estimator/Evaluable
|
<p>I have an iterator function that yields one batch of features and labels as a tuple of numpy arrays:</p>
<pre><code>def batch_iter():
    for ...:
        yield (np_features, np_labels)
</code></pre>
<p>and then I try to feed the Estimator like this:</p>
<pre><code># the cnn_model_fn will print out shapes of various tensor when
# constructing the model
classifier = learn.Estimator(
model_fn=cnn_model_fn, model_dir="/tmp/convnet_model")
for train_data, train_labels in batch_iter():
classifier.fit(
input_fn=lambda: (tf.constant(train_data), tf.constant(train_labels)),
steps=1,
monitors=[logging_hook])
</code></pre>
<p>The (annotated) log looks like</p>
<pre><code>conv1 shape (100, 16, 20, 32)
pool1 shape (100, 8, 10, 32)
conv2 shape (100, 8, 10, 64)
pool2 shape (100, 4, 5, 64)
onehot label shape (100, 5)
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into /tmp/convnet_model/model.ckpt. # checkpoint is saved in every iteration
INFO:tensorflow:step = 1, loss = 1618.76
INFO:tensorflow:Loss for final step: 1618.76.
conv1 shape (100, 16, 20, 32) # the model_fn is called in every iteration
pool1 shape (100, 8, 10, 32)
conv2 shape (100, 8, 10, 64)
pool2 shape (100, 4, 5, 64)
onehot label shape (100, 5)
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from /tmp/convnet_model/model.ckpt-1 # checkpoint is restored in every iteration
INFO:tensorflow:Saving checkpoints for 2 into /tmp/convnet_model/model.ckpt.
INFO:tensorflow:step = 2, loss = 69370.6
INFO:tensorflow:Loss for final step: 69370.6.
conv1 shape (100, 16, 20, 32)
pool1 shape (100, 8, 10, 32)
conv2 shape (100, 8, 10, 64)
pool2 shape (100, 4, 5, 64)
onehot label shape (100, 5)
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from /tmp/convnet_model/model.ckpt-2
INFO:tensorflow:Saving checkpoints for 3 into /tmp/convnet_model/model.ckpt.
INFO:tensorflow:step = 3, loss = 289303.0
INFO:tensorflow:Loss for final step: 289303.0.
...
</code></pre>
<p>The batches are read and the losses do go down as the loop iterates. However, it seems the checkpoints are saved and restored in every iteration, and the model_fn is called in every iteration, so I feel that's not right.</p>
<p>What's the right way to feed an iterator to Estimator/Evaluable?</p>
|
<p>In your <code>input_fn</code> you can use <code>tf.contrib.training.python_input</code>.</p>
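<p>If the data fits in memory, another option is sketched below (hedged: it assumes a TF 1.x version where <code>tf.estimator.inputs.numpy_input_fn</code> exists, and a <code>model_fn</code> that reads <code>features['x']</code>). Build a single input function over the full arrays and call <code>fit</code> once, so the checkpoint is not saved and restored per batch:</p>
<pre><code>import numpy as np
import tensorflow as tf

# Stack all batches from batch_iter() into two big arrays (illustrative).
features = np.concatenate([f for f, _ in batch_iter()])
labels = np.concatenate([l for _, l in batch_iter()])

input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': features}, y=labels,
    batch_size=100, num_epochs=1, shuffle=True)

# One fit call -> one graph build, one checkpoint cycle.
classifier.fit(input_fn=input_fn, steps=None)
</code></pre>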
|
python|numpy|tensorflow
| 1
|
2,992
| 62,812,706
|
Calculate pandas dataframe column not using for loops
|
<p>I have a data frame like this:</p>
<pre><code>Date Open High to Low X
27-Feb-15 A P x1
26-Feb-15 B Q x2
25-Feb-15 C R x3
24-Feb-15 D S x4
</code></pre>
<p>I need to calculate the X column values as follows:</p>
<pre><code>x1 = (P+Q)/B
x2 = (Q+R)/C
'
'
</code></pre>
<p>Is there any way to do this in pandas without using for loops?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.Rolling.sum.html" rel="nofollow noreferrer"><code>Rolling.sum</code></a> with division by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.div.html" rel="nofollow noreferrer"><code>Series.div</code></a>:</p>
<pre><code>print (df)
Date Open High to Low X
0 27-Feb-15 10 1 x1
1 26-Feb-15 20 2 x2
2 25-Feb-15 50 3 x3
3 24-Feb-15 100 4 x4
df['X'] = df['High to Low'].rolling(2).sum().div(df['Open'])
print (df)
Date Open High to Low X
0 27-Feb-15 10 1 NaN
1 26-Feb-15 20 2 0.15
2 25-Feb-15 50 3 0.10
3 24-Feb-15 100 4 0.07
</code></pre>
<p>If it is necessary to shift the data, add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>Series.shift</code></a>:</p>
<pre><code>df['X'] = df['High to Low'].rolling(2).sum().div(df['Open']).shift(-1)
print (df)
Date Open High to Low X
0 27-Feb-15 10 1 0.15
1 26-Feb-15 20 2 0.10
2 25-Feb-15 50 3 0.07
3 24-Feb-15 100 4 NaN
</code></pre>
|
python|pandas
| 2
|
2,993
| 62,751,639
|
Put dict/json to dataframe
|
<p>I have the following input:</p>
<pre><code>{'1LsquDfKDtz1uFz7txAVixkgFc82PHwqqp': {"balance": 0}, '1FBGyQnLZrfwVZRdYNxbrqnKukm9trH5Ka': {"balance": 0}, '1DSBqLVtDFgMypdo2yC77C5LZuTCHZS7St': {"balance": 34},
...
</code></pre>
<p>That I would like to put in a dataframe. But when I run the code:</p>
<pre><code>p = pd.DataFrame.from_dict(input)
</code></pre>
<p>I got the error:</p>
<pre><code>ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>Expected output in xlsx:</p>
<pre><code> addresse balance
'1LsquDfKDtz1uFz7txAVixkgFc82PHwqqp' 0
'1FBGyQnLZrfwVZRdYNxbrqnKukm9trH5Ka' 0
'1DSBqLVtDFgMypdo2yC77C5LZuTCHZS7St' 34
</code></pre>
<p>Any contribution would be appreciated.</p>
|
<p>Add the parameter <code>orient='index'</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.from_dict.html" rel="nofollow noreferrer"><code>DataFrame.from_dict</code></a>, then set the index name with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a>, and finally convert the index to a column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p>
<pre><code>d = {'1LsquDfKDtz1uFz7txAVixkgFc82PHwqqp': {"balance": 0},
'1FBGyQnLZrfwVZRdYNxbrqnKukm9trH5Ka': {"balance": 0},
'1DSBqLVtDFgMypdo2yC77C5LZuTCHZS7St': {"balance": 34}}
</code></pre>
<hr />
<pre><code>p = pd.DataFrame.from_dict(d, orient='index').rename_axis('addresse').reset_index()
print (p)
addresse balance
0 1DSBqLVtDFgMypdo2yC77C5LZuTCHZS7St 34
1 1FBGyQnLZrfwVZRdYNxbrqnKukm9trH5Ka 0
2 1LsquDfKDtz1uFz7txAVixkgFc82PHwqqp 0
</code></pre>
<p>Another idea is to use a list comprehension that prepends the key to each dictionary, then pass the list to the <code>DataFrame</code> constructor:</p>
<pre><code>p = pd.DataFrame([dict(**{'addresse':k}, **v) for k, v in d.items()])
print (p)
addresse balance
0 1LsquDfKDtz1uFz7txAVixkgFc82PHwqqp 0
1 1FBGyQnLZrfwVZRdYNxbrqnKukm9trH5Ka 0
2 1DSBqLVtDFgMypdo2yC77C5LZuTCHZS7St 34
</code></pre>
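<p>Either way, the result can then be written to the expected xlsx file with <code>to_excel</code> (a one-liner; an Excel writer such as openpyxl must be installed, and the file name here is illustrative):</p>
<pre><code>p.to_excel('balances.xlsx', index=False)
</code></pre>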
|
python|pandas
| 1
|
2,994
| 62,777,242
|
What is the best way to install tensorflow and mongodb in docker?
|
<p>I want to create a Docker container or image that has both tensorflow and mongodb installed. I have seen that there are Docker images for each application, but I need them working together: from a mongodb database I must extract the data to feed a model created in tensorflow.</p>
<p>I want to know whether a configuration like that is possible. I have tried using an ubuntu container and installing the applications I need inside it, but I don't know if there is a better way to do it.</p>
<p>Thanks.</p>
|
<p>Interesting that I found this post, since I just worked out one solution myself. It may not be the one for you, but here it is.</p>
<p>What I did is: <code>docker pull mongo</code> and run it as a daemon:</p>
<pre><code>#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
</code></pre>
<p>Here</p>
<ul>
<li>the 'd' in '-itd' runs the container as a daemon (like a service, not
interactive).</li>
<li>The --volume mount is optional here.</li>
</ul>
<p>Then <code>docker pull tensorflow/tensorflow</code> and run it with:</p>
<pre><code>#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
</code></pre>
<p>Here</p>
<ul>
<li>the -u flag runs the container shell with the same ownership as the host machine;</li>
<li>the --volume flag maps the host folder /home/user/code to /code in the container;</li>
<li>the -w flag starts the container shell in /code, which is /home/user/code on the host;</li>
<li>the -e HOME= option sets the shell's $HOME folder so that you can pip install later.</li>
</ul>
<p>Now you have a bash prompt from which you can:</p>
<ul>
<li>create a virtual env folder under /code (which maps to /home/user/code),</li>
<li>activate the venv,</li>
<li>pip install pymongo,</li>
<li>then connect to the mongodb instance running in Docker (localhost may not work; use the host IP address).</li>
</ul>
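<p>As a hedged refinement of that last point: putting both containers on a user-defined Docker network lets the tensorflow container reach mongo by container name instead of the host IP (the container and network names here are illustrative):</p>
<pre><code>#!/bin/bash
# Create a shared network and attach both containers to it.
docker network create ml-net

docker run -itd --name mongodb --network ml-net --publish 27017:27017 mongo

docker run -it --rm --name tensorflow --network ml-net \
    tensorflow/tensorflow bash
# Inside the container, pymongo can then connect with:
#   MongoClient("mongodb://mongodb:27017/")
</code></pre>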
|
mongodb|docker|tensorflow
| 0
|
2,995
| 54,555,880
|
Pandas pivot_table using columns names
|
<p>I have a pandas dataframe that looks like this:</p>
<pre><code>ID, tag, score1
A1, T1, 10
A1, T1, 0
A1, T2, 20
A1, T2, 0
A2, T1, 10
A2, T1, 10
A2, T2, 20
A2, T2, 20
</code></pre>
<p>Using the pandas pivot_table function, I am able to pivot the table to obtain the following dataframe:</p>
<pre><code>df.pivot_table(index= 'tag' , columns='ID', values= 'score1' , aggfunc='mean')
A1, A2
T1 5, 10
T2 10, 20
</code></pre>
<p>Now let's say that my input dataframe has multiple <code>score</code> columns:</p>
<pre><code>ID, tag, score1, score2, score3
A1, T1, 10, 100, 1000
A1, T1, 0, 0, 0
A1, T2, 20, 200, 2000
A1, T2, 0, 0, 0
A2, T1, 10, 100, 1000
A2, T1, 10, 100, 1000
A2, T2, 20, 200, 2000
A2, T2, 20, 200, 2000
</code></pre>
<p>And I am looking for a way to <code>pivot</code> the data to get the following result:</p>
<pre><code>df.pivot_table(index= ??? , columns='ID', values= ??? , aggfunc='mean').round(-3)
A1, A2
score1 7.5, 15
score2 75, 150
score3 750, 1500
</code></pre>
<p>This time I don't want to pivot using the values of a column, but directly using multiple column names.</p>
<p>Is there a way to do this using <code>pivot_table()</code>, or am I going in the wrong direction?</p>
|
<p>Aggregate <code>mean</code> per <code>ID</code> and then transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>T</code></a>; grouping by <code>ID</code>, not <code>tag</code>, gives the <code>A1</code>/<code>A2</code> columns of the expected output:</p>
<pre><code>df = df.groupby('ID').mean().T
print (df)
ID          A1      A2
score1     7.5    15.0
score2    75.0   150.0
score3   750.0  1500.0
</code></pre>
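<p>If you specifically want <code>pivot_table()</code>, a sketch starting again from the original frame: first melt the score columns into long form so their names become values of a single column, then pivot on that column:</p>
<pre><code># 'variable' is renamed to 'score'; melt's default value column is 'value'.
long = df.melt(id_vars=['ID', 'tag'], var_name='score')
out = long.pivot_table(index='score', columns='ID',
                       values='value', aggfunc='mean')
</code></pre>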
|
python|pandas|dataframe|pivot-table
| 1
|
2,996
| 54,261,202
|
Find those entries in numpy array bigger than input datetime
|
<p>I would like to get back those entries of a datetime numpy array that are later than my input datetime variable.</p>
<p>Unfortunately, I get this error when executing the code below:</p>
<pre><code>TypeError: '>' not supported between instances of 'int' and 'datetime.datetime'
</code></pre>
<p>This is my code:</p>
<pre><code>import numpy as np
import pandas as pd
myRange = pd.date_range('2018-04-09', periods=5, freq='1D20min')
myArray = np.array(myRange).astype(np.datetime64).reshape(-1,1)
print("myArray:", myArray)
myDatetime = pd.datetime(2018,4,10,2,59,59)
myArray[myArray>myDatetime]
</code></pre>
<p>Output:</p>
<pre><code>myArray: [['2018-04-09T00:00:00.000000000']
['2018-04-10T00:20:00.000000000']
['2018-04-11T00:40:00.000000000']
['2018-04-12T01:00:00.000000000']
['2018-04-13T01:20:00.000000000']]
</code></pre>
|
<p>The problem comes from comparing: <BR>
myArray (of type np.datetime64) with <BR>
myDatetime (of type pd.datetime) </p>
<p>Changing myDatetime to a numpy datetime64 gives the expected result. </p>
<pre><code>import numpy as np
import pandas as pd
myRange = pd.date_range('2018-04-09', periods=5, freq='1D20min')
myArray = np.array(myRange).astype(np.datetime64).reshape(-1,1)
print("myArray:", myArray)
myDatetime = np.datetime64("2018-04-10T02:59:59")
myArray[myArray>myDatetime]
</code></pre>
<p>gives:</p>
<pre><code>myArray: [['2018-04-09T00:00:00.000000000']
['2018-04-10T00:20:00.000000000']
['2018-04-11T00:40:00.000000000']
['2018-04-12T01:00:00.000000000']
['2018-04-13T01:20:00.000000000']]
Out[27]:
array(['2018-04-11T00:40:00.000000000',
'2018-04-12T01:00:00.000000000',
'2018-04-13T01:20:00.000000000'], dtype='datetime64[ns]')
</code></pre>
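<p>Instead of hard-coding the timestamp string, the existing datetime object can also be converted directly, since np.datetime64 accepts a datetime.datetime:</p>
<pre><code>myDatetime = np.datetime64(pd.datetime(2018, 4, 10, 2, 59, 59))
myArray[myArray > myDatetime]
</code></pre>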
|
python|numpy
| 1
|
2,997
| 73,812,311
|
grouping the Datetime column on timestamp
|
<p>Existing Dataframe:</p>
<pre><code>Unique_Id Date
A 11-01-2022 10:20:30.500
A 11-01-2022 13:10:10:258
A 11-01-2022 17:30:22.223
A 11-01-2022 23:20:38.222
B 02-02-2022 08:25:30.000
B 04-02-2022 11:35:40.928
</code></pre>
<p>Expected Dataframe:</p>
<pre><code>Unique_Id Date Time_Group
A 11-01-2022 10:20:30.500 9AM - 12AM
A 11-01-2022 13:10:10:258 12PM - 3PM
A 11-01-2022 17:30:22.223 5PM - 8PM
A 11-01-2022 23:20:38.222 9PM - 12PM
B 02-02-2022 08:25:30.000 6AM - 9AM
B 04-02-2022 11:35:40.928 9AM - 12AM
</code></pre>
<p>I am trying to bin the time into the Time_Group column.</p>
<p>I approached it using <code>pd.cut(df.Date.dt.hour)</code>, but I am stuck on the binning part.</p>
|
<p>Here is one approach:</p>
<pre><code>bins = [0, 9, 12, 15, 18, 24]

# Convert an hour as int to 'xAM'/'xPM'. Note: '%-I' (no leading
# zero) is POSIX-only; on Windows use '%#I' instead.
h_to_str = lambda x: pd.to_datetime(str(x), format='%H').strftime('%-I%p')

h_as_str = [h_to_str(x % 24) for x in bins]
labels = [f'{a} - {b}' for a, b in zip(h_as_str, h_as_str[1:])]

df['Time_Group'] = pd.cut(pd.to_datetime(df['Date']).dt.hour,
                          bins=bins, labels=labels)
</code></pre>
<p>output:</p>
<pre><code> Unique_Id Date Time_Group
0 A 11-01-2022 10:20:30.500 9AM - 12PM
1 A 11-01-2022 13:10:10.258 12PM - 3PM
2 A 11-01-2022 17:30:22.223 3PM - 6PM
3 A 11-01-2022 23:20:38.222 6PM - 12AM
4 B 02-02-2022 08:25:30.000 12AM - 9AM
5 B 04-02-2022 11:35:40.928 9AM - 12PM
</code></pre>
|
pandas
| 1
|
2,998
| 73,809,000
|
In dataframe, if column is not available in sheet1 then ask to and ignore case and space sensitive
|
<p>Match all column names from Sheet2 against Sheet1; if anything new appears, ask whether to add it to Sheet3, ignoring case and spaces.</p>
<p>Sheet1 has "fName" and Sheet2 has "fname"; these should be treated as the same column (ignoring case), and the column in <strong>Sheet2</strong> should be renamed to "fName".</p>
<p>Likewise Sheet1 has "Full Name" and Sheet2 has "fullname"; ignoring space and case they are the same, so the column in <strong>Sheet2</strong> should be renamed to "Full Name".</p>
<p>And "Zip" is not in Sheet1, so ask whether to add Zip (Y/N); if yes, add it as a
separate column in <strong>Sheet3</strong>.</p>
<blockquote>
<p>No change in Sheet1; replace the column names in Sheet2; add the new columns in Sheet3</p>
</blockquote>
<p><strong>Sheet1</strong></p>
<pre><code>Id fName Address1 Full Name
1 Parth 1, street Parth Hirpara
2 Ujas 10, avenue Ujas Gajera
</code></pre>
<p><strong>Sheet2</strong></p>
<pre><code>Id fname Zip fullname
3 Keval 123 Keval Borad
4 Vivek 456 Keyur Jasani
</code></pre>
<p><strong>Sheet2 after changing columns name</strong></p>
<pre><code>Id fName Zip Full Name
3 Keval 123 Keval Borad
4 Vivek 456 Keyur Jasani
</code></pre>
<p><strong>Final Data Sheet Columns</strong></p>
<blockquote>
<p>Ignore data-related things</p>
</blockquote>
<p><strong>Sheet3</strong></p>
<pre><code>Id fName Address1 Full Name Zip
...
...
...
...
</code></pre>
|
<p>Try the code below after loading Sheet1 & Sheet2 as df1 & df2 respectively...</p>
<pre><code>df3 = df1.copy()

lst_col1 = df1.columns.to_list()
lst_col2 = df2.columns.to_list()

# Normalized (lower-case, space-stripped) names for comparison
lst_col1_temp = [str(col).lower().replace(" ", "") for col in lst_col1]
lst_col2_temp = [str(col).lower().replace(" ", "") for col in lst_col2]

# Renaming columns in Sheet2 (df2)
for col1 in lst_col1:
    col1_temp = str(col1).lower().replace(" ", "")
    if col1 not in lst_col2 and col1_temp in lst_col2_temp:
        lst_col2[lst_col2_temp.index(col1_temp)] = col1
df2.columns = lst_col2  # apply the renamed headers back to df2

# Adding new columns to Sheet3 (df3)
for col2 in lst_col2:
    col2_temp = str(col2).lower().replace(" ", "")
    if col2_temp not in lst_col1_temp:
        df3[col2] = df2[col2]
</code></pre>
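<p>To cover the "ask (Y/N)" requirement, the second loop could prompt before copying (a small sketch; the prompt wording is illustrative):</p>
<pre><code>for col2 in lst_col2:
    col2_temp = str(col2).lower().replace(" ", "")
    if col2_temp not in lst_col1_temp:
        # Ask before adding the new column to Sheet3
        if input(f"Add column '{col2}'? (Y/N) ").strip().lower() == "y":
            df3[col2] = df2[col2]
</code></pre>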
<p>Hope this helps...</p>
|
python|pandas|dataframe
| 1
|
2,999
| 52,434,353
|
fillcolor property colorstring hsl string not working
|
<p>I have a colorscale:</p>
<pre><code>colorscale2 = ['hsl(-2221.0, 60.0%, 98.0%)',
'hsl(-2192.0460921843687, 59.791583166332664%, 97.88777555110221%)',
'hsl(-2163.0921843687374, 59.58316633266533%, 97.7755511022044%)',
'hsl(-2134.138276553106, 59.37474949899799%, 97.66332665330661%)',
'hsl(-2105.184368737475, 59.166332665330664%, 97.55110220440882%)',
'hsl(-2076.2304609218436, 58.95791583166333%, 97.43887775551102%)',
'hsl(-2047.2765531062123, 58.74949899799599%, 97.32665330661322%)',
'hsl(-2018.3226452905812, 58.54108216432866%, 97.21442885771543%)',
'hsl(-1989.36873747495, 58.33266533066132%, 97.10220440881764%)',
'hsl(-1960.4148296593187, 58.124248496993985%, 96.98997995991984%)',
'hsl(-1931.4609218436874, 57.91583166332666%, 96.87775551102204%)',
'hsl(-1902.5070140280561, 57.70741482965932%, 96.76553106212425%)',
'hsl(-1873.5531062124248, 57.498997995991985%, 96.65330661322645%)',
'hsl(-1844.5991983967936, 57.29058116232465%, 96.54108216432866%)',
'hsl(-1815.6452905811623, 57.08216432865731%, 96.42885771543087%)',
'hsl(-1786.6913827655312, 56.87374749498998%, 96.31663326653306%)',
'hsl(-1757.7374749499, 56.66533066132264%, 96.20440881763527%)',
'hsl(-1728.7835671342687, 56.45691382765531%, 96.09218436873748%)']
</code></pre>
<p>but I receive the following error:</p>
<pre><code>File "C:\ProgramData\Miniconda3\lib\site-packages\_plotly_utils\basevalidators.py", line 1100, in validate_coerce
self.raise_invalid_val(v)
File "C:\ProgramData\Miniconda3\lib\site-packages\_plotly_utils\basevalidators.py", line 243, in raise_invalid_val
valid_clr_desc=self.description()))
ValueError:
Invalid value of type 'builtins.str' received for the 'fillcolor' property of scatter
Received value: 'hsl(-2221.0, 60.0%, 98.0%)'
The 'fillcolor' property is a color and may be specified as:
- A hex string (e.g. '#ff0000')
- An rgb/rgba string (e.g. 'rgb(255,0,0)')
- An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
- An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
- A named CSS color:
aliceblue, antiquewhite, aqua, aquamarine, azure,
beige, bisque, black, blanchedalmond, blue,
blueviolet, brown, burlywood, cadetblue,
chartreuse, chocolate, coral, cornflowerblue,
cornsilk, crimson, cyan, darkblue, darkcyan,
darkgoldenrod, darkgray, darkgrey, darkgreen,
darkkhaki, darkmagenta, darkolivegreen, darkorange,
darkorchid, darkred, darksalmon, darkseagreen,
darkslateblue, darkslategray, darkslategrey,
darkturquoise, darkviolet, deeppink, deepskyblue,
dimgray, dimgrey, dodgerblue, firebrick,
floralwhite, forestgreen, fuchsia, gainsboro,
ghostwhite, gold, goldenrod, gray, grey, green,
greenyellow, honeydew, hotpink, indianred, indigo,
ivory, khaki, lavender, lavenderblush, lawngreen,
lemonchiffon, lightblue, lightcoral, lightcyan,
lightgoldenrodyellow, lightgray, lightgrey,
lightgreen, lightpink, lightsalmon, lightseagreen,
lightskyblue, lightslategray, lightslategrey,
lightsteelblue, lightyellow, lime, limegreen,
linen, magenta, maroon, mediumaquamarine,
mediumblue, mediumorchid, mediumpurple,
mediumseagreen, mediumslateblue, mediumspringgreen,
mediumturquoise, mediumvioletred, midnightblue,
mintcream, mistyrose, moccasin, navajowhite, navy,
oldlace, olive, olivedrab, orange, orangered,
orchid, palegoldenrod, palegreen, paleturquoise,
palevioletred, papayawhip, peachpuff, peru, pink,
plum, powderblue, purple, red, rosybrown,
royalblue, saddlebrown, salmon, sandybrown,
seagreen, seashell, sienna, silver, skyblue,
slateblue, slategray, slategrey, snow, springgreen,
steelblue, tan, teal, thistle, tomato, turquoise,
violet, wheat, white, whitesmoke, yellow,
yellowgreen
</code></pre>
<p>What is the problem? I've included my entire code below:</p>
<pre><code>import plotly.plotly as py
from plotly.figure_factory._county_choropleth import create_choropleth
import numpy as np
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/chessybo/Oil-Spill-map/468bd2205d85c7b0bfb4ebcd4bc4bf0ba408efb4/RRC_Spill_table/county_name%20%26%20fips%20%26%20net%20loss%20%26%20count%20(ordered%20by%20district%20%26%20grouped).csv')
colorscale2 = ['hsl(-2221.0, 60.0%, 98.0%)',
'hsl(-2192.0460921843687, 59.791583166332664%, 97.88777555110221%)',
'hsl(-2163.0921843687374, 59.58316633266533%, 97.7755511022044%)',
'hsl(-2134.138276553106, 59.37474949899799%, 97.66332665330661%)',
'hsl(-2105.184368737475, 59.166332665330664%, 97.55110220440882%)',
'hsl(-2076.2304609218436, 58.95791583166333%, 97.43887775551102%)',
'hsl(-2047.2765531062123, 58.74949899799599%, 97.32665330661322%)',
'hsl(-2018.3226452905812, 58.54108216432866%, 97.21442885771543%)',
'hsl(-1989.36873747495, 58.33266533066132%, 97.10220440881764%)',
'hsl(-1960.4148296593187, 58.124248496993985%, 96.98997995991984%)',
'hsl(-1931.4609218436874, 57.91583166332666%, 96.87775551102204%)',
'hsl(-1902.5070140280561, 57.70741482965932%, 96.76553106212425%)',
'hsl(-1873.5531062124248, 57.498997995991985%, 96.65330661322645%)',
'hsl(-1844.5991983967936, 57.29058116232465%, 96.54108216432866%)',
'hsl(-1815.6452905811623, 57.08216432865731%, 96.42885771543087%)',
'hsl(-1786.6913827655312, 56.87374749498998%, 96.31663326653306%)',
'hsl(-1757.7374749499, 56.66533066132264%, 96.20440881763527%)',
'hsl(-1728.7835671342687, 56.45691382765531%, 96.09218436873748%)']
endpts = [120, 240, 360, 480, 600, 720, 840, 960, 1080, 1225, 2225, 3225, 4225, 5225, 6225]
fips = df['fips'].tolist()
values = df['Net spill volume (BBL)'].tolist()
count=df['number_of_oil_spills'].tolist()
x=values
fig = create_choropleth(
fips=fips, values=values,
binning_endpoints=endpts,
colorscale=colorscale2,
show_state_data=False,
show_hover=True, centroid_marker={'opacity': 0},
scope=['TX'],
state_outline={'color': 'rgb(15, 15, 55)', 'width': 3},
asp=2.9, title='Oil Spills from 12/1/16 - 5/14/18',
legend_title='Net spill Volume (BBL)'
)
fig['layout']['legend'].update({'x': 0})
fig['layout']['annotations'][0].update({'x': -0.12, 'xanchor': 'left'})
py.plot(fig, filename='oil spill net loss')
</code></pre>
|
<p>From <a href="https://www.w3schools.com/colors/colors_hsl.asp" rel="nofollow noreferrer">https://www.w3schools.com/colors/colors_hsl.asp</a>, the first value of hsl is the hue, which has a range from 0 to 360.</p>
<p>Your hue values (e.g. -2221.0) fall outside that range, so plotly's color validator rejects them. You may want to pre-process the colorscale so that each hue lands in the valid range.</p>
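<p>A hedged sketch of such pre-processing, assuming the hues were meant to wrap around the color wheel (if they came from a buggy interpolation, regenerating them properly would be better):</p>
<pre><code>import re

def fix_hsl(color):
    # Parse 'hsl(h, s%, l%)' and wrap the hue into [0, 360).
    h, s, l = re.findall(r'-?\d+\.?\d*', color)
    return 'hsl({}, {}%, {}%)'.format(float(h) % 360.0, s, l)

colorscale2 = [fix_hsl(c) for c in colorscale2]
</code></pre>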
|
python|colors|plotly|geopandas|choropleth
| 1
|