| Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
7,300
| 60,353,610
|
Lookup child rows in a single DataFrame without using loops
|
<p>Currently I'm trying to extract meaningful data from a ticket system (Redmine). One of my tasks is to find <strong>all</strong> the child tickets for a list of tickets that interest me. Since the children have the same shape as their parents and live in the same DataFrame (so I cannot use <code>pd.merge</code>), and since a child can have children of its own, I first tried to find them recursively, but fairly quickly stumbled over</p>
<blockquote>
<p>maximum recursion depth exceeded error</p>
</blockquote>
<p>So my next approach was to do it iteratively instead. Unfortunately that makes me look up such tickets multiple times in nested loops, which really is too slow for my needs.
To not overwork your imagination any further, here's a possible example of the data I'm working on:</p>
<pre><code> id project.id status.id priority.id author.id assigned_to.id parent.id category.id
0 18543 18 5 2 85 85.0 18203.0 NaN
1 18542 18 5 2 85 85.0 18538.0 NaN
2 18541 71 5 2 67 67.0 17788.0 NaN
3 18540 18 5 3 105 85.0 NaN 150.0
4 18539 17 5 2 81 81.0 18537.0 NaN
.. ... ... ... ... ... ... ... ...
806 18257 4 1 2 3 NaN 16423.0 NaN
807 17738 11 1 2 3 NaN 17737.0 NaN
808 16017 65 2 2 81 NaN NaN NaN
809 2473 65 15 2 4 4.0 NaN NaN
810 16423 65 18 2 3 18.0 NaN NaN
[811 rows x 8 columns]
</code></pre>
<p>Think of it as a hierarchical tree structure. As you can see, it would be quite easy to work bottom-up through the <code>parent.id</code> field, which matches the <code>id</code> field of its parent, but traversing the thing top-down is not as straightforward.</p>
<p>A solution I came up with is this:</p>
<pre><code>def findChildren(issueId, issueFrame):
    # clean data
    safeElements = issueFrame.fillna(0)
    children = safeElements[safeElements['parent.id'] == issueId]
    childList = np.array(children['id'])
    listLength = 0
    # seek until the list of children does not grow anymore
    while listLength != len(childList):
        listLength = len(childList)
        for identifier in childList:
            children = safeElements[safeElements['parent.id'] == identifier]
            addToList = np.array(children['id'])
            childList = np.append(childList, addToList)
        childList = np.unique(childList)
    return childList
</code></pre>
<p>It works! But since I have to look up more than a single issue, it literally takes minutes to build all the lists of children I want. What would be a faster approach? The result doesn't need to be a list of children; I would also be happy with a filtered DataFrame which holds the rows of all children down to the last of their great-great-and-so-on-grandchildren.</p>
|
<p>The biggest performance bottleneck in your code is array matching. Searching an array is an <code>O(n)</code> operation; repeating it for each element of another array makes the whole operation <code>O(n*m)</code>. For a faster result, look the values up in a dictionary instead, whose lookup time is <code>O(1)</code> on average.</p>
<p>And there is a way to do it without recursion. Try this:</p>
<pre><code>def findChildren(issueId, issueFrame, cache=None):
    # Create a dictionary which lists all children an `id` has
    # The result is something like this:
    # {
    #     1: [2, 3, 4],  # 1 has children 2, 3, 4
    #     2: [5]         # 2 has child 5
    # }
    _cache = cache or issueFrame[['parent.id', 'id']].groupby('parent.id')['id'].apply(list).to_dict()
    # We start with the supplied `issueId`
    check_list = [issueId]
    # The final list of children
    ids = []
    while len(check_list) > 0:
        parent_id = check_list.pop()
        # Get all children of the current `parent_id`...
        children = _cache.get(parent_id, [])
        # ... then check if they have any children too
        check_list += children
        # ... and add them to the list of children for the current ticket
        ids += children
    # Finally, extract those children from the DataFrame
    return issueFrame[issueFrame['id'].isin(ids)]
</code></pre>
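<p>A minimal sketch of reusing the <code>cache</code> argument across many lookups, with toy data and hypothetical ticket ids (build the parent-to-children dictionary once, then pass it in for every issue):</p>

```python
import pandas as pd

def findChildren(issueId, issueFrame, cache=None):
    # same idea as above: dictionary lookups instead of repeated array matching
    _cache = cache or issueFrame.groupby('parent.id')['id'].apply(list).to_dict()
    check_list = [issueId]
    ids = []
    while check_list:
        parent_id = check_list.pop()
        children = _cache.get(parent_id, [])
        check_list += children   # descend into grandchildren as well
        ids += children
    return issueFrame[issueFrame['id'].isin(ids)]

df = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                   'parent.id': [None, 1, 1, 2, None]})
# build the parent -> children dict once, then reuse it for every lookup
cache = df.groupby('parent.id')['id'].apply(list).to_dict()
print(sorted(findChildren(1, df, cache)['id']))  # all descendants of ticket 1
```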
|
python|pandas|data-analysis
| 1
|
7,301
| 72,609,186
|
Find overlap start date and end date in Pandas Groupby
|
<p>I am trying to find when the overlap starts and when it ends in the following DataFrame.</p>
<p>I am able to determine the overlap cluster using the code below, and now I want to find out when the overlap begins and when it ends:</p>
<pre><code>d = [
    {'G1': 'A', 'G2': 'A1', 'Start_Date': '6/1/2020', 'End_Date': '5/31/2022'},
    {'G1': 'B', 'G2': 'A1', 'Start_Date': '12/1/2020', 'End_Date': '11/30/2021'},
    {'G1': 'B', 'G2': 'B1', 'Start_Date': '6/1/2020', 'End_Date': '5/31/2021'},
    {'G1': 'Y', 'G2': 'B1', 'Start_Date': '6/1/2021', 'End_Date': '6/1/2022'},
    {'G1': 'C', 'G2': 'C1', 'Start_Date': '1/1/2020', 'End_Date': '3/31/2020'},
    {'G1': 'C', 'G2': 'C1', 'Start_Date': '4/1/2020', 'End_Date': '5/31/2020'},
    {'G1': 'C', 'G2': 'C2', 'Start_Date': '6/1/2020', 'End_Date': '7/31/2020'},
    {'G1': 'I', 'G2': 'C3', 'Start_Date': '8/1/2020', 'End_Date': '10/31/2020'},
    {'G1': 'O', 'G2': 'C3', 'Start_Date': '11/1/2020', 'End_Date': '12/31/2021'},
    {'G1': 'D', 'G2': 'D1', 'Start_Date': '1/1/2020', 'End_Date': '2/28/2020'},
    {'G1': 'R', 'G2': 'D2', 'Start_Date': '3/1/2020', 'End_Date': '3/31/2020'},
    {'G1': 'F', 'G2': 'D4', 'Start_Date': '4/1/2020', 'End_Date': '8/31/2020'},
    {'G1': 'Y', 'G2': 'D4', 'Start_Date': '8/1/2020', 'End_Date': '10/31/2020'},
    {'G1': 'D', 'G2': 'D4', 'Start_Date': '11/1/2020', 'End_Date': '12/31/2021'},
]
df = pd.DataFrame(d)
df['Start_Date'] = pd.to_datetime(df['Start_Date'], format='%m/%d/%Y', errors='coerce')
df['End_Date'] = pd.to_datetime(df['End_Date'], format='%m/%d/%Y', errors='coerce')
df['Range'] = df.apply(lambda x: pd.date_range(start=x['Start_Date'], end=x['End_Date']), axis=1)

def determine_cluster(group):
    bucket = pd.DatetimeIndex(['1/1/1900'])
    for x in group:
        if x.isin(bucket).any():
            return [True] * len(group)
        bucket = bucket.append(x)
    return [False] * len(group)

df['Cluster'] = df.groupby(['G2'])['Range'].transform(determine_cluster)
df.drop('Range', axis=1)
</code></pre>
<p>It gives the result below:</p>
<p><a href="https://i.stack.imgur.com/tL6mn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tL6mn.png" alt="enter image description here" /></a></p>
<p>However, my desired output is:</p>
<p>Columns holding each group's (cluster's) overlap start and end date:
<a href="https://i.stack.imgur.com/ND4eP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ND4eP.png" alt="enter image description here" /></a></p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>def fn(x):
    z = (
        x.apply(
            lambda y: pd.date_range(y["Start_Date"], y["End_Date"]),
            axis=1,
        )
        .explode()
        .sort_values()
    )
    y = z[z.duplicated()]
    y = y[(y.diff() != pd.Timedelta("1d")).cumsum() == 1]
    if len(y) == 0:
        x["Cluster"] = False
    else:
        x["Overlap_Start_Date"] = y.iloc[0]
        x["Overlap_End_Date"] = y.iloc[-1]
        x["Cluster"] = True
    return x

x = df.groupby("G2").apply(fn)
print(x)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> G1 G2 Start_Date End_Date Overlap_Start_Date Overlap_End_Date Cluster
0 A A1 2020-06-01 2022-05-31 2020-12-01 2021-11-30 True
1 B A1 2020-12-01 2021-11-30 2020-12-01 2021-11-30 True
2 B B1 2020-06-01 2021-05-31 NaT NaT False
3 Y B1 2021-06-01 2022-06-01 NaT NaT False
4 C C1 2020-01-01 2020-03-31 NaT NaT False
5 C C1 2020-04-01 2020-05-31 NaT NaT False
6 C C2 2020-06-01 2020-07-31 NaT NaT False
7 I C3 2020-08-01 2020-10-31 NaT NaT False
8 O C3 2020-11-01 2021-12-31 NaT NaT False
9 D D1 2020-01-01 2020-02-28 NaT NaT False
10 R D2 2020-03-01 2020-03-31 NaT NaT False
11 F D4 2020-04-01 2020-08-31 2020-08-01 2020-08-31 True
12 Y D4 2020-08-01 2020-10-31 2020-08-01 2020-08-31 True
13 D D4 2020-11-01 2021-12-31 2020-08-01 2020-08-31 True
</code></pre>
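<p>The core of the trick: after exploding each row's date range into individual days and sorting, any day flagged by <code>duplicated()</code> is covered by more than one range, so its first and last occurrences bound the overlap window. A stripped-down sketch with two hypothetical ranges:</p>

```python
import pandas as pd

ranges = pd.Series([pd.date_range('2020-06-01', '2020-06-05'),
                    pd.date_range('2020-06-04', '2020-06-07')])
days = ranges.explode().sort_values()
overlap = days[days.duplicated()]  # days appearing in more than one range
print(overlap.iloc[0].date(), overlap.iloc[-1].date())  # 2020-06-04 2020-06-05
```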
|
python|pandas|dataframe|group-by|overlap
| 2
|
7,302
| 59,859,154
|
ZeroDivisionError: float division by zero (Solving colebrook (nonlinear) equation with Newton Raphson method in python)
|
<p>I have tried solving the Colebrook (nonlinear) equation for the friction factor in Python, but I keep getting this error:</p>
<p>ZeroDivisionError: float division by zero</p>
<p>here is the full traceback:</p>
<pre><code>Traceback (most recent call last):
File "c:/Users/BDG/Desktop/kkk/www/Plots/jjj/Code.py", line 49, in <module>
f = Newton(f0,re)
File "c:/Users/BDG/Desktop/kkk/www/Plots/jjj/Code.py", line 20, in Newton
eps_new = func(f, Re)/dydf(f, Re)
File "c:/Users/BDG/Desktop/kkk/www/Plots/jjj/Code.py", line 13, in func
return -0.86*np.log((e_D/3.7)+((2.51/Re))*f**(-0.5))-f**(-0.5)
ZeroDivisionError: float division by zero
</code></pre>
<p>I am trying to find the friction factor (f) for this <a href="https://www.wolframalpha.com/input/?i=-0.86*log%28%282.51%2F%28Re*sqrt%28f%29%29%29%2B%28%28e%2FD%29%2F%283.7%29%29%29%3D1%2Fsqrt%28f%29" rel="nofollow noreferrer">equation</a>:</p>
<p><code>-0.86 * log(2.51 / (Re * sqrt(f)) + e / D / 3.7) = 1 / sqrt(f)</code></p>
<p>at varying values of Reynold's number (Re) and plotting f against Re.</p>
<p>This is the code below, please help.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import time
#Parameters
e_D = 1e-4
eps=1e-7
def func(f, Re):
    return -0.86*np.log((e_D/3.7)+((2.51/Re))*f**(-0.5))-f**(-0.5)

def dydf(f, Re):
    return (1.0793/(Re*((251*f**-0.5)/(100*Re)+(10*e_D)/37)*(f**1.5)))+(1/(2*(f**1.5)))

def Newton(f0, Re, conv_hist=True):
    f = f0
    eps_new = func(f, Re)/dydf(f, Re)
    iteration_counter = 0
    history = []
    while abs(eps_new) >= eps and iteration_counter <= 100:
        eps_new = func(f, Re)/dydf(f, Re)
        f = f - eps_new
        iteration_counter += 1
        history.append([iteration_counter, f, func(f, Re), eps_new])
        if abs(dydf(f, Re)) <= eps:
            print('derivative near zero!, dydf =', dydf(f, re))
            print(dydf(f, re), 'iter# =', iteration_counter, 'eps =', eps_new)
            break
        if iteration_counter == 99:
            print('maximum iterations reached!')
            print(f, 'iter# = ', iteration_counter)
            break
    if conv_hist:
        hist_dataframe = pd.DataFrame(history, columns=['Iteration #', 'Re', 'f', 'eps'])
        hist_dataframe.style.hide_index()
    return f

startTime = time.time()
Re = np.linspace(10**4, 10**7, 100)
f0 = 0.001
for re in range(len(Re)):
    f = Newton(f0, re)
endTime = time.time()
print('Total process took %f seconds!' % (endTime - startTime))
plt.loglog(Re, f, marker='o')
plt.title('f vs Re')
plt.grid(b=True, which='minor')
plt.grid(b=True, which='major')
plt.xlabel('Re')
plt.ylabel('f')
plt.savefig('fvsRe.png')
plt.show()
</code></pre>
|
<p>Your problem is in this line:</p>
<pre><code>return -0.86*np.log((e_D/3.7)+((2.51/Re))*f**(-0.5))-f**(-0.5)
</code></pre>
<p>When <code>Re</code> is 0 this fails. This happens because of:</p>
<pre><code>for re in range(len(Re)):
    f = Newton(f0, re)
</code></pre>
<p>I think what you wish to do instead is:</p>
<pre><code>for re in Re:
    f = Newton(f0, re)
</code></pre>
<p>However, this won't work because you wish to plot <code>f</code> vs <code>Re</code>. So instead you should make f a list and append the results:</p>
<pre><code>f = []
for re in Re:
    f.append(Newton(f0, re))
</code></pre>
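<p>The same accumulate-into-a-list pattern with a toy Newton iteration (computing square roots rather than solving the Colebrook equation, so the sketch stays self-contained):</p>

```python
import numpy as np

def newton_sqrt(x0, a, tol=1e-10):
    # Newton's method on f(x) = x**2 - a
    x = x0
    for _ in range(100):
        step = (x * x - a) / (2 * x)
        x -= step
        if abs(step) < tol:
            break
    return x

a_values = np.linspace(4, 16, 4)       # iterate over the values, not range(len(...))
roots = []
for a in a_values:
    roots.append(newton_sqrt(1.0, a))  # one result per input, ready for plotting
print(np.round(roots, 6))
```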
<p><a href="https://i.stack.imgur.com/aSfWx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aSfWx.png" alt="Final result"></a></p>
|
python|numpy|matplotlib|physics|newtons-method
| 2
|
7,303
| 40,332,284
|
How to divide 1 column into 5 segments with pandas and python?
|
<p>I have a list of 1 column and 50 rows.
I want to divide it into 5 segments, and each segment has to become a column of a dataframe. I do not want NaNs to appear (figure 2). How can I solve that?
Like this:
<a href="https://i.stack.imgur.com/IxSbn.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/IxSbn.jpg" alt="enter image description here"></a></p>
<pre><code>df = pd.DataFrame(result_list)
AWA=df[:10]
REM=df[10:20]
S1=df[20:30]
S2=df[30:40]
SWS=df[40:50]
result = pd.concat([AWA, REM, S1, S2, SWS], axis=1)
result
</code></pre>
<p>Figure2
<a href="https://i.stack.imgur.com/alqZx.png" rel="nofollow"><img src="https://i.stack.imgur.com/alqZx.png" alt="enter image description here"></a></p>
|
<p>You can use numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow">reshape</a> function:</p>
<pre><code>result_list = [i for i in range(50)]
pd.DataFrame(np.reshape(result_list, (10, 5), order='F'))
Out:
0 1 2 3 4
0 0 10 20 30 40
1 1 11 21 31 41
2 2 12 22 32 42
3 3 13 23 33 43
4 4 14 24 34 44
5 5 15 25 35 45
6 6 16 26 36 46
7 7 17 27 37 47
8 8 18 28 38 48
9 9 19 29 39 49
</code></pre>
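<p>The <code>order='F'</code> (column-major) argument is what makes each consecutive segment land in its own column; with the default row-major order the segments would be interleaved. A quick comparison on a smaller list:</p>

```python
import numpy as np

result_list = list(range(10))
# column-major: the first 5 values fill column 0, the next 5 fill column 1
col_major = np.reshape(result_list, (5, 2), order='F')
# row-major (the default, 'C'): values fill row by row instead
row_major = np.reshape(result_list, (5, 2))
print(col_major[:, 0])  # [0 1 2 3 4]
print(row_major[:, 0])  # [0 2 4 6 8]
```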
|
python-3.x|pandas|numpy
| 1
|
7,304
| 40,643,819
|
Pandas N-Grams to Columns
|
<p>Given the following data frame:</p>
<pre><code>import pandas as pd
d=['Hello', 'Helloworld']
f=pd.DataFrame({'strings':d})
f
strings
0 Hello
1 Helloworld
</code></pre>
<p>I'd like to split each string into chunks of 3 characters and use those as headers to create a matrix of 1s or 0s, depending on if a given row has the chunk of 3 characters. </p>
<p>Like this:</p>
<pre><code> Strings Hel low orl
0 Hello 1 0 0
1 Helloworld 1 1 1
</code></pre>
<p>Notice that the string "Hello" has a 0 for the "low" column, as a 1 is only assigned for exact partial matches. If there is more than one match (i.e. if the string were "HelHel"), it would still only assign a 1 (though it would also be nice to know how to count occurrences and thus assign a 2 instead).</p>
<p>Ultimately, I'm trying to prepare my data for use in an LSHForest via SKLearn.
Therefore, I anticipate many different string values.</p>
<p>Here's what I've tried so far:</p>
<pre><code># Split into chunks of exactly 3
def split(s, chunk_size):
    a = zip(*[s[i::chunk_size] for i in range(chunk_size)])
    return [''.join(t) for t in a]

cols = [split(s, 3) for s in f['strings']]
cols
[['Hel'], ['Hel', 'low', 'orl']]

# Get all elements into one list:
import itertools
colsunq = list(itertools.chain.from_iterable(cols))
# Remove duplicates:
colsunq = list(set(colsunq))
colsunq
['orl', 'Hel', 'low']
</code></pre>
<p>So now, all I need to do is create a column in <strong>f</strong> for each element in <strong>colsunq</strong> and add 1 if the string in the 'strings' column has a match with the chunk for each given column header.</p>
<p>Thanks in advance!</p>
<p><strong>Note:</strong>
In case shingling is preferred:</p>
<pre><code># Shingle into strings of exactly 3
def shingle(word):
    a = [word[i:i + 3] for i in range(len(word) - 3 + 1)]
    return [''.join(t) for t in a]

# Shingle (i.e. "hello" -> "hel", "ell", "llo")
a = [shingle(w) for w in f['strings']]
# Get all elements into one list:
import itertools
colsunq = list(itertools.chain.from_iterable(a))
# Remove duplicates:
colsunq = list(set(colsunq))
colsunq
['wor', 'Hel', 'ell', 'owo', 'llo', 'rld', 'orl', 'low']
</code></pre>
|
<pre><code>def str_chunk(s, k):
    i, j = 0, k
    while j <= len(s):
        yield s[i:j]
        i, j = j, j + k

def chunkit(s, k):
    return [_ for _ in str_chunk(s, k)]

def count_chunks(s, k):
    return pd.value_counts(chunkit(s, k))
</code></pre>
<hr>
<p><strong><em>demonstration</em></strong> </p>
<pre><code>f.strings.apply(chunkit, k=3)
0 [Hel]
1 [Hel, low, orl]
Name: strings, dtype: object
f.strings.apply(count_chunks, k=3).fillna(0)
</code></pre>
<p><a href="https://i.stack.imgur.com/HorsK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HorsK.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>shingling</em></strong> </p>
<pre><code>def str_shingle(s, k):
    i, j = 0, k
    while j <= len(s):
        yield s[i:j]
        i, j = i + 1, j + 1

def shingleit(s, k):
    return [_ for _ in str_shingle(s, k)]

def count_shingles(s, k):
    return pd.value_counts(shingleit(s, k))

f.strings.apply(count_shingles, k=3).fillna(0)
</code></pre>
<p><a href="https://i.stack.imgur.com/53pX8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/53pX8.png" alt="enter image description here"></a></p>
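<p>Since the question asked for a 0/1 indicator matrix rather than counts, the counts can be clipped afterwards; a sketch using the same fixed-size chunking (not part of the original answer):</p>

```python
import pandas as pd

f = pd.DataFrame({'strings': ['Hello', 'Helloworld']})

def chunkit(s, k):
    # non-overlapping chunks of exactly k characters
    return [s[i:i + k] for i in range(0, len(s) - k + 1, k)]

counts = f['strings'].apply(lambda s: pd.Series(chunkit(s, 3)).value_counts())
binary = (counts.fillna(0) > 0).astype(int)  # clip counts to 0/1 indicator flags
print(binary)
```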
|
python|pandas
| 2
|
7,305
| 61,760,231
|
Add data to cell based on other cell values
|
<p>I have a large group of data with various names and sources, in a large dataframe.</p>
<p>Reproducible data by <a href="https://stackoverflow.com/users/12696163/anshul-jain">Anshul Jain</a></p>
<pre class="lang-py prettyprint-override"><code>First_Name Last_Name Source
Matt Jones XX
James Smith YY
Smith Weston AA
Weston Supermare CC
Matt Jones YY
Weston Supermare FF
# copy in with:
df = pd.read_clipboard(sep='\\s+')
</code></pre>
<p>The data looks as follows:</p>
<pre><code>+------------+-----------+--------+
| First Name | Last Name | Source |
+------------+-----------+--------+
| Matt | Jones | XX |
| James | Smith | YY |
| Smith | Weston | AA |
| Weston | Supermare | CC |
| Matt | Jones | YY |
| Weston | Supermare | FF |
+------------+-----------+--------+
</code></pre>
<p>I need it to look like this:</p>
<pre><code>+------------+-----------+--------+
| First Name | Last Name | Source |
+------------+-----------+--------+
| Matt | Jones | XX, YY |
| James | Smith | YY |
| Smith | Weston | AA |
| Weston | Supermare | CC, FF |
+------------+-----------+--------+
</code></pre>
<p>I can get the deduplication process to work using:</p>
<pre><code>Conn_df = Conn_df.drop_duplicates(subset=['First Name', 'Last Name'])
</code></pre>
<p>However, before I deduplicate, I need to record all the sources for the same data on the same row. </p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> to group the dataframe by the columns <code>First Name</code> and <code>Last Name</code> and then apply the <code>agg</code> function <code>join</code> on the <code>Source</code> column.</p>
<p>Use:</p>
<pre><code>result = Conn_df.groupby(["First Name", "Last Name"])["Source"].agg(', '.join).reset_index()
print(result)
</code></pre>
<p>This prints:</p>
<pre><code> First Name Last Name Source
0 James Smith YY
1 Matt Jones XX, YY
2 Smith Weston AA
3 Weston Supermare CC, FF
</code></pre>
|
python|python-3.x|pandas
| 3
|
7,306
| 61,844,846
|
Numpy push non-zero values down along the column
|
<p>I have a 2D numpy matrix and I want to push all non-zero values down along the columns.
I prefer a way that doesn't use loops:
for example, turning this <a href="https://i.stack.imgur.com/tDMag.jpg" rel="nofollow noreferrer">before_matrix</a>
into this <a href="https://i.stack.imgur.com/AezUU.jpg" rel="nofollow noreferrer">after_matrix</a>.
It's also important to keep the order of the numbers along the column, turning a column like
[2, 0, 1, 0, 0] into [0, 0, 0, 2, 1].</p>
<p>Many thanks</p>
|
<p>Use a stable argsort on the binarized array:</p>
<pre><code># make example
>>> from scipy import sparse
>>>
>>> exmpl = sparse.random(5,4,0.5).A
>>>
>>> exmpl
array([[0. , 0. , 0.61062949, 0. ],
[0.85030071, 0.81443545, 0. , 0.82208658],
[0.0258324 , 0.77722165, 0. , 0. ],
[0. , 0. , 0.4879589 , 0. ],
[0. , 0.28429359, 0.59514095, 0.06782943]])
# sort it
>>> exmpl[(exmpl!=0).argsort(0,kind="stable"),np.arange(4)]
array([[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ],
[0. , 0.81443545, 0.61062949, 0. ],
[0.85030071, 0.77722165, 0.4879589 , 0.82208658],
[0.0258324 , 0.28429359, 0.59514095, 0.06782943]])
>>>
</code></pre>
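<p>Applied to the column from the question, so the result is deterministic ([2, 0, 1, 0, 0] becomes [0, 0, 0, 2, 1]):</p>

```python
import numpy as np

a = np.array([[2, 0],
              [0, 3],
              [1, 0],
              [0, 0],
              [0, 4]])
# stable argsort of the boolean mask: zeros (False) sort to the top,
# non-zeros (True) sink to the bottom while keeping their original order
idx = (a != 0).argsort(0, kind="stable")
out = a[idx, np.arange(a.shape[1])]
print(out[:, 0])  # [0 0 0 2 1]
```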
|
python|numpy|scipy
| 1
|
7,307
| 61,752,679
|
Convert a datetime to time with milliseconds in Python
|
<p>I got a DataFrame:</p>
<pre><code>client datetime
1 01/02/2020 13:47
2 02/02/2020 23:45
3 03/02/2020 16:22
4 04/02/2020 18:49
5 05/02/2020 11:02
</code></pre>
<p>and I need two new columns: ["time_new"] to display the time in the format ('%H:%M:%S.%f')[:-3], and ["time_ms"] to display the milliseconds only:</p>
<pre><code>client datetime time_new time_ms
1 01/02/2020 13:47 13:47:11.783 783
2 02/02/2020 23:45 23:45:22.322 322
3 03/02/2020 16:22 16:22:05.122 122
4 04/02/2020 18:49 18:49:03.329 329
5 05/02/2020 11:02 11:02:34.545 545
</code></pre>
|
<p>To extract a custom time format with milliseconds, use:</p>
<pre><code>df['datetime'] = pd.to_datetime(df['datetime'])
df['time_new'] = df['datetime'].dt.strftime('%H:%M:%S.%f').str[:-3]
df['time_ms'] = df['datetime'].dt.microsecond // 1000
# the source df has 0 seconds and milliseconds, so the output looks like this
print (df)
client datetime time_new time_ms
0 1 2020-01-02 13:47:00 13:47:00.000 0
1 2 2020-02-02 23:45:00 23:45:00.000 0
2 3 2020-03-02 16:22:00 16:22:00.000 0
3 4 2020-04-02 18:49:00 18:49:00.000 0
4 5 2020-05-02 11:02:00 11:02:00.000 0
</code></pre>
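<p>With timestamps that actually carry milliseconds (the sample data in the question has none), the same two lines produce the desired columns; a sketch with one hypothetical value:</p>

```python
import pandas as pd

df = pd.DataFrame({'client': [1], 'datetime': ['01/02/2020 13:47:11.783']})
# parse day-first with a fractional-seconds component
df['datetime'] = pd.to_datetime(df['datetime'], format='%d/%m/%Y %H:%M:%S.%f')
df['time_new'] = df['datetime'].dt.strftime('%H:%M:%S.%f').str[:-3]
df['time_ms'] = df['datetime'].dt.microsecond // 1000
print(df[['time_new', 'time_ms']])
```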
|
python|python-3.x|pandas|datetime
| 0
|
7,308
| 61,762,748
|
Is there a way to use operator.itemgetter with slice notation?
|
<p>I have a bunch of numpy arrays in a python list <code>lst</code>. I can slice one of these arrays to get a specific view by indexing it with <code>[:, 1]</code>, for example. </p>
<p>I need to apply this slicing operation to all the numpy arrays in <code>lst</code>. Using generator comprehension, I could do: </p>
<pre><code>(my_array[:, 1] for my_array in lst)
</code></pre>
<p>I'm wondering if there's a way to accomplish the same thing with <code>operator.itemgetter</code> and <code>map</code>. </p>
<p><code>map(operator.itemgetter(:, 1), lst)</code> unsurprisingly results in a syntax error. </p>
|
<p>The slice syntax generates <code>slice</code> objects for you. You'll have to create them explicitly to pass to <code>itemgetter</code>. Since <code>itemgetter(x,y)(a)</code> is equivalent to <code>(a[x], a[y])</code>, you also need to use parentheses to ensure that you pass a single <code>tuple</code> consisting of your <code>slice</code> and the <code>int</code> index.</p>
<pre><code># [:] -> slice(None)
map(operator.itemgetter((slice(None), 1)), lst)
</code></pre>
<hr>
<p>A useful tool for figuring out what exactly slicing syntax does is to define a small class</p>
<pre><code>class A:
    def __getitem__(self, key):
        print(key)
</code></pre>
<p>Then you can do quick checks like</p>
<pre><code>>>> A()[:,1]
(slice(None, None, None), 1)
</code></pre>
<p>(<code>slice(None)</code> is short for <code>slice(None, None, None)</code>.)</p>
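<p>Putting it together, a runnable sketch with a couple of small arrays:</p>

```python
import operator
import numpy as np

lst = [np.arange(6).reshape(3, 2), np.arange(6, 12).reshape(3, 2)]
get_col1 = operator.itemgetter((slice(None), 1))  # same as arr[:, 1]
views = list(map(get_col1, lst))
print(views[0])  # [1 3 5]
print(views[1])  # [ 7  9 11]
```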
|
python|numpy|slice
| 3
|
7,309
| 58,021,925
|
tensorflow: how to use flags.DEFINE_multi_float()
|
<p>I use bash code to run a python file with lots of parameters. like:</p>
<pre><code>python "${WORK_DIR}"/eval.py \
--logtostderr \
--eval_split="val" \
--model_variant="xception_65" \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--decoder_output_stride=4 \
--eval_crop_size="513,513" \
--checkpoint_dir="${TRAIN_LOGDIR}" \
--eval_logdir="${EVAL_LOGDIR}" \
--dataset_dir="${PASCAL_DATASET}" \
--max_number_of_evaluations=1 \
--eval_scales=[0.5,0.25,1.75]
</code></pre>
<p>But then I got error:</p>
<blockquote>
<p>absl.flags._exceptions.IllegalFlagValueError: flag
--eval_scales=[0.5,0.25,1.75]: could not convert string to float: '[0.5,0.25,1.75]'</p>
</blockquote>
<p><strong>So what is the right format</strong> to pass parameter to variable defined by <code>flags.DEFINE_multi_float()</code></p>
<pre><code># Change to [0.5, 0.75, 1.0, 1.25, 1.5, 1.75] for multi-scale test.
flags.DEFINE_multi_float('eval_scales', [1.0],
'The scales to resize images for evaluation.')
</code></pre>
|
<p>For a multi-float flag you should pass the parameter once per value in your list.</p>
<p>If you have a list like [0.5, 0.25], you should pass <code>--eval_scales</code> twice, once for each value in your list:</p>
<p><code>--eval_scales=0.5</code></p>
<p><code>--eval_scales=0.25</code></p>
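<p>Applied to the original command, the tail of the invocation would look like this (a sketch; the unrelated flags are omitted and only the <code>eval_scales</code> lines change):</p>

```shell
python "${WORK_DIR}"/eval.py \
  --logtostderr \
  --checkpoint_dir="${TRAIN_LOGDIR}" \
  --eval_logdir="${EVAL_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}" \
  --max_number_of_evaluations=1 \
  --eval_scales=0.5 \
  --eval_scales=0.25 \
  --eval_scales=1.75
```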
|
python|tensorflow
| 0
|
7,310
| 57,992,962
|
Sum column which has both numbers and text using Pandas
|
<p>I have a column which contains both numbers and text, and I'm trying to find the sum of the values.</p>
<p>I tried this sum function below, but it didn't work. Please can you advise what else I could try?</p>
<pre><code>df["Price"].sum()
</code></pre>
<p><a href="https://i.stack.imgur.com/4Vvxh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Vvxh.png" alt="enter image description here"></a></p>
|
<p>Use <code>pd.to_numeric</code></p>
<p><strong>Ex:</strong></p>
<pre><code>df = pd.DataFrame({"Price": ["Nil", "Na", 1,2,3,4,5, "Null"]})
print(df[pd.to_numeric(df['Price'], errors='coerce').notnull()].sum())
#or
print(pd.to_numeric(df['Price'], errors='coerce').dropna().sum())
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Price 15.0
dtype: float64
</code></pre>
|
python|excel|pandas|sum
| 4
|
7,311
| 58,016,236
|
how to groupby and aggregate dynamic columns in pandas
|
<p>I have following dataframe in pandas</p>
<pre><code>code tank nozzle_1 nozzle_2 nozzle_var nozzle_sale
123 1 1 1 10 10
123 1 2 2 12 10
123 2 1 1 10 10
123 2 2 2 12 10
123 1 1 1 10 10
123 2 2 2 12 10
</code></pre>
<p>Now, I want to generate the cumulative sum of all the columns, grouping over tank, and take out the last observation. The nozzle_1 and nozzle_2 columns are dynamic; there could also be nozzle_3, nozzle_4, ..., nozzle_n. I am doing the following in pandas to get the cumsum:</p>
<pre><code>## Below code calculates the cumsum of the dynamic columns nozzle_1 and nozzle_2
cols = df.columns[df.columns.str.contains(pat='nozzle_\d+$', regex=True)]
df.assign(**df.groupby('tank')[cols].agg(['cumsum'])\
            .pipe(lambda x: x.set_axis(x.columns.map('_'.join), axis=1, inplace=False)))
## nozzle_sale_cumsum is a static column
df['nozzle_sale_cumsum'] = df.groupby('tank')['nozzle_sale'].cumsum()
</code></pre>
<p>From above code I will get cumsum of following columns</p>
<pre><code> tank nozzle_1 nozzle_2 nozzle_var nozzle_1_cumsum nozzle_2_cumsum nozzle_sale_cumsum
1 1 1 10 1 1 10
1 2 2 12 3 3 20
2 1 1 10 1 1 10
2 2 2 12 3 3 20
1 1 1 10 4 4 30
2 2 2 12 5 5 30
</code></pre>
<p>Now, I want to get the last values of all 3 cumsum columns, grouping over tank. I can do it with the following code in pandas, but the column names are hard-coded.</p>
<pre><code>final_df = df.groupby('tank').agg({'nozzle_1_cumsum': 'last',
                                   'nozzle_2_cumsum': 'last',
                                   'nozzle_sale_cumsum': 'last',
                                   }).reset_index()
</code></pre>
<p>The problem with the above code is that nozzle_1_cumsum and nozzle_2_cumsum are hard-coded, which does not generalize. How can I do this in pandas with dynamic columns?</p>
|
<p>How about:</p>
<pre><code>df.filter(regex='_cumsum').groupby(df['tank']).last()
</code></pre>
<p>Output:</p>
<pre><code> nozzle_1_cumsum nozzle_2_cumsum nozzle_sale_cumsum
tank
1 4 4 30
2 5 5 30
</code></pre>
<p>You can also replace <code>df.filter(...)</code> by, e.g., <code>df.iloc[:,-3:]</code> or <code>df[col_names]</code>.</p>
|
python|pandas
| 2
|
7,312
| 57,893,230
|
How to aggregate data by counts of a level, setting each level's counts as its own column?
|
<p>I have data which has a row granularity in terms of events, and I want to aggregate them by a customer ID. The data is in the form of a pandas df and looks like so:</p>
<pre><code>| Event ID | Cust ID | P1  | P2 | P3 | P4 |
|----------|---------|-----|----|----|----|
| 1        | 1       | 12  | 0  | 0  | 0  |
| 2        | 1       | 12  | 0  | 0  | 0  |
| 3        | 1       | 10  | 12 | 0  | 0  |
| 4        | 2       | 206 | 0  | 0  | 0  |
| 5        | 2       | 206 | 25 | 0  | 0  |
</code></pre>
<p>P1 to P4 contain numbers which are just levels; they are event categories that I need to get counts of (there are 175+ codes), where each event category gets its own column.</p>
<p>The output I want, would ideally look like:</p>
<pre><code>| Cust ID | Count(12) | Count(10) | Count(25) | Count(206) |
|---------|-----------|-----------|-----------|------------|
| 1       | 3         | 1         | 0         | 0          |
| 2       | 0         | 0         | 1         | 2          |
</code></pre>
<p>The challenge I am facing is taking the counts across multiple columns. There are 2 '12's in P1 and 1 '12' in P2.</p>
<p>I tried using groupby and merge. But I've either used them incorrectly or they're the wrong functions to use because I get a lot of 'NaN's in the resulting table.</p>
|
<p>You can use the following method:</p>
<pre><code>df = pd.DataFrame({'Event ID': [1, 2, 3, 4, 5],
                   'Cust ID': [1]*3 + [2]*2,
                   'P1': [12, 12, 10, 206, 25],
                   'P2': [0, 0, 12, 0, 0],
                   'P3': [0]*5,
                   'P4': [0]*5})

df.melt(['Event ID', 'Cust ID'])\
  .groupby('Cust ID')['value'].value_counts()\
  .unstack().add_prefix('Count_')\
  .reset_index()
</code></pre>
<p>Output:</p>
<pre><code>value Cust ID Count_0 Count_10 Count_12 Count_25 Count_206
0 1 8.0 1.0 3.0 NaN NaN
1 2 6.0 NaN NaN 1.0 1.0
</code></pre>
|
python|pandas|aggregate
| 0
|
7,313
| 58,089,000
|
eroding several layers of an array
|
<p>I'm having trouble understanding scipy's <code>binary_erosion</code> function.</p>
<pre><code>from scipy.ndimage import binary_erosion
a = np.zeros([12,12])
a[1:11,1:11]=1
binary_erosion(a).astype(int)
</code></pre>
<p>this removes the outermost edges, but what if I want to remove the second layer as well? I know I should probably use the <code>structure</code> option, but I don't understand how it works and could not find enough examples that explain it properly</p>
|
<p>Use the <code>iterations</code> option to have it repeat <code>n</code> times (remove additional layers): [<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.binary_erosion.html" rel="nofollow noreferrer">source</a>]</p>
<blockquote>
<p>iterations : <em>int</em>, optional<br>
The erosion is repeated iterations times (one, by default). If iterations is less than 1, the erosion is repeated until the result does not change anymore.</p>
</blockquote>
<p>So yours:</p>
<pre class="lang-py prettyprint-override"><code>array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>And with the iterations option set to 2, you'll notice an additional layer has been reduced.</p>
<pre class="lang-py prettyprint-override"><code>>>> binary_erosion(a, iterations=2).astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
<hr>
<p>Since you asked in a comment, the <code>structure</code> can be used to determine how much to remove for each <code>iteration</code>. There is a good breakdown <a href="https://en.wikipedia.org/wiki/Erosion_(morphology)#Example" rel="nofollow noreferrer">here</a> of what that means.</p>
<p>This is <code>the structuring element used for erosion</code>. If this were a 3x3 square moved across the image, a foreground pixel would be kept only where the element, centered on it, is completely covered by foreground; pixels where it is only partially covered get removed.</p>
<p>Also take a look at <a href="https://medium.com/@aguigui17/activity-8-morphological-operations-in-progress-c86cf3fe64d" rel="nofollow noreferrer">this medium post</a> which has hand drawn a bunch of examples for how this works and breaks it down even further.</p>
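<p>For completeness, a sketch of what <code>structure</code> changes, using the array from the question (the 5x5 square element here is just an illustrative choice, not the only option):</p>

```python
import numpy as np
from scipy.ndimage import binary_erosion, generate_binary_structure

a = np.zeros((12, 12))
a[1:11, 1:11] = 1

# the default structuring element is a 3x3 cross (connectivity 1)
print(generate_binary_structure(2, 1).astype(int))

# a larger element removes more per pass: with a 5x5 square, a pixel
# survives only if its entire 5x5 neighbourhood is foreground
eroded = binary_erosion(a, structure=np.ones((5, 5))).astype(int)
print(eroded.sum())  # 36 pixels remain: a 6x6 core
```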
|
python|arrays|numpy|scipy
| 1
|
7,314
| 58,002,476
|
Pandas apply : function prototype for automatic column unpacking
|
<p>Is there a way to make a function unpack the columns automatically with the df.apply method?</p>
<p>Ideally, I am looking for a way to define the function such that it automatically unpacks regardless of the number of columns in the DataFrame and allows me to use the column names as variables directly.
Something like</p>
<pre><code>def func(*row):
    print col1
# or
def func(**row):
    print col1
</code></pre>
<p>along with <code>df.apply(func, axis=1)</code></p>
<p>What I've tried so far and did not like</p>
<pre><code>def func(row):
col1, col2, col3 = row
</code></pre>
<p><code>df[col1, col2, col3].apply(func, axis=1)</code></p>
<p>Example as requested:</p>
<pre><code>import pandas as pd
import numpy as np
grid = np.random.rand(5,2)
df = pd.DataFrame(grid, columns =['col1', 'col2'])
</code></pre>
<p>Given this: I am trying to write a function such that</p>
<pre><code>def multiply(x):
###This function definition does not obviously work. What im asking is a way to achieve similar functionality without me explicitly unpacking x to col1 and col2###
print col1
print col2
df.apply(multiply, axis=1)
</code></pre>
|
<p>If you use <code>apply</code> with a function and <code>axis=1</code> then the columns are already accessible by their names.</p>
<pre><code>def print_columns(row):
print('Column 1:', row['col1'])
print('Column 2:', row['col2'])
df.apply(print_columns, axis=1)
</code></pre>
<p>From the <code>row</code> variable in the function you can access any columns by name. Obviously make sure you <code>return</code> something to set in the output column.</p>
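<p>A minimal sketch of that last point (column names hypothetical): returning a value from the applied function populates a new column.</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [10, 20]})

# The row passed in is a Series, so columns are accessible by name;
# the returned value becomes the entry in the output column.
df['total'] = df.apply(lambda row: row['col1'] + row['col2'], axis=1)
print(df['total'].tolist())  # [11, 22]
```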
|
python|pandas|apply
| 0
|
7,315
| 58,094,088
|
Conditional merge on in pandas
|
<p>My question is simple: I am using <code>pd.merge</code> to merge two DataFrames.
Here's the line of code:</p>
<p><code>pivoted = pd.merge(pivoted, concerned_data, on='A')</code></p>
<p>and I want <code>on='B'</code> whenever a row's column <code>A</code> value is null. Is there a possible way to do this?</p>
<p>Edit:</p>
<p>As an example if</p>
<pre><code> df1: A | B |randomval
1 | 1 | ty
Nan| 2 | asd
</code></pre>
<pre><code> df2: A | B |randomval2
1 | Nan| tyrte
3 | 2 | asde
</code></pre>
<p>So if <code>on='A'</code> and the value is NaN in either of the DataFrames (for a single row), I want <code>on='B'</code> for that row only.</p>
<p>Thank you!</p>
|
<p>You could create a third column in your <code>pandas.DataFrame</code> which incorporates this logic and merge on this one.</p>
<p>For example, create dummy data</p>
<pre><code>df1 = pd.DataFrame({"A" : [1, None], "B" : [1, 2], "Val1" : ["a", "b"]})
df2 = pd.DataFrame({"A" : [1, 2], "B" : [None, 2], "Val2" : ["c", "d"]})
</code></pre>
<p>Create a column <code>c</code> which has this logic</p>
<pre><code>df1["C"] = pd.concat([df1.loc[~df1.A.isna(), "A"], df1.loc[df1.A.isna(), "B"]],ignore_index=False)
df2["C"] = pd.concat([df2.loc[~df2.A.isna(), "A"], df2.loc[df2.A.isna(), "B"]],ignore_index=False)
</code></pre>
<p>Finally, merge on this common column and include only your value columns</p>
<pre><code>df3 = pd.merge(df1[["Val1","C"]], df2[["Val2","C"]], on='C')
In [27]: df3
Out[27]:
Val1 C Val2
0 a 1.0 c
1 b 2.0 d
</code></pre>
|
pandas
| 1
|
7,316
| 36,768,889
|
LinearDiscriminantAnalysis - Single column output from .transform(X)
|
<p>I have been successfully playing around with replicating one of the <a href="http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_vs_lda.html#example-decomposition-plot-pca-vs-lda-py" rel="nofollow">sklearn tutorials</a> using the iris dataset in PyCharm using Python 2.7. However, when trying to repeat this with my own data I have been encountering an issue. I have been importing data from a .csv file using 'np.genfromtxt', but for some reason I keep getting a single column output for X_r2 (see below), when I should get a 2 column output. I have therefore replaced my data with some randomly generated variables to post onto SO, and I am still getting the same issue.</p>
<p>I have included the 'problem' code below, and I would be interested to know what I have done wrong. I have extensively used the debugging features in PyCharm to check that the type and shape of my variables are similar to the original sklearn example, but it did not help me with the problem. Any help or suggestions would be appreciated.</p>
<pre><code>import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
y = np.random.randint(2, size=500)
X = np.random.randint(1, high=1000, size=(500, 6))
target_names = np.array([['XX'], ['YY']])
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
</code></pre>
|
<p>The array <code>y</code> in the example you posted has values of 0, 1 and 2 while yours only has values of 0 and 1. This change achieves what you want:</p>
<pre><code>import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
y = np.random.randint(3, size=500)
X = np.random.randint(1, high=1000, size=(500, 6))
target_names = np.array([['XX'], ['YY']])
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
</code></pre>
|
python|python-2.7|numpy|scikit-learn|pycharm
| 2
|
7,317
| 54,911,646
|
tensorflow : loop value in placeholder shape [None]
|
<p>My question is as follows: if I have a placeholder with shape of <code>'None'</code>, how can I write the code in tensorflow to loop the value of shape of <code>'None'</code>? For example, given a a placeholder, if I predefined the shape, I can write: </p>
<pre><code>[i for i in range(placeholder.shape[0].value)]
</code></pre>
<p>But how can I write the code when the shape is <code>'None'</code>? I have tried </p>
<pre><code>[i for i in tf.range(tf.shape(placeholder)[0])]
</code></pre>
<p>It does not work at all. I also tried to use <code>tf.while_loop</code>, but still can not get expected result. Can anyone help me? Thank you so much</p>
|
<p>Maybe you can use tf.scan:</p>
<pre><code>import numpy as np
import tensorflow as tf
tf.InteractiveSession()
placeholder = tf.placeholder(dtype=tf.int32) # The shape of the placeholder is unknown for now
def fn(_, x):
y = 2 * x # Do something with this value
return y
shape = tf.scan(fn, tf.shape(placeholder))
feed_dict = {placeholder: np.zeros((2, 3, 4))}
print(shape.eval(feed_dict))
</code></pre>
|
python|tensorflow
| 0
|
7,318
| 27,993,058
|
Pandas apply to dateframe produces '<built-in method values of ...'
|
<p>I'm trying to build a <a href="http://geojson.org/geojson-spec.html#examples" rel="noreferrer">GeoJSON object</a>. My input is a csv with an address column, a lat column, and a lon column. I then create Shapely points out of the coordinates, buffer them out by a given radius, and get the dictionary of coordinates via the mapping option - so far, so good. Then, after referring to <a href="https://stackoverflow.com/questions/16353729/pandas-how-to-use-apply-function-to-multiple-columns">this question</a>, I wrote the following function to get a Series of dictionaries:</p>
<pre><code>def make_geojson(row):
    return {'geometry':row['geom'], 'properties':{'address':row['address']}}
</code></pre>
<p>and I applied it thusly:</p>
<pre><code>data['new_output'] = data.apply(make_geojson, axis=1)
</code></pre>
<p>My resulting column is full of these: <code><built-in method values of dict object at 0x10...</code></p>
<p>The weirdest part is, when I directly call the function (i.e. <code>make_geojson(data.loc[0])</code> I do in fact get the dictionary I'm expecting. Perhaps even weirder is that, when I call the functions I'm getting from the apply (e.g. <code>data.output[0]()</code>, <code>data.loc[0]['output']()</code>) I get the equivalent of the following list:
<code>[data.loc[0]['geom'], {'address':data.loc[0]['address']}]</code>, i.e. the values (but not the keys) of the dictionary I'm trying to get.</p>
<p>For those of you playing along at home, here's a toy example:</p>
<pre><code>from shapely.geometry import Point, mapping
import pandas as pd
def make_geojson(row):
return {'geometry':row['geom'], 'properties':{'address':row['address']}}
data = pd.DataFrame([{'address':'BS', 'lat':34.017, 'lon':-117.959}, {'address':'BS2', 'lat':33.989, 'lon':-118.291}])
data['point'] = map(Point, zip(data['lon'], data['lat']))
data['buffer'] = data['point'].apply(lambda x: x.buffer(.1))
data['geom'] = data.buffer.apply(mapping)
data['output'] = data.apply(make_geojson, axis=1)
</code></pre>
|
<p>Thanks, DSM, for pointing that out. Lesson learned: pandas is not good for arbitrary Python objects</p>
<p>So this is what I wound up doing:</p>
<pre><code>temp = zip(list(data.geom), list(data.address))
output = map(lambda x: {'geometry': x[0], 'properties':{'address':x[1]}}, temp)
</code></pre>
|
python|pandas|apply|geojson|shapely
| 3
|
7,319
| 73,504,379
|
Key error for Level Values Raise Key Error(key)
|
<p>Anyone know why I am getting a <code>raise KeyError(key)</code> from <code>get_level_values</code>? My guess is there are too many of the same date value, or something. I can see the values when I do a <code>.get('new date')</code>; it just won't sort the column:</p>
<pre><code>df = pd.read_csv("Status 9 Data.csv", header=0, sep=",")

##df.columns = ["JOB_REFERENCE","STATUS","DESCRIPTION","JOB_STATUS","EVENT_DATE","SERVICE_NO","SERVICE_NUM","BKG_PTY_NAME","REFRIGERATED","OVERSIZE_FLAG","TEU_UTILISATION","FIELD_CHARACTER_NEW","FIELD_CHARACTER_OLD","FIELD_CHANGE_CODE"]

df['new date'] = pd.to_datetime(df['EVENT_DATE']).dt.normalize()

df.sort_values(by=df['new date'], inplace=True, ascending=False)

df
</code></pre>
|
<p>The error occurs because you should pass <code>by</code> the <em>name</em> of the column, <code>'new date'</code>, not the column's data (<code>df["new date"]</code>):</p>
<pre><code>df.sort_values(by='new date', inplace=True, ascending=False)
</code></pre>
|
pandas
| 0
|
7,320
| 73,274,016
|
unable to concat the output for multiple rows
|
<p>I have a dataframe which is like below</p>
<p><a href="https://i.stack.imgur.com/Dq2uW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dq2uW.png" alt="enter image description here" /></a></p>
<p>If i write a code like below</p>
<pre><code>df.iloc[0]
</code></pre>
<p><a href="https://i.stack.imgur.com/pzkfQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pzkfQ.png" alt="enter image description here" /></a></p>
<p>And if i write code like below</p>
<pre><code>df.iloc[3]
</code></pre>
<p><a href="https://i.stack.imgur.com/VV83H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VV83H.png" alt="enter image description here" /></a></p>
<p><strong>I want to concat all of df.iloc[0], df.iloc[1], df.iloc[2], and so on, up to however many rows are present. But with the help of a for loop I'm unable to. Can anyone help me with this?</strong></p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with comprehension:</p>
<pre><code>df1 = pd.concat((df.loc[i] for i in df.index))
</code></pre>
<p>Or:</p>
<pre><code>df1 = pd.concat((df.iloc[i] for i in range(len(df.index))))
</code></pre>
|
python|python-3.x|pandas|list|dataframe
| 1
|
7,321
| 73,322,872
|
How can I save result from groupby in a new column?
|
<p>I have a dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>key1</th>
<th>key2</th>
<th>key3</th>
<th>value1</th>
<th>value2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>a</td>
<td>s2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>s2</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>b</td>
<td>j6</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>and I want as result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>key1</th>
<th>key2</th>
<th>key3</th>
<th>value1</th>
<th>value2</th>
<th>sum_value1</th>
<th>sum_value2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>a</td>
<td>s2</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>7</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>s2</td>
<td>2</td>
<td>3</td>
<td>5</td>
<td>7</td>
</tr>
<tr>
<td>2</td>
<td>b</td>
<td>j6</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>sum_value1 is the summation of values in value1 by grouping key1, key2, key3. And so for sum_value2.</p>
<p>How can I get this? Thank you!</p>
<p>What I used so far:</p>
<p><code>df["sum_value1"] = df["value1"].groupby(["key1","key2","key3"]).transform('sum')</code></p>
|
<p>Use <code>groupby</code> and <code>transform</code> to return the sum of the individual columns:</p>
<pre><code>df[['sum_value1','sum_value2']] = df.groupby(['key1','key2','key3'])[['value1','value2']].transform('sum')
df
</code></pre>
<pre><code>   key1 key2 key3  value1  value2  sum_value1  sum_value2
0     1    a   s2       3       4           5           7
1     1    a   s2       2       3           5           7
2     2    b   j6       1       1           1           1
</code></pre>
|
python|pandas|dataframe|group-by|sum
| 0
|
7,322
| 30,972,588
|
Columns name dropped on append in pandas
|
<p>WinPython: pandas 0.16.1, py3.4</p>
<pre><code>df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df1.columns.names=["hello"]
df1
hello A B C D
0 A0 B0 C0 D0
1 A1 B1 C1 D1
2 A2 B2 C2 D2
3 A3 B3 C3 D3
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7])
df4.columns.names=["hello"]
df4
hello B D F
2 B2 D2 F2
3 B3 D3 F3
6 B6 D6 F6
7 B7 D7 F7
</code></pre>
<p>I need to join dataframes like the ones shown above, but columns name <code>hello</code> (it is not a column as it may seem!) is dropped on append operation. Why? I have to force it like this: <code>pv.columns.names = df4.columns.names</code></p>
<pre><code>df1.append(df4)
A B C D F
0 A0 B0 C0 D0 NaN
1 A1 B1 C1 D1 NaN
2 A2 B2 C2 D2 NaN
3 A3 B3 C3 D3 NaN
2 NaN B2 NaN D2 F2
3 NaN B3 NaN D3 F3
6 NaN B6 NaN D6 F6
7 NaN B7 NaN D7 F7
</code></pre>
<p>UPD: <code>concat</code>/<code>append</code> drops axis 0/1 names when they differ. So, I think, forcing <code>.names</code> after <code>append</code> is the best solution now.</p>
|
<p>The <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.append.html" rel="nofollow"><code>DataFrame.append</code> method</a> is not as good as the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow"><code>pandas.concat</code> function</a> for this purpose.</p>
<p>Using the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow"><code>pandas.concat</code> function</a>, you will keep the columns name <code>hello</code>.</p>
<pre><code>pd.concat([df1, df4])
A B C D F
hello
0 A0 B0 C0 D0 NaN
1 A1 B1 C1 D1 NaN
2 A2 B2 C2 D2 NaN
3 A3 B3 C3 D3 NaN
2 NaN B2 NaN D2 F2
3 NaN B3 NaN D3 F3
6 NaN B6 NaN D6 F6
7 NaN B7 NaN D7 F7
</code></pre>
|
python|pandas
| 1
|
7,323
| 67,509,823
|
Concat Read Excel Pandas
|
<p>I'm needing to read in an excel file and read all sheets inside that excel file.</p>
<p>I've tried:</p>
<pre><code>sample_df = pd.concat(pd.read_excel("sample_master.xlsx", sheet_name=None), ignore_index=True)
</code></pre>
<p>This code worked, but it's suddenly giving me this error:</p>
<pre><code>TypeError: first argument must be an iterable of pandas objects, you passed an object of type "DataFrame"
</code></pre>
<p>After reading in the excel file, I need to run the following command:</p>
<pre><code>new_id = sample_df.loc[(sample_df['Sequencing_ID'] == line) & (sample_df['Experiment_ID'] == experiment_id), \
'Sample_ID_for_report'].item()
</code></pre>
<p>Any help?</p>
|
<p>First, you will want to know all of the sheets that need to be read in. Second, you will want to iterate over each sheet.</p>
<ol>
<li><em>Getting Sheet names</em>.- You can get a list of the sheet names in a workbook with <code>sheets = pd.ExcelFile(path).sheet_names</code>, where <code>path</code> is the full path to your file. The function below reads a workbook and returns a list of sheet names that contain specific key words.</li>
</ol>
<pre class="lang-py prettyprint-override"><code> import re
import pandas as pd
def get_sheets(path):
sheets = pd.ExcelFile(path).sheet_names
sheets_to_process = []
for sheet in sheets:
excludes = ['exclude_term1', 'exclude_term1']
includes = ['find_term1', 'find_term2']
sheet_stnd = re.sub('[^0-9A-Za-z_]+', '', sheet).lower().strip(' ')
for exclude in excludes:
if sheet_stnd != exclude:
for include in includes:
if include in sheet_stnd:
sheets_to_process.append(sheet)
return list(set(sheets_to_process))
</code></pre>
<ol start="2">
<li><em>Loop over sheets</em> - You can then loop over the sheets to read them in. In this example, each sheet is read into its own DataFrame and the results are concatenated at the end (passing a single sheet name to <code>read_excel</code> returns a DataFrame, so <code>concat</code> is applied to the list of frames, not to each individual read):</li>
</ol>
<pre class="lang-py prettyprint-override"><code>frames = [pd.read_excel("sample_master.xlsx", sheet_name=sheet)
          for sheet in get_sheets(path)]
sample_df = pd.concat(frames, ignore_index=True)
</code></pre>
<p>Depending on your use case, you may also want to append each sheet into a larger data frame</p>
|
python|pandas
| 0
|
7,324
| 60,329,555
|
fastest way to replace values in an array with the value in the same position in another array if they match a condition
|
<p>I'm trying this syntaxis to replace values in an array with the value in the same position in another array if they match a condition:</p>
<pre><code>array[array>limit]=other_array[array>limit]
</code></pre>
<p>It works but I think I might be doing it the hard way. Any thoughts?</p>
|
<p>Use <a href="https://numpy.org/doc/1.18/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>:</p>
<blockquote>
<p>Parameters</p>
<p>condition: array_like, bool</p>
<pre><code> Where True, yield x, otherwise yield y.
</code></pre>
<p>x, y: array_like</p>
<pre><code> Values from which to choose. x, y and condition need to be broadcastable to some shape.
</code></pre>
<p>Returns</p>
<p>out: ndarray</p>
<pre><code> An array with elements from x where condition is True, and elements from y elsewhere.
</code></pre>
</blockquote>
<p>Example:</p>
<pre><code>a1 = np.array([3, 2, 4, 1])
a2 = a1 + 10
limit = 2
>>> np.where(a1 > limit, a2, a1)
array([13, 2, 14, 1])
</code></pre>
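<p>If you want the in-place replacement without repeating the mask on both sides, <code>np.copyto</code> with a <code>where</code> argument is another option (array names below echo the question; the values are hypothetical):</p>

```python
import numpy as np

array = np.array([1, 5, 3, 9], dtype=float)
other_array = np.array([10, 20, 30, 40], dtype=float)
limit = 4

# Copies other_array into array only where the condition holds
np.copyto(array, other_array, where=array > limit)
print(array)  # [ 1. 20.  3. 40.]
```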
|
python|arrays|numpy|indexing|replace
| 2
|
7,325
| 60,260,753
|
Join two dataframes to get cartesian product
|
<p>How to join two dataframes and get the cartesian product of all rows in both dataframes.</p>
<p>df1:</p>
<pre><code> values
0 4
1 5
2 6
</code></pre>
<p>df2:</p>
<pre><code> values
0 7
1 8
2 9
</code></pre>
<p>Expected Output:</p>
<pre><code> values_x values_y
0 4 7
1 4 8
2 4 9
3 5 7
4 5 8
5 5 9
6 6 7
7 6 8
8 6 9
</code></pre>
|
<p>You can use a dummy column to merge on:</p>
<pre><code>df1.assign(dummy=1).merge(df2.assign(dummy=1), on='dummy', how='outer').drop('dummy', axis=1)
</code></pre>
<p>Output:</p>
<pre><code> values_x values_y
0 4 7
1 4 8
2 4 9
3 5 7
4 5 8
5 5 9
6 6 7
7 6 8
8 6 9
</code></pre>
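<p>In newer pandas versions (1.2 and later) the dummy column is no longer needed: <code>how='cross'</code> produces the cartesian product directly.</p>

```python
import pandas as pd

df1 = pd.DataFrame({'values': [4, 5, 6]})
df2 = pd.DataFrame({'values': [7, 8, 9]})

# Cartesian product without a helper column (requires pandas >= 1.2);
# overlapping column names get the default _x/_y suffixes.
out = df1.merge(df2, how='cross')
print(out.shape)  # (9, 2)
```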
|
python|pandas|dataframe|merge|cartesian-product
| 2
|
7,326
| 60,056,340
|
How to get the unique pairs from the given data frame column with file handling?
|
<p>sample data from dataframe:
<a href="https://i.stack.imgur.com/7csTt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7csTt.png" alt="sample data from dataframe" /></a></p>
<h1>Pairs</h1>
<pre><code>(8, 8), (8, 8), (8, 8), (8, 8), (8, 8)
(6, 7), (7, 7), (7, 7), (7, 6), (6, 7)
(2, 12), (12, 3), (3, 4), (4, 12), (12, 12)
</code></pre>
<pre><code>new_col = []
for e in content.Pairs:
    new_col.append(list(dict.fromkeys(e)))
content['Unique'] = new_col
</code></pre>
<h1>output expected is unique pairs from Pair column like this:</h1>
<pre><code>(8, 8),(6, 7),(7, 6),(7, 7),(2, 12) so on
</code></pre>
<h1>what I am getting is this result when trying the above code:</h1>
<h1>Unique</h1>
<pre><code>['8', '']
['6', '7', '']
['2', '12', '3', '4', '']
</code></pre>
<p>What is the issue with the data? If I do it with manual data it works, so why not with the data frame?</p>
|
<p>You could use the built-in <code>set</code> type to drop duplicates:</p>
<pre><code>data = (((8, 8), (8, 8), (8, 8), (8, 8), (8, 8)),
((6, 7), (7, 7), (7, 7), (7, 6), (6, 7)),
((2, 12), (12, 3), (3, 4), (4, 12), (12, 12)))
uniques = []
for col in data:
for unique in list(set(col)):
uniques.append(unique)
for x in uniques:
print(x)
</code></pre>
<p><strong>OR</strong>:</p>
<pre><code>data = (((8, 8), (8, 8), (8, 8), (8, 8), (8, 8)),
((6, 7), (7, 7), (7, 7), (7, 6), (6, 7)),
((2, 12), (12, 3), (3, 4), (4, 12), (12, 12)))
uniques = []
for col in data:
uniques += [unique for unique in list(set(col))]
for x in uniques:
print(x)
</code></pre>
|
python|pandas|data-science
| 1
|
7,327
| 65,345,723
|
start date is first day of month, end date is subsequent month first day Pandas
|
<p>I'm looking to iterate through a date range where start_date and end_date increment on a monthly basis with each iteration starting at the beginning of the month.</p>
<p>First iteration example:</p>
<pre><code>start_date = '2020-01-01'
end_date = '2020-02-01'
</code></pre>
<p>The second iteration of the loop should look like:</p>
<pre><code>start_date = '2020-02-01'
end_date = '2020-03-01'
</code></pre>
<p>what I've tried:</p>
<pre><code>for x in range(20):
start_date = pd.Timestamp('2020-01-01') + pd.offsets.MonthBegin(n=1)
end_date = start_date + pd.offsets.MonthBegin(n=1)
</code></pre>
<p>The issue I am having is that on the next iteration I am not incrementing to the next month. it stays on the current month. Is there a way to increment to the next month?</p>
|
<pre><code>start_date = pd.to_datetime('2020-02-01')
end_date = pd.to_datetime('2020-03-01')
for x in range(20):
print(start_date.strftime('%Y-%m-%d'))
print(end_date.strftime('%Y-%m-%d'))
# do the work here...
# then increment the dates for the next iteration
start_date = start_date + pd.offsets.MonthBegin(n=1)
end_date = start_date + pd.offsets.MonthBegin(n=1)
</code></pre>
<p>seems to do the job</p>
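<p>An alternative sketch: <code>pd.date_range</code> with the month-start frequency (<code>freq='MS'</code>) generates all the boundaries up front, and consecutive pairs give the <code>(start_date, end_date)</code> windows without any manual incrementing.</p>

```python
import pandas as pd

# 21 month-start boundaries -> 20 (start, end) windows
bounds = pd.date_range('2020-01-01', periods=21, freq='MS')
for start_date, end_date in zip(bounds[:-1], bounds[1:]):
    print(start_date.strftime('%Y-%m-%d'), end_date.strftime('%Y-%m-%d'))
```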
|
python-3.x|pandas|date
| 0
|
7,328
| 65,185,165
|
Filling column with values (pandas)
|
<p>I have a problem filling in values in a column with pandas. I want to add strings which describe the annual income class of a customer: 20% of the length of the data frame should get the value "Lowest", 9% should get "Lower Middle", etc. I thought of creating a list, appending the values, and then setting it as the value for the column, but then I get a <code>ValueError: Length of values (5) does not match length of index (500)</code>.</p>
<pre><code>list_of_lists = []
list_of_lists.append(int(0.2*len(df))*"Lowest")
list_of_lists.append(int(0.09*len(df))*"Lower Middle")
list_of_lists.append(int(0.5*len(df))*"Middle")
list_of_lists.append(int(0.12*len(df))*"Upper Middle")
list_of_lists.append(int(0.12*len(df))*"Highest")
df["Annual Income"] = list_of_lists
</code></pre>
<p>Do you have an idea of what could be the best way to do this?</p>
<p>Thanks in advance
Best regards
Alina</p>
|
<p>You can use <code>numpy</code> to do a weighted choice. The method has a list of choices, the number of choices to make, and the probabilities. You could generate this and just do <code>df['Annual Income'] = incomes</code></p>
<p>I've printed out the value counts so you can see what the totals were. It will be slightly different every time.</p>
<p>Also I had to tweak the probabilities so they add up to 100%</p>
<pre><code>import pandas as pd
from numpy.random import choice
incomes = choice(['Lowest','Lower Middle','Middle','Upper Middle','Highest'], 500,
p=[.2,.09,.49,.11,.11])
df= pd.DataFrame({'Annual Income':incomes})
df.value_counts()
Annual Income
Middle 245
Lowest 87
Upper Middle 66
Highest 57
Lower Middle 45
</code></pre>
|
python|pandas|dataframe|dataset|data-science
| 1
|
7,329
| 65,243,260
|
PYTHON pandas Is there a way to dynamically delete rows from pandas dataframe WHILE writing it to CSV to free memory?
|
<p>I have several huge dataframes, and I'm writing multithreaded functions to write them to disk as .csv but it takes a really long time and I want that memory back so I can go get more huge dataframes while these slowly write.</p>
<p>Is it possible to use pandas to:</p>
<ol>
<li>write a chunk</li>
<li>delete those rows fromt he dataframe to free memory</li>
<li>Repeat until dataframe is written to disk as csv and thread is finished</li>
</ol>
|
<p>I am not quite sure how this works under the hood, but the <code>chunksize</code> parameter might be what you are looking for.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">Pandas docs</a></p>
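<p>A minimal sketch of the write-a-chunk-then-drop-it idea (chunk size and frame contents are hypothetical; a <code>StringIO</code> buffer stands in for a real file opened with <code>open(..., 'w')</code>). Note that whether memory is actually returned to the OS depends on there being no other references to the dropped rows.</p>

```python
import io

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10_000, 3), columns=['a', 'b', 'c'])

buf = io.StringIO()  # stands in for a file handle from open('out.csv', 'w')
chunk_rows = 2_500
header = True
while len(df):
    # Write the next chunk, then drop those rows from the frame
    df.iloc[:chunk_rows].to_csv(buf, header=header, index=False)
    df = df.iloc[chunk_rows:].copy()  # .copy() lets the old block be freed
    header = False  # only the first chunk carries the header row
```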
|
python|pandas|dataframe
| 0
|
7,330
| 65,481,370
|
Size mismatch in tensorflow_federated eager executor
|
<p>I am following this code <a href="https://github.com/BUAA-BDA/FedShapley/tree/master/TensorflowFL" rel="nofollow noreferrer">https://github.com/BUAA-BDA/FedShapley/tree/master/TensorflowFL</a> and trying to run the file same_OR.py</p>
<p>There is a problem with <code>import tensorflow.compat.v1 as tf</code>: it shows that it is unable to import "tensorflow.compat.v1" in file "same_OR.py".</p>
<pre><code>from __future__ import absolute_import, division, print_function
import tensorflow_federated as tff
import tensorflow.compat.v1 as tf
import numpy as np
import time
from scipy.special import comb, perm
import os
# tf.compat.v1.enable_v2_behavior()
# tf.compat.v1.enable_eager_execution()
# NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
NUM_AGENT = 5
def get_data_for_digit(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, len(all_samples), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x': np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y': np.array([source[1][i] for i in batch_samples], dtype=np.int32)})
return output_sequence
def get_data_for_digit_test(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, len(all_samples)):
output_sequence.append({
'x': np.array(source[0][all_samples[i]].flatten() / 255.0,
dtype=np.float32),
'y': np.array(source[1][all_samples[i]], dtype=np.int32)})
return output_sequence
def get_data_for_federated_agents(source, num):
output_sequence = []
Samples = []
for digit in range(0, 10):
samples = [i for i, d in enumerate(source[1]) if d == digit]
samples = samples[0:5421]
Samples.append(samples)
all_samples = []
for sample in Samples:
for sample_index in range(int(num * (len(sample) / NUM_AGENT)), int((num + 1) * (len(sample) / NUM_AGENT))):
all_samples.append(sample[sample_index])
# all_samples = [i for i in range(int(num*(len(source[1])/NUM_AGENT)), int((num+1)*(len(source[1])/NUM_AGENT)))]
for i in range(0, len(all_samples), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x': np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y': np.array([source[1][i] for i in batch_samples], dtype=np.int32)})
return output_sequence
BATCH_TYPE = tff.NamedTupleType([
('x', tff.TensorType(tf.float32, [None, 784])),
('y', tff.TensorType(tf.int32, [None]))])
MODEL_TYPE = tff.NamedTupleType([
('weights', tff.TensorType(tf.float32, [784, 10])),
('bias', tff.TensorType(tf.float32, [10]))])
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
predicted_y = tf.nn.softmax(tf.matmul(batch.x, model.weights) + model.bias)
return -tf.reduce_mean(tf.reduce_sum(
tf.one_hot(batch.y, 10) * tf.log(predicted_y), axis=[1]))
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
# Define a group of model variables and set them to `initial_model`.
model_vars = tff.utils.create_variables('v', MODEL_TYPE)
init_model = tff.utils.assign(model_vars, initial_model)
# Perform one step of gradient descent using loss from `batch_loss`.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
with tf.control_dependencies([init_model]):
train_model = optimizer.minimize(batch_loss(model_vars, batch))
# Return the model vars after performing this gradient descent step.
with tf.control_dependencies([train_model]):
return tff.utils.identity(model_vars)
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)
@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):
# Mapping function to apply to each batch.
@tff.federated_computation(MODEL_TYPE, BATCH_TYPE)
def batch_fn(model, batch):
return batch_train(model, batch, learning_rate)
l = tff.sequence_reduce(all_batches, initial_model, batch_fn)
return l
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
#
return tff.sequence_sum(
tff.sequence_map(
tff.federated_computation(lambda b: batch_loss(model, b), BATCH_TYPE),
all_batches))
SERVER_MODEL_TYPE = tff.FederatedType(MODEL_TYPE, tff.SERVER, all_equal=True)
CLIENT_DATA_TYPE = tff.FederatedType(LOCAL_DATA_TYPE, tff.CLIENTS)
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
return tff.federated_mean(
tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
SERVER_FLOAT_TYPE = tff.FederatedType(tf.float32, tff.SERVER, all_equal=True)
@tff.federated_computation(
SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE, CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
l = tff.federated_map(
local_train,
[tff.federated_broadcast(model),
tff.federated_broadcast(learning_rate),
data])
return l
# return tff.federated_mean()
def readTestImagesFromFile(distr_same):
ret = []
if distr_same:
f = open(os.path.join(os.path.dirname(__file__), "test_images1_.txt"), encoding="utf-8")
else:
f = open(os.path.join(os.path.dirname(__file__), "test_images1_.txt"), encoding="utf-8")
lines = f.readlines()
for line in lines:
tem_ret = []
p = line.replace("[", "").replace("]", "").replace("\n", "").split("\t")
for i in p:
if i != "":
tem_ret.append(float(i))
ret.append(tem_ret)
return np.asarray(ret)
def readTestLabelsFromFile(distr_same):
ret = []
if distr_same:
f = open(os.path.join(os.path.dirname(__file__), "test_labels_.txt"), encoding="utf-8")
else:
f = open(os.path.join(os.path.dirname(__file__), "test_labels_.txt"), encoding="utf-8")
lines = f.readlines()
for line in lines:
tem_ret = []
p = line.replace("[", "").replace("]", "").replace("\n", "").split(" ")
for i in p:
if i!="":
tem_ret.append(float(i))
ret.append(tem_ret)
return np.asarray(ret)
def getParmsAndLearningRate(agent_no):
f = open(os.path.join(os.path.dirname(__file__), "weights_" + str(agent_no) + ".txt"))
content = f.read()
g_ = content.split("***\n--------------------------------------------------")
parm_local = []
learning_rate_list = []
for j in range(len(g_) - 1):
line = g_[j].split("\n")
if j == 0:
weights_line = line[0:784]
learning_rate_list.append(float(line[784].replace("*", "").replace("\n", "")))
else:
weights_line = line[1:785]
learning_rate_list.append(float(line[785].replace("*", "").replace("\n", "")))
valid_weights_line = []
for l in weights_line:
w_list = l.split("\t")
w_list = w_list[0:len(w_list) - 1]
w_list = [float(i) for i in w_list]
valid_weights_line.append(w_list)
parm_local.append(valid_weights_line)
f.close()
f = open(os.path.join(os.path.dirname(__file__), "bias_" + str(agent_no) + ".txt"))
content = f.read()
g_ = content.split("***\n--------------------------------------------------")
bias_local = []
for j in range(len(g_) - 1):
line = g_[j].split("\n")
if j == 0:
weights_line = line[0]
else:
weights_line = line[1]
b_list = weights_line.split("\t")
b_list = b_list[0:len(b_list) - 1]
b_list = [float(i) for i in b_list]
bias_local.append(b_list)
f.close()
ret = {
'weights': np.asarray(parm_local),
'bias': np.asarray(bias_local),
'learning_rate': np.asarray(learning_rate_list)
}
return ret
def train_with_gradient_and_valuation(agent_list, grad, bi, lr, distr_type):
f_ini_p = open(os.path.join(os.path.dirname(__file__), "initial_model_parameters.txt"), "r")
para_lines = f_ini_p.readlines()
w_paras = para_lines[0].split("\t")
w_paras = [float(i) for i in w_paras]
b_paras = para_lines[1].split("\t")
b_paras = [float(i) for i in b_paras]
w_initial_g = np.asarray(w_paras, dtype=np.float32).reshape([784, 10])
b_initial_g = np.asarray(b_paras, dtype=np.float32).reshape([10])
f_ini_p.close()
model_g = {
'weights': w_initial_g,
'bias': b_initial_g
}
for i in range(len(grad[0])):
        # i -> iteration round index
gradient_w = np.zeros([784, 10], dtype=np.float32)
gradient_b = np.zeros([10], dtype=np.float32)
for j in agent_list:
gradient_w = np.add(np.multiply(grad[j][i], 1/len(agent_list)), gradient_w)
gradient_b = np.add(np.multiply(bi[j][i], 1/len(agent_list)), gradient_b)
model_g['weights'] = np.subtract(model_g['weights'], np.multiply(lr[0][i], gradient_w))
model_g['bias'] = np.subtract(model_g['bias'], np.multiply(lr[0][i], gradient_b))
test_images = readTestImagesFromFile(False)
test_labels_onehot = readTestLabelsFromFile(False)
m = np.dot(test_images, np.asarray(model_g['weights']))
test_result = m + np.asarray(model_g['bias'])
y = tf.nn.softmax(test_result)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.arg_max(test_labels_onehot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return accuracy.numpy()
def remove_list_indexed(removed_ele, original_l, ll):
new_original_l = []
for i in original_l:
new_original_l.append(i)
for i in new_original_l:
if i == removed_ele:
new_original_l.remove(i)
for i in range(len(ll)):
if set(ll[i]) == set(new_original_l):
return i
return -1
def shapley_list_indexed(original_l, ll):
for i in range(len(ll)):
if set(ll[i]) == set(original_l):
return i
return -1
def PowerSetsBinary(items):
N = len(items)
set_all = []
for i in range(2 ** N):
combo = []
for j in range(N):
if (i >> j) % 2 == 1:
combo.append(items[j])
set_all.append(combo)
return set_all
if __name__ == "__main__":
start_time = time.time()
#data_num = np.asarray([5923,6742,5958,6131,5842])
#agents_weights = np.divide(data_num, data_num.sum())
for index in range(NUM_AGENT):
f = open(os.path.join(os.path.dirname(__file__), "weights_"+str(index)+".txt"), "w")
f.close()
f = open(os.path.join(os.path.dirname(__file__), "bias_" + str(index) + ".txt"), "w")
f.close()
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
DISTRIBUTION_TYPE = "SAME"
federated_train_data_divide = None
federated_train_data = None
if DISTRIBUTION_TYPE == "SAME":
federated_train_data_divide = [get_data_for_federated_agents(mnist_train, d) for d in range(NUM_AGENT)]
federated_train_data = federated_train_data_divide
f_ini_p = open(os.path.join(os.path.dirname(__file__), "initial_model_parameters.txt"), "r")
para_lines = f_ini_p.readlines()
w_paras = para_lines[0].split("\t")
w_paras = [float(i) for i in w_paras]
b_paras = para_lines[1].split("\t")
b_paras = [float(i) for i in b_paras]
w_initial = np.asarray(w_paras, dtype=np.float32).reshape([784, 10])
b_initial = np.asarray(b_paras, dtype=np.float32).reshape([10])
f_ini_p.close()
initial_model = {
'weights': w_initial,
'bias': b_initial
}
model = initial_model
learning_rate = 0.1
for round_num in range(50):
local_models = federated_train(model, learning_rate, federated_train_data)
print("learning rate: ", learning_rate)
        #print(local_models[0][0])  # weights matrix of agent 0
        #print(local_models[0][1])  # bias matrix of agent 0
#print(len(local_models))
for local_index in range(len(local_models)):
f = open(os.path.join(os.path.dirname(__file__), "weights_"+str(local_index)+".txt"),"a",encoding="utf-8")
for i in local_models[local_index][0]:
line = ""
arr = list(i)
for j in arr:
line += (str(j)+"\t")
print(line, file=f)
print("***"+str(learning_rate)+"***",file=f)
print("-"*50,file=f)
f.close()
f = open(os.path.join(os.path.dirname(__file__), "bias_" + str(local_index) + ".txt"), "a", encoding="utf-8")
line = ""
for i in local_models[local_index][1]:
line += (str(i) + "\t")
print(line, file=f)
print("***" + str(learning_rate) + "***",file=f)
print("-"*50,file=f)
f.close()
m_w = np.zeros([784, 10], dtype=np.float32)
m_b = np.zeros([10], dtype=np.float32)
for local_model_index in range(len(local_models)):
m_w = np.add(np.multiply(local_models[local_model_index][0], 1/NUM_AGENT), m_w)
m_b = np.add(np.multiply(local_models[local_model_index][1], 1/NUM_AGENT), m_b)
model = {
'weights': m_w,
'bias': m_b
}
learning_rate = learning_rate * 0.9
loss = federated_eval(model, federated_train_data)
print('round {}, loss={}'.format(round_num, loss))
print(time.time()-start_time)
gradient_weights = []
gradient_biases = []
gradient_lrs = []
for ij in range(NUM_AGENT):
model_ = getParmsAndLearningRate(ij)
gradient_weights_local = []
gradient_biases_local = []
learning_rate_local = []
for i in range(len(model_['learning_rate'])):
if i == 0:
gradient_weight = np.divide(np.subtract(initial_model['weights'], model_['weights'][i]),
model_['learning_rate'][i])
gradient_bias = np.divide(np.subtract(initial_model['bias'], model_['bias'][i]),
model_['learning_rate'][i])
else:
gradient_weight = np.divide(np.subtract(model_['weights'][i - 1], model_['weights'][i]),
model_['learning_rate'][i])
gradient_bias = np.divide(np.subtract(model_['bias'][i - 1], model_['bias'][i]),
model_['learning_rate'][i])
gradient_weights_local.append(gradient_weight)
gradient_biases_local.append(gradient_bias)
learning_rate_local.append(model_['learning_rate'][i])
gradient_weights.append(gradient_weights_local)
gradient_biases.append(gradient_biases_local)
gradient_lrs.append(learning_rate_local)
all_sets = PowerSetsBinary([i for i in range(NUM_AGENT)])
group_shapley_value = []
for s in all_sets:
group_shapley_value.append(
train_with_gradient_and_valuation(s, gradient_weights, gradient_biases, gradient_lrs, DISTRIBUTION_TYPE))
print(str(s)+"\t"+str(group_shapley_value[len(group_shapley_value)-1]))
agent_shapley = []
for index in range(NUM_AGENT):
shapley = 0.0
for j in all_sets:
if index in j:
remove_list_index = remove_list_indexed(index, j, all_sets)
if remove_list_index != -1:
shapley += (group_shapley_value[shapley_list_indexed(j, all_sets)] - group_shapley_value[
remove_list_index]) / (comb(NUM_AGENT - 1, len(all_sets[remove_list_index])))
agent_shapley.append(shapley)
for ag_s in agent_shapley:
print(ag_s)
print("end_time", time.time()-start_time)
</code></pre>
<p>and this is the list of errors... can anyone help?</p>
<blockquote>
<p>Traceback (most recent call last): File "samOR.py", line 331, in
local_models = federated_train(model, learning_rate, federated_train_data) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\utils\function_utils.py",
line 561, in __call__
return context.invoke(self, arg) File "C:\Users\Aw\Anaconda3\lib\site-packages\retrying.py", line 49, in
wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw) File "C:\Users\Aw\Anaconda3\lib\site-packages\retrying.py", line 206, in
call
return attempt.get(self._wrap_exception) File "C:\Users\Aw\Anaconda3\lib\site-packages\retrying.py", line 247, in
get
six.reraise(self.value[0], self.value[1], self.value[2]) File "C:\Users\Aw\Anaconda3\lib\site-packages\six.py", line 703, in reraise
raise value File "C:\Users\Aw\Anaconda3\lib\site-packages\retrying.py", line 200, in
call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\execution_context.py",
line 213, in invoke
arg = event_loop.run_until_complete( File "C:\Users\Aw\Anaconda3\lib\asyncio\base_events.py", line 616, in
run_until_complete
return future.result() File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 388, in _wrapped
return await coro File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\execution_context.py",
line 99, in
_ingest
ingested = await asyncio.gather(*ingested) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\execution_context.py",
line 104, in _ingest
return await executor.create_value(val, type_spec) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\reference_resolving_executor.py",
line 286, in create_value
return ReferenceResolvingExecutorValue(await File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\caching_executor.py",
line 245, in create_value
await cached_value.target_future File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\thread_delegating_executor.py",
line 110, in create_value
return await self._delegate( File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\thread_delegating_executor.py",
line 105, in _delegate
result_value = await _delegate_with_trace_ctx(coro, self._event_loop) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 388, in _wrapped
return await coro File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\federating_executor.py",
line 383, in create_value
return await self._strategy.compute_federated_value(value, type_spec) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\federated_resolving_strategy.py",
line 272, in compute_federated_value
result = await asyncio.gather(*[ File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\reference_resolving_executor.py",
line 281, in create_value
vals = await asyncio.gather( File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\reference_resolving_executor.py",
line 286, in create_value
return ReferenceResolvingExecutorValue(await File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\caching_executor.py",
line 245, in create_value
await cached_value.target_future File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\thread_delegating_executor.py",
line 110, in create_value
return await self._delegate( File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\thread_delegating_executor.py",
line 105, in _delegate
result_value = await _delegate_with_trace_ctx(coro, self._event_loop) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 388, in _wrapped
return await coro File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py",
line 200, in async_trace
result = await fn(*fn_args, **fn_kwargs) File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\eager_tf_executor.py",
line 464, in create_value
return EagerValue(value, self._tf_function_cache, type_spec, self._device) File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\eager_tf_executor.py",
line 366, in __init__ File
"C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\eager_tf_executor.py",
line 326, in to_representation_for_type
raise TypeError( TypeError: The apparent type float32[10] of a tensor [-0.9900856 -0.9902875 -0.99910086 -0.9972545 -0.99561495
-0.99766624 -0.9964327 -0.99897027 -0.9960221 -0.99313617] does not match the expected type float32[784,10]. ERROR:asyncio:Task was
destroyed but it is pending! task: <Task pending name='Task-7'
coro=<trace..async_trace() running at
C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\common_libs\tracing.py:200>
wait_for=<Future pending
cb=[_chain_future.._call_check_cancel() at
C:\Users\Aw0000282F4DFE3D0>()]></p>
</blockquote>
|
<p>It looks like this is a case of mismatched tensor shapes: specifically, it is expecting a shape of <code>float32[784,10]</code> but the argument has shape <code>float32[10]</code>.</p>
<p>Near the end of the stack trace the key line appears to be:</p>
<pre class="lang-py prettyprint-override"><code>File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\eager_tf_executor.py", line 366,
in __init__
File "C:\Users\Aw\Anaconda3\lib\site-packages\tensorflow_federated\python\core\impl\executors\eager_tf_executor.py", line 326,
in to_representation_for_type raise TypeError(
TypeError: The apparent type float32[10] of a tensor [-0.9900856 -0.9902875 -0.99910086 -0.9972545 -0.99561495 -0.99766624 -0.9964327 -0.99897027 -0.9960221 -0.99313617] does not match the expected type float32[784,10].
</code></pre>
<p>The most common case where this happens is converting a <code>dict</code> (unordered in older versions of Python) to a <code>tff.StructType</code> (ordered in TFF).</p>
<p>One place in the code that might be doing this is in:</p>
<pre class="lang-py prettyprint-override"><code> initial_model = {
'weights': w_initial,
'bias': b_initial
}
</code></pre>
<p>Instead, changing this to a <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="nofollow noreferrer"><code>collections.OrderedDict</code></a> to preserve the key ordering may help. Something like (ensuring the keys match the order in <code>MODEL_TYPE</code>):</p>
<pre class="lang-py prettyprint-override"><code> import collections
initial_model = collections.OrderedDict(
weights=w_initial,
bias=b_initial)
</code></pre>
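<p>The ordering pitfall is easy to reproduce without TFF at all. The sketch below (plain Python/NumPy, with made-up zero tensors matching the model shapes above) shows how two dicts with the same keys but different insertion order flatten to different value sequences, which is exactly the kind of positional mismatch the error describes; <code>collections.OrderedDict</code> makes the intended order explicit:</p>

```python
import collections

import numpy as np

w = np.zeros([784, 10], dtype=np.float32)
b = np.zeros([10], dtype=np.float32)

# Same keys, different insertion order (Python 3.7+ dicts keep insertion order).
model_a = {'weights': w, 'bias': b}
model_b = {'bias': b, 'weights': w}

shapes_a = [v.shape for v in model_a.values()]
shapes_b = [v.shape for v in model_b.values()]
print(shapes_a)  # [(784, 10), (10,)]
print(shapes_b)  # [(10,), (784, 10)] -- the (10,) tensor lands in the (784, 10) slot

# An OrderedDict makes the intended ordering explicit and self-documenting.
model_c = collections.OrderedDict(weights=w, bias=b)
print([v.shape for v in model_c.values()])  # [(784, 10), (10,)]
```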
|
python|tensorflow|machine-learning|tensorflow-federated|federated-learning
| 0
|
7,331
| 65,320,693
|
iterate rows of a column and remove all text after specific words in python
|
<p>Is there a way in python to remove all text if a specific combination of words is found in a row?
I have six different combinations of words after which the text should be deleted ('from the manufacturer' is an example). I want to iterate over rows of a column and remove all the text found after these words.</p>
<pre><code>list_of_words = ['Descrizione Prodotto', 'Produktbeschreibung des Herstellers', 'Description du fabricant', 'From the manufacturer', 'Descripción Prodotto']
var = df['info']
for index, row in df.iterrows():
for word in list_of_words:
if word in var:
var.split(word, 1)[0]
</code></pre>
|
<p>Try the following, using a lambda function:</p>
<pre><code>list_of_words = ['Descrizione Prodotto', 'Produktbeschreibung des Herstellers', 'Description du fabricant', 'From the manufacturer', 'Descripción Prodotto']
def clear(x):
for i in list_of_words:
if i in x:
x=x[:x.find(i)+len(i)]
return x
df['info']=df['info'].apply(lambda x: clear(x))
</code></pre>
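<p>For example (with made-up product text; note that the function above keeps everything up to and including the matched phrase):</p>

```python
import pandas as pd

list_of_words = ['Descrizione Prodotto', 'From the manufacturer']

def clear(x):
    for i in list_of_words:
        if i in x:
            x = x[:x.find(i) + len(i)]  # keep text up to and including the phrase
    return x

df = pd.DataFrame({'info': [
    'Great blender. From the manufacturer Some marketing copy here.',
    'No trigger phrase in this row.',
]})
df['info'] = df['info'].apply(clear)
print(df['info'].tolist())
# ['Great blender. From the manufacturer', 'No trigger phrase in this row.']
```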
|
python|pandas
| 1
|
7,332
| 65,338,973
|
Pandas Move text in column
|
<p>I have a data frame that contains search results. Is it possible to move the text around in the column so that the the name is always first?</p>
<pre><code>Results
'Phil Spencer', 'Microsoft'
'Larry Hryb', 'Microsoft'
'Microsoft', 'Bill Gates'
'Sony', 'Kenichiro, Yoshida'
'Sony', 'PS5', 'Howard Stringer'
</code></pre>
<p>Expected</p>
<pre><code>Results
'Phil Spencer', 'Microsoft'
'Larry Hryb', 'Microsoft'
'Bill Gates', 'Microsoft'
'Kenichiro, Yoshida','Sony'
'Howard Stringer', 'Sony', 'PS5'
</code></pre>
<p>I want it so that the name is always first in the column. Is there a way of doing this?</p>
|
<p>Quite a tough one, but we could assume that anything with a <code>space</code> is a name and try to order it that way.</p>
<p>First let's split on any <code>,</code> that comes right after a <code>'</code> and is followed by a space <code>\s</code></p>
<hr />
<pre><code>s = df['Results'].str.split("',\s",expand=True).stack()
0 0 'Phil Spencer
1 'Microsoft'
1 0 'Larry Hryb
1 'Microsoft'
2 0 'Microsoft
1 'Bill Gates'
3 0 'Sony
1 'Kenichiro, Yoshida'
4 0 'Sony
1 'PS5
2 'Howard Stringer'
new_results = (
s.loc[s.str.contains("\s{1}").astype(int).sort_values(ascending=False).index]
.replace({"'": "", ",": ""}, regex=True)
.groupby(level=[0])
.agg(", ".join)
)
0 Phil Spencer, Microsoft
1 Larry Hryb, Microsoft
2 Bill Gates, Microsoft
3 Kenichiro Yoshida, Sony
4 Howard Stringer, PS5, Sony
</code></pre>
<hr />
<p>Another, more computationally expensive solution would be to sort by the length of each object, but you can see this is not foolproof, as some company names may be longer than the first name.</p>
<pre><code>(
s.loc[s.apply(len).sort_values(ascending=False).index]
.replace({"'": "", ",": ""}, regex=True)
.groupby(level=[0])
.agg(", ".join)
)
0 Phil Spencer, Microsoft
1 Microsoft, Larry Hryb # <-- wrong.
2 Bill Gates, Microsoft
3 Kenichiro Yoshida, Sony
4 Howard Stringer, Sony, PS5
dtype: object
</code></pre>
|
python|pandas|dataframe
| 3
|
7,333
| 50,137,837
|
Pandas DataFrame mean with object
|
<p>I have a dataframe with 2 columns, nbr and tag. Nbr contains integers and tag contains Tag objects. </p>
<p>And I want to get the mean of all the tag objects (using the value attribute; the result should be a new Tag with that value).</p>
<p>For <code>dataframe.add</code> I had to add the <code>__add__</code> method to the Tag class.
Example:</p>
<pre><code>import pandas as pd
class Tag(object):
def __init__(self, value):
self.value = value
def __add__(self, other):
return Tag(self.value + other.value)
a = Tag(2)
b = Tag(8)
frame = pd.DataFrame({
'tag': [a, b],
'nbr': [3, 6]
})
new_tag = frame.tag.sum()
print new_tag.value # 10
</code></pre>
<p>But for <code>frame.tag.mean()</code> I get this error: <code>TypeError: Could not convert <__main__.Tag object at 0x7f375ac460d0> to numeric</code>.
Pandas first tries to convert the object to float: <code>float(x)</code>; then, if that fails, it tries <code>x = complex(x)</code>. </p>
<p>My question: is there a way to make <code>float(tag_object)</code> or <code>complex(tag_object)</code> return the value attribute by adding a method to my Tag class, like I did with <code>__add__</code>? </p>
<p>Thanks in advance.</p>
|
<p>Looking at the source code, it seems like Pandas's mean coerces the results to a numeric type. </p>
<p>You can get close by adding the <a href="https://docs.python.org/3/reference/datamodel.html" rel="nofollow noreferrer">special <code>__float__</code> method</a> to <code>Tag</code>:</p>
<pre><code>import pandas as pd
class Tag(object):
def __init__(self, value):
self.value = value
def __add__(self, other):
return Tag(self.value + other.value)
def __float__(self):
return float(self.value)
</code></pre>
<p>Once you do so, you get</p>
<pre><code>a = Tag(2)
b = Tag(8)
frame = pd.DataFrame({
'tag': [a, b],
'nbr': [3, 6]
})
new_tag = frame.tag.mean()
>>> print(new_tag)
5.0
</code></pre>
<p>Note that this doesn't do exactly what you wanted (it doesn't create a <code>Tag</code> with value 5.0 - Pandas wants the result to be a numeric type).</p>
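<p>If a <code>Tag</code> result is actually needed, one workaround (just a sketch, redefining a minimal <code>Tag</code> here) is to extract the numeric values with <code>map</code>, which sidesteps pandas's coercion entirely, and wrap the mean back into a <code>Tag</code> yourself:</p>

```python
import pandas as pd

class Tag(object):
    def __init__(self, value):
        self.value = value

frame = pd.DataFrame({'tag': [Tag(2), Tag(8)], 'nbr': [3, 6]})

# Extract the numeric values first, then wrap the mean back into a Tag.
mean_tag = Tag(frame.tag.map(lambda t: t.value).mean())
print(mean_tag.value)  # 5.0
```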
|
python|pandas|object|mean
| 2
|
7,334
| 50,175,706
|
Binary Arrays: Determine if all 1s in one array come before first 1 in the other
|
<p>I've been working on an interesting problem and figured I should post about it on here. The problem is the following:</p>
<p>You are given two arrays, A and B, such that the elements of each array are 0 or 1. The goal is to figure out if all the 1s in A come before the first 1 in B. <em>You can assume that all 1s are sequential - you cannot have a 0 between 2 ones. (i.e. you would never have [1,0,1]).</em> Some examples:</p>
<p>True: A = [0,1,1,0], B = [0,0,0,1]</p>
<p>True: A = [1,0,0,0], B = [0,1,1,0]</p>
<p>False: A = [0,0,0,1], B = [1,0,0,0]</p>
<p>False: A = [0,1,1,0], B = [0,0,1,0]</p>
<p>You can assume that len(A) = len(B) = N, but N can be very large. Therefore, you cannot simply convert the entire array into a binary number because it will be too large to represent.</p>
<p>I'd like to find the solution in the most efficient way possible, ideally O(1). I've been coding this in Python using Numpy arrays, so you can perform logical operations on the arrays (e.g. A & B, which would tell you if any 1s overlap between A and B). Curious to see any solutions that you can come up with!</p>
|
<p>If you check out <a href="https://stackoverflow.com/questions/8768540/how-to-find-last-occurrence-of-maximum-value-in-a-numpy-ndarray">this question</a> you can find how to get the index of the last "max" value in a numpy array. Then check whether that index is less than the index of the first 1 in B and you are good. </p>
<pre><code>import numpy as np
reverse_A = A[::-1]
i = len(reverse_A) - np.argmax(reverse_A) - 1
i < np.argmax(B)
</code></pre>
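<p>Wrapped in a function and checked against the four examples from the question (note that <code>np.argmax</code> returns 0 for an all-zero array, so inputs without any 1 would need extra handling):</p>

```python
import numpy as np

def all_ones_before(A, B):
    """True iff the last 1 in A sits strictly before the first 1 in B."""
    A, B = np.asarray(A), np.asarray(B)
    last_one_in_A = len(A) - np.argmax(A[::-1]) - 1
    first_one_in_B = np.argmax(B)
    return last_one_in_A < first_one_in_B

print(all_ones_before([0, 1, 1, 0], [0, 0, 0, 1]))  # True
print(all_ones_before([1, 0, 0, 0], [0, 1, 1, 0]))  # True
print(all_ones_before([0, 0, 0, 1], [1, 0, 0, 0]))  # False
print(all_ones_before([0, 1, 1, 0], [0, 0, 1, 0]))  # False
```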
|
python|algorithm|numpy|bit-manipulation|time-complexity
| 0
|
7,335
| 49,831,784
|
Filter groups after GroupBy in pandas while keeping the groups
|
<p>In pandas I want to do:
<code>df.groupby('A').filter(lambda x: x.name > 0)</code> - group by column <code>A</code> and then filter out the groups whose name is non-positive. However this cancels the grouping, as <code>GroupBy.filter</code> returns a <code>DataFrame</code> and thus loses the groups. I want to do it in this order because it should be less computationally demanding: <code>filter</code> followed by <code>groupby</code> would walk the DataFrame twice, no? (first filtering and then grouping). Also, cloning the groups from the grouping (into a dict or something) would lose the ability to seamlessly go back to a dataframe (as with <code>.filter</code>, where you directly get the <code>DataFrame</code>)</p>
<p>Thanks</p>
<p>Example:</p>
<pre><code> A B
1 -1 1
2 -1 2
3 0 2
4 1 1
5 1 2
</code></pre>
<p><code>df.groupby('A')</code>:</p>
<pre><code>GroupBy object
-1 : [1, 2]
0 : [3]
1 : [4,5]
</code></pre>
<p><code>GroupBy.filter(lambda x: x.name >= 0)</code>:</p>
<pre><code>GroupBy object
0 : [3]
1 : [4,5]
</code></pre>
|
<p>I think the previous answers propose workarounds, which may be useful in your case but don't answer the question. </p>
<p>You created groups, and you want to throw out or keep some groups based on group statistics, THEN perform the group statistics you actually care about on the remaining groups. This should be possible, and useful in many cases; however, it is not possible now as a chained command (as far as I know), only by running two identical groupbys consecutively.</p>
<p>Let's make a case: groupby reveals some features that are not filterable on an item-level basis (so prior filtering is not an option), for example a group sum. The annoyance with filter is that it returns a dataframe rather than keeping the grouping and allowing you to perform further computations on the groups.</p>
<p>Here is an example:</p>
<p>Let's say you want to group by 'C' and filter on the sums of 'A' in the groups (<700), but in the filtered groups you actually care for the std of the groups. If filter would just be a filter on groups, this would work:</p>
<pre><code>df.groupby(['C']).filter(lambda x:x['A'].sum()<700, combine=False).std()
</code></pre>
<p>this doesn't work (note the nonexistent <code>combine=False</code> option on filter), what does is this:</p>
<pre><code>df.groupby(['C']).filter(lambda x:x['A'].sum()<700).groupby(['C']).std()
</code></pre>
<p>What filter does is actually filter&combine, which follows the split-apply-combine logic.</p>
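<p>A small made-up example of this filter-then-regroup pattern (column names <code>A</code> and <code>C</code> as above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'C': ['x', 'x', 'y', 'y'],
    'A': [100, 200, 400, 400],
})

# Group sums: x -> 300 (kept, < 700), y -> 800 (dropped by the filter).
kept = df.groupby('C').filter(lambda g: g['A'].sum() < 700)

# filter() combined the result back into a plain DataFrame, so group again:
stds = kept.groupby('C')['A'].std()
print(stds)
```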
|
python|pandas|pandas-groupby
| 6
|
7,336
| 64,136,603
|
How to deal with categorical data that has 35 unique values?
|
<p>I am working on IPL cricket dataset which has data about batting stats for all the teams over by over.</p>
<p>I want to visualise how different cricket grounds affect the total score of the batting team. I tried a simple scatter plot, but the stadium names are too long and do not display clearly.</p>
<p>Do I have to convert the 35 unique values into numeric values? Nothing prints when I try to compute the correlation with the target variable.</p>
<p>The data set:
<a href="https://i.stack.imgur.com/RoKdk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RoKdk.png" alt="The dataset" /></a></p>
<p>The problem with reading the plot (the x-axis):
<a href="https://i.stack.imgur.com/wQXLK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wQXLK.png" alt="The problem with reading the plot(the x axis)" /></a></p>
|
<p>You can change the size of the font and/or rotate it: <a href="https://matplotlib.org/api/matplotlib_configuration_api.html#matplotlib.rc" rel="nofollow noreferrer">https://matplotlib.org/api/matplotlib_configuration_api.html#matplotlib.rc</a></p>
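<p>For example (made-up stadium data, and using the non-interactive Agg backend so it runs headless), rotating and shrinking the x tick labels usually makes long category names readable:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; not needed inside a notebook
import matplotlib.pyplot as plt

stadiums = ["M. A. Chidambaram Stadium", "Eden Gardens", "Wankhede Stadium"]
avg_total = [160, 175, 182]  # made-up average innings totals

fig, ax = plt.subplots()
ax.scatter(stadiums, avg_total)
ax.tick_params(axis="x", labelrotation=90, labelsize=8)  # rotate + shrink long names
fig.canvas.draw()   # realize the ticks (normally implicit when showing/saving)
fig.tight_layout()  # keep the rotated labels inside the figure
```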
|
python|pandas|matplotlib|data-science|categorical-data
| 1
|
7,337
| 63,937,996
|
Python sliding MinMaxScaler on last 100 values
|
<p>I now have a normalization across the entire column:</p>
<pre><code>MinMaxScaler().fit_transform(Glfeatures[['Temp']])
</code></pre>
<p>How can I get such a column without a for loop, where each value is normalized against the 100 values before it?</p>
<p>For example:</p>
<pre><code>Glfeatures['Temp'][200] minmaxnormalizing on Glfeatures['Temp'][100:200]
Glfeatures['Temp'][300] minmaxnormalizing on Glfeatures['Temp'][200:300]
</code></pre>
<p>I need a fast version :) normalization for all Glfeatures over the last 100 values.</p>
<p>I tried <code>Glres[['Temp']].rolling(100).apply(MinMaxScaler())</code> but: "'MinMaxScaler' object is not callable"</p>
|
<p>This is an old question but I stumbled across it trying to accomplish the same thing.</p>
<p>Here is my solution as suggested in the comments above:</p>
<pre><code>def min_max(df, window):
def func(data):
x = data.values
return (x[-1] - min(x)) / (max(x) - min(x))
return df.rolling(window).apply(func)
</code></pre>
<p>The code snippet applies the min max scaling to the previous <code>window</code> values per column in the dataframe <code>df</code></p>
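<p>A quick sanity check on made-up data (window of 3 instead of 100 to keep it small; note the function divides by zero if all values in a window are equal):</p>

```python
import pandas as pd

def min_max(df, window):
    def func(data):
        x = data.values
        return (x[-1] - min(x)) / (max(x) - min(x))
    return df.rolling(window).apply(func, raw=False)  # raw=False: func receives a Series

temps = pd.DataFrame({'Temp': [1.0, 2.0, 3.0, 2.0, 5.0]})
scaled = min_max(temps, 3)
print(scaled)  # Temp column: NaN, NaN, 1.0, 0.0, 1.0
```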
|
python|pandas
| 1
|
7,338
| 46,862,662
|
Getting error 'str' object has no attribute 'dtype' when exporting textsum model for TensorFlow Serving
|
<p>I am currently trying to get a TF textsum model exported using the PREDICT SIGNATURE. I have _Decode returning a result from a passed-in test article string, and then I pass that to buildTensorInfo. This is in fact a string being returned. </p>
<p>Now when I run the textsum_export.py logic to export the model, it gets to the point where it is building out the TensorInfo object, but then errors out with the trace below. I know that the PREDICT signature is usually used with images. Is this the problem? Can I not use this for the Textsum model because I am working with strings?</p>
<p>Error is:</p>
<pre><code>Traceback (most recent call last):
File "export_textsum.py", line 129, in Export
tensor_info_outputs = tf.saved_model.utils.build_tensor_info(res)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/saved_model/utils_impl.py", line 37, in build_tensor_info
dtype_enum = dtypes.as_dtype(tensor.dtype).as_datatype_enum
AttributeError: 'str' object has no attribute 'dtype'
</code></pre>
<p>The TF session where the model is exported is below:</p>
<pre><code>with tf.Session(config = config) as sess:
# Restore variables from training checkpoints.
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(sess, ckpt.model_checkpoint_path)
global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
print('Successfully loaded model from %s at step=%s.' %
(ckpt.model_checkpoint_path, global_step))
res = decoder._Decode(saver, sess)
print("Decoder value {}".format(type(res)))
else:
print('No checkpoint file found at %s' % FLAGS.checkpoint_dir)
return
# Export model
export_path = os.path.join(FLAGS.export_dir,str(FLAGS.export_version))
print('Exporting trained model to %s' % export_path)
#-------------------------------------------
tensor_info_inputs = tf.saved_model.utils.build_tensor_info(serialized_tf_example)
tensor_info_outputs = tf.saved_model.utils.build_tensor_info(res)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={ tf.saved_model.signature_constants.PREDICT_INPUTS: tensor_info_inputs},
outputs={tf.saved_model.signature_constants.PREDICT_OUTPUTS:tensor_info_outputs},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
))
#----------------------------------
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder = saved_model_builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict':prediction_signature,
},
legacy_init_op=legacy_init_op)
builder.save()
print('Successfully exported model to %s' % export_path)
</code></pre>
|
<p>PREDICT signatures work with tensors; if <code>res</code> is a Python <code>str</code> variable, then <code>res_tensor</code> will be of dtype <code>tf.string</code>:</p>
<pre><code>res_tensor = tf.convert_to_tensor(res)
</code></pre>
|
tensorflow|tensorflow-serving
| 4
|
7,339
| 46,964,319
|
How do you select the 2nd column of a matrix from a pandas dataframe?
|
<p>How do you select a column other than the first column?</p>
<pre><code>import pandas as pd
df = pd.read_csv('bio.csv')
df
</code></pre>
<p><a href="https://i.stack.imgur.com/qDf6G.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qDf6G.jpg" alt="Output 1"></a></p>
<p>I could select the first column, i.e., "Index"</p>
<pre><code>df['Index']
</code></pre>
<p><a href="https://i.stack.imgur.com/OfxAM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OfxAM.jpg" alt="Output 2"></a></p>
<p>However, I could not select the second column, i.e., "Height".</p>
<pre><code>df['Height']
</code></pre>
<p>Here is the trace:</p>
<pre><code>KeyError Traceback (most recent call last)
C:\util\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2441 try:
-> 2442 return self._engine.get_loc(key)
2443 except KeyError:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Height'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-8-58aff8413556> in <module>()
----> 1 df['Height']
C:\util\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
1962 return self._getitem_multilevel(key)
1963 else:
-> 1964 return self._getitem_column(key)
1965
1966 def _getitem_column(self, key):
C:\util\Anaconda3\lib\site-packages\pandas\core\frame.py in _getitem_column(self, key)
1969 # get column
1970 if self.columns.is_unique:
-> 1971 return self._get_item_cache(key)
1972
1973 # duplicate columns & possible reduce dimensionality
C:\util\Anaconda3\lib\site-packages\pandas\core\generic.py in _get_item_cache(self, item)
1643 res = cache.get(item)
1644 if res is None:
-> 1645 values = self._data.get(item)
1646 res = self._box_item_values(item, values)
1647 cache[item] = res
C:\util\Anaconda3\lib\site-packages\pandas\core\internals.py in get(self, item, fastpath)
3588
3589 if not isnull(item):
-> 3590 loc = self.items.get_loc(item)
3591 else:
3592 indexer = np.arange(len(self.items))[isnull(self.items)]
C:\util\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2442 return self._engine.get_loc(key)
2443 except KeyError:
-> 2444 return self._engine.get_loc(self._maybe_cast_indexer(key))
2445
2446 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Height'
</code></pre>
|
<p>Below is the complete answer</p>
<pre><code>import pandas as pd
df = pd.read_csv('bio.csv', sep='[ \t]*,[ \t]*', engine='python')
df['Height']
</code></pre>
<p>This is the output:</p>
<pre><code>Out[22]: 0 65.78
1 71.52
2 69.40
3 68.22
4 67.79
5 68.70
6 69.80
7 70.01
8 67.90
9 66.78
10 66.49
11 67.62
12 68.30
13 67.12
14 68.28
15 71.09
</code></pre>
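<p>For context, the root cause is whitespace after the commas in the header row, which leaves a column literally named <code>' Height'</code>. A minimal reproduction — using an inline CSV string as a stand-in for the original <code>bio.csv</code>:</p>

```python
import io
import pandas as pd

raw = "Name, Height\nAlice, 65.78\nBob, 71.52\n"

# Default parsing keeps the leading space in ' Height', so df['Height'] raises KeyError
bad = pd.read_csv(io.StringIO(raw))
print(bad.columns.tolist())

# The regex separator from the answer strips whitespace around the commas
good = pd.read_csv(io.StringIO(raw), sep=r'[ \t]*,[ \t]*', engine='python')
print(good['Height'].tolist())
```

<p><code>skipinitialspace=True</code> is another common fix for the same problem.</p>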
|
pandas
| 0
|
7,340
| 47,082,071
|
Trying to add results to an array in Python
|
<p>I have two matrices and I want to save the Euclidean distance of each row in an array so that afterwards I can work with the data (kNN nearest neighbours). I use a temporary counter named <code>k</code> so I can later create a matrix from that array (2 columns x n rows, where each row will contain the distance from position n of the array; in this case, <code>k</code> is that n).</p>
<pre><code>import numpy as np
v1=np.matrix('1,2;3,4')
v2=np.matrix('5,6;7,8')
k=0
for i in v1:
distancias.append(k)=np.linalg.norm(v2-v1[k,:])
print(distancias[k])
k=k+1
</code></pre>
<p>It gives me an error: </p>
<pre><code>File "<ipython-input-44-4d3546d9ade5>", line 10
distancias.append(k)=np.linalg.norm(v2-v1[k,:])
^
SyntaxError: can't assign to function call
</code></pre>
<p>And I do not really know what the syntax error means.</p>
<p>I also tried: </p>
<pre><code>import numpy as np
v1=np.matrix('1,2;3,4')
v2=np.matrix('5,6;7,8')
k=0
for i in v1:
valor=np.linalg.norm(v2-v1[k,:])
distancias.append(valor)
print(distancias[k])
k=k+1
</code></pre>
<p>And in this case the error is:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-51-8a48ca0267d5> in <module>()
9
10 valor=np.linalg.norm(v2-v1[k,:])
---> 11 distancias.append(valor)
12 print(distancias[k])
13 k=k+1
AttributeError: 'numpy.float64' object has no attribute 'append'
</code></pre>
|
<p>You are trying to assign data to a function call, which is not possible. If you want to add the data computed by <code>linalg.norm()</code> to the array <code>distancias</code> you can do like shown below.</p>
<pre><code>import numpy as np
v1=np.matrix('1,2;3,4')
v2=np.matrix('5,6;7,8')
k=0
distancias = []
for i in v1:
distancias.append(np.linalg.norm(v2-v1[k,:]))
print(distancias[k])
k=k+1
print(distancias)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>10.1980390272
6.32455532034
[10.198039027185569, 6.324555320336759]
</code></pre>
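<p>As a side note, once <code>distancias</code> is initialised as a list, the manual counter can be dropped entirely — a minimal sketch using a list comprehension over the row indices (same <code>v1</code>/<code>v2</code> as above):</p>

```python
import numpy as np

v1 = np.matrix('1,2;3,4')
v2 = np.matrix('5,6;7,8')

# One distance per row of v1: the norm of (v2 - row), as in the loop above
distancias = [float(np.linalg.norm(v2 - v1[k, :])) for k in range(v1.shape[0])]
print(distancias)
```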
|
python|numpy
| 2
|
7,341
| 46,855,114
|
Poor quality classifier in Tflearn?
|
<p>I am new to machine learning and trying out <em>TFlearn</em> because it is simple.</p>
<p>I am trying to make a basic classifier which I find interesting.
My objective is to train the system to predict in which direction a point lies.</p>
<p>For example If I feed two 2D co-ordinates <code>(50,50)</code> and <code>(51,51)</code> the system has to predict that the direction is NE (North east).
If I feed <code>(50,50)</code> and <code>(49,49)</code> the system must predict that the direction is SW (South west)</p>
<p><strong>Input:</strong> X1,Y1,X2,Y2,Label<br>
<strong>Output:</strong> 0 to 8. For the 8 directions.</p>
<p>So here is the small code I wrote,</p>
<pre><code>from __future__ import print_function
import numpy as np
import tflearn
import tensorflow as tf
import time
from tflearn.data_utils import load_csv
#Sample input 50,50,51,51,5
data, labels = load_csv(filename, target_column=4,
categorical_labels=True, n_classes=8)
my_optimizer = tflearn.SGD(learning_rate=0.1)
net = tflearn.input_data(shape=[None, 4])
net = tflearn.fully_connected(net, 32) #input 4, output 32
net = tflearn.fully_connected(net, 32) #input 32, output 32
net = tflearn.fully_connected(net, 8, activation='softmax')
net = tflearn.regression(net,optimizer=my_optimizer)
model = tflearn.DNN(net)
model.fit(data, labels, n_epoch=100, batch_size=100000, show_metric=True)
model.save("direction-classifier.tfl")
</code></pre>
<p>The problem I am facing is that even after passing around 40 million input samples, the system's accuracy is as low as 20%.<br>
I restricted the inputs to <code>40 <= x <= 60</code> and <code>40 <= y <= 60</code>.</p>
<p>I cannot tell whether I over-fitted the sample, because the accuracy was never high at any point while training on the 40 million inputs.</p>
<p>Why is the accuracy so low for this simple example?</p>
<p><strong>EDIT:</strong>
I have reduced the learning rate and made the batch size small. However, the results are still the same with very poor accuracy.
I have included the output of the first 25 steps.</p>
<pre><code>--
Training Step: 100000 | total loss: 6.33983 | time: 163.327s
| SGD | epoch: 001 | loss: 6.33983 - acc: 0.0663 -- iter: 999999/999999
--
Training Step: 200000 | total loss: 6.84055 | time: 161.981ss
| SGD | epoch: 002 | loss: 6.84055 - acc: 0.1568 -- iter: 999999/999999
--
Training Step: 300000 | total loss: 5.90203 | time: 158.853ss
| SGD | epoch: 003 | loss: 5.90203 - acc: 0.1426 -- iter: 999999/999999
--
Training Step: 400000 | total loss: 5.97782 | time: 157.607ss
| SGD | epoch: 004 | loss: 5.97782 - acc: 0.1465 -- iter: 999999/999999
--
Training Step: 500000 | total loss: 5.97215 | time: 155.929ss
| SGD | epoch: 005 | loss: 5.97215 - acc: 0.1234 -- iter: 999999/999999
--
Training Step: 600000 | total loss: 6.86967 | time: 157.299ss
| SGD | epoch: 006 | loss: 6.86967 - acc: 0.1230 -- iter: 999999/999999
--
Training Step: 700000 | total loss: 6.10330 | time: 158.137ss
| SGD | epoch: 007 | loss: 6.10330 - acc: 0.1242 -- iter: 999999/999999
--
Training Step: 800000 | total loss: 5.81901 | time: 157.464ss
| SGD | epoch: 008 | loss: 5.81901 - acc: 0.1464 -- iter: 999999/999999
--
Training Step: 900000 | total loss: 7.09744 | time: 157.486ss
| SGD | epoch: 009 | loss: 7.09744 - acc: 0.1359 -- iter: 999999/999999
--
Training Step: 1000000 | total loss: 7.19259 | time: 158.369s
| SGD | epoch: 010 | loss: 7.19259 - acc: 0.1248 -- iter: 999999/999999
--
Training Step: 1100000 | total loss: 5.60177 | time: 157.221ss
| SGD | epoch: 011 | loss: 5.60177 - acc: 0.1378 -- iter: 999999/999999
--
Training Step: 1200000 | total loss: 7.16676 | time: 158.607ss
| SGD | epoch: 012 | loss: 7.16676 - acc: 0.1210 -- iter: 999999/999999
--
Training Step: 1300000 | total loss: 6.19163 | time: 163.711ss
| SGD | epoch: 013 | loss: 6.19163 - acc: 0.1635 -- iter: 999999/999999
--
Training Step: 1400000 | total loss: 7.46101 | time: 162.091ss
| SGD | epoch: 014 | loss: 7.46101 - acc: 0.1216 -- iter: 999999/999999
--
Training Step: 1500000 | total loss: 7.78055 | time: 158.468ss
| SGD | epoch: 015 | loss: 7.78055 - acc: 0.1122 -- iter: 999999/999999
--
Training Step: 1600000 | total loss: 6.03101 | time: 158.251ss
| SGD | epoch: 016 | loss: 6.03101 - acc: 0.1103 -- iter: 999999/999999
--
Training Step: 1700000 | total loss: 5.59769 | time: 158.083ss
| SGD | epoch: 017 | loss: 5.59769 - acc: 0.1182 -- iter: 999999/999999
--
Training Step: 1800000 | total loss: 5.45591 | time: 158.088ss
| SGD | epoch: 018 | loss: 5.45591 - acc: 0.0868 -- iter: 999999/999999
--
Training Step: 1900000 | total loss: 6.54951 | time: 157.755ss
| SGD | epoch: 019 | loss: 6.54951 - acc: 0.1353 -- iter: 999999/999999
--
Training Step: 2000000 | total loss: 6.18566 | time: 157.408ss
| SGD | epoch: 020 | loss: 6.18566 - acc: 0.0551 -- iter: 999999/999999
--
Training Step: 2100000 | total loss: 4.95146 | time: 157.572ss
| SGD | epoch: 021 | loss: 4.95146 - acc: 0.1114 -- iter: 999999/999999
--
Training Step: 2200000 | total loss: 5.97208 | time: 157.279ss
| SGD | epoch: 022 | loss: 5.97208 - acc: 0.1277 -- iter: 999999/999999
--
Training Step: 2300000 | total loss: 6.75645 | time: 157.201ss
| SGD | epoch: 023 | loss: 6.75645 - acc: 0.1507 -- iter: 999999/999999
--
Training Step: 2400000 | total loss: 7.04119 | time: 157.346ss
| SGD | epoch: 024 | loss: 7.04119 - acc: 0.1512 -- iter: 999999/999999
--
Training Step: 2500000 | total loss: 5.95451 | time: 157.722ss
| SGD | epoch: 025 | loss: 5.95451 - acc: 0.1421 -- iter: 999999/999999
</code></pre>
|
<p>As discussed in my comment above, here is code that trains a multi-layer perceptron classifier model using <a href="https://github.com/nicholastoddsmith/pythonml/blob/master/TFANN.py" rel="nofollow noreferrer">a MLP helper class I created</a>. The class is implemented using TensorFlow and follows the scikit-learn fit, predict, score interface.</p>
<p>The basic idea is to generate a random start and end point then to use a dictionary to create labels based on the direction. I used <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><em>np.unique</em></a> to find the number of class labels in the generated data as it can vary (some directions may be missing). I also included an empty string label for when the start and end point are the same.</p>
<h1>Code</h1>
<p>Using the code below I was able to achieve 100% cross-validation accuracy on some runs.</p>
<pre><code>import numpy as np
from sklearn.model_selection import ShuffleSplit
from TFANN import MLPC

#Dictionary to lookup direction
DM = {(-1, -1):'SW', (-1, 0):'W', (-1, 1):'NW', (0, 1):'N',
( 1, 1):'NE', ( 1, 0):'E', ( 1, -1):'SE', (0, -1):'S',
( 0, 0):''}
NR = 4096 #Number of rows in sample matrix
A1 = np.random.randint(40, 61, size = (NR, 2)) #Random starting point
A2 = np.random.randint(40, 61, size = (NR, 2)) #Random ending point
A = np.hstack([A1, A2]) #Concat start and end point as feature vector
#Create label from direction vector
Y = np.array([DM[(x, y)] for x, y in (A2 - A1).clip(-1, 1)])
NC = len(np.unique(Y)) #Number of classes
ss = ShuffleSplit(n_splits = 1)
trn, tst = next(ss.split(A)) #Make a train/test split for cross-validation
#%% Create and train Multi-Layer Perceptron for Classification (MLPC)
l = [4, 6, 6, NC] #Neuron counts in each layer
mlpc = MLPC(l, batchSize = 64, maxIter = 128, verbose = True)
mlpc.fit(A[trn], Y[trn])
s1 = mlpc.score(A[trn], Y[trn]) #Training accuracy
s2 = mlpc.score(A[tst], Y[tst]) #Testing accuracy
s3 = mlpc.score(A, Y) #Total accuracy
print('Trn: {:05f}\tTst: {:05f}\tAll: {:05f}'.format(s1, s2, s3))
</code></pre>
<h1>Results</h1>
<p>This is a sample run of the above code on my machine:</p>
<pre><code>Iter 1 2.59423236 (Batch Size: 64)
Iter 2 2.25392553 (Batch Size: 64)
Iter 3 2.02569708 (Batch Size: 64)
...
Iter 12 1.53575111 (Batch Size: 64)
Iter 13 1.47963311 (Batch Size: 64)
Iter 14 1.42776408 (Batch Size: 64)
...
Iter 83 0.23911642 (Batch Size: 64)
Iter 84 0.22893350 (Batch Size: 64)
Iter 85 0.23644384 (Batch Size: 64)
...
Iter 94 0.21170238 (Batch Size: 64)
Iter 95 0.20718799 (Batch Size: 64)
Iter 96 0.21230888 (Batch Size: 64)
...
Iter 126 0.17334313 (Batch Size: 64)
Iter 127 0.16970796 (Batch Size: 64)
Iter 128 0.15931854 (Batch Size: 64)
Trn: 0.995659 Tst: 1.000000 All: 0.996094
</code></pre>
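<p>For readers without the <code>TFANN</code> helper, the same experiment can be sketched with scikit-learn's <code>MLPClassifier</code>; the hyperparameters below are my own guesses, not the settings used above:</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Direction label from the sign of the displacement (same scheme as above)
DM = {(-1, -1): 'SW', (-1, 0): 'W', (-1, 1): 'NW', (0, 1): 'N',
      (1, 1): 'NE', (1, 0): 'E', (1, -1): 'SE', (0, -1): 'S', (0, 0): ''}

rng = np.random.RandomState(0)
A1 = rng.randint(40, 61, size=(4096, 2))   # random start points
A2 = rng.randint(40, 61, size=(4096, 2))   # random end points
A = np.hstack([A1, A2]).astype(float)
Y = np.array([DM[(dx, dy)] for dx, dy in np.clip(A2 - A1, -1, 1)])

Xtr, Xte, ytr, yte = train_test_split(A, Y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
score = clf.score(Xte, yte)
print(score)
```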
|
machine-learning|tensorflow|classification|tflearn
| 1
|
7,342
| 46,880,050
|
Attempting to find the 5 largest values per month using groupby
|
<p>I am attempting to show the top three values of <code>nc_type</code> for each month. I tried using <code>nlargest</code> but that doesn't do it by date.</p>
<p>Original Data:</p>
<pre><code> area nc_type occurred_date
0 Filling x 12/23/2015 0:00
1 Filling f 12/22/2015 0:00
2 Filling s 9/11/2015 0:00
3 Filling f 2/17/2016 0:00
4 Filling s 5/3/2016 0:00
5 Filling g 8/29/2016 0:00
6 Filling f 9/9/2016 0:00
7 Filling a 6/1/2016 0:00
</code></pre>
<p>Transformed with:</p>
<pre><code>df.groupby([df.occurred_date.dt.month, "nc_type"])["rand"].count()
</code></pre>
<p>Transformed Data:</p>
<pre><code>occurred_date nc_type
1 x 3
y 4
z 13
w 24
f 34
..
12 d 18
g 10
w 44
a 27
g 42
</code></pre>
|
<p><strong>Scenario 1</strong><br>
<em>MultiIndex series</em></p>
<pre><code>occurred_date nc_type
1.0 x 3
y 4
z 13
w 24
f 34
12.0 d 18
g 10
w 44
a 27
g 42
Name: test, dtype: int64
</code></pre>
<p>Call <code>sort_values</code> + <code>groupby</code> + <code>head</code>:</p>
<pre><code>df.sort_values(ascending=False).groupby(level=0).head(2)
occurred_date nc_type
12.0 w 44
g 42
1.0 f 34
w 24
Name: test, dtype: int64
</code></pre>
<p>Change <code>head(2)</code> to <code>head(5)</code> for your situation.</p>
<p>Or, expanding upon my <a href="https://stackoverflow.com/questions/46880050/attempting-to-find-the-5-largest-values-per-month-using-groupby?noredirect=1#comment80707299_46880050">comment</a> with <code>nlargest</code>, you could do:</p>
<pre><code>df.groupby(level=0).nlargest(2).reset_index(level=0, drop=1)
occurred_date nc_type
1.0 f 34
w 24
12.0 w 44
g 42
Name: test, dtype: int64
</code></pre>
<hr>
<p><strong>Scenario 2</strong><br>
<em>3-col dataframe</em></p>
<pre><code> occurred_date nc_type value
0 1.0 x 3
1 1.0 y 4
2 1.0 z 13
3 1.0 w 24
4 1.0 f 34
5 12.0 d 18
6 12.0 g 10
7 12.0 w 44
8 12.0 a 27
9 12.0 g 42
</code></pre>
<p>You can use <code>sort_values</code> + <code>groupby</code> + <code>head</code>:</p>
<pre><code>df.sort_values(['occurred_date', 'value'],
ascending=[True, False]).groupby('occurred_date').head(2)
occurred_date nc_type value
4 1.0 f 34
3 1.0 w 24
7 12.0 w 44
9 12.0 g 42
</code></pre>
<p>Change <code>head(2)</code> to <code>head(5)</code> for your scenario.</p>
<hr>
<p><strong>Scenario 3</strong><br>
<em>MultiIndex Dataframe</em></p>
<pre><code> test
occurred_date nc_type
1.0 x 3
y 4
z 13
w 24
f 34
12.0 d 18
g 10
w 44
a 27
g 42
</code></pre>
<p>Or, with <code>nlargest</code>.</p>
<pre><code>df.groupby(level=0).test.nlargest(2)\
.reset_index(level=0, drop=1)
occurred_date nc_type
1.0 f 34
w 24
12.0 w 44
g 42
Name: test, dtype: int64
</code></pre>
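<p>For reference, here is Scenario 2 as a single self-contained script (column names assumed as in the example above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'occurred_date': [1, 1, 1, 1, 1, 12, 12, 12, 12, 12],
    'nc_type': list('xyzwfdgwag'),
    'value': [3, 4, 13, 24, 34, 18, 10, 44, 27, 42],
})

# Sort so the largest values come first within each month, then keep the top 2
top2 = (df.sort_values(['occurred_date', 'value'], ascending=[True, False])
          .groupby('occurred_date')
          .head(2))
print(top2)
```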
|
python|pandas|group-by
| 1
|
7,343
| 46,698,020
|
Reorganize pandas data frame to display data horizontal
|
<p>I have a data frame that displays which segments a particular organization belongs to. I want to prep the data frame for a left join merge on Org ID with other organization data. </p>
<p>Currently, this df displays info top to bottom with each segment (with org id) in a separate row. Below is a sample of the df and an example of where I want to go with it.</p>
<p><strong>Current df structure</strong></p>
<p><a href="https://i.stack.imgur.com/oNLb3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oNLb3.png" alt="enter image description here"></a></p>
<p><strong>Needed df structure</strong></p>
<p><a href="https://i.stack.imgur.com/7Yapc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Yapc.png" alt="enter image description here"></a></p>
<p>The number of segments is capped at 10 and each has a unique name such as Aerospace and Construction. </p>
<p>I have been digging around for a starting point to get this done for a few hours and no luck. </p>
<p>Could anyone provide a starting point for this?</p>
<p><strong>EDIT: Using pd.crosstab</strong></p>
<p>df.info()</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 13 entries, 0 to 12
Data columns (total 3 columns):
Org ID 13 non-null object
Org Name 13 non-null object
Segment 13 non-null object
dtypes: object(3)
memory usage: 392.0+ bytes
</code></pre>
<p>Code:</p>
<pre><code>file = "sample-data.csv"
path = root + file
name_cols = ['Org ID', 'Org Name', 'Segment']
pull_cols = ['Org ID', 'Org Name', 'Segment']
df = pd.read_csv(path, header=None, encoding="ISO-8859-1", names=name_cols,
usecols=pull_cols, index_col=False)
df = pd.crosstab([df['Org ID'], df['Org Name']], df['Segment']).reset_index()
df.head(10)
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/rg9yF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rg9yF.png" alt="enter image description here"></a></p>
|
<p>You can use <code>pd.crosstab</code>:</p>
<pre><code>df = df.drop_duplicates()
pd.crosstab([df['Org ID'], df['Org Name']], df['Segment']).reset_index()
</code></pre>
<hr>
<p><em>Example</em>:</p>
<pre><code>df = pd.DataFrame({
'A': ['a', 'a', 'b', 'b', 'c'],
'B': [1, 2, 2, 3, 4],
'C': ['seg1', 'seg1', 'seg2', 'seg2', 'seg3']
})
df = df.drop_duplicates()
pd.crosstab([df.A, df.B], df.C).reset_index()
#C A B seg1 seg2 seg3
#0 a 1 1 0 0
#1 a 2 1 0 0
#2 b 2 0 1 0
#3 b 3 0 1 0
#4 c 4 0 0 1
</code></pre>
|
python|python-3.x|pandas
| 2
|
7,344
| 32,869,110
|
Can Pandas Groupby Aggregate into a List of Objects
|
<p>While panda's <code>groupby</code> is able to aggregate data with functions like <code>sum</code> and <code>mean</code>, is there a way to aggregate into a list of objects, where the keys of these objects corresponds to the column names these values were aggregated from?</p>
<p><strong>Question:</strong></p>
<p>If the data looked like this</p>
<pre><code> A B C
1 10 22
1 12 20
1 11 8
1 10 10
2 11 13
2 12 10
3 14 0
</code></pre>
<p>how can we get Pandas to transform it into this hypothetical output:</p>
<pre><code> A D
1 [{'B':10, 'C':22}, {'B':12, 'C':20}, {'B':11, 'C':8}, {'B':10, 'C':10}]
2 [{'B':11, 'C':13}, {'B':12, 'C':10}]
3 [{'B':14, 'C':0}]
</code></pre>
|
<p>I actually wasn't sure this would work, but it seems to.</p>
<pre><code>In [35]: df.groupby('A').apply(lambda x: x.to_dict(orient='records'))
Out[35]:
A
1 [{u'A': 1, u'C': 22, u'B': 10}, {u'A': 1, u'C'...
2 [{u'A': 2, u'C': 13, u'B': 11}, {u'A': 2, u'C'...
3 [{u'A': 3, u'C': 0, u'B': 14}]
dtype: object
</code></pre>
<p>Depending on what you're trying to accomplish it may be more natural to iterate of the groupby object and convert, like this:</p>
<pre><code>In [36]: for a, df_gb in df.groupby('A'):
...: d = df_gb.to_dict(orient='records')
...:
</code></pre>
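<p>To match the hypothetical output exactly — without the group key repeated inside each dict — one could select only the other columns first. A sketch, assuming the same example data:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 3],
                   'B': [10, 12, 11, 10, 11, 12, 14],
                   'C': [22, 20, 8, 10, 13, 10, 0]})

# Aggregate each group's B/C columns into a list of dicts
d = df.groupby('A').apply(lambda g: g[['B', 'C']].to_dict(orient='records'))
print(d.loc[3])
```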
|
python|python-2.7|pandas
| 1
|
7,345
| 38,635,580
|
Matrix of polynomial elements
|
<p>I am using NumPy for operations on matrices, to calculate matrixA * matrixB, the trace of a matrix, etc., and the elements of my matrices are integers. But what I want to know is whether it is possible to work with matrices of polynomials. So, for instance, I could work with matrices such as <code>[x,y;a,b]</code>, not <code>[1,1;1,1]</code>, and when I calculate the trace it would give me the polynomial x + b, not 2. Is there some polynomial class in NumPy that matrices can work with?</p>
|
<p>One option is to use the <a href="http://docs.sympy.org/dev/modules/matrices/matrices.html" rel="nofollow">SymPy Matrices module</a>. SymPy is a symbolic mathematics library for Python which is quite interoperable with NumPy, especially for simple matrix manipulation tasks such as this. </p>
<pre><code>>>> from sympy import symbols, Matrix
>>> from numpy import trace
>>> x, y, a, b = symbols('x y a b')
>>> M = Matrix(([x, y], [a, b]))
>>> M
Matrix([
[x, y],
[a, b]])
>>> trace(M)
b + x
>>> M.dot(M)
[a*y + x**2, a*b + a*x, b*y + x*y, a*y + b**2]
</code></pre>
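<p>For what it's worth, SymPy matrices have their own methods too, so NumPy isn't strictly required here (a small sketch):</p>

```python
from sympy import symbols, Matrix

x, y, a, b = symbols('x y a b')
M = Matrix([[x, y], [a, b]])

print(M.trace())   # b + x
print(M * M)       # symbolic 2x2 matrix product
```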
|
python|numpy|matrix|sympy
| 2
|
7,346
| 38,572,914
|
3D tensor input to embedding layer in keras or tensorflow?
|
<p>I want to build a network which takes in sentences as input to predict the sentiment. So my input looks something like (num of samples x num of sentences x num of words). I then want to feed this in an embedding layer to learn the word vectors which can be then summed to get sentence vector. Is this type of architecture possible in keras? or Tensorflow? From the documentation Keras's embedding layer only takes in input (nb_samples, sequence_length). Is there any work around possible?</p>
|
<p>I guess this class resolves it for Keras:</p>
<pre><code>class AnyShapeEmbedding(Embedding):
'''
This Embedding works with inputs of any number of dimensions.
This can be accomplished by simply changing the output shape computation.
'''
#@overrides
def compute_output_shape(self, input_shape):
return input_shape + (self.output_dim,)
</code></pre>
|
python|tensorflow|keras
| 2
|
7,347
| 38,644,441
|
Splitting line and adding numbers to a numpy array
|
<p>I have several text files in a folder, all with data in the form of numbers, each separated by 3 spaces. There are no line breaks. I want to take the numbers, put them in order in a numpy array, and then reshape it to be a 240 by 240 array. (I have the correct number of data points in each file to do so.) Afterwards, I want it to display my array graphically, and then do the same for the next file. However, my attempts keep giving me errors that say:</p>
<pre><code>"'unicodeescape' codec can't decode bytes in position 10-11: malformed \N character escape."
</code></pre>
<p>My code so far is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = np.array([])
import glob, os
os.chdir("/mydirectory")
for file in glob.glob("*.txt"):
for line in file:
numbers = line.split(' ')
for number in numbers:
a.np.append([number])
b = a.reshape(240,240)
plt.imshow(b)
a = np.array([])
</code></pre>
|
<p>It sounds like a problem with reading one of the files. I'd suggest first doing a</p>
<pre><code> lines = file.readlines()
</code></pre>
<p>and making sure that the lines look right. You may also want to add a <code>strip</code></p>
<pre><code>In [244]: [int(x) for x in '121 342 123\n'.strip().split(' ')]
Out[244]: [121, 342, 123]
</code></pre>
<p>But this looping structure is also bad. It's a misuse of <code>np.append</code></p>
<pre><code>a = np.array([])
....
for number in numbers:
a.np.append([number])
In [245]: a=np.array([])
In [246]: a.np.append(['123'])
...
AttributeError: 'numpy.ndarray' object has no attribute 'np'
In [247]: a.append(['123'])
...
AttributeError: 'numpy.ndarray' object has no attribute 'append'
In [248]: np.append(a,['123'])
Out[248]:
array(['123'],
dtype='<U32')
In [249]: a
Out[249]: array([], dtype=float64)
</code></pre>
<p><code>np.append</code> returns a new array; it does not change <code>a</code> inplace.</p>
<p>You want to collect values in list (or lists of lists), or at the very least pass a list of integers to <code>np.array</code>:</p>
<pre><code>In [250]: np.array([int(x) for x in '121 342 123\n'.strip().split(' ')])
Out[250]: array([121, 342, 123])
</code></pre>
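<p>Putting these pieces together, the parsing itself can be checked without any files — a sketch in which a generated string stands in for a file's contents:</p>

```python
import numpy as np

# Stand-in for open(fname).read(): 240*240 numbers separated by spaces
text = " ".join(str(n) for n in range(240 * 240))

# Split on whitespace, convert to integers, then reshape
b = np.array(text.split(), dtype=int).reshape(240, 240)
print(b.shape)   # (240, 240)
```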
|
python|arrays|python-3.x|numpy
| 2
|
7,348
| 63,292,433
|
Alternative to pandas iterrows?
|
<p>I'm building a trading bot that looks through a df of prices and sells, buys, or passes depending on the price relative to the bounds. Every transaction uses all funds available, so the other constraint is that you must have the stock in the bank to execute a sale and vice versa. Finally I want to add the relevant trade details to a log when trades occur. This is intuitive with iterrows, but is there a more efficient way to do this? Particularly I'm struggling to track the two banks and update them after transactions so future transactions only occur when we have the right denomination (assume df is sorted by date ascending). Here is a snippet of the working but slow iterrows version.</p>
<pre><code>stock_bank = 1000
usd_bank = 0
trade_log = pd.DataFrame()
prices = pd.DataFrame({'buy_price': [9, 11, 12, 13, 12],
'sell_price': [10, 10, 11, 12, 12],
'lower_bound': [10, 11, 11, 10, 9],
'upper_bound': [12, 13, 13, 12, 11]})
for (i, row) in prices.iterrows():
# Sell stock if own stock and sell price above upper bound
if row['sell_price'] > row['upper_bound'] and stock_bank != 0:
usd_bank = stock_bank * (row['sell_price'])
stock_bank = 0
# trade_entry = (relevant trade details)
# trade_log = trades.append(trade_entry)
# Repeat for Buys
# Buy stock if have USD and buy price below lower bound
# etc
</code></pre>
|
<p>I would suggest using the <code>numpy.where()</code> function, which is usually pretty fast, especially if the conditions and manipulations aren't very complex. For your example it would look something like this:</p>
<pre><code>import numpy as np
prices['usd_bank'] = np.where((prices['sell_price'] > prices['upper_bound'])
                              & (stock_bank != 0),
                              stock_bank * prices['sell_price'],
                              0)
</code></pre>
<p>After that it's pretty straightforward to just sum the column for a total and check whether any values are > 0 in order to set <code>stock_bank</code> to 0.</p>
|
python|pandas
| 0
|
7,349
| 62,908,391
|
7 days hourly mean with pandas
|
<p>I need some help calculating a 7-day mean for every hour.</p>
<p>The time series has an hourly resolution and I need the 7-day mean for each hour of the day, e.g. for 13:00:</p>
<pre><code>date, x
2020-07-01 13:00 , 4
2020-07-01 14:00 , 3
.
.
.
2020-07-02 13:00 , 3
2020-07-02 14:00 , 7
.
.
.
</code></pre>
<p>I tried it with pandas and a rolling mean, but a plain rolling window includes all hours of the last 7 days rather than the same hour of each day.
Thanks for any hints!</p>
|
<p>Add a new <code>hour</code> column, group by it, and then compute a rolling mean over 7 days within each hour group. This matches the intent of the question.</p>
<pre><code>df['hour'] = df.index.hour
df = df.groupby(df.hour)['x'].rolling(7).mean().reset_index()
df.head(35)
hour level_1 x
0 0 2020-07-01 00:00:00 NaN
1 0 2020-07-02 00:00:00 NaN
2 0 2020-07-03 00:00:00 NaN
3 0 2020-07-04 00:00:00 NaN
4 0 2020-07-05 00:00:00 NaN
5 0 2020-07-06 00:00:00 NaN
6 0 2020-07-07 00:00:00 48.142857
7 0 2020-07-08 00:00:00 50.285714
8 0 2020-07-09 00:00:00 60.000000
9 0 2020-07-10 00:00:00 63.142857
10 1 2020-07-01 01:00:00 NaN
11 1 2020-07-02 01:00:00 NaN
12 1 2020-07-03 01:00:00 NaN
13 1 2020-07-04 01:00:00 NaN
14 1 2020-07-05 01:00:00 NaN
15 1 2020-07-06 01:00:00 NaN
16 1 2020-07-07 01:00:00 52.571429
17 1 2020-07-08 01:00:00 48.428571
18 1 2020-07-09 01:00:00 38.000000
19 2 2020-07-01 02:00:00 NaN
20 2 2020-07-02 02:00:00 NaN
21 2 2020-07-03 02:00:00 NaN
22 2 2020-07-04 02:00:00 NaN
23 2 2020-07-05 02:00:00 NaN
24 2 2020-07-06 02:00:00 NaN
25 2 2020-07-07 02:00:00 46.571429
26 2 2020-07-08 02:00:00 47.714286
27 2 2020-07-09 02:00:00 42.714286
28 3 2020-07-01 03:00:00 NaN
29 3 2020-07-02 03:00:00 NaN
30 3 2020-07-03 03:00:00 NaN
31 3 2020-07-04 03:00:00 NaN
32 3 2020-07-05 03:00:00 NaN
33 3 2020-07-06 03:00:00 NaN
34 3 2020-07-07 03:00:00 72.571429
</code></pre>
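<p>If you would rather keep the result aligned to the original chronological index instead of the reshaped form above, <code>transform</code> can be used — a sketch assuming an hourly <code>DatetimeIndex</code> and a column <code>x</code>:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2020-07-01', periods=24 * 10, freq='h')
df = pd.DataFrame({'x': np.arange(len(idx), dtype=float)}, index=idx)

# 7-day mean computed separately for each hour of day, aligned to the original index
df['mean7'] = df.groupby(df.index.hour)['x'].transform(lambda s: s.rolling(7).mean())
print(df['mean7'].iloc[144])   # first defined value for hour 0 (the 7th day)
```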
|
pandas|mean|rolling-computation
| 1
|
7,350
| 67,910,484
|
Multiple conditions in for-loop
|
<p>I want to loop through a data frame to check whether one condition is satisfied (before checking the <code>elif</code>, I want the code to go through all the <code>k</code> values and only check the <code>elif</code> if the first condition is never satisfied).
I have the following data frame:</p>
<pre><code>z={'speed':[2.2,12.74,5.1,.91,8.9]}
data=pd.DataFrame(data=z)
</code></pre>
<p>I want to select the row whose speed is less than 5 and whose previous speed is also less than 5. If this condition is not satisfied, I want the code to find the first point (searching backwards) whose speed is less than 5.
This is the code I've written, but it has a syntax error and I'm not sure whether it goes through all <code>k</code> to check the first condition and only then checks the second one in case the first is never satisfied:</p>
<pre><code>for k in reversed(data.index[:-1]):
if (data['speed'][k]<5 and data['speed'][k-1]<5):
print(k)
break
elif data['speed'][k]<5:
print(k)
break
</code></pre>
<p>The outcome of this should be 3, since the first condition is not satisfied.
Thank you for your help.</p>
|
<p>Your <code>break</code> statement indentation is not correct. The code is always breaking the loop at the first <code>break</code>.
The code should be like this:</p>
<pre><code>import pandas as pd
z={'speed':[2.2, 2.74, 5.1, 9.1, 0.5]}
data=pd.DataFrame(data=z)
found = 0
for k in range(len(data['speed']) - 1, 0, -1):
if (data['speed'][k]<5 and data['speed'][k-1]<5):
print(k)
found = 1
break
if found == 0:
for k in range(len(data['speed']) - 1, -1, -1):
if data['speed'][k]<5:
print(k)
break
</code></pre>
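<p>For completeness, the same search can be written without an explicit loop using boolean masks — a sketch on the question's original data:</p>

```python
import pandas as pd

data = pd.DataFrame({'speed': [2.2, 12.74, 5.1, 0.91, 8.9]})
lt5 = data['speed'] < 5

# Points where both this speed and the previous one are below 5
both = lt5 & lt5.shift(fill_value=False)
hits = data.index[both]
if len(hits):
    print(hits[-1])                 # last point satisfying the first condition
else:
    print(data.index[lt5][-1])      # fall back: last point with speed < 5
```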
|
python|pandas|dataframe|for-loop|if-statement
| 1
|
7,351
| 67,958,246
|
How do you flatten the last two dimensions of a NumPy array?
|
<p>For example, given a numpy array of dimensions (132, 82, 100), the resultant dimensions would be (132, 8200)</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer">reshape()</a> function to change shape of the array. Using -1 as a parameter to reshape tells numpy to infer the dimension there.</p>
<pre><code>arr = np.zeros((132, 82, 100))
arr = arr.reshape(*arr.shape[:-2], -1)
print(arr.shape)
# Out: (132, 8200)
</code></pre>
|
python|numpy
| 0
|
7,352
| 32,000,987
|
Converting Indices of Series to Columns
|
<p>I need to convert the Indices of a Series <code>amounts</code> into its Columns. For example, I need to convert:</p>
<pre><code> 1983-05-15 1
1983-11-15 1
1984-05-15 1
1984-11-15 101
</code></pre>
<p>into:</p>
<pre><code> 1983-05-15 1983-11-15 1984-05-15 1984-11-15
1 1 1 101
</code></pre>
<p>I wasn't able to find any documentation on doing this for <code>Series</code> type Objects specifically and don't know how to do this.</p>
<p>Thank You</p>
|
<p>Build a <code>DataFrame</code> out of your <code>Series</code>, then the <code>.T</code> property returns a transposed version.</p>
<pre><code>In [87]: pd.DataFrame(s).T
Out[87]:
1983-05-15 1983-11-15 1984-05-15 1984-11-15
0 1 1 1 101
</code></pre>
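<p>Equivalently, <code>Series.to_frame()</code> avoids the explicit <code>DataFrame</code> constructor — a small self-contained sketch with the question's data:</p>

```python
import pandas as pd

s = pd.Series([1, 1, 1, 101],
              index=['1983-05-15', '1983-11-15', '1984-05-15', '1984-11-15'])

wide = s.to_frame().T   # indices become columns, values become a single row
print(wide)
```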
|
python|pandas
| 3
|
7,353
| 32,073,927
|
Test which Numpy function argument has more than one element
|
<p>Consider the following function:</p>
<pre><code>def foo(a, b, c):
""" Toy function
"""
return a, b, c
</code></pre>
<p>Each of these arguments will be of type <code>numpy.array</code>. I need to efficiently determine which of these arguments has more than one element for use further in the function. I'd like to avoid testing each argument with an <code>if</code> statement as the list can be large and performance is important. Assume that only one argument will have more than one element.</p>
<p>How can I determine which of the input arguments has more than one element?</p>
|
<p>You can use <code>locals()</code> to get a <code>dict</code> of all the arguments, then use <code>size</code> and <code>argmax</code> to find which is largest, like so:</p>
<pre><code>import numpy as np
a = np.array([1,])
b = np.array([1,])
c = np.array([1, 2, 3])
def foo(a, b, c):
    args = list(locals().items())
    sizes = np.array([v.size for _, v in args])
    return args[sizes.argmax()][1]
biggest = foo(a, b, c)
print(biggest)
# [1 2 3]
</code></pre>
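<p>An alternative that avoids <code>locals()</code> and scales to any number of arguments is <code>*args</code> — a sketch, with <code>biggest_arg</code> being a name of my own choosing:</p>

```python
import numpy as np

def biggest_arg(*arrays):
    """Return whichever argument has the most elements."""
    sizes = [arr.size for arr in arrays]
    return arrays[int(np.argmax(sizes))]

a = np.array([1])
b = np.array([1])
c = np.array([1, 2, 3])
print(biggest_arg(a, b, c))   # [1 2 3]
```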
|
python|numpy|arguments
| 0
|
7,354
| 41,321,082
|
Pandas - split large excel file
|
<p>I have an excel file with about 500,000 rows and I want to split it to several excel file, each with 50,000 rows.</p>
<p>I want to do it with pandas so it will be the quickest and easiest.</p>
<p>any ideas how to make it?</p>
<p>thank you for your help</p>
|
<p><code>pd.read_excel</code> has no <code>chunksize</code> parameter (unlike <code>pd.read_csv</code>), so read the file once and split the resulting DataFrame. Assuming that your Excel file has only one (first) sheet containing data:</p>
<pre><code>import numpy as np
import pandas as pd

chunksize = 50000
df = pd.read_excel(file_name)
n_chunks = -(-len(df) // chunksize)  # ceiling division
for i, chunk in enumerate(np.array_split(df, n_chunks)):
    chunk.to_excel('/path/to/file_{:02d}.xlsx'.format(i), index=False)
</code></pre>
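<p>The splitting logic itself can be verified on a synthetic frame, no Excel file required (a sketch):</p>

```python
import numpy as np
import pandas as pd

chunksize = 50000
df = pd.DataFrame({'x': range(500000)})   # stand-in for pd.read_excel(file_name)

n_chunks = -(-len(df) // chunksize)       # ceiling division
chunks = np.array_split(df, n_chunks)
print(len(chunks), len(chunks[0]))        # 10 50000
```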
|
python|excel|pandas
| 9
|
7,355
| 41,267,573
|
Changing values of a function within a DataFrame
|
<p>I'm trying to create a function that removes the ' #1' from a column within a dataframe:</p>
<pre><code>def formatSignalColumn(df):
for i,signal in enumerate(df['Signal list']):
df = df.set_value(i, 'Signal list', signal.replace(" #1", ""))
df = df.set_value(i, 'Signal list', signal.replace(" #2", ""))
return df
</code></pre>
<p>However, when I pass my DataFrame through this, it does not change anything. </p>
<pre><code>tlog = formatSignalColumn(tlog)
</code></pre>
<p>Interestingly, when I run the for loop outside the function, it doesn't work either, but when I specifically choose the <code>i</code> and <code>signal</code> values it works... </p>
<pre><code>i = 0
signal = tlog['Signal list'][i]
tlog= tlog.set_value(i, 'Signal list', signal.replace(" #1", ""))
tlog= tlog.set_value(i, 'Signal list', signal.replace(" #2", ""))
</code></pre>
<p>This doesn't make any sense to me. Anyone have any ideas?</p>
|
<p>You can just use vectorised <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>str.replace</code></a> and pass a regex pattern to do this in a single line:</p>
<pre><code>In [231]:
df = pd.DataFrame({'something':[' #1blah', ' #2blah', '#3blah']})
df
Out[231]:
something
0 #1blah
1 #2blah
2 #3blah
In [232]:
df['something'] = df['something'].str.replace(' #1| #2','')
df
Out[232]:
something
0 blah
1 blah
2 #3blah
</code></pre>
<p>What you discovered is that you were operating on a copy of the passed-in df; additionally, modifying the object you are iterating over is not a good idea.</p>
<p>On top of this, prefer a vectorised method over an explicit loop whenever one is available.</p>
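<p>One caveat for newer pandas: the default for <code>str.replace</code> became <code>regex=False</code> in pandas 2.0, so the alternation pattern must be passed with <code>regex=True</code> explicitly. A single regex also generalizes to any numbered tag, not just <code>#1</code> and <code>#2</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'something': [' #1blah', ' #2blah', '#3blah']})
# regex=True is required on pandas 2.0+; ' #\d+' strips any " #<number>" prefix
df['something'] = df['something'].str.replace(r' #\d+', '', regex=True)
# df['something'] is now ['blah', 'blah', '#3blah']
```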
|
python|pandas|dataframe
| 2
|
7,356
| 27,483,731
|
Reorder pandas DataFrame with specific rules
|
<pre><code>XList=[2,3,4,5,5,6]
YList=['A','A','A','B','A','A']
df = pd.DataFrame({'X':XList,
'Y':YList})
df
X Y
0 10 A
1 3 A
2 4 A
3 5 B
4 5 A
5 6 A
</code></pre>
<p>How can I reorder only lines 3 and 4 (case: same X value) so that Y is in ascending order (A, B), like this:</p>
<p>Every time the X values are equal, the Y values should be reordered.</p>
<pre><code> X Y
0 10 A
1 3 A
2 4 A
3 5 A
4 5 B
5 6 A
</code></pre>
|
<p>If you want to sort only those values of <code>YList</code> where <code>XList</code> values are equal, here is the code:</p>
<pre><code>>>> XList=[2,3,4,5,5,6]
>>> YList=['A','A','A','B','A','A']
>>> idx = []
>>> for i in range(len(XList)-1):
... if XList[i]==XList[i+1]: idx.append(i)
... else:
... if len(idx)>=1:
... idx.append(i)
... YList[idx[0]:idx[-1]+1] = sorted(YList[idx[0]:idx[-1]+1])
... idx=[]
...
>>> YList
['A', 'A', 'A', 'A', 'B', 'A']
>>> df = pd.DataFrame({'X':XList,
... 'Y':YList})
>>> df
X Y
0 2 A
1 3 A
2 4 A
3 5 A
4 5 B
5 6 A
</code></pre>
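<p>A vectorised pandas alternative (a sketch; note it sorts Y within <em>every</em> group of equal X values, whether or not they are adjacent, which matches the question's data but is a stronger assumption than the loop above makes):</p>

```python
import pandas as pd

df = pd.DataFrame({'X': [2, 3, 4, 5, 5, 6],
                   'Y': ['A', 'A', 'A', 'B', 'A', 'A']})
# Sort Y inside each group of equal X values; the X column stays in place
df['Y'] = df.groupby('X')['Y'].transform(lambda s: s.sort_values().values)
# df['Y'] is now ['A', 'A', 'A', 'A', 'B', 'A']
```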
|
python|pandas|group-by
| 1
|
7,357
| 61,588,717
|
Fastest way to average sign-normalized segments of data with NumPy?
|
<p>What would be the fastest way to collect segments of data from a NumPy array at every point in a dataset, normalize them based on the sign (+ve/-ve) at the start of the segment, and average all segments together?</p>
<p>At present I have:</p>
<pre><code>import numpy as np
x0 = np.random.normal(0,1,5000) # Dataset to be analysed
l0 = 100 # Length of segment to be averaged
def average_seg(x,l):
return np.mean([x[i:i+l]*np.sign(x[i]) for i in range(len(x)-l)],axis=0)
av_seg = average_seg(x0,l0)
</code></pre>
<p>Timing for this is as follows:</p>
<pre><code>%timeit average_seg(x0,l0)
22.2 ms ± 362 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>This does the job, but is there a faster way to do this? </p>
<p>The above code suffers when the length of x0 is large, and when the value of l0 is large. We're looking at looping through this code several million times, so even incremental improvements will help!</p>
|
<p>We can leverage <code>1D convolution</code> -</p>
<pre><code>np.convolve(x,np.sign(x[:-l+1][::-1]),'valid')/(len(x)-l+1)
</code></pre>
<p>The idea is to do the windowed summations with convolution and with a flipped kernel as per the <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow noreferrer"><code>convolution definition</code></a>.</p>
<p>Timings -</p>
<pre><code>In [150]: x = np.random.normal(0,1,5000) # Dataset to be analysed
...: l = 100 # Length of segment to be averaged
In [151]: %timeit average_seg(x,l)
17.2 ms ± 689 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [152]: %timeit np.convolve(x,np.sign(x[:-l+1][::-1]),'valid')/(len(x)-l+1)
149 µs ± 3.12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [153]: av_seg = average_seg(x,l)
...: out = np.convolve(x,np.sign(x[:-l+1][::-1]),'valid')/(len(x)-l+1)
...: print(np.allclose(out, av_seg))
True
</code></pre>
<p><strong><code>100x+</code></strong> speedup!</p>
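<p>Another option, for NumPy &gt;= 1.20, is to build all the windows as a strided view and apply the signs with broadcasting; this keeps the summation explicit rather than hiding it inside a convolution (like the convolution, it averages over <code>len(x)-l+1</code> windows):</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def average_seg_windows(x, l):
    # All length-l windows as a (len(x)-l+1, l) view -- no data is copied
    w = sliding_window_view(x, l)
    # Multiply each window by the sign of its first element, then average
    return (w * np.sign(w[:, 0])[:, None]).mean(axis=0)
```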
|
python|performance|numpy|average
| 2
|
7,358
| 68,639,336
|
when padding I receive this error Dimension -343776 must be >= 0 [Op:Fill]
|
<p>When trying to pad the audio data by this code, I receive the error in the title</p>
<pre><code>zero_padding = tf.zeros([48000] - tf.shape(waveform), dtype=tf.float32)
</code></pre>
|
<p>Welcome! Have you checked the value of <code>tf.shape(waveform)</code>? It is probably 48000 + 343776 = 391776, so <code>[48000] - tf.shape(waveform)</code> comes out negative.</p>
<p>Here is the code to reproduce the error:</p>
<pre class="lang-py prettyprint-override"><code>tf.zeros([48000] - tf.shape(tf.zeros(391776)), dtype=tf.float32)
</code></pre>
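<p>A guarded version that trims or zero-pads to the target length, so the <code>tf.zeros</code> shape can never go negative (a sketch; the 48000-sample target comes from the question):</p>

```python
import tensorflow as tf

def pad_or_trim(waveform, target_len=48000):
    waveform = waveform[:target_len]          # trim if the clip is too long
    pad = [target_len] - tf.shape(waveform)   # now guaranteed >= 0
    return tf.concat([waveform, tf.zeros(pad, dtype=waveform.dtype)], axis=0)
```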
|
python|tensorflow
| 0
|
7,359
| 68,544,119
|
'For' loop: creating a new column which takes into account new data from several csv files
|
<p>I would like to automate a process which assigns labels to several files.
Accidentally, someone created many files (csv) that look as follows:</p>
<p>filename 1: <code>test_1.csv</code></p>
<pre><code>Node Target Char1 Var2 Start
1 2 23.1 No 1
1 3 12.4 No 1
1 4 52.1 Yes 1
1 12 14.5 No 1
</code></pre>
<p>filename 2: <code>test_2.csv</code></p>
<pre><code>Node Target Char1 Var2 Start
1 2 23.1 No 1
1 3 12.4 No 1
1 4 52.1 Yes 1
1 12 14.5 No 1
2 1 23.1 No 0
2 41 12.4 Yes 0
3 15 8.2 No 0
3 12 63.1 No 0
</code></pre>
<p>filename 3: <code>test_3.csv</code></p>
<pre><code>Node Target Char1 Var2 Start
1 2 23.1 No 1
1 3 12.4 No 1
1 4 52.1 Yes 1
1 12 14.5 No 1
2 1 23.1 No 0
2 41 12.4 Yes 0
3 15 8.2 No 0
3 12 63.1 No 0
41 2 12.4 Yes 0
15 3 8.2 No 0
15 8 12.2 No 0
12 3 63.1 No 0
</code></pre>
<p>From what I can see, the csv files are created including data from previous runs.
I would like to add a column which takes into account the dataset where it comes from, without duplicates, i.e., just considering what was added in the next dataset. This would mean, for instance, to have a unique file csv including all data:</p>
<p>filename ALL: <code>test_all.csv</code></p>
<pre><code>Node Target Char1 Var2 Start File
1 2 23.1 No 1 1
1 3 12.4 No 1 1
1 4 52.1 Yes 1 1
1 12 14.5 No 1 1
2 1 23.1 No 0 2
2 41 12.4 Yes 0 2
3 15 8.2 No 0 2
3 12 63.1 No 0 2
41 2 12.4 Yes 0 3
15 3 8.2 No 0 3
15 8 12.2 No 0 3
12 3 63.1 No 0 3
</code></pre>
<p>I was thinking of calculating the difference between the datasets (in terms of rows) and adding a new column based on that. However, I am doing this one by one, and this will be not doable since I have, for example:</p>
<pre><code>test_1.csv, test_2.csv, test_3.csv, ... , test_7.csv
filex_1.csv, filex_2.csv, ..., filex_7.csv
name_1.csv, name_2.csv, ..., name_7.csv
</code></pre>
<p>and so on.</p>
<p>The suffix <code>_x</code> goes from <code>1</code> to <code>7</code>: the only change would be in the filename (e.g., <code>filex, test, name,</code> and many many others).</p>
<p>Can you give me, please, some tips on how to run this in an easier and faster way, for example with a for loop which takes into account the suffix and creates a new column based on new information from each individual file?
I will be happy to provide more information and details, if you need.</p>
|
<p>You can achieve that with <code>pd.concat</code> and the <code>keys</code>-argument (<a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#concatenating-objects" rel="nofollow noreferrer">docs</a>).</p>
<pre class="lang-py prettyprint-override"><code>frames = [df1, df2, ...] # your dataframes
file_names = ['file1', 'file2', ...] # the file names
df = pd.concat(frames, keys=file_names)
</code></pre>
<h4 id="output-mno4">Output</h4>
<pre class="lang-py prettyprint-override"><code> Node Target Char1 Var2 Start
file1 0 1 2 23.1 No 1
1 1 3 12.4 No 1
2 1 4 52.1 Yes 1
3 1 12 14.5 No 1
file2 0 1 2 23.1 No 1
1 1 3 12.4 No 1
2 1 4 52.1 Yes 1
3 1 12 14.5 No 1
4 2 1 23.1 No 0
5 2 41 12.4 Yes 0
6 3 15 8.2 No 0
7 3 12 63.1 No 0
file3 0 1 2 23.1 No 1
1 1 3 12.4 No 1
2 1 4 52.1 Yes 1
3 1 12 14.5 No 1
4 2 1 23.1 No 0
5 2 41 12.4 Yes 0
6 3 15 8.2 No 0
7 3 12 63.1 No 0
8 41 2 12.4 Yes 0
9 15 3 8.2 No 0
10 15 8 12.2 No 0
11 12 3 63.1 No 0
</code></pre>
<p>To keep duplicates within files, we can temporarily move the level-1 index into a column, so that <code>drop_duplicates</code> only matches duplicates across files, not within a file.</p>
<pre class="lang-py prettyprint-override"><code>df = df.reset_index(level=1).drop_duplicates()
# get rid of the extra column
df = df.drop('level_1', axis=1)
# Set the file name index as new column
df = df.reset_index().rename(columns={'index':'File'})
</code></pre>
<h4 id="output-1-ukpr">Output</h4>
<pre class="lang-py prettyprint-override"><code> File Node Target Char1 Var2 Start
0 file1 1 2 23.1 No 1
1 file1 1 3 12.4 No 1
2 file1 1 4 52.1 Yes 1
3 file1 1 12 14.5 No 1
4 file2 2 1 23.1 No 0
5 file2 2 41 12.4 Yes 0
6 file2 3 15 8.2 No 0
7 file2 3 12 63.1 No 0
8 file3 41 2 12.4 Yes 0
9 file3 15 3 8.2 No 0
10 file3 15 8 12.2 No 0
11 file3 12 3 63.1 No 0
</code></pre>
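<p>To build <code>frames</code> and <code>file_names</code> directly from disk, one might use a small helper like the following (a sketch; the <code>'test_*.csv'</code> glob pattern and the lexicographic sort are assumptions about the actual file names):</p>

```python
import glob
import os
import pandas as pd

def load_numbered(pattern):
    """Concatenate every CSV matching `pattern`, keyed by file name."""
    paths = sorted(glob.glob(pattern))
    frames = [pd.read_csv(p) for p in paths]
    return pd.concat(frames, keys=[os.path.basename(p) for p in paths])
```

The same de-duplication steps from above can then be applied to the result.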
|
python|pandas|for-loop
| 1
|
7,360
| 53,200,748
|
Calculate the rotation angle of a vector python
|
<p>I am trying to find the rotation <code>angle</code> of a 2D <code>vector</code>. I have found a few questions that use 3D <code>vectors</code>. The following <code>df</code> represents a single <code>vector</code> with the first <code>row</code> as the origin.</p>
<pre><code>d = ({
'X' : [10,12.5,17,20,16,14,13,8,7],
'Y' : [10,12,13,8,6,7,8,8,9],
})
df = pd.DataFrame(data = d)
</code></pre>
<p>I can rotate a vector using the following equation:</p>
<pre><code>angle = x
theta = (x/180) * numpy.pi
rotMatrix = numpy.array([[numpy.cos(theta), -numpy.sin(theta)],
[numpy.sin(theta), numpy.cos(theta)]])
</code></pre>
<p>But I'm not sure how I would find the <code>angle</code> at each point of time using the coordinates listed above. Apologies for using a <code>df</code>. It replicates my actual <code>dataset</code> </p>
|
<p>First you should move the origin to <code>(0, 0)</code>, then you can use <code>np.arctan2()</code> which calculates the angle and defines the quadrant correctly. The result is already in radians (theta) so you don't need it in degrees (alpha).</p>
<pre><code>import numpy as np
import pandas as pd

d = {'X' : [10,12.5,17,20,16,14,13,8,7],
     'Y' : [10.,12,13,8,6,7,8,8,9]}
df = pd.DataFrame(data = d)
# move the origin
x = df["X"] - df["X"][0]
y = df["Y"] - df["Y"][0]
df["theta"] = np.arctan2(y, x)
df["alpha"] = np.degrees(df["theta"])
df
</code></pre>
|
python|pandas|matrix|rotation
| 1
|
7,361
| 53,335,939
|
Count times a value of a column appears and add a column to the dataframe with it
|
<p>I have a dataframe with 4 columns, one of them being people's names and another the activity they practiced. I want a column showing, in front of each row, the number of times that name/activity combination appears. All the ways I found of counting either change the dataframe or reduce its size so that each combination appears only once. I would like the dataframe to stay the same, just with one more column giving the number of times the combination occurs. Does anyone know a way?</p>
<p><a href="https://i.stack.imgur.com/aUWxo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aUWxo.png" alt="enter image description here"></a></p>
|
<h3><code>groupby</code> + <code>size</code></h3>
<p>Assuming your grouper columns are <code>0</code> and <code>2</code>:</p>
<pre><code>df['combination_count'] = df.groupby([0, 2])[1].transform('size')
</code></pre>
<p>To move the new column to the front:</p>
<pre><code>cols = df.columns.tolist()
cols.insert(0, cols.pop(cols.index('combination_count')))
df = df.reindex(columns=cols)
</code></pre>
|
python|pandas|count|pandas-groupby
| 1
|
7,362
| 65,632,248
|
Same sentences produces a different vector in XLNet
|
<p>I have computed the vectors for two identical sentences using <a href="https://github.com/amansrivastava17/embedding-as-service" rel="nofollow noreferrer">XLNet embedding-as-service</a>. But the model produces different vector embeddings for the two identical sentences, hence the cosine similarity is not 1 and the Euclidean distance is not 0. In the case of BERT it works fine.
For example, if</p>
<pre><code>vec1 = en.encode(texts=['he is anger'],pooling='reduce_mean')
vec2 = en.encode(texts=['he is anger'],pooling='reduce_mean')
</code></pre>
<p>the model (XLNet) is saying that these two sentences are dissimilar.</p>
|
<p>This is because of the dropout layers in the model. During inference the dropout layers should be turned off, but there is a bug in the library that is, apparently, still not fixed.</p>
<p>See the discussion here: <a href="https://github.com/amansrivastava17/embedding-as-service/issues/45" rel="nofollow noreferrer">https://github.com/amansrivastava17/embedding-as-service/issues/45</a></p>
<p>In the meantime, as suggested by @Davide Fiocco, you can use the straightforward approaches from HuggingFace: either <code>forward</code>, <code>generate</code> or <code>pipeline</code>.</p>
|
python|nlp|huggingface-transformers|bert-language-model|sentence-transformers
| 1
|
7,363
| 65,583,992
|
Data augmentation on GPU
|
<p>As tf.data augmentations are executed only on CPUs, I need a way to run certain augmentations on the TPU for an audio project.<br />
For example,</p>
<blockquote>
<p>CPU: tf.recs read -> audio crop -> noise addition.<br />
TPU: spectogram -> Mixup Augmentation.</p>
</blockquote>
<p>Most augmentations can be done as a Keras Layer on top of the model, but MixUp requires both changes in input as well as label.</p>
<p>Is there a way to do it using tf keras APIs.</p>
<p>And if there is any way we can transfer part of tf.data to run on TPU that will also be helpful.</p>
|
<p>See the Tensorflow guide that discusses <a href="https://www.tensorflow.org/guide/keras/preprocessing_layers#preprocessing_data_before_the_model_or_inside_the_model" rel="nofollow noreferrer">preprocessing data before the model or inside the model</a>. By including preprocessing inside the model, the GPU is leveraged instead of the CPU, it makes the model portable, and it helps reduce the training/serving skew. The guide also has multiple recipes to get you started too. It doesn't explicitly state this works for a TPU but it can be tried.</p>
|
tensorflow|keras|tensorflow2.0|keras-layer
| 2
|
7,364
| 21,029,128
|
updating pandas dataframe via for loops
|
<p>I have a bunch of URLs stored into a data frame and I am cleaning them up via a url parsing module. The issue that I am having is that the 'siteClean' field that is supposed to update with the cleaned url is updating the entire column and not the individual cell...</p>
<p>Here is the code:</p>
<pre><code>results = resultsX.copy(deep = True)
results = results.reset_index(drop = True)
results['siteClean'] = ''
from urlparse import urlsplit
import re
for row in results.iterrows():
#print row[1]
url = row[1][1]
if not re.match(r'http(s?)\:', url):
url = 'http://' + url
parsed = urlsplit(url)
host = parsed.netloc
#print host
#row[1][1] = host
#results[row][1] = host
results['siteClean'] = host
print results
</code></pre>
|
<p>In general, it's better not to loop over your frame's rows if you can avoid it. If I understand your problem correctly, you want to look at a single column of your frame and apply a function to each element of that column, then put the result of all those function calls into a column of the original frame — maybe a new column, maybe in place of the old column. This sounds like a job for <code>pd.Series.map</code>.</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
n=10
df = pd.DataFrame({'num': np.random.randn(n),
'lett': np.random.choice(
list('abcdefghijklmnopqrstuvwxyz'),n)
})
</code></pre>
<p><code>df</code> looks like this:</p>
<p><img src="https://i.stack.imgur.com/2UsY7.png" alt="df original"></p>
<p>Set up a function to classify a single letter as either a consonant or a vowel:</p>
<pre><code>def classify_letter(char):
if char in list('aeiou'):
return 'vowel'
else:
return 'consonant'
</code></pre>
<p>Then you can use <code>map</code> to generate a new <code>Series</code> whose entries are those of the input transformed by the specified function. You can stick that new output series wherever you like. It can be a new column (in your old <code>DataFrame</code> or elsewhere) or it can replace the old column. Note that <code>map</code> only works on a <code>Series</code>, so be sure to select down to one column before using it:</p>
<pre><code>df['new'] = df['lett'].map(classify_letter)
</code></pre>
<p>gives:</p>
<p><img src="https://i.stack.imgur.com/DEeyN.png" alt="df with col added"></p>
<p>while if you started from the original setup and ran:</p>
<pre><code>df['lett'] = df['lett'].map(classify_letter)
</code></pre>
<p>then you would replace the old column with the new one:</p>
<p><img src="https://i.stack.imgur.com/mmNHv.png" alt="df with col replaced"></p>
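<p>Applied to the original URL problem, the same <code>map</code> pattern might look like this (a sketch, using Python 3's <code>urllib.parse</code> in place of the Python 2 <code>urlparse</code> module; the raw-URL column name <code>'site'</code> is an assumption):</p>

```python
import re
from urllib.parse import urlsplit

def clean_url(url):
    # Prepend a scheme if it is missing, so urlsplit can find the host
    if not re.match(r'https?:', url):
        url = 'http://' + url
    return urlsplit(url).netloc

# results['siteClean'] = results['site'].map(clean_url)  # 'site' is assumed
```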
|
python|for-loop|pandas|dataframe
| 2
|
7,365
| 63,704,145
|
How to remove a vector which is specific value from tensor in tensorflow?
|
<p>I want to implement the following operation.
Given a tensor,</p>
<pre><code>m = ([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
</code></pre>
<p>How can I remove the vector with value [2, 2, 2] from m?</p>
|
<p>You can do that like this:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
def remove_row(m, q):
# Assumes m is 2D
mask = tf.math.reduce_any(tf.not_equal(m, q), axis=-1)
return tf.boolean_mask(m, mask)
# Test
m = tf.constant([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
q = tf.constant([2, 2, 2])
tf.print(remove_row(m, q))
# [[1 1 1]
# [3 3 3]]
</code></pre>
|
python|tensorflow|tensorflow2.0
| 1
|
7,366
| 63,572,811
|
deleting tuple elements based on condition
|
<p>Data frame with DST values:</p>
<pre><code>data0 = pd.DataFrame({'DST':[33,11,-52,7,80,34,41,68,-87],'Date':['1975-01-03','1975-01-04','1975-01-07','1975-01-08','1975-01-13','1975-01-14','1975-01-15','1975-02-01','1975-02-03']})
data0
DST Date
0 33 1975-01-03
1 11 1975-01-04
2 -52 1975-01-07
3 7 1975-01-08
4 80 1975-01-13
5 34 1975-01-14
6 41 1975-01-15
7 68 1975-02-01
8 -87 1975-02-03
</code></pre>
<p>Tuples I have:</p>
<pre><code>combined_date = [('1975-01-03', '1975-01-06'),('1975-01-13', '1975-01-15'),
('1975-01-31', '1975-02-02'),('1975-02-03', '1975-02-13')]
</code></pre>
<p><strong>Problem:</strong>
I have to remove a tuple element if DST falls below -50 between the dates of that tuple.
I tried the code:</p>
<pre><code>for i in len(data0):
if data0['DST'][i]<-50:
del (j for j in combined_date if data0['DATE'][i]>=j[0] and data0['DATE'][i]<=j[1])
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code>('1975-01-03', '1975-01-06'),('1975-01-13', '1975-01-15'),
('1975-01-31', '1975-02-02')
</code></pre>
<p>Error occurring: can't delete generator expression.</p>
<p><strong>NOTE</strong></p>
<p>If a DST value below -50 is found, that tuple must be deleted!</p>
|
<p>First filter rows by condition in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>data0['Date'] = pd.to_datetime(data0['Date'])
df = data0[data0['DST']<-50]
print (df)
    DST       Date
2   -52 1975-01-07
8   -87 1975-02-03
</code></pre>
<p>And then remove values of tuples in list comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.between.html" rel="nofollow noreferrer"><code>Series.between</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.any.html" rel="nofollow noreferrer"><code>Series.any</code></a>:</p>
<pre><code>out = [j for j in combined_date if not df['Date'].between(j[0], j[1]).any()]
print (out)
[('1975-01-03', '1975-01-06'), ('1975-01-13', '1975-01-15'), ('1975-01-31', '1975-02-02')]
</code></pre>
|
python|pandas|numpy|date|math
| 1
|
7,367
| 63,615,305
|
I'm having a problem trying to load a Pytoch model: "Can't find Identity in module"
|
<p>When trying to load a pytorch model it gives the following attribute error</p>
<pre><code>model = torch.load('../input/melanoma-model/melanoma_model_0.pth')
model = model.to(device)
model.eval()
</code></pre>
<blockquote>
<p>AttributeError Traceback (most recent call
last) in
1 arch = EfficientNet.from_pretrained('efficientnet-b2')
2 model = Net(arch=arch)
----> 3 torch.load('../input/melanoma-model/melanoma_model_0.pth')
4 model = model.to(device)
5 model.eval()</p>
<p>/opt/conda/lib/python3.7/site-packages/torch/serialization.py in
load(f, map_location, pickle_module, **pickle_load_args)
591 return torch.jit.load(f)
592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
594
595</p>
<p>/opt/conda/lib/python3.7/site-packages/torch/serialization.py in
_legacy_load(f, map_location, pickle_module, **pickle_load_args)
771 unpickler = pickle_module.Unpickler(f, **pickle_load_args)
772 unpickler.persistent_load = persistent_load
--> 773 result = unpickler.load()
774
775 deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)</p>
<p>AttributeError: Can't get attribute 'Identity' on <module
'efficientnet_pytorch.utils' from
'/opt/conda/lib/python3.7/site-packages/efficientnet_pytorch/utils.py'></p>
</blockquote>
|
<p>First you need a model class to load the parameters from the .pth into. And you are missing one step:</p>
<pre class="lang-py prettyprint-override"><code>model = Model() # the model class (yours has probably another name)
model.load_state_dict(torch.load('../input/melanoma-model/melanoma_model_0.pth'))
model = model.to(device)
model.eval()
</code></pre>
<p>There you go, I hope that solved your problem!</p>
|
python|model|pytorch
| 2
|
7,368
| 21,512,042
|
Fast selection of a time interval in a pandas DataFrame/Series
|
<p>My problem is that I want to filter a DataFrame to only include times within the interval <em>[start, end)</em>. If I do not care about the day, I would like to filter only by start and end time for each day. I have a solution for this but it is slow, so my question is whether there is a faster way to do the time-based filtering.</p>
<p>Example</p>
<pre><code>import pandas as pd
import time
index=pd.date_range(start='2012-11-05 01:00:00', end='2012-11-05 23:00:00', freq='1S').tz_localize('UTC')
df=pd.DataFrame(range(len(index)), index=index, columns=['Number'])
# select from 1 to 2 am, include day
now=time.time()
df2=df.ix['2012-11-05 01:00:00':'2012-11-05 02:00:00']
print 'Took %s seconds' %(time.time()-now) #0.0368609428406
# select from 1 to 2 am, for every day
now=time.time()
selector=(df.index.hour>=1) & (df.index.hour<2)
df3=df[selector]
print 'Took %s seconds' %(time.time()-now) #Took 0.0699911117554
</code></pre>
<p>As you can see, if I remove the day (second case) it takes almost twice as long. The computation time increases rapidly with the number of different days, e.g. from 5 to 7 Nov:</p>
<pre><code>index=pd.date_range(start='2012-11-05 01:00:00', end='2012-11-07 23:00:00', freq='1S').tz_localize('UTC')
</code></pre>
<p>So, to summarize is there a faster method to filter by time of the day, across many days?</p>
<p>Thx</p>
|
<p>You need <code>between_time</code> method.</p>
<pre><code>In [14]: %timeit df.between_time(start_time='01:00', end_time='02:00')
100 loops, best of 3: 10.2 ms per loop
In [15]: %timeit selector=(df.index.hour>=1) & (df.index.hour<2); df[selector]
100 loops, best of 3: 18.2 ms per loop
</code></pre>
<p>I had done these tests with 5th to 7th November as index.</p>
<h3>Documentation</h3>
<pre>
Definition: df.between_time(self, start_time, end_time, include_start=True, include_end=True)
Docstring:
Select values between particular times of the day (e.g., 9:00-9:30 AM)
Parameters
----------
start_time : datetime.time or string
end_time : datetime.time or string
include_start : boolean, default True
include_end : boolean, default True
Returns
-------
values_between_time : type of caller
</pre>
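<p>A quick check of the half-open interval <em>[start, end)</em> the question asked for (note: the <code>inclusive</code> argument requires pandas &gt;= 1.4; on older versions use the <code>include_start</code>/<code>include_end</code> arguments shown in the docstring above):</p>

```python
import pandas as pd

idx = pd.date_range('2012-11-05', '2012-11-07 23:59:59', freq='1min')
df = pd.DataFrame({'n': range(len(idx))}, index=idx)

# [01:00, 02:00) for every day in the index
out = df.between_time('01:00', '02:00', inclusive='left')
```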
|
python|indexing|pandas
| 6
|
7,369
| 21,857,153
|
Error with matplotlib when used with Unicode strings
|
<p>I have text file containing Unicode strings and their frequencies.</p>
<pre><code>അംഗങ്ങള്ക്ക് 10813
കുടുംബശ്രീ 10805
പരിരക്ഷാപദ്ധതിക്ക് 10778
ചെയ്തു 10718
ഇന്ന് 10716
അന്തര് 659
രാജിന്റെ 586
</code></pre>
<p>When I try to plot it using <code>matplotlib</code> </p>
<p>I am getting this error</p>
<pre><code>Traceback (most recent call last):
File "plot.py", line 3, in <module>
xs, ys = np.loadtxt('oun.txt', delimiter='\t').T
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 841, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: could not convert string to float: '
</code></pre>
<p>This is the code I have</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
xs, ys = np.loadtxt('oun.txt', delimiter='\t').T
plt.bar(xs, ys)
plt.show()
</code></pre>
<p>Whats wrong with this code ?</p>
|
<p>In order to read strings from a file using <code>loadtxt</code> you have to specify the <code>dtype</code> argument (see <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow">docs</a> here).</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt('oun.txt', dtype={'names': ('strings', 'freq'),
                                    'formats': ('S32', 'i4')})
xs, ys = zip(*data)
temp = range(len(ys)) # Temp variable for use as x-axis.
plt.bar(temp, ys, align='center')
plt.xticks(temp, xs) # Re-define ticks as your strings.
plt.show()
</code></pre>
<p>In this case the file has 2 columns, I've given them the <code>names</code> <code>('strings', 'freq')</code> and the <code>formats</code> are <code>('S32', 'i4')</code> where <code>S</code> denotes a string and <code>i</code> denotes an integer. The docs for <code>dtype</code> can be found <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow">here</a>. Note that the numbers within the <code>dtype</code> formatting give information on the size of the values in your columns (<code>i4</code> corresponds to a 32-bit signed integer for example).</p>
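<p>On Python 3, where <code>np.loadtxt</code> yields <code>str</code> rather than bytes, the Unicode column should use a <code>'U'</code> format instead of <code>'S'</code>. A self-contained sketch, with two inline rows standing in for <code>oun.txt</code>:</p>

```python
import io
import numpy as np

# Inline data standing in for the tab-separated oun.txt
data = io.StringIO("അംഗം\t10813\nചെയ്തു\t10718\n")
arr = np.loadtxt(data, delimiter='\t',
                 dtype={'names': ('strings', 'freq'),
                        'formats': ('U32', 'i4')})
# arr['strings'] holds the Unicode words, arr['freq'] the integer counts
```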
|
python|numpy|unicode|matplotlib
| 3
|
7,370
| 24,507,550
|
Swapping Columns with NumPy arrays
|
<p>When I have <code>a=1</code> and <code>b=2</code>, I can write <code>a,b=b,a</code> so that <code>a</code> and <code>b</code> are interchanged with each other.</p>
<p>I use this matrix as an array:</p>
<pre><code> [ 1, 2, 0, -2]
[ 0, 0, 1, 2]
[ 0, 0, 0, 0]
</code></pre>
<p>Swapping the columns of a numpy array does not work:</p>
<pre><code>import numpy as np
x = np.array([[ 1, 2, 0, -2],
[ 0, 0, 1, 2],
[ 0, 0, 0, 0]])
x[:,1], x[:,2] = x[:,2], x[:,1]
</code></pre>
<p>It yields:</p>
<pre><code> [ 1, 0, 0, -2]
[ 0, 1, 1, 2]
[ 0, 0, 0, 0]
</code></pre>
<p>So <code>x[:,1]</code> has simply been overwritten and not transferred to <code>x[:,2]</code>.</p>
<p>Why is this the case?</p>
|
<p>If you're trying to swap columns you can do it by</p>
<pre><code>print x
x[:,[2,1]] = x[:,[1,2]]
print x
</code></pre>
<p><strong>output</strong></p>
<pre><code>[[ 1 2 0 -2]
[ 0 0 1 2]
[ 0 0 0 0]]
[[ 1 0 2 -2]
[ 0 1 0 2]
[ 0 0 0 0]]
</code></pre>
<p>The swapping method you mentioned in the question does work for single-dimensional arrays and lists, though, because indexing a single element returns a copy of the value rather than a view into the array:</p>
<pre><code>x = np.array([1,2,0,-2])
print x
x[2], x[1] = x[1], x[2]
print x
</code></pre>
<p><strong>output</strong></p>
<pre><code>[ 1 2 0 -2]
[ 1 0 2 -2]
</code></pre>
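<p>The tuple idiom from the question can also be made to work on columns by copying the right-hand sides, since column slices are views into the same buffer:</p>

```python
import numpy as np

x = np.array([[1, 2, 0, -2],
              [0, 0, 1, 2],
              [0, 0, 0, 0]])
# Copy the columns before assigning -- without .copy() the second
# assignment would read data already overwritten by the first.
x[:, 1], x[:, 2] = x[:, 2].copy(), x[:, 1].copy()
```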
|
python|arrays|numpy|iterable-unpacking
| 25
|
7,371
| 30,109,522
|
multi-monthly mean with pandas' Series
|
<p>I have a sequence of <code>datetime</code> objects and a series of data which spans several years. I can create a <code>Series</code> object and resample it to group it by months:</p>
<pre><code>df=pd.Series(varv,index=dates)
multiMmean=df.resample("M", how='mean')
print multiMmean
</code></pre>
<p>This, however, outputs </p>
<pre><code>2005-10-31 172.4
2005-11-30 69.3
2005-12-31 187.6
2006-01-31 126.4
2006-02-28 187.0
2006-03-31 108.3
...
2014-01-31 94.6
2014-02-28 82.3
2014-03-31 130.1
2014-04-30 59.2
2014-05-31 55.6
2014-06-30 1.2
</code></pre>
<p>which is a list of the mean value for each month of the series. This is not what I want. I want 12 values, one for every month of the year with a mean for each month through the years. How do I get that for <code>multiMmean</code>?</p>
<p>I have tried using <code>resample("M",how='mean')</code> on <code>multiMmean</code> and list comprehensions but I cannot get it to work. What am I missing?</p>
<p>Thank you.</p>
|
<p>the following worked for me:</p>
<pre><code># create some random data with datetime index spanning 17 months
import datetime as dt
import numpy as np
import pandas as pd

s = pd.Series(index=pd.date_range(start=dt.datetime(2014,1,1), end = dt.datetime(2015,6,1)), data = np.random.randn(517))
In [25]:
# now calc the mean for each month
s.groupby(s.index.month).mean()
Out[25]:
1 0.021974
2 -0.192685
3 0.095229
4 -0.353050
5 0.239336
6 -0.079959
7 0.022612
8 -0.254383
9 0.212334
10 0.063525
11 -0.043072
12 -0.172243
dtype: float64
</code></pre>
<p>So we can <code>groupby</code> the <code>month</code> attribute of the datetimeindex and call <code>mean</code> this will calculate the mean for all months</p>
|
python|pandas|time-series
| 12
|
7,372
| 29,929,646
|
Reorder Stacked DataFrame
|
<p>I'm trying to reorder a stacked dataframe. For example, I have:</p>
<pre><code>import numpy as np
testdf = pd.DataFrame(np.random.randn(5,4), index=range(1,6), columns = ['Eric','Jane','Mary','Don'])
testdf.stack()
</code></pre>
<p>And my output is this:</p>
<pre><code>1 Eric -0.301206
Jane 1.327379
Mary 1.066828
Don -0.429380
2 Eric 0.196671
Jane -1.232447
Mary 1.139221
Don 1.441183
3 Eric -0.912282
Jane -0.204741
Mary -0.802078
Don 0.149269
4 Eric -0.168387
Jane 1.608617
Mary 2.237823
Don 0.973450
5 Eric -0.290492
Jane -0.374205
Mary 0.986653
Don 1.584820
dtype: float64
</code></pre>
<p>Is there any way I change the order of these names, without rearranging the columns of the original dataframe? My end goal is to tell pandas that <code>Eric, Don, Mary, Jane</code> is the desired order for all my output later on despite it not being alphabetically ordered, similar to the <code>levels</code> function in R?</p>
<p>Thanks!</p>
|
<p>use <code>set_levels</code> on the index to reorder the values:</p>
<pre><code>In [67]:
t.index.set_levels([[1,2,3,4,5],['Eric', 'Don', 'Mary', 'Jane']], inplace=True)
t
Out[67]:
1 Eric 1.139358
Don -0.368389
Mary -1.907364
Jane 0.444930
2 Eric -0.113019
Don -0.823055
Mary -1.397237
Jane 0.268164
3 Eric -1.246184
Don 0.356804
Mary -0.286919
Jane 0.845538
4 Eric -0.674448
Don 0.903695
Mary 0.873403
Jane -1.321770
5 Eric 1.308402
Don -1.901295
Mary 0.122430
Jane 0.110339
dtype: float64
</code></pre>
<p>from the docstrings, (there is also a brief explanation <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html?highlight=set_levels#setting-metadata" rel="nofollow">online</a> ):</p>
<pre><code>Signature: t.index.set_levels(levels, level=None, inplace=False, verify_integrity=True)
Docstring:
Set new levels on MultiIndex. Defaults to returning
new index.
Parameters
----------
levels : sequence or list of sequence
new level(s) to apply
level : int or level name, or sequence of int / level names (default None)
level(s) to set (None for all levels)
inplace : bool
if True, mutates in place
verify_integrity : bool (default True)
if True, checks that levels and labels are compatible
Returns
-------
new index (of same type and class...etc)
Examples
--------
>>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
(2, u'one'), (2, u'two')],
names=['foo', 'bar'])
>>> idx.set_levels([['a','b'], [1,2]])
MultiIndex(levels=[[u'a', u'b'], [1, 2]],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=[u'foo', u'bar'])
>>> idx.set_levels(['a','b'], level=0)
MultiIndex(levels=[[u'a', u'b'], [u'one', u'two']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=[u'foo', u'bar'])
>>> idx.set_levels(['a','b'], level='bar')
MultiIndex(levels=[[1, 2], [u'a', u'b']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=[u'foo', u'bar'])
>>> idx.set_levels([['a','b'], [1,2]], level=[0,1])
MultiIndex(levels=[[u'a', u'b'], [1, 2]],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=[u'foo', u'bar'])
</code></pre>
<p><strong>Update</strong></p>
<p>If your pandas version is <code>0.15.0</code> or greater then <code>set_levels</code> accepts a <code>level</code> arg which makes it cleaner to adjust one of the levels:</p>
<pre><code>In [244]:
testdf.index.set_levels(['Eric', 'Don', 'Mary', 'Jane'], level=1, inplace=True)
testdf
Out[244]:
1 Eric -0.026484
Don 0.223672
Mary 0.266461
Jane 1.121323
2 Eric -0.250781
Don -1.079661
Mary 0.525879
Jane 1.692250
3 Eric -1.337944
Don 0.765228
Mary -1.297232
Jane 1.121497
4 Eric 2.611441
Don 0.805786
Mary -0.174193
Jane -0.371906
5 Eric -0.084597
Don 1.794861
Mary 0.766524
Jane 0.150359
dtype: float64
</code></pre>
|
python|pandas
| 2
|
7,373
| 30,238,666
|
Calculate difference from a reference row in pandas (python)
|
<p>In Pandas I have a data frame of this type:</p>
<pre><code> value
SampleGroup sample
Group1 ref 18.1
smp1 NaN
smp2 20.3
smp3 30.0
smp4 23.8
smp5 23.2
</code></pre>
<p>What I want to do is to add a new column where the reference (ref) has been subtracted from all samples (smp),
like this:</p>
<pre><code> value deltaValue
SampleGroup sample
Group1 ref 18.1 0
smp1 NaN NaN
smp2 20.3 2.2
smp3 30.0 11.9
smp4 23.8 5.7
smp5 23.2 5.1
</code></pre>
<p>Does anyone know how this can be done?
Thanks!</p>
|
<p>OK I knocked up the following which worked for me:</p>
<pre><code>In [327]:
t="""sample value
ref 18.1
smp1 NaN
smp2 20.3
smp3 30.0
smp4 23.8
smp5 23.2"""
df = pd.read_csv(io.StringIO(t), sep='\s+')
df
Out[327]:
sample value
0 ref 18.1
1 smp1 NaN
2 smp2 20.3
3 smp3 30.0
4 smp4 23.8
5 smp5 23.2
In [328]:
df['Group'] = 'Group1'
df
Out[328]:
sample value Group
0 ref 18.1 Group1
1 smp1 NaN Group1
2 smp2 20.3 Group1
3 smp3 30.0 Group1
4 smp4 23.8 Group1
5 smp5 23.2 Group1
In [329]:
df1 = df.set_index(['Group', 'sample'])
df1
Out[329]:
value
Group sample
Group1 ref 18.1
smp1 NaN
smp2 20.3
smp3 30.0
smp4 23.8
smp5 23.2
In [337]:
df1['deltaValue'] = df1['value'].sub(df1.loc[('Group1','ref')]['value'])
df1
Out[337]:
value deltaValue
Group sample
Group1 ref 18.1 0.0
smp1 NaN NaN
smp2 20.3 2.2
smp3 30.0 11.9
smp4 23.8 5.7
smp5 23.2 5.1
</code></pre>
<p>Also the following worked:</p>
<pre><code>df1['deltaValue'] = df1['value'] - df1.loc[('Group1','ref')]['value']
</code></pre>
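<p>If there are several groups, one hedged generalization (a sketch, assuming each group contains exactly one row labelled <code>ref</code>, with made-up data for the second group) is to take the per-group reference with <code>xs</code> and broadcast the subtraction across the <code>Group</code> level:</p>

```python
import numpy as np
import pandas as pd

# two groups, each with its own 'ref' row (second group is hypothetical data)
df = pd.DataFrame({
    'Group': ['Group1'] * 3 + ['Group2'] * 3,
    'sample': ['ref', 'smp1', 'smp2'] * 2,
    'value': [18.1, np.nan, 20.3, 10.0, 12.5, 9.0],
}).set_index(['Group', 'sample'])

# one reference value per group, indexed by Group
ref = df['value'].xs('ref', level='sample')

# broadcast the subtraction across the Group level of the MultiIndex
df['deltaValue'] = df['value'].sub(ref, level='Group')
```

<p>This avoids hard-coding <code>('Group1', 'ref')</code> and scales to any number of groups.</p>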
|
python|pandas|row|dataframe
| 2
|
7,374
| 53,776,756
|
Create merged column and index column from 2 similar columns
|
<p>I have a DataFrame that looks like this: <code>{"Val1": [...], "Val2": [...]}</code>
What I now want to achieve is a DataFrame that looks like this: </p>
<pre><code>{
"Vals": [<should contain all vals from Val1 and Val2>],
"type": [<1 or 2 depending on the column from which
the corresponding value originated>]
}
</code></pre>
<p>I could generate this by eg:</p>
<pre><code>new = DataFrame({"vals": np.concatenate([old.vals1.values, old.vals2.values]),
                 "type": ([1] * len(old)) + ([2] * len(old))})
</code></pre>
<p>But this feels very hacky, and I wonder whether there is an elegant one-liner using a pandas method, because in my actual problem the table has 4 more columns and my hacky solution becomes quite typing-intensive.</p>
<p>EDIT:
A concrete example would be:</p>
<pre><code>old = pd.DataFrame({"A": [2, 4, 5], "B": [1, 2, 3], "C":[4, 5, 6]})
new = pd.DataFrame({"A": [2, 4, 5, 2, 4, 5], "B and C": [1, 2, 3, 4, 5, 6], "type": (["B"] * 3) + (["C"] * 3)})
old:
A B C
0 2 1 4
1 4 2 5
2 5 3 6
new:
A B and C type
0 2 1 B
1 4 2 B
2 5 3 B
3 2 4 C
4 4 5 C
5 5 6 C
</code></pre>
|
<p>Say you have:</p>
<pre><code>df = pd.DataFrame({'val1':[1,2,3,4],'val2':[5,6,7,8]})
</code></pre>
<p>Using <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.melt.html" rel="nofollow noreferrer"><code>pd.melt()</code></a> you'll get what you want:</p>
<pre><code>df.melt(var_name='Type', value_name='vals')
Type vals
0 val1 1
1 val1 2
2 val1 3
3 val1 4
4 val2 5
5 val2 6
6 val2 7
7 val2 8
</code></pre>
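<p>For the concrete example in the question's edit, where column <code>A</code> should be preserved while <code>B</code> and <code>C</code> are stacked, <code>id_vars</code> does the trick (a sketch using the question's own frame):</p>

```python
import pandas as pd

old = pd.DataFrame({"A": [2, 4, 5], "B": [1, 2, 3], "C": [4, 5, 6]})

# keep A as an identifier column; stack B and C into a single value column
new = old.melt(id_vars="A", value_vars=["B", "C"],
               var_name="type", value_name="B and C")
```

<p>The result matches the desired output up to column order.</p>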
|
python|pandas
| 2
|
7,375
| 53,404,010
|
element-wise merge np.array in multiple pandas column
|
<p>I have a pandas DataFrame in which several columns hold np.array values. I would like to merge these arrays row-wise into a single array. </p>
<p>e.g</p>
<pre><code> col1 col2 col3
[2.1, 3] [4, 4] [2, 3]
[4, 5] [6, 7] [9, 9]
[7, 8] [8, 9] [5, 4]
... ... ...
</code></pre>
<p>expected result:</p>
<pre><code>col_f
[2.1, 3, 4, 4, 2, 3]
[4, 5, 6, 7, 9, 9]
[7, 8, 8, 9, 5, 4]
</code></pre>
<p>........</p>
<p>I used a for loop to do it, but I am wondering if there is a more elegant way. </p>
<p>Below is my for-loop code:</p>
<pre><code>f_vector = []
for i in range(len(df.index)):
    vector = np.hstack((df['A0_vector'][i], df['A1_vector'][i], df['A2_vector'][i],
                        df['A3_vector'][i], df['A4_vector'][i], df['A5_vector'][i]))
    f_vector.append(vector)
X = np.array(f_vector)
</code></pre>
|
<p>You can use <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">numpy.concatenate</a> with apply along axis=1:</p>
<pre><code>import numpy as np
df['col_f'] = df[['col1', 'col2', 'col3']].apply(np.concatenate, axis=1)
</code></pre>
<p>If those were lists instead of np.arrays, <code>+</code> operator would have worked:</p>
<pre><code>df['col_f'] = df['col1'] + df['col2'] + + df['col3']
</code></pre>
<p><em>Note: edited after comments thread below.</em></p>
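<p>Depending on the pandas version, <code>apply</code> with <code>axis=1</code> may try to expand row-wise ndarray results into columns; a version-robust alternative (a sketch with made-up data) is a plain comprehension over the rows:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'col1': [np.array([2.1, 3]), np.array([4, 5])],
    'col2': [np.array([4, 4]), np.array([6, 7])],
    'col3': [np.array([2, 3]), np.array([9, 9])],
})

# concatenate the three cell arrays of each row into one flat array
df['col_f'] = [np.concatenate(row) for row in df[['col1', 'col2', 'col3']].to_numpy()]
```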
|
python|pandas|numpy
| 0
|
7,376
| 53,494,616
|
Python - How to create a matrix with negative index position?
|
<p>I can create a normal matrix with numpy using </p>
<p><code>np.zeros([800, 200])</code></p>
<p>How can I create a matrix with a negative index - as in a 1600x200 matrix with row index from -800 to 800? </p>
|
<p>Not sure what you need it for but maybe you could use a dictionary instead.</p>
<pre><code>a={i:0 for i in range(-800,801)}
</code></pre>
<p>With this you can call <code>a[-800] to a[800]</code>.</p>
<p>For 2-D,</p>
<pre><code>a={(i,j):0 for i in range(-800,801) for j in range(-100,101)}
</code></pre>
<p>This can be called with <code>a[(-800,-100)] to a[(800,100)]</code></p>
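<p>If you want to stay with a dense numpy array rather than a dict, another option (an illustrative sketch; the class name is made up) is a thin wrapper that shifts the row index by a fixed offset:</p>

```python
import numpy as np

class OffsetArray:
    """Maps row indices -offset..+offset onto rows 0..2*offset of a plain ndarray."""
    def __init__(self, rows, cols, row_offset):
        self.data = np.zeros((rows, cols))
        self.row_offset = row_offset

    def __getitem__(self, idx):
        r, c = idx
        return self.data[r + self.row_offset, c]

    def __setitem__(self, idx, value):
        self.data[idx[0] + self.row_offset, idx[1]] = value

# 1601 rows so the row index runs from -800 to 800 inclusive
m = OffsetArray(1601, 200, 800)
m[-800, 0] = 5.0
```

<p>This keeps numpy's vectorized storage while giving the negative-index interface.</p>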
|
python|arrays|numpy|matrix
| 1
|
7,377
| 53,724,668
|
CNN Using Images With Significant Size Differences
|
<p>I am developing a convolutional neural network (CNN) for image classification. </p>
<p>The dataset available to me is relatively small (~35k images across the train and test sets). Each image in the dataset varies in size: the smallest is 30 x 77 and the largest is 1575 x 5959. </p>
<p>I saw this <a href="https://stackoverflow.com/questions/41907598/how-to-train-images-when-they-have-different-size">post</a> about how to deal with images that vary in size. The post identifies the following methods for dealing with images of different sizes. </p>
<ul>
<li><p>"Squash" images meaning they will be resized to fit specific dimensions without maintaining the aspect ratio</p></li>
<li><p>Center-crop the images to a specific size. </p></li>
<li>Pad the images with a solid color to a squared size, then resize.</li>
<li>Combination of the things above</li>
</ul>
<p>These seem like reasonable suggestions, but I am unsure of which approach is most relevant for my situation where the images have <strong>significant</strong> differences in sizes. I was thinking it makes sense for me to resize the images but maintain the same aspect ratio (each image would have the same height), and then take a center crop of these images. </p>
<p>Does anyone else have any thoughts?</p>
|
<p>The first important thing is: will resizing deteriorate the images?</p>
<p>Are the elements you care about at roughly the same scale in every image, despite the differing image sizes? </p>
<ul>
<li>If yes, you should not resize, use models with variable input sizes (there is a minimum, though). </li>
<li>If no, will resizing bring your desired elements to a similar scale?
<ul>
<li>If yes: resize! </li>
<li>If no: better think of the other solutions</li>
</ul></li>
</ul>
<p>Of course you can have models that identify your elements at many different scales, but the bigger the scale differences, the more powerful the model needs to be (I believe this statement is pretty reasonable). </p>
<p>Keras offers you the possibility of working with different image sizes (you don't really need them to have all the same size). </p>
<p>For that, you just need to specify the <code>input_shape=(None,None,input_channels)</code>.<br>
Notice that you will need to take care of compatibilities if you're going to create and merge branches.</p>
<p>With varying shapes, you will not be able to use <code>Flatten</code> layers, though. You will need <code>GlobalMaxPooling2D</code> or <code>GlobalAveragePooling2D</code>. Some other layers are also limited to fixed sizes, but convolutional, pooling and upsampling layers are ok. </p>
<p>The hard part is that you can't put different sizes in a single numpy array. Then you can: </p>
<ul>
<li>resize to groups of the same size without huge variations to make training easier. </li>
<li>simply not resize and train images one by one </li>
<li>keep aspect ratio and pad the sides </li>
</ul>
<p>But the best answer depends on your tests. </p>
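<p>As a concrete illustration of the "keep aspect ratio, then center-crop" idea from the question, here is a minimal numpy-only sketch (nearest-neighbour resampling; the function name is made up, and a real pipeline would use PIL or OpenCV interpolation):</p>

```python
import numpy as np

def resize_keep_aspect_then_center_crop(img, target_h, target_w):
    """Nearest-neighbour resize so height == target_h (aspect kept), then center-crop width."""
    h, w = img.shape[:2]
    # width after aspect-preserving resize; stretch up if it would be too narrow
    new_w = max(int(round(w * target_h / h)), target_w)
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[rows][:, cols]
    left = (new_w - target_w) // 2
    return resized[:, left:left + target_w]
```

<p>Usage: calling it with the smallest image size from the question, e.g. a 30 x 77 array and a 64 x 64 target, yields a 64 x 64 crop.</p>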
|
python|tensorflow|keras|neural-network|computer-vision
| 6
|
7,378
| 17,386,835
|
Extending an existing matrix in scipy
|
<p>I have a N*N matrix:</p>
<pre><code>N=3
x = scipy.sparse.lil_matrix( (N,N) )
for _ in xrange(N):
    x[random.randint(0,N-1), random.randint(0,N-1)] = random.randint(1,100)
</code></pre>
<p>Assume the matrix looks as below:</p>
<pre><code> X Y Z
X 0 [2,3] [1,4]
Y [2,3] 0 0
Z [1,4] 0 0
</code></pre>
<p>How do I add the N+1 vertex without disturbing the existing values? </p>
<pre><code> X Y Z A
X 0 [2,3] [1,4] 0
Y [2,3] 0 0 0
Z [1,4] 0 0 [1]
</code></pre>
<p>Would the entire matrix need to be re-constructed?<br>
When I try vstack to add a new row, I get an error:</p>
<pre><code>>>> import scipy.sparse as sp
>>> c=sp.coo_matrix(x)
>>> c.todense()
matrix([[ 1., 3., 5.],
[ 2., 6., 4.],
[ 8., 2., 10.]])
>>> sp.vstack([c,sp.coo_matrix(1,3)])
Traceback (most recent call last):
File "<pyshell#41>", line 1, in <module>
sp.vstack([c,sp.coo_matrix(1,3)])
File "c:\working\QZPkgs\eggs\scipy-0.10.1-py2.6-win32.egg\scipy\sparse\construct.py", line 293, in vstack
return bmat([ [b] for b in blocks ], format=format, dtype=dtype)
File "c:\working\QZPkgs\eggs\scipy-0.10.1-py2.6-win32.egg\scipy\sparse\construct.py", line 355, in bmat
raise ValueError('blocks[:,%d] has incompatible column dimensions' % j)
ValueError: blocks[:,0] has incompatible column dimensions
</code></pre>
|
<p>The problem is how the empty row is constructed, not <code>todense()</code>. <code>sp.coo_matrix(1,3)</code> passes <code>1</code> as the data and <code>3</code> as the <code>shape</code> argument, which does not create the intended empty 1 x 3 matrix, so <code>vstack</code> sees incompatible column dimensions. Pass the shape as a tuple instead:</p>
<pre><code>sp.vstack([c, sp.coo_matrix((1, 3))])
</code></pre>
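<p>To grow the matrix by a whole vertex (one new row <em>and</em> one new column, as in the question's A example), a hedged sketch using <code>vstack</code>/<code>hstack</code> with empty blocks (note the shape tuples):</p>

```python
import numpy as np
import scipy.sparse as sp

c = sp.coo_matrix(np.array([[1., 3., 5.],
                            [2., 6., 4.],
                            [8., 2., 10.]]))
n = c.shape[0]

# append an empty row at the bottom, then an empty column on the right
grown = sp.hstack([sp.vstack([c, sp.coo_matrix((1, n))]),
                   sp.coo_matrix((n + 1, 1))]).tolil()

grown[2, 3] = 1.0   # set the new Z-A entry from the question
```

<p>Converting to <code>lil</code> at the end makes the subsequent single-entry assignment cheap.</p>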
|
python|arrays|numpy|matrix|scipy
| 0
|
7,379
| 20,317,157
|
Arrays, plotting, fitting gaussian distribution for multiples on graph which represents a power spectrum
|
<p>This is my code. It reads in a data text file that has some metadata at the start, followed by two columns of data: angle two-theta in the left column and radiation count in the right. The code looks for the wavelength value in the metadata and stores it in a variable for later use, then seeks back to the start of the file, skips the metadata, and reads the dataset into an array. The log of the radiation count is plotted against the angle of diffraction. There are several peaks on this plot, and I want to access each peak individually, then calculate and store parameters such as the full width at half maximum (FWHM) and the area under the curve for later access. Is scanning through the main array, <code>data</code>, and creating smaller arrays to hold the data for each individual peak a good method for calculating these parameters? Here is my code below, followed by the data from the text file I am using.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import math

figure = plt.figure()
count = 0
with open("chan.txt", "r") as metapart:
    for line in metapart:
        if "&END" in line:
            break
        count = count + 1
    print count
    print metapart.next()
    metapart.seek(0)
    data = np.genfromtxt(metapart, skiprows=11)
print data

x_data = data[:,0]
y_init = data[:,1]
y_data = []
for i in y_init:
    if i > 0:
        y_data.append(math.log10(i))
    else:
        y_data.append(i)
print y_data

plt.plot(x_data, y_data, 'r')
plt.show()
</code></pre>
<p>Here is the data from text file:</p>
<pre><code> &SRS
<MetaDataAtStart>
multiple=True
Wavelength (Angstrom)=0.97587
mode=assessment
background=False
issid=py11n2g
noisy=False
</MetaDataAtStart>
&END
Two Theta(deg) Counts(sec^-1)
10.0 0.0
10.1 0.0
10.2 0.0
10.3 0.0
10.4 0.0
10.5 0.0
10.6 0.0
10.7 0.0
10.8 0.0
10.9 0.0
11.0 0.0
11.1 0.0
11.2 0.0
11.3 0.0
11.4 0.0
11.5 0.0
11.6 0.0
11.7 0.0
11.8 0.0
11.9 0.0
12.0 0.0
12.1 0.0
12.2 0.0
12.3 0.0
12.4 0.0
12.5 0.0
12.6 0.0
12.7 0.0
12.8 0.0
12.9 0.0
13.0 0.0
13.1 0.0
13.2 0.0
13.3 0.0
13.4 0.0
13.5 0.0
13.6 0.0
13.7 0.0
13.8 0.0
13.9 0.0
14.0 0.0
14.1 0.0
14.2 0.0
14.3 0.0
14.4 0.0
14.5 0.0
14.6 0.0
14.7 0.0
14.8 0.0
14.9 0.0
15.0 0.0
15.1 0.0
15.2 0.0
15.3 0.0
15.4 0.0
15.5 0.0
15.6 0.0
15.7 0.0
15.8 0.0
15.9 0.0
16.0 0.0
16.1 0.0
16.2 0.0
16.3 0.0
16.4 0.0
16.5 0.0
16.6 0.0
16.7 0.0
16.8 0.0
16.9 0.0
17.0 0.0
17.1 0.0
17.2 0.0
17.3 0.0
17.4 0.0
17.5 0.0
17.6 0.0
17.7 0.0
17.8 0.0
17.9 0.0
18.0 0.0
18.1 0.0
18.2 0.0
18.3 0.0
18.4 0.0
18.5 0.0
18.6 0.0
18.7 0.0
18.8 0.0
18.9 0.0
19.0 0.0
19.1 0.0
19.2 0.0
19.3 0.0
19.4 0.0
19.5 0.0
19.6 0.0
19.7 0.0
19.8 0.0
19.9 0.0
20.0 0.0
20.1 0.0
20.2 0.0
20.3 0.0
20.4 0.0
20.5 0.0
20.6 0.0
20.7 0.0
20.8 0.0
20.9 0.0
21.0 0.0
21.1 0.0
21.2 0.0
21.3 0.0
21.4 0.0
21.5 0.0
21.6 0.0
21.7 0.0
21.8 0.0
21.9 0.0
22.0 0.0
22.1 0.0
22.2 0.0
22.3 0.0
22.4 0.0
22.5 0.0
22.6 0.0
22.7 0.0
22.8 0.0
22.9 0.0
23.0 0.0
23.1 0.0
23.2 0.0
23.3 0.0
23.4 0.0
23.5 0.0
23.6 0.0
23.7 2.0
23.8 8.0
23.9 30.0
24.0 86.0
24.1 205.0
24.2 400.0
24.3 642.0
24.4 848.0
24.5 922.0
24.6 823.0
24.7 605.0
24.8 365.0
24.9 181.0
25.0 74.0
25.1 25.0
25.2 7.0
25.3 2.0
25.4 0.0
25.5 0.0
25.6 0.0
25.7 0.0
25.8 0.0
25.9 0.0
26.0 0.0
26.1 0.0
26.2 0.0
26.3 0.0
26.4 0.0
26.5 0.0
26.6 0.0
26.7 0.0
26.8 0.0
26.9 0.0
27.0 0.0
27.1 0.0
27.2 0.0
27.3 0.0
27.4 0.0
27.5 1.0
27.6 3.0
27.7 10.0
27.8 33.0
27.9 88.0
28.0 193.0
28.1 347.0
28.2 515.0
28.3 630.0
28.4 636.0
28.5 529.0
28.6 363.0
28.7 206.0
28.8 96.0
28.9 37.0
29.0 12.0
29.1 3.0
29.2 1.0
29.3 0.0
29.4 0.0
29.5 0.0
29.6 0.0
29.7 0.0
29.8 0.0
29.9 0.0
30.0 0.0
30.1 0.0
30.2 0.0
30.3 0.0
30.4 0.0
30.5 0.0
30.6 0.0
30.7 0.0
30.8 0.0
30.9 0.0
31.0 0.0
31.1 0.0
31.2 0.0
31.3 0.0
31.4 0.0
31.5 0.0
31.6 0.0
31.7 0.0
31.8 0.0
31.9 0.0
32.0 0.0
32.1 0.0
32.2 0.0
32.3 0.0
32.4 0.0
32.5 0.0
32.6 0.0
32.7 0.0
32.8 0.0
32.9 0.0
33.0 0.0
33.1 0.0
33.2 0.0
33.3 0.0
33.4 0.0
33.5 0.0
33.6 0.0
33.7 0.0
33.8 0.0
33.9 0.0
34.0 0.0
34.1 0.0
34.2 0.0
34.3 0.0
34.4 0.0
34.5 0.0
34.6 0.0
34.7 0.0
34.8 0.0
34.9 0.0
35.0 0.0
35.1 0.0
35.2 0.0
35.3 0.0
35.4 0.0
35.5 0.0
35.6 0.0
35.7 0.0
35.8 0.0
35.9 0.0
36.0 0.0
36.1 0.0
36.2 0.0
36.3 0.0
36.4 0.0
36.5 0.0
36.6 0.0
36.7 0.0
36.8 0.0
36.9 0.0
37.0 0.0
37.1 0.0
37.2 0.0
37.3 0.0
37.4 0.0
37.5 0.0
37.6 0.0
37.7 0.0
37.8 0.0
37.9 0.0
38.0 0.0
38.1 0.0
38.2 0.0
38.3 0.0
38.4 0.0
38.5 0.0
38.6 0.0
38.7 0.0
38.8 0.0
38.9 0.0
39.0 0.0
39.1 0.0
39.2 0.0
39.3 0.0
39.4 0.0
39.5 0.0
39.6 0.0
39.7 1.0
39.8 5.0
39.9 18.0
40.0 53.0
40.1 125.0
40.2 249.0
40.3 415.0
40.4 575.0
40.5 667.0
40.6 645.0
40.7 521.0
40.8 352.0
40.9 198.0
41.0 93.0
41.1 37.0
41.2 12.0
41.3 3.0
41.4 1.0
41.5 0.0
41.6 0.0
41.7 0.0
41.8 0.0
41.9 0.0
42.0 0.0
42.1 0.0
42.2 0.0
42.3 0.0
42.4 0.0
42.5 0.0
42.6 0.0
42.7 0.0
42.8 0.0
42.9 0.0
43.0 0.0
43.1 0.0
43.2 0.0
43.3 0.0
43.4 0.0
43.5 0.0
43.6 0.0
43.7 0.0
43.8 0.0
43.9 0.0
44.0 0.0
44.1 0.0
44.2 0.0
44.3 0.0
44.4 0.0
44.5 0.0
44.6 0.0
44.7 0.0
44.8 0.0
44.9 0.0
45.0 0.0
45.1 0.0
45.2 0.0
45.3 0.0
45.4 0.0
45.5 0.0
45.6 0.0
45.7 0.0
45.8 0.0
45.9 0.0
46.0 0.0
46.1 0.0
46.2 0.0
46.3 0.0
46.4 0.0
46.5 0.0
46.6 0.0
46.7 0.0
46.8 0.0
46.9 0.0
47.0 1.0
47.1 5.0
47.2 18.0
47.3 58.0
47.4 155.0
47.5 351.0
47.6 670.0
47.7 1079.0
47.8 1463.0
47.9 1672.0
48.0 1610.0
48.1 1307.0
48.2 894.0
48.3 515.0
48.4 250.0
48.5 103.0
48.6 35.0
48.7 10.0
48.8 3.0
48.9 1.0
49.0 0.0
49.1 0.0
49.2 0.0
49.3 1.0
49.4 2.0
49.5 8.0
49.6 25.0
49.7 63.0
49.8 135.0
49.9 244.0
50.0 374.0
50.1 483.0
50.2 528.0
50.3 488.0
50.4 382.0
50.5 252.0
50.6 141.0
50.7 66.0
50.8 26.0
50.9 9.0
51.0 3.0
51.1 1.0
51.2 0.0
51.3 0.0
51.4 0.0
51.5 0.0
51.6 0.0
51.7 0.0
51.8 0.0
51.9 0.0
52.0 0.0
52.1 0.0
52.2 0.0
52.3 0.0
52.4 0.0
52.5 0.0
52.6 0.0
52.7 0.0
52.8 0.0
52.9 0.0
53.0 0.0
53.1 0.0
53.2 0.0
53.3 0.0
53.4 0.0
53.5 0.0
53.6 0.0
53.7 0.0
53.8 0.0
53.9 0.0
54.0 0.0
54.1 0.0
54.2 0.0
54.3 0.0
54.4 0.0
54.5 0.0
54.6 0.0
54.7 0.0
54.8 0.0
54.9 0.0
55.0 0.0
55.1 0.0
55.2 0.0
55.3 0.0
55.4 0.0
55.5 0.0
55.6 0.0
55.7 0.0
55.8 0.0
55.9 0.0
56.0 0.0
56.1 0.0
56.2 0.0
56.3 0.0
56.4 0.0
56.5 0.0
56.6 0.0
56.7 0.0
56.8 0.0
56.9 0.0
57.0 0.0
57.1 0.0
57.2 0.0
57.3 0.0
57.4 0.0
57.5 0.0
57.6 0.0
57.7 0.0
57.8 1.0
57.9 3.0
58.0 10.0
58.1 27.0
58.2 60.0
58.3 113.0
58.4 184.0
58.5 256.0
58.6 305.0
58.7 310.0
58.8 270.0
58.9 202.0
59.0 129.0
59.1 70.0
59.2 33.0
59.3 13.0
59.4 4.0
59.5 1.0
59.6 0.0
59.7 0.0
59.8 0.0
59.9 0.0
60.0 0.0
60.1 0.0
60.2 0.0
60.3 0.0
60.4 0.0
60.5 0.0
60.6 0.0
60.7 0.0
60.8 0.0
60.9 0.0
61.0 0.0
61.1 0.0
61.2 0.0
61.3 0.0
61.4 0.0
61.5 0.0
61.6 0.0
61.7 0.0
61.8 0.0
61.9 0.0
62.0 0.0
62.1 0.0
62.2 0.0
62.3 0.0
62.4 0.0
62.5 0.0
62.6 0.0
62.7 0.0
62.8 0.0
62.9 0.0
63.0 0.0
63.1 0.0
63.2 0.0
63.3 0.0
63.4 0.0
63.5 0.0
63.6 0.0
63.7 0.0
63.8 0.0
63.9 0.0
64.0 0.0
64.1 0.0
64.2 0.0
64.3 0.0
64.4 0.0
64.5 0.0
64.6 0.0
64.7 0.0
64.8 0.0
64.9 0.0
65.0 0.0
65.1 0.0
65.2 0.0
65.3 0.0
65.4 0.0
65.5 0.0
65.6 0.0
65.7 0.0
65.8 0.0
65.9 0.0
66.0 0.0
66.1 0.0
66.2 0.0
66.3 0.0
66.4 0.0
66.5 0.0
66.6 0.0
66.7 0.0
66.8 0.0
66.9 0.0
67.0 0.0
67.1 0.0
67.2 0.0
67.3 0.0
67.4 0.0
67.5 0.0
67.6 0.0
67.7 0.0
67.8 0.0
67.9 0.0
68.0 0.0
68.1 0.0
68.2 0.0
68.3 0.0
68.4 0.0
68.5 0.0
68.6 0.0
68.7 0.0
68.8 0.0
68.9 0.0
69.0 0.0
69.1 0.0
69.2 0.0
69.3 0.0
69.4 0.0
69.5 0.0
69.6 0.0
69.7 0.0
69.8 0.0
69.9 0.0
70.0 0.0
70.1 0.0
70.2 0.0
70.3 0.0
70.4 0.0
70.5 0.0
70.6 0.0
70.7 0.0
70.8 0.0
70.9 0.0
71.0 0.0
71.1 0.0
71.2 0.0
71.3 0.0
71.4 0.0
71.5 0.0
71.6 0.0
71.7 0.0
71.8 0.0
71.9 0.0
72.0 0.0
72.1 0.0
72.2 0.0
72.3 0.0
72.4 0.0
72.5 0.0
72.6 0.0
72.7 0.0
72.8 0.0
72.9 0.0
73.0 0.0
73.1 0.0
73.2 0.0
73.3 0.0
73.4 0.0
73.5 0.0
73.6 0.0
73.7 0.0
73.8 0.0
73.9 0.0
74.0 0.0
74.1 0.0
74.2 0.0
74.3 0.0
74.4 0.0
74.5 0.0
74.6 0.0
74.7 0.0
74.8 0.0
74.9 0.0
75.0 0.0
75.1 0.0
75.2 0.0
75.3 0.0
75.4 0.0
75.5 0.0
75.6 0.0
75.7 0.0
75.8 0.0
75.9 0.0
76.0 0.0
76.1 0.0
76.2 0.0
76.3 0.0
76.4 0.0
76.5 0.0
76.6 0.0
76.7 0.0
76.8 0.0
76.9 0.0
77.0 0.0
77.1 0.0
77.2 0.0
77.3 0.0
77.4 0.0
77.5 0.0
77.6 0.0
77.7 0.0
77.8 0.0
77.9 0.0
78.0 0.0
78.1 0.0
78.2 0.0
78.3 0.0
78.4 0.0
78.5 0.0
78.6 0.0
78.7 0.0
78.8 0.0
78.9 0.0
79.0 0.0
79.1 0.0
79.2 0.0
79.3 0.0
79.4 0.0
79.5 0.0
79.6 0.0
79.7 0.0
79.8 0.0
79.9 0.0
80.0 0.0
80.1 0.0
80.2 0.0
80.3 0.0
80.4 0.0
80.5 0.0
80.6 0.0
80.7 0.0
80.8 0.0
80.9 0.0
81.0 0.0
81.1 0.0
81.2 0.0
81.3 0.0
81.4 0.0
81.5 0.0
81.6 0.0
81.7 0.0
81.8 0.0
81.9 0.0
82.0 0.0
82.1 0.0
82.2 0.0
82.3 0.0
82.4 0.0
82.5 0.0
82.6 0.0
82.7 0.0
82.8 0.0
82.9 0.0
83.0 0.0
83.1 0.0
83.2 0.0
83.3 0.0
83.4 0.0
83.5 0.0
83.6 0.0
83.7 0.0
83.8 0.0
83.9 0.0
84.0 0.0
84.1 0.0
84.2 0.0
84.3 0.0
84.4 0.0
84.5 0.0
84.6 0.0
84.7 0.0
84.8 0.0
84.9 0.0
85.0 0.0
85.1 0.0
85.2 0.0
85.3 0.0
85.4 0.0
85.5 0.0
85.6 0.0
85.7 0.0
85.8 0.0
85.9 0.0
86.0 0.0
86.1 0.0
86.2 0.0
86.3 0.0
86.4 0.0
86.5 0.0
86.6 0.0
86.7 0.0
86.8 0.0
86.9 0.0
87.0 0.0
87.1 0.0
87.2 0.0
87.3 0.0
87.4 0.0
87.5 0.0
87.6 0.0
87.7 0.0
87.8 0.0
87.9 0.0
88.0 0.0
88.1 0.0
88.2 0.0
88.3 0.0
88.4 0.0
88.5 0.0
88.6 0.0
88.7 0.0
88.8 0.0
88.9 0.0
89.0 0.0
89.1 0.0
89.2 0.0
89.3 0.0
89.4 0.0
89.5 0.0
89.6 0.0
89.7 0.0
89.8 0.0
89.9 0.0
90.0 0.0
90.1 0.0
90.2 0.0
90.3 0.0
90.4 0.0
90.5 0.0
90.6 0.0
90.7 0.0
90.8 0.0
90.9 0.0
91.0 0.0
91.1 0.0
91.2 0.0
91.3 0.0
91.4 0.0
91.5 0.0
91.6 0.0
91.7 0.0
91.8 0.0
91.9 0.0
92.0 0.0
92.1 0.0
92.2 0.0
92.3 0.0
92.4 0.0
92.5 0.0
92.6 0.0
92.7 0.0
92.8 0.0
92.9 0.0
93.0 0.0
93.1 0.0
93.2 0.0
93.3 0.0
93.4 0.0
93.5 0.0
93.6 0.0
93.7 0.0
93.8 0.0
93.9 0.0
94.0 0.0
94.1 0.0
94.2 0.0
94.3 0.0
94.4 0.0
94.5 0.0
94.6 0.0
94.7 0.0
94.8 0.0
94.9 0.0
95.0 0.0
95.1 0.0
95.2 0.0
95.3 0.0
95.4 0.0
95.5 0.0
95.6 0.0
95.7 0.0
95.8 0.0
95.9 0.0
96.0 0.0
96.1 0.0
96.2 0.0
96.3 0.0
96.4 0.0
96.5 0.0
96.6 0.0
96.7 0.0
96.8 0.0
96.9 0.0
97.0 0.0
97.1 0.0
97.2 0.0
97.3 0.0
97.4 0.0
97.5 0.0
97.6 0.0
97.7 0.0
97.8 0.0
97.9 0.0
98.0 0.0
98.1 0.0
98.2 0.0
98.3 0.0
98.4 0.0
98.5 0.0
98.6 0.0
98.7 0.0
98.8 0.0
98.9 0.0
99.0 0.0
99.1 0.0
99.2 0.0
99.3 0.0
99.4 0.0
99.5 0.0
99.6 0.0
99.7 0.0
99.8 0.0
99.9 0.0
100.0 0.0
100.1 0.0
100.2 0.0
100.3 0.0
100.4 0.0
100.5 0.0
100.6 0.0
100.7 0.0
100.8 0.0
100.9 0.0
101.0 0.0
101.1 0.0
101.2 0.0
101.3 0.0
101.4 0.0
101.5 0.0
101.6 0.0
101.7 0.0
101.8 0.0
101.9 0.0
102.0 0.0
102.1 0.0
102.2 0.0
102.3 0.0
102.4 0.0
102.5 0.0
102.6 0.0
102.7 0.0
102.8 0.0
102.9 0.0
103.0 0.0
103.1 0.0
103.2 0.0
103.3 0.0
103.4 0.0
103.5 0.0
103.6 0.0
103.7 0.0
103.8 0.0
103.9 0.0
104.0 0.0
104.1 0.0
104.2 0.0
104.3 0.0
104.4 0.0
104.5 0.0
104.6 0.0
104.7 0.0
104.8 0.0
104.9 0.0
105.0 0.0
105.1 0.0
105.2 0.0
105.3 0.0
105.4 0.0
105.5 0.0
105.6 0.0
105.7 0.0
105.8 0.0
105.9 0.0
106.0 0.0
106.1 0.0
106.2 0.0
106.3 0.0
106.4 0.0
106.5 0.0
106.6 0.0
106.7 0.0
106.8 0.0
106.9 0.0
107.0 0.0
107.1 0.0
107.2 0.0
107.3 0.0
107.4 0.0
107.5 0.0
107.6 0.0
107.7 0.0
107.8 0.0
107.9 0.0
108.0 0.0
108.1 0.0
108.2 0.0
108.3 0.0
108.4 0.0
108.5 0.0
108.6 0.0
108.7 0.0
108.8 0.0
108.9 0.0
109.0 0.0
109.1 0.0
109.2 0.0
109.3 0.0
109.4 0.0
109.5 0.0
109.6 0.0
109.7 0.0
109.8 0.0
109.9 0.0
110.0 0.0
</code></pre>
|
<p>Yes that is a suitable approach.</p>
<p>First some code to get you going (I tweaked your code):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import math
figure = plt.figure()
def Gaussian(x, Geoms):
# Geoms is a n X 3 array:
# Position, Amplitude, FWHM
#Geoms = [[2,.00001,3][1,17,.2]]
Position = Geoms[:, 0]
Amplitude = Geoms[:, 1]
FWHM = Geoms[:, 2]
y = np.zeros(x.shape)
for i in np.arange(0, Amplitude.shape[0]):
g = Position[i]; q = (x-g); v = FWHM[i]
q = q*q; v = v*v
y += (Amplitude[i] * np.exp(-q/(v*np.sqrt(2))))
return y
count = 0
with open("chan.txt", "r") as metapart:
for line in metapart:
if "&END" in line:
break
count = count + 1
print count
print metapart.next()
metapart.seek(0)
data = np.genfromtxt(metapart, skiprows=11)
print data
metapart.close()
x_data = data[:,0]
y_data = data[:,1]
# Build several gaussian peaks.
FWHM = 0.25
Geoms = np.array([[24.48, 922, FWHM], [28.35, 644, FWHM]])
y_fit = Gaussian(x_data, Geoms)
# I recommend keeping your data in its original form. Only display it logarithmically if you want.
plt.semilogy(x_data,y_data, 'r', x_data, y_fit, 'b')
# And you may want to control the plot range.
plt.axis([0,100, 1,1000])
plt.show()
</code></pre>
<p>I added a function I wrote a while ago called Gaussian that, generates your peaks. Sorry it isn't better commented, but it should be pretty straightforward to use. It requires numpy inputs.</p>
<p>Second, I removed your for loop for truncating the data at 0. (You can look into <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html" rel="nofollow">numpy.clip()</a> in the future if you want to do that). I recommend leaving your data in the original format, and only <em>plot</em> it logarithmically. This is especially true if you are doing fits, since otherwise you have to write a log_gaussian function, etc., and it gets confusing.</p>
<p>Here, I just dropped in the first two gaussians by hand. Of course, you can drop in the rest by hand and tweak the values until they are as accurate as the uncertainty imposed by the digitization of your data within a few minutes. If you're doing this once, that's what I'd do. But if you are doing this more than once, or you have more complicated shapes, I would spend the time to learn the <a href="http://docs.scipy.org/doc/scipy/reference/optimize.html" rel="nofollow">SciPy optimize module</a>. You probably want to start with the <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/" rel="nofollow">SciPy tutorial</a>. Using ScyPy optimize has a longer runway, but it pays off with interest and dividends, and many stock splits.</p>
<p>Enjoy!</p>
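<p>For the FWHM and area-under-curve parameters the question asks about, here is a purely numerical sketch (no fitting; the function name and window bounds are made up). It linearly interpolates the half-maximum crossings on each flank of a single peak and integrates with the trapezoid rule:</p>

```python
import numpy as np

def peak_stats(x, y, lo, hi):
    """Estimate (FWHM, area) of the single peak inside the window [lo, hi]."""
    m = (x >= lo) & (x <= hi)
    xs, ys = x[m], y[m]
    half = ys.max() / 2.0
    above = np.where(ys >= half)[0]
    i0, i1 = above[0], above[-1]
    # linearly interpolate where y crosses the half maximum on each flank
    left = np.interp(half, [ys[i0 - 1], ys[i0]], [xs[i0 - 1], xs[i0]])
    right = np.interp(half, [ys[i1 + 1], ys[i1]], [xs[i1 + 1], xs[i1]])
    area = (0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)).sum()  # trapezoid rule
    return right - left, area
```

<p>You would call this once per peak window, e.g. <code>peak_stats(x_data, y_data, 24.0, 25.3)</code> for the first peak, and store the results.</p>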
|
python|arrays|numpy|plot|gaussian
| 0
|
7,380
| 12,461,413
|
Why does comparison of a numpy array with a list consume so much memory?
|
<p>This bit stung me recently. I solved it by removing all comparisons of numpy arrays with lists from the code. But why does the garbage collector fail to collect it?</p>
<p>Run this and watch it eat your memory:</p>
<pre><code>import numpy as np
r = np.random.rand(2)
l = []
while True:
    r == l
</code></pre>
<p>Running on 64bit Ubuntu 10.04, virtualenv 1.7.2, Python 2.7.3, Numpy 1.6.2</p>
|
<p>Just in case someone stumbles on this and wonders...</p>
<p>@Dugal yes, I believe this is a memory leak in current numpy versions (Sept. 2012) that occurs when some Exceptions are raised (see <a href="http://projects.scipy.org/numpy/ticket/2216" rel="nofollow">this</a> and <a href="https://github.com/numpy/numpy/pull/449" rel="nofollow">this</a>). Why adding the <code>gc</code> call that @BiRico did "fixes" it seems weird to me, though it apparently must be done right afterwards. Maybe it's an oddity in how Python garbage-collects tracebacks; if someone knows the CPython internals of exception handling and garbage collection, I would be interested. </p>
<p><strong>Workaround</strong>: This is not directly related to lists; most broadcasting Exceptions trigger it (the empty list does not fit the array's size, and an empty array produces the same leak; note that internally an Exception is prepared that never surfaces). As a workaround, if you do this a lot, you should probably check first that the shapes are compatible; otherwise I wouldn't worry, since each occurrence leaks only a small string if I got it right.</p>
<p><strong>FIXED:</strong> This issue will be fixed with numpy 1.7.</p>
|
python|arrays|list|memory-management|numpy
| 5
|
7,381
| 72,016,038
|
Sum up values based condition, if does not match keep current values
|
<p>I am looking for a way to sum up the values > or < a certain threshold in a given column (here > 6 in days_install_to_event column).</p>
<p>I tried many different ways, such a loc, query or groupby, but it return only the values > 6 not the ones < 6.</p>
<p>Here some of the things I have tried:</p>
<pre><code>df = pd.DataFrame({
    'custom_action' : ['First_puchase'] * 10,
    'days_install_to_event' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'number_unique_users' : [1350, 250, 13, 2, 1, 2, 1, 2, 3, 2]})
df
custom_action days_install_to_event number_unique_users
0 First_puchase 1 1350
1 First_puchase 2 250
2 First_puchase 3 13
3 First_puchase 4 2
4 First_puchase 5 1
5 First_puchase 6 2
6 First_puchase 7 1
7 First_puchase 8 2
8 First_puchase 9 3
9 First_puchase 10 2
df_1 = df.loc[df['days_install_to_event'] > 6].sum()
df_2 = df.query("days_install_to_event > 6")['number_unique_users'].sum()
df_1
df_2
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>custom_action First_puchaseFirst_puchase
days_install_to_event 34
number_unique_users 8
8
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>custom_action days_install_to_event number_unique_users
0 First_puchase 1 1350
1 First_puchase 2 250
2 First_puchase 3 13
3 First_puchase 4 2
4 First_puchase 5 1
5 First_puchase 6 2
6 First_puchase 7+ 8
</code></pre>
<p>In advance, sorry if a very similar question has been asked; I have been looking around for the past two days but found nothing that matches exactly what I am looking for. It may be due to my phrasing.</p>
<p>Thanks for your help :)</p>
|
<p>As far as I know there is no out-of-the-box solution for this but you can get this result by creating a helper grouper column:</p>
<pre><code># Set days_install_to_event = 7+ if the value is larger than 6
grouper = df['days_install_to_event'].mask(df['days_install_to_event'] > 6, '7+')
</code></pre>
<p>Then, with the help of this column, you can use <code>groupby.agg</code>:</p>
<pre><code>In [27]: df.groupby(grouper).agg({
'number_unique_users': 'sum',
'custom_action': 'first',
}).reset_index()
Out[27]:
days_install_to_event number_unique_users custom_action
0 1 1350 First_puchase
1 2 250 First_puchase
2 3 13 First_puchase
3 4 2 First_puchase
4 5 1 First_puchase
5 6 2 First_puchase
6 7+ 8 First_puchase
</code></pre>
|
python-3.x|pandas
| 4
|
7,382
| 72,031,056
|
is there a way that i can convert this dataframe into a datatype
|
<p>I have already cleaned a lot of the information down to this, and I feel stuck now. If anyone can give some ideas on how to proceed, I would appreciate it very much.</p>
<p><a href="https://i.stack.imgur.com/JetLJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JetLJ.png" alt="dates column" /></a></p>
<p>later i need to join with this:</p>
<p><a href="https://i.stack.imgur.com/UNLQh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UNLQh.png" alt="dates and time column" /></a></p>
<p>and a timezone abbreviation. Is there a library or something that can handle date types like this?</p>
<p>My desired output is something like this: <code>2022-05-04 18:00:00</code>, but I have this:</p>
<p><a href="https://i.stack.imgur.com/rXeJJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rXeJJ.png" alt="have this" /></a></p>
|
<p>As mentioned in the comment, you should make sure your dates are strings in order to concatenate them. You can do this like that:</p>
<pre><code>df.column_name = df.column_name.astype(str)
</code></pre>
<p>After that you can use <a href="https://docs.python.org/3/library/datetime.html" rel="nofollow noreferrer">datetime-style format codes</a> with <code>pd.to_datetime</code> to parse them as dates.</p>
<pre><code>date = 'May-02-2022'
time = '9:00 AM'
timestamp = pd.to_datetime(date + ' ' + time, format='%b-%d-%Y %I:%M %p')
print(timestamp)
>>> 2022-05-02 09:00:00
</code></pre>
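<p>To do the same thing column-wise for whole date and time columns (a sketch with made-up column names), concatenate the string columns and parse everything in one vectorized call:</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ['May-02-2022', 'May-04-2022'],
                   'time': ['9:00 AM', '6:00 PM']})

# make sure both columns are strings, then parse the combined string
df['timestamp'] = pd.to_datetime(df['date'].astype(str) + ' ' + df['time'].astype(str),
                                 format='%b-%d-%Y %I:%M %p')
```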
|
python|pandas|datetime-format
| 0
|
7,383
| 18,852,884
|
Estimate Number Of Decimal Places of Time Series
|
<p>How do I estimate the number of decimal places for the numbers used inside a pandas time series?</p>
<p>e.g. for</p>
<pre><code>x=[1.01,1.01,1.03]
</code></pre>
<p>i would want</p>
<pre><code>in[0]: estimate_decimal_places(x)
out[0] : 2
</code></pre>
<p>e.g. for</p>
<pre><code>x=[1.1,1.5,2.0]
</code></pre>
<p>i would want</p>
<pre><code>in[0]: estimate_decimal_places(x)
out[0] : 1
</code></pre>
|
<pre><code>def estimate_decimal_places(num):
    return len(str(num).split(".")[1])

x = [1.1, 1.01, 1.001]
for num in x:
    print estimate_decimal_places(num)
</code></pre>
<p>gives</p>
<pre><code>1
2
3
</code></pre>
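<p>To get a single estimate for a whole series, as the question asks, and to sidestep surprises from Python's float <code>repr</code>, here is a hedged variant (the function name is made up) using numpy's shortest positional formatting:</p>

```python
import numpy as np

def estimate_decimal_places(values):
    """Maximum number of decimal digits across a sequence of floats."""
    def places(v):
        # shortest decimal string that round-trips, trailing zeros trimmed
        s = np.format_float_positional(v, trim='-')
        return len(s.split('.')[1]) if '.' in s else 0
    return max(places(v) for v in values)
```

<p>Note this is still an estimate: values that originated from binary floats with no exact decimal form can report more places than intended.</p>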
|
python|numpy
| 2
|
7,384
| 21,995,036
|
How to shuffle together a matrix and a response vector
|
<p>I have a dataset <code>X</code>, <code>y</code> where <code>X</code> is a matrix of observation <code>n*p</code> and <code>y</code> a response vector <code>n*1</code>. </p>
<p>I would like to shuffle <code>y</code> and the rows of <code>X</code> without losing the "line by line" relation.</p>
<p>How can I do that easily using <code>numpy</code> or <code>scipy</code> or <code>sklearn</code>?</p>
|
<p>You mean you want to keep the correspondence between rows in <code>X</code> and <code>y</code>? Generate random indices and index both arrays with them:</p>
<pre><code>>>> perm = np.random.permutation(X.shape[0])
>>> X = X[perm]
>>> y = y[perm]
</code></pre>
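<p>To convince yourself that the row-by-row relation survives the shuffle, here is a small self-check (made-up data where <code>y</code> is a deterministic function of each row of <code>X</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(12).reshape(6, 2)
y = X.sum(axis=1)                 # y[i] depends only on row i of X

perm = rng.permutation(X.shape[0])
X_shuf, y_shuf = X[perm], y[perm]
```

<p>After shuffling, each <code>y_shuf[i]</code> still equals the row sum of <code>X_shuf[i]</code>, so the pairing is intact.</p>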
|
python|numpy|matrix|scipy|shuffle
| 2
|
7,385
| 22,149,584
|
What does axis in pandas mean?
|
<p>Here is my code to generate a dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
dff = pd.DataFrame(np.random.randn(1,2),columns=list('AB'))
</code></pre>
<p>then I got the dataframe:</p>
<pre><code>+------------+---------+--------+
| | A | B |
+------------+---------+---------
| 0 | 0.626386| 1.52325|
+------------+---------+--------+
</code></pre>
<p>When I type the commmand :</p>
<pre><code>dff.mean(axis=1)
</code></pre>
<p>I got :</p>
<pre><code>0 1.074821
dtype: float64
</code></pre>
<p>According to the reference of pandas, axis=1 stands for columns and I expect the result of the command to be</p>
<pre><code>A 0.626386
B 1.523255
dtype: float64
</code></pre>
<p>So here is my question: what does axis in pandas mean?</p>
|
<p>It specifies the axis <strong>along which</strong> the means are computed. By default <code>axis=0</code>. This is consistent with <code>numpy.mean</code> when <code>axis</code> is specified <em>explicitly</em> (in <code>numpy.mean</code>, <code>axis=None</code> by default, which computes the mean over the flattened array): <code>axis=0</code> computes along the <em>rows</em> (namely, the <em>index</em> in pandas), and <code>axis=1</code> along the <em>columns</em>. For added clarity, one may choose to specify <code>axis='index'</code> (instead of <code>axis=0</code>) or <code>axis='columns'</code> (instead of <code>axis=1</code>).</p>
<pre><code>+------------+---------+--------+
| | A | B |
+------------+---------+---------
| 0 | 0.626386| 1.52325|----axis=1----->
+------------+---------+--------+
| |
| axis=0 |
↓ ↓
</code></pre>
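<p>A tiny runnable illustration of the two directions (the values are arbitrary):</p>

```python
import pandas as pd

df = pd.DataFrame([[1.0, 3.0], [2.0, 5.0]], columns=list('AB'))

# axis=0 (axis='index'): collapse down the rows -> one mean per column
col_means = df.mean(axis=0)
assert col_means['A'] == 1.5 and col_means['B'] == 4.0

# axis=1 (axis='columns'): collapse across the columns -> one mean per row
row_means = df.mean(axis=1)
assert row_means[0] == 2.0 and row_means[1] == 3.5
```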
|
python|pandas|numpy|dataframe
| 488
|
7,386
| 18,204,134
|
Install python-numpy in the Virtualenv environment
|
<p>I would like to install the python-numpy in the Virtualenv environment. My system is Ubuntu 12.04, and my python is 2.7.5. First I installed the Virtualenv by </p>
<pre><code>$ sudo apt-get install python-virtualenv
</code></pre>
<p>And then set up an environment by</p>
<pre><code>$ mkdir myproject
$ cd myproject
$ virtualenv venv
New python executable in venv/bin/python
Installing distribute............done.
</code></pre>
<p>Activated it by</p>
<pre><code>$ . venv/bin/activate
</code></pre>
<p>Installed python-numpy in the environment by</p>
<pre><code>$ sudo apt-get install python-numpy
</code></pre>
<p>However, after all the steps above, when I tried to import numpy in Python inside the environment, I got "No module named numpy", whereas numpy could be imported in Python globally. I have tried removing and reinstalling it many times but it does not work. I am a beginner in both Python and Linux.</p>
|
<p><code>apt-get</code> will still install modules globally, even when you're in your new <code>virtualenv</code>.</p>
<p>You should either use <code>pip install numpy</code> from within your virtual environment (easiest way), or else compile and install <code>numpy</code> from source using the <code>setup.py</code> file in the root of the source directory (slightly harder way, <a href="http://docs.scipy.org/doc/numpy/user/install.html#building-from-source" rel="noreferrer">see here</a>).</p>
<p>I'd also thoroughly recommend you take a look at <a href="http://virtualenvwrapper.readthedocs.org/en/latest/" rel="noreferrer"><code>virtualenvwrapper</code></a>, which makes managing virtual environments much friendlier.</p>
<h2>Edit:</h2>
<p>You should <strong>not</strong> be using <code>sudo</code>, either to create your virtual environment or to install things within it - it's a directory in your home folder, you don't need elevated permissions to make changes to it. If you use <code>sudo</code>, <code>pip</code> will make changes to your global site packages, not to your virtual environment, hence why you weren't able to install <code>numpy</code> locally.</p>
<p>Another thing to consider is that <s>by default, new <code>virtualenvs</code> will inherit from the global <code>site-packages</code> - i.e. if Python can't find a module locally within your <code>virtualenv</code>, Python will also look in your global site packages</s> <strong>*</strong>. In your case, since you'd already installed <code>numpy</code> globally (using <code>apt-get</code>), when you then try to <code>pip install numpy</code> in your virtual environment, <code>pip</code> sees that <code>numpy</code> is already in your Python path and doesn't install it locally.</p>
<p>You could:</p>
<ol>
<li><p>Pass the <code>--no-site-packages</code> option when you create your <code>virtualenv</code>. This prevents the new <code>virtualenv</code> from inheriting from the global site packages, so everything must be installed locally.</p></li>
<li><p>Force <code>pip</code> to install/upgrade <code>numpy</code> locally, e.g. using <code>pip install -U --force numpy</code></p></li>
</ol>
<hr>
<p><strong>*</strong> <a href="https://github.com/pypa/virtualenv/blob/develop/docs/changes.rst#17-2011-11-30" rel="noreferrer">As of v1.7</a>, the default behaviour of <code>virtualenv</code> is to not include the global <code>site-packages</code> directory. You can override this by passing the <code>--system-site-packages</code> flag when creating a new virtual environment.</p>
|
python|numpy|ubuntu-12.04|virtualenv
| 7
|
7,387
| 17,998,285
|
Unexpected read_csv result with \W+ separator
|
<p>I have an input file I am trying to read into a pandas dataframe.
The file is space delimited, including white space before the first value.
I have tried both read_csv and read_table with a "\W+" regex as the separator. </p>
<p><code>data = pd.io.parsers.read_csv('file.txt',names=header,sep="\W+")</code></p>
<p>They read in the correct number of columns, but the values themselves are totally bogus. Has anyone else experienced this, or am I using it incorrectly?</p>
<p>I have also tried to read file line by line, create a series from <code>row.split()</code> and append the series to a dataframe, but it appears to crash due to memory.</p>
<p>Are there any other options for creating a data frame from a file?</p>
<p>I am using Pandas v0.11.0, Python 2.7</p>
|
<p>The regex <code>'\W'</code> means "not a word character" (a "word character" being letters, digits, and underscores), see the <a href="http://docs.python.org/2/library/re.html" rel="nofollow">re docs</a>, hence the strange results. I think you meant to use whitespace <code>'\s+'</code>.</p>
<p>Note: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html" rel="nofollow"><code>read_csv</code></a> offers a <code>delim_whitespace</code> argument (which you can set to True), but personally I prefer to use <code>'\s+'</code>.</p>
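<p>A minimal, runnable sketch of the fix using an in-memory file (the column names here are made up):</p>

```python
import io
import pandas as pd

# space-delimited data with leading whitespace, as in the question
data = "  1 2 3\n  4 5 6\n"
df = pd.read_csv(io.StringIO(data), sep=r"\s+", names=["a", "b", "c"])

assert df.shape == (2, 3)
assert df["a"].tolist() == [1, 4]
```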
|
python|pandas
| 2
|
7,388
| 55,564,896
|
Pandas Python - Grouping counts to others
|
<p>I am conducting data analysis for a project using python and pandas where I have the following data:</p>
<p>The numbers are the counts.</p>
<pre><code>USA: 5000
Canada: 7000
UK: 6000
France: 6500
Spain: 4000
Japan: 5
China: 7
Hong Kong: 10
Taiwan: 6
New Zealand: 8
South Africa: 11
</code></pre>
<p>My task is to make a pie chart that represents these counts.</p>
<p><code>df['Country'].value_counts().plot.pie()</code></p>
<p>What I get is a pie chart, but I would like to combine the countries with smaller counts into a single category such as <code>other</code>.</p>
<p>How can I do that?</p>
|
<p>IIUC, you can use <code>np.where</code> to set the boundary, then <code>groupby</code> + <code>sum</code>. Notice that here I am using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.groupby.html" rel="nofollow noreferrer"><code>pandas.Series.groupby</code></a>:</p>
<pre><code>s=df['Country'].value_counts()
s.groupby(np.where(s>=4000,s.index,'other')).sum()#.plot.pie()
Out[64]:
Canada 7000
France 6500
Spain 4000
UK 6000
USA 5000
other 47
</code></pre>
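<p>A self-contained sketch of this approach, starting directly from the counts shown in the question (using <code>'other'</code> as the catch-all label):</p>

```python
import numpy as np
import pandas as pd

counts = pd.Series({'USA': 5000, 'Canada': 7000, 'UK': 6000, 'France': 6500,
                    'Spain': 4000, 'Japan': 5, 'China': 7, 'Hong Kong': 10,
                    'Taiwan': 6, 'New Zealand': 8, 'South Africa': 11})

# relabel every country below the threshold as 'other', then sum per label
grouped = counts.groupby(np.where(counts >= 4000, counts.index, 'other')).sum()

assert grouped['other'] == 47          # 5 + 7 + 10 + 6 + 8 + 11
assert grouped.sum() == counts.sum()   # nothing is lost, only relabelled
# grouped.plot.pie() would now draw the combined chart
```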
|
python|pandas
| 3
|
7,389
| 55,214,715
|
Couldn't parse file content
|
<p>When I ran my Python code, I got the error message below. Can anyone help?</p>
<pre><code>Traceback (most recent call last):
File "/home/yangjy/PycharmProjects/mmi_anti_pytorch-master/language_model/lm.py", line 8, in <module>
import tensor
File "/home/yangjy/anaconda3/lib/python3.6/site-packages/tensor/__init__.py", line 7, in <module>
from tensor import service
File "/home/yangjy/anaconda3/lib/python3.6/site-packages/tensor/service.py", line 14, in <module>
from tensor.protocol import riemann
File "/home/yangjy/anaconda3/lib/python3.6/site-packages/tensor/protocol/riemann.py", line 1, in <module>
from tensor.ihateprotobuf import proto_pb2
File "/home/yangjy/anaconda3/lib/python3.6/site-packages/tensor/ihateprotobuf/proto_pb2.py", line 16, in <module>
serialized_pb=b'\n\x0bproto.proto\"\x81\x01\n\x05State\x12\x0c\n\x04time\x18\x01 \x01(\x03\x12\r\n\x05state\x18\x02 \x01(\t\x12\x0f\n\x07service\x18\x03 \x01(\t\x12\x0c\n\x04host\x18\x04 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x05 \x01(\t\x12\x0c\n\x04once\x18\x06 \x01(\x08\x12\x0c\n\x04tags\x18\x07 \x03(\t\x12\x0b\n\x03ttl\x18\x08 \x01(\x02\"\xce\x01\n\x05\x45vent\x12\x0c\n\x04time\x18\x01 \x01(\x03\x12\r\n\x05state\x18\x02 \x01(\t\x12\x0f\n\x07service\x18\x03 \x01(\t\x12\x0c\n\x04host\x18\x04 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x05 \x01(\t\x12\x0c\n\x04tags\x18\x07 \x03(\t\x12\x0b\n\x03ttl\x18\x08 \x01(\x02\x12\x1e\n\nattributes\x18\t \x03(\x0b\x32\n.Attribute\x12\x15\n\rmetric_sint64\x18\r \x01(\x12\x12\x10\n\x08metric_d\x18\x0e \x01(\x01\x12\x10\n\x08metric_f\x18\x0f \x01(\x02\"\x17\n\x05Query\x12\x0e\n\x06unicodeing\x18\x01 \x01(\t\"g\n\x03Msg\x12\n\n\x02ok\x18\x02 \x01(\x08\x12\r\n\x05\x65rror\x18\x03 \x01(\t\x12\x16\n\x06states\x18\x04 \x03(\x0b\x32\x06.State\x12\x15\n\x05query\x18\x05 \x01(\x0b\x32\x06.Query\x12\x16\n\x06\x65vents\x18\x06 \x03(\x0b\x32\x06.Event\"\'\n\tAttribute\x12\x0b\n\x03key\x18\x01 \x02(\t\x12\r\n\x05value\x18\x02 \x01(\tB\x1a\n\x11\x63om.aphyr.riemannB\x05Proto')
File "/home/yangjy/anaconda3/lib/python3.6/site-packages/google/protobuf/descriptor.py", line 878, in __new__
return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't parse file content!
</code></pre>
|
<p>I had the same issue; it turned out that my source proto file was importing the wrong proto files.</p>
<p>After pointing the imports at the right files, it was fixed.</p>
<pre><code>syntax = "proto3";
import "tensorflow/path_to_correct_files/file1.proto";
import "tensorflow/path_to_correct_files/file1.proto";
</code></pre>
|
python|tensorflow
| 0
|
7,390
| 56,512,330
|
Pytorch: how to repeat a parameter matrix into a bigger one along both dimensions?
|
<p>What is the simplest syntax to transform 2D parameter tensor</p>
<pre><code>A B
C D
</code></pre>
<p>into</p>
<pre><code>A A B B
A A B B
C C D D
C C D D
</code></pre>
<p>Note they are parameter tensors, so I need autograd to backpropagate gradients from the latter into the former.
Thanks!</p>
|
<p>using einops (same code works with numpy and pytorch):</p>
<pre><code>z = einops.repeat(x, 'i j -> (i 2) (j 2)')
</code></pre>
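<p>If einops is not available, the same block expansion can be sketched with plain <code>np.repeat</code> (shown here in numpy for brevity; <code>torch.repeat_interleave</code> behaves analogously and is differentiable, so gradients still flow back to the original parameters):</p>

```python
import numpy as np

x = np.array([['A', 'B'],
              ['C', 'D']])

# repeat each row twice, then each column twice
z = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

assert z.shape == (4, 4)
assert z[0].tolist() == ['A', 'A', 'B', 'B']
assert z[3].tolist() == ['C', 'C', 'D', 'D']
```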
|
pytorch
| 0
|
7,391
| 56,447,078
|
How to read bulk excel file data and load into spark dataframe in Databricks
|
<p>I want to read bulk Excel data which contains 800k records and 230 columns. I have read the data using both Spark and pandas dataframes, but while reading the data using a Spark dataframe I'm getting the following message.</p>
<blockquote>
<p>Message: The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.</p>
</blockquote>
<p>I have used the below code with Spark.</p>
<pre><code>df=spark.read.format("com.crealytics.spark.excel").option("useheader","true").option("treatEmptyValuesAsNulls","true").option("inferSchema", "true").option("addColorColumns", "False").option("location","/dbfs/FileStore/test/abc.xlsx").load()
</code></pre>
<p>Using Scala:</p>
<pre><code>import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.DataFrame

def readExcel(file: String): DataFrame = sqlContext.read
  .format("com.crealytics.spark.excel")
  .option("location", file)
  .option("useHeader", "true")
  .option("treatEmptyValuesAsNulls", "true")
  .option("inferSchema", "true")
  .option("addColorColumns", "False")
  .load()

val data = readExcel("/dbfs/test/abc.xlsx")
data.show(false)
</code></pre>
|
<p>Two things you can do is increase your memory on the cluster or use the max rows in memory option on the excel library to help stream a set amount of data:</p>
<p><code>.option("maxRowsInMemory", 20)</code></p>
|
python-3.x|pandas|pyspark|azure-databricks
| 0
|
7,392
| 56,771,162
|
Concatenating two columns in pandas dataframe without adding extra spaces at the end when the second column contains NaN/empty strings
|
<p>I have the following pandas data frame:</p>
<pre><code>> print(tpl_subset)
>
Fullname Infrasp Authorship
Lilium abchasicum NaN Baker
Lilium affine NaN Schult. & Schult.f.
Lilium akkusianum NaN Gämperle
Lilium albanicum NaN Griseb.
Lilium albanicum subsp. jankae (A.Kern.) Nyman
Lilium albiflorum NaN Hook.
Lilium album NaN Houtt.
Lilium amabile var. flavum Y.N.Lee
Lilium amabile var. immaculatum T.B.Lee
Lilium amabile var. kwangnungensis Y.S.Kim & W.B.Lee
... ... ...
</code></pre>
<p>I am trying to concatenate the first two columns into a new one only if the value of the second column is not <code>NaN</code>. </p>
<p>What I have been doing so far is simply concatenating the two columns while replacing <code>NaN</code> by an empty string. </p>
<pre class="lang-py prettyprint-override"><code>tpl_subset['Tmp'] = tpl_subset['Fullname'] + ' ' + tpl_subset['Infrasp'].fillna('')
</code></pre>
<p>The problem is that I end up with unwanted whitespaces at the end of the string when the value of the second column is <code>NaN</code> (e.g. <code>'Lilium abchasicum'</code> becomes <code>'Lilium abchasicum '</code>), which forces me to do extra steps to remove them. </p>
<p>These steps will be repeated hundreds of times on datasets containing hundreds of thousands of rows each, so I'm looking for something efficient in terms of performance. Using a <code>for</code> loop with an <code>if else</code> statement is not an option.</p>
<p><strong>Q.:</strong> is there an efficient and more direct way to do this?</p>
<p>The desired column output is:</p>
<pre><code> Tmp
Lilium abchasicum
Lilium affine
Lilium akkusianum
Lilium albanicum
Lilium albanicum subsp. jankae
Lilium albiflorum
Lilium album
Lilium amabile var. flavum
Lilium amabile var. immaculatum
Lilium amabile var. kwangnungensis
</code></pre>
<p><strong>Edit:</strong> </p>
<p>A quick performance comparison between <code>numpy.where()</code> and <code>radd(' ').fillna('')</code> on the whole dataset (~1.2 millions rows):</p>
<pre><code>In:
import timeit
s = '''
import pandas
import numpy as np
tpl_data = pandas.read_csv('~/phd/Data/TPL/tpl_all_species.csv', sep = '\t')
tpl_fn = tpl_data['Fullname']
tpl_inf = tpl_data['Infrasp']
tpl_concat = tpl_fn + ' ' + tpl_inf
'''
tmp1 = "tpl_data['tmp1'] = np.where(tpl_inf.isnull(), tpl_fn, tpl_concat)"
tmp2 = "tpl_data['tmp2'] = (tpl_fn + tpl_inf.radd(' ').fillna(''))"
print('np.where():', timeit.Timer(tmp1, setup = s).repeat(repeat = 3, number = 10))
print('radd():', timeit.Timer(tmp2, setup = s).repeat(repeat = 3, number = 10))
Out:
np.where(): [0.7466984760000002, 0.7332379689999993, 0.7483021389999998]
radd(): [2.2832963809999995, 2.320076223000001, 2.299452007000003]
</code></pre>
|
<p>Or use <code>np.where</code>:</p>
<pre><code>df['Tmp'] = np.where(df['Infrasp'].isnull(), df['Fullname'], df['Fullname'] + ' ' + df['Infrasp'])
</code></pre>
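<p>A small runnable sketch of the <code>np.where</code> approach on a subset of the data, confirming that no trailing spaces are produced:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Fullname': ['Lilium abchasicum', 'Lilium amabile'],
                   'Infrasp': [np.nan, 'var. flavum']})

# only append the second column when it is present
df['Tmp'] = np.where(df['Infrasp'].isnull(),
                     df['Fullname'],
                     df['Fullname'] + ' ' + df['Infrasp'])

assert df['Tmp'].tolist() == ['Lilium abchasicum', 'Lilium amabile var. flavum']
assert not df['Tmp'].str.endswith(' ').any()   # no trailing whitespace
```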
|
python|pandas
| 3
|
7,393
| 56,485,097
|
How to change the format of a number in a pandas column?
|
<p>I have a large DataFrame of numbers but each individual number follows a different format. I want to use a regular expression to replace a large amount of them with a 111-111-1111 format</p>
<pre><code>numbers["numbers"].replace('^(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}$', "/*/*/*-/*/*/*-/*/*/*/*", regex=True)
</code></pre>
<p>It should take a number found by the expression and keep the base number but change its format: 1234567890 should become 123-456-7890.</p>
|
<p>You may use</p>
<pre><code>df["numbers"] = df["numbers"].str.replace(r'^(?:\+\d{1,2}\s)?\(?(\d{3})\)?[\s.-]?(\d{3})[\s.-]?(\d{4})$', r'\1-\2-\3', regex=True)
</code></pre>
<p><strong>Details</strong></p>
<ul>
<li><code>^</code> - start of string</li>
<li><code>(?:\+\d{1,2}\s)?</code> - an optional sequence of</li>
<li><code>\(?</code> - an optional <code>(</code></li>
<li><code>(\d{3})</code> - Group 1: three digits</li>
<li><code>\)?</code> - an optional <code>)</code></li>
<li><code>[\s.-]?</code> - an optional whitespace, <code>.</code> or <code>-</code> </li>
<li><code>(\d{3})</code> - Group 2: three digits</li>
<li><code>[\s.-]?</code> - an optional whitespace, <code>.</code> or <code>-</code> </li>
<li><code>(\d{4})</code> - Group 3: four digits</li>
<li><code>$</code> - end of string.</li>
</ul>
<p>The numbered backreferences in the replacement pattern (<code>r'\1-\2-\3'</code>) are placeholders for the values captured by the corresponding groups.</p>
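<p>The same pattern can be sanity-checked with the standard <code>re</code> module before applying it to the whole column (the test inputs are my own):</p>

```python
import re

pattern = r'^(?:\+\d{1,2}\s)?\(?(\d{3})\)?[\s.-]?(\d{3})[\s.-]?(\d{4})$'

# several input formats all normalise to 111-111-1111 style
for raw in ['1234567890', '(123) 456-7890', '+1 123.456.7890']:
    assert re.sub(pattern, r'\1-\2-\3', raw) == '123-456-7890'
```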
|
python|regex|python-3.x|pandas
| 0
|
7,394
| 56,650,308
|
Get min values between hours for each day
|
<p>I have a pandas time series dataframe with a value for each hour of the day over an extended period, like this:</p>
<pre><code> value
datetime
2018-01-01 00:00:00 38
2018-01-01 01:00:00 31
2018-01-01 02:00:00 78
2018-01-01 03:00:00 82
2018-01-01 04:00:00 83
2018-01-01 05:00:00 95
...
</code></pre>
<p>I want to create a new dataframe with the minimum value between hours 01:00 - 04:00 for each day but can't figure out how to do this. The closest I can think of is:</p>
<pre><code>df2 = df.groupby([pd.Grouper(freq='d'), df.between_time('01:00', '04:00')]).min()))
</code></pre>
<p>but that gives me:</p>
<blockquote>
<p>ValueError: Grouper for '' not 1-dimensional</p>
</blockquote>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>DataFrame.between_time</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>DataFrame.resample</code></a>:</p>
<pre><code>df = df.between_time('01:00', '04:00').resample('d').min()
print (df)
value
datetime
2018-01-01 31
</code></pre>
<p>Your solution is very close, only chain functions differently:</p>
<pre><code>df = df.between_time('01:00', '04:00').groupby(pd.Grouper(freq='d')).min()
print (df)
value
datetime
2018-01-01 31
</code></pre>
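<p>A runnable sketch using the sample data from the question:</p>

```python
import pandas as pd

idx = pd.to_datetime(['2018-01-01 00:00', '2018-01-01 01:00', '2018-01-01 02:00',
                      '2018-01-01 03:00', '2018-01-01 04:00', '2018-01-01 05:00'])
df = pd.DataFrame({'value': [38, 31, 78, 82, 83, 95]}, index=idx)

# keep only the 01:00-04:00 window, then take the daily minimum
daily_min = df.between_time('01:00', '04:00').resample('D').min()
assert daily_min['value'].iloc[0] == 31
```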
|
python|pandas
| 2
|
7,395
| 25,574,976
|
Taking a tensor product in python numpy without performing a sum
|
<p>I have two tensors, each 2D, with a common long axis (e.g. 20,000) and different short axes (e.g. one 9, the other 10). I want to end up with a 9x10x20000 tensor, such that for each location on the long axis, the other two axes hold the tensor product.
Explicitly, with the "long" axis of length 4 here, I want to do:</p>
<pre><code>A = np.arange(8).reshape(2,4)
B = np.arange(12).reshape(3,4)
C = np.zeros((2,3,4))
for i in range(2):
    for j in range(3):
        for k in range(4):
            C[i,j,k] = A[i,k]*B[j,k]
</code></pre>
<p>This code works, but I was wondering: is there a numpy way of doing this, without running for loops?</p>
<p>The context is for training a neural net, with the long axis being the training examples. I get a formula of this form when calculating gradients of the cost function.
Cheers,
Leo</p>
|
<p>I tend to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> for these problems, which makes it easy to specify what should happen in terms of the indices:</p>
<pre><code>>>> A = np.arange(8).reshape(2,4)
>>> B = np.arange(12).reshape(3,4)
>>> np.einsum('ik,jk->ijk', A, B)
array([[[ 0, 1, 4, 9],
[ 0, 5, 12, 21],
[ 0, 9, 20, 33]],
[[ 0, 5, 12, 21],
[16, 25, 36, 49],
[32, 45, 60, 77]]])
</code></pre>
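<p>For this particular product, plain broadcasting gives the same result as <code>einsum</code> (a sketch using the arrays from the question):</p>

```python
import numpy as np

A = np.arange(8).reshape(2, 4)
B = np.arange(12).reshape(3, 4)

C_einsum = np.einsum('ik,jk->ijk', A, B)
# insert singleton axes so the shapes broadcast: (2,1,4) * (1,3,4) -> (2,3,4)
C_broadcast = A[:, None, :] * B[None, :, :]

assert C_broadcast.shape == (2, 3, 4)
assert np.array_equal(C_einsum, C_broadcast)
```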
|
python|numpy
| 3
|
7,396
| 25,471,180
|
Disable silent conversions in numpy
|
<p>Is there a way to disable silent conversions in numpy?</p>
<pre><code>import numpy as np
a = np.empty(10, int)
a[2] = 4 # OK
a[3] = 4.9 # Will silently convert to 4, but I would prefer a TypeError
a[4] = 4j # TypeError: can't convert complex to long
</code></pre>
<p>Can <code>numpy.ndarray</code> objects be configured to raise a <code>TypeError</code> when assigning any value which is not an <code>isinstance()</code> of the ndarray's type?
If not, would the best alternative be to subclass <code>numpy.ndarray</code> (and override <code>__setattr__</code> or <code>__setitem__</code>)?</p>
|
<p>Unfortunately <code>numpy</code> doesn't offer this feature in array creation, you can set if casting is allowed only when you are converting an array (check the documentation for <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html" rel="nofollow"><code>numpy.ndarray.astype</code></a>).</p>
<p>You could use that feature, or subclass <code>numpy.ndarray</code>, but also consider using the <a href="https://docs.python.org/2/library/array.html" rel="nofollow"><code>array</code></a> module offered by python itself to create a typed array:</p>
<pre><code>from array import array
a = array('i', [0] * 10)
a[2] = 4 # OK
a[3] = 4.9 # TypeError: integer argument expected, got float
</code></pre>
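<p>Relatedly, the casting rule mentioned above can be enforced at conversion time: <code>astype(..., casting='safe')</code> raises instead of silently truncating (a sketch):</p>

```python
import numpy as np

# float64 -> int64 is not a 'safe' cast, so numpy will refuse it
assert not np.can_cast(np.float64, np.int64, casting='safe')

a = np.array([4.9])
try:
    a.astype(np.int64, casting='safe')
    raised = False
except TypeError:
    raised = True
assert raised
```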
|
python|numpy
| 2
|
7,397
| 66,815,863
|
What is the use of tensorflow backend utilities?
|
<p>In order to create custom loss functions, I have seen many people use backend functionality from TensorFlow.<br />
For example, suppose we want to create a custom loss function that takes <code>y_true</code> and <code>y_pred</code> as input and computes the mean of the squared difference between them, i.e. <code>MSE</code>.</p>
<p><strong>Code:</strong></p>
<pre><code>import tensorflow as tf
import tensorflow.keras.backend as kb
import numpy as np

y_true = np.random.uniform(0,1,(100,))
y_pred = np.random.uniform(0,1,(100,))

def custom_mean_squared_error(y_true, y_pred):
    return tf.math.reduce_mean(tf.square(y_true - y_pred))

def custom_loss(y_actual,y_pred):
    custom_loss=kb.mean(kb.square(y_actual-y_pred))
    return custom_loss

print(custom_mean_squared_error(y_true, y_pred))
print(custom_loss(y_true, y_pred))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>tf.Tensor(0.15846486198115287, shape=(), dtype=float64)
tf.Tensor(0.15846486198115287, shape=(), dtype=float64)
</code></pre>
<p>Why is the loss function created using backend utilities instead of normal TensorFlow utilities?<br />
What is the use of backend utilities, apart from providing functions that are not present in normal TensorFlow?</p>
|
<p><code>tf.keras.backend</code> comes from the time when <code>keras</code> was a standalone library that was able to use different libraries such as TensorFlow, Theano or CNTK as a "backend", i.e. as the library that would perform the computations. The goal was to be able to write the same <code>keras</code> code that would be able to run on different libraries.</p>
<p>Or, if I quote the documentation of keras 2.3:</p>
<blockquote>
<h3>What is a "backend"?</h3>
<p>Keras is a model-level library, providing high-level building blocks for developing deep learning models. It does not handle itself low-level operations such as tensor products, convolutions and so on. Instead, it relies on a specialized, well-optimized tensor manipulation library to do so, serving as the "backend engine" of Keras. Rather than picking one single tensor library and making the implementation of Keras tied to that library, Keras handles the problem in a modular way, and several different backend engines can be plugged seamlessly into Keras.</p>
</blockquote>
<p>The <code>backend</code> module was then written to handle a generic API for different libraries that might have different names and slightly different behaviour for the same operations.</p>
<p>As of today (2021-03-26), keras 2.4 does not support multiple backends anymore. The <code>tf.keras.backend</code> module of TensorFlow 2.4 contains a limited amount of functions, so it is encouraged to migrate from <code>tf.keras.backend</code> to "normal" TensorFlow functions.</p>
|
python|tensorflow
| 0
|
7,398
| 67,106,137
|
cannot convert string to numbers in pandas.read_excel
|
<p><strong>Issue</strong></p>
<p>I have an excel file in German format. It looks like this
<a href="https://i.stack.imgur.com/I8Ady.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I8Ady.png" alt="enter image description here" /></a></p>
<p>I want to read the first column as numbers into pandas using the following code:</p>
<pre><code>import pandas as pd
import numpy as np
tmp = pd.read_excel("test.xlsx", dtype = {"col1": np.float64})
</code></pre>
<p>It gives me the error</p>
<pre><code>ValueError: Unable to convert column col1 to type <class 'numpy.float64'>
</code></pre>
<p>The issue is in Excel. If I manually change <code>col1</code> to number format, it solves the issue. See this new excel file:
<a href="https://i.stack.imgur.com/0IaAN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IaAN.png" alt="enter image description here" /></a></p>
<p><strong>Approach</strong></p>
<p>I can first read <code>col1</code> as an object into pandas, then replace <code>,</code> with <code>.</code>, and finally convert the string to float.</p>
<p><strong>However</strong></p>
<p>The approach is tedious. How can I solve this problem more efficiently?</p>
|
<p>Unfortunately, there is no way to tell pandas which decimal separator Excel is using.</p>
<p>What you could do, though, is create a function to do the conversion and pass it to <code>read_excel</code> via the <code>converters</code> argument.</p>
<pre><code>def fix_decimal(num):
    # convert a numeric value with a comma as decimal separator to float
    return float(num.replace(',', '.')) if num else 0

tmp = pd.read_excel("test.xlsx", converters={0: fix_decimal})
</code></pre>
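<p>The converter itself can be sanity-checked without an Excel file (a sketch; the empty-string and non-string fallbacks are my assumptions, not part of the original answer):</p>

```python
def fix_decimal(num):
    # hypothetical converter: comma-decimal string -> float; empty cells -> 0.0
    if num in (None, ''):
        return 0.0
    if isinstance(num, str):
        return float(num.replace(',', '.'))
    return float(num)

assert fix_decimal('1,5') == 1.5
assert fix_decimal('') == 0.0
assert fix_decimal(2) == 2.0
```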
|
python|pandas
| 2
|
7,399
| 47,340,607
|
Tensorflow GradientBoostedDecisionTreeClassifier error : "Dense float feature must be a matrix"
|
<p>I am getting the error:</p>
<p><strong>“tensorflow.python.framework.errors_impl.InvalidArgumentError: Dense float feature must be a matrix.”</strong> when training with estimator <strong>tensorflow.contrib.boosted_trees.estimator_batch.estimator.GradientBoostedDecisionTreeClassifier</strong>. I am using Tensorflow version 1.4.0. The same code works correctly if I change estimator to tf.contrib.learn.DNNClassifier. In the code, dictionary of features are passed in “Train_input_fn” in tf.contrib.learn.Experiment.</p>
<p>Anyone faced similar error before?</p>
<pre><code>#'tensorflow==1.4.0'
import tensorflow as tf
import argparse
import sys
import os
from tensorflow.contrib.boosted_trees.estimator_batch.estimator import GradientBoostedDecisionTreeClassifier
from tensorflow.contrib.boosted_trees.proto import learner_pb2
from tensorflow_transform.tf_metadata import metadata_io
from tensorflow_transform.saved import input_fn_maker
from tensorflow.contrib.learn.python.learn import learn_runner

RAW_METADATA_DIR="raw_metadata"
CONTRACTED_METADATA_DIR="contracted_metadata"
TRANSFORMED_METADATA_DIR="transformed_metadata"
TRANSFORMED_TRAIN_DATA_FILE_PREFIX="train"
TRANSFORMED_EVAL_DATA_FILE_PREFIX="eval"
DATA_FILE_SUFFIX=".tfrecord.gz"
TRANSFORM_FN_DIR="transform_fn"
TARGET_FEATURE_COLUMN='target_field'

FEATURE_NUMERICAL_COLUMN_NAMES = [
    'feature1',
    'feature2',
    'feature3',
    'feature4',
    'feature5'
]

FEATURE_INTEGER_COLUMN_NAMES = [  # comment out fields that are not features
    'feature6',
    'feature7',
    'feature8',
    'feature9',
    'feature10'
]

def _parse_arguments(argv):
    """Parses command line arguments."""
    parser = argparse.ArgumentParser(
        description="Runs training on data.")
    parser.add_argument(
        "--model_dir", required=True, type=str,
        help="The directory where model outputs will be written")
    parser.add_argument(
        "--input_dir", required=True, type=str,
        help=("GCS or local directory containing tensorflow-transform outputs."))
    parser.add_argument(
        "--batch_size", default=30, required=False, type=int,
        help=("Batch size to use during training."))
    parser.add_argument(
        "--num_epochs", default=100, required=False, type=int,
        help=("Number of epochs through the training set"))
    args, _ = parser.parse_known_args(args=argv[1:])
    return args

def get_eval_metrics():
    return {
        "accuracy":
            tf.contrib.learn.MetricSpec(
                metric_fn=tf.contrib.metrics.streaming_accuracy,
                prediction_key=tf.contrib.learn.PredictionKey.CLASSES),
        "precision":
            tf.contrib.learn.MetricSpec(
                metric_fn=tf.contrib.metrics.streaming_precision,
                prediction_key=tf.contrib.learn.PredictionKey.CLASSES),
        "recall":
            tf.contrib.learn.MetricSpec(
                metric_fn=tf.contrib.metrics.streaming_recall,
                prediction_key=tf.contrib.learn.PredictionKey.CLASSES)
    }

def read_and_decode_single_record(input_dir, num_epochs,
                                  mode=tf.contrib.learn.ModeKeys.TRAIN):
    if mode == tf.contrib.learn.ModeKeys.TRAIN:
        num_epochs = num_epochs
        file_prefix = TRANSFORMED_TRAIN_DATA_FILE_PREFIX
    else:
        num_epochs = 1
        file_prefix = TRANSFORMED_EVAL_DATA_FILE_PREFIX

    transformed_metadata = metadata_io.read_metadata(
        os.path.join(input_dir, TRANSFORMED_METADATA_DIR))
    input_file_names = tf.train.match_filenames_once(
        os.path.join(input_dir, '{}*{}'.format(file_prefix, DATA_FILE_SUFFIX)))
    filename_queue = tf.train.string_input_producer(
        input_file_names, num_epochs=num_epochs, shuffle=True)
    reader = tf.TFRecordReader(options=tf.python_io.TFRecordOptions(
        tf.python_io.TFRecordCompressionType.GZIP))
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized=serialized_example,
        features=transformed_metadata.schema.as_feature_spec()
    )
    return features

def read_dataset(input_dir, num_epochs, batch_size, mode=tf.contrib.learn.ModeKeys.TRAIN):
    def _input_fn():
        min_after_dequeue = 10000
        features = read_and_decode_single_record(input_dir, num_epochs, mode)
        features = tf.train.shuffle_batch(
            tensors=features,
            batch_size=batch_size,
            min_after_dequeue=min_after_dequeue,
            capacity=(min_after_dequeue + 3) * batch_size)
        target = features.pop(TARGET_FEATURE_COLUMN)
        return features, target
    return _input_fn

def specify_feature_columns():
    feature_columns = [
        tf.contrib.layers.real_valued_column(column_name=column_name)
        for column_name in FEATURE_NUMERICAL_COLUMN_NAMES]
    feature_columns.extend([
        tf.contrib.layers.real_valued_column(column_name=column_name)
        for column_name in FEATURE_INTEGER_COLUMN_NAMES])
    return feature_columns

def build_estimator(model_dir, config, params):
    print "Using gradient boosted decision trees estimator \n"
    learner_config = learner_pb2.LearnerConfig()
    learner_config.learning_rate_tuner.fixed.learning_rate = 0.1
    learner_config.regularization.l1 = 0.0
    learner_config.regularization.l2 = 4.0 / params.batch_size
    learner_config.constraints.max_tree_depth = 4
    learner_config.growing_mode = learner_pb2.LearnerConfig.WHOLE_TREE
    return GradientBoostedDecisionTreeClassifier(
        learner_config=learner_config,
        examples_per_layer=params.batch_size,
        num_trees=100,
        center_bias=False,
        feature_columns=specify_feature_columns()
        # feature_engineering_fn=feature_engineering_fn
    )

def get_experiment_fn(args):
    config = tf.contrib.learn.RunConfig(save_checkpoints_steps=1000)
    def experiment_fn(output_dir):
        return tf.contrib.learn.Experiment(
            estimator=build_estimator(model_dir=output_dir,
                                      config=config,
                                      params=args),
            train_input_fn=read_dataset(args.input_dir,
                                        args.num_epochs, args.batch_size,
                                        mode=tf.contrib.learn.ModeKeys.TRAIN),
            eval_input_fn=read_dataset(args.input_dir,
                                       args.num_epochs, args.batch_size,
                                       mode=tf.contrib.learn.ModeKeys.EVAL),
            eval_metrics=get_eval_metrics())
    return experiment_fn

def run(args):
    learn_runner.run(get_experiment_fn(args), args.model_dir)

if __name__ == '__main__':
    args = _parse_arguments(sys.argv)
    run(args)
</code></pre>
<p>The full error trace:</p>
<pre><code>WARNING:tensorflow:Using temporary folder as model directory: /var/folders/mg/sd4_qlyj4_lbh5ggfn6frvcr00fk8_/T/tmpPFhins
WARNING:tensorflow:From /Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py:267: __init__ (from tensorflow.contrib.learn.python.learn.monitors) is deprecated and will be removed after 2016-12-05.
Instructions for updating:
Monitors are deprecated. Please use tf.train.SessionRunHook.
WARNING:tensorflow:Casting <dtype: 'int64'> labels to bool.
WARNING:tensorflow:Casting <dtype: 'int64'> labels to bool.
WARNING:tensorflow:Error encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'_Resource' object has no attribute 'name'
2017-11-16 13:38:39.919664: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
WARNING:tensorflow:Error encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'_Resource' object has no attribute 'name'
2017-11-16 13:38:48.810825: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: Dense float feature must be a matrix.
2017-11-16 13:38:48.810825: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: Dense float feature must be a matrix.
Traceback (most recent call last):
File "./trainer/task.py", line 162, in <module>
run(args)
File "./trainer/task.py", line 157, in run
learn_runner.run(get_experiment_fn(args), args.model_dir)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 218, in run
return _execute_schedule(experiment, schedule)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 46, in _execute_schedule
return task()
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 625, in train_and_evaluate
self.train(delay_secs=0)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 367, in train
hooks=self._train_monitors + extra_hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 812, in _call_train
monitors=hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
return func(*args, **kwargs)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 480, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1040, in _train_model
_, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss])
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 521, in run
run_metadata=run_metadata)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 892, in run
run_metadata=run_metadata)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 967, in run
raise six.reraise(*original_exc_info)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 952, in run
return self._sess.run(*args, **kwargs)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 1024, in run
run_metadata=run_metadata)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 827, in run
return self._sess.run(*args, **kwargs)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dense float feature must be a matrix.
[[Node: gbdt_1/GradientTreesPartitionExamples = GradientTreesPartitionExamples[num_dense_float_features=10, num_sparse_float_features=0, num_sparse_int_features=0, use_locking=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ensemble_model, shuffle_batch:16, shuffle_batch:18, shuffle_batch:20, shuffle_batch:21, shuffle_batch:22, shuffle_batch:23, shuffle_batch:24, shuffle_batch:25, shuffle_batch:26, shuffle_batch:27, ^gbdt_1/TreeEnsembleStats)]]
Caused by op u'gbdt_1/GradientTreesPartitionExamples', defined at:
File "./trainer/task.py", line 162, in <module>
run(args)
File "./trainer/task.py", line 157, in run
learn_runner.run(get_experiment_fn(args), args.model_dir)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 218, in run
return _execute_schedule(experiment, schedule)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 46, in _execute_schedule
return task()
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 625, in train_and_evaluate
self.train(delay_secs=0)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 367, in train
hooks=self._train_monitors + extra_hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 812, in _call_train
monitors=hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
return func(*args, **kwargs)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 480, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 986, in _train_model
model_fn_ops = self._get_train_ops(features, labels)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1202, in _get_train_ops
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1166, in _call_model_fn
model_fn_results = self._model_fn(features, labels, **kwargs)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/boosted_trees/estimator_batch/model.py", line 98, in model_builder
predictions_dict = gbdt_model.predict(mode)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch.py", line 463, in predict
ensemble_stamp, mode)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch.py", line 392, in _predict_and_return_dict
use_locking=True)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/contrib/boosted_trees/python/ops/gen_prediction_ops.py", line 117, in gradient_trees_partition_examples
use_locking=use_locking, name=name)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/Users/amolsharma/anaconda/envs/oldpython/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Dense float feature must be a matrix.
[[Node: gbdt_1/GradientTreesPartitionExamples = GradientTreesPartitionExamples[num_dense_float_features=10, num_sparse_float_features=0, num_sparse_int_features=0, use_locking=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ensemble_model, shuffle_batch:16, shuffle_batch:18, shuffle_batch:20, shuffle_batch:21, shuffle_batch:22, shuffle_batch:23, shuffle_batch:24, shuffle_batch:25, shuffle_batch:26, shuffle_batch:27, ^gbdt_1/TreeEnsembleStats)]]
</code></pre>
|
<p>I am guessing that the parsing spec created by tf.transform is different from what we normally get.
Can you share the output of <code>transformed_metadata.schema.as_feature_spec()</code>?</p>
<p>As a work-around, try adding this line to your <code>input_fn</code> after <code>features = tf.train.shuffle_batch(...)</code>:</p>
<pre><code>features = {feature_name: tf.reshape(feature_value, [-1, 1])
            for feature_name, feature_value in features.items()}
</code></pre>
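<p>To see why this fixes the "Dense float feature must be a matrix" error: the boosted-trees op wants each dense float feature as a rank-2 tensor of shape <code>[batch_size, 1]</code>, while the batched input arrives as a rank-1 vector of shape <code>[batch_size]</code>. A minimal TF-free sketch of the same shape change, with NumPy arrays standing in for tensors and made-up feature names:</p>
<pre><code>import numpy as np

# Hypothetical batch: each feature comes out of the input pipeline as a
# rank-1 vector of shape [batch_size].
features = {
    "mean_ctr": np.array([0.1, 0.2, 0.3]),
    "num_clicks": np.array([4.0, 5.0, 6.0]),
}

# Mirror of the tf.reshape(feature_value, [-1, 1]) workaround: turn every
# vector into a [batch_size, 1] matrix without changing its values.
features = {name: value.reshape(-1, 1) for name, value in features.items()}

for name, value in features.items():
    print(name, value.shape)  # each feature is now shape (3, 1)
</code></pre>
<p>The <code>-1</code> lets the batch dimension be inferred, so the same reshape works for any batch size.</p>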
|
python|machine-learning|tensorflow|google-cloud-platform|tensorflow-transform
| 3
|