| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
375,400
| 68,981,524
|
Synthetic timestamp pandas
|
<p>I need to add a timestamp to a dataframe with the following settings:</p>
<pre><code>from datetime import datetime
date_rng = pd.date_range(start='1/1/2020', end='1/21/2020',
periods=len(df))
</code></pre>
<p>I would like to know how to discard the millisecond terms, since I need to convert to this format <code>Jan 1, 1400 00:00:00</code> later in QuickSight.
Thank you!</p>
<p><a href="https://i.stack.imgur.com/5RMBm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5RMBm.png" alt="enter image description here" /></a></p>
|
<p>You can use <code>.floor('s')</code>:</p>
<p>dummy example:</p>
<pre><code>import pandas as pd

date_rng = pd.date_range(start='1/1/2020', end='1/21/2020', periods=12)
date_rng.floor('s')
</code></pre>
<p>output:</p>
<pre><code>DatetimeIndex(['2020-01-01 00:00:00', '2020-01-02 19:38:10',
'2020-01-04 15:16:21', '2020-01-06 10:54:32',
'2020-01-08 06:32:43', '2020-01-10 02:10:54',
'2020-01-11 21:49:05', '2020-01-13 17:27:16',
'2020-01-15 13:05:27', '2020-01-17 08:43:38',
'2020-01-19 04:21:49', '2020-01-21 00:00:00'],
dtype='datetime64[ns]', freq=None)
</code></pre>
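<p>If the goal is the string layout mentioned in the question (e.g. <code>Jan 1, 1400 00:00:00</code>), a minimal follow-up sketch could combine the flooring with <code>strftime</code>. Note the day is zero-padded here; <code>%-d</code> would drop the leading zero but is not available on all platforms:</p>
<pre><code>import pandas as pd

date_rng = pd.date_range(start='1/1/2020', end='1/21/2020', periods=12)
formatted = date_rng.floor('s').strftime('%b %d, %Y %H:%M:%S')
# e.g. 'Jan 01, 2020 00:00:00', 'Jan 02, 2020 19:38:10', ...
</code></pre>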
|
pandas|amazon-web-services|dataframe|amazon-quicksight
| 0
|
375,401
| 68,890,895
|
How to vectorize function that concatenates values in duplicated rows?
|
<p>I have a function that transforms table that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Year</th>
<th>Period</th>
<th>Duplicate</th>
</tr>
</thead>
<tbody>
<tr>
<td>A9999</td>
<td>2020</td>
<td>23</td>
<td>False</td>
</tr>
<tr>
<td>A9999</td>
<td>2019</td>
<td>22</td>
<td>True</td>
</tr>
<tr>
<td>A9999</td>
<td>2018</td>
<td>20</td>
<td>True</td>
</tr>
<tr>
<td>B0000</td>
<td>2019</td>
<td>24</td>
<td>False</td>
</tr>
<tr>
<td>B0000</td>
<td>2018</td>
<td>12</td>
<td>True</td>
</tr>
<tr>
<td>C5555</td>
<td>2019</td>
<td>18</td>
<td>False</td>
</tr>
</tbody>
</table>
</div>
<p>to this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Year</th>
<th>Period</th>
<th>Duplicate</th>
<th>Concat</th>
</tr>
</thead>
<tbody>
<tr>
<td>A9999</td>
<td>2020</td>
<td>23</td>
<td>False</td>
<td>2018x2019x2020_23</td>
</tr>
<tr>
<td>A9999</td>
<td>2019</td>
<td>22</td>
<td>True</td>
<td>2018x2019x2020_22</td>
</tr>
<tr>
<td>A9999</td>
<td>2018</td>
<td>20</td>
<td>True</td>
<td>2018x2019x2020_20</td>
</tr>
<tr>
<td>B0000</td>
<td>2019</td>
<td>24</td>
<td>False</td>
<td>2018x2019_24</td>
</tr>
<tr>
<td>B0000</td>
<td>2018</td>
<td>12</td>
<td>True</td>
<td>2018x2019_12</td>
</tr>
<tr>
<td>C5555</td>
<td>2019</td>
<td>18</td>
<td>False</td>
<td>2019_18</td>
</tr>
</tbody>
</table>
</div>
<p>It concatenates values in <code>Year</code> column if the rows have the same <code>ID</code> value and joins it with <code>Period</code> value from current row.</p>
<p>The function is as follows:</p>
<pre><code>def create_concat_col(df):
    concat_dic = {}
    for index, cur_row in df.iterrows():
        cur_id = cur_row['ID']
        cur_year = str(cur_row['Year'])
        if cur_id not in concat_dic:
            concat_dic[cur_id] = cur_year
        else:
            concat_dic[cur_id] = cur_year + 'x' + concat_dic[cur_id]
        if not cur_row['Duplicate']:
            concat_dic[cur_id] = concat_dic[cur_id] + '_' + str(cur_row['Period'])
    df.insert(12, 'Concat', '')
    df['Concat'] = df['ID'].map(concat_dic)
    return df
</code></pre>
<p>The function does its job but unfortunately is very slow since it iterates through all rows. Is there a way to vectorize this function so it's faster?</p>
<p>Thank you.</p>
|
<p>You can use a combination of <code>groupby</code> and <code>agg</code> to get the "Concat" column, then merge to get the output:</p>
<pre><code>df2 = (df.astype({'Year': str, 'Period': str})
.groupby('ID').agg({'Year': 'x'.join,
'Period': 'first'})
.apply('_'.join, axis=1)
.rename('Concat')
.reset_index()
)
df.merge(df2, on='ID')
</code></pre>
<p>output:</p>
<pre><code> ID Year Period Duplicate Concat
0 A9999 2020 23 False 2020x2019x2018_23
1 A9999 2019 22 True 2020x2019x2018_23
2 A9999 2018 20 True 2020x2019x2018_23
3 B0000 2019 24 False 2019x2018_24
4 B0000 2018 12 True 2019x2018_24
5 C5555 2019 18 False 2019_18
</code></pre>
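<p>Note that the merged output above repeats the group's first <code>Period</code> in every row, while the desired output in the question keeps each row's own <code>Period</code>. A minimal sketch of a variant that does that with <code>transform</code> (years joined in row order, same ordering caveat as above):</p>
<pre><code>df['Concat'] = (df.astype({'Year': str})
                  .groupby('ID')['Year']
                  .transform('x'.join)        # same year string for every row of the group
                + '_' + df['Period'].astype(str))  # but this row's own Period
</code></pre>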
|
python|pandas
| 1
|
375,402
| 69,027,451
|
replace matrix elements with maximum value to symmetrize a matrix
|
<p>I have a matrix. I want to replace some elements of my matrix by applying the following condition: if <code>Xij>Xji</code> or vice versa, replace the minimum value with the maximum value.
For example:</p>
<pre><code>Input_array = [[1, 5, 3],
[1, 10, 2],
[0, 9, 16]]
</code></pre>
<p>I want the output as a symmetric array by replacing the matrix elements according to the above mentioned condition.</p>
<pre><code>Output_array = [[1,5,3],
[5,10,9],
[3,9,16]]
</code></pre>
<p>N.B.: for making the matrix symmetric I don't want to do <code>numpy.dot(matrix,matrixT)</code>.</p>
|
<pre><code>import numpy as np
arr = np.array([[1, 5, 3],[1, 10, 2],[0, 9, 16]])
arr_sym = np.where(arr > arr.T, arr, arr.T)
print(f'arr_sym = \n{arr_sym}')
</code></pre>
<p>output:</p>
<pre><code>arr_sym =
[[ 1 5 3]
[ 5 10 9]
[ 3 9 16]]
</code></pre>
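<p>An equivalent and slightly more direct form uses <code>np.maximum</code>, which takes the element-wise maximum of the matrix and its transpose:</p>
<pre><code>arr_sym = np.maximum(arr, arr.T)
</code></pre>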
|
python|numpy|matrix
| 2
|
375,403
| 69,033,605
|
How to sort rows with respect to a group?
|
<p>Hi, I have a pandas data frame. I want to sort the data within each group id, ordering by the order column.</p>
<pre><code>id title order
2 A 2
2 B 1
2 C 3
3 H 2
3 T 1
</code></pre>
<p>out put:</p>
<pre><code>id title order
2 B 1
2 A 2
2 C 3
3 T 1
3 H 2
</code></pre>
|
<p>Since you're not aggregating, you can sort by multiple columns to get the output you want.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': [2, 2, 2, 3, 3],
'title': ['A', 'B', 'C', 'H', 'T'],
'order': [2, 1, 3, 2, 1]})
df = df.sort_values(by=['id', 'order'])
print(df)
</code></pre>
<p>Output:</p>
<pre><code> id title order
1 2 B 1
0 2 A 2
2 2 C 3
4 3 T 1
3 3 H 2
</code></pre>
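<p>If you prefer a clean 0..n-1 index in the result instead of the original row labels, <code>sort_values</code> also accepts <code>ignore_index=True</code> (pandas 1.0+):</p>
<pre><code>df = df.sort_values(by=['id', 'order'], ignore_index=True)
</code></pre>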
|
python|pandas|sorting
| 2
|
375,404
| 69,109,929
|
How to write value from a dataframe column to another column based on a condition?
|
<p>I'm having a bad time here trying to figure out how to set a column based on a condition. Basically, I want to copy the value from my "Customer" column to the rows of my "Call Ref" column, if the row is different from "Enterprise" and "Client".</p>
<p>Here is the code I'm trying:</p>
<pre><code>import numpy as np
df_OCB['Call Ref'] = np.select([
np.logical_and(
df_OCB['Call Ref'] != 'Enterprise',
df_OCB['Call Ref'] != 'Client'
)],
df_OCB['Costomer'],
default=''
)
</code></pre>
<p>Does anyone know of a solution?</p>
|
<p>Not sure if this is what you really want to do, because you didn't provide an example of your input and output data frames, but as far as I understand it, here is a toy example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame([['A','Enterprise'],
['B','Client'],
['C','Client'],
['D','Something else'],
['E','Enterprise']], columns=['Client','Call Ref'])
Output:
Client Call Ref
0 A Enterprise
1 B Client
2 C Client
3 D Something else
4 E Enterprise
</code></pre>
<p>Here you apply the boolean mask: <code>(df['Call Ref'] != 'Enterprise') & (df['Call Ref'] != 'Client')</code></p>
<p>like this:</p>
<pre><code>df['Call Ref'][(df['Call Ref'] != 'Enterprise') & (df['Call Ref'] != 'Client')] = df['Client'][(df['Call Ref'] != 'Enterprise') & (df['Call Ref'] != 'Client')]
Output:
Client Call Ref
0 A Enterprise
1 B Client
2 C Client
3 D D
4 E Enterprise
</code></pre>
<p>If you provide more information about the output dataframe that you want, I can give you a more concise example. Hope it helped.</p>
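<p>A shorter sketch of the same idea on the toy frame above uses a mask with <code>np.where</code>, which also avoids the chained-assignment warning (also note the original attempt references <code>df_OCB['Costomer']</code>, which looks like it may simply be a typo for the 'Customer' column):</p>
<pre><code>import numpy as np

mask = ~df['Call Ref'].isin(['Enterprise', 'Client'])
df['Call Ref'] = np.where(mask, df['Client'], df['Call Ref'])
</code></pre>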
|
python|pandas|dataframe
| 0
|
375,405
| 69,222,172
|
How to use pandas and Numpy data transformations within a Python function?
|
<p>I am trying to perform some simple transformations using pandas and NumPy inside a function. The transformations required are:</p>
<ol>
<li>Remove 'Verified' column from df</li>
<li>Convert array into a dataframe (df2)</li>
<li>Merge the two dfs together</li>
</ol>
<p>I've copied my code below. It works fine outside a function but I don't know how to make it work within a function.</p>
<pre><code>df = pd.DataFrame([[1, "John", True], [2, "Ann", False]], columns=["Id", "Login", "Verified"])
array = np.array([[1, 987340123], [2, 187031122]], np.int32)
df.drop(columns=['Verified'], inplace = True)
df2 = pd.DataFrame(data=id_password, index=["0", "1"], columns=["Id", "Password"])
df = df.merge(df2, how = 'inner')
print(df)
</code></pre>
<p>I'm sure there's a really simple solution but I'm completely stuck and a beginner. Any help greatly appreciated.</p>
|
<p>If I understand you correctly, you want to merge the <code>df</code> (without <code>Verified</code> column) and the <code>array</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
[[1, "John", True], [2, "Ann", False]], columns=["Id", "Login", "Verified"]
)
array = np.array([[1, 987340123], [2, 187031122]], np.int32)
df_out = pd.merge(
df.drop(columns="Verified"),
pd.DataFrame(array, columns=["Id", "Password"]),
on="Id",
)
print(df_out)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Id Login Password
0 1 John 987340123
1 2 Ann 187031122
</code></pre>
|
python|pandas|dataframe|numpy
| 0
|
375,406
| 69,295,546
|
Fill dataframe with duplicate data until a certain condition is met
|
<p>I have a data frame df like,</p>
<pre><code>id name age duration
1 ABC 20 12
2 sd 50 150
3 df 54 40
</code></pre>
<p>I want to duplicate this data in the same df until the duration sum is greater than or equal to 300,</p>
<p>so the df can look like this:</p>
<pre><code>id name age duration
1 ABC 20 12
2 sd 50 150
3 df 54 40
2 sd 50 150
</code></pre>
<p><strong>So far I have tried the code below, but it sometimes runs in an infinite loop. Please help.</strong></p>
<pre><code>def fillPlaylist(df,duration):
    print("inside fill playlist fn.")
    if(len(df)==0):
        print("df len is 0, cannot fill.")
        return df;
    receivedDf= df
    print("receivedDf",receivedDf,flush=True)
    print("Received df len = ",len(receivedDf),flush=True)
    print("duration to fill ",duration,flush=True)
    while df['duration'].sum() < duration:
        # random 5% sample of data.
        print("filling")
        ramdomSampleDuplicates = receivedDf.sample(frac=0.05).reset_index(drop=True)
        df = pd.concat([ramdomSampleDuplicates,df])
        print("df['duration'].sum() ",df['duration'].sum())
    print("after filling df len = ",len(df))
    return df;
</code></pre>
|
<p>Try using <code>n</code> instead of <code>frac</code>.</p>
<p><code>n</code> randomly samples exactly n rows from your dataframe. With <code>frac=0.05</code> on a small dataframe, the sample size can round down to zero rows, so the duration sum never grows and the loop never ends.</p>
<pre><code>sample_df = df.sample(n=1).reset_index(drop=True)
</code></pre>
<p>To use <code>frac</code> you can rewrite your code in this way.</p>
<pre><code>def fillPlaylist(df, duration):
    while df.duration.sum() < duration:
        sample_df = df.sample(frac=0.5).reset_index(drop=True)
        df = pd.concat([df, sample_df])
    return df
</code></pre>
|
python-3.x|pandas|dataframe|data-science
| 1
|
375,407
| 68,919,999
|
Pandas: How to transform nested json with dynamic keys and arrays to pandas dataframe
|
<p>How to transform nested JSON with dynamic keys and arrays to a pandas dataframe?</p>
<ul>
<li>Static keys: <code>data</code>, <code>label</code>, <code>units</code>, <code>date</code>, <code>val</code>, <code>num</code> (can be hardcoded)</li>
<li>Dynamic keys/arrays: <code>data_1_a</code>, <code>data_1000_xyz</code> , <code>name_1a</code> , <code>name_1b</code>, <code>name_10000_xyz</code>, <code>A</code>, <code>B</code> (cannot be hardcoded as they are up to 10000 names / data sub categories)</li>
</ul>
<p>For solutions I tried please see below useful links.</p>
<p>input json:</p>
<pre class="lang-json prettyprint-override"><code>{
"id": 1,
"data": {
"data_1_a": {
"name_1a": {
"label": "label_1",
"units": {
"A": [{"date": 2020, "val": 1}]}}
},
"data_1000_xyz": {
"name_1b": {
"label": "null",
"units": {
"B": [{"date": 2019, "val": 2},
{"date": 2020, "val": 3}]},
},
"name_10000_xyz": {
"label": "null",
"units": {
"A": [
{"date": 2018, "val": 4, "num": "str"},
{"date": 2019, "val": 5},
{"date": 2020, "val": 6, "num": "str"},
]
},
},
},
},
}
</code></pre>
<p>required output df:</p>
<pre><code>+---+--------------+----------------+---------+-------+------+-----+------+
|id |level_1 |level_2 |level_3 |level_4| date | val | num |
+---+--------------+----------------+---------+-------+------+-----+------+
|1 |data_1_a | name_1a | unit | A | 2020 | 1 | null |
|1 |data_1000_xyz | name_1b | unit | B | 2019 | 2 | null |
|1 |data_1000_xyz | name_1b | unit | B | 2020 | 3 | null |
|1 |data_1000_xyz | name_10000_xyz | unit | A | 2018 | 4 | str |
|1 |data_1000_xyz | name_10000_xyz | unit | A | 2019 | 5 | null |
|1 |data_1000_xyz | name_10000_xyz | unit | A | 2020 | 6 | str |
+-------------------------------------------------------------------------+
</code></pre>
<p>Usefull links:</p>
<ul>
<li><a href="https://hendrikvanb.gitlab.io/2018/07/nested_data-json_to_tibble/" rel="nofollow noreferrer">https://hendrikvanb.gitlab.io/2018/07/nested_data-json_to_tibble/</a></li>
<li><a href="https://stackoverflow.com/questions/52795561/flattening-nested-json-in-pandas-data-frame">flattening nested Json in pandas data frame</a></li>
<li><a href="https://www.byrdlab.org/post/json-files-tidy-data/" rel="nofollow noreferrer">https://www.byrdlab.org/post/json-files-tidy-data/</a></li>
<li><a href="https://towardsdatascience.com/flattening-json-records-using-pyspark-b83137669def" rel="nofollow noreferrer">https://towardsdatascience.com/flattening-json-records-using-pyspark-b83137669def</a></li>
<li><a href="https://stackoverflow.com/questions/68941232/how-to-explode-pandas-data-frame-with-json-arrays">How to explode pandas data frame with json arrays</a></li>
</ul>
|
<h2>Python Pandas solution:</h2>
<pre><code>import pandas as pd

# 1) flatten json
df = pd.json_normalize(json_1)
df_dic = df.to_dict('records')

# 2) split to levels
data = []
for row in df_dic:
    k = {}
    for item in row.items():
        if item[0] == 'id':
            id = item[1]
        else:
            keys = item[0].split('.')
            k = {i: s for i, s in enumerate(keys)}
            k.update({'value': item[1]})
            k.update({'id': id})
            data.append(k)

df = (pd.DataFrame(data)[['id', 1, 2, 3, 4, 'value']]
      .rename(columns={1: 'level_1', 2: 'level_2', 3: 'level_3', 4: 'level_4'}))
df = df.loc[~df['level_4'].isnull()]

# 3) explode
dfe = df.explode('value', ignore_index=True)

# 4) pop the value column and create a new dataframe from it, then join the new frame with the exploded frame.
output_df = dfe.join(pd.DataFrame([*dfe.pop('value')], index=dfe.index))
id level_1 level_2 level_3 level_4 date val num
0 1 data_1_a name_1a units A 2020 1 NaN
1 1 data_1000_xyz name_1b units B 2019 2 NaN
2 1 data_1000_xyz name_1b units B 2020 3 NaN
3 1 data_1000_xyz name_10000_xyz units A 2018 4 str
4 1 data_1000_xyz name_10000_xyz units A 2019 5 NaN
5 1 data_1000_xyz name_10000_xyz units A 2020 6 str
</code></pre>
|
python|json|pandas|dataframe
| 1
|
375,408
| 69,276,726
|
pandas sum of column based on index
|
<p>How to sum pandas columns based on index choice</p>
<pre><code> 'A' 'B'
</code></pre>
<hr />
<pre><code>'G9' 15 16
</code></pre>
<hr />
<pre><code>'G10' 20 30
</code></pre>
<hr />
<pre><code>'G9PRO' 1 11
</code></pre>
<p>if I choose 'G9' I want to get this dataFrame</p>
<pre><code> 'logs'
</code></pre>
<hr />
<pre><code>'A' 15
</code></pre>
<hr />
<pre><code>'B' 16
</code></pre>
<p>and if I choose 'G9' and 'G10' I want to get this dataFrame</p>
<pre><code> 'logs'
</code></pre>
<hr />
<pre><code>'A' 35
</code></pre>
<hr />
<pre><code>'B' 46
</code></pre>
<p>and so on. I tried the sum function but it did not give the right result.</p>
|
<p>You can use <code>df.index.isin()</code> and <code>.sum()</code> to generate the results you need</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'A': [15, 20, 1],
'B': [16, 30, 11]
}, index=['G9', 'G10', 'G9PRO'])
df
</code></pre>
<p>Test Case #1</p>
<pre><code>selected = ['G9', 'G10']
sum_df = df[df.index.isin(selected)].sum()
A 35
B 46
dtype: int64
</code></pre>
<p>Test Case #2</p>
<pre><code>selected = ['G9']
sum_df = df[df.index.isin(selected)].sum()
A 15
B 16
dtype: int64
</code></pre>
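<p>When you are sure all the chosen labels exist in the index, <code>df.loc[selected].sum()</code> gives the same result more directly; <code>isin</code> is the safer option if some labels might be missing, since it simply skips them instead of raising a <code>KeyError</code>:</p>
<pre><code>sum_df = df.loc[['G9', 'G10']].sum()
</code></pre>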
|
python|pandas
| 2
|
375,409
| 68,941,407
|
pandas append row in a loop
|
<p>I need to add a column and append rows to a dataframe by searching a text file and adding the occurrences.</p>
<p>below is my input dataframe 'dfl'</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>TextID</th>
<th>Type</th>
<th>Term</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>fname</td>
<td>john</td>
</tr>
<tr>
<td>1</td>
<td>lname</td>
<td>doe</td>
</tr>
<tr>
<td>2</td>
<td>fname</td>
<td>jason</td>
</tr>
<tr>
<td>3</td>
<td>loc</td>
<td>12234</td>
</tr>
</tbody>
</table>
</div>
<pre><code>target_string = []
words = []
s1 = ''
for index, row in dfl.iterrows():
    words = row['Term']
    field_name = row['Type']
    path = "C:\\Users\\myfolder\\"+str(row['chart'])+".txt"
    with open(path,"r") as myfile:
        target_string = myfile.read()
    #here i do the regex
    for match in re.finditer(words, str(target_string),re.IGNORECASE):
        print(match.start(),match.end(),match.group())
        s1 = "match.start()+","+match.end()+","+field_name"
    dfl['Match'] = s1.strip() # this is repeating for every row in dataframe below but i want each row to have its own match field
</code></pre>
<p>Now I want to add the result to the above dataframe 'dfl' in a new column, as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Text ID</th>
<th>Type</th>
<th>Term</th>
<th>Match</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>fname</td>
<td>john</td>
<td>0,4, fname</td>
</tr>
<tr>
<td>1</td>
<td>lname</td>
<td>doe</td>
<td>8,11,lname</td>
</tr>
<tr>
<td>2</td>
<td>fname</td>
<td>jason</td>
<td>10,15,fname</td>
</tr>
<tr>
<td>3</td>
<td>loc</td>
<td>12234</td>
<td>20,25,loc</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can save the 'match' values in a list, and when you are done, create the new column using that list:</p>
<pre><code>target_string = []
words = []
match_list = []
s1 = ''
for index, row in dfl.iterrows():
    words = row['Term']
    field_name = row['Type']
    path = "C:\\Users\\myfolder\\"+str(row['chart'])+".txt"
    with open(path,"r") as myfile:
        target_string = myfile.read()
    # here i do the regex
    for match in re.finditer(words, str(target_string), re.IGNORECASE):
        print(match.start(), match.end(), match.group())
        s1 = str(match.start()) + "," + str(match.end()) + "," + field_name
    match_list.append(s1.strip())
dfl['Match'] = match_list
</code></pre>
<p>The problem you were having is that you were re-creating the full column in every iteration with the same value (the last value of s1.strip() replacing all the existing values in the column).
If you want each row to get its own value in the Match column, you can instead use the row index and set each value individually: <code>dfl.loc[index, 'Match'] = s1.strip()</code></p>
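<p>A minimal sketch of that second, per-row <code>loc</code> approach (assuming 'dfl' as above; this variant keeps only the first match per row, unlike <code>finditer</code>):</p>
<pre><code>import re

dfl['Match'] = ''                          # create the column once, empty
for index, row in dfl.iterrows():
    path = "C:\\Users\\myfolder\\" + str(row['chart']) + ".txt"
    with open(path, "r") as myfile:
        text = myfile.read()
    m = re.search(str(row['Term']), text, re.IGNORECASE)
    if m:
        dfl.loc[index, 'Match'] = f"{m.start()},{m.end()},{row['Type']}"
</code></pre>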
|
python|pandas|dataframe
| 2
|
375,410
| 69,230,570
|
How to do batched dot product in PyTorch?
|
<p>I have a input tensor that is of size <code>[B, N, 3]</code> and I have a test tensor of size <code>[N, 3]</code> . I want to apply a dot product of the two tensors such that I get <code>[B, N]</code> basically. Is this actually possible?</p>
|
<p>Yes, it's possible:</p>
<pre><code>a = torch.randn(5, 4, 3)
b = torch.randn(4, 3)
c = torch.einsum('ijk,jk->ij', a, b) # torch.Size([5, 4])
</code></pre>
|
pytorch
| 1
|
375,411
| 68,996,646
|
Scraping tables using Pandas read_html and identifying headers
|
<p>I am completely new to web scraping and would like to parse a specific table that occurs in the SEC filing DEF 14A of companies. I was able to get the right URL and pass it to pandas.
Note: Even though the desired table should occur in every DEF 14A, its layout may differ from company to company. Right now I am struggling with formatting the dataframe.
How do I manage to get the right header and join it into a single index(column)?</p>
<p><strong>This is my code so far:</strong></p>
<pre><code>url_to_use = "https://www.sec.gov/Archives/edgar/data/1000229/000095012907000818/h43371ddef14a.htm"
resp = requests.get(url_to_use)
soup = bs.BeautifulSoup(resp.text, "html.parser")
dfs = pd.read_html(resp.text, match="Salary")
pd.options.display.max_columns = None
df = dfs[0]
df.dropna(how="all", inplace = True)
df.dropna(axis = 1, how="all", inplace = True)
display(df)
</code></pre>
<p>Right now the output of my code looks like this:
<a href="https://i.stack.imgur.com/TU9h2.jpg" rel="nofollow noreferrer">Dataframe output</a></p>
<p>Whereas the correct layout looks like this:
<a href="https://i.stack.imgur.com/OYsSU.jpg" rel="nofollow noreferrer">Original format</a></p>
<p>Is there some way to identify those rows that belong to the header and combine them as the header?</p>
|
<p>The table <code>html</code> is rather messed up. The empty cells are actually in the source code. It would be easiest to do some post processing:</p>
<pre><code>import pandas as pd
import requests
r = requests.get("https://www.sec.gov/Archives/edgar/data/1000229/000095012907000818/h43371ddef14a.htm", headers={'User-agent': 'Mozilla/5.0'}).text
df = pd.read_html(r) #load with user agent to avoid 401 error
df = df[40] #get the right table from the list of dataframes
df = df[8:].rename(columns={i: ' '.join(df[i][:8].dropna()) for i in df.columns}) #generate column headers from the first 8 rows
df.dropna(how='all', axis=1, inplace=True) #remove empty columns and rows
df.dropna(how='all', axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
def sjoin(x): return ''.join(x[x.notnull()].astype(str))
df = df.groupby(level=0, axis=1).apply(lambda x: x.apply(sjoin, axis=1)) #concatenate columns with the same headers, taken from https://stackoverflow.com/a/24391268/11380795
</code></pre>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">All Other Compensation ($)(4)</th>
<th style="text-align: left;">Change in Pension Value and Nonqualified Deferred Compensation Earnings ($)</th>
<th style="text-align: left;">Name and Principal Position</th>
<th style="text-align: left;">Non-Equity Incentive Plan Compensation ($)</th>
<th style="text-align: right;">Salary ($)</th>
<th style="text-align: right;">Stock Awards ($)(1)</th>
<th style="text-align: right;">Total ($)</th>
<th style="text-align: right;">Year</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">8953</td>
<td style="text-align: left;">(3)</td>
<td style="text-align: left;">David M. Demshur President and Chief Executive Officer</td>
<td style="text-align: left;">766200(2)</td>
<td style="text-align: right;">504569</td>
<td style="text-align: right;">1088559</td>
<td style="text-align: right;">2368281</td>
<td style="text-align: right;">2006</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">8944</td>
<td style="text-align: left;">(3)</td>
<td style="text-align: left;">Richard L. Bergmark Executive Vice President, Chief Financial Officer and Treasurer</td>
<td style="text-align: left;">330800(2)</td>
<td style="text-align: right;">324569</td>
<td style="text-align: right;">799096</td>
<td style="text-align: right;">1463409</td>
<td style="text-align: right;">2006</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">8940</td>
<td style="text-align: left;">(3)</td>
<td style="text-align: left;">Monty L. Davis Chief Operating Officer and Senior Vice President</td>
<td style="text-align: left;">320800(2)</td>
<td style="text-align: right;">314569</td>
<td style="text-align: right;">559097</td>
<td style="text-align: right;">1203406</td>
<td style="text-align: right;">2006</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">8933</td>
<td style="text-align: left;">(3)</td>
<td style="text-align: left;">John D. Denson Vice President, General Counsel and Secretary</td>
<td style="text-align: left;">176250(2)</td>
<td style="text-align: right;">264569</td>
<td style="text-align: right;">363581</td>
<td style="text-align: right;">813333</td>
<td style="text-align: right;">2006</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|web-scraping
| 1
|
375,412
| 68,910,605
|
Pandas: how to parse values from column
|
<p>I have a somewhat large dataframe, formatted something like this:</p>
<pre><code>colA colB
1 c, d
2 d, e, f
3 e, d, a
</code></pre>
<p>I want to get a dictionary that counts instances of unique values in colB, like:</p>
<pre><code>a: 1
c: 1
d: 3
e: 2
f: 1
</code></pre>
<p>My naive solution would be to iterate over every row of colB, split that, then use a <code>Counter</code>: <code>my_counter[current_colB_object] += 1</code>.</p>
<p>However, <a href="https://stackoverflow.com/a/55557758/2143578">this answer</a> strongly discourages iterating over dataframes, especially (like in my case) large ones.</p>
<p>What would be the preferred way of doing this?</p>
|
<p>Try with <code>explode</code> and <code>value_counts</code>:</p>
<pre><code>>>> df["colB"].str.split(", ").explode().value_counts().to_dict()
{'d': 3, 'e': 2, 'c': 1, 'f': 1, 'a': 1}
</code></pre>
<h6>Input <code>df</code>:</h6>
<pre><code>df = pd.DataFrame({"colA": [1, 2, 3],
"colB": ["c, d", "d, e, f", "e, d, a"]
})
>>> df
colA colB
0 1 c, d
1 2 d, e, f
2 3 e, d, a
</code></pre>
|
python|pandas|dataframe|iteration|counter
| 1
|
375,413
| 69,145,555
|
Pandas - extract method not matching anything
|
<p>I am having a problem with this seemingly easy task.
Here's a recreation of my problem:</p>
<p>I have a dataframe called legal of this form:</p>
<pre><code>+----+-----------------+
| | legal |
|----+-----------------|
| 0 | gmbh |
| 1 | kg |
| 2 | ag |
| 3 | GmbH & Co. KGaA |
| 4 | LP |
| 5 | LLP |
| 6 | LLLP |
| 7 | LLC |
| 8 | PLLC |
| 9 | corp |
| 10 | corporation |
| 11 | inc |
| 12 | cic |
| 13 | cio |
| 14 | ltd |
| 15 | s.a. |
+----+-----------------+
</code></pre>
<p>It contains all the words that can represent a legal term of a given company.</p>
<p>Now I have another dataframe containing a list of company raw names that might also contain some legal terms.
My task is to identify such legal terms in each company raw name in the <code>companies</code> dataframe.
I am trying to use some regex so that the legal terms might both be uppercase and lowercase (or a mix). So I am using the method <strong>extract</strong> for that.</p>
<p>For the sake of the demonstration, my first company raw name is <code>2&0 Technologies Inc</code>, so for that company I would expect to extract the world <code>inc</code> from my legal dataframe.</p>
<p>This is the simplified version of my code with some comments:</p>
<pre><code>def format_companies(self, legals, locations):
    self.companies['base_name'] = ''
    self.companies['location'] = ''
    self.companies['legal'] = ''
    for i, row in self.companies.iterrows():
        legal_pattern = '/(' + "|".join(row['raw'].split()) + ')/ig'
        legal_pattern = rf'{legal_pattern}'
        print(legal_pattern)  # It prints out -> /(2&0|Technologies|Inc)/ig
        legal = legals['legal'].str.extract(legal_pattern)
        print(tabulate(legal, headers='keys', tablefmt='psql'))  # Everything is NaN. (results printed below)
        if i >= 0:
            break
</code></pre>
<p>The first print statement is just to print out the pattern used in the extract method, which is <code>/(2&0|Technologies|Inc)/ig</code>.</p>
<p>The second pattern is to print out the results from the extract method, and as said in the comments, it returns a list of NaNs:</p>
<pre><code>+----+-----+
| | 0 |
|----+-----|
| 0 | nan |
| 1 | nan |
| 2 | nan |
| 3 | nan |
| 4 | nan |
| 5 | nan |
| 6 | nan |
| 7 | nan |
| 8 | nan |
| 9 | nan |
| 10 | nan |
| 11 | nan |
| 12 | nan |
| 13 | nan |
| 14 | nan |
| 15 | nan |
+----+-----+
</code></pre>
<p>I am very confused because if you try out the regular expression <code>/(2&0|Technologies|Inc)/ig</code> on the text 'inc' on <a href="https://www.regextester.com/" rel="nofollow noreferrer">https://www.regextester.com/</a>, inc gets selected correctly.</p>
<p>What am I doing wrong?</p>
|
<p><code>str.extract()</code> does not recognize regex pattern with <code>/i</code> to indicate IGNORECASE. To solve this, you can do it in 2 ways:</p>
<p><strong>Method 1:</strong> Change your definition of <code>legal_pattern</code> without the <code>/</code> and <code>/ig</code>:</p>
<pre><code>legal_pattern = '(' + "|".join(row['raw'].split()) + ')'
legal_pattern = rf'{legal_pattern}'
</code></pre>
<p>Instead, use the flag <code>re.IGNORECASE</code> in <code>str.extract()</code>, as follows:</p>
<pre><code>import re
legals['legal'].str.extract(legal_pattern, re.IGNORECASE)
</code></pre>
<p><strong>Method 2:</strong> Alternatively, you can also use <code>(?i)</code> in the regex to indicate IGNORECASE, as follows:</p>
<pre><code>legal_pattern = '(?i)(' + "|".join(row['raw'].split()) + ')'
legal_pattern = rf'{legal_pattern}'
</code></pre>
<p>Then, you can use <code>str.extract()</code> without specifying <code>re.IGNORECASE</code>:</p>
<pre><code>legals['legal'].str.extract(legal_pattern)
</code></pre>
<p><strong>Result:</strong></p>
<pre><code> 0
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
10 NaN
11 inc
12 NaN
13 NaN
14 NaN
15 NaN
</code></pre>
|
python|regex|pandas
| 1
|
375,414
| 68,986,597
|
DataFrame Pandas - How can I split a list of dictionary from each row to separated columns?
|
<p>I have the DataFrame below with 2 columns, and one of the columns is a list of dictionary inside an list of dictionary.
I would like to split/separate this column in several columns.</p>
<pre><code>import pandas as pd
USERNAME = ['root', 'user1', 'user2','user3']
test_data = '[{"conjunction":"and","expressions":[{"_actualOperator":"contains","_actualValue":"LBD","attr":"displayName","op":"contains","value":"LBD"}],"name":"test_Event","editable":true}]'
test_data2 = '[{"conjunction":"and","expressions":[{"_actualOperator":"not_contains","_actualValue":"AAA","attr":"Event","op":"contains","value":"LBD"}],"name":"test_Event","editable":true}]'
test_data3 = '[{"conjunction":"and","expressions":[{"_actualOperator":"exclude","_actualValue":"BBB","attr":"Event","op":"contains","value":"LBD"}],"name":"test_Event","editable":true}]'
test_data4 = '[{"conjunction":"and","expressions":[{"_actualOperator":"adding","_actualValue":"CASA","attr":"displayName","op":"contains","value":"LBD"}],"name":"test_Event","editable":true}]'
VALUE_STRING = [test_data, test_data2, test_data3, test_data4]
data = {'USERNAME': ['root', 'user1', 'user2','user3'], 'VALUE_STRING' : VALUE_STRING}
df = pd.DataFrame(data)
df
USERNAME VALUE_STRING
root [{"conjunction":"and","expressions":[{"_actual...
user1 [{"conjunction":"and","expressions":[{"_actual...
user2 [{"conjunction":"and","expressions":[{"_actual...
user2 [{"conjunction":"and","expressions":[{"_actual...
</code></pre>
<p>And I expected a result like this:</p>
<pre><code>df_expected = pd.DataFrame({'USERNAME': ['root', 'user1', 'user2','user3'],
'_actualOperator':['contains','not_contains','exclude','adding'],
'_actualValue':['LBD','AAA','BBB','CASA'],
'attr':['displayName','Event','Event','displayName']})
df_expected
USERNAME _actualOperator _actualValue attr
root contains LBD displayName
user1 not_contains AAA Event
user2 exclude BBB Event
user3 adding CASA displayName
</code></pre>
|
<p>It seems like the column <code>VALUE_STRING</code> contains JSON data. In that case, we can parse the JSON data using the <code>loads</code> method of the <code>json</code> module, extract the dictionaries associated with the key <code>expressions</code> from each row, create a new dataframe from these dictionaries, and join it back with the <code>USERNAME</code> column.</p>
<pre><code>import json
s = df['VALUE_STRING'].map(json.loads)\
.str[0].str['expressions'].str[0]
exp = pd.DataFrame([*s], index=s.index)
df_out = df[['USERNAME']].join(exp).drop(['op', 'value'], axis=1)
</code></pre>
<p>Alternative approach with pandas <code>json_normalize</code> method</p>
<pre><code>s = df['VALUE_STRING'].map(json.loads).str[0]
exp = pd.json_normalize(s, 'expressions')
df_out = df[['USERNAME']].join(exp).drop(['op', 'value'], axis=1)
</code></pre>
<hr />
<pre><code>print(df_out)
USERNAME _actualOperator _actualValue attr
0 root contains LBD displayName
1 user1 not_contains AAA Event
2 user2 exclude BBB Event
3 user3 adding CASA displayName
</code></pre>
|
python|pandas|dataframe
| 1
|
375,415
| 68,886,615
|
What is the calculation process of loss functions in multi-class multi-label classification problems using deep learning?
|
<p>Dataset description:</p>
<p>(1) X_train: <code>(6000,4)</code> shape</p>
<p>(2) y_train: <code>(6000,4)</code> shape</p>
<p>(3) X_validation: <code>(2000,4)</code> shape</p>
<p>(4) y_validation: <code>(2000,4)</code> shape</p>
<p>(5) X_test: <code>(2000,4)</code> shape</p>
<p>(6) y_test: <code>(2000,4)</code> shape</p>
<p>Relationship between X and Y is shown <a href="https://i.stack.imgur.com/5gcT5.png" rel="nofollow noreferrer">here</a></p>
<p>For single-label classification, the activation function of the last layer is softmax and the loss function is categorical_crossentropy.
And I know the mathematical calculation method for that loss function.</p>
<p>For multi-class multi-label classification problems, the activation function of the last layer is sigmoid, and the loss function is binary_crossentropy.
I want to know how the mathematical calculation of that loss function works.</p>
<p>It would be a great help to me if you let me know.</p>
<pre><code>def MinMaxScaler(data):
    numerator = data - np.min(data)
    denominator = np.max(data) - np.min(data)
    return numerator / (denominator + 1e-5)

kki = pd.read_csv(filename,names=['UE0','UE1','UE2','UE3','selected_UE0','selected_UE1','selected_UE2','selected_UE3'])
print(kki)

def LoadData(file):
    xy = np.loadtxt(file, delimiter=',', dtype=np.float32)
    print("Data set length:", len(xy))
    tr_set_size = int(len(xy) * 0.6)
    xy[:, 0:-number_of_UEs] = MinMaxScaler(xy[:, 0:-number_of_UEs])  #number_of_UEs : 4
    X_train = xy[:tr_set_size, 0: -number_of_UEs]  #6000 rows
    y_train = xy[:tr_set_size, number_of_UEs:number_of_UEs*2]
    X_valid = xy[tr_set_size:int((tr_set_size/3) + tr_set_size), 0:-number_of_UEs]
    y_valid = xy[tr_set_size:int((tr_set_size/3) + tr_set_size), number_of_UEs:number_of_UEs*2]
    X_test = xy[int((tr_set_size/3) + tr_set_size):, 0:-number_of_UEs]
    y_test = xy[int((tr_set_size/3) + tr_set_size):, number_of_UEs:number_of_UEs*2]
    print("Training X shape:", X_train.shape)
    print("Training Y shape:", y_train.shape)
    print("validation x shape:", X_valid.shape)
    print("validation y shape:", y_valid.shape)
    print("Test X shape:", X_test.shape)
    print("Test Y shape:", y_test.shape)
    return X_train, y_train, X_valid, y_valid, X_test, y_test, tr_set_size

X_train, y_train, X_valid, y_valid, X_test, y_test, tr_set_size = LoadData(filename)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(X_train.shape[1],)))
model.add(Dense(46, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(12, activation='relu'))
model.add(Dense(4, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(X_train, y_train, epochs=5, batch_size=1, verbose=1, validation_data=(X_valid, y_valid), callbacks=es)
</code></pre>
<p>This is the training log; even as the epochs go on,
accuracy does not improve.</p>
<pre><code>Epoch 1/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.2999 - accuracy: 0.5345 - val_loss: 0.1691 - val_accuracy: 0.5465
Epoch 2/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.1554 - accuracy: 0.4883 - val_loss: 0.1228 - val_accuracy: 0.4710
Epoch 3/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.1259 - accuracy: 0.4710 - val_loss: 0.0893 - val_accuracy: 0.4910
Epoch 4/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.1094 - accuracy: 0.4990 - val_loss: 0.0918 - val_accuracy: 0.5540
Epoch 5/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0967 - accuracy: 0.5223 - val_loss: 0.0671 - val_accuracy: 0.5405
Epoch 6/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0910 - accuracy: 0.5198 - val_loss: 0.0836 - val_accuracy: 0.5380
Epoch 7/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0870 - accuracy: 0.5348 - val_loss: 0.0853 - val_accuracy: 0.5775
Epoch 8/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0859 - accuracy: 0.5518 - val_loss: 0.0515 - val_accuracy: 0.6520
Epoch 9/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0792 - accuracy: 0.5508 - val_loss: 0.0629 - val_accuracy: 0.4350
Epoch 10/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0793 - accuracy: 0.5638 - val_loss: 0.0632 - val_accuracy: 0.6270
</code></pre>
|
<p><strong>Mistake 1</strong> -
The shape of <code>y_train</code>, <code>y_validation</code> and <code>y_test</code> should be <code>(6000,)</code>, <code>(2000,)</code> and <code>(2000,)</code> respectively.</p>
<p><strong>Mistake 2</strong> -
For multi-class classification, the loss should be <code>categorical_crossentropy</code> and activation should be a <code>softmax</code>. So, change these two lines, like this:</p>
<pre><code>model.add(Dense(4, activation= 'softmax'))
model.compile(loss ='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p><strong>Suggestion</strong> -
Why are you splitting data by yourself? Use scikit-learn <code>train_test_split</code>. This code will give you proper splits:</p>
<pre><code>from sklearn.model_selection import train_test_split
x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2, train_size=0.8)
x_train, x_validation, y_train, y_validation = train_test_split(x, y, test_size = 0.25, train_size =0.75)
</code></pre>
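<p>As for the part of the original question about how <code>binary_crossentropy</code> is computed mathematically in the multi-label case: each of the outputs is treated as an independent sigmoid (Bernoulli) probability, a binary cross-entropy term is computed per label, and the terms are averaged over the labels and then over the batch. A minimal NumPy sketch of that standard definition (not specific to this model):</p>
<pre><code>import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true, y_pred: shape (batch, n_labels); y_pred are sigmoid outputs in (0, 1)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_label = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return per_label.mean()   # mean over labels, then over the batch

y_true = np.array([[1, 0, 1, 0]], dtype=float)
y_pred = np.array([[0.9, 0.2, 0.7, 0.1]], dtype=float)
print(binary_crossentropy(y_true, y_pred))   # ~0.198
</code></pre>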
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
375,416
| 68,983,852
|
Pandas UDF Function Takes Unusually Long to Complete on Big Data
|
<p>I'm new to PySpark and Pandas UDFs. I'm running the following Pandas UDF function to jumble a column containing strings (for example, an input '<em>Luke</em>' will result in '<em>ulek</em>'):</p>
<pre><code>pandas_udf("string")
def jumble_string(column: pd.Series)-> pd.Series:
return column.apply(lambda x: None if x==None else ''.join(random.sample(x, len(x))).lower())
spark_df = spark_df.withColumn("names", jumble_string("names"))
</code></pre>
<p>On running the above function on a large dataset I've noticed that the execution takes unusually long.</p>
<p>I'm guessing the <code>.apply</code> function has something to do with this issue.</p>
<p>Is there any way I can rewrite this function so it can execute efficiently on a big dataset?
<em>Please Advise</em></p>
|
<p>As the <code>.apply</code> method is not vectorized, the given operation is done by looping through the elements, which slows down execution as the data size becomes large.</p>
<p>For small-sized data, the time difference is usually negligible. However, as the size increases, the difference starts to become noticeable. We are likely to deal with vast amounts of data, so time should always be taken into consideration.</p>
<p>You can read more about Apply vs Vectorized Operations <a href="https://towardsdatascience.com/efficient-pandas-apply-vs-vectorized-operations-91ca17669e84" rel="nofollow noreferrer">here</a>.</p>
<p>Therefore I decided to use a list comprehension which did increase my performance marginally.</p>
<pre><code>@pandas_udf("string")
def jumble_string(column: pd.Series)-> pd.Series:
return pd.Series([None if x==None else ''.join(random.sample(x, len(x))).lower() for x in column])
</code></pre>
|
python|pandas|pyspark|user-defined-functions
| 1
|
375,417
| 68,899,113
|
Using schema and table arguments in Pandas read_sql parameters
|
<p>I want to run the query <code>select count(?) from ?.?;</code> using pandas' <code>read_sql()</code> method with the parameters <code>select count(<column_name>) from <schema_name>.<table_name>;</code>.</p>
<p>I get the error <code>ValueError: ('Could not connect to db.', DatabaseError('Execution failed on sql \'select count(?) from ?.?;\': (\'42601\', \'[42601] ERROR: syntax error at or near "$2";\\nError while preparing parameters (1) (SQLExecDirectW)\')'))</code>.</p>
<p>The code I have is just like you'd expect:</p>
<pre><code>pd.read_sql('''select count(?) from ?.?;''', conn, params=[column_name, schema_name, table_name])
</code></pre>
<p>The actual values I am providing are: <code>column_name</code>=<code>record_id</code>, <code>schema_name</code>=<code>c_admin</code>, <code>table_name</code>=<code>backup_table</code>.</p>
<p>I'm using pyodbc to create the connection, and PostgreSQL is the database I am using.</p>
|
<p><code>Pyodbc</code> doesn't support parameterizing SQL identifiers like schemas and tables. The library <code>psycopg2</code> will allow you to create SQL strings dynamically with these types of identifiers. By following these docs on the <a href="https://www.psycopg.org/docs/sql.html" rel="nofollow noreferrer">psycopg2 site</a>, you can do the following:</p>
<pre><code>from psycopg2 import connect, sql
conn = connect("dbname=DBNAME user=USER password=PASSWORD host=HOST port=PORT")
sql_string = """
select
count( {column} ) as not_null_count
from
{schema}.{table}
;
"""
query = sql.SQL(sql_string).format(
column=sql.Identifier('record_id'),
schema=sql.Identifier('c_admin'),
table=sql.Identifier('backup_table'),
)
df = pd.read_sql(query, conn)
</code></pre>
|
sql|pandas|postgresql|dataframe|query-parameters
| 0
|
375,418
| 69,151,019
|
How to solve the problem with tf.keras.optimizers.Adam(lr=0.001) command not working?
|
<p>I'm working on Google Colab and when I type</p>
<p><code>model.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-6), loss=tf.keras.losses.BinaryCrossentropy())</code></p>
<p>it doesn't work and I get the following error message</p>
<p><code>Could not interpret optimizer identifier: <keras.optimizer_v2.adam.Adam object at 0x7f21a9b34d50></code></p>
|
<p>Generally, this happens when you mix different Keras versions in the layers import and the optimizer import,
e.g. the tensorflow.python.keras API for the model and layers but keras.optimizers for the optimizer. TensorFlow's bundled Keras and standalone Keras are two different packages and cannot work together; change everything to one version and it should work.</p>
<p>Maybe try import:</p>
<pre><code>from tensorflow.keras.optimizers import Adam
model.compile(optimizer=Adam(lr=1e-6),loss=tf.keras.losses.BinaryCrossentropy())
</code></pre>
|
python|tensorflow|keras|tf.keras|adam
| 1
|
375,419
| 68,941,232
|
Pandas: How to explode data frame with json arrays
|
<p>How to explode pandas data frame?</p>
<p>Input df:</p>
<p><a href="https://i.stack.imgur.com/K4Se9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K4Se9.png" alt="enter image description here" /></a></p>
<p>Required output df:</p>
<pre><code>+----------------+------+-----+------+
|level_2 | date | val | num |
+----------------+------+-----+------+
| name_1a | 2020 | 1 | null |
| name_1b | 2019 | 2 | null |
| name_1b | 2020 | 3 | null |
| name_10000_xyz | 2018 | 4 | str |
| name_10000_xyz | 2019 | 5 | null |
| name_10000_xyz | 2020 | 6 | str |
+------------------------------------+
</code></pre>
<p>To reproduce input df:</p>
<pre><code>import pandas as pd
pd.set_option('display.max_colwidth', None)
data={'level_2':{1:'name_1a',3:'name_1b',5:'name_10000_xyz'},'value':{1:[{'date':'2020','val':1}],3:[{'date':'2019','val':2},{'date':'2020','val':3}],5:[{'date':'2018','val':4,'num':'str'},{'date':'2019','val':5},{'date':'2020','val':6,'num':'str'}]}}
df = pd.DataFrame(data)
</code></pre>
|
<p><code>Explode</code> the dataframe on <code>value</code> column, then <code>pop</code> the <code>value</code> column and create a new dataframe from it then <code>join</code> the new frame with the exploded frame.</p>
<pre><code>s = df.explode('value', ignore_index=True)
s.join(pd.DataFrame([*s.pop('value')], index=s.index))
</code></pre>
<hr />
<pre><code> level_2 date val num
0 name_1a 2020 1 NaN
1 name_1b 2019 2 NaN
2 name_1b 2020 3 NaN
3 name_10000_xyz 2018 4 str
4 name_10000_xyz 2019 5 NaN
5 name_10000_xyz 2020 6 str
</code></pre>
|
python|json|pandas|dataframe
| 6
|
375,420
| 69,218,874
|
Why is 'metrics = tf.keras.metrics.Accuracy()' giving an error but 'metrics=['accuracy']' isn't?
|
<p>I'm using the given code example on the fashion_mnist dataset. It contains <code>metrics="accuracy"</code> and runs through. Whenever I change it to <code>metrics=tf.keras.metrics.Accuracy()</code> it gives me the following error:</p>
<pre><code>ValueError: Shapes (32, 10) and (32, 1) are incompatible
</code></pre>
<p>What am I doing wrong? Is the <code>Accuracy()</code> function not the same?</p>
<pre><code>import tensorflow as tf
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images / 255.
test_images = test_images / 255.
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation=tf.keras.activations.relu),
tf.keras.layers.Dense(10)])
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)
</code></pre>
|
<p>Based on the docs <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>When you pass the strings <code>"accuracy"</code> or <code>"acc"</code>, we convert this to one of <code>tf.keras.metrics.BinaryAccuracy</code>, <code>tf.keras.metrics.CategoricalAccuracy</code>, <code>tf.keras.metrics.SparseCategoricalAccuracy</code> based on the loss function used and the model output shape.</p>
</blockquote>
<p>So, when you pass <code>"accuracy"</code> it will be converted to the <code>SparseCategoricalAccuracy()</code> automatically.</p>
<p>So you can pass it like following:</p>
<pre><code>model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# or
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
</code></pre>
|
python|tensorflow|machine-learning|keras
| 2
|
375,421
| 69,173,640
|
How to get the coefficients of the polynomial in python
|
<p>I need to create a function <code>get_polynom</code> that will take a list of tuples <code>(x1, y1), (x2, y2), ..., (xn, yn)</code> representing points and find the coefficients of the polynomial <code>c0, c1, ..., cn</code>.</p>
<p>I can't manage to understand the task, the only tip I have is the provided part of the function:</p>
<pre><code>import numpy as np
def get_polynom(coords):
    ...
    return np.linalg.solve(a, b)
</code></pre>
<p>Has somebody done something similar? Just a little explanation of what exactly is expected would be great!</p>
<p>Thanks in advance!</p>
|
<p>A polynomial is a function <code>f(x) = cn x^n + ... + c1 x + c0</code>. With the list of tuples you get n+1 equations of the form <code>f(xi) = yi</code> for i going from 1 to n+1. If you substitute <code>xi</code> and <code>yi</code> into each equation, you obtain a linear system of n+1 equations in the n+1 unknowns <code>cn</code> to <code>c0</code>. Writing this in matrix form, you get <code>A*C = B</code>.</p>
<p>The <code>a</code> argument of the <code>np.linalg.solve</code> represents the <code>A</code> matrix, in this case the "weights" of the polynomial coefficients - that is the powers of x (e.g. one row will be <code>[xi^n, xi^(n-1), ..., xi, 1]</code>). The <code>b</code> argument will be the vector of <code>yi</code>.</p>
<p>Note that if your polynomial is of degree n, you need n+1 tuples to solve for its coefficients.</p>
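<p>A minimal sketch of one way to fill in the stub, using <code>np.vander</code> to build the coefficient matrix (the names <code>a</code> and <code>b</code> follow the provided stub):</p>
<pre><code>import numpy as np

def get_polynom(coords):
    # one equation c_n*x^n + ... + c_1*x + c_0 = y per point
    xs, ys = zip(*coords)
    a = np.vander(xs, N=len(coords))   # rows are [x^n, ..., x, 1]
    b = np.array(ys, dtype=float)
    return np.linalg.solve(a, b)

print(get_polynom([(0, 1), (1, 3), (2, 9)]))  # y = 2x^2 + 1 -> [2. 0. 1.]
</code></pre>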
|
python|numpy|polynomials|coefficients
| 1
|
375,422
| 69,108,115
|
custom padding in convolution layer of tensorflow 2.0
|
<p>In PyTorch, nn.conv2d()'s padding parameter allows a user to enter a padding size of choice (like p=n). There is no such equivalent in TensorFlow. How can we achieve similar customization? It would be much appreciated if a small network were designed, using the usual CNN layers like pooling and FC, to demonstrate how to go about it, starting from the input layer.</p>
|
<p>You can use <code>tf.pad</code> followed by a convolution with no ("valid") padding. Here's a simple example:</p>
<pre><code>inp = tf.keras.Input((32, 32, 3)) # e.g. CIFAR10 images
custom_padded = tf.pad(inp, ((0, 0), (2, 0), (2, 0), (0, 0)))
conv = tf.keras.layers.Conv2D(16, 3)(custom_padded) # default padding is "valid"
model = tf.keras.Model(inp, conv)
</code></pre>
<p>The syntax for padding can take some getting used to, but basically each 2-tuple stands for a dimension (batch, height, width, channels), and within each 2-tuple the first number is how many elements to pad in front, and the second one how many to pad in the back. So in this case:</p>
<ul>
<li>no padding in the batch axis</li>
<li>2 elements on top, 0 elements on the bottom for the height axis</li>
<li>2 elements on the left, 0 elements on the right for the width axis</li>
<li>no padding in the channel axis</li>
</ul>
<p>In this example, we are using 16 filters with a filter size of 3. Normally this would require a padding of 1 element on each side to achieve "same" padding, but here we decide to pad 2 elements on one side and 0 on the other. This can of course be adapted to any other scheme you want/need.</p>
<p>This uses 0-padding by default, but you can change that in the <code>pad</code> function. See the docs: <a href="https://www.tensorflow.org/api_docs/python/tf/pad" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/pad</a></p>
<p>Note that I left out pooling or other layers because this should be simple to add. The basic recipe is just to replace the convolution layer by <code>pad</code> plus convolution with no padding.</p>
|
python-3.x|tensorflow|keras|deep-learning|tensorflow2.0
| 2
|
375,423
| 68,910,105
|
Trying to save a model to a pb file and I don't have a .meta file
|
<p>I trained a custom object detector model through the TensorFlow object detection module and I used mobilenetssd as my pretrained model. After training was done, I have three files:</p>
<pre><code> checkpoint
ckpt-11.data-00000-of-00001
ckpt-11.index
</code></pre>
<p>Additionally I have this file as well:</p>
<pre><code>pipeline.config
</code></pre>
<p>I am trying to save this model as a pb file and I want to use the program provided in this <a href="https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc" rel="nofollow noreferrer">tutorial</a>. Can I run this program without the .meta file? How would I generate the .meta file? Additionally, where would I get the output_node_names as well?</p>
<p>Edit: I did manage to run inference with this model using the ckpt-11.index as well.</p>
|
<p>You can load the model with the above files only.</p>
<p>The model will load weights based on the index file <code>ckpt-11.index</code> and the shard file <code>ckpt-11.data-00000-of-00001</code>.</p>
<p>Basically, the <code>shard</code> file contains the model weights and the <code>index</code> file indicates which weights are stored in which shard.
Usually there will be only a single shard if you are training on a single machine.
For more details on saving and loading models with TensorFlow, you can refer to <a href="https://www.tensorflow.org/tutorials/keras/save_and_load" rel="nofollow noreferrer">this</a> document.</p>
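<p>A minimal sketch of restoring such a TF2 Object Detection API checkpoint in Python (assuming the <code>object_detection</code> package is installed; paths are placeholders). Note that TF2 checkpoints have no <code>.meta</code> file, so the TF1-style freezing flow in the linked tutorial does not apply; the Object Detection API instead ships an exporter script (<code>exporter_main_v2.py</code>) that turns <code>pipeline.config</code> plus the checkpoint directory into a SavedModel (<code>saved_model.pb</code>).</p>
<pre><code>import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

configs = config_util.get_configs_from_pipeline_file('pipeline.config')
detection_model = model_builder.build(model_config=configs['model'], is_training=False)

ckpt = tf.train.Checkpoint(model=detection_model)
ckpt.restore('ckpt-11').expect_partial()   # the prefix covers ckpt-11.index + ckpt-11.data-*
</code></pre>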
|
python|tensorflow|object-detection
| 1
|
375,424
| 68,891,221
|
TypeError: swaplevel() got an unexpected keyword argument 'axis'
|
<p>I am kind of new to pandas.
I am using <code>unstack</code> and <code>swaplevel</code> to pivot my dataframe and I am getting this error:</p>
<blockquote>
<p>TypeError: swaplevel() got an unexpected keyword argument 'axis'</p>
</blockquote>
<p>I have checked the pandas doc and the function does take axis as an argument. What am I doing wrong, please?
Thank you!</p>
|
<p><code>swaplevel(i=-2, j=-1, axis=0)</code> does have an axis argument; I may have to see your code to be able to trace your error.</p>
<p>On the other hand, maybe you should try to use swaplevel for each axis individually.
For example:</p>
<pre><code>In [1]:
df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]})
Out[1]:
a b
0 A 1
1 A 2
2 B 5
3 B 5
4 B 4
5 C 6
df.swaplevel(0)
df.swaplevel(1)
</code></pre>
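<p>Note that <code>swaplevel</code> only makes sense on a hierarchical (MultiIndex) axis; on a flat index it raises an error. Also, the <code>axis</code> keyword exists for <code>DataFrame.swaplevel</code> but not for <code>Series.swaplevel</code>, so one possible cause of that TypeError is calling it on a Series (e.g. the result of <code>unstack</code>-related chains) rather than on a DataFrame. A minimal sketch with an actual MultiIndex (my own toy example):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': ['A', 'A', 'B'], 'b': [1, 2, 5], 'v': [10, 20, 30]})
df = df.set_index(['a', 'b'])      # build a MultiIndex
print(df.swaplevel(0, 1, axis=0))  # swap the two index levels of the DataFrame

s = df['v']                        # a Series: swaplevel works, but has no axis kwarg
print(s.swaplevel(0, 1))
</code></pre>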
|
python-3.x|pandas
| 0
|
375,425
| 69,053,558
|
How to remove NaN in subtracting?
|
<p>I am trying to perform subtraction in Python. This is a simple task when performed in Excel, but I want to do this in a Jupyter notebook.</p>
<p>Below is my code:</p>
<pre><code>import pandas as pd
from sklearn import linear_model
import numpy as np
#Read X1 anomaly
X1= pd.read_csv (r'file\X1.csv')
X1 = pd.DataFrame(X1,columns=['Year','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'])
X1= X1[X1['Year'].between(1984,2020, inclusive="both")]
#X1 = X1["Mar"].describe()
#print (X1)
#Read X2 anomaly
X2= pd.read_csv (r'file\X2.csv')
X2 = pd.DataFrame(X2,columns=['Year','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'])
X2= X2[X2['Year'].between(1984,2020, inclusive="both")]
#X2 = X2["Mar"].describe()
#print (X2)
X1 = X1["Mar"]
X2 = X2["Mar"]
#### my goal is to remove transform X2 by removing their line of fit
regr = linear_model.LinearRegression()
regr.fit(X1.values.reshape(-1,1), X2)
Trend=regr.coef_*X1+regr.intercept_
X3=np.subtract(X2,Trend)
print (X3)
</code></pre>
<p>And here is the data <a href="https://drive.google.com/drive/folders/18AaLBFi5KLx16iSvBPClxzTF7lZPaI1q?usp=sharing" rel="nofollow noreferrer">link</a>. I want to remove the linear relationship between X1 and X2, so I performed a regression of X2 on X1 and then want to subtract the trend line from X2 to make X3. However, there are a lot of NaNs in X3. Please help me with what I should do.</p>
|
<p>I found this answer after experimenting with the code, and I want to share it in case someone is experiencing a similar issue.</p>
<pre><code>regr = linear_model.LinearRegression()
regr.fit(X1.values.reshape(-1,1), X2)
Trend=regr.coef_*X1+regr.intercept_
X3=X2-np.array(Trend)
print (X3)
</code></pre>
<p>Notice what I did in the subtraction formula: the NaNs appeared because pandas aligns the two Series on their (different) indexes when subtracting, and converting <code>Trend</code> to a plain NumPy array bypasses that alignment. Thank you.</p>
|
numpy|jupyter-notebook|regression|subtraction
| 0
|
375,426
| 69,247,515
|
How to apply a function with several variables to a column of a pandas dataframe (when it is not possible to change the order of vars in func)
|
<p>I would like to apply a func to a column of a pandas DataFrame.
Such a func takes one string and one column of the DF.</p>
<p>As follows:</p>
<pre><code>def check_it(language,text):
    print(language)
    if language == 'EN':
        result = 'DNA' in text
    else:
        result = 'NO'
    return result

df = pd.DataFrame({'ID':['1','2','3'], 'col_1': ['DNA','sdgasdf','sdfsdf'], 'col_2':['sdfsf sdf s','DNA','sdgasdf']})
df['col_3']=df['col_2'].apply(check_it, args=('EN',))
df
</code></pre>
<p>This does not produce the required results because, even though 'EN' is passed as the first argument, printing 'language' inside the func shows that it receives the element of the column instead.</p>
<p>In the pandas documentation here: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html</a>
the example is not 100% clear:</p>
<pre><code>def subtract_custom_value(x, custom_value):
    return x - custom_value

s.apply(subtract_custom_value, args=(5,))
</code></pre>
<p>It looks like the first variable of the func has to be the series.
If the functions are already given and changing the order of variables is not possible, how should I proceed?
What if the func takes multiple variables and only the third one out of 6 is the series of the dataframe?</p>
<p>thx.</p>
<p>NOTE:
The following would work but it is not a valid option:</p>
<pre><code>def check_it(text, language):
    ...

df['col_3'] = df['col_2'].apply(check_it, args=('EN',))
</code></pre>
<p>since I can not change the order of the variables in the func</p>
|
<p>You can always create a lambda, and in the body, invoke your function as needed:</p>
<pre><code>df['col_3']=df['col_2'].apply(lambda text: check_it('EN', text))
df
ID col_1 col_2 col_3
0 1 DNA sdfsf sdf s False
1 2 sdgasdf DNA True
2 3 sdfsdf sdgasdf False
</code></pre>
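<p>The same pattern generalizes when the Series value has to land in an arbitrary position of a function you cannot modify. For instance (a sketch where <code>some_func</code> and the arguments <code>a, b, d, e, f</code> are placeholders, not from the original question), if the column value must be the third of six parameters:</p>
<pre><code>df['col_4'] = df['col_2'].apply(lambda text: some_func(a, b, text, d, e, f))
</code></pre>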
|
python|pandas|apply
| 2
|
375,427
| 69,167,227
|
convert entire column with seconds to hours (pandas)
|
<p>there is such a dataframe</p>
<pre><code> x laikas_s
0 meh 5237
1 elec 20925
</code></pre>
<p>I want to get such a dataframe</p>
<pre><code> x laikas_s
0 meh 1:27:17
1 elec 5:48:45
</code></pre>
<p>In plain Python I would convert it like this:</p>
<pre><code>import datetime
sec = 20925
a = datetime.timedelta(seconds=sec)
print(a)
</code></pre>
<p>how to do it in <code>Pandas</code>?</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>pd.to_timedelta()</code></a> and specify the unit as second, as follows:</p>
<pre><code>df['laikas_s'] = pd.to_timedelta(df['laikas_s'], unit='S')
</code></pre>
<p>Result:</p>
<pre><code>print(df)
x laikas_s
0 meh 0 days 01:27:17
1 elec 0 days 05:48:45
</code></pre>
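<p>If you need plain <code>H:MM:SS</code> strings (as in the question's expected output) rather than Timedelta objects, one option is to format the components manually. This is a sketch assuming <code>laikas_s</code> still holds the raw integer seconds:</p>
<pre><code>td = pd.to_timedelta(df['laikas_s'], unit='S')
df['laikas_s'] = td.dt.components.apply(
    lambda c: f"{c.hours + c.days * 24}:{c.minutes:02d}:{c.seconds:02d}", axis=1)
</code></pre>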
|
python|python-3.x|pandas|dataframe
| 3
|
375,428
| 69,280,095
|
numpy arange functon throws AttributeError: while the same code gets executed over online ide
|
<p>This is very strange. The np.arange function throws an AttributeError when I execute the code on my computer, but it works fine on the <a href="https://mybinder.org/v2/gh/spyder-ide/spyder/5.x?urlpath=/desktop" rel="nofollow noreferrer">online ide</a>.
The code is very simple:
<code>trange = np.arange(0,180,(12/60))</code></p>
<p>I am using Spyder IDE 5.0.1 and python 3.7.9</p>
<p><a href="https://i.stack.imgur.com/adKJa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/adKJa.png" alt="enter image description here" /></a></p>
<pre><code>trange = np.arange(0,180,(12/60))
</code></pre>
|
<p>Search in your code where you have set <code>np = ...</code></p>
<p>Reproducible error:</p>
<pre><code>import numpy as np
np = 12.3
trange = np.arange(0, 180, (12/60))
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>----> 5 trange = np.arange(0, 180, (12/60))
AttributeError: 'float' object has no attribute 'arange'
</code></pre>
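<p>If this happened in an interactive Spyder/IPython session, one way to recover without restarting the kernel (after removing the offending assignment from the code) is:</p>
<pre><code># remove the name that shadows the module, then re-import it
del np
import numpy as np
trange = np.arange(0, 180, (12/60))
</code></pre>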
|
python|numpy
| 0
|
375,429
| 69,185,427
|
Combining three datasets removing duplicates
|
<p>I've three datasets:</p>
<p><strong>dataset 1</strong></p>
<pre><code>Customer1 Customer2 Exposures + other columns
Nick McKenzie Christopher Mill 23450
Nick McKenzie Stephen Green 23450
Johnny Craston Mary Shane 12
Johnny Craston Stephen Green 12
Molly John Casey Step 1000021
</code></pre>
<p><strong>dataset2</strong> (unique Customers: Customer 1 + Customer 2)</p>
<pre><code>Customer Age
Nick McKenzie 53
Johnny Craston 75
Molly John 34
Christopher Mill 63
Stephen Green 65
Mary Shane 54
Casey Step 34
Mick Sale
</code></pre>
<p><strong>dataset 3</strong></p>
<pre><code>Customer1 Customer2 Exposures + other columns
Mick Sale Johnny Craston
Mick Sale Stephen Green
</code></pre>
<p>Exposures refers to Customer 1 only.</p>
<p>There are other columns omitted for brevity. Dataset 2 is built by getting unique customer 1 and unique customer 2: no duplicates are in that dataset. Dataset 3 has the same column of dataset 1.</p>
<p>I'd like to add the information from dataset 1 into dataset 2 to have</p>
<p><strong>Final dataset</strong></p>
<pre><code>Customer Age Exposures + other columns
Nick McKenzie 53 23450
Johnny Craston 75 12
Molly John 34 1000021
Christopher Mill 63
Stephen Green 65
Mary Shane 54
Casey Step 34
Mick Sale
</code></pre>
<p>The final dataset should have all Customer1 and Customer 2 from both dataset 1 and dataset 3, with no duplicates.
I have tried to combine them as follows</p>
<pre><code>result = pd.concat([df2,df1,df3], axis=1)
</code></pre>
<p>but the result is not the one I'd expect.
Something is wrong in the way I'm concatenating the datasets, and I'd appreciate it if you could let me know what it is.</p>
|
<p>After concatenating df1 and df3 (they have the same columns), we can remove the duplicates using <code>drop_duplicates(subset=['Customer1'])</code> and then join with <code>df2</code> like this:</p>
<pre><code>df1.set_index('Customer1').join(df2.set_index('Customer'))
</code></pre>
<p>In case the two frames have different columns, we can still join on the respective primary keys as above and then join with the <code>age</code> table again.</p>
<p>This gives the desired result: concatenate dataset 1 and dataset 3 (they share the same columns), drop the duplicates, and then run the join above, specifying the respective keys.</p>
<p>Note: though not strictly related to the question, for the concatenation one can use <code>pd.concat([df1, df3], ignore_index=True)</code> (here <code>ignore_index=True</code> resets the index).</p>
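<p>Putting the steps together, a minimal sketch using the column names from the question (keeping all customers from dataset 2 and attaching the exposures of Customer 1 where available):</p>
<pre><code>combined = pd.concat([df1, df3], ignore_index=True).drop_duplicates(subset=['Customer1'])
final = (df2.set_index('Customer')
            .join(combined.set_index('Customer1'))
            .reset_index())
</code></pre>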
|
python|pandas
| 1
|
375,430
| 68,882,592
|
scraping all the data from attribute <a>
|
<p>This is the code I have written so far; I need to get all the IDs from the page.
<a href="https://i.stack.imgur.com/6JFb0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6JFb0.png" alt="enter image description here" /></a></p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
url = "https://bugzilla.mozilla.org/buglist.cgi?quicksearch=all"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
ID = soup.find_all("a")
data =[]
for name in ID:
data.append({"ID":name.text})
df=pd.DataFrame(data)
print(df)
</code></pre>
|
<p>Is this what you want?</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
url = "https://bugzilla.mozilla.org/buglist.cgi?quicksearch=all"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
ID = soup.find_all("a")
data =[]
for name in ID:
if name.text.isnumeric():
data.append({"ID":name.text})
df = pd.DataFrame(data)
print(df)
</code></pre>
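<p>The bug links on that page normally point at <code>show_bug.cgi</code>, so an alternative (a sketch, not verified against the live page) is to filter on the <code>href</code> instead of checking that the link text is numeric:</p>
<pre><code>ids = [a.get_text(strip=True) for a in soup.select('a[href*="show_bug.cgi?id="]')]
df = pd.DataFrame({"ID": ids})
print(df)
</code></pre>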
<p>Tell me if it's not working...</p>
|
python-3.x|pandas|web-scraping|beautifulsoup
| 1
|
375,431
| 69,099,426
|
how to write a function to filter rows based on a list values one by one and make analysis
|
<p>I have a list containing more than 10 values and a full dataframe. I'd like to filter the dataframe on each value from the list into a sub-dataframe and do some analysis on each of them. How can I write a function so I don't need to copy-paste and change the value so many times?</p>
<p>eg.</p>
<pre><code>list = ['A','B','C']
df1 = df[df['column1']=='A']
df2 = df[df['column1']=='B']
df3 = df[df['column1']=='C']
</code></pre>
<p>For each sub-dataframe, I will do a groupby and value count:</p>
<pre><code>df1.groupby(['column2']).size()
df2.groupby(['column2']).size()
df3.groupby(['column2']).size()
</code></pre>
|
<p>First, creating many separate DataFrames is not necessary here.</p>
<p>You can filter only necessary values for <code>column1</code> and pass both columns to <code>groupby</code>:</p>
<pre><code>L = ['A','B','C']
s = df1[df1['column1'].isin(L)].groupby(['column1', 'column2']).size()
</code></pre>
<p>Last select by values of list:</p>
<pre><code>s.loc['A']
s.loc['B']
s.loc['C']
</code></pre>
<p>If want function:</p>
<pre><code>def f(df, x):
    return df[df['column1'].eq(x)].groupby(['column2']).size()
print (f(df1, 'A'))
</code></pre>
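<p>Usage sketch, looping over the list once instead of copy-pasting per value:</p>
<pre><code>for value in L:
    print(value)
    print(f(df1, value))
</code></pre>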
|
python|pandas|dataframe|filter
| 0
|
375,432
| 69,004,997
|
Pandas DataFrame Time index using .loc function error
|
<p>I have created DataFrame with DateTime index, then I split the index into the Date index column and Time index column. Now, when I call for a row of a specific time by using pd.loc(), the system shows an error.</p>
<p>Here're an example of steps of how I made the DataFrame from beginning till reaching my consideration.</p>
<pre><code>import pandas as pd
import numpy as np
df= pd.DataFrame({'A':[1, 2, 3, 4], 'B':[5, 6, 7, 8], 'C':[9, 10, 11, 12],
'DateTime':pd.to_datetime(['2021-09-01 10:00:00', '2021-09-01 11:00:00', '2021-09-01 12:00:00', '2021-09-01 13:00:00'])})
df=df.set_index(df['DateTime'])
df.drop('DateTime', axis=1, inplace=True)
df
</code></pre>
<p><strong>OUT >></strong></p>
<pre><code> A B C
DateTime
2021-09-01 10:00:00 1 5 9
2021-09-01 11:00:00 2 6 10
2021-09-01 12:00:00 3 7 11
2021-09-01 13:00:00 4 8 12
</code></pre>
<p><strong>In this step, I'm gonna splitting DateTime index into multi-index Date & Time</strong></p>
<pre><code>df.index = pd.MultiIndex.from_arrays([df.index.date, df.index.time], names=['Date','Time'])
df
</code></pre>
<p><strong>OUT >></strong></p>
<pre><code> A B C
Date Time
2021-09-01 10:00:00 1 5 9
11:00:00 2 6 10
12:00:00 3 7 11
13:00:00 4 8 12
</code></pre>
<p><em><strong>##Here is the issue##</strong></em></p>
<p>when I call this statement, The system shows an error</p>
<pre><code>df.loc["11:00:00"]
</code></pre>
<p>How to fix that?</p>
|
<h3>1. If you want to use <code>.loc</code>, you can just specify the time by:</h3>
<pre><code>import datetime
df.loc[(slice(None), datetime.time(11, 0)), :]
</code></pre>
<p>or use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IndexSlice.html" rel="nofollow noreferrer"><code>pd.IndexSlice</code></a> similar to the solution by BENY, as follows:</p>
<pre><code>import datetime
idx = pd.IndexSlice
df.loc[idx[:,datetime.time(11, 0)], :]
</code></pre>
<p>(defining a variable <code>idx</code> to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IndexSlice.html" rel="nofollow noreferrer"><code>pd.IndexSlice</code></a> gives us cleaner code and less typing if you are going to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IndexSlice.html" rel="nofollow noreferrer"><code>pd.IndexSlice</code></a> multiple times).</p>
<p><strong>Result:</strong></p>
<pre><code> A B C
Date Time
2021-09-01 11:00:00 2 6 10
</code></pre>
<h3>2. If you want to select just for one day, you can use:</h3>
<pre><code>import datetime
df.loc[(datetime.date(2021, 9, 1), datetime.time(11, 0))]
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>A 2
B 6
C 10
Name: (2021-09-01, 11:00:00), dtype: int64
</code></pre>
<h3>3. You can also use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html" rel="nofollow noreferrer"><code>.xs</code></a> to access the MultiIndex row index, as follows:</h3>
<pre><code>import datetime
df.xs(datetime.time(11,0), axis=0, level='Time')
</code></pre>
<p><strong>Result:</strong></p>
<pre><code> A B C
Date
2021-09-01 2 6 10
</code></pre>
<h3>4. Alterative way if you haven't split DateTime index into multi-index Date & Time</h3>
<p>Actually, if you haven't split the DatetimeIndex into separate date and time index, you can also use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>.between_time()</code></a> function to filter the time, as follows:</p>
<pre><code>df.between_time("11:00:00", "11:00:00")
</code></pre>
<p>You can specify a range of time to filter, instead of just a point of time, if you specify different values for the <em>start_time</em> and <em>end_time</em>.</p>
<p><strong>Result:</strong></p>
<pre><code> A B C
DateTime
2021-09-01 11:00:00 2 6 10
</code></pre>
<p>As you can see, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>.between_time()</code></a> allows you to enter the time in simple string to filter, instead of requiring the use of datetime objects. This should be nearest to your tried ideal (but invalid) syntax of using <code>df.loc["11:00:00"]</code> to filter.</p>
<p>As a suggestion, if you split the DatetimeIndex into separate date and time index simply for the sake of filtering by time, you can consider using the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>.between_time()</code></a> function instead.</p>
|
python|pandas|dataframe|datetime|indexing
| 1
|
375,433
| 69,288,920
|
String formatting of data in column of pandas dataframe
|
<p>How can I remove <code>'$'</code> from a column's values?<br />
<strong>Example</strong>: I have a column with values like <code>$40</code>, <code>$23</code>, <code>$35</code>,<br />
I want to see those column values like <code>40</code>, <code>23</code>, <code>35</code>.</p>
<p>I have already tried:</p>
<pre><code>DF_['H1 % Change'] = DF_['H1 % Change'].replace(r'$', '')
</code></pre>
<p>This didn't work!</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html#pandas.Series.replace" rel="nofollow noreferrer"><code>Series.replace</code></a> replaces all values in the column that <em>exactly match</em> the first argument. You want <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html#pandas.Series.str.replace" rel="nofollow noreferrer"><code>Series.str.replace</code></a> to replace a substring. Since <code>$</code> is a regex anchor, pass <code>regex=False</code> (or escape it as <code>\$</code>) so the literal dollar sign is removed:</p>
<pre><code>DF_['H1 % Change'] = DF_['H1 % Change'].str.replace('$', '', regex=False)
</code></pre>
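<p>If the column should end up numeric (<code>40</code>, <code>23</code>, <code>35</code>) rather than as strings, a conversion can be chained on afterwards (sketch):</p>
<pre><code>DF_['H1 % Change'] = pd.to_numeric(DF_['H1 % Change'].str.replace('$', '', regex=False))
</code></pre>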
|
python|pandas|dataframe
| 3
|
375,434
| 69,045,906
|
How to access a specific column and also random simultaneously
|
<p>Accessing columns in pandas.</p>
<p>How can I access one specific column together with multiple random columns in a pandas data frame?
If I have 6 columns ['a', 'b', 'c', 'd', 'e', 'f'], how can I always get column 'a' while the remaining 3 are chosen at random? I tried using <code>df.sample()</code>, but it only returns random columns and column 'a' does not automatically show up.</p>
|
<p>Use <code>set_index</code> with <code>sample</code>:</p>
<pre><code>>>> df.set_index('a').sample(3, axis=1).reset_index()
a d e b
0 1 4 5 2
1 7 10 11 8
2 13 16 17 14
3 19 22 23 20
4 25 28 29 26
</code></pre>
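<p>If more than one column must always be kept, the same idea works with a list of columns (sketch):</p>
<pre><code>df.set_index(['a', 'b']).sample(2, axis=1).reset_index()
</code></pre>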
|
python|pandas|dataframe
| 2
|
375,435
| 68,961,523
|
Best method to cluster coordinates around set centroids (Improving Scikit K-Means output? Naive methods?)
|
<p>So basically I have two lists of coordinates, one with "home" points (centroids essentially) and one with "destination" points. I want to cluster these "destination" coordinates to the closest "home" points (as if the "home" points are centroids). Below is an example of what I want:</p>
<p><strong>Input</strong>:<br/>[home_coords_1, home_coords_2, home_coords_3]<br/>
[destination_coords_1, destination_coords_2, destination_coords_3, destination_coords_4, destination_coords_5]</p>
<p><strong>Output</strong>:<br/>[[home_coords_1, destination_coords_2, destination_coords_5],[home_coords_2, destination_coords_4], [home_coords_3, destination_coords_1, destination_coords_3]]
<br/><em>given that the "destination" coordinates are in close proximity to the "home" coordinate in its sub-array</em></p>
<p>I have already accomplished this with the K-Means clustering function in the scikit python package by passing the home coordinates as the initial centroids. However I noticed that there are some imperfections in the clustering. Also it seems as if this is almost an improper use of K-Means clustering as there is only one iteration happening (see the line of code below).</p>
<p><code>km = KMeans(n_clusters=len(home_coords_list), n_init= 1, init= home_coords).fit(destination_coords)</code></p>
<p>This brings me to my question: What is the best way to cluster a list of coordinates around a pre-set list of coordinates. An alternative I am thinking about is just running through the list of "home" coordinates and one by one picking <em>n</em> closest "destination" coordinates. This seems a lot more naive though. Any thoughts or suggestions? Any help is appreciated! Thank you!</p>
|
<p>You can use e.g. <code>scipy.spatial.KDTree</code>.</p>
<pre><code>from scipy.spatial import KDTree
import numpy as np
# sample arrays with home and destination coordinates
np.random.seed(0)
home = np.random.rand(10, 2)
destination = np.random.rand(50, 2)
kd_tree = KDTree(home)
labels = kd_tree.query(destination)[1]
print(labels)
</code></pre>
<p>This will give an array that for each <code>destination</code> point gives the index of the closest <code>home</code> point:</p>
<pre><code>[9 0 8 8 1 2 2 8 1 5 2 4 0 7 2 1 4 7 1 1 7 4 7 4 4 4 5 4 7 7 2 8 1 7 6 2 8
7 7 4 5 9 2 1 3 3 5 5 5 5]
</code></pre>
<p>Then for any given <code>home</code> point, you can find coordinates of all <code>destination</code> points clustered with that point:</p>
<pre><code># destination points clustered with `home[0]`
destination[labels == 0]
</code></pre>
<p>It gives:</p>
<pre><code>array([[0.46147936, 0.78052918],
[0.66676672, 0.67063787]])
</code></pre>
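<p>To build the nested structure described in the question (each sub-list starting with a home point followed by its destination points), the labels can be grouped back, for example with this sketch:</p>
<pre><code>clusters = [[home[i], *destination[labels == i]] for i in range(len(home))]
</code></pre>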
|
python|numpy|scikit-learn|coordinates|k-means
| 0
|
375,436
| 69,171,951
|
How to calculate the average of specific values in a column in a pandas data frame?
|
<p>My pandas data frame has 11 columns and 453 rows. I would like to calculate the average of the values in rows 450 to 453 in column 11. I would then like to add this 'average value' as a new column to my dataset.</p>
<p>I can use <code>df['average'] = df['norm'].mean()</code> to get the average of column 11 (here called norm). I'm not sure how to calculate the average of only specific rows within that column, though.</p>
|
<p>Here you go:</p>
<pre class="lang-py prettyprint-override"><code>df["average"] = df["norm"][450:].mean()
</code></pre>
<p>Demo:</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pd.DataFrame({"a": [1, 2, 6, 2, 3]})
>>> df
a
0 1
1 2
2 6
3 2
4 3
>>> df["b"] = df["a"][2:].mean()
>>> df
a b
0 1 3.666667
1 2 3.666667
2 6 3.666667
3 2 3.666667
4 3 3.666667
</code></pre>
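<p>If "rows 450 to 453" refers to positions rather than index labels, an explicit positional slice makes the intent clearer (sketch; adjust the bounds for 0- vs 1-based counting):</p>
<pre><code>df["average"] = df["norm"].iloc[449:453].mean()
</code></pre>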
|
python|pandas|dataframe
| 1
|
375,437
| 69,266,482
|
Create list of words and group them by index
|
<p>I have column of index and each index has it's corresponding word:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>word</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>word1</td>
</tr>
<tr>
<td>1</td>
<td>word2</td>
</tr>
<tr>
<td>1</td>
<td>word3</td>
</tr>
<tr>
<td>2</td>
<td>word4</td>
</tr>
<tr>
<td>2</td>
<td>word5</td>
</tr>
</tbody>
</table>
</div>
<p>and so on.</p>
<p>I want to group them by id in this way:
for id 1 - [word1, word2, word3],
for id 2 - [word4, word5],
and so on, and then export the result to a CSV file.</p>
<p>I have this code:</p>
<pre><code>df = pd.DataFrame(data)
d={"word":"first"}
df_new = df.groupby(df['id'], as_index=False).aggregate(d).reindex(columns=df['word'])
print (df_new)
df_new.to_csv('test.csv', sep='\t', encoding='utf-8', index=False)
</code></pre>
<p>What do I need to change in order for that to work?</p>
<p>Thank you in advance</p>
|
<pre><code># Import Dependencies
import pandas as pd
# Create DataFrame
data = {'id': [1, 1, 1, 2, 2], 'word': ['word1', 'word2', 'word3', 'word4', 'word5']}
df = pd.DataFrame(data)
# Groupby and Merge
df = df.groupby('id', as_index=False).agg({'word' : ','.join})
</code></pre>
<pre><code># Result
id word
0 1 word1,word2,word3
1 2 word4,word5
</code></pre>
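<p>To export the grouped result to a CSV file (the original goal), the same options as in the question can be reused:</p>
<pre><code>df.to_csv('test.csv', sep='\t', encoding='utf-8', index=False)
</code></pre>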
|
python|pandas|list|group-by|aggregation-framework
| 2
|
375,438
| 69,175,617
|
Getting a numpy array string from a csv file and converting it to a numpy array
|
<p>I made some calculations with my data and saved it into a csv file.
In the file I have a cell with this string:</p>
<pre><code>"[array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3])]"
</code></pre>
<p>I want to convert it back to valid numpy arrays. I tried some functions but have had no luck yet.</p>
|
<p>If you have pandas, use <code>pd.eval</code>:</p>
<pre><code>>>> import pandas as pd
>>> from numpy import array
>>> pd.eval("[array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3])]")
[array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3, 3, 3]), array([3])]
>>>
</code></pre>
|
python|arrays|numpy|csv
| 2
|
375,439
| 68,926,975
|
Filter or selecting data between two rows in pandas by multiple labels
|
<p>So I have this df (table) coming from a PDF transformation; here is an example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>ElementRow</th>
<th>ElementColumn</th>
<th>ElementPage</th>
<th>ElementText</th>
<th>X1</th>
<th>Y1</th>
<th>X2</th>
<th>Y2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>50</td>
<td>0</td>
<td>1</td>
<td>Emergency Contacts</td>
<td>917</td>
<td>8793</td>
<td>2191</td>
<td>8878</td>
</tr>
<tr>
<td>2</td>
<td>51</td>
<td>0</td>
<td>1</td>
<td>Contact</td>
<td>1093</td>
<td>1320</td>
<td>1451</td>
<td>1388</td>
</tr>
<tr>
<td>3</td>
<td>51</td>
<td>2</td>
<td>1</td>
<td>Relationship</td>
<td>2444</td>
<td>1320</td>
<td>3026</td>
<td>1388</td>
</tr>
<tr>
<td>4</td>
<td>51</td>
<td>7</td>
<td>1</td>
<td>Work Phone</td>
<td>3329</td>
<td>1320</td>
<td>3898</td>
<td>1388</td>
</tr>
<tr>
<td>5</td>
<td>51</td>
<td>9</td>
<td>1</td>
<td>Home Phone</td>
<td>4260</td>
<td>1320</td>
<td>4857</td>
<td>1388</td>
</tr>
<tr>
<td>6</td>
<td>51</td>
<td>10</td>
<td>1</td>
<td>Cell Phone</td>
<td>5176</td>
<td>1320</td>
<td>5684</td>
<td>1388</td>
</tr>
<tr>
<td>7</td>
<td>51</td>
<td>12</td>
<td>1</td>
<td>Priority Phone</td>
<td>6143</td>
<td>1320</td>
<td>6495</td>
<td>1388</td>
</tr>
<tr>
<td>8</td>
<td>51</td>
<td>14</td>
<td>1</td>
<td>Contact Address</td>
<td>6542</td>
<td>1320</td>
<td>7300</td>
<td>1388</td>
</tr>
<tr>
<td>9</td>
<td>51</td>
<td>17</td>
<td>1</td>
<td>City</td>
<td>7939</td>
<td>1320</td>
<td>7300</td>
<td>1388</td>
</tr>
<tr>
<td>10</td>
<td>51</td>
<td>18</td>
<td>1</td>
<td>State</td>
<td>8808</td>
<td>1320</td>
<td>8137</td>
<td>1388</td>
</tr>
<tr>
<td>11</td>
<td>51</td>
<td>21</td>
<td>1</td>
<td>Zip</td>
<td>9134</td>
<td>1320</td>
<td>9294</td>
<td>1388</td>
</tr>
<tr>
<td>12</td>
<td>52</td>
<td>0</td>
<td>1</td>
<td>Silvia Smith</td>
<td>1093</td>
<td>1458</td>
<td>1973</td>
<td>1526</td>
</tr>
<tr>
<td>13</td>
<td>52</td>
<td>2</td>
<td>1</td>
<td>Mother</td>
<td>2444</td>
<td>1458</td>
<td>2783</td>
<td>1526</td>
</tr>
<tr>
<td>13</td>
<td>52</td>
<td>7</td>
<td>1</td>
<td>(123) 456-78910</td>
<td>5176</td>
<td>1458</td>
<td>4979</td>
<td>1526</td>
</tr>
<tr>
<td>14</td>
<td>52</td>
<td>10</td>
<td>1</td>
<td>Austin</td>
<td>7939</td>
<td>1458</td>
<td>8406</td>
<td>1526</td>
</tr>
<tr>
<td>15</td>
<td>52</td>
<td>15</td>
<td>1</td>
<td>Texas</td>
<td>8808</td>
<td>1458</td>
<td>8961</td>
<td>1526</td>
</tr>
<tr>
<td>16</td>
<td>52</td>
<td>20</td>
<td>1</td>
<td>76063</td>
<td>9134</td>
<td>1458</td>
<td>9421</td>
<td>1526</td>
</tr>
<tr>
<td>17</td>
<td>52</td>
<td>2</td>
<td>1</td>
<td>1234 Parkside Ct</td>
<td>6542</td>
<td>1458</td>
<td>9421</td>
<td>1526</td>
</tr>
<tr>
<td>18</td>
<td>53</td>
<td>0</td>
<td>1</td>
<td>Naomi Smith</td>
<td>1093</td>
<td>2350</td>
<td>1973</td>
<td>1526</td>
</tr>
<tr>
<td>19</td>
<td>53</td>
<td>2</td>
<td>1</td>
<td>Aunt</td>
<td>2444</td>
<td>2350</td>
<td>2783</td>
<td>1526</td>
</tr>
<tr>
<td>20</td>
<td>53</td>
<td>7</td>
<td>1</td>
<td>(123) 456-78910</td>
<td>5176</td>
<td>2350</td>
<td>4979</td>
<td>1526</td>
</tr>
<tr>
<td>21</td>
<td>53</td>
<td>10</td>
<td>1</td>
<td>Austin</td>
<td>7939</td>
<td>2350</td>
<td>8406</td>
<td>1526</td>
</tr>
<tr>
<td>22</td>
<td>53</td>
<td>15</td>
<td>1</td>
<td>Texas</td>
<td>8808</td>
<td>2350</td>
<td>8961</td>
<td>1526</td>
</tr>
<tr>
<td>23</td>
<td>53</td>
<td>20</td>
<td>1</td>
<td>76063</td>
<td>9134</td>
<td>2350</td>
<td>9421</td>
<td>1526</td>
</tr>
<tr>
<td>24</td>
<td>53</td>
<td>2</td>
<td>1</td>
<td>3456 Parkside Ct</td>
<td>6542</td>
<td>2350</td>
<td>9421</td>
<td>1526</td>
</tr>
<tr>
<td>25</td>
<td>54</td>
<td>40</td>
<td>1</td>
<td>End Employee Line</td>
<td>6542</td>
<td>2350</td>
<td>9421</td>
<td>1526</td>
</tr>
<tr>
<td>25</td>
<td>55</td>
<td>0</td>
<td>1</td>
<td>Emergency Contacts</td>
<td>917</td>
<td>8793</td>
<td>2350</td>
<td>8878</td>
</tr>
</tbody>
</table>
</div>
<p>I'm trying to separate each record into its own row, taking the ElementRow column as a reference, keeping the headers from the first rows, and then iterating through the remaining rows. The X1 column indicates which header each value belongs under. I would like the data to look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Contact</th>
<th>Relationship</th>
<th>Work Phone</th>
<th>Cell Phone</th>
<th>Priority</th>
<th>ContactAddress</th>
<th>City</th>
<th>State</th>
<th>Zip</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Silvia Smith</td>
<td>Mother</td>
<td></td>
<td>(123) 456-78910</td>
<td></td>
<td>1234 Parkside Ct</td>
<td>Austin</td>
<td>Texas</td>
<td>76063</td>
</tr>
<tr>
<td>2</td>
<td>Naomi Smith</td>
<td>Aunt</td>
<td></td>
<td>(123) 456-78910</td>
<td></td>
<td>3456 Parkside Ct</td>
<td>Austin</td>
<td>Texas</td>
<td>76063</td>
</tr>
</tbody>
</table>
</div>
<p>Things I tried:</p>
<p>Taking the rows between two markers: I tried to slice using the first index and the last index, but it showed this error:</p>
<pre><code>emergStartIndex = df.index[df['ElementText'] == 'Emergency Contacts']
emergLastIndex = df.index[df['ElementText'] == 'End Employee Line']
emerRows_between = df.iloc[emergStartIndex:emergLastIndex]
TypeError: cannot do positional indexing on RangeIndex with these indexers [Int64Index([...
</code></pre>
<p>That approach does work with this NumPy trick:</p>
<pre><code>emerRows_between = df.iloc[np.r_[1:54,55:107]]
emerRows_between
</code></pre>
<p>but when I tried to use the found indexes instead of hard-coded numbers, it showed this:</p>
<pre><code>emerRows_between = df.iloc[np.r_[emergStartIndex:emergLastIndex]]
emerRows_between
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I tried iterating row by row like this, but at some point the df reaches the end and I'm receiving an <em><strong>index out of bound</strong></em> error.</p>
<pre><code>emergencyContactRow1 = df['ElementText','X1'].iloc[emergStartIndex+1].reset_index(drop=True)
emergencyContactRow2 = df['ElementText','X1'].iloc[emergStartIndex+2].reset_index(drop=True)
emergencyContactRow3 = df['ElementText','X1'].iloc[emergStartIndex+3].reset_index(drop=True)
emergencyContactRow4 = df['ElementText','X1'].iloc[emergStartIndex+4].reset_index(drop=True)
emergencyContactRow5 = df['ElementText','X1'].iloc[emergStartIndex+5].reset_index(drop=True)
emergencyContactRow6 = df['ElementText','X1'].iloc[emergStartIndex+6].reset_index(drop=True)
emergencyContactRow7 = df['ElementText','X1'].iloc[emergStartIndex+7].reset_index(drop=True)
emergencyContactRow8 = df['ElementText','X1'].iloc[emergStartIndex+8].reset_index(drop=True)
emergencyContactRow9 = df['ElementText','X1'].iloc[emergStartIndex+9].reset_index(drop=True)
emergencyContactRow10 = df['ElementText','X1'].iloc[emergStartIndex+10].reset_index(drop=True)
frameEmergContact1 = [emergencyContactRow1, emergencyContactRow2, emergencyContactRow3, emergencyContactRow4, emergencyContactRow5, emergencyContactRow6, emergencyContactRow7, emergencyContactRow8, emergencyContactRow9, emergencyContactRow10]
df_emergContact1= pd.concat(frameEmergContact1 , axis=1)
df_emergContact1.columns = range(df_emergContact1.shape[1])
</code></pre>
<p>So how can I make this code dynamic, avoid the index-out-of-bound errors, and keep my headers, taking as a reference only the first row after the Emergency Contacts row? I know I haven't tried to use the X1 column yet, but I first have to resolve how to iterate through those multiple indexes.</p>
<p>Each block from the Emergency Contacts index to the End Employee Line belongs to one person (one employee) in the whole dataframe, so the idea, after capturing all those values, is to also keep a counter variable to see how many times data is captured between those two markers.</p>
|
<p>It's a bit ugly, but this should do it. Basically you don't need the first or last two rows, so if you get rid of those and then pivot the X1 and ElementText columns you will be pretty close. Then it's a matter of getting rid of null values and promoting the first row to header.</p>
<pre><code>df = df.iloc[1:-2][['ElementText','X1','ElementRow']].pivot(columns='X1',values='ElementText')
df = pd.DataFrame([x[~pd.isnull(x)] for x in df.values.T]).T
df.columns = df.iloc[0]
df = df[1:]
</code></pre>
|
python|pandas|numpy|loops|slice
| 1
|
375,440
| 69,138,494
|
Pandas - Assign string values based on multiple ranges
|
<p>I have created a small function to assign a string value to a column based on ranges from another column, i.e.: 3.2 == '0-6m', 7 == '6-12m'.
But I am getting this error: <code>TypeError: 'float' object is not subscriptable</code></p>
<p>Dataframe</p>
<pre><code> StartingHeight
4.0
3.2
8.0
32.0
12.0
18.3
</code></pre>
<p>Expected output:</p>
<pre><code> StartingHeight height_factor
4.0 0-6m
3.2 0-6m
8.0 6-12m
32.0 >30m
12.0 6-12m
18.3 18-24m
</code></pre>
<p>Code:</p>
<pre><code> def height_bands(hbcol):
"""Apply string value based on float value ie: 6.2 == '6-12m
hb_values = ['0-6m', '6-12m', '12-18m', '18-24m', '24-30m', '>30m']"""
if (hbcol['StartingHeight'] >= 0) | (hbcol['StartingHeight'] < 6.1):
return '0-6m'
elif (hbcol['StartingHeight'] >= 6.1) | (hbcol['StartingHeight'] < 12):
return '6-12m'
elif (hbcol['StartingHeight'] >= 12) | (hbcol['StartingHeight'] < 18):
return '12-18m'
elif (hbcol['StartingHeight'] >= 18) | (hbcol['StartingHeight'] < 24):
return '18-25m'
else:
return '>30m'
df1['height_factor'] = df1.apply(lambda x: height_bands(x['StartingHeight']), axis=1)
</code></pre>
<p>Thanks for your help!</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pd.cut</code></a>:</p>
<pre><code>df['height_factor'] = pd.cut(df['StartingHeight'],
bins=[0, 6, 12, 18, 24, 30, np.inf],
labels=['0-6m', '6-12m', '12-18m',
'18-24m', '24-30m', '>30m'],
right=False)
</code></pre>
<p>Output:</p>
<pre><code>>>> df
StartingHeight height_factor
0 4.0 0-6m
1 3.2 0-6m
2 8.0 6-12m
3 32.0 >30m
4 12.0 6-12m
5 18.3 18-24m
</code></pre>
<p><em>Fixed by @HenryEcker</em></p>
|
python|pandas
| 0
|
375,441
| 69,196,861
|
How to group rows together based on conditions from a list? Pandas
|
<p>I want to be able to group rows into one if they have matching values in certain columns, however I only want them to be grouped if the value is in a list. For example,</p>
<pre><code>team_sports = ['football', 'basketball']
</code></pre>
<p>View of df:</p>
<pre><code>country  sport       age
USA      football    21
USA      football    28
USA      golf        20
USA      golf        44
China    football    30
China    basketball  22
China    basketball  41
</code></pre>
<p>Wanted outcome:</p>
<pre><code>country  sport       age
USA      football    21,28
USA      golf        20
USA      golf        44
China    football    30
China    basketball  22,41
</code></pre>
<p>The attempt I made was:</p>
<pre><code>team_sports = ['football', 'basketball']
for i in df['Sport']:
    if i in team_sports:
        group_df = df.groupby(['Country', 'Sport'])['Age'].apply(list).reset_index()
</code></pre>
<p>This is taking forever to run, the database I'm using has 100,000 rows.</p>
<p>Really appreciate any help, thanks</p>
|
<p>The more straightforward approach is to separate the DataFrame based on those rows where the <code>sports</code> column <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a> the list of <code>team_sports</code>. <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.SeriesGroupBy.aggregate.html" rel="nofollow noreferrer"><code>groupby aggregate</code></a> separately then <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> back together:</p>
<pre><code>team_sports = ['football', 'basketball']
m = df['sport'].isin(team_sports)
cols = ['country', 'sport']
group_df = pd.concat([
# Group those that do match condition
df[m].groupby(cols, as_index=False)['age'].agg(list),
# Leave those that don't match condition as is
df[~m]
], ignore_index=True).sort_values(cols)
</code></pre>
<p>*<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> is optional to regroup country and sport together</p>
<p><code>group_df</code>:</p>
<pre><code> country sport age
0 China basketball [22, 41]
1 China football [30]
2 USA football [21, 28]
3 USA golf 20
4 USA golf 44
</code></pre>
<hr />
<p>The less straightforward approach would be to create a new grouping level based on whether or not a value is in the list of team sports using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a> + <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a>:</p>
<pre><code>team_sports = ['football', 'basketball']
group_df = (
df.groupby(
['country', 'sport',
(~df['sport'].sort_values().isin(team_sports)).cumsum().sort_index()],
as_index=False,
sort=False
)['age'].agg(list)
)
</code></pre>
<p><code>group_df</code>:</p>
<pre><code> country sport age
0 USA football [21, 28]
1 USA golf [20]
2 USA golf [44]
3 China football [30]
4 China basketball [22, 41]
</code></pre>
<p>How the groups are created:</p>
<pre><code>team_sports = ['football', 'basketball']
print(pd.DataFrame({
'country': df['country'],
'sport': df['sport'],
'not_in_team_sports': (~df['sport'].sort_values()
.isin(team_sports)).cumsum().sort_index()
}))
</code></pre>
<pre><code> country sport not_in_team_sports
0 USA football 0
1 USA football 0
2 USA golf 1 # golf 1
3 USA golf 2 # golf 2 (not in the same group)
4 China football 0
5 China basketball 0
6 China basketball 0
</code></pre>
<p>*<code>sort_values</code> is necessary here so that <code>sport</code> groups are not interrupted by sports that are not in the list.</p>
<pre><code>df = pd.DataFrame({
'country': ['USA', 'USA', 'USA'],
'sport': ['football', 'golf', 'football'],
'age': [21, 28, 20]
})
team_sports = ['football', 'basketball']
print(pd.DataFrame({
'country': df['country'],
'sport': df['sport'],
'not_sorted': (~df['sport'].isin(team_sports)).cumsum(),
'sorted': (~df['sport'].sort_values()
.isin(team_sports)).cumsum().sort_index()
}))
</code></pre>
<pre><code> country sport not_sorted sorted
0 USA football 0 0
1 USA golf 1 1
2 USA football 1 0 # football 1 (separate group if not sorted)
</code></pre>
<p>Sorting ensures that the football rows go together, so this does not happen.</p>
<hr />
<p>Setup:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'country': ['USA', 'USA', 'USA', 'USA', 'China', 'China', 'China'],
'sport': ['football', 'football', 'golf', 'golf', 'football', 'basketball',
'basketball'],
'age': [21, 28, 20, 44, 30, 22, 41]
})
</code></pre>
|
pandas|database|group-by
| 0
|
375,442
| 68,945,587
|
Is there an efficient way of creating a dataframe column based on data in another dataframe?
|
<p>I have two dataframes. One contains data on user subscriptions, another on user session.</p>
<p>Example of subscription data (df_subscriptions):</p>
<pre><code> user_id created ended
10238 140baa7a-1641-41b5-a85b-c43dc9e12699 2021-08-13 19:37:11.373039 2021-09-12 19:37:11.373039
10237 fbfa999c-9c56-4f06-8cf9-3c5deb32d5d2 2021-08-13 15:25:07.149982 2021-09-12 15:25:07.149982
6256 a55e64b0-a783-455e-bd9d-edbb4815786b 2021-08-13 18:31:36.083681 2021-09-12 18:31:36.083681
6257 ca2c0ee1-9810-4ce7-a2ec-c036d0b8a380 2021-08-13 16:29:52.981836 2021-09-12 16:29:52.981836
7211 24378efd-e821-4a51-a3e6-39c30243a078 2021-08-13 19:58:19.434908 2021-09-12 19:58:19.434908
</code></pre>
<p>Example of session data:</p>
<pre><code> user_id session_start session_duration
11960653 6f51df1a-8c2b-4ddb-9299-b36f250b05dc 2020-01-05 11:39:29.367 165.880005
80076 697e1c0a-c026-4104-b13f-1fd74eec5890 2021-01-31 02:16:33.935 22.883301
1577621 02b23671-8ce3-452b-b551-03b5ea7dce47 2021-05-18 02:07:32.589 4.283300
1286532 a506fb53-3505-44db-880a-27ad483151f0 2020-07-29 16:47:51.908 51.000000
18875432 1ea77db5-fe4a-414f-ba47-1f448175df3f 2020-10-17 04:00:35.269 360.733307
</code></pre>
<p>I need to calculate the total time user spend on the service while his subscription is active. The code below gives me correct/expected result, but takes A LOT of time on the real data:</p>
<pre><code>def sessions_during_sub (user_id, start_date, end_date):
result = df_sessions.loc[(df_sessions.user_id == user_id)&
(df_sessions.session_start >= start_date)&
(df_sessions.session_start <= end_date)].session_duration.sum()
return result
df_subscriptions['sessions'] = df_subscriptions.apply(lambda x: sessions_during_sub(x['user_id'], x['created'], x['ended']), axis=1)
</code></pre>
<p>Is there any way to do it the proper pandas way / vectorized? Any ideas on how to really speed it up?</p>
|
<p>Create some example data:</p>
<pre><code>subs = pd.DataFrame(zip(["user_0", "user_0", "user_1", "user_2"], [1900, 1920, 1950, 2000], [1910, 1930, 2000, 2020]), columns=["user_id", "created", "ended"])
user_id created ended
0 user_0 1900 1910
1 user_0 1920 1930
2 user_1 1950 2000
3 user_2 2000 2020
sessions = pd.DataFrame(zip(["user_0", "user_0", "user_0", "user_2"], [1905, 1915, 1925, 2005], [1.0, 5.0, 2.0, 7.0]), columns=["user_id", "session_start", "session_duration"])
user_id session_start session_duration
0 user_0 1905 1.0
1 user_0 1915 5.0
2 user_0 1925 2.0
3 user_2 2005 7.0
</code></pre>
<p>The point of merging is to create a table with all the subscription and session data in the same row. This is similar to checking user_id equality when looping through all the rows of both frames when applying <code>sessions_during_sub</code> in the code in your question:</p>
<pre><code>merged = pd.merge(subs, sessions, on="user_id")
user_id created ended session_start session_duration
0 user_0 1900 1910 1905 1.0
1 user_0 1900 1910 1915 5.0
2 user_0 1900 1910 1925 2.0
3 user_0 1920 1930 1905 1.0
4 user_0 1920 1930 1915 5.0
5 user_0 1920 1930 1925 2.0
6 user_2 2000 2020 2005 7.0
</code></pre>
<p>Having multiple subscriptions and multiple sessions per user is not an issue here, you just get multiple resulting rows with some duplicated data. You can then write some logic to check the subscription range, like so:</p>
<pre><code>in_subscription_range = (merged.session_start >= merged.created) & (merged.session_start < merged.ended)
</code></pre>
<p>And finally compute the sum of the session duration, for example per user_id, like so:</p>
<pre><code>merged[in_subscription_range].groupby("user_id").session_duration.sum()
</code></pre>
<p>which gives:</p>
<pre><code>user_id
user_0     3.0
user_2     7.0
Name: session_duration, dtype: float64
</code></pre>
<p>If your original data contains temporally overlapping subscriptions or sessions, you need to fix that before merging, otherwise you may count the durations multiple times. But the same issue exists for your example code.</p>
|
python|pandas
| 0
|
375,443
| 69,257,614
|
define Array without allocating it
|
<p>I see that Numba does not support Dict-of-Lists ... Thus, I decided to use 2D Numpy arrays instead. This is sad :(</p>
<p>The second problem I have is that I want to create this array on demand. Here is an example:</p>
<pre><code>@nb.njit(parallel=True)
def blah(cond=True):
ary = None
if cond : ary = np.zeros((10000,2))
for i in range(5):
if cond: ary[i] = np.array([i,i])
return 555, ary
</code></pre>
<p>The problem is that <code>ary</code> cannot be <code>None</code>, so I have to allocate the array even if I do not use it.</p>
<p>Is there a way to define <code>ary</code> without allocating it, so that Numba wont complain?</p>
<blockquote>
<p>The 'parallel' seems to cause the problem ??</p>
</blockquote>
<hr />
<p>It is also interesting that this updates only the first row (even though <code>i</code> is incremented):</p>
<pre><code>ary[i,:] = np.array([a,b])
</code></pre>
<p>but this works</p>
<pre><code> ary[i] = np.array([a,b])
</code></pre>
|
<p>If you want the code to be parallelized, then yes, it absolutely has to be allocated first. You can't have multiple threads trying to resize an array independently.</p>
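<p>A workaround sketch consistent with this (not from the original question): allocate a zero-length array when the data is not needed, so the variable always has a single ndarray type that Numba can unify across branches.</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit(parallel=True)
def blah(cond=True):
    # allocate either the real buffer or an empty (0, 2) placeholder
    n = 10000 if cond else 0
    ary = np.zeros((n, 2))
    for i in nb.prange(min(5, n)):
        ary[i, 0] = i
        ary[i, 1] = i
    return 555, ary
</code></pre>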
|
python|numpy|allocation|numba
| 1
|
375,444
| 69,142,836
|
Import multiple excel files start with same name into pandas and concatenate them into one dataframe
|
<p>Every day I have multiple Excel files with different names, but all of these files start with the same prefix, for instance, "Answer1.xlsx", "AnswerAVD.xlsx", "Answer2312.xlsx", etc.</p>
<p>Is it possible to read and concatenate all these files in a pandas dataframe?</p>
<p>I know how to read them one by one, but that is not a real solution:</p>
<pre><code>import pandas as pd
dfs1 = pd.read_excel('C:/Answer1.xlsx')
dfs2 = pd.read_excel('C:/AnswerAVD.xlsx')
dfs3 = pd.read_excel('C:/Answer2312.xlsx')
Final=pd.concat([dfs1 , dfs2 ,dfs3 ])
</code></pre>
<p>Many thanks for your help</p>
|
<p>Use a glob with <code>pathlib</code> and then <code>concat</code> the frames using pandas and a list comprehension:</p>
<pre><code>from pathlib import Path
import pandas as pd
src_files = Path('C:\\').glob('*Answer*.xlsx')
df = pd.concat([pd.read_excel(f, index_col=None, header=0) for f in src_files])
</code></pre>
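<p>If other workbooks could contain "Answer" in the middle of their names, a pattern anchored at the start (and a <code>sorted</code> call for a deterministic order) avoids picking them up. Sketch:</p>
<pre><code>src_files = sorted(Path('C:\\').glob('Answer*.xlsx'))
</code></pre>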
|
python|excel|pandas
| 1
|
375,445
| 69,212,097
|
Replace nan cells with lists in Pandas dataframe
|
<p>I have the following Pandas dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">index</th>
<th style="text-align: left;">title</th>
<th style="text-align: center;">Open</th>
<th style="text-align: center;">Close</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">2009-02-13</td>
<td style="text-align: left;">[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...</td>
<td style="text-align: center;">7933.000000</td>
<td style="text-align: center;">7850.410156</td>
</tr>
<tr>
<td style="text-align: center;">2009-02-14</td>
<td style="text-align: left;">[613, 6294, 19, 251, 1463, 0, 0, 0, 0, 0, 0, 0...</td>
<td style="text-align: center;">NaN</td>
<td style="text-align: center;">NaN</td>
</tr>
<tr>
<td style="text-align: center;">2009-02-17</td>
<td style="text-align: left;">NaN</td>
<td style="text-align: center;">7845.629883</td>
<td style="text-align: center;">7552.600098</td>
</tr>
<tr>
<td style="text-align: center;">2009-02-18</td>
<td style="text-align: left;">NaN</td>
<td style="text-align: center;">7546.350098</td>
<td style="text-align: center;">7555.629883</td>
</tr>
<tr>
<td style="text-align: center;">2009-02-19</td>
<td style="text-align: left;">NaN</td>
<td style="text-align: center;">7555.229980</td>
<td style="text-align: center;">7465.950195</td>
</tr>
<tr>
<td style="text-align: center;">...</td>
<td style="text-align: left;">...</td>
<td style="text-align: center;">...</td>
<td style="text-align: center;">...</td>
</tr>
<tr>
<td style="text-align: center;">2020-06-07</td>
<td style="text-align: left;">[29, 68, 245, 3496, 62, 32, 20, 9, 11, 141, 32...</td>
<td style="text-align: center;">NaN</td>
<td style="text-align: center;">NaN</td>
</tr>
<tr>
<td style="text-align: center;">2020-06-08</td>
<td style="text-align: left;">[898, 30, 22, 1739, 47, 733, 8, 1182, 0, 0, 0,...</td>
<td style="text-align: center;">27232.929688</td>
<td style="text-align: center;">27572.439453</td>
</tr>
</tbody>
</table>
</div>
<p>where the title column is an indexed tokenization of news titles. I cannot discard the NaN rows because they will be relevant for later processing. Instead, I want to replace the NaN values with lists of zeros of the same size as the non-NaN cells.</p>
<p>I created a zeros list with:</p>
<pre><code>max_wid = dataframe["title"].map(lambda x: len(x)).max()
zeros = np.zeros(max_wid, dtype=int).tolist()
</code></pre>
<p>I managed to assign the list in the first row with <code>.at</code> but it's not feasible to replace all rows manually, although it's the only tip I found online.</p>
<p>I've tried using <code>.loc[dataframe.title.isnull(), "title"] = zeros</code> but it will return a <code> ValueError: cannot set using a multi-index selection indexer with a different length than the value</code>.</p>
<p>I thought of using itertuples, but it doesn't allow setting attributes, and iterrows is discouraged.</p>
<p>Any help is extremely appreciated.</p>
<h2>EDIT</h2>
<p>I found an inefficient and inelegant solution by doing this:</p>
<pre><code>zeros = np.zeros(max_wid, dtype=int).tolist()
dataframe["isna"] = dataframe.title.isna()
check = dataframe["isna"].values
title = dataframe["title"].values
test = np.empty((dataframe.shape[0]), dtype=object)
for i,v in enumerate(test):
if check[i] == True:
test[i] = zeros
else:
test[i] = title[i]
dataframe["title"] = test.tolist()
dataframe.drop("isna", axis=1, inplace=True)
</code></pre>
<p>If anyone can come up with a more optimized solution, I'll still appreciate it a lot!</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>mask = df.title.notna()
max_wid = df.loc[mask, "title"].str.len().max()
zeros = np.zeros(max_wid, dtype=int).tolist()
df.loc[~mask, "title"] = [zeros]
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> index title
0 2009-02-13 [1, 0, 0]
1 2009-02-14 [613, 6294, 19]
2 2009-02-15 [0, 0, 0]
3 2009-02-16 [0, 0, 0]
</code></pre>
<hr />
<p><code>df</code> used:</p>
<pre class="lang-none prettyprint-override"><code> index title
0 2009-02-13 [1, 0, 0]
1 2009-02-14 [613, 6294, 19]
2 2009-02-15 NaN
3 2009-02-16 NaN
</code></pre>
|
python|pandas|dataframe|numpy
| 0
|
375,446
| 68,984,520
|
How to we replace log(0) with 0?
|
<p><code>RuntimeWarning: invalid value encountered in multiply</code></p>
<p>I have a code:</p>
<pre><code>a = Y_list * np.log(Y_list/E_Y)
print(a)
</code></pre>
<p>My <code>Y_list</code> contains <code>0</code> values. I'm wondering how to make the term evaluate to <code>0</code> when <code>Y_list = 0</code>, i.e. treat <code>np.log(0)</code> as <code>0</code>?</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">np.where</a>. It lets you define a condition and assign different values for the true and false cases:</p>
<pre><code>np.where((Y_list/E_Y)!= 0, np.log(Y_list/E_Y),0)
</code></pre>
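<p>If you also want to avoid the RuntimeWarning itself (since <code>np.log</code> is still evaluated on every element inside <code>np.where</code>), you can tell the ufunc to only compute where the input is positive. This is a sketch assuming <code>Y_list</code> and <code>E_Y</code> are float arrays:</p>
<pre><code>ratio = np.asarray(Y_list, dtype=float) / E_Y
log_term = np.log(ratio, out=np.zeros_like(ratio), where=(ratio > 0))
a = Y_list * log_term   # 0 * 0 where Y_list == 0, so no NaN/inf is produced
</code></pre>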
|
numpy
| 3
|
375,447
| 69,148,944
|
Loop through dataframe to identify subgroup and label with unique identifier
|
<p>I'm trying to complete the <code>Sessions</code> column with a unique integer per session for further processing.</p>
<p>A session is defined by one day or a period from 9:30-16:00</p>
<pre><code> Symbol Time Open High Low Close Volume LOD Sessions
2724312 AEHR 2019-09-23 09:31:00 1.42 1.42 1.42 1.42 200 NaN NaN
2724313 AEHR 2019-09-23 09:43:00 1.35 1.35 1.34 1.34 6062 NaN NaN
2724314 AEHR 2019-09-23 09:58:00 1.35 1.35 1.29 1.30 8665 NaN NaN
2724315 AEHR 2019-09-23 09:59:00 1.32 1.32 1.32 1.32 100 NaN NaN
2724316 AEHR 2019-09-23 10:00:00 1.35 1.35 1.35 1.35 400 NaN NaN
... ... ... ... ... ... ... ... ... ...
</code></pre>
<p>I've tried everything using <code>loop</code> but I keep getting every KeyError and SettingWithCopyWarning in the book.</p>
<hr />
<p>Edit: Error & Code added</p>
<p>SettingWithCopyWarning:</p>
<blockquote>
<p>A value is trying to be set on a copy of a slice from a DataFrame</p>
<p>See the caveats in the documentation:
<a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a>
self._setitem_single_block(indexer, value, name)</p>
</blockquote>
<pre><code>df = df
# Columns ['Symbol', 'Time', 'Open', 'High', 'Low', 'Close', 'Volume', 'LOD', 'Sessions']
# Add Date column to loop through
df['Date'] = pd.to_datetime(df['Time']).dt.date
previous_session = df['Date'].iloc[0]
prev_sesh_count = 1
for i, row in df.iterrows():
current_session = df['Date'].iloc[i]
if previous_session == current_session:
df['Sessions'].iloc[i] = prev_sesh_count
else:
df['Sessions'].iloc[i] = prev_sesh_count + 1
prev_sesh_count = prev_sesh_count + 1
</code></pre>
|
<p>Assuming the dataframe is sorted on <code>Date</code>, we can use <code>duplicated</code> along with <code>cumsum</code> to assign the unique session numbers:</p>
<pre><code>df['Sessions'] = (~df.duplicated(['Symbol', 'Date'])).cumsum()
</code></pre>
<hr />
<pre><code>print(df)
Symbol Time Open High Low Close Volume LOD Sessions Date
2724312 AEHR 2019-09-23 09:31:00 1.42 1.42 1.42 1.42 200 NaN 1 2019-09-23
2724313 AEHR 2019-09-23 09:43:00 1.35 1.35 1.34 1.34 6062 NaN 1 2019-09-23
2724314 AEHR 2019-09-23 09:58:00 1.35 1.35 1.29 1.30 8665 NaN 1 2019-09-23
2724315 AEHR 2019-09-23 09:59:00 1.32 1.32 1.32 1.32 100 NaN 1 2019-09-23
2724316 AEHR 2019-09-23 10:00:00 1.35 1.35 1.35 1.35 400 NaN 1 2019-09-23
</code></pre>
|
pandas|database|dataframe|loops
| 1
|
375,448
| 68,983,642
|
Memory usage of torch.einsum
|
<p>I have been trying to debug a certain model that uses <code>torch.einsum</code> operator in a layer which is repeated a couple of times.</p>
<p>While trying to analyze the GPU memory usage of the model during training, I have noticed that a certain <strong>Einsum</strong> operation dramatically increases the memory usage. I am dealing with multi-dimensional matrices. The operation is <code>torch.einsum('b q f n, b f n d -> b q f d', A, B)</code>.</p>
<p>It is also worth mentioning that:</p>
<ul>
<li>The result is assigned to <code>x</code>, a variable that previously held a tensor of the same shape (so it is seemingly overwritten).</li>
<li>In every layer (they are all identical), the GPU memory increases linearly after this operation, <strong>and is not deallocated</strong> until the end of the model iteration.</li>
</ul>
<p>I have been wondering why this operation uses so much memory, and why the memory stays allocated after every iteration over that layer type.</p>
|
<p>Variable "<code>x</code>" is indeed overwritten, but the tensor data is kept in memory (also called the layer's <em>activation</em>) for later usage in the backward pass.</p>
<p>So in turn you are effectively allocating new memory data for the result of <code>torch.einsum</code>, but you won't be replacing <code>x</code>'s memory even if it has been seemingly overwritten.</p>
<hr />
<p>To pass this to the test, you can compute the forward pass under the <a href="https://pytorch.org/docs/stable/generated/torch.no_grad.html" rel="nofollow noreferrer"><code>torch.no_grad()</code></a> context manager (where those activations won't be kept in memory) and see the memory usage difference, compared with a standard inference.</p>
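<p>A rough way to see the difference (a sketch; <code>A</code> and <code>B</code> are assumed to be CUDA tensors with the shapes used in the einsum above):</p>
<pre><code>import torch

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    out = torch.einsum('b q f n, b f n d -> b q f d', A, B)
print(torch.cuda.max_memory_allocated() / 2**20, "MiB without stored activations")
</code></pre>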
|
python|pytorch|numpy-einsum
| 1
|
375,449
| 69,091,092
|
How to sessionize data in python
|
<p>I have a requirement to tag user transactions to a session.</p>
<p>A session is defined in such a way that all actions of a user that happens within 5 minutes after the first action of a session belong to that session. We identify a user by the userID</p>
<p><strong>Sample Source Data</strong></p>
<pre class="lang-none prettyprint-override"><code>userID timestamp
1.0 2018-04-08 09:47:57.849
1.0 2018-04-08 09:48:38.762
1.0 2018-04-08 09:49:31.455
1.0 2018-04-08 09:53:18.131
1.0 2018-04-08 09:55:42.875
1.0 2018-04-08 10:15:04.757
2.0 2018-04-08 10:15:41.368
2.0 2018-04-08 10:19:10.744
2.0 2018-04-08 19:20:37.441
3.0 2018-04-08 19:21:00.315
</code></pre>
<p><strong>Expected Output</strong></p>
<pre class="lang-none prettyprint-override"><code>userID timestamp NewSession
1.0 2018-04-08 09:47:57.849 1
1.0 2018-04-08 09:48:38.762 0
1.0 2018-04-08 09:49:31.455 0
1.0 2018-04-08 09:53:18.131 1
1.0 2018-04-08 09:55:42.875 0
1.0 2018-04-08 10:15:04.757 1
2.0 2018-04-08 10:15:41.368 1
2.0 2018-04-08 10:19:10.744 0
2.0 2018-04-08 19:20:55.441 1
3.0 2018-04-08 19:21:00.315 1
</code></pre>
<p>I am new to Python, so I need the community's help.
I wrote the Python code below, but it checks for a 5-minute gap between two consecutive transactions, whereas I need to map to the same session all transactions that fall within a 5-minute interval after the session's first action.</p>
<pre class="lang-none prettyprint-override"><code>df = pd.read_csv(r".\Data\Test.csv", names=['userID','timestamp'],
parse_dates=[1])
df.sort_values(by=['userID','timestamp'], inplace=True)
cond1 = df.timestamp-df.timestamp.shift(1) > pd.Timedelta(5, 'm')
cond2 = df.userID != df.userID.shift(1)
df['SessionID'] = (cond1|cond2).cumsum()
</code></pre>
|
<p>We can start to define the session using a <code>groupby</code> and a <code>transform</code> like so :</p>
<pre class="lang-py prettyprint-override"><code>>>> df['NewSession'] = (df.groupby('userID')['timestamp']
... .transform(lambda x: x.diff().gt('5Min').cumsum()) + 1)
>>> df
userID timestamp NewSession
0 1.0 2018-04-08 09:47:57.849 1
1 1.0 2018-04-08 09:48:38.762 1
2 1.0 2018-04-08 09:49:31.455 1
3 1.0 2018-04-08 09:53:18.131 1
4 1.0 2018-04-08 09:55:42.875 1
5 1.0 2018-04-08 10:15:04.757 2
6 2.0 2018-04-08 10:15:41.368 1
7 2.0 2018-04-08 10:19:10.744 1
8 2.0 2018-04-08 19:20:37.441 2
9 3.0 2018-04-08 19:21:00.315 1
</code></pre>
<p>Then we can check the new session using <code>diff</code> to get the expected result :</p>
<pre class="lang-py prettyprint-override"><code>>>> df['NewSession'] = df.groupby('userID')['NewSession'].diff(1).fillna(1)
>>> df
userID timestamp NewSession
0 1.0 2018-04-08 09:47:57.849 1.0
1 1.0 2018-04-08 09:48:38.762 0.0
2 1.0 2018-04-08 09:49:31.455 0.0
3 1.0 2018-04-08 09:53:18.131 0.0
4 1.0 2018-04-08 09:55:42.875 0.0
5 1.0 2018-04-08 10:15:04.757 1.0
6 2.0 2018-04-08 10:15:41.368 1.0
7 2.0 2018-04-08 10:19:10.744 0.0
8 2.0 2018-04-08 19:20:37.441 1.0
9 3.0 2018-04-08 19:21:00.315 1.0
</code></pre>
|
python|python-3.x|pandas|dataframe
| 0
|
375,450
| 69,213,744
|
Pandas column taking very long time to translate
|
<p>I have a pandas dataframe of some 200k records. It has two columns: the text in English and a score. I want to translate a column from English to a few other languages. For that, I'm using the Cloud Translation API from Google's GCP. It is, however, taking an absurdly long time to translate them. My code is basically this:</p>
<pre><code>def translate_text(text, target_language):
from google.cloud import translate_v2 as translate
try:
translate_client = translate.Client(credentials=credentials)
result = translate_client.translate(text, target_language=target_language)
return result['translatedText']
except Exception as e:
print(e)
</code></pre>
<p>and this:</p>
<pre><code>df['X_language'] = df['text'].apply(lambda text: translate_text(text, '<LANG CODE>'))
</code></pre>
<p>I've seen that <code>apply()</code> is fairly slow, plus the response from the API might be another factor in it being slow, but is there any way to make it more efficient? I tried swifter but that barely shaved off a couple of seconds (when testing against a subset of the dataframe).</p>
<p>Note that some of the text fields in the dataframe have around 300 characters in them. Not many but a decent number.</p>
<p><strong>EDIT</strong>:</p>
<p>After importing <code>translate</code> from <code>google.cloud</code> and defining the client once outside the function, the code ran much quicker. However, for some reason when I try to pass a list (the rows of the 'text' column), it doesn't return the translated text; it just runs quickly and returns the list itself in English.</p>
<p>Might that have to do with the credentials I'm using? I'm passing the service account JSON file you get when you create a project in GCP.</p>
<p><strong>EDIT 2</strong>:</p>
<p>I partitioned my dataframe into 4 parts, each with ~50k records. It still takes too much time. I even removed all text with more than 250 characters.</p>
<p>I think it's a Translation API issue? It takes way too long to translate, I guess.</p>
|
<p>To fix the slow code, I just moved the import and the initialization of the translate client outside the function, so the client is created only once.</p>
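<p>A sketch of that restructuring (same API calls as in the question, with <code>credentials</code> being your own service-account credentials object):</p>
<pre><code>from google.cloud import translate_v2 as translate

# create the client once, outside the per-row function
translate_client = translate.Client(credentials=credentials)

def translate_text(text, target_language):
    try:
        result = translate_client.translate(text, target_language=target_language)
        return result['translatedText']
    except Exception as e:
        print(e)
</code></pre>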
<p>In the case of the 403 POST error, I had to create another GCP account. When I saw the quotas in the old account (trial), nothing was exceeded or close to, but the trial period apparently ended and I didn't have the free credits ($400) anymore. I tried enabling billing for the API (and checked my card wasn't defunct) but that didn't change much. Translate by batch worked in my newer account.</p>
<p>So, it was just an account issue rather than an API issue.</p>
|
python|pandas|google-translation-api
| 1
|
375,451
| 44,538,313
|
Select one group and transform the remaining group to columns in pandas
|
<p>I've a dataframe that looks like</p>
<pre><code>import pandas as pd
from pandas.compat import StringIO
origin = pd.read_table(StringIO('''label type value
x a 1
x b 2
y a 4
y b 5
z a 7
z c 9'''))
origin
Out[5]:
label type value
0 x a 1
1 x b 2
2 y a 4
3 y b 5
4 z a 7
5 z c 9
</code></pre>
<p>I want to transform it to something like</p>
<pre><code> label type value y_value z_value
0 x a 1 4 7
1 x b 2 5 NaN
</code></pre>
<p>Here the y_value and z_value are decided based on type.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> for filtering first - in <code>df2</code> also remove rows which are not in <code>df1['type']</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_suffix.html" rel="nofollow noreferrer"><code>add_suffix</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a>:</p>
<pre><code>a = 'x'
df1 = df[df['label'] == a]
df2 = df[(df['label'] != a) & (df['type'].isin(df1['type']))]
df3 = df2.pivot(index='type', columns='label', values='value').add_suffix('_value')
print (df3)
label y_value z_value
type
a 4.0 7.0
b 5.0 NaN
df3 = df1.join(df3, on='type')
print (df3)
label type value y_value z_value
0 x a 1 4.0 7.0
1 x b 2 5.0 NaN
</code></pre>
|
python|pandas
| 1
|
375,452
| 44,683,956
|
Grouping customer orders by date, category and customer with one-hot-encoding result
|
<p>I have a dataframe containing orders of customers from different categories (A-F). A one indicates a purchase from this category, whereas a zero indicates none. Now I would like to indicate with 1 and 0 encoding whether a purchase in each respective category was made on a per-day and per-customer basis. </p>
<pre><code>YEAR MONTH DAY A B C D E F Customer
2007 1 1 1 0 0 0 0 0 5000
2007 1 1 1 0 0 0 0 0 5000
2007 1 1 0 1 0 0 0 0 5000
2007 1 2 0 1 0 0 0 0 5000
2007 1 2 0 0 1 0 0 0 5000
</code></pre>
<p>The output should look something like this:</p>
<pre><code> YEAR MONTH DAY A B C D E F Customer
2007 1 1 1 1 0 0 0 0 5000
</code></pre>
<p>I've been trying to work this out using pandas' built-in "groupby", however I can't get the right result. Does anyone know how to solve this?</p>
<p>Thank you very much!</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and aggregate <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.max.html" rel="nofollow noreferrer"><code>max</code></a>:</p>
<pre><code>cols = ['YEAR','MONTH','DAY','Customer']
df = df.groupby(cols, as_index=False).max()
print (df)
YEAR MONTH DAY Customer A B C D E F
0 2007 1 1 5000 1 1 0 0 0 0
1 2007 1 2 5000 0 1 1 0 0 0
</code></pre>
<p>And if you need the same order of columns, add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex_axis.html" rel="nofollow noreferrer"><code>DataFrame.reindex_axis</code></a>:</p>
<pre><code>cols = ['YEAR','MONTH','DAY','Customer']
df = df.groupby(cols, as_index=False).max().reindex_axis(df.columns, axis=1)
print (df)
YEAR MONTH DAY A B C D E F Customer
0 2007 1 1 1 1 0 0 0 0 5000
1 2007 1 2 0 1 1 0 0 0 5000
</code></pre>
|
python|pandas
| 1
|
375,453
| 44,611,800
|
Mapping an integer to array (Python): ValueError: setting an array element with a sequence
|
<p>I have a defaultdict which maps certain integers to a numpy array of size 20.</p>
<p>In addition, I have an existing array of indices. I want to turn that array of indices into a 2D array, where each original index is converted into an array via my defaultdict.</p>
<p>Finally, in the case that an index isn't found in the defaultdict, I want to create an array of zeros for that index.</p>
<p>Here's what I have so far</p>
<pre><code> converter = lambda x: np.zeros((d), dtype='float32') if x == -1 else cVf[x]
vfunc = np.vectorize(converter)
cvf = vfunc(indices)
</code></pre>
<p><code>np.zeros((d), dtype='float32')</code> and <code>cVf[x]</code> are identical data types/ shapes:</p>
<pre><code>(Pdb) np.shape(cVf[0])
(20,)
</code></pre>
<p>Yet I get the error in the title (*** ValueError: setting an array element with a sequence.) when I try to run this code. </p>
<p>Any ideas?</p>
|
<p>You should give us some sample arrays or dictionaries (in the case of <code>cVF</code>), so we can make a test run.</p>
<p>Read what <code>vectorize</code> has to say about the return value. Since you don't define <code>otypes</code>, it makes a test calculation to determine the dtype of the returned array. My first thought was that the test calc and subsequent one might be returning different things. But you claim <code>converter</code> will always be returning the same dtype and shape array.</p>
<p>But let's try something simpler:</p>
<pre><code>In [609]: fv = np.vectorize(lambda x: np.array([x,x]))
In [610]: fv([1,2,3])
...
ValueError: setting an array element with a sequence.
</code></pre>
<p>It's having trouble with returning any array.</p>
<p>But if I give an <code>otypes</code>, it works</p>
<pre><code>In [611]: fv = np.vectorize(lambda x: np.array([x,x]), otypes=[object])
In [612]: fv([1,2,3])
Out[612]: array([array([1, 1]), array([2, 2]), array([3, 3])], dtype=object)
</code></pre>
<p>In fact in this case I could use <code>frompyfunc</code>, which returns object dtype, and is the underlying function for <code>vectorize</code> (and a bit faster).</p>
<pre><code>In [613]: fv = np.frompyfunc(lambda x: np.array([x,x]), 1,1)
In [614]: fv([1,2,3])
Out[614]: array([array([1, 1]), array([2, 2]), array([3, 3])], dtype=object)
</code></pre>
<p><code>vectorize</code> and <code>frompyfunc</code> are designed for functions that are <code>scalar in- scalar out</code>. That scalar may be an object, even array, but is still treated as a scalar.</p>
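<p>Applied back to your case, a rough sketch (reusing <code>cVf</code>, <code>d</code> and <code>indices</code> from your snippet and assuming <code>indices</code> is 1D; the final stacking step is my addition, not part of <code>vectorize</code> itself):</p>
<pre><code>import numpy as np

converter = lambda x: np.zeros((d,), dtype='float32') if x == -1 else cVf[x]
vfunc = np.vectorize(converter, otypes=[object])

cvf_obj = vfunc(indices)   # 1D object array whose elements are length-20 arrays
cvf = np.stack(cvf_obj)    # regular 2D float array, shape (len(indices), 20)
</code></pre>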
|
python|arrays|numpy
| 1
|
375,454
| 44,548,401
|
How to specify a scalar multiplier for units when using Quantities?
|
<p>The objective is to handle cell densities expressed as "1000/mm^3", i.e. thousands per cubic millimeter.</p>
<p>Currently I do this to handle "1/mm^3":</p>
<pre><code>import quantities as pq
d1 = pq.Quantity(500000, "1/mm**3")
</code></pre>
<p>which gives: </p>
<pre><code>array(500000) * 1/mm**3
</code></pre>
<p>But what I really need to do is to accept the values with units of "1000/mm^3". This should also be the form in which values are printed. When I try something like:</p>
<pre><code>d1 = pq.Quantity(5, 1000/pq.mm**3)
</code></pre>
<p>I get the following error:</p>
<pre><code>ValueError: units must be a scalar Quantity with unit magnitude, got 1000.0 1/mm**3
</code></pre>
<p>And if I try:</p>
<pre><code>a = pq.Quantity(500, "1000/mm**3")
</code></pre>
<p>The output is:</p>
<pre><code>array(500) * 1/mm**3
</code></pre>
<p>i.e. the <code>1000</code> just gets ignored.
Any idea how I can fix this? Any workaround?</p>
<p>(The requirement arises from the standard practice followed in the domain.)</p>
|
<p>One possible solution I have found is to create new units such as this:</p>
<pre><code>k_per_mm3 = pq.UnitQuantity('1000/mm3', 1e3/pq.mm**3, symbol='1000/mm3')
d1 = pq.Quantity(500, k_per_mm3)
</code></pre>
<p>Then on printing 'd1', I get:</p>
<pre><code>array(500) * 1000/mm3
</code></pre>
<p>which is what I required.</p>
<p>Is this the only way to do this? Or can the same be achieved with existing units (which is preferable)?</p>
|
python|numpy|units-of-measurement|quantities
| 1
|
375,455
| 44,794,455
|
What's the optimal way to access a cell within a panda of nested dictionaries?
|
<p>I have a pandas DataFrame that works like a nested dictionary. It goes from a tuple of <code>(STRING, DATE)</code> to a dictionary that contains specific columns and values. I've been trying to figure out the syntax to get an individual cell's data. For instance, I'd like to call [('SYY', '1997-06-30')]['dvt'] and get 99.5740. I've tried using df.get and [(tuple)] in various forms as well, without any luck. The error I receive is</p>
<blockquote>
<p>KeyError: ('SYY', '2016-06-30')</p>
</blockquote>
<p>The panda I have looks like this when printed:</p>
<pre><code> ivncf dvt fincf chech
SYY 1997-06-30 -213.3560 99.5740 -274.8150 9.9370
1998-06-30 -335.5300 110.9280 -29.6420 -7.4080
1999-06-30 -261.7350 126.6910 -284.5530 39.0150
2000-06-30 -459.3920 145.4180 -239.5090 9.8250
2001-06-30 -338.7510 173.7010 -639.8580 -23.3850
2002-06-30 -630.3000 225.5300 -359.9840 94.6960
2003-06-30 -681.8250 273.8520 -550.5280 139.0080
2004-06-30 -683.8110 321.3530 -641.8510 -137.7410
2005-06-30 -413.4400 368.7920 -784.2700 -8.0280
2006-06-30 -608.6710 408.2640 -504.6170 10.2190
2007-06-30 -648.7110 456.4380 -748.2500 5.9750
2008-06-30 -555.5600 513.5930 -698.5780 343.6800
2009-06-30 -658.6630 557.4870 -379.6430 535.5320
2010-06-30 -656.3200 585.7340 -667.0300 -433.2080
2011-06-30 -679.5560 604.5000 -377.9070 54.3220
2012-06-30 -903.6290 628.0240 -442.6490 49.1020
2013-06-30 -911.8820 654.8710 -874.2080 -276.5820
2014-06-30 -576.8380 673.5680 -915.8580 0.7610
2015-06-30 -654.3460 705.5390 3897.5620 4716.9980
2016-06-30 -600.8280 695.4690 -2404.7310 -1210.7440
</code></pre>
<p>Any suggestions? Thanks!</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>print (df.loc[('SYY', '1997-06-30'), 'dvt'])
99.574
</code></pre>
<p>For complicated selects use <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow noreferrer">slicers</a>:</p>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['SYY', '1997-06-30'], 'dvt'])
99.574
</code></pre>
<hr>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['SYY', '1997-06-30':'2001-06-30'], idx['dvt':'fincf']])
dvt fincf
SYY 1997-06-30 99.574 -274.815
1998-06-30 110.928 -29.642
1999-06-30 126.691 -284.553
2000-06-30 145.418 -239.509
2001-06-30 173.701 -639.858
</code></pre>
<hr>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['SYY', '1997-06-30'], idx['dvt':'chech']])
dvt 99.574
fincf -274.815
chech 9.937
Name: (SYY, 1997-06-30), dtype: float64
</code></pre>
<hr>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['SYY', '1997-06-30':'2003-06-30'], 'dvt'])
SYY 1997-06-30 99.574
1998-06-30 110.928
1999-06-30 126.691
2000-06-30 145.418
2001-06-30 173.701
2002-06-30 225.530
2003-06-30 273.852
Name: dvt, dtype: float64
</code></pre>
|
python|pandas|dictionary
| 3
|
375,456
| 44,610,766
|
Select columns using pandas dataframe.query()
|
<p>The documentation on <code>dataframe.query()</code> is <em>very</em> terse <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="noreferrer">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html</a> . I was also unable to find examples of projections by web search.</p>
<p>So I tried simply providing the column names: that gave a syntax error. Likewise for typing <code>select</code> and then the column names. So .. how to do this?</p>
|
<p>After playing around with this for a while and reading through <a href="https://github.com/pandas-dev/pandas/blob/v0.20.2/pandas/core/frame.py#L2038-L2128" rel="noreferrer">the source code</a> for <code>DataFrame.query</code>, I can't figure out a way to do it.</p>
<p>If it's not impossible, apparently it's at least strongly discouraged. When this question came up on github, prolific Pandas dev/maintainer jreback <a href="https://github.com/pandas-dev/pandas/issues/16226#issuecomment-299451702" rel="noreferrer">suggested using <code>df.eval()</code> for selecting columns and <code>df.query()</code> for filtering on rows</a>. </p>
<hr>
<p>UPDATE:</p>
<p>javadba points out that the return value of <code>eval</code> is not a dataframe. For example, to flesh out jreback's example a bit more...</p>
<pre><code>df.eval('A')
</code></pre>
<p>returns a Pandas Series, but</p>
<pre><code>df.eval(['A', 'B'])
</code></pre>
<p>does not return a DataFrame; it returns a list (of Pandas Series).</p>
<p>So it seems ultimately the best way to maintain flexibility to filter on rows and columns is to use <code>iloc</code>/<code>loc</code>, e.g.</p>
<pre><code>df.loc[0:4, ['A', 'C']]
</code></pre>
<p>output</p>
<pre><code> A C
0 -0.497163 -0.046484
1 1.331614 0.741711
2 1.046903 -2.511548
3 0.314644 -0.526187
4 -0.061883 -0.615978
</code></pre>
|
python|pandas|dataframe
| 8
|
375,457
| 44,794,220
|
pandas column value update from another dataframe value
|
<p>I have the following 2 dataframes </p>
<pre><code>df_a =
id val
0 A100 11
1 A101 12
2 A102 13
3 A103 14
4 A104 15
df_b =
id loc val
0 A100 12
1 A100 23
2 A100 32
3 A102 21
4 A102 38
5 A102 12
6 A102 18
7 A102 19
.....
</code></pre>
<p>desired result: </p>
<pre><code>df_b =
id loc val
0 A100 12 11
1 A100 23 11
2 A100 32 11
3 A102 21 12
4 A102 38 12
5 A102 12 12
6 A102 18 12
7 A102 19 12
.....
</code></pre>
<p>When I try to update df_b's 'val' column by df_a's 'val' column like this, </p>
<pre><code>for index, row in df_a.iterrows():
v = row['val']
seq = df_a.loc[df_a['val'] == v]
df_b.loc[df_b['val'] == v, 'val'] = seq['val']
</code></pre>
<p>or </p>
<pre><code>df_x = df_b.join(df_a, on=['id'], how='inner', lsuffix='_left', rsuffix='_right')
</code></pre>
<p>However, I could not solve this... How can I resolve this tricky thing? </p>
<p>Thank you </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="noreferrer"><code>map</code></a> by <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="noreferrer"><code>set_index</code></a>:</p>
<pre><code>df_b['val'] = df_b['id'].map(df_a.set_index('id')['val'])
print (df_b)
id loc val
0 A100 12 11
1 A100 23 11
2 A100 32 11
3 A102 21 13
4 A102 38 13
5 A102 12 13
6 A102 18 13
7 A102 19 13
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="noreferrer"><code>merge</code></a> with <code>left join</code>:</p>
<pre><code>df = pd.merge(df_b,df_a, on='id', how='left')
print (df)
id loc val
0 A100 12 11
1 A100 23 11
2 A100 32 11
3 A102 21 13
4 A102 38 13
5 A102 12 13
6 A102 18 13
7 A102 19 13
</code></pre>
<p>If <code>id</code> is the only common column used for joining in both DataFrames, it is possible to omit it.</p>
<pre><code>df = pd.merge(df_b,df_a, how='left')
print (df)
id loc val
0 A100 12 11
1 A100 23 11
2 A100 32 11
3 A102 21 13
4 A102 38 13
5 A102 12 13
6 A102 18 13
7 A102 19 13
</code></pre>
|
python|pandas
| 6
|
375,458
| 44,811,405
|
Pandas: Concatenating DataFrame with Sparse Matrix
|
<p>I'm doing some basic machine learning and have a sparse matrix resulting from TFIDF as follows:</p>
<pre><code><983x33599 sparse matrix of type '<type 'numpy.float64'>'
with 232944 stored elements in Compressed Sparse Row format>
</code></pre>
<p>Then I have a DataFrame with a <code>title</code> column. I want to combine these into one DataFrame but when I try to use <code>concat</code>, I get that I can't combine a DataFrame with a non-DataFrame object.</p>
<p>How do I get around this?</p>
<p>Thanks!</p>
|
<p>Consider the following demo:</p>
<p>Source DF:</p>
<pre><code>In [2]: df
Out[2]:
text
0 is it good movie
1 wooow is it very goode
2 bad movie
</code></pre>
<p>Solution: let's create a SparseDataFrame out of TFIDF sparse matrix:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(sublinear_tf=True, max_df=0.5, analyzer='word', stop_words='english')
sdf = pd.SparseDataFrame(vect.fit_transform(df['text']),
columns=vect.get_feature_names(),
default_fill_value=0)
sdf['text'] = df['text']
</code></pre>
<p>Result:</p>
<pre><code>In [13]: sdf
Out[13]:
bad good goode wooow text
0 0.0 1.0 0.000000 0.000000 is it good movie
1 0.0 0.0 0.707107 0.707107 wooow is it very goode
2 1.0 0.0 0.000000 0.000000 bad movie
In [14]: sdf.memory_usage()
Out[14]:
Index 80
bad 8
good 8
goode 8
wooow 8
text 24
dtype: int64
</code></pre>
<p>PS: pay attention to <code>.memory_usage()</code> - we didn't lose the "sparseness". If we used <code>pd.concat</code>, <code>join</code>, <code>merge</code>, etc., we would lose the "sparseness", as all these methods generate a new regular (not sparse) copy of the merged DataFrames.</p>
|
python|pandas|dataframe
| 3
|
375,459
| 44,401,088
|
Using Training TFRecords that are stored on Google Cloud
|
<p>My goal is to use training data (format: tfrecords) stored on Google Cloud storage when I run my Tensorflow Training App, locally. (Why locally? : I am testing before I turn it into a training package for Cloud ML)</p>
<p>Based on <a href="https://stackoverflow.com/questions/39783189/reading-input-data-from-gcs">this thread</a> I shouldn't have to do anything since the underlying Tensorflow API's should be able to read a gs://(url)</p>
<p>However thats not the case and the errors I see are of the format:</p>
<blockquote>
<p>2017-06-06 15:38:55.589068: I
tensorflow/core/platform/cloud/retrying_utils.cc:77] The operation
failed and will be automatically retried in 1.38118 seconds (attempt 1
out of 10), caused by: Unavailable: Error executing an HTTP request
(HTTP response code 0, error code 6, error message 'Couldn't resolve
host 'metadata'')</p>
<p>2017-06-06 15:38:56.976396: I
tensorflow/core/platform/cloud/retrying_utils.cc:77] The operation
failed and will be automatically retried in 1.94469 seconds (attempt 2
out of 10), caused by: Unavailable: Error executing an HTTP request
(HTTP response code 0, error code 6, error message 'Couldn't resolve
host 'metadata'')</p>
<p>2017-06-06 15:38:58.925964: I
tensorflow/core/platform/cloud/retrying_utils.cc:77] The operation
failed and will be automatically retried in 2.76491 seconds (attempt 3
out of 10), caused by: Unavailable: Error executing an HTTP request
(HTTP response code 0, error code 6, error message 'Couldn't resolve
host 'metadata'')</p>
</blockquote>
<p>I'm not able to follow where I have to begin debugging this error.</p>
<p>Here is a snippet that reproduced the problem and also shows the tensorflow API's that I am using.</p>
<pre><code>def _preprocess_features(features):
"""Function that returns preprocessed images"""
def _parse_single_example_from_tfrecord(value):
features = (
tf.parse_single_example(value,
features={'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([model_config.LABEL_SIZE], tf.int64)
})
)
return features
def _read_and_decode_tfrecords(filename_queue):
reader = tf.TFRecordReader()
# Point it at the filename_queue
_, value = reader.read(filename_queue)
features = _parse_single_example_from_tfrecord(value)
# decode the binary string image data
image, label = _preprocess_features(features)
return image, label
def test_tfread(filelist):
train_filename_queue = (
tf.train.string_input_producer(filelist,
num_epochs=None,
shuffle=True))
image, label = (
_read_and_decode_tfrecords(train_filename_queue))
return image
images= test_tfread(["gs://test-bucket/t.tfrecords"])
sess = tf.Session(config=tf.ConfigProto(
allow_soft_placement=True,
log_device_placement=True))
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
for step in range(model_config.MAX_STEPS):
_ = sess.run([images])
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Finally, wait for them to join (i.e. cleanly shut down)
coord.join(threads)
</code></pre>
|
<p>Try executing the following command</p>
<p><code>gcloud auth application-default login</code></p>
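<p>The "Couldn't resolve host 'metadata'" part of the error means TensorFlow's GCS reader is falling back to the GCE metadata server for credentials, which only exists on Google Cloud VMs. Locally you need Application Default Credentials, which the command above creates. As an alternative (a hedged sketch with a placeholder path), you can point the standard environment variable at your service-account key before creating the session:</p>
<pre><code>import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account-key.json'
</code></pre>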
|
tensorflow|google-cloud-ml|google-cloud-ml-engine
| 27
|
375,460
| 44,659,204
|
numpy dot product with missing values
|
<p>How do you do a numpy dot product where the two vectors might have missing values? This seems to require many additional steps; is there an easier way than the following?</p>
<pre><code>v1 = np.array([1,4,2,np.nan,3])
v2 = np.array([np.nan,np.nan,2,4,1])
np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
</code></pre>
|
<p>We can use <a href="https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.nansum.html" rel="noreferrer"><code>np.nansum</code></a> to sum up the values ignoring <code>NaNs</code> after element-wise multiplication -</p>
<pre><code>np.nansum(v1*v2)
</code></pre>
<p>Sample run -</p>
<pre><code>In [109]: v1
Out[109]: array([ 1., 4., 2., nan, 3.])
In [110]: v2
Out[110]: array([ nan, nan, 2., 4., 1.])
In [111]: np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
Out[111]: 7.0
In [115]: v1*v2
Out[115]: array([ nan, nan, 4., nan, 3.])
In [116]: np.nansum(v1*v2)
Out[116]: 7.0
</code></pre>
|
numpy|multidimensional-array|linear-algebra|numpy-ndarray|dot-product
| 9
|
375,461
| 44,459,935
|
Groupby .cumsum() blank if the summed column is equal to zero?
|
<p>I have a DataFrame .groupby() .cumsum(), with a DataFrame as follows:</p>
<pre><code> Col_A Col_B Col_C
1 A 0
2 A 1 1
3 A 1 2
4 A 1 3
5 B 0 0
6 B 1 1
7 B 0
8 B 1 2
9 C 1 1
10 C 1 2
11 C 1 3
12 C 0
</code></pre>
<p>The sum of Col_B is <code>df.groupby(['Col_A'])['Col_B'].cumsum()</code>. However, when Col_B == 0, the .cumsum() is blank. How do I record the <code>.cumsum()</code> even when Col_B is blank?</p>
<p>The resulting DataFrame should resemble:</p>
<pre><code> Col_A Col_B Col_C
1 A 0 0
2 A 1 1
3 A 1 2
4 A 1 3
5 B 0 0
6 B 1 1
7 B 0 1
8 B 1 2
9 C 1 1
10 C 1 2
11 C 1 3
12 C 0 3
</code></pre>
|
<p>Having a column of 0s is not the same as having a completely blank column.
If you have NAs in a column, the .cumsum() for that column should in fact be NA (or 'blank', as you say).
You could check to see whether the whole column is NA and set the value accordingly.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer">Documentation</a>:</p>
<pre><code>DataFrame.cumsum(axis=None, skipna=True, *args, **kwargs)
Return cumulative sum over requested axis.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
</code></pre>
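<p>A small sketch of that idea (my assumption: the rows where the cumulative sum comes out blank actually hold NaN or empty strings in <code>Col_B</code>, which pandas then propagates): coerce the column to numeric, fill the gaps with 0, and the grouped cumulative sum covers every row.</p>
<pre><code>df['Col_B'] = pd.to_numeric(df['Col_B'], errors='coerce').fillna(0)
df['Col_C'] = df.groupby('Col_A')['Col_B'].cumsum()
</code></pre>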
|
python|pandas|group-by|sum|series
| 2
|
375,462
| 44,728,349
|
Unique values python
|
<p>I am basically trying to look through a column and, if a value in that column is unique, enter 1; if it isn't, it just becomes a NaN. My dataframe looks like this:</p>
<pre class="lang-none prettyprint-override"><code> Street Number
0 1312 Oak Avenue 1
1 14212 central Ave 2
2 981 franklin way 1
</code></pre>
<p>the code I am using to put the number 1 next to unique values is as follows:</p>
<pre><code>df.loc[(df['Street'].unique()), 'Unique'] = '1'
</code></pre>
<p>However, when I run this I get the error KeyError: "not in index", and I don't know why. I tried running this on the Number column and I get my desired result, which is:</p>
<pre class="lang-none prettyprint-override"><code> Street Number Unique
0 1312 Oak Avenue 1 NaN
1 14212 central Ave 2 1
2 981 franklin way 1 1
</code></pre>
<p>So my column that specifies which ones are unique is called Unique, and it puts a one by the rows that are unique and NaNs by the ones that are duplicates. In this case the value 1 appears twice; it notices that, makes the first occurrence NaN and gives the second a 1, and since there is only one 2, it gives it a 1 as well since it is unique. I just don't know why I am getting that error for the Street column.</p>
|
<p>That's not really producing your desired result. The output of <code>df['Number'].unique()</code>, <code>array([1, 2], dtype=int64)</code>, just happened to be in the index. You'd encounter the same issue on that column if <code>Number</code> instead was <code>[3, 4, 3]</code>, say.</p>
<p>For what you're looking for, selecting where not <code>duplicated</code>, or where you have left after dropping duplicates, might be better than <code>unique</code>:</p>
<pre><code>df.loc[~(df['Number'].duplicated()), 'Unique'] = 1
df
Out[51]:
Street Number Unique
0 1312 Oak Avenue 1 1.0
1 14212 central Ave 2 1.0
2 981 franklin way 1 NaN
df.loc[df['Number'].drop_duplicates(), 'Unique'] = 1
df
Out[63]:
Street Number Unique
0 1312 Oak Avenue 1 NaN
1 14212 central Ave 2 1.0
2 981 franklin way 1 1.0
</code></pre>
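<p>And for the column you actually asked about, the same pattern should work on <code>Street</code> directly (an untested sketch along the lines above):</p>
<pre><code>df.loc[~df['Street'].duplicated(), 'Unique'] = 1
</code></pre>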
|
python|pandas|dataframe|unique
| 1
|
375,463
| 44,507,870
|
Is there an easy way to eliminate duplicate rows in a DataFrame in Python- pandas?
|
<p>My problem is that my data isn't a good representation of what is really going on because it has a lot of duplicate rows. Consider the following-</p>
<pre><code> a b
1 23 42
2 23 42
3 23 42
4 14 12
5 14 12
</code></pre>
<p>I only want 1 row and to eliminate all duplicates. It should look like the following after it's done.</p>
<pre><code> a b
1 23 42
2 14 12
</code></pre>
<p>Is there a function to do this?</p>
|
<p>Let's use <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.drop_duplicates.html#pandas-dataframe-drop-duplicates" rel="nofollow noreferrer"><code>drop_duplicates</code></a> with <code>keep='first'</code>:</p>
<pre><code>df2.drop_duplicates(keep='first')
</code></pre>
<p>Output:</p>
<pre><code> a b
1 23 42
4 14 12
</code></pre>
|
python|pandas|dataframe
| 7
|
375,464
| 44,481,377
|
Tensorflow equivalent of this numpy axis-wise cartesian product for 2D matrices
|
<p>I currently have code that allows one to take a combinatorial (cartesian) product across a particular axis. This is in numpy, and originated from a previous question <a href="https://stackoverflow.com/questions/44323478/efficient-axis-wise-cartesian-product-of-multiple-2d-matrices-with-numpy-or-tens">Efficient axis-wise cartesian product of multiple 2D matrices with Numpy or TensorFlow</a></p>
<pre><code>A = np.array([[1,2],
[3,4]])
B = np.array([[10,20],
[5,6]])
C = np.array([[50, 0],
[60, 8]])
cartesian_product( [A,B,C], axis=1 )
>> np.array([[ 1*10*50, 1*10*0, 1*20*50, 1*20*0, 2*10*50, 2*10*0, 2*20*50, 2*20*0]
[ 3*5*60, 3*5*8, 3*6*60, 3*6*8, 4*5*60, 4*5*8, 4*6*60, 4*6*8]])
</code></pre>
<p>and to reiterate the solution:</p>
<pre><code>L = [A,B,C] # list of arrays
n = L[0].shape[0]
out = (L[1][:,None]*L[0][:,:,None]).reshape(n,-1)
for i in L[2:]:
out = (i[:,None]*out[:,:,None]).reshape(n,-1)
</code></pre>
<p>Is there an existing method to perform this with broadcasting in tensorflow - without a for loop?</p>
|
<p>Ok so I managed to find a pure tf based (partial) answer for two arrays. It's not currently generalizable like the numpy solution for M arrays, but that's for another question (perhaps a tf.while_loop). For those that are curious, the solution adapts from <a href="https://stackoverflow.com/questions/43534057/evaluate-all-pair-combinations-of-rows-of-two-tensors-in-tensorflow">Evaluate all pair combinations of rows of two tensors in tensorflow</a></p>
<pre><code>a = np.array([[0, 1, 2, 3],
[4, 5, 6, 7],
[4, 5, 6, 7]])
b = np.array([[0, 1],
[2, 3],
[2, 3]])
N = a.shape[0]
A = tf.constant(a, dtype=tf.float64)
B = tf.constant(b, dtype=tf.float64)
A_ = tf.expand_dims(A, axis=1)
B_ = tf.expand_dims(B, axis=2)
z = tf.reshape(tf.multiply(A_, B_), [N, -1])
with tf.Session() as sess:
    tf_result = sess.run(z)

>> tf_result
Out[1]:
array([[ 0., 0., 0., 0., 0., 1., 2., 3.],
[ 8., 10., 12., 14., 12., 15., 18., 21.],
[ 8., 10., 12., 14., 12., 15., 18., 21.]])
</code></pre>
<p>Solutions for the multiple array case are welcome</p>
|
python|tensorflow|product|cartesian
| 0
|
375,465
| 44,649,603
|
ValueError: Cannot feed value of shape (3375, 50, 50, 2) for Tensor 'Reshape:0', which has shape '(?, 5000)'
|
<p>I am learning TensorFlow. Following is my code for an MLP with TensorFlow. I have some issues with mismatching data dimensions.</p>
<pre><code>import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
wholedataset = np.load('C:/Users/pourya/Downloads/WholeTrueData.npz')
data = wholedataset['wholedata'].astype('float32')
label = wholedataset['wholelabel'].astype('float32')
height = wholedataset['wholeheight'].astype('float32')
print(type(data[20,1,1,0]))
learning_rate = 0.001
training_iters = 5
display_step = 20
n_input = 3375
X = tf.placeholder("float32")
Y = tf.placeholder("float32")
weights = {
'wc1': tf.Variable(tf.random_normal([3, 3, 2, 1])),
'wd1': tf.Variable(tf.random_normal([3, 3, 1, 1]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([1])),
'out': tf.Variable(tf.random_normal([1,50,50,1]))
}
mnist= data
n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500
n_classes = 2
batch_size = 100
x = tf.placeholder('float', shape = [None,50,50,2])
shape = x.get_shape().as_list()
dim = np.prod(shape[1:])
x_reshaped = tf.reshape(x, [-1, dim])
y = tf.placeholder('float', shape= [None,50,50,2])
shape = y.get_shape().as_list()
dim = np.prod(shape[1:])
y_reshaped = tf.reshape(y, [-1, dim])
def neural_network_model(data):
hidden_1_layer = {'weights':tf.Variable(tf.random_normal([5000,
n_nodes_hl1])),
'biases':tf.Variable(tf.random_normal([n_nodes_hl1]))}
hidden_2_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl1,
n_nodes_hl2])),
'biases':tf.Variable(tf.random_normal([n_nodes_hl2]))}
hidden_3_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl2,
n_nodes_hl3])),
'biases':tf.Variable(tf.random_normal([n_nodes_hl3]))}
output_layer = {'weights':tf.Variable(tf.random_normal([n_nodes_hl3,
n_classes])),
'biases':tf.Variable(tf.random_normal([n_classes])),}
l1 = tf.add(tf.matmul(data,hidden_1_layer['weights']),
hidden_1_layer['biases'])
l1 = tf.nn.relu(l1)
l2 = tf.add(tf.matmul(l1,hidden_2_layer['weights']),
hidden_2_layer['biases'])
l2 = tf.nn.relu(l2)
l3 = tf.add(tf.matmul(l2,hidden_3_layer['weights']),
hidden_3_layer['biases'])
l3 = tf.nn.relu(l3)
output = tf.matmul(l3,output_layer['weights']) + output_layer['biases']
return output
def train_neural_network(x):
prediction = neural_network_model(x)
cost = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y) )
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 10
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
epoch_loss = 0
for _ in range(int(n_input/batch_size)):
epoch_x = wholedataset['wholedata'].astype('float32')
epoch_y = wholedataset['wholedata'].astype('float32')
_, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y:
epoch_y})
epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Accuracy:',accuracy.eval({x:mnist.test.images,
y:mnist.test.labels}))
train_neural_network(x)
</code></pre>
<p>I got the following error:</p>
<pre><code>ValueError: Cannot feed value of shape (3375, 50, 50, 2) for Tensor 'Reshape:0', which has shape '(?, 5000)'
</code></pre>
<p>Does anyone know what the issue with my code is, and how I can fix it?
The data has shape (3375, 50, 50, 2).</p>
<p>Thank you for anyone's input!</p>
|
<p>I think that the problem is that you use the same variable name <code>x</code> for the placeholder and the reshape, in lines </p>
<pre><code>x = tf.placeholder('float', shape = [None,50,50,2])
</code></pre>
<p>and </p>
<pre><code>x = tf.reshape(x, [-1, dim])
</code></pre>
<p>so that when you </p>
<pre><code>feed_dict={x: your_val}
</code></pre>
<p>you are feeding the output of the reshape operation.</p>
<p>You should have different names, for instance</p>
<pre><code>x_placeholder = tf.placeholder('float', shape = [None,50,50,2])
x_reshaped = tf.reshape(x, [-1, dim])
</code></pre>
<p>and then</p>
<pre><code>feed_dict={x_placeholder: your_val}
</code></pre>
|
tensorflow|python-3.5
| 0
|
375,466
| 44,594,292
|
Errors installing TensorFlow 1.2 GPU in Anaconda env with py 3.6 Ubuntu 16.04 Setup tools
|
<p>It seems TF requires setuptools 27.2.0 while I have setuptools 36.0.1?</p>
<p>I am using a newly created Anaconda virtual environment (py362) on Ubuntu 16.04 (in another env I have TF 1.1 GPU running fine), anaconda command line client version 1.6.3, Python 3.6.1, and I attempt to install TF 1.2 GPU. FWIW, since I have a rather hefty machine, before the TF install I did install the suggested protobuf binary (pip3 install --upgrade \
<a href="https://storage.googleapis.com/tensorflow/linux/cpu/protobuf-3.1.0-cp35-none-linux_x86_64.whl" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/linux/cpu/protobuf-3.1.0-cp35-none-linux_x86_64.whl</a>). Could that be messing me up? The error messages point elsewhere, but I see no solution to the setuptools issue.</p>
<p>I do see that prior versions had some similar issues but don't see a solution? I used the following:</p>
<pre><code>pip install --upgrade \ https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.0-cp36-cp36m-linux_x86_64.whl
</code></pre>
<p>Much installed fine but then I got this and I cannot run TF.</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/home/tom/anaconda3/envs/py362/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg'
(py362) tom@tomServal:~$ pip install setuptools-27.2.0-py3.6
Collecting setuptools-27.2.0-py3.6
Could not find a version that satisfies the requirement setuptools-27.2.0-py3.6 (from versions: )
No matching distribution found for setuptools-27.2.0-py3.6
(py362) tom@tomServal:~$ pip install setuptools
Requirement already satisfied: setuptools in ./anaconda3/envs/py362/lib/python3.6/site-packages
</code></pre>
<p>doing a pip list I see </p>
<pre><code>setuptools (36.0.1)
</code></pre>
<p>So it seems perhaps that the requirements on the TF1.2 install may be incorrectly pinned?</p>
|
<p><code>pip install setuptools==27.2.0</code></p>
|
python|ubuntu|tensorflow|anaconda|setuptools
| 1
|
375,467
| 44,778,876
|
Is there a systematic way to compute the receptive field of a neuron?
|
<p>I am interested in computing the receptive field of a neuron relatively to the input, or more generally relatively to an earlier layer.</p>
<p>This can be done manually, but I would like to know if there is a built-in function to do it or otherwise if there is a way do compute it automatically.</p>
<p>Is there something that could work at least on a simple (single stream, no skip/concatenation) network restricted to convolutions and reduce layers, with possibly a mix of <code>SAME</code> and <code>VALID</code> paddings and non-unit strides?</p>
|
<p>Yes, as of Aug 2017, you can simply use <code>tf.contrib.receptive_field</code></p>
<p>See <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/receptive_field" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/receptive_field</a> for details.</p>
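<p>A rough usage sketch based on that README (the function name, argument order and return values below are my recollection of the contrib module, so treat them as assumptions and check the linked page): you pass a graph def plus the names of an input node and an output node, and get back the receptive-field size, effective stride and effective padding in x and y.</p>
<pre><code>from tensorflow.contrib import receptive_field

# 'input_image' and 'my_output_endpoint' are placeholder node names for your own graph
rf = receptive_field.compute_receptive_field_from_graph_def(
    graph_def, 'input_image', 'my_output_endpoint')
rf_x, rf_y, eff_stride_x, eff_stride_y, eff_pad_x, eff_pad_y = rf
</code></pre>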
|
tensorflow
| 1
|
375,468
| 44,774,829
|
python dataframe appending columns horizontally
|
<p>I am trying to make a simple script that concatenates or appends multiple column sets that I pull from xls files within a directory. Each xls file has a format of:</p>
<pre><code>Index Exp. m/z Intensity
1 1000.11 1000
2 2000.14 2000
3 3000.15 3000
</code></pre>
<p>Each file has varying number of indices. Below is my code:</p>
<pre><code>import pandas as pd
import os
import tkinter.filedialog
full_path = tkinter.filedialog.askdirectory(initialdir='.')
os.chdir(full_path)
data = {}
df = pd.DataFrame()
for files in os.listdir(full_path):
if os.path.isfile(os.path.join(full_path, files)):
df = pd.read_excel(files, 'Sheet1')[['Exp. m/z', 'Intensity']]
data = df.concat(df, axis=1)
data.to_excel('test.xls', index=False)
</code></pre>
<p>This produces an AttributeError: DataFrame object has no attribute concat. I also tried using append like:</p>
<pre><code>data = df.append(df, axis=1)
</code></pre>
<p>but I know that append has no axis keyword argument. df.append(df) does work, but it places the columns at the bottom. I want something like:</p>
<pre><code>Exp. m/z Intensity Exp. m/z Intensity
1000.11 1000 1001.43 1000
2000.14 2000 1011.45 2000
3000.15 3000
</code></pre>
<p>and so on. So the column sets that I pull from each file should be placed to the right of the previous column sets, with a column space in between. </p>
|
<p>I think you need <code>append</code> <code>DataFrames</code> to list and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="noreferrer"><code>pd.concat</code></a>:</p>
<pre><code>dfs = []
for files in os.listdir(full_path):
if os.path.isfile(os.path.join(full_path, files)):
df = pd.read_excel(files, 'Sheet1')[['Exp. m/z', 'Intensity']]
#for add empty column
df['empty'] = np.nan
dfs.append(df)
data = pd.concat(dfs, axis=1)
</code></pre>
|
python|pandas|dataframe|append|concat
| 8
|
375,469
| 44,724,152
|
Why is .ix inclusive on the end of indexing ranges?
|
<p>Python Version: 2.7.6
Numpy Version: 1.10.2
Pandas: 0.17.1</p>
<p>I understand that .ix is now deprecated, but I'm working on a legacy system, seeing this behavior with .ix, and I'm perplexed. </p>
<pre><code># Native Python List Indexing is exclusive on the end index
[0, 1, 2, 3][0:1] # returns [0] indexes with [0, 1)
# Native Numpy
import numpy as np
numpyArray = np.reshape(np.arange(4), (2, 2))
numpyArray[0:1, 0:1] # returns array([[0]]), indexes with [0, 1) in rows and [0, 1) in columns
####### Pandas #######
import pandas as pd
dataFrame = pd.DataFrame(numpyArray)
# Pandas with iloc #
dataFrame.iloc[0:1, 0:1] # returns 0, indexes with [0, 1) in rows and [0, 1) in columns
# Pandas with ix #
dataFrame.ix[0:1, 0:1] # returns [[0, 1], [2, 3] indexes with [0, 1] in rows and [0, 1] in columns
</code></pre>
|
<p><a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-label" rel="nofollow noreferrer"><code>.ix</code></a> is label-based indexing (same as <code>.loc</code>) which the docs state includes the stop range value which is different to <code>iloc</code> which is open-closed range so doesn't include the stop range value, this is by design</p>
<p>The reason it does this is because if your indices were for instance string, then it would make it problematic to select a range where you didn't know what the end range value should be:</p>
<pre><code>In[274]:
df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'), index=list('vwxyz'))
df
Out[274]:
a b c
v -0.488627 0.213183 0.224104
w -0.200328 -1.138937 0.815568
x -1.131868 -0.562758 0.088719
y 0.120701 -0.863737 0.246295
z -0.808140 0.253376 0.645974
In[275]:
df.ix['w':'y']
Out[275]:
a b c
w -0.200328 -1.138937 0.815568
x -1.131868 -0.562758 0.088719
y 0.120701 -0.863737 0.246295
</code></pre>
<p>If it didn't include the end value for the last row, you'd need to know that <code>'z'</code> had to be passed in order to return the labels before <code>'z'</code> and get the result above.</p>
<p><strong>update</strong></p>
<p>Note that <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0200-api-breaking-deprecate-ix" rel="nofollow noreferrer"><code>ix</code></a> is deprecated since <code>0.20.1</code> and you should use <code>loc</code> </p>
|
python|pandas|numpy
| 4
|
375,470
| 44,553,937
|
Creating legend for graph with multiple lines representing a 'group'
|
<p>From a dataframe in pandas 'g' I have the following data:</p>
<pre><code>index Speaker Date ARI Flesch Kincaid
0 Alan Greenspan 1996 15.234878 34.669383 14.533217
1 Alan Greenspan 1997 16.235605 31.415163 15.335869
11 Alan S. Blinder 2002 14.299481 41.847836 13.681203
12 Alan S. Blinder 2003 NaN NaN NaN
14 Alice M. Rivlin 1996 15.828971 33.394999 15.211662
</code></pre>
<p>With the code below I have been able to produce the following graph:</p>
<pre><code>s = data
s['Date'] = pd.to_datetime(s['Date'], format='%Y-%m-%d %H:%M:%S')
s = s.set_index(['Date'])
grouped = s.groupby('Speaker').resample('AS').mean()
grouped = grouped.reset_index()
g = grouped.reset_index()
g["Date"] = g["Date"].dt.year
g.plot(x='Date', y='Flesch', colormap = cm.cubehelix, legend=True,
title="Auto", figsize=(12,10))
</code></pre>
<p><a href="https://i.stack.imgur.com/SRalt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SRalt.png" alt="My current graph"></a></p>
<p>I would like the graph to include different colors for each line and place a legend that notes which "Speaker" is associated with each line. Any help would be appreciated!</p>
|
<p>Try to <code>groupby</code> by speaker and then plot as described <a href="http://pandas.pydata.org/pandas-docs/version/0.16.2/generated/pandas.core.groupby.DataFrameGroupBy.plot.html" rel="nofollow noreferrer">here</a></p>
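<p>A minimal sketch of that idea with your column names (assuming <code>matplotlib.pyplot</code> is imported as <code>plt</code>): plot each speaker's group on the same axes and let the labels feed the legend.</p>
<pre><code>fig, ax = plt.subplots(figsize=(12, 10))
for speaker, group in g.groupby('Speaker'):
    group.plot(x='Date', y='Flesch', ax=ax, label=speaker)
ax.set_title("Auto")
ax.legend(title='Speaker')
</code></pre>
<p>Alternatively, <code>g.pivot(index='Date', columns='Speaker', values='Flesch').plot()</code> should give one line and one legend entry per speaker in a single call, since each speaker has at most one value per year after the resample.</p>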
|
python|pandas|matplotlib|graph
| 1
|
375,471
| 44,795,595
|
Why is my TFRecord file so much bigger than csv?
|
<p>I always thought that, being a binary format, <a href="https://www.tensorflow.org/api_guides/python/python_io#tfrecords_format_details" rel="nofollow noreferrer">TFRecord</a> would consume less space than a human-readable csv. But when I tried to compare them, I saw that this is not the case.</p>
<p>For example here I create a <code>num_rows X 10</code> matrix with <code>num_rows</code> labels and save it as a csv. I do the same by saving it to TFRecors:</p>
<pre><code>import pandas as pd
import tensorflow as tf
from random import randint
num_rows = 1000000
df = pd.DataFrame([[randint(0,300) for r in xrange(10)] + [randint(0, 1)] for i in xrange(num_rows)])
df.to_csv("data/test.csv", index=False, header=False)
writer = tf.python_io.TFRecordWriter('data/test.bin')
for _, row in df.iterrows():
arr = list(row)
features, label = arr[:-1], arr[-1]
example = tf.train.Example(features=tf.train.Features(feature={
'features' : tf.train.Feature(int64_list=tf.train.Int64List(value=features)),
'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
}))
writer.write(example.SerializeToString())
writer.close()
</code></pre>
<p>Not only does it take way more time to create the binary file than the csv (2 sec vs 1 min 50 sec), but it also uses almost 2 times more space (38Mb vs 67.7Mb). </p>
<hr>
<p>Do I do it correctly? How can I make the output file smaller (saw <a href="https://www.tensorflow.org/api_docs/python/tf/python_io/TFRecordCompressionType" rel="nofollow noreferrer">TFRecordCompressionType</a>), but is there anything else I can do? And what is the reason for a much bigger size?</p>
<hr>
<p><strong>Vijay's comment regarding int64</strong> makes sense but still does not answer everything. Int64 consumes 8 bytes, and because I am storing data in csv, the string representation of the integer should also be of length 8. So if I do this <code>df = pd.DataFrame([[randint(1000000,99999999) for r in xrange(10)] for i in xrange(num_rows)])</code> I still get a slightly bigger size. Now it is 90.9Mb vs 89.1Mb. In addition to this, csv stores 1 byte for each comma between integers.</p>
|
<p>The fact that your file is bigger is due to the overhead that TFRecords has for each row, in particular the fact that the label names are stored every time.</p>
<p>In your example, if you increase the number of features (from 10 to say 1000) you will observe that your tfrecord file is actually about half the size of the csv.</p>
<p>Also, the fact that integers are stored on 64 bits is ultimately irrelevant, because the serialization uses a "varint" encoding that depends on the value of the integer, not on its initial encoding. Take your example above, and instead of a random value between 0 and 300, use a constant value of 300: you will see that your file size increases.</p>
<p>Note that the number of bytes used for the encoding is not exactly that of the integer itself. So a value of 255 will still need two bytes, but a value of 127 will take one byte. Interesting to know, negative values come with a huge penalty: 10 bytes for storage no matter what.</p>
<p>The correspondence between values and storage requirements is found in protobuf's function <a href="https://github.com/google/protobuf/blob/2f4489a3e504e0a4aaffee69b551c6acc9e08374/python/google/protobuf/internal/encoder.py#L96" rel="nofollow noreferrer"><code>_SignedVarintSize</code></a>.</p>
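<p>Regarding the <code>TFRecordCompressionType</code> mentioned in the question: enabling GZIP on the writer is a one-line change against the <code>tf.python_io</code> API already used above and usually shrinks the file considerably (a sketch, not benchmarked here):</p>
<pre><code>options = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.GZIP)
writer = tf.python_io.TFRecordWriter('data/test.bin', options=options)
</code></pre>
<p>Readers then need the same options (e.g. <code>tf.TFRecordReader(options=...)</code>), otherwise the compressed records will not parse.</p>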
|
tensorflow
| 2
|
375,472
| 44,508,502
|
KeyError from pandas DataFrame groupby
|
<p>This is a very strange error, I got <code>KeyError</code> when doing pandas DataFrame <code>groupby</code> for no obvious reason. </p>
<pre><code>df = pd.read_csv('test.csv')
df.tail(5)
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 165 entries, 0 to 164
Data columns (total 3 columns):
Id 165 non-null object
Time 165 non-null object
Val 165 non-null float64
dtypes: float64(1), object(2)
memory usage: 3.9+ KB
df.columns
Index([u'Id', u'Time', u'Val'], dtype='object')
df.groupby(['Id'])
KeyErrorTraceback (most recent call last)
<ipython-input-24-bba5c2dc5f75> in <module>()
----> 1 df.groupby(['Id'])
/usr/local/lib/python2.7/dist-packages/pandas/core/generic.pyc in groupby(self, by, axis, level, as_index, sort, group_keys, squeeze, **kwargs)
3776 return groupby(self, by=by, axis=axis, level=level, as_index=as_index,
3777 sort=sort, group_keys=group_keys, squeeze=squeeze,
-> 3778 **kwargs)
...
/usr/local/lib/python2.7/dist-packages/pandas/core/internals.pyc in get(self, item, fastpath)
3288
3289 if not isnull(item):
-> 3290 loc = self.items.get_loc(item)
3291 else:
3292 indexer = np.arange(len(self.items))[isnull(self.items)]
/usr/local/lib/python2.7/dist-packages/pandas/indexes/base.pyc in get_loc(self, key, method, tolerance)
1945 return self._engine.get_loc(key)
1946 except KeyError:
-> 1947 return self._engine.get_loc(self._maybe_cast_indexer(key))
1948
1949 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4154)()
pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4018)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12368)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12322)()
KeyError: 'Id'
</code></pre>
<p>Note that, using <code>df.columns = df.columns.map(str.strip)</code> as suggested doesn't make any different -- I'm still getting the exact same output from <code>df.columns</code> and error as above:</p>
<pre><code>df.columns = df.columns.map(str.strip)
df.columns
Out[38]:
Index([u'Id', u'Time', u'Val'], dtype='object')
</code></pre>
<p>If there is anywhere that I can post this "test.csv", I can do it, because I am almost certain that the problem is the format of the file -- the "test.csv" is Windows based, and is output from SQL Server SSMS. This is very important, as I opened, copied & saved the exact content using Notepad++, and there is no such problem with the newly saved file. </p>
<p>Using <code>file test.csv</code> under Linux shows:</p>
<pre><code>test.csv: UTF-8 Unicode (with BOM) text, with CRLF line terminators
</code></pre>
<p>Here are the top several bytes from the file:</p>
<pre><code>0000000 ef bb bf 49 64 2c 54 69 - 6d 65 2c 56 61 6c 0d 0a Id,Time,Val..
0000020 54 35 31 31 35 2c 30 30 - 3a 30 30 3a 30 30 2c 32 T5115,00:00:00,2
0000040 30 2e 38 31 39 0d 0a 54 - 35 31 31 35 2c 30 30 3a 0.819..T5115,00:
0000060 30 30 3a 30 33 2c 31 36 - 2e 39 32 36 0d 0a 54 35 00:03,16.926..T5
0000100 31 31 35 2c 30 30 3a 30 - 30 3a 30 38 2c 31 31 2e 115,00:00:08,11.
0000120 33 34 33 0d 0a 54 35 31 - 31 35 2c 30 30 3a 30 30 343..T5115,00:00
0000140 3a 31 37 2c 36 2e 39 37 - 35 0d 0a 54 35 31 31 35 :17,6.975..T5115
0000160 2c 30 30 3a 30 30 3a 32 - 39 2c 31 33 2e 35 35 33 ,00:00:29,13.553
0000200 0d 0a 54 35 31 31 35 2c - 30 30 3a 30 30 3a 33 35 ..T5115,00:00:35
</code></pre>
<p>Any idea how to solve it? Thx. </p>
|
<p>Got to the bottom of it -- it is in fact the Windows-based csv file that IS the root cause. </p>
<p>Proofs:</p>
<ol>
<li>I opened, copied & saved the exact content using Notepad++, and there is no such problem with the newly saved file. </li>
<li>If I convert it using <code>dos2unix</code> under Linux and then try the same code above, it works. The <code>groupby</code> no longer throws an exception. </li>
</ol>
<p>Filed a bug at <a href="https://github.com/pandas-dev/pandas/issues/16690" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/16690</a></p>
<p>And the solution to it: if pandas is earlier than 0.19, use <code>encoding='utf-8-sig'</code> in <code>pd.read_csv</code>:</p>
<pre><code>df = pd.read_csv('test.csv', encoding='utf-8-sig')
</code></pre>
|
python|python-2.7|pandas|dataframe
| 1
|
375,473
| 44,458,947
|
Tensorflow keeps all files. How to prevent that?
|
<p>Since updating to TensorFlow version 1.0, which introduced the new Saver V2, tf does not delete old files any more with the 'max_to_keep' argument. This is a problem on my system since my models are pretty big but my free space is limited.</p>
<p>Using the dummy program below, I end up with the following files for every number from 1 to 10, while I only expect the last 3 (8, 9, 10) to actually be there.</p>
<ul>
<li>testfile-1.data-00000-of-00001</li>
<li>testfile-1.index</li>
<li>testfile-1.meta</li>
</ul>
<p>program:</p>
<pre><code>import tensorflow as tf
a = tf.Variable(name='a', initial_value=0)
addops = a+1
saver = tf.train.Saver(max_to_keep=3)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())
for i in range(10):
sess.run(addops)
save_path = saver.save(sess, 'testfile', global_step=i+1)
sess.close()
</code></pre>
<p>Is this just me, or is this a known bug?
What are possible problems which could lead to this misbehavior?
Is there any log or something similar I could get more information from?</p>
|
<p>I can reproduce this. It seems to be a bug. </p>
<p>However, the problem goes away once I save to a different location (different from the executed .py file's path):</p>
<pre><code> save_path = saver.save(sess, 'data/testfile', global_step=i+1)
</code></pre>
|
python|tensorflow
| 1
|
375,474
| 44,732,839
|
open txt file using read_csv by pandas
|
<p>I am trying to process a txt file using pandas.<br>
However, I get the following error at read_csv. </p>
<blockquote>
<p>CParserError Traceback (most recent call
last) in ()
22 Col.append(elm)
23
---> 24 revised=pd.read_csv(Path+file,skiprows=Header+1,header=None,delim_whitespace=True)
25
26 TimeSeries.append(revised)</p>
<p>C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col,
usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters,
true_values, false_values, skipinitialspace, skiprows, skipfooter,
nrows, na_values, keep_default_na, na_filter, verbose,
skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col,
date_parser, dayfirst, iterator, chunksize, compression, thousands,
decimal, lineterminator, quotechar, quoting, escapechar, comment,
encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines,
skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints,
use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
560 skip_blank_lines=skip_blank_lines)
561
--> 562 return _read(filepath_or_buffer, kwds)
563
564 parser_f.<strong>name</strong> = name</p>
<p>C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
_read(filepath_or_buffer, kwds)
323 return parser
324
--> 325 return parser.read()
326
327 _parser_defaults = {</p>
<p>C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
read(self, nrows)
813 raise ValueError('skip_footer not supported for iteration')
814
--> 815 ret = self._engine.read(nrows)
816
817 if self.options.get('as_recarray'):</p>
<p>C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
read(self, nrows) 1312 def read(self, nrows=None): 1313<br>
try:
-> 1314 data = self._reader.read(nrows) 1315 except StopIteration: 1316 if self._first_chunk:</p>
<p>pandas\parser.pyx in pandas.parser.TextReader.read
(pandas\parser.c:8748)()</p>
<p>pandas\parser.pyx in pandas.parser.TextReader._read_low_memory
(pandas\parser.c:9003)()</p>
<p>pandas\parser.pyx in pandas.parser.TextReader._read_rows
(pandas\parser.c:9731)()</p>
<p>pandas\parser.pyx in pandas.parser.TextReader._tokenize_rows
(pandas\parser.c:9602)()</p>
<p>pandas\parser.pyx in pandas.parser.raise_parser_error
(pandas\parser.c:23325)()</p>
<p>CParserError: Error tokenizing data. C error: Expected 4 fields in
line 6, saw 8</p>
</blockquote>
<p>Does anyone know how I can fix this problem?<br>
My python script and an example txt file I want to process are shown below. </p>
<pre><code>Path='data/NanFung/OCTA_Tower/test/'
files=os.listdir(Path)
TimeSeries=[]
Cols=[]
for file in files:
new=open(Path+file)
Supplement=[]
Col=[]
data=[]
Header=0
#calculate how many rows should be skipped
for line in new:
if line.startswith('Timestamp'):
new1=line.split(" ")
new1[-1]=str(file)[:-4]
break
else:
Header += 1
#clean col name
for elm in new1:
if len(elm)>0:
Col.append(elm)
revised=pd.read_csv(Path+file,skiprows=Header+1,header=None,delim_whitespace=True)
TimeSeries.append(revised)
Cols.append(Col)
</code></pre>
<p>txt file</p>
<pre><code>history:/NIKL6215_ENC_1/CH$2d19$2d1$20$20CHW$20OUTLET$20TEMP
20-Oct-12 8:00 PM CT to ?
Timestamp Trend Flags Status Value (ºC)
------------------------- ----------- ------ ----------
20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ºC
21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ºC
</code></pre>
|
<p>It fails because the part of the file you're reading looks like this:</p>
<pre><code>Timestamp Trend Flags Status Value (ºC)
------------------------- ----------- ------ ----------
20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ºC
21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ºC
</code></pre>
<p>But there are no consistent delimiters here. <code>read_csv</code> does not understand how to read fixed-width formats like yours. You might consider using a delimited file, such as with tab characters between the columns.</p>
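<p>If you cannot change the file format, <code>pandas.read_fwf</code> is built for fixed-width layouts; a sketch along the lines of your loop (pandas will try to infer the column boundaries, which may need <code>colspecs=</code> or <code>widths=</code> tuned for your real files):</p>
<pre><code>revised = pd.read_fwf(Path + file, skiprows=Header + 2, header=None)
</code></pre>
<p>The extra skipped row accounts for the dashed separator line under the header.</p>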
|
python|pandas
| 1
|
375,475
| 44,699,092
|
Import NumPy gives me ImportError: DDL load failed: The specified procedure could not be found?
|
<p>My Environment: Win10 64 bits, Python 3.6 and I use pip install to install NumPy instead of Anaconda. NumPy version: 1.13.0</p>
<p>I have seen several people posted similar questions, but most of them are using Python 2.7. The closest solution I have seen so far is: <a href="https://github.com/ContinuumIO/anaconda-issues/issues/1508" rel="nofollow noreferrer">https://github.com/ContinuumIO/anaconda-issues/issues/1508</a> and <a href="https://github.com/numpy/numpy/issues/9272" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/9272</a>. But it seems they did not solve it in the end and the people who posted it are using Python 2.7. Therefore, I was wondering if someone can help me about this. My error log is below. Any help would be appreciated.</p>
<pre><code>C:\Users\Kevin>python
Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 07:18:10) [MSC v.1900 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "D:\Python3.6\lib\site-packages\numpy\core\__init__.py", line 16, in
<module>
from . import multiarray
ImportError: DLL load failed: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python3.6\lib\site-packages\numpy\__init__.py", line 142, in
<module>
from . import add_newdocs
File "D:\Python3.6\lib\site-packages\numpy\add_newdocs.py", line 13, in
<module>
from numpy.lib import add_newdoc
File "D:\Python3.6\lib\site-packages\numpy\lib\__init__.py", line 8, in
<module>
from .type_check import *
File "D:\Python3.6\lib\site-packages\numpy\lib\type_check.py", line 11, in
<module>
import numpy.core.numeric as _nx
File "D:\Python3.6\lib\site-packages\numpy\core\__init__.py", line 26, in
<module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: The specified procedure could not be
found.
</code></pre>
|
<p>You should do a clean install of numpy. Just don't go the traditional way; download the wheel file instead. You can get the wheel file from here: <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow noreferrer">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a> . Download this file - <code>numpy‑1.13.0+mkl‑cp36‑cp36m‑win_amd64.whl</code> - and install this wheel using pip. Check by importing numpy from a shell.</p>
|
python|python-3.x|numpy|pip
| 0
|
375,476
| 44,525,949
|
Pandas: Error while loading TSV file with JSON strings in one of the columns
|
<p>I am trying to load a tsv file which has only two columns:
<em>property_id</em> & <em>photo_urls</em></p>
<p>For each <em>property_id</em> the <em>photo_urls</em> column contains a string representation of an
array of json objects, where each json object represents one image (one URL).</p>
<p><a href="https://pastebin.com/n1kevgfy" rel="nofollow noreferrer">Here</a> (pastebin link) is a small sample of the tsv file which I am trying to load using Pandas.</p>
<pre><code>photos_df = pandas.read_csv('test.tsv')
</code></pre>
<p>This throws the error:</p>
<pre><code>ParserError: Error tokenizing data. C error: Expected 49 fields in line 4, saw 84
</code></pre>
<p>I am guessing this is due to two possible reasons:</p>
<ol>
<li><p>Different <em>property_id</em>s have different number of images/URLs/JSON objects</p></li>
<li><p>The JSON strings are malformed/buggy</p></li>
</ol>
<p>I am not able to figure out what is it exactly.</p>
<p>Using <code>read_csv</code> with parameter <code>error_bad_lines=False</code> is not an option here since I don't want to lose any data.</p>
<p>Sub-question: Even with above two cases why should read_csv throw an error when both the columns are indeed in string formats? How does it know what is wrong inside that string?</p>
|
<p>try</p>
<pre><code> import pandas as pd
pd.read_csv('pLurm1w1.txt',delim_whitespace=True)
</code></pre>
<p>returns</p>
<pre><code> property_id photo_urls
0 ff808081469fd6e20146a5af948000ea [{title":"Balcony","name":"IMG_20131006_120837...
1 ff8080814702d3d10147068359d200cd NaN
2 ff808081470c645401470fb03f5800a6 [{title":"Bedroom","name":"ff808081470c6454014...
3 ff808081470c6454014715eaa5960281 [{title":"Bedroom","name":"Screenshot_7.jpg","...
</code></pre>
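<p>Since the file is tab-separated, passing the tab separator explicitly should also work, and avoids splitting on any spaces inside the JSON strings (assuming the JSON itself contains no tab characters):</p>
<pre><code>photos_df = pandas.read_csv('test.tsv', sep='\t')
</code></pre>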
|
python|json|python-3.x|pandas|parsing
| 0
|
375,477
| 44,764,887
|
How to restore trained LinearClassifier from tensorflow high level API and make predictions
|
<p>I have trained a logistic regression model model using tensorflow's LinearClassifier() class, and set the model_dir parameter, which specifies the location where to save metagrahps of checkpoints during model training:</p>
<pre><code># Create temporary directory where metagraphs will evenually be saved
model_dir = tempfile.mkdtemp()
logistic_model = tf.contrib.learn.LinearClassifier(
feature_columns=feature_columns,
n_classes=num_labels, model_dir=model_dir)
</code></pre>
<p>I've been reading about restoring models from metagraphs, but have found nothing about how to do so for models created using the high level api. LinearClassifier() has a predict() function, but I can't find any documentation on how to run prediction using an instance of the model that has been restored via checkpoint metagraph. How would I go about doing this? Once the model is restored, my understanding is that I am working with a tf.Sess object, which lacks all of the built in functionality of the LinearClassifier class, like this:</p>
<pre><code>with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
new_saver.restore(sess, 'my-save-dir/my-model-10000')
# Run prediction algorithm...
</code></pre>
<p>How do I run the same prediction algorithm used by the high-level api to make predictions on a restored model? Is there a better way to approach this?</p>
<p>Thanks for your input.</p>
|
<p><code>LinearClassifier()</code> has a <code>model_dir</code> param; when it points to a directory that already contains a trained model, the estimator restores it.<br>
During training, you do: </p>
<pre><code>logistic_model = tf.contrib.learn.LinearClassifier(feature_columns=feature_columns, n_classes=num_labels, model_dir=model_dir)
logistic_model.fit(X_train, y_train, steps=10)
</code></pre>
<p>During inference, construct <code>LinearClassifier()</code> with the same <code>model_dir</code> and it will load the trained model from that path; skip <code>fit()</code> and call <code>predict()</code> directly: </p>
<pre><code>logistic_model = tf.contrib.learn.LinearClassifier(feature_columns=feature_columns, n_classes=num_labels, model_dir=model_dir)
y_pred = logistic_model.predict(X_test)
</code></pre>
|
python|tensorflow|logistic-regression
| 1
|
375,478
| 44,708,911
|
Structured 2D Numpy Array: setting column and row names
|
<p>I'm trying to find a nice way to take a 2d numpy array and attach column and row names as a structured array. For example:</p>
<pre><code>import numpy as np
column_names = ['a', 'b', 'c']
row_names = ['1', '2', '3']
matrix = np.reshape((1, 2, 3, 4, 5, 6, 7, 8, 9), (3, 3))
# TODO: insert magic here
matrix['3']['a'] # 7
</code></pre>
<p>I've been able to use set the columns like this: </p>
<pre><code>matrix.dtype = [(n, matrix.dtype) for n in column_names]
</code></pre>
<p>This lets me do <code>matrix[2]['a']</code> but now I want to rename the rows so I can do <code>matrix['3']['a']</code>.</p>
|
<p>As far as I know it's not possible to "name" the rows with pure structured NumPy arrays. </p>
<p>But if you have <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged 'pandas'" rel="tag">pandas</a> it's possible to provide an "index" (which essentially acts like a "row name"):</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> column_names = ['a', 'b', 'c']
>>> row_names = ['1', '2', '3']
>>> matrix = np.reshape((1, 2, 3, 4, 5, 6, 7, 8, 9), (3, 3))
>>> df = pd.DataFrame(matrix, columns=column_names, index=row_names)
>>> df
a b c
1 1 2 3
2 4 5 6
3 7 8 9
>>> df['a']['3'] # first "column" then "row"
7
>>> df.loc['3', 'a'] # another way to index "row" and "column"
7
</code></pre>
|
python|arrays|numpy|structured-array
| 19
|
375,479
| 44,631,279
|
assign in pandas pipeline
|
<p>Say, I have the following DataFrame with raw input data, and want to process it using a chain of pandas functions ("<em>pipeline</em>"). In particular, I want to rename and drop columns and add an additional column based on another. </p>
<pre><code> Gene stable ID Gene name Gene type miRBase accession miRBase ID
0 ENSG00000274494 MIR6832 miRNA MI0022677 hsa-mir-6832
1 ENSG00000283386 MIR4659B miRNA MI0017291 hsa-mir-4659b
2 ENSG00000221456 MIR1202 miRNA MI0006334 hsa-mir-1202
3 ENSG00000199102 MIR302C miRNA MI0000773 hsa-mir-302c
</code></pre>
<p>At the moment I do the following (which works): </p>
<pre><code>tmp_df = df.\
drop("Gene type", axis=1).\
rename(columns = {
"Gene stable ID": "ENSG",
"Gene name": "gene_name",
"miRBase accession": "MI",
"miRBase ID": "mirna_name"
})
result = tmp_df.assign(species = tmp_df.mirna_name.str[:3])
</code></pre>
<p>result:</p>
<pre><code> ENSG gene_name MI mirna_name species
0 ENSG00000274494 MIR6832 MI0022677 hsa-mir-6832 hsa
1 ENSG00000283386 MIR4659B MI0017291 hsa-mir-4659b hsa
2 ENSG00000221456 MIR1202 MI0006334 hsa-mir-1202 hsa
3 ENSG00000199102 MIR302C MI0000773 hsa-mir-302c hsa
</code></pre>
<p><strong>Is it possible to put the <code>assign</code> command directly into the 'pipeline'?
It feels cumbersome having to assign an additional temporary variable. I have no idea how I should reference the corresponding renamed column ('mirna_name') in that case.</strong></p>
|
<p>You can use pipe:</p>
<pre><code>tmp_df = (
df.drop("Gene type", axis=1)
.rename(columns = {"Gene stable ID": "ENSG",
"Gene name": "gene_name",
"miRBase accession": "MI",
"miRBase ID": "mirna_name"}
)
.pipe(lambda x: x.assign(species = x.mirna_name.str[:3]))
)
tmp_df
Out[365]:
ENSG gene_name MI mirna_name species
0 ENSG00000274494 MIR6832 MI0022677 hsa-mir-6832 hsa
1 ENSG00000283386 MIR4659B MI0017291 hsa-mir-4659b hsa
2 ENSG00000221456 MIR1202 MI0006334 hsa-mir-1202 hsa
3 ENSG00000199102 MIR302C MI0000773 hsa-mir-302c hsa
</code></pre>
<p>As @Tom pointed out, this can also be done without using pipe in this case:</p>
<pre><code>(
df.drop("Gene type", axis=1).
.rename(columns = {"Gene stable ID": "ENSG",
"Gene name": "gene_name",
"miRBase accession": "MI",
"miRBase ID": "mirna_name"}
)
.assign(species = lambda x: x.mirna_name.str[:3])
)
</code></pre>
|
python|pandas
| 14
|
375,480
| 44,530,047
|
Why is my numeric data being treated as an object?
|
<p>A DataFrame column in Pandas is being treated as an object when the data is actually numeric. How do I fix this issue? I'm assuming this is happening because I have certain values within my columns that are not numeric, which I am trying to convert to <code>NaN</code>. When I try to run the <code>to_numeric</code> function, it returns everything as NaN, which is not what I am expecting.</p>
<p>Imagine my data looks something like</p>
<pre><code>A B C D
X Y Z 53
X Y Z 65
X Y Z 22
X Y Z 6/5/96
X Y Z 45
X Y Z 97
</code></pre>
<p>I am trying to make everything in column D stay, while making the <code>6/5/96</code> change to <code>NaN</code>, but everything I have tried results in <code>NaN</code> for all the values in column D. When I look up the <code>dtypes</code> it lists column D as an object, but they are definitely numerical values. </p>
<p>How do I fix my DataFrame to look like this, without altering the actual numerical values?</p>
<pre><code>A B C D
X Y Z 53
X Y Z 65
X Y Z 22
X Y Z NaN
X Y Z 45
X Y Z 97
</code></pre>
<hr>
<p>I am using Tabula to convert a PDF to a CSV.</p>
<pre><code>df = pd.read_csv('TEST.csv')
df['D'] = pd.to_numeric(df['D'], errors='coerce')
</code></pre>
<p>Do you think during the Tabula PDF to CSV conversion, that my data is losing its data type? </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow noreferrer"><code>to_numeric</code></a>, but all <code>int</code> values are cast to <code>float</code>s:</p>
<pre><code>df['D'] = pd.to_numeric(df['D'], errors='coerce')
</code></pre>
<p>But if mixed values - numeric with strings:</p>
<pre><code>df['D'] = pd.to_numeric(df['D'].astype(str), errors='coerce')
</code></pre>
<p>Or if trailing whitespaces:</p>
<pre><code>df['D'] = pd.to_numeric(df['D'].astype(str).str.strip(), errors='coerce')
</code></pre>
<p>EDIT:</p>
<pre><code>df['D'] = pd.to_numeric(df['D'].str.replace(',',''), errors='coerce')
</code></pre>
<p>Or:</p>
<pre><code>df['D'] = pd.to_numeric(df['D'].replace(',','', regex=True), errors='coerce')
</code></pre>
|
python|pandas|dataframe
| 3
|
375,481
| 44,635,626
|
Rename result columns from Pandas aggregation ("FutureWarning: using a dict with renaming is deprecated")
|
<p>I'm trying to do some aggregations on a pandas data frame. Here is a sample code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"User": ["user1", "user2", "user2", "user3", "user2", "user1"],
"Amount": [10.0, 5.0, 8.0, 10.5, 7.5, 8.0]})
df.groupby(["User"]).agg({"Amount": {"Sum": "sum", "Count": "count"}})
Out[1]:
Amount
Sum Count
User
user1 18.0 2
user2 20.5 3
user3 10.5 1
</code></pre>
<p>Which generates the following warning:</p>
<blockquote>
<p>FutureWarning: using a dict with renaming is deprecated and will be
removed in a future version return super(DataFrameGroupBy,
self).aggregate(arg, *args, **kwargs)</p>
</blockquote>
<p>How can I avoid this?</p>
|
<h1>Use groupby <code>apply</code> and return a Series to rename columns</h1>
<p>Use the groupby <code>apply</code> method to perform an aggregation that </p>
<ul>
<li>Renames the columns</li>
<li>Allows for spaces in the names</li>
<li>Allows you to order the returned columns in any way you choose</li>
<li>Allows for interactions between columns</li>
<li>Returns a single level index and NOT a MultiIndex</li>
</ul>
<p>To do this:</p>
<ul>
<li>create a custom function that you pass to <code>apply</code></li>
<li>This custom function is passed each group as a DataFrame</li>
<li>Return a Series</li>
<li>The index of the Series will be the new columns</li>
</ul>
<p><strong>Create fake data</strong></p>
<pre><code>df = pd.DataFrame({"User": ["user1", "user2", "user2", "user3", "user2", "user1", "user3"],
"Amount": [10.0, 5.0, 8.0, 10.5, 7.5, 8.0, 9],
'Score': [9, 1, 8, 7, 7, 6, 9]})
</code></pre>
<p><a href="https://i.stack.imgur.com/lfzQE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lfzQE.png" alt="enter image description here"></a></p>
<p><strong>create custom function that returns a Series</strong><br>
The variable <code>x</code> inside of <code>my_agg</code> is a DataFrame</p>
<pre><code>def my_agg(x):
names = {
'Amount mean': x['Amount'].mean(),
'Amount std': x['Amount'].std(),
'Amount range': x['Amount'].max() - x['Amount'].min(),
'Score Max': x['Score'].max(),
'Score Sum': x['Score'].sum(),
'Amount Score Sum': (x['Amount'] * x['Score']).sum()}
return pd.Series(names, index=['Amount range', 'Amount std', 'Amount mean',
'Score Sum', 'Score Max', 'Amount Score Sum'])
</code></pre>
<p><strong>Pass this custom function to the groupby <code>apply</code> method</strong></p>
<pre><code>df.groupby('User').apply(my_agg)
</code></pre>
<p><a href="https://i.stack.imgur.com/WcDR3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WcDR3.png" alt="enter image description here"></a></p>
<p>The big downside is that this function will be much slower than <code>agg</code> for the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#cython-optimized-aggregation-functions" rel="noreferrer">cythonized aggregations</a></p>
<h1>Using a dictionary with groupby <code>agg</code> method</h1>
<p>Using a dictionary of dictionaries was removed because of its complexity and somewhat ambiguous nature. There is an <a href="https://github.com/pandas-dev/pandas/issues/18366#issuecomment-349090402" rel="noreferrer">ongoing discussion</a> on github about how to improve this functionality in the future. Here, you can directly access the aggregating column after the groupby call. Simply pass a list of all the aggregating functions you wish to apply.</p>
<pre><code>df.groupby('User')['Amount'].agg(['sum', 'count'])
</code></pre>
<p>Output</p>
<pre><code> sum count
User
user1 18.0 2
user2 20.5 3
user3 10.5 1
</code></pre>
<p>It is still possible to use a dictionary to explicitly denote different aggregations for different columns, like here if there was another numeric column named <code>Other</code>.</p>
<pre><code>df = pd.DataFrame({"User": ["user1", "user2", "user2", "user3", "user2", "user1"],
"Amount": [10.0, 5.0, 8.0, 10.5, 7.5, 8.0],
'Other': [1,2,3,4,5,6]})
df.groupby('User').agg({'Amount' : ['sum', 'count'], 'Other':['max', 'std']})
</code></pre>
<p>Output</p>
<pre><code> Amount Other
sum count max std
User
user1 18.0 2 6 3.535534
user2 20.5 3 5 1.527525
user3 10.5 1 4 NaN
</code></pre>
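<p>On pandas 0.25 and newer, "named aggregation" gives renamed, single-level result columns directly, so it covers the original use case without a MultiIndex:</p>
<pre><code>df.groupby('User').agg(Sum=('Amount', 'sum'), Count=('Amount', 'count'))
</code></pre>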
|
python|pandas|aggregate|rename
| 91
|
375,482
| 44,411,506
|
Pandas, sorting a dataframe in a useful way to find the difference between times. Why are key and value errors appearing?
|
<p>I have a pandas DataFrame containing 5 columns. </p>
<pre><code>['date', 'sensorId', 'readerId', 'rssi']
df_json['time'] = df_json.date.dt.time
</code></pre>
<p>I am aiming to find people who have entered a store (rssi > 380). However this would be much more accurate if I could also check every record a sensorId appears in and whether the time in that record is within 5 seconds of the current record.</p>
<p>Data from the dataFrame: (df_json)</p>
<pre><code> date sensorId readerId rssi
0 2017-03-17 09:15:59.453 4000068 76 352
0 2017-03-17 09:20:17.708 4000068 56 374
1 2017-03-17 09:20:42.561 4000068 60 392
0 2017-03-17 09:44:21.728 4000514 76 352
0 2017-03-17 10:32:45.227 4000461 76 332
0 2017-03-17 12:47:06.639 4000046 43 364
0 2017-03-17 12:49:34.438 4000046 62 423
0 2017-03-17 12:52:28.430 4000072 62 430
1 2017-03-17 12:52:32.593 4000072 62 394
0 2017-03-17 12:53:17.708 4000917 76 335
0 2017-03-17 12:54:24.848 4000072 25 402
1 2017-03-17 12:54:35.738 4000072 20 373
</code></pre>
<p>I would like to use jezrael's answer of df['date'].diff(). However I cannot successfully use this, I receive many different errors. The ['date'] column is of dtype datetime64[ns].</p>
<p>How the data is stored above is not useful, for the .diff() to be of any use the data must be stored as below (dfEntered):</p>
<p>Sample Data: dfEntered</p>
<pre><code> date sensorId readerId time rssi
2017-03-17 4000046 43 12:47:06.639000 364
62 12:49:34.438000 423
4000068 56 09:20:17.708000 374
60 09:20:42.561000 392
76 09:15:59.453000 352
4000072 20 12:54:35.738000 373
12:54:42.673000 374
25 12:54:24.848000 402
12:54:39.723000 406
62 12:52:28.430000 430
12:52:32.593000 394
4000236 18 13:28:14.834000 411
</code></pre>
<p>I am planning on replacing 'time' with 'date'. Time is of dtype object and I cannot seem to cast it or diff() it. 'date' will be just as useful.</p>
<p>The only way (I have found) of having df_json appear as dfEntered is with:
dfEntered = df_json.groupby(by=[df_json.date.dt.time, 'sensorId', 'readerId', 'date'])</p>
<p>If I do: </p>
<pre><code>dfEntered = df_json.groupby(by=[df_json.date.dt.time, 'sensorId', 'readerId'])['date'].diff()
</code></pre>
<p>results in: </p>
<pre><code>File "processData.py", line 61, in <module>
dfEntered = df_json.groupby(by=[df_json.date.dt.date, 'sensorId', 'readerId', 'rssi'])['date'].diff()
File "<string>", line 17, in diff
File "C:\Users\danie\Anaconda2\lib\site-packages\pandas\core\groupby.py", line 614, in wrapper
raise ValueError
ValueError
</code></pre>
<p>If I do: </p>
<pre><code>dfEntered = df_json.groupby(by=[df_json.date.dt.date, 'sensorId', 'readerId', 'rssi'])['time'].count()
print(dfEntered['date'])
</code></pre>
<p>Results in:</p>
<pre><code>File "processData.py", line 65, in <module>
print(dfEntered['date'])
File "C:\Users\danie\Anaconda2\lib\site-packages\pandas\core\series.py", line 601, in __getitem__
result = self.index.get_value(self, key)
File "C:\Users\danie\Anaconda2\lib\site-packages\pandas\core\indexes\multi.py", line 821, in get_value
raise e1
KeyError: 'date'
</code></pre>
<p>I applied a .count() to the groupby just so that I can output it. I had previously tried a .agg({'date':'diff'}) which results in the ValueError, but the dtype is datetime64[ns] (at least in the original df_json; I cannot view the dtype of dfEntered['date']).</p>
<p>If the above would work I would like to have a df of [df_json.date.dt.date, 'sensorId', 'readerId', 'mask'] mask being true if they entered a store.</p>
<p>I then have the below df (contains sensorIds that received a text)</p>
<pre><code> sensor_id sms_status date_report rssi readerId
0 5990100 SUCCESS 2017-05-03 13:41:28.412800 500 10
1 5990001 SUCCESS 2017-05-03 13:41:28.412800 500 11
2 5990100 SUCCESS 2017-05-03 13:41:30.413000 500 12
3 5990001 SUCCESS 2017-05-03 13:41:31.413100 500 13
4 5990100 SUCCESS 2017-05-03 13:41:34.413400 500 14
5 5990001 SUCCESS 2017-05-03 13:41:35.413500 500 52
6 5990100 SUCCESS 2017-05-03 13:41:38.413800 500 60
7 5990001 SUCCESS 2017-05-03 13:41:39.413900 500 61
</code></pre>
<p>I would then like to merge the two together on day, sensorId, readerId.
I am hoping that would result in a df that could appear as [df_json.date.dt.date, 'sensorId', 'readerId', 'mask'] and therefore I could say that a sensorId with a mask of true is a conversion. A conversion being that sensorId received a text that day and also entered the store that day.</p>
<p>I'm beginning to get wary that my end aim isn't even achievable, as I simply do not understand how pandas works yet :D (damn errors)</p>
<p><strong>UPDATE</strong></p>
<pre><code>dfEntered = dfEntered.reset_index()
</code></pre>
<p>This is allowing me to access the date and apply a diff.</p>
<p>I don't quite understand the theory of how this problem occurred, and why reset_index() fixed this.</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with mask created with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a>:</p>
<pre><code>df = pd.DataFrame({'rssi': [500,530,1020,1201,1231,10],
'time': pd.to_datetime(['2017-01-01 14:01:08','2017-01-01 14:01:14',
'2017-01-01 14:01:17', '2017-01-01 14:01:27',
'2017-01-01 14:01:29', '2017-01-01 14:01:30'])})
print (df)
rssi time
0 500 2017-01-01 14:01:08
1 530 2017-01-01 14:01:14
2 1020 2017-01-01 14:01:17
3 1201 2017-01-01 14:01:27
4 1231 2017-01-01 14:01:29
5 10 2017-01-01 14:01:30
</code></pre>
<hr>
<pre><code>print (df['time'].diff())
0 NaT
1 00:00:06
2 00:00:03
3 00:00:10
4 00:00:02
5 00:00:01
Name: time, dtype: timedelta64[ns]
mask = (df['time'].diff() >'00:00:05') & (df['rssi'] > 380)
print (mask)
0 False
1 True
2 False
3 True
4 False
5 False
dtype: bool
df1 = df[mask]
print (df1)
rssi time
1 530 2017-01-01 14:01:14
3 1201 2017-01-01 14:01:27
</code></pre>
|
python|pandas|datetime|multiple-columns|rows
| 0
|
375,483
| 60,895,213
|
Subtract multiple columns between two dataframes with different shapes based on multiple columns
|
<p>I'm looking at the following three datasets from JHU</p>
<p><a href="https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv" rel="nofollow noreferrer">https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv</a></p>
<p><a href="https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv" rel="nofollow noreferrer">https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv</a></p>
<p><a href="https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv" rel="nofollow noreferrer">https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv</a></p>
<p>Which are on the form </p>
<pre><code> 'Province/State 'Country/Region 'Lat' 'Long' '1/22/20' '1/23/20' ...
NaN Italy x y 0 0
</code></pre>
<p>I want to calculate the number of active cases per province, country and day, based on the formula <code>active = confirmed - (recovered+deaths)</code></p>
<p>Before the datasets had the same shape, so I could do the following</p>
<pre><code>df_active = df_confirmed.copy()
df_active.loc[4:] = df_confirmed.loc[4:]-(df_recovered.loc[4:]+df_deaths.loc[4:])
</code></pre>
<p>Now they do not contain data on the same countries, and do not always have the same amount of date columns. </p>
<p>So I need to do the following</p>
<p>1) Determine what date columns all 3 DF have in common,</p>
<p>2) Where the province and country columns match, do <code>active = confirmed - (recovered+deaths)</code></p>
<p>For point 1) I can do the following</p>
<pre><code>## append all shape[1] to list
df_shape_list.append(df_confirmed.shape[1])
...
min_common_columns = min(df_shape_list)
</code></pre>
<p>So I need to subtract columns <code>4:min_common_columns</code>, but how do I do that where province and country column match on all 3 DF's?</p>
|
<p>Consider <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>melt</code></a> to transform their wide data into long format then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> on location and date. Then run needed formula:</p>
<pre><code>from functools import reduce
import pandas as pd
df_confirmed = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/"
"csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
df_deaths = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/"
"csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
df_recovered = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/"
"csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
# MELT EACH DF IN LIST COMPREHENSION
df_list = [df.melt(id_vars = ['Province/State', 'Country/Region', 'Lat', 'Long'],
var_name = 'Date', value_name = val)
for df, val in zip([df_confirmed, df_deaths, df_recovered],
['confirmed', 'deaths', 'recovered'])]
# CHAIN MERGE
df_long = reduce(lambda x,y: pd.merge(x, y, on=['Province/State', 'Country/Region', 'Lat', 'Long', 'Date']),
df_list)
# SIMPLE ARITHMETIC
df_long['active'] = df_long['confirmed'] - (df_long['recovered'] + df_long['deaths'])
</code></pre>
<p><strong>Output</strong> <em>(sorted by active descending)</em></p>
<pre><code>df_long.sort_values(['active'], ascending=False).head(10)
# Province/State Country/Region Lat Long Date confirmed deaths recovered active
# 15229 NaN US 37.0902 -95.7129 3/27/20 101657 1581 869 99207
# 14998 NaN US 37.0902 -95.7129 3/26/20 83836 1209 681 81946
# 15141 NaN Italy 43.0000 12.0000 3/27/20 86498 9134 10950 66414
# 14767 NaN US 37.0902 -95.7129 3/25/20 65778 942 361 64475
# 14910 NaN Italy 43.0000 12.0000 3/26/20 80589 8215 10361 62013
# 14679 NaN Italy 43.0000 12.0000 3/25/20 74386 7503 9362 57521
# 14448 NaN Italy 43.0000 12.0000 3/24/20 69176 6820 8326 54030
# 14536 NaN US 37.0902 -95.7129 3/24/20 53740 706 348 52686
# 15205 NaN Spain 40.0000 -4.0000 3/27/20 65719 5138 9357 51224
# 14217 NaN Italy 43.0000 12.0000 3/23/20 63927 6077 7024 50826
</code></pre>
|
python|pandas
| 1
|
375,484
| 60,980,841
|
Zip variable becomes empty after running subsequent code
|
<p>I have a dataframe (<code>data</code>) which contains a few dates (<code>loss_date</code>, <code>report_date</code>, <code>good_date</code>), and I'm trying to count certain rows of the dataframe. The following code works perfectly the first time I run it:</p>
<pre><code># Set up bins
BUCKET_SIZE = 30
min_date = np.min(data.loss_date)
max_date = np.max(data.report_date)
num_days = (max_date - min_date).days
num_buckets = int(np.ceil(num_days/BUCKET_SIZE))
bounds = [min_date + timedelta(days = BUCKET_SIZE*i)
for i in range(0, num_buckets+1)
]
starts = bounds[0:len(bounds)-1]
ends = bounds[1:len(bounds)]
buckets = zip(starts, ends)
# Get data subset
l_data = data[data.good_date.notna()]
before = l_data[l_data.loss_date < l_data.good_date]
after = l_data[l_data.loss_date >= l_data.good_date]
# Define count function
def count_loss(df, start, end):
is_start = df.loss_date >= start
is_end = df.loss_date < end
count = len(df[is_start & is_end].index)
return(count)
# FIRST_TIME
count_before = [count_loss(before, s, e) for s,e in buckets]
</code></pre>
<p>But now when I run it again, e.g. </p>
<pre><code># CODE_AGAIN
count_after = [count_loss(after, s, e) for s,e in buckets]
</code></pre>
<p>I get the list <code>[]</code> as output. However if I run the following:</p>
<pre><code># CODE_AGAIN (but redefining buckets)
buckets = zip(starts, ends)
count_after = [count_loss(after, s, e) for s,e in buckets]
</code></pre>
<p>I get a non-empty list. After running <code>FIRST_TIME</code>, the buckets zip becomes empty - and repeating <code>buckets = zip(starts, ends)</code> fixes the problem; i.e. <code>CODE_AGAIN</code> works as it should. I can't understand why!</p>
<p>Many thanks. </p>
|
<p>In short, the problem is your title concept: "zip variable". <code>zip()</code> does not return a static list; it returns a one-shot iterator.</p>
<pre><code>buckets = zip(starts, ends)
</code></pre>
<p><code>buckets</code> is an iterator. Once you've iterated through the underlying structure it is exhausted; any further iteration produces no items, which is why the second list comprehension comes back as <code>[]</code>.</p>
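<p>A quick illustration of that behaviour:</p>
<pre><code>>>> pairs = zip([1, 2], [3, 4])
>>> [a + b for a, b in pairs]
[4, 6]
>>> [a + b for a, b in pairs]   # same iterator, now exhausted
[]
</code></pre>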
<p>If you want to iterate multiple times, either re-create the <code>zip</code> expression on each use, or store it as a <code>list</code>:</p>
<pre><code>buckets = list(zip(starts, ends))
</code></pre>
|
python|pandas|list|variables|zip
| 2
|
375,485
| 61,172,737
|
How to get initial row's indexes from df.groupby?
|
<p>Actually, I have df</p>
<p><code>print(df)</code>:</p>
<pre><code> date value other_columns
0 1995 5
1 1995 13
2 1995 478
</code></pre>
<p>and so on...</p>
<p>After grouping them by date <code>df1 = df.groupby(by='date')['value'].min()</code>
I wonder how to get initial row's index. In this case, I want to get integer 0, because there was the lowest value in 1995. Thank you in advance.</p>
|
<p>You have to create a column holding the original index value before doing the groupby:</p>
<pre><code>df['initialIndex'] = df.index.values
#do the groupby
</code></pre>
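<p>If the goal is specifically the original index of the row with the lowest value per date, <code>idxmin</code> returns that row label directly; a small sketch on the question's frame:</p>
<pre><code>idx = df.groupby(by='date')['value'].idxmin()
print(idx)
# date
# 1995    0
# Name: value, dtype: int64
</code></pre>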
|
python|pandas
| 1
|
375,486
| 60,984,045
|
trouble in applying a function to selected columns
|
<p>I am new to Python programming. What I want to achieve is basically to fill all the NA values in the object-type columns with their modes.</p>
<pre><code>object_columns=['A1','A4','A5','A6','A7']#these are object types columns
#wrote this function
def find_mode_fill(x):
return data[x].fillna(data[x].mode()[0])
</code></pre>
<p>I tried doing it in a few ways which did not turn out to be correct.</p>
<p>1-</p>
<pre><code>data=data.apply(lambda x: data[x].fillna(data[x].mode()[0]) if x in object_columns else x)
</code></pre>
<p>2-</p>
<pre><code>data[object_columns]=data[object_columns].apply(find_mode_fill)
</code></pre>
<p>But when I use my function and apply it one column at a time, it works:</p>
<pre><code>data['A1'] = find_mode_fill('A1')
data['A2'] = find_mode_fill('A2')
# ... and so on
</code></pre>
|
<p>I don't think <code>apply</code> is the right tool here: it passes each column to your function as a Series, while <code>find_mode_fill</code> expects a column <em>name</em>, which is why both attempts fail.
Try a plain loop instead (assigning with <code>fillna</code> avoids the chained <code>df[col][...]</code> indexing, so it reliably writes back to the frame):</p>
<pre><code>for col in object_columns:
    mode_value = data[col].mode()[0]
    data[col] = data[col].fillna(mode_value)
</code></pre>
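<p>Equivalently, as a one-liner: <code>DataFrame.mode()</code> returns a frame whose first row holds each column's mode, and <code>fillna</code> with that row fills each column with its own mode:</p>
<pre><code>data[object_columns] = data[object_columns].fillna(data[object_columns].mode().iloc[0])
</code></pre>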
|
python|pandas
| 0
|
375,487
| 60,893,208
|
Distributed training over local gpu and colab gpu
|
<p>I want to fine tune ALBERT.</p>
<p>I see one can distribute neural net training over multiple gpus using tensorflow: <a href="https://www.tensorflow.org/guide/distributed_training" rel="nofollow noreferrer">https://www.tensorflow.org/guide/distributed_training</a></p>
<p>I was wondering if it's possible to distribute fine-tuning across both my laptop's gpu and a colab gpu?</p>
|
<p>I don't think that's possible in practice. Distributed GPU training needs a fast, direct interconnect between the devices: NVLink (or at least PCIe) within one machine, or a network link between worker nodes that can actually reach each other. There is no such link between your laptop's GPU and a Colab GPU, and a Colab instance is not reachable as a training worker from your machine in any practical way. This is a good read: <a href="https://lambdalabs.com/blog/introduction-multi-gpu-multi-node-distributed-training-nccl-2-0/" rel="nofollow noreferrer">https://lambdalabs.com/blog/introduction-multi-gpu-multi-node-distributed-training-nccl-2-0/</a></p>
|
python|tensorflow|gpu|google-colaboratory|distributed-training
| 1
|
375,488
| 61,095,560
|
How to select subsequent numpy arrays handling potential np.nan values
|
<p>I have a Series like this:</p>
<pre><code>s = pd.Series({10: np.array([[0.72260683, 0.27739317, 0. ],
[0.7187053 , 0.2812947 , 0. ],
[0.71435467, 0.28564533, 1. ],
[0.3268072 , 0.6731928 , 0. ],
[0.31941951, 0.68058049, 1. ],
[0.31260015, 0.68739985, 0. ]]),
20: np.array([[0.7022099 , 0.2977901 , 0. ],
[0.6983866 , 0.3016134 , 0. ],
[0.69411673, 0.30588327, 1. ],
[0.33857735, 0.66142265, 0. ],
[0.33244109, 0.66755891, 1. ],
[0.32675582, 0.67324418, 0. ]]),
20: np.array([[0.68811957, 0.34188043, 0. ],
[0.68425783, 0.31574217, 0. ],
[0.67994496, 0.32005504, 1. ],
[0.34872593, 0.66127407, 1. ],
[0.34276171, 0.65723829, 1. ],
[0.33722803, 0.66277197, 0. ]]),
38: np.array([[0.68811957, 0.31188043, 0. ],
[0.68425783, 0.31574217, 0. ],
[0.67994496, 0.32005504, 1. ],
[0.34872593, 0.65127407, 0. ],
[0.34276171, 0.65723829, 1. ],
[0.33722803, 0.66277197, 0. ]]),
np.nan: np.nan}
)
</code></pre>
<p>I want to subset it with <code>np.array([1, 4, 1, 5])</code> or <code>np.array([1, 4, 1, np.nan])</code> returning <code>np.nan</code> no matter what the value is on the last element of indices array. How can I accomplish that?</p>
<p>Please note that I can't simply remove last element of a Series.</p>
|
<p>You can modify previous <a href="https://stackoverflow.com/a/61042803/2901002">answer</a> with remove missing values of <code>Series</code> and last add them by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> (only necessary unique index of <code>Series</code>):</p>
<pre><code>#a = np.array([1, 4, 1, 5])
a = np.array([1, 4, 1, np.nan])
mask = s.notna()
b = np.array(s[mask].tolist())[np.arange(mask.sum()), a[mask].astype(int), 2]
print (b)
[0. 1. 0.]
c = pd.Series(b, index=s[mask].index).reindex(s.index)
print (c)
10.0 0.0
20.0 1.0
38.0 0.0
NaN NaN
dtype: float64
</code></pre>
<p>EDIT: If not unique values in index is necessary create unique MultiIndex with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>:</p>
<pre><code>s = pd.Series({10: np.array([[0.72260683, 0.27739317, 0. ],
[0.7187053 , 0.2812947 , 0. ],
[0.71435467, 0.28564533, 1. ],
[0.3268072 , 0.6731928 , 0. ],
[0.31941951, 0.68058049, 1. ],
[0.31260015, 0.68739985, 0. ]]),
20: np.array([[0.7022099 , 0.2977901 , 0. ],
[0.6983866 , 0.3016134 , 0. ],
[0.69411673, 0.30588327, 1. ],
[0.33857735, 0.66142265, 0. ],
[0.33244109, 0.66755891, 1. ],
[0.32675582, 0.67324418, 0. ]]),
23: np.array([[0.68811957, 0.34188043, 0. ],
[0.68425783, 0.31574217, 0. ],
[0.67994496, 0.32005504, 1. ],
[0.34872593, 0.66127407, 1. ],
[0.34276171, 0.65723829, 1. ],
[0.33722803, 0.66277197, 0. ]]),
38: np.array([[0.68811957, 0.31188043, 0. ],
[0.68425783, 0.31574217, 0. ],
[0.67994496, 0.32005504, 1. ],
[0.34872593, 0.65127407, 0. ],
[0.34276171, 0.65723829, 1. ],
[0.33722803, 0.66277197, 0. ]]),
np.nan: np.nan}
).rename({23:20})
print (s)
10.0 [[0.72260683, 0.27739317, 0.0], [0.7187053, 0....
20.0 [[0.7022099, 0.2977901, 0.0], [0.6983866, 0.30...
20.0 [[0.68811957, 0.34188043, 0.0], [0.68425783, 0...
38.0 [[0.68811957, 0.31188043, 0.0], [0.68425783, 0...
NaN NaN
dtype: object
</code></pre>
<hr>
<pre><code>a = np.array([1, 4, 1, 2, np.nan])
s = s.to_frame('a').set_index(s.groupby(s.index).cumcount(), append=True)['a']
print (s)
10.0 0 [[0.72260683, 0.27739317, 0.0], [0.7187053, 0....
20.0 0 [[0.7022099, 0.2977901, 0.0], [0.6983866, 0.30...
1 [[0.68811957, 0.34188043, 0.0], [0.68425783, 0...
38.0 0 [[0.68811957, 0.31188043, 0.0], [0.68425783, 0...
NaN 0 NaN
Name: a, dtype: object
</code></pre>
<hr>
<pre><code>mask = s.notna()
b = np.array(s[mask].tolist())[np.arange(mask.sum()), a[mask].astype(int), 2]
print (b)
[0. 1. 0. 1.]
c = pd.Series(b, index=s[mask].index).reindex(s.index)
print (c)
10.0 0 0.0
20.0 0 1.0
1 0.0
38.0 0 1.0
NaN 0 NaN
dtype: float64
</code></pre>
<p>And in last step remove helper level of <code>MultiIndex</code>:</p>
<pre><code>c = c.reset_index(level=-1, drop=True)
print (c)
10.0 0.0
20.0 1.0
20.0 0.0
38.0 1.0
NaN NaN
dtype: float64
</code></pre>
|
python|arrays|pandas|numpy
| 1
|
375,489
| 61,034,455
|
TensorFlow Federated: How can I write an Input Spec for a model with more than one input
|
<p>I'm trying to make an image captioning model using the federated learning library provided by tensorflow, but I'm stuck at this error </p>
<p><code>Input 0 of layer dense is incompatible with the layer: : expected min_ndim=2, found ndim=1.</code></p>
<p>this is my input_spec: </p>
<pre><code>input_spec=collections.OrderedDict(x=(tf.TensorSpec(shape=(2048,), dtype=tf.float32), tf.TensorSpec(shape=(34,), dtype=tf.int32)), y=tf.TensorSpec(shape=(None), dtype=tf.int32))
</code></pre>
<p>The model takes image features as the first input and a list of vocabulary as a second input, but I can't express this in the input_spec variable. I tried expressing it as a list of lists but it still didn't work. What can I try next?</p>
|
<p>Great question! It looks to me like this error is coming out of TensorFlow proper--indicating that you probably have the correct nested structure, but the leaves may be off. Your input spec looks like it "should work" from TFF's perspective, so it seems it is probably slightly mismatched with the data you have.</p>
<p>The first thing I would try--if you have an example <code>tf.data.Dataset</code> which will be passed in to your client computation, you can simply <em>read <code>input_spec</code> directly off this dataset as the <code>element_spec</code> attribute</em>. This would look something like:</p>
<pre><code># ds = example dataset
input_spec = ds.element_spec
</code></pre>
<p>This is the easiest path. If you have something like "lists of lists of numpy arrays", there is still a way for you to pull this information off the data itself--the following code snippet should get you there:</p>
<pre><code># data = list of list of numpy arrays
input_spec = tf.nest.map_structure(lambda x: tf.TensorSpec(x.shape, x.dtype), data)
</code></pre>
<p>Finally, if you have a list of lists of <code>tf.Tensors</code>, TensorFlow provides a similar function:</p>
<pre><code># tensor_structure = list of lists of tensors
tf.nest.map_structure(tf.TensorSpec.from_tensor, tensor_structure)
</code></pre>
<p>In short, I would recommend <em>not</em> specifying <code>input_spec</code> by hand, but rather letting the data tell you what its input spec should be.</p>
|
python|tensorflow|tensorflow-federated
| 3
|
375,490
| 60,796,222
|
How can I Group By Year from a Date field using Python/Pandas
|
<p>I want to Group <strong>Return_On_Capital</strong> by <strong>datadate</strong> and <strong>Company name</strong></p>
<pre><code>Compustat.groupby(Compustat['datadate'].dt.strftime('%Y'))['Return_On_Capital'].sum().sort_values()
</code></pre>
<p>Sample data:</p>
<pre><code> datadate Company name    asset   Debt_Curr_Liabilities   Return_On_Capital
31/01/2007 AAR CORP 1067.633 74.245 -0.143515185
31/01/2011 AAR CORP 913.985 1703.727 -0.125509652
31/01/2011 AAR CORP 954.1 69 0.009514327
31/01/2007 ADC 1008.2 200.6 -0.097757499
30/01/2006 ADC 1107.7 1474.5 -0.091422466
31/01/2010 ALPHARMA 692.991 34.907 -0.053860375
31/01/2006 ALF 353.541 927.239 -0.131694528
</code></pre>
|
<p>This might work - </p>
<pre><code>Compustat['datadate'] = pd.to_datetime(Compustat['datadate'], format='%d/%m/%Y')
Compustat.groupby([Compustat['datadate'].dt.year, 'Company name']).agg(sum=('Return_On_Capital', 'sum')).sort_values('sum')
</code></pre>
|
python-3.x|pandas|group-by
| 1
|
375,491
| 60,984,003
|
Why the backpropagation process can still work when I included 'loss.backward()' in 'with torch.no_grad():'?
|
<p>I'm working with a linear regression example in PyTorch. I know I did wrong including 'loss.backward()' in 'with torch.no_grad():', but why it worked well with my code?</p>
<p>According to <a href="https://pytorch.org/docs/stable/autograd.html?highlight=no_grad#torch.autograd.no_grad" rel="nofollow noreferrer">pytorch docs</a>, <code>torch.autograd.no_grad</code> is a context-manager that disabled gradient calculation. So I'm really confused.</p>
<p>Code here:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# Toy dataset
x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
[9.779], [6.182], [7.59], [2.167], [7.042],
[10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
[3.366], [2.596], [2.53], [1.221], [2.827],
[3.465], [1.65], [2.904], [1.3]], dtype=np.float32)
input_size = 1
output_size = 1
epochs = 100
learning_rate = 0.05
model = nn.Linear(input_size, output_size)
criterion = nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# training
for epoch in range(epochs):
# convert numpy to tensor
inputs = torch.from_numpy(x_train)
targets = torch.from_numpy(y_train)
# forward
out = model(inputs)
loss = criterion(out, targets)
# backward
with torch.no_grad():
model.zero_grad()
loss.backward()
optimizer.step()
print('inputs grad : ', inputs.requires_grad)
if epoch % 5 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
predicted = model(torch.from_numpy(x_train)).detach().numpy()
plt.plot(x_train, y_train, 'ro', label='Original data')
plt.plot(x_train, predicted, label='Fitted line')
plt.legend()
plt.show()
# Save the model checkpoint
torch.save(model.state_dict(), 'model\linear_model.ckpt')
</code></pre>
<p>Thanks in advance for answering my question.</p>
|
<p>This worked because the loss calculation happened before the <code>no_grad</code> block, so the graph for that loss was built with gradients enabled, and <code>backward()</code> simply uses that graph.</p>
<p>Basically, you continue to update the weights of your layers using gradients that were computed from a forward pass done outside of the <code>no_grad</code> block.</p>
<p>When you actually use the <code>no_grad</code>: </p>
<pre><code>for epoch in range(epochs):
# convert numpy to tensor
inputs = torch.from_numpy(x_train)
targets = torch.from_numpy(y_train)
with torch.no_grad(): # no_grad used here
# forward
out = model(inputs)
loss = criterion(out, targets)
model.zero_grad()
loss.backward()
optimizer.step()
print('inputs grad : ', inputs.requires_grad)
if epoch % 5 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
</code></pre>
<p>Then you will get the proper error, saying: </p>
<p><code>element 0 of tensors does not require grad and does not have a grad_fn</code>.</p>
<p>That is, you use <code>no_grad</code> where it is not appropriate to use it.</p>
<p>If you print the <code>.requires_grad</code> of loss, then you will see that loss has <code>requires_grad</code>. </p>
<p>That is, when you do this: </p>
<pre><code>for epoch in range(epochs):
# convert numpy to tensor
inputs = torch.from_numpy(x_train)
targets = torch.from_numpy(y_train)
# forward
out = model(inputs)
loss = criterion(out, targets)
# backward
with torch.no_grad():
model.zero_grad()
loss.backward()
optimizer.step()
print('inputs grad : ', inputs.requires_grad)
print('loss grad : ', loss.requires_grad) # Prints loss.require_rgad
if epoch % 5 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
</code></pre>
<p>You will see: </p>
<pre><code>inputs grad : False
loss grad : True
</code></pre>
<p>Additionally, the </p>
<pre><code>print('inputs grad : ', inputs.requires_grad)
</code></pre>
<p>Will always print <code>False</code>. That is, if you do</p>
<pre><code>for epoch in range(epochs):
# convert numpy to tensor
inputs = torch.from_numpy(x_train)
targets = torch.from_numpy(y_train)
print('inputs grad : ', inputs.requires_grad). # Print the inputs.requires_grad
# forward
out = model(inputs)
loss = criterion(out, targets)
# backward
with torch.no_grad():
model.zero_grad()
loss.backward()
optimizer.step()
print('inputs grad : ', inputs.requires_grad)
print('loss grad : ', loss.requires_grad)
if epoch % 5 == 0:
print ('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
</code></pre>
<p>You will get: </p>
<pre><code>inputs grad : False
inputs grad : False
loss grad : True
</code></pre>
<p>That is, you are using wrong things to check what you did wrong. The best thing that you can do is to read again the docs of PyTorch on gradient mechanics. </p>
|
pytorch|backpropagation
| 7
|
375,492
| 60,953,735
|
KeyError: "['belongs_to_collection' 'homepage' 'original_title' 'overview'\n 'poster_path' 'status' 'tagline'] not found in axis"
|
<p>This is my data</p>
<pre><code> # Column Non-Null Count Dtype
0 belongs_to_collection 604 non-null object
1 budget 3000 non-null int64
2 genres 2993 non-null object
3 homepage 946 non-null object
4 imdb_id 3000 non-null object
5 original_language 3000 non-null object
6 original_title 3000 non-null object
7 overview 2992 non-null object
8 popularity 3000 non-null float64
9 poster_path 2999 non-null object
10 production_companies 2844 non-null object
11 production_countries 2945 non-null object
12 release_date 3000 non-null object
13 runtime 2998 non-null float64
14 spoken_languages 2980 non-null object
15 status 3000 non-null object
16 tagline 2403 non-null object
17 title 3000 non-null object
18 Keywords 2724 non-null object
19 cast 2987 non-null object
20 crew 2984 non-null object
21 revenue 3000 non-null int64
dtypes: float64(2), int64(2), object(18)
</code></pre>
<p>I run it by python3.7, when I am trying to drop the column, it remind me that"KeyError: "['belongs_to_collection' 'homepage' 'original_title' 'overview'\n 'poster_path' 'status' 'tagline'] not found in axis""</p>
<p>Here is my code.</p>
<pre><code>to_drop = ['belongs_to_collection', 'homepage','original_title','overview','poster_path','status','tagline']
data.head()
data.drop(to_drop, inplace=True, axis=1)
</code></pre>
|
<p>You can try with:</p>
<pre><code>data.drop(columns=to_drop, inplace=True)
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer">pandas DOC</a></p>
<p>EDIT</p>
<p>Both ways work here!</p>
<pre><code>import pandas as pd
data = pd.read_csv('data.tab', sep="\t")
to_drop = ['belongs_to_collection', 'homepage','original_title','overview','poster_path','status','tagline']
data.drop(columns=to_drop, inplace=True)
#data.drop(to_drop, inplace=True, axis=1) <== working too
data.info()
</code></pre>
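<p>If the drop still raises a KeyError, the labels in the file probably don't match exactly; a common culprit is stray whitespace in the CSV header. A small sketch to guard against that:</p>
<pre><code>data.columns = data.columns.str.strip()                       # remove accidental spaces in header names
data.drop(columns=to_drop, inplace=True, errors='ignore')     # skip labels that really are absent
</code></pre>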
|
python|csv|data-mining|data-cleaning|sklearn-pandas
| 0
|
375,493
| 60,804,944
|
Selective Groupby-Aggregate using Python Pandas DataFrame
|
<p>How can we aggregate all the rows after 4pm of one day till before 10am of the next day in the DataFrame by performing OHLC operations on the grouped rows?</p>
<p>This will convert the original DataFrame from</p>
<pre><code> symbol datetime open high low close date toCombine
0 AAPL 2020-01-01 15:00 3 5 2 4 2020-01-01 False
1 AAPL 2020-01-01 15:30 4 10 4 8 2020-01-01 False
2 AAPL 2020-01-01 16:00 8 15 6 12 2020-01-01 False
3 AAPL 2020-01-01 18:00 12 20 8 16 2020-01-01 True
4 AAPL 2020-01-01 20:00 12 20 8 16 2020-01-01 True
5 AAPL 2020-01-02 07:00 15 24 9 19 2020-01-02 True
6 AAPL 2020-01-02 10:00 16 25 10 20 2020-01-02 True
7 AAPL 2020-01-02 12:00 20 30 12 24 2020-01-02 False
8 AAPL 2020-01-02 14:00 24 70 14 26 2020-01-02 False
9 AAPL 2020-01-02 16:00 103 105 102 104 2020-01-02 False
10 AAPL 2020-01-02 18:00 104 100 104 196 2020-01-02 True
11 AAPL 2020-01-03 08:00 108 110 106 112 2020-01-03 True
12 AAPL 2020-01-03 10:30 112 120 108 116 2020-01-03 False
13 AAPL 2020-01-03 13:00 115 124 109 119 2020-01-03 False
</code></pre>
<p>to <em>(ignoring the index values here)</em>:</p>
<pre><code> symbol datetime open high low close date
0 AAPL 2020-01-01 15:00 3 5 2 4 2020-01-01
1 AAPL 2020-01-01 15:30 4 10 4 8 2020-01-01
2 AAPL 2020-01-01 16:00 8 15 6 12 2020-01-01
6 AAPL 2020-01-02 10:00 12 25 8 20 2020-01-02 <---- aggregated row
7 AAPL 2020-01-02 12:00 20 30 12 24 2020-01-02
8 AAPL 2020-01-02 14:00 24 70 14 26 2020-01-02
9 AAPL 2020-01-02 16:00 103 105 102 104 2020-01-02
11 AAPL 2020-01-03 10:00 103 110 100 112 2020-01-03 <---- aggregated 10:00 row created if not exist
12 AAPL 2020-01-03 10:30 112 120 108 116 2020-01-03
13 AAPL 2020-01-03 13:00 115 124 109 119 2020-01-03
</code></pre>
<p>Notes:</p>
<ol>
<li><p><code>toCombine</code> column has already been created to label the rows that will be aggregated into a single row with <code>datetime</code> value that has a time of <code>10:00</code>.</p></li>
<li><p>If the row with a <code>datetime</code> value with time of <code>10:00</code> does not exist, it should be created. However, if there are also no rows with <code>toCombine == True</code> to aggregate from, then the <code>10:00</code> row does not need to be created.</p></li>
</ol>
<p>Thank you!</p>
<hr>
<p><strong>Python Code to Setup Problem</strong></p>
<pre><code>import pandas as pd
data = [
('AAPL', '2020-01-01 15:00', 3, 5, 2, 4, '2020-01-01', False),
('AAPL', '2020-01-01 15:30', 4, 10, 4, 8, '2020-01-01', False),
('AAPL', '2020-01-01 16:00', 8, 15, 6, 12, '2020-01-01', False),
('AAPL', '2020-01-01 18:00', 12, 20, 8, 16, '2020-01-01', True),
('AAPL', '2020-01-01 20:00', 12, 20, 8, 16, '2020-01-01', True),
('AAPL', '2020-01-02 07:00', 15, 24, 9, 19, '2020-01-02', True),
('AAPL', '2020-01-02 10:00', 16, 25, 10, 20, '2020-01-02', True),
('AAPL', '2020-01-02 12:00', 20, 30, 12, 24, '2020-01-02', False),
('AAPL', '2020-01-02 14:00', 24, 70, 14, 26, '2020-01-02', False),
('AAPL', '2020-01-02 16:00', 103, 105, 102, 104, '2020-01-02', False),
('AAPL', '2020-01-02 18:00', 104, 100, 104, 196, '2020-01-02', True),
('AAPL', '2020-01-03 08:00', 108, 110, 106, 112, '2020-01-03', True),
('AAPL', '2020-01-03 10:30', 112, 120, 108, 116, '2020-01-03', False),
('AAPL', '2020-01-03 13:00', 115, 124, 109, 119, '2020-01-03', False),
]
df = pd.DataFrame(data, columns=['symbol', 'datetime', 'open', 'high', 'low', 'close', 'date', 'toCombine'])
print(df)
</code></pre>
|
<p><strong>My approach</strong></p>
<pre><code>#Group and agg
m = df['toCombine']
agg_dict = {'datetime' : 'last',
'open' : 'first',
'high' : 'max',
'low' : 'min',
'close' : 'last'}
reduce_df = (df.loc[m].groupby(['symbol',(~m).cumsum()],
as_index=False).agg(agg_dict))
print(reduce_df)
symbol datetime open high low close
0 AAPL 2020-01-02 10:00 12 25 8 20
1 AAPL 2020-01-03 08:00 104 110 104 112
</code></pre>
<p>Grouping by symbol may not be necessary if the data is already sorted correctly.</p>
<hr>
<pre><code>#get correct datetime, append and sort
datetime = pd.to_datetime(reduce_df['datetime'])
hours = datetime.dt.hour
df = (reduce_df.assign(datetime = (datetime.mask(hours.gt(16),
datetime.add(pd.to_timedelta(10 + 24 - hours,
unit='h')))
.mask(hours.lt(10),
datetime.add(pd.to_timedelta(10 - hours,
unit='h')))),
date = lambda x: x['datetime'].dt.date)
.assign(datetime = lambda x: x['datetime'].dt.strftime('%Y-%m-%d %H:%M'))
.append(df.loc[~m])
.drop(columns='toCombine')
.sort_values(['symbol','datetime','date'])
)
print(df)
symbol datetime open high low close date
0 AAPL 2020-01-01 15:00 3 5 2 4 2020-01-01
1 AAPL 2020-01-01 15:30 4 10 4 8 2020-01-01
2 AAPL 2020-01-01 16:00 8 15 6 12 2020-01-01
0 AAPL 2020-01-02 10:00 12 25 8 20 2020-01-02
7 AAPL 2020-01-02 12:00 20 30 12 24 2020-01-02
8 AAPL 2020-01-02 14:00 24 70 14 26 2020-01-02
9 AAPL 2020-01-02 16:00 103 105 102 104 2020-01-02
1 AAPL 2020-01-03 10:00 104 110 104 112 2020-01-03
12 AAPL 2020-01-03 10:30 112 120 108 116 2020-01-03
</code></pre>
<hr>
<p>We could also have used <code>DataFrame.set_index</code>, <code>DataFrame.asfreq</code> and <code>ffill</code> before the groupby so we wouldn't have to adjust the datetime afterwards, but I think the performance would be similar.</p>
<p>If you want to include hour=16 in the aggregation, change the mask to use <code>hours.ge(16)</code> instead of <code>hours.gt(16)</code>.</p>
<p>In that case, also include that row when selecting with <code>.loc</code>, because it is not flagged by <code>toCombine</code> (<strong>row 9 is False</strong>).</p>
|
python|pandas|dataframe|aggregate|ohlc
| 1
|
375,494
| 60,839,222
|
How to apply Speller from autocorrect to a specific column of a dataframe
|
<p>I tried this:</p>
<pre><code>from autocorrect import Speller
spell = Speller(lang='en')
df['Text'] = df['Text'].apply(lambda x: spell(x))
</code></pre>
<p>But I get the error: <code>TypeError: expected string or bytes-like object</code></p>
|
<p>Probably some of the values in <code>df['Text']</code> are neither <code>str</code> nor <code>bytes</code>. Try this:</p>
<pre><code>from autocorrect import Speller
spell = Speller(lang='en')
df['Text'] = df['Text'].apply(
lambda x: spell(x) if isinstance(x, str) or isinstance(x, bytes) else x)
</code></pre>
|
python|pandas
| 0
|
375,495
| 60,817,191
|
Average pooling with window over variable length sequences
|
<p>I have a tensor <code>in</code> of shape (batch_size, features, steps) and want to get an output tensor <code>out</code> of the same shape by average pooling over the time dimension (steps) with a window size of <code>2k+1</code>, that is:</p>
<pre><code>out[b,f,t] = 1/(2k+1) sum_{t'=t-k,...,t+k} in[b,f,t']
</code></pre>
<p>For time steps where there are no <code>k</code> preceding and succeeding time steps, I only want to calculate the average on the existing time steps.</p>
<p>However, the sequences in the tensor have variable length and are padded with zeros accordingly, I have the sequence lengths stored in another tensor (and could e.g. create a mask with them).</p>
<ul>
<li>I know I can use <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/nn/avg_pool1d" rel="nofollow noreferrer"><code>out = tf.nn.avg_pool1d(in, ksize=2k+1, strides=1, padding="SAME", data_format="NCW")</code></a> that performs my described pooling-operation, however it does not understand that my sequences are padded with zero and does not allow me to pass a mask with the sequence lengths.</li>
<li>There also is <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D" rel="nofollow noreferrer"><code>tf.keras.layers.GlobalAveragePooling1D</code></a>, but that layer always pools over the entire sequence and doesn't allow me to specify a window size.</li>
</ul>
<p>How can I perform this operation <strong>with masking and a window size</strong>?</p>
|
<p>As far as I know, there is no such operation in TensorFlow. However, one can use a combination of two unmasked pooling operations, here written in pseudocode:</p>
<ol>
<li>Let <code>seq_mask</code> be a <a href="https://www.tensorflow.org/api_docs/python/tf/sequence_mask" rel="nofollow noreferrer">sequence mask</a> of shape (batch_size, time)</li>
<li>Let <code>in_pooled</code> be the tensor <code>in</code> after unmasked average pooling</li>
<li>Let <code>seq_mask_pooled</code> be the tensor <code>seq_mask</code> with unmasked average pooling with the same pool size</li>
<li>Obtain the tensor <code>out</code> as follows: every element of <code>out</code> where the sequence mask is <code>0</code> should also be <code>0</code>. Every other element is obtained by dividing <code>in_pooled</code> by <code>seq_mask_pooled</code> element-wise (note that the element of <code>seq_mask_pooled</code> is never <code>0</code> if the corresponding element of <code>seq_mask</code> is not).</li>
</ol>
<p>The tensor <code>out</code> can e.g. be calculated using <a href="https://www.tensorflow.org/api_docs/python/tf/math/divide_no_nan" rel="nofollow noreferrer"><code>tf.math.divide_no_nan</code></a>.</p>
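<p>A minimal sketch of those steps (the helper name and shapes are illustrative; it assumes <code>x</code> has shape (batch_size, features, steps), <code>seq_len</code> holds the per-example lengths, and that <code>data_format="NCW"</code> pooling is available on your device):</p>
<pre><code>import tensorflow as tf

def masked_avg_pool1d(x, seq_len, k):
    """x: (batch, features, steps); seq_len: (batch,) integer lengths."""
    steps = tf.shape(x)[2]
    mask = tf.sequence_mask(seq_len, maxlen=steps, dtype=x.dtype)  # (batch, steps)
    mask = mask[:, tf.newaxis, :]                                  # (batch, 1, steps)
    # unmasked pooling of the zero-padded input and of the mask itself
    x_pooled = tf.nn.avg_pool1d(x * mask, ksize=2 * k + 1, strides=1,
                                padding="SAME", data_format="NCW")
    m_pooled = tf.nn.avg_pool1d(mask, ksize=2 * k + 1, strides=1,
                                padding="SAME", data_format="NCW")
    # divide element-wise; positions beyond the sequence length stay 0
    return tf.math.divide_no_nan(x_pooled, m_pooled) * mask
</code></pre>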
|
python|tensorflow|moving-average|pooling
| 0
|
375,496
| 60,892,714
|
How to get the Weight of Evidence (WOE) and Information Value (IV) in Python/pandas?
|
<p>I was wondering how to calculate the WOE and IV in python.
Are there any dedicated functions in numpy/scipy/pandas/sklearn?</p>
<p>Here is my example dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
np.random.seed(100)
df = pd.DataFrame({'grade': np.random.choice(list('ABCD'),size=(20)),
'pass': np.random.choice([0,1],size=(20))
})
df
</code></pre>
|
<p>Formulas for woe and iv:</p>
<p><a href="https://i.stack.imgur.com/LLp8M.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LLp8M.png" alt="enter image description here"></a></p>
<p>Code to achieve this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
np.random.seed(100)
df = pd.DataFrame({'grade': np.random.choice(list('ABCD'),size=(20)),
'pass': np.random.choice([0,1],size=(20))
})
feature,target = 'grade','pass'
df_woe_iv = (pd.crosstab(df[feature],df[target],
normalize='columns')
.assign(woe=lambda dfx: np.log(dfx[1] / dfx[0]))
.assign(iv=lambda dfx: np.sum(dfx['woe']*
(dfx[1]-dfx[0]))))
df_woe_iv
</code></pre>
<h1>output</h1>
<pre><code>pass 0 1 woe iv
grade
A 0.3 0.3 0.000000 0.690776
B 0.1 0.1 0.000000 0.690776
C 0.2 0.5 0.916291 0.690776
D 0.4 0.1 -1.386294 0.690776
</code></pre>
|
python|pandas|machine-learning
| 12
|
375,497
| 60,762,160
|
Python pandas groupby agg- sum one column while getting the mean of the rest
|
<p>Looking to group my fields based on date, and get a mean of all the columns except a binary column which I want to sum in order to get a count. </p>
<p>I know I can do this by:</p>
<p><code>newdf=df.groupby('date').agg({'var_a': 'mean', 'var_b': 'mean', 'var_c': 'mean', 'binary_var':'sum'})</code></p>
<p>But there are about 50 columns (other than the binary one) that I want to take the mean of, and I feel there must be a simpler, quicker way of doing this than writing 'column title': 'mean' for each of them. I've tried making a list of column names, but when I put that in the agg function it says a list is an unhashable type.</p>
<p>Thanks!</p>
|
<p>Something like this might work - </p>
<pre><code>df = pd.DataFrame({'a':['a','a','b','b','b','b'], 'b':[10,20,30,40,20,10], 'c':[1,1,0,0,0,1], 'd':[20,30,10,15,34,10]})
df
a b c d
0 a 10 1 20
1 a 20 1 30
2 b 30 0 10
3 b 40 0 15
4 b 20 0 34
5 b 10 1 10
</code></pre>
<p>Assuming <code>c</code> is the binary variable column. Then, </p>
<pre><code>cols = [ val for val in df.columns if val != 'c']
temp = pd.concat([df.groupby(['a'])[cols].mean(), df.groupby(['a'])['c'].sum()], axis=1).reset_index()
temp
a b d c
0 a 15.0 25.00 2
1 b 25.0 17.25 1
</code></pre>
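<p>For the original question (mean of ~50 columns, sum of one binary column), you can also build the aggregation dict programmatically instead of typing every entry; a sketch using the question's names:</p>
<pre><code>agg_dict = {col: 'mean' for col in df.columns if col not in ('date', 'binary_var')}
agg_dict['binary_var'] = 'sum'
newdf = df.groupby('date').agg(agg_dict)
</code></pre>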
|
python|pandas|dataframe
| 1
|
375,498
| 60,917,399
|
Error trying to convert simple convolutional model to CoreML
|
<p>I'm trying to convert a simple GAN generator (from ClusterGAN):</p>
<pre><code>self.name = 'generator'
self.latent_dim = latent_dim
self.n_c = n_c
self.x_shape = x_shape
self.ishape = (128, 7, 7)
self.iels = int(np.prod(self.ishape))
self.verbose = verbose
self.model = nn.Sequential(
# Fully connected layers
torch.nn.Linear(self.latent_dim + self.n_c, 1024),
nn.BatchNorm1d(1024),
nn.LeakyReLU(0.2, inplace=True),
torch.nn.Linear(1024, self.iels),
nn.BatchNorm1d(self.iels),
nn.LeakyReLU(0.2, inplace=True),
# Reshape to 128 x (7x7)
Reshape(self.ishape),
# Upconvolution layers
nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, bias=True),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.2, inplace=True),
nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1, bias=True),
nn.Sigmoid()
)
</code></pre>
<p>But onnx-coreml fails with <code>Error while converting op of type: BatchNormalization. Error message: provided number axes 2 not supported</code></p>
<p>I thought it was the BatchNorm2d, so I tried reshaping and applying BatchNorm1d, but I get the same error. Any thoughts? I'm very surprised that I'm having problems converting such a simple model, so I'm assuming that I must be missing something obvious.</p>
<p>I'm targeting iOS 13 and using Opset v10 for the onnx conversion.</p>
|
<p>Core ML does not have 1-dimensional batch norm. The tensor must have at least rank 3.</p>
<p>If you want to convert this model, you should fold the batch norm weights into those of the preceding layer and remove the batch norm layer. (I don't think PyTorch has a way to automatically do this for you.)</p>
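<p>As a rough workaround you can do the folding manually before exporting. A minimal sketch for a <code>Linear</code> + <code>BatchNorm1d</code> pair (the helper name is just for illustration, and the model is assumed to be in eval mode so the running statistics are used; the same per-channel scaling idea applies to <code>ConvTranspose2d</code> + <code>BatchNorm2d</code>):</p>
<pre><code>import torch
import torch.nn as nn

def fold_bn_into_linear(linear: nn.Linear, bn: nn.BatchNorm1d) -> nn.Linear:
    # y = gamma * (Wx + b - mu) / sqrt(var + eps) + beta  collapses into  y = W'x + b'
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Linear(linear.in_features, linear.out_features, bias=True)
    fused.weight.data = linear.weight.data * scale.unsqueeze(1)
    fused.bias.data = (linear.bias.data - bn.running_mean) * scale + bn.bias.data
    return fused
</code></pre>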
|
pytorch|coreml|generative-adversarial-network
| 1
|
375,499
| 61,030,607
|
Create new columns based on a tree like pattern
|
<p>I have the following dataframe:</p>
<pre><code>col1 col2
basic c
c c++
c++ java
ruby
php
java python
python
r
c#
</code></pre>
<p>I want to create new columns based on a pattern followed in the dataframe.<br>
For example, in the above dataframe the order <code>basic->c->c++->java->python</code> can be observed from col1 and col2.</p>
<p><strong>Logic:</strong> </p>
<p>The col1 value <code>basic</code> has the value <code>c</code> in <code>col2</code>; similarly, <code>c</code> in <code>col1</code> corresponds to <code>c++</code> in col2, <code>c++</code> leads to <code>java</code> in <code>col2</code>, and finally <code>java</code> leads to <code>python</code> in <code>col2</code>.<br>
Remaining values in "col1" that have corresponding blanks in <code>col2</code> should be left blank in the newly created columns as well (that is, we consider only the values in "col1" that do not have blanks in <code>col2</code>).</p>
<p>So my output dataframe would be:</p>
<pre><code> col1 col2 new_col1 new_col2 new_col3 new_col4
0 basic c c c++ java python
1 c c++ c++ java python
2 c++ java java python
3 ruby
4 php
5 java python python
6 python
7 r
8 c
</code></pre>
<p>Thanks !</p>
|
<p>This can be solved through <a href="https://en.wikipedia.org/wiki/Graph_theory" rel="nofollow noreferrer">graph theory</a> analysis. It looks like you want to obtain all <a href="https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#successor" rel="nofollow noreferrer">successors</a> starting from each of the nodes in <code>col2</code>. For that we need to first build a <a href="https://en.wikipedia.org/wiki/Directed_graph" rel="nofollow noreferrer">directed graph</a> using the columns <code>col1</code> and <code>col2</code>. We can use networkX for that, and build a <code>nx.DiGraph</code> from the dataframe using <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.convert_matrix.from_pandas_edgelist.html" rel="nofollow noreferrer"><code>nx.from_pandas_edgelist</code></a>:</p>
<pre><code>import networkx as nx
m = df.ne('').all(1)
G = nx.from_pandas_edgelist(df[m],
source='col1',
target='col2',
create_using=nx.DiGraph())
</code></pre>
<p>And then we can iterate over the <code>nodes</code> in <code>col2</code>, and search for all successors starting from each node. For that we can use <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.traversal.depth_first_search.dfs_tree.html#networkx.algorithms.traversal.depth_first_search.dfs_tree" rel="nofollow noreferrer"><code>dfs_tree</code></a>, which will traverse the graph in search of the successors with a depth-first search from the source node:</p>
<pre><code>all_successors = [list(nx.dfs_tree(G, node)) for node in df.loc[m,'col2']]
</code></pre>
<p>Now we can assign the resulting successor lists back as new columns with:</p>
<pre><code>out = (df.assign(
**pd.DataFrame(all_successors, index=df[m].index)
.reindex(df.index)
.fillna('')
.add_prefix('new_col')))
</code></pre>
<hr>
<pre><code>print(out)
col1 col2 new_col0 new_col1 new_col2 new_col3
0 basic c c c++ java python
1 c c++ c++ java python
2 c++ java java python
3 ruby
4 php
5 java python python
6 python
7 r
8 c
</code></pre>
<hr>
<p>To better explain this approach, consider instead this slightly different network, with an additional component:</p>
<p><a href="https://i.stack.imgur.com/2ckE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ckE1.png" alt="enter image description here"></a></p>
<p>As mentioned what we want here is a list of successors for each of the nodes that we have in <code>Col2</code>. For these problems there are several graph searching algorithms, which can be used to explore the branches of a graph starting from a given node. For that we can use the <code>depth first search</code> based functions available in <code>nx.algorithms.traversal</code>. In this case we want <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.traversal.depth_first_search.dfs_tree.html#networkx.algorithms.traversal.depth_first_search.dfs_tree" rel="nofollow noreferrer"><code>nx.dfs_tree</code></a>, which returns an <em>oriented tree</em> constructed through a depth-first-search starting from a specified node.</p>
<p>Here are some examples:</p>
<pre><code>list(nx.dfs_tree(G, 'c++'))
# ['c++', 'java', 'python', 'julia']
list(nx.dfs_tree(G, 'python'))
# ['python', 'julia']
list(nx.dfs_tree(G, 'basic'))
# ['basic', 'php', 'sql']
</code></pre>
<p>Note that this could get quite a bit trickier if there were cycles within the graph. Say there is, for instance, an edge between <code>c++</code> and <code>scala</code>. In that case it becomes unclear which path should be chosen. One way could be to traverse all respective paths with <code>nx.dfs_tree</code> and keep the one of interest according to some predefined logic, such as keeping the longest. Though it doesn't seem like this is the case in this problem.</p>
|
python|pandas|graph|networkx|graph-theory
| 2
|