Dataset schema: Unnamed: 0 (int64, 0–378k), id (int64, 49.9k–73.8M), title (string, 15–150 chars), question (string, 37–64.2k chars), answer (string, 37–44.1k chars), tags (string, 5–106 chars), score (int64, -10–5.87k). Each record below lists, in order: index, id, title, question, answer, tags, score.
375,900
73,655,690
Parsing Nested JSON to one data file
<p>I am trying to parse a nested json.</p> <p>I've got the dataset stored here so that you can see what I'm seeing specifically if you want: <a href="https://mega.nz/file/YWNSRBjK#V9DpoY5LSp-VL8Mnu7NEfNf3FhDOCj9FHBiTQ4KHEa8" rel="nofollow noreferrer">https://mega.nz/file/YWNSRBjK#V9DpoY5LSp-VL8Mnu7NEfNf3FhDOCj9FHBiTQ4KHEa8</a></p> <p>I am attempting to parse this using pandas json_normalize function. Below is what my code looks like in it's entirety.</p> <pre><code>import gzip import shutil import json import pandas as pd with gzip.open('testjson.json.gz', 'rb') as f_in: with open('unzipped_json.json', 'wb') as f_out: shutil.copyfileobj(f_in, f_out) f = open('unzipped_json.json') data = json.load(f) keys = data.keys() keys_string = list(keys) ### In Network in_network_df = pd.json_normalize(data['in_network']) ### Negotiated Rates negotiated_rates_df = pd.json_normalize(data=data['in_network'], record_path=(&quot;negotiated_rates&quot;)) negotiated_rates_df = negotiated_rates_df.explode('provider_references') negotiated_rates_df = negotiated_rates_df.explode('negotiated_prices') ### Negotiated Prices negotiated_prices_df = pd.json_normalize(data=data['in_network'], meta=[ #['negotiated_rates','provider_references'], # ['negotiation_arrangement', 'name','billing_code_type','billing_code','description'] ], record_path=['negotiated_rates','negotiated_prices'], errors='ignore') negotiated_prices_df = negotiated_prices_df.explode('service_code') ### Provider References provider_references_df = pd.json_normalize(data['provider_references']) provider_references_test = provider_references_df.explode('provider_groups') ### Provider Groups provider_groups = pd.json_normalize(data=data['provider_references'], meta=['provider_group_id'], record_path=(&quot;provider_groups&quot;)) provider_groups = provider_groups.explode('npi') </code></pre> <p>I am specifically having trouble with the negotiated prices part of this json object. I am trying to add in some data from parent objects, but it is giving me an error. To point out specifically what I would like to do here it is below.</p> <pre><code>negotiated_prices_df = pd.json_normalize(data=data['in_network'], meta=['provider_references'], record_path=['negotiated_rates','negotiated_prices'], errors='ignore') </code></pre> <p>When I try to do this I get ValueError: operands could not be broadcast together with shape (74607,) (24869,)</p> <p>Can anyone help me understand what is going on here?</p> <p>Edit: Trying to provide some more context in case someone is not wanting to open my file... Here is one spot showing the problematic portion I'm dealing with in the JSON. I can't seem to get the provider_references to attach to any of the child objects.</p> <blockquote> <p>&quot;provider_references&quot;:[261, 398, 799],&quot;negotiated_prices&quot;:[{&quot;negotiated_type&quot;: &quot;fee schedule&quot;,&quot;negotiated_rate&quot;: 296.00,&quot;expiration_date&quot;: &quot;2023-06-30&quot;,&quot;service_code&quot;: [&quot;01&quot;, &quot;02&quot;, &quot;03&quot;, &quot;04&quot;, &quot;05&quot;, &quot;06&quot;, &quot;07&quot;, &quot;08&quot;, &quot;09&quot;, &quot;10&quot;, &quot;11&quot;, &quot;12&quot;, &quot;13&quot;,</p> </blockquote>
<p>I think the code that you want looks like this:</p> <pre><code>with open('unzipped_json.json') as f: data = json.load(f) negotiated_rates_and_prices_df = pd.json_normalize( data[&quot;in_network&quot;], record_path=[&quot;negotiated_rates&quot;, [&quot;negotiated_prices&quot;]], meta=[ &quot;negotiation_arrangement&quot;, &quot;name&quot;, &quot;billing_code_type&quot;, &quot;billing_code_type_version&quot;, &quot;billing_code&quot;, &quot;description&quot;, [&quot;negotiated_rates&quot;, &quot;provider_references&quot;], ], ) </code></pre> <p>That takes care of the <code>in_network</code> part of the JSON. The trick is that within the metadata path you want to put the columns which are not nested in a regular list, and the nested ones in the order of nesting (ie <code>[&quot;negotiated_rates&quot;, &quot;provider_references&quot;]</code>). There's a similar example in the docs <a href="https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.json_normalize.html?#codecell3" rel="nofollow noreferrer">here</a>.</p> <p>Then for the other nested part of the JSON you can do this:</p> <pre><code>provider_references_df = pd.json_normalize( data[&quot;provider_references&quot;], &quot;provider_groups&quot;, &quot;provider_group_id&quot; ) </code></pre> <p>And that takes care of the whole thing.</p>
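<p>For readers who don't want to open the linked file, here is the same record-path/meta pattern on a minimal, self-contained toy payload (field values invented, structure mirroring the snippet above):</p> <pre><code>import pandas as pd

sample = {
    'in_network': [
        {
            'billing_code': '0001',
            'negotiated_rates': [
                {
                    'provider_references': [261, 398, 799],
                    'negotiated_prices': [
                        {'negotiated_rate': 296.0, 'service_code': ['01', '02']}
                    ],
                }
            ],
        }
    ]
}

out = pd.json_normalize(
    sample['in_network'],
    record_path=['negotiated_rates', 'negotiated_prices'],
    meta=['billing_code', ['negotiated_rates', 'provider_references']],
)
print(out)
</code></pre>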
python|json|pandas
2
375,901
73,710,989
Find rows where the value in either one of the columns is NaN
<p>Here is a dataframe that I am working with:</p> <pre><code>cl_id a c d e A1 A2 A3 0 1 -0.419279 0.843832 -0.530827 text76 1.537177 -0.271042 1 2 0.581566 2.257544 0.440485 dafN_6 0.144228 2.362259 2 3 -1.259333 1.074986 1.834653 system NAN 1.100353 3 4 -1.279785 0.272977 0.197011 Fifty -0.031721 1.434273 4 5 0.578348 0.595515 0.553483 channel NaN NaN 5 6 -1.549588 -0.198588 0.373476 audio -0.508501 NAN </code></pre> <p>I would like to find rows that values in either column A2 or A3 is NaN. So it will return</p> <pre><code>cl_id a c d e A1 A2 A3 2 3 -1.259333 1.074986 1.834653 system NAN 1.100353 5 6 -1.549588 -0.198588 0.373476 audio -0.508501 NAN </code></pre> <p>Any help is appreciated.</p>
<p>You can do <code>count</code>:</p> <pre><code># 1 here is len(['A2', 'A3']) - count_na
count_na = 1
df[df[['A2','A3']].count(axis=1) == 1]
</code></pre> <p>Or you can check with <code>isna</code> and sum the result:</p> <pre><code>count_na = 1
df[df[['A2','A3']].isna().sum(axis=1) == count_na]
</code></pre>
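<p>Note that the count-based filter excludes a row where <em>both</em> A2 and A3 are NaN; if "either is NaN" should also cover that case, <code>any</code> is the direct translation:</p> <pre><code>df[df[['A2', 'A3']].isna().any(axis=1)]
</code></pre>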
python|pandas
0
375,902
73,693,268
How to pivot a table based on the values of one column
<p>let's say I have the below dataframe:</p> <pre><code>dataframe = pd.DataFrame({'col1': ['Name', 'Location', 'Phone','Name', 'Location'], 'Values': ['Mark', 'New York', '656','John', 'Boston']}) </code></pre> <p>which looks like this:</p> <pre><code>col1 Values Name Mark Location New York Phone 656 Name John Location Boston </code></pre> <p>As you can see I have my wanted columns as rows in col1 and not all values have a Phone number, <strong>is there a way for me to transform this dataframe to look like this:</strong></p> <pre><code>Name Location Phone Mark New York 656 John Boston NaN </code></pre> <p>I have tried to transpose in Excel, do a Pivot and a Pivot_Table:</p> <pre><code>pivoted = pd.pivot_table(data = dataframe, values='Values', columns='col1') </code></pre> <p>But this comes out incorrectly. any help would be appreciated on this.</p> <p>NOTES: All new section start with the Name value and end before the Name value of the next person.</p>
<p>Create a new <code>index</code> using <code>cumsum</code> to identify unique sections, then do <code>pivot</code> as usual...</p> <pre><code>df['index'] = df['col1'].eq('Name').cumsum()
df.pivot('index', 'col1', 'Values')
</code></pre> <hr /> <pre><code>col1   Location  Name Phone
index
1      New York  Mark   656
2        Boston  John   NaN
</code></pre>
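<p>On pandas 2.x, <code>pivot</code> only accepts keyword arguments, so the same call becomes:</p> <pre><code>df.pivot(index='index', columns='col1', values='Values')
</code></pre>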
python|pandas|dataframe|pivot
2
375,903
73,559,770
Remote ray call Ignoring function arguments
<p>I am trying to apply <code>ray</code> to a transformer pipeline as:</p> <pre><code>@ray.remote def predict(pipeline, text_data, max_length, min_length, do_sample): return pipeline(text_data, max_length, min_length, do_sample) </code></pre> <p>and initializing as:</p> <pre><code>predictions = ray.get(predict.remote(pipe_1, section, max_length=some_length, min_length=some_length, do_sample=False)) </code></pre> <p>the pipeline <code>pipe_1</code> is defined using <code>ray.put()</code></p> <pre><code>pipe_1 = ray.put(&quot;transformer pipeline method definition&quot;) </code></pre> <p>Pipeline method definition is similar to as described <a href="https://huggingface.co/facebook/bart-large-cnn" rel="nofollow noreferrer">here</a>. I have initialized <code>ray.init()</code> as : <code>ray.init(num_cpus=num_cpus, ignore_reinit_error=True)</code></p> <p>However its seems my <code>*args</code> are being ignored. Looks like <code>max_length</code>, <code>min_length</code> and <code>do_sample</code> arguments are ignored - below is the sample console output:</p> <pre><code>(predict pid=19440) Ignoring args : (529, 423, False) (predict pid=19440) Ignoring args : (680, 544, False) </code></pre> <p>Any suggestions.</p> <p>Source code and article: <a href="https://towardsdatascience.com/parallel-inference-of-huggingface-transformers-on-cpus-4487c28abe23" rel="nofollow noreferrer">Parallel Inference of HuggingFace Transformers on CPUs</a></p>
<p>Copy-pasting from <a href="https://discuss.ray.io/t/remote-ray-call-ignoring-function-arguments/7413" rel="nofollow noreferrer">discuss</a>:</p> <p>Sorry I’m not exactly sure what you mean by the args getting ignored. Could you post a runnable reproduction, preferably without the actual pipeline (you can just use an empty function)?</p> <p>I’m not sure if this is related to your issue, but I would also suggest calling the pipeline as a remote function instead of passing it into the wrapper task. This is likely to be more efficient and fault-tolerant, as Ray will broadcast the pipeline definition to all workers beforehand. You can do this like so:</p> <pre class="lang-py prettyprint-override"><code>pipe1 = &quot;transformer pipeline method definition&quot;
pipe1_remote_fn = ray.remote(pipe1)
pipe1_remote_fn.remote(section, max_length=some_length, min_length=some_length, do_sample=False)
</code></pre>
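<p>If the <code>Ignoring args</code> lines are printed by the transformers pipeline itself rather than by Ray (the message in the question matches the pipeline's wording), the likely culprit is the positional call inside the remote function: Hugging Face pipelines expect generation parameters as keywords and discard extra positional inputs. A sketch of that fix, keeping the question's signature:</p> <pre class="lang-py prettyprint-override"><code>@ray.remote
def predict(pipeline, text_data, max_length, min_length, do_sample):
    # forward the generation parameters as keywords so the pipeline
    # does not treat them as extra positional inputs and ignore them
    return pipeline(text_data, max_length=max_length,
                    min_length=min_length, do_sample=do_sample)
</code></pre>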
python|python-3.x|huggingface-transformers|ray
0
375,904
73,759,925
Convert string dictionary in pandas.core.series.Series to dictionary in python
<p>I read my data from excel and saved it in data frame format. One of the columns of the data has data in a dictionary format(same shape but not dictionary format), which is recognized as a string format. So, I want to change the data type of all rows (more than 40k) in that column from string to dictionary format. The when printing out column, the results look like this:</p> <pre><code>df['fruit'] 0 NaN 1 {'apple': [{'A': 1, 'B': 2, ... 2 {'apple': [{'A': 3, 'B': 4, ... 3 {'orange': [{'A': 5, 'B': 6... 4 {'apple': [{'A': 0, 'B': 9, ... </code></pre> <p>If I use that to_dict() to the column, it will be converted as follows.</p> <pre><code>df['fruit'].to_dict() {0: NaN, 1: &quot;{'apple': [{'A': 1, 'end': b, ...}&quot;, 2: &quot;{'apple': [{'A': 3, 'B': 4, ...}&quot;, 3: &quot;{'orange': [{'A': 5, 'B': 6...}&quot;, 4: &quot;{'apple': [{'A': 0, 'B': 9, ...}&quot;, </code></pre> <p>Then, when using to_dict('list'), I got the following error message.</p> <pre><code>df['fruit'].to_dict('list') .... TypeError: unsupported type: &lt;class 'str'&gt; </code></pre> <p>I want to use the dictionary format because I need only the information corresponding to 'B' in the data corresponding to the 'orange.'</p> <p>Any help would be greatly appreciated!</p>
<p>Use:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'string dict': [&quot;{'a': 1}&quot;, &quot;{'b':2}&quot;]})
df['string dict'].apply(eval)
</code></pre> <p>which can be validated as follows:</p> <pre><code>type(df['string dict'].apply(eval)[0])
</code></pre> <p>returns:</p> <pre><code>dict
</code></pre> <p>Based on your comment:</p> <pre><code>df['string dict'].fillna('{}').apply(eval)
</code></pre> <p>I reproduced your error using the following test data:</p> <pre><code>import numpy as np

df = pd.DataFrame({'string dict': [&quot;{'a': 1}&quot;, &quot;{'b':2}&quot;, np.nan, 2]})
</code></pre>
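<p>A small safety note: for strings that come from an external file, <code>ast.literal_eval</code> from the standard library does the same job as <code>eval</code> without executing arbitrary code:</p> <pre><code>import ast

df['fruit'].fillna('{}').apply(ast.literal_eval)
</code></pre>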
python|pandas|string|dictionary|type-conversion
1
375,905
73,764,642
How to group dates which are in sequential to 'From' and 'To'?
<p>I have dates in sequential and some are not in sequence. How can I group those dates to 'From date' and 'To Date'?</p> <pre><code>Name Date ABC Jan 1, 2022 ABC Jan 2, 2022 ABC Jan 3, 2022 ABC Feb 1, 2022 DEF Jan 1, 2022 DEF Mar 1, 2022 DEF Mar 2, 2022 </code></pre> <p>This should group as</p> <pre><code>Name From To ABC Jan 1, 2022 Jan 3, 2022 ABC Feb 1, 2022 Feb 1, 2022 DEF Jan 1, 2022 Jan 1, 2022 DEF Mar 1, 2022 Mar 2, 2022 </code></pre> <p>This is just the reverse of dates.explode (frequency day) where all the dates between two dates are converted to list of dates, but here I want to group those to 'from and to date'.</p>
<p>Doing <code>diff</code> with <code>cumsum</code> creates the <code>groupby</code> key:</p> <pre><code>x = pd.to_datetime(df.Date).diff().dt.days.ne(1).cumsum()
out = df.groupby([df['Name'], x])['Date'].agg(['first', 'last']).reset_index(level=0)

Out[219]:
     Name        first         last
Date
1     ABC  Jan 1, 2022  Jan 3, 2022
2     ABC  Feb 1, 2022  Feb 1, 2022
3     DEF  Jan 1, 2022  Jan 1, 2022
4     DEF  Mar 1, 2022  Mar 2, 2022
</code></pre>
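<p>To match the column names asked for in the question exactly, a final rename works:</p> <pre><code>out = out.rename(columns={'first': 'From', 'last': 'To'}).reset_index(drop=True)
</code></pre>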
python|pandas
1
375,906
73,785,695
How to join dataframes on columns of lists with 'contains' conditions in python
<p>I have two dataframes that look like this:</p> <p>Dataframe 1:</p> <pre><code>antecedents consequents 0 (20679) (15056BL) 1 (20675) (20676) 2 (20675) (20677) 3 (20723) (20724) 4 (22356) (20724) ... ... ... 178 (22355, 20724, 22356) (20719) 179 (20724, 22356, 20719) (22355) 180 (21212, 84991, 84992) (21977) 181 (21212, 21977, 84992) (84991) 182 (84991, 21977, 84992) (21212) </code></pre> <p>Dataframe 2:</p> <pre><code>Invoice Customer ID StockCode 0 489434 13085.0 [85048, 79323P, 79323W, 22041, 21232, 22064, 2... 1 489435 13085.0 [22350, 22349, 22195, 22353] 2 489436 13078.0 [48173C, 21755, 21754, 84879, 22119, 22142, 22... 3 489437 15362.0 [22143, 22145, 22130, 21364, 21360, 21351, 213... 4 489438 18102.0 [21329, 21252, 21100, 21033, 20711, 21410, 214... </code></pre> <p>DataFrame 1 sample:</p> <pre><code>pd.DataFrame( {'antecedents' : [('20679'), ('85048'), ('22143'), ('22065','22138'), ('20754','21035','22041')], 'consequents' : [('20676'), ('20719'), ('22355'), ('20724'), ('212212')] }) </code></pre> <p>DataFrame 2 sample:</p> <pre><code>pd.DataFrame( {'Customer ID' : [13085, 13078, 15362, 18102, 12682, 18087, 18087, 13635, 14110], 'StockCode' : [ ['85048', '79323P', '79323W', '22041', '21232', '22064'], ['22350', '22349', '22195', '22353'], ['48173C', '21755', '21754', '84879', '22119', '22142'], ['22143', '22145', '22130', '21364', '21360', '21351'], ['21329', '21252', '21100', '21033', '20711', '21410'], ['22065', '22138', '22139', '22352', '85014A', '85014B'], ['22321', '22138', '84029E', '22111'], ['21955', '22111', '22296', '84899E', '22271', '22272'],['20754', '21035', '22041', '82001S', '82580', '85150'] ] }) </code></pre> <p>I need to join them on the following condition: all elements in antecedents (df1) are present in StockCode (df2).</p> <p>So if customerId 13085 has products 29679, 20675 and 20723, consequents 15056BL, 20676, 20677 and 20724 should show for them.</p> <p>I am able to reach the result with a for loop after I convert antecedents to a list of lists (ant_list):</p> <pre><code>for list2 in ant_list: for item, row in df_grouped.iterrows(): col_name = str(list2) if all(elem in row['StockCode'] for elem in list2): df_grouped.loc[item,col_name] = 1 </code></pre> <p>To get something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Invoice</th> <th>Customer ID</th> <th>StockCode</th> <th>['20971']</th> <th>['22064']</th> <th>['79323P']</th> <th>['84032A']</th> </tr> </thead> <tbody> <tr> <td>489434</td> <td>13085.0.</td> <td>[85048,...</td> <td>NaN.</td> <td>1.0.</td> <td>1.0.</td> <td>NaN.</td> </tr> </tbody> </table> </div> <p>However I need to find a vectorized way, or any other better way, so that I can escalate the solution to bigger datasets and I'm stuck.</p>
<p>try:</p> <pre><code>df = df1.explode('antecedents').merge(df2.explode('StockCode'), right_on='StockCode', left_on='antecedents', how='left') df antecedents consequents Customer ID StockCode 0 20679 20676 NaN NaN 1 85048 20719 13085.0 85048 2 22143 22355 18102.0 22143 3 22065 20724 18087.0 22065 4 22138 20724 18087.0 22138 5 22138 20724 18087.0 22138 6 20754 212212 14110.0 20754 7 21035 212212 14110.0 21035 8 22041 212212 13085.0 22041 9 22041 212212 14110.0 22041 a = df.groupby('Customer ID')['StockCode'].apply(list).reset_index() b= df.groupby('Customer ID')['consequents'].apply(list).reset_index() c= df.groupby('Customer ID')['antecedents'].apply(list).reset_index() a.merge(b).merge(c) Customer ID StockCode consequents antecedents 0 13085.0 [85048, 22041] [20719, 212212] [85048, 22041] 1 14110.0 [20754, 21035, 22041] [212212, 212212, 212212] [20754, 21035, 22041] 2 18087.0 [22065, 22138, 22138] [20724, 20724, 20724] [22065, 22138, 22138] 3 18102.0 [22143] [22355] [22143] </code></pre>
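<p>One caveat with the plain explode-and-merge: it keeps a rule for a customer as soon as <em>any</em> single antecedent matches. To enforce the "all elements present" condition, one option is to count how many distinct antecedents matched per customer and rule and compare that against the rule's size — a sketch along those lines (the <code>rule</code> label is just the row index of df1):</p> <pre><code># normalize antecedents to sets; a bare string like ('20679') is one item
ants = df1['antecedents'].apply(
    lambda a: frozenset(a) if isinstance(a, tuple) else frozenset([a]))

rules = (df1.assign(ant=ants, n=ants.map(len))
            .rename_axis('rule').reset_index()
            .explode('ant'))

m = rules.merge(df2.explode('StockCode'),
                left_on='ant', right_on='StockCode')

# a rule applies to a customer only if every antecedent matched
hits = (m.groupby(['Customer ID', 'rule'])['ant']
          .nunique().reset_index(name='matched'))
hits = hits.merge(rules[['rule', 'n', 'consequents']].drop_duplicates('rule'),
                  on='rule')
result = hits[hits['matched'] == hits['n']].drop(columns=['matched', 'n'])
</code></pre>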
python|pandas|dataframe|join
0
375,907
73,719,164
Python Pandas Dataframe drop columns if string contains special character
<p>I have a dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Product</th> <th style="text-align: center;"></th> <th style="text-align: right;">Storage</th> <th style="text-align: right;">Price</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Azure</td> <td style="text-align: center;">(2.4%</td> <td style="text-align: right;">Server</td> <td style="text-align: right;">£540</td> </tr> <tr> <td style="text-align: left;">AWS</td> <td style="text-align: center;"></td> <td style="text-align: right;">Server</td> <td style="text-align: right;">£640</td> </tr> <tr> <td style="text-align: left;">GCP</td> <td style="text-align: center;"></td> <td style="text-align: right;">Server</td> <td style="text-align: right;">£540</td> </tr> </tbody> </table> </div> <p>I would like to remove the column which contains the string '(2.4%' however I only want to remove the column in Pandas through regex if regex finds either a bracket or percentage in the string in that column '(%' and then pandas should drop that column entirely.</p> <p>Please can you help me find a way to use regex to search for special characters within a string and drop the column if that condition is met?</p> <p>I've searched on stack/google. I've used the following so far:</p> <pre><code>df = df.drop([col for col in df.columns if df[col].eq('(%').any()], axis=1) chars = '(%' regex = f'[{&quot;&quot;.join(map(re.escape, chars))}]' df = df.loc[:, ~df.apply(lambda c: c.str.contains(regex).any())] </code></pre> <p>however neither of these worked.</p> <p>Any help would be greatly appreciated. :)</p> <p>Thank You * Insert Smiley*</p>
<p>I would do something like this:</p> <pre><code>import pandas as pd
from io import StringIO

text = &quot;&quot;&quot;
Product,Perc,Storage,Price
Azure,(2.4%,Server,£540
AWS,,Server,£640
GCP,,Server,£540
&quot;&quot;&quot;

data = pd.read_csv(StringIO(text))
print(data)

drop_columns = list()
for col_name in data.columns:
    has_special_characters = data[col_name].str.contains(&quot;[\(%]&quot;)
    if has_special_characters.any():
        drop_columns.append(col_name)

print(f&quot;Dropping {drop_columns}&quot;)
data.drop(drop_columns, axis=1, inplace=True)
print(data)
</code></pre> <p>Output of the script is:</p> <pre><code>  Product   Perc Storage Price
0   Azure  (2.4%  Server  £540
1     AWS    NaN  Server  £640
2     GCP    NaN  Server  £540
Dropping ['Perc']
  Product Storage Price
0   Azure  Server  £540
1     AWS  Server  £640
2     GCP  Server  £540

Process finished with exit code 0
</code></pre>
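<p>For a one-liner closer to the question's second attempt: that version fails as soon as a column is not string-typed (<code>.str</code> raises on non-object columns); casting each column to str first, and guarding NaN, sidesteps both problems:</p> <pre><code>mask = df.apply(lambda c: c.astype(str).str.contains(r'[(%]', na=False).any())
df = df.loc[:, ~mask]
</code></pre>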
python|pandas|dataframe|conditional-statements
1
375,908
73,678,556
How to transform a csv file into a multi-dimensional list using Python?
<p>I started out with a 4d list, something like</p> <pre><code>tokens = [[[[&quot;a&quot;], [&quot;b&quot;], [&quot;c&quot;]], [[&quot;d&quot;]]], [[[&quot;e&quot;], [&quot;f&quot;], [&quot;g&quot;]],[[&quot;h&quot;], [&quot;i&quot;], [&quot;j&quot;], [&quot;k&quot;], [&quot;l&quot;]]]] </code></pre> <p>So I converted this to a csv file using the code</p> <pre><code>import csv def export_to_csv(tokens): csv_list = [[&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, word]] for h_index, h in enumerate(tokens): for i_index, i in enumerate(h): for j_index, j in enumerate(i): csv_list.append([h_index, i_index, j_index, j]) with open('TEST.csv', 'w') as f: # using csv.writer method from CSV package write = csv.writer(f) write.writerows(csv_list) </code></pre> <p>But now I want to do the reverse process, want to convert a csv file obtained in this format, back to the list format mentioned above.</p>
<p>Assuming you wanted your csv file to look something like this (there were a couple typos in the posted code):</p> <pre><code>A,B,C,word
0,0,0,a
0,0,1,b
0,0,2,c
...
</code></pre> <p>here's one solution:</p> <pre><code>import csv

def import_from_csv(filename):
    retval = []
    with open(filename) as fh:
        reader = csv.reader(fh)
        # discard header row
        next(reader)
        # process data rows
        for (x, y, z, word) in reader:
            x = int(x)
            y = int(y)
            z = int(z)
            # grow with comprehensions rather than [[[]]] * n, which would
            # alias the same inner list whenever more than one slot is added
            retval.extend([[[]] for _ in range(x + 1 - len(retval))])
            retval[x].extend([[] for _ in range(y + 1 - len(retval[x]))])
            retval[x][y].extend([0] * (z + 1 - len(retval[x][y])))
            retval[x][y][z] = [word]
    return retval
</code></pre>
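<p>One caveat for a clean round trip: the export in the question appends the innermost list object itself, so the csv cells come out as <code>"['a']"</code> rather than <code>a</code>. Writing the bare string keeps the two functions symmetric, e.g. in the export loop:</p> <pre><code>csv_list.append([h_index, i_index, j_index, j[0]])  # 'a' rather than &quot;['a']&quot;
</code></pre>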
python|pandas|list|csv|nested-lists
0
375,909
73,543,508
How to only display the "subcontrol"
<p>I have the following dataframe: (containing information like the one below)</p> <pre><code>import pandas as pd data = { &quot;items&quot;: [&quot;4.2 Paint&quot;, &quot;4.2.1 Paint job&quot;, &quot;4.2.1.10 Paint red&quot;, &quot;3.2 Seats&quot;, &quot;3.2.3.8 Seat belt&quot;] } df = pd.DataFrame(data) print(df) items 0 4.2 Paint 1 4.2.1 Paint job 2 4.2.1.10 Paint red 3 3.2 Seats 4 3.2.3.8 Seat belt </code></pre> <p>How can I display just the following?</p> <pre><code> items 0 4.2.1.10 Paint red 1 3.2.3.8 Seat belt </code></pre>
<p>It's very hard to work out what the criterion is here, but if it's looking for the 4th-level subgroups then filter for when there are 3 dots:</p> <p><code>df[df['items'].apply(lambda x: x.count(&quot;.&quot;)==3)]</code></p> <hr /> <p><strong>EDIT</strong></p> <p>If you want the max per subgroup then something like this would work:</p> <ul> <li>get the group number</li> <li>count the <code>.</code> per line</li> <li>within each group select the max</li> </ul> <pre><code>df['group'] = df['items'].apply(lambda x: pd.to_numeric(x.split('.', 1)[0]))
df['level'] = df['items'].apply(lambda x: x.count(&quot;.&quot;))
df.groupby('group').apply(lambda x: x.loc[x['level'] == x['level'].max()])
</code></pre>
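<p>A vectorized equivalent of the initial dot-count filter, avoiding the Python-level <code>apply</code>:</p> <pre><code>df[df['items'].str.count(r'\.') == 3]
</code></pre>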
python|pandas
1
375,910
73,678,262
Error while trying to save data into hdfs
<p>I'm trying to move data from local to HDFS using Jupyter after the data cleaning. I found some issues while doing it, and the data won't move into HDFS (HDFS &amp; Jupyter are deployed in Minikube k8s).</p> <p>This is the code in Jupyter:</p> <pre><code>writer = pd.ExcelWriter(&quot;data.xlsx&quot;)
data.to_excel(excel_writer=writer)
writer.save(&quot;hdfs://hdfs-namenode-0.hdfs-namenode.default.svc.cluster.local/data&quot;)
</code></pre> <p>The error is:</p> <pre><code>save() takes 1 positional argument but 2 were given
</code></pre>
<p>This is how I solved my problem:</p> <pre><code>from hdfs import InsecureClient
import pandas as pd

client = InsecureClient('http://hdfs-namenode.default.svc.cluster.local:50070', user='hdfs')

data = pd.read_csv('name_of_file.csv')
# upload takes the HDFS destination first, then the local source path
client.upload('path/name_of_file.csv', 'name_of_file.csv', n_threads=1, temp_dir=None)
</code></pre>
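<p>If you'd rather not stage a local file at all, the same client can stream the dataframe straight to HDFS — a sketch, assuming the <code>hdfs</code> package's <code>write</code> method:</p> <pre><code>with client.write('path/name_of_file.csv', encoding='utf-8', overwrite=True) as writer:
    data.to_csv(writer)
</code></pre>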
pandas|kubernetes|hadoop|hdfs|pandas.excelwriter
0
375,911
73,728,110
How to show different horizontal bar colors in grouped time series data in Pandas according to a column value (0/1)?
<p>I have the following sampled data frame from a million rows. It'll show value counts of anomalous rows, a dataframe with only anomalous rows, and the plot.</p> <p>Input data:</p> <pre class="lang-py prettyprint-override"><code>df_sample = pd.DataFrame({ 'AbsoluteTopImpressionPercentage': [0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.75, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 1.0, 1.0], 'AdFormat': ['TEXT', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'TEXT', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'TEXT', 'UNKNOWN', 'UNKNOWN'], 'AdNetworkType1': ['SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH'], 'AdNetworkType2': ['SEARCH', 'SEARCH', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH_PARTNERS', 'SEARCH_PARTNERS', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH', 'SEARCH'], 'AllConversionRate': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'Clicks': [0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 4, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 1, 0, 0, 1, 0], 'ConversionRate': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'Ctr': [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.2105, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.3333, 1.0, 0.25, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2326, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0], 'Date': ['2021-04-09 00:00:00+00:00', '2021-04-19 00:00:00+00:00', '2022-08-24 00:00:00+00:00', '2021-07-25 00:00:00+00:00', '2022-07-29 00:00:00+00:00', '2022-05-27 00:00:00+00:00', '2022-05-18 00:00:00+00:00', '2022-07-24 00:00:00+00:00', '2021-02-05 00:00:00+00:00', '2021-04-30 
00:00:00+00:00', '2021-07-11 00:00:00+00:00', '2021-05-24 00:00:00+00:00', '2022-05-22 00:00:00+00:00', '2021-07-01 00:00:00+00:00', '2021-07-11 00:00:00+00:00', '2022-03-24 00:00:00+00:00', '2021-05-12 00:00:00+00:00', '2022-06-14 00:00:00+00:00', '2021-04-27 00:00:00+00:00', '2021-01-29 00:00:00+00:00', '2022-09-08 00:00:00+00:00', '2021-06-07 00:00:00+00:00', '2022-05-28 00:00:00+00:00', '2022-03-26 00:00:00+00:00', '2021-04-09 00:00:00+00:00', '2022-05-22 00:00:00+00:00', '2021-05-28 00:00:00+00:00', '2022-05-22 00:00:00+00:00', '2021-07-06 00:00:00+00:00', '2021-07-14 00:00:00+00:00', '2021-06-24 00:00:00+00:00', '2021-03-24 00:00:00+00:00', '2021-06-03 00:00:00+00:00', '2022-05-30 00:00:00+00:00', '2021-03-15 00:00:00+00:00', '2022-08-05 00:00:00+00:00', '2021-07-06 00:00:00+00:00', '2022-03-30 00:00:00+00:00', '2022-09-07 00:00:00+00:00', '2021-05-27 00:00:00+00:00', '2021-06-04 00:00:00+00:00', '2022-04-16 00:00:00+00:00', '2022-05-22 00:00:00+00:00', '2021-07-08 00:00:00+00:00', '2022-05-26 00:00:00+00:00', '2021-02-09 00:00:00+00:00', '2022-04-27 00:00:00+00:00', '2021-05-06 00:00:00+00:00', '2021-06-29 00:00:00+00:00', '2022-06-01 00:00:00+00:00'], 'Anomaly': [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] }) </code></pre> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>df_sample = df_sample.sort_values(by='Date') df_sample['Date'] = df_sample['Date'].apply(pd.to_datetime) print(df_sample.Anomaly.value_counts()) print('') display(df_sample.loc[df_sample['Anomaly'] == 1]) colors=[] for val in df_sample['Anomaly']: if val == 1: colors.append('red') else: colors.append('blue') # or #colors = pd.cut(df['Sum'].tolist(), [-np.inf, 10, 20, np.inf], # labels=['green', 'orange', 'red']) #ax = df['Sum'].plot(kind='barh',color=colors) fig, axs = plt.subplots(figsize=(22, 12)) df_sample.groupby(df_sample['Date'].dt.date)[&quot;Clicks&quot;].sum().plot(kind='barh', ax=axs, color=colors) plt.xlabel(&quot;Clicks&quot;) plt.ylabel(&quot;Dates&quot;) every_nth = 7 for n, label in enumerate(axs.yaxis.get_ticklabels()): if n % every_nth != 0: label.set_visible(False) </code></pre> <p>Output:</p> <p>Now in the output, I want those df_sample.Anomaly rows with value == 1 to be shown in the graph horizontal bars as red instead of blue. Any idea?</p>
<p>Without changing your code too much, you need to create the list of colors on your grouped data, which you will plot later.</p> <p>In the line where I create <code>out</code>, I used <code>max</code> as the aggregation for the column <code>Anomaly</code>. In your example data each group has only 0's or only 1's in <code>Anomaly</code>. If these values differ, you can decide here what you want to do (e.g. take first or last, sum them up, min or max).</p> <p>Then build the list of colors based on the new data in <code>out['Anomaly']</code> and plot it based on <code>out['Clicks']</code>:</p> <pre class="lang-py prettyprint-override"><code>df_sample['Date'] = pd.to_datetime(df_sample['Date'])
df_sample = df_sample.sort_values(by='Date')

out = df_sample.groupby(df_sample['Date'].dt.date).agg({'Clicks' : 'sum', 'Anomaly': 'max'})
print(out)

# collect colors based on that data
colors = ['red' if val==1 else 'blue' for val in out['Anomaly']]

fig, axs = plt.subplots(figsize=(10,8))
out['Clicks'].plot(kind='barh', ax=axs, color=colors)
axs.set_xlabel(&quot;Clicks&quot;)
axs.set_ylabel(&quot;Dates&quot;)

every_nth = 7
for n, label in enumerate(axs.yaxis.get_ticklabels()):
    if n % every_nth != 0:
        label.set_visible(False)
</code></pre> <p><a href="https://i.stack.imgur.com/XEDVv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEDVv.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|matplotlib|time-series
2
375,912
73,642,093
Python Pandas to_datetime Without Zero Padded
<p>I am trying to convert a date &amp; time string using Pandas 'to_datetime', but the string values is non-zero padded:</p> <pre><code>3/31/22 23:30 3/31/22 23:45 4/1/22 0:00 4/1/22 0:15 </code></pre> <p>I have the following but get a mismatch error</p> <pre><code>pd.to_datetime(df.TimeStamp, format=&quot;%m/%d/%y %H:%m&quot;) </code></pre> <p>Is there a way to add the zero padding or have 'to_datetime' accept the above formatting?</p>
<p>The trouble isn't in the padding, it's actually in your formatting call. Note the capitalization of minutes (M) vs months (m); you used (m) for both (<a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">documentation here</a>).</p> <p>Demonstration of working code is below.</p> <blockquote> <p>pd.to_datetime(df.TimeStamp, format=&quot;%m/%d/%y %H:%m&quot;)</p> </blockquote> <p>should be</p> <blockquote> <p>pd.to_datetime(df.TimeStamp, format=&quot;%m/%d/%y %H:%M&quot;)</p> </blockquote> <pre><code>import pandas as pd

times = [
    &quot;3/31/22 23:30&quot;,
    &quot;3/31/22 23:45&quot;,
    &quot;4/1/22 0:00&quot;,
    &quot;4/1/22 0:15&quot;
]
df = pd.DataFrame(times, columns=['TimeStamp'])
pd.to_datetime(df.TimeStamp, format=&quot;%m/%d/%y %H:%M&quot;)

&gt;&gt; 0   2022-03-31 23:30:00
&gt;&gt; 1   2022-03-31 23:45:00
&gt;&gt; 2   2022-04-01 00:00:00
&gt;&gt; 3   2022-04-01 00:15:00
&gt;&gt; Name: TimeStamp, dtype: datetime64[ns]
</code></pre> <p>That said, if anyone lands here looking for the solution to the zero-padding, the <a href="https://stackoverflow.com/a/42709606/11574853">hash/dash</a> trick is worth further reading (though it does not work in many circumstances).</p>
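<p>For what it's worth, the non-padded values themselves are fine — <code>%m</code>, <code>%d</code> and <code>%H</code> all accept unpadded numbers — and for this month-first input pandas can also just infer the format:</p> <pre><code>pd.to_datetime(df.TimeStamp)  # infers m/d/yy h:mm here
</code></pre>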
python|pandas|string-to-datetime
1
375,913
73,672,691
How to find percentage change in values of a column using a variable for another column with pandas "category" data type?
<p>Here's the data frame:</p> <p><a href="https://i.stack.imgur.com/pF2tm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pF2tm.png" alt="enter image description here" /></a></p> <pre><code>df_temp = pd.DataFrame({'pos': {0: '1-10', 1: '11-20', 2: '21-30', 3: '31-40', 4: '41-50', 5: '51-60', 6: '61-70', 7: '71-80', 8: '81-90', 9: '91-100', 10: '101-110'}, 'imp': {0: 18647, 1: 566, 2: 157, 3: 61, 4: 343, 5: 464, 6: 225, 7: 238, 8: 852, 9: 108, 10: 0}}) df_temp </code></pre> <p>In my notebook, df.dtypes show:</p> <p><a href="https://i.stack.imgur.com/lcxMH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lcxMH.png" alt="enter image description here" /></a></p> <p>I don't know how to put such data type through a dict.</p> <p>Now if &quot;pos&quot; of x goes up from say 73 to 15. Required function:</p> <pre><code>def percent_change(73, 15): =&gt; 238*(x/100) = 566 #imp at 71-80 = 238 and 11-20 = 566 =&gt; x = (566*100)/238 =&gt; x = 100 - x # Mayby? To show only change? return x </code></pre> <p>Edit: My embarrassing solution, simply did string operation:</p> <pre><code>def p_change(x, y, df): if len(str(x)) == 1: x = 0 elif int(str(x)[1]) == 0: x = int(str(x)[0]) - 1 else: x = int(str(x)[0]) if len(str(y)) == 1: y = 0 elif int(str(y)[1]) == 0: y = int(str(y)[0]) - 1 else: y = int(str(y)[0]) x = int(df.loc[x, 'Impressions']) y = int(df.loc[y, 'Impressions']) p_change = print('Percentage change: ' + str(round(((y*100)/x - 100), 2)) + '%') return p_change </code></pre>
<p>Let's say we have some sequence of ages and a reference table between age bins and some value <em>imp</em>:</p> <pre><code>import pandas as pd
import numpy as np

rng = np.random.default_rng(42)
N = 10    # number of data
data = pd.Series(rng.integers(1, 111, N), name='Ages')
imp = pd.Series(
    data=[18647, 566, 157, 61, 343, 464, 225, 238, 852, 108, 0],
    index=pd.IntervalIndex.from_breaks(range(0,111,10)).rename('pos'),
    name='imp'
)
</code></pre> <p>Now we want to replace each age with the corresponding <em>imp</em> value. To do this we have to categorize ages like <code>imp.index</code> first, see <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html#pandas.cut" rel="nofollow noreferrer">pandas.cut</a>:</p> <pre><code>m = pd.cut(data, imp.index).map(imp).astype(imp.dtype)
</code></pre> <p>Now we are ready to calculate the relative difference along a data sequence:</p> <pre><code>output = m.shift(-1)/m - 1    # multiply by 100 if needed
</code></pre> <hr /> <h3>PS</h3> <p>I'm not sure if it's a good idea to store the data as a percentage, like <code>&quot;10.05%&quot;</code>, etc. In my opinion, it's much better to store it as a ratio that can be used in further arithmetic operations. When printing the result on the screen or paper, we can always turn to formatting tools for help:</p> <pre><code>output.to_frame().style.format('{:.2%}')
output.to_string(float_format=&quot;{:.2%}&quot;.format)
</code></pre>
python|pandas|dataframe|numpy
1
375,914
73,679,996
Given a list of random variable and an expected value, how to generate probability distribution in python
<p>I have a list of random values x_1,x_2, ... x_n and an expected value X.I wanna write a function that takes these 2 things and randomly generates one of the many discrete probability distributions defined on the set {1,2....n} that meets the above mentioned constraint.</p> <p>Rephrasing the question as generate a vector p of length n such that</p> <p>0 &lt;= p_i &lt; =1</p> <p>| p | = 1</p> <p>p.x = X</p> <p>Using information I found <a href="https://stackoverflow.com/questions/55666040/solve-non-square-matrix-with-python-how-to-use-numpy-linalg-lstsq">here</a> I hacked together the below solution, but it doesn't work as the probabilities are not b/w 0 and 1</p> <pre><code>def normal_random_distribution(data_array, data_len, X_array, expd_vl): e_mat = np.concatenate(([X_array], [np.ones(data_len)]), axis = 0) f_mat = np.array([expd_vl, 1]) v_vec = np.linalg.lstsq(e_mat, f_mat,rcond=None)[0] e_null = sla.null_space(e_mat) lamda_lower = np.linalg.pinv(e_null) @ (-1 * v_vec) lamda_upper = np.linalg.pinv(e_null) @ ( 1 - v_vec) lamda = lamda_lower + random.random()*(lamda_upper - lamda_lower) sol = v_vec + e_null @ lamda return sol </code></pre> <p>How can I generate p ? The sol has l1 norm 1, and |sol * x| = X. I was hoping interpolating b/w lamda_lower and lamda_upper would make it b/w 0 and 1 but it does not. I wanna write the function so that on every call it generates randomly one of the many possible solutions. I realise one of the obvious solutions would be to find an x_i &lt; X and and x_j &gt; X and interpolate b/w the 2, but I want the function to (in theory) be able to generate all the possible distributions.</p>
<p>After trying to create a recursive solution for hours I gave up and coded an iterative, somewhat brute-force approach myself.</p> <pre><code>import random

import numba
import numpy as np

@numba.jit(nogil = True)
def normal_random_distribution(data_array, X_array, expd_vl):
    # print(&quot;here&quot;)
    EPSILON = 0.001
    data_len = data_array.size
    data_array[:] = 1 / data_len
    current_E = np.sum(data_array * X_array)
    middle = np.argmax(X_array &gt; expd_vl) - 1
    while abs(current_E - expd_vl) &gt; EPSILON:
        # print(abs(current_E - expd_vl) / expd_vl)
        left_i = random.randint( 0, middle )
        rght_i = random.randint( middle + 1, data_len - 1 )
        if current_E &lt; expd_vl:
            up_lim = min(data_array[left_i] , ( expd_vl - current_E ) / ( X_array[rght_i] - X_array[left_i] ) )
            mu = up_lim / 2
            sigma = up_lim / 6
            x = np.clip(np.random.normal( mu, sigma, 1 ), 0, up_lim)[0]
            data_array[left_i] -= x
            data_array[rght_i] += x
            current_E += x * ( X_array[rght_i] - X_array[left_i] )
        else:
            up_lim = min(data_array[rght_i] , ( current_E - expd_vl ) / ( X_array[rght_i] - X_array[left_i] ) )
            mu = up_lim / 2
            sigma = up_lim / 6
            x = np.clip(np.random.normal( mu, sigma, 1 ), 0, up_lim)[0]
            data_array[left_i] += x
            data_array[rght_i] -= x
            current_E -= x * ( X_array[rght_i] - X_array[left_i] )
</code></pre>
python|numpy|random|scipy|probability
0
375,915
73,540,650
numpy reshape and the base attribute of an array
<p>I'm trying to understand when, after a reshape, numpy made a copy or a view. I was trying it analyzing the content of the <code>base</code> attribute. I expected it to be <code>None</code> when the array is a copy, the original array if it is a view. However, with the following code:</p> <pre class="lang-py prettyprint-override"><code>A = numpy.array([[1,2,20],[3,4,40],[5,6,60],[7,8,80],[9,10,100]]) print('A:\n',A) print('A base:\n', A.base) print('A initial shape:', A.shape) B = A.reshape(3,5) print('B:\n', B) print('B base:\n', B.base) C = A[1:3,0:2] print('C:\n', C) print('C base:\n', C.base) D = C.reshape(4,1) print('D:\n', D) print('D base:\n', D.base) </code></pre> <p>I have the following output:</p> <pre><code>A: [[ 1 2 20] [ 3 4 40] [ 5 6 60] [ 7 8 80] [ 9 10 100]] A base: None A initial shape: (5, 3) B: [[ 1 2 20 3 4] [ 40 5 6 60 7] [ 8 80 9 10 100]] B base: [[ 1 2 20] [ 3 4 40] [ 5 6 60] [ 7 8 80] [ 9 10 100]] C: [[3 4] [5 6]] C base: [[ 1 2 20] [ 3 4 40] [ 5 6 60] [ 7 8 80] [ 9 10 100]] D: [[3] [4] [5] [6]] D base: [[3 4] [5 6]] </code></pre> <p>I agree that<code>A</code> is raw array having <code>base</code> attribute to <code>None</code>, <code>B</code> and <code>C</code> are views of <code>A</code>, so the base attribute points to the original <code>A</code> array. However, I don't undestand the <code>base</code> attribute of <code>D</code>. I expected it is not a view but a new array, but the <code>base</code> attribute point to a matrix <code>[[3 4][5 6]]</code> (that is not <code>C</code>, since <code>C</code> is a view of <code>A</code>, as shown in its <code>base</code> attribute) instead of <code>None</code>. Why this? <code>C</code> is a view of a new array never defined? Why <code>C</code> is not simply the desired <code>[[3] [4] [5] [6]]</code> array with <code>None</code> in <code>base</code> ?</p>
<p>For <code>B</code>, the base is the <code>A</code> array; the id's match:</p> <pre><code>In [111]: id(A) Out[111]: 2579202242096 In [112]: id(B.base) Out[112]: 2579202242096 </code></pre> <p>For <code>D</code>, the base is a copy of <code>C</code>, same values but different id:</p> <pre><code>In [113]: id(C) Out[113]: 2579189968112 In [114]: id(D.base) Out[114]: 2579204460048 </code></pre> <p>I like to use <code>__array_interface__</code> to check the memory buffer of arrays:</p> <pre><code>In [115]: A.__array_interface__ Out[115]: {'data': (2581261858000, False), 'strides': None, 'descr': [('', '&lt;i4')], 'typestr': '&lt;i4', 'shape': (5, 3), 'version': 3} In [116]: B.__array_interface__['data'] # same as As Out[116]: (2581261858000, False) In [117]: C.__array_interface__['data'] # 12 bytes further in Out[117]: (2581261858012, False) In [118]: D.__array_interface__['data'] # totally different Out[118]: (2581264982160, False) </code></pre> <p>Looking at strides as well as shape may help</p> <p>The inner loop, across the columns of a row, steps by 4 bytes, the size of the int32 (you might be 8 bytes); going from row to row requires a step of 3*4=12:</p> <pre><code>In [123]: A.shape, A.strides Out[123]: ((5, 3), (12, 4)) In [124]: B.shape, B.strides # 20 is 5*4 Out[124]: ((3, 5), (20, 4)) In [125]: C.shape, C.strides # same as for A, but just 2 columns Out[125]: ((2, 2), (12, 4)) In [126]: D.shape, D.strides # from row to row is 4 bytes Out[126]: ((4, 1), (4, 4)) </code></pre> <p>Look at the values ravelled:</p> <pre><code>In [132]: A.ravel() Out[132]: array([ 1, 2, 20, 3, 4, 40, 5, 6, 60, 7, 8, 80, 9, 10, 100]) In [133]: C.ravel() Out[133]: array([3, 4, 5, 6]) </code></pre> <p>While it's possible to select <code>C</code> for <code>A</code> using 2d slicing, we can't do the same with a raveled array. <code>[3, 4, _, 5, 6]</code>. The selection is 2, skip 1, 2. That's not a regular pattern. A view is possible only when the selection can be expressed as regular 1d slice (start,stop,step).</p> <p><code>reshape</code> says it can't always return a view. Reshape after transpose is one well known case of this. This is the other, a reshape after a subsetting slice.</p>
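<p>A quick cross-check with <code>np.shares_memory</code> tells the same story — <code>B</code> and <code>C</code> are views into <code>A</code>'s buffer, while <code>D</code>'s base is a fresh copy:</p> <pre><code>In [134]: np.shares_memory(A, B), np.shares_memory(A, C), np.shares_memory(A, D)
Out[134]: (True, True, False)
</code></pre>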
python|numpy|reshape|numpy-ndarray
1
375,916
73,658,346
Remove duplicates and keep the row where a certain column is Yes in a pandas dataframe
<p>I have a dataframe with duplicated values on column &quot;ID&quot;, like this one:</p> <pre><code>ID Name Street Birth Job Primary? 1 Fake1 Street1 2000-01-01 Job1 Yes 2 Fake2 Street2 2000-01-02 Job2 No 3 Fake3 Street3 2000-01-03 Job3 Yes 1 Fake1 Street1 2000-01-01 Job4 No 2 Fake2 Street2 2000-01-02 Job5 Yes 4 Fake4 Street4 2000-01-03 Job6 Yes 1 Fake1 Street1 2000-01-01 Job7 No </code></pre> <p>I need a way to remove duplicates (by &quot;ID&quot;) but keep the ones that the column Primary is &quot;Yes&quot; (all unique values have &quot;Yes&quot; in that column and duplicated values have one record as &quot;Yes&quot; and all others as &quot;No&quot;) resulting in this dataframe:</p> <pre><code>ID Name Street Birth Job Primary? 1 Fake1 Street1 2000-01-01 Job1 Yes 3 Fake3 Street3 2000-01-03 Job3 Yes 2 Fake2 Street2 2000-01-02 Job5 Yes 4 Fake4 Street4 2000-01-03 Job6 Yes </code></pre> <p>What is the best way to do it?</p> <p>Thanks!</p>
<p>Using <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>groupby.idxmax</code></a> on a boolean Series derived from the &quot;Primary?&quot; column:</p> <pre><code>out = df.loc[df['Primary?'].eq('Yes').groupby(df['ID']).idxmax()] </code></pre> <p>output:</p> <pre><code> ID Name Street Birth Job Primary? 0 1 Fake1 Street1 2000-01-01 Job1 Yes 4 2 Fake2 Street2 2000-01-02 Job5 Yes 2 3 Fake3 Street3 2000-01-03 Job3 Yes 5 4 Fake4 Street4 2000-01-03 Job6 Yes </code></pre>
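<p>An alternative that reads well if 'Yes'/'No' are the only values (it simply relies on 'Yes' sorting after 'No'):</p> <pre><code>out = (df.sort_values('Primary?', ascending=False)
         .drop_duplicates('ID')
         .sort_values('ID', ignore_index=True))
</code></pre>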
python|pandas|filter|duplicates|find
2
375,917
73,738,162
Python - Method to get the array around an slice in a matrix
<p>I define a Matrix NxN, with random values(0,1). I need to get the sum of the digits around the consecutive 1's.</p> <p>For example:</p> <pre class="lang-none prettyprint-override"><code>100110001 101001000 100001001 000000000 000111001 000000100 .. .. </code></pre> <p>For 111 in the above, the sum of the surrounding digits is 1.</p> <p>Is there some way using <em>numpy</em> or <em>itertools</em> or anything to get the sum or the array of the digits around?? Please help, cheers</p> <p>For detecting random consecutives 1's i use:</p> <pre><code>from itertools import groupby def groups(l): return [sum(g) for i, g in groupby(l) if i == 1] con += list(filter(lambda x: x &gt; 1,groups(matrix[4]))) </code></pre> <p>And to get the index of 1s :</p> <pre class="lang-py prettyprint-override"><code>idx+=[idx for idx, i in enumerate(matrix[4]) if i == 1] </code></pre>
<p>This might not fully answer your question, but it might point you in the right direction :</p> <pre><code>In [1]: import numpy as np In [2]: from scipy import ndimage as nd In [3]: mat = np.random.randint(0,2,(6,6)) In [4]: mat Out[4]: array([[1, 1, 1, 0, 1, 1], [1, 1, 0, 0, 1, 0], [1, 0, 1, 1, 1, 1], [0, 0, 1, 1, 1, 0], [1, 0, 1, 0, 1, 0], [1, 0, 0, 1, 1, 1]]) In [5]: s = np.array([[0,0,0],[1,1,1],[0,0,0]]) In [6]: lab, nlab = nd.label(mat, s) In [7]: lab Out[7]: array([[ 1, 1, 1, 0, 2, 2], [ 3, 3, 0, 0, 4, 0], [ 5, 0, 6, 6, 6, 6], [ 0, 0, 7, 7, 7, 0], [ 8, 0, 9, 0, 10, 0], [11, 0, 0, 12, 12, 12]], dtype=int32) In [8]: s2 = nd.generate_binary_structure(2,2) In [9]: for l in range(1, nlab+1): ...: is_l = lab==l ...: if np.sum(is_l) ==1: ...: continue ...: is_l_padded = nd.binary_dilation(is_l, s2) ...: border = np.logical_xor( is_l_padded, is_l) ...: num_ones = mat[border].sum() ...: print(&quot;label %d has %d ones around it&quot; % (l, num_ones)) ...: label 1 has 2 ones around it label 2 has 1 ones around it label 3 has 5 ones around it label 6 has 5 ones around it label 7 has 6 ones around it label 12 has 2 ones around it </code></pre>
python|arrays|numpy|matrix
0
375,918
73,633,416
Sum value in specific combinations of rows
<p>I have the following dataframe:</p> <pre><code>import pandas as pd import numpy as np df1 = pd.DataFrame({'Name' : ['Jake', 'Nate', '', 'Alex', '', 'Max', 'Nate', 'Jake'], 'Color' : ['', 'red;blue', 'blue;pink', 'green;blue;red', '', '', 'blue', 'red;yellow'], 'Value_1' : [1211233.419, 4007489.726, 953474.6894, np.NaN, 1761987.704, 222600361, 404419.2243, 606066.067 ], 'Value_2' : [np.NaN, 1509907.457, 4792269.911, 43486.59312, np.NaN, np.NaN, 2066645.251, 60988660.37], 'Value_3' : [1175299.998, np.NaN, 1888559.459, np.NaN, 444689.0177, 405513.0572, 343704.0269, 2948494.383]}) --- Name Color Value_1 Value_2 Value_3 0 Jake 1.211233e+06 NaN 1.175300e+06 1 Nate red;blue 4.007490e+06 1.509907e+06 NaN 2 blue;pink 9.534747e+05 4.792270e+06 1.888559e+06 3 Alex green;blue;red NaN 4.348659e+04 NaN 4 1.761988e+06 NaN 4.446890e+05 5 Max 2.226004e+08 NaN 4.055131e+05 6 Nate blue 4.044192e+05 2.066645e+06 3.437040e+05 7 Jake red;yellow 6.060661e+05 6.098866e+07 2.948494e+06 </code></pre> <p>I need two things:</p> <p>1)In the first case I need to add all the values (Value_1, Value_2, Value_3) where I have the same name and get for example:</p> <pre><code> Name Value_1 Value_2 Value_3 0 Jake 1.817299e+06 6.098866e+07 4.123794e+06 1 Nate 4.411909e+06 3.576553e+06 3.437040e+05 2 Alex NaN 4.348659e+04 NaN 3 Max 2.226004e+08 NaN 4.055131e+05 </code></pre> <p>2)I need the same thing but with the values of the name column plus the splits of the color column (only if there is at least one name and one color in the same row):</p> <pre><code> Name Color Value_1 Value_2 Value_3 0 Alex green NaN 4.348659e+04 NaN 1 Alex blue NaN 4.348659e+04 NaN 3 Alex red NaN 4.348659e+04 NaN 4 Jake red 6.060661e+05 6.098866e+07 2.948494e+06 5 Jake yellow 6.060661e+05 6.098866e+07 2.948494e+06 6 Nate red 4.007490e+06 1.509907e+06 NaN 7 Nate blue 4.411909e+06 3.576553e+06 3.437040e+05 </code></pre> <p>(Note that in this case the only line present twice is Nate-Blue)</p>
<p>You can use:</p> <pre><code>(df1.assign(Color=df1['Color'].str.split(';')) .explode('Color') .groupby(['Name', 'Color'], as_index=False) .sum() .replace('', pd.NA).dropna() ) </code></pre> <p>output:</p> <pre><code> Name Color Value_1 Value_2 Value_3 3 Alex blue 0.000000e+00 4.348659e+04 0.000000e+00 4 Alex green 0.000000e+00 4.348659e+04 0.000000e+00 5 Alex red 0.000000e+00 4.348659e+04 0.000000e+00 7 Jake red 6.060661e+05 6.098866e+07 2.948494e+06 8 Jake yellow 6.060661e+05 6.098866e+07 2.948494e+06 10 Nate blue 4.411909e+06 3.576553e+06 3.437040e+05 11 Nate red 4.007490e+06 1.509907e+06 0.000000e+00 </code></pre>
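<p>For the first requirement (summing per name, ignoring blank names), no explode is needed; <code>min_count=1</code> keeps NaN where a group has no values at all:</p> <pre><code>(df1[df1['Name'].ne('')]
    .groupby('Name', as_index=False)[['Value_1', 'Value_2', 'Value_3']]
    .sum(min_count=1))
</code></pre>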
python|pandas|dataframe
1
375,919
73,606,579
End of the execution too long using Pool.starmap
<p>I'm executing a parallelized function using the Pool.starmap function. The execution of the function itself only takes 6.5 minutes according to the tqdm library, but the program stays in execution for 20 more minutes until it finishes. The function processes and applies filters to some strings in some columns of a pandas dataframe. Could a different parallelized function perform better? Is there something wrong with the starmap function?</p> <p>Function to be executed:</p> <pre class="lang-py prettyprint-override"><code>def get_best_string_filters(hst, apolnmar, apolnmod, apolnsub, apolnterm, amodnanu,
                            ps, cc, cilindros, combustible, gearbox, year,
                            search_model, search_version, search_container):
    select = table_ecode[(table_ecode.HST == hst)]
    year = int(year[-4:])
    select = initial_selection(select, ps, cc, cilindros, combustible, gearbox, year)
    temp = get_starting_selection(select.copy(), search_model, &quot;HTB&quot;)
    if temp.empty:
        search_model, search_version, search_container = find_best_combination(select, search_model, search_version, search_container)
    else:
        select = temp.copy()
        _, search_version, search_container = find_best_combination(select, &quot;&quot;, search_version, search_container)
    #print(search_model, search_version, search_container)
    return [apolnmar, apolnmod, apolnsub, apolnterm, amodnanu, search_model, search_version, search_container]
</code></pre> <p>starmap call:</p> <pre class="lang-py prettyprint-override"><code>if not exists(&quot;dict_search_ammo_make_version_fixed.npy&quot;):
    params = [(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o)
              for a, b, c, d, e, f, g, h, i, j, k, l, m, n, o in values_to_change.values]
    with Pool(mp.cpu_count()) as ex:
        array_split_ammo_make_version = ex.starmap(get_best_string_filters, tqdm(params, total=len(params)))
    dict_split_ammo_make_version = array_to_dict(array_split_ammo_make_version)
    # save the dict to disk for faster future executions
    np.save(&quot;dict_search_ammo_make_version_fixed.npy&quot;, dict_split_ammo_make_version)
else:
    dict_split_ammo_make_version = np.load('dict_search_ammo_make_version_fixed.npy',allow_pickle='TRUE').item()
</code></pre> <p>tqdm outputs 6.5 minutes and a completed status, but the script continues to run for 20 long minutes: <a href="https://i.stack.imgur.com/DEuRx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DEuRx.png" alt="Execution image" /></a></p>
<p>In the demos below, generator function <code>params</code> simulates generating arguments to worker function <code>foo</code> <em>slowly</em> and <code>foo</code>, which just returns the passed argument, which is either a list when using <code>imap</code> or individual arguments that are the elements of a list.</p> <p><strong>Using <code>imap</code></strong></p> <pre class="lang-py prettyprint-override"><code>import time def foo(the_list): time.sleep(10) return the_list if __name__ == '__main__': from tqdm import tqdm from multiprocessing import Pool def params(): for i in range(1, 9): time.sleep(1) yield list(range(i)) with Pool() as ex: it = ex.imap(foo, params()) results = list(tqdm(it, total=8)) print(results) </code></pre> <p><strong>Using <code>apply_async</code></strong></p> <pre class="lang-py prettyprint-override"><code>import time def foo(*args): time.sleep(10) return args if __name__ == '__main__': from tqdm import tqdm from multiprocessing import Pool def params(): for i in range(1, 9): time.sleep(1) yield list(range(i)) def my_callback(result): bar.update(1) with Pool() as ex, tqdm(total=8) as bar: results = [] async_results = [ex.apply_async(foo, param, callback=my_callback) for param in params()] results = [async_result.get() for async_result in async_results] print(results) </code></pre> <p><strong><code>imap</code> with fixed sized tuples</strong></p> <pre class="lang-py prettyprint-override"><code>import time def foo(tpl): time.sleep(10) # unpack: a, b, c, d, e, f, g, h = tpl return (a + b) * (c + d) * (e + f) * (g + h) if __name__ == '__main__': from tqdm import tqdm from multiprocessing import Pool def params(): for i in range(1, 9): time.sleep(1) yield list(range(8)) with Pool() as ex: it = ex.imap(foo, params()) results = list(tqdm(it, total=8)) print(results) </code></pre>
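<p>One note on why the bar in the original code finishes early: <code>starmap</code> consumes the entire <code>tqdm</code>-wrapped argument list up front and only then blocks until all results are back, so the bar measures submission, not completion. Wrapping the <em>result</em> iterator, as in the <code>imap</code> demos above, measures completion; for tuple arguments a small top-level unpacking wrapper does the trick — a sketch using the names from the question:</p> <pre class="lang-py prettyprint-override"><code>def worker(args):
    # unpack the tuple so imap can be used instead of starmap
    return get_best_string_filters(*args)

with Pool(mp.cpu_count()) as ex:
    results = list(tqdm(ex.imap(worker, params), total=len(params)))
</code></pre>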
python|pandas|multiprocessing|pool|starmap
1
375,920
73,737,782
AttributeError: 'bool' object has no attribute 'any'. only thrown for arrays exceeding certain size
<p>I am running some Pandas/numpy data manipulation code as shown below with a random sample dataframe:</p> <pre><code>import pandas as pd;
import numpy as np;

nrows = 200
df = pd.DataFrame(np.random.randint(0,25,size=(nrows, 8)), columns=list('ABCDEFGH'))
array_val = df.values
array_obj = ((array_val == array_val[:,None]).any(axis=-1))
print(array_obj.dtype)
print(array_obj.shape)
</code></pre> <p>The code is supposed to return an array with the shape (nrow, nrow). So for example a dataframe with nrows of 500 would return a result with shape (500, 500).</p> <p>The code runs successfully for lower values of nrows such as 5,000 or 20,000. (You may need more than 16 GB RAM to run the logic for nrows above 10,000 though.)</p> <p>However, I've noticed an issue when I've increased nrows above 75,000 / 80k. This line</p> <pre><code>array_obj = ((array_val == array_val[:,None]).any(axis=-1))
</code></pre> <p>throws an error:</p> <pre><code>AttributeError: 'bool' object has no attribute 'any'.
</code></pre> <p>I've already checked whether it may be exceeding the max array size, but it looks like 75k rows should be under the limit: <a href="https://stackoverflow.com/questions/855191/how-big-can-a-python-list-get">How Big can a Python List Get?</a></p> <p>If this isn't a memory-data structure problem, what's the root cause and the appropriate fix?</p> <p>Edit: I've searched around and some similar posts make mention of the issue being dependent on your machine/OS and Pandas/Numpy package versions. Would be curious to see if anyone manages to get the sample code running for nrows = 80000 on their env.</p>
<p>I ended up getting the code to run without an error on 75k/80k by using another environment with a different Pandas/numpy version. Though I'm still not sure why the issue is tied to package version</p>
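<p>A plausible explanation for the version dependence: <code>array_val == array_val[:, None]</code> broadcasts to an <code>(nrows, nrows, 8)</code> boolean array, which at 80k rows is around 51 GB. Older numpy releases caught errors raised inside <code>==</code> and fell back to returning a plain Python <code>bool</code> (with a DeprecationWarning about the elementwise comparison failing) — and a bare <code>bool</code> has no <code>.any()</code> — while newer releases let the <code>MemoryError</code> propagate instead. Back-of-envelope check:</p> <pre><code>n = 80_000
print(n * n * 8 / 1e9, 'GB of booleans')  # ~51.2 GB for the broadcast result
</code></pre>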
python|arrays|pandas|numpy
0
375,921
73,795,151
Python - how to add a counter inside an IF condition to track the number of times something has occurred
<p>I am new to Python and learning it in bits and pieces from the internet. I have been trying to build a volume monitor for Binance volumes.</p> <pre class="lang-py prettyprint-override"><code>for x in range(len(name)):
    # Code to get the data into pandas dataframes for each token in name[]
    hrlyvol = res1[&quot;volume&quot;].iloc[0]
    min_2vol = res1[&quot;volume&quot;].iloc[-1]
    actvol = hrlyvol - min_2vol
    voldiff = hrlyvol/actvol
    volch = voldiff-1
    op_price = res1[&quot;open&quot;].iloc[0]
    cl_price = res1[&quot;close&quot;].iloc[-1]
    price_change = (cl_price-op_price)/op_price
    if volch &gt; 0.05 or price_change &gt; 0.02:
        print(&quot;Volume is up by {:.0%} and Price has changed by {:.0%} in {}&quot;.format(
            volch, price_change, name[x]))
        time.sleep(3)
</code></pre> <p>I tried adding <code>i = i+1</code>, but it then adds for all the tokens that satisfy this rule. What I want is for each token to have a separate counter, since not all tokens would satisfy the condition. Let's say <code>xyz</code> and <code>abc</code> met the condition; then the counters should be <code>xyz = 1</code> <code>abc = 1</code>. Then, if in the next iteration only <code>xyz</code> met the condition, <code>xyz = 2</code> <code>abc = 1</code>, and so on.</p>
<p>I am using the example of odd and even numbers to showcase conditional counting:</p> <pre><code>i = 0 j = 0 for n in range(100): i = i + 1 if n % 2 == 0 else i j = j + 1 if n % 2 == 1 else j </code></pre>
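<p>To extend this to your case, with one counter per token, a dictionary keyed by the token name works. A minimal sketch, assuming <code>name</code> is your list of tokens and <code>volch</code> and <code>price_change</code> are computed inside the loop as in your code:</p> <pre><code>counters = {token: 0 for token in name}    # one counter per token

for x in range(len(name)):
    # ... compute volch and price_change for name[x] as before ...
    if volch &gt; 0.05 or price_change &gt; 0.02:
        counters[name[x]] += 1             # only this token's counter moves

print(counters)   # e.g. {'xyz': 2, 'abc': 1, ...}
</code></pre>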
python|pandas|dataframe|loops|binance
0
375,922
73,671,952
How to speed up for loops in dataframe
<p>I want to convert a data frame to the format I want by scanning each latitude and longitude in the for loop, but this process takes too long. Is there a way to make the following script faster, such as using multi threads or processing? Can you show me how?</p> <pre><code>p=0 for i in tqdm(df_wind_monthly[&quot;lat&quot;]): for j in df_wind_monthly[&quot;lon&quot;]: print(&quot;lat: &quot; + str(i) + &quot; lon: &quot; + str(j)) for k in range(1948,2017): rslt_df_wind = df_wind_monthly.loc[(df_wind_monthly['lat'] == i) \ &amp; (df_wind_monthly['lon'] == j) \ &amp; (df_wind_monthly['year'] == k)]; rslt_df_wind = rslt_df_wind.reset_index() month_columns.loc[p,&quot;lat&quot;]=rslt_df_wind.loc[0,&quot;lat&quot;] month_columns.loc[p,&quot;lon&quot;]=rslt_df_wind.loc[0,&quot;lon&quot;] month_columns.loc[p,&quot;years&quot;]=rslt_df_wind.loc[0,&quot;year&quot;] month_columns.loc[p,&quot;wind_January&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;January&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_February&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;February&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_March&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;March&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_April&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;April&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_May&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;May&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_June&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;June&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_July&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;July&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_August&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;August&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_September&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;September&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_October&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;October&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_November&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;November&quot;].index.tolist()[0],&quot;wind&quot;] month_columns.loc[p,&quot;wind_December&quot;]=rslt_df_wind.loc[rslt_df_wind['month'].loc[lambda x: x==&quot;December&quot;].index.tolist()[0],&quot;wind&quot;] p+=1 </code></pre> <p>These are the inputs and expected output samples:</p> <pre><code> #df_wind_monthly (INPUT): time lat lon wind 0 1948-01-16 15.125 15.125 6.509021 1 1948-01-16 15.125 15.375 6.485108 2 1948-01-16 15.125 15.625 6.472615 3 1948-01-16 15.125 15.875 6.472596 4 1948-01-16 15.125 16.125 6.486597 #Expected dataframe columns (OUTPUT): month_columns=pd.DataFrame(columns=['lat',&quot;lon&quot;,&quot;years&quot;,&quot;wind_January&quot;,&quot;wind_February&quot;,&quot;wind_March&quot;,&quot;wind_April&quot;,&quot;wind_May&quot;,&quot;wind_June&quot;,&quot;wind_July&quot;,&quot;wind_August&quot;,&quot;wind_September&quot;,&quot;wind_October&quot;,&quot;wind_November&quot;,&quot;wind_December&quot;]) </code></pre>
<p>Looping is definitely NOT the way to go. If you type <code>for ... in</code> while using a pandas DataFrame, you're almost always doing it wrong.</p> <p>What you want is to switch your data from long format (1 row = 1 observation) to wide format (1 row = 12 observations). It is a fairly common use case, so pandas provides a method to do just that: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>DataFrame.pivot</code></a>.</p> <p>Starting from your input dataframe, you need to:</p> <ul> <li>Add a year and month column from the time column</li> <li>Select only the years you want (1948-2016)</li> <li>Optionally, if there is more than one measurement per month, compute the average, using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.mean.html" rel="nofollow noreferrer"><code>mean</code></a></li> <li>Switch from long to wide format with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>.</li> <li>Rename the columns</li> </ul> <p><strong>Input</strong></p> <pre><code>&gt;&gt;&gt; df_wind_monthly time lat lon wind 0 1948-01-31 15.125 15.125 5.963 1 1948-01-31 15.125 15.375 6.404 2 1948-01-31 15.125 15.625 6.207 3 1948-01-31 15.125 15.875 6.972 4 1948-01-31 15.125 16.125 6.299 ... ... ... ... ... 86395 2019-12-31 17.375 16.375 6.514 86396 2019-12-31 17.375 16.625 6.593 86397 2019-12-31 17.375 16.875 6.438 86398 2019-12-31 17.375 17.125 6.394 86399 2019-12-31 17.375 17.375 6.232 </code></pre> <p><strong>Processing</strong></p> <pre><code>out_df = df_wind_monthly # Create a DateTime index from the date column to easily extract year / month date_index = pd.DatetimeIndex(out_df[&quot;time&quot;]) # Create year and month columns to select the years and perform the groupby out_df = out_df.assign(year=date_index.year, month=date_index.month) # Select the years you want out_df = out_df[(out_df[&quot;year&quot;] &gt;= 1948) &amp; (out_df[&quot;year&quot;] &lt; 2017)] # If there are multiple measurements for a given year / month, compute the average # Skip this if you know there is only one measurement per month out_df = out_df.groupby([&quot;lat&quot;, &quot;lon&quot;, &quot;year&quot;, &quot;month&quot;]).mean().reset_index() # Switch from long to wide format out_df = out_df.pivot(index=[&quot;lat&quot;, &quot;lon&quot;, &quot;year&quot;], columns=&quot;month&quot;, values=&quot;wind&quot;) # Rename the columns out_df = out_df.rename( columns={ 1: &quot;wind_January&quot;, 2: &quot;wind_February&quot;, 3: &quot;wind_March&quot;, 4: &quot;wind_April&quot;, 5: &quot;wind_May&quot;, 6: &quot;wind_June&quot;, 7: &quot;wind_July&quot;, 8: &quot;wind_August&quot;, 9: &quot;wind_September&quot;, 10: &quot;wind_October&quot;, 11: &quot;wind_November&quot;, 12: &quot;wind_December&quot;, } ) # Reset Index if you prefer to have the data in columns out_df = out_df.reset_index() </code></pre> <p><strong>Output</strong>:</p> <pre><code>&gt;&gt;&gt; out_df month lat lon year wind_January ... wind_September wind_October wind_November wind_December 0 15.125 15.125 1948 5.963 ... 6.885 6.814 6.131 6.063 1 15.125 15.125 1949 6.304 ... 6.178 6.536 6.426 6.090 2 15.125 15.125 1950 6.207 ... 6.890 6.719 6.875 5.925 3 15.125 15.125 1951 6.100 ... 6.153 6.905 6.034 6.470 4 15.125 15.125 1952 5.951 ... 
6.638 6.294 6.434 5.936 ... ... ... ... ... ... ... ... ... ... 6895 17.375 17.375 2012 6.674 ... 6.841 6.383 6.685 6.616 6896 17.375 17.375 2013 6.674 ... 6.469 6.940 6.842 6.154 6897 17.375 17.375 2014 6.794 ... 6.251 6.267 6.258 5.942 6898 17.375 17.375 2015 6.760 ... 6.933 6.848 6.765 6.446 6899 17.375 17.375 2016 6.253 ... 6.986 6.490 6.421 6.338 </code></pre>
python|pandas|dataframe|performance|latitude-longitude
1
375,923
73,791,940
How to re-write tensorflow code to make model training faster?
<p>QUESTION: My training is super slow. How do I rewrite my code to make my deep learning model training faster?</p> <p>BACKGROUND: I have built a CNN with TensorFlow 2.8.1 to classify CIFAR-100 images using a custom loss function. The CIFAR dataset includes 32x32-pixel RGB images of 100 fine classes (e.g., bear, car) categorized into 20 coarse classes (e.g., large omnivore, vehicle). My custom loss function is a weighted sum of two other loss functions (see code below). The first component is the crossentropy loss for the fine label. The second component is the crossentropy loss for the coarse label. My hope is that this custom loss function will enforce accurate classification of the coarse label to get a more accurate classifications of the fine label (fingers crossed). The comparator will be crossentropy loss of just the fine label (the baseline model). Note that to derive the coarse (hierarchical) loss component, I had to map the <code>y_true</code> (true fine label, integer) and <code>y_pred</code> (predicted softmax probabilities for the fine labels, vector) to the <code>y_true_coarse_int</code> (true coarse label, integer) and <code>y_pred_coarse_hot</code> (predicted coarse label, one hot encoded vector), respectively. <code>FineInts_to_CoarseInts</code> is a python dictionary that allows this mapping.</p> <p>The training takes &gt;5-hours to run with the custom loss function, whereas training with regular crossentropy loss for the fine classes takes ~1hr. Code was run on a high performance computing cluster with a 32GB CPU and 1 GPU.</p> <p>See below:</p> <pre><code># THIS CODE CELL IS TO DEFINE A CUSTOM LOSS FUNCTION def crossentropy_loss(y_true, y_pred): return SparseCategoricalCrossentropy()(y_true, y_pred) def hierarchical_loss(y_true, y_pred): y_true = tensorflow.cast(y_true, dtype=float) y_true_reshaped = tensorflow.reshape(y_true, -1) y_true_coarse_int = [FineInts_to_CoarseInts[K.eval(y_true_reshaped[i])] for i in range(y_true_reshaped.shape[0])] y_true_coarse_int = tensorflow.cast(y_true_coarse_int, dtype=tensorflow.float32) y_pred = tensorflow.cast(y_pred, dtype=float) y_pred_int = tensorflow.argmax(y_pred, axis=1) y_pred_coarse_int = [FineInts_to_CoarseInts[K.eval(y_pred_int[i])] for i in range(y_pred_int.shape[0])] y_pred_coarse_int = tensorflow.cast(y_pred_coarse_int, dtype=tensorflow.float32) y_pred_coarse_hot = to_categorical(y_pred_coarse_int, 20) return SparseCategoricalCrossentropy()(y_true_coarse_int, y_pred_coarse_hot) def custom_loss(y_true, y_pred): H = 0.5 total_loss = (1 - H) * crossentropy_loss(y_true, y_pred) + H * hierarchical_loss(y_true, y_pred) return total_loss </code></pre> <p>During model compilation I had to set the run_eagerly parameter to True. 
See below:</p> <pre><code># THIS CODE CELL IS TO COMPILE THE MODEL model.compile(optimizer=&quot;adam&quot;, loss=custom_loss, metrics=&quot;accuracy&quot;, run_eagerly=True) </code></pre> <p>The full code is below:</p> <pre><code># THIS CODE CELL LOADS THE PACKAGES USED IN THIS NOTEBOOK # Load core packages for data analysis and visualization import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sn import sys !{sys.executable} -m pip install pydot !{sys.executable} -m pip install graphviz # Load deep learning packages import tensorflow from tensorflow.keras.datasets.cifar100 import load_data from tensorflow.keras import (Model, layers) from tensorflow.keras.losses import SparseCategoricalCrossentropy import tensorflow.keras.backend as K from tensorflow.keras.utils import (to_categorical, plot_model) from tensorflow.lookup import (StaticHashTable, KeyValueTensorInitializer) # Load model evaluation packages import sklearn from sklearn.metrics import (confusion_matrix, classification_report) # Print versions of main ML packages print(&quot;Tensorflow version &quot; + tensorflow.__version__) print(&quot;Scikit learn version &quot; + sklearn.__version__) # THIS CODE CELL LOADS DATASETS AND CHECKS DATA DIMENSIONS # There is an option to load the &quot;fine&quot; (100 fine classes) or &quot;coarse&quot; (20 super classes) labels with integer (int) encodings # We will load both labels for hierarchical classification tasks (x_train, y_train_fine_int), (x_test, y_test_fine_int) = load_data(label_mode=&quot;fine&quot;) (_, y_train_coarse_int), (_, y_test_coarse_int) = load_data(label_mode=&quot;coarse&quot;) # EXTRACT DATASET PARAMETERS FOR USE LATER ON num_fine_classes = 100 num_coarse_classes = 20 input_shape = x_train.shape[1:] # THIS CODE CELL PROVIDES THE CODE TO LINK INTEGER LABELS TO MEANINGFUL WORD LABELS # Fine and coarse labels are provided as integers. We will want to link them both to meaningful world labels. 
# CREATE A DICTIONARY TO MAP THE 20 COARSE LABELS TO THE 100 FINE LABELS # This mapping comes from https://keras.io/api/datasets/cifar100/ # Except &quot;computer keyboard&quot; should just be &quot;keyboard&quot; for the encoding to work CoarseLabels_to_FineLabels = { &quot;aquatic mammals&quot;: [&quot;beaver&quot;, &quot;dolphin&quot;, &quot;otter&quot;, &quot;seal&quot;, &quot;whale&quot;], &quot;fish&quot;: [&quot;aquarium fish&quot;, &quot;flatfish&quot;, &quot;ray&quot;, &quot;shark&quot;, &quot;trout&quot;], &quot;flowers&quot;: [&quot;orchids&quot;, &quot;poppies&quot;, &quot;roses&quot;, &quot;sunflowers&quot;, &quot;tulips&quot;], &quot;food containers&quot;: [&quot;bottles&quot;, &quot;bowls&quot;, &quot;cans&quot;, &quot;cups&quot;, &quot;plates&quot;], &quot;fruit and vegetables&quot;: [&quot;apples&quot;, &quot;mushrooms&quot;, &quot;oranges&quot;, &quot;pears&quot;, &quot;sweet peppers&quot;], &quot;household electrical devices&quot;: [&quot;clock&quot;, &quot;keyboard&quot;, &quot;lamp&quot;, &quot;telephone&quot;, &quot;television&quot;], &quot;household furniture&quot;: [&quot;bed&quot;, &quot;chair&quot;, &quot;couch&quot;, &quot;table&quot;, &quot;wardrobe&quot;], &quot;insects&quot;: [&quot;bee&quot;, &quot;beetle&quot;, &quot;butterfly&quot;, &quot;caterpillar&quot;, &quot;cockroach&quot;], &quot;large carnivores&quot;: [&quot;bear&quot;, &quot;leopard&quot;, &quot;lion&quot;, &quot;tiger&quot;, &quot;wolf&quot;], &quot;large man-made outdoor things&quot;: [&quot;bridge&quot;, &quot;castle&quot;, &quot;house&quot;, &quot;road&quot;, &quot;skyscraper&quot;], &quot;large natural outdoor scenes&quot;: [&quot;cloud&quot;, &quot;forest&quot;, &quot;mountain&quot;, &quot;plain&quot;, &quot;sea&quot;], &quot;large omnivores and herbivores&quot;: [&quot;camel&quot;, &quot;cattle&quot;, &quot;chimpanzee&quot;, &quot;elephant&quot;, &quot;kangaroo&quot;], &quot;medium-sized mammals&quot;: [&quot;fox&quot;, &quot;porcupine&quot;, &quot;possum&quot;, &quot;raccoon&quot;, &quot;skunk&quot;], &quot;non-insect invertebrates&quot;: [&quot;crab&quot;, &quot;lobster&quot;, &quot;snail&quot;, &quot;spider&quot;, &quot;worm&quot;], &quot;people&quot;: [&quot;baby&quot;, &quot;boy&quot;, &quot;girl&quot;, &quot;man&quot;, &quot;woman&quot;], &quot;reptiles&quot;: [&quot;crocodile&quot;, &quot;dinosaur&quot;, &quot;lizard&quot;, &quot;snake&quot;, &quot;turtle&quot;], &quot;small mammals&quot;: [&quot;hamster&quot;, &quot;mouse&quot;, &quot;rabbit&quot;, &quot;shrew&quot;, &quot;squirrel&quot;], &quot;trees&quot;: [&quot;maple&quot;, &quot;oak&quot;, &quot;palm&quot;, &quot;pine&quot;, &quot;willow&quot;], &quot;vehicles 1&quot;: [&quot;bicycle&quot;, &quot;bus&quot;, &quot;motorcycle&quot;, &quot;pickup&quot; &quot;truck&quot;, &quot;train&quot;], &quot;vehicles 2&quot;: [&quot;lawn-mower&quot;, &quot;rocket&quot;, &quot;streetcar&quot;, &quot;tank&quot;, &quot;tractor&quot;] } # CREATE A DICTIONARY TO MAP THE INTEGER-ENCODED COARSE LABEL TO THE WORD LABEL # Create list of Course Labels CoarseLabels = list(CoarseLabels_to_FineLabels.keys()) # The target variable in CIFER100 is encoded such that the coarse class is assigned an integer based on its alphabetical order # The CoarseLabels list is already alphabetized, so no need to sort CoarseInts_to_CoarseLabels = dict(enumerate(CoarseLabels)) # CREATE A DICTIONARY TO MAP THE WORD LABEL TO THE INTEGER-ENCODED COARSE LABEL CoarseLabels_to_CoarseInts = dict(zip(CoarseLabels, range(20))) # CREATE A DICTIONARY TO MAP THE 100 FINE LABELS TO THE 
20 COARSE LABELS FineLabels_to_CoarseLabels = {} for CoarseLabel in CoarseLabels: for FineLabel in CoarseLabels_to_FineLabels[CoarseLabel]: FineLabels_to_CoarseLabels[FineLabel] = CoarseLabel # CREATE A DICTIONARY TO MAP THE INTEGER-ENCODED FINE LABEL TO THE WORD LABEL # Create a list of the Fine Labels FineLabels = list(FineLabels_to_CoarseLabels.keys()) # The target variable in CIFER100 is encoded such that the fine class is assigned an integer based on its alphabetical order # Sort the fine class list. FineLabels.sort() FineInts_to_FineLabels = dict(enumerate(FineLabels)) # CREATE A DICTIONARY TO MAP THE INTEGER-ENCODED FINE LABELS TO THE INTEGER-ENCODED COARSE LABELS b = list(dict(sorted(FineLabels_to_CoarseLabels.items())).values()) FineInts_to_CoarseInts = dict(zip(range(100), [CoarseLabels_to_CoarseInts[i] for i in b])) #Tensor version of dictionary #fine_to_coarse = tensorflow.constant(list((FineInts_to_CoarseInts).items()), dtype=tensorflow.int8) # THIS CODE CELL IS TO BUILD A FUNCTIONAL MODEL inputs = layers.Input(shape=input_shape) x = layers.BatchNormalization()(inputs) x = layers.Conv2D(64, (3, 3), padding='same', activation=&quot;relu&quot;)(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Dropout(0.30)(x) x = layers.Conv2D(256, (3, 3), padding='same', activation=&quot;relu&quot;)(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Dropout(0.30)(x) x = layers.Conv2D(256, (3, 3), padding='same', activation=&quot;relu&quot;)(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Dropout(0.30)(x) x = layers.Conv2D(1024, (3, 3), padding='same', activation=&quot;relu&quot;)(x) x = layers.MaxPooling2D(pool_size=(2, 2))(x) x = layers.Dropout(0.30)(x) x = layers.GlobalAveragePooling2D()(x) x = layers.BatchNormalization()(x) x = layers.Dropout(0.30)(x) x = layers.Dense(512, activation = &quot;relu&quot;)(x) x = layers.BatchNormalization()(x) x = layers.Dropout(0.30)(x) output_fine = layers.Dense(num_fine_classes, activation=&quot;softmax&quot;, name=&quot;output_fine&quot;)(x) model = Model(inputs=inputs, outputs=output_fine) # THIS CODE CELL IS TO DEFINE A CUSTOM LOSS FUNCTION def crossentropy_loss(y_true, y_pred): return SparseCategoricalCrossentropy()(y_true, y_pred) def hierarchical_loss(y_true, y_pred): y_true = tensorflow.cast(y_true, dtype=float) y_true_reshaped = tensorflow.reshape(y_true, -1) y_true_coarse_int = [FineInts_to_CoarseInts[K.eval(y_true_reshaped[i])] for i in range(y_true_reshaped.shape[0])] y_true_coarse_int = tensorflow.cast(y_true_coarse_int, dtype=tensorflow.float32) y_pred = tensorflow.cast(y_pred, dtype=float) y_pred_int = tensorflow.argmax(y_pred, axis=1) y_pred_coarse_int = [FineInts_to_CoarseInts[K.eval(y_pred_int[i])] for i in range(y_pred_int.shape[0])] y_pred_coarse_int = tensorflow.cast(y_pred_coarse_int, dtype=tensorflow.float32) y_pred_coarse_hot = to_categorical(y_pred_coarse_int, 20) return SparseCategoricalCrossentropy()(y_true_coarse_int, y_pred_coarse_hot) def custom_loss(y_true, y_pred): H = 0.5 total_loss = (1 - H) * crossentropy_loss(y_true, y_pred) + H * hierarchical_loss(y_true, y_pred) return total_loss # THIS CODE CELL IS TO COMPILE THE MODEL model.compile(optimizer=&quot;adam&quot;, loss=crossentropy_loss, metrics=&quot;accuracy&quot;, run_eagerly=False) # THIS CODE CELL IS TO TRAIN THE MODEL history = model.fit(x_train, y_train_fine_int, epochs=200, validation_split=0.25, batch_size=100) # THIS CODE CELL IS TO VISUALIZE THE TRAINING history_frame = pd.DataFrame(history.history) 
history_frame.to_csv(&quot;history.csv&quot;) history_frame.loc[:, [&quot;accuracy&quot;, &quot;val_accuracy&quot;]].plot() history_frame.loc[:, [&quot;loss&quot;, &quot;val_loss&quot;]].plot() plt.show() # THIS CODE CELL IS TO EVALUATE THE MODEL ON AN INDEPENDENT DATASET score = model.evaluate(x_test, y_test_fine_int, verbose=0) print(&quot;Test loss:&quot;, score[0]) print(&quot;Test accuracy:&quot;, score[1]) </code></pre>
<h2>Quantization</h2> <ul> <li>Quantization is a technique that converts your number type from <code>float32</code> to <code>int8</code>, which makes your model smaller and its arithmetic cheaper.</li> <li>There are two kinds of quantization: quantization-aware training (applied before/during training) and post-training quantization.</li> <li>Try to apply quantization-aware training and let me know the results.</li> </ul> <p><a href="https://www.youtube.com/watch?v=v1oHf1KV6kM" rel="nofollow noreferrer"><strong>Refer to this video for Quantization</strong></a></p>
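<p>A minimal sketch of quantization-aware training with the <code>tensorflow_model_optimization</code> package. This assumes the package is installed and that your layers are supported by it; it illustrates the idea rather than being a drop-in fix:</p> <pre><code>import tensorflow_model_optimization as tfmot

# Wrap the already-built Keras model so that training simulates int8 arithmetic
q_aware_model = tfmot.quantization.keras.quantize_model(model)

q_aware_model.compile(optimizer='adam', loss=custom_loss, metrics='accuracy')
q_aware_model.fit(x_train, y_train_fine_int, epochs=1, batch_size=100)
</code></pre> <p>That said, most of the slowdown in your run likely comes from <code>run_eagerly=True</code> combined with the per-element <code>K.eval</code> calls in the loss, so replacing those Python-level dictionary lookups with a tensor lookup table (e.g. the <code>tf.lookup.StaticHashTable</code> you already import) would be worth trying first.</p>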
tensorflow|keras|eager-execution
0
375,924
73,731,376
Python: concat two dataframes after grouping and sorting
<p>I have two pandas dataframes and I want to get one:</p> <pre><code> asks_price asks_qty exchange_name_ask 0 20156.51 0.000745 Coinbase 1 20156.52 0.050000 Coinbase </code></pre> <pre><code> bids_price bids_qty exchange_name_bid 2 20153.28 0.000200 Coinbase 3 20153.27 0.051000 Coinbase </code></pre> <p>Get ====&gt;</p> <pre><code> asks_price asks_qty exchange_name_ask bids_price bids_qty exchange_name_bid 0 20156.51 0.000745 Coinbase 20153.28 0.000200 Coinbase 1 20156.52 0.050000 Coinbase 20153.27 0.051000 Coinbase </code></pre> <pre><code>ask_snapshot = ask_snapshot.groupby('asks_price')['asks_qty'].sum().reset_index() bid_snapshot = bid_snapshot.groupby('bids_price')['bids_qty'].sum().reset_index() ask_snapshot = ask_snapshot.sort_values(by='asks_price').reset_index() bid_snapshot = bid_snapshot.sort_values(by='bids_price', ascending=False).reset_index() ask = ask_snapshot.head(20) bid = bid_snapshot.head(20) snapshot = pd.concat([ask, bid], axis=1, join='inner') </code></pre> <p>In the snapshot the exchange_name columns disappear, and I don't know why. Thanks.</p>
<p>The two <code>exchange_name..</code> columns don't disappear when you use <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><strong><code>pandas.concat</code></strong></a>; they simply don't exist in the two dataframes passed as arguments, because your <code>groupby</code> only keeps the grouping key and the aggregated column. Include them in the grouping keys instead.</p> <p>Try this:</p> <pre><code>ask_snapshot = ask_snapshot.groupby(['asks_price', 'exchange_name_ask'], as_index=False)['asks_qty'].sum() bid_snapshot = bid_snapshot.groupby(['bids_price', 'exchange_name_bid'], as_index=False)['bids_qty'].sum() ask_snapshot = ask_snapshot.sort_values(by='asks_price').reset_index() bid_snapshot = bid_snapshot.sort_values(by='bids_price', ascending=False).reset_index() ask = ask_snapshot.head(20) bid = bid_snapshot.head(20) snapshot = pd.concat([ask, bid], axis=1) </code></pre>
python|pandas
1
375,925
71,365,871
Adding elements to a numpy array and reshaping it
<p>I have the following numpy array</p> <pre><code>a= np.array([1,1]) </code></pre> <p>I have the two elements</p> <pre><code>b= [2, 2] c= [3, 3] </code></pre> <p>I would like to add those elements b and c, so that my output looks like this</p> <pre><code>a= [[1, 1], [2, 2], [3, 3]], #shape=(3,2) </code></pre> <p>Which numpy function should I use? Thanks</p>
<p>Create a new <code>numpy</code> array with the three elements:</p> <pre><code>&gt;&gt;&gt; np.array([a,b,c]) array([[1, 1], [2, 2], [3, 3]]) # shape : (3, 2) </code></pre> <p>If <code>a</code> had more than one dimension, <code>np.append</code> can be used:</p> <pre><code>&gt;&gt;&gt; a= np.array([[1,1], [4,4]]) &gt;&gt;&gt; a array([[1, 1], [4, 4]]) &gt;&gt;&gt; np.append(a,[b],axis=0) array([[1, 1], [4, 4], [2, 2]]) </code></pre>
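<p>If you have several equal-length rows to stack at once, <code>np.vstack</code> does the same thing in one call:</p> <pre><code>&gt;&gt;&gt; np.vstack([a, b, c])
array([[1, 1],
       [2, 2],
       [3, 3]])
</code></pre>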
python|arrays|numpy
1
375,926
71,268,742
Python libraries for representing distance between a point and DNF of inequalities
<p>Let us fix the number of variables to be 4: so x0, x1, x2, x3.</p> <p>I am looking for a python construct which allows me to:</p> <p>(i) store in memory, a disjunctive normal formula where the atomic formulas are inequalities: a0x0 + a1x1 + a2x2 + a3x3 &gt;= a4 or equalities: a0x0 + a1x1 + a2x2 + a3x3 == a4.</p> <p>(ii) Given a formula not in DNF, I have a function to convert it to such.</p> <p>(iii) Given a point (u1,u2,u3,u4), I can find the set distance of this point to the DNF formula.</p> <p>I know that numpy allows me to write atomic formulas and calculate their distance from a point, but it doesn't allow me to write conjunctions or disjunctions of them; and I can't compute set distance from a point to a DNF.</p> <p>I even checked out pyeda, but there the atomic formulas have to be boolean variables, and inequalities and equalities are not allowed.</p> <p>I can rewrite the whole code to define my own classes for a DNF, and define my own distance function, but I don't want to reinvent the wheel. Which python libraries can I use (and how) to make me achieve my task in the easiest fashion?</p>
<p>My response is to my interpretation of your problem, but I recognize that I am filling some gaps with my assumptions.</p> <p>(i) can be solved with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html" rel="nofollow noreferrer">linear programming</a>.</p> <p>(ii) is a very open question, so I will skip it.</p> <p>(iii) If you are happy with the distance <code>sum |x[i] - u[i]|</code>, the problem can still be handled with <code>linprog</code>; you will have to use auxiliary variables, though.</p>
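<p>A minimal sketch of (iii) with <code>scipy.optimize.linprog</code>, assuming each disjunct of the DNF is given as a pair <code>(A, b)</code> encoding the conjunction <code>A @ x &gt;= b</code> (an equality can be encoded as two opposite inequalities). Auxiliary variables <code>t</code> bound <code>|x - u|</code> componentwise, and the distance to the DNF is the minimum over its disjuncts:</p> <pre><code>import numpy as np
from scipy.optimize import linprog

def l1_distance_to_conjunct(A, b, u):
    # minimize sum(t)  subject to  x - t &lt;= u,  -x - t &lt;= -u,  -A @ x &lt;= -b
    A = np.asarray(A, float)
    n = len(u)
    c = np.concatenate([np.zeros(n), np.ones(n)])      # objective: sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I], [-A, np.zeros_like(A)]])
    b_ub = np.concatenate([u, -u, -np.asarray(b, float)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (2 * n))     # x and t are free variables
    return res.fun if res.success else np.inf          # inf: disjunct is infeasible

def l1_distance_to_dnf(disjuncts, u):
    # Distance to a union of polyhedra is the distance to the nearest one
    u = np.asarray(u, float)
    return min(l1_distance_to_conjunct(A, b, u) for A, b in disjuncts)
</code></pre>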
python|numpy|scipy|pyeda
0
375,927
71,143,308
merge two datasets to find a mean
<p>I have two similar looking tables: df1:</p> <pre><code>country type mean count last_checked_date Brazil Weather x 2 2022-02-13 Brazil Corona y 3 2022-02-13 China Corona z 1 2022-02-13 China Fruits s 2 2022-02-13 </code></pre> <p>df2</p> <pre><code>country type mean count last_checked_date Ghana Weather a 2 2022-02-13 Brazil Corona b 5 2022-02-13 China Corona c 1 2022-02-13 Germany Fruits d 2 2022-02-13 </code></pre> <p>I want to join df2 with df1 such that no combination of country, type is lost. For each combination of country and type, I want to calculate a mean value with this formula:</p> <pre><code>def find_new_values(old_mean, new_mean, old_count, new_count): mean = (old_mean + new_mean)/(old_count+new_count) count = old_count+new_count return mean, count </code></pre> <p>For example, China, Corona is present in both df2 and df1, so the mean would be (c+z)/(1+1).</p> <p>However, Ghana, Weather is present in df2 but not in df1, so in this case I want to simply add the row to df1 as it is, without the formula calculation.</p> <p>How can I achieve this? What's the correct join/merge type to use here?</p>
<p>We may consider the problem this way: we combine them into one table,</p> <pre><code>df = pd.concat([df1, df2]) </code></pre> <p>then use <code>groupby</code> to apply aggregations to each group of rows that share the same <code>country</code> and <code>type</code>.</p> <pre><code>df.groupby(['country', 'type']).agg({'mean': 'mean', 'count': 'sum'}) </code></pre> <p>For a country-type combination that occurs in only one of the dataframes, the corresponding group contains a single row, so the aggregation functions won't change anything.</p> <p>You may add <code>'last_checked_date': 'last'</code> to the list of <code>agg</code> if needed.</p>
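<p>Note that <code>'mean': 'mean'</code> gives a plain average of the stored means. If you want the exact formula from your <code>find_new_values</code>, i.e. <code>(old_mean + new_mean) / (old_count + new_count)</code>, a sketch:</p> <pre><code>df = pd.concat([df1, df2])
out = df.groupby(['country', 'type'], as_index=False).agg(
    mean_sum=('mean', 'sum'),
    count=('count', 'sum'),
    last_checked_date=('last_checked_date', 'last'),
)
out['mean'] = out.pop('mean_sum') / out['count']   # matches find_new_values
</code></pre>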
python|pandas|dataframe|numpy|mean
0
375,928
71,223,806
Taking the average of one column with a certain value in another column Pandas
<p>I want to find the average of one column based on the value of another. So if I have a dataframe with columns ['1','2']:</p> <pre><code>data = [['A',10],['B',12],['A',41],['B',14]] df1 = pd.DataFrame(data, columns=['1','2']) df1.head() </code></pre> <p>How would I create a new column with the average of column '2' for 'A' and 'B'?</p>
<p>You can do a <code>groupby</code> and <code>transform</code> to get the mean of each group:</p> <pre><code>df1[&quot;avg&quot;] = df1.groupby('1')['2'].transform('mean') print(df1) 1 2 avg 0 A 10 25.5 1 B 12 13.0 2 A 41 25.5 3 B 14 13.0 </code></pre> <hr /> <p>Or, if you want a literal column of the mean of <code>A</code> and <code>B</code>, you can do the following:</p> <pre><code>df1[&quot;A_avg&quot;] = df1[df1[&quot;1&quot;]==&quot;A&quot;][&quot;2&quot;].mean() df1[&quot;B_avg&quot;] = df1[df1[&quot;1&quot;]==&quot;B&quot;][&quot;2&quot;].mean() print(df1) 1 2 A_avg B_avg 0 A 10 25.5 13.0 1 B 12 25.5 13.0 2 A 41 25.5 13.0 3 B 14 25.5 13.0 </code></pre>
pandas|dataframe|aggregate
0
375,929
71,434,956
Optimization of my script which calculates the weekly qty of product
<p>I have a task where I need to change data in my data frame multiple times. I wrote the answer in a Jupyter notebook using loops, and it takes around 2.5 min to run.</p> <p>However, when I rewrite my code for PyCharm using modules and definitions, it takes around 20 min and I do not know where I made a mistake.</p> <p>Here is an explanation of my task and my idea as written in Jupyter; maybe you will have some ideas on how I could write it better.</p> <p>I have a data frame with the weekly qty of sold toys in a factory, where 0w is the last week.</p> <pre><code>ID 0w 1w 2w 3w 4w 5w 6w 7w 8w 9w 10w 11w 12w 13w 0 0 1 0 0 5 1 65 2 62 1 1 2 1 60 1 0 0 1 5 16 0 2 0 0 40 0 100 0 0 2 0 3 0 0 0 0 0 40 0 0 20 0 0 0 3 0 5 6 0 0 0 0 0 0 0 0 0 0 0 4 0 1 0 0 0 0 0 0 0 0 0 0 0 0 </code></pre> <p>The first step is to save every row from my df to a list of lists 'week_qty':</p> <pre><code>week_qty = [] lenOfRows = len(copiedData) for i in range(0, lenOfRows): week_qty.append(copiedData.iloc[i]) week_qty[0] = [0 1 0 0 5 1 65 2 62 1 1 2 1 60] </code></pre> <p>The second step is to take the 90th and 10th percentile of each row and compare each value of the row with them; for the first row, p90 = 61.4 and p10 = 0. If the value in a cell is lower than p10 I change it to the value of p10, and if it's higher than p90 I change it to the value of p90.</p> <pre><code>def CalcPercentage(week_qty, oneWeek): p10 = np.percentile(week_qty, 10) p90 = np.percentile(week_qty, 90) if (oneWeek &lt; p10): return p10 elif (oneWeek &gt; p90): return p90 else: return oneWeek [CalcPercentage(week_qty[0], w) for w in week_qty[0]] = [60, 1, 2, 1, 1, 61.4, 2, 61.4, 1, 5, 0, 0, 1, 0] </code></pre> <p>The last step is to create a matrix of those values, doing it for every row and for each of the 14 cells in a row:</p> <pre><code>for i in range(0, lenOfRows): Rows = [] for j in range(0, 14): Rows.append(CalcPercentage(week_qty[i], week_qty[i][j])) MatrixBetweenWeeks.append(Rows) </code></pre> <p>I would like to make it faster; for 31,000 rows in PyCharm it runs far too long.</p>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.clip.html" rel="nofollow noreferrer"><code>clip</code></a>:</p> <pre><code>p10, p90 = np.percentile(df.iloc[:, 1:], [10, 90], axis=1) out = df.iloc[:, 1:].clip(p10, p90, axis=0) out['Average'] = out.mean(axis=1) out = pd.concat([df.iloc[:, :1], out], axis=1) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; out ID 0w 1w 2w 3w 4w 5w 6w 7w 8w 9w 10w 11w 12w 13w Average 0 0 0 1.0 0.0 0 5 1 61.4 2.0 61.4 1.0 1.0 2.0 1 60 14.057143 1 1 0 0.0 1.0 5 16 0 2.0 0.0 0.0 32.8 0.0 32.8 0 0 6.400000 2 2 0 3.0 0.0 0 0 0 0.0 14.9 0.0 0.0 14.9 0.0 0 0 2.342857 3 3 0 3.5 3.5 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.500000 4 4 0 0.0 0.0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000000 </code></pre> <p><strong>Performance</strong></p> <p>For 31K records:</p> <pre><code>%timeit myfunc(df) 15.3 ms ± 80 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre>
python|dataframe|pycharm|pandas
1
375,930
71,248,632
How to convert data from DataFrame to form
<p>I'm trying to make a report and then convert it to the prescribed form, but I don't know how. Below is my code:</p> <pre><code>data = pd.read_csv('https://raw.githubusercontent.com/hoatranobita/reports/main/Loan_list_test.csv') data_pivot = pd.pivot_table(data,('CLOC_CUR_XC_BL'),index=['BIZ_TYPE_SBV_CODE'],columns=['TERM_CODE','CURRENCY_CD'],aggfunc=np.sum).reset_index print(data_pivot) </code></pre> <p>The pivot table shows as below:</p> <pre><code>&lt;bound method DataFrame.reset_index of TERM_CODE Ngắn hạn Trung hạn CURRENCY_CD 1. VND 2. USD 1. VND 2. USD BIZ_TYPE_SBV_CODE 201 170000.00 NaN 43533.42 NaN 202 2485441.64 5188792.76 2682463.04 1497309.06 204 35999.99 NaN NaN NaN 301 1120940.65 NaN 190915.62 453608.72 401 347929.88 182908.01 239123.29 NaN 402 545532.99 NaN 506964.23 NaN 403 21735.74 NaN 1855.92 NaN 501 10346.45 NaN NaN NaN 601 881974.40 NaN 50000.00 NaN 602 377216.09 NaN 828868.61 NaN 702 9798.74 NaN 23616.39 NaN 802 155099.66 NaN 762294.95 NaN 803 23456.79 NaN 97266.84 NaN 804 151590.00 NaN 378000.00 NaN 805 182925.30 54206.52 4290216.37 NaN&gt; </code></pre> <p>Here is the prescribed form:</p> <pre><code>form = pd.read_excel('https://github.com/hoatranobita/reports/blob/main/Form%20A00034.xlsx?raw=true') form.head() Mã ngành kinh tế Dư nợ tín dụng (không bao gồm mua, đầu tư trái phiếu doanh nghiệp) Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 0 NaN Ngắn hạn NaN Trung và dài hạn NaN Tổng cộng 1 NaN Bằng VND Bằng ngoại tệ Bằng VND Bằng ngoại tệ NaN 2 101.0 NaN NaN NaN NaN NaN 3 201.0 NaN NaN NaN NaN NaN 4 202.0 NaN NaN NaN NaN NaN </code></pre> <p>As you can see, the pivot table has no 101 but the form does. So what do I have to do to convert from the DataFrame to the form, skipping 101?</p> <p>Thank you.</p>
<p>Hi, first create a worksheet using <a href="https://pypi.org/project/XlsxWriter/" rel="nofollow noreferrer">xlsxwriter</a>:</p> <pre><code>import xlsxwriter #start workbook workbook = xlsxwriter.Workbook('merge1.xlsx') #Introduce formatting format = workbook.add_format({'border': 1,'bold': True}) #Adding a worksheet worksheet = workbook.add_worksheet() merge_format = workbook.add_format({ 'bold':1, 'border': 1, 'align': 'center', 'valign': 'vcenter'}) #Starting the Headers worksheet.merge_range('A1:A3', 'Mã ngành kinh tế', merge_format) worksheet.merge_range('B1:F1', 'Dư nợ tín dụng (không bao gồm mua, đầu tư trái phiếu doanh nghiệp)', merge_format) worksheet.merge_range('B2:C2', 'Ngắn hạn', merge_format) worksheet.merge_range('D2:E2', 'Trung và dài hạn', merge_format) worksheet.merge_range('F2:F3', 'Tổng cộng', merge_format) worksheet.write(2, 1, 'Bằng VND',format) worksheet.write(2, 2, 'Bằng ngoại tệ',format) worksheet.write(2, 3, 'Bằng VND',format) worksheet.write(2, 4, 'Bằng ngoại tệ',format) </code></pre> <p>After this formatting you can start writing to the sheet in a loop using <a href="https://xlsxwriter.readthedocs.io/worksheet.html?highlight=write#write" rel="nofollow noreferrer">worksheet.write()</a>; below I have included a sample:</p> <pre><code> expenses = ( ['Rent', 1000], ['Gas', 100], ['Food', 300], ['Gym', 50], ) row, col = 0, 0 for item, cost in expenses: worksheet.write(row, col, item) worksheet.write(row, col + 1, cost) row += 1 </code></pre> <p>In row and col you specify the cell's row and column as zero-based numeric indices, like a matrix.</p> <p>And finally close the workbook:</p> <pre><code>workbook.close() </code></pre>
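<p>To fill the form body from your pivot table, the same write loop can walk the dataframe (placed before <code>workbook.close()</code>). A sketch, assuming <code>data_pivot</code> is indexed by <code>BIZ_TYPE_SBV_CODE</code> and its columns line up with the form; codes missing from the pivot, like 101, simply stay blank:</p> <pre><code>import pandas as pd

excel_row = 3                               # first data row, below the 3 header rows
for code, rec in data_pivot.iterrows():
    worksheet.write(excel_row, 0, code)     # Mã ngành kinh tế
    for col_offset, value in enumerate(rec, start=1):
        if pd.notna(value):
            worksheet.write(excel_row, col_offset, value)
    excel_row += 1
</code></pre>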
python|pandas|numpy
2
375,931
71,304,442
randomly choose value between two numpy arrays
<p>I have two numpy arrays:</p> <pre><code>left = np.array([2, 7]) right = np.array([4, 7]) right_p1 = right + 1 </code></pre> <p>What I want to do is</p> <pre><code>rand = np.zeros(left.shape[0]) for i in range(left.shape[0]): rand[i] = np.random.randint(left[i], right_p1[i]) </code></pre> <p>Is there a way I could do this without using a for loop?</p>
<p>You could try with:</p> <pre><code> extremes = zip(left, right_p1) rand = map(lambda x: np.random.randint(x[0], x[1]), extremes) </code></pre> <p>This way you will end up with a <code>map</code> object. If you need to save memory, you can keep it that way, otherwise you can get the full <code>np.array</code> passing through a <code>list</code> conversion, like this:</p> <pre><code>rand = np.array(list(map(lambda x: np.random.randint(x[0], x[1]), extremes))) </code></pre>
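<p>Note that recent NumPy versions (1.11+, if I recall correctly) broadcast array arguments in <code>randint</code> directly, which removes both the loop and the map:</p> <pre><code>rand = np.random.randint(left, right_p1)   # one draw per (low, high) pair
</code></pre>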
numpy
2
375,932
71,304,944
Pandas - How can some column values be moved to new columns?
<p>I have the below data frame:</p> <pre><code>d = { &quot;name&quot;:[&quot;RRR&quot;,&quot;RRR&quot;,&quot;RRR&quot;,&quot;RRR&quot;,&quot;RRR&quot;,&quot;ZZZ&quot;,&quot;ZZZ&quot;,&quot;ZZZ&quot;,&quot;ZZZ&quot;,&quot;ZZZ&quot;], &quot;id&quot;:[1,1,2,2,3,2,3,3,4,4],&quot;value&quot;:[12,13,1,44,22,21,23,53,64,9] } </code></pre> <p><img src="https://i.stack.imgur.com/S6rnS.png" alt="dataframe" /></p> <p>I want the output as below:</p> <p><img src="https://i.stack.imgur.com/krZsK.png" alt="output" /></p>
<p>First pivot by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with counter by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a> with helper column <code>ind</code> by <code>id</code>, then sorting second level of <code>MultiIndex</code> with flatten values:</p> <pre><code>df = (df.assign(ind = df['id']) .set_index(['name','id', df.groupby(['name','id']).cumcount()])[['value', 'ind']] .unstack(1) .sort_index(axis=1, kind='mergesort', level=1)) df.columns = [f'{a}_{b}' for a, b in df.columns] df = df.droplevel(1).reset_index() print (df) name ind_1 value_1 ind_2 value_2 ind_3 value_3 ind_4 value_4 0 RRR 1.0 12.0 2.0 1.0 3.0 22.0 NaN NaN 1 RRR 1.0 13.0 2.0 44.0 NaN NaN NaN NaN 2 ZZZ NaN NaN 2.0 21.0 3.0 23.0 4.0 64.0 3 ZZZ NaN NaN NaN NaN 3.0 53.0 4.0 9.0 </code></pre>
python|pandas
2
375,933
71,299,388
Pandas - new column based on `max` of grouped values
<p>I have a Pandas dataframe with multiple groups in it, A, B, C. Each group has multiple counts associated with it and I want to create a new column that is normalised to the max value of each group.</p> <p>i.e.</p> <pre><code>index, group, year, count 0, A, 2015, 1 1, A, 2016, 2 2, A, 2017, 3 3, B, 2012, 10 4, B, 2013, 14 5, B, 2014, 18 6, C, 2014, 55 7, C, 2015, 59 8, C, 2016, 58 </code></pre> <p>...becomes</p> <pre><code>index, group, year, count, normalised 0, A, 2015, 1, 0.333 1, A, 2016, 2, 0.667 2, A, 2017, 3, 1.000 3, B, 2012, 10, 0.557 4, B, 2013, 14, 0.778 5, B, 2014, 18, 1.000 6, C, 2014, 55, 0.932 7, C, 2015, 59, 1.000 8, C, 2016, 58, 0.983 </code></pre> <p>If I try something like...</p> <p><code>df.assign(normalised=lambda x: x['count']/df[df['group'] == x['group']]['count'].max()</code></p> <p>then <code>max</code> will return <code>59</code> rather than the largest number within the category</p>
<p>You can use <code>groupby</code> + <code>transform</code> to calculate the ratio between current value and maximum value in each group:</p> <pre><code>df['normalised'] = df['count'].groupby(df.group).transform(lambda x: x / x.max()) df index group year count normalised 0 0 A 2015 1 0.333333 1 1 A 2016 2 0.666667 2 2 A 2017 3 1.000000 3 3 B 2012 10 0.555556 4 4 B 2013 14 0.777778 5 5 B 2014 18 1.000000 6 6 C 2014 55 0.932203 7 7 C 2015 59 1.000000 8 8 C 2016 58 0.983051 </code></pre>
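<p>An equivalent form that avoids the Python-level lambda is to divide by the broadcast group maximum:</p> <pre><code>df['normalised'] = df['count'] / df.groupby('group')['count'].transform('max')
</code></pre>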
python|pandas|assign
4
375,934
71,101,277
Append rows of the same data and transpose them into columns
<p>I have created a dataframe from an excel sheet using pandas. The issue with this data frame is the data structure provided to me. The data structure is somewhat complex, where the same data types were given in a row structure. So I had to use df.transpose() to first transpose the data, but the issue occurs after transposing: I am getting the data in multiple split sections and I need to append the transposed data into single columns. <a href="https://i.stack.imgur.com/QixfA.png" rel="nofollow noreferrer">Before Transpose of data</a></p> <p><a href="https://i.stack.imgur.com/xvBkl.png" rel="nofollow noreferrer">Data after Transposing</a></p> <p><a href="https://i.stack.imgur.com/BDAX7.png" rel="nofollow noreferrer">Needed output</a></p> <p>Update: fixed the issue. Attaching the code below for reference.</p> <pre><code>df_in = pd.read_csv('sample_copy2.csv', index_col=['Year']) df_t = df_in.transpose(copy=True) print(df_t) df = df_t s = df.columns.to_series() df.columns = [df.columns, s.groupby(s).cumcount()] dft = df.stack().sort_index(level=1).fillna(0).reset_index(level=1, drop=True).reset_index() df_f = dft.set_index('index') print(df_f) </code></pre>
<p>Since you haven't provided any reproducible code, I can't give a solution as code. However, I can answer at a high level.</p> <p>The transpose that you have done is absolutely right. Why don't you make a new data frame with the repeated data from columns E, F, G, H and then concat the two data frames? For concat in pandas, go to this link: <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.concat.html</a></p> <p>Hope this is helpful.</p>
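<p>A sketch of that idea, with the column names <code>E</code>, <code>F</code>, <code>G</code>, <code>H</code> standing in for whatever your repeated columns are actually called:</p> <pre><code>repeated = df[['E', 'F', 'G', 'H']].reset_index(drop=True)   # hypothetical column names
transposed = df_t.reset_index(drop=True)                     # your transposed frame
result = pd.concat([transposed, repeated], axis=1)
</code></pre>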
python|pandas|csv
0
375,935
71,111,376
Unable to create a directory and download a model to a local directory from Azure
<p>I have created a pipeline script where I have defined the pipeline steps and submit the pipeline.</p> <pre><code>dataPrep_step = PythonScriptStep(name='01 Data Preparation', source_directory='/home/ubuntu/Desktop/AzureMLProject/PytorchProject', script_name='220 - Dataprep Pipeline.py', inputs=[categories_ds.as_named_input('categories_ds'), labels_ds.as_named_input('labels_ds')], outputs=[dataFolder], runconfig=run_config, arguments=['--datafolder', dataFolder]) train_step = PythonScriptStep(name='02 Train the Model', source_directory='/home/ubuntu/Desktop/AzureMLProject/PytorchProject', script_name='220 - Traning Pipeline.py', inputs=[dataFolder], runconfig=run_config, arguments=['--datafolder', dataFolder, '--batch_size', 110, '--num_epochs', 1, '--learning_rate', 0.0001, '--ds', image_files_dataset.as_named_input('my_ds').as_mount()]) steps = [dataPrep_step, train_step] new_pipeline = Pipeline(workspace=ws, steps=steps) </code></pre> <p>Now the issue is that when I try to make directory and download the model locally from Azure MODEL, I am unable to do it inside training script (i.e., 220 - Traning Pipeline.py) but the model download successfully inside pipeline script (where I have defined the pipeline steps). I am using following code to download the model from Azure.</p> <pre><code>os.makedirs(&quot;/models&quot;, exist_ok = True) azure_model = Model(ws, &quot;pytorch-model&quot;) azure_model.download(target_dir=&quot;/models&quot;, exist_ok=True) </code></pre> <p>I am not sure why the same code is working fine inside pipeline script but not working inside training script. Kindly let me know what I am doing wrong here.</p>
<p>Pipeline script and training script can have different directory/file paths. Please check and specify the correct <strong>source</strong> and <strong>destination</strong> path of the download file.</p> <p>For example:</p> <blockquote> <p>run.download_file(name='outputs/my_output_file', output_file_path='my_destination_path')</p> </blockquote> <p>References: <a href="https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-train#create-training-scripts" rel="nofollow noreferrer">Create training scripts</a> and <a href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#submit-the-experiment" rel="nofollow noreferrer">Configure and submit training runs</a></p>
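<p>A minimal sketch of that idea inside the training script, assuming the original failure comes from writing to the absolute path <code>/models</code>, which the step's compute may not let you create; a path relative to the step's working directory avoids that:</p> <pre><code>import os
from azureml.core import Model, Run

run = Run.get_context()
ws = run.experiment.workspace                 # workspace from inside the pipeline step

target = os.path.join(os.getcwd(), 'models')  # relative to the step's working dir
os.makedirs(target, exist_ok=True)

azure_model = Model(ws, 'pytorch-model')
azure_model.download(target_dir=target, exist_ok=True)
</code></pre>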
python|azure|pytorch|pipeline
0
375,936
71,303,481
Writing a function including a pandas query with a numeric value in the function call
<p>I'm trying to write a function with two arguments: one is the data frame and the other is some numeric value. I get the error that name &quot;t is not defined.&quot; When I hard-code that numeric value, everything works well.</p> <p>Here is a minimal reproducible example.</p> <pre><code>df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B']) def my_fun(t, df): l = df.query('A == t') return l my_fun(1, df) </code></pre>
<p>You could use an f-string to replace the value of the variable <code>t</code> in your query string instead of a literal <code>&quot;t&quot;</code> string:</p> <pre><code>l = df.query(f&quot;A == {t}&quot;) </code></pre> <hr /> <p>Complete code:</p> <pre><code>df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B']) def my_fun(t, df): l = df.query(f'A == {t}') return l print(my_fun(1, df)) </code></pre> <p>Output:</p> <pre><code> A B 0 1 2 1 1 3 </code></pre>
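<p>Alternatively, <code>query</code> can reference local variables directly with the <code>@</code> prefix, which avoids string substitution altogether:</p> <pre><code>l = df.query('A == @t')
</code></pre>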
python|pandas|dataframe
2
375,937
71,233,779
How to create a dataframe based on a matrix?
<p>I have two dataframes, &quot;df1&quot; and &quot;df2&quot;, and one matrix &quot;res&quot;:</p> <pre><code>df1= a df2 = a b c c e d </code></pre> <p>There are 4 records in df1 and 3 records in df2, so res is a 4*3 matrix:</p> <pre><code>res = df2(index) 0 1 2 0 100 0 0 df1(index) 1 0 0 0 2 0 100 0 3 0 0 0 </code></pre> <p>Based on this data/matrix I want the following output in the form of a dataframe:</p> <pre><code> df1 df2 score a a 100 a c 0 a e 0 b a 0 b c 0 b e 0 c a 0 c c 100 c e 0 d a 0 d c 0 d e 0 </code></pre>
<p>Set the index and column names from <code>df1</code> and <code>df2</code>:</p> <pre><code>res.index = df1[:len(res.index)] res.columns = df2[:len(res.columns)] </code></pre> <p>And then reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a>:</p> <pre><code>df = res.rename_axis(index='df1', columns='df2').melt(value_name='score', ignore_index=False) </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>:</p> <pre><code>df = res.rename_axis(index='df1', columns='df2').stack().reset_index(name='score') </code></pre>
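<p>A worked sketch end-to-end, assuming <code>df1</code> and <code>df2</code> are Series of labels and <code>res</code> is the 4*3 score matrix:</p> <pre><code>import numpy as np
import pandas as pd

df1 = pd.Series(['a', 'b', 'c', 'd'])
df2 = pd.Series(['a', 'c', 'e'])
res = pd.DataFrame([[100, 0, 0],
                    [0, 0, 0],
                    [0, 100, 0],
                    [0, 0, 0]])

res.index = df1[:len(res.index)]
res.columns = df2[:len(res.columns)]

out = (res.rename_axis(index='df1', columns='df2')
          .stack()
          .reset_index(name='score'))
print(out)   # 12 rows: every (df1, df2) pair with its score
</code></pre>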
python|pandas|python-2.7|nlp
1
375,938
71,246,570
Preserve Timestamp Column during Pandas Groupby
<p>I have a sizeable pandas df that I would like to aggregate by a Timestamp. The timestamps are on a granular scale (one second). Post-aggregation, I would like the df to retain the first instance of each timestamp, but aggregate the following data into one-minute periods.</p> <pre><code>Original: Timestamp Column 22-02-23 9:30:00 1 22-02-23 9:30:01 4 ... 22-02-23 9:33:04 4 22-02-23 9:33:05 7 </code></pre> <pre><code>Grouped: Timestamp Column 22-02-23 9:30:00 5 ... 22-02-23 9:33:04 11 </code></pre> <p>Is there a pandas function for this? Or does this aggregation need to be done manually?</p>
<p>You can build a one-minute key and aggregate, keeping the first timestamp of each minute and summing the values, which matches your grouped example:</p> <pre class="lang-py prettyprint-override"><code>df[&quot;Timestamp&quot;] = pd.to_datetime(df[&quot;Timestamp&quot;]) df[&quot;Hour_Minute&quot;] = df[&quot;Timestamp&quot;].dt.strftime(&quot;%Y-%m-%d %H:%M&quot;) df.groupby(&quot;Hour_Minute&quot;).agg(Timestamp=(&quot;Timestamp&quot;, &quot;first&quot;), Column=(&quot;Column&quot;, &quot;sum&quot;)) </code></pre>
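<p>An equivalent that skips the string key is to group on the timestamps floored to the minute:</p> <pre><code>g = df.groupby(df['Timestamp'].dt.floor('min'))
out = g.agg(Timestamp=('Timestamp', 'first'), Column=('Column', 'sum')).reset_index(drop=True)
</code></pre>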
python|pandas|pandas-groupby
1
375,939
71,192,010
Remove consecutive positive/negative numbers from a dataframe
<p>(EDITED) This is my current DataFrame:</p> <pre><code> aapl tigr srpt 4 58.254690 2475.247525 131.665569 5 56.869882 2386.634845 140.016802 6 -58.709564 -2597.402597 NaN 7 NaN 2314.814815 145.539223 8 -60.786578 NaN -154.822728 9 -57.780089 -2283.105023 -140.646976 10 57.116747 2192.982456 130.992926 11 58.139535 2304.147465 115.074799 12 -54.942036 -2074.688797 -110.595001 </code></pre> <p>I was wondering if there is any way possible to remove all but the 1st consecutive positive/negative number, such that the result is the DataFrame below:</p> <pre><code> aapl tigr srpt 4 58.254690 2475.247525 131.665569 6 -58.709564 -2597.402597 NaN 7 NaN 2314.814815 145.539223 8 -60.786578 NaN -154.822728 10 57.116747 2192.982456 130.992926 12 -54.942036 -2074.688797 -110.595001 </code></pre>
<p>Use:</p> <pre><code>In [1481]: x = df.fillna(1)['aapl'].gt(0) In [1487]: ix = x[~x.eq(x.shift(1))].index In [1488]: df.loc[ix] Out[1488]: aapl tigr srpt 4 58.254690 2475.247525 131.665569 6 -58.709564 -2597.402597 NaN 7 NaN 2314.814815 145.539223 8 -60.786578 NaN -154.822728 10 57.116747 2192.982456 130.992926 12 -54.942036 -2074.688797 -110.595001 </code></pre>
python|pandas
0
375,940
71,294,644
Rolling Pandas unique values with same window size
<p>I would like to sum rolling unique values with the same window count.</p> <p>As an example, if I have the values 20, 30, 30, 40, I want the sum of (20, 30, 40).</p> <p><a href="https://i.stack.imgur.com/OL9kh.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>If the duplicates are grouped like your example you can try drop the duplicates in your dataframe using df.drop_duplicates() then apply .rolling(3).sum() to the new dataframe without any repeated values.</p> <pre><code>series = pd.Series([20, 30, 30,30,40, 50,50 , 60]) unique_series = series.drop_duplicates() unique_series.rolling(3,min_periods=1).sum() </code></pre> <p>After seeing pieterbargs response above I tried the following:</p> <pre><code>df = pd.DataFrame({ 'value': [10,20, 30, 50,50,50, 70,80, 90,90], 'id': [1,2,3,4,5,6,7,8,9,10], }) grouping = (df['value']!=df['value'].shift()) df2 = df[grouping].rolling(3).sum()['value'].rename('sum') df = df.merge(df2,how='left',left_index=True,right_index=True) </code></pre> <p>The output is as follows:</p> <pre><code>value id sum 0 10 1 1 20 2 2 30 3 60.0 3 50 4 100.0 4 50 5 5 50 6 6 70 7 150.0 7 80 8 200.0 8 90 9 240.0 9 90 10 </code></pre> <p>You can use .fillna(method = 'ffill') to fill the values down if you want this.</p> <p><code>df['sum'] = df['sum'].fillna(method = 'ffill')</code></p> <p>Gives an output as follows:</p> <pre><code> value id sum 0 10 1 1 20 2 2 30 3 60.0 3 50 4 100.0 4 50 5 100.0 5 50 6 100.0 6 70 7 150.0 7 80 8 200.0 8 90 9 240.0 9 90 10 240.0 </code></pre>
pandas|cumsum
1
375,941
71,337,157
Split a dataframe string column (when a cell can hold n values of that variable) into multiple columns
<p>Currently working on a dataset with a lot of contact data, Emails being one of the variables.</p> <p>A cell in the Emails column can have more than one email (1 to n), all separated by a comma and a space.</p> <p>For contacts with only two emails, the process would be quite straightforward. One can split the string and create a new column for that secondary email as follows:</p> <pre><code>email_df[['Emails', 'SecondaryEmail']] = email_df['Emails'].str.split(', ', expand=True) </code></pre> <p>However, this won't work with more than 2 emails. Therefore, I wonder what is the most efficient way to split the emails, when the number of emails can go from 1 to n (in this case n is limited to around 10, but that won't always be the case), into columns with only one email each (and a different name for each)?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> (or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.rsplit.html" rel="nofollow noreferrer"><code>Series.str.rsplit</code></a>) with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> to remove the column <code>Emails</code> after processing:</p> <pre><code>df = email_df.join(email_df.pop('Emails').str.split(', ', expand=True).add_prefix('Email')) </code></pre>
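<p>A quick usage sketch with made-up data, showing the resulting column names:</p> <pre><code>email_df = pd.DataFrame({'Name': ['A', 'B'],
                         'Emails': ['a@x.com', 'b@x.com, c@x.com, d@x.com']})
out = email_df.join(email_df.pop('Emails').str.split(', ', expand=True).add_prefix('Email'))
print(out.columns.tolist())   # ['Name', 'Email0', 'Email1', 'Email2']
</code></pre>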
python|pandas|string|dataframe|split
1
375,942
71,439,153
Pandas groupby -> aggregate - function of two columns
<p>I'm using pandas <code>aggregate</code> as folows:</p> <pre><code>In [6]: gb = df.groupby(['col1', 'col2']) ...: counts = gb.size().to_frame(name='counts') ...: (counts ...: .join(gb.agg({'col3': 'mean'}).rename(columns={'col3': 'col3_mean'})) ...: .join(gb.agg({'col4': 'median'}).rename(columns={'col4': 'col4_median'})) ...: .join(gb.agg({'col4': 'min'}).rename(columns={'col4': 'col4_min'})) ...: .reset_index() ...: ) </code></pre> <p>How can I add one more column which will contain sum of values <code>col3 * col4</code>?</p>
<p>First create column <code>new</code> before <code>groupby</code> and then aggregate <code>sum</code>, your solution rewritten in named aggregation is:</p> <pre><code>counts = (df.assign(new = df['col3'] * df['col4']) .groupby(['col1', 'col2'], as_index=False) .agg(counts=('col1','size'), col3_mean=('col3','mean'), col4_median=('col4','median'), col4_min=('col4','min'), both_sum=('new','sum'))) </code></pre>
python|pandas|aggregate
0
375,943
71,133,574
Efficient chaining of boolean indexers in pandas DataFrames
<p>I am trying to very efficiently chain a <strong>variable</strong> amount of boolean pandas Series, to be used as a filter on a DataFrame through boolean indexing.</p> <p>Normally when dealing with multiple boolean conditions, one chains them like this</p> <pre><code>condition_1 = (df.A &gt; some_value) condition_2 = (df.B &lt;= other_value) condition_3 = (df.C == another_value) full_indexer = condition_1 &amp; condition_2 &amp; condition_3 </code></pre> <p>but this becomes a problem with a variable amount of conditions.</p> <pre><code>bool_indexers = [ condition_1, condition_2, ..., condition_N, ] </code></pre> <p>I have tried out some possible solutions, but I am convinced it can be done more efficiently.</p> <p><strong>Option 1</strong><br/> Loop over the indexers and apply consecutively.</p> <pre><code>full_indexer = bool_indexers[0] for indexer in bool_indexers[1:]: full_indexer &amp;= indexer </code></pre> <p><strong>Option 2</strong><br/> Put into a DataFrame and calculate the row product.</p> <pre><code>full_indexer = pd.DataFrame(bool_indexers).product(axis=0) </code></pre> <p><strong>Option 3</strong><br/> Use <code>numpy.product</code> (<a href="https://stackoverflow.com/a/58555604/9391713">like in this answer</a>) and create a new Series out of the result.</p> <pre><code>full_indexer = pd.Series(np.prod(np.vstack(bool_indexers), axis=0)) </code></pre> <p>All three solutions are somewhat inefficient because they rely on looping or force you to create a new object (which can be slow if repeated many times).</p> <p>Can it be done more efficiently or is this it?</p>
<p>Use <code>np.logical_and</code>:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A': [0, 1, 2], 'B': [0, 1, 2], 'C': [0, 1, 2]}) m1 = df.A &gt; 0 m2 = df.B &lt;= 1 m3 = df.C == 1 m = np.logical_and.reduce([m1, m2, m3]) # OR m = np.all([m1, m2, m3], axis=0) out = df[np.logical_and.reduce([m1, m2, m3])] </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; pd.concat([m1, m2, m3], axis=1) A B C 0 False True False 1 True True True 2 True False False &gt;&gt;&gt; m array([False, True, False]) &gt;&gt;&gt; out A B C 1 1 1 1 </code></pre>
python|python-3.x|pandas|dataframe|boolean-indexing
3
375,944
71,404,715
Reshape a pandas DataFrame by expanding it horizontally
<p>I have a DataFrame with 4000 rows and 5 columns.</p> <p>They are information from multiple excel workbooks that I read in to one single sheet. Now I want to rearrange them in a horizontal manner, basically every time the header of the original excel sheet appears in the data, I want to move it horizontally.</p> <pre class="lang-py prettyprint-override"><code>a b . . . . . . . . a b . . . . a b . . . . . . . . a b . . . . . . . . . . . . a b . . . . </code></pre> <p>and I want to have something like</p> <pre class="lang-py prettyprint-override"><code>a b a b a b a b a b . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . </code></pre> <p>Amendment:</p> <pre><code> symbol weight lqdty date 0 1712 0.007871 7.023737 20210104 1 1726 0.007650 3.221021 20210104 2 1824 0.032955 3.475508 20210104 0 1871 0.006443 4.615002 20210105 1 1887 0.007840 6.678486 20210105 2 1871 0.006443 4.615002 20210105 3 1887 0.007840 6.678486 20210105 0 1871 0.006443 4.615002 20210106 1 1887 0.007840 6.678486 20210106 </code></pre>
<p>An alternative, provided that your dataframe looks like</p> <pre><code>data = { 'symbol': [1712, 1726, 1824, 1871, 1887, 1871, 1887, 1871, 1887], 'weight': [0.007871, 0.00765, 0.032955, 0.006443, 0.00784, 0.006443, 0.00784, 0.006443, 0.00784], 'lqdty': [7.023737, 3.221021, 3.475508, 4.615002, 6.678486, 4.615002, 6.678486, 4.615002, 6.678486], 'date': [20210104, 20210104, 20210104, 20210105, 20210105, 20210105, 20210105, 20210106, 20210106] } index = [0, 1, 2, 0, 1, 2, 3, 0, 1] df = pd.DataFrame(data, index=index) </code></pre> <p>would be</p> <pre><code>groups = pd.Series(df.index).eq(0).cumsum().values result = pd.concat((sdf for _, sdf in df.groupby(groups)), axis=1) </code></pre> <p>Result:</p> <pre><code> symbol weight lqdty date symbol weight lqdty \ 0 1712.0 0.007871 7.023737 20210104.0 1871 0.006443 4.615002 1 1726.0 0.007650 3.221021 20210104.0 1887 0.007840 6.678486 2 1824.0 0.032955 3.475508 20210104.0 1871 0.006443 4.615002 3 NaN NaN NaN NaN 1887 0.007840 6.678486 date symbol weight lqdty date 0 20210105 1871.0 0.006443 4.615002 20210106.0 1 20210105 1887.0 0.007840 6.678486 20210106.0 2 20210105 NaN NaN NaN NaN 3 20210105 NaN NaN NaN NaN </code></pre>
python|excel|pandas|dataframe|numpy
1
375,945
71,224,956
Get a count of occurrences of a string in each row and column of a pandas dataframe
<pre><code>import pandas as pd # list of paragraphs from judicial opinions # rows are opinions # columns are paragraphs from the opinion opinion1 = ['sentenced to life','sentenced to death. The sentence ...','', 'sentencing Appellant for a term of life imprisonment'] opinion2 = ['Justice Smith','This concerns a sentencing hearing.', 'The third sentence read ...', 'Defendant rested.'] opinion3 = ['sentence sentencing sentenced','New matters ...', 'The clear weight of the evidence', 'A death sentence'] data = [opinion1, opinion2, opinion3] df = pd.DataFrame(data, columns = ['p1','p2','p3','p4']) # This works for one column. I have 300+ in the real data set. df['p2'].str.contains('sentenc') </code></pre> <h1>How do I determine whether 'sentenc' is in columns 'p1' through 'p4'?</h1> <p>Desired output would be something like:</p> <pre><code>True True False True False True True False True False False True </code></pre> <h1>How do I retrieve a count of the number of times that 'sentenc' appears in each cell?</h1> <p>Desired output would be a count for each cell of the number of times 'sentenc' appears:</p> <pre><code>1 2 0 1 0 1 1 0 3 0 0 1 </code></pre> <p>Thank you!</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.count.html" rel="nofollow noreferrer"><code>pd.Series.str.count</code></a>:</p> <pre><code>counts = df.apply(lambda col: col.str.count('sentenc')) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; counts p1 p2 p3 p4 0 1 2 0 1 1 0 1 1 0 2 3 0 0 1 </code></pre> <p>To get it in boolean form, use <code>.str.contains</code>, or call <code>.astype(bool)</code> with the code above:</p> <pre><code>bools = df.apply(lambda col: col.str.contains('sentenc')) </code></pre> <p>or</p> <pre><code>bools = df.apply(lambda col: col.str.count('sentenc')).astype(bool) </code></pre> <p>Both will work just fine.</p>
python|pandas|dataframe
3
375,946
71,163,903
Convert pandas columns into rows (melt doesn't work)
<p>How can I achieve this in pandas? I have a way where I take out each column as a new data frame and then do an insert in SQL, but that approach doesn't scale: if I have 10 columns I can't create 10 separate data frames, so I want to know how to achieve it dynamically.</p> <p>I have a data set with the following data.</p> <p>Output I have</p> <pre><code>Id col1 col2 col3
1  Ab   BC   CD
2  har  Adi  tony
</code></pre> <p>Output I want</p> <pre><code>Id col1
1  AB
1  BC
1  CD
2  har
2  ADI
2  Tony
</code></pre>
<p><code>melt</code> <strong>does work</strong>, you just need a few extra steps for the exact output.</p> <p>Assuming &quot;Id&quot; is a column (if not, <code>reset_index</code>).</p> <pre><code>(df.melt(id_vars='Id', value_name='col1') .sort_values(by='Id') .drop('variable', axis=1) ) </code></pre> <p>Output:</p> <pre><code> Id col1 0 1 Ab 2 1 BC 4 1 CD 1 2 har 3 2 Adi 5 2 tony </code></pre> <p>Used input:</p> <pre><code>df = pd.DataFrame({'Id': [1, 2], 'col1': ['Ab', 'har'], 'col2': ['BC', 'Adi'], 'col3': ['CD', 'tony']}) </code></pre>
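<p>If you also want a clean sequential index rather than the original row labels, finish with <code>reset_index</code>:</p>
<pre><code>(df.melt(id_vars='Id', value_name='col1')
   .sort_values(by='Id')
   .drop('variable', axis=1)
   .reset_index(drop=True)
)
</code></pre>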
python-3.x|pandas|dataframe
2
375,947
71,141,595
If a date column doesn't have a certain date, then do something
<p>I am trying to read from a dataframe based on the <code>Date</code> column; if a certain date doesn't exist, then insert new data for that date.</p> <p>I've tried googling how to know if a return is none, but I can't find any result.</p> <p>So this is the way I handle it: if the result is a <code>KeyError</code>, then I insert the date data. But my concern is, what if the <code>KeyError</code> is NOT because the data doesn't exist?</p> <p>Or is there any better way than catching a KeyError like this?</p> <p>My code:</p> <pre><code>def checkdata(ticker, date):
    try:
        data = pd.read_sql(ticker, idxengine)
        data = data.set_index('Date')
        data.loc[date]
        print(data.loc[date])
    except KeyError as ke:
        print(&quot;Data not found, insert new date data&quot; + ticker + str(date))
</code></pre> <hr /> <p><strong>Update to simplify what I want to see as the end result</strong></p> <p>Let's say I have dataA and dataB</p> <pre><code>dataA = [['2022-02-01', 123], ['2022-02-02', 120]]
dataB = [['2022-02-01', 123], ['2022-02-03', 125]]
</code></pre> <p>I want to have</p> <pre><code>dataC = [['2022-02-01', 123], ['2022-02-02', 120], ['2022-02-03', 125]]

dfC = pd.DataFrame(dataC, columns = ['date', 'price'])
print(dfC)
</code></pre> <p>Expected output</p> <pre><code>         date  price
0  2022-02-01    123
1  2022-02-02    120
2  2022-02-03    125
</code></pre> <p>What should I do?</p>
<p>You don't want to select rows based on a condition that is applied to the <code>index</code>.</p> <p>Make the 'date' a proper column, and write a proper mask/condition, e.g.</p> <pre><code>relevant_data = data[data['date'] &gt; pd.Timestamp.today()] </code></pre> <p>An explicit condition will allow you to control what you want (e.g., if you have a timestamp with second precision, but you want to match based on month/day accuracy, both <code>==</code> and <code>.loc</code> on index will not work, as you need to convert the column to day/month/hour first)</p> <hr /> <p><strong>UPDATE</strong> regarding your edited question: you can &quot;combine&quot; both dataframes into one, then remove duplicates (assuming that the price information from both dataframes is equal. if not, you will need to define the logic what price info should take precendence).</p> <pre><code>dfa = pd.DataFrame(dataA, columns=['date', 'price']) dfb = pd.DataFrame(dataB, columns=['date', 'price']) pd.concat([dfa, dfb]).drop_duplicates() </code></pre> <p>with output</p> <pre><code> date price 0 2022-02-01 123 1 2022-02-02 120 1 2022-02-03 125 </code></pre>
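<p>If you also want the combined frame ordered by date with a fresh index, as in the expected output, a small extension of the above (assuming the dates are ISO-formatted strings, which sort correctly as text):</p>
<pre><code>dfC = (pd.concat([dfa, dfb])
         .drop_duplicates()
         .sort_values('date')
         .reset_index(drop=True))
</code></pre>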
python|pandas
1
375,948
71,221,975
Filter pandas dataframe rows based on multiple conditions
<p>This is my main dataframe that I want to filter.</p> <pre><code> first.seqnames first.start first.end first.width first.strand second.seqnames second.start second.end second.width second.strand 126457 chr1 10590184 10590618 GTTAATTATAGATAAATGGGCTAAAATTGCCTCTTGGTTTTGTAAC... * chr1 10730773 10731207 GTTAATTATAGATAAATGGGCTAAAATTGCCTCTTGGTTTTGTAAC... * 126461 chr1 10590958 10591541 CTTTCTTTTGCATACTTGTAGATTTTTCTTCTACTCTGGTTTAGGA... * chr1 10731548 10732131 CTTTCTTTTGCATACTTGTAGATTTTTCTTCTACTCTGGTTTAGGA... * 126544 chr1 10597414 10597918 ATCATTAGGAGATTATTAAAATTTGGAGTGTGTTGGCTGGCCTCGC... * chr1 10738018 10738522 ATCATTAGGAGATTATTAAAATTTGGAGTGTGTTGGCTGGCCTCGC... * 126576 chr1 10600437 10600904 CTCGTTACCATGAAAGCTTTTTTAGCATTGATTTCATAACAGTCTT... * chr1 10741045 10741512 CTCGTTACCATGAAAGCTTTTTTAGCATTGATTTCATAACAGTCTT... * 131172 chr1 11082133 11082593 TGAATCAGTGGTTTAATCTTCTTTGTTTACATCCCTTATTTCTTAT... * chr1 11245253 11245713 TGAATCAGTGGTTTAATCTTCTTTGTTTACATCCCTTATTTCTTAT... * </code></pre> <p>This is my conditional dataframe based on which I will filter:</p> <pre><code> Chrom Start End 0 chr1 10590184 10590618 1 chr1 10590958 10591541 2 chr1 10597414 10597918 </code></pre> <p>I've tried the following logic to filter each row. But it's wrong; it is not comparing each row.</p> <pre><code>header_frame[header_frame['first.end'].isin(knee_df['End']) &amp; header_frame['first.start'].isin(knee_df['Start'])] </code></pre> <p>I want only those rows in the 1st dataframe which exist in the 2nd dataframe.</p>
<p>Assuming <code>df1</code> and <code>df2</code> the two dataframes, you can inner <a href="https://pandas.pydata.org/docs/reference/api/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a>:</p> <pre><code>df1.merge(df2, left_on=['first.seqnames', 'first.start', 'first.end'], right_on=['Chrom', 'Start', 'End'], how='inner' )[df1.columns] </code></pre> <p>output:</p> <pre><code> first.seqnames first.start first.end first.width first.strand second.seqnames second.start second.end second.width second.strand 0 chr1 10590184 10590618 GTTAATTATAGATAAATGGGCTAAAATTGCCTCTTGGTTTTGTAAC... * chr1 10730773 10731207 GTTAATTATAGATAAATGGGCTAAAATTGCCTCTTGGTTTTGTAAC... * 1 chr1 10590958 10591541 CTTTCTTTTGCATACTTGTAGATTTTTCTTCTACTCTGGTTTAGGA... * chr1 10731548 10732131 CTTTCTTTTGCATACTTGTAGATTTTTCTTCTACTCTGGTTTAGGA... * 2 chr1 10597414 10597918 ATCATTAGGAGATTATTAAAATTTGGAGTGTGTTGGCTGGCCTCGC... * chr1 10738018 10738522 ATCATTAGGAGATTATTAAAATTTGGAGTGTGTTGGCTGGCCTCGC... * </code></pre>
python-3.x|pandas|dataframe
1
375,949
71,285,825
Using create_tf_dataset_for_client() to define the training examples in the dataset
<p>I am preparing a dataset for federation settings, in the code below, I have multiple CSV files and used each is considered a single client.</p> <pre><code>dataset_paths = { 'client_0': '/content/drive/ds1.csv', 'client_1': '/content/drive/ds2.csv', 'client_2': '/content/drive/ds3.csv', 'client_3': '/content/drive/ds4.csv', 'client_4': '/content/drive/ds5.csv', } ## Defining the Dtyps for each columns in the datasets record_defaults = [int(), int(), int(), int(), float(),float(),float(),float(),float(),float(), int(), int()] @tf.function def create_tf_dataset_for_client_fn(dataset_path): return tf.data.experimental.CsvDataset( dataset_path, record_defaults=record_defaults, header=True ) source = tff.simulation.datasets.FilePerUserClientData( dataset_paths, create_tf_dataset_for_client_fn) </code></pre> <p>I wanted to access the data so I can determine the <code>features</code> and <code>label</code> column. so I typed:</p> <pre><code>for x in source.create_tf_dataset_for_client('client_1'): print(x) &gt;&gt;&gt; (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2145209674&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=14&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=64.17&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=18.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=70.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=80.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=30.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=270.14&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=7&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=2&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2143677297&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=9&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=60.83&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=14.89&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=75.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=42.5&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=184.72&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=8&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=2&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2138537298&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=11&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.83&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=18.82&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=70.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=85.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=30.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=295.14&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=7&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=2&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2103817421&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=9&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=77.5&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=8.8&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=75.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=90.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=64.58&gt;, 
&lt;tf.Tensor: shape=(), dtype=int32, numpy=6&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2081702335&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=10&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=75.83&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=9.7&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=77.5&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=90.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=78.47&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=6&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2067936920&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=11&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=80.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=10.95&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=77.5&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=95.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=100.0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=6&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=2&gt;) (&lt;tf.Tensor: shape=(), dtype=int32, numpy=-2065922700&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=11&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.83&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=3.76&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=65.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=70.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=60.0&gt;, &lt;tf.Tensor: shape=(), dtype=float32, numpy=11.81&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=6&gt;, &lt;tf.Tensor: shape=(), dtype=int32, numpy=3&gt;) </code></pre> <p><em>there are more rows since I have a big size of data</em> So I can access these data as they are tensor objects, <strong>Question1</strong> how can I state that <code>DataFrame.iloc[1:-1] #Features</code> and <code>DataFrame.iloc[:-1] #Label</code> <strong>Question2</strong> How can I split each file to training and testing sets to start the training process?</p>
<p>You can try something like this:</p> <pre><code>import tensorflow as tf # Create dummy data samples = 5 data = (tf.random.uniform((samples,), maxval=50, dtype=tf.int32), tf.random.uniform((samples,), maxval=50, dtype=tf.int32), tf.random.uniform((samples,), maxval=50, dtype=tf.int32), tf.random.uniform((samples,), maxval=50, dtype=tf.int32), tf.random.normal((samples,)), tf.random.normal((samples,)), tf.random.normal((samples,)), tf.random.normal((samples,)), tf.random.normal((samples,)), tf.random.normal((samples,)), tf.random.uniform((samples,), maxval=50, dtype=tf.int32), tf.random.uniform((samples,), maxval=50, dtype=tf.int32)) client1_dataset = tf.data.Dataset.from_tensor_slices(data) client1_dataset = client1_dataset.map(lambda *x: (x[1:-1], x[:-1])) for x in client1_dataset: print(x) </code></pre> <pre><code>((&lt;tf.Tensor: id=2291, shape=(), dtype=int32, numpy=43&gt;, &lt;tf.Tensor: id=2292, shape=(), dtype=int32, numpy=47&gt;, &lt;tf.Tensor: id=2293, shape=(), dtype=int32, numpy=5&gt;, &lt;tf.Tensor: id=2294, shape=(), dtype=float32, numpy=0.6790141&gt;, &lt;tf.Tensor: id=2295, shape=(), dtype=float32, numpy=-0.996265&gt;, &lt;tf.Tensor: id=2296, shape=(), dtype=float32, numpy=-0.13631395&gt;, &lt;tf.Tensor: id=2297, shape=(), dtype=float32, numpy=-0.25907364&gt;, &lt;tf.Tensor: id=2298, shape=(), dtype=float32, numpy=-0.0063462467&gt;, &lt;tf.Tensor: id=2299, shape=(), dtype=float32, numpy=-0.6242705&gt;, &lt;tf.Tensor: id=2300, shape=(), dtype=int32, numpy=20&gt;), (&lt;tf.Tensor: id=2301, shape=(), dtype=int32, numpy=29&gt;, &lt;tf.Tensor: id=2302, shape=(), dtype=int32, numpy=43&gt;, &lt;tf.Tensor: id=2303, shape=(), dtype=int32, numpy=47&gt;, &lt;tf.Tensor: id=2304, shape=(), dtype=int32, numpy=5&gt;, &lt;tf.Tensor: id=2305, shape=(), dtype=float32, numpy=0.6790141&gt;, &lt;tf.Tensor: id=2306, shape=(), dtype=float32, numpy=-0.996265&gt;, &lt;tf.Tensor: id=2307, shape=(), dtype=float32, numpy=-0.13631395&gt;, &lt;tf.Tensor: id=2308, shape=(), dtype=float32, numpy=-0.25907364&gt;, &lt;tf.Tensor: id=2309, shape=(), dtype=float32, numpy=-0.0063462467&gt;, &lt;tf.Tensor: id=2310, shape=(), dtype=float32, numpy=-0.6242705&gt;, &lt;tf.Tensor: id=2311, shape=(), dtype=int32, numpy=20&gt;)) ((&lt;tf.Tensor: id=2312, shape=(), dtype=int32, numpy=5&gt;, &lt;tf.Tensor: id=2313, shape=(), dtype=int32, numpy=29&gt;, &lt;tf.Tensor: id=2314, shape=(), dtype=int32, numpy=7&gt;, &lt;tf.Tensor: id=2315, shape=(), dtype=float32, numpy=-3.1088789&gt;, &lt;tf.Tensor: id=2316, shape=(), dtype=float32, numpy=1.1138679&gt;, &lt;tf.Tensor: id=2317, shape=(), dtype=float32, numpy=0.60722053&gt;, &lt;tf.Tensor: id=2318, shape=(), dtype=float32, numpy=0.22470044&gt;, &lt;tf.Tensor: id=2319, shape=(), dtype=float32, numpy=-0.9214293&gt;, &lt;tf.Tensor: id=2320, shape=(), dtype=float32, numpy=-0.40438855&gt;, &lt;tf.Tensor: id=2321, shape=(), dtype=int32, numpy=32&gt;), (&lt;tf.Tensor: id=2322, shape=(), dtype=int32, numpy=16&gt;, &lt;tf.Tensor: id=2323, shape=(), dtype=int32, numpy=5&gt;, &lt;tf.Tensor: id=2324, shape=(), dtype=int32, numpy=29&gt;, &lt;tf.Tensor: id=2325, shape=(), dtype=int32, numpy=7&gt;, &lt;tf.Tensor: id=2326, shape=(), dtype=float32, numpy=-3.1088789&gt;, &lt;tf.Tensor: id=2327, shape=(), dtype=float32, numpy=1.1138679&gt;, &lt;tf.Tensor: id=2328, shape=(), dtype=float32, numpy=0.60722053&gt;, &lt;tf.Tensor: id=2329, shape=(), dtype=float32, numpy=0.22470044&gt;, &lt;tf.Tensor: id=2330, shape=(), dtype=float32, numpy=-0.9214293&gt;, &lt;tf.Tensor: id=2331, shape=(), 
dtype=float32, numpy=-0.40438855&gt;, &lt;tf.Tensor: id=2332, shape=(), dtype=int32, numpy=32&gt;)) ((&lt;tf.Tensor: id=2333, shape=(), dtype=int32, numpy=43&gt;, &lt;tf.Tensor: id=2334, shape=(), dtype=int32, numpy=17&gt;, &lt;tf.Tensor: id=2335, shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: id=2336, shape=(), dtype=float32, numpy=0.26826212&gt;, &lt;tf.Tensor: id=2337, shape=(), dtype=float32, numpy=-0.2259336&gt;, &lt;tf.Tensor: id=2338, shape=(), dtype=float32, numpy=-1.5942549&gt;, &lt;tf.Tensor: id=2339, shape=(), dtype=float32, numpy=-0.8693648&gt;, &lt;tf.Tensor: id=2340, shape=(), dtype=float32, numpy=0.71869636&gt;, &lt;tf.Tensor: id=2341, shape=(), dtype=float32, numpy=-1.5996522&gt;, &lt;tf.Tensor: id=2342, shape=(), dtype=int32, numpy=16&gt;), (&lt;tf.Tensor: id=2343, shape=(), dtype=int32, numpy=6&gt;, &lt;tf.Tensor: id=2344, shape=(), dtype=int32, numpy=43&gt;, &lt;tf.Tensor: id=2345, shape=(), dtype=int32, numpy=17&gt;, &lt;tf.Tensor: id=2346, shape=(), dtype=int32, numpy=1&gt;, &lt;tf.Tensor: id=2347, shape=(), dtype=float32, numpy=0.26826212&gt;, &lt;tf.Tensor: id=2348, shape=(), dtype=float32, numpy=-0.2259336&gt;, &lt;tf.Tensor: id=2349, shape=(), dtype=float32, numpy=-1.5942549&gt;, &lt;tf.Tensor: id=2350, shape=(), dtype=float32, numpy=-0.8693648&gt;, &lt;tf.Tensor: id=2351, shape=(), dtype=float32, numpy=0.71869636&gt;, &lt;tf.Tensor: id=2352, shape=(), dtype=float32, numpy=-1.5996522&gt;, &lt;tf.Tensor: id=2353, shape=(), dtype=int32, numpy=16&gt;)) ((&lt;tf.Tensor: id=2354, shape=(), dtype=int32, numpy=18&gt;, &lt;tf.Tensor: id=2355, shape=(), dtype=int32, numpy=35&gt;, &lt;tf.Tensor: id=2356, shape=(), dtype=int32, numpy=29&gt;, &lt;tf.Tensor: id=2357, shape=(), dtype=float32, numpy=-0.9065403&gt;, &lt;tf.Tensor: id=2358, shape=(), dtype=float32, numpy=0.52284646&gt;, &lt;tf.Tensor: id=2359, shape=(), dtype=float32, numpy=1.3090674&gt;, &lt;tf.Tensor: id=2360, shape=(), dtype=float32, numpy=0.98598105&gt;, &lt;tf.Tensor: id=2361, shape=(), dtype=float32, numpy=1.0676131&gt;, &lt;tf.Tensor: id=2362, shape=(), dtype=float32, numpy=-0.11418144&gt;, &lt;tf.Tensor: id=2363, shape=(), dtype=int32, numpy=46&gt;), (&lt;tf.Tensor: id=2364, shape=(), dtype=int32, numpy=45&gt;, &lt;tf.Tensor: id=2365, shape=(), dtype=int32, numpy=18&gt;, &lt;tf.Tensor: id=2366, shape=(), dtype=int32, numpy=35&gt;, &lt;tf.Tensor: id=2367, shape=(), dtype=int32, numpy=29&gt;, &lt;tf.Tensor: id=2368, shape=(), dtype=float32, numpy=-0.9065403&gt;, &lt;tf.Tensor: id=2369, shape=(), dtype=float32, numpy=0.52284646&gt;, &lt;tf.Tensor: id=2370, shape=(), dtype=float32, numpy=1.3090674&gt;, &lt;tf.Tensor: id=2371, shape=(), dtype=float32, numpy=0.98598105&gt;, &lt;tf.Tensor: id=2372, shape=(), dtype=float32, numpy=1.0676131&gt;, &lt;tf.Tensor: id=2373, shape=(), dtype=float32, numpy=-0.11418144&gt;, &lt;tf.Tensor: id=2374, shape=(), dtype=int32, numpy=46&gt;)) ((&lt;tf.Tensor: id=2375, shape=(), dtype=int32, numpy=48&gt;, &lt;tf.Tensor: id=2376, shape=(), dtype=int32, numpy=23&gt;, &lt;tf.Tensor: id=2377, shape=(), dtype=int32, numpy=35&gt;, &lt;tf.Tensor: id=2378, shape=(), dtype=float32, numpy=-0.67218304&gt;, &lt;tf.Tensor: id=2379, shape=(), dtype=float32, numpy=2.060095&gt;, &lt;tf.Tensor: id=2380, shape=(), dtype=float32, numpy=0.33271575&gt;, &lt;tf.Tensor: id=2381, shape=(), dtype=float32, numpy=-0.073634386&gt;, &lt;tf.Tensor: id=2382, shape=(), dtype=float32, numpy=-0.7267375&gt;, &lt;tf.Tensor: id=2383, shape=(), dtype=float32, numpy=1.6494459&gt;, &lt;tf.Tensor: id=2384, shape=(), 
dtype=int32, numpy=13&gt;), (&lt;tf.Tensor: id=2385, shape=(), dtype=int32, numpy=36&gt;, &lt;tf.Tensor: id=2386, shape=(), dtype=int32, numpy=48&gt;, &lt;tf.Tensor: id=2387, shape=(), dtype=int32, numpy=23&gt;, &lt;tf.Tensor: id=2388, shape=(), dtype=int32, numpy=35&gt;, &lt;tf.Tensor: id=2389, shape=(), dtype=float32, numpy=-0.67218304&gt;, &lt;tf.Tensor: id=2390, shape=(), dtype=float32, numpy=2.060095&gt;, &lt;tf.Tensor: id=2391, shape=(), dtype=float32, numpy=0.33271575&gt;, &lt;tf.Tensor: id=2392, shape=(), dtype=float32, numpy=-0.073634386&gt;, &lt;tf.Tensor: id=2393, shape=(), dtype=float32, numpy=-0.7267375&gt;, &lt;tf.Tensor: id=2394, shape=(), dtype=float32, numpy=1.6494459&gt;, &lt;tf.Tensor: id=2395, shape=(), dtype=int32, numpy=13&gt;)) </code></pre> <p>To create test and train subsets just use <code>take</code> and <code>skip</code>:</p> <pre><code>test = client1_dataset.take(2) train = client1_dataset.skip(2) </code></pre> <p>If you want to split each csv file into a test and a training data set, you should do this before creating a <code>tf</code> data set.</p>
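<p>To tie this back to the original <code>CsvDataset</code> pipeline, here is a sketch only, assuming (from the question's column layout) that the first CSV column is an id, the last column is the label, and everything in between is a feature:</p>
<pre><code>@tf.function
def create_tf_dataset_for_client_fn(dataset_path):
    ds = tf.data.experimental.CsvDataset(
        dataset_path, record_defaults=record_defaults, header=True)
    # split each CSV row into (features, label):
    # column 0 is assumed to be an id, the last column the label
    return ds.map(lambda *row: (
        tf.stack([tf.cast(t, tf.float32) for t in row[1:-1]]),
        row[-1]))
</code></pre>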
python|tensorflow|tensorflow-datasets|tensorflow-federated|federated-learning
1
375,950
71,388,924
How can I use Tensorflow.Checkpoint to recover a previously trained net
<p>I'm trying to understand how to recover a saved/checkpointed net using <code>tensorflow.train.Checkpoint.restore</code>.</p> <p>I'm using code that's strongly based on Google's Colab tutorial for creating a pix2pix GAN. Below, I've excerpted the key portion, which just attempts to instantiate a new net, then to fill it with weights from a previous net that was saved and checkpointed.</p> <p>I'm assigning a unique(ish) id number to a particular instantiation of a net by summing all the weights of the net. I compare these id numbers both at the creation of the net, and after I've attempted to recover the checkpointed net</p> <pre><code>def main(opt): # Initialize pix2pix GAN using arguments input from command line p2p = Pix2Pix(vars(opt)) print(opt) # print sum of initial weights for net print(&quot;Init Model Weights:&quot;, sum([x.numpy().sum() for x in p2p.generator.weights])) # Create or read from model checkpoints checkpoint = tf.train.Checkpoint(generator_optimizer=p2p.generator_optimizer, discriminator_optimizer=p2p.discriminator_optimizer, generator=p2p.generator, discriminator=p2p.discriminator) # print sum of weights from checkpoint, to ensure it has access # to relevant regions of p2p print(&quot;Checkpoint Weights:&quot;, sum([x.numpy().sum() for x in checkpoint.generator.weights])) # Recover Checkpointed net checkpoint.restore(tf.train.latest_checkpoint(opt.weights)).expect_partial() # print sum of weights for p2p &amp; checkpoint after attempting to restore saved net print(&quot;Restore Model Weights:&quot;, sum([x.numpy().sum() for x in p2p.generator.weights])) print(&quot;Restored Checkpoint Weights:&quot;, sum([x.numpy().sum() for x in checkpoint.generator.weights])) print(&quot;Done.&quot;) if __name__ == '__main__': opt = parse_opt() main(opt) </code></pre> <p>The output I got when I ran this code was as follows:</p> <pre><code>Namespace(channels='1', data='data', img_size=256, output='output', weights='weights/ckpt-40.data-00000-of-00001') ## These are the input arguments, the images have only 1 channel (they're gray scale) ## The directory with data is ./data, the images are 265x256 ## The output directory is ./output ## The checkpointed net is stored in ./weights/ckpt-40.data-00000-of-00001 ## Sums of nets' weights Init Model Weights: 11047.206374436617 Checkpoint Weights: 11047.206374436617 Restore Model Weights: 11047.206374436617 Restored Checkpoint Weights: 11047.206374436617 Done. </code></pre> <p>There is no change in the sum of the net's weights before and after recovering the checkpointed version, although <code>p2p</code> and <code>checkpoint</code> do seem to have access to the same locations in memory.</p> <p>Why am I not recovering the saved net?</p>
<p>The problem arose because tf.Checkpoint.restore needs the directory in which the checkpointed net is stored, not the specific file (or, what I took to be the specific file - ./weights/ckpt-40.data-00000-of-00001)</p> <p>When it is not given a valid directory, it silently proceeds to the next line of code, without updating the net or throwing an error. The fix was to give it the directory with the relevant checkpoint files, rather than just the file I believed to be relevant.</p>
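<p>A minimal sketch of the fix; the <code>weights</code> directory below is the one from the question:</p>
<pre><code>ckpt_dir = 'weights'  # the directory, not 'weights/ckpt-40.data-00000-of-00001'
latest = tf.train.latest_checkpoint(ckpt_dir)  # returns None if nothing is found
assert latest is not None, 'no checkpoint found in ' + ckpt_dir
checkpoint.restore(latest).expect_partial()
</code></pre>
<p>Checking the return value of <code>tf.train.latest_checkpoint</code> is what turns the silent failure into a loud one.</p>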
tensorflow|checkpointing
0
375,951
71,295,254
Writing Excel workbooks to a Google Cloud Storage bucket using Google Cloud Composer
<p>I have a requirement where I have to create an Excel workbook (.xlsx) with 2 different worksheets. But when storing the data into a GCS bucket, I get a file-not-found error; I was able to save .csv files successfully. Please find the example below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

a = [1, 2, 3]
b = [4, 5, 6]
af1 = pd.DataFrame(a)
bf1 = pd.DataFrame(b)
af1.columns = ['A']
bf1.columns = ['B']
with pd.ExcelWriter('gs://&lt;bucket-name&gt;/output.xlsx') as writer:
    af1.to_excel(writer, sheet_name=&quot;A&quot;, index=False)
    bf1.to_excel(writer, sheet_name=&quot;B&quot;, index=False)
</code></pre> <p>This fails with file not found, whereas if I write a csv to the same path (using <code>.to_csv(&quot;samepath&quot;)</code>) I can see the file. Please help.</p>
<p>You are trying to access the bucket directly without using the <a href="https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python" rel="nofollow noreferrer">Google Cloud Storage API Client Libraries</a>. This is not a recommended approach. So try to use the Google Cloud Storage API Client Libraries and follow the below steps for your requirement:</p> <p>Step1: Add the <a href="https://pypi.org/project/XlsxWriter/" rel="nofollow noreferrer">xlsxwriter</a> package in Cloud Composer before triggering a DAG:</p> <p>Environment details -&gt; PYPI Packages -&gt; Edit -&gt; Package Name -&gt; Type <code>xlsxwriter</code> -&gt; Click Save</p> <p>Step2: Try the below code:</p> <pre><code>import airflow from airflow import DAG from airflow.utils import timezone from airflow.operators.python import PythonOperator from google.cloud import storage import pandas as pd from xlsxwriter import Workbook def invoke_cloud_storage(): a = [1, 2, 3] b = [4, 5, 6] af1 = pd.DataFrame(a) bf1 = pd.DataFrame(b) af1.columns = ['A'] bf1.columns = ['B'] writer=pd.ExcelWriter('file-name.xlsx') af1.to_excel(writer, sheet_name=&quot;A&quot;, index=False) bf1.to_excel(writer, sheet_name=&quot;B&quot;, index=False) writer.save() storage_client = storage.Client() bucket = storage_client.bucket('bucket-name') blob = bucket.blob('file-name.xlsx') blob.upload_from_filename('file-name.xlsx') with DAG( 'pandas_storage', description='Upload file in Cloud Storage', schedule_interval=None, start_date=airflow.utils.dates.days_ago(2), max_active_runs=1, catchup=False ) as dag: # Invoke cloud run process_file = PythonOperator( task_id='invoke_cloud_storage', python_callable=invoke_cloud_storage, dag=dag ) process_file </code></pre> <p>If you still need to access the bucket without using Google Cloud Storage API Client Libraries, add <a href="https://pypi.org/project/gcsfs/" rel="nofollow noreferrer">gcsfs</a> and <a href="https://pypi.org/project/fsspec/" rel="nofollow noreferrer">fsspec</a> libraries as dependencies in Cloud Composer. But these two libraries are not managed by Google and this is not a recommended approach, use it at your own risk. Follow the below steps for your requirement:</p> <p>Step1: Add the <code>xlsxwriter</code>, <code>gcsfs</code> and <code>fsspec</code> packages in Cloud Composer before triggering a DAG:</p> <p>Environment details -&gt; PYPI Packages -&gt; Edit -&gt; Add Packages -&gt; Click Save.</p> <p><a href="https://i.stack.imgur.com/sjX3N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sjX3N.png" alt="enter image description here" /></a></p> <p>Step2 : Try the below code:</p> <pre><code>import airflow from airflow import DAG from airflow.utils import timezone from airflow.operators.python import PythonOperator import pandas as pd from xlsxwriter import Workbook def invoke_cloud_storage(): a = [1, 2, 3] b = [4, 5, 6] af1 = pd.DataFrame(a) bf1 = pd.DataFrame(b) af1.columns = ['A'] bf1.columns = ['B'] with pd.ExcelWriter('gs://bucket-name/file-name.xlsx') as writer: af1.to_excel(writer, sheet_name=&quot;A&quot;, index=False) bf1.to_excel(writer, sheet_name=&quot;B&quot;, index=False) with DAG( 'pandas_storage_nr', description='Upload file in Cloud Storage', schedule_interval=None, start_date=airflow.utils.dates.days_ago(2), max_active_runs=1, catchup=False ) as dag: # Invoke cloud run process_file = PythonOperator( task_id='invoke_cloud_storage', python_callable=invoke_cloud_storage, dag=dag ) process_file </code></pre>
pandas|google-cloud-platform|google-cloud-storage|google-cloud-composer
1
375,952
71,343,980
Sum the values in selected rows of a data frame
<p>I have a list of dictionaries:</p> <pre><code>mylist = [{'Date': '01/02/2020', 'Value': '13'}, {'Date': '01/03/2020', 'Value': '2'}, {'Date': '10/3/2020', 'Value': '4'}, {'Date': '12/25/2020', 'Value': '2'}] </code></pre> <p>I wanted to sum the Values of the Date from 01/01/2020 to 01/04/2020. I tried the following to select the rows within the date range:</p> <pre><code>from datetime import datetime dfmylist = pd.DataFrame(mylist) dfmylist['Date'] = pd.to_datetime(dfmylist['Date']) dfmylistnew = (dfmylist['Date'] &gt; '01/01/2020') &amp; (dfmylist['Date'] &lt;= '01/04/2020') dfmylistnew1 = dfmylist.loc[dfmylistnew] dfmylistnew1 </code></pre> <p>I got the output data frame:</p> <pre><code> Date Value 0 2020-01-02 13 1 2020-01-03 2 </code></pre> <p>I want to get the sum <code>Value</code> from the above data frame, which is 15</p> <p>I tried:</p> <pre><code>total = dfmylistnew1['Value'].sum() </code></pre> <p>but the output is 132, instead of 15</p>
<p>From your data, convert values with the right type:</p> <pre><code>mylist = [{'Date': '01/02/2020', 'Value': '13'}, {'Date': '01/03/2020', 'Value': '2'}, {'Date': '10/3/2020', 'Value': '4'}, {'Date': '12/25/2020', 'Value': '2'}] df = pd.DataFrame(mylist).astype({'Date': 'datetime64', 'Value': 'int'}) total = df.loc[df['Date'].between('01/01/2020', '01/04/2020', inclusive='right'), 'Value'].sum() print(total) # Output 15 </code></pre>
python|pandas
3
375,953
71,422,639
How to load data from multiple datasets in pytorch
<p>I have two datasets of images - indoors and outdoors, they don't have the same number of examples.</p> <p>Each dataset has images that contain a certain number of classes (minimum 1 maximum 4), these classes can appear in both datasets, and each class has 4 categories - red, blue, green, white. Example: Indoor - cats, dogs, horses Outdoor - dogs, humans</p> <p>I am trying to train a model, where I tell it, &quot;here is an image that contains a cat, tell me it's color&quot; regardless of where it was taken (Indoors, outdoors, In a car, on the moon)</p> <p>To do that, I need to present my model examples so that every batch has only one category (cat, dog, horse or human), but I want to sample from all datasets (two in this case) that contains these objects and mix them. How can I do this?</p> <p>It has to take into account that the number of examples in each dataset is different, and that some categories appear in one dataset where others can appear in more than one. and each batch must contain only one category.</p> <p>I would appreciate any help, I have been trying to solve this for a few days now.</p>
<p>Assuming the question is:</p> <ol> <li>Combine 2+ data sets with potentially overlapping categories of objects (distinguishable by label)</li> <li>Each object has 4 &quot;subcategories&quot; for each color (distinguishable by label)</li> <li>Each batch should only contain a single object category</li> </ol> <p>The first step will be to ensure consistency of the object labels from both data sets, if not already consistent. For example, if the dog class is label <code>0</code> in the first data set but label <code>2</code> in the second data set, then we need to make sure the two dog categories are correctly merged. We can do this &quot;translation&quot; with a simple data set wrapper:</p> <pre class="lang-py prettyprint-override"><code>from torch.utils.data import Dataset, ConcatDataset, DataLoader

class TranslatedDataset(Dataset):
  &quot;&quot;&quot;
  Args:
    dataset: The original dataset.
    translate_label: A lambda (function) that maps the original
      dataset label to the label it should have in the combined data set
  &quot;&quot;&quot;
  def __init__(self, dataset, translate_label):
    super().__init__()
    self._dataset = dataset
    self._translate_label = translate_label

  def __len__(self):
    return len(self._dataset)

  def __getitem__(self, idx):
    inputs, target = self._dataset[idx]
    return inputs, self._translate_label(target)
</code></pre> <p>The next step is combining the translated data sets together, which can be done easily with a <code>ConcatDataset</code> (note <code>==</code> rather than <code>is</code> for the label comparison, since the labels may be tensors or large ints):</p> <pre class="lang-py prettyprint-override"><code>first_original_dataset = ...
second_original_dataset = ...

first_translated = TranslatedDataset(
  first_original_dataset,
  lambda y: 0 if y == 2 else 2 if y == 0 else y, # or similar
)

second_translated = TranslatedDataset(
  second_original_dataset,
  lambda y: y, # or similar
)

combined = ConcatDataset([first_translated, second_translated])
</code></pre> <p>Finally, we need to restrict batch sampling to the same class, which is possible with a custom <code>Sampler</code> when creating the data loader.</p> <pre class="lang-py prettyprint-override"><code>class SingleClassSampler(torch.utils.data.Sampler):
  def __init__(self, dataset, batch_size):
    super().__init__(dataset)
    # We need to create sequential groups
    # with batch_size elements from the same class
    indices_for_target = {} # dict to store a list of indices for each target

    for i, (_, target) in enumerate(dataset):
      # converting to string since Tensors hash by reference, not value
      str_targ = str(target)
      if str_targ not in indices_for_target:
        indices_for_target[str_targ] = []
      indices_for_target[str_targ] += [i]

    # make sure we have a whole number of batches for each class
    # (len(v) - len(v) % batch_size keeps all of v when it divides evenly)
    trimmed = {
      k: v[:len(v) - len(v) % batch_size]
      for k, v in indices_for_target.items()
    }

    # concatenate the lists of indices for each class
    self._indices = [i for idxs in trimmed.values() for i in idxs]

  def __len__(self):
    return len(self._indices)

  def __iter__(self):
    yield from self._indices
</code></pre> <p>Then to use the sampler (note that <code>shuffle</code> must be left off, since it is mutually exclusive with a custom <code>sampler</code>):</p> <pre class="lang-py prettyprint-override"><code>loader = DataLoader(
  combined,
  sampler=SingleClassSampler(combined, 64),
  batch_size=64,
)
</code></pre> <p>I haven't run this code, so it might not be exactly right, but hopefully it will put you on the right track.</p> <hr /> <p><a href="https://pytorch.org/docs/stable/data.html" rel="nofollow noreferrer">torch.utils.data Docs</a></p>
pytorch|dataset|multiple-databases|dataloader
1
375,954
71,352,723
Sort categorical values within groupby in pandas
<p>I have this example df:</p> <pre><code> df3 = pd.DataFrame({'Customer':['Sara','John','Didi','Sara','Didi' ,'Didi'], 'Date': ['15-12-2021', '1-1-2022' , '1-3-2022','15-3-2022', '1-1-2022' , '1-4-2022'], 'Month': ['December-2021', 'January-2022', 'March-2022','March-2022', 'January-2022', 'April-2022'], 'Product': ['grocery','electronics','personal-care','grocery','electronics','personal-care'], 'status': ['purchased', 'refunded', 'refunded','refunded', 'purchased', 'refunded'] }) df3 </code></pre> <p>gives:</p> <pre><code>Customer Date Month Product status 0 Sara 15-12-2021 December-2021 grocery purchased 1 John 1-1-2022 January-2022 electronics refunded 2 Didi 1-3-2022 March-2022 personal-care refunded 3 Sara 15-3-2022 March-2022 grocery refunded 4 Didi 1-1-2022 January-2022 electronics purchased 5 Didi 1-4-2022 April-2022 personal-care refunded </code></pre> <p>I am trying to group by customer, product &amp; month and get the first status and then I want the groupby sorted by Month column:</p> <pre><code>df3.sort_values('Month').groupby(['Customer','Product','Month','Date']).agg({'status':'first'}).reset_index() </code></pre> <p>I got:</p> <pre><code> Customer Product Month Date status 0 Didi electronics January-2022 1-1-2022 purchased 1 Didi personal-care April-2022 1-4-2022 refunded 2 Didi personal-care March-2022 1-3-2022 refunded 3 John electronics January-2022 1-1-2022 refunded 4 Sara grocery December-2021 15-12-2021 purchased 5 Sara grocery March-2022 15-3-2022 refunded </code></pre> <p>I expected for <code>index 1 &amp; 2</code> to be reversed in order, March before April so what I tried to do is:</p> <pre><code>months = {'December-2021':0,'January-2022':1,'February-2022':2,'March-2022':3,'April-2022':4,'May-2022':5,'June-2022':6,'July-2022':7,'August-2022':8,'September-2022':9,'October-2022':10,'November-2022':11} </code></pre> <p>then map this through sort values:</p> <pre><code>df3.sort_values(by=['Month'], key=lambda x: x.map(months)).groupby(['Customer','Product','Month','Date']).agg({'status':'first'}).reset_index() </code></pre> <p>But I got the same exact results without the correct order</p>
<p>Convert <code>Month</code> to a real datetime first; <code>groupby</code> sorts by its keys, so once <code>Month</code> is a datetime the groups come out in chronological order, and you can format it back to a string at the end:</p>
<pre><code>df3['Month'] = pd.to_datetime(df3['Month'], format='%B-%Y')

df3 = (df3.groupby(['Customer', 'Product', 'Month', 'Date'])
          .agg({'status': 'first'})
          .reset_index())

df3['Month'] = df3['Month'].dt.strftime('%B-%Y')
df3
</code></pre> <p>Your desired output:</p> <pre><code>  Customer        Product          Month        Date     status
0     Didi    electronics   January-2022    1-1-2022  purchased
1     Didi  personal-care     March-2022    1-3-2022   refunded
2     Didi  personal-care     April-2022    1-4-2022   refunded
3     John    electronics   January-2022    1-1-2022   refunded
4     Sara        grocery  December-2021  15-12-2021  purchased
5     Sara        grocery     March-2022   15-3-2022   refunded
</code></pre>
python|pandas|numpy|sorting
2
375,955
71,430,787
Looping through pandas value_counts()
<p>I'm manually looking for all values in my df columns like this (to search for weird entries):</p> <pre><code>df['sex'].value_counts(), df['famsize'].value_counts(), df['Pstatus'].value_counts(), df['traveltime'].value_counts()...
</code></pre> <p>then I get:</p> <pre><code>(F    591
 M    453
 Name: sex, dtype: int64,
 GT3    738
 LE3    306
 Name: famsize, dtype: int64,
 T    923
 A    121
 Name: Pstatus, dtype: int64,
 1    623
 2    320
 3     77
 4     24
 (...)
</code></pre> <p>But there are so many columns, so I tried to loop through them:</p> <pre><code>for v in df.columns:
    df[v].value_counts()
</code></pre> <p>but nothing is returned. I also tried:</p> <pre><code>for v in df.columns:
    df[v].value_counts().apply(display)
</code></pre> <p>and:</p> <pre><code>for v in df.columns:
    df[v].value_counts().apply(print)
</code></pre> <p>but that returns a long list of numbers without the indexes/labels.</p> <p>Is there another way to automatically look through my df's values, ideally still with a for loop?</p>
<p>Inside a script, or inside a loop in a notebook cell, a bare expression is not displayed automatically (only the last expression of a cell is echoed), so your loop runs but shows nothing. Just print explicitly:</p> <pre class="lang-py prettyprint-override"><code>for v in df.columns:
    print(df[v].value_counts())
</code></pre>
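<p>And if you would rather keep the results around than print them, a dict comprehension collects one <code>value_counts</code> Series per column:</p>
<pre class="lang-py prettyprint-override"><code>counts = {col: df[col].value_counts() for col in df.columns}
counts['sex']  # the same Series you printed manually
</code></pre>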
python|pandas|dataframe
1
375,956
71,302,832
Replace a specific value with another using Pandas
<p>I would like to replace a specific value or values with another value.</p> <p><strong>Data</strong></p> <pre><code>ID  Date     hi     hello
AA  Q4.2022  1      0
BB  Q4.2022  1      1
CC  Q4.2022  HI111  1
</code></pre> <p><strong>Desired</strong></p> <pre><code>ID  Date     hi     hello
AA  Q4.2022  1      0
BB  Q4.2022  1      1
CC  Q4.2022         1
</code></pre> <p><strong>Doing</strong></p> <p>I am using this statement; however, it deletes all the values</p> <pre><code>df['hi'] = df['hi'].str.replace('HI111','')
</code></pre> <p>Any suggestion is appreciated.</p>
<p>Perhaps, the numbers are int types; then you could try <code>to_numeric</code> + <code>isna</code> and use it in <code>mask</code>:</p> <pre><code>df['hi'] = df['hi'].mask(pd.to_numeric(df['hi'], errors='coerce').isna(), '') </code></pre> <p>or if you want to change the type of the numbers to strings as well, you could use <code>to_numeric</code> + <code>fillna</code> + <code>astype(str)</code>:</p> <pre><code>df['hi'] = (pd.to_numeric(df['hi'], errors='coerce').fillna('') .astype(str).str.split('.').str[0]) </code></pre> <p>Output:</p> <pre><code> ID Date hi hello 0 AA Q4.2022 1 0 1 BB Q4.2022 1 1 2 CC Q4.2022 1 </code></pre>
python|pandas|dataframe|numpy
1
375,957
71,119,396
How to detect dips in a pandas or numpy array when the data has repetitions?
<p>I'm trying to find the position of dips and bumps in an array by checking that <code>n &lt; n-1</code> and <code>n &gt; n+1</code>.</p> <p>It is a valid approach, but it fails when data repeats before bouncing, e.g. <code>{100,80,80,100}</code>.</p> <p>See this example:</p> <pre><code>import numpy as np import pandas as pd data = np.array([100,250,200,350,650,650,650,500,400,300,300,350,100]) dips = np.flatnonzero((np.roll(data,1) &gt; data).astype(int) &amp; (data &lt; np.roll(data,-1)).astype(int)) bumps = np.flatnonzero((np.roll(data,1) &lt; data).astype(int) &amp; (data &gt; np.roll(data,-1)).astype(int)) print(dips,bumps) </code></pre> <p>I could make it work by removing the repetitive elements, so we get more output:</p> <pre><code>df = pd.DataFrame({'close':data}) data = (df.loc[df.shift(-1)['close'] != df['close']]).to_numpy() dips = np.flatnonzero((np.roll(data,1) &gt; data).astype(int) &amp; (data &lt; np.roll(data,-1)).astype(int)) bumps = np.flatnonzero((np.roll(data,1) &lt; data).astype(int) &amp; (data &gt; np.roll(data,-1)).astype(int)) print(dips,bumps) </code></pre> <p>However, clearly in this case the order will not be preserved, hence the result is invalid.</p> <p>I don't know it seems like an easy problem but I can't quite solve it yet without falling back to <code>i, j</code> loops.</p> <p><strong>Edit</strong>: the expected output should capture all dips and bumps in a correct order:</p> <pre><code>dips: [2,9] or [2,10] bumps: [1,4,11] or [1,5,11] or [1,6,11] </code></pre> <p>Since values are same in there we could use first, middle or last position.</p> <p><a href="https://i.stack.imgur.com/25mSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/25mSM.png" alt="enter image description here" /></a></p>
<p>I got it: <code>scipy.signal.find_peaks</code> handles the plateaus. Note that it returns an <code>(indices, properties)</code> tuple, so unpack the indices:</p> <pre><code>from scipy.signal import find_peaks

dips, _ = find_peaks(-data)
bumps, _ = find_peaks(data)
</code></pre> <p>Flat runs are resolved to the middle sample of the plateau (rounded down), so the sample data gives <code>dips = [2, 9]</code> and <code>bumps = [1, 5, 11]</code>, matching the expected output.</p>
python|arrays|pandas|numpy
1
375,958
71,167,116
Convert a column to a list of previous columns in a DataFrame
<p>I would like to create a column in the form of a list of values from two previous columns, such as a location column made up of the long and lat columns.</p> <p><a href="https://i.stack.imgur.com/0fqSK.png" rel="nofollow noreferrer">This is what the DataFrame looks like</a></p>
<p>You can create a new column based on other columns using <code>zip</code>, as follows:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    'admin_port': ['NORTH SHIELDS', 'OBAN'],
    'longitude': [-1.447104, -5.473469],
    'latitude': [55.008766, 54.415695],
})

df['new'] = pd.Series(list(zip(df['longitude'].values, df['latitude'].values)))

print(df)
</code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df
      admin_port  longitude   latitude                     new
0  NORTH SHIELDS  -1.447104  55.008766  (-1.447104, 55.008766)
1           OBAN  -5.473469  54.415695  (-5.473469, 54.415695)
</code></pre> <p>For your information, you can see how to use <code>zip()</code> here: <a href="https://www.w3schools.com/python/ref_func_zip.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/ref_func_zip.asp</a></p>
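<p>A slightly shorter equivalent: assigning the zipped list directly also sidesteps any index-alignment surprises from the intermediate <code>pd.Series</code>:</p>
<pre class="lang-py prettyprint-override"><code>df['new'] = list(zip(df['longitude'], df['latitude']))
</code></pre>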
pandas|dataframe
0
375,959
71,275,335
IndexError after trying to iterate over rows and columns
<p>Imagine a table with rows and columns. I want to read the table row by row. I don't understand what has to get fixed and what's the best way to do so:</p> <pre><code>import pandas as pd num_rows = 4 num_cols = 5 value = &quot;test&quot; for i in range(num_rows): s = pd.Series() for c in range(num_cols): s[c] = value </code></pre> <p>Output:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 4, in &lt;module&gt; File &quot;c:\Users\chris\projects\stockfinder\venv\lib\site-packages\pandas\core\series.py&quot;, line 1067, in __setitem__ values[key] = value IndexError: index 0 is out of bounds for axis 0 with size 0 </code></pre>
<p>The <code>IndexError</code> comes from creating <code>s</code> empty: an integer key on an empty Series is treated positionally, and there is no position 0 to write to. Give the Series its index (and a dtype) up front:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

num_rows = 4
num_cols = 5
value = &quot;test&quot;

for i in range(num_rows):
    # the fix: create the Series with an index so s[c] has somewhere to go
    s = pd.Series(index=range(num_cols), dtype=object)
    for c in range(num_cols):
        s[c] = value
</code></pre>
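<p>A sketch of an arguably simpler pattern: build each row as a dict and construct the Series in one go, skipping the inner assignment loop entirely:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(num_rows):
    s = pd.Series({c: value for c in range(num_cols)})
</code></pre>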
python|pandas
2
375,960
71,273,200
Reading a CSV file into Pandas
<p>I have csv data that looks like this, and I'm trying to read it into a pandas df. I've tried all sorts of combinations given the ample documentation online, for example:</p> <pre><code>pd.read_csv(&quot;https://www.nwrfc.noaa.gov/natural/nat_norm_text.cgi?id=TDAO3.csv&quot;, delimiter=',', skiprows=0, low_memory=False)
</code></pre> <p>and I get this error:</p> <pre><code>ParserError: Error tokenizing data. C error: Expected 1 fields in line 3, saw 989
</code></pre> <p>Or, like this, but I get an empty dataframe:</p> <pre><code>pd.read_csv('https://www.nwrfc.noaa.gov/natural/nat_norm_text.cgi?id=TDAO3.csv',
            skiprows=2,
            skipfooter=3,
            index_col=[0],
            header=None,
            engine='python', # c engine doesn't have skipfooter
            sep='delimiter')

Out[31]:
Empty DataFrame
Columns: []
Index: []
</code></pre> <p>The first 10 lines of the csv file look like this:</p> <pre><code># Water Supply Monthly Volumes for COLUMBIA - THE DALLES DAM (TDAO3)
# Volumes are in KAF
ID,Calendar Year,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
TDAO3,1948,,,,,,,,,,6866.8,4307.04,4379.38
TDAO3,1949,3546.71,4615.1,8513.31,15020.45,35251.67,21985.99,11226.06,6966.73,4727.37,4406.29,5266.74,5595.91
TDAO3,1950,4353.86,5540.21,9696.27,12854.81,23359.51,39246.78,23393.23,9676.77,5729.74,6990.31,8300.03,8779.57
TDAO3,1951,8032.32,10295.98,7948.59,16144.8,36000.88,28334.09,19735.49,9308.15,6546.95,8907.1,6461.14,6425.76
TDAO3,1952,4671,6222.25,6551.62,18678.3,34866.91,27120.65,15994.18,7907.55,4810.39,3954.32,3259.29,3231.49
TDAO3,1953,7839.72,7870.96,6527.74,9474.66,23384.47,32668.32,17422.63,8655.16,5220.04,5130.46,5183.5,5915.14
TDAO3,1954,5197.51,5967.07,6718.36,10813.69,29190.37,32673.26,29624.38,13456.13,9165.78,5440.92,5732.22,4973.53
</code></pre> <p>thank you,</p>
<p>It is not link directly to file CSV but to page which displays it as HTML using tags <code>&lt;pre&gt;</code>, <code>&lt;br&gt;</code>, etc. and this makes problem.</p> <p>But you can use <code>requests</code> to download it as text.</p> <p>Later you can use standard <code>string</code>-functions to get text between <code>&lt;pre&gt;</code> and <code>&lt;/pre&gt;</code> and replace <code>&lt;br&gt;</code> with <code>'\n'</code> - and you will have text with correct CSV.</p> <p>And later you can use <code>io.StringIO</code> to create file in memory - to load it with <code>pd.read_csv()</code> without saving on disk.</p> <pre><code>import pandas as pd import requests import io url = &quot;https://www.nwrfc.noaa.gov/natural/nat_norm_text.cgi?id=TDAO3.csv&quot; response = requests.get(url) start = response.text.find('&lt;pre&gt;') + len('&lt;pre&gt;') end = response.text.find('&lt;/pre&gt;') pre = response.text[start:end] text = pre.replace('&lt;br&gt;', '\n') buf = io.StringIO(text) # file-like object in memory df = pd.read_csv(buf, skiprows=2, low_memory=False) print(df.to_string()) </code></pre> <p>Result</p> <pre><code> ID Calendar Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 0 TDAO3 1948 NaN NaN NaN NaN NaN NaN NaN NaN NaN 6866.80 4307.04 4379.38 1 TDAO3 1949 3546.71 4615.10 8513.31 15020.45 35251.67 21985.99 11226.06 6966.73 4727.37 4406.29 5266.74 5595.91 2 TDAO3 1950 4353.86 5540.21 9696.27 12854.81 23359.51 39246.78 23393.23 9676.77 5729.74 6990.31 8300.03 8779.57 3 TDAO3 1951 8032.32 10295.98 7948.59 16144.80 36000.88 28334.09 19735.49 9308.15 6546.95 8907.10 6461.14 6425.76 4 TDAO3 1952 4671.00 6222.25 6551.62 18678.30 34866.91 27120.65 15994.18 7907.55 4810.39 3954.32 3259.29 3231.49 5 TDAO3 1953 7839.72 7870.96 6527.74 9474.66 23384.47 32668.32 17422.63 8655.16 5220.04 5130.46 5183.50 5915.14 6 TDAO3 1954 5197.51 5967.07 6718.36 10813.69 29190.37 32673.26 29624.38 13456.13 9165.78 5440.92 5732.22 4973.53 7 TDAO3 1955 4124.26 3570.41 3843.46 7993.82 18505.47 31619.54 20408.54 8922.94 4983.31 5842.70 6982.45 9076.44 8 TDAO3 1956 8079.70 5366.62 8818.69 19754.46 40600.06 40447.34 19846.89 9726.93 5503.69 5446.20 4988.98 6006.80 9 TDAO3 1957 3940.08 4411.33 9155.00 12271.77 40111.86 27864.70 11585.75 6795.70 4613.31 4767.38 4087.55 4789.04 10 TDAO3 1958 4838.12 8246.89 7303.03 13902.66 33958.88 26239.62 12516.52 6898.78 4968.03 5198.19 6662.24 7616.43 ... rest ... </code></pre>
python|pandas|csv
1
375,961
71,232,777
What does the result numbers mean in Tensorflow text_classification
<p>Tensorflow text_classification:</p> <p><a href="https://www.tensorflow.org/tutorials/keras/text_classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/text_classification</a></p> <p>There are only two classes in this text_classification example,</p> <pre><code>Label 0 corresponds to neg
Label 1 corresponds to pos
</code></pre> <p>But the following prediction values are neither <code>0</code> nor <code>1</code>:</p> <pre><code>examples = [
  &quot;The movie was great!&quot;,
  &quot;The movie was okay.&quot;,
  &quot;The movie was terrible...&quot;
]

export_model.predict(examples)

array([[0.5921171 ],
       [0.41369876],
       [0.33293992]], dtype=float32)
</code></pre> <p>The result of &quot;<code>The movie was great!</code>&quot; is <code>0.5921171</code>, but what does that mean?</p> <p>Does it mean <code>positive</code> for <code>value &gt;= 0.5</code> and <code>negative</code> for <code>value &lt; 0.5</code>?</p> <p>It should predict it as something like:</p> <pre><code>array([[1],
       [1],
       [0]], dtype=float32)
</code></pre> <p>At the end of that link, there is an exercise:</p> <p><strong>Exercise: multi-class classification on Stack Overflow questions</strong></p> <p>The threshold 0.5 does not work for multi-class classification on Stack Overflow questions, because there are 4 labels &amp; classes in total in this exercise.</p> <pre><code>for i in range(len(raw_train_ds.class_names)):
  print(&quot;Label: {}, Class Name: {}&quot;.format(i, raw_train_ds.class_names[i]))

Label: 0, Class Name: csharp
Label: 1, Class Name: java
Label: 2, Class Name: javascript
Label: 3, Class Name: python
</code></pre> <p>I'm using the same examples array for prediction:</p> <pre><code>examples = [
  &quot;The movie was great!&quot;,
  &quot;The movie was okay.&quot;,
  &quot;The movie was terrible...&quot;
]

export_model.predict(examples)

array([[0.52356344, 0.4763114 , 0.54468685, 0.4438951 ],
       [0.52287084, 0.48242405, 0.5407451 , 0.4425373 ],
       [0.5221944 , 0.4766879 , 0.5448719 , 0.4454918 ]], dtype=float32)
</code></pre> <p>How do I set a threshold for the <code>four</code>-class <code>multi-class classification</code> on Stack Overflow questions?</p> <p><strong>I think it is not related to threshold.</strong></p>
<p>You should define a threshold: any output above it is treated as positive, anything below as negative. The exported model ends in a single sigmoid unit, so each prediction is a probability-like score between 0 and 1, and 0.5 is the conventional cutoff. In your example a threshold of 0.4, for instance, gives the <code>[1, 1, 0]</code> you expected.</p>
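<p>For the multi-class Stack Overflow exercise, thresholding is indeed not the right tool: with one score per class, you take the class with the highest score instead. A sketch, reusing the prediction array from the question:</p>
<pre><code>import numpy as np

scores = export_model.predict(examples)  # shape (n_examples, 4) in the exercise
predicted = np.argmax(scores, axis=-1)   # index of the best class per example
labels = [raw_train_ds.class_names[i] for i in predicted]
</code></pre>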
python|tensorflow
1
375,962
52,023,257
Transpose or Pivot multiple columns in Pandas
<p>I would like to transpose multiple columns in a dataframe. I have looked through most of the transpose and pivot pandas posts but could not get it to work. </p> <p>Here is what my dataframe looks like.</p> <pre><code>df = pd.DataFrame() df['L0'] = ['fruit', 'fruit', 'fruit', 'fruit', 'fruit', 'fruit', 'vegetable', 'vegetable', 'vegetable', 'vegetable', 'vegetable', 'vegetable'] df['L1'] = ['apple', 'apple', 'apple', 'banana', 'banana', 'banana', 'tomato', 'tomato', 'tomato', 'lettuce', 'lettuce', 'lettuce'] df['Type'] = ['X', 'Y', 'Z', 'X', 'Y', 'Z', 'X', 'Y', 'Z', 'X', 'Y', 'Z'] df['A'] = [3, 0, 4, 3, 1, 3, 2, 2, 2, 4, 2, 4] df['B'] = [3, 1, 0, 4, 1, 4, 4, 4, 2, 1, 2, 1] df['C'] = [0, 4, 1, 0, 2, 4, 1, 1, 2, 3, 2, 3] </code></pre> <p>I would like to transpose/pivot columns A, B and C and replace them with values from column "Type". Resulting dataframe should look like this.</p> <pre><code>df2 = pd.DataFrame() df2['L0'] = ['fruit', 'fruit', 'fruit', 'fruit', 'fruit', 'fruit', 'vegetable', 'vegetable', 'vegetable', 'vegetable', 'vegetable', 'vegetable'] df2['L1'] = ['apple', 'apple', 'apple', 'banana', 'banana', 'banana', 'tomato', 'tomato', 'tomato', 'lettuce', 'lettuce', 'lettuce'] df2['Type2'] = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'] df2['X'] = [3, 3, 0, 3, 4, 0, 2, 4, 1, 4, 1, 3] df2['Y'] = [0, 1, 4, 1, 1, 2, 2, 4, 1, 2, 2, 2] df2['Z'] = [4, 0, 1, 3, 4, 4, 2, 2, 2, 4, 1, 3] </code></pre> <p>The best I could do was this</p> <pre><code>df.groupby(['L0', 'L1', 'Type'])['A', 'B', 'C'].sum().unstack('Type') </code></pre> <p>But this is not really what I want. Thank you!</p>
<p>Add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> before <code>unstack</code> (note the double brackets when selecting the columns; indexing a groupby with a bare tuple is deprecated in newer pandas):</p> <pre><code>df = (df.groupby(['L0', 'L1', 'Type'])[['A', 'B', 'C']]
        .sum()
        .stack()
        .unstack('Type')
        .reset_index()
        .rename_axis(None, axis=1)
        .rename(columns={'level_2':'Type2'}))

print (df)
           L0       L1 Type2  X  Y  Z
0       fruit    apple     A  3  0  4
1       fruit    apple     B  3  1  0
2       fruit    apple     C  0  4  1
3       fruit   banana     A  3  1  3
4       fruit   banana     B  4  1  4
5       fruit   banana     C  0  2  4
6   vegetable  lettuce     A  4  2  4
7   vegetable  lettuce     B  1  2  1
8   vegetable  lettuce     C  3  2  3
9   vegetable   tomato     A  2  2  2
10  vegetable   tomato     B  4  4  2
11  vegetable   tomato     C  1  1  2
</code></pre>
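<p>An equivalent route via <code>melt</code> + <code>pivot_table</code>, in case the reshaping reads more naturally that way (a sketch producing the same frame):</p>
<pre><code>df2 = (df.melt(id_vars=['L0', 'L1', 'Type'], var_name='Type2')
         .pivot_table(index=['L0', 'L1', 'Type2'], columns='Type',
                      values='value', aggfunc='sum')
         .reset_index()
         .rename_axis(None, axis=1))
</code></pre>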
python|pandas|pivot|transpose
2
375,963
52,266,397
Create new column in DataFrame from a conditional of a list
<p>I have a DataFrame like:</p> <pre><code>df = pd.DataFrame({'A' : (1,2,3), 'B': ([0,1],[3,4],[6,0])}) </code></pre> <p>I want to create a new column called test that displays a 1 if a 0 exists within each list in column B. The results hopefully would look like:</p> <pre><code>df = pd.DataFrame({'A' : (1,2,3), 'B': ([0,1],[3,4],[6,0]), 'test': (1,0,1)}) </code></pre> <p>With a dataframe that contains strings rather than lists I have created additional columns from the string values using the following</p> <pre><code>df.loc[df['B'].str.contains(0),'test']=1 </code></pre> <p>When I try this with the example df, I generate a </p> <pre><code>TypeError: first argument must be string or compiled pattern </code></pre> <p>I also tried to convert the list to a string but only retained the first integer in the list. I suspect that I need something else to access the elements within the list but don't know how to do it. Any suggestions?</p>
<p>This should do it for you (the old <code>pd.np</code> alias has since been removed from pandas, so <code>import numpy as np</code> and use it directly):</p> <p><code>df['test'] = np.where(df['B'].apply(lambda x: 0 in x), 1, 0)</code></p>
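<p>A numpy-free equivalent, if you prefer, since <code>0 in x</code> already yields a boolean:</p> <p><code>df['test'] = df['B'].apply(lambda x: int(0 in x))</code></p>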
python|pandas|list|conditional
1
375,964
52,423,147
Difference between pre-padding and post-padding text when preprocessing different text sizes for tf.nn.embedding_lookup
<p>I have seen two types of padding when feeding to embedding layers. </p> <blockquote> <p><strong>eg:</strong></p> <p>considering two sentences:</p> <p>word1 = "I am a dog person."</p> <p>word2 = "Krishni and Pradeepa both love cats."</p> <p>word1_int = [1,2,3,4,5,6] </p> <p>word2_int = [7,8,9,10,11,12,13]</p> <p>padding both words to length = 8</p> <p><strong>padding method 1</strong>(putting 0s at the beginning)</p> <p>word1_int = [0,0,1,2,3,4,5,6] </p> <p>word2_int = [0,7,8,9,10,11,12,13]</p> <p><strong>padding method 2</strong>(putting 0s at the end)</p> <p>word1_int = [1,2,3,4,5,6,0,0] </p> <p>word2_int = [7,8,9,10,11,12,13,0]</p> </blockquote> <p>I am trying to do an <strong>online</strong> classification using the 20 news groups dataset. and I am currently using the 1st method to pad my text. </p> <p><strong>Question</strong>: Is there any advantage of using the 1st method over the other one in my implementation?</p> <p>Thank you in advance!</p> <p>My code is shown below:</p> <pre><code>from collections import Counter import tensorflow as tf from sklearn.datasets import fetch_20newsgroups import matplotlib as mplt mplt.use('agg') # Must be before importing matplotlib.pyplot or pylab! import matplotlib.pyplot as plt from string import punctuation from sklearn.preprocessing import LabelBinarizer import numpy as np from nltk.corpus import stopwords import nltk nltk.download('stopwords') def pre_process(): newsgroups_data = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes')) words = [] temp_post_text = [] print(len(newsgroups_data.data)) for post in newsgroups_data.data: all_text = ''.join([text for text in post if text not in punctuation]) all_text = all_text.split('\n') all_text = ''.join(all_text) temp_text = all_text.split(" ") for word in temp_text: if word.isalpha(): temp_text[temp_text.index(word)] = word.lower() # temp_text = [word for word in temp_text if word not in stopwords.words('english')] temp_text = list(filter(None, temp_text)) temp_text = ' '.join([i for i in temp_text if not i.isdigit()]) words += temp_text.split(" ") temp_post_text.append(temp_text) # temp_post_text = list(filter(None, temp_post_text)) dictionary = Counter(words) # deleting spaces # del dictionary[""] sorted_split_words = sorted(dictionary, key=dictionary.get, reverse=True) vocab_to_int = {c: i for i, c in enumerate(sorted_split_words,1)} message_ints = [] for message in temp_post_text: temp_message = message.split(" ") message_ints.append([vocab_to_int[i] for i in temp_message]) # maximum message length = 6577 # message_lens = Counter([len(x) for x in message_ints])AAA seq_length = 6577 num_messages = len(temp_post_text) features = np.zeros([num_messages, seq_length], dtype=int) for i, row in enumerate(message_ints): print(features[i, -len(row):]) features[i, -len(row):] = np.array(row)[:seq_length] print(features[i, -len(row):]) lb = LabelBinarizer() lbl = newsgroups_data.target labels = np.reshape(lbl, [-1]) labels = lb.fit_transform(labels) return features, labels, len(sorted_split_words)+1 def get_batches(x, y, batch_size=1): for ii in range(0, len(y), batch_size): yield x[ii:ii + batch_size], y[ii:ii + batch_size] def plot(noOfWrongPred, dataPoints): font_size = 14 fig = plt.figure(dpi=100,figsize=(10, 6)) mplt.rcParams.update({'font.size': font_size}) plt.title("Distribution of wrong predictions", fontsize=font_size) plt.ylabel('Error rate', fontsize=font_size) plt.xlabel('Number of data points', fontsize=font_size) plt.plot(dataPoints, noOfWrongPred, label='Prediction', 
color='blue', linewidth=1.8) # plt.legend(loc='upper right', fontsize=14) plt.savefig('distribution of wrong predictions.png') # plt.show() def train_test(): features, labels, n_words = pre_process() print(features.shape) print(labels.shape) # Defining Hyperparameters lstm_layers = 1 batch_size = 1 lstm_size = 200 learning_rate = 0.01 # --------------placeholders------------------------------------- # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): tf.set_random_seed(1) inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs") # labels_ = tf.placeholder(dtype= tf.int32) labels_ = tf.placeholder(tf.float32, [None, None], name="labels") # output_keep_prob is the dropout added to the RNN's outputs, the dropout will have no effect on the calculation of the subsequent states. keep_prob = tf.placeholder(tf.float32, name="keep_prob") # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 # generating random values from a uniform distribution (minval included and maxval excluded) embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1),trainable=True) embed = tf.nn.embedding_lookup(embedding, inputs_) print(embedding.shape) print(embed.shape) print(embed[0]) # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) # hidden layer hidden = tf.layers.dense(outputs[:, -1], units=25, activation=tf.nn.relu) print(hidden.shape) logit = tf.contrib.layers.fully_connected(hidden, num_outputs=20, activation_fn=None) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logit, labels=labels_)) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) saver = tf.train.Saver() # ----------------------------online training----------------------------------------- with tf.Session(graph=graph) as sess: tf.set_random_seed(1) sess.run(tf.global_variables_initializer()) iteration = 1 state = sess.run(initial_state) wrongPred = 0 noOfWrongPreds = [] dataPoints = [] for ii, (x, y) in enumerate(get_batches(features, labels, batch_size), 1): feed = {inputs_: x, labels_: y, keep_prob: 0.5, initial_state: state} embedzz = sess.run(embedding, feed_dict=feed) print(embedzz) predictions = tf.nn.softmax(logit).eval(feed_dict=feed) print("----------------------------------------------------------") print("Iteration: {}".format(iteration)) isequal = np.equal(np.argmax(predictions[0], 0), np.argmax(y[0], 0)) print(np.argmax(predictions[0], 0)) print(np.argmax(y[0], 0)) if not (isequal): wrongPred += 1 print("nummber of wrong preds: ",wrongPred) if iteration%50 == 0: noOfWrongPreds.append(wrongPred/iteration) dataPoints.append(iteration) loss, states, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) print("Train loss: {:.3f}".format(loss)) iteration += 1 saver.save(sess, "checkpoints/sentiment.ckpt") errorRate = wrongPred / len(labels) print("ERRORS: ", wrongPred) print("ERROR RATE: ", errorRate) plot(noOfWrongPreds, dataPoints) if __name__ == '__main__': train_test() </code></pre> <p>This is the code sample that I am using to pad all the sentences.</p> <pre><code> seq_length = 6577 num_messages = 
len(temp_post_text) features = np.zeros([num_messages, seq_length], dtype=int) for i, row in enumerate(message_ints): print(features[i, -len(row):]) features[i, -len(row):] = np.array(row)[:seq_length] print(features[i, -len(row):]) </code></pre>
<p>Commonly, when we use LSTMs or RNNs, we take the final output or the final hidden state and pass it along to make predictions. You are doing the same thing, as seen in this line: </p> <pre><code>logit = tf.contrib.layers.fully_connected(hidden, num_outputs=20, activation_fn=None) </code></pre> <p>This is where the two padding methods differ. If you use the 2nd method (post-padding), the final time steps are padding zeros, so the final hidden state is largely computed from padding rather than from the actual words. By using the 1st method (pre-padding), the last time step corresponds to real data, so the final hidden state output is meaningful.</p>
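<p>As a side note — a minimal sketch, not from the original post — Keras' <code>pad_sequences</code> utility implements both schemes via its <code>padding</code> argument (assuming your TensorFlow build ships <code>tf.keras</code>):</p> <pre><code># Minimal sketch of both padding styles using Keras' pad_sequences;
# 'pre' pads at the beginning, 'post' pads at the end.
from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12, 13]]

print(pad_sequences(seqs, maxlen=8, padding='pre'))
# [[ 0  0  1  2  3  4  5  6]
#  [ 0  7  8  9 10 11 12 13]]

print(pad_sequences(seqs, maxlen=8, padding='post'))
# [[ 1  2  3  4  5  6  0  0]
#  [ 7  8  9 10 11 12 13  0]]
</code></pre>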
python-3.x|tensorflow|machine-learning|text-classification|word-embedding
1
375,965
52,107,914
Pandas - From list of dates, get the last date in each month
<p>I have a fairly simple question but can't find a clean pandas solution to it.</p> <p>Given a list of dates in a series like below:</p> <pre><code>LoadedDate 0 2016-02-18 1 2016-02-19 2 2016-02-20 3 2016-02-23 4 2016-02-24 5 2016-02-25 6 2016-02-26 7 2016-02-27 8 2016-03-01 9 2016-03-02 10 2016-03-03 11 2016-03-04 12 2016-03-05 13 2016-03-08 14 2016-03-09 15 2016-03-10 16 2016-03-11 17 2016-03-12 18 2016-03-15 19 2016-03-16 20 2016-03-17 21 2016-03-18 22 2016-03-19 23 2016-03-22 24 2016-03-23 25 2016-03-24 26 2016-03-25 27 2016-03-30 28 2016-03-31 29 2016-04-01 30 2016-04-02 31 2016-04-05 32 2016-04-06 33 2016-04-07 34 2016-04-08 35 2016-04-09 36 2016-04-12 37 2016-04-13 38 2016-04-14 39 2016-04-15 40 2016-04-16 41 2016-04-19 42 2016-04-20 43 2016-04-21 44 2016-04-22 45 2016-04-23 46 2016-04-27 47 2016-04-28 48 2016-04-29 49 2016-04-30 50 2016-05-02 51 2016-05-03 52 2016-05-04 </code></pre> <p>I'd like to pull the last/max date of each month. So the output would be:</p> <pre><code>LastDate 0 2016-02-27 1 2016-03-31 2 2016-04-29 3 2016-05-04 </code></pre> <p>I tried <code>df.set_index('LoadedDate').groupby(pd.Grouper(freq='M')).max()</code> but it returned the max calendar date, not the actual max loaded date of my series.</p> <p>Thanks.</p>
<p>You could use</p> <pre><code>In [300]: df.groupby(df.LoadedDate.astype('datetime64[M]')).last().reset_index(drop=True) Out[300]: LoadedDate 0 2016-02-27 1 2016-03-31 2 2016-04-30 3 2016-05-04 </code></pre> <p>Or,</p> <pre><code>In [295]: df.groupby(df.LoadedDate - pd.offsets.MonthEnd()).last().reset_index(drop=True) Out[295]: LoadedDate 0 2016-02-27 1 2016-03-31 2 2016-04-30 3 2016-05-04 </code></pre> <p>Or,</p> <pre><code>In [301]: df.groupby(df.LoadedDate.dt.to_period('M')).last().reset_index(drop=True) Out[301]: LoadedDate 0 2016-02-27 1 2016-03-31 2 2016-04-30 3 2016-05-04 </code></pre> <p>Or,</p> <pre><code>In [303]: df.groupby(df.LoadedDate.astype(str).str[:7]).last().reset_index(drop=True) Out[303]: LoadedDate 0 2016-02-27 1 2016-03-31 2 2016-04-30 3 2016-05-04 </code></pre> <p>If the dates are not sorted. Using any of the above methods use <code>idxmax</code> and <code>loc</code></p> <pre><code>In [307]: df.loc[df.groupby(df.LoadedDate.astype(str).str[:7]).LoadedDate.idxmax().values] Out[307]: LoadedDate 7 2016-02-27 28 2016-03-31 49 2016-04-30 52 2016-05-04 </code></pre>
python|pandas|date
7
375,966
52,054,694
Appending to a list sometimes gives 'IndexError: list index out of range' and produces unexpected results
<p>So i'm still new to programming and trying to implement an initialization method for a clustering problem using python-2.7.<br> The steps are: </p> <ol> <li>Pick a random data from dataset as first centroid</li> <li>While number of data in centroid &lt; n_klas : Calculate the data distance to the data in centroids</li> <li><p>Calculate the probability of all datas to their closest centroid using formula </p> <p>P(x) = D(x)**2 / sum(D(x)**2), in which D(x) is euclidean distance from data[x] to the closest centroid</p></li> <li><p>Pick Data with highest P(x), then loop back to no.2.</p></li> </ol> <p>But when i try to appending data sometimes i got this error 'IndexError: list index out of range' and sometimes the code works but only give 2 different centroid and the 3rd to n centroid give the same values as the 2nd centroid.</p> <p>Where did i do wrong? </p> <p>(Edit: i edited the steps to doi it because i was wrong)</p> <pre><code>def pickcentroid(df): x = df.values.tolist() n_klas = 3 # random.seed(2) idx_pusat_pertama = random.randint(0, len(df)) centroid = [] centroid_idx = [] centroid.append(x[idx_pusat_pertama]) centroid_idx.append(idx_pusat_pertama) prob_data = [] while len(centroid) &lt; n_klas: ac_mindist = 0 for i in x: dist_ke_c = [] for c in centroid: dist_ke_c.append(dist(i,c)) ac_mindist += min(dist_ke_c)**2 for idx in range(len(df)) : if idx not in centroid_idx: dist_ke_c2 = [] mindist_per_data = 0 for c in centroid: dist_ke_c2.append(dist(x[idx],c)) mindist_per_data = min(dist_ke_c2)**2 prob_data.append(mindist_per_data/ac_mindist) else: prob_data.append(0) new_cen_idx = prob_data.index(max(prob_data)) centroid_idx.append(new_cen_idx) centroid.append(x[new_cen_idx]) print(centroid) return centroid def dist(x,y): r = np.array(x) - np.array(y) distance = np.linalg.norm(r) # print(distance) return distance c = pickcentroid(df) </code></pre> <p>And the data looks like this:</p> <pre><code>-0.19864726098025476,-0.2174575876560727 -0.19427576174137176,-0.2658220115362011 0.24385376109048476,0.1555938625346895 -0.23636704446757748,0.14005058641250595 0.37563103051045826,0.33204816285389527 -0.13210748354848134,-0.0019122205360639893 -0.17120654390561796,0.04231258139538708 0.2865229979171536,0.34175192153482764 -0.328896319205639,-0.22737124434792602 0.03115098005450885,0.17089336362457433 </code></pre> <p>Thankyou very much for your kind help</p>
<p><code>randint(a, b)</code> returns random integers from <code>a</code> to <code>b</code>, <em>including</em> <code>b</code>. So when you use <code>random.randint(0, len(df))</code>, you might get the value <code>len(df)</code> as output, which is out of range when used as an index into <code>x</code> (since <code>len(x) == len(df)</code>). </p> <p>For your use case, you could use <code>random.randrange(len(df))</code> if you need the index, or <code>random_value = random.choice(x)</code> if you only need the element.</p>
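<p>A quick illustration of the off-by-one:</p> <pre><code>import random

x = [10, 20, 30]

# randint's upper bound is inclusive, so len(x) itself can be returned:
# random.randint(0, len(x)) may yield 3, and x[3] raises IndexError.

# Safe alternatives:
idx = random.randrange(len(x))   # always in 0 .. len(x) - 1
value = random.choice(x)         # picks an element directly
</code></pre>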
python|python-2.7|pandas|cluster-analysis
2
375,967
52,441,474
Assign numbers to values of rows in a dataframe
<p>lets say i have a dataframe </p> <pre><code>A B C john I agree ryan II agree rose V strongly agree Shawn VI disagree </code></pre> <p>what i want to do is to assign numbers to C column values like this ?</p> <pre><code>A B C john I 1 ryan II 1 rose V 2 Shawn VI 0 </code></pre> <p>anyone know how to do this? </p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> by <code>dictionary</code>:</p> <pre><code>df['C'] = df['C'].map({'agree':1, 'strongly agree':2, 'disagree':0}) print (df) A B C 0 john I 1 1 ryan II 1 2 rose V 2 3 Shawn VI 0 </code></pre>
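<p>If the column may contain values that are missing from the dictionary, <code>map</code> yields <code>NaN</code> for them; one way (a small sketch, with <code>-1</code> as an assumed default) to keep a fallback value instead:</p> <pre><code>mapping = {'agree': 1, 'strongly agree': 2, 'disagree': 0}
# Unmapped values become NaN; fill them with a default before casting to int
df['C'] = df['C'].map(mapping).fillna(-1).astype(int)
</code></pre>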
python|pandas|loops
3
375,968
52,228,639
Pandas: replace everything in a string after a single opening round bracket
<p>I need to remove everything after "(" in a pandas dataframe column, so that, for example:</p> <pre><code>Tuscaloosa (University of Alabama &gt;&gt; Tuscaloosa </code></pre> <p>I tried the following, but it does not work:</p> <pre><code>df['RegionName'] = df['RegionName'].str.replace(r"\s+\(.*\"","") </code></pre>
<p>You can using <code>str.split</code></p> <pre><code>s Out[417]: 0 Tuscaloosa (University of Alabama &gt;&gt; Tuscaloosa 0 Tuscaloosa (University of Alabama &gt;&gt; Tuscaloosa 0 Tuscaloosa (University of Alabama &gt;&gt; Tuscaloosa 0 Tuscaloosa (University of Alabama &gt;&gt; Tuscaloosa dtype: object s.str.split('(',1).str[0] Out[418]: 0 Tuscaloosa 0 Tuscaloosa 0 Tuscaloosa 0 Tuscaloosa dtype: object </code></pre>
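<p>Applied to the dataframe column directly, with <code>strip</code> to drop the space left before the bracket (a short sketch; <code>n=1</code> splits at most once):</p> <pre><code># Keep only the text before the first "(" and trim trailing whitespace
df['RegionName'] = df['RegionName'].str.split('(', n=1).str[0].str.strip()
</code></pre>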
python|pandas|dataframe
1
375,969
52,442,499
How to separate null and non-null containing rows into two different DataFrames?
<p>Say I have a big DataFrame (>10000 rows) that has some rows containing one or more nulls. How do I remove all the rows containing a null in one or more of its columns from the original DataFrame and putting the rows into another DataFrame?</p> <p>e.g.:</p> <p>Original DataFrame:</p> <pre><code> a b c 1 "foo" 5 3 2 "bar" 9 1 3 NaN 5 4 4 "foo" NaN 1 </code></pre> <p>Non-Null DataFrame:</p> <pre><code> a b c 1 "foo" 5 3 2 "bar" 9 1 </code></pre> <p>Null containing DataFrame:</p> <pre><code> a b c 1 NaN 5 4 2 "foo" NaN 1 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isna.html" rel="nofollow noreferrer"><code>DataFrame.isna</code></a> for checking missing values:</p> <pre><code>print (df.isna()) #print (df.isnull()) a b c 1 False False False 2 False False False 3 True False False 4 False True False </code></pre> <p>And test if there is at least one <code>True</code> per row by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a>:</p> <pre><code>mask = df.isna().any(axis=1) #older pandas versions mask = df.isnull().any(axis=1) print (mask) 1 False 2 False 3 True 4 True dtype: bool </code></pre> <p>Last, filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> - <code>~</code> is for inverting the boolean mask:</p> <pre><code>df1 = df[~mask] df2 = df[mask] print (df1) a b c 1 foo 5.0 3 2 bar 9.0 1 print (df2) a b c 3 NaN 5.0 4 4 foo NaN 1 </code></pre>
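<p>Equivalently, in two short lines (<code>dropna</code> keeps the fully non-null rows, the mask picks out the rest):</p> <pre><code># Non-null rows and null-containing rows in one pass
non_null_df = df.dropna()
null_df = df[df.isna().any(axis=1)]
</code></pre>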
python|pandas|numpy|dataframe
2
375,970
52,229,059
EM score in SQuAD Challenge
<p>The <a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="noreferrer">SQuAD Challenge</a> ranks the results against the F1 and EM scores. There is a lot of information about the F1 score (a function of precision and recall). But what would the EM score be?</p>
<blockquote> <p><strong>Exact match.</strong> This metric measures the percentage of predictions that match any one of the ground truth answers exactly.</p> </blockquote> <p>According to <a href="https://arxiv.org/pdf/1606.05250.pdf" rel="nofollow noreferrer">here</a>.</p>
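<p>A rough sketch of how EM is typically computed, modeled on the normalization the official SQuAD evaluation script applies (lowercasing, punctuation and article removal, whitespace collapsing) — the function names here are illustrative, not the real API:</p> <pre><code>import re
import string

def normalize(text):
    text = text.lower()
    text = ''.join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r'\b(a|an|the)\b', ' ', text)  # drop articles
    return ' '.join(text.split())                # collapse whitespace

def exact_match(prediction, ground_truths):
    # 1 if the normalized prediction matches any normalized ground truth
    return max(int(normalize(prediction) == normalize(gt)) for gt in ground_truths)

print(exact_match("The Eiffel Tower", ["eiffel tower", "Paris"]))  # 1
</code></pre>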
tensorflow|machine-learning|deep-learning|stanford-nlp|reinforcement-learning
20
375,971
52,178,508
Can't install tensorflow on windows 7 32-bit
<p>I can't install TensorFlow in Windows 7, Python 3(32-bit, Lenovo ThinkPad X201s). When I type <code>pip3 install tensorflow</code>:</p> <pre><code>Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Users\sjkim&gt;pip3 install tensorflow Collecting tensorflow Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow C:\Users\sjkim&gt; </code></pre> <p>And I also have 2 python versions. How can I install it?</p>
<p>TensorFlow is only tested and supported for 64-bit, x86 systems. I don't believe you can install TensorFlow through pip or conda normally from a 32-bit system. You CAN run a Linux docker container through Docker for Windows, but it is based on a VM and does not support GPU.</p> <p>There is a guide on switching between Windows and Linux containers here: <a href="https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers" rel="nofollow noreferrer">https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers</a>, although it is not especially helpful for this case.</p> <p>In any case, a 32-bit process can use only 2GB of memory; a complex model would not load.</p>
python|tensorflow|windows-7
1
375,972
52,393,659
Pandas DataFrame check if column value exists in a group of columns
<p>I have a DataFrame like this (simplified example)</p> <pre><code>id v0 v1 v2 v3 v4 1 10 5 10 22 50 2 22 23 55 60 50 3 8 2 40 80 110 4 15 15 25 100 101 </code></pre> <p>And would like to create an additional column that is either 1 or 0. 1 if v0 value is in the values of v1 to v4, and 0 if it's not. So, in this example for id 1 then the value should be 1 (since v2 = 10) and for id 2 value should be 0 since 22 is not in v1 thru v4.</p> <p>In reality the table is way bigger (around 100,000 rows and variables go from v1 to v99).</p>
<p>You can use the underlying <code>numpy</code> arrays for performance:</p> <p><strong><em>Setup</em></strong></p> <pre><code>a = df.v0.values b = df.iloc[:, 2:].values </code></pre> <hr> <pre><code>df.assign(out=(a[:, None]==b).any(1).astype(int)) </code></pre> <p></p> <pre><code> id v0 v1 v2 v3 v4 out 0 1 10 5 10 22 50 1 1 2 22 23 55 60 50 0 2 3 8 2 40 80 110 0 3 4 15 15 25 100 101 1 </code></pre> <hr> <p>This solution leverages broadcasting to allow for pairwise comparison:</p> <p>First, we broadcast <code>a</code>:</p> <pre><code>&gt;&gt;&gt; a[:, None] array([[10], [22], [ 8], [15]], dtype=int64) </code></pre> <p>Which allows for pairwise comparison with <code>b</code>:</p> <pre><code>&gt;&gt;&gt; a[:, None] == b array([[False, True, False, False], [False, False, False, False], [False, False, False, False], [ True, False, False, False]]) </code></pre> <p>We then simply check for any <code>True</code> results along the first axis, and convert to integer.</p> <hr> <p><strong><em>Performance</em></strong></p> <hr> <p><strong><em>Functions</em></strong></p> <pre><code>def user_chris(df): a = df.v0.values b = df.iloc[:, 2:].values return (a[:, None]==b).any(1).astype(int) def rahlf23(df): df = df.set_index('id') return df.drop('v0', 1).isin(df['v0']).any(1).astype(int) def chris_a(df): return df.loc[:, "v1":].eq(df['v0'], 0).any(1).astype(int) def chris(df): return df.apply(lambda x: int(x['v0'] in x.values[2:]), axis=1) def anton_vbr(df): df.set_index('id', inplace=True) return df.isin(df.pop('v0')).any(1).astype(int) </code></pre> <p><strong><em>Setup</em></strong></p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from timeit import timeit res = pd.DataFrame( index=['user_chris', 'rahlf23', 'chris_a', 'chris', 'anton_vbr'], columns=[10, 50, 100, 500, 1000, 5000], dtype=float ) for f in res.index: for c in res.columns: vals = np.random.randint(1, 100, (c, c)) vals = np.column_stack((np.arange(vals.shape[0]), vals)) df = pd.DataFrame(vals, columns=['id'] + [f'v{i}' for i in range(0, vals.shape[0])]) stmt = '{}(df)'.format(f) setp = 'from __main__ import df, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=50) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel("N"); ax.set_ylabel("time (relative)"); plt.show() </code></pre> <p><strong><em>Output</em></strong></p> <p><a href="https://i.stack.imgur.com/VdiiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VdiiC.png" alt="enter image description here"></a></p>
python|pandas|numpy|dataframe
15
375,973
52,087,985
TFLearn regression loss function is uninitialized
<p>I'm messing around trying to replicate the tflearn autencoder listed <a href="https://github.com/tflearn/tflearn/blob/master/examples/images/autoencoder.py" rel="nofollow noreferrer">here</a> in a Kaggle Kernel. The invocation looks like this:</p> <pre><code>class AutoEncoder(): def __init__(self, layers): """layers should be a list of layer sizes""" self.layers = layers self.encoder = None self.decoder = None self.decoding_model = None self.encoding_model = None def fit(self, X): # build encoder self.encoder = [tflearn.input_data(shape=[None, X.shape[1]])] for layer in self.layers: self.encoder.append(tflearn.fully_connected(self.encoder[-1], layer)) # build decoder self.decoder = [self.encoder[-1]] for layer in reversed(self.layers[:-1]): self.decoder.append(tflearn.fully_connected(self.decoder[-1], layer)) self.decoder.append(tflearn.fully_connected(self.decoder[-1], X.shape[1], activation='sigmoid')) # regression net = tflearn.regression(self.decoder[-1], optimizer='adam', learning_rate=0.001, # loss='mean_square', loss='weighted_crossentropy', metric=None) self.decoding_model = tflearn.DNN(net) #encoding self.encoding_model = tflearn.DNN(self.encoder[-1], session=self.decoding_model.session) self.decoding_model.fit(X, X, n_epoch=20, batch_size=256) return self.decoding_model def predict(self, X): return self.encoding_model.predict(X) ae = AutoEncoder([1024, 256, 2]) ae.fit(X_train) </code></pre> <p>However, upon running, it fails with an error that indicates my loss function has not been initialized:</p> <pre><code>--------------------------------------------------------------------------- FailedPreconditionError Traceback (most recent call last) /opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args) 1291 try: -&gt; 1292 return fn(*args) 1293 except errors.OpError as e: /opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata) 1276 return self._call_tf_sessionrun( -&gt; 1277 options, feed_dict, fetch_list, target_list, run_metadata) 1278 /opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata) 1366 self._session, options, feed_dict, fetch_list, target_list, -&gt; 1367 run_metadata) 1368 FailedPreconditionError: Attempting to use uninitialized value WeightedCrossentropy/Mean/moving_avg_1 [[{{node WeightedCrossentropy/Mean/moving_avg_1/read}} = Identity[T=DT_FLOAT, _class=["loc:@Adam_1/moving_avg/AssignMovingAvg"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](WeightedCrossentropy/Mean/moving_avg_1)]] </code></pre> <p>With TFLearn, I am not quite sure where the correct place is for global variable initialization, or if that would even solve the problem. Could someone please advise with why this might be happening?</p>
<p>You can try the following to fix this particular error: move the <code>fit</code> call so that it runs before the encoding model is created on the same session. </p> <pre><code> self.decoding_model = tflearn.DNN(net) self.decoding_model.fit(X, X, n_epoch=20, batch_size=256) #encoding self.encoding_model = tflearn.DNN(self.encoder[-1], session=self.decoding_model.session) </code></pre>
python|tensorflow|tflearn
0
375,974
52,012,587
Save Python data-frame as Table in Teradata
<p>I want to pull a table from Teradata as a Python data-frame. I know how to accomplish this step. Next I want to run algorithms on the data to transform it however I want. Once I am done with manipulating the data in Python, I want the resulting data-frame to be saved as a new table in Teradata so that I can perform joins with other tables in the database. My question is how do I save a python data-frame back into a database, I would like to do this inside of python using a script.</p>
<p>One option is to use <a href="https://github.com/mark-hoffmann/fastteradata" rel="nofollow noreferrer"><code>fastteradata</code></a>, specifically the <code>load_table</code> function:</p> <pre><code>load_table(abs_path, df, table_name, env, db, connector = "teradata", clear_table=True) Loads a pandas dataframe from memory into teradata via the optimized fastload functionality. </code></pre> <p>Note that you need to install the requirements listed <a href="https://github.com/mark-hoffmann/fastteradata#requirements" rel="nofollow noreferrer">here</a>.</p>
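<p>An alternative sketch is pandas' own <code>to_sql</code> through SQLAlchemy — this assumes the <code>teradatasqlalchemy</code> dialect package is installed, and the connection string, table, and schema names below are purely illustrative:</p> <pre><code>from sqlalchemy import create_engine

# Illustrative connection string; substitute your own credentials/host
engine = create_engine('teradatasql://user:password@host')

# Write the dataframe back as a new table for later joins
df.to_sql('my_new_table', con=engine, schema='my_db',
          if_exists='replace', index=False)
</code></pre>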
python|sql|python-3.x|pandas|teradata
3
375,975
52,426,828
Boolean Dataframe filter for another Dataframe
<p>The following dataframe <code>df1</code> contains numerical values</p> <pre><code> IDs Value1 Value2 Value3 Value4 AB 1 1 1 5 BC 2 2 2 3 BG 1 1 4 1 RF 2 2 2 7 </code></pre> <p>and this dataframe <code>df2</code> contains Boolean values:</p> <pre><code> Index 0 1 2 3 1 True False True True 2 False False True False 3 False False True False 4 False False False False </code></pre> <p>with the same number of columns and rows.</p> <p>What I need is to subset <code>df1</code> in the following manner: get only the columns that in <code>df2</code> have at least one <code>True</code> value.</p> <p>Meaning the following:</p> <pre><code> IDs Value1 Value3 Value4 AB 1 1 5 BC 2 2 3 BG 1 4 1 RF 2 2 7 </code></pre> <p>I have tried the following code:</p> <pre><code>df2_true = np.any(df2,axis=1) </code></pre> <p>However, the line above returns a list which can not be used here:</p> <pre><code>result = df1[:,df2_true] </code></pre> <p>Any help would be welcome</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.any.html" rel="nofollow noreferrer"><code>np.any</code></a> per column (<code>axis=0</code>):</p> <pre><code>result = df1.loc[:, np.any(df2.values,axis=0)] print (result) Value1 Value3 Value4 IDs AB 1 1 5 BC 2 2 3 BG 1 4 1 RF 2 2 7 </code></pre>
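<p>The same idea can stay entirely in pandas: <code>df2.any(axis=0)</code> gives one boolean per column, and <code>.values</code> aligns it positionally with <code>df1</code>'s columns:</p> <pre><code># Keep only the df1 columns whose df2 column contains at least one True
result = df1.loc[:, df2.any(axis=0).values]
</code></pre>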
python|pandas|dataframe
3
375,976
52,222,676
pandas replace multiple values
<p>Below is a sample dataframe</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'a': [1, 1, 1, 2, 2], 'b':[11, 22, 33, 44, 55]}) &gt;&gt;&gt; df a b 0 1 11 1 1 22 2 1 33 3 2 44 4 2 55 </code></pre> <p>Now I want to update/replace the b values that are matched on the a column from another dict, based on index</p> <p>ex:</p> <pre><code>match = {1:[111, 222], 2:[444, 555]} </code></pre> <p>output:</p> <pre><code> a b 0 1 111 1 1 222 2 1 33 &lt;-- ignores this because there are not enough values in the match dict for 1 3 2 444 4 2 555 </code></pre> <p>Thanks in advance</p>
<p>Here's one way. The idea is to calculate a cumulative count by group and use this to filter rows. Use <code>itertools.chain</code> to create a single array of values. Finally, use <code>pd.DataFrame.loc</code> and Boolean indexing to set values.</p> <pre><code>from itertools import chain count = df.groupby('a').cumcount() + 1 m1 = df['a'].isin(match) m2 = count.le(df['a'].map(match).map(len)) values = list(chain.from_iterable(match.values())) df.loc[m1 &amp; m2, 'b'] = values print(df) a b 0 1 111 1 1 222 2 1 33 3 2 444 4 2 555 </code></pre>
python|pandas|dataframe
4
375,977
52,058,848
Tensorflow - Access weights while doing backprop
<p>I want to implement C-MWP as described here: <a href="https://arxiv.org/pdf/1608.00507.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.00507.pdf</a> in keras/tensorflow. This involves modifying the way backprop is performed. The new gradient is a function of the bottom activation responses the weight parameters and the gradients of the layer above.</p> <p>As a start, I was looking at the way keras-vis is doing modified backprop:</p> <pre><code>def _register_guided_gradient(name): if name not in ops._gradient_registry._registry: @tf.RegisterGradient(name) def _guided_backprop(op, grad): dtype = op.outputs[0].dtype gate_g = tf.cast(grad &gt; 0., dtype) gate_y = tf.cast(op.outputs[0] &gt; 0, dtype) return gate_y * gate_g * grad </code></pre> <p>However, to implement C-MWP I need access to the weights of the layer on which the backprop is performed. Is it possible to access the weight within the @tf.RegisterGradient(name) function? Or am I on the wrong path?</p>
<p>The gradient computation in TF is fundamentally per-operation. If the operation whose gradient you want to change is performed on the weights, or at least the weights are not far from it in the operation graph, you can try finding the weights tensor by walking the graph inside your custom gradient. For example, say you have something like</p> <pre><code>x = tf.get_variable(...) y = 5.0 * x tf.gradients(y, x) </code></pre> <p>You can get to the variable tensor (more precisely, the tensor produced by the variable reading operation) with something like</p> <pre><code>@tf.RegisterGradient(name) def my_grad(op, grad): weights = op.inputs[1] ... </code></pre> <p>If the weights are not immediate inputs, but you know how to get to them, you can walk the graph a bit using something like:</p> <pre><code>@tf.RegisterGradient(name) def my_grad(op, grad): weights = op.inputs[1].op.inputs[0].op.inputs[2] ... </code></pre> <p>You should understand that this solution is very hacky. If you control the forward pass, you might want to just define a custom gradient just for the subgraph you care about. You can see how you can do that in <a href="https://stackoverflow.com/questions/43256517/how-to-register-a-custom-gradient-for-a-operation-composed-of-tf-operations">How to register a custom gradient for a operation composed of tf operations</a> and <a href="https://stackoverflow.com/questions/36456436/how-can-i-define-only-the-gradient-for-a-tensorflow-subgraph">How Can I Define Only the Gradient for a Tensorflow Subgraph?</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/Graph#gradient_override_map" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/Graph#gradient_override_map</a></p>
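<p>For completeness, a minimal sketch of the <code>gradient_override_map</code> pattern referenced above (TF1 graph mode; <code>"CustomGrad"</code> is an illustrative name, not part of the paper's method):</p> <pre><code>import tensorflow as tf

# Register a gradient that, for example, scales the incoming gradient
@tf.RegisterGradient("CustomGrad")
def _custom_grad(op, grad):
    return 0.5 * grad

x = tf.get_variable("x", shape=(), initializer=tf.ones_initializer())
g = tf.get_default_graph()

# Route Identity's gradient through our custom function inside this scope
with g.gradient_override_map({"Identity": "CustomGrad"}):
    y = tf.identity(x)

grad = tf.gradients(y, x)[0]  # evaluates to 0.5 instead of 1.0
</code></pre>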
python|tensorflow|keras|backpropagation
0
375,978
52,285,621
How do deep learning frameworks such as PyTorch handle memory when using multiple GPUs?
<p>I have recently run into a situation where I am running out of memory on a single Nvidia V100. I have limited experience using multiple GPUs to train networks, so I'm a little unsure of how the data parallelization process works. Let's say I'm using a model and batch size that requires something like 20-25GB of memory. Is there any way to take advantage of the full 32GB of memory I have between two 16GB V100s? Would PyTorch's DataParallel functionality achieve this? I suppose there is also the possibility of breaking the model up and using model parallelism as well. Please excuse my lack of knowledge on this subject. Thanks in advance for any help or clarification!</p>
<p>You should keep model parallelism as your last resort, and use it only if your model doesn't fit in the memory of a single GPU (with 16GB/GPU you have plenty of room for a gigantic model).</p> <p>If you have two GPUs, I would use data parallelism. In data parallelism you have a copy of your model on each GPU and each copy is fed with a batch. The gradients are then gathered and used to update the copies.</p> <p>Pytorch makes it really easy to achieve data parallelism, as you just need to wrap your model instance in <a href="https://pytorch.org/docs/stable/nn.html#dataparallel" rel="nofollow noreferrer"><code>nn.DataParallel</code></a>:</p> <pre><code>model = torch.nn.DataParallel(model, device_ids=[0, 1]) output = model(input_var) </code></pre>
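<p>If a single copy of the model really cannot fit on one GPU, a minimal model-parallel sketch looks like this (layer sizes are illustrative): each half of the network lives on a different device and activations are moved between devices in <code>forward()</code>:</p> <pre><code>import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Each part of the network is placed on its own GPU
        self.part1 = nn.Linear(1024, 4096).to('cuda:0')
        self.part2 = nn.Linear(4096, 10).to('cuda:1')

    def forward(self, x):
        # Move activations between devices as they flow through the model
        x = self.part1(x.to('cuda:0'))
        return self.part2(x.to('cuda:1'))
</code></pre>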
deep-learning|gpu|hardware|pytorch
3
375,979
52,378,987
Tensorflow dimensions /placeholders
<p>I want to run a neural network in tensorflow. I am trying to do email classification, so my training data is an array of count-vectorized documents.</p> <p>I'm trying to understand the dimensions for how I should input data into tensorflow. I am creating placeholders like this:</p> <p>X = tf.placeholder(tf.int64, [None, #features])</p> <p>Y = tf.placeholder(tf.int64, [None, #labels])</p> <p>then later, I have to transform the actual y_train to have dimensionality (1, #observations), since I get some dimensionality errors when I run the code.</p> <p>Should the placeholders and the variables have the same dimensionality? What is the correspondence? I am getting out-of-memory errors, so I am concerned that I have something wrong with the input dimensions. </p>
<p>A little unsure as to what your "#" symbols refer to. This is often used to mean "number", in which case what you have written would be incorrect. To be clear, you want to define your placeholders for X and Y as</p> <pre><code>X = tf.placeholder(tf.int64, [None, input_dimensions]) Y = tf.placeholder(tf.int64, [None, 1]) </code></pre> <p>Here the <code>None</code> values accommodate the number of samples in the training data you pass in; if you feed in 10 emails, <code>None</code> will be 10. The <code>input_dimensions</code> means "how long is the vector that represents a single training example". In the case of a grey-scale image this would be equal to the number of pixels; in the case of your e-mail inputs this should be the length of the longest vectorized email. </p> <p>All of your email inputs will need to be the same length, and a common practice for all those shorter than the longest email is to pad the vectors up to the max length with zeros.</p> <p>When comparing <code>Y</code> to the training labels (<code>y_train</code>) they should both be tensors of the same shape. So as <code>Y</code> has shape (number_of_emails, 1), so should <code>y_train</code>. You can convert from <code>(1, number_of_emails)</code> to <code>(number_of_emails, 1)</code> using </p> <pre><code>y_train = tf.reshape(y_train, [-1,1]) </code></pre> <p>Finally, the out-of-memory errors are unlikely to be due to any dimension mismatch; more likely you are feeding too many emails into the network at once. Each time you feed in some emails as <code>X</code> they must be held in memory. If there are many emails, feeding them all in at once will exhaust the memory resources (particularly if training on a GPU). For this reason it is common practice to batch your inputs into smaller groups fed in sequentially. Tensorflow provides a guide to <a href="https://www.tensorflow.org/guide/datasets" rel="nofollow noreferrer">importing data</a>, as well as specific help on <a href="https://www.tensorflow.org/guide/datasets#simple_batching" rel="nofollow noreferrer">batching</a>.</p>
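<p>A minimal batching sketch (illustrative, not the poster's code) — feed the emails to the placeholders a few at a time:</p> <pre><code>def iterate_batches(X, y, batch_size):
    # Yield consecutive slices of the training data
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]

# for X_batch, y_batch in iterate_batches(X_train, y_train, 64):
#     sess.run(train_op, feed_dict={X: X_batch, Y: y_batch})
</code></pre>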
tensorflow
1
375,980
52,009,202
Python Pandas compare values in multiple columns for partial duplicates and drop record
<p>I need to create a function/expression that compares multiple columns (<code>'Cust ID Count'</code>, <code>'Revenue'</code> and possibly <code>'Family Name'</code> for a record match and then keeps only the first record based on ascending order. Also, this function will be looking at 2 different scenarios where there are multiple similar records:</p> <ol> <li>Multiple records will match across all columns/series with the exception of <code>'street'</code> (records <code>0 &amp; 1</code>)</li> <li>Multiple records will match across all columns/series with the exception of <code>'street'</code> and <code>'Family Name'</code> (records <code>3 &amp; 4</code>)</li> </ol> <p>I realize it looks like we can only use <code>Cust ID</code> count and <code>Revenue</code> as the matching parameters, but I would also like to use <code>'family name'</code> as an option if possible.</p> <p>Dataset:</p> <pre><code> idx Cust ID Count Family Name street Revenue 0 10 Smith spring 50 #match 1 10 Smith wilbur 50 #match 2 45 Jerry jane 35 #not a match 3 25 Cole mary 20 #match 4 25 Stein mary sue 20 #match </code></pre> <p>Output:</p> <pre><code> idx Cust ID Count Family Name street Revenue 0 10 Smith spring 50 #spring is kept due to alphabetical order 1 45 Jerry jane 35 #not a match 2 25 Cole mary 20 #mary is kept due to alphabetical order </code></pre>
<p>Try this:</p> <pre><code>(df.sort_values('Family Name') .drop_duplicates(['Cust ID Count', 'Revenue'], keep='first') .sort_index() .reset_index(drop=True)) </code></pre>
python|pandas
0
375,981
52,216,124
Deleting numpy subarray based off of first element in subarray
<p>I have a <code>numpy</code> array being generated from a function as follows</p> <pre><code> circles = [[ 56, 152, 26], [288, 300, 25], [288, 362, 25], [288, 238, 24], [318, 298, 45], [220, 366, 29]] </code></pre> <p>I want to check if all the values in the first element of each subarray are consistent (mathematically close, not differing by a large amount i.e. > 5) and remove the subarrays that don't conform to this condition. So in this case, i want to remove any subarray that is greater than <code>288 + 5</code> or less than <code>288 - 5</code>. Any thoughts?</p>
<p>A possible solution using <code>mode</code>: </p> <pre><code>&gt;&gt;&gt; from scipy.stats import mode &gt;&gt;&gt; eps = 5 &gt;&gt;&gt; most_freq = mode(circles[:, 0])[0][0] &gt;&gt;&gt; mask = np.abs(circles[:, 0] - most_freq) &lt;= eps &gt;&gt;&gt; circles[mask] array([[288, 300, 25], [288, 362, 25], [288, 238, 24]]) </code></pre> <p><strong>Edit:</strong> if your <code>circles</code> array is limited to to non-negative integers, you can use the following expression for <code>most_freq</code>:</p> <pre><code>most_freq = np.bincount(circles[:, 0]).argmax() </code></pre>
python|numpy
2
375,982
52,300,777
pandas dataframe group by next occurrence of column value
<p>Below is my dataframe</p> <pre><code> info date time file msg 0 INFO: 2018-09-12 16:10:10: view.py: phone 1 INFO: 2018-09-12 16:10:10: view.py: asdasd 2 INFO: 2018-09-12 16:10:43: view.py: contact start 3 INFO: 2018-09-12 16:10:43: view.py: contact end 4 INFO: 2018-09-12 16:11:36: view.py: app start 5 INFO: 2018-09-12 16:11:36: view.py: busy start 6 INFO: 2018-09-12 16:12:08: view.py: busy end 7 INFO: 2018-09-12 16:12:08: view.py: contact end 8 INFO: 2018-09-12 16:12:08: view.py: app end 9 INFO: 2018-09-12 16:12:08: view.py: phone 7 INFO: 2018-09-12 16:12:08: view.py: contact end </code></pre> <p>I want to split this dataframe into multiple dataframes based on value in <code>msg</code> column. my dataframe should look something like this if i want to split by "phone" as the value:</p> <p>df1:</p> <pre><code> info date time file msg 0 INFO: 2018-09-12 16:10:10: view.py: phone 1 INFO: 2018-09-12 16:10:10: view.py: asdasd 2 INFO: 2018-09-12 16:10:43: view.py: contact start 3 INFO: 2018-09-12 16:10:43: view.py: contact end 4 INFO: 2018-09-12 16:11:36: view.py: app start 5 INFO: 2018-09-12 16:11:36: view.py: busy start 6 INFO: 2018-09-12 16:12:08: view.py: busy end 7 INFO: 2018-09-12 16:12:08: view.py: contact end 8 INFO: 2018-09-12 16:12:08: view.py: app end </code></pre> <p>df2:</p> <pre><code> info date time file msg 9 INFO: 2018-09-12 16:12:08: view.py: phone 7 INFO: 2018-09-12 16:12:08: view.py: contact end </code></pre>
<p>Use a dictionary for a variable number of related variables. Here you can combine with <code>GroupBy</code> + <code>cumsum</code>:</p> <pre><code>d = dict(tuple(df.groupby(df['msg'].eq('phone').cumsum()))) </code></pre> <p>Then access your dataframes via <code>d[1]</code>, <code>d[2]</code>, ..., <code>d[n]</code>.</p> <p>Result:</p> <pre><code>{1: info date time file msg 0 INFO: 2018-09-12 16:10:10: view.py: phone 1 INFO: 2018-09-12 16:10:10: view.py: asdasd 2 INFO: 2018-09-12 16:10:43: view.py: contactstart 3 INFO: 2018-09-12 16:10:43: view.py: contactend 4 INFO: 2018-09-12 16:11:36: view.py: appstart 5 INFO: 2018-09-12 16:11:36: view.py: busystart 6 INFO: 2018-09-12 16:12:08: view.py: busyend 7 INFO: 2018-09-12 16:12:08: view.py: contactend 8 INFO: 2018-09-12 16:12:08: view.py: append, 2: info date time file msg 9 INFO: 2018-09-12 16:12:08: view.py: phone 7 INFO: 2018-09-12 16:12:08: view.py: contactend} </code></pre>
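<p>The resulting dictionary can then be iterated to inspect or process each split:</p> <pre><code># Each key is one "phone" session; each value is the corresponding sub-frame
for key, frame in d.items():
    print('--- phone session {} ---'.format(key))
    print(frame)
</code></pre>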
python|python-3.x|pandas|dataframe|pandas-groupby
1
375,983
52,263,225
Error converting data type float to datetime format
<p>I would like to convert the float data below to datetime format:</p> <blockquote> <p>df</p> </blockquote> <pre><code> Date 0 NaN 1 NaN 2 201708.0 4 201709.0 5 201700.0 6 201600.0 Name: Cred_Act_LstPostDt_U324123, dtype: float64 </code></pre> <blockquote> <p>pd.to_datetime(df['Date'],format='%Y%m.0')</p> </blockquote> <p>ValueError: time data 201700.0 does not match format '%Y%m.0' (match)</p> <p>How could I handle the rows without month information (month <code>00</code>), defaulting them to <code>yyyy01</code>?</p>
<p>You can replace the <code>00</code> month placeholder with <code>01</code> before parsing (done here with Python's <code>str.replace</code> after casting to string; <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>pd.Series.str.replace</code></a> would work similarly):</p> <pre><code>s = [x.replace('00.0', '01.0') for x in df['Date'].astype(str)] df['Date'] = pd.to_datetime(s, format='%Y%m.0', errors='coerce') print(df) Date 0 NaT 1 NaT 2 2017-08-01 4 2017-09-01 5 2017-01-01 6 2016-01-01 </code></pre>
python|pandas|datetime
0
375,984
52,273,801
Rotating Rows and columns pandas
<p>Hi, I have been trying to figure out how to rotate my rows and columns. I tried using the transpose but it didn't work. My dataframe looks like this: </p> <pre><code>Country | Rates 2015 | Rates2016 | Rates2017 | GDP 2015| GDP 2016 | GDP2017 World | 6 | 7 | 8 | 2355 | 1235 | 324325 </code></pre> <p>Is it possible to change it to </p> <pre><code> | Rates | GDP 2015 | 6 | 2355 2016 | 7 | 1235 2017 | 8 | 34132 </code></pre> <p>Yeah, this is what I'm trying to do basically</p>
<p>Ensure that the transposed part is in the index. I can only assume that if you have tried <code>df.T</code> that you have not set the index correctly</p> <pre><code>In [185]: df.set_index('Country Name').T Out[185]: Country Name A B C 2000 5 2 2 2005 3 1 2 2010 1 5 6 2015 7 2 9 </code></pre> <p>If you want to name the index then you will also need to do</p> <pre><code>In [187]: df = df.set_index('Country Name').T In [188]: df.index = df.index.rename('Year') In [189]: df Out[189]: Country Name A B C Year 2000 5 2 2 2005 3 1 2 2010 1 5 6 2015 7 2 9 </code></pre>
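<p>If your pandas version has <code>rename_axis</code>, the same result can be written in one chain:</p> <pre><code># set_index, transpose, and name the new index in one go
df = df.set_index('Country Name').T.rename_axis('Year')
</code></pre>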
python|pandas
0
375,985
52,420,881
Getting index name for the max value in DF
<p>I have the following dataframe:</p> <pre><code>data = {'Algorithm': ['KNN', 'Decision Tree', 'SVM', 'Logistic Regression'], 'Jaccard': [0.75,0.65,0.67,0.70], 'F1-score': [0.69,0.78, 0.75, 0.77], 'LogLoss': ['NA', 'NA', 'NA', 5.23]} report= pd.DataFrame(data = data) report = report[['Algorithm', 'Jaccard', 'F1-score', 'LogLoss']] report.set_index(report['Algorithm'], inplace = True) report.drop(columns = ['Algorithm'], inplace = True) </code></pre> <p>What I want to do is print out the name of the index with the highest value in the dataframe. It would be something like this:</p> <pre><code>print(report['Jaccard'][index_name_with_highest_value]) </code></pre> <p>it should yield:</p> <pre><code>'KNN' </code></pre> <p>Thank you in advance :)</p>
<p>Use <code>idxmax</code>, which returns the index label of the row with the maximum value:</p> <pre><code>print(report['Jaccard'].idxmax()) </code></pre> <p>Or, with <code>np.where</code> on a boolean mask (note that <code>report['Algorithm']</code> no longer exists after it has been set as the index and dropped, so go through <code>report.index</code>):</p> <pre><code>print(report.index[np.where(report['Jaccard'] == report['Jaccard'].max())[0][0]]) </code></pre> <p>@jezrael's solution is also very good:</p> <pre><code>print(report.index[report['Jaccard'] == report['Jaccard'].max()]) </code></pre>
python|pandas|indexing|max
3
375,986
52,172,098
Getting average per minute in pandas
<p>There are already plenty of question on stack overflow regarding what i am asking but i have a small doubt and because of that i think my question is different. In my time series i want to get the average per minute. My time series is something like below:-</p> <pre><code> time duration 2018-08-26T14:00:00.000Z 0.22 2018-08-26T14:00:00.000Z 0.23 2018-08-26T14:00:00.000Z 2.05 2018-08-26T14:00:00.000Z 2.5 2018-08-26T14:00:00.000Z 3.0 2018-08-26T14:00:01.000Z 30.4 2018-08-26T14:00:01.000Z 30.4 2018-08-26T14:00:01.000Z 30.4 2018-08-26T14:00:02.000Z 30.4 2018-08-26T14:00:02.000Z 30.4 2018-08-26T14:00:03.000Z 30.4 ..... 2018-08-26T14:01:03.000Z 30.4 2018-08-26T14:01:03.000Z 30.4 2018-08-26T14:02:03.000Z 30.4 2018-08-26T14:02:03.000Z 30.4 </code></pre> <p>As the data is from elastic search i am having multiple observation from the same second. From Multiple i mean i have may be 100 observation from one second time stamp.</p> <p>I am using the below code to perform the average duration per minute which i got from <a href="https://stackoverflow.com/questions/39952753/group-index-by-minute-and-compute-average">Group index by minute and compute average</a></p> <pre><code>df.index = pd.DatetimeIndex(df.time) df.groupby([df.index.values.astype('&lt;M8[m]')])['duration'].mean() </code></pre> <p>I am getting my output like below </p> <pre><code>2018-08-26 14:00:00 0.151470 2018-08-26 14:01:00 0.144745 2018-08-26 14:02:00 0.147503 2018-08-26 14:03:00 0.156921 2018-08-26 14:04:00 0.142978 2018-08-26 14:05:00 0.167170 2018-08-26 14:06:00 0.156233 2018-08-26 14:07:00 0.140044 2018-08-26 14:08:00 0.135376 2018-08-26 14:09:00 0.161247 2018-08-26 14:10:00 0.134211 2018-08-26 14:11:00 0.179065 2018-08-26 14:12:00 0.145470 2018-08-26 14:13:00 0.145623 2018-08-26 14:14:00 0.139927 2018-08-26 14:15:00 0.138283 2018-08-26 14:16:00 0.137545 2018-08-26 14:17:00 0.140346 </code></pre> <p>I just want to make sure if i am doing this right because i am having multiple instance for one second and I am afraid if its is considering all of it or not.</p> <p>I will appreciate any kind of help here.</p>
<p>This is what <code>.resample()</code> is for:</p> <blockquote> <p><code>resample()</code> is a time-based groupby, followed by a reduction method on each of its groups.</p> </blockquote> <p>Verifiable example:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; np.random.seed(444) &gt;&gt;&gt; # millisecond frequency, 100000 periods starting 2017-01-01 00:00:00 &gt;&gt;&gt; idx = pd.date_range(start='2017', periods=100000, freq='ms') &gt;&gt;&gt; idx.min(), idx.max() (Timestamp('2017-01-01 00:00:00', freq='L'), Timestamp('2017-01-01 00:01:39.999000', freq='L')) &gt;&gt;&gt; s = pd.Series(np.random.randn(len(idx)), index=idx) &gt;&gt;&gt; s.resample('s').mean().head() 2017-01-01 00:00:00 0.009352 2017-01-01 00:00:01 0.061978 2017-01-01 00:00:02 -0.011118 2017-01-01 00:00:03 0.046698 2017-01-01 00:00:04 -0.008205 </code></pre> <p>Manual inspection should match:</p> <pre><code>&gt;&gt;&gt; s.loc['2017-01-01 00:00:00'].mean() 0.00935201762323959 &gt;&gt;&gt; s.loc['2017-01-01 00:00:01'].mean() 0.061978455181838 </code></pre>
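<p>Applied to the data in the question — per minute rather than per second (<code>'T'</code> is pandas' minute frequency alias):</p> <pre><code># Parse the timestamp column, index on it, and average duration per minute
df['time'] = pd.to_datetime(df['time'])
per_minute = df.set_index('time')['duration'].resample('T').mean()
</code></pre>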
python|pandas|time-series|pandas-groupby
4
375,987
52,011,509
What is the difference between tf.layers.conv2d and tf.layers.Conv2D?
<p>What is difference between <code>tf.layers.conv2d</code> and <code>tf.layers.Conv2D</code>?</p> <p>Why <code>tf.layers.Conv2D</code> used in example code in this <a href="https://arxiv.org/pdf/1807.03247.pdf" rel="nofollow noreferrer">paper</a>?</p> <p>Here is a full code snippet:</p> <pre><code>class AddCoords(base.Layer): """Add coords to a tensor""" def __init__(self, x_dim=64, y_dim=64, with_r=False): super(AddCoords, self).__init__() self.x_dim = x_dim self.y_dim = y_dim self.with_r = with_r def call(self, input_tensor): """ input_tensor: (batch, x_dim, y_dim, c) """ batch_size_tensor = tf.shape(input_tensor)[0] xx_ones = tf.ones([batch_size_tensor, self.x_dim], dtype=tf.int32) xx_ones = tf.expand_dims(xx_ones, -1) xx_range = tf.tile(tf.expand_dims(tf.range(self.x_dim), 0), [batch_size_tensor, 1]) xx_range = tf.expand_dims(xx_range, 1) xx_channel = tf.matmul(xx_ones, xx_range) xx_channel = tf.expand_dims(xx_channel, -1) yy_ones = tf.ones([batch_size_tensor, self.y_dim], dtype=tf.int32) yy_ones = tf.expand_dims(yy_ones, 1) yy_range = tf.tile(tf.expand_dims(tf.range(self.y_dim), 0), [batch_size_tensor, 1]) yy_range = tf.expand_dims(yy_range, -1) yy_channel = tf.matmul(yy_range, yy_ones) yy_channel = tf.expand_dims(yy_channel, -1) xx_channel = tf.cast(xx_channel, 'float32') / (self.x_dim - 1) yy_channel = tf.cast(yy_channel, 'float32') / (self.y_dim - 1) xx_channel = xx_channel*2 - 1 yy_channel = yy_channel*2 - 1 ret = tf.concat([input_tensor, xx_channel, yy_channel], axis=-1) if self.with_r: rr = tf.sqrt(tf.square(xx_channel-0.5) + tf.square(yy_channel-0.5)) ret = tf.concat([ret, rr], axis=-1) return ret class CoordConv(base.Layer): """CoordConv layer as in the paper.""" def __init__(self, x_dim, y_dim, with_r, *args, **kwargs): super(CoordConv, self).__init__() self.addcoords = AddCoords(x_dim=x_dim, y_dim=y_dim, with_r=with_r) self.conv = tf.layers.Conv2D(*args, **kwargs) def call(self, input_tensor): ret = self.addcoords(input_tensor) ret = self.conv(ret) return ret </code></pre>
<p><code>tf.layers.conv2d</code> is a simple <code>function</code> that computes the convolution of its input, so it needs the input <code>feature maps</code> and the <code>kernel</code>/<code>filter</code> parameters every time it is called. A user just calls this function to compute a convolution.<br> <code>tf.layers.Conv2D</code>, on the other hand, is a <code>Class</code> (an OOP concept): you instantiate it with a certain <code>filter</code> configuration before you can use it. Once instantiated, you can feed different inputs to the same instance and get outputs that share the same weights. This <code>Class</code> form is what you use to design and compose new layers or new types of operations for other users to use. </p>
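<p>A small sketch of the difference in use (TF1-style API):</p> <pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# Functional form: creates the variables and applies them in one call
y = tf.layers.conv2d(x, filters=32, kernel_size=3)

# Class form: instantiate once, then reuse -- both calls share the same weights
conv = tf.layers.Conv2D(filters=32, kernel_size=3)
y1 = conv(x)
y2 = conv(x)  # the same filters are applied again
</code></pre>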
python|tensorflow
2
375,988
52,232,742
How to use ast.literal_eval in a pandas dataframe and handle exceptions
<p>I have a <code>dataframe</code> with a column containing a <code>tuple</code> data as a string. Eg. <code>'(5,6)'</code>. I need to convert this to a tuple structure. One way of doing it is using the ast.literal_eval(). I am using it in this way.</p> <pre><code>df['Column'] = df['Column'].apply(ast.literal_eval) </code></pre> <p>Unfortunately, my data in this column contains empty strings also. The <code>ast.literal_eval()</code> is not able to handle this. I get this error.</p> <p><code>SyntaxError: unexpected EOF while parsing</code></p> <p>I am unsure if this is because it is unable to handle such a character. Based on my reading, I found that <code>ast.literal_eval()</code> works only in cases when a list, dict or tuple is there inside a string structure. </p> <p>To overcome this I tried to create my own function and return an empty string if it raises an exception.</p> <pre><code>def literal_return(val): try: return ast.literal_eval(val) except ValueError: return (val) df['Column2'] = df['Column'].apply(literal_return) </code></pre> <p>Even in this case, the same error pops up. How do we handle this. It would be great even if there is a way to ignore certain rows to apply the function and apply on the rest. Any help is appreciated.</p>
<p>I would do it by simply requiring a string type from each entry:</p> <pre><code>from ast import literal_eval df['column_2'] = df.column_1.apply(lambda x: literal_eval(str(x))) </code></pre> <p>If you need more advanced exception handling, you could do, for example:</p> <pre><code>def f(x): try: return literal_eval(str(x)) except Exception as e: print(e) return [] df['column_2'] = df.column_1.apply(lambda x: f(x)) </code></pre>
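<p>A quick check on the problematic inputs — the wrapper swallows the <code>SyntaxError</code> raised for empty strings and returns <code>[]</code> instead:</p> <pre><code>print(f('(5,6)'))  # (5, 6)
print(f(''))       # prints the parse error, returns []
</code></pre>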
python|pandas|tuples
12
375,989
52,011,190
What datetime format is this and how do I parse it?
<p>I have some data that I'm pulling from an API and the date is formatted like this: '1522454400000'</p> <p>Not sure how to parse it but this is what I have (unsuccessfully tried)</p> <pre><code>df = DataFrame(test) df.columns = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume'] df.set_index('Date') df.index = pd.to_datetime(df.index, unit = 'd') </code></pre> <p>where the variable <code>test</code> is a list of the underlying data. this incorrectly parses the data as year being 1970. </p> <p>The result of the parse: </p> <pre><code>1970-01-01 00:00:00.000000000 </code></pre> <p>Any ideas?</p> <p>********************** EDIT ************************************</p> <p>Python version: 3</p> <p>Pandas version. 0.23.0</p> <p>Here is a working example for reproducibility. But first, here are the facts I have discovered.</p> <p>DATE FORMAT: 64-bit Unix Timestamp in milliseconds since Epoch 1 Jan 1970</p> <p>TIMEZONE: UTC</p> <p>MY TIMEZONE: UTC + 4 (the desired datetime index)</p> <p>The code:</p> <pre><code>import bitmex import pandas as pd from pandas import DataFrame import datetime import ccxt api_connector = ccxt.bitmex({ 'enableRateLimit': True }) #get OHLCV Data testdata = api_connector.fetch_ohlcv('XBTZ18', '1h') df2 = DataFrame(testdata) df2.columns = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume'] #df2.set_index('Date') df2.index = pd.to_datetime(df2.Date, unit='ms') df3 = df2.drop(['Date'], axis =1) df3.tail() </code></pre> <p>This returns:</p> <pre><code>Open High Low Close Volume Date 2018-07-06 00:00:00 6538.5 6555.0 6532.5 6537.0 176836 2018-07-06 01:00:00 6537.0 6535.5 6520.5 6524.5 139735 2018-07-06 02:00:00 6524.5 6542.5 6525.5 6542.5 59759 2018-07-06 03:00:00 6542.5 6545.0 6538.0 6538.0 121410 2018-07-06 04:00:00 6538.0 6538.5 6477.5 6525.0 764125 </code></pre> <p>Close! but no cigar. Today's date is 8/31/2018 so I would at least expect it to be in the correct month. </p> <p>What am I doing wrong, folks? </p>
<p>This is almost certainly a variation on <a href="https://en.wikipedia.org/wiki/Unix_time" rel="noreferrer">"Unix time"</a>: instead of seconds since the 1 Jan 1970 epoch, it's <em>milliseconds</em> since the 1 Jan 1970 epoch:</p> <pre><code>&gt;&gt;&gt; datetime.datetime.utcfromtimestamp(int('1522454400000') / 1000) datetime.datetime(2018, 3, 31, 0, 0) </code></pre> <p>That certainly looks like a reasonable date. And it even looks like it probably is UTC, not local time (unless you happen to be in England, or weren't expecting it to be exactly at midnight).</p> <hr> <p>I don't think any of Pandas' built-in formats (which are actually just wrappers around formats from <code>datetime</code> and/or <code>dateutil</code>) exactly matches this, so you'll probably need to either do what I did about (convert to int and treat it as a number) or do the stringy equivalent (chop off the last 3 characters and then treat as a string of a UNIX timestamp).</p> <p>The first one seems simpler:</p> <pre><code>&gt;&gt;&gt; pd.to_datetime(int('1522454400000'), unit='ms') Timestamp('2018-03-31 00:00:00') </code></pre> <p>In fact, it'll even work directly on strings, doing the conversion implicitly:</p> <pre><code>&gt;&gt;&gt; pd.to_datetime('1522454400000', unit='ms') Timestamp('2018-03-31 00:00:00') </code></pre>
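<p>To convert the whole column at once and shift the result from UTC to your UTC+4 timezone (<code>'Asia/Dubai'</code> is just one example of a UTC+4 zone — substitute your own):</p> <pre><code># Vectorized conversion of the millisecond timestamps, then timezone shift
df['Date'] = (pd.to_datetime(df['Date'], unit='ms')
                .dt.tz_localize('UTC')
                .dt.tz_convert('Asia/Dubai'))
</code></pre>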
python|pandas|datetime|datetime-format|ccxt
9
375,990
52,249,639
Can pd.DataFrame.set_index maintain dtype?
<p>I am trying to call <code>df.set_index</code> in such a way that the <code>dtype</code> of the column I set_index on is the new <code>index.dtype</code>. Unfortunately, in the following example, set_index changes the <code>dtype</code>.</p> <pre><code>df = pd.DataFrame({'a': pd.Series(np.array([-1, 0, 1, 2], dtype=np.int8))}) df['ignore'] = df['a'] assert (df.dtypes == np.int8).all() # fine df2= df.set_index('a') assert df2.index.dtype == df['a'].dtype, df2.index.dtype </code></pre> <p>Is it possible to avoid this behavior? My pandas version is 0.23.3</p> <p>Similarly,</p> <pre><code>new_idx = pd.Index(np.array([-1, 0, 1, 2]), dtype=np.dtype('int8')) assert new_idx.dtype == np.dtype('int64') </code></pre> <p>Even though the documentation for the dtype parameter says: "If an actual dtype is provided, we coerce to that dtype if it's safe. Otherwise, an error will be raised."</p>
<p>Despite my bloviating in the comments above, this might suffice to get an appropriate index that is both low memory and starts from <code>-1</code>.</p> <h3><code>pandas.RangeIndex</code></h3> <p>Takes a start and stop parameters like <code>range</code></p> <pre><code>df = df.set_index(pd.RangeIndex(-1, len(df) - 1)) print(df.index, df.index.dtype, sep='\n') </code></pre> <p>This should be very memory efficient.</p> <p>Despite it still being of <code>dtype</code> <code>int64</code> (which you should want), it takes up very little memory.</p> <pre><code>pd.RangeIndex(-1, 4000000).memory_usage() 84 </code></pre> <p>And</p> <pre><code>for i in range(1, 1000000, 100000): print(pd.RangeIndex(-1, i).memory_usage()) 84 84 84 84 84 84 84 84 84 84 </code></pre>
python|pandas
1
375,991
52,220,023
find duplicates and mark as variant
<p>I'm trying to create a data frame where I add duplicates as variants in a column. To further illustrate my question:</p> <p>I have a pandas dataframe like this:</p> <pre><code> Case ButtonAsInteger 0 1 130 1 1 133 2 1 42 3 2 165 4 2 158 5 2 157 6 3 158 7 3 159 8 3 157 9 4 130 10 4 133 11 4 43 ... ... ... </code></pre> <p>I have converted it into this form:</p> <pre><code>grouped = activity2.groupby(['Case']) values = grouped['ButtonAsInteger'].agg('sum') id_df = grouped['ButtonAsInteger'].apply(lambda x: pd.Series(x.values)).unstack(level=-1) 0 1 2 3 4 5 6 7 8 9 Case 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN 2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0 3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0 4 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN </code></pre> <p>And now I want to find duplicates and mark each duplicate as a variant. So in this example, Cases 1 and 4 should get variant 1. Like this:</p> <pre><code> Variants 0 1 2 3 4 5 6 7 8 9 Case 1 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN 2 2 165.0 158.0 157.0 141.0 142.0 142.0 142.0 142.0 142.0 147.0 3 3 158.0 159.0 157.0 147.0 166.0 170.0 169.0 130.0 133.0 133.0 4 1 130.0 133.0 42.0 52.0 47.0 47.0 32.0 94.0 NaN NaN </code></pre> <p>I have already tried this method <a href="https://stackoverflow.com/a/44999009">https://stackoverflow.com/a/44999009</a>. But it doesn't work on my data frame, and unfortunately I don't know why. </p> <p>It would probably be possible to apply a double for loop: for each row, look whether there is a duplicate in the record. Whether this is efficient on a large record, I don't know.</p> <p>I have also added my procedure with grouping, because perhaps there is a possibility to already work with duplicates at this point?</p>
<p>This groups by all columns and returns the group index (+ 1 because zero based indexing is the default). I think this should be what you want.</p> <pre><code>id_df['Variant'] = id_df.groupby( id_df.columns.values.tolist()).grouper.group_info[0] + 1 </code></pre> <p>The resulting data frame, given your input data like above:</p> <pre><code> 0 1 2 Variant Case 1 130 133 42 1 2 165 158 157 3 3 158 159 157 2 4 130 133 42 1 </code></pre> <p>There could be a syntactically nicer way to access the group index, but i didn't find one.</p>
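<p>On recent pandas versions (0.20+), <code>GroupBy.ngroup</code> is a more readable alternative to the internal <code>group_info</code> attribute — a sketch, assuming the grouped columns contain no missing values:</p> <pre><code># Label each distinct row pattern with its group number (1-based)
id_df['Variant'] = id_df.groupby(list(id_df.columns)).ngroup() + 1
</code></pre>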
python|pandas|numpy|dataframe|duplicates
0
375,992
52,284,234
Two versions of Pandas causing problems
<p>It appears that when I run <code>&gt;conda list</code>, I have two versions of <code>pandas</code> installed. </p> <pre><code>pandas 0.23.4 py36h830ac7b_0 pandas 0.22.0 &lt;pip&gt; </code></pre> <p>I cannot run <code>import pandas</code> or <code>import pandas as pd</code> in my console (Anaconda - Spyder/Jupyter Notebook) to check the version, but I am getting errors thrown in a script related to <code>pandas</code>: </p> <blockquote> <p>Traceback (most recent call last) ...<br> from pandas.errors import AbstractMethodError</p> <p>ImportError: cannot import name 'AbstractMethodError'</p> </blockquote> <p>I was going to do <code>&gt;conda update pandas</code> but it said that my <code>numpy</code> would be downgraded. That doesn't sound right! How do I fix this?</p>
<p>It will be hard for someone on SO to debug your exact issue. The fastest way to fix your particular problem is most likely a fresh install of <code>Anaconda</code>, followed by setting up a <code>conda</code> environment in that fresh install. (Alternatively, uninstalling the <code>pip</code> copy with <code>pip uninstall pandas</code> and reinstalling through <code>conda install pandas</code> often resolves the duplicate without a full reinstall.)</p> <p>See the following:</p> <ul> <li><a href="https://conda.io/docs/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands" rel="nofollow noreferrer">Conda Environments: Creating an Environment</a></li> <li><a href="https://github.com/conda/conda/issues/626#issue-30400607" rel="nofollow noreferrer">Powershell users will need this fix</a></li> </ul> <p>This will avoid any conflicts with other Python versions or <code>pip</code>, and will also allow you to maintain different environments with different versions of <code>numpy</code> or <code>pandas</code>.</p> <p>See below for an example of how simple it is to switch between <code>2.7</code> and <code>3.6</code>:</p> <pre><code>[py27] PS C:\Users\me&gt; python --version
Python 2.7.15 :: Anaconda, Inc.
[py27] PS C:\Users\me&gt; deactivate
Deactivating environment "py27..."
PS C:\Users\me&gt; activate deeplearning
Activating environment "deeplearning..."
[deeplearning] PS C:\Users\me&gt; python --version
Python 3.6.5 :: Anaconda custom (64-bit)
</code></pre>
python|pandas|numpy|version|conda
1
375,993
52,387,537
Understand tensorflow slice operation
<p>I am confused about the following code:</p> <pre>
import tensorflow as tf
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.framework import dtypes

'''
Randomly crop a tensor, then return the crop position
'''
def random_crop(value, size, seed=None, name=None):
    with ops.name_scope(name, "random_crop", [value, size]) as name:
        value = ops.convert_to_tensor(value, name="value")
        size = ops.convert_to_tensor(size, dtype=dtypes.int32, name="size")
        shape = array_ops.shape(value)
        check = control_flow_ops.Assert(
            math_ops.reduce_all(shape >= size),
            ["Need value.shape >= size, got ", shape, size],
            summarize=1000)
        shape = control_flow_ops.with_dependencies([check], shape)
        limit = shape - size + 1
        begin = tf.random_uniform(
            array_ops.shape(shape),
            dtype=size.dtype,
            maxval=size.dtype.max,
            seed=seed) % limit
        return tf.slice(value, begin=begin, size=size, name=name), begin

sess = tf.InteractiveSession()
size = [10]
a = tf.constant(np.arange(0, 100, 1))
print(a.eval())

a_crop, begin = random_crop(a, size=size, seed=0)
print("offset: {}".format(begin.eval()))
print("a_crop: {}".format(a_crop.eval()))

a_slice = tf.slice(a, begin=begin, size=size)
print("a_slice: {}".format(a_slice.eval()))

assert (tf.reduce_all(tf.equal(a_crop, a_slice)).eval() == True)
sess.close()
</pre> <p>outputs:</p> <pre>
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
 96 97 98 99]
offset: [46]
a_crop: [89 90 91 92 93 94 95 96 97 98]
a_slice: [27 28 29 30 31 32 33 34 35 36]
</pre> <p>There are two <code>tf.slice</code> calls:</p> <p>(1) called inside <code>random_crop</code>, as <code>tf.slice(value, begin=begin, size=size, name=name)</code></p> <p>(2) called as <code>a_slice = tf.slice(a, begin=begin, size=size)</code></p> <p>The parameters (<code>value</code>, <code>begin</code> and <code>size</code>) of those two <code>slice</code> operations are the same.</p> <p>However, why are the printed values <code>a_crop</code> and <code>a_slice</code> different, while <code>tf.reduce_all(tf.equal(a_crop, a_slice)).eval()</code> is still True?</p> <p>Thanks</p> <p>EDIT1: Thanks @xdurch0, I understand the first question now. TensorFlow's <code>random_uniform</code> seems to act like a random generator that produces a new value on every evaluation.</p> <pre>
import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()
size = [10]
np_begin = np.random.randint(0, 50, size=1)
tf_begin = tf.random_uniform(shape=[1], minval=0, maxval=50, dtype=tf.int32, seed=0)
a = tf.constant(np.arange(0, 100, 1))

a_slice = tf.slice(a, np_begin, size=size)
print("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, np_begin, size=size)
print("a_slice: {}".format(a_slice.eval()))

a_slice = tf.slice(a, tf_begin, size=size)
print("a_slice: {}".format(a_slice.eval()))
a_slice = tf.slice(a, tf_begin, size=size)
print("a_slice: {}".format(a_slice.eval()))

sess.close()
</pre> <p>output</p> <pre>
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [42 43 44 45 46 47 48 49 50 51]
a_slice: [41 42 43 44 45 46 47 48 49 50]
a_slice: [29 30 31 32 33 34 35 36 37 38]
</pre>
<p>The confusing thing here is that <code>tf.random_uniform</code> (like every random operation in TensorFlow) produces a new, different value on each evaluation call (each call to <code>.eval()</code> or, in general, each call to <code>tf.Session.run</code>). So if you evaluate <code>a_crop</code> you get one thing, if you then evaluate <code>a_slice</code> you get a different thing, but if you evaluate <code>tf.reduce_all(tf.equal(a_crop, a_slice))</code> you get <code>True</code>, because everything is computed in a single evaluation step, so only one random value is produced and it determines the value of both <code>a_crop</code> and <code>a_slice</code>. Another example: if you run <code>tf.stack([a_crop, a_slice]).eval()</code> you will get a tensor with two equal rows; again, only one random value was produced. More generally, if you call <code>tf.Session.run</code> with multiple tensors to evaluate, all the computations in that call will use the same random values.</p> <p>As a side note, if you actually need a random value in a computation that you want to maintain for a later computation, the easiest thing would be to just retrieve it with <code>tf.Session.run</code>, along with any other needed computation, and feed it back later through <code>feed_dict</code>; or you could have a <code>tf.Variable</code> and store the random value there. A more advanced possibility would be to use <a href="https://www.tensorflow.org/api_docs/python/tf/Session#partial_run" rel="nofollow noreferrer"><code>partial_run</code></a>, an experimental API that allows you to evaluate part of the computation graph and continue evaluating it later, while maintaining the same state (i.e. the same random values, among other things).</p>
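<p>A minimal sketch of the single-run approach, reusing the tensors and TF1-style session from the question (nothing new is assumed beyond those):</p> <pre><code># fetching both tensors in one run() call draws the random `begin` only once,
# so the two results are guaranteed to agree
a_crop_val, a_slice_val = sess.run([a_crop, a_slice])
assert (a_crop_val == a_slice_val).all()
</code></pre>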
python|tensorflow
1
375,994
52,110,869
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated use `arr[tuple(seq)]` instead of `arr[seq]`
<p>I would like to avoid using a non-tuple sequence for multidimensional indexing so that the script will keep working with future releases of Python.</p> <p>Below is the code that I am using for plotting the graph:</p> <pre><code>data = np.genfromtxt('Example.csv', delimiter=',', dtype=None, names=True, converters={0: str2date})

p1, = host.plot(data["column_1"], data["column_2"], "b-", label="column_2")
p2, = par1.plot(data["column_1"], data['column_3'], "r-", label="column_3")
p3, = par2.plot(data["column_1"], data["column_4"], "g-", label="column_4")

host.set_xlim([data["column_1"][0], data["column_1"][-1]])
host.set_ylim(data["column_2"].min(), data["column_2"].max())
par1.set_ylim(data["column_3"].min(), data["column_3"].max())
par2.set_ylim(data["column_4"].min(), data["column_4"].max())
</code></pre>
<p>I can reproduce the warning with:</p> <pre><code>In [313]: x = np.zeros((4,2))
In [315]: x[:,1]
Out[315]: array([0., 0., 0., 0.])
</code></pre> <p>By replacing the <code>:</code> with a <code>slice(None)</code> we can write this indexing as:</p> <pre><code>In [316]: x[[slice(None),1]]
/usr/local/bin/ipython3:1: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  #!/usr/bin/python3
Out[316]: array([0., 0., 0., 0.])
</code></pre> <p>It really should be a tuple, rather than a list:</p> <pre><code>In [317]: x[(slice(None),1)]
Out[317]: array([0., 0., 0., 0.])
In [318]: x[tuple([slice(None),1])]
Out[318]: array([0., 0., 0., 0.])
</code></pre> <p>The warning tells us that the list format used to be OK, but will in the future produce an error.</p> <p>I don't see anything in your code that does this sort of slice-in-a-list indexing.</p> <p><code>data</code> from <code>genfromtxt</code> is a structured array, so indexing by field name is normal: <code>data["column_1"]</code>. So it's likely that the warning is generated within the <code>plot</code> code. But we don't have any clue as to where — the warning doesn't come with a stack trace, does it?</p> <p>So without a sample array like <code>data</code>, or a csv file like <code>Example.csv</code>, we can't reproduce the warning and dig further.</p> <hr> <p>For a start I'd put some sort of <code>print</code> between each of your code lines. The goal is to pin down which <code>matplotlib</code> call is producing the warning.</p> <p>If for example it is produced in</p> <pre><code>host.set_xlim([data["column_1"][0], data["column_1"][-1]])
</code></pre> <p>I might try changing that call to</p> <pre><code>host.set_xlim((data["column_1"][0], data["column_1"][-1]))
</code></pre> <p>or</p> <pre><code>host.set_xlim(data["column_1"][0], data["column_1"][-1])
</code></pre> <p>That's a bit of a wild guess...</p> <h2>edit</h2> <p><a href="https://stackoverflow.com/questions/52594235/futurewarning-using-a-non-tuple-sequence-for-multidimensional-indexing-is-depre">FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated use `arr[tuple(seq)]`</a></p> <p>This later SO question helps us identify a problem function in the <code>scipy.stats</code> package: it constructs a list of slices and uses it without further conversion to a tuple.</p>
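<p>One more way to pin down the source, a small sketch that only assumes the standard library: promote the warning to an exception, which produces the full traceback the warning itself lacks and points at the exact library call doing the list-based indexing.</p> <pre><code>import warnings

# turn just this category into an error; run the plotting code afterwards
# and the resulting traceback shows where the deprecated indexing happens
warnings.filterwarnings('error', category=FutureWarning)
</code></pre> <p>The same effect is available from the command line with <code>python -W error::FutureWarning your_script.py</code>.</p>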
python|arrays|python-3.x|numpy|matplotlib
16
375,995
52,171,582
format conversion not working in calculating fiscal year based on the month
<p>I am trying to calculate the fiscal year based on the month. The conversion is not working: my timestamp is currently of type <code>object</code>, and even after converting it to int to get the necessary values, it does not work.</p> <pre><code>import pandas as pd

upload_raw['Month_'] = upload_raw['CREAT_TS'].str[:10]
upload_raw['Year_'] = upload_raw['Month_'].str[:4].astype(int)
upload_raw['Month_'] = pd.DatetimeIndex(upload_raw['Month_']).month.astype(int)

def year_conv():
    if upload_raw['Month_'] &gt; 6:
        upload_raw['Year_']+1
    else:
        upload_raw['Year_1']
</code></pre> <p>I still get the same value as the year.</p> <p>This is the format of the date that I am converting:</p> <pre><code>CREAT_TS
2018-06-22-06.48.49.601000
</code></pre> <p>Entire code:</p> <pre><code>import pandas as pd
import numpy as np
import datetime
from datetime import date
from datetime import datetime
from dateutil.relativedelta import relativedelta
import pyodbc
import calendar

# loading Agency Notices Upload Raw Data
upload_raw = pd.read_excel(r'C:\Users\Desktop\Upload Raw Data.xlsx', sheet_name = 'Upload', header = 0 )
display(upload_raw)

upload_raw.dtypes
datatype = upload_raw.dtypes
display(datatype)

# creating Month and Year column
upload_raw['Month_'] = upload_raw['CREAT_TS'].str[:10]
upload_raw['Year_'] = upload_raw['Month_'].str[2:4].astype(int)
upload_raw['Month_'] = pd.DatetimeIndex(upload_raw['Month_']).month.astype(int)

def year_conv():
    if upload_raw['Month_'] &gt; 6:
        upload_raw['Year_']+1
    else:
        upload_raw['Year_1']

upload_raw['Month_'] = upload_raw['Month_'].apply(lambda x: calendar.month_abbr[x])

# loading Branch Mapping Details
mapping = pd.read_excel(r'C:\Users\Desktop\Mapping.xlsx', sheet_name = 'Mapping', header = 0 )
upload_lookup = pd.merge(left = upload_raw, right = mapping, on='BRANCH')
display(upload_lookup)
</code></pre> <p>Here is some sample data from the upload file:</p> <pre><code>BRANCH  CUE   CREAT_TS                     RAF_IND
AA      &amp;CR   2018-06-22-06.48.49.601000
AA      &amp;CR   2018-06-22-11.43.29.859000
AA      &amp;CR   2018-06-22-11.54.52.633000
AA      EZZ   2018-06-22-11.05.13.371000
</code></pre> <p>From CREAT_TS I am trying to get the month and the year. If the <code>month &gt; 6</code> then the <code>year</code> should be <code>year + 1</code>, else it should stay as the year that is present.</p> <p>Regards, Ren.</p>
<p>You can accomplish what you want using <code>np.where()</code>.</p> <p>Based on your example, I've created a simplified dataframe to demonstrate. Note that I changed the final month to 7, so that we have an example where your condition evaluates to True.</p> <pre><code>df

Out[74]:
   Month_  Year_
0       6     18
1       6     18
2       6     18
3       7     18
</code></pre> <p>To avoid confusion, I am saving the result in a new column <code>'Years'</code> so you can see the change.</p> <pre><code>df['Years'] = np.where(df['Month_'] &gt; 6, df['Year_'] + 1, df['Year_'])

df
Out[79]:
   Month_  Year_  Years
0       6     18     18
1       6     18     18
2       6     18     18
3       7     18     19
</code></pre>
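<p>For completeness, a sketch applying the same idea end to end against the question's data; the column name <code>Fiscal_Year</code> is made up here, and the only assumption is that <code>CREAT_TS</code> strings look like <code>2018-06-22-06.48.49.601000</code>, with the date in the first 10 characters:</p> <pre><code>import numpy as np
import pandas as pd

dates = pd.to_datetime(upload_raw['CREAT_TS'].str[:10])
upload_raw['Month_'] = dates.dt.month
upload_raw['Year_'] = dates.dt.year
# fiscal year rolls over after June
upload_raw['Fiscal_Year'] = np.where(upload_raw['Month_'] &gt; 6,
                                     upload_raw['Year_'] + 1,
                                     upload_raw['Year_'])
</code></pre>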
python|pandas
1
375,996
52,349,686
Error replacing values in a column using pandas
<p>I am trying to replace the values in a column with numbers. These are the unique values in the column:</p> <pre><code>['R2' '01' '02' 'C1']
</code></pre> <p>So I did this:</p> <pre><code>data = pd.read_csv('file.csv')
df = pd.DataFrame(data)
df['rates'].apply({'R2': 1, '01' : 2, '02' : 3, 'C1' : 4}.get)
</code></pre> <p>But when I print out <code>df['rates']</code> after the supposed replacement, I still get the same values:</p> <pre><code>['R2' '01' '02' 'C1']
</code></pre> <p>This is how my file.csv looks:</p> <pre><code>amount,id,rates,height
1400,4,R2,3
1389,6,R2,8
10000,1,01,13
</code></pre>
<p>Try <code>map</code> and assign the result back (note that <code>Series.map</code> has no <code>inplace</code> option):</p> <pre><code>df['rates'] = df['rates'].map({'R2': 1, '01' : 2, '02' : 3, 'C1' : 4})
</code></pre> <p>Actually your original code works too, but you need to assign the result:</p> <pre><code>df['rates'] = df['rates'].apply({'R2': 1, '01' : 2, '02' : 3, 'C1' : 4}.get)
</code></pre>
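<p>A further alternative sketch using <code>Series.replace</code>; the practical difference is that values missing from the mapping are left untouched, whereas <code>map</code> would turn them into <code>NaN</code>:</p> <pre><code># same mapping, but unmapped values survive unchanged
df['rates'] = df['rates'].replace({'R2': 1, '01': 2, '02': 3, 'C1': 4})
</code></pre>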
python|pandas
1
375,997
52,372,690
Keep eliminating data points until good correlation coefficient is obtained
<p>I have been trying to find a way to eliminate outliers from a dataset. The outliers are removed the following way: any value which results in a &gt;10% reduction in the R2 value needs to be removed. When 4.2 in the A dataset got replaced with 1.3 (in the B dataset), it changed the R2 by more than 10% and thus was eliminated in the C dataset.</p> <p>However, when 0.7 in A was replaced with 0.9, it did not change the correlation coefficient by 10% and thus was not removed from the C dataset.</p> <p>An image is attached herewith.</p> <p><a href="https://i.stack.imgur.com/YtcPL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YtcPL.png" alt="Eliminating outliers in order to improve the correlation"></a></p> <p>In the image:</p> <ul> <li>plot A has an R2 of 1.0</li> <li>plot B has an R2 of 0.8294 (1.3 is the outlier, since it causes a &gt;10% lowering of the R2 value)</li> <li>plot C has an R2 of 1.0 (on removing 1.3 from the dataset)</li> </ul> <p>How do I go about this issue? I need to use Python to get to the solution. Out of the 10 data points, a maximum of only 3 data points can be removed in order to improve the correlation.</p> <p>I apologize if this question was asked before. Thanks a ton for the help!</p>
<p>You want <strong>robust linear regression</strong>, ignoring the outliers. Such a thing is already implemented in the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.HuberRegressor.html#sklearn.linear_model.HuberRegressor" rel="nofollow noreferrer">sklearn module</a>, but since it's not among the tags, here is a plain SciPy solution.</p> <p>The idea is to minimize the sum of absolute values of deviations (L1 loss function) instead of the sum of squares. (Compare: median vs mean.)</p> <pre><code>import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt

x = np.linspace(0.7, 7, 10)
y = 0.8*x + 1.2
y[5] = 2.5   # outlier

l1_loss = lambda c: np.sum(np.abs(c[0]*x + c[1] - y))
c = minimize(l1_loss, (0, 0)).x

plt.plot(x, y, 'b*')
plt.plot(x, c[0]*x + c[1], 'r')
plt.show()

good = np.abs(c[0]*x + c[1] - y) &lt; 0.1   # arbitrary threshold to separate good from bad
print('good data: x = {}, y = {}'.format(x[good], y[good]))
</code></pre> <p>Output: "good data: <code>x = [0.7 1.4 2.1 2.8 3.5 4.9 5.6 6.3 7. ]</code>, <code>y = [1.76 2.32 2.88 3.44 4.   5.12 5.68 6.24 6.8 ]</code>".</p> <p><img src="https://i.stack.imgur.com/FfoAN.png" alt="regression"></p> <p>The line is not perturbed by the outlier at all.</p> <p>You may want to replace <code>good = np.abs(c[0]*x + c[1] - y) &lt; 0.1</code> with an iterative approach, where the data point with the largest deviation, i.e.,</p> <pre><code>outlier_idx = np.argmax(np.abs(c[0]*x + c[1] - y))
</code></pre> <p>is identified and removed from the x and y arrays (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html" rel="nofollow noreferrer"><code>np.delete</code></a>), and the process repeated until the correlation is good.</p>
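<p>A sketch of that iterative variant, hedged in that the <code>0.1</code> threshold is as arbitrary as above and the cap of three removals comes from the question's constraint:</p> <pre><code>import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.7, 7, 10)
y = 0.8*x + 1.2
y[5] = 2.5   # outlier

for _ in range(3):   # at most 3 data points may be removed
    l1_loss = lambda c: np.sum(np.abs(c[0]*x + c[1] - y))
    c = minimize(l1_loss, (0, 0)).x
    resid = np.abs(c[0]*x + c[1] - y)
    outlier_idx = np.argmax(resid)
    if resid[outlier_idx] &lt; 0.1:   # nothing left that looks like an outlier
        break
    x = np.delete(x, outlier_idx)
    y = np.delete(y, outlier_idx)

print('good data: x = {}, y = {}'.format(x, y))
</code></pre>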
python|python-3.x|numpy|scikit-learn|scipy
2
375,998
52,272,676
Combined group by using pandas
<p>Imagine a <code>pandas</code> data frame given by</p> <pre><code>df = pd.DataFrame({
    'id': range(1, 10),
    'mfr': ('a', 'b', 'a', 'c', 'd', 'e', 'd', 'd', 'f'),
    'vmn': ('A', 'A', 'B', 'C', 'D', 'E', 'F', 'F', 'D')
})
</code></pre> <p>which gives the following table</p> <pre><code>   id mfr vmn
0   1   a   A
1   2   b   A
2   3   a   B
3   4   c   C
4   5   d   D
5   6   e   E
6   7   d   F
7   8   d   F
8   9   f   D
</code></pre> <p>I wish to determine which <code>id</code>s belong together by grouping on either <code>mfr</code> and/or <code>vmn</code>. I can easily assign a group id using one or the other:</p> <pre><code>df['groupby_mfr'] = df.groupby('mfr').grouper.group_info[0]
df['groupby_vmn'] = df.groupby('vmn').grouper.group_info[0]
</code></pre> <p>which gives the following</p> <pre><code>   id mfr vmn  groupby_mfr  groupby_vmn
0   1   a   A            0            0
1   2   b   A            1            0
2   3   a   B            0            1
3   4   c   C            2            2
4   5   d   D            3            3
5   6   e   E            4            4
6   7   d   F            3            5
7   8   d   F            3            5
8   9   f   D            5            3
</code></pre> <p>Now I want to combine these into a new group id, so the resulting data frame becomes like this:</p> <pre><code>   id mfr vmn  groupby_mfr  groupby_vmn  combined_group
0   1   a   A            0            0               0
1   2   b   A            1            0               0
2   3   a   B            0            1               0
3   4   c   C            2            2               1
4   5   d   D            3            3               2
5   6   e   E            4            4               3
6   7   d   F            3            5               2
7   8   d   F            3            5               2
8   9   f   D            5            3               2
</code></pre> <p>The first two rows belong together since their <code>vmn</code> values are equal. The third row joins the same group, since rows 1 and 3 share the same <code>mfr</code>. And so on...</p> <p>Note also that this will be run on multiple columns with many rows, so a performant solution is much appreciated as well.</p>
<p>As suggested in the comments in the original post it can be solved by using <a href="https://networkx.github.io/" rel="nofollow noreferrer"><code>networkx</code></a>. </p> <pre><code>import networkx as nx import pandas as pd df = pd.DataFrame({ 'id': range(1, 10), 'mfr': ('a', 'b', 'a', 'c', 'd', 'e', 'd', 'd', 'f'), 'vmn': ('A', 'A', 'B', 'C', 'D', 'E', 'F', 'F', 'D') }) G = nx.from_pandas_edgelist(df, 'mfr', 'vmn') Gcc = nx.connected_components(G) connected_map = dict() for g, ids in enumerate(Gcc): for id in ids: connected_map[id] = g df['combined_group'] = df['mfr'].map(connected_map) </code></pre> <p>which yields</p> <pre><code> id mfr vmn combined_group 0 1 a A 0 1 2 b A 0 2 3 a B 0 3 4 c C 1 4 5 d D 2 5 6 e E 3 6 7 d F 2 7 8 d F 2 8 9 f D 2 </code></pre>
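<p>Since the question asks for performance: a hedged sketch of an equivalent pure-SciPy variant that skips building a <code>networkx</code> graph object. It encodes <code>mfr</code> and <code>vmn</code> as the two sides of a bipartite sparse adjacency matrix and labels its connected components; the component numbers may differ from the ones above, but they induce the same grouping.</p> <pre><code>import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

mfr_codes, _ = pd.factorize(df['mfr'])
vmn_codes, _ = pd.factorize(df['vmn'])
n_mfr = mfr_codes.max() + 1
n = n_mfr + vmn_codes.max() + 1

# one edge per row, from its mfr node to its (offset) vmn node
adj = coo_matrix((np.ones(len(df)), (mfr_codes, vmn_codes + n_mfr)), shape=(n, n))
_, labels = connected_components(adj, directed=False)

df['combined_group'] = labels[mfr_codes]
</code></pre>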
python|pandas|pandas-groupby
0
375,999
52,421,405
AttributeError: exp when using numpy on data loaded using scipy.io.loadmat
<p>I get the following output from the unit test below:</p> <pre><code>[[array([[-1.57079633]])]]
[[array([[0.+1.57079633j]])]]
&lt;module 'numpy' from '/usr/local/lib/python2.7/dist-packages/numpy/__init__.pyc'&gt;
E
======================================================================
ERROR: test_TestWECTrain_BasicEnv_SetupAndStepping (__main__.Test_exp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "Test_exp.py", line 34, in test_TestWECTrain_BasicEnv_SetupAndStepping
    expsigmatphase = np.exp(tmp)
AttributeError: exp

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (errors=1)
</code></pre> <p>Here is the unit test:</p> <pre><code>import unittest
import os
import scipy.io as sio
import numpy as np
from pprint import pprint

class Test_exp (unittest.TestCase):

    def test_exp (self):

        data_file = "test_buoysimoptions.mat"

        buoysimoptions = sio.loadmat (data_file)

        t = 0.0

        phase = buoysimoptions['SeaParameters']['phase']
        sigma = buoysimoptions['SeaParameters']['sigma']

        sigmatminusphase = sigma * t - phase;
        print (sigmatminusphase)
        tmp = -1.0j * sigmatminusphase;
        print (tmp)
        print (np)
        tmp = np.asarray(tmp)
        expsigmatphase = np.exp(tmp)

if __name__ == '__main__':
    unittest.main()
</code></pre> <p>The input file (2.9kB) can be downloaded here: <a href="https://www.dropbox.com/s/psq1gq8xpjivrim/test_buoysimoptions.mat?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/psq1gq8xpjivrim/test_buoysimoptions.mat?dl=0</a></p> <p>Why do I get the error <code>AttributeError: exp</code>?</p> <p>Note this is identical to <a href="https://stackoverflow.com/questions/24467970/attributeerror-exp-while-using-numpy-exp-on-an-apparently-ordinary-array">&quot;AttributeError: exp&quot; while using numpy.exp() on an apparently ordinary array</a>, but that question was never answered and, unlike this one, provides no minimal example.</p> <p>This is in Python 2.7. In Python 3.5 I get:</p> <pre><code>[[array([[-1.57079633]])]]
[[array([[0.+1.57079633j]])]]
E
======================================================================
ERROR: test_exp (__main__.Test_exp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "Test_exp.py", line 25, in test_exp
    expsigmatphase = np.exp(tmp)
AttributeError: 'numpy.ndarray' object has no attribute 'exp'

----------------------------------------------------------------------
Ran 1 test in 0.002s

FAILED (errors=1)
</code></pre> <p>Edit: some further information on the loaded data.</p> <p>I expected <code>buoysimoptions['SeaParameters']['phase']</code> to just be a numpy array, but it seems it is not (see below), which ultimately causes the error:</p> <pre><code>&gt;&gt;&gt; phase = buoysimoptions['SeaParameters']['phase']
&gt;&gt;&gt; phase
array([[array([[1.57079633]])]], dtype=object)
&gt;&gt;&gt; phase = buoysimoptions['SeaParameters']['phase'][0]
&gt;&gt;&gt; phase
array([array([[1.57079633]])], dtype=object)
&gt;&gt;&gt; phase = buoysimoptions['SeaParameters']['phase'][0][0]
&gt;&gt;&gt; phase
array([[1.57079633]])
</code></pre> <p>Do I always need to index <code>[0][0]</code> to get the actual array? What is the right thing to do here? If I use the last one, the exp error goes away.</p>
<p>It turns out the answer is simple: these loaded variables were themselves originally MATLAB structures, and I was omitting the index when retrieving them. The correct thing to do is the following (note the extra <code>[0,0]</code> when retrieving phase and sigma):</p> <pre><code>import unittest
import os
import scipy.io as sio
import numpy as np
from pprint import pprint

class Test_exp (unittest.TestCase):

    def test_exp (self):

        data_file = "test_buoysimoptions.mat"

        buoysimoptions = sio.loadmat (data_file)

        t = 0.0

        phase = buoysimoptions['SeaParameters'][0,0]['phase']
        sigma = buoysimoptions['SeaParameters'][0,0]['sigma']

        sigmatminusphase = sigma * t - phase
        print (sigmatminusphase)
        tmp = -1.0j * sigmatminusphase
        print (tmp)
        print (np)
        tmp = np.asarray(tmp)
        expsigmatphase = np.exp(tmp)

if __name__ == '__main__':
    unittest.main()
</code></pre>
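<p>A hedged alternative that avoids the index gymnastics entirely: <code>loadmat</code> can unwrap the MATLAB struct nesting for you via its <code>squeeze_me</code> and <code>struct_as_record</code> options, after which fields become plain attributes. A minimal sketch:</p> <pre><code>import scipy.io as sio

opts = sio.loadmat('test_buoysimoptions.mat', squeeze_me=True,
                   struct_as_record=False)

# fields of the MATLAB struct become attributes; no [0,0] indexing needed
phase = opts['SeaParameters'].phase
sigma = opts['SeaParameters'].sigma
</code></pre>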
python|numpy
1