Column      Type           Min     Max
Unnamed: 0  int64          0       378k
id          int64          49.9k   73.8M
title       string length  15      150
question    string length  37      64.2k
answer      string length  37      44.1k
tags        string length  5       106
score       int64          -10     5.87k
376,400
73,324,303
Vlookup using Python when data is given in ranges
<p>I have two Excel files, and I want to perform a vlookup and find the difference in costs using Python or even Excel.</p> <p>My files look like this:</p> <p><strong>source_data.xlsx</strong> contains distances covered and their prices; for example, the distance range from 1 to 100 should be charged 4800 and the distance range from 101 to 120 should be charged 5100.</p> <pre><code>DISTANCE COST 1-100 4800 101-120 5100 121-140 5500 141-160 5900 161-180 6200 181-200 6600 210-220 6900 221-240 7200 </code></pre> <p><strong>Analysis.xlsx</strong></p> <pre><code>loading_station distance_travel total_cost status PUGU 40 4000 PAID PUGU 80 3200 PAID MOROGORO 50 5000 PAID MOROGORO 220 30400 PAID DODOMA 150 5100 PAID KIGOMA 90 2345 PAID DODOMA 230 6000 PAID DODOMA 180 16500 PAID KIGOMA 32 3000 PAID DODOMA 45 6000 PAID DODOMA 65 5000 PAID KIGOMA 77 1000 PAID KIGOMA 90 4000 PAID </code></pre> <p>The actual cost for each distance is given in <code>source_data.xlsx</code>. I want to check whether the cost in <code>Analysis.xlsx</code> corresponds to the actual value, in order to detect underpayment and overpayment.</p> <p>The desired output should look like this, with two columns added: <code>source_cost</code>, which is taken from <code>source_data.xlsx</code> by using <code>vlookup</code>, and <code>Difference</code>, which is the difference between <code>total_cost</code> and <code>source_cost</code>.</p> <pre><code>loading_station distance_travel total_cost status source_cost Difference PUGU 40 4000 PAID 4800 -800 PUGU 80 3200 PAID 4800 -1600 MOROGORO 50 5000 PAID 4800 200 MOROGORO 220 30400 PAID 6900 23500 DODOMA 150 5100 PAID 5900 -800 KIGOMA 90 2345 PAID 4800 -2455 DODOMA 230 6000 PAID 7200 -1200 DODOMA 180 16500 PAID 6200 10300 KIGOMA 32 3000 PAID 4800 -1800 DODOMA 45 6000 PAID 4800 1200 DODOMA 65 5000 PAID 4800 200 KIGOMA 77 1000 PAID 4800 -3800 KIGOMA 90 4000 PAID 4800 -800 </code></pre> <p>My code so far:</p> <pre><code># import pandas import pandas as pd # read excel data source_data = pd.read_excel('source_data.xlsx') analysis_file = pd.read_excel('analysis.xlsx') source_data.head(5) analysis_file.head(5) </code></pre>
<p>Since it is a categorical-bins problem, I suggest utilizing <code>cut()</code> and finding the corresponding value.</p> <pre><code>import pandas as pd # create bins bh = df_source['DISTANCE'].apply(lambda x: x.split('-')).apply(pd.Series).astype(int).values[:,0] bt = df_source['DISTANCE'].apply(lambda x: x.split('-')).apply(pd.Series).astype(int).values[:,1] bins = pd.IntervalIndex.from_arrays(bh, bt, closed='both') print(bins) ### IntervalIndex([[1, 100], [101, 120], [121, 140], [141, 160], [161, 180], [181, 200], [210, 220], [221, 240]], dtype='interval[int64, both]') </code></pre> <p>As shown, the result is an <code>IntervalIndex</code> with <code>dtype='interval[int64, both]'</code>.</p> <br/> <pre><code># find corresponding values df_analysis['source_cost'] = pd.cut(df_analysis['distance_travel'], bins=bins).map(dict(zip(bins, df_source['COST']))).astype(int) # calculation df_analysis['Difference'] = df_analysis['total_cost'] - df_analysis['source_cost'] print(df_analysis) ### </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>loading_station</th> <th>distance_travel</th> <th>total_cost</th> <th>status</th> <th>source_cost</th> <th>Difference</th> </tr> </thead> <tbody> <tr> <td>PUGU</td> <td>40</td> <td>4000</td> <td>PAID</td> <td>4800</td> <td>-800</td> </tr> <tr> <td>PUGU</td> <td>80</td> <td>3200</td> <td>PAID</td> <td>4800</td> <td>-1600</td> </tr> <tr> <td>MOROGORO</td> <td>50</td> <td>5000</td> <td>PAID</td> <td>4800</td> <td>200</td> </tr> <tr> <td>MOROGORO</td> <td>220</td> <td>30400</td> <td>PAID</td> <td>6900</td> <td>23500</td> </tr> <tr> <td>DODOMA</td> <td>150</td> <td>5100</td> <td>PAID</td> <td>5900</td> <td>-800</td> </tr> <tr> <td>KIGOMA</td> <td>90</td> <td>2345</td> <td>PAID</td> <td>4800</td> <td>-2455</td> </tr> <tr> <td>DODOMA</td> <td>230</td> <td>6000</td> <td>PAID</td> <td>7200</td> <td>-1200</td> </tr> <tr> <td>DODOMA</td> <td>180</td> <td>16500</td> <td>PAID</td> <td>6200</td> <td>10300</td> </tr> <tr> <td>KIGOMA</td> <td>32</td> <td>3000</td> <td>PAID</td> <td>4800</td> <td>-1800</td> </tr> <tr> <td>DODOMA</td> <td>45</td> <td>6000</td> <td>PAID</td> <td>4800</td> <td>1200</td> </tr> <tr> <td>DODOMA</td> <td>65</td> <td>5000</td> <td>PAID</td> <td>4800</td> <td>200</td> </tr> <tr> <td>KIGOMA</td> <td>77</td> <td>1000</td> <td>PAID</td> <td>4800</td> <td>-3800</td> </tr> <tr> <td>KIGOMA</td> <td>90</td> <td>4000</td> <td>PAID</td> <td>4800</td> <td>-800</td> </tr> </tbody> </table> </div>
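<p>For reference, here is a self-contained, runnable sketch of the same approach; the two inline frames are hypothetical stand-ins for the data that would normally come from <code>pd.read_excel</code> as in the question:</p> <pre><code>import pandas as pd

# stand-ins for the two Excel files (normally loaded with pd.read_excel)
df_source = pd.DataFrame({
    'DISTANCE': ['1-100', '101-120', '121-140', '141-160',
                 '161-180', '181-200', '210-220', '221-240'],
    'COST': [4800, 5100, 5500, 5900, 6200, 6600, 6900, 7200],
})
df_analysis = pd.DataFrame({
    'distance_travel': [40, 220, 150],
    'total_cost': [4000, 30400, 5100],
})

# split the 'lo-hi' strings once and build inclusive interval bins
edges = df_source['DISTANCE'].str.split('-', expand=True).astype(int)
bins = pd.IntervalIndex.from_arrays(edges[0], edges[1], closed='both')

# map each travelled distance to its interval, then to that interval's cost
df_analysis['source_cost'] = (
    pd.cut(df_analysis['distance_travel'], bins=bins)
      .map(dict(zip(bins, df_source['COST'])))
      .astype(int)
)
df_analysis['Difference'] = df_analysis['total_cost'] - df_analysis['source_cost']
print(df_analysis)  # rows 40/220/150 get source_cost 4800/6900/5900
</code></pre>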
python|excel|pandas
1
376,401
73,381,308
Calculate a Python Array Using For Loops from Data in 2 Dataframes?
<p>I am trying to make or fill a 2-d array using a numpy function called &quot;np.random.normal(average, standard deviation, size)&quot; with the given inputs. My inputs originate from two different dfs that contain the average and standard deviation respectively and they are shown below. This is dfa:</p> <pre><code> month site AvgAdj_Prod 1 Maple 44 2 Maple 48 3 Maple 51 4 Maple 55 5 Maple 62 6 Maple 57 7 Maple 51 8 Maple 44 9 Maple 48 10 Maple 39 11 Maple 38 12 Maple 40 1 Oak 117 2 Oak 129 3 Oak 133 4 Oak 201 5 Oak 206 6 Oak 271 7 Oak 289 8 Oak 221 9 Oak 159 10 Oak 157 11 Oak 140 12 Oak 130 </code></pre> <p>The standard deviation df &quot;dfs&quot; looks like this:</p> <pre><code> month site StdevAdj_Prod 1 Maple 12 2 Maple 13 3 Maple 11 4 Maple 10 5 Maple 9 6 Maple 14 7 Maple 7 8 Maple 9 9 Maple 12 10 Maple 14 11 Maple 18 12 Maple 15 1 Oak 25 2 Oak 37 3 Oak 44 4 Oak 39 5 Oak 52 6 Oak 71 7 Oak 49 8 Oak 54 9 Oak 44 10 Oak 69 11 Oak 51 12 Oak 77 </code></pre> <p>I am trying to populate a 2-d array for months (rows) and N instances (columns) based on the number of sites - 2 in this case - &quot;Maple&quot; and &quot;Oak&quot;. I have tried the following to create the final 2-d array that should have 5 rows for each site because I need to know the output for months 8, 9, 10, 11, 12.</p> <p>So, the final dataframe of calculations using np.random.normal() should be 10 rows by 5 columns. Here is what I have tried, but my value 'rout' does not append new data but only overwrites the same data. Thank you for your help.</p> <pre class="lang-py prettyprint-override"><code>months = list(range(monthnow, 13)) # months == [8,9,10,11,12] r = [] uniques = df1[&quot;site&quot;].unique() # unique number of sites n = 5 ns = list(range(1, n + 1)) # of iterations to calculate (columns) r1 = np.zeros((len(months), n)) # component of np.random.normal(r1,r2,x) r2 = np.zeros((len(months), n)) # component of np.random.normal(r1,r2,x) rout = np.zeros((len(months), n)) # 5 x 5 array of output from np.random.normal() for vals in uniques: for i in months: for k in range(1, n + 1): # print(k) for j in ns: # print(j) r1[k - 1, j - 1] = dfa[(dfa.site == vals)][dfa.month == i][&quot;Adj_Prod&quot;] r2[k - 1][j - 1] = dfs[(dfs.plant_name == vals)][dfs.month == i][ &quot;Adj_Prod&quot; ] rout[k - 1, j - 1] = np.random.normal( r1[k - 1, j - 1], r2[k - 1, j - 1], 1 ) print(rout) r.append(rout) </code></pre>
<p>The problem comes from the fact that <code>rout</code> is not reinitialized after being appended.</p> <p>With the dataframes you provided:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd dfa = pd.DataFrame({'month': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 'site': ['Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak'], 'AvgAdj_Prod': [44, 48, 51, 55, 62, 57, 51, 44, 48, 39, 38, 40, 117, 129, 133, 201, 206, 271, 289, 221, 159, 157, 140, 130]}) dfs = pd.DataFrame({'month': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 'site': ['Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Maple', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak', 'Oak'], 'StdevAdj_Prod': [12, 13, 11, 10, 9, 14, 7, 9, 12, 14, 18, 15, 25, 37, 44, 39, 52, 71, 49, 54, 44, 69, 51, 77]}) </code></pre> <p>The following code provides, I think, the result you expect:</p> <pre class="lang-py prettyprint-override"><code>months = [8, 9, 10, 11, 12] r = [] uniques = dfa[&quot;site&quot;].unique() n = 5 ns = list(range(1, n + 1)) r1 = np.zeros((len(months), n)) r2 = np.zeros((len(months), n)) for val in uniques: rout = np.zeros((len(months), n)) for i in months: for k in range(1, n + 1): for j in ns: r1[k - 1, j - 1] = dfa.loc[ (dfa.site == val) &amp; (dfa.month == i), &quot;AvgAdj_Prod&quot; ] r2[k - 1][j - 1] = dfs.loc[ (dfs.site == val) &amp; (dfs.month == i), &quot;StdevAdj_Prod&quot; ] rout[k - 1, j - 1] = np.random.normal( r1[k - 1, j - 1], r2[k - 1, j - 1], 1 ) r.append(rout) </code></pre> <pre class="lang-py prettyprint-override"><code>print(r) # Output [ array( [ [37.56708627, 11.89442282, 32.79617584, 19.3774222, 41.61703549], [41.27896445, 32.82832126, 25.55349053, 54.42646487, 32.02670686], [36.37309834, 45.73408605, 34.14421009, 31.62474747, 35.52334414], [39.29474075, 41.25229097, 45.93103323, 20.76663807, 34.59076033], [54.22060032, 53.5233677, -2.64159211, 46.77896409, 29.19027622], ] ), array( [ [139.68338142, 224.35423705, 91.48533586, 73.78006845, 146.88688503], [209.26142311, 69.75640239, 103.79220378, 168.36765205, 91.0073397], [20.27003735, 52.15998312, 147.44803278, 8.36811135, 88.59378042], [252.16742434, 122.18310833, 169.66181174, 196.28030665, 179.30169691], [179.41385097, 29.46613095, 140.79689411, 73.38920373, 176.65458478], ] ), ] </code></pre>
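<p>Since every (site, month) pair has a single mean and standard deviation, the two inner loops repeatedly draw from the same distribution. As a possible refinement (a sketch building on the same <code>dfa</code>/<code>dfs</code> defined above), <code>np.random.normal</code> can fill a whole <code>(5, n)</code> block per site in one broadcasted call:</p> <pre class="lang-py prettyprint-override"><code>months = [8, 9, 10, 11, 12]
n = 5
r = []
for site in dfa['site'].unique():
    # one mean/std per month, in month order
    means = dfa.loc[(dfa.site == site) &amp; (dfa.month.isin(months)), 'AvgAdj_Prod'].to_numpy()
    stds = dfs.loc[(dfs.site == site) &amp; (dfs.month.isin(months)), 'StdevAdj_Prod'].to_numpy()
    # broadcasting (5, 1) parameters against size (5, n) draws n samples per month
    r.append(np.random.normal(means[:, None], stds[:, None], size=(len(months), n)))
</code></pre>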
arrays|pandas|loops
0
376,402
73,329,568
NLP data processing between `BucketIterator` and `build_vocab_from_iterator`
<p>I am using the AG News dataset to train a model for text classification.</p> <p>This part uses <code>TabularDataset</code> to generate datasets from the <code>csv</code> files.</p> <pre><code>import torchtext import torch from torchtext.legacy.data import Field, TabularDataset, BucketIterator, Iterator import spacy def des_tokenize(x): return x.split(' ') def title_tokenize(x): return x.split(' ') def category_tokenize(x): return x device = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;) CATEGORY = Field(tokenize=category_tokenize) TITLE = Field(tokenize=title_tokenize, init_token='&lt;SOS&gt;', eos_token='&lt;EOS&gt;') DES = Field(tokenize=des_tokenize, init_token='&lt;SOS&gt;', eos_token='&lt;EOS&gt;') spacy_en = spacy.load('en_core_web_sm') train_fields = [('id', None), ('category', CATEGORY), ('title', TITLE), ('description', DES)] test_fields = [('title', TITLE), ('description', DES)] train_data = TabularDataset( path = '/content/drive/MyDrive/summer2/train.csv', format = 'csv', fields = train_fields, skip_header = True) test_data = TabularDataset( path = '/content/drive/MyDrive/summer2/test.csv', format = 'csv', fields = test_fields, skip_header = True) </code></pre> <p>After the datasets are generated, I chose the pre-trained embedding model <code>torchtext.vocab.GloVe</code> to build the <code>vocab</code>.</p> <pre><code>from torchtext.data.utils import get_tokenizer from torchtext.vocab import build_vocab_from_iterator train_batch_size = 10 test_batch_size = 1 max_length = 256 tokenizer = get_tokenizer('basic_english') train_iter = torchtext.legacy.data.BucketIterator( train_data, batch_size=train_batch_size, ) test_iter = torchtext.legacy.data.BucketIterator( test_data, batch_size=test_batch_size, ) DES.build_vocab( train_data, vectors=torchtext.vocab.GloVe(name=&quot;6B&quot;, dim=50, max_vectors=50_000), max_size=50_000, ) TITLE.build_vocab( train_data, vectors=torchtext.vocab.GloVe(name=&quot;6B&quot;, dim=50, max_vectors=50_000), max_size=50_000, ) CATEGORY.build_vocab(train_data) </code></pre> <p>And the output looks good after calling the <code>create_batches</code> function:</p> <pre><code>def create_batches(self): self.batches = batch(self.data(), self.batch_size, self.batch_size_fn) # Create batches - needs to be called before each loop. train_iter.create_batches() # Loop through BucketIterator. print('PyTorchText BuketIterator\n') for batch in train_iter.batches: # Let's check batch size. print('Batch size: %d\n'% len(batch)) print('category\ttitle\tdescription'.ljust(10)) # Print each example. for example in batch: print('%s \t %s \t %s'.ljust(10) % (example.category, example.title, example.description)) print('\n') # Only look at first batch. Reuse this code in training models. 
break </code></pre> <p>Output looks like</p> <pre><code>PyTorchText BuketIterator Batch size: 10 category title description 2 ['UPDATE', '1-Open-Rejuvenated', 'Haas', 'reaches', 'last', 'eight'] ['Germany', '#39;s', 'Tommy', 'Haas', 'continued', 'his', 'resurgence', 'with', 'a', '7-6', '6-1', '7-5', 'victory', 'over', 'Czech', 'teenager', 'Tomas', 'Berdych', 'on', 'Tuesday', 'to', 'reach', 'the', 'quarter-finals', 'of', 'the', 'US', 'Open', 'for', 'the', 'first', 'time.'] 3 ['Japan', '#39;s', 'Nikkei', 'Average,', 'Topix', 'Advance;', 'Toyota,', 'Advantest', 'Gain'] ['Japan', '#39;s', 'Nikkei', '225', 'Stock', 'Average', 'rose', '56.74,', 'or', '0.5', 'percent,', 'to', '11,139.97', 'at', '9:01', 'am', 'in', 'Tokyo.', 'The', 'broader', 'Topix', 'index', 'gained', '5.35,', 'or', '0.5', 'percent,', 'to', '1132.'] 2 ['Wildcats', 'on', 'the', 'rise', 'with', 'Santos'] ['The', 'University', 'of', 'New', &quot;Hampshire's&quot;, 'impressive', '51-40', 'road', 'victory', 'over', '10th-ranked', 'Villanova', 'Saturday', 'night', 'vaulted', 'the', 'Wildcats', 'three', 'spots', 'to', 'ninth', 'in', 'this', &quot;week's&quot;, 'Sports', 'Network', '1-AA', 'football', 'poll,', 'while', 'dropping', 'Villanova', 'to', '14th.'] 1 ['Cracking', 'under', 'the', 'strain'] ['Severe', 'cracks', 'surfaced', 'inside', 'the', 'Israeli', 'government', 'this', 'week', 'as', 'its', 'senior', 'law', 'officers', 'publicly', 'fell', 'out', 'with', 'the', 'defence', 'establishment', 'and', 'the', 'Foreign', 'Ministry', 'over', 'the', 'country', '#39;s', 'future', 'strategy', 'in', 'the', 'face', 'of', 'the', 'July', 'verdict', 'of', 'the', 'International', ''] 1 ['Arab', 'League', 'to', 'hold', 'emergency', 'meeting'] ['The', 'Arab', 'League', 'says', 'it', 'will', 'hold', 'an', 'emergency', 'session', 'to', 'discuss', 'the', 'violence', 'in', 'Gaza,', 'which', 'has', 'claimed', 'at', 'least', '56', 'Palestinians', 'this', 'week.'] 2 ['Holmes', 'to', 'decide', 'on', 'double'] ['Kelly', 'Holmes', 'has', 'still', 'to', 'confirm', 'whether', 'she', 'will', 'attempt', 'to', 'repeat', 'her', 'Olympic', 'double', 'at', 'this', 'weekend', '#39;s', 'World', 'Athletics', 'Final', 'after', 'clearing', 'the', 'first', 'hurdle', 'with', 'a', 'victory', 'in', 'the', '1500m', 'yesterday.'] 2 ['NBA', 'suspends', 'nine', 'players,', 'Artest', 'for', 'rest', 'of', 'season'] ['NBA', 'on', 'Sunday', 'suspended', 'nine', 'players', 'for', 'involving', 'in', 'a', 'melee', 'during', 'Friday', '#39;s', 'game', 'between', 'Detorit', 'Pistons', 'and', 'Indiana', 'Pacers,', 'with', 'Ron', 'Artest', 'suspended', 'for', 'the', 'rest', 'of', 'the', 'season,', '73', 'games.'] 2 ['On', 'the', 'Far', 'Side', 'of', 'the', 'Field,', 'a', 'Familiar', 'Face'] ['Perhaps', 'there', 'will', 'be', 'a', 'moment', 'during', &quot;Sunday's&quot;, 'game', 'between', 'the', 'Giants', 'and', 'the', 'Redskins', 'when', 'a', 'coach', 'and', 'his', 'former', 'franchise', 'quarterback', 'will', 'do', 'a', 'double', 'take.'] 3 ['', '#39;QUIET', '#39;', 'RULE', 'MAY', 'CHANGE'] ['The', 'Securities', 'and', 'Exchange', 'Commission', 'wants', 'to', 'scrap', 'a', '1933', 'rule', 'that', 'forces', 'a', 'strict', '', 'quot;quiet', 'period', 'quot;', 'on', 'all', 'talk', 'about', 'a', 'company', 'just', 'prior', 'to', 'its', 'stock', 'being', 'sold', 'initially', 'to', 'the', 'public.'] 2 ['Denehy', 'boosts', 'Walpole', ''] ['Danvers', 'coach', 'thought', 'he', 'had', 'the', 'perfect', 'game', 'plan', 'against', 'Walpole', 'last', 'night', 'in', 'the', 'Division', '2', 
'playoffs', 'at', 'Endicott', 'College.', 'It', 'was', 'the', 'same', 'game', 'plan', 'that', 'earned', 'his', 'team', 'its', 'first', 'playoff', 'berth', 'in', '63', 'years.'] </code></pre> <p>My question is: what if I use <code>build_vocab_from_iterator</code> to create the iterator?</p> <p><a href="https://pytorch.org/text/stable/vocab.html#torchtext.vocab.build_vocab_from_iterator" rel="nofollow noreferrer">build_vocab_from_iterator</a></p> <p>Does that function serve the same purpose as my code using <code>BucketIterator</code>?</p> <p>Also, I think using the pretrained <code>GloVe</code> word embeddings is better than <code>FastText</code> for this task, because the model needs to classify which type each description belongs to.</p>
<p>In the end, the solution I posted above can train the model.</p> <p>It is also better to use a stopword list from a library, to get better accuracy.</p>
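<p>As a minimal sketch of the stopword idea (assuming spaCy, which the question already imports; its built-in English list lives in <code>spacy.lang.en.stop_words</code>), the whitespace tokenizer from the question could filter like this:</p> <pre><code>from spacy.lang.en.stop_words import STOP_WORDS

def des_tokenize(x):
    # keep only tokens that are not English stopwords
    return [tok for tok in x.split(' ') if tok.lower() not in STOP_WORDS]
</code></pre>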
python|machine-learning|nlp|pytorch|vocabulary
0
376,403
73,248,121
Identify the discontinuity and mark it as an incremental event in pandas
<p>Given a dataframe, I need to increment the <code>event_id</code> when a discontinuity is observed in the <code>data</code> column. For the data given below, if the difference between the current value and the previous value is &gt;5, then the succeeding rows have to be marked with the next <code>event_id</code>.</p> <pre><code>id, data, event_id, aa, 2, 1, aa, 4, 1, aa, 6, 1, aa, 12, 2, aa, 14, 2, aa, 15, 2, </code></pre> <p>I tried the code below,</p> <pre><code>df['pre_data']=df.groupby('id')['data'].shift(1) df['diff_flag']=np.where((df['data']-df['pre_data'])&lt;5,1,0) df['event_id']=df['diff_flag'].ne(df.groupby('id')['diff_flag'].shift()).cumsum() </code></pre> <p>But the code gives <code>event_id</code> as (1,1,1,2,3,3), while the expected output is (1,1,1,2,2,2).</p>
<p>If I understand your problem correctly, you need to increase the <code>event_id</code> every time the difference is more than 5. In that case the solution is already in your code; you just need to change this</p> <blockquote> <p><code>df['diff_flag']=np.where((df['data']-df['pre_data'])&lt;5,1,0)</code></p> </blockquote> <p>to this:</p> <blockquote> <p><code>df['diff_flag']=np.where((df['data']-df['pre_data'])&gt;5,1,0)</code></p> </blockquote> <p>and return:</p> <blockquote> <p><code>df.groupby('id')['diff_flag'].cumsum() + 1</code></p> </blockquote> <p>(the <code>+ 1</code> makes the ids start at 1, matching your expected output), because <code>numpy.where</code> is the opposite of <code>pandas.where</code>: in numpy we update the values that match the condition.</p>
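<p>Putting it together, a minimal runnable sketch with the sample data from the question:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'id': ['aa'] * 6, 'data': [2, 4, 6, 12, 14, 15]})

df['pre_data'] = df.groupby('id')['data'].shift(1)
# flag rows where the jump from the previous row exceeds 5
df['diff_flag'] = np.where((df['data'] - df['pre_data']) &gt; 5, 1, 0)
# each flag starts a new event; the cumulative sum numbers events per id
df['event_id'] = df.groupby('id')['diff_flag'].cumsum() + 1
print(df['event_id'].tolist())  # [1, 1, 1, 2, 2, 2]
</code></pre>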
python|pandas
0
376,404
73,492,505
Folium map: my European locations are plotted in Africa
<p>I would like to use a Folium map to plot markers. My locations are in France, and I have latitude and longitude information. So I create POINT geometry in order to place them on a Folium map.</p> <pre><code>df = pd.read_csv('./data/addresses_geocoded.csv', sep = ';', encoding = 'latin-1') geometry = [Point(xy) for xy in zip(df['latitude'], df['longitude'])] geo_df = gpd.GeoDataFrame (df[['type', 'contrat']], geometry = geometry, crs={'init':'epsg:4326'}) </code></pre> <p>Then, I create a map and add the geodata.</p> <pre><code>map_contrat = folium.Map(location=[45.7174, 4.9036], tiles='openstreetmap', zoom_start=12) </code></pre> <p>As I want to be able to select or hide data from the legend, I add the geodata using features.</p> <pre><code>folium.features.GeoJson(geo_df[geo_df['contract'] == &quot;A&quot;], name=&quot;A&quot;).add_to(map_contrat) folium.features.GeoJson(geo_df[geo_df['contract'] == &quot;B&quot;], name=&quot;B&quot;).add_to(map_contrat) </code></pre> <p>But, as I understand it, due to a wrong CRS or EPSG my data is not located in the right place.</p> <p>I am a bit lost with all the possible EPSG choices. Is there a trick to know which one to use?</p> <p>Thanks in advance</p>
<p>The most typical CRS is EPSG:4326, which you have used. I have used a CSV that contains cities of the world and selected 100 French cities. If I use longitude as latitude and latitude as longitude (erroneously transposing them), then the cities appear in Africa, as demonstrated by the red markers in the <strong>folium</strong> map. In the correct order (blue) they are in France.</p> <pre><code>import pandas as pd import geopandas as gpd # 100 cities in france from CSV df = ( pd.read_csv( &quot;https://raw.githubusercontent.com/dr5hn/countries-states-cities-database/master/csv/cities.csv&quot; ) .loc[lambda d: d[&quot;country_code&quot;].eq(&quot;FR&quot;)] .sample(100) ) # incorrect order of lat / lon. appears in africa m = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df[&quot;latitude&quot;], df[&quot;longitude&quot;]), crs=&quot;epsg:4326&quot; ).explore(color=&quot;red&quot;, width=300, height=300, name=&quot;wrong&quot;) # correct order - all good in France gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df[&quot;longitude&quot;], df[&quot;latitude&quot;]), crs=&quot;epsg:4326&quot; ).explore(m=m, width=300, height=300, name=&quot;correct&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/WxrYO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WxrYO.png" alt="enter image description here" /></a></p>
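<p>To get the select/hide-from-legend behaviour asked about in the question, the named <code>GeoJson</code> layers just need a <code>folium.LayerControl</code> added to the map. A sketch, assuming the corrected point order (note that the column name must be consistent; the question's snippets mix <code>contrat</code> and <code>contract</code>):</p> <pre><code>import folium

folium.features.GeoJson(geo_df[geo_df['contrat'] == 'A'], name='A').add_to(map_contrat)
folium.features.GeoJson(geo_df[geo_df['contrat'] == 'B'], name='B').add_to(map_contrat)
folium.LayerControl().add_to(map_contrat)  # adds the toggle widget for named layers
</code></pre>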
geometry|geopandas|folium|shapely|epsg
1
376,405
73,505,612
I am trying to assign a value to a cell in a dataframe using iloc and it is not working; the cell simply keeps its original value
<p>I am trying to change 174.0 to NaN. Am I missing something obvious? Finding the index of the value in the overall dataframe is too complicated, so I narrowed it down to Well L15. Is this not allowed?</p> <pre><code>input: df[df['Well']=='L15'].iloc[4,6] output: 174.0 input: df[df['Well']=='L15'].iloc[4,6] = np.nan input: df[df['Well']=='L15'].iloc[4,6] output: 174.0 </code></pre> <p>I expect this to give me NaN at the end, not 174.0. Thank you!</p>
<p>The <a href="https://pandas.pydata.org/docs/user_guide/indexing.html?highlight=chained#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow noreferrer">docs</a> say:</p> <blockquote> <p>Outside of simple cases, it’s very hard to predict whether it will return a view or a copy (it depends on the memory layout of the array, about which pandas makes no guarantees)</p> </blockquote> <p>So in your case, it is hard to predict, will <code>df[df['Well']=='L15'].iloc[4,6]</code> be a</p> <ul> <li><em>view</em> (in which case changing its elements would change the <strong>original</strong> elements since it is only a <strong>view</strong>),</li> <li>or a <em>copy</em> (in which case <strong>its</strong> elements would be changed, but the original elements would <strong>not</strong>.)</li> </ul> <p>Here is a workaround that you could use. Create a copy explicitly (so that pandas doesn't complain for not knowing if it's a view or a copy), change the value in this copy, and then replace the values in the original:</p> <pre class="lang-py prettyprint-override"><code>sub_df = df[df['Well']=='L15'].copy() sub_df.iloc[4, 6] = np.nan df[df['Well']=='L15'] = sub_df </code></pre>
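<p>If you want a single assignment without an intermediate copy, a sketch of an equivalent approach is to resolve the row and column labels first and then assign with <code>.loc</code>, which always targets the original frame:</p> <pre class="lang-py prettyprint-override"><code>row_label = df.index[df['Well'] == 'L15'][4]  # label of the 5th matching row
col_label = df.columns[6]                     # label of the 7th column
df.loc[row_label, col_label] = np.nan
</code></pre>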
python|pandas|dataframe|indexing
0
376,406
73,437,721
Create a def function to filter categories in a dataframe
<p>I'm trying to apply the following rules (picture attached) to the following dataframe (code attached).</p> <p><a href="https://i.stack.imgur.com/SxLEN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SxLEN.png" alt="enter image description here" /></a></p> <pre><code>data = pd.DataFrame({'col1': [0, 700, 708, 634, 656, 663, 0, 0, 637, 700, 700, 672, 675, 580, 0, 554, 690, 624, 596, 625, 621, 606, 618, 555, 691, 539, 548, 627, 703, 701, 636, 561, 658, 0, 0, 670, 700, 0, 613, 639, 708, 691, 0, 628, ], 'col2': ['SMALL', 'HIGH', 'SELECT', 'MEDIUM', 'SELECT', 'SELECT', 'SMALL', 'HIGH', 'HIGH', 'HIGH', 'SELECT', 'SELECT', 'HIGH', 'HIGH', 'HIGH', 'MEDIUM', 'SELECT', 'SELECT', 'SELECT', 'MEDIUM', 'HIGH', 'MEDIUM', 'MEDIUM', 'HIGH', 'MEDIUM', 'HIGH', 'MEDIUM', 'HIGH', 'SELECT', 'SELECT', 'HIGH', 'MEDIUM', 'SELECT', 'SMALL', 'SMALL', 'SELECT', 'SELECT', 'MEDIUM', 'MEDIUM', 'HIGH', 'SELECT', 'HIGH', 'HIGH', 'SELECT', ]}) </code></pre> <p>A possible way to implement the rules is to create a new column and encode the conditions using an <code>np.select</code> statement.</p> <p>Is it possible to create a <code>def</code> function that is not so time-consuming, instead of this method? Any help is welcome. Thanks.</p> <pre><code>data['object'] = np.select( [(data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 100) &amp; (data['col1'] &lt;= 515), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 516) &amp; (data['col1'] &lt;= 533), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 534) &amp; (data['col1'] &lt;= 555), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 556) &amp; (data['col1'] &lt;= 577), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 578) &amp; (data['col1'] &lt;= 582), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 583) &amp; (data['col1'] &lt;= 615), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 616) &amp; (data['col1'] &lt;= 632), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 633) &amp; (data['col1'] &lt;= 649), (data['col2'] == 'SMALL') &amp; (data['col1'] &gt;= 650) &amp; (data['col1'] &lt;= 999), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 100) &amp; (data['col1'] &lt;= 525), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 526) &amp; (data['col1'] &lt;= 543), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 544) &amp; (data['col1'] &lt;= 555), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 556) &amp; (data['col1'] &lt;= 565), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 566) &amp; (data['col1'] &lt;= 586), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 587) &amp; (data['col1'] &lt;= 598), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 599) &amp; (data['col1'] &lt;= 608), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 609) &amp; (data['col1'] &lt;= 626), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 627) &amp; (data['col1'] &lt;= 635), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 636) &amp; (data['col1'] &lt;= 653), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 654) &amp; (data['col1'] &lt;= 682), (data['col2'] == 'MEDIUM') &amp; (data['col1'] &gt;= 683) &amp; (data['col1'] &lt;= 999), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 100) &amp; (data['col1'] &lt;= 544), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 545) &amp; (data['col1'] &lt;= 562), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 563) &amp; (data['col1'] &lt;= 575), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 576) &amp; (data['col1'] &lt;= 584), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 585) 
&amp; (data['col1'] &lt;= 591), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 592) &amp; (data['col1'] &lt;= 604), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 605) &amp; (data['col1'] &lt;= 620), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 621) &amp; (data['col1'] &lt;= 635), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 636) &amp; (data['col1'] &lt;= 656), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 657) &amp; (data['col1'] &lt;= 670), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 671) &amp; (data['col1'] &lt;= 679), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 680) &amp; (data['col1'] &lt;= 692), (data['col2'] == 'HIGH') &amp; (data['col1'] &gt;= 693) &amp; (data['col1'] &lt;= 999), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 100) &amp; (data['col1'] &lt;= 564), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 565) &amp; (data['col1'] &lt;= 585), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 586) &amp; (data['col1'] &lt;= 606), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 607) &amp; (data['col1'] &lt;= 613), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 614) &amp; (data['col1'] &lt;= 636), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 637) &amp; (data['col1'] &lt;= 646), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 647) &amp; (data['col1'] &lt;= 663), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 664) &amp; (data['col1'] &lt;= 683), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 684) &amp; (data['col1'] &lt;= 699), (data['col2'] == 'SELECT') &amp; (data['col1'] &gt;= 700) &amp; (data['col1'] &lt;= 999) ], ['Object 1', 'Object 2', 'Object 3', 'Object 4', 'Object 5', 'Object 6', 'Object 7', 'Object 8', 'Object 9', 'Object 1', 'Object 2', 'Object 3', 'Object 4', 'Object 5', 'Object 6', 'Object 7', 'Object 8', 'Object 9', 'Object 10', 'Object 11', 'Object 12', 'Object 1', 'Object 2', 'Object 3', 'Object 4', 'Object 5', 'Object 6', 'Object 7', 'Object 8', 'Object 9', 'Object 10', 'Object 11', 'Object 12', 'Object 13', 'Object 1', 'Object 2', 'Object 3', 'Object 4', 'Object 5', 'Object 6', 'Object 7', 'Object 8', 'Object 9', 'Object 10', ]) </code></pre>
<p>Use <code>groupby</code> and <code>cut</code>:</p> <pre><code>bins = {'SMALL': [100, 515, 533, ... ,999], 'MEDIUM': [100, 525, 543, ... ,999], 'HIGH': [100, 544, 562, ... ,999], 'SELECT': [100, 564, 585, ... ,999] } labels = ['object 1', 'object 2', 'object 3', ..., 'object 13'] data['new'] = data.groupby('col2')['col1'].transform(lambda g: pd.cut(g, bins=bins[g.name], labels=labels[:len(bins[g.name])-1])) </code></pre> <p><em>NB. You can also use a dictionary to define different conditions for each group (<code>labels = {'SMALL': ['A', 'B'...], 'MEDIUM': ['X', 'Y'...]...}</code>). Then use <code>labels=labels[g.name]</code>.</em></p> <p>Example dummy output (I didn't use all bins):</p> <pre><code> col1 col2 new 0 0 SMALL NaN 1 700 HIGH object 3 2 708 SELECT object 3 3 634 MEDIUM object 3 4 656 SELECT object 3 5 663 SELECT object 3 6 0 SMALL NaN 7 0 HIGH NaN 8 637 HIGH object 3 ... </code></pre>
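<p>A runnable reduced sketch of the same idea, with made-up, shortened bin edges (the real edges come from the rules table in the question):</p> <pre><code>import pandas as pd

data = pd.DataFrame({'col1': [0, 700, 634, 555],
                     'col2': ['SMALL', 'HIGH', 'MEDIUM', 'SMALL']})

# made-up edges for brevity; substitute the full edges per category
bins = {'SMALL': [100, 515, 649, 999],
        'MEDIUM': [100, 525, 682, 999],
        'HIGH': [100, 544, 692, 999]}
labels = ['Object 1', 'Object 2', 'Object 3']

data['object'] = data.groupby('col2')['col1'].transform(
    lambda g: pd.cut(g, bins=bins[g.name],
                     labels=labels[:len(bins[g.name]) - 1]))
print(data)  # 0 -&gt; NaN (below 100), 700 -&gt; Object 3, 634 -&gt; Object 2, 555 -&gt; Object 2
</code></pre>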
python|pandas|numpy|function|lambda
1
376,407
73,226,161
How can I best convert an API JSON object to a single row for SQL server?
<p>I have a script set up to pull a JSON from an API, and I need to convert objects into different columns for a single-row layout for a SQL server. See the example below for the raw body layout of an example object:</p> <pre><code>&quot;answers&quot;: { &quot;agent_star_rating&quot;: { &quot;question_id&quot;: 145, &quot;question_text&quot;: &quot;How satisfied are you with the service you received from {{ employee.first_name }} today?&quot;, &quot;comment&quot;: &quot;John was exceptionally friendly and knowledgeable.&quot;, &quot;selected_options&quot;: { &quot;1072&quot;: { &quot;option_id&quot;: 1072, &quot;option_text&quot;: &quot;5&quot;, &quot;integer_value&quot;: 5 } } }, </code></pre> <p>In this example I need the output for all parts of <code>agent_star_rating</code> to be individual columns, so that all the data comes out as one row for the entire survey on our SQL server. I have tried mapping several keys like so:</p> <pre><code>agent_star_rating = [list(response['answers']['agent_star_rating']['selected_options'].values())[0]['integer_value']] agent_question = (response['answers']['agent_star_rating']['question_text']) agent_comment = (response['answers']['agent_star_rating']['comment']) response['agent_question'] = agent_question response['agent_comment'] = agent_comment response['agent_star_rating'] = agent_star_rating </code></pre> <p>I get the expected result until we reach a point where some surveys have skipped a field like <code>question_text</code>, and we get a missing-key error. This happens across other objects as well, and I am failing to come up with a solution for these missing keys. If there is a better way to format the output as I've described, beyond the keys method I've used, I'd also love to hear ideas! I'm new to Python/pandas, so pardon any improper terminology!</p>
<p>I would do something like this:</p> <pre class="lang-py prettyprint-override"><code># values that you always capture row = ['value1', 'value2', ...] gottem_attrs = {'question_id': '' , 'question_text': '', 'comment': '', 'selected_options': ''} # find and save the values that the response has for attr in list(response['answers']['agent_star_rating']): gottem_attrs[attr] = response['answers']['agent_star_rating'][attr] # then you have your final row final_row = row + list(gottem_attrs.values()) </code></pre> <p>If the response has a value for an attribute, this code will save it. Otherwise, it will keep an empty string for that value.</p>
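<p>Alternatively, if the goal is one flat row per survey, <code>pandas.json_normalize</code> can do the flattening for you; keys that are absent simply produce no column for that record (and become <code>NaN</code> when normalizing a list of records). A sketch, assuming the nested shape from the question:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

# hypothetical response shaped like the question's example
response = {
    'answers': {
        'agent_star_rating': {
            'question_id': 145,
            'comment': 'John was exceptionally friendly and knowledgeable.',
            # 'question_text' deliberately missing for this survey
        }
    }
}

# nested keys become underscore-separated column names
row = pd.json_normalize(response, sep='_')
print(row.columns.tolist())
# ['answers_agent_star_rating_question_id', 'answers_agent_star_rating_comment']
</code></pre>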
python|sql|pandas|formatting
1
376,408
73,272,371
Change Column Values in a Dataframe column using Pandas
<p>The data type of the column is object, but I still cast it to string using <code>astype(str)</code>. I even used <code>temp['Injury Severity'].str.strip()</code> to remove spaces from the column values.</p> <p><a href="https://i.stack.imgur.com/PEO7R.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I want to replace all &quot;Fatal(0)&quot;, &quot;Fatal(1)&quot;... with only &quot;Fatal&quot;, so I used <code>temp['Injury Severity'] = temp['Injury Severity'].replace('Fatal(0)','Fatal',inplace = True)</code>.</p> <p>But it did not work. I also tried <code>temp.loc[temp['Injury Severity'] == 'Fatal(0)','Injury Severity'] = temp['Injury Severity'].replace('Fatal(0)','Fatal',inplace = True)</code></p> <p>In addition I tried <code>str.replace</code>, but it did not work out. Lastly, I also used <code>regex = True</code>, but no change was observed. It still remains the same.</p>
<p><img src="https://i.stack.imgur.com/jFYk7.png" alt="1" /></p> <p>I think it is solved. It seems that the values had leading and trailing spaces in their names. Thanks a lot for the help, everyone!</p>
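<p>For future readers, there was a second problem hiding in the original attempt: <code>replace(..., inplace=True)</code> returns <code>None</code>, so assigning its result back wipes the column. A sketch combining the strip with a regex replace that catches every <code>Fatal(n)</code> variant:</p> <pre><code>temp['Injury Severity'] = (
    temp['Injury Severity']
    .astype(str)
    .str.strip()                                         # drop leading/trailing spaces
    .str.replace(r'Fatal\(\d+\)', 'Fatal', regex=True)   # Fatal(0), Fatal(1), ... become Fatal
)
</code></pre>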
pandas|replace|jupyter-notebook|slice
1
376,409
73,506,997
Is there a way of storing the original data lines in a pandas dataframe
<p>I am using the <code>read_csv</code> method from pandas.</p> <p>Say I am reading:</p> <pre><code>A,&quot;B&quot;,C </code></pre> <p>With column names 1, 2, 3</p> <p>I will get a dataframe of 3 columns, with the columns having values:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>B</td> <td>C</td> </tr> </tbody> </table> </div> <p>I want to get a dataframe of 4 columns:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>orig</th> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>A,&quot;B&quot;,C</td> <td>A</td> <td>B</td> <td>C</td> </tr> </tbody> </table> </div> <p>For this scenario, re-creating the csv line from the 3 values is not possible, as the parsing will already have discarded the quotes.</p> <p>Is there a good way of doing this? I am currently directly manipulating the input to add the column to the incoming data; however, I was hoping there is something within pandas itself that could help.</p>
<p>The answer heavily depends on the data in the CSV file. If you are sure that there's no separator inside quoted items, then we can avoid the unquoting with the <code>quoting=csv.QUOTE_NONE</code> parameter:</p> <pre><code># create dummy file data = 'A,&quot;B&quot;,C' with open('test.csv', 'w') as f: f.write(data) # read the file with quotes untouched import pandas as pd from csv import QUOTE_NONE pd.read_csv('test.csv', header=None, quoting=QUOTE_NONE) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code> 0 1 2 0 A &quot;B&quot; C </code></pre> <p>In general, to see the raw data we have to read it ourselves, i.e. without any CSV parser such as <code>pandas.read_csv</code> or <code>csv.reader</code>.</p>
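<p>To actually get the extra <code>orig</code> column from the question, one option (a sketch, assuming the file fits in memory) is to read the raw lines separately and attach them to the parsed frame:</p> <pre><code>import pandas as pd

# parse normally with column names 1, 2, 3
df = pd.read_csv('test.csv', header=None, names=[1, 2, 3])

# read the same file again as raw text and prepend it as 'orig'
with open('test.csv') as f:
    raw_lines = [line.rstrip('\n') for line in f]
df.insert(0, 'orig', raw_lines)
</code></pre>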
python|pandas|dataframe
0
376,410
73,279,382
How to return a value if a number is within a specified range in pandas
<p>I want to return a value (1,2,3,4 or 5) based on the range a number falls in. I want to define a function and apply the function to a column in a DataFrame using <code>.apply()</code>.</p> <p>In the code below, <code>amount</code> is a hypothetical column in a DataFrame. However, I get the error <code>SyntaxError: invalid syntax</code> on line <code>elif &gt;= 40 amount &lt; 60:</code> (I believe it will raise the same error on all other lines).</p> <pre><code>amount = pd.Series([20, 25, 65, 80]) def miles(amount): if 20 &gt;= amount &lt; 40: return 1 elif &gt;= 40 amount &lt; 60: return 2 elif &gt;= 60 amount &lt; 80: return 3 elif &gt;= 80 amount &lt; 100: return 4 elif &gt;= 100 amount &lt; 120: return 5 else: pass </code></pre> <p>Any help is appreciated. Thank you!</p>
<p>For this particular case, you are mapping discrete fixed-width integer ranges to a number. This can be solved using a linear transform. The offset in this case is 0.</p> <pre><code>amount = pd.Series([20, 25, 65, 80]) out = amount.divide(20).astype(int) out # returns: 0 1 1 1 2 3 3 4 dtype: int32 </code></pre> <p>For a more general case where the binning is not fixed-width, you can use <code>pd.cut</code>.</p> <pre><code>pd.cut(amount, [20, 40, 60, 80, 100, 120], right=False, labels=[1,2,3,4,5]).astype(int) # returns: 0 1 1 1 2 3 3 4 dtype: int32 </code></pre>
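<p>For completeness, a corrected version of the original function (the syntax error comes from writing the operator before the variable, e.g. <code>elif &gt;= 40 amount &lt; 60:</code> instead of <code>elif 40 &lt;= amount &lt; 60:</code>):</p> <pre><code>def miles(amount):
    if 20 &lt;= amount &lt; 40:
        return 1
    elif 40 &lt;= amount &lt; 60:
        return 2
    elif 60 &lt;= amount &lt; 80:
        return 3
    elif 80 &lt;= amount &lt; 100:
        return 4
    elif 100 &lt;= amount &lt; 120:
        return 5

amount.apply(miles)
</code></pre>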
python|pandas
4
376,411
73,371,174
PyTorch, Pandas, NumPy: different results on Windows and Linux
<p>We are working on an AI project which, amongst others, calculates the position of a human body lying in a bed. External supporters provided us with a code package which does this job and calculates the deviation between the real position of the body (detected by a camera) and the predicted position.</p> <p>That code package has been developed on Windows and now needs to be ported to Ubuntu 20.04. Until now, in the code package for Ubuntu, we just changed some path names (backslash to slash) to make it work.</p> <p>If we now run the exact same code with the exact same training and test data under Windows 10 and Ubuntu, we observe the following strange behaviour:</p> <p>The deviation between real body positions and predicted body positions is 25mm (average value) in Windows, which is a good result for our purposes and accepted as a match. In Ubuntu the deviation is 150mm (average value), which is not a usable result.</p> <p>We also ran the Windows code package in a Windows 10 VM (Virtualbox) on the respective Ubuntu machine. The result is close to the result we get when running the code under 'plain' Ubuntu (not usable).</p> <p>When running under Ubuntu, there is no difference whether we utilize CPU or GPU.</p> <p>The code is written in Python 3 and executed under Python 3.7.12 in an Anaconda 3 virtual environment.</p> <p>We suspected that Linux/Ubuntu does not fully support a certain CPU instruction set, but in the end we do not have an explanation for this behaviour. Does anybody have an idea?</p> <p>Our CPU:</p> <p>Intel Core i7 (Kaby Lake)</p> <p>Our env.yaml for conda:</p> <pre><code>name: cosy-bunk-3.7 channels: - conda-forge - anaconda - defaults - pytorch dependencies: - python=3.7 - numpy - matplotlib - pandas - ipykernel - torchvision - pytorch - cudatoolkit=10.2 - torchaudio - plotly - scikit-learn - pip - pip: - optuna - dash </code></pre> <p>Output of 'conda list' under Ubuntu:</p> <pre><code> # Name Version Build Channel _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_kmp_llvm conda-forge alembic 1.8.1 pypi_0 pypi alsa-lib 1.2.6.1 h7f98852_0 conda-forge attr 2.5.1 h166bdaf_0 conda-forge attrs 22.1.0 pypi_0 pypi autopage 0.5.1 pypi_0 pypi backcall 0.2.0 pyh9f0ad1d_0 conda-forge backports 1.0 py_2 conda-forge backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge blas 1.1 openblas conda-forge brotli 1.0.9 pypi_0 pypi brotli-bin 1.0.9 h166bdaf_7 conda-forge brotlipy 0.7.0 py37h540881e_1004 conda-forge ca-certificates 2022.6.15 ha878542_0 conda-forge certifi 2022.6.15 py37h89c1867_0 conda-forge cffi 1.15.1 py37h43b0acd_0 conda-forge charset-normalizer 2.1.0 pyhd8ed1ab_0 conda-forge click 8.1.3 pypi_0 pypi cliff 3.10.1 pypi_0 pypi cmaes 0.8.2 pypi_0 pypi cmd2 2.4.2 pypi_0 pypi colorlog 6.6.0 pypi_0 pypi cryptography 37.0.4 py37h38fbfac_0 conda-forge cudatoolkit 10.2.89 h713d32c_10 conda-forge cudnn 7.6.5.32 h01f27c4_1 conda-forge cycler 0.11.0 pyhd8ed1ab_0 conda-forge dash 2.6.1 pypi_0 pypi dash-core-components 2.0.0 pypi_0 pypi dash-html-components 2.0.0 pypi_0 pypi dash-table 5.0.0 pypi_0 pypi dbus 1.13.6 h5008d03_3 conda-forge debugpy 1.6.0 py37hd23a5d3_0 conda-forge decorator 5.1.1 pyhd8ed1ab_0 conda-forge entrypoints 0.4 pyhd8ed1ab_0 conda-forge expat 2.4.8 h27087fc_0 conda-forge fftw 3.3.10 nompi_ha7695d1_103 conda-forge flask 2.2.2 pypi_0 pypi flask-compress 1.12 pypi_0 pypi font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge font-ttf-inconsolata 3.000 h77eed37_0 conda-forge font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge font-ttf-ubuntu 0.83 
hab24e00_0 conda-forge fontconfig 2.14.0 h8e229c2_0 conda-forge fonts-conda-ecosystem 1 0 conda-forge fonts-conda-forge 1 0 conda-forge fonttools 4.34.4 py37h540881e_0 conda-forge freetype 2.10.4 h0708190_1 conda-forge gettext 0.19.8.1 h73d1719_1008 conda-forge giflib 5.2.1 h36c2ea0_2 conda-forge glib 2.72.1 h6239696_0 conda-forge glib-tools 2.72.1 h6239696_0 conda-forge greenlet 1.1.2 pypi_0 pypi gst-plugins-base 1.20.3 hf6a322e_0 conda-forge gstreamer 1.20.3 hd4edc92_0 conda-forge icu 70.1 h27087fc_0 conda-forge idna 3.3 pyhd8ed1ab_0 conda-forge importlib-metadata 4.12.0 pypi_0 pypi importlib-resources 5.9.0 pypi_0 pypi ipykernel 6.15.1 pyh210e3f2_0 conda-forge ipython 7.33.0 py37h89c1867_0 conda-forge itsdangerous 2.1.2 pypi_0 pypi jack 1.9.18 h8c3723f_1002 conda-forge jedi 0.18.1 pyhd8ed1ab_2 conda-forge jinja2 3.1.2 pypi_0 pypi joblib 1.1.0 pyhd8ed1ab_0 conda-forge jpeg 9e h166bdaf_2 conda-forge jupyter_client 7.3.4 pyhd8ed1ab_0 conda-forge jupyter_core 4.11.1 py37h89c1867_0 conda-forge keyutils 1.6.1 h166bdaf_0 conda-forge kiwisolver 1.4.4 py37h7cecad7_0 conda-forge krb5 1.19.3 h3790be6_0 conda-forge lcms2 2.12 hddcbb42_0 conda-forge ld_impl_linux-64 2.36.1 hea4e1c9_2 conda-forge lerc 4.0.0 h27087fc_0 conda-forge libblas 3.9.0 15_linux64_openblas conda-forge libbrotlicommon 1.0.9 h166bdaf_7 conda-forge libbrotlidec 1.0.9 h166bdaf_7 conda-forge libbrotlienc 1.0.9 h166bdaf_7 conda-forge libcap 2.64 ha37c62d_0 conda-forge libcblas 3.9.0 15_linux64_openblas conda-forge libclang 14.0.6 default_h2e3cab8_0 conda-forge libclang13 14.0.6 default_h3a83d3e_0 conda-forge libcups 2.3.3 hf5a7f15_1 conda-forge libdb 6.2.32 h9c3ff4c_0 conda-forge libdeflate 1.13 h166bdaf_0 conda-forge libedit 3.1.20191231 he28a2e2_2 conda-forge libevent 2.1.10 h9b69904_4 conda-forge libffi 3.4.2 h7f98852_5 conda-forge libflac 1.3.4 h27087fc_0 conda-forge libgcc-ng 12.1.0 h8d9b700_16 conda-forge libgfortran-ng 12.1.0 h69a702a_16 conda-forge libgfortran5 12.1.0 hdcd56e2_16 conda-forge libglib 2.72.1 h2d90d5f_0 conda-forge libiconv 1.16 h516909a_0 conda-forge liblapack 3.9.0 15_linux64_openblas conda-forge libllvm14 14.0.6 he0ac6c6_0 conda-forge libnsl 2.0.0 h7f98852_0 conda-forge libogg 1.3.4 h7f98852_1 conda-forge libopenblas 0.3.20 pthreads_h78a6416_1 conda-forge libopus 1.3.1 h7f98852_1 conda-forge libpng 1.6.37 h753d276_3 conda-forge libpq 14.4 hd77ab85_0 conda-forge libprotobuf 3.20.1 h6239696_0 conda-forge libsndfile 1.0.31 h9c3ff4c_1 conda-forge libsodium 1.0.18 h36c2ea0_1 conda-forge libstdcxx-ng 12.1.0 ha89aaad_16 conda-forge libtiff 4.4.0 h0e0dad5_3 conda-forge libtool 2.4.6 h9c3ff4c_1008 conda-forge libudev1 249 h166bdaf_4 conda-forge libuuid 2.32.1 h7f98852_1000 conda-forge libvorbis 1.3.7 h9c3ff4c_0 conda-forge libwebp 1.2.3 h522a892_1 conda-forge libwebp-base 1.2.3 h166bdaf_2 conda-forge libxcb 1.13 h7f98852_1004 conda-forge libxkbcommon 1.0.3 he3ba5ed_0 conda-forge libxml2 2.9.14 h22db469_3 conda-forge libzlib 1.2.12 h166bdaf_2 conda-forge llvm-openmp 14.0.4 he0ac6c6_0 conda-forge lz4-c 1.9.3 h9c3ff4c_1 conda-forge magma 2.5.4 h5da55e3_2 conda-forge mako 1.2.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi matplotlib 3.5.2 py37h89c1867_1 conda-forge matplotlib-base 3.5.2 py37hc347a89_1 conda-forge matplotlib-inline 0.1.3 pyhd8ed1ab_0 conda-forge mkl 2022.1.0 h84fe81f_915 conda-forge munkres 1.1.4 pyh9f0ad1d_0 conda-forge mysql-common 8.0.30 haf5c9bc_0 conda-forge mysql-libs 8.0.30 h28c427c_0 conda-forge nccl 2.13.4.1 h1a5f58c_0 conda-forge ncurses 6.3 h27087fc_1 conda-forge nest-asyncio 1.5.5 pyhd8ed1ab_0 
conda-forge ninja 1.11.0 h924138e_0 conda-forge nspr 4.32 h9c3ff4c_1 conda-forge nss 3.78 h2350873_0 conda-forge numpy 1.21.6 py37h976b520_0 conda-forge openblas 0.3.20 pthreads_h320a7e8_1 conda-forge openjpeg 2.4.0 hb52868f_1 conda-forge openssl 1.1.1q h166bdaf_0 conda-forge optuna 2.10.1 pypi_0 pypi packaging 21.3 pyhd8ed1ab_0 conda-forge pandas 1.3.5 py37he8f5f7f_0 conda-forge parso 0.8.3 pyhd8ed1ab_0 conda-forge pbr 5.9.0 pypi_0 pypi pcre 8.45 h9c3ff4c_0 conda-forge pexpect 4.8.0 pyh9f0ad1d_2 conda-forge pickleshare 0.7.5 py_1003 conda-forge pillow 9.2.0 py37h44f0d7a_0 conda-forge pip 22.2.2 pyhd8ed1ab_0 conda-forge plotly 5.9.0 pyhd8ed1ab_0 conda-forge ply 3.11 py_1 conda-forge portaudio 19.6.0 h57a0ea0_5 conda-forge prettytable 3.3.0 pypi_0 pypi prompt-toolkit 3.0.30 pyha770c72_0 conda-forge psutil 5.9.1 py37h540881e_0 conda-forge pthread-stubs 0.4 h36c2ea0_1001 conda-forge ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge pulseaudio 14.0 h7f54b18_8 conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pygments 2.12.0 pyhd8ed1ab_0 conda-forge pyopenssl 22.0.0 pyhd8ed1ab_0 conda-forge pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge pyperclip 1.8.2 pypi_0 pypi pyqt 5.15.7 py37hf30b843_0 conda-forge pyqt5-sip 12.11.0 py37hd23a5d3_0 conda-forge pysocks 1.7.1 py37h89c1867_5 conda-forge python 3.7.12 hb7a2778_100_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python_abi 3.7 2_cp37m conda-forge pytorch 1.12.0 cuda102py37haad9b4f_202 conda-forge pytorch-mutex 1.0 cuda pytorch pytz 2022.1 pyhd8ed1ab_0 conda-forge pyyaml 6.0 pypi_0 pypi pyzmq 23.2.0 py37h0c0c2a8_0 conda-forge qt-main 5.15.4 ha5833f6_2 conda-forge readline 8.1.2 h0f457ee_0 conda-forge requests 2.28.1 pyhd8ed1ab_0 conda-forge scikit-learn 1.0.2 py37hf9e9bfc_0 conda-forge scipy 1.7.3 py37hf2a6cf1_0 conda-forge setuptools 59.8.0 py37h89c1867_1 conda-forge sip 6.6.2 py37hd23a5d3_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge sleef 3.5.1 h9b69904_2 conda-forge sqlalchemy 1.4.40 pypi_0 pypi sqlite 3.39.2 h4ff8645_0 conda-forge stevedore 3.5.0 pypi_0 pypi tbb 2021.5.0 h924138e_1 conda-forge tenacity 8.0.1 pyhd8ed1ab_0 conda-forge threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge tk 8.6.12 h27826a3_0 conda-forge toml 0.10.2 pyhd8ed1ab_0 conda-forge torchaudio 0.12.0 py37_cu102 pytorch torchvision 0.13.0 cuda102py37h9785060_0 conda-forge tornado 6.2 py37h540881e_0 conda-forge tqdm 4.64.0 pypi_0 pypi traitlets 5.3.0 pyhd8ed1ab_0 conda-forge typing-extensions 4.3.0 hd8ed1ab_0 conda-forge typing_extensions 4.3.0 pyha770c72_0 conda-forge unicodedata2 14.0.0 py37h540881e_1 conda-forge urllib3 1.26.11 pyhd8ed1ab_0 conda-forge wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge werkzeug 2.2.2 pypi_0 pypi wheel 0.37.1 pyhd8ed1ab_0 conda-forge xcb-util 0.4.0 h166bdaf_0 conda-forge xcb-util-image 0.4.0 h166bdaf_0 conda-forge xcb-util-keysyms 0.4.0 h166bdaf_0 conda-forge xcb-util-renderutil 0.3.9 h166bdaf_0 conda-forge xcb-util-wm 0.4.1 h166bdaf_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xz 5.2.5 h516909a_1 conda-forge zeromq 4.3.4 h9c3ff4c_1 conda-forge zipp 3.8.1 pypi_0 pypi zlib 1.2.12 h166bdaf_2 conda-forge zstd 1.5.2 h8a70e8d_3 conda-forge </code></pre> <p>Output of 'conda list‘ under Windows:</p> <pre><code># Name Version Build Channel alembic 1.8.1 pypi_0 pypi attrs 22.1.0 pypi_0 pypi autopage 0.5.1 pypi_0 pypi backcall 0.2.0 pyh9f0ad1d_0 conda-forge backports 1.0 py_2 conda-forge backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge blas 1.0 mkl anaconda brotli 1.0.9 pypi_0 pypi brotli-bin 
1.0.9 h8ffe710_7 conda-forge ca-certificates 2022.6.15 h5b45459_0 conda-forge certifi 2022.6.15 py37h03978a9_0 conda-forge cffi 1.15.1 py37hd8e9650_0 conda-forge click 8.1.3 pypi_0 pypi cliff 3.10.1 pypi_0 pypi cmaes 0.8.2 pypi_0 pypi cmd2 2.4.2 pypi_0 pypi colorama 0.4.5 pyhd8ed1ab_0 conda-forge colorlog 6.6.0 pypi_0 pypi cudatoolkit 11.3.1 h280eb24_10 conda-forge cycler 0.11.0 pyhd8ed1ab_0 conda-forge dash 2.6.1 pypi_0 pypi dash-core-components 2.0.0 pypi_0 pypi dash-html-components 2.0.0 pypi_0 pypi dash-table 5.0.0 pypi_0 pypi debugpy 1.6.0 py37hf2a7229_0 conda-forge decorator 5.1.1 pyhd8ed1ab_0 conda-forge entrypoints 0.4 pyhd8ed1ab_0 conda-forge flask 2.2.2 pypi_0 pypi flask-compress 1.12 pypi_0 pypi fonttools 4.34.4 py37hcc03f2d_0 conda-forge freetype 2.10.4 h546665d_1 conda-forge future 0.18.2 py37h03978a9_5 conda-forge greenlet 1.1.2 pypi_0 pypi icu 58.2 vc14hc45fdbb_0 [vc14] anaconda importlib-metadata 4.12.0 pypi_0 pypi importlib-resources 5.9.0 pypi_0 pypi intel-openmp 2022.1.0 h57928b3_3787 conda-forge ipykernel 6.15.1 pyh025b116_0 conda-forge ipython 7.33.0 py37h03978a9_0 conda-forge itsdangerous 2.1.2 pypi_0 pypi jedi 0.18.1 pyhd8ed1ab_2 conda-forge jinja2 3.1.2 pypi_0 pypi joblib 1.1.0 pyhd8ed1ab_0 conda-forge jpeg 9b vc14h4d7706e_1 [vc14] anaconda jupyter_client 7.3.4 pyhd8ed1ab_0 conda-forge jupyter_core 4.11.1 py37h03978a9_0 conda-forge kiwisolver 1.4.4 py37h8c56517_0 conda-forge libblas 3.9.0 12_win64_mkl conda-forge libbrotlicommon 1.0.9 h8ffe710_7 conda-forge libbrotlidec 1.0.9 h8ffe710_7 conda-forge libbrotlienc 1.0.9 h8ffe710_7 conda-forge libcblas 3.9.0 12_win64_mkl conda-forge liblapack 3.9.0 12_win64_mkl conda-forge libpng 1.6.37 h1d00b33_2 conda-forge libsodium 1.0.18 h8d14728_1 conda-forge libsqlite 3.39.2 h8ffe710_1 conda-forge libtiff 4.0.8 vc14h04e2a1e_10 [vc14] anaconda libuv 1.44.2 h8ffe710_0 conda-forge libwebp 1.2.4 h8ffe710_0 conda-forge libwebp-base 1.2.4 h8ffe710_0 conda-forge m2w64-gcc-libgfortran 5.3.0 6 conda-forge m2w64-gcc-libs 5.3.0 7 conda-forge m2w64-gcc-libs-core 5.3.0 7 conda-forge m2w64-gmp 6.1.0 2 conda-forge m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge mako 1.2.1 pypi_0 pypi markupsafe 2.1.1 pypi_0 pypi matplotlib 3.5.3 py37h03978a9_0 conda-forge matplotlib-base 3.5.3 py37h54234da_0 conda-forge matplotlib-inline 0.1.3 pyhd8ed1ab_0 conda-forge mkl 2021.4.0 h0e2418a_729 conda-forge mkl-service 2.4.0 py37h75fcce0_0 conda-forge msys2-conda-epoch 20160418 1 conda-forge munkres 1.1.4 pyh9f0ad1d_0 conda-forge nest-asyncio 1.5.5 pyhd8ed1ab_0 conda-forge ninja 1.11.0 h2d74725_0 conda-forge numpy 1.21.6 py37h2830a78_0 conda-forge openssl 1.1.1q h8ffe710_0 conda-forge optuna 2.10.1 pypi_0 pypi packaging 21.3 pyhd8ed1ab_0 conda-forge pandas 1.3.5 py37h9386db6_0 conda-forge parso 0.8.3 pyhd8ed1ab_0 conda-forge pbr 5.10.0 pypi_0 pypi pickleshare 0.7.5 py_1003 conda-forge pillow 9.2.0 py37hdc2b20a_1 pip 22.2.2 pyhd8ed1ab_0 conda-forge plotly 5.10.0 pyhd8ed1ab_0 conda-forge prettytable 3.3.0 pypi_0 pypi prompt-toolkit 3.0.30 pyha770c72_0 conda-forge psutil 5.9.1 py37hcc03f2d_0 conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pygments 2.12.0 pyhd8ed1ab_0 conda-forge pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge pyperclip 1.8.2 pypi_0 pypi pyqt 5.9.2 py37h6538335_4 conda-forge pyreadline3 3.4.1 pypi_0 pypi python 3.7.12 h7840368_100_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python_abi 3.7 2_cp37m conda-forge pytorch 1.10.2 cpu_py37h907fbb5_0 anaconda pytz 2022.2 pyhd8ed1ab_0 conda-forge pywin32 303 py37hcc03f2d_0 
conda-forge pyyaml 6.0 pypi_0 pypi pyzmq 23.2.0 py37hcce574b_0 conda-forge qt 5.9.7 vc14h73c81de_0 [vc14] anaconda scikit-learn 1.0.2 py37hcabfae0_0 conda-forge scipy 1.7.3 py37hb6553fb_0 conda-forge setuptools 59.8.0 py37h03978a9_1 conda-forge sip 4.19.8 py37h6538335_1000 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge sqlalchemy 1.4.40 pypi_0 pypi sqlite 3.39.2 h8ffe710_1 conda-forge stevedore 3.5.0 pypi_0 pypi tbb 2021.5.0 h2d74725_1 conda-forge tenacity 8.0.1 pyhd8ed1ab_0 conda-forge threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge tk 8.6.7 vc14hb68737d_1 [vc14] anaconda torchaudio 0.10.2 py37_cu113 pytorch torchvision 0.11.3 py37_cu113 pytorch tornado 6.2 py37hcc03f2d_0 conda-forge tqdm 4.64.0 pypi_0 pypi traitlets 5.3.0 pyhd8ed1ab_0 conda-forge typing-extensions 4.3.0 hd8ed1ab_0 conda-forge typing_extensions 4.3.0 pyha770c72_0 conda-forge ucrt 10.0.20348.0 h57928b3_0 conda-forge unicodedata2 14.0.0 py37hcc03f2d_1 conda-forge vc 14.2 hb210afc_6 conda-forge vs2015_runtime 14.29.30037 h902a5da_6 conda-forge wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge werkzeug 2.2.2 pypi_0 pypi wheel 0.37.1 pyhd8ed1ab_0 conda-forge zeromq 4.3.4 h0e60522_1 conda-forge zipp 3.8.1 pypi_0 pypi zlib 1.2.11 vc14h1cdd9ab_1 [vc14] anaconda </code></pre>
<p>If you want more consistency in your environment solves, then add specificity to your YAML. I would recommend specifying every package that is important to you <strong>up through minor version</strong>. That is <a href="https://stackoverflow.com/a/64594513/570918">what is recommended</a> for Conda YAMLs amongst the Snakemake community.</p> <p>Also, the channel order doesn't make sense, and is potentially a source of inconsistency. If using PyTorch, then the <code>pytorch</code> channel should absolutely be top priority. Also, <code>anaconda</code> is a subset of <code>defaults</code>, so that's unnecessary. I'd change the channels to:</p> <pre class="lang-yaml prettyprint-override"><code>channels: - pytorch - conda-forge - defaults - nodefaults </code></pre> <p>And the <code>nodefaults</code> at the end is a directive to ignore any system- or user-level configured channels, ensuring only the YAML channels are considered.</p>
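<p>As a sketch of what such pinning could look like (the version numbers below are taken from the Windows <code>conda list</code> output above, since that environment produces the good result; adjust to whichever platform you treat as the reference):</p> <pre class="lang-yaml prettyprint-override"><code>name: cosy-bunk-3.7
channels:
  - pytorch
  - conda-forge
  - defaults
  - nodefaults
dependencies:
  - python=3.7
  - numpy=1.21
  - pandas=1.3
  - matplotlib=3.5
  - scikit-learn=1.0
  - pytorch=1.10
  - torchvision=0.11
  - torchaudio=0.10
  - cudatoolkit=11.3
</code></pre>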
python|pandas|numpy|pytorch|conda
1
376,412
73,179,912
I keep getting an error with my code. I want to select rows for AR, AL, CA,
<p>1 to 5 of 5 entriesFilter</p> <p>I keep getting error with my code. I want tselect rows for AR, AL, CA, . Then, utilize stacked bar plot, to stack vote percentages for Trump, Clinton, Johnson, and Others. Please see 'pct_clinton', 'pct_trump', 'pct_johnson', 'pct_other' columns. Make sure that your x tick labels are those four states above.</p> <p><a href="https://i.stack.imgur.com/wHfgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wHfgC.png" alt="Image of the Table" /></a></p> <pre><code>df = df[['state', 'pct_clinton', 'pct_trump', 'pct_johnson', 'pct_other']].dropna() df.head() df.plot(kind='bar', stacked=True, color=['red', 'skyblue', 'green']) x= states['AR', 'MI', 'CA', 'WI'] plt.xticks('states') plt.ylabel('') </code></pre>
<p>You can try:</p> <pre><code># Restrict the list of states STATES = ['Arizona', 'Arkansas', 'California'] ax = df[df['state'].isin(STATES)].plot(x='state', kind='bar', rot=45, xlabel='States') plt.tight_layout() plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/1VjAp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1VjAp.png" alt="enter image description here" /></a></p>
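<p>Since the question asks for stacked percentages specifically, add <code>stacked=True</code> (as in the original attempt) to the same call:</p> <pre><code>ax = df[df['state'].isin(STATES)].plot(x='state', kind='bar', stacked=True,
                                       rot=45, xlabel='States')
</code></pre>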
python|pandas|matplotlib|plot
0
376,413
73,358,227
What's the .egg folder in PyCharm?
<p>I am using PyCharm with Python 3.8. I am working on a branch, let's call it <code>Working_branch1</code>. I have some tests that I need to run; usually when I run them they test the version of my code, <code>code.py</code>, located at <code>\directory_name\code.py</code>. However, for the past few days, every time I run these tests they instead test another version of the code from the same folder, located in <code>Lib\site-packages\directory_name-py3.8.egg\code.py</code>. The tests are failing since it's an older version of the code.</p> <p>Can someone please let me know what this <code>.egg</code> directory is, and how I can force my tests to run on my actual code version instead?</p>
<p><code>site-packages</code> is where Python stores installed packages for a given environment. The tests are running on the version installed in your environment, which means you probably installed it at some point. Your options are either to uninstall it, or to reinstall the updated version.</p>
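<p>A sketch of the two options on the command line (<code>directory_name</code> stands in for whatever your package is actually called):</p> <pre><code># see which installed copy the tests are importing
pip show directory_name

# option 1: remove the stale installed copy
pip uninstall directory_name

# option 2: install your working copy in editable mode (run from the project root)
pip install -e .
</code></pre>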
python|pandas|pycharm
0
376,414
73,455,383
What does a disparity map in OpenCV tell?
<p>What does the map returned by <code>stereo.compute()</code> indicate?</p> <p>The definition of disparity is the distance between two comparable pixels in the left and right images. However, by running the following code, I obtained a map with the same size as the input photos, with values ranging from <code>-16</code> to <code>211</code>. I got confused by the negative numbers. If these values refer to distance, why would a distance of <code>-16</code> be possible? (In fact, there are plenty of <code>-16</code> values in the map.)<br /> What precisely do these values indicate? Any help is greatly appreciated.</p> <p><strong>code:</strong></p> <pre><code>import cv2 as cv from matplotlib import pyplot as plt imgL = cv.imread(&quot;data/stereo-corridor_l.png&quot;, 0) print(imgL.shape) imgR = cv.imread(&quot;data/stereo-corridor_r.png&quot;, 0) stereo = cv.StereoBM_create(numDisparities=16, blockSize=17) disparity = stereo.compute(imgL, imgR) plt.imshow(disparity, &quot;gray&quot;) plt.show() </code></pre> <p><strong>two images used:</strong><br /> stereo right<br /> <a href="https://i.stack.imgur.com/6tY57.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tY57.png" alt="image R" /></a><br /> stereo left<br /> <a href="https://i.stack.imgur.com/aObX7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aObX7.png" alt="image L" /></a></p>
<p>As per its <a href="https://docs.opencv.org/4.5.2/d2/d6e/classcv_1_1StereoMatcher.html#a03f7087df1b2c618462eb98898841345" rel="nofollow noreferrer">documentation</a>, <code>stereo.compute()</code> computes a <em>16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output 32-bit floating-point disparity map.</em></p> <p>In practice this means that the numbers you are getting are integers that must be converted to floating point values.</p> <p>With some small changes, I did:</p> <pre><code>import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

imgL = cv.imread(&quot;data/stereo-corridor_l.png&quot;, 0)
print(imgL.shape)
imgR = cv.imread(&quot;data/stereo-corridor_r.png&quot;, 0)
stereo = cv.StereoBM_create(numDisparities=16, blockSize=17)
disparity = stereo.compute(imgL, imgR).astype(np.float32)/16
print(f&quot;Range: {np.min(disparity)} &lt;-&gt; {np.max(disparity)}&quot;)
plt.imshow(disparity, &quot;gray&quot;)
plt.show()
</code></pre> <p>You can see that the disparity range is <code>-1.0 &lt;-&gt; 12.5625</code>, where -1 simply means that the matching algorithm couldn't find a matching correspondence.</p> <p>Instead, without the conversion <code>astype(np.float32)/16</code>, you would get a fake range of <code>-16 &lt;-&gt; 201</code>, which is wrong, and 201 is also too high a value (you don't have objects that close).</p> <p>For a full example you may refer to my <a href="https://github.com/decadenza/SimpleStereo/blob/master/examples/008%20StereoMatchingSGBM.py" rel="nofollow noreferrer">code here</a>, as part of the <a href="https://github.com/decadenza/SimpleStereo" rel="nofollow noreferrer">SimpleStereo library</a>.</p>
python|numpy|opencv|disparity-mapping
1
376,415
73,315,559
How to compare multi column values with other multi column value of same dataframe?
<p>I want to take <code>a1</code> and <code>a2</code> from any row whose <code>a3</code> is missing and match them against the entire <code>b1 b2 b3</code> columns. Wherever <code>a1 a2</code> match any two of a row's <code>b</code> values, we grab the remaining third <code>b</code> value. For example, in row 2, <code>a1=84</code> and <code>a2=5</code>, which matches the <code>b</code>'s in row 3 where <code>b1=5</code> and <code>b2=84</code>, so we grab the value <code>b3=89</code>. Similarly, for row 5 we grab the value <code>b2=66</code>.</p> <p>This is just a small data set; the actual data contains millions of rows.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>time</th> <th>duration</th> <th>a1</th> <th>a2</th> <th>a3</th> <th>b1</th> <th>b2</th> <th>b3</th> </tr> </thead> <tbody> <tr> <td>2022-02-28</td> <td>95</td> <td>11</td> <td>2</td> <td>3</td> <td>22</td> <td>67</td> <td>25</td> </tr> <tr> <td>2022-02-27</td> <td>85</td> <td>84</td> <td>5</td> <td></td> <td>72</td> <td>23</td> <td>15</td> </tr> <tr> <td>2022-02-26</td> <td>87</td> <td>6</td> <td>7</td> <td>8</td> <td>5</td> <td>84</td> <td>89</td> </tr> <tr> <td>2022-02-25</td> <td>72</td> <td>9</td> <td>10</td> <td>44</td> <td>55</td> <td>78</td> <td>41</td> </tr> <tr> <td>2022-02-24</td> <td>66</td> <td>19</td> <td>57</td> <td></td> <td>50</td> <td>60</td> <td>51</td> </tr> <tr> <td>2022-02-23</td> <td>88</td> <td>20</td> <td>48</td> <td>67</td> <td>19</td> <td>66</td> <td>57</td> </tr> </tbody> </table> </div> <p><a href="https://i.stack.imgur.com/jydMp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jydMp.png" alt="enter image description here" /></a></p>
<p>You can get all permutations of the <code>b</code> columns, relabel each permutation to <code>a1, a2, a3</code>, and left-join it with the original DataFrame filtered to rows with a missing <code>a3</code>; a match on <code>a1, a2</code> yields the candidate <code>a3_</code> value. Then concatenate the list into one Series, remove possible duplicates in the index, and use it to replace the missing values of the <code>a3</code> column in the original DataFrame:</p> <pre><code>from itertools import permutations

cols = ['b1','b2','b3']
L = [df[df['a3'].isna()].merge(df.loc[:, x].set_axis(['a1','a2','a3'], axis=1),
                               how='left',
                               on=['a1','a2'],
                               suffixes=('','_'))['a3_'].dropna()
     for x in permutations(cols, 3)]

final = pd.concat(L)
final = final[~final.index.duplicated()]
print (final)
4    66.0
1    89.0
Name: a3_, dtype: float64

df['a3'] = df['a3'].fillna(final)
print (df)
         time  duration  a1  a2    a3  b1  b2  b3
0  2022-02-28        95  11   2   3.0  22  67  25
1  2022-02-27        85  84   5  89.0  72  23  15
2  2022-02-26        87   6   7   8.0   5  84  89
3  2022-02-25        72   9  10  44.0  55  78  41
4  2022-02-24        66  19  57  66.0  50  60  51
5  2022-02-23        88  20  48  67.0  19  66  57
</code></pre>
python|python-3.x|pandas|dataframe|csv
2
376,416
73,303,134
The size of tensor a (20) must match the size of tensor b (25) at non-singleton dimension 1 for pad_sequence() in pytorch
<p>My code:</p> <pre><code>import torch
import torch.nn.utils.rnn as r

a = torch.ones([1, 20])
b = torch.ones([1, 25])
c = r.pad_sequence([a, b], batch_first=True, padding_value=0)
</code></pre> <p>The traceback of this code is:</p> <pre><code>RuntimeError: The size of tensor a (20) must match the size of tensor b (25) at non-singleton dimension 1
</code></pre> <p>Can anybody explain to me what this error means and how to solve it?</p> <p>All I wanted is to pad zeros to tensor a to make its shape equal to b's.</p>
<p>In your example you have two sequences of length/duration of 20 and 25 samples, respectively. Both sequences have 1-dim element per time step.</p> <p>PyTorch expects the element dim to be the last dim, therefore you need:</p> <pre class="lang-py prettyprint-override"><code>c = r.pad_sequence([a.T, b.T], batch_first=True) </code></pre> <p>With output shape of <code>c</code> equals to <code>[2, 25, 1]</code>.</p>
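<p>If you then want the padded batch back as shape <code>[2, 25]</code> (dropping the trailing element dimension), a squeeze works:</p> <pre class="lang-py prettyprint-override"><code>c = r.pad_sequence([a.T, b.T], batch_first=True, padding_value=0)
c = c.squeeze(-1)  # shape [2, 25]; row 0 is `a` padded with five zeros
</code></pre>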
python|pytorch|recurrent-neural-network|tensor
0
376,417
73,448,658
Uncorrelated random variables python
<p>I am trying to create a random vector whose components are uncorrelated standard normal variables with zero mean and unit variance, using the code below.</p> <p>Are these random variables uncorrelated? Because when I try to find the covariance coefficient:</p> <pre><code>import numpy as np

print(np.random.normal(0,1,size=17))
print(np.cov(np.random.normal(0,1,size=17)))
</code></pre> <p>Also, Python is not giving me exactly a zero coefficient of covariance (my result is close to 0.9).</p>
<p>Those are <a href="https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables" rel="nofollow noreferrer">independent and identically distributed</a> variates drawn from a <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="nofollow noreferrer">standard normal distribution</a>. Therefore, you should expect them to be uncorrelated. Note that <code>np.random.normal(size=n)</code> is equivalent to your call, since the defaults are zero mean and unit scale.</p> <p>You should not expect to get exactly zero from any correlation test – this is a random set of values drawn from that distribution, and sometimes they might happen to be somewhat autocorrelated.</p> <p>Update: I've just had a play with Numpy's <code>cov</code> function and think you're using it wrong. At the moment you're passing it a single vector, so it just returns the sample variance of those 17 draws; that's the ~0.9 you're seeing, not a correlation between variables. I think <a href="https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html" rel="nofollow noreferrer"><code>corrcoef</code></a> might be a better method to use, as it's normalised to [-1,1], which makes interpreting values easier.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.stats as sps

n = 100
for _ in range(20):
    obs = np.random.normal(size=n)
    # calculate autocorrelation
    s1 = np.corrcoef(obs[1:], obs[:-1])[0,1]
    # calculate correlation with index
    s2 = np.corrcoef(obs, np.arange(n))[0,1]
    print(f&quot; {s1: .3f} {s2: .3f}&quot;)

# 95% of the values above should be smaller than this
t = 0.95
ts = sps.norm.ppf((1 - t) / 2) / np.sqrt(n)
print(f&quot;\n{t:.0%} cutoff for n={n} is $abs(s) &lt; {np.abs(ts):.3f}$&quot;)
</code></pre> <p>If I run this I get the following output:</p> <pre class="lang-none prettyprint-override"><code>  0.118 -0.152
 -0.005 -0.145
 -0.177  0.143
  0.091  0.186
 -0.183  0.083
 -0.009  0.027
 -0.124 -0.083
  0.111  0.167
 -0.207 -0.093
 -0.025 -0.191
 -0.117 -0.001
 -0.019 -0.026
 -0.177 -0.057
  0.084 -0.123
 -0.144  0.088
 -0.018  0.245
  0.117  0.002
  0.069  0.084
  0.025  0.043
 -0.101 -0.112

95% cutoff for n=100 is $abs(s) &lt; 0.196$
</code></pre> <p>note that two values are greater than the test statistic, but that's expected. Re-running with <code>n=100_000</code> gives a much tighter bound, but you should still get the same failure rate.</p> <p>Your update says that your <code>n=17</code>, so you might want to switch to Student's t-distribution for the cutoff. Something like:</p> <pre><code>ts = sps.t(n-2).ppf((1 - t) / 2) / np.sqrt(n)
</code></pre> <p>i.e. just changing <code>sps.norm</code> to <code>sps.t(n-2)</code>. Note that changing from Gaussian to t only widens the cutoff for <code>n=17</code> a bit, from 0.475 to 0.517.</p>
python|numpy|random|covariance|variance
3
376,418
73,513,856
Pandas: Map DF with multiple columns to another
<p>I am trying to figure out an efficient way to map <code>df2</code> to <code>df1</code>. What makes this a bit trickier is that the <em>key</em> can sometimes be a tuple of 2+ <code>keys</code>.</p> <p>Keys (the a, b, c's) are indeed strings.</p> <pre><code>df1 = pd.DataFrame(data={'Index':[1,2,3,4,5,6,7,8],'key':['a','a','b',('a','c'),'c',('a','b','c'),'b','a']})
df2 = pd.DataFrame(data={'Index':['a','b','c'],'Val1':[1,2,3],'Val2':[.1,.2,.3],'Val3':[10,20,30],'Val4':[1,2,3],'Val5':[2,4,6]})
</code></pre> <p>Basically, I am trying to map <code>df2</code> (via the <em>Index</em>) to the <em>key</em> in <code>df1</code>. The end result would have <code>columns = [key, Val1, Val2, Val3, Val4, Val5]</code>, and where <code>type(key) == tuple</code>, I would like a tuple of the 2+ matched Vals.</p> <p>Any help is appreciated! Thanks!</p>
<p>Since you are talking about data types, I think you are trying to join/merge on the (a, b, c) key variables and getting the error:</p> <blockquote> <p>TypeError: Unhashable Type: 'List'</p> </blockquote> <p>If I'm right, you should build a dictionary {index: value}, with integer (or string) indices as keys and tuples of objects as values. Then replace, do your join/merge/whatever, and replace again.</p> <p>Considering that <code>pandas.replace</code> will fail on &quot;weird&quot; data types, you will need one dict per data type and to filter the dataframe accordingly.</p> <p>That's a generic answer for a generic question. For a more precise answer, pin down the (a, b, c) data types first.</p>
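<p>A minimal sketch of that idea using the frames from the question (the helper name <code>lookup</code> and the per-column loop are my own, hypothetical additions):</p> <pre><code>import pandas as pd

df2i = df2.set_index('Index')

def lookup(key, col):
    # tuple keys -&gt; tuple of matched values, scalar keys -&gt; scalar value
    if isinstance(key, tuple):
        return tuple(df2i.loc[list(key), col])
    return df2i.loc[key, col]

for col in ['Val1', 'Val2', 'Val3', 'Val4', 'Val5']:
    df1[col] = df1['key'].map(lambda k: lookup(k, col))
</code></pre>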
python|pandas
0
376,419
73,329,541
mapping bad matches to other dataframe
<p>I've got a pandas df where I've already matched the name to the ID, but there are some IDs that don't have a name. For those, I want to go back to the mapping file and search the 'alternative_ID_list' column and see if there is a match with a corresponding name.</p> <pre><code>current df name ID 0 joe USER1 3 mary USER2 5 USER3 USER3 8 USER4 USER4 9 USER5 USER5 9 USER6 USER6 bad_matches=[3, 4, 5, 6] </code></pre> <pre><code>mapping_df = name ID alternative_ID_list 0 joe USER1 USER213.32 3 mary USER2 USER643.11 5 sam USER98 USER31.5 7 jack USER992 USER4.2 8 rick USER902 USER5.6, USER321.1 9 john USER979 USER6.8, USER987.9 10 jay USER980 USER479.2, USER989.0 #use mapping_df to find the bad_match_IDs (take the first match found if multiple rows for one bad_match_id) </code></pre> <pre><code>desired name ID 0 joe USER1 3 mary USER2 5 USER3 USER3 7 jack USER4 8 rick USER5 9 john USER6 </code></pre>
<p>First split the <code>alternative_ID_list</code> column and flatten it with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>, convert to integer and filter by <code>bad_matches</code>; then look for possible matches against the original DataFrame with a left-join <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a>; last, set the same indices and replace the matched rows with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>:</p> <pre><code>df1 = (mapping_df.assign(alternative_ID_list=mapping_df.alternative_ID_list.str.split(', '))
                 .explode('alternative_ID_list')
                 .astype({'alternative_ID_list':int})
                 .drop_duplicates('alternative_ID_list')
                 .loc[lambda x: x['alternative_ID_list'].isin(bad_matches)])
print (df1)
   name   ID  alternative_ID_list
7  jack  992                  379
8  rick  902                  579
9  john  979                  479

f = lambda x: x.strip('_')
df1 = df.merge(df1, left_on='ID', right_on='alternative_ID_list',
               how='left', suffixes=('','_'))[['name_','ID_']].rename(columns=f)
df = df1.set_index(df.index).fillna(df).astype({'ID':int})
print (df)
     name   ID
0     joe  123
3    mary  342
5  ID/214  214
8    jack  992
9    rick  902
9    john  979
</code></pre> <p>EDIT: Because only the column <code>name</code> is replaced, the solution is simplified with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p> <pre><code>mapping = [str(x) for x in bad_matches]

df1 = (mapping_df.assign(alternative_ID_list=mapping_df.alternative_ID_list.str.split(', '))
                 .explode('alternative_ID_list')
                 .assign(alternative_ID_list = lambda x: x.alternative_ID_list.str.split('.').str[0])
                 .drop_duplicates('alternative_ID_list')
                 .loc[lambda x: x['alternative_ID_list'].str.extract('(\d+)$', expand=False).isin(mapping)]
                 )
print (df1)
   name       ID alternative_ID_list
7  jack  USER992               USER4
8  rick  USER902               USER5
9  john  USER979               USER6

df['name'] = df['ID'].map(df1.set_index('alternative_ID_list')['name']).fillna(df['name'])
print (df)
    name     ID
0    joe  USER1
3   mary  USER2
5  USER3  USER3
8   jack  USER4
9   rick  USER5
9   john  USER6
</code></pre>
python|pandas|dataframe
0
376,420
73,328,881
Dataframe(pandas) append not working in a loop
<p>I am trying to append a DataFrame to an existing DataFrame in a loop. Currently, <code>new_data</code> has 4 values per column. I want to go through the loop and append the new data, which is <code>df2</code> with 3 values per column, every time the loop iterates.</p> <pre><code>new_data = pd.DataFrame({&quot;a&quot;:[1, 2, 3, 4], &quot;b&quot;:[5, 6, 7, 8]})

for i in range(5):
    df2 = pd.DataFrame({&quot;a&quot;:[1, 2, 3], &quot;b&quot;:[5, 6, 7]})
    print(df2)
    new_data.append(df2)
</code></pre> <p>The final result should have 19 values per column, for example:</p> <pre><code>a b
----
1 5
2 6
3 7
4 8
1 5
2 6
3 7
1 5
2 6
3 7
1 5
2 6
3 7
1 5
2 6
3 7
1 5
2 6
3 7
</code></pre> <p>But for some reason it's not working and I am confused. When I perform the operation without a loop, it works properly.</p> <p>For example:</p> <pre><code># Creating the first Dataframe using dictionary
df1 = pd.DataFrame({&quot;a&quot;:[1, 2, 3, 4], &quot;b&quot;:[5, 6, 7, 8]})

# Creating the Second Dataframe using dictionary
df2 = pd.DataFrame({&quot;a&quot;:[1, 2, 3], &quot;b&quot;:[5, 6, 7]})

# Print df1
print(df1, &quot;\n&quot;)

df1.append(df2)
</code></pre> <p>I don't understand what the issue is here. Please explain it to me.</p>
<p>You need to:</p> <pre><code>df1 = df1.append(df2) </code></pre> <p>And even better, don't use append which will be deprecated soon and use <code>concat</code> instead:</p> <pre><code>df1 = pd.concat([df1, df2]) </code></pre>
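<p>When appending inside a loop, the idiomatic pattern is to collect the frames in a list and concatenate once at the end (a sketch based on the code in the question):</p> <pre><code>frames = [new_data]
for i in range(5):
    df2 = pd.DataFrame({&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [5, 6, 7]})
    frames.append(df2)

new_data = pd.concat(frames, ignore_index=True)  # 19 rows
</code></pre> <p>This avoids copying the growing DataFrame on every iteration, which is what repeated <code>append</code>/<code>concat</code> calls do.</p>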
python|pandas|dataframe
0
376,421
73,308,464
Including a lag specification in a pandas merge based on datetime column
<p>I am merging a column from one dataframe with a larger one based on date column. With this code: <code>df_final = pd.merge(df_final, pmms_df, how='left', on='PredictionDate')</code></p> <p><code>pmms_df</code> looks like this:</p> <pre class="lang-py prettyprint-override"><code> PredictionDate U.S. 30 yr FRM U.S. 15 yr FRM 0 2014-12-31 3.87 3.15 1 2015-01-01 3.87 3.15 2 2015-01-02 3.87 3.15 3 2015-01-03 3.87 3.15 4 2015-01-04 3.87 3.15 ... ... ... ... 2769 2022-07-31 5.30 4.58 2770 2022-08-01 4.99 4.26 2771 2022-08-02 4.99 4.26 2772 2022-08-03 4.99 4.26 2773 2022-08-04 4.99 4.26 </code></pre> <p>and <code>df_final</code> is a huge df with 20,000+ rows and 61 columns, so I am only including the relevant output columns here post-merge:</p> <pre class="lang-py prettyprint-override"><code> PredictionDate U.S. 30 yr FRM U.S. 15 yr FRM 0 2022-03-09 3.85 3.09 1 2022-04-11 5.00 4.17 2 2022-05-10 5.30 4.48 3 2022-06-09 5.23 4.38 4 2021-04-09 3.13 2.42 ... ... ... ... 20528 2022-01-11 3.45 2.62 20529 2022-02-09 3.69 2.93 20530 2022-03-09 3.85 3.09 20531 2022-04-11 5.00 4.17 20532 2022-05-10 5.30 4.48 </code></pre> <p>The dataframe I'm merging with has rows with only one day per month so the merge finds that day's row in the first dataframe and merges the U.S. 30 and 15 yr FRM data for that day into a new column in the other dataframe. However, I would like to add an additional column in the other dataframe for both 30 and 15 yr FRM that is based on the data in this dataframe but from 30 days earlier. Desired output would look like something like this:</p> <pre class="lang-py prettyprint-override"><code> PredictionDate U.S. 30 yr FRM U.S. 15 yr FRM 30yrLag 15yrLag 0 2022-03-09 3.85 3.09 3.72 3.12 1 2022-04-11 5.00 4.17 5.05 4.15 2 2022-05-10 5.30 4.48 5.32 4.58 3 2022-06-09 5.23 4.38 . . 4 2021-04-09 3.13 2.42 . . ... ... ... ... 20528 2022-01-11 3.45 2.62 . . 20529 2022-02-09 3.69 2.93 . . 20530 2022-03-09 3.85 3.09 . . 20531 2022-04-11 5.00 4.17 . . 20532 2022-05-10 5.30 4.48 . . </code></pre> <p>So the idea is that those last two columns would contain the 30yr and 15yr data of 30 days prior in <code>pmms_df</code> to the day it was merged on. The values I included here for <code>30yrLag</code> and <code>15yrlag</code> are supposed to be the values for those columns from 30 days before the date in <code>PredictedDate</code> in the final dataframe.</p>
<p>Solution <a href="https://stackoverflow.com/a/73312099/15975987">here</a>.</p> <p>Needed to do the lag first, then merge, instead of doing it simultaneously.</p>
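<p>For reference, a minimal sketch of that lag-first approach (assuming <code>PredictionDate</code> is already a datetime column in both frames; the <code>lagged</code> name and the 30-calendar-day shift are my own choices):</p> <pre class="lang-py prettyprint-override"><code>lagged = pmms_df.copy()
lagged['PredictionDate'] = lagged['PredictionDate'] + pd.Timedelta(days=30)
lagged = lagged.rename(columns={'U.S. 30 yr FRM': '30yrLag',
                                'U.S. 15 yr FRM': '15yrLag'})

df_final = pd.merge(df_final, pmms_df, how='left', on='PredictionDate')
df_final = pd.merge(df_final, lagged, how='left', on='PredictionDate')
</code></pre> <p>Because <code>pmms_df</code> has one row per calendar day, shifting its dates forward by 30 days means each <code>PredictionDate</code> picks up the rates from 30 days earlier.</p>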
python|pandas|dataframe|datetime|merge
0
376,422
73,407,268
Create a dummy column based on a different column
<p>I have panel data and want to create an &quot;active trader&quot; column for each ID in each period: the ID counts as active as long as it has traded at least once in every quarter up to that point; it drops to 0 from the first quarter with no trades.</p> <p>current df</p> <pre><code>ID date    trading
A  2020Q1  4
A  2020Q2  5
A  2020Q3  0
A  2020Q4  2
A  2021Q1  1
B  2019Q1  0
B  2019Q2  1
B  2019Q3  2
C  2021Q1  3
C  2021Q2  3
C  2021Q3  4
C  2021Q4  0
...
</code></pre> <p>desired</p> <pre><code>ID date    trading  active
A  2020Q1  4        1
A  2020Q2  5        1
A  2020Q3  0        0
A  2020Q4  2        0
A  2021Q1  1        0
B  2019Q1  0        0
B  2019Q2  1        0
B  2019Q3  2        0
C  2021Q1  3        1
C  2021Q2  3        1
C  2021Q3  4        1
C  2021Q4  0        0
...
</code></pre>
<p>You could try as follows:</p> <pre><code>import pandas as pd
import numpy as np

data = {'ID': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'],
        'date': ['2020Q1','2020Q2','2020Q3','2020Q4','2021Q1','2019Q1','2019Q2','2019Q3','2021Q1','2021Q2','2021Q3','2021Q4'],
        'trading': [4, 5, 0, 2, 1, 0, 1, 2, 3, 3, 4, 0],
        'active': [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0]}

df = pd.DataFrame(data)
df_desired = df.copy()
df_desired.drop('active', inplace=True, axis=1)

df_desired['active'] = df_desired.groupby(['ID'])['trading'].cummin().gt(0).astype(int)

# there's a difference in dtype (int64 -&gt; np.int32)
df['active'] = df_desired['active'].astype(np.int32)

# check if result matches desired output:
df.equals(df_desired)
# True
</code></pre> <p><em>Explanation</em>. <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cummin.html" rel="nofollow noreferrer"><code>df.cummin</code></a> can be used to return the cumulative minimum of the <code>trading</code> column within each <code>ID</code> group:</p> <pre><code>print(df_desired.groupby(['ID'])['trading'].cummin())

0     4
1     4
2     0
3     0
4     0
5     0
6     0
7     0
8     3
9     3
10    3
11    0
Name: trading, dtype: int64
</code></pre> <p>So, this is a quick way to fill down everything with <code>0</code> as soon as we hit the first zero. Next, we simply check larger than <code>0</code>, and convert the resulting <code>pd.Series</code> with <code>True/False</code> to <code>1/0</code> using <code>.astype(int)</code>. So, the final result becomes:</p> <pre><code>print(df_desired.groupby(['ID'])['trading'].cummin().gt(0).astype(int))

0     1
1     1
2     0
3     0
4     0
5     0
6     0
7     0
8     1
9     1
10    1
11    0
Name: trading, dtype: int32
</code></pre>
python|pandas
5
376,423
73,257,965
Merge two functions such that the arguments are merged and the output is merged
<p>I have two functions as follows:</p> <pre><code>def eq_2(x):
    A, P, E, EA = x
    return np.array([E*A, EA, EA, E*P])

def eq_3(x):
    A, P, E, EA = x
    return np.array([E**2, E, E, E])
</code></pre> <p>Subsequently I make a list and save it as '<code>v</code>':</p> <pre><code>v = [eq_2, eq_3]

[&lt;function eq_2 at 0x7f2&gt;, &lt;function eq_3 at 0x7f3&gt;]
</code></pre> <p>Now my problem is: How can I treat <code>v</code> as a function that takes an <strong>argument <code>x</code> of <code>shape=(8,)</code> and returns a result of <code>shape=(8,)</code>?</strong></p> <p>Furthermore, I want to be able to merge as many functions as I wish (i.e. grow the <code>v = [eq_2, eq_3]</code> list).</p>
<p>If the length of the input is always 8, I think you could split the input into two parts, feed them to the two functions, and concatenate the outputs.</p> <pre><code>import numpy as np

def eq_2(x):
    A, P, E, EA = x
    return np.array([E*A, EA, EA, E*P])

def eq_3(x):
    A, P, E, EA = x
    return np.array([E**2, E, E, E])

v = lambda x: np.hstack((eq_2(x[:4]), eq_3(x[4:])))

v_result = v([1,2,3,4,5,6,7,8])
</code></pre>
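<p>To merge arbitrarily many functions, the same idea generalises: split <code>x</code> into one 4-element chunk per function and stack the results (the helper name <code>merge_functions</code> is hypothetical, and it assumes every function consumes exactly 4 values):</p> <pre><code>def merge_functions(funcs, chunk=4):
    def merged(x):
        x = np.asarray(x)
        parts = [f(x[i*chunk:(i+1)*chunk]) for i, f in enumerate(funcs)]
        return np.hstack(parts)
    return merged

v = merge_functions([eq_2, eq_3])
v([1, 2, 3, 4, 5, 6, 7, 8])  # result has shape (8,)
</code></pre>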
python|numpy
0
376,424
73,332,346
How to create a triangle of "1"s?
<p>I want to create this from multiple arrays, ideally using NumPy:</p> <pre><code>1 0 0 0 0 0
1 1 0 0 0 0
1 1 1 0 0 0
1 1 1 1 0 0
1 1 1 1 1 0
1 1 1 1 1 1
</code></pre> <p>However, I would prefer that a library (such as NumPy itself) is used to create this. How do I go about doing this?</p> <p>There are a lot of answers on SO, but they all do it <em>without</em> libraries, and I haven't been able to find anything online that produces this!</p>
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.tril.html" rel="nofollow noreferrer"><code>np.tril</code></a>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; np.tril(np.ones((6, 6), dtype=int)) array([[1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1]]) </code></pre>
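<p>As a side note, <code>np.tri</code> builds this lower-triangle-of-ones pattern directly, without constructing a full matrix of ones first:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; np.tri(6, dtype=int)
array([[1, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [1, 1, 1, 0, 0, 0],
       [1, 1, 1, 1, 0, 0],
       [1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1]])
</code></pre>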
python|arrays|numpy
2
376,425
73,244,807
related to pandas data frame
<p>CODE :-</p> <pre><code>from datetime import date from datetime import timedelta from nsepy import get_history import pandas as pd end1 =date.today() start1 = end1 - timedelta(days=10) exp_date1 = date(2022,8,25) exp_date2 = date(2022,9,29) stock = ['RELIANCE','HDFCBANK','INFY','ICICIBANK','HDFC','TCS','KOTAKBANK','LT','SBIN','HINDUNILVR','AXISBANK', 'ITC','BAJFINANCE','BHARTIARTL','ASIANPAINT','HCLTECH','MARUTI','TITAN','BAJAJFINSV','TATAMOTORS', 'TECHM','SUNPHARMA','TATASTEEL','M&amp;M','WIPRO','ULTRACEMCO','POWERGRID','HINDALCO','NTPC','NESTLEIND', 'GRASIM','ONGC','JSWSTEEL','HDFCLIFE','INDUSINDBK','SBILIFE','DRREDDY','ADANIPORTS','DIVISLAB','CIPLA', 'BAJAJ-AUTO','TATACONSUM','UPL','BRITANNIA','BPCL','EICHERMOT','HEROMOTOCO','COALINDIA','SHREECEM','IOC'] for stock in stock: stock_jan = get_history(symbol=stock, start=start1, end=end1, futures=True, expiry_date=exp_date1) stock_feb = get_history(symbol=stock, start=start1, end=end1, futures=True, expiry_date=exp_date2) delivery_per_age = get_history(symbol=stock, start=start1, end=end1) symbol_s = get_history(symbol=stock, start=start1, end=end1) oi_combined = pd.concat([stock_jan['Change in OI'] + stock_feb['Change in OI']]) total_oi = pd.concat([stock_jan['Open Interest']+stock_feb['Open Interest']]) delivery_vol = pd.concat([delivery_per_age['Deliverable Volume']]) na_me = pd.concat([symbol_s['Symbol']]) close = pd.concat([delivery_per_age['Close']]) df = pd.DataFrame(na_me) df['TOTAL_OPN_INT'] = total_oi df['OI_COMBINED'] = oi_combined df['%_CHANGE'] = ((df['OI_COMBINED'] / df['TOTAL_OPN_INT']) * 100).__round__(0) pd.set_option('display.max_columns',8) pd.set_option('display.width',200) print(df) </code></pre> <p>PRODUCT:-</p> <pre><code> Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 RELIANCE 24406250 6664500 27.0 2022-07-27 RELIANCE 30434500 6028250 20.0 2022-07-28 RELIANCE 36177500 5743000 16.0 2022-07-29 RELIANCE 35629250 -548250 -2.0 2022-08-01 RELIANCE 33920750 -1708500 -5.0 2022-08-02 RELIANCE 32738250 -1182500 -4.0 2022-08-03 RELIANCE 32026500 -711750 -2.0 2022-08-04 RELIANCE 32886500 860000 3.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HDFCBANK 44094050 12837550 29.0 2022-07-27 HDFCBANK 53098100 9004050 17.0 2022-07-28 HDFCBANK 58785650 5687550 10.0 2022-07-29 HDFCBANK 59424200 638550 1.0 2022-08-01 HDFCBANK 60106200 682000 1.0 2022-08-02 HDFCBANK 60987300 881100 1.0 2022-08-03 HDFCBANK 60483500 -503800 -1.0 2022-08-04 HDFCBANK 60819550 336050 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 INFY 27968100 10026900 36.0 2022-07-27 INFY 32902800 4934700 15.0 2022-07-28 INFY 36741900 3839100 10.0 2022-07-29 INFY 36555000 -186900 -1.0 2022-08-01 INFY 36683100 128100 0.0 2022-08-02 INFY 36653700 -29400 -0.0 2022-08-03 INFY 36848700 195000 1.0 2022-08-04 INFY 36459900 -388800 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ICICIBANK 50990500 10811625 21.0 2022-07-27 ICICIBANK 59917000 8926500 15.0 2022-07-28 ICICIBANK 65434875 5517875 8.0 2022-07-29 ICICIBANK 64421500 -1013375 -2.0 2022-08-01 ICICIBANK 63976000 -445500 -1.0 2022-08-02 ICICIBANK 64975625 999625 2.0 2022-08-03 ICICIBANK 64824375 -151250 -0.0 2022-08-04 ICICIBANK 64097000 -727375 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HDFC 14214900 4004400 28.0 2022-07-27 HDFC 16781100 2566200 15.0 2022-07-28 HDFC 21082800 4301700 20.0 2022-07-29 HDFC 21459600 376800 2.0 2022-08-01 HDFC 21417300 -42300 -0.0 2022-08-02 HDFC 21621300 204000 1.0 2022-08-03 HDFC 21690900 69600 0.0 2022-08-04 HDFC 21563100 
-127800 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 TCS 8746050 2746050 31.0 2022-07-27 TCS 10440150 1694100 16.0 2022-07-28 TCS 12167850 1727700 14.0 2022-07-29 TCS 11899800 -268050 -2.0 2022-08-01 TCS 11961300 61500 1.0 2022-08-02 TCS 12141900 180600 1.0 2022-08-03 TCS 12310350 168450 1.0 2022-08-04 TCS 12492900 182550 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 KOTAKBANK 10294000 4272400 42.0 2022-07-27 KOTAKBANK 13121600 2827600 22.0 2022-07-28 KOTAKBANK 14876800 1755200 12.0 2022-07-29 KOTAKBANK 14772000 -104800 -1.0 2022-08-01 KOTAKBANK 15180000 408000 3.0 2022-08-02 KOTAKBANK 15558000 378000 2.0 2022-08-03 KOTAKBANK 15645200 87200 1.0 2022-08-04 KOTAKBANK 15792800 147600 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 LT 6851700 1901100 28.0 2022-07-27 LT 8606700 1755000 20.0 2022-07-28 LT 9540300 933600 10.0 2022-07-29 LT 9676200 135900 1.0 2022-08-01 LT 9579600 -96600 -1.0 2022-08-02 LT 9432300 -147300 -2.0 2022-08-03 LT 9510600 78300 1.0 2022-08-04 LT 9499200 -11400 -0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 SBIN 30426000 10119000 33.0 2022-07-27 SBIN 43723500 13297500 30.0 2022-07-28 SBIN 48078000 4354500 9.0 2022-07-29 SBIN 45868500 -2209500 -5.0 2022-08-01 SBIN 47425500 1557000 3.0 2022-08-02 SBIN 50124000 2698500 5.0 2022-08-03 SBIN 52092000 1968000 4.0 2022-08-04 SBIN 51882000 -210000 -0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HINDUNILVR 6886200 1464900 21.0 2022-07-27 HINDUNILVR 8522700 1636500 19.0 2022-07-28 HINDUNILVR 10300200 1777500 17.0 2022-07-29 HINDUNILVR 10250100 -50100 -0.0 2022-08-01 HINDUNILVR 10237200 -12900 -0.0 2022-08-02 HINDUNILVR 10178700 -58500 -1.0 2022-08-03 HINDUNILVR 10208400 29700 0.0 2022-08-04 HINDUNILVR 10289700 81300 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 AXISBANK 38370000 13545600 35.0 2022-07-27 AXISBANK 44377200 6007200 14.0 2022-07-28 AXISBANK 48842400 4465200 9.0 2022-07-29 AXISBANK 48660000 -182400 -0.0 2022-08-01 AXISBANK 48901200 241200 0.0 2022-08-02 AXISBANK 50166000 1264800 3.0 2022-08-03 AXISBANK 50004000 -162000 -0.0 2022-08-04 AXISBANK 50222400 218400 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ITC 52278400 13782400 26.0 2022-07-27 ITC 66179200 13900800 21.0 2022-07-28 ITC 78844800 12665600 16.0 2022-07-29 ITC 83827200 4982400 6.0 2022-08-01 ITC 85734400 1907200 2.0 2022-08-02 ITC 86812800 1078400 1.0 2022-08-03 ITC 83555200 -3257600 -4.0 2022-08-04 ITC 80704000 -2851200 -4.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 BAJFINANCE 3364500 1196625 36.0 2022-07-27 BAJFINANCE 4470500 1106000 25.0 2022-07-28 BAJFINANCE 4969750 499250 10.0 2022-07-29 BAJFINANCE 4754000 -215750 -5.0 2022-08-01 BAJFINANCE 4698125 -55875 -1.0 2022-08-02 BAJFINANCE 4670750 -27375 -1.0 2022-08-03 BAJFINANCE 4645625 -25125 -1.0 2022-08-04 BAJFINANCE 4619000 -26625 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 BHARTIARTL 31892450 11726800 37.0 2022-07-27 BHARTIARTL 41211950 9319500 23.0 2022-07-28 BHARTIARTL 50717650 9505700 19.0 2022-07-29 BHARTIARTL 52344050 1626400 3.0 2022-08-01 BHARTIARTL 53248450 904400 2.0 2022-08-02 BHARTIARTL 53561950 313500 1.0 2022-08-03 BHARTIARTL 53350100 -211850 -0.0 2022-08-04 BHARTIARTL 53362450 12350 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ASIANPAINT 4498800 1840000 41.0 2022-07-27 ASIANPAINT 5360400 861600 16.0 2022-07-28 ASIANPAINT 5885400 525000 9.0 2022-07-29 ASIANPAINT 5864200 -21200 -0.0 2022-08-01 ASIANPAINT 5812200 -52000 -1.0 2022-08-02 
ASIANPAINT 5809800 -2400 -0.0 2022-08-03 ASIANPAINT 5773000 -36800 -1.0 2022-08-04 ASIANPAINT 5824800 51800 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HCLTECH 16972200 4148900 24.0 2022-07-27 HCLTECH 19371800 2399600 12.0 2022-07-28 HCLTECH 21725200 2353400 11.0 2022-07-29 HCLTECH 21765100 39900 0.0 2022-08-01 HCLTECH 21652400 -112700 -1.0 2022-08-02 HCLTECH 21272300 -380100 -2.0 2022-08-03 HCLTECH 21593600 321300 1.0 2022-08-04 HCLTECH 21654500 60900 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 MARUTI 2804400 1075200 38.0 2022-07-27 MARUTI 3538900 734500 21.0 2022-07-28 MARUTI 3836200 297300 8.0 2022-07-29 MARUTI 3983700 147500 4.0 2022-08-01 MARUTI 3996700 13000 0.0 2022-08-02 MARUTI 4102600 105900 3.0 2022-08-03 MARUTI 3949400 -153200 -4.0 2022-08-04 MARUTI 3952000 2600 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 TITAN 3439875 1219125 35.0 2022-07-27 TITAN 4491000 1051125 23.0 2022-07-28 TITAN 5237625 746625 14.0 2022-07-29 TITAN 5311875 74250 1.0 2022-08-01 TITAN 5392875 81000 2.0 2022-08-02 TITAN 5452500 59625 1.0 2022-08-03 TITAN 5470500 18000 0.0 2022-08-04 TITAN 5572125 101625 2.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 BAJAJFINSV 616100 193350 31.0 2022-07-27 BAJAJFINSV 776000 159900 21.0 2022-07-28 BAJAJFINSV 868250 92250 11.0 2022-07-29 BAJAJFINSV 816100 -52150 -6.0 2022-08-01 BAJAJFINSV 785600 -30500 -4.0 2022-08-02 BAJAJFINSV 788650 3050 0.0 2022-08-03 BAJAJFINSV 772850 -15800 -2.0 2022-08-04 BAJAJFINSV 746550 -26300 -4.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 TATAMOTORS 38274075 12728100 33.0 2022-07-27 TATAMOTORS 52608150 14334075 27.0 2022-07-28 TATAMOTORS 70717050 18108900 26.0 2022-07-29 TATAMOTORS 69433125 -1283925 -2.0 2022-08-01 TATAMOTORS 70537500 1104375 2.0 2022-08-02 TATAMOTORS 69673950 -863550 -1.0 2022-08-03 TATAMOTORS 67837125 -1836825 -3.0 2022-08-04 TATAMOTORS 67834275 -2850 -0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 TECHM 14919000 4667400 31.0 2022-07-27 TECHM 18929400 4010400 21.0 2022-07-28 TECHM 22117800 3188400 14.0 2022-07-29 TECHM 22616400 498600 2.0 2022-08-01 TECHM 22501800 -114600 -1.0 2022-08-02 TECHM 22698600 196800 1.0 2022-08-03 TECHM 22839600 141000 1.0 2022-08-04 TECHM 22904400 64800 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 SUNPHARMA 7263200 3342500 46.0 2022-07-27 SUNPHARMA 14949900 7686700 51.0 2022-07-28 SUNPHARMA 18085200 3135300 17.0 2022-07-29 SUNPHARMA 20848100 2762900 13.0 2022-08-01 SUNPHARMA 20908300 60200 0.0 2022-08-02 SUNPHARMA 20686400 -221900 -1.0 2022-08-03 SUNPHARMA 21007000 320600 2.0 2022-08-04 SUNPHARMA 21337400 330400 2.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 TATASTEEL 15523550 5135700 33.0 2022-07-27 TATASTEEL 20983950 5460400 26.0 2022-07-28 TATASTEEL 247078000 226094050 92.0 2022-07-29 TATASTEEL 239398250 -7679750 -3.0 2022-08-01 TATASTEEL 248765250 9367000 4.0 2022-08-02 TATASTEEL 243975500 -4789750 -2.0 2022-08-03 TATASTEEL 241000500 -2975000 -1.0 2022-08-04 TATASTEEL 240758250 -242250 -0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 M&amp;M 8032500 2580200 32.0 2022-07-27 M&amp;M 10152100 2119600 21.0 2022-07-28 M&amp;M 10845100 693000 6.0 2022-07-29 M&amp;M 11348400 503300 4.0 2022-08-01 M&amp;M 11429600 81200 1.0 2022-08-02 M&amp;M 11151000 -278600 -2.0 2022-08-03 M&amp;M 11196500 45500 0.0 2022-08-04 M&amp;M 11816700 620200 5.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 WIPRO 29145000 9058000 31.0 2022-07-27 WIPRO 38028000 8883000 23.0 
2022-07-28 WIPRO 44330000 6302000 14.0 2022-07-29 WIPRO 44173000 -157000 -0.0 2022-08-01 WIPRO 43964000 -209000 -0.0 2022-08-02 WIPRO 42742000 -1222000 -3.0 2022-08-03 WIPRO 41634000 -1108000 -3.0 2022-08-04 WIPRO 39661000 -1973000 -5.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ULTRACEMCO 1822300 469300 26.0 2022-07-27 ULTRACEMCO 2157000 334700 16.0 2022-07-28 ULTRACEMCO 2222100 65100 3.0 2022-07-29 ULTRACEMCO 2168600 -53500 -2.0 2022-08-01 ULTRACEMCO 2078700 -89900 -4.0 2022-08-02 ULTRACEMCO 2094200 15500 1.0 2022-08-03 ULTRACEMCO 2036500 -57700 -3.0 2022-08-04 ULTRACEMCO 2013200 -23300 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 POWERGRID 41042700 16669800 41.0 2022-07-27 POWERGRID 47209500 6166800 13.0 2022-07-28 POWERGRID 49596300 2386800 5.0 2022-07-29 POWERGRID 51262200 1665900 3.0 2022-08-01 POWERGRID 51135300 -126900 -0.0 2022-08-02 POWERGRID 49148100 -1987200 -4.0 2022-08-03 POWERGRID 45613800 -3534300 -8.0 2022-08-04 POWERGRID 44012700 -1601100 -4.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HINDALCO 19059750 6948800 36.0 2022-07-27 HINDALCO 24745425 5685675 23.0 2022-07-28 HINDALCO 30913775 6168350 20.0 2022-07-29 HINDALCO 30877225 -36550 -0.0 2022-08-01 HINDALCO 28327325 -2549900 -9.0 2022-08-02 HINDALCO 26728800 -1598525 -6.0 2022-08-03 HINDALCO 26794375 65575 0.0 2022-08-04 HINDALCO 27211475 417100 2.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 NTPC 46358100 20565600 44.0 2022-07-27 NTPC 54326700 7968600 15.0 2022-07-28 NTPC 62933700 8607000 14.0 2022-07-29 NTPC 67641900 4708200 7.0 2022-08-01 NTPC 67225800 -416100 -1.0 2022-08-02 NTPC 67282800 57000 0.0 2022-08-03 NTPC 66353700 -929100 -1.0 2022-08-04 NTPC 62244000 -4109700 -7.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 NESTLEIND 256560 139920 55.0 2022-07-27 NESTLEIND 345520 88960 26.0 2022-07-28 NESTLEIND 395240 49720 13.0 2022-07-29 NESTLEIND 396520 1280 0.0 2022-08-01 NESTLEIND 388440 -8080 -2.0 2022-08-02 NESTLEIND 389280 840 0.0 2022-08-03 NESTLEIND 390400 1120 0.0 2022-08-04 NESTLEIND 392600 2200 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 GRASIM 10256200 2748350 27.0 2022-07-27 GRASIM 11433725 1177525 10.0 2022-07-28 GRASIM 11896850 463125 4.0 2022-07-29 GRASIM 11830350 -66500 -1.0 2022-08-01 GRASIM 11571000 -259350 -2.0 2022-08-02 GRASIM 11541075 -29925 -0.0 2022-08-03 GRASIM 11362475 -178600 -2.0 2022-08-04 GRASIM 11183400 -179075 -2.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ONGC 39350850 11992750 30.0 2022-07-27 ONGC 46153800 6802950 15.0 2022-07-28 ONGC 45957450 -196350 -0.0 2022-07-29 ONGC 45052700 -904750 -2.0 2022-08-01 ONGC 45480050 427350 1.0 2022-08-02 ONGC 46061400 581350 1.0 2022-08-03 ONGC 48263600 2202200 5.0 2022-08-04 ONGC 47682250 -581350 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 JSWSTEEL 41593500 9583650 23.0 2022-07-27 JSWSTEEL 45940500 4347000 9.0 2022-07-28 JSWSTEEL 46996200 1055700 2.0 2022-07-29 JSWSTEEL 45624600 -1371600 -3.0 2022-08-01 JSWSTEEL 45524700 -99900 -0.0 2022-08-02 JSWSTEEL 45318150 -206550 -0.0 2022-08-03 JSWSTEEL 45300600 -17550 -0.0 2022-08-04 JSWSTEEL 45576000 275400 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 HDFCLIFE 18042200 4721200 26.0 2022-07-27 HDFCLIFE 24107600 6065400 25.0 2022-07-28 HDFCLIFE 30991400 6883800 22.0 2022-07-29 HDFCLIFE 32433500 1442100 4.0 2022-08-01 HDFCLIFE 32698600 265100 1.0 2022-08-02 HDFCLIFE 32905400 206800 1.0 2022-08-03 HDFCLIFE 33080300 174900 1.0 2022-08-04 HDFCLIFE 33061600 -18700 -0.0 
Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 INDUSINDBK 22698000 4559400 20.0 2022-07-27 INDUSINDBK 25984800 3286800 13.0 2022-07-28 INDUSINDBK 27224100 1239300 5.0 2022-07-29 INDUSINDBK 26694000 -530100 -2.0 2022-08-01 INDUSINDBK 25869600 -824400 -3.0 2022-08-02 INDUSINDBK 26315100 445500 2.0 2022-08-03 INDUSINDBK 26329500 14400 0.0 2022-08-04 INDUSINDBK 26091000 -238500 -1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 SBILIFE 3927000 1581000 40.0 2022-07-27 SBILIFE 5461500 1534500 28.0 2022-07-28 SBILIFE 6110250 648750 11.0 2022-07-29 SBILIFE 7089000 978750 14.0 2022-08-01 SBILIFE 6964500 -124500 -2.0 2022-08-02 SBILIFE 6978750 14250 0.0 2022-08-03 SBILIFE 6897750 -81000 -1.0 2022-08-04 SBILIFE 6791250 -106500 -2.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 DRREDDY 1435000 704375 49.0 2022-07-27 DRREDDY 1753125 318125 18.0 2022-07-28 DRREDDY 2011375 258250 13.0 2022-07-29 DRREDDY 2667250 655875 25.0 2022-08-01 DRREDDY 2681625 14375 1.0 2022-08-02 DRREDDY 2735625 54000 2.0 2022-08-03 DRREDDY 2735000 -625 -0.0 2022-08-04 DRREDDY 2580000 -155000 -6.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 ADANIPORTS 66928750 7380000 11.0 2022-07-27 ADANIPORTS 73210000 6281250 9.0 2022-07-28 ADANIPORTS 76423750 3213750 4.0 2022-07-29 ADANIPORTS 76556250 132500 0.0 2022-08-01 ADANIPORTS 76650000 93750 0.0 2022-08-02 ADANIPORTS 76297500 -352500 -0.0 2022-08-03 ADANIPORTS 75886250 -411250 -1.0 2022-08-04 ADANIPORTS 75835000 -51250 -0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 DIVISLAB 1836000 766200 42.0 2022-07-27 DIVISLAB 2267700 431700 19.0 2022-07-28 DIVISLAB 2399550 131850 5.0 2022-07-29 DIVISLAB 2419350 19800 1.0 2022-08-01 DIVISLAB 2467800 48450 2.0 2022-08-02 DIVISLAB 2503950 36150 1.0 2022-08-03 DIVISLAB 2511900 7950 0.0 2022-08-04 DIVISLAB 2540700 28800 1.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 CIPLA 4988750 3086850 62.0 2022-07-27 CIPLA 6264050 1275300 20.0 2022-07-28 CIPLA 8102900 1838850 23.0 2022-07-29 CIPLA 9441900 1339000 14.0 2022-08-01 CIPLA 9381450 -60450 -1.0 2022-08-02 CIPLA 9222850 -158600 -2.0 2022-08-03 CIPLA 8828300 -394550 -4.0 2022-08-04 CIPLA 8856250 27950 0.0 Symbol TOTAL_OPN_INT OI_COMBINED %_CHANGE Date 2022-07-26 BAJAJ-AUTO 1353000 413500 31.0 2022-07-27 BAJAJ-AUTO 1749000 396000 23.0 2022-07-28 BAJAJ-AUTO 1951500 202500 10.0 2022-07-29 BAJAJ-AUTO 1852250 -99250 -5.0 2022-08-01 BAJAJ-AUTO 1924000 71750 4.0 2022-08-02 BAJAJ-AUTO 1961500 37500 2.0 2022-08-03 BAJAJ-AUTO 1940000 -21500 -1.0 2022-08-04 BAJAJ-AUTO 1964250 24250 1.0 </code></pre> <p>I have created a screener that shows stock data for the Indian markets; the output produced after running the code is shown above. How can I add a condition so that only stocks with a positive %_CHANGE (column name) for at least the previous 4 to 5 days in a row, followed by a sudden negative %_CHANGE, are printed after running the code? Right now both positive and negative data are shown for every stock; how can I filter this down to just the stocks that match my criteria?<br /> Please suggest code to solve this problem. I have shared my code and its output above. Thank you.</p>
<p>A few lines of code are sufficient to check the condition and add the stock's symbol.</p> <pre class="lang-py prettyprint-override"><code># ... your code before the loop
target_stocks = []

for stock in stock:
    # ... your code in the loop
    print(df)
    # check condition: positive %_CHANGE for the previous 4 (or 5) days in a row,
    # followed by a negative %_CHANGE on the last day
    cond_loc = ((df.loc[df.index[-5:-1], '%_CHANGE'].agg(min) &gt; 0)
                | (df.loc[df.index[-6:-1], '%_CHANGE'].agg(min) &gt; 0)) &amp; (df.loc[df.index[-1], '%_CHANGE'] &lt; 0)
    if(cond_loc):
        target_stocks.append(df['Symbol'][0])
</code></pre> <p>However, please note that your condition is rather restrictive. Often, the list of <code>target_stocks</code> will remain empty (such as today). An elementary explanation based on some probability theory provides an intuition for why that is.</p> <p>Assume that a stock moves up or down with ~50% probability (this assumption is fairly realistic, btw.) Then, for example, 4 consecutive up-movements followed by a single down-movement has a chance of <code>0.5^(4+1)=0.03125=3.125%</code>, assuming stochastic independence between the movements (another very realistic assumption w.r.t. stock price movements). That probability is a mere 3%. The chance that not a single one of your 13 (initially provided) stocks displays this behavior is, therefore, <code>(1 - 0.03125)^13</code>, approximately 66%, and thus quite likely. Lowering the number of days with up-movement to 2 provides only a single stock for today's data, namely <code>INFY</code>.</p> <p>For the bigger dataset of 50 stocks, this chance shrinks to around 20% though - in theory. Since stock price movements usually display a positive correlation (due to an overlap in industries/markets etc.) it will be more likely to observe comovement - if the overall market conditions are conducive.</p>
python|excel|pandas|numpy
0
376,426
34,889,599
Pandas df sum rows based on index column
<p>I have a Pandas df (see below) and I want to sum the values based on the index column. My index column contains string values. In the example below, I am trying to add Moving, Playing and Using Phone together as &quot;Active Time&quot; and sum their corresponding values, while keeping the other index values as they already are. Any suggestions on how I can handle this type of scenario?</p> <pre><code>Activity      AverageTime
Moving        0.000804367
Playing       0.001191772
Stationary    0.320701558
Using Phone   0.594305473
Unknown       0.060697612
Idle          0.022299218
</code></pre>
<p>I am sure that there must be a simpler way of doing this, but here is one possible solution. </p> <pre><code># Filters for active and inactive rows active_row_names = ['Moving','Playing','Using Phone'] active_filter = [row in active_row_names for row in df.index] inactive_filter = [not row for row in active_filter] active = df.loc[active_filter].sum() # Sum of 'active' rows as a Series active = pd.DataFrame(active).transpose() # as a dataframe, and fix orientation active.index=["active"] # Assign new index name # Keep the inactive rows as they are, and replace the active rows with the # newly defined row that is the sum of the previous active rows. df = df.loc[inactive_filter].append(active, ignore_index=False) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>Activity AverageTime Stationary 0.320702 Unknown 0.060698 Idle 0.022299 active 0.596302 </code></pre> <p>This will work even when only a subset of the active row names are present in the dataframe. </p>
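<p>For what it's worth, a shorter alternative (a sketch, assuming the activities to combine are exactly these three) is to rename the active rows to a single label and then aggregate:</p> <pre><code>mapping = {'Moving': 'active', 'Playing': 'active', 'Using Phone': 'active'}
df.rename(index=mapping).groupby(level=0, sort=False).sum()
</code></pre>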
python|pandas|indexing|dataframe
3
376,427
35,032,135
how to add hour to pandas dataframe column
<p>I have a pandas dataframe time column like following.</p> <pre><code> segments_data['time'] Out[1585]: 0 04:50:00 1 04:50:00 2 05:00:00 3 05:12:00 4 06:04:00 5 06:44:00 6 06:44:00 7 06:47:00 8 06:47:00 9 06:47:00 </code></pre> <p>I want to add 5 hours and 30 mins to above time column. I am doing following in python.</p> <pre><code>pd.DatetimeIndex(segments_data['time']) + pd.DateOffset(hours=5,minutes=30) </code></pre> <p>But it gives me an error.</p> <pre class="lang-python prettyprint-override"><code>TypeError: object of type 'datetime.time' has no len() </code></pre> <p>please help.</p>
<p>As of pandas 0.25.3 this is as simple as:</p> <pre><code>df[column] = df[column] + pd.Timedelta(hours=1)
</code></pre>
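<p>Note that this assumes the column holds datetime-like values. Since the column in the question holds <code>datetime.time</code> objects, one way (a sketch) is to convert it to timedeltas first:</p> <pre><code>t = pd.to_timedelta(segments_data['time'].astype(str))
segments_data['time'] = t + pd.Timedelta(hours=5, minutes=30)
</code></pre>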
python|pandas|datetime|dataframe
14
376,428
34,980,659
select rows in pandas DataFrame using comparisons against two columns
<p>I have a pandas dataframe:</p> <pre><code>df = pd.DataFrame({'one' : [1, 2, 3, 4] ,'two' : [5, 6, 7, 8]}) one two 0 1 5 1 2 6 2 3 7 3 4 8 </code></pre> <p>Column "one" and column "two" together comprise (x,y) coordinates </p> <p>Lets say I have a list of coordinates: <code>c = [(1,5), (2,6), (20,5)]</code></p> <p>Is there an elegant way of obtaining the rows in <code>df</code> with matching coordinates? In this case, given <code>c</code>, the matching rows would be 0 and 1</p> <p>Related question: <a href="https://stackoverflow.com/questions/13937022/using-pandas-to-select-rows-using-two-different-columns-from-dataframe?rq=1">Using pandas to select rows using two different columns from dataframe?</a></p> <p>And: <a href="https://stackoverflow.com/questions/27110224/selecting-rows-from-pandas-dataframe-using-two-columns">Selecting rows from pandas DataFrame using two columns</a></p>
<p>This approach using <code>pd.merge</code> should perform better than the iterative solutions.</p> <pre><code>import pandas as pd

df = pd.DataFrame({&quot;one&quot; : [1, 2, 3, 4], &quot;two&quot; : [5, 6, 7, 8]})
c = [(1, 5), (2, 6), (20, 5)]
df2 = pd.DataFrame(c, columns=[&quot;one&quot;, &quot;two&quot;])

pd.merge(df, df2, on=[&quot;one&quot;, &quot;two&quot;], how=&quot;inner&quot;)

   one  two
0    1    5
1    2    6
</code></pre>
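<p>If you also need the original row labels (0 and 1 in this example) rather than just the matching rows, keeping the index as a column before the merge works:</p> <pre><code>matches = pd.merge(df.reset_index(), df2, on=[&quot;one&quot;, &quot;two&quot;], how=&quot;inner&quot;)
matches[&quot;index&quot;].tolist()  # [0, 1]
</code></pre>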
python|pandas
1
376,429
34,929,717
Get list of column names for columns that contain negative values
<p>This is a simple question but I have found "slicing" <code>DataFrames</code> in <code>Pandas</code> frustrating, coming from <code>R</code>. </p> <p>I have a <code>DataFrame</code> <code>df</code> below with 7 columns:</p> <pre><code>df Out[77]: fld1 fld2 fld3 fld4 fld5 fld6 fld7 0 8 8 -1 2 1 7 4 1 6 6 1 7 5 -1 3 2 2 5 4 2 2 8 1 3 -1 -1 7 2 3 2 0 4 6 6 4 2 0 5 2 5 -1 5 7 1 5 8 2 6 7 1 -1 0 1 8 1 7 6 2 4 1 2 6 1 8 3 4 4 5 8 -1 4 9 4 4 3 7 7 4 5 </code></pre> <p>How do I slice <code>df</code> in such a way that it produces a list of columns that contain at least one negative number?</p>
<p>You can select them by building an appropriate Series and then using it to index into <code>df</code>:</p> <pre><code>&gt;&gt;&gt; df &lt; 0
    fld1   fld2   fld3   fld4   fld5   fld6   fld7
0  False  False   True  False  False  False  False
1  False  False  False  False  False   True  False
2  False  False  False  False  False  False  False
3   True   True  False  False  False  False  False
4  False  False  False  False  False  False  False
5   True  False  False  False  False  False  False
6  False  False   True  False  False  False  False
7  False  False  False  False  False  False  False
8  False  False  False  False  False   True  False
9  False  False  False  False  False  False  False

&gt;&gt;&gt; (df &lt; 0).any()
fld1     True
fld2     True
fld3     True
fld4    False
fld5    False
fld6     True
fld7    False
dtype: bool
</code></pre> <p>and then</p> <pre><code>&gt;&gt;&gt; df.columns[(df &lt; 0).any()]
Index(['fld1', 'fld2', 'fld3', 'fld6'], dtype='object')
</code></pre> <p>or</p> <pre><code>&gt;&gt;&gt; df.columns[(df &lt; 0).any()].tolist()
['fld1', 'fld2', 'fld3', 'fld6']
</code></pre> <p>depending on what data structure you want. We can also use this to index into <code>df</code> directly:</p> <pre><code>&gt;&gt;&gt; df.loc[:,(df &lt; 0).any()]
   fld1  fld2  fld3  fld6
0     8     8    -1     7
1     6     6     1    -1
2     2     5     4     8
3    -1    -1     7     2
4     6     6     4     5
5    -1     5     7     8
6     7     1    -1     8
7     6     2     4     6
8     3     4     4    -1
9     4     4     3     4
</code></pre>
python|python-2.7|pandas
7
376,430
35,281,237
I have a gaussian function with two independent discrete variables. How do I create a matrix of all possible values?
<p>Basically I have this:</p> <pre><code>from scipy.stats import norm
import pandas as pd

r = pd.Series([1, 2, 3])
k = pd.Series([0.2, 0.3, 0.4, 0.5])

x = 2
mean = x + k
variance = k

# I'm feeding the gaussian function two vectors.
# I'd like to get a matrix back of all possible combinations. Quickly.
values = norm.pdf(r, mean, variance)
</code></pre> <p>So I'm giving the function norm.pdf two vectors of data, and I'd like a (3x4) matrix returned to me that looks like:</p> <pre><code>values(1, 0.2) values(1, 0.3) values(1, 0.4) values(1, 0.5)
values(2, 0.2) ...
values(3, 0.2) ...
values(4, 0.2) ........... ........... values(4, 0.5)
</code></pre> <p>I know I could iterate over all items in all arrays, but that takes a lot of time, and I plan on scaling this up quite a bit. I'd like to take advantage of numpy's speed. I've tried vectorizing, but that fails. Any ideas? Thanks!!!</p>
<p>You can apply the <code>pdf</code> to each element of <code>r</code> and automatically put the results in a matrix using:</p> <pre><code>r.apply(lambda x: pd.Series(norm.pdf(x, mean, variance), index=k)) </code></pre> <p>If you return a <code>Series</code> from <code>apply</code> then the results are automatically unpacked into columns. Output:</p> <pre><code> 0.2 0.3 0.4 0.5 0 3.037941e-08 0.000111 0.002182 0.008864 1 1.209854e+00 0.806569 0.604927 0.483941 2 6.691511e-04 0.087406 0.323794 0.483941 </code></pre>
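<p>If you want to stay in pure NumPy for speed, <code>norm.pdf</code> also broadcasts, so reshaping <code>r</code> into a column gives the same (3, 4) matrix without <code>apply</code>:</p> <pre><code>values = norm.pdf(r.to_numpy()[:, None], mean.to_numpy(), variance.to_numpy())
values.shape  # (3, 4)
</code></pre>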
python|numpy|pandas|scipy
2
376,431
34,983,707
extract the first occurrence in numpy array following the nan
<p>I have the following array:</p> <pre><code>[1,1,1,1,1,1,nan,nan,nan,1,1,1,2,2,2,3,3] </code></pre> <p>I want to extract the first occurrence of <code>1</code> in this array following the nan's. I tried this:</p> <pre><code>numpy.argmax(arr &gt; numpy.nan) </code></pre>
<p><code>np.where(np.isnan(foo))[0][-1] + 1</code></p> <p>After the <code>np.where</code>, the <code>[0]</code> selects the array of indices of the elements containing NaN. Then <code>[-1]</code> gives you the last NaN index. Add one to that to find the index of the element after the last NaN.</p> <p>In your example array, it produces an index of 9.</p> <p>You can then use <code>np.where</code> again to find the first 1 from index 9 onwards.</p> <p>So altogether:</p> <pre><code>afternan = np.where(np.isnan(foo))[0][-1] + 1
np.where(foo[afternan:] == 1)[0][0]
</code></pre> <p>Note that the second result is relative to the slice; add <code>afternan</code> back if you want the position in the original array. Also note that your example appears to be a list. I am presuming you transform that to a numpy array.</p>
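<p>Putting it together as a runnable snippet on the array from the question:</p> <pre><code>import numpy as np

foo = np.array([1,1,1,1,1,1,np.nan,np.nan,np.nan,1,1,1,2,2,2,3,3])

afternan = np.where(np.isnan(foo))[0][-1] + 1               # 9
first_one = afternan + np.where(foo[afternan:] == 1)[0][0]  # 9, the first 1 after the NaNs
</code></pre>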
python|numpy|pandas
5
376,432
35,031,976
Rolling window or occurrences for 2D matrix in Numpy per row?
<p>Looking for occurrences of a pattern on each row of a matrix, I found that there was no clear solution for doing it in Python with good performance for a very big matrix.</p> <p>I have a matrix similar to</p> <pre><code>matrix = np.array([[0,1,1,0,1,0],
                   [0,1,1,0,1,0]])
print 'matrix: ', matrix
</code></pre> <p>where I want to check the occurrences of the patterns [0,0], [0,1], [1,0] and [1,1] on each row, considering overlapping. For the example given, where both rows are equal, the result is equal for each pattern:</p> <ul> <li>pattern[0,0] = [0,0]</li> <li>pattern[0,1] = [2,2]</li> <li>pattern[1,0] = [2,2]</li> <li>pattern[1,1] = [1,1]</li> </ul> <p>The matrix in this example is quite small, but I am looking for performance as I have a huge matrix. You can test with <code>matrix = numpy.random.randint(2, size=(100000,10))</code> or bigger, for example, to see the differences.</p> <p>First I thought of a possible answer converting rows to strings and looking for occurrences based on <a href="https://stackoverflow.com/questions/2970520/string-count-with-overlapping-occurrences">this answer</a> (<a href="https://stackoverflow.com/questions/2970520/string-count-with-overlapping-occurrences">string count with overlapping occurrences</a>):</p> <pre><code>def string_occurrences(matrix):
    print '\n===== String count with overlapping ====='
    numRow,numCol = np.shape(matrix)
    Ocur = np.zeros((numRow,4))
    for i in range(numRow):
        strList = ''.join(map(str,matrix[i,:]))
        Ocur[i,0] = occurrences(strList,'00')
        Ocur[i,1] = occurrences(strList,'01')
        Ocur[i,2] = occurrences(strList,'10')
        Ocur[i,3] = occurrences(strList,'11')
    return Ocur
</code></pre> <p>using the function <code>occurrences</code> of the answer</p> <pre><code>def occurrences(string, sub):
    count = start = 0
    while True:
        start = string.find(sub, start) + 1
        if start &gt; 0:
            count+=1
        else:
            return count
</code></pre> <p>but considering that the real array is huge, this solution is very, very slow as it uses for loops, strings, etc. So looking for a numpy solution, I used a trick to compare the values with a pattern and roll the matrix on <code>axis=1</code> to check all the occurrences. I call it a pseudo rolling window on 2D as the window is not square and the way of calculation is different. There are 2 options, where the second (Option 2) is faster because it avoids the extra calculation of <code>numpy.roll</code>:</p> <pre><code>def pseudo_rolling_window_Opt12(matrix):
    print '\n===== pseudo_rolling_window ====='
    numRow,numCol = np.shape(matrix)
    Ocur = np.zeros((numRow,4))
    index = 0
    for i in np.arange(2):
        for j in np.arange(2):
            #pattern = -9*np.ones(numCol)    # Option 1
            pattern = -9*np.ones(numCol+1)   # Option 2
            pattern[0] = i
            pattern[1] = j
            for idCol in range(numCol-1):
                #Ocur[:,index] += np.sum(np.roll(matrix,-idCol, axis=1) == pattern, axis=1) == 2  # Option 1: 219.398691893 seconds (for my real matrix)
                Ocur[:,index] += np.sum(matrix[:,idCol:] == pattern[:-(idCol+1)], axis=1) == 2    # Option 2: 80.929688930 seconds (for my real matrix)
            index += 1
    return Ocur
</code></pre> <p>Searching for other possibilities, I found the &quot;rolling window&quot;, which seemed to be a good answer for performance as it used numpy functions. Looking at <a href="https://stackoverflow.com/questions/6811183/rolling-window-for-1d-arrays-in-numpy">this answer</a> (<a href="https://stackoverflow.com/questions/6811183/rolling-window-for-1d-arrays-in-numpy">Rolling window for 1D arrays in Numpy?</a>) and the links in it, I checked the following function.
But really, I do not understand the output, as the calculations of the window do not match what I was expecting as a result.</p> <pre><code>def rolling_window(a, size):
    shape = a.shape[:-1] + (a.shape[-1] - size + 1, size)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
</code></pre> <p>Used as:</p> <pre><code>a = rolling_window(matrix, 2)
print a == np.array([0,1])
print np.all(rolling_window(matrix, 2) == [0,1], axis=1)
</code></pre> <p>Does someone know what is wrong in this last case? Or is there any possibility with better performance?</p>
<p>You are using the wrong axis of the numpy array. You should change the axis in np.all from 1 to 2. Using the following code:</p> <pre><code>a = rolling_window(matrix, 2) print np.all(rolling_window(matrix, 2) == [0,1], axis=2) </code></pre> <p>you get:</p> <pre><code>&gt;&gt;&gt;[[ True False False True False] [ True False False True False]] </code></pre> <p>So, in order to get the results you are looking for:</p> <pre><code>print np.sum(np.all(rolling_window(matrix, 2) == [0,1], axis=2),axis=1) &gt;&gt;&gt;[2 2] </code></pre>
python|numpy|matrix|window|find-occurrences
1
376,433
35,323,023
State Normalization of RNNs
<p>Perhaps a question better posed to Computer Science or Cross Validated?</p> <hr> <p>I'm beginning some work with LSTM on sequences of arbitrary length, and one problem I'm experiencing, and that I haven't seen addressed, is that my network seems to have developed a couple of parameters that grow linearly (perhaps as a measure of time?).</p> <p>The obvious issue with this is that the training data is bounded at a sequence of length <code>x</code>, and so the network grows this parameter reasonably up until timestep <code>x</code>. But after that, the network will eventually produce NaNs because values are getting too extreme.</p> <p>Has anyone read anything about the normalization or stabilization of states over time?</p> <p>Any suggestions would be much appreciated.</p>
<p>Idea #1: Gradient clipping is often applied in RNNs. Here is an example of implementation: <a href="https://stackoverflow.com/questions/36498127/how-to-effectively-apply-gradient-clipping-in-tensor-flow">How to effectively apply gradient clipping in tensor flow?</a></p> <p>Idea #2: Using <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">Recurrent Batch Normalization (arXiv)</a> (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization</a>)</p> <p>Here is a Tensorflow implementation of a batch normalized LSTM cell: <a href="https://github.com/OlavHN/bnlstm/blob/master/lstm.py" rel="nofollow noreferrer">https://github.com/OlavHN/bnlstm/blob/master/lstm.py</a></p> <p>This implementation is explained in the article here : <a href="http://olavnymoen.com/2016/07/07/rnn-batch-normalization" rel="nofollow noreferrer">Batch normalized LSTM for Tensorflow</a></p>
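<p>A minimal sketch of idea #1 in TensorFlow 1.x style, assuming a <code>loss</code> tensor already exists (the clip norm of 5.0 is an arbitrary choice):</p> <pre><code>import tensorflow as tf

optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)

# rescale all gradients so their global norm never exceeds 5.0
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped, variables))
</code></pre>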
python-2.7|neural-network|tensorflow|lstm|recurrent-neural-network
0
376,434
34,904,791
Python: Embed pandas plot in Tkinter GUI
<p>I'm writing an application using pandas DataFrames in Python 2.7. I need to plot columns of my DataFrames to a Tkinter window. I know that I can plot pandas DataFrame columns using the built-in plot method on the DataFrame or Series (which is just a wrapper of the matplotlib plot function), like so:</p> <pre><code>import pandas as pd
df = pd.DataFrame({'one':[2,4,6,8], 'two':[3,5,7,9]})
df.plot('one')
</code></pre> <p>Also, I figured out how to plot to a Tkinter GUI window using matplotlib:</p> <pre><code>import matplotlib
matplotlib.use('TkAgg')
from numpy import arange, sin, pi
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import pandas as pd
import Tkinter as tk
import ttk

root = tk.Tk()

#-------------------------------------------------------------------------------
lf = ttk.Labelframe(root, text='Plot Area')
lf.grid(row=0, column=0, sticky='nwes', padx=3, pady=3)

f = Figure(figsize=(5,4), dpi=100)
a = f.add_subplot(111)
t = arange(0.0,3.0,0.01)
s = sin(2*pi*t)
a.plot(t,s)

dataPlot = FigureCanvasTkAgg(f, master=lf)
dataPlot.show()
dataPlot.get_tk_widget().grid(row=0, column=0)
#-------------------------------------------------------------------------------

root.mainloop()
</code></pre> <p>This all works as expected. What I want to do is have the pandas.DataFrame.plot() output appear in a Tkinter window, e.g. in the Labelframe as above. I can't get this to work. If possible, I do not want to use matplotlib's plot tools, as the pandas plot tools suit my needs much better. Is there a way to combine pandas plot() with Tkinter? Basically, instead of these lines:</p> <pre><code>dataPlot = FigureCanvasTkAgg(f, master=lf)
dataPlot.show()
</code></pre> <p>I need this:</p> <pre><code>dataPlot = FigureCanvasTkAgg(df.plot('one'), master=lf)
dataPlot.show()
</code></pre>
<p><code>pandas</code> uses <code>matplotlib</code> for plotting. Most <code>pandas</code> plotting functionality takes an <code>ax</code> kwarg that specifies the axes object that will be used. There are a few <code>pandas</code> functions that can't be used this way, and will always create their own figure/axes using <code>pyplot</code>. (e.g. <code>scatter_matrix</code>)</p> <p>For a simple case based on your example, however:</p> <pre><code>import matplotlib import numpy as np from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg from matplotlib.figure import Figure import pandas as pd import Tkinter as tk import ttk root = tk.Tk() lf = ttk.Labelframe(root, text='Plot Area') lf.grid(row=0, column=0, sticky='nwes', padx=3, pady=3) t = np.arange(0.0,3.0,0.01) df = pd.DataFrame({'t':t, 's':np.sin(2*np.pi*t)}) fig = Figure(figsize=(5,4), dpi=100) ax = fig.add_subplot(111) df.plot(x='t', y='s', ax=ax) canvas = FigureCanvasTkAgg(fig, master=lf) canvas.show() canvas.get_tk_widget().grid(row=0, column=0) root.mainloop() </code></pre>
python|pandas|matplotlib|tkinter|embed
4
376,435
35,234,680
Weighted smoothing of a 1D array - Python
<p>I am quite new to Python and I have an array of some parameter detections; some of the values were detected incorrectly (like 4555555):</p> <pre><code>array = [1, 20, 55, 33, 4555555, 1]
</code></pre> <p>And I want to somehow smooth it. Right now I'm doing that with a weighted mean:</p> <pre><code>def smoothify(array):
    for i in range(1, len(array) - 2):
        array[i] = 0.7 * array[i] + 0.15 * (array[i - 1] + array[i + 1])
    return array
</code></pre> <p>But it works pretty badly. Of course, we can take a weighted mean of more than 3 elements, but that results in copy-pasting... I tried to find some native functions for that, but I failed.</p> <p>Could you please help me with that?</p> <p>P.S. Sorry if it's a noob question :(</p> <p>Thanks for your time,
Best regards,
Anna</p>
<p>For weighted smoothing purposes, you are basically looking to perform <a href="https://en.wikipedia.org/wiki/Convolution" rel="noreferrer"><code>convolution</code></a>. For our case, since we are dealing with 1D arrays, we can simply use NumPy's 1D convolution function, <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html" rel="noreferrer"><code>np.convolve</code></a>, for a vectorized solution. The only important thing to remember here is that the weights are to be reversed, given that convolution uses a reversed version of the kernel sliding across the main input array. Thus, the solution would be -</p> <pre><code>weights = [0.7,0.15,0.15]
out = np.convolve(array,np.array(weights)[::-1],'same')
</code></pre> <p>If you were looking to get the weighted mean, you could get it with <code>out/sum(weights)</code>. In our case, since the sum of the given weights is already <code>1</code>, the output would stay the same as <code>out</code>.</p> <p>Let's plot the output along with the input for some graphical debugging -</p> <pre><code># Input array and weights
array = [1, 20, 55, 33, 455, 200, 100, 20 ]
weights = [0.7,0.15,0.15]

out = np.convolve(array,np.array(weights)[::-1],'same')

x = np.arange(len(array))
f, axarr = plt.subplots(2, sharex=True, sharey=True)
axarr[0].plot(x,array)
axarr[0].set_title('Original and smoothened arrays')
axarr[1].plot(x,out)
</code></pre> <p>Output -</p> <p><a href="https://i.stack.imgur.com/WOAMp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WOAMp.png" alt="enter image description here"></a></p>
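<p>As a quick sketch of that normalization, if your weights did <em>not</em> sum to 1 you would simply divide by their sum:</p> <pre><code>weights = [0.7, 0.2, 0.2]    # sums to 1.1, so normalize
out = np.convolve(array, np.array(weights)[::-1], 'same') / sum(weights)
</code></pre>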
python|numpy|smoothing
5
376,436
35,076,837
Pandas Groupby - naming aggregate output column
<p>I have a <code>pandas</code> <code>groupby</code> command which looks like this:</p> <pre><code>df.groupby(['year', 'month'], as_index=False).agg({'users':sum}) </code></pre> <p>Is there a way I can name the <code>agg</code> output something other than 'users' during the groupby command? For example, what if I wanted the sum of users to be total_users? I could rename the column after the groupby is complete, but wonder if there is another way. </p>
<p>I like @Alexander's answer, but there is also <code>add_prefix</code>:</p> <pre><code>df.groupby(['year','month']).agg({'users':sum}).add_prefix('total_')
</code></pre>
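<p>To illustrate on a tiny made-up frame:</p> <pre><code>df = pd.DataFrame({'year': [2015, 2015, 2016],
                   'month': [1, 1, 2],
                   'users': [10, 20, 5]})
print(df.groupby(['year', 'month']).agg({'users': sum}).add_prefix('total_'))
#             total_users
# year month
# 2015 1               30
# 2016 2                5
</code></pre>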
python|pandas
4
376,437
35,222,827
How to separate pandas elements that contain lists
<p>Here is a sample of the data.</p> <pre><code>data['nxt'].head()

Out[47]:
  market_cap_by_available_supply                     price_btc                    price_usd           volume_usd
0      [1386136000000, 15091900]   [1386136000000, 1.3982e-05]   [1386136000000, 0.0150919]  [1386136000000, 0.0]
1      [1386222394000, 14936300]  [1386222394000, 1.31922e-05]   [1386222394000, 0.0149363]  [1386222394000, 0.0]
2      [1386308781000, 11237100]  [1386308781000, 1.12001e-05]   [1386308781000, 0.0112371]  [1386308781000, 0.0]
3       [1386395502000, 7031430]   [1386395502000, 9.6644e-06]  [1386395502000, 0.00703143]  [1386395502000, 0.0]
4       [1386481920000, 6292640]   [1386481920000, 8.82299e-06]  [1386481920000, 0.00629264]  [1386481920000, 0.0]
</code></pre> <p>I'm only interested in <code>market_cap_by_available_supply</code>:</p> <pre><code>data['nxt'].market_cap_by_available_supply
0    [1386136000000, 15091900]
1    [1386222394000, 14936300]
2    [1386308781000, 11237100]
3     [1386395502000, 7031430]
4     [1386481920000, 6292640]
</code></pre> <p>The purpose of this post is: how can we separate these into two columns, Timestamp and Marketcap?</p> <p>But my ultimate goal here is (using the code below) to create a new DataFrame containing marketcaps and timestamps for dashcoin, then sequentially add the other coins' marketcaps that correspond to DASH's timestamps. Any help with this would be great.</p> <pre><code>import numpy as np
from pandas import Series, DataFrame
import pandas as pd

coins = ['dashcoin','litecoin','dogecoin','nxt']
API = 'https://api.coinmarketcap.com/v1/datapoints/'
data = {}
for coin in coins:
    data[coin]=(pd.read_json(API + coin))

MC_data = pd.DataFrame(columns=[['Timestamp']+coins])
</code></pre> <p>EDIT:</p> <p>I'm using for loops as later I will be adding many 'coins'. @timmy, your method for extracting the timestamp and cap worked great, although I can't get the merge method to work.</p> <pre><code>data2 = {}
for coin in coins:
    #separates timestamp and marketcap from their respective list inside each element
    TS = data[coin].market_cap_by_available_supply.map(lambda r: r[0])
    cap = data[coin].market_cap_by_available_supply.map(lambda r: r[1])
    #creates DataFrame and stores timestamp and marketcap data into dictionary
    df = DataFrame(columns=['timestamp','cap'])
    df.timestamp = TS
    df.cap = cap
    data2[coin] = df

for coin in coins:
    data2['merged'] = data2['merged'].merge(data[coin], on='timestamp', how='outer')

KeyError: 'merged'
</code></pre>
<p>Regarding your first question, you can use the map function:</p> <pre><code># Just renaming for readability
cap_by_supply = data['nxt']['market_cap_by_available_supply']

# Exploding the market_cap_by_available_supply array into 2 columns
data['nxt']['timestamp'] = cap_by_supply.map(lambda r: r[0])
data['nxt']['cap'] = cap_by_supply.map(lambda r: r[1])
</code></pre> <p>Regarding your second question, if I understood correctly:<br> You can create a dataframe for each coin type. They should all be formatted the same way, with a <code>timestamp</code> column and a <code>cap</code> column. Then use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">merge function</a> to merge them on the timestamp column. You definitely want to remove all the columns that you don't want to merge, for example:</p> <pre><code># Drop unwanted columns, to repeat for each coin dataframe.
# Here we keep timestamp and cap only, for example.
data['nxt'] = data['nxt'].drop([
  'market_cap_by_available_supply',
  'price_btc',
  'price_usd',
  'volume_usd'
], axis=1)

# Merge all the coin frames into one
data['coins'] = data['dashcoin'].merge(data['litecoin'], on='timestamp', how='outer')
data['coins'] = data['coins'].merge(data['dogecoin'], on='timestamp', how='outer')
data['coins'] = data['coins'].merge(data['nxt'], on='timestamp', how='outer')
</code></pre> <p>You want to specify an <code>outer</code> join so that you keep all your records.</p> <p>Hope it helps; remarks and advice are welcome if anyone understood better or has a better solution.</p> <hr> <p><strong>EDIT</strong> regarding OP's EDIT:</p> <p>On the first iteration of your loop, there's nothing in <code>data2['merged']</code>, and you can't merge a dataframe with nothing. We just need to say that <code>data2['merged']</code> is, at first, just a copy of the first coin dataframe. Then we loop over <code>coins</code> starting from the second dataframe, because the first one is already in <code>data2['merged']</code>:</p> <pre><code>...
data2['merged'] = data2[coins[0]]
for coin in coins[1:]:
    data2['merged'] = data2['merged'].merge(data2[coin], on='timestamp', how='outer')
</code></pre>
python|pandas
0
376,438
35,093,496
problems in "pandas datetime convert to num"
<p>I use pandas to read a csv file to do some analysis. But the returned type is pandas.core.series.Series, which cannot be converted to num using the command matplotlib.dates.date2num. Below is my code:</p> <pre><code>import pandas as pd
import numpy as np
from bokeh.plotting import figure, output_file, show
import matplotlib.dates as mdates
import time
import matplotlib.pyplot as plt

AAPL = pd.read_csv(
    "http://ichart.yahoo.com/table.csv?s=AAPL&amp;a=0&amp;b=1&amp;c=2009&amp;d=0&amp;e=1&amp;f=2010",
    parse_dates=['Date']
)
x = AAPL['Date']
y = AAPL['Close']

print type(x)

x = mdates.date2num(x)
z4 = np.polyfit(x, y, 6)
p4 = np.poly1d(z4)

xx = np.linspace(x.min(), x.max(), 100)
dd = mdates.num2date(xx)

plt.plot(dd,p4(xx))
</code></pre> <p>The command <code>print type(x)</code> returns <code>pandas.core.series.Series</code>, and the line <code>x = mdates.date2num(x)</code> raises the error: <code>AttributeError: 'numpy.datetime64' object has no attribute 'toordinal'.</code></p> <p>Can anyone shed some light on this issue?</p>
<p>Use <code>x.astype(datetime)</code> to convert the <code>numpy.datetime64</code> values to Python <code>datetime</code> objects, which have the <code>toordinal</code> method that <code>date2num</code> relies on.</p> <pre><code>from datetime import datetime

x = mdates.date2num(x.astype(datetime))
z4 = np.polyfit(x, y, 6)
p4 = np.poly1d(z4)

xx = np.linspace(x.min(), x.max(), 100)
dd = mdates.num2date(xx)

plt.plot(dd,p4(xx))
</code></pre>
python|datetime|numpy|pandas|matplotlib
4
376,439
34,898,917
How to raise arrays with negative values to fractional power in Python?
<p>I have an array with negative values that has to be raised to a fractional power in Python. I need to obtain the real part of the complex number array generated by the operation.</p> <p><strong>MWE</strong></p> <pre><code>from __future__ import division
import numpy as np

a = -10
b = 2.5
n = 0.88

x = np.arange(5, 11, 1)
y = (a / (x - b)) ** (1 / n)
</code></pre> <p>I am using Python v2.7.6.</p>
<p>The issue is that NumPy does not promote float or integer dtypes to complex dtypes for this calculation.</p> <p>You have a float array base and a float exponent, so NumPy tries to compute the results using the "<em>put two float dtype objects in, get a float dtype object out</em>" loop. Negative bases trigger a warning, and the result is an array of NaN values.</p> <p><code>**</code> ends up using the same code as <code>np.power</code> when one of the operands is an array. You can see all of the low-level loops that can be used below. Note that you always get back an object with the same dtype as your input dtypes:</p> <pre><code>&gt;&gt;&gt; np.power.types
['bb-&gt;b',  # char
 'BB-&gt;B',  # unsigned char
 'hh-&gt;h',  # short
 ...
 'dd-&gt;d',  # compatible Python float
 'gg-&gt;g',  # compatible: C long float
 'FF-&gt;F',
 'DD-&gt;D',  # compatible: Python complex
 'GG-&gt;G',
 'OO-&gt;O']
</code></pre> <p>We want the calculation to run with the <code>'DD-&gt;D'</code> loop!</p> <p>The solution, as pointed out by others on this page, is to make sure that either the base or the exponent has a complex dtype. This forces NumPy to promote any lesser numeric dtypes to a complex dtype, and the computation uses the "<em>put two complex dtype objects in, get a complex dtype object out</em>" loop:</p> <pre><code>&gt;&gt;&gt; a = -10 + 0j  # ensures that a/(x - b) will have complex dtype
&gt;&gt;&gt; ((a / (x - b)) ** (1 / n))
array([-4.39566725-2.00743397j, -2.99895689-1.36957772j,
       -2.25394034-1.02934006j, -1.79435400-0.81945401j,
       -1.48410349-0.67776735j, -1.26136729-0.57604714j])
</code></pre> <p>If you just want the real parts, use <code>((a / (x - b)) ** (1 / n)).real</code>.</p>
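<p>As an alternative sketch, NumPy's scimath module (<code>np.emath</code>) does this promotion for you, switching to the complex domain whenever negative values meet a fractional power:</p> <pre><code>y = np.emath.power(a / (x - b), 1 / n)   # complex result for negative bases
y_real = y.real
</code></pre>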
python|arrays|numpy|complex-numbers
2
376,440
35,145,472
How to modify a pandas DataFrame in a function so that changes are seen by the caller?
<p>I find myself doing repetitive tasks to various pandas DataFrames, so I made a function to do the processing. How do I modify <code>df</code> in the function <code>process_df(df)</code> so that the caller sees all changes (without assigning a return value)?</p> <p>A simplified version of the code:</p> <pre><code>def process_df(df):
    df.columns = map(str.lower, df.columns)

df = pd.DataFrame({'A': [1], 'B': [2]})
process_df(df)
print df
</code></pre> <blockquote> <pre><code>   A  B
0  1  2
</code></pre> </blockquote> <p>EDIT new code:</p> <pre><code>def process_df(df):
    df = df.loc[:, 'A']

df = pd.DataFrame({'A': [1], 'B': [2]})
process_df(df)
print df
</code></pre> <blockquote> <pre><code>   A  B
0  1  2
</code></pre> </blockquote>
<p>Indexing a <code>DataFrame</code> using <code>ix</code>, <code>loc</code>, <code>iloc</code>, etc. returns a new object, and assigning that object to <code>df</code> inside the function only rebinds the local name; the caller's frame is left untouched. In order to modify the contents of the frame itself, you will need to use in-place transforms. For example,</p> <pre><code>def process_df(df):
    # drop all columns except for A
    df.drop(df.columns[df.columns != 'A'], axis=1, inplace=True)

df = DataFrame({'A':[1,2,3], 'B':[1,2,3]})
process_df(df)
</code></pre> <p>To change the order of columns, you can do something like this:</p> <pre><code>def process_df(df):
    # swap A and B
    df.columns = ['B', 'A']
    df[['B', 'A']] = df[['A', 'B']]
</code></pre>
python|pandas
7
376,441
35,230,524
Seaborn FacetGrid barplots and hue
<p>I have a DataFrame with the following structure:</p> <pre><code>interval segment variable value 4 02:00:00 Night weekdays 154.866667 5 02:30:00 Night weekdays 100.666667 6 03:00:00 Night weekdays 75.400000 7 03:30:00 Night weekdays 56.533333 8 04:00:00 Night weekdays 55.000000 9 04:30:00 Night weekends 53.733333 10 05:00:00 Night weekends 81.200000 11 05:30:00 Night weekends 125.933333 14 07:00:00 Morning weekdays 447.200000 15 07:30:00 Morning weekends 545.200000 16 08:00:00 Morning weekends 668.733333 17 08:30:00 Morning weekends 751.333333 18 09:00:00 Morning weekdays 793.800000 19 09:30:00 Morning weekdays 781.125000 23 11:30:00 Noon weekdays 776.375000 24 12:00:00 Noon weekdays 741.812500 25 12:30:00 Noon weekends 723.000000 26 13:00:00 Noon weekends 734.562500 27 13:30:00 Noon weekends 763.882353 28 14:00:00 Afternoon weekdays 810.411765 31 15:30:00 Afternoon weekdays 855.411765 32 16:00:00 Afternoon weekdays 824.882353 33 16:30:00 Afternoon weekends 768.529412 34 17:00:00 Afternoon weekends 790.812500 35 17:30:00 Afternoon weekends 809.125000 </code></pre> <p>I want to produce a faceted grid of bar plots, one for each variable (weekdays/weekends) and then colour the bars according to the "segment" column.</p> <p>Producing the two bar plots is very straightforward:</p> <pre><code>g = sns.FacetGrid(melted, col="variable") g.map(sns.barplot,'interval','value') </code></pre> <p>This produces (I know the xlabels are wrong, I can correct that): <a href="https://i.stack.imgur.com/zN5HH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zN5HH.png" alt="enter image description here"></a></p> <p>I'm stuck at colouring the bars according to "segment". Per the docs, all I need to do is add the variable when instantiating the FacetGrid and set some palette:</p> <pre><code>g = sns.FacetGrid(melted, col="variable",hue="segment",palette="Set3") g.map(sns.barplot,'interval','value') </code></pre> <p>But this produces: <a href="https://i.stack.imgur.com/1EiJB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1EiJB.png" alt="enter image description here"></a></p> <p>The bars are stacked in front of each other instead of being spread across the whole interval. What am I missing here?</p> <p>I've created a <a href="https://gist.github.com/anonymous/0730b697b6d189905b19" rel="noreferrer">gist</a> with the dataset. </p>
<p>Because <code>interval</code> is nested within the <code>x</code> variable (<code>segment</code>), you need to tell <code>barplot</code> about all of the possible levels of the <code>x</code> variable, so that they are not drawn on top of each other:</p> <pre><code>times = df.interval.unique() g = sns.FacetGrid(df, row="variable", hue="segment", palette="Set3", size=4, aspect=2) g.map(sns.barplot, 'interval', 'value', order=times) </code></pre> <p><a href="https://i.stack.imgur.com/gaqa5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gaqa5.png" alt="enter image description here"></a></p>
python|pandas|seaborn
23
376,442
31,051,593
Pandas: Filter rows by | (OR) – not mutually inclusive
<p>I'm looking for a way to filter <code>pandas</code> rows via alternatives in a string. I have many different terms I would like to search for, so it would be easier to put them in a few variables rather than list them every time I need to access them.</p> <p>I currently do:</p> <pre><code>df = df[df["A"].str.contains("BULL|BEAR|LONG|SHORT", case=False)]
</code></pre> <p>Instead, I want to do something like:</p> <pre><code>bull = "BULL|LONG"
bear = "BEAR|SHORT"
leverage = bull + bear

df = df[df["A"].find(leverage, case=False)]
</code></pre> <p>The problem is that this method only filters out <em>one</em> alternative from each variable. It will find <code>"BULL"</code> but not <code>"LONG"</code>, and it will find <code>"SHORT"</code> but not <code>"BEAR"</code>. It seems what it selects is arbitrary. Depending on if and where these terms occur in the file I'm reading from, results may differ.</p> <p>I am assuming this is because <code>|</code> functions as OR, which is mutually exclusive.</p> <p>If so, is there a mutually inclusive option? I would like to continue to use <em>strings</em> to do this. The reason is that I use <code>str.contains</code> in another place that relies on the same variables:</p> <pre><code>df.loc[df["A"].str.contains(bull, case=False), "B"]
df.loc[df["A"].str.contains(bear, case=False), "B"]
</code></pre>
<p>You needed to add an additional <code>'|'</code> to join your terms:</p> <pre><code>In [227]: df = pd.DataFrame({'A':['bull', 'bear', 'short', 'null', 'LONG']}) df Out[227]: A 0 bull 1 bear 2 short 3 null 4 LONG In [228]: bull = "BULL|LONG" bear = "BEAR|SHORT" leverage = bull + '|' + bear df = df[df["A"].str.contains(leverage, case=False)] df Out[228]: A 0 bull 1 bear 2 short 4 LONG </code></pre>
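<p>If you end up with many groups of terms, joining a list keeps the pattern correct. Note that plain string concatenation, <code>bull + bear</code>, would silently fuse <code>"LONG"</code> and <code>"BEAR"</code> into the single alternative <code>"LONGBEAR"</code>:</p> <pre><code>terms = [bull, bear]          # add more groups as needed
leverage = '|'.join(terms)    # 'BULL|LONG|BEAR|SHORT'
df = df[df["A"].str.contains(leverage, case=False)]
</code></pre>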
python|python-2.7|pandas
1
376,443
31,018,622
Pandas quantile function for dates?
<p>I have a dataframe of donation amounts and dates. I would like to see how long it took a certain proportion of the donations to come in (at what point did we have 25% of donations?, 75% ?). It looked like the Pandas quantile function would do what I want. However it seems to only want numbers, not dates. Is there a function that would do the same with dates ?</p> <p><a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.core.groupby.DataFrameGroupBy.quantile.html#pandas.core.groupby.DataFrameGroupBy.quantile" rel="nofollow">http://pandas.pydata.org/pandas-docs/dev/generated/pandas.core.groupby.DataFrameGroupBy.quantile.html#pandas.core.groupby.DataFrameGroupBy.quantile</a></p>
<p>Like Evert says, you can convert it temporarily to int64, compute, and convert back to datetime:</p> <pre><code>YOUR_DATAFRAME.YOUR_DATE.astype('int64').quantile([.25,.5,.75]).astype('datetime64[ns]')
</code></pre>
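<p>A small self-contained sketch of the round trip (the dates are made up):</p> <pre><code>import pandas as pd

donations = pd.Series(pd.date_range('2015-01-01', periods=5))
print(donations.astype('int64').quantile([.25,.5,.75]).astype('datetime64[ns]'))
# 0.25   2015-01-02
# 0.50   2015-01-03
# 0.75   2015-01-04
# dtype: datetime64[ns]
</code></pre>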
python|pandas
6
376,444
31,021,235
Convert pandas dataframe to list of tuples
<p>I have a sample dataframe as follows</p> <pre><code>&gt;&gt;&gt; df a b 0 1 2 1 3 4 </code></pre> <p>I want to convert this to a list of tuples. I tried using <code>itertuples()</code> for the same</p> <pre><code>&gt;&gt;&gt; list(df.T.itertuples()) [('a', 1, 3), ('b', 2, 4)] </code></pre> <p>But, I want the result to be of the format [('a', [1, 3]), ('b', [2, 4])] wherein the first value of the tuple is the column name and the second item in the tuple is a list of column values.</p> <p>I know this can be done by looping through all column names and manually constructing the list of tuples. I'm just wondering if there is a smarter way to do this.</p>
<p>You can zip the column names with the values as lists:</p> <pre><code>In [127]: list(zip(df.columns,df.T.values.tolist())) Out[127]: [('a', [1, 3]), ('b', [2, 4])] </code></pre>
python|numpy|pandas|dataframe
8
376,445
30,997,206
Python Replacing every imaginary value in array by random
<p>I have this array:</p> <pre><code>array([[ 0.01454911+0.j,         0.01392502+0.00095922j,
         0.00343284+0.00036535j, 0.00094982+0.0019255j ,
         0.00204887+0.0039264j , 0.00112154+0.00133549j, 0.00060697+0.j],
       [ 0.02179418+0.j,         0.01010125-0.00062646j,
         0.00086327+0.00495717j, 0.00204473-0.00584213j,
         0.00159394-0.00678094j, 0.00121372-0.0043044j , 0.00040639+0.j]])
</code></pre> <p>I need a solution which gives me the possibility to replace just the imaginary components by a random value generated by:</p> <pre><code>numpy.random.vonmises(mu, kappa, size=size)
</code></pre> <p>The resulting array needs to be in the same form as the first one.</p>
<pre><code>n_epochs = 2
n_freqs = 7

# shape-defining parameters for the array
data2 = np.zeros((n_epochs, n_freqs), dtype=complex)

for i in range(0,n_epochs):
    data2[i] = np.real(data[i]) + np.random.vonmises(mu, kappa) * complex(0,1)
</code></pre> <p>It gives the whole epoch the same imaginary value. Not exactly what I was asking for, but it solves my problem.</p>
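<p>For what the question originally asked, an independent random imaginary part for every element, a vectorized sketch (with <code>mu</code> and <code>kappa</code> assumed defined as above) would be:</p> <pre><code>data2 = np.real(data) + 1j * np.random.vonmises(mu, kappa, size=data.shape)
</code></pre>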
python|arrays|numpy|complex-numbers|replaceall
0
376,446
30,784,064
Trouble with pandas to_csv function
<p>The data that I'm working with has a tab delimiter. My issue is that when I try to put it to a csv / text file (with pandas), it displays the results like this:</p> <pre><code>Symbol Description
OM0S.SI sally
3LLS.SI walley
</code></pre> <p>I am trying to achieve a result like this (separated by a tab):</p> <pre><code>Symbol  Description
OM0S.SI sally
3LLS.SI walley
</code></pre> <p>Here is a cut of the code that I have:</p> <pre><code>nd = df.values[i]
test = pd.DataFrame(data=nd, index=None, columns=None)
test.to_csv('SGX[Defunct]' + '.txt', mode='a', sep='\t', header=0, index=0)
</code></pre> <p>I have a separator that says to delimit it by a tab, but it doesn't give me what I want.</p> <p>Please advise.</p>
<p>If you're always adding only one row <code>i</code> at a time (which probably wouldn't be the most pythonic way to solve this), you should send pandas a list of lists instead, like this:</p> <pre><code>nd = df.values[i]
test = pd.DataFrame(data=[nd], index=None, columns=None)
test.to_csv('SGX[Defunct]' + '.txt', mode='a', sep='\t', header=0, index=0)
</code></pre> <p>Edit: Also, you could write it in one line:</p> <pre><code>df[i:i+1].to_csv('SGX[Defunct]' + '.txt', mode='a', sep='\t', header=0, index=0)
</code></pre> <p>where <code>df[i:i+1]</code> will create a one-row dataframe slice.</p>
python|pandas
0
376,447
31,082,904
Downscaling part of image in Python
<p>I am trying to downscale part of an image, starting from an (x,y) coordinate and with a width and height of 500, to be resized to 40x40. To do so, I am averaging the surrounding pixels into one (the simplest way I could find), but the result is weird.</p> <p>The original image is a 512x512 PNG.</p> <p>Original Image:</p> <p><img src="https://i.stack.imgur.com/0TvxO.png" alt="enter image description here"></p> <p>Expected Result:</p> <p><img src="https://i.stack.imgur.com/p5t8F.png" alt="enter image description here"></p> <p>Actual Result:</p> <p><img src="https://i.stack.imgur.com/kXO3C.png" alt="enter image description here"></p> <p>Below is the code snippet:</p> <pre><code>from PIL import Image
import numpy as np

defined_size = 40

img = Image.open("lena.png")
pix = np.array(img)
changed_img = shrink(pix, 0, 0, 500)
changed_img = np.array(changed_img)
resized = Image.fromarray(changed_img, 'RGB')

def shrink(img, x, y, size):
    result = [[0 for width in xrange(defined_size)] for height in xrange(defined_size)]
    scale_factor = size/defined_size
    for i in xrange(defined_size):
        for j in xrange(defined_size):
            temp = np.array([0,0,0])
            for t in xrange(scale_factor):
                print img[i*scale_factor+x, j*scale_factor+y]
                temp = np.add(temp, img[i*scale_factor+x, j*scale_factor+y])
            result[i][j] = np.divide(temp, scale_factor)
            print result[i][j]
    return result
</code></pre>
<p>There are several problems with your code. Let's tackle the issues one at a time:</p> <h1>Issue #1 - <code>(x,y)</code> are useless in your <code>shrink</code> definition</h1> <p>I see where you're going with <code>(x,y)</code>. You're using this to traverse over each of the larger blocks, summing all of the pixels and dividing by the total number of entries to get the average RGB pixel. You are setting this to <code>(0,0)</code> for every single block in the image, and so you aren't actually getting all of the pixels in the block. You'll need to use a pair of <code>for</code> loops to allow you to traverse over each block. I've also taken the liberty of removing the <code>(x,y)</code> inputs, and putting in the desired size as input into your function instead.</p> <h1>Issue #2 - Output is not three-channel</h1> <p>You initialized a two-dimensional list, yet <strong>the image has three channels</strong>. Because you're using <code>numpy</code> to do the computation for you... why don't you just declare an array of <code>zeros</code> via <code>np.zeros</code> first? This way, you don't have to convert back to a <code>numpy</code> array when you're finished. I've changed the declaration of your output image so that it's a <code>numpy</code> array of type <code>uint8</code>. This casting is important!</p> <h1>Issue #3 - Not iterating over each subsampled block properly</h1> <p>As we talked about with Issue #1, you aren't collecting the pixels properly per subsampled block. I inserted another pair of <code>for</code> loops to do that for you... enumerated as <code>x</code> and <code>y</code>. I've also removed <code>np.add</code> and used <code>+</code> to do the operations for you because it's easier to read.</p> <h1>Issue #4 - Dividing by the wrong factor</h1> <p>Because it's an average of a subsample you are trying to calculate, you must divide by the total number of values within each block. That is equal to <code>scale_factor*scale_factor</code>. You were only dividing by <code>scale_factor</code>.</p> <hr> <p>I've also taken the liberty of showing the resized image and saving it to file. Without further ado, here's the corrected code. I've also placed your testing code in a <code>__main__</code> block to make it easier for testing:</p> <pre><code>from PIL import Image
import numpy as np

def shrink(img, size, defined_size): # Change - Issue #1
    result = np.zeros((defined_size, defined_size, 3), dtype=np.uint8) # Change - Issue #2
    scale_factor = size/defined_size
    for i in xrange(defined_size):
        for j in xrange(defined_size):
            temp = np.array([0,0,0])
            for x in xrange(scale_factor): # Change - Issue #3
                for y in xrange(scale_factor): # Change - Issue #3
                    temp += img[i*scale_factor + x, j*scale_factor + y] # Change - Issue #3
            result[i,j] = temp / (scale_factor*scale_factor) # Change
    return result

if __name__ == '__main__':
    img = Image.open("lena.png")
    pix = np.array(img)
    changed_img = shrink(pix, 512, 40) # Change - Issue #1
    resized = Image.fromarray(changed_img, 'RGB')
    resized.show() # Change
    resized.save("lena_resize.png")
</code></pre> <p>... and we get this resized image:</p> <p><img src="https://i.stack.imgur.com/22Ugj.png" alt="enter image description here"></p>
python|image-processing|numpy|resize|python-imaging-library
2
376,448
67,226,537
Pandas Plot Bar Fixed Range Missing Values
<p>I'm plotting a bar chart with data that I have in a pandas.DataFrame. My code is as follows:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plot
from datetime import datetime

start_year = 2000
date_range = [ i + start_year for i in range(datetime.today().year - start_year)]

data = pd.DataFrame([
    [2015, 100],
    [2016, 110],
    [2017, 105],
    [2018, 109],
    [2019, 110],
    [2020, 116],
    [2021, 113]
], columns=[&quot;year&quot;, &quot;value&quot;])

chart = data.plot.bar(
    x=&quot;year&quot;, y=&quot;value&quot;,
    # xticks=date_range
    # , xlim=[date_range[0], date_range[-1]]
)
plot.show()
</code></pre> <p>The resulting plot is:</p> <p><a href="https://i.stack.imgur.com/fSJ06.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fSJ06.jpg" alt="Bar plot" /></a></p> <p>I have to plot several of these, for which data may start from 2000 and finish in 2010, then another dataframe that has data that starts in 2010 and ends in the current year.</p> <p>In order to make these plots visually comparable, I would like them all to start at the same year, 2000 in this example, and finish in the current year. If no value is present for a given year, then 0 can be used. In this case, as an example, I've used the year 2000, but it could also start from the year 2005, 2006 or 2010.</p> <p>How can I achieve what I'm looking for? I've tried setting xticks and xlim, but with xticks the data gets skewed all towards one side, as if there were thousands of values in between. It is strange, since I'm using int values.</p> <p>Thanks</p>
<p>You can prepare your dataframe so that it has all the years you want: <strong>right</strong>-<code>merge()</code> it onto a dataframe that has all the required years.</p> <pre><code>data = pd.DataFrame([
    [2015, 100],
    [2016, 110],
    [2017, 105],
    [2018, 109],
    [2019, 110],
    [2020, 116],
    [2021, 113]
], columns=[&quot;year&quot;, &quot;value&quot;])

# NB the end of range is exclusive, hence endyear + 1
data.merge(pd.DataFrame({&quot;year&quot;:range(2010,2021+1)}), on=&quot;year&quot;, how=&quot;right&quot;).plot(kind=&quot;bar&quot;, x=&quot;year&quot;, y=&quot;value&quot;)
</code></pre> <p><a href="https://i.stack.imgur.com/xokqM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xokqM.png" alt="enter image description here" /></a></p>
python-3.x|pandas|matplotlib|bar-chart
0
376,449
67,561,709
Colab: Cannot run any cell after changing the runtime to local
<p>I'm new to Tensorflow and I just started using Google Colab a week ago. I want to run it locally so that it can use my own CPU, to avoid the <a href="https://research.google.com/colaboratory/faq.html#resource-limits" rel="nofollow noreferrer">Colab Resource Restrictions</a>, so I followed the <a href="https://research.google.com/colaboratory/local-runtimes.html" rel="nofollow noreferrer">official guide on running Colab locally</a>. After I was all set and tried to run a cell in Colab, it popped up this error:</p> <p><a href="https://i.stack.imgur.com/Ak1Bs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ak1Bs.png" alt="enter image description here" /></a></p> <p>And then it immediately changed to this -</p> <p><a href="https://i.stack.imgur.com/Nqjio.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nqjio.png" alt="Error after running a cell" /></a></p> <p>And here are the full runtime logs it gave me:</p> <pre><code>Could not fetch resource at : 404 Not Found
FetchError: Could not fetch resource at : 404 Not Found
    at ny.fr [as constructor] (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:684:397)
    at new ny (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:1299:1093)
    at za.program_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:5067:158)
    at Ba (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:19:336)
    at za.next_ (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:17:503)
    at Ca.next (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:20:206)
    at f (https://colab.research.google.com/v2/external/external_polymer_binary.js?vrz=colab-20210513-060327-RC00_373550014:61:101)
</code></pre> <p>I've been struggling with this error all day, and I have also reinstalled and upgraded Python, TensorFlow and jupyter-notebook a few times, so PLEASE HELP ME!!! If anyone knows how to change the runtime back to the browser, that would also be helpful; I'm kind of stuck here...</p>
<p>Since you mentioned you are new to Colab, just double-checking the basics: did you try the <a href="https://i.stack.imgur.com/jQYVT.png" rel="nofollow noreferrer">Connect button</a> (in the top right area of Colab) and choose &quot;Connect to local runtime&quot;?</p> <p>Also note that sometimes large datasets cause an out-of-memory error, so that might be one cause too.</p>
tensorflow|jupyter-notebook|google-colaboratory|tensorflow2.0|tf.keras
0
376,450
67,275,865
Querying a list object from API and returning it into dataframe - issues with format
<p>I have the below script that returns data in list format, one quote per symbol <code>i</code>. I set up an empty list, then query with the API function get_kline_data and pass each output into my klines_list with the .extend function:</p> <pre class="lang-py prettyprint-override"><code>klines_list = []
a = [&quot;REQ-ETH&quot;,&quot;REQ-BTC&quot;,&quot;XLM-BTC&quot;]

for i in a:
    klines = client.get_kline_data(i, '5min', 1619317366, 1619317606)
    klines_list.extend([i,klines])
klines_list
</code></pre> <p>klines_list then returns data in this format:</p> <pre class="lang-py prettyprint-override"><code>['REQ-ETH',
 [['1619317500',
   '0.0000491',
   '0.0000491',
   '0.0000491',
   '0.0000491',
   '5.1147',
   '0.00025113177']],
 'REQ-BTC',
 [['1619317500',
   '0.00000219',
   '0.00000219',
   '0.00000219',
   '0.00000219',
   '19.8044',
   '0.000043371636']],
 'XLM-BTC',
 [['1619317500',
   '0.00000863',
   '0.00000861',
   '0.00000863',
   '0.00000861',
   '653.5693',
   '0.005629652673']]]
</code></pre> <p>I then try to convert it into a dataframe:</p> <pre class="lang-py prettyprint-override"><code>import pandas as py
df = py.DataFrame(klines_list)
</code></pre> <p>And this is the result:</p> <pre class="lang-py prettyprint-override"><code>                                                   0
0                                            REQ-ETH
1  [[1619317500, 0.0000491, 0.0000491, 0.0000491,...
2                                            REQ-BTC
3  [[1619317500, 0.00000219, 0.00000219, 0.000002...
4                                            XLM-BTC
5  [[1619317500, 0.00000863, 0.00000861, 0.000008...
</code></pre> <p>The structure of the DF is incorrect, and it seems to be due to the way I have put my list together.</p> <p>I would like the quantitative data in a column corresponding to the correct entry in list a, not in rows. Also, the ticker data from list a (&quot;REQ-ETH&quot;, &quot;REQ-BTC&quot;, etc.) should be in a separate column. What would be a good way to go about restructuring this?</p> <p>Edit: @Ynjxsjmh This is the output when following the suggestion below of appending to a dictionary within the for loop:</p> <pre class="lang-py prettyprint-override"><code>                                             REQ-ETH                                            REQ-BTC                                            XLM-BTC
0  [1619317500, 0.0000491, 0.0000491, 0.0000491, ...                                                NaN                                                NaN
1                                                NaN  [1619317500, 0.00000219, 0.00000219, 0.0000021...                                                NaN
2                                                NaN                                                NaN  [1619317500, 0.00000863, 0.00000861, 0.0000086...
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer"><code>pandas.DataFrame()</code></a> can accept a dict. It will construct the dict key as column header, dict value as column values.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd a = [&quot;REQ-ETH&quot;,&quot;REQ-BTC&quot;,&quot;XLM-BTC&quot;] klines_data = {} for i in a: klines = client.get_kline_data(i, '5min', 1619317366, 1619317606) klines_data[i] = klines[0] # ^ # | # Add a key to klines_data df = pd.DataFrame(klines_data) </code></pre> <pre><code>print(df) REQ-ETH REQ-BTC XLM-BTC 0 1619317500 1619317500 1619317500 1 0.0000491 0.00000219 0.00000863 2 0.0000491 0.00000219 0.00000861 3 0.0000491 0.00000219 0.00000863 4 0.0000491 0.00000219 0.00000861 5 5.1147 19.8044 653.5693 6 0.00025113177 0.000043371636 0.005629652673 </code></pre> <p>If the length of <code>klines</code> is not equal, you can use</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame.from_dict(klines_data, orient='index').T </code></pre>
python|pandas|list|dataframe|append
1
376,451
67,470,057
Get an element from column and if equal to something put it in another column in python
<p>Let's say I have a dataframe like this:</p> <pre><code>                            full_path
0   C:\Users\User\Desktop\Test1\1.txt
1     C:\Users\User\Desktop\ABC\1.txt
2   C:\Users\User\Desktop\Test2\1.txt
3   C:\Users\User\Desktop\Test1\1.txt
4    C:\Users\User\Desktop\ABCD\1.txt
5   C:\Users\User\Desktop\Test2\1.txt
</code></pre> <p>I want to check if the 5th element of the path is equal to Test1 or Test2 and create a column like below:</p> <pre><code>                            full_path  folder
0   C:\Users\User\Desktop\Test1\1.txt  Test1
1     C:\Users\User\Desktop\ABC\1.txt
2   C:\Users\User\Desktop\Test2\1.txt  Test2
3   C:\Users\User\Desktop\Test1\1.txt  Test1
4    C:\Users\User\Desktop\ABCD\1.txt
5   C:\Users\User\Desktop\Test2\1.txt  Test2
</code></pre> <p>I tried the command <code>df['folder']=df[&quot;full_path&quot;].str.rsplit(&quot;\\&quot;).str[4]</code>, but it gives me this output:</p> <pre><code>                            full_path  folder
0   C:\Users\User\Desktop\Test1\1.txt  Test1
1     C:\Users\User\Desktop\ABC\1.txt  ABC
2   C:\Users\User\Desktop\Test2\1.txt  Test2
3   C:\Users\User\Desktop\Test1\1.txt  Test1
4    C:\Users\User\Desktop\ABCD\1.txt  ABCD
5   C:\Users\User\Desktop\Test2\1.txt  Test2
</code></pre> <p>I don't want folders other than Test1 and Test2 to be shown in the folder column.</p>
<p>You can use Numpy where:</p> <pre><code>import numpy as np df['folder'] = np.where(df['full_path'].str.contains('Test'), df['full_path'].str.rsplit('\\').str[4], np.nan ) </code></pre> <p>Output:</p> <pre><code> full_path folder 0 C:\Users\User\Desktop\Test1\1.txt Test1 1 C:\Users\User\Desktop\ABC\1.txt NaN 2 C:\Users\User\Desktop\Test2\1.txt Test2 3 C:\Users\User\Desktop\Test1\1.txt Test1 4 C:\Users\User\Desktop\ABCD\1.txt NaN 5 C:\Users\User\Desktop\Test2\1.txt Test2 </code></pre>
python|python-3.x|pandas|list|dataframe
1
376,452
67,464,023
I wrote a KMeans class but the results look strange, what am I doing wrong?
<p>I was following along with Joel Grus' &quot;Data Science from Scratch&quot; and, using it, wrote my own KMeans code (swapping Joel's functions for numpy ones, etc.). The code below converges and finds centroids, but they are almost always in the center of the feature space. Upon further investigation, it looks like the while loop exits on the second iteration (i.e. no changes are detected). I can't figure out why, though. What have I done wrong?</p> <pre><code>import numpy as np
import seaborn as sns
from sklearn.datasets import make_blobs

features, true_labels = make_blobs(
    n_samples=200,
    centers=3,
    cluster_std=2.75,
    random_state=42
)

class KMeans:
    def __init__(self, k):
        self.k = k
        self.means = None
        self.assignments = None

    def classify(self, feat, centroids):
        distances = [np.linalg.norm(feat - cent) for cent in centroids]
        label = np.argwhere(distances == min(distances))
        return label

    def cluster_means(self, features, assignments, k):
        clusters = [features[assignments == cluster,:] for cluster in range(k)]
        cluster_means = np.array([np.mean(clusters[i], axis=0) for i in range(k)])
        return cluster_means

    def train(self, features):
        self.assignments = np.random.randint(low=0, high=self.k, size=len(features))
        while True:
            # find the centroids of the k classes
            self.means = self.cluster_means(features, self.assignments, self.k)
            new_assignments = [self.classify(feat, self.means) for feat in features]

            # get number of changes
            nChanges = len([x1 for x1, x2 in zip(self.assignments, new_assignments) if x1 != x2])

            if nChanges == 0:
                return

            self.assignments = new_assignments
            #self.means = self.cluster_means(features, self.assignments, self.k)
            print(f&quot;changed: {nChanges} / {len(features)}&quot;)

km = KMeans(k=3)
km.train(features)
km.means
</code></pre> <p><a href="https://i.stack.imgur.com/iOrN0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOrN0.png" alt="Centroids don't move from the centre" /></a></p>
<p>You know, sometimes you just need to post on here to end up answering your own question while you write it!</p> <p>Anyway, the reason it was breaking on the second iteration was that the list comprehensions needed to be converted into numpy arrays, both to be able to use <code>np.argmin()</code> in <code>classify()</code> and to properly count the number of changes for <code>nChanges</code>.</p> <p>Here's a working solution:</p> <pre><code>class KMeans:
    def __init__(self, k):
        self.k = k
        self.means = None

    def classify(self, feat, centroids):
        distances = np.array([np.linalg.norm(feat - cent) for cent in centroids])
        label = np.argmin(distances)
        return label

    def cluster_means(self, features, assignments, k):
        clusters = [features[assignments == cluster,:] for cluster in range(k)]
        cluster_means = np.array([np.mean(clusters[i], axis=0) for i in range(k)])
        return cluster_means

    def train(self, features):
        assignments = np.random.randint(low=0, high=self.k, size=len(features))
        while True:
            # find the centroids of the k classes
            self.means = self.cluster_means(features, assignments, self.k)
            new_assignments = np.array([self.classify(feat, self.means) for feat in features])

            # get number of changes
            nChanges = len([x1 for x1, x2 in zip(assignments, new_assignments) if x1 != x2])

            if nChanges == 0:
                return
            else:
                assignments = new_assignments
                self.means = self.cluster_means(features, assignments, self.k)
                print(f&quot;changed: {nChanges} / {len(features)}&quot;)

km = KMeans(k=3)
km.train(features)
km.means

sns.scatterplot(x=km.means[:,0], y=km.means[:,1], color='red')
sns.scatterplot(x=features[:,0], y=features[:,1], hue=true_labels)
</code></pre> <p><a href="https://i.stack.imgur.com/wN7Uw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wN7Uw.png" alt="Centroids now where they should be!" /></a></p>
python|numpy|k-means
0
376,453
67,391,645
CannedClassifier at Tensorflow Lattice with more than 2 classes
<p>Could someone help me with TensorFlow Lattice? Here's my problem: I want to classify one label with 18 features. If I use a label with two classes (e.g. 0 and 1), everything is fine. But my label has 30 classes, and I get an error message that only one label is allowed (I use only one label, and if I use the same structure of the DLN with a two-class label I don't get this error message). Has anyone experience with that? Thanks for your help!</p>
<p>Tensorflow lattice is not the best tool for classification problems. It's most commonly used for regression problems where you want to enforce monotonicity constraints. That's probably the reason why it does not allow more than one label.</p> <p>It would be helpful if you could explain why you want to use tensorflow lattice here in particular. Maybe it's possible to just use a neural net instead?</p>
tensorflow|classification|lattice
0
376,454
67,318,704
How to sort numpy arrays of a list based on the averages of columns
<p>I have a list of numpy arrays and want to first sort each array and then sort the whole list of arrays. The first step is clear to me. Here is my data:</p> <pre><code>unsorted=[np.array([[2.5, 6., 5.1],\
                    [3.5, 7., 0.1],\
                    [2.5, 7., 0.],\
                    [3.5, 6., 0.1]]),\
          np.array([[2.5, 6., 1.],\
                    [1.5, 7., 5.2]]),\
          np.array([[1.5, 7., 0.2],\
                    [2.5, 7., 1.2]])]
</code></pre> <p>First I sort each array based on the first and second columns using the following code:</p> <pre><code>sorted_arr=[]
for i in unsorted:
    i=i[np.lexsort((i[:,0],i[:,1]))]
    sorted_arr.append(i)
</code></pre> <p>Then I also want to sort the arrays within the list, based on the averages of the first and second columns. For the first array, the averages of the first and second columns are <code>3.</code> and <code>6.5</code>; for the second array they are <code>2.</code> and <code>6.5</code>; and for the last array, <code>2.</code> and <code>7</code>. I want to sort the arrays first based on the average of the second column and then the first column. I mean, I want my final result to be:</p> <pre><code>sorted_arr=[array([[2.5, 6. , 1. ],
                   [1.5, 7. , 5.2]]),
            array([[1.5, 7. , 0.2],
                   [2.5, 7. , 1.2]]),
            array([[2.5, 6. , 5.1],
                   [3.5, 6. , 0.1],
                   [2.5, 7. , 0. ],
                   [3.5, 7. , 0.1]])]
</code></pre> <p>I do appreciate any help to do so in Python.</p>
<p>From your <code>sorted_arr</code>, you can do:</p> <pre><code>sorted(sorted_arr, key=lambda x: tuple(x[:,:2].mean(0))) </code></pre>
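<p>Spelling out the key for the example data: each array is reduced to the tuple of its first two column means, and Python's tuple ordering then sorts by the first mean, breaking ties with the second:</p> <pre><code>print([tuple(x[:, :2].mean(0)) for x in sorted_arr])
# [(3.0, 6.5), (2.0, 6.5), (2.0, 7.0)]
</code></pre>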
python|arrays|numpy|sorting
1
376,455
67,198,159
Not showing the total number of size in each bar in its graph in Python?
<p>I have a problem with showing the numbers above each bar in the graph.</p> <p>Here is my dataframe:</p> <pre><code>    ID                 SEX                    count
6   Secret Identity    Male Characters        1751
3   Public Identity    Male Characters        1662
1   Public Identity    Female Characters      765
4   Secret Identity    Female Characters      625
2   Public Identity    Genderless Characters  11
0   Identity Unknown   Male Characters        9
5   Secret Identity    Genderless Characters  5
</code></pre> <p>Here is my code:</p> <pre><code>def show_values_on_bars(axs, h_v=&quot;v&quot;, space= 0.4):
    def _show_on_single_plot(ax):
        if h_v == &quot;v&quot;:
            for p in ax.patches:
                _x = p.get_x() + p.get_width() / 2
                _y = p.get_y() + p.get_height()
                value = int(p.get_height())
                ax.text(_x, _y, value, ha=&quot;center&quot;, fontsize=18)
        elif h_v == &quot;h&quot;:
            for p in ax.patches:
                _x = p.get_x() + p.get_width()
                _y = p.get_y() + p.get_height() - float(space)
                value = int(p.get_width())
                ax.text(_x, _y, value, ha=&quot;left&quot;, fontsize=18)

    if isinstance(axs, np.ndarray):
        for idx, ax in np.ndenumerate(axs):
            _show_on_single_plot(ax)
    else:
        _show_on_single_plot(axs)
</code></pre> <p>When I run the code snippet below, I get an error message:</p> <pre><code>graph_3 = sns.barplot(data = dc_df, x = &quot;ID&quot; , y = &quot;count&quot;, ax=a[1,0], hue='SEX')
show_values_on_bars(graph_3, &quot;v&quot;, 0.3)
</code></pre> <p>The error: <code>value = int(p.get_height()) -&gt; ValueError: cannot convert float NaN to integer</code></p> <p>How can I fix it?</p>
<p>Kindly check it and assign <code>value</code> to 0 if <code>p.get_height()</code> is NaN:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def show_values_on_bars(axs, h_v=&quot;v&quot;, space= 0.4):
    def _show_on_single_plot(ax):
        if h_v == &quot;v&quot;:
            for p in ax.patches:
                value = 0
                if not np.isnan(p.get_height()):
                    value = int(p.get_height())
                _x = p.get_x() + p.get_width() / 2
                _y = p.get_y() + value
                ax.text(_x, _y, value, ha=&quot;center&quot;, fontsize=18)
</code></pre>
python|pandas|axes
1
376,456
67,439,268
Comparing and getting the indexes of 2 arrays
<p>What would be a numpy function that goes through array <code>a</code> and outputs the indexes where the values of array <code>b</code> are located?</p> <p>Code:</p> <pre><code>a = np.array([&quot;BTCUSD&quot;, &quot;ETHUSDC&quot;, &quot;BBB&quot;, &quot;ETHUSD&quot;, &quot;cow&quot;, &quot;head&quot;])
b = np.array([&quot;BTCUSD&quot;, &quot;ETHUSD&quot;])
</code></pre> <p>Expected output:</p> <pre><code>Indexes: 0, 3
</code></pre>
<pre><code>[list(a).index(i) for i in a if i in b] </code></pre>
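<p>Since the question asks for a numpy function, here is a vectorized sketch using <code>np.isin</code> (available in NumPy &gt;= 1.13):</p> <pre><code>import numpy as np

idx = np.nonzero(np.isin(a, b))[0]
print(idx)   # [0 3]
</code></pre>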
arrays|python-3.x|string|numpy|indexing
0
376,457
67,209,056
dockerized flask app does not import pandas
<p>I'm developing a Flask app that uses pandas! It works fine when I run it on localhost, but when I dockerize it and try to run the container I get this log:</p> <blockquote> <p>Traceback (most recent call last):
File &quot;C:\testapi\app.py&quot;, line 4, in
import pandas as pd
ModuleNotFoundError: No module named 'pandas'</p> </blockquote> <p>Can you guys solve my problem, please?</p>
<p>Put this in your dockerfile:<br /> <code>RUN pip install pandas</code></p> <p>You should consider using a <code>requirements.txt</code> file for dependencies.<br /> Copy the <code>requirements.txt</code> to your container and install the requirements:<br /> <code>RUN pip install -r requirements.txt</code></p>
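<p>A minimal <code>requirements.txt</code> for this app could be as simple as the following (pin versions as needed):</p> <pre><code>flask
pandas
</code></pre>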
python|pandas
1
376,458
67,328,216
join 2 data sets and compare in python
<p>Can someone please help with the following using Python code?</p> <ul> <li>I have 2 csv data sets, each with a million records</li> <li>Both files have the same column names, with a total of 200 columns (100 in each)</li> <li>The 2 files are month-over-month transactions, so many records may overlap; however, some columns can change in value</li> <li>There will be some records that are unique to File A and/or B</li> </ul> <p>File A</p> <pre><code>ID, items, Amount
A1, 10, 100
A2, 20, 200
A3, 30, 300
</code></pre> <p>File B</p> <pre><code>ID, items, Amount
A1, 10, 100
A2, 12, 120
A4, 40, 400
</code></pre> <p>I need the final output to be as follows:</p> <pre><code>FileA-ID, FileB-ID, Match?, FileA-Items, FileB-items, Match?, FileA-Amount, FileB-Amount, Match?
A1, A1, Y, 10, 10, Y, 100, 100, Y,
A2, A2, Y, 20, 12, N, 100, 120, N,
A3, NAN, N, 30, NAN, N, 300, NAN, N,
NAN, A4, N, NAN, 40, N, NAN, 400, N
</code></pre> <p>This will be a monthly process, so I want to make the code generic so I can just rerun it every month on a new file.</p>
<p>First, you can keep track of your original columns in a list:</p> <pre class="lang-py prettyprint-override"><code>cols = df1.columns.tolist()
</code></pre> <p>Then set the <code>ID</code> column as index and rename your two dataframes' column headers accordingly. Concat the two dataframes along the columns:</p> <pre class="lang-py prettyprint-override"><code>df1 = df1.set_index('ID')
df2 = df2.set_index('ID')

df1['ID'] = df1.index
df2['ID'] = df2.index

df1 = df1.rename(lambda col: f'FileA-{col}', axis=1)
df2 = df2.rename(lambda col: f'FileB-{col}', axis=1)

df_ = pd.concat([df1, df2], axis=1)
</code></pre> <pre><code>print(df_)

    FileA-items  FileA-Amount FileA-ID  FileB-items  FileB-Amount FileB-ID
ID
A1         10.0         100.0       A1         10.0         100.0       A1
A2         20.0         200.0       A2         12.0         120.0       A2
A3         30.0         300.0       A3          NaN           NaN      NaN
A4          NaN           NaN      NaN         40.0         400.0       A4
</code></pre> <p>Finally, create a <code>Match</code> column for each original column according to the same column in df1 and df2, then sort the column headers:</p> <pre class="lang-py prettyprint-override"><code>for col in cols:
    df_[f'Match-{col}'] = np.where((df_[f'FileA-{col}'] == df_[f'FileB-{col}']), 'Y', 'N')

df_ = df_.reindex(sorted(df_.columns, key = lambda x: cols.index(x.split('-')[1])), axis=1)
</code></pre> <pre><code>print(df_)

   FileA-ID FileB-ID Match-ID  FileA-items  FileB-items Match-items  FileA-Amount  FileB-Amount Match-Amount
ID
A1       A1       A1        Y         10.0         10.0           Y         100.0         100.0            Y
A2       A2       A2        Y         20.0         12.0           N         200.0         120.0            N
A3       A3      NaN        N         30.0          NaN           N         300.0           NaN            N
A4      NaN       A4        N          NaN         40.0           N           NaN         400.0            N

print(df_.reset_index(drop=True))

  FileA-ID FileB-ID Match-ID  FileA-items  FileB-items Match-items  FileA-Amount  FileB-Amount Match-Amount
0       A1       A1        Y         10.0         10.0           Y         100.0         100.0            Y
1       A2       A2        Y         20.0         12.0           N         200.0         120.0            N
2       A3      NaN        N         30.0          NaN           N         300.0           NaN            N
3      NaN       A4        N          NaN         40.0           N           NaN         400.0            N
</code></pre>
python|pandas|dataframe|join
0
376,459
67,566,170
Sum columns based off conditionals - pandas
<p>I'm aiming to sum specific columns in a df where a condition is met. Where <code>Group</code> == <code>Group_A</code>, I want to sum <code>A_4</code> and <code>B_4</code>. However, where <code>Group</code> == <code>Group_B</code>, I want to pass the sum of <code>A_1</code> and <code>B_1</code> to the same column. I need to pass the function at the same time, otherwise I end up with NaN values.</p> <pre><code>df = pd.DataFrame({
    'Group_A' : ['Red','Red','Red','Red','Red',],
    'Group_B' : ['Blue','Blue','Blue','Blue','Blue',],
    'Group' : ['Blue','Blue','Blue','Red','Blue',],
    'A_1' : [7,6,8,0,4],
    'B_1' : [6,7,11,1,4],
    'A_4' : [1,1,1,6,4],
    'B_4' : [3,3,3,9,5],
   })

df['Sum'] = df.loc[df['Group'] == df['Group_A'],['A_4','B_4']].sum(axis=1)
df['Sum'] = df.loc[df['Group'] == df['Group_B'],['A_1','B_1']].sum(axis=1)
</code></pre> <p>intended output:</p> <pre><code>  Group_A Group_B Group  A_1  B_1  A_4  B_4   Sum
0     Red    Blue  Blue    7    6    1    3  13.0
1     Red    Blue  Blue    6    7    1    3  13.0
2     Red    Blue  Blue    8   11    1    3  19.0
3     Red    Blue   Red    0    1    6    9  15.0
4     Red    Blue  Blue    4    4    4    5   8.0
</code></pre> <p>Update:</p> <p>Could subtraction replace the sum method without drawing an error?</p> <pre><code>sub1 = df.loc[df['Group'] == df['Group_A'], df['A_4'].sub(df['B_4'])]
sub2 = df.loc[df['Group'] == df['Group_B'], df['A_1'].sub(df['B_1'])]

df['Sub'] = sub1.append(sub2)
</code></pre>
<p>Another option: save the sums to series, and then update the dataframe:</p> <pre><code>Sum1 = df.loc[df['Group'] == df['Group_A'],['A_4','B_4']].sum(axis=1)
Sum2 = df.loc[df['Group'] == df['Group_B'],['A_1','B_1']].sum(axis=1)
df['Sum'] = Sum1.append(Sum2)
</code></pre> <p>As mentioned in the comments, if you want to subtract, let's say, the Bs from the As, you can do something like this:</p> <pre><code>Sum1 = df.loc[df['Group'] == df['Group_A'],['A_4','B_4']].apply(lambda x: x['A_4'] - x['B_4'],axis=1)
Sum2 = df.loc[df['Group'] == df['Group_B'],['A_1','B_1']].apply(lambda x: x['A_1'] - x['B_1'],axis=1)
df['Sum'] = Sum1.append(Sum2)
</code></pre>
python|pandas
1
376,460
67,340,358
find index based on data from two numpy arrays
<p>I have a huge numpy matrix. Let us say:</p> <pre><code>A['a1'] = [1,2,3,6]
A['a3'] = [3,4,3,7]
A['a4'] = [4,6,8,7]
B['b2'] = [2,2,2,4]

A['a1']  A['a3']  A['a4']  B['b2']
   1        3        4        2
   2        4        6        2
   3        3        8        2
   6        7        7        4
</code></pre> <p>I want to select the indexes where B['b2'] has value 2 and A['a3'] has value 3, so that means I need indexes 0 and 2.</p> <p>For a single array I can use np.where, but how can I correlate between different arrays? I have used Pandas before and it was quite easy there, but I am unable to find something to achieve this using numpy.</p>
<p>You could use a set object <strong>{...}</strong> combined with its method <em><strong>intersection</strong></em>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

A, B = {}, {}   # Optional: to avoid a bug in this chunk of code
A['a1'] = [1,2,3,6]
A['a3'] = [3,4,3,7]
A['a4'] = [4,6,8,7]
B['b2'] = [2,2,2,4]

n1 = np.where(np.array(A['a3'])==3)   # (array([0, 2]),)
n2 = np.where(np.array(B['b2'])==2)   # (array([0, 1, 2]),)
print(set(n1[0]).intersection(n2[0]))
### {0, 2}
</code></pre>
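<p>A sketch of the same query as a single vectorized expression, combining the two boolean masks with <code>&amp;</code>:</p> <pre><code>idx = np.where((np.array(A['a3']) == 3) &amp; (np.array(B['b2']) == 2))[0]
print(idx)   # [0 2]
</code></pre>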
python|numpy
1
376,461
67,249,082
Hyperparameter Tuning with Keras Tuner RandomSearch Error
<p>I am using keras tuner to optimize hyperparameters: hidden layers, neurons, activation function, and learning rate. I have a time series regression problem with 31 inputs and 32 outputs, with N data samples.</p> <p>My original X_train shape is (N,31) and Y_train shape is (N,32). I reshape X_train and Y_train to fit Keras's expected shapes as follows: X_train.shape: (N,31,1), Y_train.shape: (N,32).</p> <p><a href="https://i.stack.imgur.com/GGMxA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GGMxA.jpg" alt="Code" /></a></p> <p>In the above code, X_train.shape[1] is 31 and Y_train.shape[1] is 32. When I run the hyperparameter tuning, it raises: ValueError: Input 0 of layer lstm_1 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 20).</p> <p>The following error occurs: <a href="https://i.stack.imgur.com/aM0J8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aM0J8.jpg" alt="enter image description here" /></a></p> <p>What am I missing, and what is the issue?</p>
<p>LSTM layers expect a 3D tensor input with the shape [batch, timesteps, feature]. Since the number of layers is itself a tuning parameter, whenever two or more LSTM layers are stacked, every LSTM layer after the first also expects a 3D tensor as input. You therefore need to add <code>return_sequences=True</code> to each LSTM layer except the last, so that its output tensor has ndim=3 (batch size, timesteps, hidden state) and can be fed into the next LSTM layer.</p>
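<p>A minimal sketch of how this looks inside a tuner build function. The unit ranges, layer-count range, and the input shape of (31, 1) are illustrative assumptions, not values from the original code (the package is imported as <code>keras_tuner</code> in recent releases, <code>kerastuner</code> in older ones):</p>
<pre><code>import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(layers.Input(shape=(31, 1)))          # 31 timesteps, 1 feature
    n_layers = hp.Int('num_layers', 1, 3)
    for i in range(n_layers):
        model.add(layers.LSTM(
            units=hp.Int(f'units_{i}', 32, 128, step=32),
            # every LSTM except the last must emit the full sequence
            return_sequences=(i &lt; n_layers - 1)))
    model.add(layers.Dense(32))                     # 32 regression outputs
    model.compile(optimizer='adam', loss='mse')
    return model

tuner = kt.RandomSearch(build_model, objective='val_loss', max_trials=5)
</code></pre>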
python|tensorflow|keras|deep-learning|keras-tuner
0
376,462
67,354,192
Numpy double-slice assignment with integer indexing followed by boolean indexing
<p>I already know that Numpy &quot;double-slice&quot; with fancy indexing creates copies instead of views, and the solution seems to be to convert them to one single slice (e.g. <a href="https://stackoverflow.com/questions/34764141/cannot-assign-values-to-a-double-slice-using-numpy">This question</a>). However, I am facing this particular problem where I need to deal with integer indexing followed by boolean indexing, and I am at a loss what to do. The problem (simplified) is as follows:</p>
<pre><code>a = np.random.randn(2, 3, 4, 4)
idx_x = np.array([[1, 2], [1, 2], [1, 2]])
idx_y = np.array([[0, 0], [1, 1], [2, 2]])
print(a[..., idx_y, idx_x].shape)  # (2, 3, 3, 2)
mask = (np.random.randn(2, 3, 3, 2) &gt; 0)
a[..., idx_y, idx_x][mask] = 1  # assignment doesn't work
</code></pre>
<p>How can I make the assignment work?</p>
<p>OK, apparently I was overcomplicating things. There is no need to combine the indexing; a read-modify-write on the sub-array solves the problem elegantly:</p>
<pre><code>b = a[..., idx_y, idx_x]
b[mask] = 1
a[..., idx_y, idx_x] = b

print(a[..., idx_y, idx_x][mask])  # all 1s
</code></pre>
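<p>For the record, the same thing fits on one line with <code>np.where</code> (reusing <code>a</code>, <code>idx_y</code>, <code>idx_x</code> and <code>mask</code> from the question). The right-hand side still materializes a copy internally, but the fancy-indexed assignment writes the result back into <code>a</code>:</p>
<pre><code>a[..., idx_y, idx_x] = np.where(mask, 1, a[..., idx_y, idx_x])
</code></pre>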
python|numpy|slice
1
376,463
67,516,148
How to reshape an ndarray to fit a prediction model?
<p>I need to read two images, convert them to size 150x150, and add them to an array that needs to have the shape (2, 150, 150, 3) in order to fit a keras model. I'm having trouble understanding how numpy's reshape method works and how I should make use of it.</p>
<p>My code:</p>
<pre><code>import cv2
import numpy


def loadAndReshape(target, path):
    targetImage = cv2.imread(path)
    targetImage = cv2.cvtColor(targetImage, cv2.COLOR_BGR2RGB)
    targetImage = cv2.resize(targetImage, dsize=(150, 150)) / 255
    targetImage = targetImage.reshape(1, 150, 150, 3).astype('float32')
    numpy.append(target, targetImage)


targetImages = numpy.ndarray((2, 150, 150, 3))
loadAndReshape(targetImages, './/test1.jpg')
loadAndReshape(targetImages, './/test2.jpg')
</code></pre>
<p>Reshaping <code>targetImage</code> works without issues, but in the end <code>targetImages</code> is still unchanged - the appended data never lands in it. How do I go about producing the array needed for my model?</p>
<p>The function <code>numpy.append</code> does not work in place as I think you expect it to; it returns a new array instead of modifying <code>target</code>. You can do something like this instead:</p>
<pre><code>import cv2
import numpy as np


def loadAndReshape(image_list, path):
    targetImage = cv2.imread(path)
    targetImage = cv2.cvtColor(targetImage, cv2.COLOR_BGR2RGB)
    targetImage = cv2.resize(targetImage, dsize=(150, 150)) / 255
    targetImage = targetImage.reshape(1, 150, 150, 3).astype('float32')
    image_list.append(targetImage)


targetImages = []
loadAndReshape(targetImages, './/test1.jpg')
loadAndReshape(targetImages, './/test2.jpg')
.
.
.
targetImages = np.concatenate(targetImages)
</code></pre>
python|numpy|keras|cv2
0
376,464
67,449,430
Concise way to concatenate consecutive rows in pandas
<p>I would like to take a dataframe and concatenate consecutive rows for comparison.</p> <p>e.g. Take</p> <pre><code>xyt = pd.DataFrame(np.concatenate((np.random.randn(3,2), np.arange(3).reshape((3, 1))), axis=1), columns=['x','y','t']) </code></pre> <p>Which looks something like:</p> <pre><code> x y t 0 1.237007 -1.035837 0.0 1 -1.782458 1.042942 1.0 2 0.063130 0.355014 2.0 </code></pre> <p>And make:</p> <pre><code> a b x y t x y t 0 1.237007 -1.035837 0.0 -1.782458 1.042942 1.0 1 -1.782458 1.042942 1.0 0.063130 0.355014 2.0 </code></pre> <p>The best I could come up with was:</p> <pre><code>pd.DataFrame( [np.append(x,y) for (x, y) in zip(xyt.values, xyt[1:].values)], columns=pd.MultiIndex.from_product([('a', 'b'), xyt.columns])) </code></pre> <p>Is there a better way?</p>
<p>Let's try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">concat</a> on axis=1 with the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html#pandas-dataframe-shift" rel="nofollow noreferrer">shifted</a> frame:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd xyt = pd.DataFrame({'x': {0: 1.237007, 1: -1.782458, 2: 0.06313}, 'y': {0: -1.035837, 1: 1.042942, 2: 0.355014}, 't': {0: 0.0, 1: 1.0, 2: 2.0}}) merged = pd.concat((xyt, xyt.shift(-1)), axis=1, keys=('a', 'b')).iloc[:-1] print(merged) </code></pre> <p><code>merged</code>:</p> <pre><code> a b x y t x y t 0 1.237007 -1.035837 0.0 -1.782458 1.042942 1.0 1 -1.782458 1.042942 1.0 0.063130 0.355014 2.0 </code></pre>
pandas
1
376,465
67,470,504
Group the same column value in the dataframe and add the sum of the same values as a new column
<p>I have a pandas <code>DataFrame</code> like the following.</p>
<pre><code>df = pd.DataFrame({
   'Column1': ['A', 'B', 'C', 'A', 'B', 'A', 'C', 'A', 'B', 'B'],
   'Column2': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
   'Column3': ['X','Y','Z','X', 'X', 'Z','X','Y','Z','X']})
</code></pre>
<p>I want to group by Column1, sum the values of Column2 per group, and also spread the Column3 categories into new columns holding the per-category sums of Column2.</p>
<pre><code>  Column1  Column2 Column3
0       A        1       X
1       B        2       Y
2       C        3       Z
3       A        4       X
4       B        5       X
5       A        6       Z
6       C        7       X
7       A        8       Y
8       B        9       Z
9       B       10       X
</code></pre>
<p>Expected outcome</p>
<pre><code>  Column1  Column2   X  Y  Z
0       A       19   5  8  6
1       B       26  15  2  9
2       C       10   7  0  3
</code></pre>
<p>I looked at the sample questions, but I could not find an answer to my problem. Any help regarding this is appreciated.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>DataFrame.insert</code></a>:</p> <pre><code>df = df.pivot_table(index='Column1', columns='Column3', values='Column2', aggfunc='sum', fill_value=0).reset_index().rename_axis(None, axis=1) df.insert(1, 'Column2', df.sum(axis=1)) print (df) Column1 Column2 X Y Z 0 A 19 5 8 6 1 B 26 15 2 9 2 C 10 7 0 3 </code></pre>
python|pandas|dataframe|pandas-groupby
2
376,466
67,498,283
Dataframe bar plot not consistent x axis with plt.plot
<pre><code>df = pd.DataFrame({&quot;segments&quot;: [2, 2, 2, 5, 3, 3, 3, 4, 4], &quot;values&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9]})
df.groupby(&quot;segments&quot;).size().plot(kind=&quot;bar&quot;)
plt.plot([3, 3], [0, 5])
</code></pre>
<p>Let's say I have a dataframe with the columns segments and values. I want to plot a bar graph of the segment frequencies and a line graph on the same axes.</p>
<p>But when I run the code above, the x-axis is not consistent between the two plots. The x values of the line should have landed at 3 on the x-axis (see image below).</p>
<p>What am I supposed to do to fix this issue? (I want to keep using groupby().size() on dataframes.)</p>
<p><a href="https://i.stack.imgur.com/f1CM4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f1CM4.png" alt="enter image description here" /></a></p>
<p>When overlaying graphs with the pandas plotting functions, plot them consecutively on the same axes. Also, note that a bar plot places its categories at positions 0, 1, 2, ..., so the x-axis here is categorical rather than numeric; set use_index to False on the line plot so it uses the same positional coordinates.</p>
<pre><code>df.groupby(&quot;segments&quot;).size().plot(kind=&quot;bar&quot;)
df.groupby(&quot;segments&quot;).size().plot(kind=&quot;line&quot;, use_index=False)
</code></pre>
<p><a href="https://i.stack.imgur.com/NwQ0J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NwQ0J.png" alt="enter image description here" /></a></p>
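<p>If you would rather keep the explicit <code>plt.plot</code> call, a hedged alternative is to translate the category into its bar position first (pandas places the bars at 0, 1, 2, ... in sorted category order, so segment 3 sits at position 1 here):</p>
<pre><code>import matplotlib.pyplot as plt

counts = df.groupby(&quot;segments&quot;).size()
pos = list(counts.index).index(3)   # positional coordinate of the &quot;3&quot; bar
counts.plot(kind=&quot;bar&quot;)
plt.plot([pos, pos], [0, 5])
</code></pre>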
python|pandas|matplotlib
0
376,467
67,576,058
Pandas: How to put labels on timestamp based on pre-designated time interval?
<p>I have a dataframe that looks like this:</p> <pre><code>+---------------------+--------+ | time | score | +---------------------+--------+ | 2021-01-01 08:01:00 | xx | +---------------------+--------+ | 2021-01-01 15:01:00 | xx | +---------------------+--------+ | 2021-01-02 23:45:00 | xx | +---------------------+--------+ | 2021-01-03 09:32:00 | xx | +---------------------+--------+ | 2021-01-04 20:01:00 | xx | +---------------------+--------+ | 2021-01-04 16:30:00 | xx | +---------------------+--------+ | 2021-01-04 12:01:00 | xx | +---------------------+--------+ </code></pre> <p>I also have a <strong>pre-designated time interval legend</strong> that looks like this:</p> <ul> <li>2AM - 5:59AM: G1</li> <li>6AM - 9:59AM: G2</li> <li>10AM - 4:29PM: G3</li> <li>4:30PM - 7:29PM: G4</li> <li>7:30PM - 7:59PM: G5</li> <li>8PM - 10:59PM: G6</li> <li>11PM - 1:59AM: G7</li> </ul> <p>How do I <strong>add a new column that assign labels to each timestamps</strong> based on pre-designated time interval legend?</p> <p>The final dataframe would look like this:</p> <pre><code>+---------------------+--------+-------+ | time | score | group | +---------------------+--------+-------+ | 2021-01-01 08:01:00 | xx | G2 | +---------------------+--------+-------+ | 2021-01-01 15:01:00 | xx | G3 | +---------------------+--------+-------+ | 2021-01-02 23:45:00 | xx | G7 | +---------------------+--------+-------+ | 2021-01-03 09:32:00 | xx | G2 | +---------------------+--------+-------+ | 2021-01-04 20:01:00 | xx | G6 | +---------------------+--------+-------+ | 2021-01-04 16:30:00 | xx | G4 | +---------------------+--------+-------+ | 2021-01-04 12:01:00 | xx | G3 | +---------------------+--------+-------+ </code></pre> <p>Thanks so much for your help!</p>
<p>You can shift the time with <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timedeltas.html" rel="nofollow noreferrer"><code>pd.Timedelta</code></a> so that G1 starts at midnight, and then bin with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html?highlight=cut#pandas.cut" rel="nofollow noreferrer"><code>pd.cut</code></a>:</p>
<pre><code>shifted_times = (df['time'] - pd.Timedelta('2H'))
int_times = shifted_times.dt.hour * 100 + shifted_times.dt.minute
# with right=False the bins are [0, 400), [400, 800), ... so shifted times
# of 0:00-3:59 (i.e. 2:00-5:59 AM originally) all land in G1
df['group'] = pd.cut(int_times,
       right=False,
       bins = [0, 400, 800, 1430, 1730, 1800, 2100, 2400],
       labels=['G1','G2','G3','G4', 'G5', 'G6', 'G7']
      )
</code></pre>
<p>Output of <code>print(df)</code>:</p>
<pre><code>                 time score group
0 2021-01-01 08:01:00    xx    G2
1 2021-01-01 15:01:00    xx    G3
2 2021-01-02 23:45:00    xx    G7
3 2021-01-03 09:32:00    xx    G2
4 2021-01-04 20:01:00    xx    G6
5 2021-01-04 16:30:00    xx    G4
6 2021-01-04 12:01:00    xx    G3
</code></pre>
python|pandas|date|datetime
0
376,468
67,588,381
Rolling correlation that includes all previous values in pandas
<p>I want to compute the correlation between two time series columns. I know that I can do this to get a singular r value:</p> <pre><code>df['a'].corr(df['b']) </code></pre> <p>However, I want to get the r value of the correlation between all previous and current values. I know pandas has prebuilt <code>rolling</code> functions, but these only include the specified window as history. Is there a prebuilt pandas way to do this?</p> <p>For example:</p> <pre><code>a b 1 4 1 4 2 3 3 2 4 1 5 0 1 6 </code></pre> <p><code>df['a'].corr(df['b'])</code> returns <code>-1</code>. What I'm looking for is something like this:</p> <pre><code>a b corr 1 4 np.nan 1 4 np.nan 2 3 -1 3 2 -1 4 1 -1 5 0 -1 1 6 -.93 </code></pre>
<p>Try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.expanding.Expanding.corr.html#pandas-core-window-expanding-expanding-corr" rel="nofollow noreferrer">expanding corr</a>:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'a': {0: 1, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 1}, 'b': {0: 4, 1: 4, 2: 3, 3: 2, 4: 1, 5: 0, 6: 6} }) df['corr'] = df['a'].expanding().corr(df['b']) print(df) </code></pre> <p><code>df</code>:</p> <pre><code> a b corr 0 1 4 NaN 1 1 4 NaN 2 2 3 -1.000000 3 3 2 -1.000000 4 4 1 -1.000000 5 5 0 -1.000000 6 1 6 -0.939664 </code></pre>
python|pandas
1
376,469
67,404,862
Implementing Backprop for custom loss functions
<p>I have a neural network <code>Network</code> that has a vector output. Instead of using a typical loss function, I would like to implement my own loss function that is a method in some class. This looks something like:</p>
<pre><code>class whatever:
    def __init__(self, network, optimizer):
        self.network = network
        self.optimizer = optimizer

    def cost_function(self, relevant_data):
        ...implementation of cost function with respect to output of network and relevant_data...

    def train(self, epochs, other_params):
        ...part I'm having trouble with...
</code></pre>
<p>The main thing I'm concerned with is taking gradients. Since I'm using my own custom loss function, do I need to implement my own gradient with respect to the cost function?</p>
<p>Once I do the math, I realize that if the cost is J, then the gradient of J is a fairly simple function in terms of the gradient of the final layer of the Network. I.e., it looks something like: <a href="https://i.stack.imgur.com/IwO2D.gif" rel="nofollow noreferrer">Equation link</a>.</p>
<p>If I used some traditional loss function like CrossEntropy, my backward pass would look like:</p>
<pre><code>objective = nn.CrossEntropyLoss()

for epochs:
    optimizer.zero_grad()
    output = Network(input)
    loss = objective(output, data)
    loss.backward()
    optimizer.step()
</code></pre>
<p>But how do we do this in my case? My guess is something like:</p>
<pre><code>for epochs:
    optimizer.zero_grad()
    output = Network(input)
    loss = cost_function(output, data)
    # And here is where the problem comes in
    loss.backward()
    optimizer.step()
</code></pre>
<p><code>loss.backward()</code>, as I understand it, takes the gradients of the loss function with respect to the parameters. But can I still invoke it while using my own loss function (presumably the program doesn't know what the gradient equation is)? Do I have to implement another method/subroutine to find the gradients as well?</p>
<p>Which brings me to my other question: if I <em>do</em> want to implement gradient calculation for my loss function, I also need the gradient of the neural network parameters. How do I obtain <em>those</em>? Is there a function for that?</p>
<p>As long as all your steps starting from the input till the loss function involve differentiable operations on PyTorch's tensors, you need not do anything extra. PyTorch builds a computational graph that keeps track of each operation, its inputs, and gradients. So, calling <code>loss.backward()</code> on your custom loss would still propagate gradients back correctly through the graph. <a href="https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html" rel="nofollow noreferrer">A Gentle Introduction to torch.autograd</a> from the PyTorch tutorials may be a useful reference.</p>
<p>After the backward pass, if you need to directly access the gradients for further processing, you can do so using the <code>.grad</code> attribute, which is populated on leaf tensors such as model parameters (e.g. <code>p.grad</code> for a parameter <code>p</code>).</p>
<p>Finally, if you have a specific use case for finding the gradient of an arbitrary differentiable function implemented using PyTorch's tensors with respect to one of its inputs (e.g. gradient of the loss with respect to a particular weight in the network), you could use <a href="https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad" rel="nofollow noreferrer"><code>torch.autograd.grad</code></a>.</p>
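<p>A minimal sketch to make this concrete. The loss formula below is arbitrary, chosen only to show that any chain of differentiable tensor operations backpropagates without a hand-written gradient:</p>
<pre><code>import torch

def cost_function(output, target):
    # any composition of differentiable torch ops works here
    return ((output - target) ** 2).mean()

model = torch.nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, target = torch.randn(8, 4), torch.randn(8, 3)
optimizer.zero_grad()
loss = cost_function(model(x), target)
loss.backward()                    # autograd fills p.grad for every parameter
print(model.weight.grad.shape)     # torch.Size([3, 4])
optimizer.step()
</code></pre>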
python|machine-learning|neural-network|pytorch|backpropagation
1
376,470
67,299,116
How to create dictionaries for 10,000 dataframe records and then access each dictionary record to make some calculations?
<p>I have a pandas dataframe that has 10,000 records. The dataframe consists of 0s and 1s and looks like this:</p>
<pre><code>C1 C2 C3 C4
0   0  1  1
0   1  0  0
1   0  1  1
</code></pre>
<p>My aim is to turn each record into dictionaries, assigning a fixed value per column (each column has the same value for all rows):</p>
<pre><code>  C1          C2        C3        C4
{0: 10}    {0: 11}   {1: 15}   {1: 13}
{0: 10}    {1: 11}   {0: 15}   {0: 13}
{1: 10}    {0: 11}   {1: 15}   {1: 13}
</code></pre>
<p>and then access the dictionaries, make some calculations row by row, and compare the total of the <code>0</code>s against the total of the <code>1</code>s. The key with the highest total will be added to a new column. For example, for the first row <code>0 = 21</code> and <code>1 = 28</code>, therefore <code>1</code> will be added to the new column. For the second row <code>0 = 38</code> and <code>1 = 11</code>, therefore <code>0</code> will be added:</p>
<pre><code>  C1          C2        C3        C4      New_column
{0: 10}    {0: 11}   {1: 15}   {1: 13}        1
{0: 10}    {1: 11}   {0: 15}   {0: 13}        0
{1: 10}    {0: 11}   {1: 15}   {1: 13}        1
</code></pre>
<p>If you don't need the intermediate dicts, you can do some multiplications and sums:</p> <pre class="lang-py prettyprint-override"><code>values = [10, 11, 15, 13] zeros = df.eq(0).mul(values).sum(axis=1) ones = df.eq(1).mul(values).sum(axis=1) df['New_column'] = ones.gt(zeros).astype(int) # C1 C2 C3 C4 New_column # 0 0 0 1 1 1 # 1 0 1 0 0 0 # 2 1 0 1 1 1 </code></pre> <p>And if you do want the dicts, I would do it backwards and create the dicts <em>after</em> computing <code>New_column</code>:</p> <pre class="lang-py prettyprint-override"><code>df[['C1','C2','C3','C4']] = df.filter(like='C').apply( lambda column: [{x:values[df.columns.get_loc(column.name)]} for x in column]) # C1 C2 C3 C4 New_column # 0 {0: 10} {0: 11} {1: 15} {1: 13} 1 # 1 {0: 10} {1: 11} {0: 15} {0: 13} 0 # 2 {1: 10} {0: 11} {1: 15} {1: 13} 1 </code></pre>
python-3.x|pandas|dataframe|dictionary
2
376,471
67,568,149
How do I separate the dictionary list into separate columns?
<p>I have a list of dictionaries in my dataframe column, of varying length:</p>
<pre><code>categories
1) [ { &quot;S&quot; : &quot;Vibes&quot; }, { &quot;S&quot; : &quot;Themed&quot; }, { &quot;S&quot; : &quot;Experiences&quot; }, { &quot;S&quot; : &quot;Girls Night&quot; }]
2) [ { &quot;S&quot; : &quot;Vibes&quot; }]
3) [ { &quot;S&quot; : &quot;Vibes&quot; }, { &quot;S&quot; : &quot;Drinks&quot; }]
.
.
.
</code></pre>
<p>I want to split it into separate columns. If a particular list has no dictionary at some position, that category column should hold null for the row. The output should look like:</p>
<pre><code>categories 1        categories 2        categories 3            categories 4
{ &quot;S&quot; : &quot;Vibes&quot; }   { &quot;S&quot; : &quot;Themed&quot; }  { &quot;S&quot; : &quot;Experiences&quot; }  { &quot;S&quot; : &quot;Girls Night&quot; }
{ &quot;S&quot; : &quot;Vibes&quot; }   null                null                    null
{ &quot;S&quot; : &quot;Vibes&quot; }   { &quot;S&quot; : &quot;Drinks&quot; }  null                    null
.
.
.
</code></pre>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html?highlight=explode#pandas.DataFrame.explode" rel="nofollow noreferrer"><code>.explode()</code></a> to expand the list of dict in column <code>categories</code> into separate rows, then create the categories names ('categories 1', 'categories 2', etc) by grouping on the original row index (row index before explode) using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>.groupby()</code></a> and get the serial number by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>.cumcount()</code></a> within the group. Finally, we use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>.pivot()</code></a> to pivot the rows into columns.</p> <pre><code>df1 = df.explode('categories') df1['Cat_Num'] = 'categories ' + df1.groupby(level=0).cumcount().add(1).astype(str) df2 = df1.pivot(columns='Cat_Num', values='categories').rename_axis(columns=None) </code></pre> <h2>Demo</h2> <pre><code>data = {'categories': [ [{ &quot;S&quot; : &quot;Vibes&quot; }, { &quot;S&quot; : &quot;Themed&quot; }, { &quot;S&quot; : &quot;Experiences&quot; }, { &quot;S&quot; : &quot;Girls Night&quot; }], [ { &quot;S&quot; : &quot;Vibes&quot; }], [ { &quot;S&quot; : &quot;Vibes&quot; }, { &quot;S&quot; : &quot;Drinks&quot; }] ]} df = pd.DataFrame(data) df1 = df.explode('categories') df1['Cat_Num'] = 'categories ' + df1.groupby(level=0).cumcount().add(1).astype(str) df2 = df1.pivot(columns='Cat_Num', values='categories').rename_axis(columns=None) print(df2) categories 1 categories 2 categories 3 categories 4 0 {'S': 'Vibes'} {'S': 'Themed'} {'S': 'Experiences'} {'S': 'Girls Night'} 1 {'S': 'Vibes'} NaN NaN NaN 2 {'S': 'Vibes'} {'S': 'Drinks'} NaN NaN </code></pre>
python|pandas|list|dataframe|dictionary
1
376,472
67,493,974
How to use tensor shape parameters for something useful?
<p>I'm trying to use the shape of an incoming tensor to form the output, sort of like this:</p>
<pre><code>import tensorflow as tf
import tensorflow.keras.backend as K

def myFunc(x):
    sz = tf.shape(x)[1]
    # .. other stuff
    z = K.repeat_elements(y, sz, axis=1)
</code></pre>
<p>This results in <code>TypeError: Tensor object cannot be interpreted as integer</code>.</p>
<p>How do I get around this?</p>
<p>If the dimension of <code>x</code> is known in advance, you can use <code>x.shape[1]</code> instead of <code>tf.shape(x)[1]</code>; the former returns a plain integer.</p>
<p>But I would advise using <a href="https://www.tensorflow.org/api_docs/python/tf/repeat" rel="nofollow noreferrer"><code>tf.repeat</code></a> instead of <code>tf.keras.backend.repeat_elements</code>. <code>tf.repeat</code> works regardless of whether you use <code>tf.shape(x)</code> or <code>x.shape</code>.</p>
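<p>A small sketch under assumed shapes, to illustrate that the repeat count can stay a dynamic tensor:</p>
<pre><code>import tensorflow as tf

x = tf.random.normal((2, 5, 4))
sz = tf.shape(x)[1]                    # dynamic dimension as a scalar tensor
y = tf.ones((2, 1, 4))
z = tf.repeat(y, repeats=sz, axis=1)   # accepts a tensor repeat count
print(z.shape)                         # (2, 5, 4)
</code></pre>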
tensorflow|keras
1
376,473
67,380,305
Installing OSMnx in a new environment: Fiona error: module 'fiona' has no attribute '_loading'
<p>I am installing OSMnx in a new environment, following the steps from Geoff Boeing's site: <a href="https://geoffboeing.com/2017/02/python-getting-started/" rel="nofollow noreferrer">https://geoffboeing.com/2017/02/python-getting-started/</a></p>
<p>After activating the environment and importing the OSMnx module, it gives me this Fiona error:</p>
<pre><code>AttributeError                            Traceback (most recent call last)
&lt;ipython-input-2-28b0d4205d5c&gt; in &lt;module&gt;()
----&gt; 1 import osmnx as ox
      2 get_ipython().magic('matplotlib inline')
      3 #import geopandas as gpd

C:\Users\Kirti\anaconda3\envs\ox\lib\site-packages\osmnx\__init__.py in &lt;module&gt;()
      7 ################################################################################
      8
----&gt; 9 from .buildings import *
     10 from .elevation import *
     11 from .core import *

C:\Users\Kirti\anaconda3\envs\ox\lib\site-packages\osmnx\buildings.py in &lt;module&gt;()
      7
      8 import time
----&gt; 9 import geopandas as gpd
     10 import matplotlib.pyplot as plt
     11 from matplotlib.collections import PatchCollection

C:\Users\Kirti\anaconda3\envs\ox\lib\site-packages\geopandas\__init__.py in &lt;module&gt;()
      5 from geopandas.array import points_from_xy  # noqa
      6
----&gt; 7 from geopandas.io.file import _read_file as read_file  # noqa
      8 from geopandas.io.arrow import _read_parquet as read_parquet  # noqa
      9 from geopandas.io.arrow import _read_feather as read_feather  # noqa

C:\Users\Kirti\anaconda3\envs\ox\lib\site-packages\geopandas\io\file.py in &lt;module&gt;()
     10
     11 try:
---&gt; 12     import fiona
     13
     14     fiona_import_error = None

C:\Users\Kirti\anaconda3\envs\ox\lib\site-packages\fiona\__init__.py in &lt;module&gt;()
     83
     84 import fiona._loading
---&gt; 85 with fiona._loading.add_gdal_dll_directories():
     86     from fiona.collection import BytesCollection, Collection
     87     from fiona.drvsupport import supported_drivers

AttributeError: module 'fiona' has no attribute '_loading'
</code></pre>
<p>I have been trying to install OSMnx for the past week, without success:</p>
<ol>
<li>Installed only from one channel, conda-forge</li>
<li>This is a fresh install; I reinstalled Anaconda all over again</li>
<li>So, no old modules clashing</li>
<li>New environment</li>
</ol>
<p>Before going any further with installing single modules separately and causing version clashes between GDAL &amp; others, I want to ask if anybody has a solution, or how I can do it.</p>
<p>I took care of almost all measures, but still to no avail. I am new to this and I am not sure about downloading <code>.whl</code> files and such.</p>
<p>Here is my conda list in this environment:</p>
<p><a href="https://i.stack.imgur.com/gXEt1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gXEt1.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/MWruR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWruR.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/JnmJB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JnmJB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/EYrrJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EYrrJ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/vzJJk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vzJJk.png" alt="enter image description here" /></a></p>
<p>The Python version is <code>Python 3.9.0</code> (in that environment).</p>
<p>Thank you in advance.</p>
<p>If you want to install OSMnx, just follow its current documented <a href="https://osmnx.readthedocs.io/en/stable/#installation" rel="nofollow noreferrer">installation instructions</a>. Blog posts can fall out-of-date over the years.</p>
python|python-3.x|geopandas|osmnx|fiona
0
376,474
67,591,193
tf.gradients is not supported when eager execution is enabled (Keras in R)
<p>I am trying to implement Grad-CAM in R, and I get this error:</p>
<pre><code>Error in py_call_impl(callable, dots$args, dots$keywords) : 
RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.
</code></pre>
<p>I found some solutions online, but they all use Python. I was wondering how we can fix this problem in the R version of Keras. Thank you.</p>
<p>This can help: it disables TensorFlow 2's eager execution, so legacy <code>tf.gradients</code>-based code (such as most TF1-era Grad-CAM examples) can run:</p>
<pre><code>tf$compat$v1$disable_eager_execution()
</code></pre>
<p>The longer-term fix is to rewrite the gradient computation with <code>tf$GradientTape</code>, as the error message suggests.</p>
r|tensorflow|keras
0
376,475
67,368,886
pandas: add week dates with dataframe
<p>I have a df with rows like this:</p>
<pre><code>   p_id   m_id  x_id  g_id  u_id
0     2    NaN  1408     7   121
1     3   1259   117    23   315
2     3   1259   221     9   718
3     3   1259   397    76   367
</code></pre>
<p>and two datetime objects:</p>
<p>start_date:</p>
<pre><code>datetime.datetime(2021, 5, 25, 0, 0)
</code></pre>
<p>end_date:</p>
<pre><code>datetime.datetime(2021, 5, 29, 0, 0)
</code></pre>
<p>How do I get a df like the one below, basically repeating each row once for every date from start_date to end_date?</p>
<pre><code>    p_id   m_id  x_id  g_id  u_id      s_date
0      2    NaN  1408     7   121  2021-05-25
1      2    NaN  1408     7   121  2021-05-26
2      2    NaN  1408     7   121  2021-05-27
3      2    NaN  1408     7   121  2021-05-28
4      2    NaN  1408     7   121  2021-05-29
5      3   1259   117    23   315  2021-05-25
6      3   1259   117    23   315  2021-05-26
7      3   1259   117    23   315  2021-05-27
8      3   1259   117    23   315  2021-05-28
9      3   1259   117    23   315  2021-05-29
.
.
15     3   1259   397    76   367  2021-05-25
16     3   1259   397    76   367  2021-05-26
17     3   1259   397    76   367  2021-05-27
18     3   1259   397    76   367  2021-05-28
19     3   1259   397    76   367  2021-05-29
</code></pre>
<h3>Generate <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="noreferrer"><code>date_range</code></a> and cross <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html" rel="noreferrer"><code>merge</code></a></h3>
<ol>
<li>In pandas version &gt;= <code>1.2x</code>, to perform a cross merge we can now pass an optional parameter <code>how='cross'</code> to the merge function</li>
</ol>
<pre class="lang-py prettyprint-override"><code>dates = pd.date_range(start_date, end_date)
df.merge(dates.to_series(name='s_date'), how='cross')
</code></pre>
<ol start="2">
<li>For pandas version &lt; <code>1.2x</code> we have to create a temporary merge key in order to perform the <code>cross</code> merge</li>
</ol>
<pre class="lang-py prettyprint-override"><code>dates = pd.date_range(start_date, end_date)
df.assign(k=1).merge(dates.to_frame(name='s_date').assign(k=1), on='k').drop(columns='k')
</code></pre>
<hr />
<pre><code>    p_id    m_id  x_id  g_id  u_id     s_date
0      2     NaN  1408     7   121 2021-05-25
1      2     NaN  1408     7   121 2021-05-26
2      2     NaN  1408     7   121 2021-05-27
3      2     NaN  1408     7   121 2021-05-28
4      2     NaN  1408     7   121 2021-05-29
5      3  1259.0   117    23   315 2021-05-25
6      3  1259.0   117    23   315 2021-05-26
7      3  1259.0   117    23   315 2021-05-27
8      3  1259.0   117    23   315 2021-05-28
9      3  1259.0   117    23   315 2021-05-29
10     3  1259.0   221     9   718 2021-05-25
11     3  1259.0   221     9   718 2021-05-26
12     3  1259.0   221     9   718 2021-05-27
13     3  1259.0   221     9   718 2021-05-28
14     3  1259.0   221     9   718 2021-05-29
15     3  1259.0   397    76   367 2021-05-25
16     3  1259.0   397    76   367 2021-05-26
17     3  1259.0   397    76   367 2021-05-27
18     3  1259.0   397    76   367 2021-05-28
19     3  1259.0   397    76   367 2021-05-29
</code></pre>
python|python-3.x|pandas|date
5
376,476
67,254,060
why is my visualization of cnn image features in tensorboard t-sne RANDOM?
<p>I have a Convolutional neural network (VGG16) that performs well on a classifying task on 26 image classes. Now I want to visualize the data distribution with t-SNE on tensorboard. I removed the last layer of the CNN, therefore the output is the 4096 features. Because the classification works fine (~90% val_accuracy) I expect to see something like a pattern in t-SNE. But no matter what I do, the distribution stays <strong>random</strong> (-&gt; data is aligned in a circle/sphere and classes are cluttered). Did I do something wrong? Do I misunderstand t-SNE or tensorboard? It´s my first time working with that.</p> <p><a href="https://i.stack.imgur.com/VxpPe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VxpPe.png" alt="3D visualization is an almost round sphere" /></a></p> <p>Here´s my code for getting the features:</p> <pre><code>import tensorflow as tf from tensorflow.keras import Model import os import numpy as np from tensorboard.plugins import projector def get_image_features(data_dir, model, layername='fc2'): # fc2 = last vgg dense layer out = model.get_layer(layername).output feature_model = Model(model.input, out) # the model, see summary() below dataGen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / model.input.shape[2]) test_batches = dataGen.flow_from_directory(data_dir, target_size=model.input.shape[1:3], class_mode='categorical', batch_size=50, shuffle=False) features = feature_model.predict(test_batches) return features def create_projector(self, features, metadata_path): import tensorflow.compat.v1 as tf # tf v1 needed tf.disable_v2_behavior() if type(features) is str: features = np.loadtxt(features) data_dir_list = os.listdir(self.test_data_dir) data_dir_list.sort() img_data = [] for dataset in data_dir_list: img_list = os.listdir(os.path.join(self.test_data_dir, dataset)) img_list.sort() for img in img_list: input_img = cv2.imread(os.path.join(self.test_data_dir, dataset, img)) input_img_resize = cv2.resize(input_img, (112, 112)) img_data.append(input_img_resize) img_data = np.array(img_data) features_var = tf.Variable(features, name='features') with tf.Session() as sess: saver = tf.train.Saver([features_var]) sess.run(features_var.initializer) saver.save(sess, os.path.join(self.log_dir, 'images_4_classes.ckpt')) config = projector.ProjectorConfig() embedding = config.embeddings.add() embedding.tensor_name = features_var.name embedding.metadata_path = metadata_path projector.visualize_embeddings(tf.summary.FileWriter(self.log_dir), config) </code></pre> <p><em>Edit</em>: When I uncheck &quot;spherize data&quot; it´s not a sphere anymore but still the distribution of classes seems completely random. 
By the way: PCA works fine and shows some kind of a pattern</p> <p>Here´s the feature_model.summary():</p> <pre><code>Layer (type) Output Shape Param # ================================================================= block1_conv1_input (InputLay [(None, 224, 224, 3)] 0 _________________________________________________________________ block1_conv1 (Conv2D) (None, 224, 224, 64) 1792 _________________________________________________________________ block1_conv2 (Conv2D) (None, 224, 224, 64) 36928 _________________________________________________________________ block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 _________________________________________________________________ block2_conv1 (Conv2D) (None, 112, 112, 128) 73856 _________________________________________________________________ block2_conv2 (Conv2D) (None, 112, 112, 128) 147584 _________________________________________________________________ block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 _________________________________________________________________ block3_conv1 (Conv2D) (None, 56, 56, 256) 295168 _________________________________________________________________ block3_conv2 (Conv2D) (None, 56, 56, 256) 590080 _________________________________________________________________ block3_conv3 (Conv2D) (None, 56, 56, 256) 590080 _________________________________________________________________ block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 _________________________________________________________________ block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 _________________________________________________________________ block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 _________________________________________________________________ block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 _________________________________________________________________ block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 _________________________________________________________________ block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 _________________________________________________________________ block5_pool (MaxPooling2D) (None, 7, 7, 512) 0 _________________________________________________________________ flatten (Flatten) (None, 25088) 0 _________________________________________________________________ fc1 (Dense) (None, 4096) 102764544 _________________________________________________________________ fc2 (Dense) (None, 4096) 16781312 ================================================================= </code></pre>
<p>After weeks I stopped trying it with tensorboard. I reduced the number of features in the output layer to 256, 128, 64, and I also tried reducing the features with PCA and Truncated SVD beforehand, but nothing changed.</p>
<p>Now I use sklearn.manifold.TSNE and visualize the output with plotly. This is also easy, works fine, and I can see appropriate patterns, while t-SNE in tensorboard still produces a random distribution. So I guess the tensorboard algorithm struggles with this many classes. Or I made a mistake when preparing the data and didn't notice it (but then why does PCA work?).</p>
<p>If anyone knows what the problem was, I'm still curious. But in case someone else is facing the same problem, I'd recommend trying it with sklearn.</p>
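<p>For anyone taking the same route, a hedged sketch of the sklearn/plotly combination (<code>features</code> and <code>labels</code> are placeholders for the extracted feature matrix and the per-sample class names):</p>
<pre><code>from sklearn.manifold import TSNE
import plotly.express as px

emb = TSNE(n_components=2, init='pca', random_state=0).fit_transform(features)
fig = px.scatter(x=emb[:, 0], y=emb[:, 1], color=[str(c) for c in labels])
fig.show()
</code></pre>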
python|tensorflow|keras|conv-neural-network|tensorboard
0
376,477
67,333,907
Subtract sum of two rows from another row based on condition in Pandas
<p>I have a <strong>dataframe</strong> with three columns- <code>Col1</code>, <code>Col2</code> and <code>Col3</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Col1</th> <th>Col2</th> <th>Col3</th> </tr> </thead> <tbody> <tr> <td>x</td> <td>a</td> <td>10</td> </tr> <tr> <td>x</td> <td>b</td> <td>12</td> </tr> <tr> <td>x</td> <td>c</td> <td>25</td> </tr> <tr> <td>y</td> <td>a</td> <td>13</td> </tr> <tr> <td>y</td> <td>b</td> <td>14</td> </tr> <tr> <td>y</td> <td>c</td> <td>37</td> </tr> </tbody> </table> </div> <p>I want to update the value of <code>Col3</code> for each <code>c</code> from <code>Col2</code> for each unique <code>Col1</code> values by the below logic:</p> <pre><code>c-(a+b) </code></pre> <p>The desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Col1</th> <th>Col2</th> <th>Col3</th> </tr> </thead> <tbody> <tr> <td>x</td> <td>a</td> <td>10</td> </tr> <tr> <td>x</td> <td>b</td> <td>12</td> </tr> <tr> <td>x</td> <td>c</td> <td>3</td> </tr> <tr> <td>y</td> <td>a</td> <td>13</td> </tr> <tr> <td>y</td> <td>b</td> <td>14</td> </tr> <tr> <td>y</td> <td>c</td> <td>10</td> </tr> </tbody> </table> </div> <p>I would like to know the appropriate operation that I need to do to achieve the desired solution. Thanks</p>
<p>We can select the subset of rows where the <code>Col2</code> values is either <code>a</code> or <code>b</code>, then group these rows by <code>Col1</code> and transform using <code>sum</code> to calculate the transformed sum <code>a + b</code> per group, finally subtract the transformed sum from <code>Col3</code> where the corresponding <code>Col2</code> is <code>c</code></p> <pre><code>m = df['Col2'].isin(['a', 'b']) s = df['Col3'].where(m).groupby(df['Col1']).transform('sum') df.loc[df['Col2'].eq('c'), 'Col3'] -= s </code></pre> <hr /> <pre><code> Col1 Col2 Col3 0 x a 10.0 1 x b 12.0 2 x c 3.0 3 y a 13.0 4 y b 14.0 5 y c 10.0 </code></pre>
python|pandas|dataframe
2
376,478
67,245,347
how to get the first 3 elements of a string?
<p>I have a column in pandas which holds postal codes, like this: <code>V6N 3S1</code>. How can I make a new column with the first 3 characters of each postal code? For example, <code>V6N</code> in my example.</p>
<p>Use the pandas string accessor to slice the first three characters, assigning the result to a new column (named here, say, <code>prefix</code>):</p>
<pre><code>df['prefix'] = df['postal_codes'].str[:3]
</code></pre>
pandas|text
1
376,479
67,360,858
Numpy subarray affects the original 2d array
<p>I created this <code>2D array</code> with <code>numpy</code>:</p>
<pre><code>&gt;&gt;&gt;import numpy as np
&gt;&gt;&gt;np.random.seed(0)
&gt;&gt;&gt;x2 = np.random.randint(10, size=(3, 4))
&gt;&gt;&gt;print(x2)

[[5 0 3 3]
 [7 9 3 5]
 [2 4 7 6]]
</code></pre>
<p>Then I created another subarray from <code>x2</code>:</p>
<pre><code>&gt;&gt;&gt;x2_sub = x2[:2, :2]
&gt;&gt;&gt;print(x2_sub)

[[5 0]
 [7 9]]
</code></pre>
<p>Now if I modify this subarray, the original array is changed!!:</p>
<pre><code>&gt;&gt;&gt;x2_sub[0, 0] = 99
&gt;&gt;&gt;print(x2_sub)

[[99  0]
 [ 7  9]]

&gt;&gt;&gt;print(x2)

[[99  0  3  3]
 [ 7  9  3  5]
 [ 2  4  7  6]]
</code></pre>
<p>I don't want the original array to change. Can anyone tell me what I'm doing wrong?</p>
<p>Slices in numpy create a <em>view</em> unlike Python lists. Use <code>.copy()</code> to explicitly create a copy:</p> <pre><code>x2_sub = x2[:2, :2].copy() </code></pre>
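<p>You can verify the difference directly with <code>np.shares_memory</code>:</p>
<pre><code>import numpy as np

x2 = np.random.randint(10, size=(3, 4))
view = x2[:2, :2]
copy = x2[:2, :2].copy()
print(np.shares_memory(x2, view))   # True  -&gt; writes show up in x2
print(np.shares_memory(x2, copy))   # False -&gt; independent data
</code></pre>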
python|arrays|numpy
2
376,480
67,549,738
Transformation of the 3d numpy array
<p>I have 3d array and I need to set to zero its right part. For each 2d slice (n, :, :) of the array the index of the column should be taken from vector b. This index defines separating point - the left and right parts, as shown in the figure below.</p> <p><a href="https://i.stack.imgur.com/gx8qK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gx8qK.jpg" alt="enter image description here" /></a></p> <pre><code>a_before = [[[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12] [13 14 15 16]] [[17 18 19 20] [21 22 23 24] [25 26 27 28] [29 30 31 32]] [[33 34 35 36] [37 38 39 40] [41 42 43 44] [45 46 47 48]]] a_before.shape = (3, 4, 4) b = (2, 3, 1) a_after_1 = [[[ 1 2 0 0] [ 5 6 0 0] [ 9 10 0 0] [13 14 0 0]] [[17 18 19 0] [21 22 23 0] [25 26 27 0] [29 30 31 0]] [[33 0 0 0] [37 0 0 0] [41 0 0 0] [45 0 0 0]]] </code></pre> <p>After this, for each 2d slice (n, :, :) I have to take index of the column from c vector and multiply by the corresponding value taken from the vector d.</p> <pre><code>c = (1, 2, 0) d = (50, 100, 150) a_after_2 = [[[ 1 100 0 0] [ 5 300 0 0] [ 9 500 0 0] [13 700 0 0]] [[17 18 1900 0] [21 22 2300 0] [25 26 2700 0] [29 30 3100 0]] [[4950 0 0 0] [5550 0 0 0] [6150 0 0 0] [6750 0 0 0]]] </code></pre> <p>I did it but my version looks ugly. Maybe someone can help me.</p> <p>P.S. I would like to avoid for loops and use only numpy methods.</p> <p>Thank You.</p>
<p>Here's a version without loops.</p> <pre><code>In [232]: A = np.arange(1,49).reshape(3,4,4) In [233]: b = np.array([2,3,1]) In [234]: d = np.array([50,100,150]) In [235]: I,J = np.nonzero(b[:,None]&lt;=np.arange(4)) In [236]: A[I,:,J]=0 In [237]: A[np.arange(3),:,b-1] *= d[:,None] In [238]: A Out[238]: array([[[ 1, 100, 0, 0], [ 5, 300, 0, 0], [ 9, 500, 0, 0], [ 13, 700, 0, 0]], [[ 17, 18, 1900, 0], [ 21, 22, 2300, 0], [ 25, 26, 2700, 0], [ 29, 30, 3100, 0]], [[4950, 0, 0, 0], [5550, 0, 0, 0], [6150, 0, 0, 0], [6750, 0, 0, 0]]]) </code></pre> <p>Before I developed this, I wrote an iterative version. It helped me visualize the problem.</p> <pre><code>In [240]: Ac = np.arange(1,49).reshape(3,4,4) In [241]: In [241]: for i,v in enumerate(b): ...: Ac[i,:,v:]=0 ...: In [242]: for i,(bi,di) in enumerate(zip(b,d)): ...: Ac[i,:,bi-1]*=di </code></pre> <p>It may be easier to understand, and in that sense, less ugly!</p> <p>The fact that your <code>A</code> has middle dimension that is &quot;just-going-along&quot; for the ride, complicates &quot;vectorizing&quot; the problem.</p> <p>With a (3,4) 2d array, the solution is just:</p> <pre><code>In [251]: Ab = Ac[:,0,:] In [252]: Ab[b[:,None]&lt;=np.arange(4)]=0 In [253]: Ab[np.arange(3),b-1]*=d </code></pre>
python|arrays|numpy
2
376,481
67,279,958
Returning indices from pytorch Dataset: Function to alter __getitem__ results in metaclass conflict
<p>I have multiple classes (for different datasets) that inherit from pytorch's Dataset class. They have a general structure, like so:</p>
<pre><code>from torch.utils.data import Dataset

class SomeDataset(Dataset):
    def __init__(self, data, labels):
        super(SomeDataset, self).__init__()
        self.data = data
        self.labels = labels
        self.__name__ = 'SomeDataset'

    def __getitem__(self, index):
        return {'data': self.data[index],
                'label': self.labels[index]}

    def __len__(self):
        return len(self.data)
</code></pre>
<p>Recently I have realised that it would be beneficial to keep track of the labels passed into the Dataloader when batching, so upon googling how to do this I came across <a href="https://discuss.pytorch.org/t/how-to-retrieve-the-sample-indices-of-a-mini-batch/7948/16" rel="nofollow noreferrer">this thread</a>, from which I have adapted the code to write this function:</p>
<pre><code>def return_indices(dataset_class):
    def __getitem__(self, index):
        return {'index': index,
                **dataset_class.__getitem__(self, index)}

    return type(dataset_class.__name__,
                (dataset_class, ),
                {'__getitem__': __getitem__})
</code></pre>
<p>I had never seen <code>type</code> used like this before, but after some googling, it made <em>some</em> sense, so I tried it out. Unfortunately this led to this error:</p>
<pre><code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>which led to a whole lot more googling, and even though I'm beginning to grasp what a metaclass is and how it's used, I still can't figure out what is wrong with this approach or how to solve it. I'm starting to think that maybe it would be easier to rewrite this functionality into my dataset classes instead of having some neat wrapper that does it for me. Can anyone weigh in with whatever it is I'm missing?</p>
<p>Just do this:</p>
<pre><code>def return_indices(dataset_class):
    def __getitem__(self, index):
        return {'index': index,
                **dataset_class.__getitem__(self, index)}

    metacls = type(dataset_class)
    return metacls(dataset_class.__name__,
                (dataset_class, ),
                {'__getitem__': __getitem__})
</code></pre>
<p>What takes place: as you found out, the 3-parameter call to <code>type</code> is a way to create a new class programmatically in Python, without the need for a &quot;class&quot; statement and its body.</p>
<p>But <code>type</code> is the &quot;base metaclass&quot;, and while its instances will be ordinary classes, it also &quot;hardcodes&quot; the metaclass of the class you are creating to itself. In contrast, using the <code>class</code> statement makes Python search for a suitable metaclass among the bases of the class you are creating.</p>
<p>So just use your derived class's own metaclass instead (obtained by either the one-parameter form of <code>type</code>, as above, or by the <code>__class__</code> attribute, as in <code>dataset_class.__class__</code>). Calling it in place of <code>type</code> makes it the metaclass of the new class as well, and things should work.</p>
<p><strong>NB</strong>: As there are a couple more mechanisms to metaclasses, like <code>__prepare__</code>, just calling the metaclass instead of <code>type</code> will not always work - the correct generic way involves calling <code>types.prepare_class</code> and <code>types.new_class</code> with a callback that performs the equivalent of executing the class body in a class statement. That won't be needed for most cases.</p>
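<p>Usage then looks like this (reusing <code>SomeDataset</code> from the question; <code>data</code> and <code>labels</code> stand for whatever sequences you normally pass in):</p>
<pre><code>IndexedDataset = return_indices(SomeDataset)
ds = IndexedDataset(data, labels)
print(ds[0])   # {'index': 0, 'data': ..., 'label': ...}
</code></pre>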
python|python-3.x|pytorch|metaclass
0
376,482
67,294,320
How to split a numpy array into arrays with a specific number of elements
<p>I know that np.array_split allows us to split a NumPy array, but the number of elements in the split arrays only depends on the number of split chunks. The following example shows what I get and what I wish to get (the size of my_array is 35):</p> <pre><code>my_array = [1 1 1 1 1 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 1 1 1 1] np.array_split(my_array, 5) outputs: [array([1, 1, 1, 1, 1, 0, 0]), array([0, 1, 1, 0, 1, 1, 1]), array([0, 1, 0, 1, 0, 0, 0]), array([0, 1, 0, 0, 1, 0, 1]), array([1, 0, 0, 1, 1, 1, 1])] AND np.array_split(my_array, 4) outputs: [array([1, 1, 1, 1, 1, 0, 0, 0, 1]), array([1, 0, 1, 1, 1, 0, 1, 0, 1]), array([0, 0, 0, 0, 1, 0, 0, 1, 0]), array([1, 1, 0, 0, 1, 1, 1, 1])] However, what I need is split arrays with 8 elements like this: [array([1, 1, 1, 1, 1, 0, 0, 0]), array([1, 1, 0, 1, 1, 1, 0, 1]), array([0, 1, 0, 0, 0, 0, 1, 0]), array([0, 1, 0, 1, 1, 0, 0, 1]), array([1, 1, 1]) </code></pre> <p>I know that np.array_split(my_array, [8, 16, 24, 32]) can give me the answer for this specific question, but what I wish to do is that, for any size of an array, I could split it into arrays with a specific number of elements except for the last one if the array is not divisible.</p>
<p>You are close:</p> <pre><code>my_array = np.arange(35) N = 8 </code></pre> <pre><code>&gt;&gt;&gt; np.array_split(my_array, range(N, len(my_array), N)) [array([0, 1, 2, 3, 4, 5, 6, 7]), array([ 8, 9, 10, 11, 12, 13, 14, 15]), array([16, 17, 18, 19, 20, 21, 22, 23]), array([24, 25, 26, 27, 28, 29, 30, 31]), array([32, 33, 34])] </code></pre>
python|arrays|numpy
1
376,483
67,513,640
Create a new dataframe by removing the outliers from the column
<p>I am working through an outlier-removal tutorial, but I am quite confused because this loop is not working properly:</p>
<pre class="lang-py prettyprint-override"><code>target = df['ConvertedComp']
mean = target.mean()
sd = target.std()

for x in target:
    z_score = (x-mean)/sd
    if np.abs(z_score) &gt; 3:
        selected_df = df[df.ConvertedComp != x]
</code></pre>
<p>Also, is there another method to efficiently create a new dataframe without the outliers? Thank you! I hope I can learn something new.</p>
<p>You can try the following code to select rows where z_score calculated from <code>ConvertedComp</code> column is less than or equal to 3.</p> <pre class="lang-py prettyprint-override"><code>mask = df['ConvertedComp'].sub(df['ConvertedComp'].mean()).div(df['ConvertedComp'].std()).abs().le(3) df = df[mask] </code></pre>
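<p>Equivalently, scipy ships a ready-made z-score helper, so a hedged shorter variant is the following (with <code>nan_policy='omit'</code> mirroring pandas' NaN-skipping mean/std; rows with NaN in the column are dropped either way):</p>
<pre><code>import numpy as np
from scipy import stats

df = df[np.abs(stats.zscore(df['ConvertedComp'], nan_policy='omit')) &lt;= 3]
</code></pre>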
python|pandas|dataframe|outliers
0
376,484
67,237,732
Efficient Way to Access an Element of a PyTorch Tensor?
<p>I want to extract only the first element of a very large pytorch tensor. I've seen posts talking about options like <code>my_tensor.numpy()[0]</code> or <code>my_tensor.detach().numpy()[0]</code> if I'm using requires_grad. This seems really inefficient just to access one element, especially if my tensor is big. Is there any way around this?</p>
<p>If you have a 1d tensor, you can access the first element with:</p>
<pre><code>my_tensor[0].item()
</code></pre>
<p>If your tensor is higher dimensional, you will need to index it fully, like:</p>
<pre><code>my_3dtensor[0,0,0].item()
</code></pre>
<p><code>.item()</code> copies only that single scalar back to Python, so there is no full numpy conversion, and it works even when <code>requires_grad</code> is set - no <code>detach()</code> needed.</p>
python|pytorch|tensor
0
376,485
67,236,791
Asking advice on EEG classification using Keras
<p>I have a dataset on EEG with this shape:</p>
<pre><code>(11,1158, 200)
</code></pre>
<p>Where</p>
<pre><code>11 is the number of EEG channels
1158 is the number of tasks
200 is the time interval of each task
</code></pre>
<p>For example, if you plot a task, you'll get (note that the data is normalized):</p>
<p><a href="https://i.stack.imgur.com/Ckxn9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ckxn9.png" alt="enter image description here" /></a></p>
<p>Each task belongs to one class (for example, seeing a picture from class 2; the total number of classes in my dataset is 5).</p>
<p>Now I converted my array to this shape:</p>
<pre><code>(1158, 200, 11)
</code></pre>
<p>so that the model can differentiate each task. This is the model that I used:</p>
<pre><code>opt = keras.optimizers.Adam(learning_rate=1e-4)

model = Sequential()
model.add(Conv1D(filters=128, kernel_size=64, activation='relu', input_shape=(200, 11)))
model.add(Conv1D(filters=64, kernel_size=8, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(5, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_valid,y_valid), epochs=50, batch_size=16)
</code></pre>
<p>I tried many different hyper-parameters, but all my results look somewhat like this:</p>
<pre><code>Epoch 50/50
58/58 [==============================] - 0s 5ms/step - loss: 0.1281 - accuracy: 0.9946 - val_loss: 2.7850 - val_accuracy: 0.1897
</code></pre>
<p>The training accuracy is high, but validation accuracy is between 20% and 25% (100/5 = 20, where 5 is the number of classes), which basically means the model predicts something random. Is my approach wrong? If so, how should I solve this problem?</p>
<p>As your data is essentially a time-series classification problem, my instinct is to start with something LSTM based.</p>
<p>Sadly, my second insight is related to data size. Your feature space is 200x11=2200 and your sample size is 1158. I tend to think of using deep learning when the sample size is more than 5 times the feature size, and really think that DL starts to shine around a 10 times ratio. If gathering more data is not a realistic possibility, then you will need to look at other solutions to having a high-variance problem (the model is too complex for the amount of data available).</p>
<p>Third, I can comment that your data shape looks right to me. Assuming you normalized and shaped your validation set equivalently to your training set, that is likely not an issue.</p>
<p>Edit: I just noticed that the final epoch of training shows 58/58 for the samples trained. This value denotes the number of samples in the training set. Is this what you were expecting? I would have expected the number of samples to be, say, 70% of your total set, e.g. 810/810. If you're unsure of what I'm asking, please post the size of y_train and y_valid. This may help myself and others verify the training process.</p>
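<p>A hedged starting point for such a model (the layer sizes are guesses meant to be tuned, not recommendations):</p>
<pre><code>from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.LSTM(64, input_shape=(200, 11)),   # 200 time steps, 11 channels
    layers.Dropout(0.5),
    layers.Dense(5, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
</code></pre>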
python|tensorflow|machine-learning|keras|neuroscience
0
376,486
67,423,937
Vectorize a function with a condition
<p>I would like to vectorize a function with a condition, meaning to calculate its values with array arithmetic. <code>np.vectorize</code> handles vectorization, but it does not work with array arithmetic, so it is not a complete solution.</p>
<p>An answer was given as the solution in the question &quot;<a href="https://stackoverflow.com/questions/24646472/how-to-vectorize-a-function-which-contains-an-if-statement">How to vectorize a function which contains an if statement?</a>&quot; but did not prevent errors here; see the MWE below.</p>
<pre><code>import numpy as np

def myfx(x):
    return np.where(x &lt; 1.1, 1, np.arcsin(1 / x))

x = np.arange(5, dtype=float)  # example input containing 0 and values below 1.1
y = myfx(x)
</code></pre>
<p>This runs but raises the following warnings:</p>
<pre><code>&lt;stdin&gt;:2: RuntimeWarning: divide by zero encountered in true_divide
&lt;stdin&gt;:2: RuntimeWarning: invalid value encountered in arcsin
</code></pre>
<p>What is the problem, or is there a better way to do this?</p>
<p>I think this could be done by</p>
<ol>
<li>Getting the indices <code>ks</code> of <code>x</code> for which <code>x[k] &gt; 1.1</code> for each <code>k</code> in <code>ks</code>.</li>
<li>Applying <code>np.arcsin(1 / x[ks])</code> to the slice <code>x[ks]</code>, and using 1 for the rest of the elements.</li>
<li>Recombining the arrays.</li>
</ol>
<p>I am not sure about the efficiency, though.</p>
<p>The statement <code>np.where(x &lt; 1.1, 1, np.arcsin(1 / x))</code> is equivalent to</p> <pre><code>mask = x &lt; 1.1 a = 1 b = np.arcsin(1 / x) np.where(mask, a, b) </code></pre> <p>Notice that you're calling <code>np.arcsin</code> on all the elements of <code>x</code>, regardless of whether <code>1 / x &lt;= 1</code> or not. Your basic plan is correct. You can do the operations in-place on an output array using the <code>where</code> keyword of <a href="https://numpy.org/doc/stable/reference/generated/numpy.arcsin.html" rel="nofollow noreferrer"><code>np.arcsin</code></a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.reciprocal.html" rel="nofollow noreferrer"><code>np.reciprocal</code></a>, without having to recombine anything:</p> <pre><code>def myfx(x): mask = (x &gt;= 1.1) out = np.ones(x.shape) np.reciprocal(x, where=mask, out=out) # &gt;= 1.1 implies != 0 return np.arcsin(out, where=mask, out=out) </code></pre> <p>Using <a href="https://numpy.org/doc/stable/reference/generated/numpy.ones.html" rel="nofollow noreferrer"><code>np.ones</code></a> ensures that the unmasked elements of <code>out</code> are initialized correctly. An equivalent method would be</p> <pre><code>out = np.empty(x.shape) out[~mask] = 1 </code></pre>
numpy|conditional-statements|vectorization
2
376,487
67,199,019
How to add 91 to all the values in a column of a pandas data frame?
<p>Consider my data frame to be like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr><th>S.no</th><th>Phone Number</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>9955290232</td></tr>
<tr><td>2</td><td>8752837492</td></tr>
<tr><td>3</td><td>9342832245</td></tr>
<tr><td>4</td><td>919485928837</td></tr>
<tr><td>5</td><td>917482482938</td></tr>
<tr><td>6</td><td>98273642733</td></tr>
</tbody>
</table>
</div>
<p>I want the values in the &quot;Phone Number&quot; column to be prefixed with 91. If a value already has the 91 prefix, leave it as is and proceed to the next value.</p>
<p>My expected output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr><th>S.no</th><th>Phone Number</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>919955290232</td></tr>
<tr><td>2</td><td>918752837492</td></tr>
<tr><td>3</td><td>919342832245</td></tr>
<tr><td>4</td><td>919485928837</td></tr>
<tr><td>5</td><td>917482482938</td></tr>
<tr><td>6</td><td>919827364273</td></tr>
</tbody>
</table>
</div>
<p>How could this be done?</p>
<p>Simplest would be to convert to string, add <code>91</code> to the beginning, and slice to the last 12 digits:</p>
<pre><code>df['New Phone Number'] = df['Phone Number'].astype(str).radd(&quot;91&quot;).str[-12:]
</code></pre>
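<p>If you want the &quot;already has 91&quot; check to be explicit rather than implied by the slice, a hedged equivalent is:</p>
<pre><code>import numpy as np

s = df['Phone Number'].astype(str)
already = s.str.startswith('91') &amp; (s.str.len() == 12)
df['New Phone Number'] = np.where(already, s, '91' + s)
</code></pre>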
python|pandas|dataframe|numpy
7
376,488
67,471,592
Deleting rows based on time interval in pandas
<p>I have a dataframe with datetime timestamps (every 1 minute). I'd like to increase the time interval between rows to 5 minutes. Basically keep rows 0, 5, 10 etc and remove the rest. How would I do that?</p> <pre><code>Date Value 17/08/2017 04:00:00 0 17/08/2017 04:01:00 1 17/08/2017 04:02:00 2 17/08/2017 04:03:00 3 17/08/2017 04:04:00 4 17/08/2017 04:05:00 5 17/08/2017 04:06:00 6 17/08/2017 04:07:00 7 17/08/2017 04:08:00 8 17/08/2017 04:09:00 9 17/08/2017 04:10:00 10 </code></pre> <p>Thanks</p>
<p>Firstly, convert your date column to datetime dtype using the <code>to_datetime()</code> method (if it's already datetime, skip this step):</p>
<pre><code>df['Date']=pd.to_datetime(df['Date'])
</code></pre>
<p>Then you can do this with boolean masking:</p>
<pre><code>newdf=df[df['Date'].dt.minute%5==0]
</code></pre>
<p>Now if you print <code>newdf</code>, you will get your desired output:</p>
<pre><code>        Date                Value
0       2017-08-17 04:00:00     0
5       2017-08-17 04:05:00     5
10      2017-08-17 04:10:00    10
</code></pre>
<p>If needed, use the <code>reset_index()</code> method:</p>
<pre><code>newdf=newdf.reset_index(drop=True)
</code></pre>
<p>Output of the above code:</p>
<pre><code>        Date                Value
0       2017-08-17 04:00:00     0
1       2017-08-17 04:05:00     5
2       2017-08-17 04:10:00    10
</code></pre>
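<p>If the timestamps are strictly regular at one minute, an index-based alternative is to keep every fifth timestamp with <code>asfreq</code> (this assumes no gaps; with gaps you would get NaN rows at the missing marks):</p>
<pre><code>out = df.set_index('Date').asfreq('5T').reset_index()
</code></pre>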
python|pandas|sorting|datetime
2
376,489
67,395,873
Tensorflow-text: NotFoundError: _text_similarity_metric_ops.so not found
<pre><code>import tensorflow_text </code></pre> <p>I'm trying to run this on Windows 10 (Pro), version 1909. Attempts on <strong>Python 3.8.5, 3.6.13, and 3.7</strong> brought no result: I get the same error each time.</p> <p>I'm using Jupyter Notebook and conda (4.10.1).</p> <p><strong>TensorFlow version: 2.1.0</strong>; the downloaded tensorflow-text wheel is <strong>&quot;tensorflow_text-2.4.3-cp36&quot;</strong>.</p> <p>I'm now trying to reinstall conda and switch TensorFlow versions. I hope this issue gets fixed soon.</p>
<p><em>So, I solved this problem myself!</em> The error comes from mismatched TensorFlow and tensorflow-text versions, so the fix is to install matching ones.</p> <p>All you have to do is:</p> <ol> <li><p><em>Set up a conda environment in Anaconda, then in the Anaconda prompt run <code>conda activate &lt;your_environment_name&gt;</code></em></p> </li> <li><p><code>pip install tensorflow==2.4.1</code>, <code>pip install tensorflow-text==2.4.1</code></p> </li> </ol> <p>Then it should work. Remember to run on <strong>Python 3.7.10</strong>.</p> <p>The TensorFlow maintainers say you can also run it on Python 3.6 and 3.8, but be careful with TF 2.4.1: I have seen reports that <em>Python 3.6 cannot run that particular version.</em></p> <p>Best wishes, Temio</p>
python|tensorflow
1
376,490
67,442,911
Altair doesn't display charts when running a script from terminal?
<p>I am following this tutorial example on my Mac Pro running Big Sur:</p> <pre><code>https://altair-viz.github.io/gallery/simple_bar_chart.html </code></pre> <p>vtest.py is below:</p> <pre><code>import altair as alt import pandas as pd source = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'], 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52] }) alt.Chart(source).mark_bar().encode( x='a', y='b' ) </code></pre> <p>When I execute this in a terminal, nothing is displayed: no visualization shows up, and there is no error or warning message either.</p> <pre><code>% python vtest.py </code></pre> <p>Doesn't Altair work on macOS?</p>
<p>If you want to show an Altair plot by running a script from terminal, you can use the <code>.show()</code> method to open it in your default browser:</p> <pre><code>alt.Chart(source).mark_bar().encode( x='a', y='b' ).show() </code></pre> <p>The docs include a section with <a href="https://altair-viz.github.io/user_guide/display_frontends.html" rel="nofollow noreferrer">more details about how Altair charts are rendered and how to display them conveniently</a>, starting with this passage:</p> <blockquote> <p>Altair produces Vega-Lite visualizations, which require a Javascript frontend to display the charts. Because notebook environments combine a Python backend with a Javascript frontend, many users find them convenient for using Altair.</p> </blockquote>
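<p>Depending on your Altair version, <code>.show()</code> may additionally require the <code>altair_viewer</code> package. An alternative that needs no extra dependency is to save the chart to an HTML file and open that in a browser (a sketch, with a hypothetical output filename):</p>
<pre><code>chart = alt.Chart(source).mark_bar().encode(x='a', y='b')
chart.save('chart.html')  # then open chart.html in any browser
</code></pre>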
python|pandas|visualization|altair
1
376,491
67,583,402
how to replace values on a dataframe using pandas and streamlit in python?
<p>I have a Python script that reads a dataframe using pandas and displays its content using streamlit.</p> <p>What I want is to replace a <strong>current value</strong> with a <strong>new value</strong> based on user input.</p> <p>The user <strong>selects the required column</strong>, then enters the <strong>current value</strong> in one text field and the <strong>new value</strong> in a second text field; when the <strong>Replace</strong> button is pressed, the <strong>old value</strong> is replaced by the <strong>new value</strong> and the new dataframe is displayed.</p> <p><strong>The problem is that when the dataframe is displayed, nothing has changed.</strong></p> <h1>code:</h1> <pre><code>import pandas as pd import streamlit as st df = pd.DataFrame({ &quot;source_number&quot;: [11199,11328,11287,32345,12342,1232,13456,123244,13456], &quot;location&quot;: [&quot;loc2&quot;,&quot;loc1&quot;,&quot;loc3&quot;,&quot;loc1&quot;,&quot;loc2&quot;,&quot;loc2&quot;,&quot;loc3&quot;,&quot;loc2&quot;,&quot;loc1&quot;], &quot;category&quot;: [&quot;cat1&quot;,&quot;cat2&quot;,&quot;cat1&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat2&quot;,&quot;cat3&quot;,&quot;cat2&quot;], }) columns = st.selectbox(&quot;Select column&quot;, df.columns) old_values = st.multiselect(&quot;Current Values&quot;,list(df[columns].unique()),list(df[columns].unique())) col1,col2 = st.beta_columns(2) with col1: old_val = st.text_input(&quot;old value&quot;) with col2: new_val = st.text_input(&quot;new value&quot;) if st.button(&quot;Replace&quot;): df[columns]=df[columns].replace({old_val:new_val}) st.dataframe(df) </code></pre>
<p>Your code works for <strong>text</strong> columns (<code>location</code> and <code>category</code>). It doesn't work for the <strong>numeric</strong> <code>source_number</code> column, as you're trying to replace one <strong>string</strong> with another.</p> <p>For numeric columns you'll need to use <code>number_input</code> instead of <code>text_input</code>:</p> <pre><code>import pandas as pd from pandas.api.types import is_numeric_dtype import streamlit as st df = pd.DataFrame({ &quot;source_number&quot;: [11199,11328,11287,32345,12342,1232,13456,123244,13456], &quot;location&quot;: [&quot;loc2&quot;,&quot;loc1&quot;,&quot;loc3&quot;,&quot;loc1&quot;,&quot;loc2&quot;,&quot;loc2&quot;,&quot;loc3&quot;,&quot;loc2&quot;,&quot;loc1&quot;], &quot;category&quot;: [&quot;cat1&quot;,&quot;cat2&quot;,&quot;cat1&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat2&quot;,&quot;cat3&quot;,&quot;cat2&quot;], }) columns = st.selectbox(&quot;Select column&quot;, df.columns) old_values = st.multiselect(&quot;Current Values&quot;,list(df[columns].unique()),list(df[columns].unique())) col1,col2 = st.beta_columns(2) st_input = st.number_input if is_numeric_dtype(df[columns]) else st.text_input with col1: old_val = st_input(&quot;old value&quot;) with col2: new_val = st_input(&quot;new value&quot;) if st.button(&quot;Replace&quot;): df[columns]=df[columns].replace({old_val:new_val}) st.dataframe(df) </code></pre> <hr> <p>Update as per comment: you could cache the <code>df</code> to prevent re-initialization upon each widget interaction (you'll have to manually clear the cache to start over):</p> <pre><code>@st.cache(allow_output_mutation=True) def get_df(): return pd.DataFrame({ &quot;source_number&quot;: [11199,11328,11287,32345,12342,1232,13456,123244,13456], &quot;location&quot;: [&quot;loc2&quot;,&quot;loc1&quot;,&quot;loc3&quot;,&quot;loc1&quot;,&quot;loc2&quot;,&quot;loc2&quot;,&quot;loc3&quot;,&quot;loc2&quot;,&quot;loc1&quot;], &quot;category&quot;: [&quot;cat1&quot;,&quot;cat2&quot;,&quot;cat1&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat3&quot;,&quot;cat2&quot;,&quot;cat3&quot;,&quot;cat2&quot;], }) df = get_df() </code></pre>
python|pandas|replace|streamlit
0
376,492
67,191,434
Parse a log line and store in `Pandas.DataFrame`
<p>Suppose I have a <code>pandas.DataFrame</code>:</p> <pre><code>log_df = pd.DataFrame(columns=['type', 'ts', 'process', 'subprocess', 'num', 'message']) </code></pre> <p>and a log file which contains lines in the following format:</p> <pre><code>ERROR:2021-04-19 08:43:10,562:trigger_manager.py:SpawnProcess-2:29:Stream has ended </code></pre> <p>and I'd like to split each line on <code>:</code>, but the problem is that the timestamp field itself contains <code>:</code> characters, which obviously should not be split on.</p> <p>I've tried a simple <code>str.split(sep=':')</code>, which splits the timestamp apart. I think I should use a <code>regex</code>, but I don't know how to approach this task.</p> <p>I'd appreciate any help.</p> <p>Thanks in advance.</p>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>.str.extract()</code></a> to extract the log file contents as follows:</p> <p>For testing purposes, I created one line of data from your sample log file in the series <code>log_file</code>. You can replace it with your data:</p> <pre><code>log_file = pd.Series(['ERROR:2021-04-19 08:43:10,562:trigger_manager.py:SpawnProcess-2:29:Stream has ended']) log_df = log_file.str.extract(r'(?P&lt;type&gt;[^:]+):(?P&lt;ts&gt;.+,\d+):(?P&lt;process&gt;[^:]+):(?P&lt;subprocess&gt;[^:]+):(?P&lt;num&gt;[^:]+):(?P&lt;message&gt;[^:]+)') print(log_df) type ts process subprocess num message 0 ERROR 2021-04-19 08:43:10,562 trigger_manager.py SpawnProcess-2 29 Stream has ended </code></pre> <h2>Regex Explanation</h2> <p>I extract your sample data according to the column names of the target dataframe, as follows:</p> <p><code>(?P&lt;type&gt;[^:]+)</code> named capturing group for the log <code>type</code>. Here <code>[^:]</code> matches characters other than <code>:</code> so that we can extract characters before the separator <code>:</code></p> <p><code>:</code> match the separator <code>:</code> literally</p> <p><code>(?P&lt;ts&gt;.+,\d+)</code> named capturing group for timestamp <code>ts</code> with milliseconds. We can use <code>.+</code> instead of <code>[^:]+</code> because the trailing <code>,\d+</code> for the milliseconds anchors where the timestamp ends.</p> <p><code>:</code> match the separator <code>:</code> literally</p> <p><code>(?P&lt;process&gt;[^:]+)</code> named capturing group for <code>process</code></p> <p><code>:</code> match the separator <code>:</code> literally</p> <p><code>(?P&lt;subprocess&gt;[^:]+)</code> named capturing group for <code>subprocess</code></p> <p><code>:</code> match the separator <code>:</code> literally</p> <p><code>(?P&lt;num&gt;[^:]+)</code> named capturing group for <code>num</code></p> <p><code>:</code> match the separator <code>:</code> literally</p> <p><code>(?P&lt;message&gt;[^:]+)</code> named capturing group for <code>message</code></p>
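<p>If you prefer to avoid regex, a plain <code>split(':')</code> followed by re-joining the timestamp pieces also works, since the timestamp always occupies exactly three <code>:</code>-separated chunks (a sketch on the sample line; the field order is assumed fixed):</p>
<pre><code>line = 'ERROR:2021-04-19 08:43:10,562:trigger_manager.py:SpawnProcess-2:29:Stream has ended'
parts = line.split(':')
record = {
    'type': parts[0],
    'ts': ':'.join(parts[1:4]),       # the timestamp spans three chunks
    'process': parts[4],
    'subprocess': parts[5],
    'num': parts[6],
    'message': ':'.join(parts[7:]),   # keep any ':' inside the message
}
</code></pre>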
python|python-3.x|regex|pandas
2
376,493
67,387,433
Python csv to json using pandas - csv columns to nested json
<p><strong>Python 3.8.5 with Pandas 1.1.3</strong></p> <p>I have a csv file with columns: name, city, state, and zipcode. I need to convert to json with the city, state, and zipcode column values inside an object called residence.</p> <p>For example:</p> <p>CSV file</p> <pre><code>Name City State Zipcode John Doe Akron OH 44140 </code></pre> <p>I need the JSON output to be structured like:</p> <pre><code>{ &quot;name&quot;: &quot;John Doe&quot;, &quot;residence&quot; : { &quot;city&quot;: &quot;Akron&quot;, &quot;state&quot;: &quot;OH&quot;, &quot;zipcode&quot;: 44140 } } </code></pre> <p>I currently use Pandas to convert csv to json using the following code:</p> <pre><code>import pandas as pd csv_file = pd.DataFrame(pd.read_csv(&quot;data.csv&quot;, sep = &quot;,&quot;, header = 0, index_col = False)) csv_file.to_json(&quot;data.json&quot;, orient = &quot;records&quot;, lines = True, date_format = &quot;iso&quot;, double_precision = 10, force_ascii = True, date_unit = &quot;ms&quot;, default_handler = None) </code></pre> <p>As-is, that just converts each column to a json key:value.</p> <p>How can I add to this code to achieve my desired output?</p>
<p>IIUC try creating the nested object row-wise first, then creating the JSON:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd csv_file = pd.read_csv(&quot;data.csv&quot;, sep=&quot;,&quot;, header=0, index_col=False) # Create Nested dict (Object) csv_file['Residence'] = csv_file[['City', 'State', 'Zipcode']].apply( lambda s: s.to_dict(), axis=1 ) # Write out Name and Residence Only csv_file[['Name', 'Residence']].to_json(&quot;data.json&quot;, orient=&quot;records&quot;, lines=True, date_format=&quot;iso&quot;, double_precision=10, force_ascii=True, date_unit=&quot;ms&quot;, default_handler=None) </code></pre> <p>data.csv</p> <pre> Name,City,State,Zipcode John Doe,Akron,OH,44140 Jane Smith,Los Angeles,California,90005 </pre> <p>data.json</p> <pre> {"Name":"John Doe","Residence":{"City":"Akron","State":"OH","Zipcode":44140}} {"Name":"Jane Smith","Residence":{"City":"Los Angeles","State":"California","Zipcode":90005}} </pre>
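<p>Note that <code>orient=&quot;records&quot;, lines=True</code> produces JSON Lines (one object per line). If you want the pretty-printed structure shown in the question instead, one option is to go through <code>to_dict</code> and the standard library (a sketch, reusing the frame built above):</p>
<pre><code>import json

records = csv_file[['Name', 'Residence']].to_dict(orient='records')
with open('data.json', 'w') as f:
    json.dump(records, f, indent=2)
</code></pre>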
python|pandas
1
376,494
67,360,987
BERT model bug encountered during training
<p>So, I made a custom dataset consisting of reviews from several e-learning sites. What I am trying to do is build a model that can recognize emotions based on text, and for training I am using the dataset I made via scraping. While working on BERT, I encountered this error:</p> <p><code>normalize() argument 2 must be str, not float</code></p> <p>Here's my code:</p> <pre><code>import numpy as np import pandas as pd import tensorflow as tf print(tf.__version__) import ktrain from ktrain import text from sklearn.model_selection import train_test_split import pickle #class_names = [&quot;Frustration&quot;, &quot;Not satisfied&quot;, &quot;Satisfied&quot;, &quot;Happy&quot;, &quot;Excitement&quot;] data = pd.read_csv(&quot;Final_scraped_dataset.csv&quot;) print(data.head()) X = data['Text'] y = data['Emotions'] class_names = np.unique(data['Emotions']) print(class_names) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42) print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) print(X_train.head(10)) encoding = { 'Frustration': 0, 'Not satisfied': 1, 'Satisfied': 2, 'Happy': 3, 'Excitement' : 4 } y_train = [encoding[x] for x in y_train] y_test = [encoding[x] for x in y_test] X_train = X_train.tolist() X_test = X_test.tolist() #print(X_train) (x_train, y_train), (x_test, y_test), preproc = text.texts_from_array(x_train=X_train, y_train=y_train, x_test=X_test, y_test=y_test, class_names=class_names, preprocess_mode='bert', maxlen=200, max_features=15000) #I've encountered the error here '''model = text.text_classifier('bert', train_data=(x_train, y_train), preproc=preproc) learner = ktrain.get_learner(model, train_data=(x_train, y_train), val_data=(x_test, y_test), batch_size=4) learner.fit_onecycle(2e-5, 3) learner.validate(val_data=(x_test, y_test)) predictor = ktrain.get_predictor(learner.model, preproc) predictor.get_classes() import time message = 'I hate you a lot' start_time = time.time() prediction = predictor.predict(message) print('predicted: {} ({:.2f})'.format(prediction, (time.time() - start_time))) # let's save the predictor for later use predictor.save(&quot;new_model/bert_model&quot;) print(&quot;SAVED _______&quot;)''' </code></pre> <p>Here's the complete error:</p> <pre><code> File &quot;D:\Sentiment analysis\BERT_model_new_dataset.py&quot;, line 73, in &lt;module&gt; max_features=15000) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\data.py&quot;, line 373, in texts_from_array trn = preproc.preprocess_train(x_train, y_train, verbose=verbose) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\preprocessor.py&quot;, line 796, in preprocess_train x = bert_tokenize(texts, self.tok, self.maxlen, verbose=verbose) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\preprocessor.py&quot;, line 166, in bert_tokenize ids, segments = tokenizer.encode(doc, max_len=max_length) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\keras_bert\tokenizer.py&quot;, line 73, in encode first_tokens = self._tokenize(first) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\keras_bert\tokenizer.py&quot;, line 103, in _tokenize text = unicodedata.normalize('NFD', text) TypeError: normalize() argument 2 must be str, not float </code></pre>
<p>It sounds like you may have a float value in your <code>data['Text']</code> column somehow.</p> <p>You can try something like this to shed more light on what's happening:</p> <pre class="lang-py prettyprint-override"><code>for i, s in enumerate(data['Text']): if not isinstance(s, str): print('Text in row %s is not a string: %s' % (i, s)) </code></pre>
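<p>A common cause is a missing value: empty CSV cells come back from <code>read_csv</code> as <code>NaN</code>, which is a float. If the loop above flags such rows, dropping or casting them before tokenization should fix the error (a sketch, assuming missing reviews can simply be discarded):</p>
<pre><code>data = data.dropna(subset=['Text'])
# or, to keep the rows, coerce everything to string instead:
# data['Text'] = data['Text'].astype(str)
</code></pre>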
python|pandas|numpy|tensorflow|bert-language-model
1
376,495
67,263,575
Pandas dataframe: how to permute rows and create new groups of combinations
<p>I have the following pandas dataframe <code>df</code> with 10 rows and 4 columns holding 3 categorical values:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(np.random.choice([&quot;dog&quot;, &quot;cat&quot;, &quot;mice&quot;], size=(10, 4))) </code></pre> <p>I would like to know all possible permutations of the rows and create a new dataframe containing the different groupings of row combinations, such as a group containing the same value twice in one row (cat cat dog mice) or four of the same value (mice mice mice mice), etc. I have tried with itertools without success. Can someone help with some pointers? Thanks</p>
<p>I hope I've understood your question right. This example will create Series where index is the combination and values are size of this combination:</p> <pre><code>from collections import Counter from itertools import permutations print( df.assign( items=df.apply( lambda x: [ frozenset(Counter(p).items()) for p in permutations(x, len(x)) ], axis=1, ) ) .explode(&quot;items&quot;) .groupby(&quot;items&quot;) .size() ) </code></pre> <p>Prints (for example):</p> <pre class="lang-none prettyprint-override"><code>items ((mice, 2), (dog, 2)) 48 ((cat, 1), (dog, 2), (mice, 1)) 48 ((cat, 3), (mice, 1)) 24 ((mice, 3), (cat, 1)) 24 ((dog, 1), (mice, 3)) 48 ((dog, 1), (cat, 2), (mice, 1)) 24 ((mice, 4)) 24 dtype: int64 </code></pre> <hr /> <p>EDIT: To get a dataframe:</p> <pre><code>x = ( df.assign( items=df.apply( lambda x: [ frozenset(Counter(p).items()) for p in permutations(x, len(x)) ], axis=1, ) ) .explode(&quot;items&quot;) .groupby(&quot;items&quot;) .size() ) df_out = ( pd.DataFrame([dict(i, count=v) for i, v in zip(x.index, x)]) .fillna(0) .astype(int) ) print(df_out) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> dog mice cat count 0 1 1 2 24 1 2 2 0 72 2 2 1 1 24 3 0 2 2 48 4 4 0 0 24 5 0 3 1 24 6 1 3 0 24 </code></pre>
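<p>As a side note, every permutation of a row yields the same multiset, so generating permutations only multiplies each count by 4! = 24. An equivalent and cheaper sketch (same <code>Counter</code> import as above) counts the multisets directly:</p>
<pre><code>from collections import Counter

counts = df.apply(lambda r: frozenset(Counter(r).items()), axis=1).value_counts() * 24
</code></pre>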
python|pandas|pandas-groupby|itertools
1
376,496
67,191,190
Create new columns and calculate values based on condition with date in Python
<p>I need to create new columns, Billing and Non-Billing, based on the Date column.</p> <p>Condition for column 1: if the Start Date is <code>NULL</code>/<code>BLANK</code>, or the Start Date is a future date, or the Start Date is a past date, or the End Date is a past date, then the row should count as Non-Billing.</p> <p>Condition for column 2: if the Start Date is the current date, then I need to create a new 'Billing' column and calculate it. The calculation should be along the row axis.</p> <p>Calculation for Billing in row: <code>Billing = df[Billing] * sum/168 * 100</code></p> <p>Calculation for Non-Billing in row: <code>Non-Billing = df[Non-Billing] * sum/ 168 * 100</code></p> <p><strong>Data:</strong></p> <pre><code>Employee Name | Java | Python | .NET | React | Start Date | End Date | |Anu | 10 | 10 | 5 | 5 | 04-21-2021 | | |Kalai | | 10 | | 5 | 04-21-2021 | 10-31-2021 | |Smirthi | | 10 | 20 | | 03-21-2021 | | |Madhu | 20 | 10 | 10 | | 01-12-2021 | | |Latha | 40 | | 5 | | | | </code></pre> <p><strong>Input</strong></p> <p><a href="https://i.stack.imgur.com/nC3sP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nC3sP.png" alt="Input" /></a></p> <p><strong>Output</strong></p> <p><a href="https://i.stack.imgur.com/nCuHO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nCuHO.png" alt="Output" /></a></p> <p><strong>Code:</strong></p> <pre><code># Adding new columns total=df.sum(axis=1) df.insert(len(df.columns),column='Total',value=total) # Adding Utilization column utilization = (total/168) df.insert(len(df.columns), column='Utilization', value=utilization) # Filter dataframe using groupby df1 = df.groupby(['Employee Name']).sum(min_count=1) df1['Available'] = 168 </code></pre>
<p>I don't understand the conditions very well as there seem to be some inconsistencies, but I believe this will help you get started:</p> <pre><code>import pandas as pd import numpy as np import datetime df['Total'] = df.sum(axis=1) df['Available']=168 df['Amount']=df['Total']/df['Available']*100 df['Billing']=np.NaN df['NonBilling']=np.NaN df.loc[df['Start Date']==datetime.date.today(),'Billing']= df['Amount'] df.loc[df['Start Date']!=datetime.date.today(),'NonBilling']= df['Amount'] </code></pre> <p>NOTES:</p> <ol> <li><p>make sure about the date type when comparing against today's date; if your dates are being loaded as objects you may want to do something like this after loading: <code>df['Start Date'] = pd.to_datetime(df['Start Date']).dt.date</code></p> </li> <li><p>work out the conditions for Billing/NonBilling to make sure the columns are being populated as intended</p> </li> </ol>
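<p>One more note: newer pandas versions no longer silently drop non-numeric columns from <code>df.sum(axis=1)</code>, so restricting the sum to the numeric columns is safer (a sketch, assuming the skill columns are the only numeric ones):</p>
<pre><code>skill_cols = df.select_dtypes('number').columns
df['Total'] = df[skill_cols].sum(axis=1)
</code></pre>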
python|pandas|numpy|pandas.excelwriter
0
376,497
67,229,256
Extract the position of one date in a dataframe
<p>I'm looking for a way to cut my dataframe at one precise date, so I thought about entering this date in my code, extracting the position where it occurs, and then slicing my dataframe with that position as its end.</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np from pathlib import Path import re from datetime import date DAY_CHOOSEN = pd.DataFrame() out=pd.DataFrame() temp = pd.read_csv(&quot;data.csv&quot;,sep=&quot;;&quot;) DAY_CHOOSEN=date(day=23,month=4,year=year_) for row in temp[0] : if row['dte_'+str(year_)] == DAY_CHOOSEN : index=row temp=temp[:index] out = pd.merge(out,temp,left_index = True, right_index = True,how = 'outer') print(f&quot;{name} a ete traite&quot;) </code></pre> <p>But when I launch my code, I get this error:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\E31\Documents\cours\stage_dossier\projet_python\extract_period.py&quot;, line 40, in &lt;module&gt; if row ['dte_'+str(year_)] == DAY_CHOOSEN : TypeError: 'Timestamp' object is not subscriptable </code></pre> <p>Here is a link to download an example of my file: <a href="https://www.mediafire.com/file/86xw88k8b1f530s/data.csv/file" rel="nofollow noreferrer">https://www.mediafire.com/file/86xw88k8b1f530s/data.csv/file</a> How can I handle this? Thank you for your help!</p>
<p>You can easily slice your data based on a certain date if you parse the column that contains the date information to datetime datatype. Ex:</p> <pre><code>import pandas as pd df = pd.read_csv(filename, sep=';', decimal=',') # to datetime df['dte_1981'] = pd.to_datetime(df['dte_1981'], dayfirst=True) # now you can slice your df... DAY_CHOOSEN = pd.Timestamp('1981-04-23') df_select = df[df['dte_1981'] &gt; DAY_CHOOSEN] df_select dte_1981 res_1981 SQ_x 113 1981-04-24 2.5 -2.05 114 1981-04-25 2.5 -2.05 115 1981-04-26 2.5 -2.05 .... </code></pre>
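<p>If you specifically need the integer position of that date to use as a slice endpoint, a short sketch (assuming the date actually occurs in the column; <code>argmax</code> would return 0 on no match):</p>
<pre><code>mask = df['dte_1981'] == DAY_CHOOSEN
pos = mask.to_numpy().argmax()   # integer position of the first match
df_before = df.iloc[:pos]        # everything strictly before that row
</code></pre>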
python|pandas|date|datetime
2
376,498
67,484,547
How do I train the DeepSORT tracker for custom class?
<p>I want to detect and count the number of vines in a vineyard using Deep Learning and Computer Vision techniques. I am using the YOLOv4 object detector and training on the <a href="https://www.github.com/AlexeyAB/darknet" rel="nofollow noreferrer">darknet</a> framework. I have been able to integrate the SORT tracker into my application and it works well, but I still have the following issues:</p> <ul> <li>The tracker sometimes reassigns a new ID to the object</li> <li>The detector sometimes misidentifies the object <em>(which lead to incorrect tracking)</em></li> <li>The tracker sometimes does not track a detected object.</li> </ul> <p>You can see an example of the reassignment issue in the following <a href="https://i.stack.imgur.com/efjae.jpg" rel="nofollow noreferrer">image</a>. As you can see, in frame 40 the id 9 was a metal post, and frame 42 onwards it is being assigned to a tree</p> <p>In searching for the cause of these problems, I have learnt that <a href="https://www.github.com/nwojke/deep_sort" rel="nofollow noreferrer">DeepSORT</a> is an improved version of the SORT, which aims to handle this problem by using a Neural Network for associating tracks to detections.</p> <h3>Problem:</h3> <p>The problem I am facing is with the training of this particular model for Deepsort. I have seen that the authors have used <a href="https://www.github.com/nwojke/cosine_metric_learning" rel="nofollow noreferrer">cosine metric learning</a> to train their model, but I am not being able to customize the learning for my custom classes. The questions I have are as follows:</p> <ol> <li><p>I have a dataset of annotated <em>(YOLO TXT format)</em> images which I have used to train the YOLOv4 model. Can I reuse the same dataset for the Deepsort tracker? If so, then how?</p> </li> <li><p>If I cannot reuse the dataset, then how do I create my own dataset for training the model?</p> </li> </ol> <p>Thanks in advance for the help!</p>
<p>Yes, you can use the same classes for DeepSORT. SORT works in 2 stages, and DeepSORT adds a 3rd stage. The first stage is detection, which is handled by your YOLO detector; next is track association, which is handled by a Kalman filter and IOU matching. DeepSORT implements the 3rd stage, a Siamese network that compares the appearance features of current detections against the stored features of each track. I've seen implementations use ResNet as the feature embedding network.</p> <p>Basically, once YOLO detects your class, you pass the cropped detection to the Siamese network, which converts it into a feature embedding and compares it with past embeddings using cosine distance.</p> <p>In conclusion, you can use the same YOLO classes for DeepSORT and SORT since they both need a detection stage, which is handled by YOLO.</p>
python|tensorflow|computer-vision|object-detection
2
376,499
67,343,548
Break out of the 'nested loop' when condition is met, and then continue the loop of the parent loop
<p>I have a nested loop, but I only need the first match from the inner loop. So I need the inner loop to stop as soon as the condition is met and control to move on to the next index of the outer loop. The example should clarify. Here are the first few rows of the dataframe:</p> <pre><code> M# Date Time Day Team Team2 Venue Team3 days_between1 next_game1 26 27 2021-05-01 7.30 pm Sat MI CSK Delhi MI CSK 0 0 27 28 2021-05-02 3.30 pm Sun RR SRH Delhi RR SRH 0 0 28 29 2021-05-02 7.30 pm Sun PK DC Ahmedabad PK DC 0 0 29 30 2021-05-03 7.30 pm Mon KKR RCB Ahmedabad KKR RCB 0 0 30 31 2021-05-04 7.30 pm Tue SRH MI Delhi SRH MI 0 0 31 32 2021-05-05 7.30 pm Wed RR CSK Delhi RR CSK 0 0 32 33 2021-05-06 7.30 pm Thur RCB PK Ahmedabad RCB PK 0 0 33 34 2021-05-07 7.30 pm Fri SRH CSK Delhi SRH CSK 0 0 34 35 2021-05-08 3.30 pm Sat KKR DC Ahmedabad KKR DC 0 0 </code></pre> <p>I am trying to calculate the number of days between the games a particular team plays. For example, MI plays on the first and the fourth. I have created a column 'Team3' that contains the names of both teams playing, to make setting the condition easier. This is my attempt:</p> <pre><code>for i in range(26, df.last_valid_index()): a = df['Team'][i] for j in range(i,df.last_valid_index()): t = df['Team3'][j] if t.find(a) != -1: df['days_between1'][i] = df['Date'][j] - df['Date'][i] </code></pre> <p>The result should look something like this:</p> <pre><code> M# Date Time Day Team Team2 Venue Team3 days_between1 next_game1 26 27 2021-05-01 7.30 pm Sat MI CSK Delhi MI CSK 3 0 27 28 2021-05-02 3.30 pm Sun RR SRH Delhi RR SRH 3 0 28 29 2021-05-02 7.30 pm Sun PK DC Ahmedabad PK DC 3 0 29 30 2021-05-03 7.30 pm Mon KKR RCB Ahmedabad KKR RCB 3 0 30 31 2021-05-04 7.30 pm Tue SRH MI Delhi SRH MI 1 0 31 32 2021-05-05 7.30 pm Wed RR CSK Delhi RR CSK NA 0 32 33 2021-05-06 7.30 pm Thur RCB PK Ahmedabad RCB PK NA 0 33 34 2021-05-07 7.30 pm Fri SRH CSK Delhi SRH CSK NA 0 34 35 2021-05-08 3.30 pm Sat KKR DC Ahmedabad KKR DC NA 0 </code></pre>
<p>You can move the loop into a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><strong><code>DataFrame.apply()</code></strong></a>.</p> <p>Find the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.first_valid_index.html" rel="nofollow noreferrer"><strong><code>first_valid_index()</code></strong></a> that matches a given <code>team</code> and subtract the current row's date from the matched date:</p> <pre class="lang-py prettyprint-override"><code>def between(row): index, team = row.name, row.Team mask = df[['Team', 'Team2']].loc[index+1:] == team match = mask.replace(False, np.nan).first_valid_index() return df.loc[match, 'Date'] - df.loc[index, 'Date'] if match else np.nan df.days_between1 = df.apply(between, axis=1) # M# Date Time Day Team Team2 Venue Team3 days_between1 next_game1 # 26 27 2021-05-01 7.30 pm Sat MI CSK Delhi MI CSK 3 days 0 # 27 28 2021-05-02 3.30 pm Sun RR SRH Delhi RR SRH 3 days 0 # 28 29 2021-05-02 7.30 pm Sun PK DC Ahmedabad PK DC 4 days 0 # 29 30 2021-05-03 7.30 pm Mon KKR RCB Ahmedabad KKR RCB 5 days 0 # 30 31 2021-05-04 7.30 pm Tue SRH MI Delhi SRH MI 3 days 0 # 31 32 2021-05-05 7.30 pm Wed RR CSK Delhi RR CSK NaT 0 # 32 33 2021-05-06 7.30 pm Thur RCB PK Ahmedabad RCB PK NaT 0 # 33 34 2021-05-07 7.30 pm Fri SRH CSK Delhi SRH CSK NaT 0 # 34 35 2021-05-08 3.30 pm Sat KKR DC Ahmedabad KKR DC NaT 0 </code></pre>
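<p>For completeness, the loop-based version from the question also works once the missing <code>break</code> is added and the inner loop starts at the next row (a sketch, assuming <code>Date</code> is a datetime column and <code>days_between1</code> can hold timedeltas):</p>
<pre><code>for i in df.index:
    team = df.loc[i, 'Team']
    for j in df.index[df.index &gt; i]:              # only look at later games
        if team in df.loc[j, 'Team3']:
            df.loc[i, 'days_between1'] = df.loc[j, 'Date'] - df.loc[i, 'Date']
            break                                  # first match found, move to next i
</code></pre>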
python|pandas|loops|break
1