Dataset columns: Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, lengths 15 to 150); question (string, lengths 37 to 64.2k); answer (string, lengths 37 to 44.1k); tags (string, lengths 5 to 106); score (int64, -10 to 5.87k).
9,300
58,101,126
Using Scikit-Learn OneHotEncoder with a Pandas DataFrame
<p>I'm trying to replace a column within a Pandas DataFrame containing strings into a one-hot encoded equivalent using Scikit-Learn's OneHotEncoder. My code below doesn't work:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.preprocessing import OneHotEncoder # data is a Pandas DataFrame jobs_encoder = OneHotEncoder() jobs_encoder.fit(data['Profession'].unique().reshape(1, -1)) data['Profession'] = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) </code></pre> <p>It produces the following error (strings in the list are omitted):</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-91-3a1f568322f5&gt; in &lt;module&gt;() 3 jobs_encoder = OneHotEncoder() 4 jobs_encoder.fit(data['Profession'].unique().reshape(1, -1)) ----&gt; 5 data['Profession'] = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) /usr/local/anaconda3/envs/ml/lib/python3.6/site-packages/sklearn/preprocessing/_encoders.py in transform(self, X) 730 copy=True) 731 else: --&gt; 732 return self._transform_new(X) 733 734 def inverse_transform(self, X): /usr/local/anaconda3/envs/ml/lib/python3.6/site-packages/sklearn/preprocessing/_encoders.py in _transform_new(self, X) 678 """New implementation assuming categorical input""" 679 # validation of X happens in _check_X called by _transform --&gt; 680 X_int, X_mask = self._transform(X, handle_unknown=self.handle_unknown) 681 682 n_samples, n_features = X_int.shape /usr/local/anaconda3/envs/ml/lib/python3.6/site-packages/sklearn/preprocessing/_encoders.py in _transform(self, X, handle_unknown) 120 msg = ("Found unknown categories {0} in column {1}" 121 " during transform".format(diff, i)) --&gt; 122 raise ValueError(msg) 123 else: 124 # Set the problematic rows to an acceptable value and ValueError: Found unknown categories ['...', ..., '...'] in column 0 during transform </code></pre> <p>Here's some sample data:</p> <pre class="lang-py prettyprint-override"><code>data['Profession'] = 0 unkn 1 safe 2 rece 3 unkn 4 lead ... 111988 indu 111989 seni 111990 mess 111991 seni 111992 proj Name: Profession, Length: 111993, dtype: object </code></pre> <p>What exactly am I doing wrong?</p>
<p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html" rel="noreferrer"><strong>OneHotEncoder</strong></a> Encodes categorical integer features as a one-hot numeric array. Its <strong>Transform</strong> method returns a sparse matrix if <code>sparse=True</code>, otherwise it returns a 2-d array.</p> <p>You can't cast a <strong>2-d array</strong> (or sparse matrix) into a <strong>Pandas Series</strong>. You must create a Pandas Serie (a column in a Pandas dataFrame) for each <strong>category</strong>.</p> <p>I would recommend <a href="https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.get_dummies.html" rel="noreferrer">pandas.get_dummies</a> instead:</p> <pre><code>data = pd.get_dummies(data,prefix=['Profession'], columns = ['Profession'], drop_first=True) </code></pre> <p><strong>EDIT:</strong></p> <p>Using Sklearn OneHotEncoder:</p> <pre><code>transformed = jobs_encoder.transform(data['Profession'].to_numpy().reshape(-1, 1)) #Create a Pandas DataFrame of the hot encoded column ohe_df = pd.DataFrame(transformed, columns=jobs_encoder.get_feature_names()) #concat with original data data = pd.concat([data, ohe_df], axis=1).drop(['Profession'], axis=1) </code></pre> <p><strong>Other Options:</strong> If you are doing hyperparameter tuning with <a href="https://scikit-learn.org/stable/modules/grid_search.html" rel="noreferrer">GridSearch</a> it's recommanded to use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html" rel="noreferrer">ColumnTransformer</a> and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html" rel="noreferrer">FeatureUnion</a> with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="noreferrer">Pipeline</a> or directly <a href="https://scikit-learn.org/stable/modules/generated/sklearn.compose.make_column_transformer.html" rel="noreferrer">make_column_transformer</a></p>
python|pandas|machine-learning|scikit-learn|one-hot-encoding
35
9,301
57,928,437
Pandas groupby to get percent of total rows by specific label
<p>Working with Pandas, I would like to calculate the percentage of rows that have a positive value in a specific column for a distinct breakdown.</p> <hr /> <h1>Input</h1> <p>An example likely illustrates this easiest so assume I have a table named <code>table</code> shown below:</p> <pre><code>| ID | Name | Sex | Number | |----|---------|-----|--------| | 1 | Jim | M | -1 | | 2 | Carly | F | 1 | | 3 | Joe | M | 0 | | 4 | Barbara | F | -1 | | 5 | Susan | F | -2 | | 6 | Phyllis | F | 2 | | 7 | John | M | 3 | </code></pre> <p>I want to, <strong>in the most efficient way possible</strong>, calculate the number of rows where the <code>Number</code> column is greater than 0, for each sex (M or F).</p> <hr /> <h1>Output</h1> <p>I expect a DataFrame output like the following:</p> <pre><code>| Sex | Percent| |-----|--------| | M | 0.33 | | F | 0.5 | </code></pre> <p>These percentages, again, are the number of rows where <code>df['Sex']=</code> (<code>M</code> or <code>F</code>) <strong>AND</strong> <code>df['Number'] &gt; 0</code></p> <hr /> <hr /> <h1>Tried</h1> <p>In this case, it seems easiest to subset the data and calculate it separately, which I have tried with the following:</p> <pre><code>male_df = df.loc[df['Sex']=='M']] female_df = df.loc[df['Sex']=='F']] d = {'M': None, 'F': None} for sex_df, label in [(male_df, 'M'), (female_df, 'F')]: d[label] = len(d.loc[d['Number'] &gt; 0])/len(d) new_df = pd.DataFrame.from_dict(d, columns=['Sex','Percent']) </code></pre> <h1>HOWEVER</h1> <p>My <strong>real</strong> data is actually subsetted by multiple columns, so doing individual <code>.loc()</code> calls for each subset is not practical. I was thinking there would be a way to implement this with pandas' <code>.groupby()</code> method, however do not know where to start.</p>
<p>Most efficient is to take the mean of a Boolean Series within each group (<code>GroupBy.mean</code> will use Cython). Since the Series we create shares the same index as the DataFrame, you can group in this way:</p> <pre><code>df['Number'].gt(0).groupby(df['Sex']).mean() #Sex #F 0.500000 #M 0.333333 #Name: Number, dtype: float64 </code></pre>
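<p>If you want the result laid out exactly like the two-column table in the question, the same Series only needs a rename and a <code>reset_index</code>:</p> <pre><code>out = (df['Number'].gt(0)
         .groupby(df['Sex'])
         .mean()
         .rename('Percent')
         .reset_index())
#  Sex   Percent
#0   F  0.500000
#1   M  0.333333
</code></pre>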
python|python-3.x|pandas|dataframe
5
9,302
57,785,687
Filter Pandas Dataframe based on List of substrings
<p>I have a Pandas Dataframe containing multiple columns of strings. I would now like to check a certain column against a list of allowed substrings and then get a new subset with the result.</p> <pre><code>substr = ['A', 'C', 'D'] df = pd.read_excel('output.xlsx') df = df.dropna() # now filter all rows where the string in the 2nd column doesn't contain one of the substrings </code></pre> <p>The only approach I found was creating a list from the corresponding column and then doing a list comprehension, but then I lose the other columns. Can I use list comprehension as part of e.g. <code>df.str.contains()</code>?</p> <pre><code>year type value price 2000 ty-A 500 10000 2002 ty-Q 200 84600 2003 ty-R 500 56000 2003 ty-B 500 18000 2006 ty-C 500 12500 2012 ty-A 500 65000 2018 ty-F 500 86000 2019 ty-D 500 51900 </code></pre> <p>expected output:</p> <pre><code>year type value price 2000 ty-A 500 10000 2006 ty-C 500 12500 2012 ty-A 500 65000 2019 ty-D 500 51900 </code></pre>
<p>You could use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>pandas.Series.isin</code></a>. Note that <code>isin</code> matches whole values, so this works when the column holds exactly the strings in <code>substr</code>; for true substring matching against values like <code>ty-A</code>, see the <code>str.contains</code> sketch below.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df.loc[df['type'].isin(substr)] year type value price 0 2000 A 500 10000 4 2006 C 500 12500 5 2012 A 500 65000 7 2019 D 500 51900 </code></pre>
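<p>A minimal sketch for the substring case, assuming the data from the question (type values like <code>ty-A</code>) and that none of the substrings contain regex metacharacters:</p> <pre><code>pattern = '|'.join(substr)            # 'A|C|D'
filtered = df[df['type'].str.contains(pattern)]
</code></pre> <p>Because this is just a boolean row filter, every column of the original frame is kept, which avoids losing the other columns as in the list-comprehension approach.</p>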
python|pandas
3
9,303
58,089,920
how to group data and calculate a specific task
<p>I have a practice task below:</p> <p>Calculate the total births over the sample period by grouping the data by name and sex. Subset the group into male and female. Using these subsets, select the top and bottom 3 male and female names. Report them in a single table.</p> <p>So I have a .csv file ([1]) containing names, sex, births, and year. I can't seem to figure out exactly how to do this task.</p> <p>Output: [2]</p> <pre><code> births sex year F 1990 124.598148 1991 121.215316 1992 118.106646 1993 114.475367 1994 113.331661 1995 111.563710 1996 110.258765 1997 107.671846 1998 106.412899 1999 104.643578 2000 102.779761 2001 100.116023 2002 99.283904 2003 99.055598 2004 97.443251 2005 96.216343 2006 94.690833 2007 93.415595 2008 92.263176 2009 90.823585 M 1990 216.417422 1991 209.419977 1992 203.524373 1993 192.999015 1994 188.475200 1995 184.294158 1996 179.760661 1997 174.291755 1998 169.057720 1999 165.296596 2000 162.003634 2001 157.905281 2002 155.438592 2003 154.773933 2004 150.038389 2005 149.376874 2006 146.330312 2007 144.067535 2008 139.294722 2009 136.291111 </code></pre>
<p>IIUC, this is what you need.</p> <pre><code>g=g=df.groupby(['name','sex'])['births'].sum().reset_index(name='birth_sum').sort_values('birth_sum',ascending=False) top_names=g.loc[g.sex=='F'].head(3).append(g.loc[g.sex=='M'].head(3)) bottom_names=g.loc[g.sex=='F'].tail(3).append(g.loc[g.sex=='M'].tail(3)) </code></pre> <p><strong>Input</strong> (I modified the data since the data you posted has data only for one year)</p> <pre><code> name sex births year 0 Jessica F 46459 1990 1 Ashley F 45544 1990 2 Brittany F 36532 1990 3 Amanda F 34391 1990 4 Samantha F 25864 1990 5 Jessica F 46459 1991 6 Ashley F 45544 1991 7 Brittany F 36532 1991 8 Amanda F 34391 1991 9 Samantha F 55864 1991 10 Jessica F 46459 1992 11 Ashley F 45544 1992 12 Brittany F 86532 1992 13 Amanda F 34391 1992 14 Samantha F 55864 1992 15 James M 15 1990 16 Rob M 20 1990 17 Bob M 25 1990 18 Sam M 45 1990 19 Richard M 60 1990 20 James M 15 1991 21 Rob M 200 1991 22 Bob M 300 1991 23 Sam M 45 1991 24 Richard M 60 1991 25 James M 145 1992 26 Rob M 182 1992 27 Bob M 400 1992 28 Sam M 216 1992 29 Richard M 60 1992 </code></pre> <p><strong>Output</strong></p> <pre><code>print(top_names) name sex birth_sum 3 Brittany F 159596 5 Jessica F 139377 9 Samantha F 137592 2 Bob M 725 7 Rob M 402 8 Sam M 306 print(bottom_names) name sex birth_sum 9 Samantha F 137592 1 Ashley F 136632 0 Amanda F 103173 8 Sam M 306 6 Richard M 180 4 James M 175 </code></pre>
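<p>To report the result in a single table, as the exercise asks, one option is to concatenate the two frames built above with a label column:</p> <pre><code>report = pd.concat([top_names, bottom_names],
                   keys=['top', 'bottom'],
                   names=['rank', None]).reset_index(level='rank')
print(report)
</code></pre>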
python|pandas|csv|group-by|pivot-table
0
9,304
34,170,055
Grouping a row into multiple groups with pandas
<p>I have a set of sentences, and I want to group them such that all the rows in a group should share one particular word. However, a sentence can belong to many groups because it has many words in it.</p> <p>So in the example below, there should be groups like this:</p> <ul> <li>A 'temperature' group that includes all the rows (0, 1, 2, 3 and 4)</li> <li>A 'freezes' group which includes rows 2 and 4</li> <li>A 'the' group that includes rows 0, 1, 2, and 3</li> <li>A 'metal' group that only contains row 0.</li> <li>Groups for every other word in the dataset</li> </ul> <pre><code>import pandas as pd # An example data set df = pd.DataFrame({"sentences": [ "two long pieces of metal fixed together, each of which bends a different amount when they are both heated to the same temperature", "the temperature at which a liquid boils", "a system for measuring temperature that is part of the metric system, in which water freezes at 0 degrees and boils at 100 degrees", "a unit for measuring temperature. Measurements are often expressed as a number followed by the symbol °", "a system for measuring temperature in which water freezes at 32° and boils at 212°" ]}) # Create a new series which is a list of words in each "sentences" column df['words'] = df['sentences'].apply(lambda sentence: sentence.split(" ")) # Try to group by this new column df.groupby('words').count() # TypeError: unhashable type: 'list' </code></pre> <p><strike>However my code throws an error as shown.</strike> (see below) Since my task is a bit complicated I know it probably involves more than just calling groupby(). Can someone help me to make word groups with pandas?</p> <p><em>edit</em> After solving the error by returning <code>tuple(sentence.split())</code> (thanks ethan-furman), I try printing the result, but it doesn't seem to have done anything. I think it probably just put each row in a group:</p> <pre><code>print(df.groupby('words').count()) # sentences 5 # dtype: int64 </code></pre>
<p>My current solution uses pandas' MultiIndex feature. I'm sure it can be improved with some more efficient use of numpy, but I believe this will perform significantly better than the other python-only answer:</p> <pre><code>import pandas as pd import numpy as np # An example data set df = pd.DataFrame({"sentences": [ "two long pieces of metal fixed together, each of which bends a different amount when they are both heated to the same temperature", "the temperature at which a liquid boils", "a system for measuring temperature that is part of the metric system, in which water freezes at 0 degrees and boils at 100 degrees", "a unit for measuring temperature. Measurements are often expressed as a number followed by the symbol °", "a system for measuring temperature in which water freezes at 32° and boils at 212°" ]}) # Create a new series which is a list of words in each "sentences" column df['words'] = df['sentences'].apply(lambda sentence: sentence.split(" ")) # This is all the words in the dataset. Each word will be its own index (level of the MultiIndex) names = np.unique(df['words'].sum()) # Create an array of tuples, one tuple for each row of data # Each tuple contains True if the row has that word in it, and False if it does not values = df['words'].map( lambda words: np.vectorize( lambda word: True if word in words else False)(names) ) # Make a MultiIndex index = pd.MultiIndex.from_tuples(values, names=names) # Add the MultiIndex without creating a new data frame df.set_index(index, inplace=True) # Find all the rows that have the word 'temperature' xs = df.xs(True, level='temperature') print(xs.to_string(index=False)) </code></pre>
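<p>On newer pandas versions (0.25 and later), a much shorter route is <code>DataFrame.explode</code>, which turns each list of words into one row per word; grouping that long format gives the word groups directly. A minimal sketch:</p> <pre><code># one row per (sentence, word) pair
exploded = df.assign(words=df['sentences'].str.split()).explode('words')

# all rows whose sentence contains the word 'temperature'
temperature_group = exploded.loc[exploded['words'] == 'temperature', 'sentences']

# or the size of every word group at once
group_sizes = exploded.groupby('words')['sentences'].nunique()
</code></pre>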
python|python-3.x|pandas|group-by
1
9,305
34,086,776
Cumsum with pandas on financial data
<p>I'm very new to python and I have managed to import data from an excel datasheet using the <code>pd.read_excel</code> function. The data is arranged in the following manner in a dataframe :</p> <p><a href="https://i.stack.imgur.com/Yw3jR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yw3jR.jpg" alt="enter image description here"></a></p> <p>I'm trying to do a cumsum() over this dataframe, however I get this error message : </p> <pre><code>TypeError: unsupported operand type(s) for +: 'Timestamp' and 'Timestamp' </code></pre> <p>How can I force only the cumsum() on the returns columns without removing my dates columns ?</p> <p>I added the data with the following function :<br> <code>oFX = pd.read_excel('C:\\Work\\Python Dev\\Athenes\\FX.xlsx', 0)</code></p> <p>The <code>data.info()</code> is the following :<br> <code>oFX.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 4133 entries, 0 to 4132 Data columns (total 11 columns): Dates 4133 non-null datetime64[ns] AUD 4133 non-null float64 CAD 4133 non-null float64 CHF 4133 non-null float64 EUR 4133 non-null float64 GBP 4133 non-null float64 JPY 4133 non-null float64 KRW 4133 non-null float64 MEP 4133 non-null float64 NZD 4133 non-null float64 USD 4133 non-null float64 dtypes: datetime64[ns](1), float64(10) memory usage: 387.5 KB</code></p> <p>Thanks in advance.</p>
<p>Seeing as your 'Dates' are just daily entries you can temporarily set the index to that column, call <code>cumsum</code> and then <code>reset_index</code>:</p> <pre><code>oFX.set_index('Dates').cumsum().reset_index() </code></pre>
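<p>If you prefer not to touch the index at all, another small sketch is to run <code>cumsum</code> only over the numeric columns and leave <code>Dates</code> as it is:</p> <pre><code>num_cols = oFX.columns.drop('Dates')   # every column except the date column
oFX[num_cols] = oFX[num_cols].cumsum()
</code></pre>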
python|pandas|dataframe
0
9,306
36,847,022
What numbers that I can put in numpy.random.seed()?
<p>I have noticed that you can put various numbers inside of <code>numpy.random.seed()</code>, for example <code>numpy.random.seed(1)</code>, <code>numpy.random.seed(101)</code>. What do the different numbers mean? How do you choose the numbers?</p>
<p>Consider a very basic random number generator:</p> <pre><code>Z[i] = (a*Z[i-1] + c) % m </code></pre> <p>Here, <code>Z[i]</code> is the <code>ith</code> random number, <code>a</code> is the multiplier and <code>c</code> is the increment - for different <code>a</code>, <code>c</code> and <code>m</code> combinations you have different generators. This is known as the <a href="https://en.wikipedia.org/wiki/Linear_congruential_generator" rel="noreferrer">linear congruential generator</a> introduced by Lehmer. The remainder of that division, or modulus (<code>%</code>), will generate a number between zero and <code>m-1</code> and by setting <code>U[i] = Z[i] / m</code> you get random numbers between zero and one.</p> <p>As you may have noticed, in order to start this generative process - in order to have a <code>Z[1]</code> you need to have a <code>Z[0]</code> - an initial value. This initial value that starts the process is called the seed. Take a look at this example:</p> <p><a href="https://i.stack.imgur.com/ENgP5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ENgP5.png" alt="enter image description here"></a></p> <p>The initial value, the seed is determined as 7 to start the process. However, that value is not used to generate a random number. Instead, it is used to generate the first <code>Z</code>. </p> <p>The most important feature of a pseudo-random number generator would be its unpredictability. Generally, as long as you don't share your seed, you are fine with all seeds as the generators today are much more complex than this. However, as a further step you can generate the seed randomly as well. You can skip the first <code>n</code> numbers as another alternative. </p> <p>Main source: Law, A. M. (2007). Simulation modeling and analysis. Tata McGraw-Hill.</p>
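<p>A tiny sketch of such a generator in Python makes the role of the seed concrete (the constants here are only illustrative; NumPy's Mersenne Twister is far more sophisticated, but the principle is the same: reusing a seed reproduces the sequence):</p> <pre><code>def lcg(seed, a=1103515245, c=12345, m=2**31):
    z = seed
    while True:
        z = (a * z + c) % m   # Z[i] = (a*Z[i-1] + c) % m
        yield z / m           # U[i] = Z[i] / m, a number in [0, 1)

gen1 = lcg(7)
gen2 = lcg(7)                 # same seed
print([round(next(gen1), 4) for _ in range(3)])
print([round(next(gen2), 4) for _ in range(3)])   # identical output
</code></pre>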
python|numpy-random
13
9,307
49,516,188
Difference between 'ctx' and 'self' in python?
<p>While using the deep learning library PyTorch I came across definitions like this. Does <code>ctx</code> have the same behavior as <code>self</code>? </p> <pre><code>class LinearFunction(Function): @staticmethod def forward(ctx, input, weight, bias=None): ctx.save_for_backward(input, weight, bias) output = input.mm(weight.t()) if bias is not None: output += bias.unsqueeze(0).expand_as(output) return output @staticmethod def backward(ctx, grad_output): input, weight, bias = ctx.saved_variables grad_input = grad_weight = grad_bias = None if ctx.needs_input_grad[0]: grad_input = grad_output.mm(weight) if ctx.needs_input_grad[1]: grad_weight = grad_output.t().mm(input) if bias is not None and ctx.needs_input_grad[2]: grad_bias = grad_output.sum(0).squeeze(0) return grad_input, grad_weight, grad_bias </code></pre>
<p><strong>A static method (<code>@staticmethod</code>) is called using the class <em>type</em> directly, not an <em>instance</em> of this class:</strong></p> <pre><code>LinearFunction.backward(x, y) </code></pre> <p>Since you have no instance, it does not make sense to use <code>self</code> in a static method.</p> <p>Here, <strong><code>ctx</code> is just a regular argument</strong> that you'll have to pass when calling your methods.</p>
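<p>A short plain-Python sketch of the difference (the <code>Context</code> class is just a stand-in for illustration): an instance method gets <code>self</code> filled in automatically, while a static method receives nothing implicitly, so the caller has to supply the context object. In the PyTorch case that caller is autograd, which creates the context and passes it in when the <code>Function</code> is executed through <code>.apply()</code>.</p> <pre><code>class Context:
    # stand-in for ctx: a plain object used to stash state between calls
    pass

class Doubler:
    def forward_instance(self, x):   # Python passes the instance as `self`
        self.saved = x
        return 2 * x

    @staticmethod
    def forward_static(ctx, x):      # nothing is passed implicitly here;
        ctx.saved = x                # the caller provides `ctx` explicitly
        return 2 * x

d = Doubler()
d.forward_instance(3)                # `self` supplied automatically

ctx = Context()
Doubler.forward_static(ctx, 3)       # `ctx` supplied by the caller
print(ctx.saved)                     # 3
</code></pre>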
python|pytorch
9
9,308
73,436,440
Replace and merge rows in pandas according to condition
<p>I have a dataframe:</p> <pre><code> lft rel rgt num 0 t3 r3 z2 3 1 t1 r3 x1 9 2 x2 r3 t2 8 3 x4 r1 t2 4 4 t1 r1 z3 1 5 x1 r1 t2 2 6 x2 r2 t4 4 7 z3 r2 t4 5 8 t4 r3 x3 4 9 z1 r2 t3 4 </code></pre> <p>And a reference dictionary:</p> <pre><code>replacement_dict = { 'X1' : ['x1', 'x2', 'x3', 'x4'], 'Y1' : ['y1', 'y2'], 'Z1' : ['z1', 'z2', 'z3'] } </code></pre> <p>My goal is to replace all occurrences of <code>replacement_dict['X1']</code> with 'X1', and then merge the rows together. For example, any instance of 'x1', 'x2', 'x3' or 'x4' will be replaced by 'X1', etc.</p> <p>I can do this by selecting the rows that contain any of these strings and replacing them with 'X1':</p> <pre><code>keys = replacement_dict.keys() for key in keys: DF.loc[DF['lft'].isin(replacement_dict[key]), 'lft'] = key DF.loc[DF['rgt'].isin(replacement_dict[key]), 'rgt'] = key </code></pre> <p>giving:</p> <pre><code> lft rel rgt num 0 t3 r3 Z1 3 1 t1 r3 X1 9 2 X1 r3 t2 8 3 X1 r1 t2 4 4 t1 r1 Z1 1 5 X1 r1 t2 2 6 X1 r2 t4 4 7 Z1 r2 t4 5 8 t4 r3 X1 4 9 Z1 r2 t3 4 </code></pre> <p>Now, if I select all the rows containing 'X1' and merge them, I should end up with:</p> <pre><code> lft rel rgt num 0 X1 r3 t2 8 1 X1 r1 t2 6 2 X1 r2 t4 4 3 t1 r3 X1 9 4 t4 r3 X1 4 </code></pre> <p>So the three columns ['lft', 'rel', 'rgt'] are unique while the 'num' column is added up for each of these rows. The row 1 above : ['X1' 'r1' 't2' 6] is the sum of two rows ['X1' 'r1' 't2' 4] and ['X1' 'r1' 't2' 2].</p> <p>I can do this easily for a small number of rows, but I am working with a dataframe with 6 million rows and a replacement dictionary with 60,000 keys. This is taking forever using a simple row wise extraction and replacement.</p> <p>How can this (specifically the last part) be scaled efficiently? Is there a pandas trick that someone can recommend?</p>
<p>Reverse the <code>replacement_dict</code> mapping and <code>map()</code> this new mapping to each of lft and rgt columns to substitute certain values (e.g. x1-&gt;X1, y2-&gt;Y1 etc.). As some values in lft and rgt columns don't exist in the mapping (e.g. t1, t2 etc.), call <code>fillna()</code> to fill in these values.<sup>1</sup></p> <p>You may also <code>stack()</code> the columns whose values need to be replaced (lft and rgt), call map+fillna and <code>unstack()</code> back but because there are only 2 columns, it may not be worth the trouble for this particular case.</p> <p>The second part of the question may be answered by summing num values after grouping by lft, rel and rgt columns; so <code>groupby().sum()</code> should do the trick.</p> <pre class="lang-py prettyprint-override"><code># reverse replacement map reverse_map = {v : k for k, li in replacement_dict.items() for v in li} # substitute values in lft column using reverse_map df['lft'] = df['lft'].map(reverse_map).fillna(df['lft']) # substitute values in rgt column using reverse_map df['rgt'] = df['rgt'].map(reverse_map).fillna(df['rgt']) # sum values in num column by groups result = df.groupby(['lft', 'rel', 'rgt'], as_index=False)['num'].sum() </code></pre> <p><sup>1</sup>: <code>map()</code> + <code>fillna()</code> may perform better for your use case than <code>replace()</code> because under the hood, <code>map()</code> implements a Cython optimized <code>take_nd()</code> method that performs particularly well if there are a lot of values to replace, while <code>replace()</code> implements <code>replace_list()</code> method which uses a Python loop. So if <code>replacement_dict</code> is particularly large (which it is in your case), the difference in performance will be huge, but if <code>replacement_dict</code> is small, <code>replace()</code> may outperform <code>map()</code>.</p>
python|pandas|dataframe|numpy
9
9,309
73,519,653
Create sorting dictionary
<p>Write the function dataframe that takes a dictionary as input and creates a dataframe from the dictionary, Sort the dictionary.</p> <p>Instructions</p> <pre><code>1. Create a dataframe with the input dictionary 2. Columns should be Name Age 3. Print &quot;Before Sorting&quot; 4. Print a Newline 5. Print the dataframe before sorting. Note: Printing the dataframe must not contain index. 6. Print a Newline 7. Sort the dataframe in ascending order based on Age column 8. Print &quot;After Sorting&quot; 9. Print a Newline 10. Print the dataframe after sorting. Note: Printing the dataframe must not contain index. </code></pre> <p>Sample Input (it may change according to use cases. So cannot insert below input on code)</p> <p>['william':42, 'George' :10, 'Joseph :22, 'Henry':15, 'Samuel':32, 'David':18]</p> <p>Sample Output</p> <p>Before Sorting</p> <p>Name Age</p> <p>William 42</p> <p>George. 10</p> <p>Joseph. 22</p> <p>Henry. 15</p> <p>Samuel. 32</p> <p>David. 18</p> <p>After Sorting</p> <p>Name. Age</p> <p>George. 10</p> <p>Henry. 15</p> <p>David. 18</p> <p>Joseph. 22</p> <p>Samuel. 32</p> <p>William. 42</p> <p>import pandas import ast</p> <p>#Enter your code here. Read input from STDIN. Print output from STDOUT</p> <p>def dataframe(key, value): . STDIN = {key:value}</p>
<pre><code>import pandas as pd def dataframe(data): df = pd.DataFrame(data) print(&quot;Before Sorting&quot;) print(df) df.sort_values(by=['Age'], inplace=True) print(&quot;After Sorting&quot;) print(df) </code></pre> <p>Output:</p> <pre><code>Before Sorting Name Age 0 William 42 1 George 10 2 Joseph 22 3 Henry 15 4 Samuel 32 5 David 18 After Sorting Name Age 1 George 10 3 Henry 15 5 David 18 2 Joseph 22 4 Samuel 32 0 William 42 </code></pre>
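<p>If the exercise's other requirements matter too (printing without the index, the blank lines, and building the frame from a name-to-age mapping), a fuller sketch, assuming the input is a plain <code>{name: age}</code> dictionary, could look like this:</p> <pre><code>import pandas as pd

def dataframe(data):
    # build a two-column frame from the {name: age} dict
    df = pd.DataFrame(list(data.items()), columns=['Name', 'Age'])
    print('Before Sorting')
    print()
    print(df.to_string(index=False))
    print()
    df = df.sort_values(by='Age')
    print('After Sorting')
    print()
    print(df.to_string(index=False))

dataframe({'William': 42, 'George': 10, 'Joseph': 22,
           'Henry': 15, 'Samuel': 32, 'David': 18})
</code></pre>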
python|pandas|dataframe|dictionary
1
9,310
73,515,252
Special case of counting empty cells "before" an occupied cell in Pandas
<p>Pandas question here.</p> <p>I have a specific dataset in which we are sampling subjective ratings several times over a second. The information is sorted as below. What I need is a way to &quot;count&quot; the number of blank cells before every &quot;second&quot; (i.e. &quot;1&quot; in the second's column that occur at regular intervals), so I can feed that value into a greatest common factor equation and create somewhat of a linear extrapolation based on milliseconds. In the example below that number would be &quot;2&quot; and I would feed that into the GCF formula. The end goal is to make a more accurate/usable timestamp. Sampling rates may vary by dataset.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>rating</th> <th>seconds</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>26</td> <td></td> </tr> <tr> <td>2</td> <td>28</td> <td></td> </tr> <tr> <td>3</td> <td>30</td> <td>1</td> </tr> <tr> <td>4</td> <td>33</td> <td></td> </tr> <tr> <td>5</td> <td>40</td> <td></td> </tr> <tr> <td>6</td> <td>45</td> <td>1</td> </tr> <tr> <td>7</td> <td>50</td> <td></td> </tr> <tr> <td>8</td> <td>48</td> <td></td> </tr> <tr> <td>9</td> <td>49</td> <td>1</td> </tr> </tbody> </table> </div>
<p>If you just want to count the number of NaNs before the first <code>1</code>:</p> <pre><code>df['seconds'].isna().cummin().sum() </code></pre> <p>If you have another value (e.g. empty string)</p> <pre><code>df['seconds'].eq('').cummin().sum() </code></pre> <p>Output: <code>2</code></p> <p>Or, if you have a range Index:</p> <pre><code>df['seconds'].first_valid_index() </code></pre>
python|pandas|timestamp|counting
1
9,311
73,451,970
try convert string to date row per row in pandas or similar
<p>I need to join dataframes with dates in the format <code>'%Y%m%d'</code>. Some data is wrong or missing, and when I use pandas like this:</p> <pre><code>try: df['data'] = pd.to_datetime(df['data'], format='%Y%m%d') except: pass </code></pre> <p>If 1 row is wrong, it fails to convert the whole column. I would like it to skip only the rows with errors, leaving them unconverted.</p> <p>I could solve this by looping with datetime, but my question is: is there a better solution for this with pandas?</p>
<p>Pass <code>errors = 'coerce'</code> to <code>pd.to_datetime</code> to convert the values with wrong date format to NaT. Then you can use <code>Series.fillna</code> to fill those NaT with the input values.</p> <pre><code>df['data'] = ( pd.to_datetime(df['data'], format='%Y%m%d', errors='coerce') .fillna(df['data']) ) </code></pre> <p>From the <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html#:%7E:text=errors%7B%E2%80%98ignore%E2%80%99%2C%20%E2%80%98raise,return%20the%20input." rel="nofollow noreferrer">docs</a></p> <blockquote> <p><strong>errors : {โ€˜ignoreโ€™, โ€˜raiseโ€™, โ€˜coerceโ€™}, default โ€˜raiseโ€™</strong></p> <ul> <li>If 'raise', then invalid parsing will raise an exception.</li> <li>If 'coerce', then invalid parsing will be set as NaT.</li> <li>If 'ignore', then invalid parsing will return the input.</li> </ul> </blockquote>
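<p>A quick sketch of the behaviour on made-up data (the column ends up with dtype <code>object</code>, since valid rows become <code>Timestamp</code>s while bad rows keep their original value):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'data': ['20220131', '2022-99-99', '20220228']})
df['data'] = (
    pd.to_datetime(df['data'], format='%Y%m%d', errors='coerce')
    .fillna(df['data'])
)
# rows 0 and 2 -&gt; Timestamp('2022-01-31') / Timestamp('2022-02-28')
# row 1       -&gt; the original string '2022-99-99'
</code></pre>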
python-3.x|pandas|datetime
2
9,312
73,518,448
IndexingError: Too many indexers with Dataframe.loc
<p>I got this error while trying to make a Telegram alarm bot using FinanceDataReader.</p> <p>Here is the code:</p> <pre><code>code = 'KQ11' df = fdr.DataReader('KR11', '2022-08').reset_index() df['close_sma3d'] = df['Close'].rolling(3).mean() df['close_sma5d'] = df['Close'].rolling(5).mean() df['close_sma10d'] = df['Close'].rolling(10).mean() print('3,5,10 SMA breakout Signal') Three_days = 3 Five_days = 5 Ten_days = 10 for i in date_i: # we cannot calculate the first day and the last day if i &lt; 1 or i &gt; len(date_list) - Ten_days: continue prev_date = date_list[i-1] now_date = date_list[i] prev_price = df['Close'].loc[code, prev_date] now_price = df['Close'].loc[code, now_date] prev_sma3d = df['close_sma3d'].loc[code, prev_date] now_sma3d = df['close_sma3d'].loc[code, now_date] prev_sma5d = df['close_sma5d'].loc[code, prev_date] now_sma5d = df['close_sma5d'].loc[code, now_date] prev_sma10d = df['close_sma10d'].loc[code, prev_date] now_sma10d = df['close_sma10d'].loc[code, now_date] # yesterday below the moving average, today above it if now_price &gt; now_sma3d: print(f&quot; - {now_date} Signal triggered! now_price {now_price} 3-day moving average {now_sma3d}&quot;) elif now_price &gt; now_sma5d: print(f&quot; - {now_date} Signal triggered! now_price {now_price} 5-day moving average {now_sma5d}&quot;) elif now_price &gt; now_sma10d: print(f&quot; - {now_date} Signal triggered! now_price {now_price} 10-day moving average {now_sma10d}&quot;) </code></pre> <p>And here is the code where the error occurred:</p> <pre><code>prev_price = df['Close'].loc[code, prev_date] </code></pre> <p>The DataFrame looks like this:</p> <pre><code> Date Close close_sma3d close_sma5d close_sma10d 9 2022-08-12 831.63 828.016667 829.712 823.267 10 2022-08-16 834.74 832.840000 830.488 825.980 11 2022-08-17 827.42 831.263333 829.242 828.288 12 2022-08-18 826.06 829.406667 830.400 829.358 13 2022-08-19 814.17 822.550000 826.804 828.259 14 2022-08-22 795.87 812.033333 819.652 824.682 15 2022-08-23 783.42 797.820000 809.388 819.938 16 2022-08-24 793.14 790.810000 802.532 815.887 17 2022-08-25 807.37 794.643333 798.794 814.597 18 2022-08-26 802.45 800.986667 796.450 811.627 </code></pre> <p>When I <code>print(prev_date, now_date)</code>:</p> <pre><code>2022-08-01 00:00:00 2022-08-02 00:00:00 </code></pre> <p>It seems really weird because it doesn't look like a MultiIndex to me.</p> <p>I found that it has no columns even though it's a DataFrame. When I check <code>df['Close'].shape</code> it shows: <code>(19,)</code></p> <p>Any help would be appreciated.</p>
<p>You use <code>.loc</code> with the wrong values. It needs <code>.loc[ row_indexes, column_names ]</code>, but you pass <code>code</code> and <code>prev_date</code>, which are not row indexes of this DataFrame.</p> <p>To select rows for some date it would need</p> <pre><code>df.loc[ df[&quot;Date&quot;] == prev_date, 'Close'] </code></pre> <p>But you don't need it in some situations.</p> <hr /> <p>To find all <code>&quot;Close&quot; &gt; &quot;close_sma3d&quot;</code> you can do</p> <pre><code>alert_three_days = df[ df['Close'] &gt; df['close_sma3d'] ] </code></pre> <p>and later iterate over this dataframe</p> <pre><code>for index, row in alert_three_days.iterrows(): print(f&quot; - {row['Date']} Signal triggered! now_price {row['Close']} 3-day moving average {row['close_sma3d']}&quot;) </code></pre> <p>or use <code>apply()</code> for this</p> <pre><code>def display(row): print(f&quot; - {row['Date']} Signal triggered! now_price {row['Close']} 3-day moving average {row['close_sma3d']}&quot;) alert_three_days.apply(display, axis=1) </code></pre> <hr /> <p>If you want to compare with the previous price then you can use <code>.shift(1)</code></p> <pre><code>df['Previous Close'] = df['Close'].shift(1) </code></pre> <p>and later compare</p> <pre><code>lower_close = df[ df['Previous Close'] &gt; df['Close'] ] </code></pre> <p>and display only selected rows</p> <pre><code>print(lower_close[['Previous Close', 'Close']]) </code></pre> <hr /> <p><strong>BTW:</strong></p> <p>When you select a single column <code>df['Close']</code>, you get a <code>pandas.Series</code> (which has no columns), and <code>.shape</code> shows only the number of rows in this <code>Series</code></p> <hr /> <p>Minimal working code for tests:</p> <pre><code>import FinanceDataReader as fdr df = fdr.DataReader('KQ11', '2022-08').reset_index() df['close_sma3d'] = df['Close'].rolling(3).mean() df['close_sma5d'] = df['Close'].rolling(5).mean() df['close_sma10d'] = df['Close'].rolling(10).mean() print('\n--- Close for 2022-08-03 ---\n') print(df.loc[ df['Date'] == '2022-08-03', 'Close']) print('\n--- Previous Close &gt; Close ---\n') df['Previous Close'] = df['Close'].shift(1) #df['Previous Date'] = df['Date'].shift(1) lower_close = df[ df['Previous Close'] &gt; df['Close'] ] print(lower_close[['Previous Close', 'Close', 'Date']]) print('\n--- alert_three_days ---\n') alert_three_days = df[ df['Close'] &gt; df['close_sma3d'] ] #print(alert_three_days) #def display(row): # print(f&quot; - {row['Date']} Signal triggered! now_price {row['Close']} 3-day moving average {row['close_sma3d']}&quot;) #alert_three_days.apply(display, axis=1) for index, row in alert_three_days.iterrows(): print(f&quot; - {row['Date']} Signal triggered! now_price {row['Close']} 3-day moving average {row['close_sma3d']}&quot;) </code></pre> <p>Results:</p> <pre><code>--- Close for 2022-08-03 --- 2 815.36 Name: Close, dtype: float64 --- Previous Close &gt; Close --- Previous Close Close Date 1 807.61 804.34 2022-08-02 5 831.64 830.86 2022-08-08 7 833.65 820.27 2022-08-10 9 832.15 831.63 2022-08-12 11 834.74 827.42 2022-08-17 12 827.42 826.06 2022-08-18 13 826.06 814.17 2022-08-19 14 814.17 795.87 2022-08-22 15 795.87 783.42 2022-08-23 18 807.37 802.45 2022-08-26 --- alert_three_days --- - 2022-08-03 00:00:00 Signal triggered! now_price 815.36 3-day moving average 809.1033333333334 - 2022-08-04 00:00:00 Signal triggered! now_price 825.16 3-day moving average 814.9533333333333 - 2022-08-05 00:00:00 Signal triggered! now_price 831.64 3-day moving average 824.0533333333333 - 2022-08-08 00:00:00 Signal triggered! now_price 830.86 3-day moving average 829.2199999999999 - 2022-08-09 00:00:00 Signal triggered! now_price 833.65 3-day moving average 832.0500000000001 - 2022-08-11 00:00:00 Signal triggered! now_price 832.15 3-day moving average 828.6899999999999 - 2022-08-12 00:00:00 Signal triggered! now_price 831.63 3-day moving average 828.0166666666665 - 2022-08-16 00:00:00 Signal triggered! now_price 834.74 3-day moving average 832.84 - 2022-08-24 00:00:00 Signal triggered! now_price 793.14 3-day moving average 790.81 - 2022-08-25 00:00:00 Signal triggered! now_price 807.37 3-day moving average 794.6433333333333 - 2022-08-26 00:00:00 Signal triggered! now_price 802.45 3-day moving average 800.9866666666667 </code></pre>
python|pandas
0
9,313
35,213,402
Make a "linear-within-logspace" array that goes like [1, 2, 3, ..., 10, 20, 30, ...]
<p>I'm trying to run a certain simulation that takes one parameter as input. I need to run it for a range of different parameter values that spans several orders of magnitude, but also gives a picture of the variation within each order of magnitude.</p> <p>In short, I need my parameter to take values <code>param = [1, 2, 3, ... 9, 10, 20, 30, ..., 90, 100, 200, ...]</code>.</p> <p>I've answered my own question with an attempt, but is there a more straightforward way to do this in numpy, that also makes the intention clearer?</p>
<p>You can easily make such an array with two loops:</p> <pre><code>param = [multiplier * magnitude for magnitude in [1, 10, 100] for multiplier in [1, 2, 3, 4, 5, 6, 7, 8, 9]] </code></pre>
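<p>If a NumPy-native version is preferred, one short sketch builds the same values as the outer product of the magnitudes and the multipliers and then flattens it:</p> <pre><code>import numpy as np

param = np.outer([1, 10, 100], np.arange(1, 10)).ravel()
# array([  1,   2,   3, ...,   9,  10,  20, ...,  90, 100, 200, ..., 900])
</code></pre>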
python|arrays|numpy
2
9,314
35,061,261
how to merge pandas dataframe with different lengths
<p>I have a pandas dataframe like following.. </p> <pre><code>df_fav_dish item_id buyer_id dish_count dish_name 121 261 2 Null 126 261 3 Null 131 261 7 Null 132 261 6 Null 133 261 2 Null 135 261 2 Null 139 309 2 Null 140 261 2 Null 142 261 2 Null 143 153 3 Null 145 64 2 Null 148 261 2 Null 155 261 2 Null 156 64 2 Null 163 261 2 Null </code></pre> <p>length of above dataframe is 34. And I have another dataframe like following..</p> <pre><code> data item_id item_name 121 Paneer 126 Chicken 131 Prawns 132 Mutton 133 Curd 139 Mocktail 140 Cocktail 142 Biryani 143 Thai Curry 145 Red Curry 148 Fish 155 Lobster 69 Fish Curry 67 Butter 31 Bread 59 Egg Curry </code></pre> <p>length of above dataframe is 322 .This data frame contains almost 300 item_id and corresponding item names Now I want to join this two dataframes on item_id. Two dataframes are of different lengths. I am doing following in python.</p> <pre><code>df_fav_dish.merge(data[['item_name','item_id']],how='left',on='item_id') </code></pre> <p>But it gives me many rows. I just want to add <code>item_name</code> to the first data frame from second dataframe where both the <code>item_id</code> equal to each other </p> <p>Desired output is</p> <pre><code>item_id buyer_id dish_count dish_name item_name 121 261 2 Null paneer 126 261 3 Null Chicken 131 261 7 Null prawns 132 261 6 Null Mutton 133 261 2 Null Curd 135 261 2 Null 139 309 2 Null Mocktail 140 261 2 Null Cocktail 142 261 2 Null Biryani 143 153 3 Null Thai Curry 145 64 2 Null Red Curry 148 261 2 Null Fish 155 261 2 Null Lobster 156 64 2 Null 163 261 2 Null </code></pre>
<p>Your column <code>item_id</code> in dataframe <code>data</code> contains duplicity, so:</p> <p>If no duplicity:</p> <pre><code>print data item_id item_name 0 121 Paneer 1 140 Chicken 2 131 Prawns print df_fav_dish item_id buyer_id dish_count dish_name 0 139 309 2 Null 1 140 261 2 Null 2 142 261 2 Null 3 143 153 3 Null print df_fav_dish.merge(data[['item_name','item_id']],how='left',on='item_id') item_id buyer_id dish_count dish_name item_name 0 139 309 2 Null NaN 1 140 261 2 Null Chicken 2 142 261 2 Null NaN 3 143 153 3 Null NaN </code></pre> <p>With duplicity all duplicity rows are joined:</p> <pre><code>print data item_id item_name 0 140 Paneer 1 140 Chicken 2 140 Prawns print df_fav_dish item_id buyer_id dish_count dish_name 0 139 309 2 Null 1 140 261 2 Null 2 142 261 2 Null 3 143 153 3 Null print df_fav_dish.merge(data[['item_name','item_id']],how='left',on='item_id') item_id buyer_id dish_count dish_name item_name 0 139 309 2 Null NaN 1 140 261 2 Null Paneer 2 140 261 2 Null Chicken 3 140 261 2 Null Prawns 4 142 261 2 Null NaN 5 143 153 3 Null NaN </code></pre> <p>So you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a>:</p> <pre><code># Drop duplicates except for the first occurrence print df.drop_duplicates(subset='item_id', keep='first') item_id buyer_id dish_count dish_name item_name 0 139 309 2 Null NaN 1 140 261 2 Null Paneer 4 142 261 2 Null NaN 5 143 153 3 Null NaN # Drop duplicates except for the last occurrence print df.drop_duplicates(subset='item_id', keep='last') item_id buyer_id dish_count dish_name item_name 0 139 309 2 Null NaN 3 140 261 2 Null Prawns 4 142 261 2 Null NaN 5 143 153 3 Null NaN # Drop all duplicates print df.drop_duplicates(subset='item_id', keep=False) item_id buyer_id dish_count dish_name item_name 0 139 309 2 Null NaN 4 142 261 2 Null NaN 5 143 153 3 Null NaN </code></pre>
python|pandas
2
9,315
67,191,136
How to add a new date column derived from existing date column based on the shelf life of the products to my csv file using pandas?
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Room</th> <th style="text-align: center;">rack</th> <th style="text-align: right;">type</th> <th style="text-align: right;">no.of items</th> <th style="text-align: right;">manufacturing date</th> <th style="text-align: right;">expiry date(new column)</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">2</td> <td style="text-align: right;">LLQ</td> <td style="text-align: right;">6</td> <td style="text-align: right;">21/3/2021</td> <td style="text-align: right;">-</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">2</td> <td style="text-align: right;">AZK</td> <td style="text-align: right;">6</td> <td style="text-align: right;">21/3/2021</td> <td style="text-align: right;">-</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">2</td> <td style="text-align: right;">CHD</td> <td style="text-align: right;">6</td> <td style="text-align: right;">21/3/2021</td> <td style="text-align: right;">-</td> </tr> </tbody> </table> </div> <ul> <li><p>LLQ: expires in 21 days</p> </li> <li><p>AZK: expires in 10 days</p> </li> <li><p>CHD: expires in 30 days</p> </li> </ul> <p>How do I use pandas to derive the corresponding expiry dates based on the product shelf life? Pls help, I am new to python, basically coding in general.</p>
<p>Date data from a csv is read as a string, so you first convert it into datetime format:</p> <pre><code>from datetime import datetime from datetime import timedelta df['LLQ'] = df['manufacturing date'].apply(lambda x: (datetime.strptime(x, &quot;%d/%m/%Y&quot;) + timedelta(days=21)).strftime(&quot;%d/%m/%Y&quot;)) </code></pre> <p>Do the same for the other calculations.</p>
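<p>A fuller sketch that derives the expiry from the <code>type</code> column directly, using the shelf lives listed in the question (and letting pandas do the parsing with <code>dayfirst=True</code>, since the dates are in day/month/year form):</p> <pre><code>import pandas as pd

shelf_life = {'LLQ': 21, 'AZK': 10, 'CHD': 30}   # days, from the question

mfg = pd.to_datetime(df['manufacturing date'], dayfirst=True)
df['expiry date'] = mfg + pd.to_timedelta(df['type'].map(shelf_life), unit='D')
</code></pre>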
python|pandas|csv
0
9,316
67,220,775
single column in dataframe needs to be broken up into 3 columns
<p>I have been using stack overflow a lot lately and I appreciate the community here.</p> <p>I have a code that I have been working on that is finally starting to look like it should be I have one glitch that I haven't been able to get past.</p> <p>I pulled data from a site for different states, they all have the same looking table with different data in it. I had to change the BeautifulSoup coding to make the loop work but now I have a pretty ugly column with all the data in it. It's easy to see which line goes where but don't really know how to get started in python.</p> <p>Any help would be appreciated.</p> <pre><code>states = [&quot;Washington&quot;, &quot;Oregon&quot;] period = &quot;2020&quot; num_states = len(states) state_list = [] df = pd.DataFrame() #df.columns['COUNTY','PAYMENT','TOTAL ACRES'] for state in states: driver = webdriver.Chrome(executable_path = 'C:/webdrivers/chromedriver.exe') driver.get('https://www.nbc.gov/pilt/counties.cfm') driver.implicitly_wait(20) state_s = driver.find_element(By.NAME, 'state_code') drp = Select(state_s) drp.select_by_visible_text(state) year_s = driver.find_element(By.NAME, 'fiscal_yr') drp = Select(year_s) drp.select_by_visible_text(period) driver.implicitly_wait(10) link = driver.find_element(By.NAME, 'Search') link.click() url = driver.current_url page = requests.get(url) #dfs = pd.read_html(addrss)[2] # Get the html soup = BeautifulSoup(page.text, 'lxml') table = soup.findAll('table')[2] headers = [] for i in table.find_all('th'): title = i.text.strip() headers.append(title) for row in table.find_all('tr')[1:]: data = row.find_all('td') row_data = [td.text.strip() for td in data] length = len(df) df </code></pre> <p>output:</p> <pre class="lang-none prettyprint-override"><code> 0 0 ADAMS COUNTY 1 $59,408 2 21,337 0 ASOTIN COUNTY 1 $174,550 .. ... 1 $38,627 2 58,311 0 TOTAL 1 $23,321,995 2 31,312,205 [228 rows x 1 columns] </code></pre>
<p>you can set a custom index then use <code>unstack()</code> and rename. This is based on the assumption that the headers in your pic above match your target dataset, (i.e the index is repeated in multiples of 3)</p> <pre><code>df1 = df.set_index(df.groupby(level=0)\ .cumcount(),append=True).stack()\ .unstack(0)\ .rename(columns={0 : 'County', 1: 'Price', 2 : 'Population?'}) print(df1) County Price Population? 0 1 ADAMS COUNTY $59,408 21,337 1 1 ASOTIN COUNTY $174,550 58,311 2 1 ANOTHER COUNTY $38,627 31,312,205 3 1 TOTAL $23,321,995 NaN </code></pre>
pandas|selenium|beautifulsoup|multiple-columns
3
9,317
67,245,275
Extracting str from pandas dataframe using json
<p>I read a csv file into a dataframe named df.</p> <p>Each row contains a str like the one below.</p> <blockquote> <p>'{&quot;id&quot;:2140043003,&quot;name&quot;:&quot;Olallo Rubio&quot;,...}'</p> </blockquote> <p>I would like to extract &quot;name&quot; and &quot;id&quot; from each row and make a new dataframe to store them.</p> <p>I use the following code to extract them, but it shows an error. Please let me know if there are any suggestions on how to solve this problem. Thanks</p> <pre><code></code></pre> <blockquote> <p>JSONDecodeError: Expecting ',' delimiter: line 1 column 32 (char 31)</p> </blockquote>
<pre class="lang-py prettyprint-override"><code>text={ &quot;id&quot;: 2140043003, &quot;name&quot;: &quot;Olallo Rubio&quot;, &quot;is_registered&quot;: True, &quot;chosen_currency&quot;: 'Null', &quot;avatar&quot;: { &quot;thumb&quot;: &quot;https://ksr-ugc.imgix.net/assets/019/223/259/16513215a3869caaea2d35d43f3c0c5f_original.jpg?w=40&amp;h=40&amp;fit=crop&amp;v=1510685152&amp;auto=format&amp;q=92&amp;s=653706657ccc49f68a27445ea37ad39a&quot;, &quot;small&quot;: &quot;https://ksr-ugc.imgix.net/assets/019/223/259/16513215a3869caaea2d35d43f3c0c5f_original.jpg?w=160&amp;h=160&amp;fit=crop&amp;v=1510685152&amp;auto=format&amp;q=92&amp;s=0bd2f3cec5f12553e679153ba2b5d7fa&quot;, &quot;medium&quot;: &quot;https://ksr-ugc.imgix.net/assets/019/223/259/16513215a3869caaea2d35d43f3c0c5f_original.jpg?w=160&amp;h=160&amp;fit=crop&amp;v=1510685152&amp;auto=format&amp;q=92&amp;s=0bd2f3cec5f12553e679153ba2b5d7fa&quot; }, &quot;urls&quot;: { &quot;web&quot;: { &quot;user&quot;: &quot;https://www.kickstarter.com/profile/2140043003&quot; }, &quot;api&quot;: { &quot;user&quot;: &quot;https://api.kickstarter.com/v1/users/2140043003?signature=1531480520.09df9a36f649d71a3a81eb14684ad0d3afc83e03&quot; } } } def extract(text,*args): list1=[] for i in args: list1.append(text[i]) return list1 print(extract(text,'name','id')) # ['Olallo Rubio', 2140043003] </code></pre>
python|json|python-3.x|pandas|dataframe
0
9,318
67,541,324
Augmenting an image in TensorFlow 1 causes a tensor to be returned
<p>I want to augment my images without using keras. I came across this website: <a href="https://www.wouterbulten.nl/blog/tech/data-augmentation-using-tensorflow-data-dataset/" rel="nofollow noreferrer">https://www.wouterbulten.nl/blog/tech/data-augmentation-using-tensorflow-data-dataset/</a></p> <p>The problem is that I give an image as input but get a tensor as output (which was not the case using keras ImageDataGenerator). How can I deal with that? After augmenting the data, I want to get the same type of object in return.</p>
<p>Tensorflow provides image augmentation in two ways: one is using the tf.keras preprocessing utilities and the other is using tf.image. The following sample code performs image augmentation with tf.image (note that <code>resize_and_rescale</code> and <code>IMG_SIZE</code> come from the linked tutorial).</p> <pre><code>def augment(image_label, seed): image, label = image_label image, label = resize_and_rescale(image, label) image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6) # Make a new seed new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :] # Random crop back to the original size image = tf.image.stateless_random_crop( image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed) # Random brightness image = tf.image.stateless_random_brightness( image, max_delta=0.5, seed=new_seed) image = tf.clip_by_value(image, 0, 1) return image, label </code></pre> <p>Follow the <a href="https://www.tensorflow.org/tutorials/images/data_augmentation#using_tfimage" rel="nofollow noreferrer">TensorFlow image augmentation</a> tutorial for more information.</p>
tensorflow
1
9,319
67,300,269
Using groupby() and value_counts()
<p>The goal is to identify the count of colors in each <code>groupby()</code>.</p> <p><img src="https://i.stack.imgur.com/IYhBU.png" alt="" /></p> <p>As the outcome below shows, in the first group the color blue appears 3 times.</p> <p>In the second group yellow appears 2 times, and in the third group blue appears 5 times.</p> <p>So far this is what I got.</p> <pre class="lang-py prettyprint-override"><code>df.groupby(['X','Y','Color']).Color.value_counts() </code></pre> <p>but this produces only a count of 1, since the color in each row appears once.</p> <p>The final output should be like this:</p> <p><img src="https://i.stack.imgur.com/bxfr6.png" alt="" /></p> <p>Thanks in advance for any assistance.</p>
<p>If you pass <code>'size'</code> to <code>transform</code>, the per-group count is broadcast back to every row, so the result stays in tabular form instead of being collapsed by the aggregation.</p> <pre><code>df['Count'] = df.groupby(['X','Y']).Color.transform('size') df.set_index(['X','Y'], inplace=True) df Color Count X Y A B Blue 3 B Blue 3 B Blue 3 C D Yellow 2 D Yellow 2 E F Blue 5 F Blue 5 F Blue 5 F Blue 5 F Blue 5 </code></pre>
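<p>If you only need one row per group rather than the count repeated on every row, a short alternative sketch is:</p> <pre><code>counts = df.groupby(['X', 'Y', 'Color']).size().reset_index(name='Count')
</code></pre>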
python|pandas
1
9,320
67,255,661
Compare Array numbers with DF column. If exists in df, than write to new df
<p>I have the following problem: I have an array with multiple ID's and i also have a column in a df, with which I want to match these ID's. If equal, the row should be written to a new DF.</p> <p>This is my array, called manhatten_ids</p> <pre><code>array([ 4, 12, 13, 24, 41, 42, 43, 45, 48, 50, 68, 74, 75, 79, 87, 88, 90, 100, 103, 104, 105, 107, 113, 114, 116, 120, 125, 127, 128, 137, 140, 141, 142, 143, 144, 148, 151, 152, 153, 158, 161, 162, 163, 164, 166, 170, 186, 194, 202, 209, 211, 224, 229, 230, 231, 232, 233, 234, 236, 237, 238, 239, 243, 244, 246, 249, 261, 262, 263], dtype=int64) </code></pre> <p>This is my frame, called df_trips:</p> <p><a href="https://i.stack.imgur.com/VMqkA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VMqkA.png" alt="This is my frame, called df_trips:" /></a></p> <p>so if one DOLocation ID matches an array value, it should write the whole row with all columns to a new df, called newdf.</p> <p>This one actually does not work:</p> <pre><code>newdf=df.loc[df_trips['DOLocationID'].isin(manhatten_ids)] </code></pre> <p>Does anyone has an idea?</p>
<p>So we can do this iteratively by using the enumerate function to look through the values of the location column like so:</p> <pre><code>newdf = pd.DataFrame() row_ind = 0 #row of new dataframe that you will write on vals = df.DOLocationID.values #get values of location column for i, val in enumerate(vals): if val in manhatten_ids: #check if location val is in manhatten ids newdf.loc[row_ind] = df.iloc[i] #assign new row of newdf to current row in df row_ind += 1 </code></pre>
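<p>It is also worth noting that the vectorized attempt from the question is essentially correct; it only mixes two frame names (<code>df</code> and <code>df_trips</code>). Using the same frame on both sides should give the filtered rows directly, and will be much faster than a Python loop:</p> <pre><code>newdf = df_trips.loc[df_trips['DOLocationID'].isin(manhatten_ids)]
</code></pre>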
python|pandas|data-science|jupyter
0
9,321
34,654,220
tensorflow check image is in reader
<p>I'm not sure whether TensorFlow actually has actually decoded an image when the output is only: <code>Tensor("DecodeJpeg:0", shape=TensorShape([Dimension(None), Dimension(None), Dimension(None)]), dtype=uint8)</code></p> <p><strong>How can I show the image and label from the Tensor object?</strong></p> <p>(code partly from: <a href="https://stackoverflow.com/questions/34340489/tensorflow-read-images-with-labels">Tensorflow read images with labels</a>)</p> <pre><code>import tensorflow as tf from PIL import Image import numpy as np from os.path import join KNOWN_HEIGHT = 812 KNOWN_WIDTH = 812 def read_my_file_format(self, filename_and_label_tensor): """Consumes a single filename and label as a ' '-delimited string. Args: filename_and_label_tensor: A scalar string tensor. Returns: Two tensors: the decoded image, and the string label. """ filename, label = tf.decode_csv(filename_and_label_tensor, [[""], [""]], ",") file_contents = tf.read_file(filename) image = tf.image.decode_jpeg(file_contents) #image.set_shape([KNOWN_HEIGHT, KNOWN_WIDTH, 3]) return image, label string = ['test.jpg,m', 'test2.jpg,f'] filepath_queue = tf.train.string_input_producer(string) image, label = read_my_file_format(filepath_queue.dequeue()) print(image) # Output: Tensor("DecodeJpeg:0", shape=TensorShape([Dimension(None), Dimension(None), Dimension(None)]), dtype=uint8) print(label) # Output: Tensor("DecodeCSV:1", shape=TensorShape([]), dtype=string) </code></pre> <p>What can I do to show the actual image in <code>image</code> and also show the <code>label</code>? Because whether the image is indeed in the same folder or not doesn't change the output from <code>image</code> and <code>label</code>.</p> <h2>Edit</h2> <p>The following code (partly from: <a href="https://stackoverflow.com/questions/33648322/tensorflow-image-reading-display">Tensorflow image reading &amp; display</a>) shows an image on my friends Mac, but not on my Ubuntu 14.04:</p> <pre><code># Test show image images = [] with tf.Session() as sess: # Start populating the filename queue. coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) if len(string) &gt; 0: for i in range(len(string)): plaatje = result.image.eval() images.append(plaatje) Image._showxv(Image.fromarray(np.asarray(plaatje))) coord.request_stop() coord.join(threads) print("tf.session success") </code></pre> <p>and this results in the following <strong>error</strong>:</p> <pre><code>W tensorflow/core/common_runtime/executor.cc:1076] 0x7fa3940cb950 Compute status: Cancelled: Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueMany[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] I tensorflow/core/kernels/queue_base.cc:278] Skipping cancelled enqueue attempt </code></pre>
<p>It seems there is code omitted in your second code section, no?</p> <p>Try the solution suggested <a href="http://andyljones.tumblr.com/post/133267887103/tensorflow-placeholder-queue-errors" rel="nofollow noreferrer">here</a>. You have to initialize the placeholders before you start the queue runner. That fixed a similar problem for me.</p> <p>And I guess you already figured out that you must call <code>.eval()</code> over your tensor in order to get its actual value. Look in this <a href="https://stackoverflow.com/questions/34097281/tensorflow-convert-a-tensor-into-a-numpy-array-in-python">question I made</a></p>
python|image|tensorflow
1
9,322
34,609,531
Select size filtered elements in a large array (raster)
<p>I could need help on this:</p> <p>On a large boolean numpy array (imported raster) (2000x2000), I try to select only elements that are greater than 800 units. (number of total elements > 1000)</p> <p>I tried a loop:</p> <pre><code>labeled_array, num_features = scipy.ndimage.label(my_np_array, structure = None, output = np.int) print num_features RasterYSize, RasterXSize = my_np_array.shape big_zones = np.zeros((RasterYSize, RasterXSize), dtype=np.bool) print "Iterating in progress" # Improvement could be needed to reduce the number of loops for i in range(1, num_features): zone_array = (labeled_array == i) zone = np.sum(zone_array) if zone &gt; 800: big_zones += zone_array </code></pre> <p>But I'm sure there's a better way to do this.</p>
<p>Here's a vectorized approach based on binning with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> and performing the cumulative ORing at <code>big_zones += zone_array</code> with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.in1d.html" rel="nofollow"><code>np.in1d</code></a> -</p> <pre><code>from scipy.ndimage import label # Label with scipy labeled_array, num_features = label(my_np_array, structure = None, output = np.int) # Set the threshold thresh = 800 # Get the binned counts with "np.bincount" and check against threshold matches = np.bincount(labeled_array.ravel())&gt;thresh # Get the IDs corresponding to matches and get rid of the starting "0" and # "num_features", as you won't have those in "range(1, num_features)" either match_feat_ID = np.nonzero(matches)[0] valid_match_feat_ID = np.setdiff1d(match_feat_ID,[0,num_features]) # Finally, use "np.in1d" to do ORing operation corresponding to the iterative # "big_zones += zone_array" operation on the boolean array "big_zones". # Since "np.in1d" works with 1D arrays only, reshape back to 2D shape out = np.in1d(labeled_array,valid_match_feat_ID).reshape(labeled_array.shape) </code></pre> <p><strong>Runtime tests and verify outputs</strong></p> <p>Function definitions -</p> <pre><code>def original_app(labeled_array,num_features,thresh): big_zones = np.zeros((my_np_array.shape), dtype=np.bool) for i in range(1, num_features): zone_array = (labeled_array == i) zone = np.sum(zone_array) if zone &gt; thresh: big_zones += zone_array return big_zones def vectorized_app(labeled_array,num_features,thresh): matches = np.bincount(labeled_array.ravel())&gt;thresh match_feat_ID = np.nonzero(matches)[0] valid_match_feat_ID = np.setdiff1d(match_feat_ID,[0,num_features]) return np.in1d(labeled_array,valid_match_feat_ID).reshape(labeled_array.shape) </code></pre> <p>Timings and output verification -</p> <pre><code>In [2]: # Inputs ...: my_np_array = np.random.rand(200,200)&gt;0.5 ...: labeled_array, num_features = label(my_np_array, structure = None, output = np.int) ...: thresh = 80 ...: In [3]: out1 = original_app(labeled_array,num_features,thresh) In [4]: out2 = vectorized_app(labeled_array,num_features,thresh) In [5]: np.allclose(out1,out2) Out[5]: True In [6]: %timeit original_app(labeled_array,num_features,thresh) 1 loops, best of 3: 407 ms per loop In [7]: %timeit vectorized_app(labeled_array,num_features,thresh) 100 loops, best of 3: 2.5 ms per loop </code></pre>
python|arrays|numpy|raster|ndimage
1
9,323
34,549,974
Pandas: Writing data frame to Excel (.xls) file issue
<p>I am trying to write the data frame to Excel, set the cell width (20), and hide the grid lines. So far I have:</p> <pre><code>writer = ExcelWriter('output.xlsx')
df.to_excel(writer,'Sheet1')
writer.save()

worksheet = writer.sheets['Sheet1']
# Trying to set column width as 18
worksheet.set_column(0,3,18)  # the data frame has 2 columns and an index - in xls it becomes 3 columns
worksheet.hide_gridlines()  # I tried with option 2 - but this hides only the column grids; the row index (now column A) still has grid lines
writer.save()
</code></pre> <p>My data frame looks like:</p> <pre><code>                     Col1   Col2
Time
2011-01-01 01:00:00  4345  0.444
2011-01-01 11:00:00   443  7.4
</code></pre> <p>Is this the wrong way? I don't see the changes in the output file. What am I doing wrong here? Is there a way to name my row header?</p>
<p>IIUC, for the <code>set_column</code> width you're actually writing your <code>df</code> twice; the correct workflow should be the following (EDIT: add <code>engine</code> keyword):</p> <pre><code>import pandas as pd

writer = pd.ExcelWriter('output.xlsx', engine='xlsxwriter')
df.to_excel(writer,'Sheet1')

worksheet = writer.sheets['Sheet1']
# Set column width as 18
worksheet.set_column(0,3,18)
worksheet.hide_gridlines()
writer.save()
</code></pre> <p>This should correctly set the column widths. If you don't want the index <code>Time</code> column in your output, you should set:</p> <pre><code>df.to_excel(writer,'Sheet1',index=False)
</code></pre> <p>if you have previously set:</p> <pre><code>df = df.set_index('Time')
</code></pre> <p>The grid issue is actually still there when you write the full dataframe. I think the current <code>ExcelWriter</code> object doesn't apply the <code>hide_gridlines()</code> option to the index, but I don't know if it is a bug or not.</p> <p>EDIT: thanks to the comments, this isn't a bug.</p>
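<p>If the index column still shows grid lines, note that it lands in column A of the sheet; a short sketch covering it explicitly (in xlsxwriter, <code>hide_gridlines(2)</code> hides both screen and printed gridlines):</p> <pre><code>worksheet.set_column('A:A', 18)  # the index ('Time') becomes column A
worksheet.hide_gridlines(2)      # 2 = hide screen *and* printed gridlines
</code></pre>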
python|pandas|dataframe|xls
2
9,324
60,178,705
Dask Map Tensorflow across Partitions
<p>I have a Tensorflow model that I want to run (not train) on my Dask Dataframe. I'm using <code>map_partitions</code>. However, when I look at the dashboard to check progress, it is only running 1 task for all of the work. I expected it to process the partitions concurrently. What am I doing wrong?</p> <p>Start my local cluster:</p> <pre><code>cluster = LocalCluster(ip="0.0.0.0")
client = Client(cluster)
ddf = dd.read_csv("data/docs", names=["docs"])
</code></pre> <p>The Dataframe <code>ddf</code> is a bunch of sentences (strings) and has 9 partitions.</p> <p>Here is the TF model:</p> <pre><code>def encode_factory(sess):
    output_tensor_names_sorted = ["input_layer/concat:0"]
    loader.load(sess, 'serve', export_path)

    def encode(sentence):
        # encodes string as `Example` protobuf
        serialized_examples = make_examples(sentence, "word")
        inputs_feed_dict = {"input_example_tensor:0": serialized_examples}
        outputs = sess.run(output_tensor_names_sorted, feed_dict=inputs_feed_dict)
        return outputs[0][0]

    return encode
</code></pre> <p>The function <code>encode_factory</code> takes a Tensorflow <code>Session</code> object and loads the TF model from <code>export_path</code> (disk). The function returns a closure which takes a sentence (text string) as input and returns the sentence encoding (embedding/floating point array).</p> <p>I register it as a future:</p> <pre><code>future_fn = client.scatter(encode_factory, broadcast=True)
</code></pre> <p>I then define my mapping function:</p> <pre><code>def map_fn(pdf, encoder):
    # create instance of TF model encoder
    encode = encoder(tf.Session())
    embedded_docs = []
    # iterate through items in the Pandas Dataframe
    for doc in pdf.docs:
        doc_embedding = encode(doc)  # pass sentence to TF model
        embedded_docs.append(str(doc_embedding))
    pdf["encoding"] = embedded_docs
    return pdf
</code></pre> <p>And apply the map across partitions:</p> <pre><code>ddf.map_partitions(map_fn, future_fn, meta={'docs': str, 'encoding': str}).head()
</code></pre> <p>How can I achieve some concurrency? Only 1 worker is running!</p> <p><a href="https://i.stack.imgur.com/sGfcY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGfcY.png" alt="Dask Dashboard"></a></p>
<p>The answer, as I discovered, is quite simple. Since I'm calling <code>head</code> as opposed to <code>compute</code>, Dask only needs to materialize the first partition, so it schedules a single task. If one uses <code>compute</code>, every partition is evaluated and the tasks are spread across the workers.</p>
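<p>A minimal sketch of the difference (same pipeline as above):</p> <pre><code>result = ddf.map_partitions(map_fn, future_fn,
                            meta={'docs': str, 'encoding': str})

result.head()          # only materializes the first partition -&gt; one task
df = result.compute()  # evaluates every partition -&gt; tasks across all workers
</code></pre>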
tensorflow|dask|dask-distributed
0
9,325
60,324,897
Unable to Pass Video Frame for Object Detection Tensorflow python
<p>I am able to process the video frames by saving each frame as an image and then processing it. But I was unable to pass a frame directly to the object detection. Saving the image with imwrite makes the program slow...</p> <p>Here is my main method:</p> <pre><code>cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=2), cv2.CAP_GSTREAMER)
if cap.isOpened():
    window_handle = cv2.namedWindow("CSI Camera", cv2.WINDOW_AUTOSIZE)
    # Window
    while cv2.getWindowProperty("CSI Camera", 0) &gt;= 0:
        ret_val, frame = cap.read()
        if not ret_val:
            break
        frame = imutils.resize(frame, width=600)
        #cv2.imwrite('box.jpg', frame)
        #image = Image.open(path)
        #Error in here!!!
        predictions = od_model.predict_image(frame)
        for x in range(len(predictions)):
            probab = (predictions[x]['probability'])*100
            if(probab &gt; 45):
                print(predictions[x]['tagName'], end=' ')
                print(probab)
        #cv2.imshow("CSI Camera", frame)
        # This also acts as
        keyCode = cv2.waitKey(30) &amp; 0xFF
        # Stop the program on the ESC key
        if keyCode == 27:
            break
    cap.release()
    cv2.destroyAllWindows()
else:
    print("Unable to open camera")
</code></pre> <p>Error Message:</p> <pre><code>predictions = od_model.predict_image(frame)
  File "/home/bharat/New_IT3/object_detection.py", line 125, in predict_image
    inputs = self.preprocess(image)
  File "/home/bharat/New_IT3/object_detection.py", line 130, in preprocess
    image = image.convert("RGB") if image.mode != "RGB" else image
AttributeError: 'numpy.ndarray' object has no attribute 'mode'
</code></pre>
<p>OpenCV reads images in the BGR colour space; convert the frame to RGB and then send the image for detection. The API for this is:</p> <pre><code>frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
</code></pre>
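<p>Note that the traceback complains about <code>image.mode</code>, which is a PIL attribute, so the model helper appears to expect a PIL <code>Image</code> rather than a numpy array. A sketch of the full conversion, assuming that is the case:</p> <pre><code>from PIL import Image

rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
predictions = od_model.predict_image(Image.fromarray(rgb))
</code></pre>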
opencv|tensorflow|object-detection|nvidia-jetson-nano
0
9,326
60,041,461
How to determine the differences in parquet files
<p>I'm storing a pandas DataFrame in a parquet file with this code snippet:</p> <pre><code>df.to_parquet(path, engine="pyarrow", compression="snappy") </code></pre> <p>As part of a regression test, I save the file and compare it to a previously generated file. I tried comparing the file contents in 3 different ways:</p> <ol> <li>command line diff: the files are different.</li> <li>pyarrow.parquet Table.equals: the tables are different.</li> <li>Pandas assert_frame_equal(): the DataFrames are equal.</li> </ol> <p>How can I dig deeper to find the differences between the parquet files?</p> <pre><code>import pyarrow.parquet as pq import pandas as pd path1 = "f1.pq" path2 = "f2.pq" df1 = pd.read_parquet(path1) df2 = pd.read_parquet(path2) # This assertion passes pd.testing.assert_frame_equal(df1, df2) table1 = pq.read_table(path1) table2 = pq.read_table(path2) # This assertion fails assert table1.equals(table2) </code></pre>
<blockquote> <p>How can I dig deeper to find the differences between the parquet files?</p> </blockquote> <p>You can compare the metadata dictionaries and show possible differences using <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertDictEqual" rel="nofollow noreferrer"><code>assertDictEqual</code></a>:</p> <pre><code>from datetime import datetime import time from unittest import TestCase import pandas as pd import pyarrow as pa import pyarrow.parquet as pq df = pd.DataFrame({'col1': [1, 2, 3]}) def write_with_ts(df, filename): &quot;&quot;&quot;Save df to filename as parquet with metadata with timestamp&quot;&quot;&quot; table = pa.Table.from_pandas(df) metadata = table.schema.metadata metadata.update({'timestamp': str(datetime.now())}) table = table.replace_schema_metadata(metadata) pq.write_table(table, filename) path1, path2 = 'f1.pq', 'f2.pq' write_with_ts(df, path1) time.sleep(1) write_with_ts(df, path2) table1 = pq.read_table(path1) table2 = pq.read_table(path2) # This assertion passes assert table1.equals(table2, check_metadata=False) # see (1) below tc = TestCase() tc.maxDiff = None # show diff even if it's longer than 640 chars tc.assertDictEqual(table1.schema.metadata, table2.schema.metadata) </code></pre> <p>The metadata assertion fails with:</p> <pre><code>... AssertionError: {b'pa[420 chars]rsion&quot;: &quot;1.4.2&quot;}', b'timestamp': b'2022-06-17 17:14:00.410200'} != {b'pa[420 chars]rsion&quot;: &quot;1.4.2&quot;}', b'timestamp': b'2022-06-17 17:14:01.413461'} {b'pandas': b'{&quot;index_columns&quot;: [{&quot;kind&quot;: &quot;range&quot;, &quot;name&quot;: null, &quot;start&quot;: 0, &quot;' b'stop&quot;: 3, &quot;step&quot;: 1}], &quot;column_indexes&quot;: [{&quot;name&quot;: null, &quot;field_' b'name&quot;: null, &quot;pandas_type&quot;: &quot;unicode&quot;, &quot;numpy_type&quot;: &quot;object&quot;, &quot;' b'metadata&quot;: {&quot;encoding&quot;: &quot;UTF-8&quot;}}], &quot;columns&quot;: [{&quot;name&quot;: &quot;col1&quot;,' b' &quot;field_name&quot;: &quot;col1&quot;, &quot;pandas_type&quot;: &quot;int64&quot;, &quot;numpy_type&quot;: &quot;in' b't64&quot;, &quot;metadata&quot;: null}], &quot;creator&quot;: {&quot;library&quot;: &quot;pyarrow&quot;, &quot;ver' b'sion&quot;: &quot;8.0.0&quot;}, &quot;pandas_version&quot;: &quot;1.4.2&quot;}', - b'timestamp': b'2022-06-17 17:14:00.410200'} ? ^ ^^^^ + b'timestamp': b'2022-06-17 17:14:01.413461'} ? ^ ^^^^ </code></pre> <p>(1) <code>check_metadata=False</code> is the default for <code>Table.equals()</code> since pyarrow version 0.17.0, see <a href="https://github.com/apache/arrow/pull/6830" rel="nofollow noreferrer">ARROW-7891</a>.</p> <hr> <p>Update as per comment:</p> <p>The file size depends on <em>how</em> a table is saved to a parquet file. Take for instance a dataframe of 1M random doubles: <code>df = pd.DataFrame({'col1': np.random.rand(1_000_000)})</code>:</p> <ul> <li><code>df.to_parquet(filename)</code>: 8,281,466 B</li> <li><code>df.to_parquet(filename, compression='GZIP')</code>: 7,800,448 B</li> <li><code>df.to_parquet(filename, row_group_size=1000)</code>: 9,583,534 B</li> </ul> <p>(default is <code>SNAPPY</code> compression and 1 rowgroup per 64M rows). 
Other issues might be timestamp resolution in combination with Parquet version, see <a href="https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html" rel="nofollow noreferrer"><code>write_table</code></a> for details.</p> <p>You can get further insight by comparing the column metadata</p> <pre><code>metadata = pq.read_metadata(filename) metadata.row_group(0).column(0) # etc </code></pre> <p>See also <a href="https://parquet.apache.org/docs/file-format/" rel="nofollow noreferrer">Parquet file format</a> for details on the file format.</p>
pandas|parquet|pyarrow
1
9,327
60,077,538
How to speed up intersection of dict of sets in Python
<p>I have a dictionary with a set of integers.</p> <pre><code>{'A': {9, 203, 404, 481},
 'B': {9},
 'C': {110},
 'D': {9, 314, 426},
 'E': {59, 395, 405}
}
</code></pre> <p>You can generate the data with this:</p> <pre><code>import string
import numpy as np

data = {}
for i in string.ascii_uppercase:
    n = 25
    rng = np.random.default_rng()
    data[i] = set(rng.choice(100, size=n, replace=False))
</code></pre> <p>I need to get a list of the intersection of subsets of the dictionary. So in this example the intersection of ['A','B','D'] would return [9].</p> <p>I've figured out 2 different ways of doing this but both are much too slow when the sets grow.</p> <pre><code>cols = ['A','B','D']

# method 1
lis = list(map(data.get, cols))
idx = list(set.intersection(*lis))

# method 2 (10x slower than method 1)
query_dict = dict((k, data[k]) for k in cols)
idx2 = list(reduce(set.intersection, (set(val) for val in query_dict.values())))
</code></pre> <p>When the sets grow (&gt;10k ints per set) the runtime grows quickly.</p> <p>I'm okay with using other datatypes than sets in the dict, like lists or numpy arrays etc.</p> <p>Is there a faster way of accomplishing this?</p> <p>EDIT:</p> <p>The original problem I had was this dataframe:</p> <pre><code>        T      S   A   B   C   D
0  49.378  1.057  AA  AB  AA  AA
1   1.584  1.107  BC  BA  AA  AA
2   1.095  0.000  BB  BB  AD
3  10.572  1.224  BA  AB  AA  AA
4   0.000  0.000  DC  BA  AB
</code></pre> <p>For each row I have to sum 'T' over all rows which have A,B,C,D in common; if a threshold is reached, continue, else sum over rows with B,C,D in common, then C,D, and then only D if the threshold is still not reached.</p> <p>However this was really slow, so first I tried get_dummies and then taking the product of columns. That was too slow as well, so I moved to numpy arrays with indices to sum over. That is the fastest option up till now; the intersection is the only thing which still takes up too much time to compute.</p> <p>EDIT2:</p> <p>It turned out I was making it too hard on myself and it is possible with pandas groupby, and that is very fast.</p> <p>code:</p> <pre><code>parts = [['A','B','C','D'],['B','C','D'],['C','D'],['D']]

for part in parts:
    temp_df = df.groupby(part,as_index=False).sum()
    temp_df = temp_df[temp_df['T'] &gt; 100]
    df = pd.merge(df,temp_df,on=part,how='left',suffixes=["","_" + "".join(part)])

df['T_sum'] = df[['T_ABCD','T_BCD','T_CD','T_D']].min(axis=1)
df['S_sum'] = df[['S_ABCD','S_BCD','S_CD','S_D']].min(axis=1)
df.drop(['T_ABCD','T_BCD','T_CD','T_D','S_ABCD','S_BCD','S_CD','S_D'], axis=1, inplace=True)
</code></pre> <p>Probably the code can be a bit cleaner, but I don't know how to replace only NaN values in a merge.</p>
<p>The problem here is how to efficiently find the intersection of several sets. According to the comments: <em>"Max n is 10 million - 30 million and the columns a,b,c,d can be almost unique rows to 1 million in common."</em> So the sets are large, but not all the same size. Set intersection is an <a href="https://en.wikipedia.org/wiki/Associative_property" rel="noreferrer">associative</a> and <a href="https://en.wikipedia.org/wiki/Commutative_property" rel="noreferrer">commutative</a> operation, so we can take the intersections in any order we like.</p> <p>The time complexity of <a href="https://stackoverflow.com/questions/20100003/whats-the-algorithm-of-set-intersection-in-python">intersecting two sets</a> is <code>O(min(len(set1), len(set2)))</code>, so we should choose an order to do the intersections in, which minimises the sizes of the intermediate sets.</p> <hr> <p>If we don't know in advance which pairs of sets have small intersections, the best we can do is intersect them in order of size. After the first intersection, the smallest set will always be the result of the last intersection, so we want to intersect that with the next-smallest input set. It's better to use <code>set.intersection</code> on all of the sets at once rather than <code>reduce</code> here, because that's <a href="https://github.com/python/cpython/blob/2a4903fcce54c25807d362dbbbcfb32d0b494f9f/Objects/setobject.c#L1312" rel="noreferrer">implemented essentially the same way</a> as <code>reduce</code> would do it, but in C.</p> <pre class="lang-py prettyprint-override"><code>def intersect_sets(sets): return set.intersection(*sorted(sets, key=len)) </code></pre> <p>In this case where we know nothing about the pairwise intersections, the only possible slowdown in the C implementation could be the unnecessary memory allocation for multiple intermediate sets. This can be avoided by e.g. <code>{ x for x in first_set if all(x in s for s in other_sets) }</code>, but that turns out to be much slower.</p> <hr> <p>I tested it with sets up to size 6 million, with about 10% pairwise overlaps. These are the times for intersecting four sets; after four, the accumulator is about 0.1% of the original size so any further intersections would take negligible time anyway. The orange line is for intersecting sets in the optimal order (smallest two first), and the blue line is for intersecting sets in the <em>worst</em> order (largest two first).</p> <p><a href="https://i.stack.imgur.com/n1sJh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n1sJh.png" alt="times"></a></p> <p>As expected, both take roughly linear time in the set sizes, but with a lot of noise because I didn't average over multiple samples. The optimal order is consistently about 2-3 times as fast as the worst order, measured on the same data, presumably because that's the ratio between the smallest and second-largest set sizes.</p> <p>On my machine, intersecting four sets of size 2-6 million takes about 100ms, so going up to 30 million should take about half a second; I think it's very unlikely that you can beat that, but half a second should be fine. If it consistently takes a lot longer than that on your real data, then the issue will be to do with your data not being uniformly random. 
If that's the case then there's probably not much Stack Overflow can do for you beyond this, because improving the efficiency will depend highly on the particular distribution of your real data (though see below about the case where you have to answer many queries on the same sets).</p> <p>My timing code is below.</p> <pre class="lang-py prettyprint-override"><code>import string import random def gen_sets(m, min_n, max_n): n_range = range(min_n, max_n) x_range = range(min_n * 10, max_n * 10) return [ set(random.sample(x_range, n)) for n in [min_n, max_n, *random.sample(n_range, m - 2)] ] def intersect_best_order(sets): return set.intersection(*sorted(sets, key=len)) def intersect_worst_order(sets): return set.intersection(*sorted(sets, key=len, reverse=True)) from timeit import timeit print('min_n', 'max_n', 'best order', 'worst order', sep='\t') for min_n in range(100000, 2000001, 100000): max_n = min_n * 3 data = gen_sets(4, min_n, max_n) t1 = timeit(lambda: intersect_best_order(data), number=1) t2 = timeit(lambda: intersect_worst_order(data), number=1) print(min_n, max_n, t1, t2, sep='\t') </code></pre> <hr> <p>If you need to do many queries, then it may be worth computing the pairwise intersections first:</p> <pre class="lang-py prettyprint-override"><code>from itertools import combinations pairwise_intersection_sizes = { (a, b): set_a &amp; set_b for ((a, set_a), (b, set_b)) in combinations(data.items(), 2) } </code></pre> <p>If some intersections are much smaller than others, then the precomputed pairwise intersections can be used to choose a better order to do <code>set.intersection</code> in. Given some sets, you can choose the pair with the smallest precomputed intersection, then do <code>set.intersection</code> on that precomputed result along with the rest of the input sets. Especially in the non-uniform case where some pairwise intersections are nearly empty, this could be a big improvement.</p>
python|pandas|set
6
9,328
65,287,506
getting correct index of numpy array
<p>I have to compare np.arrays pairwise and I need the index back.</p> <p>My code is:</p> <pre><code>import itertools
import numpy as np

Vals = np.array([[1.0, 1.0], [2., 2.], [1., 2.], [2., 1.], [3., 3.], [3., 3.]])

for Val in itertools.combinations(Vals,2):
    X1 = Val[0][0]
    X2 = Val[1][0]
    Y1 = Val[0][1]
    Y2 = Val[1][1]
    Index1 = np.where( (Vals == Val[0]).all(axis=1))[0][0]
    Index2 = np.where( (Vals == Val[1]).all(axis=1))[0][0]
    print(X1,Y1,Index1)
    print(X2,Y2,Index2)
</code></pre> <p>This runs fine until two or more tuples with the same values are in Vals (as in the example). np.where gives back the first occurrence of that tuple in the array, so I get the wrong index back. How can I get the correct index?</p>
<p>Iterate over pairs of <em>indices</em> rather than pairs of values; then identical rows can no longer be confused with each other:</p> <pre><code>for i, j in itertools.combinations(range(len(Vals)), 2):
    print(Vals[i], i, &quot;|&quot;, Vals[j], j)
</code></pre> <pre class="lang-none prettyprint-override"><code>[1. 1.] 0 | [2. 2.] 1
[1. 1.] 0 | [1. 2.] 2
[1. 1.] 0 | [2. 1.] 3
[1. 1.] 0 | [3. 3.] 4
[1. 1.] 0 | [3. 3.] 5
[2. 2.] 1 | [1. 2.] 2
[2. 2.] 1 | [2. 1.] 3
[2. 2.] 1 | [3. 3.] 4
[2. 2.] 1 | [3. 3.] 5
[1. 2.] 2 | [2. 1.] 3
[1. 2.] 2 | [3. 3.] 4
[1. 2.] 2 | [3. 3.] 5
[2. 1.] 3 | [3. 3.] 4
[2. 1.] 3 | [3. 3.] 5
[3. 3.] 4 | [3. 3.] 5
</code></pre>
python|numpy|numpy-ndarray
0
9,329
65,424,771
How to convert one-hot vector to label index and back in Pytorch?
<p>How do you transform a vector of labels to one-hot encoding and back in Pytorch?</p> <p>The solution below was copied here after having to dig through an entire forum discussion, instead of finding an easy answer from googling.</p>
<p>From <a href="https://discuss.pytorch.org/t/pytocrh-way-for-one-hot-encoding-multiclass-target-variable/68321" rel="noreferrer">the Pytorch forums</a>:</p> <pre><code>import torch
import numpy as np

labels = torch.randint(0, 10, (10,))

# labels --&gt; one-hot
one_hot = torch.nn.functional.one_hot(labels)

# one-hot --&gt; labels
labels_again = torch.argmax(one_hot, dim=1)

np.testing.assert_array_equal(labels.numpy(), labels_again.numpy())
</code></pre>
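<p>One caveat: <code>one_hot</code> infers the width from the largest label it sees, so if a batch happens to miss the highest class the round trip can come out too narrow. Pinning the width with <code>num_classes</code> avoids that:</p> <pre><code>one_hot = torch.nn.functional.one_hot(labels, num_classes=10)
</code></pre>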
python|pytorch|one-hot-encoding|multiclass-classification
5
9,330
65,203,030
Get all possible pairs within a cell in df python
<p>I have a df such as:</p> <pre><code>COL1 COL2 COL3
G1   1    ['B_-__Canis_lupus']
G1   2    ['A_-__Felis_cattus','O_+__Felis_cattus','D_-__Felis_sylvestris']
G2   1    ['Q_-__Mus_musculus','S_-__Mus_griseus','P_-__Mus_rattus']
</code></pre> <p>and I would like to create 1 new column from it:</p> <p><strong>COL4</strong>, which contains every possible pairwise combination of the elements of <strong>COL3</strong> (never pairing an element with itself), stored as a list of lists within each cell.</p> <p>Here I should then get:</p> <pre><code>COL1 COL2 COL3                                                               COL4
G1   1    ['B_-__Canis_lupus']                                               NA
G1   2    ['A_-__Felis_cattus','O_+__Felis_cattus','D_-__Felis_sylvestris']  [['A_-__Felis_cattus','O_+__Felis_cattus'],['A_-__Felis_cattus','D_-__Felis_sylvestris'],['O_+__Felis_cattus','D_-__Felis_sylvestris']]
G2   1    ['Q_-__Mus_musculus','S_-__Mus_griseus','P_-__Mus_rattus']         [['Q_-__Mus_musculus','S_-__Mus_griseus'],['Q_-__Mus_musculus','P_-__Mus_rattus'],['S_-__Mus_griseus','P_-__Mus_rattus']]
</code></pre> <p>Does someone have an idea?</p> <p>Here are the data in dict format:</p> <pre><code>data = {'COL1': {0: 'G1', 1: 'G1', 2: 'G2'},
        'COL2': {0: 1, 1: 2, 2: 1},
        'COL3': {0: &quot;['B_-__Canis_lupus']&quot;,
                 1: &quot;['A_-__Felis_cattus','O_+__Felis_cattus','D_-__Felis_sylvestris']&quot;,
                 2: &quot;['Q_-__Mus_musculus','S_-__Mus_griseus','P_-__Mus_rattus']&quot;}}
</code></pre> <p>I use:</p> <pre><code>import pandas as pd
df = pd.read_csv(&quot;test.tab&quot;, sep=&quot;;&quot;)
</code></pre> <p>or</p> <pre><code>pd.DataFrame.from_dict(data)
</code></pre>
<p>Use <code>itertools.combinations</code>. The <code>COL3</code> column contains lists stored as strings, so <code>literal_eval</code> is needed first to convert them back to <code>list</code> objects.</p> <pre><code>from itertools import combinations
from ast import literal_eval

def pairwise(x):
    # all unordered pairs, never pairing an element with itself
    pairs = [list(c) for c in combinations(x, 2)]
    return pairs or None  # a single-element list yields no pairs -&gt; NA

df['COL3'] = df.COL3.map(literal_eval)
df['COL4'] = df.COL3.map(pairwise)
</code></pre>
python|python-3.x|pandas|dataframe
1
9,331
65,133,881
lambda function on list in dataframe column error
<p>I have lists of numbers inside a pandas dataframe and I am trying to use a lambda function + list comprehension to remove values from these lists.</p> <pre><code>col1  col2
a     [-1, 2, 10, 600, -10]
b     [-0, -5, -6, -200, -30]
c     .....
</code></pre> <pre><code>df.col2.apply(lambda x: [i for i in x if i &gt;= 0])  # just trying to remove negative values
</code></pre> <p>Numbers are always ascending and can be all negative, all positive or a mix. Lists are about 200 items long, all integers.</p> <p>I get this error:</p> <pre><code>TypeError: 'numpy.float64' object is not iterable
</code></pre> <p>Edit: when I do it this way it works: <code>[i for i in df[col2][#] if i &gt;= 0]</code>. I guess I could run this through a for loop... seems slow though.</p> <p>Edit2: looking at it with fresh eyes, it turns out that the column isn't entirely made up of lists; there are a few float values spread throughout (duh). Something weird was going on with the merge; once I corrected that, the code above worked as expected. Thanks for the help!</p>
<p>Because some of the <code>x</code> values reaching your lambda are floats, and you can't loop over a float :p. If you need to filter scalar values instead, you can:</p> <pre class="lang-python prettyprint-override"><code>In [2]: np.random.seed(4)
   ...: df = pd.DataFrame(np.random.randint(-5,5, 7)).rename(columns={0:&quot;col2&quot;})
   ...: df.col2 = df.col2.astype(float)
   ...: df
Out[2]:
   col2
0   2.0
1   0.0
2  -4.0
3   3.0
4   2.0
5   3.0
6  -3.0

In [3]: df.col2.apply(lambda x: x if x &gt; 0 else None).dropna()
Out[3]:
0    2.0
3    3.0
4    2.0
5    3.0
Name: col2, dtype: float64
</code></pre>
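<p>Given the Edit2 in the question (a few stray floats mixed in with the lists), a defensive sketch that only filters the list cells and leaves scalar values untouched:</p> <pre><code>df['col2'] = df['col2'].apply(
    lambda x: [i for i in x if i &gt;= 0] if isinstance(x, list) else x
)
</code></pre>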
python|pandas|numpy|lambda
1
9,332
49,887,488
Frequency of Values by Date in a Pandas DataFrame
<p>I have a DataFrame in Pandas that looks like this. <code>date</code> is an index of dtype <code>datetime64</code>.</p> <pre><code> keyword id date 2017-03-31 21:22:33+00:00 cat 0 2017-07-07 11:28:36+00:00 dog 1 2017-03-31 01:18:50+00:00 cat 2 2017-03-31 21:03:39+00:00 cat 3 2017-08-23 13:26:43+00:00 elephant 4 </code></pre> <p>I would like a result that counts the keywords by day like this:</p> <pre><code>2017-03-31 cat 3 2017-07-07 dog 1 2017-08-23 elephant 1 </code></pre> <p>I am new to Pandas, so I am learning. I have tried things like:</p> <pre><code>df.resample('D').keyword.value_counts() </code></pre> <p>which returns:</p> <pre><code>ValueError: operands could not be broadcast together with shape ... </code></pre> <p>Apparently, I need to use <code>resample</code> because the date is an index. I'm not really sure how to proceed. Any thoughts would be appreciated.</p>
<p>By using <code>get_level_values</code> with <code>.date</code> (which extracts the date part from the datetime index):</p> <pre><code>df.groupby([df.index.get_level_values(0).date, df.keyword]).size()
Out[867]:
            keyword
2017-03-31  cat         3
2017-07-07  dog         1
2017-08-23  elephant    1
dtype: int64
</code></pre>
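<p>An equivalent spelling, assuming the index is a plain <code>DatetimeIndex</code>, is to normalize the timestamps to midnight instead of extracting Python <code>date</code> objects (which keeps the result's index as datetimes):</p> <pre><code>df.groupby([df.index.normalize(), 'keyword']).size()
</code></pre>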
python|pandas
2
9,333
49,793,557
How to perform Kernel Density Estimation in Tensorflow
<p>I'm trying to write a Kernel Density Estimation algorithm in Tensorflow.</p> <p>When fitting the KDE model, I am iterating through all the data in the current batch and, for each, I am creating a kernel using the <code>tensorflow.contrib.distributions.MultivariateNormalDiag</code> object: <code> self.kernels = [MultivariateNormalDiag(loc=data, scale=bandwidth) for data in X] </code></p> <p>Later, when trying to predict the likelihood of a data point with respect to the model fitted above, for each data point I am evaluating, I am summing together the probability given by each of the kernels above: <code> tf.reduce_sum([kernel._prob(X) for kernel in self.kernels], axis=0) </code></p> <p>This approach only works when <code>X</code> is a numpy array, as TF doesn't let you iterate over a Tensor. My question is whether or not there is a way to make the algorithm above work with <code>X</code> as a <code>tf.Tensor</code> or <code>tf.Variable</code>?</p>
<p>One answer that I found for this problem tackles the problem of fitting the KDE and predicting the probabilities in one fell swoop. The implementation is a bit hacky, though.</p> <pre><code>def fit_predict(self, data): return tf.map_fn(lambda x: \ tf.div(tf.reduce_sum( tf.map_fn(lambda x_i: self.kernel_dist(x_i, self.bandwidth).prob(x), self.fit_X)), tf.multiply(tf.cast(data.shape[0], dtype=tf.float64), self.bandwidth[0])), self.X) </code></pre> <p>The first <code>tf.map_fn</code> iterates through the data for which we are calculating the likelihood, summing together the probabilities from each of the individual kernels. </p> <p>The second <code>tf.map_fn</code> iterates through all the data that we use to fit our model, and creates a <code>tf.contrib.distributions.Distribution</code> (here this is parameterized by <code>kernel_dist</code>).</p> <p><code>self.X</code> and <code>self.fit_X</code> are placeholders that are created when initializing the <code>KernelDensity</code> object.</p>
python|tensorflow|kernel-density
0
9,334
49,916,607
What does pandas index.searchsorted() do?
<p>I'm working on two dataframes <code>df1</code> and <code>df2</code>. I used the code : </p> <pre><code>df1.index.searchsorted(df2.index) </code></pre> <p>But I'm not sure about how does it work. Could someone please explain me how ?</p>
<p>The method applies a <a href="https://en.wikipedia.org/wiki/Binary_search_algorithm" rel="noreferrer">binary search</a> to the index. This is a well-known algorithm that uses the fact that values are already in sorted order to find an insertion index in as few steps as possible.</p> <p>Binary search works by picking the middle element of the values, then comparing that to the searched-for value; if the value is lower than that middle element, you then narrow your search to the first half, or you look at the second half if it is larger.</p> <p>This way you reduce the number of steps needed to find your element to <em>at most</em> the base-2 log of the length of the index. For 1000 elements, that's at most 10 steps; for a million elements, at most 20, etc.</p> <p>The insertion index is the place to add your value to keep the index in sorted order; the <code>left</code> location also happens to be the index of a <em>matching</em> value, so you can also use this both to find places to insert missing or duplicate values, and to test if a given value is present in the index.</p> <p>The pandas implementation is basically the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="noreferrer"><code>numpy.searchsorted()</code></a> function, which uses <a href="https://github.com/numpy/numpy/blob/master/numpy/core/src/npysort/binsearch.c.src" rel="noreferrer">generated C code</a> to optimise this search for different object types, squeezing out every last drop of speed.</p> <p>Pandas uses the method in various index implementations to ensure fast operations. You usually wouldn't use this method to test if a value is present in the index, for example, because Pandas indexes already implement an efficient <code>__contains__</code> method for you, usually based on <code>searchsorted()</code> where that makes sense. See <a href="https://github.com/pandas-dev/pandas/blob/v0.23.0.dev0/pandas/_libs/index.pyx#L425-L435" rel="noreferrer"><code>DateTimeEngine.__contains__()</code></a> for such an example.</p>
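<p>A quick illustration of both uses (finding an insertion point, and locating an existing value):</p> <pre><code>import pandas as pd

idx = pd.Index([10, 20, 30, 40])
idx.searchsorted(25)            # 2: insert before 30 to keep the order
idx.searchsorted(20)            # 1: for a value already present, the 'left' side is its position
idx.searchsorted([5, 20, 45])   # array([0, 1, 4])
</code></pre>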
python|pandas|dataframe|indexing
9
9,335
63,824,157
While using np.arange() it was incrementing with wrong step size
<pre><code>for i in np.arange(0.0,1.1,0.1): print(i) </code></pre> <p>Output:</p> <pre><code>0.0 0.1 0.2 0.30000000000000004 0.4 0.5 0.6000000000000001 0.7000000000000001 0.8 0.9 1.0 </code></pre> <p>Expected output:</p> <pre><code>0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 </code></pre>
<p>It's not incrementing by the wrong step size, those are just floating point errors. From <a href="https://www.geeksforgeeks.org/floating-point-error-in-python/" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>This can be considered as a bug in Python, but it is not. This has little to do with Python, and much more to do with how the underlying platform handles floating-point numbers. It's a normal case encountered when handling floating-point numbers internally in a system. It's a problem caused by the internal representation of floating-point numbers, which uses a fixed number of binary digits to represent a decimal number. It is difficult to represent some decimal numbers in binary, so in many cases, it leads to small roundoff errors.</p> </blockquote>
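<p>If clean decimals matter, one common workaround (a sketch) is to generate integers and divide once, so each value incurs only a single rounding instead of accumulated steps:</p> <pre><code>import numpy as np

for i in np.arange(0, 11) / 10:
    print(i)   # 0.0, 0.1, 0.2, 0.3, ..., 1.0
</code></pre>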
python|numpy
0
9,336
63,781,144
TypeError: expected string or bytes-like object in Pandas
<p>I want to tokenize the text, but can't. How can I solve this? Here is my problem: <img src="https://i.stack.imgur.com/BbjlW.png" alt="enter image description here" /></p> <p><img src="https://i.stack.imgur.com/GX5rW.png" alt="2nd image is my problem" /></p> <pre><code># read text from file
data = pd.read_csv(&quot;input data.txt&quot;, encoding = &quot;UTF-8&quot;)
print(data)
</code></pre> <p>Output: Bangla text</p> <pre><code>t = Tokenizers()
print(t.bn_word_tokenizer(data))
</code></pre> <p>Error</p> <pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
&lt;ipython-input-17-f9f299ecf33d&gt; in &lt;module&gt;
      1 t = Tokenizers()
----&gt; 2 print(t.bn_word_tokenizer(dataStr))

D:\anaconda\lib\site-packages\bnltk\tokenize\bn_word_tokenizers.py in bn_word_tokenizer(self, input_)
     15     tokenize_list = []
     16     r = re.compile(r'[\s\।{}]+'.format(re.escape(punctuation)))
---&gt; 17     list_ = r.split(input_)
     18     list_ = [i for i in list_ if i]
     19     return list_

TypeError: expected string or bytes-like object
</code></pre>
<p>Try this:</p> <pre><code>for column in data.columns:
    tokenized = data[column].astype(str).apply(t.bn_word_tokenizer)
    print(tokenized)
</code></pre> <p>This will print one tokenized column at a time. The tokenizer expects a single string, so apply it element-wise per column (casting to <code>str</code> first guards against non-string cells). If you want to convert the entire dataframe rather than just print it, replace the <code>print</code> with <code>data[column] = tokenized</code>.</p>
python|pandas
0
9,337
63,913,629
Fast vectorized way to convert a row vector to indptr for a sparse matrix?
<p>For sparse matrices, we usually pass in column indices (<code>indices</code>) and an <code>indptr</code> vector that indexes the <code>indices</code> vector so that <code>indices[indptr[i]:indptr[i+1]]</code> are the elements of row <code>i</code> in the sparse matrix.</p> <p>Is there a fast, vectorized, preferably numpy solution to convert a vector of consecutive row indices into an <code>indptr</code> in Python?</p> <p>For example, if this is my <code>rows</code> indices vector: <code>[0,1,1,2,2,2,3,5]</code>...</p> <p>The <code>indptr</code> vector would be <code>[0,1,3,6,7,7,8]</code> where the 7 repeats because the row vector is missing row 4.</p> <p>I can do it using a simple loop:</p> <pre><code>for i in range(len(rows)): indptr[rows[i]+1] += 1 indptr=np.cumsum(indptr) </code></pre> <p>But I was wondering if there's a faster, vectorized way to do it?</p>
<p>I think what you are looking for is this:</p> <pre><code>np.bincount(rows).cumsum()
#[1 3 6 7 7 8]
</code></pre> <p>And if there are rows at the bottom of your matrix that might be empty, simply add that as an argument to <code>bincount</code> (per @CJR's recommendation):</p> <pre><code>np.bincount(rows, minlength=num_rows).cumsum()
#[1 3 6 7 7 8]
</code></pre> <p>You probably want to insert a <code>0</code> in front as well. What <code>bincount</code> does is count the number of elements in each bin/row, and <code>cumsum</code> then adds them up. This way you include missing bins/rows as well.</p> <p>The best way to insert the 0 is probably this:</p> <pre><code>np.bincount(np.array(rows)+1).cumsum()
#[0 1 3 6 7 7 8]
</code></pre> <p>or you can do it directly with:</p> <pre><code>np.insert(np.bincount(rows).cumsum(),0,0)
#[0 1 3 6 7 7 8]
</code></pre>
python|arrays|numpy|scipy|sparse-matrix
6
9,338
47,016,371
Increasing column value pandas
<p>I have a dataframe of 143999 rows which contains position and time data. I already made a column "dt" which calculates the time difference between rows. Now I want to create a new column which gives the dt values a group number. It starts with group = 0, and when dt &gt; 60 the group number should increase by 1. I tried the following:</p> <pre><code>def group(x):
    c = 0
    if densdata["dt"] &lt; 60:
        densdata["group"] = c
    elif densdata["dt"] &gt;= 60:
        c += 1
        densdata["group"] = c

densdata["group"] = densdata.apply(group, axis=1)
</code></pre> <p>The error that I get is: <code>The truth value of a Series is ambiguous</code>.</p> <p>Any ideas how to fix this problem?</p> <p>This is what I want:</p> <pre><code>  dt   group
0.01   0
2      0
0.05   0
300    1
2      1
60     2
</code></pre>
<p>You can take advantage of the fact that <code>True</code> evaluates to 1 and use <code>.cumsum()</code>.</p> <pre><code>densdata = pd.DataFrame({'dt': np.random.randint(low=50,high=70,size=20), 'group' : np.zeros(20, dtype=np.int32)}) print(densdata.head()) dt group 0 52 0 1 59 0 2 69 0 3 55 0 4 63 0 densdata['group'] = (densdata.dt &gt;= 60).cumsum() print(densdata.head()) dt group 0 52 0 1 59 0 2 69 1 3 55 1 4 63 2 </code></pre> <p>If you want to guarantee that the first value of <code>group</code> will be 0, even if the first value of <code>dt</code> is >= 60, then use</p> <pre><code>densdata['group'] = (densdata.dt.replace(densdata.dt[0],np.nan) &gt;= 60).cumsum() </code></pre>
python|pandas
0
9,339
47,000,799
Indexing 2nd dimension in tensorflow
<p>I have an array of xyz coordinates of points of shape <code>(nsamples, npoints, 3)</code>.</p> <p>I am trying to build a tensorflow graph that selects the two points closest to the origin.</p> <p>I've gotten this far</p> <pre><code>r2 = tf.reduce_sum(tf.pow(centeredxyz, 2), axis=2) idx = tf.nn.top_k(-r2, 2)[1] </code></pre> <p>This gives me the indexes of the two closest points in the form of a 2D matrix i.e. </p> <p><code>[[3, 15], [6, 2], ...]</code> of shape <code>(nsamples, 2)</code>.</p> <p>How can I use these indexes to get back the points from <code>centeredxyz</code>? I tried <code>tf.gather_nd</code> but it considers that I'm asking for the coordinates of the 15th point of the 3rd sample while I'm asking for the 3rd and 15th point of the first sample, 6th and 2nd of the second sample etc.</p> <p>I tried creating a <code>tf.range</code> and stacking it to the indexes to obtain <code>[[0, 3], [0, 15], [1, 6], [1, 2], ...]</code> but it failed because it cannot create a range of unknown dimensions <code>ValueError: Cannot convert an unknown Dimension to a Tensor: ?</code></p> <p>So currently I am quite clueless as to what to try next.</p>
<p>I managed to patch together an ugly version. Hurts the brain but seems to work.</p> <pre><code>def gather_second_multicol(data, idx):
    nsamples = tf.shape(idx)[0]
    nselcol = tf.shape(idx)[1]
    idx = tf.reshape(idx, [-1, 1])
    # build a column of sample indices [0,0,...,1,1,...] to pair with idx
    rng = tf.range(nsamples)
    rng = tf.tile(tf.expand_dims(rng, 0), [nselcol, 1])
    rng = tf.transpose(rng)
    rng = tf.reshape(rng, [-1, 1])
    idx = tf.concat([rng, idx], 1)
    gath = tf.gather_nd(data, idx)
    return tf.reshape(gath, [-1, nselcol, 3])

def get_closest(centeredxyz):
    r2 = tf.reduce_sum(tf.pow(centeredxyz, 2), axis=2)
    idx = tf.nn.top_k(-r2, 2)[1]
    closest = gather_second_multicol(centeredxyz, idx)
    return closest
</code></pre>
tensorflow
0
9,340
47,039,760
Dataset API, Iterators and tf.contrib.data.rejection_resample
<p><strong>[Edit #1 after @mrry comment]</strong> I am using the (great &amp; amazing) Dataset API along with tf.contrib.data.rejection_resample to set a specific distribution function on the input training pipeline.</p> <p>Before adding tf.contrib.data.rejection_resample to the input_fn I used the one-shot Iterator. Alas, when starting to use the latter, I tried using dataset.make_initializable_iterator() - this is because we are introducing stateful variables to the pipeline, and one is required to initialize the iterator AFTER all variables in the input pipeline are initialized. As @mrry wrote <a href="https://stackoverflow.com/a/44504063/8096451">here.</a></p> <p>I am passing the input_fn to an Estimator, wrapped by an Experiment.</p> <p>Problem is - where to hook the init of the iterator? If I try:</p> <pre><code>dataset = dataset.batch(batch_size)
if self.balance:
    dataset = tf.contrib.data.rejection_resample(dataset, self.class_mapping_function, self.dist_target)
    iterator = dataset.make_initializable_iterator()
    tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
else:
    iterator = dataset.make_one_shot_iterator()

image_batch, label_batch = iterator.get_next()
print (image_batch)
</code></pre> <p>and the mapping function:</p> <pre><code>def class_mapping_function(self, feature, label):
    """
    returns a function to be used with dataset.map() to return class numeric ID
    The function is mapping a nested structure of tensors (having shapes and types defined by dataset.output_shapes
    and dataset.output_types) to a scalar tf.int32 tensor. Values should be in [0, num_classes).
    """
    # For simplicity, trying to return the label itself as I assume it's numeric...
    return tf.cast(label, tf.int32)  # &lt;-- I guess this is the bug
</code></pre> <p>the iterator does not receive the Tensor shape as it does with the one-shot iterator.</p> <p>For example, with the one-shot iterator, the iterator gets the correct shape:</p> <pre><code>Tensor("train_input_fn/IteratorGetNext:0", shape=(?, 100, 100, 3), dtype=float32, device=/device:CPU:0)
</code></pre> <p>But when using the initializable iterator, it is missing tensor shape info:</p> <pre><code>Tensor("train_input_fn/IteratorGetNext:0", shape=(?,), dtype=int32, device=/device:CPU:0)
</code></pre> <p>Any help will be appreciated!</p> <p>[<strong>Edit #2</strong> - following @mrry's comment that it seems like another dataset] Perhaps the real issue here is not the init sequence of the iterator but the mapping function used by tf.contrib.data.rejection_resample, which returns tf.int32. But then I wonder how the mapping function should be defined? To keep the dataset shape as (?,100,100,3) for example...</p> <p>[<strong>Edit #3</strong>]: From the implementation of rejection_resample</p> <pre><code>class_values_ds = dataset.map(class_func)
</code></pre> <p>So it makes sense that class_func will take a dataset and return a dataset of tf.int32.</p>
<p>Following @mrry's response I could come up with a solution on how to use the Dataset API with tf.contrib.data.rejection_resample (using TF1.3).</p> <p><strong>The goal</strong></p> <p>Given a feature/label dataset with some distribution, have the input pipeline reshape the distribution to a specific target distribution.</p> <p><strong>Numerical example</strong></p> <p>Let's assume we are building a network to classify some feature into one of 10 classes. And assume we only have 100 features with some random distribution of labels.<br> 30 features labeled as class 1, 5 features labeled as class 2 and so forth. During training we do not want to prefer class 1 over class 2, so we would like each mini-batch to hold a uniform distribution over all classes.</p> <p><strong>The solution</strong></p> <p>Using tf.contrib.data.rejection_resample allows us to set a specific distribution for our input pipeline.</p> <p>The documentation says tf.contrib.data.rejection_resample takes:</p> <p>(1) Dataset - the dataset you want to balance</p> <p>(2) class_func - a function that generates a new numerical labels dataset from the original dataset</p> <p>(3) target_dist - a vector the size of the number of classes, specifying the required new distribution</p> <p>(4) some more optional values - skipped for now</p> <p>and, as the documentation says, it returns a Dataset.</p> <p>It turns out that the shape of the input Dataset is different from the output Dataset shape. As a consequence, <strong>the returned Dataset (as implemented in TF1.3) should be filtered by the user like this:</strong></p> <pre><code>balanced_dataset = tf.contrib.data.rejection_resample(input_dataset,
                                                      self.class_mapping_function,
                                                      self.target_distribution)

# Return to the same Dataset shape as the original input
balanced_dataset = balanced_dataset.map(lambda _, data: (data))
</code></pre> <p>One note on the iterator kind. As @mrry explained <a href="https://stackoverflow.com/a/44504063/8096451">here</a>, when using stateful objects within the pipeline one should use the initializable iterator and not the one-shot one. Note that when using the initializable iterator you should add the init_op to the TABLE_INITIALIZERS, or you will receive this error: "GetNext() failed because the iterator has not been initialized."</p> <p>Code example:</p> <pre><code># Creating the iterator that allows access to elements of the dataset
if self.use_balancing:
    # For the balancing function, we use stateful variables in the sense that they hold the current dataset
    # distribution and calculate the next distribution according to incoming examples.
    # For dataset pipelines that have state, a one_shot iterator will not work, and we are forced to use
    # an initializable iterator.
    # This should be relaxed in the future.
    # https://stackoverflow.com/questions/44374083/tensorflow-cannot-capture-a-stateful-node-by-value-in-tf-contrib-data-api
    iterator = dataset.make_initializable_iterator()
    tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
else:
    iterator = dataset.make_one_shot_iterator()

image_batch, label_batch = iterator.get_next()
</code></pre> <p><strong>Does it work?</strong></p> <p>Yes. Here are 2 images from Tensorboard after collecting a histogram of the input pipeline labels. The original input labels were uniformly distributed. 
Scenario A: Trying to achieve the following 10-class distribution: [0.1,<strong>0.4</strong>,0.05,0.05,0.05,0.05,0.05,0.05,0.1,0.1]</p> <p>And the result:</p> <p><a href="https://i.stack.imgur.com/2Kgeg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2Kgeg.png" alt="enter image description here"></a></p> <p>Scenario B: Trying to achieve the following 10-class distribution: [0.1,0.1,0.05,0.05,0.05,0.05,0.05,0.05,<strong>0.4</strong>,0.1]</p> <p>And the result:</p> <p><a href="https://i.stack.imgur.com/Sms7j.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Sms7j.png" alt="enter image description here"></a></p>
tensorflow|iterator
12
9,341
46,738,941
Pandas duplicated and drop_duplicated does not work properly
<p>The dataset named 'changes' was obtained from a merge by RID:</p> <pre><code>changes.head() DX_bl RID DX_m12 0 1 3 1 1 0 4 0 2 0 6 0 3 0 8 0 4 1 10 1 </code></pre> <p>I've used the two following commands with the intention of identifying lines in which DX_bl and DX_m12 are different:</p> <pre><code>print(changes[~changes.duplicated(subset = ['DX_bl','DX_m12'], keep=False)]) print(changes.drop_duplicates(subset = ['DX_bl','DX_m12'])) DX_bl RID DX_m12 64 1 167 0 DX_bl RID DX_m12 0 1 3 1 1 0 4 0 10 0 30 1 64 1 167 0 </code></pre> <p>If <code>keep=False</code>, it should return lines 10 (RID 30) and 64 (RID 64). But, as we can see, it loses the information of line 10 (RID 30).</p> <p>On the other hand, if 'keep' is let in default option (<code>keep = 'first'</code>) it wrongly returns lines 0 (RID 3) and 1 (RID 4). </p> <p>Is there a bug in pandas' duplicated/drop_duplicates??</p>
<p>OK, I missed your comment "with the intention of identifying lines in which DX_bl and DX_m12 are different".</p> <p>If you want to list all lines where these 2 columns differ, you can use this:</p> <pre><code>changes[changes['DX_bl'] != changes['DX_m12']]
</code></pre> <p>or, equivalently:</p> <pre><code>changes.query('DX_bl != DX_m12')
</code></pre> <p>From the data you've posted, <code>duplicated()</code> and <code>drop_duplicates()</code> are working as documented: they flag rows whose (DX_bl, DX_m12) pair occurs elsewhere in the frame, not rows whose two columns disagree with each other.</p>
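<p>On the sample data this returns exactly the two rows the asker expected:</p> <pre><code>&gt;&gt;&gt; changes[changes['DX_bl'] != changes['DX_m12']]
    DX_bl  RID  DX_m12
10      0   30       1
64      1  167       0
</code></pre>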
python|pandas
0
9,342
46,684,027
does pip install -U break virtualenv?
<p>I have a virtualenv where I'm running python 2.7.13. I did install numpy a while ago. Today I wanted to install statsmodels as well in the same virtualenv. That's why I did (according to the webpage):</p> <pre><code>pip install -U statsmodels </code></pre> <p>and several packages where updated (numpy among others). I forgot that the -U forces to install the newest version. Since numpy was updated to numpy 1.13.3 I'm not sure if this broke a dependency. Is the forced version 1.13.3 not suitable for the virtualenv? If so how can I remove it and install the correct one. If I'm running </p> <pre><code>pip uninstall numpy </code></pre> <p>followed by a </p> <pre><code>pip install numpy </code></pre> <p>it says:</p> <pre><code>pip install numpy Collecting numpy Using cached numpy-1.13.3-cp27-cp27mu-manylinux1_x86_64.whl Installing collected packages: numpy Successfully installed numpy-1.13.3 </code></pre>
<p>Yes, the compatibility with Python is guaranteed: look at the filename of the wheel that is installed: <code>numpy-1.13.3-cp27-cp27mu-manylinux1_x86_64.whl</code>. That matches the Python version you're using (including your OS). </p> <p>As for <code>statsmodels</code> and the upgraded NumPy: if statsmodels requires numpy 1.13.3, you're fine; that's the whole point of a virtualenv: it doesn't break any other dependencies/virtualenvs you might have set up. It is unlikely you have another package in the same virtualenv that requires a lower version of NumPy.</p>
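<p>And if <code>statsmodels</code> ever did require an older NumPy, the fix would be to pin the version explicitly rather than plainly reinstalling (the version number here is hypothetical):</p> <pre><code>pip install "numpy==1.12.1"
</code></pre>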
python-2.7|numpy|virtualenv
1
9,343
32,979,836
How can I get records from an array into a table in python?
<p>I have an xml file, with some data that I am extracting and placing in a numpy record array. I print the array and I see the data is in the correct location. I am wondering how I can take that information in my numpy record array and place it in a table. Also I am getting the letter b when I print my record, how do I fix that?</p> <p>Xml data</p> <pre><code>&lt;instance name="uart-0" module="uart_16550" offset="000014"/&gt; &lt;instance name="uart-1" offset="000020" module="uart_16650"/&gt; </code></pre> <p>Code in python</p> <pre><code>inst_rec=np.zeros(5,dtype=[('name','a20'),('module','a20'),('offset','a5')]) for node in xml_file.iter(): if node.tag=="instance": attribute=node.attrib.get('name') inst_rec[i]= (node.attrib.get('name'),node.attrib.get('module'),node.attrib.get('offset')) i=i+1 for x in range (0,5): print(inst_rec[x]) </code></pre> <p>Output</p> <pre><code>(b'uart-0', b'uart_16550', b'00001') (b'uart-1', b'uart_16650', b'00002') </code></pre>
<p>You are using Python3, which uses unicode strings. It displays byte strings with the <code>b</code>. The xml file may also be bytes, for example, <code>encoding='UTF-8'</code>.</p> <p>You can get rid of the <code>b</code> by passing the strings through <code>decode()</code> before printing.</p> <p>More on writing <code>csv</code> files in Py3: <a href="https://stackoverflow.com/questions/32660815/numpy-recarray-writes-byte-literals-tags-to-my-csv-file">Numpy recarray writes byte literals tags to my csv file?</a></p> <p>In my tests, I can simplify the display by making the <code>inst_rec</code> array use unicode strings (<code>'U20'</code>)</p> <pre><code>import numpy as np
import xml.etree.ElementTree as ET

tree = ET.parse('test.xml')
root = tree.getroot()

# inst_rec=np.zeros(2,dtype=[('name','a20'),('module','a20'),('offset','a5')])
inst_rec = np.zeros(2,dtype=[('name','U20'),('module','U20'),('offset','U5')])

i = 0
for node in root.iter():
    if node.tag=="instance":
        attribute=node.attrib.get('name')
        rec = (node.attrib.get('name'),node.attrib.get('module'),node.attrib.get('offset'))
        inst_rec[i] = rec   # no need to decode
        i=i+1

# simple print of the array
print(inst_rec)

# row by row print
for x in range(inst_rec.shape[0]):
    print(inst_rec[x])

# formatted row by row print
for rec in inst_rec:
    print('%20s,%20s, %5s'%tuple(rec))

# write a csv file
np.savetxt('test.out', inst_rec, fmt=['%20s','%20s','%5s'], delimiter=',')
</code></pre> <p>producing</p> <pre><code>[('uart-0', 'uart_16550', '00001') ('uart-1', 'uart_16650', '00002')]
('uart-0', 'uart_16550', '00001')
('uart-1', 'uart_16650', '00002')
uart-0, uart_16550, 00001
uart-1, uart_16650, 00002
</code></pre> <p>and</p> <pre><code>1703:~/mypy$ cat test.out
uart-0, uart_16550,00001
uart-1, uart_16650,00002
</code></pre> <hr> <p>As an ASCII table display:</p> <pre><code># formatted row by row print
print('----------------------------------------')
for rec in inst_rec:
    print('| %20s | %20s | %5s |'%tuple(rec))
print('---------------------------------------')
</code></pre> <p>If you want anything fancier you need to specify the display tool - html, rich text, etc.</p> <hr> <p>With the added package <code>prettytable</code>:</p> <pre><code>import prettytable

pp = prettytable.PrettyTable()
pp.field_names = inst_rec.dtype.names
for rec in inst_rec:
    pp.add_row(rec)
print(pp)
</code></pre> <p>produces</p> <pre><code>+--------+------------+--------+
|  name  |   module   | offset |
+--------+------------+--------+
| uart-0 | uart_16550 | 00001  |
| uart-1 | uart_16650 | 00002  |
+--------+------------+--------+
</code></pre> <p>In Python3 I am still using the unicode dtype. <code>prettytable</code> will display the <code>b</code> if any of the strings are bytes.</p>
python|arrays|xml|numpy|datatable
0
9,344
38,923,280
Using the .loc function of the pandas dataframe
<p>I have a pandas dataframe, one of whose columns is:</p> <pre><code>a = [1,0,1,0,1,3,4,6,4,6]
</code></pre> <p>Now I want to create another column such that any value greater than 0 and less than 5 is assigned 1 and the rest are assigned 0, i.e.:</p> <pre><code>a = [1,0,1,0,1,3,4,6,4,6]
b = [1,0,1,0,1,1,1,0,1,0]
</code></pre> <p>I have tried this:</p> <pre><code>dtaframe['b'] = dtaframe['a'].loc[0 &lt; dtaframe['a'] &lt; 5] = 1
dtaframe['b'] = dtaframe['a'].loc[dtaframe['a'] &gt;4 or dtaframe['a']==0] = 0
</code></pre> <p>but the code throws an error. What should I do?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow"><code>between</code></a> to get Boolean values, then <code>astype</code> to convert from Boolean values to 0/1:</p> <pre><code>dtaframe['b'] = dtaframe['a'].between(0, 5, inclusive=False).astype(int) </code></pre> <p>The resulting output:</p> <pre><code> a b 0 1 1 1 0 0 2 1 1 3 0 0 4 1 1 5 3 1 6 4 1 7 6 0 8 4 1 9 6 0 </code></pre> <p><strong>Edit</strong></p> <p>For multiple ranges, you could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow"><code>pandas.cut</code></a>:</p> <pre><code>dtaframe['b'] = pd.cut(dtaframe['a'], bins=[0,1,6,9], labels=False, include_lowest=True) </code></pre> <p>You'll need to be careful about how you define <code>bins</code>. Using <code>labels=False</code> will return integer indicators for each bin, which happens to correspond with the labels you provided. You could also manually specify the labels for each bin, e.g. <code>labels=[0,1,2]</code>, <code>labels=[0,17,19]</code>, <code>labels=['a','b','c']</code>, etc. You may need to use <code>astype</code> if you manually specify the labels, as they'll be returned as categories.</p> <p>Alternatively, you could combine <code>loc</code> and <code>between</code> to manually specify each range:</p> <pre><code>dtaframe.loc[dtaframe['a'].between(0,1), 'b'] = 0 dtaframe.loc[dtaframe['a'].between(2,6), 'b'] = 1 dtaframe.loc[dtaframe['a'].between(7,9), 'b'] = 2 </code></pre>
python|pandas|dataframe
4
9,345
63,011,806
Searching for keywords irrespective of special characters in dataframe
<p>I have a dataframe df with a column of some texts:</p> <pre><code>texts
This is really important(actually) because it has really some value
This is not at all necessary for it @ to get that
</code></pre> <p>I want to perform a search and obtain the texts with keywords like &quot;important(actually)&quot;, but it doesn't seem to work.</p> <p>How do I get that information? I have used the following code:</p> <pre><code>df_filter=df[df.apply(lambda x: x.astype(str).str.contains(keyword, flags=re.I)).any(axis=1)]
</code></pre> <p>But I am unable to get such information.</p>
<p>Just escape the special characters in regex</p> <pre><code>df = pd.DataFrame({'texts': [ 'This is really important(actually) because it has really some value', 'This is not at all necessary for it @ to get that']}) keyword = 'important(actually)' df[df.apply(lambda x: x.astype(str).str.contains( re.escape(keyword), flags=re.I)).any(axis=1)] </code></pre> <p>Output:</p> <pre><code> texts 0 This is really important(actually) because it ... </code></pre>
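<p>To see what the escaping does, a quick check:</p> <pre><code>&gt;&gt;&gt; re.escape('important(actually)')
'important\\(actually\\)'
</code></pre>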
python|python-3.x|pandas|dataframe
1
9,346
62,996,281
How to convert OpenCV Mat input frame to Tensorflow tensor in Android Studio?
<p>I've been trying to run a Tensorflow model on Android. The solution to do this was to first create a tensorflow model (I used a pretrained Mobilenetv2 model). After training it on my own dataset, I converted it to a .tflite model, which is supported by Android. Since I want to work with realtime video analysis, I am also using the OpenCV library built for the Android SDK.</p> <p>Now the part where I'm currently stuck is - how to convert the input frame received by the opencv JavaCameraView and feed it to the tflite model for inference? I found a few solutions to convert the Mat datatype to an input Tensor but nothing seems clear. Can someone help me out with this?</p> <p>Edit: Here's the code (need help with the onCameraFrame method below)</p> <pre><code>public class MainActivity extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {

    CameraBridgeViewBase cameraBridgeViewBase;
    BaseLoaderCallback baseLoaderCallback;
    // int counter = 0;
    Interpreter it;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        cameraBridgeViewBase = (JavaCameraView)findViewById(R.id.CameraView);
        cameraBridgeViewBase.setVisibility(SurfaceView.VISIBLE);
        cameraBridgeViewBase.setCvCameraViewListener(this);

        try{
            it=new Interpreter(loadModelFile(this));
        }
        catch(Exception e){
            Toast.makeText(this,&quot;Tf model didn't load&quot;,Toast.LENGTH_LONG).show();
        }

        //System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        baseLoaderCallback = new BaseLoaderCallback(this) {
            @Override
            public void onManagerConnected(int status) {
                super.onManagerConnected(status);
                switch(status){
                    case BaseLoaderCallback.SUCCESS:
                        cameraBridgeViewBase.enableView();
                        break;
                    default:
                        super.onManagerConnected(status);
                        break;
                }
            }
        };
    }

    private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
        AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(&quot;model.tflite&quot;);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        //how to convert inputFrame to Input Tensor???
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()){
            Toast.makeText(getApplicationContext(),&quot;There's a problem, yo!&quot;, Toast.LENGTH_SHORT).show();
        }
        else {
            baseLoaderCallback.onManagerConnected(baseLoaderCallback.SUCCESS);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if(cameraBridgeViewBase!=null){
            cameraBridgeViewBase.disableView();
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (cameraBridgeViewBase!=null){
            cameraBridgeViewBase.disableView();
        }
    }
}
</code></pre>
<p>I suggest that you convert <code>Mat</code> into <code>FloatBuffer</code> as follows:</p> <pre><code>Mat floatMat = new Mat();
mat.convertTo(floatMat, CV_32F);
FloatBuffer floatBuffer = floatMat.createBuffer();
</code></pre> <p>Note that the <code>createBuffer</code> method is found on the <code>Mat</code> class from the import <code>org.bytedeco.opencv.opencv_core.Mat</code>, not on <code>org.opencv.core.Mat</code>.</p> <p>Then you can create a tensor from the <code>floatBuffer</code> variable:</p> <pre><code>Tensor.create(new long[]{1, image_height, image_width, 3}, floatBuffer)
</code></pre> <p>This creates a tensor that contains a batch of one image (as indicated by the number 1 on the far left), with an image of dimensions <code>(image_height, image_width, 3)</code> which you should know and replace. Most image processing and machine learning libraries use the first dimension for the height of the image or the &quot;rows&quot;, the second for the width or the &quot;columns&quot;, and the third for the number of channels (RGB = 3 channels). If you have a grayscale image, then replace 3 with 1.</p> <p>Please check whether you can directly feed this tensor to your model or whether you have to perform some pre-processing steps first, such as normalization.</p>
java|python|android|tensorflow|opencv
1
9,347
62,903,828
Faking whether an object is an Instance of a Class in Python
<p>Suppose I have a class <code>FakePerson</code> which imitates all the attributes and functionality of a base class <code>RealPerson</code> <strong>without extending it</strong>. In Python 3, is it possible to fake <code>isinstance()</code> in order to recognise <code>FakePerson</code> as a <code>RealPerson</code> object by only modifying the <code>FakePerson</code> class. For example:</p> <pre><code>class RealPerson(): def __init__(self, age): self.age = age def are_you_real(self): return 'Yes, I can confirm I am a real person' def do_something(self): return 'I did something' # Complicated functionality here class FakePerson(): # Purposely don't extend RealPerson def __init__(self, hostage): self.hostage = hostage def __getattr__(self, name): return getattr(self.hostage, name) def do_something(self): return 'Ill pretend I did something' # I don't need complicated functionality since I am only pretending to be a real person. a = FakePerson(RealPerson(30)) print(isinstance(a, RealPerson)) </code></pre> <p>The context of this is suppose I have a class that imitates most / all of the functionality of a Pandas DataFrame row (a <code>namedtuple</code> object). If I have a list of rows <code>list_of_rows</code>, Pandas generates a DataFrame object by <code>pandas.DataFrame(list_of_rows)</code>. However, since each element in <code>list_of_rows</code> is not a <code>namedtuple</code> and just a 'fake', the constructor can't recognise these 'fake' row objects as real rows even if the fake object does fake all the underlying methods and attributes of the Pandas <code>namedtuple</code>.</p>
<p>You may need to subclass your <code>RealPerson</code> class.</p> <pre class="lang-py prettyprint-override"><code>class RealPerson:
    def __init__(self, age):
        self.age = age

    def are_you_real(self):
        return 'Yes, I can confirm I am a real person'

    def do_something(self):
        return 'I did something'
        # Complicated functionality here

class FakePerson:  # Purposely don't extend RealPerson
    def __init__(self, hostage):
        self.hostage = hostage

    def __getattr__(self, name):
        return getattr(self.hostage, name)

    def do_something(self):
        return 'Ill pretend I did something'
        # I don't need complicated functionality since I am only pretending to be a real person.

class BetterFakePerson(RealPerson):
    pass

BetterFakePerson.__init__ = FakePerson.__init__
BetterFakePerson.__getattr__ = FakePerson.__getattr__
BetterFakePerson.do_something = FakePerson.do_something

a = FakePerson(RealPerson(30))
print(isinstance(a, RealPerson))

b = BetterFakePerson(RealPerson(30))
print(isinstance(b, RealPerson))
</code></pre> <p>Hope this answer isn't too late for you LOL</p>
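<p>As a side note: if you're allowed to touch <code>RealPerson</code>'s metaclass (which goes beyond the question's &quot;only modify <code>FakePerson</code>&quot; constraint, so treat this as a sketch under that assumption), the standard library's <code>abc</code> module can register <code>FakePerson</code> as a <em>virtual subclass</em>, and <code>isinstance()</code> then passes without any inheritance:</p> <pre class="lang-py prettyprint-override"><code>from abc import ABCMeta

class RealPerson(metaclass=ABCMeta):  # same body as before, just with ABCMeta
    def __init__(self, age):
        self.age = age

class FakePerson:
    def __init__(self, hostage):
        self.hostage = hostage

RealPerson.register(FakePerson)  # FakePerson now counts as a RealPerson

a = FakePerson(RealPerson(30))
print(isinstance(a, RealPerson))  # True
</code></pre>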
python|pandas|dataframe|isinstance
1
9,348
67,628,293
Got DatetimeIndex. Now how do I get these records to excel?
<p><strong>Goal:</strong> From an excel file, I want to get all the records which have dates that fall within a range and write them to a new excel file. The infile I'm working with has 500K+ rows and 21 columns.</p> <p><strong>What I've tried:</strong> I've read the infile to a Pandas dataframe then returned the <code>DatetimeIndex</code>. If I print the <code>range</code> variable I get the desired records.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd in_excel_file = r'path\to\infile.xlsx' out_excel_file = r'path\to\outfile.xlsx' df = pd.read_excel(in_excel_file) range = (pd.date_range(start='1910-1-1', end='2021-1-1')) print(range) ##prints DatetimeIndex(['1990-01-01', '1990-01-02', '1990-01-03', '1990-01-04', '1990-01-05', '1990-01-06', '1990-01-07', '1990-01-08', '1990-01-09', '1990-01-10', ... '2020-12-23', '2020-12-24', '2020-12-25', '2020-12-26', '2020-12-27', '2020-12-28', '2020-12-29', '2020-12-30', '2020-12-31', '2021-01-01'], dtype='datetime64[ns]', length=11324, freq='D') </code></pre> <p>Where I'm having trouble is getting the above <code>DatetimeIndex</code> to the outfile. The following gives me an error:</p> <pre class="lang-py prettyprint-override"><code>range.to_excel(out_excel_file, index=False) </code></pre> <pre><code>AttributeError: 'DatetimeIndex' object has no attribute 'to_excel' </code></pre> <p>I'm pretty sure that when writing to excel it has to be a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">dataframe</a>. So, my question is how do I get the <code>range</code> variable to a dataframe object?</p>
<blockquote> <p>Goal: From an excel file, I want to get all the records which have dates that fall within a range and write them to a new excel file. The infile I'm working with has 500K+ rows and 21 columns.</p> </blockquote> <p>You could use an indexing operation to select only the data you need from the original DataFrame and save the result in an Excel file.</p> <p>In order to do that first you need to check if the date column from your original DataFrame is already converted to a datetime/date object:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np date_column = &quot;date&quot; # Suppose this is your date column name if not np.issubdtype(df[date_column].dtype, np.datetime64): df.loc[:, date_column] = pd.to_datetime(df[date_column], format=&quot;%Y-%m-%d&quot;) </code></pre> <p>Now you can use a regular indexing operation to get all values you need:</p> <pre class="lang-py prettyprint-override"><code>mask = (df[date_column] &gt;= '1910-01-01') &amp; (df[date_column] &lt;= '2021-01-01') # Creates mask for date range out_dataframe = df.loc[mask] # Here we select the indices using our mask out_dataframe.to_excel(out_excel_file) </code></pre>
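<p>As a small variation, the same mask can also be written with <code>Series.between</code>, which is inclusive on both ends by default:</p> <pre class="lang-py prettyprint-override"><code>mask = df[date_column].between('1910-01-01', '2021-01-01')
out_dataframe = df.loc[mask]
out_dataframe.to_excel(out_excel_file)
</code></pre>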
python|excel|pandas|datetimeindex
1
9,349
67,871,922
I want my Neural network output to be either 0 or 1, and not probabilities between 0 and 1, with customised step function at output layer
<p>I want my Neural network output to be either 0 or 1, and not probabilities between 0 and 1.</p> <p>For this I have designed a step function for the output layer; I want my output layer to just round off the output of the previous (softmax) layer, i.e. convert probabilities into 0 and 1.</p> <p>My customised function is not giving the expected results. Kindly help.</p> <p>My code is:</p> <pre><code>from keras.layers.core import Activation
from keras.models import Sequential
from keras import backend as K

# Custom activation function
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects

@tf.custom_gradient
def custom_activation(x):
    print(&quot;tensor &quot;,x)
    ones = tf.ones(tf.shape(x), dtype=x.dtype.base_dtype)
    zeros = tf.zeros(tf.shape(x), dtype=x.dtype.base_dtype)
    def grad(dy):
        return dy
    print(&quot; INSIDE ACTOVATION FUNCTION &quot;)
    return keras.backend.switch(x &gt; .5, ones, zeros), grad

model = keras.models.Sequential()
model.add(keras.layers.Dense(32,input_dim=a,activation='relu'))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(64,activation=&quot;relu&quot;))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(2,activation='softmax'))
model.add(Activation(custom_activation, name='custom_activation'))#output layer

### Compile the model
model.compile(loss=&quot;binary_crossentropy&quot;,optimizer='adam',metrics=[&quot;accuracy&quot;])
</code></pre>
<p>First of all, note that obtaining the class probabilities always yields more information than a pure 0-1 classification, and thus your model will almost always train better and faster.</p> <p>That being said, and considering that you do have an underlying reason to limit your NN, a hard decision like the one you want to implement as an activation function is known as a <strong>step function</strong> or <strong>Heaviside</strong> function. The main problem with this function is that, by default, it is non-differentiable (there is an infinite slope at the threshold, 0.5 in your case). To address this you have two options:</p> <ol> <li>Create a custom &quot;approximate&quot; gradient that is differentiable. <a href="https://stackoverflow.com/a/59495901/9670056">This SO answer covers it well</a>.</li> <li>Use <code>tf.where()</code>, which selects between the two branches element-wise (<code>tf.cond()</code> would not work here, since it expects a scalar boolean predicate rather than a per-element boolean tensor).</li> </ol> <pre class="lang-py prettyprint-override"><code>class MyHeavisideActivation(tf.keras.layers.Layer):
    def __init__(self, num_outputs, threshold=.5, **kwargs):
        super(MyHeavisideActivation, self).__init__(**kwargs)
        self.num_outputs = num_outputs
        self.threshold = threshold

    def build(self, input_shape):
        pass

    def call(self, inputs):
        # element-wise: 1 where the input exceeds the threshold, 0 elsewhere
        return tf.where(inputs &gt; self.threshold,
                        tf.add(tf.multiply(inputs, 0), 1), # set to 1
                        tf.multiply(inputs, 0))            # set to 0

#&gt; ...same as above
model.add(keras.layers.Dense(2,activation='softmax'))
model.add(MyHeavisideActivation(2, name='custom_activation'))#output layer
</code></pre>
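<p>If the hard 0/1 output is only needed at prediction time, a simpler route (just a sketch, where <code>x_test</code> stands in for your own input data) is to train on probabilities as usual and threshold afterwards, outside the model:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

probs = model.predict(x_test)           # shape (n_samples, 2), rows sum to 1
hard_labels = np.argmax(probs, axis=1)  # 0 or 1 per sample, no gradient issues
</code></pre>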
tensorflow|keras|neural-network|activation-function
1
9,350
31,918,400
Python Bilinear regression
<p>I want to perform a 2 variable (linear) regression according to the following bilinear equation:</p> <pre><code>f(x,y) = a + b*x + c*y + d*x*y, </code></pre> <p>where the f(x,y) data is, for example, given by the following matrix:</p> <pre><code>x\y 6.0 7.0 8.0 9.0 00000 005804.69 007999.53 009833.15 011476.38 00150 005573.34 007821.44 009687.63 011353.49 00500 005161.67 007488.53 009408.31 011112.30 01000 004718.80 007097.39 009060.98 010801.41 01500 004374.67 006773.64 008760.04 010523.11 02000 004082.90 006493.12 008492.45 010269.24 02500 003819.52 006240.45 008248.46 010035.37 03000 003571.24 006005.50 008021.36 009815.34 </code></pre> <p>I have found a couple of ways to interpolate (e.g. <a href="https://stackoverflow.com/questions/10164109/passing-arguments-to-a-function-for-fitting">passing arguments to a function for fitting</a>), but it seems the x and y values must have the same dimension.</p> <p>Thanks!</p>
<p>Indeed, the x and y values must have the same dimensions, from the viewpoint of a solver. You should organize your data in the format of [(x_0,y_0,f(x_0, y_0)),...,(x_n,y_n,f(x_n, y_n))] for n data points. So if your data for the function f is a matrix of dimension P X Q, you should represent it as a new matrix of dimension PQ X 3 where the rows are the observations and the columns are x,y,f(x,y).</p>
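<p>For concreteness, here is a sketch of that reshaping followed by a linear least-squares fit of <code>f(x,y) = a + b*x + c*y + d*x*y</code>, using the grid from the question:</p> <pre><code>import numpy as np

# grid and observations copied from the question's table
x = np.array([0, 150, 500, 1000, 1500, 2000, 2500, 3000], dtype=float)
y = np.array([6.0, 7.0, 8.0, 9.0])
f = np.array([
    [5804.69, 7999.53, 9833.15, 11476.38],
    [5573.34, 7821.44, 9687.63, 11353.49],
    [5161.67, 7488.53, 9408.31, 11112.30],
    [4718.80, 7097.39, 9060.98, 10801.41],
    [4374.67, 6773.64, 8760.04, 10523.11],
    [4082.90, 6493.12, 8492.45, 10269.24],
    [3819.52, 6240.45, 8248.46, 10035.37],
    [3571.24, 6005.50, 8021.36,  9815.34],
])

# flatten the P x Q grid into PQ rows of (x_i, y_i, f_i)
X, Y = np.meshgrid(x, y, indexing='ij')
xf, yf, ff = X.ravel(), Y.ravel(), f.ravel()

# design matrix for f = a + b*x + c*y + d*x*y, solved by least squares
A = np.column_stack([np.ones_like(xf), xf, yf, xf * yf])
(a, b, c, d), *_ = np.linalg.lstsq(A, ff, rcond=None)
print(a, b, c, d)
</code></pre>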
python|function|numpy|scipy|regression
0
9,351
31,997,859
Bulk Insert A Pandas DataFrame Using SQLAlchemy
<p>I have some rather large pandas DataFrames and I'd like to use the new bulk SQL mappings to upload them to a Microsoft SQL Server via SQL Alchemy. The pandas.to_sql method, while nice, is slow. </p> <p>I'm having trouble writing the code...</p> <p>I'd like to be able to pass this function a pandas DataFrame which I'm calling <code>table</code>, a schema name I'm calling <code>schema</code>, and a table name I'm calling <code>name</code>. Ideally, the function will 1.) delete the table if it already exists. 2.) create a new table 3.) create a mapper and 4.) bulk insert using the mapper and pandas data. I'm stuck on part 3.</p> <p>Here's my (admittedly rough) code. I'm struggling with how to get the mapper function to work with my primary keys. I don't really need primary keys but the mapper function requires it. </p> <p>Thanks for the insights.</p> <pre><code>from sqlalchemy import create_engine Table, Column, MetaData from sqlalchemy.orm import mapper, create_session from sqlalchemy.ext.declarative import declarative_base from pandas.io.sql import SQLTable, SQLDatabase def bulk_upload(table, schema, name): e = create_engine('mssql+pyodbc://MYDB') s = create_session(bind=e) m = MetaData(bind=e,reflect=True,schema=schema) Base = declarative_base(bind=e,metadata=m) t = Table(name,m) m.remove(t) t.drop(checkfirst=True) sqld = SQLDatabase(e, schema=schema,meta=m) sqlt = SQLTable(name, sqld, table).table sqlt.metadata = m m.create_all(bind=e,tables=[sqlt]) class MyClass(Base): return mapper(MyClass, sqlt) s.bulk_insert_mappings(MyClass, table.to_dict(orient='records')) return </code></pre>
<p>I ran into a similar issue with pd.to_sql taking hours to upload data. The below code bulk inserted the same data in a few seconds. </p> <pre><code>from sqlalchemy import create_engine
import psycopg2 as pg
# cStringIO provides an in-memory text stream (on Python 3, use io.StringIO instead)
import cStringIO

address = 'postgresql://&lt;username&gt;:&lt;pswd&gt;@&lt;host&gt;:&lt;port&gt;/&lt;database&gt;'
engine = create_engine(address)
connection = engine.raw_connection()
cursor = connection.cursor()

#df is the dataframe containing an index and the columns "Event" and "Day"
#create Index column to use as primary key
df.reset_index(inplace=True)
df.rename(columns={'index':'Index'}, inplace =True)

#create the table but first drop if it already exists
command = '''DROP TABLE IF EXISTS localytics_app2;
CREATE TABLE localytics_app2
(
"Index" serial primary key,
"Event" text,
"Day" timestamp without time zone
);'''
cursor.execute(command)
connection.commit()

#stream the data using 'to_csv' and StringIO(); then use sql's 'copy_from' function
output = cStringIO.StringIO()
#ignore the index
df.to_csv(output, sep='\t', header=False, index=False)
#jump to start of stream
output.seek(0)

cur = connection.cursor()
#null values become ''
cur.copy_from(output, 'localytics_app2', null="")
connection.commit()
cur.close()
</code></pre>
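<p>On newer pandas versions (0.24+), <code>to_sql</code> itself can also batch rows, which is often enough of a speedup without dropping down to <code>copy_from</code>. A sketch, reusing the <code>engine</code> from above:</p> <pre><code>df.to_sql('localytics_app2', engine, if_exists='replace', index=False,
          method='multi', chunksize=1000)
</code></pre>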
python|pandas|sqlalchemy
38
9,352
41,419,161
Mapping multiple columns to a single dataframe with pandas
<p>I'm trying to create a dataframe (e.g., <code>df3</code>) that overwrites salary information onto people's names. I'm currently working with <code>df1</code>, which has a list of around 1,000 names. Here's an example of what df1 looks like.</p> <pre><code>print df1.head()

           Salary
Name
Joe Smith    8700
Jane Doe     6300
Rob Dole     4700
Sue Pam      2100
Jack Li      3400
</code></pre> <p>I also have <code>df2</code>, which randomly assigns people from <code>df1</code> to the Captain and Skipper columns.</p> <pre><code>print df2.head()

  Captain    Skipper
  Sue Pam    Joe Smith
  Jane Doe   Sue Pam
  Rob Dole   Joe Smith
  Joe Smith  Sue Pam
  Rob Dole   Jack Li
</code></pre> <p>How can I replace the names in <code>df2</code> with their corresponding salaries so that I have the exact format below? In Excel, I would use a VLOOKUP function, but I'm not sure how to accomplish this using Python.</p> <pre><code>print df3.head()

  Captain  Skipper
  2100     8700
  6300     2100
  4700     8700
  8700     2100
  4700     3400
</code></pre>
<p>You can look up the salary for each name in <code>df1</code> with <code>df1.loc[name, 'Salary']</code>. Using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html#pandas.DataFrame.applymap" rel="nofollow noreferrer"><code>.applymap()</code></a>, you can do this for all entries in all columns of <code>df2</code>: </p> <pre><code>df3 = df2.applymap(lambda x: df1.loc[x, 'Salary'])
print(df3)
</code></pre> <p>Result:</p> <pre><code>   Captain  Skipper
0     2100     8700
1     6300     2100
2     4700     8700
3     8700     2100
4     4700     3400
</code></pre>
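<p>A fully vectorized alternative (same <code>df1</code>/<code>df2</code> as above, with <code>Name</code> as the index of <code>df1</code>) maps each column of <code>df2</code> through the salary Series, which avoids the per-cell <code>.loc</code> calls:</p> <pre><code>df3 = df2.apply(lambda col: col.map(df1['Salary']))
</code></pre>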
python|pandas
1
9,353
41,657,172
What does * (asterisk) do when applied to a TensorFlow layer?
<p>Currently reading a Python implementation of Inception-ResNet to assist in building a model in a different language (Deeplearning4j). This implementation is Inception-ResNet-v1 and I was trying to figure out how it implements the residual shortcuts in ResNet style.</p> <p>In the following code block is <code>net += scale * up</code>.</p> <pre><code># Inception-Renset-A def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None): """Builds the 35x35 resnet block.""" with tf.variable_scope(scope, 'Block35', [net], reuse=reuse): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3') with tf.variable_scope('Branch_2'): tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1') tower_conv2_1 = slim.conv2d(tower_conv2_0, 32, 3, scope='Conv2d_0b_3x3') tower_conv2_2 = slim.conv2d(tower_conv2_1, 32, 3, scope='Conv2d_0c_3x3') mixed = tf.concat(3, [tower_conv, tower_conv1_1, tower_conv2_2]) up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None, activation_fn=None, scope='Conv2d_1x1') net += scale * up if activation_fn: net = activation_fn(net) return net </code></pre> <p>Scale is a <code>double</code> between 0 and 1. <code>up</code> is a stack of layers, the last being a conv2d layer.</p> <p>What specifically is happening with <code>scale * up</code>?</p>
<p><code>up</code> here is a tensor (the output of the final <code>conv2d</code>), and every element of it is multiplied by the scalar value in <code>scale</code>. Then <code>net</code> is redefined as <code>net + scale * up</code>, so <code>net</code> must have the same dimensions as <code>up</code>. Scaling the residual branch down by a factor below 1 before adding it back to the shortcut is the residual-scaling trick from the Inception-ResNet paper, used to stabilize training.</p>
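<p>Numerically it is plain scalar broadcasting over the whole activation tensor; here is a quick NumPy illustration of the same operation (shapes chosen to match the 35x35 block, and the scale value is arbitrary):</p> <pre><code>import numpy as np

net = np.ones((1, 35, 35, 256))
up = np.random.rand(1, 35, 35, 256)
scale = 0.17

net = net + scale * up   # every element of `up` is damped by `scale`, shapes unchanged
print(net.shape)         # (1, 35, 35, 256)
</code></pre>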
python|tensorflow|deep-learning
1
9,354
41,656,290
Support Vector Machine Python 3.5.2
<p>While searching some tutorial on <code>SVM</code>, I've found online - <a href="https://www.youtube.com/watch?v=SSu00IRRraY" rel="nofollow noreferrer">Support Vector Machine _ Illustration</a> - the below code, which is however yielding a <code>weird</code> chart. After debugging the code, I wonder if the cause lies on the <code>Date</code> list, precisely:</p> <pre><code>dates.append(int(row[0].split('-')[0])) </code></pre> <p>which is static from my side (i.e 2016) or if there is something else, although I am not seeing anything abnormal within the code.</p> <p><strong>EDIT</strong></p> <p>This deduction is coming from the syntax:</p> <pre><code>plt.scatter(dates, prices, color ='black', label ='Data'); plt.show() </code></pre> <p>yielding the vertical line, factually, whereas </p> <pre><code>dates.append(int(row[0].split('-')[0])) </code></pre> <p>is supposed, as described in the link and also reflected into the code, to convert each date <code>YYYY-MM-DD</code> to a different integer value</p> <p><strong>EDIT (2)</strong></p> <p>Substituting <code>dates.append(md.datestr2num(row[0]))</code> for </p> <p><code>dates.append(int(row[0].split('-')[0]))</code> in the function <code>get_data(filename)</code> does help!</p> <p><a href="https://i.stack.imgur.com/XkSY5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XkSY5.png" alt="enter image description here"></a></p> <pre><code>import csv import numpy as np from sklearn.svm import SVR import matplotlib.pyplot as plt dates = [] prices = [] def get_data(filename): with open(filename, 'r') as csvfile: csvFileReader = csv.reader(csvfile) next(csvFileReader) for row in csvFileReader: dates.append(int(row[0].split('-')[0])) prices.append(float(row[6])) # from 1 i.e from Opening to closing price return def predict_prices(dates,prices,x): dates = np.reshape(dates,(len(dates),1)) svr_lin = SVR(kernel = 'linear', C = 1e3) svr_poly = SVR(kernel = 'poly', C = 1e3, degree = 2) svr_rbf = SVR(kernel = 'rbf', C = 1e3, gamma = 0.1) svr_lin.fit(dates,prices) svr_poly.fit(dates,prices) svr_rbf.fit(dates,prices) plt.scatter(dates, prices, color ='black', label ='Data') plt.plot(dates, svr_rbf.predict(dates), color ='red', label = 'RBF model') plt.plot(dates, svr_rbf.predict(dates), color ='green', label = 'Linear model') plt.plot(dates, svr_rbf.predict(dates), color ='blue', label = 'Polynomial model') plt.xlabel('Date') plt.ylabel('Price') plt.title('Support Vector Regression') plt.legend plt.show() return svr_rbf.predict(x)[0], svr_lin.predict(x)[0], svr_poly.predict(x)[0] get_data('C:/local/ACA.csv') predict_prices(dates, prices, 29) </code></pre> <p>Thanks in advance</p>
<p>I have been working with that SVM code from a number of examples (Siraj, Chaitjo, Jaihad, and others)...and found out that the Date needs to be in DD-MM-YYYY format...so the data used is the day of the month...not the year (as dark.vapor has described).</p> <p>And the data can only be for 30 days...as seen in this code segment:</p> <p>&quot;predict_prices(dates, prices, 29)&quot;</p> <p>Otherwise, using datafiles with multiple months (with repeating day numbers...e.g. 15 Jan and 15 Feb)...I get multiple prices plotted on each day instead of only one price for each day.</p> <p>Edit2: I played with varying the dataset and found that the data rows can be more than 29...as long as the date is just an integer sequence. I went up to 85 days (rows)...and they all plotted. So I am a bit confused as to what the &quot;29&quot; does in the above prediction code.</p> <p>It would be nice to be able to use larger datafiles with multiple months...and select the date ranges I want to test for...but for now that's above my coding skills.</p> <p>I'm just a novice coder, so I hope this is accurate; the DD-MM-YYYY format works fine for me and gives a good clean plot.</p> <p>Hope this helps, Robert</p> <p>Edit: I just found a good article describing this code...which confirms the &quot;day&quot; parsing with the DD-MM-YYYY format...</p> <p><a href="https://github.com/mKausthub/stock-er" rel="nofollow noreferrer">https://github.com/mKausthub/stock-er</a></p> <p>dates.append(int(row[0].split('-')[0])) &quot;gets <strong>day of the month</strong> which is at index zero since dates are in the format [date]-[month]-[year].&quot;</p>
python|numpy|matplotlib|machine-learning|svm
0
9,355
61,516,921
How can i create a NxM array/matrix using numpy from the list items
<p>I have a file comprising the following numbers:</p> <pre><code>AAFF
ADFF
7689
7FAD
AAFF
ADFF
7689
7FAD
... and so on for 200 lines of the file
</code></pre> <p>I want to create 2 matrices using numpy. Matrix 1 will have 8 rows and 1 column (this will use the first 8 lines of the data from the file); Matrix 2 will have 8x8 shape (this will use the next 64 lines of data from the file).</p> <p>For now I have saved the numbers in a list, and used a function to create matrix 1. How do I do this using numpy?</p> <p>Here is the function I used to create my matrix 1.</p> <pre><code>def createMatrix(rowCount, colCount, dataList):
    mat = []
    for i in range (rowCount):
        rowList = []
        for j in range (colCount):
            if dataList[j] not in mat:
                rowList.append(dataList[i])
        mat.append(rowList)
    return mat
</code></pre> <p>Can I modify this function to create an 8x8 matrix? Or will it be simpler in numpy?</p>
<p>You can create a simple array from a list and reshape it:</p> <pre><code>import numpy as np import random random.seed(42) d = "0123456789ABCDEF" data = [''.join(random.choices(d, k = 4)) for _ in range(72)] print(data) first8, next64 = data[:8],data[8:8+64] farr = np.array(first8) arr = np.array(next64).reshape( (8,8)) print(farr) print(arr) </code></pre> <p>Output:</p> <pre><code># random data ['A043', 'BAE1', '6038', '03A8', '39C0', 'CB52', 'F511', 'D9CB', '8F68', 'D9D9', 'B034', '1314', 'A553', '4EA9', '2B26', 'FA8A', 'DC30', '543F', 'E5A6', 'E743', '849E', '63F8', '101A', 'C616', 'F8FD', '0BA8', '4A16', '7FE4', '82ED', '4A92', 'C8C8', '050E', 'ED40', 'EF17', '1CC2', '784D', '638B', '34FA', '7813', '5933', '1A3E', 'D13A', '32E9', '7CC3', '1667', 'BAF1', '65D3', '3764', '3E7D', '80FD', 'FED2', '7360', '6F4C', '76FF', '8B24', 'F98B', '098D', '2F12', '9A31', 'E399', '698E', '3B36', 'A45C', '17FF', '134E', 'EE52', 'DB9F', 'A0D4', 'AF21', '1849', 'B3A4', '7ED1'] # arrays from first 8 ['A043' 'BAE1' '6038' '03A8' '39C0' 'CB52' 'F511' 'D9CB'] # array reshaped to (8,8) [['8F68' 'D9D9' 'B034' '1314' 'A553' '4EA9' '2B26' 'FA8A'] ['DC30' '543F' 'E5A6' 'E743' '849E' '63F8' '101A' 'C616'] ['F8FD' '0BA8' '4A16' '7FE4' '82ED' '4A92' 'C8C8' '050E'] ['ED40' 'EF17' '1CC2' '784D' '638B' '34FA' '7813' '5933'] ['1A3E' 'D13A' '32E9' '7CC3' '1667' 'BAF1' '65D3' '3764'] ['3E7D' '80FD' 'FED2' '7360' '6F4C' '76FF' '8B24' 'F98B'] ['098D' '2F12' '9A31' 'E399' '698E' '3B36' 'A45C' '17FF'] ['134E' 'EE52' 'DB9F' 'A0D4' 'AF21' '1849' 'B3A4' '7ED1']] </code></pre> <hr> <p>You can do the same with normal python lists and no numpy at all:</p> <pre><code># skip first 8 - then take slices of size 8 from the original list plain = [ data[8+i:8+i+8] for i in range(0,64,8)] print(plain) [['8F68', 'D9D9', 'B034', '1314', 'A553', '4EA9', '2B26', 'FA8A'], ['DC30', '543F', 'E5A6', 'E743', '849E', '63F8', '101A', 'C616'], ['F8FD', '0BA8', '4A16', '7FE4', '82ED', '4A92', 'C8C8', '050E'], ['ED40', 'EF17', '1CC2', '784D', '638B', '34FA', '7813', '5933'], ['1A3E', 'D13A', '32E9', '7CC3', '1667', 'BAF1', '65D3', '3764'], ['3E7D', '80FD', 'FED2', '7360', '6F4C', '76FF', '8B24', 'F98B'], ['098D', '2F12', '9A31', 'E399', '698E', '3B36', 'A45C', '17FF'], ['134E', 'EE52', 'DB9F', 'A0D4', 'AF21', '1849', 'B3A4', '7ED1']] </code></pre>
python|numpy|matrix
2
9,356
68,720,130
How can I get the values satisfying some conditions in a dataframe
<p>I read excel data as a pandas DataFrame in which each row has two non-NaN values (the others are all NaN)</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">names</th> <th style="text-align: center;">Unnamed:1</th> <th style="text-align: center;">Unnamed:2</th> <th style="text-align: center;">Unnamed:3</th> <th style="text-align: center;">~</th> <th style="text-align: center;">Unnamed:19</th> <th style="text-align: center;">Unnamed:20</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">1.3</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">~(NaN)</td> <td style="text-align: center;">10.4</td> <td style="text-align: center;">NaN</td> </tr> <tr> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">2.7</td> <td style="text-align: center;">~(NaN)</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">12.7</td> </tr> <tr> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> </tr> <tr> <td style="text-align: center;">name_<em><strong>ccdd</strong></em></td> <td style="text-align: center;">NaN</td> <td style="text-align: center;"><em><strong>1.3</strong></em></td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">~(NaN)</td> <td style="text-align: center;"><em><strong>9.3</strong></em></td> <td style="text-align: center;">NaN</td> </tr> <tr> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> <td style="text-align: center;">~</td> </tr> <tr> <td style="text-align: center;">name_yyzz</td> <td style="text-align: center;">0.5</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">~4.7~</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> </tr> </tbody> </table> </div> <p>I'd like to find a specific row containing a word (e.g. &quot;ccdd&quot;) and get its non-NaN values (e.g. 1.3 and 9.3) in a general way.</p> <p>I made a boolean mask showing which rows contain the word (e.g. &quot;ccdd&quot;)</p> <pre><code>import pandas as pd
import numpy as np

filename= '~/data.xlsx'
df = pd.read_excel(filename, engine='openpyxl')
mask = df[df.columns[0]].str.contains('ccdd')
print(mask)

0     NaN
1     NaN
2     False
3     False
4     False
5     True
6     False
7     False
8     False
9     False
10    False
11    False
12    False
13    False
</code></pre> <p>The row at index 5 contains the data with the two float values I want to get, but I can't go further.</p> <p>In a previous article (<a href="https://stackoverflow.com/questions/54888137/get-row-and-column-index-of-value-in-pandas-df">Get row and column index of the value in Pandas df</a>) I found a similar answer but I don't know how to utilize it.</p>
<p>You can pass the <code>na</code> parameter to <code>str.contains()</code>; it sets what the NaN values become (True/False) in the resulting mask:</p> <pre><code>mask = df[df.columns[0]].str.contains('ccdd', na=False)
</code></pre> <p>Now finally pass that mask to your df:</p> <pre><code>df[mask]
#OR
df.loc[mask]
</code></pre>
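<p>To then pull just the float values out of the matched row (a sketch, assuming a single match and that the first column, <code>df.columns[0]</code>, holds the names as in your data):</p> <pre><code>row = df.loc[mask].iloc[0]                 # the matched row as a Series
values = row.drop(df.columns[0]).dropna()  # drop the name column, keep non-NaN cells
print(values.tolist())                     # e.g. [1.3, 9.3]
</code></pre>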
python|pandas|dataframe
2
9,357
68,837,104
ValueError: logits and labels must have the same shape ((None, 10) vs (None, 1))
<p>I am new to tensorflow. I was trying to build a simple model that would output the probability of installation (the <code>install</code> column).</p> <p>Here is a subset of the dataset:</p> <pre><code>{'A': {0: 12, 2: 28, 3: 26, 4: 9, 5: 36},
 'B': {0: 10, 2: 17, 3: 22, 4: 2, 5: 31},
 'C': {0: 1, 2: 0, 3: 5, 4: 0, 5: 1},
 'D': {0: 5, 2: 0, 3: 0, 4: 0, 5: 0},
 'E': {0: 12, 2: 1, 3: 4, 4: 3, 5: 1},
 'F': {0: 12, 2: 2, 3: 14, 4: 9, 5: 11},
 'install': {0: 0, 2: 0, 3: 1, 4: 0, 5: 0},
 'G': {0: 21, 2: 12, 3: 8, 4: 13, 5: 19},
 'H': {0: 0, 2: 5, 3: 1, 4: 6, 5: 5},
 'I': {0: 21, 2: 22, 3: 5, 4: 10, 5: 20},
 'J': {0: 0.0, 2: 136.5, 3: 0.0, 4: 0.1, 5: 29.5},
 'K': {0: 0.15220949263502456, 2: 0.08139534883720931, 3: 0.15625, 4: 0.15384584755440725, 5: 0.04188829787234043},
 'L': {0: 649, 2: 379, 3: 531, 4: 660, 5: 242},
 'M': {0: 0, 2: 0, 3: 0, 4: 1, 5: 1},
 'N': {0: 1, 2: 1, 3: 1, 4: 0, 5: 0},
 'O': {0: 0, 2: 1, 3: 0, 4: 1, 5: 0},
 'P': {0: 0, 2: 0, 3: 0, 4: 0, 5: 0},
 'Q': {0: 1, 2: 0, 3: 1, 4: 0, 5: 1}}
</code></pre> <p>And here is the code I was working on:</p> <pre><code>X = df.drop('install', axis=1) #data
y = df['install'] #target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42, test_size = 0.3)
X_train = ss.fit_transform(X_train)
X_test = ss.fit_transform(X_test)

model = keras.models.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='softmax'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10)
])

loss = keras.losses.BinaryCrossentropy(from_logits=True)
optim = keras.optimizers.Adam(lr=0.001)
metrics = [&quot;accuracy&quot;]

model.compile(loss=loss, optimizer=optim, metrics=metrics)

batch_size = 32
epoch = 5
model.fit(X_train, y_train, batch_size=batch_size, epochs=epoch, shuffle=True, verbose=1)
</code></pre> <p>Could you help me in understanding the error? I understand that the problem is about the size of my X and y.</p>
<p>Note: You have not specified which class the <code>ss</code> object belongs to, so I will discuss everything without it.</p> <p>First let's discuss your target, i.e. the <code>install</code> column. From the values I assume that your problem is binary classification, i.e. predicting <code>0</code> or <code>1</code>, and you want the probability of each.</p> <p>For this you have to define your model as below.</p> <pre><code>model = keras.models.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(2, activation='softmax')
])
'''
Note: I have changed the activation of the first `Dense` layer from 'softmax'
to 'relu', as 'softmax' is not ideal for inner layers, since it greatly
reduces information from each node. Although having 'softmax' there will not
result in any syntax error, it is methodologically wrong.

The next major change is the number of units in the last `Dense` layer, from
10 to 2. What you want is the probability of having either 0 or 1. So if the
output of your model is `[a, b]`, with a corresponding to 0 and b to 1, then
you can turn them into probabilities using the 'softmax' activation. Without
an activation, the values we get are called 'logits'.
'''

# Now you have to change your loss function as below
loss = tf.keras.losses.SparseCategoricalCrossentropy()

# The rest is the same. Now we run a dummy trial of the model after training it using your code.
preds = model.predict(X_test)
preds
'''
This gives the results:
array([[9.9999726e-01, 2.7777487e-06],
       [9.5156413e-01, 4.8435837e-02]], dtype=float32)

This says the probability of sample 1 being 0 is '9.9999726e-01', i.e.
'0.999..', and of it being 1 is '2.7777487e-06', i.e. '0.00000277..', and
these gracefully sum up to 1. Same for sample 2.
'''
</code></pre> <p>There is another way of doing this. As there are only two classes, if you have the probability corresponding to one class, you can get the probability of the other by subtracting it from 1. You can implement it as below:</p> <pre><code>model = keras.models.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1, activation='sigmoid')
])
'''
The difference between 'softmax' and 'sigmoid' is that 'softmax' is applied
across all the units in a unified manner while 'sigmoid' is applied to each
individual unit. So you can say that 'softmax' is applied on the 'layer' and
'sigmoid' on the 'units'.

The output of the 'sigmoid' is the probability of the result being 1, so the
result can be read as either 0 or 1 by thresholding that probability. We
therefore now use a different loss, BinaryCrossentropy, as the targets are
binary (either 0 or 1).
'''

loss = keras.losses.BinaryCrossentropy() # again without logits

# We once again train the model using the rest of the code and analyze the outputs.
preds = model.predict(X_test)
preds
'''
This gives the results:
array([[1.6424768e-13],
       [2.0349980e-06]], dtype=float32)

So for sample 1 the probability of it being '1' is '1.6424768e-13' and, as we
have only '1' and '0', the probability of it being '0' is '1 - 1.6424768e-13'.
Same for sample 2.
'''
</code></pre> <p>Now coming to the answer from @<a href="https://stackoverflow.com/users/11087305/mattpats">Mattpats</a>: it will also work, but in that case you will not get probabilities as the output; instead you will get the <code>logits</code>, since no <code>activation</code> is used and the loss is calculated on the logits by specifying the argument <code>from_logits=True</code>. To get the probabilities from them, you have to use it like below:</p> <pre><code>preds = model.predict(X_test)
sigmoid_preds = tf.math.sigmoid(preds).numpy()
preds, sigmoid_preds
'''
This gives the following results:
preds = array([[-51.056973],
               [-32.444508]], dtype=float32)
sigmoid_preds = array([[6.702527e-23],
                       [8.119502e-15]], dtype=float32)
'''
</code></pre>
python|tensorflow|keras|deep-learning
2
9,358
68,568,004
Deleting observations when the value of a variable is 0 for that observation using loops in Python
<p>I have a dataset looking like this pattern:</p> <pre><code>person x1 x2 x3 1 0 0 1 2 0 1 0 3 1 0 0 4 0 1 1 </code></pre> <p>I want to create a loop through x1 to x3 to delete observations (person) whenever x1 is 0, and then x2 is 0, then x3 is 0. Each time I will have a new dataframe.</p> <p>I've tried something like this</p> <pre><code>df = pd.read_csv(the input file above) for n in range(1,4): omit = (df['x'n] == 0) dataset[n] = df.loc[~omit] </code></pre> <p>But it doesn't work, and I don't even understand the error report. Can someone help me?</p>
<p>One approach is to create a dictionary of dataframes corresponding to which <code>x_</code> variable you are using to delete observations.</p> <pre><code>df = pd.DataFrame({'person': {0: 1, 1: 2, 2: 3, 3: 4}, 'x1': {0: 0, 1: 0, 2: 1, 3: 0}, 'x2': {0: 0, 1: 1, 2: 0, 3: 1}, 'x3': {0: 1, 1: 0, 2: 0, 3: 1}}) dfs = {k:df.loc[df[f'{k}'].ne(0)] for k in ['x1','x2','x3']} </code></pre> <p>You can then access each dataframe with, e.g., <code>dfs['x1']</code></p> <pre><code> person x1 x2 x3 2 3 1 0 0 </code></pre> <p>It seems like this may be what you're trying to do as well. With some modifications, your code can accomplish the same task:</p> <pre><code>dataset = {} for n in range(1,4): omit = (df[f'x{n}'] == 0) dataset[n] = df.loc[~omit] </code></pre>
python|pandas|dataframe
0
9,359
68,551,015
extracting the indices from pandas series that are values in the dictionary
<p>I have a dictionary where keys are of string type and values are pandas series that have the 'index value' structure. I want to extract pandas series indices and make them keys in another dictionary with values per each index. Are there any ways I can transform one dictionary into another? Thanks.</p> <p>This is the example of the dictionary I am working with:</p> <pre><code>value = pd.Series([5,5,5]) key = ['one', 'two', 'three'] dic = {} for i in key: dic[i] = value print(dic) </code></pre>
<p>My apologies if I was not clear enough. I will update the post.</p> <p>I figured out another way to do so:</p> <pre><code>rec = pd.Series([])
for item in key:
    rec = rec.append(dic[item])
</code></pre>
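<p>Note that <code>Series.append</code> was deprecated in pandas 1.4 and removed in 2.0, so on recent versions the same result comes from <code>pd.concat</code> (a sketch using the <code>dic</code> and <code>key</code> from the question):</p> <pre><code>rec = pd.concat([dic[item] for item in key])
</code></pre>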
python|pandas
0
9,360
68,743,916
How to update column value of CSV in Python?
<p>How can I change the rows in my CSV data? I have CSV data which contains a column <code>class</code> and the coordinates.</p> <pre><code>class  x1        y1        ...  x45        y45
A      0.187087  0.668525  ...  -0.024700  0.220235
B      0.202503  0.669253  ...  -0.107100  0.240229
....
C      0.248009  0.676325  ...  -0.070317  0.278087
C      0.245750  0.658381  ...  -0.077429  0.282217
D      0.235889  0.643202  ...  -0.080697  0.262705
</code></pre> <p>I would like to change the strings in my column <code>class</code> to numbers like this:</p> <pre><code>class  x1        y1        ...  x45        y45
0      0.187087  0.668525  ...  -0.024700  0.220235
1      0.202503  0.669253  ...  -0.107100  0.240229
....
2      0.248009  0.676325  ...  -0.070317  0.278087
2      0.245750  0.658381  ...  -0.077429  0.282217
3      0.235889  0.643202  ...  -0.080697  0.262705
</code></pre> <p>How can I do this? Only the column <code>class</code> is to be changed; everything else should remain as it is. I've tried something, but it just changes the column headers.</p> <pre><code>df = pd.read_csv('data.csv')
print(df.head())
df.rename({'A':'0', 'B':'1', 'C':'2', 'D':'3', 'E':'4', 'F':'5', 'O':'6',}, axis=1, inplace=True)
print(df.head())
</code></pre>
<p>If you have a large number of class values and don't want to write the dictionary by hand you can convert to a categorical variable and then take an encoding.</p> <pre><code>df['class'] = df['class'].astype('category').cat.codes </code></pre>
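<p>Note that <code>cat.codes</code> assigns integers by the sorted order of the unique values, so here A→0, B→1, C→2, D→3 as desired. If you ever need an explicit, order-independent mapping instead, a sketch with the labels from the question:</p> <pre><code>mapping = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
df['class'] = df['class'].map(mapping)
</code></pre>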
python|pandas
1
9,361
68,545,110
Using curve_fit to estimate common model parameters over datasets with different sizes
<p>I am working on a curve fitting problem, where I have the intention of estimating shared model parameters globally over several datasets of unequal size. I started working from the code in the link below, where a common a-parameter for a linear regression y = a*x + b is estimated on three different y-vectors with a common x-vector. <a href="https://stackoverflow.com/questions/60231516/how-to-use-curve-fit-from-scipy-optimize-with-a-shared-fit-parameter-across-mult">How to use curve_fit from scipy.optimize with a shared fit parameter across multiple datasets?</a></p> <p>I managed to adapt the code sample to the more general case, with three different x-vectors, one corresponding to each y data vector. However, when I want to extend it further to work also for datasets of unequal size, I run into the following error: &quot;ValueError: setting an array element with a sequence.&quot;.</p> <p>Please find the code sample below. Any help is highly appreciated!</p> <p>Cheers</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit x = [[0, 1, 2, 3], [0.2, 1.2, 2.2, 3.2], [0.3, 1.3, 2.3]] y = [[-0.80216234, 1.41125365, 1.42565202, 2.42567754], [ 1.34166743, 1.29731851, 2.98374731, 3.32110875], [ 1.71398203, 3.29737756, 3.81456949]] x = np.array(x) y = np.array(y) def f(x, a, b): return a * x + b def g(x, a, b_1, b_2, b_3): return np.concatenate((f(x[0], a, b_1), f(x[1], a, b_2), f(x[2], a, b_3))) (a, *b), _ = curve_fit(g, x, y.ravel()) for x_i, y_i, b_i in zip(x, y, b): plt.plot(x_i, f(x_i, a, b_i), label=f&quot;{a:.1f}x{b_i:+.1f}&quot;) plt.plot(x_i, y_i, linestyle=&quot;&quot;, marker=&quot;x&quot;, color=plt.gca().lines[-1].get_color()) plt.legend() plt.show() </code></pre> <p>See below for the code of the working example with multiple x-vectors of equal size:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit x = [[0, 1, 2, 3], [0.2, 1.2, 2.2, 3.2], [0.3, 1.3, 2.3, 3.3]] y = [[-0.80216234, 1.41125365, 1.42565202, 2.42567754], [ 1.34166743, 1.29731851, 2.98374731, 3.32110875], [ 1.71398203, 3.29737756, 3.81456949, 4.25]] x = np.array(x) y = np.array(y) def f(x, a, b): return a * x + b def g(x, a, b_1, b_2, b_3): return np.concatenate((f(x[0], a, b_1), f(x[1], a, b_2), f(x[2], a, b_3))) (a, *b), _ = curve_fit(g, x, y.ravel()) for x_i, y_i, b_i in zip(x, y, b): plt.plot(x_i, f(x_i, a, b_i), label=f&quot;{a:.1f}x{b_i:+.1f}&quot;) plt.plot(x_i, y_i, linestyle=&quot;&quot;, marker=&quot;x&quot;, color=plt.gca().lines[-1].get_color()) plt.legend() plt.show() </code></pre>
<p>In general I agree that the implications for least-squares fitting are rather complicated... some quick thoughts:</p> <ul> <li>can you be certain that the obtained <code>b</code> parameters are equally valid if they are estimated from different-length datasets?</li> <li>the more b-parameters you get, the more uncertain their estimates will be, since you only optimize the combined fit-performance and not each individual fit-performance</li> <li>I'm also uncertain how well the numerical evaluation of the jacobian will work in such a case... it might be worth implementing a custom <code>jac</code> function that evaluates the jacobian in an exact way</li> <li>... and I'm sure there are more problems which I'm currently not aware of :D</li> </ul> <p>Nevertheless, you can of course trick <code>scipy.optimize</code> into doing what you want...<br /> However, you have to go 1 step deeper and use <code>scipy.optimize.least_squares</code> directly rather than the higher-level <code>scipy.optimize.curve_fit</code> function.</p> <p>This way you can alter how the residuals are calculated to accept datasets of different lengths.</p> <p>... here's a quick-and-dirty implementation of how it could work:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import least_squares

x = [[0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
     [0.2, 1.2, 2.2, 3.2],
     [0.3, 1.3, 2.3 ]]
y = [[-0.80216234, 1.41125365, 1.42565202, 2.42567754, 3, 4],
     [ 1.34166743, 1.29731851, 2.98374731, 3.32110875],
     [ 1.71398203, 3.29737756, 3.81456949 ]]

def f(x, a, b):
    return a * x + b

def fun(parameters):
    # separate a and b parameters
    a, *b = parameters
    # calculate function-results based on a shared a- and variable b- parameters
    res = (f(xi, a, bi) for (xi, bi) in zip(map(np.array, x), b))
    # calculate the residuals
    errs = []
    for i, j in zip(res, map(np.array, y)):
        errs += (i - j).tolist()
    return np.array(errs)

# set start-values
start_values = (1, 1, 2, 3)
# do the fit
a, *b = least_squares(fun, start_values).x

for x_i, y_i, b_i in zip(map(np.array, x), y, b):
    plt.plot(x_i, f(x_i, a, b_i), label=f&quot;{a:.1f}x{b_i:+.1f}&quot;)
    plt.plot(x_i, y_i, linestyle=&quot;&quot;, marker=&quot;x&quot;,
             color=plt.gca().lines[-1].get_color())

plt.legend()
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/l17vj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l17vj.png" alt="fit results" /></a></p>
python|arrays|numpy|curve-fitting
0
9,362
68,824,990
Iterate through df rows faster
<p>I am trying to iterate through rows of a Pandas df to get data from one column of the row, and using that data to add new columns. The code is listed below but it is VERY slow. Is there any way to do what I am trying to do without iterating thru the individual rows of the dataframe?</p> <pre><code>ctqparam = [] wwy = [] ww = [] for index, row in df.iterrows(): date = str(row['Event_Start_Time']) day = int(date[8] + date[9]) month = int(date[5] + date[6]) total = 0 for i in range(0, month-1): total += months[i] total += day out = total // 7 ww += [out] wwy += [str(date[0] + date[1] + date[2] + date[3])] val = str(row['TPRev']) out = &quot;&quot; for letter in val: if letter != '.': out += letter df.replace(to_replace=row['TPRev'], value=str(out), inplace = True) val = str(row['Subtest']) if val in ctqparam_dict.keys(): ctqparam += [ctqparam_dict[val]] # add WWY column, WW column, and correct data format of Test_Tape column df.insert(0, column='Work_Week_Year', value = wwy) df.insert(3, column='Work_Week', value = ww) df.insert(4, column='ctqparam', value = ctqparam) </code></pre>
<p>It's hard to say exactly what you're trying to do. However, if you're looping through rows, chances are that there is a better way to do it.</p> <p>For example, given a csv file that looks like this..</p> <pre><code>Event_Start_Time,TPRev,Subtest
4/12/19 06:00,&quot;this. string. has dots.. in it.&quot;,{'A_Dict':'maybe?'}
6/10/19 04:27,&quot;another stri.ng wi.th d.ots.&quot;,{'A_Dict':'aVal'}
</code></pre> <p>You may want to:</p> <ol> <li>Format <code>Event_Start_Time</code> as datetime.</li> <li>Get the week number from <code>Event_Start_Time</code>.</li> <li>Remove all the dots (.) from the strings in column <code>TPRev</code>.</li> <li>Expand a dictionary contained in <code>Subtest</code> to its own column.</li> </ol> <p>Without looping through the rows, consider doing things by columns. An operation applied to a column behaves as if you did it to the first 'cell' and it replicates all the way down.</p> <p><strong>Code:</strong></p> <pre><code>import pandas as pd

df = pd.read_csv('data.csv')
print(df)

  Event_Start_Time                            TPRev              Subtest
0    4/12/19 06:00  this. string. has dots.. in it.  {'A_Dict':'maybe?'}
1    6/10/19 04:27     another stri.ng wi.th d.ots.    {'A_Dict':'aVal'}

# format 'Event_Start_Time' as datetime
df['Event_Start_Time'] = pd.to_datetime(df['Event_Start_Time'], format='%d/%m/%y %H:%M')

# get the week number from 'Event_Start_Time'
df['Week_Number'] = df['Event_Start_Time'].dt.isocalendar().week

# replace all '.' (periods) in the 'TPRev' column
df['TPRev'] = df['TPRev'].str.replace('.', '', regex=False)

# get a dictionary string out of column 'Subtest' and put into a new column
df = pd.concat([df.drop(['Subtest'], axis=1), df['Subtest'].map(eval).apply(pd.Series)], axis=1)

print(df)

     Event_Start_Time                       TPRev  Week_Number  A_Dict
0 2019-12-04 06:00:00  this string has dots in it           49  maybe?
1 2019-10-06 04:27:00    another string with dots           40    aVal

print(df.info())

Data columns (total 4 columns):
 #   Column            Non-Null Count  Dtype
---  ------            --------------  -----
 0   Event_Start_Time  2 non-null      datetime64[ns]
 1   TPRev             2 non-null      object
 2   Week_Number       2 non-null      UInt32
 3   A_Dict            2 non-null      object
dtypes: UInt32(1), datetime64[ns](1), object(2)
</code></pre> <p>So you end up with a dataframe like this...</p> <pre><code>     Event_Start_Time                       TPRev  Week_Number  A_Dict
0 2019-12-04 06:00:00  this string has dots in it           49  maybe?
1 2019-10-06 04:27:00    another string with dots           40    aVa
</code></pre> <p>Obviously you'll probably want to do other things. Look at your data. Make a list of what you want to do to each column or what new columns you need. Don't mention how right now, as chances are it's possible and has been done before - you just need to find the <strong>existing method</strong>.</p> <p>You may write down <em>get the difference in days from the current row and the row beneath</em> etc. Finally, search out how to do the formatting or calculation you require. Break the problem down.</p>
python|pandas
1
9,363
36,555,773
Defining x ticks in pandas
<p>I have a df like so:</p> <pre><code> bnw_Percent bnw_Value mtgp_Percent mtgp_Value php_Percent php_Value 0 0.004414 1.48150 0.010767 2.03548 0.028395 3.91826 1 0.015450 5.44825 0.020337 6.01205 0.352093 7.77627 2 0.059593 9.41501 0.043067 9.98863 1.118746 11.63430 3 0.156709 13.38180 0.137575 13.96520 1.164177 15.49230 4 0.355353 17.34850 0.242849 17.94180 1.436765 19.35030 5 0.560620 21.31530 0.436650 21.91840 1.635527 23.20830 6 0.896109 25.28200 0.695051 25.89490 1.839968 27.06630 7 1.436864 29.24880 1.208264 29.87150 2.169345 30.92430 8 2.145364 33.21550 2.025338 33.84810 3.219944 34.78230 9 3.412276 37.18230 3.423814 37.82470 4.514737 38.64030 10 5.566469 41.14910 5.351055 41.80120 6.542109 42.49830 11 9.188426 45.11580 8.220981 45.77780 9.233914 46.35640 12 14.300219 49.08260 12.081444 49.75440 12.010904 50.21440 13 18.072263 53.04930 16.622603 53.73100 13.833835 54.07240 14 19.641556 57.01610 20.070343 57.70750 13.788404 57.93040 15 14.682058 60.98280 16.976708 61.68410 12.715089 61.78840 16 7.237292 64.94960 9.493845 65.66070 9.489466 65.64640 17 2.057077 68.91640 2.672537 69.63730 4.100176 69.50440 18 0.211888 72.88310 0.265579 73.61380 0.800727 73.36240 19 0.000000 76.84990 0.001196 77.59040 0.005679 77.22040 </code></pre> <p>and I am making a plot with the following code:</p> <pre><code>plot=df.plot(x=['bnw_Value', 'mtgp_Value', 'php_Value'], y=['bnw_Percent', 'php_Percent', 'mtgp_Percent'], title='Frequency Distribution of Fuzzy Accumulated Values') plot.set_xlabel('Value') plot.set_ylabel('Percent') plot.legend(loc='center left', bbox_to_anchor=(1, 0.5)) </code></pre> <p>which looks like this:</p> <p><a href="https://i.stack.imgur.com/qnOOh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qnOOh.png" alt="enter image description here"></a></p> <p>my problem is this results in overlapping x-axis values, what I really want is something like the minimum value found in any of <code>bnw_Value</code>, <code>mtgp_Value</code> or <code>php_Value</code> and the maximum value in any of the those three columns in increments of 10. Or at least for the x axis to be readable somehow.</p>
<p>you can do it this way:</p> <pre><code>df.plot(x=df[[col for col in df.columns if 'Value' in col]].min(axis=1), y=['bnw_Percent', 'php_Percent', 'mtgp_Percent'], title='Frequency Distribution of Fuzzy Accumulated Values') </code></pre> <p><a href="https://i.stack.imgur.com/uGoV4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uGoV4.png" alt="enter image description here"></a></p> <p>Explanation:</p> <pre><code>In [24]: [col for col in df.columns if 'Value' in col] Out[24]: ['bnw_Value', 'mtgp_Value', 'php_Value'] In [25]: df[[col for col in df.columns if 'Value' in col]].min(axis=1) Out[25]: 0 1.48150 1 5.44825 2 9.41501 3 13.38180 4 17.34850 5 21.31530 6 25.28200 7 29.24880 8 33.21550 9 37.18230 10 41.14910 11 45.11580 12 49.08260 13 53.04930 14 57.01610 15 60.98280 16 64.94960 17 68.91640 18 72.88310 19 76.84990 dtype: float64 </code></pre>
python|pandas
1
9,364
36,449,688
Python: how to concatenate multiple pandas dataframes to produce a box-and-whisker plot?
<p>Say I have a dataset of values that were binned. The bins are stored in a dictionary called <code>mydict</code>. To obtain the histogram quantities needed to plot a Box-and-Whisker, I have done:</p> <p><code>df_dataset = pd.DataFrame.from_dict(dict([ (k, pd.Series(v)) for k,v in mydict.items() ]))</code></p> <p>To get the histogram quantities:</p> <pre><code>mydict_min = df_dataset.min() mydict_max = df_dataset.max() mydict_median = df_dataset.median() mydict_1st3rd = df_dataset.quantile([.1, .3]) </code></pre> <p>My problem: I need to plot a <strong><a href="https://en.wikipedia.org/wiki/Box_plot" rel="nofollow noreferrer">Box-and-Whisker plot</a></strong> given the histogram quantities shown above. How can I do this by using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html" rel="nofollow noreferrer">DataFrame.boxplot</a>? In <a href="https://stackoverflow.com/questions/35712627/python-matplotlib-box-plot?answertab=active#tab-top">this example</a> a Box-and-Whisker was built by doing:</p> <pre><code>import matplotlib.pyplot as plt from pandas import DataFrame df = DataFrame({'Parameter': ['A',]*8, 'Site': ['S1', 'S2', 'S1', 'S2', 'S1', 'S2', 'S1', 'S2'], 'Value': [2.34, 2.67, 2.56, 2.89, 3.45, 4.45, 3.67, 4.56]}) df.boxplot(by=['Parameter', 'Site']) plt.show() </code></pre> <p>Do I need to create a DataFrame of DataFrames? I already have the histogram quantities (shown above) and need no grouping. <strong>How could I amend this code to include my histogram quantities?</strong></p>
<p>After a brief search I have figured out there is no need to concatenate anything. The simple answer is to construct the Box-and-Whisker plot from <code>df_dataset</code> directly: it is already a DataFrame whose columns are Series, and as such it stores all the raw values from which the histogram quantities are derived.</p> <p>The line creating the Box-and-Whisker plot is: <code>df_dataset.plot.box()</code></p> <p>Further information can be found <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html" rel="nofollow">here</a>.</p> <p>Details such as axis titles, plot title, grids, bins, and everything else can be set based on one's needs.</p>
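<p>For instance, a minimal sketch with a made-up <code>mydict</code> of three unequal bins, built exactly as in the question:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

mydict = {'bin1': [1, 2, 3, 4], 'bin2': [2, 3], 'bin3': [5, 6, 7]}
df_dataset = pd.DataFrame.from_dict(dict([(k, pd.Series(v)) for k, v in mydict.items()]))
df_dataset.plot.box()
plt.show()
</code></pre>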
python|pandas|matplotlib|plot|boxplot
1
9,365
65,660,379
psycopg2.errors.UndefinedTable: table "live" does not exist Pushing large dataframe to postgres database
<p>I am trying to push a large dataframe to a Postgres db. I am using <code>df.to_sql</code> for this, but the problem is it takes forever to complete; I have up to 1 million dataframe rows, so I decided to use Python multiprocessing.</p> <p>Below is my code:</p> <pre><code>def parallelize_push_to_db(df, table, chunk_size):
    chunk = split_df(df, chunk_size=chunk_size)
    chunk_len = len(chunk)
    processes = list()
    start_time = time.time()
    for i in range(chunk_len):
        process = Process(target=pussh_df_to_db, args=(chunk[i], table))
        process.start()
        processes.append(process)

    for pro in processes:
        pro.join()
    end_time = time.time() - start_time
    print(f'parra {end_time}')

def split_df(df, chunk_size):
    chunks = list()
    split_size = math.ceil(len(df) / chunk_size)
    for i in range(split_size):
        chunks.append(df[i * chunk_size:(i + 1) * chunk_size])
    return chunks

def pussh_df_to_db(df, table):
    # i think we should append the new df to the data in the db if not empty
    base = DB_Base()
    base.pg_cur.execute(f&quot;DROP TABLE IF EXISTS {table}&quot;)
    print(df.shape)
    print('running...........to send dt to postgres')
    df.to_sql(table, con=base.sql_alchemy_engine_conn, if_exists='replace')

if &quot;__name__&quot; == &quot;__main__&quot;:
    parallelize_push_to_db(check_data, 'live', 1000)
</code></pre> <p>Some of the code runs, but it fails later with the below error:</p> <pre><code>in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) table &quot;user_user_recommendations_AAa&quot; does not exist

[SQL: DROP TABLE &quot;live&quot;]
</code></pre> <p>I think each process tries to create a new table and drop the old one. I really don't know how to solve this, any help??</p>
<p>Before I start, I would like to say that you may be better off using the threading module. Before deciding on this sort of technique, you must understand the task you are doing: some tasks are CPU-bound while others are I/O-heavy. Try to understand this before you choose which one you want, but sometimes both will work.</p> <pre><code>def parallelize_push_to_db(df, table, chunk_size):
    chunk = split_df(df, chunk_size=chunk_size)
    chunk_len = len(chunk)
    processes = list()
    start_time = time.time()
    for i in range(chunk_len):
        process = Process(target=pussh_df_to_db, args=(chunk[i], table))
        process.start()
        processes.append(process)

    for pro in processes:
        pro.join()
    end_time = time.time() - start_time
    print(f'parra {end_time}')
</code></pre> <p>The first answer will also work, but a solution for your question is this..</p>
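<p>As a side note on the actual crash: every worker in <code>pussh_df_to_db</code> drops and replaces the same table, so the processes race against each other. A sketch of a less destructive worker (reusing the question's <code>DB_Base</code>, and assuming the table is created or cleared once before the workers start) appends instead:</p> <pre><code>def pussh_df_to_db(df, table):
    base = DB_Base()
    # append so concurrent workers don't drop/recreate the table underneath each other
    df.to_sql(table, con=base.sql_alchemy_engine_conn, if_exists='append')
</code></pre>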
python|pandas|postgresql|multithreading|multiprocessing
2
9,366
21,264,291
Counting consecutive events on pandas dataframe by their index
<p>I'm trying to use dataframe to select events. Let's say I have only two kinds of events: 1, -1 and the following dataframe:</p> <pre><code>a=pd.DataFrame({'ev':[1,1,-1,1,1,1,-1,-1]}) </code></pre> <p>with:</p> <pre><code>a[a['ev']&gt;0] </code></pre> <p>I can select only the 1 events. Now I would like to count how many consecutive events I have; in this case I would like to get <code>[2,3]</code></p> <p>The point is: how to get the list (or the series) of the indexes involved in <code>a[a['ev']&gt;0]</code>? </p>
<p>add <code>.index</code> to return a list of the indexes </p> <pre><code>In [43]: a=pd.DataFrame({'ev':[1,1,-1,1,1,1,-1,-1]}) In [44]: b = list(a[a['ev']&gt;0].index) In [45]: b Out[45]: [0, 1, 3, 4, 5] </code></pre>
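<p>To go from those indexes to the run lengths <code>[2, 3]</code> the question asks about, one option (a sketch) is to label each run of consecutive equal values and count only the positive ones:</p> <pre><code>import pandas as pd

a = pd.DataFrame({'ev': [1, 1, -1, 1, 1, 1, -1, -1]})

# a new run starts whenever the value changes
runs = (a['ev'] != a['ev'].shift()).cumsum()
counts = a[a['ev'] &gt; 0].groupby(runs).size().tolist()
print(counts)  # [2, 3]
</code></pre>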
python|indexing|pandas
-1
9,367
21,412,280
Iterating over frames for dropping columns
<p>The following for-loop iterating over two data frames doesn't work:</p> <pre><code>for frame in [df_train, df_test]: frame = frame.drop('Embarked', axis=1) </code></pre> <p>I don't get an error message, but the column 'Embarked' is not dropped in the two data frames. Why?</p>
<p>From <code>help(frame.drop)</code>:</p> <pre><code>def drop(self, labels, axis=0, level=None, inplace=False, **kwargs): """ Return new object with labels in requested axis removed ... inplace : bool, default False If True, do operation inplace and return None. </code></pre> <p>Right now, you're simply making new objects and naming them <code>frame</code>, which doesn't affect anything in your list. You could use <code>inplace=True</code> to affect the original object instead:</p> <pre><code>for frame in [df_train, df_test]: frame.drop('Embarked', axis=1, inplace=True) </code></pre>
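<p>If you prefer to avoid <code>inplace</code>, a sketch of the alternative is to rebind the names to the new objects that <code>drop</code> returns:</p> <pre><code>df_train, df_test = (frame.drop('Embarked', axis=1) for frame in (df_train, df_test))
</code></pre>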
python|pandas
4
9,368
53,634,265
tensorflow lite issue : output labels file size is fixed ?? output or input tensor dimension mismatch issue in android
<p>A demo is available from Tensorflow at the following link:</p> <p><a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo</a></p> <p>If you change the labels.txt file in the above demo by adding or removing a text (class), it will crash the application.</p> <p>This makes the given solution unable to change the output size.</p> <p>Please suggest a solution: how can we change the <code>label.txt</code> file, with constants like <code>INPUT_SIZE</code> or <code>OUTPUT_SIZE</code>, so that this bug is solved?</p> <p><strong>Note</strong></p> <p>The application works fine as it is, but crashes upon changing the <code>label.txt</code> file, e.g. adding or removing names in the file.</p> <p><strong>Crash:</strong></p> <p>Cannot copy between a <code>TensorFlowLite</code> tensor with shape [1, 1001] and a Java object with shape [1, 1000].</p> <p><strong>The above demo is built using the already-present <code>gradle</code> configuration settings.</strong></p> <p><strong>Feature request:</strong> Changing the label file size and working accordingly through a parameter is available in the <code>iOS</code> demo via<br> <code>output_size = 1000;</code><br> This feature should be in the Android version as well.</p> <p><strong>Thanks.</strong></p>
<p>As a feature request, you should generally file things to Github.</p> <p>If you're looking for a local fix, I'm guessing the issue is that the model outputs a 1x1000 and you'll need to modify the actual model to be 1x1001 (and all the shapes of everything that comes before it).</p>
android|tensorflow|bytebuffer|tensor|tensorflow-lite
0
9,369
6,527,641
Speed up python code for computing matrix cofactors
<p>As part of a complex task, I need to compute <a href="http://en.wikipedia.org/wiki/Cofactor_%28linear_algebra%29" rel="nofollow noreferrer">matrix cofactors</a>. I did this in a straightforward way using this <a href="https://stackoverflow.com/questions/3858213/numpy-routine-for-computing-matrix-minors">nice code for computing matrix minors</a>. Here is my code:</p> <pre><code>def matrix_cofactor(matrix): C = np.zeros(matrix.shape) nrows, ncols = C.shape for row in xrange(nrows): for col in xrange(ncols): minor = matrix[np.array(range(row)+range(row+1,nrows))[:,np.newaxis], np.array(range(col)+range(col+1,ncols))] C[row, col] = (-1)**(row+col) * np.linalg.det(minor) return C </code></pre> <p>It turns out that this matrix cofactor code is the bottleneck, and I would like to optimize the code snippet above. Any ideas as to how to do this?</p>
<p>If your matrix is invertible, the cofactor is related to the inverse:</p> <pre><code>def matrix_cofactor(matrix):
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)
</code></pre> <p>This gives large speedups (~ 1000x for 50x50 matrices). The main reason is fundamental: this is an <code>O(n^3)</code> algorithm, whereas the minor-det-based one is <code>O(n^5)</code>.</p> <p>This probably means that also for non-invertible matrices, there is some clever way to calculate the cofactor (i.e., not use the mathematical formula that you use above, but some other equivalent definition).</p> <hr> <p>If you stick with the det-based approach, what you can do is the following:</p> <p>The majority of the time seems to be spent inside <code>det</code>. (Check out <a href="http://packages.python.org/line_profiler/" rel="noreferrer">line_profiler</a> to find this out yourself.) You can try to speed that part up by linking Numpy with the Intel MKL, but other than that, there is not much that can be done.</p> <p>You can speed up the other part of the code like this:</p> <pre><code>minor = np.zeros([nrows-1, ncols-1])
for row in xrange(nrows):
    for col in xrange(ncols):
        minor[:row,:col] = matrix[:row,:col]
        minor[row:,:col] = matrix[row+1:,:col]
        minor[:row,col:] = matrix[:row,col+1:]
        minor[row:,col:] = matrix[row+1:,col+1:]
        ...
</code></pre> <p>This gains some 10-50% total runtime depending on the size of your matrices. The original code has Python <code>range</code> and list manipulations, which are slower than direct slice indexing. You could try also to be more clever and copy only parts of the minor that actually change --- however, already after the above change, close to 100% of the time is spent inside <code>numpy.linalg.det</code> so that further optimization of the other parts does not make so much sense.</p>
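<p>A quick sanity check of the inverse-based formula, via the adjugate identity <code>A.dot(adj(A)) == det(A) * I</code> with <code>adj(A) = C.T</code> (the 5x5 size here is arbitrary):</p> <pre><code>import numpy as np

def cofactor_fast(matrix):
    # cofactor via the inverse; valid for invertible matrices
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

m = np.random.rand(5, 5)  # random matrices are invertible with probability 1
C = cofactor_fast(m)
assert np.allclose(np.dot(m, C.T), np.linalg.det(m) * np.eye(5))
</code></pre>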
python|matrix|performance|numpy|linear-algebra
14
9,370
12,114,863
Permuting numpy's 2d array indexes
<p>Is there any numpy function or clever use of views to accomplish what the following function does?</p> <pre><code> import numpy as np

 def permuteIndexes(array, perm):
    newarray = np.empty_like(array)
    max_i, max_j = newarray.shape
    for i in xrange(max_i):
        for j in xrange(max_j):
            newarray[i,j] = array[perm[i], perm[j]]
    return newarray
</code></pre> <p>That is, for a given permutation of the indexes of the matrix in a list <code>perm</code>, this function calculates the result of applying this permutation to the indexes of a matrix.</p>
<pre><code>def permutateIndexes(array, perm): return array[perm][:, perm] </code></pre> <hr> <p>Actually, this is better as it does it in a single go:</p> <pre><code>def permutateIndexes(array, perm): return array[np.ix_(perm, perm)] </code></pre> <p>To work with non-square arrays:</p> <pre><code>def permutateIndexes(array, perm): return array[np.ix_(*(perm[:s] for s in array.shape))] </code></pre>
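<p>A small usage example for the <code>np.ix_</code> version:</p> <pre><code>import numpy as np

a = np.arange(9).reshape(3, 3)
perm = [2, 0, 1]

# rows and columns are permuted together
print(a[np.ix_(perm, perm)])
</code></pre>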
python|arrays|numpy|multidimensional-array
6
9,371
71,833,667
Keras Confusion Matrix does not look right
<p>I am running a Keras model on the Breast Cancer dataset. I got around 96% accuracy with it, but the confusion matrix is completely off. Here are the graphs:</p> <p><a href="https://i.stack.imgur.com/7S6EI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7S6EI.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/4P2kn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4P2kn.png" alt="enter image description here" /></a></p> <p>And here is my confusion matrix:</p> <p><a href="https://i.stack.imgur.com/YGTR3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YGTR3.png" alt="enter image description here" /></a></p> <p>The matrix is saying that I have no true negatives and they're actually false negatives, when I believe that it's the reverse. Another thing that I noticed is that when the amount of true values are added up and divided by the length of the testing set, the result does not reflect the score that is calculated from the model. Here is the whole code:</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from tensorflow import keras from tensorflow.math import confusion_matrix from keras import Sequential from keras.layers import Dense breast = load_breast_cancer() X = breast.data y = breast.target #Split data into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) #Scale data sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.fit_transform(X_test) #Create and fit keras model model = Sequential() model.add(Dense(8, activation='relu', input_shape=[X.shape[1]])) model.add(Dense(4, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) history = model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=16, epochs=50, verbose=1) history = pd.DataFrame(history.history) #Display loss visualization history.loc[:,['loss','val_loss']].plot(); history.loc[:,['accuracy','val_accuracy']].plot(); #Create confusion matrix y_pred = model.predict(X_test) conf_matrix = confusion_matrix(y_test,y_pred) cm = sns.heatmap(conf_matrix, annot=True, cmap='gray', annot_kws={'size':30}) cm_labels = ['Positive','Negative'] cm.set_xlabel('True') cm.set_xticklabels(cm_labels) cm.set_ylabel('Predicted') cm.set_yticklabels(cm_labels); </code></pre> <p>Am I doing something wrong here? Am I missing something?</p>
<p>Check the confusion matrix values from the <code>sklearn.metrics.confusion_matrix</code> <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html" rel="nofollow noreferrer">official documentation</a>. The values are so organized:</p> <ul> <li><code>TN</code>: upper left corner</li> <li><code>FP</code>: upper right corner</li> <li><code>FN</code>: lower left corner</li> <li><code>TP</code>: lower right corner</li> </ul> <p>You're getting 53 true negatives and 90 false negatives from the current confusion matrix.</p>
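<p>A hedged sketch of the fix, assuming the model outputs sigmoid probabilities as in the question: threshold <code>y_pred</code> to hard 0/1 labels first, and label the axes following the sklearn convention (rows = true, columns = predicted):</p> <pre><code>from sklearn.metrics import confusion_matrix
import seaborn as sns

# turn probabilities into class labels before building the matrix
y_pred_labels = (y_pred &gt; 0.5).astype(int).ravel()
conf_matrix = confusion_matrix(y_test, y_pred_labels)

ax = sns.heatmap(conf_matrix, annot=True, cmap='gray', annot_kws={'size': 30})
ax.set_xlabel('Predicted')
ax.set_ylabel('True')
</code></pre>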
python|tensorflow|keras|deep-learning|confusion-matrix
1
9,372
71,884,072
Is it possible to use a docker image that has both pyspark and pandas installed?
<p>My flask application uses pandas and pyspark.</p> <p>I created a Dockerfile that uses a docker Pandas image:</p> <pre><code>FROM amancevice/pandas
RUN mkdir /app
ADD . /app
WORKDIR /app
EXPOSE 5000
RUN pip install -r requirements.txt
CMD [&quot;python&quot;, &quot;app.py&quot;]
</code></pre> <p>In requirements.txt I have:</p> <pre><code>flask
pymysql
sqlalchemy
passlib
hdfs
Werkzeug
pandas
pyspark
</code></pre> <p>Where I'm using pyspark is in this function (it was just an example for verifying that it works):</p> <pre><code>from pyspark.sql import SparkSession

@app.route('/home/search', methods=[&quot;GET&quot;, &quot;POST&quot;])
def search_tab():
    if 'loggedin' in session:
        user_id = 'user' + str(session['id'])
        if request.method == 'POST':
            checkboxData = request.form.getlist(&quot;checkboxData&quot;)
            for cd in checkboxData:
                if cd.endswith(&quot;.csv&quot;):
                    data_hdfs(user_id, cd)
                else:
                    print(&quot;xml&quot;)
        return render_template(&quot;search.html&quot;, id=session['id'])
    return render_template('login.html')

def data_hdfs(user_id, cd):
    #spark session
    warehouse_location ='hdfs://hdfs-nn:9000/flask_platform'
    spark = SparkSession \
        .builder \
        .master(&quot;local[2]&quot;) \
        .appName(&quot;read csv&quot;) \
        .config(&quot;spark.sql.warehouse.dir&quot;, warehouse_location) \
        .getOrCreate()
    raw_data = spark.read.options(header='True', delimiter=';').csv(&quot;hdfs://hdfs-nn:9000&quot;+cd)
    raw_data.repartition(1).write.format('csv').option('header',True).mode('overwrite').option('sep',';').save(&quot;hdfs://hdfs-nn:9000/flask_platform/&quot;+user_id+&quot;/staging_area/mapped_files/mapped_file_4.csv&quot;)
    return spark.stop()
</code></pre> <p>But when I try to run the pyspark code, I get this error:</p> <pre><code>JAVA_HOME is not set 172.20.0.1 - - [15/Apr/2022 11:58:16] &quot;POST /home/search HTTP/1.1&quot; 500 - Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 2095, in __call__ return self.wsgi_app(environ, start_response) File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 2080, in wsgi_app response = self.handle_exception(e) File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 2077, in wsgi_app response = self.full_dispatch_request() File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 1525, in full_dis rv = self.handle_user_exception(e) File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 1523, in full_dis rv = self.dispatch_request() File &quot;/usr/local/lib/python3.9/site-packages/flask/app.py&quot;, line 1509, in dispatch return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File &quot;/app/app.py&quot;, line 243, in search_tab data_hdfs(user_id, cd) File &quot;/app/app.py&quot;, line 255, in data_hdfs spark = SparkSession \ File &quot;/usr/local/lib/python3.9/site-packages/pyspark/sql/session.py&quot;, line 228, in sc = SparkContext.getOrCreate(sparkConf) File &quot;/usr/local/lib/python3.9/site-packages/pyspark/context.py&quot;, line 392, in get SparkContext(conf=conf or SparkConf()) File &quot;/usr/local/lib/python3.9/site-packages/pyspark/context.py&quot;, line 144, in __i SparkContext._ensure_initialized(self, gateway=gateway, conf=conf) File &quot;/usr/local/lib/python3.9/site-packages/pyspark/context.py&quot;, line 339, in _en SparkContext._gateway = gateway or launch_gateway(conf) File &quot;/usr/local/lib/python3.9/site-packages/pyspark/java_gateway.py&quot;, line 108, i raise RuntimeError(&quot;Java gateway process exited before sending its port number&quot;) RuntimeError: Java gateway process exited before sending its port number </code></pre> <p>Is it possible to use a docker image that has both pyspark and pandas installed? If so, where can I find it? Because I need to use both in my project. Thanks</p>
<p><code>pyspark</code> (aka Spark) requires java, which doesn't seems to be installed in your image.</p> <p>You can try something like:</p> <pre><code>FROM amancevice/pandas RUN apt-get update \ &amp;&amp; apt-get install -y --no-install-recommends \ openjdk-11-jre-headless \ &amp;&amp; apt-get autoremove -yqq --purge \ &amp;&amp; apt-get clean \ &amp;&amp; rm -rf /var/lib/apt/lists/* ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 RUN pip install -r requirements.txt RUN mkdir /app ADD . /app WORKDIR /app EXPOSE 5000 CMD [&quot;python&quot;, &quot;app.py&quot;] </code></pre> <p>Note that I also moved your <code>requrements.txt</code> installation before adding your code. This will save your time by using docker cache.</p>
java|pandas|docker|pyspark|dockerfile
0
9,373
72,017,161
How to assign values based on column value, then take the average of the sum
<p>I have a df with 6 columns (category 1, 2, ..., 6), all with values of either &quot;I&quot;, &quot;II&quot;, &quot;III&quot;, or &quot;IV&quot;. These string values are the categories that apply to each column. I want to make another column (&quot;Score&quot;) that, based on the value in each column, assigns a number to each cell, sums them, and takes the average.</p> <p>I want to assign 100 to values of I, 70 to II, 35 to III, and 0 to IV. I then want to sum all of these for the row and take the average. Is there any way to assign these values and perform the sum and average without creating more than 1 extra column?</p>
<p>You can create a dictionary to map the values and use <code>np.mean</code> to get the average:</p> <pre><code>d = {'I': 100, 'II': 70, 'III': 35, 'IV': 0} df['Average Value'] = df.apply(lambda x: np.mean(x.map(d)), axis = 1) </code></pre>
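<p>Assuming the frame holds only the six category columns, a vectorized sketch without <code>apply</code> is also possible; <code>replace</code> maps every cell at once and <code>mean(axis=1)</code> averages across the row:</p> <pre><code>d = {'I': 100, 'II': 70, 'III': 35, 'IV': 0}
df['Average Value'] = df.replace(d).mean(axis=1)
</code></pre>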
python|pandas|dataframe|numpy|apply
1
9,374
71,871,741
Specifically change all elements in a numpy array based on a condition without iterating over each element
<p>Hi I would like to <strong>change all the values</strong> in this array (test) simultaneously based on a second boolean array (test_map) <strong>without iterating over each item</strong>.</p> <pre><code>test = np.array([1,1,1,1,1,1,1,1]) test_map = np.array([True,False,False,False,False,False,False,True]) test[test_map] = random.randint(0,1) </code></pre> <p>output:</p> <pre><code>array([0, 1, 1, 1, 1, 1, 1, 0]) or array([1, 1, 1, 1, 1, 1, 1, 1]) </code></pre> <p>The problem with this is that I want the values that should be changed (in this case the first and last value) to each randomly be changed to a 0 or 1. So the 4 possible outputs should be:</p> <pre><code>array([0, 1, 1, 1, 1, 1, 1, 0]) or array([1, 1, 1, 1, 1, 1, 1, 1]) or array([1, 1, 1, 1, 1, 1, 1, 0]) or array([0, 1, 1, 1, 1, 1, 1, 1]) </code></pre>
<p>One possible solution is to generate a random array of &quot;bits&quot;, with <code>np.random.randint</code>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np test = np.array([1,1,1,1,1,1,1,1]) test_map = np.array([True,False,False,False,False,False,False,True]) # array of random 1/0 of the same length as test r = np.random.randint(0, 2, len(test)) test[test_map] = r[test_map] test </code></pre>
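<p>A variant of the same idea that draws exactly as many random bits as there are <code>True</code> entries in the mask:</p> <pre><code>import numpy as np

test = np.array([1, 1, 1, 1, 1, 1, 1, 1])
test_map = np.array([True, False, False, False, False, False, False, True])

# one random bit per masked position
test[test_map] = np.random.randint(0, 2, test_map.sum())
</code></pre>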
python|numpy|vectorization
2
9,375
71,877,483
Store series with different length in for loop
<p>The original df is like below:</p> <pre><code>Hour Count 0 15 0 0 0 0 0 17 0 18 0 12 1 55 1 0 1 0 1 0 1 53 1 51 ... </code></pre> <p>I was looping through this df hour by hour, removing rows where Count=0 in that hour, then drawing a boxplot of Count for that hour. I ended up with 24 graphs.</p> <p>Can I put those 24 boxplots onto the same graph while looping? For example, getting an output df2 like below and using <code>plt.boxplot(df2)</code>, but I'm not sure whether the <code>NaN</code> will cause an error.</p> <pre><code> Hour=0 Hour=1 ... 0 15 55 1 17 53 2 18 51 3 12 Nan </code></pre> <p>Another thing is that after removing 0, each hour will have a different length of data in <code>Count</code>. How do I append this data and get a <code>df2</code> like the one above?</p> <p>You can use the code below for the original df:</p> <pre><code>df = pd.DataFrame({ 'Hour': {0:1, 1:1, 2:1, 3:1, 4:1, 5:1, 6:2, 7:2, 8:2, 9:2, 10:2, 11:2}, 'Count': {0:15, 1:0, 2:0, 3:17, 4:18, 5:12, 6:55, 7:0, 8:0, 9:0, 10:53, 11:51}}) </code></pre> <p>Here is the code for making the hourly boxplots (the sample frame only contains hours 1 and 2):</p> <pre><code>for i in [1, 2]:
    table1 = df[df['Hour'] == i]
    table2 = table1[table1['Count'] != 0]
    fig = plt.figure(1, figsize=(9, 6))
    plt.boxplot(table2['Count'])
    plt.show()
</code></pre>
<p>One option is to <code>pivot</code> the filtered DataFrame and plot the <code>boxplot</code>:</p> <pre><code>df.query('Count!=0').assign(i=lambda x: x.groupby('Hour').cumcount()).pivot('i', 'Hour', 'Count').boxplot(); </code></pre> <p><a href="https://i.stack.imgur.com/W5oKp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5oKp.png" alt="enter image description here" /></a></p>
python-3.x|pandas|loops|concatenation
1
9,376
71,885,701
How can I blur a mask and smooth its edges?
<p><strong>I want to change the color of the cheeks in this photo. I got a mask with sharp edges.</strong> <strong>How can I blur the mask and smooth its edges?</strong> <a href="https://i.stack.imgur.com/BB54C.png" rel="nofollow noreferrer">image</a> <a href="https://i.stack.imgur.com/pca8e.png" rel="nofollow noreferrer">mask</a></p>
<pre><code># import the necessary packages import argparse import cv2 # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument(&quot;-i&quot;, &quot;--image&quot;, type=str, default=&quot;pca8e.png&quot;, help=&quot;path to input image&quot;) args = vars(ap.parse_args()) # load the image, display it to our screen, and initialize a list of # kernel sizes (so we can evaluate the relationship between kernel # size and amount of blurring) image = cv2.imread(args[&quot;image&quot;]) cv2.imshow(&quot;Original&quot;, image) kernelSizes = [(41,41)] # loop over the kernel sizes for (kX, kY) in kernelSizes: # apply a &quot;Gaussian&quot; blur to the image blurred = cv2.GaussianBlur(image, (kX, kY), 0) cv2.imshow(&quot;Gaussian ({}, {})&quot;.format(kX, kY), blurred) cv2.waitKey(0) </code></pre> <p>Try this code to blur the mask image. But I don't know how you're going to use it with your image.</p>
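<p>To actually recolor the cheeks with smooth edges, a hedged sketch is to use the blurred mask as a per-pixel alpha map and blend a tinted image with the original (the file names and the BGR tint value below are hypothetical):</p> <pre><code>import cv2
import numpy as np

image = cv2.imread('image.png')                      # hypothetical file names
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# soften the mask edges, then normalize it to an alpha map in [0, 1]
alpha = cv2.GaussianBlur(mask, (41, 41), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]  # broadcast over the 3 color channels

tint = np.empty_like(image)
tint[:] = (180, 120, 255)  # hypothetical BGR cheek tint

blended = (alpha * tint + (1 - alpha) * image).astype(np.uint8)
cv2.imwrite('result.png', blended)
</code></pre>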
python|numpy|opencv|mediapipe
1
9,377
22,410,803
Why np.array([1e5])**2 is different from np.array([100000])**2 in Python?
<p>Can someone please explain why <code>np.array([1e5])**2</code> is not equivalent to <code>np.array([100000])**2</code>? Coming from Matlab, I found it confusing!</p> <pre><code>&gt;&gt;&gt; np.array([1e5])**2 array([ 1.00000000e+10]) # correct &gt;&gt;&gt; np.array([100000])**2 array([1410065408]) # Why?? </code></pre> <p>I found that this behaviour starts at <strong>1e5</strong>, as the code below gives the right result:</p> <pre><code>&gt;&gt;&gt; np.array([1e4])**2 array([ 1.00000000e+08]) # correct &gt;&gt;&gt; np.array([10000])**2 array([100000000]) # and still correct </code></pre>
<p><code>1e5</code> is a floating point number, but 10000 is an integer:</p> <pre><code>In [1]: import numpy as np In [2]: np.array([1e5]).dtype Out[2]: dtype('float64') In [3]: np.array([10000]).dtype Out[3]: dtype('int64') </code></pre> <p>But in numpy, integers have a fixed width (as opposed to python itself in which they are arbitrary length numbers), so they "roll over" when they get larger than the maximum allowed value.</p> <p>(Note that in your case you are using a 32-bit build, so in fact the latter would give you <code>dtype('int32')</code>, which has a maximum value 2**32-1=2,147,483,647, roughly 2e9, which is less than 1e10.)</p>
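<p>A sketch of the fix: request a wide enough dtype explicitly (or use floats) so the square fits:</p> <pre><code>import numpy as np

print(np.array([100000], dtype=np.int64) ** 2)    # [10000000000]
print(np.array([100000], dtype=np.float64) ** 2)  # [1.e+10]
</code></pre>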
python|arrays|numpy|exponent
7
9,378
55,319,623
Large dataset processing for Tensorflow Federated
<p>What is an efficient way to prepare ImageNet (or other big datasets) for Tensorflow Federated simulations? In particular, how should a custom <code>map</code> function be applied to a <code>tf.data.Dataset</code> object? I looked into the tutorials and docs but did not find anything helpful for this use case. This tutorial (<a href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2" rel="nofollow noreferrer">https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2</a>) shows MNIST processing, but that dataset is relatively small.</p>
<p>Could you please clarify what exactly you mean by "efficient" in this context? I presume you've tried something and it wasn't working as expected. Could you please describe here how you went about setting it up, and what problems you ran into? Thanks!</p> <p>One thing to note is that the runtime included in the first release will only work with datasets that fit in memory. Perhaps this is the limitation you are running into.</p>
tensorflow-federated
1
9,379
55,245,290
loop over a dataframe and populate with values
<p>I am trying to loop over a dataframe and fill a new column with values according to a rule:</p> <pre><code>#formula for trading strategy df['new_column'] = "" for index,row in df.iterrows(): if row.reversal == 1: row.new_column = 1 index += 126 row.new_column = -1 else: row.new_column = 0 </code></pre> <p>This formula is meant to populate the new column in a way that, when reversal=1, a value of 1 is given, followed by 0s for the next 125 rows, and a -1 in the 126th row. Then it should start again looking at whether the 127th item of the reversal column is 1 (indicating a reversal) or 0, etc. Instead, if reversal !=1, a value of 0 is given.</p> <p>The problem is that when I take a look at the new column formed, it is still an empty column. There must be an error in the way I input the values in it. I looked at other ways to construct if statements for dataframes (e.g., lambda), but they do not allow me to perform all the operations in this code</p>
<p><code>new_column</code> can take the values 0, 1 or -1. I suggest initializing the whole column with 0, so there is no need to set 0 explicitly:</p> <pre><code>df['new_column'] = 0
index = 0
while index &lt; df.shape[0]:
    if df['reversal'][index] == 1:
        df.loc[index, 'new_column'] = 1          # set 1 at this index
        df.loc[index + 126, 'new_column'] = -1   # set -1 on the 126th row after it
        index = index + 127                      # jump to the next candidate row
    else:
        index = index + 1
</code></pre> <p>Be careful that the value of <code>index</code> never grows beyond the number of rows of the dataframe.</p> <p>You can modify the test to guard the loop (and avoid an error message):</p> <pre><code> if df['reversal'][index] == 1 and (index + 126) &lt; df.shape[0]: </code></pre>
pandas
0
9,380
55,235,691
Pandas: concat multiple .csv files and return Dataframe with columns of the same name aggregated
<p>I have 100 csv files. Each file contains columns that may or may not be in the other .csv files. I need to merge all of the csv files into one and sum all columns that have the same column name. Below is an example with two csv files, but imagine it can go up to 100 csv files: </p> <p><strong>first csv file:</strong></p> <pre><code> User col1 col2 col3 col4 col5 ....colX A 1 1 1 2 6 5 B 4 5 6 7 23 6 C 4 6 1 2 4 4 </code></pre> <p><strong>second csv file</strong></p> <pre><code>User col1 col2 col3 col4 col5 ....colY A 1 1 5 3 2 3 B 20 4 3 9 6 4 C 2 1 4 3 4 1 </code></pre> <p><strong>Result DataFrame</strong></p> <pre><code>User col1 col2 col3 col4 col5 ....colX colY A 1+1 1+1 1+5 2+3 6+2 5 3 B 4+20 5+4 6+3 7+9 23+6 6 4 C 4+2 6+1 1+4 2+3 4+4 4 1 </code></pre> <p>I have tried doing the following to combine the csv, but columns are not aggregating. </p> <pre><code>csvArray = [] for x in range(1,101): csvArray.append(pd.read_csv("myCsv"+str(x)+".csv")) full_df = pd.concat(csvArray).fillna(0) </code></pre>
<p>You can create index by <code>User</code> column and use <code>sum</code> by first level:</p> <pre><code>csvArray = [] for x in range(1,101): csvArray.append(pd.read_csv("myCsv{}.csv".format(x), index_col=['User'])) </code></pre> <p>Or:</p> <pre><code>csvArray = [pd.read_csv("myCsv{}.csv".format(x), index_col=['User']) for x in range(1,101)] </code></pre> <hr> <pre><code>full_df = pd.concat(csvArray).fillna(0).sum(level=0).reset_index() </code></pre> <p>In your solution should aggregate by <code>User</code> column:</p> <pre><code>full_df = pd.concat(csvArray).fillna(0).groupby('User', as_index=False).sum() </code></pre>
python|pandas|csv|concat
5
9,381
55,150,829
Numpy memmap in-place sort of a large matrix by column
<p>I'd like to sort a matrix of shape <code>(N, 2)</code> on the first column where <code>N</code> >> system memory.</p> <p>With in-memory numpy you can do:</p> <pre><code>x = np.array([[2, 10],[1, 20]]) sortix = x[:,0].argsort() x = x[sortix] </code></pre> <p>But that appears to require that <code>x[:,0].argsort()</code> fit in memory, which won't work for memmap where <code>N</code> >> system memory (please correct me if this assumption is wrong).</p> <p>Can I achieve this sort in-place with numpy memmap?</p> <p>(assume heapsort is used for sorting and simple numeric data types are used)</p>
<p>The solution may be simple, using the order argument to an in-place <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.sort.html#numpy.ndarray.sort" rel="nofollow noreferrer">sort</a>. Of course, <code>order</code> requires field names, so those have to be added first:</p> <pre><code>d = x.dtype
x = x.view(dtype=[(str(i), d) for i in range(x.shape[-1])])
</code></pre> <p>which gives:</p> <pre><code>array([[(2, 10)],
       [(1, 20)]], dtype=[('0', '&lt;i8'), ('1', '&lt;i8')])
</code></pre> <p>The field names are strings, corresponding to the column indices. Sorting can be done in place with</p> <pre><code>x.sort(order='0', axis=0) </code></pre> <p>Then convert back to a regular array with the original datatype:</p> <pre><code>x.view(d)

array([[ 1, 20],
       [ 2, 10]])
</code></pre> <p>That should work, although you may need to change how the view is taken depending on how the data is stored on disk, see <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html" rel="nofollow noreferrer">the docs</a></p> <blockquote> <p>For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.</p> </blockquote>
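<p>An end-to-end sketch with an actual memmap (the file name and shape are hypothetical; <code>mode='r+'</code> assumes the file already exists with that size on disk):</p> <pre><code>import numpy as np

x = np.memmap('data.bin', dtype=np.int64, mode='r+', shape=(1000, 2))

d = x.dtype
v = x.view(dtype=[('0', d), ('1', d)])  # shape becomes (1000, 1)
v.sort(order='0', axis=0)               # in-place sort on the first column
x.flush()                               # push the changes back to disk
</code></pre>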
python|numpy|python-3.6|memmap
2
9,382
55,505,119
Conditional drop of identical pairs of columns in pandas
<p>I have a somewhat big pandas dataframe (100,000x9). The first two columns are a combination of names associated with a value (in both sides). I want to delete the lower value associated with a given combination.</p> <p>I haven't tried anything yet, because I'm not sure how to tackle this problem. My first impression is that I need to use the apply function over the data frame, but I need to select each combination of 'first' and 'second', compare them and then delete that row. </p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(np.array([['John','Mary',5],['John','Mark',1], ['Mary','John',2], ['Mary','Mark',1], ['Mark','John',3], ['Mark','Mary',5]]), columns=['first','second','third']) df first second third 0 John Mary 5 1 John Mark 1 2 Mary John 2 3 Mary Mark 1 4 Mark John 3 5 Mark Mary 5 </code></pre> <p>My objective is to get this data frame</p> <pre class="lang-py prettyprint-override"><code>df_clean = pd.DataFrame(np.array([['John','Mary',5], ['Mark','John',3], ['Mark','Mary',5]]), columns=['first','second','third']) df_clean first second third 0 John Mary 5 1 Mark John 3 2 Mark Mary 5 </code></pre> <p>Any ideas?</p>
<p>First we use <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.sort.html" rel="nofollow noreferrer"><code>np.sort</code></a> to sort horizontally, then we use <code>groupby</code> with <code>max</code> function to get the highest value per unique value of first, second:</p> <pre><code>df[['first', 'second']] = np.sort(df[['first', 'second']], axis=1) print(df.groupby(['first', 'second']).third.max().reset_index()) first second third 0 John Mark 3 1 John Mary 5 2 Mark Mary 5 </code></pre>
python|pandas|numpy
2
9,383
55,227,270
How to keep ordering when saving dict as pandas data frame?
<p>If I have some example data as:</p> <pre><code>dic = {'common': {'value': 18, 'attr': 20, 'param': 22}, 'fuzzy': {'value': 14, 'attr': 21, 'param': 24}, 'adhead': {'value': 13, 'attr': 20, 'param': 29}} </code></pre> <p>Executing <code>pd.DataFrame(dic)</code> I get:</p> <pre><code> common fuzzy adhead attr 20 21 20 param 22 24 29 value 18 14 13 </code></pre> <p>Here the 'external' columns are OK, but the 'internal' rows are sorted, which is what I need to avoid. How can I do this quickly? (I need to keep the ordering; here the rows get sorted.)</p> <p>Hint: I've got a message on the console:</p> <blockquote> <p><strong>main</strong>:95: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default.</p> <p>To accept the future behavior, pass 'sort=True'.</p> <p>To retain the current behavior and silence the warning, pass sort=False</p> </blockquote> <p>but I don't know what it refers to. Passing this argument as <code>pd.DataFrame(dic, sort=False)</code> returns: <code>TypeError: __init__() got an unexpected keyword argument 'sort'</code>.</p>
<p>Use <code>reindex</code> to restore the desired row order:</p> <pre><code>pd.DataFrame(dic).reindex(index=['value','attr','param']) Out[553]: common fuzzy adhead value 18 14 13 attr 20 21 20 param 22 24 29 </code></pre>
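<p>If you'd rather not hard-code the row order, a sketch is to take it from one of the inner dicts (plain-dict insertion order is only guaranteed on Python 3.7+):</p> <pre><code>order = list(next(iter(dic.values())))
df = pd.DataFrame(dic).reindex(index=order)
</code></pre>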
python|pandas
2
9,384
56,646,345
Is it possible to use pyspark to speed up regression analysis on each column of a very large size of an array?
<p>I have an array of very large size. I want to do linear regression on each column of the array. To speed up the calculation, I created a list with each column of the array as its element. I then employed pyspark to create an RDD and applied a user-defined function to it. I ran into memory problems when creating that RDD (i.e. during parallelization).</p> <p>I have tried increasing spark.driver.memory to 50g in spark-defaults.conf, but the program still seems dead.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from pyspark import SparkContext

sc = SparkContext("local", "get Linear Coefficients")

def getLinearCoefficients(column):
    y=column[~np.isnan(column)] # Extract column non-nan values
    x=np.where(~np.isnan(column))[0]+1 # Extract corresponding indexes plus 1
    # We only do linear regression interpolation when there are no less than 3 data pairs exist.
    if y.shape[0]&gt;=3:
        model=LinearRegression(fit_intercept=True) # Initialize linear regression model
        model.fit(x[:,np.newaxis],y) # Fit the model using data
        n=y.shape[0]
        slope=model.coef_[0]
        intercept=model.intercept_
        r2=r2_score(y,model.predict(x[:,np.newaxis]))
        rmse=np.sqrt(mean_squared_error(y,model.predict(x[:,np.newaxis])))
    else:
        n,slope,intercept,r2,rmse=np.nan,np.nan,np.nan,np.nan,np.nan
    return n,slope,intercept,r2,rmse

random_array=np.random.rand(300,2000*2000) # Here we use a random array without missing data for testing purpose.
columns=[col for col in random_array.T]
columnsRDD=sc.parallelize(columns)
columnsLinearRDD=columnsRDD.map(getLinearCoefficients)
n=np.array([e[0] for e in columnsLinearRDD.collect()])
slope=np.array([e[1] for e in columnsLinearRDD.collect()])
intercept=np.array([e[2] for e in columnsLinearRDD.collect()])
r2=np.array([e[3] for e in columnsLinearRDD.collect()])
rmse=np.array([e[4] for e in columnsLinearRDD.collect()])
</code></pre> <p>The program output stalled with the following:</p> <pre><code>Exception in thread "dispatcher-event-loop-0" java.lang.OutOfMemoryError at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123) at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117) at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93) at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153) at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41) at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877) at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43) at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100) at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:486) at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:467) at scala.Option.map(Option.scala:146) at org.apache.spark.scheduler.TaskSetManager.resourceOffer(TaskSetManager.scala:467) at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet$1.apply$mcVI$sp(TaskSchedulerImpl.scala:315) at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at org.apache.spark.scheduler.TaskSchedulerImpl.org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet(TaskSchedulerImpl.scala:310) at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4$$anonfun$apply$11.apply(TaskSchedulerImpl.scala:412) at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4$$anonfun$apply$11.apply(TaskSchedulerImpl.scala:409) at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4.apply(TaskSchedulerImpl.scala:409) at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4.apply(TaskSchedulerImpl.scala:396) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.TaskSchedulerImpl.resourceOffers(TaskSchedulerImpl.scala:396) at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:86) at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:64) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) </code></pre> <p>I guess it is possible to use pyspark to speed up the calculation, but how can I make it work? By modifying other parameters in spark-defaults.conf? Or by vectorizing each column of the array (I know the range() function in Python 3 works that way and is really fast)?</p>
<p>That is not going to work that way. You are basically doing three things:</p> <ol> <li>you are using a RDD for parallelization,</li> <li>you are calling your getLinearCoefficients() function and finally</li> <li>you call <a href="http://spark.apache.org/docs/2.2.1/api/python/pyspark.sql.html#pyspark.sql.DataFrame.collect" rel="nofollow noreferrer">collect()</a> on it to use your existing code.</li> </ol> <p>There is nothing wrong with the frist point, but there is a huge mistake in the second and third step. Your getLinearCoefficients() function does not benefit from pyspark, as you use numpy and sklearn (Have a look at this <a href="https://stackoverflow.com/q/38296609/6664872">post</a> for a better explanation). For most of the functions you are using, there is a pyspark equivalent. The problem with the third step is the collect() function. When you call collect(), pyspark is bringing all the rows of the RDD to the driver and executes the sklearn functions there. Therefore you get only the parallelization which is allowed by sklearn. Using pyspark is completely pointless in the way you are doing it currently and maybe even a drawback. Pyspark is not a framework which allows you to run your python code in parallel. When you want to execute your code in parallel with pyspark, you have to use the pyspark functions.</p> <p>So what can you? </p> <ul> <li>First of all you could use the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html" rel="nofollow noreferrer">n_jobs</a> parameter of the LinearRegession class to use more than one core for your calculation. This allows you at least to use all cores of one machine. </li> <li>Another thing you could do, is stepping away from sklearn and use the linearRegression of pyspark (have a look at the <a href="https://spark.apache.org/docs/2.2.0/ml-classification-regression.html#linear-regression" rel="nofollow noreferrer">guide</a> and the <a href="http://spark.apache.org/docs/2.2.1/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression" rel="nofollow noreferrer">api</a>). With this you can use a whole cluster for your linear regression.</li> </ul>
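<p>A hedged sketch of the second suggestion, using Spark ML's own linear regression (here <code>sdf</code> is an assumed Spark DataFrame with a feature column <code>x</code> and a label column <code>y</code>):</p> <pre><code>from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# pack the feature column(s) into the single vector column Spark ML expects
assembler = VectorAssembler(inputCols=['x'], outputCol='features')
train = assembler.transform(sdf).select('features', 'y')

lr = LinearRegression(featuresCol='features', labelCol='y')
model = lr.fit(train)
print(model.coefficients, model.intercept, model.summary.r2)
</code></pre>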
numpy|pyspark
1
9,385
56,522,831
Is there a more concise way to count rows in a group in pandas?
<p>Here is a data table:</p> <p><a href="https://i.stack.imgur.com/dgDr1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dgDr1.png" alt="enter image description here"></a></p> <p>To count the rows conditional on <code>Age == 'young'</code>, grouped by <code>Class</code>, I use this piece of code:</p> <pre><code>df.loc[(df['Age']=='young') &amp; (df['Class'] == 'Yes'),'Class'].count() df.loc[(df['Age']=='young') &amp; (df['Class'] == 'No'),'Class'].count() </code></pre> <p>which outputs:</p> <pre><code>2 3 </code></pre> <p>Is there a more concise way to get the number of rows (2 and 3)?</p>
<p>You can use:</p> <pre><code>print(df.groupby('Class').size()) </code></pre> <p>If you want only <code>'young'</code>:</p> <pre><code>print(df[df['Age'].eq('young')].groupby('Class').size()) </code></pre>
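<p>Another one-liner that gives both counts at once is <code>value_counts</code>:</p> <pre><code>print(df.loc[df['Age'].eq('young'), 'Class'].value_counts())
</code></pre>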
python|pandas
2
9,386
56,478,762
Pandas & Python: Is there a way to import the contents of an excel file, add text content to the end of the file, and save down?
<p>I have a list of names that are stored in an excel file. The user needs to be able to import that list, which is a single column, and add additional names to the list, and save down the file. </p> <p>I've imported the excel file using pandas and created a dataframe (df). I've tried to append the df using a loop function. </p> <pre><code>import numpy as np import pandas as pd path = 'C:\\NY_Operations\\EdV\\Streaman\\Python\\CES Fee Calc\\' file_main = 'Main.xlsx' df_main = pd.read_excel(path + file_main) while True: b = (input("Enter name of new deal to be added to 'Main' spreadsheet or type 'Exit' ")) df_main.append(b) if df_main [-1] == "Exit": df_main.pop() break </code></pre> <p>The spreadsheet has "Toy", "Color", "Ball" in A1, A2, and A3. The user should be prompted to add new deals and he/she adds "Watch" and "Belt" and then writes "Exit" and the loop ends. In the spreadsheet, A4 should show "Watch" and A5 should show "Belt" in the df and spreadsheet.</p>
<p>I was able to figure it out. Thanks for everyone's help. </p> <pre><code>import numpy as np import pandas as pd #create list of deals in the 'main' consolidation spreadsheet path = 'C:\\NY_Operations\\EdV\\Streaman\\Python\\CES Fee Calc\\' file_main = 'Main.xlsx' df_main = pd.read_excel(path + file_main) new_deals = [] #each entry is the name of the item purchased while True: g = (input("Enter name of item or exit ")) new_deals.append(g) if new_deals [-1] == "exit": new_deals.pop() break df_newdeals = pd.DataFrame({'Deal Name':new_deals}) df1 = pd.concat([df_main,df_newdeals]) print(df1) df1.to_excel(path + file_main) </code></pre>
python|pandas
1
9,387
56,650,337
apply if else condition using dataframe
<p>With the code below I can see the data; there is one row and two columns. I want to do a selection:</p> <ol> <li>if both columns are 0 then do something</li> <li>if both are greater than 0 then do something else.</li> </ol> <p>I am getting an error in the if condition. Can anyone please help me get this done?</p> <pre><code>from pyspark.sql import * import pandas as pd query = "(Select empID, empDept from employee)" df1 = spark.read.jdbc(url=url, table=query, properties=properties) df1.show() if df1[empID]==0 &amp;&amp; df1[empDept]==0: print("less than zero") elif df1[empID]&gt;0 &amp;&amp; df1[empDept]&gt;0: print("greather than 0") else print("do nothing") </code></pre>
<p>There are multiple syntactical errors in your script. Try the below-modified code.</p> <pre><code>import numpy as np if np.sum((df1["empID"]==0) &amp; (df1["empDept"]==0)): print("less than zero") elif np.sum((df1["empID"]&gt;0) &amp; (df1["empDept"]&gt;0)): print("greather than 0") else: print("do nothing") </code></pre> <p>Please note that any comparison on data frame columns( like df1["empID"]==0 ) would return a series of boolean values, so have to handle them as a series not a regular variable.</p> <p>df1:</p> <pre><code> empID empDept 0 1 1 </code></pre> <p>Output:</p> <pre><code>greather than 0 </code></pre>
python|sql-server|pandas|dataframe|databricks
0
9,388
56,853,946
Is there any alternative for pd.notna ( pandas 0.24.2). It is not working in pandas 0.20.1?
<p>"Code was developed in pandas=0.24.2, and I need to make the code work in pandas=0.20.1. What is the alternative for pd.notna as it is not working in pandas version 0.20.1. </p> <pre><code>df.loc[pd.notna(df["column_name"])].query(....).drop(....) </code></pre> <p>I need an alternative to pd.notna to fit in this line of code to work in pandas=0.20.1</p>
<p><code>pd.notna</code> was added in pandas 0.21.0 as an alias of the long-standing <code>pd.notnull</code>, which behaves identically and is available in pandas 0.20.1. Replace <code>pd.notna(df["column_name"])</code> with <code>pd.notnull(df["column_name"])</code> (or the method form <code>df["column_name"].notnull()</code>) and the rest of the chain works unchanged.</p>
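<p>A small runnable sketch of the substitution (the example frame here is hypothetical):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'column_name': [1.0, np.nan, 3.0], 'other': [4, 5, 6]})

# pd.notnull is the pre-0.21 spelling of pd.notna
print(df.loc[pd.notnull(df['column_name'])])
print(df.loc[df['column_name'].notnull()])  # equivalent method form
</code></pre>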
python-3.x|pandas
0
9,389
25,528,395
Python - Mask specific values of a grid
<p>I would like to mask values of a grid. As an example, I want to mask all values where <code>t &lt; 0</code> so I can do calculations afterwards. I tried to use a conditional if but it doesn't work...</p> <pre><code>import numpy as np Lx=10. Ly=10. x0 = 2 YA, XA = np.mgrid[0:Ly, 0:Lx] t = XA - 2 </code></pre>
<p>You need to explain what you want to do <em>after</em> you mask the array. Do you want to alter the unmasked values? Then </p> <pre><code>mask = t &lt; 0 YA[~mask] = ... </code></pre> <p>might be all you need.</p> <p>On the other hand, if you need to compute statistics on arrays with masked value, you may find using <a href="http://docs.scipy.org/doc/numpy/reference/routines.ma.html#masked-arrays-arithmetics" rel="nofollow">NumPy masked arrays</a> more convenient:</p> <pre><code>YA = np.ma.masked_array(YA, mask) </code></pre>
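<p>A small example of the masked-array route, reusing the grid from the question; statistics simply skip the masked cells:</p> <pre><code>import numpy as np

Lx, Ly = 10., 10.
YA, XA = np.mgrid[0:Ly, 0:Lx]
t = XA - 2

masked = np.ma.masked_array(XA, mask=t &lt; 0)
print(masked.mean(), masked.sum())
</code></pre>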
numpy|grid|mask
1
9,390
26,047,209
What is the difference between a pandas Series and a single-column DataFrame?
<p>Why does pandas make a distinction between a <code>Series</code> and a single-column <code>DataFrame</code>?<br> In other words: what is the reason for the existence of the <code>Series</code> class?</p> <p>I'm mainly using time series with a datetime index; maybe that helps to set the context.</p>
<p>Quoting the <a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.html" rel="noreferrer">Pandas docs</a></p> <blockquote> <p><code>pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)</code></p> <p>Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. <strong>Can be thought of as a dict-like container for Series objects.</strong> The primary pandas data structure.</p> </blockquote> <p>So, the <strong>Series is the data structure for a single column of a <code>DataFrame</code></strong>, not only conceptually, but literally, i.e. the data in a <code>DataFrame</code> is actually stored in memory as a collection of <code>Series</code>.</p> <p>Analogously: <em>We need both lists and matrices, because matrices are built with lists. Single row matrices, while equivalent to lists in functionality, still cannot exist without the list(s) they're composed of.</em></p> <p>They both have extremely similar APIs, but you'll find that <code>DataFrame</code> methods always cater to the possibility that you have more than one column. And, of course, you can always add another <code>Series</code> (or equivalent object) to a <code>DataFrame</code>, while adding a <code>Series</code> to another <code>Series</code> involves creating a <code>DataFrame</code>.</p>
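<p>The distinction is visible directly in indexing: selecting one column by label gives a <code>Series</code>, while selecting with a list of labels gives a <code>DataFrame</code>:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})

print(type(df['a']))    # &lt;class 'pandas.core.series.Series'&gt;
print(type(df[['a']]))  # &lt;class 'pandas.core.frame.DataFrame'&gt;
</code></pre>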
python|pandas
253
9,391
67,018,234
Tensorflow 2 - How to apply adapted TextVectorization to a text dataset
<h1>Question</h1> <p>Please help me understand the cause of the error when applying the adapted <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization" rel="nofollow noreferrer">TextVectorization</a> to a text Dataset.</p> <h1>Background</h1> <p><a href="https://keras.io/getting_started/intro_to_keras_for_engineers/#the-ideal-machine-learning-model-is-endtoend" rel="nofollow noreferrer">Introduction to Keras for Engineers</a> has a part that applies an adapted TextVectorization layer to a text dataset.</p> <pre><code>from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

training_data = np.array([[&quot;This is the 1st sample.&quot;], [&quot;And here's the 2nd sample.&quot;]])
vectorizer = TextVectorization(output_mode=&quot;int&quot;)
vectorizer.adapt(training_data)
integer_data = vectorizer(training_data)   # &lt;----- Apply the adapted TextVectorization
</code></pre> <h2>Problem</h2> <p>I try to do the same by first adapting a TextVectorization layer to the PTB text, then applying it to the Shakespeare text.</p> <h3>Adapt a TextVectorization to PTB</h3> <pre><code>f = &quot;ptb.train.txt&quot;
path_to_ptb = tf.keras.utils.get_file(
    str(pathlib.Path().absolute()) + '/' + f,
    f'https://raw.githubusercontent.com/tomsercu/lstm/master/data/{f}'
)

ptb_ds = tf.data.TextLineDataset(
    filenames=path_to_ptb,
    compression_type=None,
    buffer_size=None,
    num_parallel_reads=True
)\
.filter(lambda x: tf.cast(tf.strings.length(x), bool))\
.shuffle(10000)

from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
vectorizer = TextVectorization(output_mode=&quot;int&quot;, ngrams=None)
vectorizer.adapt(ptb_ds)
</code></pre> <h3>Applying the TextVectorization layer to the Shakespeare text gives an error</h3> <pre><code>path_to_shakespeare = tf.keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt'
)
shakespeare_ds = tf.data.TextLineDataset(path_to_shakespeare)\
    .filter(lambda x: tf.cast(tf.strings.length(x), bool))

shakespeare_vector_ds =\
    vectorizer(shakespeare_ds.batch(128).prefetch(tf.data.AUTOTUNE)) &lt;----- Error
</code></pre> <h2>Error</h2> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-48-216442e69438&gt; in &lt;module&gt; ----&gt; 1 shakespeare_vector_ds = vectorizer(shakespeare_ds.batch(128).prefetch(tf.data.AUTOTUNE)) ... ValueError: Attempt to convert a value (&lt;PrefetchDataset shapes: (None,), types: tf.string&gt;) with an unsupported type (&lt;class 'tensorflow.python.data.ops.dataset_ops.PrefetchDataset'&gt;) to a Tensor. </code></pre> <h1>Solution</h1> <p>This works, but it is not clear why the code above causes the error, although it seems to be doing the same thing.</p> <ul> <li><a href="https://www.tensorflow.org/tutorials/text/word2vec" rel="nofollow noreferrer">Tensorflow Word2Vec tutorial</a></li> </ul> <pre><code>shakespeare_vector_ds =\
    shakespeare_ds.batch(1024).prefetch(tf.data.AUTOTUNE).map(vectorizer).unbatch()
</code></pre>
<p><code>tf.data.Dataset.map</code> applies a function to each element (a Tensor) of a dataset. The <code>__call__</code> method of the <code>TextVectorization</code> object expects a <code>Tensor</code>, not a <code>tf.data.Dataset</code> object. Whenever you want to apply a function to the elements of a <code>tf.data.Dataset</code>, you should use <code>map</code>.</p>
tensorflow|word-embedding
1
9,392
67,088,524
Python Pandas xlsxriter stacked line chart type setting standard line chart in Excel
<p>I am trying to create stacked line charts in Excel using pandas and xlsxwriter.</p> <p>When passing the chart type dict into <code>add_chart</code> I use what the documentation states should configure a stacked line chart in Excel. (So I think, anyway!) When I open Excel I get a standard line chart, which means I need to manually change it to a stacked line chart (the second box across)</p> <p><a href="https://i.stack.imgur.com/noC7A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/noC7A.png" alt="enter image description here" /></a></p> <p>Here's some example code:</p> <pre><code>import pandas as pd

# example csv data
'''
Date-Time col1 col2
2021-03-01 00:00:00 34329 34540
2021-03-01 00:15:00 34174 34369
2021-03-01 00:30:00 34121 34418
2021-03-01 00:45:00 34012 34235
2021-03-01 01:00:00 33959 34273
2021-03-01 01:15:00 33825 34049
2021-03-01 01:30:00 33782 34010
2021-03-01 01:45:00 33584 33882
2021-03-01 02:00:00 33601 33905
2021-03-01 02:15:00 33415 33746
2021-03-01 02:30:00 33412 33827
2021-03-01 02:45:00 33291 33744
2021-03-01 03:00:00 33329 33816
2021-03-01 03:15:00 33209 33745
2021-03-01 03:30:00 33219 33833
'''

# create df
with open('example.csv', 'r') as csv:
    df = pd.read_csv(csv).set_index('Date-Time')

# Create the workbook to save the data within
workbook = pd.ExcelWriter('example.xlsx', engine='xlsxwriter')

# Create sheets in excel for data
pd.DataFrame().to_excel(workbook, sheet_name='Dashboard')  # assign a blank dashboard for the charts
worksheet_dashboard = workbook.sheets['Dashboard']

# Get the xlsxwriter objects from the dataframe writer object
# for use with creating charts later
book = workbook.book

# Create sheets in excel for data
df.to_excel(workbook, sheet_name='example')

# Add the line chart objects
chart = book.add_chart({'type': 'line', 'subtype': 'stacked'})

# Configure the first series.
chart.add_series({
    'name': '=example!$B$1',
    'categories': '=example!$A$1:$A$16',
    'values': '=example!$B$2:$B$16',
})

chart.add_series({
    'name': '=example!$C$1',
    'categories': '=example!$A$1:$A$16',
    'values': '=example!$C$2:$C$16',
})

# Insert the chart into the worksheet
worksheet_dashboard.insert_chart('A1', chart)
workbook.save()
</code></pre> <p>You can see that the line</p> <pre><code># Add the line chart objects chart = book.add_chart({'type': 'line', 'subtype': 'stacked'}) </code></pre> <p>should set the type to line and the subtype to stacked. Has anyone encountered this problem before?</p> <p>As a note, the versions I'm using are:</p> <ul> <li>Microsoft Office 365 Excel version 2008.</li> <li><code>print(pd.__version__)</code> outputs: 0.23.0</li> </ul>
<p>The code should work as expected. Here is the output when I run it:</p> <p><a href="https://i.stack.imgur.com/UUHLx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUHLx.png" alt="enter image description here" /></a></p> <p>Note, stacked line support was added in <a href="https://xlsxwriter.readthedocs.io/changes.html" rel="nofollow noreferrer">XlsxWriter version 1.2.9</a> so make sure you have that version, or newer.</p>
python|excel|pandas|xlsxwriter
2
9,393
66,866,690
TypeError: ("x() got an unexpected keyword argument 'result_type'", 'occurred at index 1'), pandas 0.23.4
<p>I am getting a TypeError when using the <code>result_type</code> keyword while applying a function to a dataframe in order to add 2 columns.</p> <p>I can see the usual cause of this is the pandas version, but I have 0.23.4 running and am still receiving this error.</p> <pre><code>def parse_allocation(x):
    direction = {'En':'entry', 'Ex':'exit'}
    point = x['Name'].split(' ')
    if len(point) &gt; 4:
        curvename = 'Allocation.' + point[2] + ' ' + point[3] + '.' + direction[point[1]]
    else:
        curvename = 'Allocation.' + point[2] + '.' + direction[point[1]]
    ent_ex = direction[point[1]]
    return curvename, ent_ex

df_allocation[['curvename', 'DirectionKey']] = df_allocation.apply(parse_allocation, axis=1, result_type='expand')
</code></pre> <p>This works fine on my local env, which also uses <code>pandas 0.23.4</code>, which seems odd.</p> <p>Any ideas?</p> <p>Error:</p> <pre><code>TypeError: (&quot;parse_allocation() got an unexpected keyword argument 'result_type'&quot;, 'occurred at index 1') </code></pre> <p>thanks</p>
<p>The error occurs because pandas treats the parameter <code>result_type='expand'</code> as a parameter for the custom function <code>parse_allocation</code> rather than as a parameter of the <code>apply()</code> function. You can solve it by using a lambda function as follows:</p> <pre><code>df_allocation[['curvename', 'DirectionKey']] = df_allocation.apply(lambda x: parse_allocation(x), axis=1, result_type='expand') </code></pre>
python|pandas|dataframe
0
9,394
66,840,550
How to convert Python JSON to API call string from dataframe
<p>My script reads parameters from an Excel file using <em>pandas</em>.</p> <ol> <li>The code first reads from an excel file and creates a <code>DataFrame</code>, <code>df</code>.</li> <li>The <code>df</code> is filtered on <code>df['Team'] == 'IT'</code>.</li> <li>Filtered <code>df</code> is converted to JSON to get all parameters and values.</li> </ol> <p><strong>This is the code:</strong></p> <pre><code>import pandas as pd
import json

loc = &quot;excel.xlsx&quot;
df = pd.read_excel(loc)

# &quot;Team&quot; is filtered by 'IT'
rslt_df = df[df[&quot;Team&quot;] == 'IT']

# Encoding a DataFrame using &quot;columns&quot; formatted JSON
result = rslt_df.to_json(orient=&quot;columns&quot;)

# Parse json results
parsed = json.loads(result)
</code></pre> <p><strong>The <code>df</code> output filtered on <code>&quot;Team&quot;</code> using <code>&quot;IT&quot;</code>:</strong></p> <pre><code> StartDate EndDate startTime endTime Team RotationID Users 0 2021-04-01 2021-04-02 17:00:00 01:00:00 IT 081435f john@dotdansh.io 2 2021-04-02 2021-04-03 17:00:00 01:00:00 IT 081435f paul@dotdansh.io 4 2021-04-03 2021-04-04 17:00:00 01:00:00 IT 081435f danny@dotdansh.io 6 2021-04-04 2021-04-05 17:00:00 01:00:00 IT 081435f ben@dotdansh.io </code></pre> <p><strong>JSON output from filtered <code>df</code> using the <code>'columns'</code> data structure:</strong></p> <pre><code>{ &quot;StartDate&quot;: {&quot;0&quot;: &quot;2021-04-01&quot;, &quot;2&quot;: &quot;2021-04-02&quot;, &quot;4&quot;: &quot;2021-04-03&quot;, &quot;6&quot;: &quot;2021-04-04&quot;}, &quot;EndDate&quot;: {&quot;0&quot;: &quot;2021-04-02&quot;, &quot;2&quot;: &quot;2021-04-03&quot;, &quot;4&quot;: &quot;2021-04-04&quot;, &quot;6&quot;: &quot;2021-04-05&quot;}, &quot;startTime&quot;: {&quot;0&quot;: &quot;17:00:00&quot;, &quot;2&quot;: &quot;17:00:00&quot;, &quot;4&quot;: &quot;17:00:00&quot;, &quot;6&quot;: &quot;17:00:00&quot;}, &quot;endTime&quot;: {&quot;0&quot;: &quot;01:00:00&quot;, &quot;2&quot;: &quot;01:00:00&quot;, &quot;4&quot;: &quot;01:00:00&quot;, &quot;6&quot;: &quot;01:00:00&quot;}, &quot;Team&quot;: {&quot;0&quot;: &quot;IT&quot;, &quot;2&quot;: &quot;IT&quot;, &quot;4&quot;: &quot;IT&quot;, &quot;6&quot;: &quot;IT&quot;}, &quot;RotationID&quot;: {&quot;0&quot;: &quot;081435f&quot;, &quot;2&quot;: &quot;081435f&quot;, &quot;4&quot;: &quot;081435f&quot;, &quot;6&quot;: &quot;081435f&quot;}, &quot;User&quot;: {&quot;0&quot;: &quot;john@dotdansh.io&quot;, &quot;2&quot;: &quot;paul@dotdansh.io&quot;, &quot;4&quot;: &quot;danny@dotdansh.io&quot;, &quot;6&quot;: &quot;ben@dotdansh.io&quot;} } </code></pre> <p>I need to create an API <code>PATCH</code> request with the parameters received from the JSON above. Note that the values can differ, and the number of values as well.</p> <p><strong>This is the data string of the API patch call with the required parameters (the angle-bracket tokens are placeholders to be filled from the JSON above):</strong></p> <pre><code>import requests

headers = {
    'Authorization': 'Key &lt;myKey&gt;',
    'Content-Type': 'application/json',
}

# list of users goes under &quot;participants&quot;
data = '''{
    &quot;name&quot;: &quot;&lt;Team&gt;&quot;,
    &quot;startDate&quot;: &quot;&lt;first_startDate&gt;T&lt;first_startTime&gt;Z&quot;,
    &quot;endDate&quot;: &quot;&lt;last_endDate&gt;T&lt;last_endTime&gt;Z&quot;,
    &quot;type&quot;: &quot;daily&quot;,
    &quot;length&quot;: 1,
    &quot;participants&quot;: [
        {&quot;type&quot;: &quot;user&quot;, &quot;username&quot;: &quot;john@dotdansh.io&quot;},
        {&quot;type&quot;: &quot;user&quot;, &quot;username&quot;: &quot;paul@dotdansh.io&quot;}
    ]
}'''

response = requests.patch('https://example.com', headers=headers, data=data)
</code></pre>
<p>I suggest using groupby to lump the users together, and then orientation <code>'records'</code> instead of <code>'columns'</code>:</p> <pre><code>df = df.groupby(['Team', 'StartDate', 'EndDate', 'RotationID']).agg(tuple).reset_index()

for record in df.to_dict(orient='records'):
    payload = {
        &quot;name&quot;: record['Team'],
        &quot;startDate&quot;: record['StartDate'],
        &quot;endDate&quot;: record['EndDate'],
        &quot;rotationid&quot;: record['RotationID'],
        &quot;participants&quot;: [{
            &quot;type&quot;: &quot;user&quot;,
            &quot;username&quot;: user
        } for user in record['User']]
    }
    requests.patch('https://example.com/endpoint', json=payload)
</code></pre> <p><code>groupby()</code> creates an object on which you can run an aggregation, like <code>sum()</code> or <code>mean()</code>. But in this case we want all the results, so we &quot;aggregate&quot; using the <code>tuple</code> constructor. That means the users end up in a <code>tuple</code> per group. Because groupby moves the grouping keys into the index, I reset the index at the end so that the <code>to_dict()</code> function returns them too. Note the use of <code>json=payload</code> rather than <code>data=payload</code>, so that the nested dict is serialized as JSON instead of form-encoded. Your example data didn't have multiple users in one group, so I added one in my run and got the following payloads:</p> <pre><code>{'name': 'IT', 'startDate': '2021-04-01', 'endDate': '2021-04-02', 'rotationid': '081435f', 'participants': [{'type': 'user', 'username': 'john@dotdansh.io'}]}
{'name': 'IT', 'startDate': '2021-04-02', 'endDate': '2021-04-03', 'rotationid': '081435f', 'participants': [{'type': 'user', 'username': 'paul@dotdansh.io'}]}
{'name': 'IT', 'startDate': '2021-04-03', 'endDate': '2021-04-04', 'rotationid': '081435f', 'participants': [{'type': 'user', 'username': 'danny@dotdansh.io'}]}
{'name': 'IT', 'startDate': '2021-04-04', 'endDate': '2021-04-05', 'rotationid': '081435f', 'participants': [{'type': 'user', 'username': 'ben@dotdansh.io'}, {'type': 'user', 'username': 'other@dotdansh.io'}]}
</code></pre>
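<p>If the payload really needs the <code>&lt;date&gt;T&lt;time&gt;Z</code> timestamps from the question's template, the columns can be combined before the <code>groupby</code> above. This is a hedged sketch — it assumes the date and time columns hold plain strings, as in the sample data:</p> <pre><code># Build &quot;2021-04-01T17:00:00Z&quot;-style strings from the separate
# date and time columns; run this before the groupby.
df['StartDate'] = df['StartDate'] + 'T' + df['startTime'] + 'Z'
df['EndDate'] = df['EndDate'] + 'T' + df['endTime'] + 'Z'
</code></pre>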
python|json|python-3.x|pandas|python-requests
0
9,395
66,850,493
Using a Nested For Loop to create a new column in Pandas
<p>I am trying to create a new column using information from a list and also an existing column. I created a nested for loop to go through the existing column and the list simultaneously, checking whether the information in the list is the same as that of the existing column; if yes, it should perform one task, if no, another.</p> <p>But I keep getting the error below:</p> <pre><code>ValueError: Length of values (2636) does not match length of index (659)
</code></pre> <p>The <code>if</code> statement works fine, but the <code>else</code> branch is what generates the extra values and causes the length mismatch.</p> <p>Below is my code, please, I need help with this.</p> <pre><code>directcustomers = [&quot;Old Mutual Abuja Branch Direct&quot;,&quot;Old Mutual Lagos Branch Direct&quot;,&quot;Old Mutual Rivers Branch Direct&quot;,&quot;Old Mutual Ibadan Branch Direct&quot;]

direct_broker = []

for agent in monthinviewlist.AGENT_NAME:
    for name in directcustomers:
        if agent == name:
            direct_broker.append(&quot;Direct&quot;)
        else:
            direct_broker.append(&quot;Broker&quot;)

monthinviewlist[&quot;Direct/Broker&quot;] = direct_broker
</code></pre> <p>Thanks.</p>
<p>Let us say we have the following parameters:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; monthinviewlist
                        AGENT_NAME
0   Old Mutual Abuja Branch Direct
1   Old Mutual Lagos Branch Direct
2  Old Mutual Rivers Branch Direct
3  Old Mutual Ibadan Branch Direct
4                           OtherA
5                           OtherB
6                           OtherC
&gt;&gt;&gt; directcustomers
[&quot;Old Mutual Abuja Branch Direct&quot;, &quot;Old Mutual Lagos Branch Direct&quot;, &quot;Old Mutual Rivers Branch Direct&quot;, &quot;Old Mutual Ibadan Branch Direct&quot;]
</code></pre> <p>In that case, your current nested loops output something like:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; direct_broker
[&quot;Direct&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Direct&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Direct&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Direct&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;, &quot;Broker&quot;]
&gt;&gt;&gt; len(direct_broker)
28
</code></pre> <p>Here we have <code>len(direct_broker) == 28 == len(monthinviewlist) * len(directcustomers)</code>. The problem arises when you try to copy the content of this 28-element list into a 7-row table column. If the intention was to duplicate your values, then you need to copy the agent names multiple times as well:</p> <pre class="lang-py prettyprint-override"><code>import pandas

ext_monthinviewlist = pandas.DataFrame(columns=[&quot;AGENT_NAME&quot;, &quot;Direct/Broker&quot;])

ext_agent_names = []
direct_or_broker = []

for agent in monthinviewlist.AGENT_NAME:
    ext_agent_names += [agent for _ in range(len(directcustomers))]  # repeat the same name 4 times
    for name in directcustomers:  # append &quot;Broker&quot; or &quot;Direct&quot; 4 times
        if agent == name:
            direct_or_broker.append(&quot;Direct&quot;)
        else:
            direct_or_broker.append(&quot;Broker&quot;)

ext_monthinviewlist[&quot;AGENT_NAME&quot;] = ext_agent_names
ext_monthinviewlist[&quot;Direct/Broker&quot;] = direct_or_broker
</code></pre> <p>That will output:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; ext_monthinviewlist
                         AGENT_NAME Direct/Broker
0    Old Mutual Abuja Branch Direct        Direct
1    Old Mutual Abuja Branch Direct        Broker
2    Old Mutual Abuja Branch Direct        Broker
3    Old Mutual Abuja Branch Direct        Broker
4    Old Mutual Lagos Branch Direct        Broker
5    Old Mutual Lagos Branch Direct        Direct
6    Old Mutual Lagos Branch Direct        Broker
7    Old Mutual Lagos Branch Direct        Broker
8   Old Mutual Rivers Branch Direct        Broker
9   Old Mutual Rivers Branch Direct        Broker
10  Old Mutual Rivers Branch Direct        Direct
11  Old Mutual Rivers Branch Direct        Broker
12  Old Mutual Ibadan Branch Direct        Broker
13  Old Mutual Ibadan Branch Direct        Broker
14  Old Mutual Ibadan Branch Direct        Broker
15  Old Mutual Ibadan Branch Direct        Direct
16                           OtherA        Broker
17                           OtherA        Broker
18                           OtherA        Broker
19                           OtherA        Broker
20                           OtherB        Broker
21                           OtherB        Broker
22                           OtherB        Broker
23                           OtherB        Broker
24                           OtherC        Broker
25                           OtherC        Broker
26                           OtherC        Broker
27                           OtherC        Broker
</code></pre>
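<p>If instead the intention was a single label per agent — which the original error message suggests — a vectorized membership test avoids the nested loops entirely. This is a hedged sketch of that alternative:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

# One label per row: &quot;Direct&quot; if the agent is in the list, else &quot;Broker&quot;
monthinviewlist[&quot;Direct/Broker&quot;] = np.where(
    monthinviewlist[&quot;AGENT_NAME&quot;].isin(directcustomers), &quot;Direct&quot;, &quot;Broker&quot;
)
</code></pre>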
python|pandas|list|loops
0
9,396
66,842,767
Pandas: How to select rows by multiple criteria (if not NaN and equal to a specific value)
<p>I have a DataFrame</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">index</th> <th style="text-align: center;">Street</th> <th style="text-align: center;">House</th> <th style="text-align: right;">Building</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: center;">ABC</td> <td style="text-align: center;">20</td> <td style="text-align: right;">a</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: center;">ABC</td> <td style="text-align: center;">20</td> <td style="text-align: right;">b</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: center;">ABC</td> <td style="text-align: center;">21</td> <td style="text-align: right;">NaN</td> </tr> <tr> <td style="text-align: left;">4</td> <td style="text-align: center;">BCD</td> <td style="text-align: center;">2</td> <td style="text-align: right;">1</td> </tr> </tbody> </table> </div> <p>I need to create a multiple selection from this DataFrame:</p> <ul> <li><em>If Street == str_filter</em>;</li> <li><em>If House == house_filter</em>;</li> <li><em>If Building == build_filter</em> AND <em>If Building is not NULL</em>;</li> </ul> <p>I've already tried <code>df[(df['Street'] == str_filter) &amp; (df['House'] == house_filter) &amp; ((df['Building'] == build_filter) &amp; (pd.notnull(df['Building'])))]</code></p> <p>But this doesn't produce the result I want. I have to check whether the Building value is not NaN and, if so, select the row with that specific Building number. However, I also want to select a row if it has a NaN value for Building but meets the other criteria.</p> <p>Another idea was to create lists for the set of filter values and the subset of those values meeting the pd.notnull criterion:</p> <pre><code>filter_values = [str_filter, house_filter, build_filter]
notnull_values = [pd.notnull(entry) for entry in filter_values]
</code></pre> <p>This one doesn't meet the performance criteria, because the DataFrame is extremely large and creating additional lists with additional filtering passes would be too slow. A possible solution may lie in <code>df.loc</code>, but I don't know how to realise it.</p> <p>To summarise, the problem is the following: <strong>How do I create a multiple selection in pandas with conditions for NaN values?</strong></p> <p>UPD: It seems that the expression I want is something like <code>df[... &amp; (df['Building'] == 'a' if pd.notnull(df['Building']))]</code>, by analogy with the lambda/apply trick.</p>
<p>A rather simple way to do it would be this:</p> <pre><code>if df['Building'].isnull().any():
    result = df[(df['Street'] == str_filter)
                &amp; (df['House'] == house_filter)
                &amp; df['Building'].isnull()]
else:
    result = df[(df['Street'] == str_filter)
                &amp; (df['House'] == house_filter)
                &amp; (df['Building'] == build_filter)]
</code></pre> <p>Combining the conditions into a single boolean mask with <code>&amp;</code> avoids the reindexing warnings that chained <code>df[...][...]</code> indexing triggers. I'm not sure if this would satisfy your performance requirements?</p>
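<p>If the goal is instead a single selection in which a row qualifies either when <code>Building</code> matches the filter or when it is NaN — as the question describes — a hedged one-mask sketch would be:</p> <pre><code>mask = ((df['Street'] == str_filter)
        &amp; (df['House'] == house_filter)
        &amp; (df['Building'].isna() | (df['Building'] == build_filter)))
result = df[mask]
</code></pre>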
python|pandas|selection|nan
0
9,397
66,804,306
Why does the loop save only results from the last file in pandas
<p>I am using a loop to open consecutive files and then a second loop to calculate the average of y at specific row numbers (x). Why does the second loop show the average only for the last file? I would like to append the average from each file into one new dataframe.</p> <pre><code>path = '...../'

for file in os.listdir(path):
    if file.endswith('.txt'):
        with open(os.path.join(path, file)) as f:
            df = pd.read_csv(f, sep=&quot;\t&quot;, header=0, usecols=[0,11])
            df.columns = [&quot;x&quot;, &quot;y&quot;]

average_PAR = []
list = []

for (x, y) in df.iteritems():
    average_PAR = sum(y.iloc[49:350]) / len(y.iloc[49:350])
    list.append(average_PAR)
    print(list)
</code></pre> <p>Thank you!</p>
<p>Your main issue is with indentation and the fact that you're not saving <code>df</code> to a dictionary or list.</p> <p>Additionally, you're first opening the file and then passing it to pandas; there is no need for this, as pandas handles <code>I/O</code> for you.</p> <p>A simplified version of your code would be:</p> <pre><code>from pathlib import Path
import pandas as pd

dfs = {f.stem: pd.read_csv(f, sep=&quot;\t&quot;, header=0, usecols=[0,11])
       for f in Path('.../').glob('*.txt')}

for each_csv, dataframe in dfs.items():
    dataframe.iloc[49:350]  # do stuff.
</code></pre>
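<p>To make the &quot;do stuff&quot; step concrete for this question, here is a hedged sketch — the column names and the output layout are assumptions — that computes the mean of <code>y</code> over rows 49–349 for each file and collects everything into one dataframe:</p> <pre><code>from pathlib import Path
import pandas as pd

averages = {}
for f in Path('.../').glob('*.txt'):
    df = pd.read_csv(f, sep='\t', header=0, usecols=[0, 11])
    df.columns = ['x', 'y']
    # mean of y over the row range of interest
    averages[f.stem] = df['y'].iloc[49:350].mean()

# one row per file, with its average value
result = pd.DataFrame(list(averages.items()), columns=['file', 'average_PAR'])
print(result)
</code></pre>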
pandas|loops|iteritems
0
9,398
10,943,088
numpy.max or max? Which one is faster?
<p>In Python, which one is faster?</p> <pre><code>numpy.max(), numpy.min()
</code></pre> <p>or</p> <pre><code>max(), min()
</code></pre> <p>My list/array length varies from 2 to 600. Which one should I use to save some run time?</p>
<p>Well, from my timings it follows that if you already have a NumPy array <code>a</code>, you should use <code>a.max()</code> (the source shows <code>np.max</code> simply delegates to <code>a.max</code> when it is available). But if you have a built-in list, then most of the time is spent <em>converting</em> it into an <code>np.ndarray</code> — that's why <code>max</code> is better in your timings.</p> <p>In essence: if you have an <code>np.ndarray</code>, use <code>a.max</code>; if you have a <code>list</code> and no need for all the machinery of <code>np.ndarray</code>, use the standard <code>max</code>.</p>
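<p>As a rough, hedged illustration of how one might reproduce such timings (absolute numbers will vary by machine and NumPy version):</p> <pre><code>import timeit
import numpy as np

lst = list(range(600))
arr = np.array(lst)

# builtin max on a plain list: no conversion overhead
print(timeit.timeit(lambda: max(lst), number=10000))
# np.max on a plain list: pays for the list -&gt; ndarray conversion
print(timeit.timeit(lambda: np.max(lst), number=10000))
# method call on an existing array: the fast path
print(timeit.timeit(lambda: arr.max(), number=10000))
</code></pre>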
python|numpy|runtime|max|min
57
9,399
68,413,687
batched tensor slice, slice B x M x N with B x 1
<p>I have a B x M x N tensor, X, and a B x 1 tensor, Y, which corresponds to the index of tensor X at dimension=1 that I want to keep. What is the shorthand for this slice so that I can avoid a loop?</p> <p>Essentially I want to do this:</p> <pre><code>Z = torch.zeros(B, N)
for i in range(B):
    Z[i] = X[i][Y[i]]
</code></pre>
<p>The following code is similar to the code in the loop. The difference is that instead of sequentially indexing the arrays <code>Z</code>, <code>X</code> and <code>Y</code>, we index them in parallel using the index array <code>i</code>:</p> <pre><code>import numpy as np

B, M, N = 13, 7, 19
X = np.random.randint(100, size=[B, M, N])
Y = np.random.randint(M,   size=[B, 1])
Z = np.random.randint(100, size=[B, N])

i = np.arange(B)
Y = Y.ravel()  # reducing the array to rank 1, for easy indexing
Z[i] = X[i, Y[i], :]
</code></pre> <p>This code can be further simplified as:</p> <pre><code>-&gt; Z[i] = X[i, Y[i], :]
-&gt; Z[i] = X[i, Y[i]]
-&gt; Z[i] = X[i, Y]
-&gt; Z = X[i, Y]
</code></pre> <p>PyTorch equivalent code:</p> <pre><code>import torch

B, M, N = 5, 7, 3
X = torch.randint(100, size=[B, M, N])
Y = torch.randint(M,   size=[B, 1])
Z = torch.randint(100, size=[B, N])

i = torch.arange(B)
Y = Y.ravel()
Z = X[i, Y]
</code></pre>
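<p>As a hedged alternative sketch, the same batched selection can be written with <code>torch.gather</code>, which avoids building the auxiliary index array (this assumes <code>Y</code> still has shape <code>B x 1</code>):</p> <pre><code>import torch

B, M, N = 5, 7, 3
X = torch.randint(100, size=[B, M, N])
Y = torch.randint(M,   size=[B, 1])

# Expand Y to shape (B, 1, N): along dim=1, every position of the last
# axis gathers from the same row index Y[b]; then drop the middle dim.
Z = X.gather(1, Y.unsqueeze(-1).expand(B, 1, N)).squeeze(1)
</code></pre>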
numpy|pytorch|slice|tensor|numpy-slicing
3