| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
375,800
| 62,610,336
|
Python: Implement a column with the day of the week based on the "Year", "Month", "Day" columns?
|
<p><a href="https://i.stack.imgur.com/psmTm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psmTm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/3LnpC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3LnpC.png" alt="enter image description here" /></a></p>
<p>I am trying to create a new column with the day of the week:</p>
<pre><code>df2019['Weekday']=pd.to_datetime(df2019['Year'],df2019['Month'],df2019['Day']).weekday()
</code></pre>
<p>And I get the following error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Thanks!</p>
|
<p>You can do something like this:</p>
<pre><code>from datetime import datetime

def get_weekday(row):
    date_str = "{}-{}-{}".format(row["Year"], row["Month"], row["Day"])
    date = datetime.strptime(date_str, '%Y-%m-%d')
    return date.weekday()

df2019["weekday"] = df2019.apply(get_weekday, axis=1)
</code></pre>
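<p>As a side note, a vectorized sketch that avoids the row-wise <code>apply</code> (assuming the three columns hold integer year/month/day values; <code>pd.to_datetime</code> can assemble datetimes from a DataFrame with year/month/day columns):</p>
<pre><code>import pandas as pd

# Assemble datetimes from the three columns, then take the weekday (0 = Monday).
df2019["Weekday"] = pd.to_datetime(df2019[["Year", "Month", "Day"]]).dt.weekday
</code></pre>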
|
python|pandas|dataframe|date|weekday
| 3
|
375,801
| 62,658,750
|
missing values in pandas column multiindex
|
<p>I am reading with pandas excel sheets like this one:</p>
<p><a href="https://i.stack.imgur.com/N5cAI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N5cAI.png" alt="enter image description here" /></a></p>
<p>using</p>
<pre><code>df = pd.read_excel('./question.xlsx', sheet_name = None, header = [0,1])
</code></pre>
<p>which results in a dataframe with a column MultiIndex.</p>
<p><a href="https://i.stack.imgur.com/3pgrF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3pgrF.png" alt="enter image description here" /></a></p>
<p>What poses a problem here is that the empty fields are filled by default with <code>'Title'</code>, whereas I would prefer to use a distinct label. I cannot skip the first row since I am dealing with bigger data frames where the first and the second rows contain repeating labels (hence the use of the multiindex).</p>
<p>Your help will be much appreciated.</p>
|
<p>Assuming that you want to have empty strings instead of repeating the first label, you can read the 2 lines and build the MultiIndex directly:</p>
<pre><code>df1 = pd.read_excel('./question.xlsx', header = None, nrows=2).fillna('')
index = pd.MultiIndex.from_arrays(df1.values)
</code></pre>
<p>it gives:</p>
<pre><code>MultiIndex([('Title', '#'),
( '', 'Price'),
( '', 'Quantity')],
)
</code></pre>
<p>By the way, if you wanted a different label for empty fields, you can just use it as the parameter for <code>fillna</code>.</p>
<p>Then, you just read the remaining data and set the columns by hand:</p>
<pre><code>df1 = pd.read_excel('./question.xlsx', header = None, skiprows=2)
df1.columns = index
</code></pre>
|
python|excel|pandas|multi-index
| 1
|
375,802
| 62,502,285
|
Install geopandas in Pycharm
|
<p>Sorry if this is a repeat question, but I am new to Python and trying to install Geopandas in PyCharm.
My Python version is 3.7.2.</p>
<ol>
<li>I have tried the conventional way of installing a library in PyCharm, through the project interpreter.</li>
<li>I have also tried <code>pip install geopandas</code></li>
</ol>
<p>Both show the same error.</p>
<blockquote>
<p>A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output</p>
</blockquote>
<p>Please provide steps to follow wrt Pycharm if possible.</p>
|
<p>According to <a href="https://geopandas.org/install.html" rel="nofollow noreferrer">Geopandas's own install instructions</a>, the <em>recommended</em> way is to install via <a href="https://conda.io/en/latest/" rel="nofollow noreferrer">conda</a> package manager, which you can obtain by using the <a href="https://www.anaconda.com/products/individual" rel="nofollow noreferrer">Anaconda</a> or <a href="https://docs.conda.io/en/latest/miniconda.html" rel="nofollow noreferrer">miniconda</a> Python distributions:</p>
<pre><code>conda install geopandas
</code></pre>
<p>If, however, you have a <em>Very Good Reason</em> you can't use <code>conda</code>, and absolutely need to use <code>pip</code>, you should try installing GDAL first (<code>pip install GDAL</code>), and then checking if the <code>gdal-config</code> command is in your path (i.e. that you can access it via the terminal). Then you can try installing again from either PyCharm directly or via the command line.</p>
<p>If you can't get it installed from PyCharm and need to specify <code>GDAL_VERSION</code> (which you can get with <code>gdal-config --version</code>), you'll probably need to install from the command line, where you can set this variable either with an <code>export</code> or directly in the same invocation:</p>
<pre><code>GDAL_VERSION=2.4.2 pip install geopandas
</code></pre>
|
python-3.x|pycharm|gdal|geopandas|fiona
| 0
|
375,803
| 62,814,354
|
How to insert values into a column from another table?
|
<p>I have two datasets:</p>
<p>First dataset:</p>
<pre><code>Name ID
Alla 3
Peter NaN
Sara NaN
Maria NaN
</code></pre>
<p>Second dataset:</p>
<pre><code>Name_name ID_ID
Alla 3
Peter 4
Sara 5
</code></pre>
<p>I need to fill the missing values of the first table with the ID from the second table, matching on the common attribute, in pandas. How do I do it? I'm completely confused.</p>
<p>The result:
First dataset</p>
<pre><code>Name ID
Alla 3
Peter 4
Sara 5
Maria NaN
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> to replace missing values with a <code>Series</code> indexed by the <code>Name_name</code> column, created by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <code>ID_ID</code> selected:</p>
<pre><code>s = df2.set_index('Name_name')['ID_ID']
df1['ID'] = df1['ID'].fillna(df1['Name'].map(s))
print (df1)
Name ID
0 Alla 3.0
1 Peter 4.0
2 Sara 5.0
3 Maria NaN
</code></pre>
<p>If no missing values are possible after the fill, cast to <code>int</code>:</p>
<pre><code>s = df2.set_index('Name_name')['ID_ID']
df1['ID'] = df1['ID'].fillna(df1['Name'].map(s)).astype(int)
</code></pre>
<p>otherwise use the nullable integer type:</p>
<pre><code>s = df2.set_index('Name_name')['ID_ID']
df1['ID'] = df1['ID'].fillna(df1['Name'].map(s)).astype('Int64')
</code></pre>
<p>EDIT: If you get the error:</p>
<blockquote>
<p>Reindexing only valid with uniquely valued Index objects</p>
</blockquote>
<p>it means there are duplicates in the <code>Name_name</code> column (here <code>Alla</code> is duplicated), so <code>map</code> does not know which value to use and the error is raised.</p>
<pre><code>print (df2)
Name_name ID_ID
0 Alla 3
1 Peter 4
2 Sara 5
3 Alla 8
</code></pre>
<p>A possible solution is to remove duplicates, keeping the first duplicated row, with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a>:</p>
<pre><code>s = df2.drop_duplicates('Name_name').set_index('Name_name')['ID_ID']
df1['ID'] = df1['ID'].fillna(df1['Name'].map(s))
print (df1)
Name ID
0 Alla 3.0
1 Peter 4.0
2 Sara 5.0
3 Maria NaN
</code></pre>
|
pandas
| 1
|
375,804
| 62,646,249
|
Pandas boolean indexing: matching a set
|
<p>I am trying to figure out how to extract the entries in a Pandas dataframe where the values in one column match a given set. Here is an example:</p>
<pre><code>num = 5
df = pd.DataFrame(np.zeros((num,3)),index = np.arange(num),columns = ['ID','color','shape'])
df['color'] = ['red','red','blue','blue','yellow']
df['shape'] = ['square','triangle','triangle','circle','circle']
df['ID'] = np.arange(num,num+5)
</code></pre>
<p>If I want to select only the <code>'blue'</code> entries I can do <code>df[df.color == 'blue']</code>, and even get the <code>ID</code>s: <code>df[df.color == 'blue'].ID</code>, and then perform additional manipulations that way. How is this extended to a set of such criteria? If I want to return all the entries that are either <code>blue</code> or <code>yellow</code>, or some general set
<code>colors = ['blue','yellow','pink']</code>
the most obvious thing (to me) would be</p>
<pre><code>df[df.color in colors]
</code></pre>
<p>but this gives: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<p>What's the correct Pandas way of doing this?</p>
|
<p>How about <code>isin</code>:</p>
<pre><code>df[df.color.isin(colors)]
</code></pre>
<p>Output:</p>
<pre><code> ID color shape
2 7 blue triangle
3 8 blue circle
4 9 yellow circle
</code></pre>
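<p>As a side note, the same mask can be negated with <code>~</code> to select rows whose color is <em>not</em> in the set:</p>
<pre><code>df[~df.color.isin(colors)]
</code></pre>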
|
python|pandas|indexing
| 2
|
375,805
| 62,894,793
|
Calculate the cosine distance between two columns in Spark
|
<p>I am using Python & Spark to solve an issue.
I have a Spark DataFrame containing two columns.
Each of the columns contains a scalar of numeric (e.g. double or float) type.</p>
<p>I want to interpret these two columns as vectors and calculate the cosine similarity between them.
So far I have only found Spark linear algebra that works on DenseVectors located in cells of the dataframe.</p>
<p>Code sample in numpy:</p>
<pre><code>import numpy as np
from numpy.linalg import norm
vec = np.array([1, 2])
vec_2 = np.array([2, 1])
angle_vec_vec = np.dot(vec, vec_2) / (norm(vec) * norm(vec_2))
print(angle_vec_vec)
</code></pre>
<p>The result should be 0.8.</p>
<p>How to do this in Spark ?</p>
<pre><code>df_small = spark.createDataFrame([(1, 2), (2, 1)])
df_small.show()
</code></pre>
<p>Is there a way to convert a column of double values to a DenseVector?
Do you see any other solution to my problem?</p>
|
<p>You can see <a href="https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/CosineSimilarity.scala" rel="nofollow noreferrer">here</a> a sample that calculates the cosine distance in Scala. The strategy is to represent the documents as a RowMatrix and then use its columnSimilarities() method.</p>
<p>If you want to use PySpark, you can try what's suggested <a href="https://stackoverflow.com/questions/46663775/spark-cosine-distance-between-rows-using-dataframe">here</a></p>
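<p>Since the two columns here are scalar-typed (one number per row), the cosine similarity can also be computed with plain aggregations, without building vectors at all. A minimal PySpark sketch, assuming the default column names <code>_1</code>/<code>_2</code> produced by the question's <code>createDataFrame</code> call:</p>
<pre><code>from pyspark.sql import functions as F

# Treat each column as a vector: aggregate the dot product and both norms in one pass.
agg = df_small.agg(
    F.sum(F.col("_1") * F.col("_2")).alias("dot"),
    F.sqrt(F.sum(F.col("_1") ** 2)).alias("norm_1"),
    F.sqrt(F.sum(F.col("_2") ** 2)).alias("norm_2"),
).first()

cosine = agg["dot"] / (agg["norm_1"] * agg["norm_2"])
print(cosine)  # 0.8 for the sample data
</code></pre>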
|
python|numpy|apache-spark
| 1
|
375,806
| 62,872,493
|
What is the effect of image resize and configuration last dense layers on performance of transfer learning (VGG, ResNet) for classification
|
<p>I am trying to build a spam classification model using <code>tf2.0</code> and <code>Keras</code> for a project. It has to be a very good model and I want to improve its accuracy. Spam images can be any image in the world except a handwritten question or a picture of a question like <a href="https://drive.google.com/file/d/1xEE88WW5Gl4EZz9ZhVbLogCppy9ZMI7T/view?usp=sharing" rel="nofollow noreferrer">this</a>, or <a href="https://drive.google.com/file/d/10M9pHB4k6qhhtwhb01MOw9DdxrM-MJAw/view?usp=sharing" rel="nofollow noreferrer">this</a>. I have almost 300000 images of each category, so I want to retrain the model from random weights because question images were not included in the imagenet data and I have enough data. <strong>Please suggest if I should use the <code>imagenet</code> weights instead</strong>. I want to improve the performance of my model. I have a model something like this:</p>
<pre><code>res_net = ResNet50(include_top=False,weights=None,input_shape=(224,224,3)) # will try VGG and other models too if needed
av1 = GlobalAveragePooling2D()(res_net.output)
fc1 = Dense(1024,activation='relu')(av1)
fc2 = Dense(512,activation='relu')(fc1)
fc3 = Dense(256,activation='relu')(fc2)
d1 = Dropout(0.5)(fc3)
fc4 = Dense(1,activation='sigmoid')(d1)
model = Model(inputs=res_net.input, outputs= fc4)
</code></pre>
<p><strong>Questions</strong>:</p>
<p>My images are question images and width is larger than height in most cases, so do I resize all images to (224,224) or to the <code>min(width or height)</code> in the whole data set (But that can suppress the details for larger images)?</p>
<p>I know the use of Dense layers is up to me, but I can't try every possible combination of <code>units</code> and <code>number of layers</code> because of the obvious time, memory, and power constraints. So if I use many dense layers / units, won't I be over-fitting my model, as there are already many layers in these gigantic architectures?</p>
<p>Should I be using 3 channels or 1? Question images are mostly black with white backgrounds so it'll make training fast but 3 channels can have better results IMO??</p>
|
<p>If you are creating a custom architecture, you can have any dimension as you wish. But for already built architecture, image dimension is fixed. In Keras, you can visit each model from <a href="https://keras.io/api/applications/#:%7E:text=Keras%20Applications%20are%20deep%20learning,They%20are%20stored%20at%20%7E%2F." rel="nofollow noreferrer">Keras application link</a> to check the required dimensions.</p>
<p>Dense layers are up to you. If you choose too many neurons, the model may not fit into your GPU; if you choose too few, the learning will not be very accurate. However, if you are using an already built model, two or three fully connected layers are fine.</p>
<p>If you are using pre-trained weights from imagenet, you should choose 3 channels; otherwise, one can be used. But I would say go for three and take advantage of transfer learning. Though you have a lot of data, using imagenet weights is still good. Go for transfer learning and retrain the weights, instead of fine-tuning.</p>
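<p>A sketch of the advice above (imagenet weights, three channels, and a slimmer head with two fully connected layers), adapted from the question's own model; the layer sizes are an illustrative choice:</p>
<pre><code>from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras.models import Model

base = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)  # slimmer head than the question's 1024/512/256
x = Dropout(0.5)(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=out)
</code></pre>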
|
tensorflow|keras|deep-learning|neural-network|conv-neural-network
| 0
|
375,807
| 62,608,855
|
Python: Split String if delimiter is not available using str.split
|
<p>I am trying to split the data below on the | delimiter. If only a single value is available in the response (no delimiter), it should go to the answers column, not the Question column of the dataframe.</p>
<p>Code:</p>
<pre><code>answers_df[['Question','answers']] = answers_df.response.str.split("|",expand=True)
</code></pre>
<p>Data:</p>
<pre><code>Assortments | 5
6
product | 8
</code></pre>
<p>expected Result:</p>
<pre><code>name rating
----- ------
Assortments 5
NAN 6
product 8
</code></pre>
|
<p>Here's a way to do it; the idea is to append <code>None</code> at the front when the split yields a single value:</p>
<pre><code>df['ans2'] = df['ans'].str.split('|').apply(lambda x: [None] + x if len(x) < 2 else x)
df[['name', 'rating']] = df['ans2'].apply(pd.Series)
df = df.drop('ans2', axis=1)
ans name rating
0 Assortments|5 Assortments 5
1 6 None 6
2 product|8 product 8
</code></pre>
<p><strong>Sample Data</strong></p>
<pre><code>l=["Assortments|5", "6", "product|8"]
df = pd.DataFrame({"ans": l})
</code></pre>
|
python|pandas
| 1
|
375,808
| 62,492,971
|
Split one column into multiple columns in Python
|
<p>I have a Python dataframe like this with one column:</p>
<pre><code>index Train_station
0 Adenauerplatz 52° 29′ 59″ N, 13° 18′ 26″ O
1 Afrikanische Straße 52° 33′ 38″ N, 13° 20′ 3″ O
2 Alexanderplatz 52° 31′ 17″ N, 13° 24′ 48″ O
</code></pre>
<p>And I want to split it into 3 columns: Train station, Latitude, Longitude. The dataframe should look like this:</p>
<pre><code>index Train_station Latitude Longitude
0 Adenauerplatz 52° 29′ 59″ N 13° 18′ 26″ O
1 Afrikanische Straße 52° 33′ 38″ N 13° 20′ 3″ O
2 Alexanderplatz 52° 31′ 17″ N 13° 24′ 48″ O
</code></pre>
<p>I've tried using <em>df[['Latitude', 'Longitude']] = df.Train_station.str.split(',', expand=True)</em> but it only splits between the latitude and longitude coordinates. How can I split a column with more than one condition that I define?</p>
<p>I've thought about a method that scans the string from the left and splits when it meets an integer or a defined string, but I've found no answer for this approach so far.</p>
|
<pre><code>df = df.Train_station.str.split(r'(.*?)(\d+°[^,]+),(.*)', expand=True)
print(df.loc[:, 1:3].rename(columns={1:'Train_station', 2:'Latitude', 3:'Longitude'}) )
</code></pre>
<p>Prints:</p>
<pre><code> Train_station Latitude Longitude
0 Adenauerplatz 52° 29′ 59″ N 13° 18′ 26″ O
1 Afrikanische Straße 52° 33′ 38″ N 13° 20′ 3″ O
2 Alexanderplatz 52° 31′ 17″ N 13° 24′ 48″ O
</code></pre>
<hr />
<p>EDIT: Thanks @ALollz, you can use <code>str.extract()</code>:</p>
<pre><code>df = df.Train_station.str.extract(r'(?P<Train_station>.*?)(?P<Latitude>\d+°[^,]+),(?P<Longitude>.*)', expand=True)
print(df)
</code></pre>
|
python|pandas|dataframe
| 5
|
375,809
| 62,769,896
|
Assigning the value to the list from an array
|
<p>res is a list with 1867 values divided into 11 sets with different numbers of elements.
<code>ex: len(res[0])=147, len(res[1])=174, len(res[2])=168</code> and so on, <code>total 11 sets = 1867 elements</code>.</p>
<p><code>altitude=[125,85,69,754,855,324,...]</code> has <code>1867 values</code>.
I need to replace the res list values with consecutive altitude values.</p>
<p>I have tried:</p>
<pre><code>for h in range(len(res)):
    res[h][:] = altitude
</code></pre>
<p>It is storing all 1867 values in all the sets. <strong>I need the first 147 elements in set1, next (starting from 148th value) 174 elements in set2 so on...</strong></p>
<p>Thank You</p>
|
<p>You need to keep track of the number of elements assigned at each iteration to get the correct slice from <code>altitude</code>. If I understand correctly <code>res</code> is a list of lists with varying length.
Here is a possible solution:</p>
<pre><code>current_position = 0
for sublist in res:
    sub_len = len(sublist)
    sublist[:] = altitude[current_position: current_position + sub_len]
    current_position += sub_len
</code></pre>
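<p>A quick check with toy data (hypothetical small numbers, not the asker's 1867 values):</p>
<pre><code>res = [[0, 0], [0, 0, 0], [0]]
altitude = [125, 85, 69, 754, 855, 324]

current_position = 0
for sublist in res:
    sub_len = len(sublist)
    sublist[:] = altitude[current_position: current_position + sub_len]
    current_position += sub_len

print(res)  # [[125, 85], [69, 754, 855], [324]]
</code></pre>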
|
python-3.x|numpy
| 1
|
375,810
| 62,761,353
|
How to split dataframe made out of lists into different columns?
|
<p>I have the following dataframe. All the content in between the inverted commas is in one column. I want to split it into separate columns:</p>
<pre><code>df=
0,"#1 Microwave Oven Sharp 20 Litres, White, R-20AS-W 5.0 out of 5 stars3SAR 199.00 "
1,"#2 Nikai Microwave - 20 LTR -NMO515N8N 5.0 out of 5 stars3SAR 177.00"
2,"#3 Geepas 20 Liter Microwave Oven - GMO1894 SAR 186.00"
</code></pre>
<p>I want to split it into columns like</p>
<pre><code>df=
0,"#1", "Microwave Oven Sharp 20 Litres, White, R-20AS-W", "5.0 out of 5 stars3", "SAR 199.00 "
1,"#2", "Nikai Microwave - 20 LTR -NMO515N8N", "5.0 out of 5 stars3", "SAR 177.00"
2,"#3", "Geepas 20 Liter Microwave Oven - GMO1894", "SAR 186.00"
</code></pre>
|
<p>Use <code>.str.split</code> with a regex for two or more spaces and the parameter <code>expand=True</code>:</p>
<pre><code>df[column_name].str.split(r'\s\s+', expand=True)
</code></pre>
|
python|pandas|dataframe|split
| 1
|
375,811
| 62,882,272
|
Converting column values to rows
|
<p>I have a dataset where all values in column <code>B</code> are the same. It looks like this:</p>
<pre><code> A B
0 Marble Hill Pizza Place
1 Chinatown Pizza Place
2 Washington Pizza Place
3 Washington Pizza Place
4 Inwood Pizza Place
5 Inwood Pizza Place
</code></pre>
<p>I wish to convert column <code>A</code> values to rows. Then column <code>B</code> should count the number of occurrences of each value from <code>A</code>.<br />
I want it to look like this:</p>
<pre><code> B
Marble Hill 1
Chinatown 1
Washington 2
Inwood 2
</code></pre>
|
<p>pandas's <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer">value_counts()</a> does exactly that. It returns a series with the number of occurrences of each value.</p>
<p><code>new_df = df["A"].value_counts()</code></p>
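<p>If you also want the counts in a column named <code>B</code>, as in the desired output, a small sketch converting the resulting Series to a DataFrame:</p>
<pre><code>new_df = df["A"].value_counts().rename("B").to_frame()
</code></pre>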
|
python|pandas|dataframe
| 2
|
375,812
| 62,563,249
|
Pandas filling nulls with grouby value
|
<p>I am trying to fill null values for all the numeric type columns in a dataframe.</p>
<p>The code below goes through each numeric column and groups by a categorical feature and calculates the median of the target column.</p>
<p>We then create a new column that copies over the values where present; where a value is null, it should copy over the value from the groupby based on the categorical value in the row where the n/a is present.</p>
<pre><code># fill in numeric nulls with median based on job
for i in dfint:
    print(i)

for i in dfint:
    if i in ["TARGET_BAD_FLAG", "TARGET_LOSS_AMT"]: continue
    print(i)
    group = df.groupby("JOB")[i].median()
    print(group)
    df["IMP_"+i] = df[i].fillna(group[group.index.get_loc(df.loc[df[i].isna(),"JOB"])])
    # the line below works but fills in all nulls with the median for the "Mgr" job category; the code above should find the job category for the null record and pull the groupby value
    #df["IMP_"+i] = df[i].fillna(group[group.index.get_loc("Mgr")])
</code></pre>
<p>I seem to be having an issue with the function between the .get_loc, here is the output</p>
<pre><code>TARGET_BAD_FLAG
TARGET_LOSS_AMT
LOAN
MORTDUE
VALUE
YOJ
DEROG
DELINQ
CLAGE
NINQ
CLNO
DEBTINC
LOAN
JOB
Mgr 18100
Office 16200
Other 15200
ProfExe 17300
Sales 14300
Self 24000
Name: LOAN, dtype: int64
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-207-f8a76179c818> in <module>
8 group=df.groupby("JOB")[i].median()
9 print(group)
---> 10 df["IMP_"+i]=df[i].fillna(group[group.index.get_loc(df.loc[df[i].isna(),"JOB"])])
11 #the line below works but fills in all nulls with the median for the "Mgr" job category, the code above should find the job category for the null record and pull the groupby value
12 #df["IMP_"+i]=df[i].fillna(group[group.index.get_loc("Mgr")])
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2895 )
2896 try:
-> 2897 return self._engine.get_loc(key)
2898 except KeyError:
2899 return self._engine.get_loc(self._maybe_cast_indexer(key))
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
TypeError: 'Series([], Name: JOB, dtype: object)' is an invalid key
</code></pre>
<p>Is there a way to modify that line to do as intended?</p>
|
<p>You wrote <code>df.loc[df[i].isna(), "JOB"]</code>, which returns a pandas Series, not the single key expected by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer">pandas.Index.get_loc</a>.</p>
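<p>For the intended imputation, a common sketch is <code>groupby(...).transform('median')</code>, which produces a per-row group median aligned to the original index, so <code>get_loc</code> is not needed at all (an assumption about the intent, based on the question):</p>
<pre><code>for i in dfint:
    if i in ("TARGET_BAD_FLAG", "TARGET_LOSS_AMT"):
        continue
    # median of column i within each JOB group, broadcast back to every row
    group_median = df.groupby("JOB")[i].transform("median")
    df["IMP_" + i] = df[i].fillna(group_median)
</code></pre>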
|
python|pandas
| 0
|
375,813
| 62,690,426
|
ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32
|
<p><strong>Embedding matrix:</strong></p>
<pre><code>def create_embedding_matrix(filepath, word_index, embedding_dim):
    vocab_size = len(word_index) + 1
    # Adding again 1 because of reserved 0 index
    embedding_matrix = np.zeros((vocab_size, embedding_dim))
    with open(filepath) as f:
        for line in f:
            # word, *vector = line.split()
            # word, vector = re.split('', line)[0], re.split('', line)[1:]
            # word, vector = (lambda x,*y:(x, y))(*line.split())
            arr = line.split()
            word = arr[0]
            vector = arr[1:]
            if word in word_index:
                idx = word_index[word]
                embedding_matrix[idx] = np.array(vector, dtype=np.float32)[:embedding_dim]
    return vocab_size, embedding_matrix
</code></pre>
<p><strong>The model training looks like this -</strong></p>
<pre><code>def model_training(vocab_size, embedding_dim, X_train, y_train, X_test, y_test, maxlen):
    # embedding_dim = 100
    model = Sequential()
    model.add(layers.Embedding(vocab_size, embedding_dim, input_length=maxlen))
    model.add(layers.Conv1D(128, 5, activation='relu'))
    model.add(layers.GlobalMaxPooling1D())
    model.add(layers.Dense(10, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), batch_size=10)

df = load_sentence_data_2C()
maxlen = 100
vocab_size, tokenizer, X_train, y_train, X_test, y_test = sentence_tokenizer(df, maxlen)

embedding_dim = 50
vocab_size, embedding_matrix = create_embedding_matrix('data/glove.6B.50d.txt', tokenizer.word_index, embedding_dim)

embedding_dim = 100
model_training(vocab_size, embedding_dim, X_train, y_train, X_test, y_test, maxlen)
</code></pre>
<p><strong>Error is -</strong></p>
<blockquote>
<p>ValueError: Tensor conversion requested dtype float32_ref for Tensor
with dtype float32:
'Tensor("Adam/embedding_1/embeddings/m/Initializer/zeros:0",
shape=(1747, 100), dtype=float32)'</p>
</blockquote>
<p><strong>Full stack trace -</strong></p>
<pre class="lang-none prettyprint-override"><code>Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Traceback (most recent call last):
File "deeplearning/utils_exp.py", line 86, in <module>
model_training(vocab_size, embedding_dim, X_train, y_train, X_test, y_test, maxlen)
File "deeplearning/utils_exp.py", line 77, in model_training
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), batch_size=10)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1010, in fit
self._make_train_function()
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 509, in _make_train_function
loss=self.total_loss)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 497, in get_updates
return [self.apply_gradients(grads_and_vars)]
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 435, in apply_gradients
self._create_slots(var_list)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/adam.py", line 145, in _create_slots
self.add_slot(var, 'm')
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 578, in add_slot
initial_value=initial_value)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 261, in __call__
return cls._variable_v2_call(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 255, in _variable_v2_call
shape=shape)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 236, in <lambda>
previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2544, in default_variable_creator_v2
shape=shape)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 263, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 460, in __init__
shape=shape)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 605, in _init_from_args
name="initial_value", dtype=dtype)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1087, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1145, in convert_to_tensor_v2
as_ref=False)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1018, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: 'Tensor("Adam/embedding_1/embeddings/m/Initializer/zeros:0", shape=(1747, 100), dtype=float32)'
</code></pre>
|
<p>It was a version mismatch issue. Solved by reinstalling both keras and tensorflow.</p>
|
python|pandas|numpy|tensorflow|keras
| 0
|
375,814
| 62,572,615
|
Get numeric part of string column and cast to integer
|
<p>The question is pretty simple actually, I just couldn't figure it out. There is a
Fifa Dataset that I use, and I'd like to convert the whole <code>weight</code> column to integers. So: first I drop the <code>lbs</code>, then I convert to integer.</p>
<pre><code>fifa["Weight"].head()
0 159lbs
1 183lbs
2 150lbs
3 168lbs
4 154lbs
Name: Weight, dtype: object
fifa.Weight = [int(x.strip("lbs")) if type(x)==str else x for x in fifa.Weight]
</code></pre>
<p>I know that I could use this but I don't want to.</p>
<pre><code>fifa_weight = []
for i in fifa["Weight"]:
    if type(i) == str:
        fifa_weight.append(int(i.strip("lbs")))
## There are some missing values in the Weight column, that's why I use type(i)==str.
</code></pre>
<p>I get the values inside the <code>fifa["Weight"]</code> column and try to put them inside the <code>fifa_weight</code> list, but I wasn't able to change the column (because of missing values). So, how can I do that with a for loop? I want my <code>fifa["Weight"]</code> column to be full of integers.</p>
|
<pre><code>>>> fifa
Weight
0 159lbs
1 183lbs
2 150lbs
3 168lbs
4 154lbs
fifa["Weight"] = fifa["Weight"].str.replace("lbs", "")
</code></pre>
<p>and then</p>
<pre><code>fifa["Weight"] = fifa["Weight"].astype(float)
</code></pre>
<p>If you have empty cells in the Weight column, fill them first with a placeholder (like -9999) and then try the above.</p>
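<p>A compact sketch combining these steps (the -9999 placeholder is an arbitrary choice):</p>
<pre><code>fifa["Weight"] = (
    pd.to_numeric(fifa["Weight"].str.replace("lbs", "", regex=False))
    .fillna(-9999)  # placeholder for missing values
    .astype(int)
)
</code></pre>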
|
python|pandas|numpy|dataframe|data-analysis
| 2
|
375,815
| 62,768,722
|
Kernel is dead and restarting automatically while running a cell for a training a regression model
|
<p>I am doing the Bulldozer price calculation problem, using RandomForestRegressor. After removing all the missing values and converting all data to numeric, I try to fit and train the data into a model. The data set is pretty large, about 412698 rows × 57 columns, and I am using a device with 3 GB of RAM.</p>
<p>here is my code</p>
<pre><code>%%time
# Instantiate model
model = RandomForestRegressor(n_jobs=-1,
random_state=42)
# Fit the model
model.fit(df_tmp.drop("SalePrice", axis=1), df_tmp["SalePrice"])
</code></pre>
<p>The data set is available on Kaggle and I am also attaching its link:
<a href="https://www.kaggle.com/c/bluebook-for-bulldozers/data" rel="nofollow noreferrer">https://www.kaggle.com/c/bluebook-for-bulldozers/data</a></p>
|
<p>You can use batch processing when you have more data than your RAM can handle. Scikit-learn has this feature built in; you have to use the <code>warm_start</code> parameter in RandomForestRegressor:</p>
<p>warm_start, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.</p>
<p>You can try something like this</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(warm_start=True, n_jobs=-1, random_state=42, n_estimators=0)
for df_split in np.array_split(df_tmp, 500):  # split into 500 dataframes
    model.n_estimators += 1  # warm_start only adds trees when n_estimators grows
    model.fit(df_split.drop("SalePrice", axis=1), df_split["SalePrice"])
</code></pre>
<p>if you have memory errors while loading pandas data frame itself check <a href="https://stackoverflow.com/questions/44729727/pandas-slice-large-dataframe-in-chunks">this</a></p>
<p>you can always use <a href="https://colab.research.google.com" rel="nofollow noreferrer">google colab</a> for better ram and gpu. It is free and very easy to start with</p>
|
python|pandas|scikit-learn|jupyter-notebook|regression
| 0
|
375,816
| 62,882,030
|
Python ImportError: from transformers import BertTokenizer, BertConfig
|
<p>I am trying to do named entity recognition in Python using BERT, and installed transformers v3.0.2 from huggingface using <code>pip install transformers</code>. Then when I try to run this code:</p>
<pre><code>import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertTokenizer, BertConfig
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
torch.__version__
</code></pre>
<p>I get this error:</p>
<pre><code>ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 2): Symbol not found: ____chkstk_darwin
Referenced from: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so (which was built for Mac OS X 10.15)
Expected in: /usr/lib/libSystem.B.dylib
in /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so
</code></pre>
<p>The error occurs in this line: <code>from transformers import BertTokenizer, BertConfig</code> but I'm not sure how to fix this.</p>
|
<p>Try <code>pip install tokenizers==0.7.0</code></p>
|
python|python-import|importerror|huggingface-transformers|huggingface-tokenizers
| 0
|
375,817
| 62,882,132
|
Convert variable of dtype=<U14 to standard list
|
<p>I'm reading values from a csv using pandas and I've come across a format I've never seen before, to give an example:</p>
<p><code>myval = array('[1.0, 0, 0, 0]', dtype='<U14')</code></p>
<p>I would like to convert myval into a list of integers/floats, but nothing I try seems compatible with a variable of this datatype. I would appreciate any help.</p>
|
<p>It is a Unicode string of up to 14 characters; you can try:</p>
<pre><code>eval(myval)
</code></pre>
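<p>A safer alternative that parses the literal list without executing arbitrary code:</p>
<pre><code>import ast

ast.literal_eval(str(myval))  # [1.0, 0, 0, 0]
</code></pre>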
|
python-3.x|pandas
| 1
|
375,818
| 62,490,885
|
filtering a numpy matrix using a dataframe column which is a boolean column
|
<p>I have a dataframe where one column is a boolean column; there are n rows in my dataframe. I am trying to use this column to filter a numpy matrix which is n by 30.</p>
<blockquote>
<p>filtered_matrix = my_matrix[df['my_bool_col'], :]</p>
</blockquote>
<p>The error I get is</p>
<blockquote>
<p>IndexError: boolean index did not match indexed array along dimension 1; dimension is 30 but corresponding boolean dimension is 1</p>
</blockquote>
|
<p>So the problem is that you want to index a numpy matrix based on boolean values within a data frame. For this, you could do something like this:</p>
<pre><code>filtered_matrix = my_matrix[np.where(df['my_bool_col'])[0], :]
</code></pre>
<p>Numpy's <code>np.where()</code> method (<a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.where.html</a>) in this case returns the indices where an entry is <code>True</code>; you then use this array of indices to index your matrix.</p>
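<p>Equivalently, the boolean column can be used as a mask directly once converted to a plain numpy array (a sketch, assuming the column holds booleans):</p>
<pre><code>filtered_matrix = my_matrix[df['my_bool_col'].to_numpy(), :]
</code></pre>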
|
python|pandas|numpy
| 0
|
375,819
| 62,671,542
|
Issue using BeautifulSoup and reading target URLs from a CSV
|
<p>Everything works as expected when I'm using a single URL for the URL variable to scrape, but not getting any results when attempting to read links from a csv. Any help is appreciated.</p>
<p>Info about the CSV:</p>
<ul>
<li>One column with a header called "Links"</li>
<li>300 rows of links with no space, comma, semicolon, or other characters before/after the links</li>
<li>One link in each row</li>
</ul>
<pre><code>import requests # required to make request
from bs4 import BeautifulSoup # required to parse html
import pandas as pd
import csv

with open("urls.csv") as infile:
    reader = csv.DictReader(infile)
    for link in reader:
        res = requests.get(link['Links'])
        #print(res.url)

url = res
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')

email_elm0 = soup.find_all(class_= "app-support-list__item")[0].text.strip()
email_elm1 = soup.find_all(class_= "app-support-list__item")[1].text.strip()
email_elm2 = soup.find_all(class_= "app-support-list__item")[2].text.strip()
email_elm3 = soup.find_all(class_= "app-support-list__item")[3].text.strip()

final_email_elm = (email_elm0,email_elm1,email_elm2,email_elm3)
print(final_email_elm)

df = pd.DataFrame(final_email_elm)
#getting an output in csv format for the dataframe we created
#df.to_csv('draft_part2_scrape.csv')
</code></pre>
|
<p>The problem lies in this part of the code:</p>
<pre class="lang-py prettyprint-override"><code>with open("urls.csv") as infile:
    reader = csv.DictReader(infile)
    for link in reader:
        res = requests.get(link['Links'])
        ...
</code></pre>
<p>After the loop is executed, <code>res</code> will have the last link. So, this program will only <strong>scrape</strong> the last link.</p>
<p>To solve this problem, store all the links in a list and iterate that list to <strong>scrape</strong> each of the links. You can store each <strong>scraped</strong> result in a separate dataframe and concatenate them at the end to store in a single file:</p>
<pre class="lang-py prettyprint-override"><code>import requests # required to make request
from bs4 import BeautifulSoup # required to parse html
import pandas as pd
import csv

links = []
with open("urls.csv") as infile:
    reader = csv.DictReader(infile)
    for link in reader:
        links.append(link['Links'])

dfs = []
for url in links:
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    email_elm0 = soup.find_all(class_="app-support-list__item")[0].text.strip()
    email_elm1 = soup.find_all(class_="app-support-list__item")[1].text.strip()
    email_elm2 = soup.find_all(class_="app-support-list__item")[2].text.strip()
    email_elm3 = soup.find_all(class_="app-support-list__item")[3].text.strip()
    final_email_elm = (email_elm0, email_elm1, email_elm2, email_elm3)
    print(final_email_elm)
    dfs.append(pd.DataFrame(final_email_elm))

# getting an output in csv format for the dataframe we created
df = pd.concat(dfs)
df.to_csv('draft_part2_scrape.csv')
</code></pre>
|
python|pandas|beautifulsoup
| 0
|
375,820
| 62,885,651
|
Groupby and frequency count does not return the right value
|
<p>I am using this code to group companies and do a frequency count. However, the returned result did not group the companies:</p>
<pre><code>freq = df.groupby(['company'])['recruitment'].size()
</code></pre>
<p>I got a result similar to this:</p>
<pre><code>recruitment
company
Data Co 3
Data Co 8
Apple Co 3
Apple Co 6
</code></pre>
<p>I have two questions:</p>
<ol>
<li>why this groupby did not group same companies?</li>
<li>When I print freq.columns, it only shows the recruitment column; company disappeared. Is there any way to show both columns, company and recruitment?</li>
</ol>
|
<p>If the company names look the 'same', then you likely have whitespace at the front or end. I am adding <code>upper</code> to convert all names to upper case as well:</p>
<pre><code>freq = df.groupby(df['company'].str.strip().str.upper())['recruitment'].size()
</code></pre>
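<p>For the second question: the company names become the index of the resulting Series, not a column. A small sketch that turns them back into a column:</p>
<pre><code>freq = freq.reset_index()  # two columns: company and recruitment
</code></pre>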
|
python|pandas
| 0
|
375,821
| 62,720,500
|
How to calculate the difference between grouped rows in pandas
|
<p>I have a dataset with the number of views per article.</p>
<p><a href="https://i.stack.imgur.com/W6qFH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W6qFH.png" alt="enter image description here" /></a></p>
<p>I'm trying to calculate the additional number of views per day, for each story, so I can graph it.</p>
<p>I manage to do it for one story only.</p>
<pre><code>storyviews = stats[['title', 'views']].sort_values(by=['title','views'])
storyviews = stats[stats["title"] == "Getting Started with TDD"]
storyviews = storyviews[["title","views"]].sort_values(by=['title','views'])
difference = storyviews.set_index('title').diff()
difference = difference.dropna(subset=['views'])
difference
</code></pre>
<p>and I got the correct result.</p>
<p><a href="https://i.stack.imgur.com/Bpifl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bpifl.png" alt="enter image description here" /></a></p>
<p>Is there a way to do it in one pass for all the stories?</p>
<p>DATASET</p>
<pre><code>y,m,d,mediumID,title,link,publication,mins,views,reads,readRatio,fans,pubDate,liveDate
2020,06,30,a1777d8bf7e,Swift — Filtering: A Real Example,https://levelup.gitconnected.com/swift-filtering-a-real-example-a1777d8bf7e,Level Up Coding,4 min read,35,13,37.142857142857146,1,2020-06-17,2020-06-26
2020,06,30,6f5fc68b0b43,SwiftUI 2: an overview,https://levelup.gitconnected.com/swiftui-2-an-overview-6f5fc68b0b43,Level Up Coding,3 min read,43,22,51.16279069767442,2,2020-06-24,2020-06-24
2020,07,01,a1777d8bf7e,Swift — Filtering: A Real Example,https://levelup.gitconnected.com/swift-filtering-a-real-example-a1777d8bf7e,Level Up Coding,4 min read,37,13,35.13513513513514,1,2020-06-17,2020-06-26
2020,07,01,6f5fc68b0b43,SwiftUI 2: an overview,https://levelup.gitconnected.com/swiftui-2-an-overview-6f5fc68b0b43,Level Up Coding,3 min read,57,29,50.87719298245614,10,2020-06-24,2020-06-24
2020,07,02,a1777d8bf7e,Swift — Filtering: A Real Example,https://levelup.gitconnected.com/swift-filtering-a-real-example-a1777d8bf7e,Level Up Coding,4 min read,37,13,35.13513513513514,1,2020-06-17,2020-06-26
2020,07,02,6f5fc68b0b43,SwiftUI 2: an overview,https://levelup.gitconnected.com/swiftui-2-an-overview-6f5fc68b0b43,Level Up Coding,3 min read,76,43,56.578947368421055,15,2020-06-24,2020-06-24
2020,07,03,a1777d8bf7e,Swift — Filtering: A Real Example,https://levelup.gitconnected.com/swift-filtering-a-real-example-a1777d8bf7e,Level Up Coding,4 min read,40,13,34.21052631578947,1,2020-06-17,2020-06-26
2020,07,03,6f5fc68b0b43,SwiftUI 2: an overview,https://levelup.gitconnected.com/swiftui-2-an-overview-6f5fc68b0b43,Level Up Coding,3 min read,152,70,46.05263157894737,20,2020-06-24,2020-06-24
</code></pre>
<p>Thanks,
Nicolas</p>
|
<p>Could you give this a shot?</p>
<pre class="lang-py prettyprint-override"><code>cols = ['title', 'views']
storyviews = stats[cols].sort_values(by=cols)
res = storyviews.set_index('title').groupby('title', sort=False).diff().dropna()
</code></pre>
<p>Output:</p>
<pre><code> views
title
SwiftUI 2: an overview 14.0
SwiftUI 2: an overview 19.0
SwiftUI 2: an overview 76.0
Swift — Filtering: A Real Example 2.0
Swift — Filtering: A Real Example 0.0
Swift — Filtering: A Real Example 3.0
</code></pre>
<p>For plotting the legend, title, ..., you might want to ask another question; I don't have an answer. To get you started on the plot, try this:</p>
<pre class="lang-py prettyprint-override"><code>res.reset_index().groupby('title', sort=False).plot()
</code></pre>
|
python|pandas
| 2
|
375,822
| 62,520,199
|
Elementwise multiplication of pandas dataframe
|
<p>I know this question has been asked several times, but I tried all the answers and still don't get the right result. I just want to do an element-wise multiplication of two pandas dataframes, but it always results in messing up the structure of the matrix:</p>
<pre><code>x = pd.DataFrame([1,1,1],[2,2,2])
y= pd.DataFrame([0,0,0],[1,1,1])
</code></pre>
<p>z= x*y should result in z being</p>
<pre><code>2 0
2 0
2 0
</code></pre>
<p>But instead results in z being:</p>
<pre><code>0
1 NaN
1 NaN
1 NaN
2 NaN
2 NaN
2 NaN
</code></pre>
<p>What am I doing wrong? I tried pandas.mul and pandas.multiply, but no success.</p>
|
<p>You should use <code>print(x*y.values)</code> instead of <code>print(x*y)</code>. The NaNs appear because <code>x</code> and <code>y</code> have different indexes, so <code>x*y</code> aligns them before multiplying; using <code>.values</code> drops the index of <code>y</code> and multiplies positionally.</p>
|
python|pandas
| 3
|
375,823
| 62,534,775
|
Remove rows from dataframe df1 if their columnS valueS exist in other dataframe df2
|
<p>I tried this :</p>
<pre><code>res = df1[~(getattr(df1, 'A').isin(getattr(df2, 'A')) & getattr(df1, 'C').isin(getattr(df2, 'C')))]
</code></pre>
<p>It works <strong>BUT</strong> the list of columns is variable (in this example <code>columns = ['A', 'C']</code>). How can I loop over it to build the above expression dynamically according to the values of the list 'columns'?</p>
<p>Example: df1:</p>
<pre><code> A B C D
0 oo one 0 0
1 bar one1 1 2
2 foo two2 2 4
3 bar one1 3 6
4 foo two 4 8
5 bar two 5 10
6 foo one 6 12
7 fowwo three 7 14
</code></pre>
<p>df2:</p>
<pre><code> A B C D
0 oo one 0 0
2 foo two2 2 4
3 bar one1 3 6
4 foo two 4 8
5 bar two 5 10
6 foo one 6 12
7 fowwo three 7 14
</code></pre>
<p>res:</p>
<pre><code> A B C D
1 bar one1 1 2
</code></pre>
|
<p>Use:</p>
<pre><code>column_list = ["A","C"]
df1[(~pd.concat((getattr(df1, col).isin(getattr(df2, col)) for col in column_list), axis=1 )).any(1)]
</code></pre>
<p>Output:</p>
<pre><code> A B C D
1 bar one1 1 2
</code></pre>
<p><strong>EDIT</strong></p>
<p>The new situation you explained in the comments can be solved with <code>merge</code>.</p>
<p>Dataframes:</p>
<pre><code>df3= pd.DataFrame({'A': '1010994595 1017165396 1020896102 1028915753 1028915753 1030811227 1033837508 1047224448 1047559040 1053827106 1094815936 1113339076 1115345471 1121416375 1122392586 1122981502 1132224809 '.split(), 'B': '99203 99232 99233 99231 99291 99291 99232 99232 99242 99232 99244 G0425 99213 99203 99606 99243 99214'.split(), 'C': np.arange(17), 'D': np.arange(17) * 2})
df4= pd.DataFrame({'A': '1115345471 1113339076 1020896102 1047224448 1053827106 1121416375 1122392586 1028915753 1132224809 1030811227 1094815936 1033837508 1047559040 1122981502 1028915753 1030811227 1017165396 '.split(), 'B': '99213 G0425 99291 99232 99291 99243 99606 99291 99214 99291 99244 99233 99242 99243 99291 99291 99232 '.split(), 'C': np.arange(17), 'D': np.arange(17) * 2})
</code></pre>
<p>Code to select rows from df4 that are not in df3 (for columns in column_list):</p>
<pre><code>list_col = ["A","B"]
df4[df4.merge(df3.drop_duplicates(), on=list_col, how='left', indicator=True)["_merge"] == "left_only"]
</code></pre>
<p>Output:</p>
<pre><code> A B C D
2 1020896102 99291 2 4
4 1053827106 99291 4 8
5 1121416375 99243 5 10
11 1033837508 99233 11 22
</code></pre>
<p>If you want to reset the index for the new table, add <code>.reset_index(drop=True)</code> at the end.</p>
|
pandas|dataframe|duplicates|compare|multiple-columns
| 1
|
375,824
| 62,587,433
|
Filter df so values are equal across groups - pandas
|
<p>I've got a pandas <code>df</code> that contains values at various time points. I perform a groupby of these time points and values. I'm hoping to filter the output so both groups contain values at each time point. If either group does not contain a value at that time point, I want to drop that row.</p>
<p>Using the <code>df</code> below, there are values for <code>Group A</code> and <code>Group B</code> at various time points. However, time points <code>3,4,6</code> only contain one item from either <code>Group A</code> or <code>Group B</code>. When there aren't at least two items per time point, I want to drop these rows altogether.</p>
<p>The ordering matters and not the total amount. So if there are missing items for either <code>Group</code> at a specific time point, I want to drop these rows.</p>
<p>Note: the df only contains a max of one value per group at each time point. But my actual data could contain numerous. The main concern is dropping rows where at least one group is absent.</p>
<pre><code>df1 = pd.DataFrame({
'Time' : [1,1,1,2,2,3,4,5,5,6],
'Group' : ['A','B','B','A','B','A','B','A','B','B'],
'Val_A' : [6,7,4,5,4,4,9,6,7,8],
'Val_B' : [1,2,2,3,2,1,2,1,4,9],
'Val_C' : [1,2,2,3,4,5,7,8,9,7],
})
Group_A = df1.loc[df1['Group'] == 'A']
Group_B = df1.loc[df1['Group'] == 'B']
Group_A = list(Group_A.groupby(['Time'])['Val_A'].apply(list))
Group_B = list(Group_B.groupby(['Time'])['Val_B'].apply(list))
print(df1)
print(Group_A)
print(Group_B)
Time Group Val_A Val_B Val_C
0 1 A 6 1 1
1 1 B 7 2 2
2 1 B 4 2 2
3 2 A 5 3 3
4 2 B 4 2 4
5 3 A 4 1 5
6 4 B 9 2 7
7 5 A 6 1 8
8 5 B 7 4 9
9 6 B 8 9 7
[[6], [5], [4], [6]]
[[2, 2], [2], [2], [4], [9]]
</code></pre>
<p>I can't use <code>dropna</code> or <code>drop_duplicates</code>. Furthermore, data may contain items for <code>Group B</code> and not <code>Group A</code>. So I'm hoping to find a function that can handle both instances.</p>
<p>Intended Output:</p>
<pre><code> Time Group Val_A Val_B Val_C
0 1 A 6 1 1
1 1 B 7 2 2
2 1 B 4 2 2
3 2 A 5 3 3
4 2 B 4 2 4
7 5 A 6 1 8
8 5 B 7 4 9
[[6], [5], [6]]
[[2, 2], [2], [4]]
</code></pre>
|
<p>If you don't care about which row you drop, you could pick the first n rows in each group where n is the smallest number of rows in any group:</p>
<pre><code>df1.groupby('Group').head(df1.groupby('Group')['Val_A'].count().min())
</code></pre>
<p>Or, if you only want rows with a value of 'Time' in each group, you could do the following:</p>
<pre><code>df1.groupby('Time').filter(lambda x: len(x['Val_A']) > 1)
</code></pre>
<p>Or, if you want to check that you have each group (e.g. A and B) at each point in time and they only appear once at that point</p>
<pre><code>df1.groupby('Time').filter(lambda x: {'A','B'} == set(x['Group']) and len(x) == 2)
</code></pre>
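<p>A variant matching the intended output above (both groups present at the time point, regardless of how many rows each contributes):</p>
<pre><code>df1.groupby('Time').filter(lambda x: {'A', 'B'}.issubset(x['Group']))
</code></pre>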
|
python|pandas
| 2
|
375,825
| 62,472,764
|
Can't import tensorflow when running a script from the terminal, even though tensorflow works in a jupyter notebook and the terminal
|
<p>TLDR: When I write a small script named toy_model.py and attempt to run it from the command line with</p>
<pre><code>py toy_model.py
</code></pre>
<p>I get an error message complaining about loading tensorflow.</p>
<p>However, I am able to import and use tensorflow in many other settings without any problem, such as</p>
<ul>
<li>In a jupyter notebook</li>
<li>when I import toy_model.py into a jupyter notebook</li>
<li>when I use python from the command line</li>
</ul>
<p>I have tried many recommended solutions (downloading Spyder in the Anaconda Navigator virtual environment I am using, switching from tensorflow 2.1.0 to tensorflow 2.0.0, downloading microsoft visual studio), and none have been succesful.</p>
<p>I would be grateful for any assistance or insight into this problem, which I will describe in more detail below.</p>
<hr />
<p>I use Anaconda Navigator to code in Python. In Anaconda Navigator, I prepared an environment with the name <code>updated_tensorflow</code>. I used Anaconda's package manager to download tensorflow 2.0.0 and keras 2.3.1 into this environment.</p>
<p>I prepared a jupyter notebook named <code>test1.ipynb</code> with the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
model = tf.keras.Sequential([
tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='adam',
loss= 'sparse_categorical_crossentropy', #should be 'sparse_categorical_crossentropy' b/c one-hot encoded
metrics=['accuracy'])
model.predict([[1,2,3,4,5,6,7,8,9,10], [-1,2,-3,4,-5,6,-7,8,-9,10]])
</code></pre>
<p>When I run <code>test1.ipynb</code> from the environment <code>updated_tensorflow</code>, there are no problems.</p>
<p>In the terminal I entered the environment <code>updated_tensorflow</code>, and began using python by typing <code>python</code> in the command line. I entered the same code as in <code>test1.ipynb</code> and had no problems.</p>
<p>I created a file with the name <code>toy_model1.py</code> that contained the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
model = tf.keras.Sequential([
tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='adam',
loss= 'sparse_categorical_crossentropy', #should be 'sparse_categorical_crossentropy' b/c one-hot encoded
metrics=['accuracy'])
</code></pre>
<p>Then, I created another jupyter notebook in the same directory as <code>toy_model1.py</code> with the name <code>test2.ipynb</code> and the following code:</p>
<pre><code>from toy_model1 import *
model.predict([[1,2,3,4,5,6,7,8,9,10], [-1,2,-3,4,-5,6,-7,8,-9,10]])
</code></pre>
<p>This cell ran with no problems.</p>
<p>Finally, in this same directory, I produced a small file with the name <code>toy_model.py</code> which contained the code</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
model = tf.keras.Sequential([
tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='adam',
loss= 'sparse_categorical_crossentropy', #should be 'sparse_categorical_crossentropy' b/c one-hot encoded
metrics=['accuracy'])
model.predict([[1,2,3,4,5,6,7,8,9,10], [-1,2,-3,4,-5,6,-7,8,-9,10]])
</code></pre>
<p>Then in my terminal, still in the environment <code>updated_tensorflow</code>, I moved to the directory containing <code>toy_model.py</code> and attempted to run it with</p>
<pre><code>py toy_model.py
</code></pre>
<p>I got the following message that indicated I could not import tensorflow:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\me\Anaconda3\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\me\Anaconda3\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "toy_model.py", line 3, in <module>
import tensorflow as tf
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 50, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 69, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\me\Anaconda3\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\me\Anaconda3\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\me\Anaconda3\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>Some of the advice I have followed in my unsuccessful efforts to fix this problem includes:</p>
<ul>
<li><p>making sure pip and setup tools are updated</p>
</li>
<li><p>uninstalling and reinstalling tensorflow and keras using pip</p>
</li>
<li><p>uninstalling and reinstalling tensorflow and keras using conda</p>
</li>
<li><p>switching from tensorflow 2.1.0 to tensorflow to 2.0.0</p>
</li>
<li><p>installing tensorflow and keras on my base anaconda environment</p>
</li>
<li><p>installing tensorflow and keras on my machine</p>
</li>
<li><p>downloading Spyder in my <code>updated_tensorflow</code> environment</p>
</li>
<li><p>downloading Microsoft Visual Studio</p>
</li>
</ul>
<p>None have been successful.</p>
<p>I would be very grateful for any assistance in understanding and fixing whatever error I am making. It would be so nice if I could run .py files from my terminal instead of using python exclusively in Jupyter notebooks!</p>
<p>Even a hint as to what the error message means would be helpful: I do not properly understand what a DLL even is.</p>
|
<p>It sounds like, by invoking <code>py</code>, you are not getting the specific Python installation that has tensorflow (or tensorflow in that installation is broken). By default, invoking <code>py</code> will give you the system default (it's the first one in the environment path, so that's the first hit).</p>
<p>I would advise identifying exactly which python interpreter your notebook is using and calling that one specifically by saying</p>
<pre><code>/path/to/notebook/interpreter/python toy_model.py
</code></pre>
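<p>If you are unsure which interpreter the notebook uses, this standard-library one-liner (run inside a notebook cell) prints its full path:</p>
<pre><code>import sys
print(sys.executable)  # e.g. C:\Users\me\Anaconda3\Anaconda3\python.exe
</code></pre>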
|
python|tensorflow|python-import|importerror|dllimport
| 2
|
375,826
| 62,694,594
|
How to group dates by month in python
|
<p>I know I could group if I had an object with a special key that represents the date. But I have the dates as the index, which looks like this:</p>
<p>This is the index</p>
<pre><code>DatetimeIndex(['2000-01-03', '2000-01-04', '2000-01-05', '2000-01-06',
'2000-01-07', '2000-01-10', '2000-01-11', '2000-01-12',
'2000-01-13', '2000-01-14',
...
'2019-12-18', '2019-12-19', '2019-12-20', '2019-12-23',
'2019-12-24', '2019-12-25', '2019-12-26', '2019-12-27',
'2019-12-30', '2019-12-31'],
dtype='datetime64[ns]', name='DATE', length=5217, freq=None)
Index(['DEXUSEU'], dtype='object')
</code></pre>
<p>The whole table is</p>
<pre><code> DEXUSEU
DATE
2000-01-03 1.0155
2000-01-04 1.0309
2000-01-05 1.0335
...
</code></pre>
<p>Ultimately I want to get the highest value for each month.
I was playing around with</p>
<pre><code>.groupby(pd.Grouper(freq='M')).max()
</code></pre>
<p>But I did not manage to get the desired results.</p>
<p>My goal is to have the maximum value for each month. I have daily euro/usd rate data spanning 2000 to 2019. The grouping should mean that in the end I have the max value for Jan 2000, the max value for Feb 2000, ..., up to the max value for Dec 2019.</p>
<p>The <code>.groupby(usdEuro.index.month).max()</code> approach gives only 12 values in total; I want 12 per individual year.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.idxmax</code></a>: convert the dates to month periods, take the index of the maximum per period, and select those rows with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>df.loc[df.groupby(df.index.to_period('M'))['DEXUSEU'].idxmax()]
</code></pre>
<p>Or if possible use <code>Grouper</code>:</p>
<pre><code>df.loc[df.groupby(pd.Grouper(freq='M'))['DEXUSEU'].idxmax()]
</code></pre>
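<p>A minimal sketch with made-up daily data (only the column name <code>DEXUSEU</code> is taken from the question), showing that one row per month is returned:</p>
<pre><code>import numpy as np
import pandas as pd

idx = pd.date_range('2000-01-01', periods=90, freq='D')
df = pd.DataFrame({'DEXUSEU': np.random.rand(90)}, index=idx)
df.index.name = 'DATE'

# one row per month: the day on which DEXUSEU peaked
print(df.loc[df.groupby(df.index.to_period('M'))['DEXUSEU'].idxmax()])
</code></pre>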
|
python|pandas|pandas-datareader
| 1
|
375,827
| 62,510,402
|
How to add multiple data in single Row Column using pandas
|
<p>I have one DataFrame with two columns called Brand and Con.</p>
<pre><code>import pandas as pd
cars = {'Brand': ['Honda Civic','Toyota Corolla'],
'Con': [['India','Srilanka'],['India']]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Con'])
print (df)
</code></pre>
<p>which is giving me</p>
<pre><code> Brand Con
0 Honda Civic [India, Srilanka]
1 Toyota Corolla [India]
</code></pre>
<p>And I want to convert the dataframe to this output:</p>
<pre><code> Brand Con
0 [Honda Civic, Toyota Corolla] India
1 [Honda Civic] Srilanka
</code></pre>
<p>Thank You in Advance :)</p>
|
<p>Use:</p>
<pre><code>df.explode('Con').groupby('Con')['Brand'].agg(list).reset_index().reindex(df.columns, axis=1)
</code></pre>
<p>EDIT: for older pandas versions (before 0.25, where <code>DataFrame.explode</code> is not available), build the exploded frame manually:</p>
<pre><code>cars = {'Brand': ['Honda Civic','Toyota Corolla'],
'Con': [['India','Srilanka'],['India']]
}
df = pd.DataFrame(cars, columns = ['Brand', 'Con'])
print (df)
from itertools import chain
df1 = pd.DataFrame({
'Con' : list(chain.from_iterable(df['Con'].tolist())),
'Brand' : df['Brand'].repeat(df['Con'].str.len())
})
df2 = df1.groupby('Con')['Brand'].agg(list).reset_index().reindex(df.columns, axis=1)
print(df2)
Brand Con
0 [Honda Civic, Toyota Corolla] India
1 [Honda Civic] Srilanka
</code></pre>
|
python|pandas|python-2.7
| 2
|
375,828
| 62,513,777
|
Pandas: merge dataframes without creating new columns inside a for operation
|
<p>I'm trying to enrich a dataframe with data collected from an API.
So, I'm going like this:</p>
<pre><code>for i in df.index:
if pd.isnull(df.cnpj[i]) == True:
pass
else:
k=get_financials_hnwi(df.cnpj[i]) # this is my API requesting function, working fine
df=df.merge(k,on=["cnpj"],how="left") # here is my problem <-------------------------------
</code></pre>
<p>Since I'm running that merge inside a for loop, it keeps adding suffixes (_x, _y). So I found this alternative here:</p>
<p><a href="https://stackoverflow.com/questions/41262379/pandas-merge-dataframes-without-creating-new-columns">Pandas: merge dataframes without creating new columns</a></p>
<pre><code>for i in df.index:
if pd.isnull(df.cnpj[i]) == True:
pass
else:
k=get_financials_hnwi(df.cnpj[i]) # this is my requesting function, working fine
val = np.intersect1d(df.cnpj, k.cnpj)
df_temp = pd.concat([df,k], ignore_index=True)
df=df_temp[df_temp.cnpj.isin(val)]
</code></pre>
<p>However it creates a new df, killing the original index and not allowing this line to run <code>if pd.isnull(df.cnpj[i]) == True:</code>.</p>
<p>Is there a nice way to run a merge/join/concat inside a for loop without creating new columns with _x and _y? Or is there a way to combine the _x and _y columns afterwards, getting rid of them and condensing everything into a single column? I just want a single column with all of it.</p>
<p><strong>Sample data and reproducible code</strong></p>
<pre><code>df=pd.DataFrame({'cnpj':[12,32,54,65],'co_name':['Johns Market','T Bone Gril','Superstore','XYZ Tech']})
#first API request:
k=pd.DataFrame({'cnpj':[12],'average_revenues':[687],'years':['2019,2018,2017']})
df=df.merge(k,on="cnpj", how='left')
#second API request:
k=pd.DataFrame({'cnpj':[32],'average_revenues':[456],'years':['2019,2017']})
df=df.merge(k,on="cnpj", how='left')
#third API request:
k=pd.DataFrame({'cnpj':[53],'average_revenues':[None],'years':[None]})
df=df.merge(k,on="cnpj", how='left')
#fourth API request:
k=pd.DataFrame({'cnpj':[65],'average_revenues':[4142],'years':['2019,2018,2015,2013,2012']})
df=df.merge(k,on="cnpj", how='left')
print(df)
</code></pre>
<p>Result:</p>
<pre><code> cnpj co_name average_revenues_x years_x average_revenues_y \
0 12 Johns Market 687.0 2019,2018,2017 NaN
1 32 T Bone Gril NaN NaN 456.0
2 54 Superstore NaN NaN NaN
3 65 XYZ Tech NaN NaN NaN
years_y average_revenues_x years_x average_revenues_y \
0 NaN None None NaN
1 2019,2017 None None NaN
2 NaN None None NaN
3 NaN None None 4142.0
years_y
0 NaN
1 NaN
2 NaN
3 2019,2018,2015,2013,2012
</code></pre>
<p>Desired result:</p>
<pre><code> cnpj co_name average_revenues years
0 12 Johns Market 687.0 2019,2018,2017
1 32 T Bone Gril 456.0 2019,2017
2 54 Superstore None None
3 65 XYZ Tech 4142.0 2019,2018,2015,2013,2012
</code></pre>
|
<p>As you're joining on a single column and mapping values, we can take advantage of the <code>cnpj</code> column and set it as the index; we can then use <code>combine_first</code>, <code>update</code> or <code>map</code> to add your values into your dataframe.</p>
<p>Assuming <code>k</code> will look like this; if not, just update the function to return a dictionary that you can use <code>map</code> with.</p>
<pre><code> cnpj average_revenues years
0 12 687 2019,2018,2017
</code></pre>
<hr />
<p>Let's hold this in a tidy function.</p>
<pre><code>def update_api_call(dataframe, api_call):
    # make sure the frame is indexed by 'cnpj' so that alignment works
    if dataframe.index.name != 'cnpj':
        dataframe = dataframe.set_index('cnpj')
    # fill missing values in `dataframe` from the API result
    return dataframe.combine_first(
        api_call.set_index('cnpj')
    )
</code></pre>
<p>Assuming your <code>k</code> variables are numbered 1-4 for our test:</p>
<pre><code>df1 = update_api_call(df,k1)
print(df1)
average_revenues co_name years
cnpj
12 687.0 Johns Market 2019,2018,2017
32 NaN T Bone Gril NaN
54 NaN Superstore NaN
65 NaN XYZ Tech NaN
df2 = update_api_call(df1,k2)
print(df2)
average_revenues co_name years
cnpj
12 687.0 Johns Market 2019,2018,2017
32 456.0 T Bone Gril 2019,2017
54 NaN Superstore NaN
65 NaN XYZ Tech NaN
</code></pre>
<hr />
<pre><code>print(df4)
average_revenues co_name years
cnpj
12 687.0 Johns Market 2019,2018,2017
32 456.0 T Bone Gril 2019,2017
53 NaN NaN NaN
54 NaN Superstore NaN
65 4142.0 XYZ Tech 2019,2018,2015,2013,2012
</code></pre>
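<p>A hedged sketch of how this could replace the merge inside the original loop (the <code>get_financials_hnwi</code> call is the question's own API function; setting the index once up front keeps <code>update_api_call</code> happy on every iteration):</p>
<pre><code>df = df.set_index('cnpj')
for cnpj in df.index:
    if pd.isnull(cnpj):
        continue
    k = get_financials_hnwi(cnpj)
    df = update_api_call(df, k)  # no _x/_y suffixes accumulate
</code></pre>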
|
python|pandas|dataframe
| 1
|
375,829
| 62,819,596
|
Increasing Training/Validation Accuracy for Breast Mammography
|
<p>I've been training a CNN model for a binary classification task in order to classify breast mammography image patches as Normal or Abnormal. Here is my training plot:</p>
<p><a href="https://i.stack.imgur.com/CF8wj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CF8wj.png" alt="enter image description here" /></a></p>
<p>Even though the results are somewhat solid, for this binary classification task I am aiming for <code>0.9</code> or more train/val accuracy. I've examined the training output, and it seems that the network is stuck at a saddle point. Here is a sample of the training output:</p>
<pre><code>Epoch 48/400
134/134 [==============================] - ETA: 0s - loss: 0.2837 - binary_accuracy: 0.8762
Epoch 00048: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 294ms/step - loss: 0.2837 - binary_accuracy: 0.8762 - val_loss: 0.3802 - val_binary_accuracy: 0.8358
Epoch 49/400
134/134 [==============================] - ETA: 0s - loss: 0.2820 - binary_accuracy: 0.8846
Epoch 00049: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 294ms/step - loss: 0.2820 - binary_accuracy: 0.8846 - val_loss: 0.3844 - val_binary_accuracy: 0.8312
Epoch 50/400
134/134 [==============================] - ETA: 0s - loss: 0.2835 - binary_accuracy: 0.8806
Epoch 00050: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 292ms/step - loss: 0.2835 - binary_accuracy: 0.8806 - val_loss: 0.3827 - val_binary_accuracy: 0.8293
Epoch 51/400
134/134 [==============================] - ETA: 0s - loss: 0.2754 - binary_accuracy: 0.8843
Epoch 00051: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 293ms/step - loss: 0.2754 - binary_accuracy: 0.8843 - val_loss: 0.3847 - val_binary_accuracy: 0.8246
Epoch 52/400
134/134 [==============================] - ETA: 0s - loss: 0.2773 - binary_accuracy: 0.8832
Epoch 00052: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 290ms/step - loss: 0.2773 - binary_accuracy: 0.8832 - val_loss: 0.4020 - val_binary_accuracy: 0.8293
Epoch 53/400
134/134 [==============================] - ETA: 0s - loss: 0.2762 - binary_accuracy: 0.8825
Epoch 00053: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 290ms/step - loss: 0.2762 - binary_accuracy: 0.8825 - val_loss: 0.3918 - val_binary_accuracy: 0.8106
Epoch 54/400
134/134 [==============================] - ETA: 0s - loss: 0.2734 - binary_accuracy: 0.8881
Epoch 00054: val_loss did not improve from 0.37938
134/134 [==============================] - 39s 290ms/step - loss: 0.2734 - binary_accuracy: 0.8881 - val_loss: 0.4216 - val_binary_accuracy: 0.8181
Epoch 55/400
134/134 [==============================] - ETA: 0s - loss: 0.2902 - binary_accuracy: 0.8804
Epoch 00055: val_loss improved from 0.37938 to 0.36383, saving model to /content/drive/My Drive/Breast Mammography/Patch Classifier/Training/normal-abnormal_patch_classification_weights_clr-055- 0.3638.hdf5
134/134 [==============================] - 40s 301ms/step - loss: 0.2902 - binary_accuracy: 0.8804 - val_loss: 0.3638 - val_binary_accuracy: 0.8396
Epoch 56/400
134/134 [==============================] - ETA: 0s - loss: 0.2766 - binary_accuracy: 0.8822
Epoch 00056: val_loss did not improve from 0.36383
134/134 [==============================] - 39s 291ms/step - loss: 0.2766 - binary_accuracy: 0.8822 - val_loss: 0.4408 - val_binary_accuracy: 0.8209
Epoch 57/400
134/134 [==============================] - ETA: 0s - loss: 0.2811 - binary_accuracy: 0.8790
Epoch 00057: val_loss did not improve from 0.36383
134/134 [==============================] - 39s 289ms/step - loss: 0.2811 - binary_accuracy: 0.8790 - val_loss: 0.3743 - val_binary_accuracy: 0.8396
Epoch 58/400
134/134 [==============================] - ETA: 0s - loss: 0.2834 - binary_accuracy: 0.8792
Epoch 00058: val_loss did not improve from 0.36383
134/134 [==============================] - 39s 289ms/step - loss: 0.2834 - binary_accuracy: 0.8792 - val_loss: 0.3946 - val_binary_accuracy: 0.8200
Epoch 59/400
134/134 [==============================] - ETA: 0s - loss: 0.2716 - binary_accuracy: 0.8797
Epoch 00059: val_loss did not improve from 0.36383
134/134 [==============================] - 39s 293ms/step - loss: 0.2716 - binary_accuracy: 0.8797 - val_loss: 0.3784 - val_binary_accuracy: 0.8340
Epoch 60/400
134/134 [==============================] - ETA: 0s - loss: 0.2755 - binary_accuracy: 0.8836
Epoch 00060: val_loss did not improve from 0.36383
134/134 [==============================] - 39s 290ms/step - loss: 0.2755 - binary_accuracy: 0.8836 - val_loss: 0.4015 - val_binary_accuracy: 0.8321
Epoch 61/400
134/134 [==============================] - ETA: 0s - loss: 0.2767 - binary_accuracy: 0.8827
Epoch 00061: val_loss did not improve from 0.36383
</code></pre>
<p>I am considering the following options:</p>
<ul>
<li>Load network from epoch where learning has stopped and train it with <strong>STEP learning rate schedule</strong> (starting with big learning rate in order to escape saddle point)</li>
<li>Perform offline augmentation to increase number of training data, while keeping all transformations performed during online augmentation</li>
<li>Changing networks architecture (for now I am using custom, somewhat shallow (I would say semi-deep) model)</li>
</ul>
<p>Does anyone have any suggestions other than the ones I've mentioned above? Also, I am using <code>SGD</code> with <code>momentum=0.9</code>. Are there any optimizers in practice that are able to escape saddle points more easily than SGD with momentum? Also, how does <code>BATCH_SIZE</code> (which I've set to <code>32</code>, while <code>TRAINING_SIZE=4300</code>; no class imbalance present) affect learning?</p>
|
<p>Taking a look at the graph, it is quite possible that the model is already performing at its best considering the size of the dataset. Although the performance is not bad, I would give the following suggestions:</p>
<ol>
<li>Try a pre-trained model. A pre-trained model mostly works better on both small and big datasets. You may use any pre-trained architecture from the Keras <a href="https://keras.io/api/applications/" rel="nofollow noreferrer">applications</a>. To implement it properly, you should use <code>include_top=False</code> and <code>weights='imagenet'</code> (see the sketch after this list).</li>
<li>If you are not willing to change the model, then you may try training it in different image datasets, such as imagenet, mini-imagenet, tiny-imagenet. Pre-training may cause the model to find a more optimal embedding sub-space.</li>
<li>If you do not have any dropout layers in the current model, try adding some. This may cause an improvement in generalization.</li>
<li><code>SGD</code> with <code>momentum=0.9</code> is a good choice. However, you may try <code>Adam</code> with <code>learning_rate=0.001</code>.</li>
<li>After selecting the best loss function, try searching over different <code>batch_sizes</code>. I believe you would notice some improvement in peak accuracy.</li>
</ol>
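<p>A minimal sketch of suggestion 1, assuming 224x224 RGB patches and a sigmoid head (the backbone choice, input size and dropout rate are illustrative assumptions, not from the question):</p>
<pre><code>from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(include_top=False, weights='imagenet',
                input_shape=(224, 224, 3), pooling='avg')
base.trainable = False  # freeze the pre-trained backbone at first

model = models.Sequential([
    base,
    layers.Dropout(0.5),                    # suggestion 3: dropout
    layers.Dense(1, activation='sigmoid'),  # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['binary_accuracy'])
</code></pre>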
<p>I hope these answers would help you. If there are any more questions, please let me know. Thanks.</p>
|
tensorflow|machine-learning|keras|deep-learning|conv-neural-network
| 1
|
375,830
| 62,742,948
|
Comparing elements in two numpy arrays results in a memory address
|
<p>I'm comparing the individual elements in two numpy arrays. The array elements are integers. I'm using the 'equal_arrays' function to do the comparison, but it gives me the memory address of the result object:</p>
<p>Here is the code:</p>
<pre><code> act = actual_direction
pre = predicted_direction
np.sum(act == pre)
comparison = act == pre
equal_arrays = comparison.all
print(f'equal_arrays : {equal_arrays}\n')
</code></pre>
<p>result:</p>
<pre><code> equal_arrays : <built-in method all of numpy.ndarray object at 0x00000122CA6CA3F0>
</code></pre>
<p>Do I have to access the memory address to get the results or is there a more elegant way to get the answer?</p>
<p>Thanks in advance.</p>
|
<p>First, note why you are seeing a memory address: <code>comparison.all</code> without parentheses is a reference to the method object itself, not its result. Calling it as <code>comparison.all()</code> returns a single boolean telling you whether <em>all</em> elements match.</p>
<p>Based on what I understand, though, you need an array of True/False values for each corresponding pair of elements from the two matrices, given they have the same shape ("get a comparison for each individual element of the arrays"). If so, you can try something like this:</p>
<pre><code>a = np.array([[1,2,3], [4,5,6], [7,8,9]])
b = np.array([[3,2,1], [6,5,4], [9,8,7]])
print(a == b)
</code></pre>
<p>Output:</p>
<pre><code>[[False True False]
[False True False]
[False True False]]
</code></pre>
|
python-3.x|numpy
| 2
|
375,831
| 62,767,073
|
Python Update only 1 column from another dataframe on index value
|
<p>I have 2 dataframes, df1 & df2. df2 is a subset of df1 that I extracted to do some cleaning. Both dataframes can be matched on index. I've seen a lot of merges on the site, but I don't want to add more columns to df1, and the dataframes are not the same size (df1 has 1000 rows and df2 has 275 rows), so I don't want to replace the entire column. I want to update df1['AgeBin'] with the df2['AgeBin'] values where the indexes of these dataframes match.</p>
<pre><code>indexes = df.loc[df.AgeBin.isin(dfage_test.AgeBin.values)].index
df1.at[indexes,'AgeBin'] = df2['AgeBin'].values
</code></pre>
<p>This is what I came up with, but it seems there is an issue since the dataframes are different sizes:</p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>Below is an oversimplification: df1 has 26 columns and df2 has 12 columns, and AgeBin is the last column in both. This is, in theory, my objective:</p>
<pre><code>df2
AgeBin
0 2
1 3
2 1
3 3
df1
AgeBin
0 NaN
1 NaN
2 NaN
3 NaN
df1 after update
AgeBin
0 2
1 3
2 1
3 3
</code></pre>
<p>Here are dataframe specs</p>
<pre><code>RangeIndex: 1309 entries, 0 to 1308
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 1046 non-null float64
1 Survived 714 non-null category
2 Pclass 1046 non-null category
3 Name 1046 non-null object
4 Sex 1046 non-null object
5 Age 1046 non-null float64
6 SibSp 1046 non-null float64
7 Parch 1046 non-null float64
8 Ticket 1046 non-null object
9 Fare 1046 non-null float64
10 Embarked 1046 non-null category
11 FamilySize 1046 non-null float64
12 Surname 1046 non-null object
13 Title 1046 non-null object
14 IsChild 1046 non-null float64
15 isMale 1046 non-null category
16 GroupID 1046 non-null float64
17 GroupSize 1046 non-null float64
18 GroupType 1046 non-null object
19 GroupNumSurvived 1046 non-null float64
20 GroupNumPerished 1046 non-null float64
21 LargeGroup 1046 non-null float64
22 SplitFare 1046 non-null float64
23 log10Fare 1046 non-null float64
24 log10SplitFare 1046 non-null float64
25 AgeBin 1046 non-null category
dtypes: category(5), float64(15), object(6)
memory usage: 221.9+ KB
dfageResults.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 263 entries, 5 to 1308
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AgeBin 263 non-null category
dtypes: category(1)
memory usage: 12.4 KB
</code></pre>
<p>Here are the categories</p>
<pre><code>67] dfageResults.groupby(["AgeBin"])["AgeBin"].count()
AgeBin
0-14 25
15-29 192
30-44 46
Name: AgeBin, dtype: int64
[68] df.groupby(["AgeBin"])["AgeBin"].count()
AgeBin
0-14 107
15-29 462
30-44 301
45-59 136
60+ 40
Name: AgeBin, dtype: int64
</code></pre>
|
<p>Assuming all the indexes in <code>df2</code> exist in <code>df1</code> (which I understand is the case), the below will suffice:</p>
<pre class="lang-py prettyprint-override"><code>df1.loc[df2.index,:]=df2
</code></pre>
<p>In case if the above assumption for <code>index</code> won't hold - this is the alternative (same result - updates only existing indexes in <code>df1</code>):</p>
<pre class="lang-py prettyprint-override"><code>df1.loc[set(df2.index).intersection(set(df1.index)),:]=df2
</code></pre>
<p>Sample output (with more representative sample data):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df1=pd.DataFrame({"AgeBin": [1,2,3,'x', np.nan,np.nan,'a']})
df2=pd.DataFrame({"AgeBin": ['new1', 'new2', 123]}, index=[5,2,3])
print(df1)
print(df2)
df1.loc[df2.index,:]=df2
print(df1)
</code></pre>
<p>Outputs:</p>
<pre class="lang-py prettyprint-override"><code> AgeBin
0 1
1 2
2 3
3 x
4 NaN
5 NaN
6 a
AgeBin
5 new1
2 new2
3 123
AgeBin
0 1
1 2
2 new2
3 123
4 NaN
5 new1
6 a
</code></pre>
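<p>As a side note, <code>DataFrame.update</code> does something very similar in place: it aligns on the index and overwrites <code>df1</code> only with non-NA values from <code>df2</code>, so depending on how you want NaNs treated this one-liner may also work:</p>
<pre class="lang-py prettyprint-override"><code>df1.update(df2)  # modifies df1 in place, matching on index
</code></pre>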
|
python|pandas|dataframe
| 1
|
375,832
| 62,839,330
|
Join or Merge dataframes with different multi-indexes
|
<p>Apologies in advance for duplicate question, but I have yet to find a solution that works from previous questions.</p>
<p>I am unable to join two data-frames with different MultiIndexes. I want to keep all the columns from both data-frames.</p>
<p>Given that df1 has ~300k rows and df2 has ~50k rows the join would be many:1 between df1:df2.</p>
<pre><code>df1 B path_id
cust_id date
11 2015-02-24 10 13
28 2015-02-25 16 22
23 2015-02-26 21 19
15 2015-02-27 11 28
18 2015-02-28 29 10
df2 C
cust_id path_id
11 13 10
28 22 26
23 19 22
15 28 27
18 10 18
</code></pre>
<p>The goal is to assign column <code>C</code> to all matching combinations of index <code>cust_id</code> & column <code>path_id</code>. See df3 below as an example.</p>
<pre><code>df3 B C path_id
cust_id date
11 2015-02-24 10 10 13
28 2015-02-25 16 26 22
23 2015-02-26 21 22 19
15 2015-02-27 11 27 28
18 2015-02-28 29 18 10
</code></pre>
<p>Appreciate any response on this. Thank you!</p>
|
<p>Well I figured it out. I'm not sure if this is the best way but I just reset the indexes of both data frames and merged on the columns. See code below.</p>
<pre><code>df1 = df1.reset_index()  # reset_index returns a new frame, so assign it back
df2 = df2.reset_index()
df3 = df1.merge(df2, on=['cust_id', 'path_id'])
</code></pre>
<p>I then reassigned the indexes afterwards. If there is a better way please let me know.</p>
<p>Thanks!</p>
|
python|pandas|dataframe|multi-index
| 1
|
375,833
| 62,774,428
|
Slicing Not NaN values in python
|
<p>I am a Python newbie and would appreciate some help!
I have a dataframe called result in the below format:</p>
<pre><code>start end rf1 rf2 rf3
01-01-2008 10-01-2008 nan 12 nan
02-01-2008 11-01-2008 nan 16 nan
03-01-2008 12-01-2008 32 18 18
</code></pre>
<p>I want a list of those rfs in each row that are not NaN. Please note that my first two columns are not an index. I tried the below code but couldn't get my answer:</p>
<pre><code>result_2=result.dropna(axis=1,how='all')
</code></pre>
<p>Basically I want a list of dates for which the rfs are not NaN.
For example, in the first row my output should give me the start date, end date and 'rf2'; similarly, in the last row my output should give me the start date, end date, and 'rf1','rf2','rf3'.</p>
|
<p>IIUC you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> having filtered on the <code>rfX</code> columns, <code>groupby</code> the index and build a list from the resulting groups:</p>
<pre><code>df.filter(regex=r'rf\d').stack().groupby(level=0).agg(list)
0 [12.0]
1 [16.0]
2 [32.0, 18.0, 18.0]
dtype: object
</code></pre>
<p>Or using a list comprehension:</p>
<pre><code>[[i for i in row if i==i] for row in df.filter(regex=r'rf\d').values.tolist()]
[[12.0], [16.0], [32.0, 18.0, 18.0]]
</code></pre>
<p>Or, if you need the column names:</p>
<pre><code>df['vals'] = df.filter(regex=r'rf\d').stack().reset_index(level=1)\
.groupby(level=0).level_1.agg(list)
print(df)
start end rf1 rf2 rf3 vals
0 2008-01-01 2008-10-01 NaN 12 NaN [rf2]
1 2008-02-01 2008-11-01 NaN 16 NaN [rf2]
2 2008-03-01 2008-12-01 32.0 18 18.0 [rf1, rf2, rf3]
</code></pre>
|
python|pandas|numpy|slice|nan
| 4
|
375,834
| 62,725,842
|
Correlation between continuous independent variable and binary class dependent variable
|
<p>Can someone please tell me <strong>if it's correct</strong> to find the <code>correlation</code> between a dependent variable that has a <code>binary class (0 or 1)</code> and independent variables that have continuous values, using pandas <code>df.corr()</code>?</p>
<p>I do get correlation output if I use it. But <strong>I want to understand if it's statistically correct</strong> to find the Pearson correlation (using df.corr()) between a binary categorical output and continuous input variables.</p>
|
<p>Pearson correlation is meant for continuous data. If one variable is continuous and the other is binary or categorical, you should use ANOVA to assess the relation between the variables (<a href="https://www.quora.com/How-can-I-measure-the-correlation-between-continuous-and-categorical-variables" rel="nofollow noreferrer">reference</a>).</p>
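<p>For completeness (not part of the original answer): the point-biserial correlation is the standard measure for a binary vs. continuous pair, and it is numerically identical to Pearson's r computed on a 0/1 coding, e.g. <code>scipy.stats.pointbiserialr(binary_target, continuous_feature)</code>.</p>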
|
pandas|statistics|correlation
| 0
|
375,835
| 62,815,087
|
How to get multidimentional array slices in C#, numpy style?
|
<p>I have</p>
<pre><code>int[,,,] arr = new int[5, 6, 7, 8]; // c#
arr = np.zeros((5, 6, 7, 8)) # Python
</code></pre>
<p>A 4d array, with <code>5 * 6 * 7 * 8</code> cells.</p>
<p>I want to slice it in c# like in numpy</p>
<pre><code>var mySlice = arr[2:4, 0, :2, :]; // Won't work in C#, but looking for a way to do this. return type should be int[,,] A 3d array with 2 * 1 * 2 * 8 cells.
my_slice = arr[2:4, 0, :2, :]; # easy with numpy
</code></pre>
<p>If this can be done Dynamically, that's also fine.</p>
<p>How to slice multidimensional arrays in C#?</p>
|
<p>As far as I know there is no built-in way to do this in C#, other than writing your own multidimensional array class.</p>
<p>C# 8 introduces ranges and array slicing, but that uses <code>Span<T></code> to return a slice without copying any values, and spans are one-dimensional. Multi-dimensional arrays are stored in contiguous memory, so a potential multidimensional slicing operator would need one offset and length per dimension.</p>
|
python|c#|arrays|numpy|slice
| 0
|
375,836
| 62,791,055
|
Delete Column in CSV when element is certain value
|
<p>I am parsing a large amount of data so I need to delete a column in CSV. I would like to delete a column if it contains 0 and 32800.</p>
<p>There is no header too.</p>
<p><a href="https://i.stack.imgur.com/sGV6J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGV6J.png" alt="enter image description here" /></a></p>
|
<p>To delete a column with a specific value, you can try doing this:</p>
<pre><code>data.loc[:, ~(data == 0).any()]
</code></pre>
<p>or this</p>
<pre><code>data.drop(columns=data.columns[(data == 0).any()])
</code></pre>
<ul>
<li><code>(data == 0)</code> alone gives you a dataframe with booleans (whether 0 appears in df or not)</li>
<li><code>(data == 0).any()</code> tells you if there is any 0's in the columns of df</li>
<li>So when you do this <code>data.columns[(data == 0).any()]</code> it tells you that in data there are 0 values in one or some columns.</li>
</ul>
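<p>To also drop columns containing 32800 (the question asks for both values), combine the masks:</p>
<pre><code>data.loc[:, ~((data == 0) | (data == 32800)).any()]
</code></pre>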
|
pandas|dataframe
| 2
|
375,837
| 62,533,013
|
Why is slope not a good measure of trends for data?
|
<p>Following the advice of <a href="https://www.emilkhatib.com/analyzing-trends-in-data-with-pandas/" rel="nofollow noreferrer">this post</a> on analyzing trends in data with pandas, I have used numpy's <code>polyfit</code> on several datasets I have. However, it does not let me tell when there is a trend and when there isn't. I wonder what I am understanding wrong.</p>
<p>First the code is the following</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
import numpy as np
file="data.csv"
df= pandas.read_csv(file,delimiter=',',header=0)
selected=df.loc[(df.index>25)&(df.index<613)]
xx=np.arange(25,612)
y= selected[selected.columns[1]].values
df.plot()
plt.plot(xx,y)
plt.xlabel("seconds")
coefficients, residuals, _, _, _ = np.polyfit(range(25,25+len(y)),y,1,full=True)
plt.plot(xx,[coefficients[0]*x + coefficients[1] for x in range(25,25+len(y))])
mse = residuals[0]/(len(y))
nrmse = np.sqrt(mse)/(y.max() - y.min())
print('Slope ' + str(coefficients[0]))
print('Degree '+str(np.degrees(np.arctan(coefficients[0]))))
print('NRMSE: ' + str(nrmse))
print('Max-Min '+str((y.max()-y.min())))
</code></pre>
<p>I trimmed the first and last 25 points of data.
As a result I got the following:</p>
<p><a href="https://i.stack.imgur.com/EPGLP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EPGLP.png" alt="data4" /></a></p>
<p>I can clearly see that there is a trend to increase in the data.
For the results I got</p>
<pre><code>Slope 397.78399534197837
Degree 89.85596288567513
NRMSE: 0.010041127178789659
Max-Min 257824
</code></pre>
<p>and with this data</p>
<p><a href="https://i.stack.imgur.com/hb8f0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hb8f0.png" alt="enter image description here" /></a></p>
<p>I got</p>
<pre><code>Slope 349.74410929666203
Degree 89.83617844631047
NRMSE: 0.1482879344688465
Max-Min 430752
</code></pre>
<p>However with this data</p>
<p><a href="https://i.stack.imgur.com/mDtvZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mDtvZ.png" alt="enter image description here" /></a></p>
<p>I got</p>
<pre><code>Slope 29.414468649823373
Degree 88.05287249703134
NRMSE: 0.3752760050624873
Max-Min 673124
</code></pre>
<p>As you can see, in this one there is not much of an increasing tendency, so the slope is smaller.</p>
<p>However here</p>
<p><a href="https://i.stack.imgur.com/ULYyX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ULYyX.png" alt="enter image description here" /></a></p>
<p>again has a big slope</p>
<pre><code>Slope 228.34551214653814
Degree 89.74908456620851
NRMSE: 0.3094116937517223
Max-Min 581600
</code></pre>
<p>I can't understand why the slope does not clearly indicate the tendencies (and the degrees even less so).</p>
<p>A second thing that disconcerts me is that the slope depends on how much the data varies on the Y axis.
For example, with data that varies little, the slope is close to 0:</p>
<p><a href="https://i.stack.imgur.com/H0fgY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H0fgY.png" alt="enter image description here" /></a></p>
<pre><code>Slope 0.00017744046645062043
Degree 0.010166589735754468
NRMSE: 0.07312155589459704
Max-Min 11.349999999999998
</code></pre>
<p>What is a good way to detect a trend in data, independent of its magnitude?</p>
|
<p>The idea is that you compare whether the linear fit shows a significant increase compared to the fluctuation of the data around the fit:</p>
<p><a href="https://i.stack.imgur.com/CAMC9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CAMC9.png" alt="Linear fit example" /></a></p>
<p>In the bottom panel, you see that the trend (the fit minus the constant part) exceeds residuals (defined as the difference between data and fit). What a good criterion for 'significant increase' is, depends on the type of data and also on how many values along the x axis you have. I suggest that you take the root mean square (RMS) of the residuals. If the trend in the fit exceeds some threshold (relative to the residuals), you call it a significant trend. A suitable value of the threshold needs to be established by trial and error.</p>
<p>Here is the code generating the plots above:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# example data
x = np.arange(25, 600)
y = 1.76e7 + 3e5/600*x + 1e5*np.sin(x*0.2)
y += np.random.normal(scale=3e4, size=x.shape)
# process
a1, a0 = np.polyfit(x, y, 1)
resid = y - (a1*x + a0) # array
rms = np.sqrt((resid**2).mean())
plt.close('all')
fig, ax = plt.subplots(2, 1)
ax[0].plot(x, y, label='data')
ax[0].plot(x, a1*x+a0, label='fit')
ax[0].legend()
ax[1].plot(x, resid, label='residual')
ax[1].plot(x, a1*(x-x[0]), label='trend')
ax[1].legend()
dy_trend = a1*(x[-1] - x[0])
threshold = 0.3
print(f'dy_trend={dy_trend:.3g}; rms={rms:.3g}')  # no space inside the format spec
if dy_trend > threshold*rms:
    print('Significant trend')
</code></pre>
<p>Output:</p>
<pre><code>dy_trend=2.87e+05; rms=7.76e+04
Significant trend
</code></pre>
|
python|pandas|numpy|data-analysis
| 0
|
375,838
| 62,527,534
|
get indices of numpy multidimensional arrays
|
<pre><code>arr = np.array([[[1,2],[3,4]],[[5,6],[7,8]]])
</code></pre>
<p>I'm trying to get the indices of <code>arr</code> when <code>arr==1</code>.</p>
<p>I thought this would work but it doesn't give the expected output:</p>
<pre><code>>>> np.where(arr==1)
(array([0], dtype=int64), array([0], dtype=int64), array([0], dtype=int64))
</code></pre>
|
<p>If you change arr:</p>
<pre><code>arr = np.array([[[1,2],[3,4]],[[1,1],[7,8]]])
</code></pre>
<p>and you will get</p>
<pre><code>np.where(arr==1)
# (array([0, 1, 1]), array([0, 0, 0]), array([0, 0, 1]))
</code></pre>
<p>which means:</p>
<pre><code>arr[0][0][0] == 1
arr[1][0][0] == 1
arr[1][0][1] == 1
</code></pre>
<p>If you want to display the coordinates one per row:</p>
<pre><code>np.array(np.where(arr==1)).T
# array([[0, 0, 0],[1, 0, 0],[1, 0, 1]])
</code></pre>
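<p>Equivalently, <code>np.argwhere</code> gives you that row-per-match layout directly:</p>
<pre><code>np.argwhere(arr == 1)
# array([[0, 0, 0], [1, 0, 0], [1, 0, 1]])
</code></pre>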
|
python|numpy
| 1
|
375,839
| 62,590,761
|
An error occured while installing flair and pytorch with pipenv in windows with Pycharm
|
<p>I am having problems installing the python packages <code>flair</code> and <code>pytorch</code> via <code>pipenv</code> and have not been able to resolve this issue yet. Since I am trying to version my python git repository with <code>Pipfile + Pipfile.lock</code> instead of <code>requirements.txt</code> this is currently not possible:</p>
<p><code>pipenv install flair</code></p>
<p><code> ERROR: Could not find a version that satisfies the requirement torch>=1.1.0 (from flair->-r c:\users\user.name\appdata\local\temp\pipenv-plyx3uwp-requirements\pipenv-xh_afa_r-requirement.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch>=1.1.0 (from flair->-r c:\users\user.name\appdata\local\temp\pipenv-plyx3uwp-requirements\pipenv-xh_afa_r-requirement.txt (line 1)) Installation Failed</code></p>
<p>I tried these variants of installing <code>torchvision</code>:</p>
<p><code>pipenv install torchvision</code></p>
<p>to install <code>torchvision</code> which should pick up the latest torch version</p>
<p><code>pipenv install torch==1.3</code></p>
<p>to install torch</p>
<p><code>pipenv install https://download.pytorch.org/whl/cu92/torch-0.4.1-cp37-cp37m-win_amd64.whl</code></p>
<p>alternative way to install torch (here is more binaries: <a href="https://pytorch.org/get-started/previous-versions/#windows-binaries" rel="nofollow noreferrer">https://pytorch.org/get-started/previous-versions/#windows-binaries</a>)</p>
<p><code>pipenv install git+https://github.com/pytorch/vision#egg=torchvision</code></p>
<p>Another alternative way:</p>
<pre>Error text: Collecting torchvision
Downloading torchvision-0.5.0-cp37-cp37m-win_amd64.whl (1.2 MB)
Collecting numpy
Using cached numpy-1.18.5-cp37-cp37m-win_amd64.whl (12.7 MB)
Collecting pillow>=4.1.1
Downloading Pillow-7.1.2-cp37-cp37m-win_amd64.whl (2.0 MB)
Collecting six
Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
</pre>
<p><code>pipenv install torchvision</code></p>
<pre>users\user.name\appdata\local\temp\pipenv-hf2be0xq-requirements\pipenv-57akhz4j-requirement.txt (line
1)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.4.0 (from torchvision->-r c:\users\user.name\appdat
a\local\temp\pipenv-hf2be0xq-requirements\pipenv-57akhz4j-requirement.txt (line 1))
Installation Failed
</pre>
<hr />
<p>The only way it was possible to install torchvision was without its dependent packages:</p>
<p><code> pipenv run pip install --no-deps torchvision</code></p>
<p>But this did not resolve the problem of installing flair via pipenv since the dependencies are needed.</p>
|
<p>Try to clean up the Pipfile and the virtual environment first. It looks like the transitive dependencies and the ones declared in the Pipfile clash.</p>
<p>Then try to install torchvision like this:</p>
<pre><code>pipenv install torchvision
</code></pre>
<p>This will install the latest torchvision version and the compatible torch version:</p>
<p><a href="https://github.com/pytorch/vision" rel="nofollow noreferrer">Source</a>:</p>
<pre><code>torch torchvision python
master / nightly master / nightly >=3.6
1.5.0 0.6.0 >=3.5
1.4.0 0.5.0 ==2.7, >=3.5, <=3.8
1.3.1 0.4.2 ==2.7, >=3.5, <=3.7
1.3.0 0.4.1 ==2.7, >=3.5, <=3.7
1.2.0 0.4.0 ==2.7, >=3.5, <=3.7
1.1.0 0.3.0 ==2.7, >=3.5, <=3.7
<=1.0.1 0.2.2 ==2.7, >=3.5, <=3.7
</code></pre>
<p>Flair works with Pytorch 1.1+ as stated <a href="https://github.com/flairNLP/flair" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>The project is based on PyTorch 1.1+ and Python 3.6+, because method
signatures and type hints are beautiful.</p>
</blockquote>
<p>After installing torchvision proceed to install flair:</p>
<pre><code>pipenv install flair
</code></pre>
<p>Here you can find a working Pipfile and Pipfile.lock file, after the two operations were concluded:</p>
<p>Pipfile</p>
<pre>name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
torchvision = "*"
flair = "*"
[requires]
python_version = "3.6"
</pre>
<p><a href="https://git.io/JJvQo" rel="nofollow noreferrer">Pipfile.lock</a></p>
|
python|pytorch|pipenv|pipfile|flair
| 0
|
375,840
| 62,735,038
|
How to identify from a list of lists which coordinates (x,y) appear and place them into a data frame
|
<p>I have a list of lists with different coordinates from different images. I want to create a data frame with two columns, coordinates and value, where value is how many times we can find that pixel (this value has to be 0 if it's not found anywhere in the list, or the concrete count if we can find it x times).
What I have done is this:</p>
<pre><code>p = (0,0)
x = 0
y = 1
z = 0
d= {'Coordinates': [p], 'Value': [z]}
df = pd.DataFrame(data=d)
for _ in range(img.size[0]*img.size[1]-1):
new_row = {"p":(x,y), "z":z}
df = df.append(new_row, ignore_index = True)
y = y+1
if y == img.size[1]+1:
y = 0
x= x+1
print(df)
</code></pre>
<p>The pixels marked are in a list of lists and what should be changed from this code is the z value.
The data frame I want to get is something like:</p>
<p><a href="https://i.stack.imgur.com/FghPA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FghPA.png" alt="Data frame" /></a></p>
<p>An example of the list of coordinates I have is:
<a href="https://i.stack.imgur.com/i9VmU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i9VmU.png" alt="Coordinates" /></a>
Thanks for your help!! <3</p>
|
<p>You can use <code>Counter</code> to get all the unique pixels that have a count not equal to 0, then in order to add pixels that also have a count of 0, you can loop through all pixels (0,0)... (max_x, max_y) and compare their count to the Counter object.</p>
<pre><code>import numpy as np
import pandas as pd
from collections import Counter
coordinates = [[(0,1),(0,2),(0,3)],[(0,1),(0,2),(1,1)],[(1,1),(2,2),(3,3)]]
# this should return a value of 2 for (0,1),(0,2),(1,1)
# value of 1 for (0,3),(2,2),(3,3)
# values of 0 for all other pixels from (0,0)... (3,3) not in coordinates
# flatten the list of coordinates
all_coordinates = [item for img in coordinates for item in img]
c = Counter(all_coordinates)
# you want to look at the counts for pixels (0,0), ... (3,3) in c
max_x = max([item[0] for item in all_coordinates])
max_y = max([item[1] for item in all_coordinates])
coordinates_dict = dict()
for i in range(max_x + 1):
for j in range(max_y + 1):
coordinates_dict.update({(i,j): c[(i,j)]})
df = pd.DataFrame(list(coordinates_dict.items()), columns=['Coordinates','Value'])
</code></pre>
<hr />
<p>Output:</p>
<pre><code>>>> df
Coordinates Value
0 (0, 0) 0
1 (0, 1) 2
2 (0, 2) 2
3 (0, 3) 1
4 (1, 0) 0
5 (1, 1) 2
6 (1, 2) 0
7 (1, 3) 0
8 (2, 0) 0
9 (2, 1) 0
10 (2, 2) 1
11 (2, 3) 0
12 (3, 0) 0
13 (3, 1) 0
14 (3, 2) 0
15 (3, 3) 1
</code></pre>
|
python|python-3.x|pandas|dataframe|coordinates
| 1
|
375,841
| 62,821,289
|
Python - MemoryError: Unable to allocate for an array with type int64
|
<p>I'm trying to create a numpy matrix:</p>
<pre><code>matrix = np.zeros((242993, 9000000, 13), dtype=int)
</code></pre>
<p>But I am getting MemoryError:</p>
<pre><code>MemoryError: Unable to allocate 207. TiB for an array with shape (242993, 9000000, 13) and data type int64
</code></pre>
<p><strong>EDIT: I'm running on Linux Mint 64</strong></p>
<p><strong>EDIT 2: What I'm trying to do is to create a matrix that I will use save int/float numbers</strong></p>
<p><strong>EDIT 3: The question is how can I create the matrix with this size?</strong></p>
<p>Anyone can help me? Thanks</p>
|
<p><code>matrix = np.zeros((242993, 9000000, 13), dtype=int)</code> requires 242993 x 9000000 x 13 ≈ 2.8e13 cells at 64 bits (8 bytes) each, i.e. about 2.3e14 bytes: that is the 207 TiB the error message reports. Even with the smallest practical dtype (one byte per cell, e.g. <code>int8</code> or <code>bool</code>) it would still be about 26 TiB, which won't fit into your memory. Depending on your application, you might store it differently (for instance sparsely) or break it down into smaller arrays.</p>
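<p>A quick way to sanity-check the size before allocating anything (pure arithmetic, no allocation):</p>
<pre><code>shape = (242993, 9000000, 13)
nbytes = 8  # bytes per int64 cell
total = shape[0] * shape[1] * shape[2] * nbytes
print(total / 2**40, "TiB")  # ~206.8 TiB
</code></pre>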
|
python|python-3.x|numpy|numpy-ndarray
| 0
|
375,842
| 62,760,099
|
Groupby calculation for only one part of dataframe - Pandas
|
<p>I am trying to make some calculations using two dataframes and a groupby code. However, I am not able to find a way to make these calculations only when my variable "date_int" is larger than or equal to a specific number (e.g., 20180501; equivalent to the date "2018-05-01").</p>
<p>In other words, the groupby code that I am using does not restrict itself to the relevant combinations (the ones starting on 2018-05-01) and does all the calculations for previous combinations as well. My purpose is to save time and to have code that only calculates for the combinations that I am looking for, starting from 2018-05-01.</p>
<p>Below I give the two dataframes, the calculation (conflicting part of the code), and the expected result.</p>
<p>Dataframe 1 (df):</p>
<pre class="lang-py prettyprint-override"><code>idx = [np.array(['Jan-18', 'Jan-18', 'Feb-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'May-18', 'Jun-18', 'Jun-18', 'Jun-18','Jul-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18', 'Nov-18', 'Dec-18', 'Dec-18',]),np.array(['A', 'B', 'B', 'A', 'B', 'C', 'D', 'E', 'B', 'A', 'B', 'C','A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'])]
data = [{'place': 1}, {'place': 5}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3},{'place': 1}, {'place': 6}, {'place': 3}, {'place': 5}, {'place': 2}, {'place': 3},{'place': 1}, {'place': 9}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3}, {'place': 6}, {'place': 8}, {'place': 2}, {'place': 7}, {'place': 9}]
df = pd.DataFrame(data, index=idx, columns=['place'])
df.index.names=['date','name']
df=df.reset_index()
df['date'] = pd.to_datetime(df['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df.reset_index(inplace=True)
df['place'] = df.place.astype('float')
df['date_int'] = df['date'].astype('str').str.replace('-','').astype('int64')
df.set_index(['date_int','name'], inplace = True)
</code></pre>
<p>Dataframe 2 (df2):</p>
<pre class="lang-py prettyprint-override"><code>idx = [np.array(['Jan-18', 'Jan-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18','Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Dec-18', 'Dec-18',]),
np.array(['A', 'B', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C','C', 'C', 'D', 'D', 'D', 'D', 'E', 'E', 'E', 'E', 'A', 'A', 'B', 'B', 'C', 'C', 'B', 'C', 'A', 'B', 'C', 'C', 'A', 'A', 'B', 'B', 'B', 'C']),
np.array(['B', 'A', 'B', 'C', 'D', 'E', 'A', 'C', 'D', 'E', 'A', 'B','D', 'E', 'A', 'B', 'C', 'E', 'A', 'B', 'C', 'D', 'B', 'C', 'A', 'C', 'A', 'B', 'C', 'B', 'B', 'A', 'A', 'B', 'C', 'B', 'C', 'A', 'C', 'B'])]
data = [{'xx': -4, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -5, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 4, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1},{'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -6, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -1, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': 5, 'win': 1}, {'xx': 3, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': -6, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 2, 'win': 1}]
df2 = pd.DataFrame(data, index=idx, columns=['xx','win'])
df2.index.names=['date','i1', 'i2']
df2=df2.reset_index()
df2['date'] = pd.to_datetime(df2['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df2.reset_index(inplace=True)
df2['xx'] = df2.xx.astype('float')
df2['date_int'] = df2['date'].astype('str').str.replace('-','').astype('int64')
df2=df2.drop(['date','index'], axis=1)
df2['date_int2']=df2['date_int']
df2.set_index(['date_int','i1','i2'], inplace = True)
</code></pre>
<p>The conflicting code (which doesn't restrict the calculation to values of the variable "date_int" greater than or equal to 20180501):</p>
<pre class="lang-py prettyprint-override"><code>df10=df.copy()
if (df10['date']>='2018-05-01').any():
df10['output'] = (df2.assign(to_this = df2['xx'][df2['date_int2']>=20180501])).groupby(level=[1,2]).to_this.cumcount().sum(level=[0,1])
df10['output'].fillna(0,inplace=True)
</code></pre>
<p>The expected outcome:</p>
<pre class="lang-py prettyprint-override"><code> index date place output
date_int name
20180501 B 8 2018-05-01 3.0 0.0
20180601 A 9 2018-06-01 5.0 3.0
B 10 2018-06-01 2.0 3.0
C 11 2018-06-01 3.0 2.0
20180701 A 12 2018-07-01 1.0 0.0
20180801 B 13 2018-08-01 9.0 2.0
C 14 2018-08-01 3.0 2.0
20180901 A 15 2018-09-01 2.0 3.0
B 16 2018-09-01 7.0 3.0
20181001 C 17 2018-10-01 3.0 5.0
A 18 2018-10-01 6.0 6.0
B 19 2018-10-01 8.0 7.0
20181101 A 20 2018-11-01 2.0 0.0
20181201 B 21 2018-12-01 7.0 4.0
C 22 2018-12-01 9.0 4.0
</code></pre>
<p>If you could elaborate on the code to make it work only when my variable "date_int" is larger than or equal to a specific value, it would be useful, as it will save me a lot of time.</p>
|
<p><strong>Explanation</strong>:</p>
<p>The updated code below uses a different approach. Rather than remove data from the <code>DataFrame</code> after it is created, it does not use that data at all. This is done by a <code>mask_it()</code> function. This function can be used to create a <code>boolean</code> <code>mask</code> based on the <code>date_int</code>. This <code>mask</code> can be applied to all the <code>arrays/lists</code> such as <code>idx</code>, <code>data</code> etc. to keep only those datapoints that are on or after the <code>date_int</code>. Next, these subset of indices/data are used to create the <code>DataFrame</code>.</p>
<p><strong>Notes</strong>:</p>
<ol>
<li>Code-1 below seems to be correct, but does not match the output posted in the question. Here, <code>df1</code> and <code>df2</code> are created from an already <code>masked</code> list of indices before any grouping. So, only dates on or after the <code>date_int</code> are used, and therefore <code>.cumcount()</code> may be correct. This may be important because grouping at <code>level=[1,2]</code> does not include any dates.</li>
<li>Code-2 below matches the output posted in the question, however it may not be correct, because here the grouping is done "before" any <code>dates</code> are removed. Thus, the grouping at <code>level=[1,2]</code> could include dates before <code>date_int</code>. This can be checked by inspecting <code>df2.groupby(level=[1,2])['xx'].groups</code> (a property, not a method).</li>
</ol>
<p><em>Side-note: The updated code in the question is giving me a different expected output for <code>df10</code> than what is posted in the question</em></p>
<p><strong>Code-1</strong>:</p>
<pre><code>### Import libraries
import numpy as np
import pandas as pd
import datetime
### Create function for mask
def mask_it(date_int, xarray):
# Create 'start_date'
dt = str(date_int) # convert integer 'date_int' to string
dt = datetime.datetime(year=int(dt[0:4]), month=int(dt[4:6]), day=int(dt[6:8])) # convert to date
dtyear = dt.strftime("%y") # get last two digits of year
dtmonth = dt.strftime("%B")[0:3] # get first three characters of month
start_date = str(dtmonth)+'-'+str(dtyear)
start_date = datetime.datetime.strptime(start_date, '%b-%y')
# Convert array dates from string to datetime
xarray_NEW = np.array([ datetime.datetime.strptime(i, '%b-%y') for i in xarray])
print(type(start_date))
# Create boolean mask
mask = [i >= start_date for i in xarray_NEW]
return mask
### Function to create `df1`
def start_date_df(date_int):
# Index
df1_idx_date = np.array(['Jan-18', 'Jan-18', 'Feb-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'May-18', 'Jun-18', 'Jun-18', 'Jun-18','Jul-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18', 'Nov-18', 'Dec-18', 'Dec-18',])
df1_idx_char = np.array(['A', 'B', 'B', 'A', 'B', 'C', 'D', 'E', 'B', 'A', 'B', 'C','A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'])
# Data
df1_data = np.array([{'place': 1}, {'place': 5}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3},{'place': 1}, {'place': 6}, {'place': 3}, {'place': 5}, {'place': 2}, {'place': 3},{'place': 1}, {'place': 9}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3}, {'place': 6}, {'place': 8}, {'place': 2}, {'place': 7}, {'place': 9}])
#:::::::::::::::::::::::::::::::::::::::::::::
### Subset by using mask_it
#:::::::::::::::::::::::::::::::::::::::::::::
# Get mask
mask1 = mask_it(date_int, df1_idx_date)
# Use mask
df1_idx_date = df1_idx_date[mask1]
df1_idx_char = df1_idx_char[mask1]
df1_data = df1_data[mask1]
#:::::::::::::::::::::::::::::::::::::::::::::
### Get 'idx' and 'data'
idx = [df1_idx_date, df1_idx_char]
data = list(df1_data)
df = pd.DataFrame(data, index=idx, columns=['place'])
df.index.names=['date','name']
df=df.reset_index()
df['date'] = pd.to_datetime(df['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df.reset_index(inplace=True)
df['place'] = df.place.astype('float')
df['date_int'] = df['date'].astype('str').str.replace('-','').astype('int64')
df.set_index(['date_int','name'], inplace = True)
return df
### Function to create 'df2'
def start_date_df2(date_int):
# Index
df2_idx_date = np.array(['Jan-18', 'Jan-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18','Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Dec-18', 'Dec-18',])
df2_idx_char1 = np.array(['A', 'B', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C','C', 'C', 'D', 'D', 'D', 'D', 'E', 'E', 'E', 'E', 'A', 'A', 'B', 'B', 'C', 'C', 'B', 'C', 'A', 'B', 'C', 'C', 'A', 'A', 'B', 'B', 'B', 'C'])
df2_idx_char2 = np.array(['B', 'A', 'B', 'C', 'D', 'E', 'A', 'C', 'D', 'E', 'A', 'B','D', 'E', 'A', 'B', 'C', 'E', 'A', 'B', 'C', 'D', 'B', 'C', 'A', 'C', 'A', 'B', 'C', 'B', 'B', 'A', 'A', 'B', 'C', 'B', 'C', 'A', 'C', 'B'])
# Data
df2_data = np.array([{'xx': -4, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -5, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 4, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1},{'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -6, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -1, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': 5, 'win': 1}, {'xx': 3, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': -6, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 2, 'win': 1}])
#:::::::::::::::::::::::::::::::::::::::::::::
### Subset by using mask_it
#:::::::::::::::::::::::::::::::::::::::::::::
# Get mask
mask2 = mask_it(date_int, df2_idx_date)
# Use mask
df2_idx_date = df2_idx_date[mask2]
df2_idx_char1 = df2_idx_char1[mask2]
df2_idx_char2 = df2_idx_char2[mask2]
df2_data = df2_data[mask2]
#:::::::::::::::::::::::::::::::::::::::::::::
### Get 'idx' and 'data'
idx = [df2_idx_date, df2_idx_char1, df2_idx_char2]
data = list(df2_data)
df2 = pd.DataFrame(data, index=idx, columns=['xx','win'])
df2.index.names=['date','i1', 'i2']
df2=df2.reset_index()
df2['date'] = pd.to_datetime(df2['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df2.reset_index(inplace=True)
df2['xx'] = df2.xx.astype('float')
df2['date_int'] = df2['date'].astype('str').str.replace('-','').astype('int64')
df2=df2.drop(['date','index'], axis=1)
df2.set_index(['date_int','i1','i2'], inplace = True)
#........................
# Limit by date_int
df2 = df2.groupby(level=[1,2])['xx'].cumcount().sum(level=[0,1])
#df2 = df2.loc[date_int:,:]
#........................
return df2
### Get DataFrames based on 'date_int'
df = start_date_df(20180501)
df['output'] = start_date_df2(20180501)
df['output'].fillna(0,inplace=True)
</code></pre>
<p>Output - 1</p>
<pre><code>print(df)
index date place output
date_int name
20180501 B 0 2018-05-01 3.0 0.0
20180601 A 1 2018-06-01 5.0 0.0
B 2 2018-06-01 2.0 0.0
C 3 2018-06-01 3.0 0.0
20180701 A 4 2018-07-01 1.0 0.0
20180801 B 5 2018-08-01 9.0 1.0
C 6 2018-08-01 3.0 1.0
20180901 A 7 2018-09-01 2.0 1.0
B 8 2018-09-01 7.0 1.0
20181001 C 9 2018-10-01 3.0 3.0
A 10 2018-10-01 6.0 3.0
B 11 2018-10-01 8.0 4.0
20181101 A 12 2018-11-01 2.0 0.0
20181201 B 13 2018-12-01 7.0 3.0
C 14 2018-12-01 9.0 3.0
</code></pre>
<hr />
<hr />
<p><strong>Code - 2</strong>:</p>
<pre><code>### Import libraries
import numpy as np
import pandas as pd
import datetime
### Create function for mask
def mask_it(date_int, xarray):
# Create 'start_date'
dt = str(date_int) # convert integer 'date_int' to string
dt = datetime.datetime(year=int(dt[0:4]), month=int(dt[4:6]), day=int(dt[6:8])) # convert to date
dtyear = dt.strftime("%y") # get last two digits of year
dtmonth = dt.strftime("%B")[0:3] # get first three characters of month
start_date = str(dtmonth)+'-'+str(dtyear)
start_date = datetime.datetime.strptime(start_date, '%b-%y')
# Convert array dates from string to datetime
xarray_NEW = np.array([ datetime.datetime.strptime(i, '%b-%y') for i in xarray])
print(type(start_date))
# Create boolean mask
mask = [i >= start_date for i in xarray_NEW]
return mask
### Function to create `df1`
def start_date_df(date_int):
# Index
df1_idx_date = np.array(['Jan-18', 'Jan-18', 'Feb-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'May-18', 'Jun-18', 'Jun-18', 'Jun-18','Jul-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18', 'Nov-18', 'Dec-18', 'Dec-18',])
df1_idx_char = np.array(['A', 'B', 'B', 'A', 'B', 'C', 'D', 'E', 'B', 'A', 'B', 'C','A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'])
# Data
df1_data = np.array([{'place': 1}, {'place': 5}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3},{'place': 1}, {'place': 6}, {'place': 3}, {'place': 5}, {'place': 2}, {'place': 3},{'place': 1}, {'place': 9}, {'place': 3}, {'place': 2}, {'place': 7}, {'place': 3}, {'place': 6}, {'place': 8}, {'place': 2}, {'place': 7}, {'place': 9}])
#:::::::::::::::::::::::::::::::::::::::::::::
### Subset by using mask_it
#:::::::::::::::::::::::::::::::::::::::::::::
# Get mask
mask1 = mask_it(date_int, df1_idx_date)
# Use mask
df1_idx_date = df1_idx_date[mask1]
df1_idx_char = df1_idx_char[mask1]
df1_data = df1_data[mask1]
#:::::::::::::::::::::::::::::::::::::::::::::
### Get 'idx' and 'data'
idx = [df1_idx_date, df1_idx_char]
data = list(df1_data)
df = pd.DataFrame(data, index=idx, columns=['place'])
df.index.names=['date','name']
df=df.reset_index()
df['date'] = pd.to_datetime(df['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df.reset_index(inplace=True)
df['place'] = df.place.astype('float')
df['date_int'] = df['date'].astype('str').str.replace('-','').astype('int64')
df.set_index(['date_int','name'], inplace = True)
return df
### Function to create 'df2'
def start_date_df2(date_int):
# Index
df2_idx_date = np.array(['Jan-18', 'Jan-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18', 'Mar-18', 'Mar-18', 'Mar-18','Mar-18','Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Jun-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Oct-18','Oct-18', 'Dec-18', 'Dec-18',])
df2_idx_char1 = np.array(['A', 'B', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C','C', 'C', 'D', 'D', 'D', 'D', 'E', 'E', 'E', 'E', 'A', 'A', 'B', 'B', 'C', 'C', 'B', 'C', 'A', 'B', 'C', 'C', 'A', 'A', 'B', 'B', 'B', 'C'])
df2_idx_char2 = np.array(['B', 'A', 'B', 'C', 'D', 'E', 'A', 'C', 'D', 'E', 'A', 'B','D', 'E', 'A', 'B', 'C', 'E', 'A', 'B', 'C', 'D', 'B', 'C', 'A', 'C', 'A', 'B', 'C', 'B', 'B', 'A', 'A', 'B', 'C', 'B', 'C', 'A', 'C', 'B'])
# Data
df2_data = np.array([{'xx': -4, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -5, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 4, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': 1, 'win': 1}, {'xx': -4, 'win': -1},{'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -6, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 4, 'win': 1}, {'xx': -1, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': 5, 'win': 1}, {'xx': 3, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -1, 'win': -1}, {'xx': -2, 'win': -1}, {'xx': 1, 'win': 1}, {'xx': 6, 'win': 1}, {'xx': -6, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': -3, 'win': -1}, {'xx': -5, 'win': -1}, {'xx': 3, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 5, 'win': 1}, {'xx': 2, 'win': 1}, {'xx': -2, 'win': -1}, {'xx': 2, 'win': 1}])
"""
#:::::::::::::::::::::::::::::::::::::::::::::
### Subset by using mask_it
#:::::::::::::::::::::::::::::::::::::::::::::
# Get mask
mask2 = mask_it(date_int, df2_idx_date)
# Use mask
df2_idx_date = df2_idx_date[mask2]
df2_idx_char1 = df2_idx_char1[mask2]
df2_idx_char2 = df2_idx_char2[mask2]
df2_data = df2_data[mask2]
#:::::::::::::::::::::::::::::::::::::::::::::
"""
### Get 'idx' and 'data'
idx = [df2_idx_date, df2_idx_char1, df2_idx_char2]
data = list(df2_data)
df2 = pd.DataFrame(data, index=idx, columns=['xx','win'])
df2.index.names=['date','i1', 'i2']
df2=df2.reset_index()
df2['date'] = pd.to_datetime(df2['date'],format = '%b-%y') # http://strftime.org/
#df=df.set_index(['date','type'])
df2.reset_index(inplace=True)
df2['xx'] = df2.xx.astype('float')
df2['date_int'] = df2['date'].astype('str').str.replace('-','').astype('int64')
df2=df2.drop(['date','index'], axis=1)
df2.set_index(['date_int','i1','i2'], inplace = True)
#........................
# Limit by date_int
df2 = df2.groupby(level=[1,2])['xx'].cumcount().sum(level=[0,1])
#df2 = df2.loc[date_int:,:]
#........................
return df2
### Get DataFrames based on 'date_int'
df = start_date_df(20180501)
df['output'] = start_date_df2(20180501)
df['output'].fillna(0,inplace=True)
</code></pre>
<p>Output - 2:</p>
<pre><code>print(df)
index date place output
date_int name
20180501 B 0 2018-05-01 3.0 0.0
20180601 A 1 2018-06-01 5.0 3.0
B 2 2018-06-01 2.0 3.0
C 3 2018-06-01 3.0 2.0
20180701 A 4 2018-07-01 1.0 0.0
20180801 B 5 2018-08-01 9.0 2.0
C 6 2018-08-01 3.0 2.0
20180901 A 7 2018-09-01 2.0 3.0
B 8 2018-09-01 7.0 3.0
20181001 C 9 2018-10-01 3.0 5.0
A 10 2018-10-01 6.0 6.0
B 11 2018-10-01 8.0 7.0
20181101 A 12 2018-11-01 2.0 0.0
20181201 B 13 2018-12-01 7.0 4.0
C 14 2018-12-01 9.0 4.0
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 0
|
375,843
| 54,629,100
|
resample multiple columns with pandas
|
<p>I want to resample daily stock data into monthly stock data.</p>
<pre><code>data = yf.download(['AAPL', 'TSLA', 'FB'], '2018-01-01', '2019-01-01')['Close']
for column in data:
data[column].resample('M').last()
print(data[column])
print(data)
</code></pre>
<p>My data:</p>
<pre><code> AAPL FB TSLA
Date
2018-01-02 172.259995 181.419998 320.529999
2018-01-03 172.229996 184.669998 317.250000
2018-01-04 173.029999 184.330002 314.619995
2018-01-05 175.000000 186.850006 316.579987
2018-01-08 174.350006 188.279999 336.410001
</code></pre>
|
<p>In your loop the result of <code>resample</code> is never assigned anywhere, so the DataFrame is left unchanged. There is also no need to loop over columns at all; you can apply the resample call to the entire DataFrame:</p>
<pre><code>data = yf.download(['AAPL', 'TSLA', 'FB'], '2018-01-01', '2019-01-01')['Close']
data_resampled = data.resample('M').last()
print(data_resampled)
</code></pre>
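<p>Since all three columns share the same <code>DatetimeIndex</code>, a single <code>resample('M').last()</code> on the frame handles them all at once.</p>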
|
python|pandas
| 2
|
375,844
| 54,292,672
|
Pandas: Creating dataframe based on keywords from dictionary
|
<p>I have a dictionary where the key is a model name and the values are keywords. I want to filter every row in a column whose string contains one of the keywords from the dictionary values.
Matching should be case-insensitive. </p>
<p>Dictionary looks like this:</p>
<pre><code>{'J7 2017': [' J730F', 'amoled'], 'J5 2017': ['J530', 'TFT']}
</code></pre>
<p>data frame looks like:</p>
<pre><code> name
0 SCREEN SAMSUNG FULL AMOLED
1 SCREEN SAMSUNG J7 J730F 2017
2 WYŚWIETLACZ LCD + DIGITIZER SAMSUNG J5 2017 (J530)
3 3 colors SCREEN LCD SAMSUNG Galaxy J5 TFT
4 LG K10 K410 K420N K430
</code></pre>
<p>As a result I want each model name [key] to have a separate data frame containing all rows that matched its keywords.</p>
<p>so the output would be:</p>
<pre><code>dfJ72017:
name
0 SCREEN SAMSUNG FULL AMOLED
1 SCREEN SAMSUNG J7 J730F 2017
dfJ52017:
name
0 WYŚWIETLACZ LCD + DIGITIZER SAMSUNG J5 2017 (J530)
1 3 colors SCREEN LCD SAMSUNG Galaxy J5 TFT
</code></pre>
<p>And do it for all keys and values in dictionary.</p>
|
<p>Use a dict comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a> (passing <code>case=False</code> for the case-insensitive matching you asked for) and filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>; <code>'|'.join</code> here builds a regex <code>OR</code>:</p>
<pre><code>d = {'J7 2017': [' J730F', 'AMOLED'], 'J5 2017': ['J530', 'TFT']}
dfs = {k: df[df['name'].str.contains('|'.join(v), case=False)] for k, v in d.items()}
print (dfs)
{'J7 2017': name
0 SCREEN SAMSUNG FULL AMOLED
1 SCREEN SAMSUNG J7 J730F 2017, 'J5 2017': name
2 WYŚWIETLACZ LCD + DIGITIZER SAMSUNG J5 2017 (J...
3 3 colors SCREEN LCD SAMSUNG Galaxy J5 TFT}
</code></pre>
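<p>The individual frames can then be pulled out by key, e.g. <code>dfs['J7 2017']</code>.</p>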
|
python|pandas|data-science
| 0
|
375,845
| 54,602,610
|
Simplest way to index within a dimension
|
<p>I have two tensors <code>x</code> and <code>y</code> that have equal <code>shape</code> in the first <code>k</code> dimensions. The second tensor contains indices to retrieve values from the first along the last dimension, so the output <code>z</code> should be such that <code>z[i_1, i_2,...,i_k, j] = x[i_1, i_2,...,i_k, y[i_1, i_2, ...,i_k, j]]</code>.</p>
<p>I currently have a method that requires reshaping <code>x</code> and <code>y</code>, appending row indices of <code>y</code>, using <code>gather_nd</code> and finally returning to the original shape. Is there a more elegant method? Is there a way to get the tensor of indices (like <code>np.indices</code>), preferably one that does not require knowledge of the rank or shape beyond that they satisfy the above condition?</p>
|
<p>Found it! <code>tf.batch_gather</code> and <code>tf.batch_scatter</code>.</p>
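<p>A minimal sketch of the gather case (TF 1.x; the index convention follows the question, with <code>y</code> indexing <code>x</code> along its last dimension):</p>
<pre><code>import tensorflow as tf

x = tf.constant([[10., 20., 30.],
                 [40., 50., 60.]])  # shape (2, 3)
y = tf.constant([[2, 0],
                 [1, 1]])           # shape (2, 2), indices into the last dim of x
z = tf.batch_gather(x, y)           # z[i, j] = x[i, y[i, j]] -> [[30, 10], [50, 50]]
</code></pre>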
|
python|tensorflow
| 3
|
375,846
| 54,471,306
|
Gradients and Quiver plots
|
<pre><code>from scipy import *
import matplotlib.pyplot as plt
def plot_elines(x_grid, y_grid, potential, field):
fig, ax = plt.subplots(figsize=(13, 13))
im_cs = ax.contour(x_grid, y_grid, potential, 18, cmap='inferno')
plt.clabel(im_cs, inline=1, fontsize=7)
ax.quiver(x_grid[::3, ::3], y_grid[::3, ::3],
field[0, ::3, ::3], field[1, ::3, ::3],)
ax.set_xlabel("$x$")
ax.set_ylabel("$y$")
plt.show()
# define q configuration (x,y positions)
charges = [
[1, 1],
]
xx, yy = meshgrid(linspace(-4, 4), linspace(-4, 4))
# potential function
e_pot = 0.
for idx, q in enumerate(charges):
dx, dy = xx-q[0], yy-q[1]
rr = hypot(dx, dy)
e_pot += 1/(4*pi) * 1./rr
e_field = gradient(-e_pot)
e_field /= hypot(e_field[0], e_field[1]) * 5
# why is this needed?
e_field[0] = e_field[0].T
e_field[1] = e_field[1].T
plot_elines(xx, yy, e_pot, e_field)
</code></pre>
<p>I have a question about using the <code>gradient</code> function from numpy/scipy.</p>
<p>I am plotting here the electric field equipotential lines and the field vectors of a single positive charge. The definition is </p>
<blockquote>
<p>E = -grad(V)</p>
</blockquote>
<p>By definition, the field vectors (quiver) and equipotential lines (contour) are supposed to be orthogonal to each other at all points in space, and since the charge is positive, the arrows need to point away from the charge itself.</p>
<p>I am using scipy's <code>gradient</code> function to calculate E, but I found that the output is wrong if I don't transpose the x-y grid output by the <code>gradient</code> function.</p>
<p>Compare the two outputs (with <code>.T</code> (correct) and without <code>.T</code> (wrong)):</p>
<p><img src="https://i.stack.imgur.com/y8cEG.png" width="300" alt="right">
vs
<img src="https://i.stack.imgur.com/aqoN9.png" width="300" alt="wrong"></p>
<p>Why is the transpose needed? Or am I plotting something wrongly? </p>
<p>Thanks.</p>
|
<p>The fact that the transpose gives you the correct plot is pure coincidence, because the charge is positioned symmetrically in x and y (i.e. on the 45° line). </p>
<p>The real problem comes from the wrong interpretation of <code>numpy.gradient</code>. It will return the gradient axis-wise. The first array for the axis 0 and the second for axis 1. Now, axis 0 in your case corresponds to the <code>y</code> axis, and axis 1 to the <code>x</code> axis. </p>
<pre><code>e_field_y, e_field_x = numpy.gradient(-e_pot)
</code></pre>
<p>So when you select the respective field components in the quiver plot, you need to choose the first entry as the y component and the second as the x component.</p>
<pre><code>ax.quiver(x_grid[::3, ::3], y_grid[::3, ::3],
field[1, ::3, ::3], field[0, ::3, ::3],)
</code></pre>
<p>The complete code would then look like</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def plot_elines(x_grid, y_grid, potential, field):
fig, ax = plt.subplots(figsize=(13, 13))
im_cs = ax.contour(x_grid, y_grid, potential, 18, cmap='inferno')
plt.clabel(im_cs, inline=1, fontsize=7)
ax.quiver(x_grid[::3, ::3], y_grid[::3, ::3],
field[1, ::3, ::3], field[0, ::3, ::3],)
ax.set_xlabel("$x$")
ax.set_ylabel("$y$")
ax.set_aspect("equal")
plt.show()
# define q configuration (x,y positions)
charges = [
[1, 0],
]
xx, yy = np.meshgrid(np.linspace(-4, 4), np.linspace(-4, 4))
# potential function
e_pot = 0.
for idx, q in enumerate(charges):
dx, dy = xx-q[0], yy-q[1]
rr = np.hypot(dx, dy)
e_pot += 1/(4*np.pi) * 1./rr
e_field = np.gradient(-e_pot)
e_field /= np.hypot(e_field[0], e_field[1]) * 5
plot_elines(xx, yy, e_pot, e_field)
</code></pre>
<p>where I put the charge off the diagonal.</p>
<p><a href="https://i.stack.imgur.com/R8MFQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R8MFQ.png" alt="enter image description here"></a></p>
<p>A final hint: A good consistency check for such codes is if you use non-identical shaped grids. E.g. if you take 50 points along x and 51 along y, you would have gotten an error instead of a seemingly working code and would be directed more easily towards the underlying problem.</p>
|
python|numpy|matplotlib
| 1
|
375,847
| 54,595,264
|
How to best save model and write summary during hyper-parameter tuning?
|
<p>What is the best approach to save the model and write summaries for each run during hyperparameter tuning? Currently I have a bunch of models and summaries saved under 'training' and 'validation' directories, and I don't know which was generated from which hyperparameters. It is also hard to identify which model produced the best result on the validation set. </p>
<p>The tensorboard graph looks rather messy. Is there a clean way of logging and inspecting runs (from hyperparameter tuning)? Any tricks or methods I don't know about that make it easy? Or would you recommend using mlflow, comet, or others? Thanks in advance!</p>
<p><a href="https://i.stack.imgur.com/EsBSo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EsBSo.png" alt="tensorboard graph"></a></p>
|
<p>I haven't used any of the tools you mentioned but here's how I implemented the logging of hyperparameters and results.</p>
<p>Just create a <code>pandas DataFrame</code> or even a basic dictionary with the hyperparameter names and values. To the same data structure, add the performance metrics obtained using those hyperparameter values. This way, the parameters and the metrics can be associated with one another.</p>
<p>Then save it as a <code>CSV</code> file, which can be loaded and used for analysis and visualization purposes later on.</p>
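<p>A minimal sketch of that idea (the <code>train_and_evaluate</code> routine and the file names here are hypothetical placeholders for your own training code):</p>
<pre><code>import pandas as pd

runs = []
for lr in [1e-2, 1e-3]:
    for batch_size in [32, 64]:
        val_acc = train_and_evaluate(lr, batch_size)  # hypothetical: your training/eval routine
        runs.append({'lr': lr,
                     'batch_size': batch_size,
                     'val_acc': val_acc,
                     'model_file': 'model_lr%s_bs%s.h5' % (lr, batch_size)})

# one row per run: hyperparameters and metrics stay associated
pd.DataFrame(runs).to_csv('tuning_log.csv', index=False)
</code></pre>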
<p>Regarding the models themselves, an identifier can be added to the name which can be associated with a specific hyperparameter combination.</p>
<p>It is a simple, non-sophisticated approach, but works for me.</p>
|
tensorflow|machine-learning|deep-learning
| 1
|
375,848
| 54,306,002
|
Groupby by two different options simultaneously
|
<p>Good morning.</p>
<p>I have a pandas dataframe like the following:</p>
<pre><code>df =
p f c a
0 1 2 1 16.32
1 1 2 2 48
2 1 2 3 60
3 1 2 4 112
4 1 2 5 52
5 1 3 6 288
6 1 4 7 201
7 1 4 8 52
8 1 4 4 44
9 1 5 7 251.2
10 1 5 9 220
11 1 5 8 83
12 1 5 10 142
13 2 1 11 100
14 2 1 12 110
15 2 2 11 120
16 2 2 13 130
17 2 3 13 140
18 2 3 14 150
19 2 4 12 160
</code></pre>
<p>And I want to do a groupby along columns c and a, but aggregating c with something like SQL's COUNT(DISTINCT) and a with sum(), so that my result will be:</p>
<pre><code>df_result =
p f c a
0 1 2 5 288.32
1 1 3 6 576.32
2 1 4 8 873.92
3 1 5 10 1570.12
4 2 1 2 210
5 2 2 3 460
6 2 3 4 750
7 2 4 4 910
</code></pre>
<p>But I can't reach that result trying different combinations of groupby and stack.</p>
<p><strong>EDIT</strong>
Take into account that column 'c' stores ID numbers, so the ascending order is just an example and a max aggregate wouldn't work. Sorry for not saying this before.</p>
<p>I think a possible solution would be to split it into two different dataframes, group them, and then merge, but I'm not sure if this is the best solution.</p>
<p>Thank you very much in advance.</p>
|
<p>You need aggregate <code>list</code> and <code>sum</code> first, then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>DataFrame.cumsum</code></a>:</p>
<pre><code>df = df.groupby('f').agg({'c':list, 'a':'sum'}).cumsum()
print (df)
c a
f
2 [154, 215, 1, 8000, 214] 288.32
3 [154, 215, 1, 8000, 214, 640] 576.32
4 [154, 215, 1, 8000, 214, 640, 780, 830, 8000] 873.32
5 [154, 215, 1, 8000, 214, 640, 780, 830, 8000, ... 1569.52
</code></pre>
<p>And finally get the number of unique values per list:</p>
<pre><code>df['c'] = df['c'].apply(lambda x: len(set(x)))
df = df.reset_index()
print (df)
f c a
0 2 5 288.32
1 3 6 576.32
2 4 8 873.32
3 5 10 1569.52
</code></pre>
<p>EDIT:</p>
<pre><code>df = (df.groupby(['p','f']).agg({'c':list, 'a':'sum'})
.groupby('p').apply(np.cumsum))
df['c'] = df['c'].apply(lambda x: len(set(x)))
df = df.reset_index()
print (df)
p f c a
0 1 2 5 288.32
1 1 3 6 576.32
2 1 4 8 873.32
3 1 5 10 1569.52
4 2 1 2 210
5 2 2 3 460
6 2 3 4 750
7 2 4 4 910
</code></pre>
|
python|pandas
| 1
|
375,849
| 54,653,036
|
assigning multiple values to different cells in a dataframe
|
<p>This is probably an easy question, but I couldn't find any simple way to do that. Imagine the following dataframe:</p>
<pre><code>df = pd.DataFrame(index=range(10), columns=range(5))
</code></pre>
<p>and three lists that contain indices, columns, and values of the defined dataframe that I intend to change:</p>
<pre><code>idx_list = [1,5,3,7] # the indices of the cells that I want to change
col_list = [1,4,3,1] # the columns of the cells that I want to change
value_list = [9,8,7,6] # the final value of whose cells`
</code></pre>
<p>I was wondering if there exist a function in <code>pandas</code> that does the following efficiently:</p>
<pre><code>for i in range(len(idx_list)):
df.loc[idx_list[i], col_list[i]] = value_list[i]
</code></pre>
<p>Thanks.</p>
|
<p>Using <code>.values</code></p>
<pre><code>df.values[idx_list,col_list]=value_list
df
Out[205]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 NaN 9 NaN NaN NaN
2 NaN NaN NaN NaN NaN
3 NaN NaN NaN 7 NaN
4 NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN 8
6 NaN NaN NaN NaN NaN
7 NaN 6 NaN NaN NaN
8 NaN NaN NaN NaN NaN
9 NaN NaN NaN NaN NaN
</code></pre>
<p>Or another, less efficient way:</p>
<pre><code>updatedf=pd.Series(value_list,index=pd.MultiIndex.from_arrays([idx_list,col_list])).unstack()
df.update(updatedf)
</code></pre>
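<p>Worth noting: <code>df.values</code> returns a single NumPy array, so the in-place assignment in the first approach only propagates back to the frame when it has one common dtype (as in this all-<code>NaN</code> example); with mixed dtypes <code>.values</code> is a copy and the update would be lost.</p>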
|
python|pandas|dataframe
| 1
|
375,850
| 54,322,621
|
Pivot a two-column dataframe
|
<p><strong>Question</strong></p>
<p>I have a dataframe <code>untidy</code></p>
<pre><code> attribute value
0 age 49
1 sex M
2 height 176
3 age 27
4 sex F
5 height 172
</code></pre>
<p>where the values in the <code>'attribute'</code> column repeat periodically. The desired output is <code>tidy</code></p>
<pre><code> age sex height
0 49 M 176
1 27 F 172
</code></pre>
<p>(The row and column order or additional labels don't matter, I can clean this up myself.)</p>
<p>Code for instantiation:</p>
<pre><code>untidy = pd.DataFrame([['age', 49],['sex', 'M'],['height', 176],['age', 27],['sex', 'F'],['height', 172]], columns=['attribute', 'value'])
tidy = pd.DataFrame([[49, 'M', 176], [27, 'F', 172]], columns=['age', 'sex', 'height'])
</code></pre>
<hr>
<p><strong>Attempts</strong></p>
<p>This looks like a simple pivot-operation, but my initial approach introduces <code>NaN</code> values:</p>
<pre><code>>>> untidy.pivot(columns='attribute', values='value')
attribute age height sex
0 49 NaN NaN
1 NaN NaN M
2 NaN 176 NaN
3 27 NaN NaN
4 NaN NaN F
5 NaN 172 NaN
</code></pre>
<p>Some messy attempts to fix this:</p>
<pre><code>>>> untidy.pivot(columns='attribute', values='value').apply(lambda c: c.dropna().reset_index(drop=True))
attribute age height sex
0 49 176 M
1 27 172 F
</code></pre>
<p><br></p>
<pre><code>>>> untidy.set_index([untidy.index//untidy['attribute'].nunique(), 'attribute']).unstack('attribute')
value
attribute age height sex
0 49 176 M
1 27 172 F
</code></pre>
<p>What's the idiomatic way to do this?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot.html" rel="nofollow noreferrer"><code>pandas.pivot</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> for new index values and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>rename_axis</code></a> for remove columns name:</p>
<pre><code>df = pd.pivot(index=untidy.groupby('attribute').cumcount(),
columns=untidy['attribute'],
values=untidy['value']).rename_axis(None, axis=1)
print (df)
age height sex
0 49 176 M
1 27 172 F
</code></pre>
<p>Another solution:</p>
<pre><code>df = (untidy.set_index([untidy.groupby('attribute').cumcount(), 'attribute'])['value']
.unstack()
.rename_axis(None, axis=1))
</code></pre>
|
python|pandas
| 3
|
375,851
| 54,669,337
|
Pandas Dataframe: difference between all dates for each unique id
|
<pre><code>[In 621]: df = pd.DataFrame({'id':[44,44,44,88,88,90,95],
'Status': ['Reject','Submit','Draft','Accept','Submit',
'Submit','Draft'],
'Datetime': ['2018-11-24 08:56:02',
'2018-10-24 18:12:02','2018-10-24 08:12:02',
'2018-10-29 13:17:02','2018-10-24 10:12:02',
'2018-12-30 08:43:12', '2019-01-24 06:12:02']
}, columns = ['id','Status', 'Datetime'])
df['Datetime'] = pd.to_datetime(df['Datetime'])
df
Out[621]:
id Status Datetime
0 44 Reject 2018-11-24 08:56:02
1 44 Submit 2018-10-24 18:12:02
2 44 Draft 2018-10-24 08:12:02
3 88 Accept 2018-10-29 13:17:02
4 88 Submit 2018-10-24 10:12:02
5 90 Submit 2018-12-30 08:43:12
6 95 Draft 2019-01-24 06:12:02
</code></pre>
<p>What I am trying to get is another column, e.g. <code>df['Time in Status']</code> which is the time that <code>id</code> spent at that status. </p>
<p>I've looked at <code>df.groupby()</code> but only found answers (<a href="https://stackoverflow.com/questions/38915186/pandas-use-groupby-to-count-difference-between-dates/38915746">such as this one</a>) for working out between two dates (e.g. first and last) regardless how how many dates are in between.</p>
<pre><code>df['Datetime'] = pd.to_datetime(df['Datetime'])
g = df.groupby('id')['Datetime']
print(df.groupby('id')['Datetime'].apply(lambda g: g.iloc[-1] - g.iloc[0]))
id
44 -32 days +23:16:00
88 -6 days +20:55:00
90 0 days 00:00:00
95 0 days 00:00:00
Name: Datetime, dtype: timedelta64[ns]
</code></pre>
<p>The closest I've come to getting the result is <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.diff.html" rel="nofollow noreferrer">DataFrameGroupBy.diff</a></p>
<pre><code>df['Time in Status'] = df.groupby('id')['Datetime'].diff()
df
id Status Datetime Time in Status
0 44 Reject 2018-11-24 08:56:02 NaT
1 44 Submit 2018-10-24 18:12:02 -31 days +09:16:00
2 44 Draft 2018-10-24 08:12:02 -1 days +14:00:00
3 88 Accept 2018-10-29 13:17:02 NaT
4 88 Submit 2018-10-24 10:12:02 -6 days +20:55:00
5 90 Submit 2018-12-30 08:43:12 NaT
6 95 Draft 2019-01-24 06:12:02 NaT
</code></pre>
<p>However there are two issues with this. First, how can I do this calculation starting with the earliest date and working through until the end? E.g. so in row <code>2</code>, instead of <code>-1 days +14:00:00</code> it would be <code>0 Days 10:00:00</code>? Or is this easier to solve by rearranging the order of the data before hand?</p>
<p>The other issue is the NaT. If there is no date to compare with, then the current day (i.e. datetime.now) would be used. I could apply this afterwards easy enough, but I was wondering if there might be a better solution to finding and replacing all the NaT values.</p>
|
<p>Exactly, you are right: it is first necessary to sort with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> on both columns:</p>
<pre><code>df = df.sort_values(['id', 'Datetime'])
df['Time in Status'] = df.groupby('id')['Datetime'].diff()
print (df)
id Status Datetime Time in Status
2 44 Draft 2018-10-24 08:12:02 NaT
1 44 Submit 2018-10-24 18:12:02 0 days 10:00:00
0 44 Reject 2018-11-24 08:56:02 30 days 14:44:00
4 88 Submit 2018-10-24 10:12:02 NaT
3 88 Accept 2018-10-29 13:17:02 5 days 03:05:00
5 90 Submit 2018-12-30 08:43:12 NaT
6 95 Draft 2019-01-24 06:12:02 NaT
</code></pre>
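<p>For the second issue, one hedged option (assuming, as the question suggests, that rows with no previous date should be measured against the current time) is:</p>
<pre><code>df['Time in Status'] = df['Time in Status'].fillna(pd.Timestamp.now() - df['Datetime'])
</code></pre>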
|
python|pandas|datetime|dataframe|group-by
| 2
|
375,852
| 54,416,773
|
How to balance unbalanced data in PyTorch with WeightedRandomSampler?
|
<p>I have a 2-class problem and my data is unbalanced:
class 0 has 232550 samples and class 1 has 13498 samples.
The PyTorch docs and the internet tell me to use the class WeightedRandomSampler for my DataLoader. </p>
<p>I have tried using the WeightedRandomSampler but I keep getting errors.</p>
<pre><code> trainratio = np.bincount(trainset.labels) # trainset.labels is a list of floats [0,1,0,0,0,...]
classcount = trainratio.tolist()
train_weights = 1./torch.tensor(classcount, dtype=torch.float)
train_sampleweights = train_weights[trainset.labels]
train_sampler = WeightedRandomSampler(weights=train_sampleweights,
num_samples=len(train_sampleweights))
trainloader = DataLoader(trainset, sampler=train_sampler,
shuffle=False)
</code></pre>
<p>Some dimensions I printed out:</p>
<pre><code>train_weights = tensor([4.3002e-06, 4.3002e-06, 4.3002e-06, ...,
4.3002e-06, 4.3002e-06, 4.3002e-06])
train_weights shape= torch.Size([246048])
</code></pre>
<p>I can't see why I'm getting this error:</p>
<pre><code>UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.weights = torch.tensor(weights, dtype=torch.double)
</code></pre>
<p>I have tried other similar workarounds but so far all attempts produce some error.
How should I implement this to balance my train, validation and test data?</p>
|
<p>So apparently this is an internal warning, not an error. According to the PyTorch developers, I can continue coding and not stress about the warning message. </p>
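<p>If the warning itself is bothersome, a minimal sketch that avoids it (assuming the sampler's internal <code>torch.tensor(weights)</code> call is what triggers it) is to pass plain Python floats instead of a tensor:</p>
<pre><code>train_sampler = WeightedRandomSampler(weights=train_sampleweights.tolist(),
                                      num_samples=len(train_sampleweights))
</code></pre>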
|
python|machine-learning|pytorch
| 0
|
375,853
| 54,691,288
|
How can I integrate tensorboard visualization to tf.Estimator?
|
<p>I have classical TensorFlow code for recognizing handwritten digits <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py</a>, using tf.Estimator. My question is complicated and consists of two parts:</p>
<ol>
<li><p>Should I write tf.summary() calls for the target variables in the code to visualize data in TensorBoard by just typing <code>tensorboard -- logdir=/tmp/mnist_convnet_model</code>, or does tf.Estimator collect all summaries automatically in the <code>*/tmp/mnist_convnet_model</code> directory so that I can just call <code>tensorboard -- logdir=/tmp/mnist_convnet_model</code>? </p></li>
<li><p>If I do have to write <code>tf.summary()</code> calls, should I also insert <code>tf.summary.merge_all()</code>, and in which part of the code?</p></li>
</ol>
<pre class="lang-py prettyprint-override"><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
# Convolutional Layer #1
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor into a batch of vectors
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
# Dense Layer
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
# Logits layer
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 10]
logits = tf.layers.dense(inputs=dropout, units=10)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Calculate Loss (for both TRAIN and EVAL modes)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode)
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
def main(unused_argv):
# Load training and eval data
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
train_data = mnist.train.images # Returns np.array
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
# Create the Estimator
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")
# Set up logging for predictions
# Log the values in the "Softmax" tensor with label "probabilities"
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
# Train the model
train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=100,
num_epochs=None,
shuffle=True)
mnist_classifier.train(
input_fn=train_input_fn,
steps=20000,
hooks=[logging_hook])
# Evaluate the model and print results
eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": eval_data}, y=eval_labels, num_epochs=1, shuffle=False)
eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)
if __name__ == "__main__":
tf.app.run()
</code></pre>
|
<p>Generally, you just need to call <code>tf.summary.scalar()</code>, <code>tf.summary.histogram()</code> or <code>tf.summary.image()</code> anywhere in the code. You can use a histogram summary in the following way to capture all weights and biases:</p>
<pre><code>for value in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):
tf.summary.histogram(value.name, value)
</code></pre>
<p>As for updatable metric summaries, e.g. accuracy or f1 score, you need to wrap them in <code>eval_metric_ops</code> and pass that to <code>tf.estimator.EstimatorSpec</code>:</p>
<pre><code>accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
eval_metric_ops = {'accuracy': accuracy}
</code></pre>
<ol>
<li>You can just call tensorboard with the same dir you specified during training.</li>
<li>You don't need to use <code>tf.summary.merge_all()</code></li>
</ol>
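<p>For example, a scalar summary for the loss in <code>cnn_model_fn</code> is a single extra line; the Estimator writes it to <code>model_dir</code> automatically during training:</p>
<pre><code>loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
tf.summary.scalar('training_loss', loss)  # picked up by the Estimator, visible in TensorBoard
</code></pre>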
|
python|tensorflow|conv-neural-network|tensorboard
| 1
|
375,854
| 54,465,153
|
Using the values of the first row of a matrix as ticks for matplotlib.pyplot.imshow
|
<p>I have a file like this:</p>
<pre><code>820.5 815.3 810.4
2061 2082 2098
2094 2119 2071
2067 2079 2080
2095 2080 2116
2069 2103 2108
</code></pre>
<p>and I'm using the following code to open it and plot my data</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
img = np.loadtxt('example.txt')
wvl = img[0,:]
data = img[1:,:]
plt.imshow(data)
</code></pre>
<p>The question is: how can I use the values inside the <code>numpy.ndarray</code> "wvl" as the x-axis tick labels for my heatmap?</p>
<p>I have already tried with <code>plt.xticks(range(wvl.size),wvl)</code> but in the real case the length of my array is 512 which leads to an unreadable result. </p>
|
<p>If I understood correctly, the issue here is the large number of x-axis ticks. In that case, to keep the labels readable, one way is to put a tick only at every <code>n</code>th step. You can additionally choose to rotate them or not as a matter of personal taste. To do so, you can do the following.</p>
<p>I just extended your data set four times to have more data points.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
img = np.loadtxt('example.txt')
wvl = img[0,:]
data = img[1:,:]
fig, ax = plt.subplots()
ax.imshow(data)
step = 3
plt.xticks(range(0, wvl.size, step),wvl[::step], rotation=45);
</code></pre>
<p><a href="https://i.stack.imgur.com/qA5mv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qA5mv.png" alt="enter image description here"></a></p>
|
python|python-3.x|numpy|matplotlib|imshow
| 3
|
375,855
| 54,451,746
|
How to properly use pandas vectorization?
|
<p>According to <a href="https://engineering.upside.com/a-beginners-guide-to-optimizing-pandas-code-for-speed-c09ef2c6a4d6" rel="nofollow noreferrer">an article</a>, <code>vectorization</code> is much faster than <code>apply</code>ing a function to a pandas dataframe column.</p>
<p>But I had a somehow special case like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'IP': [ '1.0.64.2', '100.23.154.63', '54.62.1.3']})
def compare3rd(ip):
"""Check if the 3dr part of an IP is greater than 100 or not"""
ip_3rd = ip.split('.')[2]
if int(ip_3rd) > 100:
return True
else:
return False
# This works but very slow
df['check_results'] = df.IP.apply(lambda x: compare3rd(x))
print df
# This is supposed to be much faster
# But it doesn't work ...
df['check_results_2'] = compare3rd(df['IP'].values)
print df
</code></pre>
<p>Full error traceback look like this:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 16, in <module>
df['check_results_2'] = compare3rd(df['IP'].values)
File "test.py", line 6, in compare3rd
ip_3rd = ip.split('.')[2]
AttributeError: 'numpy.ndarray' object has no attribute 'split'
</code></pre>
<p>My question is: how do I properly use this <code>vectorization</code> method in this case?</p>
|
<p>Check with <code>str</code> in <code>pandas</code></p>
<pre><code>df.IP.str.split('.').str[2].astype(int)>100
0 False
1 True
2 False
Name: IP, dtype: bool
</code></pre>
<p>Since you mention <code>vectorize</code></p>
<pre><code>import numpy as np
np.vectorize(compare3rd)(df.IP.values)
array([False, True, False])
</code></pre>
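<p>Note that <code>np.vectorize</code> is essentially a convenience wrapper around a Python loop (per its own docs), so the pandas <code>str</code> route above is the truly vectorized option here.</p>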
|
python|pandas|loops|vectorization|apply
| 1
|
375,856
| 54,551,316
|
Keras extending embedding layer input
|
<p>A keras sequential model with embedding needs to be retrained starting from the currently known weights.</p>
<p>A Keras sequential model is trained on the provided (text) training data. The training data is tokenized by a (custom-made) tokenizer. The input dimension for the first layer in the model, an embedding layer, is the number of words known by the tokenizer. </p>
<p>After a few days additional training data becomes available. The tokenizer needs to be refitted on this new data as it may contain additional words. That means that the input dimension of the embedding layer changes, so the previously trained model is not usable anymore.</p>
<pre><code>self.model = Sequential()
self.model.add(Embedding(tokenizer.totalDistinctWords + 1,
hiddenSize + 1, batch_size=1,
input_length=int(self.config['numWords'])))
self.model.add(LSTM(hiddenSize, return_sequences=True,
stateful=True, activation='tanh', dropout = dropout))
self.model.add(LSTM(hiddenSize, return_sequences=True,
stateful=True, activation='tanh', dropout = dropout))
self.model.add(TimeDistributed(Dense(
len(self.controlSupervisionConfig.predictableOptionsAsList))))
self.model.add(Activation('softmax'))
</code></pre>
<p>I want to use the previously trained model as initializer for the new training session. For the new words in the tokenizer, the embedding layer should just use a random initialization. For the words already known by the tokenizer, it should use the previously trained embedding.</p>
|
<p>You can access (get and set) the layer's weights directly as numpy arrays with code like <code>weights = model.layers[0].get_weights()</code> and <code>model.layers[0].set_weights(weights)</code>, where <code>model.layers[0]</code> is your Embedding layer. This way you can store embeddings separately and set known embeddings by copying them from stored data.</p>
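<p>A minimal sketch of the copy step (assuming the refitted tokenizer keeps the old word indices and only appends new ones; <code>old_model</code> and <code>new_model</code> are hypothetical names for the previously trained and freshly built models):</p>
<pre><code>import numpy as np

old_emb = old_model.layers[0].get_weights()[0]             # (old_vocab, hidden)
new_vocab = tokenizer.totalDistinctWords + 1               # grown vocabulary size
new_emb = np.random.normal(scale=0.05,
                           size=(new_vocab, old_emb.shape[1])).astype(old_emb.dtype)
new_emb[:old_emb.shape[0], :] = old_emb                    # keep known embeddings
new_model.layers[0].set_weights([new_emb])                 # random init only for new words
</code></pre>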
|
python|tensorflow|keras|word-embedding
| 2
|
375,857
| 54,426,404
|
Bokeh DataTable - Return row and column on selection callback
|
<p>Using an on_change callback, I can get the numerical row index of a selection within a DataTable in Bokeh.
Is it possible to:</p>
<ol>
<li>Get the column index</li>
<li>Get the values of the indexes (column and row headers)</li>
</ol>
<p>Example code:</p>
<pre><code>from bokeh.io import curdoc
from bokeh.layouts import row, column
import pandas as pd
from bokeh.models import ColumnDataSource, ColorBar, DataTable, DateFormatter, TableColumn, HoverTool, Spacer, DatetimeTickFormatter
'''
Pandas
'''
df = pd.DataFrame(data = {'Apples': [5,10], 'Bananas': [16,15], 'Oranges': [6,4]})
df.rename(index={0:'A',1:'B'}, inplace=True)
'''
BOKEH
'''
sourceTableSummary = ColumnDataSource(df)
Columns = [TableColumn(field=colIndex, title=colIndex) for colIndex in df.columns]
data_table = DataTable(columns=Columns, source=sourceTableSummary, index_position = 0, width = 1900, height = 200, fit_columns=False)
'''
Funcs
'''
def return_value(attr, old, new):
selectionRowIndex=sourceTableSummary.selected.indices[0]
print("Selected Row Index ", str(selectionRowIndex))
selectionValue=sourceTableSummary.data['Apples'][selectionRowIndex]
print("Selected value for Apples ", str(selectionValue))
# selectionColumnIndex?
# selectionRowHeader?
# selectionColumnHeader?
sourceTableSummary.on_change('selected', return_value)
curdoc().add_root(column(children=[data_table]))
</code></pre>
<p>This gives the following, which returns rows and the values within the selection. This is ideal if I always want a single column returned. However the selection UI (dotted line) seems to suggest that the specific column is known, not just the row.
If there's no way of attaining the selected column, can I look it up using both the Row Index and the Cell Value? </p>
<p><a href="https://i.stack.imgur.com/JMPuH.png" rel="nofollow noreferrer">Local Server Output & Table</a></p>
|
<p>The following code uses JS callback to show the row and column index as well as the cell contents. The second Python callback is a trick to reset the indices so that a click on the same row can be detected (tested with Bokeh v1.0.4). Run with <code>bokeh serve --show app.py</code></p>
<pre><code>from random import randint
from datetime import date
from bokeh.models import ColumnDataSource, TableColumn, DateFormatter, DataTable, CustomJS
from bokeh.layouts import column
from bokeh.models.widgets import TextInput
from bokeh.plotting import curdoc
source = ColumnDataSource(dict(dates = [date(2014, 3, i + 1) for i in range(10)], downloads = [randint(0, 100) for i in range(10)]))
columns = [TableColumn(field = "dates", title = "Date", formatter = DateFormatter()), TableColumn(field = "downloads", title = "Downloads")]
data_table = DataTable(source = source, columns = columns, width = 400, height = 280, editable = True, reorderable = False)
text_row = TextInput(value = None, title = "Row index:", width = 420)
text_column = TextInput(value = None, title = "Column Index:", width = 420)
text_date = TextInput(value = None, title = "Date:", width = 420)
text_downloads = TextInput(value = None, title = "Downloads:", width = 420)
test_cell = TextInput(value = None, title = "Cell Contents:", width = 420)
source_code = """
var grid = document.getElementsByClassName('grid-canvas')[0].children;
var row, column = '';
for (var i = 0,max = grid.length; i < max; i++){
if (grid[i].outerHTML.includes('active')){
row = i;
for (var j = 0, jmax = grid[i].children.length; j < jmax; j++)
if(grid[i].children[j].outerHTML.includes('active'))
{ column = j }
}
}
text_row.value = String(row);
text_column.value = String(column);
text_date.value = String(new Date(source.data['dates'][row]));
text_downloads.value = String(source.data['downloads'][row]);
test_cell.value = column == 1 ? text_date.value : text_downloads.value; """
def py_callback(attr, old, new):
source.selected.update(indices = [])
source.selected.on_change('indices', py_callback)
callback = CustomJS(args = dict(source = source, text_row = text_row, text_column = text_column, text_date = text_date, text_downloads = text_downloads, test_cell = test_cell), code = source_code)
source.selected.js_on_change('indices', callback)
curdoc().add_root(column(data_table, text_row, text_column, text_date, text_downloads, test_cell))
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/sCVg9.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCVg9.gif" alt="enter image description here"></a></p>
|
python|pandas|datatable|bokeh
| 2
|
375,858
| 54,688,502
|
neural network does not learn (loss stays the same)
|
<p>My project partner and I are currently facing a problem in our latest university project.
Our mission is to implement a neural network that plays the game Pong. We are feeding the ball position, the ball speed and the positions of the paddles to our network and have three outputs: UP, DOWN, DO_NOTHING. After a player has 11 points we train the network with all states, the decisions made and the reward of those decisions (see reward_cal()). The problem we are facing is that the loss stays constant at a value that depends only on the learning rate. Because of this the network always makes the same decision even though we reward it as terribly wrong. </p>
<p>Please help us find out what we did wrong; we are thankful for every piece of advice! Below is our code, so please feel free to ask if there are any questions. We are pretty new to this topic, so please don't be rude if there is something completely stupid :D</p>
<p>this is our code:</p>
<pre><code>import sys, pygame, time
import numpy as np
import random
from os.path import isfile
import keras
from keras.optimizers import SGD
from keras.layers import Dense
from keras.layers.core import Flatten
pygame.init()
pygame.mixer.init()
#surface of the game
width = 400
height = 600
black = 0, 0, 0 #RGB value
screen = pygame.display.set_mode((width, height), 0, 32)
#(Resolution(x,y), flags, colour depth)
font = pygame.font.SysFont('arial', 36, bold=True)
pygame.display.set_caption('PyPong') #title of window
#consts for the game
acceleration = 0.0025 # ball becomes faster during the game
mousematch = 1
delay_time = 0
paddleP = pygame.image.load("schlaeger.gif")
playerRect = paddleP.get_rect(center = (200, 550))
paddleC = pygame.image.load("schlaeger.gif")
comRect = paddleC.get_rect(center=(200,50))
ball = pygame.image.load("ball.gif")
ballRect = ball.get_rect(center=(200,300))
#Variables for the game
pointsPlayer = [0]
pointsCom = [0]
playermove = [0, 0]
speedbar = [0, 0]
speed = [6, 6]
hitX = 0
#neural const
learning_rate = 0.01
number_of_actions = 3
filehandler = open('logfile.log', 'a')
filename = sys.argv[1]
#neural variables
states, action_prob_grads, rewards, action_probs = [], [], [], []
reward_sum = 0
episode_number = 0
reward_sums = []
pygame.display.flip()
def pointcontrol(): #having a look at the points in the game and restart()
if pointsPlayer[0] >= 11:
print('Player Won ', pointsPlayer[0], '/', pointsCom[0])
restart(1)
return 1
if pointsCom[0] >= 11:
print('Computer Won ', pointsPlayer[0], '/', pointsCom[0])
restart(1)
return 1
elif pointsCom[0] < 11 and pointsPlayer[0] < 11:
restart(0)
return 0
def restart(finished): # resetting the positions and the ball speed and (if point limit was reached) the points
ballRect.center = 200,300
comRect.center = 200,50
playerRect.center = 200, 550
speed[0] = 6
speed[1] = 6
screen.blit(paddleC, comRect)
screen.blit(paddleP, playerRect)
pygame.display.flip()
if finished:
pointsPlayer[0] = 0
pointsCom[0] = 0
def reward_cal(r, gamma = 0.99): #rewarding every move
    discounted_r = np.zeros_like(r) # making a zero array with the size of the reward array
running_add = 0
for t in range(r.size - 1, 0, -1): #iterating beginning in the end
if r[t] != 0: #if reward -1 or 1 (point made or lost)
running_add = 0
        running_add = running_add * gamma + r[t] # making every move before the point the same reward but a little bit smaller
        discounted_r[t] = running_add # putting the value in the new reward array
    # e.g. r = 000001000-1 -> discounted_r = 0.5 0.6 0.7 0.8 0.9 1 -0.7 -0.8 -0.9 -1 (values are not exactly correct, just to make it clear)
return discounted_r
#neural net
model = keras.models.Sequential()
model.add(Dense(16, input_dim = (8), kernel_initializer = 'glorot_normal', activation = 'relu'))
model.add(Dense(32, kernel_initializer = 'glorot_normal', activation = 'relu'))
model.add(Dense(number_of_actions, activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam')
model.summary()
if isfile(filename):
model.load_weights(filename)
# one ball movement before the AI gets to make a decision
ballRect = ballRect.move(speed)
reward_temp = 0.0
if ballRect.left < 0 or ballRect.right > width:
speed[0] = -speed[0]
if ballRect.top < 0:
pointsPlayer[0] += 1
reward_temp = 1.0
done = pointcontrol()
if ballRect.bottom > height:
pointsCom[0] += 1
done = pointcontrol()
reward_temp = -1.0
if ballRect.colliderect(playerRect):
speed[1] = -speed[1]
if ballRect.colliderect(comRect):
speed[1] = -speed[1]
if speed[0] < 0:
speed[0] -= acceleration
if speed[0] > 0:
speed[0] += acceleration
if speed[1] < 0:
speed[1] -= acceleration
if speed[1] > 0 :
speed[1] += acceleration
while True: #game
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
    state = np.array([ballRect.center[0], ballRect.center[1], speed[0], speed[1],
                      playerRect.center[0], playerRect.center[1],
                      comRect.center[0], comRect.center[1]])
states.append(state)
action_prob = model.predict_on_batch(state.reshape(1, 8))[0, :]
action_probs.append(action_prob)
action = np.random.choice(number_of_actions, p=action_prob)
if(action == 0): playermove = [0, 0]
elif(action == 1): playermove = [5, 0]
elif(action == 2): playermove = [-5, 0]
playerRect = playerRect.move(playermove)
y = np.array([-1, -1, -1])
y[action] = 1
action_prob_grads.append(y-action_prob)
#enemy move
comRect = comRect.move(speedbar)
ballY = ballRect.left+5
comRectY = comRect.left+30
if comRect.top <= (height/1.5):
if comRectY - ballY > 0:
speedbar[0] = -7
elif comRectY - ballY < 0:
speedbar[0] = 7
if comRect.top > (height/1.5):
speedbar[0] = 0
if(mousematch == 1):
done = 0
reward_temp = 0.0
ballRect = ballRect.move(speed)
if ballRect.left < 0 or ballRect.right > width:
speed[0] = -speed[0]
if ballRect.top < 0:
pointsPlayer[0] += 1
done = pointcontrol()
reward_temp = 1.0
if ballRect.bottom > height:
pointsCom[0] += 1
done = pointcontrol()
reward_temp = -1.0
if ballRect.colliderect(playerRect):
speed[1] = -speed[1]
if ballRect.colliderect(comRect):
speed[1] = -speed[1]
if speed[0] < 0:
speed[0] -= acceleration
if speed[0] > 0:
speed[0] += acceleration
if speed[1] < 0:
speed[1] -= acceleration
if speed[1] > 0 :
speed[1] += acceleration
rewards.append(reward_temp)
if (done):
episode_number += 1
reward_sums.append(np.sum(rewards))
if len(reward_sums) > 40:
reward_sums.pop(0)
        s = 'Episode %d Total Episode Reward: %f , Mean %f' % (
            episode_number, np.sum(rewards), np.mean(reward_sums))
print(s)
filehandler.write(s + '\n')
filehandler.flush()
        # Propagate the rewards back to actions where no reward was given.
# Rewards for earlier actions are attenuated
rewards = np.vstack(rewards)
action_prob_grads = np.vstack(action_prob_grads)
rewards = reward_cal(rewards)
X = np.vstack(states).reshape(-1, 8)
Y = action_probs + learning_rate * rewards * y
print('loss: ', model.train_on_batch(X, Y))
model.save_weights(filename)
states, action_prob_grads, rewards, action_probs = [], [], [], []
reward_sum = 0
screen.fill(black)
screen.blit(paddleP, playerRect)
screen.blit(ball, ballRect)
screen.blit(paddleC, comRect)
pygame.display.flip()
pygame.time.delay(delay_time)
</code></pre>
<p>this is our output:</p>
<pre><code>pygame 1.9.4 Hello from the pygame community. https://www.pygame.org/contribute.html Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 16) 144
_________________________________________________________________
dense_2 (Dense) (None, 32) 544
_________________________________________________________________
dense_3 (Dense) (None, 3) 99
=================================================================
Total params: 787 Trainable params: 787 Non-trainable params: 0
_________________________________________________________________ 2019-02-14 11:18:10.543401: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA 2019-02-14 11:18:10.666634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6705 pciBusID: 0000:17:00.0 totalMemory:
10.92GiB freeMemory: 10.76GiB 2019-02-14 11:18:10.775144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6705 pciBusID: 0000:65:00.0 totalMemory:
10.91GiB freeMemory: 10.73GiB 2019-02-14 11:18:10.776037: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1 2019-02-14 11:18:11.176560: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-02-14 11:18:11.176590: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2019-02-14 11:18:11.176596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y 2019-02-14 11:18:11.176600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N 2019-02-14 11:18:11.176914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10403 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0, compute capability: 6.1) 2019-02-14 11:18:11.177216: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10382 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
Computer Won 0 / 11 Episode 1 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254405
Computer Won 0 / 11 Episode 2 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254304
Computer Won 0 / 11 Episode 3 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254304
Computer Won 0 / 11 Episode 4 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254304
Computer Won 0 / 11 Episode 5 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254304
Computer Won 0 / 11 Episode 6 Total Episode Reward: -11.000000 , Mean -11.000000
loss: 0.254304
</code></pre>
|
<p>That's evil <code>'relu'</code> showing its power. </p>
<p>Relu has a "zero" region without gradients. When all your outputs get negative, Relu makes all of them equal to zero and kills backpropagation.</p>
<p>The easiest solution for using Relus safely is to add <code>BatchNormalization</code> layers before them:</p>
<pre><code>from keras.layers import BatchNormalization, Activation

model = keras.models.Sequential()
model.add(Dense(16, input_dim = (8), kernel_initializer = 'glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(32, kernel_initializer = 'glorot_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(number_of_actions, activation='softmax'))
</code></pre>
<p>This will make "rougly" half of the outputs of the layer be zero and half be trainable.</p>
<p>Other solutions consist of carefully controlling your learning rate and optimizer, which may be quite a headache for beginners. </p>
|
python|tensorflow|keras|neural-network|reinforcement-learning
| 5
|
375,859
| 54,285,889
|
Add new key in dicts in a list whose value is the index
|
<pre><code>df.at[0, 'A'] = [{'score': 12, 'player': [{'name': 'Jacob', 'score': 2},
{'name': 'Shane', 'score': 5}, ...]},
{'score': 33, 'player': [{'name': 'Cindy', 'score': 4}, ...]}, ...]
</code></pre>
<p>Say I have a list of n dictionaries for column 'A' in a data frame like above. I want to add a new key named 'game' which is the index of the list. So, it'd be like below.</p>
<pre><code>df.at[0, 'A'] = [{'score': 12, 'player': [...], 'game': 0},
{'score': 33, 'player': [...], 'game': 1}, ...]
</code></pre>
<p>Since I have to do the same thing with 'player', I don't want to use <code>for</code> loops.<br>
Is there a way to achieve this?</p>
<hr>
<pre><code>df.at[0, 'A'][0]['player'] = [{'name': 'Jacob', 'score': 2, 'number': 0},
{'name': 'Shane', 'score': 5, 'number': 1}, ...]}
</code></pre>
<p>For example, 'player' will have key 'number' whose value is the index of the inner list.</p>
<hr>
<p>Basically, I don't want to use any nested <code>for</code> loop to do this because the actual data I have received is a way larger NL data that actually came in that ridiculous form.</p>
|
<p>Given your data structure, Barmar is probably right that you're stuck with a <code>for</code> loop (which there's nothing wrong with, by-the-by). For reference, that loop is short; see the sketch right below. Beyond that, here are a couple of potential work-arounds.</p>
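<p>A minimal sketch of the loop, using the structure from the question:</p>
<pre><code>games = df.at[0, 'A']
for i, game in enumerate(games):
    game['game'] = i                          # list index of the game
    for j, player in enumerate(game['player']):
        player['number'] = j                  # list index of the player
</code></pre>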
<h1>A "solution"</h1>
<p>The information you're trying to record is redundant, so you probably don't need to bother with it in the first place.</p>
<p>Basically what you're saying is that the values of <code>game</code> and <code>number</code> are already encoded by the position of each element in its list. Chances are that there's a way to get whatever final result you're trying to compute while also skipping writing out all of this redundant information.</p>
<h1>A larger point</h1>
<p>You're trying to wrangle a large set of data with a complicated structure. You're probably about at the limit of what you can reasonably deal with using the kind of ad-hoc structure you posted. Here are some better ways:</p>
<ul>
<li><p>If you can figure out a way to flatten your data (or least to make it "rectangular", in a sense), then you may be able to wrangle it into a <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.array.html" rel="nofollow noreferrer">Numpy array</a>. Numpy hits a nice sweet spot between extremely fast and easy to use.</p></li>
<li><p>You could convert the inner dictionaries into more levels in your dataframe, to make a sort of hierarchical dataframe with an associated <a href="https://pandas.pydata.org/pandas-docs/stable/advanced.html#multiindex-advanced-indexing" rel="nofollow noreferrer"><code>MultiIndex</code></a>. There's a good SO thread with a lot more info <a href="https://stackoverflow.com/q/53927460/425458">here</a>.</p></li>
<li><p>While not necessarily the most performant option, one really good way to make it easier to understand data with a complex structure is to represent that structure as a hierarchy of <a href="http://www.greenteapress.com/thinkpython/html/thinkpython016.html" rel="nofollow noreferrer">user-defined objects</a>. In the past I've found this to be a very fruitful way to uncover hidden relationships in data (though like I said, it can be slow).</p></li>
</ul>
|
python|pandas
| 2
|
375,860
| 54,474,225
|
Parentheses on .upper() and .apply(str.upper)
|
<p>Why does <code>upper</code> in <code>df.apply(df.str.upper)</code> not require parentheses, while the <code>upper()</code> method requires them, as in <code>df.str.upper()</code>?</p>
<p>Is there some concept I've missed?</p>
|
<p>The <code>()</code> means "call this function now".</p>
<pre><code>print(str.upper())
</code></pre>
<p>Referring to a function without the <code>()</code> does not call the function immediately.</p>
<pre><code>results = map(str.upper, ['a', 'b'])
</code></pre>
<p>The <code>str.upper</code> function is passed to the <code>map</code> function. The <code>map</code> function can now decide what it does with the <code>str.upper</code> function. It might call it once, or multiple times, or store it somewhere for later use.</p>
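<p>You can also store the function reference in a variable and call it later:</p>
<pre><code>f = str.upper    # reference the function, no call yet
print(f('abc'))  # call it now -> 'ABC'
</code></pre>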
|
python|pandas
| 4
|
375,861
| 54,508,620
|
Trying to remove the first row in python fails?
|
<p>Hi I have some data in csv format.</p>
<p>I have some very simple code that is just meant to remove the first row:</p>
<pre><code>import numpy as np
import pandas as pd
data = pd.read_csv("mydata.csv")
data = data.drop(data.columns[[0]],axis=0)
data.to_csv("mydata2.csv")
</code></pre>
<p>However when run, I receieve this error:</p>
<pre><code>Warning (from warnings module):
File "C:/Users/george/Desktop/testing/output/PIVOTING.py", line 1
DtypeWarning: Columns (6) have mixed types. Specify dtype option on import or set low_memory=False.
Traceback (most recent call last):
File "C:/Users/george/Desktop/testing/output/PIVOTING.py", line 5, in <module>
data = data.drop(data.columns[[0]],axis=0)
File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 3697, in drop
errors=errors)
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 3111, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 3143, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "C:\Python27\lib\site-packages\pandas\core\indexes\base.py", line 4404, in drop
'{} not found in axis'.format(labels[mask]))
KeyError: "['C'] not found in axis"
</code></pre>
|
<p>For future reference, pandas <code>read_csv</code> accepts <code>skiprows</code> to help achieve exactly that task, as covered extensively <a href="https://stackoverflow.com/questions/20637439/skip-rows-during-csv-import-pandas">here</a></p>
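<p>A minimal sketch of that approach (assuming the goal is to drop the first data row while keeping the header; <code>skiprows=[1]</code> skips the second physical line of the file, i.e. the first row after the header):</p>
<pre><code>import pandas as pd

data = pd.read_csv("mydata.csv", skiprows=[1])
data.to_csv("mydata2.csv", index=False)
</code></pre>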
|
python|pandas|numpy
| 1
|
375,862
| 54,331,864
|
Python iterrows() to index and drop rows from Dataframe
|
<p>I'm iterating over a df and want to drop rows based on a condition. I'd like to check the contents of a string for a character and drop the row if it does not exist. I have tried the code below, with exceptions. How can I access the third column's iterative row value and check for contains?</p>
<pre><code>for index, row in df_new.iterrows():
if not row[2].contains(','):
df_new.drop(index, inplace = True)
</code></pre>
<p>An exception is thrown: </p>
<blockquote>
<p>AttributeError: 'str' object has no attribute 'contains'</p>
</blockquote>
<p>I've tried a variety of string assignment also like:</p>
<pre><code>for index, row in df_new.iterrows():
string = str(row[2])
if not string.contains(','):
df_new.drop(index, inplace = True)
</code></pre>
|
<p>It might be quicker to skip <code>iterrows</code> entirely and use a vectorized filter that keeps only the rows whose column contains a comma:</p>
<pre><code>df_new = df_new[df_new.Column_name.str.contains(",")]
</code></pre>
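<p>A self-contained illustration with made-up data (the label <code>Column_name</code> above is a placeholder for your real third column; passing <code>na=False</code> makes rows with missing values count as "no comma" so they are dropped as well):</p>
<pre><code>import pandas as pd

df_new = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z'],
                       'c': ['p,q', 'r', None]})           # made-up data
df_new = df_new[df_new['c'].str.contains(',', na=False)]   # keeps only the 'p,q' row
</code></pre>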
|
python|pandas|dataframe
| 3
|
375,863
| 54,316,919
|
How to take out a row of a matrix that contains a maximum in each column?
|
<p>I have a matrix of data with 3 columns : x , y , z : each with lots of rows.</p>
<p>For each column, I need to find the row that contains its maximum, do the same for the minimums, and then write all these rows to a dataframe.</p>
<p>let's say I have :</p>
<pre><code>x= [1,2,4,3] , y= [7,8,6,5] , z= [12,10,11,9]
</code></pre>
<p>to find the corresponding row I did :</p>
<pre><code>alldata=[];
alldata.append([x]);
alldata.append([y]);
alldata.append([z]);
for elem in alldata:
xarr=np.array(elem)
rowmax=xarr.argmax()
ind= alldata.index(elem)
maxcorr.append(alldata[ind][0][rowmax])
for elem in alldata:
xarr=np.array(elem)
rowmin=xarr.argmin()
ind= alldata.index(elem)
maxcorr.append(alldata[ind][0][rowmin])
</code></pre>
<p>The problem is when I need to write the corresponding row that will be something like :</p>
<p>xmax,y,z,x2,ymax,z2,x3,y3,zmax,xmin,y4,z4,.....</p>
<p>for writing the corresponding row I tried : </p>
<pre><code>x=np.transpose(x);
y=np.transpose(y);
z=np.transpose(z);
mydata=[]
mydata.append(x)
mydata.append(y)
mydata.append(z)
mydata=np.array(mydata)
</code></pre>
<p>to write on dataframe I have :</p>
<pre><code>casename=['Xmax', 'Y', 'Z', ,'Xmin', 'Y', 'Z', 'X', 'Ymax', 'Z', 'X', 'Ymin', 'Z', 'X', 'Y', 'Zmax','X', 'Y', 'Zmin']
mydata=np.array(mydata).reshape(-1, len(casename))
df = pd.DataFrame(mydata, index=Filenames, columns=casename)
</code></pre>
<p>Clearly, producing <code>mydata</code> in the form I am searching for is not handled by the code above, and that is my question: it seems impossible to take the corresponding rows out of <code>mydata</code> as it stands.</p>
<p>For example, the output I want according to the example is:</p>
<p>[ 4, 5, 11, 1, 7, 12, 2,8,10, 3,5,9, 1,7,12, 3,5,9] </p>
<p>Also, one thing: the Filenames index should not change, because I have several files with these X, Y, Z data.</p>
|
<p>I am solving this using <code>pandas</code> since you tagged the question with it. Also, I couldn't match the output you have given; I hope that is a typo. I have gone by the description you gave, i.e. <code>xmax,y,z,x2,ymax,z2,x3,y3,zmax,xmin,y4,z4,.....</code></p>
<pre><code>df = pd.DataFrame(list(zip(x, y, z)), columns=['x', 'y', 'z'])
mylist = []
for i in df.columns:
mylist+=(list(df.loc[df[i].argmax()]))
for i in df.columns:
mylist+=(list(df.loc[df[i].argmin()]))
Out: [4, 6, 11, 2, 8, 10, 1, 7, 12, 1, 7, 12, 3, 5, 9, 3, 5, 9]
</code></pre>
|
python|pandas|numpy|dataframe
| 1
|
375,864
| 54,506,149
|
INSERT INTO Access database from pandas DataFrame
|
<p>Could somebody please tell me what the INSERT into the database should look like for a whole data frame in Python?</p>
<p>I found this, but I don't know how to insert the whole data frame called test_data, which has two columns: ID, Employee_id. </p>
<p>I also don't know how to insert the next value for ID (something like nextval)</p>
<p>Thank you</p>
<pre><code>import pyodbc
conn = pyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\Users\test_database.mdb;')
cursor = conn.cursor()
cursor.execute('''
INSERT INTO employee_table (ID, employee_id)
VALUES(?????????)
''')
conn.commit()
</code></pre>
|
<p>Update, June 2020:</p>
<p>Now that the <a href="https://pypi.org/project/sqlalchemy-access/" rel="nofollow noreferrer">sqlalchemy-access</a> dialect has been revived the best solution would be to use pandas' <code>to_sql</code> method.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html</a></p>
<p><hr>
<em>(previous answer)</em></p>
<p>You can use pyodbc's <code>executemany</code> method, passing the rows using pandas' <code>itertuples</code> method:</p>
<pre class="lang-python prettyprint-override"><code>print(pyodbc.version) ## 4.0.24 (not 4.0.25, which has a known issue with Access ODBC)
connection_string = (
r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
r'DBQ=C:\Users\Public\MyTest.accdb;'
)
cnxn = pyodbc.connect(connection_string, autocommit=True)
crsr = cnxn.cursor()
# prepare test environment
table_name = "employee_table"
if list(crsr.tables(table_name)):
crsr.execute(f"DROP TABLE [{table_name}]")
crsr.execute(f"CREATE TABLE [{table_name}] (ID COUNTER PRIMARY KEY, employee_id TEXT(25))")
# test data
df = pd.DataFrame([[1, 'employee1'], [2, 'employee2']], columns=['ID', 'employee_id'])
# insert the rows from the DataFrame into the Access table
crsr.executemany(
f"INSERT INTO [{table_name}] (ID, employee_id) VALUES (?, ?)",
df.itertuples(index=False))
</code></pre>
<p><strong>Update:</strong>
Parameterized queries like this work again with pyodbc version 4.0.27 but not 4.0.25 (as mentioned above) or 4.0.26. Attempting to use these versions will result in an "Optional feature not implemented" error. This issue is discussed here <a href="https://github.com/mkleehammer/pyodbc/issues/509" rel="nofollow noreferrer">https://github.com/mkleehammer/pyodbc/issues/509</a>.</p>
|
python|pandas|ms-access|insert|pyodbc
| 5
|
375,865
| 54,556,604
|
pandas.interval_range for partial interval
|
<p>I'm using <code>pd.interval_range</code> to generate hourly intervals within a pair of timestamps:</p>
<pre><code>In [1]: list(pd.interval_range(pd.Timestamp('2019-02-06 07:00:00'),
pd.Timestamp('2019-02-06 08:00:00'), freq='h'))
Out[1]: [Interval('2019-02-06 07:00:00', '2019-02-06 08:00:00', closed='right')]
</code></pre>
<p>Is it possible to generate an interval shorter than 1 hour when the end time does not fall on an hour boundary?</p>
<p>In other words, when I move the end time by 1 minute I'm getting this:</p>
<pre><code>In [2]: list(pd.interval_range(pd.Timestamp('2019-02-06 07:00:00'),
pd.Timestamp('2019-02-06 08:01:00'), freq='h'))
Out[2]: [Interval('2019-02-06 07:00:00', '2019-02-06 08:00:00', closed='right')]
</code></pre>
<p>I'd like to get this instead:</p>
<pre><code>In [2]: list(pd.interval_range(pd.Timestamp('2019-02-06 07:00:00'),
pd.Timestamp('2019-02-06 08:01:00'), freq='h'))
Out[2]: [Interval('2019-02-06 07:00:00', '2019-02-06 08:00:00', closed='right'),
Interval('2019-02-06 08:00:00', '2019-02-06 08:01:00', closed='right')]
</code></pre>
|
<p>Try:</p>
<pre><code>start = pd.Timestamp('2019-02-06 07:00:00')
end = pd.Timestamp('2019-02-06 09:01:00')
interval_1 = pd.interval_range(start,
end, freq='h')
interval_out = pd.IntervalIndex.from_arrays(interval_1.left.to_series().tolist() +[interval_1.right[-1]],
interval_1.right.to_series().tolist() +[end])
interval_out
</code></pre>
<p>Output:</p>
<pre><code>IntervalIndex([(2019-02-06 07:00:00, 2019-02-06 08:00:00], (2019-02-06 08:00:00, 2019-02-06 09:00:00], (2019-02-06 09:00:00, 2019-02-06 09:01:00]]
closed='right',
dtype='interval[datetime64[ns]]')
</code></pre>
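<p>If you need this often, one way to package the idea is a small helper that builds the hourly breaks and appends the ragged end; the function name and the use of <code>IntervalIndex.from_breaks</code> are my own choice, not part of <code>interval_range</code> itself:</p>
<pre><code>import pandas as pd

def hourly_intervals(start, end):
    breaks = list(pd.date_range(start, end, freq='h'))
    if breaks[-1] < end:            # end does not fall on an hour boundary
        breaks.append(end)
    return pd.IntervalIndex.from_breaks(breaks, closed='right')

hourly_intervals(pd.Timestamp('2019-02-06 07:00:00'),
                 pd.Timestamp('2019-02-06 08:01:00'))
</code></pre>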
|
python|pandas
| 2
|
375,866
| 54,671,353
|
Convert categorical data into dummy set
|
<p>I'm having data like this:-</p>
<pre><code>|--------|---------|
| Col1 | Col2 |
|--------|---------|
| X | a,b,c |
|--------|---------|
| Y | a,b |
|--------|---------|
| X | b,d |
|--------|---------|
</code></pre>
<p>I want to convert these categorical data to dummy variables. Since my data is large, it gives a memory error when I use <code>get_dummies()</code> from pandas. I want my result like this:-</p>
<pre><code>|------|------|------|------|------|------|
|Col_X |Col_Y |Col2_a|Col2_b|Col2_c|Col2_d|
|------|------|------|------|------|------|
| 1 | 0 | 1 | 1 | 1 | 0 |
|------|------|------|------|------|------|
| 0 | 1 | 1 | 1 | 0 | 0 |
|------|------|------|------|------|------|
| 1 | 0 | 0 | 1 | 0 | 1 |
|------|------|------|------|------|------|
</code></pre>
<p>I have tried to convert Col2 using <a href="https://stackoverflow.com/questions/46867201/converting-pandas-column-of-comma-separated-strings-into-dummy-variables">this</a>, but I am getting a MemoryError as the data is large and there is a lot of variability in Col2 too.</p>
<p>So,</p>
<p>1) How can I convert multiple categorical columns into dummy variable?</p>
<p>2) pandas get_dummy() is giving memory error, so how could i handle that?</p>
|
<p>I'm almost positive that you're encountering memory issues because <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get_dummies.html#pandas.Series.str.get_dummies" rel="nofollow noreferrer">str.get_dummies</a> returns an array full of 1s and 0s, of datatype <code>np.int64</code>. This is quite different from the behavior of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html#pandas.get_dummies" rel="nofollow noreferrer">pd.get_dummies</a>, which returns an array of values of datatype <code>uint8</code>.</p>
<p>This appears to be a <a href="https://github.com/pandas-dev/pandas/issues/19618" rel="nofollow noreferrer">known issue</a>. However, there's been no update, nor fix, for the past year. Checking out the <a href="https://github.com/pandas-dev/pandas/blob/v0.24.1/pandas/core/strings.py#L1010" rel="nofollow noreferrer">source code</a> for str.get_dummies will indeed confirm that it is returning <code>np.int64</code>.</p>
<p>An 8 bit integer will take up 1 byte of memory, while a 64 bit integer will take up 8 bytes. I'm hopeful that memory problems can be avoided by finding an alternative way to one-hot encode <code>Col2</code> which ensures the output are all 8 bit integers.</p>
<p>Here was my approach, beginning with your example:</p>
<pre><code>df = pd.DataFrame({'Col1': ['X', 'Y', 'X'],
'Col2': ['a,b,c', 'a,b', 'b,d']})
df
Col1 Col2
0 X a,b,c
1 Y a,b
2 X b,d
</code></pre>
<ol>
<li>Since <code>Col1</code> contains simple, non-delimited strings, we can easily one-hot encode it using pd.get_dummies:</li>
</ol>
<pre><code>df = pd.get_dummies(df, columns=['Col1'])
df
Col2 Col1_X Col1_Y
0 a,b,c 1 0
1 a,b 0 1
2 b,d 1 0
</code></pre>
<p>So far so good.</p>
<pre><code>df['Col1_X'].values.dtype
dtype('uint8')
</code></pre>
<ol start="2">
<li>Let's get a list of all unique substrings contained inside the comma-delimited strings in <code>Col2</code>:</li>
</ol>
<pre><code>vals = list(df['Col2'].str.split(',').values)
vals = [i for l in vals for i in l]
vals = list(set(vals))
vals.sort()
vals
['a', 'b', 'c', 'd']
</code></pre>
<ol start="3">
<li>Now we can loop through the above list of values and use <code>str.contains</code> to create a new column for each value, such as <code>'a'</code>. Each row in a new column will contain 1 if that row actually has the new column's value, such as <code>'a'</code>, inside its string in <code>Col2</code>. As we create each new column, we make sure to convert its datatype to <code>uint8</code>:</li>
</ol>
<pre><code>col='Col2'
for v in vals:
n = col + '_' + v
df[n] = df[col].str.contains(v)
df[n] = df[n].astype('uint8')
df.drop(col, axis=1, inplace=True)
df
Col1_X Col1_Y Col2_a Col2_b Col2_c Col2_d
0 1 0 1 1 1 0
1 0 1 1 1 0 0
2 1 0 0 1 0 1
</code></pre>
<p>This results in a dataframe that meets your desired format. And thankfully, the integers in the four new columns that were one-hot encoded from <code>Col2</code> only take up 1 byte each, as opposed to 8 bytes each.</p>
<pre><code>df['Col2_a'].dtype
dtype('uint8')
</code></pre>
<p>If, on the outside chance, the above approach doesn't work. My advice would be to use str.get_dummies to one-hot encode <code>Col2</code> in chunks of rows. Each time you do a chunk, you would convert its datatype from <code>np.int64</code> to <code>uint8</code>, and then <a href="https://stackoverflow.com/questions/7922487/how-to-transform-numpy-matrix-or-array-to-scipy-sparse-matrix">transform the chunk to a sparse matrix</a>. You could eventually concatenate all chunks back together.</p>
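<p>As a rough sketch of that chunked fallback (the chunk size is arbitrary, and it reuses the <code>vals</code> list from step 2 so every chunk ends up with the same columns):</p>
<pre><code>import numpy as np
from scipy import sparse

chunk_size = 100000
pieces = []
for start in range(0, len(df), chunk_size):
    chunk = df['Col2'].iloc[start:start + chunk_size]
    dummies = (chunk.str.get_dummies(sep=',')
                    .reindex(columns=vals, fill_value=0)   # align columns across chunks
                    .astype('uint8'))
    pieces.append(sparse.csr_matrix(dummies.values))

one_hot = sparse.vstack(pieces)   # sparse matrix with one column per value in vals
</code></pre>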
|
python|pandas|scikit-learn|dummy-variable
| 2
|
375,867
| 54,311,923
|
How to save month-wise mails and aslo divide mails based on time using gmail api and save the output to csv or convert it into df?
|
<p>The code below gets the message count for every 30 days, starting from the time the first matching message was sent.</p>
<p><strong>In detail, this code:</strong></p>
<p>1. Finds Amazon's first mail to my address containing a particular phrase (here, the first order).</p>
<p>2. Converts that epoch timestamp into a date and, stepping forward with a timedelta, counts the mails sent in each 30-day interval.</p>
<p><strong>The output for this code will be like this:</strong></p>
<pre><code>Amazon first order:
1534476682000
Amazon total orders between 2018-08-01 and 2018-09-01: 20
Amazon total orders between 2018-09-01 and 2018-10-01: 11
Amazon total orders between 2018-10-01 and 2018-11-01: 15
Amazon total orders between 2018-11-01 and 2018-12-01: 7
Amazon total orders between 2018-12-01 and 2019-01-01: 19
Amazon total orders between 2019-01-01 and 2019-02-01: 23
Amazon total orders between 2019-02-01 and 2019-03-01: 12
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>#amazonfirstorder
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from dateutil.relativedelta import relativedelta
from datetime import datetime
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
def main():
store = file.Storage('token.json')
creds = store.get()
if not creds or creds.invalid:
flow = client.flow_from_clientsecrets('credentials.json', SCOPES)
creds = tools.run_flow(flow, store)
service = build('gmail', 'v1', http=creds.authorize(Http()))
results = service.users().messages().list(userId='me', q='from:auto-confirm@amazon.in subject:(your amazon.in order of )',labelIds = ['INBOX']).execute()
messages = results.get('messages', [])
print('\nFilpkart first order:')
if not messages:
print (" ")
else:
print (" ")
msg = service.users().messages().get(userId='me', id=messages[-1]['id']).execute()
#print(msg['snippet'])
a=(msg['internalDate'])
ts = int(a)
ts /= 1000
year=int(datetime.utcfromtimestamp(ts).strftime('%Y'))
month=int(datetime.utcfromtimestamp(ts).strftime('%m'))
#print(year)
#print(month)
print(msg['internalDate'])
log_results = []
start_date = datetime(year,month,1)
#start_date = datetime(2016,1,1)
end_date = datetime.today()
increment = relativedelta(months=1)
target_date = start_date + increment
while target_date <= end_date:
timestamp_after = int(start_date.timestamp()) # timestamp of start day
timestamp_before = int(target_date.timestamp()) # timestamp of start day + 30 days
query = f'from:(auto-confirm@amazon.in) subject:(your amazon.in order of ) after:{timestamp_after} before:{timestamp_before}'
results = service.users().messages().list(userId='me', q=query, labelIds=['INBOX']).execute()
messages = results.get('messages', [])
orders = len(messages)
start_date_str = start_date.strftime('%Y-%m-%d')
target_date_str = target_date.strftime('%Y-%m-%d')
print(f"\nFlipkart total orders between {start_date.strftime('%Y-%m-%d')} and {target_date.strftime('%Y-%m-%d')}: {orders}")
log_results.append(dict(start=start_date_str, end=target_date_str, orders=orders))
# update interval
start_date += increment
target_date += increment
return log_results
if __name__ == '__main__':
log_results = main()
</code></pre>
<p>Now I have two problems:</p>
<p><strong>First</strong></p>
<p>How to save the output of that code into a CSV file.</p>
<p><strong>Second</strong>:</p>
<p>The above code gets the mail counts in 30-day windows. What I need is the count of mails received before 12:00 PM and after 12:00 PM, month-wise, and to save those counts to CSV.</p>
<p><strong>OUTPUT i need for 2nd Problem</strong> :</p>
<pre><code>Amazon total orders between 2018-09-01 and 2018-10-01 before 12:00 PM : 11
Amazon total orders between 2018-10-01 and 2018-11-01 before 12:00 PM : 15
Amazon total orders between 2018-11-01 and 2018-12-01 before 12:00 PM : 7
Amazon total orders between 2018-12-01 and 2019-01-01 before 12:00 PM : 19
Amazon total orders between 2018-09-01 and 2018-10-01 after 12:00 PM : 3
Amazon total orders between 2018-10-01 and 2018-11-01 after 12:00 PM : 6
Amazon total orders between 2018-11-01 and 2018-12-01 after 12:00 PM : 88
Amazon total orders between 2018-12-01 and 2019-01-01 after 12:00 PM : 26
</code></pre>
|
<p>You just need to loop through the dates with the interval you want.</p>
<p>The code below retrieves a user's messages one period at a time and prints the message count for each month.</p>
<p>It starts at Jan 1, 2016 and walks forward in regular one-month intervals until Jan 1, 2019.</p>
<pre><code>from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
import time
from dateutil.relativedelta import relativedelta
from datetime import datetime
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
def main():
store = file.Storage('token.json')
creds = store.get()
if not creds or creds.invalid:
flow = client.flow_from_clientsecrets('credentials.json', SCOPES)
creds = tools.run_flow(flow, store)
service = build('gmail', 'v1', http=creds.authorize(Http()))
end_date = datetime(2019, 1, 1)
interval = relativedelta(months=1)
current = datetime(2016, 1, 1) # init to the start date
while current < end_date + interval:
after = current.timestamp()
before = (current + interval).timestamp()
query = 'from:(auto-confirm@amazon.in) subject:(your amazon.in order of ) after:{} before:{}'.format(after, before)
results = service.users().messages().list(userId='me', q=query, labelIds = ['INBOX']).execute()
messages = results.get('messages', [])
print("\namazon total orders in {}: {}".format(current.strftime('%B %Y'), len(messages)))
current += interval
if __name__ == '__main__':
main()
</code></pre>
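<p>For the first problem (writing the results to CSV), a minimal sketch building on the <code>log_results</code> list of dicts that your own <code>main()</code> already returns (the output file name is just an example):</p>
<pre><code>import pandas as pd

if __name__ == '__main__':
    log_results = main()
    # log_results is a list of dicts with keys: start, end, orders
    pd.DataFrame(log_results).to_csv('amazon_orders.csv', index=False)
</code></pre>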
|
python|python-3.x|pandas|gmail|gmail-api
| 2
|
375,868
| 54,270,971
|
The initial state or constants of an RNN layer cannot be specified with a mix of Keras tensors and non-Keras tensors
|
<p>As we know the decoder takes the encoder hidden states as the initial state ...</p>
<pre><code>encoder_output , state_h, state_c = LSTM(cellsize, return_state=True)(embedded_encoder_input)
encoder_states = [state_h, state_c]
decoder_lstm = LSTM(cellsize, return_state=True, return_sequences=True)
decoder_outputs, state_dh, state_dc = decoder_lstm(embedded_decoder_inputs, initial_state=encoder_states)
</code></pre>
<p>Assume I want to replace the initial state of the decoder with encoder_output and features I obtain from other sources </p>
<pre><code>encoder_states = [encoder_output , my_state]
</code></pre>
<p>But I face the following error:</p>
<blockquote>
<p>ValueError: The initial state or constants of an RNN layer cannot be
specified with a mix of Keras tensors and non-Keras tensors (a "Keras
tensor" is a tensor that was returned by a Keras layer, or by <code>Input</code>)</p>
</blockquote>
<p>Although I print state_h & state_c & encoder_output & my_state, all have the same type and shape, for example:</p>
<blockquote>
<pre><code>state_h: Tensor("lstm_57/while/Exit_2:0", shape=(?, 128), dtype=float32)
my_state: Tensor("Reshape_17:0", shape=(?, 128), dtype=float32)
</code></pre>
</blockquote>
<p>Am I right in understanding that it will not accept inputs that were not produced by a previous Keras layer, i.e. that are not Keras tensors? </p>
<p><strong>Update</strong></p>
<p>After convert tensor to Keras tensor, The new error:</p>
<blockquote>
<p>ValueError: Input tensors to a Model must come from
<code>keras.layers.Input</code>. Received: Tensor("Reshape_18:0", shape=(?, 128),
dtype=float32) (missing previous layer metadata).</p>
</blockquote>
|
<p>I guess you mixed tensorflow tensor and keras tensor. Although the results of <code>state_h</code> and <code>my_state</code> are tensor, they are actually different. You can use <code>K.is_keras_tensor()</code> to distinguish them. An example:</p>
<pre><code>import tensorflow as tf
import keras.backend as K
from keras.layers import LSTM,Input,Lambda
my_state = Input(shape=(128,))
print('keras input layer type:')
print(my_state)
print(K.is_keras_tensor(my_state))
my_state = tf.placeholder(shape=(None,128),dtype=tf.float32)
print('\ntensorflow tensor type:')
print(my_state)
print(K.is_keras_tensor(my_state))
# you may need it
my_state = Lambda(lambda x:x)(my_state)
print('\nconvert tensorflow to keras tensor:')
print(my_state)
print(K.is_keras_tensor(my_state))
# print
keras input layer type:
Tensor("input_3:0", shape=(?, 128), dtype=float32)
True
tensorflow tensor type:
Tensor("Placeholder:0", shape=(?, 128), dtype=float32)
False
convert tensorflow to keras tensor:
Tensor("lambda_1/Identity:0", shape=(?, 128), dtype=float32)
True
</code></pre>
|
tensorflow|keras|lstm|encoder-decoder
| 2
|
375,869
| 54,381,559
|
Pandas Dataframe: to_dict() poor performance
|
<p>I work with apis that return large pandas dataframes. I'm not aware of a fast way to iterate through the dataframe directly so I cast to a dictionary with <code>to_dict()</code>.</p>
<p>After my data is in dictionary form, the performance is fine. However, the <code>to_dict()</code> operation tends to be a performance bottleneck. </p>
<p>I often group columns of the dataframe together to form multi-index and use the 'index' orientation for <code>to_dict()</code>. Not sure if the large multi-index drives the poor performance. </p>
<p>Is there a faster way to cast a pandas dataframe? Maybe there is a better way to iterate directly over the dataframe without any cast? Not sure if there is a way I could apply vectorization.</p>
<p>Below I give sample code which mimics the issue with timings:</p>
<pre><code>import pandas as pd
import random as rd
import time
#Given a dataframe from api (model as random numbers)
df_columns = ['A','B','C','D','F','G','H','I']
dict_origin = {col:[rd.randint(0,10) for x in range(0,1000)] for col in df_columns}
dict_origin = pd.DataFrame(dict_origin)
#Transform to pivot table
t0 = time.time()
df_pivot = pd.pivot_table(dict_origin,values=df_columns[-3:],index=df_columns[:-3])
t1 = time.time()
print('Pivot Construction takes: ' + str(t1-t0))
#Iterate over all elements in pivot table
t0 = time.time()
for column in df_pivot.columns:
for row in df_pivot[column].index:
test = df_pivot[column].loc[row]
t1 = time.time()
print('Dataframe iteration takes: ' + str(t1-t0))
#Iteration over dataframe too slow. Cast to dictionary (bottleneck)
t0 = time.time()
df_pivot = df_pivot.to_dict('index')
t1 = time.time()
print('Cast to dictionary takes: ' + str(t1-t0))
#Iteration over dictionary is much faster
t0 = time.time()
for row in df_pivot.keys():
for column in df_pivot[row]:
test = df_pivot[row][column]
t1 = time.time()
print('Iteration over dictionary takes: ' + str(t1-t0))
</code></pre>
<p>Thank you!</p>
|
<p>The common guidance is: don't iterate; use functions that operate on whole rows/columns, or on grouped rows/columns. The third timed block below shows how to iterate over the underlying NumPy array, which is the <code>.values</code> attribute. The results are:</p>
<p>Pivot Construction takes: 0.012315988540649414</p>
<p>Dataframe iteration takes: 0.32346272468566895</p>
<p><strong>Iteration over values takes: 0.004369020462036133</strong></p>
<p>Cast to dictionary takes: 0.023524761199951172</p>
<p>Iteration over dictionary takes: 0.0010480880737304688</p>
<pre><code>import pandas as pd
from io import StringIO
# Test data
import pandas as pd
import random as rd
import time
#Given a dataframe from api (model as random numbers)
df_columns = ['A','B','C','D','F','G','H','I']
dict_origin = {col:[rd.randint(0,10) for x in range(0,1000)] for col in df_columns}
dict_origin = pd.DataFrame(dict_origin)
#Transform to pivot table
t0 = time.time()
df_pivot = pd.pivot_table(dict_origin,values=df_columns[-3:],index=df_columns[:-3])
t1 = time.time()
print('Pivot Construction takes: ' + str(t1-t0))
#Iterate over all elements in pivot table
t0 = time.time()
for column in df_pivot.columns:
for row in df_pivot[column].index:
test = df_pivot[column].loc[row]
t1 = time.time()
print('Dataframe iteration takes: ' + str(t1-t0))
#Iterate over all values in pivot table
t0 = time.time()
v = df_pivot.values
for row in range(df_pivot.shape[0]):
for column in range(df_pivot.shape[1]):
test = v[row, column]
t1 = time.time()
print('Iteration over values takes: ' + str(t1-t0))
#Iteration over dataframe too slow. Cast to dictionary (bottleneck)
t0 = time.time()
df_pivot = df_pivot.to_dict('index')
t1 = time.time()
print('Cast to dictionary takes: ' + str(t1-t0))
#Iteration over dictionary is much faster
t0 = time.time()
for row in df_pivot.keys():
for column in df_pivot[row]:
test = df_pivot[row][column]
t1 = time.time()
print('Iteration over dictionary takes: ' + str(t1-t0))
</code></pre>
|
python|pandas|dataframe|pivot-table|vectorization
| 6
|
375,870
| 54,294,577
|
Fast inner product of two 2-d masked arrays in numpy
|
<p>My problem is the following. I have two arrays <code>X</code> and <code>Y</code> of shape n, p where <strong>p >> n</strong> (e.g. n = 50, p = 10000).</p>
<p>I also have a mask <code>mask</code> (1-d array of booleans of size <code>p</code>) with respect to <code>p</code>, of <strong>small</strong> density (e.g. <code>np.mean(mask)</code> is 0.05).</p>
<p>I try to compute, as fast as possible, the inner product of <code>X</code> and <code>Y</code> with respect to <code>mask</code>: the output <code>inner</code> is an array of shape <code>n, n</code>, and is such that <code>inner[i, j] = np.sum(X[i, np.logical_not(mask)] * Y[j, np.logical_not(mask)])</code>.</p>
<p>I have tried using the <code>numpy.ma</code> library, but it is quite slow for my use:</p>
<pre><code>import numpy as np
import numpy.ma as ma
n, p = 50, 10000
density = 0.05
mask = np.array(np.random.binomial(1, density, size=p), dtype=np.bool_)
mask_big = np.ones(n)[:, None] * mask[None, :]
X = np.random.randn(n, p)
Y = np.random.randn(n, p)
X_ma = ma.array(X, mask=mask_big)
Y_ma = ma.array(Y, mask=mask_big)
</code></pre>
<p>But then, on my machine, <code>X_ma.dot(Y_ma.T)</code> is about 5 times slower than <code>X.dot(Y.T)</code>...</p>
<p>To begin with, I think it is a problem that <code>.dot</code> does not know that the mask is only with respect to <code>p</code>, but I don't know if it's possible to use this information.</p>
<p>I'm looking for a way to perform the computation without being much slower than the naive dot.</p>
<p>Thanks a lot !</p>
|
<p>We can use <code>matrix-multiplication</code> on both the full and the masked versions, since subtracting the masked product from the full product yields the desired output -</p>
<pre><code>inner = X.dot(Y.T)-X[:,mask].dot(Y[:,mask].T)
</code></pre>
<p>Or simply use the inverted mask, though this would be slower for a sparse <code>mask</code> -</p>
<pre><code>inner = X[:,~mask].dot(Y[:,~mask].T)
</code></pre>
<p>Timings -</p>
<pre><code>In [34]: np.random.seed(0)
...: p,n = 10000,50
...: X = np.random.rand(n,p)
...: Y = np.random.rand(n,p)
...: mask = np.random.rand(p)>0.95
In [35]: mask.mean()
Out[35]: 0.0507
In [36]: %timeit X.dot(Y.T)-X[:,mask].dot(Y[:,mask].T)
100 loops, best of 3: 2.54 ms per loop
In [37]: %timeit X[:,~mask].dot(Y[:,~mask].T)
100 loops, best of 3: 4.1 ms per loop
In [39]: %%timeit
...: inner = np.empty((n,n))
...: for i in range(X.shape[0]):
...: for j in range(X.shape[0]):
...: inner[i, j] = np.sum(X[i, ~mask] * Y[j, ~mask])
1 loop, best of 3: 302 ms per loop
</code></pre>
|
python|numpy
| 2
|
375,871
| 54,586,159
|
I´m trying to filter data of colums from a Data Frame, but the index names contain white spaces
|
<p>I'm trying to filter the rows of a DataFrame, but since the column name contains white spaces, I have not been able to do it.</p>
<p>"DDTS Number" is the name of the column.</p>
<p>The following worked when there were no spaces:</p>
<pre><code>data[data3.(DDTS Number) != null]
</code></pre>
<p>I've tried different syntax, but I don't know if there is a way to do it without renaming the column.</p>
|
<p>Hi everyone, I found the solution. The problem was that attribute-style access does not work when the column name has spaces, so use bracket notation instead to get rid of the null values:</p>
<pre><code>df1 = df[df["ColumnName With Spaces"].notnull()]
</code></pre>
<p>This filters out of <code>df</code> all the rows whose "ColumnName With Spaces" column contains null values.</p>
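<p>A self-contained example with made-up data, using the column name from the question:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'DDTS Number': ['A1', None, 'B2']})   # made-up data
df1 = df[df['DDTS Number'].notnull()]
print(df1)
#   DDTS Number
# 0          A1
# 2          B2
</code></pre>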
|
python|pandas|filter|row|spaces
| 0
|
375,872
| 54,393,236
|
Scrape a table iterating over pages of a website: how to define the last page?
|
<p>I have the following code that works OK:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
df_list = []
for i in range(1, 13):
url = 'https://www.uzse.uz/trade_results?date=25.01.2019&mkt_id=ALL&page=%d' %i
df_list.append(pd.read_html(url)[0])
df = pd.concat(df_list)
df
</code></pre>
<p>But for this particular page I know the number of pages, which is 13 in <code>range(1, 13)</code>. Is there a way to define the last page so I do not have to go and check how many pages there are on a given page? </p>
|
<p>Try with </p>
<pre><code>for i in range(1, 100):
url = 'https://www.uzse.uz/trade_results?date=25.01.2019&mkt_id=ALL&page=%d' %i
if pd.read_html(url)[0].empty:
break
else :
df_list.append(pd.read_html(url)[0])
</code></pre>
<hr>
<pre><code>page = 1  # using while
while True:
    url = 'https://www.uzse.uz/trade_results?date=25.01.2019&mkt_id=ALL&page=%d' % page
    table = pd.read_html(url)[0]
    if table.empty:
        break
    df_list.append(table)
    page = page + 1
print(page)
</code></pre>
|
python|python-3.x|pandas|for-loop|web-scraping
| 2
|
375,873
| 54,665,137
|
How to give a batch of frames to the model in pytorch c++ api?
|
<p>I've written code to load a PyTorch model in C++ with the help of the PyTorch C++ frontend API. I want to give a batch of frames to a pretrained model in C++ by using <code>module->forward(batch_frames)</code>, but it can only forward a single input. <br/>
How can I give a batch of inputs to the model?</p>
<p>A part of code that I want to give the batch is shown below:</p>
<pre><code> cv::Mat frame;
vector<torch::jit::IValue> frame_batch;
// do some pre-processes on each frame and then add it to the frame_batch
//forward through the batch frames
torch::Tensor output = module->forward(frame_batch).toTensor();
</code></pre>
|
<p>You can create a batch of tensors in libtorch using a variety of ways.<br />
Here is one of these ways:<br />
Using Torch::TensorList:<br />
Since <code>torch::TensorList</code> is an <code>ArrayRef</code>, you cannot directly add any tensors to it; therefore first create a vector of tensors and then create your <code>TensorList</code> using that vector:</p>
<pre class="lang-cpp prettyprint-override"><code>// two already processed tensor!
auto tensor1 = torch::randn({ 1, 3, 32, 32 });
auto tensor2 = torch::randn({ 1, 3, 32, 32 });
// using a tensor list
std::vector<torch::Tensor> tensor_vec{ tensor1, tensor2 };
torch::TensorList tensor_list{ tensor_vec};
auto batch_of_tensors = torch::cat(tensor_list);
auto out = module->forward({batch_of_tensors}).toTensor();
</code></pre>
<p>Or you could do :</p>
<pre class="lang-cpp prettyprint-override"><code>auto batch_of_tensors = torch::cat({ tensor_vec});
</code></pre>
<p>or</p>
<pre class="lang-cpp prettyprint-override"><code>auto batch_of_tensors = torch::cat({ tensor1, tensor2});
</code></pre>
|
c++|pytorch|torch
| 2
|
375,874
| 54,683,598
|
Pandas not working in python3 but works in python
|
<p>I tried to run pandas in <code>python3</code>. But I get the following error.</p>
<pre><code>user@client3:~/smith/Python$ python3
Python 3.7.0 (default, Oct 3 2018, 21:22:25)
[GCC 5.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pandas'
</code></pre>
<p>So I tried to run from <code>python</code>,</p>
<pre><code>user@client3:~/smith/Python$ python
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> exit()
</code></pre>
<p>It works fine here. So I tried to install pandas for <code>python3</code> as follows,</p>
<pre><code>user@client3:~/smith/Python$ sudo apt-get install python3-pandas
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-pandas is already the newest version (0.17.1-3ubuntu2).
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-139 linux-headers-4.4.0-139-generic linux-image-4.4.0-139-generic linux-image-extra-4.4.0-139-generic
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 22 not upgraded.
</code></pre>
<p>It is already installed for <code>python3</code>. What happened here? Why doesn't it work for python3?</p>
|
<p>This should work, check it out.</p>
<pre><code>pip3 install pandas
</code></pre>
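<p>If that still fails, it may be because <code>pip3</code> is tied to a different interpreter than the <code>python3</code> you launch (your Python 3.7.0 is likely not the system Python that <code>apt</code> installed <code>python3-pandas</code> for); running <code>python3 -m pip install pandas</code> ties the install to that exact interpreter. A quick check you can run afterwards (assuming the install succeeded):</p>
<pre><code>import sys
print(sys.executable)      # which python3 binary is actually running

import pandas as pd
print(pd.__version__)      # succeeds only if pandas is installed for this interpreter
</code></pre>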
|
python|python-3.x|pandas
| 1
|
375,875
| 54,661,667
|
Tensorflow seems to be using system memory not GPU, and the Program stops after global_variable_inititializer()
|
<p>I just got a new GTX 1070 Founders Addition for my desktop, and I am trying to run tensorflow on this new GPU. I am using tensorflow.device() to run tensorflow on my GPU, but it seems like this is not happening. Instead it is using cpu, and almost all of my systems 8GB of ram. Here is my code:</p>
<pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import matplotlib.image as mpimg
import math
print("\n\n")
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
#
with tf.device("/gpu:0"):
# Helper Function To Print Percentage
def showPercent(num, den, roundAmount):
print( str( round((num / den) * roundAmount )/roundAmount ) + " % ", end="\r")
# Defince The Number Of Images To Get
def getFile(dir, getEveryNthLine):
allFiles = list(os.listdir(dir))
fileNameList = []
numOfFiles = len(allFiles)
i = 0
for fichier in allFiles:
if(i % 100 == 0):
showPercent(i, numOfFiles, 100)
if(i % getEveryNthLine == 0):
if(fichier.endswith(".png")):
fileNameList.append(dir + "/" + fichier[0:-4])
i += 1
return fileNameList
# Other Helper Functions
def init_weights(shape):
init_random_dist = tf.truncated_normal(shape, stddev=0.1, dtype=tf.float16)
return tf.Variable(init_random_dist)
def init_bias(shape):
init_bias_vals = tf.constant(0.1, shape=shape, dtype=tf.float16)
return tf.Variable(init_bias_vals)
def conv2d(x, W):
# x --> [batch, H, W, Channels]
# W --> [filter H, filter W, Channels IN, Channels Out]
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
def max_pool_2by2(x):
# x --> [batch, H, W, Channels]
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
def convolutional_layer(input_x, shape):
W = init_weights(shape)
b = init_bias([ shape[3] ])
return tf.nn.relu(conv2d(input_x, W) + b)
def normal_full_layer(input_layer, size):
input_size = int(input_layer.get_shape()[1])
W = init_weights([input_size, size])
b = init_bias([size])
return tf.matmul(input_layer, W) + b
print("Getting Images")
fileNameList = getFile("F:\cartoonset10k-small", 1000)
print("\nloaded " + str(len(fileNameList)) + " files")
print("Defining Placeholders")
x_ph = tf.placeholder(tf.float16, shape=[None, 400, 400, 4])
y_ph = tf.placeholder(tf.float16, shape=[None])
print("Defining Conv and Pool layer 1")
convo_1 = convolutional_layer(x_ph, shape=[5, 5, 4, 32])
convo_1_pooling = max_pool_2by2(convo_1)
print("Defining Conv and Pool layer 2")
convo_2 = convolutional_layer(convo_1_pooling, shape=[5, 5, 32, 64])
convo_2_pooling = max_pool_2by2(convo_2)
print("Define Flat later and a Full layer")
convo_2_flat = tf.reshape(convo_2_pooling, [-1, 400 * 400 * 64])
full_layer_one = tf.nn.relu(normal_full_layer(convo_2_flat, 1024))
y_pred = full_layer_one # Add Dropout Later
def getLabels(filePath):
df = []
with open(filePath, "r") as file:
for line in list(file):
tempList = line.replace("\n", "").replace('"', "").replace(" ", "").split(",")
df.append({
"attr": tempList[0],
"value":int(tempList[1]),
"maxValue":int(tempList[2])
})
return df
print("\nSplitting And Formating X, and Y Data")
x_data = []
y_data = []
numOfFiles = len(fileNameList)
i = 0
for file in fileNameList:
if i % 10 == 0:
showPercent(i, numOfFiles, 100)
x_data.append(mpimg.imread(file + ".png"))
y_data.append(pd.DataFrame(getLabels(file + ".csv"))["value"][0])
i += 1
print("\nConveting x_data to list")
i = 0
for indx in range(len(x_data)):
if i % 10 == 0:
showPercent(i, numOfFiles, 100)
x_data[indx] = x_data[indx].tolist()
i += 1
print("\n\nPerforming Train Test Split")
train_x, test_x, train_y, test_y = train_test_split(x_data, y_data, test_size=0.2)
print("Defining Loss And Optimizer")
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(
labels=y_ph,
logits=y_pred
)
)
optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.001)
train = optimizer.minimize(cross_entropy)
print("Define Var Init")
init = tf.global_variables_initializer()
with tf.Session() as sess:
print("Checkpoint Before Initializer")
sess.run(init)
print("Checkpoint After Initializer")
batch_size = 8
steps = 1
i = 0
for i in range(steps):
if i % 10:
print(i / 100, end="\r")
batch_x = []
i = 0
for i in np.random.randint(len(train_x), size=batch_size):
showPercent(i, len(train_x), 100)
train_x[i]
batch_x = [train_x[i] for i in np.random.randint(len(train_x), size=batch_size) ]
batch_y = [train_y[i] for i in np.random.randint(len(train_y), size=batch_size) ]
print(sess.run(train, {
x_ph:train_x,
y_ph:train_y,
}))
</code></pre>
<p>If you run this, this program seems to quit when I run global_variable_initializer(). It also prints in the terminal:
<code>Allocation of 20971520000 exceeds 10% of system memory.</code> When looking at my task manager, I see this:</p>
<p><a href="https://i.stack.imgur.com/mgbZ5.png" rel="nofollow noreferrer">The program is using a lot of my CPU.</a></p>
<p><a href="https://i.stack.imgur.com/NHjrX.png" rel="nofollow noreferrer">The program is using a lot of my Memory.</a></p>
<p><a href="https://i.stack.imgur.com/pRDqt.png" rel="nofollow noreferrer">The program is using none of my GPU.</a></p>
<p>I am not sure why this is happening. I am using an anaconda environment, and have installed tensorflow-gpu. I would really appreciate anyone's suggestions and help.</p>
<p>In addition, when I run this, the program stops after global_variable_initializer(). I am not sure if this is related to the problem above.</p>
<p>Tensorflow is version 1.12. CUDA is version 10.0.130.</p>
<p>Help would be greatly appreciated.</p>
|
<p>Try comparing the time (GPU vs CPU) with this simple example:</p>
<pre><code>import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
epoch = 3
print('GPU:')
with tf.device('/gpu:0'):
model = create_model()
model.fit(x_train, y_train, epochs=epoch)
print('\nCPU:')
with tf.device('/cpu:0'):
model = create_model()
model.fit(x_train, y_train, epochs=epoch)
</code></pre>
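<p>Independently of the timing comparison, it may help to first confirm that this TensorFlow build actually sees the GPU at all (TF 1.x API; if the device list below contains no <code>GPU</code> entry, the CPU-only <code>tensorflow</code> package is probably being picked up instead of <code>tensorflow-gpu</code>):</p>
<pre><code>import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_built_with_cuda())                       # True for a GPU-enabled build
print(tf.test.is_gpu_available())                         # True if a usable GPU is found
print([d.name for d in device_lib.list_local_devices()])  # lists CPU/GPU devices
</code></pre>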
|
python|python-3.x|tensorflow|anaconda
| 1
|
375,876
| 54,661,057
|
Python multidimensional array indexing explanation
|
<p>Could someone please explain to me what is happening here? I understand what is happening here: <a href="https://docs.scipy.org/doc/numpy-1.15.0/user/basics.indexing.html#index-arrays" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.0/user/basics.indexing.html#index-arrays</a>, but do not understand this piece of code.</p>
<pre><code>import numpy as np
y = np.zeros((3,3))
y = y.astype(np.int16)
y[1,1] = 1
x = np.ones((3,3))
t = (1-y).astype(np.int16)
print(t)
print(x[t])
x[(1-y).astype(np.int16)] = 0
print(x)
</code></pre>
<p>output:</p>
<pre><code>[[1 1 1]
[1 0 1]
[1 1 1]]
[[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]]
[[0. 0. 0.]
[0. 0. 0.]
[1. 1. 1.]]
</code></pre>
|
<pre><code>import numpy as np # Line 01
y = np.zeros((3,3)) # Line 02
y = y.astype(np.int16) # Line 03
y[1,1] = 1 # Line 04
x = np.ones((3,3)) # Line 05
t = (1-y).astype(np.int16) # Line 06
print(t) # Line 07
print(x[t]) # Line 08
x[(1-y).astype(np.int16)] = 0 # Line 09
print(x) # Line 10
</code></pre>
<p><strong>Line 02:</strong></p>
<p>Creates a two-dimensional 3 x 3 ndarray of zeros. <code>y</code> is a name that is made to point to this ndarray.</p>
<p><strong>Line 03:</strong></p>
<p>Sets the data-type of each element of <code>y</code>, to 16-bit integer.</p>
<p><strong>Line 04:</strong></p>
<p>Sets the element of <code>y</code> at the intersection of the middle row and middle column, to <code>1</code>.</p>
<p><strong>Line 05:</strong></p>
<p>Creates a two-dimensional 3 x 3 ndarray of ones. <code>x</code> is a name that is made to point to this ndarray.</p>
<p><strong>Line 06:</strong></p>
<p>The subtraction (<code>1-t</code>) results in several scalar subtractions (<code>1- elem</code>), where <code>elem</code> is each element of <code>t</code>. The result will be another ndarray, having the same shape as <code>t</code>, and having the result of the subtraction (<code>1- elem</code>), as its values. That is, the values of the ndarray <code>(1-t)</code> will be:</p>
<pre><code>[[1-t[0,0], 1-t[0,1], 1-t[0,2]],
[1-t[1,0], 1-t[1,1], 1-t[1,2]],
[1-t[2,0], 1-t[2,1], 1-t[2,2]]]
</code></pre>
<p>Since <code>t</code> is full of zeros, and a lone <code>1</code> at the intersection of the middle row and middle column, <code>(1-t)</code> will be a two-dimensional ndarray full of ones, with a lone <code>0</code> at the intersection of the middle row and middle column.</p>
<p><strong>Line 07:</strong></p>
<p>Prints <code>t</code></p>
<p><strong>Line 08:</strong></p>
<p>Things get a little tricky from here. What is happening here is called "Combined Advanced and Basic Indexing" (<a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing</a>). Let's go through the specifics, step-by-step.
First, notice that <code>x</code>, is a two-dimensional ndarray, taking another integer ndarray <code>t</code> as an index. Since <code>x</code> needs two indices to be supplied, <code>t</code> will be taken to be the first of those two indices, and the second index will be implicitly assumed to be <code>:</code>. So, <code>x[t]</code> is first interpreted as <code>x[t,:]</code>. The presence of these two indices, where one index is an array of integers <code>t</code>, and the other index is a slice <code>:</code>, results in the situation that is called "Combined Advanced and Basic Indexing".</p>
<p>Now, what exactly happens in this "Combined" scenario? Here goes:
First, the shape of the result will get contributions from the first index <code>t</code>, as well as from the second index <code>:</code>. Now <code>t</code> has the shape <code>(3,3)</code>, and hence the contribution of <code>t</code> to the shape of the result of <code>x[t,:]</code>, is to supply the outermost (leftmost) dimensions of the result shape. Hence the result shape will begin with <code>(3,3,)</code>. Now, the contribution of <code>:</code> to the shape of <code>x[t,:]</code> is based on the answer to the question: On which dimension of <code>x</code> is the <code>:</code> being applied ? The answer is -- the second dimension (since <code>:</code> is the second index within <code>x[t,:]</code>). Hence the contribution of <code>:</code> to the result shape of <code>x[t,:]</code> is <code>3</code> (since <code>3</code> is the length of the second dimension of <code>x</code>). To recap, we have deduced that the result shape of <code>x[t]</code> will be that of <code>x[t,:]</code>, which in turn will be <code>(3,3,3)</code>. This means <code>x[t]</code> will be a three-dimensional array, even though <code>x</code> itself is only a two-dimensional array.</p>
<p>Note that in the shape <code>(3,3,3)</code> of the result, the first two <code>3</code>s were contributed by the advanced index <code>t</code>, and the last <code>3</code> was contributed by the implicit basic index <code>:</code>. These two indexes <code>t</code> and <code>:</code> also use different <strong>ways</strong> to arrive at their respective contributions. The <code>3,3</code>, contribution that came from the index <code>t</code> is just the shape of <code>t</code> itself. It doesn't care about the position of <code>t</code> among the indexes, in the expression <code>x[t,:]</code> (it doesn't care whether <code>t</code> occurs before <code>:</code> or <code>:</code> appears before <code>t</code>). The <code>3</code> contribution that came from the index <code>:</code> is the length of the <strong>second</strong> dimension of <code>x</code>, and we consider the <strong>second</strong> dimension of <code>x</code> because <code>:</code> is the <strong>second</strong> index in the expression <code>x[t,:]</code>. If <code>x</code> had the shape <code>(3,5)</code> instead of <code>(3,3)</code>, then the shape of <code>x[t,:]</code> would have been <code>(3,3,5)</code> instead of <code>(3,3,3)</code>.</p>
<p>Now that we've deduced the <em>shape</em> of the result of <code>x[t]</code> to be <code>(3,3,3)</code>, let us move on to understand how the <em>values</em> themselves will get determined, in the result. The values in the result are obviously the values at positions [0,0,0],
[0,0,1], [0,0,2], [0,1,0], [0,1,1], [0,1,2], [0,2,0], [0,2,1], [0,2,2], and so on. Let's walk through one example of these positions, and you will hopefully get the drift. For our example, let's look at the position [0,1,2] in the result. To get the value for this position, we will first index into the <code>t</code> array using the 0 and the 1. That is, we find out <code>t[0,1]</code>, which will be <code>1</code> (refer to the output of <code>print(t)</code>). This <code>1</code>, which was obtained at <code>t[0,1]</code>, shall be taken to be our first index into <code>x</code>. The second index into <code>x</code> will be <code>2</code> (remember that we are discussing the position <code>[0,1,2]</code> within the result, and trying to determine the value at that position). Now, given these first and second indices into <code>x</code>, we get from <code>x</code> the value to be populated at position <code>[0,1,2]</code> of <code>x[t]</code>.</p>
<p>Now, <code>x</code> is just full of ones. So, <code>x[t]</code> will only consist of ones, even though the shape of <code>x[t]</code> is <code>(3,3,3)</code>. To really test your understanding of what I've said so far, you need to fill <code>x</code> with diverse values:
So, temporarily, comment out Line 05, and have the following line in its place:</p>
<pre><code>x = np.arange(9).reshape((3,3)) # New version of Line 05
</code></pre>
<p>Now, you will find that <code>print(x[t])</code> at Line 08 gives you:</p>
<pre><code>[[[3 4 5]
[3 4 5]
[3 4 5]]
[[3 4 5]
[0 1 2]
[3 4 5]]
[[3 4 5]
[3 4 5]
[3 4 5]]]
</code></pre>
<p>Against this output, test your understanding of what I've described above, about how the values in the result will get determined. (That is, if you've understood the above explanation of <code>x[t]</code>, you should be able to manually re-construct this same output as above, for <code>print (x[t])</code>.</p>
<p><strong>Line 09:</strong></p>
<p>Given the definition of <code>t</code> on Line 06, Line 09 is equivalent to <code>x[t] = 0</code>, which, as we saw above, is equivalent to <code>x[t, :] = 0</code>.</p>
<p>And the effect of the assignment <code>x[t, :] = 0</code> is the same as the effect of <code>x[0:2, :] = 0</code>.</p>
<p>Why is this so? Simply because, in <code>x[t, :]</code>:</p>
<ol>
<li>The index values generated by the index <code>t</code> are <code>0</code>s and <code>1</code>s (since <code>t</code> is an integer index array consisting of only <code>0</code>s and <code>1</code>s)</li>
<li>The index values generated by the index <code>:</code> are <code>0</code>, <code>1</code>, and <code>2</code>.</li>
<li>We are referring only to positions within <code>x</code> that correspond to combinations of these index values. That is, <code>x[t, :]</code> relates only to the those positions <code>x[i,j]</code>, where <code>i</code> takes values <code>0</code> or <code>1</code>, and <code>j</code> takes values <code>0</code>,<code>1</code>, or <code>2</code>. That is, <code>x[t, :]</code> relates only to the positions <code>x[0,0]</code>, <code>x[0,1]</code>, <code>x[0,2]</code>, <code>x[1,0]</code>, <code>x[1,1]</code>, <code>x[1,2]</code>, within the array <code>x</code>.</li>
<li>So, the assignment statement <code>x[t, :] = 0</code> assigns the value 0 at these positions in <code>x</code>. Effectively, we are assigning the value <code>0</code> into all three columns in the first two rows of <code>x</code>, and we are leaving the third row of <code>x</code> unchanged.</li>
</ol>
<p><strong>Line 10:</strong></p>
<p>Prints the value of <code>x</code> after the above assignment.</p>
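<p>As a side note: if the intent of the original snippet was to zero out exactly the positions where <code>y</code> is 0, a boolean mask (rather than an integer index array) does that directly. This is my reading of the intent, not something stated in the question:</p>
<pre><code>import numpy as np

y = np.zeros((3, 3), dtype=np.int16)
y[1, 1] = 1
x = np.ones((3, 3))

x[(1 - y).astype(bool)] = 0    # boolean indexing: zero wherever y == 0
print(x)
# [[0. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 0.]]
</code></pre>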
|
python|numpy|multidimensional-array|indexing|numpy-ndarray
| 3
|
375,877
| 54,358,239
|
Faster method to multiply column lookup values with vectorization
|
<p>I have two Dataframes, one contains values and is the working dataset (postsolutionDF), while the other is simply for reference as a lookup table (factorimportpcntDF). The goal is to add a column to postsolutionDF that contains the product of the lookup values from each row of postsolutionDF (new column name = num_predict). That product is then multiplied by 2700. For example, on first row, the working values are 0.5, 2, -6. The equivalent lookup values for these are 0.1182, 0.2098, and 0.8455. The product of those is 0.0209, which when multiplied by 2700 is 56.61 as shown in output.</p>
<p>The code below works for this simplified example, but it is very slow in the real solution (1.6MM rows x 15 numbered columns). I'm sure there is a better way to do this by removing the 'for k in range' loop but am struggling with how since already using apply on rows. I've found many tangential solutions but nothing that has worked for my situation yet. Thanks for any help.</p>
<pre><code>import pandas as pd
import numpy as np
postsolutionDF = pd.DataFrame({'SCRN' : (['2019-01-22-0000001', '2019-01-22-0000002', '2019-01-22-0000003']), '1' : 0.5,
'2' : 2, '3' : ([-6, 1.0, 8.0])})
postsolutionDF = postsolutionDF[['SCRN', '1', '2', '3']]
print('printing initial postsolutionDF..')
print(postsolutionDF)
factorimportpcntDF = pd.DataFrame({'F1_Val' : [0.5, 1, 1.5, 2], 'F1_Pcnt' : [0.1182, 0.2938, 0.4371, 0.5433], 'F2_Val'
: [2, 3, np.nan, np.nan], 'F2_Pcnt' : [0.2098, 0.7585, np.nan, np.nan], 'F3_Val' : [-6, 1, 8, np.nan], 'F3_Pcnt' :
[0.8455, 0.1753, 0.072, np.nan]})
print('printing factorimportpcntDF..')
print(factorimportpcntDF)
def zero_filter(row): # row is series
inner_value = 1
for k in range(1, 4): # number of columns in postsolutionDF with numeric headers, dynamic in actual code
inner_value *= factorimportpcntDF.loc[factorimportpcntDF['F'+str(k)+'_Val']==row[0+k], 'F'+str(k)+'_Pcnt'].values[0]
inner_value *= 2700
return inner_value
postsolutionDF['num_predict'] = postsolutionDF.apply(zero_filter, axis=1)
print('printing new postsolutionDF..')
print(postsolutionDF)
</code></pre>
<p>Print Output:</p>
<pre><code>C:\ProgramData\Anaconda3\python.exe C:/Users/Eric/.PyCharmCE2017.3/config/scratches/scratch_5.py
printing initial postsolutionDF..
SCRN 1 2 3
0 2019-01-22-0000001 0.5 2 -6.0
1 2019-01-22-0000002 0.5 2 1.0
2 2019-01-22-0000003 0.5 2 8.0
printing factorimportpcntDF..
F1_Pcnt F1_Val F2_Pcnt F2_Val F3_Pcnt F3_Val
0 0.1182 0.5 0.2098 2.0 0.8455 -6.0
1 0.2938 1.0 0.7585 3.0 0.1753 1.0
2 0.4371 1.5 NaN NaN 0.0720 8.0
3 0.5433 2.0 NaN NaN NaN NaN
printing new postsolutionDF..
SCRN 1 2 3 num_predict
0 2019-01-22-0000001 0.5 2 -6.0 56.610936
1 2019-01-22-0000002 0.5 2 1.0 11.737312
2 2019-01-22-0000003 0.5 2 8.0 4.820801
Process finished with exit code 0
</code></pre>
|
<p>I'm not sure how to do this in native pandas, but if you go back to numpy, it is pretty easy.</p>
<p>The <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow noreferrer">numpy.interp</a> function is designed to interpolate between values in the lookup table, but if the input values exactly match the values in the lookup table (like yours do), it becomes just a simple lookup instead of an interpolation.</p>
<pre><code>postsolutionDF['1new'] = np.interp(postsolutionDF['1'].values, factorimportpcntDF['F1_Val'], factorimportpcntDF['F1_Pcnt'])
postsolutionDF['2new'] = np.interp(postsolutionDF['2'].values, factorimportpcntDF['F2_Val'], factorimportpcntDF['F2_Pcnt'])
postsolutionDF['3new'] = np.interp(postsolutionDF['3'].values, factorimportpcntDF['F3_Val'], factorimportpcntDF['F3_Pcnt'])
postsolutionDF['num_predict'] = postsolutionDF['1new'] * postsolutionDF['2new'] * postsolutionDF['3new'] * 2700
postsolutionDF.drop(columns=['1new', '2new', '3new'], inplace=True)
</code></pre>
<p>Gives the output: </p>
<pre><code>In [167]: postsolutionDF
Out[167]:
SCRN 1 2 3 num_predict
0 2019-01-22-0000001 0.5 2 -6.0 56.610936
1 2019-01-22-0000002 0.5 2 1.0 11.737312
2 2019-01-22-0000003 0.5 2 8.0 4.820801
</code></pre>
<p>I had to pad out the factorimportpcntDF so all the columns had 4 values, otherwise looking up the highest value for a column wouldn't work. You can just use the same value multiple times, or split it into 3 lookup tables if you prefer, then the columns could be different lengths.</p>
<pre><code>factorimportpcntDF = pd.DataFrame({'F1_Val' : [0.5, 1, 1.5, 2], 'F1_Pcnt' : [0.1182, 0.2938, 0.4371, 0.5433],
'F2_Val' : [2, 3, 3, 3], 'F2_Pcnt' : [0.2098, 0.7585, 0.7585, 0.7585],
'F3_Val' : [-6, 1, 8, 8], 'F3_Pcnt' : [0.8455, 0.1753, 0.072, 0.072]})
</code></pre>
<p>Note that the documentation specifies that your <code>F1_val</code> etc columns need to be in increasing order (yours are here, just an FYI). Otherwise interp will run, but won't necessarily give good results.</p>
|
python|pandas|numpy|dataframe
| 0
|
375,878
| 54,613,854
|
Merge dataframes by left join SQL & Pandas
|
<p>I created two tables in a MySQL database using Python. The following SQL code performs a join on the two tables in the database. How can I do the same by writing equivalent Python code?</p>
<p>MySQL code:</p>
<pre><code>SELECT A.num, B.co_name, A.rep_name
FROM A
JOIN B
ON A.num=B.no
</code></pre>
<p>Desired Python codes:</p>
<pre><code>sql = "XXX"
df_merged = pd.read_sql(sql, con=cnx)
</code></pre>
|
<p>I managed to resolve this by enclosing my query in triple quotes: </p>
<pre><code>sql = '''SELECT A.num, B.co_name, A.rep_name
FROM A
LEFT JOIN B ON A.num=B.no '''
df_merged = pd.read_sql(sql, con=cnx)
</code></pre>
|
python|mysql|sql|pandas|merge
| 1
|
375,879
| 54,445,893
|
conda list returning run-time error Path not Found after installing PyTorch
|
<p>Recently I tried to install PyTorch using the command<br>
<code>conda install pytorch torchvision cuda100 -c pytorch</code></p>
<p>To verify the package installed correctly I ran <code>conda list</code> in the Anaconda prompt and got the following error:</p>
<p><code>RuntimeError: Path not found: C:\Users\[name]\AppData\Local\Continuum\Anaconda3\Lib\site-packages\Sphinx-1.5.6-py3.6.egg\EGG-INFO</code></p>
<p>I'm currently running conda version 4.6.1 and Python version 3.6.7 on Windows 10. I'd appreciate any help in determining what caused this error and how it can be fixed so I can properly manage my Anaconda packages.</p>
<p>Full stack trace:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\exceptions.py", line 1001, in __call__
return func(*args, **kwargs)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main.py", line 84, in _main
exit_code = do_call(args, p)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 81, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 142, in execute
show_channel_urls=context.show_channel_urls)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 80, in print_packages
show_channel_urls=show_channel_urls)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 45, in list_packages
installed = sorted(PrefixData(prefix, pip_interop_enabled=True).iter_records(),
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 116, in iter_records
return itervalues(self._prefix_records)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 145, in _prefix_records
return self.__prefix_records or self.load() or self.__prefix_records
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 69, in load
self._load_site_packages()
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 258, in _load_site_packages
python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\gateways\disk\read.py", line 245, in read_python_record
pydist = PythonDistribution.init(prefix_path, anchor_file, python_version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 90, in init
return PythonEggInfoDistribution(anchor_full_path, python_version, sp_reference)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 400, in __init__
super(PythonEggInfoDistribution, self).__init__(anchor_full_path, python_version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 104, in __init__
raise RuntimeError("Path not found: %s" % anchor_full_path)
RuntimeError: Path not found: C:\Users\[name]\AppData\Local\Continuum\Anaconda3\Lib\site-packages\Sphinx-1.5.6-py3.6.egg\EGG-INFO
</code></pre>
<p>Any help would be appreciated.</p>
|
<p>As @P.Antoniadis said, this is an ongoing issue, and removing the 'Sphinx-1.5.6-py3.6.egg' folder is the suggested workaround.</p>
<p><a href="https://github.com/conda/conda/issues/8156#issuecomment-458777849" rel="nofollow noreferrer">https://github.com/conda/conda/issues/8156#issuecomment-458777849</a></p>
|
python|anaconda|conda|pytorch
| 1
|
375,880
| 54,583,860
|
Mask last three bits in a numpy array
|
<p>I have a numpy 2D array with the following unique values:</p>
<pre><code>[ 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 128 129
131 132 133 134 135 136 137 138 139 140 141 142 143 192 193 194 195 196
197 198 199 200 201 202 203 204 205 206 207 255]
</code></pre>
<p>I want to mask those values in the numpy 2D array, where <strong>ANY</strong> of the last 3 bits of the value are 1. I am doing this, but not sure if it is correct or indeed the best way to achieve it:</p>
<pre><code>mask = ((arr & 3) == 0)
</code></pre>
|
<p>Assuming that by:</p>
<blockquote>
<p>mask those values in the numpy 2D array, where <strong>ANY</strong> of the last 3 bits of the value are 1</p>
</blockquote>
<p>you mean "<strong>select</strong> those elements in which any of the three least significant bits are nonzero", you could do:</p>
<pre><code>mask = np.bitwise_and(arr, 0b111) > 0
</code></pre>
<p>Arguably, using the function <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.bitwise_and.html" rel="nofollow noreferrer"><code>numpy.bitwise_and</code></a> instead of the <code>&</code> operator makes the code more readable.</p>
<h3>Sample run</h3>
<pre><code>In [35]: arr = np.arange(17)
In [36]: arr
Out[36]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16])
In [37]: mask = np.bitwise_and(arr, 0b111) > 0
In [38]: mask
Out[38]:
array([False, True, True, True, True, True, True, True, False,
True, True, True, True, True, True, True, False])
In [39]: for x in arr[mask]:
...: b = bin(x)
...: print('{}{:0>5}'.format(b[:2], b[2:]))
...:
...:
0b00001
0b00010
0b00011
0b00100
0b00101
0b00110
0b00111
0b01001
0b01010
0b01011
0b01100
0b01101
0b01110
0b01111
</code></pre>
|
python|numpy
| 1
|
375,881
| 73,825,623
|
Stream data using tf.data from cloud object store other than Google Storage possible?
|
<p>I've found <a href="https://www.tensorflow.org/datasets/gcs" rel="nofollow noreferrer">this</a> nice article on how to directly stream data from Google Storage to tf.data. This is super handy if your compute tier has limited storage (like on KNative in my case) and network bandwidth is sufficient (and free of charge anyway).</p>
<blockquote>
<p>tfds.load(..., try_gcs=True)</p>
</blockquote>
<p>Unfortunately, my data resides in a non Google bucket and it isn't documented for other Cloud Object Store systems.</p>
<p>Does anybody know if it also works in non GS environments?</p>
|
<p>I'm not sure how this is implemented in the library, but it should be possible to access other object store systems in a similar way.</p>
<p>You might need to extend the current mechanism to use a more generic API like the S3 API (most object stores have this as a compatibility layer). If you do need to do this, I'd recommend contributing it back upstream, as it seems like a generally-useful capability when either storage space is tight or when fast startup is desired.</p>
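<p>As a rough illustration only (this is not something <code>tfds.load(..., try_gcs=True)</code> supports out of the box), TensorFlow's S3 filesystem support, shipped with <code>tensorflow-io</code> in recent versions, can stream records straight from an S3-compatible endpoint. The bucket, object and endpoint names below are made up:</p>
<pre><code>import os
import tensorflow as tf
import tensorflow_io  # noqa: F401  (registers the s3:// filesystem scheme)

# hypothetical credentials / endpoint for a non-AWS, S3-compatible object store
os.environ["AWS_ACCESS_KEY_ID"] = "my-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "my-secret-key"
os.environ["S3_ENDPOINT"] = "object-store.example.com"

# stream TFRecords directly from the bucket instead of downloading them first
dataset = tf.data.TFRecordDataset("s3://my-bucket/train.tfrecord")
</code></pre>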
|
tensorflow|kubernetes|serverless|knative
| 0
|
375,882
| 73,711,194
|
oracle sql query and export excel and send email not shown persian and shown as question mark pandas python
|
<p>I run an Oracle SQL query, export the result to Excel, and send it by email, but the Persian text is not displayed and appears as question marks.</p>
<pre><code> connection = cx_Oracle.connect(user_db,password_db,dsn_tns)
df = pd.read_sql(query,con=connection)
attach_file_name = filename+'_'+timestr+'.xlsx'
df.to_excel(attach_file_name,sheet_name="report",encoding="utf-8",engine='xlsxwriter')
msg.attach(MIMEText(body,'plain'))
attach_file = open(attach_file_name,'rb')
payload = MIMEBase('application','octate-stream')
payload.set_payload((attach_file).read())
encoders.encode_base64(payload)
payload.add_header('Content-Disposition',"attach_file; filename= "+attach_file_name)
msg.attach(payload)
</code></pre>
|
<p>It is solved by reading the query with UTF-8 encoding in cx_Oracle, changing the Excel writer engine to xlsxwriter, and setting the encoding:</p>
<pre><code>con = cx_Oracle.connect(user_db,password_db,dsn_tns,encoding="UTF-8")
df = pd.read_sql(indict.datasources.Query, con=con)
df.to_excel(attach_file_name, sheet_name="report", encoding="utf-8", engine='xlsxwriter')
</code></pre>
|
python|pandas|cx-oracle|smtplib
| 1
|
375,883
| 73,573,453
|
Extract a new dataframe
|
<p>I have the following data set :</p>
<p><img src="https://i.stack.imgur.com/ZUP7q.png" alt="enter image description here" /></p>
<p>and I want to extract a new dataframe of the following form :</p>
<p><img src="https://i.stack.imgur.com/UnY3h.png" alt="enter image description here" /></p>
<p>I tried this code but I got "NaN" for all cells</p>
<pre><code>years = list(df_le["Year"].unique())
df2 =pd.DataFrame()
df2["Country"] =list(df_le["Country"].unique())
for year in years :
df2[year] = df_le[df_le["Year"]==year]["Life expectancy "]
</code></pre>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot(index='Country', columns='Year', values='Life expectancy').reset_index()
</code></pre>
|
python|pandas|dataframe
| 0
|
375,884
| 73,687,094
|
Networkx: create weighted graph from pandas
|
<p>I am trying to create a weighted graph in <code>networkx</code>, but am facing problems when indicating the weight.
My code</p>
<pre><code>import pandas as pd
import networkx as nx
dictt = {"from" :["A", "B", "C"], "to":["B", "D", "A"],}
distmat = pd.DataFrame.from_dict(dictt)
distmat["weight"] = [1, 2, 4]
d = distmat.to_numpy()
data = list(map(tuple, d))
G = nx.Graph()
G.add_edges_from(data, weight = "weight")
</code></pre>
<p>returns <code>TypeError: 'int' object is not iterable</code>.
Any ideas on how to solve the issue?</p>
|
<p>Was using the wrong <code>networkx</code> function, here the correct code:</p>
<pre><code>dictt = { "from": ["A", "B", "C"], "to": ["B", "D", "A"]}
distmat = pd.DataFrame.from_dict(dictt)
distmat["weight"] = [1, 2, 4]
d = distmat.to_numpy()
data = list(map(tuple, d))
G = nx.Graph()
G.add_weighted_edges_from(data, weight = "weight")
</code></pre>
|
python|pandas|networkx
| 0
|
375,885
| 73,532,344
|
ImportError: cannot import name 'notf' from 'tensorboard.compat'
|
<p><strong>I am training stylegan2-ada-pytorch on Google Colab with my custom images; however, when trying to perform the initial training I get the above error from TensorBoard.</strong></p>
<pre><code>cmd = f"/usr/bin/python3 /content/stylegan2-ada-pytorch/train.py --snap {SNAP} --outdir {EXPERIMENTS} --data {DATA}"
</code></pre>
<p>!{cmd}</p>
|
<p>Solved it by uninstalling jax and reinstalling the CUDA version of it:</p>
<pre><code>!pip uninstall jax jaxlib -y
!pip install "jax[cuda11_cudnn805]==0.3.10" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
</code></pre>
<p>And then changed the torch version to 1.8.1</p>
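<p>For completeness, that last step could be done in Colab with something like the following (the exact companion torchvision build to pair with it may differ):</p>
<pre><code>!pip install torch==1.8.1
</code></pre>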
|
pytorch|tensorboard|stylegan
| 0
|
375,886
| 73,698,975
|
How to export a dataframe from Google Colab as .h5?
|
<p>I'm trying to export a dataframe in <code>.h5</code> format from Google Colab; the package I'm using for downstream analysis reads that format, so I would like to export to that specific file format.</p>
<p>I know to export it into csv file format you do:</p>
<pre><code>from google.colab import drive
drive.mount('drive')
adata.to_csv('adata.csv')
!cp df.csv "drive/My Drive/"
</code></pre>
<p>but I am not sure what is the right command is to execute export into .h5 format. Any help is greatly appreciated! Thank you.</p>
|
<p>You need to use <code>to_hdf</code> method:</p>
<pre><code>from google.colab import drive
drive.mount('drive')
adata.to_hdf('adata.h5', key='adata')
!cp adata.h5 "drive/My Drive/"
</code></pre>
|
python|pandas|dataframe|google-colaboratory
| 2
|
375,887
| 73,585,169
|
How can I make broadly applicable code that fills missing elements differently according to the variable type?
|
<p>I am supposed to fill missing values of a lot of CSV files.
Normally, those have almost the same variables.</p>
<p>Here are the conditions that I should satisfy.</p>
<ol>
<li>If the value type is numeric I should fill -1 to the missing value.</li>
<li>If the value type is character I should fill m to the missing value.</li>
</ol>
<p>The problem is that each CSV file has <strong>different</strong> variables. For example,
Data_1 is</p>
<pre><code>v1 v2 v3 v4 v5
1 a d 1
2 b 1 4
d a 1 6
2 d 1
</code></pre>
<p>then it should be</p>
<pre><code>v1 v2 v3 v4 v5
1 a d 1 -1
2 b m 1 4
-1 d a 1 6
2 m d 1 -1
</code></pre>
<p>However, each data is different in that,</p>
<pre><code>v1 v2 v3 v5
1 a d
2 b 4
d a 6
2 d
</code></pre>
<p>or</p>
<pre><code>v5 v6
x
4 y
6
d
</code></pre>
<p>Therefore, I want to generate code that can be applied uniformly to many CSVs with the characteristics above.
I tried <code>fillna</code>, for example:</p>
<pre><code>x = x.fillna(-1)
y = y.fillna(m)
</code></pre>
|
<p>From the description you posted, this function might help:</p>
<pre><code>def filldefault(series):
series.fillna('m' if type(series.iloc[0]) == str else -1, inplace = True)
</code></pre>
<p>Perhaps you can iterate something like that over the columns in your dataframe.</p>
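<p>For example, a minimal sketch of that iteration (the CSV file name is hypothetical, and it assumes the helper above plus a non-missing first value in each column):</p>
<pre><code>import pandas as pd

df = pd.read_csv("data_1.csv")   # hypothetical file name
for col in df.columns:
    filldefault(df[col])         # fills 'm' for string columns, -1 for numeric columns
</code></pre>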
|
python|pandas|fillna
| 1
|
375,888
| 73,609,732
|
Loading resnet50 pretrained model in PyTorch
|
<p>I want to use the resnet50 pretrained model with PyTorch, and I am using the following code to load it:</p>
<pre><code> import torch
model = torch.hub.load("pytorch/vision", "resnet50", weights="IMAGENET1K_V2")
</code></pre>
<p>Although I upgraded torchvision, I receive the following error:</p>
<p><a href="https://i.stack.imgur.com/r22Vv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r22Vv.jpg" alt="" /></a></p>
<p>Any idea?</p>
|
<p>As per the latest definition, we now load models using the torchvision library; you can try that as follows:</p>
<pre class="lang-py prettyprint-override"><code>from torchvision.models import resnet50, ResNet50_Weights
# Old weights with accuracy 76.130%
model1 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
# New weights with accuracy 80.858%
model2 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
</code></pre>
|
python|deep-learning|pytorch
| 1
|
375,889
| 73,652,452
|
Python dataframe returning closest value above specified input in one row (pivot_table)
|
<p>I have the following DataFrame, <code>output_excel</code>, containing inventory data and sales data for different products. See the DataFrame below:</p>
<pre><code> Product 2022-04-01 2022-05-01 2022-06-01 2022-07-01 2022-08-01 2022-09-01 AvgMonthlySales Current Inventory
1 BE37908 1500 1400 1200 1134 1110 1004 150.208333 1500
2 BE37907 2000 1800 1800 1540 1300 1038 189.562500 2000
3 DE37907 5467 5355 5138 4926 4735 4734 114.729167 5467
</code></pre>
<p>Please note that in my example, today's date is 2022-04-01, so all inventory numbers for the months May through September are predicted values, while the AvgMonthlySales are the mean of actual, past sales for that specific product. The current inventory just displays today's value.</p>
<p>I also have another dataframe, <code>df2</code>, containing the lead time, the same sales data, and the calculated security stock for the same products. The formula for the security stock is ((leadtime in weeks / 4) + 1) * AvgMonthlySales:</p>
<pre><code> Product AvgMonthlySales Lead time in weeks Security Stock
1 BE37908 250.208333 16 1251.04166
2 BE37907 189.562500 24 1326.9375
3 DE37907 114.729167 10 401.552084
</code></pre>
<p><strong>What I am trying to achieve:</strong></p>
<p>I want to create a new dataframe, which tells me how many months are left until our inventory drops below the security stock. For example, for the first product, <code>BE37908</code>, the security stock is ~1251 units, and by 2022-06-01 our inventory will drop below that number. So I want to return 2022-05-01, as this is the last month where our inventories are projected to be above the security stock. The whole output should look something like this:</p>
<pre><code> Product Last Date Above Security Stock
1 BE37908 2022-05-01
2 BE37907 2022-07-01
3 DE37907 NaN
</code></pre>
<p>Please also note that the timeframe for the projections (the columns) can be set by the user, so we couldn't just select columns 2 through 7. However, the Product column will always be the first one, and the AvgMonthlySales and the Current Inventory columns will always be the last two.</p>
<p>To recap, I want to return the column with the smallest value above the security stock for each product. I have an idea on how to do that by column using <code>argsort</code>, but not by row. What is the best way to achieve this? Any tips?</p>
|
<p>You could try as follows:</p>
<pre><code># create list with columns with dates
cols = [col for col in df.columns if col.startswith('20')]
# select cols, apply df.gt row-wise, sum and subtract 1
idx = df.loc[:,cols].gt(df2['Security Stock'], axis=0).sum(axis=1).sub(1)
# get the correct dates from the cols
# if the value == len(cols)-1, *all* values will have been greater so: np.nan
idx = [cols[i] if i != len(cols)-1 else np.nan for i in idx]
out = df['Product'].to_frame()
out['Last Date Above Security Stock'] = idx
print(out)
Product Last Date Above Security Stock
1 BE37908 2022-05-01
2 BE37907 2022-07-01
3 DE37907 NaN
</code></pre>
|
python|pandas|dataframe|pivot-table|np.argsort
| 1
|
375,890
| 73,579,932
|
How to change monthly table into one column with date index?
|
<p>I downloaded the Broad Dollar Index from FRED with the following format:</p>
<pre><code> DATE RTWEXBGS
0 2006-01-01 100.0000
1 2006-02-01 100.2651
2 2006-03-01 100.5424
3 2006-04-01 100.0540
4 2006-05-01 97.8681
.. ... ...
194 2022-03-01 111.2659
195 2022-04-01 111.8324
196 2022-05-01 114.6075
197 2022-06-01 115.6957
198 2022-07-01 118.2674
</code></pre>
<p>I also got an Excel file of inflation rate with a different format:</p>
<pre><code> Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Annual
0 2022 0.07480 0.07871 0.08542 0.08259 0.08582 0.09060 0.08525 NaN NaN NaN NaN NaN NaN
1 2021 0.01400 0.01676 0.02620 0.04160 0.04993 0.05391 0.05365 0.05251 0.05390 0.06222 0.06809 0.07036 0.04698
2 2020 0.02487 0.02335 0.01539 0.00329 0.00118 0.00646 0.00986 0.01310 0.01371 0.01182 0.01175 0.01362 0.01234
3 2019 0.01551 0.01520 0.01863 0.01996 0.01790 0.01648 0.01811 0.01750 0.01711 0.01764 0.02051 0.02285 0.01812
4 2018 0.02071 0.02212 0.02360 0.02463 0.02801 0.02872 0.02950 0.02699 0.02277 0.02522 0.02177 0.01910 0.02443
.. ... ... ... ... ... ... ... ... ... ... ... ... ... ...
104 1918 0.19658 0.17500 0.16667 0.12698 0.13281 0.13077 0.17969 0.18462 0.18045 0.18519 0.20741 0.20438 0.17284
105 1917 0.12500 0.15385 0.14286 0.18868 0.19626 0.20370 0.18519 0.19266 0.19820 0.19469 0.17391 0.18103 0.17841
106 1916 0.02970 0.04000 0.06061 0.06000 0.05941 0.06931 0.06931 0.07921 0.09901 0.10784 0.11650 0.12621 0.07667
107 1915 0.01000 0.01010 0.00000 0.02041 0.02020 0.02020 0.01000 -0.00980 -0.00980 0.00990 0.00980 0.01980 0.00915
108 1914 0.02041 0.01020 0.01020 0.00000 0.02062 0.01020 0.01010 0.03030 0.02000 0.01000 0.00990 0.01000 0.01349
</code></pre>
<p>How do I change the inflation table into a format similar to the dollar index?</p>
|
<p>Something like this (it doesn't take the <code>Annual</code> column into account):</p>
<pre><code>df
###
Year Jan Feb Mar Apr May Jun Jul Aug \
0 2022 0.07480 0.07871 0.08542 0.08259 0.08582 0.09060 0.08525 NaN
1 2021 0.01400 0.01676 0.02620 0.04160 0.04993 0.05391 0.05365 NaN
2 2020 0.02487 0.02335 0.01539 0.00329 0.00118 0.00646 0.00986 NaN
Sep Oct Nov Dec Annual
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
month = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
df_melt = pd.melt(df, id_vars=['Year'], value_vars=month, var_name='Month', value_name='Sales')
df_melt['Date'] = pd.to_datetime(df_melt['Year'].astype(str) + '-' + df_melt['Month'].astype(str))
# convert Date column to datetime type
df_melt = df_melt[['Date', 'Sales']]
df_melt
###
Date Sales
0 2022-01-01 0.07480
1 2021-01-01 0.01400
2 2020-01-01 0.02487
3 2022-02-01 0.07871
4 2021-02-01 0.01676
5 2020-02-01 0.02335
6 2022-03-01 0.08542
7 2021-03-01 0.02620
8 2020-03-01 0.01539
9 2022-04-01 0.08259
10 2021-04-01 0.04160
11 2020-04-01 0.00329
12 2022-05-01 0.08582
13 2021-05-01 0.04993
14 2020-05-01 0.00118
15 2022-06-01 0.09060
16 2021-06-01 0.05391
17 2020-06-01 0.00646
18 2022-07-01 0.08525
19 2021-07-01 0.05365
20 2020-07-01 0.00986
21 2022-08-01 NaN
22 2021-08-01 NaN
23 2020-08-01 NaN
24 2022-09-01 NaN
25 2021-09-01 NaN
26 2020-09-01 NaN
27 2022-10-01 NaN
28 2021-10-01 NaN
29 2020-10-01 NaN
30 2022-11-01 NaN
31 2021-11-01 NaN
32 2020-11-01 NaN
33 2022-12-01 NaN
34 2021-12-01 NaN
35 2020-12-01 NaN
</code></pre>
|
pandas|dataframe
| 1
|
375,891
| 73,602,436
|
Using a Knowledge Distillation model to run predictions and check classification metrics
|
<p>I am running the Keras example on knowledge distillation from the <a href="https://keras.io/examples/vision/knowledge_distillation/" rel="nofollow noreferrer">keras example</a> and my question is: is the resulting compressed model that I can use to make predictions the distiller or the student model? And in that case, how do I add back the softmax classification layer and run predictions using the resulting model?</p>
<pre><code> import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_train = np.reshape(x_train, (-1, 28, 28, 1))
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
teacher = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(256, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding="same"),
layers.Conv2D(512, (3, 3), strides=(2, 2), padding="same"),
layers.Flatten(),
layers.Dense(10),
],
name="teacher",
)
# Create the student
student = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding="same"),
layers.Conv2D(32, (3, 3), strides=(2, 2), padding="same"),
layers.Flatten(),
layers.Dense(10),
],
name="student",
)
teacher.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
teacher.fit(x_train, y_train, epochs=5)
teacher.evaluate(x_test, y_test)
distiller = Distiller(student=student, teacher=teacher)
distiller.compile(
optimizer=keras.optimizers.Adam(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
student_loss_fn=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
distillation_loss_fn=keras.losses.KLDivergence(),
alpha=0.1,
temperature=10,
)
# Distill teacher to student
distiller.fit(x_train, y_train, epochs=3)
# Evaluate student on test dataset
distiller.evaluate(x_test, y_test)
</code></pre>
<p>Despite being able to run the example, this isn't entirely clear to me. I would like to test the model on unseen data, so I was wondering: how do I build a model from knowledge distillation, perform predictions, and check its classification report?</p>
|
<p>The "compressed" model is the student model. The <code>Distiller</code> is just the wrapper for training the student to try and mimic the teacher, as opposed to training the student to try and estimate the ground-truth labels.<br />
The page you linked has a section comparing the distillation results to the equivalent light-weight student architecture with "from scratch" training against actual labels, so predictions are rather straight-forward from the tutorial.<br />
Note that the teacher and student both only have a <code>dense</code> layer at their end, and the training therefore assumes that the loss should be calculated by viewing the model outputs as <code>logits</code>. So both the teacher and the student outputs just need a simple <code>tf.nn.softmax</code> for getting the standard categorical score.<br />
Don't forget to recalibrate the softmax temperature if necessary.</p>
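<p>As a small sketch of that last point: predictions come from the student (the compressed model), with a softmax applied on top of its logits. Using scikit-learn's <code>classification_report</code> here is just one assumed way to get the metrics:</p>
<pre><code>import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

logits = student.predict(x_test)          # raw logits from the student's final Dense layer
probs = tf.nn.softmax(logits).numpy()     # standard categorical scores
preds = np.argmax(probs, axis=1)          # predicted class per sample

print(classification_report(y_test, preds))
</code></pre>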
|
python|tensorflow|machine-learning|keras
| 0
|
375,892
| 73,733,427
|
Pandas: Cosine similarity for each row
|
<p>I have dataframe:</p>
<pre><code>import pandas as pd
data = [['apple', 'one', 0.0, [0.047668457, -0.04888916]], ['banana', 'two', 0.0 , [0.0287323, -0.037841797] ], ['qiwi', 'three', 0.0, [0.031051636, -0.05227661]],
['orange', 'one', 1.0, [0.0020618439, -0.055389404]], ['mango', 'two', 1.0, [0.0030326843, -0.036193848]], ['strawberry', 'three', 1.0, [0.008613586, -0.06561279]]]
df = pd.DataFrame(data, columns=['word', 'group', 'count', 'vec'])
----------+-----+-----+--------------------+----------+
| word|group|count| vec| word2|
+----------+-----+-----+--------------------+----------+
| apple| one| 0.0|[0.047668457, -0....| apple|
| banana| two| 0.0|[0.0287323, -0.03...| banana|
| qiwi|three| 0.0|[0.031051636, -0....| qiwi|
| orange| one| 1.0|[0.0020618439, -0...| orange|
| mango| two| 1.0|[0.0030326843, -0...| mango|
|strawberry|three| 1.0|[0.008613586, -0....|strawberry|
+----------+-----+-----+--------------------+----------+
</code></pre>
<p>I want to create a 6x6 dataframe where the cosine similarity between each pair of rows is calculated. The result should look like this (I showed only 2 lines in the example):</p>
<pre><code> +------+----------+----------+------------------+------------------+------------------+------------------+
| word| apple| banana| qiwi| orange| mango| strawberry|
+------+----------+----------+------------------+------------------+------------------+------------------+
| apple| 1.0|0.99240247|0.9721006775103194|0.7414623055821596|0.7414623055821596|0.8007656107780402|
|banana|0.99240247| 1.0| 0.99357443| 0.81838407| 0.84415172| 0.868376|
+------+----------+----------+------------------+------------------+------------------+------------------+
...........................
</code></pre>
<p>I tried this, but I don't know how to fill in all the None values:</p>
<pre><code>df['word2'] = df['word']
df_piv = df.pivot_table(index=['word'], columns='word2',
values='vec', aggfunc='first').reset_index()
# calc cos sim
# df2 = df_piv .set_index('word')
# v = cosine_similarity(df2.values)
# done = pd.DataFrame(v, columns=df2.index.values, index=df2.index).reset_index()
+----------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
| word| apple| banana| mango| orange| qiwi| strawberry|
+----------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
| apple|[0.047668457, -0....| null| null| null| null| null|
| banana| null|[0.0287323, -0.03...| null| null| null| null|
| mango| null| null|[0.0030326843, -0...| null| null| null|
| orange| null| null| null|[0.0020618439, -0...| null| null|
| qiwi| null| null| null| null|[0.031051636, -0....| null|
|strawberry| null| null| null| null| null|[0.008613586, -0....|
+----------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
</code></pre>
|
<p>You can use <code>cdist</code> from <code>scipy.spatial.distance</code>:</p>
<pre><code>from scipy.spatial.distance import cdist
vecs = df['vec'].to_list()
pd.DataFrame(1 - cdist(vecs, vecs, metric='cosine'),
index=df['word'], columns=df['word'])
</code></pre>
<p>Output:</p>
<pre><code>word apple banana qiwi orange mango strawberry
word
apple 1.000000 0.992402 0.972101 0.741462 0.771779 0.800766
banana 0.992402 1.000000 0.993574 0.818384 0.844152 0.868376
qiwi 0.972101 0.993574 1.000000 0.878167 0.899404 0.918923
orange 0.741462 0.818384 0.878167 1.000000 0.998924 0.995648
mango 0.771779 0.844152 0.899404 0.998924 1.000000 0.998899
strawberry 0.800766 0.868376 0.918923 0.995648 0.998899 1.000000
</code></pre>
|
python|pandas|cosine-similarity
| 2
|
375,893
| 73,524,464
|
Converting the comma separated values in a cell to different rows by duplicating the other column contents into every row Python
|
<p>I have a dataset</p>
<pre><code>Name Type Cluster Value
ABC AA,BB AZ,YZ 15
LMN CC,DD,EE LM,LM,LM 20
</code></pre>
<p>with many other columns.</p>
<p>I want to convert it to a dataframe like:</p>
<pre><code>Name Type Cluster Value TypeSubset ClusterSubset
ABC AA, BB AZ, YZ 15 AA AZ
ABC AA, BB AZ, YZ 15 BB YZ
LMN CC,DD,EE LM,LM,LM 20 CC LM
LMN CC,DD,EE LM,LM,LM 20 DD LM
LMN CC,DD,EE LM,LM,LM 20 EE LM
</code></pre>
<p>The dataframe can have many columns, but the number of elements in <code>Type</code> and <code>Cluster</code> will be the same. I just want them separated into different rows, with all the other columns duplicated.</p>
<p>How can I do this in Python?</p>
<p>I tried</p>
<pre><code>(df.set_index(['Type','Cluster'])
   .apply(lambda x: x.astype(str).str.split(',').explode())
   .reset_index())
</code></pre>
<p>This is not giving the desired result.</p>
|
<p>Use <code>assign</code> to create the new columns and <code>explode</code> them in parallel:</p>
<pre><code>(df.assign(TypeSubset=df['Type'].str.split(','),
ClusterSubset=df['Cluster'].str.split(',')
)
.explode(['TypeSubset', 'ClusterSubset'])
)
</code></pre>
|
python-3.x|pandas|dataframe|data-preprocessing
| 1
|
375,894
| 73,608,058
|
Python: Iterate over values in a series and replace with dictionary values when key matches series value
|
<p>I'm trying to iterate over a column in a dataframe and when the value matches a key from my dictionary it should then replace the value in another column with the value of the matching key.</p>
<pre><code> df = pd.DataFrame({'id': ['123', '456', '789'], 'Full': ['Yes', 'No', 'Yes'], 'Cat':['','','']})
cats = {'123':'A', '456':'B', '789':'C'}
for val in df.id:
for key, cat in cats.items():
if key == val:
df.Cat.loc[(df.Full == 'Yes')] = cat
df
id Full Cat
0 123 Yes C
1 456 No
2 789 Yes C
</code></pre>
<p>I would expect id 123 to have a Cat of 'A' but instead it only returns 'C'.</p>
<p>Can anyone explain to me why it isn't iterating over the keys in the dictionary?</p>
|
<p>For filtered values in column use <code>dict.get</code>:</p>
<pre><code>mask = df.Full == 'Yes'
df.loc[mask, 'Cat'] = df.loc[mask, 'id'].apply(lambda x: cats.get(x, ''))
print (df)
id Full Cat
0 123 Yes A
1 456 No
2 789 Yes C
</code></pre>
<p>If no match in dict is possible create <code>None</code>s use:</p>
<pre><code>cats = {'123':'A', '456':'B', '7890':'C'}
mask = df.Full == 'Yes'
df.loc[mask, 'Cat'] = df.loc[mask, 'id'].apply(cats.get)
print (df)
id Full Cat
0 123 Yes A
1 456 No
2 789 Yes None
</code></pre>
|
python|pandas|dictionary
| 1
|
375,895
| 73,557,055
|
Exponents in python: df**x vs df.pow(x)
|
<p>Just out of curiosity, what is the difference between <code>df**x</code> and <code>df.pow(x)</code>?</p>
<p>Having a dataframe <code>df</code> with a column named <code>values</code> you can either do: <code>df.values ** 2</code> or <code>df.values.pow(2)</code> to compute the entire column to the power of 2. I understand that you can change the axis while using <code>DataFrame.pow</code>. But is there a difference in performance? Will changing the power influence the performance?</p>
<pre><code>df = pd.DataFrame([1.,2])
df**2
df.pow(2)
</code></pre>
<p>I have read the discussion about the difference between <code>x**y</code> and <code>math.pow(x, y)</code> from the <code>math</code> module <a href="https://stackoverflow.com/questions/20969773/exponentials-in-python-xy-vs-math-powx-y">here</a>.</p>
|
<p>From the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pow.html" rel="nofollow noreferrer">pandas docs</a>:</p>
<blockquote>
<p>Equivalent to dataframe ** other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rpow.</p>
</blockquote>
<p>It seems that performance wise they are nearly equivalent, but <code>pow</code> allows you to change the axis and add a <code>fill_value</code> which replaces missing values. I'd imagine that there is an extremely slight performance cost to using <code>pow</code>, but if a performance difference that granular matters, maybe python and pandas are the wrong tools for your project.</p>
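<p>A small illustration of the <code>fill_value</code> difference, which is the one thing <code>pow</code> does that <code>**</code> cannot:</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s ** 2)                  # NaN stays NaN
print(s.pow(2, fill_value=0))  # NaN is treated as 0 before squaring -> 0.0
</code></pre>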
|
python|pandas
| 1
|
375,896
| 73,760,714
|
Converting Python code to pyspark environment
|
<p>How can I have the same functions as shift() and cumsum() from pandas in pyspark?</p>
<pre><code>import pandas as pd
temp = pd.DataFrame(data=[['a',0],['a',0],['a',0],['b',0],['b',1],['b',1],['c',1],['c',0],['c',0]], columns=['ID','X'])
temp['transformed'] = temp.groupby('ID').apply(lambda x: (x["X"].shift() != x["X"]).cumsum()).reset_index()['X']
print(temp)
</code></pre>
<p>My question is how to achieve this in PySpark.</p>
|
<p>PySpark handles these types of queries with Window utility functions;
you can read the documentation <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.Window.html" rel="nofollow noreferrer">here</a>.</p>
<p>Your PySpark code would be something like this:</p>
<pre><code>from pyspark.sql import functions as F
from pyspark.sql import Window as W

# an ordering column is required for lag/cumsum; replace 'time' with whatever defines row order
window = W.partitionBy('ID').orderBy('time')
new_df = (
df
.withColumn('shifted', F.lag('X').over(window))
.withColumn('isEqualToPrev', (F.col('shifted') == F.col('X')).cast('int'))
.withColumn('cumsum', F.sum('isEqualToPrev').over(window))
)
</code></pre>
|
pandas|pyspark|group-by|shift|cumsum
| 1
|
375,897
| 73,797,165
|
Same numpy version [Colab and Raspberry pi]:: BUT 'ImportError: numpy.core.multiarray failed to import'
|
<p>I'm trying to use the trained multiclass classification model (model trained and saved from colab) into Raspberry pi 4.</p>
<p>In colab:</p>
<pre><code>import sys
print(sys.version)
</code></pre>
<p>prints <code>3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]</code>. Similarly,</p>
<pre><code>import tensorflow
import numpy
print('tensorflow_version',tensorflow.__version__)
print('numpy_version',numpy.__version__)
</code></pre>
<p>prints</p>
<pre><code>tensorflow_version 2.5.0
numpy_version 1.19.2
</code></pre>
<p>On the Raspberry Pi <code>(Python 3.7.0 (default, Sep 20 2022, 15:06:22)[GCC 10.2.1 20210110] on linux),</code> <code>tensorflow.__version__</code> is <code>'2.5.0'</code> and <code>numpy.__version__</code> is <code>'1.19.2'</code>.</p>
<p>With the above configuration, I get :</p>
<pre><code>RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
ImportError: numpy.core.multiarray failed to import
</code></pre>
<p>While searching for similar questions in StackOverflow, most of them suggest that the error is due to mismatch in <code>numpy</code> version. However, in my case, I've the same version of numpy.</p>
<p>Does the <code>GCC</code> version have some role in this error? How can I downgrade <code>GCC</code> version in raspberry pi or upgrade its version in colab?</p>
<p>Note that the version of tensorflow is fixed owing to my hardware (<code>armv7l</code>), which mandates using <code>numpy 1.19.2</code>.</p>
<p>Thank you.</p>
|
<p>I solved it by converting my classification model to TensorFlow Lite as <a href="https://www.tensorflow.org/lite/models/convert/convert_models" rel="nofollow noreferrer">follows</a>:</p>
<pre><code># Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
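<p>For completeness, a rough sketch of running the converted file on the Pi with the lightweight TFLite runtime (this assumes <code>tflite_runtime</code> is installed; the input shape and dtype depend on your model):</p>
<pre><code>import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder input; replace with real data
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])     # class scores from the classifier
</code></pre>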
|
python|numpy|google-colaboratory|tensorflow2.0|raspberry-pi4
| 0
|
375,898
| 73,529,481
|
numpy-based spatial reduction
|
<p>I'm looking for a flexible fast method for computing a custom reduction on an np.array using a square non-overlapping window. e.g.,</p>
<pre><code>array([[4, 7, 2, 0],
[4, 9, 4, 2],
[2, 8, 8, 8],
[6, 3, 5, 8]])
</code></pre>
<p>let's say I want the <code>np.max</code>, (on a 2x2 window in this case) I'd like to get:</p>
<pre><code>array([[9, 4],
[8, 8]])
</code></pre>
<p>I've built a slow function using for loops, but ultimately I need to apply this to large raster arrays.</p>
<p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.generic_filter.html#scipy.ndimage.generic_filter" rel="nofollow noreferrer">scipy.ndimage.generic_filter</a> is close, but this uses sliding windows (with overlap), giving a result with the same dimensions (no reduction).</p>
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.lib.stride_tricks.as_strided.html" rel="nofollow noreferrer">numpy.lib.stride_tricks.as_strided</a> combined with a reducing function doesn't seem to handle relationships between rows (i.e., 2D spatial awareness).</p>
<p><a href="https://rasterio.readthedocs.io/en/latest/topics/resampling.html" rel="nofollow noreferrer">rasterio</a> has some nice resampling methods built on GDAL, but these don't allow for custom reductions.</p>
<p><a href="https://scikit-image.org/docs/dev/api/skimage.transform.html?highlight=transform#skimage.transform.downscale_local_mean" rel="nofollow noreferrer">skimage.transform.downscale_local_mean</a> does not support custom functions on the blocks.</p>
<p>I'm sure there's something out there for custom <a href="https://en.wikipedia.org/wiki/Spatial_anti-aliasing" rel="nofollow noreferrer">spatial anti-aliasing</a>, but I can't seem to find a solution and am feeling dumb.</p>
<p>Any help is greatly appreciated,</p>
|
<p>With the max (or other function supporting axis), you can just reshape the array:</p>
<pre><code>a.reshape(a.shape[0]//2, 2, a.shape[1]//2, 2).max(axis=(1,3))
</code></pre>
<p>In general, you can reshape, swap the axis, flatten the 2x2 into a new axis <code>4</code>, then work on that axis.</p>
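<p>A small sketch of that general pattern wrapped in a helper (the function name and the <code>block</code>/<code>func</code> parameters are my own); it accepts any reducing function that takes an <code>axis</code> argument:</p>
<pre><code>import numpy as np

def block_reduce(a, block=2, func=np.max):
    # reshape so each non-overlapping block x block window gets its own axes,
    # move the two window axes together, flatten them, then reduce along that axis
    h, w = a.shape
    windows = a.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return func(windows.reshape(h // block, w // block, block * block), axis=-1)

a = np.array([[4, 7, 2, 0],
              [4, 9, 4, 2],
              [2, 8, 8, 8],
              [6, 3, 5, 8]])
print(block_reduce(a, 2, np.max))     # [[9 4] [8 8]]
print(block_reduce(a, 2, np.median))  # any reducer with an axis argument works
</code></pre>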
|
python|numpy|scipy|rasterio
| 3
|
375,899
| 73,547,066
|
Python Pandas: Going through a list of cycles and making point of interest
|
<p>To explain my problem easier I have created a dataset:</p>
<pre><code>data = {'Cycle': ['Set1', 'Set1', 'Set1', 'Set2', 'Set2', 'Set2', 'Set2'],
'Value': [1, 2.2, .5, .2,1,2.5,1]}
</code></pre>
<p>I want to create a loop that goes through the "Cycle" column and marks the max of each cycle with the letter A and the min with letter B, to output something like this:</p>
<pre><code>POI = {'Cycle': ['Set1', 'Set1', 'Set1', 'Set2', 'Set2', 'Set2', 'Set2'],
'Value': [1, 2.2, .5, .2,1,2.5,1],
'POI': [0, 'A','B','B',0,'A',0]}
df2 = pd.DataFrame(POI)
</code></pre>
<p>I am new to Python, so as much detail as possible would be very helpful; I am not exactly sure how to go through each cycle on its own to get these values, so explaining that would be great.</p>
<p>Thanks</p>
|
<p>Using <code>numpy.select</code> and <code>groupby.transform</code>:</p>
<pre><code>g = df.groupby('Cycle')['Value']
df['POI'] = np.select([df['Value'].eq(g.transform('max')),
df['Value'].eq(g.transform('min'))],
['A', 'B'])
# if you want 0 as default value (not '0')
df['POI'] = df['POI'].replace('0', 0)
</code></pre>
<p>output:</p>
<pre><code> Cycle Value POI
0 Set1 1.0 0
1 Set1 2.2 A
2 Set1 0.5 B
3 Set2 0.2 B
4 Set2 1.0 0
5 Set2 2.5 A
6 Set2 1.0 0
</code></pre>
|
python|pandas|max
| 4
|