| Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
10,000
| 66,691,392
|
groupby agg with first non-null unique value
|
<p>The following code gives an error:</p>
<pre><code>import pandas as pd
import numpy as np
df=pd.DataFrame({"item":['a','a','b'],"item1":['b','d','c']})
df.groupby("item").agg(model_list=("item1", np.unique))
</code></pre>
<p>Since there are two unique values for item <code>a</code> (i.e., <code>b</code> and <code>d</code>), how to modify it to return the first non-null unique value?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a>, which by default removes missing values, so it returns the first non-missing value:</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({"item":['a','a','b','b','b'],"item1":['b','d',np.nan, np.nan, 'c']})
df = df.groupby("item").agg(model_list=("item1", 'first'))
print (df)
model_list
item
a b
b c
</code></pre>
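As an aside, a sketch of my own (not from the answer): the same result spelled out as a custom aggregator, so the "first non-null unique value" logic is explicit. Note that `'first'` is both faster and safer, since this lambda would raise an `IndexError` for an all-NaN group while `'first'` returns NaN.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"item": ['a', 'a', 'b', 'b', 'b'],
                   "item1": ['b', 'd', np.nan, np.nan, 'c']})

# drop NaNs, then take the first of the remaining unique values
out = df.groupby("item").agg(
    model_list=("item1", lambda s: s.dropna().unique()[0]))
```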
|
pandas
| 1
|
10,001
| 57,408,216
|
Python List - set every n-th value None
|
<p>As the title says, I want to know how to set every n-th value in a Python list to None. I looked for a solution in a lot of forums but didn't find much.
I also don't want to overwrite existing values with None; instead I want to insert new entries with the value None.</p>
<p>The list contains dates (12 dates = 1 year) and every 13th value should be empty, because that row will hold the average, so I don't need a date there.</p>
<p>Here is the code I used to generate the dates with pandas:</p>
<pre><code>import pandas as pd
numdays = 370 #i have 370 values, every day = 1 month. Starting from 1990 till June 2019
date1 = '1990-01-01'
date2 = '2019-06-01'
mydates = pd.date_range(date1, date2,).tolist()
date_all = pd.date_range(start=date1, end=date2, freq='1BMS')
date_lst = [date_all]
</code></pre>
<p>The expected Output: </p>
<pre><code>01.01.1990
01.02.1990
01.03.1990
01.04.1990
01.05.1990
01.06.1990
01.07.1990
01.08.1990
01.09.1990
01.10.1990
01.11.1990
01.12.1990
None
01.01.1991
.
.
.
</code></pre>
|
<p>If I understood correctly:</p>
<pre><code>import pandas as pd
numdays = 370
date1 = '1990-01-01'
date2 = '2019-06-01'
mydates = pd.date_range(date1, date2,).tolist()
date_all = pd.date_range(start=date1, end=date2, freq='1BMS')
date_lst = [date_all]
for i in range(12,len(mydates),13): # insert None at every 13th position
mydates.insert(i, None)
</code></pre>
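An alternative sketch (my own, not part of the answer) that builds a new list instead of mutating in place, appending a None slot after every 12th date:

```python
# toy data: 12 monthly dates for 1990 plus the first date of 1991
dates = ['1990-%02d-01' % m for m in range(1, 13)] + ['1991-01-01']

result = []
for i, d in enumerate(dates, start=1):
    result.append(d)
    if i % 12 == 0:          # after every 12th date...
        result.append(None)  # ...insert the empty slot for the average row
```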
|
python|pandas|list
| 1
|
10,002
| 57,705,976
|
How to pad an array with rows
|
<p>I have a set of numpy arrays with different number of rows and I would like to pad them to a fixed number of rows, e.g.</p>
<p>An array "a" with 3 rows: </p>
<pre><code>a = [
[1.1, 2.1, 3.1]
[1.2, 2.2, 3.2]
[1.3, 2.3, 3.3]
]
</code></pre>
<p>I would like to convert "a" to an array with 5 rows:</p>
<pre><code>[
[1.1, 2.1, 3.1]
[1.2, 2.2, 3.2]
[1.3, 2.3, 3.3]
[0, 0, 0]
[0, 0, 0]
]
</code></pre>
<p>I have tried <code>np.concatenate((a, np.zeros(3)*(5-len(a))), axis=0)</code>, but it does not work.</p>
<p>Any help would be appreciated.</p>
|
<p>You're looking for <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html" rel="nofollow noreferrer"><code>np.pad</code></a>. To zero-pad, set <code>mode='constant'</code> and give the <code>pad_width</code> you want on the edges of each axis:</p>
<pre><code>import numpy as np

a = np.array([[1.1, 2.1, 3.1],
              [1.2, 2.2, 3.2],
              [1.3, 2.3, 3.3]])

np.pad(a, pad_width=((0, 2), (0, 0)), mode='constant')
array([[1.1, 2.1, 3.1],
[1.2, 2.2, 3.2],
[1.3, 2.3, 3.3],
[0. , 0. , 0. ],
[0. , 0. , 0. ]])
</code></pre>
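A small generalization (my sketch; `pad_rows` is a made-up name, not from the answer) that pads any 2-D array to a fixed row count:

```python
import numpy as np

def pad_rows(a, n_rows):
    """Pad a 2-D array with zero rows at the bottom until it has n_rows rows."""
    extra = n_rows - a.shape[0]
    if extra < 0:
        raise ValueError("array already has more than n_rows rows")
    return np.pad(a, pad_width=((0, extra), (0, 0)), mode='constant')

a = np.array([[1.1, 2.1, 3.1],
              [1.2, 2.2, 3.2],
              [1.3, 2.3, 3.3]])
padded = pad_rows(a, 5)
```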
|
python|numpy
| 2
|
10,003
| 70,643,487
|
create a column that is the sum of previous X rows where x is a parm given by a different column row
|
<p>I'm trying to create a column where I sum the previous x rows of another column, where x is given by a value in a different column.</p>
<p>I have a solution but it's really slow, so I was wondering if anyone could help make this a lot faster.</p>
<pre><code>| time | price |parm |
|--------------------------|------------|-----|
|2020-11-04 00:00:00+00:00 | 1.17600 | 1 |
|2020-11-04 00:01:00+00:00 | 1.17503 | 2 |
|2020-11-04 00:02:00+00:00 | 1.17341 | 3 |
|2020-11-04 00:03:00+00:00 | 1.17352 | 2 |
|2020-11-04 00:04:00+00:00 | 1.17422 | 3 |
</code></pre>
<p>and the very slow code:</p>
<pre><code>
@jit
def rolling_sum(x,w):
return np.convolve(x,np.ones(w,dtype=int),'valid')
@jit
def rol(x,y):
for i in range(len(x)):
res[i] = rolling_sum(x, y[i])[0]
return res
dfa = df[:500000]
res = np.empty(len(dfa))
r = rol(dfa.l_x.values, abs(dfa.mb).values+1)
r
</code></pre>
|
<p>Maybe something like this could work. I have made up an example with <code>to_be_summed</code> being the column of values to be summed up and <code>lookback</code> holding the number of rows to look back:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"to_be_summed": range(10), "lookback": [0, 1, 2, 3, 2, 1, 4, 2, 1, 2]})
summed = df.to_be_summed.cumsum()
result = [summed[i] - (summed[i - lookback - 1] if i - lookback - 1 >= 0 else 0)
          for i, lookback in enumerate(df.lookback)]
</code></pre>
<p>What I did here is first take a cumsum over the column that should be summed up. Now, for the i-th entry I can take that entry of the cumsum and subtract the one i + 1 steps back. Note that this includes the i-th value in the sum; if you don't want to include it, just change <code>summed[i]</code> to <code>summed[i - 1]</code>. Also note that the <code>i - lookback - 1 >= 0</code> check prevents you from accidentally looking back past the start of the column: when the window reaches row 0, nothing is subtracted (subtracting <code>summed[0]</code> instead would wrongly drop the first element from the window).</p>
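A quick self-check of the cumsum idea (my own sketch) against a naive per-row loop; near row 0 it subtracts nothing rather than `summed[0]`, so the first element stays in the window:

```python
import pandas as pd

df = pd.DataFrame({"to_be_summed": range(1, 11),
                   "lookback": [0, 1, 2, 3, 2, 1, 4, 2, 1, 2]})

summed = df.to_be_summed.cumsum()
fast = [summed[i] - (summed[i - lb - 1] if i - lb - 1 >= 0 else 0)
        for i, lb in enumerate(df.lookback)]

# naive reference: the current row plus the previous `lookback` rows
naive = [df.to_be_summed.iloc[max(0, i - lb):i + 1].sum()
         for i, lb in enumerate(df.lookback)]
```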
|
python|pandas|numpy|optimization|sum
| 0
|
10,004
| 51,549,631
|
Merge specific rows pandas df
|
<p>I'm currently merging all values in a pandas df row before any 4-letter <code>string</code>. But I'm hoping to apply this to specific rows instead of all rows. Specifically, I only want to apply it to rows directly underneath <code>X</code> in <code>Col A</code>. So if a row is <code>X</code>, apply the function to the row underneath it.</p>
<pre><code>d = ({
'A' : ['X','Foo','No','X','Foo','X','F'],
'B' : ['','Bar','Merge','','Barr','','oo'],
'C' : ['','XXXX','XXXX','','','','B'],
'D' : ['','','','','','','ar'],
'E' : ['','','','','','','XXXX'],
})
df = pd.DataFrame(data=d)
</code></pre>
<p>This code merges all values before any 4 letter string:</p>
<pre><code>mask = (df.iloc[:, 1:].applymap(len) == 4).cumsum(1) == 0
df.A = df.A + df.iloc[:, 1:][mask].fillna('').apply(lambda x: x.sum(), 1)
df.iloc[:, 1:] = df.iloc[:, 1:][~mask].fillna('')
</code></pre>
<p>Output:</p>
<pre><code> A B C D E
0 X
1 FooBar XXXX
2 NoMerge XXXX
3 X
4 Foo Barr
5 X
6 FooBar XXXX
</code></pre>
<p>As you can see this merges the entire <code>Column</code>. I'm trying to apply it to the rows beneath value <code>X</code> in <code>Col A</code> only. I think I need something like </p>
<pre><code>if val in Col.A == 'X':
##Do this to the row directly beneath
mask = (df.iloc[:, 1:].applymap(len) == 4).cumsum(1) == 0
df.A = df.A + df.iloc[:, 1:][mask].fillna('').apply(lambda x: x.sum(), 1)
df.iloc[:, 1:] = df.iloc[:, 1:][~mask].fillna('')
</code></pre>
<p>Intended Output:</p>
<pre><code> A B C D E
0 X
1 FooBar XXXX
2 No Merge XXXX
3 X
4 Foo Barr
5 X
6 FooBar XXXX
</code></pre>
|
<p>We need to create a mask for the row-under-X condition as well. I prepared a series <code>maskX</code> for that and then used it to update the <code>mask</code> you prepared. The net result is the desired output.</p>
<pre><code>import pandas as pd

d = ({
'A' : ['X','Foo','No','X','Foo','X','F'],
'B' : ['','Bar','Merge','','Barr','','oo'],
'C' : ['','XXXX','XXXX','','','','B'],
'D' : ['','','','','','','ar'],
'E' : ['','','','','','','XXXX'],
})
df = pd.DataFrame(data=d)
print(df)
#Create the mask (as series) to handle the row-under-X condition
maskX = df.iloc[:,0].apply(lambda x: x=='X')
#In the lines below, shift the mask down one row so the row next to each X is marked True
maskX.index += 1
maskX = pd.concat([pd.Series([False]), maskX])
maskX = maskX.drop(len(maskX)-1)
mask = (df.iloc[:, 1:].applymap(len) == 4).cumsum(1) == 0
#combine the effect of two masks
for i,v in maskX.items():
mask.iloc[i,:] = mask.iloc[i,:].apply(lambda x: x and v)
df.A[maskX] = df.A + df.iloc[:, 1:][mask].fillna('').apply(lambda x: x.sum(), 1)
df.iloc[:, 1:] = df.iloc[:, 1:][~mask].fillna('')
print(df)
</code></pre>
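As an aside (my sketch, not from the answer), the index juggling used to build `maskX` can be replaced by a single `shift`:

```python
import pandas as pd

df = pd.DataFrame({'A': ['X', 'Foo', 'No', 'X', 'Foo', 'X', 'F']})

# True exactly on the rows directly underneath an 'X'
maskX = df['A'].eq('X').shift(fill_value=False)
```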
|
python|pandas|sorting|dataframe|merge
| 0
|
10,005
| 51,189,988
|
Impossible to use keras in R
|
<p>I have been trying to install Keras in R. I previously did this on another machine, where it worked well, but now I am facing problems.</p>
<p>Code:</p>
<pre><code>library(devtools)
devtools::install_github("rstudio/reticulate")
devtools::install_github("rstudio/keras")
devtools::install_github("rstudio/tensorflow")
install_keras()
</code></pre>
<p>That all worked well, but when I try to load any built-in dataset or run keras functions it gives me errors like the following (along with a dialog box: rsession.exe entry point not found):</p>
<pre><code>image=image_load(" D:/CT images/image1.png")
Error in image_load(" D:/CT images/image1.png") :
The Pillow Python package is required to load images
> mnist <- dataset_mnist()
Error: C:/Users/user/ANACON~1/envs/R-TENS~1/python36.dll - The specified
procedure could not be found.
</code></pre>
<p>I have checked the output of <code>reticulate::py_discover_config("keras")</code> and <code>reticulate::py_discover_config("tensorflow")</code>:</p>
<pre><code>> reticulate::py_discover_config("keras")
python: C:\Users\user\ANACON~1\envs\R-TENS~1\python.exe
libpython: C:/Users/user/ANACON~1/envs/R-TENS~1/python36.dll
pythonhome: C:\Users\user\ANACON~1\envs\R-TENS~1
version: 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 11:27:44) [MSC
v.1900 64 bit (AMD64)]
Architecture: 64bit
numpy: C:\Users\user\ANACON~1\envs\R-TENS~1\lib\site-packages\numpy
numpy_version: 1.14.3
keras: [NOT FOUND]
python versions found:
C:\Users\user\ANACON~1\envs\R-TENS~1\python.exe
C:\Users\user\ANACON~1\python.exe
C:\Users\user\Anaconda3\python.exe
C:\Users\user\Anaconda3\envs\r-tensorflow\python.exe
> reticulate::py_discover_config("tensorflow")
python: C:\Users\user\Anaconda3\envs\r-tensorflow\python.exe
libpython: C:/Users/user/Anaconda3/envs/r-tensorflow/python36.dll
pythonhome: C:\Users\user\ANACON~1\envs\R-TENS~1
version: 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 11:27:44) [MSC
v.1900 64 bit (AMD64)]
Architecture: 64bit
numpy: C:\Users\user\ANACON~1\envs\R-TENS~1\lib\site-packages\numpy
numpy_version: 1.14.3
tensorflow: C:\Users\user\ANACON~1\envs\R-TENS~1\lib\site-
packages\tensorflow\__init__.p
python versions found:
C:\Users\user\Anaconda3\envs\r-tensorflow\python.exe
C:\Users\user\ANACON~1\envs\R-TENS~1\python.exe
C:\Users\user\ANACON~1\python.exe
C:\Users\user\Anaconda3\python.exe
</code></pre>
<p>I have installed the latest version of Anaconda and also the latest versions of R and RStudio.
I can't understand the problem since I am a beginner. Please help.</p>
|
<p>I was able to install and use Keras in R using the following commands. I haven't faced any issues.</p>
<pre><code>devtools::install_github("rstudio/keras")
library(keras)
install_keras()
</code></pre>
|
tensorflow|keras
| 0
|
10,006
| 51,972,807
|
Align stacked bar charts using pandas
|
<p>I'm trying to align all of the stacked bar charts having the same index.
What's the best way of doing this?</p>
<p><a href="https://i.stack.imgur.com/24FEA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/24FEA.png" alt="stacked_bar_plots"></a></p>
<p>This is my code so far:</p>
<pre><code>xantho99 = [545/60, 6/60, 1688/60, 44/60]
buch99 = [51/60, 2/60, 576/60, 7/60]
myco99 = [519/60, 9/60, 889/60, 28/60]
cory99 = [247/60, 5/60, 1160/60, 28/60]
xantho90 = [545/60, 8/60, 989/60, 27/60]
buch90 = [51/60, 3/60, 523/60, 5/60]
myco90 = [519/60, 11/60, 802/60, 32/60]
cory90 = [247/60, 7/60, 899/60, 27/60]
xanthouc = [545/60, 0/60, 5407/60, 193/60]
buchuc = [51/60, 0/60, 1014/60, 20/60]
mycouc = [519/60, 0/60, 4644/60, 101/60]
coryuc = [247/60, 0/60, 2384/60, 77/60]
df = pd.DataFrame([xantho, xantho99, xantho90, buch, buch99, buch90, myco, myco99, myco90, cory, cory99, cory90], columns=['Prodigal', 'Cd-hit', 'PSOT', 'Zusammenführen'], index=["X", "X","X", "B", "B","B", "M", "M","M", "C", "C","C"])
df.columns.name = "Abschnitt"
current_palette = "blue", "green", "red", "yellow"
ax = df.plot.bar(stacked=True, title="Zeitbedarf der einzelnen Abschnitte (Xanthomonas)", xlim=(0, sum(xantho)*1.1), color=current_palette, rot=0)
ax.set_xlabel("Zeit in Stunden")
</code></pre>
<p>Thank You!</p>
|
<p>Here's the relevant code (should work if you put it at the bottom of the script in the question):</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
...
...
...
# Comment out old plot
# ax = df.plot.bar(stacked=True, title="Zeitbedarf der einzelnen Abschnitte (Xanthomonas)", xlim=(0, sum(xantho)*1.1), color=current_palette, rot=0)
# ax.set_xlabel("Zeit in Stunden")
spacing = [0, 0, 0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0]
p1 = plt.bar(np.arange(12) + spacing, df['Prodigal'], color="blue", width=1.0, edgecolor="black")
p2 = plt.bar(np.arange(12) + spacing, df['PSOT'], bottom=df['Prodigal'], color="red", width=1.0, edgecolor="black")
p3 = plt.bar(np.arange(12) + spacing, df['Zusammenführen'], bottom=df['Prodigal'] + df['PSOT'], color="yellow", width=1.0, edgecolor="black")
p4 = plt.bar(np.arange(12) + spacing, df['Cd-hit'], bottom=df['Prodigal'] + df['PSOT'] + df['Zusammenführen'], color="green", width=1.0, edgecolor="black")
plt.legend((p1[0], p2[0], p3[0], p4[0]), ('Prodigal', 'PSOT', 'Zusammenführen', 'Cd-hit'))
plt.show()
</code></pre>
<p>Here's what pops up after you run this snippet:</p>
<p><a href="https://i.stack.imgur.com/7Eh6D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Eh6D.png" alt="Plot output"></a></p>
<p>I think this is what's desired, based on our discussion in comments. HTH, let me know if this is what you were looking for!</p>
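A side note (my own sketch): the hard-coded `spacing` list above, three bars per group with groups one unit apart, can be generated instead of typed out:

```python
import numpy as np

n_groups, bars_per_group = 4, 3
# one extra unit of offset per group: [0,0,0, 1,1,1, 2,2,2, 3,3,3]
spacing = np.repeat(np.arange(n_groups, dtype=float), bars_per_group)
positions = np.arange(n_groups * bars_per_group) + spacing
```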
|
python|pandas|stacked
| 0
|
10,007
| 51,956,198
|
Ravel() on only two dimensions of a 3D numpy array
|
<p>I have a numpy array of shape <code>(182, 218, 182)</code>. </p>
<p>I'm trying to reorganize it such that it is size <code>(182, 39676)</code> - e.g., take each of the 182 slices of it and ravel() out each of those slices into one dimension, but still keep the slices separate.</p>
<p>I can think of a few ways of doing this with a loop, but it seems un-pythonic to make a loop in numpy. Anyone know if there's a method or parameter somewhere that'll do the trick?</p>
<p>Thanks!</p>
|
<pre><code>import numpy as np

# a is the array of shape (182, 218, 182)
a = np.reshape(a, (182, 218 * 182))  # 218 * 182 = 39676
</code></pre>
<p>This reshape should do the trick.</p>
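A self-contained sketch of the same operation, using `-1` so NumPy infers the flattened size of the last two axes:

```python
import numpy as np

a = np.arange(182 * 218 * 182, dtype=np.int32).reshape(182, 218, 182)
flat = a.reshape(a.shape[0], -1)   # -1 infers 218 * 182 = 39676
```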
|
python|numpy
| 0
|
10,008
| 51,768,418
|
Python code to process CSV file
|
<p>I am getting a CSV file that is updated on a daily basis and need to process it into a new file based on this criterion: if a row is new data, it should be tagged as new; if it is an update to existing data, it should be tagged as update. How can I write Python code that processes the input and produces CSV output as follows, based on the date?</p>
<h2>Day1 input data</h2>
<pre><code>empid,enmname,sal,datekey
1,cholan,100,8/14/2018
2,ram,200,8/14/2018
</code></pre>
<h2>Day2 input Data</h2>
<pre><code>empid,enmname,sal,datekey
1,cholan,100,8/14/2018
2,ram,200,8/14/2018
3,sundar,300,8/15/2018
2,raman,200,8/15/2018
</code></pre>
<h2>Output Data</h2>
<pre><code>status,empid,enmname,sal,datekey
new,3,sundar,300,8/15/2018
update,2,raman,200,8/15/2018
</code></pre>
|
<p>I'm feeling nice, so I'll give you some code. Try to learn from it.</p>
<hr>
<p>To work with CSV files, we'll need the <code>csv</code> module:</p>
<pre><code>import csv
</code></pre>
<p>First off, let's teach the computer how to open and parse a CSV file:</p>
<pre><code>def parse(path):
with open(path) as f:
return list(csv.DictReader(f))
</code></pre>
<p><code>csv.DictReader</code> reads the first line of the <code>csv</code> file and uses it as the "names" of the columns. It then creates a dictionary for each subsequent row, where the keys are the column names.</p>
<p>That's all well and good, but we just want the last version with each key:</p>
<pre><code>def parse(path):
data = {}
with open(path) as f:
for row in csv.DictReader(f):
data[row["empid"]] = row
return data
</code></pre>
<p>Instead of just creating a list containing everything, this creates a dictionary where the keys are the row's id. This way, rows found later in the file will overwrite rows found earlier in the file.</p>
<p>Now that we've taught the computer how to extract the data from the files, let's get it:</p>
<pre><code>old_data = parse("file1.csv")
new_data = parse("file2.csv")
</code></pre>
<p>Iterating through a dictionary gives you its keys, which are the ids defined in the data set. Likewise, <code>key in dictionary</code> says whether <code>key</code> is one of the keys in the dictionary. So we can do this:</p>
<pre><code>new = {
id_: row
for id_, row in new_data.items()
if id_ not in old_data
}
updated = {
id_: row
for id_, row in new_data.items()
if id_ in old_data and old_data[id_] != row
}
</code></pre>
<p>I'll put <a href="https://docs.python.org/3/library/csv.html#csv.DictWriter" rel="nofollow noreferrer"><code>csv.DictWriter</code></a> here and let you sort out the rest on your own.</p>
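The answer leaves the last step as an exercise; one possible completion (my sketch, not the answer's) writes the tagged rows out with `csv.DictWriter`:

```python
import csv

def write_output(path, new, updated, fieldnames):
    """Write new/updated rows to a CSV, prepending a 'status' column."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["status"] + fieldnames)
        writer.writeheader()
        for row in new.values():
            writer.writerow(dict(row, status="new"))
        for row in updated.values():
            writer.writerow(dict(row, status="update"))

# toy data shaped like the question's Day 2 diff
new = {"3": {"empid": "3", "enmname": "sundar", "sal": "300", "datekey": "8/15/2018"}}
updated = {"2": {"empid": "2", "enmname": "raman", "sal": "200", "datekey": "8/15/2018"}}
write_output("day2_diff.csv", new, updated, ["empid", "enmname", "sal", "datekey"])
```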
|
python|pandas|csv
| 1
|
10,009
| 36,224,581
|
pandas crashes on series with multiple data types
|
<p>I have a simple Excel file with two columns - one categorical column and another numerical column - that I read into pandas with the read_excel function as below:</p>
<pre><code>df= pd.read_excel('pandas_crasher.xlsx')
</code></pre>
<p>The first column is of type Object with multiple types. Since the Excel file was badly formatted, the column contains a combination of timestamps, floats and text, but it's normally supposed to be just a simple textual column.</p>
<pre><code>from datetime import datetime
from collections import Counter
df['random_names'].dtype
</code></pre>
<blockquote>
<p>dtype('O')</p>
</blockquote>
<p><strong><code>print Counter([type(i) for i in df['random_names']])</code></strong></p>
<blockquote>
<p>Counter({&lt;type 'unicode'&gt;: 15427, &lt;type 'datetime.datetime'&gt;: 18,
&lt;type 'float'&gt;: 2})</p>
</blockquote>
<p>When I do a simple groupby on it, it <strong><code>crashes the python kernel</code></strong> without any error messages or notifications - I tried it from both jupyter and a small custom flask app without any luck.</p>
<p><strong><code>df.groupby('random_names')['random_values'].sum()</code></strong> << crashes</p>
<p>It's a relatively small file of 700kb (15k rows and 2 cols) - so it's definitely not a memory issue.</p>
<p>I tried debugging with pdb to trace the point at which it crashes, but couldn't get past the cython function in the pandas/core/<a href="https://github.com/pydata/pandas/blob/v0.17.1/pandas/core/groupby.py" rel="nofollow">groupby.py module</a>:</p>
<blockquote>
<p>def _cython_operation(self, kind, values, how, axis)</p>
</blockquote>
<p>This looks like a possible bug in pandas - instead of crashing directly, shouldn't it throw an exception and quit gracefully?</p>
<p>I then convert the various datatypes into text with the following function:</p>
<pre><code>def custom_converter(x):
if isinstance(x,datetime) or isinstance( x, ( int, long, float ) ):
return str(x)
else:
return x.encode('utf-8')
df['new_random_names'] = df['random_names'].apply(custom_converter)
df.groupby('new_random_names')['random_values'].sum() << does not crash
</code></pre>
<p>Applying a custom function is probably the slowest way of doing this. Is there any better/faster way?</p>
<p>Excel file here: <a href="https://drive.google.com/file/d/0B1ZLijGO6gbLelBXMjJWRFV3a2c/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0B1ZLijGO6gbLelBXMjJWRFV3a2c/view?usp=sharing</a></p>
|
<p>For me, the crash seems to happen when pandas tries to sort the group keys. If I pass the <code>sort=False</code> argument to <code>.groupby()</code> then the operation succeeds. This may work for you as well. The sort appears to be a numpy operation that doesn't actually involve pandas objects, so it may ultimately be a numpy issue. (For instance, <code>df.random_names.values.argsort()</code> also crashes for me.)</p>
<p>After some more playing around, I'm guessing the problem has to do with some sort of obscure condition that arises due to the particular comparisons that are made during numpy's sort operation. For me, this crashes:</p>
<pre><code>df.random_names.values[14005:15447]
</code></pre>
<p>but leaving one item off either end of the slice doesn't crash anymore. Making a copy of this data and then tweaking it by taking out individual elements, the crash will occur or not depending on whether certain seemingly random elements are removed from the data. Also, under certain circumstances it will fail with an exception of "TypeError: can't compare datetime.datetime to unicode" (or "datetime to float").</p>
<p>This section of the data contains one datetime and one float value, which happens to be a <code>nan</code>. It looks like there is some weird edge case in the numpy code that causes failed comparisons to crash under certain circumstances rather than raise the right exception.</p>
<p>To answer the question at the end of your post, you may have an easier time using the various arguments to <code>read_excel</code> (such as the <code>converters</code> argument) to read all the data in as textual values right from the start.</p>
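On the speed question at the end of the post, a sketch of my own: the vectorized `astype(str)` stringifies every element in one pass and is typically much faster than a Python-level `apply`. Note its default formatting of datetimes and floats may differ slightly from the custom converter.

```python
import pandas as pd
from datetime import datetime

# a mixed-type object column like the question's 'random_names'
s = pd.Series(['abc', datetime(2016, 3, 25), 1.5, 'xyz'])
as_text = s.astype(str)   # every element becomes a string in one pass
```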
|
python|excel|pandas|group-by|crash
| 1
|
10,010
| 36,225,177
|
Pandas read_sql() of a view keeps double quotes in columns with spaces
|
<p>I have an sqlite database with a view of several tables with a lot of columns with spaces in their names (I know, I know, not good practice, but it's out of my control). </p>
<p>Anyways, the problem I'm having is related to the spaces in the column names when using <code>pd.read_sql('SELECT "stupid column with space" from StupidView',con=db)</code>. <strong>It keeps the quotes in the column names when querying the view, but not when querying the table itself!</strong> The same SQL on the table returns columns without being wrapped in quotes. Am I missing something here? Any ideas on why this is happening?</p>
<p><strong>Working standalone example:</strong></p>
<pre><code>import pandas as pd
import sqlite3
import numpy as np
pd.set_option('display.width', 1000)
# Create the database
db = sqlite3.connect("Sample Database.sqlite")
cursor = db.cursor()
# Create a table
df = pd.DataFrame({"Stupid column with space":[1,2,3,4],
"MrNoSpace":[1,2,3,4]})
# Push the tables to the database
df.to_sql(name="StupidTable", con=db, flavor='sqlite', if_exists='replace', index=False)
# Just in case you're running this more than once, drop the view if it exists
try: cursor.execute("DROP VIEW StupidView;")
except: pass
# Execute the sql that creates the view
cursor.execute("""
CREATE VIEW StupidView AS
SELECT StupidTable.*
FROM StupidTable""")
db.commit()
# Execute the SQL and print the results
Test1_df = pd.read_sql('SELECT "Stupid column with space", "MrNoSpace" FROM StupidView',con=db) # read data from the view
Test2_df = pd.read_sql('SELECT "Stupid column with space", "MrNoSpace" FROM StupidTable',con=db) # same sql but on the table
Test3_df = pd.read_sql('SELECT `Stupid column with space`, `MrNoSpace` FROM StupidView',con=db) # using ` and not "
Test4_df = pd.read_sql('SELECT [Stupid column with space], [MrNoSpace] FROM StupidView',con=db) # using []
print Test1_df
print Test2_df
print Test3_df
print Test4_df
</code></pre>
<p><strong>Output:</strong></p>
<p><em>Test1_df - Query on the view: columns are wrapped in double quotes</em></p>
<pre><code> "Stupid column with space" "MrNoSpace"
0 1 1
1 2 2
2 3 3
3 4 4
</code></pre>
<p><em>Test2_df - Same query but on the table: now with no quotes in column names</em></p>
<pre><code> Stupid column with space MrNoSpace
0 1 1
1 2 2
2 3 3
3 4 4
</code></pre>
<p><em>Test3_df - Keeps the column names wrapped in ` (works fine if the query on a table but not a view)</em></p>
<pre><code> `Stupid column with space` `MrNoSpace`
0 1 1
1 2 2
2 3 3
3 4 4
</code></pre>
<p><em>Test4_df - Drops the column names altogether (works fine if used on table not view)</em></p>
<pre><code>0 1 1
1 2 2
2 3 3
3 4 4
</code></pre>
|
<p>This is apparently an issue with the version of sqlite that python 2.7 uses, and it will not be officially fixed (<a href="http://bugs.python.org/issue19167" rel="nofollow noreferrer">http://bugs.python.org/issue19167</a>). </p>
<p>If you still want to use python 2.7 or below, you can replace the <code>sqlite.dll</code> in <code>C:\Python27\DLLs</code> with a newer version downloaded from here: <a href="https://www.sqlite.org/download.html" rel="nofollow noreferrer">https://www.sqlite.org/download.html</a>. Views now work as expected with python 2.7 as far as I can tell.</p>
<p>With help from the following post:
<a href="https://stackoverflow.com/a/3341117/1754273">https://stackoverflow.com/a/3341117/1754273</a></p>
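Alternatively (my own sketch, not from the answer), if replacing the DLL is not an option, the stray quotes can simply be stripped from the column names after reading:

```python
import pandas as pd

# columns as they come back from the view, wrapped in double quotes
df = pd.DataFrame([[1, 1], [2, 2]],
                  columns=['"Stupid column with space"', '"MrNoSpace"'])
df.columns = df.columns.str.strip('"')  # remove the surrounding quotes
```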
|
python|sqlite|pandas
| 0
|
10,011
| 41,750,186
|
Using k-nearest neighbour without splitting into training and test sets
|
<p>I have the following dataset, with over 20,000 rows:</p>
<p><a href="https://i.stack.imgur.com/RlM5s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RlM5s.png" alt="enter image description here"></a></p>
<p>I want to use columns A through E to predict column X using a k-nearest neighbor algorithm. I have tried to use <code>KNeighborsRegressor</code> from sklearn, as follows:</p>
<pre><code>import pandas as pd
import random
from numpy.random import permutation
import math
from sklearn.neighbors import KNeighborsRegressor
df = pd.read_csv("data.csv")
random_indices = permutation(df.index)
test_cutoff = int(math.floor(len(df)/5))
test = df.loc[random_indices[:test_cutoff]]
train = df.loc[random_indices[test_cutoff:]]
x_columns = ['A', 'B', 'C', 'D', 'E']
y_column = ['X']
knn = KNeighborsRegressor(n_neighbors=100, weights='distance')
knn.fit(train[x_columns], train[y_column])
predictions = knn.predict(test[x_columns])
</code></pre>
<p>This only makes predictions on the test data, which is a fifth of the original dataset. I also want prediction values for the training data. </p>
<p>To do this, I tried to implement my own k-nearest algorithm by calculating the Euclidean distance for each row from every other row, finding the k shortest distances, and averaging the X value from those k rows. This process took over 30 seconds for just one row, and I have over 20,000 rows. Is there a quicker way to do this?</p>
|
<blockquote>
<p>To do this, I tried to implement my own k-nearest algorithm by calculating the Euclidean distance for each row from every other row, finding the k shortest distances, and averaging the X value from those k rows. This process took over 30 seconds for just one row, and I have over 20,000 rows. Is there a quicker way to do this?</p>
</blockquote>
<p>Yes, the problem is that loops in python are extremely slow. What you can do is <strong>vectorize</strong> your computations. So let's say that your data is in a matrix X (n x d); then the matrix of squared distances D_ij = || X_i - X_j ||^2 is</p>
<pre><code>D = X^2 + X'^2 - 2 X X'
</code></pre>
<p>so in Python</p>
<pre><code>D = (X ** 2).sum(1).reshape(-1, 1) + (X ** 2).sum(1).reshape(1, -1) - 2*X.dot(X.T)
</code></pre>
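A quick check of that identity (my sketch) against a direct broadcasted pairwise computation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))

# vectorized: D_ij = ||X_i||^2 + ||X_j||^2 - 2 <X_i, X_j>
D = (X ** 2).sum(1).reshape(-1, 1) + (X ** 2).sum(1).reshape(1, -1) - 2 * X.dot(X.T)

# direct: broadcast the pairwise squared differences
D_direct = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
```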
|
python|numpy|machine-learning|scikit-learn|nearest-neighbor
| 1
|
10,012
| 41,942,960
|
Python randomly drops to 0% CPU usage, causing the code to "hang up", when handling large numpy arrays?
|
<p>I have been running some code, a part of which loads in a large 1D numpy array from a binary file, and then alters the array using the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer">numpy.where()</a> method. </p>
<p>Here is an example of the operations performed in the code:</p>
<pre><code>import numpy as np
num = 2048
threshold = 0.5
with open(file, 'rb') as f:
arr = np.fromfile(f, dtype=np.float32, count=num**3)
arr *= threshold
arr = np.where(arr >= 1.0, 1.0, arr)
vol_avg = np.sum(arr)/(num**3)
# both arr and vol_avg needed later
</code></pre>
<p>I have run this many times (on a free machine, i.e. no other inhibiting CPU or memory usage) with no issue. But recently I have noticed that sometimes the code hangs for an extended period of time, making the runtime an order of magnitude longer. On these occasions I have been monitoring %CPU and memory usage (using gnome system monitor), and found that python's CPU usage drops to 0%. </p>
<p>Using basic prints in between the above operations to debug, it seems to be arbitrary as to which operation causes the pausing (i.e. open(), np.fromfile(), np.where() have each separately caused a hang on a random run). It is as if I am being throttled randomly, because on other runs there are no hangs.</p>
<p>I have considered things like garbage collection or <a href="https://stackoverflow.com/questions/31840563/python-cpu-usage-drops-to-0-resumes-after-keystroke-during-script-execution">this question</a>, but I cannot see any obvious relation to my problem (for example keystrokes have no effect).</p>
<p>Further notes: the binary file is 32GB, the machine (running Linux) has 256GB memory. I am running this code remotely, via an ssh session.</p>
<p>EDIT: This may be incidental, but I have noticed that there are no hang ups if I run the code after the machine has just been rebooted. It seems they begin to happen after a couple of runs, or at least other usage of the system.</p>
|
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> is creating a copy there and assigning it back into <code>arr</code>. So, we could optimize on memory there by avoiding a copying step, like so -</p>
<pre><code>vol_avg = (np.sum(arr) - (arr[arr >= 1.0] - 1.0).sum())/(num**3)
</code></pre>
<p>We are using <code>boolean-indexing</code> to select the elements that are greater than <code>1.0</code>, getting their offsets from <code>1.0</code>, summing those up, and subtracting that from the total sum. Hopefully the number of such exceeding elements is small, so this won't incur any more noticeable memory requirement. I am assuming this hang-up issue with large arrays is a memory-based one.</p>
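Another option (my own sketch, not from the answer): clip in place with the `out=` argument, which caps values at 1.0 without allocating the temporary array that `np.where` creates.

```python
import numpy as np

arr = np.array([0.2, 0.9, 1.5, 3.0], dtype=np.float32)
np.minimum(arr, 1.0, out=arr)   # in-place: no copy is made
vol_avg = arr.sum() / arr.size
```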
|
python|arrays|numpy
| 1
|
10,013
| 37,671,974
|
Tensorflow negative sampling
|
<p>I am trying to follow the udacity tutorial on tensorflow where I came across the following two lines for word embedding models:</p>
<pre><code> # Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases,
embed, train_labels, num_sampled, vocabulary_size))
</code></pre>
<p>Now I understand that the second statement is for sampling negative labels. But the question is how does it know what the negative labels are? All I am providing the second function is the current input and its corresponding labels along with number of labels that I want to (negatively) sample from. Isn't there the risk of sampling from the input set in itself?</p>
<p>This is the full example: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/udacity/5_word2vec.ipynb" rel="noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/udacity/5_word2vec.ipynb</a></p>
|
<p>You can find the documentation for <code>tf.nn.sampled_softmax_loss()</code> <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss" rel="noreferrer">here</a>. There is even a good explanation of <strong>Candidate Sampling</strong> provided by TensorFlow <a href="https://www.tensorflow.org/extras/candidate_sampling.pdf" rel="noreferrer">here (pdf)</a>.</p>
<hr>
<blockquote>
<p>How does it know what the negative labels are?</p>
</blockquote>
<p>TensorFlow will randomly select negative classes among all the possible classes (for you, all the possible words).</p>
<blockquote>
<p>Isn't there the risk of sampling from the input set in itself?</p>
</blockquote>
<p>When you want to compute the softmax probability for your true label, you compute: <code>logits[true_label] / sum(logits[negative_sampled_labels])</code>. As the number of classes is huge (the vocabulary size), there is very little probability of sampling the true_label as a negative label.<br>
Anyway, I think TensorFlow removes this possibility altogether when randomly sampling. (EDIT: @Alex confirms TensorFlow does this by default)</p>
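<p>To get a feel for how unlikely such a collision is, here is a back-of-the-envelope sketch; the vocabulary size and sample count are illustrative, and it assumes uniform sampling for simplicity (TensorFlow's default candidate sampler is actually log-uniform):</p>

```python
# chance that at least one of num_sampled uniform draws over the
# vocabulary hits the true label (numbers here are illustrative)
vocabulary_size = 50000
num_sampled = 64

p_collision = 1 - (1 - 1 / vocabulary_size) ** num_sampled
print(p_collision)  # ~0.00128, i.e. about 0.13%
```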
|
python|tensorflow
| 12
|
10,014
| 37,889,843
|
Find rank and percentage rank in list
|
<p>I have some very large lists that I am working with (>1M rows), and I am trying to find a fast (the fastest?) way of, given a float, ranking that float against the list of floats and finding its percentage rank compared to the range of the list. Here is my attempt, but it's extremely slow:</p>
<pre><code>X =[0.595068426145485,
0.613726840488019,
1.1532608695652,
1.92952380952385,
4.44137931034496,
3.46432160804035,
2.20331487122673,
2.54736842105265,
3.57702702702689,
1.93202764976956,
1.34720184204056,
0.824997304105564,
0.765782842381996,
0.615110856990126,
0.622708022872803,
1.03211045820975,
0.997225012974318,
0.496352327702226,
0.67103858866700,
0.452224068868272,
0.441842124852685,
0.447584524952608,
0.4645525042246]
val = 1.5
arr = np.array(X) #X is actually a pandas column, hence the conversion
arr = np.insert(arr,1,val, axis=None) #insert the val into arr, to then be ranked
st = np.sort(arr)
RANK = float([i for i,k in enumerate(st) if k == val][0])+1 #Find position
PCNT_RANK = (1-(1-round(RANK/len(st),6)))*100 #Find percentage of value compared to range
print RANK, PCNT_RANK
>>> 17.0 70.8333
</code></pre>
<p>For the percentage ranks I could probably build a distribution and sample from it, not quite sure yet, any suggestions welcome...it's going to be used heavily so any speed-up will be advantageous.</p>
<p>Thanks.</p>
|
<p>Sorting the array seems to be rather slow. If you don't need the array to be sorted in the end, then numpy's boolean operations are quicker.</p>
<pre><code>arr = np.array(X)
bool_array = arr < val # Returns boolean array
RANK = float(np.sum(bool_array))
PCT_RANK = RANK/len(X)
</code></pre>
<p>Or, better yet, use a list comprehension and avoid numpy altogether.</p>
<pre><code>RANK = float(sum([x<val for x in X]))
PCT_RANK = RANK/len(X)
</code></pre>
<p>Doing some timing, the numpy solution above gives 6.66 us on my system while the list comprehension method gives 3.74 us.</p>
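<p>A quick sanity check that the two variants agree (toy data, not a benchmark):</p>

```python
import numpy as np

X = [0.2, 1.7, 0.9, 3.1, 1.5, 0.4]
val = 1.5

arr = np.array(X)
rank_np = float(np.sum(arr < val))            # numpy boolean sum
rank_py = float(sum(x < val for x in X))      # plain list comprehension
print(rank_np, rank_py, rank_np / len(X))     # 3.0 3.0 0.5
```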
|
python|performance|numpy|pandas|rank
| 6
|
10,015
| 31,546,867
|
Creating legend in matplotlib after plotting two Pandas Series
|
<p>I plotted two Pandas Series from the same DataFrame with the same x axis and everything worked out fine. However, when I tried to manually create a legend, it appears, but only with the title and not with the actual content. I've tried other solutions without any luck. Here's my code:</p>
<pre><code> fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
width = .3
df.tally.plot(kind='bar', color='red', ax=ax1, width=width, position=1, grid=False)
df.costs.plot(kind='bar', color='blue', ax=ax2, width=width, position=0, grid=True)
ax1.set_ylabel('Tally')
ax2.set_ylabel('Total Cost')
handles1, labels1 = ax1.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
plt.legend([handles1, handles2], [labels1, labels2], loc='upper left', title='Legend')
plt.show()
plt.clf()
</code></pre>
|
<p>Maybe you have a good reason to do it your way, but if not, this is much easier:</p>
<pre><code>In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Optional, just better looking
import seaborn as sns
# Generate random data
df = pd.DataFrame(np.random.randn(10,3), columns=['tally', 'costs', 'other'])
df[['tally', 'costs']].plot(kind='bar', width=.3)
plt.show();
Out[1]:
</code></pre>
<p><img src="https://i.stack.imgur.com/Peqsm.png" alt="Plot" /></p>
<hr />
<h1>Edit</h1>
<p>After learning that this is because you have a much different scale for the other one, here's the pandas approach:</p>
<pre><code># Generate same data as Jianxun Li
np.random.seed(0)
df = pd.DataFrame(np.random.randint(50,100,(20,3)), columns=['tally', 'costs', 'other'])
df.costs = df.costs * 5
width = .3
df.tally.plot(kind='bar', color='#55A868', position=1, width=width, legend=True, figsize=(12,6))
df.costs.plot(kind='bar', color='#4C72B0', position=0, width=width, legend=True, secondary_y=True)
plt.show();
</code></pre>
<p><img src="https://i.stack.imgur.com/opeND.png" alt="enter image description here" /></p>
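<p>If you do need the manual two-axes approach from the question, note that the original bug is that <code>plt.legend</code> was given nested lists of handles and labels; concatenating the flat lists fixes it. A sketch with dummy data (attaching the legend to <code>ax1</code> explicitly, to avoid ambiguity about which axes is current):</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.bar([0, 1, 2], [1, 2, 3], width=.3, color='red', label='tally')
ax2.bar([.3, 1.3, 2.3], [10, 20, 30], width=.3, color='blue', label='costs')

handles1, labels1 = ax1.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
# flat lists, not [handles1, handles2] / [labels1, labels2]
legend = ax1.legend(handles1 + handles2, labels1 + labels2,
                    loc='upper left', title='Legend')
print([t.get_text() for t in legend.get_texts()])
```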
|
python|pandas|matplotlib|plot
| 3
|
10,016
| 47,723,508
|
How do I extract the particular row from .csv file and write out in to another file
|
<p>I have .csv file something like this:</p>
<pre><code>x, y, z
1, 10, 45
2, 0, 34
4, 15, 34
5, 99, 38
6, 13, 23
5, 99, 38
6, 13, 23
. . .
1000, 234, 678
</code></pre>
<p>Now I would like to write out the rows whose column x values are divisible by 5 from this .csv file.</p>
<p>Here is the more detail about the file:</p>
<pre><code>x, y
0.0000123219872323, 1.213
. .
4.991414887967266, 8.123
4.996324047550014, 2.323
5.000581861276573, 6.234
5.006295444400881, 9.234
5.029657671211434, 1.219
5.034806828096650, 1.123
. .
9.997414351064347, 1.345
10.00211537343025, 1.232
10.00675672507283, 2.234
10.01201110041457, 1.003
. .
</code></pre>
<p>Here is the real file that I want to process. I have to extract from this .csv file the rows whose column values are divisible by 5.</p>
<p>My output should look like this:</p>
<pre><code>5.000581861276573, 6.234
10.00211537343025, 1.232
</code></pre>
<p>Not twice the values of 5, 10 and so on. Once one 5 value is extracted,
it should go to next divisible by 5, in this case it should be 10. </p>
|
<p>If you want to write out each 5th row you can simply do</p>
<pre><code>df.iloc[::5, :].to_csv('file_name.csv')
</code></pre>
<p>whereby <code>df</code> is a pandas dataframe created like this:</p>
<pre><code>import pandas as pd
df = pd.read_csv('input.csv')
</code></pre>
<p>Otherwise, you can also do</p>
<pre><code># define all indexes here
out_rows = [1, 5]
df.iloc[out_rows, :].to_csv('file_name.csv')
</code></pre>
<p>You need to specify in more detail what exactly you want to achieve, then we can help better.</p>
<p>EDIT:</p>
<p>As far as I understand now, you want to select based on floats close to certain values. This might be a bit hacky and also assumes that you don't leave out integers (which seems to be a reasonable assumption based on your description of the values):</p>
<p>Let's say your <code>df</code> looks like this (replace by actual data):</p>
<pre><code> x, y
0 0.955425 1
1 0.602229 2
2 1.520194 6
3 1.748095 1
4 2.422760 0
5 2.051359 6
6 3.268572 9
7 3.981412 1
8 4.687532 0
9 4.215138 8
10 5.029877 7
11 5.197888 7
12 6.795040 1
13 6.452637 0
14 7.413032 8
15 7.127841 5
16 8.597014 7
17 8.002060 8
18 9.713273 3
19 9.912318 7
</code></pre>
<p>As written, first sort the values according to <code>x,</code></p>
<pre><code>df = df.sort_values('x,')
x, y
1 0.602229 2
0 0.955425 1
2 1.520194 6
3 1.748095 1
5 2.051359 6
4 2.422760 0
6 3.268572 9
7 3.981412 1
9 4.215138 8
8 4.687532 0
10 5.029877 7
11 5.197888 7
13 6.452637 0
12 6.795040 1
15 7.127841 5
14 7.413032 8
17 8.002060 8
16 8.597014 7
18 9.713273 3
19 9.912318 7
</code></pre>
<p>Then add a helper column where you <code>floor</code> the values in <code>x,</code></p>
<pre><code>df['helper'] = df['x,'].apply(np.floor).astype(int)
x, y helper
1 0.602229 2 0
0 0.955425 1 0
2 1.520194 6 1
3 1.748095 1 1
5 2.051359 6 2
4 2.422760 0 2
6 3.268572 9 3
7 3.981412 1 3
9 4.215138 8 4
8 4.687532 0 4
10 5.029877 7 5
11 5.197888 7 5
13 6.452637 0 6
12 6.795040 1 6
15 7.127841 5 7
14 7.413032 8 7
17 8.002060 8 8
16 8.597014 7 8
18 9.713273 3 9
19 9.912318 7 9
</code></pre>
<p>Now drop the duplicates in <code>helper</code>:</p>
<pre><code>df = df.drop_duplicates('helper')
x, y helper
1 0.602229 2 0
2 1.520194 6 1
5 2.051359 6 2
6 3.268572 9 3
9 4.215138 8 4
10 5.029877 7 5
13 6.452637 0 6
15 7.127841 5 7
17 8.002060 8 8
18 9.713273 3 9
</code></pre>
<p>and export the solution:</p>
<pre><code>df.iloc[::5, :].drop('helper', axis=1)
x, y
1 0.602229 2
10 5.029877 7
</code></pre>
|
python|pandas|csv|numpy|anaconda
| 2
|
10,017
| 48,950,424
|
What is the value of 10j in SciPy?
|
<p>I am learning Python and SciPy. I met below two expressions: </p>
<pre><code>a = np.concatenate(([3], [0]*5, np.arange(-1, 1.002, 2/9.0)))
</code></pre>
<p>and </p>
<pre><code>b = np.r_[3,[0]*5,-1:1:10j]
</code></pre>
<p>The two expressions output the same array. I don't understand 10j in the 2nd expression. What is its value? Thanks a lot for help. </p>
|
<p>It's a shorthand for creating an <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html" rel="nofollow noreferrer"><code>np.linspace</code></a>.</p>
<p>As per the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow noreferrer">docs for <code>np.r_</code></a>:</p>
<blockquote>
<p>If slice notation is used, the syntax <code>start:stop:step</code> is equivalent to <code>np.arange(start, stop, step)</code> inside of the brackets. However, if step is an imaginary number (i.e. <code>100j</code>) then its integer portion is interpreted as a number-of-points desired and the start and stop are inclusive. In other words <code>start:stop:stepj</code> is interpreted as <code>np.linspace(start, stop, step, endpoint=1)</code> inside of the brackets.</p>
</blockquote>
<p>So for your specific case, <code>-1:1:10j</code> would result in a step size of (1 - (-1)) / 9 = 0.222222... which gives the following array:</p>
<pre><code>>>> np.r_[-1:1:10j]
array([-1. , -0.77777778, -0.55555556, -0.33333333, -0.11111111,
0.11111111, 0.33333333, 0.55555556, 0.77777778, 1. ])
</code></pre>
<p>While this <em>happens</em> to give you the same answer as <code>np.arange(-1, 1.002, 2/9.0)</code>, note that <code>arange</code> is <em>not</em> a good way to create such an array in general, because using non-integer step-sizes in <code>arange</code>s is a <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html" rel="nofollow noreferrer">bad idea</a>:</p>
<blockquote>
<p>When using a non-integer step, such as 0.1, the results will often not be consistent. It is better to use linspace for these cases.</p>
</blockquote>
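<p>A quick check that the slice notation really is equivalent to <code>np.linspace</code>:</p>

```python
import numpy as np

a = np.r_[-1:1:10j]           # imaginary step => number of points, endpoints inclusive
b = np.linspace(-1, 1, 10)
print(len(a), np.allclose(a, b))  # 10 True
```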
|
python|arrays|numpy|scipy
| 3
|
10,018
| 58,888,092
|
How to remove a rough line artifact from image after binarization
|
<p>I am stuck on a problem where I want to differentiate between an object and the <a href="https://i.stack.imgur.com/5BU7o.jpg" rel="nofollow noreferrer">background</a> (a semi-transparent white sheet with backlight), i.e. a fixed rough line introduced in the background that is merged with the object. My algorithm right now: I take the image from the camera, smooth it with a Gaussian blur, extract the Value component from HSV, and apply local binarization using the Wolf method to get the <a href="https://i.stack.imgur.com/BIfLJ.jpg" rel="nofollow noreferrer">binarized image</a>, after which I use OpenCV's connected component algorithm to remove some small artifacts that are not connected to the object, as seen <a href="https://i.stack.imgur.com/WUDon.jpg" rel="nofollow noreferrer">here</a>. Now there is only this line artifact which is merged with the object, but I want only the object, as seen in this <a href="https://i.stack.imgur.com/drcs9.jpg" rel="nofollow noreferrer">image</a>. Please note that there are 2 lines in the binary image, so using the 8-connected logic to detect lines not making a loop is not possible; this is what I think and also tried. Here is the code for that:</p>
<pre><code>size = np.size(thresh_img)
skel = np.zeros(thresh_img.shape,np.uint8)
element = cv2.getStructuringElement(cv2.MORPH_RECT,(3,3))
done = False
while( not done):
eroded = cv2.erode(thresh_img,element)
temp = cv2.dilate(eroded,element)
temp = cv2.subtract(thresh_img,temp)
skel = cv2.bitwise_or(skel,temp)
thresh_img = eroded.copy()
zeros = size - cv2.countNonZero(thresh_img)
if zeros==size:
done = True
# set max pixel value to 1
s = np.uint8(skel > 0)
count = 0
i = 0
while count != np.sum(s):
# non-zero pixel count
count = np.sum(s)
# examine 3x3 neighborhood of each pixel
filt = cv2.boxFilter(s, -1, (3, 3), normalize=False)
# if the center pixel of 3x3 neighborhood is zero, we are not interested in it
s = s*filt
# now we have pixels where the center pixel of 3x3 neighborhood is non-zero
# if a pixels' 8-connectivity is less than 2 we can remove it
# threshold is 3 here because the boxfilter also counted the center pixel
s[s < 1] = 0
# set max pixel value to 1
s[s > 0] = 1
i = i + 1
</code></pre>
<p>Any help in the form of code would be highly appreciated thanks.</p>
|
<p>Since you are already using connectedComponents, the best way is to exclude not only the components which are small, but also the ones that are touching the borders of the image.
You can know which ones are to be discarded using <code>connectedComponentsWithStats()</code>, which also gives you information about the bounding box of each component.</p>
<p>Alternatively, and very similarly, you can switch from <code>connectedComponents()</code> to <code>findContours()</code>, which gives you the <strong>Components</strong> directly, so you can discard the external ones and the small ones to retrieve the part you are interested in.</p>
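<p>As an illustration of the border-exclusion idea, here is a small pure-NumPy flood-fill sketch (in practice <code>cv2.connectedComponentsWithStats()</code> gives you each component's bounding box directly, so you can simply drop any component whose box touches the image edge):</p>

```python
import numpy as np
from collections import deque

def remove_border_components(mask):
    """Remove every 4-connected foreground component that touches the border."""
    mask = mask.astype(bool)
    out = mask.copy()
    h, w = mask.shape
    visited = np.zeros_like(mask)
    # seed a flood fill from every foreground pixel on the border
    seeds = deque((r, c) for r in range(h) for c in range(w)
                  if mask[r, c] and (r in (0, h - 1) or c in (0, w - 1)))
    while seeds:
        r, c = seeds.popleft()
        if visited[r, c] or not mask[r, c]:
            continue
        visited[r, c] = True
        out[r, c] = False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < h and 0 <= c + dc < w:
                seeds.append((r + dr, c + dc))
    return out

m = np.array([[1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]], dtype=np.uint8)
# the top-left component touches the border and is removed;
# the interior component in row 2 survives
print(remove_border_components(m).astype(int))
```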
|
python|numpy|opencv|image-processing|image-thresholding
| 0
|
10,019
| 58,708,819
|
How to calculate growth in percentage between rows in a Pandas DataFrame?
|
<p>I have a data-frame such as:</p>
<pre><code> A B(int64)
1 100
2 150
3 200
</code></pre>
<p>now I need to calculate the growth rate and set it as an additional column such as: </p>
<pre><code> A C
1 naN
2 50%
3 33.33%
</code></pre>
<p>how can I achieve that? Thank you so much for your help!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.pct_change.html" rel="nofollow noreferrer"><code>Series.pct_change</code></a> with multiple by <code>100</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mul.html" rel="nofollow noreferrer"><code>Series.mul</code></a> and rounding by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.round.html" rel="nofollow noreferrer"><code>Series.round</code></a>:</p>
<pre><code>df['C'] = df.B.pct_change().mul(100).round(2)
print (df)
A B C
0 1 100 NaN
1 2 150 50.00
2 3 200 33.33
</code></pre>
<p>For percentage add <code>map</code> with <code>format</code> and processing with missing values with <code>NaN != NaN</code>:</p>
<pre><code>df['C'] = df.B.pct_change().mul(100).round(2).map(lambda x: '{0:g}%'.format(x) if x==x else x)
print (df)
A B C
0 1 100 NaN
1 2 150 50%
2 3 200 33.33%
</code></pre>
|
python|pandas|dataframe
| 4
|
10,020
| 56,088,440
|
Append columns to a DataFrame using apply and compute new columns based on existing values using apply on the row
|
<p>Given a DataFrame </p>
<pre class="lang-py prettyprint-override"><code> a b c d
1 5 5 5 5
2 5 5 5 5
3 5 5 5 5
</code></pre>
<p>I would like to add more columns on the DataFrame based on the existing ones but using some logic that can't fit in a lambda. The desired result should look something like this:</p>
<pre class="lang-py prettyprint-override"><code> a a_added c c_added d d_added
1 5 'good' 5 'good' 5 'bad'
2 5 'bad' 5 'good' 5 'bad'
3 5 'good' 5 'good' 5 'bad'
</code></pre>
<p>After seeing <a href="https://stackoverflow.com/a/45744302/2404336">this</a> answer, my idea was to use <code>DataFrame.apply()</code> on each row and after that <code>Series.apply()</code> on each value but I don't know exactly how to chain the calls and what exactly to return such that I return a new column name from the Series's apply function. After that I think I need to combine those two DataFrames with <code>DataFrame.join()</code>. I really need to use <code>Series.apply()</code> because I have to compute each value with some custom logic.</p>
<p>EDIT:
I have a map of thresholds where the keys correspond to the column names in my DataFrame and the values are warning/critical thresholds, plus an operation that says how the current value should be compared against the threshold:</p>
<pre><code>thresholds = {
'a': {'warning': 90, 'critical': 98, operation: 'lt'},
'b': {'warning': 10, 'critical': 15, operation: 'gt'},
'c': {'warning': 5, 'critical': 9, operation: 'le'}
}
</code></pre>
<p>EDIT2:
Using the following input with the thresholds above: </p>
<pre><code> a b c
1 89 0 4
2 91 9 10
3 99 17 5
</code></pre>
<p>will get as result:</p>
<pre><code> a a_r b b_r c c_r
1 89 good 0 good 4 good
2 91 warn 9 warn 10 crit
3 99 crit 17 good 5 warn
</code></pre>
<p>Therefore for each value depending on the column name I have to apply the corresponding threshold from the map.</p>
|
<p>Use:</p>
<pre><code>print (df)
a b c
1 89 11 4
2 91 9 10
3 99 17 5
thresholds = {
'a': {'warning': 90, 'critical': 98, 'operation': 'lt'},
'b': {'warning': 10, 'critical': 15, 'operation': 'gt'},
'c': {'warning': 5, 'critical': 9, 'operation': 'le'}
}
import operator
ops = {'gt': operator.gt,
'lt': operator.lt,
'ge': operator.ge,
'le': operator.le,
'eq': operator.eq,
'ne': operator.ne}
</code></pre>
<hr>
<pre><code>for k, v in thresholds.items():
op1 = v.pop('operation')
if op1 in ('lt','le'):
sorted_v = sorted(v.items(), key=operator.itemgetter(1))
else:
sorted_v = sorted(v.items(), key=operator.itemgetter(1), reverse=True)
for k1, v1 in sorted_v:
#https://stackoverflow.com/q/46421521/2901002
m = ops[op1](v1, df[k])
df.loc[m, f'{k}_added'] = k1
df = df.sort_index(axis=1).fillna('good')
print (df)
a a_added b b_added c c_added
1 89 good 11 critical 4 good
2 91 warning 9 warning 10 critical
3 99 critical 17 good 5 warning
</code></pre>
|
python|pandas|dataframe|series
| 1
|
10,021
| 56,387,827
|
How do I retain the column name used in my group by with Pandas
|
<p>I have two data frames. I would like to use group by on the second data frame and then merge the two together on the Company Name column. The issue is that with my group by statement I lose the Company Name column. </p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(
{
'Company Name': ['Google','Google','Microsoft','Microsoft','Amazon','Amazon'],
'Location': ['Somewhere','Somewhere','Somewhere','Somewhere','Somewhere','Somewhere'],
}
)
df = pd.DataFrame(
{
'Company Name': ['Google','Google','Microsoft','Microsoft','Amazon','Amazon'],
'Sales': [12345,12345,12345,12345,12345,12345],
'Company Type': ['Software','Software','Software','Software','Software','Software']
}
)
df = df.groupby(['Company Name']).sum()
pd.merge(df1,df,how="inner",on="Company Name")
</code></pre>
<p>I get an error message when merging due to df not having a Company Name column to perform the join.</p>
|
<p>Replace this line:</p>
<pre><code>df = df.groupby(['Company Name']).sum()
</code></pre>
<p>With:</p>
<pre><code>df = df.groupby('Company Name', as_index=False).sum()
</code></pre>
<p>Then your code will work as expected, and return:</p>
<pre><code> Company Name Location Sales
0 Google Somewhere 24690
1 Google Somewhere 24690
2 Microsoft Somewhere 24690
3 Microsoft Somewhere 24690
4 Amazon Somewhere 24690
5 Amazon Somewhere 24690
</code></pre>
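<p>Equivalently, you can keep the default <code>groupby</code> and restore the column afterwards with <code>reset_index()</code> (a small sketch with made-up data):</p>

```python
import pandas as pd

df = pd.DataFrame({'Company Name': ['Google', 'Google', 'Amazon'],
                   'Sales': [1, 2, 3]})
out = df.groupby('Company Name').sum().reset_index()
print(out)
# 'Company Name' is an ordinary column again, so merging on it works
```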
|
python|pandas|pandas-groupby
| 2
|
10,022
| 56,434,677
|
How do I remove duplicates based on not only one but two conditions from other columns
|
<p>I am trying to remove the duplicated "Box" rows based on two columns in my Dataframe:</p>
<p><a href="https://i.stack.imgur.com/3Dr4p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Dr4p.png" alt="enter image description here"></a></p>
<pre><code>import pandas as pd
d = {'Box': ['A1', 'A1', 'A2', 'A3', 'A4', 'A5', 'A5'], 'Status': ['Prep', 'Ready', 'Prep', 'Prep', 'Ready', 'Prep', 'Ready'], 'Week':[11, 12, 12, 13, 11, 10, 11], 'QTY': [6, 7, 6, 8, 5, 8, 7]}
df = pd.DataFrame(data=d)
</code></pre>
<ul>
<li>if there are duplicated Box numbers, take the one with the min(Week)</li>
<li>if there are duplicated Box numbers, take the row with Status != Ready (not equal to Ready)</li>
</ul>
<p>What I have tried so far :</p>
<p><code>df1= df.drop_duplicates(subset=["Week", "Box"], keep=False)</code></p>
<p>If both conditions are met, I want to take the Status!= Ready condition as priority.</p>
<p>The expected result is:</p>
<p><a href="https://i.stack.imgur.com/WcwdH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WcwdH.png" alt="enter image description here"></a></p>
|
<p><code>DataFrame.drop_duplicates(...)</code> defaults to keeping the first item it finds based on the subset of columns you specify.</p>
<p>In other words, <code>df.drop_duplicates('Box')</code> will keep the first of each unique value of <code>Box</code> and drop the rest.</p>
<p>So we just need to sort our data frame so that the items we want to keep are the first ones we encounter.</p>
<pre><code>uniques = df.sort_values('Week').sort_values('Status').drop_duplicates('Box')
</code></pre>
<p>This makes quite a few assumptions:</p>
<ol>
<li>Your data is small, so sorting twice like this is not too expensive.</li>
<li>That you have no other values of <code>Status</code> that might disrupt this. <code>Prep</code> happens to be lexicographically before <code>Ready</code>.</li>
<li>You have no examples where a lower <code>Week</code> value has <code>Ready</code> in <code>Status</code> - because we sort by <code>Status</code> last, we place a higher priority on this condition. You can reverse them if you want to filter by <code>Week</code> first.</li>
</ol>
<p>EDIT: </p>
<p>with the data you posted:</p>
<pre><code>>>> import pandas as pd
>>> d = {'Box': ['A1', 'A1', 'A2', 'A3', 'A4', 'A5', 'A5'], 'Status': ['Prep', 'Ready', 'Prep', 'Prep', 'Ready', 'Prep', 'Ready'], 'Week':[11, 12, 12, 13, 11, 10, 11], 'QTY': [6, 7, 6, 8, 5, 8, 7]}
>>> df = pd.DataFrame(data=d)
>>> df.sort_values('Status').sort_values('Week').drop_duplicates('Box').sort_index()
Box QTY Status Week
0 A1 6 Prep 11
2 A2 6 Prep 12
3 A3 8 Prep 13
4 A4 5 Ready 11
5 A5 8 Prep 10
</code></pre>
<p>For assumption 2 above, I recommend having an ordering for your statuses, then adding a column based on that.</p>
<pre><code>order = { 'Prep' : 1, 'Ready' : 2 }
df['status_order'] = df['Status'].apply(lambda x: order[x])
</code></pre>
<p>Then you can sort by this column instead of <code>Status</code>. This generalizes to handling duplicates for non-<code>Ready</code> status.</p>
|
python|pandas|dataframe|duplicates
| 0
|
10,023
| 56,133,320
|
How to remove special characters from csv using pandas
|
<p>Currently cleaning data from a csv file. I have successfully made everything lowercase, removed stopwords and punctuation, etc., but I still need to remove special characters. For example, the csv file contains things such as 'César' and '‘disgrace’'. If there is a way to replace these characters, even better, but I am fine with removing them. Below is the code I have so far.</p>
<pre><code>import pandas as pd
from nltk.corpus import stopwords
import string
from nltk.stem import WordNetLemmatizer
lemma = WordNetLemmatizer()
pd.read_csv('soccer.csv', encoding='utf-8')
df = pd.read_csv('soccer.csv')
df.columns = ['post_id', 'post_title', 'subreddit']
df['post_title'] = df['post_title'].str.lower().str.replace(r'[^\w\s]+', '').str.split()
stop = stopwords.words('english')
df['post_title'] = df['post_title'].apply(lambda x: [item for item in x if item not in stop])
df['post_title']= df['post_title'].apply(lambda x : [lemma.lemmatize(y) for y in x])
df.to_csv('clean_soccer.csv')
</code></pre>
|
<p>When saving the file try:</p>
<pre><code>df.to_csv('clean_soccer.csv', encoding='utf-8-sig')
</code></pre>
<p>or simply</p>
<pre><code>df.to_csv('clean_soccer.csv', encoding='utf-8')
</code></pre>
|
python|pandas|csv|data-cleaning
| 1
|
10,024
| 56,381,714
|
Is there anyway to reset multi index in pandas?
|
<p>I want to obtain stock data using pandas_datareader. I have the data, but the columns I got are a <code>MultiIndex</code>:</p>
<pre><code>from pandas_datareader import data as pdr
import yfinance

_data = pdr.get_data_yahoo(['MSFT'], start='2019-01-01', end='2019-05-30')
_data.columns

MultiIndex(levels=[['High', 'Low', 'Open', 'Close', 'Volume', 'Adj Close'], ['MSFT']],
           codes=[[0, 1, 2, 3, 4, 5], [0, 0, 0, 0, 0, 0]],
           names=['Attributes', 'Symbols'])
</code></pre>
<p>The format I want to get is a single index, so that I can use the data to plot:</p>
<pre><code> symbol date price
0 MSFT 2000-01-01 39.81
1 MSFT 2000-02-01 36.35
2 MSFT 2000-03-01 43.22
3 MSFT 2000-04-01 28.37
4 MSFT 2000-05-01 25.45
</code></pre>
|
<p>There are several "Price" Columns to choose from. I chose <code>'Adj Close'</code>. This is mostly the same as <a href="https://stackoverflow.com/questions/56381714/is-there-anyway-to-reset-multi-index-in-pandas/56382230#comment99364733_56381714">ChrisA</a>'s comment.</p>
<pre><code>_data.stack()['Adj Close'].reset_index(name='Price')
Date Symbols Price
0 2019-01-02 MSFT 100.318642
1 2019-01-03 MSFT 96.628120
2 2019-01-04 MSFT 101.122223
3 2019-01-07 MSFT 101.251190
4 2019-01-08 MSFT 101.985329
.. ... ... ...
</code></pre>
|
python|pandas
| 3
|
10,025
| 55,898,944
|
Matplotlib: how to display a line with different colors base on the line data
|
<p>I have a numpy array which takes only two values <code>0.0018</code> and <code>0.0018001</code></p>
<pre><code>price_high_y = [0.0018 0.0018 0.0018 0.0018001 0.0018001 0.0018 0.0018 0.0018]
</code></pre>
<p>What I would like to do is to display this line with the values 0.0018 in black and 0.0018001 in yellow. It should be a horizontal line. I am a bit stuck... any idea? Thanks!</p>
|
<p>Is this what you want?</p>
<pre><code>price_high_y = np.array([0.0018, 0.0018, 0.0018, 0.0018001, 0.0018001, 0.0018, 0.0018, 0.0018])
yvals = sorted(np.unique(price_high_y))
colors = {0.0018: 'k', 0.0018001: 'y'}
for i, y in enumerate(yvals):
plt.axhline(i+0.5, color=colors[y])
plt.yticks(np.arange(len(yvals))+0.5, yvals)
plt.xticks([0, 1])
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/hXpAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hXpAP.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 0
|
10,026
| 64,673,064
|
Shape gets changed when preprocessing with column transformer and predicting the testing data
|
<p>The data structure is like below.</p>
<pre><code>df_train.head()
ID y X0 X1 X2 X3 X4 X5 X6 X8 ... X375 X376 X377 X378 X379 X380 X382 X383 X384 X385
0 0 130.81 k v at a d u j o ... 0 0 1 0 0 0 0 0 0 0
1 6 88.53 k t av e d y l o ... 1 0 0 0 0 0 0 0 0 0
2 7 76.26 az w n c d x j x ... 0 0 0 0 0 0 1 0 0 0
3 9 80.62 az t n f d x l e ... 0 0 0 0 0 0 0 0 0 0
4 13 78.02 az v n f d h d n ... 0 0 0 0 0 0 0 0 0 0
df_train.shape
(4209, 378)
df_train.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4209 entries, 0 to 4208
Columns: 378 entries, ID to X385
dtypes: float64(1), int64(369), object(8)
memory usage: 12.1+ MB
cat_cols=df_train.select_dtypes(include="object").columns
y=df_train['y']
y.shape
(4209,)
y.head()
0 130.81
1 88.53
2 76.26
3 80.62
4 78.02
Name: y, dtype: float64
X=df_train.drop(['y','ID'],axis=1)
X.shape
(4209, 376)
X.head()
X0 X1 X2 X3 X4 X5 X6 X8 X10 X11 ... X375 X376 X377 X378 X379 X380 X382 X383 X384 X385
0 k v at a d u j o 0 0 ... 0 0 1 0 0 0 0 0 0 0
1 k t av e d y l o 0 0 ... 1 0 0 0 0 0 0 0 0 0
2 az w n c d x j x 0 0 ... 0 0 0 0 0 0 1 0 0 0
3 az t n f d x l e 0 0 ... 0 0 0 0 0 0 0 0 0 0
4 az v n f d h d n 0 0 ... 0 0 0 0 0 0 0 0 0 0
5 rows × 376 columns
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3,random_state=42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
(2946, 376)
(1263, 376)
(2946,)
(1263,)
ct=make_column_transformer((OneHotEncoder(),cat_cols),remainder='passthrough')
ct
ColumnTransformer(remainder='passthrough',
transformers=[('onehotencoder', OneHotEncoder(),
Index(['X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X8'], dtype='object'))])
X_train.columns
Index(['X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X8', 'X10', 'X11',
...
'X375', 'X376', 'X377', 'X378', 'X379', 'X380', 'X382', 'X383', 'X384',
'X385'],
dtype='object', length=376)
type(X_train)
pandas.core.frame.DataFrame
X_train_transformed=ct.fit_transform(X_train)
(2946, 558)
type(X_train_transformed)
numpy.ndarray
linereg=LinearRegression()
linereg.fit(X_train_transformed,y_train)
X_test_transformed=ct.fit_transform(X_test)
X_test.shape
(1263, 376)
X_test_transformed.shape
(1263, 544)
linereg.predict(X_test_transformed)
</code></pre>
<p>Error faced at this step (last extract shared here).</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-126-9d1b72421dd0> in <module>
D:\Anaconda\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
71 FutureWarning)
72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 73 return f(**kwargs)
74 return inner_f
75
D:\Anaconda\lib\site-packages\sklearn\utils\extmath.py in safe_sparse_dot(a, b, dense_output)
151 ret = np.dot(a, b)
152 else:
--> 153 ret = a @ b
154
155 if (sparse.issparse(a) and sparse.issparse(b)
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 558 is different from 544)
</code></pre>
<p>The shape is getting distorted while transforming the data set. I am not sure whether there is a better way of preprocessing the data in this case, as all columns are categorical. There are 8 columns of nominal categorical string values and the remaining columns have binary values only. The column transformer used a One Hot Encoder and the remaining columns were passed directly to the predictor. I would appreciate your help to resolve this.</p>
|
<p>I have tried to create a Minimal Reproducible Example of your problem, and I do not run into any errors myself. Can you run it on your side? See if there are any important differences between the dataframe created here and yours?</p>
<p>Note that:</p>
<ul>
<li>When transforming your test data, you should only transform the data with the <code>ColumnTransformer</code> and not fit it</li>
<li>The <code>OneHotEncoder</code> is initialized with <code>handle_unknown = 'ignore'</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Parameters to tweak
n_categories = 10 # Number of categorical columns
groups_by_cat = [3 , 10] # Number of groups which a category will have, to be chosen
# randomly between these two numbers
n_rows = 20
n_binary_cols = 10
# code
list_alpha = list('abcdefghijklmnopqrstuvwxyz')
np.random.seed(42)
groups = []
# names of the columns of the dataframe
col_names = ['X'+str(i) for i in range(n_categories + n_binary_cols)]
# first we generate randomly a set of groups that each category can have
for i in range(n_categories):
np.random.randn()
temp_groups = []
temp_n_groups = np.random.randint(*groups_by_cat)
for k in range(temp_n_groups):
group = "".join(np.random.choice(list_alpha,2, replace = True))
temp_groups.append(group)
groups.append(temp_groups)
# then we generate n_rows taking samples from the groups generated previously
array_categories = np.random.choice(groups[0],(n_rows,1), replace = True)
for i in range(1,n_categories):
temp_column = np.random.choice(groups[i],(n_rows,1), replace = True)
array_categories = np.hstack((array_categories, temp_column))
# we generate an array containing the binary columns
array_binaries = np.random.randint(0, 2, (n_rows, n_binary_cols))
# we create the dataframe concatenating together the two arrays
df = pd.DataFrame(np.hstack((array_categories, array_binaries)), columns = col_names)
y = np.random.random_sample((n_rows,1))
# split
X_train, X_test, y_train, y_test = train_test_split(df, y)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# create column transformer
cat_cols = df.select_dtypes(include="object").columns
ct = make_column_transformer((OneHotEncoder(handle_unknown='ignore'),cat_cols),
remainder='passthrough')
# fit transform the ColumnTransformer
X_train_transformed = ct.fit_transform(X_train)
# fit linearRegression and predict
linereg = LinearRegression()
linereg.fit(X_train_transformed,y_train)
X_test_transformed = ct.transform(X_test)
print("\nSizes of transformed arrays")
print(X_train_transformed.shape)
print(X_test_transformed.shape)
linereg.predict(X_test_transformed)
</code></pre>
<p>Note that the test data, is only transformed with the <code>ColumnTransformer</code>:</p>
<pre><code>X_test_transformed = ct.transform(X_test)
</code></pre>
<p>Otherwise the <code>OneHotEncoder()</code> will calculate again the necessary columns for your test data, which might not be exactly the same columns than for your training data (if for example the test data does not have some of the groups that were found on your training data). <a href="https://datascience.stackexchange.com/questions/12321/whats-the-difference-between-fit-and-fit-transform-in-scikit-learn-models">Here</a> you have more information in the differences between <code>fit</code> <code>fit_transform</code> and <code>transform</code></p>
|
python|python-3.x|pandas|scikit-learn|linear-regression
| 1
|
10,027
| 64,973,770
|
PyTorch: ValueError: Expected input batch_size (256) to match target batch_size (128)
|
<p>I've faced a ValueError while training a BiLSTM part-of-speech tagger using PyTorch. <strong>ValueError: Expected input batch_size (256) to match target batch_size (128).</strong></p>
<pre><code>def train(model, iterator, optimizer, criterion, tag_pad_idx):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
text = batch.p
tags = batch.t
optimizer.zero_grad()
#text = [sent len, batch size]
predictions = model(text)
#predictions = [sent len, batch size, output dim]
#tags = [sent len, batch size]
predictions = predictions.view(-1, predictions.shape[-1])
tags = tags.view(-1)
#predictions = [sent len * batch size, output dim]
#tags = [sent len * batch size]
loss = criterion(predictions, tags)
acc = categorical_accuracy(predictions, tags, tag_pad_idx)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion, tag_pad_idx):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text = batch.p
tags = batch.t
predictions = model(text)
predictions = predictions.view(-1, predictions.shape[-1])
tags = tags.view(-1)
loss = criterion(predictions, tags)
acc = categorical_accuracy(predictions, tags, tag_pad_idx)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
class BiLSTMPOSTagger(nn.Module):
def __init__(self,
input_dim,
embedding_dim,
hidden_dim,
output_dim,
n_layers,
bidirectional,
dropout,
pad_idx):
super().__init__()
self.embedding = nn.Embedding(input_dim, embedding_dim, padding_idx = pad_idx)
self.lstm = nn.LSTM(embedding_dim,
hidden_dim,
num_layers = n_layers,
bidirectional = bidirectional,
dropout = dropout if n_layers > 1 else 0)
self.fc = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
embedded = self.dropout(self.embedding(text))
outputs, (hidden, cell) = self.lstm(embedded)
predictions = self.fc(self.dropout(outputs))
return predictions
</code></pre>
<p>...........................
...........................
...........................
...........................</p>
<pre><code>INPUT_DIM = len(POS.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 128
OUTPUT_DIM = len(TAG.vocab)
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.25
PAD_IDX = POS.vocab.stoi[POS.pad_token]
print(INPUT_DIM) #output 22147
print(OUTPUT_DIM) #output 42
model = BiLSTMPOSTagger(INPUT_DIM,
EMBEDDING_DIM,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
PAD_IDX)
</code></pre>
<p>...........................
...........................
...........................
...........................</p>
<pre><code>N_EPOCHS = 10
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion, TAG_PAD_IDX)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion, TAG_PAD_IDX)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
ValueError Traceback (most recent call last)
<ipython-input-55-83bf30366feb> in <module>()
7 start_time = time.time()
8
----> 9 train_loss, train_acc = train(model, train_iterator, optimizer, criterion, TAG_PAD_IDX)
10 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion, TAG_PAD_IDX)
11
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2260 if input.size(0) != target.size(0):
2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (256) to match target batch_size (128).
</code></pre>
|
<p><em>(continuing from the comments)</em></p>
<p>I guess that your batch size is equal to 128 (it's nowhere defined), right?
The LSTM outputs a list of the outputs of every timestep. But for classification you normally just want the last one. So the first dimension of <code>outputs</code> is your sequence length, which in your case seems to be 2. And when you apply <code>.view</code>, this 2 gets multiplied with your batch size (128) and then it becomes 256. So straight after your LSTM layer you need to take the last output of the output sequence <code>outputs</code>. Like this:</p>
<pre class="lang-py prettyprint-override"><code>def forward(self, text):
embedded = self.dropout(self.embedding(text))
outputs, (hidden, cell) = self.lstm(embedded)
# take last output
outputs = outputs.reshape(batch_size, sequence_size, hidden_size)
outputs = outputs[:, -1]
predictions = self.fc(self.dropout(outputs))
return predictions
</code></pre>
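<p>The 256-vs-128 mismatch itself can be reproduced with plain NumPy (assuming a sentence length of 2 and a batch size of 128, which is what the error message suggests):</p>

```python
import numpy as np

sent_len, batch_size, n_tags = 2, 128, 42
# stand-in for the LSTM output: [sent len, batch size, output dim]
predictions = np.zeros((sent_len, batch_size, n_tags))
tags = np.zeros(batch_size, dtype=int)  # one tag per sequence, not per token

flat = predictions.reshape(-1, n_tags)
print(flat.shape[0], tags.shape[0])  # 256 vs 128 -> the ValueError
```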
|
python|neural-network|pytorch|lstm|part-of-speech
| 0
|
10,028
| 40,041,076
|
Optimize Python code. Optimize Pandas apply. Numba slower than pure Python
|
<p>I'm facing a huge bottleneck where I apply a method() to each row in a Pandas DataFrame. The execution time is on the order of 15-20 minutes.</p>
<p>Now, the code I use is as follows:</p>
<pre><code>def FillTarget(self, df):
backup = df.copy()
target = list(set(df['ACTL_CNTRS_BY_DAY']))
df = df[~df['ACTL_CNTRS_BY_DAY'].isnull()]
tmp = df[df['ACTL_CNTRS_BY_DAY'].isin(target)]
tmp = tmp[['APPT_SCHD_ARVL_D', 'ACTL_CNTRS_BY_DAY']]
tmp.drop_duplicates(subset='APPT_SCHD_ARVL_D', inplace=True)
t1 = dt.datetime.now()
backup['ACTL_CNTRS_BY_DAY'] = backup.apply(self.ImputeTargetAcrossSameDate,args=(tmp, ), axis=1)
# backup['ACTL_CNTRS_BY_DAY'] = self.compute_(tmp, backup)
t2 = dt.datetime.now()
print("Time for the bottleneck is ", (t2-t1).microseconds)
print("step f")
return backup
</code></pre>
<p>And, the method ImputeTargetAcrossSameDate() method is as follows:</p>
<pre><code>def ImputeTargetAcrossSameDate(self, x, tmp):
ret = tmp[tmp['APPT_SCHD_ARVL_D'] == x['APPT_SCHD_ARVL_D']]
ret = ret['ACTL_CNTRS_BY_DAY']
if ret.empty:
r = 0
else:
r = ret.values
r = r[0]
return r
</code></pre>
<p>Is there any way to optimize this apply() call to reduce the overall time?
Note that I'll have to run this process on a DataFrame which stores data for 2 years. Running it for 15 days of data took 15-20 minutes, while running it for 1 month of data took more than 45 minutes, after which I had to force-stop the process; thus, running on the full dataset will be a huge problem.</p>
<p>Also note that I came across a few examples at <a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html</a> for introducing numba to optimize code, and the following is my numba implementation:</p>
<p>Statement to call numba method:</p>
<pre><code>backup['ACTL_CNTRS_BY_DAY'] = self.compute_(tmp, backup)
</code></pre>
<p>Compute method of numba:</p>
<pre><code>@numba.jit
def compute_(self, df1, df2):
n = len(df2)
result = np.empty(n, dtype='float64')
for i in range(n):
d = df2.iloc[i]
result[i] = self.apply_ImputeTargetAcrossSameDate_method(df1['APPT_SCHD_ARVL_D'].values, df1['ACTL_CNTRS_BY_DAY'].values,
d['APPT_SCHD_ARVL_D'], d['ACTL_CNTRS_BY_DAY'])
return result
</code></pre>
<p>This is a wrapper method which replaces Pandas' apply to call the Impute method on each row. The impute method using numba is as follows:</p>
<pre><code>@numba.jit
def apply_ImputeTargetAcrossSameDate_method(self, df1col1, df1col2, df2col1, df2col2):
dd = np.datetime64(df2col1)
idx1 = np.where(df1col1 == dd)[0]
if idx1.size == 0:
idx1 = idx1
else:
idx1 = idx1[0]
val = df1col2[idx1]
if val.size == 0:
r = 0
else:
r = val
return r
</code></pre>
<p>I ran the normal apply() method as well as the numba method for data covering a period of 5 days, and the following were my results:</p>
<pre><code>With Numba:
749805 microseconds
With DF.apply()
484603 microseconds.
</code></pre>
<p>As you can see, numba is slower, which should not happen, so in case I'm missing something, let me know so that I can optimize this piece of code.</p>
<p>Thanks in advance</p>
<p><strong>Edit 1</strong>
As requested, a data snippet (the top 20 rows) is added below.
Before:</p>
<pre><code> APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY
919 2020-11-17 NaN
917 2020-11-17 NaN
916 2020-11-17 NaN
915 2020-11-17 NaN
918 2020-11-17 NaN
905 2014-06-01 NaN
911 2014-06-01 NaN
913 2014-06-01 NaN
912 2014-06-01 NaN
910 2014-06-01 NaN
914 2014-06-01 NaN
908 2014-06-01 NaN
906 2014-06-01 NaN
909 2014-06-01 NaN
907 2014-06-01 NaN
898 2014-05-29 NaN
892 2014-05-29 NaN
893 2014-05-29 NaN
894 2014-05-29 NaN
895 2014-05-29 NaN
</code></pre>
<p>After:</p>
<pre><code>APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY
919 2020-11-17 0.0
917 2020-11-17 0.0
916 2020-11-17 0.0
915 2020-11-17 0.0
918 2020-11-17 0.0
905 2014-06-01 0.0
911 2014-06-01 0.0
913 2014-06-01 0.0
912 2014-06-01 0.0
910 2014-06-01 0.0
914 2014-06-01 0.0
908 2014-06-01 0.0
906 2014-06-01 0.0
909 2014-06-01 0.0
907 2014-06-01 0.0
898 2014-05-29 0.0
892 2014-05-29 0.0
893 2014-05-29 0.0
894 2014-05-29 0.0
895 2014-05-29 0.0
</code></pre>
<p>What does the method do?
In the above data example, you can see some dates are repeated, and the value against them is NaN. If all the rows having the same date have the value NaN, it replaces them with 0.
But there are some cases, for example <strong>2014-05-29</strong>, where there will be 10 rows having the same date, and only 1 row against that date with some value (let's say 10). Then the method shall populate all values against that particular date with 10 instead of NaNs.</p>
<p>Example:</p>
<pre><code>898 2014-05-29 NaN
892 2014-05-29 NaN
893 2014-05-29 NaN
894 2014-05-29 10
895 2014-05-29 NaN
</code></pre>
<p>The above shall become:</p>
<pre><code>898 2014-05-29 10
892 2014-05-29 10
893 2014-05-29 10
894 2014-05-29 10
895 2014-05-29 10
</code></pre>
|
<p>This is a bit of a rushed solution because I'm about to leave for the weekend now, but it works.</p>
<p>Input Dataframe:</p>
<pre><code>index APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY
919 2020-11-17 NaN
917 2020-11-17 NaN
916 2020-11-17 NaN
915 2020-11-17 NaN
918 2020-11-17 NaN
905 2014-06-01 NaN
911 2014-06-01 NaN
913 2014-06-01 NaN
912 2014-06-01 NaN
910 2014-06-01 NaN
914 2014-06-01 NaN
908 2014-06-01 NaN
906 2014-06-01 NaN
909 2014-06-01 NaN
907 2014-06-01 NaN
898 2014-05-29 NaN
892 2014-05-29 NaN
893 2014-05-29 NaN
894 2014-05-29 10
895 2014-05-29 NaN
898 2014-05-29 NaN
</code></pre>
<p>The code:</p>
<pre><code>tt = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)].APPT_SCHD_ARVL_D.unique()
vv = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)]
for i, _ in df.iterrows():
    # .ix is deprecated (removed in pandas 1.0); .loc does the same label lookup here
    if df.loc[i, "APPT_SCHD_ARVL_D"] in tt:
        df.loc[i, "ACTL_CNTRS_BY_DAY"] = vv[vv.APPT_SCHD_ARVL_D == df.loc[i, "APPT_SCHD_ARVL_D"]]["ACTL_CNTRS_BY_DAY"].values[0]
df = df.fillna(0.0)
</code></pre>
<p>Basically there is no need to <code>apply</code> a function. What I'm doing here is:</p>
<ul>
<li>Get all unique dates with a value that is not null. -> <code>tt</code></li>
<li>Create a dataframe of only the non-null values. -> <code>vv</code></li>
<li>Iterate over all rows and test if the date in each row is present in <code>tt</code>.</li>
<li>If true take the value from <code>vv</code> where the date in <code>df</code> is the same and assign it to <code>df</code>.</li>
<li>Then fill all other null values with <code>0.0</code>.</li>
</ul>
<p>Iterating over rows isn't a fast thing, but I hope it's faster than your old code. If I had more time I would think of a solution without iteration, maybe on Monday.</p>
<p>EDIT:
Solution without iteration using <code>pd.merge()</code> instead:</p>
<pre><code>dg = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)].groupby("APPT_SCHD_ARVL_D").first()["ACTL_CNTRS_BY_DAY"].to_frame().reset_index()
df = pd.merge(df,dg,on="APPT_SCHD_ARVL_D",how='outer').rename(columns={"ACTL_CNTRS_BY_DAY_y":"ACTL_CNTRS_BY_DAY"}).drop("ACTL_CNTRS_BY_DAY_x",axis=1).fillna(0.0)
</code></pre>
<p>Your data implies that there's at most only one value in <code>ACTL_CNTRS_BY_DAY</code> that is not null, so I'm using <code>first()</code> in the <code>groupby</code> to pick the only value that exists.</p>
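<p>A third option, assuming each date has at most one distinct non-null value (as the question's data implies), is a groupby-transform, which avoids both the row loop and the merge bookkeeping:</p>

```python
import pandas as pd
import numpy as np

# toy data in the shape described in the question
df = pd.DataFrame({
    'APPT_SCHD_ARVL_D': ['2014-05-29'] * 5 + ['2014-06-01'] * 3,
    'ACTL_CNTRS_BY_DAY': [np.nan, np.nan, np.nan, 10.0, np.nan,
                          np.nan, np.nan, np.nan],
})

# broadcast the first non-null value within each date, then fill the rest with 0
df['ACTL_CNTRS_BY_DAY'] = (df.groupby('APPT_SCHD_ARVL_D')['ACTL_CNTRS_BY_DAY']
                             .transform('first')
                             .fillna(0.0))
print(df['ACTL_CNTRS_BY_DAY'].tolist())
```

<p><code>transform('first')</code> picks the first non-null value per group and broadcasts it back to every row of that group, so all-NaN dates stay NaN and get filled with 0 afterwards.</p>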
|
python|pandas|optimization|jit|numba
| 1
|
10,029
| 39,530,157
|
Python numpy nonzero cumsum
|
<p>I want to do a nonzero <code>cumsum</code> with a <code>numpy</code> array: simply skip zeros in the array and apply <code>cumsum</code>. Suppose I have a np.array </p>
<pre><code>a = np.array([1,2,1,2,5,0,9,6,0,2,3,0])
</code></pre>
<p>my result should be</p>
<pre><code>[1,3,4,6,11,0,20,26,0,28,31,0]
</code></pre>
<p>I have tried this</p>
<pre><code>a = np.cumsum(a[a!=0])
</code></pre>
<p>but result is</p>
<pre><code>[1,3,4,6,11,20,26,28,31]
</code></pre>
<p>Any ideas? </p>
|
<p>You need to mask the original array so only the non-zero elements are overwritten:</p>
<pre><code>In [9]:
a = np.array([1,2,1,2,5,0,9,6,0,2,3,0])
a[a!=0] = np.cumsum(a[a!=0])
a
Out[9]:
array([ 1, 3, 4, 6, 11, 0, 20, 26, 0, 28, 31, 0])
</code></pre>
<p>Another method is to use <code>np.where</code>:</p>
<pre><code>In [93]:
a = np.array([1,2,1,2,5,0,9,6,0,2,3,0])
a = np.where(a!=0,np.cumsum(a),a)
a
Out[93]:
array([ 1, 3, 4, 6, 11, 0, 20, 26, 0, 28, 31, 0])
</code></pre>
<p><strong>timings</strong></p>
<pre><code>In [91]:
%%timeit
a = np.array([1,2,1,2,5,0,9,6,0,2,3,0])
a[a!=0] = np.cumsum(a[a!=0])
a
The slowest run took 4.93 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 12.6 µs per loop
In [94]:
%%timeit
a = np.array([1,2,1,2,5,0,9,6,0,2,3,0])
a = np.where(a!=0,np.cumsum(a),a)
a
The slowest run took 6.00 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 10.5 µs per loop
</code></pre>
<p>the above shows that <code>np.where</code> is marginally quicker than the first method</p>
|
python|numpy
| 3
|
10,030
| 69,662,602
|
Drop % of rows that do not contain specific string
|
<p>I want to drop 20% of the rows that do not contain 'p' or 'u' in the label column.
I know how to drop all of them, but I do not know how to drop a certain percent of the rows.
This is my code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"text": ["a", "b", "c", "d", "e", "f", "g", "h"],
"label": ["o-o-o", "o-o", "o-u", "o", "o-o-p-o", "o-o-o-o-o-o", "p-o-o", "o-o"]
})
print(df)
df = df[(df["label"].str.contains('p')) | (df["label"].str.contains('u'))]
print(df)
</code></pre>
|
<p>Use:</p>
<pre><code>import numpy as np

#for unique indices
df = df.reset_index(drop=True)
#get mask for NOT contains p or u
m = ~df["label"].str.contains('p|u')
#get 20% Trues from m
#https://stackoverflow.com/a/31794767/2901002
mask = np.random.choice([True, False], m.sum(), p=[0.2, 0.8])
#filter both masks and remove rows
df = df.drop(df.index[m][mask])
</code></pre>
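<p>An alternative, assuming the same setup, is <code>DataFrame.sample</code>, which draws exactly 20% of the non-matching rows rather than flipping an independent 20% coin per row (a <code>random_state</code> is fixed here only to make the sketch reproducible):</p>

```python
import pandas as pd

df = pd.DataFrame({"text": ["a", "b", "c", "d", "e", "f", "g", "h"],
                   "label": ["o-o-o", "o-o", "o-u", "o", "o-o-p-o",
                             "o-o-o-o-o-o", "p-o-o", "o-o"]})

# rows that do NOT contain 'p' or 'u'
m = ~df["label"].str.contains('p|u')

# sample 20% of those rows and drop them (5 candidates -> 1 row dropped)
to_drop = df[m].sample(frac=0.2, random_state=0).index
df = df.drop(to_drop)
print(len(df))
```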
|
python|pandas
| 2
|
10,031
| 53,943,248
|
Find duplicated rows, multiply a certain column by number of duplicates, drop duplicated rows
|
<p>I have a pandas dataframe of about 70000 rows, and 4500 of them are duplicates of an original. The columns are a mix of string columns and number columns. The column I'm interested in is the <code>value</code> column. I'd like to look through the entire dataframe to find rows that are completely identical, count the number of duplicated rows per row (inclusive of the original), and multiply the <code>value</code> in that row by the number of duplicates. </p>
<p>I'm not really sure how to go about this from the start, but I've tried using df[df.duplicated(keep = False)] to obtain a dataframe <code>df1</code> of duplicated rows (inclusive of original rows). I appended a column of Trues to the end of <code>df1</code>. I tried to use .groupby with a combination of columns to sum up the number of Trues, but the result was unable to capture the true number of duplicates (I obtained about 3600 unique duplicated rows in this case). </p>
<p>Here's my actual code:</p>
<pre><code>duplicate_bool = df.duplicated(keep = False)
df['duplicate_bool'] = duplicate_bool
df1= df[duplicate_bool]
f = {'duplicate_bool':'sum'}
df2= df1.groupby(['Date', 'Exporter', 'Buyer', \
'Commodity Description', 'Partner Code', \
'Quantity', 'Price per MT'], as_index = False).agg(f)
</code></pre>
<p>My idea here was to obtain a separate dataframe <code>df2</code> with no duplicates, and i could multiply the entry in the <code>value</code> column inside with the number stored in the summed <code>duplicate_bool</code> column. Then I'd simply append <code>df2</code> to my original dataframe after removing all the duplicates identified by .duplicated. </p>
<p>However, if I use groupby with all columns I get an empty dataframe. If I don't use all the columns, I don't get the true number of duplicates and I won't be able to append it in any way.</p>
<p>I think I'd like a better way to do this since I'm confusing myself.</p>
|
<p>I think this question is nothing more of figuring out how to get a count of the occurrences of each unique row. If a row occurs only once, this number is one. If it occurs more often, it will be > 1. This count you can then use to multiply, filter, etc.</p>
<p>This nice one-liner (taken from <a href="https://stackoverflow.com/questions/35584085/how-to-count-duplicate-rows-in-pandas-dataframe">How to count duplicate rows in pandas dataframe?</a>) creates an extra column with the number of occurrences of each row: </p>
<p><code>df = df.groupby(df.columns.tolist()).size().reset_index().rename(columns={0:'dup_count'})</code>.</p>
<p>To then calculate the true value of each row:</p>
<p><code>df['total_value'] = df['value'] * df['dup_count']</code></p>
<p>And to filter we can use the <code>dup_count</code> column to remove all duplicate rows:</p>
<p><code>dff = df[df['dup_count'] == 1]</code></p>
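<p>The steps above can be sketched on a toy frame (column names made up) to show what the one-liner produces:</p>

```python
import pandas as pd

df = pd.DataFrame({'key': ['x', 'x', 'y'], 'value': [5, 5, 7]})

# one row per unique combination, plus how often it occurred
counted = (df.groupby(df.columns.tolist()).size()
             .reset_index().rename(columns={0: 'dup_count'}))
counted['total_value'] = counted['value'] * counted['dup_count']
print(counted)
```

<p>The duplicated row <code>('x', 5)</code> collapses to one row with <code>dup_count = 2</code> and <code>total_value = 10</code>.</p>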
|
python|pandas|dataframe|duplicates
| 1
|
10,032
| 54,228,133
|
How to merge list of tuples
|
<p>I have two lists of tuples like this:</p>
<pre><code>x1 = [('A', 3), ('B', 4), ('C', 5)]
x2 = [('B', 4), ('C', 5), ('D', 6)]
</code></pre>
<p>I want to merge the two lists as a new one x3 so that the values in the list are added.</p>
<pre><code>x3 = [('A', 3), ('B', 8), ('C', 10),('D',6)]
</code></pre>
<p>Could you please show me how I can do this? </p>
|
<p>You can create a dictionary, then loop over the tuples in each list, adding each value to the running total for its key, or initializing the key with that value if it does not exist yet. Afterwards you can cast back to a list.</p>
<p>For example:</p>
<pre><code>full_dict = {}
for x in [x1, x2]:
for key, value in x:
full_dict[key] = full_dict.get(key, 0) + value # add to the current value, if none found then use 0 as current value
x3 = list(full_dict.items())
</code></pre>
<p>Result for <code>x3</code>:</p>
<pre><code>[('A', 3), ('B', 8), ('C', 10), ('D', 6)]
</code></pre>
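<p>The same merge can be written more compactly with <code>collections.Counter</code>, whose <code>+</code> operator sums values by key (note that <code>+</code> drops keys whose summed value is not positive, which is fine here since all values are positive):</p>

```python
from collections import Counter

x1 = [('A', 3), ('B', 4), ('C', 5)]
x2 = [('B', 4), ('C', 5), ('D', 6)]

# Counter + Counter adds counts key-by-key; sorted() restores a stable order
x3 = sorted((Counter(dict(x1)) + Counter(dict(x2))).items())
print(x3)  # [('A', 3), ('B', 8), ('C', 10), ('D', 6)]
```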
|
python|pandas|list
| 6
|
10,033
| 53,952,470
|
Matrix Subtract like Matrix Multiplication in tensorflow
|
<p>This is my first post; I usually found all my answers in the archives, but I'm having a hard time with this one. Thanks for the help!</p>
<p>I have two matrices A and B. Performing a matrix multiplication operation is trivial using tf.matmul. But I want to do a matrix subtract similar to how matrix multiplication works. E.g. if I have:</p>
<pre><code>A = tf.constant([[1, 1, 1, 2, 3, 1],[1,2,3,4,5,6],[4,3,2,1,6,5]])
B = tf.constant([[1,3,1],[2,1,1]])
#B*A
X = tf.matmul(B,A)
>>>X = [[8,10,12,15,24,24],[7,7,7,9,17,13]]
</code></pre>
<p>What I want to do is do a similar operation like matmult, but instead of multiply I want subtract and square. Eg...</p>
<p>for x<sub>11</sub>, where the subscript 11 is row 1, column 1 of matrix X.</p>
<p>= (-b<sub>11</sub> + a<sub>11</sub>)<sup>2</sup> + (-b<sub>12</sub> + a<sub>21</sub>)<sup>2</sup> + (-b<sub>13</sub> + a<sub>31</sub>)<sup>2</sup></p>
<p>and</p>
<p>x<sub>12</sub> = (-b<sub>11</sub> + a<sub>12</sub>)<sup>2</sup> + (-b<sub>12</sub> + a<sub>22</sub>)<sup>2</sup> + (-b<sub>13</sub> + a<sub>32</sub>)<sup>2</sup></p>
<p>and so on similar to how matrix multiplication works. </p>
<p>So if we take matrix A and B above and perform the operation described above (call it matmultsubtract), we get,</p>
<p>tf.matmultsubtract(B,A) gives:</p>
<p>[[(-1+1)<sup>2</sup>+(-3+1)<sup>2</sup>+(-1+4)<sup>2</sup>, (-1+1)<sup>2</sup>+(-3+2)<sup>2</sup>+(-1+3)<sup>2</sup>,...],</p>
<p>[(-2+1)<sup>2</sup>+(-1+1)<sup>2</sup>+(-1+4)<sup>2</sup>, (-2+1)<sup>2</sup>+(-1+2)<sup>2</sup>+(-1+3)<sup>2</sup>, ...]]</p>
<p>This isn't that hard if working with numpy arrays (you can use two nested for loops), iterating manually rather than using np.matmul, but tensorflow has a problem with for loops and I'm not sure how to do it.</p>
<p>Thanks for the help.</p>
|
<p>Try a vectorized operation; it is not literally a matrix subtract, but it computes the result you describe without any loops:</p>
<pre><code># shape=(2,3,6)
B_new = tf.tile(tf.expand_dims(B,axis=-1),multiples=[1,1,A.shape[1]])
# shape=(2,3,6)
A_new = tf.tile(tf.expand_dims(A,axis=0),multiples=[B.shape[0],1,1])
# shape=(2,6)
result = tf.reduce_sum(tf.square(A_new - B_new),axis=1)
with tf.Session() as sess:
print(sess.run(result))
[[13 5 1 2 33 25]
[10 6 6 9 42 42]]
</code></pre>
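<p>For reference, the same computation in plain NumPy via broadcasting can serve as a sanity check for the TensorFlow version:</p>

```python
import numpy as np

A = np.array([[1, 1, 1, 2, 3, 1], [1, 2, 3, 4, 5, 6], [4, 3, 2, 1, 6, 5]])
B = np.array([[1, 3, 1], [2, 1, 1]])

# (2,3,1) - (1,3,6) broadcasts to (2,3,6); summing over axis 1 contracts
# the shared dimension, exactly like the tile/expand_dims approach above
result = ((A[np.newaxis, :, :] - B[:, :, np.newaxis]) ** 2).sum(axis=1)
print(result)
```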
|
python|tensorflow|matrix|matrix-multiplication
| 0
|
10,034
| 53,893,869
|
How to create a range of time in Python?
|
<p>I want to iterate over the hour values to plot the number of trips in each hour. </p>
<p>I have found no solution on the internet for this problem. Can someone tell me how to do it?</p>
|
<p>You can find the documentation at <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html</a>.</p>
<p>Please have a look at the below example.</p>
<pre><code>>>> import pandas as pd
>>>
>>> times = pd.date_range("2018-01-01", freq = "s", periods = 5)
>>>
>>> d = {
... 'fullname': ["A X", "G Y", "K P", "T B", "R O"], "entry_time": times
... }
>>>
>>> df = pd.DataFrame(d)
>>> df
fullname entry_time
0 A X 2018-01-01 00:00:00
1 G Y 2018-01-01 00:00:01
2 K P 2018-01-01 00:00:02
3 T B 2018-01-01 00:00:03
4 R O 2018-01-01 00:00:04
>>>
>>> df["entry_time"]
0 2018-01-01 00:00:00
1 2018-01-01 00:00:01
2 2018-01-01 00:00:02
3 2018-01-01 00:00:03
4 2018-01-01 00:00:04
Name: entry_time, dtype: datetime64[ns]
>>>
>>> df["entry_time"][0]
Timestamp('2018-01-01 00:00:00')
>>>
>>> str(df["entry_time"][1].time())
'00:00:01'
>>>
>>> df["entry_time"][1].time()
datetime.time(0, 0, 1)
>>>
>>> df["entry_time"][1].year
2018
>>>
>>> for t in times:
... print(t)
...
2018-01-01 00:00:00
2018-01-01 00:00:01
2018-01-01 00:00:02
2018-01-01 00:00:03
2018-01-01 00:00:04
>>>
</code></pre>
<p>Here is the way to list out all the methods/attributes defined on <code>Timestamp</code> objects.</p>
<pre><code>>>> dir(df["entry_time"][0])
['__add__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribut
e__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__pyx
_vtable__', '__radd__', '__reduce__', '__reduce_ex__', '__repr__', '__rsub__', '__setattr__', '__setstate__', '__sizeof__'
, '__str__', '__sub__', '__subclasshook__', '__weakref__', '_date_attributes', '_date_repr', '_get_date_name_field', '_get
_start_end_field', '_has_time_component', '_repr_base', '_round', '_short_repr', '_time_repr', 'asm8', 'astimezone', 'ceil
', 'combine', 'ctime', 'date', 'day', 'day_name', 'dayofweek', 'dayofyear', 'days_in_month', 'daysinmonth', 'dst', 'floor'
, 'fold', 'freq', 'freqstr', 'fromordinal', 'fromtimestamp', 'hour', 'is_leap_year', 'is_month_end', 'is_month_start', 'is
_quarter_end', 'is_quarter_start', 'is_year_end', 'is_year_start', 'isocalendar', 'isoformat', 'isoweekday', 'max', 'micro
second', 'min', 'minute', 'month', 'month_name', 'nanosecond', 'normalize', 'now', 'quarter', 'replace', 'resolution', 'ro
und', 'second', 'strftime', 'strptime', 'time', 'timestamp', 'timetuple', 'timetz', 'to_datetime64', 'to_julian_date', 'to
_period', 'to_pydatetime', 'today', 'toordinal', 'tz', 'tz_convert', 'tz_localize', 'tzinfo', 'tzname', 'utcfromtimestamp'
, 'utcnow', 'utcoffset', 'utctimetuple', 'value', 'week', 'weekday', 'weekday_name', 'weekofyear', 'year']
>>>
</code></pre>
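<p>To actually count trips per hour (which the question is after), you can group by the hour component of the timestamps. A minimal sketch, with the column name <code>entry_time</code> assumed from the example above:</p>

```python
import pandas as pd

df = pd.DataFrame({'entry_time': pd.to_datetime(
    ['2018-01-01 00:15', '2018-01-01 00:45', '2018-01-01 13:05'])})

# .dt.hour extracts the hour (0-23); size() counts rows per hour
trips_per_hour = df.groupby(df['entry_time'].dt.hour).size()
print(trips_per_hour)  # hour 0 -> 2 trips, hour 13 -> 1 trip
```

<p>The resulting Series can be fed directly to a plot call such as <code>trips_per_hour.plot(kind='bar')</code>.</p>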
|
python|python-3.x|pandas
| 2
|
10,035
| 38,107,979
|
From tuples to linear equations with numpy
|
<p>I need help with the following topic. Let's say I have three points, each with x, y coordinates and a corresponding z value, e.g.:</p>
<pre><code>p_0 = (x_0, y_0, z_0) : coordinates of first point
p_1 = (x_1, y_1, z_1) : coordinates of second point
p_2 = (x_2, y_2, z_2) : coordinates of third point
</code></pre>
<p>Later on, I want to find the dip direction and the dip of an interpolated plane. I'm thinking of a linear equation system and matrix as follows:</p>
<p><a href="https://i.stack.imgur.com/VkVBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VkVBu.png" alt="enter image description here"></a></p>
<p>Say I can write this as <code>Ba=z</code>, where <code>B</code> is my matrix with ones, the x and y values, and <code>z</code> the vector with my z values.</p>
<p>Later on, I want to solve this system by: </p>
<pre><code>(a_0, a_1, a_2) = np.linalg.solve(B, z)
</code></pre>
<p>My problem is: how can I extract the matrix with the ones and the x and y values, and the vector with my z values, from my tuples? I am so stuck right now.</p>
|
<p>You could use</p>
<pre><code>p = np.row_stack([p_0, p_1, p_2])
B = np.ones_like(p)
# copy the first two columns of p into the last 2 columns of B
B[:, 1:] = p[:, :2]
z = p[:, 2]
</code></pre>
<hr>
<p>For example,</p>
<pre><code>import numpy as np
p_0 = (1,2,3)
p_1 = (4,-5,6)
p_2 = (7,8,9)
p = np.row_stack([p_0, p_1, p_2])
B = np.ones_like(p)
B[:, 1:] = p[:, :2]
z = p[:, 2]
a = np.linalg.solve(B, z)
print(a)
# [ 2. 1. -0.]
</code></pre>
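<p>If you later have more than three points, the system is overdetermined and <code>np.linalg.solve</code> no longer applies; <code>np.linalg.lstsq</code> fits the plane in the least-squares sense with the same <code>B</code> and <code>z</code>. A sketch on the same three points (where it reproduces the exact solution):</p>

```python
import numpy as np

p = np.array([(1, 2, 3), (4, -5, 6), (7, 8, 9)])
B = np.ones_like(p, dtype=float)
B[:, 1:] = p[:, :2]     # copy x and y into the last two columns
z = p[:, 2].astype(float)

# least-squares solution; lstsq also returns residuals, rank, singular values
a, *_ = np.linalg.lstsq(B, z, rcond=None)
print(np.round(a, 6))
```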
|
python|numpy|matrix|interpolation|linear-algebra
| 1
|
10,036
| 65,934,904
|
groupby and get min then append values of the min row
|
<p>I use groupby and then minimum as the aggregation function. I need some other values from the row with the minimum value. In the following MWE, I need the <code>City</code> value of the row with the minimum distance <code>mindist</code>.</p>
<pre><code>import pandas as pd
data = {'City' : ['London', 'Paris', 'Lyon','NY', 'Bristol'], 'Distance' : [5, 1, 7, 2, 6], 'Country':['UK','FR','FR','US','UK']}
df = pd.DataFrame(data)
print(df)
df['mindist']=df.groupby(['Country'])['Distance'].transform(min)
print(df)
City Distance Country
0 London 5 UK
1 Paris 1 FR
   2     Lyon         7      FR
3 NY 2 US
4 Bristol 6 UK
</code></pre>
<p>I want to append the <code>City</code> value according to aggregated <code>mindist</code> as follows:</p>
<pre><code> City Distance Country mindist City1
0 London 5 UK 5 London
1 Paris 1 FR 1 Paris
2 Lyon 5 FR 1 Paris
3 NY 2 US 2 NY
4 Bristol 6 UK 5 London
</code></pre>
<p>Note that there can be duplicate values in the <code>Distance</code> column.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> by <code>Distance</code> with <code>City</code> <code>Series</code>:</p>
<pre><code>df['City1'] = df['mindist'].map(df.set_index('Distance')['City'])
print(df)
City Distance Country mindist City1
0 London 5 UK 5 London
1 Paris 1 FR 1 Paris
2 Lyon 7 FR 1 Paris
3 NY 2 US 2 NY
4 Bristol 6 UK 5 London
</code></pre>
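<p>Since the question notes that <code>Distance</code> can contain duplicate values, <code>set_index('Distance')</code> becomes ambiguous in that case. A duplicate-safe variant (one possible sketch) looks up the row with the minimum per group via <code>idxmin</code> and merges it back:</p>

```python
import pandas as pd

df = pd.DataFrame({'City': ['London', 'Paris', 'Lyon', 'NY', 'Bristol'],
                   'Distance': [5, 1, 7, 2, 6],
                   'Country': ['UK', 'FR', 'FR', 'US', 'UK']})

# row label of the minimum distance within each country
idx = df.groupby('Country')['Distance'].idxmin()
lookup = df.loc[idx, ['Country', 'City']].rename(columns={'City': 'City1'})
df = df.merge(lookup, on='Country')
print(df)
```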
|
pandas|pivot-table
| 1
|
10,037
| 65,938,535
|
passing an iterator to fit/train/predict functions - is it possible?
|
<p>I wonder if there's a way to pass an iterator into the various sklearn models, for example:
random forest / logistic regression, etc.</p>
<p>I have a TensorFlow dataset and can fetch a numpy iterator from it, but I cannot use it in those functions.</p>
<p>Any solution?</p>
<pre><code>xs = tfds.as_numpy(tf.data.Dataset.from_tensor_slices(xs))
ys = tfds.as_numpy(tf.data.Dataset.from_tensor_slices(ys))
</code></pre>
<p>then fitting the model:</p>
<pre><code>cls.fit(xs, ys)
</code></pre>
<p>causing:</p>
<pre><code>TypeError: float() argument must be a string or a number, not '_IterableDataset'
</code></pre>
|
<p>An example of fitting and testing a model with your data stored in a list is below:</p>
<pre class="lang-py prettyprint-override"><code> # Import some libraries
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# Make some generic data
first_data, first_classes = make_classification(n_samples=100, n_features=5, random_state=1)
second_data, second_classes = make_classification(n_samples=100, n_features=5, random_state=2)
third_data, third_classes = make_classification(n_samples=100, n_features=5, random_state=3)
# Save data and classes into a list
data = [first_data, second_data, third_data]
classes = [first_classes, second_classes, third_classes]
# Declare a logistic regression instance
model = LogisticRegression()
for i in range(len(data)):
# Split data into training and test
X_train, X_test, y_train, y_test = train_test_split(data[i], classes[i], test_size=0.15)
# Fit the model
model.fit(X_train, y_train)
# Print results
print("{} Dataset | Score: {}".format(i+1, model.score(X_test, y_test)))
</code></pre>
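<p>Since scikit-learn estimators expect in-memory array-likes rather than iterators, one workaround is to materialize the iterator into NumPy arrays before calling <code>fit</code>. A sketch, with a plain generator standing in for the <code>tfds.as_numpy(...)</code> iterator:</p>

```python
import numpy as np

# stand-in for a numpy iterator coming out of tfds.as_numpy(...)
xs_iter = (np.array([i, i + 1]) for i in range(4))
ys_iter = (i % 2 for i in range(4))

xs = np.stack(list(xs_iter))   # shape (4, 2): now an ordinary ndarray
ys = np.array(list(ys_iter))   # shape (4,)
print(xs.shape, ys.shape)
# cls.fit(xs, ys) would now receive plain arrays instead of an iterator
```

<p>Note that this loads the whole dataset into memory, so it only works when the data fits in RAM.</p>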
|
scikit-learn|tensorflow-datasets
| 0
|
10,038
| 52,745,193
|
variable colons for indexing in Python
|
<p>Suppose I have a Numpy array <strong>A</strong> that has a certain number of dimensions. For the rest of the question I will consider that <strong>A</strong> is a 4-dimensional array:</p>
<pre><code>>>>A.shape
(2,2,2,2)
</code></pre>
<p>Sometimes, I would like to access the elements </p>
<pre><code>A[:,:,1,:]
</code></pre>
<p>, but also sometimes I would like to access the elements</p>
<pre><code>A[:,1,:,:]
</code></pre>
<p>, and so on (the position of the '1' in the '1'-colon indexing "chain" is a variable).</p>
<p>How can I do that?</p>
|
<p>When you provide a <code>:</code> when indexing, python calls that a <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer"><code>slice</code></a>. When you provide comma-separated slices, it is really just a tuple of slices.</p>
<p>The <code>:</code> is equivalent to <code>slice(None)</code>, so you can get the same effect with the following</p>
<pre><code>>>> my_index = (slice(None), 1, slice(None), slice(None))
>>> (A[my_index] == A[:,1,:,:]).all()
True
</code></pre>
<p>You can build up your indexing programmatically with this knowledge.</p>
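<p>For example, a small helper (the name <code>take_index</code> is ours) can place the integer at any axis programmatically:</p>

```python
import numpy as np

def take_index(A, axis, i):
    # Build a tuple of slices with the integer i at position `axis`
    idx = [slice(None)] * A.ndim
    idx[axis] = i
    return A[tuple(idx)]

A = np.arange(16).reshape(2, 2, 2, 2)
print(np.array_equal(take_index(A, 1, 1), A[:, 1, :, :]))  # True
```

<p>NumPy also provides <code>np.take(A, i, axis=axis)</code>, which does the same for a single integer index.</p>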
|
python|numpy|indexing
| 0
|
10,039
| 52,457,989
|
pandas df.apply unexpectedly changes dataframe inplace
|
<p>From my understanding, pandas.DataFrame.apply does not apply changes inplace and we should use its return object to persist any changes. However, I've found the following inconsistent behavior:</p>
<p>Let's apply a dummy function for the sake of ensuring that the original df remains untouched:</p>
<pre><code>>>> def foo(row: pd.Series):
...     row['b'] = '42'
>>> df = pd.DataFrame([('a0','b0'),('a1','b1')], columns=['a', 'b'])
>>> df.apply(foo, axis=1)
>>> df
a b
0 a0 b0
1 a1 b1
</code></pre>
<p>This behaves as expected. However, foo will apply the changes inplace if we modify the way we initialize this df:</p>
<pre><code>>>> df2 = pd.DataFrame(columns=['a', 'b'])
>>> df2['a'] = ['a0','a1']
>>> df2['b'] = ['b0','b1']
>>> df2.apply(foo, axis=1)
>>> df2
a b
0 a0 42
1 a1 42
</code></pre>
<p>I've also noticed that the above is not true if the columns dtypes are not of type 'object'. Why does apply() behave differently in these two contexts?</p>
<p>Python: 3.6.5</p>
<p>Pandas: 0.23.1</p>
|
<p>Interesting question! I believe the behavior you're seeing is an artifact of the way you use <code>apply</code>.</p>
<p>As you correctly indicate, <code>apply</code> is not intended to be used to modify a dataframe. However, since <code>apply</code> takes an arbitrary function, it doesn't guarantee that applying the function will be idempotent and will not change the dataframe. Here, you've found a great example of that behavior, because your function <code>foo</code> attempts to modify the row that it is passed by <code>apply</code>.</p>
<p>Depending on how the dataframe was constructed, the row <code>Series</code> that <code>apply</code> passes to your function can be a view into the frame's underlying data rather than a copy (which is why your object-dtype case behaves differently), so mutating it mutates the frame. Using <code>apply</code> to modify rows relies on this implementation detail and is best avoided.</p>
<p>Instead, consider this idiomatic approach for <code>apply</code>. The function <code>apply</code> is often used to create a new column. Here's an example of how <code>apply</code> is typically used, which I believe would steer you away from this potentially troublesome area:</p>
<pre><code>import pandas as pd
# construct df2 just like you did
df2 = pd.DataFrame(columns=['a', 'b'])
df2['a'] = ['a0','b0']
df2['b'] = ['a1','b1']
df2['b_copy'] = df2.apply(lambda row: row['b'], axis=1) # apply to each row
df2['b_replace'] = df2.apply(lambda row: '42', axis=1)
df2['b_reverse'] = df2['b'].apply(lambda val: val[::-1]) # apply to each value in b column
print(df2)
# output:
# a b b_copy b_replace b_reverse
# 0 a0 a1 a1 42 1a
# 1 b0 b1 b1 42 1b
</code></pre>
<p>Notice that pandas passed a row or a cell to the function you give as the first argument to <code>apply</code>, then stores the function's output in a column of your choice.</p>
<p>If you'd like to modify a dataframe row-by-row, take a look at <code>iterrows</code> and <code>loc</code> for the most idiomatic route.</p>
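<p>If you do want to change values row-by-row, a sketch of the explicit <code>loc</code>-based route (the condition here is made up for illustration), so the mutation is deliberate rather than an <code>apply</code> side effect:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['a0', 'a1'], 'b': ['b0', 'b1']})
# Write through .loc so the mutation is explicit, not an apply() side effect
for idx, row in df.iterrows():
    if row['a'].startswith('a'):
        df.loc[idx, 'b'] = '42'
print(df['b'].tolist())  # ['42', '42']
```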
|
python|pandas|dataframe|pandas-apply
| 3
|
10,040
| 46,186,352
|
Iterating over dataframe returns only column headers
|
<p>I'm trying to extract the latitude, longitude, magnitude and times from a csv which contains data from Earthquakes, in order to plot them into a map.</p>
<p>My current code for the extraction of the data is:</p>
<pre><code>import pandas as pd
csv_path = 'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.csv'
filename = pd.read_csv(csv_path, names = ['time','latitude','longitude','mag'])
lats, lons = [], []
magnitudes = []
timestrings = []
for row in filename:
    print(row)
    lats.append(row[1])
    lons.append(row[2])
    magnitudes.append(row[2])
    timestrings.append(row[0])

# Printing this to check if the values are correctly imported
# This is, instead, printing the second letter of each word
print('lats', lats[0:5])
print('lons', lons[0:5])
</code></pre>
<p>But my output is:</p>
<pre><code>time
latitude
longitude
mag
lats ['i', 'a', 'o', 'a']
lons ['m', 't', 'n', 'g']
</code></pre>
<p>I'm sorry if this question was answered before, I tried to look it up but I didn't manage to get the answers I found working into my code.</p>
|
<p>You have a pandas dataframe, not a file. Iteration over a dataframe gives you the <em>headers of the series</em>:</p>
<pre><code>>>> import pandas as pd
>>> filename = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.csv', names = ['time','latitude','longitude','mag'])
>>> list(filename)
['time', 'latitude', 'longitude', 'mag']
</code></pre>
<p>Those names are the ones you passed into the <code>read_csv</code> call, but they are not a filter. I'd not use <code>names</code> <em>at all</em> here, and let Pandas figure out what columns there are, then pick from those:</p>
<pre><code>>>> df = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.csv')
>>> df.time
0 2017-09-12T22:13:27.650Z
Name: time, dtype: object
>>> df.latitude
0 58.0241
Name: latitude, dtype: float64
>>> df.longitude
0 -32.3543
Name: longitude, dtype: float64
>>> df.mag
0 4.8
Name: mag, dtype: float64
</code></pre>
<p>I used a more common <code>df</code> name to reflect this is a dataframe.</p>
<p>There is just one row, so you can get your data by converting each series to a list, producing single values:</p>
<pre><code>df = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_hour.csv')
time = df.time.tolist()
lats = df.latitude.tolist()
longs = df.longitude.tolist()
magnitudes = df.mag.tolist()
</code></pre>
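<p>If you do need row-by-row access rather than whole columns, iterate with <code>itertuples</code>, which yields one namedtuple per row (a sketch with a single dummy row standing in for the USGS feed):</p>

```python
import pandas as pd

# Dummy frame standing in for the USGS feed (values copied from the example above)
df = pd.DataFrame({'time': ['2017-09-12T22:13:27.650Z'],
                   'latitude': [58.0241], 'longitude': [-32.3543], 'mag': [4.8]})
# Each r is a namedtuple with one attribute per column
rows = [(r.latitude, r.longitude, r.mag) for r in df.itertuples()]
print(rows)  # [(58.0241, -32.3543, 4.8)]
```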
<p>However, if you wanted to plot data, you could do so simply directly from the dataframe, without manually extracting lists. See <a href="https://pandas.pydata.org/pandas-docs/stable/visualization.html" rel="noreferrer">Pandas Visualisation</a>.</p>
|
python|pandas|matplotlib|dataframe|plot
| 5
|
10,041
| 58,530,060
|
Pandas Transform and add second line between the measurements
|
<p>I am struggling with transforming a pandas dataframe.</p>
<pre><code>df=
0 A -- cm
1 B -- cm2
2 C 69 cm/s
3 D 48 cm/s
4 E 152 ms
5 F 1.05 NaN
6 G 9.15 NaN
7 H -- ms
8 I 8 cm/s
9 J 12 cm/s
</code></pre>
<p>I want to transform it to:</p>
<pre><code>> A A_Unit B B_Unit C C_Unit ...
> -- cm -- cm2 69 cm/s ...
</code></pre>
<p>A to J is a parameter.</p>
<p>Conversion to a dataframe with only numbers works very well with df.T.drop(0), but I actually have no clue how to place each unit label next to its parameter column. </p>
<p>Maybe someone has a good idea and might help me with this topic.</p>
<p>Thanks</p>
<p>Thomas</p>
|
<p>You can stack and transpose since you will always have groupings of two.</p>
<hr>
<pre><code>u = df.set_index(0).stack().to_frame().T
u.columns = [
    x if y == 1 else f'{x}_Unit' for x, y in u.columns]
</code></pre>
<p></p>
<pre><code> A A_Unit B B_Unit C C_Unit D D_Unit E E_Unit F G H H_Unit I I_Unit J J_Unit
0 -- cm -- cm2 69 cm/s 48 cm/s 152 ms 1.05 9.15 -- ms 8 cm/s 12 cm/s
</code></pre>
|
python-3.x|pandas|numpy
| 3
|
10,042
| 58,414,941
|
Length of values mismatch using np.where or how to write values based on condition into the new column
|
<p>Suppose I have a df: </p>
<pre><code>A | B |
aa| 11|
aa| 12|
aa| 13|
ab| 11|
ac| 11|
ab| 12|
ad| 11|
ae| 11|
</code></pre>
<p>I'm trying to create a third column and fill it depending on the following condition:<br>
if an item in A has any B value of 12 or 13, write 'YES' in the C column for all of its rows; else write 'NO'.</p>
<p>So I created an empty column C and got the unique A values, then used a for loop to fill the dataframe column, but I constantly get an error.</p>
<pre><code>df['C'] = ''
uni = df['A'].unique()
for a in uni:
    vals = ['12', '13']
    df['C'] = np.where(df[df['A']==a]['B'].isin(vals), 'YES', 'NO')
</code></pre>
<p>I also tried to use another for loop </p>
<pre><code>for a in uni:
    if ('12' in df[df['A']==a]['B'].values) | ('13' in df[df['A']==a]['B'].values):
        df["C"] = 'YES'
    else:
        df["C"] = 'NO'
</code></pre>
<p>But in this case the whole column is filled only with NO values.<br>
Where did I go wrong?</p>
|
<p>I think you need first test values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> and then in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> test if at least one <code>True</code> per rows:</p>
<pre><code>vals=[12, 13]
df['C'] = np.where(df['B'].isin(vals).groupby(df['A']).transform('any'), 'YES', 'NO')
print (df)
A B C
0 aa 11 YES
1 aa 12 YES
2 aa 13 YES
3 ab 11 YES
4 ac 11 NO
5 ab 12 YES
6 ad 11 NO
7 ae 11 NO
</code></pre>
<p>Or get all values <code>A</code> per condition, get unique values and pass to another <code>isin</code>:</p>
<pre><code>df['C'] = np.where(df['A'].isin(df.loc[df['B'].isin(vals), 'A'].unique()), 'YES', 'NO')
print (df)
A B C
0 aa 11 YES
1 aa 12 YES
2 aa 13 YES
3 ab 11 YES
4 ac 11 NO
5 ab 12 YES
6 ad 11 NO
7 ae 11 NO
</code></pre>
|
python|pandas|numpy
| 2
|
10,043
| 68,942,031
|
Merging dataframes, based on range
|
<p>I have been struggling my whole day with merging two datasets. One dataset shows me a customer ID, paydate and product_code; the other one tells me the special deals the company made with the customer for a specific period.</p>
<ul>
<li>customer = customer</li>
<li>product_code = product_code</li>
<li>date_from <= Paydate <= date_untill</li>
</ul>
<p>I tried the following script (python):</p>
<pre><code>nef_df = pd.merge(df1, df2[['Customer', 'Product_code', 'date_from', 'date_untill']], on=['Customer', 'Product_code'])
</code></pre>
<p><a href="https://i.stack.imgur.com/oqNJZ.png" rel="nofollow noreferrer">example tables in Excell</a></p>
|
<p>According to your example, you'll need to perform an <strong>outer merge</strong>.</p>
<ol>
<li>Import pandas</li>
</ol>
<pre><code>import pandas as pd
</code></pre>
<ol start="2">
<li>Create raw data (as an example)</li>
</ol>
<pre><code>customer_1 = ['A1', 'A1', 'A2', 'A2', 'A2', 'A2', 'A3', 'A3']
paydate = ['1-6-2020', '26-11-2020', '7-1-2020', '5-12-2020', '1-3-2020', '16-7-2020', '10-1-2020', '31-12-2020']
product_code = [9100, 9100, 9100, 9100, 9200, 9200, 9400, 9400]
df1 = pd.DataFrame(
{
'customer':customer_1,
'paydate':paydate,
'product_code':product_code
}
)
customer_2 = ['A1', 'A2', 'A2', 'A2', 'A2', 'A3', 'A3', 'A4']
product_code = [9100, 9100, 9100, 9200, 9200, 9400, 9400, 9300]
price = [27, 20, 23, 23, 22, 20, 23, 44]
df2 = pd.DataFrame(
{
'customer':customer_2,
'product_code':product_code,
'price':price
}
)
</code></pre>
<ol start="3">
<li>Perform an outer merge</li>
</ol>
<pre><code>pd.merge(df1, df2, how='outer', on=['customer', 'product_code'])
</code></pre>
<p>Result table:</p>
<p><a href="https://i.stack.imgur.com/dWQfG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dWQfG.png" alt="Result table" /></a></p>
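<p>To also enforce the <code>date_from &lt;= paydate &lt;= date_untill</code> condition from the question, one sketch (column names taken from the question, data made up) is to merge on the keys and then filter on the dates:</p>

```python
import pandas as pd

left = pd.DataFrame({'customer': ['A1', 'A1'],
                     'product_code': [9100, 9100],
                     'paydate': pd.to_datetime(['2020-06-01', '2021-06-01'])})
right = pd.DataFrame({'customer': ['A1'],
                      'product_code': [9100],
                      'price': [27],
                      'date_from': pd.to_datetime(['2020-01-01']),
                      'date_untill': pd.to_datetime(['2020-12-31'])})
# Merge on the keys, then keep only rows whose paydate falls inside the deal window
merged = left.merge(right, on=['customer', 'product_code'])
in_window = merged[(merged['date_from'] <= merged['paydate']) &
                   (merged['paydate'] <= merged['date_untill'])]
print(len(in_window))  # 1
```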
|
python|pandas|merge
| -1
|
10,044
| 68,880,288
|
CNN feature Extraction
|
<pre><code>class ResNet(nn.Module):
    def __init__(self, output_features, fine_tuning=False):
        super(ResNet, self).__init__()
        self.resnet152 = tv.models.resnet152(pretrained=True)
        # freezing the feature extraction layers
        for param in self.resnet152.parameters():
            param.requires_grad = fine_tuning

        #self.features = self.resnet152.features
        self.num_fts = 512
        self.output_features = output_features

        # Linear layer goes from 512 to 1024
        self.classifier = nn.Linear(self.num_fts, self.output_features)
        nn.init.xavier_uniform_(self.classifier.weight)
        self.tanh = nn.Tanh()

    def forward(self, x):
        h = self.resnet152(x)
        print('h: ', h.shape)
        return h


image_model_resnet152 = ResNet(output_features=10).to(device)
image_model_resnet152
</code></pre>
<p>Here, after printing the <code>image_model_resnet152</code>, I get:</p>
<p><a href="https://i.stack.imgur.com/PcwXg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcwXg.png" alt="enter image description here" /></a></p>
<p>Here, what is the difference between <code>(avgpool): Linear(in_features=2048) </code>
and <code>(classifier): Linear(in_features=512)</code>?</p>
<p>I am implementing an image captioning model, so which <code>in_features</code> should I take for an image?</p>
|
<p>ResNet is not as straightforward as VGG: it's not a sequential model, <em>i.e.</em> there is some model-specific logic inside the <code>forward</code> definition of the <code>torchvision.models.resnet152</code>, for instance, the flattening of features between the CNN and classifier. You can take a look at <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L144" rel="nofollow noreferrer">its source code</a>.</p>
<p>The easiest thing to do in this case is to add a hook on the last layer of the CNN: <code>layer4</code>, and log the result of that layer in an external <em>dict</em>. This is done with <a href="https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html" rel="nofollow noreferrer"><code>register_forward_hook</code></a>.</p>
<p>Define the hook:</p>
<pre><code>out = {}
def result(module, input, output):
    out['layer4'] = output
</code></pre>
<p>Attach the hook on the submodule <code>resnet.layer4</code>:</p>
<pre><code>>>> x = torch.rand(1,3,224,224)
>>> resnet = torchvision.models.resnet152()
>>> resnet.layer4.register_forward_hook(result)
</code></pre>
<p>After inference you will have access to the result inside of <code>out</code>:</p>
<pre><code>>>> resnet(x)
>>> out['layer4'].shape
torch.Size([1, 2048, 7, 7])
</code></pre>
<p>You can look at <a href="https://stackoverflow.com/a/65985476/6331369">another answer</a> of mine on a more in-depth usage of forward hooks.</p>
<hr />
<p>A possible implementation would be:</p>
<pre><code>class NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet152()
        self.out = {}
        self.resnet.layer4.register_forward_hook(self.result)

    def result(self, module, input, output):
        self.out['layer4'] = output

    def forward(self, x):
        self.resnet(x)
        return self.out['layer4']
</code></pre>
<p>You can then define additional layers for your custom classifier and call them inside <code>forward</code>.</p>
|
python|machine-learning|deep-learning|pytorch|conv-neural-network
| 1
|
10,045
| 69,238,906
|
append to/insert a row with an index value into an indexed dataframe without losing the index name?
|
<p>Given the dataframe:</p>
<pre><code>df = pd.DataFrame([{'myindex':1,'a':2,'b':3},{'myindex':2,'a':22,'b':33}]).set_index('myindex')
</code></pre>
<p>and a new row:</p>
<pre><code>new_row = {'myindex':11,'a':20,'b':30}
</code></pre>
<p>Is the most parsimonious way of adding <code>new_row</code> to the dataframe to <code>reset</code> the index, <code>append</code> without <code>myindex</code> as the index, and then reindex with a <code>set_index</code> to <code>myindex</code>?</p>
<pre><code>df = df.reset_index().append(new_row,ignore_index=True).set_index('myindex')
</code></pre>
<p>I tried the <code>pandas</code> method <code>concat</code> but it wipes out the <code>index</code> <code>name</code> while adding a new column named <code>myindex</code> consisting of NaN in all but the new_row. I tried doing a:</p>
<pre><code>new_row_myindex = new_row['myindex']
del new_row['myindex']
df = pd.concat([df, pd.DataFrame(new_row,index=[new_row_myindex])])
</code></pre>
<p>But that drops the index's name <code>myindex</code>.</p>
<p>I tried the <code>DataFrame</code> method <code>insert</code> but, unique among similar methods, it has no <code>axis</code> parameter and is therefore limited to inserting columns (quite curious when you think about it).</p>
|
<p>You can try something like this:</p>
<pre><code>df = df.append((pd.Series({'myindex':11,'a':20,'b':30}, name=new_row['myindex'])[1:]))
# Output
a b
myindex
1 2 3
2 22 33
11 20 30
</code></pre>
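<p>Another option that preserves the index name is assigning through <code>loc</code> with a new label (the list is matched to columns positionally here):</p>

```python
import pandas as pd

df = pd.DataFrame([{'myindex': 1, 'a': 2, 'b': 3},
                   {'myindex': 2, 'a': 22, 'b': 33}]).set_index('myindex')
# Assigning to a new label via .loc appends a row and keeps the index name
df.loc[11] = [20, 30]
print(df.index.name, df.loc[11, 'a'])  # myindex 20
```

<p>Unlike <code>append</code>/<code>concat</code>, this mutates the dataframe in place rather than returning a new one.</p>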
|
pandas|indexing
| 0
|
10,046
| 44,407,873
|
Tensorflow: traning by batch stuck forever in sess.run
|
<p>I'm trying to train my model batch by batch, as I couldn't find any example of how to do it properly. This is as far as I have got on my mission to find out how to train a model batch by batch in Tensorflow.</p>
<pre><code>queue=tf.FIFOQueue(capacity=50,dtypes=[tf.float32,tf.float32],shapes=[[10],[2]])
enqueue_op=queue.enqueue_many([X,Y])
dequeue_op=queue.dequeue()
qr=tf.train.QueueRunner(queue,[enqueue_op]*2)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X_train_batch,y_train_batch=tf.train.batch(dequeue_op,batch_size=2)
    coord=tf.train.Coordinator()
    enqueue_threads=qr.create_threads(sess,coord,start=True)
    sess.run(tf.local_variables_initializer())
    for epoch in range(100):
        print("inside loop1")
        for iter in range(5):
            print("inside loop2")
            if coord.should_stop():
                break
            batch_x,batch_y=sess.run([X_train_batch,y_train_batch])
            print("after sess.run")
            print(batch_x.shape)
            _=sess.run(optimizer,feed_dict={x_place:batch_x,y_place:batch_y})
    coord.request_stop()
    coord.join(enqueue_threads)
</code></pre>
<p>Which outputs,</p>
<pre><code>inside loop1
inside loop2
</code></pre>
<p>As you can see, it gets stuck forever when it runs the <code>batch_x,batch_y=sess.run([X_train_batch,y_train_batch])</code> line.
I don't know how to solve this, or whether this is even the proper way to train a model batch by batch.</p>
|
<p>After a couple of hours of searching, I found the solution myself, so I'm answering my own question below.
The queues are filled by background threads, which are created when you call <code>tf.train.start_queue_runners()</code>. If you don't call this method, the background threads will not start, the queues will remain empty, and the training op will block indefinitely waiting for input.</p>
<p><strong>FIX:</strong>
call <code>tf.train.start_queue_runners(sess)</code> just before the training loop.
Like I did below:</p>
<pre><code>queue=tf.FIFOQueue(capacity=50,dtypes=[tf.float32,tf.float32],shapes=[[10],[2]])
enqueue_op=queue.enqueue_many([X,Y])
dequeue_op=queue.dequeue()
qr=tf.train.QueueRunner(queue,[enqueue_op]*2)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X_train_batch,y_train_batch=tf.train.batch(dequeue_op,batch_size=2)
    coord=tf.train.Coordinator()
    enqueue_threads=qr.create_threads(sess,coord,start=True)
    tf.train.start_queue_runners(sess)
    for epoch in range(100):
        print("inside loop1")
        for iter in range(5):
            print("inside loop2")
            if coord.should_stop():
                break
            batch_x,batch_y=sess.run([X_train_batch,y_train_batch])
            print("after sess.run")
            print(batch_x.shape)
            _=sess.run(optimizer,feed_dict={x_place:batch_x,y_place:batch_y})
    coord.request_stop()
    coord.join(enqueue_threads)
</code></pre>
|
python|python-3.x|tensorflow
| 5
|
10,047
| 61,160,328
|
Is it possible to train a model Tensorflow Object Detection API with Tensorflow 2.1?
|
<p>When training the model Tensorflow Object Detection API on Tensorflow-gpu 2.1, there is an error: </p>
<p><strong>No module named 'tensorflow.contrib'</strong> </p>
<p>Is it possible to train a model Tensorflow Object Detection API with Tensorflow 2.1?<br>
I don't want to change the version of Tensorflow.<br>
Can someone help me?</p>
|
<p>The Tensorflow Object Detection API currently works only with Tensorflow 1.x (>=1.12.0); Tensorflow 2 support is in the works.</p>
<p>See this Github thread: <a href="https://github.com/tensorflow/models/issues/6423" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/6423</a>. </p>
|
tensorflow|object-detection|object-detection-api|tensorflow2.x
| 1
|
10,048
| 71,530,521
|
pandas rolling window aggregating string column
|
<p>I am struggling with a string aggregation operation over a rolling window in pandas.</p>
<p>I am given the current df, where <em>t_dat</em> is the purchase date, and <em>customer_id</em> and <em>article_id</em> are self-explanatory.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>t_dat</th>
<th>customer_id</th>
<th>article_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-04-24</td>
<td>486230</td>
<td>781570001</td>
</tr>
<tr>
<td>2020-04-24</td>
<td>486230</td>
<td>598755030</td>
</tr>
<tr>
<td>2020-04-27</td>
<td>486230</td>
<td>836997001</td>
</tr>
<tr>
<td>2020-05-02</td>
<td>486230</td>
<td>687707005</td>
</tr>
<tr>
<td>2020-06-03</td>
<td>486230</td>
<td>741356002</td>
</tr>
</tbody>
</table>
</div>
<p>and I'd like to <strong>group by customer_id</strong> and concatenate article ids over a weekly rolling window (e.g. the <em>article_ids</em> column in the table below). pandas doesn't seem to support rolling-window aggregation for string columns, so I tried resample, but it doesn't accomplish what I expect (see the table below for my expected result).</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>t_dat</th>
<th>customer_id</th>
<th>article_id</th>
<th>article_ids</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020-04-24</td>
<td>486230</td>
<td>781570001</td>
<td>598755030 836997001</td>
</tr>
<tr>
<td>2020-04-24</td>
<td>486230</td>
<td>598755030</td>
<td>781570001 836997001</td>
</tr>
<tr>
<td>2020-04-27</td>
<td>486230</td>
<td>836997001</td>
<td>836997001 687707005</td>
</tr>
<tr>
<td>2020-05-02</td>
<td>486230</td>
<td>687707005</td>
<td>687707005</td>
</tr>
<tr>
<td>2020-06-03</td>
<td>486230</td>
<td>741356002</td>
<td>741356002</td>
</tr>
</tbody>
</table>
</div>
<p>My goal is to actually understand if there are purchase pattern among different article_ids (i.e. are some articles bought shortly after any client has purchased another article?)</p>
<p>To make it more explicit, I am trying to structure the problem in two steps:</p>
<ol>
<li>What are the articles that a customer has purchased within 7 days from any other article purchase? I want to repeat this exercise for each customer and each purchased product</li>
<li>Once this is done, I want to identify those articles that are purchased more frequently in combination (within one week) from other products, so I can build a basic rec system.</li>
</ol>
<p>Here I am looking for a solution to number 1.</p>
<p>I have tried both</p>
<pre><code>df.groupby('customer_id').rolling('7D', on = 't_dat', min_periods = 1)['article_id'].agg(' '.join).reset_index()
</code></pre>
<p>or</p>
<pre><code>df.groupby('customer_id').rolling('7D', on = 't_dat', min_periods = 1)['article_id'].apply(lambda x: ' '.join(x.astype(str))).reset_index()
</code></pre>
<p>and, using resample,</p>
<pre><code>df.groupby('customer_id').resample('7D', on = 't_dat')['article_id'].agg(' '.join).reset_index()
</code></pre>
<p>without success. First one because of error <strong>TypeError: sequence item 0: expected str instance, float found</strong> and, when I cast string type to <em>article_id</em>, it returns <strong>TypeError: must be real number, not str</strong>;
second attempt because it doesn't return what I need with the proper offset (it takes week intervals starting from first occurrence in the dataset and then keep on setting the weekly intervals without rolling offset)</p>
<p>I have coded an alternative but it looks extremely slow and I would leverage on pandas vectorized operations to speed it up:</p>
<pre><code># for each article_id in every purchase, I want to check which other articles were bought within the following week
articles_list = df.groupby(['customer_id', 't_dat'])['article_id'].apply(list).reset_index()

def get_recommendations():
    dict_recs = {}
    for n, row in df.iterrows():
        customer = row['customer_id']
        date_purchase = row['t_dat']
        articles_purchase = row['article_id']
        df_clean = df[(df['customer_id'] == customer) & (df['t_dat'] <= date_purchase + timedelta(days=7)) & (df['t_dat'] >= date_purchase)]
        articles_to_recommend = df_clean['article_id']
        print("Iterating over {} row".format(n))
        # print("Articles in scope are {} \n".format(articles_to_recommend))
        for article in articles_purchase:
            articles_list_to_iter = [i[j] for i in articles_to_recommend for j in range(len(i)) if i[j] != article]
            # print("Articles preprocessed are {} \n".format(articles_list_to_iter))
            if article not in dict_recs:
                dict_recs[article] = articles_list_to_iter
            else:
                dict_recs[article].extend(articles_list_to_iter)
    recs_list = {k: Counter(v).most_common(12) for k, v in dict_recs.items()}
    return recs_list
</code></pre>
<p>Can you suggest any alternative I can use to accomplish what I am looking for?</p>
|
<p>I was able to aggregate by the day: create a second dataframe and accumulate all of each customer's articles per day. Use <code>pd.Grouper</code> to bucket into 7-day windows (note that these are fixed, non-overlapping bins starting from the first date, not a true rolling window).</p>
<pre><code>data="""
t_dat customer_id article_id
2020-04-24 486230 781570001
2020-04-24 486230 598755030
2020-04-27 486230 836997001
2020-05-02 486230 687707005
2020-06-03 486230 741356002
"""
df = pd.read_csv(StringIO(data), sep='\t')
df['t_dat'] = pd.to_datetime(df['t_dat'])
df = df.sort_values(by=['t_dat'])
#grouped = df.groupby(['t_dat', 'customer_id']).agg({'article_id': lambda x: list(x)})
#grouped=grouped.reset_index()
#df=pd.DataFrame(grouped)
df = df.set_index('t_dat')
print(df)
df = df.groupby(['customer_id', pd.Grouper(level='t_dat', freq='7D')])['article_id'].apply(list).reset_index()
print(df)
</code></pre>
<p>output:</p>
<pre><code>customer_id t_dat article_id
0 486230 2020-04-24 [781570001, 598755030, 836997001]
1 486230 2020-05-01 [687707005]
2 486230 2020-05-29 [741356002]
</code></pre>
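<p>For step 1 specifically, a vectorized alternative is a self-join per customer followed by a date filter. This is only a sketch (data copied from the question's table): it materializes all purchase pairs per customer, so memory grows quadratically with purchases per customer.</p>

```python
import pandas as pd

df = pd.DataFrame({
    't_dat': pd.to_datetime(['2020-04-24', '2020-04-24', '2020-04-27',
                             '2020-05-02', '2020-06-03']),
    'customer_id': [486230] * 5,
    'article_id': [781570001, 598755030, 836997001, 687707005, 741356002],
})
# Self-join on customer, then keep pairs bought within 7 days after each purchase
pairs = df.merge(df, on='customer_id', suffixes=('', '_other'))
pairs = pairs[(pairs['article_id'] != pairs['article_id_other']) &
              (pairs['t_dat_other'] >= pairs['t_dat']) &
              (pairs['t_dat_other'] <= pairs['t_dat'] + pd.Timedelta(days=7))]
print(len(pairs))  # 5
```

<p>From <code>pairs</code> you can then count co-occurrences per <code>article_id</code> for step 2.</p>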
|
python|pandas|pandas-groupby|rolling-computation
| 0
|
10,049
| 71,526,523
|
handling million of rows for lookup operation using python
|
<p>I am new to data handling. I need to create a Python program to search for each record from samplefile1 in samplefile2. I am able to achieve it, but since each of the 200 rows in samplefile1 is looped over all 200 rows in samplefile2, the total execution time is 180 seconds.</p>
<p>I am looking for something to be more time efficient so that i can do this task in minimum time .</p>
<p>My actual Dataset size is : 9million -> samplefile1 and 9million --> samplefile2.</p>
<p>Here is my code using Pandas.</p>
<p>samplefile1 rows:</p>
<pre class="lang-none prettyprint-override"><code>number='7777777777' subscriber-id="7777777777" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777777@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
number='7777777778' subscriber-id="7777777778" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777778@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
number='7777777779' subscriber-id="7777777779" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777779@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
.........100 rows
</code></pre>
<p>samplefile2 rows</p>
<pre class="lang-none prettyprint-override"><code>number='7777777777' subscriber-id="7777777777" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777777@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
number='7777777778' subscriber-id="7777777778" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777778@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
number='7777777769' subscriber-id="7777777779" rrid=0 NAPTR {order=10 preference=50 flags="U"service="sip+e2u"regexp="!^(.*)$!sip:+7777777779@ims.mnc001.mcc470.3gppnetwork.org;user=phone!"replacement=[]};
........100 rows
</code></pre>
<pre><code>import time
import pandas as pd

def timeit(func):
    """
    Decorator for measuring function's running time.
    """
    def measure_time(*args, **kw):
        start_time = time.time()
        result = func(*args, **kw)
        print("Processing time of %s(): %.2f seconds."
              % (func.__qualname__, time.time() - start_time))
        return result
    return measure_time

@timeit
def func():
    df = pd.read_csv("sample_2.txt", names=["A1"], skiprows=0, sep=';')
    df.drop(df.filter(regex="Unname"), axis=1, inplace=True)
    finaldatafile1 = df.fillna("TrackRow")

    df1 = pd.read_csv("sample_1.txt", names=["A1"], skiprows=0, sep=';')
    df1.drop(df1.filter(regex="Unname"), axis=1, inplace=True)
    finaldatafile2 = df1.fillna("TrackRow")

    indexdf = df.index
    indexdf1 = df1.index
    ##### for loop for string to be matched (small dataset) #####
    for i in range(0, len(indexdf) - 1):
        lookup_value = finaldatafile1.iloc[[i]].to_string()
        # print(lookup_value)
        ##### for loop for lookup dataset (large dataset) #####
        for j in range(0, len(indexdf1) - 1):
            match_value = finaldatafile2.iloc[[j]].to_string()
            if lookup_value == match_value:
                print(f"It's a match on lookup table position {j} and for string {lookup_value}")
            else:
                print("no match found in complete dataset")

if __name__ == "__main__":
    func()
</code></pre>
|
<p>I don't think using Pandas is helping here as you are just comparing whole lines. An alternative approach would be to load the first file as a set of lines. Then enumerate over the lines in the second file testing if it is in the set. This will be much faster:</p>
<pre><code>@timeit
def func():
    with open('sample_1.txt') as f_sample1:
        data1 = set(f_sample1.read().splitlines())
    with open('sample_2.txt') as f_sample2:
        data2 = f_sample2.read().splitlines()

    for index, entry in enumerate(data2):
        if entry in data1:
            print(f"It's a match on lookup table position {index} and for string\n{entry}")
        else:
            print("no match found in complete dataset")
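<p>The speed-up comes from set membership being O(1) on average, versus O(n) for scanning a list. A quick sketch to see the difference (sizes are arbitrary):</p>

```python
import timeit

# Worst-case lookup: the target is at the end of the list
data_list = [str(i) for i in range(100_000)]
data_set = set(data_list)
t_list = timeit.timeit(lambda: '99999' in data_list, number=100)
t_set = timeit.timeit(lambda: '99999' in data_set, number=100)
print(t_set < t_list)  # True
```

<p>At 9 million rows per file, this difference is what turns hours into seconds.</p>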
|
python|pandas|csv|bigdata|dask
| 1
|
10,050
| 69,755,679
|
Creating numpy array from calculations across arrays
|
<p>I currently have the task of creating a 4x4 array with operations performed on the cells</p>
<p>Below you will see a function that takes in <code>array</code> into function <code>the_matrix</code> which returns <code>adj_array</code></p>
<p>It then has a for loop that is supposed to loop through <code>array</code>, looking at the cell in <code>ref_array</code> and, upon finding the matching first two numbers in <code>array</code> (like "6,3"), it will put that function <code>lambda N: 30</code> into its respective cell in <code>adj_array</code>, as it will do for all cells in the 4x4 matrix</p>
<p>Essentially the function should return an array like this</p>
<pre><code>array([[inf, <function <lambda> at 0x00000291139AF790>,
<function <lambda> at 0x00000291139AF820>, inf],
[inf, inf, inf, <function <lambda> at 0x00000291139AF8B0>],
[inf, inf, inf, <function <lambda> at 0x00000291139AF940>],
[inf, inf, inf, inf]], dtype=object)
</code></pre>
<p>My work so far below</p>
<pre><code>def the_matrix(array):
    ref_array = np.zeros((4,4), dtype=object)
    ref_array[0,0] = (5,0)
    ref_array[0,1] = (5,1)
    ref_array[0,2] = (5,2)
    ref_array[0,3] = (5,3)
    ref_array[1,0] = (6,0)
    ref_array[1,1] = (6,1)
    ref_array[1,2] = (6,2)
    ref_array[1,3] = (6,3)
    ref_array[2,0] = (7,0)
    ref_array[2,1] = (7,1)
    ref_array[2,2] = (7,2)
    ref_array[2,3] = (7,3)
    ref_array[3,0] = (8,0)
    ref_array[3,1] = (8,1)
    ref_array[3,2] = (8,2)
    ref_array[3,3] = (8,3)
    for i in ref_array:
        for a in i:  # Expecting to get (5,1) here, but it's showing me the array
            if a == array[0, 0:2]:  # This specific slice was a test
                # put the function in that cell for adj_array
                ...
    return adj_array


array = np.array([[5, 1, lambda N: 120],
                  [5, 2, lambda N: 30],
                  [6, 3, lambda N: 30],
                  [7, 3, lambda N: N/30]])
</code></pre>
<p>Have tried variations of this for loop, and it's throwing errors. For one, the <code>a</code> in the for loop is displaying the input argument <code>array</code>, which is weird because it hasn't been called in the loop at that stage. My intention here is to refer to the exact cell in ref_array.</p>
<p>Not sure where I'm going wrong here and how I'm improperly looping through. Any help appreciated</p>
|
<p>Your <code>ref_array</code> is object dtype, (4,4) containing tuples:</p>
<pre><code>In [26]: ref_array
Out[26]:
array([[(5, 0), (5, 1), (5, 2), (5, 3)],
[(6, 0), (6, 1), (6, 2), (6, 3)],
[(7, 0), (7, 1), (7, 2), (7, 3)],
[(8, 0), (8, 1), (8, 2), (8, 3)]], dtype=object)
</code></pre>
<p>Your iteration, just showing the iteration variables (I'm using <code>repr</code> so the types are clear):</p>
<pre><code>In [28]: for i in ref_array:
...: print(repr(i))
...: for a in i:
...: print(repr(a))
...:
array([(5, 0), (5, 1), (5, 2), (5, 3)], dtype=object)
(5, 0)
(5, 1)
(5, 2)
(5, 3)
...
</code></pre>
<p>So <code>i</code> is a "row" of the array, itself a 1d object dtype array.</p>
<p><code>a</code> is one of those objects, a tuple.</p>
<p>Your description of the alternatives is vague. But assume one tries to start with a numeric dtype array:</p>
<pre><code>In [30]: arr = np.array(ref_array.tolist())
In [31]: arr
Out[31]:
array([[[5, 0],
[5, 1],
[5, 2],
[5, 3]],
...
[8, 2],
[8, 3]]])
In [32]: arr.shape
Out[32]: (4, 4, 2)
</code></pre>
<p>now the looping:</p>
<pre><code>In [33]: for i in arr:
...: print(repr(i))
...: for a in i:
...: print(repr(a))
...:
array([[5, 0], # i is a (4,2) array
[5, 1],
[5, 2],
[5, 3]])
array([5, 0]) # a is (2,) array....
array([5, 1])
array([5, 2])
array([5, 3])
</code></pre>
<p>If "the a in the for loop is displaying the input argument array", it's most likely because <code>a</code> IS an array.</p>
<p>Keep in mind that object dtype arrays are processed at list speeds. You might as well think of them as bastardized lists. While they have some array enhancements (multidimensional indexing etc), the elements are still references, and are processed as in lists.</p>
<p>I haven't paid attention as to why you are putting <code>lambdas</code> in the array. It looks ugly, and I don't see what it gains you. They can't be "evaluated" at array speeds. You'd have to do some sort of iteration or list comprehension.</p>
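<p>As a sketch of that iteration (the shapes and lambdas below are toy stand-ins for the question's <code>adj_array</code>, not the asker's exact data), a list comprehension can evaluate each callable cell while leaving the <code>inf</code> entries alone:</p>

```python
import numpy as np

# Toy stand-in for the question's adj_array: lambdas or np.inf, object dtype.
adj_array = np.full((2, 2), np.inf, dtype=object)
adj_array[0, 1] = lambda N: 30
adj_array[1, 0] = lambda N: N / 30

# Evaluate every callable cell at N=60; non-callable cells pass through.
evaluated = np.array([[f(60) if callable(f) else f for f in row]
                      for row in adj_array])
print(evaluated)
# [[inf 30.]
#  [ 2. inf]]
```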
<h2>edit</h2>
<p>A more direct way of generating the <code>arr</code>, derived from <code>ref_array</code>:</p>
<pre><code>In [39]: I,J = np.meshgrid(np.arange(5,9), np.arange(0,4), indexing='ij')
In [40]: I
Out[40]:
array([[5, 5, 5, 5],
[6, 6, 6, 6],
[7, 7, 7, 7],
[8, 8, 8, 8]])
In [41]: J
Out[41]:
array([[0, 1, 2, 3],
[0, 1, 2, 3],
[0, 1, 2, 3],
[0, 1, 2, 3]])
In [42]: arr = np.stack((I,J), axis=2) # shape (4,4,2)
</code></pre>
<p>If the function was something like</p>
<pre><code>In [46]: def foo(I,J):
...: return I*10 + J
...:
</code></pre>
<p>You could easily generate a value for each pair of the values in <code>ref_array</code>.</p>
<pre><code>In [47]: foo(I,J)
Out[47]:
array([[50, 51, 52, 53],
[60, 61, 62, 63],
[70, 71, 72, 73],
[80, 81, 82, 83]])
</code></pre>
|
python|arrays|numpy|matrix|nodes
| 1
|
10,051
| 43,285,133
|
How to write two variables in one line?
|
<p>I would like to write two variable in a file. I mean this is my code :</p>
<pre><code>file.write("a = %g\n" %(params[0]))
file.write("b = %g\n" %(params[1]))
</code></pre>
<p>and what I want to write in my file is :</p>
<pre><code>f(x) = ax + b
</code></pre>
<p>where <code>a</code> is <code>params[0]</code> and <code>b</code> is <code>params[1]</code> but I don't know how to do this ? </p>
<p>Thank you for your help !</p>
|
<p>If all you want to write to your file is <code>f(x) = ax + b</code> where <code>a</code> and <code>b</code> are <code>params[0]</code> and <code>params[1]</code>, respectively, just do this:</p>
<pre><code>file.write('f(x) = %gx + %g\n' % (params[0], params[1]))
</code></pre>
<p><code>'f(x) = %gx + %g' % (params[0], params[1])</code> is simply string formatting, where you're putting <code>a</code> and <code>b</code> in their correct spaces.</p>
<p>Edit: If you're using Python 3.6, you can use f-strings:</p>
<pre><code>a, b = params[0], params[1]
file.write(f'f(x) = {a}x + {b}\n')
</code></pre>
|
python|python-2.7|python-3.x|numpy
| 1
|
10,052
| 43,406,111
|
Is GPU efficient on parameter server for data parallel training?
|
<p>On <a href="http://download.tensorflow.org/paper/whitepaper2015.pdf" rel="nofollow noreferrer">data parallel training</a>, I guess the GPU instance is not necessarily efficient for parameter servers because parameter servers only keep the values and don't run any computation such as matrix multiplication.</p>
<p>Therefore, I think the example config for <a href="https://cloud.google.com/ml-engine/docs/how-tos/using-gpus" rel="nofollow noreferrer">Cloud ML Engine</a> (using CPU for parameter servers and GPU for others) below has good cost performance:</p>
<pre><code>trainingInput:
scaleTier: CUSTOM
masterType: standard_gpu
workerType: standard_gpu
parameterServerType: standard_cpu
workerCount: 3
parameterServerCount: 4
</code></pre>
<p>Is that right?</p>
|
<p>Your assumption is a reasonable rule of thumb. That said, Parag points to a paper that describes a model that can leverage GPUs in the parameter server, so it's not always the case that parameter servers are not able to leverage GPUs.</p>
<p>In general, you may want to try both for a short time and see if throughput improves.</p>
<p>If you have any question as to what ops are actually being assigned to your parameter server, you can <a href="https://www.tensorflow.org/tutorials/using_gpu#logging_device_placement" rel="nofollow noreferrer">log the device placement</a>. If it looks like ops are on the parameter server that can benefit from the GPU (and supposing they really should be there), then you can go ahead and try a GPU in the parameter server.</p>
|
machine-learning|tensorflow|google-cloud-ml|google-cloud-ml-engine
| 0
|
10,053
| 43,204,940
|
Using multiprocessing to create two arrays in python simultaneously
|
<p>I've written a python script that uses numpy to create two arrays from different sources of data and compares them to each other. Building the arrays is quite a slow process so I wanted to find a way of building them at the same time to speed the script up. I tried to do this using the multiprocessing module like this: </p>
<pre><code>if __name__=='__main__':
p1 = Process(target=network_matrix_main_function(input_network))
p1.start()
p2 = Process(target=tree_matrix_main_function(input_tree))
p2.start()
print p1
print p2
</code></pre>
<p>But I'm getting the following error message:</p>
<pre><code>Process Process-1:
Traceback (most recent call last):
File "/Users/bj5/homebrew/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/Users/bj5/homebrew/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 113, in run
if self._target:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
<Process(Process-1, stopped[1])>
<Process(Process-2, started)>
Process Process-2:
Traceback (most recent call last):
File "/Users/bj5/homebrew/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/Users/bj5/homebrew/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 113, in run
if self._target:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>It sounds to me like python is struggling to handle the fact that the output from both the parallel processes is an array and not a single value. I've tried googling a.any() and a.all() (which it suggests to use in the final line of the error message) for use with multiprocessing and arrays but haven't been able to find anything that tells me how to use them. </p>
<p>Can anyone explain to me why my script doesn't work and what I need to do to run the two functions: network_matrix_main_function, and tree_matrix_main_function in parallel? </p>
|
<p>This is easy to fix: with multiprocessing you have to separate the target function from its arguments:</p>
<pre><code>p1 = Process(target=some_function, args=(some_argument, other_argument))
</code></pre>
<p>Also, printing a Process object only shows its state; if you want output, print from inside the target function instead.</p>
<p>And lastly, wait for the processes to finish with the <code>.join()</code> method.</p>
|
python|arrays|numpy|multiprocessing
| 0
|
10,054
| 72,282,582
|
Trying to make an image classification model using AutoML Vision run on a website
|
<p>I've created a classification model using AutoML Vision and tried to use <a href="https://cloud.google.com/vision/automl/docs/tensorflow-js-tutorial?_ga=2.245966400.-55487651.1648514707&_gac=1.61854174.1648905034.Cj0KCQjw6J-SBhCrARIsAH0yMZhG7bXhnNscFr09VIp9bK3x70O9dGFC2U-ZIHn3ZMtaW7FlOIQJ8E8aAk0_EALw_wcB" rel="nofollow noreferrer">this tutorial</a> to make a small web app to make the model classify an image using the browser.
The code I'm using is basically the same as the tutorial with some slight changes:</p>
<pre><code><script src="https://unpkg.com/@tensorflow/tfjs"></script>
<script src="https://unpkg.com/@tensorflow/tfjs-automl"></script>
<img id="test" crossorigin="anonymous" src="101_SI_24_23-01-2019_iOS_1187.JPG">
<script>
async function run() {
const model = await tf.automl.loadImageClassification('model.json');
const image = document.getElementById('test');
const predictions = await model.classify(image);
console.log(predictions);
// Show the resulting object on the page.
const pre = document.createElement('pre');
pre.textContent = JSON.stringify(predictions, null, 2);
document.body.append(pre);
}
run();
</script>
</code></pre>
<p>This index.html file above is located in the same folder of the model files and the image file. The problem is that when I try to run the file I'm receiving this error:
<a href="https://i.stack.imgur.com/TN4zj.png" rel="nofollow noreferrer">error received</a></p>
<p>I have no idea what I should do to fix this error. I've tried many things without success, I've only changed the error.</p>
|
<p>Models built with AutoML should not have dynamic ops, but it seems that yours does.</p>
<p>If that is truly a model designed using AutoML, then AutoML should be expanded to use asynchronous execution.</p>
<p>If the model were your own (not AutoML), it would be a simple <code>await model.executeAsync()</code> instead of <code>model.execute()</code>, but in the AutoML case that part is hidden inside the AutoML library's <code>classify</code> call, and that needs to be addressed by the tfjs AutoML team.</p>
<p>best to open an issue at <a href="https://github.com/tensorflow/tfjs/issues" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs/issues</a></p>
<p>btw, why post a link to an image containing the error message instead of copying the message as text here?</p>
|
tensorflow.js|google-cloud-automl
| 0
|
10,055
| 72,312,859
|
In Keras is it possible to see what were the predictions on each step during model.fit() or model.evaluate?
|
<p>I am fitting an ANN using Keras. As I don't trust the loss function output, I would like to see, what are the intermediate values that are compared to the target ones in order to calculate the loss after every epoch.</p>
<pre><code>history = model.fit(X, Y, epochs=epoc,batch_size=bs)
scores = model.evaluate(X, Y, verbose=0)
</code></pre>
<p>As an alternative, could you please tell me, is there a way to get the values for model.evaluate(x,y), because once again it gives only the score.</p>
<p>Thank you in advance for your answer!</p>
|
<p>To get predictions for each epoch you have to create a callback and declare an <code>on_epoch_end</code> function as shown in this <a href="https://www.tensorflow.org/guide/keras/custom_callback#a_basic_example" rel="nofollow noreferrer">document</a>:</p>
<pre><code>class prediction_for_each_epoch(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.epoch_predictions = []  # create the list once per training run
    def on_epoch_end(self, epoch, logs=None):
        self.epoch_predictions.append(self.model.predict(test_images))
</code></pre>
<p>Here a list is created once, in <code>on_train_begin</code>, to store the results of each epoch; creating it inside <code>on_epoch_end</code> would reset it every epoch and keep only the last epoch's predictions.</p>
<p>You have to pass the callbacks while training the model.</p>
<pre><code>model.fit(train_images,train_labels, epochs=2,callbacks=[prediction_for_each_epoch()])
</code></pre>
<p>Please refer to this working <a href="https://gist.github.com/kiransair/a36bc98093329b39e2e1dd805cc9acb7" rel="nofollow noreferrer">gist</a>.</p>
|
python|tensorflow|keras
| 0
|
10,056
| 72,158,678
|
How do I query more than one column in a data frame?
|
<p>I'm taking a Data Science class that uses Python and this is a question that stumped me today: "How many babies are named “Oliver” in the state of Utah for all years?"
To answer this question we were supposed to use data from this set <a href="https://raw.githubusercontent.com/byuidatascience/data4names/master/data-raw/names_year/names_year.csv" rel="nofollow noreferrer">https://raw.githubusercontent.com/byuidatascience/data4names/master/data-raw/names_year/names_year.csv</a></p>
<p>So I started by loading in pandas.</p>
<pre><code>import pandas as pd
</code></pre>
<p>Then I loaded in the data set and created a data frame</p>
<pre><code>url='https://raw.githubusercontent.com/byuidatascience/data4names/master/data-raw/names_year/names_year.csv'
names=pd.read_csv(url)
</code></pre>
<p>Finally I used the .query() method to single out the data type that I wanted, the name Oliver.</p>
<pre><code>oliver=names.query("name == 'Oliver'")
</code></pre>
<p>I eventually found the total number of babies that had been named Oliver in Utah using this code</p>
<pre><code>total=pd.DataFrame.sum(quiz)
print(total)
</code></pre>
<p>but I wasn't sure how to single out the data for both the name and the state, or if that is even possible. Is there anyone out there that knows of a better way to find this answer?</p>
|
<p>You have all the code you need; just add one more line to sum according to the state:</p>
<pre><code>print(oliver['UT'].sum()) # total number of Olivers in Utah across all years
</code></pre>
<p>and you can drop the <code>quiz</code> variable.</p>
|
python|pandas|data-science
| 0
|
10,057
| 72,225,655
|
Extracting multiple sets of rows/ columns from a 2D numpy array
|
<p>I have a 2D numpy array from which I want to extract multiple sets of rows/ columns.</p>
<pre><code># img is 2D array
img = np.arange(25).reshape(5,5)
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
</code></pre>
<p>I know the syntax to extract one set of row/ column. The following will extract the first 4 rows and the 3rd and 4th column as shown below</p>
<pre><code>img[0:4, 2:4]
array([[ 2, 3],
[ 7, 8],
[12, 13],
[17, 18]])
</code></pre>
<p>However, what is the syntax if I want to extract multiple sets of rows and/or columns? I tried the following but it leads to an <code>invalid syntax</code> error</p>
<pre><code>img[[0,2:4],2]
</code></pre>
<p>The output that I am looking for from the above command is</p>
<pre><code>array([[ 2],
[12],
[17]])
</code></pre>
<p>I tried searching for this but it only leads to results for one set of rows/ columns or extracting discrete rows/ columns which I know how to do, like using np.ix.</p>
<p>For context, the 2D array that I am actually dealing with has the dimensions ~800X1200, and from this array I want to extract multiple ranges of rows and columns in one go. So something like <code>img[[0:100, 120:210, 400, 500:600], [1:450, 500:550, 600, 700:950]]</code>.</p>
|
<p>IIUC, you can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.r_.html" rel="nofollow noreferrer"><code>numpy.r_</code></a> to generate the indices from the slice:</p>
<pre><code>img[np.r_[0,2:4][:,None],2]
</code></pre>
<p>output:</p>
<pre><code>array([[ 2],
[12],
[17]])
</code></pre>
<p>intermediates:</p>
<pre><code>np.r_[0,2:4]
# array([0, 2, 3])
np.r_[0,2:4][:,None] # variant: np.c_[np.r_[0,2:4]]
# array([[0],
# [2],
# [3]])
</code></pre>
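<p>For the larger use case in the question (several row ranges and several column ranges at once), one sketch is to combine <code>np.r_</code> with <code>np.ix_</code>, which builds the open mesh needed for cross-product indexing (the ranges below are illustrative, not the asker's exact ones):</p>

```python
import numpy as np

img = np.arange(25).reshape(5, 5)

# Rows 0-1 and 3-4, columns 0 and 2-3, extracted in one go.
rows = np.r_[0:2, 3:5]
cols = np.r_[0, 2:4]
sub = img[np.ix_(rows, cols)]
print(sub)
# [[ 0  2  3]
#  [ 5  7  8]
#  [15 17 18]
#  [20 22 23]]
```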
|
python|numpy|multidimensional-array|numpy-slicing
| 2
|
10,058
| 72,376,652
|
Need to find black point
|
<p>I have a 2D array with x and y. I need to extract the black point shown on the graph.
The point lies just before a sudden increase.
I have also pasted the data for x and y below.</p>
<p><strong>Does anyone have any idea how to get this point?</strong></p>
<pre><code>x = [50,30,40,40,60,70,80,90,100,110,120,130,140,160,170,180,200,210,220,240,250,270,280,300,320,340,350,370,390,410,430,450,470,490,510,530,550,570,590,610,620,650,670,690,710,730,760,780,800,820,840,860,890,910,930,950,980,1000,1020,1040,1060,1090,1110,1130,1150,1170,1200,1220,1240,1260,1280,1300,1320,1340,1360,1380,1400,1420,1440,1460,1480,1500,1510,1530,1550,1570,1590,1600,1620,1630,1650,1660,1680,1690,1710,1720,1730,1740,1760,1770,1780,1790,1800,1810,1820,1830,1830,1840,1850,1860,1860,1870,1880,1880,1890,1890,1890,1900,1900,1900,1910,1910,1910,1910,1910,1910,1910,1910,1910,1900,1900,1900,1900,1890,1890,1890,1880,1880,1870,1870,1860,1850,1850,1840,1830,1830,1820,1810,1800,1800,1790,1780,1770,1760,1750,1740,1720,1710,1700,1690,1680,1660,1650,1640,1630,1610,1600,1590,1570,1550,1540,1520,1510,1490,1470,1460,1440,1420,1410,1390,1370,1350,1330,1310,1240,1290,1280,1250,1220,1200,1180,1160,1140,1110,1090,1070,1050,1030,1010,980,960,940,920,890,870,850,820,810,780,760,740,710,690,670,640,620,600,580,560,530,510,490,460,440,420,400,380,360,340,320,300,280,260,250,230,210,200,180,170,150,140,130,110,100,90,80,70,60,50,40,40,30,20,20,20,10,10,10,10,0,0,0,0,10,10,10,20,20,30]
y = [3220,3160,3180,3200,3230,3250,3270,3290,3300,3320,3340,3360,3390,3440,3480,3500,3520,3550,3580,3610,3640,3670,3710,3740,3780,3840,3880,3910,3950,4000,4050,4090,4140,4190,4250,4310,4390,4440,4490,4540,4600,4600,4590,4560,4540,4510,4470,4440,4420,4400,4390,4360,4370,4390,4410,4420,4450,4470,4490,4500,4500,4500,4510,4500,4490,4470,4450,4430,4400,4390,4380,4380,4390,4390,4390,4410,4420,4430,4440,4460,4470,4470,4470,4460,4450,4440,4430,4410,4400,4390,4360,4340,4350,4350,4360,4370,4380,4400,4410,4420,4440,4450,4460,4460,4450,4440,4430,4410,4400,4380,4370,4360,4360,4350,4360,4370,4380,4400,4420,4420,4390,4370,4360,4350,4340,4330,4300,4290,4290,4300,4290,4260,4230,4200,4160,4150,4140,4140,4110,4100,4090,4070,4040,4010,3990,3950,3920,3910,3900,3870,3850,3830,3780,3740,3680,3630,3590,3560,3520,3470,3430,3380,3330,3270,3220,3160,3110,3070,3030,2970,2920,2870,2870,2890,2920,2940,2980,3000,3010,3020,3030,3040,3050,3040,2940,3020,3000,2970,2920,2920,2920,2920,2910,2910,2920,2930,2950,2970,2990,3010,3010,3010,3000,3000,3000,2990,2970,2950,2930,2910,2890,2890,2900,2910,2920,2920,2930,2950,2960,2990,3010,3030,3030,3020,3010,3000,2990,2980,2970,2970,2960,2950,2950,2950,2960,2970,2990,3010,3020,3040,3040,3050,3060,3070,3070,3060,3050,3040,3030,3020,3020,3010,3010,3010,3010,3020,3020,3030,3050,3060,3080,3120,3160,3170,3170,3160,3160,3160,3170]
</code></pre>
<p>More data:</p>
<pre><code>x1 = [40,30,30,30,40,50,50,50,60,60,70,70,80,80,90,90,100,100,110,120,120,130,140,140,150,160,160,170,180,190,190,200,210,220,220,230,240,250,260,260,270,280,290,300,310,310,320,330,340,350,350,360,370,380,390,400,410,420,430,430,440,450,460,470,480,490,500,510,510,520,530,540,550,560,570,580,580,590,600,610,620,630,630,640,650,660,670,670,680,690,700,700,710,710,720,730,730,740,750,750,760,760,770,780,780,790,790,800,810,810,820,820,820,830,830,840,840,850,850,850,860,860,860,870,870,870,880,880,880,880,890,890,890,890,890,890,890,900,900,900,900,900,900,900,900,900,900,900,900,900,900,900,900,890,890,890,890,890,890,890,880,880,880,880,880,870,870,870,860,870,860,860,850,850,850,840,840,840,830,830,820,820,820,810,810,800,800,790,790,780,780,770,770,760,760,750,740,740,730,730,720,720,710,700,700,700,690,680,680,670,660,660,650,640,640,630,620,620,610,600,590,590,580,570,560,560,550,540,530,520,520,510,500,490,480,470,470,460,450,440,430,420,410,400,400,390,380,370,360,350,350,340,330,320,310,310,300,290,280,270,260,260,250,240,230,220,220,210,200,190,180,180,170,160,150,150,140,130,130,120,110,110,100,90,90,80,80,70,70,60,60,50,50,40,40,40,30,30,30,20,20,20,20,10,10,10,10,10,10,0,0,0,0,0,0,0,0,0,0,0,0,10,10,10,10,10,20,20,20,20]
y1 = [110,2070,2080,2090,2090,2130,2140,2160,2170,2200,2220,2240,2270,2300,2320,2320,2310,2280,2270,2260,2240,2220,2220,2240,2250,2270,2280,2300,2300,2310,2310,2290,2270,2250,2250,2240,2240,2240,2250,2270,2280,2290,2300,2300,2300,2280,2280,2260,2250,2250,2250,2250,2260,2270,2280,2290,2300,2300,2300,2290,2280,2280,2270,2260,2270,2270,2270,2280,2290,2300,2310,2310,2300,2300,2300,2290,2280,2280,2280,2280,2290,2290,2300,2300,2310,2310,2310,2300,2300,2290,2290,2280,2280,2280,2290,2290,2290,2300,2300,2300,2300,2300,2300,2290,2290,2280,2290,2280,2280,2280,2290,2300,2300,2300,2290,2300,2290,2290,2280,2280,2280,2280,2290,2290,2290,2290,2290,2290,2290,2290,2290,2290,2290,2280,2280,2280,2280,2280,2280,2280,2280,2290,2280,2280,2280,2280,2270,2240,2230,2230,2220,2220,2220,2220,2220,2220,2220,2220,2220,2220,2220,2220,2220,2210,2210,2210,2210,2210,2200,2200,2200,2200,2200,2200,2200,2200,2200,2200,2200,2200,2200,2190,2190,2190,2190,2190,2190,2190,2190,2190,2190,2190,2190,2180,2180,2180,2180,2180,2180,2180,2180,2180,2180,2180,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2160,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2170,2060,2030,2040,1990,1990,1980,1990,1960,1900,1920,1900,1900,1900,1910,1910,1880,1880,1880,1880,1880,1890,1890,1900,1930,1960,1970,1980,1990,2000,2000,2010,2020,2030,2040,2050,2060,2070,2080]
x2 = [30,20,30,30,40,40,50,50,60,60,70,70,80,90,90,100,110,110,120,130,140,140,150,160,170,180,180,190,200,210,220,230,240,250,260,270,280,290,300,310,320,330,340,350,360,370,380,390,400,410,420,440,450,460,470,480,490,510,520,530,540,550,560,580,590,600,610,620,640,650,660,670,680,700,700,710,730,740,750,760,770,780,790,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,980,990,1000,1010,1020,1030,1030,1040,1050,1060,1060,1060,1070,1080,1080,1090,1090,1100,1110,1110,1120,1120,1130,1130,1130,1140,1140,1140,1150,1150,1150,1160,1160,1160,1160,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1170,1160,1160,1160,1160,1160,1160,1150,1150,1150,1150,1140,1140,1140,1130,1130,1130,1120,1120,1100,1120,1110,1110,1100,1090,1090,1080,1080,1080,1070,1070,1060,1060,1050,1050,1040,1040,1030,1030,1020,1010,1010,1000,990,990,980,970,970,960,950,940,940,930,920,910,910,900,890,880,870,870,860,850,840,830,820,810,800,800,790,780,770,760,750,740,730,720,710,700,690,680,670,660,650,640,630,620,600,590,580,570,560,550,540,530,510,500,490,480,470,460,450,430,420,410,400,390,380,370,350,350,340,320,310,300,290,280,270,260,250,240,230,220,210,200,190,180,170,160,150,140,140,130,120,110,100,100,90,80,80,70,60,60,50,50,40,40,30,30,20,20,20,10,10,10,10,10,0,0,0,0,0,0,0,0,0,0,0,0,0,10,10,10,10,20,20]
y2 = [2450,2410,2420,2430,2470,2490,2510,2530,2560,2570,2600,2630,2650,2680,2680,2670,2670,2660,2650,2650,2640,2630,2630,2640,2640,2650,2650,2660,2670,2670,2670,2670,2660,2660,2660,2650,2650,2650,2650,2650,2660,2660,2670,2670,2680,2680,2680,2680,2680,2680,2680,2680,2680,2680,2680,2680,2680,2690,2690,2690,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2690,2690,2690,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2710,2710,2710,2710,2710,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2700,2690,2690,2690,2690,2690,2690,2680,2680,2680,2680,2680,2670,2670,2670,2670,2670,2670,2670,2670,2660,2660,2660,2650,2650,2650,2650,2650,2650,2650,2640,2640,2640,2640,2630,2630,2630,2620,2620,2620,2600,2560,2540,2530,2520,2510,2510,2500,2490,2480,2470,2460,2450,2440,2430,2430,2410,2400,2380,2370,2360,2340,2330,2310,2300,2280,2260,2250,2250,2250,2250,2260,2260,2260,2270,2270,2260,2250,2250,2250,2240,2240,2240,2240,2240,2240,2240,2230,2240,2240,2240,2230,2220,2220,2220,2220,2210,2210,2210,2210,2210,2200,2200,2200,2200,2200,2200,2200,2200,2200,2200,2190,2190,2190,2190,2190,2190,2190,2180,2190,2180,2180,2180,2170,2170,2170,2170,2170,2160,2160,2160,2160,2150,2150,2140,2140,2140,2140,2140,2140,2140,2130,2130,2130,2130,2120,2120,2120,2120,2120,2120,2120,2120,2120,2120,2120,2130,2120,2120,2130,2120,2130,2130,2130,2130,2130,2130,2130,2130,2130,2140,2140,2140,2140,2140,2140,2140,2140,2140,2140,2150,2150,2150,2150,2160,2160,2160,2170,2170,2170,2170,2170,2180,2180,2180,2180,2190,2190,2190,2190,2200,2200,2200,2210,2210,2210,2220,2220,2230,2230,2240,2260,2300,2300,2320,2320,2330,2340,2350,2360,2380,2390,2400,2410]
x3 = [60,30,40,40,50,70,90,100,110,130,140,160,180,200,210,230,250,270,290,310,330,350,370,400,420,440,470,490,520,540,560,590,620,640,670,690,710,740,770,790,820,850,870,900,920,950,980,1000,1030,1050,1070,1100,1120,1150,1170,1190,1220,1240,1260,1280,1300,1320,1340,1360,1380,1400,1410,1430,1440,1450,1470,1480,1490,1500,1510,1520,1530,1540,1540,1550,1550,1560,1560,1560,1570,1570,1570,1570,1570,1560,1560,1560,1560,1550,1550,1550,1540,1540,1530,1530,1520,1510,1510,1500,1490,1480,1470,1460,1460,1450,1440,1420,1410,1410,1390,1380,1370,1360,1340,1330,1320,1300,1290,1270,1260,1240,1220,1210,1190,1170,1150,1130,1120,1100,1080,1060,1040,1020,1000,970,950,930,900,880,850,830,800,780,750,720,620,700,670,640,590,560,530,500,470,440,410,380,360,340,310,280,260,240,210,190,170,150,130,110,100,80,70,50,40,30,20,20,10,10,0,0,0,0,0,0,0,10,10,20]
y3 = [2550,2470,2480,2490,2520,2610,2680,2720,2780,2770,2760,2730,2710,2660,2620,2600,2570,2580,2590,2630,2640,2670,2680,2690,2700,2690,2670,2640,2630,2610,2610,2600,2610,2610,2640,2650,2660,2670,2660,2660,2640,2630,2600,2590,2580,2580,2580,2590,2600,2610,2630,2640,2640,2640,2630,2610,2600,2580,2570,2570,2570,2570,2580,2570,2570,2570,2580,2570,2560,2550,2550,2560,2570,2560,2550,2560,2560,2570,2570,2570,2560,2560,2570,2570,2570,2560,2570,2570,2550,2520,2520,2520,2520,2520,2530,2490,2470,2470,2460,2440,2430,2410,2410,2400,2360,2310,2280,2250,2230,2220,2230,2230,2270,2310,2320,2340,2350,2340,2330,2320,2280,2250,2230,2220,2220,2230,2240,2250,2290,2310,2330,2320,2330,2310,2290,2270,2250,2220,2200,2210,2210,2230,2240,2280,2290,2300,2320,2310,2310,2280,2230,2270,2240,2240,2230,2230,2240,2270,2270,2290,2290,2340,2360,2360,2350,2330,2340,2350,2370,2340,2310,2310,2340,2360,2370,2370,2360,2380,2410,2430,2410,2390,2380,2380,2390,2380,2360,2330,2370,2410,2420,2430,2420,2430]
</code></pre>
<p>image for x, y
<a href="https://i.stack.imgur.com/EXLbz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EXLbz.png" alt="enter image description here" /></a></p>
<p>image for x1, y1
<a href="https://i.stack.imgur.com/U99cJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U99cJ.png" alt="enter image description here" /></a></p>
<p>image for x2, y2
<a href="https://i.stack.imgur.com/K1OsF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K1OsF.png" alt="Image for x2, y2" /></a></p>
<p>image for x3, y3
<a href="https://i.stack.imgur.com/4xK46.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4xK46.png" alt="enter image description here" /></a></p>
|
<pre><code>min(zip(y, x))
</code></pre>
<p>That would produce the <code>y</code> and the <code>x</code> coordinate of the point with the smallest <code>y</code> coordinate.</p>
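<p>The numpy equivalent, assuming (as the plots suggest) that the black point is the global minimum of y just before the curve rises again — the short arrays below are a toy stand-in for the pasted data:</p>

```python
import numpy as np

# Toy curve with a dip just before a sudden increase.
x = np.array([0, 10, 20, 30, 40, 50])
y = np.array([300, 250, 200, 190, 400, 450])

idx = np.argmin(y)            # index of the smallest y value
black_point = (x[idx], y[idx])
print(black_point)            # (30, 190)
```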
|
python|pandas|numpy|scipy
| 1
|
10,059
| 50,519,983
|
How to apply a function to multiple columns in Pandas
|
<p>I have a bunch of columns which requires cleaning in Pandas. I've written a function which does that cleaning. I'm not sure how to apply the same function to many columns. Here is what I'm trying:</p>
<pre><code>df["Passengers", "Revenue", "Cost"].apply(convert_dash_comma_into_float)
</code></pre>
<p>But I'm getting KeyError.</p>
|
<p>Use double brackets [[]] as @chrisz points out:</p>
<p>Here is an MVCE:</p>
<pre><code>df = pd.DataFrame(np.arange(30).reshape(10,-1),columns=['A','B','C'])
def f(x):
#Clean even numbers from columns.
return x.mask(x%2==0,0)
df[['B','C']] = df[['B','C']].apply(f)
print(df)
</code></pre>
<p>Output</p>
<pre><code> A B C
0 0 1 0
1 3 0 5
2 6 7 0
3 9 0 11
4 12 13 0
5 15 0 17
6 18 19 0
7 21 0 23
8 24 25 0
9 27 0 29
</code></pre>
|
pandas
| 9
|
10,060
| 50,504,670
|
group by two columns count in pandas
|
<p>I have a Pandas DataFrame like this :</p>
<pre><code>df = pd.DataFrame({
'Date': ['2017-1-1', '2017-1-1', '2017-1-2', '2017-1-2', '2017-1-3'],
'Groups': ['one', 'one', 'one', 'two', 'two']})
Date Groups
0 2017-1-1 one
1 2017-1-1 one
2 2017-1-2 one
3 2017-1-2 two
4 2017-1-3 two
</code></pre>
<p>How can I generate a new DataFrame like this?</p>
<pre><code> Date Groups_counts
0 2017-1-1 1
1 2017-1-2 2
2 2017-1-3 1
</code></pre>
<p>Thanks a lot!</p>
|
<p>To get the count of unique groups per date, use:</p>
<pre><code>df.groupby('Date')['Groups'].nunique()
</code></pre>
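<p>If you want the result shaped exactly like the desired frame (a <code>Date</code> column plus a <code>Groups_counts</code> column), reset the index and name the counts — a small extension of the same idea:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['2017-1-1', '2017-1-1', '2017-1-2', '2017-1-2', '2017-1-3'],
    'Groups': ['one', 'one', 'one', 'two', 'two']})

# nunique per date, then turn the index back into a column with a new name.
out = df.groupby('Date')['Groups'].nunique().reset_index(name='Groups_counts')
print(out)
#        Date  Groups_counts
# 0  2017-1-1              1
# 1  2017-1-2              2
# 2  2017-1-3              1
```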
|
python|pandas
| 2
|
10,061
| 50,276,500
|
Adding time deltas to a running total in Pandas
|
<p><a href="https://i.stack.imgur.com/p2rSD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p2rSD.png" alt="enter image description here"></a></p>
<p>So I have two columns of data in my dataframe. TimeDeltasDiffs and ActualTime. First row of the dataframe has a start time in the ActualTime column. I want to populate the ActualTime column incrementally based on a time delta value in the current row.</p>
<p>I've used apply and lambdas for other operations on this dataframe. e.g.</p>
<pre><code>df_mrx['Log Timestamp'] = df_mrx['Log Timestamp'].apply(lambda x: '00:' + x)
</code></pre>
<p>But not sure how to use that based on previous values as I am iterating ...</p>
<p>Any help would be greatly appreciated.</p>
|
<p>You do not necessarily need to use .apply(). Consider the approach below,
given that this is the <code>df</code>:</p>
<pre><code> ActualTime TimeDeltasDiffs
0 2018-04-16 17:06:01 00:00:00
1 0 00:00:01
2 0 00:00:00
3 0 00:00:02
</code></pre>
<p>You need to use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer">.cumsum()</a> on <code>TimeDeltasDiffs</code></p>
<pre><code>df['ActualTime'] = df.iloc[0]['ActualTime']
df['ActualTime'] = pd.to_datetime(df['ActualTime']) + pd.to_timedelta(df['TimeDeltasDiffs']).cumsum()
</code></pre>
<p>Output:</p>
<pre><code> ActualTime TimeDeltasDiffs
0 2018-04-16 17:06:01 00:00:00
1 2018-04-16 17:06:02 00:00:01
2 2018-04-16 17:06:02 00:00:00
3 2018-04-16 17:06:04 00:00:02
</code></pre>
|
python|pandas|timedelta
| 1
|
10,062
| 50,527,616
|
Changing value of matrix and assigning value
|
<p>The code below, based on <a href="http://www.johnwittenauer.net/machine-learning-exercises-in-python-part-1/" rel="nofollow noreferrer">http://www.johnwittenauer.net/machine-learning-exercises-in-python-part-1/</a>, works: </p>
<pre><code>theta = np.matrix(np.array([0, 0]))
def computeCost(X, y, theta, iterations, alpha):
temp = np.matrix(np.zeros(theta.shape))
m = len(X)
theta_trans = theta.T
for j in range(iterations):
hyp = np.dot(X, theta_trans)-y
term = np.multiply(hyp, X[:,0])
temp[0,0] = theta[0,0] - ((alpha / len(X)) * np.sum(term))
term = np.multiply(hyp, X[:,1])
temp[0,1] = theta[0,1] - ((alpha / len(X)) * np.sum(term))
theta = temp
theta_trans = theta.T
return theta
</code></pre>
<p>However, when I use theta directly instead of temp e.g. <code>theta[0,0] = theta[0,0] - ((alpha / len(X)) * np.sum(term))</code> and comment out the <code>theta = temp</code> I always get 0 and 0 for theta.</p>
<p>When I do a similar operation outside the function theta is changed. For instance,</p>
<pre><code>theta = np.matrix(np.array([0,0]))
theta[0,0] = theta[0,0] - 1
print(theta)
</code></pre>
<p>theta shows as [-1 , 0].</p>
<p>Why is this type of assignment not working inside the function?</p>
|
<p>A probable explanation: it is a problem of types (int vs float).</p>
<p>The assignment </p>
<pre><code>theta = np.matrix(np.array([0, 0]))
</code></pre>
<p>creates a matrix of integers. There is some implicit conversion to integers when you assign directly its coefficients:</p>
<pre><code>>>> m = np.matrix(np.array([0, 0]))
>>> m
matrix([[0, 0]])
>>> m[0,0] = 0.5 # float
>>> m
matrix([[0, 0]]) # no effect, 0.5 converted to 0
>>> m[0,0] = 1 # int
>>> m
matrix([[1, 0]])
</code></pre>
<p>In contrast, the <code>temp</code> variable is a matrix of floats (because <code>np.zeros</code> creates an array of floats when <code>dtype</code> is not specified), so the assignment of floats works as expected.</p>
<p>So just declare <code>theta</code> directly as a matrix of floats and you should be fine.</p>
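<p>A quick check of that fix (a minimal sketch): <code>np.zeros</code> produces floats by default, so fractional updates survive the in-place assignment:</p>

```python
import numpy as np

theta = np.matrix(np.zeros(2))   # float64 matrix: [[0., 0.]]
theta[0, 0] = theta[0, 0] - 0.5  # no truncation to int this time
```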
|
python|numpy|machine-learning
| 0
|
10,063
| 50,446,949
|
PIL: fromarray gives a wrong object in P mode
|
<p>I want to load an image in <code>P</code> mode, transform it into <code>np.array</code> and then transform it back, but I got a wrong Image object which is a gray image, not a color one</p>
<pre><code>label = PIL.Image.open(dir).convert('P')
label = np.asarray(label)
img = PIL.Image.fromarray(label, mode='P')
img.save('test.png')
</code></pre>
<p><code>dir</code> is the path of the original picture; <code>test.png</code> is a gray picture</p>
|
<p>Images in 'P' mode require a palette that associates each color index with an actual RGB color. Converting the image to an array loses the palette, you must restore it again.</p>
<pre><code>label = PIL.Image.open(dir).convert('P')
p = label.getpalette()
label = np.asarray(label)
img = PIL.Image.fromarray(label, mode='P')
img.putpalette(p)
img.save('test.png')
</code></pre>
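<p>The palette round-trip can also be seen in isolation with a tiny image built from scratch (the two palette colors here are made up purely for illustration):</p>

```python
import numpy as np
from PIL import Image

idx = np.array([[0, 1], [1, 0]], dtype=np.uint8)  # 2x2 array of palette indices
img = Image.fromarray(idx, mode='P')
img.putpalette([0, 0, 0, 255, 0, 0])              # index 0 -> black, index 1 -> red
rgb = img.convert('RGB')                          # palette is applied here
```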
|
python|numpy|machine-learning|python-imaging-library
| 2
|
10,064
| 45,637,245
|
How do I plot with matplotlib?
|
<p>How do I plot these using matplotlib or pandas' plot? </p>
<p>I've tried this btw: </p>
<pre><code>topic_count.plot.bar(stacked=True)
</code></pre>
<p>Which outputs : </p>
<pre><code> <matplotlib.axes._subplots.AxesSubplot at 0x118bdfeb8>
</code></pre>
<p>and nothing else, I am not seeing a plot. please help</p>
|
<p>Crude example with matplotlib:</p>
<pre><code>import matplotlib.pyplot as plt
foo = [1, 2]
plt.plot(foo)
plt.show()
</code></pre>
<p>And this should show you something like this:
<a href="https://i.stack.imgur.com/hBwzO.png" rel="nofollow noreferrer">Plot result</a></p>
<p>Some references:</p>
<ul>
<li><a href="https://matplotlib.org/users/pyplot_tutorial.html" rel="nofollow noreferrer">https://matplotlib.org/users/pyplot_tutorial.html</a></li>
</ul>
|
python|pandas|matplotlib|plot
| 4
|
10,065
| 62,766,171
|
Cutting a pandas DataFrame into small blocks and do simple calculations on each block
|
<p>I want to divide pandas DataFrame columns into blocks of 3 and find the mean of each block for each row.</p>
<p>Towards that end, by using a for-loop, I created a list of DataFrames by cutting them into blocks of 3, found their mean and reshaped it back into the shape I want.</p>
<p>The following code does the job:</p>
<pre><code>df = pd.DataFrame(np.random.rand(2000,100))
blocks = [df.iloc[:,i:i+3] for i in range(0,df.shape[1],3)]
list_df = []
for quarter in range(0,len(blocks)):
list_df.append(blocks[quarter].T.mean())
df = np.reshape(list_df,(len(blocks),len(blocks[0]))).T
df = pd.DataFrame(df)
</code></pre>
<p>The problem is, this is incredibly slow (given the size of my data, the for-loop is really taking its time). My question is, is there a more efficient way to do it? Specifically, are there any built-in pandas functions that do the same job?</p>
|
<p>I think you can do it directly by specifying axis=1 in <code>mean</code> on the selection of the 3 columns in the list comprehension. then use it in <code>pd.concat</code></p>
<pre><code>df_ = pd.concat([df.iloc[:,i:i+3].mean(axis=1) for i in range(0,df.shape[1],3)],
axis=1, ignore_index=True)
</code></pre>
<p><strong>In the specific case where the number of columns is a multiple of 3</strong> (not like in your example but just in case for your real data), you could use <code>to_numpy</code>, <code>reshape</code> and <code>mean</code> along the last axis, it should even be faster.</p>
<pre><code>pd.DataFrame(df.to_numpy()
.reshape(df.shape[0], df.shape[1]//3, 3)
.mean(axis=-1)
)
</code></pre>
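<p>A quick sanity check that both variants agree, using a small frame whose column count is a multiple of 3:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12.0).reshape(2, 6))  # 2 rows, 6 columns

a = pd.concat([df.iloc[:, i:i + 3].mean(axis=1) for i in range(0, df.shape[1], 3)],
              axis=1, ignore_index=True)
b = pd.DataFrame(df.to_numpy().reshape(df.shape[0], df.shape[1] // 3, 3).mean(axis=-1))
```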
|
python|pandas
| 1
|
10,066
| 62,685,261
|
Units of the last dense output layer in case of multiple categories
|
<p>I am currently working on this <a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Exercise%20-%20Answer.ipynb" rel="nofollow noreferrer">colab</a>. Task is to classify the sentences into a certain category. So we have a multiple category problem, not binary, like prediction the sentiment of a review (positive / negative) according to certain review sentences. In case of multiple categories I thought that the number of units/neurons in the last layer has to match the number of classes I want to predict. So when I have a binary problem I use one neuron, indicating 0 or 1. When I have 5 classes, I need 5 units. That's what I thought.</p>
<p>However, in the code of the colab there is the following:</p>
<pre><code>model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(6, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
</code></pre>
<p>When I run the model.fit code part in this colab it does work. But I do not get it. When I check</p>
<pre><code>print(label_tokenizer.word_index)
print(label_tokenizer.word_docs)
print(label_tokenizer.word_counts)
</code></pre>
<p>This gives</p>
<pre><code>{'sport': 1, 'business': 2, 'politics': 3, 'tech': 4, 'entertainment': 5}
defaultdict(<class 'int'>, {'tech': 401, 'business': 510, 'sport': 511, 'entertainment': 386, 'politics': 417})
OrderedDict([('tech', 401), ('business', 510), ('sport', 511), ('entertainment', 386), ('politics', 417)])
</code></pre>
<p>So clearly 5 classes. However, when I adjust the model to <code>tf.keras.layers.Dense(5, activation='softmax')</code> and run the model.fit command it does not work. Accuracy is always 0.</p>
<p>Why is 6 here correct and not 5?</p>
|
<p>It is 6 because the encoded targets are in [1,5], but Keras' <code>sparse_categorical_crossentropy</code> expects class indices starting from 0, so a 6th (unused) class 0 is implicitly required.</p>
<p>To use <code>Dense(5, activation='softmax')</code>, you can simply do <code>y-1</code> so that the labels lie in [0,4], starting from 0.</p>
<p>following the colab link, you can change:</p>
<pre><code>model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(5, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
history = model.fit(train_padded, training_label_seq-1, epochs=num_epochs, validation_data=(validation_padded, validation_label_seq-1), verbose=2)
</code></pre>
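<p>The label shift itself is easy to see in isolation (a trivial sketch):</p>

```python
import numpy as np

labels = np.array([1, 5, 3, 2])  # classes encoded as 1..5, as in the tokenizer output
shifted = labels - 1             # now 0..4, valid targets for Dense(5)
```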
|
python|tensorflow|keras|neural-network|cross-entropy
| 2
|
10,067
| 62,492,482
|
Using np.where, langdetect in pandas
|
<p>I want to add a new column in dataframe, which will paste the data from another column if it is written in English, and paste nothing if it is not in English using langdetect library.</p>
<pre><code>df['lyrics_english'] = np.where(detect(df["lyrics"]) == 'en', df["lyrics"], '')
</code></pre>
<p>I hope, the meaning is clear. But I have Error like this.</p>
<pre><code> File "C:/Users/PycharmProjects/pythontask/example", line 128, in <module>
df['lyrics_english'] = np.where(detect(df["lyrics"]) == 'en', df["lyrics"], '')
File "C:\Users\AppData\Local\Programs\Python\Python38-32\lib\site-packages\langdetect\detector_factory.py", line 129, in detect
detector.append(text)
File "C:\Users\AppData\Local\Programs\Python\Python38-32\lib\site-packages\langdetect\detector.py", line 104, in append
text = self.URL_RE.sub(' ', text)
TypeError: expected string or bytes-like object
</code></pre>
<p>If I type</p>
<pre><code>df['lyrics_english'] = np.where(detect(df["lyrics"]) == 'en', 0, '')
</code></pre>
<p>there is again the same Error, smth linked with AppData. What can I do?</p>
|
<p>The traceback comes from <code>detect()</code> receiving a whole Series instead of a single string; it has to be applied to each value (after filling non-string values like <code>nan</code>). You can try:</p>
<pre><code>df['lyrics_english'] = np.where(df["lyrics"].fillna("").apply(detect) == 'en', df["lyrics"], '')
</code></pre>
<p>Note that <code>detect</code> can still raise on empty or very short strings, so you may need a small <code>try/except</code> wrapper around it.</p>
<p>If this doesn't work, then you need to look into <code>df["lyrics"].unique()</code> and understand what's going on there.</p>
|
python|pandas|numpy|dataframe|sentiment-analysis
| 0
|
10,068
| 54,518,161
|
TypeError from SciKit-Learn's LabelEncoder
|
<p>Here is my code:</p>
<pre><code>#Importing the dataset
dataset = pd.read_csv('insurance.csv')
X = dataset.iloc[:, :-2].values
X = pd.DataFrame(X)
#Encoding Categorical data
from sklearn.preprocessing import LabelEncoder
labelencoder_X = LabelEncoder()
X[:, 1:2] = labelencoder_X.fit_transform(X[:, 1:2])
</code></pre>
<p>Sample Dataset</p>
<pre><code> age sex bmi children smoker region charges
19 female 27.9 0 yes southwest 16884.924
18 male 33.77 1 no southeast 1725.5523
28 male 33 3 no southeast 4449.462
33 male 22.705 0 no northwest 21984.47061
32 male 28.88 0 no northwest 3866.8552
31 female 25.74 0 no southeast 3756.6216
46 female 33.44 1 no southeast 8240.5896
37 female 27.74 3 no northwest 7281.5056
37 male 29.83 2 no northeast 6406.4107
60 female 25.84 0 no northwest 28923.13692
</code></pre>
<p>When running the labelencoder, I am getting the following error</p>
<blockquote>
<p>File "E:\Anaconda2\lib\site-packages\pandas\core\generic.py", line
1840, in _get_item_cache res = cache.get(item) TypeError: unhashable
type</p>
</blockquote>
<p>What could be causing this error?</p>
|
<p>Here is a small demo:</p>
<pre><code>In [36]: from sklearn.preprocessing import LabelEncoder
In [37]: le = LabelEncoder()
In [38]: X = df.apply(lambda c: c if np.issubdtype(df.dtypes.loc[c.name], np.number)
else le.fit_transform(c))
In [39]: X
Out[39]:
age sex bmi children smoker region charges
0 19 0 27.900 0 1 3 16884.92400
1 18 1 33.770 1 0 2 1725.55230
2 28 1 33.000 3 0 2 4449.46200
3 33 1 22.705 0 0 1 21984.47061
4 32 1 28.880 0 0 1 3866.85520
5 31 0 25.740 0 0 2 3756.62160
6 46 0 33.440 1 0 2 8240.58960
7 37 0 27.740 3 0 1 7281.50560
8 37 1 29.830 2 0 0 6406.41070
9 60 0 25.840 0 0 1 28923.13692
</code></pre>
<p>Source DF:</p>
<pre><code>In [35]: df
Out[35]:
age sex bmi children smoker region charges
0 19 female 27.900 0 yes southwest 16884.92400
1 18 male 33.770 1 no southeast 1725.55230
2 28 male 33.000 3 no southeast 4449.46200
3 33 male 22.705 0 no northwest 21984.47061
4 32 male 28.880 0 no northwest 3866.85520
5 31 female 25.740 0 no southeast 3756.62160
6 46 female 33.440 1 no southeast 8240.58960
7 37 female 27.740 3 no northwest 7281.50560
8 37 male 29.830 2 no northeast 6406.41070
9 60 female 25.840 0 no northwest 28923.13692
</code></pre>
|
python|pandas|scikit-learn
| 1
|
10,069
| 73,600,444
|
Convert a column to timedelta and remove days from it
|
<p>I am trying to convert a column with data like this: 1:38:17 or 36:21 to timedelta format. This column is extracted from a website and converted to a table using pandas.</p>
<pre><code>df[' Chip Time'] = df[' Chip Time'].apply(pd.to_timedelta, errors='coerce')
</code></pre>
<p>This returns 0 days 01:38:17 but for rows with minutes and seconds (36:21) it returns NaT.
I would like to have the time properly converted and remove the days part leaving only the time like this: 01:38:17.</p>
<p>I tried using the code below but it doesn't strip the days but it strips the time.</p>
<pre><code>df[' Chip Time'] = pd.to_timedelta(df[' Chip Time'].dt.days, unit='d')
</code></pre>
<p>Please is there another method I can use in order to return a result like this 01:38:17.</p>
|
<p>I think you want to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>pd.to_datetime</code></a> instead of <code>pd.to_timedelta</code>:</p>
<pre><code>df[' Chip Time'] = pd.to_datetime(df[' Chip Time'], format='%H:%M:%S').dt.time
</code></pre>
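<p>One caveat, judging from the sample values in the question: rows like <code>36:21</code> (minutes and seconds only) will not match <code>%H:%M:%S</code>. A sketch that normalizes them first:</p>

```python
import pandas as pd

s = pd.Series(['1:38:17', '36:21'])
# left-pad MM:SS entries to H:MM:SS so a single format parses every row
s = s.where(s.str.count(':') == 2, '0:' + s)
times = pd.to_datetime(s, format='%H:%M:%S').dt.time
```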
|
python|pandas|time
| 1
|
10,070
| 73,831,488
|
Iterate through all sheets of all workbooks in a directory
|
<p>I am trying to combine all spreadsheets from all workbooks in a directory into a single df. I've tried with <code>glob</code> and with <code>os.scandir</code> but either way I keep only getting the first sheet of all workbooks.
First attempt:</p>
<pre><code>import pandas as pd
import glob
workbooks = glob.glob(r"\mydirectory\*.xlsx")
list = []
for file in workbooks:
df = pd.concat(pd.read_excel(file, sheet_name=None), ignore_index = True)
list.append(df)
dataframe = pd.concat(list, axis = 0)
</code></pre>
<p>Second attempt:</p>
<pre><code>import os
import pandas as pd
df = pd.DataFrame()
path = r"\mydirectory\"
with os.scandir(path) as files:
for file in files:
data = pd.read_excel(file, sheet_name = None)
df = df.append(data)
</code></pre>
<p>I think the issue lies with the <code>for</code> loop but I'm too inexperienced to pin down the problem. Any help would be greatly appreciated, thx!!!</p>
|
<p>If I understand what you have written correctly, you want something like this:</p>
<pre><code>import pandas as pd
import glob
# list of workbooks in directory
workbooks = glob.glob(r"\mydirectory\*.xlsx")
l = []
# for each file in list
for file in workbooks:
# Class for file allows for retrieving sheet names
xl_file = pd.ExcelFile(file)
# concatenate DataFrames created from each sheet in the file
df = pd.concat([pd.read_excel(file, sheet) for sheet in xl_file.sheet_names], ignore_index=True)
# append to list
l.append(df)
# concatenate all file DataFrames to one DataFrame.
dataframe = pd.concat(l, axis=0)
</code></pre>
<p>This loops through the sheets within the Excel file for the concatenation, the only difference to what you had already written.</p>
<h1>Alternative:</h1>
<p>Alternatively, without needing to first find the sheet names, the dictionary created by <code>pd.read_excel(file, sheet_name=None)</code> can used.</p>
<pre><code>import pandas as pd
import glob
# list of workbooks in directory
workbooks = glob.glob(r"\mydirectory\*.xlsx")
l = []
# for each file in list
for file in workbooks:
# concatenate the dictionary of dataframes from pd.read_excel
df = pd.concat(pd.read_excel(file, sheet_name=None), ignore_index=True)
l.append(df)
# concatenate all file DataFrames to one DataFrame.
dataframe = pd.concat(l, axis=0)
</code></pre>
<p>A good explanation/example of the use of <code>sheet_name=None</code> can be found <a href="https://stackoverflow.com/questions/51990562/whats-the-proper-way-to-use-sheet-name-none-on-pandas-read-excel-to-concatena#:%7E:text=Since%20pd.read_excel%20%28%29%20with%20sheet_name%3DNone%20returns%20an%20OrderedDict%2C,MultiIndex%20dataframe%2C%20without%20specifying%20the%20number%20in%20advance.">here</a>. In short, the use of this returns a dictionary of DataFrames for each sheet. This can then be concatenated to one DataFrame, as above, or an individual sheet's DataFrame could be accessed through <code>dictionary["sheet_name"]</code>.</p>
|
python|pandas|glob
| 0
|
10,071
| 73,730,601
|
How to scale x axis which is increasing constantly using pandas
|
<p>I have 3 columns <code>v1</code>, <code>v2</code>, and <code>v3</code> with 10,000 entries and I want to plot <code>v1</code>, <code>v2</code>, and <code>v3</code> on the y-axis.</p>
<p>In the x-axis, I want to plot <code>v1</code>, <code>v2</code>, <code>v3</code> points every 500 seconds until the length of column entries is reached (i.e 10,000 entries).</p>
<p><code>y-axis=['v1','v2', 'v3']</code><br />
<code>x-axis=[0,500,1000,1500... ,len(v1)]</code></p>
<p>I tried setting xticks:</p>
<pre class="lang-py prettyprint-override"><code>length = len(df.axes[0]) #number of rows
x = np.arange(0,length,500)
y = ["v1", "v2", "v3"]
df.plot = (x,y)
plt.show()
</code></pre>
<p>but I am getting an error.</p>
|
<p>How's something like this, it grabs every 500th row (and includes the last row) and plots it using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer">Pandas plot</a>:</p>
<pre class="lang-py prettyprint-override"><code>df[::500].append(df[-1:]).plot(y=['v1', 'v2', 'v3'], kind='line')
</code></pre>
<p><a href="https://i.stack.imgur.com/7lClC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7lClC.png" alt="enter image description here" /></a></p>
<p>Full code:</p>
<pre class="lang-py prettyprint-override"><code>############### create mock dataframe ################
import random
v1 = [random.randint(0,100) for i in range(10000)]
v2 = [random.randint(0,100) for i in range(10000)]
v3 = [random.randint(0,100) for i in range(10000)]
df = pd.DataFrame({'v1':v1, 'v2':v2, 'v3':v3})
######################################################
# Select every 500th row, include the last row with '.append(df[-1:])', then plot it
df[::500].append(df[-1:]).plot(y=['v1', 'v2', 'v3'], kind='line')
plt.xticks(np.arange(0,len(v1)+1, 500), rotation = 45)
plt.show()
</code></pre>
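<p>Note that <code>DataFrame.append</code> was removed in pandas 2.0, so on recent versions the same selection can be written with <code>pd.concat</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'v1': np.arange(10000), 'v2': np.arange(10000), 'v3': np.arange(10000)})
sampled = pd.concat([df[::500], df.tail(1)])  # every 500th row plus the last row
```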
|
python|pandas|matplotlib
| 0
|
10,072
| 73,710,166
|
Efficient function mapping with arguments in numpy
|
<p>I am trying to create a heightmap by interpolating between a bunch of heights at certain points in an area. To process the whole image, I have the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>
map_ = np.zeros((img_width, img_height))
for x in range(img_width):
for y in range(img_height):
map_[x, y] = calculate_height(set(points.items()), x, y)
</code></pre>
<p>This is <code>calculate_height</code>:</p>
<pre class="lang-py prettyprint-override"><code>def distance(x1, y1, x2, y2) -> float:
return np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
def calculate_height(points: set, x, y) -> float:
total = 0
dists = {}
for pos, h in points:
d = distance(pos[0], pos[1], x, y)
if x == pos[0] and y == pos[1]:
return h
d = 1 / (d ** 2)
dists[pos] = d
total += d
r = 0
for pos, h in points:
ratio = dists[pos] / total
r += ratio * h
r = r
return r
</code></pre>
<p>This snippet works perfectly, but if the image is too big, it takes a long time to process, because this is O(n^2). The problem is that a "too big" image is only 800x600, and it takes almost a minute to process, which to me seems a bit excessive.</p>
<p>My goal is not to reduce the time complexity from O(n^2), but to reduce the time it takes to process images, so that I can get decently sized images in a reasonable amount of time.</p>
<p>I found
<a href="https://stackoverflow.com/questions/35215161/most-efficient-way-to-map-function-over-numpy-array">this post</a>, but I couldn't really try it out because it's all for a 1D array, and I have a 2D array, and I also need to pass the coordinates of each point and the set of existing points to the <code>calculate_height</code> function. What can I try to optimize this code snippet?</p>
<p>Edit: Moving <code>set(points.items)</code> out of the loop as @thethiny suggested was a HUGE improvement. I had no idea it was such a heavy thing to do. This makes it fast enough for me, but feel free to add more suggestions for the next people to come by!</p>
<p>Edit 2: I have further optimized this processing by including the following changes:</p>
<pre class="lang-py prettyprint-override"><code> # The first for loop inside the calculate_distance function
for pos, h in points:
d2 = distance2(pos[0], pos[1], x, y)
if x == pos[0] and y == pos[1]:
return h
d2 = d2 ** -1 # 1 / (d ** 2) == d ** -2 == d2 ** -1
dists[pos] = d2 # Having the square of d on these two lines makes no difference
total += d2
</code></pre>
<p>This reduced execution time for a 200x200 image from 1.57 seconds to 0.76 seconds. The 800x600 image mentioned earlier now takes 6.13 seconds to process :D</p>
<p>This is what <code>points</code> looks like (as requested by @norok12):</p>
<pre class="lang-py prettyprint-override"><code># Hints added for easier understanding, I know this doesn't run
points: dict[tuple[int, int], float] = {
(x: int, y: int): height: float,
(x: int, y: int): height: float,
(x: int, y: int): height: float
}
# The amount of points varies between datasets, so I can't provide a useful range other than [3, inf)
</code></pre>
|
<p>There are a few problems with your implementation.</p>
<p>Essentially what you're implementing is inverse distance weighting (a radial-basis-function-style interpolation).</p>
<p>The usual algorithm for that looks like:</p>
<pre><code>sum_w = 0
sum_wv = 0
for p,v in points.items():
d = distance(p,x)
w = 1.0 / (d*d)
sum_w += w
sum_wv += w*v
return sum_wv / sum_w
</code></pre>
<p>Your code has some extra logic for bailing out if <code>p==x</code> - which is good.
But it also allocates an array of distances - which this single loop form does not need.</p>
<p>This brings execution of an example in a workbook from 13s to 12s.</p>
<p>The next thing to note is that collapsing the points dict into an numpy array gives us the chance to use numpy functions.</p>
<pre><code>points_array = np.array([(p[0][0],p[0][1],p[1]) for p in points.items()]).astype(np.float32)
</code></pre>
<p>Then we can write the function as</p>
<pre><code>def calculate_height_fast(points, x, y) -> float:
dx = points[:,0] - x
dy = points[:,1] - y
r = np.hypot(dx,dy)
w = 1.0 / (r*r)
sum_w = np.sum(w)
    return np.sum(points[:,2] * w) / sum_w
</code></pre>
<p>This brings our time down to 658ms. But we can do better yet...</p>
<p>Since we're now using numpy functions we can apply <code>numba.njit</code> to JIT compile our function.</p>
<pre><code>@numba.njit
def calculate_height_fast(points, x, y) -> float:
dx = points[:,0] - x
dy = points[:,1] - y
r = np.hypot(dx,dy)
w = 1.0 / (r*r)
sum_w = np.sum(w)
    return np.sum(points[:,2] * w) / sum_w
</code></pre>
<p>This was giving me 105ms (after the function had been run once to ensure it got compiled).</p>
<p>This is a speed up of 130x over the original implementation (for my data)</p>
<p>You can see the full implementations <a href="https://colab.research.google.com/drive/1vwe9eFvDWIJ0g7k4VSSTBPcQjvuMP4tj?usp=sharing" rel="nofollow noreferrer">here</a></p>
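<p>For reference, a self-contained version of the vectorized function (without the numba decorator; the two sample points below are invented purely for illustration):</p>

```python
import numpy as np

def calculate_height_fast(points, x, y) -> float:
    # points: (N, 3) array of (px, py, height)
    dx = points[:, 0] - x
    dy = points[:, 1] - y
    r = np.hypot(dx, dy)
    w = 1.0 / (r * r)
    return np.sum(points[:, 2] * w) / np.sum(w)

pts = np.array([[0.0, 0.0, 10.0], [4.0, 0.0, 20.0]])
h = calculate_height_fast(pts, 2.0, 0.0)  # midpoint: equal weights, so the mean of the heights
```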
|
python|numpy|optimization
| 1
|
10,073
| 71,128,812
|
How to scale a fixed sparse matrix by the value in a 1x1 tensor in pytorch?
|
<p>Is it possible to scale a fixed sparse matrix by the value in a 1x1 tensor in pytorch?</p>
<p>For example, in code I'm working on I'm seeing the following issue:</p>
<pre><code>>>> import torch
>>> sp_mat = torch.sparse_coo_tensor([[0,1,2],[0,1,2]],[1,1,1],(3,3))
>>> w = torch.tensor([0.5])
>>> sp_mat*w
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: mul operands have incompatible sizes
</code></pre>
<p>Is there a workaround? Ultimately I want to let the w variable be a learnable parameter, but cannot seem to find a way to get this operation to work when w is a tensor.</p>
<p>It works just fine if the weight is a float:</p>
<pre><code>>>> import torch
>>> sp_mat = torch.sparse_coo_tensor([[0,1,2],[0,1,2]],[1,1,1],(3,3))
>>> y = 0.5
>>> sp_mat*y
tensor(indices=tensor([[0, 1, 2],
[0, 1, 2]]),
values=tensor([0.5000, 0.5000, 0.5000]),
size=(3, 3), nnz=3, layout=torch.sparse_coo)
</code></pre>
<p>Any suggestions? Thanks!</p>
|
<p>While an unlearnable parameter is simple, to make the tensor learnable it has to have the same shape as your data (hence, unfortunately, roughly 2x the memory, since a 0-D dense/sparse tensor does not seem to be broadcast correctly against a sparse matrix).</p>
<p>In this case <code>w</code> has to be recreated as sparse tensor, could be done like so (<code>sp_mat</code> is the same as <code>t</code> below):</p>
<pre><code>w = torch.sparse_coo_tensor(
t.indices(),
torch.full_like(
t.values(),
0.5,
),
t.shape,
requires_grad=True,
)
</code></pre>
<p>Also thanks to Phoenix for pointing out my misreading of the question itself.</p>
<p>On the bad side, all values are <strong>independent</strong> in this case for every value in the sparse matrix which might not be what you are after. AFAIK there is no way for “learnable scalar” and sparse matrix to work together and further hacks would be needed (correct me if I'm wrong please).</p>
<p><strong>EDIT:</strong> Just tested another approach, namely:</p>
<pre><code>w = torch.tensor(0.5, requires_grad=True).to_sparse()
</code></pre>
<p>And multiplied it, and it seems to be a bug. You might want to open PyTorch issue about it.</p>
|
python|pytorch|sparse-matrix
| 0
|
10,074
| 71,197,977
|
How to find ratio of values in two rows that have the same identifier using python dataframes
|
<p>I have a dataframe with 4858 rows and 67 columns. This contains the stats from each game in the season for each MLB team. This means that for every game, there are two rows of data. One with the stats from one team and the other with the stats from the team they played. Here are the column names: ['AB', 'R', 'H', 'RBI',
'BB', 'SO', 'PA', 'BA', 'OBP', 'SLG', 'OPS', 'Pit', 'Str', 'RE24',
'WinOrLoss', 'Team', 'Opponent', 'HomeOrAway', 'url', 'Win_Percentage',
'R_Season_Long_Count', 'H_Season_Long_Count', 'BB_Season_Long_Count',
'SO_Season_Long_Count', 'PA_Season_Long_Count', 'R_Moving_Average_3',
'R_Moving_Average_10', 'R_Moving_Average_31', 'SLG_Moving_Average_3',
'SLG_Moving_Average_10', 'SLG_Moving_Average_31', 'BA_Moving_Average_3',
'BA_Moving_Average_10', 'BA_Moving_Average_31', 'OBP_Moving_Average_3',
'OBP_Moving_Average_10', 'OBP_Moving_Average_31', 'SO_Moving_Average_3',
'SO_Moving_Average_10', 'SO_Moving_Average_31', 'AB_Moving_Average_3',
'AB_Moving_Average_10', 'AB_Moving_Average_31', 'Pit_Moving_Average_3',
'Pit_Moving_Average_10', 'Pit_Moving_Average_31', 'H_Moving_Average_3',
'H_Moving_Average_10', 'H_Moving_Average_31', 'BB_Moving_Average_3',
'BB_Moving_Average_10', 'BB_Moving_Average_31', 'OPS_Moving_Average_3',
'OPS_Moving_Average_10', 'OPS_Moving_Average_31',
'RE24_Moving_Average_3', 'RE24_Moving_Average_10',
'RE24_Moving_Average_31', 'Win_Percentage_Moving_Average_3',
'Win_Percentage_Moving_Average_10', 'Win_Percentage_Moving_Average_31',
'BA_Season_Long_Average', 'SLG_Season_Long_Average',
'OPS_Season_Long_Average']</p>
<p>Then, here is a picture of the output from these columns. Sorry, it's only from a few columns but essentially all the stats will just be numbers like this.</p>
<p><img src="https://i.stack.imgur.com/1QZGX.jpg" alt="enter image description here" /></p>
<p>The most important column for this question is the url column. This column identifies the game played as there is only one unique url for each game. However, there will be two rows within the dataframe that have this unique url as one will contain the stats from one team in that game and the other will contain the stats from the other team also in that game.</p>
<p>Now, what I am wanting to do is to combine these two rows that are identified by the common url by creating a ratio between them. So, I would like to divide the stats from the first team by the stats from the second team for that specific game with the unique url. I want to do this for each game/unique url. I am able to sum them by using the groupby.sum() function, but I am unsure how to find the ratio between the two rows with the same url. I would really appreciate any suggestions. Thanks so much!</p>
|
<p>Assumptions:</p>
<ul>
<li>always 2 rows for each url</li>
<li>in each url, among the 2 rows, you don't care which is divided by which</li>
</ul>
<p>A small example of your dataset:</p>
<pre><code>df = pd.DataFrame({
'url': ['1', '1', '2', '2', '3', '3'],
'non-stat1': np.arange(1., 7.),
'non-stat2': np.arange(2., 8.),
'stat1': np.arange(13., 19.),
'stat2': np.arange(6., 12.),
})
</code></pre>
<p>This lists the columns which are the stats that you want to apply division.</p>
<pre><code>columns_for_ratio = ['stat1', 'stat2']
</code></pre>
<p>This is how the division works. <code>.values</code> get you an array which always has two rows, so you can unpack the array into two variables, one array for each.</p>
<pre><code>def divide(two_rows):
x, y = two_rows.values
return pd.Series(x/y, two_rows.columns)
</code></pre>
<p>And finally do the division</p>
<pre><code>df.groupby('url')[columns_for_ratio].apply(divide)
</code></pre>
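<p>Putting it together on a trimmed, made-up version of the data (the column names are placeholders):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'url':   ['1', '1', '2', '2'],
    'stat1': [13.0, 14.0, 15.0, 16.0],
    'stat2': [6.0, 7.0, 8.0, 9.0],
})

def divide(two_rows):
    x, y = two_rows.values  # exactly two rows per url
    return pd.Series(x / y, two_rows.columns)

ratios = df.groupby('url')[['stat1', 'stat2']].apply(divide)
```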
|
python|pandas|dataframe|group-by|pandas-groupby
| 0
|
10,075
| 71,108,178
|
How to get multiple same tag text in a single variable in XML Processing Python?
|
<pre><code><PREAMB>
<AGENCY TYPE="S">HOMELAND SECURITY </AGENCY>
<AGENCY TYPE="O">LABOR</AGENCY>
<AGY>
<HD SOURCE="HED">AGENCY:</HD>
<P>U.S. Citizenship and Immigration Services</P>
</AGY>
</PREAMB>
</code></pre>
<p>How can I get this as -
'departments are' : 'HOMELAND SECURITY,LABOR : U.S. Citizenship and Immigration Services '</p>
<p>The below code is just returning -
'departments are' : 'LABOR : U.S. Citizenship and Immigration Services'</p>
<pre><code>for agency in preambl.findall("./PREAMB/AGENCY"):
departments = agency.text
if departments != '' or departments != None:
if pre.findall("./PREAMB/AGY"):
agency1 = ''
for agencies in pre.findall("./PREAMB/AGY/P"):
for para1 in agencies.itertext():
agency1 += para1.replace('\n', ' ')
agency1 = ' '.join(agency1.split())
if agency1:
agency1 = '{"departments are":"' + str(departments) + ' : ' + str(agency1) + '"}'
agency1 = json.loads(agency1)
</code></pre>
<p>Any help would be appreciated.</p>
|
<p>You are making this a bit too complicated, I believe. Try it this way:</p>
<pre><code>targets = ['.//AGENCY','.//AGY//P']
agencies = []
for target in targets:
agencies.extend([agency.text for agency in preambl.findall(f'{target}')])
print('agencies are: ',agencies)
</code></pre>
<p>And see if you get your expected output.</p>
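<p>For example, a self-contained sketch using the question's XML with the standard-library <code>ElementTree</code> parser:</p>

```python
import xml.etree.ElementTree as ET

xml = '''<PREAMB>
<AGENCY TYPE="S">HOMELAND SECURITY </AGENCY>
<AGENCY TYPE="O">LABOR</AGENCY>
<AGY>
<HD SOURCE="HED">AGENCY:</HD>
<P>U.S. Citizenship and Immigration Services</P>
</AGY>
</PREAMB>'''

preambl = ET.fromstring(xml)
targets = ['.//AGENCY', './/AGY//P']
agencies = []
for target in targets:
    agencies.extend(agency.text for agency in preambl.findall(target))

# assemble the dict shown in the question
departments = {'departments are': ','.join(a.strip() for a in agencies[:-1]) + ' : ' + agencies[-1]}
```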
|
python|json|pandas|xml|xml-parsing
| 0
|
10,076
| 71,380,141
|
Why is the seaborn displot category order irregular
|
<p>I used the script:</p>
<pre><code>sns.displot(data=df, x='New Category', height=5, aspect=3, kde=True)
</code></pre>
<p>but the categories come out in an arbitrary order, as shown in the picture below.
I want the order to be:</p>
<ul>
<li>Less than 2 hours</li>
<li>Between 1 to 2 hours</li>
<li>Between 2 to 4 hours</li>
<li>Between 4 to 6 hours</li>
<li>Between 6 to 12 hours</li>
<li>More than 12 hours</li>
</ul>
<p>The Result of Script:</p>
<p><a href="https://i.stack.imgur.com/qb8l7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qb8l7.png" alt="The Result of Script" /></a></p>
|
<p>The easiest way to fix an order, is via <code>pd.Categorical</code>:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# first, create some test data
categories = ['Less than 2 hours', 'Between 1 to 2 hours', 'Between 2 to 4 hours',
'Between 4 to 6 hours', 'Between 6 to 12 hours', 'More than 12 hours']
weights = np.random.rand(len(categories)) + 0.1
weights /= weights.sum()
df = pd.DataFrame({'New Category': np.random.choice(categories, 1000, p=weights)})
# fix an order on the column via pd.Categorical
df['New Category'] = pd.Categorical(df['New Category'], categories=categories, ordered=True)
# displot now uses the fixed order
sns.displot(data=df, x='New Category', height=5, aspect=3, kde=True)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/E8LIj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E8LIj.png" alt="fix an order for sns.displot" /></a></p>
|
python|python-3.x|pandas|dataframe|seaborn
| 2
|
10,077
| 71,144,447
|
pandas: how to do piecewise calculation based on condition of one column
|
<p>I have a dataframe like this:</p>
<pre><code>symbol Time Volume cumVolume group ...
00001 0 100 100 0 ...
00001 3 100 200 0 ...
00001 7 -200 0 0 ...
00001 12 -100 -100 1 ...
00001 13 -200 -300 1 ...
00001 18 300 0 1 ...
00002 0 -100 -100 2 ...
00002 4 -100 -200 2 ...
00002 7 100 -100 2 ...
00002 13 300 200 2 ...
00002 15 300 500 3 ...
</code></pre>
<p>I want to do the calculation for each symbol's sub-dataframe divided by <code>group</code>. For instance, I can see the dataframe like this:</p>
<pre><code>symbol Time Volume cumVolume group ...
00001 0 100 100 0 ...
00001 3 100 200 0 ...
00001 7 -200 0 0 ...
----------------------------------------------------
00001 12 -100 -100 1 ...
00001 13 -200 -300 1 ...
00001 18 300 0 1 ...
----------------------------------------------------
00002 0 -100 -100 2 ...
00002 4 -100 -200 2 ...
00002 7 100 -100 2 ...
00002 13 300 200 2 ...
----------------------------------------------------
00002 15 300 500 3 ...
</code></pre>
<p>the calculation rule is: <code>Volume</code> * <code>Time to section end</code></p>
<p>For example, for the first section: (100)*(7-0) + (100)*(7-3) + (-200)*(7-7)</p>
<p>For the second section: (-100)*(18-12) + (-200)*(18-13) + (300)*(18-18)</p>
<p>I am struggling with how to get the <code>Time to section end</code> variable. Could you give me some hints or solutions?</p>
|
<p>First, we want to calculate this value for each <code>"group"</code>, so we need to <code>df.groupby("group")</code>. Then, for each group, you can get the "end time" using <code>df_group.max()</code>. Now, to calculate "time to section end" we just substract the values: <code>df_group["Time"].max() - df_group["Time"]</code>. This works because is a "vectorized" operation. Finally, you can multiply the volume and then add everything using <code>.sum()</code>:</p>
<pre><code>for group, df_group in df.groupby("group"):
result = (df_group["Volume"] * (df_group["Time"].max() - df_group["Time"])).sum()
print(group, result)
</code></pre>
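<p>For larger frames the same rule can be vectorized with <code>groupby.transform</code>, which broadcasts each group's end time back onto its rows; a sketch with the sample values above (column names assumed as in the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Time":   [0, 3, 7, 12, 13, 18, 0, 4, 7, 13, 15],
    "Volume": [100, 100, -200, -100, -200, 300, -100, -100, 100, 300, 300],
    "group":  [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 3],
})
# "time to section end" = each group's max Time minus each row's Time
end = df.groupby("group")["Time"].transform("max")
result = (df["Volume"] * (end - df["Time"])).groupby(df["group"]).sum()
print(result)
```

<p><code>transform("max")</code> returns a Series aligned with <code>df</code>, so no explicit Python loop over the groups is needed.</p>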
|
python|pandas|dataframe
| 1
|
10,078
| 71,105,644
|
AttributeError: 'CRS' object has no attribute 'equals'
|
<p>I'm trying to make an interactive map with Geopandas using the default data-set.</p>
<pre><code>countries.to_crs(epsg=3395)
countries.explore(column='pop_est',cmap='magma')
</code></pre>
<p>Now I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-27-92f1397b09bf> in <module>
1 #Popultion mapping- Interactive
2 countries.to_crs(epsg=3395)
----> 3 countries.explore(column='pop_est',cmap='magma')
~\anaconda3\envs\myenv\lib\site-packages\geopandas\geodataframe.py in explore(self, *args, **kwargs)
1856 def explore(self, *args, **kwargs):
1857 """Interactive map based on folium/leaflet.js"""
-> 1858 return _explore(self, *args, **kwargs)
1859
1860 def sjoin(self, df, *args, **kwargs):
~\anaconda3\envs\myenv\lib\site-packages\geopandas\explore.py in _explore(df, column, cmap, color, m, tiles, attr, tooltip, popup, highlight, categorical, legend, scheme, k, vmin, vmax, width, height, categories, classification_kwds, control_scale, marker_type, marker_kwds, style_kwds, highlight_kwds, missing_kwds, tooltip_kwds, popup_kwds, legend_kwds, **kwargs)
283 kwargs["crs"] = "Simple"
284 tiles = None
--> 285 elif not gdf.crs.equals(4326):
286 gdf = gdf.to_crs(4326)
287
AttributeError: 'CRS' object has no attribute 'equals'
</code></pre>
<p>How can I fix this?</p>
|
<p>You have an outdated version of pyproj installed in your environment. You need at least pyproj 2.5.0. GeoPandas 0.10.x contains an installation <em>bug</em> that allows you to install older versions but this doesn't work. Update your pyproj.</p>
<pre><code>conda update pyproj
</code></pre>
<p>or</p>
<pre><code>pip install -U pyproj
</code></pre>
<p>Also, note that the line <code>countries.to_crs(epsg=3395)</code> in your snippet above doesn't do anything. It does not work in place. You need to assign the reprojected GeoDataFrame or use a keyword. But keep in mind that this has no effect on <code>explore</code> as it automatically reprojects geometries to Web Mercator.</p>
<pre class="lang-py prettyprint-override"><code>countries.to_crs(epsg=3395, inplace=True)
# or
countries = countries.to_crs(epsg=3395)
</code></pre>
|
geopandas
| 1
|
10,079
| 71,239,580
|
generating a Markov chain simulation using a transition matrix of specific size and with a given seed, using the mchmm library
|
<p>I am trying to generate a Markov simulation using a specific sequence as start, using the <a href="https://github.com/maximtrp/mchmm" rel="nofollow noreferrer">mchmm</a> library coded with scipy and numpy. I am not sure if I am using it correctly, since the library also has Viterbi and Baum-Welch algorithms in the context of Markov, which I am not familiar with.</p>
<p>To illustrate, I will continue with an example.</p>
<pre><code>data = 'AABCABCBAAAACBCBACBABCABCBACBACBABABCBACBBCBBCBCBCBACBABABCBCBAAACABABCBBCBCBCBCBCBAABCBBCBCBCCCBABCBCBBABCBABCABCCABABCBABC'
a = mc.MarkovChain().from_data(data)
</code></pre>
<p>I want a markov simulation based on a 3 states transition matrix, starting with the last 3 characters in the sequence above <em>("ABC")</em></p>
<pre><code>start_sequence = data[-3:]
tfm3 = a.n_order_matrix(a.observed_p_matrix, order=3) #this is because I want an order 3 transition matrix
ids, states = a.simulate(n=10, tf=tfm3, start=start_sequence)
</code></pre>
<p>this returns:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_2552615/2308700615.py in <module>
----> 1 ids, states = a.simulate(n=10, tf=tfm3, start=start_sequence)
~/anaconda3/lib/python3.8/site-packages/mchmm/_mc.py in simulate(self, n, tf, states, start, ret, seed)
304 _start = np.random.randint(0, len(states))
305 elif isinstance(start, str):
--> 306 _start = np.argwhere(states == start).item()
307
308 # simulated sequence init
ValueError: can only convert an array of size 1 to a Python scalar
</code></pre>
<p>I was expecting to get a sequence of 10 characters, starting with the string 'ABC' (data[-3:]), since I want to constraint the Markov simulation to start with the probabilities implied by that specific sequence, and following a Markov of order 3.</p>
<p>Any feedback?</p>
|
<p>The states in the <code>MarkovChain</code> instance <code>a</code> are <code>'A'</code>, <code>'B'</code> and <code>'C'</code>. When the <code>simulate</code> method is given a string for <code>state</code>, it expects it to be the name of one of the states, i.e. either <code>'A'</code>, <code>'B'</code> or <code>'C'</code>. You get that error because <code>data[-3:]</code> is not one of the states.</p>
<p>For example, in the following I use <code>start='A'</code> in the call of <code>simulate()</code>, and it generates a sequence of 10 states, starting at <code>'A'</code>:</p>
<pre><code>In [26]: data = 'AABCABCBAAAACBCBACBABCABCBACBACBABABCBACBBCBBCBCBCBACBABABCBCBA
...: AACABABCBBCBCBCBCBCBAABCBBCBCBCCCBABCBCBBABCBABCABCCABABCBABC'
In [27]: a = mc.MarkovChain().from_data(data)
In [28]: tfm3 = a.n_order_matrix(a.observed_p_matrix, order=3)
In [29]: ids, states = a.simulate(n=10, tf=tfm3, start='A')
In [30]: states
Out[30]: array(['A', 'C', 'A', 'C', 'C', 'C', 'A', 'A', 'C', 'B'], dtype='<U1')
</code></pre>
<hr />
<p>If you are trying to create a Markov chain where the states are sequences of three symbols (to add "history" that includes the previous two states), you could create a new input to <code>.from_data()</code> that consists of the length-3 overlapping subsequences of <code>data</code> (also known as the <a href="https://en.wikipedia.org/wiki/N-gram" rel="nofollow noreferrer">3-grams</a>). For example,</p>
<pre><code>In [65]: data3 = [data[k:k+3] for k in range(len(data)-2)]
In [66]: data3[:4]
Out[66]: ['AAB', 'ABC', 'BCA', 'CAB']
In [67]: data3[-8:]
Out[67]: ['CAB', 'ABA', 'BAB', 'ABC', 'BCB', 'CBA', 'BAB', 'ABC']
In [68]: a = mc.MarkovChain().from_data(data3)
</code></pre>
<p>Take a look at the states of this Markov chain:</p>
<pre><code>In [69]: a.states
Out[69]:
array(['AAA', 'AAB', 'AAC', 'ABA', 'ABC', 'ACA', 'ACB', 'BAA', 'BAB',
'BAC', 'BBA', 'BBC', 'BCA', 'BCB', 'BCC', 'CAB', 'CBA', 'CBB',
'CBC', 'CCA', 'CCB', 'CCC'], dtype='<U3')
</code></pre>
<p>Simulate 10 transitions, starting with the last state in <code>data3</code>:</p>
<pre><code>In [70]: ids, states = a.simulate(n=10, start=data3[-1])
In [71]: states
Out[71]:
array(['ABC', 'BCA', 'CAB', 'ABC', 'BCB', 'CBC', 'BCB', 'CBA', 'BAA',
'AAB'], dtype='<U3')
</code></pre>
<p>Compress the states to only include the final single character, so it is back in the form of the original <code>data</code> input:</p>
<pre><code>In [72]: ''.join([state[-1] for state in states])
Out[72]: 'CABCBCBAAB'
</code></pre>
|
python|numpy|scipy
| 1
|
10,080
| 52,220,959
|
Batch multiplication/division with scalar in tensorflow
|
<p>I'm struggling to find a simple way to multiply a batch of tensors with a batch of scalars.</p>
<p>I have a tensor with dimensions N, 4, 4. What I want is to divide tensor in the batch with the value at position 3, 3.</p>
<p>For example, let's say I have:</p>
<pre><code>A = [[[1, 1, 1, 0],
[1, 1, 1, 0],
[1, 1, 1, 0],
[0, 0, 0, a]],
[[1, 1, 1, 0],
[1, 1, 1, 0],
[1, 1, 1, 0],
[0, 0, 0, b]]
</code></pre>
<p>What I want is to obtain the following: </p>
<pre><code> B = [[[1/a, 1/a, 1/a, 0],
[1/a, 1/a, 1/a, 0],
[1/a, 1/a, 1/a, 0],
[0, 0, 0, 1]],
[[1/b, 1/b, 1/b, 0],
[1/b, 1/b, 1/b, 0],
[1/b, 1/b, 1/b, 0],
[0, 0, 0, 1]]
</code></pre>
|
<p>You should just do:</p>
<pre><code>B = A / A[:, 3:, 3:]
</code></pre>
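<p>This works because the slice <code>A[:, 3:, 3:]</code> keeps a trailing shape of <code>(N, 1, 1)</code>, which broadcasts over each <code>4x4</code> matrix. A plain-NumPy sketch, with concrete values standing in for <code>a</code> and <code>b</code>, to check the behaviour:</p>

```python
import numpy as np

a, b = 2.0, 4.0
A = np.ones((2, 4, 4))
A[:, :3, 3] = 0            # zero the last column (rows 0-2)
A[:, 3, :3] = 0            # zero the last row (cols 0-2)
A[0, 3, 3], A[1, 3, 3] = a, b
# A[:, 3:, 3:] has shape (2, 1, 1): one scalar per batch element
B = A / A[:, 3:, 3:]
print(B[0, 0, 0], B[1, 0, 0], B[0, 3, 3])
```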
|
python-3.x|tensorflow|batch-processing
| 0
|
10,081
| 52,042,732
|
anti join pandas data frames at different levels in python
|
<p>I have two pandas data frames, say df1 and df2.
df1 has 6 variables and df2 has 5 variables.
The first variable in both data frames is in string format and the remaining ones are in int format.</p>
<p>I want to identify the mismatched records between the two data frames using the first 3
columns of each, and exclude those records from the df1 dataframe.</p>
<p>For that I tried the following code, but it produces NaN values; if I drop the
NaN values then the required data gets deleted too.</p>
<p><strong><em>input data:-</em></strong></p>
<pre><code>**df1:-** **df2:-**
x1 x2 x3 x4 x5 x6 x1 x2 x3 x4 x5
SM 1 1 2 3 3 RK 2 4 3 4
RK 2 2 3 4 5 SM 1 1 3 3
NBR 1 2 2 5 6 NB 1 2 3 2
CBK 2 5 6 7 8 VSB 5 6 3 2
VSB 5 6 4 2 1 CB 2 6 4 1
SB 6 2 3 2 1 SB 6 2 4 1
</code></pre>
<p><strong><em>expected_out_put:-</em></strong></p>
<pre><code>x1 x2 x3 x4 x5 x6
RK 2 2 3 4 5
CBK 2 5 6 7 8
NBR 1 2 2 5 6
</code></pre>
<p><strong><em>syntax:-</em></strong></p>
<pre><code>data_out=df1[~df1['x1','x2','x3'].isin(df2['x1','x2','x3'])]
data_out=data_out.dropna()
</code></pre>
<p>please anyone can help me to tackle this.</p>
<p>Thanks in advance.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with a left join first, get the names of the columns added from <code>df2</code>, and then keep only the rows where all of those added columns are <code>NaN</code>:</p>
<pre><code>df = df1.merge(df2, on=['x1', 'x2', 'x3'], how='left', suffixes=('','_'))
print (df)
x1 x2 x3 x4 x5 x6 x4_ x5_
0 SM 1 1 2 3 3 3.0 3.0
1 RK 2 2 3 4 5 NaN NaN
2 NB 1 2 2 5 6 3.0 2.0
3 CB 2 5 6 7 8 NaN NaN
4 VSB 5 6 4 2 1 3.0 2.0
5 SB 6 2 3 2 1 4.0 1.0
cols = df.columns.difference(df1.columns)
print (cols)
Index(['x4_', 'x5_'], dtype='object')
df = df.loc[df[cols].isnull().all(axis=1), df1.columns.tolist()]
print (df)
x1 x2 x3 x4 x5 x6
1 RK 2 2 3 4 5
3 CB 2 5 6 7 8
</code></pre>
<p>EDIT:</p>
<p>With your sample data I get:</p>
<pre><code>df = df1.merge(df2, on=['x1', 'x2', 'x3'], how='left', suffixes=('','_'))
print (df)
x1 x2 x3 x4 x5 x6 x4_ x5_
0 SM 1 1 2 3 3 3.0 3.0
1 RK 2 2 3 4 5 NaN NaN
2 NBR 1 2 2 5 6 NaN NaN
3 CBK 2 5 6 7 8 NaN NaN
4 VSB 5 6 4 2 1 3.0 2.0
5 SB 6 2 3 2 1 4.0 1.0
cols = df.columns.difference(df1.columns)
print (cols)
Index(['x4_', 'x5_'], dtype='object')
df = df.loc[df[cols].isnull().all(axis=1), df1.columns.tolist()]
print (df)
x1 x2 x3 x4 x5 x6
1 RK 2 2 3 4 5
2 NBR 1 2 2 5 6
3 CBK 2 5 6 7 8
</code></pre>
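<p>An alternative is <code>merge</code> with <code>indicator=True</code>, which labels left-only rows directly and avoids inspecting the added columns. A sketch with the sample data (values transcribed from the question):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"x1": ["SM", "RK", "NBR", "CBK", "VSB", "SB"],
                    "x2": [1, 2, 1, 2, 5, 6], "x3": [1, 2, 2, 5, 6, 2],
                    "x4": [2, 3, 2, 6, 4, 3], "x5": [3, 4, 5, 7, 2, 2],
                    "x6": [3, 5, 6, 8, 1, 1]})
df2 = pd.DataFrame({"x1": ["RK", "SM", "NB", "VSB", "CB", "SB"],
                    "x2": [2, 1, 1, 5, 2, 6], "x3": [4, 1, 2, 6, 6, 2],
                    "x4": [3, 3, 3, 3, 4, 4], "x5": [4, 3, 2, 2, 1, 1]})
# merge only on the three key columns; _merge records where each row came from
m = df1.merge(df2[["x1", "x2", "x3"]], on=["x1", "x2", "x3"],
              how="left", indicator=True)
out = m.loc[m["_merge"] == "left_only", df1.columns.tolist()]
print(out)
```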
|
python|pandas|anti-join
| 1
|
10,082
| 52,075,111
|
reshape is deprecated issue when I pick series from pandas Dataframe
|
<p>When I try to take one series from dataframe I get this issue </p>
<blockquote>
<p>anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py:52: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
return getattr(obj, method)(*args, **kwds)</p>
</blockquote>
<p>This is the code snippet </p>
<pre><code>for idx, categories in enumerate(categorical_columns):
ax = plt.subplot(3,3,idx+1)
ax.set_xlabel(categories[0])
box = [df[df[categories[0]] == atype].price for atype in categories[1]]
ax.boxplot(box)
</code></pre>
|
<p>To avoid chained indexing, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>box = [df.loc[df[categories[0]] == atype, 'price'] for atype in categories[1]]
</code></pre>
<p>And to remove the <code>FutureWarning</code>, it is necessary to upgrade <code>pandas</code> together with <code>matplotlib</code>.</p>
|
pandas|numpy|dataframe
| 1
|
10,083
| 60,719,348
|
compare two column in two table in pandas based on the specific condition
|
<p>I have two data frame as shown below.</p>
<p>user table: Details about the courses and modules attended by each users.</p>
<pre><code>user_id courses. Num_of_course attended_modules Total_Modules
1 [A] 1 {A:[1,2,3,4,5,6]} 6
2 [A,B,C] 3 {A:[8], B:[5], C:[6]} 3
3 [A,B] 2 {A:[2,3,9], B:[10]} 4
4 [A] 1 {A:[3]} 1
5 [B] 1 {B:[5]} 1
6 [A] 1 {A:[3]} 1
7 [B] 1 {B:[5]} 1
8 [A] 1 {A:[4]} 1
</code></pre>
<p>Course table: Details about the courses and all the modules in that course and popular modules .</p>
<pre><code>course_id modules #users Popular_modules
A [1,2,3,4,5,6,8,9] 5 [3,2]
B [5,8] 4 [5]
C [6,10] 2 []
</code></pre>
<p>From the above tables, compare the two columns <code>attended_modules</code> and <code>modules</code> and suggest unattended modules for each user among the attended courses.</p>
<ol>
<li>Suggest the popular modules of an attended course first, if the user has not already attended them.</li>
<li>Suggest the remaining unattended modules of the ongoing courses, in the order given in the course table.</li>
<li>Suggest 3 modules to each user from every course he has attended, if 3 modules are available in that course.</li>
</ol>
<p>Expected Output:</p>
<pre><code>user_id courses. Recommended_modules
1 [A,B] {A:[8,9]}
2 [A,B,C] {A:[3,2,1], B:[8], C:[10]}
3 [A,B] {A:[1,4,5], B:[5,8]}
4 [A] {A:[2,1,4]}
5 [B] {B:[8]}
6 [A] {A:[2,1,4]}
7 [B] {B:[8]}
8 [A] {A:[3,2,1]}
</code></pre>
<p>EDIT:</p>
<p>Added the user_id = 9 (last row)</p>
<pre><code>user_id courses. Num_of_course attended_modules Total_Modules
1 [A] 2 {A:[1,2,3,4,5,6]} 6
2 [A,B,C] 3 {A:[8], B:[5], C:[6]} 3
3 [A,B] 2 {A:[2,3,9], B:[10]} 4
4 [A] 1 {A:[3]} 1
5 [B] 1 {B:[5]} 1
6 [A] 1 {A:[3]} 1
7 [B] 1 {B:[5]} 1
8 [A] 1 {A:[4]} 1
9 [A] 1 {A:[1,2,3,4,5,6,8,9], B:[5]} 8
</code></pre>
<p>The output of the above code is</p>
<pre><code> user_id courses. Recommended_modules
1 [A] {A:[8,9]}
2 [A,B,C] {A:[3,2,1], B:[8], C:[10]}
3 [A,B] {A:[1,4,5], B:[5,8]}
4 [A] {A:[2,1,4]}
5 [B] {B:[8]}
6 [A] {A:[2,1,4]}
7 [B] {B:[8]}
8 [A] {A:[3,2,1]}
9 [A,B] {A:[], B:[8]}
</code></pre>
<p>Where in the Recommended_modules for user_id = 9, course A has empty list </p>
<p>ie {A:[], B:[8]} expected is {B:[8]}</p>
<p>Expected output:</p>
<pre><code> user_id courses. Recommended_modules
1 [A] {A:[8,9]}
2 [A,B,C] {A:[3,2,1], B:[8], C:[10]}
3 [A,B] {A:[1,4,5], B:[5,8]}
4 [A] {A:[2,1,4]}
5 [B] {B:[8]}
6 [A] {A:[2,1,4]}
7 [B] {B:[8]}
8 [A] {A:[3,2,1]}
9 [A,B] {B:[8]}
</code></pre>
|
<p>Use a custom function that takes the difference between the dictionaries; in the EDIT at the end, an <code>if-else</code> makes sure that keys whose input list is empty are also left out of the output:</p>
<pre><code>df2 = df2.set_index('course_id')
mo = df2['modules'].to_dict()
#print (mo)
pop = df2['Popular_modules'].to_dict()
#print (pop)
</code></pre>
<hr>
<pre><code>def f(x):
out = {}
for k, v in x.items():
dif = [i for i in mo[k] if i not in v]
a = [i for i in pop[k] if i in dif]
b = [i for i in dif if i not in pop[k]]
c = a + b
out[k] = c[:3]
return out
</code></pre>
<hr>
<pre><code>df1['Recommended_modules'] = df1['attended_modules'].apply(f)
df1 = df1[['user_id','courses.','Recommended_modules']]
print (df1)
user_id courses. Recommended_modules
0 1 [A,B] {'A': [8, 9]}
1 2 [A,B,C] {'A': [3, 2, 1], 'B': [8], 'C': [10]}
2 3 [A,B] {'A': [1, 4, 5], 'B': [5, 8]}
3 4 [A] {'A': [2, 1, 4]}
4 5 [B] {'B': [8]}
5 6 [A] {'A': [2, 1, 4]}
6 7 [B] {'B': [8]}
7 8 [A] {'A': [3, 2, 1]}
</code></pre>
<p>EDIT: If empty lists appear in the dictionaries of the <code>attended_modules</code> input column and need to be removed from the output, the function is changed to:</p>
<pre><code>def f(x):
out = {}
for k, v in x.items():
dif = [i for i in mo[k] if i not in v]
a = [i for i in pop[k] if i in dif]
b = [i for i in dif if i not in pop[k]]
c = a + b
if len(v) > 0:
out[k] = c[:3]
return out
</code></pre>
|
pandas|pandas-groupby
| 1
|
10,084
| 60,509,061
|
How to create dataframe in pandas that contains Null values
|
<p>I am trying to create the dataframe below, which deliberately lacks some information. That is, <code>type</code> should be empty for one record.</p>
<pre><code>df = {'id': [1, 2, 3, 4, 5],
'created_at': ['2020-02-01', '2020-02-02', '2020-02-02', '2020-02-02', '2020-02-03'],
'type': ['red', NaN, 'blue', 'blue', 'yellow']}
df = pd.DataFrame (df, columns = ['id', 'created_at','type', 'converted_tf'])
</code></pre>
<p>Works perfectly fine when I put all the values but I keep getting errors with <code>NaN</code>, <code>Null</code>, <code>Na</code>, <code></code> etc.</p>
<p>Any idea what I have to put?</p>
|
<p><code>NaN</code>, <code>Null</code> and <code>Na</code> are not names that Python defines, so writing them as literals raises a <code>NameError</code> instead of representing an absence of value.</p>
<hr>
<p>Use <em>Python's</em> <a href="https://docs.python.org/3/c-api/none.html" rel="nofollow noreferrer"><strong><em><code>None</code></em></strong></a> Object to represent absence of value.</p>
<pre><code>import pandas as pd
df = {'id': [1, 2, 3, 4, 5],
'created_at': ['2020-02-01', '2020-02-02', '2020-02-02', '2020-02-02', '2020-02-03'],
'type': ['red', None, 'blue', 'blue', 'yellow']}
df = pd.DataFrame (df, columns = ['id', 'created_at','type', 'converted_tf'])
</code></pre>
<p>If you try to print the df, you'll get the following output:</p>
<pre><code> id created_at type converted_tf
0 1 2020-02-01 red NaN
1 2 2020-02-02 None NaN
2 3 2020-02-02 blue NaN
3 4 2020-02-02 blue NaN
4 5 2020-02-03 yellow NaN
</code></pre>
<p>So, you may now think that <code>NaN</code> and <code>None</code> are different. Pandas uses <code>NaN</code> as a placeholder for missing values, i.e instead of showing None it shows <code>NaN</code> which is more readable. Read more about this <a href="https://stackoverflow.com/a/17534682/3091398">here</a>.</p>
<p>Now let's try the <code>fillna</code> function:</p>
<pre><code>df.fillna('') # filling None or NaN values with empty string
</code></pre>
<p>You can see that both <code>NaN</code> and <code>None</code> got replaced by empty string.</p>
<pre><code> id created_at type converted_tf
0 1 2020-02-01 red
1 2 2020-02-02
2 3 2020-02-02 blue
3 4 2020-02-02 blue
4 5 2020-02-03 yellow
</code></pre>
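<p>For completeness, <code>numpy.nan</code> can also be written directly into the constructor; pandas treats it the same way as <code>None</code> in an object column. A small sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3],
    "type": ["red", np.nan, "blue"],  # np.nan marks the missing entry
})
print(df["type"].isna().tolist())
```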
|
python|pandas
| 4
|
10,085
| 60,530,727
|
KeyError when trying to get to a value in 2d array (imported from csv file)
|
<p>I need a code in which I load data from several csv files (containing distance, altitude, angle, wavelength etc.)</p>
<pre><code>import numpy as np
import pandas as pd
date = 20180710
# import csv files
geometry = pd.read_csv('20180710_geo.csv', sep=';')
TWOb = pd.read_csv('l2b.csv', sep=';')
calib = pd.read_csv('calibration.csv', sep=';')
obs = pd.read_csv('observation.csv', sep=';')
###### extract position of date in obs #########
idxtupple = np.where(obs == date)
listidx = list(zip(idxtupple[0], idxtupple[1]))
for idx in listidx:
print(idx)
D = obs[idx[0],7]
print(D)
</code></pre>
<p>Everything seems fine, I have the correct number of lines and columns, and correct float numbers at each positions. But when I try to get an element in the 2d array (for instance obs[18,7] or geometry[2,5]), I get "<code>KeyError: (18, 7)</code>" (or <code>KeyError: (2, 5)</code> etc.) and I don't get why...</p>
<p>Here's what I get as csv file (from row 15 to 19 of my obs file):</p>
<pre><code>Date of acq. [yyyymmdd] Operation ... Altitude min [km] Comments
15 20180630 Check out ... 19,51 NaN
16 20180703 Check out ... NaN NaN
17 20180705 Dark ... NaN NaN
19 20180711 Box-A ... 19,52 Global mapping
</code></pre>
|
<p>Thank you for your answer, Serge Ballesta!</p>
<p>I managed to get the wanted value in the table with <code>str = obs.loc[idx[0], 'Solar distance [au]']</code> and then just converted it with <code>D = float(str.replace(',', '.'))</code> and it works.</p>
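<p>For purely positional access, <code>.iloc</code> is the shorter route: plain <code>obs[i, j]</code> fails because <code>[]</code> on a DataFrame looks up column labels, not positions. A sketch with a tiny frame (the column name is an assumption; the decimal-comma conversion is the same):</p>

```python
import pandas as pd

obs = pd.DataFrame({"Solar distance [au]": ["1,02", "0,98"]})
val = obs.iloc[0, 0]               # row 0, column 0, by integer position
D = float(val.replace(",", "."))   # convert the decimal comma to a dot
print(D)
```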
|
python|pandas|csv
| 0
|
10,086
| 60,398,009
|
find the duplicates and apply a condition on other column in pandas
|
<p>First I need to check the serial-no column and find the duplicates. Once the duplicates are found, a second condition is applied on the rank column: the duplicate row with the least rank should have its status updated to 1, and the other duplicate rows should be updated to 2.</p>
<p><img src="https://i.stack.imgur.com/M1DAB.jpg" alt="link to image"></p>
|
<p>Could you try this and check?</p>
<pre><code>counts = df.groupby(['Serial No'])['Rank'].count().gt(1).reset_index()
dup_sernos = counts[counts['Rank'] == True]['Serial No'].tolist()
df['Status'] = df[df['Serial No'].isin(dup_sernos)].sort_values(['Serial No', 'Rank']).groupby(['Serial No']).cumcount()+1
df['Status'] = df['Status'].fillna('')
</code></pre>
|
python|pandas
| 0
|
10,087
| 60,548,567
|
Can I use a Tensor as a list index?
|
<p>I have this Custom Keras Layer that chooses between elements of a list, like a Dense layer, and I want it to return the element of the list it predicted directly.
The list is a list of <code>Keras.layers.Layer</code>.
I have this piece of code:</p>
<pre class="lang-py prettyprint-override"><code>def call(self, inputs, context):
pred = tf.argmax(tf.matmul(context, self.kernel))
return self.layers[pred](inputs)
</code></pre>
<p>It throws an error: <code>TypeError: list indices must be integers or slices, not Tensor</code>, which is understandable, but I can't find a way of making it work. The "pred" Tensor doesn't have a <code>.numpy</code> property, though I'm running the program eagerly, since this happens when the layer is being built. </p>
<p>I understand there may be no solutions, if so, submit ideas on how I could code this layer in another way.</p>
|
<p>There is a bigger problem. </p>
<p>This layer will not work, because you cannot get derivatives of <code>argmax</code>, so the <code>kernel</code> will be impossible to train, and you will get an error message like "An operation has None for gradient".</p>
<p>As a workaround, I'd suggest you to:</p>
<ul>
<li>1: calculate all layers (hope they have the same shape?)</li>
<li>2: stack their results in the second dimension: <code>tf.stack([list_of_outputs], axis=1)</code></li>
<li>3: take a <code>softmax</code> of the result of <code>matmul</code></li>
<li>4: reshape the result of <code>softmax</code> to the same number of dimensions as the stacked result above: shape <code>(-1, number_of_layers, _other_dims_if_exist, 1)</code></li>
<li>5: multiply (elementwise <code>*</code>) the stacked results by the reshaped softmax and sum over axis 1.</li>
</ul>
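<p>The weighted-mixture idea can be illustrated in plain NumPy (a sketch; in a Keras layer the same steps would use <code>tf.stack</code>, <code>tf.nn.softmax</code> and broadcasting, all of which are differentiable):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, batch, dim = 3, 2, 4
outputs = [rng.random((batch, dim)) for _ in range(n_layers)]    # every layer's output
stacked = np.stack(outputs, axis=1)                              # (batch, n_layers, dim)
logits = rng.random((batch, n_layers))                           # stand-in for matmul(context, kernel)
w = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # softmax instead of argmax
w = w.reshape(batch, n_layers, 1)                                # align dims for broadcasting
mixed = (stacked * w).sum(axis=1)                                # elementwise multiply, sum over layers
print(mixed.shape)
```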
|
python|tensorflow|keras|deep-learning|keras-layer
| 1
|
10,088
| 60,643,795
|
For loop doesn't work for web scraping Google search in python
|
<p>I'm working on web-scraping Google search with a list of keywords. The nested for loop for scraping a single page works well. However, the outer for loop that iterates over the keywords and is supposed to <strong>scrape</strong> the data for each search result does not work as I intended: the output misses the first two keywords and contains only the result for the last keyword.</p>
<p>Here is the code:</p>
<pre><code>browser = webdriver.Chrome(r"C:\...\chromedriver.exe")
df = pd.DataFrame(columns = ['ceo', 'value'])
baseUrl = 'https://www.google.com/search?q='
ceo_list = ["Bill Gates", "Elon Musk", "Warren Buffet"]
values =[]
for ceo in ceo_list:
browser.get(baseUrl + ceo)
table = browser.find_elements_by_css_selector('div.ifM9O')
for row in table:
ceo = str(([c.text for c in row.find_elements_by_css_selector('div.kno-ecr-pt.PZPZlf.gsmt.i8lZMc')])).strip('[]').strip("''")
value = str(([c.text for c in row.find_elements_by_css_selector('div.Z1hOCe')])).strip('[]').strip("''")
ceo = pd.Series(ceo)
value = pd.Series(value)
df = df.assign(**{'ceo': ceo, 'value': value})
print(df)
browser.close()
</code></pre>
<p>This is the output:</p>
<pre><code> ceo value
0 Warren Buffett Born: August 30, 1930 (age 89 years), Omaha, N...
</code></pre>
<p>What I'm expecting is this:</p>
<pre><code> ceo value
0 Bill Gates Born:..........
1 Elon Musk Born:...........
2 Warren Buffett Born: August 30, 1930 (age 89 years), Omaha, N...
</code></pre>
<p>Not sure which part was missing.</p>
|
<p><code>df.assign</code> replaces the <code>ceo</code> and <code>value</code> columns on every pass of the outer loop, so only the last keyword survives. Collect the results in a list inside the loop and build the DataFrame once after the loop (or append rows on each iteration) instead of overwriting the columns.</p>
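<p>A minimal sketch of that accumulation pattern (the selenium lookup is replaced by a stand-in function, since only the loop logic matters here):</p>

```python
import pandas as pd

def scrape(ceo):
    # stand-in for the selenium scraping of one search page
    return {"ceo": ceo, "value": f"Born: ... ({ceo})"}

ceo_list = ["Bill Gates", "Elon Musk", "Warren Buffet"]
rows = []
for ceo in ceo_list:
    rows.append(scrape(ceo))      # append each result instead of overwriting
df = pd.DataFrame(rows)
print(df)
```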
|
python|pandas|loops|for-loop|web-scraping
| 0
|
10,089
| 60,574,862
|
Calculating pairwise Euclidean distance between all the rows of a dataframe
|
<p>How can I calculate the Euclidean distance between all the rows of a dataframe? I am trying this code, but it is not working:</p>
<pre><code>zero_data = data
distance = lambda column1, column2: pd.np.linalg.norm(column1 - column2)
result = zero_data.apply(lambda col1: zero_data.apply(lambda col2: distance(col1, col2)))
result.head()
</code></pre>
<p>This is how my (44062 by 278) dataframe looks like:</p>
<p><a href="https://i.stack.imgur.com/PebHB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PebHB.png" alt="Please see sample data here"></a></p>
|
<p>To compute the Euclidean distance between two rows i and j of a dataframe df:</p>
<pre><code>np.linalg.norm(df.loc[i] - df.loc[j])
</code></pre>
<p>To compute it between consecutive rows, i.e. 0 and 1, 1 and 2, 2 and 3, ...</p>
<pre><code>np.linalg.norm(df.diff(axis=0).drop(0), axis=1)
</code></pre>
<p>If you want to compute it between all the rows, i.e. 0 and 1, 0 and 2, ..., 1 and 2, 1 and 3 ..., then you have to loop through all the combinations of i and j (keep in mind that for 44062 rows there are 970707891 such combinations so using a for-loop will be very slow):</p>
<pre><code>import itertools
for i, j in itertools.combinations(df.index, 2):
d_ij = np.linalg.norm(df.loc[i] - df.loc[j])
</code></pre>
<p><strong>Edit:</strong></p>
<p>Instead, you can use <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.spatial.distance.cdist.html" rel="noreferrer">scipy.spatial.distance.cdist</a> which computes distance between each pair of two collections of inputs:</p>
<pre><code>from scipy.spatial.distance import cdist
cdist(df, df, 'euclid')
</code></pre>
<p>This will return a symmetric (44062 by 44062) matrix of Euclidean distances between all the rows of your dataframe. The problem is that you need a lot of memory for it to work (at least 8*44062**2 bytes, i.e. ~16 GB).
So a better option is to use <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.spatial.distance.pdist.html" rel="noreferrer">pdist</a></p>
<pre><code>from scipy.spatial.distance import pdist
pdist(df.values, 'euclid')
</code></pre>
<p>which will return an array (of size 970707891) of all the pairwise Euclidean distances between the rows of df.</p>
<p>P.s. Don't forget to ignore the 'Actual_data' column in the computations of distances. E.g. you can do the following: <code>data = df.drop('Actual_Data', axis=1).values</code> and then <code>cdist(data, data, 'euclid')</code> or <code>pdist(data, 'euclid')</code>. You can also create another dataframe with distances like this:</p>
<pre><code>data = df.drop('Actual_Data', axis=1).values
d = pd.DataFrame(itertools.combinations(df.index, 2), columns=['i','j'])
d['dist'] = pdist(data, 'euclid')
i j dist
0 0 1 ...
1 0 2 ...
2 0 3 ...
3 0 4 ...
...
</code></pre>
|
python|pandas|numpy|dataframe|euclidean-distance
| 8
|
10,090
| 72,557,824
|
Alter dataframe based on values in other rows
|
<p>I'm trying to alter my dataframe to create a Sankey diagram.</p>
<p>I've 3 million rows like this:</p>
<pre><code>client_id | | start_date | end_date | position
1234 16-07-2019 27-03-2021 3
1234 18-07-2021 09-10-2021 1
1234 28-03-2021 17-07-2021 2
1234 10-10-2021 20-11-2021 2
</code></pre>
<p>I want it to look like this:</p>
<pre><code>client_id | | start_date | end_date | position | source | target
1234 16-07-2019 27-03-2021 3 3 2
1234 18-07-2021 09-10-2021 1 1 2
1234 28-03-2021 17-07-2021 2 2 1
1234 10-10-2021 20-11-2021 2 2 4
</code></pre>
<p>Value 4 is the value that I use as the "exit" of the flow.</p>
<p>I have no idea how to do this.</p>
<p>Background: the source and target values contain the position values based on start_date and end_date. So for example in the first row the source is position value 3 but the target is position value 2 because after the end date client changed from position 3 to 2.</p>
|
<p>The source and target are determined by each client's date order, so we can rank the dates within each client and look up the position of the next row.</p>
<pre><code>columns = ["client_id" ,"start_date","end_date","position"]
data = [
["1234","16-07-2019","27-03-2021",3],
["1234","18-07-2021","09-10-2021",1],
["1234","28-03-2021","17-07-2021",2],
["1234","10-10-2021","20-11-2021",2],
["5678","16-07-2019","27-03-2021",3],
["5678","18-07-2021","09-10-2021",1],
["5678","28-03-2021","17-07-2021",2],
["5678","10-10-2021","20-11-2021",2],
]
df = pd.DataFrame(
data,
columns=columns
)
df = df.assign(
start_date = pd.to_datetime(df["start_date"]),
end_date = pd.to_datetime(df["end_date"])
)
sdf = df.assign(
rank=df.groupby("client_id")["start_date"].rank()
)
sdf = sdf.assign(
next_rank=sdf["rank"] + 1
)
combine_result = pd.merge(sdf,
sdf[["client_id", "position", "rank"]],
left_on=["client_id", "next_rank"],
right_on=["client_id", "rank"],
how="left",
suffixes=["", "_next"]
).fillna({"position_next": 4})
combine_result[["client_id", "start_date", "end_date", "position", "position_next"]].rename(
{"position": "source", "position_next": "target"}, axis=1).sort_values(["client_id", "start_date"])
</code></pre>
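<p>For reference, a shorter route to the same <code>source</code>/<code>target</code> columns is <code>groupby</code> + <code>shift(-1)</code>: after sorting by date, the target of each row is simply the next row's position, and the exit value 4 fills the last row of each client. A sketch with one client from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "client_id": ["1234"] * 4,
    "start_date": pd.to_datetime(["2019-07-16", "2021-07-18",
                                  "2021-03-28", "2021-10-10"]),
    "position": [3, 1, 2, 2],
})
df = df.sort_values(["client_id", "start_date"])
df["source"] = df["position"]
# next position within each client; NaN (no next row) becomes the exit value 4
df["target"] = df.groupby("client_id")["position"].shift(-1).fillna(4).astype(int)
print(df[["source", "target"]])
```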
|
python|python-3.x|pandas|dataframe|sankey-diagram
| 1
|
10,091
| 72,550,951
|
Use pandas.groupby() and cumsum() with row wise condition check and replacement
|
<p>We have a dataframe, df, with four variables A, B, C, and D.</p>
<p>Variable A has three levels, 1, 2 and 3 (in this example only).</p>
<p>Variable B, C and D are continuous variables.</p>
<p>Formula used for filling column C based on A and B is</p>
<pre><code>df['C'] = 150 - df['B'].groupby(df['A']).cumsum()
</code></pre>
<p><strong>Desired result is in column D</strong></p>
<p>Basically, a value in column C cannot be >150 or <0. For instance, at index 24, column C with 163.5>150 is replaced with 150 in column D, and the values in subsequent rows change. Again, at index 28, column C takes the value 150-180=-30<0 and is thus replaced with 0 in column D, and the values in subsequent rows change.</p>
<p><strong>df</strong></p>
<pre><code>ID A B C D
0 1 21 129 129
1 1 -1.5 130.5 130.5
2 1 -1.5 132 132
3 1 13.5 118.5 118.5
4 1 13.5 105 105
5 1 13.5 91.5 91.5
6 2 21 129 129
7 2 -1.5 130.5 130.5
8 2 6 124.5 124.5
9 2 13.5 111 111
10 2 13.5 97.5 97.5
11 2 13.5 84 84
12 2 13.5 70.5 70.5
13 2 -9 79.5 79.5
14 2 6 73.5 73.5
15 2 -9 82.5 82.5
16 2 6 76.5 76.5
17 2 -1.5 78 78
18 2 13.5 64.5 64.5
19 2 -1.5 66 66
20 2 13.5 52.5 52.5
21 2 13.5 39 39
22 2 -106.5 145.5 145.5
23 2 6 139.5 139.5
24 2 -24 163.5 150
25 2 6 157.5 144
26 2 13.5 144 130.5
27 2 13.5 130.5 117
28 3 180 -30 0
29 3 -9 -21 9
30 3 6 -27 3
31 3 -1.5 -25.5 4.5
32 3 13.5 -39 0
33 3 -1.5 -37.5 1.5
34 3 13.5 -51 0
35 3 -24 -27 24
</code></pre>
<p><strong>NOTE</strong></p>
<p>Please see the changes between Column C and D from index no. 24.</p>
<p>Formula used to calculate values in column D from index no. 24 to 35 is as given below:</p>
<pre><code>ID formula
24 163.5>150 (SET TO 150)
25 150-6=144
26 144-13.5=130.5
27 130.5-13.5=117
28 150-180=-30 (SET TO 0)
29 0-(-9)=9
30 9-6=3
31 3-(-1.5)=4.5
32 4.5-13.5=-9 (SET TO 0)
33 0-(-1.5)=1.5
34 1.5-13.5=-12 (SET TO 0)
35 0-(-24)=24
</code></pre>
|
<pre><code>import pandas as pd

qqq = []
def func_data(x):
    aaa = 150
    for i in x:
        aaa -= i
        if aaa > 150:
            aaa = 150
        if aaa < 0:
            aaa = 0
        qqq.append(aaa)

df.groupby(['A'])['B'].apply(func_data)  # run for its side effect of filling qqq
df['F'] = qqq
print(df)
</code></pre>
<p>Output</p>
<pre><code> ID A B C D F
0 0 1 21.0 129.0 129.0 129.0
1 1 1 -1.5 130.5 130.5 130.5
2 2 1 -1.5 132.0 132.0 132.0
3 3 1 13.5 118.5 118.5 118.5
4 4 1 13.5 105.0 105.0 105.0
5 5 1 13.5 91.5 91.5 91.5
6 6 2 21.0 129.0 129.0 129.0
7 7 2 -1.5 130.5 130.5 130.5
8 8 2 6.0 124.5 124.5 124.5
9 9 2 13.5 111.0 111.0 111.0
10 10 2 13.5 97.5 97.5 97.5
11 11 2 13.5 84.0 84.0 84.0
12 12 2 13.5 70.5 70.5 70.5
13 13 2 -9.0 79.5 79.5 79.5
14 14 2 6.0 73.5 73.5 73.5
15 15 2 -9.0 82.5 82.5 82.5
16 16 2 6.0 76.5 76.5 76.5
17 17 2 -1.5 78.0 78.0 78.0
18 18 2 13.5 64.5 64.5 64.5
19 19 2 -1.5 66.0 66.0 66.0
20 20 2 13.5 52.5 52.5 52.5
21 21 2 13.5 39.0 39.0 39.0
22 22 2 -106.5 145.5 145.5 145.5
23 23 2 6.0 139.5 139.5 139.5
24 24 2 -24.0 163.5 150.0 150.0
25 25 2 6.0 157.5 144.0 144.0
26 26 2 13.5 144.0 130.5 130.5
27 27 2 13.5 130.5 117.0 117.0
28 28 3 180.0 -30.0 0.0 0.0
29 29 3 -9.0 -21.0 9.0 9.0
30 30 3 6.0 -27.0 3.0 3.0
31 31 3 -1.5 -25.5 4.5 4.5
32 32 3 13.5 -39.0 0.0 0.0
33 33 3 -1.5 -37.5 1.5 1.5
34 34 3 13.5 -51.0 0.0 0.0
35 35 3 -24.0 -27.0 24.0 24.0
</code></pre>
<p><code>apply</code> runs <code>func_data</code> over each group's <code>B</code> values to test the conditions and clamp the running value; the results accumulate in the list <code>qqq</code>, which is then assigned to column <code>F</code>.</p>
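<p>The same running clamp can be written without module-level state using <code>itertools.accumulate</code>; here is a sketch on the rows of group A=3 from the question (indices 28-31):</p>

```python
from itertools import accumulate

import pandas as pd

def clamped_balance(goals, start=150, low=0, high=150):
    # Running value start - cumsum(goals), clamped into [low, high] after each step.
    step = lambda acc, b: min(max(acc - b, low), high)
    return list(accumulate(goals, step, initial=start))[1:]  # drop the seed value

df = pd.DataFrame({"A": [3, 3, 3, 3], "B": [180.0, -9.0, 6.0, -1.5]})
df["F"] = df.groupby("A")["B"].transform(lambda s: clamped_balance(s.tolist()))
```

Because `transform` aligns the returned list positionally with each group, no global accumulator is needed.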
|
python|pandas|dataframe|group-by|cumsum
| 1
|
10,092
| 72,533,815
|
I trained a model in torch and then convert it to caffe and after that to tf. How to convert it now to onnx?
|
<p>I trained a ResNet model in torch.
Then I converted it to Caffe and to TFLite.
Now I want to convert it to ONNX.
How can I do it?
I tried this command:</p>
<pre><code>python3 -m tf2onnx.convert --tflite resnet.lite --output resnet.lite.onnx --opset 13 --verbose
</code></pre>
<p>because the current format of the model is tflite,</p>
<p>and got that error:</p>
<pre><code>return packer_type.unpack_from(memoryview_type(buf), head)[0]
struct.error: unpack_from requires a buffer of at least 11202612 bytes for unpacking 4 bytes at offset 11202608 (actual buffer size is 2408448)
</code></pre>
<p>Thanks.</p>
|
<p>You can try something like the approach in this <a href="https://docs.microsoft.com/en-us/windows/ai/windows-ml/tutorials/tensorflow-convert-model" rel="nofollow noreferrer">link</a>;
you may need to freeze the model layers before starting the conversion.</p>
<pre><code>pip install onnxruntime
pip install git+https://github.com/onnx/tensorflow-onnx
python -m tf2onnx.convert --saved-model ./checkpoints/yolov4.tf --output model.onnx --opset 11 --verbose
</code></pre>
<p>You can also try this approach: <a href="https://onnxruntime.ai/docs/tutorials/tf-get-started.html" rel="nofollow noreferrer">link</a></p>
<pre><code>pip install tf2onnx
</code></pre>
<pre><code>import tensorflow as tf
import tf2onnx
import onnx
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(4, activation="relu"))
input_signature = [tf.TensorSpec([3, 3], tf.float32, name='x')]
# Use from_function for tf functions
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=13)
onnx.save(onnx_model, "dst/path/model.onnx")
</code></pre>
|
python|tensorflow|tensorflow-lite|onnx|tf2onnx
| 0
|
10,093
| 32,152,890
|
ipython date attribute not found
|
<p>I'm using IPython notebook to run some analytics with pandas. However, I'm running into problems with the following function and the date attributes:</p>
<pre><code>def get_date(time_unit):
    t = tickets['purchased date'].map(lambda x: x.time_unit)
    return t

# calling it like this produces this error
get_date('week')
</code></pre>
<p><strong><code>AttributeError: 'Timestamp' object has no attribute 'time_unit'</code></strong></p>
<p>but this works without a function</p>
<pre><code>tickets['purchased date'].map(lambda x: x.week)
</code></pre>
<p>I'm trying to create the function <code>get_date(time_unit)</code> because I will later need to call <code>get_date('week')</code>, <code>get_date('year')</code>, etc.</p>
<p>How can I convert the string I'm passing into a valid attribute so I can use the function as I intend?</p>
<p>thanks.</p>
|
<p>When you do -</p>
<pre><code>t = tickets['purchased date'].map(lambda x: x.time_unit)
</code></pre>
<p>This does not substitute the contents of the <code>time_unit</code> string to produce <code>x.week</code>; instead it tries to access an attribute literally named <code>time_unit</code> on <code>x</code>, which causes the error you are seeing.</p>
<p>You should use <code>getattr</code> to get an attribute from an object using the string name of the attribute -</p>
<pre><code>t = tickets['purchased date'].map(lambda x: getattr(x, time_unit))
</code></pre>
<p>From <a href="https://docs.python.org/2/library/functions.html#getattr" rel="nofollow noreferrer">documentation of <code>getattr()</code> -</a></p>
<blockquote>
<p><strong>getattr(object, name[, default])</strong></p>
<p>Return the value of the named attribute of object. name must be a string. If the string is the name of one of the object’s attributes, the result is the value of that attribute. For example, <strong><code>getattr(x, 'foobar')</code> is equivalent to <code>x.foobar</code>.</strong></p>
</blockquote>
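<p>A minimal sketch of the resulting function on hypothetical data (using <code>year</code> and <code>month</code>, which exist on every <code>Timestamp</code>):</p>

```python
import pandas as pd

def get_date(series, time_unit):
    # getattr looks up the attribute whose name is stored in the string time_unit
    return series.map(lambda x: getattr(x, time_unit))

dates = pd.Series(pd.to_datetime(["2015-08-21", "2015-12-31"]))
years = get_date(dates, "year")
months = get_date(dates, "month")
```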
|
python|pandas|ipython|ipython-notebook
| 2
|
10,094
| 32,386,791
|
Returing a single boolean value if value is duplicated in pandas series?
|
<p>Given the following pandas DataFrame:</p>
<pre><code>mydf = pd.DataFrame([{'Campaign': 'Campaign X', 'Date': '24-09-2014', 'Spend': 1.34, 'Clicks': 241}, {'Campaign': 'Campaign Y', 'Date': '24-08-2014', 'Spend': 2.89, 'Clicks': 12}, {'Campaign': 'Campaign X', 'Date': '24-08-2014', 'Spend': 1.20, 'Clicks': 1}, {'Campaign': 'Campaign Z2', 'Date': '24-08-2014', 'Spend': 4.56, 'Clicks': 13}] )
</code></pre>
<p><a href="https://i.stack.imgur.com/1555y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1555y.png" alt="enter image description here"></a></p>
<p>I simply want to check (and return a <em>single</em> boolean value) if a given campaign appears more than once. </p>
<p>I could do:</p>
<pre><code>True in mydf['Campaign'].duplicated().get_values()
</code></pre>
<p>or:</p>
<pre><code>True if len(mydf.drop_duplicates('Campaign')) < len(mydf['Campaign']) else False
</code></pre>
<p>Is there a better/more efficient way? If not, which of the above is preferable?</p>
|
<p>It looks like your first proposed method is the fastest on a small dataframe.</p>
<pre><code>%timeit mydf.Campaign.duplicated().any()
The slowest run took 4.08 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 39.9 µs per loop
%timeit True in mydf['Campaign'].duplicated().get_values()
The slowest run took 4.23 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 34 µs per loop
%timeit True if len(mydf.drop_duplicates('Campaign')) < len(mydf['Campaign']) else False
1000 loops, best of 3: 311 µs per loop
</code></pre>
<p>On a larger dataframe, however, my method (the first one below) is slightly faster.</p>
<pre><code>mydf = pd.DataFrame({'Campaign': np.random.choice(list('ABCDEFGHIJKLMNOPQRSTUVWXYZ'), 1_000_000, replace=True), 'Date': pd.date_range('2015-1-1', periods=1_000_000), 'Spend': np.random.randn(1_000_000), 'Clicks': np.random.rand(1_000_000)})
%timeit mydf.Campaign.duplicated().any()
100 loops, best of 3: 11.2 ms per loop
%timeit True in mydf['Campaign'].duplicated().get_values()
100 loops, best of 3: 12.3 ms per loop
%timeit True if len(mydf.drop_duplicates('Campaign')) < len(mydf['Campaign']) else False
10 loops, best of 3: 138 ms per loop
</code></pre>
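<p>For completeness, a tiny sketch of the one-liner on data shaped like the question's, plus an equivalent check via <code>nunique</code>:</p>

```python
import pandas as pd

mydf = pd.DataFrame({"Campaign": ["Campaign X", "Campaign Y", "Campaign X", "Campaign Z2"]})
has_duplicates = bool(mydf["Campaign"].duplicated().any())
# Equivalent check: fewer unique values than rows means at least one duplicate
same_answer = mydf["Campaign"].nunique() < len(mydf)
```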
|
python|python-2.7|pandas
| 1
|
10,095
| 40,561,836
|
Copy certain rows from pandas dataframe to a new one (Time condition)
|
<p>I have a dataframe which looks like this:</p>
<pre><code> pressure mean pressure std
2016-03-01 00:00:00 615.686441 0.138287
2016-03-01 01:00:00 615.555000 0.067460
2016-03-01 02:00:00 615.220000 0.262840
2016-03-01 03:00:00 614.993333 0.138841
2016-03-01 04:00:00 615.075000 0.072778
2016-03-01 05:00:00 615.513333 0.162049
................
</code></pre>
<p>The first column is the index column.</p>
<p>I want to create a new dataframe with only the rows of 3pm and 3am,
so it will look like this:</p>
<pre><code> pressure mean pressure std
2016-03-01 03:00:00 614.993333 0.138841
2016-03-01 15:00:00 616.613333 0.129493
2016-03-02 03:00:00 615.600000 0.068889
..................
</code></pre>
<p>Any ideas ?</p>
<p>Thank you !</p>
|
<p>I couldn't load your data using <code>pd.read_clipboard()</code>, so I'm going to recreate some data:</p>
<pre><code>df = pd.DataFrame(index=pd.date_range('2016-03-01', freq='H', periods=72),
data=np.random.random(size=(72,2)),
columns=['pressure', 'mean'])
</code></pre>
<p>Now your dataframe should have a <code>DatetimeIndex</code>. If not, you can use <code>df.index = pd.to_datetime(df.index)</code>.</p>
<p>Then its really easy using boolean indexing:</p>
<pre><code>df.ix[(df.index.hour == 3) | (df.index.hour == 15)]
</code></pre>
<p><a href="https://i.stack.imgur.com/q0R8J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q0R8J.png" alt="enter image description here"></a></p>
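<p>The same selection can be written with <code>Index.hour.isin</code>; a sketch with random data shaped like the answer's (the timedelta-built index is just a version-safe way to get hourly timestamps):</p>

```python
import numpy as np
import pandas as pd

# 72 hourly timestamps starting 2016-03-01
idx = pd.Timestamp("2016-03-01") + pd.to_timedelta(range(72), unit="h")
df = pd.DataFrame(np.random.random((72, 2)), index=idx,
                  columns=["pressure mean", "pressure std"])

# keep only the 03:00 and 15:00 rows
subset = df.loc[df.index.hour.isin([3, 15])]
```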
|
python|pandas|dataframe
| 3
|
10,096
| 40,562,728
|
Putting lower bound and upper bounds on numpy.random.exponential
|
<p>I want to draw samples from the exponential distribution with lambda = 2, but they must be bounded between 1 and 10. I know the usual syntax for drawing samples from the exponential distribution, but I do not know how to bound them.</p>
<p>Also I cannot use scipy.</p>
|
<pre><code>import numpy as np

t = 0
# redraw until the sample lands in [1, 10]
while t < 1 or t > 10:
    t = np.random.exponential(2)
</code></pre>
<p>That should do it</p>
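<p>A vectorized sketch that draws many bounded samples at once via rejection sampling. Note that <code>np.random.exponential</code> takes the <em>scale</em> 1/&lambda;, so &lambda; = 2 corresponds to <code>scale=0.5</code> (the snippet above passes 2 as the scale):</p>

```python
import numpy as np

def bounded_exponential(n, lam=2.0, low=1.0, high=10.0, seed=None):
    # Rejection sampling: draw in batches, keep only values inside [low, high].
    rng = np.random.default_rng(seed)
    kept = np.empty(0)
    while kept.size < n:
        draw = rng.exponential(scale=1.0 / lam, size=8 * n)
        kept = np.concatenate([kept, draw[(draw >= low) & (draw <= high)]])
    return kept[:n]
```

The batch factor (8 here) is arbitrary; with &lambda; = 2 only about 13.5% of raw draws land in [1, 10], so over-drawing keeps the loop short.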
|
python|numpy|exponential
| 0
|
10,097
| 40,646,767
|
Which file to be used for eval step in TEXTSUM?
|
<p>I am working on the textsum model of TensorFlow, which does text summarization. I was following the commands specified in the README at <a href="https://github.com/tensorflow/models/tree/master/textsum" rel="nofollow noreferrer">github/textsum</a>. It said that a file named validation, present in the data folder, is to be used in the eval step, but there was no validation file in the data folder.</p>
<p>I thought to make one myself and later realized that it should be a binary file. So I needed to prepare a text file that would be converted to binary.
But that text file has to have a specific format. Will it be the same as that of the file used in the train step? Can I use the same file for the train step and the eval step?
The sequence of steps I followed:</p>
<p>Step 1: Train the model by using the vocab file which was mentioned as "updated" for toy dataset</p>
<p>Step 2: Training continued for a while and it got "Killed" at running_avg_loss: 3.590769</p>
<p>Step 3: Using the same data and vocab files for eval step, as had been used for training, I ran eval. It keeps on running with running_avg_loss between 6 to 7</p>
<p>I am doubtful about step 3: whether the same files should be used or not.</p>
|
<p>You only need to run eval when you want to test your trained model against a set of data it has never seen before. I have also been using it to determine whether I am starting to overfit the data.</p>
<p>You will usually set aside 20-30% of your overall dataset for the eval process and train against the rest. Once training is complete, you can run decode right away if you wish, or first run eval against the 20-30% you set aside from the start. Once you feel comfortable with the results, run decode to get your summaries.</p>
<p>Your binary format should be the same as your training data.</p>
|
tensorflow|eval|textsum
| 1
|
10,098
| 61,878,521
|
slicing by indices on multiple axes numpy
|
<pre><code>A = np.arange(120).reshape(2, 3, 4, 5)
is_ = [1, 2]
js = [0, 1, 2, 3]
A[:, :, is_, :][:, :, :, js].shape == (2, 3, 2, 4)
</code></pre>
<p>Is there a better way of doing the double slice here?</p>
<p>I tried <code>A[:, :, is_, js]</code> but that does it "zip" style.</p>
<p>Efficiency would be nice too, I'm having to do this in a double loop...</p>
|
<p>You can do it in a single indexing step. You just need to add a new axis to the first index array so the two broadcast against each other:</p>
<pre><code>is_ = np.array([1, 2])
js = np.array([0, 1, 2, 3])
A[:, :, is_[:,None], js]
</code></pre>
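<p>A quick check that the single-step indexing matches the chained version:</p>

```python
import numpy as np

A = np.arange(120).reshape(2, 3, 4, 5)
is_ = np.array([1, 2])
js = np.array([0, 1, 2, 3])

# is_[:, None] has shape (2, 1); with js of shape (4,) it broadcasts to (2, 4),
# selecting every (i, j) pair instead of zipping the two arrays together
single = A[:, :, is_[:, None], js]
chained = A[:, :, is_, :][:, :, :, js]
```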
|
python|numpy
| 2
|
10,099
| 61,998,420
|
NaN in else statement
|
<p>Please help; I can't figure out why the function is returning <code>NaN</code> in the <code>else</code> statement.</p>
<p>The goal is to get the mean of all goals scored by a team in the whole season, without the last match. If there is only one match in the season, I want to return the goals scored in that match.</p>
<p>DF:</p>
<pre><code>HOME AWAY SEASON HOME_GOALS AWAY_GOALS ...
Team 1 Team 2 2020 1 1
Team 3 Team 4 2020 2 3
Team 1 Team 3 2019 2 1
Team 1 Team 4 2020 3 2
</code></pre>
<p>Expected output:</p>
<pre><code>HOME AWAY SEASON HOME_GOALS AWAY_GOALS HOME_GOALS_LAST_SEASON
Team 1 Team 2 2020 1 1 2 (1+3)/2
Team 3 Team 4 2020 2 3 2
Team 1 Team 3 2019 4 1 4
Team 1 Team 4 2020 3 2 2 (1+3)/2
</code></pre>
<p>My code:</p>
<pre><code>df.insert(loc = 1, column ="HOME_GOALS_LAST_SEASON", value = 99.9 )

def last_season(team):
    if len(team["HOME_GOALS"] > 1):
        return team["HOME_GOALS"].iloc[:-1].mean()
    else:
        return team["HOME_GOALS"].iloc[0]

df = df.set_index(["HOME", "SEASON"])
df["HOME_GOALS_LAST_SEASON"] = df.groupby(["HOME", "SEASON"]).apply(last_season)
df = df.reset_index()
</code></pre>
|
<p>Why complicate your life? If there is only one match, the mean is simply its score.</p>
<p>No need for <code>if</code>-<code>else</code>.</p>
<p>So your command</p>
<pre><code>df["HOME_GOALS_LAST_SEASON"] = df.groupby(["HOME", "SEASON"]).apply(last_season)
</code></pre>
<p>replace with</p>
<pre><code>df["HOME_GOALS_LAST_SEASON"] = df.groupby(["HOME", "SEASON"])["HOME_GOALS"].mean()
</code></pre>
<p>(and remove your function definition).</p>
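<p>A note on where the original <code>NaN</code> comes from: <code>len(team["HOME_GOALS"] > 1)</code> takes the length of a <em>boolean</em> Series, so it is truthy even for one-row groups, and <code>.iloc[:-1].mean()</code> on a single row is the mean of an empty slice, which is <code>NaN</code>. If you do want "all but the last match" for multi-match groups, a sketch with the parentheses fixed (hypothetical data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "HOME": ["Team 1", "Team 3", "Team 1", "Team 1"],
    "SEASON": [2020, 2020, 2019, 2020],
    "HOME_GOALS": [1, 2, 2, 3],
})

def last_season(goals):
    if len(goals) > 1:          # parentheses: len(...) > 1, not len(... > 1)
        return goals.iloc[:-1].mean()
    return goals.iloc[0]

out = df.groupby(["HOME", "SEASON"])["HOME_GOALS"].apply(last_season)
```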
|
python|pandas|if-statement
| 1
|