| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
1,900
| 73,821,175
|
Sum values in specific rows and place value in first column of that row
|
<p>I wish to sum the values in the row that contains 'BB' and place that value in the first column of that row.</p>
<p><strong>Data</strong></p>
<pre><code>ID Q121 Q221 Q321 Q421
AA 8.0 4.8 3.1 5.3
BB 0.6 0.7 0.3 0.9
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID Q121 Q221 Q321 Q421
AA 8.0 4.8 3.1 5.3
BB 2.5 0.0 0.0 0.0
</code></pre>
<p><strong>Doing</strong></p>
<pre><code> df.loc[df['ID'] == BB, 'Q121','Q221','Q321','Q421'].sum()
</code></pre>
<p>Not sure how to then place this value in the first column</p>
<p>I am still researching, any suggestion is appreciated.</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer"><strong><code>pandas.DataFrame</code></strong></a> with a boolean mask to assign the new values:</p>
<pre><code>mask = df['ID'].eq('BB')                      # rows to update
cols = df.columns.difference(['ID'])          # value columns, sorted, so 'Q121' comes first
update = [df.sum(axis=1, numeric_only=True)]  # row sums, paired only with the first column below
df.loc[mask, cols] = pd.DataFrame(dict(zip(cols, update)), index=df.index)
df.fillna(0, inplace=True)                    # remaining value columns of the 'BB' row become 0
</code></pre>
<p><strong>Note</strong>: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><strong><code>pandas.DataFrame.sum</code></strong></a> is used to calculate the sum over <code>axis=1</code> (across the columns).</p>
<h5># Output :</h5>
<pre><code>print(df)
ID Q121 Q221 Q321 Q421
0 AA 8.0 4.8 3.1 5.3
1 BB 2.5 0.0 0.0 0.0
</code></pre>
|
python|pandas|numpy
| 2
|
1,901
| 73,556,640
|
How to select multiple columns from two different Pandas dataframes in Python
|
<p>I have the following dataframes D1 and D2.</p>
<pre><code># importing sparksession from pyspark.sql module
from pyspark.sql import SparkSession
# creating sparksession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()
# list of employee data
data = [["1", "sravan", "1","2","3"],
["2", "ojaswi", "2","3","3"],
["3", "rohith", "3","4","3"],
["4", "sridevi", "4","5","3"],
["5", "bobby", "5","2","3"]]
# specify column names
columns = ['ID', 'NAME', 'Company','Company1','Company2']
# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)
dataframe.show()
data1 = [["1", "45000", "1","1","2","1"],
["2", "145000", "2","1","2","2"],
["6", "45000", "6","1","2","3"],
["5", "34000", "7","1","2","3"]]
# specify column names
columns = ['ID', 'salary', 'department','Branch','Branch1','Branch2']
# creating a dataframe from the lists of data
dataframe1 = spark.createDataFrame(data1, columns)
dataframe1.show()
dataframe = dataframe.join(dataframe1,
dataframe.Company == dataframe1.department,
"inner").select(dataframe.ID, dataframe.NAME, dataframe.Company,round(dataframe1.salary, 2).alias("NewFlexOne"))
display(dataframe)
+---+-------+-------+--------+--------+
| ID| NAME|Company|Company1|Company2|
+---+-------+-------+--------+--------+
| 1| sravan| 1| 2| 3|
| 2| ojaswi| 2| 3| 3|
| 3| rohith| 3| 4| 3|
| 4|sridevi| 4| 5| 3|
| 5| bobby| 5| 2| 3|
+---+-------+-------+--------+--------+
+---+------+----------+------+-------+-------+
| ID|salary|department|Branch|Branch1|Branch2|
+---+------+----------+------+-------+-------+
| 1| 45000| 1| 1| 2| 1|
| 2|145000| 2| 1| 2| 2|
| 6| 45000| 6| 1| 2| 3|
| 5| 34000| 7| 1| 2| 3|
+---+------+----------+------+-------+-------+
</code></pre>
<p>I want to join D1 and D2, select a few columns from the D1 and D2 dataframes, and rename one column in the D2 dataframe in a single line.
Below is how it is done in PySpark.</p>
<pre><code>D1 = D1.join(D2, (when(D1.Company > 15, 16).otherwise(D1.Company)) == D2.department, 'inner').select(D1.ID, D1.Name, D1.Company, D1.Company1, D1.Company2, round(D2.Branch, 2).alias("NewBranch"))
D1 = D1.join(D2, (when(D1.FlexTwo > 15, 16).otherwise(D1.company1)) == D2.department, 'inner').select(D1.ID, D1.Name, D1.Company, D1.Company1, D1.Company2,D1.NewBranch, round(D2.Branch1, 2).alias("NewBranch1"))
</code></pre>
<p>Below is the code I tried in Python which does not work as expected.</p>
<pre><code>D1 = pd.merge(D1, D2, how='inner', left_on = np.where((D1['company']) > 15, 16, D1['company']), right_on = 'department').loc[:, ["company" ,"ID","Name","Company1","Company2","Branch"]]
D1=round(D1.rename(columns = {'Branch':'NewBranch'}),2)
D1 = pd.merge(D1, D2, how='inner', left_on = np.where((D1['company1']) > 15, 16, D1['company1']), right_on = 'department').loc[:, ["company" ,"ID","Name","Company1","Company2","NewBranch"]]
</code></pre>
<p>Kindly let me know how to achieve this in Python.</p>
<p>Thank you.</p>
|
<p>As per the syntax of <code>pd.merge()</code>, <code>left_on</code> and <code>right_on</code> can be column names or pre-computed lists of values with the same length as the dataframe (<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">official documentation</a>). So a little workaround is required: pre-compute the condition <code>when(D1.Company > 15, 16).otherwise(D1.Company)</code> in a new column and use that column in the join:</p>
<pre class="lang-py prettyprint-override"><code># columns / columns1 are the two column lists from the question
D1 = pd.DataFrame(data=data, columns=columns)
D2 = pd.DataFrame(data=data1, columns=columns1)
D1['Company_join'] = D1[['Company']].apply(lambda row: "16" if int(row['Company']) > 15 else row['Company'], axis=1)
D1_D2_join = pd.merge(D1, D2, how='inner', left_on="Company_join", right_on='department')
print(D1_D2_join)
>> ID_x NAME Company Company1 Company2 Company_join ID_y salary department Branch Branch1 Branch2
>> 0 1 sravan 1 2 3 1 1 45000 1 1 2 1
>> 1 2 ojaswi 2 3 3 2 2 145000 2 1 2 2
</code></pre>
<p>The rest (rounding and column renaming) is trivial.</p>
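<p>For completeness, a minimal sketch of those remaining steps (assuming <code>Branch</code> is the column to rename, as in the original PySpark snippet; the cast is needed because the sample data stores the numbers as strings):</p>
<pre><code>D1_D2_join = D1_D2_join.rename(columns={'Branch': 'NewBranch'})
D1_D2_join['NewBranch'] = D1_D2_join['NewBranch'].astype(float).round(2)
</code></pre>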
|
python|pandas|dataframe|join|pyspark
| 0
|
1,902
| 71,372,883
|
Constrain tensorflow probability to positive coefficients
|
<p>I have a TensorFlow STS model in which I wish to constrain the linear regression coefficients to be greater than zero. I understand this can be achieved by passing a HalfNormal distribution as a prior:</p>
<pre><code>network_effects = tfp.sts.LinearRegression(
design_matrix=tf.stack((df-df.mean()).values.astype(np.float32)),
name='network_effects',
weights_prior=tfd.HalfNormal(scale=2.0))
autoregressive = sts.Autoregressive(
order=8,
observed_time_series=observed_time_series,
name='autoregressive')
</code></pre>
<p>However, it complains that my dtypes are not the same with the error:</p>
<pre><code>ValueError: SampleHalfNormal, type=<dtype: 'float32'>, must be of the same type (<dtype: 'float64'>) as design_matrix_linop.
</code></pre>
<p>Is my method of constraining the Linear Regressor coefficients correct and if so, how do I specify that the HalfNormal distribution is of type float64?</p>
|
<p>It is not at all obvious how this is done, but I'm not overly familiar with TensorFlow. However, the answer is to set the scale argument to float64 as follows:</p>
<pre><code>tfd.HalfNormal(scale=np.float64(2.0))
</code></pre>
<p>The answer came from this post here:
<a href="https://stackoverflow.com/questions/55258627/how-can-i-create-an-array-of-distributions-in-tensorflow-probability">How can I create an array of distributions in TensorFlow Probability?</a></p>
|
python|tensorflow|tensorflow-probability|sts
| 0
|
1,903
| 71,264,782
|
Filtering Boolean in Dataframe
|
<p>I have a df with ±100k rows and 10 columns.
I would like to find/filter which rows contain at least 2 to 4 True values.
For simplicity's sake, let's say I have this df:</p>
<pre><code> A B C D E F
1 True True False False True
2 False True True True False
3 False False False False False
4 True False False False True
5 True False False False False
</code></pre>
<p>Expected output:</p>
<pre><code>A B C D E F
1 True True False False True
2 False True True True False
4 True False False False True
</code></pre>
<p>I have tried using</p>
<pre><code>df[(df['B']==True) | (df['C']==True) | (df['D']==True)| (df['E']==True)| (df['F']==True)]
</code></pre>
<p>But this only eliminates rows that are all False, and it doesn't work if I want to find instances of at least 2/3 True values.</p>
<p>Can anyone please help? Appreciate it.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>DataFrame.select_dtypes</code></a> for only boolean columns, count <code>True</code>s by <code>sum</code> and then filter values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.between.html" rel="nofollow noreferrer"><code>Series.between</code></a> in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df[df.select_dtypes(bool).sum(axis=1).between(2,4)]
print (df)
A B C D E F
0 1 True True False False True
1 2 False True True True False
3 4 True False False False True
</code></pre>
|
python|pandas|dataframe|filter|boolean
| 0
|
1,904
| 71,113,830
|
Pandas/Python: Store values of columns into list based on value in another column
|
<p>I have the following problem:</p>
<p>I want to store the values of the four different columns (Age_1 - Age_4) within a dataframe into a list, depending on the first column 'Year'.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Year</th>
<th>Age_1</th>
<th>Age_2</th>
<th>Age_3</th>
<th>Age_4</th>
</tr>
</thead>
<tbody>
<tr>
<td>2000</td>
<td>18</td>
<td>20</td>
<td>25</td>
<td>56</td>
</tr>
<tr>
<td>2000</td>
<td>17</td>
<td>32</td>
<td>24</td>
<td>41</td>
</tr>
<tr>
<td>2001</td>
<td>20</td>
<td>26</td>
<td>24</td>
<td>39</td>
</tr>
</tbody>
</table>
</div>
<p>...</p>
<p>So basically I want a list for every year that contains all the ages in the dataset for that year, e.g. the first list would be list_2000 = [18,20,25,56,17,32,24,41...] and the second would be list_2001 = [20,26,24,39...]</p>
<p>Actually I assume that this should be easy to do, but my attempts haven't been successful yet. Any help is appreciated.</p>
|
<p>IIUC,</p>
<pre><code>df.melt('Year',
value_vars=['Age_1', 'Age_2', 'Age_3', 'Age_4'])\
.groupby('Year')['value'].agg(list).to_dict()
</code></pre>
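<p>Applied to the sample frame above, this should produce a dict keyed by year. Note that <code>melt</code> stacks column by column, so within each year the values come grouped by <code>Age_1</code>, then <code>Age_2</code>, and so on:</p>
<pre><code>{2000: [18, 17, 20, 32, 25, 24, 56, 41], 2001: [20, 26, 24, 39]}
</code></pre>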
|
python|pandas
| 0
|
1,905
| 71,142,498
|
calculate new mean using old mean
|
<p>I had a dataset <code>df</code> that looked like this:</p>
<pre><code>Value themes country date
-1.975767 Weather Brazil 2022-02-13
-0.540979 Fruits China 2022-02-13
-2.359127 Fruits China 2022-02-13
-2.815604 Corona China 2022-02-13
-0.712323 Weather UK 2022-02-13
-0.929755 Weather Brazil 2022-02-13
</code></pre>
<p>I grouped themes+country to calculate the mean and count of values for each combination of theme and country (e.g. Weather, Brazil or Weather, UK):</p>
<pre><code>df_calculations = df.groupby(["themes", "country"], as_index = False)["Value"].mean()
df_calculations['count'] = df.groupby(["themes", "country"])["Value"].count().tolist()
</code></pre>
<p>Then I added this info to a new table <code>df_avg</code> that looks like this:</p>
<pre><code>country type mean count last_checked_date
Brazil Weather x 2 2022-02-13 #same for all rows
Brazil Corona y 2022-02-13
China Corona z 1 2022-02-13
China Fruits s 2 2022-02-13
</code></pre>
<p>However, there are now additional rows in the same original <code>df</code>.</p>
<pre><code>Value themes country date
-1.975560 Weather Brazil 2022-02-15
-0.540123 Fruits China 2022-02-16
-2.359234 Fruits China 2022-02-16
-2.359234 Corona UK 2022-02-16
</code></pre>
<p>I want to go through the <code>df</code> rows whose date is after the <code>last_checked_date</code>.</p>
<p>Then I want to calculate a new mean for each combination again, but using the old mean and the n value from my <code>df_avg</code> table instead of re-calculating over the whole <code>df</code>.</p>
<p>How can I achieve this?</p>
|
<p>Please see this: <a href="https://math.stackexchange.com/questions/784874/how-to-calculate-new-mean-if-population-is-unknown">Calculate new mean from old mean</a></p>
<p>Since you are maintaining a count (if not, it is pretty trivial) you can use that along with existing mean to calculate updated mean using the new observation.</p>
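<p>A minimal pandas sketch of that update (the <code>df_avg</code> column names are taken from the question; treat the exact merge keys as assumptions):</p>
<pre><code>last_checked = df_avg['last_checked_date'].iloc[0]
new_rows = df[df['date'] > last_checked]

new_stats = (new_rows.groupby(['themes', 'country'])['Value']
                     .agg(new_sum='sum', new_count='count')
                     .reset_index())

merged = df_avg.merge(new_stats, left_on=['type', 'country'],
                      right_on=['themes', 'country'], how='left').fillna(0)

# new_mean = (old_mean * old_count + sum_of_new_values) / (old_count + new_count)
merged['mean'] = ((merged['mean'] * merged['count'] + merged['new_sum'])
                  / (merged['count'] + merged['new_count']))
merged['count'] = merged['count'] + merged['new_count']
</code></pre>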
|
python|pandas|dataframe|numpy|mean
| 0
|
1,906
| 52,251,517
|
Fix duplicate index names in dataframe
|
<p>I am looking for the simplest solution to create a Python data frame from a CSV file that has duplicate index names (s1 and s2 in the example below). </p>
<p>Here is what the CSV file looks like:</p>
<pre><code> var1 var2 var3
unit x 8 4 12
temp y -1 -4 -3
time
s1 9 12 11
s2 12 15 7
month
s1 1 3 12
s2 2 4 6
</code></pre>
<p>Python data frame should be as follows:</p>
<pre><code> var1 var2 var3
unit x 8 4 12
temp y -1 -4 -3
time s1 9 12 11
time s2 12 15 7
month s1 1 3 12
month s2 2 4 6
</code></pre>
<p>What's the best way to perform this operation?</p>
|
<p>Use:</p>
<pre><code>import numpy as np

#convert index to Series
s = df.index.to_series()
#identify duplicated values
m = s.duplicated(keep=False)
#replace dupes by NaNs and then by forward filling
df.index = np.where(m, s.mask(m).ffill() + ' ' + s.index, s)
#remove only NaNs rows
df = df.dropna(how='all')
print (df)
var1 var2 var3
unit x 8.0 4.0 12.0
temp y -1.0 -4.0 -3.0
time s1 9.0 12.0 11.0
time s2 12.0 15.0 7.0
month s1 1.0 3.0 12.0
month s2 2.0 4.0 6.0
</code></pre>
|
python|pandas|csv|dataframe
| 3
|
1,907
| 60,589,367
|
Use pandas element as key in dictionary
|
<p>I have a dictionary:
<code>fdict={"X":['tf','pytorch','keras'],"Y":['Gym','Scikit']}</code>
and a dataframe <code>df</code> with columns <code>c1</code> and <code>c2</code>:</p>
<pre><code>c1  c2
X   pesos
Y   long
X   Myst
</code></pre>
<p>and I want to get:
<code>'pytorch' in fdict[df['c1']]</code> as a boolean response; in this case it would be <code>True</code></p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>Series.apply</code></a> with lambda function and <code>get</code>, output is boolean <code>Series</code>:</p>
<pre><code>m = df['c1'].apply(lambda x: 'pytorch' in fdict.get(x, []))
print (m)
0 True
1 False
2 True
Name: c1, dtype: bool
</code></pre>
<p>If want test if at least one <code>True</code> add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.any.html" rel="nofollow noreferrer"><code>Series.any</code></a>:</p>
<pre><code>m1 = df['c1'].apply(lambda x: 'pytorch' in fdict.get(x, [])).any()
print (m1)
True
</code></pre>
|
python|pandas|dictionary
| 1
|
1,908
| 60,435,692
|
NOTHING PROVIDES 'virtual/x86_64-oesdk-linux-compilerlibs'
|
<p>I'm banging my head against the wall on this - mostly because I'm really new to Yocto and just getting into the swing of things. I've been building the image github.com/EttusResearch/oe-manifests and have been successful.</p>
<p>Now, I'd like to add tensorflow as a package. To avoid its Bazel and Java dependencies, I decided to create a recipe of my own, using the whl for armv7.</p>
<p>I've followed this article: <a href="https://stackoverflow.com/questions/48660051/yocto-recipe-python-whl-package">Yocto recipe python whl package</a></p>
<p>And used this whl repo: <a href="https://github.com/lhelontra/tensorflow-on-arm/releases" rel="nofollow noreferrer">https://github.com/lhelontra/tensorflow-on-arm/releases</a></p>
<p>I created a layer and then added a recipe which I've named tensorflow_2.0.0.bb which contains:</p>
<pre><code>
SRC_URI = "https://github.com/lhelontra/tensorflow-on-arm/releases/download/v2.0.0/tensorflow-2.0.0-cp37-none-linux_armv7l.whl;downloadfilename=v2.0.0.zip;subdir=${BP}"
SRC_URI[md5sum] = "0af281677f40e4aa1da7bb1b2ba72e18"
SRC_URI[sha256sum] = "3cb1be51fe3081924ddbe69e92a51780458accafd12e39482a872b27b3afff8c"
LICENSE = "BSD-3-Clause"
inherit nativesdk python3-dir
LIC_FILES_CHKSUM = "file:///${S}/tensorflow-2.0.0.dist-info/LICENSE;md5=64a34301f8e355f57ec992c2af3e5157"
PV ="2.0.0"
PN = "tensorflow"
do_unpack[depends] += "unzip-native:do_populate_sysroot"
PROVIDES += "tensorflow"
DEPENDS += "python3"
FILES_${PN} += "\
${libdir}/${PYTHON_DIR}/site-packages/* \
"
do_install() {
install -d ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow-2.0.0.dist-info
install -d ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow
install -m 644 ${S}/tensorflow/* ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow/
install -m 644 ${S}/tensorflow-2.0.0.dist-info/* ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow-2.0.0.dist-info/
}
</code></pre>
<p>The problem is, during building this recipe I get the following error:</p>
<pre><code>ERROR: Nothing PROVIDES 'virtual/x86_64-oesdk-linux-compilerlibs' (but /home/sudilav/oe-core/../meta-tensorflow/recipes-devtools/tensorflow/tensorflow_2.0.0.bb DEPENDS on or otherwise requires it). Close matches:
virtual/nativesdk-x86_64-oesdk-linux-compilerlibs
virtual/x86_64-oesdk-linux-go-crosssdk
virtual/x86_64-oesdk-linux-gcc-crosssdk
ERROR: Required build target 'tensorflow' has no buildable providers.
Missing or unbuildable dependency chain was: ['tensorflow', 'virtual/x86_64-oesdk-linux-compilerlibs']
</code></pre>
<p>Given I'm downloading and unzipping a whl, I can't see why it's flagging up these dependencies. I think the whl does compile, but it's a lot of code to check through. Has anyone seen this before? There's not much from Google on this error :/</p>
|
<p>Bitbake has a tool to create a file with the dependencies tree.</p>
<pre><code>bitbake -g
</code></pre>
<p>or, for a specific recipe:</p>
<pre><code>bitbake -g {recipe name}
</code></pre>
<p>There are dedicated tools to display these trees, like kgraphviewer, and also online tools.
I personally just open the files with a text editor; they are pretty easy to read.</p>
<p>Just search the file for "virtual/x86_64-oesdk-linux-compilerlibs" and see who needs it.</p>
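<p>For example, once the graph files are generated you can grep them for the offending provider (the exact file name depends on your bitbake version, e.g. <code>task-depends.dot</code> or <code>recipe-depends.dot</code>):</p>
<pre><code>bitbake -g tensorflow
grep 'x86_64-oesdk-linux-compilerlibs' task-depends.dot
</code></pre>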
<p>Hope this helps.</p>
|
python|tensorflow|yocto|bitbake
| 0
|
1,909
| 60,618,799
|
How to subtract values in a column using groupby
|
<p>I have the following dataframe:</p>
<pre><code>ID Days TreatmentGiven TreatmentNumber
--- ---- -------------- ---------------
1 0 False NaN
1 30 False NaN
1 40 True 1
1 56 False NaN
2 0 False NaN
2 14 True 1
2 28 True 2
</code></pre>
<p>I'd like to create a new column with a new baseline for Days based on when the first treatment was given (TreatmentNumber==1), grouped by ID so that the result is the following:</p>
<pre><code>ID Days TreatmentGiven TreatmentNumber New_Baseline
--- ---- -------------- --------------- ------------
1 0 False NaN -40
1 30 False NaN -10
1 40 True 1 0
1 56 False NaN 16
2 0 False NaN -14
2 14 True 1 0
2 28 True 2 14
</code></pre>
<p>What is the best way to do this?</p>
<p>Thank you.</p>
|
<p>The idea is to filter the rows with <code>1</code> in <code>TreatmentNumber</code>, convert the result to a <code>Series</code> indexed by <code>ID</code> for <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>, and then subtract it from the <code>Days</code> column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sub.html" rel="nofollow noreferrer"><code>Series.sub</code></a>:</p>
<pre><code>s = df[df['TreatmentNumber'].eq(1)].set_index('ID')['Days']
#Series created by first True rows by TreatmentGiven per groups
#s = df[df['TreatmentGiven']].drop_duplicates('ID').set_index('ID')['Days']
df['New_Baseline'] = df['Days'].sub(df['ID'].map(s))
print (df)
ID Days TreatmentGiven TreatmentNumber New_Baseline
0 1 0 False NaN -40
1 1 30 False NaN -10
2 1 40 True 1.0 0
3 1 56 False NaN 16
4 2 0 False NaN -14
5 2 14 True 1.0 0
6 2 28 True 2.0 14
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (s)
ID
1 40
2 14
Name: Days, dtype: int64
print (df['ID'].map(s))
0 40
1 40
2 40
3 40
4 14
5 14
6 14
Name: ID, dtype: int64
</code></pre>
|
python|pandas|pandas-groupby
| 2
|
1,910
| 60,680,091
|
How to compare one picture to all data test in siamese neural network?
|
<p>I've been building a siamese neural network using pytorch. But I've only tested it by inserting 2 pictures and calculating the similarity score, where 0 says the pictures are different and 1 says they are the same.</p>
<pre><code>import numpy as np
import os, sys
from PIL import Image
dir_name = "/Users/tania/Desktop/Aksara/Compare" #this should contain 26 images only
X = []
for i in os.listdir(dir_name):
if ".PNG" in i:
X.append(torch.from_numpy(np.array(Image.open("./Compare/" + i))))
x1 = np.array(Image.open("/Users/tania/Desktop/Aksara/TEST/Ba/B/B.PNG"))
x1 = transforms(x1)
x1 = torch.from_numpy(x1)
#x1 = torch.stack([x1])
closest = 0.0 #highest similarity
closest_letter_idx = 0 #index of closest letter 0=A, 1=B, ...
cnt = 0
for i in X:
output = model(x1,i) #assuming x1 is your input image
output = torch.sigmoid(output)
if output > closest:
closest_letter_idx = cnt
closest = output
cnt += 1
</code></pre>
<p>Both pictures are different, but the output is the following error:</p>
<pre><code> File "test.py", line 83, in <module>
X.append(torch.from_numpy(Image.open("./Compare/" + i)))
TypeError: expected np.ndarray (got PngImageFile)
</code></pre>
<p>this is the directory
<a href="https://i.stack.imgur.com/N4jQo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N4jQo.png" alt="enter image description here"></a></p>
|
<p>Yes, there is a way: you could use the softmax function:</p>
<pre><code>output = torch.softmax(output, dim=-1)  # dim is required by torch.softmax
</code></pre>
<p>This returns a tensor of 26 values, each corresponding to the probability that the image corresponds to each of the 26 classes. Hence, the tensor sums to 1 (100%). </p>
<p>However, this method is suitable for classification tasks, as opposed to Siamese Networks. Siamese networks compare between inputs, instead of sorting inputs into classes. From your question, it seems like you're trying to compare 1 picture with 26 others. You could loop over all the 26 samples to compare with, compute & save the similarity score for each, and output the maximum value (that is if you don't want to modify your model):</p>
<pre><code>dir_name = '/Aksara/Compare' #this should contain 26 images only
X = []
for i in os.listdir(dir_name):
if ".PNG" in i:
X.append(torch.from_numpy(np.array(Image.open("./Compare/" + i))))
x1 = np.array(Image.open("test.PNG"))
#do your transformations on x1
x1 = torch.from_numpy(x1)
closest = 0.0 #highest similarity
closest_letter_idx = 0 #index of closest letter 0=A, 1=B, ...
cnt = 0
for i in X:
output = model(x1,i) #assuming x1 is your input image
output = torch.sigmoid(output)
if output > closest:
closest_letter_idx = cnt
closest = output
cnt += 1
print(closest_letter_idx)
</code></pre>
|
pytorch|prediction|siamese-network
| 2
|
1,911
| 60,481,202
|
Groupby and Normalize selected columns Pandas DF
|
<p>I have a sample DF which I want to normalize based on 2 conditions</p>
<p>Creating sample DF:</p>
<pre><code>sample_df = pd.DataFrame(np.random.randint(1,20,size=(10, 3)), columns=list('ABC'))
sample_df["date"]= ["2020-02-01","2020-02-01","2020-02-01","2020-02-01","2020-02-01",
"2020-02-02","2020-02-02","2020-02-02","2020-02-02","2020-02-02"]
sample_df["date"] = pd.to_datetime(sample_df["date"])
sample_df.set_index(sample_df["date"],inplace=True)
del sample_df["date"]
sample_df["A_cat"] = ["ind","sa","sa","sa","ind","ind","sa","sa","ind","sa"]
sample_df["B_cat"] = ["sa","ind","ind","sa","sa","sa","ind","sa","ind","sa"]
sample_df
print (sample_df)
</code></pre>
<p>Output:</p>
<pre><code> A B C A_cat B_cat
date
2020-02-01 14 11 7 ind sa
2020-02-01 19 17 3 sa ind
2020-02-01 19 6 3 sa ind
2020-02-01 3 16 5 sa sa
2020-02-01 12 6 16 ind sa
2020-02-02 1 8 12 ind sa
2020-02-02 10 13 19 sa ind
2020-02-02 17 2 7 sa sa
2020-02-02 9 13 17 ind ind
2020-02-02 17 16 3 sa sa
</code></pre>
<p>Conditions to normalize:</p>
<pre><code>1. Groupby based on index, and
2. Normalize selected columns
</code></pre>
<p>For example, if the selected columns are <code>["A","B"]</code>, it should first group by index (in this case <code>2020-02-01</code>) and normalize the selected columns within the 5 rows of that group.</p>
<p>Other inputs:</p>
<pre><code>selected_column = ["A","B"]
</code></pre>
<p>I can do this in a <code>for loop</code> by iterating over the groups and concatenating the normalized values. So any suggestions for a more efficient/pandas based approach would be great.</p>
<p>Code Tried with Pandas:</p>
<pre><code>from sklearn.preprocessing import StandardScaler
dfg = StandardScaler()
sample_df.groupby([sample_df.index.get_level_values(0)])[selected_columns].transform(dfg.fit_transform)
</code></pre>
<p>Error:</p>
<pre><code>('Expected 2D array, got 1D array instead:\narray=[14. 19. 19. 3. 12.].\nReshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.', 'occurred at index A')
</code></pre>
|
<p>Hope I got your question right. Do you just want to group by index, select values from A and B, and calculate the percentage?</p>
<pre><code> sample_df.reset_index(inplace=True)
sample_df['date']=pd.to_datetime(sample_df['date'])
sample_df.set_index('date', inplace=True)
df2=sample_df[(sample_df['A']>10)&(sample_df['B']>5)]
df2.groupby(df2.index.month)['A_cat'].value_counts(normalize=True)
</code></pre>
<p>And if you want this for all the remaining columns excluding A and B, please try:</p>
<pre><code>df2.groupby(df2.index.month).agg({i:'value_counts' for i in df2.columns[2:]}).groupby(level=0).transform(lambda x: x.div(x.sum()))
</code></pre>
<p>Alternatively, after selecting A and B into a dataframe, drop columns A and B and apply <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer">pd.Series.value_counts</a>:</p>
<pre><code>df2.drop(columns=['A','B'], inplace=True)
df2.apply(pd.Series.value_counts).transform(lambda x: x.div(x.sum()))
</code></pre>
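<p>If the goal was instead a per-group z-score of the selected columns (which is what the <code>StandardScaler</code> attempt in the question suggests), a pandas-only sketch would be:</p>
<pre><code>selected_columns = ['A', 'B']
sample_df[selected_columns] = (sample_df.groupby(level=0)[selected_columns]
                               .transform(lambda s: (s - s.mean()) / s.std()))
</code></pre>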
|
python|pandas|scikit-learn|pandas-groupby|sklearn-pandas
| 1
|
1,912
| 72,698,645
|
Create a new column that starts counting from 0 until the value in another column changes
|
<p>I have a dataframe df that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Months</th>
<th>Borrow_Rank</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create a new variable Months_Adjusted that starts counting from 0 for as long as Borrow_Rank remains the same.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Months</th>
<th>Borrow_Rank</th>
<th>Months_Adjusted</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Thank you all and I apologise if I could have written the question better. This is my first post.</p>
|
<pre><code>import pandas as pd

df = pd.DataFrame({'Borrow_Rank':[1,1,1,1,1,1,1,2,2,2,2,2,3,3,3,1,1,1]})

# start a new group each time Borrow_Rank changes from the previous row
selector = (df['Borrow_Rank'] != df['Borrow_Rank'].shift()).cumsum()
# number the rows 0, 1, 2, ... within each of those runs
df['Months_Adjusted'] = df.groupby(selector).cumcount()
</code></pre>
|
python|pandas|dataframe
| 1
|
1,913
| 72,504,065
|
Iterating through a DataFrame using Pandas UDF and outputting a dataframe
|
<p>I have a piece of code that I want to translate into a Pandas UDF in PySpark but I'm having a bit of trouble understanding whether or not you can use conditional statements.</p>
<pre class="lang-py prettyprint-override"><code>def is_pass_in(df):
x = list(df["string"])
result = []
for i in x:
if "pass" in i:
result.append("YES")
else:
result.append("NO")
df["result"] = result
return df
</code></pre>
<p>The code is super simple: all I'm trying to do is iterate through a column where each row contains a sentence. I want to check if the word <code>pass</code> is in that sentence and, if so, record that in a list that will later become a column right next to the <code>df["string"]</code> column. I've tried to do this using a Pandas UDF, but the error messages I'm getting are something I don't understand because I'm new to Spark. Could someone point me in the right direction?</p>
|
<p>There is no need to use a UDF. This can be done in pyspark as follows. Even in pandas, I would advise against iterating the way you have done; use <code>np.where()</code> instead.</p>
<pre><code>from pyspark.sql.functions import when, col

df.withColumn('result', when(col('string').contains('pass'), 'YES').otherwise('NO')).show()
</code></pre>
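<p>For reference, the <code>np.where()</code> equivalent in plain pandas (a sketch using the <code>string</code> column from the question) would be:</p>
<pre><code>import numpy as np

df['result'] = np.where(df['string'].str.contains('pass'), 'YES', 'NO')
</code></pre>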
|
pyspark|apache-arrow|pandas-udf
| 2
|
1,914
| 59,811,117
|
How to combine years into groups of 10 (decades) in a SQLite query
|
<p>MOVIE (Mid, name, year, rank)
I want to count the number of movies in a decade. Suppose the years in the table start from 1931;
then the years from 1931 to 1940 form a decade.</p>
<p>My Query:</p>
<pre><code>query_7 = pd.read_sql_query('''SELECT yr.year as dec_start,yr.year + 9 as dec_end,COUNT(DISTINCT m.MID) as num_movies
FROM (SELECT DISTINCT year FROM Movie) yr ,Movie m WHERE m.year >= yr.year
AND m.year < yr.year + 10
GROUP BY yr.year
ORDER BY yr.year
''',conn)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/KzIBB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzIBB.png" alt="enter image description here"></a></p>
<p>The problem with this query is that it starts counting a decade from each unique year,
whereas the required output is: if 1931 is the lowest year in the database, then the first decade should start from 1931 and the next from 1941, not from 1936.</p>
<p>any insight on this is much appreciated</p>
|
<p>If you want decades starting with the minimum year in the table use this:</p>
<pre><code>SELECT
(year - s.start_from) / 10 * 10 + s.start_from as dec_start,
(year - s.start_from) / 10 * 10 + s.start_from + 9 as dec_end,
COUNT(DISTINCT MID) as num_movies
FROM Movie CROSS JOIN (SELECT MIN(year) % 10 start_from FROM Movie) s
GROUP BY dec_start, dec_end
</code></pre>
<p>See the <a href="https://www.db-fiddle.com/f/qb3MqMJR8c5761XTjVmJLv/5" rel="nofollow noreferrer">demo</a>.<br/></p>
|
python|sql|pandas|sqlite|date
| 1
|
1,915
| 59,498,558
|
Dataframe to resample an unbalanced dataset
|
<p>The <a href="https://datahub.io/machine-learning/glass/r/glass.csv" rel="nofollow noreferrer">Glass Identification Database</a> is an unbalanced dataset and I want to do some resampling.</p>
<p>There are 214 rows of data across 6 types of glass. Each type has a different number of rows. With the code below I want to perform random under-sampling, bringing all types down to the smallest count (i.e. each type should have 9 rows only).</p>
<pre><code>import pandas
dataset = pandas.read_csv("C:\\temp\\glass.csv", sep = ",")
dataset['Type'] = pandas.Categorical(dataset['Type']).codes
# Class count
count_class_0, count_class_1, count_class_2, count_class_3, count_class_4, count_class_5 = dataset.Type.value_counts()
# Divide by class
df_class_0 = dataset[dataset['Type'] == 0]
df_class_1 = dataset[dataset['Type'] == 1]
df_class_2 = dataset[dataset['Type'] == 2]
df_class_3 = dataset[dataset['Type'] == 3]
df_class_4 = dataset[dataset['Type'] == 4]
df_class_5 = dataset[dataset['Type'] == 5]
class_count = dataset.Type.value_counts()
print('Class 0:', class_count[0]) # 70
print('Class 1:', class_count[1]) # 76
print('Class 2:', class_count[2]) # 13
print('Class 3:', class_count[3]) # 29
print('Class 4:', class_count[4]) # 9
print('Class 5:', class_count[5]) # 17
# Random under-sampling
df_class_0_under = df_class_0.sample(count_class_4)
df_test_under = pandas.concat([df_class_0_under, df_class_4], axis=0)
print('Random under-sampling:')
print(df_test_under.Type.value_counts())
</code></pre>
<p>It shows it wasn't correctly done:</p>
<pre><code>Random under-sampling:
0 13
4 9
</code></pre>
<p>What's the right way to get it done? (bringing all types to the smallest number, i.e. each type to have 9 rows only.)</p>
<p>Thank you.</p>
|
<p>The first idea is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.head.html" rel="nofollow noreferrer"><code>GroupBy.head</code></a> with the minimum of the counts of the <code>Type</code> column:</p>
<pre><code>dataset1 = dataset.groupby('Type').head(dataset.Type.value_counts().min())
</code></pre>
<p>For random sampling use a lambda function:</p>
<pre><code>dataset1 = dataset.groupby('Type').apply(lambda x: x.sample(dataset.Type.value_counts().min()))
</code></pre>
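<p>A quick way to sanity-check the result (this check is an addition, not part of the original answer):</p>
<pre><code>print(dataset1.Type.value_counts())
# every class should now show 9 rows, the size of the smallest class
</code></pre>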
|
python|pandas|dataframe|resampling
| 1
|
1,916
| 59,505,743
|
Separate JSON elements into columns of pandas dataframe
|
<p>I'm trying to separate elements from a list into different columns of a pandas dataframe. Essentially I want, for every <code>tenure</code> option - i.e. Detached, Semi-detached, columns like <code>detached_price</code>, <code>detached_cost</code>, <code>detached_rooms</code> and <code>detached_asking</code>, then the same for Semi-detached, Terraced, Flats etc. </p>
<pre><code>p = [{'br8': [{'tenure': 'Detached',
'data': ['£1,248,554', '£571', '4.3', '£1,063,001']},
{'tenure': 'Semi-detached',
'data': ['£581,968', '£499', '3.3', '£587,188']},
{'tenure': 'Terraced', 'data': ['£520,725', '£516', '3.0', '£474,719']},
{'tenure': 'Flats', 'data': ['£424,898', '£516', '2.0', '£394,092']}]}]
</code></pre>
<p>I've tried this so far, but it won't parse the columns correctly. Does anybody have any advice or direction as to how to achieve my goal here?</p>
<p><code>pd.DataFrame.from_records(p).T</code></p>
<p>My desired output is:</p>
<pre><code> detached_price, detached_cost, detached_rooms, detached_asking, semi_detached_price, etc etc
br8 £1,248,554, £571 , 4.3 , £1,063,001, £581,968
</code></pre>
|
<p>This will be quite a long dataframe, but the below should work:</p>
<p>First we import some modules and assign your columns. I'm assuming you have a full set of data and no NA values; if you do have missing values, you'll need to figure out a way to map your ask, cost and room values into your dataframe.</p>
<pre><code>from collections import defaultdict
from itertools import cycle
import pandas as pd
dfs = defaultdict(list)
for index,y in p[0].items():
for _ in y:
for key, value in _.items():
dfs[key].append(value)
dfs['index'] = index
df = pd.DataFrame(dfs).set_index('index')
df = df.explode('data')
status = cycle( ['price','cost','room','ask'])
df['status'] = [next(status) for stat in range(len(df))]
df['tenure'] = df['tenure'] + '_' + df['status']
final = pd.crosstab(df.index,df.tenure,values=df.data,aggfunc='first')
print(final.iloc[:,:4])
</code></pre>
<hr>
<pre><code>tenure Detached_ask Detached_cost Detached_price Detached_room
postcode?
br8 £1,063,001 £571 £1,248,554 4.3
</code></pre>
|
python|pandas
| 1
|
1,917
| 54,762,245
|
tensorflow TF lite android app crashing after detection
|
<p>I have trained my model using <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">ssd_mobilenet_v2_quantized_coco</a>, which was also a long, painstaking process of digging. Once training was successful, the model was correctly detecting objects in images on my laptop, but on my phone the app crashes as soon as an object is detected. I used the TF Lite Android app available on <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/android/app" rel="nofollow noreferrer">GitHub</a>. I did some debugging in Android Studio and get the following error log when an object is detected and the app crashes:</p>
<pre><code>I/tensorflow: MultiBoxTracker: Processing 0 results from 314 I/tensorflow:
DetectorActivity: Preparing image 506 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 506
I/tensorflow: MultiBoxTracker: Processing 0 results from 506
I/tensorflow: DetectorActivity: Preparing image 676 for detection in bg thread.
I/tensorflow: DetectorActivity: Running detection on image 676
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3122
java.lang.ArrayIndexOutOfBoundsException: length=80; index=-2147483648
at java.util.Vector.elementData(Vector.java:734)
at java.util.Vector.get(Vector.java:750)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.os.HandlerThread.run(HandlerThread.java:65)
</code></pre>
<p>My guess is that the labels located in the .txt file are somehow being misread. This is because of the line:</p>
<blockquote>
<p>at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:213)</p>
</blockquote>
<p>and that line corresponds to the following code:</p>
<pre><code>labels.get((int) outputClasses[0][i] + labelOffset)
</code></pre>
<p>However, I don't know what to change in labels.txt. Possibly, I need to edit that txt as suggested <a href="https://stackoverflow.com/questions/47976743/tensorflow-object-detection-api-index-out-of-bounds">here</a>. Any other suggestions and explanation for possible causes are appreciated.</p>
<p>Update: I added ??? to labels.txt and compiled/ran, but I am still getting the same error as above.
P.S. I trained ssd_mobilenet_v2_coco (the model without quantization) as well and it works without crashing in the app. I am guessing that quantization is perhaps converting label indices differently, resulting in the out-of-bounds error for labels.</p>
|
<p>Yes, it is because the label output sometimes gets a garbage value. For a quick fix you can try this:</p>
<p>add a condition:</p>
<pre><code>if ((int) outputClasses[0][i] > 10) {
    outputClasses[0][i] = -1;
}
</code></pre>
<p>Here 10 is the number of classes the model was trained for. You can change it accordingly.</p>
|
android|python|tensorflow|object-detection|tensorflow-lite
| 0
|
1,918
| 54,799,384
|
Is concatenated matrix multiplication faster than multiple non-concatenated matmul? If so, why?
|
<p>The definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. We can simplify the expression by using a single matrix multiply by concatenating 4 small matrices (now the matrix are 4 times larger).</p>
<p>My question is: does this improve the efficiency of the matrix multiplication? If so, why? Because we can put them in contiguous memory? Or is it because of the conciseness of the code?</p>
<p>The number of items that we multiply doesn't change whether or not we concatenate the matrices (therefore the complexity shouldn't change), so I'm wondering why we would do this.</p>
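<p>One way to check is to time both variants directly. The sketch below is an illustration I added (the sizes are arbitrary), comparing four separate multiplications against a single multiplication with the weights concatenated along the output dimension:</p>
<pre><code>import time
import torch

d, h, batch = 512, 512, 64
x = torch.randn(batch, d)
ws = [torch.randn(d, h) for _ in range(4)]   # four separate gate weight matrices
w_cat = torch.cat(ws, dim=1)                 # one (d, 4*h) concatenated matrix

def bench(fn, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

print('separate:    ', bench(lambda: [x @ w for w in ws]))
print('concatenated:', bench(lambda: x @ w_cat))
</code></pre>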
<p>Here is an excerpt from pytorch doc of <code>torch.nn.LSTM(*args, **kwargs)</code>. <code>W_ii, W_if, W_ig, W_io</code> are concatenated.</p>
<pre><code>weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size x input_size)
weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size x hidden_size)
bias_ih_l[k] – the learnable input-hidden bias of the k-th layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
</code></pre>
|
<p>The structure of the LSTM isn't about improving multiplication efficiency; it is more about bypassing vanishing/exploding gradients (<a href="https://stats.stackexchange.com/questions/185639/how-does-lstm-prevent-the-vanishing-gradient-problem">https://stats.stackexchange.com/questions/185639/how-does-lstm-prevent-the-vanishing-gradient-problem</a>). There are studies aimed at mitigating the effects of vanishing gradients, and GRU / LSTM cells + peepholes are a few attempts to mitigate that.</p>
|
tensorflow|matrix|lstm|pytorch|gpu
| 0
|
1,919
| 54,934,049
|
pandas reset cumsum when the previous value is negative
|
<p>I need to perform a cumulative sum on a data frame that is grouped, but I need to have it reset when the previous value is negative and the current value is positive.</p>
<p>In R I could apply a condition to the groupby with ave() function, but I can't do that in python, so I am having a bit of trouble thinking of a solution. Can anyone help me out? </p>
<p>Here is a sample: </p>
<pre><code>import pandas as pd
df = pd.DataFrame({'PRODUCT': ['A'] * 40, 'GROUP': ['1'] * 40, 'FORECAST': [100, -40, -40, -40]*10, })
df['CS'] = df.groupby(['GROUP', 'PRODUCT']).FORECAST.cumsum()
# Reset cumsum if
# condition: (df.FORECAST > 0) & (df.groupby(['GROUP', 'PRODUCT']).FORECAST.shift(-1).fillna(0) <= 0)
</code></pre>
|
<p>This solution will work to reset the sum for any example where the values to be summed change from negative to positive (regardless of whether the dataset is nice and periodic as it is in your example)</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'PRODUCT': ['A'] * 40, 'GROUP': ['1'] * 40, 'FORECAST': [100, -40, -40, -40]*10, })
cumsum = np.cumsum(df['FORECAST'])
# Array of indices where sum should be reset
reset_ind = np.where(df['FORECAST'].diff() > 0)[0]
# Sums that need to be subtracted at resets
subs = cumsum[reset_ind-1].values
# Repeat subtraction values for every entry BETWEEN resets and values after final reset
rep_subs = np.repeat(subs, np.hstack([np.diff(reset_ind), df['FORECAST'].size - reset_ind[-1]]))
# Stack together values before first reset and resetted sums
df['CS'] = np.hstack([cumsum[:reset_ind[0]], cumsum[reset_ind[0]:] - rep_subs])
</code></pre>
<p>Alternatively, based <a href="https://stackoverflow.com/a/32891081/11021886">on this solution to a similar question</a> (and my realisation of the usefulness of <code>groupby</code>)</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'PRODUCT': ['A'] * 40, 'GROUP': ['1'] * 40, 'FORECAST': [100, -40, -40, -40]*10, })
# Create indices to group sums together
df['cumsum'] = (df['FORECAST'].diff() > 0).cumsum()
# Perform group-wise cumsum
df['CS'] = df.groupby(['cumsum'])['FORECAST'].cumsum()
# Remove intermediary cumsum column
df = df.drop(['cumsum'], axis=1)
</code></pre>
|
python|pandas|dataframe
| 1
|
1,920
| 49,539,178
|
Generate random int in 3D array
|
<p>I would like to generate a random 3D array containing random integers (coordinates) in the interval [0, 100].</p>
<p>so, <code>coordinates=dim(30,10,2)</code></p>
<p>What I have tried:</p>
<pre><code>coordinates = [[random.randint(0,100), random.randint(0,100)] for _i in range(30)]
</code></pre>
<p>which returns</p>
<pre><code>array([[97, 68],
[11, 23],
[47, 99],
[52, 58],
[95, 60],
[89, 29],
[71, 47],
[80, 52],
[ 7, 83],
[30, 87],
[53, 96],
[70, 33],
[36, 12],
[15, 52],
[30, 76],
[61, 52],
[87, 99],
[19, 74],
[37, 63],
[40, 2],
[ 8, 84],
[70, 32],
[63, 8],
[98, 89],
[27, 12],
[75, 59],
[76, 17],
[27, 12],
[48, 61],
[39, 98]])
</code></pre>
<p>of shape <code>(30, 2)</code></p>
<p>What I'm supposed to get:</p>
<p>dim=(30,10,2) rather than (30, 2)</p>
|
<p>Use the <a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randint.html" rel="noreferrer"><code>size</code> parameter</a>:</p>
<pre><code>import numpy as np
coordinates = np.random.randint(0, 100, size=(30, 10, 2))
</code></pre>
<p>will produce a NumPy array with integer values between 0 and 100 and of shape <code>(30, 10, 2)</code>.</p>
|
python-3.x|numpy|random
| 23
|
1,921
| 49,624,840
|
Confusing behavior of np.random.multivariate_normal
|
<p>I am sampling from a multivariate normal using numpy as follows.</p>
<pre><code>mu = [0, 0]
cov = np.array([[1, 0.5], [0.5, 1]]).astype(np.float32)
np.random.multivariate_normal(mu, cov)
</code></pre>
<p>It gives me the following warning.</p>
<blockquote>
<p>RuntimeWarning: covariance is not positive-semidefinite.</p>
</blockquote>
<p>The matrix is clearly PSD. However, when I use a <strong>np.float64</strong> array, it works fine. I need the covariance matrix to be <strong>np.float32</strong>. What am I doing wrong? </p>
|
<p><em>This has been <a href="https://github.com/numpy/numpy/pull/12547" rel="nofollow noreferrer">fixed</a> in March 2019. If you still see the warning consider updating your numpy.</em></p>
<p>The warning is raised even for very small off-diagonal elements > 0. The default tolerance value does not seem to work well for 32 bit floats.</p>
<p>As a workaround pass a higher tolerance to the function:</p>
<pre><code>np.random.multivariate_normal(mu, cov, tol=1e-6)
</code></pre>
<hr>
<p><strong>Details</strong></p>
<p><a href="https://github.com/ozabluda/numpy/blob/08a02daa3f86e84df143d508949562f8881576c2/numpy/random/mtrand/mtrand.pyx#L4523" rel="nofollow noreferrer"><code>np.random.multivariate_normal</code></a> checks if the covariance is PSD by first decomposing it with <code>(u, s, v) = svd(cov)</code>, and then checking if the reconstruction <code>np.dot(v.T * s, v)</code> is close enough to the original <code>cov</code>. </p>
<p>With <code>float32</code> the result of the reconstruction is further off than the default tolerance of <code>1e-8</code> allows, and the function raises a warning.</p>
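<p>A rough sketch of that check (an illustration of the mechanism described above, not numpy's exact code):</p>
<pre><code>import numpy as np

cov32 = np.array([[1, 0.5], [0.5, 1]], dtype=np.float32)
u, s, v = np.linalg.svd(cov32)
reconstructed = np.dot(v.T * s, v)
print(np.allclose(reconstructed, cov32, rtol=1e-8, atol=1e-8))
# with float32 input the reconstruction error can exceed this tolerance,
# which is what used to trigger the warning
</code></pre>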
|
python|numpy
| 10
|
1,922
| 73,447,978
|
Pandas, 'nan' fields after changing column datatypes
|
<p>I am changing the datatype of certain columns in pandas and I would like to keep the NaN values as np.nan.</p>
<p>After executing the following command:</p>
<pre><code>df = df.astype({"raw_salary_from": str, "raw_salary_to": str})
</code></pre>
<p>What I get is that all the NaNs are transformed into the string 'nan'. How can I avoid that issue while transforming the datatype of the column?</p>
<pre><code>['22000.0',
'0.0',
'40000.0',
'9.5',
'0.0',
'0.0',
'28.0',
'nan',
'nan',
'nan',
'nan',
'nan']
</code></pre>
|
<p>You can store a boolean mask of those NaN positions first and then restore them to <code>np.nan</code> with <code>mask</code> after the conversion. So:</p>
<pre><code>idx_nan = df["raw_salary_from"].isna()
df["raw_salary_from"] = df["raw_salary_from"].astype(str).mask(idx_nan, np.nan)
</code></pre>
|
python|pandas|dataframe
| 1
|
1,923
| 73,368,599
|
Neural Network From Scratch - Forward propagation error
|
<p>I want to implement the backward propagation concept in Python with the following code:</p>
<pre><code>import numpy as np

class MLP(object):
def __init__(self, num_inputs=3, hidden_layers=[3, 3], num_outputs=2):
self.num_inputs = num_inputs
self.hidden_layers = hidden_layers
self.num_outputs = num_outputs
layers = [num_inputs] + hidden_layers + [num_outputs]
weights = []
bias = []
for i in range(len(layers) - 1):
w = np.random.rand(layers[i], layers[i + 1])
b=np.random.randn(layers[i+1]).reshape(1, layers[i+1])
weights.append(w)
bias.append(b)
self.weights = weights
self.bias = bias
activations = []
for i in range(len(layers)):
a = np.zeros(layers[i])
activations.append(a)
self.activations = activations
def forward_propagate(self, inputs):
activations = inputs
self.activations[0] = activations
for i, w in enumerate(self.weights):
for j, b in enumerate(self.bias):
net_inputs = self._sigmoid((np.dot(activations, w)+b))
self.activations[i + 1] = net_inputs
return activations
def train(self, inputs, targets, epochs, learning_rate):
for i in range(epochs):
sum_errors = 0
for j, input in enumerate(inputs):
target = targets[j]
output = self.forward_propagate(input)
def _sigmoid(self, x):
y = 1.0 / (1 + np.exp(-x))
return y
</code></pre>
<p>So I created the following dummy data in order to verify everything is correct:</p>
<pre><code>from random import random

items = np.array([[random()/2 for _ in range(2)] for _ in range(1000)])
targets = np.array([[i[0] + i[1]] for i in items])
mlp = MLP(2, [5], 1)
mlp.train(items, targets, 2, 0.1)
</code></pre>
<p>but when I run the code I get the following error:</p>
<pre><code>ValueError: shapes (2,) and (5,1) not aligned: 2 (dim 0) != 5 (dim 0)
</code></pre>
<p>I understand the error, but how do I solve it?</p>
|
<p>a couple of major problems with <code>forward_propagate</code>:</p>
<ol>
<li>change <code>net_inputs</code> to <code>activations</code> - otherwise you always compute and return the activations from the first layer</li>
<li>remove <code>for j, b in enumerate(self.bias):</code> - biases from other layers have no business here</li>
<li>use <code>matmul</code> instead of <code>dot</code></li>
</ol>
<p>so, something like</p>
<pre class="lang-py prettyprint-override"><code>for i, w in enumerate(self.weights):
activations = self._sigmoid((np.matmul(activations, w)+self.bias[i]))
self.activations[i + 1] = activations
return activations
</code></pre>
<p>Also, note that this method receives a 1D array, which becomes a matrix after the first <code>matmul</code>. Matrices are stored in <code>self.activations</code> and a matrix is returned from the method.
This might or might not be what you want.</p>
|
python|numpy|machine-learning|neural-network
| 1
|
1,924
| 73,239,293
|
How to find index that has same value in another array by python?
|
<p>I'm a beginner in Python. I would like to find the indices that hold the two largest values in an ndarray. For example, take an ndarray x like this:</p>
<pre><code>x = np.array([2,3,5,3,7,3,1,5])
</code></pre>
<p>Because the two largest values are 7 and 5. The answer should be</p>
<pre><code>ind = [2,4,7]
x[ind] = [5,7,5]
</code></pre>
<p>Would you tell me how to code it?</p>
|
<p>You can accomplish this in 3 steps:</p>
<ol>
<li>Sort your unique array values (np.unique also sorts) <code>np.unique(…)</code></li>
<li>Slice the last N values (the maximums) from the sorted unique array <code>…[-max_n:]</code></li>
<li>Find the indices where your array has those maximums via <code>np.where(np.isin(…))</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>import numpy as np
max_n = 2
x = np.array([2,3,5,3,7,3,1,5])
max_values = np.unique(x)[-max_n:]
max_indices = np.where(np.isin(x, max_values))[0]
print(
f'{max_indices = }',
f'{x[max_indices] = }',
sep='\n'
)
max_indices = array([2, 4, 7])
x[max_indices] = array([5, 7, 5])
</code></pre>
|
python|numpy-ndarray
| 1
|
1,925
| 67,592,831
|
How do I incorporate absolute value within my Pandas dataframe?
|
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">Impressions_Source</th>
<th style="text-align: right;">Impressions_Source2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">15020</td>
<td style="text-align: center;">150201</td>
<td style="text-align: right;">151920</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to figure out how to calculate a percentage difference (discrepancy) between two values utilizing absolute value.</p>
<p>Here is my formula used</p>
<p><code>df['Discrepancy_Num'] = (df['Impressions_Source'] - df['Impressions_Source2']) / (df['Impressions_Source'] * 100)</code></p>
<p>Would I just add .abs after?</p>
<p><code>df['Discrepancy_Num'] = (df['Impressions_Source'] - df['Impressions_Source2']) / (df['Impressions_Source'] * 100).abs</code></p>
|
<p>You can call <code>.abs()</code> afterwards:</p>
<pre class="lang-py prettyprint-override"><code>df["Discrepancy_Num"] = (
(df["Impressions_Source"] - df["Impressions_Source2"])
/ (df["Impressions_Source"] * 100)
).abs()
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> id Impressions_Source Impressions_Source2 Discrepancy_Num
0 15020 150201 151920 0.000114
</code></pre>
|
python|pandas|dataframe|absolute-value
| 1
|
1,926
| 60,158,212
|
How can I find unique records in Python by row count?
|
<p>df:</p>
<pre><code> Country state item
0 Germany Augsburg Car
1 Spain Madrid Bike
2 Italy Milan Steel
3 Paris Lyon Bike
4 Italy Milan Steel
5 Germany Augsburg Car
</code></pre>
<p>In the above dataframe, we can count each record's appearance:</p>
<pre><code> Country state item Appeared
0 Germany Augsburg Car 1
1 Spain Madrid Bike 1
2 Italy Milan Steel 1
3 Paris Lyon Bike 1
4 Italy Milan Steel 2
5 Germany Augsburg Car 2
</code></pre>
<p>Since rows 4 and 5 appear for the second time, I want to change their item names to differentiate the records. If a record appears more than once in the data, the item name should be renamed to Item_A for the 1st appearance and Item_B for the second appearance...
Output:</p>
<pre><code>Country state item Appeared
0 Germany Augsburg Car_A 1
1 Spain Madrid Bike 1
2 Italy Milan Steel_A 1
3 Paris Lyon Bike 1
4 Italy Milan Steel_B 2
5 Germany Augsburg Car_B 2
</code></pre>
|
<p>You can first get the <code>Appeared</code> column by <code>groupby().cumcount</code>, then add the suffixes:</p>
<pre><code>import numpy as np

# unique values
duplicates = df.duplicated(keep=False)
# Appearance count
df['Appeared'] = df.groupby([*df]).cumcount().add(1)
# add the suffixes
suffixes = np.array(list('ABC'))
df.loc[duplicates, 'item'] = df['item'] + '_' + suffixes[df.Appeared-1]
</code></pre>
<p>Output:</p>
<pre><code> Country state item Appeared
0 Germany Augsburg Car_A 1
1 Spain Madrid Bike 1
2 Italy Milan Steel_A 1
3 Paris Lyon Bike 1
4 Italy Milan Steel_B 2
5 Germany Augsburg Car_B 2
</code></pre>
|
python|python-3.x|pandas|python-2.7
| 2
|
1,927
| 60,237,047
|
How to Decompose and Visualise Slope Component in Tensorflow Probability
|
<p>I'm running tensorflow 2.1 and tensorflow_probability 0.9. I have fit a Structural Time Series Model with a seasonal component. I am using code from the Tensorflow Probability Structural Time Series Probability example:
<a href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand.ipynb" rel="nofollow noreferrer" title="Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand.ipynb">Tensorflow Github</a>.</p>
<p>In the example there is a great plot where the decomposition is visualised:</p>
<pre><code>
# Get the distributions over component outputs from the posterior marginals on
# training data, and from the forecast model.
component_dists = sts.decompose_by_component(
demand_model,
observed_time_series=demand_training_data,
parameter_samples=q_samples_demand_)
forecast_component_dists = sts.decompose_forecast_by_component(
demand_model,
forecast_dist=demand_forecast_dist,
parameter_samples=q_samples_demand_)
demand_component_means_, demand_component_stddevs_ = (
{k.name: c.mean() for k, c in component_dists.items()},
{k.name: c.stddev() for k, c in component_dists.items()})
(
demand_forecast_component_means_,
demand_forecast_component_stddevs_
) = (
{k.name: c.mean() for k, c in forecast_component_dists.items()},
{k.name: c.stddev() for k, c in forecast_component_dists.items()}
)
</code></pre>
<p>When using a trend component, is it possible to decompose and visualise both:</p>
<p>trend/_level_scale & trend/_slope_scale</p>
<p>I have tried many permutations to extract the nested element of the trend component with no luck.</p>
<p>Thanks for your time in advance.</p>
|
<p>We didn't write a separate STS interface for this, but you can access the posterior on latent states (in this case, both the level and slope) by directly querying the underlying state-space model for its marginal means and covariances:</p>
<pre class="lang-py prettyprint-override"><code>ssm = model.make_state_space_model(
num_timesteps=num_timesteps,
param_vals=parameter_samples)
posterior_means, posterior_covs = (
ssm.posterior_marginals(observed_time_series))
</code></pre>
<p>You should also be able to draw samples from the joint posterior by running <code>ssm.posterior_sample(observed_time_series, num_samples)</code>.</p>
<p>It looks like there's currently a glitch when drawing posterior samples from a model with no batch shape (<code>Could not find valid device for node. Node:{{node Reshape}}</code>): while we fix that, it should work to add an artificial batch dimension as a workaround:
<code>ssm.posterior_sample(observed_time_series[tf.newaxis, ...], num_samples)</code>.</p>
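<p>As a rough sketch (not an official STS interface): assuming the model's trend component is a <code>LocalLinearTrend</code> and that its two latent dimensions sit at the start of the state vector in (level, slope) order (check your model's components for the actual offset), you could pull both series out of <code>posterior_means</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

# posterior_means has shape [num_parameter_samples, num_timesteps, latent_size];
# indices 0 and 1 are assumed to be the trend's level and slope respectively.
level_means = posterior_means[..., 0].numpy().mean(axis=0)
slope_means = posterior_means[..., 1].numpy().mean(axis=0)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(level_means)
ax1.set_ylabel('level')
ax2.plot(slope_means)
ax2.set_ylabel('slope')
plt.show()
</code></pre>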
|
time-series|tensorflow-probability|state-space
| 0
|
1,928
| 65,174,525
|
How to convert a boolean array into a matrix?
|
<p>I am a beginner, and I want to know is it possible to convert a boolean array into a matrix in NumPy?</p>
<p>For example, we have a boolean array <code>a</code> like this:</p>
<pre><code>a = [[False],
[True],
[True],
[False],
[True]]
</code></pre>
<p>And, we turn it into the following matrix:</p>
<pre><code>m = [[0, 0, 0, 0, 0]
[0, 1, 0, 0, 0]
[0, 0, 1, 0, 0]
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 1]]
</code></pre>
<p>I mean the array to be the diagonal of the matrix.</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.diagflat.html#numpy.diagflat" rel="nofollow noreferrer"><code>np.diagflat</code></a> which <em>creates a two-dimensional array with the flattened input as a diagonal</em>:</p>
<pre><code>np.diagflat(np.array(a, dtype=int))
#[[0 0 0 0 0]
# [0 1 0 0 0]
# [0 0 1 0 0]
# [0 0 0 0 0]
# [0 0 0 0 1]]
</code></pre>
<p><a href="https://uscript.co/public/Google_108617488638529745626/python/07aeba3e.py" rel="nofollow noreferrer">Working example</a></p>
|
python|arrays|numpy|matrix
| 4
|
1,929
| 65,068,605
|
How to convert pandas <NA> to numpy NaN?
|
<p>I have following data set</p>
<p><a href="https://i.stack.imgur.com/1EUfT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1EUfT.png" alt="enter image description here" /></a></p>
<p>I would like to convert float values to int, so i did <code>data.convert_dtypes()</code></p>
<p><a href="https://i.stack.imgur.com/B9GdA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B9GdA.png" alt="enter image description here" /></a></p>
<p>Pandas converted NaN to NA. How can I convert it back, or prevent pandas from doing it? I use data imputation and some algorithms don't support NA ( <code>'bool' object has no attribute 'transpose'</code> )</p>
<p>I tried <code>replace</code> and <code>fillna</code>. <code>Replace({pd.NA: np.nan})</code> converts int back to float, and this is not my solution since I would like to work with int</p>
|
<p>If you need <code>np.nan</code> (which is a float) your solution works, but integer columns containing <code>NA</code> are converted to <code>float</code> columns:</p>
<pre><code>df = df.replace({pd.NA: np.nan})
</code></pre>
<p>If you need integers, the only way is to replace <code>NA</code> with some integer sentinel:</p>
<pre><code>df = df.replace(pd.NA, -1)
</code></pre>
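<p>A small illustration of the trade-off, using a hypothetical nullable-integer series (the data here is made up for the example):</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series([1, 2, pd.NA], dtype='Int64')   # nullable integer column, as produced by convert_dtypes()

print(s.astype('float64'))            # 1.0, 2.0, NaN -> pd.NA becomes np.nan, but the ints become floats
print(s.fillna(-1).astype('int64'))   # 1, 2, -1      -> stays integer, NA replaced by a sentinel value
</code></pre>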
|
pandas|dataframe
| 7
|
1,930
| 65,413,883
|
Why isn't my plt.plot() working when graphing projectile motion?
|
<p>I'm graphing projectile motion without/with air friction and I'm now on the first part, which is the one without air friction.</p>
<p>After I've put the plt.plot(x_nodrag,y_nodrag), it is supposed to draw a curved line of the projectile motion. But for some reason nothing is drawn, even though the data values are being printed. I want to know why.</p>
<p>I know there might be a lot of errors in this code!! Feel free to point them out if you'd like to. Thank you guys for the help.</p>
<p>Here's the picture of the graph:<a href="https://i.stack.imgur.com/znFLM.png" rel="nofollow noreferrer">https://i.stack.imgur.com/znFLM.png</a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Model parameters
M = 1.0 # Mass of projectile in kg
g = 9.8 # Acceleration due to gravity (m/s^2)
V = 80 # Initial velocity in m/s
ang = 60.0 # Angle of initial velocity in degree
Cd = 0.005 # Drag coefficient
dt = 0.5 # time step in s
# Set up the lists to store variables
# Start by putting the initial velocities at t=0
t = [0] # list to keep track of time
vx = [V*np.cos(ang/180*np.pi)] # list for velocity x and y components
vy = [V*np.sin(ang/180*np.pi)]
#show the projectile motion without drag force
t1=0
vx_nodrag=V*np.cos(ang/180*np.pi)
vy_nodrag=V*np.sin(ang/180*np.pi)
while (t1 < 100):
x_nodrag=vx_nodrag*t1
y_nodrag=vy_nodrag*t1+(0.5*-9.8*t1**2)
plt.ylim([0,200])
plt.xlim([0,270])
plt.plot(x_nodrag,y_nodrag)
print(x_nodrag,y_nodrag)
t1=t1+dt
plt.show()
</code></pre>
|
<p>The command <code>plt.plot</code> does not work for individual points. Use <code>plt.scatter(x_nodrag, y_nodrag)</code> instead, or collect the consecutive coordinates into lists or arrays and then plot the entire graph at once.</p>
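<p>A minimal sketch of the second suggestion, reusing the constants from the question: collect the coordinates in lists inside the loop, then call <code>plt.plot</code> once at the end.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

V, ang, dt = 80, 60.0, 0.5
vx0 = V * np.cos(np.radians(ang))
vy0 = V * np.sin(np.radians(ang))

xs, ys = [], []   # collect all points first
t1 = 0.0
while t1 < 100:
    xs.append(vx0 * t1)
    ys.append(vy0 * t1 + 0.5 * (-9.8) * t1**2)
    t1 += dt

plt.xlim(0, 270)
plt.ylim(0, 200)
plt.plot(xs, ys)   # one call with the full lists draws the curve
plt.show()
</code></pre>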
|
python|numpy|matplotlib|physics
| 0
|
1,931
| 65,351,504
|
Finding if values in multiple columns greater than constant in pandas Dataframe
|
<p>This is how my data frame (df) looks:</p>
<pre><code>id,activity_date,status01_1,status01_2,status02,status03_01,status03_02, status04
1,2020-12-09 22:13:16,0,0,3560,0,0,0
1,2020-12-10 01:02:33,8327,0,0,0,0,0
1,2020-12-11 01:02:33,0,0,230,0,0,0
</code></pre>
<p>I would like to find whether any of the status 01 and 03 columns are over a constant value of 2000, and set another column (flag) to indicate that the value was greater than 2000.
So in the input above, rows 1 and 2 satisfy the condition but row 3 does not.</p>
<p>The solution I can think of is to filter the data frame to have only status 01 and 03 columns in a new dataframe and use a complicated np.where clause to set a flag.</p>
<pre><code>df1 = df[[status01,status03]]
df1[more_than_2000] = np.where((df1['status01_01'] >= 2000) | (df1['status01_02'] >= 2000) | ...), 1,0)
</code></pre>
<p>What is a much better way of doing this?</p>
|
<p>Let's try:</p>
<pre><code>df1['ge_2000'] = df1.filter(like='status01').max(axis=1).ge(2000).astype(int)
</code></pre>
<p>Or</p>
<pre><code>df1['ge_2000'] = df1.filter(like='status01').ge(2000).any(axis=1).astype(int)
</code></pre>
<p>Output:</p>
<pre><code> id activity_date status01_1 status01_2 status02 status03_01 status03_02 status04 ge_2000
-- ---- ------------------- ------------ ------------ ---------- ------------- ------------- ----------- ---------
0 1 2020-12-09 22:13:16 0 0 3560 0 nan nan 0
1 1 2020-12-10 01:02:33 8327 0 0 nan nan 1
2 1 2020-12-11 01:02:33 0 0 230 0 nan 0
</code></pre>
<hr />
<p><strong>Update</strong>: for the extra question in comment:</p>
<pre><code>s = df.filter(like='status')
df.join(s.groupby(s.columns.str.split('_').str[0], axis=1)
.max().gt(2000).astype(int)
.add_suffix('_ge2000')
)
</code></pre>
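<p>Since the question asks about both the status 01 <em>and</em> status 03 columns, one option (a sketch, assuming the column names follow the pattern shown) is to select both groups at once with a regex filter:</p>
<pre><code>df1['ge_2000'] = df1.filter(regex=r'^status0[13]').ge(2000).any(axis=1).astype(int)
</code></pre>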
|
python|python-3.x|pandas
| 0
|
1,932
| 49,963,694
|
Delete a row in pandas given a condition (data cleansing)?
|
<p>I have a DataFrame like </p>
<pre><code>Classification Value_1 Value_2
churn 1.0 2.0
not_churn 2.0 3.0
not_churn 0.0 0.0
churn 0.0 1.0
</code></pre>
<p>I know that when all values are 0, the classification should be churn. So I need to delete all the rows where all values are 0 and the classification is not_churn. I tried: </p>
<pre><code> df.drop((df['value_1'] == 0
& df['value_2'] == 0
& df['classification']== 'not_churn').index)
'TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]'
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with change conditions for not equal <code>!=</code> and change <code>&</code> to <code>|</code> (or):</p>
<pre><code>df = df[(df['Value_1'] != 0 ) | (df['Value_2'] != 0) | (df['Classification'] != 'not_churn')]
print (df)
Classification Value_1 Value_2
0 churn 1.0 2.0
1 not_churn 2.0 3.0
3 churn 0.0 1.0
</code></pre>
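<p>An equivalent way to express this, closer to the original attempt, is to build the combined condition once and keep the rows where it does <em>not</em> hold (using <code>.eq</code> avoids the operator-precedence problem from the question):</p>
<pre><code>mask = df['Value_1'].eq(0) & df['Value_2'].eq(0) & df['Classification'].eq('not_churn')
df = df[~mask]
</code></pre>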
|
python-3.x|pandas
| 0
|
1,933
| 49,934,787
|
Recursively calculating ratios between parents and children in pandas dataframe
|
<p>I looked around for a solution to this to the best of my ability. The closest I was able to find was <a href="https://stackoverflow.com/questions/46521390/getting-all-descendants-of-a-parent-from-a-pandas-dataframe-parent-child-table">this</a>, but it's not really what I'm looking for. </p>
<p>I am trying to model the relationship between a value and its parent's value. Specifically trying to calculate a ratio. I would also like to keep track of the level of lineage, like how many children deep is this item?</p>
<p>For example, I would like to input a pandas df that looks like this:</p>
<pre><code>id parent_id score
1 0 50
2 1 40
3 1 30
4 2 20
5 4 10
</code></pre>
<p>and get this:</p>
<pre><code>id parent_id score parent_child_ratio level
1 0 50 NA 1
2 1 40 1.25 2
3 1 30 1.67 2
4 2 20 2 3
5 4 10 2 4
</code></pre>
<p>So for every row, we go find the score of its parent and then calculate (parent_score/child_score) and make that the value of a new column. And then some sort of counting solution to add the child's level. </p>
<p>This has been stumping me for a while, any help is appreciated!!!</p>
|
<p>The first part is just merges:</p>
<pre><code>with_parent = pd.merge(df, df, left_on='parent_id', right_on='id', how='left')
with_parent['child_parent_ratio'] = with_parent.score_y / with_parent.score_x
with_parent = with_parent.rename(columns={'id_x': 'id', 'parent_id_x': 'parent_id', 'score_x': 'score'})[['id', 'parent_id', 'score', 'child_parent_ratio']]
>>> with_parent
id parent_id score child_parent_ratio
0 1 0 50 NaN
1 2 1 40 1.250000
2 3 1 30 1.666667
3 4 2 20 2.000000
4 5 4 10 2.000000
</code></pre>
<p>For the second part you can run <a href="https://en.wikipedia.org/wiki/Breadth-first_search" rel="nofollow noreferrer">breadth-first search</a>. This creates a forest, and the level is the distance from the roots, as in:</p>
<p><a href="https://i.stack.imgur.com/bPhKa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bPhKa.png" alt="enter image description here"></a></p>
<p>E.g., using <a href="https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.algorithms.shortest_paths.generic.shortest_path_length.html" rel="nofollow noreferrer"><code>networkx</code></a>:</p>
<pre><code>import networkx as nx
G = nx.DiGraph()
G.add_nodes_from(set(with_parent['id'].unique()).union(set(with_parent.parent_id.unique())))
G.add_edges_from([(int(r[1]['parent_id']), int(r[1]['id'])) for r in with_parent.iterrows()])
with_parent['level'] = with_parent['id'].map(nx.shortest_path_length(G, 0))
>>> with_parent
id parent_id score child_parent_ratio level
0 1 0 50 NaN 1
1 2 1 40 1.250000 2
2 3 1 30 1.666667 2
3 4 2 20 2.000000 3
4 5 4 10 2.000000 4
</code></pre>
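<p>If you would rather not depend on <code>networkx</code>, a small sketch that walks up the parent chain also works for the level column (it assumes the table has no cycles and that the root parent id, here 0, never appears as an <code>id</code>):</p>
<pre><code>parent_of = dict(zip(df['id'], df['parent_id']))

def depth(node):
    # count how many hops it takes to walk from the node up to the root
    d = 0
    while node in parent_of:
        node = parent_of[node]
        d += 1
    return d

with_parent['level'] = with_parent['id'].map(depth)
</code></pre>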
|
python|pandas|recursion
| 3
|
1,934
| 63,884,811
|
How to initialize a torch tensor of matrices
|
<p>Hello, I'm trying to create a tensor that will contain N matrices of size n by n. I tried to initialize it with</p>
<pre><code>Q=torch.zeros(N, (n,n))
</code></pre>
<p>but i get the following error</p>
<pre><code>zeros(): argument 'size' must be tuple of ints, but found element of type tuple at pos 2
</code></pre>
<p>Also, I want to fill it later with random matrices with integer values, which I will turn into semidefinite matrices, so I thought of the following</p>
<pre><code>for i in range(0,N):
Q[i]=torch.randint(0,10,(n,n))
Q = Q*Q.t()
</code></pre>
<p>Is it correct? Is there any other, faster way with a built-in command?</p>
|
<p><code>N</code> matrices of <code>n x n</code> size is equivalent to three dimensional tensor of shape <code>[N, n, n]</code>. You can do it like so:</p>
<pre><code>import torch
N = 32
n = 10
tensor = torch.randint(0, 10, size=(N, n, n))
</code></pre>
<p>No need to fill it with <code>zeros</code> to begin with, you can create it directly.</p>
<p>You can also iterate over <code>0</code> dimension similar to what you did:</p>
<pre><code>for i in range(0, N):
tensor[i] = tensor[i] * tensor[i].T
</code></pre>
<p>See <a href="https://stackoverflow.com/a/63885307/10886420">@Dishin H Goyani</a>'s answer for a faster approach with permutation.</p>
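<p>For completeness, a vectorized sketch of the same step (assuming you want each matrix multiplied element-wise by its own transpose, as in the loop above) uses a batched transpose via <code>permute</code>:</p>
<pre><code>tensor = tensor * tensor.permute(0, 2, 1)   # transpose the last two dims of every matrix in the batch
</code></pre>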
|
python|pytorch
| 3
|
1,935
| 46,830,781
|
Pandas qcut binning returning only one group
|
<p>I am working with pandas qcut and sometimes (when a lot of the data are equal to the min) it either returns an error:</p>
<pre><code>Bin edges must be unique
</code></pre>
<p>Or I have to drop the non-unique bins that I get, but then all my data are in one bin.</p>
<p>For example:
dataset: </p>
<pre><code>import pandas as pd
nbins = 2
pd.qcut([0,0,0,0,0,1,2,3], nbins)
</code></pre>
<p>I want then to have the ones above or below the median (here 0).
Then I am expecting to get:</p>
<pre><code>[grp1, grp1, grp1, grp1, grp1, grp2, grp2, grp2]
</code></pre>
<p>But what I get is either:</p>
<pre><code>pd.qcut([0,0,0,0,0,1,2,3], 2)
out >>> ValueError: Bin edges must be unique: array([ 0., 0., 3.]).
</code></pre>
<p>If I drop non-unique bins:</p>
<pre><code>pd.qcut([0,0,0,0,0,1,2,3], 2, duplicates='drop')
out >>> [(-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0], (-0.001, 3.0]]
Categories (1, interval[float64]): [(-0.001, 3.0]]
</code></pre>
<p>And everything is in only one category.</p>
<p>I don't want to have necessarily +/- median, this is just an example when data are clustered around the min.</p>
<p>Thank you for your help</p>
|
<p>I found a very ugly way to solve it...:</p>
<pre><code>try:
    result = pd.qcut(data, n_bins)
except ValueError:
    result = pd.qcut(data, n_bins + 1, duplicates='drop')
</code></pre>
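<p>Another workaround I have seen (a sketch, with a caveat): rank the values first so that every value is unique, then cut the ranks. Note that this splits tied values across bins by position, so the five zeros in the example would not all land in the same bin:</p>
<pre><code>import pandas as pd

data = [0, 0, 0, 0, 0, 1, 2, 3]
ranked = pd.Series(data).rank(method='first')   # ties broken by order of appearance
groups = pd.qcut(ranked, 2, labels=['grp1', 'grp2'])
</code></pre>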
|
python-2.7|pandas
| 0
|
1,936
| 46,964,552
|
Regex in pandas: Match vs Findall
|
<p>I am confused about when to use both str.findall and str.match.</p>
<p>For example, I have a df that has many lines of text where I need to extract dates.</p>
<p>Let us say I want to check the lines where there is the word Mar (as the abbreviation of March).</p>
<p>If I select the rows of the df where there is a match</p>
<pre><code>df[df.original.str.match(r'(Mar)')==True]
</code></pre>
<p>I got the following output:</p>
<pre><code>204 Mar 10 1976 CPT Code: 90791: No medical servic...
299 March 1974 Primary ...
</code></pre>
<p>However, if I try the same regex within the str.findall, I got nothing:</p>
<pre><code>0 []
1 []
2 []
3 []
4 []
5 []
6 []
7 []
...
495 []
496 []
497 []
498 []
499 []
Name: original, Length: 500, dtype: object
</code></pre>
<p>Why is that ? I am sure it is a lack of understanding on match, find, findall, extract and extractall. </p>
|
<p>I try to use the documentation to explain this:</p>
<pre><code>s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])
s
</code></pre>
<p>output:</p>
<pre><code>A a1a2
B b1
C c1
dtype: object
</code></pre>
<p>We first make the Series like this, and then use <code>extract, extractall, find, findall</code>:</p>
<pre><code># extract takes a regex pattern and returns only the first match per string
s.str.extract("([ab])(\d)", expand=True)
     0    1
A    a    1
B    b    1
C  NaN  NaN

# extractall returns every match, one row per match
s.str.extractall("([ab])(\d)")
        0  1
  match
A 0     a  1
  1     a  2
B 0     b  1

# find expects a plain substring, not a regex, so a regex pattern returns -1 everywhere
s.str.find("([ab])(\d)")
s.str.find('a')
A    0
B   -1
C   -1
dtype: int64

# findall takes a string or regex and returns all matches as a list per row
s.str.findall("([ab])(\d)")
A    [(a, 1), (a, 2)]
B            [(b, 1)]
C                  []
dtype: object
</code></pre>
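<p>To make the difference concrete for the original use case, here is a small sketch with made-up strings: <code>match</code> is anchored at the start of the string, <code>contains</code> looks anywhere, and <code>findall</code> returns every occurrence as a list:</p>
<pre><code>s2 = pd.Series(['Mar 10 1976 CPT Code', 'In March 1974', 'no date here'])

s2.str.match(r'Mar')      # True, False, False  -> only strings that *start* with the pattern
s2.str.contains(r'Mar')   # True, True,  False  -> pattern anywhere in the string
s2.str.findall(r'Mar')    # ['Mar'], ['Mar'], []
</code></pre>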
|
regex|pandas|match|findall
| 1
|
1,937
| 63,261,148
|
Pandas: count grouped elements by condition
|
<p>I have dataframe like this:</p>
<pre><code>df = pd.DataFrame({
'user': ['1', '1', '1', '2', '2', '2', '3', '3', '3'],
'value': ['4', '4', '1', '2', '2', '2', '3', '1', '1']
})
</code></pre>
<p>'value' is sorted by date, so I need to count the users for which the last element is smaller than the other elements in the group.</p>
<p>For this df it would be 2, because the last element for user 1 is smaller than the other elements of its group, and the same holds for user 3; but user 2's last element is not smaller than the other elements of its group, so I don't need to count it.</p>
|
<p>You can compare all values by last one with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>GroupBy.last</code></a> for greater by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a>, filter values of users by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> and last count unique values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.nunique.html" rel="nofollow noreferrer"><code>Series.nunique</code></a>:</p>
<pre><code>#convert values to numeric
df['value'] = df['value'].astype(int)
out = df.loc[df['value'].gt(df.groupby('user')['value'].transform('last')), 'user'].nunique()
print (out)
2
</code></pre>
<p>EDIT:</p>
<p>It also omit one element groups:</p>
<pre><code>df = pd.DataFrame({
'user': ['1', '1', '1', '2', '2', '2', '3', '3', '3', '4'],
'value': ['4', '4', '1', '2', '2', '2', '3', '1', '1', '8']
})
df['value'] = df['value'].astype(int)
out = df.loc[df['value'].gt(df.groupby('user')['value'].transform('last')), 'user'].nunique()
print (out)
2
</code></pre>
|
python|pandas
| 1
|
1,938
| 63,065,554
|
Can pandas replace method be used on a view or slice to modify the original dataframe?
|
<p>I want to replace certain cell values in a dataframe if they are within one group(s), but not if they are the other group(s).</p>
<p>For example I create the following dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['a',2,3],['b',2,3],['a',3,3]], columns = ['1st', '2nd', '3rd'])
df
1st 2nd 3rd
0   a    2    3
1 b 2 3
2 a 3 3
</code></pre>
<p>I want to filter on the 1st column to 'a', and then replace any 2s with 9s and 3s with 7s in the 2nd column only.</p>
<pre><code>df.loc[(df['1st']=='a')].replace({2:9, 3:7}, inplace = True)
df # same as original
</code></pre>
<p>This attempts to set a value on a copy of the slice, and not a view, so it fails to update the original dataframe. Perhaps there is some chained indexing going on here. I was hoping the view of the dataframe, which is still of type dataframe, would allow the replace method to act on the view and thus on the original.</p>
<p>The only thing I have found to work requires me to use one command per column-value pair I want to replace:</p>
<pre><code>df.loc[(df['1st']=='a') & (df['2nd']==2), '2nd'] = 9
df.loc[(df['1st']=='a') & (df['2nd']==3), '2nd'] = 7
df # It worked
1st 2nd 3rd
0   a    9    3
1 b 2 3
2 a 7 3
</code></pre>
<p>Is there a better way to do this?</p>
<p>Can the replace method or other methods be used on a view of a dataframe to modify the original?</p>
<p>I am trying to understand copies vs views and the best way to modify the original dataframe by working on filtered results.</p>
<p>Thanks for your help!</p>
|
<p>Try with <code>update</code></p>
<pre><code>df.update(df.loc[(df['1st']=='a')].replace({2:9, 3:7}))
df
1st 2nd 3rd
0 a 9.0 7.0
1 b 2.0 3.0
2 a 7.0 7.0
</code></pre>
<p>If not want to change the type</p>
<pre><code>df.loc[(df['1st']=='a')]=df.loc[(df['1st']=='a')].replace({2:9, 3:7})
df
1st 2nd 3rd
0 a 9 7
1 b 2 3
2 a 7 7
</code></pre>
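<p>If you only want the <code>2nd</code> column touched (as the question states), a sketch that combines <code>loc</code> with <code>replace</code> on just that column also writes back to the original frame:</p>
<pre><code>mask = df['1st'] == 'a'
df.loc[mask, '2nd'] = df.loc[mask, '2nd'].replace({2: 9, 3: 7})
</code></pre>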
|
python|pandas|dataframe
| 3
|
1,939
| 67,899,215
|
Matplotlib or seaborn or pandas : Plot with "deep" discretization
|
<p>I have data in the following form :</p>
<pre><code>pkgSpot state id_data_x id_data_y pkgSpot_y delay_mean delay_max delay_min
0 1 Free 7.245899e+08 7.245899e+08 1.0 0.334572 292.0 -161.0
1 1 Occupied 7.245876e+08 7.245876e+08 1.0 2.865248 116.0 -162.0
2 2 Free 7.245884e+08 7.245884e+08 2.0 0.122951 294.0 -84.0
3 2 Occupied 7.245885e+08 7.245885e+08 2.0 1.344130 257.0 -279.0
4 3 Free 7.245909e+08 7.245909e+08 3.0 -2.931159 261.0 -196.0
5 3 Occupied 7.245894e+08 7.245894e+08 3.0 1.975265 246.0 -273.0
6 4 Free 7.245753e+08 7.245753e+08 4.0 0.889908 222.0 -235.0
7 4 Occupied 7.245729e+08 7.245729e+08 4.0 1.483180 180.0 -117.0
8 17 Free 7.245742e+08 7.245742e+08 17.0 -10.535714 160.0 -236.0
9 17 Occupied 7.245744e+08 7.245744e+08 17.0 7.473988 294.0 -258.0
10 18 Free 7.246035e+08 7.246036e+08 18.0 -9.374269 104.0 -160.0
11 18 Occupied 7.246025e+08 7.246025e+08 18.0 8.403315 88.0 -100.0
12 19 Free 7.245642e+08 7.245642e+08 19.0 -4.568548 220.0 -271.0
13 19 Occupied 7.245633e+08 7.245633e+08 19.0 4.474790 253.0 -262.0
14 26 Free 7.245383e+08 7.245383e+08 26.0 -0.480363 280.0 -300.0
15 26 Occupied 7.245365e+08 7.245366e+08 26.0 -10.149856 263.0 -298.0
16 27 Free 7.245861e+08 7.245861e+08 27.0 -3.831683 300.0 -258.0
17 27 Occupied 7.245864e+08 7.245864e+08 27.0 1.077670 300.0 -299.0
18 28 Free 7.245878e+08 7.245878e+08 28.0 -8.868201 221.0 -300.0
19 28 Occupied 7.245891e+08 7.245891e+08 28.0 6.633684 241.0 -220.0
</code></pre>
<p>and I would like to have, in one figure, a graph showing the mean, max and min delay discretized per <code>pkgSpot</code> and per <code>state</code></p>
<p>Is there an easy way to achieve that with either pandas, seaborn or matplotlib ? I have played a little bit with the three libraries and with the pandas <code>melt</code> function but I could not find a way to do that 'easily'.</p>
<p>Thanks for your support</p>
|
<p>Here is an approach using filled areas to show the minimum, mean and maximum:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from io import StringIO
data_str = '''pkgSpot state delay_mean delay_max delay_min
1 Free 0.334572 292.0 -161.0
1 Occupied 2.865248 116.0 -162.0
2 Free 0.122951 294.0 -84.0
2 Occupied 1.344130 257.0 -279.0
3 Free -2.931159 261.0 -196.0
3 Occupied 1.975265 246.0 -273.0
4 Free 0.889908 222.0 -235.0
4 Occupied 1.483180 180.0 -117.0
17 Free -10.535714 160.0 -236.0
17 Occupied 7.473988 294.0 -258.0
18 Free -9.374269 104.0 -160.0
18 Occupied 8.403315 88.0 -100.0
19 Free -4.568548 220.0 -271.0
19 Occupied 4.474790 253.0 -262.0
26 Free -0.480363 280.0 -300.0
26 Occupied -10.149856 263.0 -298.0
27 Free -3.831683 300.0 -258.0
27 Occupied 1.077670 300.0 -299.0
28 Free -8.868201 221.0 -300.0
28 Occupied 6.633684 241.0 -220.0'''
df = pd.read_csv(StringIO(data_str), delim_whitespace=True)
fig, ax = plt.subplots(figsize=(12, 4))
for state, color in zip(['Free', 'Occupied'], ['dodgerblue', 'crimson']):
df_state = df[df['state'] == state]
x = df_state['pkgSpot'].astype(str)
ax.plot(x, df_state['delay_mean'], color=color)
ax.fill_between(x, df_state['delay_min'], df_state['delay_max'], color=color, alpha=0.4, label=state)
ax.set_xlabel('pkgSpot')
ax.set_ylabel('delay (min, mean, max)')
ax.margins(x=0.02)
ax.legend(ncol=2, loc='lower center', bbox_to_anchor=[0.5, 1.01])
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/sRoes.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sRoes.png" alt="filled areas" /></a></p>
<p>Another option uses errorbars:</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(figsize=(12, 4))
for state, color, dodge in zip(['Free', 'Occupied'], ['dodgerblue', 'crimson'], [-0.2, 0.2]):
df_state = df[df['state'] == state]
x = np.arange(len(df_state)) + dodge
yerr = [df_state['delay_mean'] - df_state['delay_min'], df_state['delay_max'] - df_state['delay_mean']]
ax.errorbar(x, df_state['delay_mean'], yerr=yerr, color=color, ls=':', lw=2, capsize=10, capthick=2, label=state)
ax.set_xticks(np.arange(len(df_state)))
ax.set_xticklabels(df_state['pkgSpot'].astype(str))
ax.set_xlabel('pkgSpot')
ax.set_ylabel('delay (min, mean, max)')
ax.legend(ncol=2, loc='lower center', bbox_to_anchor=[0.5, 1.01])
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/1QQa3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1QQa3.png" alt="error bars" /></a></p>
|
pandas|matplotlib|seaborn
| 1
|
1,940
| 67,864,964
|
Merge multiple dataframes on date with not same values
|
<p>I have 4 dataframes: <code>jpn</code>, <code>swe</code>, <code>svk</code>, <code>aut</code>, where every dataframe has two columns, <code>year</code> and <code>~_co2</code>, plus an index, which I don't care about.</p>
<pre class="lang-py prettyprint-override"><code>
>>>swe
year swe_co2
20105 1834 0.033
20106 1839 0.044
>>>svk
year svk_co2
15247 1840 0.013
15248 1841 0.023
</code></pre>
<p>Every dataframe has this structure, but the <code>year</code> column doesn't start at the same year in every dataframe.</p>
<p>I would like to join this dataframes to create a new dataframe like this</p>
<pre class="lang-py prettyprint-override"><code>
>>>merger
year swe_co2 svk_co2
1 1834 0.033 None
2 1839 0.044 None
3 1840 0.047 0.013
4 1841 None 0.023
</code></pre>
|
<p>You can use <code>pd.concat</code></p>
<pre><code>pd.concat([swe, svk], ignore_index=True)
year swe_co2 svk_co2
0 1834 0.033 NaN
1 1839 0.044 NaN
2 1840 NaN 0.013
3 1841 NaN 0.023
</code></pre>
<p>If you have more than two dataframes, just append them in that list <code>[swe, svk, df3, df4, ...]</code></p>
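<p>If the frames can contain the <em>same</em> year (so rows should be aligned rather than stacked), a sketch using an outer merge on <code>year</code> keeps one row per year; <code>functools.reduce</code> chains it over any number of frames:</p>
<pre><code>from functools import reduce

frames = [swe, svk]   # add jpn, aut, ... here as needed
merged = reduce(lambda left, right: left.merge(right, on='year', how='outer'), frames)
merged = merged.sort_values('year').reset_index(drop=True)
</code></pre>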
|
python|pandas
| 2
|
1,941
| 61,275,706
|
Inserting into array with known positions and values
|
<pre><code>import numpy as np
a = np.zeros([2,2])
b = np.array([[0,0],
[0,1],
[1,0],
[1,1]])
values = np.array([[10,20,30,40]]).T
#some function
#desired outcome for a as numpy array:
a = [[10,20],
[30,40]]
</code></pre>
<p>As you can see from the code, I have a zero-valued array which I would like to fill with values. My question is: does NumPy offer any function for this? I would like to find an elegant way before resorting to a for loop. Thank you.</p>
|
<p>One simple approach is to use numpy indexing:</p>
<pre><code>import numpy as np
a = np.zeros([2, 2])
b = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
values = np.array([10, 20, 30, 40])
rows, cols = zip(*b)
a[rows, cols] = values
print(a)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[[10. 20.]
[30. 40.]]
</code></pre>
<p>An alternative, is to use the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix" rel="nofollow noreferrer">csr_matrix</a> constructor from scipy:</p>
<pre><code>import numpy as np
from scipy.sparse import csr_matrix
a = np.zeros([2, 2])
b = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
values = np.array([10, 20, 30, 40])
a = csr_matrix((values, zip(*b)), a.shape).todense()
print(a)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>[[10 20]
[30 40]]
</code></pre>
|
python|numpy
| 1
|
1,942
| 61,602,178
|
custom layer with Tensorflow 2.1 problem with the output shape
|
<p>I am trying to have a custom layer return a tensor of shape (25, 1); however, there is a batch_size dimension which should be passed through (I get an error from the next layer). I looked for examples, but could not figure out how to specify the output shape. </p>
<p>Further I need an arbitrary output shape independent from the input size as the computation (not part of the below example) will always return a fixed number of values.</p>
<p>I tried the following:</p>
<pre><code>class SimpleLayer(layers.Layer):
def __init__(self, **kwargs):
super(SimpleLayer, self).__init__(**kwargs)
self.baseline = tf.Variable(initial_value=0.1, trainable=True)
def call(self, inputs):
print ("in call inputs:", inputs.shape)
ret = tf.zeros((25, 1)) + self.baseline
print("Ret:", ret, "Shape", tf.shape(ret))
return (ret)
</code></pre>
<p>this returns:</p>
<pre><code>Ret: Tensor("om/add:0", shape=(25, 1), dtype=float32) Shape Tensor("om/Shape:0", shape=(2,), dtype=int32)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inputs (InputLayer) [(None, 150, 1)] 0
_________________________________________________________________
dense (Dense) (None, 150, 256) 512
_________________________________________________________________
om (SimpleLayer) (25, 1) 1
=================================================================
</code></pre>
<p>But this gives an output shape of (25, 1), not (None, 25, 1). </p>
<p>then I tried:</p>
<pre><code>class SimpleLayer(layers.Layer):
def __init__(self, **kwargs):
super(SimpleLayer, self).__init__(**kwargs)
self.baseline = tf.Variable(initial_value=0.1, trainable=True)
def call(self, inputs):
print ("in call inputs:", inputs.shape)
ret = tf.zeros((25, 1)) + self.baseline
return (ret)
</code></pre>
<p>and got the error: </p>
<pre><code>TypeError: Expected int32, got None of type 'NoneType' instead.
</code></pre>
<p>any suggestion?</p>
|
<p>I suggest you use the input data passed to the call method, otherwise the layer makes no sense.</p>
<p>I provide a dummy example and works perfectly</p>
<pre><code>class SimpleLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(SimpleLayer, self).__init__(**kwargs)
self.baseline = tf.Variable(initial_value=0.1, trainable=True)
def call(self, inputs):
ret = inputs + self.baseline
return (ret)
def compute_output_shape(self, input_shape):
return (input_shape[0], input_shape[1], input_shape[2])
</code></pre>
<p>create a model with SimpleLayer</p>
<pre><code>inp = Input(shape=(25,1))
x = SimpleLayer()(inp)
out = Dense(3)(x)
model = Model(inp, out)
model.summary()
</code></pre>
<p>the summary:</p>
<pre><code>Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) [(None, 25, 1)] 0
_________________________________________________________________
simple_layer_16 (SimpleLayer (None, 25, 1) 1
_________________________________________________________________
dense_22 (Dense) (None, 25, 3) 6
=================================================================
Total params: 7
Trainable params: 7
Non-trainable params: 0
</code></pre>
<hr>
<p><strong>EDIT</strong></p>
<p>I try to override the problem of None dimension in this way</p>
<pre><code>class SimpleLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(SimpleLayer, self).__init__(**kwargs)
self.baseline = tf.Variable(initial_value=0.1, trainable=True, dtype=tf.float64)
def call(self, inputs):
ret = tf.zeros((1, 25, 1), dtype=tf.float64) + self.baseline
ret = tf.compat.v1.placeholder_with_default(ret, (None, 25, 1))
return (ret)
inp = Input((150,1))
x = Dense(256)(inp)
x = SimpleLayer()(x)
x = Dense(10)(x)
model = Model(inp, x)
model.summary()
</code></pre>
<p>the summary:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_34 (InputLayer) [(None, 150, 1)] 0
_________________________________________________________________
dense_68 (Dense) (None, 150, 256) 512
_________________________________________________________________
simple_layer_9 (SimpleLayer) (None, 25, 1) 1
_________________________________________________________________
dense_69 (Dense) (None, 25, 10) 20
=================================================================
Total params: 533
Trainable params: 533
Non-trainable params: 0
</code></pre>
|
python|tensorflow
| 1
|
1,943
| 68,565,190
|
What is the difference between SeedSequence.spawn and SeedSequence.generate_state
|
<p>I am trying to use numpy's <a href="https://numpy.org/doc/stable/reference/random/bit_generators/generated/numpy.random.SeedSequence.html" rel="nofollow noreferrer"><code>SeedSequence</code></a> to seed RNGs in different processes. However I am not sure whether I should use <code>ss.generate_state</code> or <code>ss.spawn</code>:</p>
<pre><code>import concurrent.futures
import numpy as np
def worker(seed):
rng = np.random.default_rng(seed)
return rng.random(1)
num_repeats = 1000
ss = np.random.SeedSequence(243799254704924441050048792905230269161)
with concurrent.futures.ProcessPoolExecutor() as pool:
result1 = np.hstack(list(pool.map(worker, ss.generate_state(num_repeats))))
ss = np.random.SeedSequence(243799254704924441050048792905230269161)
with concurrent.futures.ProcessPoolExecutor() as pool:
result2 = np.hstack(list(pool.map(worker, ss.spawn(num_repeats))))
</code></pre>
<p>What are the differences between the two approaches and which should I use?</p>
<p>Using <code>ss.generate_state</code> is ~10% faster for the basic example above, likely because we are serializing floats instead of objects.</p>
|
<p>Well the <code>SeedSequence</code> is intended to generate good quality seeds from not so good seeds.</p>
<h3>Performance</h3>
<p>The <code>generate_state</code> is much faster than <code>spawn</code></p>
<p>The time spent transferring the object or the state should not be the main reason for the difference you notice. You can test this without any multiprocessing at all:</p>
<pre class="lang-py prettyprint-override"><code>%%timeit
ss.generate_state(num_repeats)
</code></pre>
<p>100 times faster than</p>
<pre class="lang-py prettyprint-override"><code>ss.spawn(num_repeats)
</code></pre>
<h3>Seed size</h3>
<p>In the code <code>map(worker, ss.generate_state(num_repeats))</code> the RNGs are seeded with integers, while in <code>map(worker, ss.spawn(num_repeats))</code> the RNGs are seeded with <code>SeedSequence</code> objects, which can potentially result in better-quality initialization: the seed sequence can be used internally to generate a state vector with as many bits as required to completely initialize the RNG. Well, to be honest, with a plain integer I expect it to do some expansion of that number to initialize the RNG, not simply pad it with zeros, for example.</p>
<h3>Repeated use</h3>
<p>The most important difference is that <code>generate_state</code> gives the same result if called multiple times. On the other hand, <code>spawn</code> gives different results each call.</p>
<p>For illustration, check the following example</p>
<pre class="lang-py prettyprint-override"><code>ss = np.random.SeedSequence(243799254704924441050048792905230269161)
print('With generate_state')
print(np.hstack([worker(s) for s in ss.generate_state(5)]))
print(np.hstack([worker(s) for s in ss.generate_state(5)]))
print('With spawn')
print(np.hstack([worker(s) for s in ss.spawn(5)]))
print(np.hstack([worker(s) for s in ss.spawn(5)]))
</code></pre>
<pre class="lang-txt prettyprint-override"><code>With generate_state
[0.6625651 0.17654256 0.25323331 0.38250588 0.52670541]
[0.6625651 0.17654256 0.25323331 0.38250588 0.52670541]
With spawn
[0.06988312 0.40886412 0.55733136 0.43249601 0.53394111]
[0.64885573 0.16788206 0.12435154 0.14676836 0.51876499]
</code></pre>
<p>As you see, the arrays generated by seeding different RNGs with <code>generate_state</code> give the same result, not only immediately after construction, but every time the method is called. Spawn should give you the same results when using a newly constructed <code>SeedSequence</code> (I am using numpy 1.19.2); however, if you run the same code twice using the same instance, the second time will produce different seeds.</p>
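<p>A small sketch of that last point: spawning from two freshly constructed <code>SeedSequence</code> objects with the same entropy gives identical children, while spawning twice from the <em>same</em> instance does not, because its spawn counter advances.</p>
<pre class="lang-py prettyprint-override"><code>children1 = np.random.SeedSequence(1234).spawn(3)
children2 = np.random.SeedSequence(1234).spawn(3)
print([c.generate_state(1) for c in children1])   # same values as the next line
print([c.generate_state(1) for c in children2])

ss2 = np.random.SeedSequence(1234)
print([c.generate_state(1) for c in ss2.spawn(3)])   # matches the lines above
print([c.generate_state(1) for c in ss2.spawn(3)])   # different: the spawn counter has advanced
</code></pre>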
|
numpy|random
| 1
|
1,944
| 68,866,740
|
import image-plot the SentinelHub-py utils issue
|
<p>I am trying to plot an image using <a href="https://github.com/sentinel-hub/sentinelhub-py/blob/master/examples/utils.py" rel="nofollow noreferrer">SentinelHub-py</a>. This is part of my code:</p>
<pre><code>from sentinelhub import SHConfig
config = SHConfig()
config.sh_client_id = "76186bb6-a02e-4457-9a9d-126e4fffaed4"
config.sh_client_secret = "aTlX[s:39vzA8p}HA{k*Zp!fJNF~(c7e.u7r21V!"
config.save()
%reload_ext autoreload
%autoreload 2
%matplotlib inline
#Import them
import os
import datetime
import numpy as np
import matplotlib.pyplot as plt
from sentinelhub import SHConfig
from sentinelhub import MimeType, CRS, BBox, SentinelHubRequest, SentinelHubDownloadClient, DataCollection, bbox_to_dimensions, DownloadRequest
from utils import plot_image
betsiboka_coords_wgs84 = [46.16, -16.15, 46.51, -15.58]
resolution = 60
betsiboka_bbox = BBox(bbox=betsiboka_coords_wgs84, crs=CRS.WGS84)
betsiboka_size = bbox_to_dimensions(betsiboka_bbox, resolution=resolution)
print(f'Image shape at {resolution} m resolution: {betsiboka_size} pixels')
"""
Utilities used by example notebooks
"""
import matplotlib.pyplot as plt
import numpy as np
def plot_image(image, factor=3.5/255, clip_range=(0,1)):
"""
Utility function for plotting RGB images.
"""
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 15))
if clip_range is not None:
ax.imshow(np.clip(image * factor, *clip_range), **kwargs)
else:
ax.imshow(image * factor, **kwargs)
ax.set_xticks([])
ax.set_yticks([])
</code></pre>
<p>which give me the following error:</p>
<pre><code> ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_6632/1226496077.py in <module>
9 from sentinelhub import MimeType, CRS, BBox, SentinelHubRequest, SentinelHubDownloadClient, DataCollection, bbox_to_dimensions, DownloadRequest
10
---> 11 from utils import plot_image
12
13 CLIENT_ID='76186bb6-a02e-4457-9a9d-126e4fffaed4'
ImportError: cannot import name 'plot_image' from 'utils' (c:\xxxxxxxxxxxx.py)
</code></pre>
<p>someone made a comment on this issue by saying:</p>
<p>"It seems that the utils package that you are calling is not the correct one. Try loading the utils.py from the examples folder, or that you can find [here][1] (i.e. copy the file to your working directory with your notebook)."<br />
[1]: https://github.com/sentinel-hub/sentinelhub-py</p>
<p>I change my code to this:</p>
<pre><code> """
Utilities used by example notebooks
"""
import matplotlib.pyplot as plt
import numpy as np
def plot_image(image, factor=3.5/255, clip_range=(0,1)):
"""
Utility function for plotting RGB images.
"""
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 15))
if clip_range is not None:
ax.imshow(np.clip(image * factor, *clip_range), **kwargs)
else:
ax.imshow(image * factor, **kwargs)
ax.set_xticks([])
ax.set_yticks([])
</code></pre>
<p>This still didn't plot the image.</p>
<p>Please, any suggestions?</p>
|
<p>You have to navigate to your utils.py. On my computer it is at:</p>
<p>(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/utils).</p>
<p>Then copy and paste the script from:</p>
<p><a href="https://github.com/sentinel-hub/sentinelhub-py/blob/master/examples/utils.py" rel="nofollow noreferrer">https://github.com/sentinel-hub/sentinelhub-py/blob/master/examples/utils.py</a></p>
<p>Finally, you will be able to type in your python script: "from utils import plot_image"</p>
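<p>Alternatively, here is a minimal self-contained sketch of <code>plot_image</code> that you can paste straight into the notebook (note it declares <code>**kwargs</code>, which the version in the question forwards to <code>imshow</code> but never accepts):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def plot_image(image, factor=3.5 / 255, clip_range=(0, 1), **kwargs):
    """Plot an RGB image, optionally scaling and clipping its values."""
    fig, ax = plt.subplots(figsize=(15, 15))
    if clip_range is not None:
        ax.imshow(np.clip(image * factor, *clip_range), **kwargs)
    else:
        ax.imshow(image * factor, **kwargs)
    ax.set_xticks([])
    ax.set_yticks([])
    plt.show()
</code></pre>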
<p>Best,
Jose</p>
|
python|numpy|matplotlib|syntax-error|sentinel
| 1
|
1,945
| 68,726,290
|
Setting learning rate for Stochastic Weight Averaging in PyTorch
|
<p>Following is a small working code for Stochastic Weight Averaging in Pytorch taken from <a href="https://pytorch.org/docs/1.8.1/optim.html#putting-it-all-together" rel="noreferrer">here</a>.</p>
<pre><code>loader, optimizer, model, loss_fn = ...
swa_model = torch.optim.swa_utils.AveragedModel(model)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
swa_start = 160
swa_scheduler = SWALR(optimizer, swa_lr=0.05)
for epoch in range(300):
for input, target in loader:
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
if epoch > swa_start:
swa_model.update_parameters(model)
swa_scheduler.step()
else:
scheduler.step()
# Update bn statistics for the swa_model at the end
torch.optim.swa_utils.update_bn(loader, swa_model)
# Use swa_model to make predictions on test data
preds = swa_model(test_input)
</code></pre>
<p>In this code after 160th epoch the <code>swa_scheduler</code> is used instead of the usual <code>scheduler</code>. What does <code>swa_lr</code> signify? The <a href="https://pytorch.org/docs/1.8.1/optim.html#swa-learning-rate-schedules" rel="noreferrer">documentation</a> says,</p>
<blockquote>
<p>Typically, in SWA the learning rate is set to a high constant value. SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant.</p>
</blockquote>
<ol>
<li>So what happens to the learning rate of the <code>optimizer</code> after 160th epoch?</li>
<li>Does <code>swa_lr</code> affect the <code>optimizer</code> learning rate?</li>
</ol>
<p>Suppose that at the beginning of the code the <code>optimizer</code> was <code>ADAM</code> initialized with a learning rate of <code>1e-4</code>. Then does the above code imply that for the first 160 epochs the learning rate for training will be <code>1e-4</code> and then for the remaining number of epochs it will be <code>swa_lr=0.05</code>? If yes, is it a good idea to define <code>swa_lr</code> also to <code>1e-4</code>?</p>
|
<ul>
<li>
<blockquote>
<p>does the above code imply that for the first <em>160</em> epochs the learning rate for training will be <code>1e-4</code></p>
</blockquote>
<p>No it won't be equal to <code>1e-4</code>, during the first 160 epochs the learning rate is managed by the first scheduler <code>scheduler</code>. This one is a initialize as a <a href="https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html" rel="noreferrer"><code>torch.optim.lr_scheduler.CosineAnnealingLR</code></a>. The learning rate will follow this curve:</p>
<p><a href="https://i.stack.imgur.com/ufdV9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ufdV9.png" alt="enter image description here" /></a></p>
</li>
</ul>
<hr />
<ul>
<li>
<blockquote>
<p>for the remaining number of epochs it will be <code>swa_lr=0.05</code></p>
</blockquote>
<p>This is partially true, during the second part - from epoch <em>160</em> - the optimizer's learning rate will be handled by the second scheduler <code>swa_scheduler</code>. This one is initialized as a <a href="https://pytorch.org/docs/stable/optim.html#swa-learning-rate-schedules" rel="noreferrer"><code>torch.optim.swa_utils.SWALR</code></a>. You can read on the documentation page:</p>
<blockquote>
<p>SWALR is a learning rate scheduler that <strong>anneals the learning rate to a fixed value [<code>swa_lr</code>], and then keeps it constant</strong>.</p>
</blockquote>
<p>By default (cf. <a href="https://github.com/pytorch/pytorch/blob/master/torch/optim/swa_utils.py#L212" rel="noreferrer">source code</a>), the number of epochs before annealing is equal to <em>10</em>. Therefore the learning rate from epoch <em>170</em> to epoch <em>300</em> will be equal to <code>swa_lr</code> and will stay this way. The second part will be:</p>
<p><a href="https://i.stack.imgur.com/Aaoz8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Aaoz8.png" alt="enter image description here" /></a></p>
<p>This complete profile, <em>i.e.</em> both parts:</p>
<p><a href="https://i.stack.imgur.com/B5ZGS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B5ZGS.png" alt="enter image description here" /></a></p>
</li>
</ul>
<hr />
<ul>
<li>
<blockquote>
<p>If yes, is it a good idea to define <code>swa_lr</code> also to <code>1e-4</code></p>
</blockquote>
<p>It is mentioned in the docs:</p>
<blockquote>
<p>Typically, in SWA the learning rate is set to a high constant value.</p>
</blockquote>
<p>Setting <code>swa_lr</code> to <code>1e-4</code> would result in the following learning-rate profile:</p>
<p><a href="https://i.stack.imgur.com/7YMj6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7YMj6.png" alt="enter image description here" /></a></p>
</li>
</ul>
|
python|machine-learning|optimization|pytorch
| 8
|
1,946
| 68,552,532
|
Series objects are mutable, thus they cannot be hashed on Python pandas dataframe
|
<p>I have the following dataframe:</p>
<p>df1:</p>
<pre><code> Revenue Earnings Date
Year
2017 43206832000 4608790000 2017-01-01
2018 43462740000 8928258000 2018-01-01
2019 44268171000 5001014000 2019-01-01
2020 43126472000 4770527000 2020-01-01
</code></pre>
<p>I am using an API to get the exchange rate; the API is CurrencyConverter, the link is:
<a href="https://pypi.org/project/CurrencyConverter/" rel="nofollow noreferrer">https://pypi.org/project/CurrencyConverter/</a></p>
<p>I am trying to add a column to my dataframe to show me the exchange rate of that date, I used the method:</p>
<pre><code>c.convert(100, 'EUR', 'USD', date=date(2013, 3, 21))
</code></pre>
<p>My code is:</p>
<pre><code>c = CurrencyConverter()
earnings['exchange_rate'] = c.convert(1, 'BRL', 'USD', earnings['Date'])
print(earnings)
</code></pre>
<p>I get an answer that says:</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>I would like to get the following:</p>
<pre><code> Revenue Earnings Date exchange_rate
Year
2017 43206832000 4608790000 2017-01-01 0.305
2018 43462740000 8928258000 2018-01-01 0.305
2019 44268171000 5001014000 2019-01-01 0.295
2020 43126472000 4770527000 2020-01-01 0.249
</code></pre>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>from currency_converter import CurrencyConverter
# if "Date" column isn't already converted:
df["Date"] = pd.to_datetime(df["Date"])
c = CurrencyConverter(fallback_on_missing_rate=True) # without fallback_on_missing_rate=True I get `BRL has no rate for 2017-01-01` error.
df["exchange_rate"] = df["Date"].apply(lambda x: c.convert(1, "BRL", "USD", x))
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Revenue Earnings Date exchange_rate
Year
2017 43206832000 4608790000 2017-01-01 0.306034
2018 43462740000 8928258000 2018-01-01 0.304523
2019 44268171000 5001014000 2019-01-01 0.258538
2020 43126472000 4770527000 2020-01-01 0.249114
</code></pre>
|
python|pandas|dataframe|api
| 1
|
1,947
| 52,983,468
|
How is the data of a numpy.array stored?
|
<p>This is my simple test code:</p>
<pre><code>data = np.arange(12, dtype='int32').reshape(2,2,3);
</code></pre>
<p>so the data is:</p>
<pre><code>array([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]]], dtype=int32)
</code></pre>
<p>but why does <code>data.data[:48]</code> look like this:</p>
<p>'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\t\x00\x00\x00\n\x00\x00\x00\x0b\x00\x00\x00'</p>
<p>I mean why are '9','10' stored as '\t\x00\x00\x00' and '\n\x00\x00\x00'?</p>
|
<p><code>\t</code> is the tab character, of <a href="http://man7.org/linux/man-pages/man7/ascii.7.html" rel="nofollow noreferrer">ASCII</a> value 9. <code>\n</code> is the LF character, of ASCII value 10. <code>\x00</code> is a NUL character, of ascii value 0. Thus,</p>
<p>'\t\x00\x00\x00' represents a sequence of bytes [9, 0, 0, 0], which is a little-endian representation of a long integer 9.</p>
<p>'\n\x00\x00\x00' represents a sequence of bytes [10, 0, 0, 0], which is a little-endian representation of a long integer 10.</p>
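<p>You can verify this directly (a little sketch, assuming a little-endian machine): each <code>int32</code> occupies 4 bytes, so elements 9 and 10 start at byte offsets 36 and 40.</p>
<pre><code>import numpy as np

data = np.arange(12, dtype='int32')
raw = data.tobytes()                      # the same bytes exposed by data.data
print(raw[36:44])                         # b'\t\x00\x00\x00\n\x00\x00\x00' -> 9 and 10
print(np.frombuffer(raw, dtype='<i4'))    # round-trips back to 0..11
</code></pre>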
|
python|numpy
| 3
|
1,948
| 53,198,344
|
Pandas DataFrame: difference between rolling and expanding function
|
<p>Can anyone help me understand the difference between rolling and expanding function from the example given in the pandas docs.</p>
<pre><code>df = DataFrame({'B': [0, 1, 2, np.nan, 4]})
df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
df.expanding(2).sum()
B
0 NaN # 0 + NaN
1 1.0 # 1 + 0
2 3.0 # 2 + 1
3 3.0 # ??
4 7.0 # ??
df.rolling(2).sum()
B
0 NaN # 0 + NaN
1 1.0 # 1 + 0
2 3.0 # 2 + 1
3 NaN # NaN + 2
4 NaN # 4 + NaN
</code></pre>
<p>I added a comment to each row to show my understanding of the calculation. Is that correct for the <code>rolling</code> function? What about <code>expanding</code>? Where are the 3 and 7 in the 3rd and 4th rows coming from?</p>
|
<p>The 2 in <code>expanding</code> is <code>min_periods</code> not the <code>window</code> </p>
<pre><code>df.expanding(min_periods=1).sum()
Out[117]:
B
0 0.0
1 1.0
2 3.0
3 3.0
4 7.0
</code></pre>
<p>If you want the same result with <code>rolling</code> <code>window</code> will be equal to the length of dataframe </p>
<pre><code>df.rolling(window=len(df),min_periods=1).sum()
Out[116]:
B
0 0.0
1 1.0
2 3.0
3 3.0
4 7.0
</code></pre>
|
python|pandas
| 4
|
1,949
| 53,104,887
|
Access internal tensors and add a new node to a tflite model?
|
<p>I am fairly new to TensorFlow and TensorFlow Lite. I have followed the tutorials on how to quantize and convert the model to fixed point calculations using <code>toco</code>. Now I have a <code>tflite</code> file which is supposed to perform only fixed point operations. I have two questions</p>
<ol>
<li>How do I test this in python? How do i access all the operations and results in the tflite file?</li>
<li>Is there a way to add a new node or operation in this tflite file? If so how?</li>
</ol>
<p>I would be really grateful if anyone could guide me.</p>
<p>Thanks and Regards,<br>
Abhinav George</p>
|
<blockquote>
<p>Is there a way to add a new node or operation in this tflite file? If so how?</p>
</blockquote>
<p>Unfortunately, <strong>no</strong>, and it is actually a <em>good thing</em>. TF-Lite was designed to be extremely light yet effective, using mapped files, flat buffers, a static execution plan and so on to decrease the memory footprint. The cost of that is that you lose any flexibility of TensorFlow.</p>
<p>TF-Lite is a framework for deployment. However, earlier on <a href="https://www.youtube.com/watch?v=ByJnpbDd-zc" rel="nofollow noreferrer">Google IO</a>, the TF team mentioned the possibility of on-device training, so maybe some kind of flexibility will be available in the future, but not now.</p>
<hr>
<blockquote>
<p>How do I test this in python? How do i access all the operations and results in the tflite file?</p>
</blockquote>
<p>You <strong>cannot</strong> access all internal operations, only inputs and outputs. The reason is simple: <em>the internal tensors wouldn't be saved, since the memory sections for them are also used for other operations</em> (which is why the memory footprint of it is so low).</p>
<p>If you just want to see the outputs, you can use the Python API as below (the code is self explanatory):</p>
<pre class="lang-python prettyprint-override"><code>import pprint
from tensorflow.contrib.lite.python import interpreter as interpreter_wrapper
# Load the model and allocate the static memory plan
interpreter = interpreter_wrapper.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
# print out the input details
input_details = interpreter.get_input_details()
print('input_details:')
pp = pprint.PrettyPrinter(indent=2)
pp.pprint(input_details)
# print out the output details
output_details = interpreter.get_output_details()
print('output_details:')
pp = pprint.PrettyPrinter(indent=2)
pp.pprint(output_details)
# set input (img is a `numpy array`)
interpreter.set_tensor(input_details[0]['index'], img)
# forward pass
interpreter.invoke()
# get output of the network
output = interpreter.get_tensor(output_details[0]['index'])
</code></pre>
<hr>
<blockquote>
<p>What if I call <code>interpreter.get_tensor</code> for non-input and non-output tensors?</p>
</blockquote>
<p>You will not get the actual data contained in that tensor after execution of the corresponding operation. As mentioned earlier, the memory sections for tensors are shared with other tensors for maximum efficiency. </p>
|
python|tensorflow|tensorflow-lite
| 2
|
1,950
| 53,260,050
|
How to Group by column value in Pandas Data frame
|
<p>I have a pandas dataframe like this. I want to group by App_Name into separate variables.</p>
<pre><code>App_Name Date Response Gross Revenue
com.apple.tiles2 2018-10-13 3748.723574 24133394
com.orange.thescore 2018-10-13 2034.611964 8273607
com.number.studio 2018-10-13 1807.756545 33736740
com.orange.thescore 2018-10-14 4671.930435 38575556
com.number.studio 2018-10-14 3533.461547 38726087
com.banana.com 2018-10-14 2920.33747 86230313
com.apple.tiles2 2018-10-15 3986.434851 35928884
com.number.studio 2018-10-15 2044.759823 76526368
com.apple.tiles2 2018-10-16 2610.214035 30611434
com.alpha.studio 2018-10-16 1731.429858 11643154
com.banana.com 2018-10-16 1601.387403 13781285
com.alpha.studio 2018-10-17 2769.373388 13198984
com.banana.com 2018-10-17 2205.359489 21974901
com.orange.thescore 2018-10-17 1820.852862 7565015
com.alpha.studio 2018-10-18 2784.822039 24217875
com.banana.com 2018-10-18 2545.899329 28361412
com.orange.thescore 2018-10-18 2052.207745 7544861
</code></pre>
<p>I want to group the data by App_Name and store a separate list or dataframe for each App_Name, something like this:</p>
<pre><code>App_Name Date Response Gross Revenue
com.alpha.studio 2018-10-16 1731.429858 11643154
com.alpha.studio 2018-10-17 2769.373388 13198984
com.alpha.studio 2018-10-18 2784.822039 24217875
App_Name Date Response Gross Revenue
com.apple.tiles2 2018-10-13 3748.723574 24133394
com.apple.tiles2 2018-10-15 3986.434851 35928884
com.apple.tiles2 2018-10-16 2610.214035 30611434
App_Name Date Response Gross Revenue
com.banana.com 2018-10-14 2920.33747 86230313
com.banana.com 2018-10-16 1601.387403 13781285
com.banana.com 2018-10-17 2205.359489 21974901
com.banana.com 2018-10-18 2545.899329 28361412
App_Name Date Response Gross Revenue
com.number.studio 2018-10-14 3533.461547 38726087
com.number.studio 2018-10-13 1807.756545 33736740
com.number.studio 2018-10-15 2044.759823 76526368
App_Name Date Response Gross Revenue
com.orange.thescore 2018-10-13 2034.611964 8273607
com.orange.thescore 2018-10-14 4671.930435 38575556
com.orange.thescore 2018-10-17 1820.852862 7565015
com.orange.thescore 2018-10-18 2052.207745 7544861
</code></pre>
|
<p>Convert <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> object to dictionary of DataFrames:</p>
<pre><code>d = dict(tuple(df.groupby('App_Name')))
print (d['com.alpha.studio'])
App_Name Date Response Gross Revenue
9 com.alpha.studio 2018-10-16 1731.429858 11643154 NaN
11 com.alpha.studio 2018-10-17 2769.373388 13198984 NaN
14 com.alpha.studio 2018-10-18 2784.822039 24217875 NaN
</code></pre>
<p>EDIT:</p>
<pre><code>d1 = {}
for k, v in d.items():
d1[k] = v['Gross Revenue'].rolling(2).mean()
</code></pre>
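<p>If splitting into a dictionary is only a means to compute something per app, a sketch of the same rolling mean without the split (it returns a single Series indexed by <code>App_Name</code> and the original row index):</p>
<pre><code>rolling_rev = df.groupby('App_Name')['Gross Revenue'].rolling(2).mean()
</code></pre>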
|
python|pandas|pandas-groupby|data-science
| 3
|
1,951
| 65,565,974
|
How to update the empty dataframe values with values from another dataframe(Pandas)?
|
<p>I want to update empty rows in dataframe1 with the equivalent values from dataframe2, but only if the rows in dataframe1 are empty.</p>
<p>Cases:</p>
<p><a href="https://i.stack.imgur.com/HD5CI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HD5CI.png" alt="Dataframe1" /></a></p>
<p>fig 1</p>
<p><a href="https://i.stack.imgur.com/r4neN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r4neN.png" alt="Dataframe2" /></a></p>
<p>fig 2</p>
<p>In the above example, I want to fill only the empty rows of the Price column in dataframe1 with the equivalent Price values from dataframe2.</p>
<p>Any ideas or suggestions for this?</p>
<pre><code>import pandas as pd
df1 = pd.read_csv('dataframe1.csv')
df2 = pd.read_csv('dataframe2.csv')
</code></pre>
|
<p>Try this:</p>
<pre><code> df2dict = df2.set_index(['Product Code'])['Price'].squeeze().to_dict()
# maps from df2['Product Code'] to empty columns in df
df['Price'] = df['Price'].fillna(df['Product Code'].map(df2dict))
</code></pre>
|
python|python-3.x|pandas|dataframe|numpy
| 2
|
1,952
| 65,820,991
|
Melt multiple columns to correspondent columns based on prefix in Pandas
|
<p>Assuming the following dataframe:</p>
<pre><code> id transaction seller0 seller1 seller2 buyer0 buyer1
0 1 Subject1 Tim Jamie Melissa Rosie NaN
1 2 Subject2 Rima Derren NaN Annalise Hania
2 3 Subject3 Rosa NaN NaN Joshua NaN
</code></pre>
<p>How could I reshape it to the following format? ie, <code>seller0 seller1 seller2</code> to <code>seller</code>, and <code>buyer0 buyer1</code> to <code>buyer</code> column for each transaction.</p>
<p>The output needed:</p>
<pre><code> id transaction seller buyer
0 1 Subject1 Tim Rosie
1 1 Subject1 Jamie NaN
2 1 Subject1 Melissa NaN
3 2 Subject2 Rima Annalise
4 2 Subject2 Derren Hania
5 3 Subject3 Rosa Joshua
</code></pre>
<p>Code:</p>
<pre><code>df.melt(['id', 'transaction'], value_name = 'seller').drop('variable', 1)
</code></pre>
<p>Out:</p>
<pre><code> id transaction seller
0 1 Subject1 Tim
1 2 Subject2 Rima
2 3 Subject3 Rosa
3 1 Subject1 Jamie
4 2 Subject2 Derren
5 3 Subject3 NaN
6 1 Subject1 Melissa
7 2 Subject2 NaN
8 3 Subject3 NaN
9 1 Subject1 Rosie
10 2 Subject2 Annalise
11 3 Subject3 Joshua
12 1 Subject1 NaN
13 2 Subject2 Hania
14 3 Subject3 NaN
</code></pre>
<p><strong>Updated output desired:</strong></p>
<pre><code> id transaction type name
0 1 Subject1 seller Tim
1 1 Subject1 seller Jamie
2 1 Subject1 seller Melissa
3 2 Subject2 seller Rima
4 2 Subject2 seller Derren
5 3 Subject3 seller Rosa
6 1 Subject1 buyer Rosie
7 2 Subject2 buyer Annalise
8 2 Subject2 buyer Hania
9 3 Subject3 buyer Joshua
</code></pre>
|
<p>Use <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiynYLhkqzuAhUH4jgGHZ_tDiIQFjADegQIARAC&url=https%3A%2F%2Fpandas.pydata.org%2Fpandas-docs%2Fstable%2Freference%2Fapi%2Fpandas.wide_to_long.html&usg=AOvVaw2x0C-DKYYiPjJpu1JxRkeW" rel="nofollow noreferrer">wide_to_long</a></p>
<pre><code>(
pd.wide_to_long(df,
stubnames=["seller", "buyer"],
i=["id", "transaction"],
j="num")
.dropna(how="all")
.droplevel(level=-1)
.reset_index()
)
id transaction seller buyer
0 1 Subject1 Tim Rosie
1 1 Subject1 Jamie NaN
2 1 Subject1 Melissa NaN
3 2 Subject2 Rima Annalise
4 2 Subject2 Derren Hania
5 3 Subject3 Rosa Joshua
</code></pre>
<p>You could also use <a href="https://pyjanitor.readthedocs.io/reference/janitor.functions/janitor.pivot_longer.html#janitor.pivot_longer" rel="nofollow noreferrer">pivot_longer</a> function from <a href="https://pyjanitor.readthedocs.io/index.html" rel="nofollow noreferrer">pyjanitor</a>; at the moment you have to install the latest development version from <a href="https://github.com/ericmjl/pyjanitor" rel="nofollow noreferrer">github</a>:</p>
<pre><code> # install latest dev version
# pip install git+https://github.com/ericmjl/pyjanitor.git
import janitor
(
df.pivot_longer(index=["id", "transaction"],
names_to=".value",
names_pattern=r"([a-z]+)\d")
.dropna(subset=["seller", "buyer"], how="all")
)
id transaction seller buyer
0 1 Subject1 Tim Rosie
1 2 Subject2 Rima Annalise
2 3 Subject3 Rosa Joshua
3 1 Subject1 Jamie NaN
4 2 Subject2 Derren Hania
6 1 Subject1 Melissa NaN
</code></pre>
<p>Update:</p>
<p>For your updated result, you can stack and do some minor adjustments:</p>
<pre><code>(
df.set_index(["id", "transaction"])
.stack()
.rename_axis(["id", "transaction", "type"])
.reset_index(name="name")
.assign(type=lambda df: df["type"].str[:-1])
)
id transaction type name
0 1 Subject1 seller Tim
1 1 Subject1 seller Jamie
2 1 Subject1 seller Melissa
3 1 Subject1 buyer Rosie
4 2 Subject2 seller Rima
5 2 Subject2 seller Derren
6 2 Subject2 buyer Annalise
7 2 Subject2 buyer Hania
8 3 Subject3 seller Rosa
9 3 Subject3 buyer Joshua
</code></pre>
<p>you could also use <a href="https://pyjanitor.readthedocs.io/reference/janitor.functions/janitor.pivot_longer.html#janitor.pivot_longer" rel="nofollow noreferrer">pivot_longer</a>:</p>
<pre><code> result = df.pivot_longer(index=["id", "transaction"],
names_to="type",
names_pattern=r"([a-z]+)\d",
values_to="name").dropna()
result
id transaction type name
0 1 Subject1 seller Tim
1 2 Subject2 seller Rima
2 3 Subject3 seller Rosa
3 1 Subject1 seller Jamie
4 2 Subject2 seller Derren
6 1 Subject1 seller Melissa
9 1 Subject1 buyer Rosie
10 2 Subject2 buyer Annalise
11 3 Subject3 buyer Joshua
13 2 Subject2 buyer Hania
</code></pre>
<p>In both instances, you are trying to completely get rid of the null entries. Stack by default removes the null entries.</p>
|
python-3.x|pandas|dataframe
| 3
|
1,953
| 53,567,745
|
installing tensorflow on anaconda (error occurred : matplotlib and user permission(13))
|
<p>While installing tensorflow in an Anaconda (version 5.3.0) cmd environment by running <code>conda install -c conda-forge tensorflow</code>, it came up with the following error:</p>
<p><a href="https://i.stack.imgur.com/LW9PW.png" rel="nofollow noreferrer">click here for print screen of error</a></p>
<p>I have tried following what others have said about <code>'This process cannot access the file because it is being used by another process - permission error(13)'</code> and ran it as admin as has been suggested, but I still get the same errors shown in the screenshot above.</p>
|
<p>Fixed it by installing Python 3.6.7 (a downgrade from 3.7).</p>
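<p>If you are managing Python through conda, the downgrade can presumably be done inside the environment with something like the following (the exact command is an assumption, not part of the original answer):</p>
<pre><code>conda install python=3.6.7
</code></pre>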
|
python|tensorflow
| 0
|
1,954
| 71,865,538
|
When concatenating DataFrame and Series, the series is inserted in "vertical"
|
<p>I'm iterating over a DataFrame with <code>DataFrame.iterrows()</code>:</p>
<pre><code>for index, row_to_append in source_dataframe.iterrows():
</code></pre>
<p>Then I want to add some of the rows to some other DataFrames. For the appending I'm using:</p>
<pre><code>pd.concat([destination_dataframe, row_to_append], axis=0, join='outer', ignore_index=True)
</code></pre>
<p>But the concatenation works in the wrong way, in fact given the <code>destination_dataframe</code> as:</p>
<pre><code> 0 1 2 3 ... 5 6 7 class
0 0.470588 0.896774 0.408163 0.239130 ... 0.104294 0.253629 0.183333 yes
1 0.000000 0.600000 0.163265 0.304348 ... 0.509202 0.943638 0.200000 yes
2 0.176471 0.219355 0.265306 0.271739 ... 0.261759 0.072588 0.083333 yes
3 0.117647 0.987097 0.469388 0.413043 ... 0.251534 0.034159 0.533333 yes
4 0.058824 0.264516 0.428571 0.239130 ... 0.171779 0.116567 0.166667 no
5 0.058824 0.290323 0.428571 0.173913 ... 0.202454 0.038002 0.000000 no
6 0.294118 0.464516 0.510204 0.239130 ... 0.151329 0.052519 0.150000 no
</code></pre>
<p>And the <code>row_to_append</code>:</p>
<pre><code>(0, 0.411765) (1, 0.36129) (2, 0.489796) (3, 0.23913) (4, 0.169471) (5, 0.241309) (6, 0.173356) (7, 0.183333) ('class', 'yes')
</code></pre>
<p>The output of the concatenation is:</p>
<pre><code> 0 1 2 ... 6 7 class
0 0.470588 0.896774 0.408163 ... 0.253629 0.183333 yes
1 0.0 0.600000 0.163265 ... 0.943638 0.200000 yes
2 0.176471 0.219355 0.265306 ... 0.072588 0.083333 yes
3 0.117647 0.987097 0.469388 ... 0.034159 0.533333 yes
4 0.058824 0.264516 0.428571 ... 0.116567 0.166667 no
5 0.058824 0.290323 0.428571 ... 0.038002 0.000000 no
6 0.294118 0.464516 0.510204 ... 0.052519 0.150000 no
0 0.411765 NaN NaN ... NaN NaN NaN
1 0.36129 NaN NaN ... NaN NaN NaN
2 0.489796 NaN NaN ... NaN NaN NaN
3 0.23913 NaN NaN ... NaN NaN NaN
4 0.169471 NaN NaN ... NaN NaN NaN
5 0.241309 NaN NaN ... NaN NaN NaN
6 0.173356 NaN NaN ... NaN NaN NaN
7 0.183333 NaN NaN ... NaN NaN NaN
class yes NaN NaN ... NaN NaN NaN
</code></pre>
<p>Where the row gets added "vertically" and not horizontally.</p>
<p>While I want:</p>
<pre><code> 0 1 2 ... 6 7 class
0 0.470588 0.896774 0.408163 ... 0.253629 0.183333 yes
1 0.0 0.600000 0.163265 ... 0.943638 0.200000 yes
2 0.176471 0.219355 0.265306 ... 0.072588 0.083333 yes
3 0.117647 0.987097 0.469388 ... 0.034159 0.533333 yes
4 0.058824 0.264516 0.428571 ... 0.116567 0.166667 no
5 0.058824 0.290323 0.428571 ... 0.038002 0.000000 no
6 0.294118 0.464516 0.510204 ... 0.052519 0.150000 no
7 0.411765 0.36129 0.489796 ... 0.173356 0.183333 yes <-------
</code></pre>
<p>I tried changing the <code>axis</code>, <code>join</code> and <code>ignore_index</code> parameters but still didn't get the result.</p>
|
<p>Each row yielded by <code>iterrows()</code> is a Series, and <code>pd.concat</code> treats a Series as a single column, which is why it gets appended vertically. Convert <code>row_to_append</code> to a one-row DataFrame first:</p>
<pre><code>row_to_append = pd.Series(row_to_append).to_frame().T
out = pd.concat([destination_dataframe, row_to_append], axis=0, join='outer', ignore_index=True)
</code></pre>
|
python|pandas|dataframe|append|concatenation
| 1
|
1,955
| 72,001,640
|
python: processing data so that only constant values remain
|
<p>I have data from a measurement and I want to process the data so that only the constant values remain. The measured signal consists of parts where the value stays constant for some time; then I make a change to the system that causes the value to increase. It takes time for the system to reach a constant value after the adjustment I make.<br />
I wrote a program that compares every value with the 10 previous values. If it is equal to them within a tolerance, it gets saved.</p>
<p>The code works, but I feel this can be done more cleanly and efficiently so that it is suitable for processing larger amounts of data. I don't know how to make the for-loop more efficient. Do you have any suggestions?</p>
<p>Thank you in advance.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('radiale Steifigkeit_22_04_2022_raw.csv',
sep= ";",
decimal = ',',
skipinitialspace=True,
comment = '\t')
#df = df.drop(df.columns[[0,4]], axis=1)
#print(df.head())
#print(df.dtypes)
#df.plot(x = 'Time_SYS 01-cDAQ:1_A-In-All_Rec_rel', y = 'Kraft')
#df.plot(x = 'Time_SYS 01-cDAQ:1_A-In-All_Rec_rel', y = 'Weg')
#plt.show()
s = pd.Series(df['Weg'], name = 'Weg')
f = pd.Series(df['Kraft'], name= 'Kraft')
t = pd.Series(df['Time_SYS 01-cDAQ:1_A-In-All_Rec_rel'], name= 'Zeit')
#s_const = pd.Series()
s_const = []
f_const = []
t_const = []
s = np.abs(s)
#plt.plot(s)
#plt.show()
c = 0
#this for-loop compares the value s[i] with the previous 10 measurements.
#If it is equal within a tolerance it is saved into s_const.
for i in range(len(s)):
#for i in range(0,2000):
if i > 10:
si = round(s[i],3)
s1i = round(s[i-1],3)
s2i = round(s[i-2],3)
s3i = round(s[i-3],3)
s4i = round(s[i-4],3)
s5i = round(s[i-5],3)
s6i = round(s[i-6],3)
s7i = round(s[i-7],3)
s8i = round(s[i-8],3)
s9i = round(s[i-9],3)
s10i = round(s[i-10],3)
if si == s1i == s2i == s3i == s4i == s5i== s6i == s7i== s8i == s9i == s10i:
c = c+1
s_const.append(s[i])
f_const.append(f[i])
</code></pre>
|
<p>Here is a very performant implementation using itertools (based on <a href="https://stackoverflow.com/questions/3844801/check-if-all-elements-in-a-list-are-identical">Check if all elements in a list are identical</a>):</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby
def all_equal(iterable):
g = groupby(iterable)
return next(g, True) and not next(g, False)
data = [1, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5]
window = 3
stable = [i for i in range(len(data) - window + 1) if all_equal(data[i:i+window])]
print(stable) # -> [1, 2, 7, 8, 9, 10, 13]
</code></pre>
<p>The algorithm produces a list of indices in your data where a stable period of length <code>window</code> starts.</p>
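<p>Applied to the question's data, one possible sketch is to round first (the rounding in the original loop is what provides the tolerance) and then collect the values where a stable window starts. This assumes <code>s</code> and <code>f</code> are the Series defined in the question; note that the window here starts at <code>i</code> rather than ending at <code>i</code> as in the original loop.</p>
<pre class="lang-py prettyprint-override"><code># sketch: keep the points where a stable run of 10 rounded values begins
window = 10
rounded = s.round(3).tolist()
stable = [i for i in range(len(rounded) - window + 1) if all_equal(rounded[i:i + window])]
s_const = [s[i] for i in stable]
f_const = [f[i] for i in stable]
</code></pre>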
|
python|pandas|dataframe
| 0
|
1,956
| 71,872,873
|
Tensorflow CNN Image Classification - Using ImageDataGenerator and then Next&Model.fit gives error
|
<p>I have a CNN model which basically processes images and classifies them at the end. There are four class labels: UN, D1, D2 and D3. If you look at train_batches, you will see that it already labels them as an integer from 0 to 3. Before I send those images into the CNN model, I have been doing preprocessing. Those batches give me the correct number of images and classes. However, when I'd like to plot them to check whether I am doing anything wrong (using the next function), or when I run model.fit_generator(x=train_batches, validation_data=valid_batches, epochs=5, verbose=2), it says "IndexError: index 1 is out of bounds for axis 2 with size 1".</p>
<p>I was suspecting there might be some "number of class" incompatibility but I could not figure it out.</p>
<p><a href="https://i.stack.imgur.com/jco1c.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jco1c.jpg" alt="train_batches" /></a>
<a href="https://i.stack.imgur.com/cNjxt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cNjxt.jpg" alt="ERROR" /></a>
<a href="https://i.stack.imgur.com/g2GkQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g2GkQ.jpg" alt="Train, Test and Validation files" /></a>
<a href="https://i.stack.imgur.com/JGaRR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JGaRR.jpg" alt="what train file looks like" /></a></p>
<pre><code>train_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).\
flow_from_directory(directory=train_path, color_mode='grayscale', target_size=(24, 1000),
classes=['UN', 'D1', 'D2', 'D3'], shuffle=True, batch_size=5)
test_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).\
flow_from_directory(directory=test_path, color_mode='grayscale', target_size=(24, 1000),
class_mode=None, shuffle=False, batch_size=1)
valid_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input).\
flow_from_directory(directory=valid_path, color_mode='grayscale', target_size=(24, 1000),
classes=['UN', 'D1', 'D2', 'D3'], shuffle=True, batch_size=5)
assert train_batches.n == 240
assert test_batches.n == 40
assert valid_batches.n == 41
assert train_batches.num_classes == valid_batches.num_classes == test_batches.num_classes == 4
model.compile(optimizer=Adam(learning_rate=0.0001), loss='SparseCategoricalCrossentropy', metrics=['accuracy'])
step_size_train = train_batches.n // train_batches.batch_size
step_size_valid = valid_batches.n // valid_batches.batch_size
model.fit_generator(generator=train_batches, steps_per_epoch=step_size_train, validation_data=valid_batches,
validation_steps=step_size_valid, epochs=10)
</code></pre>
|
<p>You are getting the error because <code>tf.keras.applications.vgg16.preprocess_input</code> takes an input tensor with 3 channels, according to its <a href="https://www.tensorflow.org/api_docs/python/tf/keras/applications/vgg16/preprocess_input" rel="nofollow noreferrer">documentation</a>. You don't need this function since you're training your model from scratch. Passing rescale=1/255 in the ImageDataGenerator call will be fine for basic preprocessing.</p>
<pre><code>train_batches = ImageDataGenerator(rescale=1/255).flow_from_directory(directory=train_path,
color_mode='grayscale',
target_size=(256,256),
classes=['UN', 'D1', 'D2', 'D3'],
shuffle=True,
batch_size=5)
</code></pre>
<p>Also, as you are working with images, the input shape in the model will be the shape of the image: for example <code>(256,256,1)</code> for grayscale input (or <code>(256,256,3)</code> for RGB), and the target size will be <code>(256,256)</code>.</p>
<p>Let us know if the issue still persists. Thanks!</p>
|
tensorflow|image-processing|conv-neural-network
| 0
|
1,957
| 71,886,922
|
NameError: name 'mode' is not defined
|
<p>Bonjour,</p>
<pre><code>import pandas as pd
import numpy as np
</code></pre>
<p>df is:</p>
<pre><code> month year sale name
0 1 2012 55 A
1 4 2014 40 B
2 7 2013 84 C
3 10 2014 31 d
</code></pre>
<p>code is:</p>
<pre><code>agg_func_text = {'name': [ 'nunique', mode, set]}
df.groupby(['year']).agg(agg_func_text)
</code></pre>
<p>That produces:</p>
<pre><code>---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Input In [66], in <cell line: 1>()
----> 1 agg_func_text = {'name': [ 'nunique', mode, set]}
2 df.groupby(['year']).agg(agg_func_text)
</code></pre>
<p>NameError: name 'mode' is not defined</p>
<p>Something is wrong but what?</p>
<p>Regards,
Atapalou</p>
|
<p><code>mode</code> is a Series method; <code>groupby</code> objects don't have it. You have to specify that <code>Series.mode</code> is the one you want to call.</p>
<pre><code>agg_func_text = {'name': [ 'nunique', pd.Series.mode, set]}
out = df.groupby(['year']).agg(agg_func_text)
</code></pre>
<p>Output:</p>
<pre><code> name
nunique mode set
year
2012 1 A {A}
2013 1 C {C}
2014 2 [B, d] {B, d}
</code></pre>
|
pandas|numpy
| 1
|
1,958
| 72,101,088
|
Display column names in seaborn boxplot
|
<p>I have this code.</p>
<pre><code>l = df.columns.values
number_of_columns=5
number_of_rows = len(l)-1/number_of_columns
plt.figure(figsize=(number_of_columns * 2, 5*number_of_rows))
for i in range(0,len(l)):
plt.subplot(number_of_rows + 1, number_of_columns, i+1)
sns.set_style('whitegrid')
sns.boxplot(data=df[l[i]], color='green', orient='v')
plt.tight_layout()
</code></pre>
<p>This is the output.</p>
<p><a href="https://i.stack.imgur.com/2kecq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2kecq.png" alt="enter image description here" /></a></p>
<p>I want the output to display the column/feature name like the following image.</p>
<p><a href="https://i.stack.imgur.com/GMXxz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GMXxz.png" alt="enter image description here" /></a></p>
<p>Can someone tell me how to do that?</p>
<p>Thanks</p>
|
<p>You should collect the <a href="https://matplotlib.org/stable/api/axes_api.html#matplotlib-axes" rel="nofollow noreferrer"><code>matplotlib.axes</code></a> object returned by <a href="https://seaborn.pydata.org/generated/seaborn.boxplot.html" rel="nofollow noreferrer"><code>sns.boxplot</code></a> and <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.set_ylabel.html" rel="nofollow noreferrer">add a name to its y-axis</a>:</p>
<pre class="lang-py prettyprint-override"><code> #...
ax = sns.boxplot(data=df[l[i]], color='green', orient='v')
ax.set_ylabel(df.columns[i])
#...
</code></pre>
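<p>Putting it together, a minimal end-to-end sketch (assuming <code>df</code> is the numeric DataFrame from the question) could look like this:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns

cols = df.columns.values
n_cols = 5
n_rows = (len(cols) - 1) // n_cols + 1

plt.figure(figsize=(n_cols * 2, 5 * n_rows))
sns.set_style('whitegrid')
for i, col in enumerate(cols):
    ax = plt.subplot(n_rows, n_cols, i + 1)
    sns.boxplot(data=df[col], color='green', orient='v', ax=ax)
    ax.set_ylabel(col)          # label each subplot with its column name
plt.tight_layout()
plt.show()
</code></pre>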
|
python|pandas|matplotlib|seaborn
| 1
|
1,959
| 47,385,719
|
Date Time difference and dataframe filtering
|
<p>I have Panda dataframe df of following structure, Start and End Time are string values.</p>
<pre><code> Start Time End Time
0 2007-07-24 22:00:00 2007-07-25 07:16:53
1 2007-07-25 07:16:55 2007-07-25 08:52:19
2 2007-07-25 09:45:53 2007-07-25 10:30:00
3 2007-07-25 12:32:00 2007-07-25 14:13:38
4 2007-07-25 22:59:00 2007-07-26 13:43:00
</code></pre>
<p>1- How to find the difference in Hours and Minutes between End Time and Start <br>
2- Query the dataframe to filter all rows having time less than 1 hour and 30 minutes <br>
3- Filter all rows having time difference between 20 minutes and 40 minutes <br></p>
|
<p><strong>Question 1</strong><br>
Use <code>pd.to_datetime</code>, and then subtract the columns.</p>
<pre><code>for c in df.columns:
df[c] = pd.to_datetime(df[c])
(df['End Time'] - df['Start Time']).dt.total_seconds() / 3600
0 9.281389
1 1.590000
2 0.735278
3 1.693889
4 14.733333
dtype: float64
</code></pre>
<p><strong>Question 2</strong><br>
Just use a mask and filter:</p>
<pre><code>v = (df['End Time'] - df['Start Time']).dt.total_seconds() / 3600
df[v < 1.5]
Start Time End Time
2 2007-07-25 09:45:53 2007-07-25 10:30:00
</code></pre>
<p>If I misunderstood, and you actually want to <em>retain</em> such rows, reverse the condition:</p>
<pre><code>df[v >= 1.5]
Start Time End Time
0 2007-07-24 22:00:00 2007-07-25 07:16:53
1 2007-07-25 07:16:55 2007-07-25 08:52:19
3 2007-07-25 12:32:00 2007-07-25 14:13:38
4 2007-07-25 22:59:00 2007-07-26 13:43:00
</code></pre>
<p><strong>Question 3</strong><br>
Again, use a mask and filter; 20 and 40 minutes are 1/3 and 2/3 of an hour:</p>
<pre><code>df[(1/3 <= v) & (v <= 2/3)]
</code></pre>
|
python|pandas
| 2
|
1,960
| 47,178,371
|
Where is the code for gradient descent?
|
<p>Running some experiments with TensorFlow, want to look at the implementation of some functions just to see exactly how some things are done, started with the simple case of <code>tf.train.GradientDescentOptimizer</code>. Downloaded the zip of the full source code from github, ran some searches over the source tree, got to:</p>
<pre><code>C:\tensorflow-master\tensorflow\python\training\gradient_descent.py
class GradientDescentOptimizer(optimizer.Optimizer):
def _apply_dense(self, grad, var):
return training_ops.apply_gradient_descent(
</code></pre>
<p>Okay, so presumably the actual code is in <code>apply_gradient_descent</code>, searched for that... not there. Only three occurrences in the entire source tree, all of which are uses, not definitions.</p>
<p>What about <code>training_ops</code>? There does exist a source file with a suggestive name:</p>
<pre><code>C:\tensorflow-master\tensorflow\python\training\training_ops.py
from tensorflow.python.training import gen_training_ops
# go/tf-wildcard-import
# pylint: disable=wildcard-import
from tensorflow.python.training.gen_training_ops import *
# pylint: enable=wildcard-import
</code></pre>
<p>... the above is the entire content of that file. Hmm.</p>
<p>I did find this file:</p>
<pre><code>C:\tensorflow-master\tensorflow\python\BUILD
tf_gen_op_wrapper_private_py(
name = "training_ops_gen",
out = "training/gen_training_ops.py",
)
</code></pre>
<p>which seems to confirm such and such other files are object code, generated in the build process - but where is the source code they are generated from?</p>
<p>So this is the point at which I give up and ask for help. Can anyone familiar with the TensorFlow code base point me to where the relevant source code is?</p>
|
<p>The implementation further goes to the native c++ code. Here's <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/training_ops_gpu.cu.cc#L29" rel="noreferrer"><code>ApplyGradientDescent</code></a> GPU implementation (<code>core/kernels/training_ops_gpu.cu.cc</code>):</p>
<pre class="lang-cpp prettyprint-override"><code>template <typename T>
struct ApplyGradientDescent<GPUDevice, T> {
void operator()(const GPUDevice& d, typename TTypes<T>::Flat var,
typename TTypes<T>::ConstScalar lr,
typename TTypes<T>::ConstFlat grad) {
Eigen::array<typename TTypes<T>::Tensor::Index, 1> bcast;
bcast[0] = grad.dimension(0);
Eigen::Sizes<1> single;
var.device(d) -= lr.reshape(single).broadcast(bcast) * grad;
}
};
</code></pre>
<p>CPU implementation is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/training_ops.cc" rel="noreferrer">here</a> (<code>core/kernels/training_ops.cc</code>):</p>
<pre class="lang-cpp prettyprint-override"><code>template <typename T>
struct ApplyGradientDescent<CPUDevice, T> {
void operator()(const CPUDevice& d, typename TTypes<T>::Flat var,
typename TTypes<T>::ConstScalar lr,
typename TTypes<T>::ConstFlat grad) {
var.device(d) -= grad * lr();
}
};
</code></pre>
|
python|tensorflow|machine-learning|gradient-descent
| 10
|
1,961
| 68,128,088
|
filter a pandas datafram based on the last rows of a key
|
<p>I have a pandas dataframe with a few thousand rows. The dataframe is ordered by two columns: name (about a hundred unique values) and date.
I want to create a subset of this dataframe that keeps only the last ~50 rows for each unique value of name.
So if I have:</p>
<pre><code> Name Date
0 A date1
1 A date2
2 A date3
3 A date4
4 A date5
5 A date6
6 A date7
7 A date8
8 B Date1
9 B Date2
10 B Date3
11 B Date4
12 B Date5
13 B Date6
14 B Date7
15 B Date8
</code></pre>
<p>i want to have just:</p>
<pre><code> Name Date
0 A date5
1 A date6
2 A date7
3 A date8
4 B Date5
5 B Date6
6 B Date7
7 B Date8
</code></pre>
<p>Each name does not have the same number of rows.</p>
<p>All my ideas so far involve too many loops; they work when I try them on a small sample, but I can't apply them to this many rows.
Any idea?</p>
|
<p>Use <code>GroupBy.tail</code> (4 here to match the example; use 50 for your real data):</p>
<pre><code>df.groupby("Name").tail(4).reset_index(drop=True)
Name Date
0 A date5
1 A date6
2 A date7
3 A date8
4 B Date5
5 B Date6
6 B Date7
7 B Date8
</code></pre>
|
python|pandas
| 1
|
1,962
| 68,067,438
|
Python: how to reshape a dataframe based on a condition?`
|
<p>I have a dataframe that looks like the following</p>
<pre><code>df
Name Val1 Val2
0 Mark 0 3
1 Mark 2 3
2 Mark 5 6
3 Mark 7 8
</code></pre>
<p>I would like to have something like this</p>
<pre><code>df
Name Val1_0 Val1_1 Val1_2 Val1_3 Val2_0 Val2_1 Val2_2 Val2_3
0 Mark 0 2 5 7 3 3 6 8
</code></pre>
|
<p>Try it via <code>set_index()</code>, <code>unstack()</code> and then <code>reset_index()</code>:</p>
<pre><code>out=df.set_index('Name',append=True).unstack(0)
</code></pre>
<p>Finally :</p>
<pre><code>out.columns=out.columns.map(lambda x:'_'.join(map(str,x)))
out=out.reset_index()
</code></pre>
<p>output of <code>out</code>:</p>
<pre><code> Name Val1_0 Val1_1 Val1_2 Val1_3 Val2_0 Val2_1 Val2_2 Val2_3
0 Mark 0 2 5 7 3 3 6 8
</code></pre>
|
python|pandas
| 2
|
1,963
| 68,354,602
|
How do I know which spectrogram frames belong to which audio samples?
|
<p>I’ve been using this script:</p>
<pre><code>spgram = torchaudio.transforms.Spectrogram(512, hop_length=32)
audio = spgram(audio)
</code></pre>
<p>to get the spectrogram of some stereo music audio. I expected the resulting spectrogram to have the shape [2, 257, audio.shape[1]/32]. However, that's not the case. For example, an audio clip with size [2, 199488] (with sr=24576) yields a spectrogram with size [2, 257, 6241] (note that 199488/32=6234). Why is that? And how can I convert from frame location to sample location?</p>
|
<p>See <code>center</code> parameter.</p>
<blockquote>
<p>whether to pad <code>waveform</code> on both sides so that the <code>t</code>-th frame is centered at time t x hop_length. (Default: <code>True</code>)</p>
</blockquote>
<p>So, by default, the signal is padded with zeros. The padding length is probably (<code>win_length - hop_length</code>). This ends up making the result longer by <code>(win_length - hop_length) / hop_length</code>, which is 7 in your case.</p>
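<p>A small sketch of the two behaviours (this assumes a torchaudio version that exposes the <code>center</code> flag on <code>Spectrogram</code>; the random tensor is just a stand-in for the clip in the question):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchaudio

audio = torch.randn(2, 199488)   # stand-in for the stereo clip
hop = 32

# center=True (the default): frame t is centered at sample t * hop
centered = torchaudio.transforms.Spectrogram(n_fft=512, hop_length=hop)(audio)

# center=False: no padding is added, frame t starts at sample t * hop
uncentered = torchaudio.transforms.Spectrogram(n_fft=512, hop_length=hop, center=False)(audio)

print(centered.shape, uncentered.shape)
</code></pre>
<p>So with the default settings, a frame index <code>t</code> maps back to the sample it is centered on via <code>t * hop_length</code>.</p>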
|
audio|pytorch|torchaudio
| 0
|
1,964
| 57,229,479
|
DataFrame: how can I groupby Z and calculate the mean X in Y range
|
<p>I have a data frame which includes 3 columns - <code>Test</code>, <code>X</code> and <code>Y</code>. I want to add new columns <code>Xmean</code> which include the mean value of <code>X</code> with a condition on <code>Y</code> for each <code>Test</code>.</p>
<p>For example <code>Xmean</code> include the mean value on <code>X</code> while <code>Y >= 5</code> for each <code>Test</code>.</p>
<p><a href="https://i.stack.imgur.com/Pcr5i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pcr5i.png" alt="enter image description here"></a></p>
|
<pre><code>import pandas as pd

df = pd.read_csv(r'Downloads\test.txt', delimiter=',', encoding='utf-8')

df_sort = df.sort_values("test")

df_filter = df_sort[df_sort['y'] >= 5]

# applying the aggregate function to find the mean
df_agg = df_filter.groupby(['test'])['x'].mean()

# join the two dataframes to get the desired output
df_final = pd.merge(df_sort[['test', 'x', 'y']], df_agg, on='test')

print(df_final)
</code></pre>
<p><a href="https://i.stack.imgur.com/gmsoV.png" rel="nofollow noreferrer">Output attached</a></p>
|
python-3.x|dataframe|pandas-groupby
| 0
|
1,965
| 57,255,942
|
object detection using tensorflow by own classifier
|
<p>When I try to train my model by running train.py for object detection from the tensorflow/models repository, using the command</p>
<pre><code>python train.py --logtostderr --train_dir=training_dir/ --pipeline_config_path=training/faster_rcnn_inception_resnet_v2_atrous_pets.config
</code></pre>
<p>I am unable to run this command.</p>
<p>I have tried including all the files within object_detection, and also tried removing the <code>object_detection.</code> prefix in the from statements, but it didn't work.</p>
<pre><code>import functools
import json
import os
import tensorflow as tf
from object_detection.builders import dataset_builder
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.legacy import trainer
from object_detection.utils import config_util
</code></pre>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:/Users/varsh/Documents/models/research/object_detection/train.py", line 49, in <module>
from object_detection.builders import dataset_builder
ModuleNotFoundError: No module named 'object_detection'
</code></pre>
|
<p>I've managed to solve this problem on my system (Windows 10). The solution is not very straightforward:</p>
<p>1) First you need to clone the TensorFlow Object Detection API repository <a href="https://github.com/tensorflow/models" rel="nofollow noreferrer">https://github.com/tensorflow/models</a>.</p>
<p>2) Follow the installation guide provided in <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md</a></p>
<p>3) In step 2 you are required to compile the protobuf library, so download the protobuf compiler at <a href="https://github.com/google/protobuf/releases/latest" rel="nofollow noreferrer">https://github.com/google/protobuf/releases/latest</a> (at the time of this writing (3.5.1) there's a bug in protoc which may or may not be related to the Windows environment; my solution was to use protoc v3.4.0).</p>
<p>4) Append the PYTHONPATH environment variable with the directories of /research/ and /research/slim (don't forget to add PYTHONPATH to Path if you haven't done so).</p>
<p>5) No more ModuleNotFoundError: No module named 'object_detection'</p>
|
python|tensorflow|computer-vision
| 1
|
1,966
| 57,099,019
|
Transforming Multiindex into single index after groupby() Pandas
|
<p>This question is probably a bundle of several functions. I am having trouble renaming a MultiIndex and transforming it into a simple index.</p>
<p>Lets say I have the following DF</p>
<pre><code>Customer Date Amount
John 10-10-2016 100,00
Mark 12-10-2016 50,00
John 13_10_2016 200,00
</code></pre>
<p>If I apply the following code:</p>
<pre><code>aggregation = {'Amount':{
'total' : 'sum'},
'Date':{
'first_purchase' :'min',
'last_purchase' : 'max'}
}
df_final = df.groupby('Customer').agg(aggregation).reset_index()
</code></pre>
<p>The result I get is:</p>
<pre><code> Customer Amount Date
total first_purchase last_purchase
John 300,00 10-10-2016 13-10_2016
Mark 50,00 12-10-2016 12-10-2016
</code></pre>
<p>The thing is, I will use this dataframe later to merge with others, and the MultiIndex is not good for me. I want to turn it into a single index to have a dataframe like this:</p>
<pre><code> Customer total first_purchase last_purchase
John 300,00 10-10-2016 13-10_2016
Mark 50,00 12-10-2016 12-10-2016
</code></pre>
<p>I have already tried some unstack and reset index to level 0 but it does not work. Can anyone help me with that? I am sorry if it is a repeated question but I have not found the answer so far after trying many times.</p>
<p>tks</p>
|
<p>We can use <code>droplevel</code>:</p>
<pre><code>df_final.columns=df_final.columns.droplevel(0)
df_final.reset_index(inplace=True)
</code></pre>
|
python|pandas
| 1
|
1,967
| 56,982,447
|
horizontal stacked bar chart with a single series?
|
<p>My simple Dataframe produces a plot with 4 single, horizontal bars, rather than one stacked horizontal bar. I've tried transposing it etc - without success. I'm sure I'm doing something simple wrong - but I can't work it out. Help much appreciated!</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import matplotlib.pyplot as plt
fake_data = [['dogs',12],['cats',8],['fish',22],['bird',8]]
myDF = pd.DataFrame(fake_data)
myDF.columns = ['animals','count']
myDF.plot.barh(stacked=True)
plt.show()
</code></pre>
|
<p>I think you need create one row <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>Series.to_frame</code></a> and transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>DataFrame.T</code></a>:</p>
<pre><code>myDF.set_index('animals')['count'].to_frame().T.plot.barh(stacked=True)
</code></pre>
|
python|pandas|matplotlib
| 0
|
1,968
| 45,747,393
|
Convert column with mixed text values and None to integer lists efficiently
|
<p>Imagine I have a column with the values</p>
<p><code>data = pd.DataFrame([['1,2,3'], ['4,5,6'], [None]])</code></p>
<p>I want the output to be:</p>
<p><code>[[1,2,3]], [[4,5,6]], [None]]</code></p>
<p>In other words, splitting up the comma-delimited strings into lists while ignoring the None values.</p>
<p>This function works fine for <code>apply</code>:</p>
<pre><code>def parse_text_vector(s):
if s is None:
return None
else:
return map(int, s.split(','))
</code></pre>
<p>As in this example: </p>
<pre><code>df = pd.DataFrame([['1,2,3'], ['4,5,6'], [None]])
result = df[0].apply(parse_text_vector)
</code></pre>
<p>But across millions of rows, this gets quite slow. I was hoping to improve runtime by doing something along the lines of</p>
<p><code>parse_text_vector(df.values)</code>, but this leads to:</p>
<pre><code>In [61]: parse_text_vector(df.values)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-61-527d5f9f2b84> in <module>()
----> 1 parse_text_vector(df.values)
<ipython-input-49-09dcd8f24ab3> in parse_text_vector(s)
4 return None
5 else:
----> 6 return map(int, s.split(','))
AttributeError: 'numpy.ndarray' object has no attribute 'split'
</code></pre>
<p>How can I get this to work? Or otherwise optimize this so that it doesn't take tens of minutes to process my million-line dataframe?</p>
|
<p>Use <code>df.str.split</code> and then convert to a list:</p>
<pre><code>In [9]: df
Out[9]:
Col1
0 1,2,3
1 4,5,6
2 None
In [10]: df.Col1.str.split(',').tolist()
Out[10]: [['1', '2', '3'], ['4', '5', '6'], None]
</code></pre>
<p>To convert the inner list elements to integers, you can do a conversion with <code>map</code> inside a list-comprehension:</p>
<pre><code>In [22]: [list(map(int, x)) if isinstance(x, list) else x for x in df.Col1.str.split(',').tolist()]
Out[22]: [[1, 2, 3], [4, 5, 6], None]
</code></pre>
|
python|pandas|dataframe
| 2
|
1,969
| 45,791,419
|
reading data from csv files in python
|
<p>I am trying to extract data from a dummy csv file to use inside tensorflow.
The dummy data only has 2 columns: X (single feature column) and Y (expected output). </p>
<pre><code>X Y
11.0 13.0
23.0 33.3
... ... and so on
</code></pre>
<p>Right now I am reading the data like so:</p>
<pre><code>import pandas as pd
dummy_data = pd.read_csv("dummy_data.csv", sep=",")
inputX = dummy_data.loc[:, 'X'].values
np.reshape(inputX, [11, 1])
</code></pre>
<p>I am reshaping the numpy array because I need to do matrix multiplication later on with linear regression but I want to ask is that the correct way to extract a column from csv data? Is there a better way to directly extract the csv data to a tensor object?</p>
|
<p>There is no need to reshape or use <code>.loc</code> or <code>.values</code>:</p>
<pre><code>inputX = dummy_data[['X']]
</code></pre>
<p>(Mind the list of lists <code>[[]]</code>!)</p>
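<p>If the goal is to get the data straight into a tensor for the later matrix multiplication, a minimal sketch (assuming TensorFlow is imported as <code>tf</code>) could be:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import tensorflow as tf

dummy_data = pd.read_csv("dummy_data.csv", sep=",")
inputX = tf.constant(dummy_data[['X']].values, dtype=tf.float32)   # shape (n_rows, 1), ready for matmul
outputY = tf.constant(dummy_data[['Y']].values, dtype=tf.float32)
</code></pre>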
|
python|pandas|csv
| 1
|
1,970
| 50,683,039
|
Conv2D transpose output shape using formula
|
<p>I get <code>[-1,256,256,3]</code> as the output shape using the transpose layers shown below. I print the output shape. My question is specifically about the height and width which are both <code>256</code>. The channels seem to be the number of filters from the last transpose layer in my code.</p>
<p>I assumed rather simplistically that the formula is this. I read other threads.</p>
<pre><code>H = (H1 - 1)*stride + HF - 2*padding
</code></pre>
<p>But when I calculate I don't seem to get that output. I think I may be missing the padding calculation
How much padding is added by <code>'SAME'</code> ?</p>
<p>My code is this.</p>
<pre class="lang-python prettyprint-override"><code> linear = tf.layers.dense(z, 512 * 8 * 8)
linear = tf.contrib.layers.batch_norm(linear, is_training=is_training,decay=0.88)
conv = tf.reshape(linear, (-1, 128, 128, 1))
out = tf.layers.conv2d_transpose(conv, 64,kernel_size=4,strides=2, padding='SAME')
out = tf.layers.dropout(out, keep_prob)
out = tf.contrib.layers.batch_norm(out, is_training=is_training,decay=0.88)
out = tf.nn.leaky_relu(out)
out = tf.layers.conv2d_transpose(out, 128,kernel_size=4,strides=1, padding='SAME')
out = tf.layers.dropout(out, keep_prob)
out = tf.contrib.layers.batch_norm(out, is_training=is_training,decay=0.88)
out = tf.layers.conv2d_transpose(out, 3,kernel_size=4,strides=1, padding='SAME')
print( out.get_shape())
</code></pre>
|
<p>Regarding <code>'SAME'</code> padding, the <a href="https://www.tensorflow.org/api_guides/python/nn#Convolution" rel="noreferrer"><code>Convolution</code></a> documentation offers some detailed explanations (further details in those <a href="https://www.tensorflow.org/api_guides/python/nn#Notes_on_SAME_Convolution_Padding" rel="noreferrer">notes</a>). Especially, when using <code>'SAME'</code> padding, the output shape is defined so:</p>
<blockquote>
<pre><code># for `tf.layers.conv2d` with `SAME` padding:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
</code></pre>
</blockquote>
<p>In this case, the output shape depends only on the input shape and stride. The padding size is computed from there to fill this shape requirement (while, with <code>'VALID'</code> padding, it's the output shape which depends on the padding size)</p>
<p>Now for transposed convolutions... As this operation is the backward counterpart of a normal convolution (its gradient), it means that the output shape of a normal convolution corresponds to the input shape to its counterpart transposed operation. In other words, while the output shape of <code>tf.layers.conv2d()</code> is divided by the stride, the output shape
of <code>tf.layers.conv2d_transpose()</code> is multiplied by it:</p>
<pre><code># for `tf.layers.conv2d_transpose()` with `SAME` padding:
out_height = in_height * strides[1]
out_width = in_width * strides[2]
</code></pre>
<p>But once again, the padding size is calculated to obtain this output shape, not the other way around (for <code>SAME</code> padding). Since the normal relation between those values (i.e. the relation you found) is:</p>
<pre><code># for `tf.layers.conv2d_transpose()` with given padding:
out_height = strides[1] * (in_height - 1) + kernel_size[0] - 2 * padding_height
out_width = strides[2] * (in_width - 1) + kernel_size[1] - 2 * padding_width
</code></pre>
<p>Rearranging the equations we get</p>
<pre><code>padding_height = [strides[1] * (in_height - 1) + kernel_size[0] - out_height] / 2
padding_width = [[strides[2] * (in_width - 1) + kernel_size[1] - out_width] / 2
</code></pre>
<blockquote>
<p><strong>note:</strong> if e.g. <code>2 * padding_height</code> is an odd number, then <code>padding_height_top = floor(padding_height)</code>; and <code>padding_height_bottom = ceil(padding_height)</code> (same for resp. <code>padding_width</code>, <code>padding_width_left</code> and <code>padding_width_right)</code></p>
</blockquote>
<p>Replacing <code>out_height</code> and <code>out_width</code> with their expressions, and using your values (for the 1st transposed convolution):</p>
<pre><code>padding = [2 * (128 - 1) + 4 - (128 * 2)] / 2 = 1
</code></pre>
<p>You thus have a padding of <code>1</code> added on every side of your data, in order to obtain the output dim <code>out_dim = in_dim * stride = strides * (in_dim - 1) + kernel_size - 2 * padding = 256</code></p>
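<p>A quick numeric check of these relations for the first transposed convolution in the question:</p>
<pre class="lang-py prettyprint-override"><code># plug in the values from the question
in_dim, stride, kernel = 128, 2, 4
out_dim = in_dim * stride                                  # SAME padding: 256
padding = (stride * (in_dim - 1) + kernel - out_dim) / 2   # -> 1.0
print(out_dim, padding)
</code></pre>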
|
python|tensorflow|convolutional-neural-network
| 8
|
1,971
| 66,426,905
|
pandas merge python sort data frame
|
<pre><code>Name Sex Age Height Weight
0 Alfred M 14 69.0 112.5
1 Alice F 13 56.5 84.0
2 Barbara F 13 65.3 98.0
3 Carol F 14 62.8 102.5
4 Henry M 14 63.5 102.5
5 James M 12 57.3 83.0
6 Jane F 12 59.8 84.5
7 Janet F 15 62.5 112.5
8 Jeffrey M 13 62.5 84.0
9 John M 12 59.0 99.5
10 Joyce F 11 51.3 50.5
11 Judy F 14 64.3 90.0
12 Louise F 12 56.3 77.0
13 Mary F 15 66.5 112.0
14 Philip M 16 72.0 150.0
15 Robert M 12 64.8 128.0
16 Ronald M 15 67.0 133.0
17 Thomas M 11 57.5 85.0
18 William M 15 66.5 112.0
</code></pre>
<p>I want the output rows to alternate by the <code>Sex</code> column:</p>
<pre><code>Name Sex Age Height Weight
Alice F 13 56.5 84.0
Alfred M 14 69.0 112.5
Barbara F 13 65.3 98.0
Henry M 14 63.5 102.5
Carol F 14 62.8 102.5
James M 12 57.3 83.0
Jane F 12 59.8 84.5
Jeffrey M 13 62.5 84.0
Janet F 15 62.5 112.5
John M 12 59.0 99.5
Joyce F 11 51.3 50.5
Philip M 16 72.0 150.0
Judy F 14 64.3 90.0
Robert M 12 64.8 128.0
Louise F 12 56.3 77.0
Ronald M 15 67.0 133.0
Mary F 15 66.5 112.0
Thomas M 11 57.5 85.0
</code></pre>
|
<p>You can use <code>groupby().cumcount()</code> to enumerate the rows within each group, then <code>sort_values</code>:</p>
<pre><code>(df.assign(order=df.groupby(['Sex']).cumcount())
.sort_values(['order','Sex'])
.drop('order',axis=1)
)
</code></pre>
<p>Output:</p>
<pre><code> Name Sex Age Height Weight
1 Alice F 13 56.5 84.0
0 Alfred M 14 69.0 112.5
2 Barbara F 13 65.3 98.0
4 Henry M 14 63.5 102.5
3 Carol F 14 62.8 102.5
5 James M 12 57.3 83.0
6 Jane F 12 59.8 84.5
8 Jeffrey M 13 62.5 84.0
7 Janet F 15 62.5 112.5
9 John M 12 59.0 99.5
10 Joyce F 11 51.3 50.5
14 Philip M 16 72.0 150.0
11 Judy F 14 64.3 90.0
15 Robert M 12 64.8 128.0
12 Louise F 12 56.3 77.0
16 Ronald M 15 67.0 133.0
13 Mary F 15 66.5 112.0
17 Thomas M 11 57.5 85.0
18 William M 15 66.5 112.0
</code></pre>
|
python|pandas|dataframe|sorting|merge
| 0
|
1,972
| 66,546,790
|
.get(0) in a JS machine learning Algo Don't work
|
<p>I'm working on creating my first KNN algorithm to learn machine learning.
I'm following a basic online course that explains it, and I feel that I did exactly the same as the instructor did.</p>
<p>But when I run it I get this pretty basic JS error.
I am using TensorFlow.js.</p>
<pre><code>.sort((a, b) => (a.get(0) > b.get(0) ? 1 : -1))
^
TypeError: a.get is not a function
</code></pre>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>require('@tensorflow/tfjs-node');
const tf = require('@tensorflow/tfjs');
const loadCSV = require('./load-csv');
function knn(features, labels, predictionPoint, k) {
return (
features
.sub(predictionPoint)
.pow(2)
.sum(1)
.pow(0.5)
.expandDims(1)
.concat(labels, 1)
.unstack()
.sort((a, b) => (a.get(0) > b.get(0) ? 1 : -1))
.slice(0, k)
.reduce((acc, pair) => acc + pair.get(1), 0) / k
);
}
let { features, labels, testFeatures, testLabels } = loadCSV(
'kc_house_data.csv',
{
shuffle: true,
splitTest: 10,
dataColumns: ['lat', 'long'],
labelColumns: ['price'],
}
);
features = tf.tensor(features);
labels = tf.tensor(labels);
console.log(features, labels, tf.tensor(testFeatures[0]), 10);
const result = knn(features, labels, tf.tensor(testFeatures[0]), 10);
console.log('Guess', result, testLabels[0][0]);
console.log(features);</code></pre>
</div>
</div>
</p>
<p>the log on the top to see whats passing in the function.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 21602, 2 ],
dtype: 'float32',
size: 43204,
strides: [ 2 ],
dataId: { id: 0 },
id: 0,
rankType: '2'
} Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 21602, 1 ],
dtype: 'float32',
size: 21602,
strides: [ 1 ],
dataId: { id: 1 },
id: 1,
rankType: '2'
} Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 2 ],
dtype: 'float32',
size: 2,
strides: [],
dataId: { id: 2 },
id: 2,
rankType: '1'
} 10</code></pre>
</div>
</div>
</p>
|
<p>After long research, and a lot of time:</p>
<p>TensorFlow.js removed the <code>.get</code> method from tensors; read the values with <code>arraySync()</code> (or <code>dataSync()</code>) and index the returned array instead. For example:</p>
<pre><code>pair.get(1)
</code></pre>
<p>becomes:</p>
<pre><code>pair.arraySync()[1]
</code></pre>
<p>(and likewise <code>a.get(0)</code> in the sort callback becomes <code>a.arraySync()[0]</code>).</p>
|
javascript|node.js|tensorflow.js
| 1
|
1,973
| 66,341,942
|
How to find the first row with a different value within a column for a pandas df?
|
<p>I have a pandas dataframe, like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020/01/01</td>
<td>1</td>
</tr>
<tr>
<td>2020/01/02</td>
<td>1</td>
</tr>
<tr>
<td>2020/01/03</td>
<td>2</td>
</tr>
<tr>
<td>2020/01/04</td>
<td>2</td>
</tr>
<tr>
<td>2020/01/05</td>
<td>1</td>
</tr>
<tr>
<td>2020/01/06</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like the first date for each new value within the "Quantity" column, like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>2020/01/01</td>
<td>1</td>
</tr>
<tr>
<td>2020/01/03</td>
<td>2</td>
</tr>
<tr>
<td>2020/01/05</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>The values within "Quantity" can repeat so dropping duplicates is not a solution for me. How can I achieve this?</p>
|
<p>It seems like you want to keep the first row and every row in which the quantity changes, right? So you can do</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np

q = df['Quantity'].values
sel = np.r_[True, q[1:] != q[:-1]]
df = df.loc[sel, :]
</code></pre>
<p>where I've called df your DataFrame. What this is doing is simply comparing if the n-th element starting from 1 (i.e the second element) is different from the (n-1)-th, and selecting if so.</p>
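<p>For what it's worth, the same mask can also be expressed with pandas only (this is just an equivalent alternative, not the method above): <code>shift()</code> compares each Quantity with the previous row, and the first row is always kept because its shifted value is NaN.</p>
<pre class="lang-py prettyprint-override"><code>keep = df['Quantity'].ne(df['Quantity'].shift())
result = df.loc[keep]
</code></pre>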
|
pandas
| 2
|
1,974
| 66,384,463
|
Filling aggregated column in Python
|
<p>Using the input below as an example, I am trying to create an aggregated column in a dataframe in Python based on unique instances of others. The best attempt I can make leaves some NaN in the new column though</p>
<pre><code>raw_data = {'RegionCode' : ['10001', '10001', '10001', '10001', '10001', '10001', '10002', '10002', '10002', '10002', '10002', '10002'],
'Stratum' : ['1', '1','2','2','3', '3', '1', '1', '2', '2', '3', '3'],
'LaStratum' : ['1021', '1021', '1022', '1022', '1023', '1023', '2021', '2021', '2022', '2022', '2023', '2023'],
'StratumPop' : [125, 125, 50, 50, 100, 100, 250, 250, 200, 200, 300, 300],
'Q_response' : [2, 1, 4, 1, 2, 2, 3, 4, 3, 2, 1, 4]}
Data = pd.DataFrame(raw_data, columns = ['RegionCode', 'Stratum', 'LaStratum', 'StratumPop', 'Q_response'])
#Sum StratumPop by unique instance of LaStratum at RegionCode level
Data['Total_Pop'] = Data.drop_duplicates(['LaStratum']).groupby('RegionCode')['StratumPop'].transform('sum')
Data
</code></pre>
<p>What I am trying to do is sum the StratumPop column at RegionCode level by each unique instance of LaStratum. The totals produced are correct, but how can I 'fill' the column to repeat each total instead of just seeing the first occurrence of each different total and NaN for the others? So Region 10001 has 275 on every row and Region 10002 has 750 on each row. Is this possible without creating staging tables and merging unique values back in (as I'm currently doing)?</p>
|
<p>To fill the column and repeat each Total_Pop per Region, you can use a simple grouped (by Region per se) <code>ffill()</code>:</p>
<pre><code>Data['Total_Pop_new'] = Data.groupby('RegionCode')['Total_Pop'].ffill()
</code></pre>
<p>Will give you back:</p>
<pre><code>Data
RegionCode Stratum LaStratum ... Q_response Total_Pop Total_Pop_new
0 10001 1 1021 ... 2 275.0 275.0
1 10001 1 1021 ... 1 NaN 275.0
2 10001 2 1022 ... 4 275.0 275.0
3 10001 2 1022 ... 1 NaN 275.0
4 10001 3 1023 ... 2 275.0 275.0
5 10001 3 1023 ... 2 NaN 275.0
6 10002 1 2021 ... 3 750.0 750.0
7 10002 1 2021 ... 4 NaN 750.0
8 10002 2 2022 ... 3 750.0 750.0
9 10002 2 2022 ... 2 NaN 750.0
10 10002 3 2023 ... 1 750.0 750.0
11 10002 3 2023 ... 4 NaN 750.0
</code></pre>
|
python|pandas|aggregation
| 2
|
1,975
| 66,605,765
|
Use a matrix to inform where pixels should be on or off (Numpy)
|
<p>I am not sure how to word this. I am sure there is an operation that describes what I am trying to do; I just don't have a lot of experience manipulating image arrays.</p>
<p>I have a 2D array (matrix) of 1s and 0s which specifies whether a group of pixels should be the color [255,255,255] or the color [0,0,0] in RGB. It seems like this should be a simple multiplication. I should be able to multiply my color by my matrix of 1s and 0s to make an image, but all the dot products and matrix multiplications I have tried have failed.</p>
<p>Here is a simple example my 2D numpy array and</p>
<pre><code># 2D pixels array
[[0,1],
[1, 1]]
# rbg array
[[255,255,255]]
</code></pre>
<p>What I would want is the following 3D array</p>
<pre><code>[[[0,0,0],[255,255,255]],
[[255,255,255], [255,255,255]]]
</code></pre>
<p>This array has the shape 2X2X3.</p>
<p>Here are the arrays for reproducibility and to make it easy for anyone willing to help.</p>
<pre class="lang-py prettyprint-override"><code>pixel = np.array([0,1,1,1]).reshape(2,2)
rgb = np.array([255,255,255]).reshape(1,3)
</code></pre>
|
<p>How about reshaping pixel into a 3D matrix and using dot?</p>
<pre><code>pixel = np.array([0,1,1,1]).reshape(2,2,-1)
rgb = np.array([255,255,255]).reshape(1,3)
pixel.dot(rgb)
</code></pre>
<p>Output</p>
<pre><code>array([[[ 0, 0, 0],
[255, 255, 255]],
[[255, 255, 255],
[255, 255, 255]]])
</code></pre>
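<p>The same result can also be reached with plain broadcasting instead of a dot product, starting from the question's (2, 2) pixel array:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

pixel = np.array([0, 1, 1, 1]).reshape(2, 2)
rgb = np.array([255, 255, 255]).reshape(1, 3)
out = pixel[:, :, None] * rgb    # (2, 2, 1) * (1, 3) broadcasts to (2, 2, 3)
</code></pre>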
|
python|arrays|image|numpy
| 1
|
1,976
| 66,557,738
|
Exploding memory consumption when training FL model with varying number of participants per round
|
<p>I'm running FL algorithm following the <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">image classification</a> tutorial. The number of participants vary at each round according to a predefined list of participants number.</p>
<pre class="lang-py prettyprint-override"><code>number_of_participants_each_round =
[108, 113, 93, 92, 114, 101, 94, 93, 107, 99, 118, 101, 114, 111, 88,
101, 86, 96, 110, 80, 118, 84, 91, 120, 110, 109, 113, 96, 112, 107,
119, 91, 97, 99, 97, 104, 103, 120, 89, 100, 104, 104, 103, 88, 108]
</code></pre>
<p>The federated data is preprocessed and batched before starting the training.</p>
<pre class="lang-py prettyprint-override"><code>
NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 418
PREFETCH_BUFFER = 10
def preprocess(dataset):
def batch_format_fn(element):
return collections.OrderedDict(
x=tf.reshape(element['pixels'], [-1, 784]),
y=tf.reshape(element['label'], [-1, 1]))
return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
def make_federated_data(client_data, client_ids):
return [preprocess(client_data.create_tf_dataset_for_client(x)) for x in client_ids]
federated_train_data = make_federated_data(data_train, data_train.client_ids)
</code></pre>
<p>Participants are randomly sampled from <code>federated_train_data[0:expected_total_clients]</code> at each round according to the <code>number_of_participants_each_round</code>, then the <code>iterative_process</code> is executed for <code>45 rounds</code>.</p>
<pre class="lang-py prettyprint-override"><code>expected_total_clients = 500
round_nums = 45
for round_num in range(0, round_nums):
    sampled_clients = np.random.choice(
        a=federated_train_data[0:expected_total_clients],
        size=number_of_participants_each_round[round_num],
        replace=False)
state, metrics = iterative_process.next(state, list(sampled_clients))
print('round {:2d}, metrics={}'.format(round_num + 1, metrics))
</code></pre>
<p>The problem is that the <code>VRAM</code> usage quickly explodes after few rounds, it reaches <code>5.5 GB</code> at round <code>6~7</code>, and keeps increasing with an approx rate of <code>0.8 GB/round</code> until the training eventually crashes at round <code>25~26</code> where the VRAM reaches <code>17 GB</code> with <code>+4000</code> python threads created.</p>
<p>Error message below</p>
<pre class="lang-py prettyprint-override"><code>F tensorflow/core/platform/default/env.cc:72] Check failed: ret == 0 (35 vs. 0)Thread creation via pthread_create() failed.
</code></pre>
<p><strong>### Troubleshooting ###</strong></p>
<p>Reducing the <code>number_of_participants_each_round</code> to <code>20</code> allows the training to complete, but the memory consumption was still huge and growing.</p>
<p>Running the same code with fixed number of participants per round, memory consumption was fine with total of approx <code>1.5 ~ 2.0 GB</code> VRAM throughout the entire training.</p>
<pre class="lang-py prettyprint-override"><code>expected_total_clients = 500
fixed_client_size_per_round = 100
round_nums = 45
for round_num in range(0, round_nums):
    sampled_clients = np.random.choice(
        a=federated_train_data[0:expected_total_clients],
        size=fixed_client_size_per_round,
        replace=False)
state, metrics = iterative_process.next(state, list(sampled_clients))
print('round {:2d}, metrics={}'.format(round_num + 1, metrics))
</code></pre>
<p>Extra details:</p>
<pre><code>OS: MacOS Mojave, 10.14.6
python -V: Python 3.8.5 then downgraded to Python 3.7.9
TF version: 2.4.1
TFF version: 0.18.0
Keras version: 2.4.3
</code></pre>
<p>Is this a normal memory behaviour or a <code>bug</code>? Are there any refactoring/hints to optimize memory consumption ?</p>
|
<p>The issue was a bug in the <code>executor stack</code> of the TFF runtime process.</p>
<p>Complete details and bug fix below</p>
<p><a href="https://github.com/tensorflow/federated/issues/1215" rel="nofollow noreferrer">https://github.com/tensorflow/federated/issues/1215</a></p>
|
tensorflow-federated
| 2
|
1,977
| 57,718,244
|
Unable to call a function inside groupby.agg
|
<p>New to python. So please excuse mistakes. I am writing a script to group a pandas dataframe using groupby.agg. I get errors while trying to call a function that takes as input, the output of a lambda function</p>
<p>Here is the sample of the merged dataframe</p>
<pre><code>cprdf.iloc[5:10,5:20]
Out[237]:
Loan Nbr Servicer Loan Nbr Recon Action Code Loan Count_x \
5 21522594 25701889 Y 0.00 1
6 21522594 25701889 Y 0.00 1
7 21522594 25701889 Y 0.00 1
8 21522594 25701889 Y 0.00 1
9 21522594 25701889 Y 0.00 1
Days Delinquent_x Sale Date_x UPB Beginning UPB Purchase UPB Sch Prin \
5 0.00 NaN 142,936.57 0.00 162.16
6 0.00 NaN 143,097.92 0.00 161.35
7 0.00 NaN 143,258.47 0.00 160.55
8 0.00 NaN 143,418.22 0.00 159.75
9 0.00 NaN 143,735.33 0.00 317.11
UPB Curtailment UPB Liq UPB Adjustment UPB Non Cash UPB Ending
5 0.00 0.00 0.00 0.00 142,774.41
6 0.00 0.00 0.00 0.00 142,936.57
7 0.00 0.00 0.00 0.00 143,097.92
8 0.00 0.00 0.00 0.00 143,258.47
9 0.00 0.00 0.00 0.00 143,418.22
</code></pre>
<p>What I am trying to do is implement the following formulas for a variety of groupby operation</p>
<p>SMM = (UPB Curtail+UPB Liq+UPBAdj)/(UPB Begin)</p>
<p>CPR in % = 100*(1-(1-SMM)^12</p>
<p>Here is the relevant code</p>
<pre class="lang-py prettyprint-override"><code>
cprdf['NonSchP'] = cprdf['UPB Curtailment'] + cprdf['UPB Liq'] + \
cprdf['UPB Adjustment']
cprdf['SMM'] = np.where(cprdf['UPB Beginning'] == 0, 0,
cprdf['NonSchP']/cprdf['UPB Beginning'])
def wtavg(x):
return lambda x: np.average(x, weights=cprdf.loc[x.index, 'UPB Beginning'])
def cpr(y):
z = 100 * (1 - np.power((1 - y), 12))
return z
# dictionary for new columns
n = {'UPB_sum' : pd.NamedAgg('UPB Beginning', 'sum'),
'UPB_count': pd.NamedAgg('UPB Beginning', 'count'),
'PIF_sum': pd.NamedAgg('UPB Liq', 'sum'),
'PIF_count' : pd.NamedAgg('UPB Liq', np.count_nonzero),
'SMMAgg' : pd.NamedAgg('SMM', wtavg(cprdf['SMM'])),
'Rate': pd.NamedAgg('Current Loan Rate',wtavg(cprdf['Current Loan Rate'])),
'CPR':pd.NamedAgg('SMM',cpr(wtavg(cprdf['SMM'])))}
cprgroup = cprdf.groupby(['month_year'],as_index=True).agg(**n)
cprgroup.reset_index(drop=False,inplace=True)
</code></pre>
<p>I expect the output to be</p>
<p>cprgroup</p>
<p>Out[240]: </p>
<pre><code> month_year UPB_sum UPB_count PIF_sum PIF_count SMM Rate \
0 2019-04 11,237,040.94 22 718,172.19 1.00 0.06 5.95
1 2019-05 16,684,325.75 31 0.00 0.00 0.00 5.99
2 2019-06 106,783,721.43 221 2,242,731.83 3.00 0.02 5.77
3 2019-07 104,181,644.18 218 1,035,861.72 3.00 0.01 5.77
4 2019-08 102,853,211.42 215 3,188,568.04 2.00 0.03 5.77
CPR
0 54.75
1 0.03
2 24.07
3 13.24
4 31.70
</code></pre>
<p>Instead when I run the program I get the following error</p>
<pre><code>runfile('C:/Users/spyder-py3/untitled3.py', wdir='C:/Users/.spyder-py3')
Traceback (most recent call last):
File "<ipython-input-241-c3f795a9d003>", line 1, in <module>
runfile('C:/.spyder-py3/untitled3.py', wdir='C:/Users/.spyder-py3')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/.spyder-py3/untitled3.py", line 51, in <module>
'CPR':pd.NamedAgg('SMM',cpr(wtavg(cprdf['SMM'])))}
File "C:/Users/.spyder-py3/untitled3.py", line 39, in cpr
z = 100 * (1 - np.power((1 - y), 12))
TypeError: unsupported operand type(s) for -: 'int' and 'function'
</code></pre>
<p>Is my mistake calling a lambda fucntion as the input for the cpr function?</p>
<p>When I change dictionary 'n' to use 'SMMAgg' as the input to the function </p>
<pre><code>'CPR':pd.NamedAgg('SMMAgg',cpr(SMMAgg))
</code></pre>
<p>I get</p>
<pre><code>NameError: name 'SMMAgg' is not defined
</code></pre>
<p>When I change the formula to </p>
<pre><code>'CPR':pd.NamedAgg('SMMAgg',cpr('SMMAgg'))
</code></pre>
<p>I get</p>
<pre><code>File "C:/Users/.spyder-py3/untitled3.py", line 39, in cpr
z = 100 * (1 - np.power((1 - y), 12))
TypeError: unsupported operand type(s) for -: 'int' and 'str'
</code></pre>
<p>Any help would be appreaciated. </p>
<p>I circumvented the errors by adding the CPR function after aggregation as a new column to the grouped dataframe and was able to get the output I need. But there is something that I don't understand with calling this function inside the dictionary.</p>
<p>Thank you.</p>
|
<p>After some research, I found a solution. One issue that I noticed (not 100% sure) is that NamedAgg does not accept the same column for multiple custom aggregation functions, so I created a dummy SMM column. I modified the CPR function to return the lambda directly instead of assigning it to a new variable first. I also invoked the wtavg function inside the CPR function and called it with the array of values as input. So:</p>
<pre><code>cprdf['SMM1']=cprdf['SMM']
def wtavg():
return lambda x: np.average(x, weights=cprdf.loc[x.index, 'UPB Beginning'])
def cpr():
return lambda y: 100 * (1 - np.power((1 - wtavg()(y)), 12))
</code></pre>
<p>Then my kwarg dictionary looked like this</p>
<pre><code>n = {'UPB_sum' : pd.NamedAgg('UPB Beginning', 'sum'),
'UPB_count': pd.NamedAgg('UPB Beginning', 'count'),
'PIF_sum': pd.NamedAgg('UPB Liq', 'sum'),
'PIF_count' : pd.NamedAgg('UPB Liq', np.count_nonzero),
'SMMAgg' : pd.NamedAgg('SMM', wtavg()),
'Rate': pd.NamedAgg('Current Loan Rate',wtavg()),
'CPRAgg':pd.NamedAgg('SMM1',cpr())}
cprgroup=cprdf.groupby(['month_year'],as_index=True).agg(**n)
</code></pre>
<p>Output</p>
<pre><code>cprgroup
Out[51]:
month_year UPB_sum UPB_count PIF_sum PIF_count SMMAgg \
0 2019-04 1.123704e+07 22 718172.19 1.0 0.063944
1 2019-05 1.668433e+07 31 0.00 0.0 0.000025
2 2019-06 1.067837e+08 221 2242731.83 3.0 0.022690
3 2019-07 1.041816e+08 218 1035861.72 3.0 0.011770
4 2019-08 1.028532e+08 215 3188568.04 2.0 0.031268
Rate CPRAgg
0 5.946053 54.749920
1 5.987882 0.030278
2 5.774863 24.074820
3 5.772602 13.244130
4 5.771342 31.696039
</code></pre>
<p>voila!</p>
|
python|pandas-groupby
| 1
|
1,978
| 57,309,280
|
filtering and make a list of lists from a csv file in python
|
<p>I have a csv file which looks like the small example:</p>
<p>small example:</p>
<pre><code>Id sv item1 item2 item3
pos ab 4 5 8
reg ad 7 85 96
neg af 14 78 32
neg ab 47 5 6
</code></pre>
<p>I would like to make a list of lists in Python from this csv file. I want to skip the first 2 columns and then look for "<code>neg</code>" in the "<code>Id</code>" column. If the value in "Id" is "neg", I want to put the values of that row for the <code>non-skipped</code> columns into an inner list, and make a list of lists from all the <code>inner lists</code>.
For the small example, the last 2 rows of the "<code>Id</code>" column are "<code>neg</code>", so I will take only these rows. Then I skip the first 2 columns, leaving 3 columns; that is why the result is a list of lists with 3 inner lists.
Here is the expected output:</p>
<p>expected output:</p>
<pre><code>results = [[14, 47], [78, 5], [32, 6]]
</code></pre>
<p>to get this results I wrote the following code in python but it does not return what I want. do you know how to fix it?</p>
<pre><code>with open("infile.txt") as f:
df = f.loc[f["Id"] == "neg"]
results = []
for line in df:
results.append(line)
</code></pre>
|
<p>Using the <code>csv</code> module</p>
<p><strong>Ex:</strong></p>
<pre><code>import csv
results = []
with open(filename, "rU") as infile:
reader = csv.reader(infile, delimiter=" ")
for row in reader:
if row[0] == 'neg':
results.append(list(filter(None, row[2:])))
print([i for i in zip(*results)])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[('14', '47'), ('78', '5'), ('32', '6')]
</code></pre>
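<p>Since the original attempt used pandas-style <code>loc</code> indexing, here is a hedged pandas sketch that should give the same result (assuming the file is whitespace-delimited like the sample):</p>
<pre><code>import pandas as pd

df = pd.read_csv("infile.txt", delim_whitespace=True)
# keep the 'neg' rows, drop the first two columns, transpose to per-column lists
results = df.loc[df["Id"] == "neg"].iloc[:, 2:].T.values.tolist()
print(results)  # [[14, 47], [78, 5], [32, 6]]
</code></pre>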
|
python-3.x|pandas|csv
| 0
|
1,979
| 70,820,024
|
Comparison between numpy array and float number of the same data type?
|
<p>np.arange(0, 1, 0.1) initializes a floating point array with the default data type float64. However, when I use <= to compare it to, say, np.float64(0.6), the 7th element (0.6) returns False. What's even weirder is that if I use float32 for both the initialization and the comparison, the result comes out right.
What's the explanation for this?</p>
|
<p>The answer is pretty obvious if you do this:</p>
<pre><code>import numpy as np
a = np.arange(0, 1, 0.1)
print('\n'.join(map(str, zip(a, a >= np.float64(0.6)))))
</code></pre>
<p>Result:</p>
<pre><code>(0.0, False)
(0.1, False)
(0.2, False)
(0.30000000000000004, False)
(0.4, False)
(0.5, False)
(0.6000000000000001, True)
(0.7000000000000001, True)
(0.8, True)
(0.9, True)
</code></pre>
<p>It's just a classic case of this: <a href="https://stackoverflow.com/questions/588004/is-floating-point-math-broken">Is floating point math broken?</a></p>
<p>You asked why this isn't a problem for <code>float32</code>. For example:</p>
<pre><code>import numpy as np
a = np.arange(0, 1, 0.1, dtype=np.float32)
print('\n'.join(map(str, zip(a, a < np.float32(0.6)))))
</code></pre>
<p>Result:</p>
<pre><code>(0.0, True)
(0.1, True)
(0.2, True)
(0.3, True)
(0.4, True)
(0.5, True)
(0.6, False)
(0.7, False)
(0.8, False)
(0.90000004, False)
</code></pre>
<p>The clue is in the length of that last value. Notice how <code>0.90000004</code> is a lot shorter than <code>0.30000000000000004</code> and <code>0.6000000000000001</code>. This is because there is less precision available in 32 bits than there is in 64 bits.</p>
<p>In fact, this is the entire reason to use 64-bit floats over 32-bit ones, when you need the precision. Depending on your system's architecture, 64-bit is likely to be a bit slower and certain to take up twice the space, but the precision is better. How exactly depends on the implementation of the floating point number (there are many choices that are too technical and detailed to go into here) - but there's twice the number of bits available to store information about the number, so you can see how that allows an increase in precision.</p>
<p>It just so happens that in 32 bits, the format has a representation of 0.6 that has enough zeroes for it to just say 0.6 (instead of 0.60000000). In 64-bits, the best values to represent 0.6 have even more zeroes, but a non-zero gets in at the end, showing the inaccuracy of the representation in that format.</p>
<p>It seems counterintuitive that <code>float32</code> is 'more precise' than <code>float64</code> in this case, but that's just a matter of cherry-picking. If you looked at a large random selection of numbers, you'd find that <code>float64</code> gets a lot closer on average. It just so happens that <code>float32</code> <em>appears</em> more accurate by accident.</p>
<p>The key takeaway here is that floating point numbers are an <em>approximation</em> of the real numbers. They are sufficiently accurate for most everyday operations and the errors tend to average out over time for many use cases, if the format is well-designed. However, because there is some error involved in most cases (of course some numbers just happen to get accurately represented, every point in the floating point type still falls on the real number line), when printing floating point numbers, some rounding is generally required as a result.</p>
<p>My favourite example to show that imprecision shows up early in Python (or any language with floats really):</p>
<pre><code>>>> .1 + .1 + .1 == .3
False
>>> print(.1 + .1 + .1, f'{.1 + .1 + .1:.1f}')
0.30000000000000004 0.3
</code></pre>
<p>And if you need better precision, you can look at types like <code>decimal</code>. Also, in very specific cases more bits than 64 may be available, but that's likely to lead to surprises around support and I would not recommend it.</p>
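<p>For illustration, a minimal sketch with the standard-library <code>decimal</code> module, which does exact decimal arithmetic at the cost of speed:</p>
<pre><code>from decimal import Decimal

total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(total == Decimal("0.3"))  # True, unlike the float version above
</code></pre>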
|
python|numpy
| 5
|
1,980
| 71,039,745
|
Painting cells using pandas and complex conditions
|
<p>I have a data frame that contains the std, mean and median for several chemical elements. Sample data:</p>
<pre><code>test = pd.DataFrame({('Na', 'std'):{'A': 1.73, 'B':0.95, 'C':2.95}, ('Na', 'mean'):{'A': 10.3, 'B':11, 'C':20}, ('Na', 'median'):{'A':11, 'B':22, 'C':34},('K', 'std'):{'A': 1.33, 'B':1.95, 'C':2.66}, ('K', 'mean'):{'A': 220.3, 'B':121, 'C':290}, ('K', 'median'):{'A':211, 'B':122, 'C':340}})
</code></pre>
<p>Example of table:</p>
<pre><code> Na K
std mean med std mean med
A 1.73 10.3 11 1.33 220.3 211
B 0.95 11.0 22 1.95 121.0 122
C 2.95 20.0 34 2.66 290.0 340
</code></pre>
<p>I want to paint the cells following certain conditions:</p>
<ol>
<li>I would like to color the two smallest values in the std column for each chemical element (Example: 0.95 and 1.73 for Na, and 1.33 and 1.95 for K);</li>
<li>I would like to color the mean and median columns based on the two smallest values of the function [abs(mean - median)], for all the elements (Example: (10.3, 11) and (11.0, 22) for Na, and (220.3, 211) and (121, 122) for K).</li>
</ol>
<p>I made these functions to identify the values of cells to be painted following the conditions I want, but I don't know how to implement them in the pd.style function.</p>
<pre><code>def paint1(test):
val_keep = []
for element,stats in test:
if stats == 'std':
paint1 = test[element].nsmallest(2, 'std')
for value in paint1['std']:
val_keep.append(value)
return val_keep
def paint2(test):
val_keep = []
for element,stats in test:
if stats == 'mean':
diff = abs(test[element]['mean'] - test[element]['median'])
paint2 = diff.nsmallest(2).index
for value in paint2:
val_keep.append((test[element]['mean'][value]))
val_keep.append(test[element]['median'][value])
return val_keep
</code></pre>
<p>How can I paint the cells using these conditions? I saw other <a href="https://stackoverflow.com/questions/41203959/conditionally-format-python-pandas-cell">posts</a> using lambda functions to define the styling, but I think the functions I need are more complicated than that.</p>
|
<p>Use <code>df.style</code>:</p>
<pre class="lang-py prettyprint-override"><code>def styler(col):
chem, stat = col.name
if stat == 'std':
return np.where(col.isin(col.nsmallest(2)), 'color: red', '')
elif stat in ['mean', 'median']:
delta = (df[(chem, 'mean')] - df[(chem, 'median')]).abs()
return np.where(delta.isin(delta.nsmallest(2)), 'color: blue', '')
else:
return [''] * len(col)
df.style.apply(styler)
</code></pre>
|
python|pandas|pandas-styles
| 1
|
1,981
| 51,665,052
|
Groupby with Apply Method in Pandas : Percentage Sum of Grouped Values
|
<p>I am trying to develop a program to convert daily data into monthly or yearly data and so on.
I have a DataFrame with datetime index and price change %:</p>
<pre><code> % Percentage
Date
2015-06-02 0.78
2015-06-10 0.32
2015-06-11 0.34
2015-06-12 -0.06
2015-06-15 -0.41
...
</code></pre>
<p>I had success grouping by some frequency. Then I tested:</p>
<pre><code> df.groupby('Date').sum()
df.groupby('Date').cumsum()
</code></pre>
<p>If that were the case it would work fine, but the problem is that I can't sum percentages that way; I need the compound growth (1+x0) * (1+x1) * ... - 1. Then I tried:</p>
<pre><code>def myfunc(values):
p = 0
for val in values:
p = (1+p)*(1+val)-1
return p
df.groupby('Date').apply(myfunc)
</code></pre>
<p>I can't understand how apply() works. It seems to apply my function to all the data and not just to the grouped items.</p>
|
<p>Your <code>apply</code> is applying to all rows individually because you're grouping by the <code>date</code> column. Your date column looks to have unique values for each row, so each group has only one row in it. You need to use a <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Grouper.html" rel="nofollow noreferrer"><code>Grouper</code></a> to group by month, then use <code>cumprod</code> and get the last value for each group:</p>
<pre><code># make sure Date is a datetime
df["Date"] = pd.to_datetime(df["Date"])
# add one to percentages
df["% Percentage"] += 1
# use cumprod on each month group, take the last value, and subtract 1
df.groupby(pd.Grouper(key="Date", freq="M"))["% Percentage"].apply(lambda g: g.cumprod().iloc[-1] - 1)
</code></pre>
<p>Note, though, that this applies the percentage growth as if the steps between your rows were the same, but it looks like sometimes it's 8 days and sometimes it's 1 day. You may need to do some clean-up depending on the result you want.</p>
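<p>As a side note, an equivalent formulation using <code>resample</code> (a sketch, assuming <code>Date</code> has been converted to datetime and 1 has been added to the percentages as above):</p>
<pre><code># compound each month's growth factors and subtract 1
df.set_index("Date")["% Percentage"].resample("M").apply(lambda g: g.prod() - 1)
</code></pre>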
|
python|python-3.x|pandas
| 0
|
1,982
| 51,645,982
|
Pandas groupby and replace duplicates with empty string
|
<p>I have a dataframe like the following:</p>
<pre><code>import pandas as pd
d = {'one':[1,1,1,1,2, 2, 2, 2],
'two':['a','a','a','b', 'a','a','b','b'],
'letter':[' a','b','c','a', 'a', 'b', 'a', 'b']}
df = pd.DataFrame(d)
> one two letter
0 1 a a
1 1 a b
2 1 a c
3 1 b a
4 2 a a
5 2 a b
6 2 b a
7 2 b b
</code></pre>
<p>And I am trying to convert it to a dataframe like the following, where empty cells are filled with empty string '':</p>
<pre><code>one two letter
1 a a
b
c
b a
2 a a
b
b a
b
</code></pre>
<p>When I perform groupby with all columns I get a series object that is basically exactly what I am looking for, but not a dataframe:</p>
<pre><code>df.groupby(df.columns.tolist()).size()
1 a a 1
b 1
c 1
b a 1
2 a a 1
b 1
b a 1
b 1
</code></pre>
<p>How can I get the desired dataframe?</p>
|
<p>You can mask your columns where the value is not the same as the value above it, then use <code>where</code> to change it to a blank string:</p>
<pre><code>df[['one','two']] = df[['one','two']].where(df[['one', 'two']].apply(lambda x: x != x.shift()), '')
>>> df
one two letter
0 1 a a
1 b
2 c
3 b a
4 2 a a
5 b
6 b a
7 b
</code></pre>
<hr>
<p><strong>some explanation</strong>:</p>
<p>Your mask looks like this:</p>
<pre><code>>>> df[['one', 'two']].apply(lambda x: x != x.shift())
one two
0 True True
1 False False
2 False False
3 False True
4 True True
5 False False
6 False True
7 False False
</code></pre>
<p>All that <code>where</code> is doing is finding the values where that is true, and replacing the rest with <code>''</code></p>
|
python|pandas|pandas-groupby
| 2
|
1,983
| 51,576,125
|
Python Pandas- Find the first instance of a value exceeding a threshold
|
<p>I am trying to find the first instance of a value exceeding a threshold based on another Python Pandas data frame column. In the code below, the "Trace" column has the same number for multiple rows. I want to find the first instance where the "Value" column exceeds 3. Then, I want to take the rest of the information from that row and export it to a new Pandas data frame (like in the second example). Any ideas?</p>
<pre><code>d = {"Trace": [1,1,1,1,2,2,2,2], "Date": [1,2,3,4,1,2,3,4], "Value": [1.5,1.9,3.1,5.5,1.1,3.6,1.9,6.2]}
df = pd.DataFrame(data=d)
</code></pre>
<p><a href="https://i.stack.imgur.com/4TSzw.png" rel="nofollow noreferrer">Example2</a></p>
|
<p>By using <code>idxmax</code>: <code>df.Value>3</code> builds a boolean Series, grouping it by <code>Trace</code> and calling <code>idxmax</code> returns the index of the first <code>True</code> in each group, and <code>loc</code> then selects those rows.</p>
<pre><code>df.loc[(df.Value>3).groupby(df.Trace).idxmax()]
Out[602]:
Date Trace Value
2 3 1 3.1
5 2 2 3.6
</code></pre>
|
python|pandas
| 4
|
1,984
| 35,856,567
|
How to efficiently remove duplicate rows from a DataFrame
|
<p>I'm dealing with a very large Data Frame and I'm using <code>pandas</code> to do the analysis.
The data frame is structured as follows</p>
<pre><code>import pandas as pd
df = pd.read_csv("data.csv")
df.head()
Source Target Weight
0 0 25846 1
1 0 1916 1
2 25846 0 1
3 0 4748 1
4 0 16856 1
</code></pre>
<p>The issue is that I want to remove all the "duplicates". In the sense that if I already have a row that contains a <code>Source</code> and a <code>Target</code> I do not want this information to be repeated on another row.
For instance, rows number 0 and 2 are "duplicate" in this sense and only one of them should be retained.</p>
<p>A simple way to get rid of all the "duplicates" is</p>
<pre><code>for index, row in df.iterrows():
df = df[~((df.Source==row.Target)&(df.Target==row.Source))]
</code></pre>
<p>However, this approach is horribly slow since my data frame has about 3 million rows. Do you think there's a better way of doing this?</p>
|
<p>Create two temp columns to save <code>minimum(df.Source, df.Target)</code> and <code>maximum(df.Source, df.Target)</code>, and then check duplicated rows by <code>duplicated()</code> method:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 5, (20, 2)), columns=["Source", "Target"])
df["T1"] = np.minimum(df.Source, df.Target)
df["T2"] = np.maximum(df.Source, df.Target)
df[~df[["T1", "T2"]].duplicated()]
</code></pre>
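<p>If you also want to drop the helper columns afterwards, a sketch building on the code above:</p>
<pre><code># keep the first occurrence of each unordered (Source, Target) pair, then clean up
df = df[~df[["T1", "T2"]].duplicated()].drop(columns=["T1", "T2"])
</code></pre>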
|
pandas
| 4
|
1,985
| 35,886,028
|
Populate python array without double loop
|
<p>Can anyone advise me on how to use pandas more efficiently? Currently I am doing the following to find the correlation of two items, but it isn't very fast.</p>
<pre><code>for i in range(0, df.shape[0]):
for j in range(0, df.shape[0]):
if i<j:
## get the weights
wgt_i = dataWgt_df.ix[df.index[i]][0]
wgt_j = dataWgt_df.ix[df.index[j]][0]
## get the std's
std_i = dataSTD_df.loc[date][df.index[i]][0]
std_j = dataSTD_df.loc[date][df.index[j]][0]
## get the corvariance
#print(cor.ix[df.index[i]][df.index[j]])
cor = corr.ix[df.index[i]][df.index[j]]
## create running total
totalBottom = totalBottom + (wgt_i * wgt_j * std_i * std_j)
totalTop = totalTop + (wgt_i * wgt_j * std_i * std_j * cor)
</code></pre>
<p>What I want to do is create an upper-triangular indicator matrix (ones above the diagonal, zeros elsewhere) like this </p>
<pre><code>0 1 1 1 1
0 0 1 1 1
0 0 0 1 1
0 0 0 0 1
0 0 0 0 0
</code></pre>
<p>which I can then use to multiply over the various dataframes (wgt_i, wgt_j, std_i, std_j); this will create a dataframe for top and bottom, which I can then sum using the sum function to get the result.</p>
<p>My main question here is how to create this indicator dataframe quickly and then create the wgt_i etc. dataframes, as the rest is relatively straightforward.</p>
|
<p>This is not as short as the solution from @larsbutler, but much faster for large n: </p>
<pre><code>import numpy as np
n = 5
M = np.zeros((n,n))
M[np.triu_indices_from(M)] = 1
M[np.diag_indices_from(M)] = 0
</code></pre>
<p>gives:</p>
<pre><code>array([[ 0., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1.],
[ 0., 0., 0., 1., 1.],
[ 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0.]])
</code></pre>
|
python|loops|pandas|correlation
| 0
|
1,986
| 37,533,956
|
Getting the index of pandas dataframe for matching row values
|
<p>I have two dataframes in pandas, <code>A</code> and <code>B</code>:</p>
<p>A:</p>
<pre><code>A = pd.DataFrame({0:[1.24, 8.75, 4.32]})
0 1.24
1 8.75
2 4.32
</code></pre>
<p>where <code>0 1 2</code> is the index of the dataframe</p>
<p>and another dataframe with strings as index:</p>
<pre><code>B = pd.DataFrame({0:[9.43, 1.24, 9.09, 4.32, 8.75]}, index=['p_32','p_21','p_01','p_05','p_76'])
'p_32' 9.43
'p_21' 1.24
'p_01' 9.09
'p_05' 4.32
'p_76' 8.75
</code></pre>
<p>All of the numbers in the first column of <code>A</code> are contained in <code>B</code>, but not the other way around. I want to get the index strings of <code>B</code> whose values match those in <code>A</code>, while retaining the order in <code>A</code>.</p>
<p>So I would need:</p>
<pre><code>'p_21'
'p_76'
'p_05'
</code></pre>
|
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow">isin()</a> function for that:</p>
<pre><code>In [141]: B[B[0].isin(A[0])].index
Out[141]: Index(['p_21', 'p_05', 'p_76'], dtype='object')
In [142]: B[B[0].isin(A[0])]
Out[142]:
0
p_21 1.24
p_05 4.32
p_76 8.75
</code></pre>
<p>data:</p>
<pre><code>In [139]: A
Out[139]:
0
0 1.24
1 8.75
2 4.32
In [140]: B
Out[140]:
0
p_32 9.43
p_21 1.24
p_01 9.09
p_05 4.32
p_76 8.75
</code></pre>
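<p>Note that <code>isin</code> keeps the row order of <code>B</code>, not of <code>A</code>. If the order in <code>A</code> must be preserved, a hedged sketch is to build a value-to-label mapping from <code>B</code> and look the values up in <code>A</code>'s order:</p>
<pre><code>lookup = pd.Series(B.index, index=B[0])  # value -> index label
lookup.loc[A[0]].tolist()                # ['p_21', 'p_76', 'p_05']
</code></pre>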
|
python|pandas
| 2
|
1,987
| 37,553,397
|
Python Applymap taking time to run
|
<p>I have a matrix of data (55K x 8.5K) with counts. Most of them are zeros, but a few of them hold some count. Let's say something like this: </p>
<pre><code> a b c
0 4 3 3
1 1 2 1
2 2 1 0
3 2 0 1
4 2 0 4
</code></pre>
<p>I want to binarize the cell values. </p>
<p>I did the following: </p>
<pre><code>df_preference=df_recommender.applymap(lambda x: np.where(x >0, 1, 0))
</code></pre>
<p>The code works fine, but it takes a lot of time to run. </p>
<p>Why is that? </p>
<p>Is there a faster way?</p>
<p>Thanks</p>
<p>Edit: </p>
<p>Error when doing df.to_pickle</p>
<pre><code>df_preference.to_pickle('df_preference.pickle')
</code></pre>
<p>I get this: </p>
<pre><code>---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
<ipython-input-16-3fa90d19520a> in <module>()
1 # Pickling the data to the disk
2
----> 3 df_preference.to_pickle('df_preference.pickle')
\\dwdfhome01\Anaconda\lib\site-packages\pandas\core\generic.pyc in to_pickle(self, path)
1170 """
1171 from pandas.io.pickle import to_pickle
-> 1172 return to_pickle(self, path)
1173
1174 def to_clipboard(self, excel=None, sep=None, **kwargs):
\\dwdfhome01\Anaconda\lib\site-packages\pandas\io\pickle.pyc in to_pickle(obj, path)
13 """
14 with open(path, 'wb') as f:
---> 15 pkl.dump(obj, f, protocol=pkl.HIGHEST_PROTOCOL)
16
17
SystemError: error return without exception set
</code></pre>
|
<p><strong>UPDATE:</strong></p>
<p>read <a href="https://github.com/pydata/pandas/issues/3699" rel="nofollow noreferrer">this topic</a> and <a href="https://github.com/pydata/pandas/issues/12712" rel="nofollow noreferrer">this issue</a> in regards to your error</p>
<p>Try to save your DF as HDF5 - it's much more convenient.</p>
<p>You may also want to read this <a href="https://stackoverflow.com/questions/37010212/what-is-the-fastest-way-to-upload-a-big-csv-file-in-notebook-to-work-with-python/37012035#37012035">comparison</a>...</p>
<p><strong>OLD answer:</strong></p>
<p>try this:</p>
<pre><code>In [110]: (df>0).astype(np.int8)
Out[110]:
a b c
0 1 1 1
1 1 1 1
2 1 1 0
3 1 0 1
4 1 0 1
</code></pre>
<p><code>.applymap()</code> - one of the slowest method, because it goes to each cell (basically it performs nested loops inside).</p>
<p><code>df>0</code> works with vectorized data, so it does it <strong>much</strong> faster</p>
<p><code>.apply()</code> - will work faster than <code>.applymap()</code> as it works on columns, but still much slower compared to <code>df>0</code></p>
<p><strong>UPDATE2:</strong> time comparison on a smaller DF (1000 x 1000), as <code>applymap()</code> will take ages on (55K x 9K) DF:</p>
<pre><code>In [5]: df = pd.DataFrame(np.random.randint(0, 10, size=(1000, 1000)))
In [6]: %timeit df.applymap(lambda x: np.where(x >0, 1, 0))
1 loop, best of 3: 3.75 s per loop
In [7]: %timeit df.apply(lambda x: np.where(x >0, 1, 0))
1 loop, best of 3: 256 ms per loop
In [8]: %timeit (df>0).astype(np.int8)
100 loops, best of 3: 2.95 ms per loop
</code></pre>
|
python|pandas|dataframe
| 3
|
1,988
| 37,435,468
|
pandas SettingWithCopyWarning when using a subset of columns
|
<p>I am trying to understand pandas SettingWithCopyWarning: what exactly triggers it and how to avoid it. I want to take a selection of columns from a data frame and then work with this selection of columns. I need to fill missing values and replace all values larger than 1 with 1. </p>
<p>I understand that sub_df=df[['col1', 'col2', 'col3']] produces a copy and that seems to be what I want. Could someone explain why the copy warning is triggered here, whether it's a problem, and how I should avoid it?</p>
<p>I read a lot about chained assignment in this context, am I doing this here?</p>
<pre><code>data={'col1' : [25 , 0, 100, None],
'col2' : [50 , 0 , 0, None],
'col3' : [None, None, None, 100],
'col4' : [ 20 , 20 , 20 , 20 ],
'col5' : [1,1,2,3]}
df= pd.DataFrame(data)
sub_df=df[['col1', 'col2', 'col3']]
sub_df.fillna(0, inplace=True)
sub_df[df>1]=1 # produces the copy warning
sub_df
</code></pre>
<p>What really confuses me is why this warning is not triggered if I am not using a new name for my subset of columns as below:</p>
<pre><code>data={'col1' : [25 , 0, 100, None],
'col2' : [50 , 0 , 0, None],
'col3' : [None, None, None, 100],
'col4' : [ 20 , 20 , 20 , 20 ],
'col5' : [1,1,2,3]}
df= pd.DataFrame(data)
df=df[['col1', 'col2', 'col3']]
df.fillna(0, inplace=True)
df[df>1]=1 # does not produce the copy warning
df
</code></pre>
<p>Thanks!</p>
|
<p>Your 2 code snippets are semantically different, in the first it's ambiguous whether you want to operate on a view or a copy of the original df, in the second you overwrite <code>df</code> with a subset of the <code>df</code> so there is no ambiguity.</p>
<p>If you want to operate on a copy then do this:</p>
<pre><code>sub_df=df[['col1', 'col2', 'col3']].copy()
</code></pre>
<p>if you want to operate on a view then I suggest using a list of cols and referencing them using the new <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing" rel="nofollow">indexers</a> like the following:</p>
<pre><code>df.loc[:, col_list] = df[col_list].fillna(0)
</code></pre>
<p>and then, to cap the values at 1:</p>
<pre><code>df[col_list] = df[col_list].mask(df[col_list] > 1, 1)
</code></pre>
|
python|pandas
| 1
|
1,989
| 37,956,400
|
Using a pandas DataFrame to get the count of the 10th most frequent value
|
<p>I have a DataFrame that contains entries of place_ids such as:</p>
<pre><code>place_id
11111
11111
22222
33333
44444
44444
...
</code></pre>
<p>I would like to get the count of the 10th most frequent value.</p>
<p>Here's what I've come up with:</p>
<pre><code>print df.place_id.value_counts().nlargest(10).tail(1).values[0]
</code></pre>
<p>This seems like too much work. Is there an easier way to get the count of the 10th most frequent place_id?</p>
|
<p>try:</p>
<pre><code>import pandas as pd
import numpy as np
from string import ascii_letters
np.random.seed([3,1415])
s = pd.Series(np.random.choice(list(ascii_letters), (10000,)))
vc = s.value_counts().sort_values()
vc.loc[[vc.index[-10]]]
j 204
dtype: int64
</code></pre>
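<p>Alternatively, since <code>value_counts</code> already sorts counts in descending order, the 10th largest count can be read off directly (a sketch):</p>
<pre><code>s.value_counts().iloc[9]  # count of the 10th most frequent value
</code></pre>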
|
python|numpy|pandas|dataframe|series
| 2
|
1,990
| 64,565,273
|
Trying to repeat a pair of values in a numpy array
|
<p>I have a coordinate saved as a numpy array <code>x = np.array([1,2])</code> and I am trying to create an array that repeats [1,2] n times. For example, to repeat 4 times, I would want the array to look like this:</p>
<pre><code>array([[1,2],[1,2],[1,2],[1,2]])
</code></pre>
<p>I have tried using the function:</p>
<pre><code>np.repeat(x, 4, axis=0)
</code></pre>
<p>but the output is flattened array that looks like this:</p>
<pre><code>array([1,1,1,1,2,2,2,2])
</code></pre>
<p>Does anyone know how to do this?</p>
|
<p>Simplest way should be <code>[[1,2]]*4</code></p>
<blockquote>
<p><code>[[1,2]]*4</code></p>
</blockquote>
<p><code>[[1, 2], [1, 2], [1, 2], [1, 2]]</code></p>
<p>If you wanna make it array, <code>np.array([[1,2]]*4)</code> would work.</p>
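<p>A numpy-native alternative is <code>np.tile</code>, which repeats the whole array along the given axes:</p>
<pre><code>np.tile(x, (4, 1))
# array([[1, 2],
#        [1, 2],
#        [1, 2],
#        [1, 2]])
</code></pre>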
|
python|arrays|numpy|repeat
| 4
|
1,991
| 47,854,438
|
How to check column value among all others values in same column in Pandas Data frame?
|
<p>I have a pandas data frame which has three columns. Normally the loan type has 5 possible values, let's say Conso, Immo, Pro, Autre, Tous. This data frame only contains loan type 'Immo'. (At the beginning we don't know what the loan type is.) How do I check which loan type it is among all these loan types?</p>
<pre><code>CodeProduit LoanType Year
301 Immo 2003
301 Immo 2004
301 Immo 2005
301 Immo 2006
... ... ....
301 Immo 2017
def check_type_pret(p):
if p == 'Immo':
return p
elif p == 'Conso':
return p
elif p == 'Pro':
return p
elif p == 'Autres':
return p
elif p == 'Tous':
return p
else:
return 0
df1['Answer']=df1.LoanType.map(check_type_pret)
</code></pre>
<p>As output I'm getting 0 for the Answer column. How do I get the expected output as I explained? </p>
|
<p>If you want to check whether all values in <code>L</code> exist in column <code>LoanType</code>, use:</p>
<pre><code>L = ['Immo', 'Conso', 'Pro', 'Autres', 'Tous']
a = all([(df['LoanType'] == x).any() for x in L])
print (a)
False
</code></pre>
<p>Or:</p>
<pre><code>s = set(['Immo', 'Conso', 'Pro', 'Autres', 'Tous'])
a = s.issubset(set(df['LoanType'].tolist()))
print (a)
False
</code></pre>
<p>EDIT:</p>
<p>If your solution returns <code>0</code>, there is no match. </p>
<p>I guess there are some trailing whitespaces, so you need to remove them first with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a>:</p>
<pre><code>df1['Answer'] = df1.LoanType.str.strip().map(check_type_pret)
</code></pre>
<p>Another solution with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.where.html" rel="nofollow noreferrer"><code>where</code></a> and condition with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow noreferrer"><code>isin</code></a>:</p>
<pre><code>print (df1)
CodeProduit LoanType Year
0 301 Immo1 2003
1 301 Conso 2004
2 301 Pro 2005
3 301 Autres 2006
4 301 Tous 2017
df1.LoanType = df1.LoanType.str.strip()
L = ['Immo', 'Conso', 'Pro', 'Autres', 'Tous']
df1['Answer'] = np.where(df1.LoanType.isin(L), df1.LoanType, 0)
#another solution
#df1['Answer'] = df1.LoanType.where(df1.LoanType.isin(L), 0)
print (df1)
CodeProduit LoanType Year Answer
0 301 Immo1 2003 0
1 301 Conso 2004 Conso
2 301 Pro 2005 Pro
3 301 Autres 2006 Autres
4 301 Tous 2017 Tous
</code></pre>
|
python|pandas|dataframe
| 1
|
1,992
| 47,575,063
|
Is there a sci.stats.moment function for binned data?
|
<p>I'm looking for a function which calculates the n-th central moment
(same as the one out of scipy.stats.moment)
for my binned data (output of the numpy.histogram function).</p>
<pre><code># Generate normal distributed data
import numpy as np
import matplotlib.pyplot as plt
data = np.random.normal(size=500,loc=1,scale=2)
H = np.histogram(data,bins=50)
plt.scatter(H[1][:-1],H[0])
plt.show()
</code></pre>
<p>For my above code example the results should be (0, 4, 0, 48) for the first four central moments, as sigma = 2 there.</p>
|
<p>Working with binned data is essentially the same as working with weighted data. One uses the midpoint of each bin as a data point, and the count of that bin as its weight. If <code>scipy.stats.moment</code> supported weights, we could do this computation directly. As is, use the method <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.average.html" rel="nofollow noreferrer"><code>numpy.average</code></a> which supports weights. </p>
<pre><code>midpoints = 0.5 * (H[1][1:] + H[1][:-1])
ev = np.average(midpoints, weights = H[0])
print(ev)
for k in range(2, 5):
print(np.average((midpoints - ev)**k, weights = H[0]))
</code></pre>
<p>Output (obviously random): </p>
<pre><code>1.08242834443
4.21602099286
0.713129264647
51.6257736139
</code></pre>
<p>I didn't print the centered 1st moment (which is 0 by construction), printing the expected value instead. Theoretically*, these are 1, 4, 0, 48 but for any given sample, there is going to be some deviation from the parameters of the distribution. </p>
<p>(*) Not exactly. In the formula for variance I didn't include the correction factor <code>n/(n-1)</code> (where n is the total size of data set, i.e., the sum of weights). This factor adjusts the <a href="https://en.wikipedia.org/wiki/Variance#Sample_variance" rel="nofollow noreferrer">sample variance</a> so it becomes an unbiased estimator of the population variance. You can include it if you like. Similar adjustments are probably needed for higher-order moments (if the goal is to have unbiased estimators), but I'd have to look this up, and in any case this is not a statistics site. </p>
|
python|numpy|scipy
| 1
|
1,993
| 47,852,014
|
LabelEncoding to multiple columns in pandas
|
<p>I'm currently working on the Titanic dataset. It consists of 4-5 non-numeric columns. I want to apply the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow noreferrer">sklearn.LabelEncoder</a> class to get encoded values for these non-numeric columns. I can, no doubt, apply this method to each column one by one, but the job becomes tedious when there are more than 20-30 such columns. Since I know the names of these non-numeric columns, is there a more elegant way to do this?</p>
|
<p>Just run a loop after selecting object types</p>
<pre><code>from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
obj_cols = df.select_dtypes(include=[object])
for i in obj_cols:
    df[i+'label'] = le.fit_transform(df[i])
</code></pre>
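<p>If you would rather stay within pandas, a hedged sketch with <code>pd.factorize</code> likewise maps each distinct string to an integer code:</p>
<pre><code>for i in df.select_dtypes(include=[object]):
    df[i+'label'] = pd.factorize(df[i])[0]
</code></pre>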
|
python|machine-learning|scikit-learn|dataset|sklearn-pandas
| -1
|
1,994
| 48,907,175
|
Pandas not replacing strings in dataframe
|
<p>I have seen this question but it isn't working for me. I am sure I am making a blunder, so please tell me where I am going wrong. I want the values in "Street", "LandContour", etc. to be replaced, from "pave" to 1 and so on.</p>
<p><a href="https://stackoverflow.com/questions/17114904/python-pandas-replacing-strings-in-dataframe-with-numbers">python pandas replacing strings in dataframe with numbers</a></p>
<p>This is my code till now:</p>
<pre><code>import numpy as np
import pandas as pd
df=pd.read_csv('train.csv') # getting file
df.fillna(-99999, inplace=True)
#df.replace("Street", 0, True) didn't work
# mapping={'Street':1,'LotShape':2,'LandContour':3,'Utilities':4,'SaleCondition':5}
# df.replace('Street', 0) # didn't work
# df.replace({'Street': mapping, 'LotShape': mapping,
# 'LandContour': mapping, 'Utilities': mapping,
# 'SaleCondition': mapping})
# didn't work ^
df.head()
</code></pre>
<p>I have tried <code>df['Street'].replace("pave",0,inplace=True)</code> and a lot of other things but none worked. Not even the single value of arguments given in df.replace are replaced. My df is working fine it is printing the head and also specific coloumns, <code>df.fillna</code> also worked fine. Any help will be great.</p>
<p>EDIT: All the non-commented lines are working and I want uncommented lines to work.</p>
<p>The sample output is:-</p>
<pre><code>Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape \
0 1 60 RL 65.0 8450 Pave -99999 Reg
1 2 20 RL 80.0 9600 Pave -99999 Reg
2 3 60 RL 68.0 11250 Pave -99999 IR1
3 4 70 RL 60.0 9550 Pave -99999 IR1
4 5 60 RL 84.0 14260 Pave -99999 IR1
LandContour Utilities ... PoolArea PoolQC Fence MiscFeature \
0 Lvl AllPub ... 0 -99999 -99999 -99999
1 Lvl AllPub ... 0 -99999 -99999 -99999
2 Lvl AllPub ... 0 -99999 -99999 -99999
3 Lvl AllPub ... 0 -99999 -99999 -99999
4 Lvl AllPub ... 0 -99999 -99999 -99999
MiscVal MoSold YrSold SaleType SaleCondition SalePrice
0 0 2 2008 WD Normal 208500
1 0 5 2007 WD Normal 181500
2 0 9 2008 WD Normal 223500
3 0 2 2006 WD Abnorml 140000
4 0 12 2008 WD Normal 250000
</code></pre>
<p>I have also tried:-</p>
<pre><code>mapping={'Pave':1,'Lvl':2,'AllPub':3,'Reg':4,'Normal':5,'Abnormal':0,'IR1':6}
#df.replace('Street',0)
df.replace({'Street': mapping, 'LotShape': mapping,
'LandContour': mapping, 'Utilities': mapping, 'SaleCondition': mapping})
</code></pre>
<p>But that didnt work either ^</p>
|
<p>Try:</p>
<pre><code>df = pd.read_csv('train.csv') # reset
df.fillna(-99999, inplace=True) # refill
df['Street'].replace('Pave', 0, inplace=True) # replace
</code></pre>
<p>The problem with your previous approaches is that they don't apply replace to the correct column with the correct search values. Pay careful attention to capitalization as well.</p>
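<p>For completeness, the nested-dict form of <code>replace</code> also works once the keys match the data exactly (a sketch; note the capital 'P' in 'Pave'):</p>
<pre><code>df = df.replace({'Street': {'Pave': 0}})
</code></pre>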
|
python|pandas|dataframe
| 3
|
1,995
| 70,166,010
|
How to extract date time from dtype('<M8[ns]') in pandas?
|
<p>My ots column has: <code>2021-04-03 14:01:22.791856</code>.
Its dtype is <code>dtype('<M8[ns]')</code>.
How do I get only <code>2021-04-03 14:01:22</code>?</p>
|
<p>use this:
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html</a></p>
<p>considering that is your time column in the pandas dataframe:</p>
<pre><code>df['time'] = df['time'].dt.strftime('%Y-%m-%d %H:%M:%S')
</code></pre>
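<p>If you want to drop the microseconds but keep the datetime dtype (rather than converting to strings), a hedged alternative is <code>dt.floor</code>:</p>
<pre><code>df['time'] = df['time'].dt.floor('s')  # truncate to whole seconds, stays datetime64[ns]
</code></pre>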
|
python|pandas|datetime
| 1
|
1,996
| 70,236,626
|
Exact Value In Determinant Using Numpy
|
<p>I want to compute the determinant of a 2*2 matrix, and I use linalg.det from NumPy like this:</p>
<pre><code>import numpy as np
a = np.array([
[1,2],
[3,4]
])
b=np.linalg.det(a)
</code></pre>
<p>On the other hand, we know that we can also compute it with a multiplication and a subtraction like this:</p>
<pre><code>1*4 - 2*3 = -2
</code></pre>
<p>But when I compare b to -2:</p>
<pre><code>b == -2
</code></pre>
<p>It returns false.</p>
<p>What is the problem here? And How can I fix it?</p>
|
<p>If you're working with integers, you can use</p>
<pre class="lang-py prettyprint-override"><code>b = int(np.linalg.det(a))
</code></pre>
<p>To just make it into an integer. This should give you the following</p>
<pre class="lang-py prettyprint-override"><code>int(b) == -2 ## Returns True
</code></pre>
<p>Edit: This doesn't work if the approximation is something like -1.99999999999. As per JohanC in the comments below, try using int(round(b)) instead.</p>
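<p>If you need to compare the float result directly rather than converting it, a tolerance-based check such as <code>np.isclose</code> is a safer sketch:</p>
<pre><code>np.isclose(b, -2)  # True even if b is, say, -2.0000000000000004
</code></pre>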
|
python|python-3.x|numpy|math|determinants
| 0
|
1,997
| 56,371,979
|
Time difference between row and its previous/next row for the same customer in pandas dataframe
|
<p>I have a dataframe:</p>
<pre><code>In [1]: import pandas as pd;import numpy as np
In [2]: df = pd.DataFrame(
...: [
...: ['A', '2019-05-10 23:59:59', 'NOT_WORKING'],
...: ['A', '2019-05-11 00:05:00', 'WORKING'],
...: ['B', '2019-05-13 07:55:00', 'NOT_WORKING'],
...: ['B', '2019-05-15 07:57:00', 'WORKING'],
...: ['B', '2019-05-16 08:03:00', 'NOT_WORKING'],
...: ], columns=['cust', 'event_date', 'status'])
...: df.event_date = pd.to_datetime(df.event_date)
In [3]: df.loc[1, 'test'] = 'Y'
...: df.loc[3, 'test'] = 'Y'
In [4]: df
Out[4]:
cust event_date status test
0 A 2019-05-10 23:59:59 NOT_WORKING NaN
1 A 2019-05-11 00:05:00 WORKING Y
2 B 2019-05-13 07:55:00 NOT_WORKING NaN
3 B 2019-05-15 07:57:00 WORKING Y
4 B 2019-05-16 08:03:00 NOT_WORKING NaN
</code></pre>
<p>I need to find out the time difference between test rows and their previous/next rows for the same customer.</p>
<p>I have done it like this:</p>
<pre><code>In [5]: df.loc[:, 'prev_time'] = df.event_date.shift(1)
...: df.loc[:, 'prev_cust'] = df.cust.shift(1)
...: df.loc[:, 'next_time'] = df.event_date.shift(-1)
...: df.loc[:, 'next_cust'] = df.cust.shift(-1)
...: df
Out[5]:
cust event_date ... next_time next_cust
0 A 2019-05-10 23:59:59 ... 2019-05-11 00:05:00 A
1 A 2019-05-11 00:05:00 ... 2019-05-13 07:55:00 B
2 B 2019-05-13 07:55:00 ... 2019-05-15 07:57:00 B
3 B 2019-05-15 07:57:00 ... 2019-05-16 08:03:00 B
4 B 2019-05-16 08:03:00 ... NaT NaN
[5 rows x 8 columns]
In [9]: df = df.loc[df.test=='Y', :].assign(time_to_prev=lambda row: row.
...: event_date - row.prev_time ).assign(time_to_next=lambda row: row.
...: next_time - row.event_date)
...: df.loc[df.cust != df.prev_cust, 'time_to_prev'] = np.nan
...: df.loc[df.cust != df.next_cust, 'time_to_next'] = np.nan
...: df = df.drop(columns=['prev_time', 'prev_cust', 'next_time', 'nex
...: t_cust'])
...: df
Out[9]:
cust event_date status test time_to_prev time_to_next
1 A 2019-05-11 00:05:00 WORKING Y 0 days 00:05:01 NaT
3 B 2019-05-15 07:57:00 WORKING Y 2 days 00:02:00 1 days 00:06:00
</code></pre>
<p>It works, but I am looking for a more elegant solution that incorporates groupby, diff...
How can I do that?</p>
|
<p>First just make sure the sorting is correct for 'cust' and 'event_date', and then groupby customer, then take the difference for each row. </p>
<pre><code>df = df.sort_values(['cust', 'event_date'])
df.groupby('cust')['event_date'].diff()
event_date
0 NaT
1 0 days 00:05:01
2 NaT
3 2 days 00:02:00
4 1 days 00:06:00
</code></pre>
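<p>For the difference to the <em>next</em> row within the same customer, the same groupby works with a negative period (a sketch):</p>
<pre><code>-df.groupby('cust')['event_date'].diff(-1)  # time to the next event per customer
</code></pre>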
|
python|pandas|diff|difference|datediff
| 1
|
1,998
| 56,393,308
|
How to create a scatter plot in pandas grouped by time of day
|
<p>I would like to create a scatter plot using Pandas where the values are grouped by time of day and coloured/styled differently based on the day. The code snippet below will create a scatter plot of two time-series.</p>
<pre><code>import pandas as pd
idx = pd.date_range('2019-01-01', periods=48, freq='H')
x = pd.Series(range(len(idx)), index=idx)
y = x
d = {'x': x, 'y': y}
df = pd.DataFrame(data=d)
df.plot.scatter(x='x', y='y')
</code></pre>
<p>However, when I tried to aggregate the data into lists by time of day, I was unable to plot them as a scatter:</p>
<pre><code>df['time'] = df.index.time
df_agg= df.groupby('time').agg(list)
</code></pre>
|
<p>Since you want to perform a scatter plot and keep all the data, I suggest not using <code>groupby</code>. Instead, the <code>hour</code> and <code>day</code> methods of DatetimeIndex objects provide a simple way to color by day and plot by daytime hour.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
idx = pd.date_range('2019-01-01', periods=48, freq='H')
x = pd.Series(range(len(idx)), index=idx)
y = x
# add 'hour' and 'day' columns in the dataframe
d = {'x': x, 'y': y, 'hour': idx.hour, 'day': idx.day}
df = pd.DataFrame(data=d)
# use 'hour' as x axis to plot, and 'day' as marker color
df.plot.scatter(x='hour', y='y', c='day', colormap='rainbow')
plt.show()
</code></pre>
|
pandas|pandas-groupby
| 2
|
1,999
| 55,913,093
|
Element-wise matrix multiplication for multi-dimensional array
|
<p>I want to realize component-wise matrix multiplication in MATLAB, which can be done using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>numpy.einsum</code></a> in Python as below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
M = 2
N = 4
I = 2000
J = 300
A = np.random.randn(M, M, I)
B = np.random.randn(M, M, N, J, I)
C = np.random.randn(M, J, I)
# using einsum
D = np.einsum('mki, klnji, lji -> mnji', A, B, C)
# naive for-loop
E = np.zeros((M, N, J, I))
for i in range(I):
    for j in range(J):
        for n in range(N):
            E[:,n,j,i] = A[:,:,i] @ B[:,:,n,j,i] @ C[:,j,i]
print(np.sum(np.abs(D-E))) # expected small enough
</code></pre>
<p>So far I use for-loops over <code>i</code>, <code>j</code>, and <code>n</code>, but I want to avoid them, at least the loop over <code>n</code>.</p>
|
<h3>Option 1: Calling numpy from MATLAB</h3>
<p>Assuming your system is set up <a href="https://www.mathworks.com/help/matlab/matlab_external/call-python-from-matlab.html" rel="nofollow noreferrer">according to the documentation</a>, and you have the numpy package installed, you could do (in MATLAB):</p>
<pre class="lang-matlab prettyprint-override"><code>np = py.importlib.import_module('numpy');
M = 2;
N = 4;
I = 2000;
J = 300;
A = matpy.mat2nparray( randn(M, M, I) );
B = matpy.mat2nparray( randn(M, M, N, J, I) );
C = matpy.mat2nparray( randn(M, J, I) );
D = matpy.nparray2mat( np.einsum('mki, klnji, lji -> mnji', A, B, C) );
</code></pre>
<p>Where <code>matpy</code> can be found <a href="https://github.com/CoolProp/CoolProp/blob/master/wrappers/MATLAB/matpy.m" rel="nofollow noreferrer">here</a>.</p>
<h3>Option 2: Native MATLAB</h3>
<p>Here the most important part is to get the permutations right, so we need to keep track of our dimensions. We'll be using the following order:</p>
<pre><code>I(1) J(2) K(3) L(4) M(5) N(6)
</code></pre>
<p>Now, I'll explain how I got the correct permute order (let's take the example of <code>A</code>): <code>einsum</code> expects the dimension order to be <code>mki</code>, which according to our numbering is <code>5 3 1</code>. This tells us that the 1<sup>st</sup> dimension of <code>A</code> needs to be the 5<sup>th</sup>, the 2<sup>nd</sup> needs to be 3<sup>rd</sup> and the 3<sup>rd</sup> needs to be 1<sup>st</sup> (in short <code>1->5, 2->3, 3->1</code>). This also means that the "sourceless dimensions" (meaning those that have no original dimensions becoming them; in this case 2 4 6) should be singleton. Using <code>ipermute</code> this is really simple to write:</p>
<pre><code>pA = ipermute(A, [5,3,1,2,4,6]);
</code></pre>
<p>In the above example, <code>1->5</code> means we write <code>5</code> first, and the same goes for the other two dimensions (yielding [5,3,1]). Then we just add the singletons (2,4,6) at the end to get <code>[5,3,1,2,4,6]</code>. Finally:</p>
<pre class="lang-matlab prettyprint-override"><code>A = randn(M, M, I);
B = randn(M, M, N, J, I);
C = randn(M, J, I);
% Reference dim order: I(1) J(2) K(3) L(4) M(5) N(6)
pA = ipermute(A, [5,3,1,2,4,6]); % 1->5, 2->3, 3->1; 2nd, 4th & 6th are singletons
pB = ipermute(B, [3,4,6,2,1,5]); % 1->3, 2->4, 3->6, 4->2, 5->1; 5th is singleton
pC = ipermute(C, [4,2,1,3,5,6]); % 1->4, 2->2, 3->1; 3rd, 5th & 6th are singletons
pD = sum( ...
permute(pA .* pB .* pC, [5,6,2,1,3,4]), ... 1->5, 2->6, 3->2, 4->1; 3rd & 4th are singletons
[5,6]);
</code></pre>
<p>(see note regarding <code>sum</code> at the bottom of the post.)</p>
<p>Another way to do it in MATLAB, <a href="https://chat.stackoverflow.com/transcript/message/46079329#46079329">as mentioned by @AndrasDeak</a>, is the following:</p>
<pre class="lang-matlab prettyprint-override"><code>rD = squeeze(sum(reshape(A, [M, M, 1, 1, 1, I]) .* ...
reshape(B, [1, M, M, N, J, I]) .* ...
... % same as: reshape(B, [1, size(B)]) .* ...
... % same as: shiftdim(B,-1) .* ...
reshape(C, [1, 1, M, 1, J, I]), [2, 3]));
</code></pre>
<p>See also: <a href="https://www.mathworks.com/help/matlab/ref/squeeze.html" rel="nofollow noreferrer"><code>squeeze</code></a>, <a href="https://www.mathworks.com/help/matlab/ref/reshape.html" rel="nofollow noreferrer"><code>reshape</code></a>, <a href="https://www.mathworks.com/help/matlab/ref/permute.html" rel="nofollow noreferrer"><code>permute</code></a>, <a href="https://www.mathworks.com/help/matlab/ref/ipermute.html" rel="nofollow noreferrer"><code>ipermute</code></a>, <a href="https://www.mathworks.com/help/matlab/ref/shiftdim.html" rel="nofollow noreferrer"><code>shiftdim</code></a>.</p>
<hr />
<p>Here's a full example that shows that tests whether these methods are equivalent:</p>
<pre class="lang-matlab prettyprint-override"><code>function q55913093
M = 2;
N = 4;
I = 2000;
J = 300;
mA = randn(M, M, I);
mB = randn(M, M, N, J, I);
mC = randn(M, J, I);
%% Option 1 - using numpy:
np = py.importlib.import_module('numpy');
A = matpy.mat2nparray( mA );
B = matpy.mat2nparray( mB );
C = matpy.mat2nparray( mC );
D = matpy.nparray2mat( np.einsum('mki, klnji, lji -> mnji', A, B, C) );
%% Option 2 - native MATLAB:
%%% Reference dim order: I(1) J(2) K(3) L(4) M(5) N(6)
pA = ipermute(mA, [5,3,1,2,4,6]); % 1->5, 2->3, 3->1; 2nd, 4th & 6th are singletons
pB = ipermute(mB, [3,4,6,2,1,5]); % 1->3, 2->4, 3->6, 4->2, 5->1; 5th is singleton
pC = ipermute(mC, [4,2,1,3,5,6]); % 1->4, 2->2, 3->1; 3rd, 5th & 6th are singletons
pD = sum( permute( ...
pA .* pB .* pC, [5,6,2,1,3,4]), ... % 1->5, 2->6, 3->2, 4->1; 3rd & 4th are singletons
[5,6]);
rD = squeeze(sum(reshape(mA, [M, M, 1, 1, 1, I]) .* ...
reshape(mB, [1, M, M, N, J, I]) .* ...
reshape(mC, [1, 1, M, 1, J, I]), [2, 3]));
%% Comparisons:
sum(abs(pD-D), 'all')
isequal(pD,rD)
</code></pre>
<p>Running the above we get that the results are indeed equivalent:</p>
<pre class="lang-none prettyprint-override"><code>>> q55913093
ans =
2.1816e-10
ans =
logical
1
</code></pre>
<p>Note that these two methods of calling <code>sum</code> were introduced in recent releases, so you might need to replace them if your MATLAB is relatively old:</p>
<pre><code>S = sum(A,'all') % can be replaced by ` sum(A(:)) `
S = sum(A,vecdim) % can be replaced by ` sum( sum(A, dim1), dim2) `
</code></pre>
<hr />
<p>As requested in the comments, here's a benchmark comparing the methods:</p>
<pre class="lang-matlab prettyprint-override"><code>function t = q55913093_benchmark(M,N,I,J)
if nargin == 0
M = 2;
N = 4;
I = 2000;
J = 300;
end
% Define the arrays in MATLAB
mA = randn(M, M, I);
mB = randn(M, M, N, J, I);
mC = randn(M, J, I);
% Define the arrays in numpy
np = py.importlib.import_module('numpy');
pA = matpy.mat2nparray( mA );
pB = matpy.mat2nparray( mB );
pC = matpy.mat2nparray( mC );
% Test for equivalence
D = cat(5, M1(), M2(), M3());
assert( sum(abs(D(:,:,:,:,1) - D(:,:,:,:,2)), 'all') < 1E-8 );
assert( isequal (D(:,:,:,:,2), D(:,:,:,:,3)));
% Time
t = [ timeit(@M1,1), timeit(@M2,1), timeit(@M3,1)];
function out = M1()
out = matpy.nparray2mat( np.einsum('mki, klnji, lji -> mnji', pA, pB, pC) );
end
function out = M2()
out = permute( ...
sum( ...
ipermute(mA, [5,3,1,2,4,6]) .* ...
ipermute(mB, [3,4,6,2,1,5]) .* ...
ipermute(mC, [4,2,1,3,5,6]), [3,4]...
), [5,6,2,1,3,4]...
);
end
function out = M3()
out = squeeze(sum(reshape(mA, [M, M, 1, 1, 1, I]) .* ...
reshape(mB, [1, M, M, N, J, I]) .* ...
reshape(mC, [1, 1, M, 1, J, I]), [2, 3]));
end
end
</code></pre>
<p>On my system this results in:</p>
<pre class="lang-none prettyprint-override"><code>>> q55913093_benchmark
ans =
1.3964 0.1864 0.2428
</code></pre>
<p>Which means that the 2<sup>nd</sup> method is preferable (at least for the default input sizes).</p>
|
matlab|multidimensional-array|sum|elementwise-operations|numpy-einsum
| 8
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.