## Data Scaling

Standardize a dataset along any axis: center to the mean and component-wise scale to unit variance. For more information, [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html).
```python
from sklearn import preprocessing

x_train = preprocessing.scale(x_train)
x_test = preprocessing.scale(x_test)
```
License: Apache-2.0 · Regression/Linear Models/LassoLars_Scale.ipynb · surya2365/ds-seed
## Model

LassoLars is a lasso model implemented using the LARS algorithm. Unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.

Tuning parameters:

> **fit_intercept** -> Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations.
>
> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. `alpha = 0` is equivalent to ordinary least squares, solved by LinearRegression. For numerical reasons, using `alpha = 0` with the LassoLars object is not advised; you should prefer the LinearRegression object.
>
> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the `tol` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
>
> **max_iter** -> Maximum number of iterations to perform.
>
> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove `fit_intercept`, which is set to True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (`alphas_[alphas_ > 0.].min()` when `fit_path=True`) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate-descent Lasso estimator.
>
> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations.
```python
from sklearn.linear_model import LassoLars

model = LassoLars()
model.fit(x_train, y_train)
```
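As a sketch of how the tuning parameters described above might be passed explicitly, here is a minimal, self-contained example. The synthetic `x`/`y` arrays are stand-ins of my own invention (the notebook's real `x_train`/`y_train` come from an earlier cell), and every parameter is set to its scikit-learn default:

```python
import numpy as np
from sklearn.linear_model import LassoLars

# Synthetic stand-in data; the notebook's real x_train/y_train are defined earlier.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.5, 0.0, -2.0]) + rng.normal(scale=0.1, size=100)

# The tuning parameters from the description above, shown at their defaults.
model = LassoLars(alpha=1.0, fit_intercept=True, max_iter=500,
                  eps=np.finfo(float).eps, positive=False, precompute='auto')
model.fit(x, y)
print(model.coef_)
```

Lowering `alpha` toward 0 weakens the penalty and moves the coefficients toward the ordinary-least-squares solution, at the numerical risk the parameter description warns about.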
## Model Accuracy

We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.

> **score**: The score function returns the coefficient of determination $R^2$ of the prediction.
```python
print("Accuracy score {:.2f} %\n".format(model.score(x_test, y_test) * 100))
```

    Accuracy score 79.47 %
> **r2_score**: The **r2_score** function computes the fraction of the variability in the target that is explained by our model.
>
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
>
> **mse**: The **mean squared error** function squares the errors before averaging, which penalizes the model for large errors.
```python
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

y_pred = model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test, y_pred) * 100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test, y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test, y_pred)))
```

    R2 Score: 79.47 %
    Mean Absolute Error 3923.32
    Mean Squared Error 31383167.17
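To make the metric definitions above concrete, here is a small hand computation with illustrative numbers of my own (not the notebook's data), matching what the sklearn functions compute:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))        # average absolute distance
mse = np.mean((y_true - y_pred) ** 2)         # squaring penalizes large misses
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
r2 = 1 - ss_res / ss_tot                      # fraction of variance explained

print(mae, mse, r2)  # 0.875 1.3125 and r2 just under 0.65
```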
## Prediction Plot

First, we plot the actual observations for the first 20 test records, with the record number on the x-axis and the target value on the y-axis. For the prediction line, we plot the model's predictions for those same records on the y-axis.
```python
import matplotlib.pyplot as plt

plt.figure(figsize=(14, 10))
plt.plot(range(20), y_test[0:20], color="green")
plt.plot(range(20), model.predict(x_test[0:20]), color="red")
plt.legend(["Actual", "Prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)  # `target` is the name of the target column, defined earlier in the notebook
plt.show()
```
[Table of Contents](./table_of_contents.ipynb)

# Discrete Bayes Filter
```python
%matplotlib inline

# format the book
import book_format
book_format.set_style()
```
License: CC-BY-4.0 · 02-Discrete-Bayes.ipynb · anti-destiny/Kalman-and-Bayesian-Filters-in-Python
The Kalman filter belongs to a family of filters called *Bayesian filters*. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level. That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader.

I will use a different way to develop the topic, for which I owe the work of Dieter Fox and Sebastian Thrun a great debt. It depends on building an intuition for how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing this book in favor of the first few lessons of that course, and then coming back for a deeper dive into the topic.

Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking.
## Tracking a Dog

Let's begin with a simple problem. We have a dog-friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them. So during a hackathon somebody invented a sonar sensor to attach to the dog's collar. It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to the network via wifi and sends an update once a second.

I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read **door**, **hall**, **hall**, and so on. How can I use that information to determine where Simon is?

To keep the problem small enough to plot easily we will assume that there are only 10 positions in the hallway, which we will number 0 to 9, where 1 is to the right of 0. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0. When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. From my perspective he is equally likely to be in any position.
There are 10 positions, so the probability that he is in any given position is 1/10. Let's represent our belief of his position in a NumPy array. I could use a Python list, but NumPy arrays offer functionality that we will be using soon.
```python
import numpy as np

belief = np.array([1/10] * 10)
print(belief)
```

    [0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]
In [Bayesian statistics](https://en.wikipedia.org/wiki/Bayesian_probability) this is called a [*prior*](https://en.wikipedia.org/wiki/Prior_probability). It is the probability prior to incorporating measurements or other information. More completely, this is called the *prior probability distribution*. A [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) is a collection of all possible probabilities for an event. Probability distributions always sum to 1 because something had to happen; the distribution lists all possible events and the probability of each.

I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats probability as a belief about a single event. Let's take an example. I know that if I flip a fair coin infinitely many times I will get 50% heads and 50% tails. This is called [*frequentist statistics*](https://en.wikipedia.org/wiki/Frequentist_inference) to distinguish it from Bayesian statistics. Computations are based on the frequency with which events occur.

I flip the coin one more time and let it land. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. In some ways it is meaningless to assign a probability to the current state of the coin. It is either heads or tails, we just don't know which. Bayes treats this as a belief about a single event - the strength of my belief or knowledge that this specific coin flip is heads is 50%. Some object to the term "belief"; belief can imply holding something to be true without evidence. In this book it always is a measure of the strength of our knowledge. We'll learn more about this as we go.

Bayesian statistics takes past information (the prior) into account. We observe that it rains 4 times every 100 days.
From this I could state that the chance of rain tomorrow is 1/25. This is not how weather prediction is done. If I know it is raining today and the storm front is stalled, it is likely to rain tomorrow. Weather prediction is Bayesian.

In practice statisticians use a mix of frequentist and Bayesian techniques. Sometimes finding the prior is difficult or impossible, and frequentist techniques rule. In this book we can find the prior. When I talk about the probability of something I am referring to the probability that some specific thing is true given past events. When I do that I'm taking the Bayesian approach.

Now let's create a map of the hallway. We'll place the first two doors close together, and then another door further away. We will use 1 for doors, and 0 for walls:
```python
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
```
I start listening to Simon's transmissions on the network, and the first data I get from the sensor is **door**. For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no reason to believe he is in front of the first, second, or third door. What I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door.
```python
import kf_book.book_plots as book_plots
from kf_book.book_plots import figsize, set_figsize
import matplotlib.pyplot as plt

belief = np.array([1/3, 1/3, 0, 0, 0, 0, 0, 0, 1/3, 0])
book_plots.bar_plot(belief)
```
This distribution is called a [*categorical distribution*](https://en.wikipedia.org/wiki/Categorical_distribution), which is a discrete distribution describing the probability of observing $n$ outcomes. It is a [*multimodal distribution*](https://en.wikipedia.org/wiki/Multimodal_distribution) because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that we have narrowed down our knowledge to one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of his being at door 0, 33.3% at door 1, and a 33.3% chance of his being at door 8.

This is an improvement in two ways. I've rejected a number of hallway positions as impossible, and the strength of my belief in the remaining positions has increased from 10% to 33%. This will always happen. As our knowledge improves the probabilities will get closer to 100%.

A few words about the [*mode*](https://en.wikipedia.org/wiki/Mode_%28statistics%29) of a distribution. Given a list of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the *mode* is the number that occurs most often. For this set the mode is 2. A distribution can contain more than one mode. The list {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former list is [*unimodal*](https://en.wikipedia.org/wiki/Unimodality), and the latter is *multimodal*.

Another term used for this distribution is a [*histogram*](https://en.wikipedia.org/wiki/Histogram). Histograms graphically depict the distribution of a set of numbers. The bar chart above is a histogram.

I hand coded the `belief` array in the code above. How would we implement this in code?
We represent doors with 1, and walls as 0, so we will multiply the hallway variable by the percentage, like so:
```python
belief = hallway * (1/3)
print(belief)
```

    [0.333 0.333 0.    0.    0.    0.    0.    0.    0.333 0.   ]
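As an aside on the *mode* discussion above: Python's standard library can compute modes directly, which makes the unimodal/multimodal examples easy to verify. `statistics.multimode` returns every value tied for the most occurrences:

```python
from statistics import multimode

print(multimode([1, 2, 2, 2, 3, 3, 4]))        # [2] - unimodal
print(multimode([1, 2, 2, 2, 3, 3, 4, 4, 4]))  # [2, 4] - multimodal
```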
## Extracting Information from Sensor Readings

Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor:

* door
* move right
* door

Can we deduce Simon's location? Of course! Given the hallway's layout there is only one place from which you can get this sequence, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. The only possibility is that he is now in front of the second door. Our belief is:
```python
belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
```
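The deduction above can be mechanized: try every starting position and keep the ones consistent with the sequence *door, move right, door*. This brute-force check is my own illustration, not code from the book:

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# Positions consistent with: sense 'door', move one space right, sense 'door'.
# The hallway is circular, hence the modulo on the second reading.
consistent = [i for i in range(len(hallway))
              if hallway[i] == 1 and hallway[(i + 1) % len(hallway)] == 1]
print(consistent)  # [0] - only position 0 qualifies, so Simon is now at position 1
```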
I designed the hallway layout and sensor readings to give us an exact answer quickly. Real problems are not so clear cut. But this should trigger your intuition - the first sensor reading only gave us low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we know more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors, then after several sensor readings and position updates we would either know where Simon was, or have narrowed the possibilities down to a small number. This is possible when a set of sensor readings only matches one to a few starting locations.

We could implement this solution now, but instead let's consider a real world complication to the problem.

## Noisy Sensors

Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or misread if he is not facing down the hallway. Thus when I get **door** I cannot use 1/3 as the probability. I have to assign less than 1/3 to each door, and assign a small probability to each blank wall position. Something like

```Python
[.31, .31, .01, .01, .01, .01, .01, .01, .31, .01]
```

At first this may seem insurmountable. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure?

The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise.
Say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to fix that in a moment.

Let's look at that in Python code. Here I use the variable `z` to denote the measurement. `z` or `y` are customary choices in the literature for the measurement. As a programmer I prefer meaningful variable names, but I want you to be able to read the literature and/or other filtering code, so I will start introducing these abbreviated names now.
```python
def update_belief(hall, belief, z, correct_scale):
    for i, val in enumerate(hall):
        if val == z:
            belief[i] *= correct_scale

belief = np.array([0.1] * 10)
reading = 1  # 1 is 'door'
update_belief(hallway, belief, z=reading, correct_scale=3.)
print('belief:', belief)
print('sum =', sum(belief))
plt.figure()
book_plots.bar_plot(belief)
```

    belief: [0.3 0.3 0.1 0.1 0.1 0.1 0.1 0.1 0.3 0.1]
    sum = 1.6000000000000003
This is not a probability distribution because it does not sum to 1.0. But the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list. That is easy with NumPy:
```python
belief / sum(belief)
```
FilterPy implements this with the `normalize` function:

```Python
from filterpy.discrete_bayes import normalize
normalize(belief)
```

It is a bit odd to say "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that. The equation for that is

$$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}}{1-prob_{correct}}$$

Also, the `for` loop is cumbersome. As a general rule you will want to avoid using `for` loops in NumPy code. NumPy is implemented in C and Fortran, so if you avoid for loops the result often runs 100x faster than the equivalent loop.

How do we get rid of this `for` loop? NumPy lets you index arrays with boolean arrays. You create a boolean array with logical operators. We can find all the doors in the hallway with:
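Plugging numbers into the scale equation above confirms that a sensor which is correct 75% of the time gives exactly the factor of 3 used earlier. A quick sketch (`scale_factor` is my own helper name, not FilterPy's):

```python
def scale_factor(prob_correct):
    # scale = p_correct / (1 - p_correct)
    return prob_correct / (1. - prob_correct)

print(scale_factor(0.75))  # 3.0 - "3 times as likely to be right as wrong"
```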
```python
hallway == 1
```
When you use the boolean array as an index to another array it returns only the elements where the index is `True`. Thus we can replace the `for` loop with

```python
belief[hall==z] *= scale
```

and only the elements which equal `z` will be multiplied by `scale`.

Teaching you NumPy is beyond the scope of this book. I will use idiomatic NumPy constructs and explain them the first time I present them. If you are new to NumPy there are many blog posts and videos on how to use NumPy efficiently and idiomatically.

Here is our improved version:
```python
from filterpy.discrete_bayes import normalize

def scaled_update(hall, belief, z, z_prob):
    scale = z_prob / (1. - z_prob)
    belief[hall==z] *= scale
    normalize(belief)

belief = np.array([0.1] * 10)
scaled_update(hallway, belief, z=1, z_prob=.75)
print('sum =', sum(belief))
print('probability of door =', belief[0])
print('probability of wall =', belief[2])
book_plots.bar_plot(belief, ylim=(0, .3))
```

    sum = 1.0
    probability of door = 0.1875
    probability of wall = 0.06249999999999999
We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions.

This result is called the [*posterior*](https://en.wikipedia.org/wiki/Posterior_probability), which is short for *posterior probability distribution*. All this means is a probability distribution *after* incorporating the measurement information (posterior means 'after' in this context). To review, the *prior* is the probability distribution before including the measurement's information.

Another term is the [*likelihood*](https://en.wikipedia.org/wiki/Likelihood_function). When we computed `belief[hall==z] *= scale` we were computing how *likely* each position was given the measurement. The likelihood is not a probability distribution because it does not sum to one.

The combination of these gives the equation

$$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$

When we talk about the filter's output we typically call the state after performing the prediction the *prior* or *prediction*, and we call the state after the update either the *posterior* or the *estimated state*. It is very important to learn and internalize these terms as most of the literature uses them extensively.

Does `scaled_update()` perform this computation? It does.
Let me recast it into this form:
```python
def scaled_update(hall, belief, z, z_prob):
    scale = z_prob / (1. - z_prob)
    likelihood = np.ones(len(hall))
    likelihood[hall==z] *= scale
    return normalize(likelihood * belief)
```
This function is not fully general. It contains knowledge about the hallway, and how we match measurements to it. We always strive to write general functions. Here we will remove the computation of the likelihood from the function, and require the caller to compute the likelihood themselves.

Here is a full implementation of the algorithm:

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

Computation of the likelihood varies per problem. For example, the sensor might not return just 1 or 0, but a `float` between 0 and 1 indicating the probability of being in front of a door. It might use computer vision and report a blob shape that you then probabilistically match to a door. It might use sonar and return a distance reading. In each case the computation of the likelihood will be different. We will see many examples of this throughout the book, and learn how to perform these calculations.

FilterPy implements `update`. Here is the previous example in a fully general form:
```python
from filterpy.discrete_bayes import update

def lh_hallway(hall, z, z_prob):
    """ compute likelihood that a measurement matches
    positions in the hallway."""
    try:
        scale = z_prob / (1. - z_prob)
    except ZeroDivisionError:
        scale = 1e8
    likelihood = np.ones(len(hall))
    likelihood[hall==z] *= scale
    return likelihood

belief = np.array([0.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
update(likelihood, belief)
```
## Incorporating Movement

Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors?

Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math.

First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?

I hope that after a moment's thought it is clear that we should shift all the values one space to the right. If we previously thought there was a 50% chance of Simon being at position 3, then after he moved one position to the right we should believe that there is a 50% chance he is at position 4. The hallway is circular, so we will use modulo arithmetic to perform the shift.
```python
def perfect_predict(belief, move):
    """ move the position by `move` spaces, where positive is
    to the right, and negative is to the left """
    n = len(belief)
    result = np.zeros(n)
    for i in range(n):
        result[i] = belief[(i-move) % n]
    return result

belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
plt.subplot(121)
book_plots.bar_plot(belief, title='Before prediction', ylim=(0, .4))

belief = perfect_predict(belief, 1)
plt.subplot(122)
book_plots.bar_plot(belief, title='After prediction', ylim=(0, .4))
```
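As an aside, the circular shift computed by the loop in `perfect_predict` is exactly what NumPy's built-in `np.roll` does, so the prediction can also be written without a loop. A quick check:

```python
import numpy as np

belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])

# result[i] = belief[(i - move) % n] is a circular right-shift by `move`,
# which is what np.roll(belief, move) computes; the .05 wraps to the front.
shifted = np.roll(belief, 1)
print(shifted)
```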
We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning. The next cell animates this so you can see it in action. Use the slider to move forwards and backwards in time. This simulates Simon walking around and around the hallway. It does not yet incorporate new measurements so the probability distribution does not change shape, only position.
from ipywidgets import interact, IntSlider belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05]) perfect_beliefs = [] for _ in range(20): # Simon takes one step to the right belief = perfect_predict(belief, 1) perfect_beliefs.append(belief) def simulate(time_step): book_plots.bar_plot(perfect_beliefs[time_step], ylim=(0, .4)) interact(simulate, time_step=IntSlider(value=0, max=len(perfect_beliefs)-1));
## Terminology

Let's pause a moment to review terminology. I introduced this terminology in the last chapter, but let's take a second to help solidify your knowledge.

The *system* is what we are trying to model or filter. Here the system is our dog. The *state* is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the *estimated state* of the system. In practice this often gets called the state, so be careful to understand the context.

One cycle of prediction and updating with a measurement is called the state or system *evolution*, which is short for *time evolution* [7]. Another term is *system propagation*. It refers to how the state of the system changes over time. For filters, time is usually a discrete step, such as 1 second. For our dog tracker the system state is the position of the dog, and the state evolution is the position after a discrete amount of time has passed.

We model the system behavior with the *process model*. Here, our process model is that the dog moves one or more positions at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the *system error* or *process error*.

The prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements.

Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds? Clearly,

$$ \begin{aligned}
\bar x &= 17 + (15 \cdot 2) \\
&= 47
\end{aligned}$$

I use bars over variables to indicate that they are priors (predictions). We can write the equation for the process model like this:

$$ \bar x_{k+1} = f_x(\bullet) + x_k$$

$x_k$ is the current position or state. If the dog is at 17 m then $x_k = 17$. $f_x(\bullet)$ is the state propagation function for x. It describes how much $x_k$ changes over one time step.
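As a quick sketch, the worked example above can be written directly in code. The function names here are my own illustration, not anything from FilterPy:

```python
# Sketch of the process model x̄_{k+1} = f_x(·) + x_k for the worked
# example: a dog at 17 m moving at 15 m/s over a 2 second epoch.
# `f_x` and `predict_position` are illustrative names, not library code.
def f_x(velocity, dt):
    """State propagation: how much x changes over one time step."""
    return velocity * dt

def predict_position(x_k, velocity, dt):
    return x_k + f_x(velocity, dt)

print(predict_position(17, 15, 2))  # 47
```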
For our example it performs the computation $15 \cdot 2$, so we would define it as

$$f_x(v_x, t) = v_x t$$

## Adding Uncertainty to the Prediction

`perfect_predict()` assumes perfect measurements, but all sensors have noise. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? This may sound like an insurmountable problem, but let's model it and see what happens.

Assume that the sensor's movement measurement is 80% likely to be correct, 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if the movement measurement is 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.

Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider the reported movement of 2. If we are 100% certain the dog started from position 3, then there is an 80% chance he is at 5, and a 10% chance for either 4 or 6.
Let's try coding that:
def predict_move(belief, move, p_under, p_correct, p_over): n = len(belief) prior = np.zeros(n) for i in range(n): prior[i] = ( belief[(i-move) % n] * p_correct + belief[(i-move-1) % n] * p_over + belief[(i-move+1) % n] * p_under) return prior belief = [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.] prior = predict_move(belief, 2, .1, .8, .1) book_plots.plot_belief_vs_prior(belief, prior)
It appears to work correctly. Now what happens when our belief is not 100% certain?
belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0] prior = predict_move(belief, 2, .1, .8, .1) book_plots.plot_belief_vs_prior(belief, prior) prior
Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. **I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step.**

If you look at the probabilities after performing the prediction you might be dismayed. In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the prediction the probabilities are not only lowered, but they are strewn out across the map.

This is not a coincidence, or the result of a carefully chosen example - it is always true of the prediction. If the sensor is noisy we lose some information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `belief` array. Let's try this with 100 iterations. The plot is animated; use the slider to change the step number.
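To double-check the hand computation, here is `predict_move()` again (repeated as plain Python so this snippet runs on its own) applied to the belief above:

```python
def predict_move(belief, move, p_under, p_correct, p_over):
    n = len(belief)
    return [belief[(i - move) % n] * p_correct +
            belief[(i - move - 1) % n] * p_over +
            belief[(i - move + 1) % n] * p_under
            for i in range(n)]

belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)

# position 3: only the 0.4 belief undershooting by one contributes
# position 4: 0.4*0.8 (correct move of 2) + 0.6*0.1 (undershoot)
print(round(prior[3], 4), round(prior[4], 4))  # 0.04 0.38
```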
belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
predict_beliefs = []

for i in range(100):
    belief = predict_move(belief, 1, .1, .8, .1)
    predict_beliefs.append(belief)

print('Final Belief:', belief)

# make interactive plot
def show_prior(step):
    book_plots.bar_plot(predict_beliefs[step-1])
    plt.title(f'Step {step}')

interact(show_prior, step=IntSlider(value=1, max=len(predict_beliefs)));
Final Belief: [0.104 0.103 0.101 0.099 0.097 0.096 0.097 0.099 0.101 0.103]
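The final belief above is nearly uniform. One way to quantify this loss of information is the Shannon entropy of the belief, which climbs from 0 bits (complete certainty) toward the maximum of $\log_2 10 \approx 3.32$ bits as the distribution flattens. This is my own illustration, not code from the book:

```python
import math

# Self-contained copy of the prediction step from above.
def predict_move(belief, move, p_under, p_correct, p_over):
    n = len(belief)
    return [belief[(i - move) % n] * p_correct +
            belief[(i - move - 1) % n] * p_over +
            belief[(i - move + 1) % n] * p_under
            for i in range(n)]

def entropy(p):
    """Shannon entropy in bits; 0 = certain, log2(n) = no information."""
    return -sum(x * math.log2(x) for x in p if x > 0)

belief = [1.] + [0.] * 9          # 100% certain we start at position 0
for step in range(1, 101):
    belief = predict_move(belief, 1, .1, .8, .1)
    if step in (1, 10, 50, 100):
        print(f'step {step:3}: entropy = {entropy(belief):.3f} bits')
```

Each prediction strictly adds entropy, so after enough steps the belief is indistinguishable from the uniform distribution.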
After 100 iterations we have lost almost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of differing number of updates. For example, after 100 updates a small amount of information is left, after 50 a lot is left, but by 200 iterations essentially all information is lost.

And, if you are viewing this online here is an animation of that output.

I will not generate these standalone animations through the rest of the book. Please see the preface for instructions to run this book on the web, for free, or install IPython on your computer. This will allow you to run all of the cells and see the animations. It's very important that you practice with this code, not just read passively.

## Generalizing with Convolution

We made the assumption that the movement error is at most one position. But it is possible for the error to be two, three, or more positions. As programmers we always want to generalize our code so that it works for all cases. This is easily solved with [*convolution*](https://en.wikipedia.org/wiki/Convolution). Convolution modifies one function with another function. In our case we are modifying a probability distribution with the error function of the sensor. The implementation of `predict_move()` is a convolution, though we did not call it that.
Formally, convolution is defined as

$$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$

where $f\ast g$ is the notation for convolving f by g. It does not mean multiply.

Integrals are for continuous functions, but we are using discrete functions. We replace the integral with a summation, and the parentheses with array brackets:

$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$

Comparison shows that `predict_move()` is computing this equation - it computes the sum of a series of multiplications.

[Khan Academy](https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution) [4] has a good introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear. You slide an array called the *kernel* across another array, multiplying the neighbors of the current cell with the values of the second array. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array `[0.1, 0.8, 0.1]`. All we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named it `pdf`.
def predict_move_convolution(pdf, offset, kernel): N = len(pdf) kN = len(kernel) width = int((kN - 1) / 2) prior = np.zeros(N) for i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N prior[i] += pdf[index] * kernel[k] return prior
This illustrates the algorithm, but it runs very slow. SciPy provides a convolution routine `convolve()` in the `ndimage.filters` module. We need to shift the pdf by `offset` before convolution; `np.roll()` does that. The move and predict algorithm can be implemented with one line:

```python
convolve(np.roll(pdf, offset), kernel, mode='wrap')
```

FilterPy implements this with `discrete_bayes`' `predict()` function.
from filterpy.discrete_bayes import predict belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05] prior = predict(belief, offset=1, kernel=[.1, .8, .1]) book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
All of the elements are unchanged except the middle ones. The values in position 4 and 6 should be

$$(0.1 \times 0.05) + (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$

Position 5 should be

$$(0.1 \times 0.05) + (0.8 \times 0.55) + (0.1 \times 0.05) = 0.45$$

Let's ensure that it shifts the positions correctly for movements greater than one and for asymmetric kernels.
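We can confirm those values with the convolution loop from above, repeated here in plain Python so the snippet stands alone:

```python
# Self-contained copy of the loop-based convolution predict step.
def predict_move_convolution(pdf, offset, kernel):
    N, kN = len(pdf), len(kernel)
    width = (kN - 1) // 2
    prior = [0.] * N
    for i in range(N):
        for k in range(kN):
            index = (i + (width - k) - offset) % N
            prior[i] += pdf[index] * kernel[k]
    return prior

belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict_move_convolution(belief, offset=1, kernel=[.1, .8, .1])
print([round(p, 2) for p in prior])
# [0.05, 0.05, 0.05, 0.05, 0.1, 0.45, 0.1, 0.05, 0.05, 0.05]
```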
prior = predict(belief, offset=3, kernel=[.05, .05, .6, .2, .1]) book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
The position was correctly shifted by 3 positions and we give more weight to the likelihood of an overshoot vs an undershoot, so this looks correct.

Make sure you understand what we are doing. We are making a prediction of where the dog is moving, and convolving the probabilities to get the prior.

If we weren't using probabilities we would use this equation that I gave earlier:

$$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$

The prior, our prediction of where the dog will be, is the amount the dog moved plus his current position. The dog was at 10, he moved 5 meters, so he is now at 15 m. It couldn't be simpler. But we are using probabilities to model this, so our equation is:

$$ \bar{ \mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$

We are *convolving* the current probabilistic position estimate with a probabilistic estimate of how much we think the dog moved. It's the same concept, but the math is slightly different. $\mathbf x$ is bold to denote that it is an array of numbers.

## Integrating Measurements and Movement Updates

The problem of losing information during a prediction may make it seem as if our system would quickly devolve into having no knowledge. However, each prediction is followed by an update where we incorporate the measurement into the estimate. The update improves our knowledge. The output of the update step is fed into the next prediction. The prediction degrades our certainty. That is passed into another update, where certainty is again increased.

Let's think about this intuitively. Consider a simple case - you are tracking a dog while he sits still. During each prediction you predict he doesn't move.
Your filter quickly *converges* on an accurate estimate of his position. Then the microwave in the kitchen turns on, and he goes streaking off. You don't know this, so at the next prediction you predict he is in the same spot. But the measurements tell a different story. As you incorporate the measurements your belief will be smeared along the hallway, leading towards the kitchen. On every epoch (cycle) your belief that he is sitting still will get smaller, and your belief that he is inbound towards the kitchen at a startling rate of speed increases.

That is what intuition tells us. What does the math tell us?

We have already programmed the update and predict steps. All we need to do is feed the result of one into the other, and we will have implemented a dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right one position each epoch. As in a real world application, we will start with no knowledge of his position by assigning equal probability to all positions.
from filterpy.discrete_bayes import update hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0]) prior = np.array([.1] * 10) likelihood = lh_hallway(hallway, z=1, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
After the first update we have assigned a high probability to each door position, and a low probability to each wall position.
kernel = (.1, .8, .1) prior = predict(posterior, 1, kernel) book_plots.plot_prior_vs_posterior(prior, posterior, True, ylim=(0,.5))
The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens with the next sensor reading.
likelihood = lh_hallway(hallway, z=1, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall.
prior = predict(posterior, 1, kernel) likelihood = lh_hallway(hallway, z=0, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
This is exciting! We have a very prominent bar at position 2 with a value of around 35%. It is over twice the value of any other bar in the plot, and is about 4% larger than our last plot, where the tallest bar was around 31%. Let's see one more cycle.
prior = predict(posterior, 1, kernel) likelihood = lh_hallway(hallway, z=0, z_prob=.75) posterior = update(likelihood, prior) book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
I ignored an important issue. Earlier I assumed that we had a motion sensor for the predict step; then, when talking about the dog and the microwave I assumed that you had no knowledge that he suddenly began running. I mentioned that your belief that the dog is running would increase over time, but I did not provide any code for this. In short, how do we detect and/or estimate changes in the process model if we aren't directly measuring it?

For now I want to ignore this problem. In later chapters we will learn the mathematics behind this estimation; for now it is a large enough task just to learn this algorithm. It is profoundly important to solve this problem, but we haven't yet built enough of the mathematical apparatus that is required, and so for the remainder of the chapter we will ignore the problem by assuming we have a sensor that senses movement.

## The Discrete Bayes Algorithm

This chart illustrates the algorithm:
book_plots.predict_update_chart()
This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter.

The filter equations are:

$$\begin{aligned}
 \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{Predict Step} \\
\mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{Update Step}
\end{aligned}$$

$\mathcal L$ is the usual way to write the likelihood function, so I use that. The $\| \cdot \|$ notation denotes taking the norm. We need to normalize the product of the likelihood with the prior to ensure $\mathbf x$ is a probability distribution that sums to one.

We can express this in pseudocode.

**Initialization**

1. Initialize our belief in the state

**Predict**

1. Based on the system behavior, predict state for the next time step
2. Adjust belief to account for the uncertainty in prediction

**Update**

1. Get a measurement and associated belief about its accuracy
2. Compute how likely it is the measurement matches each state
3. Update state belief with this likelihood

When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ.

Algorithms in this form are sometimes called *predictor correctors*. We make a prediction, then correct it.

Let's animate this. First let's write functions to perform the filtering and to plot the results at any step. I've plotted the position of the doorways in black. Priors are drawn in orange, and the posterior in blue. I draw a thick vertical line to indicate where Simon really is. This is not an output of the filter - we know where Simon is only because we are simulating his movement.
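The update equation $\mathbf x = \|\mathcal L \cdot \bar{\mathbf x}\|$ is just an elementwise multiply followed by normalization, which is essentially what FilterPy's `discrete_bayes.update()` computes. A minimal plain-Python sketch (the likelihood values below are illustrative, not from the book's `lh_hallway()`):

```python
def update(likelihood, prior):
    """Elementwise product of likelihood and prior, normalized to sum to 1."""
    posterior = [l * p for l, p in zip(likelihood, prior)]
    total = sum(posterior)
    return [p / total for p in posterior]

prior = [0.1] * 10
# Illustrative likelihood: positions matching the sensor reading are
# 3x more likely (z_prob=0.75 gives a scale of 0.75/0.25 = 3).
likelihood = [3, 3, 1, 1, 1, 1, 1, 1, 3, 1]
posterior = update(likelihood, prior)
print(round(posterior[0], 4), round(sum(posterior), 4))  # 0.1875 1.0
```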
def discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway): posterior = np.array([.1]*10) priors, posteriors = [], [] for i, z in enumerate(measurements): prior = predict(posterior, 1, kernel) priors.append(prior) likelihood = lh_hallway(hallway, z, z_prob) posterior = update(likelihood, prior) posteriors.append(posterior) return priors, posteriors def plot_posterior(hallway, posteriors, i): plt.title('Posterior') book_plots.bar_plot(hallway, c='k') book_plots.bar_plot(posteriors[i], ylim=(0, 1.0)) plt.axvline(i % len(hallway), lw=5) def plot_prior(hallway, priors, i): plt.title('Prior') book_plots.bar_plot(hallway, c='k') book_plots.bar_plot(priors[i], ylim=(0, 1.0), c='#ff8015') plt.axvline(i % len(hallway), lw=5) def animate_discrete_bayes(hallway, priors, posteriors): def animate(step): step -= 1 i = step // 2 if step % 2 == 0: plot_prior(hallway, priors, i) else: plot_posterior(hallway, posteriors, i) return animate
Let's run the filter and animate it.
# change these numbers to alter the simulation kernel = (.1, .8, .1) z_prob = 1.0 hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0]) # measurements with no noise zs = [hallway[i % len(hallway)] for i in range(50)] priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway) interact(animate_discrete_bayes(hallway, priors, posteriors), step=IntSlider(value=1, max=len(zs)*2));
Now we can see the results. You can see how the prior shifts the position and reduces certainty, and the posterior stays in the same position and increases certainty as it incorporates the information from the measurement. I've made the measurement perfect with the line `z_prob = 1.0`; we will explore the effect of imperfect measurements in the next section. Finally, another thing to note is how accurate our estimate becomes when we are in front of a door, and how it degrades when in the middle of the hallway. This should make intuitive sense. There are only a few doorways, so when the sensor tells us we are in front of a door this boosts our certainty in our position. A long stretch of no doors reduces our certainty.

## The Effect of Bad Sensor Data

You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?

To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways, and run the algorithm on 6 correct measurements:
hallway = np.array([1, 0, 1, 0, 0]*2) kernel = (.1, .8, .1) prior = np.array([.1] * 10) zs = [1, 0, 1, 0, 0, 1] z_prob = 0.75 priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway) interact(animate_discrete_bayes(hallway, priors, posteriors), step=IntSlider(value=12, max=len(zs)*2));
We have identified the likely cases of having started at position 0 or 5, because we saw this sequence of doors and walls: 1, 0, 1, 0, 0. Now I inject a bad measurement. The next measurement should be 0, but instead we get a 1:
measurements = [1, 0, 1, 0, 0, 1, 1] priors, posteriors = discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway); plot_posterior(hallway, posteriors, 6)
That one bad measurement has significantly eroded our knowledge. Now let's continue with a series of correct measurements.
with figsize(y=5.5): measurements = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0] for i, m in enumerate(measurements): likelihood = lh_hallway(hallway, z=m, z_prob=.75) posterior = update(likelihood, prior) prior = predict(posterior, 1, kernel) plt.subplot(5, 2, i+1) book_plots.bar_plot(posterior, ylim=(0, .4), title=f'step {i+1}') plt.tight_layout()
We quickly filtered out the bad sensor reading and converged on the most likely positions for our dog.

## Drawbacks and Limitations

Do not be misled by the simplicity of the examples I chose. This is a robust and complete filter, and you may use the code in real world solutions. If you need a multimodal, discrete filter, this filter works.

With that said, this filter is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book.

The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of an array we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.

The second problem is that the filter is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. A 100 meter hallway requires 10,000 positions to model the hallway to 1 cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. A 100x100 m$^2$ courtyard requires 100,000,000 bins to get 1 cm accuracy.

A third problem is that the filter is multimodal. In the last example we ended up with strong beliefs that the dog was in position 4 or 9. This is not always a problem.
Particle filters, which we will study later, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, and 30% sure you are on Willow Avenue.

A fourth problem is that it requires a measurement of the change in state. We need a motion sensor to detect how much the dog moves. There are ways to work around this problem, but it would complicate the exposition of this chapter, so, given the aforementioned problems, I will not discuss it further.

With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues.

## Tracking and Control

We have been passively tracking an autonomously moving object. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go there. But train tracks and robot motors are imperfect.
Wheel slippage and imperfect motors mean that the robot is unlikely to travel to exactly the position you command. There is more than one robot, and we need to know where they all are so we do not cause them to crash.

So we add sensors. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we count 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. We can use the code from the previous section to track our robot since magnet counting is very similar to doorway sensing.

But we are not done. We've learned to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? We know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. If I send the command 'move left 1 unit' I expect that in one second from now the robot will be 1 unit to the left of where it is now. This is a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. Wheels and motors are imperfect. The robot might end up 0.9 units away, or maybe 1.2 units.

Now the entire solution is clear. We assumed that the dog kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on an assumption about behavior, we will feed in the command that we sent to the robot! In other words, when we call `predict()` we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement.
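The book defines `predict()` earlier in the chapter; as a rough illustrative sketch of the idea (the function name `predict_move` and the circular-track assumption are mine, not the book's), the predict step shifts the belief by the commanded move and spreads it with the movement-error kernel:

```python
import numpy as np

def predict_move(belief, move, kernel):
    """Shift the belief by the commanded move, then spread it with the
    movement-error kernel. Positions wrap around a circular track."""
    n = len(belief)
    width = len(kernel) // 2
    prior = np.zeros(n)
    for i in range(n):
        for k, p in enumerate(kernel):
            offset = k - width
            prior[i] += belief[(i - move - offset) % n] * p
    return prior

belief = np.zeros(10)
belief[3] = 1.0                               # robot known to be at position 3
prior = predict_move(belief, 4, [0.1, 0.8, 0.1])
# most of the probability lands on 3 + 4 = 7, with spillover to 6 and 8
```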
Simulating the Train Behavior

We need to simulate an imperfect train. When we command it to move it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value.
class Train(object): def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9): self.track_len = track_len self.pos = 0 self.kernel = kernel self.sensor_accuracy = sensor_accuracy def move(self, distance=1): """ move in the specified direction with some small chance of error""" self.pos += distance # insert random movement error according to kernel r = random.random() s = 0 offset = -(len(self.kernel) - 1) / 2 for k in self.kernel: s += k if r <= s: break offset += 1 self.pos = int((self.pos + offset) % self.track_len) return self.pos def sense(self): pos = self.pos # insert random sensor error if random.random() > self.sensor_accuracy: if random.random() > 0.5: pos += 1 else: pos -= 1 return pos
_____no_output_____
CC-BY-4.0
02-Discrete-Bayes.ipynb
anti-destiny/Kalman-and-Bayesian-Filters-in-Python
With that we are ready to write the filter. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say 10,000, with the magnet pattern repeated every 10 units. A length of 10 makes it easier to plot and inspect.
def train_filter(iterations, kernel, sensor_accuracy, 
                 move_distance, do_print=True):
    track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    prior = np.array([.9] + [0.01]*9)
    normalize(prior)
    # copy, not a view, so later reassignments don't alias the prior
    posterior = prior.copy()
    
    robot = Train(len(track), kernel, sensor_accuracy)
    for i in range(iterations):
        # move the robot
        robot.move(distance=move_distance)

        # perform prediction
        prior = predict(posterior, move_distance, kernel)       

        # and update the filter
        m = robot.sense()
        likelihood = lh_hallway(track, m, sensor_accuracy)
        posterior = update(likelihood, prior)
        index = np.argmax(posterior)

        if do_print:
            print(f'time {i}: pos {robot.pos}, sensed {m}, at position {track[robot.pos]}')
            conf = posterior[index] * 100
            print(f'        estimated position is {index} with confidence {conf:.4f}%:')

    book_plots.bar_plot(posterior)
    if do_print:
        print()
        print('final position is', robot.pos)
        index = np.argmax(posterior)
        print(f'Estimated position is {index} with confidence {posterior[index]*100:.4f}%:')
Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works, make sure you read through it carefully to solidify your understanding.
import random random.seed(3) np.set_printoptions(precision=2, suppress=True, linewidth=60) train_filter(4, kernel=[1.], sensor_accuracy=.999, move_distance=4, do_print=True)
time 0: pos 4, sensed 4, at position 4 estimated position is 4 with confidence 99.9900%: time 1: pos 8, sensed 8, at position 8 estimated position is 8 with confidence 100.0000%: time 2: pos 2, sensed 2, at position 2 estimated position is 2 with confidence 100.0000%: time 3: pos 6, sensed 6, at position 6 estimated position is 6 with confidence 100.0000%: final position is 6 Estimated position is 6 with confidence 100.0000%:
We can see that the code was able to perfectly track the robot, so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors.
random.seed(5) train_filter(4, kernel=[.1, .8, .1], sensor_accuracy=.9, move_distance=4, do_print=True)
time 0: pos 4, sensed 4, at position 4 estimated position is 4 with confidence 96.0390%: time 1: pos 8, sensed 9, at position 8 estimated position is 9 with confidence 52.1180%: time 2: pos 3, sensed 3, at position 3 estimated position is 3 with confidence 88.3993%: time 3: pos 7, sensed 8, at position 7 estimated position is 8 with confidence 49.3174%: final position is 7 Estimated position is 8 with confidence 49.3174%:
There was a sensing error at time 1, but we are still quite confident in our position. Now let's run a very long simulation and see how the filter responds to errors.
with figsize(y=5.5): for i in range (4): random.seed(3) plt.subplot(221+i) train_filter(148+i, kernel=[.1, .8, .1], sensor_accuracy=.8, move_distance=4, do_print=False) plt.title (f'iteration {148 + i}')
House Prices: Advanced Regression Techniques

The goal of this project was to predict sales prices and practice feature engineering, random forests, and gradient boosting. The dataset was part of the [House Prices Kaggle Competition](https://www.kaggle.com/c/house-prices-advanced-regression-techniques).

Table of Contents
* [1 Summary](#1-Summary)
* [2 Introduction](#2-Introduction)
* [3 Loading & Exploring the Data Structure](#3-Loading-&-Exploring-the-Data-Structure)
  * [3.1 Loading Required Libraries and Reading the Data into Python](#3.1-Loading-Required-Libraries-and-Reading-the-Data-into-Python)
  * [3.2 Data Structure](#3.2-Data-Structure)
* [4 Exploring the Variables](#4-Exploring-the-Variables)
  * [4.1 Exploring the Response Variable: SalePrice](#4.1-Exploring-the-Response-Variable:-SalePrice)
  * [4.2 Log-Transformation of the Response Variable](#4.2-Log-Transformation-of-the-Response-Variable)
* [5 Data Imputation](#5-Data-Imputation)
  * [5.1 Completeness of the Data](#5.1-Completeness-of-the-Data)
  * [5.2 Impute the Missing Data](#5.2-Impute-the-Missing-Data)
    * [5.2.1 Missing Values Corresponding to Lack of Specific Feature](#5.2.1-Missing-Values-Corresponding-to-Lack-of-Specific-Feature)
    * [5.2.2 Mode Imputation: Replacing Missing Values with Most Frequent Value](#5.2.2-Mode-Imputation:-Replacing-Missing-Values-with-Most-Frequent-Value)
* [6 Feature Engineering](#6-Feature-Engineering)
  * [6.1 Mixed Conditions](#6.1-Mixed-Conditions)
  * [6.2 Mixed Exterior](#6.2-Mixed-Exterior)
  * [6.3 Total Square Feet](#6.3-Total-Square-Feet)
  * [6.4 Total Number of Bathrooms](#6.4-Total-Number-of-Bathrooms)
  * [6.5 Binning the Neighbourhoods](#6.5-Binning-the-Neighbourhoods)
* [7 LotFrontage Imputation](#7-LotFrontage-Imputation)
  * [7.1 LotFrontage Data Structure](#7.1-LotFrontage-Data-Structure)
  * [7.2 Outlier Detection & Removal](#7.2-Outlier-Detection-&-Removal)
  * [7.3 Determining Relevant Variables of LotFrontage](#7.3-Determining-Relevant-Variables-of-LotFrontage)
  * [7.4 LotFrontage Model Building and Evaluation](#7.4-LotFrontage-Model-Building-and-Evaluation)
* [8 Preparing the Data for Modelling](#8-Preparing-the-Data-for-Modelling)
  * [8.1 Removing Outliers](#8.1-Removing-Outliers)
  * [8.2 Correlation Between Numeric Predictors](#8.2-Correlation-Between-Numeric-Predictors)
  * [8.3 Label Encoding](#8.3-Label-Encoding)
  * [8.4 Skewness & Normalization of Numeric Variables](#8.4-Skewness-&-Normalization-of-Numeric-Variables)
  * [8.5 One Hot Encoding the Categorical Variables](#8.5-One-Hot-Encoding-the-Categorical-Variables)
* [9 SalePrice Modelling](#9-SalePrice-Modelling)
  * [9.1 Obtaining Final Train and Test Sets](#9.1-Obtaining-Final-Train-and-Test-Sets)
  * [9.2 Defining a Cross Validation Strategy](#9.2-Defining-a-Cross-Validation-Strategy)
  * [9.3 Lasso Regression Model](#9.3-Lasso-Regression-Model)
  * [9.4 Ridge Regression Model](#9.4-Ridge-Regression-Model)
  * [9.5 XGBoost Model](#9.5-XGBoost-Model)
  * [9.6 Ensemble Model](#9.6-Ensemble-Model)

Ensemble Model Accuracy: RMSE = 0.12220

Competition Description

> Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence.
>
> With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, this competition challenges you to predict the final price of each home.

Competition Evaluation

As part of a Kaggle competition, the accuracy of the predicted sale prices was evaluated on the [Root-Mean-Squared-Error (RMSE)](https://en.wikipedia.org/wiki/Root-mean-square_deviation) between the logarithm of the predicted value and the logarithm of the observed sales price.

1 Summary

I started this competition by focusing on getting a thorough understanding of the dataset. Particular attention was paid to imputing the missing values within the dataset.
The EDA process is detailed as well as visualized. In this project, I created a predictive model that has been trained on data collected from homes in Ames, Iowa. Three algorithms were used, and their validation set RMSE and test set RMSE are listed below:

| Regression Model | Validation RMSE | Test RMSE |
|------------------|-----------------|-----------|
| Ridge            | 0.1130          | 0.12528   |
| Lasso            | 0.1125          | 0.12679   |
| XGBoost          | 0.1238          | 0.12799   |
| Ensemble         |                 | 0.12220   |

The Ridge regression model performed the best as a single model, likely due to the high multicollinearity. However, combining it with the Lasso and XGBoost regression models resulted in a higher prediction accuracy and a lower RMSE (0.12220 vs 0.12528).

2 Introduction

The dataset used for this project is the [Ames Housing dataset](https://amstat.tandfonline.com/doi/abs/10.1080/10691898.2011.11889627) that was compiled by Dean De Cock for use in data science education. It is an alternative to the popular but older [Boston Housing dataset](http://lib.stat.cmu.edu/datasets/boston). The Ames Housing dataset is also used in the [Advanced Regression Techniques challenge](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) on the Kaggle website. Competitions like this are a great way to improve my skills and measure my progress as a data scientist. Kaggle describes the competition as follows:

> Ask a home buyer to describe their dream house, and they probably won't begin with the height of the basement ceiling or the proximity to an east-west railroad. But this playground competition's dataset proves that much more influences price negotiations than the number of bedrooms or a white-picket fence.
>
> With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, this competition challenges you to predict the final price of each home.
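The competition metric described above (RMSE between the logarithm of the predicted value and the logarithm of the observed price) can be written in a few lines; a sketch with made-up prices, not the competition's own scoring code:

```python
import numpy as np

def rmsle(y_true, y_pred):
    """RMSE between the logs of predicted and observed sale prices."""
    return np.sqrt(np.mean((np.log(y_pred) - np.log(y_true)) ** 2))

# A perfect prediction scores exactly 0
score = rmsle(np.array([200000.0, 150000.0]), np.array([200000.0, 150000.0]))
print(score)  # 0.0
```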
3 Loading & Exploring the Data Structure

3.1 Loading Required Libraries and Reading the Data into Python

Loading Python packages used in the project
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import missingno as msno from time import time from math import sqrt import statsmodels.api as sm from statsmodels.formula.api import ols import scipy.stats as st from scipy.special import boxcox1p from sklearn.cluster import KMeans from sklearn import svm from sklearn.metrics import mean_absolute_error, mean_squared_error from sklearn.model_selection import train_test_split, KFold, cross_val_score, GridSearchCV from sklearn.preprocessing import RobustScaler, LabelEncoder, StandardScaler from sklearn.linear_model import Lasso, Ridge from sklearn.pipeline import make_pipeline from xgboost.sklearn import XGBRegressor %matplotlib inline
Apache-2.0
GAN Model/eda-to-ensemble-model-lasso-ridge-xgboost.ipynb
MrTONYCHAN/xyz
Now, we read the CSVs into Python as dataframes.
train = pd.read_csv('../input/train.csv') test = pd.read_csv('../input/test.csv')
3.2 Data Structure

In total, there are 81 columns/variables in the train dataset, including the response variable (SalePrice). I am only displaying a subset of the variables, as all of them will be discussed in more detail throughout the notebook. The train dataset consists of character and integer variables. Many of these variables are ordinal factors, despite being represented as character or integer variables. These will require cleaning and/or feature engineering later.
print("Dimensions of Train Dataset:" + str(train.shape)) print("Dimensions of Test Dataset:" + str(test.shape)) train.iloc[:,0:10].info()
Next, we are going to define a few variables that will be used in later analyses as well as being required for the submission file.
y_train = train['SalePrice'] test_id = test['Id'] ntrain = train.shape[0] ntest = test.shape[0]
Lastly, we are going to merge the train and test datasets to explore the data as well as impute any missing values.
all_data = pd.concat((train, test), sort=True).reset_index(drop=True) all_data['Dataset'] = np.repeat(['Train', 'Test'], [ntrain, ntest], axis=0) all_data.drop('Id', axis=1,inplace=True)
4 Exploring the Variables

4.1 Exploring the Response Variable: SalePrice

The probability distribution plot shows that the sale prices are right skewed. This is to be expected, as few people can afford very expensive houses.
sns.set_style('whitegrid') sns.distplot(all_data['SalePrice'][~all_data['SalePrice'].isnull()], axlabel="Normal Distribution", fit=st.norm, fit_kws={"color":"red"}) plt.title('Distribution of Sales Price in Dollars') (mu, sigma) = st.norm.fit(train['SalePrice']) plt.legend(['Normal Distribution \n ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best', fancybox=True) plt.show() st.probplot(all_data['SalePrice'][~all_data['SalePrice'].isnull()], plot=plt) plt.show()
Linear models tend to work better with normally distributed data. As such, we need to transform the response variable to make it more normally distributed. 4.2 Log-Transformation of the Response Variable
all_data['SalePrice'] = np.log1p(all_data['SalePrice'])

sns.distplot(all_data['SalePrice'][~all_data['SalePrice'].isnull()], axlabel="Normal Distribution", fit=st.norm, fit_kws={"color":"red"})
plt.title('Distribution of Transformed Sales Price in Dollars')
# fit the normal distribution to the *transformed* prices, not the raw ones
(mu, sigma) = st.norm.fit(all_data['SalePrice'][~all_data['SalePrice'].isnull()])
plt.legend(['Normal Distribution \n ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best', fancybox=True)
plt.show()

st.probplot(all_data['SalePrice'][~all_data['SalePrice'].isnull()], plot=plt)
plt.show()
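Because the model is trained on `log1p(SalePrice)`, final predictions must be mapped back to dollars with the inverse transform `np.expm1` before submission. A quick round-trip sanity check with made-up prices:

```python
import numpy as np

prices = np.array([100000.0, 250000.0])
logged = np.log1p(prices)      # ln(1 + price), the scale used for modelling
restored = np.expm1(logged)    # inverse transform, back to dollars
print(np.allclose(restored, prices))  # True
```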
The skew is largely corrected, and the distribution of the log-transformed sale prices appears much closer to normal.

5 Data Imputation

5.1 Completeness of the Data

We first need to find which variables contain missing values.
cols_with_missing_values = all_data.isnull().sum().sort_values(ascending=False) display(pd.DataFrame(cols_with_missing_values[cols_with_missing_values[cols_with_missing_values > 0].index], columns=["Number of Missing Values"])) cols_with_missing_values = all_data.isnull().sum().sort_values(ascending=False) cols_with_missing_values = all_data[cols_with_missing_values[cols_with_missing_values > 0].index] msno.bar(cols_with_missing_values) plt.show()
5.2 Impute the Missing Data 5.2.1 Missing Values Corresponding to Lack of Specific Feature * PoolQC: data description of the variables states that NA represents "no pool". This makes sense given the vast number of missing values (>99%) and that a majority of houses do not have a pool.
all_data['PoolQC'].replace(np.nan, 'None', regex=True, inplace=True)
* MiscFeature: data description of the variables states that NA represents "no miscellaneous feature".
all_data['MiscFeature'].replace(np.nan, 'None', regex=True, inplace=True)
* Alley: data description of the variables states that NA represents "no alley access".
all_data['Alley'].replace(np.nan, 'None', regex=True, inplace=True)
* Fence: data description of the variables states that NA represents "no fence".
all_data['Fence'].replace(np.nan, 'None', regex=True, inplace=True)
* FireplaceQU: data description of the variables states that NA represents "no fireplace".
all_data['FireplaceQu'].replace(np.nan, 'None', regex=True, inplace=True)
* GarageType, GarageFinish, GarageQual, and GarageCond: Missing values are likely due to lack of a garage. Replacing missing data with None.
garage_categorical = ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond'] for variable in garage_categorical: all_data[variable].replace(np.nan, 'None', regex=True, inplace=True)
* GarageYrBlt, GarageCars, and GarageArea: Missing values are likely due to lack of a garage. Replacing missing data with 0 (since no garage means no cars in a garage).
garage_numeric = ['GarageYrBlt', 'GarageCars', 'GarageArea'] for variable in garage_numeric: all_data[variable].replace(np.nan, 0, regex=True, inplace=True)
* BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1, and BsmtFinType2: Missing values are likely due to lack of a basement. Replacing missing data with None.
basement_categorical = ['BsmtQual','BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'] for variable in basement_categorical: all_data[variable].replace(np.nan, 'None', regex=True, inplace=True)
* BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF, BsmtFullBath, and BsmtHalfBath: Missing values are likely due to lack of a basement. Replacing missing data with 0.
basement_numeric = ['BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'BsmtFullBath', 'BsmtHalfBath'] for variable in basement_numeric: all_data[variable].replace(np.nan, 0, regex=True, inplace=True)
* MasVnrType and MasVnrArea: Missing values are likely due to lack of masonry veneer. We will replace the missing values with None for the type and 0 for the area.
all_data['MasVnrType'].replace(np.nan, 'None', regex=True, inplace=True) all_data['MasVnrArea'].replace(np.nan, 0, regex=True, inplace=True)
* Functional: data description of the variables states that NA represents "typical".
all_data['Functional'].replace(np.nan, 'Typ', regex=True, inplace=True)
5.2.2 Mode Imputation: Replacing Missing Values with Most Frequent Value

Using mode imputation, we replace the missing values of a categorical variable with the mode of the non-missing cases of that variable. While it has the advantage of being fast, it comes at the cost of a reduction in variance within the dataset. Due to the low number of missing values imputed using this method, the bias introduced into the mean and standard deviation, as well as into correlations with other variables, is minimal.

First, we define a function `mode_impute_and_plot` to simplify the process of visualizing a categorical variable and replacing its missing values with the most frequent value.
def mode_impute_and_plot(variable): print('# of missing values: ' + str(all_data[variable].isna().sum())) plt.figure(figsize=(8,4)) ax = sns.countplot(all_data[variable]) ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right") plt.tight_layout() plt.show() all_data[variable].replace(np.nan, all_data[variable].mode()[0], regex=True, inplace=True)
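A minimal illustration of the imputation the helper performs (plotting aside), on a toy column:

```python
import numpy as np
import pandas as pd

s = pd.Series(['RL', 'RL', 'RM', np.nan])
s = s.fillna(s.mode()[0])   # mode()[0] is the most frequent value, here 'RL'
print(s.tolist())           # ['RL', 'RL', 'RM', 'RL']
```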
Now we can proceed to replace the missing values for the following variables: MSZoning, Utilities, Electrical, Exterior1st and Exterior2nd, KitchenQual, and SaleType.

* MSZoning: "RL" is by far the most common value. Missing values will be replaced with "RL".
mode_impute_and_plot('MSZoning')
* Utilities: With the exception of one "NoSeWa" value, all records for this variable are "AllPub". Since "NoSeWa" is only in the training set, this feature will not help in predictive modelling. As such, we can safely remove it.
mode_impute_and_plot('Utilities') all_data = all_data.drop('Utilities', axis=1)
* Electrical: Only one missing value, replace it with "SBrkr", the most common value.
mode_impute_and_plot('Electrical')
* Exterior1st and Exterior2nd: There is only one missing value for both Exterior 1 & 2. Missing values will be replaced by the most common value.
mode_impute_and_plot('Exterior1st') mode_impute_and_plot('Exterior2nd')
* KitchenQual: Only one missing value, replace it with "TA", the most common value.
mode_impute_and_plot('KitchenQual')
* SaleType: Replace missing values with "WD", the most common value.
mode_impute_and_plot('SaleType')
Are there any remaining missing values?
cols_with_missing_values = all_data.isnull().sum().sort_values(ascending=False) display(pd.DataFrame(cols_with_missing_values[cols_with_missing_values[cols_with_missing_values > 0].index], columns=["Number of Missing Values"]))
Due to its numeric nature and the large number of missing values, the LotFrontage variable will be imputed separately using an SVM algorithm (see Section [7 LotFrontage Imputation](#7-LotFrontage-Imputation)). The remaining variables are all complete! Now to move on to feature engineering.

6 Feature Engineering

6.1 Mixed Conditions

In order to simplify and boost the accuracy of the predictive models, we will merge the two conditions into one variable: MixedConditions.

The data description states:
* Condition1 represents proximity to various conditions.
* Condition2 represents proximity to various conditions (if more than one is present).

If a property is not near any condition, both variables are recorded as "Norm". When only one condition is present, the designation of "Norm" to Condition1 or Condition2 is strictly alphabetical.

For example, if a property is in proximity to a feeder street ("Feedr") and no other condition, then the data would appear as follows:

Condition1 | Condition2
-----------|-----------
Feedr | Norm

However, if a property is within 200' of the East-West Railroad ("RRNe") and no other condition, then the data would appear as follows:

Condition1 | Condition2
-----------|-----------
Norm | RRNe

Once we merge Condition1 & Condition2 into the MixedConditions variable, we will remove them from the analysis.
all_data['MixedConditions'] = all_data['Condition1'] + ' - ' + all_data['Condition2'] all_data.drop(labels=['Condition1', 'Condition2'], axis=1, inplace=True)
6.2 Mixed Exterior

The Exterior1st and Exterior2nd features are similar to the Condition features we merged and removed above. Properties with multiple types of exterior covering the house are assigned to Exterior1st or Exterior2nd alphabetically. As such, we will conduct the same process to merge the two columns into a single MixedExterior variable and remove them from the analysis.
all_data['MixedExterior'] = all_data['Exterior1st'] + ' - ' + all_data['Exterior2nd'] all_data.drop(labels=['Exterior1st', 'Exterior2nd'], axis=1, inplace=True)
6.3 Total Square Feet

One of the important factors that people consider when buying a house is the total living space in square feet. Since the total square feet is not explicitly listed, we will add a new variable by adding up the square footage of the basement, first floor, and second floor.
SF_df = all_data[['TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'SalePrice']] SF_df = SF_df[~SF_df['SalePrice'].isnull()] SF_vars = list(SF_df) del SF_vars[-1] SF_summary = [] for SF_type in SF_vars: corr_val = np.corrcoef(SF_df[SF_type], SF_df['SalePrice'])[1][0] SF_summary.append(corr_val) pd.DataFrame([SF_summary], columns=SF_vars, index=['SalePrice Correlation']) all_data['TotalSF'] = all_data['TotalBsmtSF'] + all_data['1stFlrSF'] + all_data['2ndFlrSF'] plt.figure(figsize=(10,6)) ax = sns.regplot(all_data['TotalSF'], all_data['SalePrice'], line_kws={'color': 'red'}) ax.set(ylabel='ln(SalePrice)') plt.show() print("Pearson Correlation Coefficient: %.3f" % (np.corrcoef(all_data['TotalSF'].iloc[:ntrain], train['SalePrice']))[1][0])
There is a very strong correlation (r = 0.78) between TotalSF and SalePrice, with the exception of two outliers. These outliers should be removed to increase the accuracy of our model.

6.4 Total Number of Bathrooms

There are 4 bathroom variables. Individually, these variables are not very important. Together, however, they are likely to become a strong predictor.

A full bath is made up of four parts: a sink, a shower, a bathtub, and a toilet. A half-bath, also called a powder room or guest bath, only has a sink and a toilet. As such, half-bathrooms will be given half the weight of a full bath.
bath_df = all_data[['BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'SalePrice']]
bath_df = bath_df[~bath_df['SalePrice'].isnull()]

bath_vars = list(bath_df)
del bath_vars[-1]

bath_summary = []
for bath_type in bath_vars:
    corr_val = np.corrcoef(bath_df[bath_type], bath_df['SalePrice'])[1][0]
    bath_summary.append(corr_val)
    
pd.DataFrame([bath_summary], columns=bath_vars, index=['SalePrice Correlation'])

# half-baths carry half the weight of a full bath
all_data['TotalBath'] = all_data['BsmtFullBath'] + (all_data['BsmtHalfBath']*0.5) + all_data['FullBath'] + (all_data['HalfBath']*0.5)

plt.figure(figsize=(10,4))
sns.countplot(all_data['TotalBath'], color='grey')
plt.tight_layout()
plt.show()

plt.figure(figsize=(10,6))
ax = sns.regplot(all_data['TotalBath'].iloc[:ntrain], train['SalePrice'], line_kws={'color': 'red'})
ax.set(ylabel='ln(SalePrice)')
plt.show()

print("Pearson Correlation Coefficient: %.3f" % (np.corrcoef(all_data['TotalBath'].iloc[:ntrain], train['SalePrice']))[1][0])
We can see a high positive correlation (r = 0.67) between TotalBath and SalePrice. The new variable's correlation is much higher than that of any of the four original bathroom variables.

6.5 Binning the Neighbourhoods
neighborhood_prices = all_data.iloc[:ntrain].copy() neighborhood_prices['SalePrice'] = np.exp(all_data['SalePrice']) neighborhood_prices = neighborhood_prices[['Neighborhood', 'SalePrice']].groupby('Neighborhood').median().sort_values('SalePrice') plt.figure(figsize=(15,7)) ax = sns.barplot(x= neighborhood_prices.index, y=neighborhood_prices['SalePrice'], color='grey') ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right") plt.tight_layout() plt.show()
In order to bin the neighbourhoods into appropriate clusters, we will use K-Means clustering. K-Means is an unsupervised machine learning algorithm that groups a dataset into a user-specified number (k) of clusters. One potential issue with the algorithm is that it will cluster the data into k clusters even if k is not the right number of clusters to use.

Therefore, we need to identify the optimal number of clusters. To do so, we will use the Elbow Method. Briefly, the idea is to run K-Means for a range of k values, calculate the sum of squared errors (SSE) for each, and plot the SSE against k. Each additional cluster lowers the SSE, but eventually with significantly diminished returns. The goal is to choose a small value of k with a low SSE, after which subsequent values of k exhibit diminishing returns. If the line plot looks like an arm, the "elbow" of the arm is the optimal k value. The plot below indicates that the optimal number of neighbourhood clusters is k = 3.
SS_distances = []
K = range(1,10)
for k in K:
    km = KMeans(n_clusters=k)
    km = km.fit(neighborhood_prices)
    SS_distances.append(km.inertia_)

plt.plot(K, SS_distances, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum of squared distances')
plt.title('Elbow Method For Optimal k')
plt.show()
Now let's see what the binned neighbourhoods look like.
neighborhood_prices['Cluster'] = KMeans(n_clusters=3).fit(neighborhood_prices).labels_

plt.figure(figsize=(15,7))
ax = sns.barplot(x=neighborhood_prices.index, y=neighborhood_prices['SalePrice'], hue=neighborhood_prices['Cluster'])
ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right")
plt.tight_layout()
plt.show()
Lastly, we have to create a new variable in the dataset based on the new cluster labels.
neighborhood_dict = dict(zip(neighborhood_prices.index, neighborhood_prices.Cluster))
all_data['Neighborhood_Class'] = all_data['Neighborhood']
all_data['Neighborhood_Class'].replace(neighborhood_dict, inplace=True)
7 LotFrontage Imputation

Due to the numeric nature and the sheer number of missing values in the LotFrontage column, we will impute the missing values using an SVM regressor. The first step is to split the dataset into two groups: the train dataset (train_LotFrontage) contains the complete records, while the test dataset (test_LotFrontage) contains the missing values. Next, we examine the structure of the newly created datasets. There are 486 missing values in the full dataset, or roughly 16% of the data.

7.1 LotFrontage Data Structure
train_LotFrontage = all_data[~all_data.LotFrontage.isnull()]
test_LotFrontage = all_data[all_data.LotFrontage.isnull()]

print("Dimensions of Train LotFrontage Dataset:" + str(train_LotFrontage.shape))
print("Dimensions of Test LotFrontage Dataset:" + str(test_LotFrontage.shape))
display(pd.DataFrame(all_data['LotFrontage'].describe()).transpose())
Now let's examine the distribution of LotFrontage values in the train dataset through a boxplot and a distribution plot. Several interesting observations appear:

* There is a cluster of low-LotFrontage properties, shown as a peak on the far left of the distribution plot. The boxplot indicates these values may be outliers, appearing as points to the left of the lower whisker.
* There is a long tail of high-LotFrontage properties. These values extend beyond the Q3 + 1.5·IQR range.

In other words, there are outliers on both ends of the LotFrontage distribution.
fig, ax = plt.subplots(1, 2, figsize=(16,4))
sns.boxplot(train_LotFrontage['LotFrontage'], ax=ax[0])
sns.distplot(train_LotFrontage['LotFrontage'], ax=ax[1], fit=st.norm, fit_kws={"color":"red"})
(mu, sigma) = st.norm.fit(train_LotFrontage['LotFrontage'])
plt.legend(['Normal Distribution \n ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best', fancybox=True)
plt.show()
7.2 Outlier Detection & Removal

Before we examine the correlations between LotFrontage and other variables, we should remove the outliers seen above. To do so, we will use the interquartile range (IQR) method, where values below Q1 − 1.5·IQR or above Q3 + 1.5·IQR are considered outliers. These outliers are then removed from the LotFrontage datasets.
def outlier_detection(data):
    Q1, Q3 = np.percentile(data, [25, 75])
    IQR = Q3 - Q1
    lower_cutoff = Q1 - (IQR * 1.5)
    upper_cutoff = Q3 + (IQR * 1.5)
    outliers = (data > upper_cutoff) | (data < lower_cutoff)
    return outliers

train_LotFrontage_no_outliers = train_LotFrontage[~outlier_detection(train_LotFrontage.LotFrontage)]

fig, ax = plt.subplots(1, 2, figsize=(16,4))
sns.boxplot(train_LotFrontage_no_outliers['LotFrontage'], ax=ax[0])
sns.distplot(train_LotFrontage_no_outliers['LotFrontage'], ax=ax[1], fit=st.norm, fit_kws={"color":"red"})
(mu, sigma) = st.norm.fit(train_LotFrontage_no_outliers['LotFrontage'])
plt.legend(['Normal Distribution \n ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best', fancybox=True)
plt.show()
Two new values not previously deemed outliers by the first iteration of the IQR method now appear as potential outliers in the boxplot. This is caused by the smaller IQR obtained after removing the first batch of outliers. As such, we will keep these values.

7.3 Determining Relevant Variables of LotFrontage

Now we can explore the relationship between the LotFrontage target variable and the other features. To confirm a relationship between these features and LotFrontage, we conduct an ANOVA (Analysis of Variance) test for statistical significance (in this case, p < 0.01). Using a type III ANOVA, we are confident the variables listed in the table below are correlated with LotFrontage.

* Note: Variables that begin with a number (1stFlrSF, 2ndFlrSF, and 3SsnPorch) cause a syntax error in the ols formula input. They are briefly renamed in the temp_train dataset for the purpose of the ANOVA.
temp_train = train_LotFrontage_no_outliers.copy()
temp_train.rename(columns={'1stFlrSF': 'X1stFlrSF', '2ndFlrSF': 'X2ndFlrSF', '3SsnPorch': 'X3SsnPorch'}, inplace=True)

mod = ols('LotFrontage ~ X1stFlrSF + X2ndFlrSF + X3SsnPorch + Alley + BedroomAbvGr + BldgType + BsmtCond + BsmtExposure + BsmtFinSF1 + BsmtFinSF2 + BsmtFinType1 + BsmtFinType2 + BsmtFullBath + BsmtHalfBath + BsmtQual + BsmtUnfSF + CentralAir + Electrical + EnclosedPorch + ExterCond + ExterQual + Fence + FireplaceQu + Fireplaces + Foundation + FullBath + Functional + GarageArea + GarageCars + GarageCond + GarageFinish + GarageQual + GarageType + GarageYrBlt + GrLivArea + HalfBath + Heating + HeatingQC + HouseStyle + KitchenAbvGr + KitchenQual + LandContour + LandSlope + LotArea + LotConfig + LotShape + LowQualFinSF + MSSubClass + MSZoning + MasVnrArea + MasVnrType + MiscFeature + MiscVal + MoSold + Neighborhood + OpenPorchSF + OverallCond + OverallQual + PavedDrive + PoolArea + PoolQC + RoofMatl + RoofStyle + SaleCondition + SaleType + ScreenPorch + Street + TotRmsAbvGrd + TotalBsmtSF + WoodDeckSF + YearBuilt + YearRemodAdd + YrSold + MixedConditions + MixedExterior + TotalSF + TotalBath + Neighborhood_Class', data=temp_train).fit()
aov_table = sm.stats.anova_lm(mod, typ=3)
display(aov_table[aov_table['PR(>F)'] < 0.01])
7.4 LotFrontage Model Building and Evaluation

Now let's build a model using the relevant variables selected above. We will be using a Support Vector Regressor. The first steps are to separate the target variable, LotFrontage, select the relevant variables, and dummify the categorical variables.
X = train_LotFrontage_no_outliers[aov_table[aov_table['PR(>F)'] < 0.01].index]
y = train_LotFrontage_no_outliers['LotFrontage']
X = pd.get_dummies(X)

transformer = StandardScaler().fit(X)
X = transformer.transform(X)
In order to determine the accuracy of our predictive model, we will create a validation dataset. This dataset has known values for the target LotFrontage variable, against which we can compare our model's predictions. Using the mean absolute error (MAE), we measure the difference between our predicted LotFrontage values and the true values.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1234)

clf = svm.SVR(kernel='rbf', C=100, gamma=0.001)
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print('Mean Absolute Error: %.3f' % mean_absolute_error(y_test, preds))
These results show that using the SVR model to impute LotFrontage gives an average error of less than 7 feet. We will now use the same SVR model to predict the unknown LotFrontage values in the test_LotFrontage dataset.
model_data = train_LotFrontage_no_outliers.copy()
model_data = model_data.append(test_LotFrontage)

y = model_data['LotFrontage']
model_data = model_data[['MSZoning', 'Alley', 'LotArea', 'LotShape', 'LotConfig', 'Neighborhood',
                         'MixedConditions', 'GarageType', 'GarageCars', 'GarageArea']]
model_data = pd.get_dummies(model_data)

model_X_train = model_data[~y.isnull()]
model_X_test = model_data[y.isnull()]
model_y_train = y[~y.isnull()]

transformer = StandardScaler().fit(model_X_train)
model_X_train = transformer.transform(model_X_train)
model_X_test = transformer.transform(model_X_test)

clf = svm.SVR(kernel='rbf', C=100, gamma=0.001)
clf.fit(model_X_train, model_y_train)
LotFrontage_preds = clf.predict(model_X_test)
Now that we have the newly predicted LotFrontage values, we can examine the distribution of predicted values relative to the known LotFrontage values in the training dataset. Both distributions have a mean around 70 feet with similar tail lengths on either end.
sns.distplot(model_y_train)
sns.distplot(LotFrontage_preds)
Lastly, we need to add the predicted values back into the original dataset.
all_data.loc[all_data.LotFrontage.isnull(), 'LotFrontage'] = LotFrontage_preds
8 Preparing the Data for Modelling

8.1 Removing Outliers

Although we could be more thorough in our outlier detection and removal, we are only going to remove the two outliers found in the TotalSF variable. These properties exhibited large total square footage with low SalePrice.
all_data.drop(all_data['TotalSF'].iloc[:ntrain].nlargest().index[:2], axis=0, inplace=True)
ntrain = train.shape[0] - 2
8.2 Correlation Between Numeric Predictors

We can see large correlations between many variables, suggesting a high level of multicollinearity. There are two options to consider:

1. Use a regression model that deals well with multicollinearity, such as a ridge regression.
2. Remove highly correlated predictors from the model.
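As an illustration of option 2, a common approach is to scan the upper triangle of the absolute correlation matrix and drop one column of each pair whose correlation exceeds a cutoff. This is a hypothetical sketch on toy data, not part of the notebook's pipeline; the column names and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.8):
    """Drop one member of every column pair with |r| > threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle (k=1 excludes the diagonal) so each
    # pair is inspected exactly once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Toy data: 'b' is nearly a copy of 'a', 'c' is independent noise.
rng = np.random.default_rng(0)
a = rng.normal(size=100)
demo = pd.DataFrame({'a': a,
                     'b': a + rng.normal(scale=0.01, size=100),
                     'c': rng.normal(size=100)})
print(list(drop_correlated(demo).columns))  # 'b' is dropped
```

In this notebook we take option 1 (ridge regularization) instead, so nothing is dropped from the real feature set.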
corr = all_data.corr()
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True

f, ax = plt.subplots(figsize=(15, 15))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, center=0, square=True, linewidths=.5)
plt.show()
8.3 Label Encoding

Many of the ordinal variables are stored as numeric values. We therefore first convert these columns to strings, and then label encode the ordinal categories.
num_to_str_columns = ['MSSubClass', 'OverallQual', 'OverallCond', 'MoSold', 'YrSold']
for col in num_to_str_columns:
    all_data[col] = all_data[col].astype('str')

cat_cols = ['OverallQual', 'OverallCond', 'ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure',
            'BsmtFinType1', 'BsmtFinType2', 'BsmtFinSF2', 'HeatingQC', 'BsmtFullBath', 'BsmtHalfBath',
            'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual', 'TotRmsAbvGrd',
            'Fireplaces', 'FireplaceQu', 'GarageFinish', 'GarageCars', 'GarageQual', 'GarageCond',
            'PoolQC', 'Fence', 'YearBuilt', 'YearRemodAdd', 'GarageYrBlt', 'MoSold', 'YrSold']
for col in cat_cols:
    label = LabelEncoder()
    label.fit(list(all_data[col].values))
    all_data[col] = label.transform(list(all_data[col].values))

print('Shape all_data: {}'.format(all_data.shape))
all_data.head()
8.4 Skewness & Normalization of Numeric Variables

Skewness is a measure of the asymmetry of a distribution, and can be used to quantify the extent to which a distribution differs from a normal one. A normal distribution has a skewness of 0; as a rule of thumb, if skewness is less than -1 or greater than 1, the distribution is highly skewed. To account for skewness, we transform the highly skewed variables toward normality with a log transformation, defining highly skewed data as variables with an absolute skewness greater than 1. This method is similar to the approach used to normalize the [SalePrice Response Variable](4.2-Log-Transformation-of-the-Response-Variable), except we use log(1 + x) to avoid taking the log of zero.
numeric_feats = ['LotFrontage', 'LotArea', 'BsmtFinSF1', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
                 'LowQualFinSF', 'GrLivArea', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch',
                 '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'TotalSF']

skewed_feats = all_data[numeric_feats].apply(lambda x: st.skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew Before Transformation': skewed_feats})
skewness = skewness[abs(skewness) > 1].dropna(axis=0)

skewed_features = skewness.index
for feat in skewed_features:
    all_data[feat] = np.log1p(all_data[feat])  # log1p already computes log(1 + x)

skewed_feats = all_data[skewed_features].apply(lambda x: st.skew(x.dropna())).sort_values(ascending=False)
skewness['Skew After Transformation'] = skewed_feats
skewness
8.5 One-Hot Encoding the Categorical Variables

The last step needed to prepare the data is to convert all categorical predictor variables into a form usable by machine learning algorithms. This process is known as one-hot encoding: each non-ordinal factor receives its own set of columns of 1's and 0's, as required by most ML algorithms.
all_data = pd.get_dummies(all_data)
print(all_data.shape)
all_data.head(3)
9 SalePrice Modelling

Now that the data is correctly processed, we are ready to begin building our predictive models.

9.1 Obtaining Final Train and Test Sets
final_y_train = all_data['SalePrice'][~all_data['SalePrice'].isnull()]
final_X_train = all_data[all_data['Dataset_Train'] == 1].drop(['Dataset_Train', 'Dataset_Test', 'SalePrice'], axis=1)
final_X_test = all_data[all_data['Dataset_Test'] == 1].drop(['Dataset_Train', 'Dataset_Test', 'SalePrice'], axis=1)
9.2 Defining a Cross Validation Strategy

We will use the [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function of Sklearn and calculate the Root-Mean-Squared Error (RMSE) as a measure of accuracy.
n_folds = 10

def rmse_cv(model):
    # Pass the KFold object itself so the shuffled splits are actually used;
    # passing get_n_splits() would hand cross_val_score a plain integer.
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=1234)
    rmse = np.sqrt(-cross_val_score(model, final_X_train.values, final_y_train,
                                    scoring="neg_mean_squared_error", cv=kf))
    return rmse
9.3 Lasso Regression Model

This model may be sensitive to outliers, so we need to make it more robust. This is done by applying the RobustScaler function in a pipeline.
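A minimal sketch (on toy data, not from the notebook) of why RobustScaler helps here: it centers each feature on the median and scales by the IQR, so a single extreme outlier barely moves the scaled values of the remaining points, whereas StandardScaler's mean and standard deviation are dragged by the outlier.

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # one extreme outlier

# (x - median) / IQR: median = 3, IQR = 2, so the inliers land on
# [-1, -0.5, 0, 0.5] and the outlier is pushed far out on its own.
robust = RobustScaler().fit_transform(X)

# (x - mean) / std: the outlier inflates both statistics, squashing
# the four inliers into a narrow band around -0.5.
standard = StandardScaler().fit_transform(X)

print(robust.ravel())
print(standard.ravel())
```

With RobustScaler in the pipeline, the Lasso's coefficients are fitted on features whose scale reflects the bulk of the data rather than its extremes.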
lasso = make_pipeline(RobustScaler(), Lasso(alpha = 0.0005, random_state = 1234))
Next, we tested multiple alpha values and compared their accuracy using the rmse_cv function above. The results below show that 0.00025 is the optimal value of alpha for the Lasso regression.
lasso_alpha = [1, 0.5, 0.25, 0.1, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001, 0.0005, 0.00025, 0.0001]
lasso_rmse = []
for value in lasso_alpha:
    lasso = make_pipeline(RobustScaler(), Lasso(alpha=value, max_iter=3000, random_state=1234))
    lasso_rmse.append(rmse_cv(lasso).mean())

lasso_score_table = pd.DataFrame(lasso_rmse, lasso_alpha, columns=['RMSE'])
display(lasso_score_table.transpose())

plt.semilogx(lasso_alpha, lasso_rmse)
plt.xlabel('alpha')
plt.ylabel('score')
plt.show()

print("\nLasso Score: {:.4f} (alpha = {:.5f})\n".format(min(lasso_score_table['RMSE']), lasso_score_table.idxmin()[0]))
Using the newly defined alpha value, we can fit the model and predict the missing values of SalePrice. The predictions are then formatted in an appropriate layout for submission to Kaggle.
lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.00025, random_state=1234))
lasso.fit(final_X_train, final_y_train)
lasso_preds = np.expm1(lasso.predict(final_X_test))

sub = pd.DataFrame()
sub['Id'] = test_id
sub['SalePrice'] = lasso_preds
#sub.to_csv('Lasso Submission 2.csv', index=False)
LASSO Regression Score: 0.12679

9.4 Ridge Regression Model

Identical steps were taken for the ridge regression model as for the [Lasso Regression Model](9.3-Lasso-Regression-Model). In the case of the ridge model, the optimal alpha value was 6.
ridge_alpha = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
ridge_rmse = []
for value in ridge_alpha:
    ridge = make_pipeline(RobustScaler(), Ridge(alpha=value, random_state=1234))
    ridge_rmse.append(rmse_cv(ridge).mean())

ridge_score_table = pd.DataFrame(ridge_rmse, ridge_alpha, columns=['RMSE'])
display(ridge_score_table.transpose())

plt.semilogx(ridge_alpha, ridge_rmse)
plt.xlabel('alpha')
plt.ylabel('score')
plt.show()

print("\nRidge Score: {:.4f} (alpha = {:.4f})\n".format(min(ridge_score_table['RMSE']), ridge_score_table.idxmin()[0]))

ridge = make_pipeline(RobustScaler(), Ridge(alpha=6, random_state=1234))
ridge.fit(final_X_train, final_y_train)
ridge_preds = np.expm1(ridge.predict(final_X_test))

sub = pd.DataFrame()
sub['Id'] = test_id
sub['SalePrice'] = ridge_preds
#sub.to_csv('Ridge Submission.csv', index=False)
Ridge Regression Score: 0.12528

9.5 XGBoost Model

Since there are multiple hyperparameters to tune in the XGBoost model, we will use the [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) function of Sklearn to determine the optimal values. Next, we use the [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function to generate a validation set and find the RMSE of the model. This method is used in lieu of the [rmse_cv](9.2-Defining-a-Cross-Validation-Strategy) function used above for the Lasso and ridge regression models.
xg_X_train, xg_X_test, xg_y_train, xg_y_test = train_test_split(final_X_train, final_y_train, test_size=0.33, random_state=1234)

xg_model = XGBRegressor(n_estimators=100, seed=1234)
param_dict = {'max_depth': [3,4,5],
              'min_child_weight': [2,3,4],
              'learning_rate': [0.05, 0.1, 0.15],
              'gamma': [0.0, 0.1, 0.2]}

start = time()
grid_search = GridSearchCV(xg_model, param_dict)
grid_search.fit(xg_X_train, xg_y_train)
print("GridSearch took %.2f seconds to complete." % (time()-start))
display(grid_search.best_params_)
Now that the hyperparameters have been chosen, we can calculate the validation RMSE of the XGBoost model.
xg_model = XGBRegressor(n_estimators=1000, learning_rate=0.1, max_depth=4, min_child_weight=4, gamma=0, seed=1234)

start = time()
xg_model.fit(xg_X_train, xg_y_train)
xg_preds = xg_model.predict(xg_X_test)
print("Model took %.2f seconds to complete." % (time()-start))
print("RMSE: %.4f" % sqrt(mean_squared_error(xg_y_test, xg_preds)))
Lastly, we predict the SalePrice using the test data. The predictions are then formatted in an appropriate layout for submission to Kaggle.
xg_model = XGBRegressor(n_estimators=1000, learning_rate=0.1, max_depth=4, min_child_weight=4, gamma=0, seed=1234)
xg_model.fit(final_X_train, final_y_train)
xg_preds = np.expm1(xg_model.predict(final_X_test))

sub = pd.DataFrame()
sub['Id'] = test_id
sub['SalePrice'] = xg_preds
#sub.to_csv('XGBoost Submission.csv', index=False)
XGBoost Regression Score: 0.12799

9.6 Ensemble Model

Since the Lasso, ridge, and XGBoost algorithms are so different, averaging their final SalePrice predictions may improve accuracy. Since the ridge regression performed best in terms of RMSE (0.125 vs. 0.126 and 0.127), its weight is set to double that of the other two models. Our final ensemble model performed better than any individual regression model (RMSE = 0.12220).
lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.00025, random_state=1234))
lasso.fit(final_X_train, final_y_train)
lasso_preds = np.expm1(lasso.predict(final_X_test))

ridge = make_pipeline(RobustScaler(), Ridge(alpha=6, random_state=1234))
ridge.fit(final_X_train, final_y_train)
ridge_preds = np.expm1(ridge.predict(final_X_test))

xg_model = XGBRegressor(n_estimators=1000, learning_rate=0.1, max_depth=4, min_child_weight=4, gamma=0, seed=1234)
xg_model.fit(final_X_train, final_y_train)
xg_preds = np.expm1(xg_model.predict(final_X_test))

weights = [0.5, 0.25, 0.25]

sub = pd.DataFrame()
sub['Id'] = test_id
sub['SalePrice'] = (ridge_preds*weights[0]) + (lasso_preds*weights[1]) + (xg_preds*weights[2])
sub.to_csv('Ensemble Submission.csv', index=False)