[0.88s -> 10.86s] Welcome to this video about the k-nearest neighbors algorithm for classification. In this video we'll discuss how KNN classifies a data point into a certain group.
[10.86s -> 25.46s] We'll also see how we can validate the KNN method by the holdout method and by cross-validation, and how to find an appropriate value for k. I recommend that you first watch my video about different validation techniques.
[25.46s -> 37.01s] KNN is a non-parametric classification method, which means that no parameters of the population distribution are estimated. KNN is a supervised machine learning algorithm,
[37.01s -> 40.59s] which means that we need data with known class or group.
[40.59s -> 52.02s] KNN is a type of lazy learning algorithm because, unlike most other classification methods, it does not build a model. Instead, it predicts directly based on the training data.
[52.02s -> 56.05s] The algorithm can be continuously updated with new data.
[56.66s -> 67.54s] To show how KNN works, let's have a look at the following fictitious dataset, which contains information on 12 patients. The blood concentrations of the C-reactive protein (CRP)
[67.54s -> 78.75s] and procalcitonin (PCT) were measured on these 12 patients when they entered a hospital. Once the patients had entered the hospital, the presence of bacteria and viruses was analyzed.
[78.75s -> 92.62s] However, it usually takes several hours or days to determine if a patient has a viral infection or a bacterial infection. After two days at the hospital, these six patients were found to be infected by a virus,
[92.62s -> 103.82s] whereas these six patients were confirmed to have a bacterial infection. Since antibiotics are only effective against bacteria, only these patients were treated with antibiotics.
[104.11s -> 115.57s] The problem is that we have to wait about two days to confirm the type of pathogen. We therefore need to wait two days to know if the antibiotic treatment is appropriate or not.
[115.57s -> 128.53s] It would therefore be great if we could use the CRP or the PCT concentration to tell if a patient has a bacterial or viral infection, because the measurements of these variables can be done within just an hour.
[128.53s -> 142.91s] If we plot the CRP concentration of the 12 patients, we see that no simple cut-off line can be used to clearly separate the ones with a bacterial infection from the ones with a viral infection. The same is also true for the PCT level:
[142.91s -> 157.14s] we cannot completely separate the patients with a bacterial infection from the patients with a viral infection by using only the PCT concentration. However, if we plot the CRP and the PCT concentration in the same plot,
[157.14s -> 167.55s] we see that the following line can separate the two groups completely. This indicates that if we make use of both variables simultaneously, we can make a better prediction.
[167.55s -> 179.50s] This data point represents the CRP and PCT concentration of patient number 1, whereas this data point represents the CRP and PCT concentration of patient number 2, and so forth.
[179.50s -> 191.42s] The KNN algorithm makes use of data with known class or group when it makes predictions. In this example, we have 12 patients with CRP and PCT data,
[191.42s -> 205.42s] and we also know if they had a bacterial or viral infection. For example, let's say that we have a new patient that enters the hospital with a CRP concentration of 40 and a PCT concentration of 41.
[205.42s -> 219.02s] KNN then determines the class of the new observation based on the majority class of the k closest neighbors. For example, if k is set to 5, the 5 closest neighbors will be evaluated.
[219.02s -> 231.07s] In this example, the five closest neighbors are patients which are known to have a viral infection. Since the majority of the five closest neighbors are of class Virus,
[231.07s -> 236.50s] the patient with an unknown infection will be classified as having a viral infection.
[236.91s -> 249.34s] Now, suppose that the patient instead has a PCT concentration of 60 and a CRP concentration of 42. Since three out of the five closest neighbors are of class Bacteria,
[249.34s -> 260.02s] whereas only two are of class Virus, the patient will be predicted to have a bacterial infection, because the majority of the neighbors are of class Bacteria.
[261.36s -> 272.16s] The KNN algorithm follows four simple steps. In step 1, we determine the distance between the new observation and all the data points in the training dataset.
[272.16s -> 284.70s] The Euclidean distance is by far the most common distance metric that is used. In step 2, we sort the distances. And in step 3, we identify the k closest neighbors.
[284.70s -> 292.05s] And in the final step, we determine the class of the new observation based on the group majority of the k nearest neighbors.
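These four steps are compact enough to sketch in a few lines of Python. This is a minimal illustration, not code from the video; the tuple-based data layout and the function name are made up for the example.

import math
from collections import Counter

def knn_predict(train, new_point, k=5):
    # train is a list of ((x, y), label) pairs, e.g. ((30, 30), "Bacteria").
    # Step 1: compute the distance from the new observation to every data point.
    distances = [(math.dist(point, new_point), label) for point, label in train]
    # Step 2: sort the distances.
    distances.sort(key=lambda pair: pair[0])
    # Step 3: identify the k closest neighbors.
    neighbors = [label for _, label in distances[:k]]
    # Step 4: classify by majority vote among the k nearest neighbors.
    return Counter(neighbors).most_common(1)[0][0]

With the 12 patients' (PCT, CRP) values and their known classes as training data, knn_predict(train, (60, 42), k=5) would reproduce the prediction worked out below.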
[292.50s -> 305.01s] Let's follow these steps on our example data, where we would like to predict the class of a new observation with a PCT concentration of 60 and a CRP concentration of 42. In this example,
[305.01s -> 316.40s] we set the value of k to 5, which means that we check the class of the five closest neighbors. We begin by calculating the Euclidean distance to all the data points.
[317.17s -> 331.14s] Let's calculate the distance between the new observation and patient number 1. In two dimensions, the Euclidean distance can be calculated by the following formula. We plug in the x and y coordinates of the new data point
[331.14s -> 338.77s] and of data point number 1, and we see that the Euclidean distance between these two data points is 24.1.
[339.09s -> 348.37s] We fill in the distance in the table. The Euclidean distance between the new observation and data point number 2 is 32.3.
[348.69s -> 359.76s] Note that the Euclidean distance in two dimensions can be seen as applying the Pythagorean theorem to a right triangle. We know that the length of this side is 60 minus 30,
[360.14s -> 370.00s] whereas the length of this side is 42 minus 30. The length of the hypotenuse can therefore be calculated to be about 32.3.
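For reference, the on-screen formula is presumably the standard two-dimensional Euclidean distance, d = sqrt((x2 - x1)^2 + (y2 - y1)^2). Reading data point number 2 as (30, 30) from the side lengths quoted above, the calculation for the new observation at (60, 42) is

d = sqrt((60 - 30)^2 + (42 - 30)^2) = sqrt(900 + 144) = sqrt(1044) ≈ 32.3,

which matches the value entered in the table.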
[370.93s -> 380.46s] Once we have calculated the Euclidean distances between the new observation and all the data points, we'll sort this table based on the distances.
[380.82s -> 393.78s] After we have sorted the patients based on the distances to the new observation, we see that three out of the five closest neighbors are of class Bacteria, whereas two are of class Virus.
[393.78s -> 406.22s] Since the majority of the k nearest neighbors are of class Bacteria, we classify the new observation as Bacteria. This means that we predict that the patient has a bacterial infection.
[408.34s -> 411.73s] So, how good is this classifier?
[413.20s -> 423.79s] One way to evaluate how well KNN predicts the class of new cases is to use the leave-one-out cross-validation method on existing data with known class.
[424.56s -> 438.35s] If we leave out the first patient from the training data, i.e., we pretend that we do not know that this patient has a viral infection, we can let KNN predict this for us, and then see if the prediction is correct or not.
[438.67s -> 451.22s] Since the five closest neighbors around data point number 1 are of class Virus, the predicted class of this observation is Virus. Since we know that this person had a viral infection,
[451.22s -> 465.30s] we know that the predicted class is correct. Similarly, the second patient is also correctly predicted to have a viral infection, because all the five closest neighbors around this point are of class Virus.
[465.30s -> 478.80s] However, when we predict the class of the fourth patient, who we know had a viral infection, KNN predicts that the person has a bacterial infection, because three of the five closest neighbors are of class Bacteria.
[479.22s -> 486.83s] Since we know that person number 4 had a viral infection, we know that KNN has made the wrong prediction in this case.
[487.34s -> 499.31s] Based on the leave-one-out cross-validation method, we see that we make 10 correct predictions out of 12 possible. This gives us an accuracy of about 83%.
[499.66s -> 512.98s] If we have a very big dataset, the leave-one-out cross-validation may take a long time to run. If we have plenty of data, we can instead use the holdout method, where we split the data into a large training dataset
[512.98s -> 522.99s] and a smaller test dataset. We then plot the training data like this and predict the class of the test data based on the KNN.
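Both validation schemes are straightforward to run in scikit-learn. A minimal sketch, assuming the feature matrix X (CRP and PCT columns) and the label vector y are already loaded; the variable names are placeholders:

from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)

# Leave-one-out cross-validation: each patient is held out and predicted once.
loo_accuracy = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()

# Holdout method: a larger training set and a smaller test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
holdout_accuracy = knn.fit(X_train, y_train).score(X_test, y_test)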
[527.34s -> 539.97s] We'll now discuss the problems of an imbalanced dataset. When the groups are of equal size, KNN is unbiased. For example, if k is set to 5,
[539.97s -> 553.30s] we would predict the unknown observation as having a viral infection, since 3 out of the 5 closest neighbors are of class Virus. However, if the bacteria group includes more data points than the virus group,
[553.30s -> 564.90s] the classifier will favor the bacteria group because of its higher density of data points in this space. There are many methods to deal with an imbalanced dataset. For example,
[564.90s -> 573.33s] one can put more weight on the group with fewer data points when calculating the distances between the unknown observation and the data points.
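As one illustrative way to implement this, each neighbor's vote can be weighted by the inverse size of its class, so that neighbors from the smaller group carry larger votes. This is a sketch of the general idea, not necessarily the exact weighting scheme shown in the video:

from collections import Counter

def weighted_vote(neighbor_labels, all_train_labels):
    # Each neighbor counts as 1 / (number of training points in its class),
    # so a neighbor from the minority group contributes a larger vote.
    class_sizes = Counter(all_train_labels)
    scores = Counter()
    for label in neighbor_labels:
        scores[label] += 1.0 / class_sizes[label]
    return scores.most_common(1)[0][0]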
[574.35s -> 586.70s] We'll now discuss how we can find an optimal value for k. Let's say that we have an additional data point up here, which means that the bacteria group now includes seven observations,
[586.70s -> 598.67s] whereas the virus group includes only six. If we then set k to 13 and predict the class of the new observation, the class will always be predicted as Bacteria:
[598.67s -> 610.05s] since we have 13 data points in total, the majority class will always be of type Bacteria. A high value of k is especially a problem with imbalanced datasets,
[610.05s -> 623.44s] where one group includes more observations than the other group. If we want to predict a class based on two groups, it is recommended that k is an odd number, since we then avoid possible ties.
[623.76s -> 637.98s] For example, if we set k to 4, this means that we should check the four closest neighbors. In this example, two out of the four closest neighbors will be of class Bacteria, and two of class Virus.
[637.98s -> 643.47s] In this case there is no majority that can determine the class of the new observation.
[644.11s -> 656.72s] We therefore need an additional rule in such cases, which could for example be based on a random process, or we could change the value of k if this happens. Also, if the value of k is set to 1,
[656.72s -> 667.18s] the classification will be very sensitive to extreme values. For example, the following data point is very far away from the other data points of class Virus.
[667.18s -> 675.57s] If the new observation that we like to classify happens to be close to such an outlier, it will be predicted as the same class as the outlier.
[675.86s -> 683.92s] If we increase k from, for example, 1 to 9, we see that 8 out of the 9 closest neighbors are of class Bacteria.
[684.85s -> 696.91s] So, in conclusion, the value of k should therefore not be too small or too large. A rule of thumb is to set k equal to the square root of the total sample size.
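This rule of thumb, combined with the preference for odd values of k from earlier, is a one-liner to apply; a small illustrative snippet:

import math

n = 12                   # total sample size
k = round(math.sqrt(n))  # rule of thumb: k ≈ √n, here round(3.46) = 3
if k % 2 == 0:
    k += 1               # prefer an odd k to avoid ties with two groups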
[697.42s -> 709.18s] We can also tune the KNN to find an optimal value of k and at the same time perform validation of the model. To do this, we could for example use the holdout method:
[709.18s -> 719.92s] we split the data into the training dataset and the test dataset. Let's say that our previous example data represents our training dataset in this case.
[720.24s -> 732.21s] Based on the leave-one-out cross-validation method, we know that the accuracy, or the proportion of correct predictions, is equal to about 83% when we set k to 5 for this dataset.
[733.14s -> 746.83s] If we set k to 3 and recompute the KNN, we see that we have increased the accuracy to about 92%. The only case that is incorrectly classified, if we set k to 3, is patient number 10,
[746.83s -> 753.04s] where two out of its three closest neighbors are of class Virus and only one is of class Bacteria.
[753.33s -> 764.16s] If we try four different odd numbers for k, we can select the value of k that results in the best performance. Based on the leave-one-out cross-validation method,
[764.16s -> 777.46s] we could for example select the value of k to be equal to 7. Note that we have used only a small dataset here, which may cause big variations due to chance. If we have more data,
[777.46s -> 789.30s] we could make a plot like this one, where the accuracy of the KNN is plotted against different values of k. In this example, an optimal value of k seems to be around 23.
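The search behind such a plot can be sketched in a few lines, again assuming the placeholder training arrays X_train and y_train:

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Estimate the leave-one-out accuracy on the training data for a range of
# odd k values, then keep the k with the best accuracy.
accuracies = {}
for k in range(1, 30, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    accuracies[k] = cross_val_score(knn, X_train, y_train, cv=LeaveOneOut()).mean()
best_k = max(accuracies, key=accuracies.get)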
[789.65s -> 798.61s] Once we have determined the value of k based on our training data, we can now validate our KNN algorithm by classifying our test data.
[799.12s -> 811.31s] We then test if the KNN correctly predicts this observation as bacteria based on the seven nearest neighbors. Then we take the next data point and so forth.
[811.57s -> 820.46s] Once we have classified all the data points in the test data, we can calculate metrics such as accuracy, specificity and sensitivity.
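For a two-class problem, these metrics can be read off the confusion matrix; a sketch, assuming the fitted classifier knn and the placeholder test arrays from before:

from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = knn.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()  # two-class case
accuracy = accuracy_score(y_test, y_pred)
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate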
[822.22s -> 830.22s] We'll now have a look at how we can use KNN with more than two groups, and why it is important to sometimes standardize the data.
[830.74s -> 838.77s] We have previously used KNN to classify bacterial and viral infections. If we also have data on parasitic infections,
[839.12s -> 851.50s] as well as on other clinical variables, such as the body temperature, then we can use KNN to predict if someone has a viral, bacterial or parasitic infection based on three clinical variables.
[852.14s -> 862.11s] Note that it might be a good idea to standardize the data when we include variables that have a large difference in variance. For example,
[862.11s -> 874.80s] the body temperature only spans between 36.8 and 41.1 °C, which means that the distance in this dimension will have a very small impact compared to the other variables during the classification.
[875.73s -> 886.64s] To give the variables equal weights in the classification process, we can first standardize the data so that all variables have a mean of 0 and a standard deviation of 1.
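In scikit-learn, this standardization is typically chained in front of the classifier so that the same scaling is applied to both training and test data; a minimal sketch with the placeholder arrays from before:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Scale every variable to mean 0 and standard deviation 1 before the distance
# computation, so CRP, PCT and body temperature get equal weight.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)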
[887.34s -> 898.14s] The advantage of KNN compared to other classification methods is that it is a very simple method and easy to understand. It is based on local data points,
[898.14s -> 911.50s] which might be beneficial for datasets involving many groups with local clusters. We'll now have a look at some of the disadvantages of KNN. All training data is used every time we make a prediction.
[911.63s -> 924.18s] This means that the data must be stored wherever we want to use the classifier. For very large datasets, the classification might be computationally expensive at prediction time.
[924.69s -> 936.59s] And another disadvantage is that KNN is sensitive to imbalanced datasets. This was the end of this lecture about using KNN as a classifier. Thanks for watching!