statistical learning
Journals in statistical learning / machine learning
https://stats.stackexchange.com/questions/88330/journals-in-statistical-learning-machine-learning
<p>Can you please name some major and minor journals publishing articles in the field of statistical learning / machine learning?</p>
<p>Some influential journals in machine learning:</p> <ul> <li><a href="http://www.computer.org/portal/web/tpami" rel="nofollow">IEEE TPAMI</a></li> <li><a href="http://jmlr.org/" rel="nofollow">Journal of Machine Learning Research</a></li> <li><a href="http://www.mitpressjournals.org/loi/neco" rel="nofollow">Neural Computation</a></li> <li><a href="http://www.journals.elsevier.com/neurocomputing/" rel="nofollow">Neurocomputing</a></li> <li><a href="http://www.journals.elsevier.com/pattern-recognition-letters/" rel="nofollow">Pattern Recognition Letters</a></li> </ul> <p>Note that in machine learning some big conferences are considered equivalent to journal publications (NIPS, KDD), as in the rest of computer science. I don't know whether this also holds in statistics; I believe it does not.</p>
statistical learning
Machine Learning VS Statistical Learning vs Statistics
https://stats.stackexchange.com/questions/442128/machine-learning-vs-statistical-learning-vs-statistics
<p>I have seen posts about the difference between ML and Statistics. And I have also seen posts explaining that Statistical Learning is a statistical approach to ML. But then, this is confusing because what is the difference between Statistics and Statistical Learning anyways?</p> <p>To finally resolve this confusion, I was hoping someone would be able to provide an answer. </p>
<p>Statistics is a mathematical science that studies the collection, analysis, interpretation, and presentation of data.</p> <p>Statistical/Machine Learning is the application of statistical methods (<em>mostly</em> <a href="https://en.wikipedia.org/wiki/Regression_analysis" rel="noreferrer">regression</a>) to make predictions about unseen data. Statistical Learning and Machine Learning are broadly the same thing. The main distinction between them is in the <a href="http://brenocon.com/blog/2008/12/statistics-vs-machine-learning-fight/" rel="noreferrer">culture</a>.</p>
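A minimal sketch of that "predict unseen data" idea: an ordinary least squares fit computed by hand on made-up numbers, then applied to an input that was not part of the observed data (all values below are illustrative only).

```python
# Simple linear regression (ordinary least squares) fit on observed data,
# then used to predict an unseen input -- a classic statistical method put
# to the "statistical/machine learning" use described above.
# The data points are made up for illustration.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.1, 6.2, 8.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates for slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

x_new = 5.0                  # an input not seen during fitting
prediction = intercept + slope * x_new
print(round(prediction, 2))  # -> 10.15
```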
statistical learning
Easier than Element of statistical Learning and harder than Introduction to statistical learning
https://stats.stackexchange.com/questions/392814/easier-than-element-of-statistical-learning-and-harder-than-introduction-to-stat
<p>I'm doing a master's in industrial engineering. Recently, I've realized that I need to study the statistical perspective on machine learning.</p> <p>So I'm studying the book Introduction to Statistical Learning on my own, but it seems to lack mathematical background. On the other hand, Elements of Statistical Learning is too hard for me, so I can barely understand it.</p> <p>Do you have any recommendation that sits between ESL and ISLR: one with rich, friendly explanations of the math and statistics?</p>
<p>I like <em>Learning From Data</em> by Abu-Mostafa, et al., which should be enjoyed with the YouTube lecture series. It accurately describes itself as a short course, but not a hurried course. Neural nets get half of one lecture, which makes perfect sense when you get there.</p> <p>ESL is a long book and a hurried book. It is best if you already know all of the fundamentals. I think Boyd and Vandenberghe's <em>Convex Optimization</em>, Blitzstein and Hwang's <em>Introduction to Probability</em>, and Lay's <em>Linear Algebra</em> are important prerequisites for ESL. Also Casella and Berger, because we all come from <em>Statistical Inference.</em></p>
statistical learning
Statistical Learning book with theoretical content
https://stats.stackexchange.com/questions/468321/statistical-learning-book-with-theoretical-content
<p>I'm currently reading the book 'An Introduction to Statistical Learning with Applications in R' (ISLR). It is very helpful for learning the applications of statistical models, but it offers little theoretical content or mathematical proofs/derivations of its formulas. I'm often confused by conclusions/formulas given in the book without theoretical explanation. Can anyone point me to resources or books that emphasize the theoretical side of statistical learning? Other statistical learning books with theoretical problems would be fine as well.</p>
<p>You can try <em>The Elements of Statistical Learning</em>. It is freely available <a href="https://web.stanford.edu/~hastie/Papers/ESLII.pdf" rel="nofollow noreferrer">here</a>. It has quite a bit of theoretical content, but not many proofs. Most of the derivations and/or proofs are published in separate papers by Hastie and Tibshirani.</p>
statistical learning
What is the difference between Statistical Learning and Machine Learning?
https://stats.stackexchange.com/questions/617771/what-is-the-difference-between-statistical-learning-and-machine-learning
<p><a href="https://hastie.su.domains/ISLR2/ISLRv2_website.pdf" rel="nofollow noreferrer">An Introduction to Statistical Learning with Applications in R</a> 2nd edition by James et al. says that</p> <blockquote> <p><em>Statistical learning</em> refers to a set of tools for making sense of complex datasets.</p> </blockquote> <p>How is it different from <em>Machine Learning</em> then?</p> <p>Is <em>Machine Learning</em> a subset of <em>Statistical Learning</em>?</p>
<p>I think any answers to this question will be verging on opinion-based, but I would say there is a gradient from</p> <ul> <li><em>theoretical</em> or <em>pure</em> statistics, focused on rigorous proofs of the properties of various statistical procedures or tests;</li> <li><em>applied</em> statistics, more interested in how procedures can be used with real data sets;</li> <li><em>computational statistics</em>, which focuses on algorithms and computational properties of procedures;</li> <li><em>statistical learning</em>, which asks how we can use computationally efficient, scalable procedures to learn about patterns in data, but still using a statistical framework to understand how these procedures work;</li> <li><em>machine learning</em>, which is <em>also</em> interested in computationally efficient, scalable procedures, but is less interested in the statistical properties of the answers;</li> <li><em>artificial intelligence</em>, which generalizes machine learning to a much broader framework of 'computer architectures to solve problems'.</li> </ul> <p>Statistical learning and machine learning in particular are very similar, but statistical learning is a little closer to statistics and machine learning is a little closer to computer science. Someone who works in SL is more likely to use <em>confidence intervals</em> to describe uncertainty, while someone who works in ML would (more likely) use <em>risk bounds</em>. People who do SL are generally interested in both <em>prediction</em> and <em>inference</em>, while ML tends to be more focused on prediction (although not exclusively: quantifying variable importance can be thought of as a form of inference).
For what it's worth, <a href="https://en.wikipedia.org/wiki/Machine_learning" rel="nofollow noreferrer">Wikipedia says</a></p> <blockquote> <p>Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning</p> </blockquote> <p>There are big overlaps between neighboring steps in this gradient, and it's arguably not a strict gradient (for example, you could argue that computational statistics and statistical learning are overlapping subsets of applied statistics).</p>
statistical learning
Statistical learning theory VS computational learning theory?
https://stats.stackexchange.com/questions/63077/statistical-learning-theory-vs-computational-learning-theory
<p>What relations and differences are between <a href="http://en.wikipedia.org/wiki/Statistical_learning_theory" rel="nofollow noreferrer">statistical learning theory</a> and <a href="http://en.wikipedia.org/wiki/Computational_learning_theory" rel="nofollow noreferrer">computational learning theory</a>?</p> <p>Are they about the same topic? Solve the same problems, and use the same methods?</p> <p>For example, the former says it is the theory of prediction (regression, classification,...).</p>
<p>Computational learning theory, more concretely the probably approximately correct (<a href="http://www.cs.iastate.edu/~honavar/pac.pdf" rel="noreferrer">PAC</a>) framework, answers questions like: how many training examples are needed for a learner to learn, with high probability, a good hypothesis? How much computational effort do I need to learn such a hypothesis with high probability? It does not deal with the concrete classifier you are working with. It is about what you can and cannot learn with the samples at hand.</p> <p>In statistical learning theory you rather answer questions of the sort: how many training samples will the classifier misclassify before it has converged to a good hypothesis? That is, how hard is it to train a classifier, and what guarantees do I have on its performance?</p> <p>Regrettably, I do not know a source where these two areas are described and compared in a unified manner. Still, I hope that helps.</p>
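In the simplest PAC setting (a finite hypothesis class, realizable case), the first question above has a closed-form answer: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice so that, with probability at least 1 − δ, any hypothesis consistent with the sample has error at most ε. A small sketch with arbitrary illustrative numbers:

```python
# PAC sample complexity for a finite hypothesis class (realizable case):
# m >= (1/eps) * (ln|H| + ln(1/delta)) training examples suffice for any
# consistent hypothesis to have error <= eps with probability >= 1 - delta.
# The values of |H|, eps, and delta below are arbitrary illustrations.
import math

def pac_sample_bound(h_size, eps, delta):
    """Smallest integer m satisfying the finite-class PAC bound."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

m = pac_sample_bound(h_size=1000, eps=0.1, delta=0.05)
print(m)  # -> 100
```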
statistical learning
Elements of Statistical Learning alternatives
https://stats.stackexchange.com/questions/154788/elements-of-statistical-learning-alternatives
<p>Elements of Statistical Learning (ESL) is a book with fantastic breadth and depth. It covers everything from the essentials to very modern methods, citing the papers where the original studies appeared. However, I find the language of the book very prohibitive. I believe there is an easier way to discuss these concepts; I find ESL simply overwhelming. Can someone suggest alternatives that are friendlier to the uninitiated?</p> <p>I found the sibling to ESL: Introduction to Statistical Learning. That is the tone I want to read and understand. It is accommodating, without dumbing things down. Anything similar to Intro to SL?</p>
<p>I agree that <em>An Intro to Statistical Learning</em> has a very accommodating tone. You may want to look at <em>Learning From Data, A Short Course</em> by Yaser Abu-Mostafa et al. I found this book and the accompanying youtube videos to be great. </p> <p>Lastly, spdrnl's comment about <em>Applied Predictive Modeling</em> by Kuhn is a good suggestion. I have not read it yet, but I have perused it and it seems like a great resource as well.</p>
statistical learning
Introduction to Statistical Learning
https://stats.stackexchange.com/questions/438143/introduction-to-statistical-learning
<p>For those that have read the book <a href="http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf" rel="nofollow noreferrer">Introduction to Statistical Learning</a>, I'm having a problem understanding a certain line:</p> <p><a href="https://i.sstatic.net/2eTms.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2eTms.png" alt="enter image description here"></a></p> <p>I can't seem to understand the derivation from the first equality to the second. I don't think expectations work that way. I get the variance part. But how does</p> <p><span class="math-container">$E[f(X)-\hat{f}(X)]^2$</span> = <span class="math-container">$[f(X)-\hat{f}(X)]^2$</span>?</p>
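A sketch of the reasoning behind that step, assuming ISLR's convention in this passage: the expectation is taken over the noise term $\epsilon$ only, with the test point and the fitted $\hat f$ held fixed. Writing $Y = f(X) + \epsilon$ with $E[\epsilon] = 0$:

```latex
\begin{aligned}
E\big[(Y-\hat{Y})^2\big]
  &= E\big[(f(X)+\epsilon-\hat{f}(X))^2\big] \\
  &= E\big[(f(X)-\hat{f}(X))^2\big]
     + 2\,E\big[(f(X)-\hat{f}(X))\,\epsilon\big]
     + E[\epsilon^2] \\
  &= [f(X)-\hat{f}(X)]^2 + 0 + \operatorname{Var}(\epsilon).
\end{aligned}
```

The last line uses the fact that, with $X$ and $\hat f$ fixed, $f(X)-\hat{f}(X)$ is a constant, so the expectation of its square is just the square itself, and the cross term vanishes because $E[\epsilon]=0$.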
statistical learning
A good literature for statistical learning theory?
https://stats.stackexchange.com/questions/441192/a-good-literature-for-statistical-learning-theory
<p>Any recommendations for good literature on statistical learning theory? I mean something that goes into more detail than The Elements of Statistical Learning, in terms of losses, empirical error estimation, etc.</p>
statistical learning
Motivations for experiment design in statistical learning?
https://stats.stackexchange.com/questions/422186/motivations-for-experiment-design-in-statistical-learning
<p>My interests in statistics centre around statistical learning, including Bayesian inference, inference in combinatorial spaces, Monte Carlo methods, Markov decision processes, modeling stochastic processes, and so on. It is mandatory in my university’s program that we take an experiment design course, which seems rather orthogonal to my interests.</p> <p>I am hoping this opinion is a by-product of my ignorance. Thus, some questions I seek to answer include: what are the applications of experiment design in statistical learning? How can I relate it to statistical learning? What are some motivating examples or challenging problems in experiment design?</p>
<p>This is an interesting question. The following <a href="https://www.google.no/search?q=experimental+design+in+machine+learning&amp;oq=experimental+design+in+machine+learning&amp;aqs=chrome..69i57.14387j0j7&amp;sourceid=chrome&amp;ie=UTF-8" rel="noreferrer">stored google search</a> gives many interesting hits, and both ways: <em>Machine learning used in experimental design</em> and <em>experimental design used in machine learning</em>. </p> <p>Basically experimental design is about <em>planning the collection of data</em>. That must be useful in statistical learning/machine learning, as you can get much better results from your analysis with better data. One obvious application is planning of simulation experiments, as in this case the data collection is completely under your control. </p> <p>You could do worse than start with <a href="https://rads.stackoverflow.com/amzn/click/com/0471718130" rel="noreferrer" rel="nofollow noreferrer">this excellent book</a> by Box, Hunter &amp; Hunter. <a href="https://stats.stackexchange.com/questions/1815/recommended-books-on-experiment-design">Look also at this list</a>. This <a href="http://cocosci.princeton.edu/papers/algorithmDesign.pdf" rel="noreferrer">interesting-looking paper</a> asks to rethink experimental design as algorithm design. 
</p> <p>So use that required course to learn not only the classics, but also peek into some applications in the fields you mentioned, such as <a href="https://en.wikipedia.org/wiki/Bayesian_experimental_design" rel="noreferrer">Bayesian experimental design</a>, <a href="https://www.researchgate.net/publication/236152907_Combinatorial_Experimental_Design_Using_the_Optimal_Coverage_Approach" rel="noreferrer">combinatorics?</a>, <a href="http://www.cse.chalmers.se/~chrdimi/teaching/optimal_decisions/slides/experiment_design_presentation.pdf" rel="noreferrer">Markov decision processes</a>, <a href="https://pdfs.semanticscholar.org/0bf8/20d1f49dd3b4d1b7678369970c13d9518aed.pdf" rel="noreferrer">stochastic processes</a>. </p> <p><a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005466" rel="noreferrer">Active learning</a> seems to be a buzzword for combining learning with design ... <a href="https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2013-68.pdf" rel="noreferrer">reinforcement learning</a> much the same! That viewpoint is <a href="https://en.wikipedia.org/wiki/Active_learning_(machine_learning)" rel="noreferrer">supported by this Wikipedia article</a>. <a href="https://en.wikipedia.org/wiki/Computerized_adaptive_testing" rel="noreferrer">Computerized adaptive testing</a> can be seen as a forerunner of active learning, and is certainly using some experimental design. Some explanation of how that works can be found here: <a href="https://stats.stackexchange.com/questions/66186/statistical-interpretation-of-maximum-entropy-distribution/245198#245198">Statistical interpretation of Maximum Entropy Distribution</a>.</p> <p>While at it, the tag <a href="/questions/tagged/experiment-design" class="post-tag" title="show questions tagged &#39;experiment-design&#39;" rel="tag">experiment-design</a> covers many posts in here, too many still in need of answers&amp;upvotes. 
So going through that, answering and upvoting, would be a great learning experience.</p>
statistical learning
Solutions to &#39;Statistical Learning with Sparsity&#39;
https://stats.stackexchange.com/questions/583137/solutions-to-statistical-learning-with-sparsity
<p>I've recently been working through <em>Statistical Learning with Sparsity</em> (SLS) by Hastie, Tibshirani and Wainwright.</p> <p>I found some exercises very hard, and think I found some mistakes. A set of solutions would be very helpful.</p> <p>I've recently discovered that <em>Elements of Statistical Learning</em> now has a manual of notes and solutions by <a href="https://waxworksmath.com/Authors/G_M/Hastie/hastie.html" rel="nofollow noreferrer">John Weatherwax</a>.</p> <p>Does anyone know of something similar for SLS? (no luck with Google)</p> <p>If not, I'd be happy to share the solutions to the questions I've attempted. Is there a standard protocol for starting a collaborative solution manual? (or a wiki/alternative place for discussion) (and is it safe to assume the authors won't mind?)</p>
statistical learning
Statistical learning and expected value
https://stats.stackexchange.com/questions/521720/statistical-learning-and-expected-value
<p>I'm studying some statistical learning theory. If I have <span class="math-container">$X$</span>, <span class="math-container">$Y$</span> as random variables representing the data-labels samples drawn from a certain distribution and a loss function, is it right to say that: <span class="math-container">$$ E[loss(Y, f(X))] = \sum_{x\in X} loss(y, x) * p(x)$$</span></p> <p>If I'm correct, that means the expected risk of my classifier <span class="math-container">$f()$</span> on <span class="math-container">$X$</span> is the probability of drawing some <span class="math-container">$x$</span> times its <span class="math-container">$loss$</span>.</p>
<p>In statistical learning theory we pretty much always are considering the joint distribution of <span class="math-container">$(X,Y)$</span> and the expected value here is with respect to that distribution, not just the marginal distribution of <span class="math-container">$X$</span>. In general <span class="math-container">$$ \text E[L(Y, f(X))] = \int_{\mathcal X \times \mathcal Y} L(y, f(x)) \,\text dP(x, y) $$</span> where this notation unifies the discrete and continuous cases. If both are discrete this integral becomes <span class="math-container">$$ \sum_{x, y} L(y, f(x)) p(x, y) $$</span> but it still is over all possible values of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. Intuitively we want to make sure we're considering what our losses are over the full range of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> while weighting on the joint density of those values since they likely aren't independent so it matters how they interact. Also what you wrote would still be a random variable since it has <span class="math-container">$Y$</span> in it, so it would make comparing risks harder.</p>
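A minimal numeric sketch of the discrete case of that formula, using a made-up joint pmf and a toy classifier (all names and values below are illustrative, not from the thread):

```python
# Expected risk E[L(Y, f(X))] in the discrete case: sum over all (x, y)
# pairs of L(y, f(x)) * p(x, y), i.e. the loss weighted by the JOINT pmf,
# not just the marginal of X. The pmf and classifier are made up.

def f(x):
    """Toy classifier: predict 1 when x >= 1, else 0."""
    return 1 if x >= 1 else 0

def zero_one_loss(y, y_hat):
    return 0 if y == y_hat else 1

# Hypothetical joint pmf p(x, y); the probabilities sum to 1.
p = {
    (0, 0): 0.4,
    (0, 1): 0.1,
    (1, 0): 0.2,
    (1, 1): 0.3,
}

# Risk = sum of loss * joint probability over all (x, y) pairs.
risk = sum(zero_one_loss(y, f(x)) * prob for (x, y), prob in p.items())
print(round(risk, 10))  # -> 0.3  (misclassified mass: 0.1 + 0.2)
```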
statistical learning
How are statistical decision theory and statistical learning theory related?
https://stats.stackexchange.com/questions/135923/how-are-statistical-decision-theory-and-statistical-learning-theory-related
<p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.115.9500&amp;rep=rep1&amp;type=pdf" rel="nofollow">This</a> paper attempts to contrast the basic elements of statistical learning theory and statistical decision theory, but I'm still confused about how the two are related.</p>
<p>Have you read the Wikipedia articles? In my view, decision theory is a subset of, or a problem within, statistical learning; both are driven by statistics, i.e. data. It concerns making optimal decisions, such as choosing between alternatives, once or over a period of time (possibly without termination), or deciding when to stop an experiment, etc. Statistical learning theory is a much broader concept: it is the most popular paradigm for machine learning. But maybe you already knew that?</p>
statistical learning
Relation between Nonparametric Statistics and Statistical Learning Theory
https://stats.stackexchange.com/questions/258417/relation-between-nonparametric-statistics-and-statistical-learning-theory
<p>I used to hear a statistics professor complaining about machine learning theories: "It is just nonparametric statistics". And when I read Vapnik's book "Statistical Learning Theory", it seems he was influenced a lot by nonparametric statistics. So, could anybody explain the similarities and differences between the two? My current guess (I am not familiar with classic nonparametric statistics) is that statistical learning theory cares much more about generalisation/prediction than nonparametric statistics does.</p>
statistical learning
Is there a difference between the terms statistical learning and machine learning?
https://stats.stackexchange.com/questions/271027/is-there-a-difference-between-the-terms-statistical-learning-and-machine-learnin
<p>Quick question I guess, but is there a perceivable difference between the terms <em>Statistical Learning</em> and <em>Machine Learning</em>, or is it simply area jargon? I gather the computer scientists like to refer to machine learning while statisticians might refer to statistical learning (no less influenced by the <a href="http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf" rel="nofollow noreferrer">famous book</a>).</p> <hr> <p>This motif repeats across other questions on the site as well, with many questions like "what's the difference between machine learning and <em>something else</em>", but I'd like to have this one specifically answered (with some references if possible) to resolve the possible merge of the terms.</p> <p>This question is rooted in a recent question on meta regarding <a href="https://stats.meta.stackexchange.com/questions/4525/the-learning-tags">The *learning tags</a>, where SL and ML were agreed to be made synonyms (<a href="https://stats.meta.stackexchange.com/a/4543/60613">refer to my answer for some background and what I have gathered on the subject so far</a>).</p> <hr> <p>Quoting my answer:</p> <blockquote> <p>Perhaps the difference is simply cultural, as many discussions on the main site have pointed out. Consider Stanford, where two courses are taught: <a href="http://statweb.stanford.edu/~tibs/stat315a.html" rel="nofollow noreferrer">Stats 315a</a>/<a href="http://web.stanford.edu/class/stats315b/" rel="nofollow noreferrer">315b</a> - Statistical Learning and <a href="http://cs229.stanford.edu/" rel="nofollow noreferrer">CS 229</a> - Machine Learning.
Apart from being named differently and belonging to different concentration areas, they also attract different students.</p> <p><a href="http://statweb.stanford.edu/~tibs/stat315a/CS229vsStat315a" rel="nofollow noreferrer">Tibshirani even shares his views on his page</a> comparing both courses and then both terms:</p> <blockquote> <p>Machine learning research focusses more on low noise situations, eg engineering applications like robotics and physical sciences</p> <p>Statistical learning focusses more on high noise, observational data like medicine and genomics, and problems where interpretation of the fitted model is important</p> <p>But more and more overlap in application areas!</p> </blockquote> <hr> <p><a href="http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf" rel="nofollow noreferrer">James, G., Witten, D., Hastie, T., &amp; Tibshirani, R. (2013). An introduction to statistical learning (Vol. 6). New York: Springer.</a></p> </blockquote>
statistical learning
Valid references on origins of Machine Learning, Statistical Learning and Data Mining
https://stats.stackexchange.com/questions/179021/valid-references-on-origins-of-machine-learning-statistical-learning-and-data-m
<p>I know it's a rather debated question on Stack Exchange communities, but let me explain the point of this question.</p> <p>I'm writing my capstone on Machine Learning and I need to clarify in depth, with valid references, the differences among Data Mining, Big Data, Statistical Learning and Machine Learning. As far as I understand (I'm reading <em>The Elements of Statistical Learning</em> and <em>An Introduction to Statistical Learning</em>, which are well-known books) and based on the questions linked below, those are basically <strong><em>Machine Learning</em></strong>. Correct me if you do not agree.</p> <p>However, I can't find an article or book with a detailed discussion of the origins of those fields and their relationships to each other, and I can't cite the questions below; they're not considered valid references.</p> <p><a href="https://stats.stackexchange.com/questions/5026/what-is-the-difference-between-data-mining-statistics-machine-learning-and-ai">1. What is the difference between data mining, statistics, machine learning and AI?</a></p> <p><a href="https://stackoverflow.com/questions/22419958/what-is-the-difference-between-big-data-and-data-mining">2. What is the difference between Big Data and Data Mining?</a></p> <p><a href="https://stackoverflow.com/questions/7105428/difference-between-data-mining-and-machine-learning">3. difference between data mining and machine learning [closed]</a></p>
<p>The issues you're noting are definitional ones where standard, widely accepted meanings for each term have yet to be agreed upon -- different authors and practitioners use them differently. I think nearly everyone would agree that there is a high degree of overlap in their use. This is frequently the case during the emergence of relatively new fields. So, 10 or more years ago, data mining was widely considered to be a "bad" thing relative to theoretical, hypothesis-driven standards of research -- the "gold standard." Today, the stigma associated with data mining has been, for the most part, removed in common parlance. </p> <p>Regrettably, these considerations can devolve into dogmatic, almost religious wars of turf where the contending definitions are a function and by-product of the discipline (the "turf") within which they originate. So, machine learning has largely developed within computer science departments, whose content overlaps with statistics, but it can be treated as a wholly separate discipline from statistics with a separate literature to command. Indeed, many ML practitioners will acknowledge that their exposure and experience evolved without any statistical considerations coming into play whatsoever. A good example of this is Chen and Xie's paper on "divide and conquer" algorithms for massive data -- <a href="http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf" rel="nofollow">http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf</a> -- which notes that D&amp;C approaches originated in ML and computer science but without any statistical consideration given to the accuracy of the approximating results. It took statisticians like Chen and Xie to ask and address the concern.</p> <p>If it's helpful, one way that might reduce some of the confusion is to think of the relationships in Boolean terms. 
You could even develop a text mining algorithm to show the overlap in Venn diagrams based on term usage from a set of related documents. So, ML is a subset of AI. ML is, in turn, a subset of computer science that overlaps with statistics. And statistics may (or may not) be a subset of mathematics. Big data is a technical consideration that is largely a computer science concern, but it is also a subset of issues having to do with IT hardware, software and data architecture. But big data has an impact on statistical analysis insofar as most 20th-century approaches to statistical modeling have significant "in-memory" software limitations when the data gets too big. Data mining is an approach to exploratory research that is a subset of methodological and research design issues that overlaps with statistics as well as ML. And so on.</p> <p>The bottom line is that the more you read on these topics, the closer you will get to arriving at your own understanding and definitions. You may have to get creative in this regard, since you are unlikely to find crisp definitions in one, two or even a few sources.</p>
statistical learning
Statistical Learning Theory - Loss Function
https://stats.stackexchange.com/questions/247373/statistical-learning-theory-loss-function
<p>I am reading Vapnik's "Statistical Learning Theory" and I am confused about his use of Q(z, alpha). On page 23 he explains that Q is the loss function, which takes as an argument a function g, the function used by the learning algorithm to generate predictions:</p> <p>Q(z, alpha) = L(z, g(z, alpha))</p> <p>However, later in the book (p. 147), when defining the concept of VC dimension, he uses Q(z, alpha) as if it were g(z, alpha). He talks about a set of indicator functions Q(z, alpha) shattering sets of points, and later talks about the capacity of Q. Usually people refer to the VC dimension as a property of the function implemented by the learning algorithm g, not as a property of the loss function Q. What am I missing?</p>
statistical learning
Books Similar to Introduction to Statistical learning
https://stats.stackexchange.com/questions/476270/books-similar-to-introduction-to-statistical-learning
<p>I'm looking for books similar to Introduction to Statistical Learning with Applications in R (ISLR): not too rigorous in terms of the mathematical treatment, but still able to provide intuition about the methods. I'm particularly interested in these topics:</p> <ul> <li>Generalized Linear Models</li> <li>Time Series Analysis</li> <li>Survival Analysis</li> </ul>
<ul> <li>For time series analysis: &quot;<a href="https://otexts.com/fpp2/" rel="nofollow noreferrer">Forecasting: Principles and Practice</a>&quot; by Hyndman and Athanasopoulos is absolutely excellent and is roughly on the same order of mathematical complexity as ISLR (i.e. enough, but not too much). It has the additional bonus of being available for free online, and having many code examples. It has one weak point: it doesn't do a good job of providing business context or intuitive aspects of TS modeling. For that I recommend &quot;<a href="https://www.oreilly.com/library/view/demand-forecasting-for/9781606495032/" rel="nofollow noreferrer">Demand Forecasting for Managers</a>&quot; by Stephan Kolassa and Enno Siemsen.</li> <li>For GLMs: Chapter 4 of &quot;<a href="https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf" rel="nofollow noreferrer">Pattern Recognition and Machine Learning</a>&quot; by Bishop gives a brief but pretty good explanation of GLMs within the context of classification, and does so at the level of theoretical math you are looking for. No code samples though, and I don't think a free version was ever released.</li> <li>For survival analysis, I can't give you one specific reference. But in general, I would recommend looking in Operations Research or Industrial Engineering textbooks and course materials for the mid-level theoretical content and intuitive explanations that you are seeking.</li> </ul>
statistical learning
Elements of Statistical Learning -2.4 Statistical Decision Theory:how to prove formula (2.12)
https://stats.stackexchange.com/questions/618307/elements-of-statistical-learning-2-4-statistical-decision-theory-how-to-prove-f
<p>I have a question: how does one prove formula (2.12) in the book <em>The Elements of Statistical Learning</em>?</p> <p><a href="https://i.sstatic.net/XfiOD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XfiOD.png" alt="enter image description here" /></a></p>
statistical learning
Question about notation in Introduction to Statistical Learning
https://stats.stackexchange.com/questions/135468/question-about-notation-in-introduction-to-statistical-learning
<p>I've been working my way through the problems in the book "Introduction to Statistical Learning". I have a question about the notation in Question 5 from Chapter 3 (screenshot below). What does $x_{i'}$ mean exactly, in comparison to $x_{i}$?</p> <p><img src="https://i.sstatic.net/KnJdH.png" alt="enter image description here"></p>
<p>First, there is no need for a new index $i'$ in the expression for $\hat\beta$: you can write $$\hat\beta=\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}^nx_i^2}.$$ But now, it is ambiguous to simply substitute this expression for $\hat\beta$ in $\hat y_i=x_i\hat\beta$: $$ \hat y_i=x_i\frac{\sum_{i=1}^nx_iy_i}{\sum_{i=1}^nx_i^2}, $$ because $i$ appears both as a fixed index and as a summation index in the same expression. So we need a new index: $$ \hat y_i=x_i\frac{\sum_{j=1}^nx_jy_j}{\sum_{j=1}^nx_j^2}. $$ Now the question is to write $\hat y_i$ as a linear combination of the $y_j$'s. From the above equality: $$ \hat y_i=\sum_{j=1}^n\left(\frac{x_ix_j}{\sum_{j=1}^nx_j^2}\right)y_j $$ but this is ambiguous again, so we need yet another index: $$ \hat y_i=\sum_{i'=1}^n\left(\frac{x_ix_{i'}}{\sum_{j=1}^nx_j^2}\right)y_{i'} = \sum_{i'=1}^na_{i'}y_{i'} $$ with $a_{i'}=\dfrac{x_ix_{i'}}{\sum_{j=1}^nx_j^2}$.</p>
419
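As a quick numerical sanity check of the identity $\hat y_i=\sum_{i'}a_{i'}y_{i'}$ derived in the answer above, here is a short sketch (the data are simulated and not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = rng.normal(size=10)

# No-intercept least squares: beta_hat = sum(x_i y_i) / sum(x_i^2)
beta_hat = (x @ y) / (x @ x)
y_hat = x * beta_hat

# Fitted value at i expressed as a linear combination of all responses
i = 3
a = x[i] * x / (x @ x)               # the weights a_{i'} from the answer
print(np.allclose(y_hat[i], a @ y))  # True
```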
statistical learning
Data Modelling/Statistical Learning - Interview Questions
https://stats.stackexchange.com/questions/197044/data-modelling-statistical-learning-interview-questions
<p>I was asked the following 2 questions in an interview. I wasn't selected, which means my answers were wrong. </p> <p>Now I need to learn from my mistakes. I have thought quite a bit since the interview and still haven't gotten anywhere concrete, so I need your help.</p> <p>Question 1: It is raining in the evening and you can't go out. The TV news broadcast says that it will continue to rain for the next 5 hours. What are the chances that you will see the sun in the next 36 hours?</p> <p>My answer in the interview: I said something to the effect that I would look at the data for that month in past years to see on average how often we get clear skies after a rainy day. </p> <p>I think I should have said that this is not a viable scenario in which to apply statistical learning.</p> <p>Question 2: In a company which had 1000 employees, 100 employees left in a year. Now the company has asked you to look at the existing employees and find out who is at risk of leaving. The company can provide you with any data that you would want.</p> <p>My answer: I said I would look at the salaries of current employees and of those who left, compare them with market standards, and see if I could find any pattern. I also said I would look at the ratings of the people leaving and see whether or not they are star performers, since high performers who do not get promoted may also leave. I would use logistic regression for classification.</p> <p>I felt I was on the right track but obviously that wasn't the case.</p> <p>I feel the interviewer was looking for my basic approach rather than any complicated statistical learning methods.</p>
<p>For the <strong>second question</strong>, I don't think people want to know specifically which model you are going to use, because that is normally decided after some cross-validation. In this case, salary is a good feature to include, but there might also be: years of experience, department, age, degree, gender, marital status, etc. The provided data includes 100 positive and 900 negative labels, so I would explain how I would deal with this <strong>unbalanced data</strong>: either under-sampling or down-weighting the 900 negative-labelled samples. Finally, I would mention <strong>cross-validation</strong> to choose the best model, the best strategy to deal with the unbalanced data, and maybe the best feature set if there are too many features available. Besides, since we are interested in the risk of leaving, I might want to use <strong>Area Under Curve</strong> (AUC) instead of the simple error rate as the <strong>evaluation metric</strong>.</p> <p>For the <strong>first question</strong>, since you watched the weather forecast, you should know the answer. Building a model is possible, but it is much harder and will often result in a less accurate prediction, considering that you have fewer resources than those who produce the TV forecast. Looking up historical weather data is much harder than looking up a 48-hour weather forecast!</p>
420
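Since the answer above recommends AUC for this unbalanced problem, here is a hedged sketch computing AUC from its rank-based (Mann-Whitney) definition, with made-up labels and scores; ties are ignored for brevity:

```python
import numpy as np

def auc(y_true, scores):
    # AUC = P(score of a random positive > score of a random negative),
    # computed via the Mann-Whitney U statistic (no tie handling here).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

y = np.array([0, 0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.7])
print(auc(y, s))  # 1.0: every positive is ranked above every negative
```

Unlike the raw error rate, this value is unaffected by the 9:1 class imbalance, which is why it is a more informative metric here.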
statistical learning
Elements of Statistical Learning training set
https://stats.stackexchange.com/questions/372525/elements-of-statistical-learning-training-set
<p>I am trying to read The Elements of Statistical Learning by Tibshirani, Hastie and Friedman; however, I have a problem understanding the expected (squared) prediction error (<span class="math-container">$EPE$</span>) formula that they provide on page <span class="math-container">$26$</span>:</p> <p>At the start they assume that the relationship between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is linear, so:</p> <p><span class="math-container">$Y = X^TB+\epsilon$</span>, where <span class="math-container">$\epsilon$</span>~<span class="math-container">$N(0,\sigma^2)$</span>, and the task is to fit the model to the training data. Now</p> <p><span class="math-container">$EPE(x_0) = E_{x_0|y_0}[E_T(y_0-\hat y_0)^2]$</span></p> <p>What is <span class="math-container">$E_T$</span>? And why compute the <span class="math-container">$EPE$</span> of <span class="math-container">$x_0$</span> instead of <span class="math-container">$\hat y_0$</span>?</p> <p>On page <span class="math-container">$23$</span> it is written that <span class="math-container">$T$</span> is the training set, so my understanding is that it consists of some <span class="math-container">$X$</span>'s. Is that right?</p>
<p><span class="math-container">$E_T$</span> is the expectation taken over the training set. <span class="math-container">$\hat y_0$</span> is a deterministic function of <span class="math-container">$x_0,$</span> i.e., <span class="math-container">$\hat y_0 = \hat \beta^Tx_0.$</span> The training set consists of rows of covariate values, that is correct. </p>
421
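To make the notation concrete, $E_T$ can be approximated by simulation: draw many training sets, refit $\hat\beta$ on each, and average the squared error at a fixed $x_0$. The sketch below uses invented parameter values, not ones from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, sigma, n = np.array([2.0, -1.0]), 0.5, 50
x0 = np.array([1.0, 0.3])

sq_errs = []
for _ in range(2000):                          # E_T: a fresh training set each pass
    X = rng.normal(size=(n, 2))
    y = X @ beta + rng.normal(scale=sigma, size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    y0 = x0 @ beta + rng.normal(scale=sigma)   # a new response y_0 at x_0
    sq_errs.append((y0 - x0 @ beta_hat) ** 2)

epe = np.mean(sq_errs)
print(epe)  # slightly above sigma^2 = 0.25: irreducible error plus estimation variance
```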
statistical learning
Elements of Statistical Learning - Statistical Decision Theory : formula 2.10 EPE
https://stats.stackexchange.com/questions/618301/elements-of-statistical-learning-statistical-decision-theory-formula-2-10-ep
<p>Recently, I have been reading <em>The Elements of Statistical Learning</em>. I have three questions about formula (2.10) in chapter 2. (1) What does <span class="math-container">$Pr(dx,dy)$</span> in formula (2.10) mean?</p> <p>(2) How is formula (2.10) derived?</p> <p>(3) Can <span class="math-container">$Pr(dx,dy)$</span> be written as <span class="math-container">$Pr(x,y)\,dx\,dy$</span>?</p> <p>The ESL book gives (2.10): <span class="math-container">$EPE(f)=\int[y-f(x)]^2Pr(dx,dy)$</span></p> <p>Can (2.10) be rewritten as the formula below? <span class="math-container">$EPE(f)=\int[y-f(x)]^2Pr(x,y)\,dx\,dy$</span></p> <p>The context of formula (2.10) in the book is shown in the picture below:</p> <p><a href="https://i.sstatic.net/9aHGl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9aHGl.png" alt="enter image description here" /></a></p>
422
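For what it's worth, $\int[y-f(x)]^2\,Pr(dx,dy)$ is simply the expectation $E[(Y-f(X))^2]$ under the joint distribution of $(X,Y)$, so it can be approximated by an average over joint draws. A Monte Carlo sketch with arbitrarily chosen distributions (my own illustration, not from the book):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.uniform(-1, 1, size=n)
y = x**2 + rng.normal(scale=0.3, size=n)   # joint draws (x, y) from Pr(dx, dy)

f = lambda t: t**2                          # candidate regression function
epe = np.mean((y - f(x)) ** 2)              # Monte Carlo estimate of the integral
print(epe)  # close to Var(eps) = 0.09, since f here is the true regression function
```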
statistical learning
A Fundamental Question about Statistical Learning
https://stats.stackexchange.com/questions/189583/a-fundamental-question-about-statistical-learning
<p>In statistical learning (many textbooks), we assume that the data $Y$ is generated by $Y=f(X)+\epsilon$, where $X$ are predictors and $\epsilon$ is some random noise. The problem then becomes: use various methods to find an estimate of $f$, i.e., $\hat{f}$, such that the expected mean square error on the test set is minimized. We know that we can decompose the mean square error into reducible error, which is related to $f-\hat{f}$, and irreducible error, which is related to $Var[\epsilon]$. It is clear that if you can have $\hat{f}=f$, then the reducible error becomes 0.</p> <p>My question is: why should there be a "true" function $f$ for which we want to find an estimate $\hat{f}$, and why do we wish it to be close to $f$? </p> <p>Consider the following man-made example. I generate my data set using the deterministic function $Y=g(X_1,X_2)$. So no random term here. Then I give the data set $(Y,X_1)$ to a friend, i.e., I omit the variable $X_2$. Hence, the data set $(Y,X_1)$ looks random to my friend, e.g., he might see that the same value of $X_1$ actually corresponds to 2 different values of $Y$. Then I ask him to find the best way to predict $Y$. In this example, if my friend adopts the approach of the statistical learning textbooks, he should believe that the data are generated from $Y=f(X_1)+\epsilon$. My question is, what is the "true" $f$ in this case, and is it meaningful to talk about a "true" $f$? </p> <p>I believe this is an important conceptual question, since in reality I could always assume that every data set I see was made by somebody who used a completely deterministic function and that I just observe part of their set of predictors. </p> <p>To me, the notion of a "true" $f$ and of "trying to find an estimate $\hat{f}$ of the true $f$" is redundant and meaningless. From a practical perspective, my goal is just to find some function $g$ such that $g(X)$ gives me the minimum expected mean square error on the test set.</p>
<p>There are several issues here: the space of functions your friend would have to minimize over is uncountable. Hence, unless he is really lucky and guesses the correct function, there is no hope of minimizing the expression over that space. Thus, one chooses a certain class of functions within which to minimize, such as the class of linear functions. Typically this class is biased, i.e., it does not contain the true function, and one searches for the best member of the class.</p> <p>Since you only give him one of the two variables, he generally has no chance of finding the true underlying generating function $g$. </p> <p>However, under the (wrong) assumption of a process depending only on one variable (i.e., assuming that an underlying $f$ exists), he can still try to find a function that explains the data reasonably well.</p>
423
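The scenario in the question above is easy to simulate: generate $Y$ deterministically from two variables and regress on only one. To the friend, the omitted $X_2$ is indistinguishable from noise. The functions and coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
y = 2 * x1 + 3 * x2                 # deterministic g(x1, x2): no noise at all

# The friend, seeing only (y, x1), fits a least-squares line in x1
coefs = np.polyfit(x1, y, 1)
resid = y - np.polyval(coefs, x1)
resid_var = np.var(resid)
print(resid_var)  # close to 9 * Var(x2) = 0.75: the omitted variable acts as noise
```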
statistical learning
AIC formula in Introduction to Statistical Learning
https://stats.stackexchange.com/questions/181539/aic-formula-in-introduction-to-statistical-learning
<p>I'm a little puzzled by a formula presented in Hastie's "Introduction to Statistical Learning". In Chapter 6, page 212 (sixth printing, available <a href="http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Sixth%20Printing.pdf">here</a>), it is stated that:</p> <p>$AIC = \frac{RSS}{n\hat\sigma^2} + \frac{2d}{n} $</p> <p>For linear models with Gaussian noise, $d$ being the number of predictors and $\hat\sigma$ being the estimate of error variance. However,</p> <p>$\hat\sigma^2 = \frac{RSS}{(n-2)}$</p> <p>Which is stated in Chapter 3, page 66.</p> <p>Which would imply:</p> <p>$AIC = \frac{(n-2)}{n} + \frac{2d}{n} $</p> <p>Which can't be right. Can someone point out what I'm doing incorrectly?</p>
<p>I think that you are confusing the two residual sums of squares that you have. You have one RSS to estimate $\hat{\sigma}^2$ in the formula; this RSS is in some sense independent of the number of parameters, $p$. This $\hat{\sigma}^2$ should be estimated using all your covariates, giving you a <strong>baseline unit of error</strong>. You should call the RSS <strong>in the formula for AIC</strong> $\text{RSS}_{p_i}$, meaning that it corresponds to model $i$ with $p$ parameters (<em>there may be many models with $p$ parameters</em>). So the RSS in the formula is calculated for a specific model, while the RSS for $\hat{\sigma}^2$ is for the full model.</p> <p>This is also noted on the page before, where $\hat{\sigma}^2$ is introduced for $C_p$.</p> <p>So the RSS in the formula for AIC is not independent of $p$; it is calculated for a given model. Introducing $\hat{\sigma}^2$ into all of this is just to have a baseline unit for the error, such that there is a <em>"fair"</em> comparison between the number of parameters and the reduction in error. You need to compare the number of parameters to something that is scaled w.r.t. the magnitude of the error.</p> <p>If you did not scale the RSS by the baseline error, it might be that the RSS drops much more than the number of variables introduced, and thus you would become greedier in adding more variables. If you scale it to some unit, the comparison to the number of parameters is independent of the magnitude of the baseline error.</p> <p>This is not the general way to calculate AIC, but it essentially boils down to something similar in cases where it is possible to derive simpler versions of the formula.</p>
424
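The distinction in the answer above can be sketched numerically: $\hat\sigma^2$ is computed once from the full model, while the RSS in the AIC formula is recomputed for each candidate model. The simulated data and nested-submodel setup below are my own illustration, not from the book:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p_full = 100, 5
X = rng.normal(size=(n, p_full))
y = X[:, :2] @ np.array([1.5, -2.0]) + rng.normal(size=n)  # only 2 real predictors

def rss(Xs):
    b = np.linalg.lstsq(Xs, y, rcond=None)[0]
    return float(((y - Xs @ b) ** 2).sum())

sigma2 = rss(X) / (n - p_full)     # baseline error: full model, computed once

aics = []
for d in range(1, p_full + 1):     # nested submodels using the first d columns
    aics.append(rss(X[:, :d]) / (n * sigma2) + 2 * d / n)
    print(d, round(aics[-1], 3))   # typically minimized near the true d = 2
```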
statistical learning
Book for reading before Elements of Statistical Learning?
https://stats.stackexchange.com/questions/18973/book-for-reading-before-elements-of-statistical-learning
<p>Based on <a href="https://quant.stackexchange.com/questions/111/how-can-i-go-about-applying-machine-learning-algorithms-to-stock-markets">this post</a>, I want to digest Elements of Statistical Learning. Fortunately it is available for free and I started reading it.</p> <p>I don't have enough knowledge to understand it. Can you recommend a book that is a better introduction to the topics in the book? Hopefully something that will give me the knowledge needed to understand it?</p> <p>Related:</p> <p><a href="https://stats.stackexchange.com/questions/40808/how-to-get-an-introductory-understanding-of-machine-learning">Is a strong background in maths a total requisite for ML?</a></p>
<p>I bought, but have not yet read, </p> <blockquote> <p>S. Marsland, <em><a href="http://rads.stackoverflow.com/amzn/click/1420067184">Machine Learning: An Algorithmic Perspective</a></em>, Chapman &amp; Hall, 2009. </p> </blockquote> <p>However, the reviews are favorable and state that it is more suitable for beginners than other ML books that have more depth. Flipping through the pages, it looks like a good fit for me because I have little math background.</p>
425
statistical learning
Statistical learning - How to determine the irreducible error?
https://stats.stackexchange.com/questions/285750/statistical-learning-how-to-determine-the-irreducible-error
<p>I'm reading <em>Introduction to Statistical Learning</em>, currently Chapter 2 about the bias-variance trade-off. </p> <p>In all examples the irreducible error is 1, i.e. $Var(\epsilon) = 1$. I read in <a href="https://stats.stackexchange.com/questions/228896/why-is-the-variance-of-the-error-term-a-k-a-the-irreducible-error-always-1?noredirect=1&amp;lq=1">Why is the variance of the error term always 1 in examples of the bias-variance tradeoff?</a> that this is likely for pedagogical reasons. </p> <p>My question is, how would you actually go about finding the irreducible error? Is this even possible?</p>
426
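One partial answer, sketched under my own assumptions rather than taken from the book: if you have replicated responses at the same $x$ values, the within-replicate variance estimates $Var(\epsilon)$ without knowing $f$ at all, since $f(x)$ is constant within each replicate group (the classical "pure error" idea). Without replication, the residual variance of your best model only bounds the irreducible error from above.

```python
import numpy as np

rng = np.random.default_rng(5)
# 20 distinct x values, 50 replicated responses at each
x_levels = np.linspace(0, 1, 20)
noise_sd = 0.4
y = np.sin(2 * np.pi * x_levels)[:, None] + rng.normal(scale=noise_sd, size=(20, 50))

# Within-group variance does not depend on f, because f(x) is constant per group
within_var = y.var(axis=1, ddof=1).mean()
print(within_var)  # close to noise_sd**2 = 0.16, whatever the unknown f is
```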
statistical learning
Statistical Learning. Contradictions?
https://stats.stackexchange.com/questions/493564/statistical-learning-contradictions
<p>Currently I am re-reading some chapters of <em>An Introduction to Statistical Learning with Applications in R</em> by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani (Springer, 2015). Now, I have some doubts about what is said there.</p> <p>Above all it seems to me relevant to note that in chapter 2 two concepts are introduced: <em>prediction accuracy-model interpretability tradeoff</em> and <em>bias-variance tradeoff</em>. I mentioned the latter in an <a href="https://stats.stackexchange.com/questions/491501/is-the-idea-of-a-bias-variance-tradeoff-a-false-construct/491766#491766">earlier question</a>.</p> <p>In this book, it is suggested that focusing on expected prediction error (test MSE) yields the following assertions:</p> <ul> <li><p>less flexible specifications imply more bias but less variance</p> </li> <li><p>more flexible specifications imply less bias but more variance</p> </li> </ul> <p>It follows that linear regression implies more bias but less variance. The optimum in the tradeoff between bias and variance, the minimum in test MSE, depends on the true form of <span class="math-container">$f()$</span> [in <span class="math-container">$Y = f(X) + \epsilon$</span>]. Sometimes linear regression works better than more flexible alternatives and sometimes not. This graph tells this story:</p> <p><a href="https://i.sstatic.net/xuG5L.png" rel="noreferrer"><img src="https://i.sstatic.net/xuG5L.png" alt="enter image description here" /></a></p> <p>In the second case linear regression works quite well; in the other two, not so much. All is fine from this perspective.</p> <p>In my opinion the problem appears from the perspective of <em>inference</em> and <em>interpretability</em> used in this book.
In fact this book also suggests that:</p> <ul> <li><p>less flexible specifications are farther from reality, hence more biased, but at the same time they are more tractable and, therefore, more interpretable;</p> </li> <li><p>more flexible specifications are closer to reality, hence less biased, but at the same time they are less tractable and, therefore, less interpretable.</p> </li> </ul> <p>As a result, linear regressions (OLS, and even more so LASSO) are the most interpretable and the most powerful for inference. This graph tells this story:</p> <p><a href="https://i.sstatic.net/r3AFt.png" rel="noreferrer"><img src="https://i.sstatic.net/r3AFt.png" alt="enter image description here" /></a></p> <p>This seems to me like a contradiction. How is it possible that linear models are, at the same time, the most biased but the best for inference? And among linear models, how is it possible that LASSO regression is better than OLS for inference?</p> <p><strong>EDIT</strong>: My question can be summarized as:</p> <ul> <li><p>linear estimated models are indicated as the most interpretable even though they are the most biased;</p> </li> <li><p>linear estimated models are indicated as the most reliable for inference even though they are the most biased.</p> </li> </ul> <p>I read carefully the answer and comments of Tim. However, it seems to me that some problems remain. Actually, it looks like the first condition can hold in some sense, i.e., in a sense where “interpretability” is a property of the estimated model itself (its relation to something &quot;outside&quot; is not considered).</p> <p>For inference, the &quot;outside&quot; is the core, but the problem can turn on its precise meaning.
Then, I checked the definition that Tim suggested (<a href="https://stats.stackexchange.com/questions/234913/what-is-the-definition-of-inference">What is the definition of Inference?</a>), also here (<a href="https://en.wikipedia.org/wiki/Statistical_inference" rel="noreferrer">https://en.wikipedia.org/wiki/Statistical_inference</a>), and elsewhere. Some definitions are quite general, but in most of the material I have, inference is intended as something like: from a sample, say something about the &quot;true model&quot;, regardless of its deeper meaning. So the authors of the book under consideration rely on something like the “true model”, implying we cannot skip it. Now, a biased estimator cannot say anything correct about the true model and/or its parameters, even asymptotically. Unbiasedness/consistency (the difference is irrelevant here) is the main requirement for any model written for a pure inference goal. Therefore the second condition cannot hold, and the contradiction remains.</p>
<p>There’s no contradiction. The fact that something is easy to interpret has nothing to do with how accurate it is. The most interpretable model you could imagine is one that predicts a constant, independently of the data. In that case, you would always be able to explain why your model made the prediction it made, but the predictions would be horrible.</p> <p>That said, it’s not the case that you must choose between complicated, black-box models for accurate results and poorly performing models for interpretability. <a href="https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/5" rel="nofollow noreferrer">Here you can find</a> a nice, popular article by Cynthia Rudin and Joanna Radin, where they give examples of interpretable models giving very good results and use them to discuss how performance vs. interpretability is a false dichotomy. There’s also a very interesting <a href="https://podcasts.apple.com/pl/podcast/data-skeptic/id890348705?i=1000476935776" rel="nofollow noreferrer">episode of the Data Skeptic</a> podcast on this subject hosting Cynthia Rudin.</p> <p>You may also be interested in the <a href="https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one">When is a biased estimator preferable to unbiased one?</a> thread.</p>
427
statistical learning
The error-rate in &quot;The elements of statistical learning&quot;
https://stats.stackexchange.com/questions/645621/the-error-rate-in-the-elements-of-statistical-learning
<p>This picture is from the book &quot;The Elements of Statistical Learning&quot;:</p> <p><a href="https://i.sstatic.net/3gmgr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3gmgr.png" alt="enter image description here" /></a></p> <p>I am wondering how the test-error rate is calculated, based on how they describe the simulation at the start. How do they, for example, calculate the error rate for the regression when there is no <span class="math-container">$\beta$</span>?</p>
<p>Error rate is calculated as the number of errors divided by the number of attempts. This equals one minus the proportion classified correctly (accuracy expressed as a decimal or fraction).</p>
428
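A minimal sketch of the computation described in the answer above, with invented labels:

```python
# Error rate = misclassifications / attempts = 1 - accuracy
y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]

errors = sum(t != p for t, p in zip(y_true, y_pred))
error_rate = errors / len(y_true)
print(error_rate)  # 2 errors out of 8 -> 0.25
```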
statistical learning
What is alpha in Vapnik&#39;s statistical learning theory?
https://stats.stackexchange.com/questions/478351/what-is-alpha-in-vapniks-statistical-learning-theory
<p>I'm currently studying Vapnik's theory of statistical learning. I rely on <a href="https://statisticalsupportandresearch.files.wordpress.com/2017/05/vladimir-vapnik-the-nature-of-statistical-learning-springer-2010.pdf" rel="nofollow noreferrer">Vapnik (1995)</a> and some secondary literature that is more accessible to me. Vapnik defines a learning machine as 'an object' capable of implementing a set of functions <span class="math-container">$f(x, \alpha), \alpha \in \Lambda$</span>. This term appears in all following equations, e.g. the risk functional <span class="math-container">$R(\alpha)$</span> is written as a function of <span class="math-container">$\alpha$</span>.</p> <p>I'm having trouble understanding what <span class="math-container">$\alpha$</span> is in practice and how it relates to the VC dimension <span class="math-container">$h$</span>. Suppose for example that I fit a simple regression tree to my data. What are the 'learning machine' and <span class="math-container">$f(x, \alpha)$</span> in this context? Can I interpret <span class="math-container">$\alpha$</span> as the parameters (e.g. split variables, cutpoints, etc.) and hyperparameters of my decision tree?</p>
<h2>Short Answer</h2> <p><span class="math-container">$\alpha$</span> is the parameter or vector of parameters, including all so-called &quot;hyperparameters,&quot; of a set of functions <span class="math-container">$V$</span>, and has nothing to do with the VC dimension.</p> <h2>Long Answer: What is <span class="math-container">$\alpha$</span>?</h2> <p>Statistical learning is the process of choosing an appropriate function (called a model) from a given class of possible functions. Given a set of functions <span class="math-container">$V$</span> (the class of possible models under consideration), it is often convenient to work with a parametrization of <span class="math-container">$V$</span> instead. This means choosing a <em>parameter set</em> <span class="math-container">$\Lambda$</span> and a function <span class="math-container">$g$</span> called a <em>parametrization</em> where <span class="math-container">$g : \Lambda \to V$</span> is a surjective function, meaning that every function <span class="math-container">$f \in V$</span> has at least one parameter <span class="math-container">$\alpha \in \Lambda$</span> that maps to it. We call the elements <span class="math-container">$\alpha$</span> of the parameter space <span class="math-container">$\Lambda$</span> <em>parameters</em>, which can be numbers, vectors, or really any object at all. You can think of each <span class="math-container">$\alpha$</span> as being a representative for one of the functions <span class="math-container">$f \in V$</span>. With a parametrization, we can write the set <span class="math-container">$V$</span> as <span class="math-container">$V = \{ f(x, \alpha) \}_{\alpha \in \Lambda}$</span> (but this is bad notation, see footnote*).</p> <p>Technically, it's not necessary to parametrize <span class="math-container">$V$</span>, just convenient. We could use the set <span class="math-container">$V$</span> directly for statistical learning. 
For example, I could take</p> <p><span class="math-container">$$V = \{ \log(x), x^3, \sin (x), e^x, 1/x , \sqrt{x} \},$$</span></p> <p>and we could define the risk functional <span class="math-container">$R : V \to \mathbb{R}$</span> in the standard way as the expected loss</p> <p><span class="math-container">$$R(f) = \int L(y, f(x)) dF(x, y) = E[L(y, f(x))]$$</span></p> <p>for some loss function <span class="math-container">$L$</span>, a popular choice being <span class="math-container">$L(y, x) = \| y - f(x) \|_2$</span>, and where <span class="math-container">$F$</span> is the joint cdf of the data <span class="math-container">$(x, y)$</span>. The goal is then to choose the best model <span class="math-container">$f^*$</span>, which is the one that minimizes the risk functional, i.e.</p> <p><span class="math-container">$$f^* = \text{argmin}_{f \in V} R(f) .$$</span></p> <p>To make this easier to work with, Vapnik instead considers parametrizing the set <span class="math-container">$V$</span> with a parameter set <span class="math-container">$\Lambda$</span> and a parametrization <span class="math-container">$g : \Lambda \to V$</span>. With this, you can write every function <span class="math-container">$f \in V$</span> as <span class="math-container">$f = g(\alpha)$</span> for some parameter <span class="math-container">$\alpha \in \Lambda$</span>. This means that we can reinterpret the risk minimization problem as</p> <p><span class="math-container">$$ \alpha^* = \text{argmin}_{\alpha \in \Lambda} R(g(\alpha)) \quad \text{ and } \quad f^* = g(\alpha^*) . $$</span></p> <p>What Vapnik calls the risk functional is actually the function <span class="math-container">$R \circ g : \Lambda \to \mathbb{R}$</span> in the notation I've used, and if <span class="math-container">$\Lambda$</span> is a set of numbers or vectors of numbers, then this has the advantage of being a <strong>function</strong> as opposed to a <strong>functional</strong>. 
This makes analysis much easier. For example, in the calculus of variations <a href="https://en.wikipedia.org/wiki/Calculus_of_variations#Euler%E2%80%93Lagrange_equation" rel="nofollow noreferrer">the trick of replacing a functional with a function</a> is used to prove necessary conditions for minimizing a functional by converting a statement about a <strong>functional</strong> <span class="math-container">$J$</span> to a statement about a <strong>function</strong> <span class="math-container">$\Phi$</span>, which can then be analyzed by using standard calculus (see link for details).</p> <p>In addition to being easier to analyze, it's also quite convenient to use a parametrization when the functions in <span class="math-container">$V$</span> are all of a similar form, such as the set of power functions <span class="math-container">$$V = \{ x, x^2, x^3, x^4, \dots \} = \{ x^\alpha \}_{\alpha \in \mathbb{N}}$$</span> or the set of linear functions <span class="math-container">$$V = \{ mx + b \}_{(m, b) \in \mathbb{R}^2} .$$</span></p> <h2><span class="math-container">$\alpha$</span> in Practice: A Simple Example</h2> <p>To use your example, let's start with a very simple regression tree to model some data with one real-valued feature <span class="math-container">$x \in \mathbb{R}$</span> and a real-valued target <span class="math-container">$y \in \mathbb{R}$</span>. Let's also assume for simplicity that we're only considering left-continuous decision trees with a depth of 1. This defines our function class <span class="math-container">$V$</span> implicitly as</p> <p><span class="math-container">$$V = \{ \text{all functions which can be written as a left-continuous regression tree of depth 1} \} $$</span></p> <p>which is not a very mathematically convenient formulation. 
It would be much easier to work with this if we notice that the depth <span class="math-container">$d$</span> being exactly 1 means that there is one split point, which means that we can parametrize <span class="math-container">$V$</span> using the parametrization <span class="math-container">$g : \mathbb{R}^3 \to V$</span> defined by</p> <p><span class="math-container">$$ g(\alpha_1, \alpha_2, \alpha_3) = \begin{cases} \alpha_1 , &amp; \text{ if } x \le \alpha_3 \\ \alpha_2 , &amp; \text{ if } x &gt; \alpha_3 \\ \end{cases}, $$</span> where <span class="math-container">$\alpha_3$</span> is the split point, and <span class="math-container">$\alpha_1$</span> and <span class="math-container">$\alpha_2$</span> are the values of the function on the intervals <span class="math-container">$(-\infty, \alpha_3]$</span> and <span class="math-container">$(\alpha_3, \infty)$</span>. Notice that in general <strong>parametrizations are not unique</strong>. For instance, there was nothing special about the order of these three parameters: I could rearrange them to get a different parametrization, or I could even use the parametrization</p> <p><span class="math-container">$$ h(\alpha_1, \alpha_2, \alpha_3) = \begin{cases} \alpha_1^5 - 2 \alpha_1 + 5 , &amp; \text{ if } x \le 1000\alpha_3 \\ \tan(\alpha_2) , &amp; \text{ if } x &gt; 1000\alpha_3 \\ \end{cases}. $$</span> What's important is that every <span class="math-container">$f \in V$</span> can be represented by some parameter <span class="math-container">$\alpha = (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{R}^3$</span>, which is possible whether we use the parametrization <span class="math-container">$g$</span> or <span class="math-container">$h$</span>.</p> <h2><span class="math-container">$\alpha$</span> in Practice: A More Complicated Example</h2> <p>Now, let's say we want to use a more complicated model. 
Let's use a regression tree to model data with two real-valued features <span class="math-container">$(x_1, x_2) \in \mathbb{R}^2$</span> and a real-valued target <span class="math-container">$y \in \mathbb{R}$</span>, and with decision trees with a maximum depth of 2. Parametrizing <span class="math-container">$V$</span> this time is much more complicated, because regression trees depend on the shape of the tree, which variable is being split at each node, and the actual value of the split point. Every full binary tree of depth <span class="math-container">$d \le 2$</span> is one of five possible shapes, shown below:</p> <p><a href="https://i.sstatic.net/Ho1rl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ho1rl.png" alt="All full binary trees of depth d ≤ 2" /></a></p> <p>In addition, for each leaf on the tree, we have to specify a real number parameter, and for each branch vertex we have to specify which of the two features we're splitting on and what the value of the split point is. One way you could construct the parametrization would be to use a discrete variable to parametrize the possible tree shapes, another discrete variable for each node to parametrize whether <span class="math-container">$x_1$</span> or <span class="math-container">$x_2$</span> is being split, and then real-valued parameters for the actual values of the function on each piece of the domain. Once again, there are many ways of parametrizing this set, but here is one: Let <span class="math-container">$$ \Lambda = \{ 1, 2, 3, 4, 5 \} \times \{ 1, 2 \}^3 \times \mathbb{R}^7 $$</span> For a parameter <span class="math-container">$\alpha \in \Lambda$</span>, e.g. 
<span class="math-container">$\alpha = (4, (2, 1, 1), (0.18, 0.3, -0.5, 10000, 538, 10, \pi))$</span>, the first coordinate determines the shape of the tree, as listed in order above; the second coordinate has three coordinates that determine which of the two features is split on at each branch node (note that the middle one is &quot;unused&quot; for shape 4, which is not an issue because parametrizations don't have to be injective functions); the third coordinate has seven coordinates, each of which is a real value corresponding to a node in the graph that</p> <ol> <li>for leaves, determines the value of the regression tree on the corresponding piece of the domain,</li> <li>for branch vertices, determines the split value,</li> <li>and for unused vertices, is unused.</li> </ol> <p>I've shown the graph corresponding to this parameter below:</p> <p><a href="https://i.sstatic.net/KBNgB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KBNgB.png" alt="enter image description here" /></a></p> <h2>Relation to VC Dimension</h2> <p><span class="math-container">$\alpha$</span> has nothing to do with the VC dimension, because each <span class="math-container">$\alpha \in \Lambda$</span> is a representative of one function <span class="math-container">$f \in V$</span>, and the VC dimension is a characteristic of the entire set of functions <span class="math-container">$V$</span>. You could ask whether the parametrization <span class="math-container">$g : \Lambda \to V$</span> has something to do with the VC dimension. In fact, this might even be intuitive, because the VC dimension measures the &quot;capacity&quot; of the set of functions <span class="math-container">$V$</span>. Often, the &quot;number of parameters&quot; is used as a proxy for the &quot;capacity&quot; as well. However, this intuitive concept does not formalize well. 
In fact, the example <span class="math-container">$V = \{ \sin(\theta x) \}_{\theta \in \mathbb{R}}$</span> has infinite VC dimension despite having only one parameter, so the notion of low &quot;number of parameters&quot; corresponding to low &quot;capacity&quot; does not hold. In fact, the &quot;number of parameters&quot; is not well defined in the first place, since parametrizations are not unique and can have different numbers of parameters (the minimum of which is almost always 1 because of space-filling curves).</p> <h2>The Learning Machine</h2> <p>The learning machine is not simply the set <span class="math-container">$V$</span>, however, but a process for estimating the data generating process that produces the training data <span class="math-container">$\{ (x, y) \}_{i = 1}^n$</span>. This might mean picking a function set <span class="math-container">$V$</span> in advance, and minimizing the empirical risk <span class="math-container">$$ R_\text{emp} (f) = \sum_{i = 1}^n L(y_i, f(x_i)) $$</span> over the set <span class="math-container">$V$</span>, or in parametric form, minimizing <span class="math-container">$$ R_\text{emp} (g(\alpha)) = \sum_{i = 1}^n L(y_i, g(\alpha)(x_i)) $$</span> over the set <span class="math-container">$\Lambda$</span>. Note that <span class="math-container">$g(\alpha)$</span> is itself a function, which <span class="math-container">$x_i$</span> is being plugged into in the above expression. This is why the notation <span class="math-container">$g_\alpha$</span> is slightly better than <span class="math-container">$g(\alpha)$</span>, so we don't have to write awkward expressions like <span class="math-container">$g(\alpha)(x_i)$</span>.</p> <p>The learning machine can also be much more complicated. For instance, it also includes any regularization being used. 
Limiting the set <span class="math-container">$V$</span> is one type of regularization used to avoid over-fitting, but of course there are other types as well.</p> <h2>Footnote</h2> <p>* We should really write functions as <span class="math-container">$f$</span> not as <span class="math-container">$f(x)$</span>, which is technically not a function but an element of the range of the function, so we could write <span class="math-container">$V = \{ f(\alpha) \}_{\alpha \in \Lambda}$</span>, or better yet <span class="math-container">$V = \{ f_\alpha \}_{\alpha \in \Lambda}$</span> to avoid confusing the arguments of the function with the parameter that indicates which function we're talking about.</p>
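To make the depth-1 example concrete, here is a minimal Python sketch (the function names and the toy step-function data are my own illustrative choices, not part of the discussion above) of the parametrization <span class="math-container">$g(\alpha_1, \alpha_2, \alpha_3)$</span> and of empirical risk minimization over <span class="math-container">$\Lambda = \mathbb{R}^3$</span> under squared loss, searching candidate split points exhaustively:

```python
import numpy as np

def g(alpha, x):
    """The parametrization g: alpha = (a1, a2, a3) maps to the stump
    that equals a1 on (-inf, a3] and a2 on (a3, inf)."""
    a1, a2, a3 = alpha
    return np.where(x <= a3, a1, a2)

def fit_stump(x, y):
    """Empirical risk minimization over Lambda = R^3 with squared loss.
    For a fixed split a3 the optimal a1, a2 are the means of y on each
    side, so we only search candidate splits (midpoints of sorted x)."""
    xs = np.sort(np.unique(x))
    best_risk, best_alpha = np.inf, None
    for a3 in (xs[:-1] + xs[1:]) / 2:
        a1, a2 = y[x <= a3].mean(), y[x > a3].mean()
        risk = np.sum((y - g((a1, a2, a3), x)) ** 2)
        if risk < best_risk:
            best_risk, best_alpha = risk, (a1, a2, a3)
    return best_alpha

# Recover a noiseless step function: f(x) = 1 for x <= 0.4, else 3.
x = np.linspace(0, 1, 201)
y = np.where(x <= 0.4, 1.0, 3.0)
a1, a2, a3 = fit_stump(x, y)
```

With noiseless step data the search recovers the two level values exactly and the split point up to the grid spacing.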
429
statistical learning
What is next after finished reading Elements of Statistical Learning?
https://stats.stackexchange.com/questions/468483/what-is-next-after-finished-reading-elements-of-statistical-learning
<p>I am a Pure Maths PhD student specialising in functional analysis.</p> <p>I would like to work as a data scientist after my PhD graduation, particularly in the fields of machine learning, deep learning and artificial intelligence. </p> <p>I have some background in machine learning, such as linear regression, logistic regression, K-means clustering and SVMs. For deep learning, I know neural networks and CNNs. </p> <p>To build up my theoretical background in these fields, I started reading The Elements of Statistical Learning (ESL), which I gather is regarded as the bible of statistical learning. I find its contents manageable.</p> <p>In terms of programming, I have been using Python for the past 2 years and tutored an undergraduate Python course last year. So I think my Python skills, data structures and algorithms are average (at least, not beginner level). I have implemented some projects involving stochastic differential equation models using Python. </p> <p>My question is: what comes next after I finish reading ESL?</p> <p>I found a post on <a href="https://stats.stackexchange.com/q/18973/99818">CV</a> asking for references BEFORE reading ESL, but not AFTER ESL. </p>
<p>Well, the answer is probably a matter of preference: it depends on whether you want to specialize in some specific field (e.g. reinforcement learning) or are aiming for a more in-depth (but not limited to one subfield) view of machine learning.</p> <p>If it's the latter, I would recommend looking at the following two titles (both available online):</p> <ol> <li><p><a href="http://noiselab.ucsd.edu/ECE228/Murphy_Machine_Learning.pdf" rel="nofollow noreferrer">Machine Learning: A Probabilistic Perspective, Kevin P. Murphy</a></p></li> <li><p><a href="http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf" rel="nofollow noreferrer">Pattern Recognition and Machine Learning, Christopher M. Bishop</a></p></li> </ol>
430
statistical learning
Intro to Statistical Learning - Solutions for 2.1
https://stats.stackexchange.com/questions/539362/intro-to-statistical-learning-solutions-for-2-1
<p>I am reading An Introduction to Statistical Learning with Applications in R (ISLR) and I wonder what the answer to exercise 2.1 part (d) would be. The question is: if the variance of the error terms <span class="math-container">$$\sigma^2 = \mathrm{Var}(\epsilon)$$</span> is extremely high, would a more flexible method do worse or better? My intuition is that it does not matter which method we choose, since this is irreducible error, but most solutions I found said that a high variance of the error terms means the sample will have a lot of noise in the relationship, so we should prefer an inflexible method that is less likely to over-fit to this noise.</p> <p>Can someone explain it to me?</p>
<p>It matters which method you choose, because a more flexible method can fit the noise very easily, and you'll have to battle with that. As you mentioned, the noise is irreducible error, but an overfitted model will make much larger errors on the holdout set. The aim was never to reduce the irreducible error.</p>
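A small simulation illustrates this point (a sketch under my own assumptions: a 1-D k-nearest-neighbour regressor, a true signal of <span class="math-container">$2x$</span>, and noise with <span class="math-container">$\sigma = 3$</span> — none of which come from the exercise itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(x_train, y_train, x_test, k):
    """1-D k-nearest-neighbour regression: average the y of the k
    training points whose x is closest to the query point."""
    return np.array([
        y_train[np.argsort(np.abs(x_train - x0))[:k]].mean()
        for x0 in x_test
    ])

# True signal f(x) = 2x buried in large irreducible noise (sigma = 3).
sigma = 3.0
x_train = rng.uniform(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, sigma, 200)
x_test = rng.uniform(0, 1, 500)
y_test = 2 * x_test + rng.normal(0, sigma, 500)

# Flexible fit (k = 1) chases the noise; smoother fit (k = 15) averages it.
mse_flexible = np.mean((y_test - knn_predict(x_train, y_train, x_test, 1)) ** 2)
mse_smooth = np.mean((y_test - knn_predict(x_train, y_train, x_test, 15)) ** 2)
```

Roughly, the flexible k = 1 fit has test MSE near <span class="math-container">$2\sigma^2$</span>, while the k = 15 fit stays near <span class="math-container">$\sigma^2(1 + 1/15)$</span> plus a small bias: the irreducible <span class="math-container">$\sigma^2$</span> floor is the same for both, but the flexible fit adds variance on top of it.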
431
statistical learning
what machine learning isn&#39;t statistical?
https://stats.stackexchange.com/questions/497986/what-machine-learning-isnt-statistical
<p>If I understand correctly, statistical learning theory is just <em>one</em> approach to machine learning.</p> <p>What machine learning isn't statistical?</p> <p>Based on my very limited understanding, I thought that things like PAC learning or empirical risk minimization pretty much cover everything. Isn't statistics involved in all of this?</p> <p>Is there any good source that clearly explains this?</p> <p><strong>EDIT</strong></p> <p>My comment below shows the answers I was looking for.</p>
432
statistical learning
Decomposition of average squared bias (in Elements of Statistical Learning)
https://stats.stackexchange.com/questions/201779/decomposition-of-average-squared-bias-in-elements-of-statistical-learning
<p>I can't figure out how formula 7.14 on page 224 of <em>The Elements of Statistical Learning</em> is derived. Can anyone help me figure it out? </p> <p>$$\textrm{Average squared bias} = \textrm{Average}[\textrm{model bias}]^2 + \textrm{Average}[\textrm{estimation bias}]^2$$</p> <p><a href="https://i.sstatic.net/biRw1.png" rel="noreferrer"><img src="https://i.sstatic.net/biRw1.png" alt="enter image description here"></a></p>
<p>The result is basically due to the property of the best linear estimator. Note that we don't assume <span class="math-container">$f(X)$</span> is linear here. Nevertheless we can find the linear predictor that approximates <span class="math-container">$f$</span> the best. </p> <p>Recall the definition of <span class="math-container">$\beta_*$</span>: <span class="math-container">$\beta_{*} = \arg\min_\beta E{[(f(X) - X^T \beta)^2]}$</span>. We can derive the theoretical estimator for <span class="math-container">$\beta_*$</span>: <span class="math-container">\begin{align*} g(\beta) &amp;= E[(f(X) - X^T \beta)^2] = E [f^2(X)] - 2\beta^T E[Xf(X)] + \beta^T E[XX^T]\beta \\ &amp;\implies \frac{\partial{g(\beta)}}{\partial{\beta}} = -2 E{[Xf(X)]} + 2 E[XX^T]\beta = 0 \\ &amp;\implies \beta_{*} = E[X X^T]^{-1}E[X f(X)], \end{align*}</span> where we have assumed <span class="math-container">$E[X X^T]$</span> is invertible. I call it a theoretical estimator because we never know (in real-world scenarios, anyway) the marginal distribution of <span class="math-container">$X$</span>, i.e. <span class="math-container">$P(X)$</span>, so we cannot compute those expectations. You should still notice the resemblance of this estimator to the ordinary least squares estimator (if you replace <span class="math-container">$f$</span> with <span class="math-container">$y$</span>, the OLS estimator is the plug-in equivalent estimator; at the end I show they are the same for estimating the value of <span class="math-container">$\beta_*$</span>), which gives us another way of deriving the OLS estimator (via the law of large numbers).
</p> <p>The L.H.S. of (7.14) can be expanded as: <span class="math-container">\begin{align*} E_{x_0}[f(x_0) - E{\hat{f}_\alpha (x_0)}]^2 &amp;= E_{x_0}[f(x_0) -x_0^T\beta_{*}+ x_0^T\beta_{*} - E{\hat{f}_\alpha (x_0)}]^2 \\ &amp;= E_{x_0}[f(x_0) - x_0^T\beta_{*}]^2 + E_{x_0}[ x_0^T\beta_{*} - E{\hat{f}_\alpha (x_0)}]^2 \\ &amp;\;\;+ 2 E_{x_0}[(f(x_0) - x_0^T\beta_{*})(x_0^T\beta_{*}-E{\hat{f}_\alpha (x_0)})]. \end{align*}</span></p> <p>To show (7.14), one only needs to show the third term is zero, i.e. <span class="math-container">$$E_{x_0}[(f(x_0) - x_0^T\beta_{*})(x_0^T\beta_{*}-E{\hat{f}_\alpha (x_0)})] = 0, $$</span></p> <p>where the L.H.S. equals <span class="math-container">\begin{align*} LHS = E_{x_0}[(f(x_0) - x_0^T\beta_{*})x_0^T\beta_{*}] - E_{x_0}[(f(x_0) - x_0^T\beta_{*})E{\hat{f}_\alpha (x_0)})] \end{align*}</span></p> <p>The first term (for convenience, I have omitted <span class="math-container">$x_0$</span> and replaced it with <span class="math-container">$x$</span>): <span class="math-container">\begin{align} &amp;E{[(f(x) - x^T\beta_{*})x^T\beta_{*}]} = E{[f(x)x^T\beta_*]}- E{[(x^T\beta_*)^2]} \\ &amp;= E[f(x)x^T]\beta_* - \left(Var{[x^T\beta_*]} + (E{[x^T\beta_*]})^2\right) \\ &amp;= E[f(x)x^T]\beta_* - \left( \beta_*^T Var{[x]} \beta_* + (\beta_* ^T E[x])^2\right) \\ &amp;= E[f(x)x^T]\beta_* - \left( \beta_*^T (E[xx^T] - E[x]E[x]^T) \beta_* + (\beta_* ^T E[x])^2\right) \\ &amp;= E[f(x)x^T]\beta_* - E{[f(x)x^T]}E[xx^T]^{-1} E[xx^T]\beta_* + \beta_*^TE[x]E[x]^T \beta_*\\ &amp;\;\;- \beta_*^TE[x]E[x]^T \beta_* \\ &amp;= 0, \end{align}</span> where we have used the variance identity <span class="math-container">$Var{[z]} = E{[zz^T]} - E{[z]}E{[z]}^T$</span> twice (in the second and fourth steps); we have substituted for <span class="math-container">$\beta_*^T$</span> in the second-to-last line, and all the other steps follow from standard expectation/variance properties. 
In particular, <span class="math-container">$\beta_*$</span> is a constant vector w.r.t the expectation, as it is independent from where <span class="math-container">$x$</span> (or <span class="math-container">$x_0$</span>) is measured.</p> <p>The second term <span class="math-container">\begin{align} E{[(f(x) - x^T\beta_{*})E{\hat{f}_\alpha (x)}]} &amp;= E{[(f(x) - x^T\beta_{*}) E{[x^T\hat{\beta}_\alpha]}]} \\ &amp;= E{[E{[\hat{\beta}_\alpha}^T]x (f(x) - x^T\beta_{*})]} \\ &amp;= E{\hat{\beta}_\alpha}^TE{[x f(x) - x x^T\beta_*]} \\ &amp;=E{\hat{\beta}_\alpha}^T\left( E{[x f(x)]} - E[xx^T] E[xx^T]^{-1}E{[xf(x)]}\right)\\ &amp;=0, \end{align}</span> where the second equality holds because <span class="math-container">$E{\hat{f}_\alpha (x)}$</span> is a point-wise expectation where the randomness arises from the training data <span class="math-container">$y$</span>, so <span class="math-container">$x$</span> is fixed; the third equality holds as <span class="math-container">$E{\hat{\beta}_\alpha}$</span> is independent from where <span class="math-container">$x$</span> (<span class="math-container">$x_0$</span>) is predicted so it's a constant w.r.t the outside expectation. Combining the above results, the sum of these two terms is zero, which shows eq.(7.14). </p> <p>Although not related to the question, it is worth noting that <span class="math-container">$ f(X) = E[Y|X]$</span>, i.e. 
<span class="math-container">$f(X)$</span> is the optimal regression function, as <span class="math-container">$$ f(X) = E{[f(X) +\varepsilon |X]} = E[Y|X].$$</span> Therefore, <span class="math-container">\begin{align} \beta_{*} &amp;= E[XX^T]^{-1}E{[Xf(X)]} = E[XX^T]^{-1}E{[XE[Y|X]]} \\ &amp;= E[XX^T]^{-1}E[E[XY|X]] \\ &amp;= E[XX^T]^{-1}E[XY], \end{align}</span> and if we recall that the last estimator is the best linear estimator, the above equation basically tells us that using the optimal regression function <span class="math-container">$f(x)$</span> or the noisy version <span class="math-container">$y$</span> is the same as far as the point estimate is concerned. Of course, the estimator with <span class="math-container">$f$</span> will have better properties/efficiency, as it will lead to smaller variance, which can easily be seen from the fact that <span class="math-container">$y$</span> introduces extra error, or variance.</p>
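The vanishing cross term can also be checked numerically. In a finite sample, the normal equations make the least-squares residual exactly orthogonal to the fitted values, which is the sample analogue of the orthogonality used above. A sketch (the nonlinear <span class="math-container">$f$</span> and the Gaussian design are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# A sample design matrix X (n x p) and a deliberately nonlinear f(X).
n, p = 500, 3
X = rng.normal(size=(n, p))
f = np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]

# Empirical analogue of beta_* = E[X X^T]^{-1} E[X f(X)]: the least
# squares projection of f onto the span of the columns of X.
beta_star, *_ = np.linalg.lstsq(X, f, rcond=None)

# The normal equations give X^T (f - X beta_*) = 0, so the sample
# analogue of the cross term E[(f(x) - x^T b)(x^T b)] vanishes.
resid = f - X @ beta_star
cross = np.mean(resid * (X @ beta_star))
```

Here `cross` is zero up to floating-point error, mirroring the population identity proved above.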
433
statistical learning
Hastie &quot;statistical learning&quot; 2.28. Least squares and covariance
https://stats.stackexchange.com/questions/641539/hastie-statistical-learning-2-28-least-squares-and-covariance
<p>In Hastie's book &quot;statistical learning&quot;, just above equation 2.28, it says that <span class="math-container">$\mathbf{X}^T\mathbf{X} \rightarrow NCov(X)$</span> (when <span class="math-container">$N$</span> is large and <span class="math-container">$E(X)=0$</span>).</p> <p>Why is this true?</p> <p><span class="math-container">$Cov(X)$</span> is obviously <span class="math-container">$Cov(X,X)$</span>, and since <span class="math-container">$Cov(X,X) = Var(X)$</span> and <span class="math-container">$E(X)=0$</span>, <span class="math-container">$Cov(X)=E(X^2)$</span>. So the original equation says that <span class="math-container">$\mathbf{X}^T\mathbf{X}\rightarrow NE(X^2)$</span>.</p> <p>But why is <span class="math-container">$\mathbf{X}^T\mathbf{X}$</span> scalar? Shouldn't this be a p x p matrix, where p is the dimensionality?</p> <p>This is a book that requires a lot of work, apparently...</p>
<p><span class="math-container">$X = (X_1, \ldots, X_p)$</span> is a random vector with <span class="math-container">$p$</span> entries (not a scalar random variable).</p> <p>The expectation of this random vector is denoted <span class="math-container">$E[X] = (E[X_1], \ldots, E[X_p])$</span>.</p> <p>The <a href="https://en.wikipedia.org/wiki/Covariance_matrix" rel="nofollow noreferrer">covariance matrix</a> of this random vector is denoted <span class="math-container">$\text{Cov}(X)$</span>; it is a <span class="math-container">$p \times p$</span> matrix whose <span class="math-container">$(i, j)$</span> entry is <span class="math-container">$\text{Cov}(X_i, X_j)$</span>.</p> <p><span class="math-container">$\mathbf{X}$</span> is an <span class="math-container">$N \times p$</span> matrix where each row is an i.i.d. draw from the distribution of the random vector <span class="math-container">$X$</span>.</p> <p>The sample covariance matrix <span class="math-container">$\frac{1}{N} \mathbf{X}^\top \mathbf{X}$</span> is a <span class="math-container">$p \times p$</span> matrix whose <span class="math-container">$(i, j)$</span> entry is <span class="math-container">$\frac{1}{N} \sum_{n=1}^N X_{n, i} X_{n, j}$</span>. For each <span class="math-container">$n$</span>, we have <span class="math-container">$E[X_{n, i} X_{n, j}] = \text{Cov}(X_i, X_j)$</span>, so <span class="math-container">$\frac{1}{N} \sum_{n=1}^N X_{n, i} X_{n, j}$</span> tends to <span class="math-container">$\text{Cov}(X_i, X_j)$</span> by the law of large numbers.</p>
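A quick numerical check (a sketch; the particular covariance matrix is an arbitrary choice of mine) confirms both points: <span class="math-container">$\frac{1}{N}\mathbf{X}^\top\mathbf{X}$</span> is a <span class="math-container">$p \times p$</span> matrix, not a scalar, and its entries approach <span class="math-container">$\text{Cov}(X)$</span> as <span class="math-container">$N$</span> grows, by the law of large numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# A p = 3 dimensional zero-mean random vector with known covariance.
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 2.0, 0.3],
                  [0.0, 0.3, 1.0]])
N = 200_000
X = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=N)  # N x p

# (1/N) X^T X is a p x p matrix (not a scalar) approaching Cov(X).
sample_cov = X.T @ X / N
```

With this many rows, each entry of `sample_cov` matches the corresponding entry of `Sigma` to within a few hundredths.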
434
statistical learning
Reproducing table 3.3 from Elements of Statistical Learning
https://stats.stackexchange.com/questions/161517/reproducing-table-3-3-from-elements-of-statistical-learning
<p>I am trying to reproduce Table 3.3 in The Elements of Statistical Learning. Specifically, I am trying to get the coefficient estimates for ridge regression and the lasso. I know that the estimates can be a bit off depending on the seed, but I personally think mine are significantly off. The code is below, and I would appreciate it if anyone could give me some help. </p> <p>A link to the data is here: <a href="http://statweb.stanford.edu/~tibs/ElemStatLearn/index.html" rel="nofollow noreferrer">http://statweb.stanford.edu/~tibs/ElemStatLearn/index.html</a> </p> <p>And a link to the textbook is here: <a href="http://web.stanford.edu/~hastie/local.ftp/Springer/OLD/ESLII_print4.pdf" rel="nofollow noreferrer">http://web.stanford.edu/~hastie/local.ftp/Springer/OLD/ESLII_print4.pdf</a></p> <p>The relevant pages are 61 and 63.</p> <pre><code>library(glmnet) Prostate &lt;- read.table("prostate.data", sep = "") train &lt;- Prostate[,10] Y &lt;- Prostate[,9] X &lt;- Prostate[, -c(9,10)] X.scaled &lt;- scale(X, TRUE, TRUE) set.seed(1) cv1 &lt;- cv.glmnet(X.scaled[train,], Y[train], alpha = 0) plot(cv1) ridge.r &lt;- glmnet(X.scaled[train,], Y[train], alpha = 0, lambda=cv1$lambda.min) coef(ridge.r) s0 (Intercept) 2.467025897 lcavol 0.486393030 lweight 0.599533912 age -0.014470377 lbph 0.137317539 svi 0.674949305 lcp -0.110476104 gleason 0.019892200 pgg45 0.006930003 cv2 &lt;- cv.glmnet(X.scaled[train,], Y[train], alpha = 1) plot(cv2) lasso.r &lt;- glmnet(X.scaled[train,], Y[train], alpha = 1, lambda =cv2$lambda.min) coef(lasso.r) s0 (Intercept) 2.466987400 lcavol 0.556835090 lweight 0.605923475 age -0.016916532 lbph 0.138909124 svi 0.700170345 lcp -0.170851957 gleason . pgg45 0.008051421 </code></pre> <p>The output table from the textbook is below. <img src="https://i.sstatic.net/4Nooa.png" alt="enter image description here"></p>
435
statistical learning
Alternative to The Elements of Statistical Learning: Data Mining, Inference, and Prediction
https://stats.stackexchange.com/questions/332842/alternative-to-the-elements-of-statistical-learning-data-mining-inference-and
<p>I am taking a class in statistics which uses The Elements of Statistical Learning: Data Mining, Inference, and Prediction as a textbook. However, I find this book very terse. </p> <p>Could anyone please recommend a book which has similar topic coverage but contains more examples and detailed explanations and does not omit mathematical derivations.</p> <p>P.S. I found Introduction to Statistical Learning as a good introductory level book. It was easy to read and provided a good overview of ML methods. However, I would like to see more rigorous explanations and mathematical derivations of the methods.</p>
436
statistical learning
Books to read on ML after ESL (Elements of Statistical Learning)?
https://stats.stackexchange.com/questions/460411/books-to-read-on-ml-after-esl-elements-of-statistical-learning
<p>I have almost finished reading ESL: Elements of Statistical Learning. I come from a strong mathematical and statistical background, and that was my first book about Machine Learning.</p> <p><strong>What other books would be good to go over now?</strong></p> <p>I am aware of books such as:</p> <ul> <li>Machine Learning: A Bayesian and Optimization Perspective (.Net Developers Series)</li> <li>Pattern Recognition and Machine Learning by Bishop</li> <li>Machine Learning: A Probabilistic Perspective, by Murphy</li> </ul> <p>and I have heard they are all good. However, I am unsure of the reading order, or whether I would be, if you will, 'double-reading' the same material (although that would not be a terrible idea, as not everything from ESL stuck with me).</p> <p>Thank you</p>
<p>You're headed the right way! In my journey into machine learning, I found <strong>Python Machine Learning</strong> by Sebastian Raschka very helpful at the start. Though I read a lot of books back then, this one seriously helped me kick-start the journey.</p> <p>Besides this one, you could check out some more books. <a href="https://www.onlinebooksreview.com/articles/best-machine-learning-books-with-python" rel="nofollow noreferrer">Here</a> is a good list I followed. Happy machine learning.</p>
437
statistical learning
How validation set in statistical learning works?
https://stats.stackexchange.com/questions/500236/how-validation-set-in-statistical-learning-works
<p>In statistical learning, we split the data into three parts: training, validation, and test. With the training data we can fit a model <span class="math-container">$T$</span>; then we seem to optimize or change the model using the validation data. How does that happen (since the model <span class="math-container">$T$</span> is sort of fixed)?</p> <p><em>Updated:</em> I see the validation data set is for model selection, i.e. it doesn't change the model but picks one from several. But what if we only have one model to train and test? E.g. if we want to find the value of k and the distance in a k-NN model, it seems that we only need to optimize the model instead of selecting one from several.</p> <p><em>A relevant question:</em> it is said that pixel <em>distance</em> (which I guess means subtracting the RGB values pixel by pixel, then summing the squares and taking the square root) is never used in kNN for image classification. This sounds reasonable, since it doesn't seem to provide a good way of measuring how much the objects in two images differ. But then what sort of <em>distance</em> do we define for kNN image classification? It seems difficult to give such a definition directly (particularly if one considers occlusion and deformation), so do we just use machine learning to let the computer decide what type of distance to use?</p>
<p>Imagine a multiple linear regression with a penalty on the magnitude of the coefficients (otherwise known as the Lasso).</p> <p>On the training data, you fit your regression coefficients <span class="math-container">$\underline{w}$</span> by minimizing the loss function <span class="math-container">$L$</span>.</p> <p>On the validation data, you find the optimal value of the penalty <span class="math-container">$\lambda$</span> (see <a href="https://en.wikipedia.org/wiki/Lasso_(statistics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lasso_(statistics)</a> for a description of <span class="math-container">$\lambda$</span> and the method) by minimizing the loss function <span class="math-container">$L+f(\lambda)$</span>, where <span class="math-container">$f(\lambda)=\lambda \times \left \| \underline{w} \right \|_{2}$</span>.</p> <p>If you were also to fit <span class="math-container">$\lambda$</span> on the training data, you would directly minimize the loss function <span class="math-container">$L+f(\lambda)$</span>, which, as you can see, attains its minimum at <span class="math-container">$\lambda=0$</span> every time. This would lead to severe over-fitting (not to mention that you would not be able to shrink your coefficients at all).</p> <p>Hence, in general, you use the validation data to conduct model selection / hyper-parameter tuning of your model.</p>
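A minimal sketch of this procedure — here with ridge regression, which has a closed form, rather than the Lasso, and with simulated data and an arbitrary <span class="math-container">$\lambda$</span> grid of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam):
    """Minimize ||y - Xw||^2 + lam * ||w||^2 (closed form)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Simulated data, split into training and validation sets.
n, p = 100, 10
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.normal(0, 1.0, n)
X_tr, y_tr, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

# Fit w on the training data for each lambda in a grid, then pick the
# lambda whose fit minimizes the loss on the validation data.
grid = [0.0, 0.1, 1.0, 10.0, 100.0]
val_mse = [np.mean((y_val - X_val @ ridge_fit(X_tr, y_tr, lam)) ** 2)
           for lam in grid]
best_lam = grid[int(np.argmin(val_mse))]
```

The key point is that `ridge_fit` only ever sees the training data; the validation data is used solely to compare candidate values of <span class="math-container">$\lambda$</span>.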
438
statistical learning
Measure-theoretically rigorous treatment of statistical learning theory
https://stats.stackexchange.com/questions/552709/measure-theoretically-rigorous-treatment-of-statistical-learning-theory
<p>My main source on statistical learning theory has been <a href="https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/" rel="nofollow noreferrer">Shwartz/Ben-David</a>. This is a good book but it's a little vague from a measure-theoretic point of view. For example, in the definition of PAC learnability (Definition 3.1), learning happens with respect to &quot;every distribution over <span class="math-container">$X$</span>&quot;, <span class="math-container">$X$</span> being the input domain, which is just a set. No measure-space structure is mentioned for <span class="math-container">$X$</span>, and in particular it's not clear if &quot;every distribution&quot; means all possible measurable-space structures on <span class="math-container">$X$</span>, or just one specific sigma-algebra.</p> <p>For another example, the idea of a learning algorithm as something mapping training sets to hypothesis functions has no measurability assumptions on it, yet it requires at least some, since we want to write down expressions like &quot;probability (with respect to the random training set) that the test error of the returned hypothesis is less than <span class="math-container">$\epsilon$</span>&quot;, which implies the test error is a measurable random variable.</p> <p>I also looked at Vapnik, who seems to generally work with density functions, not abstract measure spaces.</p> <p>Is there a measure-theoretic treatment of this type of material, either as a book or a series of articles?</p>
<p>I will try to explain systematically, so apologies for going over things you probably already know.</p> <p>To construct a probability space, <span class="math-container">$\Omega$</span>, we first choose some elementary subsets/events appropriate to the situation; for example, in a coin toss the events might be <em>heads</em> and <em>tails</em>. We then define a probability measure, <span class="math-container">$\mathbb{P}$</span>, on these events. Finally, a sigma-algebra, <span class="math-container">$\Sigma,$</span> is generated by admitting the subsets/events resulting from all possible complementation, (countable) union and intersection. Not all of the events will always be practically relevant, but it doesn't hurt that they exist.</p> <p>While we can largely work on intuition in a simple example such as &quot;heads or tails&quot;, to scale the approach we define a function <span class="math-container">$X:\Omega\to\mathbb{R}$</span> which assigns a single value to each of these events. For example, <span class="math-container">$X(heads)=1$</span> and <span class="math-container">$X(tails)=0$</span>, with the induced sigma-algebra additionally consisting of the empty set and the event <span class="math-container">$\{heads, tails\}$</span> (i.e. <span class="math-container">$\Omega$</span> itself). <span class="math-container">$X$</span> is, of course, called a <em>random variable</em>.</p> <p>To define events on the entire real line we take <span class="math-container">$X$</span> to be the identity function. The elementary events can, for example, be those of the form <span class="math-container">$\{(-\infty,a)\}:a\in\mathbb{R}$</span>. These generate the Borel sigma-algebra. 
It would be endlessly tedious to assign probabilities for each of these elementary events, but we don't need to, since the structure can be induced indirectly (see below).</p> <p>Now, the events <span class="math-container">$\{\omega\in\Omega:X(\omega)\in(-\infty,a),a\in\mathbb{R}\}$</span> exist in <span class="math-container">$\Sigma$</span>, and <span class="math-container">$X$</span> is consequently a <span class="math-container">$\Sigma$</span>-measurable function. The <span class="math-container">$\omega$</span> references tend to be omitted, which results in a reference to events <span class="math-container">$\{X\in(-\infty,a):a\in\mathbb{R}\}$</span>.</p> <p>I hope this addresses your question regarding what sigma-algebra a random variable tends to be defined on. There are a few loose ends, though. Let's call <span class="math-container">$F:\mathbb{R}\to[0,1]:F(a)=\mathbb{P}(X\le a)$</span> the <em>distribution function</em> of <span class="math-container">$X$</span>. Now take any function that is monotonic, approaches 0 as <span class="math-container">$a\rightarrow-\infty$</span>, approaches 1 as <span class="math-container">$a\rightarrow\infty$</span>, and is right-continuous. The Skorokhod Representation Theorem establishes that any such function can be uniquely associated with a random variable possessing this distribution function. This means we can avoid worrying about elementary events when it is more convenient to specify a probability space based on its distribution function instead.</p> <p>The last thing that might be unclear is why we use integration with respect to the Lebesgue measure, <span class="math-container">$\lambda$</span>, to obtain probabilities from the density functions pertaining to continuous distributions. This comes from the Radon-Nikodym Theorem. 
So long as <span class="math-container">$\Sigma$</span> is the Borel sigma-algebra and, for <span class="math-container">$A\in\Sigma$</span>, <span class="math-container">$\lambda(A)=0\Rightarrow\mathbb{P}(A)=0$</span> (i.e. <span class="math-container">$\mathbb{P}$</span> is absolutely continuous with respect to <span class="math-container">$\lambda$</span>), there exists a Radon-Nikodym derivative <span class="math-container">$f$</span> - the <em>density function</em> - whose integrals with respect to <span class="math-container">$\lambda$</span> over the Borel sets <span class="math-container">$A\in\Sigma$</span> return the values <span class="math-container">$\{\mathbb{P}(A):A\in\Sigma\}$</span>.</p> <p>Discrete distributions like the &quot;heads&quot; and &quot;tails&quot; example do not generate the Borel sigma-algebra, which is why they don't have density functions defined with respect to the Lebesgue measure. Of course, one could set up an alternative system based on the counting measure that would work for suitable discrete distributions. In fact, this gives rise to the <em>probability mass function</em> in cases where absolute continuity of the discrete probability measure with respect to the counting measure applies.</p> <p>In his <em>Statistical Learning Theory</em> (1998), Vapnik presents the measure-theoretic framework for probability work from page 59, although it's necessarily rather abridged.</p>
439
statistical learning
Is it allowed to refer to Artificial Neural Networks as Statistical learning?
https://stats.stackexchange.com/questions/524205/is-it-allowed-to-refer-to-artificial-neural-networks-as-statistical-learning
<p>I am producing a research statement to be sent to a statistics department, and I was trying to avoid the term Machine Learning in favour of the friendlier Statistical Learning. Perhaps I cannot avoid using it entirely.</p>
<p>The classic <a href="https://web.stanford.edu/%7Ehastie/ElemStatLearn/" rel="noreferrer">The Elements of Statistical Learning</a> handbook by Hastie et al discusses neural networks among other algorithms, so they certainly qualify as “statistical learning” algorithms.</p> <p>Depending on whom you’d ask, neural networks are either statistics, statistical learning, pattern recognition, machine learning, deep learning, or artificial intelligence. There’s no single, agreed category used by everybody to describe them.</p>
440
statistical learning
Free PDF for Bayes with R, similar to Elements of Statistical Learning
https://stats.stackexchange.com/questions/47442/free-pdf-for-bayes-with-r-similar-to-elements-of-statistical-learning
<p>Is there a good book/pdf similar to "Elements of Statistical Learning" that's available for free online, that deals with Bayesian statistics, ideally with code for <code>R</code>?</p>
<p>Well, given that you asked for something similar to the elements, I'm going to assume that you are of a machine learning bent.</p> <p>Therefore, I would suggest the following:</p> <p><a href="http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/090310.pdf" rel="noreferrer">Bayesian Reasoning and Machine Learning</a></p> <p>Additionally, if you are looking for something a little more introductory, I would suggest <a href="http://www.greenteapress.com/thinkbayes/" rel="noreferrer">Think Bayes</a></p>
441
statistical learning
Understanding the Bootstrap method in *Introduction to Statistical Learning*
https://stats.stackexchange.com/questions/475693/understanding-the-bootstrap-method-in-introduction-to-statistical-learning
<p>I am having a hard time understanding the Bootstrap method. In the book <a href="http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf" rel="nofollow noreferrer"><em>Introduction to Statistical Learning</em></a> (pp. 187-190) the Bootstrap method is explained by first using &quot;simulated datasets&quot; from an &quot;original population&quot; to estimate <span class="math-container">$\alpha$</span>. But this approach does not seem to be applicable in practical scenarios, so we use Bootstrap samples instead. The book says that Bootstrap samples are taken from the &quot;original data set&quot;.</p> <p>I really don't understand what these terms mean. Could someone please explain in simple words what those pages mean by these terms, and hence what this means about how the Bootstrap method works?</p>
<p>Bootstrapping is introduced as a method to <em>estimate</em> the variance of a <em>statistic</em> <span class="math-container">$S$</span>, given a <em>sample</em> <span class="math-container">$X=\{X_1, X_2, \ldots, X_n\}$</span>.</p> <p>Usually, there are two different scenarios. One is when you know the analytical expression of the distribution of <span class="math-container">$S$</span>. The other is when you cannot infer the distribution of <span class="math-container">$S$</span>. In the first case, you can simply use the theoretical results to calculate the properties of <span class="math-container">$S$</span>. In the second case, <em>bootstrapping</em> provides a workaround that gives you an <strong>approximated</strong> form of the distribution of <span class="math-container">$S$</span>.</p> <p>The book gives an example, where <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are random variables and you are interested in the quantity <span class="math-container">$\alpha$</span>, which is a function of the variances of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, <span class="math-container">$\sigma_X^2$</span>, <span class="math-container">$\sigma_Y^2$</span>, and their covariance <span class="math-container">$\sigma_{XY}$</span>.
Since these are unknown, you can estimate them from the sample, getting <span class="math-container">$\hat{\alpha}$</span>.</p> <p>Here, they <em>simulate</em> different scenarios to show you what the variability of <span class="math-container">$\hat{\alpha}$</span> can be.</p> <p>Remember that in reality, you have only <strong>one</strong> <span class="math-container">$\hat{\alpha}$</span>.</p> <p>But, let's say that you <strong>know</strong> the <strong>expected distributions</strong> of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>; then you can simulate different scenarios by <em>sampling</em> <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> from those <strong>expected distributions</strong>.<br /> For instance, let's say that <span class="math-container">$Z=(X, Y)$</span> is a 2-dimensional random variable defined by merging <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. Let's suppose that <span class="math-container">$Z$</span> follows a 2-dimensional Normal distribution with mean <span class="math-container">$\mu_Z=(\mu_X, \mu_Y)$</span> and variance-covariance matrix <span class="math-container">$\Sigma$</span>.</p> <p><span class="math-container">$$ Z|\mu_Z, \Sigma \sim N(\mu_Z, \Sigma)\\ \Sigma = \pmatrix{\sigma_X^2 &amp; \sigma_{XY} \\ \sigma_{XY} &amp; \sigma_Y^2} $$</span></p> <p>Then, simulating this stochastic system 1000 times is equivalent to randomly sampling <span class="math-container">$Z$</span> from the given distribution 1000 times.<br /> This is an R snippet where you need to specify the mean, variance, and covariance values (note the <code>c()</code> needed to build the mean vector):</p> <pre><code>mu_z    &lt;- c(mu_x, mu_y)               # mean vector
cov_mat &lt;- rbind(c(sig_x,  sig_xy),
                 c(sig_xy, sig_y))     # variance-covariance matrix
z &lt;- MASS::mvrnorm(n = 1000, mu = mu_z, Sigma = cov_mat)
</code></pre> <p>Now that you have 1000 <span class="math-container">$Z$</span> (<span class="math-container">$X$</span>, <span class="math-container">$Y$</span> pairs), you can
calculate <span class="math-container">$\hat{\alpha}$</span> from each.<br /> The advantage of simulating is that we know the theoretical variances and covariance from which we sampled <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, so we can compare the simulated <span class="math-container">$\hat{\alpha}$</span> with the &quot;true&quot; <span class="math-container">$\alpha$</span>.</p> <p>As you can see, they show that the maximum likelihood estimate of <span class="math-container">$\alpha$</span> (its mean) is pretty close to the theoretical value 0.6:</p> <p><span class="math-container">$$ \bar{\alpha}=\frac{1}{1000}\sum_{i=1}^{1000} \hat{\alpha}_i=0.5996 $$</span></p> <p>with a standard deviation equal to 0.083. This value tells you how close the estimate is to the real <span class="math-container">$\alpha$</span>.</p> <p>Now we go back to the real scenario, where you have only <strong>one</strong> sample with an <em>unknown distribution</em>.</p> <p>The question is: <em>how can we estimate the error of our estimate with only one sample</em>?</p> <p>This is where the <em>bootstrap</em> comes into play.</p> <p><em>Bootstrapping</em> is a procedure to <em>estimate</em> the variability of a statistic <em>given a single sample</em>.</p> <p>The procedure is simple.
You repeat, a large number of times <span class="math-container">$i=1,\ldots,N$</span>:</p> <ol> <li>Randomly sample <span class="math-container">$n$</span> observations (<span class="math-container">$n$</span> equal to the number of observations in the dataset) from the dataset <strong>with replacement</strong></li> <li>Estimate your statistic (<span class="math-container">$\hat{\alpha}_i$</span> in our example) using this sample</li> </ol> <p>Finally:</p> <ol start="3"> <li>Calculate the mean and standard deviation, or quantile intervals</li> </ol> <p>Now you can see the principle behind this procedure.<br /> The main assumption is that your sample is one realisation of infinitely many possible scenarios, all distributed following some theoretical distribution. That means there is an unknown distribution driving the phenomenon, and when you observe the phenomenon you are randomly sampling from that distribution. The parameters of this distribution are fixed (but unknown); the data vary.</p> <p>When you don't have multiple samples, you can assume that your data roughly represent the statistical properties of the underlying population. Then you can randomly sample with replacement and estimate the variability of your statistic.</p>
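The steps above can be sketched numerically. The following is a minimal illustration; the data-generating process, sample size, and the ISLR-style definition of $\hat{\alpha}$ used here are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the original (X, Y) sample (illustrative assumption)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def alpha_hat(x, y):
    # ISLR-style alpha = (var(Y) - cov(X,Y)) / (var(X) + var(Y) - 2 cov(X,Y))
    c = np.cov(x, y)
    return (c[1, 1] - c[0, 1]) / (c[0, 0] + c[1, 1] - 2 * c[0, 1])

# Steps 1-2: resample with replacement, re-estimate the statistic each time
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # n index draws with replacement
    boot[b] = alpha_hat(x[idx], y[idx])

# Step 3: summarize the bootstrap distribution
se = boot.std(ddof=1)                     # bootstrap standard error of alpha-hat
ci = np.quantile(boot, [0.025, 0.975])    # percentile interval
```

The key point is that no second sample and no knowledge of the underlying distribution were needed: the variability estimate comes entirely from resampling the one dataset.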
442
statistical learning
Justifying an early equation from *Introduction to Statistical Learning*
https://stats.stackexchange.com/questions/206645/justifying-an-early-equation-from-introduction-to-statistical-learning
<p>I'm self-studying <em>Introduction to Statistical Learning</em>. Page 19 of the book states the following:</p> <blockquote> <p>Consider a given estimate $\hat{f}$ and a set of predictors $X$, which yields the prediction $\hat{Y} = \hat{f}(X)$. Assume for a moment that both $\hat{f}$ and $X$ are fixed. Then, it is easy to show that</p> <p>$$ E(Y-\hat{Y})^2 = E[f(X) + \epsilon - \hat{f}(X)]^2 = [f(X) - \hat{f}(X)]^2 + Var(\epsilon)$$</p> </blockquote> <p><strong>Question:</strong> How exactly is the step from $E[f(X) + \epsilon - \hat{f}(X)]^2$ to $[f(X) - \hat{f}(X)]^2 + Var(\epsilon)$ justified?</p>
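For reference, the omitted step expands the square and uses that $\epsilon$ has mean zero while $f(X)$ and $\hat{f}(X)$ are fixed (non-random) here:

```latex
\begin{aligned}
E\left[f(X) + \epsilon - \hat{f}(X)\right]^2
  &= \left[f(X) - \hat{f}(X)\right]^2
     + 2\left[f(X) - \hat{f}(X)\right]E[\epsilon]
     + E[\epsilon^2] \\
  &= \left[f(X) - \hat{f}(X)\right]^2 + \mathrm{Var}(\epsilon),
\end{aligned}
```

since $E[\epsilon]=0$ kills the cross term, and $E[\epsilon^2]=\mathrm{Var}(\epsilon)$ for a zero-mean error.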
443
statistical learning
Introduction to Statistical Learning with R Equation 2.7
https://stats.stackexchange.com/questions/451021/introduction-to-statistical-learning-with-r-equation-2-7
<p>I'm really confused about equation 2.7 on page 34 in the Introduction to Statistical Learning with R text book found here: <a href="http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf" rel="nofollow noreferrer">http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf</a>. The book states: "Here the notation E(y0 - f_hat(x0))^2 defines the expected test MSE, and refers to the average test MSE that we would obtain if we repeatedly estimated f using a large number of training sets, and tested each at x0.</p> <p>I'm confused about exactly what is meant by x0 in this context. Is this a common identical individual observation row that is shared among all the various test data sets? Or, is x0 the collection of all x in each test set - like the observations in the test data associated with k-fold cross validation? Or is it something else? I have similar confusion about y0.</p>
<p><span class="math-container">$x_0$</span> is any value in the test set, in contrast to <span class="math-container">$x_1,x_2,\ldots, x_n$</span> in the training set. Similarly the pair <span class="math-container">$(x_0,y_0)$</span> is any such pair in the test set. The aim is to minimise the expected square of the error on data in the test set which was not considered (or even looked at) when training the model.</p> <p>Page 30 of your linked book says </p> <blockquote> <p>To state it more mathematically, suppose that we fit our statistical learning method on our training observations <span class="math-container">$\{(x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n)\}$</span>,and we obtain the estimate <span class="math-container">$\hat f$</span>. We can then compute <span class="math-container">$\hat f(x_1),\hat f(x_2),\ldots,\hat f(x_n)$</span>. If these are approximately equal to <span class="math-container">$y_1,y_2,\ldots,y_n$</span>, then the training MSE given by (2.5) is small. However, we are really not interested in whether <span class="math-container">$\hat f(x_i)\approx y_i$</span>; instead, we want to know whether <span class="math-container">$\hat f(x_0)$</span> is approximately equal to <span class="math-container">$y_0$</span>,where <span class="math-container">$(x_0,y_0)$</span> is a previously unseen test observation not used to train the statistical learning method. </p> </blockquote>
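A toy sketch of the distinction (the data-generating process and sizes below are illustrative assumptions): fit on the training pairs only, then evaluate the squared error at a held-out pair $(x_0, y_0)$. The expected test MSE is the average of such squared errors over many training sets and test points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training observations (x_1, y_1), ..., (x_n, y_n)
x_train = rng.uniform(-2, 2, size=50)
y_train = x_train ** 2 + rng.normal(scale=0.1, size=50)

# One previously unseen test observation (x_0, y_0), not used in fitting
x0 = 1.5
y0 = x0 ** 2 + rng.normal(scale=0.1)

# Fit f-hat by least squares on the training data only
coefs = np.polyfit(x_train, y_train, deg=2)
f_hat_x0 = np.polyval(coefs, x0)

sq_err = (y0 - f_hat_x0) ** 2   # one realisation of (y0 - f_hat(x0))^2
```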
444
statistical learning
Where do artificial neural networks belong in the &#39;taxonomy&#39; of statistical learning methods?
https://stats.stackexchange.com/questions/320664/where-do-artificial-neural-networks-belong-in-the-taxonomy-of-statistical-lear
<p>I'm a non-stats person trying to learn more about statistical learning methods, and to organize my thinking I am trying to construct a mental taxonomy of the methods I'm learning about. For instance: </p> <blockquote> <p>Statistical learning methods can be divided into supervised and unsupervised categories. </p> <p>Supervised methods can be divided into linear and non-linear methods. Supervised linear methods include generalized linear models and their derivations. Non-linear methods include tree-based methods, support vector machines, etc. </p> <p>Unsupervised methods include K-nearest neighbours, PCA, etc.</p> </blockquote> <p>What I'm struggling with is the position of artificial neural networks in this taxonomy. </p> <p>Neural networks allow us to reframe the variation in a dataset -- so are they an unsupervised learning method, similar to PCA? However, we can train them to perform a classification task -- so does that make them a supervised method? Or, are they better understood as an <em>implementation</em> of statistical learning methods? For instance, I have seen (but not necessarily understood) references to using support vector machines <em>within</em> a neural network. </p> <p>My apologies if this is a basic question. I have tried reading around it and remain confused but direction to further resources would be appreciated. I'm working through ISLR, and in part my confusion stems from the fact that that text makes no mention of neural networks. </p>
<p>Neural nets form a broad class of models, and cover many parts of the taxonomy you're describing (and even extend outside it). An individual neural net (or subclass of them) could be placed into the taxonomy, but the entire class of neural nets cannot.</p> <p>For example, neural nets can be used for both supervised and unsupervised learning problems, depending on the loss function. They can be linear or nonlinear, depending on the network architecture and activation function. Many other models (e.g. linear/logistic regression, kernel machines, PCA, etc.) are equivalent to a particular form of neural net.</p> <p>Neural nets can also be used to solve problems that are not statistical learning problems at all. For example, Hopfield nets can be used to solve optimization problems. There are even computationally universal neural nets that can implement every possible algorithm (see <a href="https://stats.stackexchange.com/questions/220907/meaning-and-proof-of-rnn-can-approximate-any-algorithm">here</a>, but this is a theoretical construction that would not be used in practice).</p>
445
statistical learning
Derivation of EPE in “The elements of statistical learning ”
https://stats.stackexchange.com/questions/314517/derivation-of-epe-in-the-elements-of-statistical-learning
<p>I am currently trying to read the &quot;<a href="http://amzn.to/2yXDxdf" rel="nofollow noreferrer">Elements of Statistical Learning</a>&quot;, by Efron, Hastie, and Tibshirani, and already at the beginning there is a bit above my level in mathematics. I have 3 questions regarding the move from (2.9) to (2.10):</p> <ol> <li><p>What is the meaning of integrating with respect to Pr(dx,dy) instead of with respect to dx,dy by themselves?</p> </li> <li><p>Since this is an indefinite integral, shouldn't there be a constant C or something afterwards?</p> </li> <li><p>This is more about the intuition behind this: why is the expected value of the loss function f is the same as the area beneath the function (that is, the integral of f)?</p> </li> </ol> <p><a href="https://i.sstatic.net/GeTkg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GeTkg.png" alt="enter image description here" /></a></p>
<blockquote> <ol> <li>what is the meaning of integrating with respect to Pr(dx,dy) instead of with respect to dx,dy by themselves?</li> </ol> </blockquote> <p>You are missing the notion that this is an expectation, that is, the average value of $(Y-f(X))^2$ under the joint distribution of $(X,Y)$. (Using <em>Pr</em> is clearly not the most inspired choice!)</p> <blockquote> <ol start="2"> <li>since this is an indefinite integral, shouldn't there be a +C or something afterwards?</li> </ol> </blockquote> <p>This is a regular integral and not an anti-derivative. The authors did not put the domain of integration on the integral sign, as is common when this domain does not suffer from possible confusion.</p> <blockquote> <ol start="3"> <li>this is more about the intuition behind this: why is the expected value of the loss function f is the same as the area beneath the function (that is, the integral of f)?</li> </ol> </blockquote> <p>First, $f(X)$ is not the loss function but the transform of the random variable $X$. The loss function is $(Y-f(X))^2$. Second, the integral of the loss function under the probability measure is its average or averaged value. This is <em>the</em> definition for expectations.</p> <p>About this entire excerpt, one way to look at it is to see it as a probabilistic version of the <em>Pythagorean theorem</em>: the average square distance between $Y$ and $f(X)$ is the sum $$\mathbb{E}[(Y-f(X))^2]=\mathbb{E}[(Y-\mathbb{E}[Y|X])^2]+\mathbb{E}[(\mathbb{E}[Y|X]-f(X))^2]$$ since the terms $(Y-\mathbb{E}[Y|X])$ and $(\mathbb{E}[Y|X]-f(X))$ are orthogonal in this probabilistic sense.</p>
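The Pythagorean identity at the end is easy to check by simulation; here is a minimal sketch (the particular $X$, $Y$, and $f$ below are my own illustrative choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(0,1), Y = X + eps with eps ~ N(0,1), so E[Y|X] = X; take f(X) = 0.5 X.
n = 1_000_000
x = rng.normal(size=n)
eps = rng.normal(size=n)
y = x + eps
f = 0.5 * x

lhs = np.mean((y - f) ** 2)                          # E[(Y - f(X))^2]
rhs = np.mean((y - x) ** 2) + np.mean((x - f) ** 2)  # noise term + approximation term
# lhs and rhs agree up to Monte Carlo error (both near 1 + 0.25),
# because the cross term E[(Y - E[Y|X])(E[Y|X] - f(X))] vanishes.
```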
446
statistical learning
Explanation on a Minsky&#39;s critique on statistical learning related to XOR
https://stats.stackexchange.com/questions/134857/explanation-on-a-minskys-critique-on-statistical-learning-related-to-xor
<p>I was listening to the first session of society of Minds by Minsky (2011) and he mentions at some point around minute 48 the following:</p> <p>"...lots of statistical learning tools is good for lots of applications, but they won't cut it to solve hard problems, where the hypothesis more complicated than seven or eight variables interaction, so most statistical learning people assume that if you get a lot of partial ones then you can look at combinations of ones that have high correlations with the result, you can start combining them and then get better and better, however mathematically if the effect you are looking for depends on exclusive or of several variables, then there is no way to approach that by successive approximations if any one of the variables is missing, any correlation of the phenomenon with the others, anyways that is a long story but I think it is worth complaining about...."</p> <p>Could somebody please explain me more about this phenomenon or guide me to a reference that I could read about it?</p>
<p>Minsky is famous for criticizing early (single-layer) neural networks for their inability to solve the XOR problem. That is likely what he is referring to here. Linear statistical relationships are not enough to detect patterns that resemble the XOR function.</p> <p><a href="http://www.ucs.louisiana.edu/~isb9112/dept/phil341/histconn.html" rel="nofollow">http://www.ucs.louisiana.edu/~isb9112/dept/phil341/histconn.html</a> </p>
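The phenomenon Minsky describes is easy to verify numerically: in the XOR truth table, each input on its own has exactly zero correlation with the output, so any method that ranks variables by marginal correlation never picks up the signal. A minimal check:

```python
import numpy as np

# XOR truth table: y = x1 XOR x2
x1 = np.array([0, 0, 1, 1])
x2 = np.array([0, 1, 0, 1])
y = x1 ^ x2  # [0, 1, 1, 0]

# Marginal (linear) association of each single input with the output
r1 = np.corrcoef(x1, y)[0, 1]
r2 = np.corrcoef(x2, y)[0, 1]
# both correlations are exactly zero, yet (x1, x2) jointly determine y
```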
447
statistical learning
Statistical Learning With Sparsity: Direct Inspection of the LASSO function
https://stats.stackexchange.com/questions/432403/statistical-learning-with-sparsity-direct-inspection-of-the-lasso-function
<p>In the book Statistical Learning with Sparsity: The Lasso and Generalizations, in section 2.4.1, they mention that the absolute value of <span class="math-container">$\beta$</span> has no derivative at <span class="math-container">$\beta=0$</span>, therefore they proceed by direct inspection to determine the value of <span class="math-container">$\beta$</span> in a piece-wise manner, which is shown in 2.10. Does anyone know what they mean by direct inspection? Did they draw the function out and inspect it? I can't see how they arrived at 2.10 from inspecting 2.9.</p> <p><a href="https://i.sstatic.net/VQ6xx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VQ6xx.png" alt="enter image description here"></a></p>
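For what it's worth, here is a sketch of the case analysis that &quot;direct inspection&quot; usually refers to. I am assuming the single standardized-predictor criterion of that section, $\frac{1}{2N}\sum_{i=1}^{N}(y_i - z_i\beta)^2 + \lambda|\beta|$ with $\frac{1}{N}\sum_i z_i^2 = 1$; check the symbols against the book's (2.9). Since $|\beta|$ is differentiable everywhere except at $\beta=0$, one considers each sign of $\beta$ separately:

```latex
\begin{aligned}
&\beta > 0:\quad -\tfrac{1}{N}\textstyle\sum_i z_i(y_i - z_i\beta) + \lambda = 0
  \;\Longrightarrow\; \beta = \tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle - \lambda,
  \quad\text{consistent only if } \tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle > \lambda,\\
&\beta < 0:\quad -\tfrac{1}{N}\textstyle\sum_i z_i(y_i - z_i\beta) - \lambda = 0
  \;\Longrightarrow\; \beta = \tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle + \lambda,
  \quad\text{consistent only if } \tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle < -\lambda,\\
&\text{otherwise neither stationarity condition can hold, so } \hat\beta = 0.
\end{aligned}
```

The three cases combine into the soft-thresholding form $\hat\beta = \operatorname{sign}\!\left(\tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle\right)\left(\left|\tfrac{1}{N}\langle \mathbf{z},\mathbf{y}\rangle\right| - \lambda\right)_{+}$, which should match the piecewise expression in (2.10). So &quot;direct inspection&quot; means checking in which regime each candidate stationary point is actually admissible, not reading values off a graph.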
448
statistical learning
Simulated annealing for deep learning: Why is gradient free statistical learning not in the main stream?
https://stats.stackexchange.com/questions/559251/simulated-annealing-for-deep-learning-why-is-gradient-free-statistical-learning
<p>In order to define what <a href="https://www.deeplearningbook.org" rel="nofollow noreferrer">deep learning</a> is, the learning portion is usually tied to <a href="https://en.wikipedia.org/wiki/Backpropagation" rel="nofollow noreferrer">backpropagation</a> as a requirement, without alternatives, both in the mainstream software libraries and in the literature. Few <a href="https://en.wikipedia.org/wiki/Derivative-free_optimization" rel="nofollow noreferrer">gradient-free</a> optimisation methods are mentioned in deep learning or in statistical learning more generally. Similarly, &quot;classical algorithms&quot; (<a href="https://en.wikipedia.org/wiki/Non-linear_least_squares" rel="nofollow noreferrer">nonlinear least squares</a>) involve derivatives [1]. In general, <a href="https://en.wikipedia.org/wiki/Derivative-free_optimization" rel="nofollow noreferrer">gradient-free</a> learning, whether in deep learning or in classical algorithms, is not mainstream. One promising alternative is simulated annealing [2, 3], a so-called 'nature-inspired optimization'.</p> <p>Is there any inherent theoretical reason why gradient-free deep learning (statistical learning) is not mainstream?
(Or not preferred?)</p> <p><strong>Notes</strong></p> <p>[1] Such as <a href="https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm" rel="nofollow noreferrer">Levenberg–Marquardt</a></p> <p>[2] <a href="https://www.sciencedirect.com/science/article/pii/S1877050915035759" rel="nofollow noreferrer">Simulated Annealing Algorithm for Deep Learning</a> (2015)</p> <p>[3] <a href="https://www.nature.com/articles/s41598-021-90144-3" rel="nofollow noreferrer">CoolMomentum: a method for stochastic optimization by Langevin dynamics with simulated annealing</a> (2021) Though this is still not fully gradient-free, but does not require auto-differentiation.</p> <p><strong>Edit 1</strong> Additional references using <a href="https://en.wikipedia.org/wiki/Ensemble_Kalman_filter" rel="nofollow noreferrer">Ensemble Kalman Filter</a>, showing a derivative free approach:</p> <ul> <li>Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks <a href="https://arxiv.org/abs/1808.03620" rel="nofollow noreferrer">arXiv:1808.03620</a>.</li> <li>Ensemble Kalman Filter optimizing Deep Neural Networks: An alternative approach to non-performing Gradient Descent <a href="https://www.springerprofessional.de/en/ensemble-kalman-filter-optimizing-deep-neural-networks-an-altern/18742188" rel="nofollow noreferrer">springer</a> (<a href="https://juser.fz-juelich.de/record/889208/files/main.pdf" rel="nofollow noreferrer">manuscript-pdf</a>)</li> </ul> <p><strong>Edit 2</strong> As far as I gather, Yann LeCun does not consider gradient-free learning as part of deep learning ecosystem. 
&quot;DL is constructing networks of parameterized functional modules &amp; training them from examples using gradient-based optimization.&quot; <a href="https://twitter.com/ylecun/status/1209497021398343680" rel="nofollow noreferrer">tweet</a></p> <p><strong>Edit 3</strong> Ben Bolker's comment on local geometry definitely deserves to be one of the answers.</p> <p><strong>Edit 4</strong> A great introduction as a PhD thesis <a href="https://juser.fz-juelich.de/record/1015166" rel="nofollow noreferrer">Gradient-Free Optimization of Artificial and Biological Networks using Learning to Learn </a></p>
<p>Gradient-free learning is in the mainstream very heavily, but not used heavily in deep learning. Methods used for training neural networks that don't involve derivatives are typically called &quot;metaheuristics.&quot; In computer science and pattern recognition (which largely originated in electrical engineering), metaheuristics are the go-to for NP-hard problems, such as airline flight scheduling, traffic route planning to optimize fuel consumption by delivery trucks, or the traveling salesman problem (annealing). As an example see <a href="https://www.igi-global.com/chapter/swarm-based-nature-inspired-metaheuristics-for-neural-network-optimization/187679" rel="noreferrer">swarm-based learning for neural networks</a> or <a href="https://www.sciencedirect.com/science/article/abs/pii/S0957417415006570" rel="noreferrer">genetic algorithms for training neural networks</a> or use of a <a href="https://www.hindawi.com/journals/cin/2016/3263612/" rel="noreferrer">metaheuristic for training a convolutional neural network</a>. These are all neural networks which use metaheuristics for learning, and not derivatives.</p> <p>While metaheuristics encompasses a wide swath of the literature, they're just not strongly associated with deep-learning, as these are different areas of optimization. Look up &quot;solving NP-hard problems with metaheuristics.&quot; Last, recall that gradients used for neural networks don't have anything to do with the derivatives of a function that a neural network can be used to minimize (maximize). (This would be called function approximation using a neural network as opposed to classification analysis via neural network.) They're merely derivatives of the error or cross-entropy with respect to connection weight change within the network.</p> <p>In addition, the derivatives of a function may not be known, or the problem can be too complex for using derivatives. 
Some of the newer optimization methods use finite differencing as a replacement for analytic derivatives; as compute gets faster, such derivative-free approaches become less computationally expensive.</p>
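As a concrete, purely illustrative sketch of the metaheuristic approach: the snippet below trains a tiny 2-2-1 tanh network on XOR with simulated annealing, never computing a gradient. The architecture, proposal scale, and cooling schedule are my own arbitrary choices, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a linear model cannot fit
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, X):
    # 2-2-1 network with tanh hidden units; w packs all 9 parameters
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Simulated annealing: accept every improvement, accept a worse move with
# probability exp(-delta / T), and cool the temperature geometrically.
w = rng.normal(scale=0.5, size=9)
best_w, best_loss = w.copy(), loss(w)
T = 1.0
for _ in range(20_000):
    cand = w + rng.normal(scale=0.3, size=9)   # random perturbation proposal
    delta = loss(cand) - loss(w)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        w = cand
    if loss(w) < best_loss:
        best_w, best_loss = w.copy(), loss(w)
    T *= 0.9995
```

With this setup the best configuration found beats the best constant predictor (whose MSE on XOR is 0.25) without any derivatives, but note how many function evaluations it takes for just 9 parameters; that poor scaling with dimension is a large part of why gradient methods dominate deep learning.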
449
statistical learning
Sparsity in Lasso and advantage over ridge (Statistical Learning)
https://stats.stackexchange.com/questions/151954/sparsity-in-lasso-and-advantage-over-ridge-statistical-learning
<p>I'm learning about statistical learning, and in the section comparing Lasso and Ridge Regression it shows that the main difference between these two problems is the way the constraint/penalty is formulated. </p> <p>In Lasso, the penalty is the $\ell_1$ norm: $\lambda \sum |\beta_j|$, while in ridge regression, the penalty is the $\ell_2$ norm: $\lambda \sum \beta_j^2$. </p> <p>Geometrically, this means that the lasso will have a constraint in the form of a diamond (in 2 dimensions), and in higher dimensions it will have vertices and edges. For ridge regression, in 2D it is a circle, and a hypersphere in higher dimensions. </p> <p>My question is: The author claims that you get SPARSITY in the lasso. I do not understand why, even with the geometric picture above. And what is the clear advantage of Lasso over ridge regression? </p> <p>Your insights would be very valuable. I would appreciate it if your answer contained some mathematics, but more importantly, intuition. Thanks</p>
<p>The lasso penalty will force some of the coefficients quickly to zero. This means that variables are removed from the model, hence the sparsity. </p> <p>Ridge regression will more or less compress the coefficients to become smaller. This does not necessarily result in 0 coefficients and removal of variables.</p> <p><img src="https://i.sstatic.net/QrueJ.gif" alt="enter image description here"></p> <p>See the picture above, taken from onlinecourses.science.psu.edu/stat857/node/158 </p> <p>The circles represent the error function and the magenta area the possible values for the parameters. </p> <p>You can see in the left graph that the error function is likely to hit the space of possible values at one of the corners, on the axes. This implies that β1 is 0. On the right, where the space of allowed values is round due to the quadratic constraint, the error function can hit the possible space in more arbitrary places.</p>
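For intuition, the one-predictor case can be written in closed form (the objective scaling, data, and $\lambda$ below are illustrative assumptions): with a single standardized predictor, ridge divides the OLS coefficient by a constant, while the lasso soft-thresholds it, setting it to exactly zero whenever the OLS coefficient is small enough.

```python
import numpy as np

rng = np.random.default_rng(0)

# One standardized predictor and a weak true effect (illustrative choices)
n = 200
x = rng.normal(size=n)
x = (x - x.mean()) / x.std()          # now sum(x**2) == n
y = 0.1 * x + rng.normal(size=n)

b_ols = (x @ y) / (x @ x)             # ordinary least squares

# For the criterion (1/2n) * RSS + penalty:
lam = 0.3
b_ridge = (x @ y) / (x @ x + lam * n)                    # = b_ols / (1 + lam): shrunk, never zero
b_lasso = np.sign(b_ols) * max(abs(b_ols) - lam, 0.0)    # soft-threshold: exactly 0 if |b_ols| <= lam
```

The kink of the absolute value at zero is what makes exact zeros possible for the lasso; the smooth quadratic penalty of ridge only ever rescales.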
450
statistical learning
Recreating figure 3.6 from Elements of Statistical Learning
https://stats.stackexchange.com/questions/411327/recreating-figure-3-6-from-elements-of-statistical-learning
<p>I am trying to recreate FIGURE 3.6 from Elements of Statistical Learning. The only information about the figure is included in the caption. <a href="https://i.sstatic.net/XiK3l.png" rel="noreferrer"><img src="https://i.sstatic.net/XiK3l.png" alt=""></a></p> <p>To recreate the forward-stepwise line, my process is as follows:</p> <p>For 50 repetitions:</p> <ul> <li>Generate data as described </li> <li>Apply forward stepwise regression (via AIC) 31 times to add variables</li> <li>Calculate the absolute difference between each <span class="math-container">$\hat{\beta}$</span> and its corresponding <span class="math-container">${\beta}$</span> and store the results</li> </ul> <p>This leaves me with a <span class="math-container">$50 \times 31$</span> matrix of these differences, on which I can calculate column-wise means to produce the plot. </p> <p>The above approach is incorrect, but it is not clear to me what exactly it is supposed to be. I believe my issue is with the interpretation of the mean squared error on the Y axis. What exactly does the formula on the y axis mean? Is it just the kth beta being compared?
</p> <p><strong>Code for reference</strong></p> <p>Generate data:</p> <pre><code>library('MASS')
library('stats')
library('MLmetrics')

# generate the data
generate_data &lt;- function(r, p, samples){
  corr_matrix &lt;- suppressWarnings(matrix(c(1, rep(r, p)), nrow = p, ncol = p)) # ignore warning
  mean_vector &lt;- rep(0, p)
  data &lt;- mvrnorm(n = samples, mu = mean_vector, Sigma = corr_matrix, empirical = TRUE)
  coefficients_ &lt;- rnorm(10, mean = 0, sd = 0.4) # 10 non-zero coefficients
  names(coefficients_) &lt;- paste0('X', 1:10)
  data_1 &lt;- t(t(data[, 1:10]) * coefficients_) # coefs by first 10 columns
  Y &lt;- rowSums(data_1) + rnorm(samples, mean = 0, sd = 6.25) # adding gaussian noise
  return(list(data, Y, coefficients_))
}
</code></pre> <p>Apply forward stepwise regression 50 times:</p> <pre><code>r &lt;- 0.85
p &lt;- 31
samples &lt;- 300

# forward stepwise
error &lt;- data.frame()
for(i in 1:50){ # i = 50 repetitions
  output &lt;- generate_data(r, p, samples)
  data &lt;- output[[1]]
  Y &lt;- output[[2]]
  coefficients_ &lt;- output[[3]]
  biggest &lt;- formula(lm(Y ~ ., data.frame(data)))
  current_model &lt;- 'Y ~ 1'
  fit &lt;- lm(as.formula(current_model), data.frame(data))
  for(j in 1:31){ # j = 31 variables
    # find best variable to add via AIC
    new_term &lt;- addterm(fit, scope = biggest)[-1, ]
    new_var &lt;- row.names(new_term)[min(new_term$AIC) == new_term$AIC]
    # add it to the model and fit
    current_model &lt;- paste(current_model, '+', new_var)
    fit &lt;- lm(as.formula(current_model), data.frame(data))
    # jth beta hat
    beta_hat &lt;- unname(tail(fit$coefficients, n = 1))
    new_var_name &lt;- names(tail(fit$coefficients, n = 1))
    # find corresponding beta
    if (new_var_name %in% names(coefficients_)){
      beta &lt;- coefficients_[new_var_name]
    } else {
      beta &lt;- 0
    }
    # store difference between the two
    diff &lt;- beta_hat - beta
    error[i, j] &lt;- diff
  }
}

# plot output
vals &lt;- apply(error, 2, function(x) mean(x**2))
plot(vals) # not correct
</code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/h1EbO.png" rel="noreferrer"><img src="https://i.sstatic.net/h1EbO.png" alt="My output in response to @whuber"></a></p>
<p>There are probably some numbers wrong in the caption of the graph and/or in the rendering of the graph.</p> <p>An interesting anomaly is this graph in the version of chapter 3 on Tibshirani's website: <a href="http://statweb.stanford.edu/%7Etibs/book/" rel="nofollow noreferrer">http://statweb.stanford.edu/~tibs/book/</a></p> <p>The links are incomplete, but based on the preface it seems to be the 2nd edition.</p> <p><img src="https://i.sstatic.net/W3yNH.png" alt="different" /></p> <p>It may be that this graph is based on the error for only a single coefficient, which could cause large discrepancies.</p> <h3>Code</h3> <p>In the code below we reproduce the graph of the forward stepwise method for varying degrees of correlation (the book uses 0.85), and we scale the curves according to the variance for the full model, which we compute as <span class="math-container">$\sigma^2 (X^TX)^{-1}$</span>.</p> <p><a href="https://i.sstatic.net/RtadR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RtadR.png" alt="example plot" /></a></p> <pre class="lang-r prettyprint-override"><code>library(MASS)

### function to do stepforward regression
### adding variables with best increase in RSS
stepforward &lt;- function(Y, X, intercept) {
  kl &lt;- length(X[1,])   ### number of columns
  inset &lt;- c()
  outset &lt;- 1:kl
  best_RSS &lt;- sum(Y^2)

  ### outer loop increasing subset size
  for (k in 1:kl) {
    beststep_RSS &lt;- best_RSS   ### RSS to beat
    beststep_par &lt;- 0
    ### inner loop trying all variables that can be added
    for (par in outset) {
      ### create a subset to test
      step_set  &lt;- c(inset, par)
      step_data &lt;- data.frame(Y = Y, X = X[, step_set])
      ### perform model with subset
      if (intercept) {
        step_mod &lt;- lm(Y ~ . + 1, data = step_data)
      } else {
        step_mod &lt;- lm(Y ~ . + 0, data = step_data)
      }
      step_RSS &lt;- sum(step_mod$residuals^2)
      ### compare if it is an improvement
      if (step_RSS &lt;= beststep_RSS) {
        beststep_RSS &lt;- step_RSS
        beststep_par &lt;- par
      }
    }
    best_RSS &lt;- beststep_RSS
    inset  &lt;- c(inset, beststep_par)
    outset &lt;- outset[-which(outset == beststep_par)]
  }
  return(inset)
}

get_error &lt;- function(X = NULL, beta = NULL, intercept = 0) {
  ### 31 random X variables, standard normal
  if (is.null(X)) {
    X &lt;- mvrnorm(300, rep(0, 31), M)
  }
  ### 10 random beta coefficients, 21 zero coefficients
  if (is.null(beta)) {
    beta &lt;- c(rnorm(10, 0, 0.4^0.5), rep(0, 21))
  }
  ### Y with added noise
  Y &lt;- (X %*% beta) + rnorm(length(X[,1]), 0, 6.25^0.5)

  ### get step order
  step_order &lt;- stepforward(Y, X, intercept)

  ### error computation
  error &lt;- matrix(rep(0, 31*31), 31)   ### stores error for 31 submodel sizes
  for (l in 1:31) {
    ### subdata
    Z &lt;- X[, step_order[1:l]]
    sub_data &lt;- data.frame(Y = Y, Z = Z)

    ### compute model
    if (intercept) {
      sub_mod &lt;- lm(Y ~ . + 1, data = sub_data)
    } else {
      sub_mod &lt;- lm(Y ~ . + 0, data = sub_data)
    }

    ### compute error in coefficients
    coef &lt;- rep(0, 31)
    if (intercept) {
      coef[step_order[1:l]] &lt;- sub_mod$coefficients[-1]
    } else {
      coef[step_order[1:l]] &lt;- sub_mod$coefficients[]
    }
    error[l,] &lt;- (coef - beta)
  }
  return(error)
}

### storing results in this matrix and vector
corrMSE  &lt;- matrix(rep(0, 10*31), 10)
corr_err &lt;- rep(0, 10)

for (k_corr in 1:10) {
  corr &lt;- seq(0.05, 0.95, 0.1)[k_corr]
  ### correlation matrix for X
  M &lt;- matrix(rep(corr, 31^2), 31)
  for (i in 1:31) {
    M[i,i] = 1
  }

  ### perform the model 50 times
  set.seed(1)
  X &lt;- mvrnorm(300, rep(1, 31), M)
  beta &lt;- c(rnorm(10, 0, 0.4^0.5), rep(0, 21))
  nrep &lt;- 50
  me &lt;- replicate(nrep, get_error(X, beta, intercept = 1))   ### this line uses fixed X and beta
  ###me &lt;- replicate(nrep, get_error(beta = beta, intercept = 1)) ### this line uses random X and fixed beta
  ###me &lt;- replicate(nrep, get_error(intercept = 1))             ### random X and beta each replicate

  ### storage for error statistics per coefficient and per k
  mean_error &lt;- matrix(rep(0, 31^2), 31)
  mean_MSE   &lt;- matrix(rep(0, 31^2), 31)
  mean_var   &lt;- matrix(rep(0, 31^2), 31)

  ### compute error statistics
  ### MSE, and bias + variance for each coefficient separately
  ### k relates to the subset size
  ### i refers to the coefficient
  ### averaging is done over the multiple simulations
  for (i in 1:31) {
    mean_error[i,] &lt;- sapply(1:31, FUN = function(k) mean(me[k,i,]))
    mean_MSE[i,]   &lt;- sapply(1:31, FUN = function(k) mean(me[k,i,]^2))
    mean_var[i,]   &lt;- mean_MSE[i,] - mean_error[i,]^2
  }

  ### store results from the loop
  plotset &lt;- 1:31
  corrMSE[k_corr,] &lt;- colMeans(mean_MSE[plotset,])
  corr_err[k_corr] &lt;- mean((6.25) * diag(solve(t(X[,1:31]) %*% (X[,1:31]))))
}

### plotting curves
layout(matrix(1))
plot(-10, -10, ylim = c(0,4), xlim = c(1,31),
     type = &quot;l&quot;, lwd = 2,
     xlab = &quot;Subset size k&quot;,
     ylab = expression((MSE)/(sigma^2 * diag(X^T*X)^-1)),
     main = &quot;mean square error of parameters \n normalized&quot;,
     xaxs = &quot;i&quot;, yaxs = &quot;i&quot;)
for (i in c(1,3,5,7,9,10)) {
  lines(1:31, corrMSE[i,] * 1/corr_err[i], col = hsv(0.5 + i/20, 0.5, 0.75 - i/20))
}
col &lt;- c(1,3,5,7,9,10)
legend(31, 4,
       c(expression(rho == 0.05), expression(rho == 0.25),
         expression(rho == 0.45), expression(rho == 0.65),
         expression(rho == 0.85), expression(rho == 0.95)),
       xjust = 1, col = hsv(0.5 + col/20, 0.5, 0.75 - col/20), lty = 1)
</code></pre>
451
statistical learning
Are there any good references regarding non-learnability in statistical learning theory?
https://stats.stackexchange.com/questions/463034/are-there-any-good-references-regarding-non-learnability-in-statistical-learning
<p>For example, I don't see much come up in my search for "non-learnability" ... is there another term? Specifically, I'm imagining that, similar to the learning guarantees from statistical learning theory, there might be some results concerning non-learnability guarantees. Or are these equivalent in some simple way that I am not seeing? </p> <p><a href="https://www.sciencedirect.com/science/article/pii/S0304397512009358" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/S0304397512009358</a></p>
452
statistical learning
Reproducing table 18.1 from &quot;Elements of Statistical Learning&quot;
https://stats.stackexchange.com/questions/12360/reproducing-table-18-1-from-elements-of-statistical-learning
<p>Table 18.1 in the <a href="http://www-stat.stanford.edu/~tibs/ElemStatLearn/" rel="noreferrer">Elements of Statistical Learning</a> summarizes the performance of several classifiers on a 14 class data set. I am comparing a new algorithm with the lasso and elastic net for such multiclass classification problems. </p> <p>Using <code>glmnet</code> version 1.5.3 (R 2.13.0) I am not able to reproduce point 7. (the $L_1$-penalized multinomial) in the table, where the number of genes used is reported to be 269 and the test error is 13 out of 54. The data used is this <a href="http://www-stat.stanford.edu/~tibs/ElemStatLearn/data" rel="noreferrer">14-cancer microarray data set</a>. Whatever I have tried, I get a best performing model using in the neighborhood of 170-180 genes with a test error of 16 out of 54.</p> <p>Note that in the beginning of Section 18.3, on page 654, some preprocessing of the data is described. </p> <p>I have contacted the authors -- so far without response -- and I ask if anybody can either confirm that there is a problem in reproducing the table or provide a solution on how to reproduce the table. </p>
<p>have you checked the R package of the <a href="http://cran.r-project.org/web/packages/ElemStatLearn" rel="nofollow">book?</a> it contains all the datasets, function and most of the scripts used in there...</p>
453
statistical learning
Computation of LDA in Elements of Statistical Learning 4.3.2
https://stats.stackexchange.com/questions/405541/computation-of-lda-in-elements-of-statistical-learning-4-3-2
<p>Elements of Statistical Learning 4.3.2 elaborates on computation for Linear Discriminant Analysis. <a href="https://web.stanford.edu/~hastie/Papers/ESLII.pdf" rel="nofollow noreferrer">https://web.stanford.edu/~hastie/Papers/ESLII.pdf</a></p> <p>The procedure is said to be:</p> <blockquote> <p>• Sphere the data with respect to the common covariance estimate <span class="math-container">$\hat{Σ}: X∗ ← D^{−1/2}U^{T}X$</span>, where <span class="math-container">$\hat{Σ} = UD U^{T}$</span>. The common covariance estimate of <span class="math-container">$X^{∗}$</span> will now be the identity.</p> <p>• Classify to the closest class centroid in the transformed space, modulo the effect of the class prior probabilities <span class="math-container">$π_{k}$</span>.</p> </blockquote> <p>How is the procedure above derived from the expression of the discriminant functions, which, in the case of identical covariance matrices for all <span class="math-container">$k$</span>, reads:</p> <p><span class="math-container">$\delta_{k}(x)=x^{T}\Sigma^{-1}\mu_{k}-\frac{1}{2}\mu_{k}^{T}\Sigma^{-1}\mu_{k}+\log{\pi_{k}}$</span></p>
<p>Sphering (or whitening) the data (<span class="math-container">$X$</span>) means applying a transformation so that, in the new basis, the covariance of the sphered data (<span class="math-container">$X^{*}$</span>) is the identity matrix, i.e. <span class="math-container">$\operatorname{Cov}(X^{*})=I_{p}$</span>.</p> <p>We apply this transformation to obtain a significantly simpler computation. As mentioned in 4.3.2, the ingredients of <span class="math-container">$\delta_{k}(x)$</span> are</p> <blockquote> <p><span class="math-container">$(x − \hat{\mu_{k}})^{T}\hat{\Sigma}_{k}^{-1}(x − \hat{\mu_{k}}) = [U^{T}_{k} (x − \hat{\mu_{k}})]^{T}D_{k}^{-1}[U^{T}_{k} (x − \hat{\mu_{k}})]$</span></p> <p><span class="math-container">$\log{|\hat{\Sigma}_{k}|}=\sum_{l}\log{d_{kl}}$</span></p> </blockquote> <p>where <span class="math-container">$\hat{\Sigma}_{k}=U_{k}D_{k}U_{k}^{T}$</span> is the eigen-decomposition of each <span class="math-container">$\hat{\Sigma}_{k}$</span>, <span class="math-container">$U_{k}$</span> is <span class="math-container">$p \times p$</span> orthonormal, and <span class="math-container">$D_{k}$</span> is a diagonal matrix of positive eigenvalues <span class="math-container">$d_{kl}$</span>.</p> <p>Let's write out the sphering of the data:</p> <p><span class="math-container">$[U^{T}_{k} (x − \hat{\mu_{k}})]^{T}D_{k}^{-1}[U^{T}_{k} (x − \hat{\mu_{k}})]$</span></p> <p><span class="math-container">$=(U^{T}_{k}x)^{T}D_{k}^{-1/2}D_{k}^{-1/2}U^{T}_{k}x + (U^{T}_{k}\hat{\mu}_{k})^{T}D_{k}^{-1/2}D_{k}^{-1/2}U^{T}_{k}\hat{\mu}_{k}-(U^{T}_{k}\hat{\mu}_{k})^{T}D_{k}^{-1/2}D_{k}^{-1/2}U^{T}_{k}x-(U^{T}_{k}x)^{T}D_{k}^{-1/2}D_{k}^{-1/2}U^{T}_{k}\hat{\mu}_{k}$</span></p> <p>We apply the suggested change of variables, <span class="math-container">$X^{*}\leftarrow D^{-1/2}U^{T}X$</span> and similarly <span class="math-container">$\hat{\mu}_{k}^{*}\leftarrow D^{-1/2}U^{T}\hat{\mu}_{k}$</span>. 
The previous calculation transforms into</p> <p><span class="math-container">$(x^{*}-\hat{\mu}_{k}^{*})^{T}(x^{*}-\hat{\mu}_{k}^{*})=\|x^{*}-\hat{\mu}_{k}^{*}\|^{2}$</span></p> <p>Maximizing the discriminant amounts to minimizing this quantity, that is to say finding the class <span class="math-container">$k$</span> whose centroid is closest to the data point in the new basis.</p> <p>The last term in <span class="math-container">$\delta_{k}(x)$</span> is <span class="math-container">$\log{\pi_{k}}$</span>, hence the mention of the influence of the prior probability <span class="math-container">$\pi_{k}$</span>.</p>
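The sphering step above can be checked numerically. A minimal Python sketch (my own illustration, not part of the original answer; the 2x2 covariance, centroid, and data point are made up): sphering with W = D^{-1/2} U^T turns the Mahalanobis distance with respect to Sigma into a plain Euclidean distance.

```python
# Check: Mahalanobis distance w.r.t. Sigma equals the squared Euclidean
# distance after sphering by W = D^{-1/2} U^T, where Sigma = U D U^T.
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])   # common covariance estimate (made up)
mu_k = np.array([1.0, -1.0])                  # a class centroid (made up)
x = rng.normal(size=2)                        # a data point

d, U = np.linalg.eigh(Sigma)                  # eigen-decomposition Sigma = U D U^T
W = np.diag(d ** -0.5) @ U.T                  # sphering transform

maha = (x - mu_k) @ np.linalg.inv(Sigma) @ (x - mu_k)   # squared Mahalanobis distance
eucl = np.sum((W @ x - W @ mu_k) ** 2)                   # squared Euclidean distance, sphered

assert np.isclose(maha, eucl)
```

The same check works for any positive-definite Sigma, since W^T W = U D^{-1} U^T = Sigma^{-1}.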
454
statistical learning
Least Squares Definition in Elements of Statistical Learning
https://stats.stackexchange.com/questions/181648/least-squares-definition-in-elements-of-statistical-learning
<p>In <em>Elements of Statistical Learning</em>, they state on p. 11 that all vectors are column vectors and start developing the least squares idea.</p> <p>So if we have $$\mathbf{X} = \begin{bmatrix} 1 \\ X_1 \\ X_2 \\ \vdots \\ X_p\end{bmatrix}$$ and $$\hat{\boldsymbol{\beta}} = \begin{bmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \vdots \\ \hat{\beta}_p \end{bmatrix}\text{,}$$ take $$\hat{Y} = \mathbf{X}^{T}\hat{\boldsymbol{\beta}} = \langle \mathbf{X}, \hat{\boldsymbol{\beta}} \rangle\text{.}$$ So $\hat{Y} \in \mathbb{R}^1$ because $\mathbf{X}^{T} \in M_{1 \times p}(\mathbb{R})$ and $\boldsymbol{\beta} \in M_{p \times 1}(\mathbb{R})$. Fine.</p> <p>Now they say:</p> <blockquote> <p>... in general $\hat{Y}$ can be a $K$-vector, in which case $\beta$ (I think this is a typo - they should probably have $\hat{\beta}$ instead) would be a $p \times K$ matrix of coefficients.</p> </blockquote> <p>Well, here's the problem. If $\mathbf{X}^{T} \in M_{1 \times p}(\mathbb{R})$ and $\hat{\beta} \in M_{p \times K}(\mathbb{R})$, wouldn't $\mathbf{X}^{T}\hat{\beta}$ give a <em>row vector</em> for $\mathbf{Y}$ (dimensions $1 \times K$)?</p>
<p>I think you are confusing yourself a bit. On page 10 the authors say: </p> <p><em>"a set of $N$ input $p$-vectors $x_i$ , $i = 1, \dots, N$ would be represented by the $N \times p$ matrix <strong>$X$</strong>."</em> </p> <p>This means that the $p$ feature vectors/regressors they use will be represented as column vectors in the matrix $X$. Indeed, to quote exactly from page 12 of the text:</p> <p><em>"$X$ is an $N \times p$ matrix with each row an input vector"</em>. </p> <p>So $X^T$ is not a row (or column) vector in any case. $X$ (and its transpose) will (probably) be of rank $p$. $\hat{Y}$ will be an $N \times K$ matrix. In the case of the OLS model, $K$ is usually equal to unity (1), but this is not a strict prerequisite for its use (that's why we usually write $\hat{y}$).</p>
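The dimension bookkeeping can be verified with a quick shape check (an illustrative Python sketch of my own; the sizes N, p, K below are arbitrary): with X an N x p input matrix and beta_hat a p x K coefficient matrix, the fitted values form an N x K matrix.

```python
# Shape check: (N x p) @ (p x K) -> (N x K)
import numpy as np

N, p, K = 5, 3, 2
X = np.ones((N, p))          # N input vectors, one per row
beta_hat = np.ones((p, K))   # p x K matrix of coefficients
Y_hat = X @ beta_hat         # fitted values

assert Y_hat.shape == (N, K)
```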
455
statistical learning
Elements of Statistical Learning - Statistical Decision Theory : Doubt regarding Minimization of EPE
https://stats.stackexchange.com/questions/286290/elements-of-statistical-learning-statistical-decision-theory-doubt-regarding
<p>With reference to Expected Prediction Error derivation - page 18, section 2.4 in Elements of Statistical Learning. Please refer text below: </p> <p><a href="https://i.sstatic.net/ZJbWz.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZJbWz.gif" alt="Please Refer below:"></a></p> <p>I have been able to follow up to step 2.11. I am struggling to understand step 2.12 and 2.13.</p> <p>My understanding:</p> <ul> <li>We intend to minimize EPE, as E<sub>X</sub> is constant, we focus on minimizing E<sub>Y|X</sub>([Y-f(X)]<sup>2</sup>|X).</li> <li><strong>Doubt in step 2.12:</strong> To minimize EPE, the difference between Y i.e. actual value and f(X) i.e. predicted value should be minimized. However, equation 2.12 is minimizing "c". </li> <li>Guide me on understanding this - my understanding is that with small c, [Y-c]<sup>2</sup> will become larger.</li> <li>Additionally, I could not figure out development of 2.13 from 2.12.</li> </ul> <p>Please correct me wherever my assumptions and/ or understanding is incorrect.</p> <p>P.S.: I studied Probability (Tsitsiklis) and Linear Algebra (David C. Lay) before moving to ESL.</p>
<p>Let $H$ be any set of functions of $x$. Then, for each $h\in H$, $\int h(x)\,dx \ge \int \inf_{g\in H} g(x)\,dx$. Sometimes, as in the current situation, the function $\lambda$, given by $\lambda(x)=\inf_{g\in H}g(x)$, is already in $H$, in which case the least value of the integral of $h$, as $h$ varies over $H$, is given by taking $h=\lambda$. We will define $H$ in the current situation, then compute $\lambda$ and then check that $\lambda\in H$.</p> <p>Here $H$ is the set of all functions $h$ given by $h(x) = E_{Y|X}((Y-f(x))^2 | X=x)$, where $f$ is some measurable function of $x$. Measurability imposes no restriction on possible values of $c=f(x)$, so $$ \lambda(x) = \inf_{h\in H} h(x) = \inf_c E_{Y|X}((Y-c)^2|X=x),$$ which is the minimum of a quadratic function of $c$. We write this quadratic function as $A-2B.c +c^2$, where $A$ and $B$ are independent of $c$. This has its minimum when $c=B=E_{Y|X}(Y|X=x)$. So we certainly cannot do any better than defining $f(x)=c=E_{Y|X}(Y|X=x)$. Since $f$ is a measurable function, we do have $\lambda\in H$, and this choice of $f$ achieves the required minimum in 2.11.</p>
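The claim that the quadratic $A - 2B c + c^2$ is minimized at $c = E[Y]$ can be illustrated numerically (my own Python sketch, not part of the answer; the Gaussian sample and the grid of candidate constants are arbitrary): among all constants c, the sample mean minimizes the average squared loss.

```python
# Grid search over constants c: the minimizer of mean((y - c)^2) is mean(y).
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=3.0, scale=2.0, size=10_000)   # sample standing in for Y | X = x

cs = np.linspace(0, 6, 601)                        # candidate constants, step 0.01
losses = [np.mean((y - c) ** 2) for c in cs]
c_best = cs[int(np.argmin(losses))]

# the empirical loss is exactly quadratic in c with vertex at the sample mean,
# so the grid minimizer sits within half a grid step of mean(y)
assert abs(c_best - y.mean()) < 0.02
```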
456
statistical learning
What does this figure in “Introduction to statistical learning” mean?
https://stats.stackexchange.com/questions/300097/what-does-this-figure-in-introduction-to-statistical-learning-mean
<p>I am currently reading the book "Introduction to statistical learning" and on <a href="https://books.google.co.in/books?id=qcI_AAAAQBAJ&amp;lpg=PR2&amp;pg=PA3#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">Page 3</a> there is a problem statement regarding the Smarket data (stock exchange data). The left panel of fig 2.2 shows the previous day's percentage change against the up/down movement of the stock market, but here the author says one box is for 648 days and one for 602 days. </p> <p>I don't quite get that. No information is given about the days. Is the print faulty or am I missing something here? If someone has gone through the book and has any idea, please help me understand the following chart. </p> <p><a href="https://i.sstatic.net/Nl9Hg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nl9Hg.png" alt="picture of the book page in question"></a></p>
<blockquote> <p>The left-hand panel of Figure 1.2 displays two boxplots of the previous day's percentage changes in the stock index: one for the 648 days for which the market increased on subsequent days, and one for the 602 days for which the market decreased. </p> </blockquote> <p>The blue boxplot of the left-most figure is based on data from 648 days. These days are not randomly chosen; they are selected because they share a special characteristic: the market increased on the following day. The red boxplot in the left-most figure is based on the 602 days for which the market decreased the day after. </p> <p>In short, the red and blue boxplots each visualize multiple data points. </p>
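The grouping described above can be sketched in a few lines (illustrative Python with made-up returns, not the book's data; in the book the 648/602 split comes from real market data, while here the direction labels are random): each day's previous-day percentage change is assigned to the "Up" or "Down" group according to the next day's movement.

```python
# Split previous-day % changes by the *next* day's market direction,
# which is what the two boxplots condition on.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1250
pct_change = rng.normal(0.0, 1.0, size=n_days)    # previous-day % changes (made up)
went_up = rng.random(n_days) < 648 / 1250          # True = market up the next day

up_group = pct_change[went_up]                     # data behind the blue boxplot
down_group = pct_change[~went_up]                  # data behind the red boxplot

assert len(up_group) + len(down_group) == n_days   # every day lands in exactly one group
```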
457
statistical learning
Interpreting exercise in Elements of Statistical Learning
https://stats.stackexchange.com/questions/465439/interpreting-exercise-in-elements-of-statistical-learning
<p>I am reading exercise 6.4 from <a href="https://web.stanford.edu/~hastie/Papers/ESLII.pdf" rel="nofollow noreferrer">The Elements of Statistical Learning</a> (Hastie, Tibshirani and Friedman) and I am having difficulty interpreting exactly what is being asked in the following question</p> <blockquote> <p>Ex. 6.4 Suppose that the <span class="math-container">$p$</span> predictors <span class="math-container">$X$</span> arise from sampling relatively smooth analog curves at <span class="math-container">$p$</span> uniformly spaced abscissa values. Denote by <span class="math-container">$Cov(X|Y) = Σ$</span> the conditional covariance matrix of the predictors, and assume this does not change much with <span class="math-container">$Y$</span> . Discuss the nature of Mahalanobis choice <span class="math-container">$A = Σ ^{−1}$</span> for the metric in (6.14). How does this compare with <span class="math-container">$A = I$</span>? How might you construct a kernel A that (a) downweights high-frequency components in the distance metric; (b) ignores them completely?</p> </blockquote> <p>Note that (6.14) is the kernel given by <span class="math-container">$$K_{\lambda , A}(x_0, x) = D(\frac{(x-x_0)^TA(x-x_0)}{\lambda})$$</span></p> <p>To me it sounds like there are some number of periodic curves and each observation <span class="math-container">$x_i$</span> is a sample from one of them, in which case <span class="math-container">$Y$</span> would be a categorical variable indicating which curve a given sample has been drawn from. Then <span class="math-container">$\Sigma_j$</span> would be the covariance matrix of all of the observations belonging to curve <span class="math-container">$j$</span>. I don't think this makes sense however since then we would have a different weight matrix <span class="math-container">$A$</span> for each of the analog curves. 
</p> <p>I suspect <a href="https://stats.stackexchange.com/questions/10795/how-to-interpret-an-inverse-covariance-or-precision-matrix/93384#93384">this answer</a> will be useful in solving the problem, but I still can't quite get a concrete interpretation of exactly what is being asked.</p> <p>How exactly should this question be interpreted? </p>
458
statistical learning
Generating pseudodata as in &quot;Elements of Statistical Learning&quot;
https://stats.stackexchange.com/questions/317961/generating-pseudodata-as-in-elements-of-statistical-learning
<p>I am trying to implement a Simulation from the book "Elements of Statistical Learning" by Hastie et al. </p> <p>My Problem is that I don't understand how to generate the pseudodata as they did. The book says </p> <blockquote> <p>For each of N = 100 Samples, we generated p standard Gaussian features X with pairwise correlation 0.2. The outcome Y was generated according to a linear model* $$Y = \sum_{j=1}^p X_j \beta_j + \sigma \epsilon,$$ *where $\epsilon$ was generated from a Standard Gaussian Distribution. For each dataset, the set of coefficients $\beta_j$ were also generated from a Standard Gaussian Distribution. We investigated p = 20, 100 and 1000. The standard deviation $ \sigma $ was chosen in each case so that the signal-to-noise-ratio $Var[E(Y|X)]/ \sigma ^2 $equaled 2.</p> </blockquote> <p>So, what I managed to generate so far are the Xs, the $\epsilon$ and the $\beta$s. </p> <p>I don't get how I'm meant to generate Y without knowing $\sigma$ and according to the description of $\sigma$, I need Y to compute it. </p> <p>Can someone please help me? What am I not understanding here?? Thanks and best regards!</p>
<blockquote> <p>The standard deviation sigma was chosen in each case so that the signal-to-noise-ratio $Var(E[Y|X]) / \sigma^2$ equaled 2.</p> </blockquote> <p>Because $\epsilon$ has mean 0, we know that:</p> <p>$$E[Y \mid X] = \sum_{j=1}^p X_j \beta_j = \beta^T X$$</p> <p>So, using the $X$ and $\beta$ you generated, calculate the variance of $E[Y \mid X]$ and divide it by 2 to obtain $\sigma^2$.</p> <p>Clarification: this should be done by treating $X$ as a random variable, not by working with samples. $E[Y \mid X]$ is a linear combination of Gaussian random variables so, as described <a href="https://en.wikipedia.org/wiki/Variance#Matrix_notation_for_the_variance_of_a_linear_combination" rel="nofollow noreferrer">here</a> and by whuber in the comments below:</p> <p>$$Var(E[Y \mid X]) = \beta^T C \beta$$</p> <p>where $C$ is the covariance matrix of $X$</p>
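Putting the recipe together, here is a short Python sketch (illustrative, not the book's code; p = 20 and the seed are arbitrary choices): sigma is fixed from the signal-to-noise ratio Var(E[Y|X]) / sigma^2 = 2, using Var(beta^T X) = beta^T C beta with the population covariance C.

```python
# Compute sigma so that the signal-to-noise ratio beta^T C beta / sigma^2 = 2.
import numpy as np

rng = np.random.default_rng(0)
p = 20
C = np.full((p, p), 0.2)        # pairwise correlation 0.2, as in the quoted setup
np.fill_diagonal(C, 1.0)        # unit variances -> C is also the covariance matrix
beta = rng.normal(size=p)       # standard Gaussian coefficients

signal_var = beta @ C @ beta    # Var(E[Y | X]) = beta^T C beta
sigma2 = signal_var / 2.0       # chosen so that the SNR equals 2
sigma = np.sqrt(sigma2)

assert np.isclose(signal_var / sigma2, 2.0)
```

With sigma in hand, Y can then be generated as X @ beta + sigma * eps with standard Gaussian eps.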
459
statistical learning
Bayes decision boundary of Figure 2.5 in Elements of Statistical Learning
https://stats.stackexchange.com/questions/35728/bayes-decision-boundary-of-figure-2-5-in-elements-of-statistical-learning
<p>When I read "Elements of Statistical Learning", I ran into some difficulty calculating the Bayes decision boundary of Figure 2.5. The package <code>ElemStatLearn</code> already computes the probability at each point and uses contours to draw the boundary. Can anyone tell me how to calculate the probability? </p> <p>In a traditional Bayes decision problem, the mixture distributions are usually normal distributions, but in this example two steps are used to generate the samples, so I have some difficulty calculating the distribution.</p>
<p>I asked the authors this question, and apparently they no longer are in possession of the code that created the data. So there is no real way to reconstruct the Bayes rule for this particular data set. Otherwise, it would be based on the ratio of the densities that would have been known for the Gaussian mixture distributions that the authors used to create the two classes.</p>
460
statistical learning
Hints for exercise 7.3 from The elements of statistical learning
https://stats.stackexchange.com/questions/306777/hints-for-exercise-7-3-from-the-elements-of-statistical-learning
<p>I am stuck on problem 7.3 from the book 'The Elements of Statistical Learning'. This is the problem: <a href="https://i.sstatic.net/CoQ3s.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CoQ3s.jpg" alt="enter image description here"></a></p> <p>Here is my attempt to a solution for least-squares projections:</p> <p>$$y_i-\hat f(x_i)=y_i-x_i^T(X_{-i}^TX_{-i})^{-1}X_{-i}^Ty,$$</p> <p>where $X_{-i}$ is the matrix of training-example predictors with $i^{th}$ example (row) removed. Next I write $$X_{-i}^TX_{-i}=X^TX-x_ix_i^T$$ and $$X_{-i}^Ty_{-i}=X^Ty-x_iy_i.$$ Therefore,</p> <p>$$y_i-\hat f(x_i)=y_i-x_i^T(X^TX-x_ix_i^T)^{-1}(X^Ty-x_iy_i).$$ I am not able to proceed further. It would be helpful if someone can give a hint to solve this problem.</p>
<p>Some hints: You correctly note $$ X_{-i}^TX_{-i}=X^TX-\vec{x}_i\vec{x}_i^T$$($\vec{x_i}$ is a column vector), and that you need to find $$\hat{\vec{\beta}}_{-i} = (X_{-i}^TX_{-i})^{-1}X_{-i}^T\vec{y}_{-i},$$ the estimated coefficients obtained by leaving out sample $i$. This will lead you to the new predicted value for sample $i$, $$ \hat f^{-i}(\vec{x}_i) = \vec{x}_i\hat{\vec{\beta}}_{-i}.$$ It will be useful to note that $$X_{-i}^T\vec{y}_{-i} = X^T\vec{y} - \vec{x}_iy_i,$$ and that you can use the <a href="https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula" rel="nofollow noreferrer">Sherman-Morrison formula</a> to find the inverse of $(X^TX-\vec{x}_i\vec{x}_i^T)$. Also note that $$ \vec{x}_i^T(X^TX)^{-1}\vec{x}_i = S_{ii}.$$</p> <p>You can then work out what you get for $$\hat{\vec{\beta}}_{-i} = (X^TX-\vec{x}_i\vec{x}_i^T)^{-1}\left[X^Ty - \vec{x}_iy_i\right]$$ with a little algebra. Can you finish from here?</p>
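A numeric cross-check of these hints (an illustrative Python sketch of my own, not a worked solution): it verifies the Sherman-Morrison formula and the well-known leave-one-out identity the algebra leads to, namely that the leave-one-out residual equals (y_i - f(x_i)) / (1 - S_ii).

```python
# Verify Sherman-Morrison for (X^T X - x_i x_i^T)^{-1} and the LOO shortcut.
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

A = X.T @ X
i = 0
xi = X[i]

# Sherman-Morrison: (A - x x^T)^{-1} = A^{-1} + A^{-1} x x^T A^{-1} / (1 - x^T A^{-1} x)
Ainv = np.linalg.inv(A)
sm = Ainv + np.outer(Ainv @ xi, xi @ Ainv) / (1 - xi @ Ainv @ xi)
assert np.allclose(sm, np.linalg.inv(A - np.outer(xi, xi)))

# leave-one-out fit computed directly ...
X_minus, y_minus = np.delete(X, i, axis=0), np.delete(y, i)
beta_minus = np.linalg.solve(X_minus.T @ X_minus, X_minus.T @ y_minus)
loo_resid = y[i] - xi @ beta_minus

# ... versus the shortcut (y_i - f(x_i)) / (1 - S_ii), with S_ii = x_i^T A^{-1} x_i
S_ii = xi @ Ainv @ xi
beta = np.linalg.solve(A, X.T @ y)
shortcut = (y[i] - xi @ beta) / (1 - S_ii)
assert np.isclose(loo_resid, shortcut)
```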
461
statistical learning
Understanding linear projection in &quot;The Elements of Statistical Learning&quot;
https://stats.stackexchange.com/questions/185634/understanding-linear-projection-in-the-elements-of-statistical-learning
<p>In the book "The Elements of Statistical Learning" in chapter 2 ("Linear models and least squares; page no: 12"), it is written that </p> <blockquote> <p>In the (p+1)-dimensional input-output space, (X,Y) represent a hyperplane. If the constant is included in X, then the hyperplane includes the origin and is a subspace; if not, it is an affine set cutting the Y-axis at the point (0,$\beta$).</p> </blockquote> <p>I don't get the sentence "if constant is ... (0,$\beta$)". Please help? I think the hyperplane would cut the Y-axis at (0,$\beta$)in both the cases, is that correct?</p> <p>The answer below has helped somewhat, but I am looking for more specific answer. I understand that when $1$ is included in the $X$, it won't contain origin, but then how would the $(X,Y)$ would contain origin? Should not it depend on value of $\beta$? If intercept $\beta_0$ is not $0$, $(X,Y)$ should not contain origin, in my understanding? </p>
<p>Including the constant <code>1</code> in the input vector is a common trick to include a bias (think of the Y-intercept) while keeping all the terms of the expression symmetrical: you can write $\beta X$ instead of $\beta_0 + \beta X$ everywhere.</p> <p>If you do this, it is then correct that the hyperplane $Y = \beta X$ includes the origin, since the origin is a vector of $0$ values and multiplying it by $\beta$ gives the value $0$.</p> <p>However, your input vectors will always have the first element equal to $1$; therefore they will never contain the origin, and will be placed on a smaller hyperplane, which has one less dimension.</p> <p>You can visualize this by thinking of a line $Y=mx+q$ on your sheet of paper (2 dimensions). If you include the bias $q$, your vector becomes $X = [x, x_0=1]$ and your coefficients $\beta = [m, q]$. In 3 dimensions the corresponding hyperplane is a plane passing through the origin that intersects the plane $x_0=1$, producing the line where your inputs can be placed.</p>
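The bias trick can be demonstrated in a few lines (an illustrative Python sketch of my own; the slope and intercept values are made up): appending a constant 1 to each input folds the intercept into the coefficient vector, and a least-squares fit on the augmented matrix recovers both.

```python
# Fold the intercept q into beta by augmenting each input with a constant 1.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 5.0                                  # slope m = 2, intercept q = 5 (noise-free)

X_aug = np.column_stack([x, np.ones_like(x)])      # each input vector ends in a 1
coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)   # single least-squares fit, no separate bias
m, q = coef

assert np.allclose([m, q], [2.0, 5.0])
```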
462
statistical learning
QDA - Missing term in quadratic discriminant function in &#39;Introduction to Statistical Learning&#39;
https://stats.stackexchange.com/questions/490651/qda-missing-term-in-quadratic-discriminant-function-in-introduction-to-statis
<p>Some resources such as <a href="https://online.stat.psu.edu/stat508/book/export/html/696" rel="nofollow noreferrer">https://online.stat.psu.edu/stat508/book/export/html/696</a>, give the following for the quadratic discriminant function; <span class="math-container">$$ln(\pi_k)-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)-ln(|\Sigma_k|^{1/2})$$</span></p> <p>whereas 'An Introduction to Statistical Learning' omits this last term.</p> <p>Why do they omit the last term? It depends on k so surely it should be relevant to the maximisation?</p>
<p>This is listed as errata for an early edition of the book (1st edition prior to the 4th printing) <a href="http://faculty.marshall.usc.edu/gareth-james/ISL/errata.html" rel="nofollow noreferrer">http://faculty.marshall.usc.edu/gareth-james/ISL/errata.html</a></p>
463
statistical learning
Derivation of equation 6.15 of Introduction to Statistical Learning - 2nd ed
https://stats.stackexchange.com/questions/590667/derivation-of-equation-6-15-of-introduction-to-statistical-learning-2nd-ed
<p>I was reading the book &quot;Introduction to Statistical Learning - 2nd ed&quot; and I can't understand the derivation of equation 6.15 on the page 247.</p> <p><a href="https://i.sstatic.net/9m9pt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9m9pt.png" alt="Page 247 of ISLR" /></a></p> <p>This question is similar to <a href="https://stats.stackexchange.com/questions/567691/introduction-to-statistical-learning-eq-6-12-and-6-13">Introduction to Statistical Learning Eq. 6.12 and 6.13</a> but the answers there weren't satisfactory.</p> <p>My approach was to use the modulus definition to separate the equation 6.13 into two cases, take the partial derivative, and set it to zero.</p> <p><span class="math-container">$$(y_j - \beta_j)^2 + \lambda |\beta_j| = \begin{cases} y_j^2 - 2y_j\beta_j + \beta_j^2 + \lambda \beta_j, &amp; \text{if } \beta_j\gt 0\\ y_j^2 - 2y_j\beta_j + \beta_j^2 - \lambda \beta_j, &amp; \text{if } \beta_j\lt 0\\ \end{cases} $$</span></p> <p><span class="math-container">$$\frac{\partial ((y_j - \beta_j)^2 + \lambda |\beta_j|)}{\partial \beta_j} = \begin{cases} -2y_j + 2\beta_j + \lambda, &amp; \text{if } \beta_j\gt 0\\ -2y_j + 2\beta_j - \lambda, &amp; \text{if } \beta_j\lt 0\\ \end{cases} $$</span></p> <p><span class="math-container">$$ \frac{\partial ((y_j - \beta_j)^2 + \lambda |\beta_j|)}{\partial \beta_j} = 0 \iff \begin{cases} \beta_j = y_j - \frac{\lambda}{2}, &amp; \text{if } \beta_j\gt 0\\ \beta_j = y_j + \frac{\lambda}{2}, &amp; \text{if } \beta_j\lt 0\\ \end{cases} $$</span></p> <p><span class="math-container">$$ \frac{\partial ((y_j - \beta_j)^2 + \lambda |\beta_j|)}{\partial \beta_j} = 0 \iff \begin{cases} \beta_j = y_j - \frac{\lambda}{2}, &amp; \text{if } y_j\gt \frac{\lambda}{2}\\ \beta_j = y_j + \frac{\lambda}{2}, &amp; \text{if } y_j\lt \frac{- \lambda}{2}\\ \end{cases} $$</span></p> <p>Now, it is clear to me that we haven't defined the function for <span class="math-container">$\frac{- \lambda}{2} \leq y_j \leq 
\frac{\lambda}{2}$</span>, since <span class="math-container">$(| \beta_j |)'$</span> isn't defined when <span class="math-container">$\beta_j = 0$</span>.</p> <p>How can we proceed here?</p>
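As a numeric sanity check of where this derivation is heading (my own addition, not part of the question), a grid search over beta_j matches the soft-thresholding rule of equation 6.15, including beta_j = 0 on the middle interval where the absolute value is not differentiable.

```python
# Grid-minimize (y - b)^2 + lam*|b| and compare with soft thresholding:
# b = y - lam/2 if y > lam/2;  b = y + lam/2 if y < -lam/2;  b = 0 otherwise.
import numpy as np

lam = 1.0
bs = np.linspace(-3, 3, 60_001)   # step 1e-4, includes 0 exactly

def soft(y, lam):
    return np.sign(y) * max(abs(y) - lam / 2, 0.0)

for y in (-2.0, -0.3, 0.0, 0.4, 1.7):
    losses = (y - bs) ** 2 + lam * np.abs(bs)
    b_grid = bs[int(np.argmin(losses))]
    assert abs(b_grid - soft(y, lam)) < 1e-3
```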
464
statistical learning
Trying to understand splines according to &quot;The elements of statistical learning&quot;
https://stats.stackexchange.com/questions/441574/trying-to-understand-splines-according-to-the-elements-of-statistical-learning
<p>I'm working on Chapter 5 from <a href="https://web.stanford.edu/%7Ehastie/Papers/ESLII.pdf" rel="nofollow noreferrer">The elements of statistical learning</a> which describes the more general linear models that is splines. The fragment (chapter 5, page 143) below describes the alternative and more direct method of building continuous splines. <a href="https://i.sstatic.net/9sEbX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9sEbX.png" alt="Chapter 5, page 143" /></a></p> <p>To be honest I don't really know how does that base work. The previous method used indicators and then two constraints for continuity which was quite intuitive. What does ''the positive'' part mean?</p> <p>Is the final function <span class="math-container">$f(X) = \sum \beta_i h_i(X)$</span> just a sum of <span class="math-container">$h_1, h_2, h_3, h_4$</span> described above or do I need some indicators too?</p>
<p>The positive part of a function <span class="math-container">$f(x)$</span>, denoted <span class="math-container">$f^+(x)$</span> or <span class="math-container">$f(x)_+$</span>, is defined as: <span class="math-container">$$ f(x)_+ = \max(f(x),0) $$</span></p> <p>For example, consider the function <span class="math-container">$h_3(x) = (X-\xi_1)_+ = \max(X-\xi_1,0)$</span>. Whenever <span class="math-container">$X \leq \xi_1$</span>, <span class="math-container">$X-\xi_1 \leq 0$</span> and <span class="math-container">$\max(X-\xi_1,0)=0$</span>. And for <span class="math-container">$X \geq \xi_1$</span>, <span class="math-container">$(X-\xi_1)_+= X-\xi_1$</span>.</p> <p>So if <span class="math-container">$h_1,h_2,\dots$</span> are defined using positive parts, then <span class="math-container">$\sum \beta_ih_i(x)$</span> already includes the indicators (inside the positive parts).</p> <p>Just for information, the negative part of a function is defined as</p> <p><span class="math-container">$$ f(x)_- = - \min(f(x),0) $$</span></p> <p>And we have <span class="math-container">$f(x) = f(x)_+ - f(x)_-$</span>.</p>
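The definitions above translate directly into code. An illustrative Python sketch (my own; the knot value and sample points are made up): the truncated basis function (X - xi)_+ is zero below the knot and linear above it, and any f decomposes as f = f_+ - f_-.

```python
# Positive part (X - xi)_+ = max(X - xi, 0), and the decomposition f = f_+ - f_-.
import numpy as np

xi = 1.0                                   # knot location (made up)
X = np.array([-2.0, 0.5, 1.0, 3.0])

h3 = np.maximum(X - xi, 0.0)               # truncated power basis function (X - xi)_+
assert np.allclose(h3, [0.0, 0.0, 0.0, 2.0])

f = X - xi
f_plus = np.maximum(f, 0.0)                # positive part
f_minus = -np.minimum(f, 0.0)              # negative part
assert np.allclose(f, f_plus - f_minus)    # f = f_+ - f_-
```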
465
statistical learning
Introduction to statistical learning Ch. 3 Pages 65-66
https://stats.stackexchange.com/questions/534570/introduction-to-statistical-learning-ch-3-pages-65-66
<p>In the textbook <a href="https://static1.squarespace.com/static/5ff2adbe3fe4fe33db902812/t/6062a083acbfe82c7195b27d/1617076404560/ISLR%2BSeventh%2BPrinting.pdf" rel="nofollow noreferrer"><em>Introduction to Statistical Learning with Applications in R</em></a> by James et al. (2014), the authors give the following formula for the standard error of the sample mean on page 65:</p> <blockquote> <p>We have the well-known formula <span class="math-container">$$\text{Var}(\hat{\mu}) = \text{SE}(\hat{\mu})^2 = \frac{\sigma^2}{n}, \tag{3.7}$$</span> where <span class="math-container">$\sigma$</span> is the standard deviation of each of the realizations <span class="math-container">$y_i$</span> of <span class="math-container">$Y$</span>.<span class="math-container">$^2$</span></p> </blockquote> <p>The corresponding footnote states:</p> <blockquote> <p><span class="math-container">$^2$</span> This formula holds provided that the <span class="math-container">$n$</span> observations are uncorrelated.</p> </blockquote> <p>I can't wrap my head around this, each <span class="math-container">$y_i$</span> has an exact value, how can it have a standard deviation?</p> <p>On page 66, they then add:</p> <blockquote> <p><span class="math-container">$$\begin{align}&amp;\text{SE}(\hat{\beta}_0)^2 = \sigma^2 \left[ \frac{1}{n} + \frac{\overline{x}^2}{\sum^n_{i=1}(x_i - \overline{x})^2} \right], \\ &amp;\text{SE}(\hat{\beta}_1)^2 = \frac{\sigma^2}{\sum^n_{i=1} (x_i - \overline{x})^2}, \tag{3.8} \end{align}$$</span> where <span class="math-container">$\sigma^2 = \text{Var}(\epsilon)$</span></p> </blockquote> <p>Is the <span class="math-container">$\sigma^2$</span> in equation <span class="math-container">$(3.8)$</span> the same as that in <span class="math-container">$(3.7)$</span>?</p>
<blockquote> <p>I can't wrap my head around this, each <span class="math-container">$y_i$</span> has an exact value, how can it have a standard deviation?</p> </blockquote> <p>Because each <span class="math-container">$y_i$</span> is the realization of a random variable. Look at page 61: <span class="math-container">$Y\approx \beta_0+\beta_1 X$</span> says that <span class="math-container">$Y$</span> <em>is approximately modeled as</em> <span class="math-container">$\beta_0+\beta_1 X$</span>, i.e. each <span class="math-container">$y_i$</span> will not be equal to <span class="math-container">$\beta_0+\beta_1 x_i$</span>, it will be different, it will be <span class="math-container">$\beta_0+\beta_1 x_i+\epsilon_i$</span> (page 63) so it will not be an 'exact' value, it will be a random value because <span class="math-container">$\epsilon_i$</span> is a random variable.</p> <p>If you are interested in knowing the population mean <span class="math-container">$\mu$</span> of some random variable <span class="math-container">$Y$</span>, you can use <span class="math-container">$n$</span> observations from <span class="math-container">$Y$</span>, <span class="math-container">$y_i,\dots,y_n$</span>. Let's say that the random variable is <span class="math-container">$Y\sim\mathcal{N}(\mu,\sigma^2)$</span>. The sample mean <span class="math-container">$\hat\mu=\frac1n\sum_{i=1}^n y_i$</span> can be used to estimate the population mean <span class="math-container">$\mu$</span> (which is unknown) but it is another random variable and one can show that <span class="math-container">$E[\hat\mu]=\mu$</span>, <span class="math-container">$\text{Var}(\hat\mu)=\sigma^2/n$</span>. Since each <span class="math-container">$y_i$</span> is a random value, its 'exact' value could be different (when you throw a die and get 4, 4 is an 'exact' value, but it could be 1, 2, 3, 5, or 6). 
Therefore you can guess that &quot;a single estimate <span class="math-container">$\hat\mu$</span> may be a substantial underestimate or overestimate of <span class="math-container">$\mu$</span>&quot; (page 65). If your observations were an <em>exact</em> representation of <span class="math-container">$Y$</span>, then <span class="math-container">$\hat\mu$</span> would always be equal to <span class="math-container">$\mu$</span>.</p> <p>However, &quot;Equation 3.7 also tells us how this deviation shrinks with <span class="math-container">$n$</span>—the more observations we have, the smaller the standard error of <span class="math-container">$\hat\mu$</span>.&quot; (page 66).</p> <blockquote> <p>Is the <span class="math-container">$\sigma^2$</span> in equation (3.8) the same as that in (3.7)?</p> </blockquote> <p>They play the same role. One could say that in (3.7) the population model is <span class="math-container">$Y=\mu+\epsilon$</span> with <span class="math-container">$E[Y]=\mu$</span> and <span class="math-container">$\text{Var}(Y)=\text{Var}(\epsilon)=\sigma^2$</span>, <span class="math-container">$$\begin{cases} Y=\mu+\epsilon \\ \epsilon\sim\mathcal{N}(0,\sigma^2) \end{cases}\quad\Leftrightarrow\quad Y\sim\mathcal{N}(\mu,\sigma^2)$$</span> in (3.8) the population model is <span class="math-container">$Y=\beta_0+\beta_1X+\epsilon$</span> with <span class="math-container">$E[Y]=\beta_0+\beta_1X$</span> and <span class="math-container">$\text{Var}(Y)=\text{Var}(\epsilon)=\sigma^2$</span>, <span class="math-container">$$\begin{cases} Y=\beta_0+\beta_1X+\epsilon \\ \epsilon\sim\mathcal{N}(0,\sigma^2) \end{cases}\quad\Leftrightarrow\quad Y\sim\mathcal{N}(\beta_0+\beta_1X,\sigma^2)$$</span></p>
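Equation (3.7) is easy to see in a simulation: repeatedly draw $n$ i.i.d. observations, compute $\hat\mu$ each time, and compare the variance of the resulting estimates with $\sigma^2/n$. The parameters below are made up:

```python
# Simulation of (3.7): Var(mu_hat) ~= sigma^2 / n for i.i.d. draws.
import random, statistics

random.seed(0)
mu, sigma, n = 5.0, 2.0, 25
reps = 20000

# each replication: a fresh sample of size n and its sample mean mu_hat
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(reps)]

var_hat = statistics.pvariance(means)
print(var_hat, sigma**2 / n)  # the two should be close (here ~0.16)
assert abs(var_hat - sigma**2 / n) < 0.02
```

This is exactly the sense in which each $y_i$ "has" a standard deviation: a different draw of the data would have produced different values, and $\hat\mu$ inherits that randomness.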
466
statistical learning
Understanding Vapnik&#39;s Regression Function in Statistical Learning Theory
https://stats.stackexchange.com/questions/605801/understanding-vapniks-regression-function-in-statistical-learning-theory
<p>I am struggling to understand what is meant by Equation 1.8 in Vapnik's Statistical Learning Theory. Vapnik introduces the equation below as the <em>regression</em> function.</p> <p><span class="math-container">$ r(x) = \int y\ dF(y|x) $</span></p> <p>I know that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are random variables -- where <span class="math-container">$x$</span> is an input and <span class="math-container">$y$</span> should be the outcome. I know <span class="math-container">$F(y|x)$</span> is the conditional probability distribution where</p> <p><span class="math-container">$F(y|x) = \frac{F(y,x)}{F(x)}$</span></p> <p>But I don't understand what <span class="math-container">$r(x)$</span> is. What does it mean to integrate a random variable, <span class="math-container">$y$</span>, w.r.t. <span class="math-container">$dF(y|x)$</span>?</p> <p>To try and be more clear about my confusion, if I were to assume the joint space: <span class="math-container">$F(x,y)$</span> looked similar to this: <a href="https://en.wikipedia.org/wiki/File:Multivariate_normal_sample.svg" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/File:Multivariate_normal_sample.svg</a>, then my <span class="math-container">$F(y|x)$</span> would be a normal distribution for some <span class="math-container">$x=c$</span> where <span class="math-container">$c$</span> is a constant.</p> <p>So, <span class="math-container">$F(y|x=c)=\frac{1}{\sigma \sqrt{2\pi}}\exp(-\frac{1}{2}(\frac{y-\mu}{\sigma})^2)$</span>, and when I plug this example into the original equation I get:</p> <p><span class="math-container">$ r(x) = \int y\ d(\frac{1}{\sigma \sqrt{2\pi}}\exp(-\frac{1}{2}(\frac{y-\mu}{\sigma})^2)) $</span></p> <p>I am really confused because (1) I have no clue how one would perform this integration; (2) I have no idea what I'm actually integrating here.</p> <p>I am also looking for more probability-calculus resources because I know I am weak in understanding this.
Any suggestions would be greatly appreciated. And thank you for reading and helping!</p>
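One way to make the Stieltjes notation concrete: when $F(y|x)$ has a density $p(y|x)$, the integral $r(x)=\int y\, dF(y|x)$ is just $\int y\, p(y|x)\, dy = E[Y \mid X=x]$. For the Gaussian example in the question, a plain Riemann sum over the density recovers the conditional mean $\mu$ (the values of $\mu$, $\sigma$ below are made up):

```python
# Hedged sketch: r(x) = integral of y * p(y|x) dy, evaluated numerically
# for a Gaussian conditional density at a fixed x = c.
import math

mu, sigma = 1.5, 0.8  # hypothetical conditional mean / sd at x = c

def p(y):
    # Gaussian density of Y given X = c
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# midpoint rule over mu +/- 8 sigma (the tails beyond are negligible)
lo, hi, m = mu - 8 * sigma, mu + 8 * sigma, 100000
dy = (hi - lo) / m
r = sum((lo + (i + 0.5) * dy) * p(lo + (i + 0.5) * dy) * dy for i in range(m))

assert abs(r - mu) < 1e-6  # the regression function at x = c equals mu
```

So $r(x)$ is not "an integral of a random variable" but an ordinary integral over the possible values $y$, weighted by the conditional distribution; the $dF$ form simply also covers distributions without a density.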
467
statistical learning
Question about an equation on bagging in Elements of Statistical Learning book
https://stats.stackexchange.com/questions/578939/question-about-an-equation-on-bagging-in-elements-of-statistical-learning-book
<p>This should be a simple question but I must have missed something.</p> <p>Equation (8.52) of Section 8.7 Bagging on page 285 of Trevor Hastie, <em>The Elements of Statistical Learning: Data Mining, Inference, and Prediction</em> is equivalent to <span class="math-container">$$\mathbf E_{\mathcal P}[(Y-f_{\text{ag}}(x))(\hat{f^*}-f_{\text{ag}}(x))]=0$$</span> where <span class="math-container">$f_{\text{ag}}(x):=\mathbf E_{\mathcal P}\hat{f^*}$</span>. I do not see the rationale of this equation. Could someone shed some light on the above equation?</p>
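A possible reading: with the query point $x$ and response $Y=y$ held fixed, the factor $(y - f_{\text{ag}}(x))$ is a constant under $\mathbf E_{\mathcal P}$, so the expectation factorizes as $(y - f_{\text{ag}}(x))\,\mathbf E_{\mathcal P}[\hat{f^*} - f_{\text{ag}}(x)] = 0$, because $f_{\text{ag}}(x)$ is defined as $\mathbf E_{\mathcal P}\hat{f^*}$. A toy Monte Carlo (estimator and distributions made up, with the sample average standing in for $\mathbf E_{\mathcal P}$) shows the cross term vanishing by construction:

```python
# Toy illustration: the cross term is zero because f_ag is the mean of f*.
import random, statistics

random.seed(1)
y = 3.0          # fixed response at the query point
reps, n = 20000, 10

def f_star():
    # a deliberately simple "estimator": mean of a fresh training sample from P
    return statistics.fmean(random.gauss(2.0, 1.0) for _ in range(n))

draws = [f_star() for _ in range(reps)]
f_ag = statistics.fmean(draws)  # Monte Carlo stand-in for E_P[f*]

# E_P[(y - f_ag)(f* - f_ag)] = (y - f_ag) * E_P[f* - f_ag] = (y - f_ag) * 0
cross = statistics.fmean((y - f_ag) * (d - f_ag) for d in draws)

assert abs(cross) < 1e-9  # zero up to floating point, by the factorization
```

The same algebra is what makes the squared-error decomposition in (8.52) work: the cross term drops out, leaving the bias and variance pieces.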
468
statistical learning
Can you explain this description of tree pruning in Intro to Statistical Learning?
https://stats.stackexchange.com/questions/643222/can-you-explain-this-description-of-tree-pruning-in-intro-to-statistical-learnin
<p>The underlined sentences below from p. 331 in <a href="https://www.statlearning.com/" rel="nofollow noreferrer">An Introduction to Statistical Learning</a> have me scratching my head: Given that the splitting algorithm always finds the best next split in terms of error reduction, how could it be possible for there to be a worthless split followed by a good split later on? And isn't setting a threshold the default means to implement a complexity cost -- for example the <code>cp</code> parameter in <code>rpart()</code>?</p> <p><a href="https://i.sstatic.net/hpgIm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hpgIm.png" alt="tree pruning, p. 331" /></a></p>
<p>It's quite possible for a bad split to be followed by a good split. Consider two predictor variables centered on (0,0), where the outcome variable <span class="math-container">$Y$</span> is 1 if the variables have the same sign and 0 if they have different signs. No single split will predict usefully, but a split on the first predictor followed by a split on the second predictor can be highly accurate.</p> <p>That sort of scenario is why pruning can be helpful (and why backwards selection can beat forwards selection).</p> <p>Even with pruning, tree building doesn't find the optimal tree, but pruning does allow it to (sometimes) find better trees than simple forward selection would.</p>
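The XOR-style scenario in the answer can be sketched numerically. Below is a made-up version where, for simplicity, candidate splits are taken at zero on each coordinate rather than searched over all thresholds; the point survives either way:

```python
# Toy XOR demo: any single split is near chance, but a split on x1
# followed by splits on x2 classifies perfectly.
import random

random.seed(2)
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2000)]
labels = [1 if (a > 0) == (b > 0) else 0 for a, b in data]

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def one_split(axis):
    # single split at 0: majority label on each side
    left = [y for p, y in zip(data, labels) if p[axis] <= 0]
    right = [y for p, y in zip(data, labels) if p[axis] > 0]
    lmaj = int(2 * sum(left) >= len(left))
    rmaj = int(2 * sum(right) >= len(right))
    return accuracy([lmaj if p[axis] <= 0 else rmaj for p in data])

def depth_two():
    # split on x1, then on x2: majority label in each of the four cells
    cells = {}
    for (a, b), y in zip(data, labels):
        cells.setdefault((a > 0, b > 0), []).append(y)
    maj = {k: int(2 * sum(v) >= len(v)) for k, v in cells.items()}
    return accuracy([maj[(a > 0, b > 0)] for a, b in data])

acc_one = max(one_split(0), one_split(1))
acc_two = depth_two()
assert acc_one < 0.6   # the "worthless" first split
assert acc_two == 1.0  # the good split that follows it
```

A greedy stopping rule based on a per-split improvement threshold would never make the first split here, which is exactly why grow-then-prune can beat stop-early.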
469
statistical learning
Deriving the in-sample error for linear model from the elements of statistical learning
https://stats.stackexchange.com/questions/91766/deriving-the-in-sample-error-for-linear-model-from-the-elements-of-statistical-l
<p>From the elements of statistical learning, it was claimed that $$ \frac{1}{N}\sum_{i=1}^N ||h(x_i) ||^2 \sigma^2_\varepsilon= \frac{p}{N}\sigma^2_\varepsilon$$</p> <p>where $h(x_i) = X(X^TX)^{-1}x_i$. Can someone show me how to prove this ? Thanks</p> <p>This came from the image below</p> <p><img src="https://i.sstatic.net/3658P.png" alt="enter image description here"></p>
<p>$$\begin{aligned} \sum_{i=1}^{N}\|\textbf{h}(x_i)\|^2 &=\sum_{i=1}^{N}\|\textbf{X}(\textbf{X}^T\textbf{X})^{-1}x_i\|^2 \\ &=tr\{(\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T)^T(\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T)\} \\ &=tr\{(\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T)(\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T)\} \\ &=tr\{\textbf{X}(\textbf{X}^T\textbf{X})^{-1}(\textbf{X}^T\textbf{X})(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\} \\ &=tr\{\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\} \\ &=tr\{\textbf{X}^T\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\} \\ &=tr\{\textbf{I}_p\}=p \end{aligned}$$ Q.E.D.</p>
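The trace identity is easy to spot-check numerically with a made-up design matrix: the columns of the hat matrix $H = X(X^TX)^{-1}X^T$ are the vectors $h(x_i)$, and their squared norms sum to $\operatorname{tr}(H^TH) = \operatorname{tr}(H) = p$:

```python
# Numeric spot-check: sum_i ||h(x_i)||^2 = trace(H) = p.
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 4
X = rng.normal(size=(N, p))

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix (symmetric, idempotent)
# h(x_i) = X (X^T X)^{-1} x_i is the i-th column of H
total = sum(np.sum(H[:, i] ** 2) for i in range(N))

assert abs(total - p) < 1e-8
assert abs(np.trace(H) - p) < 1e-8
```

Dividing by $N$ and multiplying by $\sigma^2_\varepsilon$ then gives the $\frac{p}{N}\sigma^2_\varepsilon$ term in the book.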
470
statistical learning
Question on loss function notation in Elements of Statistical Learning II
https://stats.stackexchange.com/questions/517082/question-on-loss-function-notation-in-elements-of-statistical-learning-ii
<p>In <a href="https://web.stanford.edu/%7Ehastie/Papers/ESLII.pdf" rel="nofollow noreferrer">Elements of Statistical Learning II</a> on page 349, the multinomial deviance loss function is given by <span class="math-container">$L(y,p(x))=-\sum_{k=1}^KI(y=G_k)f_k(x)+\log(\sum_{\ell=1}^Ke^{f_\ell(x)})$</span>, but there is no explanation given as to why the index for the first summation is denoted <span class="math-container">$k$</span> and the one for the second <span class="math-container">$\ell$</span>. What adds even more to my confusion is that on the previous page, they define the class probabilities <span class="math-container">$p_k$</span> as <span class="math-container">$p_k(x)=\frac{e^{f_k(x)}}{\sum_{l=1}^Ke^{f_l(x)}}$</span>, using yet another index <span class="math-container">$l$</span> with no explanation. Does anybody here understand what the distinction is and what that means for the interpretation of the loss function?</p>
<p>It's done to prevent confusion, because the second summation is inside the first one. If you have two nested summations or for loops, you wouldn't use <span class="math-container">$i$</span> to index both, right?</p> <p>Also, since the deviance loss is a function of <span class="math-container">$p_k(x)$</span>, that <span class="math-container">$l$</span> comes from the expression of <span class="math-container">$p_k(x)$</span> you've written:</p> <p><span class="math-container">$$p_k(x)=\frac{e^{f_k(x)}}{\sum_{l=1}^K e^{f_l(x)}}$$</span></p> <p>If you don't use another index <span class="math-container">$l$</span> for the summation, the expression becomes ambiguous about which <span class="math-container">$k$</span> is meant (the subscript <span class="math-container">$k$</span> in the formula, or the summation index).</p>
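The nesting is easy to see in code (scores below are made up): the log-partition term sums over all classes $\ell$, regardless of which outer class $k$ is the true one, and the loss for the true class reduces to $-\log p_k(x)$:

```python
# Softmax probabilities and multinomial deviance; the inner sum over l
# is the same normalizer Z for every outer class index k.
import math

f = [1.0, 2.0, 0.5]                  # made-up scores f_k(x), K = 3
Z = sum(math.exp(fl) for fl in f)    # inner sum over l = 1..K
p = [math.exp(fk) / Z for fk in f]   # p_k(x)

assert abs(sum(p) - 1.0) < 1e-12

# Deviance when the true class is k: -f_k(x) + log Z = -log p_k(x)
k = 1
loss = -f[k] + math.log(Z)
assert abs(loss - (-math.log(p[k]))) < 1e-12
```

If the inner loop reused `k`, it would clobber the outer class index, which is exactly the ambiguity the distinct letters avoid.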
471
statistical learning
On the importance of the i.i.d. assumption in statistical learning
https://stats.stackexchange.com/questions/213464/on-the-importance-of-the-i-i-d-assumption-in-statistical-learning
<p>In statistical learning, implicitly or explicitly, one <em>always</em> assumes that the training set $\mathcal{D} = \{ \bf {X}, \bf{y} \}$ is composed of $N$ input/response tuples $({\bf{X}}_i,y_i)$ that are <em>independently drawn from the same joint distribution</em> $\mathbb{P}({\bf{X}},y)$ with</p> <p>$$ p({\bf{X}},y) = p( y \vert {\bf{X}}) p({\bf{X}}) $$</p> <p>and $p( y \vert {\bf{X}})$ the relationship we are trying to capture through a particular learning algorithm. Mathematically, this i.i.d. assumption writes:</p> <p>\begin{gather} ({\bf{X}}_i,y_i) \sim \mathbb{P}({\bf{X}},y), \forall i=1,...,N \\ ({\bf{X}}_i,y_i) \text{ independent of } ({\bf{X}}_j,y_j), \forall i \ne j \in \{1,...,N\} \end{gather}</p> <p>I think we can all agree that this assumption is <em>rarely satisfied</em> in practice, see this related <a href="https://stats.stackexchange.com/q/82096/109618">SE question</a> and the wise comments of @Glen_b and @Luca.</p> <p>My question is therefore: </p> <blockquote> <p>Where exactly does the i.i.d. assumption becomes critical in practice? </p> </blockquote> <p><strong>[Context]</strong> </p> <p>I'm asking this because I can think of many situations where such a stringent assumption is not needed to train a certain model (e.g. linear regression methods), or at least one can work around the i.i.d. assumption and obtain robust results. Actually the <em>results</em> will usually stay the same, it is rather the <em>inferences</em> that one can draw that will change (e.g. heteroskedasticity and autocorrelation consistent HAC estimators in linear regression: the idea is to re-use the good old OLS regression weights but to adapt the finite-sample behaviour of the OLS estimator to account for the violation of the Gauss-Markov assumptions). </p> <p>My guess is therefore that <em>the i.i.d. 
assumption is required not to be able to train a particular learning algorithm, but rather to guarantee that techniques such as cross-validation can indeed be used to infer a reliable measure of the model's capability of generalising well</em>, which is the only thing we are interested in at the end of the day in statistical learning because it shows that we can indeed learn from the data. Intuitively, I can indeed understand that using cross-validation on dependent data could be optimistically biased (as illustrated/explained in <a href="https://github.com/ogrisel/notebooks/blob/master/Non%20IID%20cross-validation.ipynb" rel="noreferrer">this interesting example</a>). </p> <p>For me i.i.d. has thus nothing to do with <em>training</em> a particular model but everything to do with that model's <em>generalisability</em>. This seems to agree with a paper I found by Huan Xu et al, see "Robustness and Generalizability for Markovian Samples" <a href="https://www.google.be/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=2&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwiThZWy8e_MAhXoFJoKHZXECsIQFggnMAE&amp;url=http%3A%2F%2Fwww.ecmlpkdd2009.net%2Fwp-content%2Fuploads%2F2008%2F09%2Flearning-from-non-iid-data-theory-algorithms-and-practice.pdf&amp;usg=AFQjCNF5eOVWeNsP5OADpQ0oNJRwzWI8uQ&amp;sig2=bVae1fdm8HwKIFu9nOlLVg" rel="noreferrer">here</a>. </p> <p>Would you agree with that?</p> <p><strong>[Example]</strong> </p> <p>If this can help the discussion, consider the problem of using the LASSO algorithm to perform a smart selection amongst $P$ features given $N$ training samples $({\bf{X}}_i,y_i)$ with $\forall i=1,...,N$ $$ {\bf{X}}_i=[X_{i1},...,X_{iP}] $$ We can further assume that:</p> <ul> <li>The inputs ${\bf{X}}_i$ are dependent hence leading to a violation of the i.i.d. assumption (e.g. 
for each feature $j=1,..,P$ we observe a $N$ point time series, hence introducing temporal auto-correlation)</li> <li>The conditional responses $y_i \vert {\bf{X}}_i$ are independent.</li> <li>We have $P \gg N$. </li> </ul> <p>In what way(s) does the violation of the i.i.d. assumption can pose problem in that case assuming we plan to determine the LASSO penalisation coefficient $\lambda$ using a cross-validation approach (on the full data set) + use a nested cross-validation to get a feel for the generalisation error of this learning strategy (we can leave the discussion concerning the inherent pros/cons of the LASSO aside, except if it is useful).</p>
<p>The i.i.d. assumption about the pairs <span class="math-container">$(\mathbf{X}_i, y_i)$</span>, <span class="math-container">$i = 1, \ldots, N$</span>, is often made in statistics and in machine learning. Sometimes for a good reason, sometimes out of convenience and sometimes just because we usually make this assumption. To satisfactorily answer if the assumption is really necessary, and what the consequences are of not making this assumption, I would easily end up writing a book (if you ever easily end up doing something like that). Here I will try to give a brief overview of what I find to be the most important aspects.</p> <h2>A fundamental assumption</h2> <p>Let's assume that we want to learn a probability model of <span class="math-container">$y$</span> given <span class="math-container">$\mathbf{X}$</span>, which we call <span class="math-container">$p(y \mid \mathbf{X})$</span>. We do not make any assumptions about this model a priori, but we will make the minimal assumption that such a model exists such that</p> <ul> <li>the conditional distribution of <span class="math-container">$y_i$</span> given <span class="math-container">$\mathbf{X}_i$</span> is <span class="math-container">$p(y_i \mid \mathbf{X}_i)$</span>.</li> </ul> <p>What is worth noting about this assumption is that the conditional distribution of <span class="math-container">$y_i$</span> depends on <span class="math-container">$i$</span> only through <span class="math-container">$\mathbf{X}_i$</span>. This is what makes the model useful, e.g. for prediction. The assumption holds as a consequence of the <em>identically distributed</em> part under the i.i.d. 
assumption, but it is weaker because we don't make any assumptions about the <span class="math-container">$\mathbf{X}_i$</span>'s.</p> <p>In the following the focus will mostly be on the role of independence.</p> <h2>Modelling</h2> <p>There are two major approaches to learning a model of <span class="math-container">$y$</span> given <span class="math-container">$\mathbf{X}$</span>. One approach is known as <em>discriminative</em> modelling and the other as <em>generative</em> modelling.</p> <ul> <li><strong>Discriminative modelling</strong>: We model <span class="math-container">$p(y \mid \mathbf{X})$</span> directly, e.g. a logistic regression model, a neural network, a tree or a random forest. The <em>working modelling assumption</em> will typically be that the <span class="math-container">$y_i$</span>'s are conditionally independent given the <span class="math-container">$\mathbf{X}_i$</span>'s, though estimation techniques relying on subsampling or bootstrapping make most sense under the i.i.d. or the weaker exchangeability assumption (see below). But generally, for discriminative modelling we don't need to make distributional assumptions about the <span class="math-container">$\mathbf{X}_i$</span>'s.</li> <li><strong>Generative modelling</strong>: We model the joint distribution, <span class="math-container">$p(\mathbf{X}, y)$</span>, of <span class="math-container">$(\mathbf{X}, y)$</span> typically by modelling the conditional distribution <span class="math-container">$p(\mathbf{X} \mid y)$</span> and the marginal distribution <span class="math-container">$p(y)$</span>. Then we use Bayes's formula for computing <span class="math-container">$p(y \mid \mathbf{X})$</span>. Linear discriminant analysis and naive Bayes methods are examples. The <em>working modelling assumption</em> will typically be the i.i.d. assumption.</li> </ul> <p>For both modelling approaches the working modelling assumption is used to derive or propose learning methods (or estimators). 
That could be by maximising the (penalised) log-likelihood, minimising the empirical risk or by using Bayesian methods. Even if the working modelling assumption is wrong, the resulting method can still provide a sensible fit of <span class="math-container">$p(y \mid \mathbf{X})$</span>.</p> <p>Some techniques used together with discriminative modelling, such as bagging (bootstrap aggregation), work by fitting many models to data sampled randomly from the dataset. Without the i.i.d. assumption (or exchangeability) the resampled datasets will not have a joint distribution similar to that of the original dataset. Any dependence structure has become &quot;messed up&quot; by the resampling. I have not thought deeply about this, but I don't see why that should necessarily break the method as a method for learning <span class="math-container">$p(y \mid \mathbf{X})$</span>. At least not for methods based on the working independence assumptions. I am happy to be proved wrong here.</p> <h2>Consistency and error bounds</h2> <p>A central question for all learning methods is whether they result in models close to <span class="math-container">$p(y \mid \mathbf{X})$</span>. There is a vast theoretical literature in statistics and machine learning dealing with consistency and error bounds. A main goal of this literature is to prove that the learned model is close to <span class="math-container">$p(y \mid \mathbf{X})$</span> when <span class="math-container">$N$</span> is large. Consistency is a qualitative assurance, while error bounds provide (semi-) explicit quantitative control of the closeness and give rates of convergence.</p> <p>The theoretical results all rely on assumptions about the joint distribution of the observations in the dataset. Often the working modelling assumptions mentioned above are made (that is, conditional independence for discriminative modelling and i.i.d. for generative modelling). 
For discriminative modelling, consistency and error bounds will require that the <span class="math-container">$\mathbf{X}_i$</span>'s fulfil certain conditions. In classical regression one such condition is that <span class="math-container">$\frac{1}{N} \mathbb{X}^T \mathbb{X} \to \Sigma$</span> for <span class="math-container">$N \to \infty$</span>, where <span class="math-container">$\mathbb{X}$</span> denotes the design matrix with rows <span class="math-container">$\mathbf{X}_i^T$</span>. Weaker conditions may be enough for consistency. In sparse learning another such condition is the restricted eigenvalue condition, see e.g. <a href="https://projecteuclid.org/euclid.ejs/1260801227" rel="noreferrer">On the conditions used to prove oracle results for the Lasso</a>. The i.i.d. assumption together with some technical distributional assumptions imply that some such sufficient conditions are fulfilled with large probability, and thus the i.i.d. assumption may prove to be a sufficient but not a necessary assumption to get consistency and error bounds for discriminative modelling.</p> <p>The working modelling assumption of independence may be wrong for either of the modelling approaches. As a rough rule-of-thumb one can still expect consistency if the data comes from an <em>ergodic process</em>, and one can still expect some error bounds if the process is <em>sufficiently fast mixing</em>. A precise mathematical definition of these concepts would take us too far away from the main question. It is enough to note that there exist dependence structures besides the i.i.d. assumption for which the learning methods can be proved to work as <span class="math-container">$N$</span> tends to infinity.</p> <p>If we have more detailed knowledge about the dependence structure, we may choose to replace the working independence assumption used for modelling with a model that captures the dependence structure as well. This is often done for time series. 
A better working model may result in a more efficient method.</p> <h2>Model assessment</h2> <p>Rather than proving that the learning method gives a model close to <span class="math-container">$p(y \mid \mathbf{X})$</span> it is of great practical value to obtain a (relative) assessment of &quot;how good a learned model is&quot;. Such assessment scores are comparable for two or more learned models, but they will not provide an absolute assessment of how close a learned model is to <span class="math-container">$p(y \mid \mathbf{X})$</span>. Estimates of assessment scores are typically computed empirically based on splitting the dataset into a training and a test dataset or by using cross-validation.</p> <p>As with bagging, a random splitting of the dataset will &quot;mess up&quot; any dependence structure. However, for methods based on the working independence assumptions, ergodicity assumptions weaker than i.i.d. should be sufficient for the assessment estimates to be reasonable, though standard errors on these estimates will be very difficult to come up with.</p> <p>[<strong>Edit:</strong> Dependence among the variables will result in a distribution of the learned model that differs from the distribution under the i.i.d. assumption. The estimate produced by cross-validation is not obviously related to the generalization error. If the dependence is strong, it will most likely be a poor estimate.]</p> <h2>Summary (tl;dr)</h2> <p>All the above is under the assumption that there is a fixed conditional probability model, <span class="math-container">$p(y \mid \mathbf{X})$</span>. 
Thus there cannot be trends or sudden changes in the conditional distribution not captured by <span class="math-container">$\mathbf{X}$</span>.</p> <p>When learning a model of <span class="math-container">$y$</span> given <span class="math-container">$\mathbf{X}$</span>, independence plays a role as</p> <ul> <li>a useful working modelling assumption that allows us to derive learning methods</li> <li>a sufficient but not necessary assumption for proving consistency and providing error bounds</li> <li>a sufficient but not necessary assumption for using random data splitting techniques such as bagging for learning and cross-validation for assessment.</li> </ul> <p>To understand precisely what alternatives to i.i.d. that are also sufficient is non-trivial and to some extent a research subject.</p>
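The model-assessment point (random splitting "messes up" dependence and can give optimistic estimates) can be sketched with an extreme toy dependence structure: every observation has an exact duplicate, and the labels carry no signal at all. The setup and numbers below are made up:

```python
# Toy demo: with dependent observations (duplicated points), a random split
# leaks each test point's "twin" into training and 1-NN looks far better
# than it generalizes; a dependence-respecting split tells the truth.
import random

random.seed(3)
base = [(random.random(), random.choice([0, 1])) for _ in range(400)]
data = base * 2                 # dependence: every observation has a twin
random.shuffle(data)
train, test = data[:600], data[600:]

def knn1_acc(train_set, test_set):
    hits = 0
    for x, y in test_set:
        yhat = min(train_set, key=lambda t: abs(t[0] - x))[1]
        hits += (yhat == y)
    return hits / len(test_set)

random_split_acc = knn1_acc(train, test)

# grouped split: both twins of every pair stay on the same side
grouped_acc = knn1_acc(base[:300] * 2, base[300:])

assert random_split_acc > 0.78  # optimistic: twins leak across the split
assert grouped_acc < 0.70       # honest: labels are pure noise w.r.t. x
```

Real dependence (e.g. autocorrelated time series) is milder than exact duplication, but the mechanism is the same, which is why blocked or grouped cross-validation is used for dependent data.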
472
statistical learning
The Intercept terms for Ridge regression, lasso , pcr and PLS (elements of statistical learning)
https://stats.stackexchange.com/questions/92590/the-intercept-terms-for-ridge-regression-lasso-pcr-and-pls-elements-of-stati
<p>In table 3.3 (page 63) of the elements of statistical learning book, the intercept terms for Ridge regression, lasso , pcr and PLS differ. </p> <p>However, according to the theory in the book, these models should all have the same $\hat{\beta_0} = \bar{y}$. How are the intercepts estimated in the table ? </p> <p>Note that: all these models are applied onto the same dataset, where the inputs are centered. <img src="https://i.sstatic.net/DD4LY.png" alt="enter image description here"></p>
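The theoretical claim in the question (with centered inputs and an unpenalized intercept, the ridge intercept estimate is $\bar{y}$) can at least be verified numerically; the data and penalty below are made up, and this does not by itself explain the particular numbers printed in the table:

```python
# Check: with centered X and an unpenalized intercept, ridge gives b0 = ybar.
import numpy as np

rng = np.random.default_rng(4)
N, p, lam = 60, 3, 2.5
X = rng.normal(size=(N, p))
X = X - X.mean(axis=0)                 # center each input
y = 1.0 + X @ rng.normal(size=p) + rng.normal(size=N)

# Augment with an intercept column; penalize only the slope coefficients.
Xa = np.hstack([np.ones((N, 1)), X])
P = lam * np.eye(p + 1)
P[0, 0] = 0.0                          # leave the intercept unpenalized
beta = np.linalg.solve(Xa.T @ Xa + P, Xa.T @ y)

assert abs(beta[0] - y.mean()) < 1e-8
```

The first normal equation is $N\beta_0 + \mathbf{1}^T X \beta = \mathbf{1}^T y$, and centering makes $\mathbf{1}^T X = 0$, so $\beta_0 = \bar{y}$ exactly; the same decoupling argument applies whatever penalty is put on the slopes.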
473
statistical learning
Show $\hat{f}(x) \to E[Y|X=x]$ elements of statistical learning
https://stats.stackexchange.com/questions/629745/show-hatfx-to-eyx-x-elements-of-statistical-learning
<p>I was reading elements of statistical learning and it mentions let <span class="math-container">$\hat{f}(x)=Ave(y_i|x_i \in N_k(x))$</span> where <span class="math-container">$N_k(x)$</span> is the neighborhood containing the k points closest to x .</p> <p>Then it says &quot;under mild regularity conditions on the joint probability distribution <span class="math-container">$Pr(X, Y )$</span>, one can show that as <span class="math-container">$N, k \to \infty$</span> such that <span class="math-container">$k/N \to 0$</span>, <span class="math-container">$\hat{f}(x) \to E(Y |X = x)$</span>.</p> <p>Can someone guide how to begin proving this statement and what mild regularity conditions are required?</p>
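A full proof needs care about the regularity conditions, but the limiting behavior is easy to see in simulation. The model below ($Y = \sin(3X) + \text{noise}$) is made up purely for illustration, with $k/N$ small so the neighborhood shrinks around $x$:

```python
# Simulation sketch: the k-NN average at x0 approaches E[Y | X = x0]
# as N and k grow with k/N small.
import math, random

def fhat(N, k, x0):
    xs = [random.uniform(-1, 1) for _ in range(N)]
    ys = [math.sin(3 * x) + random.gauss(0, 0.3) for x in xs]
    nearest = sorted(range(N), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

random.seed(5)
x0 = 0.2
truth = math.sin(3 * x0)   # E[Y | X = x0] under this toy model

est = fhat(50000, 300, x0)  # k/N = 0.006
assert abs(est - truth) < 0.07
```

Intuitively, growing $k$ averages away the noise (variance $\to 0$), while $k/N \to 0$ shrinks the neighborhood so the averaged points all have conditional mean close to $E[Y\mid X=x]$ (bias $\to 0$); the regularity conditions (e.g. continuity of the regression function and a density bounded away from zero near $x$) are what make the second step valid.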
474
statistical learning
Derivation of the conditional median for linear regression in “The elements of statistical learning ”
https://stats.stackexchange.com/questions/344920/derivation-of-the-conditional-median-for-linear-regression-in-the-elements-of-s
<p>My question is about "The elements of statistical learning" book. I would like to know how to prove that the use of the $L_1$ loss $$L_1: E\bigg[|Y-f(X)|\bigg]$$ leads to the conditional median $\hat{f}(x)=median(Y|X=x)$ as the solution to the $EPF(f)$ criterion minimisation in eq(2.11): $$EPF(f)= E_X\bigg[E_{Y|X}\big[(Y-f(X))^2|X\big]\bigg]$$</p>
<p>First, I think you misspelled something in the question. In your case it should be $$ EPE(f)=\mathbb{E}(\vert Y-f(X)\vert). $$ What you want to show is that $$ \text{argmin}_{f \text{ measurable}}EPE(f)=\left(X\mapsto\text{median}(Y\vert X)\right) $$ This is in fact equivalent to showing that the median is the best constant approximation in the $L^1$-norm, i.e. that</p> <p>$$ \text{argmin}_{c}\mathbb{E}(\vert X-c\vert) = c^* $$ where $$ c^*=\inf\{t:F_X(t)\geq 0.5\} $$ is the median of $X$ defined via the generalized inverse of the cdf $F_X(\cdot)$ of $X$. This can be easily shown as follows: First assume that $c&gt;c^*$, then</p> <p>\begin{align} \mathbb{E}(\vert X-c\vert)&amp;=\mathbb{E}((X-c)\chi_{\{X&gt;c\}})-\mathbb{E}((X-c)\chi_{\{X\in(c^*,c]\}})-\mathbb{E}((X-c)\chi_{\{X\leq c^*\}})\\ &amp;=\mathbb{E}((X-c^*)\chi_{\{X&gt;c\}})-(c-c^*)\mathbb{P}(X&gt;c)\\ &amp;\quad\quad + \mathbb{E}((X-c^*)\chi_{\{X\in(c^*,c]\}})-2\mathbb{E}(X\chi_{\{X\in(c^*,c]\}})+(c+c^*)\mathbb{P}(X\in (c^*,c])\\ &amp;\quad\quad-\mathbb{E}((X-c^*)\chi_{\{X\leq c^*\}})+(c-c^*)\mathbb{P}(X\leq c^*) \end{align} Now we bound $$ -2\mathbb{E}(X\chi_{\{X\in (c^*,c]\}})\geq -2c\mathbb{P}(X\in (c^*,c]). $$ Hence, we get \begin{align} \mathbb{E}(\vert X-c\vert)&amp;\geq \mathbb{E}(\vert X-c^*\vert)+(c-c^*)\left(\mathbb{P}(X\leq c^*)-\mathbb{P}(X&gt;c)-\mathbb{P}(X\in (c^*,c])\right)\\ &amp;=\mathbb{E}(\vert X-c^*\vert)+(c-c^*)(2\mathbb{P}(X\leq c^*)-1)\\ &amp;\geq \mathbb{E}(\vert X-c^*\vert), \end{align} where we used that $c&gt;c^*$ and $2\mathbb{P}(X\leq c^*)\geq 1$ by the definition of $c^*$. Analogously it can be shown that the same thing holds for $c&lt;c^*$.
Hence, we can conclude that the median is in fact the constant RV that approximates $X$ the best in $L^1$.</p> <p>Finally, this can be used to show the final result:</p> <p>\begin{align} EPE(f)&amp;=\mathbb{E}(\vert Y-f(X)\vert)\\ &amp;=\mathbb{E}(\mathbb{E}(\vert Y-f(X)\vert\vert X))\\ &amp;\geq \mathbb{E}(\mathbb{E}(\vert Y-\text{median}(Y\vert X)\vert\vert X))\\ &amp;=EPE(\text{median}(Y\vert X)) \end{align}</p>
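The key step (the median minimizes mean absolute deviation over constants) can be checked numerically on a skewed made-up sample by scanning candidate constants:

```python
# Numeric check: among constants c, the sample median minimizes E|X - c|.
import random, statistics

random.seed(6)
xs = [random.expovariate(1.0) for _ in range(2001)]  # a skewed sample
med = statistics.median(xs)

def mad(c):
    # empirical mean absolute deviation around c
    return statistics.fmean(abs(x - c) for x in xs)

grid = [i / 100 for i in range(501)]  # candidate constants 0.00 .. 5.00
best = min(grid, key=mad)

assert abs(best - med) < 0.02                              # minimizer ~= median
assert mad(med) <= mad(statistics.fmean(xs)) + 1e-12       # beats the mean
```

For comparison, the mean (not the median) would win under squared error, which is the $L_2$ counterpart of the same argument.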
475
statistical learning
Book recommendations needed - building foundational knowledge for ISL - Introduction to Statistical Learning (by Gareth James)
https://stats.stackexchange.com/questions/486182/book-recommendations-needed-building-foundational-knowledge-for-isl-introduc
<p>I'm trying to build a data science base from scratch. I started a book called Introduction to Statistical Learning by Gareth James and found that there are many mathematical &amp; statistical concepts that I'm unfamiliar with. I want to bridge this gap in my knowledge. Please recommend some books that will help me do that. (I prefer books over video lectures but if there is some course, that would help, mention that as well.)</p>
<p>For Mathematics:</p> <ol> <li>James Stewart's Calculus</li> </ol> <p>For Statistics:</p> <ol> <li>Basic Business Statistics (7e) - Berenson, Levine, Szabat, ...</li> </ol>
476
statistical learning
Loss functions in statistical decision theory vs. machine learning?
https://stats.stackexchange.com/questions/485964/loss-functions-in-statistical-decision-theory-vs-machine-learning
<p>I'm quite familiar with loss functions in machine learning, but am struggling to connect them to loss functions in statistical decision theory [1].</p> <p>In machine learning, a loss function is usually only considered at <strong>training time</strong>. It's a differentiable function of two variables, <code>loss(true value, predicted value)</code>, that you iteratively minimize over the training set to converge to (locally) optimal model weights.</p> <p>In statistical decision theory, a loss function seems to be relevant at <strong>prediction time</strong> (?). You want to rationally choose a value for an unknown quantity, based on your assessment of its probable values, and your loss of making a wrong prediction.</p> <p>What is the intuition of how these two concepts relate to each other?</p> <p>[1] For example, in Ch 6.3 of &quot;Machine Learning: A Probabilistic Perspective&quot; or Ch 2.4 of &quot;Elements of Statistical Learning&quot;.</p>
<p>The loss that is of ultimate interest is the <strong>prediction loss</strong> (or <strong>decision loss</strong>). It represents real (financial/material/...) consequences of any given decision for the decision maker. It is this and only this loss that we want to minimize for its own sake rather than as an intermediate goal.</p> <p>The <strong>training loss</strong> is an intermediate tool for building prediction models. It does not affect the welfare of the decision maker directly; its effects manifest themselves only via the prediction loss.</p> <p>It may or may not be a good idea to match the training loss to the prediction loss.</p> <ul> <li>For example, suppose you have a sample generated by a Normal random variable. You have to predict a new observation from the same population, and your prediction loss is quadratic. Absent additional information, your best guess is the mean of the random variable. The best* estimate of it is the sample mean. It so happens that the type of training loss that is minimized by the sample mean is quadratic. Thus, here the <strong>training loss coincides with the prediction loss</strong>.</li> <li>Now suppose the situation is the same but your prediction loss is the absolute value of the prediction error. Absent additional information, your best guess is the median of the random variable. The best* estimate of it is the sample mean, not the sample median (because our sample is generated by a Normal random variable). As we already know, the training loss that yields the sample mean when minimized is quadratic. Thus, here the <strong>training loss does not coincide with the prediction loss</strong>.</li> </ul> <p>*Best in the sense of minimizing the expected prediction loss.</p>
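The answer's second bullet can be checked by simulation (a sketch; the sample size, number of repetitions, and seed below are all arbitrary choices of mine, not taken from the answer):

```python
import numpy as np

# Data are Normal, the prediction loss is absolute error. The best
# predictor is the distribution's median; for Normal data mean = median,
# and the sample MEAN estimates it with lower variance than the sample
# MEDIAN, so it yields a smaller expected absolute prediction loss.
rng = np.random.default_rng(1)
n, reps = 10, 200_000

train = rng.normal(size=(reps, n))   # many independent training sets
y_new = rng.normal(size=reps)        # one new observation per set

loss_mean = np.mean(np.abs(y_new - train.mean(axis=1)))
loss_median = np.mean(np.abs(y_new - np.median(train, axis=1)))

print(loss_mean, loss_median)        # sample mean should win slightly
```

So even though the prediction loss is absolute error, the "quadratic-loss" estimator (the sample mean) is the better choice here, which is exactly the answer's point about training and prediction losses not having to coincide.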
477
statistical learning
Statistical learning when observations are not iid
https://stats.stackexchange.com/questions/563419/statistical-learning-when-observations-are-not-iid
<p>As far as I am concerned, statistical/machine learning algorithms always suppose that data are independent and identically distributed (<span class="math-container">$iid$</span>).</p> <p>My question is: what can we do when this assumption is clearly unsatisfied? For instance, suppose that we have a data set with repeated measurements on the same observations, so that both the cross-section and the time dimensions are important (what econometricians call a <em>panel data set</em>, or statisticians refer to as <em>longitudinal data</em>, which is distinct from a time series).</p> <p>An example could be the following. In 2002, we collect the prices (henceforth <span class="math-container">$Y$</span>) of 1000 houses in New York, together with a set of covariates (henceforth <span class="math-container">$X$</span>). In 2005, we collect the same variables <strong>on the same houses</strong>. The same happens in 2009 and 2012. Say I want to understand the relationship between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. Were the data <span class="math-container">$iid$</span>, I could easily fit a random forest (or any other supervised algorithm, for that matter), thus estimating the conditional expectation of <span class="math-container">$Y$</span> given <span class="math-container">$X$</span>. However, there is clearly some auto-correlation in my data. How can I handle this?</p>
<p>There is nothing in the theory of statistical learning or machine learning that requires samples to be i.i.d.</p> <p>When samples are i.i.d., you can write the joint probability of the samples given some model as a product, namely <span class="math-container">$P(\{x\}) = \Pi_{i} P_i(x_i)$</span>, which makes the log-likelihood a sum of the individual log-likelihoods. This simplifies the calculation, but is by no means a requirement.</p> <p>In your case, you can for example model the distribution of a pair <span class="math-container">$x_i,y_i$</span> with some bi-variate distribution, say <span class="math-container">$z_i=(x_i,y_i)^T$</span>, <span class="math-container">$z_i \sim \mathcal{N}(\mu,\Sigma)$</span>, and then estimate the parameter <span class="math-container">$\Sigma$</span> from the likelihood <span class="math-container">$P(\{z\}) = \Pi_{i} P(z_i | \mu, \Sigma)$</span>.</p> <p>It is true that many out-of-the-box algorithm implementations implicitly assume independence between samples, so you are correct in identifying that you will have a problem applying them to your data as-is. You will either have to modify the algorithm or find ones that are better suited to your case.</p>
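A minimal sketch of the suggested approach, with a made-up bivariate Normal (the specific means and covariances are illustrative assumptions, not part of the answer). For the Normal, the maximum-likelihood estimates of the parameters are just the sample mean and the biased sample covariance, and no independence between the two components of each pair is assumed:

```python
import numpy as np

# Model each pair z_i = (x_i, y_i) as bivariate Normal and estimate
# (mu, Sigma) by maximum likelihood: sample mean + 1/n sample covariance.
rng = np.random.default_rng(2)
true_mu = np.array([1.0, -2.0])
true_sigma = np.array([[2.0, 1.2],
                       [1.2, 1.5]])   # correlated components

z = rng.multivariate_normal(true_mu, true_sigma, size=50_000)

mu_hat = z.mean(axis=0)
sigma_hat = (z - mu_hat).T @ (z - mu_hat) / len(z)  # MLE uses 1/n

print(mu_hat)
print(sigma_hat)  # should be close to true_mu and true_sigma
```

The off-diagonal of `sigma_hat` directly captures the dependence between the paired measurements, which is the point of modeling the pairs jointly.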
478
statistical learning
Undirected graphical models for discrete variables with hidden nodes - log-likelihood (The Elements of Statistical Learning)
https://stats.stackexchange.com/questions/271376/undirected-graphical-models-with-for-discrete-variables-with-hidden-nodes-logl
<p>I don't understand the equation for the log-likelihood of the observed data in graphical models with hidden nodes that appears in "The Elements of Statistical Learning" (Hastie, Tibshirani, Friedman, chapter 17.4.2)</p> <p><a href="https://i.sstatic.net/OXyKc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OXyKc.jpg" alt="enter image description here"></a></p> <p>In the equation we sum over possible values of $x_h$, but $x_h$ doesn't appear anymore in the equation. Where does this sum come from?</p>
<p>This is a special case of the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability" rel="nofollow noreferrer">law of total probability</a>. (See also the <a href="http://people.reed.edu/~jones/Courses/P02.pdf" rel="nofollow noreferrer">second equation of slide 5</a>.)</p> <p>Specifically the lower-case $x_{\mathcal{H}}$ you mention are supposed to refer to all possible/states values of the <strong>h</strong>idden random variables $X_{\mathcal{H}}$. To quote the authors:</p> <blockquote> <p>The sum over $x_{\mathcal{H}}$ means that we are summing over all possible $\{0,1\}$ values for the hidden units.</p> </blockquote> <p>It is a common (albeit somewhat ambiguous) practice to refer to the (possible) values of a random variable denoted with an upper-case letter, e.g. $T, X, Y, Z$, using the corresponding lower-case letters, e.g. $t, x, y, z$ respectively. This allows us to talk about the event $\{ Y = y\}$ or the probability that the random variable $Y$ equals the value $y$, $\mathbb{P}(Y=y)$, without specifying which specific value $y$ we should consider. This is helpful when the particular value $y$ doesn't matter.</p> <p>In this case, since we are summing over <em>all possible values</em> of the random variable(s) $X_{\mathcal{H}}$, every value in the sum is in some sense 'arbitrary', making the lower-case $x_{\mathcal{H}}$ convention for 'any particular value' of the random variable(s) $X_{\mathcal{H}}$ applicable here.</p>
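The same marginalization in miniature (a sketch using an arbitrary made-up joint probability table, not the book's model): the likelihood of the visible units is the joint probability summed over every $\{0,1\}$ configuration of the hidden units, which is exactly the law of total probability.

```python
import itertools
import numpy as np

# Toy joint distribution over (visible, hidden1, hidden2), each binary.
rng = np.random.default_rng(3)
p_joint = rng.random((2, 2, 2))   # axes: (visible, hidden1, hidden2)
p_joint /= p_joint.sum()          # normalize to a proper distribution

def p_visible(v):
    # Sum over all 2^2 = 4 hidden configurations x_H.
    return sum(p_joint[v, h1, h2]
               for h1, h2 in itertools.product([0, 1], repeat=2))

print(p_visible(0), p_visible(1))  # the two marginals sum to 1
```

In the inner sum the hidden values $h_1, h_2$ are bound variables of the summation, so they do not appear in the result; that is why $x_{\mathcal{H}}$ "disappears" from the marginal likelihood.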
479
statistical learning
How to plot decision boundary of a k-nearest neighbor classifier from Elements of Statistical Learning?
https://stats.stackexchange.com/questions/21572/how-to-plot-decision-boundary-of-a-k-nearest-neighbor-classifier-from-elements-o
<p>I want to generate the plot described in the book ElemStatLearn "The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition" by Trevor Hastie &amp; Robert Tibshirani&amp; Jerome Friedman. The plot is:</p> <p><img src="https://i.sstatic.net/oY7hr.png" alt="enter image description here"></p> <p>I am wondering how I can produce this exact graph in <code>R</code>, particularly note the grid graphics and calculation to show the boundary.</p>
<p>To reproduce this figure, you need to have the <a href="http://cran.r-project.org/web/packages/ElemStatLearn/index.html" rel="noreferrer">ElemStatLearn</a> package installed on your system. The artificial dataset was generated with <code>mixture.example()</code> as pointed out by @StasK.</p> <pre><code>library(ElemStatLearn)
require(class)
x &lt;- mixture.example$x
g &lt;- mixture.example$y
xnew &lt;- mixture.example$xnew
mod15 &lt;- knn(x, xnew, g, k=15, prob=TRUE)
prob &lt;- attr(mod15, "prob")
prob &lt;- ifelse(mod15=="1", prob, 1-prob)
px1 &lt;- mixture.example$px1
px2 &lt;- mixture.example$px2
prob15 &lt;- matrix(prob, length(px1), length(px2))
par(mar=rep(2,4))
contour(px1, px2, prob15, levels=0.5, labels="", xlab="", ylab="",
        main="15-nearest neighbour", axes=FALSE)
points(x, col=ifelse(g==1, "coral", "cornflowerblue"))
gd &lt;- expand.grid(x=px1, y=px2)
points(gd, pch=".", cex=1.2, col=ifelse(prob15&gt;0.5, "coral", "cornflowerblue"))
box()
</code></pre> <p>All but the last three commands come from the on-line help for <code>mixture.example</code>. Note that we used the fact that <code>expand.grid</code> will arrange its output by varying <code>x</code> first, which lets us index (by column) the colors in the <code>prob15</code> matrix (of dimension 69x99), which holds the proportion of the votes for the winning class at each lattice coordinate (<code>px1</code>,<code>px2</code>).</p> <p><img src="https://i.sstatic.net/UqAIT.png" alt="enter image description here"></p>
480
statistical learning
Parameter estimation for basis function model in Elements of Statistical Learning (ESL)
https://stats.stackexchange.com/questions/487702/parameter-estimation-for-basis-function-model-in-elements-of-statistical-learnin
<p>In the book <em>Elements of Statistical Learning</em>, section 2.8.3 describes Basis Functions, citing an example of a radial basis function as <span class="math-container">$f_{\theta}(x) = \sum_{m=1}^M \beta_m \sigma(\alpha_m'x + b_m)$</span>, with <span class="math-container">$\sigma(x)$</span> as the activation function. This makes sense to me, but I am confused as to how the model is actually being fit. The given form of <span class="math-container">$f_{\theta}(x)$</span> can be used to generate a prediction from <span class="math-container">$x$</span>, but how are the parameters estimated?</p> <p>Or, is the parameter fitting a separate question entirely? My intuition is that the RSS, <span class="math-container">$\sum_{i=1}^N (y_i - f(x_i))^2$</span>, can still be minimized for parameter fitting. Would that be correct?</p> <p>Sorry for the naive question. I am just trying to understand whether the <strong>class of the restricted estimator</strong> is linked in any way with the <strong>objective function we are trying to minimize</strong>?</p>
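The intuition in the question is right, and it can be demonstrated with a sketch: the RSS is indeed a valid objective for all of $\theta = (\alpha_m, b_m, \beta_m)$, minimized here by plain gradient descent on a 1-D toy problem. Everything concrete below (the sine target, the number of basis functions, the learning rate) is a made-up illustration, not from the book.

```python
import numpy as np

# Fit f_theta(x) = sum_m beta_m * sigmoid(alpha_m * x + b_m) to toy data
# by minimizing RSS = sum_i (y_i - f_theta(x_i))^2 with gradient descent.
rng = np.random.default_rng(4)

x = np.linspace(-3, 3, 200)
y = np.sin(x)                        # toy target

M, lr, steps = 8, 0.05, 3000
alpha = rng.normal(size=M)
b = rng.normal(size=M)
beta = rng.normal(size=M) * 0.1

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def rss():
    return np.sum((y - sigmoid(np.outer(x, alpha) + b) @ beta) ** 2)

rss_start = rss()
for _ in range(steps):
    z = np.outer(x, alpha) + b       # (n, M) pre-activations
    s = sigmoid(z)
    r = s @ beta - y                 # residuals, shape (n,)
    # gradients of RSS w.r.t. each parameter block (chain rule)
    g_beta = 2 * s.T @ r
    common = 2 * (r[:, None] * beta) * s * (1 - s)
    g_alpha = (common * x[:, None]).sum(axis=0)
    g_b = common.sum(axis=0)
    beta -= lr * g_beta / len(x)
    alpha -= lr * g_alpha / len(x)
    b -= lr * g_b / len(x)

rss_end = rss()
print(rss_start, rss_end)            # RSS should drop substantially
```

This also shows the link the question asks about: the restricted class (sums of sigmoid basis functions) fixes the *form* of $f_\theta$, while the objective (RSS) determines *which member* of that class is chosen.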
481
statistical learning
Does learning thorough statistical theory require learning analysis?
https://stats.stackexchange.com/questions/631739/does-learning-thorough-statistical-theory-require-learning-analysis
<p>Does learning thorough statistical theory require learning analysis first? I looked at a textbook on statistical theory. So far I can't tell whether analysis is required, but I have heard that it is a prerequisite. Should I learn analysis beforehand?</p>
<p>No, you do not need to know real analysis to learn statistics. In fact, in many universities (intro level) statistics courses are not even in the math department.</p> <p>One can make a lot of progress in statistics by letting the computer do all the math and worrying only about how the statistical methods are being applied.</p> <p>However, if you want to understand why the rules/tables are what they are then you need to know probability theory. The deeper you want to understand probability theory the more real analysis (really measure theory) you need to know. But at some point you reach diminishing returns. Sometimes you know too much and it just does not help you anymore in applying statistics.</p> <p>So it is not required to know advanced math. However, knowing more (up to a certain extent, without overdoing it) lets you apply it better and use better statistical techniques that you otherwise would not come up with.</p> <p>Here are some books on statistics that do not use any measure theory: Bayesian Data Analysis by Gelman, Statistical Rethinking by McElreath, Doing Bayesian Data Analysis by Kruschke, Statistical Models by Freedman, Linear Models in Statistics by Rencher.</p> <p>What do you notice about these books? These are all extremely well known books that are used to learn statistics, and they all avoid measure theory. In fact, there is not a single <span class="math-container">$\delta,\varepsilon$</span> proof in any one of them. McElreath's book even goes further to essentially eliminate all math from the subject and reduce it down to computer programming.</p> <p>Now look at one of the most well-known and respected books on measure-based probability theory, &quot;Probability&quot; by Shiryaev. There are virtually no statistical applications in that book anywhere. He does mention a few, but that is about it.</p> <p>So if you are interested in learning statistics then you need to learn statistics. 
You can learn measure theory based probability on a need-to-know basis. However, if you start from the &quot;ground up&quot; and start with measure theory then you will hardly ever reach statistics in the end.</p>
482
statistical learning
Questions regarding the Bayes Classifier in *Introduction to Statistical Learning*
https://stats.stackexchange.com/questions/208795/questions-regarding-the-bayes-classifier-in-introduction-to-statistical-learnin
<p>I am having trouble grokking some very elementary material regarding Bayesian Classification in <a href="http://www.stat.berkeley.edu/~rabbee/s154/ISLR_First_Printing.pdf" rel="nofollow"><em>Introduction to Statistical Learning</em></a> at the end of pg. 37 to the very top of pg. 39 (i.e., the section entitled "The Bayes Classifier" which is accessible via the link).</p> <p>Here is a relevant snippet:</p> <blockquote> <p>It is possible to show (though the proof is outside of the scope of this book) that the test error rate given in (2.9) is minimized, on average, by a very simple classifier that assigns each observation to the most likely class, given its predictor values. In other words, we should simply assign a test observation with predictor vector $x_0$ to the class $j$ for which</p> <p>$$ Pr(Y= j \mid X = x_0) $$</p> <p>is largest.</p> </blockquote> <ol> <li><p>Where does the probability distribution come from to begin with? Is it inferred from the training data? Or is what is inferred from the training data a mere approximation of the real probability distribution (wherever it comes from)?</p></li> <li><p><strong>(Note: for this question you need to see the figure on page 38 pdf linked to above.)</strong> Need the Bayes Decision boundary be a straight line splitting the predictors into neat, contiguous boundaries? The example used in this section seems awfully convenient. Why can't there be a hodge podge of probability neighborhoods that aren't even separated by a single, clean line? Is there something about the Bayes Classification boundary definition that prevents this?</p></li> </ol>
<ol> <li><p>We assume that all objects are realizations of random variables that are independent and identically distributed from some distribution. Very often we additionally assume that this distribution belongs to a specific parametric family. In this case the training data are used to estimate its parameters.</p></li> <li><p>No, in general the Bayes decision boundary need not be linear. If you want a linear decision boundary, look for linear classifiers such as the SVM, logistic regression and the Naive Bayes classifier.</p></li> </ol>
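A toy sketch of both points, assuming the true distribution is fully known (an idealization, as in the book's discussion of the Bayes classifier): two classes with equal priors and unit-variance Gaussian class conditionals, all values made up for illustration.

```python
import numpy as np

# Bayes classifier when the truth is known:
#   X | Y=0 ~ N(-1, 1),  X | Y=1 ~ N(+1, 1),  P(Y=0) = P(Y=1) = 1/2.
# Assign x0 to the class j maximizing Pr(Y=j | X=x0).
def phi(x, mu):                  # N(mu, 1) density
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def bayes_classify(x0):
    post0 = 0.5 * phi(x0, -1.0)  # proportional to P(Y=0 | X=x0)
    post1 = 0.5 * phi(x0, +1.0)  # proportional to P(Y=1 | X=x0)
    return int(post1 > post0)    # argmax over the two classes

print(bayes_classify(-0.3), bayes_classify(0.3))
```

Here the boundary happens to be the single point $x = 0$, but that is a property of this symmetric toy model, not of Bayes classifiers in general; with unequal variances or multimodal class conditionals the boundary can consist of several disconnected pieces, as point 2 says.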
483
statistical learning
Derivation of EPE for linear regression in &quot;The Elements of Statistical Learning&quot;
https://stats.stackexchange.com/questions/257124/derivation-of-epe-for-linear-regression-in-the-elements-of-statistical-learning
<p>My question is about "The elements of statistical learning" book (yup, the one). Right now I am kinda stuck on the part of the second chapter where they derive the EPE for linear regression (somewhat related to <a href="https://stats.stackexchange.com/questions/253101/confusion-about-derivation-of-regression-function">Confusion about derivation of regression function</a>, but I have a more detailed investigation here =)). Here it is:</p> <p><a href="https://i.sstatic.net/kaIP9.png" rel="noreferrer"><img src="https://i.sstatic.net/kaIP9.png" alt="enter image description here"></a></p> <p>OK, I do not really get where they got it from, but it is not so important, because I've found notes for the book from the authors (A Solution Manual and Notes for: The Elements of Statistical Learning).</p> <p>There the authors try to explain how this formula was derived. But I do not get that either =)</p> <p>So here is the logic that they have (just a part; the rest is not important right now):</p> <p><a href="https://i.sstatic.net/37fUw.png" rel="noreferrer"><img src="https://i.sstatic.net/37fUw.png" alt="enter image description here"></a></p> <p>I do not really get a lot of things here. Let me first explain how I see the variables:</p> <p>$ T(\tau)$ - just all samples, basically iid; the distribution would be the likelihood of all samples</p> <p>$X$ - input RV</p> <p>$Y$ - output RV</p> <p>$x_0$ - the selected input (independent of $T$, just some random $x$)</p> <p>$y_0$ - the output calculated from $x_0$ (an RV, because it depends on e, the normal error)</p> <ol> <li><p>What is $E_{{y_0}|{x_0}}$, and how does it differ from $E_{Y|X}$? I mean, $X$ is an input and $Y$ is an output, which have the quite straightforward relation $Y = \beta X + e$. So $E_{Y|X}$ is the expectation of the output given the input. 
And $E_{{y_0}|{x_0}}$ seems pretty much the same to me, though I kinda understand that $y_0$ is some output given that we have chosen $x_0$ (it seems to be a constant here, though conditioning on a constant seems weird). But maybe it is a broader thing, I don't know.</p></li> <li><p>Why is this true: $E_T =E_XE_{Y|X} $?</p></li> <li>And the one that blew my head off: $E_TU_1 = U_1E_T$. What does it mean? =) I understand the expectation of a random variable, but what is $U_1E_T$, how am I supposed to understand it? It seems $E_TU_1$ is just a constant given that we've chosen $y_0$ and $x_0$, but I can be totally wrong here.</li> </ol> <p>I have many more questions, but for now I will stop here. I have an OK background in stats and linear algebra, and I understand undergrad-level books (without measure theory, though). But I really can't grasp how the authors use expectations.</p>
<p>I am trying to answer the first question. Suppose $x_0$ and $y_0$ are both in $R^p$. $T$ is the space of training parameters, i.e. sets of pairs $(x_0,y_0)$ such that $y_0=x_0^T\beta + \epsilon$. These sets define exactly the training data, and starting from there a linear estimate of $y$ as a function of $x$ is calculated with the linear regression method. Let $g:R^p \mapsto R$ be a function. Then $E_{y_0|x_0}g = \int_A\!g(y)\rho(y|x_0)\,\mathrm{d}y$ where $A= \{y=x_0^T\beta + \epsilon,\ \text{where } \epsilon \sim N(0,\sigma^2) \}$. Here $\rho$ is the conditional probability density function of $y$ when $x=x_0$. In our case $g(y)=E_T(y - \hat{y})^2$.</p> <p>For the second question: $X$ and $Y$ denote ordered sets in $R^p$ (the inputs $x$) and the sets of corresponding $y$, respectively.</p> <p>For the third question: $E_T U_1 = U_1 E_T$ when $U_1$ is a constant with respect to the integration variable in the estimation integral.</p>
484
statistical learning
What supplemental resource do you recommend in order to fully comprehend The Elements of Statistical Learning
https://stats.stackexchange.com/questions/204830/what-supplemental-resource-do-you-recommend-in-order-to-fully-comprehend-the-ele
<p>I am working through the book "Elements of Statistical Learning," but it is very hard because it requires heavy statistical knowledge, some of which I have, but apparently not enough to understand the derivations in the book. For example, <a href="https://i.sstatic.net/8yZ06.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8yZ06.jpg" alt="enter image description here"></a></p> <p>this is the derivation of the expected prediction error for the linear model Y = beta*X + epsilon. I don't understand it at all! This is very frustrating, and I would really love to be able to understand this book. Can you tell me some resources I can use to beef up whatever foundations are necessary to fully comprehend ESL?</p>
<p>The same authors have a more introductory book called Introduction to Statistical Learning. There is a free PDF version online <a href="http://www-bcf.usc.edu/~gareth/ISL/" rel="nofollow">http://www-bcf.usc.edu/~gareth/ISL/</a></p>
485
statistical learning
Loss function in Supervised Learning vs Statistical Decision Theory
https://stats.stackexchange.com/questions/543535/loss-function-in-supervised-learning-vs-statistical-decision-theory
<p>I am confused by the different definitions of Loss Function in statistical decision theory vs machine learning.</p> <p>In statistical decision theory, a loss function is typically defined as <span class="math-container">$L(\theta, \delta(X))$</span>, where <span class="math-container">$\theta$</span> is the true, unknown parameter, <span class="math-container">$\delta(.)$</span> is the decision rule, and <span class="math-container">$X$</span> is data (generated from <span class="math-container">$\theta$</span>?). See for example lectures of the <a href="https://web.stanford.edu/%7Elmackey/stats300a/doc/stats300a-fall15-lecture1.pdf" rel="nofollow noreferrer">Theory of Statistics</a> class.</p> <p>In machine learning, it seems the loss function is defined as <span class="math-container">$L(y, f(X))$</span>, where <span class="math-container">$y$</span> is the true label and <span class="math-container">$f(x)$</span> is some model. See, for example, Elements of Statistical Learning chapter 2.4.</p> <p>My question is whether they are talking about the same thing. They seem different. For example, if I am going to predict the next coin toss of an unknown coin, I can then model the coin toss as following a Bernoulli distribution with an unknown parameter <span class="math-container">$\theta$</span>.</p> <p>Let <span class="math-container">$X$</span> be some historical data. Then it appears that the loss function from statistical decision theory compares my prediction <span class="math-container">$\delta(X)$</span> with the unknown parameter <span class="math-container">$\theta$</span>, whereas in ML it compares the same prediction <span class="math-container">$\delta(X)$</span> (or <span class="math-container">$f(X)$</span>) with the true label?</p> <p>I am having trouble reconciling the two concepts.</p>
<p>I would say this is more a difference in the form of the <em>decision</em> than the loss. The loss function in both cases is Loss(true state of nature, your decision), but it simplifies differently depending on the form of the decision</p> <p>In point prediction settings (such as a lot of ML), the decision is a potential value of the label, and the state of nature effectively simplifies to the true value of the label, so the loss <span class="math-container">$L(y, \hat y)$</span> can be written as the loss from predicting <span class="math-container">$\hat y$</span> when the truth is <span class="math-container">$y$</span>.</p> <p>In parametric inference settings, the decision is a potential value of the parameter, and the state of nature effectively simplifies to the true parameter value, so the loss <span class="math-container">$L(\theta, \hat\theta)$</span> can be written as the loss from estimating <span class="math-container">$\hat\theta$</span> when the truth is <span class="math-container">$\theta$</span>.</p> <p>There are more complicated settings, too. For example, your decision might be an interval, and the state of nature might be a value, and the loss could be the length of the interval plus the distance from the value to the closest point of the interval (possibly zero)<a href="https://core.ac.uk/download/pdf/61319103.pdf" rel="nofollow noreferrer">[PDF]</a>. In that setting there isn't the nice correspondence between potential decisions and potential states of nature, and the loss doesn't simplify down to a summary of the error in the decision in the same way. And of course many other possibilities.</p>
486
statistical learning
Understanding notation in Bias-Variance decomposition in Elements of Statistical Learning
https://stats.stackexchange.com/questions/458620/understanding-notation-in-bias-variance-decomposition-in-elements-of-statistical
<p>I'm going through Elements of Statistical Learning and I'm having a bit of trouble understanding this bit of notation from Chapter 2 (this example is (2.27))</p> <p><span class="math-container">$$EPE(x_0) = E_{y_0|x_0}E_T(y_0 - \hat{y}_0)^2$$</span></p> <p>Here, <span class="math-container">$T$</span> is the set of N=1000 training samples drawn from a uniform distribution, and <span class="math-container">$Y = X^T\beta + \epsilon$</span> and <span class="math-container">$\epsilon \sim N(0, \sigma^2)$</span>.</p> <p>My question is about what <span class="math-container">$E_T$</span> means in the <span class="math-container">$EPE$</span> equation. I get that <span class="math-container">$T$</span> is a random variable, representing the possible draws of N training samples from the uniform distribution, and that <span class="math-container">$E_T$</span> is the expectation with respect to <span class="math-container">$T$</span>, however I'm not sure how exactly to properly understand how <span class="math-container">$T$</span> interacts with <span class="math-container">$(y_0 - \hat{y}_0)^2$</span>. I know it must, since y is a function of <span class="math-container">$x \sim T$</span> with some noise, but I'm having a hard time making that concrete.</p> <p>If I were to write this out as an integral, how would I get <span class="math-container">$T$</span> to show up in the inner integral?</p> <p>I think the integral would look something like this</p> <p><span class="math-container">$$\int_{y_0\sim P(y_0|x_0)} \left(\int_{t\in T} (y_0 - \hat{y}_0)^2 P(T=t) d\mu(t) \right) dy_0$$</span></p> <p>I'm trying to figure out how to make <span class="math-container">$(y_0 - \hat{y}_0)^2$</span> have a <span class="math-container">$t$</span> term so that the integral makes sense.</p>
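One hedged way to make the $t$-dependence concrete (a sketch, not an official solution): in this setup the training set enters only through the fitted coefficients, since $\hat{\beta}$ is computed from the training sample. Writing $t$ for a realization of $T$ with density $p(t)$, the prediction at $x_0$ is $\hat{y}_0(t) = x_0^T \hat{\beta}(t)$, and the proposed integral becomes

```latex
EPE(x_0)
  = \int \left[ \int \bigl( y_0 - x_0^T \hat{\beta}(t) \bigr)^2
      \, p(t)\, d\mu(t) \right] p(y_0 \mid x_0)\, dy_0
```

so the $t$ term the question is looking for is hidden inside $\hat{y}_0$: it should be read as $\hat{y}_0(t)$, a different number for every training-set realization $t$.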
487
statistical learning
Is there an error in Section 6.6.2 of the book An Introduction to Statistical Learning?
https://stats.stackexchange.com/questions/490725/is-there-an-error-section-6-6-2-of-the-book-an-introduction-to-statistical-learn
<p>In Section 6.6.2 of An Introduction to Statistical Learning, the authors do the following:</p> <p>A) Fit a lasso model</p> <pre><code>lasso.mod=glmnet(x[train,], y[train], alpha=1, lambda=grid)
</code></pre> <p>B) Perform cross-validation</p> <pre><code>set.seed(1)
cv.out=cv.glmnet(x[train,], y[train], alpha=1)
plot(cv.out)
bestlam=cv.out$lambda.min
</code></pre> <p>C) Compute the test error using the best value of <span class="math-container">$\lambda$</span> obtained in part B)</p> <pre><code>lasso.pred=predict(lasso.mod, s=bestlam, newx=x[test,])
mean((lasso.pred-y.test)^2)
</code></pre> <p>But it seems there is an error here? They are using the <span class="math-container">$\lambda$</span> from part B) with the model from part A). Surely they should be using the <span class="math-container">$\lambda$</span> from part B) with the model from part B) rather than mixing the results of both A) and B)?</p> <p>They do the same thing in the previous section (6.6.1) for ridge regression. So if there is a typo/mistake in the lasso section there is also one in the ridge regression section.</p> <p>So is it a typo/mistake, or am I mistaken?</p>
<p>Step A doesn't provide a single model; it provides a set of models, one for each value of <span class="math-container">$\lambda$</span>, developed on all of <code>x[train,]</code> and <code>y[train]</code>. There is no single model in Step B <em>even for a single value of</em> <span class="math-container">$\lambda$</span>. At the default 10-fold cross validation in <code>cv.glmnet</code>, you develop <em>10 different models</em> for <em>each</em> value of <span class="math-container">$\lambda$</span>. Here's what's going on &quot;under the hood.&quot;</p> <p>Cross validation tries to estimate what would happen if you repeated a modeling process on different samples from a population. For each fold of 10-fold CV, you build a model on 90% of the cases as an internal &quot;training&quot; set, then evaluate performance on the remaining 10% as an internal &quot;test&quot; set. After all 10 folds, each case has been included in one internal &quot;test&quot; set and 9 internal &quot;training&quot; sets. The performance (here, mean-square error on the 10 internal &quot;test&quot; sets) is averaged over all those 10 models developed at that <span class="math-container">$\lambda$</span> value.</p> <p>Those 10 lasso models at a single <span class="math-container">$\lambda$</span> value may differ not only in regression coefficients but even in the predictors that are selected for inclusion. That's OK. The point of evaluating over the range of <span class="math-container">$\lambda$</span> penalty values, as @chl notes in a comment, is to find an optimal penalty value that best balances off the bias and variance, to minimize the expected mean-square error when you apply your modeling <em>process</em> to the underlying population. That's a key concept to recognize. 
In many circumstances it's the modeling process that you're evaluating, not the model itself.</p> <p>Then you go back to your original set of models developed in Step A, over a grid of <span class="math-container">$\lambda$</span> values, and select the model developed at that optimal value of <span class="math-container">$\lambda$</span>. Yes, the details of that model will differ from <em>all</em> of the 10 models developed at that value of <span class="math-container">$\lambda$</span> during cross validation. But that choice of <span class="math-container">$\lambda$</span> means that the resulting model still should have the best expected performance when applied to new cases from the population.</p>
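The mechanics described in the answer can be sketched in a few lines of numpy. This is not what `cv.glmnet` literally runs; it substitutes ridge regression (which has a one-line closed form) for the lasso purely for brevity, and all sizes, grids, and the seed are made up:

```python
import numpy as np

# For each lambda, 10 DIFFERENT models are fit (one per fold); only the
# averaged held-out error survives. The final model is then refit at the
# chosen lambda on the FULL training set, which is the analogue of
# predicting with lasso.mod at s=bestlam.
rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.concatenate([np.ones(3), np.zeros(p - 3)])
y = X @ beta_true + rng.normal(size=n)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

lambdas = np.logspace(-2, 3, 30)
folds = np.arange(n) % 10            # 10-fold assignment
cv_mse = []
for lam in lambdas:
    errs = []
    for k in range(10):
        tr, te = folds != k, folds == k
        b = ridge_fit(X[tr], y[tr], lam)   # the model for THIS fold only
        errs.append(np.mean((y[te] - X[te] @ b) ** 2))
    cv_mse.append(np.mean(errs))

best_lam = lambdas[int(np.argmin(cv_mse))]
beta_final = ridge_fit(X, y, best_lam)     # refit on all training data
print(best_lam)
```

Note that `beta_final` coincides with none of the 300 fold-level fits; only the penalty value selected by cross validation is carried over, which is the book's (correct) procedure.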
488
statistical learning
Proof/Derivation of Residual Sum of Squares (Based on Introduction to Statistical Learning)
https://stats.stackexchange.com/questions/110190/proof-derivation-of-residual-sum-of-squares-based-on-introduction-to-statistica
<p>On page 19 of the textbook <a href="http://www-bcf.usc.edu/%7Egareth/ISL/" rel="noreferrer">Introduction to Statistical Learning</a> (by James, Witten, Hastie and Tibshirani--it is freely downloadable on the web, and very good), the following is stated:</p> <blockquote> <p>Consider a given estimate <span class="math-container">$$\hat{Y} = \hat{f}(x)$$</span> Assume for a moment that both <span class="math-container">$$\hat{f}, X$$</span> are fixed. Then, it is easy to show that:</p> <p><span class="math-container">$$\mathrm{E}(Y - \hat{Y})^2 = \mathrm{E}[f(X) + \epsilon - \hat{f}(X)]^2$$</span> <span class="math-container">$$ = [f(X) - \hat{f}(X)]^2 + \mathrm{Var}(\epsilon)$$</span></p> </blockquote> <p>It is further explained that the first term represents the reducible error, and the second term represents the irreducible error.</p> <p>I am not fully understanding how the authors arrive at this answer. I worked through the calculations as follows:</p> <p><span class="math-container">$$\mathrm{E}(Y - \hat{Y})^2 = \mathrm{E}[f(X) + \epsilon - \hat{f}(X)]^2$$</span></p> <p>This simplifies to <span class="math-container">$[f(X) - \hat{f}(X) + \mathrm{E}[\epsilon]]^2 = [f(X) - \hat{f}(X)]^2$</span> assuming that <span class="math-container">$\mathrm{E}[\epsilon] = 0$</span>. Where is the <span class="math-container">$\mathrm{Var}(\epsilon)$</span> indicated in the text coming from?</p> <p>Any suggestions would be greatly appreciated.</p>
<p>Simply expand the square ...</p> <p>$$[f(X)- \hat{f}(X) + \epsilon ]^2=[f(X)- \hat{f}(X)]^2 +2 [f(X)- \hat{f}(X)]\epsilon+ \epsilon^2$$</p> <p>... and use linearity of expectations:</p> <p>$$\mathrm{E}[f(X)- \hat{f}(X) + \epsilon ]^2=E[f(X)- \hat{f}(X)]^2 +2 E[(f(X)- \hat{f}(X))\epsilon]+ E[\epsilon^2]$$</p> <p>Can you do it from there? (What things remain to be shown?)</p> <p>Hint in response to comments: Show $E(\epsilon^2)=\text{Var}(\epsilon)$</p>
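As a sanity check on the identity, here's a quick Monte Carlo simulation. With $f(X)$, $\hat{f}(X)$ and $X$ held fixed as in the book's setup (the numeric values here are made up), the empirical MSE should match bias² + Var(ε):

```python
import random

random.seed(0)

# Fixed quantities, as in the book's setup: f(X) and fhat(X) are constants here
f_x, fhat_x = 3.0, 2.5           # true value and (fixed) estimate at X
sigma = 1.0                      # sd of the noise eps, so Var(eps) = 1.0

# Monte Carlo estimate of E(Y - Yhat)^2 with Y = f(X) + eps
n = 200_000
mse = sum((f_x + random.gauss(0, sigma) - fhat_x) ** 2 for _ in range(n)) / n

theory = (f_x - fhat_x) ** 2 + sigma ** 2   # reducible + irreducible error
```

The cross term $2\,[f(X)-\hat f(X)]\,\epsilon$ averages out to zero, which is why only the squared bias and the noise variance survive.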
489
statistical learning
Cross-validation scheme used in the Introduction to Statistical Learning, Chapter 6, Lab 3
https://stats.stackexchange.com/questions/223623/cross-validation-scheme-used-in-the-introduction-to-statistical-learning-chapte
<p>I've been really enjoying the <em><a href="http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf" rel="nofollow">Introduction to Statistical Learning</a></em> textbook so far, and I'm currently working my way through chapter 6. I realize that I am very confused by the process used in lab 3 of this chapter (pages 256-258).</p> <p>First, they use the <code>pcr()</code> function's cross validation option and the entire data set to calculate the optimal number of principal components. Great! All set (I thought...)</p> <pre><code>pcr.fit=pcr(Salary~., data=Hitters, scale=TRUE, validation ="CV") </code></pre> <p>Next, they "perform PCR on the training data and evaluate its test set performance":</p> <pre><code>pcr.fit=pcr(Salary~., data=Hitters, subset=train, scale=TRUE, validation ="CV") </code></pre> <p>I'm confused because I thought that cross-validation (which they did first) is basically a better version of doing exactly this! To make me even more confused, they go on to say that with the training/test set approach, they get the "lowest cross-validation error" when 7 components are used. It seems like they are using a validation set together with cross-validation?</p>
<p>It is indeed not very clearly explained in the text, but here is what I think is going on.</p> <p><strong>First, they perform cross-validation on the whole dataset.</strong> They say that "the smallest cross-validation error occurs when $M = 16$ components are used", but also remark that the difference between different values of M is very small.</p> <p><strong>Second, they split the dataset into training and validation sets.</strong> They put the validation set aside, and use cross-validation <em>on the training set only</em> to get the optimal value of $M$. Curiously, they say that "the lowest cross-validation error occurs when $M = 7$ components are used" (there is no comment on why it's now so much smaller than 16). Then they use the model with $M=7$ and test its performance on the validation set. </p> <blockquote> <p>It seems like they are using a validation set together with cross-validation?</p> </blockquote> <p>Yes, exactly! This is a very sensible thing to do, because you want to measure the performance of your algorithm on a dataset that was not used for training in any way, including hyper-parameter tuning. So you use the validation set for measuring the performance and the training set to build the model, but in order to choose the value of $M$ you need to do cross-validation on the training set; i.e. the training set gets additionally split into training-training and training-test many times.</p> <blockquote> <p>I'm confused because I thought that cross-validation (which they did first) is basically a better version of doing exactly this</p> </blockquote> <p>Not exactly. When you perform a single cross-validation, you get a good estimate of optimal $M$, but a potentially bad estimate of the out-of-sample performance.</p> <p>There are two ways of doing it properly:</p> <ol> <li><p>Have a separate validation set and do cross-validation on the training set to tune hyperparameters.
(That's what they do here.)</p></li> <li><p>Perform <em>nested cross-validation</em>. Search our site for "nested cross-validation" to read up on it. For example:</p> <ul> <li><a href="https://stats.stackexchange.com/questions/11602">Training with the full dataset after cross-validation?</a></li> <li><a href="https://stats.stackexchange.com/questions/65128">Nested cross validation for model selection</a></li> </ul></li> </ol>
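Option 1 can be sketched in a few lines. This is not <code>pcr()</code>: a pure-Python k-NN regressor stands in for PCR, its neighbor count k plays the role of $M$, and the data are made up — but the scheme is the same: hold out a validation set first, tune by CV on the training set only, then touch the validation set exactly once.

```python
import random

random.seed(1)

# Hypothetical 1-D regression data (k-NN stands in for PCR; k plays M's role)
xs = [random.uniform(-2, 2) for _ in range(300)]
data = [(x, x * x + random.gauss(0, 0.3)) for x in xs]
random.shuffle(data)
train, valid = data[:200], data[200:]    # put the validation set aside first

def knn_predict(fit_pts, x0, k):
    nearest = sorted(fit_pts, key=lambda p: abs(p[0] - x0))[:k]
    return sum(p[1] for p in nearest) / k

def cv_error(k_neigh, folds=5):
    """Cross-validation on the *training set only*, to tune the hyperparameter."""
    fold = len(train) // folds
    errs = []
    for f in range(folds):
        test = train[f * fold:(f + 1) * fold]
        fit = train[:f * fold] + train[(f + 1) * fold:]
        errs.append(sum((y0 - knn_predict(fit, x0, k_neigh)) ** 2
                        for x0, y0 in test) / len(test))
    return sum(errs) / folds

best_k = min([1, 5, 25, 100], key=cv_error)

# Only now touch the validation set: an honest out-of-sample estimate
valid_mse = sum((y0 - knn_predict(train, x0, best_k)) ** 2
                for x0, y0 in valid) / len(valid)
```

Because `valid` never influenced the choice of `best_k`, `valid_mse` is an unbiased-in-spirit estimate of out-of-sample performance; the CV score of `best_k` itself is optimistically biased, which is exactly the point made above.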
490
statistical learning
statistical regression vs machine learning regression
https://stats.stackexchange.com/questions/514247/statistical-regression-vs-machine-learning-regression
<p>I was trying to understand the difference between statistical regression vs machine learning regression. My background is in Economics, and I first learned regression from a statistical point of view. I learned machine learning later on, and it also covers regression. There might not be a clear distinction between the two, but I wanted to ask about the major differences. <br>What I can say (I might be wrong) is that they come from different areas and the models are framed differently: statistical regression represents the outcome as a function of a set of independent variables plus an error term, whereas machine learning regression speaks of outputs and inputs. Also, ML regression requires a process of training and testing to predict unseen data sets, whereas statistical regression doesn't require this process but analyzes past data and derives the parameters (similar to weights in ML).<br> An intuitive explanation (no math or complex equations) using an example would be helpful!</p>
<p>There really isn't much of a difference. A strained distinction between the two might be consideration of the data generating process (what statisticians call the likelihood). Statisticians care about this because different likelihoods lead to different types of inference. A hotly debated example of this would be the linear probability model (essentially linear regression on binary data) versus logistic regression. I won't spark the debate here, only offer it to you for future research. In general, machine learning doesn't seem to make many statistical assumptions about the data.</p> <p>Leo Breiman's <a href="https://projecteuclid.org/journals/statistical-science/volume-16/issue-3/Statistical-Modeling--The-Two-Cultures-with-comments-and-a/10.1214/ss/1009213726.full" rel="nofollow noreferrer">Two Cultures</a> paper does a better job of elaborating on this difference. To the extent there is a difference between the two (I'm not certain there really is), this would be it.</p>
491
statistical learning
What is meant by the variance of *functions* in *Introduction to Statistical Learning*?
https://stats.stackexchange.com/questions/208672/what-is-meant-by-the-variance-of-functions-in-introduction-to-statistical-lea
<p>On pg. 34 of <em>Introduction to Statistical Learning</em>: $\newcommand{\Var}{{\rm Var}}$</p> <blockquote> <p>Though the mathematical proof is beyond the scope of this book, it is possible to show that the expected test MSE, for a given value $x_0$, can always be decomposed into the sum of three fundamental quantities: the <em>variance</em> of $\hat{f}(x_0)$, the squared <em>bias</em> of $\hat{f}(x_0)$ and the variance of the error terms $\varepsilon$. That is,</p> <p>$$ E\left(y_0 - \hat{f}(x_0)\right)^2 = \Var\big(\hat{f}(x_0)\big) + \Big[{\rm Bias}\big(\hat{f}(x_0)\big)\Big]^2 + \Var(\varepsilon) $$</p> <p>[...]Variance refers to the amount by which $\hat{f}$ would change if we estimated it using a different training data set.</p> </blockquote> <p><strong>Question:</strong> Since $\Var\big(\hat{f}(x_0)\big)$ seems to denote the variance of <em>functions</em>, what does this mean formally?</p> <p>That is, I am familiar with the concept of the variance of a random variable $X$, but what about the variance of a set of functions? Can this be thought of as just the variance of another random variable whose values take the form of functions?</p>
<p>Your correspondence with @whuber is correct.</p> <p>A learning algorithm $\mathcal{A}$ can be viewed as a higher-level function, mapping training sets to functions.</p> <p>$$ \mathcal{A} : \mathcal{T} \rightarrow \{f \mid f: X \rightarrow \mathbb{R} \} $$</p> <p>where $\mathcal{T}$ is the space of possible training sets. This can be a bit hairy conceptually, but basically each individual training set results, after using the model training algorithm, in a specific function $f$ which can be used to make predictions given a data point $x$.</p> <p>If we view the space of training sets as a probability space, so that there is some <em>distribution</em> of possible training data sets, then the model training algorithm becomes a function-valued random variable, and we can think of statistical concepts. In particular, if we fix a specific data point $x_0$, then we get the numeric-valued random variable</p> <p>$$ \mathcal{A}_{x_0}(T) = \mathcal{A}(T)(x_0) $$</p> <p>I.e., first train the algorithm on $T$, and then evaluate the resulting model at $x_0$. This is just a plain old, but rather ingeniously constructed, random variable on a probability space, so we can talk about its variance. This is the variance in your formula from ISL.</p>
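This construction is easy to simulate. In the sketch below (made-up numbers; $\mathcal{A}$ is a least-squares line through the origin with true slope 1.5 and noise sd 1), each call to `train_once` draws one training set $T$ and returns the function $\mathcal{A}(T)$; evaluating those functions at a fixed $x_0$ gives draws of the random variable $\mathcal{A}_{x_0}(T)$, whose sample variance is exactly the $\mathrm{Var}(\hat f(x_0))$ of ISL's formula:

```python
import random
import statistics

random.seed(0)

def train_once(n=50):
    """Draw a training set T and return the fitted function A(T):
    here a least-squares line through the origin, y ~ beta * x."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [1.5 * xi + random.gauss(0, 1) for xi in xs]
    beta = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)
    return lambda x: beta * x            # the function-valued outcome

x0 = 2.0
# A_{x0}(T): evaluate each trained model at the fixed point x0
draws = [train_once()(x0) for _ in range(2000)]
mean_fhat_x0 = statistics.mean(draws)       # approx E[fhat(x0)] = 1.5 * x0
var_fhat_x0 = statistics.variance(draws)    # Var(fhat(x0)) over training sets
```

The variance here is purely "variance over training sets": $x_0$ never changes, only $T$ does.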
492
statistical learning
What is the relationship between Online Learning and Statistical Learning?
https://stats.stackexchange.com/questions/392301/what-is-the-relationship-between-online-learning-and-statistical-learning
<p>Online Learning, also known as Online Convex Optimization, has famous algorithms like Follow-the-Leader and Online Gradient Descent (see the <a href="http://ocobook.cs.princeton.edu/OCObook.pdf" rel="nofollow noreferrer">OCO book</a>).</p> <p>Stochastic programming has algorithms like Sample Average Approximation and Stochastic Approximation (see the <a href="http://mansci-web.uai.cl/%7Ethmello/pubs/saa%20discrete%20SIAM.pdf" rel="nofollow noreferrer">SAA paper</a>).</p> <p>So what is the relationship between these two frameworks? To me it seems that in online learning you assume the sequence of samples arrives over time and is arbitrary. In stochastic programming you see it either as an offline problem or as an online problem where the samples arrive in an i.i.d. fashion.</p> <p>So if you look at the online problem, is the only difference then the assumption that the samples are i.i.d., while the algorithms are the same, i.e. Follow-the-Leader = Sample Average Approximation and Online Gradient Descent = Stochastic Approximation?</p>
493
statistical learning
Finding optimal subspace for Linear Discriminant Analysis - Elements of Statistical Learning 4.3.3
https://stats.stackexchange.com/questions/405607/finding-optimal-subspace-for-linear-discriminant-analysis-elements-of-statisti
<p>Linear Discriminant Analysis (LDA) can perform dimensionality reduction. Section 4.3.3 in Elements of Statistical Learning makes this notion explicit and gives a method for computing the "optimal subspace for LDA".</p> <p><a href="https://web.stanford.edu/~hastie/Papers/ESLII.pdf" rel="nofollow noreferrer">https://web.stanford.edu/~hastie/Papers/ESLII.pdf</a></p> <p>Assuming <span class="math-container">$K$</span> classes, for a new observation <span class="math-container">$x$</span> LDA computes <span class="math-container">$\min_{k}{\|(\hat{\mu}_{k}^{*}-x^{*})\|^{2}}$</span> where <span class="math-container">$x^{*}$</span> and <span class="math-container">$\hat{\mu}_{k}^{*}$</span> are transformations of input <span class="math-container">$x$</span> and centroid <span class="math-container">$\hat{\mu}_{k}$</span> of class <span class="math-container">$k$</span> in an ad-hoc basis. In other words, we look for the <span class="math-container">$k$</span> minimizing the distance of the input data to the closest centroid (see more details <a href="https://stats.stackexchange.com/questions/405541/computation-of-lda-in-elements-of-statistical-learning-4-3-2">here</a>). </p> <p>The <span class="math-container">$K$</span> centroids lie in an affine subspace of dimension at most <span class="math-container">$K-1$</span>. (This is geometry; for <span class="math-container">$K=2$</span> we can say that two points uniquely define a line, which is of dimension <span class="math-container">$K-1=2-1=1$</span>.) We might project <span class="math-container">$X^{*}$</span> onto this centroid-spanning subspace <span class="math-container">$H_{K-1}$</span> and compare distances there. 
For <span class="math-container">$p$</span>-dimensional inputs, if <span class="math-container">$p$</span> is much larger than <span class="math-container">$K$</span> this will mean a considerable drop in dimension.</p> <p>Further, we might pick a subspace <span class="math-container">$H_{L}\subseteq{H_{K-1}}$</span> of dimension <span class="math-container">$L&lt;K-1$</span> which is optimal in some sense. </p> <blockquote> <p>Fisher defined optimal to mean that the projected centroids were spread out as much as possible in terms of variance. </p> </blockquote> <p>The Elements of Statistical Learning then gives the following method for finding the optimal subspace: </p> <blockquote> <p>• compute the <span class="math-container">$K \times p$</span> matrix of class centroids <span class="math-container">$M$</span> and the common covariance matrix <span class="math-container">$W$</span> (for within-class covariance);</p> <p>• compute <span class="math-container">$M^{∗} = MW^{−1/2}$</span> using the eigen-decomposition of <span class="math-container">$W$</span>;</p> <p>• compute <span class="math-container">$B^{∗}$</span>, the covariance matrix of <span class="math-container">$M^{∗}$</span> (<span class="math-container">$B$</span> for between-class covariance), and its eigen-decomposition <span class="math-container">$B^{∗} = V^{∗}D_{B}V^{∗T}$</span>. </p> <p>The columns <span class="math-container">$v_{l}^{∗}$</span> of <span class="math-container">$V^{∗}$</span> in sequence from first to last define the coordinates of the optimal subspaces.</p> </blockquote> <p>What is the rationale behind this method, and how is it providing the optimal subspace for LDA as per Fisher's definition of optimality? 
More precisely:</p> <p>• Why are <span class="math-container">$W$</span> the within-class and <span class="math-container">$B^{*}$</span> the between-class covariance matrices?</p> <p>• Why do the columns <span class="math-container">$v_{l}^{*}$</span> of <span class="math-container">$V^{*}$</span> define the coordinates of the optimal subspaces?</p> <p>• How do we derive the discriminant variables?</p>
<p><strong><em>Within-class, between-class covariance matrices</em></strong></p> <p>• Assuming common covariance matrix <span class="math-container">$\hat{\Sigma}=\hat{\Sigma}_{k}$</span> for all classes <span class="math-container">$k$</span> we write</p> <p><span class="math-container">$\hat{\Sigma}=\sum_{k=1}^{K}\sum_{g_{i}=k}{(x_{i}-\hat{\mu}_{k})(x_{i}-\hat{\mu}_{k})^{T}/(N-K)}$</span></p> <p>where the centroid <span class="math-container">$\hat{\mu}_{k}$</span> is a <span class="math-container">$p$</span>-vector estimated by <span class="math-container">$\hat{\mu}_{k}=\sum_{g_{i}=k}x_{i}/N_{k}$</span> and <span class="math-container">$N_{k}$</span> the number of elements of <span class="math-container">$X$</span> (the number of observations) in class <span class="math-container">$k$</span>.</p> <p><span class="math-container">$\hat{\Sigma}$</span> measures the dispersion of observations around the mean of the class they belong to. We call <span class="math-container">$\hat{\Sigma}$</span> the <em>within-class covariance</em>. We denote <span class="math-container">$\hat{\Sigma}=W$</span>.</p> <p>• <span class="math-container">$M$</span> is the <span class="math-container">$K \times p$</span> matrix of centroids <span class="math-container">$\hat{\mu}_{k}$</span> so that its covariance matrix is given by </p> <p><span class="math-container">$\sum_{k=1}^{K}(\hat{\mu}_{k}-\bar{\mu})(\hat{\mu}_{k}-\bar{\mu})^{T}/K$</span> </p> <p>with <span class="math-container">$\bar{\mu}$</span> the mean of the centroids <span class="math-container">$\hat{\mu}_{k}$</span>, <span class="math-container">$k=1..K$</span>. It measures the dispersion of the centroids around the mean of all centroids and we call it the <em>between-class covariance</em>. We denote <span class="math-container">$E[M^{T}M]=B$</span>. 
</p> <p>• We transform <span class="math-container">$M$</span> into <span class="math-container">$M^{*}=MW^{-1/2}$</span>, which performs a basis change (close to a <a href="https://en.wikipedia.org/wiki/Whitening_transformation" rel="nofollow noreferrer">whitening or sphering transformation</a>). I explain in the next section the rationale behind the choice for this basis. <span class="math-container">$M^{*}$</span> has a covariance matrix <span class="math-container">$B^{*}$</span>.</p> <p><strong><em>Optimal subspace</em></strong></p> <p>Fisher's approach to LDA maximizes the criterion</p> <p><span class="math-container">$J(w)=w^{T}Bw / w^{T}Ww$</span>.</p> <p>We minimize the within-class covariance and maximize the between-class covariance.</p> <p><span class="math-container">$J(w)$</span> is invariant w.r.t. rescaling <span class="math-container">$w \leftarrow \alpha w$</span>, so that we can choose <span class="math-container">$\alpha$</span> so as to have a unit denominator <span class="math-container">$w^{T}Ww=1$</span> (since it is a scalar). Thus we can turn the optimization problem into solving </p> <p><span class="math-container">$\max_{w} \frac{1}{2} w^{T}Bw$</span> s.t. <span class="math-container">$w^{T}Ww=1$</span>.</p> <p>We can rewrite this after the convenient basis change <span class="math-container">$w^{*} \leftarrow W^{1/2}w$</span>:</p> <p><span class="math-container">$\min_{w^{*}} -\frac{1}{2}(w^{*})^{T}B^{*}w^{*}$</span> s.t. <span class="math-container">$(w^{*})^{T}w^{*}=1$</span>.</p> <p>The Lagrangian for this problem is </p> <p><span class="math-container">$L = -\frac{1}{2}(w^{*})^{T}B^{*}w^{*} + \frac{1}{2}\lambda[(w^{*})^{T}w^{*}-1]$</span></p> <p>and the Karush-Kuhn-Tucker conditions give </p> <p><span class="math-container">$B^{*}w^{*}=\lambda w^{*}$</span>.</p> <p>We see here that the basis change we performed enables us to write the KKT conditions for our optimization problem as an eigendecomposition. 
We can then write <span class="math-container">$B^{*}=V^{*}D_{B}(V^{*})^{T}$</span> where the columns <span class="math-container">$v_{l}^{*}$</span> of <span class="math-container">$V^{*}$</span> are the eigenvectors of <span class="math-container">$B^{*}$</span> and the diagonal values of <span class="math-container">$D_{B}$</span> its eigenvalues. </p> <p>From the objective function <span class="math-container">$J(w)$</span> we see that the directions we look for correspond to the largest eigenvalues. Indeed under KKT conditions we have</p> <p><span class="math-container">$J(w^{*})=-\frac{1}{2}(w^{*})^{T}B^{*}w^{*}=-\frac{1}{2}(w^{*})^{T}\lambda w^{*}=-\lambda/2$</span></p> <p>since <span class="math-container">$(w^{*})^{T}w^{*}=1$</span>.</p> <p>Finally, back to the original basis, discriminant variables are defined as <span class="math-container">$Z_{l}=v_{l}^{T}X$</span> with <span class="math-container">$v_{l}=W^{-1/2}v_{l}^{*}$</span> for <span class="math-container">$l=1..L$</span> corresponding to the <span class="math-container">$L$</span> eigenvectors <span class="math-container">$v^{*}_{l}$</span> with largest eigenvalue in the decomposition of the transformed between-class matrix <span class="math-container">$B^{*}$</span>.</p> <p><a href="https://www.ics.uci.edu/~welling/teaching/273ASpring09/Fisher-LDA.pdf" rel="nofollow noreferrer">More information here.</a></p>
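For <span class="math-container">$K=2$</span> the eigenproblem collapses to a single direction <span class="math-container">$w \propto W^{-1}(\hat{\mu}_{1}-\hat{\mu}_{2})$</span>, which makes the construction easy to check numerically. A minimal pure-Python sketch (hypothetical 2-feature Gaussian data with identity within-class covariance by construction — not the general eigendecomposition, just the two-class special case):

```python
import random

random.seed(0)

# Two Gaussian classes in R^2 with a shared (within-class) covariance
mu = {1: (0.0, 0.0), 2: (2.0, 1.0)}
def sample(k, n=200):
    return [(mu[k][0] + random.gauss(0, 1), mu[k][1] + random.gauss(0, 1))
            for _ in range(n)]
X = {k: sample(k) for k in (1, 2)}

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def within_cov(X, means):
    # Pooled within-class covariance W (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    n = 0
    for k, pts in X.items():
        for p in pts:
            d = (p[0] - means[k][0], p[1] - means[k][1])
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
            n += 1
    return [[v / (n - 2) for v in row] for row in s]

means = {k: mean(pts) for k, pts in X.items()}
W = within_cov(X, means)

# For K = 2, Fisher's direction is w = W^{-1} (mu_1 - mu_2)
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
Winv = [[W[1][1] / det, -W[0][1] / det], [-W[1][0] / det, W[0][0] / det]]
d = (means[1][0] - means[2][0], means[1][1] - means[2][1])
w = (Winv[0][0] * d[0] + Winv[0][1] * d[1],
     Winv[1][0] * d[0] + Winv[1][1] * d[1])

# Projected centroids should be well separated along w
proj = {k: w[0] * means[k][0] + w[1] * means[k][1] for k in (1, 2)}
```

The gap between the projected centroids equals <span class="math-container">$d^{T}W^{-1}d$</span>, which is positive whenever <span class="math-container">$W$</span> is positive definite — the "spread out as much as possible" property restricted to one direction.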
494
statistical learning
Is there a textbook / handbook with full derivations for statistical / machine learning concepts?
https://stats.stackexchange.com/questions/74908/is-there-a-textbook-handbook-with-full-derivations-for-statistical-machine-l
<p>In particular, I am looking for a textbook which will go over the details of derivations (including all calculus and linear algebra) for learning models and concepts such as logistic regression, Gaussian Discriminant Analysis, with full proofs for variants like Gaussian Naive Bayes.</p> <p>Books such as "Elements of Statistical Learning" tend to gloss over certain details. For example, when discussing L1 Regularized Logistic Regression (Section 4.4.4), it says that "the score equations ... have the form", and then presents the form without giving the derivation.</p>
<p>Have a look at <a href="http://rads.stackoverflow.com/amzn/click/1439824142" rel="nofollow">'A First Course in Machine Learning,' Simon Rogers and Mark Girolami</a>. There are many easy to follow step by step derivations of concepts that include calculus and linear algebra. Also, you can look at google book preview to see if it fits your needs.</p>
495
statistical learning
Bias-variance decomposition of linear model fit in &#39;The Elements of Statistical Learning&#39;
https://stats.stackexchange.com/questions/307110/bias-variance-docomposition-of-linear-model-fit-in-the-elements-of-statistical
<p>In section 7.3 of 'The Elements of Statistical Learning', the authors have shown the expression for bias-variance decomposition of linear model fit: <a href="https://i.sstatic.net/tvwY3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tvwY3.jpg" alt="enter image description here"></a></p> <p><a href="https://i.sstatic.net/8ZZd4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ZZd4.jpg" alt="enter image description here"></a></p> <p>However, I get a slightly different expression for $Err(x_0)$. My attempt at the derivation is as follows: the standard bias-variance decomposition is $$Err(x_0)=\sigma_\epsilon^2+[f(x_0)-E_\tau\hat f_p(x_0)]^2+var(\hat f_p(x_0)),$$ where the subscript $\tau$ indicates that the expectation is w.r.t. training data and $$var(\hat f_p(x_0))=E_\tau[\hat f_p(x_0)-E_\tau\hat f_p(x_0)]^2.$$ Further, I can write: $$var(\hat f_p(x_0))=E_XE_{Y|X}[(\hat f_p(x_0)-E_\tau\hat f_p(x_0))^2|X].$$ Now, I take $$L=E_{Y|X}[(\hat f_p(x_0)-E_\tau\hat f_p(x_0))^2|X]$$ so that $var(\hat f_p(x_0))=E_X(L).$ On further simplification: $$L=E_{Y|X}[x_0^T(X^TX)^{-1}X^T(y-\mu_{Y/X})(y-\mu_{Y/X})^TX(X^TX)^{-1}x_0|X],$$ $$=\sigma_\epsilon^2E_{Y|X}[x_0^T(X^TX)^{-1}X^TI_NX(X^TX)^{-1}x_0|X],$$ $$=\sigma_\epsilon^2E_{Y|X}[||h(x_0)||^2|X],$$ where $$h(x_0)=X(X^TX)^{-1}x_0.$$ Now $$var(\hat f_p(x_0))=\sigma_\epsilon^2E_XE_{Y|X}[||h(x_0)||^2|X],$$ $$=\sigma_\epsilon^2E[||h(x_0)||^2].$$</p> <p>However, in the book $var(\hat f_p(x_0))=||h(x_0)||^2\sigma_\epsilon^2.$ Is there anything wrong with my derivation, or how can I proceed further to remove the extra expectation operator?</p>
<p>You should drop the outside <span class="math-container">$E_X$</span> and only use <span class="math-container">$E_{Y|X}$</span>, because here they are computing the test error (the definition is at the beginning of this chapter, eq 7.2), and the training sample X is fixed (you add <span class="math-container">$E_X$</span> when you want to compute the expected test error in eq 7.3); only the response variable Y is random (due to <span class="math-container">$\epsilon$</span>). This is the meaning of conditioning on X. When you plug in <span class="math-container">$x_0$</span>, you can think of it as a new but 'fixed' point, so the "random" part only comes from <span class="math-container">$Y=AX + \epsilon$</span>, the irreducible error. </p> <p>In 7.12, they are giving the result as the training error, and that's where the <span class="math-container">$p/N \sigma^2_{\epsilon}$</span> term comes from.</p>
496
statistical learning
Why doesn&#39;t test error increase for a high number of boosting iterations? Figure 10.13 of The Elements of Statistical Learning
https://stats.stackexchange.com/questions/548557/why-doesnt-test-error-increase-for-a-high-number-of-boosting-iterations-figure
<p>My question refers to figure 10.13 of <a href="https://web.stanford.edu/%7Ehastie/Papers/ESLII.pdf" rel="nofollow noreferrer">The Elements of Statistical Learning</a>. Test error decreases monotonically with the increase in tree iterations. However, I don't understand why the test error does not rise for a high number of iterations. In the figure below the test error increases for the more complex model.</p> <p><a href="https://i.sstatic.net/eHP62.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eHP62.png" alt="enter image description here" /></a></p> <p>EDIT: The boosting algorithm combines the output of weak learners (often trees) to produce strong predictions.</p> <p><span class="math-container">$$ f_{M}(x) = \sum_{m=1}^{M}f_{m}(x;\theta_{m}) $$</span></p> <p>where M is a tunable hyperparameter that controls the trade-off between bias and variance.</p> <p>In section 10.14.1 of <a href="https://web.stanford.edu/%7Ehastie/Papers/ESLII.pdf" rel="nofollow noreferrer">The Elements of Statistical Learning</a> the gradient boosting model was fit to California Housing data. In figure 10.13 the average absolute error for training and testing data is decreasing monotonically with the number of boosting iterations (M). This is what the authors write:</p> <blockquote> <p>The test error is seen to decrease monotonically with increasing M, more rapidly during the early stages and then leveling off to being nearly constant as iterations increase. Thus, the choice of a particular value of M is not critical, as long as it is not too small</p> </blockquote> <p>So, I think the authors claim that even for a big number of iterations M the test error will level off. However, shouldn't the test error rise for the very complex model (indicated by a high number of M)?</p>
<p>Note the rest of the paragraph you quoted:</p> <blockquote> <p>This tends to be the case in many applications. The shrinkage strategy (10.41) tends to eliminate the problem of overfitting, especially for larger data sets.</p> </blockquote> <p>Shrinkage slows down overfitting, but it does happen. Using basically the same setup as in the example, here's what I get when using sklearn set to many more trees (for whatever reason, the leveling off doesn't happen for me until already a bit further than in ESL):<br /> <a href="https://i.sstatic.net/ogUdC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ogUdC.png" alt="learning curve for 10000 trees" /></a></p> <p>I thought perhaps a big part of this was also the weak learners themselves: eventually they don't have enough capacity to fit to the gradient very well, and so reaching the actual training minimum isn't easy. I tried it with depth-1 trees, and built a classification problem that incorporates an xor-style data generating process (which depth-1 trees can't see), and the result suggests I was wrong about that:<br /> <a href="https://i.sstatic.net/Ot8Z2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ot8Z2.png" alt="test loss increases substantially" /></a><br /> I suppose actually the xor part makes it very hard to find the real process, and instead the trees eventually get built just on the random patterns?</p> <p><a href="https://github.com/bmreiniger/datascience.stackexchange/blob/master/stats548557_gbm_overfitting_ntrees.ipynb" rel="nofollow noreferrer">Colab notebook</a></p>
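For reference, here's what the shrinkage mechanism looks like in code. This is not the sklearn experiment above — it's a minimal pure-Python gradient-boosting sketch with squared loss and depth-1 stumps on made-up 1-D data — but it shows the role of ν (the shrinkage factor of eq. 10.41): each stump's contribution is scaled down, so the fit creeps toward the data slowly rather than jumping to it.

```python
import math
import random

random.seed(0)

# 1-D toy regression: stumps fit to residuals, each shrunk by nu
xs = [random.uniform(0, 1) for _ in range(200)]
ys = [math.sin(6 * x) + random.gauss(0, 0.2) for x in xs]

def fit_stump(xs, resid):
    """Best single-split regression stump on the current residuals."""
    best = None
    for i in range(1, 20):
        split = i / 20
        left = [r for x, r in zip(xs, resid) if x <= split]
        right = [r for x, r in zip(xs, resid) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lm, rm)
    _, split, lm, rm = best
    return lambda x: lm if x <= split else rm

nu = 0.1                              # shrinkage: small steps, slow overfitting
pred = [0.0] * len(xs)
for m in range(200):                  # M boosting iterations
    resid = [y - p for y, p in zip(ys, pred)]
    stump = fit_stump(xs, resid)
    pred = [p + nu * stump(x) for p, x in zip(pred, xs)]

ybar = sum(ys) / len(ys)
var_y = sum((y - ybar) ** 2 for y in ys) / len(ys)   # baseline (predict mean)
train_mse = sum((y - p) ** 2 for y, p in zip(ys, pred)) / len(ys)
```

Setting `nu = 1.0` and increasing `M` is the quickest way to reproduce in miniature the overfitting behavior shown in the second plot above.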
497
statistical learning
Statistical Significance of a learning Model
https://stats.stackexchange.com/questions/79677/statistical-significance-of-a-learning-model
<p>I built a learning model (for classification) based on a Random Forest classifier and I am asked to assess the statistical significance of its performance.</p> <p>Up to now, I trained and tested it on two different datasets A and B, respectively.</p> <p>What kind of test can I use?</p>
<p>You can get an unbiased estimate of the classification error with the out-of-bag error estimate. See explanation here: <a href="http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#ooberr" rel="nofollow noreferrer">http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#ooberr</a></p> <p>I suppose you could fit the model many times with different random seeds. If your classification error is better than expected by pure chance in at least 95% of your trials, then your model is significant (at an alpha level of 0.05).</p> <p>You may not even have to go to all that trouble -- your out-of-bag error should converge to some value as more trees are added. I do not know how to estimate a confidence interval for it without the above procedure, but someone smarter than I am may know...</p> <p><strong>Edit:</strong> This thread looks relevant to the question of estimating a confidence interval on OOB classification error - <a href="https://stats.stackexchange.com/questions/16597/bootstrapping-estimates-of-out-of-sample-error">Bootstrapping estimates of out-of-sample error</a></p>
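A cheaper alternative to repeated refits, if the test set is held out and the classes are balanced, is an exact one-sided binomial test of the classifier's accuracy against chance. The numbers below (60 correct out of 100) are purely illustrative:

```python
from math import comb

def binom_p_value(n_correct, n_total, p_chance=0.5):
    """Exact one-sided binomial test: P(X >= n_correct) under chance accuracy."""
    return sum(comb(n_total, k) * p_chance ** k * (1 - p_chance) ** (n_total - k)
               for k in range(n_correct, n_total + 1))

# Hypothetical example: 60 of 100 test cases correct, balanced two-class problem
p = binom_p_value(60, 100)
significant = p < 0.05
```

For unbalanced classes, replace `p_chance` with the majority-class frequency, which is the accuracy of always predicting the most common label.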
498
statistical learning
Equation 10.2 from The Elements of Statistical Learning. Median of a chi-squared distribution
https://stats.stackexchange.com/questions/545107/equation-10-2-from-the-elements-of-statistical-learning-median-of-a-chi-squared
<p>I'm reading about AdaBoost in <em>The Elements of Statistical Learning</em> and I don't understand equation 10.2. Below is an excerpt from the book.</p> <blockquote> <p>The power of AdaBoost to dramatically increase the performance of even a very weak classifier is illustrated in Figure 10.2. The features <span class="math-container">$X_1 ,...,X_{10}$</span> are standard independent Gaussian, and the deterministic target <span class="math-container">$Y$</span> is defined by</p> <p><span class="math-container">$ Y= \begin{cases} 1 &amp; \text{if}\,\sum_{j=1}^{10}X_j^2&gt;\chi_{10}^2(0.5), \\ -1 &amp; \text{otherwise}. \end{cases} \tag{10.2} $</span></p> <p>Here <span class="math-container">$\chi_{10}^2(0.5)=9.34$</span> is the median of a chi-squared random variable with 10 degrees of freedom (sum of squares of 10 standard Gaussians). There are 2000 training cases, with approximately 1000 cases in each class, and 10,000 test observations. Here the weak classifier is just a “stump”: a two terminal-node classification tree. Applying this classifier alone to the training dataset yields a very poor test set error rate of 45.8%, compared to 50% for random guessing. However, as boosting iterations proceed the error rate steadily decreases, reaching 5.8% after 400 iterations</p> </blockquote> <p>So, our threshold (<span class="math-container">$\chi_{10}^2(0.5)$</span>) divides the dataset into 2 classes of equal size. I have 2 questions about that:</p> <ol> <li>Why do we want to have classes of equal size?</li> <li>Why is <span class="math-container">$\chi_{10}^2(0.5)$</span> the median of a chi-squared distribution with df=10? What does 0.5 stand for?</li> </ol>
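On question 2: the 0.5 is a quantile level. <span class="math-container">$\chi_{10}^2(0.5)$</span> is the value a <span class="math-container">$\chi_{10}^2$</span> variable exceeds with probability one half — by definition, its median — and since <span class="math-container">$\sum_j X_j^2$</span> is exactly such a variable, the two classes come out roughly equal in size. That in turn (question 1) makes 50% the natural chance baseline the excerpt compares against. A quick illustrative sketch of the data-generating process:

```python
import random

random.seed(0)

CHI2_10_MEDIAN = 9.34   # chi-squared(10) quantile at level 0.5, i.e. the median

def draw_label():
    x = [random.gauss(0, 1) for _ in range(10)]
    s = sum(v * v for v in x)            # chi-squared(10) by construction
    return 1 if s > CHI2_10_MEDIAN else -1

labels = [draw_label() for _ in range(2000)]
frac_positive = labels.count(1) / len(labels)   # close to 0.5 by design
```

Using any other quantile level q would simply give classes in roughly q : (1-q) proportion.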
499