Let $f(x, y)$ be a general function over $\mathbb{R}^{2}$. Mark any of the following statements that are always (independent of the function) correct.
Suppose f is a strong one-way function. Define g(x, r) = (f(x), r) where |r| = 2|x|.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f(x, y)$ be a general function over $\mathbb{R}^{2}$. Mark any of the following statements that are always (independent of the function) correct.
If f and g are functions, then $\left(\frac{f}{g}\right)' = \frac{f'g - g'f}{g^{2}}$ wherever g is nonzero. This can be derived from the product rule and the reciprocal rule.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
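The quotient rule quoted above can be sanity-checked numerically; a minimal sketch, with arbitrary choices of f and g (not from the source):

```python
import math

def f(x):
    return math.sin(x)

def g(x):
    return x ** 2 + 1.0  # nonzero everywhere, as the rule requires

def quotient_rule(x, h=1e-5):
    # (f/g)'(x) via the formula (f'g - g'f) / g^2, with central-difference derivatives
    fp = (f(x + h) - f(x - h)) / (2 * h)
    gp = (g(x + h) - g(x - h)) / (2 * h)
    return (fp * g(x) - gp * f(x)) / g(x) ** 2

def direct_derivative(x, h=1e-5):
    # derivative of the ratio f/g computed directly
    q = lambda t: f(t) / g(t)
    return (q(x + h) - q(x - h)) / (2 * h)

diff = abs(quotient_rule(0.7) - direct_derivative(0.7))
```

Both estimates agree to within finite-difference error, which is what the rule predicts.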
[Gradient for convolutional neural nets] Let $f(x, y, z, u, v, w)=3 x y z u v w+x^{2} y^{2} w^{2}-7 x z^{5}+3 y v w^{4}$. What is $$ \left.\left[\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}+\frac{\partial f}{\partial z}+\frac{\partial f}{\partial u}+\frac{\partial f}{\partial v}+\frac{\partial f}{\partial w}\right]\right|_{x=y=z=u=v=w=1} ? $$
In a rectangular coordinate system, the gradient is given by $\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
[Gradient for convolutional neural nets] Let $f(x, y, z, u, v, w)=3 x y z u v w+x^{2} y^{2} w^{2}-7 x z^{5}+3 y v w^{4}$. What is $$ \left.\left[\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}+\frac{\partial f}{\partial z}+\frac{\partial f}{\partial u}+\frac{\partial f}{\partial v}+\frac{\partial f}{\partial w}\right]\right|_{x=y=z=u=v=w=1} ? $$
By definition, the gradient of a scalar function f is $\nabla f = \sum_i \mathbf{e}^i \frac{\partial f}{\partial q^i} = \frac{\partial f}{\partial x}\mathbf{e}^1 + \frac{\partial f}{\partial y}\mathbf{e}^2 + \frac{\partial f}{\partial z}\mathbf{e}^3$, where the $q^i$ are the coordinates x, y, z indexed. Recognizing this as a vector written in terms of the contravariant basis, it may be rewritten: $\nabla f = \frac{\frac{\partial f}{\partial x} - \sin(\phi)\frac{\partial f}{\partial z}}{\cos(\phi)^2}\mathbf{e}_1 + \frac{\partial f}{\partial y}\mathbf{e}_2 + \frac{-\sin(\phi)\frac{\partial f}{\partial x} + \frac{\partial f}{\partial z}}{\cos(\phi)^2}\mathbf{e}_3.$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
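The sum of partial derivatives asked for above can be checked with central finite differences; a minimal sketch of the computation (not part of the original exercise):

```python
def f(x, y, z, u, v, w):
    return 3*x*y*z*u*v*w + x**2 * y**2 * w**2 - 7*x*z**5 + 3*y*v*w**4

def partial(i, point, h=1e-5):
    # central-difference estimate of the i-th partial derivative at `point`
    up = list(point); up[i] += h
    dn = list(point); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

point = [1.0] * 6
grad_sum = sum(partial(i, point) for i in range(6))
# Analytically the six partials at all-ones are -2, 8, -32, 3, 6, 17, summing to 0.
```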
Let $\xv, \wv, \deltav \in \R^d$, $y \in \{-1, 1\}$, and let $\varepsilon \in \R_{>0}$ be an arbitrary positive value. Which of the following is NOT true in general:
It is therefore enough to prove positivity of the Jacobian when a = 0. In that case $J_f(0) = |a_1|^2 - |a_{-1}|^2$, where the $a_n$ are the Fourier coefficients of f: $a_n = \frac{1}{2\pi}\int_0^{2\pi} f(e^{i\theta}) e^{-in\theta}\,d\theta.$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $\xv, \wv, \deltav \in \R^d$, $y \in \{-1, 1\}$, and let $\varepsilon \in \R_{>0}$ be an arbitrary positive value. Which of the following is NOT true in general:
Hence $\frac{f(x) - f(rx)}{1 - r} = \frac{-f(rx)}{1 - r} \geq -\frac{1}{(1+r)^{n-1}}f(0) > -\frac{f(0)}{2^{n-1}} > 0.$ Hence the directional derivative at x is bounded below by the strictly positive constant on the right-hand side.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have data with lots of outliers. Everything else being equal, and assuming that you do not do any pre-processing, which cost function will be less affected by these outliers?
One common approach to handle outliers in data analysis is to perform outlier detection first, followed by an efficient estimation method (e.g., the least squares). While this approach is often useful, one must keep in mind two challenges. First, an outlier detection method that relies on a non-robust initial fit can suffer from the effect of masking, that is, a group of outliers can mask each other and escape detection. Second, if a high breakdown initial fit is used for outlier detection, the follow-up analysis might inherit some of the inefficiencies of the initial estimator.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have data with lots of outliers. Everything else being equal, and assuming that you do not do any pre-processing, which cost function will be less affected by these outliers?
While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
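The contrast the question is after can be illustrated with a constant predictor: the squared-error (L2) cost is minimized by the mean and the absolute-error (L1) cost by the median, and a single outlier drags the mean far more than the median. A minimal sketch with invented data:

```python
data = [1.0, 1.2, 0.9, 1.1, 1.0, 100.0]  # one gross outlier

# minimizer of the L2 (squared-error) cost for a constant fit: the mean
mean = sum(data) / len(data)

# minimizer of the L1 (absolute-error) cost for a constant fit: the median
middle = sorted(data)[len(data) // 2 - 1:len(data) // 2 + 1]
median = sum(middle) / 2

# The mean is pulled toward the outlier; the median stays near the bulk of the data.
```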
Consider a binary classification task as in Figure~\AMCref{fig:lr_data}, which consists of 14 two-dimensional linearly separable samples (circles correspond to label $y=1$ and pluses correspond to label $y=0$). We would like to predict the label $y=1$ of a sample $(x_1, x_2)$ when the following holds true \[ \prob(y=1|x_1, x_2, w_1, w_2) = \frac{1}{1+\exp(-w_1x_1 -w_2x_2)} > 0.5 \] where $w_1$ and $w_2$ are parameters of the model. If we obtain the $(w_1, w_2)$ by optimizing the following objective $$ - \sum_{n=1}^N\log \prob(y_n| x_{n1}, x_{n2}, w_1, w_2) + \frac{C}{2} w_2^2 $$ where $C$ is very large, then the decision boundary will be close to which of the following lines?
Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)). For many problems, it is convenient to get a probability P ( y = 1 | x ) {\displaystyle P(y=1|x)} , i.e. a classification that not only gives an answer, but also a degree of certainty about the answer.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification task as in Figure~\AMCref{fig:lr_data}, which consists of 14 two-dimensional linearly separable samples (circles correspond to label $y=1$ and pluses correspond to label $y=0$). We would like to predict the label $y=1$ of a sample $(x_1, x_2)$ when the following holds true \[ \prob(y=1|x_1, x_2, w_1, w_2) = \frac{1}{1+\exp(-w_1x_1 -w_2x_2)} > 0.5 \] where $w_1$ and $w_2$ are parameters of the model. If we obtain the $(w_1, w_2)$ by optimizing the following objective $$ - \sum_{n=1}^N\log \prob(y_n| x_{n1}, x_{n2}, w_1, w_2) + \frac{C}{2} w_2^2 $$ where $C$ is very large, then the decision boundary will be close to which of the following lines?
Note that predictions can now be made according to $y = 1 \text{ iff } P(y=1|x) > \frac{1}{2}$; if $B \neq 0$, the probability estimates contain a correction compared to the old decision function y = sign(f(x)). The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities $t_+ = \frac{N_+ + 1}{N_+ + 2}$ for positive samples (y = 1) and $t_- = \frac{1}{N_- + 2}$ for negative samples (y = -1). Here, $N_+$ and $N_-$ are the number of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels. The constants 1 and 2, on the numerator and denominator respectively, are derived from the application of Laplace smoothing. Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
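The effect the question targets can be reproduced numerically. A minimal sketch with invented data separable by the sign of $x_1$ (not the dataset from the figure): with a very large penalty $\frac{C}{2}w_2^2$ on $w_2$ only, gradient descent drives $w_2$ toward 0, so the decision boundary $w_1 x_1 + w_2 x_2 = 0$ approaches the vertical line $x_1 = 0$.

```python
import math

# invented linearly separable data: label decided by the sign of x1
data = [(1.0, 2.0, 1), (2.0, -1.0, 1), (1.5, 0.5, 1),
        (-1.0, 1.5, 0), (-2.0, -0.5, 0), (-1.2, 2.5, 0)]
C = 1000.0
w1, w2 = 0.0, 0.0
lr = 1e-3
for _ in range(20000):
    g1 = g2 = 0.0
    for x1, x2, y in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2)))
        g1 += (p - y) * x1          # gradient of the negative log-likelihood
        g2 += (p - y) * x2
    g2 += C * w2                    # gradient of the (C/2) * w2^2 penalty
    w1 -= lr * g1
    w2 -= lr * g2
# w2 is crushed by the penalty, so the boundary is close to x1 = 0
```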
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: \begin{align*} g^\star(\xv)\in \arg \min_{z\in\R}\mathbb E[\phi( z Y)|X=\xv]. \end{align*} Thus the function $g^\star$ that minimizes the $\phi$-risk can be determined by looking at each $\xv$ separately. Give a formula for the function $g^\star : \mathcal X \to\R$ which minimizes the true $\phi$-risk, as a function of $\eta(\xv)$.
Fix a loss function $\mathcal{L}\colon Y\times Y\to \mathbb{R}_{\geq 0}$, for example the square loss $\mathcal{L}(y, y') = (y - y')^2$, where $h(x) = y'$. For a given distribution $\rho$ on $X \times Y$, the expected risk of a hypothesis (a function) $h \in \mathcal{H}$ is $\mathcal{E}(h) := \mathbb{E}_\rho[\mathcal{L}(h(x), y)] = \int_{X\times Y} \mathcal{L}(h(x), y)\,d\rho(x, y)$. In our setting, we have $h = \mathcal{A}(S_n)$, where $\mathcal{A}$ is a learning algorithm and $S_n = ((x_1, y_1), \ldots, (x_n, y_n)) \sim \rho^n$ is a sequence of vectors which are all drawn independently from $\rho$. Define the optimal risk. Set $h_n = \mathcal{A}(S_n)$ for each $n$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: \begin{align*} g^\star(\xv)\in \arg \min_{z\in\R}\mathbb E[\phi( z Y)|X=\xv]. \end{align*} Thus the function $g^\star$ that minimizes the $\phi$-risk can be determined by looking at each $\xv$ separately. Give a formula for the function $g^\star : \mathcal X \to\R$ which minimizes the true $\phi$-risk, as a function of $\eta(\xv)$.
Fix a loss function $\mathcal{L}\colon Y\times Y\to \mathbb{R}_{\geq 0}$, for example the square loss $\mathcal{L}(y, y') = (y - y')^2$, where $h(x) = y'$. For a given distribution $\rho$ on $X \times Y$, the expected risk of a hypothesis (a function) $h \in \mathcal{H}$ is $\mathcal{E}(h) := \mathbb{E}_\rho[\mathcal{L}(h(x), y)] = \int_{X\times Y} \mathcal{L}(h(x), y)\,d\rho(x, y)$. In our setting, we have $h = \mathcal{A}(S_n)$, where $\mathcal{A}$ is a learning algorithm and $S_n = ((x_1, y_1), \ldots, (x_n, y_n)) \sim \rho^n$ is a sequence of vectors which are all drawn independently from $\rho$. Define the optimal risk. Set $h_n = \mathcal{A}(S_n)$ for each $n$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
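The answer depends on the specific $\phi$ from the course, but the pointwise-minimization recipe can be illustrated for one concrete choice. A minimal sketch, assuming the exponential loss $\phi(z)=e^{-z}$ (an assumption, not taken from the text): the conditional $\phi$-risk $\eta e^{-z} + (1-\eta)e^{z}$ is minimized at $z^\star = \frac{1}{2}\log\frac{\eta}{1-\eta}$, which a grid search reproduces.

```python
import math

def conditional_risk(z, eta):
    # E[phi(zY) | X=x] for phi(z) = exp(-z): Y=+1 w.p. eta, Y=-1 w.p. 1-eta
    return eta * math.exp(-z) + (1 - eta) * math.exp(z)

eta = 0.8
closed_form = 0.5 * math.log(eta / (1 - eta))

# crude grid search over z confirms the closed-form minimizer
zs = [i / 10000.0 for i in range(-30000, 30001)]
grid_min = min(zs, key=lambda z: conditional_risk(z, eta))

gap = abs(grid_min - closed_form)
```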
(Linear or Logistic Regression) Suppose you are given a dataset of tissue images from patients with and without a certain disease. You are supposed to train a model that predicts the probability that a patient has the disease. It is preferable to use logistic regression over linear regression.
Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Linear or Logistic Regression) Suppose you are given a dataset of tissue images from patients with and without a certain disease. You are supposed to train a model that predicts the probability that a patient has the disease. It is preferable to use logistic regression over linear regression.
Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis. The assumption of linear predictor effects can easily be relaxed using techniques such as spline functions.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
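A minimal sketch of why logistic regression is preferable for this task (the 1-D data below is invented): a least-squares linear fit of 0/1 labels can output values outside $[0,1]$, while the logistic sigmoid always yields a valid probability.

```python
import math

# invented 1-D data: feature value -> disease indicator (0/1)
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]

# ordinary least squares line y = a*x + b
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

linear_pred = a * 10.0 + b                # "probability" for an extreme input: exceeds 1
logistic_pred = 1.0 / (1.0 + math.exp(-(10.0 - mx)))  # a sigmoid output: always in (0, 1)
```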
Show that the solution of the problem $\argmax_{\wv:\|\wv\|=1} \text{Var}[\wv^\top \xx]$ is to set $\wv$ to be the first principal vector of $\xv_1, \ldots, \xv_N$.
For brevity, we write $W$ for $W(y_1, \ldots, y_n)$ and omit the argument $x$. It suffices to show that the Wronskian solves the first-order linear differential equation $W' = -p_{n-1}\,W$, because the remaining part of the proof then coincides with the one for the case $n = 2$. In the case $n = 1$ we have $W = y_1$ and the differential equation for $W$ coincides with the one for $y_1$. Therefore, assume $n \geq 2$ in the following.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Show that the solution of the problem $\argmax_{\wv:\|\wv\|=1} \text{Var}[\wv^\top \xx]$ is to set $\wv$ to be the first principal vector of $\xv_1, \ldots, \xv_N$.
We claim that any such vector $x = f(v)$ satisfies $\frac{x^{\text{T}}Ax}{x^{\text{T}}x} \geq 2\sqrt{d-1}\left(1 - \frac{1}{2r}\right).$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
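A minimal numerical sketch of the claim in the exercise (toy 2-D data invented here): power iteration on the sample covariance matrix converges to the first principal vector, i.e. the unit direction maximizing the variance of $\wv^\top\xx$.

```python
# toy 2-D samples stretched along the x-axis, so the first principal
# vector should be close to (+/-1, 0)
samples = [(3.0, 0.5), (-2.5, -0.3), (4.0, 0.2), (-3.5, 0.4),
           (2.8, -0.6), (-3.0, 0.1)]
n = len(samples)
mx = sum(x for x, _ in samples) / n
my = sum(y for _, y in samples) / n

# entries of the 2x2 sample covariance matrix
cxx = sum((x - mx) ** 2 for x, _ in samples) / n
cyy = sum((y - my) ** 2 for _, y in samples) / n
cxy = sum((x - mx) * (y - my) for x, y in samples) / n

# power iteration: repeatedly apply the covariance matrix and renormalize
wx, wy = 1.0, 1.0
for _ in range(100):
    nx = cxx * wx + cxy * wy
    ny = cxy * wx + cyy * wy
    norm = (nx * nx + ny * ny) ** 0.5
    wx, wy = nx / norm, ny / norm
# (wx, wy) now approximates the first principal vector
```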
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means that this point is classified correctly by $f$. Assume further that we have computed the gradient of $g$ at $\mathbf{x}$ to be $\nabla_{\mathbf{x}} g(\mathbf{x})=(+1,-2,+3,-4,+5,-6)$. You are allowed to make one step in order to (hopefully) find an adversarial example. In the following four questions, assume $\epsilon=1$. Which offset $\delta$ with $\|\delta\|_{1} \leq 1$ yields the smallest value for $g(\mathbf{x}+\delta)$, assuming that $g$ is (locally) linear?
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x}, \mathbf{p})$, depending on the function f, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$, so that all learning rates $\alpha \leq \beta(\mathbf{x}, \mathbf{p})$ satisfy Armijo's condition. When $\mathbf{p} = -\nabla f(\mathbf{x})$, we can choose $\beta(\mathbf{x}, \mathbf{p})$ in the order of $1/L(\mathbf{x})$, where $L(\mathbf{x})$ is a local Lipschitz constant for the gradient $\nabla f$ near the point $\mathbf{x}$ (see Lipschitz continuity). If the function is $C^2$, then $L(\mathbf{x})$ is close to the Hessian of the function at the point $\mathbf{x}$. See Armijo (1966) for more detail.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means that this point is classified correctly by $f$. Assume further that we have computed the gradient of $g$ at $\mathbf{x}$ to be $\nabla_{\mathbf{x}} g(\mathbf{x})=(+1,-2,+3,-4,+5,-6)$. You are allowed to make one step in order to (hopefully) find an adversarial example. In the following four questions, assume $\epsilon=1$. Which offset $\delta$ with $\|\delta\|_{1} \leq 1$ yields the smallest value for $g(\mathbf{x}+\delta)$, assuming that $g$ is (locally) linear?
Often f is a threshold function, which maps all values of $\vec{w}\cdot\vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g., $f(\mathbf{x}) = \begin{cases}1 & \text{if } \mathbf{w}^T\cdot\mathbf{x} > \theta,\\ 0 & \text{otherwise}\end{cases}$ The superscript T indicates the transpose and $\theta$ is a scalar threshold. A more complex f might give the probability that an item belongs to a certain class. For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
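Under the local-linearity assumption, $g(\mathbf{x}+\delta) \approx g(\mathbf{x}) + \nabla g(\mathbf{x})^\top \delta$, and minimizing the inner product over the $\ell_1$ ball puts the entire budget on the coordinate with the largest absolute gradient entry. A minimal sketch of that computation for the numbers in the question:

```python
g_x = 8.0
grad = [1.0, -2.0, 3.0, -4.0, 5.0, -6.0]
eps = 1.0

# the coordinate with the largest |gradient| gets the whole L1 budget,
# signed so that the linearized g decreases
i = max(range(len(grad)), key=lambda j: abs(grad[j]))
delta = [0.0] * len(grad)
delta[i] = -eps if grad[i] > 0 else eps

g_new = g_x + sum(d * g for d, g in zip(delta, grad))
# largest |entry| is 6 at the last coordinate, so delta = (0,0,0,0,0,+1)
# and the linearized value drops to g_new = 8 - 6 = 2
```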
Which of the following statements is correct?
If statements 1 and 2 are true, it absolutely follows that statement 3 is true. However, it may still be the case that statement 1 or 2 is not true. For example: If Albert Einstein makes a statement about science, it is correct.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is correct?
If the first statement is false, then the second is false, too. But if the second statement is false, then the first statement is true. It follows that if the first statement is false, then the first statement is true. The same mechanism applies to the second statement.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. \vspace{3mm} Which of the following techniques do \emph{not} improve the generalization performance in deep learning?
Consider a multilayer perceptron (MLP) with one hidden layer and $m$ hidden units, with the mapping from input $x \in \mathbb{R}^d$ to a scalar output described as $F_x(\tilde{W}, \Theta) = \sum_{i=1}^m \theta_i \phi(x^T \tilde{w}^{(i)})$, where $\tilde{w}^{(i)}$ and $\theta_i$ are the input and output weights of unit $i$ correspondingly, and $\phi$ is the activation function, assumed to be a tanh function. The input and output weights can then be optimized with $\min_{\tilde{W},\Theta}(f_{NN}(\tilde{W},\Theta) = \mathbb{E}_{y,x})$, where $l$ is a loss function, $\tilde{W} = \{\tilde{w}^{(1)}, \ldots$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. \vspace{3mm} Which of the following techniques do \emph{not} improve the generalization performance in deep learning?
Consider a multilayer perceptron (MLP) with one hidden layer and $m$ hidden units, with the mapping from input $x \in \mathbb{R}^d$ to a scalar output described as $F_x(\tilde{W}, \Theta) = \sum_{i=1}^m \theta_i \phi(x^T \tilde{w}^{(i)})$, where $\tilde{w}^{(i)}$ and $\theta_i$ are the input and output weights of unit $i$ correspondingly, and $\phi$ is the activation function, assumed to be a tanh function. The input and output weights can then be optimized with $\min_{\tilde{W},\Theta}(f_{NN}(\tilde{W},\Theta) = \mathbb{E}_{y,x})$, where $l$ is a loss function, $\tilde{W} = \{\tilde{w}^{(1)}, \ldots$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
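The one-hidden-layer mapping $F_x(\tilde W,\Theta)=\sum_{i=1}^m \theta_i\phi(x^T\tilde w^{(i)})$ quoted above can be written out directly; a minimal sketch with invented weights, using tanh as stated:

```python
import math

def mlp_forward(x, W_tilde, theta):
    # F_x(W~, Theta) = sum_i theta_i * tanh(x . w~^(i))
    return sum(t * math.tanh(sum(xj * wj for xj, wj in zip(x, w)))
               for w, t in zip(W_tilde, theta))

# invented example: d = 2 inputs, m = 3 hidden units
x = [0.5, -1.0]
W_tilde = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
theta = [0.5, -0.25, 1.0]
out = mlp_forward(x, W_tilde, theta)
```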
What is the gradient of $\mathbf{x}^{\top} \mathbf{W} \mathbf{x}$ with respect to all entries of $\mathbf{W}$ (written as a matrix)?
In order to find the correct value of $\mathbf{w}$, we can use the gradient descent method. We first of all whiten the data, and transform $\mathbf{x}$ into a new mixture $\mathbf{z}$, which has unit variance, with $\mathbf{z} = (z_1, z_2, \ldots, z_M)^T$. This can be achieved by applying singular value decomposition to $\mathbf{x}$: $\mathbf{x} = \mathbf{U}\mathbf{D}\mathbf{V}^T$. Rescale each vector $U_i = U_i / \operatorname{E}(U_i^2)$, and let $\mathbf{z} = \mathbf{U}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the gradient of $\mathbf{x}^{\top} \mathbf{W} \mathbf{x}$ with respect to all entries of $\mathbf{W}$ (written as a matrix)?
In order for $\mathbf{W}$ to be single-valued in configuration space, $\mathbf{A}$ has to be analytic, and in order for $\mathbf{A}$ to be analytic (excluding the pathological points), the components of the vector matrix $\mathbf{F}$ have to satisfy the following equation: $G_{q_i q_j} = \frac{\partial \mathbf{F}_{q_i}}{\partial q_j} - \frac{\partial \mathbf{F}_{q_j}}{\partial q_i} = 0.$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
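For the question above, the standard identity is $\frac{\partial}{\partial \mathbf{W}}\, \mathbf{x}^\top\mathbf{W}\mathbf{x} = \mathbf{x}\mathbf{x}^\top$, since the $(i,j)$ entry of $\mathbf{W}$ multiplies $x_i x_j$ exactly once. A minimal finite-difference check (the vector and matrix values are invented):

```python
x = [1.0, 2.0, -1.0]
W = [[0.5, 1.0, 0.0], [2.0, -1.0, 0.5], [1.0, 0.0, 3.0]]

def quad_form(W):
    # x^T W x
    return sum(x[i] * W[i][j] * x[j] for i in range(3) for j in range(3))

h = 1e-6
max_err = 0.0
for i in range(3):
    for j in range(3):
        Wp = [row[:] for row in W]; Wp[i][j] += h
        Wm = [row[:] for row in W]; Wm[i][j] -= h
        numeric = (quad_form(Wp) - quad_form(Wm)) / (2 * h)
        analytic = x[i] * x[j]   # entry (i, j) of the outer product x x^T
        max_err = max(max_err, abs(numeric - analytic))
```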
We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{D}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and the response variable $y_{n} \in \mathbb{R}$. Compute the closed-form solution $\mathbf{w}_{\text {ridge }}^{\star}$ of this problem, providing the required justifications. State the final result using the data matrix $\mathbf{X} \in \mathbb{R}^{N \times D}$.
In the simplest case, the problem of a near-singular moment matrix $(\mathbf{X}^{\mathsf{T}}\mathbf{X})$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by $\hat{\beta}_R = (\mathbf{X}^{\mathsf{T}}\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$, where $\mathbf{y}$ is the regressand, $\mathbf{X}$ is the design matrix, $\mathbf{I}$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix. It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\beta^{\mathsf{T}}\beta = c$, which can be expressed as a Lagrangian: $\min_\beta\,(\mathbf{y} - \mathbf{X}\beta)^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\beta) + \lambda(\beta^{\mathsf{T}}\beta - c)$, which shows that $\lambda$ is nothing but the Lagrange multiplier of the constraint.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{d}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and the response variable $y_{n} \in \mathbb{R}$ Compute the closed-form solution $\mathbf{w}_{\text {ridge }}^{\star}$ of this problem, providing the required justifications. State the final result using the data matrix $\mathbf{X} \in \mathbb{R}^{N \times D}$.
One particularly common choice for the penalty function $R$ is the squared $\ell_2$ norm, i.e., $R(w) = \sum_{j=1}^d w_j^2$: $\frac{1}{n}\|Y - \operatorname{X} w\|_2^2 + \lambda \sum_{j=1}^d |w_j|^2 \rightarrow \min_{w\in\mathbb{R}^d}.$ The most common names for this are Tikhonov regularization and ridge regression. It admits a closed-form solution for $w$: $w = (X^T X + \lambda I)^{-1} X^T Y$. The name ridge regression alludes to the fact that the $\lambda I$ term adds positive entries along the diagonal "ridge" of the sample covariance matrix $X^T X$. When $\lambda = 0$, i.e., in the case of ordinary least squares, the condition that $d > n$ causes the sample covariance matrix $X^T X$ to not have full rank, so it cannot be inverted to yield a unique solution. This is why there can be an infinitude of solutions to the ordinary least squares problem when $d > n$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
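For the exercise's scaling $\frac{1}{2N}\sum_n (y_n - \mathbf{x}_n^\top\mathbf{w})^2 + \lambda\|\mathbf{w}\|_2^2$, setting the gradient to zero gives $(\mathbf{X}^\top\mathbf{X} + 2N\lambda\mathbf{I})\,\mathbf{w}^\star = \mathbf{X}^\top\mathbf{y}$; the $2N\lambda$ factor differs from the textbook form quoted above because of the $\frac{1}{2N}$ in front of the sum. A minimal sketch, verified numerically on an invented $D=2$ problem:

```python
# invented data: N = 4 samples, D = 2 features
X = [[1.0, 2.0], [2.0, 0.5], [-1.0, 1.0], [0.5, -1.5]]
y = [3.0, 2.0, 0.0, -1.0]
N, lam = len(X), 0.1

# form A = X^T X + 2 N lambda I and b = X^T y  (D = 2, so solve by hand)
a11 = sum(r[0] * r[0] for r in X) + 2 * N * lam
a22 = sum(r[1] * r[1] for r in X) + 2 * N * lam
a12 = sum(r[0] * r[1] for r in X)
b1 = sum(r[0] * yi for r, yi in zip(X, y))
b2 = sum(r[1] * yi for r, yi in zip(X, y))

det = a11 * a22 - a12 * a12
w1 = (a22 * b1 - a12 * b2) / det        # Cramer's rule for the 2x2 system
w2 = (a11 * b2 - a12 * b1) / det

# gradient of the objective at (w1, w2) should vanish
g1 = -sum(r[0] * (yi - r[0] * w1 - r[1] * w2) for r, yi in zip(X, y)) / N + 2 * lam * w1
g2 = -sum(r[1] * (yi - r[0] * w1 - r[1] * w2) for r, yi in zip(X, y)) / N + 2 * lam * w2
```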
Given a joint data distribution $\mathcal D$ on $\mathcal X \times \{-1,1\}$ and $n$ independent and identically distributed observations from $\mathcal D$, the goal of the classification task is to learn a classifier $f:\mathcal X \to \{-1,1\}$ with minimum true risk $\mathcal L(f) = \mathbb E_{(X,Y)\sim \mathcal D} [\boldsymbol{\mathbb{1}}_{f(X) \neq Y}]$ where $\boldsymbol{\mathbb{1}}_{C} = \begin{cases} 1 & \text{ if } C \text{ is true} \\ 0 & \text{otherwise} \end{cases}$. We denote by $\mathcal D_{X}$ the marginal law (probability distribution) of $X$, and $\mathcal D_{Y|X}$ the conditional law of $Y$ given $X$. Give the two reasons seen in the course which explain that minimizing the true risk with the $0-1$ loss over the set of classifiers $f:\mathcal X \to \{-1,1\}$ is problematic.
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable. In practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution P ( x , y ) {\displaystyle P(x,y)} (and thus stop being agnostic learning algorithms to which the above result applies).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a joint data distribution $\mathcal D$ on $\mathcal X \times \{-1,1\}$ and $n$ independent and identically distributed observations from $\mathcal D$, the goal of the classification task is to learn a classifier $f:\mathcal X \to \{-1,1\}$ with minimum true risk $\mathcal L(f) = \mathbb E_{(X,Y)\sim \mathcal D} [\boldsymbol{\mathbb{1}}_{f(X) \neq Y}]$ where $\boldsymbol{\mathbb{1}}_{C} = \begin{cases} 1 & \text{ if } C \text{ is true} \\ 0 & \text{otherwise} \end{cases}$. We denote by $\mathcal D_{X}$ the marginal law (probability distribution) of $X$, and $\mathcal D_{Y|X}$ the conditional law of $Y$ given $X$. Give the two reasons seen in the course which explain that minimizing the true risk with the $0-1$ loss over the set of classifiers $f:\mathcal X \to \{-1,1\}$ is problematic.
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable. In practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution P ( x , y ) {\displaystyle P(x,y)} (and thus stop being agnostic learning algorithms to which the above result applies).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
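One aspect of the difficulty the excerpt describes can be made concrete: the empirical $0-1$ risk is piecewise constant in the classifier's parameters, so it gives no gradient signal and its minimization is combinatorial. A minimal sketch on an invented 1-D threshold classifier:

```python
# invented 1-D data; classify as +1 iff x >= t
data = [(-2.0, -1), (-1.0, -1), (-0.5, 1), (1.0, 1), (2.0, 1)]

def empirical_01_risk(t):
    return sum(1 for x, y in data if (1 if x >= t else -1) != y) / len(data)

# sweep the threshold over a fine grid: the risk takes only a handful of
# distinct values, jumping at data points and staying flat everywhere else
risks = [empirical_01_risk(-3.0 + k * 0.01) for k in range(601)]
distinct = sorted(set(risks))
```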
Given a matrix $\Xm$ of shape $D\times N$ with a singular value decomposition (SVD) $X=USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$. Which one of the following statements is \textbf{false}?
In particular, the decomposition can be interpreted as the sum of outer products of each left ($\mathbf{u}_k$) and right ($\mathbf{v}_k$) singular vectors, scaled by the corresponding nonzero singular value $\sigma_k$. This result implies that $\mathbf{A}$ can be expressed as a sum of rank-1 matrices with spectral norms $\sigma_k$ in decreasing order. This explains why, in general, the last terms contribute less, which motivates the use of the truncated SVD as an approximation. The first term is the least squares fit of a matrix to an outer product of vectors.
Given a matrix $\Xm$ of shape $D \times N$ with a singular value decomposition (SVD), $X=USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$. Which one of the following statements is \textbf{false}?
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $m \times n$ matrix. It is related to the polar decomposition. Specifically, the singular value decomposition of an $m \times n$ complex matrix $\mathbf{M}$ is a factorization of the form $\mathbf{M} = \mathbf{U \Sigma V^{*}}$, where $\mathbf{U}$ is an $m \times m$ complex unitary matrix, $\mathbf{\Sigma}$ is an $m \times n$ rectangular diagonal matrix with non-negative real numbers on the diagonal, $\mathbf{V}$ is an $n \times n$ complex unitary matrix, and $\mathbf{V^{*}}$ is the conjugate transpose of $\mathbf{V}$. Such a decomposition always exists for any complex matrix.
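The stated properties are easy to verify with numpy on a random real matrix (the real case, where unitary reduces to orthogonal); the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 5))          # a real 3 x 5 matrix

U, s, Vh = np.linalg.svd(M)              # full matrices: U is 3x3, Vh is 5x5
Sigma = np.zeros((3, 5))                 # rectangular "diagonal" matrix
Sigma[:3, :3] = np.diag(s)

assert np.allclose(U @ U.T, np.eye(3))   # U orthogonal (unitary, real case)
assert np.allclose(Vh @ Vh.T, np.eye(5)) # V orthogonal
assert np.all(s >= 0)                    # non-negative singular values
assert np.allclose(U @ Sigma @ Vh, M)    # M = U Sigma V^*
```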
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \R$. For $\lambda \geq 0$, we consider the following loss: $L_{\lambda}(\ww) = \frac{1}{N} \sum_{i = 1}^N (y_i - \xx_i^\top \ww)^2 + \lambda \Vert \ww \Vert_2$, and let $C_\lambda = \min_{\ww \in \R^d} L_{\lambda}(\ww)$ denote the optimal loss value. Which of the following statements is \textbf{true}:
Fix a loss function $\mathcal{L}\colon Y \times Y \to \mathbb{R}_{\geq 0}$, for example, the square loss $\mathcal{L}(y,y') = (y-y')^2$, where $h(x) = y'$. For a given distribution $\rho$ on $X \times Y$, the expected risk of a hypothesis (a function) $h \in \mathcal{H}$ is $\mathcal{E}(h) := \mathbb{E}_{\rho}[\mathcal{L}(h(x),y)] = \int_{X \times Y} \mathcal{L}(h(x),y)\,d\rho(x,y)$. In our setting, we have $h = \mathcal{A}(S_n)$, where $\mathcal{A}$ is a learning algorithm and $S_n = ((x_1,y_1), \ldots, (x_n,y_n)) \sim \rho^n$ is a sequence of vectors which are all drawn independently from $\rho$. Define the optimal risk. Set $h_n = \mathcal{A}(S_n)$, for each $n$.
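A Monte Carlo sketch with a made-up distribution $\rho$: under the square loss, the expected risk of a hypothesis can be approximated by the empirical average over i.i.d. draws.

```python
import numpy as np

# rho is a made-up distribution: x ~ U(-1, 1), y = 2x + N(0, 0.5^2).
rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(-1, 1, size=n)
y = 2 * x + 0.5 * rng.standard_normal(n)

h = lambda t: 2 * t                        # hypothesis: the true regression fn
empirical_risk = np.mean((h(x) - y) ** 2)  # empirical average of the loss

# For this h the expected risk is exactly the noise variance, 0.25, and the
# Monte Carlo average converges to it as n grows.
assert abs(empirical_risk - 0.25) < 0.01
```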
In the setting of EM, where $x_{n}$ is the data and $z_{n}$ is the latent variable, what quantity is called the posterior?
The typical models to which EM is applied use $\mathbf{Z}$ as a latent variable indicating membership in one of a set of groups. The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations. The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
In the setting of EM, where $x_{n}$ is the data and $z_{n}$ is the latent variable, what quantity is called the posterior?
However, there are a number of differences. Most important is what is being computed. EM computes point estimates of the posterior distribution for those random variables that can be categorized as "parameters", but only estimates of the actual posterior distributions of the latent variables (at least in "soft EM", and often only when the latent variables are discrete).
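A minimal E-step sketch for a two-component 1-D Gaussian mixture (with illustrative parameters): the posterior over the latent assignment $z_n$ given $x_n$ is computed by Bayes' rule, giving the usual responsibilities.

```python
import numpy as np

# E-step for a two-component 1-D Gaussian mixture: the posterior over the
# latent assignment z_n is computed by Bayes' rule (responsibilities).
def e_step(x, pi, mu, sigma):
    # unnormalised p(x | z=k) * p(z=k) for each component k
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    return dens / dens.sum(axis=1, keepdims=True)   # p(z=k | x)

x = np.array([-2.0, 0.0, 2.0])
resp = e_step(x, pi=np.array([0.5, 0.5]),
              mu=np.array([-2.0, 2.0]), sigma=np.array([1.0, 1.0]))

assert np.allclose(resp.sum(axis=1), 1.0)     # each row is a proper posterior
assert resp[0, 0] > 0.9 and resp[2, 1] > 0.9  # points near a mean belong to it
assert np.allclose(resp[1], [0.5, 0.5])       # the midpoint is ambiguous
```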
Which statement about \textit{black-box} adversarial attacks is true:
Black-box attacks in adversarial machine learning assume that the adversary can only get outputs for provided inputs and has no knowledge of the model structure or parameters. In this case, the adversarial example is generated either using a model created from scratch, or without any model at all (excluding the ability to query the original model). In either case, the objective of these attacks is to create adversarial examples that are able to transfer to the black-box model in question.
You are in $D$-dimensional space and use a KNN classifier with $k=1$. You are given $N$ samples and by running experiments you see that for most random inputs $\mathbf{x}$ you find a nearest sample at distance roughly $\delta$. You would like to decrease this distance to $\delta / 2$. How many samples will you likely need? Give an educated guess.
There are many results on the error rate of the $k$ nearest neighbour classifiers. The $k$-nearest neighbour classifier is strongly (that is, for any joint distribution on $(X,Y)$) consistent provided $k := k_n$ diverges and $k_n/n$ converges to zero as $n \to \infty$. Let $C_n^{knn}$ denote the $k$ nearest neighbour classifier based on a training set of size $n$. Under certain regularity conditions, the excess risk yields the following asymptotic expansion for some constants $B_1$ and $B_2$. The choice $k^* = \lfloor B n^{\frac{4}{d+4}} \rfloor$ offers a trade-off between the two terms in the above display, for which the $k^*$-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate $\mathcal{O}(n^{-\frac{4}{d+4}})$.
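The dimension dependence in such rates mirrors a simple empirical fact about nearest-neighbour distances, sketched below with made-up sizes: for $N$ uniform samples in $[0,1]^D$ the typical nearest-sample distance shrinks roughly like $N^{-1/D}$, so halving it needs about $2^D$ times more samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_dist(N, D, queries=100):
    # average distance from random query points to their nearest sample
    pts = rng.uniform(size=(N, D))
    dists = [np.linalg.norm(pts - rng.uniform(size=D), axis=1).min()
             for _ in range(queries)]
    return float(np.mean(dists))

D = 5
d1 = mean_nn_dist(1000, D)
d2 = mean_nn_dist(1000 * 2**D, D)    # 2**D = 32 times more samples
assert d2 < d1                        # more samples shrink the distance...
assert abs(d2 / d1 - 0.5) < 0.15      # ...by roughly a factor of two
```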
You are in $D$-dimensional space and use a KNN classifier with $k=1$. You are given $N$ samples and by running experiments you see that for most random inputs $\mathbf{x}$ you find a nearest sample at distance roughly $\delta$. You would like to decrease this distance to $\delta / 2$. How many samples will you likely need? Give an educated guess.
A distance matrix is utilized in the k-NN algorithm, which is one of the slowest but simplest and most used instance-based machine learning algorithms, usable in both classification and regression tasks. It is one of the slowest machine learning algorithms since each test sample's predicted result requires a fully computed distance matrix between the test sample and each training sample in the training set. Once the distance matrix is computed, the algorithm selects the K training samples that are the closest to the test sample and predicts the test sample's result based on the selected set's majority (classification) or average (regression) value. Prediction time complexity is $O(k \cdot n \cdot d)$, to compute the distance between each test sample and every training sample to construct the distance matrix, where: k = number of nearest neighbors selected, n = size of the training set, and d = number of dimensions being used for the data. This classification-focused model predicts the label of the target based on the distance matrix between the target and each of the training samples, to determine the K samples that are the closest/nearest to the target.
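A minimal sketch of the distance-matrix k-NN classifier described above (toy 1-D data, not an optimized implementation):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # full test-by-train distance matrix, as in the description above
    D = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(D, axis=1)[:, :k]           # indices of k closest
    votes = y_train[nearest]                          # their labels
    # majority vote per test sample (classification variant)
    return np.array([np.bincount(v).argmax() for v in votes])

X_train = np.array([[0.0], [0.1], [1.0], [1.1]])
y_train = np.array([0, 0, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[0.05], [1.05]]), k=3)
assert pred.tolist() == [0, 1]
```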
Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
These concepts generalize further to convex functions $f: U \to \mathbb{R}$ on a convex set in a locally convex space $V$. A functional $v^*$ in the dual space $V^*$ is called the subgradient at $x_0$ in $U$ if for all $x \in U$, $f(x) - f(x_0) \geq v^*(x - x_0)$.
Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
Let $f: \mathbb{R}^n \to \mathbb{R}$ be a convex function with domain $\mathbb{R}^n$. A classical subgradient method iterates $x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}$, where $g^{(k)}$ denotes any subgradient of $f$ at $x^{(k)}$, and $x^{(k)}$ is the $k$th iterate of $x$. If $f$ is differentiable, then its only subgradient is the gradient vector $\nabla f$ itself. It may happen that $-g^{(k)}$ is not a descent direction for $f$ at $x^{(k)}$. We therefore maintain a list $f_{\rm best}$ that keeps track of the lowest objective function value found so far, i.e. $f_{\rm best}^{(k)} = \min\{f_{\rm best}^{(k-1)}, f(x^{(k)})\}$.
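A minimal sketch of this iteration for the non-differentiable convex function $f(x) = |x|$, with diminishing step sizes and the running best value $f_{\rm best}$ tracked as described:

```python
import numpy as np

def subgrad_abs(x):
    # any value in [-1, 1] is a valid subgradient of |x| at 0; sign(0) = 0
    return float(np.sign(x))

x = 5.0
f_best = abs(x)
for k in range(1, 200):
    x = x - (1.0 / k) * subgrad_abs(x)   # diminishing step sizes alpha_k
    f_best = min(f_best, abs(x))         # running best objective value

assert f_best < 0.05                      # approaches min |x| = 0
```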
K-means can be equivalently written as the following Matrix Factorization $$ \begin{aligned} & \min _{\mathbf{z}, \boldsymbol{\mu}} \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\left\|\mathbf{X}-\mathbf{M} \mathbf{Z}^{\top}\right\|_{\text {Frob }}^{2} \\ & \text { s.t. } \boldsymbol{\mu}_{k} \in \mathbb{R}^{D}, \\ & z_{n k} \in \mathbb{R}, \sum_{k=1}^{K} z_{n k}=1 . \end{aligned} $$
Given a set of observations $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$, where each observation is a $d$-dimensional real vector, k-means clustering aims to partition the $n$ observations into $k$ ($\leq n$) sets $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find $\arg\min_{S} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \|\mathbf{x} - \boldsymbol{\mu}_i\|^2$, where $\boldsymbol{\mu}_i$ is the mean (also called centroid) of points in $S_i$, $|S_i|$ is the size of $S_i$, and $\|\cdot\|$ is the usual $L^2$ norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster; the equivalence can be deduced from the identity $|S_i| \sum_{\mathbf{x} \in S_i} \|\mathbf{x} - \boldsymbol{\mu}_i\|^2 = \frac{1}{2} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \|\mathbf{x} - \mathbf{y}\|^2$. Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS). This deterministic relationship is also related to the law of total variance in probability theory.
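The identity above is easy to verify numerically (random cluster, numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((50, 3))        # one cluster of 50 points in R^3
mu = S.mean(axis=0)                     # its centroid

lhs = len(S) * np.sum(np.linalg.norm(S - mu, axis=1) ** 2)
diffs = S[:, None, :] - S[None, :, :]   # all pairwise differences x - y
rhs = 0.5 * np.sum(np.linalg.norm(diffs, axis=2) ** 2)

assert np.isclose(lhs, rhs)             # |S| * WCSS == half the pairwise sum
```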
K-means can be equivalently written as the following Matrix Factorization $$ \begin{aligned} & \min _{\mathbf{z}, \boldsymbol{\mu}} \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\left\|\mathbf{X}-\mathbf{M} \mathbf{Z}^{\top}\right\|_{\text {Frob }}^{2} \\ & \text { s.t. } \boldsymbol{\mu}_{k} \in \mathbb{R}^{D}, \\ & z_{n k} \in \mathbb{R}, \sum_{k=1}^{K} z_{n k}=1 . \end{aligned} $$
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances.
Recall that we say that a kernel $K: \R \times \R \rightarrow \R$ is valid if there exists $k \in \mathbb{N}$ and $\Phi: \R \rightarrow \R^k$ such that for all $(x, x') \in \R \times \R$, $K(x, x') = \Phi(x)^\top \Phi(x')$. The kernel $K(x, x') = \cos(x + x')$ is a valid kernel.
He therefore defined a continuous real symmetric kernel $K(s,t)$ to be of positive type (i.e. positive-definite) if $J(x) \geq 0$ for all real continuous functions $x$ on $[a,b]$, and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion holds absolutely and uniformly.
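The positive-type condition can be probed numerically: a valid kernel must yield a positive semi-definite Gram matrix on every finite point set. The sketch below (numpy, with points chosen for illustration) shows that $K(x,x') = \cos(x+x')$ fails this test, while $\cos(x-x')$ passes.

```python
import numpy as np

X = np.array([0.0, np.pi / 2])          # two points suffice here

G = np.cos(X[:, None] + X[None, :])     # Gram matrix of cos(x + x')
eigs = np.linalg.eigvalsh(G)            # here G = [[1, 0], [0, -1]]
assert eigs.min() < 0                   # not PSD, so not of positive type

# By contrast cos(x - x') = cos(x)cos(x') + sin(x)sin(x') has the explicit
# feature map Phi(x) = (cos x, sin x), so its Gram matrices are always PSD.
G2 = np.cos(X[:, None] - X[None, :])
assert np.linalg.eigvalsh(G2).min() >= -1e-12
```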
Recall that we say that a kernel $K: \R \times \R \rightarrow \R$ is valid if there exists $k \in \mathbb{N}$ and $\Phi: \R \rightarrow \R^k$ such that for all $(x, x') \in \R \times \R$, $K(x, x') = \Phi(x)^\top \Phi(x')$. The kernel $K(x, x') = \cos(x + x')$ is a valid kernel.
The kernel is useful in classifying properties of prefilters and other families of sets. If $\mathcal{B} \subseteq \wp(X)$ then for any point $x$, $x \not\in \ker \mathcal{B}$ if and only if $X \setminus \{x\} \in \mathcal{B}^{\uparrow X}$. Properties of kernels: if $\mathcal{B} \subseteq \wp(X)$ then $\ker(\mathcal{B}^{\uparrow X}) = \ker \mathcal{B}$, and this set is also equal to the kernel of the π–system that is generated by $\mathcal{B}$.
(Adversarial perturbations for linear models) Suppose you are given a linear classifier with the logistic loss. Is it true that generating the optimal adversarial perturbations by maximizing the loss under the $\ell_{2}$-norm constraint on the perturbation is an NP-hard optimization problem?
In an effort to analyze existing adversarial attacks and defenses, researchers at the University of California, Berkeley, Nicholas Carlini and David Wagner, in 2016 proposed a faster and more robust method to generate adversarial examples. The attack proposed by Carlini and Wagner begins with trying to solve a difficult non-linear optimization equation: here the objective is to minimize the noise ($\delta$) added to the original input $x$, such that the machine learning algorithm ($C$) predicts the original input with delta (or $x + \delta$) as some other class $t$. However, instead of solving the above equation directly, Carlini and Wagner propose using a new function $f$ such that: this condenses the first equation to the problem below, and even more to the equation below. Carlini and Wagner then propose the use of the below function in place of $f$, using $Z$, a function that determines class probabilities for a given input $x$. When substituted in, this equation can be thought of as finding a target class that is more confident than the next likeliest class by some constant amount. When solved using gradient descent, this equation is able to produce stronger adversarial examples when compared to the fast gradient sign method, and is also able to bypass defensive distillation, a defense that was once proposed to be effective against adversarial examples.
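For contrast with such general-purpose attacks, the *linear* case needs no iterative solver: a sketch (not the Carlini-Wagner attack, and with illustrative weights) in which the logistic loss $\log(1 + e^{-y\,\mathbf{w}^\top\mathbf{x}})$ is monotone in the margin, so the $\ell_2$-constrained loss maximizer has the closed form $\delta^\star = -\varepsilon\, y\, \mathbf{w}/\|\mathbf{w}\|$.

```python
import numpy as np

def logistic_loss(w, x, y):
    return np.log1p(np.exp(-y * (w @ x)))

w = np.array([4.0, 0.0, -3.0])       # illustrative linear model
x = np.array([-1.0, 3.0, 2.0])
y, eps = -1, 1.0

# closed-form worst-case perturbation on the l2 ball of radius eps
delta_star = -eps * y * w / np.linalg.norm(w)

# No random feasible perturbation does better than the closed form.
rng = np.random.default_rng(0)
best_random = max(
    logistic_loss(w, x + eps * d / np.linalg.norm(d), y)
    for d in rng.standard_normal((1000, 3))
)
assert logistic_loss(w, x + delta_star, y) >= best_random
assert np.isclose(np.linalg.norm(delta_star), eps)
```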
Consider a binary classification problem with a linear classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & \mathbf{w}^{\top} \mathbf{x} \geq 0 \\ -1, & \mathbf{w}^{\top} \mathbf{x}<0\end{cases} $$ where $\mathbf{x} \in \mathbb{R}^{3}$. Suppose that the weights of the linear model are equal to $\mathbf{w}=(4,0,-3)$. For the next two questions, we would like to find a minimum-norm adversarial example. Specifically, we are interested in solving the following optimization problem, for a given $\mathbf{x}$ : $$ \min _{\boldsymbol{\delta} \in \mathbb{R}^{3}}\|\boldsymbol{\delta}\|_{2} \quad \text { subject to } \quad \mathbf{w}^{\top}(\mathbf{x}+\boldsymbol{\delta})=0 $$ This leads to the point $\mathbf{x}+\boldsymbol{\delta}$ that lies exactly at the decision boundary and the perturbation $\boldsymbol{\delta}$ is the smallest in terms of the $\ell_{2}$-norm. What is the optimum $\delta^{\star}$ that minimizes the objective in Eq. (OP) for the point $\mathbf{x}=$ $(-1,3,2) ?$
Utilizing Bayes' theorem, it can be shown that the optimal $f_{0/1}^*$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of $$f_{0/1}^*(\vec{x}) = \begin{cases} \;\;\;1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ \;\;\;0 & \text{if } p(1 \mid \vec{x}) = p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}) \end{cases}$$
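A toy sketch of this decision rule with hypothetical posterior values:

```python
import numpy as np

def bayes_classifier(p_pos):
    # p_pos = p(1 | x); the sign of the posterior difference gives {-1, 0, 1}
    return np.sign(p_pos - (1 - p_pos))

p = np.array([0.9, 0.5, 0.2])           # hypothetical posteriors p(1 | x)
assert bayes_classifier(p).tolist() == [1.0, 0.0, -1.0]
```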
Consider a binary classification problem with a linear classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & \mathbf{w}^{\top} \mathbf{x} \geq 0 \\ -1, & \mathbf{w}^{\top} \mathbf{x}<0\end{cases} $$ where $\mathbf{x} \in \mathbb{R}^{3}$. Suppose that the weights of the linear model are equal to $\mathbf{w}=(4,0,-3)$. For the next two questions, we would like to find a minimum-norm adversarial example. Specifically, we are interested in solving the following optimization problem, for a given $\mathbf{x}$ : $$ \min _{\boldsymbol{\delta} \in \mathbb{R}^{3}}\|\boldsymbol{\delta}\|_{2} \quad \text { subject to } \quad \mathbf{w}^{\top}(\mathbf{x}+\boldsymbol{\delta})=0 $$ This leads to the point $\mathbf{x}+\boldsymbol{\delta}$ that lies exactly at the decision boundary and the perturbation $\boldsymbol{\delta}$ is the smallest in terms of the $\ell_{2}$-norm. What is the optimum $\delta^{\star}$ that minimizes the objective in Eq. (OP) for the point $\mathbf{x}=$ $(-1,3,2) ?$
Apply the feature to each image in the training set, then find the optimal threshold and polarity $\theta_j, s_j$ that minimize the weighted classification error. That is, $\theta_j, s_j = \arg\min_{\theta, s} \sum_{i=1}^N w_j^i \varepsilon_j^i$, where $\varepsilon_j^i = \begin{cases} 0 & \text{if } y^i = h_j(\mathbf{x}^i, \theta_j, s_j) \\ 1 & \text{otherwise} \end{cases}$. Assign a weight $\alpha_j$ to $h_j$ that is inversely proportional to the error rate. In this way the best classifiers are considered more. The weights for the next iteration, i.e. $w_{j+1}^i$, are reduced for the images $i$ that were correctly classified. Set the final classifier to $h(\mathbf{x}) = \operatorname{sgn}\left(\sum_{j=1}^M \alpha_j h_j(\mathbf{x})\right)$.
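A minimal numpy sketch of the weighted stump search in this step, with toy feature values and uniform weights (not from the text):

```python
import numpy as np

def best_stump(feat, y, w):
    # scan thresholds theta and polarities s for the lowest weighted error
    best = (np.inf, None, None)
    for theta in np.unique(feat):
        for s in (+1, -1):
            pred = np.where(s * feat >= s * theta, 1, -1)
            err = w[pred != y].sum()          # weighted 0-1 error
            if err < best[0]:
                best = (err, theta, s)
    return best

feat = np.array([0.1, 0.4, 0.35, 0.8])   # one feature value per image
y = np.array([-1, -1, -1, 1])            # toy labels
w = np.full(4, 0.25)                     # uniform sample weights

err, theta, s = best_stump(feat, y, w)
assert err == 0.0                         # this feature separates the labels
```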
(Linear Regression) You are given samples $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ where $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and $y_{n}$ are scalar values. You are solving linear regression using normal equations. You will always find the optimal weights with 0 training error in case of $N \leq D$.
It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function $U = \sum_i w_i (Y_i - y_i)^2$ being minimized in the least-squares process has unit weights, $w_i = 1$. When the weights are not all the same, the normal equations become $\mathbf{a} = \left(\mathbf{J}^T \mathbf{W} \mathbf{J}\right)^{-1} \mathbf{J}^T \mathbf{W} \mathbf{y}$, $W_{i,i} \neq 1$. If the same set of diagonal weights is used for all data subsets, $W = \operatorname{diag}(w_1, w_2, \ldots)$.
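The weighted normal equations can be sketched numerically (illustrative design matrix and weights); with exact, noise-free data every positive weighting recovers the same coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.column_stack([np.ones(20), rng.uniform(0, 1, 20)])  # design matrix
a_true = np.array([1.0, 2.0])
y = J @ a_true                                              # noise-free data

W = np.diag(rng.uniform(0.5, 2.0, 20))                      # diagonal weights
# a = (J^T W J)^{-1} J^T W y, solved without forming the inverse explicitly
a = np.linalg.solve(J.T @ W @ J, J.T @ W @ y)

assert np.allclose(a, a_true)
```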
(Linear Regression) You are given samples $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ where $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and $y_{n}$ are scalar values. You are solving linear regression using normal equations. You will always find the optimal weights with 0 training error in case of $N \leq D$.
An estimating equation motivated by multivariate linear regression is: where $r_{XY}(s,t) = \operatorname{cov}(X(s),Y(t))$, $R_{XX}: L^2(\mathcal{S} \times \mathcal{S}) \to L^2(\mathcal{S} \times \mathcal{T})$ is defined as $(R_{XX}\beta)(s,t) = \int_{\mathcal{S}} r_{XX}(s,w)\,\beta(w,t)\,dw$ with $r_{XX}(s,w) = \operatorname{cov}(X(s),X(w))$ for $s,w \in \mathcal{S}$. Regularization is needed and can be done through truncation, $L^2$ penalization or $L^1$ penalization.
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. Is the problem jointly convex in $\mathbf{v}$ and $\mathbf{w}$ ? Look at a simple case, say for only 1 user and 1 movie and assume that $D=1$, i.e., consider $f(v, w)=\frac{1}{2}(v w+c-r)^{2}$. [Hint: A $2 \times 2$ matrix is positive definite if and only if the two diagonal terms are positive and the determinant is positive.]
Suppose $S_i \sim W_p(n_i, \Sigma)$, $i = 1, \ldots, r+1$ are independently distributed Wishart $p \times p$ positive definite matrices. Then, defining $U_i = S^{-1/2} S_i (S^{-1/2})^T$ (where $S = \sum_{i=1}^{r+1} S_i$ is the sum of the matrices and $S^{1/2}(S^{-1/2})^T$ is any reasonable factorization of $S$), we have $(U_1, \ldots, U_r) \sim D_p(n_1/2, \ldots, n_{r+1}/2)$.
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. Is the problem jointly convex in $\mathbf{v}$ and $\mathbf{w}$ ? Look at a simple case, say for only 1 user and 1 movie and assume that $D=1$, i.e., consider $f(v, w)=\frac{1}{2}(v w+c-r)^{2}$. [Hint: A $2 \times 2$ matrix is positive definite if and only if the two diagonal terms are positive and the determinant is positive.]
The solution to the problem is given by first computing a singular value decomposition of $\mathbf{E}_{\rm est}$: $\mathbf{E}_{\rm est} = \mathbf{U}\,\mathbf{S}\,\mathbf{V}^T$, where $\mathbf{U}, \mathbf{V}$ are orthogonal matrices and $\mathbf{S}$ is a diagonal matrix which contains the singular values of $\mathbf{E}_{\rm est}$. In the ideal case, one of the diagonal elements of $\mathbf{S}$ should be zero, or at least small compared to the other two, which should be equal. In any case, set $$\mathbf{S}' = \begin{pmatrix} s_1 & 0 & 0 \\ 0 & s_2 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ where $s_1, s_2$ are the largest and second largest singular values in $\mathbf{S}$ respectively. Finally, $\mathbf{E}'$ is given by $\mathbf{E}' = \mathbf{U}\,\mathbf{S}'\,\mathbf{V}^T$. The matrix $\mathbf{E}'$ is the resulting estimate of the essential matrix provided by the algorithm.
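A numpy sketch of this projection step, with a random matrix standing in for the estimated $\mathbf{E}_{\rm est}$:

```python
import numpy as np

rng = np.random.default_rng(0)
E_est = rng.standard_normal((3, 3))      # stand-in for the estimated matrix

U, s, Vt = np.linalg.svd(E_est)          # s holds s1 >= s2 >= s3
E_proj = U @ np.diag([s[0], s[1], 0.0]) @ Vt   # zero the smallest value

s_new = np.linalg.svd(E_proj, compute_uv=False)
assert np.isclose(s_new[2], 0.0)          # rank 2, as required
assert np.allclose(s_new[:2], s[:2])      # the two largest values are kept
```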
Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) . $$ : (4 points.) Determine whether the following statement is correct. $$ X_{1} \perp X_{3} \mid X_{2}, X_{5} $$ Show your reasoning.
Suppose $S_i \sim W_p(n_i, \Sigma)$, $i = 1, \ldots, r+1$ are independently distributed Wishart $p \times p$ positive definite matrices. Then, defining $U_i = S^{-1/2} S_i (S^{-1/2})^T$ (where $S = \sum_{i=1}^{r+1} S_i$ is the sum of the matrices and $S^{1/2}(S^{-1/2})^T$ is any reasonable factorization of $S$), we have $(U_1, \ldots, U_r) \sim D_p(n_1/2, \ldots, n_{r+1}/2)$.
Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) . $$ : (4 points.) Determine whether the following statement is correct. $$ X_{1} \perp X_{3} \mid X_{2}, X_{5} $$ Show your reasoning.
$p(\mathbf{y}, \theta \mid \mathbf{x}) = p(\mathbf{y} \mid \mathbf{x}, \theta)\, p(\theta) = p(\mathbf{y} \mid \mathbf{x})\, p(\theta \mid \mathbf{y}, \mathbf{x}) \simeq \tilde{q}(\theta) = Z q(\theta).$ The joint is equal to the product of the likelihood and the prior and, by Bayes' rule, equal to the product of the marginal likelihood $p(\mathbf{y} \mid \mathbf{x})$ and posterior $p(\theta \mid \mathbf{y}, \mathbf{x})$. Seen as a function of $\theta$, the joint is an un-normalised density.
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: For any function $g:\mathcal X \to \R$, and for a Bayes predictor $g^\star: \mathcal X \to \R$ (i.e., such that $\sign\circ g^\star$ is a Bayes classifier), show that \begin{align*} \mathcal L (g)-\mathcal L^\star = \mathbb E[\boldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}|2\eta(X)-1|]. \end{align*}
Utilizing Bayes' theorem, it can be shown that the optimal $f_{0/1}^*$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of $$f_{0/1}^*(\vec{x}) = \begin{cases} \;\;\,1 & \text{if } p(1\mid \vec{x}) > p(-1\mid \vec{x}) \\ \;\;\,0 & \text{if } p(1\mid \vec{x}) = p(-1\mid \vec{x}) \\ -1 & \text{if } p(1\mid \vec{x}) < p(-1\mid \vec{x}) \end{cases}$$
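The three-case rule can be sketched directly in code (a minimal illustration; the posteriors $p(\pm 1 \mid \vec{x})$ are assumed to be given):

```python
def bayes_rule(p_pos, p_neg):
    """Zero-one-loss Bayes decision rule: predict the class with the
    larger posterior probability; output 0 on ties, as in the formula."""
    if p_pos > p_neg:
        return 1
    if p_pos < p_neg:
        return -1
    return 0

# The decision depends only on the sign of p(1|x) - p(-1|x) = 2*eta(x) - 1.
```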
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal{X} \to \mathbb{R}$ satisfying for all $\mathbf{x}\in\mathcal{X}$: For any function $g:\mathcal{X} \to \mathbb{R}$, and for a Bayes predictor $g^\star: \mathcal{X} \to \mathbb{R}$ (i.e., such that $\mathrm{sign}\circ g^\star$ is a Bayes classifier), show that \begin{align*} \mathcal{L}(g)-\mathcal{L}^\star = \mathbb{E}[\boldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}\,|2\eta(X)-1|]. \end{align*}
that minimizes the expected loss. This is known as a generalized Bayes rule with respect to π ( θ ) {\displaystyle \pi (\theta )\,\!} .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In Text Representation learning, which of the following statements are correct?
The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words. A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks. GPTs pretrain on next word prediction using prior input words as context, whereas BERT masks random tokens in order to provide bidirectional context.Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data. Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph it is within, and is therefore intended to represent paragraph level context.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right) \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$.
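A numerical sanity check of this claim (not a proof — the standard proof uses the Schur product theorem, which states that the elementwise product of two positive semi-definite matrices is positive semi-definite; the data and kernel choices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))               # 8 arbitrary points in R^3

# Two valid kernels: a linear kernel and an RBF kernel.
K1 = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K2 = np.exp(-0.5 * sq)

# Gram matrix of the product kernel = elementwise (Hadamard) product.
K = K1 * K2

# PSD check: all eigenvalues non-negative up to numerical tolerance.
assert np.linalg.eigvalsh(K).min() > -1e-10
```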
Let $\kappa^1$ be an s-finite kernel from $S$ to $T$ and $\kappa^2$ an s-finite kernel from $S \times T$ to $U$. Then the composition $\kappa^1 \cdot \kappa^2$ of the two kernels is defined as $$\kappa^1 \cdot \kappa^2 \colon S \times \mathcal{U} \to [0, \infty], \qquad (s, B) \mapsto \int_T \kappa^1(s, \mathrm{d}t) \int_U \kappa^2((s,t), \mathrm{d}u)\, \mathbf{1}_B(u)$$ for all $s \in S$ and all $B \in \mathcal{U}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right) \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$.
For $N$ even, we define the Dirichlet kernel as $$D(x, N) = \frac{1}{N} + \frac{1}{N}\cos\tfrac{1}{2}Nx + \frac{2}{N}\sum_{k=1}^{N/2-1}\cos(kx) = \frac{\sin\tfrac{1}{2}Nx}{N\tan\tfrac{1}{2}x}.$$ Again, it can easily be seen that $D(x,N)$ is a linear combination of the right powers of $e^{ix}$, does not contain the term $\sin\tfrac{1}{2}Nx$ and satisfies $$D(x_m, N) = \begin{cases} 0 & \text{for } m \neq 0 \\ 1 & \text{for } m = 0. \end{cases}$$
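The two forms of the even-$N$ kernel can be checked against each other numerically (a small sketch; note that for even $N$ the finite cosine sum runs up to $N/2 - 1$):

```python
import math

def dirichlet_even(x, N):
    """Closed form sin(Nx/2) / (N tan(x/2)), valid for even N and
    x not a multiple of 2*pi."""
    return math.sin(0.5 * N * x) / (N * math.tan(0.5 * x))

def dirichlet_sum(x, N):
    """Cosine-sum form: 1/N + cos(Nx/2)/N + (2/N) sum_{k=1}^{N/2-1} cos(kx)."""
    s = 1.0 / N + math.cos(0.5 * N * x) / N
    s += (2.0 / N) * sum(math.cos(k * x) for k in range(1, N // 2))
    return s

# The two expressions agree to machine precision.
assert abs(dirichlet_even(0.7, 6) - dirichlet_sum(0.7, 6)) < 1e-12
```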
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Robustness) The $l_{1}$ loss is less sensitive to outliers than $l_{2}$.
Also whereas the distribution of the trimmed mean appears to be close to normal, the distribution of the raw mean is quite skewed to the left. So, in this sample of 66 observations, only 2 outliers cause the central limit theorem to be inapplicable. Robust statistical methods, of which the trimmed mean is a simple example, seek to outperform classical statistical methods in the presence of outliers, or, more generally, when underlying parametric assumptions are not quite correct.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
(Robustness) The $l_{1}$ loss is less sensitive to outliers than $l_{2}$.
Another approach is using negentropy instead of kurtosis. Using negentropy is a more robust method than kurtosis, as kurtosis is very sensitive to outliers.
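The robustness claim can be illustrated with the classic mean-versus-median contrast: the $l_2$ loss is minimized by the mean, the $l_1$ loss by the median, and only the former is dragged far by a single outlier (toy data chosen for illustration):

```python
import numpy as np

data = np.array([1.0, 2.0, 2.5, 3.0, 100.0])  # one gross outlier

mean = data.mean()        # l2 minimizer: dragged far toward the outlier
median = np.median(data)  # l1 minimizer: barely affected

assert median == 2.5
assert mean > 20          # the single outlier dominates the l2 minimizer
```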
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider optimizing a matrix factorization $\boldsymbol{W} \boldsymbol{Z}^{\top}$ in the matrix completion setting, for $\boldsymbol{W} \in \mathbb{R}^{D \times K}$ and $\boldsymbol{Z} \in \mathbb{R}^{N \times K}$. We write $\Omega$ for the set of observed matrix entries. Which of the following statements are correct?
In the problem of matrix completion, the matrix $X_i^t$ takes the form $X_i^t = e_t \otimes e_i'$, where $(e_t)_t$ and $(e_i')_i$ are the canonical bases in $\mathbb{R}^T$ and $\mathbb{R}^D$. In this case the role of the Frobenius inner product is to select individual elements $w_i^t$ from the matrix $W$. Thus, the output $y$ is a sampling of entries from the matrix $W$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider optimizing a matrix factorization $\boldsymbol{W} \boldsymbol{Z}^{\top}$ in the matrix completion setting, for $\boldsymbol{W} \in \mathbb{R}^{D \times K}$ and $\boldsymbol{Z} \in \mathbb{R}^{N \times K}$. We write $\Omega$ for the set of observed matrix entries. Which of the following statements are correct?
Keshavan, Montanari and Oh consider a variant of matrix completion where the rank of the $m$ by $n$ matrix $M$, which is to be recovered, is known to be $r$. They assume Bernoulli sampling of entries, constant aspect ratio $\frac{m}{n}$, bounded magnitude of entries of $M$ (let the upper bound be $M_{\text{max}}$), and constant condition number $\frac{\sigma_1}{\sigma_r}$ (where $\sigma_1$ and $\sigma_r$ are the largest and smallest singular values of $M$ respectively). Further, they assume the two incoherence conditions are satisfied with $\mu_0$ and $\mu_1 \frac{\sigma_1}{\sigma_r}$, where $\mu_0$ and $\mu_1$ are constants. Let $M^E$ be a matrix that matches $M$ on the set $E$ of observed entries and is 0 elsewhere.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means that this point is classified correctly by $f$. Assume further that we have computed the gradient of $g$ at $\mathbf{x}$ to be $\nabla_{\mathbf{x}} g(\mathbf{x})=(+1,-2,+3,-4,+5,-6)$. You are allowed to make one step in order to (hopefully) find an adversarial example. In the following four questions, assume $\epsilon=1$. Which offset $\delta$ with $\|\delta\|_{\infty} \leq 1$ yields the smallest value for $g(\mathbf{x}+\delta)$, assuming that $g$ is (locally) linear?
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x}, \mathbf{p})$ - depending on the function $f$, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$ - so that all learning rates $\alpha \leq \beta(\mathbf{x}, \mathbf{p})$ satisfy Armijo's condition. When $\mathbf{p} = -\nabla f(\mathbf{x})$, we can choose $\beta(\mathbf{x}, \mathbf{p})$ in the order of $1/L(\mathbf{x})$, where $L(\mathbf{x})$ is a local Lipschitz constant for the gradient $\nabla f$ near the point $\mathbf{x}$ (see Lipschitz continuity). If the function is $C^2$, then $L(\mathbf{x})$ is close to the Hessian of the function at the point $\mathbf{x}$. See Armijo (1966) for more detail.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means that this point is classified correctly by $f$. Assume further that we have computed the gradient of $g$ at $\mathbf{x}$ to be $\nabla_{\mathbf{x}} g(\mathbf{x})=(+1,-2,+3,-4,+5,-6)$. You are allowed to make one step in order to (hopefully) find an adversarial example. In the following four questions, assume $\epsilon=1$. Which offset $\delta$ with $\|\delta\|_{\infty} \leq 1$ yields the smallest value for $g(\mathbf{x}+\delta)$, assuming that $g$ is (locally) linear?
Often $f$ is a threshold function, which maps all values of $\vec{w} \cdot \vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g., $$f(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{w}^T \cdot \mathbf{x} > \theta, \\ 0 & \text{otherwise} \end{cases}$$ The superscript T indicates the transpose and $\theta$ is a scalar threshold. A more complex $f$ might give the probability that an item belongs to a certain class. For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
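The threshold rule is a single comparison in code (an illustrative sketch; the weights and threshold below are made up):

```python
import numpy as np

def linear_classify(w, x, theta):
    """Threshold rule: 1 if w . x > theta, else 0."""
    return 1 if np.dot(w, x) > theta else 0

w = np.array([1.0, -2.0])
assert linear_classify(w, np.array([3.0, 0.0]), 0.5) == 1   # 3.0  > 0.5
assert linear_classify(w, np.array([0.0, 1.0]), 0.5) == 0   # -2.0 <= 0.5
```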
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A neural network has been trained for multi-class classification using cross-entropy but has not necessarily achieved a global or local minimum on the training set. The output of the neural network is $\mathbf{z}=[z_1,\ldots,z_d]^\top$ obtained from the penultimate values $\mathbf{x}=[x_1,\ldots,x_d]^\top$ via softmax $z_k=\frac{\exp(x_k)}{\sum_{i}\exp(x_i)}$ that can be interpreted as a probability distribution over the $d$ possible classes. The cross-entropy is given by $H(\mathbf{y},\mathbf{z})=-\sum_{i=1}^{d} y_i \ln{z_i}$ where $\mathbf{y}$ is one-hot encoded, meaning the entry corresponding to the true class is 1 and the other entries are 0. We now modify the neural network, either by scaling $\mathbf{x} \mapsto \alpha \mathbf{x}$ where $\alpha \in \mathbb{R}_{>0}$ or through a shift $\mathbf{x} \mapsto \mathbf{x} + b\mathbf{1}$ where $b \in \mathbb{R}$. The modified $\mathbf{x}$ values are fed into the softmax to obtain the final output and the network / parameters are otherwise unchanged. How do these transformations affect the training accuracy of the network?
Multiclass cross-entropy compares the observed multiclass output with the predicted probabilities. For a random sample of multiclass outcomes of size $n$, the average multiclass cross-entropy $\overline{C}$ for hyperbolastic H1 or H2 can be estimated by $$\overline{C} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{k} y_{ij}\ln(\hat{p}_{ij}).$$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A neural network has been trained for multi-class classification using cross-entropy but has not necessarily achieved a global or local minimum on the training set. The output of the neural network is $\mathbf{z}=[z_1,\ldots,z_d]^\top$ obtained from the penultimate values $\mathbf{x}=[x_1,\ldots,x_d]^\top$ via softmax $z_k=\frac{\exp(x_k)}{\sum_{i}\exp(x_i)}$ that can be interpreted as a probability distribution over the $d$ possible classes. The cross-entropy is given by $H(\mathbf{y},\mathbf{z})=-\sum_{i=1}^{d} y_i \ln{z_i}$ where $\mathbf{y}$ is one-hot encoded, meaning the entry corresponding to the true class is 1 and the other entries are 0. We now modify the neural network, either by scaling $\mathbf{x} \mapsto \alpha \mathbf{x}$ where $\alpha \in \mathbb{R}_{>0}$ or through a shift $\mathbf{x} \mapsto \mathbf{x} + b\mathbf{1}$ where $b \in \mathbb{R}$. The modified $\mathbf{x}$ values are fed into the softmax to obtain the final output and the network / parameters are otherwise unchanged. How do these transformations affect the training accuracy of the network?
The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific index $i$ to a real value, the derivative needs to take the index into account. This expression is symmetrical in the indexes $i, k$ and thus may also be expressed as $$\frac{\partial}{\partial q_k}\sigma(\mathbf{q}, i) = \sigma(\mathbf{q}, k)\,(\delta_{ik} - \sigma(\mathbf{q}, i)).$$
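The derivative formula, together with the shift invariance of softmax, can be verified numerically; a small sketch with an arbitrary input vector:

```python
import numpy as np

def softmax(q):
    e = np.exp(q - q.max())   # subtract max for numerical stability
    return e / e.sum()

q = np.array([0.2, -1.0, 0.7])
s = softmax(q)

# Analytic Jacobian: d sigma_i / d q_k = sigma_i * (delta_ik - sigma_k).
J = np.diag(s) - np.outer(s, s)

# Central finite-difference check of the Jacobian.
eps = 1e-6
J_num = np.empty((3, 3))
for k in range(3):
    dq = np.zeros(3)
    dq[k] = eps
    J_num[:, k] = (softmax(q + dq) - softmax(q - dq)) / (2 * eps)
assert np.allclose(J, J_num, atol=1e-6)

# Softmax is invariant to adding a constant to every input.
assert np.allclose(softmax(q), softmax(q + 5.0))
```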
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume that we have a convolutional neural net with $L$ layers, $K$ nodes per layer, and where each node is connected to $k$ nodes in a previous layer. We ignore in the sequel the question of how we deal with the points at the boundary and assume that $k<<<K$ (much, much, much smaller). How does the complexity of the back-propagation algorithm scale in these parameters?
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned (British English: learnt) filters produce the strongest response to a spatially local input pattern.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume that we have a convolutional neural net with $L$ layers, $K$ nodes per layer, and where each node is connected to $k$ nodes in a previous layer. We ignore in the sequel the question of how we deal with the points at the boundary and assume that $k<<<K$ (much, much, much smaller). How does the complexity of the back-propagation algorithm scale in these parameters?
Recall the forward pass of the convolutional neural network model, used in both training and inference steps. Let $\mathbf{x} \in \mathbb{R}^{Mm_1}$ be its input and $\mathbf{W}_k \in \mathbb{R}^{N \times m_1}$ the filters at layer $k$, which are followed by the rectified linear unit $\text{ReLU}(x) = \max(0, x)$, for bias $\mathbf{b} \in \mathbb{R}^{Mm_1}$. Based on this elementary block, taking $K = 2$ as example, the CNN output can be expressed accordingly. Finally, comparing the CNN algorithm and the layered thresholding approach for the nonnegative constraint, it is straightforward to show that both are equivalent. As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, one gains a clearer understanding of the class of signals a CNN can recover, under what noise conditions an estimation can be accurately attained, and how the CNN's structure can be modified to improve its theoretical conditions. The reader is referred to (, section 5) for details regarding their connection.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Matrix Factorizations: If we compare SGD vs ALS for optimizing a matrix factorization of a $D \times N$ matrix, for large $D, N$
Special algorithms have been developed for factorizing large sparse matrices. These algorithms attempt to find sparse factors L and U. Ideally, the cost of computation is determined by the number of nonzero entries, rather than by the size of the matrix. These algorithms use the freedom to exchange rows and columns to minimize fill-in (entries that change from an initial zero to a non-zero value during the execution of an algorithm). General treatment of orderings that minimize fill-in can be addressed using graph theory.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Matrix Factorizations: If we compare SGD vs ALS for optimizing a matrix factorization of a $D \times N$ matrix, for large $D, N$
Reference paper "The Quadratic Sieve Factoring Algorithm" by Eric Landquist
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the logistic regression loss $L: \mathbb{R}^d \to \mathbb{R}$ for a binary classification task with data $(\mathbf{x}_i, y_i) \in \mathbb{R}^d \times \{0, 1\}$ for $i \in \{1, \ldots, N\}$: \begin{equation*} L(\mathbf{w}) = \frac{1}{N} \sum_{i = 1}^N \bigg(\log\left(1 + e^{\mathbf{x}_i^\top\mathbf{w}}\right) - y_i\mathbf{x}_i^\top\mathbf{w}\bigg). \end{equation*} Which of the following is the gradient of the loss $L$?
In machine learning applications where logistic regression is used for binary classification, the MLE minimises the cross-entropy loss function. Logistic regression is an important machine learning algorithm. The goal is to model the probability of a random variable $Y$ being 0 or 1 given experimental data. Consider a generalized linear model function parameterized by $\theta$, $$h_\theta(X) = \frac{1}{1 + e^{-\theta^T X}} = \Pr(Y = 1 \mid X; \theta).$$ Therefore, $\Pr(Y = 0 \mid X; \theta) = 1 - h_\theta(X)$ and, since $Y \in \{0, 1\}$, we see that $\Pr(y \mid X; \theta)$ is given by $$\Pr(y \mid X; \theta) = h_\theta(X)^y (1 - h_\theta(X))^{(1-y)}.$$
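For the loss in the question above, the analytic gradient is $\nabla L(\mathbf{w}) = \frac{1}{N}\sum_i \big(\sigma(\mathbf{x}_i^\top\mathbf{w}) - y_i\big)\,\mathbf{x}_i$ with $\sigma$ the sigmoid. A finite-difference sketch confirms it (random data used purely for illustration):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss(w, X, y):
    z = X @ w
    return np.mean(np.log1p(np.exp(z)) - y * z)

def grad(w, X, y):
    # dL/dw = (1/N) * X^T (sigmoid(Xw) - y)
    return X.T @ (sigmoid(X @ w) - y) / len(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = (rng.random(20) < 0.5).astype(float)
w = rng.normal(size=3)

# Central finite-difference check of the analytic gradient.
eps = 1e-6
g_num = np.array([(loss(w + eps * e, X, y) - loss(w - eps * e, X, y)) / (2 * eps)
                  for e in np.eye(3)])
assert np.allclose(grad(w, X, y), g_num, atol=1e-6)
```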
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the logistic regression loss $L: \mathbb{R}^d \to \mathbb{R}$ for a binary classification task with data $(\mathbf{x}_i, y_i) \in \mathbb{R}^d \times \{0, 1\}$ for $i \in \{1, \ldots, N\}$: \begin{equation*} L(\mathbf{w}) = \frac{1}{N} \sum_{i = 1}^N \bigg(\log\left(1 + e^{\mathbf{x}_i^\top\mathbf{w}}\right) - y_i\mathbf{x}_i^\top\mathbf{w}\bigg). \end{equation*} Which of the following is the gradient of the loss $L$?
For proper loss functions, the loss margin can be defined as $\mu_\phi = -\frac{\phi'(0)}{\phi''(0)}$ and shown to be directly related to the regularization properties of the classifier. Specifically, a loss function of larger margin increases regularization and produces better estimates of the posterior probability. For example, the loss margin can be increased for the logistic loss by introducing a $\gamma$ parameter and writing the logistic loss as $\frac{1}{\gamma}\log(1 + e^{-\gamma v})$, where smaller $0 < \gamma < 1$ increases the margin of the loss. It is shown that this is directly equivalent to decreasing the learning rate in gradient boosting $F_m(x) = F_{m-1}(x) + \gamma h_m(x)$, where decreasing $\gamma$ improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate of $\gamma$ is used, the correct formula for retrieving the posterior probability is now $\eta = f^{-1}(\gamma F(x))$. In conclusion, by choosing a loss function with larger margin (smaller $\gamma$) we increase regularization and improve our estimates of the posterior probability, which in turn improves the ROC curve of the final classifier.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When constructing a word embedding, what is true regarding negative samples?
In natural language processing (NLP), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers. Methods to generate this mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, explainable knowledge base method, and explicit representation in terms of the context in which words appear.Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing and sentiment analysis.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When constructing a word embedding, what is true regarding negative samples?
Word embeddings may contain the biases and stereotypes contained in the trained dataset, as Bolukbasi et al. point out in the 2016 paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings": a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies. For example, one of the analogies generated using the aforementioned word embedding is "man is to computer programmer as woman is to homemaker". The application of these trained word embeddings without careful oversight likely perpetuates existing bias in society, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases (Zhao et al. 2017). Given word embeddings' popular usage in NLP applications such as search ranking, CV parsing and recommendation systems, the biases that exist in pre-trained word embeddings may have further-reaching impact than we realize.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
7. Suppose a matrix has 0-($\pm$1) entries and in each column, the entries are non-decreasing from top to bottom (so all −1s are on top, then 0s, then 1s are on the bottom).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
If every eigenvalue of $A$ is less than 1 in absolute value, $$\det(I + A) = \sum_{k=0}^{\infty} \frac{1}{k!}\left(-\sum_{j=1}^{\infty} \frac{(-1)^j}{j} \operatorname{tr}\left(A^j\right)\right)^k,$$ where $I$ is the identity matrix.
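Both series converge quickly when the spectral radius of $A$ is small, so the identity can be checked numerically with truncated sums (a sketch; the random matrix and truncation depths are arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
A = 0.1 * rng.normal(size=(3, 3))   # small entries, so all |eigenvalues| < 1

# Inner series: -sum_{j>=1} (-1)^j tr(A^j) / j   (this equals tr log(I+A)).
inner = -sum((-1) ** j * np.trace(np.linalg.matrix_power(A, j)) / j
             for j in range(1, 40))

# Outer series: sum_{k>=0} inner^k / k!   (this equals exp(inner)).
approx = sum(inner ** k / math.factorial(k) for k in range(20))

# Compare the truncated double series against the exact determinant.
assert abs(approx - np.linalg.det(np.eye(3) + A)) < 1e-8
```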
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the top 100 documents contain 50 relevant documents
These may consist of an entire document or a document fragment.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If the top 100 documents contain 50 relevant documents
50, Bs. 100 and Bs. 500.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: Show that indeed with the probabilistic interpretation of weights of vector space retrieval, as given in Equation (2), the similarity computation in vector space retrieval results exactly in the probabilistic interpretation of information retrieval, i.e., $sim(q,d_j)= P(q|d_j)$. Given that $d_j$ and $q$ are conditionally independent, i.e., $P(d_j \cap q|k_i) = P(d_j|k_i)P(q|k_i)$. You can assume existence of joint probability density functions wherever required. (Hint: You might need to use Bayes theorem)
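One way to carry out the requested derivation (a sketch; it assumes the terms $k_i$ partition the probability space, so the law of total probability applies):

```latex
\begin{align*}
P(q \mid d_j)
  &= \sum_{i=1}^{m} P(q \cap k_i \mid d_j)
   = \sum_{i=1}^{m} \frac{P(q \cap d_j \mid k_i)\, P(k_i)}{P(d_j)} \\
  &= \sum_{i=1}^{m} \frac{P(q \mid k_i)\, P(d_j \mid k_i)\, P(k_i)}{P(d_j)}
   && \text{(conditional independence)} \\
  &= \sum_{i=1}^{m} P(q \mid k_i)\, P(k_i \mid d_j)
   && \text{(Bayes: } P(d_j \mid k_i)\, P(k_i) = P(k_i \mid d_j)\, P(d_j)\text{)} \\
  &= \sum_{i=1}^{m} \frac{w_{iq}}{|q|} \cdot \frac{w_{ij}}{|d_j|}
   = sim(q, d_j).
\end{align*}
```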
Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like Bayes' theorem are often used in these models. Examples include the Binary Independence Model, the probabilistic relevance model on which the Okapi (BM25) relevance function is based, uncertain inference, language models, the divergence-from-randomness model, and latent Dirichlet allocation. Feature-based retrieval models view documents as vectors of values of feature functions (or just features) and seek the best way to combine these features into a single relevance score, typically by learning to rank methods. Feature functions are arbitrary functions of document and query, and as such can easily incorporate almost any other retrieval model as just another feature.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: Show that indeed with the probabilistic interpretation of weights of vector space retrieval, as given in Equation (2), the similarity computation in vector space retrieval results exactly in the probabilistic interpretation of information retrieval, i.e., $sim(q,d_j)= P(q|d_j)$. Given that $d_j$ and $q$ are conditionally independent, i.e., $P(d_j \cap q|k_i) = P(d_j|k_i)P(q|k_i)$. You can assume existence of joint probability density functions wherever required. (Hint: You might need to use Bayes theorem)
Zhao and Callan (2010) were perhaps the first to quantitatively study the vocabulary mismatch problem in a retrieval setting. Their results show that an average query term fails to appear in 30-40% of the documents that are relevant to the user query. They also showed that this probability of mismatch is a central probability in one of the fundamental probabilistic retrieval models, the Binary Independence Model. They developed novel term weight prediction methods that can lead to potentially 50-80% accuracy gains in retrieval over strong keyword retrieval models. Further research along the line shows that expert users can use Boolean Conjunctive Normal Form expansion to improve retrieval performance by 50-300% over unexpanded keyword queries.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is WRONG regarding the Transformer model?
With no change in flux there is no back E.M.F. and hence no reflected impedance. The transformer and valve combination then generates large third-order harmonics.
What is WRONG regarding the Transformer model?
The ideal transformer model neglects many basic linear aspects of real transformers, including unavoidable losses and inefficiencies.
(a) Core losses, collectively called magnetizing current losses, consisting of: hysteresis losses due to nonlinear magnetic effects in the transformer core, and eddy current losses due to joule heating in the core that are proportional to the square of the transformer's applied voltage.
(b) Unlike the ideal model, the windings in a real transformer have non-zero resistances and inductances associated with: joule losses due to resistance in the primary and secondary windings, and leakage flux that escapes from the core and passes through one winding only, resulting in primary and secondary reactive impedance.
(c) Similar to an inductor, parasitic capacitance and self-resonance phenomena due to the electric field distribution. Three kinds of parasitic capacitance are usually considered and the closed-loop equations are provided: capacitance between adjacent turns in any one layer; capacitance between adjacent layers; and capacitance between the core and the layer(s) adjacent to the core.
Inclusion of capacitance into the transformer model is complicated, and is rarely attempted; the 'real' transformer model's equivalent circuit shown below does not include parasitic capacitance. However, the capacitance effect can be measured by comparing open-circuit inductance, i.e. the inductance of a primary winding when the secondary circuit is open, to a short-circuit inductance when the secondary winding is shorted.
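The equivalent-circuit bookkeeping behind (b) can be sketched numerically: with turns ratio $a = N_p/N_s$, a secondary-side impedance appears from the primary as $a^2 Z$, and winding joule loss is $I^2R$. All component values below are illustrative, not taken from the passage.

```python
# Sketch: referring secondary-side quantities of a transformer equivalent
# circuit to the primary. With turns ratio a = Np/Ns, an impedance Z on the
# secondary appears from the primary side as a**2 * Z. Values are made up.

a = 10.0          # turns ratio Np/Ns (e.g. a 2300 V : 230 V transformer)
R2 = 0.05         # secondary winding resistance, ohms
X2 = 0.10         # secondary leakage reactance, ohms
Z_load = 4.0      # secondary load impedance, ohms (purely resistive here)

R2_ref = a**2 * R2          # secondary resistance referred to the primary
X2_ref = a**2 * X2          # secondary leakage reactance referred to primary
Z_load_ref = a**2 * Z_load  # load as seen from the primary side

# Copper (joule) loss in the secondary winding for a given secondary current:
I2 = 10.0                   # amps
P_cu2 = I2**2 * R2          # watts dissipated in the secondary winding
```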
Implement the F1-score to evaluate your classifier.
You ran a classification on the same dataset which led to the following values for the confusion matrix categories: TP = 90, FP = 4, TN = 1, FN = 5. In this example, the classifier has performed well in classifying positive instances, but was not able to correctly recognize negative data elements. Again, the resulting F1 score and accuracy scores would be extremely high: accuracy = 91%, and F1 score = 95.24%. Similarly to the previous case, if a researcher analyzed only these two score indicators, without considering the MCC, they would wrongly think the algorithm is performing quite well in its task, and would have the illusion of being successful.
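The quoted numbers can be reproduced directly from the standard definitions, and computing the Matthews correlation coefficient (MCC) alongside them shows how it exposes the weakness the accuracy and F1 scores hide:

```python
# Reproduce accuracy and F1 from the confusion matrix TP=90, FP=4, TN=1,
# FN=5, and compare with the MCC, which stays low because the single
# negative-class errors dominate its numerator.
import math

TP, FP, TN, FN = 90, 4, 1, 5

accuracy = (TP + TN) / (TP + TN + FP + FN)           # 91/100 = 0.91
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)   # ~0.9524

mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)                                                    # ~0.135, i.e. near chance
```
Despite accuracy and F1 above 0.9, the MCC lands near 0.14, flagging that the classifier carries almost no information about the negative class.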
Implement the F1-score to evaluate your classifier.
In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding. Consider this other example.
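The degenerate case can be made concrete with a guarded MCC implementation. The counts below (a classifier that labels everything positive, so TN = FN = 0) are illustrative, and returning 0 for the undefined case is a common convention, not something the passage mandates:

```python
# MCC with a guard for the case described above: when TN and FN are both 0,
# two factors of the denominator vanish and the score is undefined.
import math

def mcc(TP, FP, TN, FN):
    denom = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    if denom == 0:
        return 0.0   # undefined case; conventionally reported as 0
    return (TP * TN - FP * FN) / denom

# Everything predicted positive on a positive-heavy dataset (toy numbers):
score = mcc(95, 5, 0, 0)   # (TN + FN) = 0, so the denominator is 0
```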
Which of the following statements about index merging (when constructing inverted files) is correct?
The inverted index is filled via a merge or rebuild. A rebuild is similar to a merge but first deletes the contents of the inverted index. The architecture may be designed to support incremental indexing, where a merge identifies the document or documents to be added or updated and then parses each document into words. For technical accuracy, a merge conflates newly indexed documents, typically residing in virtual memory, with the index cache residing on one or more computer hard drives.
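The merge step described above can be sketched in a few lines: an in-memory "delta" index of newly parsed documents is conflated with the main inverted index, posting list by posting list. The data layout (term → sorted list of document ids) and names are illustrative, not a real engine's format:

```python
# Minimal inverted-index merge: conflate an in-memory delta index of newly
# indexed documents into the main index. Postings are sorted, duplicate-free
# lists of document ids; merging preserves both properties.

def merge_postings(a, b):
    """Merge two sorted, duplicate-free posting lists into one."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def merge_indexes(main, delta):
    """Conflate the delta index into the main index; main is left untouched."""
    merged = dict(main)
    for term, postings in delta.items():
        merged[term] = merge_postings(merged.get(term, []), postings)
    return merged

main_index  = {"database": [1, 3], "query": [2]}
delta_index = {"database": [2, 3], "index": [4]}
index = merge_indexes(main_index, delta_index)
```
A rebuild, by contrast, would start from an empty `main_index` and re-add every document.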
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts. LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
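The SVD step at the heart of LSI can be sketched on a tiny term-document matrix: factor it, keep the top-k singular values, and compare documents in the resulting k-dimensional concept space. The matrix contents and k are made-up toy values:

```python
# Minimal LSI sketch: truncated SVD of a toy term-document matrix.
import numpy as np

# rows = terms, columns = documents (toy counts)
A = np.array([
    [1, 1, 0, 0],   # "database"
    [1, 0, 0, 0],   # "sql"
    [0, 1, 1, 0],   # "index"
    [0, 0, 1, 1],   # "tree"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                          # latent "concepts" to keep
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximation

# Documents live as columns in the k-dimensional concept space:
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]
```
Queries are folded into the same concept space and ranked by similarity to these document vectors; terms that co-occur in similar contexts end up close together even if they never co-occur directly.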
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
During the experiment, semantic associations remain fixed, reflecting the assumption that semantic associations are not significantly impacted by the episodic experience of one experiment. The two measures used to measure semantic relatedness in this model are latent semantic analysis (LSA) and word association spaces (WAS). The LSA method states that similarity between words is reflected through their co-occurrence in a local context. WAS was developed by analyzing a database of free association norms, and is where "words that have similar associative structures are placed in similar regions of space".
The number of non-zero entries in a column of a term-document matrix indicates:
When creating a data-set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each cell (i, j), then, is the number of times term j occurs in document i. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. For instance, if one has the following two (short) documents:
D1 = "I like databases"
D2 = "I dislike databases"
then the document-term matrix would be:

      I   like  dislike  databases
D1    1   1     0        1
D2    1   0     1        1

which shows which documents contain which terms and how many times they appear.
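The construction above takes only a few lines of plain Python; this sketch builds the counts for the two toy documents from the passage:

```python
# Build a document-term matrix: rows are documents, columns are terms,
# cell (i, j) counts how often term j occurs in document i.

docs = ["I like databases", "I dislike databases"]

# Vocabulary: every distinct token across the corpus, in a fixed order.
terms = sorted({w for d in docs for w in d.split()})

# One row of term counts per document.
matrix = [[d.split().count(t) for t in terms] for d in docs]
```
Note that `sorted` places the capitalized "I" first, so the column order here is I, databases, dislike, like; real pipelines normally lowercase and tokenize more carefully.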
The number of non-zero entries in a column of a term-document matrix indicates:
In text databases, for a document collection defined by a document-by-term matrix D (of size m×n, where m is the number of documents and n is the number of terms), the number of clusters can roughly be estimated by the formula $\tfrac{mn}{t}$, where t is the number of non-zero entries in D. Note that in D each row and each column must contain at least one non-zero element.
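The estimate is a one-liner once the non-zero entries are counted; the matrix below is a toy example:

```python
# Rough cluster-count estimate for a document-by-term matrix D:
# k ~ m*n / t, where t is the number of non-zero entries of D.

def estimated_clusters(D):
    m, n = len(D), len(D[0])
    t = sum(1 for row in D for x in row if x != 0)
    return m * n / t

D = [
    [1, 0, 2],
    [0, 3, 1],
    [4, 0, 0],
]
k = estimated_clusters(D)   # 9 cells, 5 of them non-zero -> 9/5 = 1.8
```
Intuitively, the sparser the matrix (small t), the more distinct term-usage patterns, and hence clusters, the estimate predicts.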
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts. LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
Latent semantic analysis (LSA, performing singular-value decomposition on the document-term matrix) can improve search results by disambiguating polysemous words and searching for synonyms of the query. However, searching in the high-dimensional continuous space is much slower than searching the standard trie data structure of search engines.
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true?
If every node of a tree has finitely many successors, then it is called a finitely branching tree, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ π, either t is a leaf or there exists a unique c ∈ $\mathbb{N}$ such that t.c ∈ π. A path may be a finite or infinite set. If all paths of a tree are finite then the tree is called finite, otherwise infinite.
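For the FP-tree question above, the relevant property is whether an item lies on every root-to-leaf path. A small recursive check on a toy tree (nodes as `(label, children)` pairs, a hypothetical representation chosen just for this sketch) makes the condition concrete:

```python
# Check whether an item occurs on every root-to-leaf path of a tree.
# In FP-tree terms: if it does, the item is part of every transaction
# (prefix path) the tree encodes. Node representation: (label, [children]).

def every_path_contains(node, item, seen=False):
    """True iff every root-to-leaf path through node's subtree has item."""
    label, children = node
    seen = seen or (label == item)
    if not children:                 # leaf reached: was item seen on the way?
        return seen
    return all(every_path_contains(c, item, seen) for c in children)

#        root
#       /    \
#      a      a
#      |      |
#      b      c
tree = ("root", [("a", [("b", [])]),
                 ("a", [("c", [])])])
```
Here "a" is on both paths while "b" is only on one, so only "a" (and the root label) satisfies the property.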
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true?
The root is (s,0) and the parent of a node (q,j) is (predecessor(q,j), j−1). This tree is infinite, finitely branching, and fully connected. Therefore, by Kőnig's lemma, there exists an infinite path (q0,0),(q1,1),(q2,2),... in the tree. Therefore, the following is an accepting run of A: run(q0,0)⋅run(q1,1)⋅run(q2,2)⋅... Hence, by the infinite pigeonhole principle, w is accepted by A.
Which of the following statements regarding topic models is false?
In statistics and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear approximately equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words.
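The "about 9 times more dog words" claim follows directly from the topic proportions; a tiny expected-count calculation makes the arithmetic explicit (the document length and per-topic word rate are made-up values, and equal emission rates for both topics is an assumption of this sketch):

```python
# Expected topic-marker word counts in a mixed-topic document:
# 10% cats / 90% dogs should yield ~9x more dog words than cat words,
# assuming each topic emits its marker word at the same rate.

doc_length = 1000
mix = {"cats": 0.10, "dogs": 0.90}   # topic proportions in the document
marker_rate = 0.05                   # P(marker word | topic), same per topic

expected = {t: doc_length * p * marker_rate for t, p in mix.items()}
ratio = expected["dogs"] / expected["cats"]   # 0.90 / 0.10 = 9
```
The rate cancels in the ratio, so the 9:1 outcome depends only on the topic proportions, which is exactly the passage's point.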
Which of the following statements regarding topic models is false?
In particular, a growing number of academics are concerned that some topic modeling techniques can hardly be validated. Random samples are one issue: on the one hand, it is extremely hard to know how many units of one type of text (for example, blog posts) exist on the Internet at a given time.
Modularity of a social network always:
Social Networks. 35 (4): 626–638. doi:10.1016/j.socnet.2013.08.004.
Modularity of a social network always:
"Dynamic Social Networks Promote Cooperation in Experiments with Humans". Proceedings of the National Academy of Sciences. 108 (48): 19193–8.