question: string (lengths 6–3.53k)
text: string (lengths 17–2.05k)
source: string (1 value)
(Infinite Data) Assume that your training data $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}$ is iid and comes from a fixed distribution $\mathcal{D}$ that is unknown but is known to have bounded support. Assume that your family of models contains a finite number of elements and that you choose the best such element according to the training data. You then evaluate the risk for this chosen model. Call this the training risk. As $|\mathcal{S}|$ tends to infinity, this training risk converges to the true (according to the distribution $\mathcal{D}$ ) risk of the best model in this family.
In statistical learning models, the training samples $(x_i, y_i)$ are assumed to have been drawn from the true distribution $p(x,y)$, and the objective is to minimize the expected "risk" $I = \mathbb{E}[V(f(x),y)] = \int V(f(x),y)\,dp(x,y)$. A common paradigm in this situation is to estimate a function $\hat{f}$ through empirical risk minimization or regularized empirical risk minimization (usually Tikhonov regularization).
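The convergence claim above can be illustrated numerically. The sketch below uses a hypothetical toy setup (all names and constants are illustrative): $x$ is uniform on $[0,1]$, the true label is $\mathbb{1}[x > 0.3]$ flipped with probability 0.1, and the finite model family is a handful of threshold classifiers. With a large sample, the empirically chosen model coincides with the best model in the family and its training risk approaches the true risk (the 0.1 noise rate).

```python
import random

random.seed(0)

# Finite family of threshold classifiers x -> 1[x > t].
THRESHOLDS = [0.1, 0.3, 0.5, 0.7]

def sample(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.3 else 0
        if random.random() < 0.1:   # 10% label noise
            y = 1 - y
        data.append((x, y))
    return data

def risk(t, data):
    """Empirical 0-1 risk of the classifier x -> 1[x > t]."""
    return sum(int(x > t) != y for x, y in data) / len(data)

train = sample(50_000)
best = min(THRESHOLDS, key=lambda t: risk(t, train))  # ERM over the finite family
# With this much data, the chosen threshold is the best-in-family one (t = 0.3)
# and the training risk is close to the true risk of that model (~0.1).
print(best, round(risk(best, train), 3))
```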
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. Selecting only a finite number of data points over a broad sample space may yield improved precision and lower variance overall, but may also produce an overreliance on the training data (overfitting). Test data would then also not agree as closely with the training data, but here the reason is inaccuracy, or high bias.
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0$-$1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal{X} \to \mathbb{R}$ satisfying for all $\mathbf{x}\in\mathcal{X}$: Let $b: \mathbb{R} \to \mathbb{R}$ be a function that preserves the sign, i.e., $b(\mathbb{R}_+^*)\subseteq \mathbb{R}_+^*$ and $b(\mathbb{R}_-^*)\subseteq \mathbb{R}_-^*$. Show that
\begin{align*}
\mathcal{L}(g)-\mathcal{L}^\star \leq \mathbb{E}\left[\left|2\eta(X)-1-b(g(X))\right|\right]
\end{align*}
\begin{align*}
\mathcal{L}(g)-\mathcal{L}^\star = \mathbb{E}\left[\boldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}\,|2\eta(X)-1|\right].
\end{align*}
Fix a loss function $\mathcal{L}\colon Y\times Y\to \mathbb{R}_{\geq 0}$, for example the square loss $\mathcal{L}(y,y')=(y-y')^2$, where $h(x)=y'$. For a given distribution $\rho$ on $X\times Y$, the expected risk of a hypothesis (a function) $h\in\mathcal{H}$ is
$$\mathcal{E}(h):=\mathbb{E}_\rho[\mathcal{L}(h(x),y)]=\int_{X\times Y}\mathcal{L}(h(x),y)\,d\rho(x,y).$$
In our setting, we have $h=\mathcal{A}(S_n)$, where $\mathcal{A}$ is a learning algorithm and $S_n=((x_1,y_1),\ldots,(x_n,y_n))\sim\rho^n$ is a sequence of vectors which are all drawn independently from $\rho$. Define the optimal risk $\mathcal{E}^\star=\inf_h \mathcal{E}(h)$, and set $h_n=\mathcal{A}(S_n)$ for each $n$.
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. How could you address the problem of recommending movies to a new user without any ratings? [This is not a math question.]
After the most like-minded users are found, their corresponding ratings are aggregated to identify the set of items to be recommended to the target user. The most important disadvantage of taking context into the recommendation model is having to deal with a larger dataset that contains many more missing values than the user-item rating matrix. Therefore, similar to matrix factorization methods, tensor factorization techniques can be used to reduce the dimensionality of the original data before using any neighborhood-based methods.
Specifically, the predicted rating user $u$ will give to item $i$ is computed as:
$$\tilde{r}_{ui}=\sum_{f=0}^{n_{\text{factors}}}H_{u,f}\,W_{f,i}$$
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated that a matrix factorization with one latent factor is equivalent to a most popular or top popular recommender (e.g. recommends the items with the most interactions without any personalization). Increasing the number of latent factors will improve personalization, therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to overfit and the recommendation quality will decrease.
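The prediction rule above is just a dot product between a user's row of $H$ and an item's column of $W$. A minimal sketch with made-up latent factors (two factors, two users, three items):

```python
# Latent factor matrices (illustrative numbers): H is users x factors,
# W is factors x items; the predicted rating is r~_ui = sum_f H[u][f] * W[f][i].
H = [[0.9, 0.2],          # user 0
     [0.1, 0.8]]          # user 1
W = [[0.7, 0.1, 0.5],     # factor 0 across 3 items
     [0.2, 0.9, 0.4]]     # factor 1 across 3 items

def predict(u, i):
    return sum(H[u][f] * W[f][i] for f in range(len(W)))

print(round(predict(0, 1), 2))  # 0.9*0.1 + 0.2*0.9 = 0.27
```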
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means that this point is classified correctly by $f$. Assume further that we have computed the gradient of $g$ at $\mathbf{x}$ to be $\nabla_{\mathbf{x}} g(\mathbf{x})=(+1,-2,+3,-4,+5,-6)$. You are allowed to make one step in order to (hopefully) find an adversarial example. In the following four questions, assume $\epsilon=1$. What is the value of $g(\mathbf{x}+\delta)$ for this $\ell_{1}$-optimal choice assuming that $g$ is (locally) linear?
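Under the stated local-linearity assumption, the $\ell_1$-optimal one-step perturbation concentrates the entire budget $\epsilon$ on the coordinate with the largest gradient magnitude, signed so as to decrease $g$ (since the point has $y=1$). A sketch of that reasoning with the numbers from the question:

```python
# l1-optimal one-step attack under a local linear approximation of g:
# put the whole budget eps on the coordinate with the largest |gradient|,
# with the sign chosen to move against the gradient.
g_x = 8.0
grad = [1, -2, 3, -4, 5, -6]
eps = 1.0

i = max(range(len(grad)), key=lambda j: abs(grad[j]))   # index 5, |grad| = 6
delta = [0.0] * len(grad)
delta[i] = -eps if grad[i] > 0 else eps                 # move against the gradient
g_new = g_x + sum(d * g for d, g in zip(delta, grad))   # linearized g(x + delta)
print(g_new)  # 8 - 6 = 2.0
```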
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x},\mathbf{p})$, depending on the function $f$, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$, so that all learning rates $\alpha\leq\beta(\mathbf{x},\mathbf{p})$ satisfy Armijo's condition. When $\mathbf{p}=-\nabla f(\mathbf{x})$, we can choose $\beta(\mathbf{x},\mathbf{p})$ on the order of $1/L(\mathbf{x})$, where $L(\mathbf{x})$ is a local Lipschitz constant for the gradient $\nabla f$ near the point $\mathbf{x}$ (see Lipschitz continuity). If the function is $C^2$, then $L(\mathbf{x})$ is close to the norm of the Hessian of the function at $\mathbf{x}$. See Armijo (1966) for more detail.
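A minimal backtracking line search sketch (function, constants, and names are illustrative): shrink the step size $\alpha$ until Armijo's condition $f(x + \alpha p) \leq f(x) + c\,\alpha\,\langle\nabla f(x), p\rangle$ holds for the steepest-descent direction $p = -\nabla f(x)$.

```python
def backtracking(f, grad_f, x, c=1e-4, shrink=0.5, a0=1.0):
    g = grad_f(x)
    p = [-gi for gi in g]                         # steepest-descent direction
    slope = sum(gi * pi for gi, pi in zip(g, p))  # = -||grad f(x)||^2 < 0
    a = a0
    # Halve a until Armijo's sufficient-decrease condition holds.
    while f([xi + a * pi for xi, pi in zip(x, p)]) > f(x) + c * a * slope:
        a *= shrink
    return a

f = lambda x: x[0] ** 2          # gradient 2x is Lipschitz with L = 2
grad = lambda x: [2 * x[0]]
step = backtracking(f, grad, [3.0])
print(step)  # 0.5, i.e. on the order of 1/L, as the text suggests
```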
Often $f$ is a threshold function, which maps all values of $\vec{w}\cdot\vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g.,
$$f(\mathbf{x})=\begin{cases}1&\text{if } \mathbf{w}^{T}\cdot\mathbf{x}>\theta,\\0&\text{otherwise.}\end{cases}$$
The superscript $T$ indicates the transpose and $\theta$ is a scalar threshold. A more complex $f$ might give the probability that an item belongs to a certain class. For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".
We are given a data set $S=\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}$ for a binary classification task where $\boldsymbol{x}_{n} \in \mathbb{R}^{D}$. We want to use a nearest-neighbor classifier. In which of the following situations do we have a reasonable chance of success with this approach? [Ignore the issue of complexity.]
The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point $x$ to the class of its closest neighbour in the feature space, that is $C_n^{1nn}(x)=Y_{(1)}$. As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).
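The 1-NN rule is short enough to state directly in code. A minimal sketch (training points below are made up) that classifies a query point by the label of its Euclidean-nearest training point:

```python
def one_nn(train, x):
    """Return the label of the training point closest to x (squared Euclidean)."""
    nearest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

# Illustrative 2-D training set: two points per class.
train = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.2), 1)]
print(one_nn(train, (0.2, 0.1)))  # 0
print(one_nn(train, (1.1, 0.9)))  # 1
```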
The $k$-nearest neighbour classifier can be viewed as assigning the $k$ nearest neighbours a weight $1/k$ and all others weight $0$. This can be generalised to weighted nearest neighbour classifiers, where the $i$th nearest neighbour is assigned a weight $w_{ni}$, with $\sum_{i=1}^n w_{ni}=1$. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds. Let $C_n^{wnn}$ denote the weighted nearest classifier with weights $\{w_{ni}\}_{i=1}^n$.
Consider a linear model $\hat{y} = \mathbf{x}^\top \mathbf{w}$ with the squared loss under an $\ell_\infty$-bounded adversarial perturbation. For a single point $(\mathbf{x}, y)$, it corresponds to the following objective:
\begin{align}
\max_{\tilde{\mathbf{x}}:\ \|\mathbf{x}-\tilde{\mathbf{x}}\|_\infty\leq \epsilon} \left(y - \tilde{\mathbf{x}}^\top \mathbf{w}\right)^{2}, \tag{OP}\AMClabel{eq:opt_adv_regression}
\end{align}
where $\|\mathbf{x}-\tilde{\mathbf{x}}\|_\infty\leq \epsilon$ denotes the $\ell_\infty$-norm constraint, i.e. $|x_i - \tilde{x}_i| \leq \epsilon$ for every $i$. \\
Assume that $\mathbf{w} = (3, -2)^\top$, $\mathbf{x} = (-1, 2)^\top$, $y=2$. What is the maximum value of the optimization problem in Eq.~(\AMCref{eq:opt_adv_regression})?
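For this objective the inner maximum has a closed form: perturbing coordinate $i$ by $\pm\epsilon$ moves the residual by $\epsilon|w_i|$ in the worst direction, so the maximum is $(|y - \mathbf{x}^\top\mathbf{w}| + \epsilon\|\mathbf{w}\|_1)^2$. A sketch, with $\epsilon$ left as a parameter since the excerpt does not fix it:

```python
def adv_sq_loss(w, x, y, eps):
    """Worst-case squared loss under an l_inf perturbation of radius eps:
    (|y - x.w| + eps * ||w||_1)^2."""
    residual = y - sum(wi * xi for wi, xi in zip(w, x))
    return (abs(residual) + eps * sum(abs(wi) for wi in w)) ** 2

# Numbers from the question: residual = 2 - (-7) = 9, ||w||_1 = 5.
print(adv_sq_loss([3, -2], [-1, 2], 2, eps=1.0))  # (9 + 5)^2 = 196.0
```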
and simply write
$$\ell(\theta\mid X,Y)=\sum_{i=1}^m\left(y_i\theta' x_i - e^{\theta' x_i}\right).$$
To find a maximum, we need to solve the equation
$$\frac{\partial \ell(\theta\mid X,Y)}{\partial\theta}=0,$$
which has no closed-form solution. However, the negative log-likelihood, $-\ell(\theta\mid X,Y)$, is a convex function, and so standard convex optimization techniques such as gradient descent can be applied to find the optimal value of $\theta$.
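That gradient-based route can be sketched for a scalar parameter with made-up data. With $\ell(\theta)=\sum_i (y_i\theta x_i - e^{\theta x_i})$ we have $d\ell/d\theta=\sum_i (y_i - e^{\theta x_i})x_i$, and descending the negative log-likelihood is the same as ascending $\ell$:

```python
import math

# Illustrative Poisson-regression data (one scalar covariate per observation).
X = [0.5, 1.0, 1.5, 2.0]
Y = [2, 3, 5, 8]

theta, lr = 0.0, 0.01
for _ in range(5000):
    # Gradient of the log-likelihood: sum_i (y_i - exp(theta * x_i)) * x_i.
    g = sum((y - math.exp(theta * x)) * x for x, y in zip(X, Y))
    theta += lr * g   # ascent on l == descent on -l

print(round(theta, 3))  # the MLE, found numerically (no closed form exists)
```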
The subproblem considers the suggested solution $\bar{\mathbf{y}}$ to the master problem and solves the inner maximization problem from the minimax formulation. The inner problem is formulated using the dual representation
$$\begin{aligned}&\text{maximize}&&(\mathbf{b}-B\bar{\mathbf{y}})^{\mathrm{T}}\mathbf{u}+\mathbf{d}^{\mathrm{T}}\bar{\mathbf{y}}\\&\text{subject to}&&A^{\mathrm{T}}\mathbf{u}\leq\mathbf{c}\\&&&\mathbf{u}\geq\mathbf{0}\end{aligned}$$
While the master problem provides a lower bound on the value of the problem, the subproblem is used to get an upper bound. The result of solving the subproblem for any given $\bar{\mathbf{y}}$ can either be a finite optimal value for which an extreme point $\bar{\mathbf{u}}$ can be found, an unbounded solution for which an extreme ray $\bar{\mathbf{u}}$ in the recession cone can be found, or a finding that the subproblem is infeasible.
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=a \kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)+b \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ for all $a, b \geq 0$.
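A minimal sketch of the standard argument: with feature maps $\Phi_1$ and $\Phi_2$ for $\kappa_1$ and $\kappa_2$, the scaled concatenation of the two maps realizes $\kappa$ as an inner product,

```latex
\kappa(\mathbf{x}, \mathbf{x}')
  = a\,\Phi_1(\mathbf{x})^{\top}\Phi_1(\mathbf{x}') + b\,\Phi_2(\mathbf{x})^{\top}\Phi_2(\mathbf{x}')
  = \Phi(\mathbf{x})^{\top}\Phi(\mathbf{x}'),
\qquad
\Phi(\mathbf{x}) := \begin{pmatrix} \sqrt{a}\,\Phi_1(\mathbf{x}) \\ \sqrt{b}\,\Phi_2(\mathbf{x}) \end{pmatrix},
```

which is well defined precisely because $a, b \geq 0$.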
Let $\kappa^1$ be an s-finite kernel from $S$ to $T$ and $\kappa^2$ an s-finite kernel from $S\times T$ to $U$. Then the composition $\kappa^1\cdot\kappa^2$ of the two kernels is defined as
$$\kappa^1\cdot\kappa^2\colon S\times\mathcal{U}\to[0,\infty],\qquad (s,B)\mapsto\int_T\kappa^1(s,\mathrm{d}t)\int_U\kappa^2((s,t),\mathrm{d}u)\,\mathbf{1}_B(u)$$
for all $s\in S$ and all $B\in\mathcal{U}$.
For $N$ even, we define the Dirichlet kernel as
$$D(x,N)=\frac{1}{N}+\frac{1}{N}\cos\tfrac{1}{2}Nx+\frac{2}{N}\sum_{k=1}^{(N-1)/2}\cos(kx)=\frac{\sin\tfrac{1}{2}Nx}{N\tan\tfrac{1}{2}x}.$$
Again, it can easily be seen that $D(x,N)$ is a linear combination of the right powers of $e^{ix}$, does not contain the term $\sin\tfrac{1}{2}Nx$ and satisfies
$$D(x_m,N)=\begin{cases}0&\text{for }m\neq 0\\1&\text{for }m=0.\end{cases}$$
Let $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ be two symmetric matrices. Assume that $\mathbf{v} \in \mathbb{R}^{n}$ is an eigenvector for both matrices with associated eigenvalues $\lambda_{A}$ and $\lambda_{B}$ respectively. Show that $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{A}+\mathbf{B}$. What is the corresponding eigenvalue?
The matrix
$$A=\begin{pmatrix}3&2&0\\2&0&0\\1&0&2\end{pmatrix}$$
has eigenvalues and corresponding eigenvectors
$$\lambda_1=-1,\ \mathbf{b}_1=(-3,6,1),\qquad \lambda_2=2,\ \mathbf{b}_2=(0,0,1),\qquad \lambda_3=4,\ \mathbf{b}_3=(2,1,1).$$
A diagonal matrix $D$ similar to $A$ is
$$D=\begin{pmatrix}-1&0&0\\0&2&0\\0&0&4\end{pmatrix}.$$
Given an $n\times n$ square matrix $A$ of real or complex numbers, an eigenvalue $\lambda$ and its associated generalized eigenvector $\mathbf{v}$ are a pair obeying the relation
$$(A-\lambda I)^k\mathbf{v}=0,$$
where $\mathbf{v}$ is a nonzero $n\times 1$ column vector, $I$ is the $n\times n$ identity matrix, $k$ is a positive integer, and both $\lambda$ and $\mathbf{v}$ are allowed to be complex even when $A$ is real. When $k=1$, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, $A\mathbf{v}=\lambda\mathbf{v}$.
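For the exercise above, the verification is one line: applying $\mathbf{A}+\mathbf{B}$ to the shared eigenvector and using linearity,

```latex
(\mathbf{A}+\mathbf{B})\mathbf{v}
  = \mathbf{A}\mathbf{v} + \mathbf{B}\mathbf{v}
  = \lambda_A \mathbf{v} + \lambda_B \mathbf{v}
  = (\lambda_A + \lambda_B)\,\mathbf{v},
```

so $\mathbf{v}$ is an eigenvector of $\mathbf{A}+\mathbf{B}$ with eigenvalue $\lambda_A+\lambda_B$.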
You are given your $D \times N$ data matrix $\boldsymbol{X}$, where $D$ represents the dimension of the input space and $N$ is the number of samples. We discussed in the course the singular value decomposition (SVD). Recall that the SVD is not invariant to scaling and that empirically it is a good idea to remove the mean of each feature (row of $\boldsymbol{X}$) and to normalize its variance to 1. Assume that $\boldsymbol{X}$ has this form except that the last row/feature is then multiplied by $\sqrt{2}$, i.e., it has variance ($\ell_{2}^{2}$-norm) of 2 instead of 1. Recall that the SVD allows us to write $\boldsymbol{X}$ in the form $\boldsymbol{X}=\boldsymbol{U} \boldsymbol{S} \boldsymbol{V}^{\top}$, where $\boldsymbol{U}$ and $\boldsymbol{V}$ are unitary and $\boldsymbol{S}$ is a $D \times N$ diagonal matrix with non-negative, decreasing entries $s_{i}$, called the singular values. Assume now that you add a feature, i.e., you add a row to $\boldsymbol{X}$ that is identical to the last row of $\boldsymbol{X}$, i.e., you just replicate the last feature. Call the new matrix $\tilde{\boldsymbol{X}}$. Assume also that for $\tilde{\boldsymbol{X}}$ we normalize all rows to have variance 1. To summarize: $\boldsymbol{X}$ is the original data matrix, where all means have been taken out and all rows are properly normalized to have variance 1 except the last one, which has variance 2; and $\tilde{\boldsymbol{X}}$ is the original data matrix with the last row replicated, where all means have been taken out and all rows are properly normalized. Let $\boldsymbol{X}=\boldsymbol{U} \boldsymbol{S} \boldsymbol{V}^{\top}$ be the SVD of $\boldsymbol{X}$ and let $\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{U}} \tilde{\boldsymbol{S}} \tilde{\boldsymbol{V}}^{\top}$ be the SVD of $\tilde{\boldsymbol{X}}$.
\begin{enumerate}
\item Show that (a) $\tilde{\boldsymbol{V}}=\boldsymbol{V}$ and (b) $\tilde{\boldsymbol{S}}$ is equal to $\boldsymbol{S}$ with an extra all-zero row attached.
\item Based on the previous relationships, and assuming that it is always best to run an SVD with "normalized" rows: if you KNOW a priori that a feature is highly correlated with another feature, should you first run the SVD and then figure out which features to keep, or should you first take the highly correlated feature out and then run the SVD? Explain.
\end{enumerate}
They form two sets of orthonormal bases $\mathbf{u}_1,\ldots,\mathbf{u}_m$ and $\mathbf{v}_1,\ldots,\mathbf{v}_n$, and if they are sorted so that the singular values $\sigma_i$ with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as
$$\mathbf{M}=\sum_{i=1}^r\sigma_i\mathbf{u}_i\mathbf{v}_i^*,$$
where $r\leq\min\{m,n\}$ is the rank of $\mathbf{M}$. The SVD is not unique. It is always possible to choose the decomposition so that the singular values $\Sigma_{ii}$ are in descending order. In this case, $\mathbf{\Sigma}$ (but not $\mathbf{U}$ and $\mathbf{V}$) is uniquely determined by $\mathbf{M}$. The term sometimes refers to the compact SVD, a similar decomposition $\mathbf{M}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^*$ in which $\mathbf{\Sigma}$ is square diagonal of size $r\times r$, where $r\leq\min\{m,n\}$ is the rank of $\mathbf{M}$, and has only the non-zero singular values.
Let $X$ denote the $d\times n$ data matrix with column $x_i$ as the image vector with mean subtracted. Then
$$\mathrm{covariance}(X)=\frac{XX^T}{n}.$$
Let the singular value decomposition (SVD) of $X$ be $X=U\Sigma V^T$. Then the eigenvalue decomposition for $XX^T$ is
$$XX^T=U\Sigma\Sigma^T U^T=U\Lambda U^T,$$
where $\Lambda=\mathrm{diag}$(eigenvalues of $XX^T$). Thus we can see easily that the eigenfaces are the first $k$ ($k\leq n$) columns of $U$ associated with the nonzero singular values, and the $i$th eigenvalue of $\frac{1}{n}XX^T$ is $\frac{1}{n}\,(i\text{th singular value of }X)^2$. Using SVD on the data matrix $X$, it is unnecessary to calculate the actual covariance matrix to get eigenfaces.
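The identity $XX^T = U\Lambda U^T$ with $\Lambda = \Sigma\Sigma^T$ is easy to check numerically. A sketch on random toy data (sizes are arbitrary): the eigenvalues of $XX^T$ match the squared singular values of $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))            # toy d x n "data matrix"

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigvals = np.linalg.eigvalsh(X @ X.T)[::-1]  # reverse to descending order

# Eigenvalues of X X^T equal the squared singular values of X.
print(np.allclose(eigvals, s ** 2))  # True
```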
Let us assume that a kernel $K: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is said to be valid if there exists $k \in \mathbb{N}$ and $\Phi: \mathcal{X} \rightarrow \mathbb{R}^{k}$ such that for all $\left(x, x^{\prime}\right) \in \mathcal{X} \times \mathcal{X}, K\left(x, x^{\prime}\right)=\Phi(x)^{\top} \Phi\left(x^{\prime}\right)$. Which one of the following kernels is not valid?
If $K_1, \ldots, K_n$ are p.d. kernels, then both their sum and their product are p.d. kernels on $\mathcal{X}=\mathcal{X}_1\times\cdots\times\mathcal{X}_n$. Let $\mathcal{X}_0\subset\mathcal{X}$. Then the restriction $K_0$ of $K$ to $\mathcal{X}_0\times\mathcal{X}_0$ is also a p.d. kernel.
Moore initiated the study of a very general kind of p.d. kernel. If $E$ is an abstract set, he calls functions $K(x,y)$ defined on $E\times E$ "positive Hermitian matrices" if they satisfy (1.1) for all $x_i\in E$.
Mark any of the following functions that have unique maximizers:
Often the functions to be minimized are not $f_i$ but $|f_i-z_i^*|$ for some scalars $z_i^*$. Then
$$f_{Tchb}(x,w)=\max_i w_i\,|f_i(x)-z_i^*|.$$
All three functions are named in honour of Pafnuty Chebyshev.
If $\bar{x}(s)$ is the unique maximizer of $f(\cdot\,;s)$, it suffices to show that $f'(\bar{x}(s);s')\geq 0$ for any $s'>s$, which guarantees that $\bar{x}(s)$ is increasing in $s$. This guarantees that the optimum has shifted to the right, i.e., $\bar{x}(s')\geq\bar{x}(s)$. This approach makes various assumptions, most notably the quasiconcavity of $f(\cdot\,;s)$.
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\right)\right)$, where $f$ is any function from the domain to itself.
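A sketch of the argument: if $\Phi_1$ is a feature map for $\kappa_1$, then composing it with $f$ gives a feature map for $\kappa$,

```latex
\kappa(\mathbf{x}, \mathbf{x}')
  = \kappa_1\big(f(\mathbf{x}), f(\mathbf{x}')\big)
  = \Phi_1\big(f(\mathbf{x})\big)^{\top} \Phi_1\big(f(\mathbf{x}')\big)
  = \Psi(\mathbf{x})^{\top}\Psi(\mathbf{x}'),
\qquad \Psi := \Phi_1 \circ f,
```

so $\kappa$ admits the feature map $\Psi$ and is therefore valid.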
For N {\displaystyle N} even, we define the Dirichlet kernel as D ( x , N ) = 1 N + 1 N cos ⁡ 1 2 N x + 2 N ∑ k = 1 ( N − 1 ) / 2 cos ⁡ ( k x ) = sin ⁡ 1 2 N x N tan ⁡ 1 2 x . {\displaystyle D(x,N)={\frac {1}{N}}+{\frac {1}{N}}\cos {\tfrac {1}{2}}Nx+{\frac {2}{N}}\sum _{k=1}^{(N-1)/2}\cos(kx)={\frac {\sin {\tfrac {1}{2}}Nx}{N\tan {\tfrac {1}{2}}x}}.} Again, it can easily be seen that D ( x , N ) {\displaystyle D(x,N)} is a linear combination of the right powers of e i x {\displaystyle e^{ix}} , does not contain the term sin ⁡ 1 2 N x {\displaystyle \sin {\tfrac {1}{2}}Nx} and satisfies D ( x m , N ) = { 0 for m ≠ 0 1 for m = 0 . {\displaystyle D(x_{m},N)={\begin{cases}0{\text{ for }}m\neq 0\\1{\text{ for }}m=0\end{cases}}.}
In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\right)\right)$, where $f$ is any function from the domain to itself.
Let $\kappa^{1}$ be an s-finite kernel from $S$ to $T$ and $\kappa^{2}$ an s-finite kernel from $S\times T$ to $U$. Then the composition $\kappa^{1}\cdot\kappa^{2}$ of the two kernels is defined as $$\kappa^{1}\cdot\kappa^{2}\colon S\times\mathcal{U}\to[0,\infty],\quad (s,B)\mapsto\int_{T}\kappa^{1}(s,\mathrm{d}t)\int_{U}\kappa^{2}((s,t),\mathrm{d}u)\,\mathbf{1}_{B}(u)$$ for all $s\in S$ and all $B\in\mathcal{U}$.
Consider a Generative Adversarial Network (GAN) which successfully produces images of goats. Which of the following statements is false?
For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning. The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner. GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.
Which of the following probability distributions are members of the exponential family:
In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, enabling the user to calculate expectations and covariances by differentiation thanks to some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family", as is the older term Koopman–Darmois family. The terms "distribution" and "family" are often used loosely: specifically, an exponential family is a set of distributions, where the specific distribution varies with the parameter; however, a parametric family of distributions is often referred to as "a distribution" (like "the normal distribution", meaning "the family of normal distributions"), and the set of all exponential families is sometimes loosely referred to as "the" exponential family.
Which of the following probability distributions are members of the exponential family:
Despite the analytical tractability of such distributions, they are in themselves usually not members of the exponential family. For example, the three-parameter Student's t distribution, beta-binomial distribution and Dirichlet-multinomial distribution are all predictive distributions of exponential-family distributions (the normal distribution, binomial distribution and multinomial distributions, respectively), but none are members of the exponential family. This can be seen above due to the presence of functional dependence on $\boldsymbol{\chi}+\mathbf{T}(x)$. In an exponential-family distribution, it must be possible to separate the entire density function into multiplicative factors of three types: (1) factors containing only variables, (2) factors containing only parameters, and (3) factors whose logarithm factorizes between variables and parameters. The presence of $\boldsymbol{\chi}+\mathbf{T}(x)$ makes this impossible unless the "normalizing" function $f(\dots)$ either ignores the corresponding argument entirely or uses it only in the exponent of an expression.
How does the bias-variance decomposition of a ridge regression estimator compare with that of the ordinary least-squares estimator in general?
The bias–variance decomposition forms the conceptual basis for regression regularization methods such as Lasso and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides unbiased regression estimates, the lower-variance solutions produced by regularization techniques can provide superior MSE performance.
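The qualitative claim, that regularization trades a little bias for a large drop in variance, can be sketched with a small Monte-Carlo experiment (toy data and an arbitrary regularization strength $\lambda = 25$, chosen for illustration, not a definitive benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, lam):
    """Ridge estimate (lam = 0 recovers OLS): (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Fixed design, repeated noisy response: estimate bias and variance
# of OLS vs. ridge over many training sets.
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(30, 2))
ols, ridge = [], []
for _ in range(500):
    y = X @ w_true + rng.normal(scale=1.0, size=30)
    ols.append(fit(X, y, 0.0))
    ridge.append(fit(X, y, 25.0))

var_ols = np.mean(np.var(ols, axis=0))      # average per-coordinate variance
var_ridge = np.mean(np.var(ridge, axis=0))
bias_ols = np.linalg.norm(np.mean(ols, axis=0) - w_true)
bias_ridge = np.linalg.norm(np.mean(ridge, axis=0) - w_true)
```

As expected, ridge shows a larger bias but a smaller variance than OLS on this toy setup.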
(Alternating Least Squares \& Matrix Factorization) For optimizing a matrix factorization problem in the recommender systems setting, as the number of observed entries increases but all $K, N, D$ are kept constant, the computational cost of the matrix inversion in Alternating Least-Squares increases.
Many standard NMF algorithms analyze all the data together; i.e., the whole matrix is available from the start. This may be unsatisfactory in applications where there are too many data to fit into memory or where the data are provided in streaming fashion. One such use is for collaborative filtering in recommendation systems, where there may be many users and many items to recommend, and it would be inefficient to recalculate everything when one user or one item is added to the system. The cost function for optimization in these cases may or may not be the same as for standard NMF, but the algorithms need to be rather different.
(Alternating Least Squares \& Matrix Factorization) For optimizing a matrix factorization problem in the recommender systems setting, as the number of observed entries increases but all $K, N, D$ are kept constant, the computational cost of the matrix inversion in Alternating Least-Squares increases.
Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.
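A minimal ALS sketch for the fully observed case (illustrative only: real recommender ALS handles missing ratings and per-entry regularization, which this toy omits):

```python
import numpy as np

# Alternating Least-Squares for X ~ W Z^T with X fully observed and
# exactly rank K, so the factorization can be recovered to high accuracy.
rng = np.random.default_rng(1)
K, N, D = 3, 20, 15
X = rng.normal(size=(N, K)) @ rng.normal(size=(K, D))  # exactly rank K

W = rng.normal(size=(N, K))
Z = rng.normal(size=(D, K))
for _ in range(30):
    # Fix Z, solve the least-squares problem for W; then fix W, solve for Z.
    W = X @ Z @ np.linalg.inv(Z.T @ Z)
    Z = X.T @ W @ np.linalg.inv(W.T @ W)

recon_err = np.linalg.norm(X - W @ Z.T) / np.linalg.norm(X)
```

Each half-step is a closed-form least-squares solve, which is why the overall cost of the matrix inversions depends on the factor dimension $K$ rather than on the data values themselves.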
In the lecture on bias-variance decomposition we have seen that the true error can be decomposed into noise, bias and variance terms. What happens to the three terms for ridge regression when the regularization parameter $\lambda$ grows? Explain your answer.
The bias–variance decomposition forms the conceptual basis for regression regularization methods such as Lasso and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides unbiased regression estimates, the lower-variance solutions produced by regularization techniques can provide superior MSE performance.
(Convex III) Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be two convex functions. Then $h=f \circ g$ is always convex.
Let $f$ be a function from an interval $I\subseteq\mathbb{R}$ to $\mathbb{R}$. If $f$ is convex, then for any three points $x,y,z$ in $I$, $$\frac{f(x)+f(y)+f(z)}{3}+f\!\left(\frac{x+y+z}{3}\right)\geq\frac{2}{3}\left[f\!\left(\frac{x+y}{2}\right)+f\!\left(\frac{y+z}{2}\right)+f\!\left(\frac{z+x}{2}\right)\right].$$ If a function $f$ is continuous, then it is convex if and only if the above inequality holds for all $x,y,z$ from $I$. When $f$ is strictly convex, the inequality is strict except for $x=y=z$.
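Taking the right-hand side to be $\tfrac{2}{3}\left[f\left(\tfrac{x+y}{2}\right)+f\left(\tfrac{y+z}{2}\right)+f\left(\tfrac{x+z}{2}\right)\right]$ (the pairwise-midpoint form of the inequality), a quick numerical check with the convex function $f(t)=t^2$ looks like this:

```python
# Numerical check of the three-point convexity inequality for f(t) = t^2.
def lhs_rhs(f, x, y, z):
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((x + z) / 2))
    return lhs, rhs

f = lambda t: t * t  # convex on all of R
checks = []
for (x, y, z) in [(0.0, 1.0, 2.0), (-3.0, 0.5, 4.0), (1.0, 1.0, 1.0)]:
    lhs, rhs = lhs_rhs(f, x, y, z)
    checks.append(lhs >= rhs - 1e-12)  # tolerance for floating point
```

At $x=y=z$ both sides equal $2f(x)$, so the inequality holds with equality there.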
(Convex III) Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be two convex functions. Then $h=f \circ g$ is always convex.
It can be generalized to any finite number $n$ of points instead of 3, taken on the right-hand side $k$ at a time instead of 2 at a time: Let $f$ be a continuous function from an interval $I\subseteq\mathbb{R}$ to $\mathbb{R}$. Then $f$ is convex if and only if, for any integers $n$ and $k$ where $n\geq 3$ and $2\leq k\leq n-1$, and any $n$ points $x_1,\dots,x_n$ from $I$, $$\frac{1}{k}\binom{n-2}{k-2}\left(\frac{n-k}{k-1}\sum_{i=1}^{n}f(x_{i})+nf\!\left(\frac{1}{n}\sum_{i=1}^{n}x_{i}\right)\right)\geq\sum_{1\leq i_{1}<\dots<i_{k}\leq n}f\!\left(\frac{1}{k}\sum_{j=1}^{k}x_{i_{j}}\right).$$
Let us consider a binary classification problem with a training set $S=\{(\mathbf{x}_n,y_n)\}_{n=1}^N$ such that: $$\mathbf{x}_n\in\mathbb{R}^D \text{ and } y_n\in\{-1,1\}, \text{ for all } n=1,\cdots,N,$$ where $N,D$ are integers such that $N,D\geq1$. We consider the Perceptron classifier which classifies $\mathbf{x}\in\mathbb{R}^D$ following the rule: $$f_{\mathbf{w},b}(\mathbf{x})=\operatorname{sign}(\mathbf{w}^\top\mathbf{x}+b),$$ where $\mathbf{w}\in\mathbb{R}^D$ is the weight vector, $b\in\mathbb{R}$ is the threshold, and the sign function is defined as $$\operatorname{sign}(z)=\begin{cases}+1&\text{if }z\geq0\\ -1&\text{if }z<0.\end{cases}$$ As seen in the course, explain how we can ignore the threshold $b$ and only deal with classifiers passing through the origin, i.e., of the form $f_{\mathbf{w}}(\mathbf{x})=\operatorname{sign}(\mathbf{w}^\top\mathbf{x})$.
Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)). For many problems, it is convenient to get a probability P ( y = 1 | x ) {\displaystyle P(y=1|x)} , i.e. a classification that not only gives an answer, but also a degree of certainty about the answer.
Let us consider a binary classification problem with a training set $S=\{(\mathbf{x}_n,y_n)\}_{n=1}^N$ such that: $$\mathbf{x}_n\in\mathbb{R}^D \text{ and } y_n\in\{-1,1\}, \text{ for all } n=1,\cdots,N,$$ where $N,D$ are integers such that $N,D\geq1$. We consider the Perceptron classifier which classifies $\mathbf{x}\in\mathbb{R}^D$ following the rule: $$f_{\mathbf{w},b}(\mathbf{x})=\operatorname{sign}(\mathbf{w}^\top\mathbf{x}+b),$$ where $\mathbf{w}\in\mathbb{R}^D$ is the weight vector, $b\in\mathbb{R}$ is the threshold, and the sign function is defined as $$\operatorname{sign}(z)=\begin{cases}+1&\text{if }z\geq0\\ -1&\text{if }z<0.\end{cases}$$ As seen in the course, explain how we can ignore the threshold $b$ and only deal with classifiers passing through the origin, i.e., of the form $f_{\mathbf{w}}(\mathbf{x})=\operatorname{sign}(\mathbf{w}^\top\mathbf{x})$.
The output $y$ of this transfer function is binary, depending on whether the input meets a specified threshold, $\theta$. The "signal" is sent, i.e. the output is set to one, if the activation meets the threshold: $$y=\begin{cases}1&\text{if }u\geq\theta\\ 0&\text{if }u<\theta\end{cases}$$ This function is used in perceptrons and often shows up in many other models. It performs a division of the space of inputs by a hyperplane. It is specially useful in the last layer of a network intended to perform binary classification of the inputs. It can be approximated from other sigmoidal functions by assigning large values to the weights.
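A minimal sketch of this threshold unit (plain Python, no learning, just the forward pass; the example weights and threshold are invented):

```python
# Heaviside-style threshold unit as used in a perceptron.
def threshold_unit(u, theta):
    """Binary output: 1 if the activation u meets the threshold theta."""
    return 1 if u >= theta else 0

def perceptron_output(x, w, theta):
    """Fires iff the weighted activation sum(w_i * x_i) reaches theta."""
    u = sum(wi * xi for wi, xi in zip(w, x))
    return threshold_unit(u, theta)
```

With weights `[0.6, 0.6]` and threshold `1.0`, this unit computes a logical AND of two binary inputs, illustrating the hyperplane split of the input space.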
What is the gradient of $\boldsymbol{x}^{\top} \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$ with respect to $\boldsymbol{x}$ (written as a vector)?
$$\boldsymbol{F}=\begin{bmatrix}1&\gamma&0\\ 0&1&0\\ 0&0&1\end{bmatrix}.$$ We can also write the deformation gradient as $\boldsymbol{F}=\boldsymbol{\mathit{1}}+\gamma\,\mathbf{e}_{1}\otimes\mathbf{e}_{2}$.
What is the gradient of $\boldsymbol{x}^{\top} \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$ with respect to $\boldsymbol{x}$ (written as a vector)?
By definition, the gradient of a scalar function $f$ is $$\nabla f=\sum_i\mathbf{e}^i\frac{\partial f}{\partial q^i}=\frac{\partial f}{\partial x}\mathbf{e}^1+\frac{\partial f}{\partial y}\mathbf{e}^2+\frac{\partial f}{\partial z}\mathbf{e}^3,$$ where the $q^i$ are the coordinates $x,y,z$ indexed. Recognizing this as a vector written in terms of the contravariant basis, it may be rewritten: $$\nabla f=\frac{\frac{\partial f}{\partial x}-\sin(\phi)\frac{\partial f}{\partial z}}{\cos(\phi)^2}\mathbf{e}_1+\frac{\partial f}{\partial y}\mathbf{e}_2+\frac{-\sin(\phi)\frac{\partial f}{\partial x}+\frac{\partial f}{\partial z}}{\cos(\phi)^2}\mathbf{e}_3.$$
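For the quadratic form in the question above, the gradient is $2\boldsymbol{W}^\top\boldsymbol{W}\boldsymbol{x}$ (since $\boldsymbol{W}^\top\boldsymbol{W}$ is symmetric). A finite-difference check on a random test matrix (the sizes and seed are arbitrary, chosen for illustration) confirms it:

```python
import numpy as np

# Check grad_x (x^T W^T W x) = 2 W^T W x by central finite differences.
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 3))
x = rng.normal(size=3)

f = lambda v: v @ W.T @ W @ v          # scalar quadratic form
analytic = 2 * W.T @ W @ x             # closed-form gradient

eps = 1e-6
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
grad_err = np.max(np.abs(analytic - numeric))
```

The agreement up to finite-difference error is the standard sanity check for hand-derived gradients.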
(Minima) Convex functions over a convex set have a unique global minimum.
The following are useful properties of convex optimization problems: every local minimum is a global minimum; the optimal set is convex; if the objective function is strictly convex, then the problem has at most one optimal point. These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
(Minima) Convex functions over a convex set have a unique global minimum.
Suppose each subset has its own cost function. The minima of each of these cost functions can be found, as can the minima of the global cost function, restricted to the same subsets. If these minima match for each subset, then it's almost obvious that a global minimum can be picked not out of the full set of alternatives, but out of only the set that consists of the minima of the smaller, local cost functions we have defined. If minimizing the local functions is a problem of "lower order", and (specifically) if, after a finite number of these reductions, the problem becomes trivial, then the problem has an optimal substructure.
Which statement is true for linear regression?
This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the $y_t$'s to the actually observed $x_t$'s, in a simple linear regression, is given by $$\beta_x=\frac{\operatorname{Cov}[y_t,x_t]}{\operatorname{Var}[x_t]}.$$
Which statement is true for linear regression?
In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable. In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models.
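A minimal simple-linear-regression sketch, fitting the linear predictor $y = a + bx$ by least squares (toy noise-free data invented for the example, so the recovered coefficients match the generating line exactly):

```python
import numpy as np

# Generate data from a known line, then recover (a, b) by least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.5 + 2.0 * x  # intercept 1.5, slope 2.0, no noise

A = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noisy data the same call returns the least-squares estimates rather than the exact coefficients; the design-matrix construction is what makes the model "linear in the parameters".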
(Nearest Neighbor) The training error of the 1-nearest neighbor classifier is zero.
There are many results on the error rate of the $k$ nearest neighbour classifiers. The $k$-nearest neighbour classifier is strongly (that is, for any joint distribution on $(X,Y)$) consistent provided $k:=k_n$ diverges and $k_n/n$ converges to zero as $n\to\infty$. Let $C_n^{knn}$ denote the $k$ nearest neighbour classifier based on a training set of size $n$. Under certain regularity conditions, the excess risk yields the following asymptotic expansion $$\mathcal{R}(C_n^{knn})-\mathcal{R}(C^{\mathrm{Bayes}})=\left(B_1\frac{1}{k}+B_2\left(\frac{k}{n}\right)^{4/d}\right)\{1+o(1)\}$$ for some constants $B_1$ and $B_2$. The choice $k^*=\lfloor Bn^{\frac{4}{d+4}}\rfloor$ offers a trade-off between the two terms in the above display, for which the $k^*$-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate $\mathcal{O}(n^{-\frac{4}{d+4}})$.
(Nearest Neighbor) The training error of the 1-nearest neighbor classifier is zero.
The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point $x$ to the class of its closest neighbour in the feature space, that is $C_{n}^{1nn}(x)=Y_{(1)}$. As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).
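The zero-training-error claim in the question above can be checked directly (toy data invented for the example; this assumes no duplicate inputs carrying conflicting labels, in which case each training point is its own nearest neighbour):

```python
import numpy as np

# 1-nearest-neighbour classifier: predict the label of the closest
# training point under Euclidean distance.
def one_nn_predict(X_train, y_train, x):
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
y = np.array([-1, 1, 1])

# Evaluating on the training set itself: each query's nearest neighbour
# is the query point, so every prediction matches its own label.
train_errors = sum(one_nn_predict(X, y, xi) != yi for xi, yi in zip(X, y))
```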
Now let $\mathbf{x}$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\mathbf{x}_1,\ldots,\mathbf{x}_N$ from above. Consider the problem of finding a unit vector $\mathbf{w}\in\mathbb{R}^D$ such that the random variable $\mathbf{w}^\top\mathbf{x}$ has \emph{maximal} variance. What is the variance of the random variable $\mathbf{w}^\top\mathbf{x}$ over the randomness of $\mathbf{x}$?
Suppose $T_n$ is a uniformly (locally) regular estimator of the parameter $q$. Then there exist independent random $m$-vectors $Z_\theta\sim\mathcal{N}(0,I_{q(\theta)}^{-1})$ and $\Delta_\theta$ such that $$\sqrt{n}(T_n-q(\theta))\ \xrightarrow{d}\ Z_\theta+\Delta_\theta,$$ where $d$ denotes convergence in distribution. More specifically, $$\begin{pmatrix}\sqrt{n}(T_n-q(\theta))-\tfrac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi_{q(\theta)}(x_i)\\ \tfrac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi_{q(\theta)}(x_i)\end{pmatrix}\ \xrightarrow{d}\ \begin{pmatrix}\Delta_\theta\\ Z_\theta\end{pmatrix}.$$ If the map $\theta\to\dot{q}_\theta$ is continuous, then the convergence in (A) holds uniformly on compact subsets of $\Theta$. Moreover, in that case $\Delta_\theta=0$ for all $\theta$ if and only if $T_n$ is uniformly (locally) asymptotically linear with influence function $\psi_{q(\theta)}$.
Now let $\mathbf{x}$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\mathbf{x}_1,\ldots,\mathbf{x}_N$ from above. Consider the problem of finding a unit vector $\mathbf{w}\in\mathbb{R}^D$ such that the random variable $\mathbf{w}^\top\mathbf{x}$ has \emph{maximal} variance. What is the variance of the random variable $\mathbf{w}^\top\mathbf{x}$ over the randomness of $\mathbf{x}$?
The probability density function of a continuous uniform random variable $X\sim\operatorname{U}(a,b)$ is given by the indicator function of its interval of support normalized by the interval's length. Of particular interest is the uniform distribution on the unit interval $[0,1]$. Samples of any desired probability distribution $\operatorname{D}$ can be generated by calculating the quantile function of $\operatorname{D}$ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
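A sketch of this quantile-function (inverse-transform) recipe for a target distribution with a closed-form quantile function; Exponential(1) is chosen here purely for illustration, its quantile function being $-\log(1-u)$:

```python
import math
import random

random.seed(0)

def sample_exponential():
    """Push a U(0,1) draw through the Exp(1) quantile function."""
    u = random.random()          # uniform on [0, 1)
    return -math.log(1.0 - u)    # quantile (inverse CDF) of Exp(1)

samples = [sample_exponential() for _ in range(100_000)]
mean = sum(samples) / len(samples)  # Exp(1) has mean 1
```

The same pattern works for any distribution whose quantile function can be evaluated, which is exactly the property of CDFs the passage refers to.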
Consider a linear regression model on a dataset which we split into a training set and a test set. After training, our model gives a mean-squared error of 0.1 on the training set and a mean-squared error of 5.3 on the test set. Recall that the mean-squared error (MSE) is given by: $$MSE_{\mathbf{w}}(\mathbf{y},\mathbf{X})=\frac{1}{2N}\sum_{n=1}^{N}(y_n-\mathbf{x}_n^\top\mathbf{w})^2$$ Which of the following statements is \textbf{correct}?
In linear regression, there exist real response values $y_1,\ldots,y_n$, and $n$ $p$-dimensional vector covariates $x_1,\ldots,x_n$. The components of the vector $x_i$ are denoted $x_{i1},\ldots,x_{ip}$. If least squares is used to fit a function in the form of a hyperplane $\hat{y}=a+\boldsymbol{\beta}^{T}x$ to the data $(x_i,y_i)$, $1\leq i\leq n$, then the fit can be assessed using the mean squared error (MSE). The MSE for given estimated parameter values $a$ and $\boldsymbol{\beta}$ on the training set $(x_i,y_i)$, $1\leq i\leq n$, is defined as: $$\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-a-\boldsymbol{\beta}^{T}\mathbf{x}_i\right)^2=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-a-\beta_1 x_{i1}-\dots-\beta_p x_{ip}\right)^2$$ If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is $(n-p-1)/(n+p+1)<1$ times the expected value of the MSE for the validation set (the expected value is taken over the distribution of training sets).
You are given two distributions over $\mathbb{R}$: Uniform on the interval $[a,b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$p_{\mathcal{U}}(y\mid a,b):=\begin{cases}\frac{1}{b-a}&\text{for }a\leq y\leq b,\\ 0&\text{otherwise,}\end{cases}\qquad p_{\mathcal{G}}\left(y\mid\mu,\sigma^{2}\right):=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(y-\mu)^{2}}{2\sigma^{2}}\right).$$ Which one(s) belong to the exponential family?
Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then $$f(y;\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(y-\mu)^2/2\sigma^2}.$$ This is an exponential family which can be written in canonical form by defining $$\begin{aligned}\boldsymbol{\eta}&=\left[\frac{\mu}{\sigma^2},\,-\frac{1}{2\sigma^2}\right]\\ h(y)&=\frac{1}{\sqrt{2\pi}}\\ T(y)&=\left(y,y^2\right)^{\rm{T}}\\ A(\boldsymbol{\eta})&=\frac{\mu^2}{2\sigma^2}+\log|\sigma|=-\frac{\eta_1^2}{4\eta_2}+\frac{1}{2}\log\left|\frac{1}{2\eta_2}\right|\end{aligned}$$
You are given two distributions over $\mathbb{R}$: Uniform on the interval $[a,b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$p_{\mathcal{U}}(y\mid a,b):=\begin{cases}\frac{1}{b-a}&\text{for }a\leq y\leq b,\\ 0&\text{otherwise,}\end{cases}\qquad p_{\mathcal{G}}\left(y\mid\mu,\sigma^{2}\right):=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(y-\mu)^{2}}{2\sigma^{2}}\right).$$ Which one(s) belong to the exponential family?
An overdispersed exponential family of distributions is a generalization of an exponential family and the exponential dispersion model of distributions and includes those families of probability distributions, parameterized by $\boldsymbol{\theta}$ and $\tau$, whose density functions $f$ (or probability mass function, for the case of a discrete distribution) can be expressed in the form $$f_{Y}(\mathbf{y}\mid\boldsymbol{\theta},\tau)=h(\mathbf{y},\tau)\exp\left(\frac{\mathbf{b}(\boldsymbol{\theta})^{\rm{T}}\mathbf{T}(\mathbf{y})-A(\boldsymbol{\theta})}{d(\tau)}\right).$$ The dispersion parameter, $\tau$, typically is known and is usually related to the variance of the distribution. The functions $h(\mathbf{y},\tau)$, $\mathbf{b}(\boldsymbol{\theta})$, $\mathbf{T}(\mathbf{y})$, $A(\boldsymbol{\theta})$, and $d(\tau)$ are known.
Which statement is true for the Mean Squared Error (MSE) loss $\operatorname{MSE}(\mathbf{x},y):=\left(f_{\mathbf{w}}(\mathbf{x})-y\right)^{2}$, with $f_{\mathbf{w}}$ a model parametrized by the weights $\mathbf{w}$?
$$\begin{aligned}\operatorname{MSE}({\hat{\theta}})&=\operatorname{E}_{\theta}\left[\left({\hat{\theta}}-\operatorname{E}_{\theta}[{\hat{\theta}}]\right)^{2}\right]+\left(\operatorname{E}_{\theta}[{\hat{\theta}}]-\theta\right)^{2}\\&=\operatorname{Var}_{\theta}({\hat{\theta}})+\operatorname{Bias}_{\theta}({\hat{\theta}},\theta)^{2}\end{aligned}$$ An even shorter proof can be achieved using the well-known formula that for a random variable $X$, $\mathbb{E}(X^{2})=\operatorname{Var}(X)+(\mathbb{E}(X))^{2}$. By substituting $X$ with ${\hat{\theta}}-\theta$, we have $$\operatorname{MSE}({\hat{\theta}})=\operatorname{Var}({\hat{\theta}}-\theta)+\left(\mathbb{E}[{\hat{\theta}}-\theta]\right)^{2}=\operatorname{Var}({\hat{\theta}})+\operatorname{Bias}({\hat{\theta}},\theta)^{2}.$$ But in a real modeling case, MSE could be described as the addition of model variance, model bias, and irreducible uncertainty (see Bias–variance tradeoff). According to this relationship, the MSE of the estimators can be used for efficiency comparison, since it includes the information of estimator variance and bias. This is called the MSE criterion.
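The decomposition can be checked empirically. For a finite sample of estimates it is in fact an algebraic identity when the population variance (ddof = 0) is used; the sketch below uses a deliberately biased toy estimator of a Gaussian mean (all numbers invented for illustration):

```python
import numpy as np

# Monte-Carlo check of MSE = Var + Bias^2 for the biased estimator
# theta_hat = 0.9 * (sample mean) of N(theta, 1) data.
rng = np.random.default_rng(3)
theta = 2.0
estimates = np.array([0.9 * rng.normal(theta, 1.0, size=50).mean()
                      for _ in range(20_000)])

mse = np.mean((estimates - theta) ** 2)
decomposed = np.var(estimates) + (np.mean(estimates) - theta) ** 2
gap = abs(mse - decomposed)  # zero up to floating-point error
```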
Which statement is true for the Mean Squared Error (MSE) loss $\operatorname{MSE}(\mathbf{x},y):=\left(f_{\mathbf{w}}(\mathbf{x})-y\right)^{2}$, with $f_{\mathbf{w}}$ a model parametrized by the weights $\mathbf{w}$?
A popular example of a loss function is the squared error loss $L(\theta,\delta)=\|\theta-\delta\|^{2}$, and the risk function for this loss is the mean squared error (MSE). Unfortunately, in general, the risk cannot be minimized since it depends on the unknown parameter $\theta$.
Let us recall that we define the max-margin $M_\star$ as \begin{align*} M_\star = \max_{\wv\in\mathbb R^D, \|\wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots,N \end{align*} and a max-margin separating hyperplane $\bar\wv$ as a solution of this problem: \begin{align*} \bar\wv \in \arg\max_{\wv\in\mathbb R^D, \|\wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots,N \end{align*} Does it imply that the output of the Perceptron algorithm is a max-margin separating hyperplane?
We want to find the maximum-margin hyperplane that divides the points having $y_i = 1$ from those having $y_i = -1$. Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying $\mathbf{w} \cdot \mathbf{x} - b = 0$, where $\cdot$ denotes the dot product and $\mathbf{w}$ the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector $\mathbf{w}$. If the training data are linearly separable, we can select two hyperplanes in such a way that they separate the data and there are no points between them, and then try to maximize their distance.
Let us recall that we define the max-margin $M_\star$ as \begin{align*} M_\star = \max_{\wv\in\mathbb R^D, \|\wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots,N \end{align*} and a max-margin separating hyperplane $\bar\wv$ as a solution of this problem: \begin{align*} \bar\wv \in \arg\max_{\wv\in\mathbb R^D, \|\wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots,N \end{align*} Does it imply that the output of the Perceptron algorithm is a max-margin separating hyperplane?
If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier. More formally, given some training data $\mathcal{D}$, a set of $n$ points of the form $\mathcal{D} = \left\{(\mathbf{x}_i, y_i) \mid \mathbf{x}_i \in \mathbb{R}^p,\ y_i \in \{-1, 1\}\right\}_{i=1}^{n}$, where each $y_i$ is either 1 or −1, indicating the set to which the point $\mathbf{x}_i$ belongs. Each $\mathbf{x}_i$ is a $p$-dimensional real vector.
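To make the margin concrete, here is a small numeric sketch (toy data and weight vectors of my own choosing, not from the passage): for a weight vector $\mathbf{w}$, the margin of the hyperplane $\mathbf{w} \cdot \mathbf{x} = 0$ is the smallest value of $y_i\, \mathbf{w} \cdot \mathbf{x}_i / \|\mathbf{w}\|$ over the data, and different separating hyperplanes achieve different margins:

```python
import math

# Toy linearly separable data: pairs (x, y) with label y in {-1, +1}.
data = [((2.0, 0.0), 1), ((3.0, 1.0), 1),
        ((-2.0, 0.0), -1), ((-3.0, -1.0), -1)]

def margin(w, data):
    """Smallest signed distance y * <w, x> / ||w|| over the data set."""
    norm = math.hypot(*w)
    return min(y * (w[0] * x[0] + w[1] * x[1]) / norm for x, y in data)

print(margin((1.0, 0.0), data))   # separates with margin 2.0
print(margin((1.0, 1.0), data))   # also separates, but with a smaller margin
```

Both vectors separate the data, but only the first attains the larger margin; the max-margin problem asks for the normalized $\mathbf{w}$ maximizing this minimum.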
An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. Given the following definitions:

case class Student(gpa: Double)

def count(c: List[Student], student: Student): Double =
  c.filter(s => s == student).size

val students = List(
  Student(1.0), Student(2.0), Student(3.0),
  Student(4.0), Student(5.0), Student(6.0)
)

And the expression e:

count(students, Student(6.0))

If we change our definitions to:

class Student2(var gpa: Double, var name: String = "*")

def innerCount(course: List[Student2], student: Student2): Double =
  course.filter(s => s == student).size

def count2(course: List[Student2], student: Student2): Double =
  innerCount(course.map(s => new Student2(student.gpa, student.name)), student)

val students2 = List(
  Student2(1.0, "Ana"), Student2(2.0, "Ben"), Student2(3.0, "Cal"),
  Student2(4.0, "Dre"), Student2(5.0, "Egg"), Student2(6.0, "Fra")
)

And our expression to e2:

count2(students2, Student2(6.0, "*"))

Is the expression e2 referentially transparent?
Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now consider another function, such as int plusone(int x) { return x + 1; }. It is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.
An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. Given the following definitions:

case class Student(gpa: Double)

def count(c: List[Student], student: Student): Double =
  c.filter(s => s == student).size

val students = List(
  Student(1.0), Student(2.0), Student(3.0),
  Student(4.0), Student(5.0), Student(6.0)
)

And the expression e:

count(students, Student(6.0))

If we change our definitions to:

class Student2(var gpa: Double, var name: String = "*")

def innerCount(course: List[Student2], student: Student2): Double =
  course.filter(s => s == student).size

def count2(course: List[Student2], student: Student2): Double =
  innerCount(course.map(s => new Student2(student.gpa, student.name)), student)

val students2 = List(
  Student2(1.0, "Ana"), Student2(2.0, "Ben"), Student2(3.0, "Cal"),
  Student2(4.0, "Dre"), Student2(5.0, "Egg"), Student2(6.0, "Fra")
)

And our expression to e2:

count2(students2, Student2(6.0, "*"))

Is the expression e2 referentially transparent?
I call a mode of containment φ referentially transparent if, whenever an occurrence of a singular term t is purely referential in a term or sentence ψ(t), it is purely referential also in the containing term or sentence φ(ψ(t)). The term appeared in its contemporary computer science usage in the discussion of variables in programming languages in Christopher Strachey's seminal set of lecture notes Fundamental Concepts in Programming Languages (1967): One of the most useful properties of expressions is that called by Quine referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.
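The distinction Strachey describes can be shown in a couple of lines (a Python analogue of the question's Scala setup, with a mutable counter playing the role of `var` state; names are mine). `plus_one` can be replaced by its value everywhere; `not_transparent` cannot, because its result depends on hidden mutable state:

```python
# A referentially transparent function: output depends only on the argument.
def plus_one(x):
    return x + 1

# A non-transparent function: output also depends on mutable global state,
# so the expression not_transparent(1) cannot be replaced by a fixed value.
count = 0
def not_transparent(x):
    global count
    count += 1
    return x + count

a = plus_one(1)
b = plus_one(1)
c = not_transparent(1)
d = not_transparent(1)
print(a == b)   # True: same expression, same value
print(c == d)   # False: same expression, different values
```

Evaluating the same expression twice yields 2 and 2 in the transparent case, but 2 and 3 in the stateful one.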
How many times is "call compute" printed when running the following code?

def compute(n: Int) =
  printf("call compute")
  n + 1

LazyList.from(0).drop(2).take(3).map(compute)
Its running time is $O(r)$ but, since lazy evaluation is used, the computation is delayed until the result is forced by the computation. The list s in the data structure has two purposes. This list serves as a counter for $|f| - |r|$; indeed, $|f| = |r|$ if and only if s is the empty list.
How many times is "call compute" printed when running the following code?

def compute(n: Int) =
  printf("call compute")
  n + 1

LazyList.from(0).drop(2).take(3).map(compute)
Almost all calling conventions‍—‌the ways in which subroutines receive their parameters and return results‍—‌use a special stack (the "call stack") to hold information about procedure/function calling and nesting in order to switch to the context of the called function and restore to the caller function when the calling finishes. The functions follow a runtime protocol between caller and callee to save arguments and return value on the stack. Stacks are an important way of supporting nested or recursive function calls.
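Regarding the LazyList question above: Scala's `LazyList` is fully lazy, so whether `compute` runs at all depends on whether the resulting list is ever forced; since the expression's result is discarded, nothing forces it and, to the best of my reading, "call compute" is printed zero times. The Python generator sketch below (an analogue, not the Scala semantics themselves) shows the same deferral, then forces the pipeline explicitly so the three calls become visible:

```python
from itertools import count, islice

calls = []

def compute(n):
    calls.append(n)        # record each invocation, like the printf in the question
    return n + 1

# Analogue of LazyList.from(0).drop(2).take(3).map(compute) with lazy iterators:
pipeline = map(compute, islice(count(0), 2, 5))

# Nothing has been computed yet -- the pipeline is lazy.
assert calls == []

result = list(pipeline)    # forcing the pipeline runs compute exactly 3 times
print(result)              # [3, 4, 5]
print(len(calls))          # 3
```

The side effect happens only at forcing time, which is exactly why counting "printed" lines requires asking whether anything consumes the lazy collection.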
Consider the following algorithm \textsc{Random-Check} that takes as input two subsets $S\subseteq E$ and $T\subseteq E$ of the same ground set $E$. \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \textsc{Random-Check}$(S,T)$ \\[2mm] 1. For each element $e\in E$, independently of other elements randomly set \begin{align*} x_e = \begin{cases} 1 & \mbox{with probability $1/3$} \\ 0 & \mbox{with probability $2/3$} \end{cases} \end{align*} 2. \IF $\sum_{e\in S} x_e = \sum_{e\in T} x_e$ \THEN \\[1mm] 3. \qquad \RETURN true \\[1mm] 4. \ELSE\\ 5. \qquad \RETURN false \end{boxedminipage} \end{center} Note that \textsc{Random-Check}$(S,T)$ returns true with probability $1$ if $S=T$. Your task is to analyze the probability that the algorithm returns true if $S \neq T$. Specifically prove that \textsc{Random-Check}$(S,T)$ returns true with probability at most $2/3$ if $S\neq T$.\\ {\em (In this problem you are asked to prove that \textsc{Random-Check}($S,T$) returns true with probability at most $2/3$ if $S \neq T$. Recall that you are allowed to refer to material covered in the lecture notes.)}
Obviously the result of the comparison always has some probability of error, so the task is similar to finding the minimum in a set of elements using noisy comparisons. There are many classical algorithms for achieving this goal. The most recent one, which achieves the best guarantees, was proposed by Daskalakis and Kamath. This algorithm sets up a fast tournament between the elements of $C_\epsilon$, where the winner $D^*$ of the tournament is the element which is $\epsilon$-close to $D$ (i.e. $d(D^*, D) \leq \epsilon$) with probability at least $1 - \delta$. In order to do so, their algorithm uses $O(\log N / \epsilon^2)$ samples from $D$ and runs in $O(N \log N / \epsilon^2)$ time, where $N = |C_\epsilon|$.
Consider the following algorithm \textsc{Random-Check} that takes as input two subsets $S\subseteq E$ and $T\subseteq E$ of the same ground set $E$. \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \textsc{Random-Check}$(S,T)$ \\[2mm] 1. For each element $e\in E$, independently of other elements randomly set \begin{align*} x_e = \begin{cases} 1 & \mbox{with probability $1/3$} \\ 0 & \mbox{with probability $2/3$} \end{cases} \end{align*} 2. \IF $\sum_{e\in S} x_e = \sum_{e\in T} x_e$ \THEN \\[1mm] 3. \qquad \RETURN true \\[1mm] 4. \ELSE\\ 5. \qquad \RETURN false \end{boxedminipage} \end{center} Note that \textsc{Random-Check}$(S,T)$ returns true with probability $1$ if $S=T$. Your task is to analyze the probability that the algorithm returns true if $S \neq T$. Specifically prove that \textsc{Random-Check}$(S,T)$ returns true with probability at most $2/3$ if $S\neq T$.\\ {\em (In this problem you are asked to prove that \textsc{Random-Check}($S,T$) returns true with probability at most $2/3$ if $S \neq T$. Recall that you are allowed to refer to material covered in the lecture notes.)}
By Yao's principle, it also applies to the expected number of comparisons for a randomized algorithm on its worst-case input. For deterministic algorithms, it has been shown that selecting the $k$th element requires $(1 + H(k/n))n + \Omega(\sqrt{n})$ comparisons, where $H$ is the binary entropy function. The special case of median-finding has a slightly larger lower bound on the number of comparisons, at least $(2 + \varepsilon)n$, for $\varepsilon \approx 2^{-80}$.
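The \textsc{Random-Check} bound can be sanity-checked empirically. For the particular instance below (my own choice of $S$ and $T$), the shared elements cancel, so the acceptance probability reduces to $\Pr[X = Y]$ with $X \sim \mathrm{Bin}(|S \setminus T|, 1/3)$ and $Y \sim \mathrm{Bin}(|T \setminus S|, 1/3)$; a quick case analysis gives $4/9$, comfortably below the $2/3$ bound:

```python
import random

random.seed(1)

def random_check(S, T, E):
    """One run of Random-Check: draw x_e in {0,1} with P(1) = 1/3 per element."""
    x = {e: 1 if random.random() < 1 / 3 else 0 for e in E}
    return sum(x[e] for e in S) == sum(x[e] for e in T)

E = range(6)
S, T = {0, 1, 2}, {0, 1, 3, 4}      # S != T; symmetric difference sizes 1 and 2
trials = 100_000
hits = sum(random_check(S, T, E) for _ in range(trials))
p = hits / trials
print(p)            # empirically near 4/9, well below 2/3
```

This is only a check for one instance, of course; the exercise asks for a proof that no pair $S \neq T$ exceeds $2/3$.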
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int What should replace ??? so that the following function computes the underlying set of the m multiset, formed from its distinct elements? For example, the underlying set of the multiset M = {'a', 'a', 'c', 'd', 'd'} is S = {'a', 'c', 'd'}. type Set = Char => Boolean def multisetToSet(m: Multiset): Set = ???
A multiset may be formally defined as an ordered pair $(A, m)$ where $A$ is the underlying set of the multiset, formed from its distinct elements, and $m\colon A \to \mathbb{Z}^{+}$ is a function from $A$ to the set of positive integers, giving the multiplicity (that is, the number of occurrences) of the element $a$ in the multiset as the number $m(a)$. (It is also possible to allow multiplicity 0 or $\infty$, especially when considering submultisets. This article is restricted to finite, positive multiplicities.)
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int What should replace ??? so that the following function computes the underlying set of the m multiset, formed from its distinct elements? For example, the underlying set of the multiset M = {'a', 'a', 'c', 'd', 'd'} is S = {'a', 'c', 'd'}. type Set = Char => Boolean def multisetToSet(m: Multiset): Set = ???
The multiset construction, denoted $\mathcal{A} = \mathfrak{M}\{\mathcal{B}\}$, is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times.
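For the multiset question above, the natural answer is the predicate that tests for positive multiplicity — in Scala, `(c: Char) => m(c) > 0`. A Python sketch of the same functional encoding (names are mine):

```python
from collections import Counter
from typing import Callable

Multiset = Callable[[str], int]    # returns 0 for absent chars, multiplicity otherwise
CharSet = Callable[[str], bool]

def multiset(*elems: str) -> Multiset:
    counts = Counter(elems)
    return lambda c: counts[c]

# The underlying set keeps exactly the characters with positive multiplicity,
# mirroring the Scala answer `c => m(c) > 0`.
def multiset_to_set(m: Multiset) -> CharSet:
    return lambda c: m(c) > 0

m = multiset('a', 'a', 'c', 'd', 'd')
s = multiset_to_set(m)
print([s(c) for c in "abcd"])   # [True, False, True, True]
```

Membership in the underlying set is simply "appears at least once", which the composed function computes without ever materializing the set.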
Ignoring their different evaluation characteristics in this exercise, we consider here that filter and withFilter are equivalent. To which expression is the following for-loop translated?

def mystery7(xs: List[Int], ys: List[Int]): List[Int] =
  for
    y <- ys if y < 100
    x <- xs if x < 20
  yield
    if y < x then 0 else y - x
Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach.
Ignoring their different evaluation characteristics in this exercise, we consider here that filter and withFilter are equivalent. To which expression is the following for-loop translated?

def mystery7(xs: List[Int], ys: List[Int]): List[Int] =
  for
    y <- ys if y < 100
    x <- xs if x < 20
  yield
    if y < x then 0 else y - x
A for-loop is generally equivalent to a while-loop: factorial := 1 for counter from 2 to 5 factorial := factorial * counter counter := counter - 1 print counter + "! equals " + factorial is equivalent to: factorial := 1 counter := 1 while counter < 5 counter := counter + 1 factorial := factorial * counter print counter + "! equals " + factorial as demonstrated by the output of the variables.
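For the mystery7 question, the standard Scala desugaring of a for-expression with guards is a chain of `withFilter`/`flatMap`/`map` calls: `ys.withFilter(y => y < 100).flatMap(y => xs.withFilter(x => x < 20).map(x => if y < x then 0 else y - x))`. The Python sketch below mirrors that translation and checks it against the direct comprehension (example inputs are mine):

```python
def mystery7(xs, ys):
    # Direct comprehension, mirroring the Scala for-expression:
    return [0 if y < x else y - x
            for y in ys if y < 100
            for x in xs if x < 20]

def mystery7_translated(xs, ys):
    # The same computation as an explicit filter / flatMap / map chain,
    # analogous to ys.withFilter(...).flatMap(y => xs.withFilter(...).map(...)):
    out = []
    for y in filter(lambda y: y < 100, ys):
        out.extend(map(lambda x, y=y: 0 if y < x else y - x,
                       filter(lambda x: x < 20, xs)))
    return out

xs, ys = [5, 25, 10], [50, 200, 8]
print(mystery7(xs, ys))   # [45, 40, 3, 0]
```

The outer generator becomes a `flatMap` (here, `extend`) because an inner generator follows it; only the innermost generator becomes a plain `map`.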
Consider the LP-rounding algorithm for Set Cover that works as follows: \begin{enumerate} \item Solve the LP relaxation to obtain an optimal solution $x^*$. \item Return the solution $\{S: x^*_S >0\}$, i.e., containing all sets with a positive value in the fractional solution. \end{enumerate} Use the complementarity slackness conditions to prove that the algorithm is an $f$-approximation algorithm, where $f$ is the frequency (i.e., the maximum number of sets that any element belongs to).
One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding (Raghavan & Tompson 1987). Given a fractional cover, in which each set Si has weight wi, choose randomly the value of each 0–1 indicator variable xi to be 1 with probability wi × (ln n +1), and 0 otherwise. Then any element ej has probability less than 1/(e×n) of remaining uncovered, so with constant probability all elements are covered.
Consider the LP-rounding algorithm for Set Cover that works as follows: \begin{enumerate} \item Solve the LP relaxation to obtain an optimal solution $x^*$. \item Return the solution $\{S: x^*_S >0\}$, i.e., containing all sets with a positive value in the fractional solution. \end{enumerate} Use the complementarity slackness conditions to prove that the algorithm is an $f$-approximation algorithm, where $f$ is the frequency (i.e., the maximum number of sets that any element belongs to).
The cover generated by this technique has total size, with high probability, (1+o(1))(ln n)W, where W is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum. As Young (1995) showed, both the random part of this algorithm and the need to construct an explicit solution to the linear programming relaxation may be eliminated using the method of conditional probabilities, leading to a deterministic greedy algorithm for set cover, known already to Lovász, that repeatedly selects the set that covers the largest possible number of remaining uncovered elements.
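The deterministic greedy algorithm mentioned at the end of the passage — repeatedly select the set covering the most remaining uncovered elements — is short enough to sketch (the instance is an illustrative one of my own):

```python
def greedy_set_cover(universe, sets):
    """Greedy ln(n)-approximation for set cover: repeatedly pick the set
    that covers the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("instance is not coverable")
        cover.append(best)
        uncovered -= best
    return cover

universe = range(1, 8)
sets = [{1, 2, 3, 4}, {4, 5}, {5, 6, 7}, {1, 5}, {2, 6}]
cover = greedy_set_cover(universe, sets)
print(len(cover))   # 2 sets suffice here: {1,2,3,4} then {5,6,7}
```

Note this is the multiplicative ln(n) guarantee of the passage, distinct from the f-approximation of the LP-rounding question above; the two bounds are incomparable in general.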
Show that, given a matroid $\mathcal{M} = (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} (as defined in the lecture notes) always returns a base of the matroid.
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a function with respect to which one can perform a greedy algorithm. A weight function $w: E \to \mathbb{R}^{+}$ for a matroid $M = (E, I)$ assigns a strictly positive weight to each element of $E$. We extend the function to subsets of $E$ by summation; $w(A)$ is the sum of $w(x)$ over $x$ in $A$. A matroid with an associated weight function is called a weighted matroid.
Show that, given a matroid $\mathcal{M} = (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} (as defined in the lecture notes) always returns a base of the matroid.
A weighted matroid is a matroid together with a function from its elements to the nonnegative real numbers. The weight of a subset of elements is defined to be the sum of the weights of the elements in the subset. The greedy algorithm can be used to find a maximum-weight basis of the matroid, by starting from the empty set and repeatedly adding one element at a time, at each step choosing a maximum-weight element among the elements whose addition would preserve the independence of the augmented set. This algorithm does not need to know anything about the details of the matroid's definition, as long as it has access to the matroid through an independence oracle, a subroutine for testing whether a set is independent. This optimization algorithm may be used to characterize matroids: if a family F of sets, closed under taking subsets, has the property that, no matter how the sets are weighted, the greedy algorithm finds a maximum-weight set in the family, then F must be the family of independent sets of a matroid. The notion of matroid has been generalized to allow for other types of sets on which a greedy algorithm gives optimal solutions; see greedoid and matroid embedding for more information.
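The matroid greedy algorithm described above needs only an independence oracle, which is what makes the "always returns a base" claim purely structural. A minimal sketch (a hypothetical instance of mine: the graphic matroid of a four-vertex graph, where a set of edges is independent iff it is acyclic):

```python
def greedy_basis(elements, weight, independent):
    """Generic matroid greedy: scan elements by decreasing weight and keep
    each one whose addition preserves independence. The result is maximal
    independent, hence a basis of the matroid."""
    basis = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(basis + [e]):
            basis.append(e)
    return basis

# Independence oracle for the graphic matroid: edge set is acyclic
# (checked with a throwaway union-find built per query -- fine for a demo).
def acyclic(edges):
    parent = {}
    def find(v):
        while parent.get(v, v) != v:
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
w = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 5, ("c", "d"): 1}
basis = greedy_basis(edges, lambda e: w[e], acyclic)
print(basis)   # a maximum-weight spanning tree of the graph
```

Here the greedy keeps (a,c) and (a,b), rejects (b,c) because it would close a cycle, then keeps (c,d) — three edges, a spanning tree, i.e. a base of the graphic matroid.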
Consider the following code snippet:

type Logger[T] = T => Unit

def log[T](s: T)(using log: Logger[T]): Unit = log(s)

var count = 0

given countingLogger: Logger[String] = s =>
  count = count + 1
  println(s)

given (using log: Logger[String]): Logger[Boolean] =
  b => log(if b then "TRUE" else "FALSE")

def h() =
  given Logger[String] = s => ()
  log("Inside h")
  log(false)

h()
log(true)
count

What is the value of the last line?
Principal value forms: $\operatorname{Log}(1) = 0$ and $\operatorname{Log}(e) = 1$. Multiple-value forms, for any integer $k$: $\log(1) = 0 + 2\pi i k$ and $\log(e) = 1 + 2\pi i k$.
Consider the following code snippet:

type Logger[T] = T => Unit

def log[T](s: T)(using log: Logger[T]): Unit = log(s)

var count = 0

given countingLogger: Logger[String] = s =>
  count = count + 1
  println(s)

given (using log: Logger[String]): Logger[Boolean] =
  b => log(if b then "TRUE" else "FALSE")

def h() =
  given Logger[String] = s => ()
  log("Inside h")
  log(false)

h()
log(true)
count

What is the value of the last line?
$$\begin{aligned}\log(x+y) &= \log(x + x \cdot y/x)\\ &= \log(x + x \cdot \exp(\log(y/x)))\\ &= \log(x \cdot (1 + \exp(\log(y) - \log(x))))\\ &= \log(x) + \log(1 + \exp(\log(y) - \log(x)))\\ &= x' + \log\left(1 + \exp\left(y' - x'\right)\right)\end{aligned}$$ The formula above is more accurate than $\log\left(e^{x'} + e^{y'}\right)$, provided one takes advantage of the asymmetry in the addition formula: $x'$ should be the larger (least negative) of the two operands. This also produces the correct behavior if one of the operands is floating-point negative infinity, which corresponds to a probability of zero. However, $-\infty + \log\left(1 + \exp\left(y' - (-\infty)\right)\right) = -\infty + \infty$ is indeterminate and will result in NaN.
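The final formula, $x' + \log(1 + \exp(y' - x'))$ with $x'$ the larger operand, is easy to implement robustly; the sketch below also guards the $-\infty$ case that the derivation warns produces NaN (function name is mine):

```python
import math

def log_add(x, y):
    """Compute log(exp(x) + exp(y)) stably for log-domain values x, y:
    pull out the larger operand so the exp never overflows."""
    if x == -math.inf:
        return y            # adding probability zero
    if y == -math.inf:
        return x
    hi, lo = max(x, y), min(x, y)
    return hi + math.log1p(math.exp(lo - hi))

print(log_add(math.log(2), math.log(3)))   # log(5)
print(log_add(1000.0, 1000.0))             # 1000 + log(2), no overflow
print(log_add(-math.inf, 0.0))             # 0.0, not NaN
```

Using `log1p` rather than `log(1 + ...)` keeps precision when the exponent difference is large, the same asymmetry the derivation exploits.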
Consider an array $A[1,\ldots, n]$ consisting of the $n$ distinct numbers $1,2, \ldots, n$. We are further guaranteed that $A$ is almost sorted in the following sense: $A[i] \neq i$ for at most $\sqrt{n}$ values of $i$. What are tight asymptotic worst-case running times for Insertion Sort and Merge Sort on such instances?
Consider performing insertion sort on $n$ numbers on a random access machine. The best case for the algorithm is when the numbers are already sorted, which takes $O(n)$ steps to perform the task. However, the worst-case input for the algorithm is when the numbers are reverse sorted, and it takes $O(n^2)$ steps to sort them; therefore the worst-case time complexity of insertion sort is $O(n^2)$.
Consider an array $A[1,\ldots, n]$ consisting of the $n$ distinct numbers $1,2, \ldots, n$. We are further guaranteed that $A$ is almost sorted in the following sense: $A[i] \neq i$ for at most $\sqrt{n}$ values of $i$. What are tight asymptotic worst-case running times for Insertion Sort and Merge Sort on such instances?
This results in a worst case of O(n²) time for this sorting algorithm. This worst case occurs when the algorithm operates on an already sorted set, or one that is nearly sorted, reversed or nearly reversed. Expected O(n log n) time can however be achieved by shuffling the array, but this does not help for equal items.
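For the almost-sorted question above: insertion sort's running time is $\Theta(n + I)$ where $I$ is the number of inversions, and with at most $\sqrt{n}$ misplaced values each inversion involves a misplaced value, so $I = O(\sqrt{n} \cdot n) = O(n^{3/2})$; merge sort remains $\Theta(n \log n)$ regardless. A quick empirical check (the instance construction is my own):

```python
def insertion_sort_shifts(a):
    """Insertion sort; returns the sorted copy and the number of element
    shifts, which equals the number of inversions in the input."""
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            shifts += 1
            j -= 1
        a[j + 1] = key
    return a, shifts

n = 10_000
arr = list(range(1, n + 1))
# Displace sqrt(n) entries as far as possible: swap the first and last
# sqrt(n)/2 positions pairwise, so the array is wrong at sqrt(n) positions.
k = int(n ** 0.5) // 2
for i in range(k):
    arr[i], arr[n - 1 - i] = arr[n - 1 - i], arr[i]

sorted_arr, shifts = insertion_sort_shifts(arr)
print(shifts)   # on the order of n^{3/2}, far below the n^2 worst case
```

The shift count lands near $n^{3/2} \approx 10^6$ here, matching the $\Theta(n^{3/2})$ analysis rather than the generic $\Theta(n^2)$ bound.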
Is “type-directed programming” a language mechanism that infers types from values?
It is through recognition of the eventual reduction of expressions to implicitly typed atomic values that the compiler for a type-inferring language is able to compile a program completely without type annotations. In complex forms of higher-order programming and polymorphism, it is not always possible for the compiler to infer as much, and type annotations are occasionally necessary for disambiguation. For instance, type inference with polymorphic recursion is known to be undecidable. Furthermore, explicit type annotations can be used to optimize code by forcing the compiler to use a more specific (faster/smaller) type than it had inferred. Some methods for type inference are based on constraint satisfaction or satisfiability modulo theories.
Is “type-directed programming” a language mechanism that infers types from values?
The majority of them use a simple form of type inference; the Hindley-Milner type system can provide more complete type inference. The ability to infer types automatically makes many programming tasks easier, leaving the programmer free to omit type annotations while still permitting type checking. In some programming languages, all values have a data type explicitly declared at compile time, limiting the values a particular expression can take on at run-time.
Consider the following linear program for finding a maximum-weight matching: \begin{align*} \text{Maximize} \quad &\sum_{e\in E} x_e w_e\\ \text{Subject to} \quad &\sum_{e \in \delta(v)} x_e \leq 1 \quad \forall v \in V \\ &x_e \geq 0 \quad \forall e \in E \end{align*} (This is similar to the perfect matching problem seen in the lecture, except that we have inequality constraints instead of equality constraints.) Prove that, for bipartite graphs, any extreme point is integral.
Suppose each edge of the graph has a weight. A fractional matching of maximum weight in a graph can be found by linear programming. In a bipartite graph, it is possible to convert a maximum-weight fractional matching to a maximum-weight integral matching of the same size, in the following way: let f be the fractional matching, and let H be a subgraph of G containing only the edges e with non-integral fraction, 0 < f(e) < 1.
Consider the following linear program for finding a maximum-weight matching: \begin{align*} \text{Maximize} \quad &\sum_{e\in E} x_e w_e\\ \text{Subject to} \quad &\sum_{e \in \delta(v)} x_e \leq 1 \quad \forall v \in V \\ &x_e \geq 0 \quad \forall e \in E \end{align*} (This is similar to the perfect matching problem seen in the lecture, except that we have inequality constraints instead of equality constraints.) Prove that, for bipartite graphs, any extreme point is integral.
In bipartite graphs, if a single maximum-cardinality matching is known, it is possible to find all maximally matchable edges in linear time, $O(V + E)$. If a maximum matching is not known, it can be found by existing algorithms. In this case, the resulting overall runtime is $O(V^{1/2} E)$ for general bipartite graphs and $O((V / \log V)^{1/2} E)$ for dense bipartite graphs with $E = \Theta(V^2)$.
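Complementing the integrality claim of the question: because extreme points of the bipartite matching polytope are integral, a purely combinatorial algorithm recovers an optimal integral matching. A minimal sketch of the augmenting-path (Kuhn) approach on a toy graph of my own choosing:

```python
def max_bipartite_matching(adj, n_left):
    """Kuhn's augmenting-path algorithm, O(V * E): returns the matching size
    and an integral matching as a dict right_vertex -> left_vertex."""
    match_right = {}

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere.
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    size = sum(try_augment(u, set()) for u in range(n_left))
    return size, match_right

# Left vertices 0..2, right vertices 0..2; adj[u] lists right neighbours of u.
adj = {0: [0, 1], 1: [0], 2: [1, 2]}
size, matching = max_bipartite_matching(adj, 3)
print(size)   # 3: a perfect, hence integral, matching
```

Each augmentation flips one alternating path, so every intermediate solution is itself a 0/1 vertex of the matching polytope — the combinatorial shadow of the LP integrality result.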
Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use, subject to the constraint that no bin should be filled to more than its capacity. An example is as follows: \begin{center} \vspace{4mm} \includegraphics[width=9cm]{binpackingExample2} \end{center} Here, seven items have already arrived that we have packed in three bins. The newly arriving item of size $1/6$ can either be packed in the first bin, third bin, or in a new (previously unused) bin. It cannot be packed in the second bin since $1/3 + 1/3 + 1/4 + 1/6 > 1$. If it is packed in the first or third bin, then we still use three bins, whereas if we pack it in a new bin, then we use four bins. In this problem you should, assuming that all items have size at most $0 <\epsilon\leq 1$, design and analyze an online algorithm for the online bin-packing problem that uses at most \begin{align} \frac{1}{1-\epsilon} \mbox{OPT} + 1 \mbox{ bins,} \label{eq:binguarantee} \end{align} where $\mbox{OPT}$ denotes the minimum number of bins an optimal packing uses. In the above example, $\epsilon = 1/3$. \\[2mm] {\em (In this problem you are asked to (i) design the online algorithm and (ii) prove that it satisfies the guarantee~\eqref{eq:binguarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}
Several approximation algorithms for the general bin-packing problem use the following scheme: separate the items into "small" (smaller than eB, for some fraction e in (0,1)) and "large" (at least eB). Handle the large items first: round the item sizes in some way such that the number of different sizes is at most some constant d, then solve the resulting high-multiplicity problem. Finally, allocate the small items greedily, e.g. with next-fit bin packing. If no new bins are created, then we are done. If new bins are created, this means that all bins (except maybe the last one) are full up to at least (1−e)B; therefore the number of bins is at most OPT/(1−e)+1 ≤ (1+2e)OPT+1. The algorithms differ in how they round the instance.
Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use, subject to the constraint that no bin should be filled to more than its capacity. An example is as follows: \begin{center} \vspace{4mm} \includegraphics[width=9cm]{binpackingExample2} \end{center} Here, seven items have already arrived that we have packed in three bins. The newly arriving item of size $1/6$ can either be packed in the first bin, third bin, or in a new (previously unused) bin. It cannot be packed in the second bin since $1/3 + 1/3 + 1/4 + 1/6 > 1$. If it is packed in the first or third bin, then we still use three bins, whereas if we pack it in a new bin, then we use four bins. In this problem you should, assuming that all items have size at most $0 <\epsilon\leq 1$, design and analyze an online algorithm for the online bin-packing problem that uses at most \begin{align} \frac{1}{1-\epsilon} \mbox{OPT} + 1 \mbox{ bins,} \label{eq:binguarantee} \end{align} where $\mbox{OPT}$ denotes the minimum number of bins an optimal packing uses. In the above example, $\epsilon = 1/3$. \\[2mm] {\em (In this problem you are asked to (i) design the online algorithm and (ii) prove that it satisfies the guarantee~\eqref{eq:binguarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}
However, if space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated. Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass.
In class we saw that Karger's min-cut algorithm implies that an undirected graph has at most $n \choose 2$ minimum cuts. Show that this result is tight by giving a graph with $n$ vertices and $n \choose 2$ minimum cuts.
The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer-Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut. In this case, the minimum cut equals the edge connectivity of the graph. A generalization of the minimum cut problem without terminals is the minimum k-cut, in which the goal is to partition the graph into at least k connected components by removing as few edges as possible. For a fixed value of k, this problem can be solved in polynomial time, though the algorithm is not practical for large k.
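For the tightness question above, one natural candidate family to examine is the cycle on $n$ vertices; the brute force below (an illustration with hypothetical helper names, not a proof) counts the minimum cuts of small cycles by enumerating all bipartitions containing vertex 0:

```python
from itertools import combinations

def count_min_cuts_cycle(n):
    """Brute-force the minimum cuts of the n-cycle. A cut is a bipartition
    (S, V\\S); to count each bipartition once, fix vertex 0 inside S."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    best, count = None, 0
    for r in range(n):  # choose which other vertices join vertex 0
        for extra in combinations(range(1, n), r):
            side = {0, *extra}
            if len(side) == n:  # S must be a proper subset
                continue
            crossing = sum((u in side) != (v in side) for u, v in edges)
            if best is None or crossing < best:
                best, count = crossing, 1
            elif crossing == best:
                count += 1
    return best, count
```

Both counts below equal $\binom{n}{2}$, which is consistent with (and matches) the upper bound from Karger's analysis.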
In class we saw that Karger's min-cut algorithm implies that an undirected graph has at most $n \choose 2$ minimum cuts. Show that this result is tight by giving a graph with $n$ vertices and $n \choose 2$ minimum cuts.
This reduces the complexity to $O(n^4)$ and is sound since, if a cut of capacity less than k exists, it is bound to separate u from some other vertex. It can be further improved by an algorithm of Gabow that runs in worst case $O(n^3)$ time. The Karger–Stein variant of Karger's algorithm provides a faster randomized algorithm for determining the connectivity, with expected runtime $O(n^2 \log^3 n)$. A related problem: finding the minimum k-edge-connected spanning subgraph of G (that is: select as few edges of G as possible such that the selection is k-edge-connected) is NP-hard for $k \geq 2$.
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Recall that a vertex cover instance is specified by an undirected graph $G= (V,E)$ and non-negative vertex-weights $w: V \rightarrow \mathbb{R}_+$. The task is to find a vertex cover $S \subseteq V$ of minimum total weight $\sum_{i\in S} w(i)$, where a subset $S \subseteq V$ of the vertices is said to be a vertex cover if for every edge $\{i,j\} \in E$, $i\in S$ or $j\in S$. The natural LP relaxation (as seen in class) is as follows: \begin{align*} \textbf{minimize} \hspace{0.4cm} & \sum_{i\in V} w(i) x_i \\ \textbf{subject to}\hspace{0.4cm} & x_i + x_j \geq 1 \qquad \mbox{for $\{i,j\} \in E$}\\ & \hspace{0.9cm} x_i \geq 0 \qquad \mbox{for $i\in V$} \end{align*} Given a fractional solution $x$ to the above linear program, a natural approach to solve the vertex cover problem is to give a rounding algorithm. Indeed, in class we analyzed a simple rounding scheme: output the vertex cover $S = \{i\in V: x_i \geq 1/2\}$. We proved that $w(S) \leq 2 \sum_{i\in V} w(i) x_i$. In this subproblem, your task is to prove that the following alternative randomized rounding scheme gives the same guarantee in expectation. The randomized rounding scheme is as follows: \begin{itemize} \item Select $t \in [0,1/2]$ uniformly at random. \item Output $S_t = \{i\in V: x_i \geq t\}$. \end{itemize} Prove (i) that the output $S_t$ is a feasible vertex cover solution (for any $t\in [0,1/2]$) and (ii) that $\E[\sum_{i\in S_t} w(i)] \leq 2 \cdot \sum_{i\in V} w(i) x_i$ where the expectation is over the random choice of $t$.
We remark that you \emph{cannot} say that $x$ is half-integral as $x$ may not be an extreme point solution to the linear program. \\ {\em (In this problem you are asked to prove that the randomized rounding scheme (i) always outputs a feasible solution and (ii) the expected cost of the output solution is at most twice the cost of the linear programming solution. Recall that you are allowed to refer to material covered in the lecture notes.)}
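The randomized rounding scheme described above can be sketched directly (function and variable names are mine; the in-code assertion checks exactly claim (i), that $S_t$ covers every edge):

```python
import random

def threshold_round(x, edges, t=None):
    """Round a fractional vertex-cover solution x (a dict vertex -> value)
    with a threshold t drawn uniformly from [0, 1/2]."""
    if t is None:
        t = random.uniform(0, 0.5)
    S = {v for v, xv in x.items() if xv >= t}
    # claim (i): x_i + x_j >= 1 forces max(x_i, x_j) >= 1/2 >= t
    assert all(u in S or v in S for u, v in edges)
    return S
```

Note that the deterministic scheme from class is the special case $t = 1/2$.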
Assume that every vertex has an associated cost of $c(v) \geq 0$. The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP). This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is $2$, so its relaxation (allowing each variable to be in the interval from 0 to 1, rather than requiring the variables to be only 0 or 1) gives a factor-$2$ approximation algorithm for the minimum vertex cover problem. Furthermore, the linear programming relaxation of that ILP is half-integral, that is, there exists an optimal solution for which each entry $x_v$ is either 0, 1/2, or 1. A 2-approximate vertex cover can be obtained from this fractional solution by selecting the subset of vertices whose variables are nonzero.
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Recall that a vertex cover instance is specified by an undirected graph $G= (V,E)$ and non-negative vertex-weights $w: V \rightarrow \mathbb{R}_+$. The task is to find a vertex cover $S \subseteq V$ of minimum total weight $\sum_{i\in S} w(i)$, where a subset $S \subseteq V$ of the vertices is said to be a vertex cover if for every edge $\{i,j\} \in E$, $i\in S$ or $j\in S$. The natural LP relaxation (as seen in class) is as follows: \begin{align*} \textbf{minimize} \hspace{0.4cm} & \sum_{i\in V} w(i) x_i \\ \textbf{subject to}\hspace{0.4cm} & x_i + x_j \geq 1 \qquad \mbox{for $\{i,j\} \in E$}\\ & \hspace{0.9cm} x_i \geq 0 \qquad \mbox{for $i\in V$} \end{align*} Given a fractional solution $x$ to the above linear program, a natural approach to solve the vertex cover problem is to give a rounding algorithm. Indeed, in class we analyzed a simple rounding scheme: output the vertex cover $S = \{i\in V: x_i \geq 1/2\}$. We proved that $w(S) \leq 2 \sum_{i\in V} w(i) x_i$. In this subproblem, your task is to prove that the following alternative randomized rounding scheme gives the same guarantee in expectation. The randomized rounding scheme is as follows: \begin{itemize} \item Select $t \in [0,1/2]$ uniformly at random. \item Output $S_t = \{i\in V: x_i \geq t\}$. \end{itemize} Prove (i) that the output $S_t$ is a feasible vertex cover solution (for any $t\in [0,1/2]$) and (ii) that $\E[\sum_{i\in S_t} w(i)] \leq 2 \cdot \sum_{i\in V} w(i) x_i$ where the expectation is over the random choice of $t$.
We remark that you \emph{cannot} say that $x$ is half-integral as $x$ may not be an extreme point solution to the linear program. \\ {\em (In this problem you are asked to prove that the randomized rounding scheme (i) always outputs a feasible solution and (ii) the expected cost of the output solution is at most twice the cost of the linear programming solution. Recall that you are allowed to refer to material covered in the lecture notes.)}
The vertex cover problem involves finding a set of vertices that touches every edge of the graph. It is NP-hard but can be approximated to within an approximation ratio of two, for instance by taking the endpoints of the matched edges in any maximal matching. Evidence that this is the best possible approximation ratio of a polynomial-time approximation algorithm is provided by the fact that, when represented as a semidefinite program, the problem has an integrality gap of two; this gap is the ratio between the solution value of the integer solution (a valid vertex cover) and of its semidefinite relaxation. According to the unique games conjecture, for many problems such as this the optimal approximation ratio is provided by the integrality gap of their semidefinite relaxation.
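The matching-based 2-approximation mentioned above is short enough to sketch (a minimal, unweighted illustration; the greedy loop builds a maximal matching implicitly by taking any edge with both endpoints still uncovered):

```python
def matching_vertex_cover(edges):
    """2-approximate unweighted vertex cover: take both endpoints of
    every edge in a greedily built maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge is unmatched: add both endpoints
    return cover
```

Since the matched edges are vertex-disjoint, any cover must pick at least one endpoint per matched edge, so this cover is at most twice optimal.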
Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a subtype nor a supertype of T2. What is the correct subtyping relationship between Iterable[Pair[A, Y]] => Y and Map[A, Y] => X?
Subtyping and inheritance are independent (orthogonal) relationships. They may coincide, but neither is a special case of the other. In other words, between two types S and T, all combinations of subtyping and inheritance are possible: S is neither a subtype nor a derived type of T; S is a subtype but not a derived type of T; S is not a subtype but is a derived type of T; S is both a subtype and a derived type of T. The first case is illustrated by independent types, such as Boolean and Float. The second case can be illustrated by the relationship between Int32 and Int64.
Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a subtype nor a supertype of T2. What is the correct subtyping relationship between Iterable[Pair[A, Y]] => Y and Map[A, Y] => X?
Sound structural subtyping rules for types other than object types are also well known. Implementations of programming languages with subtyping fall into two general classes: inclusive implementations, in which the representation of any value of type A also represents the same value at type B if A <: B, and coercive implementations, in which a value of type A can be automatically converted into one of type B. The subtyping induced by subclassing in an object-oriented language is usually inclusive; subtyping relations that relate integers and floating-point numbers, which are represented differently, are usually coercive. In almost all type systems that define a subtyping relation, it is reflexive (meaning A <: A for any type A) and transitive (meaning that if A <: B and B <: C then A <: C). This makes it a preorder on types.
In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate the quantity \begin{align*} Q = \sum_{i=1}^n (x_i + y_i)^2\,. \end{align*} A trivial solution is for Alice to transfer all of her string $x$ to Bob who then computes $Q$ exactly. However this requires Alice to send $\Theta(n \log n)$ bits of information to Bob. In the following, we use randomization and approximation to achieve a huge improvement on the number of bits transferred from Alice to Bob. Indeed, for a small parameter $\epsilon > 0$, your task is to devise and analyze a protocol of the following type: \begin{itemize} \item On input $x$, Alice uses a randomized algorithm to compute a message $m$ that consists of $O(\log (n)/\epsilon^2)$ bits. She then transmits the message $m$ to Bob. \item Bob then, as a function of $y$ and the message $m$, computes an estimate $Z$. \end{itemize} Your protocol should ensure that \begin{align} \label{eq:guaranteeStream} \Pr[| Z - Q| \geq \epsilon Q] \leq 1/3\,, \end{align} where the probability is over the randomness used by Alice.\\ {\em (In this problem you are asked to (i) explain how Alice computes the message $m$ of $O(\log(n)/\epsilon^2)$ bits (ii) explain how Bob calculates the estimate $Z$, and (iii) prove that the calculated estimate satisfies~\eqref{eq:guaranteeStream}. Recall that you are allowed to refer to material covered in the lecture notes.) }
The algorithm can be repeated many times to increase its accuracy. This fits the requirements for a randomized communication algorithm. This shows that if Alice and Bob share a random string of length n, they can send one bit to each other to compute $EQ(x,y)$. In the next section, it is shown that Alice and Bob can exchange only $O(\log n)$ bits that are as good as sharing a random string of length n. Once that is shown, it follows that EQ can be computed in $O(\log n)$ messages.
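The shared-randomness protocol alluded to above amounts to comparing random parities of the two strings; a toy version (function names and the repetition count are my choices):

```python
import random

def eq_round(x, y, r):
    """One round of the shared-randomness equality protocol: both parties
    compute the parity of their bit string against the shared random
    string r; Alice sends her single bit to Bob, who compares."""
    alice_bit = sum(xi & ri for xi, ri in zip(x, r)) % 2
    bob_bit = sum(yi & ri for yi, ri in zip(y, r)) % 2
    return alice_bit == bob_bit

def eq_protocol(x, y, rounds=30, rng=random.Random(0)):
    """Repeat to drive the one-sided error down to 2**(-rounds):
    equal strings always agree; unequal ones disagree with prob. 1/2."""
    n = len(x)
    return all(eq_round(x, y, [rng.randrange(2) for _ in range(n)])
               for _ in range(rounds))
```

Equal strings pass every round with certainty, so the protocol never rejects them; each round catches unequal strings with probability exactly 1/2.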
In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate the quantity \begin{align*} Q = \sum_{i=1}^n (x_i + y_i)^2\,. \end{align*} A trivial solution is for Alice to transfer all of her string $x$ to Bob who then computes $Q$ exactly. However this requires Alice to send $\Theta(n \log n)$ bits of information to Bob. In the following, we use randomization and approximation to achieve a huge improvement on the number of bits transferred from Alice to Bob. Indeed, for a small parameter $\epsilon > 0$, your task is to devise and analyze a protocol of the following type: \begin{itemize} \item On input $x$, Alice uses a randomized algorithm to compute a message $m$ that consists of $O(\log (n)/\epsilon^2)$ bits. She then transmits the message $m$ to Bob. \item Bob then, as a function of $y$ and the message $m$, computes an estimate $Z$. \end{itemize} Your protocol should ensure that \begin{align} \label{eq:guaranteeStream} \Pr[| Z - Q| \geq \epsilon Q] \leq 1/3\,, \end{align} where the probability is over the randomness used by Alice.\\ {\em (In this problem you are asked to (i) explain how Alice computes the message $m$ of $O(\log(n)/\epsilon^2)$ bits (ii) explain how Bob calculates the estimate $Z$, and (iii) prove that the calculated estimate satisfies~\eqref{eq:guaranteeStream}. Recall that you are allowed to refer to material covered in the lecture notes.) }
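One protocol of the required flavor (a sketch, not necessarily the intended solution: it assumes Alice and Bob share the random signs, e.g. via a common seed, and the constants and names are mine; the Chebyshev-based error analysis is omitted) uses $k = O(1/\epsilon^2)$ random-sign sketches in the style of the AMS estimator:

```python
import random

def estimate_Q(x, y, eps=0.5, seed=1):
    """Sign-sketch estimate of Q = sum_i (x_i + y_i)^2. Alice's message
    is just the k sketch values a_1..a_k; Bob adds his own sketches of y
    and averages the squares, which is unbiased for Q."""
    n, k = len(x), max(1, int(round(6 / eps ** 2)))
    rng = random.Random(seed)  # stands in for shared randomness
    signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(k)]
    a = [sum(s_i * x_i for s_i, x_i in zip(s, x)) for s in signs]  # Alice -> Bob
    b = [sum(s_i * y_i for s_i, y_i in zip(s, y)) for s in signs]  # Bob, locally
    return sum((aj + bj) ** 2 for aj, bj in zip(a, b)) / k
```

Each sketch value fits in $O(\log n)$ bits since $|a_j| \leq n^2$, so Alice sends $O(\log(n)/\epsilon^2)$ bits in total.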
The starting point is a bipartite communication scenario where one of the parts (Alice) is handed a random string x of n bits. The second part, Bob, receives a random number k ∈ { 1 , . .
Professor Ueli von Gruy\`{e}res worked hard last year to calculate the yearly cheese consumption of each individual in Switzerland. Specifically, let $U$ be the set of all persons in Switzerland. For each person $i\in U$, Ueli calculated the amount $w_i \in \mathbb{R}_{\geq 0}$ (in grams) of the yearly cheese consumption of person $i$. However, to help Coop and Migros in their supply-chain management, he needs to calculate the total cheese consumption of those persons that prefer fondue over raclette. That is, if we let $F \subseteq U$ be those that prefer fondue over raclette, then Ueli wants to calculate \begin{align*} W_F = \sum_{i\in F} w_i\,. \end{align*} The issue is that Ueli does not know the set $F$ and he does not have the time or energy to ask the preferences of all persons. He therefore designs two estimators that only ask a single person: \begin{description} \item[Estimator $\Alg_1$:] Let $W = \sum_{i\in U}w_i$. Sample person $i$ with probability $\frac{w_i}{W}$ and output $W$ if $i$ prefers fondue and $0$ otherwise. \item[Estimator $\Alg_2$:] Sample person $i$ with probability $\frac{1}{|U|}$ and output $|U| \cdot w_i$ if $i$ prefers fondue and $0$ otherwise. \end{description} Let $X_1$ and $X_2$ be the random outputs of $\Alg_1$ and $\Alg_2$, respectively. Ueli has shown that $\Alg_1$ and $\Alg_2$ are unbiased estimators and he has also bounded their variances: \begin{align*} \E[X_1] = \E[X_2] = W_F, \qquad \Var[X_1] \leq W^2 \qquad \mbox{and} \qquad \Var[X_2] \leq |U| \sum_{i\in U} w_i^2\,. \end{align*} However, Ueli is now stuck because the variances are too high to give any good guarantees for the two estimators. We are therefore going to help Ueli by designing a new estimator with good guarantees while still asking the preferences of relatively few persons. 
For a fixed small parameter $\epsilon >0$, your task is to design and analyze an estimator that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee} \Pr[|Y - W_F| \geq \epsilon W] \leq 1/3\,. \end{align} Your estimator should ask at most $3/\epsilon^2$ persons about their preferences. \\ {\em (In this problem you are asked to (i) design an estimator that asks at most $3/\epsilon^2$ persons about their preferences and (ii) prove that it satisfies the guarantee~\eqref{eq:guarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}
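One estimator with the required budget (a sketch; the name, the dict input, and the preference callback are my own framing) averages $3/\epsilon^2$ independent runs of the first estimator above, so the variance drops by the number of runs and Chebyshev's inequality yields the guarantee:

```python
import random

def estimate_WF(w, prefers_fondue, eps, rng=random.Random(0)):
    """Average 3/eps^2 independent runs of the importance-sampling
    estimator: sample person i with probability w[i]/W and output W if
    i prefers fondue, else 0. Var drops to (eps*W)^2/3, so Chebyshev
    gives Pr[|Y - W_F| >= eps*W] <= 1/3."""
    people = list(w)
    W = sum(w.values())
    t = int(round(3 / eps ** 2))  # number of persons asked
    total = 0.0
    for _ in range(t):
        i = rng.choices(people, weights=[w[p] for p in people])[0]
        total += W if prefers_fondue(i) else 0.0
    return total / t
```

The two degenerate cases below are deterministic: if everyone prefers fondue the output is exactly W, and if nobody does it is 0.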
A far simpler possibility is to draw a straight line between two points and read off all the relevant data graphically. However, even though the paper clearly shows the perceived income rising by 100 francs per sample family, the food expenditure is definitely not decreasing by any fixed arithmetic trend, though it could follow a geometric rate. His data could have been derived from an approximation following several news articles at the time.
You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.
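A sketch of one natural formulation (taking $x_1, x_2, x_3 \geq 0$ to be the kilograms per week of fondue moitie moitie, fondue a la tomate, and raclette, respectively, with one constraint per vitamin row of the table):

```latex
\begin{align*}
\textbf{minimize} \hspace{0.4cm}   & 50 x_1 + 75 x_2 + 60 x_3 \\
\textbf{subject to}\hspace{0.4cm}  & 35 x_1 + 0.5 x_2 + 0.5 x_3 \geq 0.5 \\
 & 60 x_1 + 300 x_2 + 0.5 x_3 \geq 15 \\
 & 30 x_1 + 20 x_2 + 70 x_3 \geq 4 \\
 & \hspace{0.9cm} x_1, x_2, x_3 \geq 0
\end{align*}
```

The objective collects the price row; each inequality says the combination supplies at least the weekly requirement of the corresponding vitamin.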
Swiss cheese model
You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.
The nutritional value of cheese varies widely. Cottage cheese may consist of 4% fat and 11% protein while some whey cheeses are 15% fat and 11% protein, and triple-crème cheeses are 36% fat and 7% protein. In general, cheese is a rich source (20% or more of the Daily Value, DV) of calcium, protein, phosphorus, sodium and saturated fat. A 28-gram (one ounce) serving of cheddar cheese contains about 7 grams (0.25 oz) of protein and 202 milligrams of calcium.
Suppose we have a universe $U$ of elements. For $A,B\subseteq U$, the Jaccard similarity of $A,B$ is defined as $$ J(A,B)=\frac{|A\cap B|}{|A\cup B|}.$$ This definition is used in practice to calculate a notion of similarity of documents, webpages, etc. For example, suppose $U$ is the set of English words, and any set $A$ represents a document considered as a bag of words. Note that for any two $A,B\subseteq U$, $0\leq J(A,B)\leq 1$. If $J(A,B)$ is close to 1, then we can say $A\approx B$. Let $h: U\to [0,1]$ where for each $i\in U$, $h(i)$ is chosen uniformly and independently at random. For a set $S\subseteq U$, let $h_S:=\min_{i\in S} h(i)$. \textbf{Show that } $$ \Pr[h_A=h_B] = J(A,B).$$ Now, if we have sets $A_1, A_2,\dots,A_n$, we can use the above idea to figure out which pair of sets are ``close'' in time essentially $O(n|U|)$. We can also obtain a good approximation of $J(A,B)$ with high probability by using several independently chosen hash functions. Note that the naive algorithm would take $O(n^2|U|)$ to calculate all pairwise similarities.
$J(A,B)={{|A\cap B|} \over {|A\uplus B|}}={{|A\cap B|} \over {|A|+|B|}}.$ The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard coefficient and is obtained by subtracting the Jaccard coefficient from 1, or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union: $d_J(A,B)=1-J(A,B)={{|A\cup B|-|A\cap B|} \over {|A\cup B|}}.$
Suppose we have a universe $U$ of elements. For $A,B\subseteq U$, the Jaccard similarity of $A,B$ is defined as $$ J(A,B)=\frac{|A\cap B|}{|A\cup B|}.$$ This definition is used in practice to calculate a notion of similarity of documents, webpages, etc. For example, suppose $U$ is the set of English words, and any set $A$ represents a document considered as a bag of words. Note that for any two $A,B\subseteq U$, $0\leq J(A,B)\leq 1$. If $J(A,B)$ is close to 1, then we can say $A\approx B$. Let $h: U\to [0,1]$ where for each $i\in U$, $h(i)$ is chosen uniformly and independently at random. For a set $S\subseteq U$, let $h_S:=\min_{i\in S} h(i)$. \textbf{Show that } $$ \Pr[h_A=h_B] = J(A,B).$$ Now, if we have sets $A_1, A_2,\dots,A_n$, we can use the above idea to figure out which pair of sets are ``close'' in time essentially $O(n|U|)$. We can also obtain a good approximation of $J(A,B)$ with high probability by using several independently chosen hash functions. Note that the naive algorithm would take $O(n^2|U|)$ to calculate all pairwise similarities.
If $\mu$ is a measure on a measurable space $X$, then we define the Jaccard coefficient by $J_{\mu}(A,B)={{\mu(A\cap B)} \over {\mu(A\cup B)}},$ and the Jaccard distance by $d_{\mu}(A,B)=1-J_{\mu}(A,B)={{\mu(A\triangle B)} \over {\mu(A\cup B)}}.$ Care must be taken if $\mu(A\cup B)=0$ or $\infty$, since these formulas are not well defined in these cases. The MinHash min-wise independent permutations locality sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity coefficient of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function.
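The MinHash scheme mentioned above can be sketched with fresh uniform hash values per repetition (an illustration only; real implementations use compact pairwise-independent hash families rather than storing a full random table per hash function):

```python
import random

def minhash_estimate(A, B, num_hashes=200, rng=random.Random(42)):
    """Estimate J(A, B) as the fraction of random hash functions h for
    which min h(A) == min h(B); each h assigns every element of the
    (joint) universe an independent uniform value in [0, 1]."""
    universe = A | B
    hits = 0
    for _ in range(num_hashes):
        h = {e: rng.random() for e in universe}
        hits += min(h[e] for e in A) == min(h[e] for e in B)
    return hits / num_hashes
```

The collision event `min h(A) == min h(B)` happens exactly when the overall minimum over A ∪ B lands in A ∩ B, which is the identity the exercise asks you to prove.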
Last year Professor Ueli von Gruy\`{e}res worked hard to obtain an estimator $\Alg$ to estimate the total cheese consumption of fondue lovers in Switzerland. For a small $\epsilon >0$, his estimator \Alg only asks $3/\epsilon^2$ random persons and has the following guarantee: if we let $W$ denote the true answer and let $X$ be the random output of \Alg then \begin{align*} \Pr[|X - W| \geq \epsilon W] \leq 1/3\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align*} However, Ueli is now stuck because the error probability of $1/3$ is too high. We are therefore going to help Ueli by designing a new estimator with a much higher success probability while still only asking relatively few persons. For a fixed small parameter $\delta >0$, your task is to design and analyze an estimator that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee2} \Pr[|Y - W| \geq \epsilon W] \leq \delta\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align} Your estimator should ask at most $3000\log(1/\delta)/\epsilon^2$ persons about their preferences. \\ While you should explain why your estimator works and what tools to use to analyze it, \emph{you do not need to do any detailed calculations.} \\ {\em (In this problem you are asked to (i) design an estimator that asks at most $3000 \log(1/\delta)/\epsilon^2$ persons and (ii) explain why it satisfies the guarantee~\eqref{eq:guarantee2}. Recall that you are allowed to refer to material covered in the lecture notes.)}
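The standard tool for this kind of boosting is the median trick (a sketch; the constant in $k$ is chosen loosely and is my own, but the person budget of $3000\log(1/\delta)/\epsilon^2$ comfortably allows $O(\log(1/\delta))$ runs of a $3/\epsilon^2$-person estimator): run the 1/3-error estimator independently an odd number $k = O(\log(1/\delta))$ of times and report the median; by a Chernoff bound the median is bad only if at least half the runs are bad, which happens with probability at most $\delta$.

```python
import math
import random
from statistics import median

def boost(base_estimator, delta, rng=random.Random(0)):
    """Median of k = O(log(1/delta)) independent runs of a base
    estimator whose failure probability is at most 1/3."""
    k = 2 * math.ceil(18 * math.log(1 / delta)) + 1  # odd; loose constant
    return median(base_estimator(rng) for _ in range(k))
```

Averaging the runs would not work here, since a single wildly wrong run can drag the mean out of the interval; the median is robust to a minority of bad runs.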
This approach allows for more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors X are treated as known constants set by a design, and y is sampled conditionally on the values of X as in an experiment. For practical purposes, this distinction is often unimportant, since estimation and inference is carried out while conditioning on X. All results stated in this article are within the random design framework.
Last year Professor Ueli von Gruy\`{e}res worked hard to obtain an estimator $\Alg$ to estimate the total cheese consumption of fondue lovers in Switzerland. For a small $\epsilon >0$, his estimator \Alg only asks $3/\epsilon^2$ random persons and has the following guarantee: if we let $W$ denote the true answer and let $X$ be the random output of \Alg then \begin{align*} \Pr[|X - W| \geq \epsilon W] \leq 1/3\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align*} However, Ueli is now stuck because the error probability of $1/3$ is too high. We are therefore going to help Ueli by designing a new estimator with a much higher success probability while still only asking relatively few persons. For a fixed small parameter $\delta >0$, your task is to design and analyze an estimator that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee2} \Pr[|Y - W| \geq \epsilon W] \leq \delta\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align} Your estimator should ask at most $3000\log(1/\delta)/\epsilon^2$ persons about their preferences. \\ While you should explain why your estimator works and what tools to use to analyze it, \emph{you do not need to do any detailed calculations.} \\ {\em (In this problem you are asked to (i) design an estimator that asks at most $3000 \log(1/\delta)/\epsilon^2$ persons and (ii) explain why it satisfies the guarantee~\eqref{eq:guarantee2}. Recall that you are allowed to refer to material covered in the lecture notes.)}
Further, the standard error of the estimate is $\sigma = \frac{\hat{\alpha}-1}{\sqrt{n}} + O(n^{-1})$. This estimator is equivalent to the popular Hill estimator from quantitative finance and extreme value theory. For a set of n integer-valued data points $\{x_i\}$, again where each $x_i \geq x_{\min}$, the maximum likelihood exponent is the solution to the transcendental equation $\frac{\zeta'(\hat{\alpha}, x_{\min})}{\zeta(\hat{\alpha}, x_{\min})} = -\frac{1}{n}\sum_{i=1}^{n} \ln \frac{x_i}{x_{\min}}$ where $\zeta(\alpha, x_{\mathrm{min}})$ is the incomplete zeta function. The uncertainty in this estimate follows the same formula as for the continuous equation.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall that a matroid $\mathcal{M} =(E, \mathcal{I} )$ is a partition matroid if $E$ is partitioned into \emph{disjoint} sets $E_1, E_2, ..., E_\ell$ and \[ \mathcal{I} = \lbrace X \subseteq E : |E_i \cap X | \leq k_i \mbox{ for } i=1,2,..., \ell \rbrace\,. \] Verify that this is indeed a matroid.
In mathematics, a partition matroid or partitional matroid is a matroid that is a direct sum of uniform matroids. It is defined over a base set in which the elements are partitioned into different categories. For each category, there is a capacity constraint: a maximum number of allowed elements from this category. The independent sets of a partition matroid are exactly the sets in which, for each category, the number of elements from this category is at most the category capacity.
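The independence test for a partition matroid is a direct translation of the definition; a minimal Python sketch, where the block sets and capacities are toy data:

```python
def is_independent(X, blocks, caps):
    """X is independent iff it takes at most k_i elements from block E_i."""
    return all(len(X & Ei) <= ki for Ei, ki in zip(blocks, caps))

# Toy instance: E_1 = {1,2,3} with k_1 = 2, E_2 = {4,5} with k_2 = 1.
blocks, caps = [{1, 2, 3}, {4, 5}], [2, 1]
assert is_independent({1, 2, 4}, blocks, caps)      # 2 from E_1, 1 from E_2
assert not is_independent({1, 2, 3}, blocks, caps)  # 3 from E_1 exceeds k_1 = 2
```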
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall that a matroid $\mathcal{M} =(E, \mathcal{I} )$ is a partition matroid if $E$ is partitioned into \emph{disjoint} sets $E_1, E_2, ..., E_\ell$ and \[ \mathcal{I} = \lbrace X \subseteq E : |E_i \cap X | \leq k_i \mbox{ for } i=1,2,..., \ell \rbrace\,. \] Verify that this is indeed a matroid.
A matroid sum $\sum_i M_i$ (where each $M_i$ is a matroid) is itself a matroid, having as its elements the union of the elements of the summands. A set is independent in the sum if it can be partitioned into sets that are independent within each summand. The matroid partitioning algorithm generalizes to the problem of testing whether a set is independent in a matroid sum. Its correctness can be used to prove that a matroid sum is necessarily a matroid.
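For a small instance, the two matroid axioms (downward closure and exchange) can be verified exhaustively, which mirrors the "verify that this is a matroid" exercise above; the blocks and capacities below are toy choices:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Toy partition matroid: E_1 = {a,b} with k_1 = 1, E_2 = {c,d} with k_2 = 2.
blocks, caps = [frozenset("ab"), frozenset("cd")], [1, 2]
E = frozenset("abcd")

def indep(X):
    return all(len(X & B) <= k for B, k in zip(blocks, caps))

I = [X for X in powerset(E) if indep(X)]
# (I1) Downward closure: every subset of an independent set is independent.
assert all(indep(Y) for X in I for Y in powerset(X))
# (I2) Exchange: if |A| < |B|, some e in B \ A keeps A + {e} independent.
assert all(any(indep(A | {e}) for e in B - A)
           for A in I for B in I if len(A) < len(B))
```

Here there are $3 \times 4 = 12$ independent sets (three choices within $E_1$, all four subsets of $E_2$), and both axioms hold for every pair, as the brute-force checks confirm.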
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the returned cut is a minimum cut with probability at least $1/\binom{n}{2}$. In this problem, we are going to analyze the algorithm in the special case when the input graph is a tree. Specifically, you should show that if the input graph $G=(V,E)$ is a spanning tree, then Karger's algorithm returns a minimum cut with probability $1$. \\ {\em (In this problem you are asked to show that Karger's min-cut algorithm returns a minimum cut with probability $1$ if the input graph is a spanning tree. Recall that you are allowed to refer to material covered in the lecture notes.)}
All other edges connecting either $u$ or $v$ are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability.
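A minimal sketch of the contraction loop in Python (the edge-list representation and union-find bookkeeping are implementation choices, not from the source). It also illustrates the tree case from the problem above: contracting an edge of a tree never creates parallel edges, so the two surviving supernodes are always joined by exactly one edge, i.e. the returned cut has size $1$ with probability $1$:

```python
import random

def karger_cut(edges):
    """One run of Karger's contraction algorithm on a connected graph
    given as an edge list: contract a random remaining edge until two
    supernodes are left; return the surviving (cut) edges."""
    parent = {}

    def find(v):  # representative of v's supernode
        while parent.get(v, v) != v:
            v = parent[v]
        return v

    vertices = {v for e in edges for v in e}
    live = list(edges)  # edges whose endpoints lie in different supernodes
    while len({find(v) for v in vertices}) > 2:
        u, v = random.choice(live)
        parent[find(u)] = find(v)  # contract: merge the two supernodes
        live = [e for e in live if find(e[0]) != find(e[1])]  # drop self-loops
    return live

random.seed(1)
tree = [(1, 2), (2, 3), (3, 4)]        # a spanning tree (path on 4 vertices)
for _ in range(50):
    assert len(karger_cut(tree)) == 1  # always a minimum cut for a tree
```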
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the returned cut is a minimum cut with probability at least $1/\binom{n}{2}$. In this problem, we are going to analyze the algorithm in the special case when the input graph is a tree. Specifically, you should show that if the input graph $G=(V,E)$ is a spanning tree, then Karger's algorithm returns a minimum cut with probability $1$. \\ {\em (In this problem you are asked to show that Karger's min-cut algorithm returns a minimum cut with probability $1$ if the input graph is a spanning tree. Recall that you are allowed to refer to material covered in the lecture notes.)}
The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer-Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut. In this case, the minimum cut equals the edge connectivity of the graph. A generalization of the minimum cut problem without terminals is the minimum k-cut, in which the goal is to partition the graph into at least k connected components by removing as few edges as possible. For a fixed value of k, this problem can be solved in polynomial time, though the algorithm is not practical for large k.
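On tiny graphs, the value found by any min-cut algorithm can be sanity-checked with an exhaustive (exponential-time) enumeration of all 2-way partitions; this is a correctness check only, not Stoer-Wagner:

```python
from itertools import combinations

def brute_min_cut(vertices, edges):
    """Exhaustive minimum cut: fix one vertex on side S and enumerate
    every proper 2-way partition, counting crossing edges."""
    vs = sorted(vertices)
    anchor, rest = vs[0], vs[1:]
    best = len(edges)
    for r in range(len(rest)):  # |S| = r + 1, so the complement stays nonempty
        for extra in combinations(rest, r):
            side = set(extra) | {anchor}
            crossing = sum((u in side) != (v in side) for u, v in edges)
            best = min(best, crossing)
    return best

# 4-cycle: min cut (= edge connectivity) is 2; spanning tree: 1.
assert brute_min_cut({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]) == 2
assert brute_min_cut({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)]) == 1
```

Fixing one anchor vertex on side $S$ enumerates each unordered partition exactly once, which keeps the brute force at $2^{n-1} - 1$ candidate cuts.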
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus