Dataset schema (column name, type, value-length range):
id — stringlengths 1–4
question — stringlengths 6–1.87k
context — listlengths 5–5
choices — listlengths 2–18
answer — stringlengths 1–840
3349
(Linear or Logistic Regression) Suppose you are given a dataset of tissue images from patients with and without a certain disease. You are supposed to train a model that predicts the probability that a patient has the disease. It is preferable to use logistic regression over linear regression.
[ "Machine Learning Course - CS-433 Linear Regression Sept 20, 2022 Martin Jaggi Last updated on: September 20, 2022 credits to Mohammad Emtiyaz Khan 1 Model: Linear Regression What is it? Linear regression is a model that as- sumes a linear relationship between inputs and the output. Why learn about linear regressio...
[ "True", "False" ]
True
3353
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
[ "ified as BLUE. All figures taken from Chapter 2 of “The Elements of Statistical Learning” by Hastie, Friedman, and Tibshirani. The k-nearest-neighbor classifier/regressor is an entirely dif- ferent way of performing classification/regression. It per- forms best if we are working in low dimensions and if we have re...
[ "$(0,0,0,0,0,1)$", "$(+1,-1,+1,-1,+1,-1)$", "$(+1,-2,+3,-4,+5,-6)$", "$(+1,+1,+1,+1,+1,+1)$", "$(-1,+2,-3,+4,-5,+6)$", "$-(0,0,0,0,0,1)$", "$(-1,+1,-1,+1,-1,+1)$", "$(-1,-1,-1,-1,-1,-1)$" ]
$(0,0,0,0,0,1)$
3354
Which of the following statements is correct?
[ "(unrecoverable figure residue: a scatter plot of data points; no text survives)...
[ "When applying stochastic gradient descent on the objective function $f(\\boldsymbol{w}):=\\sum_{n=1}^{30}\\left\\|\\boldsymbol{w}-\\boldsymbol{x}_{n}\\right\\|^{2}$ where $\\boldsymbol{x}_{n}$ are the datapoints, a stochastic gradient step is roughly $30 \\times$ faster than a full gradient step.", "In practice,...
['When applying stochastic gradient descent on the objective function $f(\\boldsymbol{w}):=\\sum_{n=1}^{30}\\left\\|\\boldsymbol{w}-\\boldsymbol{x}_{n}\\right\\|^{2}$ where $\\boldsymbol{x}_{n}$ are the datapoints, a stochastic gradient step is roughly $30 \\times$ faster than a full gradient step.', 'In practice, it c...
3358
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\...
[ ". The two historically common activation functions are both sigmoids, and are described by y ( v i ) = tanh ⁡ ( v i ) and y ( v i ) = ( 1 + e − v i ) − 1 {\\displaystyle y(v_{i})=\\tanh(v_{i})~~{\\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}}. The first is a hyperbolic tangent that ranges from −1 to 1, while the ot...
[ "Data augmentation", "L2 regularization", "Dropout", "Tuning the optimizer", "None. All techniques here improve generalization." ]
None. All techniques here improve generalization. Once correct hyperparameters are chosen, any of these strategies may be applied to improve the generalization performance.
3359
What is the gradient of $\mathbf{x}^{\top} \mathbf{W} \mathbf{x}$ with respect to all entries of $\mathbf{W}$ (written as a matrix)?
[ "_{p})_{(k)}} that map each row vector x ( i ) = ( x 1,..., x p ) ( i ) {\\displaystyle \\mathbf {x} _{(i)}=(x_{1},\\dots,x_{p})_{(i)}} of X to a new vector of principal component scores t ( i ) = ( t 1,..., t l ) ( i ) {\\displaystyle \\mathbf {t} _{(i)}=(t_{1},\\dots,t_{l})_{(i)}}, given by t k ( i ) = x ( i ) ⋅ ...
[ "(a) $\\mathbf{W} \\mathbf{x}$", "(b) $\\mathbf{W}^{\\top} \\mathbf{x}$", "(c) $\\square\\left(\\mathbf{W}+\\mathbf{W}^{\\top}\\right) \\mathbf{x}$.", "(d) $\\mathbf{W}$", "(e) $\\mathbf{x} \\mathbf{x}^{\\top}$.", "(f) $\\mathbf{x}^{\\top} \\mathbf{x}$", "(g) $\\mathbf{W} \\mathbf{W}^{\\top}$." ]
(e)
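The stated answer (e), $\mathbf{x}\mathbf{x}^{\top}$, can be sanity-checked numerically; a minimal sketch (NumPy assumed, random toy dimensions) comparing it against central finite differences:

```python
import numpy as np

# Check that the gradient of x^T W x with respect to W equals the
# outer product x x^T, by central finite differences on each entry of W.
rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)
W = rng.normal(size=(d, d))

def f(W):
    return x @ W @ x

claimed = np.outer(x, x)  # candidate (e): x x^T

eps = 1e-6
numeric = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d))
        E[i, j] = eps
        numeric[i, j] = (f(W + E) - f(W - E)) / (2 * eps)

assert np.allclose(claimed, numeric, atol=1e-5)
```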
3363
Given a matrix $\Xm$ of shape $D\times N$ with a singular value decomposition (SVD), $X=USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$. Which one of the following statements is \textbf{false}?
[ ") matrix A: A = Q <unk> <unk> ̃Σ 0 0 0 - ̃Σ 0 0 0 0 <unk> <unk>QT, Q = 1 √ 2 ̃U - ̃U un+1 · · · um V V 0 · · · 0, where ̃U, ̃Σ are the factors of the economy-sized SVD (1.6). 1.2.2 Uniqueness of the SVD Because eigenvalues are uniquely determined as the roots of the characteristic poly- nomial of a matrix, the rel...
[ "The eigenvalues of A are the singular values of X", "A is positive semi-definite, i.e all eigenvalues of A are non-negative", "The eigendecomposition of A is also a singular value decomposition (SVD) of A", "A vector $v$ that can be expressed as a linear combination of the last $D-K$ columns of $U$, i.e $x=...
The eigenvalues of A are the singular values of X. “A is PSD...” and “The eigendecomposition of...” are true because $A = US^{2}U^\top$. “The eigenvalues of... are the singular values...” is false because the eigenvalues of A are the squares of the singular values of X. A vector x that... ...
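The explanation above can be verified numerically; a minimal sketch (NumPy assumed, arbitrary small $D$, $N$) showing that the eigenvalues of $A = XX^\top$ are the squared singular values of $X$, and that $A$ is PSD:

```python
import numpy as np

# For A = X X^T: eigenvalues of A equal the *squared* singular values of X.
rng = np.random.default_rng(1)
D, N = 5, 8
X = rng.normal(size=(D, N))
A = X @ X.T

sing = np.sort(np.linalg.svd(X, compute_uv=False))[::-1]
eig = np.sort(np.linalg.eigvalsh(A))[::-1]

assert np.allclose(eig, sing ** 2)  # eigenvalues = singular values squared
assert np.all(eig >= -1e-10)        # A is positive semi-definite
```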
3365
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \R$. For $\lambda \geq 0$, we consider the following loss: L_{\lambda}(\ww) = \frac{1}{N} \sum_{i = 1}^N (y_i - \xx_i^\top \ww)^2 + \lambda \Vert \ww \Ve...
[ "', ⋯ Given n training pairs ';'(input and labels) 69 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < '.! 4 L' ;,'∗= arg min 3 = &, ', ⋯ &'&,'Given training pairs ';'(input and labels) 70 Loss Optimization Finding network weights that achieve the lowest loss ∗= arg min 3 1 < ...
[ "For $\\lambda = 0$, the loss $L_{0}$ is convex and has a unique minimizer.", "$C_\\lambda$ is a non-increasing function of $\\lambda$.", "$C_\\lambda$ is a non-decreasing function of $\\lambda$.", "None of the statements are true." ]
$C_\lambda$ is a non-decreasing function of $\lambda$. For $\lambda_1 < \lambda_2$, $L_{\lambda_1}(\ww) \leq L_{\lambda_2}(\ww)$ $\forall \ww$, which means that $C_{\lambda_1} \leq C_{\lambda_2}$.
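The monotonicity argument can be checked on random data; a minimal sketch (NumPy assumed, and assuming the truncated penalty is the squared norm $\lambda\Vert\ww\Vert^2$, so the minimizer solves $(X^\top X + N\lambda I)\ww = X^\top y$):

```python
import numpy as np

# C(lambda) = min_w (1/N)||y - Xw||^2 + lambda ||w||^2, via the closed form,
# checked to be non-decreasing over an increasing grid of lambdas.
rng = np.random.default_rng(2)
N, d = 50, 5
X = rng.normal(size=(N, d))
y = rng.normal(size=N)

def C(lam):
    # Stationarity condition: (X^T X + N*lam*I) w = X^T y
    w = np.linalg.solve(X.T @ X + N * lam * np.eye(d), X.T @ y)
    return np.mean((y - X @ w) ** 2) + lam * (w @ w)

lams = [0.0, 0.1, 0.5, 1.0, 5.0]
vals = [C(l) for l in lams]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```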
3366
In the setting of EM, where $x_{n}$ is the data and $z_{n}$ is the latent variable, what quantity is called the posterior?
[ "k N(xn | μ(t) k, Σ(t) k ) 3. M-step: Update μ(t+1) k, Σ(t+1) k, π(t+1) k. μ(t+1) k = P n q(t) knxn P n q(t) kn Σ(t+1) k = P n q(t) kn(xn -μ(t+1) k )(xn -μ(t+1) k )<unk> P n q(t) kn π(t+1) k = 1 N X n q(t) kn If we let the covariance be diagonal i.e. Σk := σ2I, then EM algorithm is same as K-means as σ2 →0. (a) -2 ...
[ "(a) $\\square p\\left(\\mathbf{x}_{n} \\mid z_{n}, \\boldsymbol{\\theta}\\right)$", "(b) $\\square p\\left(\\mathbf{x}_{n}, z_{n} \\mid \\boldsymbol{\\theta}\\right)$", "(c) $\\square p\\left(z_{n} \\mid \\mathbf{x}_{n}, \\boldsymbol{\\theta}\\right)$" ]
(c)
3368
Which statement about \textit{black-box} adversarial attacks is true:
[ "techniques for generating adversarial examples in the literature (by no means an exhaustive list). Gradient-based evasion attack Fast Gradient Sign Method (FGSM) Projected Gradient Descent (PGD) Carlini and Wagner (C&W) attack Adversarial patch attack Black box attacks Black box attacks in adversarial machine lear...
[ "They require access to the gradients of the model being attacked. ", "They are highly specific and cannot be transferred from a model which is similar to the one being attacked.", "They cannot be implemented via gradient-free (e.g., grid search or random search) optimization methods.", "They can be implement...
They can be implemented using gradient approximation via a finite difference formula. They can be implemented using gradient approximation via a finite difference formula, as shown in the lecture slides. The rest of the options are incorrect since (1) gradient access is not needed, (2) adversarial examples are transferra...
3371
You are in $D$-dimensional space and use a KNN classifier with $k=1$. You are given $N$ samples and by running experiments you see that for most random inputs $\mathbf{x}$ you find a nearest sample at distance roughly $\delta$. You would like to decrease this distance to $\delta / 2$. How many samples will you likely n...
[ "The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet. CNN model reduction for k-NN classifiers k-NN regression In k...
[ "$2^D N$", "$N^D$", "$2 D$", "$\\log (D) N$", "$N^2$", "$D^2$", "$2 N$", "$D N$" ]
$2^D N$
3372
Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
[ "a subgradient of f at x if f(y) ≥f(x) + g<unk>(y -x) for all y ∈dom(f) x1 x2 f(x1) + gT 1 (x -x1) f(x2) + gT 2 (x -x2) f(x2) + gT 3 (x -x2) f(x) ∂f(x) ⊆Rd is the subdifferential, the set of subgradients of f at x. EPFL Optimization for Machine Learning CS-439 8/30 Subgradients II Example: f(x) = |x| 0 f(y) ≥gy y 7...
[ "A subgradient does not exist as $f(x)$ is differentiable at $x=0$.", "A subgradient exists but is not unique.", "A subgradient exists and is unique.", "A subgradient does not exist even though $f(x)$ is differentiable at $x=0$." ]
A subgradient does not exist even though $f(x)$ is differentiable at $x=0$.
3374
K-means can be equivalently written as the following Matrix Factorization $$ \begin{aligned} & \min _{\mathbf{z}, \boldsymbol{\mu}} \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\left\|\mathbf{X}-\mathbf{M} \mathbf{Z}^{\top}\right\|_{\text {Frob }}^{2} \\ & \text { s.t. } \boldsymbol{\mu}_{k} \in \mathbb{R}^{D}, \\ & z_{n ...
[ "=1 znkxn PN n=1 znk Hence, the name ‘K-means’. (b) -2 0 2 -2 0 2 (c) -2 0 2 -2 0 2 Summary of K-means Initialize μk ∀k, then iterate: 1. For all n, compute zn given μ. znk = 1 if k = arg minj ∥xn -μj∥2 2 0 otherwise 2. For all k, compute μk given z. μk = PN n=1 znkxn PN n=1 znk Convergence to a local optimum is as...
[ "(a) yes", "(b) no" ]
(b)
3375
Recall that we say that a kernel $K: \R \times \R \rightarrow \R$ is valid if there exists $k \in \mathbb{N}$ and $\Phi: \R \rightarrow \R^k$ such that for all $(x, x') \in \R \times \R$, $K(x, x') = \Phi(x)^\top \Phi(x')$. The kernel $K(x, x') = \cos(x + x')$ is a valid kernel.
[ "q}^{i}k_{q}({\\textbf {x}},{\\textbf {x}}')}}=\\sum _{q=1}^{Q}{b_{d,d'}^{q}k_{q}({\\textbf {x}},{\\textbf {x}}')}} where the functions u q i ( x ) {\\displaystyle u_{q}^{i}({\\textbf {x}})}, with q = 1, ⋯, Q {\\displaystyle q=1,\\cdots,Q} and i = 1, ⋯, R q {\\displaystyle i=1,\\cdots,R_{q}} have zero mean and cova...
[ "True", "False" ]
False. $K(x, x) = \cos(2 x)$ can be strictly negative, which is impossible if it were a valid kernel.
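The counterexample in the explanation is one line of arithmetic: a valid kernel needs $K(x,x)=\Vert\Phi(x)\Vert^2 \geq 0$, but $\cos(2x)$ is negative at $x=\pi/2$. A minimal check:

```python
import math

# A valid kernel requires K(x, x) = ||Phi(x)||^2 >= 0 for every x,
# but K(x, x') = cos(x + x') gives K(pi/2, pi/2) = cos(pi) = -1 < 0.
x = math.pi / 2
Kxx = math.cos(x + x)
assert Kxx < -0.99  # the 1x1 Gram matrix of {x} is not PSD
```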
3376
(Adversarial perturbations for linear models) Suppose you are given a linear classifier with the logistic loss. Is it true that generating the optimal adversarial perturbations by maximizing the loss under the $\ell_{2}$-norm constraint on the perturbation is an NP-hard optimization problem?
[ "assume that it is implemented by a NN We have complete access to the NN and are given an input x How do we nd an adversarial perturbation x so that x x You will explore this in the exercise session Here is a the basic idea If f doe not classify x correctly then we are done we can just set x x So we might a well as...
[ "True", "False" ]
False
3380
Consider a binary classification problem with a linear classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & \mathbf{w}^{\top} \mathbf{x} \geq 0 \\ -1, & \mathbf{w}^{\top} \mathbf{x}<0\end{cases} $$ where $\mathbf{x} \in \mathbb{R}^{3}$. Suppose that the weights of the linear model are equal to $\math...
[ "\\to \\mathbb {R} }. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing φ {\\displaystyle \\phi }. Selection of a loss function within this framework impacts the optimal f φ ∗ {\\displaystyle f_{\\phi }^{*}} which minimizes the expected risk, see empirical risk ...
[ "$(1,-1,0)$", "$(0,-1,1)$", "$(-2,0,0)$", "$(1.2,0,1.6)$", "Other", "$(0,2,0)$", "$(-1.2,0,1.6)$" ]
Other
3381
(Linear Regression) You are given samples $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ where $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and $y_{n}$ are scalar values. You are solving linear regression using normal equations. You will always find the optimal weights with 0 training error in case of...
[ "In statistics, a proper linear model is a linear regression model in which the weights given to the predictor variables are chosen in such a way as to optimize the relationship between the prediction and the criterion. Simple regression analysis is the most common example of a proper linear model. Unit-weighted re...
[ "True", "False" ]
False
3388
In Text Representation learning, which of the following statements are correct?
[ "representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general context, unlabeled data. Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, ...
[ "Learning GloVe word vectors can be done using the singular value decomposition, if the $f_{d n}$ weights are set to 1 for all observed entries.", "The skip-gram model for learning original word2vec embeddings does learn a binary classifier for each word.", "FastText as discussed in the course learns word vecto...
['The skip-gram model for learning original word2vec embeddings does learn a binary classifier for each word.', 'FastText as discussed in the course learns word vectors and sentence representations which are specific to a supervised classification task.']
3390
(Robustness) The $l_{1}$ loss is less sensitive to outliers than $l_{2}$.
[ "In objective video quality assessment, the outliers ratio (OR) is a measure of the performance of an objective video quality metric. It is the ratio of \"false\" scores given by the objective metric to the total number of scores. The \"false\" scores are the scores that lie outside the interval [ MOS − 2 σ, MOS + ...
[ "True", "False" ]
True
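A small illustration of the claim (NumPy assumed, toy data with one hypothetical outlier): for a constant model the $l_2$-optimal fit is the mean and the $l_1$-optimal fit is the median, and an outlier drags the mean much further than the median.

```python
import numpy as np

# l2-optimal constant = mean, l1-optimal constant = median;
# compare how far each moves when a single outlier is appended.
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
with_outlier = np.append(data, 1000.0)

mean_shift = abs(with_outlier.mean() - data.mean())            # l2 fit moves a lot
median_shift = abs(np.median(with_outlier) - np.median(data))  # l1 fit barely moves

assert median_shift < mean_shift
```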
3391
Consider optimizing a matrix factorization $\boldsymbol{W} \boldsymbol{Z}^{\top}$ in the matrix completion setting, for $\boldsymbol{W} \in \mathbb{R}^{D \times K}$ and $\boldsymbol{Z} \in \mathbb{R}^{N \times K}$. We write $\Omega$ for the set of observed matrix entries. Which of the following statements are correc...
[ "t}=e_{t}\\otimes x_{i}^{t}.} The role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem min W ‖ X W − Y ‖ 2 2 + λ ‖ W ‖ 2 2 {\\displaystyle \\min _...
[ "Given any $\\Omega$, for $K:=\\min \\{N, D\\}$, there is an exact solution to the problem.", "In general, a step of $\\mathrm{SGD}$ will change all entries of the $\\mathbf{W}$ and $\\mathbf{Z}$ matrices.", "Adding a Frob-norm regularizer for $\\boldsymbol{W}$ and $\\boldsymbol{Z}$ to the matrix factorization ...
['A step of alternating least squares is more costly than an SGD step.', 'Given any $\\Omega$, for $K:=$ min $\\{N, D\\}$, there is an exact solution to the problem.', 'For complete observations $\\Omega=[1 \\ldots D] \\times[1 \\ldots N]$, the problem can be solved by the singular value decomposition.']
3393
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
[ "\\to \\mathbb {R} }. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing φ {\\displaystyle \\phi }. Selection of a loss function within this framework impacts the optimal f φ ∗ {\\displaystyle f_{\\phi }^{*}} which minimizes the expected risk, see empirical risk ...
[ "$(+1,-2,+3,-4,+5,-6)$", "$-(0,0,0,0,0,1)$", "$(0,0,0,0,0,1)$", "$(-1,-1,-1,-1,-1,-1)$", "$(+1,+1,+1,+1,+1,+1)$", "$(-1,+1,-1,+1,-1,+1)$", "$(+1,-1,+1,-1,+1,-1)$", "$(-1,+2,-3,+4,-5,+6)$" ]
$(-1,+1,-1,+1,-1,+1)$
3396
A neural network has been trained for multi-class classification using cross-entropy but has not necessarily achieved a global or local minimum on the training set. The output of the neural network is $\mathbf{z}=[z_1,\ldots,z_d]^\top$ obtained from the penultimate values $\mathbf{x}=[x_1,\ldots,x_d]^\top$ via softmax $...
[ "ural network is usually a softmax function layer, which is the algebraic simplification of N logistic classifiers, normalized per class by the sum of the N-1 other logistic classifiers. Neural Network-based classification has brought significant improvements and scopes for thinking from different perspectives. Ext...
[ "One transformation has no effect, the other one decreases the accuracy in some cases (but never increases it).", "One transformation has no effect, the other sometimes increases and sometimes decreases the accuracy.", "Neither transformation affects the accuracy.", "Both transformations decrease the accuracy...
Neither transformation affects the accuracy. The network prediction, and therefore the accuracy, only depends on which element of $\mathbf{z}$ is largest. Scaling $\mathbf{x}$ with a positive scalar or shifting $\mathbf{x}$ by a constant across all elements does not affect this.
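The explanation above can be demonstrated directly (NumPy assumed, arbitrary logits): a constant shift leaves the softmax output unchanged, while a positive scaling changes the probabilities but not the argmax.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # numerically stable softmax
    return e / e.sum()

x = np.array([1.0, 3.0, 2.0])

# Shifting every penultimate value by a constant leaves softmax unchanged.
assert np.allclose(softmax(x), softmax(x + 7.0))

# Scaling by a positive constant changes the probabilities...
assert not np.allclose(softmax(x), softmax(3.0 * x))
# ...but not the argmax, so the predicted class and the accuracy are unaffected.
assert softmax(x).argmax() == softmax(3.0 * x).argmax()
```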
3397
Assume that we have a convolutional neural net with $L$ layers, $K$ nodes per layer, and where each node is connected to $k$ nodes in a previous layer. We ignore in the sequel the question of how we deal with the points at the boundary and assume that $k<<<K$ (much, much, much smaller). How does the complexity of the b...
[ "point is that since the only way a weight in W l {\\displaystyle W^{l}} affects the loss is through its effect on the next layer, and it does so linearly, δ l {\\displaystyle \\delta ^{l}} are the only data you need to compute the gradients of the weights at layer l {\\displaystyle l}, and then the gradients of we...
[ "$\\Theta\\left(L k^K\\right)$", "$\\Theta\\left(L k^K\\right)$", "$\\Theta\\left(L K^k\\right)$", "$\\Theta(L K k)$", "$\\Theta\\left(L^k K\\right)$" ]
$\Theta(L K k)$
3398
Matrix Factorizations: If we compare SGD vs ALS for optimizing a matrix factorization of a $D \times N$ matrix, for large $D, N$
[ "standing Machine Learning by Shalev-Shwartz and Ben-David. x x x x x x x o o o o o o o + + + + + + + Figure 3: Compression via PCA. The images after dimensionality reduction to R2 (K = 2). The different marks indicate different individuals. alized nicely, as shown in Figure 3 here. SVD and Matrix Factorization In ...
[ "(a) Per iteration, SGD has a similar computational cost as ALS", "(b) Per iteration, ALS has an increased computational cost over SGD", "(c) Per iteration, SGD cost is independent of $D, N$" ]
['(b)', '(c)']
3401
Consider the logistic regression loss $L: \R^d \to \R$ for a binary classification task with data $\left( \xv_i, y_i \right) \in \R^d \times \{0, 1\}$ for $i \in \left\{ 1, \ldots, N \right\}$: \begin{equation*} L(\wv) = \frac{1}{N} \sum_{i = 1}^N \bigg(\log\left(1 + e^{\xv_i^\top\wv} \right) - y_i\xv_i^\top\wv \bigg). ...
[ "_{n}})}, with g ( z ) {\\displaystyle g(z)} the logistic function as before. The logistic loss is sometimes called cross-entropy loss. It is also known as log loss. (In this case, the binary label is often denoted by {−1,+1}.) Remark: The gradient of the cross-entropy loss for logistic regression is the same as th...
[ "$\\nabla L(\\wv) = \\frac{1}{N} \\sum_{i = 1}^N \\; \\xv_i \\bigg( y_i - \\frac{e^{\\xv_i^\\top\\wv}}{1 + e^{\\xv_i^\\top\\wv}}\\bigg) $", "$\\nabla L(\\wv) = \\frac{1}{N} \\sum_{i = 1}^N \\; \\xv_i \\bigg( \\frac{1}{1 + e^{-\\xv_i^\\top\\wv}} - y_i\\bigg) $", "$\\nabla L(\\wv) = \\frac{1}{N} \\sum_{i = 1}^N \\; \\bigg( \\fra...
$\displaystyle \nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \xv_i \bigg( \frac{1}{1 + e^{-\xv_i^\top\wv}} - y_i\bigg) $ \nabla L(\wv) & = \frac{1}{N} \sum_{i = 1}^N \frac{e^{\xv_i^\top\wv}\xv_i}{1 + e^{\xv_i^\top\wv}} - y_i\xv_i \\ & = \frac{1}{N} \sum_{i = 1}^N \xv_i \left( \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_...
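The selected gradient, $\nabla L(\wv) = \frac{1}{N}\sum_i \xv_i(\sigma(\xv_i^\top\wv) - y_i)$ with $\sigma$ the sigmoid, can be verified against finite differences; a minimal sketch (NumPy assumed, random toy data):

```python
import numpy as np

# Numeric check of grad L(w) = (1/N) sum_i x_i * (sigmoid(x_i^T w) - y_i)
# for L(w) = (1/N) sum_i [log(1 + e^{x_i^T w}) - y_i x_i^T w].
rng = np.random.default_rng(3)
N, d = 20, 4
X = rng.normal(size=(N, d))
y = rng.integers(0, 2, size=N).astype(float)
w = rng.normal(size=d)

def loss(w):
    z = X @ w
    return np.mean(np.log1p(np.exp(z)) - y * z)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
claimed = X.T @ (sigmoid(X @ w) - y) / N

eps = 1e-6
numeric = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                    for e in np.eye(d)])
assert np.allclose(claimed, numeric, atol=1e-5)
```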
3623
When constructing a word embedding, what is true regarding negative samples?
[ "short random walk is treated as a sentence. In its final phase, the algorithm employs Gensim's word2vec algorithm to learn embeddings based on biased random walks. Sequences of nodes are fed into a skip-gram or continuous bag of words model and traditional machine-learning techniques for classification can be used...
[ "They are words that do not appear as context words", "They are selected among words which are not stop words", "Their frequency is decreased down to its logarithm", "They are oversampled if less frequent" ]
['They are oversampled if less frequent']
3625
If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
[ "-{\\mathcal {R}}(A)[\\rho ].} Next we vectorize the matrix ρ {\\displaystyle \\rho } which is the mapping ρ = ∑ i, j ρ i j | i ⟩ ⟨ j | ↦ | ρ ⟩ ⟩ = ∑ i, j ρ i j | i ⟩ <unk> | j ⟩, {\\displaystyle \\rho =\\sum _{i,j}\\rho _{ij}|i\\rangle \\langle j|\\mapsto |\\rho \\rangle \\!\\rangle =\\sum _{i,j}\\rho _{ij}|i\\ran...
[ "(0, 1, 1, 1)", "(0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))", "(1, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))", "(1, 0, 0, 0)" ]
(0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))
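The answer can be reproduced with HITS power iteration; a minimal sketch (NumPy assumed, and assuming the convention implied by the answer: column $j$ of L lists the out-links of node $j$, so authorities satisfy $a \propto L h$ and hubs $h \propto L^\top a$):

```python
import numpy as np

# HITS on the given link matrix: first column (0,1,1,1), all else 0,
# i.e. node 1 links to nodes 2, 3, 4.
L = np.zeros((4, 4))
L[:, 0] = [0, 1, 1, 1]

a = np.ones(4)
h = np.ones(4)
for _ in range(50):
    a = L @ h
    a /= np.linalg.norm(a)
    h = L.T @ a
    h /= np.linalg.norm(h)

s = 1 / np.sqrt(3)
assert np.allclose(a, [0, s, s, s])  # authorities: (0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))
assert np.allclose(h, [1, 0, 0, 0])  # node 1 is the only hub
```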
3626
If the top 100 documents contain 50 relevant documents
[ "Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done \"manually\" (or \"intellectually\") or algorithmically. The intellectual classification of documents...
[ "the precision of the system at 50 is 0.25", "the precision of the system at 100 is 0.5", "the recall of the system is 0.5", "All of the above" ]
the precision of the system at 100 is 0.5
3630
What is WRONG regarding the Transformer model?
[ "on the harmonics produced by the loads: Transformers with a larger K-factor are more expensive to produce.", "s. McGraw-Hill. pp. 586–622. CEGB, (Central Electricity Generating Board) (1982). Modern Power Station Practice. Pergamon. ISBN 978-0-08-016436-6. Crosby, D. (1958). \"The Ideal Transformer\". IRE Transa...
[ "It uses a self-attention mechanism to compute representations of the input and output.", "Its computation cannot be parallelized compared to LSTMs and other sequential models.", "Its complexity is quadratic to the input size.", "It captures the semantic context of the input." ]
['Its computation cannot be parallelized compared to LSTMs and other sequential models.']
3636
Which of the following statements about index merging (when constructing inverted files) is correct?
[ "Idea behind linear-time merging Think of two pile of cards that are placed face up I Basic step: pick the smaller of the two cards and place it in the output pile I There are Æ n basic steps, since each basic step removes one card from the input piles, and we started with n cards in the input pile I Therefore the ...
[ "While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting", "Index merging is used when the vocabulary does no longer fit into the main memory", "The size of the final merged index file is O (n log2 (n) M )), where M is the size of the available memory", "While ...
While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting
3640
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
[ "Semantic Indexing (LSI) • Latent Dirichlet Allocation (LDA) LSA viz • LSA uses a term-document matrix which describes the occurrences of terms in documents • Weight of element proportional to the number of times the terms appear in each document (rare terms are upweighted to reflect their relative importance) • SV...
[ "The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot", "LSI does not depend on the order of words in the document, whereas WE does", "LSI is deterministic (given the dimension), whereas WE is not", "LSI does take into account the frequency of words in the documents, whereas WE wit...
LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not
3641
The number of non-zero entries in a column of a term-document matrix indicates:
[ "=c_{1}\\\\a_{2}+b_{2}&=c_{2}\\\\&\\ \\ \\vdots \\\\a_{n}+b_{n}&=c_{n}\\end{aligned}}} Hence, index notation serves as an efficient shorthand for representing the general structure to an equation, while applicable to individual components. Two-dimensional arrays More than one index is used to describe arrays of num...
[ "how many terms of the vocabulary a document contains", "how often a term of the vocabulary occurs in a document", "how relevant a term is for a document", "none of the other responses is correct" ]
none of the other responses is correct
3642
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
[ "Semantic Indexing (LSI) • Latent Dirichlet Allocation (LDA) LSA viz • LSA uses a term-document matrix which describes the occurrences of terms in documents • Weight of element proportional to the number of times the terms appear in each document (rare terms are upweighted to reflect their relative importance) • SV...
[ "LSI is deterministic (given the dimension), whereas WE is not", "LSI does not take into account the order of words in the document, whereas WE does", "The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot", "LSI does take into account the frequency of words in the documents, wherea...
LSI does take into account the frequency of words in the documents, whereas WE does not.
3643
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true?
[ "s to a node in this tree at depth l {\\displaystyle \\ell }. The i {\\displaystyle i} th word in the prefix code corresponds to a node v i {\\displaystyle v_{i}} let A i {\\displaystyle A_{i}} be the set of all leaf nodes (i.e. of nodes at depth l n {\\displaystyle \\ell _{n}} ) in the subtree of A {\\displaystyle...
[ "N co-occurs with its prefixes in every transaction", "{N}’s minimum possible support is equal to the number of paths", "For every node P that is a parent of N in the FP tree, confidence(P->N) = 1", "The item N exists in every candidate set" ]
['{N}’s minimum possible support is equal to the number of paths']
3644
Which of the following statements regarding topic models is false?
[ "In statistics and natural language processing, a topic model is a type of statistical model for discovering the abstract \"topics\" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a doc...
[ "Topic models map documents to dense vectors", "In LDA, topics are modeled as distributions over documents", "LDA assumes that each document is generated from a mixture of topics with a probability distribution", "Topics can serve as features for document classification" ]
['In LDA, topics are modeled as distributions over documents']
3646
Modularity of a social network always:
[ "\\displaystyle b} ) to each vertex v ∈ V ( G ) {\\displaystyle v\\in V(G)} in the graph), the modularity measures the difference between the number of links from/to each pair of communities, from that expected in a graph that is completely random in all respects other than the set of degrees of each of the vertice...
[ "Increases with the number of communities", "Increases when an edge is added between two members of the same community", "Decreases when new nodes are added to the social network that form their own communities", "Decreases if an edge is removed" ]
Increases when an edge is added between two members of the same community
3647
Which of the following is wrong regarding Ontologies?
[ "(2008). \"Ontology (Science)\". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). \"What is an Ontology?\". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007...
[ "We can create more than one ontology that conceptualizes the same real-world entities", "Ontologies help in the integration of data expressed in different models", "Ontologies dictate how semi-structured data are serialized", "Ontologies support domain-specific vocabularies" ]
Ontologies dictate how semi-structured data are serialized
3650
Which of the following statements is correct concerning the use of Pearson’s Correlation for user-based collaborative filtering?
[ "(X and Y) have in common to (b) the maximum amount of score deviation they could have in common MICRO-110 / Spring 2019 68 The Pearson coefficient The value of r ranges between ( -1) and ( +1) The value of r denotes the strength of the association as illustrated by the following diagram. -1 1 0 -0.25 -0.75 0.75 ...
[ "It measures whether different users have similar preferences for the same items", "It measures how much a user’s ratings deviate from the average ratings", "It measures how well the recommendations match the user’s preferences", "It measures whether a user has similar preferences for different items" ]
It measures whether different users have similar preferences for the same items
3651
After the join step, the number of k+1-itemsets
[ "Within computing, author O'Connell defines join selection factor as \"[t]he percentage (or fraction) of records in one file that will be joined with records of another file\". This can be calculated when two database tables are to be joined. It is primarily concerned with query optimization.", "has. Participants...
[ "is equal to the number of frequent k-itemsets", "can be equal, lower or higher than the number of frequent k-itemsets", "is always higher than the number of frequent k-itemsets", "is always lower than the number of frequent k-itemsets" ]
can be equal, lower or higher than the number of frequent k-itemsets
3653
Which is true about the use of entropy in decision tree induction?
[ "s is approximately 2 n H ( k / n ) {\\displaystyle 2^{n\\mathrm {H} (k/n)}}. Use in machine learning Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree...
[ "The entropy of the set of class labels of the samples from the training set at the leaf level is always 0", "We split on the attribute that has the highest entropy", "The entropy of the set of class labels of the samples from the training set at the leaf level can be 1", "We split on the attribute that has t...
['The entropy of the set of class labels of the samples from the training set at the leaf level can be 1']
3655
Which statement is false about clustering?
[ "Caruana, Rich (2007). \"Consensus Clusterings\". Seventh IEEE International Conference on Data Mining (ICDM 2007). IEEE. pp. 607–612. doi:10.1109/icdm.2007.73. ISBN 978-0-7695-3018-5....we address the problem of combining multiple clusterings without access to the underlying features of the data. This process is k...
[ "K-means fails to give good results if the points have non-convex shapes", "In K-means, bad initialization can lead to poor convergence speed", "DBSCAN is a deterministic algorithm", "DBSCAN algorithm is robust to outliers", "Density-based clustering fails to discover non-convex clusters" ]
['DBSCAN is a deterministic algorithm']
3656
Modularity clustering will end up always with the same community structure?
[ "\\displaystyle b} ) to each vertex v ∈ V ( G ) {\\displaystyle v\\in V(G)} in the graph), the modularity measures the difference between the number of links from/to each pair of communities, from that expected in a graph that is completely random in all respects other than the set of degrees of each of the vertice...
[ "True", "Only for connected graphs", "Only for cliques", "False" ]
False
3659
For his awesome research, Tugrulcan is going to use the Pagerank with teleportation and HITS algorithm, not on a network of webpages but on the retweet network of Twitter! The retweet network is a directed graph, where nodes are users and an edge going out from a user A and to a user B means that "User A retweeted User...
[ "by an infected individual.\" R 0 = β τ = β μ {\\displaystyle R_{0}=\\beta \\tau ={\\beta \\over \\mu }} Web link analysis Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg's HITS algorithm, the CheiRank...
[ "It will have a non-zero hub value.", "It will have an authority value of zero.", "It will have a pagerank of zero.", "Its authority value will be equal to the hub value of a user who never retweets other users." ]
It will have a pagerank of zero. Its authority value will be equal to the hub value of a user who never retweets other users.
3664
When searching for an entity 𝑒𝑛𝑒𝑤 that has a given relationship 𝑟 with a given entity 𝑒
[ "In computing, an exclusive relationship is a type of Relationship in computer data base design. In Relational Database Design, in some cases the existence of one kind of relationship type precludes the existence of another. Entities within an entity type A may be related by a relationship type R to an entity in en...
[ "We search for 𝑒𝑛𝑒𝑤 that have a similar embedding vector to 𝑒", "We search for 𝑒𝑛𝑒𝑤 that have a similar embedding vector to 𝑒𝑜𝑙𝑑 which has relationship 𝑟 with 𝑒", "We search for pairs (𝑒𝑛𝑒𝑤, 𝑒) that have similar embedding to (𝑒𝑜𝑙𝑑, 𝑒)", "We search for pairs (𝑒𝑛𝑒𝑤, 𝑒) that have si...
We search for pairs (𝑒𝑛𝑒𝑤, 𝑒) that have similar embedding to (𝑒𝑜𝑙𝑑, 𝑒)
3667
Which of the following graph analysis techniques do you believe would be most appropriate to identify communities on a social graph?
[ "which are a function of the observed network and, in some cases, nodal attributes. The probability of a graph y ∈ Y {\\displaystyle y\\in {\\mathcal {Y}}} in an ERGM is defined by: P ( Y = y | θ ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) {\\displaystyle P(Y=y|\\theta )={\\frac {\\exp(\\theta ^{T}s(y))}{c(\\theta )}}} where...
[ "Cliques", "Random Walks", "Shortest Paths", "Association rules" ]
Cliques
3671
Which of the following models for generating vector representations for text require to precompute the frequency of co-occurrence of words from the vocabulary in the document collection
[ "quency - Inverse Document Frequency tf-idf(wi,dj) = tf(wi,dj)·idf(wi) with idf(wi) = log |D| nb(dk ⊃wi) |D|: number of documents nb(dk ⊃wi): number of documents which contain term wi Computational Linguistics Course (EPFL-MsCS) – Information Retrieval – 35 / 74 Introduction Toolchain Indexing Vector Space model Qu...
[ "LSI", "CBOW", "Fasttext", "Glove" ]
['Glove']
3672
For which document classifier the training cost is low and inference is expensive?
[ "approach that seeks to solve this problem. It involves training only the fast (but imprecise) algorithms in the bucket, and then using the performance of these algorithms to help determine which slow (but accurate) algorithm is most likely to do best. Amended Cross-Entropy Cost: An Approach for Encouraging Diversi...
[ "for none", "for kNN", "for NB", "for fasttext" ]
for kNN
3678
A word embedding for given corpus
[ "over the corpus. The CBOW can be viewed as a ‘fill in the blank’ task, where the word embedding represents the way the word influences the relative probabilities of other words in the context window. Words which are semantically similar should influence these probabilities in similar ways, because semantically sim...
[ "depends only on the dimension d", "depends on the dimension d and number of iterations in gradient descent", "depends on the dimension d, number of iterations and chosen negative samples", "there are further factors on which it depends", "" ]
there are further factors on which it depends
3682
In Ranked Retrieval, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true? Hint: P@k and R@k are the precision and recall of the result set consisting of the k top-ranked documents.
[ "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of lin...
[ "P@k-1>P@k+1", "R@k-1=R@k+1", "R@k-1<R@k+1", "P@k-1=P@k+1" ]
R@k-1<R@k+1
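A small numeric sketch of the hint, using a hypothetical relevance list in which position k is non-relevant and position k+1 is relevant, shows recall strictly increasing between k-1 and k+1 (precision, by contrast, can move either way):

```python
def precision_recall_at_k(relevance, k, total_relevant):
    """P@k and R@k for a ranked list given binary relevance flags."""
    hits = sum(relevance[:k])
    return hits / k, hits / total_relevant

rel = [1, 0, 1, 0, 1]   # hypothetical relevance flags of the top-5 results
R = 3                   # assumed total relevant documents in the collection
k = 4                   # position k (rel[3]) is 0, position k+1 (rel[4]) is 1
p_km1, r_km1 = precision_recall_at_k(rel, k - 1, R)
p_kp1, r_kp1 = precision_recall_at_k(rel, k + 1, R)
print(r_km1, r_kp1)     # recall strictly increases: R@k-1 < R@k+1
```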
3684
Regarding the Expectation-Maximization algorithm, which one of the following is false?
[ "Machine Learning Course - CS-433 Expectation-Maximization Algorithm Nov 30, 2022 Martin Jaggi Last updated on: November 28, 2022 credits to Mohammad Emtiyaz Khan & R ̈udiger Urbanke Motivation Computing maximum likelihood for Gaussian mixture model is difficult due to the log outside the sum. max θ L(θ) := N X n=1...
[ "Assigning equal weights to workers initially decreases the convergence time", "The label with the highest probability is assigned as the new label", "It distinguishes experts from normal workers", "In E step the labels change, in M step the weights of the workers change" ]
['Assigning equal weights to workers initially decreases the convergence time']
3686
For an item that has not received any ratings, which method can make a prediction?
[ "unknown parameters. A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate ...
[ "User-based collaborative RS", "Item-based collaborative RS", "Content-based RS", "None of the above" ]
Content-based RS
3692
What does the SMART algorithm for query relevance feedback modify? (Slide 11 Week 3)
[ "and predicted responses (in real time) as you vary the value of an X variable • The horizontal dotted line shows the current predicted value of each Y variable for the current values of the X variables. • The black lines within the plots show how the predicted value changes when you change the current value of an ...
[ "The original document weight vectors", "The original query weight vectors", "The result document weight vectors", "The keywords of the original user query" ]
The original query weight vectors
3695
Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is TRUE?
[ "s to a node in this tree at depth l {\\displaystyle \\ell }. The i {\\displaystyle i} th word in the prefix code corresponds to a node v i {\\displaystyle v_{i}} let A i {\\displaystyle A_{i}} be the set of all leaf nodes (i.e. of nodes at depth l n {\\displaystyle \\ell _{n}} ) in the subtree of A {\\displaystyle...
[ "N co-occurs with its prefixes in every transaction", "For every node P that is a parent of N in the FP tree, confidence (P->N) = 1", "{N}’s minimum possible support is equal to the number of paths", "The item N exists in every candidate set" ]
{N}’s minimum possible support is equal to the number of paths
3698
In Ranked Retrieval, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true? Hint: P@k and R@k are the precision and recall of the result set consisting of the k top-ranked documents.
[ "PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term \"web page\" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of lin...
[ "P@k-1>P@k+1", "P@k-1=P@k+1", "R@k-1<R@k+1", "R@k-1=R@k+1" ]
['R@k-1<R@k+1']
3700
Suppose that for points p, q, and t in a metric space, the following hold: p is density-reachable from q; t is density-reachable from q; p is density-reachable from t. Which of the following statements is false?
[ "more than k objects. We denote the set of k nearest neighbors as Nk(A). This distance is used to define what is called reachability distance: reachability-distance k ( A, B ) = max { k -distance ( B ), d ( A, B ) } {\\displaystyle {\\text{reachability-distance}}_{k}(A,B)=\\max\\{k{\\text{-distance}}(B),d(A,B)\\}} ...
[ "t is a core point", "p is a border point", "p and q are density-connected", "q is a core point " ]
['p is a border point']
3701
If for the χ2 statistics for a binary feature, we obtain P(χ2 |DF = 1) < 0.05, this means:
[ "P4 metric (also known as FS or Symmetric F ) enables performance evaluation of the binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in similar way to F1 metric, however addressing the criticisms leveled against F1. It may be perceived as it...
[ "That the class labels depends on the feature", "That the class label is independent of the feature", "That the class label correlates with the feature", "No conclusion can be drawn" ]
['That the class labels depends on the feature']
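As a sketch of the reasoning: for DF = 1 the 5% critical value of the chi-square distribution is about 3.841, so a statistic exceeding it lets us reject independence between feature and class label. With hypothetical 2x2 counts:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (feature present/absent vs class 0/1)."""
    n = a + b + c + d
    # closed form of sum over cells of (observed - expected)^2 / expected
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts where the feature clearly tracks the class:
stat = chi2_2x2(40, 10, 10, 40)
CRITICAL_95 = 3.841          # chi-square critical value at p = 0.05, DF = 1
print(stat, stat > CRITICAL_95)  # statistic 36.0 -> reject independence
```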
3706
Which of the following is false regarding K-means and DBSCAN?
[ ". DBSCAN also received the 2014 ACM SIGKDD test of time award. As of 2005, he was the most cited German researcher in databases, and data mining. References External links Former Database Systems Group of Hans-Peter Kriegel Publications in the Digital Bibliography & Library Project Publications in the ACM Digital ...
[ "K-means does not handle outliers, while DBSCAN does", "K-means takes the number of clusters as parameter, while DBSCAN does not take any parameter", "K-means does many iterations, while DBSCAN does not", "Both are unsupervised" ]
['K-means takes the number of clusters as parameter, while DBSCAN does not take any parameter']
3708
Which of the following is correct regarding community detection?
[ "original and even improved base algorithm, matching its quality of clusters while being multiple orders of magnitude faster. See also blockmodeling Girvan–Newman algorithm – Community detection algorithm Lancichinetti–Fortunato–Radicchi benchmark – AlgorithmPages displaying short descriptions with no spaces for ge...
[ "High betweenness of an edge indicates that the communities are well connected by that edge", "The Louvain algorithm attempts to minimize the overall modularity measure of a community graph", "High modularity of a community indicates a large difference between the number of edges of the community and the number...
['High betweenness of an edge indicates that the communities are well connected by that edge', 'High modularity of a community indicates a large difference between the number of edges of the community and the number of edges of a null model']
3711
When constructing a word embedding, negative samples are:
[ "short random walk is treated as a sentence. In its final phase, the algorithm employs Gensim's word2vec algorithm to learn embeddings based on biased random walks. Sequences of nodes are fed into a skip-gram or continuous bag of words model and traditional machine-learning techniques for classification can be used...
[ "Word - context word combinations that are not occurring in the document collection", "Context words that are not part of the vocabulary of the document collection", "All less frequent words that do not occur in the context of a given word", "Only words that never appear as context word" ]
['Word - context word combinations that are not occurring in the document collection']
3712
Which of the following statements about index merging (when constructing inverted files) is correct?
[ "Idea behind linear-time merging Think of two pile of cards that are placed face up I Basic step: pick the smaller of the two cards and place it in the output pile I There are Æ n basic steps, since each basic step removes one card from the input piles, and we started with n cards in the input pile I Therefore the ...
[ "While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting", "Index merging is used when the vocabulary does no longer fit into the main memory", "The size of the final merged index file is O(nlog2(n)*M), where M is the size of the available memory", "While mergi...
['While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting']
3713
For his awesome research, Tugrulcan is going to use the PageRank with teleportation and HITS algorithm, not on a network of webpages but on the retweet network of Twitter! The retweet network is a directed graph, where nodes are users and an edge going out from a user A and to a user B means that "User A retweeted User...
[ "by an infected individual.\" R 0 = β τ = β μ {\\displaystyle R_{0}=\\beta \\tau ={\\beta \\over \\mu }} Web link analysis Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg's HITS algorithm, the CheiRank...
[ "It will have a non-zero hub value", "It will have an authority value of zero", "It will have a PageRank of zero", "Its authority value will be equal to the hub value of a user who never retweets other users" ]
['It will have a PageRank of zero', 'Its authority value will be equal to the hub value of a user who never retweets other users']
3850
Let $f_{\mathrm{MLP}}: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be an $L$-hidden layer multi-layer perceptron (MLP) such that $$ f_{\mathrm{MLP}}(\mathbf{x})=\mathbf{w}^{\top} \sigma\left(\mathbf{W}_{L} \sigma\left(\mathbf{W}_{L-1} \ldots \sigma\left(\mathbf{W}_{1} \mathbf{x}\right)\right)\right) $$ with $\mathbf{w} \in ...
[ "\\frac {2^{-T_{s}}\\zeta |b_{t}^{(0)}-a_{t}^{(0)}|}{\\mu ^{2}}}}, such that the algorithm is guaranteed to converge linearly. Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint. Neural networks Consider a mu...
[ "$M! 2^M$", "$1$", "$2^M$", "$M !$" ]
$M! 2^M$
3851
Consider a linear regression problem with $N$ samples $\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$, where each input $\boldsymbol{x}_{n}$ is a $D$-dimensional vector $\{-1,+1\}^{D}$, and all output values are $y_{i} \in \mathbb{R}$. Which of the following statements is correct?
[ ",\\Theta _{1},\\ldots,\\Theta _{M})={\\frac {1}{M}}\\sum _{j=1}^{M}m_{n}(\\mathbf {x},\\Theta _{j})}. For regression trees, we have m n = ∑ i = 1 n Y i 1 X i ∈ A n ( x, Θ j ) N n ( x, Θ j ) {\\displaystyle m_{n}=\\sum _{i=1}^{n}{\\frac {Y_{i}\\mathbf {1} _{\\mathbf {X} _{i}\\in A_{n}(\\mathbf {x},\\Theta _{j})}}{N...
[ "Linear regression always \"works\" very well for $N \\ll D$", "A linear regressor works very well if the data is linearly separable.", "Linear regression always \"works\" very well for $D \\ll N$", "None of the above." ]
None of the above.
3852
We apply a Gaussian Mixture Model made of $K$ isotropic Gaussians (invariant to rotation around its center) to $N$ vectors of dimension $D$. What is the number of \emph{free} parameters of this model?
[ ",\\sigma _{0}^{2})\\\\{\\boldsymbol {\\phi }}&\\sim &\\operatorname {Symmetric-Dirichlet} _{K}(\\beta )\\\\z_{i=1\\dots N}&\\sim &\\operatorname {Categorical} ({\\boldsymbol {\\phi }})\\\\x_{i=1\\dots N}&\\sim &{\\mathcal {N}}(\\mu _{z_{i}},\\sigma _{z_{i}}^{2})\\end{array}}} {\\displaystyle } Multivariate Gaussia...
[ "$KD + 2K - 1$", "$KD + 2K - 1 + N - 1$", "$KD + KD^2 - 1$", "$2KD - 1$", "$2KD + N - 1$", "$NKD + NKD^2$", "$NKD + NKD^2 + N$", "$2NKD$", "$2NKD + N$", "$KD + K - 1$", "$KD + K + N$", "$NKD$", "$NKD + N$" ]
$KD + 2K - 1$. Each of the $K$ clusters requires the following parameters: a scalar $\pi_k\in\R$, a vector $\mu_k \in \R^D$, and a scalar $\sigma_k \in \R$. The constraint that $\sum \pi_k = 1$ determines one of the parameters.
3854
Recall that the hard-margin SVM problem corresponds to: $$ \underset{\substack{\ww \in \R^d \\ \forall i:\ y_i \ww^\top \xx_i \geq 1}}{\min} \Vert \ww \Vert_2.$$ Now consider the $2$-dimensional classification dataset corresponding to the $3$ following datapoints: $\xx_1 = (-1, 2)$, $\xx_2 = (1, 2)$, $\xx_3 = (0, -2)$ ...
[ "x} _{i})}. Dot products with w for classification can again be computed by the kernel trick, i.e. w ⋅ φ ( x ) = ∑ i α i y i k ( x i, x ) {\\textstyle \\mathbf {w} \\cdot \\varphi (\\mathbf {x} )=\\sum _{i}\\alpha _{i}y_{i}k(\\mathbf {x} _{i},\\mathbf {x} )}. Computing the SVM classifier Computing the (soft-margin)...
[ "Our dataset is not linearly separable and hence it does not make sense to consider the hard-margin problem.", "There exists a unique $\\ww^\\star$ which linearly separates our dataset.", "The unique vector which solves the hard-margin problem for our dataset is $\\ww^\\star = (0, 1)$.", "None of the other st...
None of the other statements are true. The solution is $\ww^\star = (0, 0.5)$.
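The stated solution can be verified directly. The datapoint labels are truncated in the question above, so the sketch below assumes $y_1 = y_2 = +1$ and $y_3 = -1$:

```python
# Verify that w* = (0, 0.5) is feasible and has smaller norm than (0, 1),
# assuming labels y1 = y2 = +1 and y3 = -1 (an assumption: the labels are
# truncated in the question text).
points = [((-1, 2), +1), ((1, 2), +1), ((0, -2), -1)]

def feasible(w):
    """All hard-margin constraints y_i * <w, x_i> >= 1 satisfied?"""
    return all(y * (w[0] * x[0] + w[1] * x[1]) >= 1 for x, y in points)

norm = lambda w: (w[0] ** 2 + w[1] ** 2) ** 0.5
w_star, w_other = (0.0, 0.5), (0.0, 1.0)
print(feasible(w_star), feasible(w_other))  # both satisfy the constraints
print(norm(w_star) < norm(w_other))         # True: (0, 0.5) has smaller norm
```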
3856
Let $\mathcal{R}_{p}(f, \varepsilon)$ be the $\ell_{p}$ adversarial risk of a classifier $f: \mathbb{R}^{d} \rightarrow\{ \pm 1\}$, i.e., $$ \mathcal{R}_{p}(f, \varepsilon)=\mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}}\left[\max _{\tilde{\mathbf{x}}:\|\mathbf{x}-\tilde{\mathbf{x}}\|_{p} \leq \varepsilon} \mathbb{1}_{\{...
[ "0, it is constructed by solving η ∈arg max η:∥η∥≤ε L(hx⋆(a + η), b) Example norms frequently used in adversarial attacks ▶The most commonly used norm is the l\\infty-norm [8, 18]. ▶The use of l1-norm leads to sparse attacks. Figure: (Left) An l\\infty-attack: The alteration is hard to perceive. (Right) An l1-attac...
[ "$\\mathcal{R}_{2}(f, \\varepsilon) \\leq \\mathcal{R}_{1}(f, 2 \\varepsilon)$", "$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \\mathcal{R}_{2}(f, \\sqrt{d} \\varepsilon)$", "$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \\mathcal{R}_{1}(f, \\varepsilon)$", "$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \...
$\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{2}(f, \sqrt{d} \varepsilon)$
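The selected bound follows from the norm inequality $\|\boldsymbol{\delta}\|_2 \leq \sqrt{d}\,\|\boldsymbol{\delta}\|_\infty$: every $\ell_\infty$ perturbation of radius $\varepsilon$ is also an $\ell_2$ perturbation of radius $\sqrt{d}\varepsilon$, so the inner maximum can only grow. A quick numeric spot-check (random sampling, not a proof):

```python
import random

# Any delta with ||delta||_inf <= eps also has ||delta||_2 <= sqrt(d) * eps,
# i.e. the l_inf ball of radius eps sits inside the l2 ball of radius sqrt(d)*eps.
random.seed(0)
d, eps = 6, 0.5
for _ in range(1000):
    delta = [random.uniform(-eps, eps) for _ in range(d)]  # ||delta||_inf <= eps
    l2 = sum(x * x for x in delta) ** 0.5
    assert l2 <= d ** 0.5 * eps + 1e-12
print("ok: every sampled l_inf perturbation fits in the larger l2 ball")
```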
3857
The following function(s) have a unique minimizer.
[ "}}_{1},{\\hat {y}}_{2},\\ldots,{\\hat {y}}_{n})} using least squares. The objective function to be minimized is Q ( w ) = ∑ i = 1 n Q i ( w ) = ∑ i = 1 n ( y ^ i − y i ) 2 = ∑ i = 1 n ( w 1 + w 2 x i − y i ) 2. {\\displaystyle Q(w)=\\sum _{i=1}^{n}Q_{i}(w)=\\sum _{i=1}^{n}\\left({\\hat {y}}_{i}-y_{i}\\right)^{2}=\...
[ "(a) $f(x)=x^{2}, x \\in[-3,2]$", "(b) $f(x)=\\log (x), x \\in(0,10]$", "(c) $f(x)=\\sin (x), x \\in[-10,10]$", "(d) $f(x)=e^{3 x}+x^{4}-3 x, x \\in[-10,10]$" ]
['(a)', '(d)']
3859
(Stochastic Gradient Descent) One iteration of standard SGD for SVM, logistic regression and ridge regression costs roughly $\mathcal{O}(D)$, where $D$ is the dimension of a data point.
[ "x0, step size γk 1. For k = 0, 1,: obtain the (minibatch) stochastic gradient gk update xk+1 ←xk -γkgk Perturbed Stochastic Gradient Descent [13] Input: Stochastic gradient oracle g, initial point x0, step size γk 1. For k = 0, 1,: sample noise ξ uniformly from unit sphere update xk+1 ←xk -γk(gk + ξ) Mathematics o...
[ "True", "False" ]
True
3862
Consider two fully connected networks, A and B, with a constant width for all layers, inputs and outputs. Network A has depth $3L$ and width $H$, network B has depth $L$ and width $2H$. Everything else is identical for the two networks and both $L$ and $H$ are large. In this case, performing a single iteration of backp...
[ ": Write down the network pre- tending that all the parameters are independent. Run the backpropagation algorithm. The gradient for a particular pa- rameter for the model where some weights are equal is now just the sum of the gradients (of the model where weights are independent) of all the edges that share the sa...
[ "True", "False" ]
True. The number of multiplications required for backpropagation is linear in the depth and quadratic in the width, $3LH^2 < L (2H)^2 = 4LH^2$.
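The inequality behind the answer can be restated as a one-line cost model. This is a rough sketch counting only matrix-multiply work per layer, with hypothetical sizes standing in for large $L$ and $H$:

```python
# Rough multiplication count for backprop through fully connected layers:
# each of `depth` layers contributes on the order of width^2 multiplies.
def matmul_cost(depth, width):
    return depth * width ** 2

L, H = 10, 100                  # hypothetical sizes; any L, H give the same ratio
cost_A = matmul_cost(3 * L, H)      # network A: depth 3L, width H  -> 3*L*H^2
cost_B = matmul_cost(L, 2 * H)      # network B: depth L, width 2H -> 4*L*H^2
print(cost_A < cost_B)              # True: 3LH^2 < 4LH^2
```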
3868
Consider a regression task. You are using your favorite learning algorithm with parameters w and add a regularization term of the form $\frac{\lambda}{2}\|\mathbf{w}\|^{2}$. Which of the following statements are correct for a typical scenario?
[ "t}=e_{t}\\otimes x_{i}^{t}.} The role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem min W ‖ X W − Y ‖ 2 2 + λ ‖ W ‖ 2 2 {\\displaystyle \\min _...
[ "The training error as a function of $\\lambda \\geq 0$ decreases.", "The training error as a function of $\\lambda \\geq 0$ increases.", "The test error as a function of $\\lambda \\geq 0$ increases.", "The test error as a function of $\\lambda \\geq 0$ decreases.", "The training error as a function of $\\...
['The test error as a function of $\\lambda \\geq 0$ first decreases and then increases.', 'The training error as a function of $\\lambda \\geq 0$ increases.']
3869
Consider a movie recommendation system which minimizes the following objective: $\frac{1}{2} \sum_{(d,n)\in\Omega} [x_{dn} - (\mathbf{W} \mathbf{Z}^\top)_{dn}]^2 + \frac{\lambda_w}{2} \norm{\mathbf{W}}_\text{Frob}^2 + \frac{\lambda_z}{2} \norm{\mathbf{Z}}_\text{Frob}^2$ where $\mathbf{W}\in \R^{D \times K}$ and $\ma...
[ "placed at the appropriate spot based on their factor vectors in two dimensions. The plot reveals distinct genres, including clusters of movies with strong female leads, fraternity humor, and quirky independent films. Recall that for K-means, K was the number of clusters. (Similarly for GMMs, K was the number of la...
[ "Feature vectors obtained in both cases remain the same. ", "Feature vectors obtained in both cases are different.", "Feature vectors obtained in both cases can be either same or different, depending on the sparsity of rating matrix.", "Feature vectors obtained in both cases can be either same or different, d...
Feature vectors obtained in both cases remain the same. The corresponding parts of the SGD trajectories in the two cases are identical.
3870
(SVD) The set of singular values of any rectangular matrix $\mathbf{X}$ is equal to the set of eigenvalues for the square matrix $\mathbf{X X}^{\top}$.
[ ") matrix A: A = Q <unk> <unk> ̃Σ 0 0 0 - ̃Σ 0 0 0 0 <unk> <unk>QT, Q = 1 √ 2 ̃U - ̃U un+1 · · · um V V 0 · · · 0, where ̃U, ̃Σ are the factors of the economy-sized SVD (1.6). 1.2.2 Uniqueness of the SVD Because eigenvalues are uniquely determined as the roots of the characteristic poly- nomial of a matrix, the rel...
[ "True", "False" ]
False
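A one-line counterexample: the eigenvalues of $\mathbf{X}\mathbf{X}^\top$ are the *squares* of the singular values of $\mathbf{X}$, so the two sets generally differ. Sketch with a 1x2 matrix, where everything can be computed by hand:

```python
# For X = [[3, 4]] (a 1x2 matrix), X X^T = [[25]] has eigenvalue 25,
# while the singular value of X is sqrt(25) = 5: eigenvalues of X X^T
# are the squares of the singular values, not the singular values.
X = [3.0, 4.0]                   # the 1x2 matrix as a row vector
gram = sum(x * x for x in X)     # X X^T is 1x1 -> its sole eigenvalue
singular_value = gram ** 0.5
print(gram, singular_value)      # 25.0 vs 5.0 -> the two sets differ
```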
3871
(Infinite Data) Assume that your training data $\mathcal{S}=\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}$ is iid and comes from a fixed distribution $\mathcal{D}$ that is unknown but is known to have bounded support. Assume that your family of models contains a finite number of elements and that you choose the bes...
[ "is assumed that the training set consists of a sample of independent and identically distributed pairs, ( x i, y i ) {\\displaystyle (x_{i},\\;y_{i})}. In order to measure how well a function fits the training data, a loss function L : Y × Y → R ≥ 0 {\\displaystyle L:Y\\times Y\\to \\mathbb {R} ^{\\geq 0}} is defi...
[ "True", "False" ]
True
3875
Consider a binary classification problem with classifier $f(\mathbf{x})$ given by $$ f(\mathbf{x})= \begin{cases}1, & g(\mathbf{x}) \geq 0 \\ -1, & g(\mathbf{x})<0\end{cases} $$ and $\mathbf{x} \in \mathbb{R}^{6}$. Consider a specific pair $(\mathbf{x}, y=1)$ and assume that $g(\mathbf{x})=8$. In particular this means ...
[ "\\to \\mathbb {R} }. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing φ {\\displaystyle \\phi }. Selection of a loss function within this framework impacts the optimal f φ ∗ {\\displaystyle f_{\\phi }^{*}} which minimizes the expected risk, see empirical risk ...
[ "$+13$", "$-4$", "$-5$", "$-7$", "$2$", "$4$", "$-13$", "$-2$", "$+7$", "$0$" ]
$2$
3876
We are given a data set $S=\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}$ for a binary classification task where $\boldsymbol{x}_{n}$ in $\mathbb{R}^{D}$. We want to use a nearest-neighbor classifier. In which of the following situations do we have a reasonable chance of success with this approach? [Ignore the i...
[ "ified as BLUE. All figures taken from Chapter 2 of “The Elements of Statistical Learning” by Hastie, Friedman, and Tibshirani. The k-nearest-neighbor classifier/regressor is an entirely dif- ferent way of performing classification/regression. It per- forms best if we are working in low dimensions and if we have re...
[ "$n \\rightarrow \\infty, D$ is fixed", "$ n \\rightarrow \\infty, D \\ll \\ln (n)$", "$ n=D^2, D \\rightarrow \\infty$", "$ n$ is fixed, $D \\rightarrow \\infty$" ]
['$n \\rightarrow \\infty, D$ is fixed', '$ n \\rightarrow \\infty, D \\ll \\ln (n)$']
3879
Consider a linear model $\hat{y} = \xv^\top \wv$ with the squared loss under an $\ell_\infty$-bounded adversarial perturbation. For a single point $(\xv, y)$, it corresponds to the following objective: \begin{align} \max_{\tilde{\xv}:\ \|\xv-\tilde{\xv}\|_\infty\leq \epsilon} \left(y...
[ "optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility...
[ "$(5+9\\varepsilon)^2$", "$(3+10\\varepsilon)^2$", "$(10-\\varepsilon)^2$", "Other", "$(9+5\\varepsilon)^2$" ]
$(9+5\varepsilon)^2$. First, it's convenient to reparametrize the objective in terms of an additive perturbation $\boldsymbol{\delta}$: $\max_{\boldsymbol{\delta}:\|\boldsymbol{\delta}\|_\infty\leq \epsilon} \left(y - \xv^\top \wv - \boldsymbol{\delta}^\top \wv\right)^{2}$. If we plug the given values ...
3885
Let us assume that a kernel $K: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is said to be valid if there exists $k \in \mathbb{N}$ and $\Phi: \mathcal{X} \rightarrow \mathbb{R}^{k}$ such that for all $\left(x, x^{\prime}\right) \in \mathcal{X} \times \mathcal{X}, K\left(x, x^{\prime}\right)=\Phi(x)^{\top} \P...
[ "N} identity matrix, and Ω ∈ R N × N {\\displaystyle \\Omega \\in \\mathbb {R} ^{N\\times N}} is the kernel matrix defined by Ω i j = φ ( x i ) T φ ( x j ) = K ( x i, x j ) {\\displaystyle \\Omega _{ij}=\\phi (x_{i})^{T}\\phi (x_{j})=K(x_{i},x_{j})}. Kernel function K For the kernel function K(•, •) one typically h...
[ "$\\mathcal{X}=\\mathbb{N}, K\\left(x, x^{\\prime}\\right)=2$", "$\\mathcal{X}=\\mathbb{R}^{d}, K\\left(x, x^{\\prime}\\right)=\\left(x^{\\top} x^{\\prime}\\right)^{2}$", "$\\mathcal{X}=\\mathbb{R}, K\\left(x, x^{\\prime}\\right)=\\cos \\left(x-x^{\\prime}\\right)$", "All of the proposed kernels are in fact v...
All of the proposed kernels are in fact valid.
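For instance, the polynomial kernel $(x^\top x')^2$ is realized by the feature map of all degree-2 monomials, so it is valid. A numeric spot-check of positive semidefiniteness of its Gram matrix (random probing on toy points, a sketch and not a proof):

```python
import random

def K(x, xp):
    """Polynomial kernel (x^T x')^2, a valid kernel since it equals
    Phi(x)^T Phi(x') for the degree-2 monomial feature map Phi."""
    return sum(a * b for a, b in zip(x, xp)) ** 2

# Spot-check z^T G z >= 0 for random z, where G is the Gram matrix of a
# few random inputs; exact PSD-ness follows from the feature-map identity.
random.seed(1)
pts = [[random.gauss(0, 1) for _ in range(3)] for _ in range(5)]
G = [[K(p, q) for q in pts] for p in pts]
for _ in range(200):
    z = [random.gauss(0, 1) for _ in range(5)]
    quad = sum(z[i] * G[i][j] * z[j] for i in range(5) for j in range(5))
    assert quad >= -1e-9  # non-negative up to floating-point noise
print("Gram quadratic form stayed non-negative on all random probes")
```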
3886
Mark any of the following functions that have unique maximizers:
[ "function of a Markov chain has a very simple form, its maximization is not trivial. Indeed, the maximization must be performed under the constraint that the parameters to be estimated are probabilities! More precisely, recalling that Θ = {πλi, pλjλi, i, j = 1,, M}, we have to solve the follow- ing problem ̂Θ = arg...
[ "$f(x) =-x^{2}, \\quad x \\in[-10,10]$", "$f(x) =\\ln (x), \\quad x \\in(0,10]$", "$f(x) =x^{2}, \\quad x \\in[-10,10]$", "$f(x) =\\cos (2 \\pi x), \\quad x \\in[-1,1]$", "$f(x) =\\cos (2 \\pi x), \\quad x \\in\\left[-\\frac{1}{2}, \\frac{1}{2}\\right]$" ]
['$f(x) =-x^{2}, \\quad x \\in[-10,10]$', '$f(x) =\\ln (x), \\quad x \\in(0,10]$', '$f(x) =\\cos (2 \\pi x), \\quad x \\in\\left[-\\frac{1}{2}, \\frac{1}{2}\\right]$']
3888
Consider a Generative Adversarial Network (GAN) which successfully produces images of goats. Which of the following statements is false?
[ "A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks compete with each other in the form of ...
[ "The discriminator can be used to classify images as goat vs non-goat.", "The generator aims to learn the distribution of goat images.", "After the training, the discriminator loss should ideally reach a constant value.", "The generator can produce unseen images of goats." ]
The discriminator can be used to classify images as goat vs non-goat. This statement is FALSE because the discriminator classifies images as real or fake, not as goat vs non-goat.
3889
Which of the following probability distributions are members of the exponential family:
[ "< x 0 du (e-2u -e-u-y) = c E e-u-y -1 2e-2uFx 0 = 1 2c E 1 -e-2x -2e-y + 2e-y-xF. On setting x = y = +\\infty, we get 1 2c = 1, and this implies that c = 2. Now for y ≤x, consideration of areas shows that we should take the above formula with y = x, so F(x, y) = <unk> <unk> <unk> <unk> <unk> 1 -e-2x + 2e-x-y -2e-y...
[ "Cauchy distribution: $p(y|y_0,\\gamma) = \\frac{1}{\\pi \\gamma [1 + (\\frac{y-y_0}{\\gamma})^2]}$.", "Poisson distribution: $p(y|\\mu) = \\frac{e^{-y}}{y!}\\mu^y$.", "Uniform distribution over $[0,\\eta], \\eta>0$: $p(y|\\eta) = \\frac{1}{\\eta} 1_{y \\in [0,\\eta]}$." ]
Uniform distribution over $[0,\eta], \eta>0$: $p(y|\eta) = \frac{1}{\eta} 1_{y \in [0,\eta]}$. The Poisson and Gaussian distributions are members of the exponential family, as seen in the lecture slides, while the Cauchy and Uniform distributions are not. Indeed, the probability density function of the Cauchy and Uniform di...
3891
How does the bias-variance decomposition of a ridge regression estimator compare with that of the ordinary least-squares estimator in general?
[ "exactly zero in the limit as they themselves approach zero) instead of setting smaller values to zero and leaving larger ones untouched as the hard thresholding operator, often denoted H α, {\\displaystyle \\ H_{\\alpha }\\,} would. In ridge regression the objective is to minimize min β ∈ R p { 1 N ‖ y − X β ‖ 2 2...
[ "Ridge has a larger bias, and larger variance.", "Ridge has a larger bias, and smaller variance.", "Ridge has a smaller bias, and larger variance.", "Ridge has a smaller bias, and smaller variance." ]
Ridge has a larger bias, and smaller variance.
3892
(Alternating Least Squares \& Matrix Factorization) For optimizing a matrix factorization problem in the recommender systems setting, as the number of observed entries increases but all $K, N, D$ are kept constant, the computational cost of the matrix inversion in Alternating Least-Squares increases.
[ "displaystyle {\\text{minimize}}\\quad {\\text{over }}{\\widehat {D}}{\\text{ and }}R\\quad \\operatorname {vec} ^{\\top }(D-{\\widehat {D}})W\\operatorname {vec} (D-{\\widehat {D}})\\quad {\\text{subject to}}\\quad R{\\widehat {D}}=0\\quad {\\text{and}}\\quad RR^{\\top }=I_{r},} where I r {\\displaystyle I_{r}} is...
[ "True", "False" ]
False
3895
(Convex III) Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be two convex functions. Then $h=f \circ g$ is always convex.
[ "VEX FUNCTIONS 3. f(x) = |x|a with a ≥1 Show that the following functions from a Euclidean space E to R are convex. 4. f(x) = ⟨w, x⟩+ b with w ∈E, b ∈R 5. f(x) = 1 2 ⟨x, Ax⟩+ ⟨b, x⟩+ c with A: E →E a symmetric positive semidefinite linear map, b ∈E, c ∈R. Among all of these functions, which are strictly convex? Whi...
[ "True", "False" ]
False
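A minimal counterexample, checked numerically: $f(x) = -x$ is convex (linear) and $g(x) = x^2$ is convex, but $h = f \circ g$ is $h(x) = -x^2$, which is concave.

```python
def f(x): return -x          # linear, hence convex
def g(x): return x * x      # convex
def h(x): return f(g(x))    # h(x) = -x^2, which is concave

a, b = 0.0, 2.0
mid, avg = h((a + b) / 2), (h(a) + h(b)) / 2
# convexity would require h(mid) <= average of the endpoints; here it fails
assert mid > avg
```

The composition is convex when $f$ is, in addition, non-decreasing.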
3900
What is the gradient of $\boldsymbol{x}^{\top} \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$ with respect to $\boldsymbol{x}$ (written as a vector)?
[ "< ε A vector w⋆is a global minimum of L if it is no worse than all oth- ers, L(w⋆) ≤L(w), ∀w ∈RD A local or global minimum is said to be strict if the corresponding inequality is strict for w <unk>= w⋆. Smooth Optimization Follow the Gradient A gradient (at a point) is the slope of the tangent to the function (at ...
[ "$2 \\boldsymbol{W}^{\\top} \\boldsymbol{x}$", "$2 \\boldsymbol{W}^{\\top} \\boldsymbol{W} \\boldsymbol{x}$", "$2 \\boldsymbol{W} \\boldsymbol{W}^{\\top} \\boldsymbol{x}$", "$2 \\boldsymbol{W}$", "$2 \\boldsymbol{W} \\boldsymbol{x}$" ]
$2 \boldsymbol{W}^{\top} \boldsymbol{W} \boldsymbol{x}$
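The answer can be sanity-checked with central finite differences on a small hand-picked $\boldsymbol{W}$ and $\boldsymbol{x}$ (illustrative values), since $\boldsymbol{x}^\top \boldsymbol{W}^\top \boldsymbol{W} \boldsymbol{x} = \|\boldsymbol{W}\boldsymbol{x}\|^2$:

```python
W = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.0]

def matvec(M, v):
    return [sum(m * a for m, a in zip(row, v)) for row in M]

def f(v):
    Wv = matvec(W, v)
    return sum(c * c for c in Wv)           # x^T W^T W x = ||Wx||^2

Wt = [[W[j][i] for j in range(2)] for i in range(2)]
analytic = [2.0 * g for g in matvec(Wt, matvec(W, x))]   # 2 W^T W x

eps = 1e-6
numeric = []
for i in range(2):
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    numeric.append((f(xp) - f(xm)) / (2 * eps))

assert all(abs(a - n) < 1e-3 for a, n in zip(analytic, numeric))
```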
3901
(Minima) Convex functions over a convex set have a unique global minimum.
[ "convex functions, λ1, λ2,, λm ∈R+. Then f := Pm i=1 λifi is convex on dom(f) := Tm i=1 dom(fi). (ii) Let f be a convex function with dom(f) ⊆Rd, g : Rm →Rd an affine function, meaning that g(x) = Ax + b, for some matrix A ∈Rd×m and some vector b ∈Rd. Then the function f ◦g (that maps x to f(Ax + b)) is convex on d...
[ "True", "False" ]
False
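A quick counterexample: a convex function can attain its minimum on a whole interval, e.g. $h(x) = \max(|x| - 1, 0)$ is $0$ everywhere on $[-1, 1]$.

```python
def h(x):
    # convex, but flat at the bottom: minimum value 0 on all of [-1, 1]
    return max(abs(x) - 1.0, 0.0)

assert h(-0.5) == h(0.0) == h(0.5) == 0.0   # many distinct global minimizers
```

The minimum *value* is unique, but the minimizer is unique only under strict convexity.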
3905
Which statement is true for linear regression?
[ "Machine Learning Course - CS-433 Linear Regression Sept 20, 2022 Martin Jaggi Last updated on: September 20, 2022 credits to Mohammad Emtiyaz Khan 1 Model: Linear Regression What is it? Linear regression is a model that as- sumes a linear relationship between inputs and the output. Why learn about linear regressio...
[ "A linear regression model can be expressd as an inner product between feature vectors and a weight vector.", "Linear regression, when using 'usual' loss functions, works fine when the dataset contains many outliers.", "A good fit with linear regression implies a causal relationship between inputs and outputs."...
A linear regression model can be expressd as an inner product between feature vectors and a weight vector.
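The prediction is just $\hat{y} = \langle \mathbf{w}, \mathbf{x} \rangle$; a tiny sketch with made-up weights and features:

```python
w = [0.5, -1.0, 2.0]   # weight vector (illustrative values)
x = [1.0, 3.0, 0.5]    # feature vector for one sample
y_hat = sum(wi * xi for wi, xi in zip(w, x))   # prediction = <w, x>
assert abs(y_hat - (-1.5)) < 1e-12
```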
3910
(Nearest Neighbor) The training error of the 1-nearest neighbor classifier is zero.
[ "*}\\left(2-{\\frac {MR^{*}}{M-1}}\\right)} where R ∗ {\\displaystyle R^{*}} is the Bayes error rate (which is the minimal error rate possible), R k N N {\\displaystyle R_{kNN}} is the asymptotic k-NN error rate, and M is the number of classes in the problem. This bound is tight in the sense that both the lower and...
[ "True", "False" ]
True
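A minimal sketch (assuming distinct training inputs): each training point is its own nearest neighbour at distance 0, so 1-NN reproduces every training label.

```python
train = [(0.0, 'a'), (1.0, 'b'), (2.5, 'a')]

def predict(q):
    # 1-NN: label of the closest training point (ties: first wins)
    return min(train, key=lambda p: (p[0] - q) ** 2)[1]

# every training point is its own nearest neighbour, so training error is 0
assert all(predict(x) == y for x, y in train)
```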
3912
Consider a linear regression model on a dataset which we split into a training set and a test set. After training, our model gives a mean-squared error of 0.1 on the training set and a mean-squared error of 5.3 on the test set. Recall that the mean-squared error (MSE) is given by: $$MSE_{\textbf{w}}(\tex...
[ "of linear regression model to model bivariate dataset, but whose limitation is related to known distribution of the data. The term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a...
[ "Retraining the model with feature augmentation (e.g. adding polynomial features) will increase the training MSE.", "Using cross-validation can help decrease the training MSE of this very model.", "Retraining while discarding some training samples will likely reduce the gap between the train MSE and the test MS...
Ridge regression can help reduce the gap between the training MSE and the test MSE. feature augmentation: Incorrect, using feature augmentation will increase overfitting, hence decrease the training MSE even more. cross-validation: Incorrect, cross-validation can help to select a model that overfits less; it does not...
3917
You are given two distributions over $\mathbb{R}$ : Uniform on the interval $[a, b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$ p_{\mathcal{U}}(y \mid a, b):=\left\{\begin{array}{ll} \frac{1}{b-a}, & \text { for } a \leq y \leq b, \\ 0 & \text { otherwi...
[ "independent, i.e., we assume p ( w, b | log ⁡ μ, log ⁡ ζ, M ) = p ( w | log ⁡ μ, M ) p ( b | log ⁡ σ b, M ). {\\displaystyle p(w,b|\\log \\mu,\\log \\zeta,\\mathbb {M} )=p(w|\\log \\mu,\\mathbb {M} )p(b|\\log \\sigma _{b},\\mathbb {M} ).} When σ b → ∞ {\\displaystyle \\sigma _{b}\\to \\infty }, the distribution of...
[ "Only Uniform.", "Both of them.", "Only Gaussian.", "None of them." ]
Only Gaussian.
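As a sketch of why the Gaussian qualifies: its density can be rewritten with the sufficient statistics $(y, y^2)$ appearing linearly in the exponent,

```latex
p(y \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\Bigl(-\frac{(y-\mu)^2}{2\sigma^2}\Bigr)
  = \exp\!\Bigl(\frac{\mu}{\sigma^2}\, y \;-\; \frac{1}{2\sigma^2}\, y^2
      \;-\; \frac{\mu^2}{2\sigma^2} \;-\; \tfrac{1}{2}\log(2\pi\sigma^2)\Bigr),
```

whereas for the Uniform the indicator $1_{y \in [a,b]}$ makes the support depend on the parameters, so no factorization of the form $h(y)\exp(\eta^\top T(y) - A(\eta))$ exists.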
3918
Which statement is true for the Mean Squared Error (MSE) loss $\operatorname{MSE}(\mathbf{x}, y):=\left(f_{\mathbf{w}}(\mathbf{x})-y\right)^{2}$, with $f_{\mathbf{w}}$ a model parametrized by the weights $\mathbf{w}$?
[ "In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the true value. MSE is a risk function, corre...
[ "MSE is not necessarily convex with respect to the weights of the model $\\mathbf{w}$.", "MSE is more robust to outliers than Mean Absolute Error (MAE).", "For any ML task you are trying to solve, minimizing MSE will provably yield the best model." ]
MSE is not necessarily convex with respect to the weights of the model $\mathbf{w}$.
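A concrete check: with a nonlinear model such as $f_{\mathbf{w}}(x) = \sin(wx)$ (an illustrative choice, not from the question), the MSE as a function of $w$ violates the convexity inequality.

```python
import math

def f(w, x):            # a nonlinear model, e.g. one "neuron" with sin activation
    return math.sin(w * x)

def mse(w):             # single data point (x, y) = (1, 0)
    return (f(w, 1.0) - 0.0) ** 2

w1, w2 = 0.0, math.pi
mid = mse((w1 + w2) / 2)        # = sin(pi/2)^2 = 1
avg = (mse(w1) + mse(w2)) / 2   # ~ 0
assert mid > avg                # convexity in w would require mid <= avg
```

For a linear model $f_{\mathbf{w}}(\mathbf{x}) = \mathbf{w}^\top\mathbf{x}$ the MSE *is* convex in $\mathbf{w}$; the statement is about general models.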
4153
How many times is "call compute" printed when running the following code? def compute(n: Int) = \t printf("call compute") \t n + 1 LazyList.from(0).drop(2).take(3).map(compute)
[ "Lazy Lists CS-214 Software Construction Collections and Combinatorial Search We’ve seen a number of immutable collections that provide powerful operations, in particular for combinatorial search. For instance, to find the second prime number between 1000 and 10000: (1000 to 10000).filter(isPrime)(1) This is much s...
[ "0", "1", "2", "3", "5" ]
0
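The same laziness can be sketched in Python: `map` over an iterator, like `map` on Scala's `LazyList`, does not call the function until the pipeline is actually forced, and the Scala snippet never forces it.

```python
import itertools

calls = []

def compute(n):
    calls.append(n)
    return n + 1

# drop(2).take(3).map(compute), built lazily
pipeline = map(compute, itertools.islice(itertools.count(0), 2, 5))
assert calls == []              # built, but not evaluated: zero calls so far

forced = list(pipeline)         # forcing the pipeline triggers the calls
assert forced == [3, 4, 5]
assert calls == [2, 3, 4]
```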
4155
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int Wha...
[ "_{i}:i\\in I\\right\\}\\\\[0.3ex]&&i&&\\;\\mapsto \\;&L_{i}$end{alignedat}}} which may be summarized by writing L ∙ = ( L i ) i ∈ I. {\\displaystyle L_{\\bullet }=\\left(L_{i}\\right)_{i\\in I}.} Any given indexed family of sets L ∙ = ( L i ) i ∈ I {\\displaystyle L_{\\bullet }=\\left(L_{i}\\right)_{i\\in I}} (whi...
[ "(x: Int) => m(x) > 0", "x => m(x) > 0", "(x: Char) => m(x)", "(x: Char) => m(x) == 0", "m(x) > 0", "m(x)" ]
x => m(x) > 0
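The idea transfers directly to Python lambdas: a multiset is a function from element to count, and the corresponding set is the predicate "appears at least once", mirroring `x => m(x) > 0` (the sample `counts` are illustrative).

```python
counts = {'a': 2, 'b': 1}
m = lambda c: counts.get(c, 0)   # multiset as a function Char -> Int
s = lambda c: m(c) > 0           # the set of elements it contains

assert s('a') and s('b')
assert not s('z')
```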
4160
Consider an array $A[1,\ldots, n]$ consisting of the $n$ distinct numbers $1,2, \ldots, n$. We are further guaranteed that $A$ is almost sorted in the following sense: $A[i] \neq i$ for at most $\sqrt{n}$ values of $i$. What are tight asymptotic worst-case running times for Insertion Sort and Merge Sort on such instan...
[ "= I Θ(1) if n Æ c, aT(n/b) + D(n) + C(n) otherwise. Lecture 2, 26.09.2022 Analysis of Merge Sort Divide: takes constant time, i.e., D(n) = Θ(1) Conquer: recursively solve two subproblems, each of size n/2 ∆2T(n/2). Combine: Merge on an n-element subarray takes Θ(n) time ∆C(n) = Θ(n). Recurrence for merge sort runn...
[ "It is $\\Theta(n + \\sqrt{n}\\log n)$ for Merge Sort and $\\Theta(n)$ for Insertion Sort.", "It is $\\Theta(n \\log n)$ for Merge Sort and $\\Theta(n^2)$ for Insertion Sort.", "It is $\\Theta(n + \\sqrt{n}\\log n)$ for Merge Sort and $\\Theta(n^{3/2})$ for Insertion Sort.", "It is $\\Theta(n + \\sqrt{n}\\log...
It is $\Theta(n \log n)$ for Merge Sort and $\Theta(n^{3/2})$ for Insertion Sort.
4161
Is “type-directed programming” a language mechanism that infers types from values?
[ "In object-oriented programming, a programming language is said to have first-class messages or dynamic messages if in a method call not only the receiving object and parameter list can be varied dynamically (i.e. bound to a variable or computed as an expression) but also the specific method invoked. Typed object-o...
[ "Yes", "No" ]
No
4190
Do the functions first and second return the same output for every possible input? def first(x: List[Int]): Int = x.head + first(x.tail) def second(x: List[Int]): Int = x.foldLeft(0)(_ + _)
[ "functions may make code clearer and shorter. Applying a Function to Elements of a List A common operation is to transform each element of a list and then return the list of results. For example, to multiply each element of a list by the same factor, you could write: def scaleList(xs: List[Double], factor: Double):...
[ "Yes", "No" ]
No
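The difference is that `first` has no base case for the empty list, so it always recurses past the end and crashes, while `second` folds to the sum. A Python transliteration of both:

```python
from functools import reduce

def first(xs):
    # no base case for the empty list, like the Scala version: always recurses
    return xs[0] + first(xs[1:])

def second(xs):
    return reduce(lambda acc, v: acc + v, xs, 0)

assert second([1, 2, 3]) == 6
try:
    first([1, 2, 3])
    crashed = False
except (IndexError, RecursionError):
    crashed = True
assert crashed
```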
4191
Given the following data structure: enum IntSet: \t case Empty \t case NonEmpty(x: Int, l: IntSet, r: IntSet) And the following lemmas, holding for all x: Int, xs: List[Int], ys: List[Int], l: IntSet and r: IntSet: (SizeNil) nil.size === 0 (SizeCons) (x :: xs).size === xs.size + 1 (ConcatSize) (xs ++ ys).size === xs.si...
[ "right IntSet extends IntSet extension s IntSet def size Int s match case Empty case Node l r l size r size val t Node Node Empty Empty Node Empty Empty mkTree What is the running time of this function a a function of n def mkTree n Int IntSet if n then Empty else val t IntSet mkTree n Node t n t O n Given val t mk...
[ "SizeNil, ToListEmpty, TreeSizeEmpty", "ToListEmpty, TreeSizeEmpty, SizeNil", "SizeNil, TreeSizeEmpty, ToListEmpty", "TreeSizeEmpty, SizeNil, TreeSizeEmpty", "ToListEmpty, SizeNil, TreeSizeEmpty", "TreeSizeEmpty, ToListEmpty, SizeNil" ]
ToListEmpty, SizeNil, TreeSizeEmpty
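A sketch of the rewrite chain (assuming the goal is to show `Empty.toList.size === Empty.treeSize`, with `TreeSizeEmpty` stating `Empty.treeSize === 0`):

```
Empty.toList.size
  === nil.size          // by ToListEmpty
  === 0                 // by SizeNil
  === Empty.treeSize    // by TreeSizeEmpty (read right-to-left)
```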
4200
Let $E$ be a finite ground set and let $\mathcal{I}$ be a family of ground sets. Which of the following definitions of $\mathcal{I}$ guarantees that $M = (E, \mathcal{I})$ is a matroid? \begin{enumerate} \item $E$ is the edges of an undirected bipartite graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is an acyclic...
[ "p Intuitively B represents the moment right before GREEDY wa forced to make a sub obtimal decision and A represents the good decision taken by another algorithm Our goal is to show the existence of a e A B such that B e is acyclic We call this the key property This directly come from the fact that an acyclic graph...
[ "(a), (c), (f)", "(a), (b), (c), (d), (f)", "(a), (b), (c), (f)", "(a), (b), (e)", "(a), (c), (d), (f)", "(a), (b), (c), (d), (e)", "(a), (c), (d), (e)", "(a), (f)", "(a), (b), (c), (e)", "(a), (b), (f)", "(a), (c), (e)", "(a), (e)" ]
(a), (b), (f)
4209
Church booleans are a representation of booleans in the lambda calculus. The Church encoding of true and false are functions of two parameters: Church encoding of tru: t => f => t Church encoding of fls: t => f => f What should replace ??? so that the following function computes not(b and c)? b => c => b ??? (not b)
[ "λz. s ((λs. λz. s ((λs. λz. s ((λs. λz. s ((λs. λz. s ((λs. λz. z) s z)) s z)) s z)) s z)) s z)) s z)) Ugh! Displaying numbers If we enrich the pure lambda-calculus with “regular numbers,” we can display church numerals by converting them to regular numbers: realnat = λn. n (λm. succ m) 0 Now: realnat (times c2 c2...
[ "(not b)", "(not c)", "tru", "fls" ]
(not c)
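Church booleans work just as well as Python lambdas, which lets us verify `nand(b)(c) = b (not c) (not b)` against the ordinary booleans (`not_`, `nand`, `to_bool` are helper names introduced here for the sketch):

```python
tru = lambda t: lambda f: t
fls = lambda t: lambda f: f
not_ = lambda b: lambda t: lambda f: b(f)(t)
nand = lambda b: lambda c: b(not_(c))(not_(b))   # ??? = (not c)
to_bool = lambda b: b(True)(False)

for x in (tru, fls):
    for y in (tru, fls):
        assert to_bool(nand(x)(y)) == (not (to_bool(x) and to_bool(y)))
```

If `b` is true the whole expression reduces to its first argument `not c`; if `b` is false it reduces to `not b`, which is true, matching `not(b and c)` in both cases.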
4219
To which expression is the following for-loop translated? for x <- xs if x > 5; y <- ys yield x + y
[ "r(3^{3})=762,\\quad S_{4}=r(3^{4})=925,} yielding the system [ 732 637 637 762 ] [ Λ 2 Λ 1 ] = [ − 762 − 925 ] = [ 167 004 ]. {\\displaystyle {\\begin{bmatrix}732&637\\\\637&762\\end{bmatrix}}{\\begin{bmatrix}\\Lambda _{2}\\\\\\$Lambda _{1}\\end{bmatrix}}={\\begin{bmatrix}-762\\\\-925\\end{bmatrix}}={\\begin{bmatr...
[ "xs.flatMap(x => ys.map(y => x + y)).withFilter(x => x > 5)", "xs.withFilter(x => x > 5).map(x => ys.flatMap(y => x + y))", "xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y))", "xs.map(x => ys.flatMap(y => x + y)).withFilter(x => x > 5)" ]
xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y))
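The key point is that the guard runs before the inner generator: `xs` is filtered first, and each surviving `x` is flat-mapped over `ys`. A Python comprehension with illustrative data behaves the same way:

```python
xs, ys = [3, 6, 8], [10, 20]

direct = [x + y for x in xs if x > 5 for y in ys]
# filter first, then flat-map, mirroring xs.withFilter(_ > 5).flatMap(...)
desugared = [x + y for x in filter(lambda x: x > 5, xs) for y in ys]

assert direct == desugared == [16, 26, 18, 28]
```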
4920
Consider an IR system using a Vector Space model with Okapi BM25 as the weighting scheme (with \(k=1.5\) and \(b=0.75\)) and operating on a document collection that contains:a document \(d_1\), andand a document \(d_3\) corresponding to the concatenation of 3 copies of \(d_1\).Indicate which of the following statements...
[ "++i) out[i] = f(in[i]); for (size_t i = 0; i < n; ++i) out[i] = out[i-1] + in[i]; 1. Insufficient information / too complex (function calls, etc.) 2. Inter-loop dependency CS-328 RGL Realistic Graphics Lab From scalar to vectorized code 45 loop_start: movss xmm1, dword ptr [rsi] mulss xmm1, xmm1, dword ptr [rdx] a...
[ "The cosine similarity between \\(\\langle d_3\\rangle\\) and \\(\\langle d_1\\rangle\\) is equal to 1.", "Each component of \\(\\langle d_3\\rangle\\) is strictly larger than the corresponding one in \\(\\langle d_1\\rangle\\).", "Each component of \\(\\langle d_3\\rangle\\) is strictly smaller than the corres...
['Each component of \\(\\langle d_3\\rangle\\) is strictly larger than the corresponding one in \\(\\langle d_1\\rangle\\).', 'Indexing terms with small term frequency are favored in \\(\\langle d_3\\rangle\\) (w.r.t. \\(\\langle d_1\\rangle\\)).']
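The effect comes from BM25's term-frequency saturation. A sketch under illustrative assumptions (collection is just \(\{d_1, d_3\}\), \(d_1\) has length \(L = 100\), \(d_3\) has length \(3L\), so avgdl \(= 2L\); IDF is identical for both documents and cancels when comparing components):

```python
k, b = 1.5, 0.75
L = 100.0
avgdl = (L + 3 * L) / 2

def w(tf, dl):
    # BM25 term-frequency factor
    return tf * (k + 1) / (tf + k * (1 - b + b * dl / avgdl))

# concatenating 3 copies triples both tf and document length
ratios = [w(3 * tf, 3 * L) / w(tf, L) for tf in (1, 2, 5, 10)]
assert all(r > 1 for r in ratios)                       # each component larger in d3
assert all(x > y for x, y in zip(ratios, ratios[1:]))   # low-tf terms gain the most
```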
4924
If A={a} and B={b}, select all strings that belong to (A ⊗ B)+. A penalty will be applied for any wrong answers selected.
[ "penalty © Y. Bellouard, EPFL. (2022) / Cours ‘Manufacturing Technologies’ / Micro-301 72 Basic handling index (AH) Component handling characteristic Index (Ah) One hand only 1 Very small aids / tools 1.5 Large and/or heavy (two hands/tools) 1.5 Very large and/or very heavy (two people/hoist) 3 © Y. Bellouard, EPFL...
[ "(aaa,bbb)", "(a,b)", "(aaaaa,bbbb)", "(a,bb)", "(aaa,bb)", "(aa,bbb)" ]
['(aaa,bbb)', '(a,b)']
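A small membership check, assuming ⊗ pairs the two languages componentwise so that \((A \otimes B)^+ = \{(a^n, b^n) : n \geq 1\}\) (consistent with the accepted answers):

```python
# enumerate members of (A ⊗ B)+ up to n = 5
lang = {('a' * n, 'b' * n) for n in range(1, 6)}

assert ('aaa', 'bbb') in lang and ('a', 'b') in lang
assert ('a', 'bb') not in lang
assert ('aaa', 'bb') not in lang and ('aa', 'bbb') not in lang
```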