| source_dataset (stringclasses, 1 value) | question (string, 6–1.87k chars) | choices (string, 20–1.02k chars) | answer (stringclasses, 4 values) | rationale (float64) | documents (string, 1.01k–5.9k chars) |
|---|---|---|---|---|---|
epfl-collab | In terms of the \textbf{bias-variance} decomposition, a 1-nearest neighbor classifier has \rule{2cm}{0.15mm} than a 3-nearest neighbor classifier. | ['lower bias', 'higher variance', 'lower variance', 'higher bias'] | B | null | Document 1:::
K-nearest neighbors algorithm
This value is the average of the values of k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor.k-NN is a type of classification where the function is only approximated locally and all computation is deferred until fun... |
epfl-collab | In deep learning, which of these are hyper-parameters? | ['The learning rate', 'The weights $\\mathbf{W}^{[l]}$ and biases $\\mathbf{b}^{[l]}$', 'The number of layers', 'The type of weight initialization'] | A | null | Document 1:::
Hyperparameter (machine learning)
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training. Hyperparameters can be classified as model hyperparameters, that cannot... |
epfl-collab | What is the mean squared error of $f$ for a sample, where $\textbf{x}$ is an input, $y$ a target and $f(\textbf{x},W)$ the mapping function?
(One answer) | [' $P(y=i |\\textbf{x}) = \\frac{e^{\\textbf{f}_i(\\textbf{x},W)}}{\\sum_j e^{\\textbf{f}_j(\\textbf{x},W)}}$ ', ' $||y - f(\\textbf{x},W)||^2 $ ', ' $-\\log(P(y=i | \\textbf{x})) = -\\log(\\frac{e^{\\textbf{f}_i(\\textbf{x},W)}}{\\sum_j e^{\\textbf{f}_j(\\textbf{x},W)}})$ ', ' $||y - f(\\textbf{x},W)|| $'] | B | null | Document 1:::
Minimum mean-square error
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), which is a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term M... |
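A tiny worked check of the squared-error expression from the intended answer; a minimal sketch with made-up values (the vectors below are hypothetical, not from the dataset):

```python
import numpy as np

y = np.array([1.0, 0.0, 2.0])    # hypothetical target for one sample
f_x = np.array([0.5, 0.5, 1.5])  # hypothetical model output f(x, W)

mse = np.sum((y - f_x) ** 2)     # ||y - f(x, W)||^2, the intended answer
print(mse)                       # 0.75
```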
epfl-collab | When using linear regression, how do you help prevent numerical instabilities? (One or multiple answers) | ['reduce learning rate', 'remove degenerate features', 'add a regularization term', 'add more features'] | C | null | Document 1:::
Linear Regression
In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than... |
epfl-collab | You write a Python code to optimize the weights of your linear regression with 10 features \textbf{using gradient descent} for 500 epochs. What is the minimum number of for-loops you need to perform your optimization? | ['No for-loop is really necessary. Everything can be vectorized', 'Only one for-loop to iterate over the weights.', 'Two for-loops, one to iterate over the weights and the other to iterate over the epochs', 'Only one for-loop to iterate over the epochs.'] | D | null | Document 1:::
Gradient descent
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the f... |
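As an illustration of the intended answer above, a minimal sketch (assuming a standard mean-squared-error objective; the data and variable names are hypothetical) of a fully vectorized gradient-descent update that loops only over epochs:

```python
import numpy as np

# Hypothetical data: N samples, 10 features, real-valued targets.
N, D, epochs, lr = 1000, 10, 500, 0.01
X = np.random.randn(N, D)
y = np.random.randn(N)
w = np.zeros(D)

for epoch in range(epochs):             # the only for-loop that is needed
    grad = 2.0 / N * X.T @ (X @ w - y)  # full-batch MSE gradient, one vectorized step
    w -= lr * grad
```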
epfl-collab | Which loss function(s) should you use? (One or multiple answers) | ['hinge loss', 'L1 loss', 'mean square error (MSE) loss', 'cross entropy loss'] | D | null | Document 1:::
Loss functions for classification
These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing ϕ {\displaystyle \phi } . Selection of a loss function within this framework impacts the optimal f ϕ ∗ {\displaystyle f_{\phi }^{*}} which minimizes the expected risk.
... |
epfl-collab | Fill the missing line of code: (one answer)\\
\hspace*{.5cm} \#code missing\\
\hspace*{.5cm} np.mean(np.random.randn(1000))\\ | ['import numpy', 'import np.mean\\\\\n\t\timport np.random', 'import numpy as np', 'import np'] | C | null | Document 1:::
Marsaglia polar method
The Marsaglia polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables.Standard normal random variables are frequently used in computer science, computational statistics, and in particular, in applications of the Mo... |
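For reference, the completed snippet under the intended answer; a minimal, runnable sketch:

```python
import numpy as np  # the missing line (answer C)

# Mean of 1000 draws from a standard normal; close to 0 for a sample this size.
print(np.mean(np.random.randn(1000)))
```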
epfl-collab | What is the output of the following block of Python code? (one answer) \\
\verb|my_string = 'computational'| \\
\verb|print(my_string[1])|\\
\verb|print(my_string[3:5])|
\vspace{0.25cm} | ['o\\\\put', 'o\\\\pu', 'c\\\\mp', 'c\\\\mpu'] | B | null | Document 1:::
String (computer science)
String may also denote more general arrays or other sequence (or list) data types and structures. Depending on the programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined... |
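A quick check of the indexing question above, using standard Python string semantics:

```python
my_string = 'computational'
print(my_string[1])    # o   (indexing is 0-based)
print(my_string[3:5])  # pu  (the slice end index is exclusive)
```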
epfl-collab | In Machine Learning, we want to learn the \textbf{parameters W} for the mapping function f: $y=f(x,W) +\epsilon$ where x is the input, y the output, and $\epsilon$ the error term.\\
(One or multiple answers) | ['When f: $R \\rightarrow \\{1,..N\\}$, it is a classification task', 'When f: $R^M \\rightarrow R$, it is a classification task ', 'When f: $R^M \\rightarrow R$, it is a regression task', 'When f: $R^M \\rightarrow \\{1,..N\\}$, it is a classification task'] | A | null | Document 1:::
Learning with errors
There exists a certain unknown linear function f: Z q n → Z q {\displaystyle f:\mathbb {Z} _{q}^{n}\rightarrow \mathbb {Z} _{q}} , and the input to the LWE problem is a sample of pairs ( x , y ) {\displaystyle (\mathbf {x} ,y)} , where x ∈ Z q n {\displaystyle \mathbf {x} \in \mathbb ... |
epfl-collab | Principal Component Analysis (PCA) is a technique for... | ['variance normalization', 'data augmentation', 'feature extraction', 'dimensionality reduction'] | D | null | Document 1:::
Principal components
Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidi... |
epfl-collab | You are using a 3-layer fully-connected neural net with \textbf{ReLU activations}. Your input data has components in [0, 1]. \textbf{You initialize all your weights to -10}, and set all the bias terms to 0. You start optimizing using SGD. What will likely happen? | ['The gradient is 0 so nothing happens', 'Everything is fine', "The gradient is very large so the model can't converge", 'Training is fine, but our neural net does only as well as a linear model'] | A | null | Document 1:::
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, sin... |
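A minimal numpy sketch (shapes are hypothetical) of why the gradient dies in the scenario above: with inputs in [0, 1] and every weight at -10, each pre-activation is non-positive, so every ReLU output, and hence every gradient flowing back through it, is zero.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

x = np.random.rand(5)         # input components lie in [0, 1]
W = -10.0 * np.ones((5, 5))   # every weight initialized to -10, biases 0

h1 = relu(W @ x)              # all pre-activations are <= 0, so h1 is all zeros
h2 = relu(W @ h1)             # and zeros propagate through every later layer
print(h1, h2)                 # [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.]
```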
epfl-collab | You are using a 3-layer fully-connected neural net, and you are using \textbf{$f(x) = 2x$ as your activation function}. Your input data has components in [0, 1]. \textbf{You initialize your weights using Kaiming (He) initialization}, and set all the bias terms to 0. You start optimizing using SGD. What will likely happen? | ['The gradient is 0 so nothing happens', 'Training is fine, but our neural net does only as well as a linear model', "The gradient is very large so the model can't converge", 'Everything is fine'] | B | null | Document 1:::
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, sin... |
epfl-collab | What is a good representation for scores when classifying these three target classes: Car, Bike and Bus, in the context of logistic regression? (One or multiple answers) | ['{Car: $1$,} {Bike: $2$,} {Bus: $3$}', '{Car: $(0,1)$,} {Bike: $(1,0)$,} {Bus: $(1,1)$}', '{Car: $(0,1)$,} {Bike: $(1,0)$,} {Bus: $(0.5,0.5)$}', '{Car: $(0,1,0)$,} {Bike: $(1,0,0)$,} {Bus: $(0,0,1)$}'] | D | null | Document 1:::
Multiclass classifier
In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). While many classifica... |
epfl-collab | Decision trees... | ['... have several different roots.', '... need water and sunlight to grow.', '... can be used for both classification and regression.', '... can be easily explained.'] | C | null | Document 1:::
Classification and regression tree
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where... |
epfl-collab | Which method can be used for dimensionality reduction ? | ['T-distributed Stochastic Neighbor Embedding (t-SNE)', 'Autoencoders', 'SVM', 'PCA'] | D | null | Document 1:::
Dimensionality reduction
Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. ... |
epfl-collab | Mean Square Error loss: | ['Maximizing the accuracy', 'Minimizing the distance between the predicted point and the true point', 'Minimizing the score of false classes when they are close, or bigger than, the score of the true class', 'Maximizing the probability of the correct class'] | B | null | Document 1:::
Minimum mean-square error
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), which is a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term M... |
epfl-collab | You need to debug your Stochastic Gradient Descent update for a classification of three bridge types.
Manually compute the model output for the feature vector $x=(1, 0, 0, 0, 0)$ and $W$ contains only zeros. The model is logistic regression, \textit{i.e.}, $\textrm{softmax}(Wx)$.
Remember:
\begin{equation}
\te... | ['$(0, 0, 0, 0, 0)$', '$(\\frac{1}{5}, \\frac{1}{5}, \\frac{1}{5}, \\frac{1}{5}, \\frac{1}{5})$', '$(\\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3})$', '$(0, 0, 0)$'] | C | null | Document 1:::
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, sin... |
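A quick numerical check of the softmax question above; the shapes follow the three-class, five-feature setup in the question:

```python
import numpy as np

W = np.zeros((3, 5))               # 3 bridge-type classes, 5 features
x = np.array([1, 0, 0, 0, 0])

z = W @ x                          # (0, 0, 0)
probs = np.exp(z) / np.exp(z).sum()
print(probs)                       # [0.333... 0.333... 0.333...]
```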
epfl-collab | Consider the following PyTorch code:
class ThreeLayerNet(nn.Module):
    def __init__():
        super().__init__()
    def forward(x):
        x = nn.Linear(100, 10)(x)
        x = nn.ReLU()(x)
        x = nn.Linear(10, 200)(x)
        x = nn.ReLU()(x)
        x = nn.Line... |
Tensor (machine learning)
Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product. The computation of gradients, an important aspect of the backpropagation algorithm, can be performed using PyTorch and TensorFlow.Computations are often performed on graphics ... |
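For contrast with the snippet in the question, which re-creates freshly initialized layers on every forward pass so the optimizer never updates anything that is reused, a conventional sketch is shown below. The output size of the last layer is an assumption, since the original snippet is truncated.

```python
import torch.nn as nn

class ThreeLayerNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Registering the layers here (instead of re-creating them in forward)
        # lets the optimizer see and update their parameters.
        self.net = nn.Sequential(
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, 200), nn.ReLU(),
            nn.Linear(200, 1),   # output size is an assumption; the original is truncated
        )

    def forward(self, x):
        return self.net(x)
```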
epfl-collab | You are using a 3-layer fully-connected neural net with \textbf{ReLU activations}. Your input data has components in [0, 1]. \textbf{You initialize your weights by sampling from $\mathcal{N}(-10, 0.1)$ (Gaussians of mean -10 and variance 0.1)}, and set all the bias terms to 0. You start optimizing using SGD. What will ... | ['Training is fine, but our neural net does only as well as a linear model', 'Everything is fine', 'The gradient is 0 so nothing happens', "The gradient is very large so the model can't converge"] | C | null | Document 1:::
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, sin... |
epfl-collab | We saw in class that we can quickly decrease the spatial size of the representation using pooling layers. Is there another way to do this without pooling? | ['Yes, by increasing the amount of padding.', 'Yes, by increasing the stride.', 'No, pooling is necessary.', 'Yes, by increasing the number of filters.'] | B | null | Document 1:::
Spatial embedding
Spatial embedding is one of feature learning techniques used in spatial analysis where points, lines, polygons or other spatial data types. representing geographic locations are mapped to vectors of real numbers. Conceptually it involves a mathematical embedding from a space with many di... |
epfl-collab | The \textbf{parameters} (weights \textbf{W}) are learned with ...
(One answer) | [' test ', ' validation ', ' training ', ' all the data together '] | C | null | Document 1:::
Trainable parameter
In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets. The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons ... |
epfl-collab | The \textbf{hyperparameters} are learned with ...
(One answer) | [' test ', ' all the data together ', ' training ', ' validation '] | D | null | Document 1:::
Hyperparameter (machine learning)
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training. Hyperparameters can be classified as model hyperparameters, that cannot... |
epfl-collab | We report the final performance (e.g., accuracy) on the ...
(One answer) | [' test ', ' validation ', ' training ', ' all the data together '] | A | null | Document 1:::
Stats Perform
Stats Perform (formerly STATS, LLC and STATS, Inc.) is a sports data and analytics company formed through the combination of Stats and Perform.The company is involved in sports data collection and predictive analysis for use across various sports sectors including professional team performan... |
epfl-collab | We consider a classification problem on linearly separable data. Our dataset had an outlier---a point that is very far from the other datapoints in distance (and also far from margins in SVM but still correctly classified by the SVM classifier).
We trained the SVM, logistic regression and 1-nearest-... | ['$y_n \\ww^\\top x_n \\geq 1 ~ \\forall n \\in \\{1,\\cdots,N\\}$', '$\\ww^\\top x_n \\geq 1 ~ \\forall n \\in\\{1,\\cdots,N\\}$', '$y_n + \\ww^\\top x_n \\geq 1 ~ \\forall n \\in \\{1,\\cdots,N\\}$', '$\\frac{y_n}{\\ww^\\top x_n }\\geq 1 ~\\forall n \\in \\{1,\\cdots,N\\}$'] | A | null | Document 1:::
Margin classifier
In machine learning, a margin classifier is a classifier which is able to give an associated distance from the decision boundary for each example. For instance, if a linear classifier (e.g. perceptron or linear discriminant analysis) is used, the distance (typically euclidean distance, t... |
epfl-collab | Which of the following statements is correct? | ['When applying stochastic gradient descent on the objective function $f(\\boldsymbol{w}):=\\sum_{n=1}^{30}\\left\\|\\boldsymbol{w}-\\boldsymbol{x}_{n}\\right\\|^{2}$ where $\\boldsymbol{x}_{n}$ are the datapoints, a stochastic gradient step is roughly $30 \\times$ faster than a full gradient step.', 'When applying sto... | A | null | Document 1:::
Statement (logic)
In logic and semantics, the term statement is variously understood to mean either: a meaningful declarative sentence that is true or false, or a proposition. Which is the assertion that is made by (i.e., the meaning of) a true or false declarative sentence.In the latter case, a statement... |
epfl-collab | Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ? | ['A subgradient exists but is not unique.', 'A subgradient exists and is unique.', 'A subgradient does not exist even though $f(x)$ is differentiable at $x=0$.', 'A subgradient does not exist as $f(x)$ is differentiable at $x=0$.'] | C | null | Document 1:::
Subderivative
Rigorously, a subderivative of a convex function f: I → R {\displaystyle f:I\to \mathbb {R} } at a point x 0 {\displaystyle x_{0}} in the open interval I {\displaystyle I} is a real number c {\displaystyle c} such that for all x ∈ I {\displaystyle x\in I} . By the converse of the mean value ... |
epfl-collab | In Text Representation learning, which of the following statements are correct? | ['The skip-gram model for learning original word2vec embeddings does learn a binary classifier for each word.', 'FastText as discussed in the course learns word vectors and sentence representations which are specific to a supervised classification task.', 'Logistic regression used for text classification is faster at t... | A | null | Document 1:::
Sequence labeling
In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a pa... |
epfl-collab | When constructing a word embedding, what is true regarding negative samples? | ['Their frequency is decreased down to its logarithm', 'They are oversampled if less frequent', 'They are words that do not appear as context words', 'They are selected among words which are not stop words'] | B | null | Document 1:::
Precision and recall
For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the class... |
epfl-collab | If the first column of matrix L is (0,1,1,1) and all other entries are 0, then the authority values are | ['(0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))', '(1, 0, 0, 0)', '(1, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3))', '(0, 1, 1, 1)'] | A | null | Document 1:::
Zero matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of m × n {\displaystyle m\times n} matrices, and is denoted by the symbol O {\displaystyle O} or 0 {\displaystyle 0... |
epfl-collab | If the top 100 documents contain 50 relevant documents | ['the precision of the system at 100 is 0.5', 'the precision of the system at 50 is 0.25', 'the recall of the system is 0.5', 'All of the above'] | A | null | Document 1:::
Relevance (information retrieval)
In information science and information retrieval, relevance denotes how well a retrieved document or set of documents meets the information need of the user. Relevance may include concerns such as timeliness, authority or novelty of the result.
Document 2:::
Uncertain inf... |
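A one-line check of the precision claim in the question above; note that recall would additionally require the total number of relevant documents in the whole collection, which the statement does not give:

```python
relevant_in_top_100 = 50
retrieved = 100

precision_at_100 = relevant_in_top_100 / retrieved
print(precision_at_100)  # 0.5
# Recall cannot be computed from the statement alone: it also needs the total
# number of relevant documents in the collection.
```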
epfl-collab | What is WRONG regarding the Transformer model? | ['It uses a self-attention mechanism to compute representations of the input and output.', 'Its complexity is quadratic to the input size.', 'It captures the semantic context of the input.', 'Its computation cannot be parallelized compared to LSTMs and other sequential models.'] | D | null | Document 1:::
Parametric transformer
The Parametric transformer (or paraformer) is a particular type of transformer. It transfers the power from primary to secondary windings not by mutual inductance coupling but by a variation of a parameter in its magnetic circuit. First described by Wanlass, et al., 1968.
Document 2... |
epfl-collab | Which of the following statements about index merging (when constructing inverted files) is correct? | ['While merging two partial indices on disk, the vocabularies are concatenated without sorting', 'While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting', 'The size of the final merged index file is O (n log2 (n) M )), where M is the size of the available memory', 'Inde... | B | null | Document 1:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward ind... |
epfl-collab | Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false? | ['The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot', 'LSI is deterministic (given the dimension), whereas WE is not', 'LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not', 'LSI does not depend on the order of words in the docume... | C | null | Document 1:::
Semantic analysis (machine learning)
A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
Document 2:::
Semant... |
epfl-collab | The number of non-zero entries in a column of a term-document matrix indicates: | ['how often a term of the vocabulary occurs in a document', 'how many terms of the vocabulary a document contains', 'none of the other responses is correct', 'how relevant a term is for a document'] | C | null | Document 1:::
Zero matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of m × n {\displaystyle m\times n} matrices, and is denoted by the symbol O {\displaystyle O} or 0 {\displaystyle 0... |
epfl-collab | Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect | ['LSI is deterministic (given the dimension), whereas WE is not', 'The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot', 'LSI does take into account the frequency of words in the documents, whereas WE does not.', 'LSI does not take into account the order of words in the document, whereas WE... | C | null | Document 1:::
Semantic analysis (machine learning)
A prominent example is PLSI. Latent Dirichlet allocation involves attributing document terms to topics. n-grams and hidden Markov models work by representing the term stream as a Markov chain where each term is derived from the few terms before it.
Document 2:::
Semant... |
epfl-collab | Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is true? | ['For every node P that is a parent of N in the FP tree, confidence(P->N) = 1', 'N co-occurs with its prefixes in every transaction', '{N}’s minimum possible support is equal to the number of paths', 'The item N exists in every candidate set'] | C | null | Document 1:::
Tree (automata theory)
If every node of a tree has finitely many successors, then it is called a finitely, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ T, either t is a leaf or there exists a unique c ∈ N {\displaystyle \mathbb {N} } such that t.c ∈ π... |
epfl-collab | Which of the following statements regarding topic models is false? | ['Topic models map documents to dense vectors', 'LDA assumes that each document is generated from a mixture of topics with a probability distribution', 'In LDA, topics are modeled as distributions over documents', 'Topics can serve as features for document classification'] | C | null | Document 1:::
Boolean model of information retrieval
The (standard) Boolean model of information retrieval (BIR) is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one. It is used by many IR systems to this day. The BIR is based on Boolean logic and classical set theory in... |
epfl-collab | Modularity of a social network always: | ['Decreases when new nodes are added to the social network that form their own communities', 'Increases when an edge is added between two members of the same community', 'Increases with the number of communities', 'Decreases if an edge is removed'] | B | null | Document 1:::
Modularity (networks)
Modularity is a measure of the structure of networks or graphs which measures the strength of division of a network into modules (also called groups, clusters or communities). Networks with high modularity have dense connections between the nodes within modules but sparse connections... |
epfl-collab | Which of the following is wrong regarding Ontologies? | ['We can create more than one ontology that conceptualizes the same real-world entities', 'Ontologies support domain-specific vocabularies', 'Ontologies help in the integration of data expressed in different models', 'Ontologies dictate how semi-structured data are serialized'] | D | null | Document 1:::
Class (knowledge representation)
The first definition of class results in ontologies in which a class is a subclass of collection. The second definition of class results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a c... |
epfl-collab | Which of the following statements is correct concerning the use of Pearson’s Correlation for user-based collaborative filtering? | ['It measures how much a user’s ratings deviate from the average ratings', 'It measures whether different users have similar preferences for the same items', 'It measures how well the recommendations match the user’s preferences', 'It measures whether a user has similar preferences for different items'] | B | null | Document 1:::
Statistical correlation
The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank corre... |
epfl-collab | After the join step, the number of k+1-itemsets | ['is equal to the number of frequent k-itemsets', 'is always higher than the number of frequent k-itemsets', 'is always lower than the number of frequent k-itemsets', 'can be equal, lower or higher than the number of frequent k-itemsets'] | D | null | Document 1:::
Karmarkar–Karp bin packing algorithms
They also devised several other algorithms with slightly different approximation guarantees and run-time bounds. The KK algorithms were considered a breakthrough in the study of bin packing: the previously-known algorithms found multiplicative approximation, where the... |
epfl-collab | Which is true about the use of entropy in decision tree induction? | ['The entropy of the set of class labels of the samples from the training set at the leaf level can be 1', 'The entropy of the set of class labels of the samples from the training set at the leaf level is always 0', 'We split on the attribute that has the highest entropy', 'We split on the attribute that has the lowest... | A | null | Document 1:::
Information gain in decision trees
In information theory and machine learning, information gain is a synonym for Kullback–Leibler divergence; the amount of information gained about a random variable or signal from observing another random variable. However, in the context of decision trees, the term is so... |
epfl-collab | Modularity clustering will end up always with the same community structure? | ['True', 'Only for cliques', 'False', 'Only for connected graphs'] | C | null | Document 1:::
Modularity (networks)
Biological networks, including animal brains, exhibit a high degree of modularity. However, modularity maximization is not statistically consistent, and finds communities in its own null model, i.e. fully random graphs, and therefore it cannot be used to find statistically significan... |
epfl-collab | When searching for an entity 𝑒𝑛𝑒𝑤 that has a given relationship 𝑟 with a given entity 𝑒 | ['We search for pairs (𝑒𝑛𝑒𝑤, 𝑒) that have similar embedding to (𝑒𝑜𝑙𝑑, 𝑒) for 𝑒𝑜𝑙𝑑 which has relationship 𝑟 with 𝑒', 'We search for 𝑒𝑛𝑒𝑤 that have a similar embedding vector to 𝑒', 'We search for 𝑒𝑛𝑒𝑤 that have a similar embedding vector to 𝑒𝑜𝑙𝑑 which has relationship 𝑟 with 𝑒', 'We search... | D | null | Document 1:::
Trigram search
Trigram search is a method of searching for text when the exact syntax or spelling of the target object is not precisely known or when queries may be regular expressions. It finds objects which match the maximum number of three consecutive character strings (i.e. trigrams) in the entered se... |
epfl-collab | Which of the following graph analysis techniques do you believe would be most appropriate to identify communities on a social graph? | ['Random Walks', 'Cliques', 'Association rules', 'Shortest Paths'] | B | null | Document 1:::
Social graph
The social graph is a graph that represents social relations between entities. In short, it is a model or representation of a social network, where the word graph has been taken from graph theory. The social graph has been referred to as "the global mapping of everybody and how they're relate... |
epfl-collab | Which of the following models for generating vector representations for text requires precomputing the frequency of co-occurrence of words from the vocabulary in the document collection? | ['Fasttext', 'LSI', 'CBOW', 'Glove'] | D | null | Document 1:::
Bag-of-words model in computer vision
In computer vision, the bag-of-words model (BoW model) sometimes called bag-of-visual-words model can be applied to image classification or retrieval, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence coun... |
epfl-collab | For which document classifier is the training cost low and inference expensive? | ['for none', 'for kNN', 'for fasttext', 'for NB'] | B | null | Document 1:::
Document AI
Document AI or Document Intelligence is a technology that uses natural language processing (NLP) and machine learning (ML) to train computer models to simulate a human review of documents. NLP enables the computer system to grasp the relations between contents of documents, including the conte... |
epfl-collab | In Ranked Retrieval, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true?
Hint: P@k and R@k are the precision and recall of the result set consisting of the k top-ranked documents. | ['P@k-1=P@k+1', 'R@k-1=R@k+1', 'R@k-1<R@k+1', 'P@k-1>P@k+1'] | C | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction... |
epfl-collab | Regarding the Expectation-Maximization algorithm, which one of the following is false? | ['It distinguishes experts from normal workers', 'The label with the highest probability is assigned as the new label', 'In the E step the labels change, in the M step the weights of the workers change', 'Assigning equal weights to workers initially decreases the convergence time'] | D | null | Document 1:::
EM algorithm
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performi... |
epfl-collab | For an item that has not received any ratings, which method can make a prediction? | ['User-based collaborative RS', 'Content-based RS', 'Item-based collaborative RS', 'None of the above'] | B | null | Document 1:::
Naranjo algorithm
The Naranjo algorithm, Naranjo Scale, or Naranjo Nomogram is a questionnaire designed by Naranjo et al. for determining the likelihood of whether an ADR (adverse drug reaction) is actually due to the drug rather than the result of other factors. Probability is assigned via a score termed... |
epfl-collab | What does the SMART algorithm for query relevance feedback modify? (Slide 11, Week 3) | ['The original document weight vectors', 'The keywords of the original user query', 'The result document weight vectors', 'The original query weight vectors'] | D | null | Document 1:::
Relevance (information retrieval)
In information science and information retrieval, relevance denotes how well a retrieved document or set of documents meets the information need of the user. Relevance may include concerns such as timeliness, authority or novelty of the result.
Document 2:::
Query optimiz... |
epfl-collab | Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is TRUE? | ['N co-occurs with its prefixes in every transaction', 'For every node P that is a parent of N in the FP tree, confidence (P->N) = 1', 'The item N exists in every candidate set', '{N}’s minimum possible support is equal to the number of paths'] | D | null | Document 1:::
Tree (automata theory)
If every node of a tree has finitely many successors, then it is called a finitely, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ T, either t is a leaf or there exists a unique c ∈ N {\displaystyle \mathbb {N} } such that t.c ∈ π... |
epfl-collab | In Ranked Retrieval, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true?Hint: P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents. | ['R@k-1=R@k+1', 'P@k-1>P@k+1', 'R@k-1<R@k+1', 'P@k-1=P@k+1'] | C | null | Document 1:::
Precision and recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction... |
epfl-collab | Suppose that for points p, q, and t in a metric space, the following hold: p is density-reachable from q; t is density-reachable from q; p is density-reachable from t. Which of the following statements is false? | ['p and q are density-connected', 't is a core point', 'p is a border point', 'q is a core point'] | C | null | Document 1:::
Discrete metric space
Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a functionsatisfying the following axioms for all points x , y , z ∈ M {\displaystyle x,y,z\in M} :The distance from a point to itself is zero: (Positivity) The distance between two dist... |
epfl-collab | If for the χ² statistic of a binary feature we obtain P(χ² | DF = 1) < 0.05, this means: | ['That the class label correlates with the feature', 'No conclusion can be drawn', 'That the class label depends on the feature', 'That the class label is independent of the feature'] | C | null | Document 1:::
5 sigma
In the case where X takes random values from a finite data set x1, x2, ..., xN, with each value having the same probability, the standard deviation is or, by using summation notation, If, instead of having equal probabilities, the values have different probabilities, let x1 have probability p1, x2... |
epfl-collab | Which of the following is false regarding K-means and DBSCAN? | ['K-means takes the number of clusters as parameter, while DBSCAN does not take any parameter', 'K-means does not handle outliers, while DBSCAN does', 'Both are unsupervised', 'K-means does many iterations, while DBSCAN does not'] | A | null | Document 1:::
Determining the number of clusters in a data set
Determining the number of clusters in a data set, a quantity often labelled k as in the k-means algorithm, is a frequent problem in data clustering, and is a distinct issue from the process of actually solving the clustering problem. For a certain class of ... |
epfl-collab | Which of the following is correct regarding community detection? | ['High modularity of a community indicates a large difference between the number of edges of the community and the number of edges of a null model', 'High betweenness of an edge indicates that the communities are well connected by that edge', 'The Girvan-Newman algorithm attempts to maximize the overall betweenness mea... | B | null | Document 1:::
Community structure
In the study of complex networks, a network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. In the particular case of non-overlapping communit... |
epfl-collab | When constructing a word embedding, negative samples are: | ['Only words that never appear as context word', 'Word - context word combinations that are not occurring in the document collection', 'All less frequent words that do not occur in the context of a given word', 'Context words that are not part of the vocabulary of the document collection'] | B | null | Document 1:::
Precision and recall
For classification tasks, the terms true positives, true negatives, false positives, and false negatives (see Type I and type II errors for definitions) compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the class... |
epfl-collab | Which of the following statements about index merging (when constructing inverted files) is correct? | ['Index merging is used when the vocabulary does no longer fit into the main memory', 'The size of the final merged index file is O(nlog2(n)*M), where M is the size of the available memory', 'While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting', 'While merging two pa... | C | null | Document 1:::
Inverted index
In computer science, an inverted index (also referred to as a postings list, postings file, or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward ind... |
epfl-collab | For his awesome research, Tugrulcan is going to use the PageRank with teleportation and HITS algorithm, not on a network of webpages but on the retweet network of Twitter! The retweet network is a directed graph, where nodes are users and an edge going out from a user A and to a user B means that "User A retweeted User... | ['Its authority value will be equal to the hub value of a user who never retweets other users', 'It will have a non-zero hub value', 'It will have an authority value of zero', 'It will have a PageRank of zero'] | D | null | Document 1:::
PageRank
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number an... |
epfl-collab | Let $f_{\mathrm{MLP}}: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be an $L$-hidden layer multi-layer perceptron (MLP) such that $$ f_{\mathrm{MLP}}(\mathbf{x})=\mathbf{w}^{\top} \sigma\left(\mathbf{W}_{L} \sigma\left(\mathbf{W}_{L-1} \ldots \sigma\left(\mathbf{W}_{1} \mathbf{x}\right)\right)\right) $$ with $\mathbf{w} \in ... | ['$M !$', '$2^M$', '$1$', '$M! 2^M$'] | D | null | Document 1:::
Multilayer perceptron
A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network, consisting of fully connected neurons with a nonlinear kind of activation function, organized in at least three layers, notable for being able to distinguish data that is not linearly sepa... |
epfl-collab | Consider a linear regression problem with $N$ samples $\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$, where each input $\boldsymbol{x}_{n}$ is a $D$-dimensional vector $\{-1,+1\}^{D}$, and all output values are $y_{i} \in \mathbb{R}$. Which of the following statements is correct? | ['A linear regressor works very well if the data is linearly separable.', 'None of the above.', 'Linear regression always "works" very well for $N \\ll D$', 'Linear regression always "works" very well for $D \\ll N$'] | B | null | Document 1:::
Constrained least squares
Stochastic (linearly) constrained least squares: the elements of β {\displaystyle {\boldsymbol {\beta }}} must satisfy L β = d + ν {\displaystyle \mathbf {L} {\boldsymbol {\beta }}=\mathbf {d} +\mathbf {\nu } } , where ν {\displaystyle \mathbf {\nu } } is a vector of random varia... |
epfl-collab | Let $\mathcal{R}_{p}(f, \varepsilon)$ be the $\ell_{p}$ adversarial risk of a classifier $f: \mathbb{R}^{d} \rightarrow\{ \pm 1\}$, i.e., $$ \mathcal{R}_{p}(f, \varepsilon)=\mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}}\left[\max _{\tilde{\mathbf{x}}:\|\mathbf{x}-\tilde{\mathbf{x}}\|_{p} \leq \varepsilon} \mathbb{1}_{\{... | ['$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \\mathcal{R}_{2}(f, \\varepsilon / d)$', '$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \\mathcal{R}_{2}(f, \\sqrt{d} \\varepsilon)$', '$\\mathcal{R}_{\\infty}(f, \\varepsilon) \\leq \\mathcal{R}_{1}(f, \\varepsilon)$', '$\\mathcal{R}_{2}(f, \\varepsilon) \\leq \\mathca... | B | null | Document 1:::
Loss functions for classification
In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically, I = ∫ X × Y V ( f ( x → ) , y ) p ( x → , y ) d x → d y = ∫ X ∫ Y ϕ ( y f ( x → ) ) p ( y ∣ x → ) p ( x → ) d y d x → = ∫ X ... |
epfl-collab | We are given a data set $S=\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}$ for a binary classification task where $\boldsymbol{x}_{n}$ in $\mathbb{R}^{D}$. We want to use a nearestneighbor classifier. In which of the following situations do we have a reasonable chance of success with this approach? [Ignore the i... | ['$ n=D^2, D \\rightarrow \\infty$', '$ n$ is fixed, $D \\rightarrow \\infty$', '$ n \\rightarrow \\infty, D \\ll \\ln (n)$', '$n \\rightarrow \\infty, D$ is fixed'] | D | null | Document 1:::
Nearest centroid classifier
In machine learning, a nearest centroid classifier or nearest prototype classifier is a classification model that assigns to observations the label of the class of training samples whose mean (centroid) is closest to the observation. When applied to text classification using wo... |
epfl-collab | How does the bias-variance decomposition of a ridge regression estimator compare with that of the ordinary least-squares estimator in general? | ['Ridge has a smaller bias, and smaller variance.', 'Ridge has a larger bias, and larger variance.', 'Ridge has a larger bias, and smaller variance.', 'Ridge has a smaller bias, and larger variance.'] | C | null | Document 1:::
L2 regularization
It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (... |
epfl-collab | You are given two distributions over $\mathbb{R}$ : Uniform on the interval $[a, b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$ p_{\mathcal{U}}(y \mid a, b):=\left\{\begin{array}{ll} \frac{1}{b-a}, & \text { for } a \leq y \leq b, \\ 0 & \text { otherwi... | ['Both of them.', 'Only Gaussian.', 'None of them.', 'Only Uniform.'] | B | null | Document 1:::
Natural parameters
In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, including the enabling of the user to calculate expectations, covariances using differentia... |
epfl-collab | Church booleans are a representation of booleans in the lambda calculus. The Church encoding of true and false are functions of two parameters: Church encoding of tru: t => f => t Church encoding of fls: t => f => f What should replace ??? so that the following function computes not(b and c)? b => c => b ??? (not b) | ['(not b)', 'fls', '(not c)', 'tru'] | C | null | Document 1:::
Simply typed λ-calculus
The simply typed lambda calculus ( λ → {\displaystyle \lambda ^{\to }} ), a form of type theory, is a typed interpretation of the lambda calculus with only one type constructor ( → {\displaystyle \to } ) that builds function types. It is the canonical and simplest example of a type... |
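A small Python transcription of the Church-boolean question above (the helper names are illustrative, not from the original), confirming that replacing ??? with (not c) yields not(b and c):

```python
# Church booleans as Python lambdas: a boolean selects one of two arguments.
tru = lambda t: lambda f: t
fls = lambda t: lambda f: f
not_ = lambda b: lambda t: lambda f: b(f)(t)

# not(b and c) == b (not c) (not b): if b is true the result is (not c),
# otherwise (b and c) is false, so the result is (not b) == tru.
nand = lambda b: lambda c: b(not_(c))(not_(b))

to_bool = lambda b: b(True)(False)
for b in (tru, fls):
    for c in (tru, fls):
        print(to_bool(b), to_bool(c), "->", to_bool(nand(b)(c)))
```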
epfl-collab | To which expression is the following for-loop translated? for x <- xs if x > 5; y <- ys yield x + y | ['xs.flatMap(x => ys.map(y => x + y)).withFilter(x => x > 5)', 'xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y))', 'xs.withFilter(x => x > 5).map(x => ys.flatMap(y => x + y))', 'xs.map(x => ys.flatMap(y => x + y)).withFilter(x => x > 5)'] | B | null | Document 1:::
Iterative for loop
In computer science a for-loop or for loop is a control flow statement for specifying iteration. Specifically, a for loop functions by running a section of code repeatedly until a certain condition has been satisfied. For-loops have two parts: a header and a body. The header defines the... |
epfl-collab | Why is natural language processing difficult?Select all that apply.You will get a penalty for wrong answers. | ['High dimensionality and sparseness of data', 'Lack of linguistic competence and resources', 'Impact of power laws', 'Subjectivity of annotators'] | B | null | Document 1:::
Natural language understanding
Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.There is consi... |
epfl-collab | A query \(q\) has been submitted to two distinct Information Retrieval engines operating on the same document collection containing 1'000 documents, with 50 documents being truly relevant for \(q\).The following result lists have been produced by the two IR engines, \(S_1\) and \(S_2\) respectively:
\(S_1\text{:}\)
\(... | ['\\(S_2\\)', 'This evaluation metric cannot be computed.', 'Both engines perform equally.', '\\(S_1\\)'] | A | null | Document 1:::
Average precision
Evaluation measures for an information retrieval (IR) system assess how well an index, search engine or database returns results from a collection of resources that satisfy a user's query. They are therefore fundamental to the success of information systems and digital platforms. The suc... |
epfl-collab | A major characteristic of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account from the NLP perspective?
(penalty for wrong ticks) | ['by interacting with human experts to formulate precise interpretation rules for linguistic entities', 'by teaching humans to talk and write in a way that reduces implicitness and ambiguity', 'by increasing the amount of a priori knowledge that NLP systems are able to exploit', 'by designing NLP algorithms and data st... | D | null | Document 1:::
Natural language generation
NLG may be viewed as complementary to natural-language understanding (NLU): whereas in natural-language understanding, the system needs to disambiguate the input sentence to produce the machine representation language, in NLG the system needs to make decisions about how to put ... |
epfl-collab | Consider 3 regular expressions \(A\), \(B\), and \(C\), such that:the sets of strings recognized by each of the regular expressions is non empty;the set of strings recognized by \(B\) is included in the set of strings recognized by \(A\);some strings are recognized simultaneously by \(A\) and by \(C\); andno string is ... | ['Any string recognized by \\(B\\) is (at least) associated to itself by the transducer \\(A\\otimes B\\)', '\\((A\\otimes B)\\circ (C)\\) recognizes a non empty set of string associations', '\\((B\\otimes A)\\circ (C)\\) recognizes a non empty set of string associations', 'Any string recognized by \\(A\\) but not by \... | A | null | Document 1:::
Comparison of regular expression engines
This is a comparison of regular expression engines.
Document 2:::
Regular Expression
A regular expression (shortened as regex or regexp; sometimes referred to as rational expression) is a sequence of characters that specifies a match pattern in text. Usually such p... |
epfl-collab | Why is natural language processing difficult?
Select all that apply.A penalty will be applied for wrong answers. | ['Impact of power laws', 'Lack of linguistic competence and resources', 'Subjectivity of annotators', 'High dimensionality and sparseness of data'] | B | null | Document 1:::
Natural language understanding
Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.There is consi... |
epfl-collab | A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int The... | ['x => if !m(x) then p(x) else 0', 'x => if m(x) then p(x) else 0', 'x => m(x) && p(x)', 'x => if p(x) then m(x) else 0'] | D | null | Document 1:::
Set (abstract data type)
Static sets allow only query operations on their elements — such as checking whether a given value is in the set, or enumerating the values in some arbitrary order. Other variants, called dynamic or mutable sets, allow also the insertion and deletion of elements from the set. A mu... |
epfl-collab | Fermat's little theorem states that for a prime $n$ and any $b\in \mathbb{Z}_n ^\star$ we have\dots | ['$b^{n-1}\\mod n = 1$.', '$b^{n-1}\\mod n = n$.', '$b^{n-1}\\mod n = b$.', '$b^{n}\\mod n = 1$.'] | A | null | Document 1:::
Cubic residue symbol
An analogue of Fermat's little theorem is true in Z {\displaystyle \mathbb {Z} }: if α {\displaystyle \alpha } is not divisible by a prime π {\displaystyle \pi } , α N ( π ) − 1 ≡ 1 mod π . {\displaystyle \alpha ^{N(\pi )-1}\equiv 1{\bmod {\pi }}.} Now assume that N ( π ) ≠ 3 {\displ... |
epfl-collab | The number of permutations on a set of $n$ elements | ['is always greater than $2^n$', 'is approximately $n(\\log n - 1)$', 'can be approximated using the Stirling formula', 'is independent of the size of the set'] | C | null | Document 1:::
Circular notation
In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of al... |
epfl-collab | Select the \emph{incorrect} statement. Complexity analysis of an attack considers | ['probability of success.', 'memory complexity.', 'time complexity.', 'difficulty to understand a corresponding journal paper.'] | D | null | Document 1:::
Algorithmic complexity attack
An algorithmic complexity attack (ACA) is a form of attack in which the system is attacked by an exhaustion resource to take advantage of worst-case performance.
Document 2:::
Brute force attack
In cryptography, a brute-force attack consists of an attacker submitting many pas... |
epfl-collab | Which one of these is \emph{not} a stream cipher? | ['A5/1', 'IDEA', 'RC4', 'E0'] | B | null | Document 1:::
SSS (cipher)
In cryptography, SSS is a stream cypher algorithm developed by Gregory Rose, Philip Hawkes, Michael Paddon, and Miriam Wiggers de Vries. It includes a message authentication code feature. It has been submitted to the eSTREAM Project of the eCRYPT network. It has not selected for focus nor for... |
epfl-collab | Tick the \emph{correct} assertion regarding GSM. | ['The integrity of GSM messages is well protected.', 'In GSM, the phone is authenticated to the network.', 'GSM uses the GSME cipher to encrypt messages.', 'In GSM, the communication is always encrypted.'] | B | null | Document 1:::
Timing advance
In the GSM cellular mobile phone standard, timing advance (TA) value corresponds to the length of time a signal takes to reach the base station from a mobile phone. GSM uses TDMA technology in the radio interface to share a single frequency between several users, assigning sequential timesl... |
epfl-collab | Tick the \emph{wrong} assertion concerning 3G. | ['3G uses f8 for encryption.', 'In 3G, there is a counter to protect against replay attacks.', 'In 3G, the network is authenticated to the phone.', 'The integrity of 3G messages is well protected.'] | C | null | Document 1:::
3G network
3G is the third generation of wireless mobile telecommunications technology. It is the upgrade over 2G, 2.5G, GPRS and 2.75G Enhanced Data Rates for GSM Evolution networks, offering faster data transfer, and better voice quality. This network was superseded by 4G, and later on by 5G. This netwo... |
epfl-collab | Tick the \textbf{false} statement. | ['In WEP, authentication is done with the pre-shared keys.', 'Due to memory limitations, dummy devices can share the same key with everyone.', 'Cryptographic primitives used in Bluetooth are provably secure.', 'The security of Bluetooth 2.0 pairing is based on PIN.'] | C | null | Document 1:::
False (logic)
In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Bool... |
epfl-collab | Why do block ciphers use modes of operation? | ['to use keys of any size.', 'to be provably secure.', 'to encrypt messages of any size.', 'it is necessary for the decryption to work.'] | C | null | Document 1:::
Cipher-block chaining
In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity. A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one ... |
epfl-collab | If we pick independent random numbers in $\{1, 2, \dots, N\}$ with uniform distribution, $\theta \sqrt{N}$ times, we get at least one number twice with probability\dots | ['$e^{\\theta ^2}$', '$1-e^{-\\theta ^2 /2}$', '$1-e^{\\theta ^2}$', '$e^{-\\theta ^2 /2}$'] | B | null | Document 1:::
Discrete uniform random variable
In probability theory and statistics, the discrete uniform distribution is a symmetric probability distribution wherein a finite number of values are equally likely to be observed; every one of n values has equal probability 1/n. Another way of saying "discrete uniform dis... |
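A small Monte Carlo sketch for the birthday-bound question above (the parameter values are arbitrary), comparing the empirical collision frequency with the $1-e^{-\theta^2/2}$ approximation from the intended answer:

```python
import numpy as np

rng = np.random.default_rng(0)
N, theta, trials = 10_000, 1.2, 20_000
draws = int(theta * np.sqrt(N))   # 120 uniform draws from {1, ..., N} per trial

collisions = sum(
    len(np.unique(rng.integers(1, N + 1, size=draws))) < draws
    for _ in range(trials)
)
print(collisions / trials)          # empirical collision frequency, roughly 0.51
print(1 - np.exp(-theta**2 / 2))    # the 1 - e^{-theta^2/2} approximation, about 0.513
```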
epfl-collab | In practice, what is the typical size of an RSA modulus? | ['256 bits', '64 bits', '1024 bits', '8192 bits'] | C | null | Document 1:::
Factoring integers
The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are... |
epfl-collab | The one-time pad is\dots | ['A perfectly binding commitment scheme.', 'A computationally (but not statistically) binding commitment scheme.', 'Not a commitment scheme.', 'A statistically (but not perfectly) binding commitment scheme.'] | C | null | Document 1:::
Blinding (cryptography)
The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a sec... |
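Independently of the commitment-scheme framing of the question, the blinding the excerpt refers to is a plain XOR with a key as long as (and as random as) the message; a minimal sketch with a made-up message:

```python
import secrets

message = b"attack at dawn"                      # illustrative plaintext
key = secrets.token_bytes(len(message))          # fresh uniform key, used once
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message                      # XOR with the same key undoes the blinding
```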
epfl-collab | Tick the \textbf{false} statement. | ['If a point is singular on an Elliptic curve, we can draw a tangent to this point.', 'The identity element of $E_{a,b}$ is the point at infinity.', 'Elliptic curve cryptography is useful in public-key cryptography.', '$P=(x_p,y_p)$ and $Q=(x_p,-y_p)$ are the inverse of each other on an Elliptic curve of equation $y^2=... | A | null | Document 1:::
False (logic)
In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Bool... |
epfl-collab | Diffie-Hellman refers to \ldots | ['a signature scheme.', 'the inventors of the RSA cryptosystem.', 'a public-key cryptosystem.', 'a key-agreement protocol.'] | D | null | Document 1:::
Diffie–Hellman problem
The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but... |
epfl-collab | Consider the Rabin cryptosystem using a modulus $N=pq$ where $p$ and $q$ are both $\ell$-bit primes. What is the tightest complexity of the encryption algorithm? | ['$O(\\ell^2)$', '$O(\\ell)$', '$O(\\ell^4)$', '$O(\\ell^3)$'] | A | null | Document 1:::
Trapdoor permutation
As of 2004, the best known trapdoor function (family) candidates are the RSA and Rabin families of functions. Both are written as exponentiation modulo a composite number, and both are related to the problem of prime factorization. Functions related to the hardness of the discrete log... |
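The O(ℓ²) answer follows because Rabin encryption is a single modular squaring, c = m² mod N, i.e. essentially one multiplication of numbers of bit length O(ℓ), which costs O(ℓ²) with schoolbook multiplication. A toy sketch (the primes below are tiny illustrative values, nowhere near secure sizes):

```python
p, q = 1009, 1013          # toy primes for illustration only
N = p * q
m = 123456 % N             # plaintext reduced modulo N
c = pow(m, 2, N)           # Rabin encryption: one modular squaring
print(c)
```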
epfl-collab | Select the \emph{incorrect} statement. | ['Plain RSA encryption is deterministic.', 'The non-deterministic encryption can encrypt one plaintext into many ciphertexts.', 'The non-deterministic encryption always provides perfect secrecy.', 'ElGamal encryption is non-deterministic.'] | C | null | Document 1:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plau... |
epfl-collab | Which mode of operation is similar to a stream cipher? | ['OFB', 'ECB', 'CFB', 'CBC'] | A | null | Document 1:::
Cipher-block chaining
In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity. A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one ... |
epfl-collab | Select the \emph{incorrect} statement. | ['The Discrete Logarithm can be solved in polynomial time on a quantum computer.', 'The Discrete Logarithm is hard to compute for the additive group $\\mathbf{Z}_{n}$.', 'The Computational Diffie-Hellman problem reduces to the Discrete Logarithm problem.', 'The ElGamal cryptosystem is based on the Discrete Logarithm pr... | B | null | Document 1:::
Uncertain inference
Rather than retrieving a document that exactly matches the query we should rank the documents based on their plausibility in regards to that query. Since d and q are both generated by users, they are error prone; thus d → q {\displaystyle d\to q} is uncertain. This will affect the plau... |
epfl-collab | In Bluetooth, the link key $K_{link}$ is ... | ['used to generate an epheremal key $K_{init}$.', 'the input to the pairing protocol.', 'used to authenticate devices.', 'not used to generate the encryption key.'] | C | null | Document 1:::
Bluetooth Basic Rate/Enhanced Data Rate
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, ... |
epfl-collab | Let $n=pq$ where $p$ and $q$ are prime numbers. We have: | ['$\\varphi (n) = n-1$', '$\\varphi (n) = p + q$', '$\\varphi (n) = (p-1) (q-1)$', '$\\varphi (n) = pq$'] | C | null | Document 1:::
Pythagorean prime
A Pythagorean prime is a prime number of the form 4 n + 1 {\displaystyle 4n+1} . Pythagorean primes are exactly the odd prime numbers that are the sum of two squares; this characterization is Fermat's theorem on sums of two squares. Equivalently, by the Pythagorean theorem, they are the ... |
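The chosen answer φ(n) = (p-1)(q-1) is easy to confirm by brute-force counting for small primes; a minimal sketch, where the helper `phi` and the primes 11 and 13 are illustrative:

```python
from math import gcd

def phi(n):
    """Euler's totient by brute force: count 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, q = 11, 13
assert phi(p * q) == (p - 1) * (q - 1)   # 120 == 10 * 12
print(phi(p * q))
```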
epfl-collab | Which of the following elements belongs to $\mathbb{Z}_{78}^*$? | ['35', '65', '46', '21'] | A | null | Document 1:::
Abell 78
Abell 78 is a planetary nebula located in the constellation of Cygnus. It has a fainter halo consisting mostly of hydrogen, and an inner elliptical ring that is mostly made of helium. The central star of the planetary nebula has a spectral type similar to that of a carbon-rich Wolf–Rayet star.
... |
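For this row, membership in $\mathbb{Z}_{78}^*$ simply means being coprime to 78 = 2 · 3 · 13, so the four candidates can be checked with a gcd; a minimal sketch:

```python
from math import gcd

for a in (35, 65, 46, 21):
    # a is a unit modulo 78 exactly when gcd(a, 78) == 1
    print(a, "in Z_78^*:", gcd(a, 78) == 1)   # only 35 passes
```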
epfl-collab | Tick the \textbf{false} statement. Moore's Law ... | ['implies that the heat generated by transistors of CPU doubles every 18 months.', 'assumes the number of transistors per CPU increases exponentially fast with time.', 'is partly a reason why some existing cryptosystems are insecure.', 'was stated by the founder of Intel.'] | A | null | Document 1:::
Computational power
Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience i... |
epfl-collab | The elements of $\mathbf{Z}_{14}^*$ are | ['$\\{ 1, 2, 3, 9, 11 \\}$', '$\\{ 1, 3, 5, 9, 11, 13\\}$', '$\\{ 0, 1, 3, 5, 9, 11, 13\\}$', '$\\{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13\\}$'] | B | null | Document 1:::
Z* theorem
In mathematics, George Glauberman's Z* theorem is stated as follows: Z* theorem: Let G be a finite group, with O(G) being its maximal normal subgroup of odd order. If T is a Sylow 2-subgroup of G containing an involution not conjugate in G to any other element of T, then the involution lies in ... |
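The answer for $\mathbf{Z}_{14}^*$ can be reproduced by listing the residues in {1, …, 13} that are coprime to 14; a one-line sketch:

```python
from math import gcd

print([a for a in range(1, 14) if gcd(a, 14) == 1])   # [1, 3, 5, 9, 11, 13]
```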
epfl-collab | Tick the \textbf{false} statement. | ['RSA can be accelerated by using CRT (Chinese Remainder Theorem).', 'The CRT states $\\mathbb{Z}_{mn} \\equiv \\mathbb{Z}_{m} \\cup \\mathbb{Z}_{n}$.', 'The CRT implies $\\varphi(mn)=\\varphi(m)\\varphi(n)$ for $\\mathsf{gcd}(m,n)=1$.', 'An isomorphism is defined as a bijective homomorphism.'] | B | null | Document 1:::
False (logic)
In logic, false or untrue is the state of possessing negative truth value and is a nullary logical connective. In a truth-functional system of propositional logic, it is one of two postulated truth values, along with its negation, truth. Usual notations of the false are 0 (especially in Bool... |
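The statement tagged false misstates the CRT: for gcd(m, n) = 1 the theorem gives a ring isomorphism $\mathbb{Z}_{mn} \cong \mathbb{Z}_m \times \mathbb{Z}_n$ (a direct product, not a union), and restricting that isomorphism to units is what yields φ(mn) = φ(m)φ(n). A minimal numeric check of the multiplicativity, with illustrative moduli:

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

m, n = 9, 14                           # coprime moduli chosen for illustration
assert gcd(m, n) == 1
assert phi(m * n) == phi(m) * phi(n)   # 36 == 6 * 6
```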
epfl-collab | What is the advantage of using a salt in a password authentication protocol? | ['It protects against online attacks.', 'It avoids single-target exhaustive search attacks from the database.', 'It avoids multi-target bruteforce attacks from the database.', 'It makes the protocol more spicy.'] | C | null | Document 1:::
Salt (cryptography)
In cryptography, a salt is random data fed as an additional input to a one-way function that hashes data, a password or passphrase. Salting helps defend against attacks that use precomputed tables (e.g. rainbow tables), by vastly growing the size of table needed for a successful attack... |
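Concretely, a per-user random salt makes identical passwords hash to different stored values, so one precomputed table or one brute-force pass cannot be reused against every entry in the database at once. A minimal sketch with Python's standard library; the function name, iteration count and salt length are illustrative choices, not a recommendation:

```python
import hashlib
import os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)   # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

# The same password stored for two users yields two different digests.
s1, d1 = hash_password(b"correct horse")
s2, d2 = hash_password(b"correct horse")
assert d1 != d2
```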
epfl-collab | Select \emph{incorrect} statement. The birthday paradox | ['implies that majority of people is born at full moon.', 'implies that in a list of $\\Theta\\sqrt{N}$ random numbers from $\\mathbb{Z}_N$ we have at least one number twice with probability $1- e^{-{\\Theta^2\\over 2}}$.', 'can be used to find collisions in hash function.', 'implies that in class of $23$ students we h... | A | null | Document 1:::
Artificial precision
In numerical mathematics, artificial precision is a source of error that occurs when a numerical value or semantic is expressed with more precision than was initially provided from measurement or user input. For example, a person enters their birthday as the date 1984-01-01 but it is ... |
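The classic 23-student figure in the truncated option can be recomputed exactly (assuming 365 equally likely birthdays and no leap years):

```python
# P(no shared birthday among 23 people) = prod_{i=0}^{22} (365 - i) / 365
p_no_collision = 1.0
for i in range(23):
    p_no_collision *= (365 - i) / 365
print(1 - p_no_collision)   # ~0.507, i.e. a shared birthday is more likely than not
```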
epfl-collab | Which scheme is the most secure? | ['DES.', 'Three-key triple DES.', 'Double DES.', 'Two-key triple DES.'] | B | null | Document 1:::
Asymptotic security
In cryptography, concrete security or exact security is a practice-oriented approach that aims to give more precise estimates of the computational complexities of adversarial tasks than polynomial equivalence would allow. It quantifies the security of a cryptosystem by bounding the pro... |