question,answer,label,source
"In Itanium's procedure call and return mechanism, what problem might arise when the processor executes \verb+alloc+? Which mechanisms could be used to handle the problem? Feel free to mention what Itanium actually does (if you recall it), but list any effective solution that comes to mind. Just outline the basic ideas, do not try to describe or solve the details.","val THRESHOLD = ??? def aggregate(z: B)(f: (B, A) => B, g: (B, B) => B): B = def go(s: Splitter[A]): B = if s.remaining <= THRESHOLD then s.foldLeft(z)(f) else s.split .map((t: Splitter[A]) => task { go(t) }) .map(_.join()) .reduce(g) go(splitter)",0.1,M1_preference_data_0
" List two common types of exceptions which could possibly be implemented imprecisely. Explain why.","S1: q1: P=4/8 R=4/4 q2: P=1/5 R=1/2 q3: P=3/9 R=3/5 q4: P=3/9 R=3/4 mean P(S1) = 41/120 mean R(S1) = 57/80 S2: q1: P=1/5 R=1/4 q2: P=2/4 R=2/2 q3: P=3/5 R=3/5 q4: P=2/4 R=2/4 mean P(S2) = 9/20 mean R(S2) = 47/80",0.1,M1_preference_data_1
" What differentiates VLIW processors from out-of-order superscalar processors? ","The app could stream images rather than batch them, to only download images the user actually sees",0.1,M1_preference_data_2
"A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: 1 type Multiset = Char => Int What should replace ??? so that the following function transforms given set s to a multiset where each element of s appears exactly once? 1 type Set = Char => Boolean 2 def setToMultiset(s: Set): Multiset = ???","Recall that a \emph{base} of a matroid is an independent set of maximum cardinality. Let $B \in \mathcal{I}$ be a base of $\mathcal{M}$.
Suppose towards a contradiction that the output $S \in \mathcal{I}$ of \textsc{Greedy} is not a base of $\mathcal{M}$. Then $|S| < |B|$, and, by the second axiom of matroids, there exists some $e_{b} \in B \setminus S$ such that $( S \cup \{e_b\} ) \in \mathcal{I}$. Let $S'$ be the subset of elements in $S$ that were considered before $e_b$ by \textsc{Greedy}. In other words, $S'$ was the partial solution of \textsc{Greedy} just before it considered $e_b$. By the first axiom of matroids, $S' \cup \{e_b\} \in \mathcal{I}$ because $S' \cup \{e_b\} \subseteq S \cup \{e_b\}$. Thus \textsc{Greedy} should have added $e_b$ to its solution $S$ in Step 4, which is a contradiction.",0.1,M1_preference_data_3
"Consider a classic pipeline with Fetch, Decode, Execute, Memory, and Writeback stages such as MIPS's. Give an example snippet made of 2-3 instructions (use any credible assembly language) which would benefit from a forwarding path between the Memory and Execute stage (assume that no other forwarding paths exist).","It is not suitable as the item is not specified properly (""doesn't render well"" is not concrete). A bug item has to include details on what is wrong with the user experience.",0.1,M1_preference_data_4
" Let us remind that we define the max-margin $M_\star$ as \begin{align*} M_\star = \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N \end{align*} and a max-margin separating hyperplane $\bar \wv$ as a solution of this problem: \begin{align*} \bar \wv \in \arg \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N \end{align*} Does it imply that the output of the Perceptron algorithm is a max-margin separating hyperplane? ","$P(\text{continuous wave})=\frac{2}{58}$ since ``\emph{continuous wave}'' appears two times and there are 58 bigrams in a 59 token corpus.
$P(\text{pulsed laser})=0$ since ""pulsed laser"" never occurs in the corpus.",0.1,M1_preference_data_5
"If process i fails, then eventually all processes j≠i fail Is the following true? If all processes j≠i fail, then process i has failed",['PNP'],0.1,M1_preference_data_6
"Interpreting the results obtained throughout this homework, create a short text (max. 250 words) where you: Present and explain a credible causal diagram capturing the relationship between the variables below, and justify your causal diagram given the questions answered in this homework: ""Skill"": an individual's innate talent towards a sport. ""Relative Age"": how old an individual was in comparison to his or her peers. ""Success before adulthood"": how successful the individual is as an athlete as a child/teenager. ""Success as an adult"": how successful the individual is as an athlete as an adult. Discuss: Consider two equally successful children athletes, one born on March 31 and the other on April 1 — which will likely be more successful as an adult? Your answer should be consistent with your causal diagram.","The set of states Q is the set of all possible maps q:A→N. Intuitively, each state of the object assigns each account its balance. The initialization map q_0:A→N assigns the initial balance to each account. Operations and responses of the type are defined as O={transfer(a,b,x):a,b∈A,x∈ N}∪{read(a):a∈A} and R={ true, false }∪N. For a state q∈Q, a process p∈Π, an operation o∈O, a response r∈R and a new state q^'∈Q, the tuple (q,p,o,q^',r)∈Δ if and only if one of the following conditions is satisfied: -o=transfer(a,b,x) ∧ p ∈ μ(a) ∧ q(a) ≥ x ∧ q^'(a) = q(a)-x ∧ q'(b) = q(b) + x ∧ ∀ c ∈ A ∖ {a,b}: q'(c) = q(c) (all other accounts unchanged) ∧ r= true; -o=transfer(a,b,x)∧(p∉μ(a)∨q(a) 3d] \leq 0.47$, the expected number of answers that exceed $3d$ is $0.47t$. If the median is larger than $3d$, this means at least half of the $t$ individual answers exceed $3d$.
Then, we have: \[\Pr(X > t/2) = \Pr(X > (1+3/47)\cdot 0.47t) \leq e^{-\frac{(3/47)^2 \cdot 0.47t}{3}} \leq e^{-0.00063t} = e^{-0.00063C\ln(1/\delta)} \leq \delta/2 \] with $C$ being a large enough constant. Similarly, by bounding the probability that the median is below $\frac{d}{3}$ we get: \[\Pr(X < t/2) = \Pr(X < (1-3/53)\cdot 0.53t) \leq e^{-\frac{(3/53)^2 \cdot 0.53t}{2}} \leq \delta/2 \] with $C$ being a large enough constant. This means the probability that $d/3 \leq \overline{d} < 3d$ is at least $1-\delta$.",0.1,M1_preference_data_13
"The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns: year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. Create another field entitled reputation capturing how famous the last author of the paper is. Notice that the last author of the paper is usually the most senior person involved in the project. This field should equal log10(#citations/#publications + 1). Notice that each author in the dataset has at least 1 publication, so you don't risk dividing by 0.","$$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}}, \quad \text{so that } \mathbf{K} = \mathbf{\Phi} \boldsymbol{\Phi}^{\top} \in \mathbb{R}^{n \times n} $$",0.1,M1_preference_data_14
"Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use, subject to the constraint that no bin should be filled to more than its capacity.
An example is as follows: \begin{center} \vspace{4mm} \includegraphics[width=9cm]{binpackingExample2} \end{center} Here, seven items have already arrived that we have packed in three bins. The newly arriving item of size $1/6$ can either be packed in the first bin, third bin, or in a new (previously unused) bin. It cannot be packed in the second bin since $1/3 + 1/3 + 1/4 + 1/6 > 1$. If it is packed in the first or third bin, then we still use three bins, whereas if we pack it in a new bin, then we use four bins. In this problem you should, assuming that all items have size at most $\epsilon$ for some $0 <\epsilon\leq 1$, design and analyze an online algorithm for the online bin-packing problem that uses at most \begin{align} \frac{1}{1-\epsilon} \mbox{OPT} + 1 \mbox{ bins,} \label{eq:binguarantee} \end{align} where $\mbox{OPT}$ denotes the minimum number of bins an optimal packing uses. In the above example, $\epsilon = 1/3$. \\[2mm] {\em (In this problem you are asked to (i) design the online algorithm and (ii) prove that it satisfies the guarantee~\eqref{eq:binguarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}",not(a or b),0.1,M1_preference_data_15
"We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. Define a case class Rectangle storing two corners. ","Whatever your personal convictions, you have to adapt to the convention used by the project.",0.1,M1_preference_data_16
"It is often desirable to be able to express the performance of an NLP system in the form of one single number, which is not the case with Precision/Recall curves. Indicate what score can be used to convert a Precision/Recall performance into a unique number.
Give the formula for the corresponding evaluation metric, and indicate how it can be weighted.","The benefit of depending on the latest available minor version is that we will always be up-to-date with the latest bugfixes and security/performance improvements as well. But the problem with this approach is that it could lead to unexpected behaviour which could cause bugs, because two compatible versions can still have slightly different behaviour.",0.1,M1_preference_data_17
"The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: \begin{align*} g^\star(\xv)\in \arg \min_{z\in\R}\mathbb E[\phi( z Y)|X=\xv]. \end{align*} Thus the function $g^\star$ that minimizes the $\phi$-risk can be determined by looking at each $\xv$ separately. Give a formula of the function $g^\star : \mathcal X \to\R$ which minimizes the true $\phi$-risk, as a function of $\eta(\xv)$. ","Yes, it is possible. d1>d2: adding d3=”b” d2>d1: adding d3=”c”",0.1,M1_preference_data_18
"In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.)
If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Use the structural result proved in the first subproblem to devise an efficient rounding algorithm that, given an instance and a feasible extreme point $x^*$ in the linear programming relaxation corresponding to the instance, returns a schedule that completes all jobs by deadline $T + \max_{j\in J} p_j$. In other words, you wish to assign jobs to machines so that the total processing time of the jobs a machine receives is at most $T + \max_{j\in J} p_j$.",['(80+1000-{a}-{b}+80)/1000'],0.1,M1_preference_data_19 "Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_i=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. 
In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: Show that indeed with the probabilistic interpretation of weights of vector space retrieval, as given in Equation (2), the similarity computation in vector space retrieval results exactly in the probabilistic interpretation of information retrieval, i.e., $sim(q,d_j)= P(q|d_j)$. Given that $d_j$ and $q$ are conditionally independent, i.e., $P(d_j \cap q|k_i) = P(d_j|k_i)P(q|k_i)$. You can assume existence of joint probability density functions wherever required. (Hint: You might need to use Bayes' theorem)","1. I/O: Checked exception, can happen even if the code is correct due to external factors 2. Backend timeout: Checked exception, can happen even if the code is correct due to SwengPhotos internals 3. Name already exists: Unchecked exception, the library should deal with this internally and never make such an invalid request, since the cloud is private",0.1,M1_preference_data_20
"You are discussing coding habits with a colleague, who says: ""When I edit a part of a function, if I see unclean code in another part of it, I also clean that other part up."" In one sentence, explain if this is a good habit and why:","If the messages, on which the algorithm agrees in consensus, are never sorted deterministically within every batch (neither a priori, nor a posteriori), then the total order property does not hold.
Even if the processes decide on the same batch of messages, they might TO-deliver the messages within this batch in a different order. In fact, the total order property would be ensured only with respect to batches of messages, but not with respect to individual messages. We thus get a coarser granularity in the total order.",0.1,M1_preference_data_21
"Recall from the last lecture (see Section 16.1.1 in notes of Lecture~8) that the number of mistakes that Weighted Majority makes is at most $2(1+\epsilon) \cdot \mbox{(\# of $i$'s mistakes)} + O(\log N/\epsilon)$, where $i$ is any expert and $N$ is the number of experts. Give an example that shows that the factor $2$ is tight in the above bound. The simplest such example only uses two experts, i.e., $N=2$, and each of the experts is wrong roughly half of the time. Finally, note how your example motivates the use of a random strategy (as in the Hedge strategy that we will see in the next lecture).","Returns a list of elements that appear exactly once in the input list, in reverse order",0.1,M1_preference_data_22
"Given the following code snippet, you are tasked to produce a modulo scheduled version of the loop achieving the best possible performance. You can assume that any operation has a latency of one cycle and that the processor has 2 ALUs, one memory unit, and one branch unit. The processor also has all the necessary hardware structures and instructions to support modulo scheduling. In particular, the instruction \verb+mov EC+ loads a value in the epilogue counter and the instruction \verb+loop.pip+ is equivalent to \verb+loop+ but has all features expected to execute a modulo scheduled loop. Predicates from \verb+p32+ and registers from \verb+x32+ onwards rotate; across iterations registers are rotated upwards (i.e., the content of \verb+x44+ is accessible as \verb+x45+ in the next iteration). What is the shortest achievable initiation interval? Why?
\begin{verbatim}
0: mov LC, 100
1: mov x1, 10000
2: ld x2, 0(x1)
3: addi x2, x2, 10
4: st x2, 0(x1)
5: addi x1, x1, 1
6: loop 2
\end{verbatim} ","Consider the following linear program with a variable $p(v)$ for each vertex $v\in V$: \begin{align*} \textrm{max} \quad & \sum_{v\in V} p(v) \\ \textrm{subject to} \quad& \sum_{v\in S} p(v) \leq |E(S, \bar S)| \qquad \mbox{for all $\emptyset \subset S \subset V$} \\ & p(v) \geq 0 \qquad \mbox{for all $v\in V$} \end{align*} We show that this linear program can be solved in polynomial time using the Ellipsoid method by designing a polynomial time separation oracle. That is, we need to design a polynomial time algorithm that, given $p^* \in \mathbb{R}^V$, certifies that $p^*$ is a feasible solution or outputs a violated constraint. The non-negativity constraints are trivial to check in time $O(|V|)$ (i.e., polynomial time) so let us worry about the other constraints. These constraints can be rewritten as $f(S) \geq 0$ for all $\emptyset \subset S \subset V$, where \begin{align*} f(S) = |E(S,\bar S)| - \sum_{v\in S} p^*(v) \quad \mbox{for $\emptyset \subseteq S \subseteq V$.} \end{align*} Note that $f$ is a submodular function since it is a sum of two submodular functions: the cut function (which is submodular as seen in class) and a linear function (trivially submodular). Hence there is a violated constraint if and only if \begin{align*} \min_{\emptyset \subseteq S \subset V} f(S) < 0\,. \end{align*} This is an instance of the submodular function minimization problem with the exception that we do not allow $S=V$ for a solution. Therefore we solve $n$ instances of submodular minimization on the smaller ground sets $V \setminus \{v_1\}$, $V \setminus \{v_2\}$, \dots, $V \setminus \{v_n\}$ where $V =\{v_1, \dots, v_n\}$.
Since submodular function minimization is polynomial time solvable (and we can clearly evaluate our submodular function in polynomial time), we can solve the separation problem in polynomial time.",0.1,M1_preference_data_23
"Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others, using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real number between 0 and 1?","We can view the threshold as an additional weight by adding the constant input $1$ to the input $\xv$. It amounts to considering the input $\tilde \xv^\top= [\xv^\top,1]$ since $\tilde \xv^\top [\wv^\top,b] = \xv^\top \wv + b$. ",0.1,M1_preference_data_24
"Remember that monoids can be represented by the following type class: 1 trait SemiGroup[T]: 2 extension (x: T) def combine (y: T): T 3 4 trait Monoid[T] extends SemiGroup[T]: 5 def unit: T Additionally the three following laws should hold for all Monoid[M] and all a, b, c: M: (Associativity) a.combine(b).combine(c) === a.combine(b.combine(c)) (Left unit) unit.combine(a) === a (Right unit) a.combine(unit) === a Consider the following implementation of Monoid for Int: 1 given Pos: Monoid[Int] with 2 extension (x: Int) def combine (y: Int): Int = Math.max(x + y, 0) 3 def unit: Int = 0 Which of the three monoid laws does it fulfil? None of them Only Associativity Only Left unit Only Right unit Only Associativity and Left unit Only Associativity and Right unit Only Left unit and Right unit All of them","Here we give a detailed explanation of how to set the costs. Your solution does not need to contain such a detailed explanation. The idea of using the Hedge method for linear programming is to associate an expert with each constraint of the LP.
In other words, the Hedge method will maintain a weight distribution over the set of constraints of a linear problem to solve, and iteratively update those weights in a multiplicative manner based on the cost function at each step. Initially, the Hedge method will give a weight $w^{(1)}_i = 1$ for every constraint/expert $i=1,\dots, m$ (the number $m$ of constraints now equals the number of experts). And at each step $t$, it will maintain a convex combination $\vec{p}^{(t)}$ of the constraints (that is defined in terms of the weights). Using such a convex combination $\vec{p}$, a natural easier LP with a single constraint is obtained by summing up all the constraints according to $\vec{p}$. Any optimal solution of the original LP is also a solution of this reduced problem, so the new problem will have at least the same cost as the previous one. We define an oracle for solving this reduced problem: \begin{definition}{} An oracle that, given $\vec{p} = (p_1, \dots, p_m) \geq \mathbf{0}$ such that $\sum_{i=1}^m p_i = 1$, outputs an optimal solution $x^*$ to the following reduced linear problem: \begin{align*} \textbf{Maximize} \hspace{0.8cm} & \sum_{j=1}^n c_jx_j \\ \textbf{subject to}\hspace{0.8cm} &\left(\sum_{i=1}^m p_i A_i \right) \cdot x \leq \sum_{i=1}^m p_ib_i \\ \hspace{0.8cm} & x \geq 0 \\ \end{align*} \end{definition} As explained, we associate an expert to each constraint of the covering LP. In addition, we wish to increase the weight of unsatisfied constraints and decrease the weight of satisfied constraints (in a smooth manner depending on the size of the violation or the slack). The Hedge algorithm for covering LPs thus becomes: \begin{itemize} \item Assign each constraint $i$ a weight $w_i^{(1)}$ initialized to $1$. \end{itemize} At each time $t$: \begin{itemize} \item Pick the distribution $p^{(t)}_i = w_i^{(t)}/\Phi^{(t)}$ where $\Phi^{(t)}= \sum_{i\in [m]} w^{(t)}_i$.
\item Now \emph{we define the cost vector instead of the adversary} as follows: \begin{itemize} \item Let $x^{(t)}$ be the solution returned by the oracle on the LP obtained by using the convex combination $\vec{p}^{(t)}$ of constraints. Notice that the cost of $x^{(t)}$, i.e., $c^\top x^{(t)}$, is at least the cost of an optimal solution to the original LP. \item Define the cost of constraint $i$ as \begin{align*} m^{(t)}_i = b_i -\sum_{j=1}^n A_{ij} x^{(t)}_j = b_i - A_i x^{(t)} . \end{align*} Notice that we have a positive cost if the constraint is satisfied (so the weight will be decreased by Hedge) and a negative cost if it is violated (so the weight will be increased by Hedge). \end{itemize} \item After observing the cost vector, set $w_i^{(t+1)} = w_i^{(t)} \cdot e^{-\varepsilon \cdot m_i^{(t)}}$. \end{itemize} {\bf Output:} the average $\bar x =\frac{1}{T} \sum_{t=1}^T x^{(t)}$ of the constructed solutions.",0.1,M1_preference_data_25
"Build the inverse document-frequency matrix (idf)","x => Math.min(a(x), b(x))",0.1,M1_preference_data_26
"Given a joint data distribution $\mathcal D$ on $\mathcal X \times \{-1,1\}$ and $n$ independent and identically distributed observations from $\mathcal D$, the goal of the classification task is to learn a classifier $f:\mathcal X \to \{-1,1\}$ with minimum true risk $\mathcal L(f) = \mathbb E_{(X,Y)\sim \mathcal D} [\boldsymbol{\mathbb{1}}_{f(X) \neq Y}]$ where $\boldsymbol{\mathbb{1}}_{C} = \begin{cases} 1 \; \text{ if } C \text{ is true} \\ 0 \quad \text{otherwise} \end{cases}$. % We denote by $\mathcal D_{X}$ the marginal law (probability distribution) of $X$, and $\mathcal D_{Y|X}$ the conditional law of $Y$ given $X$.
Give the two reasons seen in the course which explain why minimizing the true risk with the $0-1$ loss over the set of classifiers $f:\mathcal X \to \{-1,1\}$ is problematic.",Amazon Web Services (as can be found by looking up the error name),0.1,M1_preference_data_27
"You've been hired to modernize a codebase from a 50-year-old company: version control, automated builds, and continuous integration. One of your colleagues, who is not completely up-to-date with modern practices, asks you the following question: ""Do I have to do one ""commit"" each day with my day's work?"" What would be your answer?","The result of doing scanLeft1 and then reversing the answer is not the same as applying scanRight1 on the reversed input (unless $f$ is commutative). Consider once again our favourite sequence $A = (a_1, a_2)$. We apply the operations as required: $$rev(A).scanLeft1(f) = (a_2, f(a_2, a_1))$$ and $$rev(A.scanRight1(f)) = (a_2, f(a_1, a_2))$$ These are not equal unless $f$ is commutative. Choose once again the function $f(x, y) := x$. We get $$rev(A).scanLeft1(f) = (a_2, a_2)$$ and $$rev(A.scanRight1(f)) = (a_2, a_1)$$ which are unequal if $a_1 \not = a_2$.",0.1,M1_preference_data_28
"You just started an internship in an IT services company. Your first task is about maintaining an old version of a product still used by a large customer. A bug just got fixed in the latest version of the product, and you must fix it in the old version. You ask where the source code is, and a developer shows you a repository in which the development team makes a single commit each week with the week's work. The old version is in another repository, which is a copy of the original repository made back when the version was released. Suggest a better way to handle old versions,",""" Compute confidence for a given set of rules and their respective support freqSet : frequent itemset of N-element H : list of candidate elements Y1, Y2...
that are part of the frequent itemset supportData : dictionary storing itemsets support rules : array to store rules min_confidence : rules with a confidence under this threshold should be pruned ""
def compute_confidence(freqSet, H, supportData, rules, min_confidence=0.7):
    prunedH = []
    for Y in H:
        X = freqSet - Y
        support_XuY = supportData[freqSet]
        support_X = supportData[X]
        conf = support_XuY/support_X
        if conf >= min_confidence:
            rules.append((X, Y, conf))
            prunedH.append(Y)
    return prunedH",0.1,M1_preference_data_29
"In a game of Othello (also known as Reversi in French-speaking countries), when a player puts a token on a square of the board, we have to look in all directions around that square to find which squares should be “flipped” (i.e., be stolen from the opponent). We implement this in a method computeFlips, taking the position of square, and returning a list of all the squares that should be flipped: final case class Square(x: Int, y: Int) def computeFlips(square: Square): List[Square] = { List(-1, 0, 1).flatMap{i => List(-1, 0, 1).filter{j => i != 0 || j != 0}.flatMap{j => computeFlipsInDirection(square, i, j)}}} def computeFlipsInDirection(square: Square, dirX: Int, dirY: Int): List[Square] = {// omitted} Rewrite the method computeFlips to use one for comprehension instead of maps, flatMaps and filters. The resulting for comprehension should of course have the same result as the expression above for all values of square. However, it is not necessary that it desugars exactly to the expression above.",Neither,0.1,M1_preference_data_30
"Consider the Poisson distribution with parameter $\lambda$. It has a probability mass function given by $p(i)=\frac{\lambda^{i} e^{-\lambda}}{i !}$, $i=0,1, \cdots$ (i) Write $p(i)$ in the form of an exponential distribution $p(i)=h(i) e^{\eta \phi(i)-A(\eta)}$. Explicitly specify $h$, $\eta$, $\phi$, and $A(\eta)$. (ii) Compute $\frac{d A(\eta)}{d \eta}$ and $\frac{d^{2} A(\eta)}{d \eta^{2}}$?
Is this the result you expected?","Since FloodSet guarantees that all non-faulty processes obtain the same W after f+1 rounds, other decision rules would also work correctly, as long as all the processes apply the same rule.",0.1,M1_preference_data_31 "One of your colleagues has recently taken over responsibility for a legacy codebase, a library currently used by some of your customers. Before making functional changes, your colleague found a bug caused by incorrect use of the following method in the codebase: public class User { /** Indicates whether the user’s browser, if any, has JavaScript enabled. */ public boolean hasJavascriptEnabled() { … } // … other methods, such as getName(), getAge(), ... } Your colleague believes that this is a bad API. You are reviewing the pull request your colleague made to fix this bug. After some discussion and additional commits to address feedback, the pull request is ready. You can either ""squash"" the pull request into a single commit, or leave the multiple commits as they are. 
Explain in 1 sentence whether you should ""squash"" and why.","Causal language modeling learns to predict the next word, which is what you would need to generate a story.",0.1,M1_preference_data_32
"Consider the following CFG \(\text{S} \rightarrow \text{NP VP PNP}\) \(\text{NP} \rightarrow \text{Det N}\) \(\text{NP} \rightarrow \text{Det Adj N}\) \(\text{VP} \rightarrow \text{V}\) \(\text{VP} \rightarrow \text{Aux Ving}\) \(\text{VP} \rightarrow \text{VP NP}\) \(\text{VP} \rightarrow \text{VP PNP}\) \(\text{PNP} \rightarrow \text{Prep NP}\) and the following lexicon: the:Det, red:Adj, cat:N, is:Aux, meowing:Ving, on:Prep, roof:N The next four questions ask you the content of a given cell of the chart used by the CYK algorithm (used here as a recognizer) for the input sentence ""the red cat is meowing on the roof"". Simply answer ""empty'' if the corresponding cell is empty and use a comma to separate your answers when the cell contains several objects. What is the content of the cell at row 3 column 6 (indexed as in the lectures)?","We give a rounding algorithm that gives a cut $x^*$ (an integral vector) of expected value equal to the value of an optimal solution $x$ to the quadratic program. The rounding algorithm is the basic one: set $x^*_i = 1$ with probability $x_i$ and $x^*_i = 0$ with probability $1 - x_i$. The expected value of the objective function is then $\sum_{(i,j) \in E}\mathbb{E}\left( (1 - x^*_i)x^*_j + x^*_i (1 - x^*_j) \right) = \sum_{(i,j) \in E} (1 - x_i)x_j + x_i (1 - x_j)$ by independence of the variables $x^*_i$ and $x^*_j$. Thus the expected value after rounding is the optimal value of the quadratic relaxation. But as no integral cut can have larger value than the relaxation, the value of the obtained cut $x^*$ must always equal the value of the fractional solution $x$.
It follows that the optimal value of the quadratic relaxation equals the value of an optimal cut.",0.1,M1_preference_data_33 "Implement Item-based collaborative filtering using the following formula: \begin{equation} {r}_{x}(a) = \frac{\sum\limits_{b \in N_{I}(a)} sim(a, b) r_{x}(b)}{\sum\limits_{b \in N_{I}(a)}|sim(a, b)|} \end{equation} You will create a function that takes as input the ratings and the similarity matrix and gives as output the predicted ratings. ","Recall the definition of pairwise independence: for any non-empty $S$ and $T$ such that $S\neq T$ and two bits $b_S$ and $b_T$, we have \begin{align*} \Pr[X_S= b_S \wedge X_T = b_T] = 1/4\,. \end{align*} We now first argue that $\mathbb{E}[X_S] = 1/2, \mathbb{E}[X_T] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$ implies that they are pairwise independent. We have \begin{align*} \Pr[X_S= 1 \wedge X_T = 1] &= \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[X_S= 1 \wedge X_T = 0] &= \mathbb{E}[X_S] - \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[X_S= 0 \wedge X_T = 1] &= \mathbb{E}[X_T] - \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[ X_S = 0 \wedge X_T = 0] & = \mbox{``remaining probability''}= 1- 3\cdot 1/4 = 1/4\,. \end{align*} We thus complete the proof by showing that $\mathbb{E}[X_S] = \mathbb{E}[X_T] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$. In both calculations we use the identity $\oplus_{i\in A}\: y_i = \frac{1}{2}\left( 1 - \prod_{i\in A} (-1)^{y_i} \right)$. For the former, \begin{align*} \mathbb{E}[X_S] = \mathbb{E}[\oplus_{i\in S}\: y_i ] = \mathbb{E}\left[\frac{1}{2} \left(1- \prod_{i\in S} (-1)^{y_i} \right)\right] = \frac{1}{2} \left(1- \prod_{i\in S} \mathbb{E}[(-1)^{y_i}] \right) = \frac{1}{2}\,. \end{align*} The second to last equality is due to the independence of the random bits $y_i$ and the last equality follows because $y_i$ is an uniform random bit. The same calculation also shows that $\mathbb{E}[X_T] = 1/2$. 
For the latter, \begin{align*} \mathbb{E}[X_SX_T] & = \mathbb{E}[\oplus_{i\in S}\: y_i \cdot \oplus_{i\in T}\: y_i] \\ & = \mathbb{E}\left[\frac{1}{2} \left(1- \prod_{i\in S} (-1)^{y_i} \right)\cdot \frac{1}{2} \left(1- \prod_{i\in T} (-1)^{y_i} \right)\right]\\ &= \frac{1}{4} \left(1- \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i}\right] - \mathbb{E}\left[\prod_{i\in T} (-1)^{y_i}\right] + \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i} \prod_{i\in T} (-1)^{y_i}\right] \right) \\ & = \frac{1}{4} \left(1 + \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i} \prod_{i\in T} (-1)^{y_i}\right] \right) \qquad \mbox{(by independence of $y_i$s)}\\ & = \frac{1}{4} \left(1 + \mathbb{E}\left[\prod_{i\in S\Delta T} (-1)^{y_i} \right] \right) \qquad \mbox{(recall $S\Delta T = S\setminus T \cup T\setminus S$)}\\ & = \frac{1}{4} \qquad \mbox{($S\Delta T \neq \emptyset$ and again using independence of $y_i$s.} \end{align*}",0.1,M1_preference_data_34 "Implement Latent Semantic Indexing by selecting the first x largest singular values of the term document matrix Hint 1: np.linalg.svd(M, full_matrices=False) performs SVD on the matrix $\mathbf{M}$ and returns $\mathbf{K}, \mathbf{S}, \mathbf{D}^T$ - $\mathbf{K}, \mathbf{D}^T$ are matrices with orthonormal columns - $\mathbf{S}$ is a **vector** of singular values in a **descending** order","We have a variable $x_1$ for moitie moitie, a variable $x_2$ for a la tomate, and a variable $x_3$ for Raclette. The linear program becomes \begin{align*} \text{Minimize} \quad &50 x_1 + 75 x_2 + 60 x_3\\ \text{Subject to} \quad &35 x_1 + 0.5 x_2 + 0. 5x_3 \geq 0.5 \\ &60 x_1 + 300 x_2 + 0. 5x_3 \geq 15 \\ &30 x_1 + 20x_2 + 70 x_3 \geq 4 \\ & x_1, x_2, x_3\geq 0 \end{align*}",0.1,M1_preference_data_35 "In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. 
Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the returned cut is a minimum cut with probability at least $1/\binom{n}{2}$. In this problem, we are going to analyze the algorithm in the special case when the input graph is a tree. Specifically, you should show that if the input graph $G=(V,E)$ is a spanning tree, then Karger's algorithm returns a minimum cut with probability $1$. \\ {\em (In this problem you are asked to show that Karger's min-cut algorithm returns a minimum cut with probability $1$ if the input graph is a spanning tree. Recall that you are allowed to refer to material covered in the lecture notes.)}","Any terminating exception. 1. Memory protection violation. 2. Memory fault. ",0.1,M1_preference_data_36 What happens in the reliable broadcast algorithm if the accuracy property of the failure detector is violated?,"Using Claim 7 and Corollary 8 from the Lecture 7 notes, the expected cost of collection $C$ after $d$ executions of Step 3 of the algorithm for set cover given in Lecture 7 notes is at most $d \cdot LP_{OPT}$. Let $X_i$ be a random variable corresponding to the event whether i-th element is covered by the output $C$ or not. Specifically, let $X_i = 1$ if the given constraint is not satisfied (therefore given element is not covered) and $X_i = 0$ if it is satisfied (therefore the element is covered) after $d$ executions of Step 3. Let $X$ denote the total number of constraints that are not satisfied. Therefore we write, $$X = X_1 + X_2 + \dots + X_n.$$ From Claim~9 in the Lecture 7 notes, we know that the probability that a constraint remains unsatisfied after a single execution of Step 3 is at most $\frac{1}{e}$. In addition, from the first step in the proof of Claim~10, the probability that a constraint is unsatisfied after $d$ executions of Step 3 is at most $\frac{1}{e^d}$. 
In other words, we have in our notation that $$\mathbb{P}(X_i = 1) \leq \frac{1}{e^d}.$$ Since each $X_i$ is a Bernoulli random variable, we can write their expectation as $$\mathbb{E}(X_i) \leq \frac{1}{e^d}.$$ We want to bound the probability that more than 10 \% of the elements are not covered (which also means less than 90 \% of the elements are covered). We can use Markov's Inequality to write, \begin{align*} \mathbb{P}\left(X \geq \frac{n}{10}\right) &\leq \frac{\mathbb{E}(X)}{n/10}\\ &= \frac{\mathbb{E}(X_1) + \mathbb{E}(X_2) + \dots + \mathbb{E}(X_n)}{n/10} \\ &\leq \frac{n \cdot \frac{1}{e^d}}{n/10} \\ &= \frac{10}{e^d}. \end{align*} Lastly, similar to Claim 11 in the lecture notes, we will bound the probability of bad events (namely, the cost is high or less than 90 \% of elements are covered). Firstly, we found that expected cost after $d$ executions is at most $d\cdot LP_{OPT}$. We can write using Markov's Inequality that, $$\mathbb{P}(cost \geq 5d\cdot LP_{OPT}) \leq \frac{1}{5}.$$ Secondly, we bound the probability of event that less than 90 \% of elements are covered. We did it above and showed that probability that more than 10 \% of the elements are not covered is at most $\frac{10}{e^d}$. In the worst case, these bad events are completely disjoint. Therefore, the probability that no bad event occurs is at least $1 - \frac{1}{5} - \frac{10}{e^d} > \frac{1}{2}$ for some large constant $d$. Therefore, the algorithm, with probability at least $\frac{1}{2}$, will return a collection of sets that cover at least 90 \% of the elements and has cost at most $5d LP_{OPT} \leq 5d \mbox{OPT}$.",0.1,M1_preference_data_37 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. 
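As a sketch of what aggregate computes, here is a Python stand-in (the real API is Scala's ParSeq; the helper names `aggregate`/`aggregate_seq` and the halving split with a `chunk_size` threshold are illustrative assumptions, not the library's actual scheduling):

```python
# Python stand-in for ParSeq.aggregate (the real API is Scala's).
# z is the start value, f folds one element into a partial result of
# type B, and g merges partial results computed on separate chunks.

def aggregate_seq(chunk, z, f):
    # Sequential fold over a single chunk, as a leaf task would do.
    acc = z
    for x in chunk:
        acc = f(acc, x)
    return acc

def aggregate(xs, z, f, g, chunk_size=2):
    # Split in half recursively, fold leaves with f, merge with g.
    if len(xs) <= chunk_size:
        return aggregate_seq(xs, z, f)
    mid = len(xs) // 2
    return g(aggregate(xs[:mid], z, f, g, chunk_size),
             aggregate(xs[mid:], z, f, g, chunk_size))

add = lambda u, v: u + v
# With z = 0, addition is split-independent: always 6 for [1, 2, 3].
print(aggregate([1, 2, 3], 0, add, add))  # 6
# With z = 1, as in data.aggregate(1)(_ + _, _ + _), each chunk re-adds
# the 1, so the answer depends on the split (8 here vs 7 sequentially).
print(aggregate([1, 2, 3], 1, add, add))  # 8
```

The last call illustrates why a non-neutral z can make the result depend on how aggregate partitions the data.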
Consider the parallel sequence xs containing the three elements x1, x2 and x3. Also consider the following call to aggregate: xs.aggregate(z)(f, g) The above call might potentially result in the following computation: f(f(f(z, x1), x2), x3) But it might also result in other computations. Come up with at least two other computations in terms of f and g that may result from the above call to aggregate. Below are other examples of calls to aggregate. In each case, check if the call can lead to different results depending on the strategy used by aggregate to aggregate all values contained in data down to a single value. You should assume that data is a parallel sequence of values of type BigInt. 1. data.aggregate(1)(_ + _, _ + _)","The result of scanRight1 is not the same as scanLeft1 on the reversed sequence either. Consider the same example as the previous case, but reverse the argument of scanLeft1. We have $$rev(A).scanLeft1(f) = (a_2, f(a_2, a_1))$$ but $$A.scanRight1(f) = (f(a_1, a_2), a_2)$$ With the choice of $f(x, y) := x$, we get $$rev(A).scanLeft1(f) = (a_2, a_1)$$ and $$A.scanRight1(f) = (a_1, a_2)$$ which once again are unequal if $a_1 \not = a_2$.",0.1,M1_preference_data_38 "Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a supertype nor a supertype of T2. What is the correct subtyping relationship between A => (Y => X) and A => (X => Y)?","def cosine_similarity(v1,v2): """""" It computes cosine similarity. :param v1: list of floats, with the vector of a document. 
    :param v2: list of floats, with the vector of a document.
    :return: float
    """"""
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]
        y = v2[i]
        sumxx += x*x
        sumyy += y*y
        sumxy += x*y
    if sumxy == 0:
        sim = 0
    else:
        sim = sumxy/math.sqrt(sumxx*sumyy)
    return sim",0.1,M1_preference_data_39 "The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter version is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons. All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List[Int] and f: Int => Int. Given the following lemmas: (MapNil) Nil.map(f) === Nil (MapCons) (x :: xs).map(f) === f(x) :: xs.map(f) (MapTrNil) Nil.mapTr(f, ys) === ys (MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil)) (NilAppend) Nil ++ xs === xs (ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys) Let us first prove the following lemma: (AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys) We prove it by induction on l. Base case: l is Nil. Therefore, we need to prove: Nil.mapTr(f, y :: ys) === y :: Nil.mapTr(f, ys). What exact sequence of lemmas should we apply to rewrite the left hand-side (Nil.mapTr(f, y :: ys)) to the right hand-side (y :: Nil.mapTr(f, ys))?","Continuous integration should be set up for the main branch, to reduce the likelihood of bugs in the final product.",0.1,M1_preference_data_40 "Assume you work in a team that is developing a weather application that brings together data from several sources. 
Your colleague suggests creating a weather client interface that returns the weather as a string, then a class that gets (fetches from the weather service) and returns the JSON, and a decorator that extracts the weather prediction from that JSON and returns it. What do you think about this approach?","We convince our friend by taking $y_1\geq 0$ multiples of the first constraint and $y_2\geq 0$ multiples of the second constraint so that \begin{align*} 6 x_1 + 14 x_2 + 13 x_3 \leq y_1 ( x_1 + 3x_2 + x_3 ) + y_2 (x_1 + 2x_2 + 4 x_3) \leq y_1 24 + y_2 60\,. \end{align*} To get the best upper bound, we wish to minimize the right-hand side $24 y_1 + 60 y_2$. However, for the first inequality to hold, we need that $y_1 x_1 + y_2 x_1 \geq 6 x_1$ for all non-negative $x_1$ and so $y_1 + y_2 \geq 6$. The same argument gives us the constraints $3y_1 + 2y_2 \geq 14$ for $x_2$ and $y_1 + 4y_2 \geq 13$ for $x_3$. It follows that we can formulate the problem of finding an upper bound as the following linear program (the dual): \begin{align*} \text{Minimize} \quad &24y_1 + 60 y_2\\ \text{Subject to} \quad &y_1 + y_2 \geq 6 \\ & 3y_1 + 2y_2 \geq 14 \\ & y_1 + 4 y_2 \geq 13 \\ & y_1, y_2 \geq 0 \end{align*}",0.1,M1_preference_data_41 "Consider the following case class definitions: case class Node(id: Int) case class Edge(from: Node, to: Node) Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Write a reachable function with the following signature to provide this functionality: def reachable(n: Int, init: Set[Node], edges: List[Edge]): Set[Node] You can assume that n >= 0.","The transducer T1 is built by using the standard operators (concatenation, disjunction and cross-product) and regular expressions available for the transducers. 
For instance: T1 = ([a-z]+) ((\+V\+IndPres\+) x (\+)) (((([12]s) | ([123]p)) x (2)) | ((3s) x (1)) )",0.1,M1_preference_data_42 "Design a polynomial-time algorithm for the matroid matching problem: \begin{description} \item[Input:] A bipartite graph $G=(A \cup B, E)$ and two matroids $\mathcal{M}_A = (A, \mathcal{I}_A)$, $\mathcal{M}_B = (B, \mathcal{I}_B)$. \item[Output:] A matching $M \subseteq E$ of maximum cardinality satisfying: \begin{enumerate} \item[(i)] the vertices $A' = \{a\in A: \mbox{there is a $b\in B$ such that $\{a,b\}\in M$}\}$ of $A$ that are matched by $M$ form an independent set in $\mathcal{M}_A$, i.e., $A'\in \mathcal{I}_A$; and \item[(ii)] the vertices $B' = \{b\in B: \mbox{there is an $a\in A$ such that $\{a,b\}\in M$}\}$ of $B$ that are matched by $M$ form an independent set in $\mathcal{M}_B$, i.e., $B'\in \mathcal{I}_B$. \end{enumerate} \end{description} We assume that the independence oracles for both matroids $\mathcal{M}_A$ and $\mathcal{M}_B$ can be implemented in polynomial-time. Also to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{\textwidth} \textbf{Fact (obtaining a new matroid by copying elements)}. Let $\mathcal{M} = (N, \mathcal{I})$ be a matroid where $N = \{e_1, \ldots, e_n\}$ consists of $n$ elements. Now, for each $i=1,\ldots, n$, make $k_i$ copies of $e_i$ to obtain the new ground set \begin{align*} N' = \{e_1^{(1)}, e_1^{(2)},\ldots, e_1^{(k_1)}, e_2^{(1)}, e_2^{(2)}, \ldots, e_2^{(k_2)}, \ldots, e_n^{(1)},e_n^{(2)}, \ldots, e_n^{(k_n)}\}\,, \end{align*} where we denote the $k_i$ copies of $e_i$ by $e_i^{(1)}, e_i^{(2)},\ldots, e_i^{(k_i)}$. 
Then $(N', \mathcal{I}')$ is a matroid where a subset $I' \subseteq N'$ is independent, i.e., $I' \in \mathcal{I}'$, if and only if the following conditions hold:\\[-1mm] \begin{enumerate} \item[(i)] $I'$ contains at most one copy of each element, i.e., we have $|I' \cap \{e_i^{(1)}, \ldots, e_i^{(k_i)}\}| \leq 1$ for each $i= 1,\ldots, n$; \item[(ii)] the original elements corresponding to the copies in $I'$ form an independent set in $\mathcal{I}$, i.e., if $I' = \{e_{i_1}^{(j_1)}, e_{i_2}^{(j_2)}, \ldots, e_{i_\ell}^{(j_\ell)}\}$ then $\{e_{i_1}, e_{i_2}, \ldots, e_{i_\ell}\} \in \mathcal{I}$.\\ \end{enumerate} Moreover, if the independence oracle of $(N, \mathcal{I})$ can be implemented in polynomial time, then the independence oracle of $(N', \mathcal{I}')$ can be implemented in polynomial time. \end{boxedminipage} \end{center} {\em (In this problem you are asked to design and analyze a polynomial-time algorithm for the matroid matching problem. You are allowed to use the above fact without any proof and to assume that all independence oracles can be implemented in polynomial time. Recall that you are allowed to refer to material covered in the lecture notes.)}","Let us compute the derivative wrt a particular user $u^{\prime}$ and set it to 0 . We get $$ \sum_{u^{\prime} \sim m}\left(f_{u^{\prime} m}-r_{u^{\prime} m}\right)+\lambda b_{u^{\prime}}=0 $$ Note that the $f_{u^{\prime} m}$ contains the $b_{u^{\prime}}$. Solving this equation for $b_{u^{\prime}}$ we get $$ b_{u^{\prime}}=\frac{\sum_{u^{\prime} \sim m}\left(r_{u^{\prime} m}-\left\langle\mathbf{v}_{u^{\prime}}, \mathbf{w}_{m}\right\rangle-b_{m}\right)}{\lambda+\sum_{u^{\prime} \sim m} 1} $$ where $u^{\prime} \sim m$ are the movies rated by $u^{\prime}$.",0.1,M1_preference_data_43 "Assume you are working on a mobile application. You meet a client while out for coffee, who tells you: ""I noticed it's not possible to customize the profile picture. 
I know you have a lot of stuff to do this sprint, but my boss is threatening to switch to another app, could you get this fixed during this sprint?"" In one sentence, give an answer that helps both you and the client.","No, it's not, you have to potentially write a lot of statements that you'll need to manually remove. Using a debugger, your friend could add breakpoints that prints the values, and can change them on the fly.",0.1,M1_preference_data_44 "Assume that some of your colleagues work on an AI-based image generation service, where a user enters a topic, and the AI generates a synthetic photo on that topic. They tell you the following about this service: ""Currently, the user types in the topic they want to see images for, and the client app sends a request to the server with the user ID and the indicated topic. The server generates an image, which takes a second or so, and sends it to the client app, which then requests another image on the same topic, and so on, until the app has received 9 images. It then displays these in a 3x3 grid. The user now looks at the 9 images and, if they see an inappropriate one, they click on a button that causes the app to send a review request to the server. Human moderators then process each report, and data scientists tweak the AI model to avoid generating images similar to the ones reported as inappropriate. Users then get a notification that their report was processed. The whole reporting process typically takes a day."" Now assume you can change the server's interface. Explain in 1-2 sentences an alternative way to make the app display the 9 images faster:"," alloc is used to give fresh registers to each routine without having to push explicitly values on the stack (and popping them back prior to a return). It is usually placed at the beginning of each routine. 
The first parameter indicates how many values will be hidden on the next br.call and the second how many values will be used for returning values from called routines. The compiler needs to determine the largest number of registers needed by the routine (first parameter) and the maximum number of returned values from all the possibly called routines (second parameter). On executing alloc, the processor simply memorizes the values to be used in the successive call instruction to change the offset.",0.1,M1_preference_data_45 "We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. Define an operation hull2 that takes two Rectangles, r1 and r2, and computes as the result the smallest Rectangle containing both r1 and r2.","semantic ambiguity (two different meanings, polysemy, homonymy (homography)). Word Sense Disambiguation (WSD) through the information from the context (e.g. cohesion).",0.1,M1_preference_data_46 "We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{D}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and the response variable $y_{n} \in \mathbb{R}$. Compute the closed-form solution $\mathbf{w}_{\text {ridge }}^{\star}$ of this problem, providing the required justifications. State the final result using the data matrix $\mathbf{X} \in \mathbb{R}^{N \times D}$.","1. Predicates are 1-bit registers associated with an instruction. If the predicate is true, the instruction commits the result to the register file; if it is false, the result is dropped. 2. 
It allows executing both sides of a branch; when there are enough available resources, one does not suffer from the branch penalty, which, in VLIW processors (in the absence of speculative execution after branches), may often be significant. 3. It also makes sense in RISC processors, and indeed some have partial or complete forms of predication (e.g., ARM). In particular, for branches that are very hard to predict, it may be that executing instructions on both control flow branches costs less than the average misprediction penalty. ",0.1,M1_preference_data_47 "If process i fails, then eventually all processes j≠i fail Is the following true? If some process j≠i does not fail, then process i has not failed","Simply use that if $g(\xv)g^\star(\xv)<0$ we get that $|2\eta(\xv)-1|\leq |2\eta(\xv)-1-b(g(\xv))|$. Indeed, either $\eta(\xv)>1/2$ and $g(\xv)<0$ and thus $b(g(\xv))<0$, or $\eta(\xv)<1/2$ and $g(\xv)>0$ and thus $b(g(\xv))>0$, where we have used that $b$ preserves signs. ",0.1,M1_preference_data_48 "Homer, Marge, and Lisa Simpson have decided to go for a hike in the beautiful Swiss Alps. Homer has greatly surpassed Marge's expectations and carefully prepared to bring $n$ items whose total size equals the capacity of his and his wife Marge's two knapsacks. Lisa does not carry a knapsack due to her young age. More formally, Homer and Marge each have a knapsack of capacity $C$, there are $n$ items where item $i=1, 2, \ldots, n$ has size $s_i >0$, and we have $\sum_{i=1}^n s_i = 2\cdot C$ due to Homer's meticulous preparation. However, being Homer after all, Homer has missed one thing: although the items fit perfectly in the two knapsacks fractionally, it might be impossible to pack them because items must be assigned integrally! 
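Homer's predicament can be checked by brute force on a toy instance (the sizes and the `exact_splits` helper below are invented for illustration; they are not part of the problem statement):

```python
from itertools import product

# Brute-force check of Homer's predicament on a toy instance: the items
# fill both knapsacks exactly in the fractional sense (x_iH = x_iM = 1/2
# is feasible), yet no integral assignment loads each knapsack to C.
sizes = [1, 3]
C = sum(sizes) // 2  # C = 2 since the sizes sum to 2C

def exact_splits(sizes, C):
    # All integral assignments of every item to H or M that load the
    # H knapsack (and hence, since sizes sum to 2C, also M) to exactly C.
    return [assign for assign in product("HM", repeat=len(sizes))
            if sum(s for s, a in zip(sizes, assign) if a == "H") == C]

print(exact_splits(sizes, C))      # [] -- no integral split exists
print(exact_splits([1, 1, 2], 2))  # but other instances do split exactly
```

This is exactly the gap Lisa's extreme-point argument closes: at most one item ever needs to be left out of the two knapsacks.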
Luckily Lisa has studied linear programming and she saves the family holiday by proposing the following solution: \begin{itemize} \item Take \emph{any} extreme point $x^*$ of the linear program: \begin{align*} x_{iH} + x_{iM}& \leq 1 \qquad \quad \mbox{for all items $i=1,2,\ldots, n$}\\ \sum_{i=1}^n s_i x_{iH} & = C \\ \sum_{i=1}^n s_i x_{iM} & = C \\ 0 \leq x_{ij} &\leq 1 \qquad \quad \mbox{for all items $i=1,2, \ldots, n$ and $j\in \{H,M\}$}. \end{align*} \item Divide the items as follows: \begin{itemize} \item Homer and Marge will carry the items $\{i: x^*_{iH} = 1\}$ and $\{i: x^*_{iM}=1\}$, respectively. \item Lisa will carry any remaining items. \end{itemize} \end{itemize} {Prove} that Lisa needs to carry at most one item. \\[1mm] {\em (In this problem you are asked to give a formal proof of the statement that Lisa needs to carry at most one item. You are not allowed to change Lisa's solution for dividing the items among the family members. Recall that you are allowed to refer to material covered in the lecture notes.) }","# adding album number two_ormore.loc[:,'album_number'] = (two_ormore .sort_values(by=['releaseyear', 'reviewdate']) .groupby('artist')['album'] .transform(lambda x : range(len(x)))) # example artist: two_ormore.sort_values(by=['releaseyear', 'reviewdate'])\ .query('artist == ""Young Thug""')[['releaseyear','reviewdate','album_number']]",0.1,M1_preference_data_49 "You have been hired to evaluate an email monitoring system aimed at detecting potential security issues. The targeted goal of the application is to decide whether a given email should be further reviewed or not. You have been given the results of three different systems that have been evaluated on the same panel of 157 different emails. 
Here are the classification errors and their standard deviations: system 1 (error=0.079, stddev=0.026) system 2 (error=0.081, stddev=0.005) system 3 (error=0.118, stddev=0.004) What should be the minimal size of a test set to ensure, at a 95% confidence level, that a system has an error 0.02 lower (absolute difference) than system 3? Justify your answer.","As evident from the confusion matrix, the disparity in class proportions does indeed hurt the model. Almost all (99%) of the examples are classified as class 0. What's more, 75% of the articles in class 1, are predicted to belong to class 0. One way to address this is to employ cost-sensitive learning, i.e., using class-weight='balanced' while training the model. Another way could be up/down sampling minority/majority class. 'SMOTE' is a very popular technique of oversampling the minority class that can be employed here.",0.1,M1_preference_data_50 "What is the problem addressed by a Part-of-Speech (PoS) tagger? Why isn't it trivial? What are the two main difficulties?","['answer should fit the regular expression: 10^8 + 10^15', 'answer should fit the regular expression: (10\\^8|10\\^\\{8\\}|10\\^(8)|10⁸) ?\\+ ?(10\\^15|10\\^\\{15\\}|10\\^(15)|10¹⁵)', 'answer should fit the regular expression: (10\\^15|10\\^\\{15\\}|10\\^(15)|10¹⁵) ?\\+ ?(10\\^8|10\\^\\{8\\}|10\\^(8)|10⁸)']",0.1,M1_preference_data_51 "If process i fails, then eventually all processes j≠i fail Is the following true? If no process j≠i fails, nothing can be said about process i","You could change the interface such that all 9 images are batched together, this reduces the communication.",0.1,M1_preference_data_52 "In an automated email router of a company, we want to make the distinction between three kind of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? 
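The conditional-independence assumption behind the router can be sketched numerically (all probabilities, counts, and helper names below are invented for illustration; this is not the router's actual model):

```python
import math

# Toy sketch of the Naive Bayes scoring rule for the email router: the
# "naive" assumption lets P(w1..wn | class) factor into a product of
# per-word probabilities, so a document is scored class by class.

def nb_score(words, prior, word_probs):
    # log P(class) + sum_i log P(w_i | class), with a tiny floor for
    # words never seen in the class (a stand-in for real smoothing).
    score = math.log(prior)
    for w in words:
        score += math.log(word_probs.get(w, 1e-6))
    return score

priors = {"technical": 0.3, "financial": 0.3, "irrelevant": 0.4}
word_probs = {
    "technical": {"processor": 0.02, "network": 0.015, "sales": 0.001},
    "financial": {"processor": 0.001, "network": 0.001, "sales": 0.02},
    "irrelevant": {},
}

email = ["processor", "network"]
best = max(priors, key=lambda c: nb_score(email, priors[c], word_probs[c]))
print(best)  # technical
```

The factorization ignores any dependence between the words, which is what makes the classifier "naive".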
We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. It has now lined up IP transit for streaming videos at a few gigabits per second Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. For the first text, give an example of the corresponding output of the NLP pre-processor steps.", False,0.1,M1_preference_data_53 From a corpus of \( N \) occurences of \( m \) different tokens:How many different 4-grams (values) could you possibly have?,x => if s(x) then 1 else 0,0.1,M1_preference_data_54 "A company active in automatic recognition of hand-written documents needs to improve the quality of their recognizer. This recognizer produces sets of sequences of correct English words, but some of the produced sequences do not make any sense. For instance the processing of a given hand-written input can produce a set of transcriptions like: 'A was salmon outer the does', 'It was a afternoon nice sunny', and 'I Thomas at mice not the spoon'. What is wrong with such sentences? NLP techniques of what level might allow the system to select the correct one(s)? What would be the required resources?","Average degree is not recommended as the degree distribution of real-world networks usually follows a powerlaw. Summarizing powerlaws with average values is not a good idea, as there is a long tail, and there are many nodes that have very high degree. 
Instead, median is a better choice.",0.1,M1_preference_data_55 "What is modulo scheduling and what are its benefits? What does it apply to? What is its goal? In which respect is it superior to simpler techniques with the same goal?","First, let us simplify the situation a little by noticing that with probability $1$, all elements $h(i)$ for $i \in U$ are different. This is because $\Pr[h(i) = h(j)] = 0$ for $i \ne j$ (recall that each $h(i)$ is uniform on the interval $[0,1]$). Given this, let us see where $\min_{i \in A \cup B} h(i)$ is attained: \begin{itemize} \item if it is attained in $A \cap B$, then $h_A = h_B = h_{A \cup B} = h_{A \cap B}$, \item otherwise, say it is attained in $A \setminus B$: then $h_A < h_B$. \end{itemize} Therefore the event $h_A = h_B$ is (almost everywhere) equal to $h_{A \cup B} = h_{A \cap B}$. Furthermore, notice that for any set $S \subseteq U$ and any $i \in S$ we have $\Pr[h(i) = h_S] = 1/|S|$ due to symmetry. Therefore \[ \Pr[h_A = h_B] = \Pr[h_{A \cap B} = h_{A \cup B}] = \sum_{i \in A \cap B} \Pr[h(i) = h_{A \cup B}] = |A \cap B| \cdot \frac{1}{|A \cup B|} = J(A,B). \]",0.1,M1_preference_data_56 "Assume you are writing server-side code for an online shop. The code will be invoked over HTTP from a mobile app. Your current code is as follows: public class ShoppingCart { public void buy(Product product, int quantity) { if (product == null) { throw new IllegalArgumentException(""product cannot be null""); } if (quantity < 1) { throw new IllegalArgumentException(""quantity must be at least 1""); } int price = product.getUnitPrice() * quantity; int discount = computeDiscount(product, quantity); int shippingFees = computeShippingFees(product, quantity); int totalPrice = price - discount + shippingFees; // this triggers a call to the actual credit card processor CreditCardProcessor.billCurrentUser(totalPrice); } private int computeDiscount(Product product, int quantity) { // ... discount computation logic ... 
} private int computeShippingFees(Product product, int quantity) { // ... shipping fees computation logic ... } } A colleague states that a null product should throw a checked exception, not an ""IllegalArgumentException"", because users might input bad data in the app. Explain in 1 sentence whether this is a good idea and why or why not.","Any NLP application that requires the assessment of the semantic proximity between textual entities (text, segments, words, ...) might benefit from the semantic vectorial representation. Information retrieval is of course one of the prototypical applications illustrating the potentiality of the VS techniques. However, many other applications can be considered: \begin{itemize} \item automated summarization: the document to summarize is split into passages; each of the passages is represented in a vector space and the passage(s) that is the 'most central' in the set of vector thus produced are taken as good candidates for the summary to generate; \item semantic desambiguisation: when polysemic words (such as 'pen' which can be a place to put cow or a writing instrument) are a problem -for example in machine translationvectorial representations can be generated for the different possible meanings of a word (for example from machine readable disctionalires) and used to desambiguate the occurrences of an ambiguous word in documents; \item automated routing of messages to users: each user is represented by the vector representing the semantic content of the messages s/he has received so far, and any new incoming message is routed only to those users the representative vector of which is enough similar to the vector representing the content of the incoming message; \item text categorization or clustering \item ... \end{itemize}",0.1,M1_preference_data_57 "Given a document collection with a vocabulary consisting of three words, $V = {a,b,c}$, and two documents $d_1$ = aabc and $d_2 = abc$. The query is $q = ab$. 
Using smoothed probabilistic retrieval (with $\lambda=0.5$), is it possible to enforce both a ranking $d_1 > d_2$ and $d_2 > d_1$ by adding suitable documents to the collection? If yes, give examples of such documents to be added; if no, provide an argument why this cannot be the case.","1. Rotating registers: A register file with a hardware managed offset added to the querying addresses, to rename registers automatically across loop iterations. 2. (Rotating) predicates: to enable only the active stages of the loop. 3. Loop count register: Tracks the number of loop iterations remaining to be done. 4. Epilogue count: when the loop count reaches zero, keep the loop active for some cycles during the epilogue until the pipeline is flushed. 5. Special control flow instructions to manage the loop execution.",0.1,M1_preference_data_58 "Assume you decide to contribute to an open source project, by adding a feature to an existing class of the project. The class uses an underscore at the beginning of names then ""camelCase"" for private properties such as ""_likeThis"", but you find this odd because you're used to the ""snake case"" ""like_this"". Which option would you choose for the name of the new private property that you will add?","from sklearn.metrics import mean_squared_error from math import sqrt def rmse(prediction, ground_truth): prediction = prediction[ground_truth.nonzero()].flatten() ground_truth = ground_truth[ground_truth.nonzero()].flatten() return sqrt(mean_squared_error(prediction, ground_truth))",0.1,M1_preference_data_59 "For this homework you will use a dataset of 18,403 music reviews scraped from Pitchfork¹, including relevant metadata such as review author, review date, record release year, review score, and genre, along with the respective album's audio features pulled from Spotify's API.
The data consists of the following columns: artist, album, recordlabel, releaseyear, score, reviewauthor, reviewdate, genre, key, acousticness, danceability, energy, instrumentalness, liveness, loudness, speechiness, valence, tempo. Create a new column 'album_number' which indicates how many albums the artist has produced before this one (before the second album, the artist has already produced one album).","$\text{Var}[\wv^\top \xx] = \frac1N \sum_{n=1}^N (\wv^\top \xx_n)^2$",0.1,M1_preference_data_60 Implement RMSE score based on the following formula. \begin{equation} \mathit{RMSE} =\sqrt{\frac{1}{N} \sum_i (r_i -\hat{r_i})^2} \end{equation} You can use the mean_squared_error function from sklearn.metrics.,"Every process uses TRB to broadcast its proposal. Let p be any process; eventually every correct process either delivers p’s proposal or ⊥ (if p fails). Eventually, every correct process has the same set of proposals (at least one is not ⊥, since not every process crashes). Processes use a shared but arbitrary function to extract a decision out of the set of proposals (e.g., sort alphabetically and pick the first).",0.1,M1_preference_data_61 "Give some arguments justifying why evaluation is especially important for NLP. In particular, explain the role of evaluation when a corpus-based approach is used.",If you do not get an accuracy of at least 90 percent then you are not really doing anything since you can get ten percent by simply always outputting 0.,0.1,M1_preference_data_62 "Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_i=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach.
The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: $sim(q,d_j) = \sum^m_{i=1} P(k_i|d_j)P(q|k_i)$ (2) Note that the model described in Question (a) provides a probabilistic interpretation for vector space retrieval where weights are interpreted as probabilities . Compare to the probabilistic retrieval model based on language models introduced in the lecture and discuss the differences.","Total order property: Let m1 and m2 be any two messages and suppose p and q are any two correct processes that deliver m1 and m2. If p delivers m1 before m2, then q delivers m1 before m2. This allows a scenario where faulty process p broadcasts messages 1, 2, 3, and correct processes a, b, c behave as follows: - Process a delivers 1, then 2. - Process b delivers 3, then 2. - Process c delivers 1, then 3.",0.1,M1_preference_data_63 "In this problem, we consider a generalization of the min-cost perfect matching problem. 
The generalization is called the \emph{min-cost perfect $b$-matching problem} and is defined as follows: \begin{description} \item[Input:] A graph $G = (V,E)$ with edge costs $c: E \rightarrow \mathbb{R}$ and degree bounds $b: V \rightarrow \{1,2, \ldots, n\}$. \item[Output:] A subset $F \subseteq E$ of minimum cost $\sum_{e\in F} c(e)$ such that for each vertex $v\in V$: \begin{itemize} \item The number of edges incident to $v$ in $F$ equals $b(v)$, i.e., $|\{e\in F : v \in e\}| = b(v)$. \end{itemize} \end{description} Note that min-cost perfect matching problem is the special case when $b(v) =1$ for all $v\in V$. An example with general $b$'s is as follows: \begin{tikzpicture} \node at (1, 2.8) {Input}; \node[vertex] (u1) at (0,2) {$u_1$}; \node[vertex] (u2) at (0,0) {$u_2$}; \node[vertex] (v1) at (2,2) {$v_1$}; \node[vertex] (v2) at (2,0) {$v_2$}; \node[left = 0.1cm of u1] {$b(u_1) = 1$}; \node[left = 0.1cm of u2] {$b(u_2) = 2$}; \node[right = 0.1cm of v1] {$b(v_1) = 1$}; \node[right = 0.1cm of v2] {$b(v_2) = 2$}; \draw (u1) edge[ultra thick] (v1) edge (v2); \draw (u2) edge (v1) edge[ultra thick] (v2); \begin{scope}[xshift=7cm] \node at (1, 2.8) {Output}; \node[vertex] (u1) at (0,2) {$u_1$}; \node[vertex] (u2) at (0,0) {$u_2$}; \node[vertex] (v1) at (2,2) {$v_1$}; \node[vertex] (v2) at (2,0) {$v_2$}; \draw (u1) edge (v2); \draw (u2) edge (v1) edge[ultra thick] (v2); \end{scope} \end{tikzpicture} On the left, we illustrate the input graph with the degree bounds (the $b$'s). Thin and thick edges have cost $1$ and $2$, respectively. On the right, we illustrate a solution of cost $1+1 +2 = 4$. It is a feasible solution since the degree of each vertex $v$ equals $b(v)$ in the solution. 
Your task is to prove the following statement: If the input graph $G=(V,E)$ is bipartite then any extreme point solution to the following linear programming relaxation (that has a variable $x_e$ for every edge $e\in E$) is integral: \begin{align*} \textbf{Minimize} \hspace{0.8cm} & \sum_{e\in E} c(e) x_e\\ \textbf{subject to}\hspace{0.8cm} & \sum_{e\in E: v\in e} x_e = b(v) \qquad \mbox{for all $v\in V$}\\ \hspace{0.8cm} & \hspace{0.9cm} 0 \leq x_e \leq 1 \hspace{0.9cm} \mbox{for all $e\in E$}. \end{align*} {\em (In this problem you are asked to prove that every extreme point solution to the above linear program is integral assuming that the input graph $G$ is bipartite. Recall that you are allowed to refer to material covered in the lecture notes.)}","def communities_modularity(G, nodes_community): ''' input: G:nx.Graph nodes_community:{node_id:community_id} output: Q (modularity metric) ''' Q = 0 m = len(G.edges) for node_i in G.nodes: for node_j in G.nodes: if nodes_community[node_i] == nodes_community[node_j]: Q += G.number_of_edges(node_i, node_j) - G.degree[node_i]*G.degree[node_j]/(2*m) Q = Q/(2*m) return Q ",0.1,M1_preference_data_64 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! Your column generation system has become quite successful and you've managed to automate most of your job simply by typing your editor's title pitches into your model to produce your column every day. 
Two years later, during the COVID--25 pandemic, your editor proposes to use your system to generate an information sheet about the pandemic for anyone looking for information about symptoms, treatments, testing sites, medical professionals, etc. Given the similarity to a previous pandemic many years before, COVID--19, you train your model on all news articles published about COVID--19 between the years of 2019--2022. Then, you generate the information page from your trained model. Give an example of a potential harm that your model could produce from the perspective of human interaction harms.","As a visually impaired user, I want my reading assistant to be able to read the jokes out loud, so that I can make my friends laugh.",0.1,M1_preference_data_65 "The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X o \R$ satisfying for all $\xv\in\mathcal X$: Let $b: \R o \R$ a function that preserves the sign, i.e., $b(\R_+^*)\subseteq \R_+^*$ and $b(\R_-^*)\subseteq \R_-^*$. Show that egin{align*} \mathcal L (g)-\mathcal L^\star \leq \mathbb E[|2\eta(X)-1-b(g(X))|] \end{align*} egin{align*} \mathcal L (g)-\mathcal L^\star = \mathbb E[oldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}|2\eta(X)-1|]. \end{align*} ","1. Continuous integration, even paired with tests, cannot guarantee the code ""never has bugs"" 2. Feature branches should be allowed to fail tests, otherwise developers will not commit enough and risk losing data",0.1,M1_preference_data_66 "Assume that you are part of a team developing a mobile app using Scrum. When using the app, you identified multiple bugs and features which you think should be implemented, and took some notes. You want to share these with the Product Owner. 
Your backlog of tasks includes the following task: - [ ] As a registered user, I can click on the settings button from the navigation pane, so I can access the settings screen from everywhere in the app. Is this item suitable to be submitted to the Product Backlog? Why?","The load should become an advanced load and there must be a \texttt{chk.a} instruction right after the store to \texttt{r3}; the recovery code consists of the load simply repeated and of the incrementation of \texttt{r1} also repeated.",0.1,M1_preference_data_67
Secondly, there should be no collision with one of the store addresses (and in this case the load can be sent to memory) or, if there is any, the data for the latest colliding store must be known (and in this case this data value is returned as the result).",0.1,M1_preference_data_69 "Consider an HMM Part-of-Speech tagger, the tagset of which contains, among others: DET, N, V, ADV and ADJ, and some of the parameters of which are: $$ \begin{gathered} P_{1}(\mathrm{a} \mid \mathrm{DET})=0.1, \quad P_{1}(\text {accurately} \mid \mathrm{ADV})=0.1, \quad P_{1}(\text {computer} \mid \mathrm{N})=0.1, \\ P_{1}(\text {process} \mid \mathrm{N})=0.095, \quad P_{1}(\text {process} \mid \mathrm{V})=0.005, \\ P_{1}(\text {programs} \mid \mathrm{N})=0.080, \quad P_{1}(\text {programs} \mid \mathrm{V})=0.020, \end{gathered} $$ \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & \multicolumn{5}{|l|}{$\mathrm{Y} \rightarrow$} \\ \hline & & $\mathrm{DET}$ & N & V & ADJ & $\mathrm{ADV}$ \\ \hline \multirow[t]{5}{*}{$X \downarrow$} & $\mathrm{DET}$ & 0 & 0.55 & 0 & 0.02 & 0.03 \\ \hline & $\mathrm{N}$ & 0.01 & 0.10 & 0.08 & 0.01 & 0.02 \\ \hline & V & 0.16 & 0.11 & 0.06 & 0.08 & 0.08 \\ \hline & ADJ & 0.01 & 0.65 & 0 & 0.05 & 0 \\ \hline & ADV & 0.08 & 0.02 & 0.09 & 0.04 & 0.04 \\ \hline \end{tabular} \end{center} $P_{2}(\mathrm{Y} \mid \mathrm{X}):\left(\right.$ for instance $\left.P_{2}(\mathrm{~N} \mid \mathrm{DET})=0.55\right)$ and: $P_{3}(\mathrm{DET})=0.20, \quad P_{3}(\mathrm{~N})=0.06, \quad P_{3}(\mathrm{~V})=0.08, \quad P_{3}(\mathrm{ADV})=0.07, \quad P_{3}(\mathrm{ADJ})=0.02$. What would be the output of the HMM PoS tagger on the above sentence? Fully justify your answer. 
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\mathrm{x}$ & $\mathrm{y}$ & $\mathrm{x|N}$ & process|x & y|x & programs|y & ADV|y \\ \hline\hline $\mathrm{N}$ & $\mathrm{N}$ & 10 & 95 & 10 & 80 & 2 \\ \hline $\mathrm{V}$ & $\mathrm{N}$ & 8 & 5 & 11 & 80 & 2 \\ \hline $\mathrm{N}$ & $\mathrm{V}$ & 10 & 95 & 8 & 20 & 8 \\ \hline $\mathrm{V}$ & $\mathrm{V}$ & 8 & 5 & 6 & 20 & 8 \\ \hline \end{tabular} \end{center}","We prove that all the extreme points are integral by contradiction. To that end, assume that there exists an extreme point $x^*$ that is not integral. Let $G=(V_1,V_2,E)$ be the given bipartite graph and let $E_f= \{e \in E \text{ }|\text{ } 0 < x_e^* < 1\}$. If $E_f$ contains a cycle, then the proof follows in the same way as the proof in the lecture notes. Therefore, we assume that $E_f$ does not contain any cycles. Consider any maximal path in $E_f$; let it have vertices $v_1,...,v_{k}$ and edges $e_1,...,e_{k-1}$. Choose any $\epsilon$ such that $0 < \epsilon < \min(x^*_{e_i},1-x^*_{e_i} : i = 1, ..., k-1)$. Note that, since $E_f$ only contains edges that are fractional, such an $\epsilon$ exists. Let $y,z$ be the following two solutions to the linear program: \[ y = \left\{ \begin{array}{l l} x^*_e + \epsilon & \quad \text{if } e \in \{e_1,e_3,e_5,e_7,...\} \\ x^*_e - \epsilon & \quad \text{if } e \in \{e_2,e_4,e_6,e_8,...\} \\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] \[ z = \left\{ \begin{array}{l l} x^*_e - \epsilon & \quad \text{if } e \in \{e_1,e_3,e_5,e_7,...\} \\ x^*_e + \epsilon & \quad \text{if } e \in \{e_2,e_4,e_6,e_8,...\} \\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] One can see that $x^* = \frac{y+z}{2}$. We continue by showing that $y$ is a feasible solution to the linear program. One can see that for any vertex $v \in G$ except $v_1$ and $v_k$ we have $\sum_{e \in \delta(v)} y_e = \sum_{e \in \delta(v)} x^*_e $. So, we only need to show that the linear program constraint holds for $v_1$ and $v_k$.
Let us first state two observations. First, by the definition of $\epsilon$, we have that $0 \leq x^*_{e_1}+\epsilon \leq 1$, $0 \leq x^*_{e_1}-\epsilon \leq 1$, $0 \leq x^*_{e_{k-1}}+\epsilon \leq 1$, and $0 \leq x^*_{e_{k-1}}-\epsilon \leq 1$. Second, since the path is maximal and $E_f$ does not contain any cycles, the degrees of $v_1$ and $v_k$ in $E_f$ are both one. Therefore $\sum_{e \in \delta(v_1)} y_e = y_{e_1}$ and $\sum_{e \in \delta(v_k)} y_e = y_{e_{k-1}}$. Putting together the previous two observations, we get that the linear program constraint also holds for $v_1$ and $v_k$, so $y$ is a feasible solution. We can similarly show that $z$ is also a feasible solution. This shows that we can write $x^*$ as a convex combination of $y$ and $z$, which contradicts the fact that $x^*$ is an extreme point.",0.1,M1_preference_data_70 "Assume you're working for a startup that develops a university management app. You just received a description of what the app should do: > This app will be the administrative backbone of the university. > Almost all staff will use it. > Human Resources will register each student, including their personal details, and use the system to ensure each student follows the rules concerning the duration of studies, the number of courses that must be taken, the payment of all applicable fees... > Professors will use the app to input grades and to send informational messages to students in their courses. > Students will be able to see the list of courses and register for a course. > Staff members will also be able to update their personal details, including their banking coordinates for their salary. Write a user story, in a single sentence using the below format, that summarizes this conversation: > As a student, I want to ... so that ... Your story must contain all necessary information and only that information.",['1'],0.1,M1_preference_data_71 "Let $A \in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$ and $c\in \mathbb{R}^n$. 
Consider the following linear program with $n$ variables: \begin{align*} \textbf{maximize} \hspace{0.8cm} & c^Tx \\ \textbf{subject to}\hspace{0.8cm} & Ax =b \\ \hspace{0.8cm} & x \geq 0 \end{align*} Show that any extreme point $x^*$ has at most $m$ non-zero entries, i.e., $|\{i: x^*_i > 0 \}| \leq m$. \\[-0.2cm] \noindent \emph{Hint: what happens if the columns corresponding to non-zero entries in $x^*$ are linearly dependent?}\\[-0.2cm] {\small (If you are in a good mood you can prove the following stronger statement: $x^*$ is an extreme point if and only if the columns of $A$ corresponding to non-zero entries of $x^*$ are linearly independent.)}","We assume that we cannot directly compute $\phi(\mathbf{x})$. The complexity would be too high. Instead, we will now apply the kernel trick to this problem:",0.1,M1_preference_data_72 What is the formal relation between accuracy and the error rate? In which case would you recommend to use the one or the other?,"This is a compatibility break, the method should be deprecated instead and removed in some future release after it has been deprecated for some time.",0.1,M1_preference_data_73 "Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List). def contains[A](l: Iterable[A], elem: A): Boolean = val n = l.size if n <= 5 then for i <- l do if i == elem then return true false else val (p0, p1) = parallel( contains(l.take(n / 2), elem), contains(l.drop(n / 2), elem) ) p0 || p1 Let $n$$n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. What is the asymptotic depth of contains if it is called on a List?","Let $y$ be an optimal dual solution. By complementary slackness we have that for each set $S$, either $x^*_s = 0$ or $\sum_{e\in S} y_e = c(S)$. Let us now compute the cost of our algorithm. The cost of the algorithm is $\sum_{S:x^*_S > 0} c(S)$. 
By complementary slackness we get that \[ \sum_{S:x^*_S > 0} c(S) = \sum_{S:x^*_S > 0} \sum_{e\in S} y_e \leq \sum_{S} \sum_{e\in S} y_e = \sum_{e \in U} y_e \sum_{S \ni e} 1 \le \sum_{e\in U} f\cdot y_e. \] We also know that (since $y$ is a feasible dual solution) $\sum_{e\in U} y_e \leq OPT$. Therefore the cost of the above algorithm is at most $f\cdot OPT$.",0.1,M1_preference_data_74 "Prove that if a^2 is even, a is even.","The colleague should start off by profiling first. He might have an idea to improve the performance of a component of the app, but it might not be the performance bottleneck. Thus, the end user won't notice any improvement.",0.1,M1_preference_data_75 "In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate the quantity \begin{align*} Q = \sum_{i=1}^n (x_i + y_i)^2\,. \end{align*} A trivial solution is for Alice to transfer all of her string $x$ to Bob who then computes $Q$ exactly. However this requires Alice to send $\Theta(n \log n)$ bits of information to Bob. In the following, we use randomization and approximation to achieve a huge improvement on the number of bits transferred from Alice to Bob. Indeed, for a small parameter $\epsilon > 0$, your task is to devise and analyze a protocol of the following type: \begin{itemize} \item On input $x$, Alice uses a randomized algorithm to compute a message $m$ that consists of $O(\log (n)/\epsilon^2)$ bits. She then transmits the message $m$ to Bob. \item Bob then, as a function of $y$ and the message $m$, computes an estimate $Z$.
\end{itemize} Your protocol should ensure that \begin{align} \label{eq:guaranteeStream} \Pr[| Z - Q| \geq \epsilon Q] \leq 1/3\,, \end{align} where the probability is over the randomness used by Alice.\\ {\em (In this problem you are asked to (i) explain how Alice computes the message $m$ of $O(\log(n)/\epsilon^2)$ bits (ii) explain how Bob calculates the estimate $Z$, and (iii) prove that the calculated estimate satisfies~\eqref{eq:guaranteeStream}. Recall that you are allowed to refer to material covered in the lecture notes.) }","The algorithm is as follows: \begin{itemize} \item If $x_1 \geq W$, then do the exchange on day $1$ and receive $x_1$ Swiss francs. \item Otherwise, do the exchange on day $2$ and receive $x_2$ Swiss francs. \end{itemize} We now analyze its competitiveness. If $x_1 \geq W$, then our algorithm gets at least $W$ Swiss francs. Optimum is at most $W^2$ and so we are $1/W$ competitive. Otherwise if $x_1 < W$ then we get $x_2 \geq 1$ Swiss francs which is $x_2/ \max(x_2, x_1) \geq 1/W$ competitive.",0.1,M1_preference_data_76 "You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline Price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.","1.
The adjacency graph has ones everywhere except for (i) no edges between \texttt{sum} and \texttt{i}, and between \texttt{sum} and \texttt{y\_coord}, and (ii) five on the edge between \texttt{x\_coord} and \texttt{y\_coord}, and two on the edge between \texttt{i} and \texttt{y\_coord}. 2. Any of these solutions should be optimal either as shown or reversed: - \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i}, \texttt{j}, \texttt{sum} - \texttt{j}, \texttt{sum}, \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i} - \texttt{sum}, \texttt{j}, \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i} 3. Surely, this triad should be adjacent: \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i}.",0.1,M1_preference_data_77 "Your colleague wants your opinion on a module design question. They are developing a service that recommends hikes near users based on the weather, and they think the module should take as input a weather service, a service that lists hikes, a function that sorts hikes by length, and outputs an array of hikes. What do you think? (the answer should make it possible to have automated tests for the module)","def compute_precision_at_k(retrieved_tweets, gt, k=5): """""" It computes the precision score at a defined set of retrieved documents (k). :param predict: list of predictions :param gt: list of actual relevant data :param k: int :return: float, the precision at a given k """""" results = retrieved_tweets.merge(gt, how=""outer"", on=""id"") return np.array(results[:k]['relevant'].tolist()).mean()",0.1,M1_preference_data_78 Implement the function `check_words` that checks if the words of a string have common words with a list. Write your code in Python.
Your code should be agnostic to lower/upper case.,"df[""authors_publications_last""] = df[""authors_publications""].apply(lambda a:int(str(a).split("";"")[-1])) df[""authors_citations_last""] = df[""authors_citations""].apply(lambda a: int(str(a).split("";"")[-1])) df[""reputation""] = np.log10(df[""authors_citations_last""]/df[""authors_publications_last""] + 1)",0.1,M1_preference_data_79 "Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. Assume that our objective is given by $$ \frac{1}{2} \sum_{u \sim m}\left(f_{u m}-r_{u m}\right)^{2}+\frac{\lambda}{2}\left[\sum_{u \in \mathbf{U}}\left(b_{u}^{2}+\left\|\mathbf{v}_{u}\right\|^{2}\right)+\sum_{m \in \mathbf{M}}\left(b_{m}^{2}+\left\|\mathbf{w}_{m}\right\|^{2}\right)\right] $$ where $\lambda>0$. Here $\mathbf{U}$ denotes the set of all users, $M$ the set of all movies, and $u \sim m$ represents the sum over all $(u, m)$ pairs for which a rating exists. Write the optimal values of $b_{u}$, provided that all other values are fixed.","Using meta data of the movie as additional information to encode the similarity, perhaps approximating the corresponding weight as a linear combination of existing movies based on their similarities in terms of meta information.",0.1,M1_preference_data_80 "We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. How can the result be computed in parallel? 
Which properties of hull2 need to hold to make the solution correct? Prove these properties for hull2.","Suppose that completeness is violated. Then, the processes might not be relaying messages they should be relaying. This may violate agreement. For instance, assume that only a single process p1 BEB-delivers (hence RB-delivers) a message m from a crashed process p2. If a failure detector (at p1) does not ever suspect p2, no other correct process will deliver m (agreement is violated).",0.1,M1_preference_data_81 "Consider the following snippet used to produce a high-performance circuit using a statically scheduled HLS tool, such as Xilinx Vivado HLS. Assume that a \verb+double+ multiplication takes several cycles (latency) to compute. \begin{verbatim} double a[ARRAY_SIZE] = ...; int b = 1; for (int i = 0; i < ARRAY_SIZE; i++) if (a[i] * (double) b >= CONST) b++; \end{verbatim} Would a compiler for a VLIW processor like Itanium experience more, less, or different problems in scheduling efficiently the snippet above? Explain. ","Noticing that $80 \cdot 2=20 \cdot 8$, only the first three enter the game, among which the first is clearly the best. The output will thus be a (DET) computer (N) process (N) programs (N) accurately (ADV)",0.1,M1_preference_data_82 "One of your colleagues has recently taken over responsibility for a legacy codebase, a library currently used by some of your customers. Before making functional changes, your colleague found a bug caused by incorrect use of the following method in the codebase: public class User { /** Indicates whether the user’s browser, if any, has JavaScript enabled. */ public boolean hasJavascriptEnabled() { … } // … other methods, such as getName(), getAge(), ... } Your colleague believes that this is a bad API. You are reviewing the pull request your colleague made to fix this bug. Part of the pull request deletes the ""hasJavascriptEnabled"" method from the code, but you disagree.
Explain in 1 sentence why this could cause issues and what should be done instead.","1. Modulo scheduling is a loop pipelining technique. It transforms a loop kernel in a way to expose maximum parallelism and minimize the loop initiation interval. 2. Compared to basic software pipelining, it does not need an explicit prologue and epilogue.",0.1,M1_preference_data_83 You have been hired to evaluate an email monitoring system aimed at detecting potential security issues. The targeted goal of the application is to decide whether a given email should be further reviewed or not. Give four standard measures usually considered for the evaluation of such a system? Explain their meaning. Briefly discuss their advantages/drawbacks.,we still lack 5\%: 16 to 20 will provide it: 20 tokens altogether.,0.1,M1_preference_data_84 "Consider two Information Retrieval systems S1 and S2 that produced the following outputs for the 4 reference queries q1, q2, q3, q4: S1: | referential: q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04 q2: d06 dXX dXX dXX dXX | q2: d05 d06 q3: dXX d07 d09 d11 dXX dXX dXX dXX dXX | q3: d07 d08 d09 d10 d11 q4: d12 dXX dXX d14 d15 dXX dXX dXX dXX | q4: d12 d13 d14 d15 S2:: | referential: q1: dXX dXX dXX dXX d04 | q1: d01 d02 d03 d04 q2: dXX dXX d05 d06 | q2: d05 d06 q3: dXX dXX d07 d08 d09 | q3: d07 d08 d09 d10 d11 q4: dXX d13 dXX d15 | q4: d12 d13 d14 d15 where dXX refer to document references that do not appear in the referential. To make the answer easier, we copied the referential on the right. For each of the two systems, compute the mean Precision and Recall measures (provide the results as fractions). Explain all the steps of your computation.","Proof sketch Show that $D(L) \leq D'(L)$ for all $1 \leq L$. Then, show that, for any $1 \leq L_1 \leq L_2$, we have $D'(L_1) \leq D'(L_2)$. This property can be shown by induction on $L_2$. Finally, let $n$ be such that $L \leq 2n < 2L$. 
We have that: $$\begin{align} D(L) &\leq D'(L) &\text{Proven earlier.} \\ &\leq D'(2n) &\text{Also proven earlier.} \\ &\leq \log_2(2n) (d + cT) + cT \\ &< \log_2(2L) (d + cT) + cT \\ &= \log_2(L) (d+cT) + \log_2(2) (d+cT) + cT \\ &= \log_2(L) (d+cT) + d + 2cT \end{align}$$ Done.",0.1,M1_preference_data_85 "Why a data prefetcher could hinder a Prime+Probe cache attack? How can the attacker overcome this problem? ","Call(N(""exists""), Fun(""y"", Call(Call(N(""less""), N(""x"")), N(""y""))))",0.1,M1_preference_data_86 "We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl. française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} What kind of model (of PoS tagger) is it? What assumption(s) does it rely on?",['NP'],0.1,M1_preference_data_87 "Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List). def contains[A](l: Iterable[A], elem: A): Boolean = val n = l.size if n <= 5 then for i <- l do if i == elem then return true false else val (p0, p1) = parallel( contains(l.take(n / 2), elem), contains(l.drop(n / 2), elem) ) p0 || p1 Let $n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. 
What is the asymptotic depth of contains if it is called on a Vector?","Real-world code typically has too many paths to feasibly cover, e.g., because there are too many ""if"" conditions, or potentially-infinite loops.",0.1,M1_preference_data_88 "/True or false:/ Is the following statement true or false? Justify your answer. ""The node with the highest clustering coefficient in an undirected graph is the node that belongs to the largest number of triangles.""",True,0.1,M1_preference_data_89 "Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. Recall that in the min $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. Show that the optimal value of the following linear program equals the number of edges crossed by a min $s,t$-cut: \begin{align*} \textbf{minimize} \hspace{0.8cm} & \sum_{e\in E} y_e \\ \textbf{subject to}\hspace{0.8cm} & y_{\{u,v\}} \geq x_u - x_v \qquad \mbox{for every $\{u,v\}\in E$} \\ \hspace{0.8cm} & y_{\{u,v\}} \geq x_v - x_u \qquad \mbox{for every $\{u,v\}\in E$} \\ & \hspace{0.6cm}x_s = 0 \\ & \hspace{0.6cm}x_t = 1 \\ & \hspace{0.6cm}x_v \in [0,1] \qquad \mbox{for every $v\in V$} \end{align*} The above linear program has a variable $x_v$ for every vertex $v\in V$ and a variable $y_e$ for every edge $e\in E$. \emph{Hint: Show that the expected value of the following randomized rounding equals the value of the linear program. Select $\theta$ uniformly at random from $[0,1]$ and output the cut $ S = \{v\in V: x_v \leq \theta\}$.}","(a) err = 1 - acc. (b) does not make any sense: they are the same (opposite, actually)",0.1,M1_preference_data_90 "What is your take on the accuracy obtained in an unballanced dataset? Do you think accuracy is the correct evaluation metric for this task? If yes, justify! If not, why not, and what else can be used?","1. In general, no. 
Speculative execution needs the ability to roll back wrongly executed instructions, and VLIW processors typically lack the appropriate data structures for this. 2. In specific cases, processors may implement mechanisms to execute instructions speculatively (e.g., by not raising exceptions or by detecting colliding memory accesses) and instructions to detect wrong execution. Compilers are then in charge of using such mechanisms to explicitly roll back or ignore operations executed due to wrong speculation. Itanium implements two such speculative loads, making it possible to move loads above branches and above stores, respectively. 3. It could be mentioned that predication is, in a sense, a form of speculative execution. ",0.1,M1_preference_data_91 "Consider the following toy learning corpus of 59 tokens (using a tokenizer that splits on whitespaces and punctuation), out of a possible vocabulary of $N=100$ different tokens: Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous wave mode. Using a 2-gram language model, what are the values of the parameters corresponding to ""continuous wave"" and to ""pulsed laser"" using estimation smoothed by a Dirichlet prior with parameters all equal to $0.01$","It can be simply a test followed by a load at an arbitrary address (an array access protected by the test) and an indirect access based on the result of that access.",0.1,M1_preference_data_92 "In the context of a Load Store Queue, what conditions must be satisfied in the LSQ for a load to be executed and the result to be returned to the processor?","(a) The log-likelihood is $$ \begin{aligned} \mathcal{L} & =\sum_{n=1}^{N}\left(y_{n} \log (\theta)-\theta-\log y_{n} !\right)
\\ & =\log (\theta) \sum_{n=1}^{N} y_{n}-N \theta-\log \left(\prod_{n=1}^{N} y_{n} !\right) \end{aligned} $$ (b) Taking the derivative with respect to $\theta$, setting the result to 0, and solving for $\theta$, we get $$ \theta=\frac{1}{N} \sum_{n=1}^{N} y_{n} $$ (c) The parameter $\theta$ represents the mean of the Poisson distribution and the optimum choice of $\theta$ is to set this mean to the empirical mean of the samples.",0.1,M1_preference_data_93 "Can we devise a Best-effort Broadcast algorithm that satisfies the causal delivery property, without being a causal broadcast algorithm, i.e., without satisfying the agreement property of a reliable broadcast?","If there are two experts, then the Weighted Majority strategy boils down to following the prediction of the expert who was wrong fewer times in the past. Assume a deterministic implementation of this strategy -- i.e., if the two experts are tied, then listen to the first one. Our example will consist of (arbitrarily many) identical phases; each phase consists of two days and it looks as follows. On the first day, we are going to follow the first expert. The first expert is wrong and the second one is right. Therefore we ``suffer''. On the second day, we are going to follow the second expert. But the first expert is right and the second one is wrong. We suffer again. In total, we suffered twice and each expert suffered only once.",0.1,M1_preference_data_94 "The MIPS R10000 fetches four instructions at once and, therefore, there are four such circuits working in parallel inside the processor. What is the function of the ``Old dest'' field in the ``Active List''? And what is the function of ``Log dest''? Why are they needed in the ``Active list''?","The model could generate text that suggests treatments to users. As the model is not a medical professional, these treatments could cause harm to the user if followed. The model could also give wrong addresses to testing sites, causing users to be harmed.
Others are acceptable.",0.1,M1_preference_data_95 "Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 2x_1 - x_2 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_2 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{0.85cm} x_2 + s_3 = 2 \\ \hspace{0.8cm} &\hspace{-0.8cm} x_1,\: x_2, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_2 - s_1 \\ s_2 &= 3 -x_2 + s_1 \\ s_3 &= 2 -x_2 \\ \cline{1-2} z &= 2 + x_2 - 2s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.","The function contains will create contiguous sub-arrays of at most 5 elements of the array. The work is the total sum of all the work done by every parallel task. A task either splits the array or processes it sequentially. Since splitting is done in $\Theta(1)$ and every element is going to be processed sequentially, the asymptotic work is $\Theta(n)$.",0.1,M1_preference_data_96 What does it mean that a processor supports precise exceptions? ,"Let $\phi_{1}(\cdot)$ be the feature map corresponding to $\kappa_{1}(\cdot, \cdot)$. Then by direct inspection we see that $\phi(\cdot)=\phi_{1}(f(\cdot))$ is the feature map corresponding to $\kappa(\cdot, \cdot)=\kappa_{1}(f(\cdot), f(\cdot))$. Indeed, $$ \phi_{1}(f(\mathbf{x}))^{\top} \phi_{1}\left(f\left(\mathbf{x}^{\prime}\right)\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\right)\right)=\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right). $$
\(f_\text{max}=0.20\)) as lower (resp. upper) cut-off value, expressed as relative frequencies; a stop word filter using the following stop list: {a, in, mouse, the}. and the following document \(d\): Cats are the worst enemies of rodents. After all, a cat is a cat: as soon as it can, it rushes into the bushes with only one target in mind: mice, mice and mice! Naturally, the cats of houses are less frightening, as for them croquette loaded dressers have replaced prey hiding bushes. Cat's life in the house is easy!... What is the multi-set resulting from the indexing of document \(d\) by the above described IR engine? Format your answer as an alphabetically ordered list of the form: ""lemma1(tf1), lemma2(tf2), ..."", where tfi is the term frequency of indexing term i. For instance: dog(2), frog(3), zebra(1)",['N'],0.1,M1_preference_data_98 "Consider the following SCFG with the following probabilities: S → NP VP 0.8 S → NP VP PP 0.2 VP → Vi {a1} VP → Vt NP {a2} VP → VP PP a NP → Det NN 0.3 NP → NP PP 0.7 PP → Prep NP 1.0 Vi → sleeps 1.0 Vt → saw 1.0 NN → man {b1} NN → dog b NN → telescope {b2} Det → the 1.0 Prep → with 0.5 Prep → in 0.5 What is the value of a? (Give your answer as a numerical value, not as a formula)","First, the output is always feasible since we always include all vertices with $x_i \geq 1/2$ which is a feasible vertex cover as seen in class. We proceed to analyze the approximation guarantee. Let $X_i$ be the indicator random variable that $i$ is in the output vertex cover. Then $\Pr[X_i = 1]$ is equal to the probability that $t \leq x_i$ which is $1$ if $x_i \geq 1/2$ and otherwise it is $x_i/(1/2) = 2x_i$. We thus always have that $\Pr[X_i =1] \leq 2x_i$. Hence, \begin{align*} \E[\sum_{i\in S_t} w(i)] = \E[\sum_{i\in V} X_i w(i)] = \sum_{i\in V} \E[X_i] w(i) \leq 2 \sum_{i\in V} x_i w(i)\,. \end{align*}
One of your colleagues, who was tasked with adding a ""dark mode"" to the app, has found an unrelated bug during testing: a race condition occasionally causes stale data to be displayed. Your colleague wants to immediately fix it, though the source of the bug is not clear and it is unknown how often this actually occurs. Explain in 1 sentence whether that is a good idea and why or why not.","Covering 80% of the paths is often either undoable or a waste of time. For example, any potentially infinite loop such as requiring a valid user input can't be covered more than 0% in terms of paths. Even for the rest of the code, there are often so many paths that covering 80% of them would take a huge amount of time. A more realistic strategy would be to cover at least 80% of the branches, or perhaps 80% of the instructions.",0.1,M1_preference_data_100 "Devise an algorithm that, without consensus, implements a weaker specification of NBAC by replacing the termination property with weak termination. Weak termination: Let p be a distinguished process, known to all other processes. If p does not crash then all correct processes eventually decide. Your algorithm may use a perfect failure detector.","Consider the directed graph $D = (V,A)$ obtained from $G$ by replacing every edge $\{u,v\} \in E$ by the two arcs $(u,v)$ and $(v,u)$. With the arc set $A$ as ground set we define two partition matroids $\mathcal{M}_1$ and $\mathcal{M}_2$: \begin{itemize} \item To be independent in $\mathcal{M}_1$ one can take at most one of $\{(u,v), (v,u)\}$ for every $\{u,v\} \in E$, i.e., \begin{align*} \mathcal{I}_1 = \{F \subseteq A: |F \cap \{(u,v), (v,u)\} | \leq 1\mbox{ for all $\{u,v\}\in E$}\}\,. \end{align*} This matroid enforces the constraint that each edge should be oriented in one direction. 
\item To be independent in $\mathcal{M}_2$, one can take at most $k(v)$ arcs among the set $\delta^{-}(v)$ of incoming arcs for every $v$: \begin{align*} \mathcal{I}_2 = \{F \subseteq A: |F \cap \delta^-(v)| \leq k(v) \mbox{ for all $v\in V$}\}\,. \end{align*} This matroid enforces the indegree restrictions of the orientation. \end{itemize} By the above definitions, there exists an orientation satisfying the required indegree restrictions if and only if there exists a common independent set to $\mathcal{M}_1$ and $\mathcal{M}_2$ of cardinality precisely $|E|$ (in which case we select either $(u,v)$ or $(v,u)$ but not both).",0.1,M1_preference_data_101 "Assume that your team is discussing the following java code: public final class DataStructure { public void add(int val) { /*...*/ } private boolean isFull() { /*...*/ } } One of your colleagues suggests that ""add"" should be changed to return a boolean indicating whether the passed value was added or not. Explain whether this breaks backward compatibility and why or why not (without worrying about whether this is a good or a bad thing). ",$$\frac{\exp(s_1)}{\displaystyle\sum_{i=1}^{|V|} \exp(s_i)}$$,0.1,M1_preference_data_102 "The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: The word 'mouse' appears at two different places in the toy ontology. What does this mean? What specific problems does it raise when the ontology is used? How could such problems be solved?
(just provide a sketch of explanation.)","def community_influencers(G, nodes_community, communities, communities_count):
    '''
    input:
        G: nx.Graph
        nodes_community: {node_id: community_id}
        communities: [community_ids]
        communities_count: int
    output:
        influencers: {community_id: node_id}
    '''
    influencers = {}
    for c in communities:
        nodes = [n for n in G.nodes if nodes_community[n] == c]
        pr = nx.pagerank(G.subgraph(nodes))
        influencers[c] = max(pr, key=pr.get)
    return influencers",0.1,M1_preference_data_103 "Consider an operation we will call scanRight1 that, given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, computes a sequence $b_1, \ldots, b_N$ such that: $b_N = a_N$ $b_i = f(a_{i}, b_{i+1})$, for $0 < i < N$ Define scanLeft1 in a manner similar to scanRight1: Given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, scanLeft1 computes a sequence $b_1, \ldots, b_N$ such that: $b_1 = a_1$ $b_i = f(b_{i-1}, a_{i})$, for $0 < i \leq N$ Suppose that $f$ is associative. Is the result of scanRight1 the same as the result of scanLeft1 on the reversed sequence $a_N, \ldots, a_1$?","Let $OPT$ be the number of edges that cross a minimum $s,t$-cut, and let $OPT_{LP}$ be the value of the given LP. To show that $OPT = OPT_{LP}$, we show that $OPT_{LP} \leq OPT$ and $OPT_{LP} \geq OPT$. Firstly let's prove that $OPT_{LP} \leq OPT$. Suppose that $S$ is an optimal $s,t$-cut. We have $s\in S$ and $t\not \in S$. We will create a solution for the LP problem whose value equals the cut size defined by $S$ and $E \setminus S$. Set $x_u = 0$ for all $u\in S$, and $x_v = 1$ for all $v\not \in S$. Furthermore define $$y_e = \begin{cases} 1&\mbox{if $e\in \delta(S)$} \\ 0 & \mbox{otherwise}. \end{cases}$$ Clearly $\sum_e y_e = |\delta(S)| = OPT$. It remains to prove that the assignment to the variables $\{x_v\}_{v\in V}$, $\{y_e\}_{e\in E}$ is feasible: \begin{itemize} \item Consider any edge $\{u,v\}$.
We need to verify that $y_{\{u,v\}} \geq x_u - x_v$ and $y_{\{u,v\}} \geq x_v -x_u$. In other words, that $y_{\{u,v\}} \geq |x_u - x_v|$. \begin{itemize} \item If $\{u,v\} \in \delta(S)$ then one of the vertices is in $S$ and one is outside. Say $u\in S$ and $v\not \in S$. Then \begin{align*} 1 = y_{\{u,v\}} = |0 - 1| = |x_u - x_v|\,. \end{align*} \item If $\{u,v\} \not \in \delta(S)$ then either $x_u = x_v = 0$ (both are in $S$) or $x_u = x_v = 1$ (both are outside $S$). In either case $|x_u -x_v| = 0$ and so the constraint $ y_{\{u,v\}} = 0 = |x_u - x_v|$ is again verified (with equality). \end{itemize} \item $x_s = 0$ and $x_t = 1$. Moreover, we have $x_v \in \{0,1\} \subseteq [0,1]$ for every $v\in V$. \end{itemize} This finishes one part of the proof: there is an assignment to the variables such that the LP outputs $OPT$. This means that $OPT_{LP}$ is at most $OPT$, in other words $OPT_{LP} \leq OPT$.\\ Now let's prove that $OPT_{LP} \geq OPT$. Suppose that $(\{x^*_v\}_{v\in V},\{y^*_e\}_{e\in E})$ is an optimal solution to the LP. Consider the following randomized rounding: select $\theta \in (0,1)$ uniformly at random and let $S = S_\theta= \{v\in V: x^*_v \leq \theta\}$. Let's analyze this rounding algorithm. It is clear that we always output a feasible cut since $x^*_s = 0$ and $x^*_t =1$. This tells us that for every $\theta\in (0,1)$ the associated $S_\theta$ is a valid solution and so $OPT \leq |\delta(S_\theta)|$. We thus have \begin{align*} OPT \leq \mathbb{E}_{\theta \in [0,1]} [|\delta(\{v: x^*_v \leq \theta\})|]. \end{align*} We will now complete the proof by showing that the above expectation is at most $OPT_{LP}$. Let's introduce a new random variable $X_{e,\theta}$ that indicates if an edge is cut: $$X_{e,\theta} = \begin{cases} 1&\mbox{ if $e\in \delta(S_\theta)$} \\ 0 & \mbox{otherwise}.
\end{cases}$$ Then the expectation above equals $$ \mathbb{E}_{\theta \in [0,1]} \left[ \sum_{e\in E} X_{e,\theta} \right]= \sum_{e\in E}\mathbb{E}_{\theta \in [0,1]} \left[ X_{e,\theta} \right] $$ Let's analyze $\mathbb{E}_{\theta \in [0,1]} \left[ X_{e,\theta} \right] = \Pr_{\theta \in [0,1]}[e \text{ is cut in } S_\theta]$ for a specific edge $e=\{u,v\}\in E$. In the case when $x^*_u \leq x^*_v$, the edge $e$ is cut if and only if $x^*_u \leq \theta \leq x^*_v$. The other case is analogous. It follows that \begin{align*} \Pr_{\theta \in [0,1]}[X_{\{u,v\},\theta}=1] = \begin{cases} \Pr_{\theta \in [0,1]}[ \theta \in [x^*_u, x^*_v]] & \mbox{if $x^*_u \leq x^*_v$}\\ \Pr_{\theta \in [0,1]}[ \theta \in [x^*_v, x^*_u]] & \mbox{if $x^*_u > x^*_v$} \end{cases} = |x^*_u - x^*_v|\,. \end{align*} Now since the LP guarantees that $y^*_{\{u,v\}} \geq |x_u^* - x_v^*|$, we have \begin{align*} \sum_{\{u,v\}\in E}\mathbb{E}_{\theta \in [0,1]} \left[ X_{\{u,v\},\theta} \right] = \sum_{\{u,v\}\in E}|x^*_u - x^*_v| \leq \sum_{\{u,v\}\in E} y^*_{\{u,v\}} = OPT_{LP}\,. \end{align*} It follows that $$ OPT \leq \mathbb{E}_{\theta \in [0,1]} [|\delta(\{v: x^*_v \leq \theta\})|] \leq OPT_{LP} $$ and this finishes the proof.",0.1,M1_preference_data_104 "For each of the following pairs, what kind of morphology is involved? cat+N => cats, break+V => breakable , freeze+V => frozen , translate+V => translation, modify+V => modifies ; inflectional, inflectional, derivational, inflectional, derivational","We show how to find such $k$ disjoint perfect matchings in a $k$-regular bipartite graph in polynomial time. Let $G_0 = (A \cup B, E)$ be a $k$-regular bipartite graph. Consider the LP for bipartite perfect matching on $G_0$. The LP is feasible because setting $x_e = 1/k$ for all $e \in E$ satisfies all the constraints (recall that each vertex of a $k$-regular graph is incident to exactly $k$ edges).
Now we find an extreme point solution to the LP in polynomial time, and due to the integrality of such solutions, we get a valid perfect matching $M_1$. Notice that $M_1$, being a perfect matching, forms a $1$-regular sub-graph of $G_0$. Therefore, if we remove the matching $M_1$ from the original graph $G_0$, we get a new $(k-1)$-regular graph $G_1 = (A \cup B, E \setminus M_1)$. Now we repeat the process $k$ times. Formally, at each iteration $i = 1, \dots, k$, we start with the $(k-i+1)$-regular graph $G_{i-1}$. By solving the bipartite perfect matching LP for $G_{i-1}$ to get an extreme point solution, we obtain a perfect matching $M_i$. We remove $M_i$ from $G_{i-1}$ to obtain a $(k-i)$-regular graph $G_i$, which is a sub-graph of $G_{i-1}$. Since we remove the already found perfect matchings at each iteration, the $k$ perfect matchings $M_1, \dots, M_k$ are disjoint. Furthermore, since all graphs $G_1, \dots, G_{k-1}$ are sub-graphs of the original graph $G_0$, the matchings $M_1, \dots, M_k$ are all valid perfect matchings of $G_0$.",0.1,M1_preference_data_105 Implement Connectivity-Based Community Ranking by doing the following: - Compute a meta graph where nodes are communities and edges denote inter-connections across communities. - Add the weights of the inter-connections as weights to the edges. - Compute `pagerank` on the meta graph. - Hint: `w_matrix` is the confusion matrix of the weights among the communities. `w_matrix` is not symmetric.,"words and punctuation: M. O'Connel payed $12,000 ( V.T.A. not included ) with his credit card . Usually not in a lexicon because hard to lexicalize (too many hard-to-predict occurrences): O'Connel, $12,000 'O'Connel' could be in some lexicon of proper names (but not so usual), or recognized by some NER (Named-Entity Recognizer). '$12,000' could be in some lexicon making use of regular expressions (e.g. a FSA), but this is also not so usual unless making use of some (other) NER.
tokens: M|.| |O|'|Connel| |payed| |$| |12|,|000| |(|V|.|T|.|A|.| |not| |included|)| |with| |his| |credit| |card|.| We could go from tokens to words by: • agglutinating several (consecutive) tokens when the resulting word is in our lexicon • doing so, it would be good to keep all possible solutions, for instance in the compact form of a graph/lattice; for instance: • making use of NERs (check their input format/tokenization rules) • adding our own ad-hoc rules, e.g. M + period + whitespace + proper name/unknown token with capital letter → proper noun",0.1,M1_preference_data_106 "Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. How could you address the problem of potentially recommending a new movie without any ratings to users? [As in the previous point, this is also not a math question.]","We show that $\mathcal{M}$ satisfies the two axioms $I_1$ and $I_2$ for matroids. \begin{itemize} \item Take any $A \in \mathcal{I}$ and let $B \subseteq A$. Then $|E_i \cap A| \leq k_i$ for all $i = 1, \dots, \ell$. Clearly, all the inequalities $|E_i \cap B| \leq k_i$ also hold as $B \subseteq A$. Thus $B \in \mathcal{I}$, and Axiom $I_1$ is satisfied. \item Let $A, B \in \mathcal{I}$ and suppose that $|A| > |B|$. For all $i = 1, \dots, \ell$, consider the sets $E_i \cap A$ and $E_i \cap B$. Since $|A|$ is strictly greater than $|B|$, there exists an index $j \in \{1, \dots, \ell\}$ such that $|E_j \cap A| > |E_j \cap B|$. This implies that $|E_j \cap B| \leq |E_j \cap A| - 1 \leq k_j - 1$. Choose an element $e \in (E_j \cap A) \setminus (E_j \cap B) \subseteq A$ and add it to $B$.
The new set $B \cup \{e \}$ satisfies $|E_j \cap (B \cup \{ e \})| \leq k_j$. Clearly, the new set also satisfies all the remaining inequalities because $|E_i \cap (B \cup \{e \})| = |E_i \cap B|$ for $i \neq j$ (note that $e \notin E_i$ for $i \neq j$ because $e \in E_j$ and $E_1, E_2, \dots, E_\ell$ are disjoint). Thus $B \cup \{e\} \in \mathcal{I}$, and Axiom $I_2$ is satisfied. \end{itemize}",0.1,M1_preference_data_107 "In this exercise, we will see how to combine the Principal Component Analysis (PCA) and the kernel method into an algorithm known as kernel PCA. We are given $n$ observations in a low dimensional space $\mathbf{x}_{1}, \cdots, \mathbf{x}_{n} \in \mathbb{R}^{L}$ and we consider a kernel $k$ and its associated feature map $\phi: \mathbb{R}^{L} \mapsto \mathbb{R}^{H}$ which satisfies: $$ k(\mathbf{x}, \mathbf{y})=\langle\phi(\mathbf{x}), \phi(\mathbf{y})\rangle_{\mathbb{R}^{H}} $$ where $\langle\cdot, \cdot\rangle_{\mathbb{R}^{H}}$ is the standard scalar product of $\mathbb{R}^{H}$.
We define the empirical covariance matrix and the empirical covariance matrix of the mapped observations as: $$ \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top} \quad \text { and } \quad \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top} $$ The kernel matrix $\mathbf{K}$ is defined by: $$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}} $$ We also define the data matrix and the corresponding matrix of the mapped data as: $$ \mathbf{X}:=\left(\begin{array}{c} \mathbf{x}_{1}^{\top} \\ \cdots \\ \mathbf{x}_{n}^{\top} \end{array}\right) \in \mathbb{R}^{n \times L} \quad \text { and } \quad \mathbf{\Phi}:=\left(\begin{array}{c} \phi\left(\mathbf{x}_{1}\right)^{\top} \\ \cdots \\ \phi\left(\mathbf{x}_{n}\right)^{\top} \end{array}\right) \in \mathbb{R}^{n \times H} . $$ Finally we denote the eigenpairs (eigenvalues and eigenvectors) of $\boldsymbol{\Sigma}^{\mathbf{H}}$ by $\left\{\left(\lambda_{i}, \mathbf{v}_{i}\right)\right\}_{i=1}^{H}$ and those of $\mathbf{K}$ by $\left\{\left(\rho_{j}, \mathbf{w}_{j}\right)\right\}_{j=1}^{n}$. We also assume that the vectors $\mathbf{v}_{i}$ and $\mathbf{w}_{j}$ are normalized. Thus: $$ \boldsymbol{\Sigma}^{\mathbf{H}} \mathbf{v}_{i}=\lambda_{i} \mathbf{v}_{i}, \quad\left\|\mathbf{v}_{i}\right\|_{2}=1 \quad \text { and } \quad \mathbf{K} \mathbf{w}_{j}=\rho_{j} \mathbf{w}_{j}, \quad\left\|\mathbf{w}_{j}\right\|_{2}=1 $$ Let us remind that we assume in the kernel setting that we can compute $k(\mathbf{x}, \mathbf{y})$ but that we cannot directly compute $\phi(\mathbf{x})$ What we would like to do is to first map the data into the high-dimensional space using the features map $\phi$ and then to apply the standard PCA algorithm in the high-dimensional space $\mathbb{R}^{H}$. 
This would amount to: (a) Computing the empirical covariance matrix $\boldsymbol{\Sigma}^{\mathbf{H}}$ of the mapped data $\phi\left(\mathbf{x}_{i}\right)$. (b) Computing the eigenvectors $\mathbf{v}_{1}, \cdots, \mathbf{v}_{N}$ associated with the $N$ largest eigenvalues of $\boldsymbol{\Sigma}^{\mathbf{H}}$. (c) Computing the projection $\Pi\left(\phi\left(\mathbf{x}_{i}\right)\right) \in \mathbb{R}^{N}$ for each data point onto these eigenvectors, where the $j$-th component of the projection is given by: $$ \Pi_{j}\left(\phi\left(\mathbf{x}_{i}\right)\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \mathbf{v}_{j}\right\rangle_{\mathbb{R}^{H}} $$ Write the kernel matrix $\mathbf{K}$ as a function of the features matrix $\boldsymbol{\Phi}$. What is the size of this matrix?","Instead of selecting the edge to contract in each iteration uniformly at random, we now select an edge with probability proportional to its weight $w_e$. To show that this idea indeed makes sense, we observe that Claim 1 from the notes of Lecture~11 still holds: the probability that we select an edge in $E(S^*, \overline{S^*})$ to contract is at most $2/n$. Indeed, let $k= \sum_{e\in E(S^*, \overline{S^*})} w(e)$ be the weight of the min cut and let $w(E) = \sum_{e\in E} w(e)$ denote the total weight of the edges. The probability that we select an edge in the min cut $E(S^*, \overline{S^*})$ is $$ P[e\in E(S^*,\overline{S^*})] =\frac{k}{w(E)}$$ Now similar to the hand-shake lemma we have $$ \sum_{v\in V} w(\delta(v)) = 2\cdot w(E) $$ where $\delta(v)$ denotes the edges adjacent to $v$ and $w(\delta(v))$ is the weight of $\delta(v)$. We also have that $$ w(\delta(v)) \geq k $$ since $k$ is the weight of the min cut. Therefore, \begin{align*} \sum_{v\in V} w(\delta(v)) = 2\cdot w(E) \geq k\cdot n \Rightarrow w(E) \geq k\cdot n/2\,.
\end{align*} This means $$P[e\in E(S^*,\overline{S^*})] =\frac{k}{w(E)} \leq \frac{k}{nk/2} =\frac{2}{n} $$ Hence, we have that the probability to contract an edge in $E(S^*, \overline{S^*})$ is at most $2/n$. The analysis now continues in the exact same manner as in the unweighted case. We observe that even in a weighted graph, when we contract an edge $(u, v)$ the size of the minimum cut does not decrease. Then, let $A_{i}$ be the event that the edge picked in step $i$ of the loop is not in $E(S^*,\overline{S^*})$. We need to lower bound $P[A_1,A_2,\dots,A_{n-2}]$. By the chain rule we have, \begin{eqnarray*} P[A_1,\dots,A_{n-2}] = P[A_{1}] P[A_{2}|A_{1}] P[A_{3}|A_{1},A_{2}] \dots P[A_{n-2}|A_{1},A_{2},\dots,A_{n-3}]. \end{eqnarray*} From the above, we have that, for all $i$, $$ P[A_i | A_1,\dots,A_{i-1}] \geq 1- \frac{2}{n-i+1}.$$ Therefore, \begin{eqnarray*} P[A_1,\dots,A_{n-2}] &\geq & \left(1-\frac{2}{n}\right)\left(1-\frac{2}{n-1}\right)\dots \left(1-\frac{2}{3}\right)\\ &=& \frac{n-2}{n}\cdot \frac{n-3}{n-1}\cdot \frac{n-4}{n-2}\dots \frac{1}{3}\\ &=&\frac{2}{n(n-1)}=1/{n\choose 2}. \end{eqnarray*}",0.1,M1_preference_data_108 "What is the complexity of concatenation of two conc-trees with heights $h_1$ and $h_2$?","Force the system to output a given number of documents (increasing) so as to increase recall (ultimately to recall max. when we ask the system to decide for all the available documents whether they are pertinent or not)",0.1,M1_preference_data_109 "One of your colleagues has recently taken over responsibility for a legacy codebase, a library currently used by some of your customers. Before making functional changes, your colleague found a bug caused by incorrect use of the following method in the codebase: public class User { /** Indicates whether the user’s browser, if any, has JavaScript enabled. */ public boolean hasJavascriptEnabled() { … } // … other methods, such as getName(), getAge(), ... } Your colleague believes that this is a bad API.
Explain in 1 sentence why that is indeed the case.","1.5% is for sure wrongly tagged. For the rest (100%-1.5%), only 98% are correctly tagged. So the overall score is 0.985×0.98 ≃ 0.96.",0.1,M1_preference_data_110 "Consider a data stream $\sigma=(a_1,\ldots, a_m)$, with $a_j\in [n]$ for every $j=1,\ldots, m$, where we let $[n]:=\{1, 2, \ldots, n\}$ to simplify notation. For $i\in [n]$ let $f_i$ denote the number of times element $i$ appeared in the stream $\sigma$. We say that a stream $\sigma$ is {\em approximately sparse} if there exists $i^*\in [n]$ such that $f_{i^*}=\lceil n^{1/4}\rceil$ and for all $i\in [n]\setminus \{i^*\}$ one has $f_i\leq 10$. We call $i^*$ the {\em dominant} element of $\sigma$. Give a single-pass streaming algorithm that finds the dominant element $i^*$ in the input stream as long as the stream is approximately sparse. Your algorithm should succeed with probability at least $9/10$ and use $O(n^{1/2}\log^2 n)$ bits of space. You may assume knowledge of $n$ (and that $n$ is larger than an absolute constant).","This is an HMM of order 1 (Well, the picture is actually a part of a Markov chain. The 'hidden' part will be provided by the emission probabilities, i.e. the lexicon). HMM relies on two assumptions (see course): limited lexical conditioning $\left(P\left(w_{i} \mid \ldots C_{i} \ldots\right)=P\left(w_{i} \mid C_{i}\right)\right)$ and limited scope for syntactic dependencies $\left(P\left(C_{i} \mid C_{1} \ldots C_{i-1}\right)=P\left(C_{i} \mid C_{i-k} \ldots C_{i-1}\right)\right)$.",0.1,M1_preference_data_111 "Consider the following algorithm \textsc{Random-Check} that takes as input two subsets $S\subseteq E$ and $T\subseteq E$ of the same ground set $E$. \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \textsc{Random-Check}$(S,T)$ \\[2mm] 1.
For each element $e\in E$, independently of other elements randomly set \begin{align*} x_e = \begin{cases} 1 & \mbox{with probability $1/3$} \\ 0 & \mbox{with probability $2/3$} \end{cases} \end{align*} 2. \IF $\sum_{e\in S} x_e = \sum_{e\in T} x_e$ \THEN \\[1mm] 3. \qquad \RETURN true \\[1mm] 4. \ELSE\\ 5. \qquad \RETURN false \end{boxedminipage} \end{center} Note that \textsc{Random-Check}$(S,T)$ returns true with probability $1$ if $S=T$. Your task is to analyze the probability that the algorithm returns true if $S \neq T$. Specifically prove that \textsc{Random-Check}$(S,T)$ returns true with probability at most $2/3$ if $S\neq T$.\\ {\em (In this problem you are asked to prove that \textsc{Random-Check}($S,T$) returns true with probability at most $2/3$ if $S \neq T$. Recall that you are allowed to refer to material covered in the lecture notes.)}","def truncated_svd(term_doc_matrix, num_val):
    K, S, Dt = np.linalg.svd(term_doc_matrix, full_matrices=False)
    K_sel = K[:, 0:num_val]
    S_sel = np.diag(S)[0:num_val, 0:num_val]
    Dt_sel = Dt[0:num_val, :]
    return K_sel, S_sel, Dt_sel",0.1,M1_preference_data_112 In which type of processors do you expect to find a reorder buffer?,"\begin{itemize} \item Noise term stays constant (it only depends on the data and not on the algorithm) \item Bias term increases (if $\lambda=\infty$ very large bias) \item Variance term decreases (if $\lambda=\infty$ then no variance) \end{itemize}",0.1,M1_preference_data_113 "Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) .
$$ We say that a data point $y$ follows a Poisson distribution with parameter $\theta$ if the probability of the observation $y, y \in \mathbb{N}$, is given by $$ p(y \mid \theta)=\frac{\theta^{y} e^{-\theta}}{y !} $$ Assume that you are given the samples $\mathcal{S}=\left\{y_{1}, \cdots, y_{N}\right\}$ (a) Write down the log-likelihood, call it $\mathcal{L}$, of these samples as a function of $\theta$ assuming that the samples are iid and follow a Poisson distribution with parameter $\theta$. (b) What is the parameter $\theta$ that maximizes this log-likelihood expressed as a function of the samples?","Notice that minimising $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$ reduces to minimizing $\sum_{n=1}^{N} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2}$ for each $k \in\{1, \ldots, K\}$ independently. This sum is a function of $\boldsymbol{\mu}_{k}$ which is quadratic and positive. It is therefore minimum when its gradient vanishes. Setting the gradient to 0 leads to $2 \sum_{n=1}^{N} z_{n k}\left(\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right)=0$, hence the update for each $k$ is: $$ \boldsymbol{\mu}_{k}=\frac{\sum_{n=1}^{N} z_{n k} \mathbf{x}_{n}}{\sum_{n=1}^{N} z_{n k}} $$ This step corresponds to the update step and boils down to computing the center of mass of each cluster $k$.",0.1,M1_preference_data_114 "In this exercise, we will see how to combine the Principal Component Analysis (PCA) and the kernel method into an algorithm known as kernel PCA. 
We are given $n$ observations in a low dimensional space $\mathbf{x}_{1}, \cdots, \mathbf{x}_{n} \in \mathbb{R}^{L}$ and we consider a kernel $k$ and its associated feature map $\phi: \mathbb{R}^{L} \mapsto \mathbb{R}^{H}$ which satisfies: $$ k(\mathbf{x}, \mathbf{y})=\langle\phi(\mathbf{x}), \phi(\mathbf{y})\rangle_{\mathbb{R}^{H}} $$ where $\langle\cdot, \cdot\rangle_{\mathbb{R}^{H}}$ is the standard scalar product of $\mathbb{R}^{H}$. We define the empirical covariance matrix and the empirical covariance matrix of the mapped observations as: $$ \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top} \quad \text { and } \quad \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top} $$ The kernel matrix $\mathbf{K}$ is defined by: $$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}} $$ We also define the data matrix and the corresponding matrix of the mapped data as: $$ \mathbf{X}:=\left(\begin{array}{c} \mathbf{x}_{1}^{\top} \\ \cdots \\ \mathbf{x}_{n}^{\top} \end{array}\right) \in \mathbb{R}^{n \times L} \quad \text { and } \quad \mathbf{\Phi}:=\left(\begin{array}{c} \phi\left(\mathbf{x}_{1}\right)^{\top} \\ \cdots \\ \phi\left(\mathbf{x}_{n}\right)^{\top} \end{array}\right) \in \mathbb{R}^{n \times H} . $$ Finally we denote the eigenpairs (eigenvalues and eigenvectors) of $\boldsymbol{\Sigma}^{\mathbf{H}}$ by $\left\{\left(\lambda_{i}, \mathbf{v}_{i}\right)\right\}_{i=1}^{H}$ and those of $\mathbf{K}$ by $\left\{\left(\rho_{j}, \mathbf{w}_{j}\right)\right\}_{j=1}^{n}$. We also assume that the vectors $\mathbf{v}_{i}$ and $\mathbf{w}_{j}$ are normalized.
Thus: $$ \boldsymbol{\Sigma}^{\mathbf{H}} \mathbf{v}_{i}=\lambda_{i} \mathbf{v}_{i}, \quad\left\|\mathbf{v}_{i}\right\|_{2}=1 \quad \text { and } \quad \mathbf{K} \mathbf{w}_{j}=\rho_{j} \mathbf{w}_{j}, \quad\left\|\mathbf{w}_{j}\right\|_{2}=1 $$ Let us remind that we assume in the kernel setting that we can compute $k(\mathbf{x}, \mathbf{y})$ but that we cannot directly compute $\phi(\mathbf{x})$ What we would like to do is to first map the data into the high-dimensional space using the features map $\phi$ and then to apply the standard PCA algorithm in the high-dimensional space $\mathbb{R}^{H}$. This would amount to: (a) Computing the empirical covariance matrix $\boldsymbol{\Sigma}^{\mathbf{H}}$ of the mapped data $\phi\left(\mathbf{x}_{i}\right)$. (b) Computing the eigenvectors $\mathbf{v}_{1}, \cdots, \mathbf{v}_{N}$ associated with the $N$ largest eigenvalues of $\boldsymbol{\Sigma}^{\mathbf{H}}$. (c) Computing the projection $\Pi\left(\phi\left(\mathbf{x}_{i}\right)\right) \in \mathbb{R}^{L}$ for each data point onto these eigenvectors, where the $j$-th component of the projection is given by: $$ \Pi_{j}\left(\phi\left(\mathbf{x}_{i}\right)\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \mathbf{v}_{j}\right\rangle_{\mathbb{R}^{H}} $$ Write the empirical covariance matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$ in function of the design matrix $\mathbf{X}$ and the features matrix $\boldsymbol{\Phi}$. What are the sizes of these matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$ ?","No, because the limit is about complexity, not number of lines",0.1,M1_preference_data_115 "Chef Baker Buttersweet just took over his family business - baking tasty cakes! He notices that he has $m$ different ingredients in various quantities. In particular, he has $b_i \geq 0$ kilograms of ingredient $i$ for $i = 1, \dots, m$. His family cookbook has recipes for $n$ types of mouthwatering cakes. 
A kilogram of cake of type $j$ is worth $c_j$ CHF. For each recipe $j$, the cookbook says how many kilograms of each of the ingredients are needed to make one kilogram of cake of type $j$. One kilogram of cake of type $j$, for $j=1, \dots, n$, needs precisely $a_{ij}$ kilograms of ingredient $i$ for all $i=1,\dots,m$. Chef wants to make $x_j \leq 1$ kilograms of cake of type $j$. Having studied linear programming, he knows that the maximum revenue he can get is given by the following linear program, where $A \in \mathbb{R}_{+}^{m\times n} \mbox{ , } b \in \mathbb{R}_+^m \mbox{ and } c\in \mathbb{R}^n_+$. \begin{align*} \textbf{Maximize} \hspace{0.8cm} & \sum_{j=1}^n c_j x_j\\ \textbf{subject to}\hspace{0.8cm} & Ax \leq b \\ \hspace{0.8cm} & 1 \geq x_j \geq 0 \ \ \ \forall j. \end{align*} Chef realizes that he can use the Hedge algorithm to solve this linear program (approximately) but he is struggling with how to set the costs $m^{(t)}_{i}$ at each iteration. Explain how to set these costs properly. {\em (In this problem you are asked to define the costs $m^{(t)}_i$. You do \textbf{not} need to explain how to solve the reduced linear program that has a single constraint. Recall that you are allowed to refer to material covered in the lecture notes.)}","def longest[A](ls: List[A]): Int = ls.foldLeft((Option.empty[A], 0, 0)) { case ((last, cur, max), x) => val last2 = Some(x) val cur2 = if (last2 == last) cur + 1 else 1 (last2, cur2, if (cur2 > max) cur2 else max) }._3",0.1,M1_preference_data_116 Would VLIW processors benefit from a Load Store Queue?,xs.flatMap(x => x),0.1,M1_preference_data_117 "Consider the following algorithm that takes as input a complete $n$-by-$n$ bipartite graph $G=(U \cup V,E)$ with positive integer edge-weights $w :E \rightarrow \mathbb{Z}_{> 0 }$: \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \begin{minipage}{14cm} \begin{verse} \textsc{MinWeightPerfectMatching}$(G, w)$: \\[2mm] 1.
\FOR each edge $e\in E$ {\scriptsize (i.e., each pair $(u,v)$ since the graph is complete)} \\ 2. \qquad select independently and uniformly at random $p(e) \in \{1, \dots, n^2\}$.\\[1mm] 3. Define a bi-adjacency matrix $A$ with $n$ rows (one for each $u\in U$) and $n$ columns (one for each $v\in V$) as follows: \begin{align*} A_{u,v} = 2^{n^{100} w(u,v)}\cdot p(u,v) \,. \end{align*}\\ 4. \RETURN largest positive integer $i$ such that $2^{i \cdot n^{100} }$ divides $\det(A)$ (if no such $i$ exists, we return $0$). \end{verse} \end{minipage} \end{boxedminipage} \end{center} Prove that the above algorithm returns the value of a min-weight perfect matching with probability at least $1-1/n$. Recall that you are allowed to refer to material covered in the course. \\[2mm] \noindent Hint: Let $\mathcal{M}_i$ denote the set of perfect matchings $M$ whose weight $\sum_{e\in M} w(e)$ equals $i$. Use that one can write $\det(A)$ as follows: \begin{align*} \det(A) = \sum^{\infty}_{i=0} 2^{i \cdot n^{100}} f_i({p}) \qquad \mbox{where } f_i(p) = \sum_{M \in \mathcal{M}_i} \textrm{sign}(M) \prod_{e\in M} p(e)\,. \end{align*} Here $\textrm{sign}(M)\in \{\pm 1\}$ is the sign of the permutation corresponding to $M$.","The grammar G is: \item a constituency-based grammar (because it consists of rewriting rules); \item a context-free grammar (because of the format of the rules: one and exactly one non-terminal on the left-hand side); \item in extended Chomsky Normal Form (because of the format of the rules: no more than two terms on the right-hand side).",0.1,M1_preference_data_118 Provide a formal definition of a transducer. Give some good reasons to use such a tool for morphological processing.,"The dual is the following: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E.
\end{array} \end{equation*} Any binary solution $y \in \{0,1\}^{|E|}$ to the dual corresponds to a set of edges which, when removed from $G$, disconnect $s$ and $t$ (indeed, for every path $p$ from $s$ to $t$, at least one edge must be removed). This is called the minimum $s$,$t$-cut problem.",0.1,M1_preference_data_119 "Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What type of rules does the provided grammar $G$ consist of? What type of rules should $G$ be complemented with to be exploitable in practice? What is the format of these missing rules?","For being computed with a parallel reduction, hull2 needs to be associative. In fact, it is associative as well as commutative here. We slightly abuse notation to keep the structure similar to code, with a Rectangle structure. Given $$ r_i = Rectangle((x_1^i, y_1^i), (x_2^i, y_2^i)) $$ we have \begin{align*} hull2(r_1, hull2(r_2, r_3)) &= hull2(r_1, Rectangle((min(x_1^2, x_1^3), min(y_1^2, y_1^3)), (max(x_2^2, x_2^3), max(y_2^2, y_2^3))))\\ &= Rectangle((min(x_1^1, min(x_1^2, x_1^3)), min(y_1^1, min(y_1^2, y_1^3))), (max(x_2^1, max(x_2^2, x_2^3)), max(y_2^1, max(y_2^2, y_2^3))))\\ &= Rectangle((min(x_1^1, x_1^2, x_1^3), min(y_1^1, y_1^2, y_1^3)), (max(x_2^1, x_2^2, x_2^3), max(y_2^1, y_2^2, y_2^3))) \end{align*} Note that at this step, the order of $r_1, r_2, r_3$ does not matter; $min$ and $max$ are both associative and commutative, so the associativity and commutativity of $hull2$ are visible here! We reduce from here to prove associativity, and the same steps are followed for commutativity.
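As an aside for this answer: a minimal Python sketch of the hull2 merge discussed above (the tuple-based Rectangle encoding and all names here are illustrative assumptions, not part of the original Scala exercise):

```python
# Sketch (assumption): a rectangle is a pair of corners ((x1, y1), (x2, y2)),
# with (x1, y1) the lower-left corner and (x2, y2) the upper-right corner.

def hull2(r, s):
    (rx1, ry1), (rx2, ry2) = r
    (sx1, sy1), (sx2, sy2) = s
    # Smallest axis-aligned rectangle enclosing both inputs:
    # componentwise min on the lower-left, componentwise max on the upper-right.
    return ((min(rx1, sx1), min(ry1, sy1)), (max(rx2, sx2), max(ry2, sy2)))

r1 = ((0, 0), (2, 2))
r2 = ((1, -1), (3, 1))
r3 = ((-2, 0), (0, 4))

# Associativity and commutativity, mirroring the algebraic argument above.
assert hull2(r1, hull2(r2, r3)) == hull2(hull2(r1, r2), r3)
assert hull2(r1, r2) == hull2(r2, r1)
```

Because min and max are themselves associative and commutative, the asserts hold for any choice of rectangles, which is exactly what makes hull2 safe to use in a parallel reduction.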
\begin{align*} hull2(r_1, hull2(r_2, r_3)) &= Rectangle((min(min(x_1^1, x_1^2), x_1^3), min(min(y_1^1, y_1^2), y_1^3)), (max(max(x_2^1, x_2^2), x_2^3), max(max(y_2^1, y_2^2), y_2^3)))\\ &= hull2(hull2(r_1, r_2), r_3) \end{align*} For commutativity, \begin{align*} hull2(r_1, r_2) &= Rectangle((min(x_1^1, x_1^2), min(y_1^1, y_1^2)), (max(x_2^1, x_2^2), max(y_2^1, y_2^2))) \\ &= Rectangle((min(x_1^2, x_1^1), min(y_1^2, y_1^1)), (max(x_2^2, x_2^1), max(y_2^2, y_2^1))) \\ &= hull2(r_2, r_1) \end{align*}",0.1,M1_preference_data_120 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! To evaluate your system, you decide to hold out some of the columns you have previously written and use them as an evaluation set. After generating new columns using the same titles as these held-out columns, you decide to evaluate their quality. What would be an advantage of using a model-based metric?","No, in general adding noise or reducing the precision is not very effective since repeating probes multiple times would still leak the desired information. Prime + probe may not work or be more difficult, but other attacks would still be possible.",0.1,M1_preference_data_121 "Assume that while working on a new feature for your team's product, your colleague is required to write a function that takes a list of events and sorts them by their timestamp.
Using their algorithm course knowledge, they remind you that merge sort's complexity is $O(n \log n)$, which is better than the $O(n^2)$ worst-case complexity of quick sort, and that they will therefore use the former to solve their task. What do you think of your colleague's approach?",Those sentences are not 'grammatically' (syntactically) correct. They should be filtered out at the syntactic level using a (phrase-structure) grammar.,0.1,M1_preference_data_122 "Does the disparity in class proportions hurt the model? If yes, how can you fix it? If not, justify the reasons behind your choice. Hint: The learning objective of a classifier can be modified by altering the importance of each class in the computation of the loss function. Base your answer on the following confusion matrix: |precision | recall | f1-score | support| |-|-|-|-| | 0 | 0.973 | 0.997 | 0.985 | 330| | 1 | 0.750 | 0.250 | 0.375 | 12|","Yes, because the long-latency multiplication prevents pipelining; a new iteration can start only when the previous if-condition is computed and the value of b is updated (if needed).",0.1,M1_preference_data_123 " Design and analyze a polynomial-time algorithm for the following problem: \begin{center} \begin{boxedminipage}[t]{0.83\textwidth} \begin{description} \item[Input:] a vertex set $V$. \item[Output:] vertex subsets $S_1, S_2, \ldots, S_\ell \subseteq V$ with the following property:\\[2mm] For every set of edges $E\subseteq {V \choose 2}$, there is an $i\in \{1,2, \ldots, \ell\}$ such that \begin{align*} |\{e\in E: |e\cap S_i| = 1\}| \geq |E|/2\,, \end{align*} i.e., $S_i$ cuts at least half the edges in $G = (V,E)$. \end{description} \end{boxedminipage} \end{center} We remark that, since your algorithm should run in time polynomial in $n=|V|$, it can output at most polynomially (in $n$) many vertex sets. We also emphasize that the algorithm does \textbf{not} take the edge set $E$ as input.
{\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in time polynomial in $n$, and (iii) prove that the output satisfies the property given in the problem statement. Recall that you are allowed to refer to material covered in the lecture notes.)}","def euclidean_distance(v1, v2): """""" It computes the Euclidean distance between two vectors. :param v1: First vector (numpy array). :param v2: Second vector (numpy array). :return: Euclidean distance (float) """""" return np.linalg.norm(v1 - v2) def knn(doc_vectors, query_vector, k=10): """""" It finds the `k` nearest documents to the given query (based on Euclidean distance). :param doc_vectors: An array of document vectors (np.array(np.array)). :param query_vector: Query representation (np.array) :return: List of document indices (list(int)) """""" dist_scores = [(i, euclidean_distance(np.array(doc), np.array(query_vector))) for i, doc in enumerate(doc_vectors)] dist_scores = sorted(dist_scores, key=lambda a: a[1]) top_k_docs = [i for i in list(zip(*dist_scores[0:k]))[0]] return top_k_docs",0.1,M1_preference_data_124 Implement weighting estimation of kNN classification,"Every process initializes a balance[] array with the initial, agreed upon balances of each process. Upon requesting a payment, a process TO-broadcasts a message [PAY, source, recipient, amount]. Upon TO-delivering a message [PAY, source, recipient, amount], a process verifies if balance[source] is at least amount. If so, it subtracts amount from balance[source] and adds it to balance[recipient]. Since every process receives the same sequence of operations, the outcome of each operation is the same at every process. Correct processes successfully issue operations and agree on the balance of every process at any point in time.",0.1,M1_preference_data_125 "Two excellent students, Alice from EPFL and Bob from MIT, have both built their own spam filters.
A spam filter is an algorithm that takes as input an email and outputs $1$ if the email is spam and $0$ otherwise. Alice and Bob now want to compare their two spam filters. To perform the comparison, they both download the same huge data set consisting of $n$ emails out of which some are spam. Alice then runs her spam filter on the data set to obtain $a_1, a_2, \ldots, a_n$ where $a_i \in \{0,1\}$ is the output of her spam filter on the $i$-th email in the data set. Similarly, Bob runs his spam filter on the data set to obtain $b_1, b_2, \ldots, b_n$ where $b_i \in \{0,1\}$ is the output of his spam filter on the $i$-th email in the data set. Their goal is then to determine whether their outputs are the same. An issue that they face is that $a_1, a_2,\ldots, a_n$ are stored on Alice's computer and $b_1, b_2, \ldots, b_n$ are stored on Bob's computer. They thus need to transfer (or communicate) information to solve the problem. A trivial solution is for Alice to transfer all her outputs $a_1, a_2,\ldots, a_n$ to Bob who then performs the comparison. However, this requires Alice to send $n$ bits of information to Bob; an operation that is very costly for a huge data set. In the following, we use randomization to achieve a huge improvement on the number of bits transferred between Alice and Bob. \\[0mm] Specifically, motivated by something called pseudo-random generators, we assume that Alice and Bob have access to the same randomness (called shared randomness). That is, Alice and Bob have access to the same infinite stream of random bits $r_1, r_2, \ldots$. Your task is now to use this shared randomness to devise a randomized protocol of the following type: \begin{itemize} \item As a function of $a_1, a_2, \ldots, a_n$ and the random bits $r_1, r_2, \ldots$, Alice computes a message $m$ that consists of only $2$ bits. She then transmits this $2$-bit message $m$ to Bob.
\item Bob then, as a function of $b_1, b_2, \ldots, b_n$, the message $m$, and the random bits $r_1, r_2, \ldots$, outputs \textsc{Equal} or \textsc{Not Equal}. \end{itemize} Bob's output is correct if he outputs $\textsc{Equal}$ when $a_i = b_i$ for all $i\in \{1,\ldots, n\}$ and $\textsc{Not Equal}$ otherwise. Your protocol should ensure that Bob outputs the correct answer with probability at least $2/3$, where the probability is over the random bits $r_1, r_2, \ldots $.\\ {\em (In this problem you are asked to (i) explain how Alice computes the message $m$ of $2$ bits (ii) explain how Bob calculates his output, and (iii) prove that Bob's output is correct with probability at least $2/3$. A correct solution where Alice sends a message $m$ of $O(\log n)$ bits is rewarded $12$ points. Recall that you are allowed to refer to material covered in the lecture notes.) }\\ \noindent {\small An interesting fact (but unrelated to the exam) is that any correct deterministic strategy would require Alice and Bob to send $n$ bits of information.}","The function f must be associative. That is, for any x, y, z, it should be the case that: f(x, f(y, z)) == f(f(x, y), z). Both the min and max functions are associative. In addition, it can be easily shown that pairwise application of associative functions is also associative. From this follows that f is indeed associative.",0.1,M1_preference_data_126 "For this homework you will use a dataset of 18,403 music reviews scraped from Pitchfork¹, including relevant metadata such as review author, review date, record release year, review score, and genre, along with the respective album's audio features pulled from Spotify's API. The data consists of the following columns: artist, album, recordlabel, releaseyear, score, reviewauthor, reviewdate, genre, key, acousticness, danceability, energy, instrumentalness, liveness, loudness, speechiness, valence, tempo. Create a new dataframe containing one row per 1st-2nd album pair. 
The dataframe should contain the columns: score_diff: the difference in scores between the second and the first album (second - first). time_diff: the number of days elapsed between the first and the second album. did_style_change: a dummy variable that indicates whether the style of the music has changed. To obtain it, first, calculate the standardized euclidean distance of music-related numerical features¹ between the second and the first album. Second, assign 1 to the 20% most distant 1st-2nd album pairs and 0 to all others.","a few hints: • there is no theoretical proof nor unique optimal solution in NLP • so as to have an objective (not subjective) quantitative (not qualitative) measure • it helps clarify, even specify, the objectives to be reached • allows monitoring variability over time (task shift, for whatever reasons, e.g. change in vocabulary) • feedback loop (propose clues where it can help the best)",0.1,M1_preference_data_127 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Under which condition(s) on z, f, and g does aggregate always lead to the same result? Come up with a formula on z, f, and g that implies the correctness of aggregate. Hint: You may find it useful to use calls to foldLeft(z)(f) in your formula(s).","Since the number of registers is limited, at some point the addresses will wrap around and the pointed registers would not be free but would contain useful values. One could think of using an interrupt to invoke some handler which saves those values on the stack or could do this in hardware directly. Itanium chooses the second way (other processors in the past such as Sun's Sparc did the former).",0.1,M1_preference_data_128 "Consider a country with n ≥ 2 cities.
For every pair of different cities x, y, there exists a direct route (single direction) either from x to y or from y to x. Show that there exists a city that we can reach from every other city either directly or through exactly one intermediate city.","""Backend timeout"" is an abstraction leak because it reveals an implementation detail irrelevant to the abstraction: that the server has a backend. The other two are fundamental concepts in the abstraction, thus not leaks.",0.1,M1_preference_data_129 "Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Indicate what type of constraints are (resp. are not) taken into account by the grammar $G$, and, for each constraint type mentioned, provide illustrative examples.","Consider the directed graph $G' = (V,E')$ obtained from $G$ by replacing every edge $\{u,v\} \in E$ by the two arcs $e_1=(u,v)$ and $e_2=(v,u)$. If $e \in A'$, we assign weight $w_e=n^2+1$ to it, otherwise we set $w_e=n^2$. Let $\delta^{+}(v)=\{u\in V : (v,u) \in E'\}$ denote the set of outgoing edges of $v$ in $G'$ and $\delta^{-}(v)=\{u\in V : (u,v) \in E'\}$ be the set of incoming edges of $v$ in $G'$. With the arc set $E'$ as ground set we define two partition matroids $\mathcal{M}_1$ and $\mathcal{M}_2$: \begin{itemize} \item To be independent in $\mathcal{M}_1$ one can take at most one of $\{(u,v), (v,u)\}$ for every $\{u,v\} \in E$, i.e., \begin{align*} \mathcal{I}_1 = \{F \subseteq E': |F \cap \{(u,v), (v,u)\} | \leq 1\mbox{ for all $\{u,v\}\in E$}\}\,. \end{align*} This matroid enforces the constraint that each edge should be oriented in one direction. 
\item To be independent in $\mathcal{M}_2$, one can take at most $\frac{1}{2}\text{deg}(v)$ arcs among the set $\delta^{+}(v)$ of outgoing arcs for every $v$: \begin{align*} \mathcal{I}_2 = \{F \subseteq E': |F \cap \delta^+(v)| \leq \frac{1}{2}\text{deg}(v) \mbox{, for all $v\in V$}\}\,. \end{align*} \end{itemize} Let solution $S$ be the maximum weight independent set in the intersection of the two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$. Now we prove that a solution $S$ is feasible if and only if it is independent in both $\mathcal{I}_1$ and $\mathcal{I}_2$. First observe that any solution with maximum weight also has the maximum cardinality. Every solution of size $k$ has weight at most $k\cdot (n^2+1)$, whereas any solution of size $k+1$ has weight at least $(k+1)n^2$, which is larger than any solution of size at most $k$. Thus the maximum weighted solution has maximum size, i.e., $|A'|$. Now we prove that any solution (with maximum cardinality) that is independent in $\mathcal{I}_2$ satisfies both indegree and outdegree constraints. Suppose $F \in \mathcal{I}_2$ and $|F|=|A'|$. Thus we have \[|A'|=\sum_{v \in V} |F \cap \delta^+(v)| \leq \sum_{v \in V} \frac{1}{2}\text{deg}(v) = |A'| \text{.}\] Thus for all $v \in V$, we have $|F \cap \delta^+(v)| = \frac{1}{2}\text{deg}(v)$, so that $|F \cap \delta^-(v)| = \frac{1}{2}\text{deg}(v)$. Thus $F$ is a feasible solution to the problem. Recall that solution $S$ has the maximum weight among all feasible solutions. Thus $S$ has maximum cardinality, and among all the feasible solutions with the same cardinality, $S$ maximizes $|E'\cap A'|$. By the theorem of Edmonds and Lawler, there is a polynomial-time algorithm for finding a maximum weight independent set in the intersection of the two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$.",0.1,M1_preference_data_130 "Freshly graduated from EPFL, you have been hired as contractors for a successful and rapidly growing bank.
The bank has been experiencing problems with their money management system recently, which is written in Scala, and so they hired the best and brightest young engineer they could find: you! The system had been working perfectly fine so far, they tell you. In the past days, due to an increased number of customers, they had to switch from a single-threaded sequential execution environment to a multi-threaded concurrent one, in which the threads may perform transactions concurrently. That's when problems started, your manager says... Here is the code responsible for withdrawing money from the account from and transferring it to the account to, within the same bank: def transfer(from: Account, to: Account, amount: BigInt): Unit = { require(amount >= 0) val balanceFrom = from.balance if (balanceFrom >= amount) { from.balance = balanceFrom - amount val balanceTo = to.balance to.balance = balanceTo + amount } } For the bank, it is very important that the following two properties hold after any sequence of completed transfer transactions: The balance of an account never goes below 0. The total sum of money held by the bank is constant. Does the transfer method above respect the two properties in a sequential execution environment, that is, when there is only one thread in the program?","The bug item is properly specified, and is therefore suitable to be submitted.",0.1,M1_preference_data_131 "If process i fails, then eventually all processes j≠i fail Is the following true? If no process j≠i fails, then process i has failed","No, it is almost certain that it would not work. On a dynamically-scheduled processor, the user is not supposed to see the returned value from a speculative load because it will never be committed; the whole idea of the attack is to make speculative use of the result and leave a microarchitectural trace of the value before the instruction is squashed.
In Itanium, the returned value of the speculative load instruction is architecturally visible and checking whether the load is valid is left to the compiler which, in fact, might or might not perform such a check. In this context, it would have been a major implementation mistake if the value loaded speculatively under a memory access violation were the true one that the current user is not allowed to access; clearly, the implementation of Itanium must have done deliberately what AMD does only per chance in their dynamically scheduled processors---that is, Itanium must check for permission issues before returning the value into the destination register.",0.1,M1_preference_data_132 "The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns: year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. Create two fields called has_top_company and has_top_institution. The field has_top_company equals 1 if the article contains an author in the following list of companies [""Facebook"", ""Google"", ""Microsoft"", ""Deepmind""], and 0 otherwise. The field has_top_institution equals 1 if the article contains an author in the top 10 institutions according to CSRankings.","Recall that, in the Hedge algorithm we learned in class, the total loss over time is upper bounded by $\sum_{t = 1}^T m_i^t + \frac{\ln N}{\epsilon} + \epsilon T$. In the case of investments, we want to do almost as good as the best investment. Let $g_i^t$ be the fractional change of the value of $i$'th investment at time $t$. I.e., $g_i^t = (100 + change(i))/100$, and $p_i^{t+1} = p_i^{t} \cdot g_i^t$. Thus, after time $T$, $p_i^{T+1} = p_i^1 \prod_{t = 1}^T g_i^t$. To get an analogous bound to that of the Hedge algorithm, we take the logarithm. The logarithm of the total gain would be $\sum_{t=1}^T \ln g_i^t$. 
To convert this into a loss, we multiply this by $-1$, which gives a loss of $\sum_{t=1}^T (- \ln g_i^t)$. Hence, to do almost as well as the best investment, we set our cost vectors to $m_i^t = - \ln g_i^t$. Now, from the analysis of the Hedge algorithm in the lecture, it follows that for all $i \in [N]$, $$\sum_{t = 1}^T p^{(t)} \cdot m^{(t)} \leq \sum_{t = 1}^{T} m^{(t)}_i + \frac{\ln N}{\epsilon} + \epsilon T.$$ Taking the exponent on both sides, we have that \begin{align*} \exp \left( \sum_{t = 1}^T p^{(t)} \cdot m^{(t)} \right) &\leq \exp \left( \sum_{t = 1}^{T} m^{(t)}_i + \frac{\ln N}{\epsilon} + \epsilon T \right)\\ \prod_{t = 1}^T \exp( p^{(t)} \cdot m^{(t)} ) &\leq \exp( \ln N / \epsilon + \epsilon T) \prod_{t = 1}^T \exp(m^{(t)}_i) \\ \prod_{t = 1}^T \prod_{i \in [N]} (1 / g_i^t)^{p^{(t)}_i} &\leq \exp( \ln N / \epsilon + \epsilon T) \prod_{t = 1}^{T} (1/g^{(t)}_i) \end{align*} Taking the $T$-th root on both sides, \begin{align*} \left(\prod_{t = 1}^T \prod_{i \in [N]} (1 / g_i^t)^{p^{(t)}_i} \right)^{(1/T)} &\leq \exp( \ln N / \epsilon T + \epsilon ) \left( \prod_{t = 1}^{T} (1/g^{(t)}_i) \right)^{(1/T)}. \end{align*} This can be interpreted as saying that the weighted geometric mean of the loss is not much worse than the loss of the best performing investment.",0.1,M1_preference_data_133 "You want to create an application that allows users to manage their e-books. These books will be stored in a local database, with attributes like name, file, etc. In addition, your application will allow adding notes on books, which will be stored separately in the database, and sending a book with its notes by e-mail to friends, who can import the book and the notes into the app. What modules would you define?","Every process sends its proposal (COMMIT / ABORT) to p using point-to-point links. p collects all the proposals. If it detects (with the perfect failure detector) that any process crashed, or any process proposes ABORT, then it unilaterally decides to ABORT.
Otherwise, it unilaterally decides to COMMIT. p uses Best-Effort Broadcast to send its decision to every other process. If p does not crash, every correct process eventually receives p’s decision and decides accordingly.",0.1,M1_preference_data_134 "Consider the LP-rounding algorithm for Set Cover that works as follows: \begin{enumerate} \item Solve the LP relaxation to obtain an optimal solution $x^*$. \item Return the solution $\{S: x^*_S >0\}$, i.e., containing all sets with a positive value in the fractional solution. \end{enumerate} Use the complementarity slackness conditions to prove that the algorithm is an $f$-approximation algorithm, where $f$ is the frequency (i.e., the maximum number of sets that any element belongs to).","We solve the problem using the ""deferred decision"" technique. First, note that one simply has $$\sum_{e\in S}x_e=\sum_{e\in S\setminus T}x_e +\sum_{e\in S\cap T} x_e$$ and $$\sum_{e\in T}x_e=\sum_{e\in T\setminus S}x_e +\sum_{e\in T\cap S} x_e.$$ So, we have $$\Pr\left[\sum_{e\in S}x_e=\sum_{e\in T}x_e\mid S\ne T\right]=\Pr\left[\sum_{e\in S\setminus T}x_e=\sum_{e\in T\setminus S}x_e\mid S\ne T\right].$$ Note that since $S\ne T$, $(S\setminus T)\cup (T\setminus S)\ne \emptyset$, and therefore $\exists f \in (S\setminus T)\cup (T\setminus S)$. Without loss of generality, suppose that $f\in (S\setminus T)$. Then, $$\Pr\left[\sum_{e\in S\setminus T}x_e=\sum_{e\in T\setminus S}x_e\mid S\ne T\right]=\Pr\left[ x_f=\sum_{e\in T\setminus S}x_e-\sum_{e\in (S\setminus T)\setminus \{f\}}x_e\mid S\ne T\right].$$ At this point, assume that we know the values of $x_e$ for all $e\in E\setminus \{f\}$, so that $c:=\sum_{e\in T\setminus S}x_e-\sum_{e\in (S\setminus T)\setminus \{f\}}x_e$ is fixed. We just need to note that $\Pr\left[x_f=c |S\ne T\right]=\Pr\left[x_f=c \right]\le \frac{2}{3}$ for any $c\in \mathbb{R}$ by the assumption of the question (line 1 of \textsc{Random-Check($S,T$)}). 
So, the claim holds.",0.1,M1_preference_data_135 "What is predication and why is it (almost) universal in VLIW processors? Could it make sense also in a RISC processor? Why?","$|h - \max(h_1, h_2)| \leq 1$",0.1,M1_preference_data_136 "Assume you are part of a team developing a mobile app using Scrum. At the last sprint planning, you were assigned the task of adding a new authentication method. However, a customer representative just sent you an email: ""the representative believes authentication is less important than support for right-to-left languages, and would like you to work on that instead."" Explain in 1 sentence what you should do:",This always leads to the same result.,0.1,M1_preference_data_137 "Give well chosen examples of applications that can be evaluated with the single metric derived from Precision/Recall and illustrate: • a situation where more weight should be given to Precision; • a situation where more weight should be given to Recall.","There are two solutions (the first refers to material covered in class). {\bf Solution 1:} In class we designed LSH family $\mathcal{H}_1$ of hash functions $h: \{0,1\}^d \rightarrow \{0,1\}$ so that \begin{align} \label{eq:hashg1} \Pr_{h \sim \mathcal{H}_1}[h(p) = h(q)] = \left( 1-\frac{\dist(p,q)}{d} \right)\,. \end{align} Now define, similar to the construction of ANNS data structure, a new hash family $\mathcal{H}$ defined as follows: \begin{itemize} \item Select $h_1$ and $h_2$ uniformly at random from $\mathcal{H}_1$. \item Define $h: \{0,1\}^d \to \{0,1,2,3\}$ by $h(p) = x$ where $x$ equals the number in $\{0,1,2,3\}$ whose binary representation is $h_1(p)h_2(p)$, i.e., $x= 2h_1(p) + h_2(p)$. 
\end{itemize} For any $p, q \in \{0,1\}^d$ we thus have \begin{align*} \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] & = \Pr_{h_1,h_2 \in \mathcal{H}_1}[h_1(p) = h_1(q) \wedge h_2(p) = h_2(q)] \\ & = \Pr_{h_1\in \mathcal{H}_1}[h_1(p) = h_1(q)] \Pr_{h_2\in \mathcal{H}_1}[h_2(p) = h_2(q)] \qquad \mbox{($h_1$ and $h_2$ are selected independently)}\\ & = \left( 1-\frac{\dist(p,q)}{d} \right)^2\,. \qquad \mbox{(by~\eqref{eq:hashg1})} \end{align*} {\bf Solution 2:} Define $\mathcal{H}$ as follows: \begin{itemize} \item Select coordinates $i,j \in [d]$ independently uniformly at random. \item Define $h:= h_{ij}(p) = x$ where $x$ equals the number in $\{0,1,2,3\}$ whose binary representation is $p_i p_j$, i.e., $x= 2p_i + p_j$. \end{itemize} We have that \begin{align*} \Pr_{h\in \mathcal{H}}[h(p) = h(q)] & = \Pr_{i,j\in [d]}[ p_i = q_i \wedge p_j = q_j] \\ & = \Pr_{i\in [d]}[p_i = q_i] \cdot \Pr_{j\in [d]}[p_j = q_j] \qquad \mbox{(by independence of $i$ and $j$)} \\ & = \left(\Pr_{i\in [d]}[p_i = q_i]\right)^2 \\ & = \left( \frac{|\{\ell: p_{\ell} = q_{\ell}\}|}{d} \right)^2 \\ & = \left( \frac{d - |\{\ell: p_{\ell} \neq q_{\ell}\}|}{d} \right)^2\\ & = \left(1 - \frac{\dist(p,q)}{d} \right)^2\,. \end{align*} \grading{ \begin{itemize} \item 9 pts for the design of the family (6pts for ``correct'' selection of hash function, 3pts for mapping it to 0,1,2,3) \item 6 pts for analysis (3 pts for independent and 3 pts for the correct analysis of ``single'' hash function) \end{itemize} If everything is correct except forgot to say independent in analysis -2pts. }",0.1,M1_preference_data_138 "/True or false:/ Is the following statement true or false? Justify your answer. ""The node with the highest clustering coefficient in an undirected graph is the node that belongs to the largest number of triangles.:""","Any nonterminating exception, including asynchronous ones. 1. A TLB miss exception. 2. 
An IO exception.",0.1,M1_preference_data_139 "If process i fails, then eventually all processes j≠i fail Is the following true? If all processes j≠i fail, then process i has not failed,","Changing the parameter type from ""int"" to its wrapper class ""Integer"" will break backward binary compatibility for the reasons explained previously, but it remains source compatible thanks to Java's autoboxing capability.",0.1,M1_preference_data_140 "Assume that some of your colleagues work on an AI-based image generation service, where a user enters a topic, and the AI generates a synthetic photo on that topic. They tell you the following about this service: ""Currently, the user types in the topic they want to see images for, and the client app sends a request to the server with the user ID and the indicated topic. The server generates an image, which takes a second or so, and sends it to the client app, which then requests another image on the same topic, and so on, until the app has received 9 images. It then displays these in a 3x3 grid. The user now looks at the 9 images and, if they see an inappropriate one, they click on a button that causes the app to send a review request to the server. Human moderators then process each report, and data scientists tweak the AI model to avoid generating images similar to the ones reported as inappropriate. Users then get a notification that their report was processed. The whole reporting process typically takes a day."" One colleague remarks that the ""report an image for moderation"" feature currently starts by spending 10 seconds in the background on the client side, and they have a way to speed this step up by 90%. In comparison, the optimizations you have devised for image generation would save around 30% of the current 10 seconds it takes for an entire image grid. 
Explain in 1-2 sentences whether the team should prioritize optimizing the ""report an image for moderation"" function over image generation:","The figure shows the Bayes net corresponding to this factorization. Note that the path from $X_{1}$ to $X_{3}$ via $X_{4}$ is not blocked since it is head to head and we are conditioning on $X_{5}$, and $X_{5}$ is a child of $X_{4}$. The statement is therefore in general not true.",0.1,M1_preference_data_141 Implement a function that computes the support for each provided itemset by counting the number of its occurrences in the original dataset of transactions. You can use the following formula: $$\mathrm{supp}(X) = \frac{|\{t \in T; X \subseteq t\}|}{|T|}$$ ,"Let $S$ be a minimum $s,t$-cut; then the number of edges cut by $S$ is $\opt$. We shall exhibit a feasible solution $y$ to the linear program such that the value of $y$ is $\opt$. This then implies that $\optlp \leq \opt$, as the minimum value of a solution to the linear program is at most the value of $y$. Define $y$ as follows: for each $e\in E$ \begin{align*} y_e = \begin{cases} 1 & \mbox{if $e$ is cut by $S$,}\\ 0 & \mbox{otherwise.} \end{cases} \end{align*} Notice that, by this definition, $\sum_{e\in E} y_e = \opt$. We proceed to show that $y$ is a feasible solution: \begin{itemize} \item for each $e\in E$, we have $y_e \geq 0$; \item for each $p\in P$, we have $\sum_{e\in p} y_e \geq 1$ since any path from $s$ to $t$ must exit the set $S$. Indeed, $S$ contains $s$ but it does not contain $t$, and these edges (that have one end point in $S$ and one end point outside of $S$) have $y$-value equal to $1$. \end{itemize}",0.1,M1_preference_data_142 "The company in which you work has just hired a new CTO, freshly graduated from a theoretical university. The CTO decides that in order to minimize bugs in the product, all new code must now be covered at least 80% in terms of paths in the code. Is this a good idea, and why? 
Can you suggest something better, given the CTO's objective? (the answer must be based on the definition of path coverage.)","ys.filter(_ < 100) .flatMap( y => xs.filter(_ < 20).map(x => if y < x then 0 else y - x) )",0.1,M1_preference_data_143 "It is often desirable to be able to express the performance of an NLP system in the form of a single number, which is not the case when the Precision/Recall framework is used. Indicate what scores can be used to convert Precision/Recall measures into a unique number. For each score, give the corresponding formula.",None of them,0.1,M1_preference_data_144 "The MIPS R10000 fetches four instructions at once and, therefore, there are four such circuits working in parallel inside the processor. Describe very briefly the function of the ``FP map'', of the ``Floating-point queue'', and of the ``Active list''. If applicable, feel free to describe them using other generic terms or names for these structures used in the course. ","1. Nonterminating exceptions require to be precise because one must be able to continue execution from a well-defined state (e.g., by jumping at a particular location with respect to the exception PC). 2. I/O interrupts, TLB misses, timer interrupts, system calls, debug interrupts.",0.1,M1_preference_data_145 "Can one easily adapt the Spectre attack to Itanium? If so, give some hints on how different it will be from the classic attack and how potential victims could protect sensitive parts of their code. If not, explain why it is not possible. ","Let $S = \{i\in V: v_2(i) \leq 0\}$. Note that $S \neq \emptyset$ and $S \neq V$ since $v_2 \perp v_{1}$ and $v_2 \not= \mathbf{0}$. In class we saw that if $\lambda_2 =1$ then all vertices in a connected component must receive the same value by the second eigenvector $v_2$. In particular adjacent vertices receive the same value. 
It follows that no vertex $i$ with $v_2(i) \leq 0$ is adjacent to a vertex $j$ with $v_2(j) >0$, and so no edge crosses the cut defined by $S$. For a different proof that doesn't rely on orthogonality, note that it is enough to choose $S$ to be the set of all vertices with their $v_2$ values equal to some value (that occurs in $v_2$). For example $S = \{ i \in V : v_2(i) = v_2(1)\} $ - this is the set of all vertices whose second-eigenvector value is equal to that of vertex $1$. Now $S\not= \emptyset$ since the vertex $1$ is contained in $S$. Also $S\not=V$ since that would mean that all vertices have the same $v_2$ value, but this cannot happen since $v_2 \not= v_1$ (more precisely $v_2$ cannot be a multiple of $v_1$). From what was written above, all vertices $v\not \in S$ have $v_2$ values different from $v_2(1)$, and as such cannot be connected to vertices in $S$. This means that $S$ defines a cut, as requested. Note that the spectral graph partitioning algorithm seen in class uses the edges of the graph so doesn't work directly.",0.1,M1_preference_data_146 "Consider the following CFG \(\text{S} \rightarrow \text{NP VP PNP}\) \(\text{NP} \rightarrow \text{Det N}\) \(\text{NP} \rightarrow \text{Det Adj N}\) \(\text{VP} \rightarrow \text{V}\) \(\text{VP} \rightarrow \text{Aux Ving}\) \(\text{VP} \rightarrow \text{VP NP}\) \(\text{VP} \rightarrow \text{VP PNP}\) \(\text{PNP} \rightarrow \text{Prep NP}\) and the following lexicon: the:Det, red:Adj, cat:N, is:Aux, meowing:Ving, on:Prep, roof:N The next four questions ask you the content of a given cell of the chart used by the CYK algorithm (used here as a recognizer) for the input sentence the red cat is meowing on the roof Simply answer ""empty'' if the corresponding cell is empty and use a comma to separate your answers when the cell contains several objects. What is the content of the cell at row 3 column 1 (indexed as in the lectures)?","Data is centered, i.e. 
$\E[\xv] = \mathbf{0}$ or, in other words, $\frac1N \sum_{n=1}^N \xx_n = \mathbf{0}$ or $\frac1N \sum_{n=1}^N x_{nd} = 0$ $\forall d$. ",0.1,M1_preference_data_147 "Consider the following definition of trees representing higher-order functions, as well as a recursive function subst0. 1 enum Expr: 2 case C(c: BigInt) 3 case N(name: String) 4 case BinOp(op: BinOps, e1: Expr, e2: Expr) 5 case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr) 6 case Call(fun: Expr, arg: Expr) 7 case Fun(param: String, body: Expr) 8 9 import Expr._ 10 11 enum BinOps: 12 case Plus, Minus, Times, Power, LessEq 13 14 def subst0(e: Expr, n: String, r: Expr): Expr = e match 15 case C(c) => e 16 case N(s) => if s == n then r else e 17 case BinOp(op, e1, e2) => 18 BinOp(op, subst0(e1, n, r), subst0(e2, n, r)) 19 case IfNonzero(cond, trueE, falseE) => 20 IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r)) 21 case Call(f, arg) => 22 Call(subst0(f, n, r), subst0(arg, n, r)) 23 case Fun(formal, body) => 24 if formal == n then e 25 else Fun(formal, subst0(body, n, r)) And consider the following expression: 1 val e = Call(N(""exists""), Fun(""y"", Call(Call(N(""less""), N(""x"")), N(""y"")))) What is subst0(e, ""x"", N(""y"")) equal to?","Returns the sum of all elements in the input list",0.1,M1_preference_data_148 Explain why any fail-noisy consensus algorithm (one that uses an eventually perfect failure detector ◇P) actually solves uniform consensus (and not only the non-uniform variant).,"def get_idf(vocabulary, documents): """""" It computes IDF scores, storing idf values in a dictionary. :param documents: list of list of str, with the tokenized tweets. :param vocabulary: dict with the vocabulary (computed in 1.1) and each term's frequency. :return: dict with the terms as keys and values the idf for each term. 
"""""" idf = dict() num_documents = len(documents) for i, term in enumerate(vocabulary): idf[term] = math.log(num_documents/sum(term in document for document in documents), math.e) return idf",0.1,M1_preference_data_149 "Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol): \(R_{01}: \text{S} \rightarrow \text{NP VP}\) \(R_{02}: \text{NP} \rightarrow \text{NP0}\) \(R_{03}: \text{NP} \rightarrow \text{Det NP0}\) \(R_{04}: \text{NP0} \rightarrow \text{N}\) \(R_{05}: \text{NP0} \rightarrow \text{Adj N}\) \(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\) \(R_{07}: \text{VP} \rightarrow \text{V}\) \(R_{08}: \text{VP} \rightarrow \text{V NP}\) \(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\) \(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\) complemented by the lexicon \(L\): a : Det blue : Adj, N drink : N, V drinks : N, V friends : N from : Prep gave : V letter : N my : Det neighbor : N nice : Adj, N of : Prep postman : N ran : V the : Det to : PrepIndicate the number of non-terminals contained in the grammar \(G\):","Let the two feature maps be $\phi_{1}$ and $\phi_{2}$. Assume that they are of dimensions $d_{1}$ and $d_{2}$. Then $\phi$ is a feature map of dimension $d_{1} d_{2}$ of the form $$ \begin{aligned} \phi(\mathbf{x})^{\top}= & \left(\left(\phi_{1}(\mathbf{x})\right)_{1}\left(\phi_{2}(\mathbf{x})\right)_{1}, \cdots,\left(\phi_{1}(\mathbf{x})\right)_{1}\left(\phi_{2}(\mathbf{x})\right)_{d_{2}}, \cdots,\right. \\ & \left.\left(\phi_{1}(\mathbf{x})\right)_{d_{1}}\left(\phi_{2}(\mathbf{x})\right)_{1}, \cdots,\left(\phi_{1}(\mathbf{x})\right)_{d_{1}}\left(\phi_{2}(\mathbf{x})\right)_{d_{2}}\right) . \end{aligned} $$",0.1,M1_preference_data_150 "A sequential object is a tuple T = (Q, q0, O, R, ∆), where: ● Q is a set of states. ● q0 ∈ Q is an initial state. ● O is a set of operations. ● R is a set of responses. 
● ∆ ⊆ (Q × Π × O) × (Q × R) is a relation that associates a state, a process, and an operation to a set of possible new states and responses. Processes invoke operations on the object. As a result, they get responses back, and the state of the object is updated to a new value, following from ∆. Define a sequential object representing Asset Transfer, i.e., an object that allows processes to exchange units of currency."," Yes, it is needed to load the exception PC register if an exception is raised. The actual PC, by then, would have an arbitrary value.",0.1,M1_preference_data_151 "Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & \hspace{0.4cm}4x_1 - 6x_2 + 4x_3 \\ \textbf{subject to}\hspace{0.6cm} & x_1 - 3x_2 + x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{1.90cm}x_1 + s_2 = 8 \\ \hspace{0.8cm} & \hspace{0.65cm} 3x_2 + 2x_3 + s_3 = 6 \\ \hspace{0.8cm} &\hspace{-0.35cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + 3x_2 - x_3 - s_1 \\ s_2 &= 7 -3x_2 + x_3 + s_1 \\ s_3 &= 6 - 3x_2 - 2x_3 \\ \cline{1-2} z &= 4 + 6 x_2 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.","It breaks backward compatibility, because the signature changes and ""Document"" is not a special kind of ""String"" thus callers will have to be updated",0.1,M1_preference_data_152 "In an automated email router of a company, we want to make the distinction between three kind of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. 
Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. It has now lined up IP transit for streaming videos at a few gigabits per second Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. Suppose we have collected the following statistics $3^{3}$ about the word frequencies within the corresponding classes, where '0.00...' stands for some very small value: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & technical & financial & irrelevant & & technical & financial & irrelevan \\ \hline $\$<$ number $>$ & 0.01 & 0.07 & 0.05 & deal & 0.01 & 0.02 & $0.00 \ldots$ \\ \hline Dow & $0.00 \ldots$ & 0.08 & $0.00 \ldots$ & forecast & $0.00 \ldots$ & 0.03 & 0.01 \\ \hline GM & $0.00 \ldots$ & 0.03 & $0.00 \ldots$ & gigabit & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ \\ \hline IP & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ & hub & 0.06 & $0.00 \ldots$ & 0.01 \\ \hline Intel & 0.02 & 0.02 & $0.00 \ldots$ & network & 0.04 & 0.01 & $0.00 \ldots$ \\ \hline business & 0.01 & 0.07 & 0.04 & processor & 0.07 & 0.01 & $0.00 \ldots$ \\ \hline capacity & 0.01 & $0.00 \ldots$ & $0.00 \ldots$ & smartphone & 0.04 & 0.04 & 0.01 \\ \hline chipset & 0.04 & 0.01 & $0.00 \ldots$ & wireless & 0.02 & 0.01 & $0.00 \ldots$ \\ \hline company & 0.01 & 0.04 & 0.05 & sen & re & . & . \\ \hline \end{tabular} \end{center} In a typical NLP architecture, where/how would you store this information? Explicit your answer, e.g. 
provide an illustrative example."," Division by zero, hardware errors.",0.1,M1_preference_data_153 "According to your knowledge of English, split the following sentence into words and punctuation: M. O'Connel payed $ 12,000 (V.T.A. not included) with his credit card. Which of these words won't usually be in a standard lexicon? Justify your answer. Assuming separators are: whitespace, quote ('), full-stop/period (.), parenthesis, and that separators are kept as tokens, tokenize the former sentence. How would you propose to go from tokens to words? (propose concrete implementations)","df_train_r = df.sample(frac=0.7) df_test_r = df.loc[df.index.difference(df_train_r.index)]",0.1,M1_preference_data_154 "Use the integrality of the bipartite perfect matching polytope (as proved in class) to show the following classical result: \begin{itemize} \item[] The edge set of a $k$-regular bipartite graph $G=(A\cup B, E)$ can in polynomial time be partitioned into $k$ disjoint perfect matchings. \end{itemize} \noindent A graph is $k$-regular if the degree of each vertex equals $k$. Two matchings are disjoint if they do not share any edges."," ``FP map'' is the mapping table (indicating in which physical register each architectural register resides at a particular moment in time), ``Floating-point queue'' is the equivalent of a Reservation Station, and ``Active list'' is the equivalent of the Reorder Buffer.",0.1,M1_preference_data_155 "Consider the following context-free grammar, where S is the top-level symbol, upper-case letters denote non-terminals and lower case letters denote terminals: S → T A S → B A S → A B S → b A → A C A → a T → A B B → b C → c Except the first one, the next questions are based on filling the cells of the chart used by the CYK algorithm for the input sequence acbac. Consider the chart with naming of the cells as follows: CYK is used here for both recognising and analysing purposes. Based on your computation of the CYK, how many parse trees can be constructed for acbac? 
Give your answer as a numerical value."," When the processor enters the exception handler, all instructions before the one pointed to by the exception PC have been already executed and the ones after it are not. The situation of the one pointed to by the exception PC depends deterministically from the exception cause. (The answer is slightly imprecise, but this is the gist of what is expected.)",0.1,M1_preference_data_156 "Your team is developing a library that is mostly intended to be used by your company's own applications, but the library is nevertheless distributed via a public repo on GitHub. It contains the following java function: ""public InputStream convertToPdf(Document document) throws GoogleServerNotRespondingError"" This library has a maintainability problem due to using Google-specific errors. Describe in 1-2 sentences how to fix this problem:",Returns the number of elements in the input list,0.1,M1_preference_data_157 "Recall the Jaccard index that we saw in Exercise Set 10: Suppose we have a universe $U$. For non-empty sets $A,B \subseteq U$, the Jaccard index is defined as \begin{align*} J(A,B) = \frac{|A \cap B|}{|A \cup B|}\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: 2^U \rightarrow [0,1]$ such that for any non-empty sets $A, B\subseteq U$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(A) \neq h(B)] \begin{cases} \leq 0.01 & \mbox{if $J(A,B) \geq 0.99$,}\\ \geq 0.1 & \mbox{if $J(A,B) \leq 0.9$.} \end{cases} \end{align*} {\em (In this problem you are asked to explain the hash family and argue that it satisfies the above properties. Recall that you are allowed to refer to material covered in the course.)}","$cov = rac1N \Xm\Xm^ op \in\R^{D imes D}$. 
",0.1,M1_preference_data_158 "Describe the techniques that typical dynamically scheduled processors use to achieve the same purpose of the following features of Intel Itanium: (a) Predicated execution; (b) advanced loads---that is, loads moved before a store and explicit check for RAW hazards; (c) speculative loads---that is, loads moved before a branch and explicit check for exceptions; (d) rotating register file.","Alice and Bob can both apply the AMS sketch with constant precision and failure probability $1/n^2$ to their vectors. Then Charlie subtracts the sketches from each other, obtaining a sketch of the difference. Once the sketch of the difference is available, one can find the special word similarly to the previous problem.",0.1,M1_preference_data_159 "Let $y_1, y_2, \ldots, y_n$ be uniform random bits. For each non-empty subset $S\subseteq \{1,2, \ldots, n\}$, define $X_S = \oplus_{i\in S}\:y_i$. Show that the bits $\{X_S: \emptyset \neq S\subseteq \{1,2, \ldots, n\} \}$ are pairwise independent. This shows how to stretch $n$ truly random bits to $2^n-1$ pairwise independent bits. \\ \emph{Hint: Observe that it is sufficient to prove $\mathbb{E}[X_S] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$ to show that they are pairwise independent. Also use the identity $\oplus_{i\in A}\: y_i = \frac{1}{2}\left( 1 - \prod_{i\in A} (-1)^{y_i} \right)$.}","As a multitasker, I want the app to be usable with voice only, so that I can use it when my hands are busy.",0.1,M1_preference_data_160 "Your team is developing a library that is mostly intended to be used by your company's own applications, but the library is nevertheless distributed via a public repo on GitHub. 
It contains the following java function: ""public InputStream convertToPdf(Document document) throws GoogleServerNotRespondingError"" A security consultant points out that passing an arbitrary document can lead to security issues since the document could link to malicious content from the Internet, and your own application does not actually use this flexibility as it only prints plain text. The consultant suggests replacing the ""Document"" parameter by a ""String"". You agree, but identify some risk associated with this. Explain in one sentence the main problem that such a parameter change could lead to:","Differences are due to two subproducts: On one hand: $$ P(X \mid \mathrm{WPS}) \cdot P(\text { first } \mid X) \cdot P(Y \mid X) \cdot P(\text { adult } \mid Y) \cdot P(\mathrm{NN} \mid Y) $$ for $X$ either 'JJ' or 'RB' and $Y$ either 'JJ' or 'NN', and on the other hand: $$ P(X \mid \mathrm{RB}) \cdot P(\text { developed } \mid X) \cdot P(Y \mid X) \cdot P(\text { programs } \mid Y) $$ for $X$ either 'VBD' or 'VBN' and $Y$ either 'NNS' or 'VBZ'. NOTICE: \begin{enumerate} \item do not forget emission probabilities \item do not forget the right-hand part of each tag, e.g. for 'adult', not only $P(N N \mid R B)$ (for instance), but also $P(N N \mid N N)$ for the transition to 'tooth'. \end{enumerate}",0.1,M1_preference_data_161 Explain how it is possible to compute Precision at different Recalls.,"Somewhat counterintuitively, the property doesn't hold. To show this, let's take the following values for $L_1$, $L_2$, $T$, $c$, and $d$: $$\begin{cases} L_1 = 10,\\ L_2 = 12, \\ T = 11, \\ c = 1, \\ d = 1 \end{cases}$$ Using those values, we get that $D(L_1) = 10$ and $D(L_2) = \max(D(6), D(6)) + 1 = 7$.",0.1,M1_preference_data_162 "Review the notion of depth seen in the lecture. What does it represent? Below is a formula for the depth of a divide and conquer algorithm working on an array segment of size $L$, as a function of $L$. 
The values $c$, $d$ and $T$ are constants. We assume that $L>0$ and $T>0$. $$ D(L) = \begin{cases} c \cdot L &\text{if}\ L \leq T \\ \text{max}\left( D\left(\left\lfloor \frac L2 \right\rfloor \right), D\left(L - \left\lfloor \frac L2 \right\rfloor \right)\right) + d &\text{otherwise} \end{cases} $$ Below the threshold T, the algorithm proceeds sequentially and takes time c to process each single element. Above the threshold, the algorithm is applied recursively over the two halves of the array. The results are then merged using an operation that takes d units of time. Prove a logarithmic upper bound on $D(L)$. That is, prove that $D(L)$ is in $O(\log(L))$ by finding specific constants $a$, $b$ such that $D(L) \leq a \times \log_2(L) + b$. Hint: The proof is more complex than it might seem. One way to make it more manageable is to define and use a function $D'(L)$ that has the property described in question 1, and is greater or equal to $D(L)$. We suggest you use: $$D'(L) = \begin{cases} c \cdot L &\text{if}\ L \leq T \\ \text{max}\left( D'\left(\left\lfloor \frac L2 \right\rfloor \right), D'\left(L - \left\lfloor \frac L2 \right\rfloor \right)\right) + d + \underline{\underline{c \cdot T}} &\text{otherwise} \end{cases}$$ Also remark that computing $D'(L)$ when $L$ is a power of 2 is easy. Also remember that there always exists a power of 2 between any positive integer and its double. ","Merging if there are no requested changes could lead to merging buggy code because nobody had the time to look at it. A better alternative would be to wait for 1 or 2 approving reviews from colleagues that know how the PR's feature is supposed to behave.",0.1,M1_preference_data_163 Would it make sense to add the total-order property to the best-effort broadcast?,0,0.1,M1_preference_data_164 "Consider a network that is organized as a 2-dimensional grid, such that every process has up to 4 neighbors. The width of the grid is w and the height is h. 
The grid is big, meaning that w+h is much smaller than w*h. While there are faulty and correct processes in the network, it is assumed that two correct processes are always connected through at least one path of correct processes. In every round processes may send a message to each of its neighbors, the size of the message is not limited. Assume there is no faulty process. Write a protocol to reach consensus. Optimize your protocol according to speed. How many rounds does your protocol require?","def user_based_predict(ratings, similarity): filled_matrix = np.zeros((n_users, n_items)) # compute the average ratings for each user tmp = ratings.copy() tmp[tmp == 0] = np.nan user_average_ratings = np.nanmean(tmp, axis=1) # loop over all the items for i in tqdm(range(n_items)): # get the users who rated this item ranked_users_indices = ratings[:,i].nonzero()[0] for u in range(n_users): numerator = 0 denominator = 0 for y in ranked_users_indices: numerator+=similarity[u,y]*(ratings[y,i]-user_average_ratings[y]) denominator+=np.abs(similarity[u,y]) if denominator>0: filled_matrix[u,i]= user_average_ratings[u]+ numerator/denominator else: filled_matrix[u,i]= user_average_ratings[u] # we ensure that the ratings are in the expected range (clip returns a new array, so reassign) filled_matrix = filled_matrix.clip(0,5) return filled_matrix ",0.1,M1_preference_data_165 "In the lecture on bias-variance decomposition we have seen that the true error can be decomposed into noise, bias and variance terms. What happens to the three terms for ridge regression when the regularization parameter $\lambda$ grows? Explain your answer.","The idea is that a processor capable of speculation may deliver the value to the consumer (the second load) before detecting that the access was illegal. Then, the second access leaves a trace in the cache. Finally, both accesses will be squashed, but the attacker can use a cache attack to detect which address was accessed by the second load and thus the secret. 
Some processors may allow speculation but check that a load is at an allowed address before the load itself happens; in this case, the attack would not work (e.g., AMD processors seem to be immune from this problem). ",0.1,M1_preference_data_166 Hypothesize a reason for the difference in performance between the Linear regression and the Gradient Boosting Regressor.,"A property that implies the correctness is: forall xs, ys. g(xs.F, ys.F) == (xs ++ ys).F (split-invariance) where we define xs.F == xs.foldLeft(z)(f) The intuition is the following. Take any computation tree for xs.aggregate. Such a tree has internal nodes labelled by g and segments processed using foldLeft(z)(f). The split-invariance law above says that any internal g-node can be removed by concatenating the segments. By repeating this transformation, we obtain the entire result equals xs.foldLeft(z)(f). The split-invariance condition uses foldLeft. The following two conditions together imply split-invariance and are expressed without the use of foldLeft: forall u. g(u,z) == u (g-right-unit) forall u, v. g(u, f(v,x)) == f(g(u,v), x) (g-f-assoc) To see that these conditions imply split-invariance, assume g-right-unit and g-f-assoc. We wish to prove split-invariance. We do so by (total) induction on the length of ys. If ys has length zero, then ys.foldLeft gives z, so by g-right-unit both sides reduce to xs.foldLeft. Let ys have length n>0 and assume by I.H. that split-invariance holds for all ys of length strictly less than n. Let ys == ys1 :+ y (that is, y is the last element of ys). Then g(xs.F, (ys1 :+ y).F) == (foldLeft definition) g(xs.F, f(ys1.F, y)) == (by g-f-assoc) f(g(xs.F, ys1.F), y) == (by I.H.) 
f((xs++ys1).F, y) == (foldLeft definition) ((xs++ys1) :+ y).F == (properties of lists) (xs++(ys1 :+ y)).F",0.1,M1_preference_data_167 "Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_i=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. 
With this interpretation you can rewrite Equation (1) as follows: $sim(q,d_j) = \sum^m_{i=1} P(k_i|d_j)P(q|k_i)$ (2) Using the expression derived for $P(q|d_j)$ in (a), obtain a ranking (documents sorted in descending order of their scores) for the documents $P(k_i|d_1) = (0, 1/3, 2/3)$, $P(k_i|d_2) =(1/3, 2/3, 0)$, $P(k_i|d_3) = (1/2, 0, 1/2)$, and $P (k_i|d_4) = (3/4, 1/4, 0)$ and the query $P(q|k_i) = (1/5, 0, 2/3)$.","a (DET) computer (N) process (V, N) programs (V, N) accurately (ADV) which leads to 4 solutions.",0.1,M1_preference_data_168 "You are given a probability distribution $P(y_t | y_0, \ldots, y_{t-1})$ over 100 possible next tokens to generate by your model. The distribution has the following characteristics: \begin{itemize} \item 20\% of the probability mass is on the most probable token; \item 10\% of the probability mass is on each of the next 4~most probable tokens; \item 1\% of the probability mass is on each of the next 20~most probable tokens; \item the remaining mass is uniformly distributed across the remaining 75 tokens. \end{itemize} In top-p sampling, if $p = 0.75$, how many tokens will be included in the set of tokens you sample from? Fully justify your answer.","[['Cars flow beautifully', 'syntactic'], ['The cook put cherry stones in the cake', 'semantic'], ['The glass broke its leg', 'syntactic'], ['I no go rain', 'lexical']]",0.1,M1_preference_data_169 "Given the following code snippet, explain if it would be better scheduled with static HLS or dynamic HLS. Assume \verb+acc+ to be a floating point variable; floating-point multiplications have a latency of four and all other operations a latency of one. If good static scheduling requires some typical transformations, explain clearly which ones. 
\begin{verbatim} 0: for i = 0 to N do 1: acc = 0.0 2: for j = 0 to M do 3: cond = (j % 5) == 0 4: if (cond) then 5: acc = acc * acc 6: else 7: acc = acc + a[i * M + j] + 1.0 8: end if 9: end for 10: b[i] = acc 11: end for \end{verbatim} ","We need to verify the two axioms: \begin{description} \item[$(I_1)$] Consider a set $A' \in \mathcal{I}$ and a subset $A'' \subseteq A'$. The matching that matches every vertex in $A'$ also matches every vertex in $A''$, so $A'' \in \mathcal{I}$ as required. \item[$(I_2)$] Consider two sets $A_1, A_2 \in \mathcal{I}$ with $|A_1| < |A_2|$. Let $M_1$ and $M_2$ be the two matchings that match all vertices in $A_1$ and $A_2$, respectively. Assume, w.l.o.g., that $|M_1| = |A_1|$ and $|M_2| = |A_2|$. Note that these matchings are guaranteed to exist since $A_1, A_2 \in \mathcal{I}$. Now the graph $(V, M_1 \cup M_2)$ has a matching of cardinality at least $|M_2|$. This means that there is an augmenting path $P$ in $(V, M_1 \cup M_2)$ with respect to the matching $M_1$. If we let $M= M_1 \Delta P$ then $M$ matches all vertices in $A_1$ plus one more vertex $v$ from $A$ since $|M| = |M_1| + 1$. This vertex has to be from $A_2$ since $A_1 \cup A_2$ are the only vertices of $A$ that are incident to any edges in graph $(V, M_1 \cup M_2)$. It follows that $v\in A_2 \setminus A_1$ and $A_1 + v \in \mathcal{I}$ as required. \end{description}
startComponentBehavior(); } In one sentence, explain why the application is freezing:",The code should use dependency injection so that tests can mock the services instead of using the real ones.,0.1,M1_preference_data_171 "In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right) \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$.","It should prioritize the image generation, as we can think that it will be called more often and also because optimizing for the moderation speeds up 9 seconds of a procedure that typically takes a day.",0.1,M1_preference_data_172 "Given the following code snippet, explain if it would be better scheduled with static HLS or dynamic HLS. Assume \verb+acc+ to be a floating point variable; floating-point multiplications have a latency of four and all other operations a latency of one. If good static scheduling requires some typical transformations, explain clearly which ones. \begin{verbatim} 0: for i = 0 to N do 1: acc = 0.0 2: for j = 0 to M do 3: cond = (c[j] % 5) == 0 4: if (cond) then 5: acc = acc * acc 6: else 7: acc = acc + a[i * M + j] + 1.0 8: end if 9: end for 10: b[i] = acc 11: end for \end{verbatim} ",O( (f+1)n^2 ),0.1,M1_preference_data_173 "We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2, y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. Define a function hull that, given an Array[Rectangle], computes the smallest rectangle containing each of the elements of the array, using one of the collection operations mentioned in Week 02 videos.","Assign each vertex with probability 0.5 to either side. 
Then, for any directed edge $(i, j)$ \begin{equation} Pr((i,j) \textrm{ in cut}) = Pr(i \in U \wedge j \in W) = \frac{1}{2}\times \frac{1}{2} = \frac{1}{4} \end{equation} The expected total weight of the cut is \begin{equation} \sum_{(i,j)\in A} w_{ij} \, Pr((i,j) \textrm{ in cut}) = \frac{1}{4}\sum_{(i,j) \in A} w_{ij} \geq \frac{1}{4} OPT \end{equation}",0.1,M1_preference_data_174 Why does Intel Itanium contain more general-purpose registers (128) than most RISC instruction sets (usually 32)?,False: Some process j can fail for a reason not related to the failure of process i.,0.1,M1_preference_data_175 Implement probabilistic estimation of kNN classification,Dynamically scheduled out-of-order processors.,0.1,M1_preference_data_176 "Provide a precise definition of concatenative morphology and illustrate your answer with concrete examples in English or French. Is this type of morphology relevant for all languages? More generally, is morphology of the same complexity for all languages?","The results suggest that neither of the hypotheses trying to explain the ""Second Album Syndrome"" are true: The coefficient associated with the time difference is negative: this implies that the longer time goes by, the worse the score of the next album will be. This is the opposite of what the time spent hypothesis says.",0.1,M1_preference_data_177 "If process i fails, then eventually all processes j≠i fail Is the following true? If all processes j≠i fail, nothing can be said about process i","[""answer should fit the regular expression: Cohen's kappa"", ""answer should fit the regular expression: (we should compute )?(the )?(Cohen(['’]?s)? )?kappa( (score|metric))?"", ""answer should fit the regular expression: inter[- ]?annotator agreement ((with )|\\()?(e\\.g\\.?:? 
)?Cohen['’]?s kappa\\)?"", ""answer should fit the regular expression: Cohen['’]?s kappa \\(?inter[- ]?annotator agreement\\)?"", ""answer should fit the regular expression: (the )?Cohen['’]?s kappa (to get the score of )?agreement"", 'answer should fit the regular expression: IAA|(inter[- ]?annotator agreement)', 'answer should fit the regular expression: inter[- ]?annotator Agreement \\(IAA\\)']",0.1,M1_preference_data_178 "Assume your team is considering adding support for the SwengPhotos cloud service, which provides upload and download of photos on private cloud storage. Each photo is associated with a unique name. SwengPhotos's documentation for the ""upload new photo"" interface describes the following error responses: 1. I/O error 2. Backend timeout error 3. Name already exists error You decide to build a Java library to wrap the SwengPhotos API. For each error, explain in 1 sentence whether it should be a ""checked"" exception and why:"," No. Dependences need to be statically determined at compile time to build the execution schedule. If there is no way to determine that two accesses are independent, the compiler needs to schedule them conservatively assuming a (possible) dependence.",0.1,M1_preference_data_179 "For a bipartite graph, devise an efficient algorithm for finding an augmenting path $P$ (if one exists). What is the total running time of the \textsc{AugmentingPathAlgorithm} explained in the second lecture?"," An adder is needed on the address path of the register file. The adder adds a value from a register (an offset) changed as a function of the exttt{alloc} parameters.",0.1,M1_preference_data_180 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. 
Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! Given that you have published 1000 columns at the Gazette, you have around 800 training examples that you can use for your system. Given the size of your dataset, do you think it would be helpful to pretrain your model on other text? Why or why not?","MapTrCons, ConsAppend, IH, MapTrCons",0.1,M1_preference_data_181 "Consider the standard linear programming relaxation of Set Cover that we saw in class. We gave a randomized rounding algorithm for the Set Cover problem. Use similar techniques to give an algorithm that, with probability at least a positive constant, returns a collection of sets that cover at least $90\%$ of the elements and has cost at most a constant factor larger than the LP solution.","Let $X_e$ be the indicator random variable that edge $e$ is cut. Then \begin{align*} \E[\mbox{\# edges cut}] = \E\left[ \sum_{e\in E} X_e \right] = \sum_{e\in E} \E[X_e]\,. \end{align*} We complete the proof by showing that $\E[X_e] \geq 1/2$ for $e=\{u,v\}\in E$. We have \begin{align*} \E[X_e] = \Pr[\mbox{$e$ is cut}] = \Pr[h(u) \neq h(v)] = 1 - \Pr[h(u) = h(v)] \geq 1/2\,, \end{align*} where the last inequality follows because $h$ was selected at random from a $2$-universal hash family $\mathcal{H}$, and thus $\Pr[h(u) = h(v)] \leq 1/2$ for $u \neq v$.",0.1,M1_preference_data_182 "Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List). def contains[A](l: Iterable[A], elem: A): Boolean = val n = l.size if n <= 5 then for i <- l do if i == elem then return true false else val (p0, p1) = parallel( contains(l.take(n / 2), elem), contains(l.drop(n / 2), elem) ) p0 || p1 Let $n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. 
What is the asymptotic work of contains if it is called on a Vector?","The purpose of the daily Scrum meeting is to synchronize the team's work and to identify any impediments that might prevent the team from delivering the work. Therefore, the team should not be discussing the implementation details of features during the meeting. Instead, the meeting should be solely dedicated to reporting the progress of the tasks.",0.1,M1_preference_data_183 "Show that, given a matroid $\mathcal{M} = (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} (as defined in the lecture notes) always returns a base of the matroid.","Note that, if $x$ is a feasible solution, $x_{iH} + x_{iM}=1$ for all $i=1, \dots, n$. Otherwise, if $x_{jH} + x_{jM} < 1$, we would have that (since $s_i>0$ for every item $i$) \begin{align*} 2 \cdot C &= \sum_{i=1}^n s_i x_{iH} + \sum_{i=1}^n s_i x_{iM} = s_j(\underbrace{x_{jH} + x_{jM}}_{< 1}) + \sum_{i=1, i \neq j}^n s_i(\underbrace{x_{iH} + x_{iM}}_{\leq 1}) < \sum_{i=1}^n s_i = 2 \cdot C, \end{align*} which is a contradiction. Now suppose that $x^\ast$ is an extreme-point solution. We claim that at most one item can then be assigned fractionally. Suppose towards a contradiction that there are two items $i \neq j$ with $0 < x^\ast_{iH} < 1$ and $0 < x^\ast_{jH} < 1$. Define $x^{(1)}$ by setting $x^{(1)}_{iH} = x^\ast_{iH} - \epsilon$, $x^{(1)}_{iM} = x^\ast_{iM} + \epsilon$, $x^{(1)}_{jH} = x^\ast_{jH} + \epsilon \left(\tfrac{s_i}{s_j}\right)$, $x^{(1)}_{jM} = x^\ast_{jM} - \epsilon \left(\tfrac{s_i}{s_j}\right)$, and let $x^{(2)}$ be the same perturbation with all signs flipped, choosing $\epsilon > 0$ to be small enough so that all the values $ x^\ast_{iH} \pm \epsilon, x^\ast_{iM} \pm \epsilon, x^\ast_{jH} \pm \epsilon \left(\tfrac{s_i}{s_j}\right), x^\ast_{jM} \pm \epsilon \left(\tfrac{s_i}{s_j}\right)$ stay in the range $[0,1]$ (note that, since $s_i>0$ and $s_j>0$, $0 < \frac{s_i}{s_j} < \infty$). As shown below, we can verify that the solutions $x^{(1)}$ and $x^{(2)}$ both satisfy the LP constraints, and hence are feasible solutions. 
For $x^{(1)}$, we have that $$\sum_{i=1}^n s_i x^{(1)}_{iH} = \sum_{i=1}^n s_i x^{\ast}_{iH} - s_i \epsilon + s_j \epsilon \left( \frac{s_i}{s_j} \right) = \sum_{i=1}^n s_i x^{\ast}_{iH} = C$$ and $$\sum_{i=1}^n s_i x^{(1)}_{iM} = \sum_{i=1}^n s_i x^{\ast}_{iM} + s_i \epsilon - s_j \epsilon \left( \frac{s_i}{s_j} \right) = \sum_{i=1}^n s_i x^{\ast}_{iM} = C.$$ Furthermore, for $x^{(1)}_{iH} + x^{(1)}_{iM} = x^\ast_{iH} + x^\ast_{iM} + \epsilon - \epsilon = 1$, $x^{(1)}_{jH} + x^{(1)}_{jM} = x^\ast_{jH} + x^\ast_{jM} - \epsilon \left(\tfrac{s_i}{s_j}\right) + \epsilon \left(\tfrac{s_i}{s_j}\right) = 1$, and for $k \neq i$ and $k \neq j$, $x^{(1)}_{kH} + x^{(1)}_{kM} = x^\ast_{kH} + x^\ast_{kM} = 1$. By a similar argument, we can show that $x^{(2)}$ is also feasible. It is easy to see that $x^\ast = \frac{1}{2} x^{(1)} + \frac{1}{2}x^{(2)}$ and $x^{(1)} \ne x^\ast$, and hence, $x^\ast$ is not an extreme point. \QED",0.1,M1_preference_data_184 "Given the following function sums: 1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match 2 case Nil => List((c, 1)) 3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) 4 5 def sums(digits: List[Int]): List[(Int, Int)] = 6 digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? 1 def mystery1(digits: List[Int]): List[Int] = 2 sums(digits).filter(_._2 == 1).map(_._1)","This kind of issue should be documented in the code until it is fixed, not in an unrelated document that may be forgotten or become out of sync.",0.1,M1_preference_data_185 Implement Discounted Cumulative Gain. DCG is a retrieval metric that also takes into account the ordering of the result. The DCG accumulated at a rank $k$ is defined as: $DCG_k = \sum_{i=1}^k \frac{grade[i]}{log_2(i+1)}$ where $grade[i]$ is the relevance score given by the user for the result at position $i$. 
Hint: the logarithm is computed using the function np.log2,"['bush(2), house(2)', 'bush(2),house(2)', 'house(2),bush(2)', 'house(2), bush(2)', 'bush (2), house (2)', 'bush (2),house (2)', 'house (2), bush (2)', 'house (2),bush (2)', '""bush(2), house(2)""', 'house(2)', 'cat(5), house(2)', 'bush(2), house(2), mouse(3)', 'bush(2), target(1), house(2)', 'cat(5), bush(2), house(2)', 'bush(2),cat(5),house(2)', 'bush(2), cat(5), house(2)', 'bush(2),house(2),mouse(3)']",0.1,M1_preference_data_186 Give some concrete examples of NLP applications that might benefit from the semantic vectorial representations.,"Modify the “candeliver” function. Function candeliver(m) returns Boolean is return #(ack[m]) > N / 2 Suppose that a correct process delivers m. That means that at least one correct process p “acknowledged” m (rebroadcast m using BestEffortBroadcast). Consequently, all correct processes eventually deliver m from BestEffortBroadcast broadcast by p and rebroadcast m themselves (if they have not done that yet). Hence, every correct process eventually collects at least N/2 acknowledgements and delivers m.",0.1,M1_preference_data_187 "Give an example of an exception whose precise implementation is arguably irrelevant in practice.","Only $x_3$ has a positive coefficient in $z$, we will pivot $x_3$. We have $\nearrow x_3 \longrightarrow \ x_3 \leq \; \infty \ (1),\ x_3 \leq 3\ (2),\ x_3 \leq 2\ (3)$, Thus we use third equality to pivot $x_3$. Hence $x_3=\frac{1}{2}(4+3x_2-s_3)$. 
And we get \begin{align*} \hspace{1cm} x_1 &= 1 + \frac{1}{2}(4+3x_2-s_3) - s_1 \\ s_2 &= 3 -\frac{1}{2}(4+3x_2-s_3) + s_1 \\ x_3&=\frac{1}{2}(4+3x_2-s_3) \\ \cline{1-2} z &= 4 - x_2 + (4+3x_2-s_3) - 4s_1 \end{align*} That is \begin{align*} \hspace{1cm} x_1 &= 3 + \frac{3x_2}{2} -\frac{s_3}{2} - s_1 \\ s_2 &= 1 - \frac{3x_2}{2} +\frac{s_3}{2} + s_1 \\ x_3&= 2+ \frac{3x_2}{2} -\frac{s_3}{2} \\ \cline{1-2} z &= 8 + 2x_2 - s_3 - 4s_1 \\ x_1& :=3\text{ }x_2:=0\text{ }x_3:=2\text{ }s_1:=0\text{ }s_2:=1\text{ }s_3:=0 \end{align*}
c + Float8(3, 8) // res2: Float8 = Float8(1,10) // but a slightly larger one will c + Float8(4, 8) // res3: Float8 = Float8(2,10) a + (b + c) // res4: Float8 = Float8(1,10) (a + b) + c // res5: Float8 = Float8(2,10)",0.1,M1_preference_data_190 "Last year Professor Ueli von Gruy\`{e}res worked hard to to obtain an estimator $\Alg$ to estimate the total cheese consumption of fondue lovers in Switzerland. For a small $\epsilon >0$, his estimator \Alg only asks $3/\epsilon^2$ random persons and have the following guarantee: if we let $W$ denote the true answer and let $X$ be the random output of \Alg then \begin{align*} \Pr[|X - W| \geq \epsilon W] \leq 1/3\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align*} However, Ueli is now stuck because the error probability of $1/3$ is too high. We are therefore going to help Ueli by designing a new estimator with a much higher success probability while still only asking relatively few persons. For a fixed small parameter $\delta >0$, your task is to design and analyze an estimator that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee2} \Pr[|Y - W| \geq \epsilon W] \leq \delta\,. %\qquad \mbox{ where $\epsilon > 0$ is a small constant.} \end{align} Your estimator should ask at most $3000\log(1/\delta)/\epsilon^2$ persons about their preferences. \\ While you should explain why your estimator works and what tools to use to analyze it, \emph{you do not need to do any detailed calculations.} \\ {\em (In this problem you are asked to (i) design an estimator that asks at most $3000 \log(1/\delta)/\epsilon^2$ persons and (ii) explain why it satisfies the guarantee~\eqref{eq:guarantee2}. 
Recall that you are allowed to refer to material covered in the lecture notes.)}","Returns List(1) if the input list contains exactly one digit 1, an empty list otherwise",0.1,M1_preference_data_191 "In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following are is a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=a \kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)+b \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ for all $a, b \geq 0$.","def insert (elem: Int, list: List[Int]): List[Int] = list match{ case x :: xs if x < elem => x :: insert (elem, xs) case _ => elem :: list }",0.1,M1_preference_data_192 "As a group, write a function called minMax, which should take a non-empty array as input and return a pair containing the smallest and the largest element of the array. def minMax(a: Array[Int]): (Int, Int) = ??? Now write a parallel version of the function. You may use the constructs task and/or parallel, as seen in the lectures.","1. In general, Spectre builds on the branch prediction and speculative execution, thus is something which should not work with VLIWs. 2. But it is indeed possible due to the existence of a speculative load. 3. The attacker has much less control on the timing, though, because it is statically determined: either the compiler has speculated the load and used the value for an indirect access to enable the cache attack, or not. If so, the attack is possible and there is nothing to be done (no training of branch predictors, etc.). If not, there is nothing to be done. The defense is simple, do not use a speculative load in critical cases. 
",0.1,M1_preference_data_193 "Given the following function sums: 1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match 2 case Nil => List((c, 1)) 3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) 4 5 def sums(digits: List[Int]): List[(Int, Int)] = 6 digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? 1 def mystery2(digits: List[Int]): List[Int] = 2 mystery1(digits).filter(_ == 1)","It means that on entering an exception handler, all instructions before the point where the exception happened are executed and committed and no instruction after it is executed nor committed.",0.1,M1_preference_data_194 What happens in the reliable broadcast algorithm if the completeness property of the failure detector is violated?,"The depth can be defined recursively as $D(n) = \text{max}(D(n/2), D(n/2)) + O(1) = D(n/2) + O(1)$. Therefore, $D(n)$ is $\Theta(\log n)$. One way of seeing it would be by drawing the call tree, its depth is $\Theta(\log n)$ because each time, we split the array in half.",0.1,M1_preference_data_195 "In an automated email router of a company, we want to make the distinction between three kind of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. 
It has now lined up IP transit for streaming videos at a few gigabits per second. Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. Suppose we have collected the following statistics $3^{3}$ about the word frequencies within the corresponding classes, where '0.00...' stands for some very small value: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & technical & financial & irrelevant & & technical & financial & irrelevant \\ \hline $\$<$ number $>$ & 0.01 & 0.07 & 0.05 & deal & 0.01 & 0.02 & $0.00 \ldots$ \\ \hline Dow & $0.00 \ldots$ & 0.08 & $0.00 \ldots$ & forecast & $0.00 \ldots$ & 0.03 & 0.01 \\ \hline GM & $0.00 \ldots$ & 0.03 & $0.00 \ldots$ & gigabit & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ \\ \hline IP & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ & hub & 0.06 & $0.00 \ldots$ & 0.01 \\ \hline Intel & 0.02 & 0.02 & $0.00 \ldots$ & network & 0.04 & 0.01 & $0.00 \ldots$ \\ \hline business & 0.01 & 0.07 & 0.04 & processor & 0.07 & 0.01 & $0.00 \ldots$ \\ \hline capacity & 0.01 & $0.00 \ldots$ & $0.00 \ldots$ & smartphone & 0.04 & 0.04 & 0.01 \\ \hline chipset & 0.04 & 0.01 & $0.00 \ldots$ & wireless & 0.02 & 0.01 & $0.00 \ldots$ \\ \hline company & 0.01 & 0.04 & 0.05 & sen & re & . & . \\ \hline \end{tabular} \end{center} We now want to specifically focus on the processing of compounds such as 'network capacity' in the second text. 
Outline how you would build a pre-processor for compound words","for x in [""authors_citations"", ""authors_publications"", ""authors_hindex""]: df[x + ""list""] = df[x].apply(lambda a: str(a).replace(""-1;"","""").replace("";-1"","""").split("";"")) df[x + ""_median""] = df[x + ""list""].apply(lambda a: np.median([float(v) for v in a]))",0.1,M1_preference_data_196 "Let's denote by respectively \(A\), \(B\) and \(C\) the value stored by the Viterbi algorithm in the node associated to respectively N, V and Adj for the word ""time"".If \(C > B > A\) and \(10 A \geq 9 C\), what would be the tag of ""time"" in the most probable tagging, if the tag of ""control"" is N (in the most probable tagging)?","This is an abstraction leak: the notion of JavaScript and even a browser is a completely different level of abstraction than users, so this method will likely lead to bugs.",0.1,M1_preference_data_197 Vectorize the input with the Vector Space Model,"1. While the attacker may have correctly primed the LLC, they may end up probing the L1 and thus learn nothing. It is important to ``reprime'' the L1 with other data so that the attacker sees only L1 misses when probing and thus can observe LLC hits and misses. In inclusive caches, repriming implies using addresses which land on the same set as the primary priming in the L1 but on different sets in the LLC (which is usually possible because the LLC has generally more sets than the L1). 2. To properly address all sets in LLC, large pages may need to be used. ",0.1,M1_preference_data_198 "In a nutshell, the ""second album syndrome"" is a theory that states that the second album of a band always sucks You have the following regression output regarding the score_diff: the difference in scores between the second and the first album (second - first): Dep. Variable: score_diff R-squared: -0.000 Interpret the 𝑅2 in your regression here. Do these analyses suggest that the ""second album syndrome"" exists? 
Why?","the corpus being 19 characters long, there are 18 bigrams in total. Here are the counts Xc, 2; Xh, 1; Xt, 1; at, 2; ca, 1; cu, 1; eX, 2; ha, 1; he, 2; tX, 2; th, 2; ut, 1",0.1,M1_preference_data_199 "Consider an operation we will call scanRight1 that, given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, computes a sequence $b_1, \ldots, b_N$ such that: $b_N = a_N$ $b_i = f(a_{i}, b_{i+1})$, for $0 < i < N$ Define similarly scanLeft1 in a manner similar to scanRight1: Given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, scanLeft1 computes a sequence $b_1, \ldots, b_N$ such that: $b_i = f(b_{i-1}, a_{i})$, for $0 < i \leq N$ Suppose that $f$ is associative. is the result of scanRight1 same as the result of scanLeft1?","* Similarity - For retrieval, both approaches aim to measure the relevance between the query q and document $d_i$ by computing $P(q|d_i)$ * Diference - They are different in how $P(q|d)$ is computed. In the above, $P(q|d) = \sum_{i=1}^m P(k_i|d) P(q|k_i)$ while in the lecture $P(q|d) = \prod_{i=1}^m P(k_i|M_{d}) = \prod_{i=1}^m \frac{tf(k_i,d)}{L_d}$",0.1,M1_preference_data_200 What happens in the uniform reliable broadcast algorithm if the completeness property of the failure detector is violated?,"1. When one of the processor execution unit encounters an instruction triggering an exception, it set the exception flag of that instruction to true in the ROB. 2. When the processor tries to commit that instruction, it checks whether the flag is set to true, and if it is, it enters rollback instead of commiting the instruction. 3. During rollback, the processor essentially revert the architectural state to the one one would have had at the time of that instruction (e.g., restores the free list and the register map table following the physical register allocation list in reverse order) and squashes all following instructions (e.g., empties the ROB, the RSes, and all pipeline stages). 4. 
The processor restarts fetching and decoding from the exception handler address. ",0.1,M1_preference_data_201 Show that P is the weakest failure detector for Group Membership.,"• accuracy / error rate / 'overall performance': number of correct/incorrect over total number ; adv: simple ; drawback: too simple, does not take unbalancing of classes into account • Precision (for one class): number of correctly classified emails over number of emails classified in that class by the system ; Ignores false negatives ; can be biaised by classifying only very few highly trusted emails • Recall / true positive rate: number of correctly classified emails over number of emails classified in that class by experts (in the referential) ; Ignores false positives ; Can be biaised by classifying all documents in the most important class • Area under ROC Curve ; Plot true positive rate vs false positive rates ; not easy to compute ; • F score: Harmonic mean of precision and recall; balances P and R ; too simple: unary score for complex situation • false positive rate",0.1,M1_preference_data_202 "Using the same set of transformations as in the previous question, what is the final value you get for the edit distance between execution and exceuton, i.e. D(execution, exceuton)?Give your answer as a numerical value. ","1. Prefetching, so that the next batch is ready by the time the user wants to see it 2. Caching, so that the photos do not need to be re-downloaded",0.1,M1_preference_data_203 "Assume that some of your colleagues work on an AI-based image generation service, where a user enters a topic, and the AI generates a synthetic photo on that topic. They tell you the following about this service: ""Currently, the user types in the topic they want to see images for, and the client app sends a request to the server with the user ID and the indicated topic. 
The server generates an image, which takes a second or so, and sends it to the client app, which then requests another image on the same topic, and so on, until the app has received 9 images. It then displays these in a 3x3 grid. The user now looks at the 9 images and, if they see an inappropriate one, they click on a button that causes the app to send a review request to the server. Human moderators then process each report, and data scientists tweak the AI model to avoid generating images similar to the ones reported as inappropriate. Users then get a notification that their report was processed. The whole reporting process typically takes a day."" Explain in 1-2 sentences what you could do on the server side, without any changes to the client app, so that users get their images faster:"," The fetch, decode, and rename stages proceed in program order. They directly append memory instructions to the LSQ in program order.",0.1,M1_preference_data_204 "Imagine you're working at JaaS, the Jokes-as-a-Service platform. With JaaS, everyone can be funny any time by having new jokes at their fingertips via a public API. During the orientation at JaaS, the VP of engineering explains to you their workflow: 1. Branching: Developers must use a separate branch for each feature, and they must commit their code once a day. 2. Testing: When their feature is finished, developers must run a test suite locally, on their machine, and make sure that every test passes. Once that's done, they can commit and push, then open a PR describing the feature, with a screenshot of the test results attached, and wait for code reviews from colleagues. 3. Merging: If no one requested changes on the code within 24 hours, one can merge the PR to the main branch. The above ""Branching"" directive contains a flaw. 
Give a better alternative for it and explain why your alternative is better in maximum 2 sentences:",False.,0.1,M1_preference_data_205 "Consider the following joint distribution that has the factorization $$ p\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}\right)=p\left(x_{1}\right) p\left(x_{2} \mid x_{1}\right) p\left(x_{3} \mid x_{2}\right) p\left(x_{4} \mid x_{1}, x_{3}\right) p\left(x_{5} \mid x_{4}\right) . $$ : (4 points.) Determine whether the following statement is correct. $$ X_{1} \perp X_{3} \mid X_{2}, X_{5} $$ Show your reasoning.",<:,0.1,M1_preference_data_206 "Consider a bipartite graph $G=(V,E)$ where $V$ is partitioned into $A$ and $B$. Let $(A, \mathcal{I})$ be the matroid with ground set $A$ and \begin{align*} \mathcal{I} = \{ A' \subseteq A: \mbox{ $G$ has a matching in which every vertex of $A'$ is matched}\}\,. \end{align*} Recall that we say that a vertex $v$ is matched by a matching $M$ if there is an edge in $M$ incident to $v$. Show that $(A, \mathcal{I})$ is indeed a matroid by verifying the two axioms.","Let us try to picture what the hashing function does. On the $i$-th coordinate, it partitions $\mathbb{R}$ into buckets of the form $..., [s_i - w, s_i), [s_i, s_i + w), [s_i + w, s_i + 2w), ...$, each of length $w$, with a random ``offset''. Given two numbers $p_i$ and $q_i$, the probability that they fall into the same bucket is $1 - \frac{|p_i - q_i|}{w}$ (unless they are farther away than $w$, in which case it is $0$).\footnote{ To see this, assume wlog that $p_i < q_i < p_i + w$; there will be exactly one bucket-beginning in the interval $(p_i, p_i + w]$, the position of that bucket-beginning is distributed uniformly on that interval, and $p_i$ and $q_i$ will go into different buckets if and only if that bucket-beginning falls into $(p_i, q_i]$. The probability of this happening is $\frac{|p_i - q_i|}{w}$. 
} Therefore: \begin{itemize} \item if $|p_i - q_i| > w$ for some $i$, then $\Pr[h(p) = h(q)] = 0$, \item otherwise \[ \Pr[h(p) = h(q)] = \prod_{i=1}^d \left(1 - \frac{|p_i - q_i|}{w} \right) \approx \prod_{i=1}^d e^{-\frac{|p_i - q_i|}{w}} = e^{- \frac{\sum_{i=1}^d |p_i - q_i|}{w}} = e^{- \frac{\|p - q\|_1}{w}}. \] \end{itemize}",0.1,M1_preference_data_207 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! Would you use a causal language modeling or masked language modeling training objective to train your model? Why?","We want to assign each job to a machine. For any job $j$ which the LP assigns to a single machine $i$, i.e., $x^*_{ij} = 1$, it clearly makes sense to listen to the LP and assign $j$ to $i$. To assign the other jobs, we will use the graph $H$, which has the following properties: \begin{itemize} \item $H$ is acyclic, i.e., a forest. \item Vertices corresponding to the already-assigned jobs are isolated. \item Vertices corresponding to the yet-unassigned jobs have degree at least two. \end{itemize} While $H$ contains an edge, we iteratively do the following: \begin{itemize} \item Choose a machine $i$ whose vertex $a_i$ has degree one in $H$, i.e., there is only a single edge $\{a_i, b_j\}$ incident to $a_i$. (Such a machine exists because $H$ is nonempty, and a nonempty forest contains a degree-one vertex; since job-vertices have degrees either zero or at least two, a degree-one vertex must be a machine-vertex.) \item Assign job $j$ to machine $i$. 
\item Remove all edges incident to $j$ from $H$. (Note that this preserves all the above properties of $H$.) \end{itemize} Obviously, this procedure will terminate. Note that it will assign all jobs to machines, because as long as there is an unassigned job $j$, the vertex $b_j$ has at least two incident edges in $H$. It remains to reason that for every machine $i$, the sum of processing times of jobs assigned to $i$ (this is called the \textit{makespan} of $i$) is at most $T + \max_{j \in J} p_j$. This is because there are two kinds of jobs assigned to $i$: \begin{itemize} \item Assigned in the beginning: the jobs $j$ which had $x^*_{ij} = 1$. The total processing time of these jobs is \[ \sum_{j\in J: x^*_{ij} = 1} p_j = \sum_{j\in J: x^*_{ij} = 1} x^*_{ij} p_j \le \sum_{j\in J: i \in N(j)} x^*_{ij} p_j \le T. \] \item Assigned later: at most one other job. For note that we only assign a job to $i$ when $a_i$ is of degree-one, and once we do, $a_i$ becomes isolated and no more jobs can be assigned to $i$. This job has processing time at most $\max_{j \in J} p_j$. \end{itemize} Another perspective on the solution is the following. By looking at the makespan bound $T + \max_j p_j$ that we should satisfy, we can notice that it will be fine if we assign the integral jobs to the machines that the LP has selected and the non-integral jobs in such a way that each machine gets at most one. In other words, we just need to prove that the graph $H$ has a matching which matches all non-integral jobs (any such matching will be fine). Our iterative procedure is one way to prove that such a matching exists.",0.1,M1_preference_data_208 "Meltdown is a well-known attack on dynamically-scheduled processors which exploits the fact that loads may be executed speculatively before determining whether they represent a memory access violation. 
Intel Itanium has a speculative load instruction which allows a compiler to perform a load speculatively and needs a check instruction at a later point in the code to verify whether the load did in fact raise an exception. Would you imagine that a Meltdown attack on Itanium based on this instruction could be possible? Explain clearly your reasoning.","Call(N(""exists""), Fun(""y"", Call(Call(N(""less""), N(""y"")), N(""y""))))",0.1,M1_preference_data_209 "If process i fails, then eventually all processes j≠i fail Is the following true? If some process j≠i does not fail, then process i has failed","compound preprocessing is a wide topic in itself (lexical acquisition), but as in many NLP domains two main ways could be considered, which would definitely be exploited in complement to one another: the statistical way and the linguistic/human knowledge way. The most naive linguistic approach could be to add compounds to the lexicon by hand. For the statistical, simply extract all correlated pairs or bigger tuples of words, using e.g. mutual information, chi-square or whatever measure of correlation. This could be enhanced using human knowledge by selecting which PoS tags could enter this correlation game (e.g. looking for NN NN, NN of NN etc.) but also by manually filtering automatically extracted lists of candidates.",0.1,M1_preference_data_210 "Assume that in the process of reworking the architecture of the project, you need to remove a method because it's too easy to use incorrectly and there's now an easier-to-use replacement. 
What changes should you make for upcoming releases?","['{t} * (1 + {t} + {w})', '({t} - 1) * (1 + {t}) + ({w} - 1) * {t}']",0.1,M1_preference_data_211 "Write the dual of the following linear program: \begin{align*} \text{Maximize} \quad &6x_1 + 14 x_2 + 13 x_3\\ \text{Subject to} \quad & x_1 + 3x_2 + x_3 \leq 24 \\ & x_1 + 2x_2 + 4 x_3 \leq 60 \\ & x_1, x_2, x_3 \geq 0 \end{align*} Hint: How can you convince your friend that the above linear program has optimum value at most $z$?",The probability of contracting an edge in a fixed min-cut $C$ is much lower in the initial contractions (in Karger's algorithm) than at the end. Thus it makes sense to repeat the later contractions more times than the initial ones. This is the main idea of the Karger-Stein algorithm. Their implementation of this idea is to perform contractions so as to reduce the size of the graph from $n$ to $n/\sqrt{2}$ (they make $n- n/\sqrt{2}$ contractions). Then find a min-cut recursively in the smaller graph by repeating the same algorithm twice on the smaller graph and choose the smaller of the obtained cuts. (These recursive calls will also split and so on.) The running time of this algorithm is roughly the same as one iteration of Karger's algorithm but the success probability is much higher.,0.1,M1_preference_data_212 "Assume you are working on a school project with your friend. Your friend claims that using very detailed names are good in code because they describe exactly what is going on. Do you agree? Explain in max 2 sentences.","Continuous integration can run code analysis and automated tests that developers wrote, but it does not create its own tests.",0.1,M1_preference_data_213 "You are given a training set $S=\left\{\left(x_{n}, y_{n}\right)\right\}_{n=1}^{N}$ for classification with $y_{n} \in\{0,1\}$. Ninety percent of the labeled data has label 0. You split the data randomly into two equal parts, train on the first part, and then test on the second part. 
You get an accuracy of 85 percent. What is your reaction? Explain.","The idea of the algorithm is to first decrease the variance by taking the average of $t = 10/\epsilon^2$ independent runs of \Alg. We then do the median trick. Formally, consider the algorithm $\mathcal{B}$ that runs $t$ independent copies of \Alg and then outputs the average of the $t$ estimates obtained from the independent runs of $\mathcal{A}$. Let $B$ be the random output of this algorithm. As seen in class, we have $\mathbb{E}[B] = c$ (it is still an unbiased estimator) and $\textrm{Var}[B] = c^2/t$. Now by Chebychev's Inequality we have \begin{align*} \Pr[ |B - c| \geq \epsilon c] \leq \frac{\textrm{Var}[B]}{\epsilon^2 c^2} = \frac{1}{t \epsilon^2} = 1/10 \qquad \mbox{\small (since we selected $t= 10/\epsilon^2$)}\,. \end{align*} So algorithm $\mathcal{B}$ returns a $1\pm \epsilon$ approximation with probability at least $9/10$. We now want to decrease the probability $1/10$ of failing all the way down to $\delta$. To do this we use the median trick. Let $\mathcal{C}$ be the algorithm that runs $u= 10\ln(1/\delta)$ independent copies of $\mathcal{B}$ and outputs the \emph{median} of the obtained copies. Let $Y$ be the random output of $\mathcal{C}$. We now analyze the failure probability of $\mathcal{C}$, i.e., we wish to show $\Pr[|Y - c| \geq \epsilon c] \leq \delta$. To do so define $Z_i\in \{0,1\}$ to be the indicator random variable that takes value $1$ if the $i$:th run of $\mathcal{B}$ outputs a value $B_i$ such that $|B_i - c| \geq \epsilon c$. Note that $\Pr[|B_i - c| \geq \epsilon c] \leq 1/10$ and so $\Pr[Z_i = 1] \leq 1/10$. So if we let $Z = Z_1 + Z_2 +\dots + Z_u$, then $Z$ is a sum of independent variables where $\mathbb{E}[Z] \leq u/10$. Moreover since $Y$ is the median of the independent runs of $\mathcal{B}$, \begin{align*} \Pr[|Y - c| \geq \epsilon c] \leq \Pr[Z \geq u/2] \,. \end{align*} We shall now analyze $\Pr[Z \geq u/2]$ using the Chernoff Bounds. 
Indeed, since $Z$ is a sum of \emph{independent} random variables taking values in $\{0,1\}$ we have \begin{align*} \Pr[ Z \geq u/2] \leq \Pr[Z > 3 \cdot \mathbb{E}[Z]] \leq e^{-\ln(1/\delta)} = \delta\,. \end{align*} We have thus proved that $\mathcal{C}$ satisfies the right guarantees. Let us now analyze its resource requirements. $\mathcal{C}$ runs $O(\log(1/\delta))$ copies of $\mathcal{B}$ and each copy of $\mathcal{B}$ runs $O(1/\epsilon^2)$ copies of \Alg. Thus the total resource requirements increase by at most a factor $O(\log(1/\delta) \cdot 1/\epsilon^2)$ as required (calculating the mean and median can be done in linear time so it does not affect the asymptotic running time).",0.1,M1_preference_data_214 The first annotator rated {a} reviews as positive and the rest as negative. The second annotator rated {b} reviews as positive and the rest as negative. 80 reviews were rated as positive by both annotators. Compute the quality of the above reference using Cohen's Kappa. Give your answer as a numerical value to three decimal places.,"It's a (Deterministic) Finite-State Automaton on character pairs (cross-product of alphabets). It's the most efficient time-space implementation for matching two languages (cross-product of languages), which is the purpose of morphology.",0.1,M1_preference_data_215 "Assume you are working on a mobile application. Users complain that your app's image gallery uses too much of their mobile data. In one sentence, explain the first step towards improving this:"," 0: [""mov LC, 10"", ""mov x51, 10000"", nop, nop, nop] 1: [""mov EC, 1"", ""mov p40, true"", nop, nop, nop] 2: [""(p40) addi x50, x51, 1"", nop, nop, ""(p40) ld x60, 0(x51)"", nop] 3: [""(p40) addi x70, x60, 10"", nop, nop, ""(p41) st x71, 0(x52)"", loop.pip 2] The best possible II is 2 because of the two memory operations with a single memory unit. ",0.1,M1_preference_data_216 "Consider the task of classifying reviews as positive or negative. 
To create a reference for this task, two human annotators were asked to rate 1000 movie reviews as positive or negative. The first annotator rated {a} reviews as positive and the rest as negative. The second annotator rated {b} reviews as positive and the rest as negative. 80 reviews were rated as positive by both annotators. What is the raw agreement between the two annotators? Give your answer as a numerical value to three decimal places.","We consider an algorithm that uses an eventually perfect failure detector. By contradiction, we assume that the algorithm satisfies agreement, but not uniform agreement. We consider two executions A and B of the algorithm. In A, two processes π and ρ decide differently, and π crashes. Let t denote the time when ρ decides. In B, π does not crash, but every process that suspects π in A also suspects π in B at the same moment of its execution. No process that suspects π restores π before t. All messages from π are delayed: none of its messages is delivered before t. It is easy to see that ρ receives exactly the same messages and indications from the failure detector in A and B, and thus decides differently from π also in B. However, in B, π never failed. Therefore, if the algorithm violates uniform agreement, it also violates agreement.",0.1,M1_preference_data_217 "You are given the following accident and weather data. Each line corresponds to one event: 1. car_accident rain lightning wind clouds fire 2. fire clouds rain lightning wind 3. car_accident fire wind 4. clouds rain wind 5. lightning fire rain clouds 6. clouds wind car_accident 7. rain lightning clouds fire 8. lightning fire car_accident (a) You would like to know what is the likely cause of all the car accidents. What association rules do you need to look for? Compute the confidence and support values for these rules. Looking at these values, which is the most likely cause of the car accidents?","F1, or any combination, e.g. 
weighted averages (Fβ )",0.1,M1_preference_data_218 "Consider the following problem where we are given an edge-colored graph and we wish to find a spanning tree that contains a specified number of edges of each color: \begin{description} \item[Input:] A connected undirected graph $G=(V,E)$ where the edges $E$ are partitioned into $k$ color classes $E_1, E_2, \dots, E_k$. In addition each color class $i$ has a target number $t_i \in \mathbb{N}$. \item[Output:] If possible, a spanning tree $T \subseteq E$ of the graph satisfying the color requirements: \begin{align*} |T \cap E_i| = t_i \qquad \mbox{ for $i=1,\dots, k$.} \end{align*} Otherwise, i.e., if no such spanning tree $T$ exists, output that no solution exists. \end{description} \noindent {Design} a polynomial time algorithm for the above problem. You should analyze the correctness of your algorithm, i.e., why it finds a solution if possible. To do so, you are allowed to use algorithms and results seen in class without reexplaining them.","\begin{align} P(q|d_j) & = \frac{P(q \cap d_j)}{P(d_j)} \\ & = \sum_{i=1}^m \frac{P(q \cap d_j | k_i) P(k_i)}{P(d_j)} & \text{using $P(A) = \sum_B P(A|B) P(B)$} \\ & = \sum_{i=1}^m \frac{P(q|k_i) P(d_j | k_i) P(k_i)}{P(d_j)} & \text{using conditional independence} \\ & = \sum_{i=1}^m P(k_i |d_j) P(q|k_i) & \text{using Bayes theorem, $P(A|B) = \frac{P(B|A) P(A)}{P(B)}$} \end{align}",0.1,M1_preference_data_219 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! 
You consider using either a transformer or a recurrent neural network (RNN) as the underlying model for your text generator. Assuming there are no practical issues with selecting either one (such as the amount of data available), which one would you choose for this task? Give two reasons why.",>:,0.1,M1_preference_data_221 "In this problem we design an LSH for points in $\mathbb{R}^d$ with the $\ell_1$ distance, i.e. $$d(p,q) =\sum_{i=1}^d |p_i - q_i|.$$ Define a class of hash functions as follows: Fix a positive number $w$. Each hash function is defined via a choice of $d$ independently selected random real numbers $s_1,s_2,\dots,s_d$, each uniform in $[0,w)$. The hash function associated with this random set of choices is $$h(x_1,\dots ,x_d) = \left(\left\lfloor \frac{x_1 - s_1}{w}\right\rfloor ,\left\lfloor \frac{x_2 - s_2}{w}\right\rfloor,\dots,\left\lfloor \frac{x_d - s_d}{w}\right\rfloor\right).$$ Let $\alpha_i = |p_i - q_i|$. What is the probability that $h(p) = h(q)$, in terms of the $\alpha_i$ values? It may be easier to first think of the case when $w=1$. Try to also simplify your expression if $w$ is much larger than $\alpha_i$'s, using that $(1-x) \approx e^{-x}$ for small values of $x\geq 0$.","This might lead to different results.",0.1,M1_preference_data_222 "In an automated email router of a company, we want to make the distinction between three kinds of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?","$P(q|d_1) = \sum_{i=1}^m P(k_i|d_1) P(q|k_i) = 0 \times \frac{1}{5} + \frac{1}{3} \times 0 + \frac{2}{3} \frac{2}{3} = \frac{4}{9} = 0.4444$ Similarly, for the remaining documents: + $P(q|d_2) = 1/15 = 0.0666$ + $P(q|d_3) = 11/30 = 0.3666$ + $P(q|d_4) = 3/20 = 0.15$. 
Thus the final ranking is $d_1, d_3, d_4, d_2$.",0.1,M1_preference_data_222 "What does the following operation output for a given input list of numbers ? 1 def mystery5(ys: List[Int]) = 2 for y <- ys if y >= 0 && y <= 255 yield 3 val bits = 4 for z <- 7 to 0 by -1 yield 5 if ((1 << z) & y) != 0 then ""1"" else ""0"" 6 bits.foldRight("""")((z, acc) => z + acc) We have as an output...","Let $S_{\text {training }}=\{-1,0,1\}$. If our initial cluster assignments are $\{-1,0\}$ for cluster 1 and $\{1\}$ for cluster 2 then this itself is already a fixed point with cluster centroids -0.5 and 1 , respectively. But there is of course the ""symmetric"" fixed point with clusters $\{-1\}$ and $\{0,1\}$ that has cluster centroids -1 and 0.5 , respectively. The exact symmetry here is not necessary. Even if we moved the point 0 slightly the problem would persist.",0.1,M1_preference_data_223 "Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List). def contains[A](l: Iterable[A], elem: A): Boolean = val n = l.size if n <= 5 then for i <- l do if i == elem then return true false else val (p0, p1) = parallel( contains(l.take(n / 2), elem), contains(l.drop(n / 2), elem) ) p0 || p1 Let $n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. What is the asymptotic work of contains if it is called on a List?","Let $v_b = 0$ for all $b\in B$ and $u_a = \min_{\{a,b\} \in E} c(\{a,b\})$ be a dual solution. By definition it is feasible. Now define the vector $(u^*, v^*)$ by \begin{align*} u^*_a = \begin{cases} 1 & \mbox{if $a\in S$}\\ 0 & \mbox{otherwise} \end{cases} \qquad \mbox{and} \qquad v^*_b = \begin{cases} -1 & \mbox{if $b\in N(S)$} \\ 0 & \mbox{otherwise} \end{cases} \end{align*} Note that $(u,v) + \alpha\cdot (u^*, v^*)$ is a feasible solution for any scalar $\alpha \geq 0$. 
Such a solution has dual value $\sum_{a\in A} u_a + \sum_{b\in B} v_b + \alpha \cdot \left( \sum_{a\in S} u^*_a + \sum_{b\in N(S)} v^*_b \right) = \sum_{a\in A} u_a + \sum_{b\in B} v_b + \alpha \cdot (|S| - |N(S)|)$, and as $|S| > |N(S)|$ this shows that the optimal solution to the dual is unbounded (letting $\alpha \rightarrow \infty$).",0.1,M1_preference_data_224 "Consider the following snippet used to produce a high-performance circuit using a statically scheduled HLS tool, such as Xilinx Vivado HLS. Assume that a \verb+double+ multiplication takes several cycles (latency) to compute. \begin{verbatim} double a[ARRAY_SIZE] = ...; int b = 1; for (int i = 0; i < ARRAY_SIZE; i++) if (a[i] * (double) b >= CONST) b++; \end{verbatim} For the same snippet, would a dynamically scheduled circuit naturally achieve better performance? If so, does the answer depend on some particular technique applied in the design of the dynamically scheduled circuit? Which one? Explain. ","$P(""continuous wave"")=\frac{2.01}{58+0.01 \times N^2}=\frac{2.01}{158}$ where $N$ is the size of the (total possible) vocabulary. $P(""pulsed laser"")=\frac{0.01}{58+0.01 \times N^2}=\frac{1}{15\,800}$",0.1,M1_preference_data_225 Use Total Order Broadcast to implement an Asset Transfer sequential object.,"Thus, when comparing the precision of the classifiers trained in two datasets, we conflate the challenges that may arise from data imbalance with the signal that we are trying to extract. The F-1 score is a better metric to handle data imbalance, as it combines precision with recall.",0.1,M1_preference_data_226 "Given a matroid $\mathcal{M}= (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} for matroids returns a base $S = \{s_1, s_2, \dots, s_k\}$ of maximum weight. As noted in the lecture notes, any base consists of the same number, say $k$, of elements (which is said to be the rank of the matroid). 
We further assume that the elements of $S$ are indexed so that $w(s_1) \geq w(s_2) \geq \dots \geq w(s_k)$. Let $S_\ell = \{s_1, \dots, s_\ell\}$ be the subset of $S$ consisting of the $\ell$ first elements, for $\ell = 1,\dots, k$. Then prove that \begin{align*} w(S_\ell) = \max_{T\in \mathcal{I}: |T| = \ell} w(T) \mbox{ for all $\ell =1, \dots, k$.} \end{align*} In other words, \textsc{Greedy} does not only return a base of maximum weight but the ``prefixes'' are maximum weight sets of respective cardinalities.","Notice that minimising $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ reduces to minimizing $\sum_{k=1}^{K} z_{n k} \|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\|_{2}^{2}$ for each $n \in\{1, \ldots, N\}$ independently. Notice that this sum is minimized for $z_{n k}=1$ if $k=\arg \min _{1 \leq j \leq K}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{j}\right\|_{2}^{2}$ and 0 otherwise. Hence the closed-form update is, for each $n$: $$ z_{n k}=\left\{\begin{array}{l} 1 \text { if } k=\arg \min _{1 \leq j \leq K}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{j}\right\|_{2}^{2} \\ 0 \text { otherwise } \end{array}\right. $$ This step corresponds to the assignment step and boils down to finding, for each point $\mathbf{x}_{n}$, the centroid $\boldsymbol{\mu}_{k}$ which is closest.",0.1,M1_preference_data_227 "Consider the following code transformation: \begin{verbatim} r3 = r3 << 4 r4 = r4 << 4 st [r3] = r2 ld r1 = [r4] r5 = r3 + 4 r1 = r1 + 1 st [r5] = r6 => r3 = r3 << 4 r4 = r4 << 4 st [r3] = r2 ld r1 = [r4] r5 = r3 + 4 r1 = r1 + 1 st [r5] = r6 \end{verbatim} Explain (i) which pairs of instructions that have been reordered in the above snippets are potentially resulting in erroneous execution in general and (ii) discuss specifically whether they are indeed a problem in this specific case.",def tails(ls: List[Int]): List[List[Int]] = ls :: (ls match { case x :: xs => tails(xs) case Nil => Nil 
}),0.1,M1_preference_data_228 "Review the notion of depth seen in the lecture. What does it represent? Below is a formula for the depth of a divide and conquer algorithm working on an array segment of size $L$, as a function of $L$. The values $c$, $d$ and $T$ are constants. We assume that $L>0$ and $T>0$. $$ D(L) = \begin{cases} c \cdot L &\text{if}\ L \leq T \\ \text{max}\left( D\left(\left\lfloor \frac L2 \right\rfloor \right), D\left(L - \left\lfloor \frac L2 \right\rfloor \right)\right) + d &\text{otherwise} \end{cases} $$ Below the threshold T, the algorithm proceeds sequentially and takes time c to process each single element. Above the threshold, the algorithm is applied recursively over the two halves of the array. The results are then merged using an operation that takes d units of time. Is it the case that for all $1 \leq L_1 \leq L_2$ we have $D(L_1) \leq D(L_2)$? If it is the case, prove the property by induction on $L$. If it is not the case, give a counterexample showing values of $L_1$, $L_2$, $c$, and $d$ for which the property does not hold. ","concatenative morphology uses roots, prefixes and suffixes only. Examples: in-cred-ible : in-: prefix, cred: root, -ible: suffix. concatenative morphology is not relevant for all languages: more complex languages can involve infixes (e.g. Tagalog, Hebrew) or circumfixes (e.g. German). Pattern-based morphology should then be used. the complexity of the morphology can vary a lot between languages: as easy as in Spanish, as hard as in Turkish, or Hebrew, Tagalog...",0.1,M1_preference_data_229 "Implement a reliable broadcast algorithm without using any failure detector, i.e., using only BestEffort-Broadcast(BEB).","No, the goal of tests is to increase confidence in the code's correctness; coverage is only a metric to evaluate tests and not the main goal",0.1,M1_preference_data_230 "In the maximum directed cut problem we are given as input a directed graph $G = (V, A)$. 
Each arc $(i, j)\in A$ has a nonnegative weight $w_{ij} \geq 0$. The goal is to partition $V$ into two sets $U$ and $W = V \setminus U$ so as to maximize the total weight of the arcs going from $U$ to $W$ (that is, arcs $(i, j)$ with $i \in U$ and $j \in W$). Give a randomized 1/4-approximation algorithm for this problem (together with a proof that it is a 1/4-approximation in expectation).","The Hessian of $f(v, w)$ is equal to $$ \left(\begin{array}{cc} 2 w^{2} & 4 v w-2 r \\ 4 v w-2 r & 2 v^{2} \end{array}\right) $$ This matrix is not positive semi-definite in general. Its determinant is equal to $-4(r-3 v w)(r-v w)$ which is not necessarily positive. Therefore the problem is only element-wise convex but not jointly convex in $v$ and $w$.",0.1,M1_preference_data_231 "Explain why any fail-noisy consensus algorithm (one that uses an eventually perfect failure detector ◇P) requires a majority of the processes to be correct. More precisely, provide a “bad run” in the case where the majority of processes is faulty.","system 2: error is first criterion, then for statistically non-significant differences in error (which is the case for system 1 and 2), then min std dev is better (especially with such a big difference as here!)",0.1,M1_preference_data_232 Prove that √2 is irrational.,"The current sprint backlog cannot be changed; the client should contact the Product Owner to plan the next sprint instead.",0.1,M1_preference_data_233 "Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. Is the problem jointly convex in $\mathbf{v}$ and $\mathbf{w}$? 
Look at a simple case, say for only 1 user and 1 movie and assume that $D=1$, i.e., consider $f(v, w)=\frac{1}{2}(v w+c-r)^{2}$. [Hint: A $2 \times 2$ matrix is positive definite if and only if the two diagonal terms are positive and the determinant is positive.]","When we want to prove something by contradiction, we start by assuming that the negation (of whatever we are trying to prove) is true. We said in the classroom that p → q is equivalent to ¬p ∨ q. Therefore the negation of p → q is p ∧ ¬q. With that said, let's assume that a^2 is even and a is odd. Since a is odd, a can be written as a=2k+1. Therefore, a^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1. Thus, a^2 is odd, a contradiction!",0.1,M1_preference_data_234 "For students born in April, how many months older are they than the average student in their grade? 5.4898 months For students born in March, how many months younger are they than the average student in their grade? 5.5102 months Discuss: Considering your common sense and the results obtained from the simulation: what advantage do students born in April have over those born in March? How may this affect their odds of becoming professional athletes?","Precision is preferred when a very large amount of data is available and only a few well-chosen ones are enough: we want to have those very early, e.g. Web search. Recall is preferred when having all the correct documents is important (implying that, if we want to handle them, they are not that many). Typically in legal situations.",0.1,M1_preference_data_235 "Prove Hall's Theorem: \begin{itemize} \item[]``An $n$-by-$n$ bipartite graph $G=(A \cup B, E)$ has a perfect matching if and only if $|S| \leq |N(S)|$ for all $S\subseteq A$.'' \end{itemize} \emph{(Hint: use the properties of the augmenting path algorithm for the hard direction.)}","We could consider at least two approaches here: either binomial confidence interval or t-test. 
• binomial confidence interval: the evaluation of a binary classifier (success or not) follows a binomial law with parameters (perror, T), where T is the test-set size (157 in the above question; is it big enough?). Using the normal approximation of the binomial law, the width of the confidence interval around the estimated error probability is q(α)*sqrt(pb*(1-pb)/T), where q(α) is the 1-α quantile (for a 1 - α confidence level) and pb is the estimation of perror. We here want this confidence interval width to be 0.02, and have pb = 0.118 (and 'know' that q(0.05) = 1.96 from normal distribution quantile charts); thus we have to solve: (0.02)^2 = (1.96)^2*(0.118*(1-0.118))/T Thus T ≃ 1000. • t-test approach: let's consider estimating their relative behaviour on each of the test cases (i.e. each test estimation subset is of size 1). If the new system has an error of 0.098 (= 0.118 - 0.02), it can differ from system 3 on between 0.02 of the test cases (both systems almost always agree but where the new system improves the results) and 0.216 of the test cases (the two systems never make their errors on the same test case, so they disagree on 0.118 + 0.098 of the cases). Thus μ of the t-test is between 0.02 and 0.216. And s = 0.004 (by assumption, same variance). Thus t is between 5*sqrt(T) and 54*sqrt(T), which is already bigger than 1.645 for any T bigger than 1. So this doesn't help much. So all we can say is that if we want to have a (lowest possible) difference of 0.02 we should have at least 1/0.02 = 50 test cases ;-) And if we consider that we have a 0.216 difference, then we have at least 5 test cases... The reason why these numbers are so low is simply because we here make strong assumptions about the test setup: that it is a paired evaluation. In such a case, having a difference (0.02) that is 5 times bigger than the standard deviation is always statistically significant at a 95% level.",0.1,M1_preference_data_236 "Let $\xv_1, . . . , \xv_N$ be a dataset of $N$ vectors in $\R^D$.
What does it mean for the data vectors $\xv_1, . . . , \xv_N$ to be centered, as for principal component analysis (PCA) to be meaningful? Use the notation $x_{nd}$ for individual entries.","def knn_weighting_estimate(doc_vectors, doc_labels, query_vector, k=10): """""" Weighting estimation for kNN classification :param doc_vectors: Document vectors (np.array(np.array)) :param doc_labels: Document labels/topics (list) :param query_vector: Query vector (np.array) :param k: Number of nearest neighbors to retrieve :return: A dictionary containing the estimation (sorted) score for each label/topic (dict) """""" top_k_doc_indices = knn(doc_vectors, query_vector, k) top_k_labels = [doc_labels[i] for i in top_k_doc_indices] scores = {t:0 for t in list(set(doc_labels))} for i in top_k_doc_indices: scores[doc_labels[i]] += cosine_similarity(query_vector, doc_vectors[i]) return scores",0.1,M1_preference_data_237 "Your aim is to evaluate a movie review analysis system, the purpose of which is to classify the overall review's sentiment.For each movie review, such a system outputs one of the following classes: positive and negative.You perform your evaluation on a corpus that contains a total of 1000 reviews, out of which {neg} are negative reviews.What is the recall of a system which:predicted that {=({tn} + {fn})} are negative,and was correct for only {tn} of those negative reviews?Give your answer as a numerical value to two decimal places.","def reachable(n: Int, init: Set[Node], edges: List[Edge]): Set[Node] = n match { case 0 => init case _ => val next = init.flatMap(node => edges.filter(_.from == node).map(_.to)) reachable(n - 1, next, edges) }",0.1,M1_preference_data_238 "In the context of a Load Store Queue, how does the LSQ get ordering information?","Clearly, $\phi(\mathbf{x})=f(\mathbf{x})$ will be the corresponding feature map.",0.1,M1_preference_data_239 "Are reorder buffers in processors ordered (e.g., FIFO-like) or unordered structures?
Why?","Each old version should be in a branch of the repository, not in another repository",0.1,M1_preference_data_240 "For the same processor, with the Memory to Execute forwarding path, see if there exists a way to optimize this snippet of a program (assume all instructions require a single cycle in the Execute unit): \begin{verbatim} add r5, r2, r1 mul r7, r12, r5 add r5, r3, r1 mul r8, r12, r5 add r5, r4, r1 \end{verbatim} If there is one, show the modified code, explain the reason of the change(s), and say how many cycles you expect to gain with the modification(s). If there is none possible, explain why. Assume that the processor has 31 general purpose registers and \texttt{r18}--\texttt{r23} are unused in this program.","Let us describe $\mathcal{H}$ by giving a procedure to sample an element $h\in \mathcal{H}$: \begin{itemize} \item for each $u\in U$, sample $h_u$ uniformly at random from $[0,1]$. \item set $h(A) = \min_{u\in A} h_u$ for any non-empty $A \subseteq U$ (i.e., MinHashing). \end{itemize} In Exercise Set 10, we showed that $\Pr[h(A) = h(B)] = J(A,B)$. So $\Pr[h(A) \neq h(B)] = 1- J(A,B)$ and the claimed bounds follow immediately.",0.1,M1_preference_data_241 "In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.)
If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Let $x^*$ be an extreme point solution to the linear program and consider the (undirected) bipartite graph $H$ associated to $x^*$ defined as follows. 
Its left-hand-side has a vertex $a_i$ for each machine $i\in M$ and its right-hand-side has a vertex $b_j$ for each job $j\in J$. Finally, $H$ has an edge $\{a_i, b_j\}$ iff $0 < x^*_{ij} < 1$.\\[0mm] {Prove that $H$ is acyclic} (using that $x^*$ is an extreme point).","No! Assume that some algorithm does not ensure the causal delivery property but ensures its non-uniform variant. Assume also that m1 → m2. This means that a correct process has to deliver m1 before delivering m2, but a faulty process is allowed to deliver m2 and not deliver m1. However, a process doesn’t know that it is faulty in advance (i.e., before it crashes). So, no algorithm can “treat faulty processes in a special way”, i.e., a process has to behave correctly until it crashes. Reminder (Causal delivery property): For any message m1 that potentially caused a message m2, i.e., m1 → m2, no process delivers m2 unless it has already delivered m1.",0.1,M1_preference_data_242 "Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What is the number $N$ of additional rules that should be added to $G$ to make it applicable to any sequence of words from a set of 10000 distinct words with an average syntactic ambiguity of 1.5? Justify your answer.","def minMax(data: ParSeq[Int]): (Int, Int) = data.map({ (x: Int) => (x, x) }).reduce({ case ((mn1, mx1), (mn2, mx2)) => (min(mn1, mn2), max(mx1, mx2)) }) Or: def minMax(data: ParSeq[Int]): (Int, Int) = (data.reduce(min), data.reduce(max))",0.1,M1_preference_data_243 "Imagine you're working at JaaS, the Jokes-as-a-Service platform. With JaaS, everyone can be funny any time by having new jokes at their fingertips via a public API. Your first task is to convert user feedback into user stories.
Here is one such piece of feedback: ""Hi, I have been using your app for a long time, but I've been recently diagnosed with diabetic retinopathy, which made me blind in both eyes, so I can’t read anymore. I've been trying to use the Voice-Over feature on my phone, since it is supposed to be able to read out loud what the app displays. However, it seems to only be able to read the title of the jokes, not the actual jokes themselves. I have not made anyone laugh in 3 months, and I feel miserable! A friend of mine told me that it could be because your app displays the jokes as images instead of text, which is not supported by the voice-over program on my phone. Please do something!"" Turn this feedback into a user story that follows the following guidelines: 1) A user story that summarizes all necessary information from the feedback 2) the user story does not contain any unnecessary information","The solution is to differentiate the two kinds of adjectives. For instance: VP -> VBe Adj+ N -> Adj- N Adj+ -> Adj+ PP Adj+ -> Ving Adj+ -> Adj Adj- -> Adj (and, of course, add the right PoS tag into the lexicon, e.g. former:Adj-). Here we keep the PoS tag Adj for 'Adj- or Adj+'.",0.1,M1_preference_data_244 "Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Into how many rules should the 9 rules provided for $G$ be expanded to cope with simple number agreements? Justify your answer.","By definition of $\gamma$ and $M$ we have that $\gamma/\|\wv_\star\|_2\leq M$. And therefore we obtain: $t \leq \frac{R^2}{M^2}$",0.1,M1_preference_data_245 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures).
It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Consider the parallel sequence xs containing the three elements x1, x2 and x3. Also consider the following call to aggregate: xs.aggregate(z)(f, g) The above call might potentially result in the following computation: f(f(f(z, x1), x2), x3) But it might also result in other computations. Come up with at least two other computations in terms of f and g that may result from the above call to aggregate. Below are other examples of calls to aggregate. In each case, check if the call can lead to different results depending on the strategy used by aggregate to aggregate all values contained in data down to a single value. You should assume that data is a parallel sequence of values of type BigInt. 4. data.aggregate(1)((acc, x) => x * x * acc, _ * _)","def generatePassedExams(students: List[Student], courses: List[Course]): List[(String, String, Int)] = students.flatMap {s => s.exams .filter(_.grade > 2) .flatMap {e => courses .filter(c => e.courseId == c.id) .map(c => (s.name, c.name, e.grade))}}",0.1,M1_preference_data_246 "Freshly graduated from EPFL, you have been hired as contractors for a successful and rapidly growing bank. The bank has been experiencing problems with their money management system recently, which is written in Scala, and so they hired the best and brightest young engineer they could find: you! The system had been working perfectly fine so far, they tell you. In the past days, due to an increased number of customers, they had to switch from a single-threaded sequential execution environment to a multi-threaded concurrent one, in which the threads may perform transactions concurrently. That's when problems started, your manager says... 
Here is the code responsible to withdraw money from the account from and transfer it to the account to, within the same bank: def transfer(from: Account, to: Account, amount: BigInt): Unit = { require(amount >= 0) val balanceFrom = from.balance if (balanceFrom >= amount) { from.balance = balanceFrom - amount val balanceTo = to.balance to.balance = balanceTo + amount } } For the bank, it is very important that the following two properties hold after any sequence of completed transfer transactions: The balance of an account never goes below 0. The total sum of money held by the bank is constant. For each of the proposed implementations of transfer below, check which of the two properties hold. Additionally, check if the system is vulnerable to deadlocks. Variant 1: def transfer1(from: Account, to: Account, amount: Long): Unit = { require(amount >= 0) val balanceFrom = from.balance if (balanceFrom >= amount) { from.synchronized { from.balance = balanceFrom - amount } to.synchronized { val balanceTo = to.balance to.balance = balanceTo + amount } } } Variant 2: def transfer2(from: Account, to: Account, amount: Long): Unit = { require(amount >= 0) from.synchronized { val balanceFrom = from.balance if (balanceFrom >= amount) { from.balance = balanceFrom - amount to.synchronized { val balanceTo = to.balance to.balance = balanceTo + amount } } } } Variant 3 object lock // Global object def transfer3(from: Account, to: Account, amount: Long): Unit = { require(amount >= 0) lock.synchronized { val balanceFrom = from.balance if (balanceFrom >= amount) { from.balance = balanceFrom - amount val balanceTo = to.balance to.balance = balanceTo + amount } } }","Let $y$ be an optimal solution to the linear program of value $\optlp$. We shall give a randomized rounding that outputs an $s,t$-cut that in expectation cuts $\optlp$ edges. Since any cut must cut at least $\opt$ edges, it then follows that $\optlp \geq \opt$. We now describe the rounding procedure. 
For each vertex $v\in V$, define \begin{align*} x_v = \mbox{the length of the shortest path from $s$ to $v$ in the graph where edge $e\in E$ has length $y_e$.} \end{align*} Notice that $x_s =0$ and $x_t \geq 1$. That $x_t\geq 1$ is due to the fact that for every path $p\in P$ from $s$ to $t$ we have $\sum_{e\in p} y_e \geq 1$. Furthermore, the variables $x$ satisfy the following key property: \begin{claim} For every edge $\{u,v\} \in E$, we have $y_{\{u,v\}} \geq |x_v- x_u|$. \label{claim:2} \end{claim} \begin{proof} Name the vertices so that $x_v > x_u$ and suppose toward contradiction that $y_{\{u,v\}} < x_v - x_u$. Let $p_u$ be a shortest path from $s$ to $u$ that has length $x_u$. And consider the path $p_u + \{u,v\}$. This is a path from $s$ to $v$ that has length $x_u + y_{\{u,v\}}$, which is strictly smaller than $x_v$ (contradicting the definition of $x_v$ to be the length of a \emph{shortest} path). \end{proof} We can thus deduce that we have a feasible solution to the linear program: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & y_{\{u,v\}} \geq x_u - x_v, \quad \mbox{for every $\{u,v\} \in E$}\\ & & y_{\{u,v\}} \geq x_v - x_u, \quad \mbox{for every $\{u,v\} \in E$}\\ & & x_s = 0, x_t \geq 1,\\ & & x_v \ge 0, \forall v \in V \\ & & y_e \ge 0, \forall e \in E. \end{array} \end{equation*} In class, we saw that the above linear program has cost equal to $\opt$, which finishes the proof. For completeness, we give the full argument. The rounding algorithm is as follows: \begin{itemize} \item Select $\theta \in[0,1]$ uniformly at random. \item Output $S = \{v\in V: x_v < \theta \}$. \end{itemize} Notice that $S$ always contains $s$ (since $x_s = 0$) and never contains $t$ (since $x_t \geq 1$). We now analyze the expected number of edges that are cut. Let $X_e$ be the random indicator variable for the event that edge $e$ is cut.
Then \begin{align*} \E_\theta[\mbox{\# edges cut}] = \E_\theta\left[\sum_{e\in E} X_e \right] = \sum_{e\in E} \E_\theta[X_e] = \sum_{e\in E} \Pr_\theta[\mbox{e is cut}]\,. \end{align*} We now analyze, for a fixed edge $e=\{u,v\} \in E$, the probability that $e$ is cut over the uniformly at random choice $\theta \in [0,1]$. Assume without loss of generality that $x_u < x_v$. We then have that $e$ is cut if and only if $\theta \in [x_u, x_v]$ which happens with probability at most $x_v - x_u$ (we say at most because $x_v$ could, in theory, be bigger than $1$). Now, by Claim~\ref{claim:2} above, $x_v -x_u \leq y_e$. So we have \[\opt \leq \E_\theta[\mbox{\# edges cut}] \leq \sum_{e\in E} y_e= \optlp\,,\] as required.",0.1,M1_preference_data_247 "Can we devise a broadcast algorithm that does not ensure the causal delivery property but only (in) its non-uniform variant: No correct process pi delivers a message m2 unless pi has already delivered every message m1 such that m1 → m2?",$\Theta(n)$,0.1,M1_preference_data_248 "Assume you are working on a mobile application. Your team's graphic designer tells you: ""I wanted the logo to be exactly 250 pixels wide, but on my new phone it's not. This is a bug that needs fixing!"" In one sentence, explain whether you agree with the designer's assessment and why."," The schedule of statically scheduled HLS is known at compile time, while with dynamically scheduled HLS, dependences are encoded in the dataflow graph and resolved at runtime.",0.1,M1_preference_data_249 "In context of Meltdown Attack, what are the basic ideas of the attack and how they relate to the snippet above? What is the microarchitectural mechanism targeted by the attack? Is it likely to work on all processors with such architectural mechanism or some processors may be intrinsically not vulnerable? Explain. 
",['(1000 - {fn} - {neg}) / ((1000 - {fn} - {neg}) + {fn})'],0.1,M1_preference_data_250 "In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Recall that a vertex cover instance is specified by an undirected graph $G= (V,E)$ and non-negative vertex-weights $w: V \rightarrow \mathbb{R}_+$. The task is to find a vertex cover $S \subseteq V$ of minimum total weight $\sum_{i\in S} w(i)$, where a subset $S \subseteq V$ of the vertices is said to be a vertex cover if for every edge $\{i,j\} \in E$, $i\in S$ or $j\in S$. The natural LP relaxation (as seen in class) is as follows: \begin{align*} \textbf{minimize} \hspace{0.4cm} & \sum_{i\in V} w(i) x_i \\ \textbf{subject to}\hspace{0.4cm} & x_i + x_j \geq 1 \qquad \mbox{for $\{i,j\} \in E$}\\ & \hspace{0.9cm} x_i \geq 0 \qquad \mbox{for $i\in V$} \end{align*} Given a fractional solution $x$ to the above linear program, a natural approach to solve the vertex cover problem is to give a rounding algorithm. Indeed, in class we analyzed a simple rounding scheme: output the vertex cover $S = \{i\in V: x_i \geq 1/2\}$. We proved that $w(S) \leq 2 \sum_{i\in V} w(i) x_i$. In this subproblem, your task is to prove that the following alternative randomized rounding scheme gives the same guarantee in expectation. The randomized rounding scheme is as follows: \begin{itemize} \item Select $t \in [0,1/2]$ uniformly at random. \item Output $S_t = \{i\in V: x_i \geq t\}$. \end{itemize} Prove (i) that the output $S_t$ is a feasible vertex cover solution (for any $t\in [0,1/2]$) and (ii) that $\E[\sum_{i\in S_t} w(i)] \leq 2 \cdot \sum_{i\in V} w(i) x_i$, where the expectation is over the random choice of $t$.
We remark that you \emph{cannot} say that $x$ is half-integral as $x$ may not be an extreme point solution to the linear program. \\ {\em (In this problem you are asked to prove that the randomized rounding scheme (i) always outputs a feasible solution and (ii) the expected cost of the output solution is at most twice the cost of the linear programming solution. Recall that you are allowed to refer to material covered in the lecture notes.)}","We partition the universe into $\sqrt{n}$ disjoint blocks $[n]=B_1\cup \ldots \cup B_{\sqrt{n}}$ each of size $\sqrt{n}$ and apply the AMS sketch with $\epsilon$ a sufficiently small constant and $\delta=1/n^2$. Denote the corresponding frequency vectors by $f^1,\ldots, f^{\sqrt{n}}\in \mathbb{R}^{\sqrt{n}}$. The algorithm is as follows. For every $i\in [\sqrt{n}]$ and every $j\in B_i$ we use the AMS sketch to obtain a $(1\pm \epsilon)$-approximation to $$ ||f^i||_2^2 $$ and $$ ||f^i-\lceil n^{1/4}\rceil\cdot e_{j}||_2^2. $$ Since blocks are of size $\sqrt{n}$, when we subtract an incorrect element, the corresponding Euclidean norm squared goes up by at least a $(1+\Omega(1))$ factor, which we can detect with the AMS sketch as long as $\epsilon$ is a small constant. If we subtract a correct element, the Euclidean norm squared reduces by at least a $(1-\Omega(1))$ factor, which we can again detect with the AMS sketch with constant $\epsilon$. The setting of $\delta=1/n^2$ ensures that we can afford a union bound over all possible elements to subtract.",0.1,M1_preference_data_251 "What is the main difficulty in performing a Prime+Probe attack on a system with L1 caches private to each core and a shared LLC, and with attacker and victim running on different cores? How can this difficulty be circumvented? ","Transformers don’t have a recurrent computation, so the representations at each time step can directly attend to the representations at other time steps.
As a result, it is more effective for modeling long-term dependencies because there is no vanishing-gradient effect across time. Because there is no recurrence, the representations at each time step can be computed in parallel.",0.1,M1_preference_data_252 "Imagine that the data structure you are given, instead of an Array[A], is one called ParSeq[A]. This class offers the two following methods, which work in parallel: def map[B](f: A => B): ParSeq[B] def reduce(f: (A, A) => A): A Can you write the following minMax function in terms of map and/or reduce operations? def minMax(data: ParSeq[Int]): (Int, Int) = ???","/** * computes the 'hull' of a set of rectangles given as an array, i.e., the smallest * rectangle covering all rectangles. */ def hull(rs: Array[Rectangle]): Rectangle = rs.reduce(hull2)",0.1,M1_preference_data_253 "Consider the following toy corpus: the cat cut the hat How many different bigrams of characters (including whitespace) do you have in that corpus?",FIFO-like ordered structure. ,0.1,M1_preference_data_254 What kind of exceptions require the processor to implement them precisely? Why? Give three examples of such exceptions.,['(((80+1000-{a}-{b}+80)/1000)-((({a}*{b})/(1000*1000))+((1000-{a})*(1000-{b}))/(1000*1000)))/(1-((({a}*{b})/(1000*1000))+((1000-{a})*(1000-{b}))/(1000*1000)))'],0.1,M1_preference_data_255 "Consider the following document: D = 'the exports from Switzerland to the USA are increasing in 2006' Propose a possible indexing set for this document. Justify your answer.","At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_2 - s_1 & \text{ (1)}\\ s_2 &= 3 -x_2 + s_1 & \text{ (2)}\\ s_3 &= 2 -x_2& \text{ (3)}\\ \cline{1-2} z &= 2 + x_2 - 2s_1\\ x_1& :=1\text{ }x_2:=0\text{ }s_1:=0\text{ }s_2:=3\text{ }s_3:=2 \end{align*} Only $x_2$ has a positive coefficient in $z$, so we will pivot $x_2$.
We have $\nearrow x_2 \longrightarrow \ x_2 \leq \; \infty\ (1),\ x_2 \leq 3\ (2),\ x_2 \leq 2\ (3) \longrightarrow x_2 := 2,\: s_3 := 0$ \begin{align*} \hspace{1cm} x_1 &= 3 - s_1 - s_3 \\ s_2 &= 1 + s_1 +s_3 \\ x_2 &= 2 - s_3 \\ \cline{1-2} z &= 4 - 2s_1 - s_3\\ x_1& :=3\text{ }x_2:=2\text{ }s_1:=0\text{ }s_2:=1\text{ }s_3:=0 \end{align*}",0.1,M1_preference_data_256 "Assume you are working on a school project with your friend. Your friend uses ""print"" to debug his code. Is this a good idea and, regardless of whether it is bad or not, is there a better way to do it? Explain why or why not in max 2 sentences.","No! Assume that some broadcast algorithm ensures the causal delivery property and is not reliable, but best-effort; define an instance co of the corresponding abstraction, where processes co-broadcast and co-deliver messages. The only way for an algorithm to be best-effort broadcast but not reliable broadcast is to violate the agreement property: there must be some execution of the algorithm where some correct process p co-delivers a message m that some other process q does not ever co-deliver. This is possible in a best-effort broadcast algorithm, in fact this can only happen if the process s that co-broadcasts the message m is faulty (and crashes during the broadcast of m).",0.1,M1_preference_data_257 "What hardware support is necessary in a processor to implement modulo scheduling? Name all hardware features involved and give a brief explanation of each of them.","Variant 1: In this variant, property 2 can be violated. It is not susceptible to deadlocks. Variant 2: In this variant, neither of the two properties can be violated. However, it is susceptible to deadlocks. Variant 3: In this variant, neither of the two properties can be violated and no deadlock can occur. 
It is however still not entirely satisfactory, since no two threads can execute transfers in parallel, even when the accounts under consideration are completely disjoint.",0.1,M1_preference_data_258 "The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$: For any function $g:\mathcal X \to \R$, and for a Bayes predictor $g^\star: \mathcal X \to \R$ (i.e., such that $\sign\circ g^\star$ is a Bayes classifier), show that \begin{align*} \mathcal L (g)-\mathcal L^\star = \mathbb E[\boldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}|2\eta(X)-1|]. \end{align*} ","Changing the signature of a public method will break binary compatibility, as existing compiled code refers to the method using a ""void""-returning signature. While the change is backward source compatible, it is not overall backward compatible.",0.1,M1_preference_data_259 "Consider we use the set of transformations: insertion, deletion, substitution, and transposition. We want to compute the edit distance between words execution and exceuton, i.e. D(execution, exceuton).When computing the above, what is the value you get for D(exec,exce)?Give your answer as a numerical value. ","/** * computes the 'hull' of two rectangles as a rectangle, i.e., the smallest * rectangle covering both rectangles.
*/ def hull2(r1: Rectangle, r2: Rectangle): Rectangle = // take the smallest coordinates of both lower left corners val newLowerLeft = Point( math.min(r1.lowerLeft.x, r2.lowerLeft.x), math.min(r1.lowerLeft.y, r2.lowerLeft.y) ) // and the largest coordinates of both upper right corners val newUpperRight = Point( math.max(r1.upperRight.x, r2.upperRight.x), math.max(r1.upperRight.y, r2.upperRight.y) ) Rectangle(newLowerLeft, newUpperRight)",0.1,M1_preference_data_260 "You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Give a deterministic algorithm with a competitive ratio of $1/W$. \\ {\em (In this problem you are asked to (i) design a deterministic online algorithm for the above problem and (ii) to prove that your algorithm is $1/W$-competitive. 
Recall that you are allowed to refer to material covered in the lecture notes.)}",27^2 = 729 bigrams in total,0.1,M1_preference_data_261 Assume your colleague wants to wait until the next minor release to include a major bugfix instead of making a bugfix release. Explain why this is not a good idea.,"Yes, because it will make ""ShoppingCart"" much easier to test by injecting a fake payment processor during unit tests.",0.1,M1_preference_data_262 "Assume you have been working with a friend on a LinkedIn-like app, where a user can lookup the shortest path to another user on the platform. You currently have two issues, the operation of finding a path sometimes takes a considerable amount of time, and it freezes the app in the process. Your friend suggests to run this operation concurrently with the main thread, he says it's going to speed up the duration of the operation and will stop the freezes. Your friend suggestion will actually only fix one of the two problems, can you tell which one is it and why?","def minMax(a: Array[Int]): (Int, Int) = { val threshold = 10 def minMaxPar(from: Int, until: Int): (Int, Int) = { if (until - from <= threshold) { var i = from var min = a(from) var max = a(from) while (i < until) { val x = a(i) if(x < min) min = x if(x > max) max = x i = i + 1 } (min, max) } else { val mid = from + ((until - from) / 2) val ((xMin, xMax), (yMin, yMax)) = parallel(minMaxPar(from, mid), minMaxPar(mid, until)) (min(xMin, yMin), max(xMax, yMax)) } } minMaxPar(0, a.size) }",0.1,M1_preference_data_263 Implement a function that computes the confidence for a given set of rules and their respective support. You can use the following formula: $$\mathrm{conf}(X \Rightarrow Y) = \mathrm{supp}(X \cup Y) / \mathrm{supp}(X)$$,"Functionally, it does not matter. 
One could pick one at random, execute the longest latency instructions first, or (if there is a way to determine it efficiently) execute the instructions with more pending dependences first.",0.1,M1_preference_data_264 " & \multicolumn{3}{c}{\textbf{ProofWriter}} & \multicolumn{3}{c}{\textbf{CLUTRR-SG}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} What does the following function implement? 1 a => b => (not a) (not b) fls ","Notice that there is absolutely no need to compute every pair of similarities!!! Only 3 of them are useful, cf drawing (a); and even the drawing alone might be sufficient (no computation at all)! +------------------+ | | | +-------+ | | | | +------+ | +----+ | | | | | +----+ | | | | | | | | (d2) (d5) (d3) (d4) (d6) (d1) Notice that, of course, more similar means bigger cosine! In case you really want to do some computation, here are some values: $D(2,5)=4 / \sqrt{17}$, $D(3,5)=5 / \sqrt{34}$, and $D(1,3)=3 / \sqrt{10}$.",0.1,M1_preference_data_265 "What does it mean that a processor implements precise exceptions?","When preparing benchmarks, your colleague should make sure that the inputs are representative of the real world, and complex enough to potentially detect performance bottlenecks. In this example, the input strings should be long enough.",0.1,M1_preference_data_266 "Assume that the texts to be tagged contain 1.5% of unknown words and that the performance of the tagger to be used is 98% on known words. What will be its typical overall performance in the following situation: all unknown words are systematically wrongly tagged?","Let $i^*$ be the weight of a minimum-weight perfect matching (and so $\mathcal{M}_{i^*}$ contains all perfect matchings of minimum weight). We first prove that \begin{align} \label{eq:prob} \Pr[f_{i^*}(p)= 0] \leq 1/n\,, \end{align} where the probability is over the random selection of $p(e)$'s in Step 2 of the algorithm.
First note that if we think of $f_{i^*}(x)$ as a polynomial in the variables $x(e)$ for each edge $e\in E$ (that are replaced by the values $p(e)$), then we have that $f_{i^*}(x)$ is not identical to zero since $\mathcal{M}_{i^*} \neq \emptyset$ and the monomial $\prod_{e\in M} x(e)$ corresponding to a matching $M\in \mathcal{M}_{i^*}$ appears only once (with a non-zero coefficient) in $f_{i^*}(p)$. The probability bound~\eqref{eq:prob} now follows from the Schwartz-Zippel lemma that we saw in class since each $p(e)$ is selected from a set of $n^2$ values at random and the degree of $f_{i^*}$ is $n$. We now prove that the algorithm \emph{always} outputs the correct answer if $f_{i^*}(p) \neq 0$ (and hence with probability at least $1-1/n$ over the selection of $p(e)$'s). First note that $2^{i^* n^{100}}$ divides $\det(A)$, since for $i < i^*$ we have $f_i(x) = 0$ (in particular, $f_i(p) = 0$) because $i^*$ is the value of a min-weight perfect matching and so $\mathcal{M}_i = \emptyset$, and all $f_i(p)$ (for all $i$) are integers, therefore \begin{align*} \det(A) = \sum_{i=0}^\infty 2^{i\cdot n^{100}} f_i(p) = \sum_{i=i^*}^\infty 2^{i\cdot n^{100}} f_i(p) \end{align*} is divisible by $2^{i^* \cdot n^{100}}$. We have thus proved that the algorithm outputs an integer that is at least $i^*$. We now show that if $f_{i^*}(p) \neq 0$, then $2^{(i^* +1)n^{100}}$ does not divide $\det(A)$ and thus the algorithm must output the correct value. To do so, we bound the absolute value of $f_{i^*}(p)$. We have \begin{align*} |f_{i^*}(p)| = \left|\sum_{M \in \mathcal{M}_i} \textrm{sign}(M) \prod_{e\in M} p(e)\right| \leq \sum_{M \in \mathcal{M}_{i^*}} \prod_{e\in M} p(e) \leq |\mathcal{M}_{i^*}| \prod_{e\in M}n^2 \leq n! \cdot (n^2)^n \leq n^{3n} < 2^{n^{100}}\,. 
\end{align*} Therefore \begin{align*} \left|2^{i^*\cdot n^{100}}f_{i^*}(p)\right| < 2^{i^* \cdot n^{100}} 2^{n^{100}} = 2^{(i^*+1) \cdot n^{100}} \end{align*} but $2^{i^*\cdot n^{100}}f_{i^*}(p) \ne 0$, and so $2^{(i^*+1) n^{100}}$ does not divide $2^{i^*\cdot n^{100}}f_{i^*}(p)$, whereas it does divide $2^{i \cdot n^{100}}f_{i}(p)$ for all $i > i^*$. Therefore it does not divide $\det(A)$.",0.1,M1_preference_data_267 "In Itanium's procedure call and return mechanism, Still ignoring potential problems during the execution of erb+alloc+, what hardware changes are needed to the processor (compared to a more traditional and straightforward VLIW processor) to implement this functionality?", false,0.1,M1_preference_data_268 "Consider the following grammar: S -> NP VP NP -> Det N VP -> VBe Adj NP -> NP PP VP -> V N -> Adj N VP -> VP PP Adj -> Adj PP V -> VBe Adj -> Ving PP -> Prep NP and the following lexicon: at:Prep is:VBe old:Adj black:Adj looking:Ving the:Det cat:N mouse:N under:Prep former:Adj nice:Adj with:Prep This grammar also accepts the following examples, which are (either syntactically or semantically) incorrect in English: the cat is old at the mouse the cat is nice under the mouse the cat is nice at the mouse at the mouse In the first example, attaching 'at the mouse' to 'old' is incorrect in English because some adjectives (e.g. 'old') may not have a PP; the second example is incorrect because 'nice' can only take PPs where the preposition is limited to a certain subset (e.g. 'at', but not 'under'); and the third example is incorrect because adjectives may not combine with more than one PP. 
Propose modifications to the grammar in order to prevent these types of over-generation.","Changing the signature of a private method does not break compatibility, since nobody else could call it.",0.1,M1_preference_data_269 "Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Precisely define the type of grammar G is corresponding to (for that, consider at least the following aspects: dependency-based vs. constituency-based, position in the Chomsky hierarchy, and CNF). Justify your answer for each of the aspects you will be mentioning.",['2'],0.1,M1_preference_data_270 "Your colleague wants to improve the performance of a web application by caching common results in an in-memory LRU cache, where the least recently used results are evicted when the cache is full, and wants your opinion on the best way to implement it. He has already implemented the ""Cache"" interface, which he will use to store the cached results. The ""Cache"" interface is defined as follows: interface Cache { /** * Returns the value associated with the given key, or null if the key is not * present in the cache. */ CompletableFuture get(K key); /** * Associates the given value with the given key in the cache. If the cache * already contains a value for the given key, it is replaced. */ CompletableFuture put(K key, V value); } What do you think of this interface?","The item is a user story, and is therefore suitable to be submitted.",0.1,M1_preference_data_271 "Assume you are working on a trendy app that allows everyone to create and share quizzes! Your first task is to add a new feature. Here is a transcript from part of a conversation with a user: > Hey! So you're the developer of this quiz app? > The one where I can write questions and answers for my friends? > I think it's cool, but I can't use it as much as I want to. 
> I'm a firefighter, I don't have time for this app during the day, > and when I come home, I have plenty to do, cook, clean, ... > When I do these tasks I'm busy, not like when I'm watching TV. > I don't always have my phone in hand! Sometimes I even forget where I put it. > Maybe if one could use the app with voice it would work? > With that, I could create quizzes while cooking! > Otherwise, I can't use the app much. Write a user story, in a single sentence using the below format, that summarizes this conversation: > As a ... I want to ... So that ... Your story must contain all necessary information and only that information.","... a list of strings, each string corresponding to the 8-bit representation of an element if and only if that element is between 0 and 255 included. The most significant bit is farthest to the left in the characters sequence. Hint: The most significant bit represents the largest value in a multiple-bit binary number.",0.1,M1_preference_data_272 "You are working on an app which is a search engine for cat photos. The app works by making requests to a server which stores the photos. Users search for cat photos and see a batch of results at a time; they can tap on a photo to see it full screen. You are getting two main complaints from users about the app’s performance: 1. When going from a page of results to the next one, the photos take too long to load 2. When going back to the search results after looking at a picture, the photos take too long to be re-downloaded For each of these complaints, write exactly one sentence giving a possible solution and explaining why it helps:","1. On the victim side, the prefetcher might also evict data not corresponding to actual victim accesses, adding noise to the attacker measurement. If the victim does not make accesses in a regular manner, the prefetcher might not kick in, though. 2. 
On the attacker side, when probing, the prefetcher will probably kick in, transforming most of the probe accesses into hits. The solution is usually to confuse the prefetcher by performing the probe accesses in a random sequence. ",0.1,M1_preference_data_273 "Consider the $k$-means algorithm. We discussed in the course that this algorithm is efficient. But we also discussed that it might not converge to the optimal solution. Let us explore this in a very simple setting. Assume that your data is one-dimensional. I.e., the points of your training set $S_{\text {training }}$ are elements of $\mathbb{R}$. Further, assume that $k=2$, i.e., we are looking for two clusters. Give an example of a data set in one dimension that has at least two distinct fixed points. I.e., a data set so that depending on the initial choice of cluster assignments the algorithm will converge to different solutions. The simpler the example the better (and the more points).","At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + 3x_2 - x_3 - s_1 & \text{ (1)}\\ s_2 &= 7 - 3x_2 + x_3 + s_1 & \text{ (2)}\\ s_3 &= 6 - 3x_2 - 2x_3 & \text{ (3)}\\ \cline{1-2} z &= 4 + 6x_2 - 4s_1\\ x_1& :=1\text{ }x_2:=0\text{ }x_3:=0\text{ }s_1:=0\text{ }s_2:=7\text{ }s_3:= 6 \end{align*} Only $x_2$ has a positive coefficient in $z$, so we will pivot $x_2$. We have $\nearrow x_2 \longrightarrow \ x_2 \leq \; \infty\ (1),\ x_2 \leq 7/3\ (2),\ x_2 \leq 6/3\ (3) \longrightarrow x_2 := 2,\: s_3 := 0$ \begin{align*} \hspace{1cm} x_1 &= 7 - 3x_3 - s_1 - s_3 \\ s_2 &= 1 + 3x_3 + s_1 +s_3 \\ x_2 &= 2 - 2x_3/3 - s_3/3 \\ \cline{1-2} z &= 16 - 4x_3 - 2s_3 - 4s_1\\ x_1& :=7\text{ }x_2:=2\text{ }x_3:=0\text{ }s_1:=0\text{ }s_2:=1\text{ }s_3:=0 \end{align*}",0.1,M1_preference_data_274 "The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. 
Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: Give some examples of NLP tasks for which lexical cohesion might be useful. Explain why.","Yes, the above solution does not have any deadlocks as the forks are acquired in a specific order.",0.1,M1_preference_data_275 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Implement aggregate using the task and/or parallel constructs seen in the first week and the Splitter[A] interface seen in this week's lecture. The Splitter interface is defined as: trait Splitter[A] extends Iterator[A]: def split: Seq[Splitter[A]] def remaining: Int You can assume that the data structure you are defining aggregate for already implements a splitter method which returns an object of type Splitter[A]. Your implementation of aggregate should work in parallel when the number of remaining elements is above the constant THRESHOLD and sequentially below it. Hint: Iterator, and thus Splitter, implements the foldLeft method.",Including a major bugfix in a minor release instead of a bugfix release will cause an incoherent changelog and an inconvenience for users who wish to only apply the patch without any other changes. The bugfix could be as well an urgent security fix and should not wait to the next minor release date.,0.1,M1_preference_data_276 "Your team is discussing the following code: /** Uploads images to the cloud. */ public final class ImageUploader { public void upload(Image image) { /* … */ } private boolean canUpload(Image image) { /* … */ } } One of your colleagues thinks that ""canUpload"" should be made public. 
Explain in 1 sentence whether this breaks backward compatibility and why or why not (without worrying about whether this is a good or a bad thing):",ccccb,0.1,M1_preference_data_277 "Calculate the mean of individuals who remain alive in the data. The data is stored in a pandas.DataFrame and the respective column is ""alive"".","The positional constraints (word order) are taken into account by G; for example, ""the frames break"" is accepted by G, as it should, and ""frames the break"" is not, as it should; The selectional constraints (agreements) are not taken into account by G; for example, ""the frames break"" is accepted by G, as it should, but ""the frame break"" is also accepted by G, while it should not.",0.1,M1_preference_data_278 "Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a supertype nor a supertype of T2. 
What is the correct subtyping relationship between Iterable[Pair[A, Y]] => Y and Map[A, Y] => X?","def connectivity_ranking(G, nodes_community, communities, communities_count): ''' input: G:nx.Graph nodes_community:{node_id:community_id} communities:[community_ids] community_count:int output: communities_ranking:{community_id:ranking} ''' communities_ranking = default_ranking.copy() meta_G = nx.Graph() w_matrix = {c2:{c1:0 for c1 in communities} for c2 in communities} for (n1, n2, weight) in G.edges(data='Weight'): w_matrix[nodes_community[n1]][nodes_community[n2]] += weight for c1 in communities: for c2 in communities: if (c1 < c2): weight = w_matrix[c1][c2] + w_matrix[c2][c1] meta_G.add_edge(c1, c2, weight=weight) communities_ranking = nx.pagerank(meta_G) return communities_ranking",0.1,M1_preference_data_279 "Given a document collection with a vocabulary consisting of three words, $V = {a,b,c}$, and two documents $d_1$ = aabc and $d_2 = abc$. The query is $q = ab$. Is it possible to enforce a ranking $d_2 > d_1$ with vector space retrieval and $d_1 > d_2$ with probabilistic retrieval ($\lambda=0.5$), by adding the same documents to the collection? If yes, give examples of such documents to be added, if no, provide an argument why this cannot be the case.","[['break+V => breakable\xa0', 'derivational'], ['freeze+V =>\xa0frozen\xa0', 'inflectional'], ['translate+V => translation', 'derivational'], ['cat+N => cats', 'inflectional'], ['modify+V => modifies ', 'inflectional']]",0.1,M1_preference_data_280 "You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. 
You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Show that any deterministic algorithm has a competitive ratio of at most $1/W$. {\em (In this problem you are asked to prove that any deterministic algorithm has a competitive ratio of at most $1/W$ for the above problem. Recall that you are allowed to refer to material covered in the lecture notes.)}","We use the idea of the AMS algorithm. We first describe how Alice calculates the message $m$. Let $\Alg$ be the following procedure: \begin{itemize} \item Select a random $4$-wise independent hash function $h: [n] \rightarrow \{\pm 1\}$. $h$ takes $O(\log n)$ bits to store. \item Calculate $A = \sum_{i=1}^n h(i) x_i$. \end{itemize} Let $t=6/\epsilon^2$. Alice runs $\Alg$ $t$ times. Let $h_i$ and $A_i$ be the hash function and the quantity calculated by the $i$:th invocation of $\Alg$. Then Alice transmits the information $h_1, A_1, h_2, A_2, \ldots, h_t, A_t$ to Bob. Note that each $h_i$ takes $O(\log n)$ bits to store and each $A_i$ is an integer between $-n^2$ and $n^2$ and so it also takes $O(\log n)$ bits to store. Therefore the message Alice transmits to Bob is $O(\log(n)/\epsilon^2)$ bits. Now Bob calculates the estimate $Z$ as follows: \begin{itemize} \item For $\ell = 1, 2, \ldots, t$, let $Z_\ell = A_\ell + \sum_{i=1}^n h_\ell(i) y_i$. 
\item Output $Z = \frac{\sum_{\ell=1}^t Z_\ell^2}{t}.$ \end{itemize} To prove that $Z$ satisfies~\eqref{eq:guaranteeStream}, we first analyze a single $Z_\ell$. First, note that $Z_\ell = A_\ell + \sum_{i=1}^n h_\ell(i)y_i = \sum_{i=1}^n h_\ell(i) (x_i + y_i) = \sum_{i=1}^n h_\ell(i) f_i$, where we let $f_i = x_i + y_i$. And so $Z_\ell = \sum_{i=1}^n h_\ell(i) f_i$ where $h_\ell$ is a random $4$-wise independent hash function. This is exactly the setting of the analysis of the AMS streaming algorithm seen in class. And so over the random selection of the hash function, we know that \begin{align*} \E[Z_\ell^2] = \sum_{i=1}^n f_i^2 = Q \end{align*} and \begin{align*} \Var[Z_\ell^2] \leq 2\left( \sum_{i=1}^n f_i^2 \right)^2 = 2Q^2\,. \end{align*} Therefore, we have that \begin{align*} \E[Z] = Q \qquad \mbox{and} \qquad \Var[Z] \leq \frac{2 Q^2}{t}\,. \end{align*} So by Chebychev's inequality \begin{align*} \Pr[|Z- Q| \geq \epsilon Q] \leq \frac{2Q^2/t}{\epsilon^2 Q^2} \leq 1/3\,, \end{align*} by the selection of $t = 6/\epsilon^2$.",0.1,M1_preference_data_281 What is the communication complexity of the FloodSet algorithm in number of messages?,df.alive.mean(),0.1,M1_preference_data_282 "You are asked to implement the following List functions using only the specified List API methods. You are also allowed to use the reverse method in any subquestion, should you see it fit. If you need another method of List, you need to reimplement it as part of your answer. Please refer to the appendix on the last page as a reminder for the behavior of the given List API methods. Implement scanLeft using only foldLeft, Nil and :: (cons). def scanLeft[A, B](xs: List[A])(z: B)(op: (B, A) => B): List[B] = ???",Only Associativity,0.1,M1_preference_data_283 "The MIPS R10000 fetches four instructions at once and, therefore, there are four such circuits working in parallel inside the processor. What do ``WrPtr'' and ``RdPtr'' represent, respectively? 
Very briefly explain what they are used for and when their value changes.","We use the same rounding scheme as in the first subproblem. We then have that the expected cost of our solution is \begin{align*} 2\int_0^{1/2} f(\{i: x_i \geq t\}) dt\,. \end{align*} On the other hand, \begin{align*} \hat f(x) = \int_0^1 f(\{i: x_i \geq t\}) dt = \int_0^{1/2} f(\{i: x_i \geq t\}) dt + \int_{1/2}^1 f(\{i: x_i \geq t\}) dt \end{align*} which by non-negativity is at least \begin{align*} \int_0^{1/2} f(\{i: x_i \geq t\}) dt\,. \end{align*} Our output is thus at most twice the lower bound in expectation. The algorithm can easily be derandomized by trying all relevant $t$'s (at most $n+1$ many) and selecting the best one.",0.1,M1_preference_data_284 "You are given three classes (Student, Exam and Course which are defined below) and the method generatePassedExams, which from a given list of students and a list of courses, generates a list of students and all their successfully passed courses together with the corresponding grade. A course is considered as successfully passed if the grade for that course is greater than 2. case class Student(name: String, exams: List[Exam]) case class Exam(courseId: String, grade: Double) case class Course(id: String, name: String) def generatePassedExams( students: List[Student], courses: List[Course]): List[(String, String, Double)] = { for { s <- students e <- s.exams if e.grade > 2 c <- courses if e.courseId == c.id } yield (s.name, c.name, e.grade) } Your task is to rewrite the method generatePassedExams to use map, flatMap and filter instead of the for-comprehension. The resulting method should of course have the same result as the for-comprehension above.","Let us assume that F processes in the system can be Byzantine. Moreover, let us assume that N = 3F, where N is the total number of processes. We separate all processes into three disjoint groups A, B and C such that |A| = |B| = |C| = F and S ∈ B. 
Execution 1: All processes from group C are faulty and they are silent forever. Processes from groups A and B are correct and S broadcasts “0”. Due to the validity property, all processes from A and B deliver “0”. Execution 2: All processes from group A are faulty and they are silent forever. Processes from groups B and C are correct and S broadcasts “1”. Due to the validity property, all processes from B and C deliver “1”. Execution 3: All processes from group B are faulty. Processes from A and C are correct. Crucial: Processes from group B behave towards processes from A as in execution 1 and processes from group B behave towards processes from C as in execution 2. Moreover, processes from groups A and C do not communicate before all of these processes deliver a message. Processes from group A do not distinguish execution 1 from execution 3. Therefore, they deliver “0”. Processes from group C do not distinguish execution 2 from execution 3. Therefore, they deliver “1”. We violate consistency. Therefore, N > 3F! For the sake of simplicity, let N = 3F + 1. Validity: If S is correct, every correct process sends the ECHO for the message. Therefore, every correct process eventually receives 2F + 1 ECHOs and delivers the message. Consistency: Suppose that a correct process delivers M and another correct process delivers M’. That means that F + 1 processes have sent ECHOs for both M and M’. However, this is impossible due to the fact that only F processes are Byzantine.",0.1,M1_preference_data_285 Briefly describe the specific objectives of the morphological module in the general perspective of automated Natural Language Processing.,"Consider some $\ell = 1, \dots, k$ and define $\mathcal{I}_\ell = \{I \in \mathcal{I}: |I| \leq \ell\}$. There are two key observations: \begin{itemize} \item $\mathcal{M}_\ell = (E, \mathcal{I}_\ell)$ is a matroid called the truncated matroid of $\mathcal{M}$. 
\item \textsc{Greedy} has an identical execution on $\mathcal{M}_\ell$ as for $\mathcal{M}$ until $\ell$ elements are selected. \end{itemize} From these properties (and the fact that \textsc{Greedy} works for matroids and thus for $\mathcal{M}_\ell$) we have that \textsc{Greedy} returns the max-weight base $S= \{s_1, \dots, s_\ell\}$ of $\mathcal{M}_\ell$. In other words, \begin{align*} w(S_\ell) =\max_{T\in \mathcal{I_\ell}: |T| = \ell} w(T) = \max_{T\in \mathcal{I}: |T| = \ell} w(T) \end{align*} as required.",0.1,M1_preference_data_286 "An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. Given the following defintions: 1 case class Student(gpa: Double) 2 3 def count(c: List[Student], student: Student): Double = 4 c.filter(s => s == student).size 5 6 val students = List( 7 Student(1.0), Student(2.0), Student(3.0), 8 Student(4.0), Student(5.0), Student(6.0) 9 ) And the expression e: 1 count(students, Student(6.0)) If we change our definitions to: 1 class Student2(var gpa: Double, var name: String = ""*"") 2 3 def innerCount(course: List[Student2], student: Student2): Double = 4 course.filter(s => s == student).size 5 6 def count2(course: List[Student2], student: Student2): Double = 7 innerCount(course.map(s => new Student2(student.gpa, student.name)), student) 8 9 val students2 = List( 10 Student2(1.0, ""Ana""), Student2(2.0, ""Ben""), Student2(3.0, ""Cal""), 11 Student2(4.0, ""Dre""), Student2(5.0, ""Egg""), Student2(6.0, ""Fra"") 12 ) And our expression to: e2: 1 count2(students2, Student2(6.0, ""*"")) Is the expression e2 referentially transparent?"," 1. The inner loop can achieve low initiation interval with static scheduling because the result of the condition is statically determinable. 
In particular, unrolling the inner loop by a factor of five would completely remove the condition and, approximately, have five iterations executing with an iteration interval determined by the loop-carried dependency through \texttt{acc}. Overall, the average initiation interval of the inner loop would be approximately $(4 + 4 * 1) / 5 = 1.6$. 2. Dynamically scheduled HLS would have no problems and would achieve essentially the same average initiation interval. ",0.1,M1_preference_data_287 "You have been hired to evaluate an email monitoring system aimed at detecting potential security issues. The targeted goal of the application is to decide whether a given email should be further reviewed or not. You have been given the results of three different systems that have been evaluated on the same panel of 157 different emails. Here are the classification errors and their standard deviations: system 1 (error=0.079, stddev=0.026) system 2 (error=0.081, stddev=0.005) system 3 (error=0.118, stddev=0.004) Which system would you recommend? Why?","Yes, 800 columns would not be enough training data to learn a suitable generator.",0.1,M1_preference_data_288 " & \multicolumn{3}{c}{\textbf{ProofWriter}} & \multicolumn{3}{c}{\textbf{CLUTRR-SG}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} Consider the following code snippet: 1 type Logger[T] = T => Unit 2 def log[T](s: T)(using log: Logger[T]): Unit = log(s) 3 var count = 0 4 given countingLogger: Logger[String] = s => 5 count = count + 1 6 println(s) 7 given (using log: Logger[String]): Logger[Boolean] = 8 b => log(if b then ""TRUE"" else ""FALSE"") 9 def h() = 10 given Logger[String] = s => () 11 log(""Inside h"") 12 log(false) 13 h() 14 log(true) 15 count What is the value of the last line?","MapTrNil, MapTrNil",0.1,M1_preference_data_289 "Consider the following toy corpus: the cat cut the hat How many occurrences do you have in total? (i.e.
including repetitions)","A dynamically scheduled circuit supporting speculation would be able to achieve a perfect pipeline (assuming that the if branch is never taken; the II would increase only if/when the if condition evaluates to true).",0.1,M1_preference_data_290 Implement a Rocchio classifier,Yes,0.1,M1_preference_data_291 "Implement a function that takes a list ls as argument, and returns a list of all the suffixes of ls. That is, given a list List(a,b,c,...) it returns List(List(a,b,c,...), List(b,c,...), List(c,...), List(...), ..., List()). Implement the function recursively using only Nil (empty), :: (cons) and pattern matching. def tails(ls: List[Int]): List[List[Int]] = ???","The purpose of morphology is the study of the internal structure and the variability of the words in a language, like verbal conjugations, plurals, nominalization, ...",0.1,M1_preference_data_292 "Split the given data into a training set (70%) and a testing set (30%). We refer to these as ""random split"" in the subsequent tasks. 
The data is in a pandas.DataFrame format.","extensionsToCheck = [""Facebook"", ""Google"", ""Microsoft"", ""Deepmind""] df[""has_top_company""] = df[""institution""].apply(lambda x: any(ext in x for ext in extensionsToCheck)).astype(""int"") df[""csranking_list""] = df[""csranking""].apply(lambda a: list(map(int, str(a).replace(""-1"",""999"").split("";"")))) df[""has_top_institution""] = df.csranking_list.apply(lambda x: (np.array(x) <= 10).sum() > 0).astype(""int"")",0.1,M1_preference_data_293 "Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 4x_1 - x_2 - 2x_3 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{-0.85cm} -3x_2 + 2x_3 + s_3 = 4 \\ \hspace{0.8cm} &\hspace{-1.4cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_3 - s_1 \\ s_2 &= 3 -x_3 + s_1 \\ s_3 &= 4 +3x_2 - 2x_3 \\ \cline{1-2} z &= 4 - x_2 + 2x_3 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.","Use average precision : for each relevant document compute the average precision for all relevant docs below rank R_k",0.1,M1_preference_data_294 What measure should you compute to estimate the quality of the annotations produced by the two annotators?,"As a student, I want to see a list of all courses, so that I can choose which ones to take.",0.1,M1_preference_data_295 "Assume you are working on a text editor written in Java. Your colleague is interested in optimizing a wrapper around ""String.substring()"" used to let users copy and paste parts of the text and decides to write some benchmarks to measure the current performance of the feature. 
How would you suggest that he proceeds?","We name “central” the city that we can reach from every other city either directly or through exactly one intermediate city. Base case (n=2): It obviously holds. Either one of the cities is “central”. Inductive step: Suppose this property holds for n ≥ 2 cities. We will prove that it will still hold for n+1 cities. Let n+1 cities, ci, i=0, ..., n, where for every pair of different cities ci, cj, there exists a direct route (single direction) either from ci to cj or from cj to ci. We consider only the first n cities, i.e. cities ci, i=0, ..., n-1. According to the inductive step, there exists one central city among these n cities. Let cj be that city. We now exclude city cj and consider the rest of the cities. Again, we have n cities, therefore there should exist one city among them that is central. Let ck be that city. All cities apart from cj and ck can reach cj and ck either directly or through one intermediate city. Furthermore, there exists a route between cj and ck: ● If the route is directed from cj to ck, then ck is the central city for the n+1 cities. ● If the route is directed from ck to cj, then cj is the central city for the n+1 cities.",0.1,M1_preference_data_296 how can the results from a classifier impact the metric (precision) used? What could be a better suited metric to use with imbalanced data?,"def cross_val(X,y,params,N=20, n_jobs=4): # generate list of params params_list = [{name:y for name,y in zip(params.keys(),x)} for x in list(itertools.product(*params.values()))] # create kfold splitter kfold = KFold(N) # apply cross_val score function for each param set we have scores = Parallel(n_jobs=n_jobs, verbose=10)(delayed(cross_val_single_param)(X,y,param,kfold) for param in params_list) return scores",0.1,M1_preference_data_297 "If process i fails, then eventually all processes j≠i fail Is the following true? 
If a process j≠i fails, then process i has failed"," 1. In this case, the condition depends on runtime information. The \texttt{if} would be if-converted but the worst-case branch execution latency (the one with the multiplication) would determine the initiation interval due to the loop-carried dependency through \texttt{acc}. Overall, the initiation interval would be at best 4. 2. Dynamically scheduled HLS would have no problems and would generally achieve a better average initiation interval depending on the data being read from memory and the control path followed. It would probably be between 1 and 4.",0.1,M1_preference_data_298 "We learnt in the lecture that terms are typically stored in an inverted list. Now, in the inverted list, instead of only storing document identifiers of the documents in which the term appears, assume we also store an *offset* of the appearance of a term in a document. An $offset$ of a term $l_k$ given a document is defined as the number of words between the start of the document and $l_k$. Thus our inverted list is now: $l_k= \langle f_k: \{d_{i_1} \rightarrow [o_1,\ldots,o_{n_{i_1}}]\}, \{d_{i_2} \rightarrow [o_1,\ldots,o_{n_{i_2}}]\}, \ldots, \{d_{i_k} \rightarrow [o_1,\ldots,o_{n_{i_k}}]\} \rangle$ This means that in document $d_{i_1}$ term $l_k$ appears $n_{i_1}$ times and at offsets $[o_1,\ldots,o_{n_{i_1}}]$, where $[o_1,\ldots,o_{n_{i_1}}]$ are sorted in ascending order; these types of indices are also known as term-offset indices. An example of a term-offset index is as follows: **Obama** = $⟨4 : {1 → [3]},{2 → [6]},{3 → [2,17]},{4 → [1]}⟩$ **Governor** = $⟨2 : {4 → [3]}, {7 → [14]}⟩$ **Election** = $⟨4 : {1 → [1]},{2 → [1,21]},{3 → [3]},{5 → [16,22,51]}⟩$ Which is to say that the term **Governor** appears in 2 documents: in document 4 at offset 3, and in document 7 at offset 14. Now let us consider the *SLOP/x* operator in text retrieval. 
This operator has the syntax: *QueryTerm1 SLOP/x QueryTerm2* finds occurrences of *QueryTerm1* within $x$ words of *QueryTerm2* (but not necessarily in that order), where $x$ is a positive integer argument ($x \geq 1$). Thus $x = 1$ demands that *QueryTerm1* be adjacent to *QueryTerm2*. List each set of values for which the query **Obama** *SLOP/x* **Election** has a different set of documents as answers (starting from $x = 1$). ","If the deterministic sorting is done prior to proposing the set for consensus, instead of a posteriori upon deciding, the processes would not agree on a set but on a sequence of messages. But if they TO-deliver the messages in the decided order, the algorithm still ensures the total order property.",0.1,M1_preference_data_299
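The SLOP/x semantics can be evaluated mechanically against the term-offset index given in the question. The sketch below (Python; the postings dictionaries transcribe the **Obama** and **Election** entries) computes the matching document set for increasing x, so the values of x that yield distinct answer sets can be read off directly.

```python
# term-offset postings from the question: {doc_id: [offsets]}
obama    = {1: [3], 2: [6], 3: [2, 17], 4: [1]}
election = {1: [1], 2: [1, 21], 3: [3], 5: [16, 22, 51]}

def slop_match(p1, p2, x):
    # a document matches if some pair of offsets is within x words, in either order
    return {d for d in p1.keys() & p2.keys()
            if any(abs(o1 - o2) <= x for o1 in p1[d] for o2 in p2[d])}

answers = {x: slop_match(obama, election, x) for x in range(1, 7)}
# the answer set changes exactly at x = 1, x = 2 and x = 5
assert answers[1] == {3}
assert answers[2] == answers[3] == answers[4] == {1, 3}
assert answers[5] == answers[6] == {1, 2, 3}
```

Document 4 has no Election posting and document 5 has no Obama posting, which is why neither can ever match regardless of x.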
extension (x: Float8) def +(y: Float8): Float8 = if x.exp <= y.exp then val shift = y.exp - x.exp val mant = (x.mant >> shift) + y.mant if mant < 16 then Float8(mant, y.exp) else val exp1 = y.exp + 1 if exp1 < 16 then Float8(mant / 2, y.exp + 1) else Float8(15, 15) else y + x Is this operation associative? Prove or give a counterexample.","We consider the following greedy algorithm. At the arrival of an item of size $s$: \begin{itemize} \item If it fits, pack the item in an already used bin. \item Otherwise, pack the item in a new bin. \end{itemize} To analyze the algorithm and to prove~\eqref{eq:binguarantee} the following observation is crucial: \begin{claim} Whenever a new bin is opened, every other bin is packed with items of total size at least $1-\epsilon$. \end{claim} \begin{proof} We only open up a new bin if the arriving item of size $s \leq \epsilon$ does not fit in an already used bin. Since it does not fit in an already used bin, it implies that every such bin is packed with items of total size at least $1-\epsilon$. \end{proof} Notice that the above claim implies that at any point of time, all but $1$ bin have load at least $1-\epsilon$. If our algorithm has opened $m+1$ bins, the total size of all items packed so far is thus at least $(1-\epsilon)m$. However, $\opt$ clearly needs $(1-\epsilon)m$ bins to pack these items (since each bin can take items of size at most $1$). Hence, $m\leq \frac{\opt}{1-\epsilon}$ and \begin{align*} m + 1\leq \frac{\opt}{1-\epsilon} + 1\,, \end{align*} as required.",0.1,M1_preference_data_300 "Consider a $d$-regular undirected graph $G = (V,E)$ and let $M$ be its normalized adjacency matrix. 
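One way to probe the associativity question is to transliterate the Scala `+` into an executable model and search for counterexamples. The sketch below (Python; `plus` and `value` mirror the Scala definition line by line) only asserts facts that follow directly from the statement, such as the values of `a` and `b`; the commented-out exhaustive loop over all 256³ triples is what would settle associativity one way or the other.

```python
def plus(x, y):
    # x, y are (mant, exp) pairs with 0 <= mant <= 15 and 0 <= exp <= 15
    (xm, xe), (ym, ye) = x, y
    if xe <= ye:
        mant = (xm >> (ye - xe)) + ym   # shift the smaller-exponent mantissa, then add
        if mant < 16:
            return (mant, ye)
        elif ye + 1 < 16:
            return (mant // 2, ye + 1)  # renormalize: halve mantissa, bump exponent
        else:
            return (15, 15)             # saturate at the largest representable value
    return plus(y, x)

def value(f):
    return f[0] << f[1]

a, b = (15, 8), (5, 10)                 # the values from the statement
assert value(a) == 3840 and value(b) == 5120
assert plus(a, b) == (8, 10)            # (15 >> 2) + 5 = 8, exponent 10
# Exhaustive associativity search over all triples -- uncomment to run:
# trip = [(m, e) for m in range(16) for e in range(16)]
# bad = [(x, y, z) for x in trip for y in trip for z in trip
#        if plus(plus(x, y), z) != plus(x, plus(y, z))]
```

Note that `a + b` already loses precision (8192 instead of the exact 8960) because the right shift discards low-order bits, which is the behaviour any associativity argument has to account for.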
As seen in class, $M$ has $n= |V|$ eigenvalues $1=\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\geq -1$ and the corresponding eigenvectors ${v}_1, {v}_2, \ldots, {v}_n \in \mathbb{R}^n$ can be selected to be orthogonal vectors where \begin{align*} {v}_1 = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \mbox{ is the all one vector.} \end{align*} Assuming that $\lambda_2 = 1$, your task is to design a procedure \textsc{FindDisconnectedSet}$(v_2)$ that takes as input the second eigenvector and outputs a non-empty subset $S \subsetneq V$ of the vertices such that there is no edge crossing the cut defined by $S$. In other words, the output $S$ must satisfy $S \neq \emptyset, S \neq V$ and any edge $e \in E$ has either both endpoints in $S$ or both endpoints in $V \setminus S$. We remark that your procedure \textsc{FindDisconnectedSet} does \textbf{not} know the edgeset $E$ of the graph. Thus it needs to define the set $S$ only based on the values $v_2(i)$ the second eigenvector assigns to every vertex $i\in V$. \\ {\em (In this problem you are asked to (i) design the algorithm \textsc{FindDisconnectedSet} and (ii) argue that it outputs a non-empty $S \subsetneq V$ that cuts $0$ edges assuming $\lambda_2 = 1$. Recall that you are allowed to refer to material covered in the lecture notes.)}","We run the simple greedy algorithm: \begin{enumerate} \item Initially, let $\mathcal{T} = \emptyset$. \item For each streamed $S$: if $S$ is disjoint from all sets in $\mathcal{T}$, add $S$ to $\mathcal{T}$. \item At the end, we simply return $\mathcal{T}$ as our solution. \end{enumerate} We now analyze the greedy algorithm in terms of space and approximation guarantee. \begin{description} \item[Space:] Since at all times $\mathcal{T}$ is a family of disjoint sets, we have $\sum_{S\in \mathcal{T}} |S| \leq n$. 
If we store each selected set $S$ as a list of its elements this will take space $|S|\log n$ for each set $S\in \mathcal{T}$ (the $\log n$ is the space required to save the identifier of each element). Thus, as $\sum_{S\in \mathcal{T}} |S| \leq n$, we require $O(n \log n)$ space in total. \item[Approximation ratio:] Let $\mathcal{O}$ be an optimal set packing. The greedy algorithm returns a maximal set packing so any $O \in \mathcal{O}$ intersects at least one set in $\mathcal{T}$ (maybe itself if it was selected by greedy). Moreover, any set $S$ can intersect at most $k$ sets in $\mathcal{O}$ since $|S| \leq k$. Therefore, \begin{align*} |\mathcal{O}| = \sum_{O\in \mathcal{O}} 1 \leq \sum_{O \in \mathcal{O}} \sum_{S\in \mathcal{T}: S\cap O \neq \emptyset} 1 = \sum_{S \in \mathcal{T}} \sum_{O\in \mathcal{O}: S\cap O \neq \emptyset} 1 \leq \sum_{S\in \mathcal{T}} k = k |\mathcal{T}| \,. \end{align*} (This is very similar to Problem 4 in Exercise Set 10.) \end{description}",0.1,M1_preference_data_301 Describe the main principles of the standard vector space model for semantics.,['10'],0.1,M1_preference_data_302 "Let $\xv_1, . . . , \xv_N$ be a dataset of $N$ vectors in $\R^D$. Write down the covariance matrix of the dataset $\Xm = (\xv_1, . . . , \xv_N) \in \R^{D imes N}$, \emph{and} state its dimensions. Data is centered."," Any dependence at distance two: egin{verbatim} add r1, r2, r3 mul r4, r5, r6 add r7, r8, r1 \end{verbatim} or egin{verbatim} ld r1, 0(r3) mul r4, r5, r6 add r7, r8, r1 \end{verbatim}",0.1,M1_preference_data_303 "Consider the following case class definitions: case class Node(id: Int) case class Edge(from: Node, to: Node) Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. 
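The counting argument behind the k-approximation can be illustrated with a small simulation. In the sketch below (Python; the instance is made up), the streaming greedy is run on sets of size at most k, a brute-force optimum is computed, and the bound |O| ≤ k·|T| from the analysis is checked.

```python
from itertools import combinations

def greedy_packing(stream):
    # keep a set iff it is disjoint from everything kept so far
    chosen = []
    for s in stream:
        if all(s.isdisjoint(t) for t in chosen):
            chosen.append(s)
    return chosen

def optimum_packing(sets):
    # brute-force maximum set packing (fine for tiny instances)
    for r in range(len(sets), 0, -1):
        for combo in combinations(sets, r):
            if all(a.isdisjoint(b) for a, b in combinations(combo, 2)):
                return r
    return 0

k = 3  # maximum set size in this made-up instance
stream = [{1, 2, 3}, {1, 4}, {2, 5, 6}, {4, 5}, {3, 6}, {7}]
T = greedy_packing(stream)
assert optimum_packing(stream) <= k * len(T)
```

Greedy here keeps {1,2,3}, {4,5} and {7}; each optimal set intersects at least one kept set, and each kept set of size at most k can block at most k optimal sets, exactly as in the analysis.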
Given a set of nodes within the graph, use the function you defined above to compute the subset of these nodes that belong to a cycle of size 3 within the graph. def cycles3(nodes: Set[Node], edges: List[Edge]): Set[Node]","No, the coworker already has tasks for this sprint and it is the Product Owner that should decide of the bug's priority in the backlog for future sprints.",0.1,M1_preference_data_304 "In order to summarize the degree distribution in a single number, would you recommend using the average degree? Why, or why not? If not, what alternatives can you think of? Please elaborate!","inflectional morphology: no change in the grammatical category (e.g. give, given, gave, gives ) derivational morphology: change in category (e.g. process, processing, processable, processor, processabilty)",0.1,M1_preference_data_305 "You are responsible for a project aiming at providing on-line recommendations to the customers of a on-line book selling company. The general idea behind this recommendation system is to cluster books according to both customers and content similarities, so as to propose books similar to the books already bought by a given customer. The core of the recommendation system is a clustering algorithm aiming at regrouping books likely to be appreciate by the same person. This clustering should not only be achieved based on the purchase history of customers, but should also be refined by the content of the books themselves. It's that latter aspect we want to address in this exam question. The chosen clustering algorithm is the dendrogram. What other algorithms could you propose for the same task? Briefly review advantages and disadvantages of each of them (including dendrograms). Which one would you recommend for the targeted task?","Consider a system of three processes p1, p2 and p3. Assume that p1 URB-broadcasts the message m. Suppose that completeness is violated. 
p1 might never URB-deliver m if either p2 or p3 crashes and p1 never detects their crash. Hence, p1 would wait indefinitely for p2 and p3 to relay m (validity property violation)",0.1,M1_preference_data_306 "Freshly graduated from EPFL, you have been hired as contractors for a successful and rapidly growing bank. The bank has been experiencing problems with their money management system recently, which is written in Scala, and so they hired the best and brightest young engineer they could find: you! The system had been working perfectly fine so far, they tell you. In the past days, due to an increased number of customers, they had to switch from a single-threaded sequential execution environment to a multi-threaded concurrent one, in which the threads may perform transactions concurrently. That's when problems started, your manager says... Here is the code responsible to withdraw money from the account from and transfer it to the account to, within the same bank: def transfer(from: Account, to: Account, amount: BigInt): Unit = { require(amount >= 0) val balanceFrom = from.balance if (balanceFrom >= amount) { from.balance = balanceFrom - amount val balanceTo = to.balance to.balance = balanceTo + amount } } For the bank, it is very important that the following two properties hold after any sequence of completed transfer transactions: The balance of an account never goes below 0. The total sum of money held by the bank is constant. Does anything change in the setting where multiple threads can execute the transfer method concurrently? For each of the two desired properties of the system, check if it holds in this concurrent environment. If not, come up with an example execution which exhibits a violation of the property.","The lowest level modules are the generic database, and sending e-mails. The books module and the notes module use the database, and the sharing function uses e-mail. 
Then we can, for example, have a ViewModel using these three intermediate modules, and a user interface on top of it.",0.1,M1_preference_data_307
What are their vector representations?","\begin{itemize} \item Convex problem so we set the gradient to 0 \item $\frac{1}{N} \sum_{n=1}^{N}\left[\mathbf{x}_{n}^{\top} \mathbf{w}-y_{n}\right] \mathbf{x}_{n}+2 \lambda \mathbf{w}=0$, therefore $\mathbf{w}=\left[\sum_{n=1}^{N}\left[\mathbf{x} \mathbf{x}_{n}^{\top}\right]+2 N \lambda I\right]^{-1} \sum_{n=1}^{N}\left[\mathbf{x} y_{n}\right]=\left[\mathbf{X}^{\top} \mathbf{X}+\right.$ $2 N \lambda I]^{-1} \mathbf{X}^{\top} \mathbf{y}$ \end{itemize}",0.1,M1_preference_data_308 "Consider the following toy corpus: the cat cut the hat What is the probability of the following sequences, if the parameters are estimated using MLE (maximum-likelihood estimation) on the above corpus (make use of a calculator or even a short program): - cutthechat - cut the chat Fully justify your answer.", In dynamically scheduled out-of-order processors. ,0.1,M1_preference_data_309 "Consider the following sentence: High-energy pulsed laser beams are used in soft-tissue surgery. Using a 2-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), what is the probability of the above sentence? Provide your answer as a formula, but clearly explaining each variable.","Running a blocking operation in an activity, such as a network operation, can lead to a bad user experience, because of lack of reactivity of the app in case of a slow network. He should use a service to fetch the required data, and he must create a new thread within the service to do that work. By using a separate thread, the application's main thread can remain dedicated to user interaction with your activities.",0.1,M1_preference_data_310 "To which expression is the following for-loop translated ? 
1 def mystery8(xs : List[List[Int]]) = 2 for 3 x <- xs 4 y <- x 5 yield 6 y","$$ \begin{gathered} \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top}=\frac{1}{n} \mathbf{X}^{\top} \mathbf{X} \in \mathbf{R}^{L \times L} \\ \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top}=\frac{1}{n} \boldsymbol{\Phi}^{\top} \mathbf{\Phi} \in \mathbf{R}^{H \times H} \end{gathered} $$",0.1,M1_preference_data_311 "Can we implement TRB with an eventually perfect failure detector ◇P, under the assumption that at least one process can crash?","Consider a system of three processes p1, p2 and p3. Assume that p1 URB-broadcasts the message m. Suppose that accuracy is violated. Assume that p1 falsely suspects p2 and p3 to have crashed. p1 eventually URB-delivers m. Assume that p1 crashes afterwards. It may happen that p2 and p3 never BEB-deliver m and have no knowledge about m (uniform agreement is violated).",0.1,M1_preference_data_312 "Consider the Maximum Disjoint Paths problem: given an undirected graph $G=(V,E)$ with designated source $s\in V$ and sink $t\in V\setminus \{s\}$ vertices, find the maximum number of edge-disjoint paths from $s$ to $t$. To formulate it as a linear program, we have a variable $x_p$ for each possible path $p$ that starts at the source $s$ and ends at the sink $t$. The intuitive meaning of $x_p$ is that it should take value $1$ if the path $p$ is used and $0$ otherwise\footnote{I know that the number of variables may be exponential, but let us not worry about that.}. Let $P$ be the set of all such paths from $s$ to $t$. The linear programming relaxation of this problem now becomes \begin{align*} \mbox{Maximize} &\qquad \sum_{p\in P} x_p \\ \mbox{subject to} & \quad \ \ \sum_{p\in P: e\in p} x_p \leq 1, \qquad \ \forall e\in E,\\ &\ \ \quad \qquad x_p \geq 0, \qquad \qquad \ \forall p \in P. \end{align*} What is the dual of this linear program? 
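For reference, a Scala for-comprehension with two generators desugars into a flatMap/map chain: `for x <- xs; y <- x yield y` becomes `xs.flatMap(x => x.map(y => y))`, i.e. it flattens the nested list. The Python sketch below mimics that desugaring (the helper names are illustrative, not Scala API):

```python
def flat_map(f, xs):
    # models Scala's xs.flatMap(f) on Python lists
    return [y for x in xs for y in f(x)]

def mystery8(xs):
    # desugaring of: for x <- xs; y <- x yield y
    # ==> xs.flatMap(x => x.map(y => y)), i.e. flatten one level
    return flat_map(lambda x: [y for y in x], xs)

assert mystery8([[1, 2], [], [3]]) == [1, 2, 3]
```

The inner `map(y => y)` is the identity, which is why the whole expression is equivalent to `xs.flatten`.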
What famous combinatorial problem do binary solutions to the dual solve?","SSS = B, MMM = E",0.1,M1_preference_data_313 "In class, we saw Karger's beautiful randomized algorithm for finding a min-cut in an undirected graph $G=(V,E)$ with $n = |V|$ vertices. Each iteration of Karger's algorithm can be implemented in time $O(n^2)$, and if repeated $\Theta(n^2 \log n)$ times, Karger's algorithm returns a min-cut with probability at least $1-1/n$. However, this leads to the often prohibitively large running time of $O(n^4 \log n)$. Karger and Stein made a crucial observation that allowed them to obtain a much faster algorithm for min-cut: the Karger-Stein algorithm runs in time $O(n^2 \log^3 n)$ and finds a min-cut with probability at least $1-1/n$. Explain in a couple of sentences the main idea that allowed Karger and Stein to modify Karger's algorithm into the much faster Karger-Stein algorithm. In other words, what are the main differences between the two algorithms?",Correct choice is (1),0.1,M1_preference_data_314 "Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq |E(S, \bar S)| \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = \{e\in E: |e\cap S| = |e\cap \bar S| = 1\}$.)} \\[1mm] \noindent Hint: formulate the problem as a large linear program (LP) and then show that the LP can be solved in polynomial time. \\[1mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it returns a correct solution and that it runs in polynomial time. Recall that you are allowed to refer to material covered in the course.) 
}","precision = tp/(tp+fp) recall = tp/(tp+fn) f_measure = 2*precision*recall/(precision+recall) print('F-measure: ', f_measure)",0.1,M1_preference_data_315 "Consider using a parser with the following (partial) grammar: S -> NP VP VP -> V NP -> Det N VP -> VP PP NP -> N VP -> VBP VBG PP NP -> NP PP PP -> P NP and (also partial) lexicon: 2012 N from P Switzerland N in P USA N increasing VBG are VBP the Det exports N to P exports V Using the CYK algorithm, parse the following sentence with the above lexicon/grammar: the exports from the USA to Switzerland are increasing in 2012 Provide both the complete, fully filled, data structure used by the algorithm, as well as the result of the parsing in the form of a/the parse tree(s).","def vectorize_vsr(document, vocabulary, idf): """""" It takes the input text and vectorizes it based on the tf-idf formula. :param document: list of str, with the tokenized document :param vocabulary: dict, with the vocabulary (computed in 1.1) and each term's frequency. :param idf: dict, with the terms as keys and values the idf for each term. :return: np.array, with the vectorized document """""" vector = np.zeros(len(vocabulary)) term_freq = Counter(document) max_freq = term_freq.most_common(1)[0][1] for i, term in enumerate(vocabulary): vector[i] = idf[term] * term_freq[term]/max_freq return vector",0.1,M1_preference_data_316 "Consider the following sentences: ```Aphrodite and Eros are Gods.``` ```Aphrodite is a parent of Eros.``` ```Aphrodite is beautiful.``` ```Aphrodite is happy.``` Specify which are the *classes*, the *instances* and the *properties* in the above statements. ","F score : $$ \frac{\left(b^{2}+1\right) \cdot P \cdot R}{b^{2} \cdot P+R} $$ When $b^{2}>1$ emphasizes $P$ otherwise emphasies $R$. 
Accuracy: ratio of correct results provided by the system (w.r.t. the total number of results from the system). Error $= 1 -$ Accuracy",0.1,M1_preference_data_317
$\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{U}} \cdot \tilde{\boldsymbol{S}} \cdot \tilde{\boldsymbol{V}}^{\top}$ be the SVD of $\tilde{\boldsymbol{X}}$ \begin{enumerate} \item Show that \end{enumerate} (a) $\tilde{V}=V$ (b) $\tilde{\boldsymbol{S}}$ is equal to $\boldsymbol{S}$ with an extra all-zero row attached. \begin{enumerate} \setcounter{enumi}{1} \item Based on the previous relationships and assuming that it is always best to run an SVD with ""normalized"" rows, what is better: If you $K N O W$ that a feature is highly correlated to another feature a priori. Should you rather first run the SVD and then figure out what features to keep or should you first take the highly correlated feature out and then run the SVD? Explain. \end{enumerate}","The post-condition can be written to only run during ""debug"" builds, such as automated tests. The advantage is not to penalize ""release"" builds from a performance point of view, the disadvantage is that ""release"" bugs will be more difficult to debug.",0.1,M1_preference_data_318 "An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. 
Given the following definitions: 1 case class Student(gpa: Double) 2 3 def count(c: List[Student], student: Student): Double = 4 c.filter(s => s == student).size 5 6 val students = List( 7 Student(1.0), Student(2.0), Student(3.0), 8 Student(4.0), Student(5.0), Student(6.0) 9 ) And the expression e: 1 count(students, Student(6.0)) If we change our definitions to: 1 class Student2(var gpa: Double, var name: String = ""*"") 2 3 def innerCount(course: List[Student2], student: Student2): Double = 4 course.filter(s => s == student).size 5 6 def count2(course: List[Student2], student: Student2): Double = 7 innerCount(course.map(s => new Student2(student.gpa, student.name)), student) 8 9 val students2 = List( 10 Student2(1.0, ""Ana""), Student2(2.0, ""Ben""), Student2(3.0, ""Cal""), 11 Student2(4.0, ""Dre""), Student2(5.0, ""Egg""), Student2(6.0, ""Fra"") 12 ) And our expression to: e2: 1 count2(students2, Student2(6.0, ""*"")) What is the result of e2?","N=15000 We need a lexical rule for each of the possible (PoS-tag, surface form) associations; there are 10000 distinct words with an average syntactic ambiguity of 1.5, thus a total of 10’000*1.5 = 15’000 possible associations and therefore 15’000 lexical rules to be added.",0.1,M1_preference_data_319
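The behavioural change in e2 can be reproduced outside Scala: once `Student2` is a plain (non-case) class, `==` falls back to reference equality, so comparing freshly constructed copies never succeeds. A minimal Python analogue, where identity comparison stands in for Scala's reference equality:

```python
class Student2:
    def __init__(self, gpa, name="*"):
        self.gpa = gpa
        self.name = name
    # no __eq__ defined: == compares object identity, like Scala's
    # default equals on a non-case class

def inner_count(course, student):
    return sum(1 for s in course if s == student)

def count2(course, student):
    # fresh copies are built here, so none of them is identical to `student`
    return inner_count([Student2(student.gpa, student.name) for _ in course], student)

students2 = [Student2(g, n) for g, n in
             [(1.0, "Ana"), (2.0, "Ben"), (3.0, "Cal"),
              (4.0, "Dre"), (5.0, "Egg"), (6.0, "Fra")]]
assert count2(students2, Student2(6.0, "*")) == 0
```

This is exactly why swapping the case class for a plain mutable class breaks referential transparency of the counting expression: the result now depends on object identity rather than on values.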
Your current code is as follows: public class ShoppingCart { public void buy(Product product, int quantity) { if (product == null) { throw new IllegalArgumentException(""product cannot be null""); } if (quantity < 1) { throw new IllegalArgumentException(""quantity must be at least 1""); } int price = product.getUnitPrice() * quantity; int discount = computeDiscount(product, quantity); int shippingFees = computeShippingFees(product, quantity); int totalPrice = price - discount + shippingFees; // this triggers a call to the actual credit card processor CreditCardProcessor.billCurrentUser(totalPrice); } private int computeDiscount(Product product, int quantity) { // ... discount computation logic ... } private int computeShippingFees(Product product, int quantity) { // ... shipping fees computation logic ... } } A colleague remarks that hardcoding ""CreditCardProcessor"" is not a good practice, and that ""ShoppingCart"" should instead have a payment processor interface as a constructor parameter. Explain in 1 sentence whether this is a good idea and why or why not:","By assumption $\kappa_{1}$ and $\kappa_{2}$ are valid kernels. Hence there exist feature maps $\phi_{1}$ and $\phi_{2}$ so that $$ \begin{aligned} & \kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\phi_{1}(\mathbf{x})^{\top} \phi_{1}\left(\mathbf{x}^{\prime}\right), \\ & \kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\phi_{2}(\mathbf{x})^{\top} \phi_{2}\left(\mathbf{x}^{\prime}\right) . \end{aligned} $$ Hence, $$ \kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=a \phi_{1}(\mathbf{x})^{\top} \phi_{1}\left(\mathbf{x}^{\prime}\right)+b \phi_{2}(\mathbf{x})^{\top} \phi_{2}\left(\mathbf{x}^{\prime}\right) . 
$$ This can be represented as an inner product via the feature map $$ \left(\sqrt{a} \phi_{1}(\cdot), \sqrt{b} \phi_{2}(\cdot)\right) $$",0.1,M1_preference_data_321 "Let $f\colon \mathbb{R}\rightarrow \mathbb{R}$ and $g\colon\mathbb{R}\rightarrow \mathbb{R}$ are two functions defined on all $\mathbb{R}$. If $f\circ g$ is injective, then $g$ is injective.",['1'],0.1,M1_preference_data_322 " Consider the following snippet used to produce a high-performance circuit using a statically scheduled HLS tool, such as Xilinx Vivado HLS. Assume that a erb+double+ multiplication takes several cycles (latency) to compute. egin{verbatim} double a[ARRAY_SIZE] = ...; int b = 1; for (int i = 0; i < ARRAY_SIZE; i++) if (a[i] * (double) b >= CONST) b++; \end{verbatim} Is this kind of code fundamentally problematic for a tool aimed at producing statically scheduled pipelined circuits? If so, explain. ","Suppose that $H$ contains a (simple) cycle. As usual, we will show that $x^*$ can then be written as a convex combination of two different feasible points, contradicting that $x^*$ is an extreme point. Namely, we will define a nonzero vector $y$ such that $x - \epsilon y$ and $x + \epsilon y$ are both feasible (for a small enough $\epsilon > 0$). Of course, $x = \frac{(x - \epsilon y) + (x + \epsilon y)}{2}$. As in the proof for the bipartite matching polytope, label edges in the cycle alternately as odd and even. In that proof, we defined $y$ to be $+1$ on even edges and $-1$ on odd edges, but this doesn't quite work here. This is because of the constraints $\sum_{j\in J: i \in N(j)} x_{ij} p_j \leq T$: the left-hand side would change when adding $y$ because the $p_j$'s are different.\footnote{ It is not enough to set $\epsilon$ small enough. Consider an instance with two machines and two jobs. The jobs have processing times $1$ and $2$, and $T = 1.5$. Let $x$ assign $1/2$ of each job to each machine. 
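The feature-map construction for the kernel sum can be verified numerically: concatenating √a·φ₁ and √b·φ₂ reproduces aκ₁ + bκ₂ as an inner product. A small sketch (Python; the toy feature maps φ₁, φ₂ are chosen purely for illustration):

```python
import math

a, b = 2.0, 3.0
phi1 = lambda x: [x, x * x]     # toy feature map defining kappa_1
phi2 = lambda x: [1.0, x]       # toy feature map defining kappa_2

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def kappa(x, xp):
    # a * kappa_1(x, x') + b * kappa_2(x, x')
    return a * dot(phi1(x), phi1(xp)) + b * dot(phi2(x), phi2(xp))

def phi(x):
    # concatenation (sqrt(a) phi1(x), sqrt(b) phi2(x))
    return [math.sqrt(a) * v for v in phi1(x)] + [math.sqrt(b) * v for v in phi2(x)]

for x, xp in [(0.5, 2.0), (-1.0, 3.0), (1.5, 1.5)]:
    assert abs(kappa(x, xp) - dot(phi(x), phi(xp))) < 1e-9
```

The √a and √b factors each appear twice in the inner product, yielding the coefficients a and b, which is the whole content of the construction.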
Then just adding/subtracting $\pm \epsilon$ on the cycle-edges will overload one of the machines, no matter what $\epsilon > 0$ is. } To deal with this, we instead set $y_{ij} = \pm 1 / p_j$ for edges $\{i,j\}$ in the cycle. Then: \begin{itemize} \item for each $j \in J$ we have $\sum_{i \in N(j)} y_{ij} = 0$ because this sum has two nonzero terms, equal to $1/p_j$ and $-1/p_j$ (or no nonzero terms if $j$ does not appear in the cycle), \item for each $i \in M$ we have $\sum_{j\in J: i \in N(j)} y_{ij} p_j = 0$ because this sum has two nonzero terms, equal to $1/p_j \cdot p_j$ and $-1/p_{j'} \cdot p_{j'}$ where $j$ and $j'$ are the neighbors of $i$ on the cycle (or no nonzero terms if $i$ does not appear in the cycle), \item since $y_{ij} \ne 0$ only if $0 < x^*_{ij} < 1$, we can set $\epsilon$ so that $0 \le x^*_{ij} \pm \epsilon y_{ij} \le 1$ for all $i$ and $j$. \end{itemize} This shows that the points $x \pm \epsilon y$ are both feasible. \newpage",0.1,M1_preference_data_323 "Imagine you're working at JaaS, the Jokes-as-a-Service platform. With JaaS, everyone can be funny any time by having new jokes at their fingertips via a public API. During the orientation at JaaS, the VP of engineering explains to you their workflow: 1. Branching: Developers must use a separate branch for each feature, and they must commit their code once a day. 2. Testing: When their feature is finished, developers must run a test suite locally, on their machine, and make sure that every test passes. Once that's done, they can commit and push, then open a PR describing the feature, with a screenshot of the test results attached, and wait for code reviews from colleagues. 3. Merging: If no one requested changes on the code within 24 hours, one can merge the PR to the main branch. The above ""Testing"" directive contains a flaw. 
Give a better alternative for it and explain why your alternative is better in at most 2 sentences:","Compounds are simply ignored as such by Naive Bayes and are, due to the 'naive' independence assumption, handled as separate tokens.",0.1,M1_preference_data_324
For (1), the perfect FD satisfies the completeness and accuracy properties of GM, the Uniform Consensus (an implementation of which also uses the perfect FD) satisfies the agreement property of GM, and the fact that a process forms a new view only when the set of correct processes is properly contained in the current view membership satisfies the monotonicity property. For (2), assume that all processes run a GM algorithm. We can implement a perfect failure detector as follows: Whenever a new view is installed, all processes that are freshly removed from the view are added to the detected set. This approach satisfies both Strong Completeness and Strong Accuracy, directly from the corresponding properties of GM.",0.1,M1_preference_data_326 " Let us remind that we define the max-margin $M_\star$ as \begin{align*} M_\star = \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N \end{align*} and a max-margin separating hyperplane $\bar \wv$ as a solution of this problem: \begin{align*} \bar \wv \in \arg \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N \end{align*} Bound the number of perceptron updates $t$ using the quantities $R$ and $M_\star$. Prove your result. ","1. Branch prediction and speculation. 2. Dependence prediction and speculation. 3. Branch prediction and speculation. 4. Dynamic register renaming.",0.1,M1_preference_data_327 "Professor Ueli von Gruy\`{e}res has worked intensely throughout his career to get a good estimator of the yearly consumption of cheese in Switzerland. Recently, he had a true breakthrough. He was able to design an incredibly efficient randomized algorithm \Alg that outputs a random value $X$ satisfying \begin{align*} \mathbb{E}[X] = c \qquad \mbox{ and } \qquad \textrm{Var}[X] = c^2\,, \end{align*} where $c$ is the (unknown) yearly consumption of cheese in Switzerland. 
In other words, \Alg is an unbiased estimator of $c$ with variance $c^2$. Use Ueli von Gruy\`{e}res' algorithm \Alg to design an algorithm that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee} \Pr[|Y - c| \geq \epsilon c] \leq \delta\qquad \mbox{ where $\epsilon > 0$ and $\delta >0$ are small constants.} \end{align} Your algorithm should increase the resource requirements (its running time and space usage) by at most a factor $O(1/\epsilon^2 \cdot \log(1/\delta))$ compared to the requirements of $\Alg$. \\[0mm] {\em (In this problem you are asked to (i) design the algorithm using $\mathcal{A}$, (ii) show that it satisfies the guarantee~\eqref{eq:guarantee}, and (iii) analyze how much the resource requirements increase compared to that of simply running $\mathcal{A}$. Recall that you are allowed to refer to material covered in the course.)}","Yes, it is possible. Adding a document like d3=”c” would make d2>d1 for VSR and d1>d2 for smoothed probabilistic retrieval.",0.1,M1_preference_data_328 You are given the following accident and weather data. Each line corresponds to one event: 1. car_accident rain lightning wind clouds fire 2. fire clouds rain lightning wind 3. car_accident fire wind 4. clouds rain wind 5. lightning fire rain clouds 6. clouds wind car_accident 7. rain lightning clouds fire 8. lightning fire car_accident (b) Find all the association rules for minimal support 0.6 and minimal confidence of 1.0 (certainty). Follow the apriori algorithm.,"There is a huge class imbalance, thus, accuracy is not a good metric. It is very easy for accuracy to be biased. If you label everything as the majority class, the accuracy would still be very high (in the high 90s), but the performance on the minority class would be terrible. In such cases, one can use the either of the following for evaluation. 
confusion matrix, balanced accuracy score (sklearn), or (un)weighted micro/macro averaged F1 scores",0.1,M1_preference_data_329 "Consider the following toy corpus: the cat cut the hat Considering only lowercase alphabetical and whitespace, how many bigrams are possible?",['α/(N-3+αm^4)'],0.1,M1_preference_data_330 "To avoid the effect of the prefetcher, one could use a hash table to randomize the access order during probing so that the prefetcher cannot anticipate the access pattern. What is the potential problem with this idea?","For the set of real numbers, we know that: - |a| = -a, if a < 0 - |a| = a, if a ≥ 0 So: - If x < 7: |x - 7| = 7 - x, therefore x + |x - 7| = x + (7 - x) = 7 ≥ 7 - If x ≥ 7: |x - 7| = x - 7, therefore x + |x - 7| = x + (x - 7) = 2x - 7 ≥ 2*7 - 7 → x + |x - 7| ≥ 7",0.1,M1_preference_data_331 " Consider the following algorithm that takes as input an undirected graph $G=(V,E)$: \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \begin{minipage}{14cm} \begin{verse} \textsc{SimpleCut}$(G=(V,E))$: \\[2mm] 1. Let $\mathcal{H}$ be a $2$-universal family of hash functions $h: V \to \{0,1\}$. \\[1mm] 2. Select $h \in \mathcal{H}$ at random. \\[1mm] 3. \RETURN the vertex set $S = \{v\in V: h(v) = 0\}$. \end{verse} \end{minipage} \end{boxedminipage} \end{center} Prove the following: \begin{itemize} \item[]In expectation, the set $S$ returned by \textsc{SimpleCut} cuts at least $|E|/2$ edges. \end{itemize} {\em (In this problem you are asked to prove the above statement. 
Recall that you are allowed to refer to material covered in the lecture notes.)}",It's $P(\text{High}) \cdot \prod_{i=2}^{14}P(w_i|w_{i-1})$ with the above fourteen tokens $w_i$.,0.1,M1_preference_data_332 "Given the following function sums: 1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match 2 case Nil => List((c, 1)) 3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) 4 5 def sums(digits: List[Int]): List[(Int, Int)] = 6 digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? 1 def mystery4(digits: List[Int]): Int = sums(digits) match 2 case Nil => 0 3 case t => t.reduceLeft((a, b) => (a._1, a._2 + b._2))._2","Let $\mathbf{a}=(a_1, \dots, a_n)$ and $\mathbf{b}=(b_1, \dots, b_n)$. Alice generates two independent random vectors $\mathbf{r}^1, \mathbf{r}^2 \sim \operatorname{Uniform}(\{0, 1\}^n)$ using the shared random bits. Note that this is equivalent to choosing each element of $\mathbf{r}^1$ and $\mathbf{r}^2$ independently and uniformly at random from $\{0, 1\}$. Alice then computes $x_1 = \langle \mathbf{a}, \mathbf{r}^1\rangle \operatorname{mod} 2$ and $x_2 = \langle \mathbf{a}, \mathbf{r}^2 \rangle \operatorname{mod} 2$, and transmits $(x_1, x_2)$ to Bob. Bob uses the shared random bits to generate the same vectors $\mathbf{r}^1$ and $\mathbf{r}^2$, and computes $y_1 = \langle \mathbf{b}, \mathbf{r}^1\rangle \operatorname{mod} 2$ and $y_2 = \langle \mathbf{b}, \mathbf{r}^2 \rangle \operatorname{mod} 2$. If $x_1 = y_1$ and $x_2 = y_2$, Bob outputs $\textsc{Equal}$. Otherwise, Bob outputs $\textsc{Not Equal}$. We prove that the above protocol succeeds with probability at least $2/3$. Clearly, it succeeds whenever $\mathbf{a} = \mathbf{b}$. Thus, we only have to show that it succeeds with probability at least $2/3$ when $\mathbf{a} \neq \mathbf{b}$. 
We first show that $\Pr[x_1 = y_1 | \mathbf{a} \neq \mathbf{b}] = 1/2$. Notice that \begin{align*} \Pr[x_1=y_1| \mathbf{a} \neq \mathbf{b}] &= \Pr[ \langle \mathbf{a}, \mathbf{r}^1 \rangle \operatorname{mod} 2 = \langle \mathbf{b}, \mathbf{r}^1 \rangle \operatorname{mod} 2 | \mathbf{a} \neq \mathbf{b}] \\ &= \Pr[ \langle \mathbf{a} - \mathbf{b}, \mathbf{r}^1 \rangle \operatorname{mod} 2 = 0 | \mathbf{a} \neq \mathbf{b}]. \end{align*} Let $\mathbf{c} = \mathbf{a} - \mathbf{b}$. Since $\mathbf{a} \neq \mathbf{b}$, we have that $\mathbf{c} \neq \mathbf{0}$. This means that for at least one index $j$, $c_j = \pm 1$. Now fix such $j$, and suppose that we have chosen all elements $r^1_i$ for $i \neq j$ independently and uniformly at random from $\{0, 1\}$. Then, there will be only one choice for $r^1_j$ that would make $\langle \mathbf{c}, \mathbf{r}^1 \rangle = 0$. Thus, using the principle of deferred decisions, we have that $$\Pr[ \langle \mathbf{c}, \mathbf{r}^1 \rangle \operatorname{mod} 2 = 0 | \mathbf{a} \neq \mathbf{b}] = \Pr[ \langle \mathbf{a} - \mathbf{b}, \mathbf{r}^1 \rangle \operatorname{mod} 2 = 0 | \mathbf{a} \neq \mathbf{b}] = 1/2.$$ As a result, $\Pr[x_1=y_1| \mathbf{a} \neq \mathbf{b}] = 1/2$, and similarly, $\Pr[x_2=y_2| \mathbf{a} \neq \mathbf{b}] = 1/2$. Since $\mathbf{r}^1$ and $\mathbf{r}^2$ are independent from each other, $\Pr[(x_1,x_2)=(y_1,y_2)| \mathbf{a} \neq \mathbf{b}] = (1/2) \cdot (1/2) = 1/4$, which implies that $\Pr[(x_1,x_2) \neq (y_1,y_2)| \mathbf{a} \neq \mathbf{b}] = 1 - 1/4 \geq 2/3$ as required. \QED",0.1,M1_preference_data_333 "Consider the problem of finding a maximum cardinality set packing in the semi-streaming model. An instance of this problem consists of a known universe $U$ of $n$ elements and sets $S \subseteq U$ are streamed one-by-one. 
The goal is to select a family $\mathcal{T}$ of pairwise disjoint sets (i.e., $S\cap S' = \emptyset$ for any two distinct sets $S, S' \in \mathcal{T}$) of maximum cardinality while only using $O(n\cdot \textrm{poly}\log n)$ storage space. Devise an algorithm in this setting that returns a set packing of cardinality at least $1/k$ times that of a maximum cardinality set packing, assuming that each streamed set $S$ has cardinality at most $k$, i.e., $|S| \leq k$. \\[0mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it uses $O(n\cdot \textrm{\textnormal{poly}}{\log n})$ space, and (iii) prove that it returns a solution of cardinality at least $1/k$ times the cardinality of a maximum cardinality set packing. Recall that you are allowed to refer to material covered in the course.) }","def dcg(k, retrieved_ids, grade): dcg_val = 0 for i in range(1, k): dcg_val += grade[i] / math.log2(i+1) return dcg_val",0.1,M1_preference_data_334 "Recall the Manhattan distance function that we saw in class: for any $d$-dimensional Boolean vectors $p,q \in \{0,1\}^d$, the Manhattan distance is defined by \begin{align*} \dist(p,q) = \|p-q\|_1 = |\{i: p_i \neq q_i\}|\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: \{0,1\}^d \rightarrow \{0,1,2,3\}$ such that for any $p, q\in \{0,1\}^d$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] = \left( 1-\frac{\dist(p,q)}{d} \right)^2\,. \end{align*} {\em (In this problem you are asked to explain the hash family and show that it satisfies the above property. Recall that you are allowed to refer to material covered in the lecture notes.)}",False.,0.1,M1_preference_data_335 What property does the function f passed to reduce need to satisfy in order to have the same result regardless on how reduce groups the applications of the operation f to the elements of the data structure? 
Prove that your function f indeed satisfies that property.,"Consider an extreme point $x^*$, and suppose for the sake of contradiction that $x^*$ is not half-integral, i.e., that there is an edge $e$ such that $x^*_e \not \in \{0, \frac12, 1\}$. We will show that $x^*$ is a convex combination of feasible points, contradicting that $x^*$ is an extreme point. Let $V^+ = \{v: \frac{1}{2} < x^*_v < 1\}$ and $V^- = \{v: 0 < x^*_v < \frac{1}{2}\}$. Note that $V^+ \cup V^- \neq \emptyset$, since $x^*$ is assumed to not be half-integral. Take $\epsilon > 0$ to be tiny, and define: \[ y_v^+ = \left\{ \begin{array}{l l} x^*_v + \epsilon & \quad \text{if } v \in V^+\\ x^*_v - \epsilon & \quad \text{if } v \in V^-\\ x^*_v & \quad \text{otherwise}\\ \end{array} \right.\] \[ y_v^- = \left\{ \begin{array}{l l} x^*_v - \epsilon & \quad \text{if } v \in V^+\\ x^*_v + \epsilon & \quad \text{if } v \in V^-\\ x^*_v & \quad \text{otherwise}\\ \end{array} \right.\] Note that $x^* = \frac{1}{2} y^+ + \frac{1}{2} y^-$. It remains to verify that $y^+$ and $y^-$ are feasible solutions. \begin{enumerate} \item By selecting $\epsilon$ small enough, the boundary constraints $(0 \leq y^+_v \leq 1, 0 \leq y^-_v \leq 1)$ are satisfied. \item Consider the constraints for the edges $e=\{u,v\} \in E$. If $x^*_u+x^*_v > 1$, the constraint remains satisfied by picking $\epsilon > 0$ small enough. If $x^*_u+x^*_v = 1$, then consider the following cases: \begin{itemize} \item $u, v \notin V^+ \cup V^-$. In this case, $y^+_u + y^+_v = x^*_u + x^*_v = 1$. \item $u \in V^+$; then $v \in V^-$. In this case, $y^+_u + y^+_v = x^*_u + \epsilon + x^*_v -\epsilon = 1$. \item $u \in V^-$; then $v \in V^+$. In this case, $y^+_u + y^+_v = x^*_u - \epsilon + x^*_v +\epsilon = 1$. \end{itemize} \end{enumerate} So $y^+$ is a feasible solution. 
The same argument holds for $y^-$.",0.1,M1_preference_data_336 Implement the modularity metric for communities.,"Yes, because intermediate commits from the pull requests are not complete, and may even not build, which would make it hard in the future to find the root cause of a bug. Alternatively, you could squash into multiple commits, such as one for a refactoring and one for the actual bug fix, if it makes reviewing easier.",0.1,M1_preference_data_337 Create a function that parses the input documents and creates a dictionary with the terms and term frequencies.,"Each commit should represent a single logical change, such as a new feature or a bug fix. This is not related with the time it takes to make the change. (It's a good idea to commit partial work and push it before going home as a backup, but this should then be squashed before merging)",0.1,M1_preference_data_338 "Assume your team is considering adding support for the SwengPhotos cloud service, which provides upload and download of photos on private cloud storage. Each photo is associated with a unique name. SwengPhotos's documentation for the ""upload new photo"" interface describes the following error responses: 1. I/O error 2. Backend timeout error 3. Name already exists error Explain, in 1-3 sentences, which of these errors are abstraction leaks and why:","We define our estimator as follows: \begin{itemize} \item Let $t= 1000 \log(1/\delta)$. \item Run $t$ independent copies of $\Alg$ to obtain estimates $X_1, X_2, \ldots, X_t$. \item Output $Y$ to be the \emph{median} of $X_1, \ldots, X_t$. \end{itemize} Let $I_i$ be the indicator random variable that $|X_i - W| \geq \epsilon W$. For us to have $|Y - W| \geq \epsilon W$ it must be that $\sum_{i = 1}^t I_i \geq t/2$. However, $\E[\sum_{i=1}^t I_i] \leq t/3$ and it is a sum of \emph{independent} random variables taking values in $\{0,1\}$. 
We can thus apply Chernoff bounds to obtain \begin{align*} \Pr[|Y- W| \geq \epsilon W] \leq \Pr[\sum_{i=1}^t I_i \geq t/2] \leq e^{-t/100} \leq \delta\,, \end{align*} where we used that $t = 1000 \log(1/\delta)$.",0.1,M1_preference_data_339 "Since exploiting the cache side-channel requires precise time measurement, many security researchers suggest reducing the precision of this time measurement. Can this proposal fully disable all possible cache side-channel attacks? Discuss.","The problem addressed by a PoS tagger is to assign part-of-speech tags (i.e. grammatical roles) to words within a given context (sentence, text). This task is not trivial because of lexical ambiguity (words can have multiple grammatical roles, e.g. can/N can/V) and out-of-vocabulary forms (i.e. unknown words). Lexical ambiguity is not trivial to handle because it leads to an exponential number of possible solutions w.r.t. the sentence length. Unknown words are not trivial because we have to decide how to cope with them, which often involves high-level linguistic features (and compromises to be made). This is the role of the 'guesser'.",0.1,M1_preference_data_340 "Estimate the 95% confidence intervals of the geometric mean and the arithmetic mean of pageviews using bootstrap resampling. The data is given in a pandas.DataFrame called df and the respective column is called ""pageviews"". You can use the scipy.stats python library.","(a) PoS tagging, but also Information Retrieval (IR), Text Classification, Information Extraction. For the latter, accuracy sounds like precision (but it depends on what we actually mean by 'task' (vs. subtask)). 
(b) a reference must be available, 'correct' and 'incorrect' must be clearly defined",0.1,M1_preference_data_341 "You are writing an implementation for the following function: /** Find the N-th percentile of the array of values provided, e.g., 50% = median, 100% = maximum */ int findPercentile(int[] values, int n) To facilitate debugging, you decided to add a post-condition: the returned value must be in the array ""values"". However, one of your colleagues notices that the post-condition requires to iterate the whole array, and does not agree because this function will be used frequently in a code whose latency must be minimized. What compromise would you suggest? What are its pros and cons?","def item_based_predict(ratings, similarity): filled_matrix = np.zeros((n_users, n_items)) # loop over all the users for u in range(n_users): # get the items rated by this user ranked_items_indices = train_data_matrix[u,:].nonzero()[0] for i in range(n_items): numerator = 0 denominator = 0 for j in ranked_items_indices: numerator+=item_similarity[i,j]*train_data_matrix[u,j] denominator+=np.abs(item_similarity[i,j]) if denominator>0: filled_matrix[u,i]= numerator/denominator else: # simply take a random rating in that case filled_matrix[u,i]= np.random.randint(1,6) return filled_matrix ",0.1,M1_preference_data_342 What is the asymptotic work of parGroupyBy2?,"import numpy as np def get_vars(): X = np.random.random(30) Y = np.random.random(30) Z = X/2 + Y/2 + 0.1 K = Y + 0.1 return X,Y,Z,K",0.1,M1_preference_data_343 "Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. In the minimum (unweighted) $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. We shall use a linear program to solve this problem. Let ${P}$ be the set of all paths between $s$ and $t$ in the graph $G$. 
The linear program has a variable $y_e$ for each edge $e\in E$ and is defined as follows: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E. \end{array} \end{equation*} For example, consider the following graph where the numbers on the edges depict the $y_e$-values of a feasible solution to the linear program: \begin{center} \input{cutExample} \end{center} The values on the edges depict a feasible but not optimal solution to the linear program. That it is feasible follows because each $y_e$ is non-negative and $\sum_{e\in p} y_e \geq 1$ for all $p\in P$. Indeed, for the path $s, b, a, t$ we have $y_{\{s,b\}}+ y_{\{b,a\}} + y_{\{a,t\}} = 1/4 + 1/4 + 1/2 = 1$, and similar calculations for each path $p$ between $s$ and $t$ show that $\sum_{e\in p} y_e \geq 1$. That the solution is not optimal follows because its value is $2.5$ whereas an optimal solution has value $2$. Prove that $\opt\leq \optlp$, where $\opt$ and $\optlp$ are defined as in {\bf 6a}. \\ Hint: Round a feasible linear programming solution $y$. In the (randomized) rounding it may be helpful to consider, for each vertex $v\in V$, the length of the shortest path from $s$ to $v$ in the graph where edge $e\in E$ has length $y_e$. For example, in the graph and linear programming solution depicted in the problem statement, we have that the length of the shortest path from $s$ to $a$ equals $1/2$. \\ {\em (In this problem you are asked to prove $\opt \leq \optlp$. Recall that you are allowed to refer to material covered in the lecture notes.)}",['(p(N-3)+α)/(N-3+αm^4)'],0.1,M1_preference_data_344 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). 
It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Consider the parallel sequence xs containing the three elements x1, x2 and x3. Also consider the following call to aggregate: xs.aggregate(z)(f, g) The above call might potentially result in the following computation: f(f(f(z, x1), x2), x3) But it might also result in other computations. Come up with at least two other computations in terms of f and g that may result from the above call to aggregate.","The attacker fills the cache with its own data (``prime'') and lets the victim run; then, it checks which of its own data have been evicted (``probe''). In this way, it learns which values have been accessed by the victim.",0.1,M1_preference_data_345 "Assume that your team is discussing the following java code: public final class DataStructure { public void add(int val) { /*...*/ } private boolean isFull() { /*...*/ } } Your colleagues were changing the parameter type of ""add"" to an ""Integer"". Explain whether this breaks backward compatibility and why or why not (also without worrying about whether this is a good or a bad thing).","D(cat,dog)=2 D(cat,pen)=6 D(cat,table)=6 D(dog,pen)=6 D(dog,table)=6 D(pen,table)=2",0.1,M1_preference_data_346 "Devise an algorithm for the following graph orientation problem: \begin{description} \item[Input:] An undirected graph $G = (V,E)$ and capacities $k : V \rightarrow \mathbb{Z}$ for each vertex. \item[Output:] If possible, an orientation of $G$ such that each vertex $v\in V$ has in-degree at most $k(v)$. \end{description} An orientation of an undirected graph $G$ replaces each undirected edge $\{u,v\}$ by either an arc $(u,v)$ from $u$ to $v$ or by an arc $(v,u)$ from $v$ to $u$. \\[2mm] \noindent\emph{(Hint: reduce the problem to matroid intersection. 
You can also use bipartite matching\ldots)}","True: Nothing can be said about process i, because of “eventually”.",0.1,M1_preference_data_347 "Implement a function that takes a lists ls as argument and returns the length of the longest contiguous sequence of repeated elements in that list. For this second question, you are required to use foldLeft in your solution, and your solution should not be recursive. For example: longest(List(1, 2, 2, 5, 5, 5, 1, 1, 1)) == 3 def longest[A](ls: List[A]): Int = ???","No it does not, we have no clue to which separating hyperplane the Perceptron algorithm is converging to. ",0.1,M1_preference_data_348 "Implement a uniform reliable broadcast algorithm without using any failure detector, i.e., using only BestEffort-Broadcast(BEB).","The code synchronously downloads images, thus the app will freeze until the download is over.",0.1,M1_preference_data_349 "Assume that you are part of a team developing a mobile app using Scrum. When using the app, you identified multiple bugs and features which you think should be implemented, and took some notes. You want to share these with the Product Owner. Your backlog of tasks includes the following task: - [ ] Login Is this item suitable to be submitted to the Product Backlog? Why?","The algorithm is as follows: We store the first item and a counter, initialized to 1. For each subsequent item, if it is the same as the currently stored item, increment the counter. If it differs, and the counter is zero, then store the new item and set the counter to 1; else, decrement the counter. Say that $s$ is the unknown value that occurs more than $m/2$ times. The idea of the algorithm is that if you could pair up elements of the stream so that distinct values are paired up, and then you ``kill'' these pairs, then $s$ will always survive. 
The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count how many copies of that value you saw). Then when you come across a new element, you decrement the counter and implicitly account for one new pair.",0.1,M1_preference_data_350 "Show a code snippet which represents the kernel of a Spectre attack (use any convenient programming language or assembly). ","No, this will force other members to sit through an explanation that's not relevant to them, the colleague was right.",0.1,M1_preference_data_351 "Imagine you're working at JaaS, the Jokes-as-a-Service platform. With JaaS, everyone can be funny any time by having new jokes at their fingertips via a public API. During the orientation at JaaS, the VP of engineering explains to you their workflow: 1. Branching: Developers must use a separate branch for each feature, and they must commit their code once a day. 2. Testing: When their feature is finished, developers must run a test suite locally, on their machine, and make sure that every test passes. Once that's done, they can commit and push, then open a PR describing the feature, with a screenshot of the test results attached, and wait for code reviews from colleagues. 3. Merging: If no one requested changes on the code within 24 hours, one can merge the PR to the main branch. The above ""Merging"" directive contains a flaw. Give a better alternative for it and explain why your alternative is better in maximum 2 sentences:","Let $x^*$ be an extreme point for the graph $G=(V, E)$ and let $E_f = \{e\in E: 0 < x_e^* < 1\}$. Suppose towards contradiction that $E_f \neq \emptyset$. Note that $E_f$ must then contain a cycle as $b$ is integral: indeed any vertex incident to an edge in $E_f$ is incident to at least two edges in $E_f$. Note that since $G$ is bipartite, hence, the cycle has even length. Let $e_1, e_2, ..., e_{2k}$ be the edges of the cycle. 
All these edges are fractional and we want to define $y$ and $z$ so that they are feasible solutions and $x^* = \frac{1}{2}(y+z)$ which will contradict the fact that $x^*$ is an extreme point. Let $y, z$ be \[ y_e = \left\{ \begin{array}{l l} x^*_e + \epsilon & \quad \text{if } e \in \{e_1, e_3, e_5, ..., e_{2k-1}\}\\ x^*_e - \epsilon & \quad \text{if } e \in \{e_2, e_4, e_6, ..., e_{2k}\}\\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] \[ z_e = \left\{ \begin{array}{l l} x^*_e - \epsilon & \quad \text{if } e \in \{e_1, e_3, e_5, ..., e_{2k-1}\}\\ x^*_e + \epsilon & \quad \text{if } e \in \{e_2, e_4, e_6, ..., e_{2k}\}\\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] Notice that the degree constraints are still satisfied by $y$ and $z$ as we are alternating between increasing and decreasing the edge values in a cycle of even length. More formally suppose that $e_i$ and $e_{i+1}$ are the edges incident to $v$. Depending on parity of $i$ we have $y_{e_i}=x^{*}_{e_i}+\epsilon$ and $y_{e_{i+1}}=x^{*}_{e_{i+1}}-\epsilon$. Or we have $y_{e_i}=x^{*}_{e_i}-\epsilon$ and $y_{e_{i+1}}=x^{*}_{e_{i+1}}+\epsilon$. Therefore degree constraints still satisfied: \begin{align*} &\sum_{e\in E: v\in e} y_e = \sum_{e\in E: v\in e} x^*_e =b(v)\text{, and} \\ &\sum_{e\in E: v\in e} z_e = \sum_{e\in E: v\in e} x^*_e =b(v) \end{align*} Hence, to ensure feasibility, we need to choose such a small $\epsilon$ so as to guarantee that all $y_e$ and $z_e$ are in $[0,1]$. For example $\epsilon=\min\{x^*_e,(1-x^*_e) : e\in E_f\}$ gives that both $y$ and $z$ are feasible. Now one can easily see that $x^* = \frac{1}{2} (y+z)$ which contradicts the assumption that $x^*$ is an extreme point.",0.1,M1_preference_data_352 " Now let $\xv$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\xv_1, . . . , \xv_N$ from above. 
Consider the problem of finding a unit vector, $\wv \in \R^D$, such that the random variable $\wv^\top \xx$ has \emph{maximal} variance. What is the variance of the random variable $\wv^\top \xx$ over the randomness of $\xx$?","- Classes: God, beautiful, happy - Instances: Aphrodite, Eros - Properties: isa, isParentOf ",0.1,M1_preference_data_353 "Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a subtype nor a supertype of T2. What is the correct subtyping relationship between Map[A, X] and Map[B, Y]?","(i) (a) $h(i)=\frac{1}{i !}$ (b) $\eta=\ln (\lambda)$ (c) $\phi(i)=i$ (d) $A(\eta)=\lambda=e^{\eta}$ (ii) (a) $\frac{d A(\eta)}{d \eta}=\lambda$ (b) $\frac{d^{2} A(\eta)}{d \eta^{2}}=\lambda$",0.1,M1_preference_data_354 "In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\right)\right)$, where $f$ is any function from the domain to itself.","In general no load could be advanced before a store, for there could be a dependence through memory if the store and the load addresses happen to be the same. 
Here there are two stores, but the store to \texttt{r5} is at an address which is a multiple of 16 plus 4; since the load is at an address which is a multiple of 16, there cannot be a collision between these two accesses, so only the store to \texttt{r3} and the load can collide and create a problem.",0.1,M1_preference_data_355 What is the communication complexity of the FloodSet algorithm in number of bits?," Keep track of all instructions being executed or waiting to be executed and commit their results in order to the registers or memory once it is sure that nothing would have prevented their execution (exceptions in previous instructions, misprediction of previous branches, etc.)",0.1,M1_preference_data_356 "You are responsible for a project aiming at providing on-line recommendations to the customers of an on-line book selling company. The general idea behind this recommendation system is to cluster books according to both customers and content similarities, so as to propose books similar to the books already bought by a given customer. The core of the recommendation system is a clustering algorithm aiming at regrouping books likely to be appreciated by the same person. This clustering should not only be achieved based on the purchase history of customers, but should also be refined by the content of the books themselves. It's that latter aspect we want to address in this exam question. Consider the following six 'documents' (toy example): d1: 'Because cows are not sorted as they return from the fields to their home pen, cow flows are improved.' d2: 'He was convinced that if he owned the fountain pen that he'd seen in the shop window for years, he could write fantastic stories with it. That was the kind of pen you cannot forget.' d3: 'With this book you will learn how to draw humans, animals (cows, horses, etc.) and flowers with a charcoal pen.' d4: 'The cows were kept in pens behind the farm, hidden from the road. That was the typical kind of pen made for cows.' 
d5: 'If Dracula wrote with a fountain pen, this would be the kind of pen he would write with, filled with blood red ink. It was the pen she chose for my punishment, the pen of my torment. What a mean cow!' d6: 'What pen for what cow? A red pen for a red cow, a black pen for a black cow, a brown pen for a brown cow, ... Understand?' and suppose (toy example) that they are indexed only by the two words: pen and cow. What is the result of the dendrogram clustering algorithm on those six documents, using the cosine similarity and single linkage? Explain all the steps. Hint: $5 / \sqrt{34}<3 / \sqrt{10}<4 / \sqrt{17}$.","Yes, it is possible. d1>d2: without adding any document, it holds true d2>d1: adding d3=”aaaa”",0.1,M1_preference_data_357 "Devise an algorithm that, without consensus, implements a weaker specification of NBAC by replacing the termination property with very weak termination. Very weak termination: If no process crashes, then all processes decide. Is a failure detector needed to implement this algorithm?","Consider a system with an even number N of processes. Let A, B denote two distinct subsets of N/2 processes that propose values A and B respectively. By contradiction, let us assume that an algorithm exists that achieves consensus when N/2 processes fail. The two following executions are valid: - Execution 1. All processes in A crash at the beginning without issuing any message. All the processes in B still achieve consensus and, by validity, decide B. Let TB denote the time when the last process decides on a value. - Execution 2. All processes in B crash at the beginning without issuing any message. All the processes in A still achieve consensus and, by validity, decide A. Let TA denote the time when the last process decides on a value. Let us now consider the following Execution 3. All the processes in A are suspected by each process in B, and vice versa (at the same time as Executions 1 and 2, respectively). 
No message between a process in A and a process in B is delivered before max(TA, TB). No process restores any process in the other group before max(TA, TB). It is immediate to see that a process in A cannot distinguish between Executions 2 and 3. Similarly, a process in B cannot distinguish between Executions 1 and 3. Therefore, all processes in A decide A, all processes in B decide B, and agreement is violated.",0.1,M1_preference_data_358 "Consider the following definition of trees representing higher-order functions, as well as a recursive function subst0. 1 enum Expr: 2 case C(c: BigInt) 3 case N(name: String) 4 case BinOp(op: BinOps, e1: Expr, e2: Expr) 5 case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr) 6 case Call(fun: Expr, arg: Expr) 7 case Fun(param: String, body: Expr) 8 9 import Expr._ 10 11 enum BinOps: 12 case Plus, Minus, Times, Power, LessEq 13 14 def subst0(e: Expr, n: String, r: Expr): Expr = e match 15 case C(c) => e 16 case N(s) => if s == n then r else e 17 case BinOp(op, e1, e2) => 18 BinOp(op, subst0(e1, n, r), subst0(e2, n, r)) 19 case IfNonzero(cond, trueE, falseE) => 20 IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r)) 21 case Call(f, arg) => 22 Call(subst0(f, n, r), subst0(arg, n, r)) 23 case Fun(formal, body) => 24 if formal == n then e 25 else Fun(formal, subst0(body, n, r)) And consider the following expression: 1 val e = Call(N(""exists""), Fun(""y"", Call(Call(N(""less""), N(""x"")), N(""y"")))) What is subst0(e, ""y"", C(42)) equal to?"," \item representing words as embeddings allows finer-grained semantics beyond word overlap to be captured by the metric \item more correlated with human judgment",0.1,M1_preference_data_359 "Assume you are working on SuperQuiz, a trendy app that lets everyone design quizzes and share them with friends! SuperQuiz recently hired a new CEO, who wants to improve the development practices using modern methods. 
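The subst0 question quoted above can be sanity-checked mechanically. Below is a hedged Python mirror of the Scala AST and recursion (a hypothetical port covering only the node types that appear in the expression e, not the original exam code). Because Fun("y", ...) binds the very name being substituted, substituting "y" leaves the expression unchanged.

```python
from dataclasses import dataclass

# Hypothetical Python mirror of the Scala Expr cases used by e.
@dataclass(frozen=True)
class C:
    c: int

@dataclass(frozen=True)
class N:
    name: str

@dataclass(frozen=True)
class Call:
    fun: object
    arg: object

@dataclass(frozen=True)
class Fun:
    param: str
    body: object

def subst0(e, n, r):
    """Substitute r for occurrences of name n in e, mirroring the Scala subst0."""
    if isinstance(e, C):
        return e
    if isinstance(e, N):
        return r if e.name == n else e
    if isinstance(e, Call):
        return Call(subst0(e.fun, n, r), subst0(e.arg, n, r))
    if isinstance(e, Fun):
        # The parameter shadows n: do not substitute inside the body.
        return e if e.param == n else Fun(e.param, subst0(e.body, n, r))
    raise TypeError(e)

e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))
# "y" is bound by Fun("y", ...), so subst0(e, "y", C(42)) returns e unchanged.
assert subst0(e, "y", C(42)) == e
```

By contrast, substituting the free name "x" does rewrite the body, which is the behavior the Scala shadowing test on `formal == n` is designed to distinguish.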
However, this CEO has no engineering background, so the suggested improvements are well intentioned but not always feasible. The latest CEO suggestion is this: ""Continuous Integration is a modern best practice. We must adopt it, so that the code in our repository never has bugs. From now on, all branches in the SuperQuiz repository must have continuous integration enabled, and at the end of each day all branches must pass all tests."" Propose (in 1-2 sentences) a compromise that achieves the CEO's true objective:","The result of scanRight1 is not the same as scanLeft1. Consider the length two sequence $A = (a_1, a_2)$. $$A.scanLeft1(f) = (a_1, f(a_1, a_2))$$ but $$A.scanRight1(f) = (f(a_1, a_2), a_2)$$ These are not equal for most choices of $f$. You may take $f(x, y) := y$ as an example. In this case, $$A.scanLeft1(f) = (a_1, a_2)$$ and $$A.scanRight1(f) = (a_2, a_2)$$ which are unequal if $a_1 \not = a_2$.",0.1,M1_preference_data_360 "The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: We want to use lexical cohesion to decide whether the provided text consists of one single topical segment corresponding to both sentences, or of two distinct topical segments, each corresponding to one of the sentences. Let's define the lexical cohesion of any set of words (in canonical form) as the average lexical distance between all pairs of words present in the set. The lexical distance between any two words is defined as the length of a shortest path between the two words in the available ontology. For example, 'freedom' and 'happiness' are at distance 2 (length, i.e.
number of links, of the path: happiness −→ abstract entities −→ freedom), while 'freedom' and 'dog' are at distance 6 (length of the path: freedom −→ abstract entities −→ non animate entities −→ all −→ animate entities −→ animals −→ dog) Compute the lexical distance between all the pairs of words present in the above text and in the provided ontology (there are 6 such pairs).","(a) The set of classifiers is not convex because $\{-1,1\}$ is discrete. (b) The indicator function is not convex because it is not continuous.",0.1,M1_preference_data_361 "Assume the company you're working in recently hired a new CEO, who wants to improve development using modern methods. However, this CEO does not have an engineering background, so his suggestions are well-intentioned but not always feasible. The CEO comes to you with a new suggestion: > Continuous integration is a good practice. We must adopt it, so that our code never has bugs. > All branches in all our repositories must use continuous integration, and at the end of each day all branches must pass continuous integration. Explain to the CEO why his goal is not realistic",1,0.1,M1_preference_data_362 "What happens in our ""Consensus-Based Total-Order Broadcast"" algorithm, if the set of messages delivered in a round is not sorted deterministically after deciding in the consensus abstraction, but before it is proposed to consensus?","Specialize adjectives even further, for instance: VP -> VBe Adj+ N -> Adj- N Adj+ -> Adj+PP PP Adj+ -> looking LookPP where Adj+PP is the kind of adjective that could be complemented with a PP. Furthermore, what should be avoided is the accumulation of PPs on the same non-terminal, i.e. we should NOT have any X -> X PP with the same X on both left and right.
The main idea here is to go for a feature grammar and lexicalize some of the dependencies.",0.1,M1_preference_data_363 "Having the following stats: - $X \sim Uniform(0,1)$ - $Y \sim Uniform(0,1)$ - $Z = X/2 + Y/2 + 0.1$ - $K = Y + 0.1$ What are the expected values and the variance of 𝑋, 𝑌, 𝑍, and 𝐾?","Lemmatized and with number entity recognition, this could lead to: Dow industrial tumble hurt GM sale forecast economic report oil rise \$ If a multi-set representation is even included in the preprocessing (this was not expected as an answer), the output could even be: (\$,1) (,2) (Dow,1) (GM,1) (economic,1) (forecast,1) (hurt,1) (industrial,1) (oil,1) (report,1) (rise,1) (sale,1) (tumble,1)",0.1,M1_preference_data_364 "Given the following function sums: 1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match 2 case Nil => List((c, 1)) 3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) 4 5 def sums(digits: List[Int]): List[(Int, Int)] = 6 digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? 1 def mystery3(digits: List[Int]): Int = sums(digits) match 2 case Nil => 0 3 case t => t.reduceLeft((a, b) => (a._1 * a._2 + b._1 * b._2, 1))._1","VLIW processors rely on the compiler to schedule instructions while OoO processors dynamically schedule instructions based on a hardware mechanism. ",0.1,M1_preference_data_365 "A beautiful result by the Swiss mathematician Leonhard Euler (1707 - 1783) can be stated as follows: \begin{itemize} \item[] Let $G= (V,E)$ be an undirected graph. If every vertex has an even degree, then we can orient the edges in $E$ to obtain a directed graph where the in-degree of each vertex equals its out-degree. \end{itemize} In this problem, we address the problem of correcting an imperfect orientation $A$ to a perfect one $A'$ by flipping the orientation of the fewest possible edges.
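The sums/mystery3 pair quoted above can be ported to Python to check the intended behavior (a hypothetical port, not part of the original exam material). Since each (digit, count) pair produced by sums contributes digit * count to the reduction, mystery3 computes the sum of the digits in the input list.

```python
from functools import reduce

def add(c, acc):
    """Mirror of the Scala add: bump the count of digit c in a (digit, count) list."""
    if not acc:
        return [(c, 1)]
    (d, k), rest = acc[0], acc[1:]
    return [(d, k + 1)] + rest if d == c else [(d, k)] + add(c, rest)

def sums(digits):
    # digits.foldRight(Nil)(add): a right fold, i.e. feed digits from the right.
    return reduce(lambda acc, c: add(c, acc), reversed(digits), [])

def mystery3(digits):
    t = sums(digits)
    if not t:
        return 0
    # Each (digit, count) pair contributes digit * count: the sum of all digits.
    return reduce(lambda a, b: (a[0] * a[1] + b[0] * b[1], 1), t)[0]

assert sums([1, 2, 2, 3]) == [(3, 1), (2, 2), (1, 1)]
assert mystery3([1, 2, 2, 3]) == 1 + 2 + 2 + 3
```

Tracing [1, 2, 2, 3]: sums yields [(3, 1), (2, 2), (1, 1)] and the reduction folds to (3·1 + 2·2 + 1·1, 1) = (8, 1), whose first component is the digit sum.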
The formal problem statement is as follows: \begin{description} \item[Input:] An undirected graph $G=(V,E)$ where every vertex has an even degree and an orientation $A$ of $E$. That is, for every $\{u,v\}\in E$, $A$ either contains the directed edge $(u,v)$ that is oriented towards $v$ or the directed edge $(v,u)$ that is oriented towards $u$. \item[Output:] An orientation $A'$ of $E$ such that $|A'\setminus A|$ is minimized and \begin{align*} \underbrace{|\{u\in V : (u,v) \in A'\}|}_{\mbox{\scriptsize in-degree}} = \underbrace{|\{u\in V: (v,u) \in A'\}|}_{\mbox{\scriptsize out-degree}} \qquad \mbox{for every $v\in V$}. \end{align*} \end{description} \noindent {Design and analyze} a polynomial-time algorithm for the above problem. \\ {\em (In this problem you are asked to (i) design the algorithm, (ii) analyze its running time, and (iii) show that it returns a correct solution. Recall that you are allowed to refer to material covered in the lecture notes.)} \\[1cm] \setlength{\fboxsep}{2mm} \begin{boxedminipage}{\textwidth} An example is as follows: \begin{center} \begin{tikzpicture} \begin{scope} \node at (0, 2) {\small $G$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge (b); \draw (b) edge (c); \draw (c) edge (d); \draw (d) edge (a); \end{scope} \begin{scope}[xshift=5.5cm] \node at (0, 2) {\small $A = \{(a,b), (c,b), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[<-] (c); \draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \begin{scope}[xshift=11cm] \node at (0, 2) {\small $A' = \{(a,b), (b,c), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[->] (c); 
\draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \end{tikzpicture} \end{center} The solution $A'$ has value $|A' \setminus A| = 1$ {\small (the number of edges for which the orientation was flipped).} \end{boxedminipage}",It is not suitable as the item doesn't describe what the login experience should be.,0.1,M1_preference_data_366 "Concatenating two conc-trees of heights $h_1$ and $h_2$ yields a conc-tree with height $h$ where","We find that relative age impacts the success of athletes as adults (measured by having a Wikipedia page), however, we have seen that, once you have become a(n) (adult) professional, athletes with higher relative age are not necessarily more famous. This could be explained by the DAG above — ""relative age"" only impacts ""success before adulthood"", which itself impacts ""success as an adult"". Skill, on the other hand, impacts success both before and after adulthood. Consider two equally successful child athletes, one born on March 31 and the other on April 1: it is likely that the former is more skilled, since they are younger and thus likely less developed than their peers. Therefore, as adults, the athlete born on March 31 is more likely to be successful, since there is a causal path between ""skill"" and ""success as an adult"".",0.1,M1_preference_data_367 Assume you are working in a company on the back-end of a mobile application. Your code crashes with a `MaxSpotFleetRequestCountExceeded` error. Who is your web service provider?,"Using MLE, the probabilities of the observed bigrams are proportional to their numbers of occurrence: Xc: 2/18; Xh: 1/18; Xt: 1/18; at: 2/18; ca: 1/18; cu: 1/18; eX: 2/18; ha: 1/18; he: 2/18; tX: 2/18; th: 2/18; ut: 1/18 and all the others are 0.
Thus the probability of any sequence containing an unseen bigram is 0 (as a product of terms, at least one of which is 0), which is the case for both sequences (bigram 'ch' never seen)",0.1,M1_preference_data_368 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Consider the parallel sequence xs containing the three elements x1, x2 and x3. Also consider the following call to aggregate: xs.aggregate(z)(f, g) The above call might potentially result in the following computation: f(f(f(z, x1), x2), x3) But it might also result in other computations. Come up with at least two other computations in terms of f and g that may result from the above call to aggregate. Below are other examples of calls to aggregate. In each case, check if the call can lead to different results depending on the strategy used by aggregate to aggregate all values contained in data down to a single value. You should assume that data is a parallel sequence of values of type BigInt. 2. data.aggregate(0)((acc, x) => x - acc, _ + _)","False: Nothing can be said about process i, because of “eventually”.",0.1,M1_preference_data_369 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Discuss the implementations from questions 4 and 5. Which one do you think would be more efficient?","In the same manner, $D(n) = D(n/2) + \Theta(n)$ which is $\Theta(n)$.",0.1,M1_preference_data_370 "We will analyze the $K$-means algorithm and show that it always converges.
Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2} $$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k} \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$ ? Compute the closed-form formula for the $\boldsymbol{\mu}_{k}$. To which step of the $K$-means algorithm does it correspond?","The idea is wrong. Even if the interface remains the same since we are dealing with character strings, a decorator does not make sense because the class returning JSON cannot be used without this decorator; the logic for extracting the weather prediction naturally belongs to the weather client in question. It is therefore better to create a class containing both the download of the JSON and the extraction of the weather forecast.",0.1,M1_preference_data_371 Implement cosine similarity between two vectors,"We first find all the frequent itemsets of size one. The minimal support requirement is 0.6, which means that to be frequent an itemset must occur in at least 5 out of the 8 transactions, 5/8 = 0.625 ≥ 0.6. There are five frequent itemsets: {clouds} support: 0.75 {wind} support: 0.625 {lightning} support: 0.625 {rain} support: 0.625 {fire} support: 0.75 From the above itemsets we next generate all possible itemsets of size 2 and prune the itemsets with support below 0.6. Only two itemsets remain: {lightning, fire} support: 0.625 {clouds, rain} support: 0.625 It is not possible to generate the itemsets of size 3 out of the above 2 itemsets, the intersection is empty.
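For the "Implement cosine similarity between two vectors" item above, here is a minimal pure-Python sketch (the sample vectors are illustrative; as a cross-check, the hint value $5/\sqrt{34}$ from the dendrogram exercise arises as the cosine of (3, 5) and (0, 1)):

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = <u, v> / (||u|| * ||v||); returns 0.0 if either vector is zero."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return 0.0 if nu == 0 or nv == 0 else dot / (nu * nv)

assert cosine_similarity([1, 0], [1, 0]) == 1.0   # identical directions
assert cosine_similarity([1, 0], [0, 1]) == 0.0   # orthogonal vectors
# Reproduces one of the hint values: cos((3,5), (0,1)) = 5 / sqrt(34).
assert abs(cosine_similarity([3, 5], [0, 1]) - 5 / math.sqrt(34)) < 1e-12
```

Because cosine similarity depends only on direction, scaling a document's term-count vector (e.g. repeating the document) leaves all similarities unchanged.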
Based on the itemsets of size 2 we generate all possible association rules and compute their confidence: {lightning} → {fire} support: 0.625 confidence: 1.0 {fire} → {lightning} support: 0.625 confidence: 0.833 {clouds} → {rain} support: 0.625 confidence: 0.833 {rain} → {clouds} support: 0.625 confidence: 1.0 There are only two association rules with confidence equal to 1 and that is the final solution.",0.1,M1_preference_data_372 In class we saw that Karger's min-cut algorithm implies that an undirected graph has at most $n \choose 2$ minimum cuts. Show that this result is tight by giving a graph with $n$ vertices and $n \choose 2$ minimum cuts.,"(d1) at (1,2) (d2) at (2,0) (d3) at (1,1) (d4) at (2,2) (d5) at (4,1) (d6) at (4,4)",0.1,M1_preference_data_373 "In Itanium's procedure call and return mechanism, what is the purpose of the \verb+alloc+ instruction? Where do you expect a compiler to place it? What is the meaning of its two arguments and how would a compiler determine their values? Describe what the processor does when it executes it (ignore here potential problems due to the limited number of registers).","As a professor, I want to send grades by uploading a single file for a course, so that I don't have to repeat the same action for 100s of students.",0.1,M1_preference_data_374 "How does a Prime+Probe cache attack work? What information does it typically reveal to the attacker about the victim code? ",False: Some process j can fail for a reason not related to the failure of process i.,0.1,M1_preference_data_375 "Consider the min-cost perfect matching problem on a bipartite graph $G=(A \cup B, E)$ with costs $c: E \rightarrow \mathbb{R}$.
Recall from the lecture that the dual linear program is \begin{align*} \text{Maximize} \quad & \sum_{a\in A} u_a + \sum_{b\in B} v_b\\ \text{Subject to} \quad &u_a + v_b \leq c(\{a,b\}) \qquad \mbox{for every edge $\{a,b\} \in E$.} \\ \end{align*} Show that the dual linear program is unbounded if there is a set $S \subseteq A$ such that $|S| > |N(S)|$, where $N(S) = \{ v\in B: \{u,v\} \in E \mbox{ for some $u\in S$}\}$ denotes the neighborhood of $S$. This proves (as expected) that the primal is infeasible in this case.","Suppose that accuracy is violated. Then, the processes might be relaying messages when this is not really necessary. This wastes resources, but does not impact correctness.",0.1,M1_preference_data_376 "What are the differences between statically scheduled HLS and dynamically scheduled HLS? ","- $E(X) = 0.5$, $Var(X) = 1/12$ - $E(Y) = 0.5$, $Var(Y) = 1/12$ - $E(Z) = 0.6$, $Var(Z) = 1/24$ - $E(K) = 0.6$, $Var(K) = 1/12$",0.1,M1_preference_data_377 "The company finally decides to implement a hybrid model consisting of a 4-gram character model combined (independently) with a 3-gram word model. How many parameters would such a hybrid model have in total? Provide the answer in the form 10^A + 10^B (for instance, write ""10^7 + 10^9"").",False: Because of “eventually”.,0.1,M1_preference_data_378 "Implement Community Influencers by doing the following steps: - Isolate each community from the graph. - Select the node with the **maximum pagerank** within each community as the **influencer** of that community. - Break ties arbitrarily. - Hint: Useful functions: `nx.pagerank()`, `G.subgraph()`.","IR: find documents based on lexical cohesion wrt query words. automatic summarization: check coherence of extracted sentences semantic disambiguation of possible choices in spelling error correction (e.g. bag or bug for 'bxg') WSD Machine translation (semantic filter)",0.1,M1_preference_data_379 Explain the difference between inflectional and derivational morphology.
Illustrate your explanation with concrete examples in English or French.," ``OldDest'' is in the ROB to know, upon commit, which physical register has just become free. ``LogDest'' is necessary in case of squashing instructions from the ROB to restore a previous mapping.",0.1,M1_preference_data_380 Prove that x + |x - 7| ≥ 7,"G consists of syntactic rules. G should be complemented with lexical rules with the following format: T --> w, where T is a pre-terminal (i.e. a Part-of-Speech tag) and w is a terminal (i.e. a word).",0.1,M1_preference_data_381 "Assume you are working in a company on the back-end of a mobile application. One colleague explains that the team had originally configured the repo so that only code with >80% path coverage could be merged, but have now dropped this requirement. In one sentence, give one possible reason behind this removal.","import scipy.stats as stats
amean = stats.bootstrap((df.pageviews.values,), statistic=np.mean)
gmean = stats.bootstrap((df.pageviews.values,), statistic=mstats.gmean)
print(""Arith. mean 95%CI:"", amean.confidence_interval.low, amean.confidence_interval.high)
print(""Geom. mean 95%CI:"", gmean.confidence_interval.low, gmean.confidence_interval.high)",0.1,M1_preference_data_382 "Given a document collection with a vocabulary consisting of three words, $V = {a,b,c}$, and two documents $d_1$ = aabc and $d_2 = abc$. The query is $q = ab$. Using standard vector space retrieval, is it possible to enforce both a ranking $d_1 > d_2$ and $d_2 > d_1$ by adding suitable documents to the collection. If yes, give examples of such documents to be added, if no, provide an argument why this cannot be the case.",Yes,0.1,M1_preference_data_383 "A multiset is an unordered collection where elements can appear multiple times.
We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: 1 type Multiset = Char => Int The intersection of two multisets a and b contains all elements that are in both a and b. For example, the intersection of a multiset {’b’, ’b’, ’b’, ’c’} and a multiset {’b’, ’a’, ’b’} is a multiset {’b’, ’b’}. What should replace ??? so that the intersection function is correct? 1 def intersection(a: Multiset, b: Multiset): Multiset = ???","Doing so improves probability estimation and inference (provided that we have enough learning data), because these are not independent terms, thus the probabilities of the NERs are not (higher) the products of their terms (probabilistic independence). A drawback is that NER can be wrong at this stage. It's better not to take decisions too early in the process: shipping all the alternatives to the next modules would be better.",0.1,M1_preference_data_384 "In this problem we are going to formally analyze the important median trick. Suppose that we have a streaming algorithm for distinct elements that outputs an estimate $\hat d$ of the number $d$ of distinct elements such that \begin{align*} \Pr[\hat d > 3d] \leq 47 \% \qquad \mbox{and} \qquad \Pr[\hat d < d/3] \leq 47\%\,, \end{align*} where the probabilities are over the randomness of the streaming algorithm (the selection of hash functions). In other words, our algorithm overestimates the true value by a factor of 3 with a quite large probability $47\%$ (and also underestimates with large probability). We want to do better! An important and useful technique for doing better is the median trick: run $t$ independent copies in parallel and output the median of the $t$ estimates (it is important that it is the median and \emph{not} the mean as a single horrible estimate can badly affect the mean).
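The multiset intersection asked about above is a pointwise minimum of the two count functions (in the Scala representation, presumably something like ch => math.min(a(ch), b(ch))); here is a hedged Python sketch of the same idea:

```python
def multiset(counts):
    """Build a Char => Int multiset from a dict of counts (0 for absent chars)."""
    return lambda ch: counts.get(ch, 0)

def intersection(a, b):
    # An element appears in the intersection as many times as it appears in
    # BOTH multisets: the pointwise minimum of the two count functions.
    return lambda ch: min(a(ch), b(ch))

a = multiset({'b': 3, 'c': 1})   # {'b','b','b','c'}
b = multiset({'b': 2, 'a': 1})   # {'b','a','b'}
inter = intersection(a, b)
assert inter('b') == 2   # min(3, 2) -- matches the {'b','b'} example
assert inter('c') == 0   # 'c' is only in a
```

This reproduces the worked example from the question: intersecting {’b’, ’b’, ’b’, ’c’} with {’b’, ’a’, ’b’} yields {’b’, ’b’}.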
Prove that if we select $t = C \ln(1/\delta)$ for some large (but reasonable) constant $C$, then the estimate $\hat d$ given by the median trick satisfies \begin{align*} d/3 \leq \hat d \leq 3d \qquad \mbox{with probability at least $1-\delta$.} \end{align*} \emph{Hint: an important tool in this exercise are the Chernoff Bounds, which basically say that sums of independent variables are highly concentrated.} Two such bounds can be stated as follows. Suppose $ X_1, X_2, \dots, X_n$ are independent random variables taking values in $\{0,1\}$. Let $X$ denote their sum and let $\mu = \mathbb{E}[X]$ denote the sum's expected value. Then for any $\delta \in (0,1)$, \begin{align*} \Pr[ X \leq (1- \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{2}} \qquad \mbox{and} \qquad \Pr[ X \geq (1+ \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{3}}\,. \end{align*}","def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B = if this.isEmpty then z else this.map((x: A) => f(z, x)).reduce(g)",0.1,M1_preference_data_385 "In this exercise, we will see how to combine the Principal Component Analysis (PCA) and the kernel method into an algorithm known as kernel PCA. We are given $n$ observations in a low dimensional space $\mathbf{x}_{1}, \cdots, \mathbf{x}_{n} \in \mathbb{R}^{L}$ and we consider a kernel $k$ and its associated features $\operatorname{map} \phi: \mathbb{R}^{L} \mapsto \mathbb{R}^{H}$ which satisfies: $$ k(\mathbf{x}, \mathbf{y})=\langle\phi(\mathbf{x}), \phi(\mathbf{y})\rangle_{\mathbb{R}^{H}} $$ where $\langle\cdot, \cdot\rangle_{\mathbb{R}^{H}}$ is the standard scalar product of $\mathbb{R}^{H}$. 
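To see concretely why the median trick described above uses the median and not the mean, here is a small deterministic Python illustration with hypothetical estimate values (not from the exercise): a minority of wildly wrong copies cannot move the median out of the good window, but a single huge overestimate ruins the mean.

```python
from statistics import median

def median_trick(estimates):
    """Combine t independent estimates by taking their median (not the mean)."""
    return median(estimates)

d = 1000  # hypothetical true number of distinct elements
# Hypothetical run of t = 7 independent copies: two copies fail badly (one
# huge over-estimate, one huge under-estimate), five land within [d/3, 3d].
estimates = [10**9, 900, 1100, 950, 1050, 1000, 1]

assert d / 3 <= median_trick(estimates) <= 3 * d                 # median survives
assert not (d / 3 <= sum(estimates) / len(estimates) <= 3 * d)   # mean is ruined
```

The Chernoff-bound argument formalizes this: since each copy is good with probability at least 53%, the number of good copies among t concentrates above t/2, so the median falls inside the good window with probability at least 1 - δ.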
We define the empirical covariance matrix and the empirical covariance matrix of the mapped observations as: $$ \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top} \quad \text { and } \quad \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top} $$ The kernel matrix $\mathbf{K}$ is defined by: $$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}} $$ We also define the data matrix and the corresponding matrix of the mapped data as: $$ \mathbf{X}:=\left(\begin{array}{c} \mathbf{x}_{1}^{\top} \\ \cdots \\ \mathbf{x}_{n}^{\top} \end{array}\right) \in \mathbb{R}^{n \times L} \quad \text { and } \quad \mathbf{\Phi}:=\left(\begin{array}{c} \phi\left(\mathbf{x}_{1}\right)^{\top} \\ \cdots \\ \phi\left(\mathbf{x}_{n}\right)^{\top} \end{array}\right) \in \mathbb{R}^{n \times H} . $$ Finally we denote the eigenpairs (eigenvalues and eigenvectors) of $\boldsymbol{\Sigma}^{\mathbf{H}}$ by $\left\{\left(\lambda_{i}, \mathbf{v}_{i}\right)\right\}_{i=1}^{H}$ and those of $\mathbf{K}$ by $\left\{\left(\rho_{j}, \mathbf{w}_{j}\right)\right\}_{j=1}^{n}$. We also assume that the vectors $\mathbf{v}_{i}$ and $\mathbf{w}_{j}$ are normalized. Thus: $$ \boldsymbol{\Sigma}^{\mathbf{H}} \mathbf{v}_{i}=\lambda_{i} \mathbf{v}_{i}, \quad\left\|\mathbf{v}_{i}\right\|_{2}=1 \quad \text { and } \quad \mathbf{K} \mathbf{w}_{j}=\rho_{j} \mathbf{w}_{j}, \quad\left\|\mathbf{w}_{j}\right\|_{2}=1 $$ Let us remind that we assume in the kernel setting that we can compute $k(\mathbf{x}, \mathbf{y})$ but that we cannot directly compute $\phi(\mathbf{x})$ What we would like to do is to first map the data into the high-dimensional space using the features map $\phi$ and then to apply the standard PCA algorithm in the high-dimensional space $\mathbb{R}^{H}$. 
This would amount to: (a) Computing the empirical covariance matrix $\boldsymbol{\Sigma}^{\mathbf{H}}$ of the mapped data $\phi\left(\mathbf{x}_{i}\right)$. (b) Computing the eigenvectors $\mathbf{v}_{1}, \cdots, \mathbf{v}_{N}$ associated with the $N$ largest eigenvalues of $\boldsymbol{\Sigma}^{\mathbf{H}}$. (c) Computing the projection $\Pi\left(\phi\left(\mathbf{x}_{i}\right)\right) \in \mathbb{R}^{L}$ for each data point onto these eigenvectors, where the $j$-th component of the projection is given by: $$ \Pi_{j}\left(\phi\left(\mathbf{x}_{i}\right)\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \mathbf{v}_{j}\right\rangle_{\mathbb{R}^{H}} $$ Explain why we cannot directly apply the algorithm explained above.","VLIWs use the same scheduling techniques as static HLS, so the behavior is qualitatively the same.",0.1,M1_preference_data_386 "Implement User-based collaborative filtering using the following formula: \begin{equation} {r}_{x}(a) = \bar{r}_{x} + \frac{\sum\limits_{y \in N_{U}(x)} sim(x, y) (r_{y}(a) - \bar{r}_{y})}{\sum\limits_{y \in N_{U}(x)}|sim(x, y)|} \end{equation} You will create a function that takes as input the ratings and the similarity matrix and gives as output the predicted ratings. ","$$ (\mathbf{A}+\mathbf{B}) \mathbf{v}=\mathbf{A} \mathbf{v}+\mathbf{B} \mathbf{v}=\lambda_{A} \mathbf{v}+\lambda_{B} \mathbf{v}=\left(\lambda_{A}+\lambda_{B}\right) \mathbf{v} $$",0.1,M1_preference_data_387 "Assume you are working on SuperQuiz, a trendy app that lets everyone design quizzes and share them with friends! SuperQuiz recently hired a new CEO, who wants to improve the development practices using modern methods. However, this CEO has no engineering background, so the suggested improvements are well intentioned but not always feasible. The latest CEO suggestion is this: ""Continuous Integration is a modern best practice. We must adopt it, so that the code in our repository never has bugs. 
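The user-based collaborative filtering formula above can be sketched directly in pure Python (the dict-based ratings, similarity values, and user/item names are hypothetical toy data, not from the exam):

```python
def predict(ratings, sims, x, a, neighbors):
    """
    r_x(a) = rbar_x + sum_y sim(x,y) * (r_y(a) - rbar_y) / sum_y |sim(x,y)|
    ratings: dict user -> dict item -> rating; sims: dict (x, y) -> similarity.
    """
    def mean(u):
        vals = ratings[u].values()
        return sum(vals) / len(vals)
    num = sum(sims[(x, y)] * (ratings[y][a] - mean(y)) for y in neighbors)
    den = sum(abs(sims[(x, y)]) for y in neighbors)
    return mean(x) + num / den

# Toy check: one neighbor y with sim 1.0 who rated item 'a' one point above
# its own mean shifts x's prediction one point above x's mean (4 + 1 = 5).
ratings = {'x': {'b': 3, 'c': 5}, 'y': {'a': 4, 'b': 2, 'c': 3}}
sims = {('x', 'y'): 1.0}
assert predict(ratings, sims, 'x', 'a', ['y']) == 5.0
```

Note the mean-centering: only each neighbor's deviation from its own average rating is transferred, which compensates for users who rate systematically high or low.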
From now on, all branches in the SuperQuiz repository must have continuous integration enabled, and at the end of each day all branches must pass all tests."" Give two reasons (1 sentence each) explaining why the goal and implementation of the CEO's suggestion go too far:","Recall from the previous question that $g^\star(\xv)=2\eta(\xv)-1$. Moreover, by definition, we have $\eta(\xv)= \mathbb P(Y=1|X=\xv)$. Thus, the stated classifier $f$ is indeed the Bayes classifier, as $f(\xv) = \sign(g^\star(\xv)) = \sign(2\eta(\xv)-1) \in \argmax_{y\in\{-1,1\}}\ \mathbb P (Y=y|X=\xv)$.",0.1,M1_preference_data_388 "Given the following method: 1 def mystery6(nIter: Int) (ss: List[String] ): List[String] = 2 if nIter <= 0 then ss 3 else mystery6 (nIter - 1) ( 4 for 5 s <- ss 6 c <- List (’c’ , ’b’ , ’a’) 7 yield 8 s + c 9 ) ::: ss What is the output if we call mystery6 this way: mystery6(5)(List("""")).filter(_.exists(_ == ’b’))(0)"," Dependent instructions should be at distance two: \begin{verbatim} add r5, r2, r1 add r23, r3, r1 mul r7, r12, r5 mul r8, r12, r23 add r5, r4, r1 \end{verbatim} Renaming is needed to remove a name dependence. This saves two cycles.",0.1,M1_preference_data_389 "Assume you are working in a company on the back-end of a mobile application. You are tasked with improving the integration of the authentication via Google in your app, but your manager tells you: ""Don't run any tests today, we only have a few API calls left for today's rate limit, we need to preserve those for customers."" In 1-2 sentences, propose a change to the codebase to avoid this problem.","The + operation on Float8 as defined is commutative. We use in particular the fact that the x + y operation only performs addition if x.exp is less than or equal to y.exp, and flips the call to y + x otherwise. Consider three cases, (1) x.exp < y.exp, (2) x.exp > y.exp, and (3) x.exp == y.exp: (1) if x.exp < y.exp: we follow the evaluation of the expression y + x.
The function checks if y.exp <= x.exp; however, this is not the case by assumption, so it returns x + y. Hence, y + x == x + y. (2) if x.exp > y.exp: we follow the evaluation of the expression x + y as in case 1. The function checks if x.exp <= y.exp; however, this is not the case by assumption, so it returns y + x. Hence, x + y == y + x. (3) if x.exp == y.exp: following the evaluation of x + y and y + x, we enter the if condition. shift == 0, and thus mant = (x.mant >> 0) + y.mant = x.mant + y.mant, in both cases. Note that the result is computed in terms of mant and y.exp == x.exp. Since mant and exp are the same in both cases, we have x + y == y + x.",0.1,M1_preference_data_390 "Assume you are working on a mobile application. In the daily standup, you mention you are having issues with JavaFX. Before you can give more details, your team's JavaFX expert tells you to leave it at that and instead pass by her office afterwards. The Scrum Master disagrees and asks you to give more details. In one sentence, explain whether your Scrum Master is taking the right approach and why.","The simplest solution to produce the indexing set associated with a document is to use a stemmer associated with stop lists allowing to ignore specific non content bearing terms. In this case, the indexing set associated with D might be: $I(D)=\{2006$, export, increas, Switzerland, USA $\}$ A more sophisticated approach would consist in using a lemmatizer in which case, the indexing set might be: $I(D)=\{2006\_NUM$, export\_Noun, increase\_Verb, Switzerland\_ProperNoun, USA\_ProperNoun$\}$",0.1,M1_preference_data_391 "In the context of superscalar processors, what is the function of a Load Store Queue (LSQ)?","Every process simply uses Best-Effort Broadcast to send its proposal to every other process. Upon receiving all proposals, a process decides COMMIT if it only received COMMIT proposals. It decides ABORT otherwise.
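The commutativity case analysis above can be spot-checked with a simplified Float8 model in Python (an assumption-laden sketch: it keeps only the align-to-larger-exponent and flip-operands behavior described in the proof, and ignores mantissa overflow and normalization):

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Float8:
    mant: int
    exp: int

def add(x, y):
    """Simplified Float8 '+': align to the larger exponent, else flip operands."""
    if x.exp <= y.exp:
        shift = y.exp - x.exp
        return Float8((x.mant >> shift) + y.mant, y.exp)
    return add(y, x)  # flip the call, exactly as in the quoted definition

# The three cases of the proof, exhaustively: for every pair, x + y == y + x.
for xm, xe, ym, ye in product(range(4), repeat=4):
    x, y = Float8(xm, xe), Float8(ym, ye)
    assert add(x, y) == add(y, x)
```

The exhaustive loop mirrors the proof's structure: when exponents differ, one side just delegates to the other; when they are equal, the shift is 0 and the mantissa sum is symmetric.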
Under the assumption that no process crashes, every process eventually receives the proposal of every other process, and decides. No failure detector was needed. Indeed, termination is not guaranteed if any process crashes.",0.1,M1_preference_data_392 "Are VLIWs capable of speculative execution? Explain precisely your answer and provide examples for Itanium, if appropriate.","Property 1 holds. Since the balance of from is read only once and snapshotted in balanceFrom, threads cannot read balance after the condition is evaluated, and so cannot write a negative value to the balance fields. Violation of property 2. Consider two threads T1 and T2 that execute concurrently transfer(from, to, amount) with the exact same parameters. Assume that the account from has sufficient funds for at least one transfer. Consider the following execution with interleaved operations:
Execution begins: from.balance = bf, to.balance = bt
T1: executes until it has computed the value balanceFrom - amount, but not written it to from.balance. from.balance = bf, to.balance = bt
T2: executes in its entirety the call to transfer. from.balance = bf - amount, to.balance = bt + amount
T1: resumes its execution and completes the call to transfer, in particular, overwriting the balance values. from.balance = bf - amount, to.balance = bt + 2 * amount
At the end of this execution, the total amount of money held by the bank has changed. It has, in fact, increased by the value amount. The bank loses money when people try to withdraw! You can try to see this behaviour by running this yourself!
def testSeq: (BigInt, BigInt) = {
  val A = Account(1)
  val B = Account(0)
  transfer(A, B, 1)
  transfer(A, B, 1)
  (A.balance, B.balance)
}

def test: (BigInt, BigInt) = {
  val A = Account(1)
  val B = Account(0)
  parallel(transfer(A, B, 1), transfer(A, B, 1))
  (A.balance, B.balance)
}

(1 to 100).map(x => testSeq).forall(_ == (0, 1)) // res0: Boolean = true
// you can see this error even in one execution, we add more just to make the probability 'reliable'
// you might still have to run it a few times to get the desired result
// but even a 1 in 100,000 error is too much for a bank!
val t = (1 to 200000).map(x => test)
t.forall(_ == (0, 1)) // res1: Boolean = false
t.exists(_ == (0, 2))",0.1,M1_preference_data_393 "If process i fails, then eventually all processes j≠i fail Is the following true? If a process j≠i fails, then process i has not failed","Let the bipartition of the bipartite graph $G=(V,E)$ be $A$ and $B$, and let $M$ be the current bipartite matching maintained by \textsc{AugmentingPathAlgorithm}. From $G$, obtain the directed graph where each edge in $M$ is directed from the vertex in $B$ to the vertex in $A$. All other edges $E\setminus M$ are directed from $A$ to $B$. Now start a breadth-first-search from the unmatched vertices in $A$. Note that if an unmatched vertex in $B$ is reached by the breadth-first-search, then we have found an augmenting path. The runtime analysis is \begin{itemize} \item $O(|E|)$ for the construction of the directed graph from $G$. \item $O(|V| + |E|)$ for the run of the breadth-first-search. \end{itemize} So we can find an augmenting path in time $O(|V| + |E|)$.
Since we increase the size of the matching each time we find an augmenting path, the total running time is $O(k (|V| + |E|)) = O(|V| (|V|+ |E|))$ where $k$ is the cardinality of a maximum matching.",0.1,M1_preference_data_394 "You have been provided with the following definitions for the possible meanings of the words ""balloon"" and ""plane"": balloon: - meaning 1: balloon --(hyponym)--> inflatable - meaning 2: balloon --(hyponym)--> transport plane: - meaning 1: plane --(hyponym)--> transport plane --(holonym)--> wing - meaning 2: plane --(hyponym)--> surface What type of approach has been used to produce this type of semantic representations? What principle does it rely on?",['(({a}*{b})/(1000*1000))+((1000-{a})*(1000-{b}))/(1000*1000)'],0.1,M1_preference_data_395 Implement MAP score,"def cycles3(nodes: Set[Node], edges: List[Edge]): Set[Node] = nodes.filter(node => reachable(3, Set(node), edges).contains(node))",0.1,M1_preference_data_396 "An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. Given the following definitions: 1 case class Student(gpa: Double) 2 3 def count(c: List[Student], student: Student): Double = 4 c.filter(s => s == student).size 5 6 val students = List( 7 Student(1.0), Student(2.0), Student(3.0), 8 Student(4.0), Student(5.0), Student(6.0) 9 ) And the expression e: 1 count(students, Student(6.0)) Is the expression e referentially transparent?","O( (f+1)n^2 )b in the binary case, or O( (f+1)n^3 )b in the non-binary case",0.1,M1_preference_data_397 "Assume your project depends on the latest available minor release version of a package instead of a hardcoded exact version, what are the pros and cons of this?","We define our estimator as follows: \begin{itemize} \item Let $t= 3/\epsilon^2$.
\item Run $t$ independent copies of $\Alg_1$ to obtain estimates $Z_1, Z_2, \ldots, Z_t$. \item Output $Y = \left( Z_1 + Z_2 + \ldots + Z_t \right)/t$. \end{itemize} Since $\Alg_1$ only asks one person, we have that our estimator only asks $3/\epsilon^2$ persons (as required). We now analyze the guarantee of our estimator. We have \begin{align*} \E[Y] = \E\left[\left( Z_1 + Z_2 + \ldots + Z_t \right)/t\right] = \frac{1}{t}\sum_{i=1}^t \E[Z_i] = W_F\,, \end{align*} where the last equality follows from the fact that $\Alg_1$ is an unbiased estimator, which was given to us in the statement. As seen in the lecture notes, we also have \begin{align*} \Var[Y] = \Var[Z_1]/t \end{align*} and so, by the problem statement, we know $\Var[Y]\leq W^2/t$. We now apply Chebyshev's inequality to analyze the guarantee of our estimator: \begin{align*} \Pr[|Y - W_F| \geq \epsilon W] \leq \frac{\Var[Y]}{\epsilon^2 W^2} \leq \frac{W^2}{t \epsilon^2 W^2}\leq 1/3\,, \end{align*} where the last inequality is due to the selection $t=3/\epsilon^2$. \grading{ \begin{itemize} \item 7 pts for correct algorithm/idea: \begin{itemize} \item 2 pts for using correct estimator ($\mathcal{A}_1$) \item 5 pts for an idea that uses at most $3\epsilon^{-2}$ times $\mathcal{A}_1$ \end{itemize} \item 8 pts for the analysis of the algorithm: \begin{itemize} \item 2 pts for correct expectation \item 3 pts for correct variance \item 3 pts for correct use of Chebyshev (statement only: 2 pts) \end{itemize} \item -2 pts if ""Markov"" is stated instead of ""Chebyshev"" \end{itemize} }",0.1,M1_preference_data_398 Implement the recall at k metric,1,0.1,M1_preference_data_399 "Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. In the minimum (unweighted) $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. We shall use a linear program to solve this problem.
Let ${P}$ be the set of all paths between $s$ and $t$ in the graph $G$. The linear program has a variable $y_e$ for each edge $e\in E$ and is defined as follows: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E. \end{array} \end{equation*} For example, consider the following graph where the numbers on the edges depict the $y_e$-values of a feasible solution to the linear program: \begin{center} \input{cutExample} \end{center} The values on the edges depict a feasible but not optimal solution to the linear program. That it is feasible follows because each $y_e$ is non-negative and $\sum_{e\in p} y_e \geq 1$ for all $p\in P$. Indeed, for the path $s, b, a, t$ we have $y_{\{s,b\}}+ y_{\{b,a\}} + y_{\{a,t\}} = 1/4 + 1/4 + 1/2 = 1$, and similar calculations for each path $p$ between $s$ and $t$ show that $\sum_{e\in p} y_e \geq 1$. That the solution is not optimal follows because its value is $2.5$ whereas an optimal solution has value $2$. Let $\opt$ denote the number of edges crossing a minimum $s,t$-cut and let $\optlp$ denote the value of an optimal solution to the linear program. Prove that $\optlp \leq \opt$. \\ {\em (In this problem you are asked to prove $\optlp \leq \opt$. Recall that you are allowed to refer to material covered in the lecture notes.)}
If it's running on multiple CPUs, it will take the same time, since the operation itself is the same (it hasn't been divided between multiple threads); the only difference is that it's running in parallel with the UI thread. In both cases, the app will become more fluid (without freezes), since the user will not have to wait until the operation finishes to continue interacting with the app.",0.1,M1_preference_data_400 "What happens in our ""Consensus-Based Total-Order Broadcast"" algorithm, if the set of messages decided on by consensus is not sorted deterministically at all?","Students born in April would be significantly older than the average students in their grade; thus, they may be faster, stronger, and better players simply because they are more advanced in their biological and psychological maturation. These students will be more likely to be selected for sports teams and grow fond of sports early on, becoming more likely to become successful (and thus be on Wikipedia).",0.1,M1_preference_data_401 "Suppose we have a universe $U$ of elements. For $A,B\subseteq U$, the Jaccard distance of $A,B$ is defined as $$ J(A,B)=\frac{|A\cap B|}{|A\cup B|}.$$ This definition is used in practice to calculate a notion of similarity of documents, webpages, etc. For example, suppose $U$ is the set of English words, and any set $A$ represents a document considered as a bag of words. Note that for any two $A,B\subseteq U$, $0\leq J(A,B)\leq 1$. If $J(A,B)$ is close to 1, then we can say $A\approx B$. Let $h: U\to [0,1]$ where for each $i\in U$, $h(i)$ is chosen uniformly and independently at random. For a set $S\subseteq U$, let $h_S:=\min_{i\in S} h(i)$. \textbf{Show that } $$ \Pr[h_A=h_B] = J(A,B).$$ Now, if we have sets $A_1, A_2,\dots,A_n$, we can use the above idea to figure out which pair of sets are ``close'' in time essentially $O(n|U|)$. We can also obtain a good approximation of $J(A,B)$ with high probability by using several independently chosen hash functions.
Note that the naive algorithm would take $O(n^2|U|)$ to calculate all pairwise similarities.","Firstly notice that for a tree the minimum cut is of size one. It is enough to consider $S=\{ v \}$ where $v$ is any leaf node. As such it has only one edge connecting it to the rest of the graph and the cut size is 1. Since a tree is connected there cannot be a 0-weight cut, so the minimum cut indeed contains only one edge. Now let's prove that Karger's algorithm always outputs a cut with only one edge. The starting graph $G=(V,E)$ is a tree, so it satisfies $\#E = \#V - 1$. At every step Karger's algorithm decreases the number of nodes by 1, and the number of edges by $\geq 1$ (in case of multiedges). At the end, since the starting graph is connected, we will end with $\geq 1$ edges between the two final nodes. Since the algorithm runs for $\#V-2$ steps, this means that it had to remove exactly $1$ edge at every step, and finish with exactly 1 edge between the two final nodes in order to satisfy the above inequalities. It follows that the obtained cut will have weight 1, i.e. it will be a minimum cut. Note also that at every step of the algorithm the obtained graph is still a tree, i.e. Karger's algorithm preserves acyclicity. Since at the end we finish with two nodes we know that there can be exactly one edge between them since the graph is a tree.",0.1,M1_preference_data_402 "Is the decision rule of the FloodSet algorithm so critical? In other words, is there any alternative decision rule we can have? If so, name one.","Without loss of generality, index $x^*$ so that $x^*_{1}, \dots, x^*_k > 0$ and $x^*_{k+1},\dots, x^*_n = 0$. Also let $a_1, a_2, \dots, a_k$ denote the columns of $A$ corresponding to the nonzero variables $x^*_1, \dots, x^*_k$. We start by showing that if $a_1,\dots, a_k\in \mathbb{R}^m$ are linearly dependent then $x^*$ is not an extreme point. 
This implies that ``any extreme point $x^*$ has at most $m$ non-zero entries, i.e., $|\{i: x^*_i > 0 \}| \leq m$'' since no more than $m$ vectors can be linearly independent in the $m$-dimensional space $\mathbb{R}^m$. Since we assume $a_1, \dots, a_k$ to be linearly dependent we can write $\sum_{i=1}^k \lambda_i a_i = 0$ for some scalars $\lambda_i$ that are not all equal to $0$. Now it is easy to verify that for a small enough $\varepsilon > 0$, \begin{align*} y := \left[ \begin{array}{c} x^*_1 \\ \vdots \\ x^*_k \\ x^*_{k+1} \\ \vdots\\ x^*_n \end{array} \right] + \varepsilon \cdot \left[ \begin{array}{c} \lambda_1 \\ \vdots \\ \lambda_k \\ 0 \\ \vdots\\ 0 \end{array} \right] \qquad \mbox{and} \qquad z := \left[ \begin{array}{c} x^*_1 \\ \vdots \\ x^*_k \\ x^*_{k+1} \\ \vdots\\ x^*_n \end{array} \right] - \varepsilon \cdot \left[ \begin{array}{c} \lambda_1 \\ \vdots \\ \lambda_k \\ 0 \\ \vdots\\ 0 \end{array} \right] \end{align*} are both feasible solutions to the linear program and $x^* = \tfrac{y+z}{2}$ and thus $x^*$ is not an extreme point. We now complete the proof of the good mood problem by showing that if $x^*$ is not an extreme point, then $a_1, \dots, a_k$ are linearly dependent. If $x^*$ is not an extreme point, then there is a vector $y\in \mathbb{R}^n \setminus \{0\}$ such that $x^*+y$ is feasible and $x^*-y$ is feasible. By simple rewriting, $Ax^* = b$ and $A(x^*+y) = b$ imply $Ay = 0$, i.e., $\sum_{i=1}^n a_i y_i =0$. By the nonnegativity constraints, we must have $y_{i} = 0$ for all $i>k$ (for suppose that $y_i \neq 0$ and $x^*_i = 0$; then either $x^*-y$ or $x^*+y$ would need to be negative on coordinate $i$ and thus infeasible). Hence we have that $\sum_{i=1}^k a_i y_i =0$, which shows that the vectors $a_1, \dots, a_k$ are linearly dependent.
In the proofs above we used the fact that $x^*$ is not an extreme point (i.e., it can be written as a convex combination of other feasible vectors) if and only if it can be written as a convex combination of \textbf{two} other feasible vectors. The proof here is as follows: suppose that $x^* = \sum_{i = 1}^n \lambda_i y_i$ for $\lambda_1, ..., \lambda_n > 0$, $\sum_i \lambda_i = 1$ and $y_1 \ne x^*$. Then we can rewrite \[ x^* = \sum_{i = 1}^n \lambda_i y_i = \lambda_1 y_1 + \sum_{i=2}^n \lambda_i y_i = \lambda_1 y_1 + (1 - \lambda_1) \cdot \left( \sum_{i=2}^n \frac{\lambda_i}{1 - \lambda_1} y_i \right) \] which is a convex combination of two other feasible points. The point $\sum_{i=2}^n \frac{\lambda_i}{1 - \lambda_1} y_i$ is feasible as a convex combination of feasible points (note that $\sum_{i=2}^n \lambda_i = 1 - \lambda_1$).",0.1,M1_preference_data_403 "Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. How could you address the problem of recommending movies to a new user without any ratings? [This is not a math question.]","No, this is unethical as it is lying to customers about what the app is doing and what work will be done in the future.",0.1,M1_preference_data_404 "Consider the following quadratic programming relaxation of the Max Cut problem on $G=(V,E)$: \begin{align*} \textbf{maximize} \hspace{0.8cm} & \sum_{\{i,j\} \in E} (1-x_i)x_j + x_i (1-x_j) \\ \textbf{subject to}\hspace{0.8cm} & x_i \in [0,1] ~ ~ \forall i\in V \end{align*} Show that the optimal value of the quadratic relaxation actually equals the value of an optimal cut.
(Unfortunately, this does not give an exact algorithm for Max Cut as the above quadratic program is NP-hard to solve (so is Max Cut).) \\ \noindent\emph{Hint: analyze basic randomized rounding.}","We solve this problem using matroid intersection. First observe that if the summation of the $t_i$ for $1 \leq i \leq k$ is not equal to $n-1$ then there is no feasible solution since we know that the number of edges in any spanning tree is exactly $n-1$. Therefore, we assume $\sum_{1 \leq i \leq k} t_i = n-1$. The ground set for both matroids that we use is the set of the edges $E$. The first matroid that we use is the graphic matroid. The second matroid that we use is a partition matroid with the following independent sets: \[ \mathcal{I} = \{ F \subseteq E \text{ }| \text{ } |F\cap E_i| \leq t_i, \quad \text{for } 1 \leq i \leq k\} \] As shown in class, both matroids defined above are indeed matroids. Now assume that $F$ is the maximum size independent set in the intersection of these two matroids (we saw in the class how we can find $F$). If $|F| < n-1$ it is not possible to find a solution for our problem, since any solution to our problem corresponds to a solution in the intersection of these two matroids of size $n-1$. Moreover, if $|F| = n-1$, then $F$ is a spanning tree and $|F\cap E_i| \leq t_i$. Also, we know that $|F| = n-1$ and $\sum_{1 \leq i \leq k} t_i = n-1$ and $E_i$'s are disjoint. Therefore $|F\cap E_i| = t_i$, so we get the desired solution.",0.1,M1_preference_data_405 "Assume that you are part of a team developing a mobile app using Scrum. One of your colleagues suggests that your team should organize daily Scrum meetings to discuss the progress of the tasks and how to implement complex features. He especially wants to discuss the implementation of a feature that will allow users to scan a QR code to get a discount, and would like some input from the team. What are your thoughts on this?","1.
The accesses to the hash table will pollute the cache and thus mask the real state of the cache after the victim execution. 2. Completely unrolling the probe loop and directly specifying the randomized sequence of probe accesses would remove this issue.",0.1,M1_preference_data_406 "Consider the following toy learning corpus of 59 tokens (using a tokenizer that splits on whitespaces and punctuation), out of a possible vocabulary of $N=100$ different tokens: Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This\linebreak encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous wave mode. Using a 2-gram language model, what are the values of the parameters corresponding to ""continuous wave"" and to ""pulsed laser"" using Maximum-Likelihood estimates?","Transform to CNF: $\mathrm{X} \rightarrow$ VBP VBG $\mathrm{VP} \rightarrow \mathrm{X} P \mathrm{PP}$ Chart: (next page) \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & S & & & & & & & & & \\ \hline & & & S & & & & & & & \\ \hline \multirow{2}{*}{NP} & & & . & $\mathrm{S}$ & & & & & & \\ \hline & NP VP & & & . & & & & & & \\ \hline \multirow{3}{*}{NP} & s. & PP & & & & $\mathrm{S}$ & & & & \\ \hline & NP VP & . & $\mathrm{NP}$ & & & I & VP & & & \\ \hline & + & PP & . & NP & & & & & & \\ \hline $\mathrm{NP}$ & & - & NP & .
& PP & & $\mathbf{X}$ & & PP & \\ \hline Det & N V NP VP & $\mathrm{P}$ & Det & N NP & $\mathrm{P}$ & N NP & VBP & VBG & $\mathrm{P}$ & $\mathrm{N} \mathrm{Nl}$ \\ \hline the & exports & from & the & USA & to & Switzerland & are & increasing & in & 2006 \\ \hline \end{tabular} \end{center}",0.1,M1_preference_data_407 Implement kNN function (finding k nearest documents for a given document),"$\argmax_{\wv:\|\wv\|=1} \text{Var}[\wv^\top \xx] = \frac1N \sum_{n=1}^N (\wv^\top \xx_n)^2 = \frac1N \sum_{n=1}^N \wv^\top \xx_n \xx_n^\top \wv = \frac1N \wv^\top \Xm\Xm^\top \wv$ is (by the definition of eigenvectors) maximized if $\wv$ is the top eigenvector of $\Xm\Xm^\top$. (One can add some argument about why this gives the top singular vector of $\Xm$) ",0.1,M1_preference_data_408 "The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns: year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. Create 3 new fields in the dataframe corresponding to the median value of the number of citations per author, the number of publications per author, and the h-index per author. So for instance, for the row authors_publications, you will create an additional column, e.g. authors_publications_median, containing the median number of publications per author in each paper.","Consider a deterministic online algorithm $\Alg$ and set $x_1 = W$. There are two cases depending on whether \Alg trades the $1$ Euro the first day or not. Suppose first that $\Alg$ trades the Euro at day $1$. Then we set $x_2 = W^2$ and so the algorithm is only $W/W^2 = 1/W$ competitive. For the other case when \Alg waits for the second day, we set $x_2 = 1$.
Then \Alg gets $1$ Swiss franc whereas optimum would get $W$ and so the algorithm is only $1/W$ competitive again.",0.1,M1_preference_data_409 "The goal of this question is to illustrate how to use transducers to implement a simplified version of the conjugation of English verbs. We will restrict to the conjugated forms corresponding to the indicative mode and the present tense. Formally, this can be modeled as defining a transducer able to recognize associations such as the following table: make+V+IndPres+1s make make+V+IndPres+2s make make+V+IndPres+3s makes make+V+IndPres+1p make make+V+IndPres+2p make make+V+IndPres+3p make where ""V"" identifies the grammatical category ""Verb"", ""IndPres"" indicates that we are dealing with the Indicative mode and the Present tense, and ""1s"", ""2s"", ""3s"" (resp. ""1p"", ""2p"", ""3p"") refer to the first, second, and third person singular (resp. plural). In the above table, what do the strings in the first and the second column correspond to?","It is easy to see that if a bipartite graph has a perfect matching, then $|S| \leq |N(S)|$ for all $S\subseteq A$. This holds even if we only consider the edges inside the perfect matching. Now we focus on proving the other direction, i.e., if $|S| \leq |N(S)|$ for all $S\subseteq A$ then $G$ has a perfect matching. We define a procedure that, given a maximum-size matching $M$ which does not cover some $a_0 \in A$, returns a set $S \subseteq A$ such that $|N(S)| < |S|$. This shows that a maximum matching must have size $n$. To this end, let $A_0 = \{a_0\}$ and $B_0 = N(a_0)$. Note that all vertices of $B_0$ are covered by the matching $M$ (if $b_0 \in B_0$ is not covered, the edge $a_0b_0$ can be added to the matching which contradicts the fact that $M$ is a maximum matching). If $B_0 = \emptyset$, $S = A_0$ is a set such that $|N(S)| < |S|$. Else, $B_0$ is matched with $|B_0|$ vertices of $A$ distinct from $a_0$.
We set $A_1 = N_M(B_0)\cup\{a_0\}$, where $N_M(B_0)$ is the set of vertices matched with vertices of $B_0$. We have $|A_1| = |B_0|+1 \geq |A_0|+1$. Let $B_1 = N(A_1)$. Again, no vertex in $B_1$ is exposed, otherwise there is an augmenting path. If $|B_1| < |A_1|$, the algorithm terminates with $|N(A_1)| < |A_1|$. If not, let $A_2 = N_M(B_1)\cup\{a_0\}$. Then $|A_2| \geq |B_1| + 1 \geq |A_1| + 1$. We continue this procedure till it terminates. This procedure eventually terminates since the size of the sets $A_i$ is strictly increasing. Hence it returns a set $S\subseteq A$ such that $|N(S)| < |S|$. \footnote{Some parts of this proof are taken from this \href{http://www-sop.inria.fr/members/Frederic.Havet/Cours/matching.pdf}{link}. }",0.1,M1_preference_data_410 "There are N philosophers sitting around a circular table eating spaghetti and discussing philosophy. The problem is that each philosopher needs two forks to eat, and there are only $N$ forks, one between each pair of philosophers. We want to design an algorithm that the philosophers can use, that ensures that no one starves as long as each philosopher eventually stops eating, and such that the maximum number of philosophers can eat at once. Assume now that you have $N/2$ forks and $N$ philosophers (assuming $N$ is even). Similar to Q1, each philosopher p takes fork p%n and (p+1)%n. Does your solution for question 1 prevent deadlocks ?","Keep working on the existing task, as the sprint backlog is already set, but forward the request to the Product Owner to be added and prioritized into the product backlog.",0.1,M1_preference_data_411 "We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl.
française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} We use the following (part of) lexicon: \begin{center} \begin{tabular}{l|ll|l} adult & JJ & has & VBZ \\ adult & $\mathrm{NN}$ & just & RB \\ daughter & $\mathrm{NN}$ & my & PRP\$ \\ developed & VBD & programs & NNS \\ developed & VBN & programs & VBZ \\ first & $\mathrm{JJ}$ & tooth & $\mathrm{NN}$ \\ first & $\mathrm{RB}$ & whose & WP\$ \\ \end{tabular} \end{center} and consider the following sentence: my daughter whose first adult tooth has just developed programs What (formal) parameters make the difference in the choice of these different PoS taggings (for the above model)? Give the explicit mathematical formulas of these parts that are different.","No, different devices have different screen sizes, using absolute sizes in pixels is not appropriate.",0.1,M1_preference_data_412 "In an automated email router of a company, we want to make the distinction between three kind of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. 
In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. It has now lined up IP transit for streaming videos at a few gigabits per second. Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. Suppose we have collected the following statistics about the word frequencies within the corresponding classes, where '0.00...' stands for some very small value: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & technical & financial & irrelevant & & technical & financial & irrelevant \\ \hline $\$<$ number $>$ & 0.01 & 0.07 & 0.05 & deal & 0.01 & 0.02 & $0.00 \ldots$ \\ \hline Dow & $0.00 \ldots$ & 0.08 & $0.00 \ldots$ & forecast & $0.00 \ldots$ & 0.03 & 0.01 \\ \hline GM & $0.00 \ldots$ & 0.03 & $0.00 \ldots$ & gigabit & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ \\ \hline IP & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ & hub & 0.06 & $0.00 \ldots$ & 0.01 \\ \hline Intel & 0.02 & 0.02 & $0.00 \ldots$ & network & 0.04 & 0.01 & $0.00 \ldots$ \\ \hline business & 0.01 & 0.07 & 0.04 & processor & 0.07 & 0.01 & $0.00 \ldots$ \\ \hline capacity & 0.01 & $0.00 \ldots$ & $0.00 \ldots$ & smartphone & 0.04 & 0.04 & 0.01 \\ \hline chipset & 0.04 & 0.01 & $0.00 \ldots$ & wireless & 0.02 & 0.01 & $0.00 \ldots$ \\ \hline company & 0.01 & 0.04 & 0.05 & & & & \\ \hline \end{tabular} \end{center} We now want to specifically focus on the processing of compounds such as 'network capacity' in the second text.
How are the compounds handled by a Naive Bayes classifier if no specific pre-processing of compounds is used?","def compute_recall_at_k(retrieved_tweets, gt, k=5):
    """"""
    It computes the recall score at a defined set of retrieved documents (k).
    :param retrieved_tweets: list of predictions
    :param gt: list of actual relevant data
    :param k: int
    :return: float, the recall at a given k
    """"""
    relevant = len(gt[gt['relevant']==1])
    results = retrieved_tweets.merge(gt, how=""outer"", on=""id"")[:k]
    hits = len(results[results['relevant']==1])
    return hits / relevant",0.1,M1_preference_data_413 "How would a data prefetcher influence the results of a \emph{prime + probe} attack?","We are in the unsupervised case. A possible baseline alternative is K-means. drawbacks: what K should be used for K-means? converges only to a local min, what linkage to use for dendrograms advantages: planar representation for dendrograms (could be complemented with minimal spanning tree), K-means are incremental: can choose to stop if too long (monitor intra-class variance, however) Maybe the best to do would be to try both (and even more) and evaluate them, if possible, in real context...",0.1,M1_preference_data_414 "Show that the solution of the problem of $\argmax_{\wv:\|\wv\|=1} \text{Var}[\wv^\top \xx]$ is to set $\wv$ to be the first principal vector of $\xv_1, \ldots, \xv_N$.","No, null products are not just bad input but a coding bug; the app should not even be making such an invalid request, thus the exception should be unchecked.",0.1,M1_preference_data_415 "we'd like to do some sentence topic classification using a Naive-Bayes model. Consider the following toy learning corpus, where each sentence has been assigned a topic, either ""Medical"" or ""Computer"": \item Medical: plastic surgery process initial consultation can be scheduled by sending an email to the administration. \item Medical: in the process, the laser beam comes into contact with soft tissues.
\item Medical: laser eye surgery process reshapes parts of the cornea by removing tiny amount of tissues. \item Computer: the team behind the laser based quantum computer includes scientists from the US, Australia and Japan. \item Computer: the optical laser barcode scanner was plugged on the USB port. \item Computer: cdrom laser lens cleaning process starts with opening the tray. \item Computer: laser was a computer trademark. The parameters are learned using some appropriate additive smoothing with the same value for all parameters. In the above learning corpus, there are $42$ token occurrences in ""Medical"" documents and $42$ token occurrences in ""Computer"" documents (punctuation is ignored). How would the following short sentence: ""pulsed laser used for surgery process"" be classified by this model?","We cannot implement TRB with an eventually perfect failure detector. Let s be the designated sender (broadcasting a message m), let p be a correct process. Let us consider two executions, A and B. - In A, s crashes before sending out any message. At time t < ∞, p delivers ⊥. - In B, s is correct but all of its messages are delayed until t’ > t. Moreover, ◇P behaves identically in A and B until time t. This is possible because ◇P is only eventually perfect. Since A and B are indistinguishable, p delivers ⊥ in B as well. By agreement, s delivers ⊥ in B. 
But this violates validity: s should deliver m in B.",0.1,M1_preference_data_416 Implement the F1-score to evaluate your classifier.,"The version from question 4 may require 2 traversals (one for map, one for reduce) and does not benefit from the (potentially faster) sequential operator f.",0.1,M1_preference_data_417 "Remember that monoids can be represented by the following type class: 1 trait SemiGroup[T]: 2 extension (x: T) def combine (y: T): T 3 4 trait Monoid[T] extends SemiGroup[T]: 5 def unit: T Additionally the three following laws should hold for all Monoid[M] and all a, b, c: M: (Associativity) a.combine(b).combine(c) === a.combine(b.combine(c)) (Left unit) unit.combine(a) === a (Right unit) a.combine(unit) === a Consider the following implementation of Monoid for Boolean: 1 given Or: Monoid[Boolean] with 2 extension (x: Boolean) def combine (y: Boolean): Boolean = x || y 3 def unit: Boolean = true Which of the three monoid laws does it fulfill? None of them Only Associativity Only Left unit Only Right unit Only Associativity and Left unit Only Associativity and Right unit Only Left unit and Right unit All of them","def check_words(string_to_check, list_to_check):
    set_headline_words = set(string_to_check.lower().split("" ""))
    set_words_check = set(list_to_check)
    return int(len(set_headline_words.intersection(set_words_check)) > 0)",0.1,M1_preference_data_418 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website!
Your column generation system has become quite successful and you've managed to automate most of your job simply by typing your editor's title pitches into your model to produce your column every day. Two years later, during the COVID--25 pandemic, your editor proposes to use your system to generate an information sheet about the pandemic for anyone looking for information about symptoms, treatments, testing sites, medical professionals, etc. Given the similarity to a previous pandemic many years before, COVID--19, you train your model on all news articles published about COVID--19 between the years of 2019--2022. Then, you generate the information page from your trained model. Give an example of a potential harm that your model could produce from the perspective of leaking private information.","Use a step of all-to-all communication. In particular, every process that gets a message relays it immediately. upon initialization do delivered := {} upon RB-broadcast(m) do send(m) to Π \ {p} RB-deliver(m) upon BEB-receive(m) from q do if not m ∈ delivered send (m) to Π \ {p, q} RB-deliver(m) delivered := delivered ∪ m Agreement: Before RB-delivering m, a correct process p forwards m to all processes. By the properties of perfect channels and the fact that p is correct, all correct processes will eventually receive m and RB-deliver it.",0.1,M1_preference_data_419 "Assume that you are part of a team developing a mobile app using Scrum. When using the app, you identified multiple bugs and features which you think should be implemented, and took some notes. You want to share these with the Product Owner. Your backlog of tasks includes the following task: - [ ] [Bug] When I click on the home button, the app doesn't redirect me there (tested from multiple locations in the app). Is this item suitable to be submitted to the Product Backlog? Why?",""" Compute support for each provided itemset by counting the number of its occurrences in the original dataset of transactions.
dataset : list of transactions, preprocessed using 'preprocess()' Ck : list of itemsets to compute support for. min_support : minimum support. Itemsets with support below this threshold will be pruned. output : list of remaining itemsets, after the pruning step. support_dict : dictionary containing the support value for each itemset. "" def get_support(dataset, Ck, min_support): support_count = {} for transaction in dataset: for candidate in Ck: if candidate.issubset(transaction): if not candidate in support_count: support_count[candidate]=1 else: support_count[candidate] += 1 num_transactions = float(len(dataset)) output = [] support_dict = {} for key in support_count: support = support_count[key]/num_transactions if support >= min_support: output.insert(0,key) support_dict[key] = support return output, support_dict",0.1,M1_preference_data_420 "In the following let $\kappa_{1}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ and $\kappa_{2}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)$ be two valid kernels. Show that the following is also a valid kernel: $\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right)=f(\mathbf{x}) f\left(\mathbf{x}^{\prime}\right)$ for any real-valued function $f$.","Your colleague's ""Cache"" interface is asynchronous, which isn't necessary for a cache that will store values in memory. This may even make the interface more difficult to use, as the caller will have to wait for the result of the ""CompletableFuture""s when it could just use the value directly in a synchronous manner. The risk of exposing asynchronous APIs is that the caller will likely need to expose asynchronous APIs as well, which can make the code more complex.",0.1,M1_preference_data_421 "The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter version is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons.
All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List [Int] and f: Int => Int. Given the following lemmas: (MapNil) Nil.map(f) === Nil (MapCons) (x :: xs).map(f) === f(x) :: xs.map(f) (MapTrNil) Nil.mapTr(f, ys) === ys (MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil)) (NilAppend) Nil ++ xs === xs (ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys) Let us first prove the following lemma: (AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys) We prove it by induction on l. Induction step: l is x :: xs. Therefore, we need to prove: (x :: xs).mapTr(f, y :: ys) === y :: (x :: xs).mapTr(f, ys) We name the induction hypothesis IH. What exact sequence of lemmas should we apply to rewrite the left hand-side ((x :: xs).mapTr(f, y :: ys)) to the right hand-side (y :: (x :: xs).mapTr(f, ys))?","The main assumption is that features/attributes contributing to the likelihood are independent, conditionally on the classes: $$ P\left(f_{1} \ldots f_{n} \mid C\right)=\prod_{i} P\left(f_{i} \mid C\right) $$ Note that this is only partial information, statistics about other words not presented here have also been collected. This is in practice definitely a strong assumption. This is the reason why it is called 'Naive'",0.1,M1_preference_data_422 "Design a one-pass (streaming) algorithm that, for a stream that possesses a majority element (appearing more than $m/2$ times), terminates with this element. Prove the correctness of your algorithm.","Removing a method could be done in a minor release if it's backward compatible, otherwise it must be done in a major release.
In the case of an incompatible change, it is wise to start by annotating the method as deprecated in the upcoming release before removing it in a later one.",0.1,M1_preference_data_423 "Consider the following linear program for finding a maximum-weight matching: \begin{align*} \text{Maximize} \quad &\sum_{e\in E} x_e w_e\\ \text{Subject to} \quad &\sum_{e \in \delta(v)} x_e \leq 1 \quad \forall v \in V \\ &x_e \geq 0 \quad \forall e \in E \end{align*} (This is similar to the perfect matching problem seen in the lecture, except that we have inequality constraints instead of equality constraints.) Prove that, for bipartite graphs, any extreme point is integral.","case class Point(x: Int, y: Int) case class Rectangle(lowerLeft: Point, upperRight: Point): require( lowerLeft.x <= upperRight.x && lowerLeft.y <= upperRight.y ) // x1 <= x2 && y1 <= y2",0.1,M1_preference_data_424 "What are the different types of morphologies that can be considered? Briefly describe the main differences between them.","It is not realistic to have zero bugs in a large software product: one should minimize bugs, but a goal of zero is unlikely to ever be met.",0.1,M1_preference_data_425 "You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! You initialize your model with a vocabulary $V$ with $|V|$ tokens.
Given a vector of scores $S = [s_1, \ldots, s_i, \ldots, s_{|V|}]$ output by your model for each token in your vocabulary, write out the softmax function to convert score $s_1$ to a probability mass $P(s_1)$","Depending on a weather service and a hiking trail listing service is a good idea. However, the function that sorts the hiking trails by length should not be a dependency. In fact, this function is a ""pure"" function that could be an implementation detail; there is only one valid implementation possible. Furthermore, the module should depend on a geolocation service to filter the hiking trails based on their proximity to the user, as the module specification requires.",0.1,M1_preference_data_426 "The ""Consensus-Based Total-Order Broadcast"" algorithm transforms a consensus abstraction (together with a reliable broadcast abstraction) into a total-order broadcast abstraction. Describe a transformation between these two primitives in the other direction, that is, implement a (uniform) consensus abstraction from a (uniform) total-order broadcast abstraction."," They are the pointers to the head and tail of the FIFO. The head (``RdPtr'') is the instruction candidate to commit; the tail (``WrPtr'') is where new decoded instructions are inserted.",0.1,M1_preference_data_427 "Change Karger's algorithm so that it also works for edge-weighted graphs. Also adapt the analysis to prove that it still returns any min cut $(S^*, \overline{S^*})$ with probability at least $1/{n \choose 2}$. (Hence, edge-weighted graphs also have at most ${n \choose 2}$ min cuts.)","1. Dynamically scheduled processors have universally more physical registers than the typical 32 architectural ones and they are used for removing WARs and WAW (name dependencies). In VLIW processors, the same renaming must be done by the compiler and all registers must be architecturally visible. 2.
Also, various techniques essential to improve the performance of VLIW processors consume more registers (e.g., loop unrolling or loop fusion). ",0.1,M1_preference_data_428 What happens in the uniform reliable broadcast algorithm if the accuracy property of the failure detector is violated?,"Assume that √2 is rational, i.e. √2 = a/b where a,b are coprime (have no common divisors). We square both sides, thus 2 = a^2/b^2 → a^2 = 2b^2. Therefore, a^2 is even, and using the result of the previous exercise we know that a is even. Since a is even, it has the form a=2k. We substitute this in the previous equation and we have that: (2k)^2 = 2b^2 → 4k^2 = 2b^2 → b^2 = 2k^2. Since b^2 = 2k^2, this means that b^2 is even → b is even, which is a contradiction! The contradiction is that we assumed a,b to be coprime, but we concluded that both are even!",0.1,M1_preference_data_429 "An expression is referentially transparent if it always returns the same value, no matter the global state of the program. A referentially transparent expression can be replaced by its value without changing the result of the program. Say we have a value representing a class of students and their GPAs. Given the following definitions: 1 case class Student(gpa: Double) 2 3 def count(c: List[Student], student: Student): Double = 4 c.filter(s => s == student).size 5 6 val students = List( 7 Student(1.0), Student(2.0), Student(3.0), 8 Student(4.0), Student(5.0), Student(6.0) 9 ) And the expression e: 1 count(students, Student(6.0))","We use the given fact to create two matroids. First, using $\mathcal{M}_A$ we create $\mathcal{M}'_A$ as follows: For $a \in A$, create copies $a^{(b)}$ for each $b \in B$ such that $(a,b) \in E$, and let $\mathcal{M}'_A$ be the matroid obtained from $\mathcal{M}_A$ using these new copies of vertices. Similarly obtain $\mathcal{M}'_B$ from $\mathcal{M}_B$ by copying each $b \in B$ to get $b^{(a)}$'s for each $a$ such that $(a,b) \in E$.
Notice that the ground set of $\mathcal{M}'_A$ has a one-to-one correspondence with $E$, and so does the ground set of $\mathcal{M}'_B$. Thus, w.l.o.g., we can assume that both the matroids $\mathcal{M}'_A$ and $\mathcal{M}'_B$ are defined on the common ground set $E$. Now we can use matroid intersection, which runs in polynomial time, to find a maximum independent set in the intersection of the two matroids (since we have polynomial-time independence oracles for all the matroids we consider). To see that this provides the required answer, note that any valid matching in two matroids $\mathcal{M}_A$ and $\mathcal{M}_B$ corresponds to an independent set in the intersection of $\mathcal{M}'_A$ and $\mathcal{M}'_B$, and vice-versa (this follows from the given fact).",0.1,M1_preference_data_430 "Suppose that Alice and Bob have two documents $d_A$ and $d_B$ respectively, and Charlie wants to learn about the difference between them. We represent each document by its word frequency vector as follows. We assume that words in $d_A$ and $d_B$ come from some dictionary of size $n$, and let $x\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$\footnote{We let $[n]:=\{1,2,\ldots, n\}$.} the entry $x_i$ equals the number of times the $i$-th word in the dictionary occurs in $d_A$. Similarly, let $y\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$ the entry $y_i$ denotes the number of times the $i$-th word in the dictionary occurs in $d_B$. We assume that the number of words in each document is bounded by a polynomial in $n$. Suppose that there exists $i^*\in [n]$ such that for all $i\in [n]\setminus \{i^*\}$ one has $|x_i-y_i|\leq 2$, and for $i^*$ one has $|x_{i^*}-y_{i^*}|\geq n^{1/2}$. Show that Alice and Bob can each send a $O(\log^2 n)$-bit message to Charlie, from which Charlie can recover the identity of the special word $i^*$. Your solution must succeed with probability at least $9/10$.
You may assume that Alice, Bob and Charlie have a source of shared random bits.",False,0.1,M1_preference_data_431 "Assume you are writing server-side code for an online shop. The code will be invoked over HTTP from a mobile app. Your current code is as follows: public class ShoppingCart { public void buy(Product product, int quantity) { if (product == null) { throw new IllegalArgumentException(""product cannot be null""); } if (quantity < 1) { throw new IllegalArgumentException(""quantity must be at least 1""); } int price = product.getUnitPrice() * quantity; int discount = computeDiscount(product, quantity); int shippingFees = computeShippingFees(product, quantity); int totalPrice = price - discount + shippingFees; // this triggers a call to the actual credit card processor CreditCardProcessor.billCurrentUser(totalPrice); } private int computeDiscount(Product product, int quantity) { // ... discount computation logic ... } private int computeShippingFees(Product product, int quantity) { // ... shipping fees computation logic ... } } A sales representative suggests adding a ""Thread.sleep"" call at the beginning of ""buy"", so that a future version of the software can remove it and advertise increased speed to customers. Explain in 1 sentence whether this is a good idea and why or why not:","Running tests locally only and using screenshots of the results in the PR is a bad practice. A better alternative would be to run tests on the CI, so that the correct tests will be run each time, a detailed report will be available and most of all, there won't be any ""but it worked on my computer"" syndrome since the environment is standardized.",0.1,M1_preference_data_432 "Assume you are working on SuperQuiz, a trendy app that lets everyone design quizzes and share them with friends! Your first assignment is to add a new feature that is requested by users. You are given the following transcript of an interview with a customer of your product: > Hi! 
> So you're the developer of this quiz app? > The one where you write questions and answers and get your friends to guess? > It's fun, but I can't use it as much as I'd like. > I'm a firefighter, I don't have time for this app during the day, but I wish I could use it at home. > See, when I come back home after work, I have a bunch of stuff to do, cleaning, cooking, ... > And when I'm doing these tasks, I'm rather busy. Not like when I'm watching TV. > I don't always have my phone in my hands! Sometimes I even forget where I put it. > Maybe if you made it so I could talk to the app? You know, many apps have that feature now. > Then I could multitask! Think about quizzes while I'm cooking! > Otherwise, I won't use the app much. Write down a user story, as a single sentence that follows the following guidelines: 1) A user story that summarizes all necessary information from the feedback 2) the user story does not contain any unnecessary information","# 6.1 music_features = [""key"", ""acousticness"", ""danceability"", ""energy"", ""instrumentalness"", ""liveness"", ""loudness"", ""speechiness"", ""valence"", ""tempo""] first_two = two_ormore[two_ormore.album_number.isin((0,1))] variances = first_two[music_features].var() def agg_func(x): assert len(x) == 2 # get the two parts of the df first, second = x.iloc[0], x.iloc[1] # compute the 3 features score_d = second['score'] - first['score'] time_d = abs(first['releaseyear'] - second['releaseyear']) dist = seuclidean(first[music_features], second[music_features], variances) # return the 3 features as columns return pd.Series([score_d, time_d, dist], index =['score_diff', 'time_diff', 'dist']) # apply our aggregation function pair_df = first_two.sort_values(by='releaseyear').groupby('artist').apply(agg_func) # get the cut value for highest 20% quantile_8th = pair_df['dist'].quantile(q=0.8) pair_df['time_diff'] = pair_df['time_diff'].dt.days # assign style change to 1 if in the 20% highest 
pair_df.loc[:,'did_style_change'] = 0 pair_df.loc[pair_df['dist'] >= quantile_8th,'did_style_change'] = 1",0.1,M1_preference_data_433 "Assume you are working on a mobile application. You get complaints from Android users: when rotating the phone, the text they had typed disappears. In one sentence, explain what the likely root cause is.","The approach used is based on (a limited set of) semantic relations (hyponymy, holonymy) and relies on the Aristotelian ""Genus-Differentia"" principle.",0.1,M1_preference_data_434 Design an algorithm that implements consensus using multiple TRB instances.,"The goal of the protocol is for every process to know the input of all the other processes. Each process internally allocates an array of size w*h, where each entry of the array represents the input of one process that initially is set to a sentinel value ‘?’. The process fills the array until no ‘?’ are left, then the process can make its decision (e.g., the minimal value received). Whenever a process gets new information, it sends its updated array to all its four neighbors. At the beginning, the “new information” is its own input. The number of required rounds is the length of the longest shortest path between any two processes. In the case of the grid, this is w+h.",0.1,M1_preference_data_435 "A leftist min heap is a tree that satisfies the following properties: P.1 Min heap: For any given node C, if P is a parent node of C, then the value of P is less than or equal to the value of C. P.2 Leftist heap: For any given node C, if L is a left child of C and R is a right child of C, then the rank of R is less than or equal to the rank of L. Here, rank of C is the number of edges on the shortest path from node C to a leaf node.
Consider the following implementation of a leftist min heap: 1 sealed abstract class Heap 2 case class Empty() extends Heap 3 case class Node(rank: Int, value: Int, h1: Heap, h2: Heap) extends Heap 4 def rank(h: Heap): Int = h match 5 case Empty() => -1 6 case Node(r, v, h1, h2) => r 7 def insert(x: Int, h: Heap) = merge(h, Node(0, x, Empty(), Empty())) 8 def findMin(h: Heap): Int = h match 9 case Empty() => 0 10 case Node(_, x, _, _) => x 11 def deleteMin(h: Heap): Heap = h match 12 case Empty() => h 13 case Node(_, x, lh, rh) => merge(lh, rh) 14 15 // Merge two leftist min heaps h1 and h2 16 def merge(h1: Heap, h2: Heap): Heap = 17 def shake(x: Int, lh: Heap, rh: Heap) = 18 // Ensure the leftist property 19 (lh, rh) match 20 SSS 21 case _ => Node(rank(lh) + 1, x, rh, lh) 22 // Ensure the min property 23 (h1, h2) match 24 case (Empty(), h) => h 25 case (h, Empty()) => h 26 MMM 27 case (Node(_, x1, lh1, rh1), _: Node) => shake(x1, lh1, merge(rh1, h2)) Figure 1 shows two example leftist min heaps, with values inside each node and ranks next to each node. To merge the two heaps, we first obtain the min heap from Figure 2, which satisfies the property P.1 but not the property P.2, and finally the leftist min heap from Figure 3, which satisfies both properties. Complete the implementation of the merge function by replacing SSS and MMM lines: A. case _ => if (rank(lh) >= rank(rh)) Node(rank(rh) + 1, x, lh, rh) B. case _ if (rank(lh) >= rank(rh)) => Node(rank(rh) + 1, x, lh, rh) C. case (Node(r1, x1, _, _), Node(r2, x2, _, _)) => if (r1 >= r2) Node(rank(rh) + 1, x, lh, rh) D. case (Node(r1, x1, lh1, rh1), Node(r2, x2, lh2, rh2)) => if (x1 > x2) shake( x2, lh2, merge(h1, rh2)) E. case (Node(_, x1, lh1, rh1), Node(_, x2, lh2, rh2)) if (x1 > x2) => shake(x2 , lh2, merge(h1, rh2)) F. 
case _ if (x1 > x2) => shake(x2, lh2, merge(h1, rh2))","class Fork(var myNumber: Int) { var inUse: Boolean = false } def philosopherTurn(l: Fork, r: Fork): Boolean = Thread.sleep(100) var left = l var right = r if left.myNumber > right.myNumber then left = r right = l left.synchronized { right.synchronized { if left.inUse == false && right.inUse == false then left.inUse = true right.inUse = true else return false } } Thread.sleep(1000) left.synchronized { right.synchronized { left.inUse = false right.inUse = false } } true def run() = val n = 5 val forks = new Array[Fork](n) val philosophers = new Array[Thread](n) for p <- 0 to n - 1 do forks(p) = new Fork(p) for p <- 0 to n - 1 do philosophers(p) = new Thread { override def run() = { while (!philosopherTurn(forks(p%n), forks((p+1)%n))){} } } philosophers(p).start for p <- 0 to n - 1 do philosophers(p).join()",0.1,M1_preference_data_436 "Consider an HMM Part-of-Speech tagger, the tagset of which contains, among others: DET, N, V, ADV and ADJ, and some of the parameters of which are: $$ \begin{gathered} P_{1}(\mathrm{a} \mid \mathrm{DET})=0.1, \quad P_{1}(\text { accurately } \mid \mathrm{ADV})=0.1, \quad P_{1}(\text { computer } \mid \mathrm{N})=0.1, \\ P_{1}(\text { process } \mid \mathrm{N})=0.095, \quad P_{1}(\text { process } \mid \mathrm{V})=0.005, \\ P_{1}(\text { programs } \mid \mathrm{N})=0.080, \quad P_{1}(\text { programs } \mid \mathrm{V})=0.020, \end{gathered} $$ \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & \multicolumn{5}{|l|}{$\mathrm{Y} \rightarrow$} \\ \hline & & $\mathrm{DET}$ & N & V & ADJ & $\mathrm{ADV}$ \\ \hline \multirow[t]{5}{*}{$X \downarrow$} & $\mathrm{DET}$ & 0 & 0.55 & 0 & 0.02 & 0.03 \\ \hline & $\mathrm{N}$ & 0.01 & 0.10 & 0.08 & 0.01 & 0.02 \\ \hline & V & 0.16 & 0.11 & 0.06 & 0.08 & 0.08 \\ \hline & ADJ & 0.01 & 0.65 & 0 & 0.05 & 0 \\ \hline & ADV & 0.08 & 0.02 & 0.09 & 0.04 & 0.04 \\ \hline \end{tabular} \end{center} $P_{2}(\mathrm{Y} \mid \mathrm{X}):\left(\right.$ 
for instance $\left.P_{2}(\mathrm{~N} \mid \mathrm{DET})=0.55\right)$ and: $P_{3}(\mathrm{DET})=0.20, \quad P_{3}(\mathrm{~N})=0.06, \quad P_{3}(\mathrm{~V})=0.08, \quad P_{3}(\mathrm{ADV})=0.07, \quad P_{3}(\mathrm{ADJ})=0.02$. What are all the possible taggings of the sentence a computer process programs accurately","\begin{enumerate} \item In the exercise you learned that the columns of $\boldsymbol{V}$ are the eigenvectors associated to $\boldsymbol{X}^{\top} \boldsymbol{X}$. And the non-zero singular values of $\boldsymbol{X}$ are the square roots of the non-zero eigenvalues of $\boldsymbol{X}^{\top} \boldsymbol{X}$. But for our case $\tilde{\boldsymbol{X}}^{\top} \tilde{\boldsymbol{X}}=\boldsymbol{X}^{\top} \boldsymbol{X}$, which proves the two claims. \item It is better to first take out the highly correlated feature. If not, then the previous calculations show that this is the same as having a row that is not properly normalized. \end{enumerate}",0.1,M1_preference_data_437 "Recall that a matroid $\mathcal{M} =(E, \mathcal{I} )$ is a partition matroid if $E$ is partitioned into \emph{disjoint} sets $E_1, E_2, ..., E_\ell$ and \[ \mathcal{I} = \lbrace X \subseteq E : |E_i \cap X | \leq k_i \mbox{ for } i=1,2,..., \ell \rbrace\,. \] Verify that this is indeed a matroid.","Committing once per day will make it hard to review code history and revert bugs. Developers should commit often, each time a small ""sub-feature"" is ready.",0.1,M1_preference_data_438 "If process i fails, then eventually all processes j≠i fail Is the following true? If some process j≠i does not fail, nothing can be said about process i","When a new user comes in, we have little information about them, and thus the matrix factorization method cannot learn many associations between the new user and the existing users.
We should use the demographics information of the user to bridge its associations with existing users.",0.1,M1_preference_data_439 "Suppose that you are given an insertion only stream of items. For every $k\geq 1$, give an algorithm that at each point in the stream maintains $k$ uniformly random elements from the prefix of the stream sampled without replacement. Your algorithm must use $O(k\log n)$ space.","1. If one were to probe sequentially the primed area, the prefetcher would detect that and would try to reduce the attacker misses by bringing earlier in memory some parts of the primed area. Clearly, this would partly or completely invalidate the results of the probe. 2. On the victim side, if the victim were to make accesses following a pattern that triggers the prefetcher, this would add noise to the footprint left in the cache by the victim. In many practical cases, interesting accesses by the victim are random-looking (e.g., accesses in S-boxes).",0.1,M1_preference_data_440 "We will analyze the $K$-means algorithm and show that it always converge. Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2} $$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k} \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose the $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$ to minimize $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ ? Compute the closed-form formula for the $z_{n k}$. To which step of the $K$-means algorithm does it correspond?","inflectional morphology: no change in the grammatical category (e.g. give, given, gave, gives ) derivational morphology: change in category (e.g. 
process, processing, processable, processor, processability)",0.1,M1_preference_data_441 "In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Implement aggregate using the methods map and/or reduce of the collection you are defining aggregate for.","Catch the Google error internally and throw a more general one. Alternatively, change the thrown exception type to a superclass of ""GoogleServerNotRespondingError"" that is not Google-specific, if it has one.",0.1,M1_preference_data_442 "Your team is developing a library that is mostly intended to be used by your company's own applications, but the library is nevertheless distributed via a public repo on GitHub. It contains the following java function: ""public InputStream convertToPdf(Document document) throws GoogleServerNotRespondingError"" This library has a maintainability problem. Explain in 1-2 sentences what it is:",['1-{a1}-{a2}'],0.1,M1_preference_data_443 "Ignoring their different evaluation characteristics in this exercise, we consider here that filter and withFilter are equivalent. To which expression is the following for-loop translated? 1 def mystery7(xs : List[Int], ys : List[Int]) : List[Int] = 2 for 3 y <- ys if y < 100 4 x <- xs if x < 20 5 yield 6 if y < x then 0 else y - x","def reverse[A](xs: List[A]) = xs.foldLeft(List[A]())((r,c) => c :: r) def scanLeft[A, B >: A](xs: List[A])(z: B)(op: (B, B) => B): List[B] = reverse(xs.foldLeft((z, List(z))){case ((acc, accList), curr) => val res = op(acc,curr) (res, res :: accList)}._2)",0.1,M1_preference_data_444 "We learnt in the lecture that terms are typically stored in an inverted list.
Now, in the inverted list, instead of only storing document identifiers of the documents in which the term appears, assume we also store an *offset* of the appearance of a term in a document. An $offset$ of a term $l_k$ given a document is defined as the number of words between the start of the document and $l_k$. Thus our inverted list is now: $l_k= \langle f_k: \{d_{i_1} \rightarrow [o_1,\ldots,o_{n_{i_1}}]\}, \{d_{i_2} \rightarrow [o_1,\ldots,o_{n_{i_2}}]\}, \ldots, \{d_{i_k} \rightarrow [o_1,\ldots,o_{n_{i_k}}]\} \rangle$ This means that in document $d_{i_1}$ term $l_k$ appears $n_{i_1}$ times and at offset $[o_1,\ldots,o_{n_{i_1}}]$, where $[o_1,\ldots,o_{n_{i_1}}]$ are sorted in ascending order, these types of indices are also known as term-offset indices. An example of a term-offset index is as follows: **Obama** = $⟨4 : {1 → [3]},{2 → [6]},{3 → [2,17]},{4 → [1]}⟩$ **Governor** = $⟨2 : {4 → [3]}, {7 → [14]}⟩$ **Election** = $⟨4 : {1 → [1]},{2 → [1,21]},{3 → [3]},{5 → [16,22,51]}⟩$ Which is to say that the term **Governor** appears in 2 documents. In document 4 at offset 3, in document 7 at offset 14. Now let us consider the *SLOP/x* operator in text retrieval. This operator has the syntax: *QueryTerm1 SLOP/x QueryTerm2* finds occurrences of *QueryTerm1* within $x$ (but not necessarily in that order) words of *QueryTerm2*, where $x$ is a positive integer argument ($x \geq 1$). Thus $x = 1$ demands that *QueryTerm1* be adjacent to *QueryTerm2*. Consider the general procedure for ""merging"" two term-offset inverted lists for a given document, to determine where the document satisfies a *SLOP/x* clause (since in general there will be many offsets at which each term occurs in a document). Let $L$ denote the total number of occurrences of the two terms in the document. Assume we have a pointer to the list of occurrences of each term and can move the pointer along this list. As we do so we check whether we have a hit for $SLOP/x$ (i.e.
the $SLOP/x$ clause is satisfied). Each move of either pointer counts as a step. Based on this assumption is there a general ""merging"" procedure to determine whether the document satisfies a $SLOP/x$ clause, for which the following is true? Justify your answer. 1. The merge can be accomplished in a number of steps linear in $L$ regardless of $x$, and we can ensure that each pointer moves only to the right (i.e. forward). 2. The merge can be accomplished in a number of steps linear in $L$, but a pointer may be forced to move to the left (i.e. backwards). 3. The merge can require $x \times L$ steps in some cases.", The LSQ decides which and when memory accesses can be executed out of the original program order.,0.1,M1_preference_data_445 "Consider the following code transformation: egin{verbatim} r3 = r3 << 4 r4 = r4 << 4 st [r3] = r2 ld r1 = [r4] r5 = r3 + 4 r1 = r1 + 1 st [r5] = r6 => r3 = r3 << 4 r4 = r4 << 4 st [r3] = r2 ld r1 = [r4] r5 = r3 + 4 r1 = r1 + 1 st [r5] = r6 \end{verbatim} Correct the code to avoid the problem(s) using the appropriate Itanium instruction(s). Write also any needed recovery code. As much as possible, keep the new ordering (right snippet above).","Paired t-tests are helpful when the two distributions being compared are correlated. In the formulas, we have that $\overline{x}_{\mathrm{diff}}$ = $\overline{x}_{1} - \overline{x}_{2}$, however, $s_{\mathrm{diff}} / \sqrt n \neq \sqrt{\frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}}}$. If $x_1$ and $x_2$ are correlated, the sample variance of the difference will be smaller than the pooled variance across the samples. 
In practice, we see that in the simulations, $X$ and $Z$ are correlated, and thus you gain a lot of statistical power with the paired t-test, while x and k are uncorrelated, and you do not gain statistical power with the paired t-test.",0.1,M1_preference_data_446 "With respect to reorder buffers, Would you expect to find the memory address where a particular instruction was fetched (i.e., the value of the PC at the time of fetching) inside the reorder buffer? If so, why would it be there? If not, elaborate on why it would it be unneeded.","As a person who multi-tasks, I want to create and answer quizzes only with my voice, so that I can play with my friends while my hands are busy doing something else.",0.1,M1_preference_data_447 "In the above, what is the chance agreement between the two annotators?Give your answer as a numerical value to three decimal places.",$\Theta(|h_1 - h_2|)$,0.1,M1_preference_data_448 "Write modular code (i.e., a function) to divide your training data into 𝑁 folds and perform cross-validation. For each possible combination of the two hyperparameters (see below for the range of values that you should try for each hyperparameter), train your model in a cross-validation setup with 𝑁=20 folds. ","Yes, it does. The first property is ensured by the if condition, and the second by the equal addition and removal of funds from the two accounts respectively.",0.1,M1_preference_data_449 "Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others:If a 4-gram has a probability estimated to be \( p \) with Maximum-Likelihood estimation, what would be its probability if estimated using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \) ?","It is $g^\star(x)=\mathbb E[y|x]=2\eta(x)-1$. 
Indeed \begin{align*} g^\star(\xv)&\in \arg \min_{z\in\R}\mathbb E [(y z-1)^2|\xv] \\ &= \arg \min_{z\in\R}\mathbb E [( z-y)^2|\xv] \text{ because } y\in\{-1,1\} \\ & = \arg \min_{z\in\R} \{ \mathbb E [(y -\mathbb E [y|\xv])^2|\xv] + (z-\mathbb E [y|\xv])^2 \} \end{align*} where we have used the law of total variance.",0.1,M1_preference_data_450 "Assume you're working for a startup that develops a university management app. You just received a description of what the app should do: > This app will be the administrative backbone of the university. > Almost all staff will use it. > Human Resources will register each student, including their personal details, and use the system to ensure each student follows the rules concerning the duration of studies, the number of courses that must be taken, the payment of all applicable fees... > Professors will use the app to input grades and to send informational messages to students in their courses. > Students will be able to see the list of courses, register for a course, and see their grades. > Staff members will also be able to update their personal details, including their banking coordinates for their salary. Write a user story, in a single sentence using the below format, that summarizes this conversation: > As a professor, I want to ... so that ... Your story must contain all necessary information and only that information.",This might lead to different results.,0.1,M1_preference_data_451 "Byzantine consistent broadcast (BCB) assumes one designated sender S and it satisfies the following properties: Validity: If S is correct, then every correct process eventually delivers the message. No duplication: Every correct process delivers at most one message. Integrity: If a correct process delivers a message and S is correct, then S has previously broadcast the message. Consistency: No two correct processes deliver different messages.
Do we need to introduce some constraints on the number of Byzantine processes in non-synchronous environments? If yes, what are the constraints?","This is known as {\em reservoir sampling}. The algorithm is as follows: \begin{enumerate} \item Keep the first $k$ items in memory. \item When the $i$-th item arrives (for $i>k$) \begin{itemize} \item with probability $k/i$, keep the new item and discard a uniformly random item of those that are currently in memory; \item with probability $1-k/i$, keep the old items and ignore the new one. \end{itemize} \end{enumerate} We will perform an induction on the number of elements $m$ seen so far to show that the maintained set is a uniformly random set of $k$ items from the stream. If $m \leq k$, the algorithm is clearly correct: this provides the base of the induction. Let us assume that till some time step $j-1$, the maintained set is a uniformly random subset of the first $j-1$ elements of size $k$. The inductive step is provided by the following argument. Note that the probability that the $j$-th element that arrives in the stream belongs to the set of $k$ uniformly random elements from $1,\ldots, j$ sampled without replacement is exactly $$ {j-1 \choose k-1}/{j\choose k}=\frac{(j-1)!}{(j-k)! (k-1)!}\cdot \frac{(j-k)! k!}{j!}=\frac{k}{j}. $$ If $j$ is included, then it suffices to add to $\{j\}$ a uniformly random subset of the first $j-1$ elements of size $k-1$. Taking a uniformly random element out of the maintained set of size $k$ achieves exactly this goal.",0.1,M1_preference_data_452 "Consider the linear programming relaxation for minimum-weight vertex cover: \begin{align*} \text{Minimize} \quad &\sum_{v\in V} x_v w(v)\\ \text{Subject to} \quad &x_u + x_v \geq 1 \quad \forall \{u,v\} \in E \\ &0 \leq x_v \leq 1 \quad \ \ \forall v \in V \end{align*} In class, we saw that any extreme point is integral when considering bipartite graphs.
For general graphs, this is not true, as can be seen by considering the graph consisting of a single triangle. However, we have the following statement for general graphs: \begin{itemize} \item[] Any extreme point $x^*$ satisfies $x^*_v \in \{0, \frac12, 1\}$ for every $v\in V$\,. \end{itemize} Prove the above statement.","def rocchio_estimate(doc_vectors, doc_labels, query_vector): """""" Rocchio classification :param doc_vectors: Document vectors (np.array(np.array)) :param doc_labels: Document labels/topics (list) :param query_vector: Query vector (np.array) :return: A dictionary containing the estimation score for each label/topic (dict) """""" topic_to_doc = {t:[] for t in list(set(doc_labels))} for i, doc in enumerate(doc_vectors): topic_to_doc[doc_labels[i]].append(np.array(doc)) centroids = {t:sum(topic_to_doc[t]) / len(topic_to_doc[t]) for t in topic_to_doc} scores = {t:euclidean_distance(centroids[t], query_vector) for t in centroids} return scores",0.1,M1_preference_data_453 Implement the precision at k metric,"def compute_map(queries, K=10): map_score = 0 prec_rec_dict = [] for i, query in enumerate(queries): ap = 0 predict = search_vec(query, K) gt = search_vec_sklearn(query, features) prec_rec = [] for k in range(1, K+1): precision_at_k = compute_precision_at_k(predict, gt, k) recall_at_k = compute_recall_at_k(predict, gt, k) prec_rec.append((precision_at_k, recall_at_k)) precs_int = compute_interpolated_precisions(prec_rec) # Sum interpolated precision only when recall increases prev_r = 0 for j, p_r in enumerate(prec_rec): rec = p_r[1] if rec > prev_r: ap += precs_int[j] prev_r = rec map_score += ap/len(gt) prec_rec_dict.append(prec_rec) map_score = map_score/len(queries) return map_score, prec_rec_dict",0.1,M1_preference_data_454 "Assume that your team's project manager decides that the team should stop working on new features for the next two weeks and instead focus on improving the performance and stability of the product to provide a better user 
experience. Your colleague thinks that he has an idea that might drastically improve the performance of the app, which is optimizing the function that handles sanitizing and validating user input. He says that it will probably take him a couple of days to implement it, so he suggests that he should jump right into it. What do you think of your colleague's approach?","If the names are too detailed, it makes the code hard to read.",0.1,M1_preference_data_455 "Assume that your team is discussing the following java code: public final class DataStructure { public void add(int val) { /*...*/ } private boolean isFull() { /*...*/ } } One of your colleagues thinks that ""isFull"" should be made public. Explain whether this breaks backward compatibility and why or why not (also without worrying about whether this is a good or a bad thing)","While Bachmann-Landau notation offers a formal description of how the runtime of an algorithm grows with respect to the size of its input, it's sometimes not enough to understand the performance of an algorithm. Typically, high constant factors may make an algorithm that is theoretically faster than another one slower in practice. This can be determined by running benchmarks with various input sizes. For example, many sorting algorithms tend to use insertion sort, which has a worst-case complexity of $O(n^2)$, for small inputs (up to around 20 elements). This is because the overhead of using a more complex algorithm is not worth it for small inputs. Your colleague should therefore run benchmarks with various input sizes to determine which algorithm fits their use case best",0.1,M1_preference_data_456 "You've been hired to modernize a codebase from a 50-year-old company: version control, automated builds, and continuous integration. 
One of your colleagues, who is not completely up-to-date with modern practices, asks you the following question: ""Does adding ""continuous integration"" mean we no longer need to worry about testing?"" What would be your answer?","def knn_probabilistic_estimate(doc_vectors, doc_labels, query_vector, k=10): """""" Probabilistic estimation for kNN classification :param doc_vectors: Document vectors (np.array(np.array)) :param doc_labels: Document labels/topics (list) :param query_vector: Query vector (np.array) :param k: Number of nearest neighbors to retrieve :return: A dictionary containing the estimation (sorted) score for each label/topic (dict) """""" top_k_doc_indices = knn(doc_vectors, query_vector, k) top_k_labels = [doc_labels[i] for i in top_k_doc_indices] scores = {t:0 for t in list(set(doc_labels))} for i in top_k_doc_indices: scores[doc_labels[i]] += 1 scores = {t:scores[t] / k for t in scores} return scores",0.1,M1_preference_data_457 "Up to which linguistic processing level can each of the following sentences be considered as correct? The glass broke its leg, I no go rain, The cook put cherry stones in the cake, Cars flow beautifully; syntactic, pragmatic, syntactic, semantic, lexical","The provided 9 rules could be expanded as follows to take simple number agreement into account: R1.1: S --> NPs VPs R1.2: S --> NPp VPp R2.1: NPs --> NNs R2.2: NPp --> NNp R3.1: NPs --> Dets NNs R3.2: NPp --> Detp NNp R4.1: NNs --> Ns R4.2: NNp --> Np R5.1: NNs --> NNs NNs R5.2: NNp --> NNs NNp R5.3: NNs --> NNp NNs R5.4: NNp --> NNp NNp R6.1: NNs --> NNs PNP R6.2: NNp --> NNp PNP R7.1: PNP --> Prep NPs R7.2: PNP --> Prep NPp R8.1: VPs --> Vs R8.2: VPp --> Vp R9.1: VPs --> Adv Vs R9.2: VPp --> Adv Vp thus resulting in a set of 20 syntactic rules. Note that rule R5 may be expanded into only 2 rules instead of 4 under the assumption that in nominal compounds corresponding to a sequence of several nouns (e.g.
""satellite antenna frames""), all the nouns but the last one must be singular: R5.1: NNs --> NNs NNs R5.2: NNp --> NNs NNp",0.1,M1_preference_data_458 " The [t-statistic](https://en.wikipedia.org/wiki/T-statistic) is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. In a t-test, the higher the t-statistic, the more confidently we can reject the null hypothesis. Use `numpy.random` to create four samples, each of size 30: - $X \sim Uniform(0,1)$ - $Y \sim Uniform(0,1)$ - $Z = X/2 + Y/2 + 0.1$ - $K = Y + 0.1$","def get_vocabulary_frequency(documents): """""" It parses the input documents and creates a dictionary with the terms and term frequencies. INPUT: Doc1: hello hello world Doc2: hello friend OUTPUT: {'hello': 3, 'world': 1, 'friend': 1} :param documents: list of list of str, with the tokenized documents. :return: dict, with keys the words and values the frequency of each word. """""" vocabulary = dict() for document in documents: for word in document: if word in vocabulary: vocabulary[word] += 1 else: vocabulary[word] = 1 return vocabulary",0.1,M1_preference_data_459 " Let us consider a binary classification problem with a training set $S=\{ (\xv_n,y_n)\}_{n=1}^N$ such that: \xv_n\in\R^D, ext{ and } y_n\in\{-1,1\}, ext{ for all } n=1,\cdots,N, where $N,D$ are integers such that $N,D\geq1$. We consider the Perceptron classifier which classifies $\xv\in\R^D$ following the rule: f_{\wv,b}(\xv)= \sign(\wv^ op \xv + b ), where $\wv\in\R^D$ is the weight vector, $b\in \R$ is the threshold, and the sign function is defined as \sign(z)=igg\{ {+1 ext{ if } z\geq 0 top -1 ext{ if } z< 0} As seen in the course, explain how we can ignore the threshold $b$ and only deal with classifiers passing through the origin, i.e., of the form $f_\wv(\xv)=\sign(\wv^ op \xv )$. 
","The gradient boosting regressor overfits to the training data, making it even worse than the linear regression.",0.1,M1_preference_data_460 "You are discussing coding habits with a colleague, who says: ""When I code, if a function I write has more than 10 lines, I always refactor to make it call another function, so that all my functions have less than 10 lines."" In one sentence, explain if this is a good habit and why:","Previous data could have mentioned names, addresses, titles, workplaces, of medical professionals during COVID-19. This information could be generated by the model if trained on this data",0.1,M1_preference_data_461 "In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Design and analyze a \emph{deterministic} $2$-approximation algorithm for the submodular vertex cover problem: \begin{description} \item[Input:] An undirected graph $G = (V,E)$ and a non-negative submodular function $f: 2^V \rightarrow \mathbb{R}_+$ on the vertex subsets. \item[Output:] A vertex cover $S\subseteq V$ that minimizes $f(S)$. \end{description} We remark that the classic vertex cover problem is the special case when $f$ is the linear function $f(S) = \sum_{i\in S} w(i)$ for some non-negative vertex weights $w$. A randomized 2-approximation algorithm will be given partial credits and to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{0.86\textwidth} \textbf{Fact}. Let $V = \{1,2, \ldots, n\}$ and let $\hat f: [0,1]^n \rightarrow \mathbb{R}_+$ denote the Lov\'{a}sz extension of $f$. 
There is a deterministic polynomial-time algorithm that minimizes $\hat f(x)$ subject to $x_i + x_j \geq 1$ for all $\{i,j\} \in E$ and $x_i \in [0,1]$ for all $i\in V$. \end{boxedminipage} \end{center} {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in polynomial-time, and (iii) prove that the value of the found solution is at most twice the value of an optimal solution. You are allowed to use the above fact without any proof. For full score your algorithm should be deterministic but randomized solutions will be given partial credits. Recall that you are allowed to refer to material covered in the lecture notes.)}","def computeFlips(square: Square): List[Square] = { for { i <- List(-1, 0, 1); j <- List(-1, 0, 1); if i != 0 || j != 0; flip <- computeFlipsInDirection(square.x, square.y, i, j) } yield flip }",0.1,M1_preference_data_462 "If there are {t} PoS tags, what is the maximum number of (not necessarily free) parameters the probabilistic model needs to consider to determine the best possible PoS tag sequence given a word sequence of length {w}, subjected to the limited lexical conditioning and limited scope for syntactic dependencies (1 neighbor) hypotheses. Give your answer as a numerical value (not as a formula).",True: Nothing can be said about process i.,0.1,M1_preference_data_463 "Many general evaluation metrics can be considered for various NLP tasks. The simplest one is accuracy. Give several examples of NLP tasks for which accuracy can be used as an evaluation metric. Justify why. In general, what property(ies) must an NLP task satisfy in order to be evaluable through accuracy?","g(f(z, x1), f(f(z, x2), x3)); g(f(f(z, x1), x2), f(z, x3)); g(g(f(z, x1), f(z, x2)), f(z, x3)); g(f(z, x1), g(f(z, x2), f(z, x3)))",0.1,M1_preference_data_464 "How is it possible to compute the average Precision/Recall curves?
Explain in detail the various steps of the computation.","there are 12 different bigrams (denoting here the whitespace with 'X' to better see it): Xc, Xh, Xt, at, ca, cu, eX, ha, he, tX, th, ut",0.1,M1_preference_data_465 "Let $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ be two symmetric matrices. Assume that $\mathbf{v} \in \mathbb{R}^{n}$ is an eigenvector for both matrices with associated eigenvalues $\lambda_{A}$ and $\lambda_{B}$ respectively. Show that $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{A}+\mathbf{B}$. What is the corresponding eigenvalue?","Yes, it is a good way to incrementally make the code cleaner",0.1,M1_preference_data_466 "Consider a DSP with an Address Generation Unit which has a single address register which can only be automodified to point to the next or previous word in memory without using the main ALU nor reloading the address register. A program uses five integer variables \verb+i+, \verb+j+, \verb+x_coord+, \verb+y_coord+, and \verb+sum+, and the sequence of accesses in the main program loop is statically known and is \begin{verbatim} i → j → x_coord → y_coord → x_coord → i → y_coord → → x_coord → y_coord → j → sum → x_coord → y_coord \end{verbatim} Note that these accesses are all inside a loop which repeats many times. What is an optimal placement of the five integers in memory? Show how you have arrived at the result. ","This is an abstraction leak: the error should not be Google-specific, as users of the functions should not know or care we internally use Google.",0.1,M1_preference_data_467 "There are N philosophers sitting around a circular table eating spaghetti and discussing philosophy. The problem is that each philosopher needs two forks to eat, and there are only $N$ forks, one between each pair of philosophers.
We want to design an algorithm that the philosophers can use, that ensures that no one starves as long as each philosopher eventually stops eating, and such that the maximum number of philosophers can eat at once. Lecture 5 provides one possible solution which uses a central arbiter. Can you write the philosopherTurn function without a central arbiter? You may modify the provided class Fork if required. class Fork() { var inUse: Boolean = false } def philosopherTurn(l: Fork, r: Fork): Boolean = ??? // your implementation here def run() = val n = 5 val forks = new Array[Fork](n) val philosophers = new Array[Thread](n) for p <- 0 to n - 1 do forks(p) = new Fork() for p <- 0 to n - 1 do philosophers(p) = new Thread { override def run() = { while (!philosopherTurn(forks(p % n), forks((p + 1) % n))) { /* wait */ } } } philosophers(p).start for p <- 0 to n - 1 do philosophers(p).join() Hint: Use the deadlock prevention technique introduced in the lecture.",Android destroys and re-creates activities when the phone rotates.,0.1,M1_preference_data_468 "The goal of this question is to illustrate how to use transducers to implement a simplified version of the conjugation of English verbs. We will restrict to the conjugated forms corresponding to the indicative mode and the present tense. The idea is to build a transducer corresponding to the composition of three transducers: \item a transducer $T_1$ that defines the morphological paradigm, i.e. identifies the various cases to consider for conjugating a regular verb; \item a transducer $T_2$ that implements the identified cases in the form of transformation rules to be applied for the considered morphological paradigm; \item a transducer $T_3$ that handles all the exceptions to be implemented.
Provide a formal definition for transducer $T_1$:","['m^4', 'min(m^4, N-3)']",0.1,M1_preference_data_469 "Professor Ueli von Gruy\`{e}res worked hard last year to calculate the yearly cheese consumption of each individual in Switzerland. Specifically, let $U$ be the set of all persons in Switzerland. For each person $i\in U$, Ueli calculated the amount $w_i \in \mathbb{R}_{\geq 0}$ (in grams) of the yearly cheese consumption of person $i$. However, to help Coop and Migros in their supply-chain management, he needs to calculate the total cheese consumption of those persons that prefer fondue over raclette. That is, if we let $F \subseteq U$ be those that prefer fondue over raclette, then Ueli wants to calculate \begin{align*} W_F = \sum_{i\in F} w_i\,. \end{align*} The issue is that Ueli does not know the set $F$ and he does not have the time or energy to ask the preferences of all persons. He therefore designs two estimators that only ask a single person: \begin{description} \item[Estimator $\Alg_1$:] Let $W = \sum_{i\in U}w_i$. Sample person $i$ with probability $\frac{w_i}{W}$ and output $W$ if $i$ prefers fondue and $0$ otherwise. \item[Estimator $\Alg_2$:] Sample person $i$ with probability $\frac{1}{|U|}$ and output $|U| \cdot w_i$ if $i$ prefers fondue and $0$ otherwise. \end{description} Let $X_1$ and $X_2$ be the random outputs of $\Alg_1$ and $\Alg_2$, respectively. Ueli has shown that $\Alg_1$ and $\Alg_2$ are unbiased estimators and he has also bounded their variances: \begin{align*} \E[X_1] = \E[X_2] = W_F, \qquad \Var[X_1] \leq W^2 \qquad \mbox{and} \qquad \Var[X_2] \leq |U| \sum_{i\in U} w_i^2\,. \end{align*} However, Ueli is now stuck because the variances are too high to give any good guarantees for the two estimators. We are therefore going to help Ueli by designing a new estimator with good guarantees while still asking the preferences of relatively few persons. 
For a fixed small parameter $\epsilon >0$, your task is to design and analyze an estimator that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee} \Pr[|Y - W_F| \geq \epsilon W] \leq 1/3\,. \end{align} Your estimator should ask at most $3/\epsilon^2$ persons about their preferences. \\ {\em (In this problem you are asked to (i) design an estimator that asks at most $3/\epsilon^2$ persons about their preferences and (ii) prove that it satisfies the guarantee~\eqref{eq:guarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}","The standard approach to vector semantics can be decomposed into two main steps: \begin{itemize} \item the indexing (or desequalization) phase: during this phase, the documents for which a vectorial semantic representation needs to be produced are processed with linguistic tools in order to identify the indexing features (words, stems, lemmas, ...) they will be associated with. \end{itemize} This phase results in the association with each of the documents of a set of indexing features. Notice that, for the rest of the processing, only the sets of indexing features will be considered. The rest of the documents will be ignored. Notice also that the sets of indexing features are sets!... and that therefore any notion of word order is lost after the indexing phase. For example, if we consider the toy document collection consisting of the two following documents: D1 = 'the results of the experiments on transgenic plants will be issued soon.' D2 = 'as soon as the experiments will be over, the laboratory will close.'
A possible output of the indexing phase for these documents might be: D1 --> \{result, experiment, transgenic, plant, issue\} D2 --> \{experiment, over, laboratory, close\} but it is important to notice that the order of the word lemmas in the indexing sets is in fact meaningless, and D1 and D2 might be equivalently indexed by: D1 --> \{experiment, issue, plant, result, transgenic\} D2 --> \{close, experiment, laboratory, over\} where the indexing features have been arbitrarily put in alphabetic order. \begin{itemize} \item The second step of the vector semantics modeling is the representation phase. \end{itemize} During this phase, each of the indexing features that have been identified is associated with one of the dimensions of a (usually highly dimensional) vector space and a method must be designed to transform the indexing sets associated with the documents into vectors. A possible approach is to use binary vectors in which the $0 / 1$ coordinates simply indicate whether the corresponding indexing feature is or is not associated with a given document. A more sophisticated approach consists in using the occurrence statistics of the indexing features in the documents to derive less brittle importance scores for each of the indexing features appearing in a document. A simple version of this approach is to use the (usually normalized) occurrence frequency of a feature in a document as a measure of the importance of this feature for the document. For example, a feature appearing in a document 3 times more frequently than another will be considered as three times more important for that document. The importance scores can then be used as coordinates for the vectors representing the topical content of the documents. Once each of the documents can be represented in the indexing feature vector space, the remaining problem is to define a similarity in this vector space in order to be able to evaluate the semantic proximities between the documents.
The standard approach is to use the cosine similarity, defined as: if $V_1$ is the vector representing document D1 and $V_2$ is the vector representing document D2, the semantic proximity between D1 and D2 is simply defined as: $$ \operatorname{sim}\left(D_{1}, D_{2}\right)=\cos \left(V_{1}, V_{2}\right)=\frac{V_{1} \cdot V_{2}}{\|V_{1}\| \|V_{2}\|} $$ where $X \cdot Y$ denotes the dot-product between vector $X$ and vector $Y$, and $\|X\|=\sqrt{X \cdot X}$ represents the norm (i.e. the length) of vector $X$. Notice that this simple similarity might be further sophisticated in order to take into account varying importance for the various dimensions of the vector space. A possible approach is to use a weighted dot-product of the form: for $V_1=\left(v_{11}, v_{12}, \ldots, v_{1 n}\right)$ and $V_2=\left(v_{21}, v_{22}, \ldots, v_{2 n}\right)$, $V_{1} \cdot V_{2}=\sum_{i=1}^{n} a_{i} v_{1 i} v_{2 i}$, where the $a_{i}$ are some (usually positive) coefficients. A standard approach for the weighting of the vector space dimensions is to use the 'inverse document frequency' (in fact any function $f()$ decreasing with the document frequency of an indexing feature, i.e. the inverse of the number of documents containing the given indexing feature). For example, if we take: $a_i =\operatorname{idf}(i)^2=\log (1 / \mathrm{DF}(i))^2$, where $\mathrm{DF}(i)$ is the document frequency of the indexing feature associated with the i-th dimension of the vector space, we get: $\operatorname{sim}(D_1, D_2)=\cos \left(V_1^{\prime}, V_2^{\prime}\right)$, where $V_i^{\prime}=(\operatorname{tf}(i, k) \cdot \operatorname{idf}(k))$, where $\operatorname{tf}(i, k)$ is the measure of importance of the k-th indexing feature for the i-th document and $\operatorname{idf}(k)$ is a measure of importance of the k-th dimension of the vector space.
This approach corresponds to the standard 'tf.idf' weighting scheme.",0.1,M1_preference_data_470 "To support very large scale neural networks in a limited amount of memory, one may want to use floating point numbers with very few bits. Here we consider substantially simplified operations on such numbers, Float8. A value Float8(mant,exp) represents the non-negative integer mant * 2^exp. We call mant a mantissa (which gives significant digits) whereas exp is the exponent. This allows us to represent both smaller and larger integers, keeping a similar number of significant digits. (The larger integers can only be represented up to a given power of two.) In our simple example, we use only four bits for both mantissa and the exponent, and we assume they are both non-negative. final case class Float8(mant: Int, exp: Int): require(0 <= mant && mant <= 15 && 0 <= exp && exp <= 15) def value: Int = mant << exp val a = Float8(15, 8) val b = Float8(5, 10) We look at the operation plus, of adding such numbers. When one exponent is smaller than the other, the operation shifts the mantissa and then performs addition. If the mantissa gets too large, we reduce it and increase the exponent. extension (x: Float8) def +(y: Float8): Float8 = if x.exp <= y.exp then val shift = y.exp - x.exp val mant = (x.mant >> shift) + y.mant if mant < 16 then Float8(mant, y.exp) else val exp1 = y.exp + 1 if exp1 < 16 then Float8(mant / 2, y.exp + 1) else Float8(15, 15) else y + x Is this operation commutative? Prove or give a counterexample.
","After a client's first request, the server can already start generating the next images, as the topic is the same for all 9 images.",0.1,M1_preference_data_471 "Explain how precise exceptions are implemented in dynamically-scheduled out-of-order processors.","This does not break compatibility, as the method is private so nobody else could call it.",0.1,M1_preference_data_472 "Consider the following sentence: High-energy pulsed laser beams are used in soft-tissue surgery. Using a 1-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), assume that the tokenization is now enhanced with Named Entity Recognition (NER) specialized on technical and medical terms. What would be the advantage of doing so? What would be the major drawback? Justify your answers.",Consider the graph consisting of a single cycle with $n$ vertices and thus $n$ edges. Removing any two edges result in a minimum cut. There are $n \choose 2$ ways of selecting the two edges to remove and hence Karger's result is tight.,0.1,M1_preference_data_473 "Consider an operation we will call scanRight1 that, given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, computes a sequence $b_1, \ldots, b_N$ such that: $b_N = a_N$ $b_i = f(a_{i}, b_{i+1})$, for $0 < i < N$ Define similarly scanLeft1 in a manner similar to scanRight1: Given a function $f$ of two arguments, and a sequence $a_1, \ldots, a_N$, scanLeft1 computes a sequence $b_1, \ldots, b_N$ such that: $b_1 = a_1$ $b_i = f(b_{i-1}, a_{i})$, for $0 < i \leq N$ Suppose that $f$ is associative. is the result of doing scanLeft1 and then reversing the sequence the same as first reversing the sequence and then doing scanRight1? Illustrate your answer on a sequence of three elements where each $a_i$ is a list and f(x,y) = x ::: y is concatenation.",We need to look for the association rules of the form: {cause} → {car accident} i.e. 
in which the left-hand side represents the cause of the accident. The possible association rules are: {lightning} → {car accident} support: 0.25 confidence: 0.4 {wind} → {car accident} support: 0.375 confidence: 0.6 {fire} → {car accident} support: 0.375 confidence: 0.5 {clouds} → {car accident} support: 0.25 confidence: 0.33 {rain} → {car accident} support: 0.125 confidence: 0.2 {wind} has both the highest confidence and the highest support and is the most likely cause of the car accidents.",0.1,M1_preference_data_474