| question (string, 19 to 3.53k chars) | answer (string, 1 to 5.09k chars) | label (float64, 0.1) | source (string, 20 to 22 chars) |
|---|---|---|---|
In Itanium's procedure call and return mechanism, what problem might arise when the processor executes
\verb+alloc+? Which mechanisms could be used to handle the problem?
Feel free to mention what Itanium actually does (if you recall it),
but list any effective solution that comes to mind. Just outline the
basic ideas, do not try to describe or solve the details.
|
val THRESHOLD = ???

def aggregate(z: B)(f: (B, A) => B, g: (B, B) => B): B =
  def go(s: Splitter[A]): B =
    if s.remaining <= THRESHOLD then
      s.foldLeft(z)(f)
    else
      s.split
        .map((t: Splitter[A]) => task { go(t) })
        .map(_.join())
        .reduce(g)
  go(splitter)
| 0.1
|
M1_preference_data_0
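The Scala answer in the row above implements a threshold-based parallel fold. A minimal Python sketch of the same divide-and-conquer pattern (the names `aggregate` and `THRESHOLD`, the pool size, and the test data are illustrative choices, not part of the dataset):

```python
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 4  # below this size, fold sequentially (illustrative value)

def aggregate(pool, data, z, f, g):
    """Fold small chunks sequentially with f; combine parallel halves with g."""
    if len(data) <= THRESHOLD:
        acc = z
        for x in data:
            acc = f(acc, x)
        return acc
    mid = len(data) // 2
    left = pool.submit(aggregate, pool, data[:mid], z, f, g)  # left half in parallel
    right = aggregate(pool, data[mid:], z, f, g)              # right half inline
    return g(left.result(), right)

with ThreadPoolExecutor(max_workers=8) as pool:
    total = aggregate(pool, list(range(1, 17)), 0, lambda b, a: b + a, lambda x, y: x + y)
print(total)  # 136
```

Running the right half inline (rather than submitting both halves) keeps the number of pending pool tasks small and avoids exhausting the worker pool on deep recursions.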
|
List two common types of exceptions which could possibly be
implemented imprecisely. Explain why.
|
S1:
q1: P=4/8 R=4/4
q2: P=1/5 R=1/2
q3: P=3/9 R=3/5
q4: P=3/9 R=3/4
mean P(S1) = 41/120 mean R(S1) = 57/80
S2:
q1: P=1/5 R=1/4
q2: P=2/4 R=2/2
q3: P=3/5 R=3/5
q4: P=2/4 R=2/4
mean P(S2) = 9/20 mean R(S2) = 47/80
| 0.1
|
M1_preference_data_1
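The per-question Precision/Recall fractions in the row above can be checked mechanically; a short Python snippet with exact fractions reproduces the reported means:

```python
from fractions import Fraction as F

# Per-question precision and recall for the two systems, copied from the row above.
s1_p = [F(4, 8), F(1, 5), F(3, 9), F(3, 9)]
s1_r = [F(4, 4), F(1, 2), F(3, 5), F(3, 4)]
s2_p = [F(1, 5), F(2, 4), F(3, 5), F(2, 4)]
s2_r = [F(1, 4), F(2, 2), F(3, 5), F(2, 4)]

mean = lambda xs: sum(xs) / len(xs)

print(mean(s1_p))  # 41/120
print(mean(s1_r))  # 57/80
print(mean(s2_p))  # 9/20
print(mean(s2_r))  # 47/80
```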
|
What differentiates VLIW processors from out-of-order
superscalar processors?
|
The app could stream images rather than batch them, to only download images the user actually sees
| 0.1
|
M1_preference_data_2
|
A multiset is an unordered collection where elements can appear multiple times. We will represent a
multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that
is not in the multiset, and the (positive) number of times it appears otherwise:
1 type Multiset = Char => Int
What should replace ??? so that the following function transforms given set s to a
multiset where each element of s appears exactly once?
1 type Set = Char => Boolean
2 def setToMultiset(s: Set): Multiset = ???
|
Recall that a \emph{base} of a matroid is an independent set of maximum cardinality. Let $B \in \mathcal{I}$ be a base of $\mathcal{M}$. Suppose towards a contradiction that the output $S \in \mathcal{I}$ of \textsc{Greedy} is not a base of $\mathcal{M}$. Then $|S| < |B|$, and, by the second axiom of matroids, there exists some $e_{b} \in B \setminus S$ such that $( S \cup \{e_b\} ) \in \mathcal{I}$. Let $S'$ be the subset of elements in $S$ that were considered before $e_b$ by \textsc{Greedy}. In other words, $S'$ was the partial solution of \textsc{Greedy} just before it considered $e_b$. By the first axiom of matroids, $S' \cup \{e_b\} \in \mathcal{I}$ because $S' \cup \{e_b\} \subseteq S \cup \{e_b\}$. Thus \textsc{Greedy} should have added $e_b$ to its solution $S$ in Step 4, which is a contradiction.
| 0.1
|
M1_preference_data_3
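The `setToMultiset` question in the row above expects a one-line functional answer; an illustrative Python analogue (representing a set as a `Char -> bool` function and a multiset as a `Char -> int` function, names chosen here) might be:

```python
def set_to_multiset(s):
    """Turn a characteristic function (Char -> bool) into a multiset (Char -> int)
    where every member of s appears exactly once and non-members appear 0 times."""
    return lambda c: 1 if s(c) else 0

vowels = lambda c: c in "aeiou"   # an example set: the lowercase vowels
m = set_to_multiset(vowels)
print(m("a"), m("z"))  # 1 0
```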
|
Consider a classic pipeline with Fetch, Decode, Execute, Memory,
and Writeback stages such as MIPS's. Give an example snippet made of
2-3 instructions (use any credible assembly language) which would
benefit from a forwarding path between the Memory and Execute stage
(assume that no other forwarding paths exist).
|
It is not suitable as the item is not specified properly ("doesn't render well" is not concrete). A bug item has to include details on what is wrong with the user experience.
| 0.1
|
M1_preference_data_4
|
Let us remind that we define the max-margin $M_\star$ as
\begin{align*}
M_\star = \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N
\end{align*}
and a max-margin separating hyperplane $\bar \wv$ as a solution of this problem:
\begin{align*}
\bar \wv \in \arg \max_{\wv\in\mathbb R^D, \| \wv\|_2=1} M \text{ such that } y_n \xv_n^\top \wv \geq M \text{ for } n=1,\cdots, N
\end{align*}
Does it imply that the output of the Perceptron algorithm is a max-margin separating hyperplane?
|
$P(\text{continuous wave})=\frac{2}{58}$ since ``\emph{continuous wave}'' appears twice and there are 58 bigrams in a 59-token corpus.
$P(\text{pulsed laser})=0$ since ``\emph{pulsed laser}'' never occurs in the corpus.
| 0.1
|
M1_preference_data_5
|
If process i fails, then eventually all processes j≠i fail
Is the following true? If all processes j≠i fail, then process i has failed
|
['PNP']
| 0.1
|
M1_preference_data_6
|
Interpreting the results obtained throughout this homework, create a short text (max. 250 words) where you:
Present and explain a credible causal diagram capturing the relationship between the variables below, and justify your causal diagram given the questions answered in this homework:
"Skill": an individual's innate talent towards a sport.
"Relative Age": how old an individual was in comparison to his or her peers.
"Success before adulthood": how successful the individual is as an athlete as a child/teenager.
"Success as an adult": how successful the individual is as an athlete as an adult.
Discuss: Consider two equally successful children athletes, one born on March 31 and the other on April 1 — which will likely be more successful as an adult? Your answer should be consistent with your causal diagram.
|
The set of states Q is the set of all possible maps q: A → N. Intuitively, each state of the object assigns each account its balance. The initialization map q_0: A → N assigns the initial balance to each account.
Operations and responses of the type are defined as O = {transfer(a,b,x) : a,b ∈ A, x ∈ N} ∪ {read(a) : a ∈ A} and R = {true, false} ∪ N.
For a state q ∈ Q, a process p ∈ Π, an operation o ∈ O, a response r ∈ R and a new state q' ∈ Q, the tuple (q, p, o, q', r) ∈ Δ if and only if one of the following conditions is satisfied:
- o = transfer(a,b,x) ∧ p ∈ μ(a) ∧ q(a) ≥ x ∧ q'(a) = q(a) − x ∧ q'(b) = q(b) + x ∧ ∀c ∈ A ∖ {a,b}: q'(c) = q(c) (all other accounts unchanged) ∧ r = true;
- o = transfer(a,b,x) ∧ (p ∉ μ(a) ∨ q(a) < x) ∧ q' = q ∧ r = false;
- o = read(a) ∧ q' = q ∧ r = q(a).
| 0.1
|
M1_preference_data_7
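The sequential-object specification in the row above can be sketched as executable Python (the account names, owner map `mu`, and balances below are illustrative, not part of the specification):

```python
def step(q, p, o, mu):
    """Apply operation o issued by process p to state q (a dict account -> balance).
    Returns (new_state, response), mirroring the transition relation above."""
    kind = o[0]
    if kind == "transfer":
        _, a, b, x = o
        if p in mu[a] and q[a] >= x:        # p owns a and the balance suffices
            q2 = dict(q)                    # states are immutable: copy, then update
            q2[a] -= x
            q2[b] += x
            return q2, True
        return q, False                     # unauthorized process or insufficient funds
    if kind == "read":
        _, a = o
        return q, q[a]
    raise ValueError("unknown operation")

q0 = {"a": 5, "b": 0}
mu = {"a": {"p1"}, "b": {"p2"}}             # which processes may debit each account
q1, ok = step(q0, "p1", ("transfer", "a", "b", 3), mu)
print(q1, ok)  # {'a': 2, 'b': 3} True
```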
|
Suppose you are using the Hedge algorithm to invest your money (in a good way) into $N$ different investments. Every day you see how well your investments go: for $i\in [N]$ you observe the change of each investment in percentages. For example, $\mbox{change(i) = 20\%}$ would mean that investment $i$ increased in value by $20\%$ and $\mbox{change}(i) = -10\%$ would mean that investment $i$ decreased in value by $10\%$. How would you implement the ``adversary'' at each day $t$ so as to make sure that Hedge gives you (over time) almost as a good investment as the best one? In other words, how would you set the cost vector $\vec{m}^{(t)}$ each day?
|
This is a more difficult question than it seems because the answer actually depends on the representation chosen for the lexicon. If this representation allows several numeric fields to be associated with lexical entries, then the probabilities should definitely be stored there.
Otherwise some external array (i.e., outside the lexicon) would be built, the role of the lexicon then being to provide a mapping between lexical entries and indexes into these arrays.
The choice of implementation also depends heavily on the size of the vocabulary to be stored (and on the timing requirements for the task: real-time, off-line, ...).
In any case, this is typically a lexical-layer resource.
Example for the case where an associative memory (whatever its implementation) is available:
``capacity'' is mapped to $123454$ by the lexicon, and then an array is used such that ARRAY[1][123454] $= 0.01$.
It should be noted that these probability arrays are very likely to be very sparse, so sparse-matrix representations would be worth using here.
| 0.1
|
M1_preference_data_8
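The row above suggests storing probabilities in an external, very sparse array indexed by the lexicon; a dictionary-based Python sketch of that idea (the index and probability values are taken from the example in the answer, everything else is illustrative):

```python
# Sparse mapping from lexicon indexes to probabilities: a plain dict only
# stores the (rare) nonzero entries, as the answer above recommends.
probs = {}
probs[123454] = 0.01     # "capacity" -> index 123454 -> P = 0.01 (from the example)

def lookup(index):
    """Probability for a lexicon index; absent entries default to 0."""
    return probs.get(index, 0.0)

print(lookup(123454), lookup(7))  # 0.01 0.0
```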
|
You are discussing coding habits with a colleague, who says:
"When I code, I only write some tests in order to get the minimum coverage my team wants."
In one sentence, explain if this is a good habit and why:
|
The strings in the first column are canonical representations, while the strings in the second column are surface forms.
| 0.1
|
M1_preference_data_9
|
Now let $\xv$ be a random vector distributed according to the uniform distribution over the finite centered dataset $\xv_1, \ldots, \xv_N$ from above.
Consider the problem of finding a unit vector, $\wv \in \R^D$, such that the random variable $\wv^\top \xv$ has \emph{maximal} variance. What does it mean for the data vectors $\xv_1, \ldots, \xv_N$ to be centered, as required for principal component analysis (PCA) to be meaningful?
Use the notation $x_{nd}$ for individual entries.
|
False.
| 0.1
|
M1_preference_data_10
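For the PCA question in the row above: "centered" means the empirical mean of the data vectors is the zero vector. A short NumPy check (random toy data; the seed is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))      # N = 5 data vectors in D = 3 dimensions
Xc = X - X.mean(axis=0)          # subtract the empirical mean from every row

# After centering, the empirical mean is (numerically) the zero vector, so the
# variance of w^T x reduces to w^T S w with S the sample covariance matrix.
print(np.allclose(Xc.mean(axis=0), 0.0))  # True
```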
|
What is the function of the reorder buffer in processors?
|
The algorithm is as follows: \begin{center} \begin{boxedminipage}[t]{0.9\textwidth} \begin{enumerate} \item Construct a $2$-universal hash family $\mathcal{H}$ of hash functions $h: V \to \{0,1\}$. \\[1mm] As seen in class, we can construct such a family in time polynomial in $n$, and $\mathcal{H}$ will contain $O(n^2)$ hash functions. Specifically, $\mathcal{H}$ has a hash function for each $a, b\in [p]$ with $a\neq 0$ where $p$ is a prime satisfying $|U|\leq p \leq 2|U|$. \item Let $h_1, h_2, \ldots, h_{\ell}$ be the hash functions of $\mathcal{H}$ where $\ell = |\mathcal{H}|$. \item For each $i =1, \ldots, \ell$, output the set $S_i = \{v: h_i(v) =0\}.$ \end{enumerate} \end{boxedminipage} \end{center} As already noted, the algorithm runs in time polynomial in $n$ and it outputs polynomially many vertex subsets. We proceed to verify the property given in the statement. Consider an arbitrary edge set $E \subseteq {V \choose 2}$. By the previous subproblem, \begin{align*} \frac{|E|}{2} &\leq \E_{h\in \mathcal{H}} \left[ \mbox{\# edges cut by $\{v: h(v) = 0\}$}\right] \\ & = \frac{1}{\ell} \sum_{i =1}^\ell \mbox{(\# edges cut by $\{v: h_i(v) = 0\}$)} \\ & = \frac{1}{\ell} \sum_{i =1}^\ell \mbox{(\# edges cut by $S_i$)}\,. \end{align*} We thus have that the average number of edges that the sets $S_1, S_2, \ldots, S_\ell$ cut is at least $|E|/2$. This implies that at least one of those sets cuts at least $|E|/2$ edges.
| 0.1
|
M1_preference_data_11
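The hashing construction in the row above can be exercised on a toy instance; the sketch below enumerates a family $h_{a,b}(v) = ((av + b) \bmod p) \bmod 2$ over a small prime and checks that some output set cuts at least half of the edges (the graph and the prime are illustrative choices):

```python
p = 5                            # prime with |U| <= p <= 2|U| for U = {0,1,2} (illustrative)
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]     # a triangle

def cut_size(S):
    """Number of edges with exactly one endpoint in S."""
    return sum(1 for (u, v) in E if (u in S) != (v in S))

best = 0
for a in range(1, p):            # a != 0, as in the answer's construction
    for b in range(p):
        h = lambda v: ((a * v + b) % p) % 2
        S = {v for v in V if h(v) == 0}
        best = max(best, cut_size(S))

print(best >= len(E) / 2)  # True: some set in the family cuts at least |E|/2 edges
```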
|
If several elements are ready in a reservation station, which
one do you think should be selected? \textbf{Very briefly} discuss
the options.
|
Obama SLOP/1 Election returns document 3. Obama SLOP/2 Election returns documents 3 and 1. Obama SLOP/5 Election returns documents 3, 1, and 2. Thus the values are x=1, x=2, and x=5.
Obama = (4: {1 - [3]}, {2 - [6]}, {3 - [2,17]}, {4 - [1]})
Election = (4: {1 - [4]}, {2 - [1,21]}, {3 - [3]}, {5 - [16,22,51]})
| 0.1
|
M1_preference_data_12
|
Your team is discussing the following code:
/** Uploads images to the cloud. */
public final class ImageUploader {
public void upload(Image image) { /* … */ }
private boolean canUpload(Image image) { /* … */ }
}
One of your colleagues points out that "upload" currently has some unexpected behavior regarding file sizes, and suggests that this should be written down in a Google Doc shared with the team.
Give 1 sentence explaining why this is not a good idea and 1 sentence suggesting a better way to record this information:
|
Let $d_i$ be the estimate of the $i$-th copy of the algorithm and $\overline{d}$ be the median of the $d_i$. We also define $X_i = 1$ if $d_i \geq 3d$ and zero otherwise, and $X = \sum_{i=1}^{t} X_i$. Since $\Pr[d_i > 3d] \leq 0.47$, the expected number of answers that exceed $3d$ is at most $0.47t$. If the median is larger than $3d$, then at least half of the $t$ individual answers exceed $3d$. Then, we have: \[\Pr(X > t/2) = \Pr(X > (1+3/47)\cdot 0.47t) \leq e^{-\frac{(3/47)^2 \cdot 0.47t}{3}} \leq e^{-0.00063t} = e^{-0.00063\, C\ln(1/\delta)} \leq \delta/2 \] with $C$ being a large enough constant. Similarly, by bounding the probability that the median is below $\frac{d}{3}$ (now letting $X_i = 1$ if $d_i \leq d/3$) we get: \[\Pr(X < t/2) = \Pr(X < (1-3/53)\cdot 0.53t) \leq e^{-\frac{(3/53)^2 \cdot 0.53t}{2}} \leq \delta/2 \] with $C$ being a large enough constant. This means the probability that $d/3 \leq \overline{d} < 3d$ is at least $1-\delta$.
| 0.1
|
M1_preference_data_13
|
The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns:
year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format.
Create another field entitled reputation capturing how famous the last author of the paper is. Notice that the last author of the paper is usually the most senior person involved in the project. This field should equal log10(#citations / #publications + 1). Notice that each author in the dataset has at least 1 publication, so you don't risk dividing by 0.
|
$$
\mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}}=\mathbf{\Phi} \boldsymbol{\Phi}^{\top} \in \mathbb{R}^{n \times n}
$$
| 0.1
|
M1_preference_data_14
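The `reputation` field described in the row above can be sketched in pandas; note that the toy frame and the list-valued `authors_citations` / `authors_publications` encoding are assumptions about the dataset layout, not something the row confirms:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the ICLR data; the real column contents may differ.
df = pd.DataFrame({
    "authors_citations": [[100, 2000], [50, 10]],
    "authors_publications": [[10, 40], [5, 2]],
})

# Take the last author's citations and publications, then apply the formula
# reputation = log10(#citations / #publications + 1) from the question.
last_cit = df["authors_citations"].apply(lambda xs: xs[-1])
last_pub = df["authors_publications"].apply(lambda xs: xs[-1])
df["reputation"] = np.log10(last_cit / last_pub + 1)
```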
|
Recall the online bin-packing problem that we saw in Exercise Set $10$: We are given an unlimited number of bins, each of capacity $1$. We get a sequence of items one by one each having a size of at most $1$, and are required to place them into bins as we receive them. Our goal is to minimize the number of bins we use, subject to the constraint that no bin should be filled to more than its capacity. An example is as follows: \begin{center} \vspace{4mm} \includegraphics[width=9cm]{binpackingExample2} \end{center} Here, seven items have already arrived that we have packed in three bins. The newly arriving item of size $1/6$ can either be packed in the first bin, third bin, or in a new (previously unused) bin. It cannot be packed in the second bin since $1/3 + 1/3 + 1/4 + 1/6 > 1$. If it is packed in the first or third bin, then we still use three bins, whereas if we pack it in a new bin, then we use four bins. In this problem you should, assuming that all items have size at most $0 <\epsilon\leq 1$, design and analyze an online algorithm for the online bin-packing problem that uses at most \begin{align} \frac{1}{1-\epsilon} \mbox{OPT} + 1 \mbox{ bins,} \label{eq:binguarantee} \end{align} where $\mbox{OPT}$ denotes the minimum number of bins an optimal packing uses. In the above example, $\epsilon = 1/3$. \\[2mm] {\em (In this problem you are asked to (i) design the online algorithm and (ii) prove that it satisfies the guarantee~\eqref{eq:binguarantee}. Recall that you are allowed to refer to material covered in the lecture notes.)}
|
not(a or b)
| 0.1
|
M1_preference_data_15
|
We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. Define a case class Rectangle storing two corners.
|
Whatever your personal convictions, you have to adapt to the convention used by the project.
| 0.1
|
M1_preference_data_16
|
It is often desirable to be able to express the performance of an NLP system in the form of one single number, which is not the case with Precision/Recall curves. Indicate what score can be used to convert a Precision/Recall performance into a unique number. Give the formula for the corresponding evaluation metric, and indicate how it can be weighted.
|
The benefit of depending on the latest available minor version is that we will always be up to date with the latest bug fixes as well as security and performance improvements. The problem with this approach is that it could lead to unexpected behaviour and hence bugs, because two compatible versions can still behave slightly differently.
| 0.1
|
M1_preference_data_17
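The single-number score asked for in the row above is the F-measure, the weighted harmonic mean of Precision and Recall, $F_\beta = \frac{(1+\beta^2)\,P\,R}{\beta^2 P + R}$ (with $\beta = 1$ giving the balanced F1 score). A small sketch:

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta > 1 weighs recall more heavily; beta < 1 weighs precision more."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.5, 1.0))  # prints 0.6666666666666666, i.e. 2/3
```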
|
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$:
\begin{align*}
g^\star(\xv)\in \arg \min_{z\in\R}\mathbb E[\phi( z Y)|X=\xv].
\end{align*}
Thus the function $g^\star$ that minimizes the $\phi$-risk can be determined by looking at each $\xv$ separately.
Give a formula of the function $g^\star : \mathcal X \to \R$ which minimizes the true $\phi$-risk, as a function of $\eta(\xv)$.
|
Yes, it is possible. d1 > d2: adding d3 = "b". d2 > d1: adding d3 = "c".
| 0.1
|
M1_preference_data_18
|
In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.) If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Use the structural result proved in the first subproblem to devise an efficient rounding algorithm that, given an instance and a feasible extreme point $x^*$ in the linear programming relaxation corresponding to the instance, returns a schedule that completes all jobs by deadline $T + \max_{j\in J} p_j$. In other words, you wish to assign jobs to machines so that the total processing time of the jobs a machine receives is at most $T + \max_{j\in J} p_j$.
|
['(80+1000-{a}-{b}+80)/1000']
| 0.1
|
M1_preference_data_19
|
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: Show that indeed with the probabilistic interpretation of weights of vector space retrieval, as given in Equation (2), the similarity computation in vector space retrieval results exactly in the probabilistic interpretation of information retrieval, i.e., $sim(q,d_j)= P(q|d_j)$. Given that $d_j$ and $q$ are conditionally independent, i.e., $P(d_j \cap q|k_i) = P(d_j|k_i)P(q|k_i)$. You can assume existence of joint probability density functions wherever required. (Hint: You might need to use Bayes' theorem)
|
1. I/O: Checked exception, can happen even if the code is correct due to external factors
2. Backend timeout: Checked exception, can happen even if the code is correct due to SwengPhotos internals
3. Name already exists: Unchecked exception, the library should deal with this internally and never make such an invalid request, since the cloud is private
| 0.1
|
M1_preference_data_20
|
You are discussing coding habits with a colleague, who says:
"When I edit a part of a function, if I see unclean code in another part of it, I also clean that other part up."
In one sentence, explain if this is a good habit and why:
|
If the messages on which the algorithm agrees in consensus are never sorted deterministically within every batch (neither a priori, nor a posteriori), then the total order property does not hold. Even if the processes decide on the same batch of messages, they might TO-deliver the messages within this batch in a different order. In fact, the total order property would be ensured only with respect to batches of messages, but not with respect to individual messages. We thus get a coarser granularity in the total order.
| 0.1
|
M1_preference_data_21
|
Recall from the last lecture (see Section 16.1.1 in notes of Lecture~8) that the number of mistakes that Weighted Majority makes is at most $2(1+\epsilon) \cdot \mbox{(\# of $i$'s mistakes)} + O(\log N/\epsilon)$, where $i$ is any expert and $N$ is the number of experts. Give an example that shows that the factor $2$ is tight in the above bound. The simplest such example only uses two experts, i.e., $N=2$, and each of the experts is wrong roughly half of the time. Finally, note how your example motivates the use of a random strategy (as in the Hedge strategy that we will see in the next lecture).
|
Returns a list of the elements that appear exactly once in the input list, in reverse order
| 0.1
|
M1_preference_data_22
|
Given the following code snippet, you are tasked to produce a
modulo scheduled version of the loop achieving the best possible
performance. You can assume that any operation has a latency of one
cycle and that the processor has 2 ALUs, one memory unit, and one
branch unit. The processor also has all the necessary hardware
structures and instructions to support modulo scheduling. In
particular, the instruction \verb+mov EC+ loads a value in the
epilogue counter and the instruction \verb+loop.pip+ is equivalent
to \verb+loop+ but has all features expected to execute a modulo
scheduled loop. Predicates from \verb+p32+ and registers from
\verb+x32+ onwards rotate; across iterations registers are rotated
upwards (i.e., the content of \verb+x44+ is accessible as \verb+x45+
in the next iteration). What is the shortest achievable initiation
interval? Why?
\begin{verbatim}
0: mov LC, 100
1: mov x1, 10000
2: ld x2, 0(x1)
3: addi x2, x2, 10
4: st x2, 0(x1)
5: addi x1, x1, 1
6: loop 2
\end{verbatim}
|
Consider the following linear program with a variable $p(v)$ for each vertex $v\in V$: \begin{align*} \textrm{max} \quad & \sum_{v\in V} p(v) \\ \textrm{subject to} \quad& \sum_{v\in S} p(v) \leq |E(S, \bar S)| \qquad \mbox{for all $\emptyset \subset S \subset V$} \\ & p(v) \geq 0 \qquad \mbox{for all $v\in V$} \end{align*} We show that this linear program can be solved in polynomial time using the Ellipsoid method by designing a polynomial time separation oracle. That is we need to design a polynomial time algorithm that given $p^* \in \mathbb{R}^V$ certifies that $p^*$ is a feasible solution or outputs a violated constraints. The non-negativity constraints are trivial to check in time $O(|V|)$ (i.e., polynomial time) so let us worry about the other constraints. These constraints can be rewritten as $f(S) \geq 0$ for all $\emptyset \subset S \subset V$, where \begin{align*} f(S) = |E(S,\bar S)| - \sum_{v\in S} p^*(v) \quad \mbox{for $\emptyset \subseteq S \subseteq V$.} \end{align*} Note that $f$ is a submodular function since it is a sum of two submodular functions: the cut function (which is submodular as seen in class) and a linear function (trivially submodular). Hence there is a violated constraint if and only if \begin{align*} \min_{\emptyset \subseteq S \subset V} f(S) < 0\,. \end{align*} This is an instance of the submodular function minimization problem with the exception that we do not allow $S=V$ for a solution. Therefore we solve $n$ instances of submodular minimization on the smaller ground sets $V \setminus \{v_1\}$, $V \setminus \{v_2\}$, \dots, $V \setminus \{v_n\}$ where $V =\{v_1, \dots, v_n\}$. Since submodular function minimization is polynomial time solvable (and we can clearly evaluate our submodular function in polynomial time), we can solve the separation problem in polynomial time.
| 0.1
|
M1_preference_data_23
|
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others, using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real number between 0 and 1?
|
We can view the threshold as an additional weight by adding the constant input $1$ to the input $\xv$. It amounts to considering the input $\tilde \xv^\top= [\xv^\top,1]$ since $\tilde \xv^\top [\wv^\top,b]^\top = \xv^\top \wv + b$.
| 0.1
|
M1_preference_data_24
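The augmented-input trick in the row above is easy to check numerically; a short NumPy illustration (random vectors and an arbitrary bias, all chosen here for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)
w = rng.normal(size=4)
b = 0.7

x_tilde = np.append(x, 1.0)   # input augmented with the constant 1
w_tilde = np.append(w, b)     # weight vector augmented with the bias

# The augmented dot product reproduces x^T w + b exactly.
print(np.isclose(x_tilde @ w_tilde, x @ w + b))  # True
```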
|
Remember that monoids can be represented by the following type class:
1 trait SemiGroup[T]:
2 extension (x: T) def combine (y: T): T
3
4 trait Monoid[T] extends SemiGroup[T]:
5 def unit: T
Additionally the three following laws should hold for all Monoid[M] and all a, b, c: M:
(Associativity) a.combine(b).combine(c) === a.combine(b.combine(c))
(Left unit) unit.combine(a) === a
(Right unit) a.combine(unit) === a
Consider the following implementation of Monoid for Int:
1 given Pos: Monoid[Int] with
2 extension (x: Int) def combine (y: Int): Int = Math.max(x + y, 0)
3 def unit: Int = 0
Which of the three monoid laws does it fulfil?
None of them
Only Associativity
Only Left unit
Only Right unit
Only Associativity and Left unit
Only Associativity and Right unit
Only Left unit and Right unit
All of them
|
Here we give a detailed explanation of how to set the costs. Your solution does not need to contain such a detailed explanation. The idea of using the Hedge method for linear programming is to associate an expert with each constraint of the LP. In other words, the Hedge method will maintain a weight distribution over the set of constraints of a linear problem to solve, and to iteratively update those weights in a multiplicative manner based on the cost function at each step. Initially, the Hedge method will give a weight $w^{(1)}_i = 1$ for every constraint/expert $i=1,\dots, m$ (the number $m$ of constraints now equals the number of experts). And at each step $t$, it will maintain a convex combination $\vec{p}^{(t)}$ of the constraints (that is defined in terms of the weights). Using such a convex combination $\vec{p}$, a natural easier LP with a single constraint is obtained by summing up all the constraints according to $\vec{p}$. Any optimal solution of the original LP is also a solution of this reduced problem, so the new problem will have at least the same cost as the previous one. We define an oracle for solving this reduced problem: \begin{definition}{} An oracle that, given $\vec{p} = (p_1, \dots, p_m) \geq \mathbf{0}$ such that $\sum_{i=1}^m p_i = 1$, outputs an optimal solution $x^*$ to the following reduced linear problem: \begin{align*} \textbf{Maximize} \hspace{0.8cm} & \sum_{j=1}^n c_jx_j \\ \textbf{subject to}\hspace{0.8cm} &\left(\sum_{i=1}^m p_i A_i \right) \cdot x \leq \sum_{i=1}^m p_ib_i \\ \hspace{0.8cm} & x \geq 0 \\ \end{align*} \end{definition} As explained, we associate an expert to each constraint of the covering LP. In addition, we wish to increase the weight of unsatisfied constraints and decrease the weight of satisfied constraints (in a smooth manner depending on the size of the violation or the slack). 
The Hedge algorithm for covering LPs thus becomes: \begin{itemize} \item Assign each constraint $i$ a weight $w_i^{(1)}$ initialized to $1$. \end{itemize} At each time $t$: \begin{itemize} \item Pick the distribution $p^{(t)}_i = w_i^{(t)}/\Phi^{(t)}$ where $\Phi^{(t)}= \sum_{i\in [N]} w^{(t)}_i$. \item Now \emph{we define the cost vector instead of the adversary} as follows: \begin{itemize} \item Let $x^{(t)}$ be the solution returned by the oracle on the LP obtained by using the convex combination $\vec{p}^{(t)}$ of constraints. Notice that cost of $x^{(t)}$, i.e., $c^\top x^{(t)}$, is at least the cost of an optimal solution to the original LP. \item Define the cost of constraint $i$ as \begin{align*} m^{(t)}_i = b_i -\sum_{j=1}^n A_{ij} x_j = b_i - A_i x . \end{align*} Notice that we have a positive cost if the constraint is satisfied (so the weight will be decreased by Hedge) and a negative cost if it is violated (so the weight will be increased by Hedge). \end{itemize} \item After observing the cost vector, set $w_i^{(t+1)} = w_i^{(t)} \cdot e^{-\varepsilon \cdot m_i^{(t)}}$. \end{itemize} {\bf Output:} the average $\bar x =\frac{1}{T} \sum_{t=1}^T x^{(t)}$ of the constructed solutions.
| 0.1
|
M1_preference_data_25
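The multiplicative update at the heart of the Hedge-for-LPs answer above, $w_i^{(t+1)} = w_i^{(t)} e^{-\varepsilon m_i^{(t)}}$, can be sketched in a few lines of Python (the function name and the toy costs are illustrative):

```python
import math

def hedge_update(weights, costs, eps):
    """One multiplicative-weights step: w_i <- w_i * exp(-eps * m_i),
    then return the new weights and the induced probability distribution."""
    new_w = [w * math.exp(-eps * m) for w, m in zip(weights, costs)]
    total = sum(new_w)
    return new_w, [w / total for w in new_w]

# A satisfied constraint (positive cost) loses weight; a violated one gains it.
w, p = hedge_update([1.0, 1.0], [1.0, -1.0], eps=0.5)
```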
|
Build the inverse document-frequency matrix (idf)
|
x => Math.min(a(x), b(x))
| 0.1
|
M1_preference_data_26
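For the idf question in the row above, a common formulation is $\mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)}$ with $N$ documents and document frequency $\mathrm{df}(t)$; a hedged sketch (the toy corpus and the natural-log base are illustrative choices):

```python
import math

docs = [["the", "cat", "sat"], ["the", "dog"], ["a", "cat"]]
N = len(docs)

# Document frequency: in how many documents does each term appear?
df = {}
for doc in docs:
    for term in set(doc):
        df[term] = df.get(term, 0) + 1

# Inverse document frequency for every term in the vocabulary:
# rarer terms get a larger idf weight.
idf = {term: math.log(N / d) for term, d in df.items()}
```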
|
Given a joint data distribution $\mathcal D$ on $\mathcal X \times \{-1,1\}$ and $n$ independent and identically distributed observations from $\mathcal D$, the goal of the classification task is to learn a classifier $f:\mathcal X \to \{-1,1\}$ with minimum true risk $\mathcal L(f) = \mathbb E_{(X,Y)\sim \mathcal D} [\boldsymbol{\mathbb{1}}_{f(X) \neq Y}]$ where $\boldsymbol{\mathbb{1}}_{C} = \begin{cases}
1 \; \text{ if } C \text{ is true} \\
0 \quad \text{otherwise}
\end{cases}$.
We denote by $\mathcal D_{X}$ the marginal law (probability distribution) of $X$, and $\mathcal D_{Y|X}$ the conditional law of $Y$ given $X$.
Give the two reasons seen in the course which explain that minimizing the true risk with the $0-1$ loss over the set of classifiers $f:\mathcal X \to \{-1,1\}$ is problematic.
|
Amazon Web Services (as can be found by looking up the error name)
| 0.1
|
M1_preference_data_27
|
You've been hired to modernize a codebase from a 50-year-old company: version control, automated builds, and continuous integration. One of your colleagues, who is not completely up-to-date with modern practices, asks you the following question:
"Do I have to do one "commit" each day with my day's work?"
What would be your answer?
|
The result of doing scanLeft1 and then reversing the answer is not the same as applying scanRight1 on the reversed input (unless $f$ is commutative) Consider once again our favourite sequence $A = (a_1, a_2)$. We apply the operations as required: $$rev(A).scanLeft1(f) = (a_2, f(a_2, a_1))$$ and $$rev(A.scanRight1(f)) = (a_2, f(a_1, a_2))$$ These are not equal unless $f$ is commutative. Choose once again the function $f(x, y) := x$. We get $$rev(A).scanLeft1(f) = (a_2, a_2)$$ and $$rev(A.scanRight1(f)) = (a_2, a_1)$$ which are unequal if $a_1 \not = a_2$.
| 0.1
|
M1_preference_data_28
|
You just started an internship in an IT services company.
Your first task is about maintaining an old version of a product still used by a large customer. A bug just got fixed in the latest version of the product, and you must fix it in the old version. You ask where the source code is, and a developer shows you a repository in which the development team makes a single commit each week with the week's work. The old version is in another repository, which is a copy of the original repository made back when the version was released.
Suggest a better way to handle old versions,
|
" Compute confidence for a given set of rules and their respective support freqSet : frequent itemset of N-element H : list of candidate elements Y1, Y2... that are part of the frequent itemset supportData : dictionary storing itemsets support rules : array to store rules min_confidence : rules with a confidence under this threshold should be pruned " def compute_confidence(freqSet, H, supportData, rules, min_confidence=0.7): prunedH = [] for Y in H: X = freqSet - Y support_XuY = supportData[freqSet] support_X = supportData[X] conf = support_XuY/support_X if conf >= min_confidence: rules.append((X, Y, conf)) prunedH.append(Y) return prunedH
| 0.1
|
M1_preference_data_29
|
In a game of Othello (also known as Reversi in French-speaking countries), when a player puts a token on a square of the board, we have to look in all directions around that square to find which squares should be “flipped” (i.e., be stolen from the opponent). We implement this in a method computeFlips, taking the position of the square, and returning a list of all the squares that should be flipped: final case class Square(x: Int, y: Int) def computeFlips(square: Square): List[Square] = { List(-1, 0, 1).flatMap{ i => List(-1, 0, 1).filter{ j => i != 0 || j != 0 }.flatMap{ j => computeFlipsInDirection(square, i, j) } } } def computeFlipsInDirection(square: Square, dirX: Int, dirY: Int): List[Square] = { /* omitted */ } Rewrite the method computeFlips to use one for comprehension instead of maps, flatMaps and filters. The resulting for comprehension should of course have the same result as the expression above for all values of square. However, it is not necessary that it desugars exactly to the expression above.
|
Neither
| 0.1
|
M1_preference_data_30
|
Consider the Poisson distribution with parameter $\lambda$. It has a probability mass function given by $p(i)=\frac{\lambda^{i} e^{-\lambda}}{i !}$, $i=0,1, \cdots$ (i) Write $p(i)$ in the form of an exponential distribution $p(i)=h(i) e^{\eta \phi(i)-A(\eta)}$. Explicitly specify $h, \eta, \phi$, and $A(\eta)$ (ii) Compute $\frac{d A(\eta)}{d \eta}$ and $\frac{d^{2} A(\eta)}{d \eta^{2}}$ ? Is this the result you expected?
|
Since FloodSet guarantees that all non-faulty processes obtain the same W after f+1 rounds, other decision rules would also work correctly, as long as all the processes apply the same rule.
| 0.1
|
M1_preference_data_31
|
One of your colleagues has recently taken over responsibility for a legacy codebase, a library currently used by some of your customers. Before making functional changes, your colleague found a bug caused by incorrect use of the following method in the codebase:
public class User {
/** Indicates whether the user’s browser, if any, has JavaScript enabled. */
public boolean hasJavascriptEnabled() { … }
// … other methods, such as getName(), getAge(), ...
}
Your colleague believes that this is a bad API. You are reviewing the pull request your colleague made to fix this bug. After some discussion and additional commits to address feedback, the pull request is ready. You can either "squash" the pull request into a single commit, or leave the multiple commits as they are. Explain in 1 sentence whether you should "squash" and why.
|
Causal language modeling: it learns to predict the next word, which is what you would need to generate a story.
| 0.1
|
M1_preference_data_32
|
Consider the following CFG
\(\text{S} \rightarrow \text{NP VP PNP}\)
\(\text{NP} \rightarrow \text{Det N}\)
\(\text{NP} \rightarrow \text{Det Adj N}\)
\(\text{VP} \rightarrow \text{V}\)
\(\text{VP} \rightarrow \text{Aux Ving}\)
\(\text{VP} \rightarrow \text{VP NP}\)
\(\text{VP} \rightarrow \text{VP PNP}\)
\(\text{PNP} \rightarrow \text{Prep NP}\)
and the following lexicon:
the:Det, red:Adj, cat:N, is:Aux, meowing:Ving, on:Prep, roof:N
The next four questions ask you the content of a given cell of the chart used by the CYK algorithm (used here as a recognizer) for the input sentence
the red cat is meowing on the roof
Simply answer "empty'' if the corresponding cell is empty and use a comma to separate your answers when the cell contains several objects.What is the content of the cell at row 3 column 6 (indexed as in the lectures)?
|
We give a rounding algorithm that gives a cut $x^*$ (an integral vector) of expected value equal to the value of an optimal solution $x$ to the quadratic program. The rounding algorithm is the basic one: set $x^*_i = 1$ with probability $x_i$ and $x^*_i = 0$ with probability $1 - x_i$. The expected value of the objective function is then $\sum_{i,j}\mathbb{E}\left( (1 - x^*_i)x^*_j + x^*_i (1 - x^*_j) \right) = \sum_{(i,j) \in E} (1 - x_i)x_j + x_i (1 - x_j)$ by independence of the variables $x^*_i$ and $x^*_j$. Thus the expected value after rounding is the optimal value of the quadratic relaxation. But as no integral cut can have larger value than the relaxation, the value of the obtained cut $x^*$ must always equal the value of the fractional solution $x$. It follows that the optimal value of the quadratic relaxation equals the value of an optimal cut.
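The expectation argument can be verified exactly on a small instance by enumerating all rounding outcomes with their product probabilities. A Python sketch (the triangle instance and the fractional point are illustrative assumptions):

```python
from itertools import product

def expected_cut(x, edges):
    """Exact E[cut] of independent rounding: enumerate every 0/1 outcome,
    weighted by the product of x_i (for 1) and 1 - x_i (for 0)."""
    n = len(x)
    total = 0.0
    for bits in product([0, 1], repeat=n):
        p = 1.0
        for xi, bi in zip(x, bits):
            p *= xi if bi else (1 - xi)
        cut = sum(bits[i] != bits[j] for i, j in edges)
        total += p * cut
    return total

def fractional_value(x, edges):
    """Objective of the quadratic relaxation at the fractional point x."""
    return sum((1 - x[i]) * x[j] + x[i] * (1 - x[j]) for i, j in edges)

x = [0.3, 0.7, 0.5]                  # an arbitrary fractional point
edges = [(0, 1), (1, 2), (0, 2)]     # a triangle
```

By linearity of expectation the two quantities agree exactly, which is the identity the proof relies on.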
| 0.1
|
M1_preference_data_33
|
Implement Item-based collaborative filtering using the following formula: \begin{equation} {r}_{x}(a) = \frac{\sum\limits_{b \in N_{I}(a)} sim(a, b) r_{x}(b)}{\sum\limits_{b \in N_{I}(a)}|sim(a, b)|} \end{equation} You will create a function that takes as input the ratings and the similarity matrix and gives as output the predicted ratings.
|
Recall the definition of pairwise independence: for any non-empty $S$ and $T$ such that $S\neq T$ and two bits $b_S$ and $b_T$, we have \begin{align*} \Pr[X_S= b_S \wedge X_T = b_T] = 1/4\,. \end{align*} We now first argue that $\mathbb{E}[X_S] = 1/2, \mathbb{E}[X_T] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$ implies that they are pairwise independent. We have \begin{align*} \Pr[X_S= 1 \wedge X_T = 1] &= \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[X_S= 1 \wedge X_T = 0] &= \mathbb{E}[X_S] - \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[X_S= 0 \wedge X_T = 1] &= \mathbb{E}[X_T] - \mathbb{E}[X_S X_T] = 1/4\,, \\ \Pr[ X_S = 0 \wedge X_T = 0] & = \mbox{``remaining probability''}= 1- 3\cdot 1/4 = 1/4\,. \end{align*} We thus complete the proof by showing that $\mathbb{E}[X_S] = \mathbb{E}[X_T] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$. In both calculations we use the identity $\oplus_{i\in A}\: y_i = \frac{1}{2}\left( 1 - \prod_{i\in A} (-1)^{y_i} \right)$. For the former, \begin{align*} \mathbb{E}[X_S] = \mathbb{E}[\oplus_{i\in S}\: y_i ] = \mathbb{E}\left[\frac{1}{2} \left(1- \prod_{i\in S} (-1)^{y_i} \right)\right] = \frac{1}{2} \left(1- \prod_{i\in S} \mathbb{E}[(-1)^{y_i}] \right) = \frac{1}{2}\,. \end{align*} The second to last equality is due to the independence of the random bits $y_i$ and the last equality follows because $y_i$ is an uniform random bit. The same calculation also shows that $\mathbb{E}[X_T] = 1/2$. 
For the latter, \begin{align*} \mathbb{E}[X_SX_T] & = \mathbb{E}[\oplus_{i\in S}\: y_i \cdot \oplus_{i\in T}\: y_i] \\ & = \mathbb{E}\left[\frac{1}{2} \left(1- \prod_{i\in S} (-1)^{y_i} \right)\cdot \frac{1}{2} \left(1- \prod_{i\in T} (-1)^{y_i} \right)\right]\\ &= \frac{1}{4} \left(1- \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i}\right] - \mathbb{E}\left[\prod_{i\in T} (-1)^{y_i}\right] + \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i} \prod_{i\in T} (-1)^{y_i}\right] \right) \\ & = \frac{1}{4} \left(1 + \mathbb{E}\left[\prod_{i\in S} (-1)^{y_i} \prod_{i\in T} (-1)^{y_i}\right] \right) \qquad \mbox{(by independence of $y_i$s)}\\ & = \frac{1}{4} \left(1 + \mathbb{E}\left[\prod_{i\in S\Delta T} (-1)^{y_i} \right] \right) \qquad \mbox{(recall $S\Delta T = S\setminus T \cup T\setminus S$)}\\ & = \frac{1}{4} \qquad \mbox{($S\Delta T \neq \emptyset$ and again using independence of $y_i$s.)} \end{align*}
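The three expectations used in the proof can be checked by brute force on a small instance. A Python sketch (the choice $n=3$, $S=\{0,1\}$, $T=\{1,2\}$ is an illustrative assumption; any distinct non-empty $S, T$ works):

```python
from itertools import product

def xor_stats(n, S, T):
    """E[X_S], E[X_T], E[X_S X_T] over uniform y in {0,1}^n,
    where X_A is the XOR of the bits indexed by A."""
    es = et = est = 0
    for y in product([0, 1], repeat=n):
        xs = sum(y[i] for i in S) % 2
        xt = sum(y[i] for i in T) % 2
        es += xs
        et += xt
        est += xs * xt
    k = 2 ** n
    return es / k, et / k, est / k

es, et, est = xor_stats(3, S={0, 1}, T={1, 2})
```

The result matches the proof: both marginals are $1/2$ and the product expectation is $1/4$, which is exactly pairwise independence.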
| 0.1
|
M1_preference_data_34
|
Implement Latent Semantic Indexing by selecting the first x largest singular values of the term document matrix Hint 1: np.linalg.svd(M, full_matrices=False) performs SVD on the matrix $\mathbf{M}$ and returns $\mathbf{K}, \mathbf{S}, \mathbf{D}^T$ - $\mathbf{K}, \mathbf{D}^T$ are matrices with orthonormal columns - $\mathbf{S}$ is a **vector** of singular values in a **descending** order
|
We have a variable $x_1$ for moitie moitie, a variable $x_2$ for a la tomate, and a variable $x_3$ for Raclette. The linear program becomes \begin{align*} \text{Minimize} \quad &50 x_1 + 75 x_2 + 60 x_3\\ \text{Subject to} \quad &35 x_1 + 0.5 x_2 + 0.5 x_3 \geq 0.5 \\ &60 x_1 + 300 x_2 + 0.5 x_3 \geq 15 \\ &30 x_1 + 20 x_2 + 70 x_3 \geq 4 \\ & x_1, x_2, x_3\geq 0 \end{align*}
| 0.1
|
M1_preference_data_35
|
In class, we saw Karger's beautiful randomized algorithm for finding a minimum cut in an undirected graph $G=(V,E)$. Recall that his algorithm works by repeatedly contracting a randomly selected edge until the graph only consists of two vertices which define the returned cut. For general graphs, we showed that the returned cut is a minimum cut with probability at least $1/\binom{n}{2}$. In this problem, we are going to analyze the algorithm in the special case when the input graph is a tree. Specifically, you should show that if the input graph $G=(V,E)$ is a spanning tree, then Karger's algorithm returns a minimum cut with probability $1$. \\ {\em (In this problem you are asked to show that Karger's min-cut algorithm returns a minimum cut with probability $1$ if the input graph is a spanning tree. Recall that you are allowed to refer to material covered in the lecture notes.)}
|
Any terminating exception.
1. Memory protection violation.
2. Memory fault.
| 0.1
|
M1_preference_data_36
|
What happens in the reliable broadcast algorithm if the accuracy property of the failure detector is violated?
|
Using Claim 7 and Corollary 8 from the Lecture 7 notes, the expected cost of collection $C$ after $d$ executions of Step 3 of the algorithm for set cover given in Lecture 7 notes is at most $d \cdot LP_{OPT}$. Let $X_i$ be a random variable corresponding to the event whether i-th element is covered by the output $C$ or not. Specifically, let $X_i = 1$ if the given constraint is not satisfied (therefore given element is not covered) and $X_i = 0$ if it is satisfied (therefore the element is covered) after $d$ executions of Step 3. Let $X$ denote the total number of constraints that are not satisfied. Therefore we write, $$X = X_1 + X_2 + \dots + X_n.$$ From Claim~9 in the Lecture 7 notes, we know that the probability that a constraint remains unsatisfied after a single execution of Step 3 is at most $\frac{1}{e}$. In addition, from the first step in the proof of Claim~10, the probability that a constraint is unsatisfied after $d$ executions of Step 3 is at most $\frac{1}{e^d}$. In other words, we have in our notation that $$\mathbb{P}(X_i = 1) \leq \frac{1}{e^d}.$$ Since each $X_i$ is a Bernoulli random variable, we can write their expectation as $$\mathbb{E}(X_i) \leq \frac{1}{e^d}.$$ We want to bound the probability that more than 10 \% of the elements are not covered (which also means less than 90 \% of the elements are covered). We can use Markov's Inequality to write, \begin{align*} \mathbb{P}\left(X \geq \frac{n}{10}\right) &\leq \frac{\mathbb{E}(X)}{n/10}\\ &= \frac{\mathbb{E}(X_1) + \mathbb{E}(X_2) + \dots + \mathbb{E}(X_n)}{n/10} \\ &\leq \frac{n \cdot \frac{1}{e^d}}{n/10} \\ &= \frac{10}{e^d}. \end{align*} Lastly, similar to Claim 11 in the lecture notes, we will bound the probability of bad events (namely, the cost is high or less than 90 \% of elements are covered). Firstly, we found that expected cost after $d$ executions is at most $d\cdot LP_{OPT}$. 
We can write using Markov's Inequality that, $$\mathbb{P}(cost \geq 5d\cdot LP_{OPT}) \leq \frac{1}{5}.$$ Secondly, we bound the probability of event that less than 90 \% of elements are covered. We did it above and showed that probability that more than 10 \% of the elements are not covered is at most $\frac{10}{e^d}$. In the worst case, these bad events are completely disjoint. Therefore, the probability that no bad event occurs is at least $1 - \frac{1}{5} - \frac{10}{e^d} > \frac{1}{2}$ for some large constant $d$. Therefore, the algorithm, with probability at least $\frac{1}{2}$, will return a collection of sets that cover at least 90 \% of the elements and has cost at most $5d LP_{OPT} \leq 5d \mbox{OPT}$.
| 0.1
|
M1_preference_data_37
|
In this week's lecture, you have been introduced to the aggregate method of ParSeq[A] (and other parallel data structures). It has the following signature: def aggregate[B](z: B)(f: (B, A) => B, g: (B, B) => B): B Discuss, as a group, what aggregate does and what its arguments represent. Consider the parallel sequence xs containing the three elements x1, x2 and x3. Also consider the following call to aggregate: xs.aggregate(z)(f, g) The above call might potentially result in the following computation: f(f(f(z, x1), x2), x3) But it might also result in other computations. Come up with at least two other computations in terms of f and g that may result from the above call to aggregate. Below are other examples of calls to aggregate. In each case, check if the call can lead to different results depending on the strategy used by aggregate to aggregate all values contained in data down to a single value. You should assume that data is a parallel sequence of values of type BigInt. 1. data.aggregate(1)(_ + _, _ + _)
|
The result of scanRight1 is not the same as scanLeft1 on the reversed sequence either. Consider the same example as the previous case, but reverse the argument of scanLeft1. We have $$rev(A).scanLeft1(f) = (a_2, f(a_2, a_1))$$ but $$A.scanRight1(f) = (f(a_1, a_2), a_2)$$ With the choice of $f(x, y) := x$, we get $$rev(A).scanLeft1(f) = (a_2, a_1)$$ and $$A.scanRight1(f) = (a_1, a_2)$$ which once again are unequal if $a_1 \not = a_2$.
| 0.1
|
M1_preference_data_38
|
Given the following classes:
• class Pair[+U, +V]
• class Iterable[+U]
• class Map[U, +V] extends Iterable[Pair[U, V]]
Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither
covariance nor contravariance).
Consider also the following typing relationships for A, B, X, and Y:
• A >: B
• X >: Y
Fill in the subtyping relation between the types below using symbols:
• <: in case T1 is a subtype of T2;
• >: in case T1 is a supertype of T2;
• “Neither” in case T1 is neither a supertype nor a supertype of T2.
What is the correct subtyping relationship between A => (Y => X) and A => (X
=> Y)?
|
import math

def cosine_similarity(v1, v2):
    """
    It computes cosine similarity.
    :param v1: list of floats, with the vector of a document.
    :param v2: list of floats, with the vector of a document.
    :return: float
    """
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]; y = v2[i]
        sumxx += x * x
        sumyy += y * y
        sumxy += x * y
    if sumxy == 0:
        sim = 0
    else:
        sim = sumxy / math.sqrt(sumxx * sumyy)
    return sim
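A hypothetical usage sketch (the function is restated in a compact, self-contained form and the input vectors are made up): parallel vectors give similarity 1, orthogonal vectors give 0.

```python
import math

def cosine_similarity(v1, v2):
    """Self-contained restatement of the function above."""
    sumxx = sum(x * x for x in v1)
    sumyy = sum(y * y for y in v2)
    sumxy = sum(x * y for x, y in zip(v1, v2))
    return 0 if sumxy == 0 else sumxy / math.sqrt(sumxx * sumyy)

sim_same = cosine_similarity([1.0, 2.0], [2.0, 4.0])   # parallel vectors
sim_orth = cosine_similarity([1.0, 0.0], [0.0, 3.0])   # orthogonal vectors
```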
| 0.1
|
M1_preference_data_39
|
The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The
former is the version seen in class and is specified by the lemmas MapNil and MapCons. The later version
is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons.
All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List
[Int] and f: Int => Int.
Given the following lemmas:
(MapNil) Nil.map(f) === Nil
(MapCons) (x :: xs).map(f) === f(x) :: xs.map(f)
(MapTrNil) Nil.mapTr(f, ys) === ys
(MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
(NilAppend) Nil ++ xs === xs
(ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys)
Let us first prove the following lemma:
(AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)
We prove it by induction on l.
Base case: l is Nil. Therefore, we need to prove:
Nil.mapTr(f, y :: ys) === y :: Nil.mapTr(f, ys).
What exact sequence of lemmas should we apply to rewrite the left hand-side (Nil.mapTr(f, y :: ys))
to the right hand-side (y :: Nil.mapTr(f, ys))?
|
Continuous integration should be set up for the main branch, to reduce the likelihood of bugs in the final product.
| 0.1
|
M1_preference_data_40
|
Assume you work in a team that is developing a weather application that brings together data from several sources. One of your colleagues is responsible for creating a client for a weather service that returns data in JSON format. Your colleague suggests creating a weather client interface that returns the weather as a string, then a class that gets (fetches from the weather service) and returns the JSON, and a decorator that extracts the weather prediction from that JSON and returns it. What do you think about this approach?
|
We convince our friend by taking $y_1\geq 0$ multiples of the first constraint and $y_2\geq 0$ multiples of the second constraint so that \begin{align*} 6 x_1 + 14 x_2 + 13 x_3 \leq y_1 ( x_1 + 3x_2 + x_3 ) + y_2 (x_1 + 2x_2 + 4 x_3) \leq y_1 24 + y_2 60\,. \end{align*} To get the best upper bound, we wish to minimize the right-hand side $24 y_1 + 60 y_2$. However, for the first inequality to hold, we need that $y_1 x_1 + y_2 x_1 \geq 6 x_1$ for all non-negative $x_1$ and so $y_1 + y_2 \geq 6$. The same argument gives us the constraints $3y_1 + 2y_2 \geq 14$ for $x_2$ and $y_1 + 4y_2 \geq 13$ for $x_3$. It follows that we can formulate the problem of finding an upper bound as the following linear program (the dual): \begin{align*} \text{Minimize} \quad &24y_1 + 60 y_2\\ \text{Subject to} \quad &y_1 + y_2 \geq 6 \\ & 3y_1 + 2y_2 \geq 14 \\ & y_1 + 4 y_2 \geq 13 \\ & y_1, y_2 \geq 0 \end{align*}
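The bound can be checked numerically with a strong-duality certificate. The primal point $x=(12,0,12)$ and dual point $y=(11/3, 7/3)$ below were found by hand and are illustrative assumptions; the sketch verifies that both are feasible and that their objective values coincide at 228, so neither bound can be improved.

```python
EPS = 1e-9  # numerical tolerance for the feasibility checks

# Primal:  max 6x1 + 14x2 + 13x3
#   s.t.   x1 + 3x2 +  x3 <= 24
#          x1 + 2x2 + 4x3 <= 60,   x >= 0
# Dual (derived above):  min 24y1 + 60y2  s.t. the three constraints.
x = (12.0, 0.0, 12.0)        # hand-found primal point
y = (11.0 / 3.0, 7.0 / 3.0)  # hand-found dual point

primal_feasible = (x[0] + 3 * x[1] + x[2] <= 24 + EPS
                   and x[0] + 2 * x[1] + 4 * x[2] <= 60 + EPS
                   and min(x) >= -EPS)
dual_feasible = (y[0] + y[1] >= 6 - EPS
                 and 3 * y[0] + 2 * y[1] >= 14 - EPS
                 and y[0] + 4 * y[1] >= 13 - EPS
                 and min(y) >= -EPS)
primal_value = 6 * x[0] + 14 * x[1] + 13 * x[2]
dual_value = 24 * y[0] + 60 * y[1]
```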
| 0.1
|
M1_preference_data_41
|
Consider the following case class definitions: case class Node(id: Int) case class Edge(from: Node, to: Node) Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Write a reachable function with the following signature to provide this functionality: def reachable(n: Int, init: Set[Node], edges: List[Edge]): Set[Node] You can assume that n >= 0.
|
The transducer T1 is built by using the standard operators (concatenation, disjunction and cross-product) and regular expressions available for the transducers.
For instance:
T1 = ([a-z]+)
((\+V\+IndPres\+) x (\+))
(((([12]s)
| ([123]p)) x (2))
| ((3s) x (1))
)
| 0.1
|
M1_preference_data_42
|
Design a polynomial-time algorithm for the matroid matching problem: \begin{description} \item[Input:] A bipartite graph $G=(A \cup B, E)$ and two matroids $\mathcal{M}_A = (A, \mathcal{I}_A)$, $\mathcal{M}_B = (B, \mathcal{I}_B)$. \item[Output:] A matching $M \subseteq E$ of maximum cardinality satisfying: \begin{enumerate} \item[(i)] the vertices $A' = \{a\in A: \mbox{there is a $b\in B$ such that $\{a,b\}\in M$}\}$ of $A$ that are matched by $M$ form an independent set in $\mathcal{M}_A$, i.e., $A'\in \mathcal{I}_A$; and \item[(ii)] the vertices $B' = \{b\in B: \mbox{there is an $a\in A$ such that $\{a,b\}\in M$}\}$ of $B$ that are matched by $M$ form an independent set in $\mathcal{M}_B$, i.e., $B'\in \mathcal{I}_B$. \end{enumerate} \end{description} We assume that the independence oracles for both matroids $\mathcal{M}_A$ and $\mathcal{M}_B$ can be implemented in polynomial-time. Also to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{\textwidth} \textbf{Fact (obtaining a new matroid by copying elements)}. Let $\mathcal{M} = (N, \mathcal{I})$ be a matroid where $N = \{e_1, \ldots, e_n\}$ consists of $n$ elements. Now, for each $i=1,\ldots, n$, make $k_i$ copies of $e_i$ to obtain the new ground set \begin{align*} N' = \{e_1^{(1)}, e_1^{(2)},\ldots, e_1^{(k_1)}, e_2^{(1)}, e_2^{(2)}, \ldots, e_2^{(k_2)}, \ldots, e_n^{(1)},e_n^{(2)}, \ldots, e_n^{(k_n)}\}\,, \end{align*} where we denote the $k_i$ copies of $e_i$ by $e_i^{(1)}, e_i^{(2)},\ldots, e_i^{(k_i)}$. 
Then $(N', \mathcal{I}')$ is a matroid where a subset $I' \subseteq N'$ is independent, i.e., $I' \in \mathcal{I}'$, if and only if the following conditions hold:\\[-1mm] \begin{enumerate} \item[(i)] $I'$ contains at most one copy of each element, i.e., we have $|I' \cap \{e_i^{(1)}, \ldots, e_i^{(k_i)}\}| \leq 1$ for each $i= 1,\ldots, n$; \item[(ii)] the original elements corresponding to the copies in $I'$ form an independent set in $\mathcal{I}$, i.e., if $I' = \{e_{i_1}^{(j_1)}, e_{i_2}^{(j_2)}, \ldots, e_{i_\ell}^{(j_\ell)}\}$ then $\{e_{i_1}, e_{i_2}, \ldots, e_{i_\ell}\} \in \mathcal{I}$.\\ \end{enumerate} Moreover, if the independence oracle of $(N, \mathcal{I})$ can be implemented in polynomial time, then the independence oracle of $(N', \mathcal{I}')$ can be implemented in polynomial time. \end{boxedminipage} \end{center} {\em (In this problem you are asked to design and analyze a polynomial-time algorithm for the matroid matching problem. You are allowed to use the above fact without any proof and to assume that all independence oracles can be implemented in polynomial time. Recall that you are allowed to refer to material covered in the lecture notes.)}
|
Let us compute the derivative with respect to the bias $b_{u^{\prime}}$ of a particular user $u^{\prime}$ and set it to $0$. We get
$$
\sum_{u^{\prime} \sim m}\left(f_{u^{\prime} m}-r_{u^{\prime} m}\right)+\lambda b_{u^{\prime}}=0
$$
Note that the $f_{u^{\prime} m}$ contains the $b_{u^{\prime}}$. Solving this equation for $b_{u^{\prime}}$ we get
$$
b_{u^{\prime}}=\frac{\sum_{u^{\prime} \sim m}\left(r_{u^{\prime} m}-\left\langle\mathbf{v}_{u^{\prime}}, \mathbf{w}_{m}\right\rangle-b_{m}\right)}{\lambda+\sum_{u^{\prime} \sim m} 1}
$$
where $u^{\prime} \sim m$ are the movies rated by $u^{\prime}$.
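A Python sketch of the resulting closed-form update (the two-movie example, factor vectors, and biases below are made-up illustrative values; `dot` is a plain inner product):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def user_bias_update(ratings, v_u, W, b_items, lam):
    """b_u = sum_m (r_um - <v_u, w_m> - b_m) / (lambda + #{m rated by u}),
    the closed-form solution derived above."""
    num = sum(r - dot(v_u, W[m]) - b_items[m] for m, r in ratings.items())
    return num / (lam + len(ratings))

# made-up two-movie example
W = [[1.0, 0.0], [0.0, 1.0]]      # item factors w_m
b_items = [0.1, -0.1]             # item biases b_m
v_u = [0.5, 0.5]                  # user factor v_u'
ratings = {0: 4.0, 1: 3.0}        # r_u'm for the movies rated by u'
b_u = user_bias_update(ratings, v_u, W, b_items, lam=1.0)
```

Here the numerator is $(4.0 - 0.5 - 0.1) + (3.0 - 0.5 + 0.1) = 6.0$ and the denominator is $1 + 2 = 3$, giving $b_{u'} = 2.0$.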
| 0.1
|
M1_preference_data_43
|
Assume you are working on a mobile application. You meet a client while out for coffee, who tells you:
"I noticed it's not possible to customize the profile picture. I know you have a lot of stuff to do this sprint, but my boss is threatening to switch to another app, could you get this fixed during this sprint?"
In one sentence, give an answer that helps both you and the client.
|
No, it's not: you would potentially have to write a lot of print statements that you'll later need to remove manually. Using a debugger, your friend could add breakpoints that print the values, and can even change them on the fly.
| 0.1
|
M1_preference_data_44
|
Assume that some of your colleagues work on an AI-based image generation service, where a user enters a topic, and the AI generates a synthetic photo on that topic. They tell you the following about this service:
"Currently, the user types in the topic they want to see images for, and the client app sends a request to the server with the user ID and the indicated topic. The server generates an image, which takes a second or so, and sends it to the client app, which then requests another image on the same topic, and so on, until the app has received 9 images. It then displays these in a 3x3 grid. The user now looks at the 9 images and, if they see an inappropriate one, they click on a button that causes the app to send a review request to the server. Human moderators then process each report, and data scientists tweak the AI model to avoid generating images similar to the ones reported as inappropriate. Users then get a notification that their report was processed. The whole reporting process typically takes a day."
Now assume you can change the server's interface. Explain in 1-2 sentences an alternative way to make the app display the 9 images faster:
|
alloc is used to give fresh registers to each
routine without having to explicitly push values on the stack
(and pop them back prior to a return). It is usually
placed at the beginning of each routine. The first parameter
indicates how many values will be hidden on the next
br.call and the second how many values will be used
for returning values from called routines. The compiler needs
to determine the largest number of registers needed by the
routine (first parameter) and the maximum number of returned
values from all the possibly called routines (second
parameter). On executing alloc, the processor simply
memorizes the values to be used in the successive call
instruction to change the offset.
| 0.1
|
M1_preference_data_45
|
We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. Define an operation hull2 that takes two Rectangles, r1 and r2, and computes as the result the smallest Rectangle containing both r1 and r2.
|
semantic ambiguity (two different meanings: polysemy, homonymy (homography)).
Word Sense Disambiguation (WSD) through information from the context (e.g. cohesion).
| 0.1
|
M1_preference_data_46
|
We consider now the ridge regression problem: $$ \min _{\mathbf{w} \in \mathbb{R}^{d}} \frac{1}{2 N} \sum_{n=1}^{N}\left[y_{n}-\mathbf{x}_{n}^{\top} \mathbf{w}\right]^{2}+\lambda\|\mathbf{w}\|_{2}^{2}, $$ where the data $\left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$ are such that the feature vector $\mathbf{x}_{n} \in \mathbb{R}^{D}$ and the response variable $y_{n} \in \mathbb{R}$ Compute the closed-form solution $\mathbf{w}_{\text {ridge }}^{\star}$ of this problem, providing the required justifications. State the final result using the data matrix $\mathbf{X} \in \mathbb{R}^{N \times D}$.
|
1. Predicates are 1-bit registers associated with each
instruction. If the predicate is true, the instruction commits
the result to the register file; if it is false, the result is
dropped.
2. It allows executing both sides of a branch and, when
there are enough available resources, one does not suffer from
the branch penalty which, in VLIW processors, in the absence
of speculative execution after branches, may often be
significant.
3. It does make sense also in RISC processors and indeed some
have partial or complete forms of predication (e.g., ARM). In
particular, for branches very hard to predict, it may be that
executing instructions on both control flow branches costs
less than the average misprediction penalty.
| 0.1
|
M1_preference_data_47
|
If process i fails, then eventually all processes j≠i fail
Is the following true? If some process j≠i does not fail, then process i has not failed
|
Simply use that if $g(\xv)g^\star(\xv)<0$
we get that $|2\eta(\xv)-1|\leq |2\eta(\xv)-1-b(g(\xv))|$
Indeed either $\eta(\xv)>1/2$ and $g(\xv)<0$ and thus $b(g(\xv))<0$, or $\eta(\xv)<1/2$ and $g(\xv)>0$ and thus $b(g(\xv))>0$, where we have used that $b$ preserves signs.
| 0.1
|
M1_preference_data_48
|
Homer, Marge, and Lisa Simpson have decided to go for a hike in the beautiful Swiss Alps. Homer has greatly surpassed Marge's expectations and carefully prepared to bring $n$ items whose total size equals the capacity of his and his wife Marge's two knapsacks. Lisa does not carry a knapsack due to her young age. More formally, Homer and Marge each have a knapsack of capacity $C$, there are $n$ items where item $i=1, 2, \ldots, n$ has size $s_i >0$, and we have $\sum_{i=1}^n s_i = 2\cdot C$ due to Homer's meticulous preparation. However, being Homer after all, Homer has missed one thing: although the items fit perfectly in the two knapsacks fractionally, it might be impossible to pack them because items must be assigned integrally! Luckily Lisa has studied linear programming and she saves the family holiday by proposing the following solution: \begin{itemize} \item Take \emph{any} extreme point $x^*$ of the linear program: \begin{align*} x_{iH} + x_{iM}& \leq 1 \qquad \quad \mbox{for all items $i=1,2,\ldots, n$}\\ \sum_{i=1}^n s_i x_{iH} & = C \\ \sum_{i=1}^n s_i x_{iM} & = C \\ 0 \leq x_{ij} &\leq 1 \qquad \quad \mbox{for all items $i=1,2, \ldots, n$ and $j\in \{H,M\}$}. \end{align*} \item Divide the items as follows: \begin{itemize} \item Homer and Marge will carry the items $\{i: x^*_{iH} = 1\}$ and $\{i: x^*_{iM}=1\}$, respectively. \item Lisa will carry any remaining items. \end{itemize} \end{itemize} {Prove} that Lisa needs to carry at most one item. \\[1mm] {\em (In this problem you are asked to give a formal proof of the statement that Lisa needs to carry at most one item. You are not allowed to change Lisa's solution for dividing the items among the family members. Recall that you are allowed to refer to material covered in the lecture notes.) }
|
# adding album number
two_ormore.loc[:,'album_number'] = (two_ormore
.sort_values(by=['releaseyear', 'reviewdate'])
.groupby('artist')['album']
.transform(lambda x : range(len(x))))
# example artist:
two_ormore.sort_values(by=['releaseyear', 'reviewdate'])\
.query('artist == "Young Thug"')[['releaseyear','reviewdate','album_number']]
| 0.1
|
M1_preference_data_49
|
You have been hired to evaluate an email monitoring system aimed at detecting potential security issues. The targeted goal of the application is to decide whether a given email should be further reviewed or not. You have been given the results of three different systems that have been evaluated on the same panel of 157 different emails. Here are the classification errors and their standard deviations: system 1 (error=0.079, stddev=0.026) system 2 (error=0.081, stddev=0.005) system 3 (error=0.118, stddev=0.004) What should be the minimal size of a test set to ensure, at a 95% confidence level, that a system has an error 0.02 lower (absolute difference) than system 3? Justify your answer.
|
As evident from the confusion matrix, the disparity in class proportions does indeed hurt the model. Almost all (99%) of the examples are classified as class 0. What's more, 75% of the articles in class 1 are predicted to belong to class 0.
One way to address this is to employ cost-sensitive learning, i.e., using class_weight='balanced' while training the model. Another way could be up/down-sampling the minority/majority class. SMOTE is a very popular technique for oversampling the minority class that can be employed here.
| 0.1
|
M1_preference_data_50
|
What is the problem addressed by a Part-of-Speech (PoS) tagger?
Why isn't it trivial? What are the two main difficulties?
|
['answer should fit the regular expression: 10^8 + 10^15', 'answer should fit the regular expression: (10\\^8|10\\^\\{8\\}|10\\^(8)|10⁸) ?\\+ ?(10\\^15|10\\^\\{15\\}|10\\^(15)|10¹⁵)', 'answer should fit the regular expression: (10\\^15|10\\^\\{15\\}|10\\^(15)|10¹⁵) ?\\+ ?(10\\^8|10\\^\\{8\\}|10\\^(8)|10⁸)']
| 0.1
|
M1_preference_data_51
|
If process i fails, then eventually all processes j≠i fail
Is the following true? If no process j≠i fails, nothing can be said about process i
|
You could change the interface so that all 9 images are batched together in a single response, reducing the number of client-server round trips.
| 0.1
|
M1_preference_data_52
|
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
We will consider the following three messages:
The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast
and two economic reports. Oil rose to $71.92.
BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with
Warner Brothers to distribute its TV and movie content via the BT platform. It
has now lined up IP transit for streaming videos at a few gigabits per second
Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal
could make Marvell the top supplier of 3G and later smartphone processors, and
enable Intel to focus on its core x86 and wireless LAN chipset businesses, the
companies say.
For the first text, give an example of the corresponding output of the NLP pre-processor steps.
|
False
| 0.1
|
M1_preference_data_53
|
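For the pre-processing question in this row, a minimal tokenizer sketch (assumption: the "NLP pre-processor" performs sentence splitting and tokenization; the exact pipeline used in the course may differ):

```python
import re

# Tokenize the first message: keep numbers, dollar amounts, words with
# possessive 's, and punctuation as separate tokens.
text = ("The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales "
        "forecast and two economic reports. Oil rose to $71.92.")
tokens = re.findall(r"\$?\d+(?:\.\d+)?|[A-Za-z]+(?:'s)?|[.,]", text)
print(tokens[:8])
```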
From a corpus of \( N \) occurences of \( m \) different tokens:How many different 4-grams (values) could you possibly have?
|
x => if s(x) then 1 else 0
| 0.1
|
M1_preference_data_54
|
A company active in automatic recognition of hand-written documents needs to improve the quality of their recognizer. This recognizer produces sets of sequences of correct English words, but some of the produced sequences do not make any sense. For instance the processing of a given hand-written input can produce a set of transcriptions like: 'A was salmon outer the does', 'It was a afternoon nice sunny', and 'I Thomas at mice not the spoon'.
What is wrong with such sentences? NLP techniques of what level might allow the system to select the correct one(s)? What would be the required resources?
|
Average degree is not recommended, as the degree distribution of real-world networks usually follows a power law. Summarizing power laws with average values is not a good idea: there is a long tail, and many nodes have a very high degree. Instead, the median is a better choice.
| 0.1
|
M1_preference_data_55
|
What is modulo scheduling and what are its benefits? What does
it apply to? What is its goal? In which respect is it superior to
simpler techniques with the same goal?
|
First, let us simplify the situation a little by noticing that with probability $1$, all elements $h(i)$ for $i \in U$ are different. This is because $\Pr[h(i) = h(j)] = 0$ for $i \ne j$ (recall that each $h(i)$ is uniform on the interval $[0,1]$). Given this, let us see where $\min_{i \in A \cup B} h(i)$ is attained: \begin{itemize} \item if it is attained in $A \cap B$, then $h_A = h_B = h_{A \cup B} = h_{A \cap B}$, \item otherwise, say it is attained in $A \setminus B$: then $h_A < h_B$. \end{itemize} Therefore the event $h_A = h_B$ is (almost everywhere) equal to $h_{A \cup B} = h_{A \cap B}$. Furthermore, notice that for any set $S \subseteq U$ and any $i \in S$ we have $\Pr[h(i) = h_S] = 1/|S|$ due to symmetry. Therefore \[ \Pr[h_A = h_B] = \Pr[h_{A \cap B} = h_{A \cup B}] = \sum_{i \in A \cap B} \Pr[h(i) = h_{A \cup B}] = |A \cap B| \cdot \frac{1}{|A \cup B|} = J(A,B). \]
| 0.1
|
M1_preference_data_56
|
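A quick Monte-Carlo check of the min-hash identity derived in this row, $\Pr[h_A = h_B] = J(A,B)$, assuming $h(i) \sim \text{Uniform}[0,1]$ i.i.d. (toy sets of my own choosing):

```python
import random

def jaccard(A, B):
    return len(A & B) / len(A | B)

def minhash_trial(U, A, B, rng):
    # Draw a fresh uniform hash for every element of the universe.
    h = {i: rng.random() for i in U}
    return min(h[i] for i in A) == min(h[i] for i in B)

rng = random.Random(0)
U = set(range(10))
A, B = {0, 1, 2, 3}, {2, 3, 4, 5}
trials = 20000
est = sum(minhash_trial(U, A, B, rng) for _ in range(trials)) / trials
print(jaccard(A, B), est)   # J = 2/6; the estimate should be close
```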
Assume you are writing server-side code for an online shop. The code will be invoked over HTTP from a mobile app. Your current code is as follows:
public class ShoppingCart {
public void buy(Product product, int quantity) {
if (product == null) { throw new IllegalArgumentException("product cannot be null"); }
if (quantity < 1) { throw new IllegalArgumentException("quantity must be at least 1"); }
int price = product.getUnitPrice() * quantity;
int discount = computeDiscount(product, quantity);
int shippingFees = computeShippingFees(product, quantity);
int totalPrice = price - discount + shippingFees;
// this triggers a call to the actual credit card processor
CreditCardProcessor.billCurrentUser(totalPrice);
}
private int computeDiscount(Product product, int quantity) {
// ... discount computation logic ...
}
private int computeShippingFees(Product product, int quantity) {
// ... shipping fees computation logic ...
}
}
A colleague states that a null product should throw a checked exception, not an "IllegalArgumentException", because users might input bad data in the app. Explain in 1 sentence whether this is a good idea and why or why not.
|
Any NLP application that requires the assessment of the semantic proximity between textual entities (text, segments, words, ...) might benefit from the semantic vectorial representation. Information retrieval is of course one of the prototypical applications illustrating the potentiality of the VS techniques. However, many other applications can be considered:
\begin{itemize}
\item automated summarization: the document to summarize is split into passages; each of the passages is represented in a vector space and the passage(s) that are the 'most central' in the set of vectors thus produced are taken as good candidates for the summary to generate;
\item semantic disambiguation: when polysemic words (such as 'pen', which can be a place to put cows or a writing instrument) are a problem (for example in machine translation), vectorial representations can be generated for the different possible meanings of a word (for example from machine-readable dictionaries) and used to disambiguate the occurrences of an ambiguous word in documents;
\item automated routing of messages to users: each user is represented by the vector representing the semantic content of the messages s/he has received so far, and any new incoming message is routed only to those users whose representative vector is similar enough to the vector representing the content of the incoming message;
\item text categorization or clustering
\item ...
\end{itemize}
| 0.1
|
M1_preference_data_57
|
Given a document collection with a vocabulary consisting of three words, $V = {a,b,c}$, and two documents $d_1$ = aabc and $d_2 = abc$. The query is $q = ab$. Using smoothed probabilistic retrieval (with $\lambda=0.5$), is it possible to enforce both a ranking $d_1 > d_2$ and $d_2 > d_1$ by adding suitable documents to the collection. If yes, give examples of such documents to be added, if no, provide an argument why this cannot be the case.
|
1. Rotating registers: A register file with a hardware-managed
offset added to the register addresses being accessed, renaming
registers automatically across loop iterations.
2. (Rotating) predicates: to enable only the active stages of
the loop.
3. Loop count register: Tracks the number of loop iterations
remaining to be done.
4. Epilogue count: when the loop count reaches zero, keep the
loop active for some cycles during the epilogue until the
pipeline is flushed.
5. Special control flow instructions to manage the loop
execution.
| 0.1
|
M1_preference_data_58
|
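For the smoothed probabilistic retrieval question in this row, a scoring sketch with Jelinek-Mercer smoothing and $\lambda = 0.5$ (assumption: $\lambda$ weighs the document model, i.e., $\hat P(w\mid d) = \lambda P(w\mid d) + (1-\lambda) P(w\mid C)$):

```python
from collections import Counter

def score(query, doc, collection, lam=0.5):
    # Query likelihood under a unigram language model with JM smoothing.
    d, c = Counter(doc), Counter(collection)
    p = 1.0
    for w in query:
        p *= lam * d[w] / len(doc) + (1 - lam) * c[w] / len(collection)
    return p

d1, d2 = "aabc", "abc"
coll = d1 + d2              # the collection model pools both documents
print(score("ab", d1, coll), score("ab", d2, coll))
```

With this convention the original collection already ranks $d_1 > d_2$.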
Assume you decide to contribute to an open source project, by adding a feature to an existing class of the project. The class uses an underscore at the beginning of names then "camelCase" for private properties such as "_likeThis", but you find this odd because you're used to the "snake case" "like_this". Which option would you choose for the name of the new private property that you will add?
|
from sklearn.metrics import mean_squared_error
from math import sqrt

def rmse(prediction, ground_truth):
    prediction = prediction[ground_truth.nonzero()].flatten()
    ground_truth = ground_truth[ground_truth.nonzero()].flatten()
    return sqrt(mean_squared_error(prediction, ground_truth))
| 0.1
|
M1_preference_data_59
|
For this homework you will use a dataset of 18,403 music reviews scraped from Pitchfork¹, including relevant metadata such as review author, review date, record release year, review score, and genre, along with the respective album's audio features pulled from Spotify's API. The data consists of the following columns: artist, album, recordlabel, releaseyear, score, reviewauthor, reviewdate, genre, key, acousticness, danceability, energy, instrumentalness, liveness, loudness, speechiness, valence, tempo.
Create a new column 'album_number' which indicates how many albums the artist has produced before this one (before the second album, the artist has already produced one album).
|
$\text{Var}[\wv^\top \xx] = \frac{1}{N} \sum_{n=1}^N (\wv^\top \xx_n)^2$
| 0.1
|
M1_preference_data_60
|
Implement RSME score based on the following formula. \begin{equation} \mathit{RMSE} =\sqrt{\frac{1}{N} \sum_i (r_i -\hat{r_i})^2} \end{equation} You can use the mean_squared_error function from sklearn.metrics.
|
Every process uses TRB to broadcast its proposal. Let p be any process: eventually, every correct process either delivers p’s proposal or ⊥ (if p fails). Eventually, every correct process has the same set of proposals (at least one is not ⊥, since not every process crashes). Processes use a shared but arbitrary function to extract a decision out of the set of proposals (e.g., sort alphabetically and pick the first).
| 0.1
|
M1_preference_data_61
|
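For the RMSE question in this row, a direct implementation of the stated formula (the `sklearn.metrics.mean_squared_error` route mentioned in the question would work equally well):

```python
from math import sqrt

def rmse(ratings, predictions):
    # RMSE = sqrt( (1/N) * sum_i (r_i - rhat_i)^2 )
    n = len(ratings)
    return sqrt(sum((r - p) ** 2 for r, p in zip(ratings, predictions)) / n)

print(rmse([3, 4, 5], [3, 4, 5]))  # perfect predictions -> 0.0
print(rmse([1, 2], [3, 4]))        # each off by 2 -> 2.0
```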
Give some arguments justifying why evaluation is especially important for NLP. In particular, explain the role of evaluation when a corpus-based approach is used.
|
If you do not get an accuracy of at least 90 percent then you are not really doing anything, since you can get ninety percent by simply always outputting 0.
| 0.1
|
M1_preference_data_62
|
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_i=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that, $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$ . (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: $sim(q,d_j) = \sum^m_{i=1} P(k_i|d_j)P(q|k_i)$ (2) Note that the model described in Question (a) provides a probabilistic interpretation for vector space retrieval where weights are interpreted as probabilities . Compare to the probabilistic retrieval model based on language models introduced in the lecture and discuss the differences.
|
Total order property: Let m1 and m2 be any two messages and suppose p and q are any two correct processes that deliver m1 and m2. If p delivers m1 before m2, then q delivers m1 before m2.
This allows a scenario where faulty process p broadcasts messages 1, 2, 3, and correct processes a, b, c behave as follows:
- Process a delivers 1, then 2.
- Process b delivers 3, then 2.
- Process c delivers 1, then 3.
| 0.1
|
M1_preference_data_63
|
In this problem, we consider a generalization of the min-cost perfect matching problem. The generalization is called the \emph{min-cost perfect $b$-matching problem} and is defined as follows: \begin{description} \item[Input:] A graph $G = (V,E)$ with edge costs $c: E \rightarrow \mathbb{R}$ and degree bounds $b: V \rightarrow \{1,2, \ldots, n\}$. \item[Output:] A subset $F \subseteq E$ of minimum cost $\sum_{e\in F} c(e)$ such that for each vertex $v\in V$: \begin{itemize} \item The number of edges incident to $v$ in $F$ equals $b(v)$, i.e., $|\{e\in F : v \in e\}| = b(v)$. \end{itemize} \end{description} Note that min-cost perfect matching problem is the special case when $b(v) =1$ for all $v\in V$. An example with general $b$'s is as follows: \begin{tikzpicture} \node at (1, 2.8) {Input}; \node[vertex] (u1) at (0,2) {$u_1$}; \node[vertex] (u2) at (0,0) {$u_2$}; \node[vertex] (v1) at (2,2) {$v_1$}; \node[vertex] (v2) at (2,0) {$v_2$}; \node[left = 0.1cm of u1] {$b(u_1) = 1$}; \node[left = 0.1cm of u2] {$b(u_2) = 2$}; \node[right = 0.1cm of v1] {$b(v_1) = 1$}; \node[right = 0.1cm of v2] {$b(v_2) = 2$}; \draw (u1) edge[ultra thick] (v1) edge (v2); \draw (u2) edge (v1) edge[ultra thick] (v2); \begin{scope}[xshift=7cm] \node at (1, 2.8) {Output}; \node[vertex] (u1) at (0,2) {$u_1$}; \node[vertex] (u2) at (0,0) {$u_2$}; \node[vertex] (v1) at (2,2) {$v_1$}; \node[vertex] (v2) at (2,0) {$v_2$}; \draw (u1) edge (v2); \draw (u2) edge (v1) edge[ultra thick] (v2); \end{scope} \end{tikzpicture} On the left, we illustrate the input graph with the degree bounds (the $b$'s). Thin and thick edges have cost $1$ and $2$, respectively. On the right, we illustrate a solution of cost $1+1 +2 = 4$. It is a feasible solution since the degree of each vertex $v$ equals $b(v)$ in the solution. 
Your task is to prove the following statement: If the input graph $G=(V,E)$ is bipartite then any extreme point solution to the following linear programming relaxation (that has a variable $x_e$ for every edge $e\in E$) is integral: \begin{align*} \textbf{Minimize} \hspace{0.8cm} & \sum_{e\in E} c(e) x_e\\ \textbf{subject to}\hspace{0.8cm} & \sum_{e\in E: v\in e} x_e = b(v) \qquad \mbox{for all $v\in V$}\\ \hspace{0.8cm} & \hspace{0.9cm} 0 \leq x_e \leq 1 \hspace{0.9cm} \mbox{for all $e\in E$}. \end{align*} {\em (In this problem you are asked to prove that every extreme point solution to the above linear program is integral assuming that the input graph $G$ is bipartite. Recall that you are allowed to refer to material covered in the lecture notes.)}
|
def communities_modularity(G, nodes_community):
    '''
    input:  G: nx.Graph
            nodes_community: {node_id: community_id}
    output: Q (modularity metric)
    '''
    Q = 0
    m = len(G.edges)
    for node_i in G.nodes:
        for node_j in G.nodes:
            if nodes_community[node_i] == nodes_community[node_j]:
                Q += G.number_of_edges(node_i, node_j) - G.degree[node_i] * G.degree[node_j] / (2 * m)
    Q = Q / (2 * m)
    return Q
| 0.1
|
M1_preference_data_64
|
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website!
Your column generation system has become quite successful and you've managed to automate most of your job simply by typing your editor's title pitches into your model to produce your column every day. Two years later, during the COVID--25 pandemic, your editor proposes to use your system to generate an information sheet about the pandemic for anyone looking for information about symptoms, treatments, testing sites, medical professionals, etc. Given the similarity to a previous pandemic many years before, COVID--19, you train your model on all news articles published about COVID--19 between the years of 2019--2022. Then, you generate the information page from your trained model.
Give an example of a potential harm that your model could produce from the perspective of human interaction harms.
|
As a visually impaired user, I want my reading assistant to be able to read the jokes out loud, so that I can make my friends laugh.
| 0.1
|
M1_preference_data_65
|
The purpose of this first exercise part is to ensure that the predictions produced by minimizing the true $\phi$-risk are optimal. As for the $0-1$ loss, it can be shown that the true $\phi$-risk is minimized at a predictor $g^\star:\mathcal X \to \R$ satisfying for all $\xv\in\mathcal X$:
Let $b: \R \to \R$ be a function that preserves the sign, i.e., $b(\R_+^*)\subseteq \R_+^*$ and $b(\R_-^*)\subseteq \R_-^*$. Show that
\begin{align*}
\mathcal L (g)-\mathcal L^\star \leq \mathbb E[|2\eta(X)-1-b(g(X))|]
\end{align*}
given that
\begin{align*}
\mathcal L (g)-\mathcal L^\star = \mathbb E[\boldsymbol{\mathbb{1}}_{g(X)g^\star(X)<0}|2\eta(X)-1|].
\end{align*}
|
1. Continuous integration, even paired with tests, cannot guarantee the code "never has bugs"
2. Feature branches should be allowed to fail tests, otherwise developers will not commit enough and risk losing data
| 0.1
|
M1_preference_data_66
|
Assume that you are part of a team developing a mobile app using Scrum.
When using the app, you identified multiple bugs and features which you think should be implemented, and took some notes. You want to
share these with the Product Owner. Your backlog of tasks includes the following task:
- [ ] As a registered user, I can click on the settings button from the navigation pane, so I can access the settings screen from everywhere in the app.
Is this item suitable to be submitted to the Product Backlog? Why?
|
The load should become an advanced load and there must be
a \texttt{chk.a} instruction right after the store to
\texttt{r3}; the recovery code consists of the load simply
repeated and of the incrementation of \texttt{r1} also
repeated.
| 0.1
|
M1_preference_data_67
|
List two common types of exceptions which must be implemented
precisely. Explain why.
|
Given a total-order broadcast primitive TO, a consensus abstraction is obtained as follows:
upon init do
    decided := false
end

upon propose(v) do
    TO-broadcast(v)
end

upon TO-deliver(v) do
    if not decided then
        decided := true
        decide(v)
    end
end
When a process proposes a value v in consensus, it TO-broadcasts v. When the first message is TO-delivered containing some value x, a process decides x.
Since the total-order broadcast delivers the same sequence of messages at every correct process, and every TO-delivered message has been TO-broadcast, this abstraction implements consensus.
| 0.1
|
M1_preference_data_68
|
In which class of processors do you expect to find reservation
stations?
|
Firstly, the destination address of the load itself and of
all the preceding stores must be known. Secondly, there should
be no collision with one of the store addresses (and in this
case the load can be sent to memory) or, if there is any, the
data for the latest colliding store must be known (and in this
case this data value is returned as the result).
| 0.1
|
M1_preference_data_69
|
Consider an HMM Part-of-Speech tagger, the tagset of which contains, among others: DET, N, V, ADV and ADJ, and some of the parameters of which are:
$$
\begin{gathered}
P_{1}(\mathrm{a} \mid \mathrm{DET})=0.1, \quad P_{1}(\text {accurately} \mid \mathrm{ADV})=0.1, \quad P_{1}(\text {computer} \mid \mathrm{N})=0.1, \\
P_{1}(\text {process} \mid \mathrm{N})=0.095, \quad P_{1}(\text {process} \mid \mathrm{V})=0.005, \\
P_{1}(\text {programs} \mid \mathrm{N})=0.080, \quad P_{1}(\text {programs} \mid \mathrm{V})=0.020,
\end{gathered}
$$
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{5}{|l|}{$\mathrm{Y} \rightarrow$} \\
\hline
& & $\mathrm{DET}$ & N & V & ADJ & $\mathrm{ADV}$ \\
\hline
\multirow[t]{5}{*}{$X \downarrow$} & $\mathrm{DET}$ & 0 & 0.55 & 0 & 0.02 & 0.03 \\
\hline
& $\mathrm{N}$ & 0.01 & 0.10 & 0.08 & 0.01 & 0.02 \\
\hline
& V & 0.16 & 0.11 & 0.06 & 0.08 & 0.08 \\
\hline
& ADJ & 0.01 & 0.65 & 0 & 0.05 & 0 \\
\hline
& ADV & 0.08 & 0.02 & 0.09 & 0.04 & 0.04 \\
\hline
\end{tabular}
\end{center}
$P_{2}(\mathrm{Y} \mid \mathrm{X}):\left(\right.$ for instance $\left.P_{2}(\mathrm{~N} \mid \mathrm{DET})=0.55\right)$
and:
$P_{3}(\mathrm{DET})=0.20, \quad P_{3}(\mathrm{~N})=0.06, \quad P_{3}(\mathrm{~V})=0.08, \quad P_{3}(\mathrm{ADV})=0.07, \quad P_{3}(\mathrm{ADJ})=0.02$.
What would be the output of the HMM PoS tagger on the above sentence?
Fully justify your answer.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\mathrm{x}$ & $\mathrm{y}$ & $\mathrm{x|N}$ & $\mathrm{process|x}$ & $\mathrm{y|x}$ & $\mathrm{programs|y}$ & $\mathrm{ADV|y}$ \\
\hline\hline
$\mathrm{N}$ & $\mathrm{N}$ & 10 & 95 & 10 & 80 & 2 \\
\hline
$\mathrm{V}$ & $\mathrm{N}$ & 8 & 5 & 11 & 80 & 2 \\
\hline
$\mathrm{N}$ & $\mathrm{V}$ & 10 & 95 & 8 & 20 & 8 \\
\hline
$\mathrm{V}$ & $\mathrm{V}$ & 8 & 5 & 6 & 20 & 8 \\
\hline
\end{tabular}
\end{center}
|
We prove that all the extreme points are integral by contradiction. To that end, assume that there exists an extreme point $x^*$ that is not integral. Let $G=(V_1,V_2,E)$ be the given bipartite graph and let $E_f= \{e \in E \text{ }|\text{ } 0 < x_e^* < 1\}$. If $E_f$ contains a cycle, then the proof follows in the same way as the proof in the lecture notes. Therefore, we assume that $E_f$ does not contain any cycles. Consider any maximal path in $E_f$; let it have vertices $v_1,...,v_{k}$ and edges $e_1,...,e_{k-1}$. Choose any $\epsilon$ such that $0 < \epsilon < \min(x^*_{e_i},1-x^*_{e_i} : i = 1, ..., k-1)$. Note that, since $E_f$ only contains edges that are fractional, such an $\epsilon$ exists. Let $y,z$ be the following two solutions to the linear program: \[ y = \left\{ \begin{array}{l l} x^*_e + \epsilon & \quad \text{if } e \in \{e_1,e_3,e_5,e_7,...\} \\ x^*_e - \epsilon & \quad \text{if } e \in \{e_2,e_4,e_6,e_8,...\} \\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] \[ z = \left\{ \begin{array}{l l} x^*_e - \epsilon & \quad \text{if } e \in \{e_1,e_3,e_5,e_7,...\} \\ x^*_e + \epsilon & \quad \text{if } e \in \{e_2,e_4,e_6,e_8,...\} \\ x^*_e & \quad \text{otherwise}\\ \end{array} \right.\] One can see that $x^* = \frac{y+z}{2}$. We continue by showing that $y$ is a feasible solution to the linear program. One can see that for any vertex $v \in G$ except $v_1$ and $v_k$ we have $\sum_{e \in \delta(v)} y_e = \sum_{e \in \delta(v)} x^*_e $. So, we only need to show that the linear program constraint holds for $v_1$ and $v_k$. Let us first state two observations. First, by the definition of $\epsilon$, we have that $0 \leq x^*_{e_1}+\epsilon \leq 1$, $0 \leq x^*_{e_1}-\epsilon \leq 1$, $0 \leq x^*_{e_{k-1}}+\epsilon \leq 1$, and $0 \leq x^*_{e_{k-1}}-\epsilon \leq 1$. Second, since the path is maximal and $E_f$ does not contain any cycles, the degrees of $v_1$ and $v_k$ in $E_f$ are both one. 
Therefore $\sum_{e \in \delta(v_1)} y_e = y_{e_1}$ and $\sum_{e \in \delta(v_k)} y_e = y_{e_{k-1}}$. Putting together the previous two observations, we get that the linear program constraint also holds for $v_1$ and $v_k$, so $y$ is a feasible solution. We can similarly show that $z$ is also a feasible solution. This shows that we can write $x^*$ as a convex combination of $y$ and $z$, which contradicts the fact that $x^*$ is an extreme point.
| 0.1
|
M1_preference_data_70
|
Assume you're working for a startup that develops a university management app. You just received a description of what the app should do:
> This app will be the administrative backbone of the university.
> Almost all staff will use it.
> Human Resources will register each student, including their personal details, and use the system to ensure each student follows the rules concerning the duration of studies, the number of courses that must be taken, the payment of all applicable fees...
> Professors will use the app to input grades and to send informational messages to students in their courses.
> Students will be able to see the list of courses and register for a course.
> Staff members will also be able to update their personal details, including their banking coordinates for their salary.
Write a user story, in a single sentence using the below format, that summarizes this conversation:
> As a student, I want to ... so that ...
Your story must contain all necessary information and only that information.
|
['1']
| 0.1
|
M1_preference_data_71
|
Let $A \in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$ and $c\in \mathbb{R}^n$. Consider the following linear program with $n$ variables: \begin{align*} \textbf{maximize} \hspace{0.8cm} & c^Tx \\ \textbf{subject to}\hspace{0.8cm} & Ax =b \\ \hspace{0.8cm} & x \geq 0 \end{align*} Show that any extreme point $x^*$ has at most $m$ non-zero entries, i.e., $|\{i: x^*_i > 0 \}| \leq m$. \\[-0.2cm] \noindent \emph{Hint: what happens if the columns corresponding to non-zero entries in $x^*$ are linearly dependent?}\\[-0.2cm] {\small (If you are in a good mood you can prove the following stronger statement: $x^*$ is an extreme point if and only if the columns of $A$ corresponding to non-zero entries of $x^*$ are linearly independent.)}
|
We assume that we cannot directly compute $\phi(\mathbf{x})$. The complexity would be too high. Instead, we will now apply the kernel trick to this problem:
| 0.1
|
M1_preference_data_72
|
What is the formal relation between accuracy and the error rate? In which case would you recommend to use the one or the other?
|
This is a compatibility break, the method should be deprecated instead and removed in some future release after it has been deprecated for some time.
| 0.1
|
M1_preference_data_73
|
Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List).
def contains[A](l: Iterable[A], elem: A): Boolean =
  val n = l.size
  if n <= 5 then
    for i <- l do
      if i == elem then return true
    false
  else
    val (p0, p1) = parallel(
      contains(l.take(n / 2), elem),
      contains(l.drop(n / 2), elem)
    )
    p0 || p1
Let $n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. What is the asymptotic depth of contains if it is called on a List?
|
Let $y$ be an optimal dual solution. By complementary slackness we have that for each set $S$, either $x^*_s = 0$ or $\sum_{e\in S} y_e = c(S)$. Let us now compute the cost of our algorithm. The cost of the algorithm is $\sum_{S:x^*_S > 0} c(S)$. By complementarity slackness we get that \[ \sum_{S:x^*_S > 0} c(S) = \sum_{S:x^*_S > 0} \sum_{e\in S} y_e \leq \sum_{S} \sum_{e\in S} y_e = \sum_{e \in U} y_e \sum_{S \ni e} 1 \le \sum_{e\in U} f\cdot y_e. \] We also know that (since $y$ is a feasible dual solution) $\sum_{e\in U} y_e \leq OPT$. Therefore the cost of the above algorithm is at most $f\cdot OPT$.
| 0.1
|
M1_preference_data_74
|
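For the contains question in this row, a numeric sketch of the depth recurrence, assuming take/drop cost $\Theta(n)$ on List: $D(n) = D(n/2) + c\,n$ for $n > 5$, which telescopes into a geometric sum $c(n + n/2 + n/4 + \cdots) \le 2cn$, i.e., $\Theta(n)$ depth:

```python
def depth(n, c=1):
    # Depth contribution of one recursion level is the take/drop cost c*n;
    # only one of the two parallel branches counts toward depth.
    if n <= 5:
        return c * n          # sequential scan of at most 5 elements
    return depth(n // 2, c) + c * n

n = 1 << 20
assert depth(n) <= 2 * n      # geometric-sum bound: depth is Theta(n)
print(depth(n))
```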
Prove that if a^2 is even, a is even.
|
The colleague should start off by profiling first. He might have an idea to improve the performance of a component of the app, but it might not be the performance bottleneck. Thus, the end user won't notice any improvement.
| 0.1
|
M1_preference_data_75
|
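A contrapositive proof sketch for the parity question in this row:

```latex
\begin{proof}[Sketch, by contraposition]
Suppose $a$ is odd, i.e., $a = 2k+1$ for some integer $k$. Then
\[ a^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, \]
which is odd. By contraposition, if $a^2$ is even then $a$ is even.
\end{proof}
```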
In the following problem Alice holds a string $x = \langle x_1, x_2, \ldots, x_n \rangle$ and Bob holds a string $y = \langle y_1, y_2, \ldots, y_n\rangle$. Both strings are of length $n$ and $x_i, y_i \in \{1,2,\ldots, n\}$ for $i=1,2, \ldots, n$. The goal is for Alice and Bob to use little communication to estimate the quantity \begin{align*} Q = \sum_{i=1}^n (x_i + y_i)^2\,. \end{align*} A trivial solution is for Alice to transfer all of her string $x$ to Bob who then computes $Q$ exactly. However this requires Alice to send $\Theta(n \log n)$ bits of information to Bob. In the following, we use randomization and approximation to achieve a huge improvement on the number of bits transferred from Alice to Bob. Indeed, for a small parameter $\epsilon > 0$, your task is to devise and analyze a protocol of the following type: \begin{itemize} \item On input $x$, Alice uses a randomized algorithm to compute a message $m$ that consists of $O(\log (n)/\epsilon^2)$ bits. She then transmits the message $m$ to Bob. \item Bob then, as a function of $y$ and the message $m$, computes an estimate $Z$. \end{itemize} Your protocol should ensure that \begin{align} \label{eq:guaranteeStream} \Pr[| Z - Q| \geq \epsilon Q] \leq 1/3\,, \end{align} where the probability is over the randomness used by Alice.\\ {\em (In this problem you are asked to (i) explain how Alice computes the message $m$ of $O(\log(n)/\epsilon^2)$ bits (ii) explain how Bob calculates the estimate $Z$, and (iii) prove that the calculated estimate satisfies~\eqref{eq:guaranteeStream}. Recall that you are allowed to refer to material covered in the lecture notes.) }
|
The algorithm is as follows: \begin{itemize} \item If $x_1 \geq W$, then do the exchange on day $1$ and receive $x_1$ Swiss francs. \item Otherwise, do the exchange on day $2$ and receive $x_2$ Swiss francs. \end{itemize} We now analyze its competitiveness. If $x_1 \geq W$, then our algorithm gets at least $W$ Swiss francs. Optimum is at most $W^2$ and so we are $1/W$ competitive. Otherwise if $x_1 < W$ then we get $x_2 \geq 1$ Swiss francs which is $x_2/ \max(x_2, x_1) \geq 1/W$ competitive.
| 0.1
|
M1_preference_data_76
|
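For the Alice/Bob question in this row, a sketch of a shared-randomness sign-sketch protocol (an AMS-style second-moment estimator; the course's exact protocol, message encoding, and median-of-means analysis may differ — this only checks unbiasedness on a toy input). Alice sends the $k$ inner products $\langle \sigma^j, x\rangle$; Bob adds his own $\langle \sigma^j, y\rangle$ and squares:

```python
import random

def estimate(x, y, k, rng):
    total = 0.0
    for _ in range(k):
        sigma = [rng.choice((-1, 1)) for _ in range(len(x))]
        a = sum(s * xi for s, xi in zip(sigma, x))  # Alice's message entry
        b = sum(s * yi for s, yi in zip(sigma, y))  # Bob's local contribution
        total += (a + b) ** 2                       # E[(a+b)^2] = Q
    return total / k

rng = random.Random(1)
x, y = [1, 2, 3, 4], [4, 3, 2, 1]
Q = sum((xi + yi) ** 2 for xi, yi in zip(x, y))     # (1+4)^2 * 4 = 100
Z = estimate(x, y, 4000, rng)
print(Q, Z)
```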
You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline Price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.
|
1. The adjacency graph has ones everywhere except for (i) no
edges between \texttt{sum} and \texttt{i}, and between
\texttt{sum} and \texttt{y\_coord}, and (ii) five on the edge
between \texttt{x\_coord} and \texttt{y\_coord}, and two on the
edge between \texttt{i} and \texttt{y\_coord}.
2. Any of these solutions should be optimal either as shown or
reversed:
- \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i},
\texttt{j}, \texttt{sum}
- \texttt{j}, \texttt{sum}, \texttt{x\_coord},
\texttt{y\_coord}, \texttt{i}
- \texttt{sum}, \texttt{j}, \texttt{x\_coord},
\texttt{y\_coord}, \texttt{i}
3. Surely, this triad should be adjacent: \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i}.
| 0.1
|
M1_preference_data_77
|
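For the LP-formulation question in this row, one natural formulation (a sketch; $x_1, x_2, x_3$ denote kg consumed per week of fondue moitie moitie, fondue a la tomate, and raclette respectively, with coefficients read off the table):

```latex
\begin{align*}
\textbf{minimize} \quad   & 50x_1 + 75x_2 + 60x_3 \\
\textbf{subject to} \quad & 35x_1 + 0.5x_2 + 0.5x_3 \geq 0.5 && \text{(Vitamin A)}\\
                          & 60x_1 + 300x_2 + 0.5x_3 \geq 15  && \text{(Vitamin B)}\\
                          & 30x_1 + 20x_2 + 70x_3 \geq 4     && \text{(Vitamin C)}\\
                          & x_1, x_2, x_3 \geq 0
\end{align*}
```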
Your colleague wants your opinion on a module design question. They are developing a service that recommends hikes near users based on the weather, and they think the module should take as input a weather service, a service that lists hikes, a function that sorts hikes by length, and outputs an array of hikes.
What do you think? (the answer should make it possible to have automated tests for the module)
|
def compute_precision_at_k(retrieved_tweets, gt, k=5):
    """
    Computes the precision score at a defined number of retrieved documents (k).
    :param retrieved_tweets: list of predictions
    :param gt: list of actual relevant data
    :param k: int
    :return: float, the precision at a given k
    """
    results = retrieved_tweets.merge(gt, how="outer", on="id")
    return np.array(results[:k]['relevant'].tolist()).mean()
| 0.1
|
M1_preference_data_78
|
Implement the function `check_words` that checks if the words of a string have common words with a list. Write your code in Python. Your code should be agnostic to lower/upper case.
|
import numpy as np

df["authors_publications_last"] = df["authors_publications"].apply(lambda a: int(str(a).split(";")[-1]))
df["authors_citations_last"] = df["authors_citations"].apply(lambda a: int(str(a).split(";")[-1]))
df["reputation"] = np.log10(df["authors_citations_last"]/df["authors_publications_last"] + 1)
| 0.1
|
M1_preference_data_79
|
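For the `check_words` question in this row, a minimal sketch: a case-insensitive test for whether the string shares any word with the given list.

```python
def check_words(text, word_list):
    # Lower-case both sides so the comparison is case-agnostic.
    words = set(text.lower().split())
    return any(w.lower() in words for w in word_list)

print(check_words("The Quick Brown Fox", ["fox", "dog"]))  # True
print(check_words("hello world", ["Goodbye"]))             # False
```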
Consider the following matrix-factorization problem. For the observed ratings $r_{u m}$ for a given pair $(u, m)$ of a user $u$ and a movie $m$, one typically tries to estimate the score by $$ f_{u m}=\left\langle\mathbf{v}_{u}, \mathbf{w}_{m}\right\rangle+b_{u}+b_{m} $$ Here $\mathbf{v}_{u}$ and $\mathbf{w}_{m}$ are vectors in $\mathbb{R}^{D}$ and $b_{u}$ and $b_{m}$ are scalars, indicating the bias. Assume that our objective is given by $$ \frac{1}{2} \sum_{u \sim m}\left(f_{u m}-r_{u m}\right)^{2}+\frac{\lambda}{2}\left[\sum_{u \in \mathbf{U}}\left(b_{u}^{2}+\left\|\mathbf{v}_{u}\right\|^{2}\right)+\sum_{m \in \mathbf{M}}\left(b_{m}^{2}+\left\|\mathbf{w}_{m}\right\|^{2}\right)\right] $$ where $\lambda>0$. Here $\mathbf{U}$ denotes the set of all users, $M$ the set of all movies, and $u \sim m$ represents the sum over all $(u, m)$ pairs for which a rating exists. Write the optimal values of $b_{u}$, provided that all other values are fixed.
|
Using meta data of the movie as additional information to encode the similarity, perhaps approximating the corresponding weight as a linear combination of existing movies based on their similarities in terms of meta information.
| 0.1
|
M1_preference_data_80
|
We have a collection of rectangles in a plane, whose sides are aligned with the coordinate axes. Each rectangle is represented by its lower left corner $(x_1,y_1)$ and its upper right corner $(x_2,y_2)$. All coordinates are of type Long. We require $x_1 \le x_2$ and $y_1 \le y_2$. How can the result be computed in parallel? Which properties of hull2 need to hold to make the solution correct? Prove these properties for hull2.
|
Suppose that completeness is violated. Then, the processes might not be relaying messages they should be relaying. This may violate agreement. For instance, assume that only a single process p1 BEB-delivers (hence RB-delivers) a message m from a crashed process p2. If a failure detector (at p1) does not ever suspect p2, no other correct process will deliver m (agreement is violated).
| 0.1
|
M1_preference_data_81
|
Consider the following snippet used to produce a
high-performance circuit using a statically scheduled HLS tool, such
as Xilinx Vivado HLS. Assume that a \verb+double+ multiplication
takes several cycles (latency) to compute.
\begin{verbatim}
double a[ARRAY_SIZE] = ...;
int b = 1;
for (int i = 0; i < ARRAY_SIZE; i++)
if (a[i] * (double) b >= CONST)
b++;
\end{verbatim}
Would a compiler for a VLIW processor like Itanium experience more, less, or different problems in scheduling efficiently the snippet above? Explain.
|
Noticing that $80 \cdot 2=20 \cdot 8$, only the first three enter the game, among which the first is clearly the best.
The output will thus be
a (DET)
computer (N)
process (N)
programs (N)
accurately (ADV)
| 0.1
|
M1_preference_data_82
|
One of your colleagues has recently taken over responsibility for a legacy codebase, a library currently used by some of your customers. Before making functional changes, your colleague found a bug caused by incorrect use of the following method in the codebase:
public class User {
/** Indicates whether the user’s browser, if any, has JavaScript enabled. */
public boolean hasJavascriptEnabled() { … }
// … other methods, such as getName(), getAge(), ...
}
Your colleague believes that this is a bad API. You are reviewing the pull request your colleague made to fix this bug. Part of the pull request deletes the "hasJavascriptEnabled" method from the code, but you disagree. Explain in 1 sentence why this could cause issues and what should be done instead.
|
1. Modulo scheduling is a loop pipelining technique. It
transforms a loop kernel in a way to expose maximum
parallelism and minimize the loop initiation interval.
2. Compared to basic software pipelining, it does not need an
explicit prologue and epilogue.
| 0.1
|
M1_preference_data_83
|
You have been hired to evaluate an email monitoring system aimed at detecting potential security issues. The targeted goal of the application is to decide whether a given email should be further reviewed or not. Give four standard measures usually considered for the evaluation of such a system? Explain their meaning. Briefly discuss their advantages/drawbacks.
|
we still lack 5\%: 16 to 20 will provide it: 20 tokens altogether.
| 0.1
|
M1_preference_data_84
|
Consider two Information Retrieval systems S1 and S2 that produced the following outputs for
the 4 reference queries q1, q2, q3, q4:
S1: | referential:
q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04
q2: d06 dXX dXX dXX dXX | q2: d05 d06
q3: dXX d07 d09 d11 dXX dXX dXX dXX dXX | q3: d07 d08 d09 d10 d11
q4: d12 dXX dXX d14 d15 dXX dXX dXX dXX | q4: d12 d13 d14 d15
S2: | referential:
q1: dXX dXX dXX dXX d04 | q1: d01 d02 d03 d04
q2: dXX dXX d05 d06 | q2: d05 d06
q3: dXX dXX d07 d08 d09 | q3: d07 d08 d09 d10 d11
q4: dXX d13 dXX d15 | q4: d12 d13 d14 d15
where dXX refer to document references that do not appear in the referential. To make the
answer easier, we copied the referential on the right.
For each of the two systems, compute the mean Precision and Recall measures (provide the
results as fractions). Explain all the steps of your computation.
|
Proof sketch Show that $D(L) \leq D'(L)$ for all $1 \leq L$. Then, show that, for any $1 \leq L_1 \leq L_2$, we have $D'(L_1) \leq D'(L_2)$. This property can be shown by induction on $L_2$. Finally, let $n$ be such that $L \leq 2n < 2L$. We have that: $$\begin{align} D(L) &\leq D'(L) &\text{Proven earlier.} \\ &\leq D'(2n) &\text{Also proven earlier.} \\ &\leq \log_2(2n) (d + cT) + cT \\ &< \log_2(2L) (d + cT) + cT \\ &= \log_2(L) (d+cT) + \log_2(2) (d+cT) + cT \\ &= \log_2(L) (d+cT) + d + 2cT \end{align}$$ Done.
| 0.1
|
M1_preference_data_85
|
Why a data prefetcher could hinder a Prime+Probe cache attack?
How can the attacker overcome this problem?
|
Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))
| 0.1
|
M1_preference_data_86
|
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture):
...some picture...
Explanation of (some) tags:
\begin{center}
\begin{tabular}{l|l|l|l}
Tag & English expl. & Expl. française & Example(s) \\
\hline
JJ & Adjective & adjectif & yellow \\
NN & Noun, Singular & nom commun singulier & cat \\
NNS & Noun, Plural & nom commun pluriel & cats \\
PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\
RB & Adverb & adverbe & never, quickly \\
VBD & Verb, Past Tense & verbe au passé & ate \\
VBN & Verb, Past Participle & participe passé & eaten \\
VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\
WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\
\end{tabular}
\end{center}
What kind of model (of PoS tagger) is it? What assumption(s) does it rely on?
|
['NP']
| 0.1
|
M1_preference_data_87
|
Consider the following contains function defined on Iterable (in particular, it accepts both Vector and List). def contains[A](l: Iterable[A], elem: A): Boolean = val n = l.size if n <= 5 then for i <- l do if i == elem then return true false else val (p0, p1) = parallel( contains(l.take(n / 2), elem), contains(l.drop(n / 2), elem) ) p0 || p1 Let $n$ be the size of l. Assume that drop and take run in $\Theta(1)$ on Vector and $\Theta(n)$ on List. What is the asymptotic depth of contains if it is called on a Vector?
|
Real-world code typically has too many paths to feasibly cover, e.g., because there are too many "if" conditions, or potentially-infinite loops.
| 0.1
|
M1_preference_data_88
|
/True or false:/ Is the following statement true or false? Justify your answer. "The node with the highest clustering coefficient in an undirected graph is the node that belongs to the largest number of triangles."
|
True
| 0.1
|
M1_preference_data_89
|
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. Recall that in the min $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. Show that the optimal value of the following linear program equals the number of edges crossed by a min $s,t$-cut: \begin{align*} \textbf{minimize} \hspace{0.8cm} & \sum_{e\in E} y_e \\ \textbf{subject to}\hspace{0.8cm} & y_{\{u,v\}} \geq x_u - x_v \qquad \mbox{for every $\{u,v\}\in E$} \\ \hspace{0.8cm} & y_{\{u,v\}} \geq x_v - x_u \qquad \mbox{for every $\{u,v\}\in E$} \\ & \hspace{0.6cm}x_s = 0 \\ & \hspace{0.6cm}x_t = 1 \\ & \hspace{0.6cm}x_v \in [0,1] \qquad \mbox{for every $v\in V$} \end{align*} The above linear program has a variable $x_v$ for every vertex $v\in V$ and a variable $y_e$ for every edge $e\in E$. \emph{Hint: Show that the expected value of the following randomized rounding equals the value of the linear program. Select $\theta$ uniformly at random from $[0,1]$ and output the cut $ S = \{v\in V: x_v \leq \theta\}$.}
|
(a) err = 1 - acc. (b) does not make any sense: they are the same (opposite, actually)
| 0.1
|
M1_preference_data_90
|
What is your take on the accuracy obtained in an unballanced dataset? Do you think accuracy is the correct evaluation metric for this task? If yes, justify! If not, why not, and what else can be used?
|
1. In general, no. Speculative execution needs the ability to
rollback wrongly executed instructions and VLIW typically miss
appropriate data structures for this.
2. In specific cases, processors may implement mechanisms to
implement speculatively instructions (e.g., by not raising
exceptions or detecting colliding memory accesses) and
instructions to detect wrong execution. Compilers are then in
charge to use such mechanisms to explicitly rollback or ignore
operations executed due to wrong speculation. Itanium
implements two such speculative loads making it possible to
implement them speculatively before branches and before
stores, respectively.
3. It could be mentioned that predication is, in a sense, a
form of speculative execution.
| 0.1
|
M1_preference_data_91
|
Consider the following toy learning corpus of 59 tokens (using a tokenizer that splits on whitespaces and punctuation), out of a possible vocabulary of $N=100$ different tokens:
Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous wave mode.
Using a 2-gram language model, what are the values of the parameters corresponding to "continuous wave" and to "pulsed laser" using estimation smoothed by a Dirichlet prior with parameters all equal to $0.01$
|
It can be simply a test followed by a load at an arbitrary
address (an array access protected by the test) and an
indirect access based on the result of that access.
| 0.1
|
M1_preference_data_92
|
In the context of Load Store Queue, What conditions must be satisfied in the LSQ for a load to be executed and the result to be returned to the processor?
|
(a) The log-likelihood is
$$
\begin{aligned}
\mathcal{L} & =\sum_{n=1}^{N}\left(y_{n} \log (\theta)-\theta-\log y_{n} !\right) \\
& =\log (\theta) \sum_{n=1}^{N} y_{n}-N \theta-\log \left(\prod_{n=1}^{N} y_{i} !\right)
\end{aligned}
$$
(b) Taking the derivative with respect to $\theta$, setting the result to 0 and solving for $\theta$ we get
$$
\theta=\frac{1}{N} \sum_{n=1}^{N} y_{n}
$$
(c) The parameter $\theta$ represents the mean of the Poisson distribution and the optimum choice of $\theta$ is to set this mean to the empirical mean of the samples.
| 0.1
|
M1_preference_data_93
|
Can we devise a Best-effort Broadcast algorithm that satisfies the causal delivery property, without being a causal broadcast algorithm, i.e., without satisfying the agreement property of a reliable broadcast?
|
If there are two experts, then the Weighted Majority strategy boils down to following the prediction of the expert who was wrong fewer times in the past. Assume a deterministic implementation of this strategy -- i.e., if the two experts are tied, then listen to the first one. Our example will consist of (arbitrarily many) identical phases; each phase consists of two days and it looks as follows. On the first day, we are going to follow the first expert. The first expert is wrong and the second one is right. Therefore we ``suffer''. On the second day, we are going to follow the second expert. But the first expert is right and the second one is wrong. We suffer again. In total, we suffered twice and each expert suffered only once.
| 0.1
|
M1_preference_data_94
|
The MIPS R10000 fetches four instructions at once and, therefore,
there are four such circuits working in parallel inside the processor. What is the function of the ``Old dest'' field in the ``Active
List''? And what is the function of ``Log dest''? Why are they
needed in the ``Active list''?
|
The model could generate text that suggests treatments to users. As the model is not a medical professional, these treatments could cause harm to the user if followed. The model could also give wrong addresses to testing sites, causing users to be harmed. Others are acceptable.
| 0.1
|
M1_preference_data_95
|
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 2x_1 - x_2 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_2 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{0.85cm} x_2 + s_3 = 2 \\ \hspace{0.8cm} &\hspace{-0.8cm} x_1,\: x_2, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_2 - s_1 \\ s_2 &= 3 -x_2 + s_1 \\ s_3 &= 2 -x_2 \\ \cline{1-2} z &= 2 + x_2 - 2s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.
|
The function contains will create contiguous sub-arrays of less than 5 elements of the array. The work is the total sum of all the work done by every parallel task. A task either splits the array or processes it sequentially. Since splitting is done in $\Theta(1)$ and every element is going to be processed sequentially, the asymptotic work is $\Theta(n)$.
| 0.1
|
M1_preference_data_96
|
What does it mean that a processor supports precise exceptions?
|
Let $\phi_{1}(\cdot)$ be the feature map corresponding to $\kappa_{1}(\cdot, \cdot)$. Then by direct inspection we see that $\phi(\cdot)=\phi_{1}(f(\cdot))$ is the feature map corresponding to $\kappa(f(\cdot), f(\cdot))$. Indeed,
$$
\phi_{1}(f(\mathbf{x}))^{\top} \phi_{1}\left(f\left(\mathbf{x}^{\prime}\right)\right)=\kappa_{1}\left(f(\mathbf{x}), f\left(\mathbf{x}^{\prime}\right)\right)=\kappa\left(\mathbf{x}, \mathbf{x}^{\prime}\right) .
$$
| 0.1
|
M1_preference_data_97
|
Consider an IR engine, which uses an indexing mechanism implementing the following 3 consecutive filters:
a morpho-syntactic filter that restricts indexing term candidates to only nouns, and reduces them to their root forms;
a frequencial filter parameterized with \(f_\text{min}=0.06\) (resp. \(f_\text{max}=0.20\)) as lower (resp. upper) cut-off value, expressed as relative frequencies;
a stop word filter using the following stop list: {a, in, mouse, the}.
and the following document \(d\):
Cats are the worst enemies of rodents. After all, a cat is a cat: as soon as it can, it rushes into the bushes with only one target in mind: mice, mice and mice! Naturally, the cats of houses are less frightening, as for them croquette loaded dressers have replaced prey hiding bushes. Cat's life in the house is easy!...
What is the multi-set resulting from the indexing of document \(d\) by the above described IR engine?
Format your answer as an alphabetically ordered list of the form: "lemma1(tf1), lemma2(tf2), ...", where tfi is the term frequency of indexing term i.
For instance: dog(2), frog(3), zebra(1)
|
['N']
| 0.1
|
M1_preference_data_98
|
Consider the following SCFG with the following probabilities:
S → NP VP
0.8
S → NP VP PP
0.2
VP → Vi
{a1}
VP → Vt NP
{a2}
VP → VP PP
a
NP → Det NN
0.3
NP → NP PP
0.7
PP → Prep NP
1.0
Vi → sleeps 1.0
Vt → saw 1.0
NN → man {b1}
NN → dog b
NN → telescope {b2}
Det → the 1.0
Prep → with 0.5
Prep → in 0.5
What is the value of a? (Give your answer as a numerical value, not as a formula)
|
First, the output is always feasible since we always include all vertices with $x_i \geq 1/2$ which is a feasible vertex cover as seen in class. We proceed to analyze the approximation guarantee. Let $X_i$ be the indicator random variable that $i$ is in the output vertex cover. Then $\Pr[X_i = 1]$ is equal to the probability that $t \leq x_i$ which is $1$ if $x_i \geq 1/2$ and otherwise it is $x_i/(1/2) = 2x_i$. We thus always have that $\Pr[X_i =1] \leq 2x_i$. Hence, \begin{align*} \E[\sum_{i\in S_t} w(i)] = \E[\sum_{i\in V} X_i w(i)] = \sum_{i\in V} \E[X_i]\, w(i) \leq 2 \sum_{i\in V} x_i w(i)\,. \end{align*}
| 0.1
|
M1_preference_data_99
|