question | text | source |
|---|---|---|
Consider a $d$-regular undirected graph $G = (V,E)$ and let $M$ be its normalized adjacency matrix. As seen in class, $M$ has $n= |V|$ eigenvalues $1=\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\geq -1$ and the corresponding eigenvectors ${v}_1, {v}_2, \ldots, {v}_n \in \mathbb{R}^n$ can be selected to be orthogonal vectors where \begin{align*} {v}_1 = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \mbox{ is the all one vector.} \end{align*} Assuming that $\lambda_2 = 1$, your task is to design a procedure \textsc{FindDisconnectedSet}$(v_2)$ that takes as input the second eigenvector and outputs a non-empty subset $S \subsetneq V$ of the vertices such that there is no edge crossing the cut defined by $S$. In other words, the output $S$ must satisfy $S \neq \emptyset, S \neq V$ and any edge $e \in E$ has either both endpoints in $S$ or both endpoints in $V \setminus S$. We remark that your procedure \textsc{FindDisconnectedSet} does \textbf{not} know the edgeset $E$ of the graph. Thus it needs to define the set $S$ only based on the values $v_2(i)$ the second eigenvector assigns to every vertex $i\in V$. \\ {\em (In this problem you are asked to (i) design the algorithm \textsc{FindDisconnectedSet} and (ii) argue that it outputs a non-empty $S \subsetneq V$ that cuts $0$ edges assuming $\lambda_2 = 1$. Recall that you are allowed to refer to material covered in the lecture notes.)} | In such a scenario, the second smallest eigenvalue (λ₂) of L yields a lower bound on the optimal cost (c) of a ratio-cut partition, with c ≥ λ₂/n. The eigenvector (V₂) corresponding to λ₂, called the Fiedler vector, bisects the graph into only two communities based on the sign of the corresponding vector entry. 
Division into a larger number of communities can be achieved by repeated bisection or by using multiple eigenvectors corresponding to the smallest eigenvalues. The examples in Figures 1,2 illustrate the spectral bisection approach. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider a $d$-regular undirected graph $G = (V,E)$ and let $M$ be its normalized adjacency matrix. As seen in class, $M$ has $n= |V|$ eigenvalues $1=\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\geq -1$ and the corresponding eigenvectors ${v}_1, {v}_2, \ldots, {v}_n \in \mathbb{R}^n$ can be selected to be orthogonal vectors where \begin{align*} {v}_1 = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \mbox{ is the all one vector.} \end{align*} Assuming that $\lambda_2 = 1$, your task is to design a procedure \textsc{FindDisconnectedSet}$(v_2)$ that takes as input the second eigenvector and outputs a non-empty subset $S \subsetneq V$ of the vertices such that there is no edge crossing the cut defined by $S$. In other words, the output $S$ must satisfy $S \neq \emptyset, S \neq V$ and any edge $e \in E$ has either both endpoints in $S$ or both endpoints in $V \setminus S$. We remark that your procedure \textsc{FindDisconnectedSet} does \textbf{not} know the edgeset $E$ of the graph. Thus it needs to define the set $S$ only based on the values $v_2(i)$ the second eigenvector assigns to every vertex $i\in V$. \\ {\em (In this problem you are asked to (i) design the algorithm \textsc{FindDisconnectedSet} and (ii) argue that it outputs a non-empty $S \subsetneq V$ that cuts $0$ edges assuming $\lambda_2 = 1$. Recall that you are allowed to refer to material covered in the lecture notes.)} | Every two-graph is equivalent to a set of lines in a euclidean space of some dimension, each pair of which meet in the same angle. The set of lines constructed from a two-graph on n vertices is obtained as follows. Let -ρ be the smallest eigenvalue of the Seidel adjacency matrix, A, of the two-graph, and suppose that it has multiplicity n - d. Then the matrix ρI + A is positive semi-definite of rank d and thus can be represented as the Gram matrix of the inner products of n vectors in euclidean d-space. 
As these vectors have the same norm (namely √ρ) and mutual inner products ±1, any pair of the n lines spanned by them meet in the same angle φ where cos φ = 1/ρ. Conversely, any set of non-orthogonal equiangular lines in a euclidean space can give rise to a two-graph (see equiangular lines for the construction). With the notation as above, the maximum cardinality n satisfies n ≤ d(ρ² - 1)/(ρ² - d) and the bound is achieved if and only if the two-graph is regular. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
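The exam question above asks for exactly the sign-based procedure this Fiedler-vector discussion suggests. Below is a minimal Python sketch (an illustration, not the course's reference solution; the function name and example vector are assumptions): when λ₂ = 1, v₂ is constant on each connected component and orthogonal to the all-ones vector v₁, so taking S = {i : v₂(i) > 0} gives a non-empty proper subset with no crossing edges.

```python
def find_disconnected_set(v2):
    """Given the second eigenvector v2 (one entry per vertex), return the
    set S of vertices with a positive entry.  When lambda_2 = 1, v2 is
    constant on each connected component and orthogonal to the all-ones
    vector, so S is non-empty, S != V, and no edge crosses the cut."""
    return {i for i, x in enumerate(v2) if x > 0}

# Example: a graph with two components {0, 1} and {2, 3}; a valid v2 is
# constant on each component and sums to zero.
v2 = [0.5, 0.5, -0.5, -0.5]
S = find_disconnected_set(v2)
```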
Consider the following case class definitions: case class Node(id: Int) case class Edge(from: Node, to: Node) Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Given a set of nodes within the graph, use the function you defined above to compute the subset of these nodes that belong to a cycle of size 3 within the graph. def cycles3(nodes: Set[Node], edges: List[Edge]): Set[Node] | In an undirected graph, an edge connecting two nodes has a single meaning. In a directed graph, the edges connecting two different nodes have different meanings, depending on their direction. Edges are the key concept in graph databases, representing an abstraction that is not directly implemented in a relational model or a document-store model. Properties are information associated with nodes. For example, if Wikipedia were one of the nodes, it might be tied to properties such as website, reference material, or words that start with the letter w, depending on which aspects of Wikipedia are germane to a given database. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following case class definitions: case class Node(id: Int) case class Edge(from: Node, to: Node) Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Given a set of nodes within the graph, use the function you defined above to compute the subset of these nodes that belong to a cycle of size 3 within the graph. def cycles3(nodes: Set[Node], edges: List[Edge]): Set[Node] | In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines), and for a directed graph are also known as edges but also sometimes arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
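As a sketch of what an answer to the cycles3 question might look like, here is a Python transliteration (the `reachable` helper and all names are assumptions standing in for "the function you defined above"): assuming no self-loops, a node lies on a directed cycle of size 3 exactly when it can reach itself in exactly 3 steps.

```python
def reachable(n, nodes, edges):
    """Nodes reachable in exactly n steps from `nodes` along directed
    `edges`, where each edge is a (from, to) pair."""
    current = set(nodes)
    for _ in range(n):
        current = {t for (f, t) in edges if f in current}
    return current

def cycles3(nodes, edges):
    """Subset of `nodes` lying on a directed cycle of length 3
    (assuming the graph has no self-loops)."""
    return {v for v in nodes if v in reachable(3, {v}, edges)}

# 1 -> 2 -> 3 -> 1 is a 3-cycle; 4 is a dead end off the cycle.
edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
```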
Consider the following definition of trees representing higher-order functions, as well as a recursive function
subst0.
enum Expr:
  case C(c: BigInt)
  case N(name: String)
  case BinOp(op: BinOps, e1: Expr, e2: Expr)
  case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr)
  case Call(fun: Expr, arg: Expr)
  case Fun(param: String, body: Expr)

import Expr._

enum BinOps:
  case Plus, Minus, Times, Power, LessEq

def subst0(e: Expr, n: String, r: Expr): Expr = e match
  case C(c) => e
  case N(s) => if s == n then r else e
  case BinOp(op, e1, e2) =>
    BinOp(op, subst0(e1, n, r), subst0(e2, n, r))
  case IfNonzero(cond, trueE, falseE) =>
    IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r))
  case Call(f, arg) =>
    Call(subst0(f, n, r), subst0(arg, n, r))
  case Fun(formal, body) =>
    if formal == n then e
    else Fun(formal, subst0(body, n, r))
And consider the following expression:
val e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))
What is subst0(e, "x", N("y")) equal to? | Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed. The evaluation operation is defined as follows: (x, y, and z represent expressions made from the functions S, K, and I, and set values): I returns its argument: Ix = x. K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x: Kxy = x. S is a substitution operator. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following definition of trees representing higher-order functions, as well as a recursive function
subst0.
enum Expr:
  case C(c: BigInt)
  case N(name: String)
  case BinOp(op: BinOps, e1: Expr, e2: Expr)
  case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr)
  case Call(fun: Expr, arg: Expr)
  case Fun(param: String, body: Expr)

import Expr._

enum BinOps:
  case Plus, Minus, Times, Power, LessEq

def subst0(e: Expr, n: String, r: Expr): Expr = e match
  case C(c) => e
  case N(s) => if s == n then r else e
  case BinOp(op, e1, e2) =>
    BinOp(op, subst0(e1, n, r), subst0(e2, n, r))
  case IfNonzero(cond, trueE, falseE) =>
    IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r))
  case Call(f, arg) =>
    Call(subst0(f, n, r), subst0(arg, n, r))
  case Fun(formal, body) =>
    if formal == n then e
    else Fun(formal, subst0(body, n, r))
And consider the following expression:
val e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))
What is subst0(e, "x", N("y")) equal to? | (1, if n = 0; else n × ((Y G) (n−1)))) (2−1)))) 4 × (3 × (2 × (1, if 1 = 0; else 1 × ((Y G) (1−1))))) 4 × (3 × (2 × (1 × (G (Y G) (1−1))))) 4 × (3 × (2 × (1 × ((λn. (1, if n = 0; else n × ((Y G) (n−1)))) (1−1))))) 4 × (3 × (2 × (1 × (1, if 0 = 0; else 0 × ((Y G) (0−1)))))) 4 × (3 × (2 × (1 × (1)))) = 24. Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using Y, every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
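For the subst0 question above, the following Python model (a hypothetical tuple encoding of the Scala enum; the BinOp and IfNonzero cases are omitted since e does not use them) evaluates the call. Because the binder "y" differs from the substituted name "x", subst0 rewrites under the Fun, so the free "x" becomes a (captured) "y".

```python
# Tagged-tuple stand-ins for the Scala Expr cases used by e.
def C(c):       return ("C", c)
def N(name):    return ("N", name)
def Call(f, a): return ("Call", f, a)
def Fun(p, b):  return ("Fun", p, b)

def subst0(e, n, r):
    tag = e[0]
    if tag == "C":
        return e
    if tag == "N":
        return r if e[1] == n else e
    if tag == "Call":
        return Call(subst0(e[1], n, r), subst0(e[2], n, r))
    if tag == "Fun":
        formal, body = e[1], e[2]
        # Substitution stops at a binder for the same name; here "y" != "x",
        # so it proceeds into the body (capturing the substituted "y").
        return e if formal == n else Fun(formal, subst0(body, n, r))

e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))
result = subst0(e, "x", N("y"))
```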
Remember that monoids can be represented by the following type class:
trait SemiGroup[T]:
  extension (x: T) def combine (y: T): T

trait Monoid[T] extends SemiGroup[T]:
  def unit: T
Additionally the three following laws should hold for all Monoid[M] and all a, b, c: M:
(Associativity) a.combine(b).combine(c) === a.combine(b.combine(c))
(Left unit) unit.combine(a) === a
(Right unit) a.combine(unit) === a
Consider the following implementation of Monoid for Int:
given Pos: Monoid[Int] with
  extension (x: Int) def combine (y: Int): Int = Math.max(x + y, 0)
  def unit: Int = 0
Which of the three monoid laws does it fulfil?
None of them
Only Associativity
Only Left unit
Only Right unit
Only Associativity and Left unit
Only Associativity and Right unit
Only Left unit and Right unit
All of them | In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure. For example: A semigroup homomorphism is a map between semigroups that preserves the semigroup operation. A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Remember that monoids can be represented by the following type class:
trait SemiGroup[T]:
  extension (x: T) def combine (y: T): T

trait Monoid[T] extends SemiGroup[T]:
  def unit: T
Additionally the three following laws should hold for all Monoid[M] and all a, b, c: M:
(Associativity) a.combine(b).combine(c) === a.combine(b.combine(c))
(Left unit) unit.combine(a) === a
(Right unit) a.combine(unit) === a
Consider the following implementation of Monoid for Int:
given Pos: Monoid[Int] with
  extension (x: Int) def combine (y: Int): Int = Math.max(x + y, 0)
  def unit: Int = 0
Which of the three monoid laws does it fulfil?
None of them
Only Associativity
Only Left unit
Only Right unit
Only Associativity and Left unit
Only Associativity and Right unit
Only Left unit and Right unit
All of them | The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law: x ( y z ) = ( x y ) z . {\displaystyle x(yz)=(xy)z.} The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
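The three laws for this `Pos` instance can be checked numerically. The Python sketch below (a hypothetical check mirroring `Math.max(x + y, 0)`) exhibits counterexamples to all three laws, pointing at the answer "None of them".

```python
def combine(x, y):
    # Scala: extension (x: Int) def combine (y: Int): Int = Math.max(x + y, 0)
    return max(x + y, 0)

unit = 0

# Associativity fails: with a = -2, b = 1, c = 1 the two groupings differ.
left  = combine(combine(-2, 1), 1)   # combine(0, 1)  = 1
right = combine(-2, combine(1, 1))   # combine(-2, 2) = 0

# Left unit fails: combine(0, -1) = max(-1, 0) = 0, not -1.
left_unit = combine(unit, -1)

# Right unit fails symmetrically: combine(-1, 0) = 0, not -1.
right_unit = combine(-1, unit)
```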
A multiset is an unordered collection where elements can appear multiple times. We will represent a
multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that
is not in the multiset, and the (positive) number of times it appears otherwise:
type Multiset = Char => Int
What should replace ??? so that the following function transforms a given set s into a multiset where each element of s appears exactly once?
type Set = Char => Boolean
def setToMultiset(s: Set): Multiset = ??? | In mathematics, a multiset (or bag, or mset) is a modification of the concept of a set that, unlike a set, allows for multiple instances for each of its elements. The number of instances given for each element is called the multiplicity of that element in the multiset. As a consequence, an infinite number of multisets exist which contain only elements a and b, but vary in the multiplicities of their elements: The set {a, b} contains only elements a and b, each having multiplicity 1 when {a, b} is seen as a multiset. In the multiset {a, a, b}, the element a has multiplicity 2, and b has multiplicity 1. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A multiset is an unordered collection where elements can appear multiple times. We will represent a
multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that
is not in the multiset, and the (positive) number of times it appears otherwise:
type Multiset = Char => Int
What should replace ??? so that the following function transforms a given set s into a multiset where each element of s appears exactly once?
type Set = Char => Boolean
def setToMultiset(s: Set): Multiset = ??? | The multiset construction, denoted A = M{B}, is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
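A natural answer to the ??? is a function returning 1 on members of s and 0 otherwise (in Scala: `(c: Char) => if s(c) then 1 else 0`). A Python sketch of the same idea (names are illustrative):

```python
def set_to_multiset(s):
    """s is a set encoded as a Char -> bool predicate; the result is a
    multiset encoded as Char -> int: multiplicity 1 for members, 0 otherwise."""
    return lambda c: 1 if s(c) else 0

is_vowel = lambda c: c in "aeiou"
m = set_to_multiset(is_vowel)
```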
Prove Hall's Theorem: \begin{itemize} \item[]``An $n$-by-$n$ bipartite graph $G=(A \cup B, E)$ has a perfect matching if and only if $|S| \leq |N(S)|$ for all $S\subseteq A$.'' \end{itemize} \emph{(Hint: use the properties of the augmenting path algorithm for the hard direction.)} | Hall's marriage theorem provides a condition guaranteeing that a bipartite graph (X + Y, E) admits a perfect matching, or - more generally - a matching that saturates all vertices of Y. The condition involves the number of neighbors of subsets of Y. Generalizing Hall's theorem to hypergraphs requires a generalization of the concepts of bipartiteness, perfect matching, and neighbors. 1. Bipartiteness: The notion of a bipartiteness can be extended to hypergraphs in many ways (see bipartite hypergraph). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Prove Hall's Theorem: \begin{itemize} \item[]``An $n$-by-$n$ bipartite graph $G=(A \cup B, E)$ has a perfect matching if and only if $|S| \leq |N(S)|$ for all $S\subseteq A$.'' \end{itemize} \emph{(Hint: use the properties of the augmenting path algorithm for the hard direction.)} | When Hall's condition does not hold, the original theorem tells us only that a perfect matching does not exist, but does not tell us what the largest matching that does exist is. To learn this information, we need the notion of deficiency of a graph. Given a bipartite graph G = (X+Y, E), the deficiency of G w.r.t. X is the maximum, over all subsets W of X, of the difference |W| - |NG(W)|. The larger the deficiency, the farther the graph is from satisfying Hall's condition. Using Hall's marriage theorem, it can be proved that, if the deficiency of a bipartite graph G is d, then G admits a matching of size at least |X|-d. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
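Hall's condition itself is easy to check by brute force on small instances, which makes the statement concrete. A Python sketch (illustrative only; exponential in |A|, names are assumptions):

```python
from itertools import combinations

def hall_condition(A, neighbors):
    """Check |S| <= |N(S)| for every non-empty subset S of A.
    `neighbors` maps each vertex a in A to its set of neighbors in B."""
    for k in range(1, len(A) + 1):
        for S in combinations(A, k):
            NS = set().union(*(neighbors[a] for a in S))
            if len(S) > len(NS):
                return False   # S violates Hall's condition
    return True

# S = {1, 2, 3} has N(S) = {"x", "y"}, so Hall's condition fails:
nbrs = {1: {"x"}, 2: {"x"}, 3: {"x", "y"}}
```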
Given the following function sums:
def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
  case Nil => List((c, 1))
  case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)

def sums(digits: List[Int]): List[(Int, Int)] =
  digits.foldRight(List[(Int, Int)]())(add)
Your task is to identify several operations on lists of digits:
What does the following operation implement, for a given input list of digits?
def mystery2(digits: List[Int]): List[Int] =
  mystery1(digits).filter(_ == 1) | At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand. Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation. Sum the digits of the result that were originally obtained for the original calculation. If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be. Example: Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702: Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11. Iterate as needed: 1 + 1 = 2. Sum the digits of 79: 7 + (9 counted as 0) = 7. Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5. Sum the digits of 500702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5. 5 = 5, so there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right. The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following function sums:
def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
  case Nil => List((c, 1))
  case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)

def sums(digits: List[Int]): List[(Int, Int)] =
  digits.foldRight(List[(Int, Int)]())(add)
Your task is to identify several operations on lists of digits:
What does the following operation implement, for a given input list of digits?
def mystery2(digits: List[Int]): List[Int] =
  mystery1(digits).filter(_ == 1) | After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result: Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0. If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit. Repeat steps one and two with the second operand. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
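To see what `sums` computes, the Python sketch below mirrors `add` and `foldRight` (a transliteration for illustration): it returns one `(digit, count)` pair per distinct digit of the input. `mystery1` is not shown in the excerpt, so `mystery2` can only be read relative to it: it keeps exactly those elements of `mystery1`'s output that equal 1.

```python
from functools import reduce

def add(c, acc):
    # Scala: case Nil => List((c, 1)); otherwise merge c into its pair.
    if not acc:
        return [(c, 1)]
    (h, cnt), tail = acc[0], acc[1:]
    if h == c:
        return [(c, cnt + 1)] + tail
    return [(h, cnt)] + add(c, tail)

def sums(digits):
    # foldRight(z)(f): fold the list from the right.
    return reduce(lambda acc, c: add(c, acc), reversed(digits), [])

counts = sums([1, 1, 2, 1, 3])   # one (digit, count) pair per distinct digit
```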
Consider the following problem where we are given an edge-colored graph and we wish to find a spanning tree that contains a specified number of edges of each color: \begin{description} \item[Input:] A connected undirected graph $G=(V,E)$ where the edges $E$ are partitioned into $k$ color classes $E_1, E_2, \dots, E_k$. In addition each color class $i$ has a target number $t_i \in \mathbb{N}$. \item[Output:] If possible, a spanning tree $T \subseteq E$ of the graph satisfying the color requirements: \begin{align*} |T \cap E_i| = t_i \qquad \mbox{ for $i=1,\dots, k$.} \end{align*} Otherwise, i.e., if no such spanning tree $T$ exists, output that no solution exists. \end{description} \noindent {Design} a polynomial time algorithm for the above problem. You should analyze the correctness of your algorithm, i.e., why it finds a solution if possible. To do so, you are allowed to use algorithms and results seen in class without reexplaining them. | Because the problem of testing whether a graph is class 1 is NP-complete, there is no known polynomial time algorithm for edge-coloring every graph with an optimal number of colors. Nevertheless, a number of algorithms have been developed that relax one or more of these criteria: they only work on a subset of graphs, or they do not always use an optimal number of colors, or they do not always run in polynomial time. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following problem where we are given an edge-colored graph and we wish to find a spanning tree that contains a specified number of edges of each color: \begin{description} \item[Input:] A connected undirected graph $G=(V,E)$ where the edges $E$ are partitioned into $k$ color classes $E_1, E_2, \dots, E_k$. In addition each color class $i$ has a target number $t_i \in \mathbb{N}$. \item[Output:] If possible, a spanning tree $T \subseteq E$ of the graph satisfying the color requirements: \begin{align*} |T \cap E_i| = t_i \qquad \mbox{ for $i=1,\dots, k$.} \end{align*} Otherwise, i.e., if no such spanning tree $T$ exists, output that no solution exists. \end{description} \noindent {Design} a polynomial time algorithm for the above problem. You should analyze the correctness of your algorithm, i.e., why it finds a solution if possible. To do so, you are allowed to use algorithms and results seen in class without reexplaining them. | For some graphs, such as bipartite graphs and high-degree planar graphs, the number of colors is always Δ, and for multigraphs, the number of colors may be as large as 3Δ/2. There are polynomial time algorithms that construct optimal colorings of bipartite graphs, and colorings of non-bipartite simple graphs that use at most Δ+1 colors; however, the general problem of finding an optimal edge coloring is NP-hard and the fastest known algorithms for it take exponential time. Many variations of the edge-coloring problem, in which an assignments of colors to edges must satisfy other conditions than non-adjacency, have been studied. Edge colorings have applications in scheduling problems and in frequency assignment for fiber optic networks. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following algorithm that takes as input a complete $n$-by-$n$ bipartite graph $G=(U \cup V,E)$ with positive integer edge-weights $w :E \rightarrow \mathbb{Z}_{> 0 }$: \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \begin{minipage}{14cm} \begin{verse} \textsc{MinWeightPerfectMatching}$(G, w)$: \\[2mm] 1. \FOR each edge $e\in E$ {\scriptsize (i.e., each pair $(u,v)$ since the graph is complete)} \\ 2. \qquad select independently and uniformly at random $p(e) \in \{1, \dots, n^2\}$.\\[1mm] 3. Define a bi-adjacency matrix $A$ with $n$ rows (one for each $u\in U$) and $n$ columns (one for each $v\in V$) as follows: \begin{align*} A_{u,v} = 2^{n^{100} w(u,v)}\cdot p(u,v) \,. \end{align*}\\ 4. \RETURN largest positive integer $i$ such that $2^{i \cdot n^{100} }$ divides $\det(A)$ (if no such $i$ exists, we return $0$). \end{verse} \end{minipage} \end{boxedminipage} \end{center} Prove that the above algorithm returns the value of a min-weight perfect matching with probability at least $1-1/n$. Recall that you are allowed to refer to material covered in the course. \\[2mm] \noindent Hint: Let $\mathcal{M}_i$ denote the set of perfect matchings $M$ whose weight $\sum_{e\in M} w(e)$ equals $i$. Use that one can write $\det(A)$ as follows: \begin{align*} \det(A) = \sum^{\infty}_{i=0} 2^{i \cdot n^{100}} f_i({p}) \qquad \mbox{where } f_i(p) = \sum_{M \in \mathcal{M}_i} \textrm{sign}(M) \prod_{e\in M} p(e)\,. \end{align*} Here $\textrm{sign}(M)\in \{\pm 1\}$ is the sign of the permutation corresponding to $M$. | The original application was to minimum-weight (or maximum-weight) perfect matchings in a graph. Each edge is assigned a random weight in {1, …, 2m}, and F is the set of perfect matchings, so that with probability at least 1/2, there exists a unique perfect matching. 
When each indeterminate x_ij in the Tutte matrix of the graph is replaced with 2^{w_ij}, where w_ij is the random weight of the edge, we can show that the determinant of the matrix is nonzero, and further use this to find the matching. More generally, the paper also observed that any search problem of the form "Given a set system (S, F), find a set in F" could be reduced to a decision problem of the form "Is there a set in F with total weight at most k?". | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following algorithm that takes as input a complete $n$-by-$n$ bipartite graph $G=(U \cup V,E)$ with positive integer edge-weights $w :E \rightarrow \mathbb{Z}_{> 0 }$: \begin{center} \begin{boxedminipage}[t]{0.85\textwidth} \begin{minipage}{14cm} \begin{verse} \textsc{MinWeightPerfectMatching}$(G, w)$: \\[2mm] 1. \FOR each edge $e\in E$ {\scriptsize (i.e., each pair $(u,v)$ since the graph is complete)} \\ 2. \qquad select independently and uniformly at random $p(e) \in \{1, \dots, n^2\}$.\\[1mm] 3. Define a bi-adjacency matrix $A$ with $n$ rows (one for each $u\in U$) and $n$ columns (one for each $v\in V$) as follows: \begin{align*} A_{u,v} = 2^{n^{100} w(u,v)}\cdot p(u,v) \,. \end{align*}\\ 4. \RETURN largest positive integer $i$ such that $2^{i \cdot n^{100} }$ divides $\det(A)$ (if no such $i$ exists, we return $0$). \end{verse} \end{minipage} \end{boxedminipage} \end{center} Prove that the above algorithm returns the value of a min-weight perfect matching with probability at least $1-1/n$. Recall that you are allowed to refer to material covered in the course. \\[2mm] \noindent Hint: Let $\mathcal{M}_i$ denote the set of perfect matchings $M$ whose weight $\sum_{e\in M} w(e)$ equals $i$. Use that one can write $\det(A)$ as follows: \begin{align*} \det(A) = \sum^{\infty}_{i=0} 2^{i \cdot n^{100}} f_i({p}) \qquad \mbox{where } f_i(p) = \sum_{M \in \mathcal{M}_i} \textrm{sign}(M) \prod_{e\in M} p(e)\,. \end{align*} Here $\textrm{sign}(M)\in \{\pm 1\}$ is the sign of the permutation corresponding to $M$. | These weights should exceed the weights of all existing matchings, to prevent appearance of artificial edges in the possible solution. As shown by Mulmuley, Vazirani and Vazirani, the problem of minimum weight perfect matching is converted to finding minors in the adjacency matrix of a graph. Using the isolation lemma, a minimum weight perfect matching in a graph can be found with probability at least 1⁄2. 
For a graph with n vertices, it requires O(log²(n)) time. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall the Jaccard index that we saw in Exercise Set 10: Suppose we have a universe $U$. For non-empty sets $A,B \subseteq U$, the Jaccard index is defined as \begin{align*} J(A,B) = \frac{|A \cap B|}{|A \cup B|}\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: 2^U \rightarrow [0,1]$ such that for any non-empty sets $A, B\subseteq U$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(A) \neq h(B)] \begin{cases} \leq 0.01 & \mbox{if $J(A,B) \geq 0.99$,}\\ \geq 0.1 & \mbox{if $J(A,B) \leq 0.9$.} \end{cases} \end{align*} {\em (In this problem you are asked to explain the hash family and argue that it satisfies the above properties. Recall that you are allowed to refer to material covered in the course.)} | Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, for A ⊆ S let h(A) = min_{a∈A} π(a). Each possible choice of π defines a single hash function h mapping input sets to elements of S. Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets A, B ⊆ S, the event that h(A) = h(B) corresponds exactly to the event that the minimizer of π over A ∪ B lies inside A ∩ B. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Recall the Jaccard index that we saw in Exercise Set 10: Suppose we have a universe $U$. For non-empty sets $A,B \subseteq U$, the Jaccard index is defined as \begin{align*} J(A,B) = \frac{|A \cap B|}{|A \cup B|}\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: 2^U \rightarrow [0,1]$ such that for any non-empty sets $A, B\subseteq U$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(A) \neq h(B)] \begin{cases} \leq 0.01 & \mbox{if $J(A,B) \geq 0.99$,}\\ \geq 0.1 & \mbox{if $J(A,B) \leq 0.9$.} \end{cases} \end{align*} {\em (In this problem you are asked to explain the hash family and argue that it satisfies the above properties. Recall that you are allowed to refer to material covered in the course.)} | If μ is a measure on a measurable space X, then we define the Jaccard coefficient by J_μ(A, B) = μ(A ∩ B)/μ(A ∪ B), and the Jaccard distance by d_μ(A, B) = 1 − J_μ(A, B) = μ(A △ B)/μ(A ∪ B). Care must be taken if μ(A ∪ B) = 0 or ∞, since these formulas are not well defined in these cases. The MinHash min-wise independent permutations locality sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity coefficient of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
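The MinHash family described in these excerpts would answer the exam question directly: since Pr[h(A) = h(B)] = J(A, B), a single random permutation already gives Pr[h(A) ≠ h(B)] = 1 − J(A, B), which is ≤ 0.01 when J ≥ 0.99 and ≥ 0.1 when J ≤ 0.9. A Python sketch (function names and the empirical check are illustrative, not from the course):

```python
import random

def draw_minhash(universe, seed=None):
    """Sample h from the MinHash family: pick a uniformly random
    permutation pi of the universe and return A -> min_{a in A} pi(a).
    Then Pr[h(A) = h(B)] = J(A, B), so Pr[h(A) != h(B)] = 1 - J(A, B)."""
    rng = random.Random(seed)
    p = list(universe)
    rng.shuffle(p)
    rank = {x: i for i, x in enumerate(p)}
    return lambda A: min(rank[a] for a in A)

# Empirical check: |A ∩ B| = 50, |A ∪ B| = 70, so J(A, B) = 50/70.
A, B = set(range(0, 60)), set(range(10, 70))

def mismatch(seed):
    h = draw_minhash(range(100), seed=seed)
    return h(A) != h(B)

trials = 2000
rate = sum(1 for t in range(trials) if mismatch(t)) / trials
# rate should concentrate near 1 - 50/70 ≈ 0.286
```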
Do the functions first and second return the same output for every possible input? def first(x: List[Int]): Int = x.head + first(x.tail) def second(x: List[Int]): Int = x.foldLeft(0)(_ + _) | Function 2 is function 1 with the lines swapped. In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Also note that the order of the print statements is reversed, which is due to the way the functions and statements are stored on the call stack. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Do the functions first and second return the same output for every possible input? def first(x: List[Int]): Int = x.head + first(x.tail) def second(x: List[Int]): Int = x.foldLeft(0)(_ + _) | Using Haskell as an example, foldl and foldr can be formulated in a few equations. If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element. If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
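For the record, the two functions are not equivalent: `first` has no base case, so the recursion always reaches the empty list and `x.head` throws, while `second` returns the sum (0 for `Nil`). A Python sketch of the same pair, with list indexing standing in for `head`/`tail`:

```python
def first(x):
    # x.head + first(x.tail): no base case, so the recursion always
    # reaches the empty list and fails (Scala: NoSuchElementException).
    return x[0] + first(x[1:])

def second(x):
    # x.foldLeft(0)(_ + _): the sum of the list, 0 for the empty list.
    total = 0
    for v in x:
        total += v
    return total

assert second([]) == 0 and second([1, 2, 3]) == 6
failed = False
try:
    first([1, 2, 3])
except IndexError:
    failed = True
assert failed  # first diverges on every input, so the two never agree
```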
Given the following data structure: enum IntSet: \t case Empty \t case NonEmpty(x: Int, l: IntSet, r: IntSet) And the following lemmas, holding for all x: Int, xs: List[Int], ys: List[Int], l: IntSet and r: IntSet: (SizeNil) nil.size === 0 (SizeCons) (x :: xs).size === xs.size + 1 (ConcatSize) (xs ++ ys).size === xs.size + ys.size (TreeSizeEmpty) Empty.treeSize === 0 (TreeSizeNonEmpty) NonEmpty(x, l, r).treeSize === l.treeSize + r.treeSize + 1 (ToListEmpty) Empty.toList === nil (ToListNonEmpty) NonEmpty(x, l, r).toList === l.toList ++ (x :: r.toList) Let us prove the following lemma for all s: IntSet: (ToListSize) s.toList.size === s.treeSize We prove it by induction on s. Base case: s is Empty. Therefore, we need to prove: Empty.toList.size === Empty.treeSize Starting from the left hand-side (Empty.toList.size), what exact sequence of lemmas should we apply to get the right hand-side (Empty.treeSize)? | The empty set has 0 members and 1 subset, and 20 = 1. The induction hypothesis is the proposition in case n; we use it to prove case n + 1. In a size-(n + 1) set, choose a distinguished element. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following data structure: enum IntSet: \t case Empty \t case NonEmpty(x: Int, l: IntSet, r: IntSet) And the following lemmas, holding for all x: Int, xs: List[Int], ys: List[Int], l: IntSet and r: IntSet: (SizeNil) nil.size === 0 (SizeCons) (x :: xs).size === xs.size + 1 (ConcatSize) (xs ++ ys).size === xs.size + ys.size (TreeSizeEmpty) Empty.treeSize === 0 (TreeSizeNonEmpty) NonEmpty(x, l, r).treeSize === l.treeSize + r.treeSize + 1 (ToListEmpty) Empty.toList === nil (ToListNonEmpty) NonEmpty(x, l, r).toList === l.toList ++ (x :: r.toList) Let us prove the following lemma for all s: IntSet: (ToListSize) s.toList.size === s.treeSize We prove it by induction on s. Base case: s is Empty. Therefore, we need to prove: Empty.toList.size === Empty.treeSize Starting from the left hand-side (Empty.toList.size), what exact sequence of lemmas should we apply to get the right hand-side (Empty.treeSize)? | The proof of part 1 of the first recursion theorem is obtained by iterating the enumeration operator Φ beginning with the empty set. First, a sequence Fk is constructed, for k = 0 , 1 , … {\displaystyle k=0,1,\ldots } . Let F0 be the empty set. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The
former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter version
is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons.
All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List[Int] and f: Int => Int.
Given the following lemmas:
(MapNil) Nil.map(f) === Nil
(MapCons) (x :: xs).map(f) === f(x) :: xs.map(f)
(MapTrNil) Nil.mapTr(f, ys) === ys
(MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
(NilAppend) Nil ++ xs === xs
(ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys)
Let us first prove the following lemma:
(AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)
We prove it by induction on l.
Induction step: l is x :: xs. Therefore, we need to prove:
(x :: xs).mapTr(f, y :: ys) === y :: (x :: xs).mapTr(f, ys)
We name the induction hypothesis IH.
What exact sequence of lemmas should we apply to rewrite the left hand-side ((x :: xs).mapTr(f, y
:: ys)) to the right hand-side (y :: (x :: xs).mapTr(f, ys))? | Two linear maps S and T in L(V, W) induce the same map between P(V) and P(W) if and only if they differ by a scalar multiple, that is if T = λS for some λ ≠ 0. Thus if one identifies the scalar multiples of the identity map with the underlying field K, the set of K-linear morphisms from P(V) to P(W) is simply P(L(V, W)). The automorphisms P(V) → P(V) can be described more concretely. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The
former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter version
is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons.
All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List[Int] and f: Int => Int.
Given the following lemmas:
(MapNil) Nil.map(f) === Nil
(MapCons) (x :: xs).map(f) === f(x) :: xs.map(f)
(MapTrNil) Nil.mapTr(f, ys) === ys
(MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
(NilAppend) Nil ++ xs === xs
(ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys)
Let us first prove the following lemma:
(AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)
We prove it by induction on l.
Induction step: l is x :: xs. Therefore, we need to prove:
(x :: xs).mapTr(f, y :: ys) === y :: (x :: xs).mapTr(f, ys)
We name the induction hypothesis IH.
What exact sequence of lemmas should we apply to rewrite the left hand-side ((x :: xs).mapTr(f, y
:: ys)) to the right hand-side (y :: (x :: xs).mapTr(f, ys))? | The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a connecting homomorphism d exists which completes the exact sequence. In the case of abelian groups or modules over some ring, the map d can be constructed as follows: Pick an element x in ker c and view it as an element of C; since g is surjective, there exists y in B with g(y) = x. Because of the commutativity of the diagram, we have g'(b(y)) = c(g(y)) = c(x) = 0 (since x is in the kernel of c), and therefore b(y) is in the kernel of g' . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following function sums:
1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
2 case Nil => List((c, 1))
3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)
4
5 def sums(digits: List[Int]): List[(Int, Int)] =
6 digits.foldRight(List[(Int, Int)]())(add)
Your task is to identify several operations on lists of digits:
What does the following operation implement, for a given input list of digits?
1 def mystery4(digits: List[Int]): Int = sums(digits) match
2 case Nil => 0
3 case t => t.reduceLeft((a, b) => (a._1, a._2 + b._2))._2 | At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand. Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation. Sum the digits of the result that were originally obtained for the original calculation. If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be.Example Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702:Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11 Iterate as needed: 1 + 1 = 2 Sum the digits of 79: 7 + (9 counted as 0) = 7 Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5 Sum the digits of 500702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5 5 = 5, so there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right.The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
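Reading the code: `add` scans the accumulator for an existing `(digit, count)` pair and bumps it, so `sums` builds a frequency table; `reduceLeft` in `mystery4` then adds up all the counts, so the result is the total number of digits, i.e. the length of the input list (0 when empty). A Python sketch, using `Counter` as a stand-in for the `foldRight(add)` accumulator:

```python
from collections import Counter

def sums(digits):
    # Frequency table of (digit, count) pairs, like foldRight(add):
    # add searches the accumulator for the digit and increments it.
    return list(Counter(digits).items())

def mystery4(digits):
    pairs = sums(digits)
    if not pairs:
        return 0
    # reduceLeft keeps the first digit and sums all the counts.
    total = pairs[0]
    for p in pairs[1:]:
        total = (total[0], total[1] + p[1])
    return total[1]

assert mystery4([3, 1, 3, 2, 3]) == 5   # the length of the list
assert mystery4([]) == 0
```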
Given the following function sums:
1 def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
2 case Nil => List((c, 1))
3 case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)
4
5 def sums(digits: List[Int]): List[(Int, Int)] =
6 digits.foldRight(List[(Int, Int)]())(add)
Your task is to identify several operations on lists of digits:
What does the following operation implement, for a given input list of digits?
1 def mystery4(digits: List[Int]): Int = sums(digits) match
2 case Nil => 0
3 case t => t.reduceLeft((a, b) => (a._1, a._2 + b._2))._2 | After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result: Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0. If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit. Repeat steps one and two with the second operand. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. In the minimum (unweighted) $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. We shall use a linear program to solve this problem. Let ${P}$ be the set of all paths between $s$ and $t$ in the graph $G$. The linear program has a variable $y_e$ for each edge $e\in E$ and is defined as follows: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E. \end{array} \end{equation*} For example, consider the following graph where the numbers on the edges depict the $y_e$-values of a feasible solution to the linear program: \begin{center} \input{cutExample} \end{center} The values on the edges depict a feasible but not optimal solution to the linear program. That it is feasible follows because each $y_e$ is non-negative and $\sum_{e\in p} y_e \geq 1$ for all $p\in P$. Indeed, for the path $s, b, a, t$ we have $y_{\{s,b\}}+ y_{\{b,a\}} + y_{\{a,t\}} = 1/4 + 1/4 + 1/2 = 1$, and similar calculations for each path $p$ between $s$ and $t$ show that $\sum_{e\in p} y_e \geq 1$. That the solution is not optimal follows because its value is $2.5$ whereas an optimal solution has value $2$. Let $\opt$ denote the number of edges crossing a minimum $s,t$-cut and let $\optlp$ denote the value of an optimal solution to the linear program. Prove that $\optlp \leq \opt$. \\ {\em (In this problem you are asked to prove $\optlp \leq \opt$. 
Recall that you are allowed to refer to material covered in the lecture notes.)} | For weighted graphs with positive edge weights w: E → R + {\displaystyle w\colon E\rightarrow \mathbf {R} ^{+}} the weight of the cut is the sum of the weights of edges between vertices in each part w ( S , T ) = ∑ u v ∈ E: u ∈ S , v ∈ T w ( u v ) , {\displaystyle w(S,T)=\sum _{uv\in E\colon u\in S,v\in T}w(uv)\,,} which agrees with the unweighted definition for w = 1 {\displaystyle w=1} . A cut is sometimes called a “global cut” to distinguish it from an “ s {\displaystyle s} - t {\displaystyle t} cut” for a given pair of vertices, which has the additional requirement that s ∈ S {\displaystyle s\in S} and t ∈ T {\displaystyle t\in T} . Every global cut is an s {\displaystyle s} - t {\displaystyle t} cut for some s , t ∈ V {\displaystyle s,t\in V} . Thus, the minimum cut problem can be solved in polynomial time by iterating over all choices of s , t ∈ V {\displaystyle s,t\in V} and solving the resulting minimum s {\displaystyle s} - t {\displaystyle t} cut problem using the max-flow min-cut theorem and a polynomial time algorithm for maximum flow, such as the push-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include the Stoer–Wagner algorithm, which has a running time of O ( m n + n 2 log n ) {\displaystyle O(mn+n^{2}\log n)} . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. In the minimum (unweighted) $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. We shall use a linear program to solve this problem. Let ${P}$ be the set of all paths between $s$ and $t$ in the graph $G$. The linear program has a variable $y_e$ for each edge $e\in E$ and is defined as follows: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E. \end{array} \end{equation*} For example, consider the following graph where the numbers on the edges depict the $y_e$-values of a feasible solution to the linear program: \begin{center} \input{cutExample} \end{center} The values on the edges depict a feasible but not optimal solution to the linear program. That it is feasible follows because each $y_e$ is non-negative and $\sum_{e\in p} y_e \geq 1$ for all $p\in P$. Indeed, for the path $s, b, a, t$ we have $y_{\{s,b\}}+ y_{\{b,a\}} + y_{\{a,t\}} = 1/4 + 1/4 + 1/2 = 1$, and similar calculations for each path $p$ between $s$ and $t$ show that $\sum_{e\in p} y_e \geq 1$. That the solution is not optimal follows because its value is $2.5$ whereas an optimal solution has value $2$. Let $\opt$ denote the number of edges crossing a minimum $s,t$-cut and let $\optlp$ denote the value of an optimal solution to the linear program. Prove that $\optlp \leq \opt$. \\ {\em (In this problem you are asked to prove $\optlp \leq \opt$. 
Recall that you are allowed to refer to material covered in the lecture notes.)} | The cutting-plane method for solving 0–1 integer programs, first introduced for the traveling salesman problem by Dantzig, Fulkerson & Johnson (1954) and generalized to other integer programs by Gomory (1958), takes advantage of this multiplicity of possible relaxations by finding a sequence of relaxations that more tightly constrain the solution space until eventually an integer solution is obtained. This method starts from any relaxation of the given program, and finds an optimal solution using a linear programming solver. If the solution assigns integer values to all variables, it is also the optimal solution to the unrelaxed problem. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
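The standard argument for $\optlp \leq \opt$: the indicator vector of any minimum $s,t$-cut is itself LP-feasible (every $s$-$t$ path crosses the cut at least once), and its LP value is exactly the cut size, so the LP optimum can only be smaller. A toy check on a four-cycle (a graph of my own choosing, not the one from the exercise figure):

```python
# Take the cut S = {s}; set y_e = 1 exactly on the crossing edges.
V = ['s', 'a', 'b', 't']
E = [('s', 'a'), ('a', 't'), ('s', 'b'), ('b', 't')]
S = {'s'}                       # a minimum s,t-cut here, of size 2
y = {e: 1 if (e[0] in S) != (e[1] in S) else 0 for e in E}

# Every s-t path leaves S at least once, so the solution is feasible,
# and its LP value equals the number of crossing edges.
paths = [[('s', 'a'), ('a', 't')], [('s', 'b'), ('b', 't')]]
assert all(sum(y[e] for e in p) >= 1 for p in paths)  # feasible
assert sum(y.values()) == 2                           # value = opt
```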
Let $E$ be a finite ground set and let $\mathcal{I}$ be a family of ground sets. Which of the following definitions of $\mathcal{I}$ guarantees that $M = (E, \mathcal{I})$ is a matroid? \begin{enumerate} \item $E$ is the edges of an undirected bipartite graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is an acyclic edge set}\}$. \item $E$ is the edges of an undirected graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is an acyclic edge set}\}$. \item $E$ is the edges of an undirected bipartite graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is a matching}\}$. \item $E$ is the edges of an undirected graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is a matching}\}$. \item $E = \{1, 2, \ldots, n\}$ is the set of indices of vectors $v_1, \ldots, v_n \in \mathbb{R}^d$ and \\$\mathcal{I} = \{X \subseteq E : \mbox{the vectors $\{v_i : i \in X\}$ are linearly \emph{dependent}}\}$. \item $E = \{1, 2, \ldots, n\}$ is the set of indices of vectors $v_1, \ldots, v_n \in \mathbb{R}^d$ and \\$\mathcal{I} = \{X \subseteq E : \mbox{the vectors $\{v_i : i \in X\}$ are linearly \emph{independent}}\}$. \end{enumerate} The definitions of $\mathcal{I}$ that guarantees that $M = (E, \mathcal{I})$ is a matroid are: | In terms of independence, a finite matroid M {\displaystyle M} is a pair ( E , I ) {\displaystyle (E,{\mathcal {I}})} , where E {\displaystyle E} is a finite set (called the ground set) and I {\displaystyle {\mathcal {I}}} is a family of subsets of E {\displaystyle E} (called the independent sets) with the following properties: (I1) The empty set is independent, i.e., ∅ ∈ I {\displaystyle \emptyset \in {\mathcal {I}}} . (I2) Every subset of an independent set is independent, i.e., for each A ′ ⊆ A ⊆ E {\displaystyle A'\subseteq A\subseteq E} , if A ∈ I {\displaystyle A\in {\mathcal {I}}} then A ′ ∈ I {\displaystyle A'\in {\mathcal {I}}} . This is sometimes called the hereditary property, or the downward-closed property. 
| https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Let $E$ be a finite ground set and let $\mathcal{I}$ be a family of ground sets. Which of the following definitions of $\mathcal{I}$ guarantees that $M = (E, \mathcal{I})$ is a matroid? \begin{enumerate} \item $E$ is the edges of an undirected bipartite graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is an acyclic edge set}\}$. \item $E$ is the edges of an undirected graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is an acyclic edge set}\}$. \item $E$ is the edges of an undirected bipartite graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is a matching}\}$. \item $E$ is the edges of an undirected graph and $\mathcal{I} = \{X \subseteq E : \mbox{$X$ is a matching}\}$. \item $E = \{1, 2, \ldots, n\}$ is the set of indices of vectors $v_1, \ldots, v_n \in \mathbb{R}^d$ and \\$\mathcal{I} = \{X \subseteq E : \mbox{the vectors $\{v_i : i \in X\}$ are linearly \emph{dependent}}\}$. \item $E = \{1, 2, \ldots, n\}$ is the set of indices of vectors $v_1, \ldots, v_n \in \mathbb{R}^d$ and \\$\mathcal{I} = \{X \subseteq E : \mbox{the vectors $\{v_i : i \in X\}$ are linearly \emph{independent}}\}$. \end{enumerate} The definitions of $\mathcal{I}$ that guarantees that $M = (E, \mathcal{I})$ is a matroid are: | The collections of dependent sets, of bases, and of circuits each have simple properties that may be taken as axioms for a matroid. For instance, one may define a matroid M {\displaystyle M} to be a pair ( E , B ) {\displaystyle (E,{\mathcal {B}})} , where E {\displaystyle E} is a finite set as before and B {\displaystyle {\mathcal {B}}} is a collection of subsets of E {\displaystyle E} , called "bases", with the following properties: (B1) B {\displaystyle {\mathcal {B}}} is nonempty. 
(B2) If A {\displaystyle A} and B {\displaystyle B} are distinct members of B {\displaystyle {\mathcal {B}}} and a ∈ A ∖ B {\displaystyle a\in A\smallsetminus B} , then there exists an element b ∈ B ∖ A {\displaystyle b\in B\smallsetminus A} such that ( A ∖ { a } ) ∪ { b } ∈ B {\displaystyle (A\smallsetminus \{a\})\cup \{b\}\in {\mathcal {B}}} . This property is called the basis exchange property.It follows from the basis exchange property that no member of B {\displaystyle {\mathcal {B}}} can be a proper subset of another. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
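Presumably the definitions that yield matroids are 1, 2 and 6 (the graphic and linear matroids): matchings (3, 4) satisfy downward closure but fail the exchange axiom, and the "linearly dependent" family (5) does not even contain the empty set. A brute-force Python check, on a small graph I made up, that forests satisfy the exchange property:

```python
from itertools import combinations

def acyclic(edges):
    # Union-find cycle check: an edge set is independent in the
    # graphic matroid iff it is a forest.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False
        parent[ru] = rw
    return True

E = [(0, 1), (1, 2), (0, 2), (2, 3)]
I = [frozenset(X) for r in range(len(E) + 1)
     for X in combinations(E, r) if acyclic(X)]

# Exchange axiom (I3): if |A| < |B|, some e in B \ A extends A.
for A in I:
    for B in I:
        if len(A) < len(B):
            assert any(acyclic(A | {e}) for e in B - A)
```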
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
1 case class Student(gpa: Double)
2
3 def count(c: List[Student], student: Student): Double =
4 c.filter(s => s == student).size
5
6 val students = List(
7 Student(1.0), Student(2.0), Student(3.0),
8 Student(4.0), Student(5.0), Student(6.0)
9 )
And the expression e:
1 count(students, Student(6.0)) | Clearly, replacing x=x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
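In the original definitions, e is referentially transparent: `Student` is a case class (structural equality) and `count` depends only on its arguments, so the expression always evaluates to 1 and can be replaced by that value. A Python analogue, with a frozen dataclass playing the role of the case class:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Student:
    gpa: float

def count(c, student):
    # Pure: the result depends only on the arguments.
    return len([s for s in c if s == student])

students = [Student(g) for g in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0)]

# Structural equality makes the expression deterministic: every
# evaluation yields the same value, so e can be replaced by 1.
assert count(students, Student(6.0)) == 1
assert count(students, Student(6.0)) == count(students, Student(6.0))
```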
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
1 case class Student(gpa: Double)
2
3 def count(c: List[Student], student: Student): Double =
4 c.filter(s => s == student).size
5
6 val students = List(
7 Student(1.0), Student(2.0), Student(3.0),
8 Student(4.0), Student(5.0), Student(6.0)
9 )
And the expression e:
1 count(students, Student(6.0)) | Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value. This requires that the expression is pure, that is to say the expression must be deterministic (always give the same value for the same input) and side-effect free. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Write the dual of the following linear program: \begin{align*} \text{Maximize} \quad &6x_1 + 14 x_2 + 13 x_3\\ \text{Subject to} \quad & x_1 + 3x_2 + x_3 \leq 24 \\ & x_1 + 2x_2 + 4 x_3 \leq 60 \\ & x_1, x_2, x_3 \geq 0 \end{align*} Hint: How can you convince your friend that the above linear program has optimum value at most $z$? | Consider the following linear program in standard form: min x c T x subjected to A x = b x ∈ R + {\displaystyle {\begin{aligned}&\min _{x}c^{T}x\\&{\text{subjected to}}\\&Ax=b\\&x\in \mathbb {R} ^{+}\end{aligned}}} which we will call the primal problem as well as its dual linear program: max u u T b subjected to u T A ≤ c u ∈ R {\displaystyle {\begin{aligned}&\max _{u}u^{T}b\\&{\text{subjected to}}\\&u^{T}A\leq c\\&u\in \mathbb {R} \end{aligned}}} Moreover, let x ∗ {\displaystyle x^{*}} and u ∗ {\displaystyle u^{*}} be optimal solutions for these two problems which can be provided by any linear solver. These solutions verify the constraints of their linear program and, by duality, have the same value of objective function ( c T x ∗ = u ∗ T b {\displaystyle c^{T}x^{*}=u^{*T}b} ) which we will call z ∗ {\displaystyle z^{*}} . This optimal value is a function of the different coefficients of the primal problem: z ∗ = z ∗ ( c , A , b ) {\displaystyle z^{*}=z^{*}(c,A,b)} . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Write the dual of the following linear program: \begin{align*} \text{Maximize} \quad &6x_1 + 14 x_2 + 13 x_3\\ \text{Subject to} \quad & x_1 + 3x_2 + x_3 \leq 24 \\ & x_1 + 2x_2 + 4 x_3 \leq 60 \\ & x_1, x_2, x_3 \geq 0 \end{align*} Hint: How can you convince your friend that the above linear program has optimum value at most $z$? | Let an integer programming problem be formulated (in canonical form) as: Maximize c T x Subject to A x ≤ b , x ≥ 0 , x i all integers . {\displaystyle {\begin{aligned}{\text{Maximize }}&c^{T}x\\{\text{Subject to }}&Ax\leq b,\\&x\geq 0,\,x_{i}{\text{ all integers}}.\end{aligned}}} where A is a matrix and b , c is a vector. The vector x is unknown and is to be found in order to maximize the objective while respecting the linear constraints. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
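Following the standard recipe (one dual variable $y_i$ per primal constraint, one dual constraint per primal variable, with $b = (24, 60)$ as the dual objective), the dual should come out as:

```latex
\begin{align*}
\text{Minimize} \quad & 24 y_1 + 60 y_2\\
\text{Subject to} \quad & y_1 + y_2 \geq 6\\
& 3y_1 + 2y_2 \geq 14\\
& y_1 + 4y_2 \geq 13\\
& y_1, y_2 \geq 0
\end{align*}
```

This answers the hint: any feasible $(y_1, y_2)$ certifies that the primal optimum is at most $24y_1 + 60y_2$, since $6x_1 + 14x_2 + 13x_3 \leq (y_1 + y_2)x_1 + (3y_1 + 2y_2)x_2 + (y_1 + 4y_2)x_3 \leq 24y_1 + 60y_2$. For instance, $(y_1, y_2) = (3, 3)$ is feasible and gives the (not necessarily tight) bound $252$.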
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
1 case class Student(gpa: Double)
2
3 def count(c: List[Student], student: Student): Double =
4 c.filter(s => s == student).size
5
6 val students = List(
7 Student(1.0), Student(2.0), Student(3.0),
8 Student(4.0), Student(5.0), Student(6.0)
9 )
And the expression e:
1 count(students, Student(6.0))
Is the expression e referentially transparent? | Clearly, replacing x=x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
1 case class Student(gpa: Double)
2
3 def count(c: List[Student], student: Student): Double =
4 c.filter(s => s == student).size
5
6 val students = List(
7 Student(1.0), Student(2.0), Student(3.0),
8 Student(4.0), Student(5.0), Student(6.0)
9 )
And the expression e:
1 count(students, Student(6.0))
Is the expression e referentially transparent? | Absence of side effects is a necessary, but not sufficient, condition for referential transparency. Referential transparency means that an expression (such as a function call) can be replaced with its value. This requires that the expression is pure, that is to say the expression must be deterministic (always give the same value for the same input) and side-effect free. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq |E(S, \bar S)| \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = \{e\in E: |e\cap S| = |e\cap \bar S| = 1\}$.)} \\[1mm] \noindent Hint: formulate the problem as a large linear program (LP) and then show that the LP can be solved in polynomial time. \\[1mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it returns a correct solution and that it runs in polynomial time. Recall that you are allowed to refer to material covered in the course.) } | This algorithm can be derandomized with the method of conditional probabilities; therefore there is a simple deterministic polynomial-time 0.5-approximation algorithm as well. One such algorithm starts with an arbitrary partition of the vertices of the given graph G = ( V , E ) {\displaystyle G=(V,E)} and repeatedly moves one vertex at a time from one side of the partition to the other, improving the solution at each step, until no more improvements of this type can be made. The number of iterations is at most | E | {\displaystyle |E|} because the algorithm improves the cut by at least one edge at each step. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq |E(S, \bar S)| \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = \{e\in E: |e\cap S| = |e\cap \bar S| = 1\}$.)} \\[1mm] \noindent Hint: formulate the problem as a large linear program (LP) and then show that the LP can be solved in polynomial time. \\[1mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it returns a correct solution and that it runs in polynomial time. Recall that you are allowed to refer to material covered in the course.) } | Given an undirected graph G = (V, E) with an assignment of weights to the edges w: E → N and an integer k ∈ { 2 , 3 , … , | V | } , {\displaystyle k\in \{2,3,\ldots ,|V|\},} partition V into k disjoint sets F = { C 1 , C 2 , … , C k } {\displaystyle F=\{C_{1},C_{2},\ldots ,C_{k}\}} while minimizing ∑ i = 1 k − 1 ∑ j = i + 1 k ∑ v 1 ∈ C i v 2 ∈ C j w ( { v 1 , v 2 } ) {\displaystyle \sum _{i=1}^{k-1}\ \sum _{j=i+1}^{k}\sum _{\begin{smallmatrix}v_{1}\in C_{i}\\v_{2}\in C_{j}\end{smallmatrix}}w(\left\{v_{1},v_{2}\right\})} For a fixed k, the problem is polynomial time solvable in O ( | V | k 2 ) . {\displaystyle O{\bigl (}|V|^{k^{2}}{\bigr )}.} However, the problem is NP-complete if k is part of the input. It is also NP-complete if we specify k vertices and ask for the minimum k-cut which separates these vertices among each of the sets. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
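The LP the hint points at has one constraint per non-trivial subset $S$, i.e. exponentially many, so (as covered in the lecture material on the ellipsoid method) one would solve it with a separation oracle: given a candidate $p$, find a violated set $S$ or declare feasibility; polynomially this reduces to min-cut computations, but for intuition a brute-force oracle on a tiny made-up graph suffices:

```python
from itertools import combinations

def violated_set(V, E, p):
    # Brute-force separation oracle for the exponential LP: return a set
    # S with sum_{v in S} p[v] > |E(S, V \ S)|, or None if p is feasible.
    for r in range(1, len(V)):
        for S in combinations(V, r):
            S = set(S)
            crossing = sum(1 for u, w in E if (u in S) != (w in S))
            if sum(p[v] for v in S) > crossing:
                return S
    return None

V = [0, 1, 2]
E = [(0, 1), (1, 2)]                                      # a 3-vertex path
assert violated_set(V, E, {0: 1, 1: 0, 2: 1}) is None     # feasible
assert violated_set(V, E, {0: 1, 1: 1, 2: 1}) == {0, 1}   # violated
```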
Consider the standard linear programming relaxation of Set Cover that we saw in class. We gave a randomized rounding algorithm for the Set Cover problem. Use similar techniques to give an algorithm that, with probability at least a positive constant, returns a collection of sets that cover at least $90\%$ of the elements and has cost at most a constant factor larger than the LP solution. | One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding (Raghavan & Tompson 1987). Given a fractional cover, in which each set Si has weight wi, choose randomly the value of each 0–1 indicator variable xi to be 1 with probability wi × (ln n +1), and 0 otherwise. Then any element ej has probability less than 1/(e×n) of remaining uncovered, so with constant probability all elements are covered. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the standard linear programming relaxation of Set Cover that we saw in class. We gave a randomized rounding algorithm for the Set Cover problem. Use similar techniques to give an algorithm that, with probability at least a positive constant, returns a collection of sets that cover at least $90\%$ of the elements and has cost at most a constant factor larger than the LP solution. | The following example illustrates how randomized rounding can be used to design an approximation algorithm for the Set Cover problem. Fix any instance ⟨ c , S ⟩ {\displaystyle \langle c,{\mathcal {S}}\rangle } of set cover over a universe U {\displaystyle {\mathcal {U}}} . For step 1, let IP be the standard integer linear program for set cover for this instance. For step 2, let LP be the linear programming relaxation of IP, and compute an optimal solution x ∗ {\displaystyle x^{*}} to LP using any standard linear programming algorithm. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
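A sketch of the rounding step, assuming a feasible fractional solution $x^*$ is already in hand: include set $i$ independently with probability $\min(1, c \cdot x^*_i)$ for a constant $c$. The expected cost is at most $c$ times the LP value, and each element is missed with probability at most $e^{-c}$, so for a suitable $c$ Markov's inequality gives 90% coverage at constant-factor cost with constant probability. The instance and fractional solution below are of my own making:

```python
import random

def round_cover(x, c=4, seed=0):
    # Include set i independently with probability min(1, c * x_i).
    # If x is LP-feasible, each element stays uncovered with
    # probability at most e^{-c}.
    rng = random.Random(seed)
    return [i for i, xi in enumerate(x) if rng.random() < min(1.0, c * xi)]

universe = set(range(6))
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
x = [1.0, 0.5, 1.0, 0.0]   # assumed feasible fractional solution
chosen = round_cover(x)
covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
assert len(covered) >= 0.9 * len(universe)
```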
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
case class Student(gpa: Double)

def count(c: List[Student], student: Student): Double =
  c.filter(s => s == student).size

val students = List(
  Student(1.0), Student(2.0), Student(3.0),
  Student(4.0), Student(5.0), Student(6.0)
)
And the expression e:
count(students, Student(6.0))
If we change our definitions to:
class Student2(var gpa: Double, var name: String = "*")

def innerCount(course: List[Student2], student: Student2): Double =
  course.filter(s => s == student).size

def count2(course: List[Student2], student: Student2): Double =
  innerCount(course.map(s => new Student2(student.gpa, student.name)),
    student)

val students2 = List(
  Student2(1.0, "Ana"), Student2(2.0, "Ben"), Student2(3.0, "Cal"),
  Student2(4.0, "Dre"), Student2(5.0, "Egg"), Student2(6.0, "Fra")
)
And our expression to e2:
count2(students2, Student2(6.0, "*"))
What is the result of e2? | Clearly, replacing x=x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
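The two variants above differ exactly in equality semantics, which is what the question probes: Student is a case class, so == is structural, while Student2 is a plain class, so == falls back to reference equality and the fresh copies built inside count2 can never equal the probe; e2 therefore evaluates to 0.0. A Python analogue (the classes below are hypothetical stand-ins for the Scala ones):

```python
class Student:                      # analogue of the case class: structural equality
    def __init__(self, gpa):
        self.gpa = gpa
    def __eq__(self, other):
        return isinstance(other, Student) and self.gpa == other.gpa

class Student2:                     # analogue of the plain class: reference equality (default)
    def __init__(self, gpa, name="*"):
        self.gpa, self.name = gpa, name

def count(course, student):
    return float(len([s for s in course if s == student]))

def count2(course, student):
    # like course.map(s => new Student2(student.gpa, student.name)): fresh objects
    fresh = [Student2(student.gpa, student.name) for _ in course]
    return count(fresh, student)

students = [Student(g) for g in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0)]
students2 = [Student2(g, n) for g, n in zip((1.0, 2.0, 3.0, 4.0, 5.0, 6.0), "ABCDEF")]

print(count(students, Student(6.0)))     # 1.0: structural equality finds the match
print(count2(students2, Student2(6.0)))  # 0.0: fresh objects are never == the probe
```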
An expression is referentially transparent if it always returns the same value, no matter
the global state of the program. A referentially transparent expression can be replaced by its value without
changing the result of the program.
Say we have a value representing a class of students and their GPAs. Given the following definitions:
case class Student(gpa: Double)

def count(c: List[Student], student: Student): Double =
  c.filter(s => s == student).size

val students = List(
  Student(1.0), Student(2.0), Student(3.0),
  Student(4.0), Student(5.0), Student(6.0)
)
And the expression e:
count(students, Student(6.0))
If we change our definitions to:
class Student2(var gpa: Double, var name: String = "*")

def innerCount(course: List[Student2], student: Student2): Double =
  course.filter(s => s == student).size

def count2(course: List[Student2], student: Student2): Double =
  innerCount(course.map(s => new Student2(student.gpa, student.name)),
    student)

val students2 = List(
  Student2(1.0, "Ana"), Student2(2.0, "Ben"), Student2(3.0, "Cal"),
  Student2(4.0, "Dre"), Student2(5.0, "Egg"), Student2(6.0, "Fra")
)
And our expression to e2:
count2(students2, Student2(6.0, "*"))
What is the result of e2? | I call a mode of containment φ referentially transparent if, whenever an occurrence of a singular term t is purely referential in a term or sentence ψ(t), it is purely referential also in the containing term or sentence φ(ψ(t)). The term appeared in its contemporary computer science usage in the discussion of variables in programming languages in Christopher Strachey's seminal set of lecture notes Fundamental Concepts in Programming Languages (1967): One of the most useful properties of expressions is that called by Quine referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following method:
def mystery6(nIter: Int)(ss: List[String]): List[String] =
  if nIter <= 0 then ss
  else mystery6(nIter - 1)(
    for
      s <- ss
      c <- List('c', 'b', 'a')
    yield
      s + c
  ) ::: ss
What is the output if we call mystery6 this way:
mystery6(5)(List("")).filter(_.exists(_ == 'b'))(0) | Python uses the following syntax to express list comprehensions over finite lists: A generator expression may be used in Python versions >= 2.4 which gives lazy evaluation over its input, and can be used with generators to iterate over 'infinite' input such as the count generator function which returns successive integers: (Subsequent use of the generator expression will determine when to stop generating values). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following method:
def mystery6(nIter: Int)(ss: List[String]): List[String] =
  if nIter <= 0 then ss
  else mystery6(nIter - 1)(
    for
      s <- ss
      c <- List('c', 'b', 'a')
    yield
      s + c
  ) ::: ss
What is the output if we call mystery6 this way:
mystery6(5)(List("")).filter(_.exists(_ == 'b'))(0) | An alternative representation is Scott encoding, which uses the idea of continuations and can lead to simpler code (see also Mogensen–Scott encoding). In this approach, we use the fact that lists can be observed using a pattern matching expression. For example, using Scala notation, if list denotes a value of type List with empty list Nil and constructor Cons(h, t) we can inspect the list and compute nilCode in case the list is empty and consCode(h, t) when the list is not empty: The list is given by how it acts upon nilCode and consCode. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
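One way to trace the call is to transliterate the Scala recursion into Python. Note that `mystery6(nIter - 1)(...) ::: ss` prepends the deeper recursion level, so the final list starts with the length-5 strings, generated with 'c' before 'b' before 'a'; the first one containing 'b' is "ccccb".

```python
def mystery6(n_iter, ss):
    # mystery6(nIter - 1)(for s <- ss; c <- List('c','b','a') yield s + c) ::: ss
    if n_iter <= 0:
        return ss
    return mystery6(n_iter - 1, [s + c for s in ss for c in "cba"]) + ss

result = [s for s in mystery6(5, [""]) if "b" in s]
print(result[0])   # "ccccb": "ccccc" has no 'b', so "ccccb" is the first survivor
```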
In this problem we design an LSH for points in $\mathbb{R}^d$ with the $\ell_1$ distance, i.e. $$d(p,q) =\sum_{i=1}^d |p_i - q_i|.$$ Define a class of hash functions as follows: Fix a positive number $w$. Each hash function is defined via a choice of $d$ independently selected random real numbers $s_1,s_2,\dots,s_d$, each uniform in $[0,w)$. The hash function associated with this random set of choices is $$h(x_1,\dots ,x_d) = \left(\left\lfloor \frac{x_1 - s_1}{w}\right\rfloor ,\left\lfloor \frac{x_2 - s_2}{w}\right\rfloor,\dots,\left\lfloor \frac{x_d - s_d}{w}\right\rfloor\right).$$ Let $\alpha_i = |p_i - q_i|$. What is the probability that $h(p) = h(q)$, in terms of the $\alpha_i$ values? It may be easier to first think of the case when $w=1$. Try to also simplify your expression if $w$ is much larger than $\alpha_i$'s, using that $(1-x) \approx e^{-x}$ for small values of $x\geq 0$. | One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter k and the number of hash tables L. In the first step, we define a new family $\mathcal{G}$ of hash functions g, where each function g is obtained by concatenating k functions $h_1, \ldots, h_k$ from $\mathcal{F}$, i.e., $g(p) = (h_1(p), \ldots, h_k(p))$. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from $\mathcal{F}$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In this problem we design an LSH for points in $\mathbb{R}^d$ with the $\ell_1$ distance, i.e. $$d(p,q) =\sum_{i=1}^d |p_i - q_i|.$$ Define a class of hash functions as follows: Fix a positive number $w$. Each hash function is defined via a choice of $d$ independently selected random real numbers $s_1,s_2,\dots,s_d$, each uniform in $[0,w)$. The hash function associated with this random set of choices is $$h(x_1,\dots ,x_d) = \left(\left\lfloor \frac{x_1 - s_1}{w}\right\rfloor ,\left\lfloor \frac{x_2 - s_2}{w}\right\rfloor,\dots,\left\lfloor \frac{x_d - s_d}{w}\right\rfloor\right).$$ Let $\alpha_i = |p_i - q_i|$. What is the probability that $h(p) = h(q)$, in terms of the $\alpha_i$ values? It may be easier to first think of the case when $w=1$. Try to also simplify your expression if $w$ is much larger than $\alpha_i$'s, using that $(1-x) \approx e^{-x}$ for small values of $x\geq 0$. | As a result, the statistical distance to a uniform family is $O(m/p)$, which becomes negligible when $p \gg m$. The family of simpler hash functions $h_a(x) = (ax \bmod p) \bmod m$ is only approximately universal: $\Pr\{h_a(x) = h_a(y)\} \leq 2/m$ for all $x \neq y$. Moreover, this analysis is nearly tight; Carter and Wegman show that $\Pr\{h_a(1) = h_a(m+1)\} \geq 2/(m-1)$ whenever $(p-1) \bmod m = 1$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
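For the scheme in this question, coordinate i of h(p) and h(q) agrees exactly when no grid boundary falls between the shifted values, which happens with probability 1 - α_i/w (for α_i ≤ w); independence across coordinates gives Pr[h(p) = h(q)] = Π_i (1 - α_i/w) ≈ e^{-Σ_i α_i / w} = e^{-d(p,q)/w}. A Monte Carlo sanity check (the points and w below are arbitrary illustrative choices):

```python
import math
import random

def lsh_hash(x, s, w):
    # h(x) = (floor((x_1 - s_1)/w), ..., floor((x_d - s_d)/w))
    return tuple(math.floor((xi - si) / w) for xi, si in zip(x, s))

def collision_prob(alphas, w):
    # Pr[h(p) = h(q)] = prod_i (1 - alpha_i / w), valid for alpha_i <= w
    prob = 1.0
    for a in alphas:
        prob *= max(0.0, 1.0 - a / w)
    return prob

random.seed(1)
w = 4.0
p, q = [0.0, 1.0, 2.5], [1.0, 1.0, 1.5]
alphas = [abs(a - b) for a, b in zip(p, q)]   # [1.0, 0.0, 1.0]

trials = 20000
hits = sum(
    lsh_hash(p, s, w) == lsh_hash(q, s, w)
    for _ in range(trials)
    for s in [[random.uniform(0, w) for _ in p]]
)
print(collision_prob(alphas, w), hits / trials)   # analytic value is 0.5625
```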
Church booleans are a representation of booleans in the lambda calculus. The Church encoding of true and false are functions of two parameters: Church encoding of tru: t => f => t Church encoding of fls: t => f => f What should replace ??? so that the following function computes not(b and c)? b => c => b ??? (not b) | Church Booleans are the Church encoding of the Boolean values true and false. Some programming languages use these as an implementation model for Boolean arithmetic; examples are Smalltalk and Pico. Boolean logic may be considered as a choice. The Church encoding of true and false are functions of two parameters: true chooses the first parameter. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Church booleans are a representation of booleans in the lambda calculus. The Church encoding of true and false are functions of two parameters: Church encoding of tru: t => f => t Church encoding of fls: t => f => f What should replace ??? so that the following function computes not(b and c)? b => c => b ??? (not b) | false chooses the second parameter. The two definitions are known as Church Booleans: true ≡ λa.λb.a and false ≡ λa.λb.b. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
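The gap can be checked mechanically with lambdas: taking ??? = (not c) makes b => c => b (not c) (not b) a NAND, since b = tru selects not c, and b = fls selects not b = tru. A Python sketch decoding Church booleans against the ordinary truth table:

```python
tru = lambda t: lambda f: t   # true chooses the first parameter
fls = lambda t: lambda f: f   # false chooses the second parameter

church_not = lambda b: lambda t: lambda f: b(f)(t)
church_and = lambda b: lambda c: b(c)(fls)

# Candidate: replacing ??? with (not c) gives  b => c => b (not c) (not b)
nand = lambda b: lambda c: b(church_not(c))(church_not(b))

def to_bool(b):
    # Decode a Church boolean into a Python bool.
    return b(True)(False)

for b, pb in ((tru, True), (fls, False)):
    for c, pc in ((tru, True), (fls, False)):
        assert to_bool(nand(b)(c)) == (not (pb and pc))
        assert to_bool(nand(b)(c)) == to_bool(church_not(church_and(b)(c)))
print("nand check passed")
```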
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 4x_1 - x_2 - 2x_3 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{-0.85cm} -3x_2 + 2x_3 + s_3 = 4 \\ \hspace{0.8cm} &\hspace{-1.4cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_3 - s_1 \\ s_2 &= 3 -x_3 + s_1 \\ s_3 &= 4 +3x_2 - 2x_3 \\ \cline{1-2} z &= 4 - x_2 + 2x_3 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau. | Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations each of which give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 4x_1 - x_2 - 2x_3 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{-0.85cm} -3x_2 + 2x_3 + s_3 = 4 \\ \hspace{0.8cm} &\hspace{-1.4cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_3 - s_1 \\ s_2 &= 3 -x_3 + s_1 \\ s_3 &= 4 +3x_2 - 2x_3 \\ \cline{1-2} z &= 4 - x_2 + 2x_3 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau. | Using the simplex method to solve a linear program produces a set of equations of the form $x_i + \sum_j \bar{a}_{i,j} x_j = \bar{b}_i$ where $x_i$ is a basic variable and the $x_j$'s are the nonbasic variables (i.e. the basic solution which is an optimal solution to the relaxed linear program is $x_i = \bar{b}_i$ and $x_j = 0$). We write coefficients $\bar{b}_i$ and $\bar{a}_{i,j}$ with a bar to denote the last tableau produced by the simplex method. These coefficients are different from the coefficients in the matrix A and the vector b. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
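The pivot for the tableau in this question can be checked mechanically: x_3 is the only nonbasic variable with a positive coefficient in the z-row, and the ratio test (s_2 binds at 3, s_3 at 2) makes s_3 leave with x_3 = 2. A sketch of the dictionary-form pivot in exact arithmetic (the dictionary encoding below is my own, not from the source):

```python
from fractions import Fraction as F

# Dictionary form: basic variable = constant + sum(coeff * nonbasic variable)
tableau = {
    "x1": (F(1), {"x3": F(1),  "s1": F(-1)}),
    "s2": (F(3), {"x3": F(-1), "s1": F(1)}),
    "s3": (F(4), {"x2": F(3),  "x3": F(-2)}),
    "z":  (F(4), {"x2": F(-1), "x3": F(2), "s1": F(-4)}),
}

def pivot(tab, entering, leaving):
    const, row = tab[leaving]
    a = row[entering]
    # Solve the leaving row for the entering variable.
    e_const = -const / a
    e_row = {v: -c / a for v, c in row.items() if v != entering}
    e_row[leaving] = F(1) / a
    new_tab = {entering: (e_const, e_row)}
    # Substitute the entering variable into every other row.
    for basic, (c0, r) in tab.items():
        if basic == leaving:
            continue
        k = r.get(entering, F(0))
        nr = {v: c for v, c in r.items() if v != entering}
        for v, c in e_row.items():
            nr[v] = nr.get(v, F(0)) + k * c
        new_tab[basic] = (c0 + k * e_const, nr)
    return new_tab

# x3 enters (positive reduced cost); ratio test: s2 allows 3, s3 allows 2, so s3 leaves.
t = pivot(tableau, "x3", "s3")
for b in ("x3", "x1", "s2", "z"):
    print(b, t[b])
```

The printed rows read x3 = 2 + (3/2)x2 - (1/2)s3, x1 = 3 + (3/2)x2 - s1 - (1/2)s3, s2 = 1 - (3/2)x2 + s1 + (1/2)s3 and z = 8 + 2x2 - 4s1 - s3.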
Suppose that Alice and Bob have two documents $d_A$ and $d_B$ respectively, and Charlie wants to learn about the difference between them. We represent each document by its word frequency vector as follows. We assume that words in $d_A$ and $d_B$ come from some dictionary of size $n$, and let $x\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$\footnote{We let $[n]:=\{1,2,\ldots, n\}$.} the entry $x_i$ equals the number of times the $i$-th word in the dictionary occurs in $d_A$. Similarly, let $y\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$ the entry $y_i$ denotes the number of times the $i$-th word in the dictionary occurs in $d_B$. We assume that the number of words in each document is bounded by a polynomial in $n$. Suppose that there exists $i^*\in [n]$ such that for all $i\in [n]\setminus \{i^*\}$ one has $|x_i-y_i|\leq 2$, and for $i^*$ one has $|x_{i^*}-y_{i^*}|\geq n^{1/2}$. Show that Alice and Bob can each send a $O(\log^2 n)$-bit message to Charlie, from which Charlie can recover the identity of the special word $i^*$. Your solution must succeed with probability at least $9/10$. You may assume that Alice, Bob and Charlie have a source of shared random bits. | Given that there are three occurrences of letters followed by an L, the resulting probability is $\frac{1}{3}\cdot\frac{3}{12}=\frac{1}{12}$. We obtain 17 words, which can each be encoded into a fixed-sized output of $\lceil \log_2(17)\rceil = 5$ bits. Note that we could iterate further, increasing the number of words by $|\mathcal{U}|-1=8$ every time. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Suppose that Alice and Bob have two documents $d_A$ and $d_B$ respectively, and Charlie wants to learn about the difference between them. We represent each document by its word frequency vector as follows. We assume that words in $d_A$ and $d_B$ come from some dictionary of size $n$, and let $x\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$\footnote{We let $[n]:=\{1,2,\ldots, n\}$.} the entry $x_i$ equals the number of times the $i$-th word in the dictionary occurs in $d_A$. Similarly, let $y\in \mathbb{R}^n$ be a vector such that for every word $i\in [n]$ the entry $y_i$ denotes the number of times the $i$-th word in the dictionary occurs in $d_B$. We assume that the number of words in each document is bounded by a polynomial in $n$. Suppose that there exists $i^*\in [n]$ such that for all $i\in [n]\setminus \{i^*\}$ one has $|x_i-y_i|\leq 2$, and for $i^*$ one has $|x_{i^*}-y_{i^*}|\geq n^{1/2}$. Show that Alice and Bob can each send a $O(\log^2 n)$-bit message to Charlie, from which Charlie can recover the identity of the special word $i^*$. Your solution must succeed with probability at least $9/10$. You may assume that Alice, Bob and Charlie have a source of shared random bits. | Now let the stream of bytes provide a string of overlapping 5-letter words, each "letter" taking values A, B, C, D, E. The letters are determined by the number of 1s in a byte 0, 1, or 2 yield A, 3 yields B, 4 yields C, 5 yields D and 6, 7 or 8 yield E. Thus we have a monkey at a typewriter hitting five keys with various probabilities (37, 56, 70, 56, 37 over 256). There are 55 possible 5-letter words, and from a string of 256000 (overlapping) 5-letter words, counts are made on the frequencies for each word. The quadratic form in the weak inverse of the covariance matrix of the cell counts provides a chisquare test Q5–Q4, the difference of the naive Pearson sums of (OBS-EXP)2 / EXP on counts for 5- and 4-letter cell counts. 
| https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. Recall that in the min $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. Show that the optimal value of the following linear program equals the number of edges crossed by a min $s,t$-cut: \begin{align*} \textbf{minimize} \hspace{0.8cm} & \sum_{e\in E} y_e \\ \textbf{subject to}\hspace{0.8cm} & y_{\{u,v\}} \geq x_u - x_v \qquad \mbox{for every $\{u,v\}\in E$} \\ \hspace{0.8cm} & y_{\{u,v\}} \geq x_v - x_u \qquad \mbox{for every $\{u,v\}\in E$} \\ & \hspace{0.6cm}x_s = 0 \\ & \hspace{0.6cm}x_t = 1 \\ & \hspace{0.6cm}x_v \in [0,1] \qquad \mbox{for every $v\in V$} \end{align*} The above linear program has a variable $x_v$ for every vertex $v\in V$ and a variable $y_e$ for every edge $e\in E$. \emph{Hint: Show that the expected value of the following randomized rounding equals the value of the linear program. Select $\theta$ uniformly at random from $[0,1]$ and output the cut $ S = \{v\in V: x_v \leq \theta\}$.} | There are $2^{|V|}$ ways of choosing for each vertex whether it belongs to $S$ or to $T$, but two of these choices make $S$ or $T$ empty and do not give rise to cuts. Among the remaining choices, swapping the roles of $S$ and $T$ does not change the cut, so each cut is counted twice; therefore, there are $2^{|V|-1}-1$ distinct cuts. The minimum cut problem is to find a cut of smallest size among these cuts. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. Recall that in the min $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. Show that the optimal value of the following linear program equals the number of edges crossed by a min $s,t$-cut: \begin{align*} \textbf{minimize} \hspace{0.8cm} & \sum_{e\in E} y_e \\ \textbf{subject to}\hspace{0.8cm} & y_{\{u,v\}} \geq x_u - x_v \qquad \mbox{for every $\{u,v\}\in E$} \\ \hspace{0.8cm} & y_{\{u,v\}} \geq x_v - x_u \qquad \mbox{for every $\{u,v\}\in E$} \\ & \hspace{0.6cm}x_s = 0 \\ & \hspace{0.6cm}x_t = 1 \\ & \hspace{0.6cm}x_v \in [0,1] \qquad \mbox{for every $v\in V$} \end{align*} The above linear program has a variable $x_v$ for every vertex $v\in V$ and a variable $y_e$ for every edge $e\in E$. \emph{Hint: Show that the expected value of the following randomized rounding equals the value of the linear program. Select $\theta$ uniformly at random from $[0,1]$ and output the cut $ S = \{v\in V: x_v \leq \theta\}$.} | By repeating the contraction algorithm $T = \binom{n}{2}\ln n$ times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is $\left(1-\binom{n}{2}^{-1}\right)^{T} \leq \frac{1}{e^{\ln n}} = \frac{1}{n}$. The total running time for $T$ repetitions for a graph with $n$ vertices and $m$ edges is $O(Tm) = O(n^2 m \log n)$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
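The hint's rounding can be checked on a small example: an edge {u,v} is cut exactly when θ falls between x_u and x_v, which happens with probability |x_u - x_v|, so E[cut] = Σ_e |x_u - x_v| ≤ Σ_e y_e. A sketch with a hypothetical fractional solution, integrating the piecewise-constant cut value exactly instead of sampling θ:

```python
# Toy graph with a hypothetical LP-feasible assignment: x_s = 0, x_t = 1.
edges = [("s", "a"), ("a", "b"), ("b", "t"), ("s", "b"), ("a", "t")]
x = {"s": 0.0, "a": 0.25, "b": 0.5, "t": 1.0}

def cut_value(theta):
    # S = {v : x_v <= theta}; count edges with exactly one endpoint in S.
    S = {v for v, xv in x.items() if xv <= theta}
    return sum((u in S) != (v in S) for u, v in edges)

# The cut is constant between consecutive x-values, so integrate piecewise.
pts = sorted(set(x.values()))
expected = sum((hi - lo) * cut_value((lo + hi) / 2) for lo, hi in zip(pts, pts[1:]))

lp_value = sum(abs(x[u] - x[v]) for u, v in edges)
print(expected, lp_value)   # equal: E[cut] = sum_e |x_u - x_v|
```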
A beautiful result by the Swiss mathematician Leonhard Euler (1707 - 1783) can be stated as follows: \begin{itemize} \item[] Let $G= (V,E)$ be an undirected graph. If every vertex has an even degree, then we can orient the edges in $E$ to obtain a directed graph where the in-degree of each vertex equals its out-degree. \end{itemize} In this problem, we address the problem of correcting an imperfect orientation $A$ to a perfect one $A'$ by flipping the orientation of the fewest possible edges. The formal problem statement is as follows: \begin{description} \item[Input:] An undirected graph $G=(V,E)$ where every vertex has an even degree and an orientation $A$ of $E$. That is, for every $\{u,v\}\in E$, $A$ either contains the directed edge $(u,v)$ that is oriented towards $v$ or the directed edge $(v,u)$ that is oriented towards $u$. \item[Output:] An orientation $A'$ of $E$ such that $|A'\setminus A|$ is minimized and \begin{align*} \underbrace{|\{u\in V : (u,v) \in A'\}|}_{\mbox{\scriptsize in-degree}} = \underbrace{|\{u\in V: (v,u) \in A'\}|}_{\mbox{\scriptsize out-degree}} \qquad \mbox{for every $v\in V$}. \end{align*} \end{description} \noindent {Design and analyze} a polynomial-time algorithm for the above problem. \\ {\em (In this problem you are asked to (i) design the algorithm, (ii) analyze its running time, and (iii) show that it returns a correct solution. 
Recall that you are allowed to refer to material covered in the lecture notes.)} \\[1cm] \setlength{\fboxsep}{2mm} \begin{boxedminipage}{\textwidth} An example is as follows: \begin{center} \begin{tikzpicture} \begin{scope} \node at (0, 2) {\small $G$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge (b); \draw (b) edge (c); \draw (c) edge (d); \draw (d) edge (a); \end{scope} \begin{scope}[xshift=5.5cm] \node at (0, 2) {\small $A = \{(a,b), (c,b), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[<-] (c); \draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \begin{scope}[xshift=11cm] \node at (0, 2) {\small $A' = \{(a,b), (b,c), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[->] (c); \draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \end{tikzpicture} \end{center} The solution $A'$ has value $|A' \setminus A| = 1$ {\small (the number of edges for which the orientation was flipped).} \end{boxedminipage} | This is an orientation with the property that, for every pair of vertices u and v in G, the number of pairwise edge-disjoint directed paths from u to v in the resulting directed graph is at least ⌊ k 2 ⌋ {\displaystyle \left\lfloor {\frac {k}{2}}\right\rfloor } , where k is the maximum number of paths in a set of edge-disjoint undirected paths from u to v. Nash-Williams' orientations also have the property that they are as close as possible to being Eulerian orientations: at each vertex, the indegree and the outdegree are within one of each other. 
The existence of well-balanced orientations, together with Menger's theorem, immediately implies Robbins' theorem: by Menger's theorem, a 2-edge-connected graph has at least two edge-disjoint paths between every pair of vertices, from which it follows that any well-balanced orientation must be strongly connected. More generally this result implies that every 2k-edge-connected undirected graph can be oriented to form a k-edge-connected directed graph. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
A beautiful result by the Swiss mathematician Leonhard Euler (1707 - 1783) can be stated as follows: \begin{itemize} \item[] Let $G= (V,E)$ be an undirected graph. If every vertex has an even degree, then we can orient the edges in $E$ to obtain a directed graph where the in-degree of each vertex equals its out-degree. \end{itemize} In this problem, we address the problem of correcting an imperfect orientation $A$ to a perfect one $A'$ by flipping the orientation of the fewest possible edges. The formal problem statement is as follows: \begin{description} \item[Input:] An undirected graph $G=(V,E)$ where every vertex has an even degree and an orientation $A$ of $E$. That is, for every $\{u,v\}\in E$, $A$ either contains the directed edge $(u,v)$ that is oriented towards $v$ or the directed edge $(v,u)$ that is oriented towards $u$. \item[Output:] An orientation $A'$ of $E$ such that $|A'\setminus A|$ is minimized and \begin{align*} \underbrace{|\{u\in V : (u,v) \in A'\}|}_{\mbox{\scriptsize in-degree}} = \underbrace{|\{u\in V: (v,u) \in A'\}|}_{\mbox{\scriptsize out-degree}} \qquad \mbox{for every $v\in V$}. \end{align*} \end{description} \noindent {Design and analyze} a polynomial-time algorithm for the above problem. \\ {\em (In this problem you are asked to (i) design the algorithm, (ii) analyze its running time, and (iii) show that it returns a correct solution. 
Recall that you are allowed to refer to material covered in the lecture notes.)} \\[1cm] \setlength{\fboxsep}{2mm} \begin{boxedminipage}{\textwidth} An example is as follows: \begin{center} \begin{tikzpicture} \begin{scope} \node at (0, 2) {\small $G$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge (b); \draw (b) edge (c); \draw (c) edge (d); \draw (d) edge (a); \end{scope} \begin{scope}[xshift=5.5cm] \node at (0, 2) {\small $A = \{(a,b), (c,b), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[<-] (c); \draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \begin{scope}[xshift=11cm] \node at (0, 2) {\small $A' = \{(a,b), (b,c), (c,d), (d,a)\}$}; \node[vertex] (b) at (1,1) {$b$}; \node[vertex] (c) at (1,-1) {$c$}; \node[vertex] (d) at (-1,-1) {$d$}; \node[vertex] (a) at (-1,1) {$a$}; \draw (a) edge[->] (b); \draw (b) edge[->] (c); \draw (c) edge[->] (d); \draw (d) edge[->] (a); \end{scope} \end{tikzpicture} \end{center} The solution $A'$ has value $|A' \setminus A| = 1$ {\small (the number of edges for which the orientation was flipped).} \end{boxedminipage} | A strong orientation of a given bridgeless undirected graph may be found in linear time by performing a depth-first search of the graph, orienting all edges in the depth-first search tree away from the tree root, and orienting all the remaining edges (which must necessarily connect an ancestor and a descendant in the depth-first search tree) from the descendant to the ancestor. If an undirected graph G with bridges is given, together with a list of ordered pairs of vertices that must be connected by directed paths, it is possible in polynomial time to find an orientation of G that connects all the given pairs, if such an orientation exists. 
However, the same problem is NP-complete when the input may be a mixed graph. It is #P-complete to count the number of strong orientations of a given graph G, even when G is planar and bipartite. However, for dense graphs (more specifically, graphs in which each vertex has a linear number of neighbors), the number of strong orientations may be estimated by a fully polynomial-time randomized approximation scheme. The problem of counting strong orientations may also be solved exactly, in polynomial time, for graphs of bounded treewidth. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
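Euler's existence result quoted in this question is constructive: walk each component's edges Hierholzer-style and orient every edge in the walk direction; each visit to a vertex pairs one incoming edge with one outgoing edge, so in-degree equals out-degree. A sketch of that construction (it illustrates the existence statement only; the correction problem itself is typically solved with a min-cost flow, not shown here):

```python
from collections import defaultdict

def euler_orientation(n, edges):
    """Orient an even-degree multigraph so every vertex has in-degree == out-degree,
    by walking edges Hierholzer-style and orienting each edge along the walk."""
    adj = defaultdict(list)                 # vertex -> stack of (neighbor, edge id)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    orientation = [None] * len(edges)
    for start in range(n):
        stack = [start]
        while stack:
            u = stack[-1]
            while adj[u] and used[adj[u][-1][1]]:
                adj[u].pop()                # discard stubs of already-oriented edges
            if adj[u]:
                v, i = adj[u].pop()
                used[i] = True
                orientation[i] = (u, v)     # orient in the direction we walk it
                stack.append(v)
            else:
                stack.pop()
    return orientation

# Two triangles sharing vertex 2: every degree is even.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
orientation = euler_orientation(5, edges)
indeg, outdeg = defaultdict(int), defaultdict(int)
for u, v in orientation:
    outdeg[u] += 1
    indeg[v] += 1
print(orientation)
```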
Consider the linear programming relaxation for minimum-weight vertex cover: \begin{align*} \text{Minimize} \quad &\sum_{v\in V} x_v w(v)\\ \text{Subject to} \quad &x_u + x_v \geq 1 \quad \forall \{u,v\} \in E \\ &0 \leq x_v \leq 1 \quad \ \ \forall v \in V \end{align*} In class, we saw that any extreme point is integral when considering bipartite graphs. For general graphs, this is not true, as can be seen by considering the graph consisting of a single triangle. However, we have the following statement for general graphs: \begin{itemize} \item[] Any extreme point $x^*$ satisfies $x^*_v \in \{0, \frac12, 1\}$ for every $v\in V$\,. \end{itemize} Prove the above statement. | Assume that every vertex has an associated cost of $c(v) \geq 0$. The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP). This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is $2$, so its relaxation (allowing each variable to be in the interval from 0 to 1, rather than requiring the variables to be only 0 or 1) gives a factor-$2$ approximation algorithm for the minimum vertex cover problem. Furthermore, the linear programming relaxation of that ILP is half-integral, that is, there exists an optimal solution for which each entry $x_v$ is either 0, 1/2, or 1. A 2-approximate vertex cover can be obtained from this fractional solution by selecting the subset of vertices whose variables are nonzero. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the linear programming relaxation for minimum-weight vertex cover: \begin{align*} \text{Minimize} \quad &\sum_{v\in V} x_v w(v)\\ \text{Subject to} \quad &x_u + x_v \geq 1 \quad \forall \{u,v\} \in E \\ &0 \leq x_v \leq 1 \quad \ \ \forall v \in V \end{align*} In class, we saw that any extreme point is integral when considering bipartite graphs. For general graphs, this is not true, as can be seen by considering the graph consisting of a single triangle. However, we have the following statement for general graphs: \begin{itemize} \item[] Any extreme point $x^*$ satisfies $x^*_v \in \{0, \frac12, 1\}$ for every $v\in V$\,. \end{itemize} Prove the above statement. | The vertex cover problem involves finding a set of vertices that touches every edge of the graph. It is NP-hard but can be approximated to within an approximation ratio of two, for instance by taking the endpoints of the matched edges in any maximal matching. Evidence that this is the best possible approximation ratio of a polynomial-time approximation algorithm is provided by the fact that, when represented as a semidefinite program, the problem has an integrality gap of two; this gap is the ratio between the solution value of the integer solution (a valid vertex cover) and of its semidefinite relaxation. According to the unique games conjecture, for many problems such as this the optimal approximation ratio is provided by the integrality gap of their semidefinite relaxation. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
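The statement can be sanity-checked on the triangle mentioned in this question: half-integrality guarantees that some optimal extreme point lies on the grid {0, 1/2, 1}^V, so a grid search recovers the LP optimum, which for the unit-weight triangle is 3/2, strictly below 2, the best integral cover. A brute-force sketch on this hypothetical instance:

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle with unit weights
w = [1.0, 1.0, 1.0]

def feasible(x):
    # Every edge must be fractionally covered: x_u + x_v >= 1.
    return all(x[u] + x[v] >= 1 for u, v in edges)

# By half-integrality, searching {0, 1/2, 1}^V finds a true LP optimum.
best = min(sum(wi * xi for wi, xi in zip(w, x))
           for x in product([0, 0.5, 1], repeat=3) if feasible(x))
print(best)   # 1.5, attained at x = (1/2, 1/2, 1/2)
```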
To which expression is the following for-loop translated?
def mystery8(xs: List[List[Int]]) =
  for
    x <- xs
    y <- x
  yield
    y | Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
To which expression is the following for-loop translated?
def mystery8(xs: List[List[Int]]) =
  for
    x <- xs
    y <- x
  yield
    y | A similar notation available in a number of programming languages (notably Python and Haskell) is the list comprehension, which combines map and filter operations over one or more lists. In Python, the set-builder's braces are replaced with square brackets, parentheses, or curly braces, giving list, generator, and set objects, respectively. Python uses an English-based syntax. Haskell replaces the set-builder's braces with square brackets and uses symbols, including the standard set-builder vertical bar. The same can be achieved in Scala using Sequence Comprehensions, where the "for" keyword returns a list of the yielded variables using the "yield" keyword. Consider these set-builder notation examples in some programming languages: The set builder notation and list comprehension notation are both instances of a more general notation known as monad comprehensions, which permits map/filter-like operations over any monad with a zero element. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
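The desugaring for mystery8 is xs.flatMap(x => x.map(y => y)), i.e. flattening one level of nesting. A Python analogue with flatMap written out explicitly:

```python
def flat_map(f, xs):
    # Scala's flatMap: map each element to a list, then concatenate.
    return [y for x in xs for y in f(x)]

xs = [[1, 2], [], [3]]

# for x <- xs; y <- x yield y  ==  xs.flatMap(x => x.map(y => y))
flattened = flat_map(lambda x: [y for y in x], xs)
print(flattened)   # [1, 2, 3]
```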
To which expression is the following for-loop translated? for x <- xs if x > 5; y <- ys yield x + y | Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
To which expression is the following for-loop translated? for x <- xs if x > 5; y <- ys yield x + y | The syntax of the JavaScript for loop is as follows: or | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
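The expected translation here is `xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y))`. As a sketch (names illustrative), the Python analogue of that desugaring:

```python
def translated(xs, ys):
    # ≈ xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y))
    return [x + y for x in xs if x > 5 for y in ys]

print(translated([3, 6, 8], [10, 20]))  # [16, 26, 18, 28]
```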
What does the following operation output for a given input list of numbers?
def mystery5(ys: List[Int]) =
  for y <- ys if y >= 0 && y <= 255 yield
    val bits =
      for z <- 7 to 0 by -1 yield
        if ((1 << z) & y) != 0 then "1" else "0"
    bits.foldRight("")((z, acc) => z + acc)
We have as an output... | This implementation is written in Python; it is assumed that the input_list will be a sequence of integers. The function returns a new list rather than mutating the one passed in, but it can be trivially modified to operate in place efficiently. We can also implement the algorithm using Java. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What does the following operation output for a given input list of numbers?
def mystery5(ys: List[Int]) =
  for y <- ys if y >= 0 && y <= 255 yield
    val bits =
      for z <- 7 to 0 by -1 yield
        if ((1 << z) & y) != 0 then "1" else "0"
    bits.foldRight("")((z, acc) => z + acc)
We have as an output... | This can be compiled into an executable which matches and outputs strings of integers. For example, given the input: abc123z. !&*2gj6 the program will print: Saw an integer: 123 Saw an integer: 2 Saw an integer: 6 | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
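What mystery5 computes — each in-range value rendered as its 8-bit binary string — can be sketched in Python (the bit loop mirrors the Scala inner for; the foldRight over one-character strings is just concatenation):

```python
def mystery5(ys):
    # Keep values in the byte range 0..255 and render each as its 8-bit
    # binary string, most significant bit first.
    out = []
    for y in ys:
        if 0 <= y <= 255:
            bits = ["1" if ((1 << z) & y) != 0 else "0" for z in range(7, -1, -1)]
            out.append("".join(bits))
    return out

print(mystery5([5, 300, 255]))  # ['00000101', '11111111']
```

Note that 300 is silently dropped by the filter, exactly as in the Scala version.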
You are asked to implement the following List functions using only the specified List API methods. You are also allowed to use the reverse method in any subquestion, should you see it fit. If you need another method of List, you need to reimplement it as part of your answer. Please refer to the appendix on the last page as a reminder for the behavior of the given List API methods. Implement scanLeft using only foldLeft, Nil and :: (cons). def scanLeft[A, B](xs: List[A])(z: B)(op: (B, A) => B): List[B] = ??? | They also highlight the fact that foldr (:) is the identity function on lists (a shallow copy in Lisp parlance), as replacing cons with cons and nil with nil will not change the result. The left fold diagram suggests an easy way to reverse a list, foldl (flip (:)) . Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
You are asked to implement the following List functions using only the specified List API methods. You are also allowed to use the reverse method in any subquestion, should you see it fit. If you need another method of List, you need to reimplement it as part of your answer. Please refer to the appendix on the last page as a reminder for the behavior of the given List API methods. Implement scanLeft using only foldLeft, Nil and :: (cons). def scanLeft[A, B](xs: List[A])(z: B)(op: (B, A) => B): List[B] = ??? | Were the function f to refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on its left i.e., in its first argument), then the recursion would stop. This means that while foldr recurses on the right, it allows for a lazy combining function to inspect list's elements from the left; and conversely, while foldl recurses on the left, it allows for a lazy combining function to inspect list's elements from the right, if it so chooses (e.g., last == foldl (\a b->b) (error "empty list")). Reversing a list is also tail-recursive (it can be implemented using rev = foldl (\ys x -> x: ys) ). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
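A sketch of the requested scanLeft, transliterated to Python: functools.reduce plays the role of foldLeft, the accumulator conses each partial result onto a reversed prefix list (mirroring `::`), and a final reverse — the one extra method the statement explicitly allows — restores left-to-right order.

```python
from functools import reduce

def scan_left(xs, z, op):
    # State is (current fold value, partial results in reverse order).
    def step(state, x):
        acc, rev = state
        nxt = op(acc, x)
        return nxt, [nxt] + rev      # cons the new partial result
    _, rev = reduce(step, xs, (z, [z]))
    return list(reversed(rev))

print(scan_left([1, 2, 3], 0, lambda a, b: a + b))  # [0, 1, 3, 6]
```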
Given the following classes:
• class Pair[+U, +V]
• class Iterable[+U]
• class Map[U, +V] extends Iterable[Pair[U, V]]
Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither
covariance nor contravariance).
Consider also the following typing relationships for A, B, X, and Y:
• A >: B
• X >: Y
Fill in the subtyping relation between the types below using symbols:
• <: in case T1 is a subtype of T2;
• >: in case T1 is a supertype of T2;
• “Neither” in case T1 is neither a subtype nor a supertype of T2.
What is the correct subtyping relationship between Map[A, X] and Map[B, Y]? | Subtyping and inheritance are independent (orthogonal) relationships. They may coincide, but none is a special case of the other. In other words, between two types S and T, all combinations of subtyping and inheritance are possible: S is neither a subtype nor a derived type of T S is a subtype but is not a derived type of T S is not a subtype but is a derived type of T S is both a subtype and a derived type of TThe first case is illustrated by independent types, such as Boolean and Float. The second case can be illustrated by the relationship between Int32 and Int64. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Given the following classes:
• class Pair[+U, +V]
• class Iterable[+U]
• class Map[U, +V] extends Iterable[Pair[U, V]]
Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither
covariance nor contravariance).
Consider also the following typing relationships for A, B, X, and Y:
• A >: B
• X >: Y
Fill in the subtyping relation between the types below using symbols:
• <: in case T1 is a subtype of T2;
• >: in case T1 is a supertype of T2;
• “Neither” in case T1 is neither a subtype nor a supertype of T2.
What is the correct subtyping relationship between Map[A, X] and Map[B, Y]? | Sound structural subtyping rules for types other than object types are also well known.Implementations of programming languages with subtyping fall into two general classes: inclusive implementations, in which the representation of any value of type A also represents the same value at type B if A <: B, and coercive implementations, in which a value of type A can be automatically converted into one of type B. The subtyping induced by subclassing in an object-oriented language is usually inclusive; subtyping relations that relate integers and floating-point numbers, which are represented differently, are usually coercive. In almost all type systems that define a subtyping relation, it is reflexive (meaning A <: A for any type A) and transitive (meaning that if A <: B and B <: C then A <: C). This makes it a preorder on types. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For a bipartite graph, devise an efficient algorithm for finding an augmenting path $P$ (if one exists). What is the total running time of the \textsc{AugmentingPathAlgorithm} explained in the second lecture? | The search for an augmenting path uses an auxiliary data structure consisting of a forest F whose individual trees correspond to specific portions of the graph G. In fact, the forest F is the same that would be used to find maximum matchings in bipartite graphs (without need for shrinking blossoms). In each iteration the algorithm either (1) finds an augmenting path, (2) finds a blossom and recurses onto the corresponding contracted graph, or (3) concludes there are no augmenting paths. The auxiliary structure is built by an incremental procedure discussed next.The construction procedure considers vertices v and edges e in G and incrementally updates F as appropriate. If v is in a tree T of the forest, we let root(v) denote the root of T. If both u and v are in the same tree T in F, we let distance(u,v) denote the length of the unique path from u to v in T. INPUT: Graph G, matching M on G OUTPUT: augmenting path P in G or empty path if none found B01 function find_augmenting_path(G, M): P B02 F ← empty forest B03 unmark all vertices and edges in G, mark all edges of M B05 for each exposed vertex v do B06 create a singleton tree { v } and add the tree to F B07 end for B08 while there is an unmarked vertex v in F with distance(v, root(v)) even do B09 while there exists an unmarked edge e = { v, w } do B10 if w is not in F then // w is matched, so add e and w's matched edge to F B11 x ← vertex matched to w in M B12 add edges { v, w } and { w, x } to the tree of v B13 else B14 if distance(w, root(w)) is odd then // Do nothing. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
For a bipartite graph, devise an efficient algorithm for finding an augmenting path $P$ (if one exists). What is the total running time of the \textsc{AugmentingPathAlgorithm} explained in the second lecture? | An improvement to this algorithm is given by the more elaborate Hopcroft–Karp algorithm, which searches for multiple augmenting paths simultaneously. This algorithm runs in $O(\sqrt{V}E)$ time. The algorithm of Chandran and Hochbaum for bipartite graphs runs in time that depends on the size of the maximum matching $k$, which for $|X| < |Y|$ is $O\left(\min\{|X|k, E\} + \sqrt{k}\min\{k^2, E\}\right)$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
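In bipartite graphs no blossoms arise, so a single BFS over alternating paths suffices: start from every unmatched left vertex, cross unmatched edges left-to-right and matched edges right-to-left, and stop on reaching an unmatched right vertex. Each search costs O(V + E), and since each augmentation grows the matching by one, the overall augmenting-path algorithm runs in O(V·E). A sketch (the adjacency-list format and function name are illustrative, not from the lecture):

```python
from collections import deque

def find_augmenting_path(adj, match_l, match_r):
    """adj: left vertex -> list of right vertices; match_l/match_r hold the
    current matching (None where unmatched). Returns an augmenting path
    [l0, r0, l1, r1, ...] starting and ending unmatched, or None."""
    parent = {}
    q = deque()
    for u in adj:
        if match_l[u] is None:            # BFS starts at every free left vertex
            parent[('L', u)] = None
            q.append(u)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if ('R', v) in parent:
                continue
            parent[('R', v)] = ('L', u)   # unmatched edge, left -> right
            if match_r.get(v) is None:    # free right vertex: path found
                path, node = [], ('R', v)
                while node is not None:
                    path.append(node[1])
                    node = parent[node]
                return list(reversed(path))
            w = match_r[v]                # matched edge, right -> left
            parent[('L', w)] = ('R', v)
            q.append(w)
    return None

# Left 0 is matched to right 0; left 1 is free and adjacent to rights 0 and 1.
print(find_augmenting_path({0: [0], 1: [0, 1]}, {0: 0, 1: None}, {0: 0, 1: None}))
```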
Recall from the last lecture (see Section 16.1.1 in notes of Lecture~8) that the number of mistakes that Weighted Majority makes is at most $2(1+\epsilon) \cdot \mbox{(\# of $i$'s mistakes)} + O(\log N/\epsilon)$, where $i$ is any expert and $N$ is the number of experts. Give an example that shows that the factor $2$ is tight in the above bound. The simplest such example only uses two experts, i.e., $N=2$, and each of the experts is wrong roughly half of the time. Finally, note how your example motivates the use of a random strategy (as in the Hedge strategy that we will see in the next lecture). | Suppose there are $n$ experts and the best expert makes $m$ mistakes. The weighted majority algorithm (WMA) makes at most $2.4(\log_2 n + m)$ mistakes, which is not a very good bound. We can do better by introducing randomization. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall from the last lecture (see Section 16.1.1 in notes of Lecture~8) that the number of mistakes that Weighted Majority makes is at most $2(1+\epsilon) \cdot \mbox{(\# of $i$'s mistakes)} + O(\log N/\epsilon)$, where $i$ is any expert and $N$ is the number of experts. Give an example that shows that the factor $2$ is tight in the above bound. The simplest such example only uses two experts, i.e., $N=2$, and each of the experts is wrong roughly half of the time. Finally, note how your example motivates the use of a random strategy (as in the Hedge strategy that we will see in the next lecture). | The nonrandomized weighted majority algorithm (WMA) only guarantees an upper bound of $2.4(\log_2 n + m)$, which is problematic for highly error-prone experts (e.g. the best expert still makes a mistake 20% of the time). Suppose we do $N = 100$ rounds using $n = 10$ experts. If the best expert makes $m = 20$ mistakes, we can only guarantee an upper bound of $2.4(\log_2 10 + 20) \approx 56$ on our number of mistakes. As this is a known limitation of WMA, attempts to improve this shortcoming have been explored in order to improve the dependence on $m$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
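The tightness example can be simulated. A sketch with deterministic weight-halving (one particular choice of update rule, assumed here): two experts that always disagree, and an adversary that sets the outcome opposite to the weighted-majority vote. The algorithm then errs every round while each expert errs only half the time — exactly the factor 2, and exactly what randomizing the prediction avoids.

```python
def simulate(rounds):
    w = [1.0, 1.0]                    # one weight per expert
    wm_mistakes, expert_mistakes = 0, [0, 0]
    for _ in range(rounds):
        advice = [0, 1]               # the two experts always disagree
        prediction = 0 if w[0] >= w[1] else 1   # weighted vote, ties -> 0
        outcome = 1 - prediction      # adversary: always the opposite
        wm_mistakes += 1              # so WM is wrong every round
        for i in (0, 1):
            if advice[i] != outcome:
                expert_mistakes[i] += 1
                w[i] /= 2             # halve the weight of a wrong expert
    return wm_mistakes, expert_mistakes

print(simulate(100))  # (100, [50, 50])
```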
Consider an IR system using a Vector Space model with Okapi BM25 as the weighting scheme (with \(k=1.5\) and \(b=0.75\)) and operating on a document collection that contains: a document \(d_1\), and a document \(d_3\) corresponding to the concatenation of 3 copies of \(d_1\). Indicate which of the following statements are true, where \(\langle d\rangle\) stands for the vector representing document \(d\): (Penalty for wrong ticks.) | The computed Tk and Dk matrices define the term and document vector spaces, which with the computed singular values, Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors. The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider an IR system using a Vector Space model with Okapi BM25 as the weighting scheme (with \(k=1.5\) and \(b=0.75\)) and operating on a document collection that contains: a document \(d_1\), and a document \(d_3\) corresponding to the concatenation of 3 copies of \(d_1\). Indicate which of the following statements are true, where \(\langle d\rangle\) stands for the vector representing document \(d\): (Penalty for wrong ticks.) | Documents and queries are represented as vectors: \(d_j = (w_{1,j}, w_{2,j}, \dotsc, w_{n,j})\) and \(q = (w_{1,q}, w_{2,q}, \dotsc, w_{n,q})\). Each dimension corresponds to a separate term. If a term occurs in the document, its value in the vector is non-zero. Several different ways of computing these values, also known as (term) weights, have been developed. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
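The effect of concatenating three copies can be checked numerically. A sketch of the BM25 term-frequency component (the IDF factor is omitted since \(d_1\) and \(d_3\) share it; the concrete values of tf and dl below, and a two-document collection for computing avgdl, are illustrative assumptions):

```python
def bm25_tf(tf, dl, avgdl, k=1.5, b=0.75):
    # Saturating, length-normalized term-frequency component of Okapi BM25
    return tf * (k + 1) / (tf + k * (1 - b + b * dl / avgdl))

# d1 has term frequency tf and length dl; d3 is 3 copies of d1.
tf, dl = 2, 10
avgdl = (dl + 3 * dl) / 2
w1 = bm25_tf(tf, dl, avgdl)
w3 = bm25_tf(3 * tf, 3 * dl, avgdl)
print(w1 < w3 < 3 * w1)  # normalization damps, but does not cancel, the tripling
```

With \(b < 1\) the length normalization is only partial, so \(\langle d_3\rangle\) is neither equal to \(\langle d_1\rangle\) nor simply three times it.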
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
We will consider the following three messages:
The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast
and two economic reports. Oil rose to $71.92.
BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with
Warner Brothers to distribute its TV and movie content via the BT platform. It
has now lined up IP transit for streaming videos at a few gigabits per second
Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal
could make Marvell the top supplier of 3G and later smartphone processors, and
enable Intel to focus on its core x86 and wireless LAN chipset businesses, the
companies say.
For the first text, give an example of the corresponding output of the NLP pre-processor steps. | Naive Bayes classifiers are a popular statistical technique of e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things), with spam and non-spam e-mails and then using Bayes' theorem to calculate a probability that an email is or is not spam. Naive Bayes spam filtering is a baseline technique for dealing with spam that can tailor itself to the email needs of individual users and give low false positive spam detection rates that are generally acceptable to users. It is one of the oldest ways of doing spam filtering, with roots in the 1990s. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
In an automated email router of a company, we want to make the distinction between three kinds of
emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a
Naive Bayes approach.
What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
We will consider the following three messages:
The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast
and two economic reports. Oil rose to $71.92.
BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with
Warner Brothers to distribute its TV and movie content via the BT platform. It
has now lined up IP transit for streaming videos at a few gigabits per second
Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal
could make Marvell the top supplier of 3G and later smartphone processors, and
enable Intel to focus on its core x86 and wireless LAN chipset businesses, the
companies say.
For the first text, give an example of the corresponding output of the NLP pre-processor steps. | In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve high accuracy levels.Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression,: 718 which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What measure should you compute to estimate the quality of the annotations produced by the two annotators? | The quality of the sequence assembly influences the quality of the annotation, so it is important to assess assembly quality before performing the subsequent annotation steps. In order to quantify the quality of a genome annotation, three metrics have been used: recall, precision and accuracy; although these measures are not explicitly used in annotation projects, but rather in discussions of prediction accuracy.Community annotation approaches are great techniques for quality control and standardization in genome annotation. An annotation jamboree that took part in 2002, led to the creation of the annotation standards used by the Sanger Institute's Human and Vertebrate Analysis Project (HAVANA). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
What measure should you compute to estimate the quality of the annotations produced by the two annotators? | Confusion matrix for two annotators, three categories {Yes, No, Maybe} and 45 items rated (90 ratings for 2 annotators): To calculate the expected agreement, sum marginals across annotators and divide by the total number of ratings to obtain joint proportions. Square and total these: To calculate observed agreement, divide the number of items on which annotators agreed by the total number of items. In this case, $\Pr(a) = \frac{1+5+9}{45} = 0.333$. Given that $\Pr(e) = 0.369$, Scott's pi is then $\pi = \frac{0.333 - 0.369}{1 - 0.369} = -0.057$. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
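For two annotators the usual answer is an inter-annotator agreement coefficient such as Cohen's kappa (Scott's pi, computed in the passage above, differs only in how the chance-agreement term is pooled). A sketch on made-up labels — not the exercise's data:

```python
from collections import Counter

def cohens_kappa(a, b):
    # a, b: the two annotators' labels for the same items
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # expected chance agreement from each annotator's own label distribution
    expected = sum(ca[l] / n * cb[l] / n for l in set(a) | set(b))
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.333
```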
Briefly describe the specific objectives of the morphological module in the general perspective of automated Natural Language Processing. | "Explorations in automated language classification". Folia Linguistica. 42 (2): 331–354. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Briefly describe the specific objectives of the morphological module in the general perspective of automated Natural Language Processing. | "Explorations in automated language classification". Folia Linguistica. 42 (2): 331–354. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
If A={a} and B={b}, select all strings that belong to (A ⊗ B)+
A penalty will be applied for any wrong answers selected. | b ) = ( λ a . λ b . a ) ( λ a . | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
If A={a} and B={b}, select all strings that belong to (A ⊗ B)+
A penalty will be applied for any wrong answers selected. | a.*?b would match the first "ab" in "ababab", where a.*b would match the entire string. If the U flag is set, then quantifiers are ungreedy (lazy) by default, while ? makes them greedy. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
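Assuming ⊗ denotes concatenation, (A ⊗ B)+ with A = {a} and B = {b} is the language (ab)+ = {ab, abab, ababab, ...}. A regular-expression check makes this concrete:

```python
import re

pattern = re.compile(r"(?:ab)+")  # (A ⊗ B)+ with A = {a}, B = {b}

for s in ["ab", "abab", "aabb", "ba", ""]:
    # fullmatch: the whole string must be in the language
    print(s, bool(pattern.fullmatch(s)))
```

Only strings that are one or more repetitions of the block "ab" belong; "aabb" and "ba" do not.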
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol):
\(R_{01}: \text{S} \rightarrow \text{NP VP}\)
\(R_{02}: \text{NP} \rightarrow \text{NP0}\)
\(R_{03}: \text{NP} \rightarrow \text{Det NP0}\)
\(R_{04}: \text{NP0} \rightarrow \text{N}\)
\(R_{05}: \text{NP0} \rightarrow \text{Adj N}\)
\(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\)
\(R_{07}: \text{VP} \rightarrow \text{V}\)
\(R_{08}: \text{VP} \rightarrow \text{V NP}\)
\(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\)
\(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\)
complemented by the lexicon \(L\):
a : Det
blue : Adj, N
drink : N, V
drinks : N, V
friends : N
from : Prep
gave : V
letter : N
my : Det
neighbor : N
nice : Adj, N
of : Prep
postman : N
ran : V
the : Det
to : Prep
How many parse trees does the grammar \(G\) associate to the word sequence "the postman ran the letter for the drinks on the friends"? | The grammar \(G = (\{S, A, B, C, D\}, \{a, b, c\}, R, S)\), with productions \(S \rightarrow AB \& DC\), \(A \rightarrow aA \mid \epsilon\), \(B \rightarrow bBc \mid \epsilon\), \(C \rightarrow cC \mid \epsilon\), \(D \rightarrow aDb \mid \epsilon\), is conjunctive. A typical derivation is \(S \Rightarrow (AB\&DC) \Rightarrow (aAB\&DC) \Rightarrow (aB\&DC) \Rightarrow (abBc\&DC) \Rightarrow (abc\&DC) \Rightarrow (abc\&aDbC) \Rightarrow (abc\&abC) \Rightarrow (abc\&abcC) \Rightarrow (abc\&abc) \Rightarrow abc\). It can be shown that \(L(G) = \{a^n b^n c^n : n \geq 0\}\). The language is not context-free, proved by the pumping lemma for context-free languages. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following context-free grammar \(G\) (where \(\text{S}\) is the top-level symbol):
\(R_{01}: \text{S} \rightarrow \text{NP VP}\)
\(R_{02}: \text{NP} \rightarrow \text{NP0}\)
\(R_{03}: \text{NP} \rightarrow \text{Det NP0}\)
\(R_{04}: \text{NP0} \rightarrow \text{N}\)
\(R_{05}: \text{NP0} \rightarrow \text{Adj N}\)
\(R_{06}: \text{NP0} \rightarrow \text{NP0 PNP}\)
\(R_{07}: \text{VP} \rightarrow \text{V}\)
\(R_{08}: \text{VP} \rightarrow \text{V NP}\)
\(R_{09}: \text{VP} \rightarrow \text{V NP PNP}\)
\(R_{10}: \text{PNP} \rightarrow \text{Prep NP}\)
complemented by the lexicon \(L\):
a : Det
blue : Adj, N
drink : N, V
drinks : N, V
friends : N
from : Prep
gave : V
letter : N
my : Det
neighbor : N
nice : Adj, N
of : Prep
postman : N
ran : V
the : Det
to : Prep
How many parse trees does the grammar \(G\) associate to the word sequence "the postman ran the letter for the drinks on the friends"? | This is an example grammar: S → NP VP, VP → VP PP, VP → V NP, VP → eats, PP → P NP, NP → Det N, NP → she, V → eats, P → with, N → fish, N → fork, Det → a. Now the sentence she eats a fish with a fork is analyzed using the CYK algorithm. In the following table, in P, i is the number of the row (starting at the bottom at 1), and j is the number of the column (starting at the left at 1). For readability, the CYK table for P is represented here as a 2-dimensional matrix M containing a set of non-terminal symbols, such that Rk is in M if, and only if, P. In the above example, since a start symbol S is in M, the sentence can be generated by the grammar. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
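The CYK analysis described in the passage can be reproduced with a short recognizer, assuming the grammar is in Chomsky normal form (binary rules plus lexical rules, exactly as listed):

```python
def cyk(words, binary, lexical):
    # table[i][j] holds the nonterminals that derive words[i:j]
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for lhs, term in lexical:
            if term == w:
                table[i][i + 1].add(lhs)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):            # split point
                for lhs, (b, c) in binary:
                    if b in table[i][k] and c in table[k][j]:
                        table[i][j].add(lhs)
    return table

binary = [("S", ("NP", "VP")), ("VP", ("VP", "PP")), ("VP", ("V", "NP")),
          ("PP", ("P", "NP")), ("NP", ("Det", "N"))]
lexical = [("VP", "eats"), ("NP", "she"), ("V", "eats"),
           ("P", "with"), ("N", "fish"), ("N", "fork"), ("Det", "a")]
sent = "she eats a fish with a fork".split()
table = cyk(sent, binary, lexical)
print("S" in table[0][len(sent)])  # True: the sentence is in the language
```

Extending the cells to multisets of backpointers instead of sets turns this recognizer into a parse-tree counter, which is what the exam question asks for.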
Consider the following toy learning corpus of 59 tokens (using a tokenizer that splits on whitespaces and punctuation), out of a possible vocabulary of $N=100$ different tokens:
Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous wave mode.
Using a 2-gram language model, what are the values of the parameters corresponding to "continuous wave" and to "pulsed laser" using Maximum-Likelihood estimates? | The pulsed operation of lasers refers to any laser not classified as a continuous wave so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing many different motivations. Some lasers are pulsed simply because they cannot be run in continuous mode. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Consider the following toy learning corpus of 59 tokens (using a tokenizer that splits on whitespaces and punctuation), out of a possible vocabulary of $N=100$ different tokens:
Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous wave mode.
Using a 2-gram language model, what are the values of the parameters corresponding to "continuous wave" and to "pulsed laser" using Maximum-Likelihood estimates? | Pulsed operation of lasers refers to any laser not classified as continuous wave, so that the optical power appears in pulses of some duration at some repetition rate. This encompasses a wide range of technologies addressing a number of different motivations. Some lasers are pulsed simply because they cannot be run in continuous mode. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
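The two Maximum-Likelihood parameters can be computed directly from the toy corpus. A sketch (lower-casing plus a `\w+` tokenizer approximates "split on whitespaces and punctuation", and sentence boundaries are ignored — both assumptions):

```python
import re
from collections import Counter

corpus = ("Pulsed operation of lasers refers to any laser not classified as "
          "continuous wave, so that the optical power appears in pulses of some "
          "duration at some repetition rate. This encompasses a wide range of "
          "technologies addressing a number of different motivations. Some lasers "
          "are pulsed simply because they cannot be run in continuous wave mode.")

tokens = re.findall(r"\w+", corpus.lower())
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def p_mle(w1, w2):
    # Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(p_mle("continuous", "wave"), p_mle("pulsed", "laser"))  # 1.0 0.0
```

Both occurrences of "continuous" are followed by "wave", so that parameter is 1; "pulsed laser" never occurs, so its MLE is 0.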
Provide a precise definition of concatenative morphology and illustrate your answer with concrete examples in English or French. Is this type of morphology relevant for all languages? More generally, is morphology of the same complexity for all languages? | Nonconcatenative morphology, also called discontinuous morphology and introflection, is a form of word formation and inflection in which the root is modified and which does not involve stringing morphemes together sequentially. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
Provide a precise definition of concatenative morphology and illustrate your answer with concrete examples in English or French. Is this type of morphology relevant for all languages? More generally, is morphology of the same complexity for all languages? | Nonconcatenative morphology is extremely well developed in the Semitic languages in which it forms the basis of virtually all higher-level word formation (as with the example given in the diagram). That is especially pronounced in Arabic, which also uses it to form approximately 41% of plurals in what is often called the broken plural. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
It is often desirable to be able to express the performance of an NLP system in the form of one single number, which is not the case with Precision/Recall curves. Indicate what score can be used to convert a Precision/Recall performance into a unique number. Give the formula for the corresponding evaluation metric, and indicate how it can be weighted. | During the evaluation of WSD systems two main performance measures are used: Precision: the fraction of system assignments made that are correct Recall: the fraction of total word instances correctly assigned by a systemIf a system makes an assignment for every word, then precision and recall are the same, and can be called accuracy. This model has been extended to take into account systems that return a set of senses with weights for each occurrence. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
It is often desirable to be able to express the performance of an NLP system in the form of one single number, which is not the case with Precision/Recall curves. Indicate what score can be used to convert a Precision/Recall performance into a unique number. Give the formula for the corresponding evaluation metric, and indicate how it can be weighted. | Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision $p(r)$ as a function of recall $r$. Average precision computes the average value of $p(r)$ over the interval from $r = 0$ to $r = 1$: $\operatorname{AveP} = \int_0^1 p(r)\,dr$. That is the area under the precision-recall curve. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
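The usual single-number summary is the F-score, whose weighted form is $F_\beta = (1+\beta^2)\,PR / (\beta^2 P + R)$: $\beta > 1$ favors recall, $\beta < 1$ favors precision, and $\beta = 1$ gives the harmonic mean $F_1$. A sketch:

```python
def f_score(precision, recall, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_score(0.5, 1.0))           # 2/3: harmonic mean of 0.5 and 1.0
print(f_score(0.5, 1.0, beta=2))   # higher, since beta=2 weights recall more
```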
You are responsible for a project aiming at providing on-line recommendations to the customers of
an on-line book-selling company.
The general idea behind this recommendation system is to cluster books according to both customer
and content similarities, so as to propose books similar to the books already bought by a given
customer. The core of the recommendation system is a clustering algorithm aiming at regrouping
books likely to be appreciated by the same person. This clustering should not only be achieved
based on the purchase history of customers, but should also be refined by the content of the books
themselves. It is this latter aspect that we want to address in this exam question.
Consider the following six 'documents' (toy example):
d1: 'Because cows are not sorted as they return from the fields to their home pen, cow flows
are improved.'
d2: 'He was convinced that if he owned the fountain pen that he'd seen in the shop window for years, he could write fantastic stories with it. That was the kind of pen you cannot forget.'
d3: 'With this book you will learn how to draw humans, animals (cows, horses, etc.) and flowers with a charcoal pen.'
d4: 'The cows were kept in pens behind the farm, hidden from the road. That was the typical kind of pen made for cows.'
d5: 'If Dracula wrote with a fountain pen, this would be the kind of pen he would write with, filled with blood red ink. It was the pen she chose for my punishment, the pen of my torment. What a mean cow!'
d6: 'What pen for what cow? A red pen for a red cow, a black pen for a black cow, a brown pen for a brown cow, ... Understand?'
and suppose (toy example) that they are indexed only by the two words: pen and cow.
What are their vector representations? | One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into rather than defining what the categories are in advance. Single membership models: these models automatically cluster texts into different categories that are mutually exclusive, and documents are coded into one and only one category. As pointed out by Grimmer and Stewart (16), "each algorithm has three components: (1) a definition of document similarity or distance; (2) an objective function that operationalizes an ideal clustering; and (3) an optimization algorithm." | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
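One way to build the vector representations the question asks for is raw occurrence counts over the two index terms; this sketch assumes count weighting and a naive plural-folding rule (pens→pen, cows→cow), since the exam statement does not fix the weighting scheme:

```python
import re

def count_vector(doc, terms=("pen", "cow")):
    # Index a document by raw occurrence counts of the given terms.
    # Naive plural folding: a term also matches its -s form (pens, cows).
    tokens = re.findall(r"[a-z]+", doc.lower())
    return [sum(1 for t in tokens if t == term or t == term + "s")
            for term in terms]

d1 = ("Because cows are not sorted as they return from the fields "
      "to their home pen, cow flows are improved.")
count_vector(d1)  # -> [1, 2]  (one 'pen', two 'cow'/'cows')
```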
You are responsible for a project aiming at providing on-line recommendations to the customers of
an on-line book-selling company.
The general idea behind this recommendation system is to cluster books according to both customers
and content similarities, so as to propose books similar to the books already bought by a given
customer. The core of the recommendation system is a clustering algorithm aiming at regrouping
books likely to be appreciated by the same person. This clustering should not only be achieved
based on the purchase history of customers, but should also be refined by the content of the books
themselves. It is this latter aspect that we want to address in this exam question.
Consider the following six 'documents' (toy example):
d1: 'Because cows are not sorted as they return from the fields to their home pen, cow flows
are improved.'
d2: 'He was convinced that if he owned the fountain pen that he'd seen in the shop window for years, he could write fantastic stories with it. That was the kind of pen you cannot forget.'
d3: 'With this book you will learn how to draw humans, animals (cows, horses, etc.) and flowers with a charcoal pen.'
d4: 'The cows were kept in pens behind the farm, hidden from the road. That was the typical kind of pen made for cows.'
d5: 'If Dracula wrote with a fountain pen, this would be the kind of pen he would write with, filled with blood red ink. It was the pen she chose for my punishment, the pen of my torment. What a mean cow!'
d6: 'What pen for what cow? A red pen for a red cow, a black pen for a black cow, a brown pen for a brown cow, ... Understand?'
and suppose (toy example) that they are indexed only by the two words: pen and cow.
What are their vector representations? | A number of canonical tasks are associated with statistical relational learning, the most common ones being: collective classification, i.e. the (simultaneous) prediction of the class of several objects given objects' attributes and their relations; link prediction, i.e. predicting whether or not two or more objects are related; link-based clustering, i.e. the grouping of similar objects, where similarity is determined according to the links of an object, and the related task of collaborative filtering, i.e. the filtering for information that is relevant to an entity (where a piece of information is considered relevant to an entity if it is known to be relevant to a similar entity); social network modelling; object identification/entity resolution/record linkage, i.e. the identification of equivalent entries in two or more separate databases/datasets. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Many general evaluation metrics can be considered for various NLP tasks. The simplest one is accuracy. Give several examples of NLP tasks for which accuracy can be used as an evaluation metric. Justify why. In general, what property(ies) must an NLP task satisfy in order to be evaluable through accuracy? | Human ratings: give the generated text to a person, and ask them to rate the quality and usefulness of the text. Metrics: compare generated texts to texts written by people from the same input data, using an automatic metric such as BLEU, METEOR, ROUGE and LEPOR. An ultimate goal is how useful NLG systems are at helping people, which is the first of the above techniques. However, task-based evaluations are time-consuming and expensive, and can be difficult to carry out (especially if they require subjects with specialised expertise, such as doctors). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
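Accuracy only makes sense when each item has exactly one well-defined reference answer; a minimal sketch (the PoS-tagging toy data is made up for illustration):

```python
def accuracy(predictions, gold):
    # Fraction of items where the system output equals the single
    # reference answer; requires one well-defined correct label per
    # item (e.g. PoS tagging, text classification).
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy PoS-tagging run: 3 of 4 tags correct
accuracy(["DET", "NOUN", "VERB", "NOUN"],
         ["DET", "NOUN", "VERB", "ADJ"])  # -> 0.75
```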
The goal of this question is to illustrate how to use transducers to implement a simplified version of the conjugation of English verbs. We will restrict to the conjugated forms corresponding to the indicative mode and the present tense. Formally, this can be modeled as defining a transducer able to recognize associations such as the following table:
make+V+IndPres+1s make
make+V+IndPres+2s make
make+V+IndPres+3s makes
make+V+IndPres+1p make
make+V+IndPres+2p make
make+V+IndPres+3p make
where "V" identifies the grammatical category "Verb", "IndPres" indicates that we are dealing with the Indicative mode and the Present tense, and "1s", "2s", "3s" (resp. "1p", "2p", "3p") refer to the first, second, and third person singular (resp. plural).
In the above table, what do the strings in the first and the second column correspond to? | For verbs, the endings in the present tense are given as -va, -ta, -ta. The table below shows a comparison of the conjugation of the verb delati, which means "to do, to make, to work" and belongs to Class IV in the singular, dual, and plural. In the imperative, the endings are given as -iva for the first-person dual and -ita for the second-person dual. The table below shows the imperative forms for the verb hoditi ("to walk") in the first and second persons of the imperative (the imperative does not exist for first-person singular). | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus |
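The lexical-to-surface mapping in the table can be sketched as a single rule (a drastic simplification of a real transducer, covering only regular verbs like "make", where only the third person singular adds -s):

```python
def conjugate(lexical):
    # Map a lexical form 'stem+V+IndPres+<person>' (as in the table)
    # to its surface form; only 3s triggers the -s ending.
    stem, _cat, _tense, person = lexical.split("+")
    return stem + "s" if person == "3s" else stem

conjugate("make+V+IndPres+3s")  # -> "makes"
conjugate("make+V+IndPres+2p")  # -> "make"
```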
The goal of this question is to illustrate how to use transducers to implement a simplified version of the conjugation of English verbs. We will restrict to the conjugated forms corresponding to the indicative mode and the present tense. Formally, this can be modeled as defining a transducer able to recognize associations such as the following table:
make+V+IndPres+1s make
make+V+IndPres+2s make
make+V+IndPres+3s makes
make+V+IndPres+1p make
make+V+IndPres+2p make
make+V+IndPres+3p make
where "V" identifies the grammatical category "Verb", "IndPres" indicates that we are dealing with the Indicative mode and the Present tense, and "1s", "2s", "3s" (resp. "1p", "2p", "3p") refer to the first, second, and third person singular (resp. plural).
In the above table, what do the strings in the first and the second column correspond to? | The fourth conjugation is characterized by the vowel ī and can be recognized by the -īre ending of the present active infinitive. Deponent verbs have the infinitive -īrī. Other forms: Infinitive: audīre "to hear"; Passive infinitive: audīrī "to be heard"; Imperative: audī! (pl. audīte!) | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider using a parser with the following (partial) grammar:
S -> NP VP
VP -> V
NP -> Det N
VP -> VP PP
NP -> N
VP -> VBP VBG PP
NP -> NP PP
PP -> P NP
and (also partial) lexicon:
2012 N
from P
Switzerland N
in P
USA N
increasing VBG
are VBP
the Det
exports N
to P
exports V
Using the CYK algorithm, parse the following sentence with the above lexicon/grammar:
the exports from the USA to Switzerland are increasing in 2012
Provide both the complete, fully filled, data structure used by the algorithm, as well as the result of
the parsing in the form of a/the parse tree(s). | This is an example grammar: S → NP VP, VP → VP PP, VP → V NP, VP → eats, PP → P NP, NP → Det N, NP → she, V → eats, P → with, N → fish, N → fork, Det → a. Now the sentence she eats a fish with a fork is analyzed using the CYK algorithm. In the following table, in P[i,j,k], i is the number of the row (starting at the bottom at 1), and j is the number of the column (starting at the left at 1). For readability, the CYK table for P is represented here as a 2-dimensional matrix M containing a set of non-terminal symbols, such that Rk is in M[i,j] if, and only if, P[i,j,k]. In the above example, since the start symbol S is in M[7,1], the sentence can be generated by the grammar. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
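The table-filling described in the answer can be sketched as a CYK recognizer over the example CNF grammar (a recognizer only; a full parser would additionally store back-pointers to recover the parse tree):

```python
def cyk_recognize(tokens, lexicon, rules, start="S"):
    # lexicon: word -> set of non-terminals (unary rules A -> word)
    # rules:   (B, C) -> set of A, one entry per binary rule A -> B C
    n = len(tokens)
    # table[i][span] = non-terminals deriving tokens[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, word in enumerate(tokens):
        table[i][1] = set(lexicon.get(word, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):  # split point inside the span
                for B in table[i][k]:
                    for C in table[i + k][span - k]:
                        table[i][span] |= rules.get((B, C), set())
    return start in table[0][n]

# The grammar from the answer ("she eats a fish with a fork"):
lexicon = {"she": {"NP"}, "eats": {"V", "VP"}, "a": {"Det"},
           "fish": {"N"}, "fork": {"N"}, "with": {"P"}}
rules = {("NP", "VP"): {"S"}, ("VP", "PP"): {"VP"},
         ("V", "NP"): {"VP"}, ("P", "NP"): {"PP"},
         ("Det", "N"): {"NP"}}
cyk_recognize("she eats a fish with a fork".split(), lexicon, rules)  # -> True
```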
Consider using a parser with the following (partial) grammar:
S -> NP VP
VP -> V
NP -> Det N
VP -> VP PP
NP -> N
VP -> VBP VBG PP
NP -> NP PP
PP -> P NP
and (also partial) lexicon:
2012 N
from P
Switzerland N
in P
USA N
increasing VBG
are VBP
the Det
exports N
to P
exports V
Using the CYK algorithm, parse the following sentence with the above lexicon/grammar:
the exports from the USA to Switzerland are increasing in 2012
Provide both the complete, fully filled, data structure used by the algorithm, as well as the result of
the parsing in the form of a/the parse tree(s). | Rules representing regular German sentence structure. And finally, we need rules according to which one can relate these two structures together. Accordingly, we can state the following stages of translation: 1st: getting basic part-of-speech information of each source word: a = indef. article; girl = noun; eats = verb; an = indef. article; apple = noun. 2nd: getting syntactic information about the verb "to eat": NP-eat-NP; here: eat – Present Simple, 3rd Person Singular, Active Voice. 3rd: parsing the source sentence: (NP an apple) = the object of eat. Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence. | https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus