From a corpus of \( N \) occurrences of \( m \) different tokens: how many different 4-grams (values) could you possibly have?
Using a modification of byte-pair encoding, in the first step all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. Adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even longer n-grams, repeatedly, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257). The token vocabulary consists of integers, spanning from zero up to (but not including) the size of the token vocabulary.
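The merge loop described above can be sketched in Python (an illustrative toy, not GPT-3's actual tokenizer; the corpus string and helper names are ours):

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count every adjacent pair in the current token sequence.
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    # Replace each occurrence of the pair with a single merged token.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("aaabdaaabac")
pair = most_frequent_pair(tokens)  # ('a', 'a')
tokens = merge_pair(tokens, pair)  # ['aa', 'a', 'b', 'd', 'aa', 'a', 'b', 'a', 'c']
```

Repeating these two steps until the set of distinct tokens reaches the prescribed size yields the vocabulary.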
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
From a corpus of \( N \) occurrences of \( m \) different tokens: how many different 4-grams (values) could you possibly have?
This is repeated until a vocabulary of prescribed size is obtained. Note that new words can always be constructed from final vocabulary tokens and initial-set characters. All the unique tokens found in a corpus are listed in a token vocabulary, the size of which, in the case of GPT-3, is 50257. The difference between the modified and the original algorithm is that the original algorithm does not merge the most frequent pair of bytes of data, but replaces it with a new byte that was not contained in the initial dataset. A lookup table of the replacements is required to rebuild the initial dataset. The algorithm is effective for tokenization because it does not require large computational overheads and remains consistent and reliable.
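The original, compression-oriented variant can be sketched as follows (a toy illustration; the string "Z" stands in for the new unused byte, and the helper names are ours):

```python
def compress_step(data, new_symbol):
    # Replace the most frequent adjacent pair by new_symbol and
    # record the replacement in a lookup table.
    pairs = {}
    for a, b in zip(data, data[1:]):
        pairs[(a, b)] = pairs.get((a, b), 0) + 1
    best = max(pairs, key=pairs.get)
    out, i = [], 0
    while i < len(data):
        if i + 1 < len(data) and (data[i], data[i + 1]) == best:
            out.append(new_symbol)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return out, {new_symbol: best}

def expand(data, table):
    # Rebuild the initial data from the lookup table.
    out = []
    for s in data:
        if s in table:
            out.extend(expand(list(table[s]), table))
        else:
            out.append(s)
    return out

data = list("aaabdaaabac")
compressed, table = compress_step(data, "Z")
assert expand(compressed, table) == data  # the table makes the step reversible
```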
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! Would you use a causal language modeling or masked language modeling training objective to train your model? Why?
The original paper on generative pre-training of a transformer-based language model was written by Alec Radford and his colleagues, and published as a preprint on OpenAI's website on June 11, 2018. It showed how a generative model of language is able to acquire world knowledge and process long-range dependencies by pre-training on a diverse corpus with long stretches of contiguous text.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have been provided with the following definitions for the possible meanings of the words "balloon" and "plane": balloon: - meaning 1: balloon --(hyponym)--> inflatable - meaning 2: balloon --(hyponym)--> transport plane: - meaning 1: plane --(hyponym)--> transport plane --(holonym)--> wing - meaning 2: plane --(hyponym)--> surface What type of approach has been used to produce this type of semantic representations? What principle does it rely on?
A semantic difficulty may arise when considering reference in representationalism. If a person says "I see the Eiffel Tower" at a time when they are indeed looking at the Eiffel Tower, to what does the term "Eiffel Tower" refer? The direct realist might say that in the representational account people do not really see the tower but rather 'see' the representation.
In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.) If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Use the structural result proved in the first subproblem to devise an efficient rounding algorithm that, given an instance and a feasible extreme point $x^*$ in the linear programming relaxation corresponding to the instance, returns a schedule that completes all jobs by deadline $T + \max_{j\in J} p_j$. In other words, you wish to assign jobs to machines so that the total processing time of the jobs a machine receives is at most $T + \max_{j\in J} p_j$.
A natural way to formulate the problem as a linear program is called the Lenstra–Shmoys–Tardos linear program (LST LP). For each machine i and job j, define a variable \( z_{i,j} \), which equals 1 iff machine i processes job j, and 0 otherwise. Then, the LP constraints are: \( \sum_{i=1}^{m} z_{i,j} = 1 \) for every job j in 1,...,n; \( \sum_{i=1}^{m} z_{i,j} \cdot p_{i,j} \leq T \) for every machine i in 1,...,m; \( z_{i,j} \in \{0,1\} \) for every i, j. Relaxing the integer constraints gives a linear program with size polynomial in the input. The solution of the relaxed problem can be rounded to obtain a 2-approximation to the problem.
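As a sanity check, the extreme point from the example in the problem statement (\(x^*_{11}=1, x^*_{22}=1, x^*_{13}=1/2, x^*_{23}=1/2\), \(T=1\)) can be verified numerically; the variable names below are ours:

```python
p = {1: 0.5, 2: 0.5, 3: 1.0}     # processing times of j1, j2, j3
N = {1: [1], 2: [2], 3: [1, 2]}  # machines allowed for each job
x = {(1, 1): 1.0, (2, 2): 1.0, (1, 3): 0.5, (2, 3): 0.5}
T = 1.0

# Each job is fully (fractionally) assigned across its allowed machines.
for j in N:
    assert abs(sum(x.get((i, j), 0.0) for i in N[j]) - 1.0) < 1e-9

# Each machine's fractional load stays within the deadline T.
for i in [1, 2]:
    load = sum(x.get((i, j), 0.0) * p[j] for j in N if i in N[j])
    assert load <= T + 1e-9
```

Both machines carry a fractional load of exactly 1, so the point is feasible for the relaxation even though job \(j_3\) is split.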
In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.) If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Use the structural result proved in the first subproblem to devise an efficient rounding algorithm that, given an instance and a feasible extreme point $x^*$ in the linear programming relaxation corresponding to the instance, returns a schedule that completes all jobs by deadline $T + \max_{j\in J} p_j$. In other words, you wish to assign jobs to machines so that the total processing time of the jobs a machine receives is at most $T + \max_{j\in J} p_j$.
Using the technique of Linear programming relaxation, it is possible to approximate the optimal scheduling with slightly better approximation factors. The approximation ratio of the first such algorithm is asymptotically 2 when k is large, but when k=2 the algorithm achieves an approximation ratio of 5/3. The approximation factor for arbitrary k was later improved to 1.582.
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int Assuming that elements of multisets are only lowercase letters of the English alphabet, what does the secret function compute? def diff(a: Multiset, b: Multiset): Multiset = x => Math.abs(a(x) - b(x)) def secret(a: Multiset, b: Multiset) = ('a' to 'z').map(x => diff(a, b)(x)).sum == 0
A multiset may be formally defined as an ordered pair (A, m) where A is the underlying set of the multiset, formed from its distinct elements, and \( m\colon A\to \mathbb{Z}^{+} \) is a function from A to the set of positive integers, giving the multiplicity – that is, the number of occurrences – of the element a in the multiset as the number m(a). (It is also possible to allow multiplicity 0 or \( \infty \), especially when considering submultisets. This article is restricted to finite, positive multiplicities.)
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int Assuming that elements of multisets are only lowercase letters of the English alphabet, what does the secret function compute? def diff(a: Multiset, b: Multiset): Multiset = x => Math.abs(a(x) - b(x)) def secret(a: Multiset, b: Multiset) = ('a' to 'z').map(x => diff(a, b)(x)).sum == 0
The set of all bags over type T is given by the expression bag T. If by multiset one considers equal items identical and simply counts them, then a multiset can be interpreted as a function from the input domain to the non-negative integers (natural numbers), generalizing the identification of a set with its indicator function. In some cases a multiset in this counting sense may be generalized to allow negative values, as in Python. C++'s Standard Template Library implements both sorted and unsorted multisets.
Given a matroid $\mathcal{M}= (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} for matroids returns a base $S = \{s_1, s_2, \dots, s_k\}$ of maximum weight. As noted in the lecture notes, any base consists of the same number, say $k$, of elements (which is said to be the rank of the matroid). We further assume that the elements of $S$ are indexed so that $w(s_1) \geq w(s_2) \geq \dots \geq w(s_k)$. Let $S_\ell = \{s_1, \dots, s_\ell\}$ be the subset of $S$ consisting of the first $\ell$ elements, for $\ell = 1,\dots, k$. Then prove that \begin{align*} w(S_\ell) = \max_{T\in \mathcal{I}: |T| = \ell} w(T) \mbox{ for all $\ell =1, \dots, k$.} \end{align*} In other words, \textsc{Greedy} not only returns a base of maximum weight but its ``prefixes'' are maximum-weight sets of the respective cardinalities.
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a function with respect to which one can perform a greedy algorithm. A weight function \( w\colon E\rightarrow \mathbb{R}^{+} \) for a matroid \( M=(E,I) \) assigns a strictly positive weight to each element of \( E \). We extend the function to subsets of \( E \) by summation; \( w(A) \) is the sum of \( w(x) \) over \( x \) in \( A \). A matroid with an associated weight function is called a weighted matroid.
Given a matroid $\mathcal{M}= (E, \mathcal{I})$ and a weight function $w: E \rightarrow \mathbb{R}$,~\textsc{Greedy} for matroids returns a base $S = \{s_1, s_2, \dots, s_k\}$ of maximum weight. As noted in the lecture notes, any base consists of the same number, say $k$, of elements (which is said to be the rank of the matroid). We further assume that the elements of $S$ are indexed so that $w(s_1) \geq w(s_2) \geq \dots \geq w(s_k)$. Let $S_\ell = \{s_1, \dots, s_\ell\}$ be the subset of $S$ consisting of the first $\ell$ elements, for $\ell = 1,\dots, k$. Then prove that \begin{align*} w(S_\ell) = \max_{T\in \mathcal{I}: |T| = \ell} w(T) \mbox{ for all $\ell =1, \dots, k$.} \end{align*} In other words, \textsc{Greedy} not only returns a base of maximum weight but its ``prefixes'' are maximum-weight sets of the respective cardinalities.
A weighted matroid is a matroid together with a function from its elements to the nonnegative real numbers. The weight of a subset of elements is defined to be the sum of the weights of the elements in the subset. The greedy algorithm can be used to find a maximum-weight basis of the matroid, by starting from the empty set and repeatedly adding one element at a time, at each step choosing a maximum-weight element among the elements whose addition would preserve the independence of the augmented set. This algorithm does not need to know anything about the details of the matroid's definition, as long as it has access to the matroid through an independence oracle, a subroutine for testing whether a set is independent. This optimization algorithm may be used to characterize matroids: if a family F of sets, closed under taking subsets, has the property that, no matter how the sets are weighted, the greedy algorithm finds a maximum-weight set in the family, then F must be the family of independent sets of a matroid. The notion of matroid has been generalized to allow for other types of sets on which a greedy algorithm gives optimal solutions; see greedoid and matroid embedding for more information.
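The oracle-based greedy described above can be sketched as follows (a toy uniform-matroid oracle of rank 2 stands in for a general independence oracle; names are ours):

```python
def greedy_max_weight(elements, w, independent):
    # Greedy for matroids: scan elements in non-increasing weight order,
    # keeping an element iff adding it preserves independence.
    S = []
    for e in sorted(elements, key=w, reverse=True):
        if independent(S + [e]):
            S.append(e)
    return S

# Toy instance: uniform matroid of rank 2 (any set of size <= 2 is independent).
elements = ["a", "b", "c", "d"]
weights = {"a": 3, "b": 5, "c": 4, "d": 1}
S = greedy_max_weight(elements, weights.get, lambda T: len(T) <= 2)
# picks the two heaviest elements: ['b', 'c']
```

Note the algorithm only calls the oracle, never inspecting the matroid's definition, exactly as the passage states.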
Consider the Maximum Disjoint Paths problem: given an undirected graph $G=(V,E)$ with designated source $s\in V$ and sink $t\in V\setminus \{s\}$ vertices, find the maximum number of edge-disjoint paths from $s$ to $t$. To formulate it as a linear program, we have a variable $x_p$ for each possible path $p$ that starts at the source $s$ and ends at the sink $t$. The intuitive meaning of $x_p$ is that it should take value $1$ if the path $p$ is used and $0$ otherwise\footnote{I know that the number of variables may be exponential, but let us not worry about that.}. Let $P$ be the set of all such paths from $s$ to $t$. The linear programming relaxation of this problem now becomes \begin{align*} \mbox{Maximize} &\qquad \sum_{p\in P} x_p \\ \mbox{subject to} & \quad \ \ \sum_{p\in P: e\in p} x_p \leq 1, \qquad \ \forall e\in E,\\ &\ \ \quad \qquad x_p \geq 0, \qquad \qquad \ \forall p \in P. \end{align*} What is the dual of this linear program? What famous combinatorial problem do binary solutions to the dual solve?
Given a directed graph \( G=(V,E) \) and two vertices \( s \) and \( t \), we are to find the maximum number of paths from \( s \) to \( t \). This problem has several variants: 1. The paths must be edge-disjoint. This problem can be transformed to a maximum flow problem by constructing a network \( N=(V,E) \) from \( G \), with \( s \) and \( t \) being the source and the sink of \( N \) respectively, and assigning each edge a capacity of \( 1 \).
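The reduction to maximum flow can be sketched with unit capacities and BFS augmentation (a minimal Edmonds–Karp-style toy for the directed case described in the passage; function and variable names are ours):

```python
from collections import deque

def max_edge_disjoint_paths(edges, s, t):
    # Reduce to max flow: give every directed edge capacity 1, then
    # repeatedly augment along BFS paths in the residual graph.
    cap = {}
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)  # residual (reverse) arc
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            return flow
        v = t  # all capacities are 1, so each augmentation adds one path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Two edge-disjoint 0->3 paths exist: 0-1-3 and 0-2-3.
paths = max_edge_disjoint_paths([(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)], 0, 3)
```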
Consider the Maximum Disjoint Paths problem: given an undirected graph $G=(V,E)$ with designated source $s\in V$ and sink $t\in V\setminus \{s\}$ vertices, find the maximum number of edge-disjoint paths from $s$ to $t$. To formulate it as a linear program, we have a variable $x_p$ for each possible path $p$ that starts at the source $s$ and ends at the sink $t$. The intuitive meaning of $x_p$ is that it should take value $1$ if the path $p$ is used and $0$ otherwise\footnote{I know that the number of variables may be exponential, but let us not worry about that.}. Let $P$ be the set of all such paths from $s$ to $t$. The linear programming relaxation of this problem now becomes \begin{align*} \mbox{Maximize} &\qquad \sum_{p\in P} x_p \\ \mbox{subject to} & \quad \ \ \sum_{p\in P: e\in p} x_p \leq 1, \qquad \ \forall e\in E,\\ &\ \ \quad \qquad x_p \geq 0, \qquad \qquad \ \forall p \in P. \end{align*} What is the dual of this linear program? What famous combinatorial problem do binary solutions to the dual solve?
In the undirected edge-disjoint paths problem, we are given an undirected graph G = (V, E) and two vertices s and t, and we have to find the maximum number of edge-disjoint s-t paths in G. Menger's theorem states that the maximum number of edge-disjoint s-t paths in an undirected graph is equal to the minimum number of edges in an s-t cut-set.
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Design and analyze a \emph{deterministic} $2$-approximation algorithm for the submodular vertex cover problem: \begin{description} \item[Input:] An undirected graph $G = (V,E)$ and a non-negative submodular function $f: 2^V \rightarrow \mathbb{R}_+$ on the vertex subsets. \item[Output:] A vertex cover $S\subseteq V$ that minimizes $f(S)$. \end{description} We remark that the classic vertex cover problem is the special case when $f$ is the linear function $f(S) = \sum_{i\in S} w(i)$ for some non-negative vertex weights $w$. A randomized 2-approximation algorithm will be given partial credits and to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{0.86\textwidth} \textbf{Fact}. Let $V = \{1,2, \ldots, n\}$ and let $\hat f: [0,1]^n \rightarrow \mathbb{R}_+$ denote the Lov\'{a}sz extension of $f$. There is a deterministic polynomial-time algorithm that minimizes $\hat f(x)$ subject to $x_i + x_j \geq 1$ for all $\{i,j\} \in E$ and $x_i \in [0,1]$ for all $i\in V$. \end{boxedminipage} \end{center} {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in polynomial-time, and (iii) prove that the value of the found solution is at most twice the value of an optimal solution. You are allowed to use the above fact without any proof. For full score your algorithm should be deterministic but randomized solutions will be given partial credits. Recall that you are allowed to refer to material covered in the lecture notes.)}
Assume that every vertex has an associated cost of \( c(v)\geq 0 \). The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP). This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is \( 2 \), so its relaxation (allowing each variable to be in the interval from 0 to 1, rather than requiring the variables to be only 0 or 1) gives a factor-\( 2 \) approximation algorithm for the minimum vertex cover problem. Furthermore, the linear programming relaxation of that ILP is half-integral; that is, there exists an optimal solution for which each entry \( x_{v} \) is either 0, 1/2, or 1. A 2-approximate vertex cover can be obtained from this fractional solution by selecting the subset of vertices whose variables are nonzero.
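One common form of the rounding above keeps every vertex with \( x_v \geq 1/2 \) (on a half-integral solution this coincides with "nonzero"). A sketch on a toy triangle instance, where the optimal LP solution assigns 1/2 everywhere (names are ours):

```python
def round_half_integral(x):
    # Keep every vertex whose fractional value is at least 1/2; this at
    # most doubles the LP cost, giving the factor-2 guarantee.
    return {v for v, val in x.items() if val >= 0.5}

def is_vertex_cover(S, edges):
    return all(u in S or v in S for u, v in edges)

edges = [(1, 2), (2, 3), (1, 3)]
x = {1: 0.5, 2: 0.5, 3: 0.5}   # optimal fractional solution, LP value 1.5
S = round_half_integral(x)     # all three vertices, cost 3 = 2 * 1.5
assert is_vertex_cover(S, edges)
```

The triangle also illustrates why the integrality gap is (asymptotically) 2: the LP value is 1.5 while every integral cover needs 2 vertices.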
In this problem, we give a $2$-approximation algorithm for the submodular vertex cover problem which is a generalization of the classic vertex cover problem seen in class. We first, in subproblem~\textbf{(a)}, give a new rounding for the classic vertex cover problem and then give the algorithm for the more general problem in subproblem~\textbf{(b)}. Design and analyze a \emph{deterministic} $2$-approximation algorithm for the submodular vertex cover problem: \begin{description} \item[Input:] An undirected graph $G = (V,E)$ and a non-negative submodular function $f: 2^V \rightarrow \mathbb{R}_+$ on the vertex subsets. \item[Output:] A vertex cover $S\subseteq V$ that minimizes $f(S)$. \end{description} We remark that the classic vertex cover problem is the special case when $f$ is the linear function $f(S) = \sum_{i\in S} w(i)$ for some non-negative vertex weights $w$. A randomized 2-approximation algorithm will be given partial credits and to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{0.86\textwidth} \textbf{Fact}. Let $V = \{1,2, \ldots, n\}$ and let $\hat f: [0,1]^n \rightarrow \mathbb{R}_+$ denote the Lov\'{a}sz extension of $f$. There is a deterministic polynomial-time algorithm that minimizes $\hat f(x)$ subject to $x_i + x_j \geq 1$ for all $\{i,j\} \in E$ and $x_i \in [0,1]$ for all $i\in V$. \end{boxedminipage} \end{center} {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in polynomial-time, and (iii) prove that the value of the found solution is at most twice the value of an optimal solution. You are allowed to use the above fact without any proof. For full score your algorithm should be deterministic but randomized solutions will be given partial credits. Recall that you are allowed to refer to material covered in the lecture notes.)}
The vertex cover problem involves finding a set of vertices that touches every edge of the graph. It is NP-hard but can be approximated to within an approximation ratio of two, for instance by taking the endpoints of the matched edges in any maximal matching. Evidence that this is the best possible approximation ratio of a polynomial-time approximation algorithm is provided by the fact that, when represented as a semidefinite program, the problem has an integrality gap of two; this gap is the ratio between the solution value of the integer solution (a valid vertex cover) and of its semidefinite relaxation. According to the unique games conjecture, for many problems such as this the optimal approximation ratio is provided by the integrality gap of their semidefinite relaxation.
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int The intersection of two multisets a and b contains all elements that are in both a and b. For example, the intersection of a multiset {'b', 'b', 'b', 'c'} and a multiset {'b', 'a', 'b'} is a multiset {'b', 'b'}. What should replace ??? so that the intersection function is correct? def intersection(a: Multiset, b: Multiset): Multiset = ???
A multiset may be formally defined as an ordered pair (A, m) where A is the underlying set of the multiset, formed from its distinct elements, and \( m\colon A\to \mathbb{Z}^{+} \) is a function from A to the set of positive integers, giving the multiplicity – that is, the number of occurrences – of the element a in the multiset as the number m(a). (It is also possible to allow multiplicity 0 or \( \infty \), especially when considering submultisets. This article is restricted to finite, positive multiplicities.)
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int The intersection of two multisets a and b contains all elements that are in both a and b. For example, the intersection of a multiset {'b', 'b', 'b', 'c'} and a multiset {'b', 'a', 'b'} is a multiset {'b', 'b'}. What should replace ??? so that the intersection function is correct? def intersection(a: Multiset, b: Multiset): Multiset = ???
The multiset construction, denoted \( \mathcal{A}=\mathfrak{M}\{\mathcal{B}\} \), is a generalization of the set construction. In the set construction, each element can occur zero or one times. In a multiset, each element can appear an arbitrary number of times.
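A Python analogue of the function-based representation from the question; taking the minimum of the two multiplicities is one natural candidate for the intersection (the Scala Char => Int type is mirrored by plain functions here):

```python
def multiset(s):
    # Represent a multiset as a function from element to its count.
    return lambda x: s.count(x)

def intersection(a, b):
    # An element appears min(a(x), b(x)) times in the intersection.
    return lambda x: min(a(x), b(x))

a = multiset("bbbc")           # {'b','b','b','c'}
b = multiset("bab")            # {'b','a','b'}
inter = intersection(a, b)     # {'b','b'}: 'b' appears min(3, 2) = 2 times
```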
Given the following function sums: def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match case Nil => List((c, 1)) case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) def sums(digits: List[Int]): List[(Int, Int)] = digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? def mystery1(digits: List[Int]): List[Int] = sums(digits).filter(_._2 == 1).map(_._1)
At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand. Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation. Also sum the digits of the result originally obtained for the original calculation. If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be. Example: Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702. Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11. Iterate as needed: 1 + 1 = 2. Sum the digits of 79: 7 + (9 counted as 0) = 7. Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5. Sum the digits of 500,702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5. Since 5 = 5, there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right. The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation.
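The worked example can be replayed with a small digital-root helper (a sketch; digital_root condenses by repeated digit sums, returning 9 where the text's rule counts 0, which agrees modulo 9 and is all the comparison needs):

```python
def digital_root(n):
    # Repeatedly sum the digits until a single digit remains.
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Replay the worked example: check 6,338 * 79 =? 500,702.
lhs = digital_root(digital_root(6338) * digital_root(79))  # dr(2 * 7) = dr(14) = 5
rhs = digital_root(500702)                                 # 5
# The check passes; a match is necessary, not sufficient, for correctness.
assert lhs == rhs == 5
```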
Given the following function sums: def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match case Nil => List((c, 1)) case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs) def sums(digits: List[Int]): List[(Int, Int)] = digits.foldRight(List[(Int, Int)]())(add) Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits? def mystery1(digits: List[Int]): List[Int] = sums(digits).filter(_._2 == 1).map(_._1)
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result: Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0. If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit. Repeat steps one and two with the second operand.
Implement a function that takes a list ls as argument, and returns a list of all the suffixes of ls. That is, given a list List(a,b,c,...) it returns List(List(a,b,c,...), List(b,c,...), List(c,...), List(...), ..., List()). Implement the function recursively using only Nil (empty), :: (cons) and pattern matching. def tails(ls: List[Int]): List[List[Int]] = ???
As the syntax supports alternative patterns in function definitions, we can continue the definition, extending it to take more generic arguments:

f 0 = 1
f n = n * f (n - 1)

Here, the first n is a single variable pattern, which will match absolutely any argument and bind it to name n to be used in the rest of the definition. In Haskell (unlike at least Hope), patterns are tried in order, so the first definition still applies in the very specific case of the input being 0, while for any other argument the function returns n * f (n-1) with n being the argument. The wildcard pattern (often written as _) is also simple: like a variable name, it matches any value, but does not bind the value to any name. Algorithms for matching wildcards in simple string-matching situations have been developed in a number of recursive and non-recursive varieties.
Implement a function that takes a list ls as argument, and returns a list of all the suffixes of ls. That is, given a list List(a,b,c,...) it returns List(List(a,b,c,...), List(b,c,...), List(c,...), List(...), ..., List()). Implement the function recursively using only Nil (empty), :: (cons) and pattern matching. def tails(ls: List[Int]): List[List[Int]] = ???
This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting, potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is the foldl' (note the apostrophe, pronounced 'prime') function in the Data.List library (though one should be aware that forcing a value built with a lazy data constructor won't automatically force its constituents). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant-space operation when lazy evaluation of the final result is impossible or undesirable.
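For comparison (my analogy, not part of the original text), Python's functools.reduce is an eager left fold, so it behaves like Haskell's foldl': each partial accumulator is fully evaluated before the next element is consumed, and no tower of deferred computations builds up.

```python
from functools import reduce

# A strict left fold: the accumulator is evaluated at every step.
total = reduce(lambda acc, x: acc + x, range(1, 101), 0)  # → 5050
```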
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 2x_1 - x_2 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_2 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{0.85cm} x_2 + s_3 = 2 \\ \hspace{0.8cm} &\hspace{-0.8cm} x_1,\: x_2, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_2 - s_1 \\ s_2 &= 3 -x_2 + s_1 \\ s_3 &= 2 -x_2 \\ \cline{1-2} z &= 2 + x_2 - 2s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.
Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations each of which give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.
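Applied to the tableau in the question above, one pivot proceeds as follows (my worked sketch): the only nonbasic variable with a positive coefficient in the $z$-row is $x_2$, and the ratio test makes $s_3$ the leaving variable (the $s_2$-row bounds $x_2$ by $3$, the $s_3$-row by $2$, and the $x_1$-row gives no bound). Substituting $x_2 = 2 - s_3$ into every row yields:

```latex
\begin{align*}
x_1 &= 3 - s_1 - s_3 \\
s_2 &= 1 + s_1 + s_3 \\
x_2 &= 2 - s_3 \\
\cline{1-2}
z &= 4 - 2s_1 - s_3
\end{align*}
```

All $z$-row coefficients are now nonpositive, so this tableau is in fact optimal, with $z = 4$ at $x_1 = 3$, $x_2 = 2$.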
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & 2x_1 - x_2 \\ \textbf{subject to}\hspace{0.8cm} & x_1 - x_2 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{0.85cm}x_1 + s_2 = 4 \\ \hspace{0.8cm} & \hspace{0.85cm} x_2 + s_3 = 2 \\ \hspace{0.8cm} &\hspace{-0.8cm} x_1,\: x_2, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + x_2 - s_1 \\ s_2 &= 3 -x_2 + s_1 \\ s_3 &= 2 -x_2 \\ \cline{1-2} z &= 2 + x_2 - 2s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.
Using the simplex method to solve a linear program produces a set of equations of the form $x_{i}+\sum_{j}\bar{a}_{i,j}x_{j}=\bar{b}_{i}$, where $x_i$ is a basic variable and the $x_j$'s are the nonbasic variables (i.e., the basic solution, which is an optimal solution to the relaxed linear program, is $x_{i}=\bar{b}_{i}$ and $x_{j}=0$). We write the coefficients $\bar{b}_{i}$ and $\bar{a}_{i,j}$ with a bar to denote the last tableau produced by the simplex method. These coefficients are different from the coefficients in the matrix $A$ and the vector $b$.
Does the following code compile?

val x = 12
def foo(x: List[Int]): Int = x match
  case Nil => 0
  case x :: xs => x
In Scala, the bottom type is denoted as Nothing. Besides its use for functions that just throw exceptions or otherwise don't return normally, it's also used for covariant parameterized types. For example, Scala's List is a covariant type constructor, so List[Nothing] is a subtype of List[A] for all types A. So Scala's Nil, the object for marking the end of a list of any type, belongs to the type List[Nothing].
Does the following code compile?

val x = 12
def foo(x: List[Int]): Int = x match
  case Nil => 0
  case x :: xs => x
In Scala, functions are objects, and a convenient syntax exists for specifying anonymous functions. An example is the expression x => x < 2, which specifies a function with one parameter, that compares its argument to see if it is less than 2. It is equivalent to the Lisp form (lambda (x) (< x 2)). Note that neither the type of x nor the return type need be explicitly specified, and can generally be inferred by type inference; but they can be explicitly specified, e.g. as (x: Int) => x < 2 or even (x: Int) => (x < 2): Boolean.
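As a cross-language aside (my comparison, not in the original text), the same anonymous function can be written in Python:

```python
# Python analogue of Scala's `x => x < 2` / Lisp's (lambda (x) (< x 2))
less_than_two = lambda x: x < 2

less_than_two(1)  # → True
less_than_two(3)  # → False
```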
Chef Baker Buttersweet just took over his family business - baking tasty cakes! He notices that he has $m$ different ingredients in various quantities. In particular, he has $b_i \geq 0$ kilograms of ingredient $i$ for $i = 1, \dots, m$. His family cookbook has recipes for $n$ types of mouthwatering cakes. A kilogram of cake of type $j$ is worth $c_j$ CHF. For each recipe $j$, the cookbook says how many kilograms of each of the ingredients are needed to make one kilogram of cake of type $j$. One kilogram of cake of type $j$, for $j=1, \dots, n$, needs precisely $a_{ij}$ kilograms of ingredient $i$ for all $i=1,\dots,m$. Chef wants to make $x_j \leq 1$ kilograms of cake of type $j$. Having studied linear programming, he knows that the maximum revenue he can get is given by the following linear program, where $A \in \mathbb{R}_{+}^{m\times n} \mbox{ , } b \in \mathbb{R}_+^m \mbox{ and } c\in \mathbb{R}^n_+$. \begin{align*} \textbf{Maximize} \hspace{0.8cm} & \sum_{j=1}^n c_j x_j\\ \textbf{subject to}\hspace{0.8cm} & Ax \leq b \\ \hspace{0.8cm} & 1 \geq x_j \geq 0 \ \ \ \forall j. \end{align*} Chef realizes that he can use the Hedge algorithm to solve this linear program (approximately), but he is struggling with how to set the costs $m^{(t)}_{i}$ at each iteration. Explain how to set these costs properly. {\em (In this problem you are asked to define the costs $m^{(t)}_i$. You do \textbf{not} need to explain how to solve the reduced linear program that has a single constraint. Recall that you are allowed to refer to material covered in the lecture notes.)}
The following algorithms can be used to find an envy-free cake-cutting with maximum sum of utilities, for a cake which is a 1-dimensional interval, when each person may receive disconnected pieces and the value functions are additive: For $n$ partners with piecewise-constant valuations: divide the cake into $m$ totally-constant regions. Solve a linear program with $nm$ variables: each (agent, region) pair has a variable that determines the fraction of the region given to the agent. For each region, there is a constraint saying that the sum of all fractions from this region is 1; for each (agent, agent) pair, there is a constraint saying that the first agent does not envy the second one. Note that the allocation produced by this procedure might be highly fractioned.
Chef Baker Buttersweet just took over his family business - baking tasty cakes! He notices that he has $m$ different ingredients in various quantities. In particular, he has $b_i \geq 0$ kilograms of ingredient $i$ for $i = 1, \dots, m$. His family cookbook has recipes for $n$ types of mouthwatering cakes. A kilogram of cake of type $j$ is worth $c_j$ CHF. For each recipe $j$, the cookbook says how many kilograms of each of the ingredients are needed to make one kilogram of cake of type $j$. One kilogram of cake of type $j$, for $j=1, \dots, n$, needs precisely $a_{ij}$ kilograms of ingredient $i$ for all $i=1,\dots,m$. Chef wants to make $x_j \leq 1$ kilograms of cake of type $j$. Having studied linear programming, he knows that the maximum revenue he can get is given by the following linear program, where $A \in \mathbb{R}_{+}^{m\times n} \mbox{ , } b \in \mathbb{R}_+^m \mbox{ and } c\in \mathbb{R}^n_+$. \begin{align*} \textbf{Maximize} \hspace{0.8cm} & \sum_{j=1}^n c_j x_j\\ \textbf{subject to}\hspace{0.8cm} & Ax \leq b \\ \hspace{0.8cm} & 1 \geq x_j \geq 0 \ \ \ \forall j. \end{align*} Chef realizes that he can use the Hedge algorithm to solve this linear program (approximately), but he is struggling with how to set the costs $m^{(t)}_{i}$ at each iteration. Explain how to set these costs properly. {\em (In this problem you are asked to define the costs $m^{(t)}_i$. You do \textbf{not} need to explain how to solve the reduced linear program that has a single constraint. Recall that you are allowed to refer to material covered in the lecture notes.)}
At most $(k-1)n$ cuts are needed, and this is optimal. Consider now the case $k=2$ and arbitrary weights. Stromquist and Woodall proved that there exists an exact division of a pie (a circular cake) in which each piece contains at most $n-1$ intervals; hence, at most $2n-2$ cuts are needed.
Given the following function sums:

def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
  case Nil => List((c, 1))
  case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)

def sums(digits: List[Int]): List[(Int, Int)] =
  digits.foldRight(List[(Int, Int)]())(add)

Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits?

def mystery3(digits: List[Int]): Int = sums(digits) match
  case Nil => 0
  case t => t.reduceLeft((a, b) => (a._1 * a._2 + b._1 * b._2, 1))._1
At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand. Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation. Sum the digits of the result originally obtained for the calculation. If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be. Example: assume the calculation 6,338 × 79, manually done, yielded a result of 500,702. Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11; iterate as needed: 1 + 1 = 2. Sum the digits of 79: 7 + (9 counted as 0) = 7. Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5. Sum the digits of 500,702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5. Since 5 = 5, there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right. The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation.
Given the following function sums:

def add(c: Int, acc: List[(Int, Int)]): List[(Int, Int)] = acc match
  case Nil => List((c, 1))
  case x :: xs => if x._1 == c then (c, x._2+1) :: xs else x :: add(c, xs)

def sums(digits: List[Int]): List[(Int, Int)] =
  digits.foldRight(List[(Int, Int)]())(add)

Your task is to identify several operations on lists of digits: What does the following operation implement, for a given input list of digits?

def mystery3(digits: List[Int]): Int = sums(digits) match
  case Nil => 0
  case t => t.reduceLeft((a, b) => (a._1 * a._2 + b._1 * b._2, 1))._1
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result: Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0. If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit. Repeat steps one and two with the second operand.
Change Karger's algorithm so that it also works for edge-weighted graphs. Also adapt the analysis to prove that it still returns any min cut $(S^*, \overline{S^*})$ with probability at least $1/{n \choose 2}$. (Hence, edge-weighted graphs also have at most ${n \choose 2}$ min cuts.)
The minimum cut problem in undirected, weighted graphs limited to non-negative weights can be solved in polynomial time by the Stoer-Wagner algorithm. In the special case when the graph is unweighted, Karger's algorithm provides an efficient randomized method for finding the cut. In this case, the minimum cut equals the edge connectivity of the graph. A generalization of the minimum cut problem without terminals is the minimum k-cut, in which the goal is to partition the graph into at least k connected components by removing as few edges as possible. For a fixed value of k, this problem can be solved in polynomial time, though the algorithm is not practical for large k.
Change Karger's algorithm so that it also works for edge-weighted graphs. Also adapt the analysis to prove that it still returns any min cut $(S^*, \overline{S^*})$ with probability at least $1/{n \choose 2}$. (Hence, edge-weighted graphs also have at most ${n \choose 2}$ min cuts.)
All other edges connecting either $u$ or $v$ are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability.
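A minimal Python sketch of the weighted variant asked about in the exercise above (my implementation, under the standard adaptation: contract an edge chosen with probability proportional to its weight):

```python
import random

def karger_weighted_cut(n, edges, rng):
    """One run of Karger's contraction on nodes 0..n-1; edges = [(u, v, w), ...].
    Returns the total weight of the cut between the final two supernodes."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > 2:
        # edges whose endpoints still lie in different supernodes
        live = [(u, v, w) for (u, v, w) in edges if find(u) != find(v)]
        # sample an edge with probability proportional to its weight
        r = rng.uniform(0, sum(w for _, _, w in live))
        for u, v, w in live:
            r -= w
            if r <= 0:
                break
        parent[find(u)] = find(v)  # contract the chosen edge
        components -= 1
    return sum(w for (u, v, w) in edges if find(u) != find(v))
```

As in the unweighted case, a single run succeeds only with small probability; repeating it on the order of $\binom{n}{2}\ln n$ times and keeping the best cut succeeds with high probability.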
You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Show that any deterministic algorithm has a competitive ratio of at most $1/W$. {\em (In this problem you are asked to prove that any deterministic algorithm has a competitive ratio of at most $1/W$ for the above problem. Recall that you are allowed to refer to material covered in the lecture notes.)}
(Example where $\mathbb{E}(y_{i})$ converges.) You have a fair coin and are repeatedly tossing it. Each time, before it is tossed, you can choose to stop tossing it and get paid (in dollars, say) the average number of heads observed. You wish to maximise the amount you get paid by choosing a stopping rule. If $X_i$ (for $i \geq 1$) forms a sequence of independent, identically distributed random variables with Bernoulli distribution $\text{Bern}\left(\frac{1}{2}\right)$, and if $y_{i}=\frac{1}{i}\sum_{k=1}^{i}X_{k}$, then the sequences $(X_{i})_{i\geq 1}$ and $(y_{i})_{i\geq 1}$ are the objects associated with this problem.
You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Show that any deterministic algorithm has a competitive ratio of at most $1/W$. {\em (In this problem you are asked to prove that any deterministic algorithm has a competitive ratio of at most $1/W$ for the above problem. Recall that you are allowed to refer to material covered in the lecture notes.)}
A different approach to Siegel's paradox is proposed by K. Mallahi-Karai and P. Safari, who show that the only possible way to avoid making risk-less money in such future-based currency exchanges is to settle on the (weighted) geometric mean of the future exchange rates, or more generally a product of the weighted geometric mean and a so-called reciprocity function. The weights of the geometric mean depend on the probability of the rates occurring in the future, while the reciprocity function can always be taken to be the unit function. What this implies, for instance, in the case of the apple/orange example above, is that the consumers should trade their products for $\sqrt{2 \cdot \frac{1}{2}} = 1$ units of the other product to avoid an arbitrage. This method will provide currency traders on both sides with a common exchange rate they can safely agree on.
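A tiny numeric sketch of the apple/orange example (my illustration): with future exchange rates 2 and 1/2, each equally likely, both sides expect to gain from the arithmetic expectation, while the geometric mean is the arbitrage-free settlement rate.

```python
rates = [2.0, 0.5]  # the two equally likely future exchange rates

expected_rate = sum(rates) / 2                     # 1.25 > 1: one side expects a gain...
expected_inverse = sum(1 / r for r in rates) / 2   # ...and so does the other (also 1.25)

geometric_mean = (rates[0] * rates[1]) ** 0.5      # 1.0: no risk-free gain either way
```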
You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Give a deterministic algorithm with a competitive ratio of $1/W$. \\ {\em (In this problem you are asked to (i) design a deterministic online algorithm for the above problem and (ii) to prove that your algorithm is $1/W$-competitive. Recall that you are allowed to refer to material covered in the lecture notes.)}
One way to deal with the foreign exchange risk is to engage in a forward transaction. In this transaction, money does not actually change hands until some agreed upon future date. A buyer and seller agree on an exchange rate for any date in the future, and the transaction occurs on that date, regardless of what the market rates are then. The duration of the trade can be one day, a few days, months or years. Usually the date is decided by both parties. Then the forward contract is negotiated and agreed upon by both parties.
You have $1$ Euro and your goal is to exchange it to Swiss francs during the next two consecutive days. The exchange rate is an arbitrary function from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is known to the algorithm. More precisely, at day $1$, you learn the exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs you can buy from $1$ Euro. You then need to decide between the following two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$ Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the whole $1$ Euro at day $2$ at exchange rate $x_2 \in [1,W^2]$. The exchange rate $x_2$ is known only at day 2, i.e., after you made your decision at day 1. \end{enumerate} In the following two subproblems, we will analyze the competitive ratio of optimal deterministic algorithms. Recall that we say that an online algorithm is $c$-competitive if, for any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c \cdot \max\{x_1, x_2\}$ Swiss francs. Give a deterministic algorithm with a competitive ratio of $1/W$. \\ {\em (In this problem you are asked to (i) design a deterministic online algorithm for the above problem and (ii) to prove that your algorithm is $1/W$-competitive. Recall that you are allowed to refer to material covered in the lecture notes.)}
A different approach to Siegel's paradox is proposed by K. Mallahi-Karai and P. Safari, who show that the only possible way to avoid making risk-less money in such future-based currency exchanges is to settle on the (weighted) geometric mean of the future exchange rates, or more generally a product of the weighted geometric mean and a so-called reciprocity function. The weights of the geometric mean depend on the probability of the rates occurring in the future, while the reciprocity function can always be taken to be the unit function. What this implies, for instance, in the case of the apple/orange example above, is that the consumers should trade their products for $\sqrt{2 \cdot \frac{1}{2}} = 1$ units of the other product to avoid an arbitrage. This method will provide currency traders on both sides with a common exchange rate they can safely agree on.
What does the following function implement?

a => b => (not a) (not b) fls
could belong to $Cl_t^{\geq}$ or the form: $x$ could belong to $Cl_t^{\leq}$. Finally, approximate rules have the syntax: if $f(x, q_1) \geq r_1$ …
What does the following function implement?

a => b => (not a) (not b) fls
For example, defining $\exp(A) = I + A + \frac{1}{2!}A^{2} + \frac{1}{3!}A^{3} + \cdots$ …
Let $y_1, y_2, \ldots, y_n$ be uniform random bits. For each non-empty subset $S\subseteq \{1,2, \ldots, n\}$, define $X_S = \oplus_{i\in S}\:y_i$. Show that the bits $\{X_S: \emptyset \neq S\subseteq \{1,2, \ldots, n\} \}$ are pairwise independent. This shows how to stretch $n$ truly random bits to $2^n-1$ pairwise independent bits. \\ \emph{Hint: Observe that it is sufficient to prove $\mathbb{E}[X_S] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$ to show that they are pairwise independent. Also use the identity $\oplus_{i\in A}\: y_i = \frac{1}{2}\left( 1 - \prod_{i\in A} (-1)^{y_i} \right)$.}
Assume that the least significant set bit of $x-y$ appears on position $w-c$. Since $a$ is a random odd integer and odd integers have inverses in the ring $\mathbb{Z}_{2^w}$, it follows that $a(x-y) \bmod 2^w$ will be uniformly distributed among $w$-bit integers with the least significant set bit on position $w-c$. The probability that these bits are all 0's or all 1's is therefore at most $2/2^M = 2/m$. On the other hand, if $c < M$, …
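The $2/m$ bound from this analysis can be checked exhaustively for small parameters. A Python sketch (my code) of the multiply-shift family $h_a(x) = \lfloor (a x \bmod 2^W)/2^{W-M} \rfloor$ with $a$ ranging over all odd $W$-bit multipliers:

```python
W, M = 8, 3            # word size and number of output bits (kept small on purpose)
m = 2 ** M             # number of buckets

def h(a, x):
    # multiply-shift: multiply by an odd a modulo 2^W, keep the top M bits
    return ((a * x) % (2 ** W)) >> (W - M)

odd_multipliers = range(1, 2 ** W, 2)   # the whole family of odd a's

def collision_fraction(x, y):
    return sum(h(a, x) == h(a, y) for a in odd_multipliers) / len(odd_multipliers)

# 2/m-almost-universality: for every pair x != y the collision fraction is <= 2/m
worst = max(collision_fraction(x, y)
            for x in range(16) for y in range(16) if x != y)
```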
Let $y_1, y_2, \ldots, y_n$ be uniform random bits. For each non-empty subset $S\subseteq \{1,2, \ldots, n\}$, define $X_S = \oplus_{i\in S}\:y_i$. Show that the bits $\{X_S: \emptyset \neq S\subseteq \{1,2, \ldots, n\} \}$ are pairwise independent. This shows how to stretch $n$ truly random bits to $2^n-1$ pairwise independent bits. \\ \emph{Hint: Observe that it is sufficient to prove $\mathbb{E}[X_S] = 1/2$ and $\mathbb{E}[X_S X_T] = 1/4$ to show that they are pairwise independent. Also use the identity $\oplus_{i\in A}\: y_i = \frac{1}{2}\left( 1 - \prod_{i\in A} (-1)^{y_i} \right)$.}
A finite set of $n$ random variables $\{X_{1},\ldots,X_{n}\}$ is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent, as defined next. A finite set of $n$ random variables $\{X_{1},\ldots,X_{n}\}$ is mutually independent if and only if for any sequence of numbers $\{x_{1},\ldots,x_{n}\}$, the events $\{X_{1}\leq x_{1}\},\ldots,\{X_{n}\leq x_{n}\}$ are mutually independent events. This is equivalent to a corresponding condition on the joint cumulative distribution function $F_{X_{1},\ldots,X_{n}}(x_{1},\ldots,x_{n})$.
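The XOR construction in the question above is the classic illustration of this gap: with two uniform bits $y_1, y_2$, the three parities $y_1$, $y_2$, $y_1 \oplus y_2$ are pairwise independent but not mutually independent. A Python sketch (my code) checks this by exact enumeration:

```python
from itertools import product

# all four equally likely outcomes of (y1, y2), mapped to (X_{1}, X_{2}, X_{1,2})
samples = [(y1, y2, y1 ^ y2) for y1, y2 in product((0, 1), repeat=2)]

def prob(pred):
    return sum(map(pred, samples)) / len(samples)

# every bit pair is independent: each of the four value combinations has probability 1/4
pairwise_ok = all(
    prob(lambda s: (s[i], s[j]) == (vi, vj)) == 0.25
    for i in range(3) for j in range(3) if i < j
    for vi in (0, 1) for vj in (0, 1)
)

# ...but the triple is not mutually independent: (1, 1, 1) is impossible, not 1/8
mutually_independent = prob(lambda s: s == (1, 1, 1)) == 1 / 8
```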
In this problem we are going to formally analyze the important median trick. Suppose that we have a streaming algorithm for distinct elements that outputs an estimate $\hat d$ of the number $d$ of distinct elements such that \begin{align*} \Pr[\hat d > 3d] \leq 47 \% \qquad \mbox{and} \qquad \Pr[\hat d < d/3] \leq 47\%\,, \end{align*} where the probabilities are over the randomness of the streaming algorithm (the selection of hash functions). In other words, our algorithm overestimates the true value by a factor of 3 with a quite large probability $47\%$ (and also underestimates with large probability). We want to do better! An important and useful technique for doing better is the median trick: run $t$ independent copies in parallel and output the median of the $t$ estimates (it is important that it is the median and \emph{not} the mean as a single horrible estimate can badly affect the mean). Prove that if we select $t = C \ln(1/\delta)$ for some large (but reasonable) constant $C$, then the estimate $\hat d$ given by the median trick satisfies \begin{align*} d/3 \leq \hat d \leq 3d \qquad \mbox{with probability at least $1-\delta$.} \end{align*} \emph{Hint: an important tool in this exercise are the Chernoff Bounds, which basically say that sums of independent variables are highly concentrated.} Two such bounds can be stated as follows. Suppose $ X_1, X_2, \dots, X_n$ are independent random variables taking values in $\{0,1\}$. Let $X$ denote their sum and let $\mu = \mathbb{E}[X]$ denote the sum's expected value. Then for any $\delta \in (0,1)$, \begin{align*} \Pr[ X \leq (1- \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{2}} \qquad \mbox{and} \qquad \Pr[ X \geq (1+ \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{3}}\,. \end{align*}
With the resulting sample size, the expected bucket size and especially the probability of a bucket exceeding a certain size can be estimated. The following will show that for an oversampling factor of $S \in \Theta\left(\frac{\log n}{\epsilon^2}\right)$, the probability that no bucket has more than $(1+\epsilon)\cdot\frac{n}{p}$ elements is larger than $1-\frac{1}{n}$. To show this, let $\langle e_1, \dots, e_n \rangle$ be the input as a sorted sequence. For a processor to get more than $(1+\epsilon)\cdot n/p$ elements, there has to exist a subsequence of the input of length $(1+\epsilon)\cdot n/p$ of which at most $S$ samples are picked. These cases constitute the probability $P_{\text{fail}}$, which is expressed through a suitable random variable and then bounded using the Chernoff bound.
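The median trick in the question above relies on the same Chernoff-style concentration. A quick Monte-Carlo sketch (my code, with made-up per-copy failure probabilities matching the question): even when each copy errs on one side with probability $0.47$, the median of many independent copies lands in the $[d/3, 3d]$ window almost surely, because fewer than half the copies exceed $3d$ and fewer than half fall below $d/3$.

```python
import random

def one_estimate(d, rng):
    # a deliberately bad estimator: each one-sided failure has probability 0.47
    u = rng.random()
    if u < 0.47:
        return 4 * d        # gross overestimate
    if u < 0.94:
        return d / 4        # gross underestimate
    return d                # correct

def median_trick(d, t, rng):
    estimates = sorted(one_estimate(d, rng) for _ in range(t))
    return estimates[t // 2]

d_hat = median_trick(d=1000, t=5001, rng=random.Random(0))
```

Taking the mean instead would be pulled far off by the gross overestimates, which is exactly why the median is used.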
In this problem we are going to formally analyze the important median trick. Suppose that we have a streaming algorithm for distinct elements that outputs an estimate $\hat d$ of the number $d$ of distinct elements such that \begin{align*} \Pr[\hat d > 3d] \leq 47 \% \qquad \mbox{and} \qquad \Pr[\hat d < d/3] \leq 47\%\,, \end{align*} where the probabilities are over the randomness of the streaming algorithm (the selection of hash functions). In other words, our algorithm overestimates the true value by a factor of 3 with a quite large probability $47\%$ (and also underestimates with large probability). We want to do better! An important and useful technique for doing better is the median trick: run $t$ independent copies in parallel and output the median of the $t$ estimates (it is important that it is the median and \emph{not} the mean as a single horrible estimate can badly affect the mean). Prove that if we select $t = C \ln(1/\delta)$ for some large (but reasonable) constant $C$, then the estimate $\hat d$ given by the median trick satisfies \begin{align*} d/3 \leq \hat d \leq 3d \qquad \mbox{with probability at least $1-\delta$.} \end{align*} \emph{Hint: an important tool in this exercise are the Chernoff Bounds, which basically say that sums of independent variables are highly concentrated.} Two such bounds can be stated as follows. Suppose $ X_1, X_2, \dots, X_n$ are independent random variables taking values in $\{0,1\}$. Let $X$ denote their sum and let $\mu = \mathbb{E}[X]$ denote the sum's expected value. Then for any $\delta \in (0,1)$, \begin{align*} \Pr[ X \leq (1- \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{2}} \qquad \mbox{and} \qquad \Pr[ X \geq (1+ \delta) \mu] \leq e^{-\frac{\delta^2 \mu }{3}}\,. \end{align*}
By Yao's principle, it also applies to the expected number of comparisons for a randomized algorithm on its worst-case input. For deterministic algorithms, it has been shown that selecting the $k$th element requires $(1+H(k/n))n+\Omega(\sqrt{n})$ comparisons, where $H$ is the binary entropy function. The special case of median-finding has a slightly larger lower bound on the number of comparisons, at least $(2+\varepsilon)n$, for $\varepsilon \approx 2^{-80}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall the Manhattan distance function that we saw in class: for any $d$-dimensional Boolean vectors $p,q \in \{0,1\}^d$, the Manhattan distance is defined by \begin{align*} \dist(p,q) = \|p-q\|_1 = |\{i: p_i \neq q_i\}|\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: \{0,1\}^d \rightarrow \{0,1,2,3\}$ such that for any $p, q\in \{0,1\}^d$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] = \left( 1-\frac{\dist(p,q)}{d} \right)^2\,. \end{align*} {\em (In this problem you are asked to explain the hash family and show that it satisfies the above property. Recall that you are allowed to refer to material covered in the lecture notes.)}
As a result, the statistical distance to a uniform family is $O(m/p)$, which becomes negligible when $p \gg m$. The family of simpler hash functions $h_a(x) = (ax \bmod p) \bmod m$ is only approximately universal: $\Pr\{h_a(x) = h_a(y)\} \leq 2/m$ for all $x \neq y$. Moreover, this analysis is nearly tight; Carter and Wegman show that $\Pr\{h_a(1) = h_a(m+1)\} \geq 2/(m-1)$ whenever $(p-1) \bmod m = 1$.
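As an illustrative sketch of the simpler family $h_a(x) = (ax \bmod p) \bmod m$ described above (the parameter values p = 101, m = 10 and a = 7 are arbitrary choices, not from the text):

```scala
// Approximately universal hash family h_a(x) = (a*x mod p) mod m,
// for a prime p and table size m; `a` is drawn from 1..p-1.
def makeHash(a: Long, p: Long, m: Long): Long => Long =
  (x: Long) => ((a * x) % p) % m

val p = 101L  // a prime larger than the key universe
val m = 10L   // number of buckets
val h = makeHash(7L, p, m)

// Every hash value lands in a bucket 0 .. m-1.
assert((0L until 100L).forall(x => h(x) >= 0 && h(x) < m))
```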
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Recall the Manhattan distance function that we saw in class: for any $d$-dimensional Boolean vectors $p,q \in \{0,1\}^d$, the Manhattan distance is defined by \begin{align*} \dist(p,q) = \|p-q\|_1 = |\{i: p_i \neq q_i\}|\,. \end{align*} Design a locality sensitive hash (LSH) family $\mathcal{H}$ of functions $h: \{0,1\}^d \rightarrow \{0,1,2,3\}$ such that for any $p, q\in \{0,1\}^d$, \begin{align*} \Pr_{h \sim \mathcal{H}}[h(p) = h(q)] = \left( 1-\frac{\dist(p,q)}{d} \right)^2\,. \end{align*} {\em (In this problem you are asked to explain the hash family and show that it satisfies the above property. Recall that you are allowed to refer to material covered in the lecture notes.)}
A locality-preserving hash is a hash function f that maps points in a metric space $\mathcal{M} = (M, d)$ to a scalar value such that $d(p,q) < d(q,r) \Rightarrow |f(p)-f(q)| < |f(q)-f(r)|$.
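For the exam problem above, one natural construction (a sketch of a possible solution, not the official one) samples two independent uniform coordinates $i, j \in \{1,\dots,d\}$ and combines the two bits into a value in $\{0,1,2,3\}$. Since the two coordinates each agree with probability $1 - \mathrm{dist}(p,q)/d$, independence gives the required squared collision probability:

```scala
import scala.util.Random

// h(p) = 2*p(i) + p(j) for two independently, uniformly sampled
// coordinates i and j; h(p) == h(q) iff p and q agree on both
// coordinates, which happens with probability (1 - dist(p,q)/d)^2.
def sampleHash(d: Int, rng: Random): Vector[Int] => Int = {
  val i = rng.nextInt(d)
  val j = rng.nextInt(d)  // sampled independently of i
  (p: Vector[Int]) => 2 * p(i) + p(j)
}

val rng = new Random(42)
val h = sampleHash(4, rng)
val p = Vector(0, 1, 0, 1)
assert(h(p) >= 0 && h(p) <= 3)  // range is {0, 1, 2, 3}
assert(h(p) == h(p))            // deterministic once sampled
```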
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a subtype nor a supertype of T2. What is the correct subtyping relationship between A => (Y => X) and A => (X => Y)?
Subtyping and inheritance are independent (orthogonal) relationships. They may coincide, but neither is a special case of the other. In other words, between two types S and T, all combinations of subtyping and inheritance are possible: (1) S is neither a subtype nor a derived type of T; (2) S is a subtype but is not a derived type of T; (3) S is not a subtype but is a derived type of T; (4) S is both a subtype and a derived type of T. The first case is illustrated by independent types, such as Boolean and Float. The second case can be illustrated by the relationship between Int32 and Int64.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given the following classes: • class Pair[+U, +V] • class Iterable[+U] • class Map[U, +V] extends Iterable[Pair[U, V]] Recall that + means covariance, - means contravariance and no annotation means invariance (i.e. neither covariance nor contravariance). Consider also the following typing relationships for A, B, X, and Y: • A >: B • X >: Y Fill in the subtyping relation between the types below using symbols: • <: in case T1 is a subtype of T2; • >: in case T1 is a supertype of T2; • “Neither” in case T1 is neither a subtype nor a supertype of T2. What is the correct subtyping relationship between A => (Y => X) and A => (X => Y)?
In languages with subtyping, the compatibility relation is more complex: If B is a subtype of A, then a value of type B can be used in a context where one of type A is expected (covariant), even if the reverse is not true. Like equivalence, the subtype relation is defined differently for each programming language, with many variations possible. The presence of parametric or ad hoc polymorphism in a language may also have implications for type compatibility.
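The exam question above about A => (Y => X) versus A => (X => Y) can be checked against the compiler with a small sketch (class names are hypothetical stand-ins for the problem's A, X, Y). Since Scala's Function1[-T, +R] is contravariant in its argument and covariant in its result, X >: Y gives X => Y <: Y => X, and therefore A => (X => Y) <: A => (Y => X):

```scala
class A
class X
class Y extends X  // so X >: Y

// X => Y is a subtype of Y => X: it accepts more (any X, not just Y)
// and returns something more specific (a Y, which is also an X).
val f: X => Y = (x: X) => new Y
val g: Y => X = f  // compiles: X => Y <: Y => X

// Consequently A => (X => Y) <: A => (Y => X).
val h: A => (X => Y) = (a: A) => f
val k: A => (Y => X) = h  // compiles

assert(k(new A)(new Y).isInstanceOf[X])
```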
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Design and analyze a polynomial-time algorithm for the following problem: \begin{center} \begin{boxedminipage}[t]{0.83\textwidth} \begin{description} \item[Input:] a vertex set $V$. \item[Output:] vertex subsets $S_1, S_2, \ldots, S_\ell \subseteq V$ with the following property:\\[2mm] For every set of edges $E\subseteq {V \choose 2}$, there is an $i\in \{1,2, \ldots, \ell\}$ such that \begin{align*} |\{e\in E: |e\cap S_i| = 1\}| \geq |E|/2\,, \end{align*} i.e., $S_i$ cuts at least half the edges in $G = (V,E)$. \end{description} \end{boxedminipage} \end{center} We remark that, since your algorithm should run in time polynomial in $n=|V|$, it can output at most polynomially (in $n$) many vertex sets. We also emphasize that the algorithm does \textbf{not} take the edge set $E$ as input. {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in time polynomial in $n$, and (iii) prove that the output satisfies the property given in the problem statement. Recall that you are allowed to refer to material covered in the lecture notes.)}
Given an undirected graph $G = (V, E)$ with an assignment of weights to the edges $w: E \to \mathbb{N}$ and an integer $k \in \{2, 3, \ldots, |V|\}$, partition $V$ into $k$ disjoint sets $F = \{C_1, C_2, \ldots, C_k\}$ while minimizing $\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \sum_{v_1 \in C_i,\, v_2 \in C_j} w(\{v_1, v_2\})$. For a fixed $k$, the problem is polynomial-time solvable in $O\bigl(|V|^{k^2}\bigr)$. However, the problem is NP-complete if $k$ is part of the input. It is also NP-complete if we specify $k$ vertices and ask for the minimum $k$-cut which separates these vertices among each of the sets.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Design and analyze a polynomial-time algorithm for the following problem: \begin{center} \begin{boxedminipage}[t]{0.83\textwidth} \begin{description} \item[Input:] a vertex set $V$. \item[Output:] vertex subsets $S_1, S_2, \ldots, S_\ell \subseteq V$ with the following property:\\[2mm] For every set of edges $E\subseteq {V \choose 2}$, there is an $i\in \{1,2, \ldots, \ell\}$ such that \begin{align*} |\{e\in E: |e\cap S_i| = 1\}| \geq |E|/2\,, \end{align*} i.e., $S_i$ cuts at least half the edges in $G = (V,E)$. \end{description} \end{boxedminipage} \end{center} We remark that, since your algorithm should run in time polynomial in $n=|V|$, it can output at most polynomially (in $n$) many vertex sets. We also emphasize that the algorithm does \textbf{not} take the edge set $E$ as input. {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it runs in time polynomial in $n$, and (iii) prove that the output satisfies the property given in the problem statement. Recall that you are allowed to refer to material covered in the lecture notes.)}
An exhaustive search algorithm can solve the problem in time $2^k n^{O(1)}$, where $k$ is the size of the vertex cover. Vertex cover is therefore fixed-parameter tractable, and if we are only interested in small $k$, we can solve the problem in polynomial time. One algorithmic technique that works here is called the bounded search tree algorithm, and its idea is to repeatedly choose some vertex and recursively branch, with two cases at each step: place either the current vertex or all its neighbours into the vertex cover. The algorithm for solving vertex cover that achieves the best asymptotic dependence on the parameter runs in time $O(1.2738^{k}+(k\cdot n))$.
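The bounded search tree idea from the passage can be sketched as follows. This is a simplified variant that branches on the two endpoints of a single edge rather than on a vertex versus all its neighbours, which already gives the basic $2^k n^{O(1)}$ bound (not the optimized constants):

```scala
// Decide whether the graph given by `edges` has a vertex cover of
// size at most k: pick any uncovered edge (u, v) and branch on
// putting either u or v into the cover.
def hasCover(edges: List[(Int, Int)], k: Int): Boolean =
  edges match {
    case Nil => true                 // nothing left to cover
    case _ if k == 0 => false        // edges remain but budget is spent
    case (u, v) :: _ =>
      hasCover(edges.filter(e => e._1 != u && e._2 != u), k - 1) ||
      hasCover(edges.filter(e => e._1 != v && e._2 != v), k - 1)
  }

val triangle = List((1, 2), (2, 3), (1, 3))
assert(!hasCover(triangle, 1))  // a triangle needs two vertices
assert(hasCover(triangle, 2))
```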
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Design a polynomial-time algorithm for the matroid matching problem: \begin{description} \item[Input:] A bipartite graph $G=(A \cup B, E)$ and two matroids $\mathcal{M}_A = (A, \mathcal{I}_A)$, $\mathcal{M}_B = (B, \mathcal{I}_B)$. \item[Output:] A matching $M \subseteq E$ of maximum cardinality satisfying: \begin{enumerate} \item[(i)] the vertices $A' = \{a\in A: \mbox{there is a $b\in B$ such that $\{a,b\}\in M$}\}$ of $A$ that are matched by $M$ form an independent set in $\mathcal{M}_A$, i.e., $A'\in \mathcal{I}_A$; and \item[(ii)] the vertices $B' = \{b\in B: \mbox{there is an $a\in A$ such that $\{a,b\}\in M$}\}$ of $B$ that are matched by $M$ form an independent set in $\mathcal{M}_B$, i.e., $B'\in \mathcal{I}_B$. \end{enumerate} \end{description} We assume that the independence oracles for both matroids $\mathcal{M}_A$ and $\mathcal{M}_B$ can be implemented in polynomial-time. Also to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{\textwidth} \textbf{Fact (obtaining a new matroid by copying elements)}. Let $\mathcal{M} = (N, \mathcal{I})$ be a matroid where $N = \{e_1, \ldots, e_n\}$ consists of $n$ elements. Now, for each $i=1,\ldots, n$, make $k_i$ copies of $e_i$ to obtain the new ground set \begin{align*} N' = \{e_1^{(1)}, e_1^{(2)},\ldots, e_1^{(k_1)}, e_2^{(1)}, e_2^{(2)}, \ldots, e_2^{(k_2)}, \ldots, e_n^{(1)},e_n^{(2)}, \ldots, e_n^{(k_n)}\}\,, \end{align*} where we denote the $k_i$ copies of $e_i$ by $e_i^{(1)}, e_i^{(2)},\ldots, e_i^{(k_i)}$. 
Then $(N', \mathcal{I}')$ is a matroid where a subset $I' \subseteq N'$ is independent, i.e., $I' \in \mathcal{I}'$, if and only if the following conditions hold:\\[-1mm] \begin{enumerate} \item[(i)] $I'$ contains at most one copy of each element, i.e., we have $|I' \cap \{e_i^{(1)}, \ldots, e_i^{(k_i)}\}| \leq 1$ for each $i= 1,\ldots, n$; \item[(ii)] the original elements corresponding to the copies in $I'$ form an independent set in $\mathcal{I}$, i.e., if $I' = \{e_{i_1}^{(j_1)}, e_{i_2}^{(j_2)}, \ldots, e_{i_\ell}^{(j_\ell)}\}$ then $\{e_{i_1}, e_{i_2}, \ldots, e_{i_\ell}\} \in \mathcal{I}$.\\ \end{enumerate} Moreover, if the independence oracle of $(N, \mathcal{I})$ can be implemented in polynomial time, then the independence oracle of $(N', \mathcal{I}')$ can be implemented in polynomial time. \end{boxedminipage} \end{center} {\em (In this problem you are asked to design and analyze a polynomial-time algorithm for the matroid matching problem. You are allowed to use the above fact without any proof and to assume that all independence oracles can be implemented in polynomial time. Recall that you are allowed to refer to material covered in the lecture notes.)}
In combinatorial optimization, the matroid parity problem is a problem of finding the largest independent set of paired elements in a matroid. The problem was formulated by Lawler (1976) as a common generalization of graph matching and matroid intersection. It is also known as polymatroid matching, or the matchoid problem.Matroid parity can be solved in polynomial time for linear matroids. However, it is NP-hard for certain compactly-represented matroids, and requires more than a polynomial number of steps in the matroid oracle model.Applications of matroid parity algorithms include finding large planar subgraphs and finding graph embeddings of maximum genus. These algorithms can also be used to find connected dominating sets and feedback vertex sets in graphs of maximum degree three.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Design a polynomial-time algorithm for the matroid matching problem: \begin{description} \item[Input:] A bipartite graph $G=(A \cup B, E)$ and two matroids $\mathcal{M}_A = (A, \mathcal{I}_A)$, $\mathcal{M}_B = (B, \mathcal{I}_B)$. \item[Output:] A matching $M \subseteq E$ of maximum cardinality satisfying: \begin{enumerate} \item[(i)] the vertices $A' = \{a\in A: \mbox{there is a $b\in B$ such that $\{a,b\}\in M$}\}$ of $A$ that are matched by $M$ form an independent set in $\mathcal{M}_A$, i.e., $A'\in \mathcal{I}_A$; and \item[(ii)] the vertices $B' = \{b\in B: \mbox{there is an $a\in A$ such that $\{a,b\}\in M$}\}$ of $B$ that are matched by $M$ form an independent set in $\mathcal{M}_B$, i.e., $B'\in \mathcal{I}_B$. \end{enumerate} \end{description} We assume that the independence oracles for both matroids $\mathcal{M}_A$ and $\mathcal{M}_B$ can be implemented in polynomial-time. Also to your help you may use the following fact without proving it. \begin{center} \begin{boxedminipage}{\textwidth} \textbf{Fact (obtaining a new matroid by copying elements)}. Let $\mathcal{M} = (N, \mathcal{I})$ be a matroid where $N = \{e_1, \ldots, e_n\}$ consists of $n$ elements. Now, for each $i=1,\ldots, n$, make $k_i$ copies of $e_i$ to obtain the new ground set \begin{align*} N' = \{e_1^{(1)}, e_1^{(2)},\ldots, e_1^{(k_1)}, e_2^{(1)}, e_2^{(2)}, \ldots, e_2^{(k_2)}, \ldots, e_n^{(1)},e_n^{(2)}, \ldots, e_n^{(k_n)}\}\,, \end{align*} where we denote the $k_i$ copies of $e_i$ by $e_i^{(1)}, e_i^{(2)},\ldots, e_i^{(k_i)}$. 
Then $(N', \mathcal{I}')$ is a matroid where a subset $I' \subseteq N'$ is independent, i.e., $I' \in \mathcal{I}'$, if and only if the following conditions hold:\\[-1mm] \begin{enumerate} \item[(i)] $I'$ contains at most one copy of each element, i.e., we have $|I' \cap \{e_i^{(1)}, \ldots, e_i^{(k_i)}\}| \leq 1$ for each $i= 1,\ldots, n$; \item[(ii)] the original elements corresponding to the copies in $I'$ form an independent set in $\mathcal{I}$, i.e., if $I' = \{e_{i_1}^{(j_1)}, e_{i_2}^{(j_2)}, \ldots, e_{i_\ell}^{(j_\ell)}\}$ then $\{e_{i_1}, e_{i_2}, \ldots, e_{i_\ell}\} \in \mathcal{I}$.\\ \end{enumerate} Moreover, if the independence oracle of $(N, \mathcal{I})$ can be implemented in polynomial time, then the independence oracle of $(N', \mathcal{I}')$ can be implemented in polynomial time. \end{boxedminipage} \end{center} {\em (In this problem you are asked to design and analyze a polynomial-time algorithm for the matroid matching problem. You are allowed to use the above fact without any proof and to assume that all independence oracles can be implemented in polynomial time. Recall that you are allowed to refer to material covered in the lecture notes.)}
Many other optimization problems can be formulated as linear matroid parity problems, and solved in polynomial time using this formulation. Graph matching: a maximum matching in a graph is a subset of edges, no two sharing an endpoint, that is as large as possible. It can be formulated as a matroid parity problem in a partition matroid that has an element for each vertex-edge incidence in the graph. In this matroid, two elements are paired if they are the two incidences for the same edge as each other.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int The filter operation on a multiset m returns the subset of m for which p holds. What should replace ??? so that the filter function is correct? def filter(m: Multiset, p: Char => Boolean): Multiset = ???
A multiset may be formally defined as an ordered pair (A, m) where A is the underlying set of the multiset, formed from its distinct elements, and $m\colon A\to \mathbb{Z}^{+}$ is a function from A to the set of positive integers, giving the multiplicity – that is, the number of occurrences – of the element a in the multiset as the number m(a). (It is also possible to allow multiplicity 0 or $\infty$, especially when considering submultisets. This article is restricted to finite, positive multiplicities.)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise: type Multiset = Char => Int The filter operation on a multiset m returns the subset of m for which p holds. What should replace ??? so that the filter function is correct? def filter(m: Multiset, p: Char => Boolean): Multiset = ???
In mathematics, a multiset (or bag, or mset) is a modification of the concept of a set that, unlike a set, allows for multiple instances for each of its elements. The number of instances given for each element is called the multiplicity of that element in the multiset. As a consequence, an infinite number of multisets exist which contain only elements a and b, but vary in the multiplicities of their elements: The set {a, b} contains only elements a and b, each having multiplicity 1 when {a, b} is seen as a multiset. In the multiset {a, a, b}, the element a has multiplicity 2, and b has multiplicity 1.
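One natural way to fill in the ??? from the question above (a sketch of a possible solution, not the official one): keep m's multiplicity when p holds and return 0 otherwise:

```scala
type Multiset = Char => Int

// filter keeps an element's multiplicity when p holds, drops it otherwise
def filter(m: Multiset, p: Char => Boolean): Multiset =
  (c: Char) => if (p(c)) m(c) else 0

// Example: the multiset {a, a, b}
val m: Multiset = c => if (c == 'a') 2 else if (c == 'b') 1 else 0
val onlyA = filter(m, _ == 'a')
assert(onlyA('a') == 2)  // multiplicity preserved
assert(onlyA('b') == 0)  // filtered out
```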
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A monad M is a parametric type M[T] with two operations, flatMap and unit: extension [T, U](m: M[T]) def flatMap(f: T => M[U]): M[U] def unit[T](x: T): M[T] To qualify as a monad, a type has to satisfy the three following laws for all m: M[T], x: T, f: T => M[U] and g: U => M[V]: (Associativity) m.flatMap(f).flatMap(g) === m.flatMap(f(_).flatMap(g)) (Left unit) unit(x).flatMap(f) === f(x) (Right unit) m.flatMap(unit) === m Is List with its usual flatMap method and unit(x) = List(x) a monad?
The more common definition for a monad in functional programming, used in the above example, is actually based on a Kleisli triple ⟨T, η, μ⟩ rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad. Given any well-defined, basic types T, U, a monad consists of three parts: a type constructor M that builds up a monadic type M T; a type converter, often called unit or return, that embeds an object x in the monad; and a combinator, typically called bind (as in binding a variable) and represented with an infix operator >>= or a method called flatMap, that unwraps a monadic variable, then inserts it into a monadic function/expression, resulting in a new monadic value. To fully qualify as a monad, though, these three parts must also respect a few laws: unit is a left-identity for bind, unit is also a right-identity for bind, and bind is essentially associative. Algebraically, this means any monad both gives rise to a category (called the Kleisli category) and a monoid in the category of functors (from values to computations), with monadic composition as a binary operator in the monoid and unit as identity in the monad.
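The three laws from the question can be spot-checked for List with concrete values (a sanity check on examples, not a proof):

```scala
val m = List(1, 2, 3)
val f = (x: Int) => List(x, x + 1)
val g = (x: Int) => List(x * 10)

// Associativity
assert(m.flatMap(f).flatMap(g) == m.flatMap(f(_).flatMap(g)))
// Left unit: unit(x) = List(x)
assert(List(5).flatMap(f) == f(5))
// Right unit
assert(m.flatMap(List(_)) == m)
```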
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A monad M is a parametric type M[T] with two operations, flatMap and unit: extension [T, U](m: M[T]) def flatMap(f: T => M[U]): M[U] def unit[T](x: T): M[T] To qualify as a monad, a type has to satisfy the three following laws for all m: M[T], x: T, f: T => M[U] and g: U => M[V]: (Associativity) m.flatMap(f).flatMap(g) === m.flatMap(f(_).flatMap(g)) (Left unit) unit(x).flatMap(f) === f(x) (Right unit) m.flatMap(unit) === m Is List with its usual flatMap method and unit(x) = List(x) a monad?
In homological algebra, a monad is a 3-term complex A → B → Cof objects in some abelian category whose middle term B is projective and whose first map A → B is injective and whose second map B → C is surjective. Equivalently, a monad is a projective object together with a 3-step filtration (B ⊃ ker(B → C) ⊃ im(A → B)). In practice A, B, and C are often vector bundles over some space, and there are several minor extra conditions that some authors add to the definition. Monads were introduced by Horrocks (1964, p.698).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the maximum directed cut problem we are given as input a directed graph $G = (V, A)$. Each arc $(i, j)\in A$ has a nonnegative weight $w_{ij} \geq 0$. The goal is to partition $V$ into two sets $U$ and $W = V \setminus U$ so as to maximize the total weight of the arcs going from $U$ to $W$ (that is, arcs $(i, j)$ with $i \in U$ and $j \in W$). Give a randomized 1/4-approximation algorithm for this problem (together with a proof that it is a 1/4-approximation in expectation).
We already know that the (2,1) cut is the minimum bisection problem and that it is NP-complete. Next, we assess the 3-partition problem wherein n = 3k, which is also bounded in polynomial time. Now, if we assume that we have a finite approximation algorithm for the (k, 1)-balanced partition, then either the 3-partition instance can be solved using the balanced (k,1) partition in G or it cannot be solved.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the maximum directed cut problem we are given as input a directed graph $G = (V, A)$. Each arc $(i, j)\in A$ has a nonnegative weight $w_{ij} \geq 0$. The goal is to partition $V$ into two sets $U$ and $W = V \setminus U$ so as to maximize the total weight of the arcs going from $U$ to $W$ (that is, arcs $(i, j)$ with $i \in U$ and $j \in W$). Give a randomized 1/4-approximation algorithm for this problem (together with a proof that it is a 1/4-approximation in expectation).
One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible. There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all weights.
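A minimal sketch of the standard randomized approach for the directed version asked about above (vertex and arc values below are illustrative): put every vertex in U independently with probability 1/2, so each arc (i, j) crosses from U to W with probability exactly 1/4; by linearity of expectation the expected cut weight is the total arc weight divided by 4, which is at least OPT/4 since OPT cannot exceed the total weight:

```scala
import scala.util.Random

// arcs: (i, j, w) with weight w >= 0. Each vertex joins U with prob 1/2;
// an arc contributes iff i is in U and j is not (probability 1/4).
def randomDicut(vertices: Set[Int],
                arcs: List[(Int, Int, Double)],
                rng: Random): Double = {
  val inU = vertices.map(v => v -> rng.nextBoolean()).toMap
  arcs.collect { case (i, j, w) if inU(i) && !inU(j) => w }.sum
}

val arcs = List((1, 2, 3.0), (2, 3, 1.0), (3, 1, 2.0))
val cut = randomDicut(Set(1, 2, 3), arcs, new Random(0))
val total = arcs.map(_._3).sum
assert(cut >= 0.0 && cut <= total)  // expected value is total / 4
```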
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In a game of Othello (also known as Reversi in French-speaking countries), when a player puts a token on a square of the board, we have to look in all directions around that square to find which squares should be “flipped” (i.e., be stolen from the opponent). We implement this in a method computeFlips, taking the position of a square, and returning a list of all the squares that should be flipped: final case class Square(x: Int, y: Int) def computeFlips(square: Square): List[Square] = { List(-1, 0, 1).flatMap{i => List(-1, 0, 1).filter{j => i != 0 || j != 0}.flatMap{j => computeFlipsInDirection(square, i, j)}}} def computeFlipsInDirection(square: Square, dirX: Int, dirY: Int): List[Square] = {// omitted} Rewrite the method computeFlips to use one for comprehension instead of maps, flatMaps and filters. The resulting for comprehension should of course have the same result as the expression above for all values of square. However, it is not necessary that it desugars exactly to the expression above.
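A possible rewrite as a single for comprehension (with a simple stub standing in for the omitted computeFlipsInDirection so the sketch compiles; the guard desugars to withFilter, matching the original filter):

```scala
final case class Square(x: Int, y: Int)

// Stub standing in for the omitted method from the question.
def computeFlipsInDirection(square: Square, dirX: Int, dirY: Int): List[Square] =
  List(Square(square.x + dirX, square.y + dirY))

def computeFlips(square: Square): List[Square] =
  for {
    i <- List(-1, 0, 1)
    j <- List(-1, 0, 1)
    if i != 0 || j != 0
    s <- computeFlipsInDirection(square, i, j)
  } yield s

// All 8 neighbouring directions are explored; (0, 0) is skipped.
assert(computeFlips(Square(0, 0)).length == 8)
```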
Then, the initial position can be described in combinatorial game theory notation as $\{(\mathrm{A}1,\mathrm{A}2),(\mathrm{B}1,\mathrm{B}2),\dots \mid (\mathrm{A}1,\mathrm{B}1),(\mathrm{A}2,\mathrm{B}2),\dots\}$. In standard Cross-Cram play, the players alternate turns, but this alternation is handled implicitly by the definitions of combinatorial game theory rather than being encoded within the game states.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In a game of Othello (also known as Reversi in French-speaking countries), when a player puts a token on a square of the board, we have to look in all directions around that square to find which squares should be “flipped” (i.e., be stolen from the opponent). We implement this in a method computeFlips, taking the position of a square, and returning a list of all the squares that should be flipped: final case class Square(x: Int, y: Int) def computeFlips(square: Square): List[Square] = { List(-1, 0, 1).flatMap{i => List(-1, 0, 1).filter{j => i != 0 || j != 0}.flatMap{j => computeFlipsInDirection(square, i, j)}}} def computeFlipsInDirection(square: Square, dirX: Int, dirY: Int): List[Square] = {// omitted} Rewrite the method computeFlips to use one for comprehension instead of maps, flatMaps and filters. The resulting for comprehension should of course have the same result as the expression above for all values of square. However, it is not necessary that it desugars exactly to the expression above.
The game subtract-a-square can also be played with multiple numbers. At each turn the player to make a move first selects one of the numbers, and then subtracts a square from it. Such a 'sum of normal games' can be analysed using the Sprague–Grundy theorem. This theorem states that each position in the game subtract-a-square may be mapped onto an equivalent nim heap size.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.) If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
The associated graph $H$ (defined in subproblem~\textbf{a}) can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture} Let $x^*$ be an extreme point solution to the linear program and consider the (undirected) bipartite graph $H$ associated to $x^*$ defined as follows. Its left-hand-side has a vertex $a_i$ for each machine $i\in M$ and its right-hand-side has a vertex $b_j$ for each job $j\in J$. Finally, $H$ has an edge $\{a_i, b_j\}$ iff $0 < x^*_{ij} < 1$.\\[0mm] {Prove that $H$ is acyclic} (using that $x^*$ is an extreme point).
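A possible proof sketch for the acyclicity claim (the standard perturbation argument; notation follows the problem statement):

```latex
Suppose toward a contradiction that $H$ contains a cycle
$C = b_{j_1}, a_{i_1}, b_{j_2}, a_{i_2}, \ldots, b_{j_k}, a_{i_k}, b_{j_1}$,
so every edge $e$ of $C$ has $0 < x^*_e < 1$. Construct a perturbation
vector $d$ supported on the edges of $C$: around each job vertex $b_j$,
its two incident cycle edges receive opposite signs (so every constraint
$\sum_{i} x_{ij} = 1$ is preserved), and around each machine vertex $a_i$
the magnitudes are scaled so that $|d_{e}|\, p_{j(e)} = |d_{e'}|\, p_{j(e')}$
for its two incident cycle edges $e, e'$ (so every machine load
$\sum_{j} x_{ij} p_j$ is unchanged as well). Traversing the cycle, these
scaling conditions telescope and close consistently. Then for a small
enough $\epsilon > 0$, both $x^* + \epsilon d$ and $x^* - \epsilon d$ are
feasible (all cycle coordinates lie strictly between $0$ and $1$), and
$x^*$ is their midpoint, contradicting that $x^*$ is an extreme point.
Hence $H$ is acyclic.
```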
A natural way to formulate the problem as a linear program is called the Lenstra–Shmoys–Tardos linear program (LST LP). For each machine i and job j, define a variable $z_{i,j}$, which equals 1 iff machine i processes job j, and 0 otherwise. Then, the LP constraints are: $\sum_{i=1}^{m} z_{i,j}=1$ for every job j in 1,...,n; $\sum_{j=1}^{n} z_{i,j}\cdot p_{i,j}\leq T$ for every machine i in 1,...,m; and $z_{i,j}\in \{0,1\}$ for every i, j. Relaxing the integer constraints gives a linear program with size polynomial in the input. A solution to the relaxed problem can then be rounded to obtain a 2-approximation to the problem.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In this problem we are going to investigate the linear programming relaxation of a classical scheduling problem. In the considered problem, we are given a set $M$ of $m$ machines and a set $J$ of $n$ jobs. Each job $j\in J$ has a processing time $p_j > 0$ and can be processed on a subset $N(j) \subseteq M$ of the machines. The goal is to assign each job $j$ to a machine in $N(j)$ so as to complete all the jobs by a given deadline $T$. (Each machine can only process one job at a time.) If we, for $j\in J$ and $i\in N(j)$, let $x_{ij}$ denote the indicator variable indicating that $j$ was assigned to $i$, then we can formulate the scheduling problem as the following integer linear program: \begin{align*} \sum_{i\in N(j)} x_{ij} & = 1 \qquad \mbox{for all } j\in J & \hspace{-3em} \mbox{\small \emph{(Each job $j$ should be assigned to a machine $i\in N(j)$)}} \\ \sum_{j\in J: i \in N(j)} x_{ij} p_j & \leq T \qquad \mbox{for all } i \in M & \hspace{-3em} \mbox{\small \emph{(Time needed to process jobs assigned to $i$ should be $\leq T$)}} \\ x_{ij} &\in \{0,1\} \ \mbox{for all } j\in J, \ i \in N(j) \end{align*} The above integer linear program is NP-hard to solve, but we can obtain a linear programming relaxation by relaxing the constraints $x_{ij} \in \{0,1\}$ to $x_{ij} \in [0,1]$. The obtained linear program can be solved in polynomial time using e.g. the ellipsoid method. \\[2mm] \emph{Example.} An example is as follows. We have two machines $M = \{m_1, m_2\}$ and three jobs $J= \{j_1, j_2, j_3\}$. Job $j_1$ has processing time $1/2$ and can only be assigned to $m_1$; job $j_2$ has processing time $1/2$ and can only be assigned to $m_2$; and job $j_3$ has processing time $1$ and can be assigned to either machine. Finally, we have the ``deadline'' $T=1$. An extreme point solution to the linear programming relaxation is $x^*_{11} = 1, x^*_{22} =1, x^*_{13} = 1/2$ and $x^*_{23} = 1/2$. 
Let $x^*$ be an extreme point solution to the linear program and consider the (undirected) bipartite graph $H$ associated to $x^*$ defined as follows. Its left-hand side has a vertex $a_i$ for each machine $i\in M$ and its right-hand side has a vertex $b_j$ for each job $j\in J$. Finally, $H$ has an edge $\{a_i, b_j\}$ iff $0 < x^*_{ij} < 1$. For the extreme point of the example above, $H$ can be illustrated as follows: \begin{tikzpicture} \node[vertex] (a1) at (0,1.7) {$a_1$}; \node[vertex] (a2) at (0,0.3) {$a_2$}; \node[vertex] (b1) at (3,2.5) {$b_1$}; \node[vertex] (b2) at (3,1) {$b_2$}; \node[vertex] (b3) at (3,-0.5) {$b_3$}; \draw (a1) edge (b3); \draw (a2) edge (b3); \end{tikzpicture}\\[0mm] {Prove that $H$ is acyclic} (using that $x^*$ is an extreme point).
Using the technique of Linear programming relaxation, it is possible to approximate the optimal scheduling with slightly better approximation factors. The approximation ratio of the first such algorithm is asymptotically 2 when k is large, but when k=2 the algorithm achieves an approximation ratio of 5/3. The approximation factor for arbitrary k was later improved to 1.582.
Consider the min-cost perfect matching problem on a bipartite graph $G=(A \cup B, E)$ with costs $c: E \rightarrow \mathbb{R}$. Recall from the lecture that the dual linear program is \begin{align*} \text{Maximize} \quad & \sum_{a\in A} u_a + \sum_{b\in B} v_b\\ \text{Subject to} \quad &u_a + v_b \leq c(\{a,b\}) \qquad \mbox{for every edge $\{a,b\} \in E$.} \\ \end{align*} Show that the dual linear program is unbounded if there is a set $S \subseteq A$ such that $|S| > |N(S)|$, where $N(S) = \{ v\in B: \{u,v\} \in E \mbox{ for some $u\in S$}\}$ denotes the neighborhood of $S$. This proves (as expected) that the primal is infeasible in this case.
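The intended argument can be sanity-checked numerically: raising $u_a$ by $\lambda$ for every $a\in S$ and lowering $v_b$ by $\lambda$ for every $b\in N(S)$ preserves every constraint $u_a + v_b \leq c(\{a,b\})$ (every edge leaving $S$ ends in $N(S)$), while the objective changes by $\lambda(|S| - |N(S)|) > 0$, and $\lambda$ can be taken arbitrarily large. A small Python sketch (the instance and helper names are made up for illustration):

```python
def dual_value(u, v):
    # Objective of the dual: sum of all u_a plus sum of all v_b.
    return sum(u.values()) + sum(v.values())

def shift(u, v, S, NS, lam):
    """Raise duals on S by lam and lower duals on N(S) by lam."""
    u2 = {a: ua + (lam if a in S else 0.0) for a, ua in u.items()}
    v2 = {b: vb - (lam if b in NS else 0.0) for b, vb in v.items()}
    return u2, v2

# A set S = {a1, a2} with a single neighbor N(S) = {b1}, so |S| > |N(S)|.
u = {"a1": 0.0, "a2": 0.0}
v = {"b1": 0.0, "b2": 0.0}
u2, v2 = shift(u, v, {"a1", "a2"}, {"b1"}, 5.0)
# The objective grew by lam * (|S| - |N(S)|) = 5 * (2 - 1) = 5; repeating
# with larger and larger lam shows the dual is unbounded.
```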
Therefore, by the LP duality theorem, both programs have the same optimal value. This fact is true not only in bipartite graphs but in arbitrary graphs: in any graph, the largest size of a fractional matching equals the smallest size of a fractional vertex cover. What makes bipartite graphs special is that both these linear programs have optimal solutions in which all variable values are integers. This follows from the fact that in the fractional matching polytope of a bipartite graph, all extreme points have only integer coordinates, and the same is true for the fractional vertex-cover polytope. Therefore the above theorem implies: in any bipartite graph, the largest size of a matching equals the smallest size of a vertex cover.
The polytope described by the linear program upper bounding the sum of edges taken per vertex is integral in the case of bipartite graphs; that is, it exactly describes the matching polytope, while for general graphs it is non-integral. Hence, for bipartite graphs, it suffices to solve the corresponding linear program to obtain a valid matching. For general graphs, however, there are two other characterizations of the matching polytope, one of which makes use of the blossom inequality for odd subsets of vertices and hence allows one to relax the integer program to a linear program while still obtaining valid matchings. These characterizations are of further interest in Edmonds' famous blossom algorithm used for finding such matchings in general graphs.
You are given three classes (Student, Exam and Course, defined below) and the method generatePassedExams, which, from a given list of students and a list of courses, generates a list of students and all their successfully passed courses together with the corresponding grade. A course is considered successfully passed if the grade for that course is greater than 2.

case class Student(name: String, exams: List[Exam])
case class Exam(courseId: String, grade: Double)
case class Course(id: String, name: String)

def generatePassedExams(
    students: List[Student],
    courses: List[Course]): List[(String, String, Double)] = {
  for {
    s <- students
    e <- s.exams if e.grade > 2
    c <- courses if e.courseId == c.id
  } yield (s.name, c.name, e.grade)
}

Your task is to rewrite the method generatePassedExams to use map, flatMap and filter instead of the for-comprehension. The resulting method should of course have the same result as the for-comprehension above.
An example of a list comprehension using multiple generators:
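Such a comprehension might look as follows in Python (an illustrative stand-in, not necessarily the article's original example): two generators produce all ordered pairs, and a filter drops the diagonal.

```python
# Cartesian-product style comprehension with two generators and a filter:
pairs = [(x, y) for x in range(3) for y in range(3) if x != y]
```

The later generators vary fastest, mirroring nested loops, which is also how Scala desugars a for-comprehension with several `<-` clauses.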
Python has a dedicated bracketed syntax to express list comprehensions over finite lists. A generator expression may be used in Python versions >= 2.4; it gives lazy evaluation over its input and can be used with generators to iterate over 'infinite' input, such as the count generator function, which returns successive integers. (Subsequent use of the generator expression will determine when to stop generating values.)
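The lazy evaluation described above can be demonstrated with a generator expression over `itertools.count`; only as many values as are consumed ever get computed (the squaring example is an illustrative stand-in):

```python
from itertools import count, islice

# A generator expression over the infinite count() generator: squares of
# successive integers, evaluated lazily on demand.
squares = (n * n for n in count(1))
first_five = list(islice(squares, 5))  # consuming islice decides when to stop
```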
Consider an undirected graph $G=(V,E)$ and let $s\neq t\in V$. In the minimum (unweighted) $s,t$-cut problem, we wish to find a set $S\subseteq V$ such that $s\in S$, $t\not \in S$ and the number of edges crossing the cut is minimized. We shall use a linear program to solve this problem. Let ${P}$ be the set of all paths between $s$ and $t$ in the graph $G$. The linear program has a variable $y_e$ for each edge $e\in E$ and is defined as follows: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & & \displaystyle\sum_{e \in E} y_e &\\ \text{subject to}& & \displaystyle\sum_{e \in p} y_e \ge 1 &\forall p \in P,\\ & & y_e \ge 0 & \forall e \in E. \end{array} \end{equation*} For example, consider the following graph where the numbers on the edges depict the $y_e$-values of a feasible solution to the linear program: \begin{center} \input{cutExample} \end{center} The values on the edges depict a feasible but not optimal solution to the linear program. That it is feasible follows because each $y_e$ is non-negative and $\sum_{e\in p} y_e \geq 1$ for all $p\in P$. Indeed, for the path $s, b, a, t$ we have $y_{\{s,b\}}+ y_{\{b,a\}} + y_{\{a,t\}} = 1/4 + 1/4 + 1/2 = 1$, and similar calculations for each path $p$ between $s$ and $t$ show that $\sum_{e\in p} y_e \geq 1$. That the solution is not optimal follows because its value is $2.5$ whereas an optimal solution has value $2$. Prove that $\opt\leq \optlp$, where $\opt$ and $\optlp$ are defined as in {\bf 6a}. \\ Hint: Round a feasible linear programming solution $y$. In the (randomized) rounding it may be helpful to consider, for each vertex $v\in V$, the length of the shortest path from $s$ to $v$ in the graph where edge $e\in E$ has length $y_e$. For example, in the graph and linear programming solution depicted in the problem statement, we have that the length of the shortest path from $s$ to $a$ equals $1/2$. \\ {\em (In this problem you are asked to prove $\opt \leq \optlp$. 
Recall that you are allowed to refer to material covered in the lecture notes.)}
For weighted graphs with positive edge weights $w\colon E\rightarrow \mathbf{R}^{+}$, the weight of the cut is the sum of the weights of edges between vertices in each part: $w(S,T)=\sum_{uv\in E\colon u\in S,\, v\in T} w(uv)$, which agrees with the unweighted definition for $w=1$. A cut is sometimes called a “global cut” to distinguish it from an “$s$-$t$ cut” for a given pair of vertices, which has the additional requirement that $s\in S$ and $t\in T$. Every global cut is an $s$-$t$ cut for some $s,t\in V$. Thus, the minimum cut problem can be solved in polynomial time by iterating over all choices of $s,t\in V$ and solving the resulting minimum $s$-$t$ cut problem using the max-flow min-cut theorem and a polynomial time algorithm for maximum flow, such as the push-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include the Stoer–Wagner algorithm, which has a running time of $O(mn+n^{2}\log n)$.
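The reduction from global min cut to $s$-$t$ min cut can be sketched directly in Python. This uses a plain Edmonds–Karp max-flow (not the push-relabel or Stoer–Wagner algorithms mentioned above), and fixes one endpoint $s$: since $s$ lies on one side of every cut, trying all $t \neq s$ suffices.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: augment along shortest residual paths until none exist."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in list(cap[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Collect the augmenting path, find its bottleneck, push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

def global_min_cut(vertices, edges):
    """Min cut of an undirected weighted graph: fix s, try every t != s."""
    s, best = vertices[0], float("inf")
    for t in vertices[1:]:
        cap = defaultdict(lambda: defaultdict(int))
        for u, v, w in edges:  # undirected edge = capacity in both directions
            cap[u][v] += w
            cap[v][u] += w
        best = min(best, max_flow(cap, s, t))
    return best
```

On two unit-weight triangles joined by a single bridge edge, the minimum cut is the bridge, of weight 1.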
The cutting-plane method for solving 0–1 integer programs, first introduced for the traveling salesman problem by Dantzig, Fulkerson & Johnson (1954) and generalized to other integer programs by Gomory (1958), takes advantage of this multiplicity of possible relaxations by finding a sequence of relaxations that more tightly constrain the solution space until eventually an integer solution is obtained. This method starts from any relaxation of the given program, and finds an optimal solution using a linear programming solver. If the solution assigns integer values to all variables, it is also the optimal solution to the unrelaxed problem.
Consider the following quadratic programming relaxation of the Max Cut problem on $G=(V,E)$: \begin{align*} \textbf{maximize} \hspace{0.8cm} & \sum_{\{i,j\} \in E} (1-x_i)x_j + x_i (1-x_j) \\ \textbf{subject to}\hspace{0.8cm} & x_i \in [0,1] ~ ~ \forall i\in V \end{align*} Show that the optimal value of the quadratic relaxation actually equals the value of an optimal cut. (Unfortunately, this does not give an exact algorithm for Max Cut as the above quadratic program is NP-hard to solve (so is Max Cut).) \\ \noindent\emph{Hint: analyze basic randomized rounding.}
However, Goemans and Williamson observed a general three-step procedure for attacking this sort of problem: relax the integer quadratic program into an SDP; solve the SDP (to within an arbitrarily small additive error $\epsilon$); round the SDP solution to obtain an approximate solution to the original integer quadratic program. For Max Cut, the most natural relaxation is $\max \sum_{(i,j)\in E} \frac{1-\langle v_{i},v_{j}\rangle}{2}$ such that $\lVert v_{i}\rVert^{2}=1$, where the maximization is over vectors $\{v_{i}\}$ instead of integer scalars. This is an SDP because the objective function and constraints are all linear functions of vector inner products.
When the algorithm terminates, at least half of the edges incident to every vertex belong to the cut, for otherwise moving the vertex would improve the cut. Therefore, the cut includes at least $|E|/2$ edges. The polynomial-time approximation algorithm for Max-Cut with the best known approximation ratio is a method by Goemans and Williamson using semidefinite programming and randomized rounding that achieves an approximation ratio $\alpha \approx 0.878$, where $\alpha = \frac{2}{\pi}\min_{0\leq\theta\leq\pi}\frac{\theta}{1-\cos\theta}$.
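The local-search argument in the first sentences translates into code directly; a minimal Python sketch (the function name and graph encoding are illustrative):

```python
def local_search_max_cut(n, edges):
    """Move single vertices across the cut while doing so increases the
    number of cut edges; at termination the cut has at least |E|/2 edges,
    since every vertex then has at least half its edges cut."""
    side = [0] * n  # start with every vertex on the same side
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cut = sum(1 for a, b in edges if v in (a, b) and side[a] != side[b])
            uncut = sum(1 for a, b in edges if v in (a, b) and side[a] == side[b])
            if uncut > cut:  # flipping v gains uncut - cut cut-edges
                side[v] ^= 1
                improved = True
    return side, sum(1 for a, b in edges if side[a] != side[b])
```

On $K_4$ the procedure reaches a balanced 2-2 split, which cuts 4 of the 6 edges, comfortably above the $|E|/2 = 3$ guarantee.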
Implement a function that takes a list ls as argument and returns the length of the longest contiguous sequence of repeated elements in that list. For this second question, you are required to use foldLeft in your solution, and your solution should not be recursive. For example:

longest(List(1, 2, 2, 5, 5, 5, 1, 1, 1)) == 3

def longest[A](ls: List[A]): Int = ???
One can find an LIS by partitioning the input into a minimum number of decreasing subsequences: the number of such subsequences equals the length of the LIS, and an actual LIS is found by taking one element from each subsequence. Note that $1 \leq l \leq i+1$, because $l \geq 1$ represents the length of the increasing subsequence, and $k \geq 0$ represents the index of its termination. The length of $M$ is one more than the length of $X$, but it is possible that not all elements in this array are used by the algorithm (in fact, if the longest increasing subsequence has length $L$, then only $M[0],\ldots,M[L]$ are used by the algorithm).
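The array $M$ discussed above underlies the standard $O(n \log n)$ algorithm; a compact Python version keeps only the array of smallest tails (a sketch computing the length only, without recovering the subsequence):

```python
from bisect import bisect_left

def lis_length(xs):
    """Length of a longest strictly increasing subsequence.
    tails[k] is the smallest possible tail of an increasing
    subsequence of length k + 1 seen so far; tails stays sorted."""
    tails = []
    for x in xs:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[i] = x      # x lowers an existing tail, enabling extensions
    return len(tails)
```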
As another, more formal example, consider the following property of lists: $\text{EQ:}\quad \operatorname{len}(L +\!+\ M)=\operatorname{len}(L)+\operatorname{len}(M)$. Here ++ denotes the list concatenation operation, len() the list length, and L and M are lists. In order to prove this, we need definitions for length and for the concatenation operation. Let (h:t) denote a list whose head (first element) is h and whose tail (list of remaining elements) is t, and let [] denote the empty list.
Implement a function that inserts a given element elem into a sorted (in ascending order) list list. The resulting list should also be sorted in ascending order. Implement the function recursively.

def insert(elem: Int, list: List[Int]): List[Int] = ???
Insertion sort for int list (ascending) can be expressed concisely as follows:
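The concise expression alluded to above can be rendered as a recursive sketch in Python (the function names mirror the exercise's `insert` signature; the original article's example is in another language):

```python
def insert(elem, lst):
    """Insert elem into an ascending sorted list, keeping it sorted."""
    if not lst or elem <= lst[0]:
        return [elem] + lst
    return [lst[0]] + insert(elem, lst[1:])

def insertion_sort(lst):
    """Sort by recursively inserting each head into the sorted tail."""
    if not lst:
        return []
    return insert(lst[0], insertion_sort(lst[1:]))
```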
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how we put money in our wallet. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
In class, we saw Karger's beautiful randomized algorithm for finding a min-cut in an undirected graph $G=(V,E)$ with $n = |V|$ vertices. Each iteration of Karger's algorithm can be implemented in time $O(n^2)$, and if repeated $\Theta(n^2 \log n)$ times, Karger's algorithm returns a min-cut with probability at least $1-1/n$. However, this leads to the often prohibitively large running time of $O(n^4 \log n)$. Karger and Stein made a crucial observation that allowed them to obtain a much faster algorithm for min-cut: the Karger-Stein algorithm runs in time $O(n^2 \log^3 n)$ and finds a min-cut with probability at least $1-1/n$. Explain in a couple of sentences the main idea that allowed Karger and Stein to modify Karger's algorithm into the much faster Karger-Stein algorithm. In other words, what are the main differences between the two algorithms?
To determine a min-cut, one has to touch every edge in the graph at least once, which takes $\Theta(n^{2})$ time in a dense graph. The Karger–Stein min-cut algorithm runs in time $O(n^{2}\ln^{O(1)} n)$, which is very close to that.
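A single iteration of Karger's contraction, repeated enough times and keeping the best cut found, can be sketched as follows (the union-find bookkeeping, trial count and seed are illustrative; the graph is assumed connected):

```python
import random

def karger_min_cut(edges, trials=300, seed=1):
    """One trial contracts uniformly random edges until two super-vertices
    remain; the surviving edges form a cut.  Repeating trials boosts the
    probability of hitting a minimum cut."""
    rng = random.Random(seed)

    def find(parent, v):
        while v in parent:          # follow the contraction chain to the root
            v = parent[v]
        return v

    n = len({u for e in edges for u in e})
    best = float("inf")
    for _ in range(trials):
        parent = {}
        remaining = n
        pool = list(edges)          # edges that are not (yet) self-loops
        while remaining > 2:
            u, v = pool[rng.randrange(len(pool))]
            parent[find(parent, u)] = find(parent, v)   # contract u into v
            remaining -= 1
            pool = [(a, b) for a, b in pool
                    if find(parent, a) != find(parent, b)]
        best = min(best, len(pool))
    return best
```

On two triangles joined by a single bridge edge the minimum cut is 1, and with a few hundred trials the algorithm finds it with overwhelming probability.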
In computer science and graph theory, Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected graph. It was invented by David Karger and first published in 1993. The idea of the algorithm is based on the concept of contraction of an edge $(u,v)$ in an undirected graph $G=(V,E)$. Informally speaking, the contraction of an edge merges the nodes $u$ and $v$ into one, reducing the total number of nodes of the graph by one.
Use the integrality of the bipartite perfect matching polytope (as proved in class) to show the following classical result: \begin{itemize} \item[] The edge set of a $k$-regular bipartite graph $G=(A\cup B, E)$ can in polynomial time be partitioned into $k$ disjoint perfect matchings. \end{itemize} \noindent A graph is $k$-regular if the degree of each vertex equals $k$. Two matchings are disjoint if they do not share any edges.
Let $G=(X,Y,E)$ be a finite bipartite graph with bipartite sets $X$ and $Y$ and edge set $E$. An $X$-perfect matching (also called an $X$-saturating matching) is a matching, i.e. a set of disjoint edges, which covers every vertex in $X$. For a subset $W$ of $X$, let $N_{G}(W)$ denote the neighborhood of $W$ in $G$, the set of all vertices in $Y$ that are adjacent to at least one element of $W$. The marriage theorem in this formulation states that there is an $X$-perfect matching if and only if $|W| \leq |N_{G}(W)|$ for every subset $W$ of $X$. In other words, every subset $W$ of $X$ must have sufficiently many neighbors in $Y$.
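On small instances, Hall's condition can be verified by brute force over all subsets; a Python sketch (exponential in $|X|$, so for illustration only, with an adjacency-dict encoding chosen here):

```python
from itertools import combinations

def hall_condition(X, adj):
    """Check |W| <= |N(W)| for every nonempty subset W of X,
    where adj[x] is the set of neighbors of x in Y."""
    for r in range(1, len(X) + 1):
        for W in combinations(X, r):
            neighborhood = set().union(*(adj[x] for x in W))
            if len(neighborhood) < len(W):
                return False
    return True
```

For instance, two left vertices sharing their only neighbor violate the condition, so no $X$-perfect matching exists in that case.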
Homer, Marge, and Lisa Simpson have decided to go for a hike in the beautiful Swiss Alps. Homer has greatly surpassed Marge's expectations and carefully prepared to bring $n$ items whose total size equals the capacity of his and his wife Marge's two knapsacks. Lisa does not carry a knapsack due to her young age. More formally, Homer and Marge each have a knapsack of capacity $C$, there are $n$ items where item $i=1, 2, \ldots, n$ has size $s_i >0$, and we have $\sum_{i=1}^n s_i = 2\cdot C$ due to Homer's meticulous preparation. However, being Homer after all, Homer has missed one thing: although the items fit perfectly in the two knapsacks fractionally, it might be impossible to pack them because items must be assigned integrally! Luckily Lisa has studied linear programming and she saves the family holiday by proposing the following solution: \begin{itemize} \item Take \emph{any} extreme point $x^*$ of the linear program: \begin{align*} x_{iH} + x_{iM}& \leq 1 \qquad \quad \mbox{for all items $i=1,2,\ldots, n$}\\ \sum_{i=1}^n s_i x_{iH} & = C \\ \sum_{i=1}^n s_i x_{iM} & = C \\ 0 \leq x_{ij} &\leq 1 \qquad \quad \mbox{for all items $i=1,2, \ldots, n$ and $j\in \{H,M\}$}. \end{align*} \item Divide the items as follows: \begin{itemize} \item Homer and Marge will carry the items $\{i: x^*_{iH} = 1\}$ and $\{i: x^*_{iM}=1\}$, respectively. \item Lisa will carry any remaining items. \end{itemize} \end{itemize} {Prove} that Lisa needs to carry at most one item. \\[1mm] {\em (In this problem you are asked to give a formal proof of the statement that Lisa needs to carry at most one item. You are not allowed to change Lisa's solution for dividing the items among the family members. Recall that you are allowed to refer to material covered in the lecture notes.) }
The most common problem being solved is the 0-1 knapsack problem, which restricts the number $x_{i}$ of copies of each kind of item to zero or one. Given a set of $n$ items numbered from 1 up to $n$, each with a weight $w_{i}$ and a value $v_{i}$, along with a maximum weight capacity $W$: maximize $\sum_{i=1}^{n} v_{i}x_{i}$ subject to $\sum_{i=1}^{n} w_{i}x_{i}\leq W$ and $x_{i}\in\{0,1\}$. Here $x_{i}$ represents the number of instances of item $i$ to include in the knapsack. Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity. The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number $x_{i}$ of copies of each kind of item to a maximum non-negative integer value $c$: maximize $\sum_{i=1}^{n} v_{i}x_{i}$ subject to $\sum_{i=1}^{n} w_{i}x_{i}\leq W$ and $x_{i}\in\{0,1,2,\ldots,c\}$.
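The 0-1 variant admits the classic dynamic program over capacities; a Python sketch:

```python
def knapsack_01(values, weights, W):
    """dp[w] = best total value achievable with total weight at most w."""
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]
```

The downward capacity loop is what distinguishes the 0-1 case from the unbounded case, where iterating upwards would allow an item to be reused.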
The unbounded knapsack problem (UKP) places no restriction on the number of copies of each kind of item; besides, here we assume that $x_{i} > 0$: $m = \max\left(\sum_{i=1}^{n} v_{i}x_{i}\right)$ subject to $\sum_{i=1}^{n} w_{i}x_{i}\leq w'$ and $x_{i} > 0$. Observe that $m[w]$, the maximum value achievable with capacity $w$, has the following properties: 1. $m[0]=0$ (the sum of zero items, i.e., the summation of the empty set). 2. $m[w]=\max\,(v_{1}+m[w-w_{1}],\ v_{2}+m[w-w_{2}],\ \ldots)$, where the maximum is taken over the items with $w_{i} \leq w$.
Devise an algorithm for the following graph orientation problem: \begin{description} \item[Input:] An undirected graph $G = (V,E)$ and capacities $k : V \rightarrow \mathbb{Z}$ for each vertex. \item[Output:] If possible, an orientation of $G$ such that each vertex $v\in V$ has in-degree at most $k(v)$. \end{description} An orientation of an undirected graph $G$ replaces each undirected edge $\{u,v\}$ by either an arc $(u,v)$ from $u$ to $v$ or by an $(v,u)$ from $v$ to $u$. \\[2mm] \noindent\emph{(Hint: reduce the problem to matroid intersection. You can also use bipartite matching\ldots)}
In graph theory, an orientation of an undirected graph is an assignment of a direction to each edge, turning the initial graph into a directed graph.
Another interesting connection concerns orientations of graphs. An orientation of an undirected graph G is any directed graph obtained by choosing one of the two possible orientations for each edge. An example of an orientation of the complete graph Kk is the transitive tournament T→k with vertices 1,2,…,k and arcs from i to j whenever i < j. A homomorphism between orientations of graphs G and H yields a homomorphism between the undirected graphs G and H, simply by disregarding the orientations. On the other hand, given a homomorphism G → H between undirected graphs, any orientation H→ of H can be pulled back to an orientation G→ of G so that G→ has a homomorphism to H→.
Consider the following case class definitions:

case class Node(id: Int)
case class Edge(from: Node, to: Node)

Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Write a reachable function with the following signature to provide this functionality:

def reachable(n: Int, init: Set[Node], edges: List[Edge]): Set[Node]

You can assume that n >= 0.
For a directed graph G = (V, E), with vertex set V and edge set E, the reachability relation of G is the transitive closure of E, which is to say the set of all ordered pairs (s, t) of vertices in V for which there exists a sequence of vertices v_0 = s, v_1, v_2, …, v_k = t such that the edge (v_{i−1}, v_i) is in E for all 1 ≤ i ≤ k.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following case class definitions:

case class Node(id: Int)
case class Edge(from: Node, to: Node)

Let us represent a directed graph G as the list of all its edges (of type List[Edge]). We are interested in computing the set of all nodes reachable in exactly n steps from a set of initial nodes. Write a reachable function with the following signature to provide this functionality:

def reachable(n: Int, init: Set[Node], edges: List[Edge]): Set[Node]

You can assume that n >= 0.
In graph theory, reachability refers to the ability to get from one vertex to another within a graph. A vertex s can reach a vertex t (and t is reachable from s) if there exists a sequence of adjacent vertices (i.e. a walk) which starts with s and ends with t. In an undirected graph, reachability between all pairs of vertices can be determined by identifying the connected components of the graph. Any pair of vertices in such a graph can reach each other if and only if they belong to the same connected component; therefore, in such a graph, reachability is symmetric (s reaches t iff t reaches s). The connected components of an undirected graph can be identified in linear time. The remainder of this article focuses on the more difficult problem of determining pairwise reachability in a directed graph (which, incidentally, need not be symmetric).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
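Before writing the Scala version asked for above, the iteration itself can be sketched in Python: one step maps a node set to the set of all successors along an edge, and the loop applies that step n times.

```python
def reachable(n, init, edges):
    """Python sketch of the requested function: the nodes reachable in
    exactly n steps from the nodes in init, following the directed edges.
    One step replaces the current node set by the set of all successors;
    the loop applies it n times (n = 0 returns init itself)."""
    current = set(init)
    for _ in range(n):
        current = {t for (s, t) in edges if s in current}
    return current
```

The Scala solution has the same shape: a fold (or recursion) applying a one-step successor function n times to init.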
Consider a bipartite graph $G=(V,E)$ where $V$ is partitioned into $A$ and $B$. Let $(A, \mathcal{I})$ be the matroid with ground set $A$ and \begin{align*} \mathcal{I} = \{ A' \subseteq A: \mbox{ $G$ has a matching in which every vertex of $A'$ is matched}\}\,. \end{align*} Recall that we say that a vertex $v$ is matched by a matching $M$ if there is an edge in $M$ incident to $v$. Show that $(A, \mathcal{I})$ is indeed a matroid by verifying the two axioms.
Let G = (U,V,E) be a bipartite graph. One may define a partition matroid MU on the ground set E, in which a set of edges is independent if no two of the edges have the same endpoint in U. Similarly one may define a matroid MV in which a set of edges is independent if no two of the edges have the same endpoint in V. Any set of edges that is independent in both MU and MV has the property that no two of its edges share an endpoint; that is, it is a matching. Thus, the largest common independent set of MU and MV is a maximum matching in G. Similarly, if each edge has a weight, then the maximum-weight independent set of MU and MV is a maximum-weight matching in G.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a bipartite graph $G=(V,E)$ where $V$ is partitioned into $A$ and $B$. Let $(A, \mathcal{I})$ be the matroid with ground set $A$ and \begin{align*} \mathcal{I} = \{ A' \subseteq A: \mbox{ $G$ has a matching in which every vertex of $A'$ is matched}\}\,. \end{align*} Recall that we say that a vertex $v$ is matched by a matching $M$ if there is an edge in $M$ incident to $v$. Show that $(A, \mathcal{I})$ is indeed a matroid by verifying the two axioms.
A maximum matching in a graph is a set of edges that is as large as possible subject to the condition that no two edges share an endpoint. In a bipartite graph with bipartition (U, V), the sets of edges satisfying the condition that no two edges share an endpoint in U are the independent sets of a partition matroid with one block per vertex in U and with each of the numbers d_i equal to one. The sets of edges satisfying the condition that no two edges share an endpoint in V are the independent sets of a second partition matroid. Therefore, the bipartite maximum matching problem can be represented as a matroid intersection of these two matroids. More generally, the matchings of a graph may be represented as an intersection of two matroids if and only if every odd cycle in the graph is a triangle containing two or more degree-two vertices.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
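The two axioms the question asks for (downward closure and the exchange property) can be sanity-checked by brute force on a small instance: the independence oracle is exactly "G has a matching covering A′", computed with augmenting paths. A hedged Python sketch (the graph and all names are made up for the demo; this checks the axioms numerically, it is not the requested proof):

```python
from itertools import combinations

def has_matching_covering(A_sub, B, edges):
    """True iff the bipartite graph (edges from A-side to B-side) has a
    matching covering every vertex of A_sub (augmenting-path search)."""
    adj = {a: [b for (x, b) in edges if x == a] for a in A_sub}
    match = {}  # b -> a

    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if b not in match or augment(match[b], seen):
                    match[b] = a
                    return True
        return False

    return all(augment(a, set()) for a in A_sub)

def check_matroid_axioms(A, B, edges):
    """Brute-force check, on a small graph, that
    I = {A' : some matching covers A'} satisfies the two matroid axioms."""
    indep = {frozenset(S)
             for r in range(len(A) + 1)
             for S in combinations(A, r)
             if has_matching_covering(list(S), B, edges)}
    # Axiom 1 (downward closure): removing an element keeps independence.
    ax1 = all(S - {x} in indep for S in indep for x in S)
    # Axiom 2 (exchange): any smaller independent set can be grown from a
    # larger one by adding some element of the larger set.
    ax2 = all(any(S | {x} in indep for x in T - S)
              for S in indep for T in indep if len(S) < len(T))
    return ax1 and ax2
```

This family (matchable subsets of one side) is the transversal matroid of G, so the check should succeed on any bipartite instance.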
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & \hspace{0.4cm}4x_1 - 6x_2 + 4x_3 \\ \textbf{subject to}\hspace{0.6cm} & x_1 - 3x_2 + x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{1.90cm}x_1 + s_2 = 8 \\ \hspace{0.8cm} & \hspace{0.65cm} 3x_2 + 2x_3 + s_3 = 6 \\ \hspace{0.8cm} &\hspace{-0.35cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + 3x_2 - x_3 - s_1 \\ s_2 &= 7 -3x_2 + x_3 + s_1 \\ s_3 &= 6 - 3x_2 - 2x_3 \\ \cline{1-2} z &= 4 + 6 x_2 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.
Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations each of which give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose we use the Simplex method to solve the following linear program: \begin{align*} \textbf{maximize} \hspace{0.8cm} & \hspace{0.4cm}4x_1 - 6x_2 + 4x_3 \\ \textbf{subject to}\hspace{0.6cm} & x_1 - 3x_2 + x_3 + s_1 = 1 \\ \hspace{0.8cm} & \hspace{1.90cm}x_1 + s_2 = 8 \\ \hspace{0.8cm} & \hspace{0.65cm} 3x_2 + 2x_3 + s_3 = 6 \\ \hspace{0.8cm} &\hspace{-0.35cm} x_1,\: x_2, \: x_3, \:s_1, \:s_2, \:s_3 \geq 0 \end{align*} At the current step, we have the following Simplex tableau: \begin{align*} \hspace{1cm} x_1 &= 1 + 3x_2 - x_3 - s_1 \\ s_2 &= 7 -3x_2 + x_3 + s_1 \\ s_3 &= 6 - 3x_2 - 2x_3 \\ \cline{1-2} z &= 4 + 6 x_2 - 4s_1 \end{align*} Write the tableau obtained by executing one iteration (pivot) of the Simplex method starting from the above tableau.
Using the simplex method to solve a linear program produces a set of equations of the form x_i + ∑_j ā_{i,j} x_j = b̄_i, where x_i is a basic variable and the x_j are the nonbasic variables (i.e. the basic solution, which is an optimal solution to the relaxed linear program, is x_i = b̄_i and x_j = 0). We write the coefficients b̄_i and ā_{i,j} with a bar to denote the last tableau produced by the simplex method. These coefficients are different from the coefficients in the matrix A and the vector b.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
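The requested pivot can be carried out mechanically on the dictionary form of the tableau above. The following Python sketch (the dict encoding of rows is a choice made for this demo) performs one pivot with exact rational arithmetic:

```python
from fractions import Fraction as F

def pivot(rows, z, entering):
    """One Simplex pivot on a dictionary (the form used in the tableau
    above: basic variable = constant + linear form in the nonbasic ones).
    rows maps each basic variable to {None: constant, var: coefficient};
    z is the objective row in the same encoding."""
    # Ratio test: rows with a negative coefficient on the entering variable
    # bound its increase by -constant/coefficient; the tightest row leaves.
    leaving, bound = None, None
    for b, row in rows.items():
        c = row.get(entering, F(0))
        if c < 0 and (bound is None or -row[None] / c < bound):
            leaving, bound = b, -row[None] / c
    # Solve the leaving row for the entering variable ...
    lrow = dict(rows[leaving])
    ce = lrow.pop(entering)
    erow = {v: -coef / ce for v, coef in lrow.items()}
    erow[leaving] = F(1) / ce

    # ... and substitute it into every other row and into the objective.
    def subst(row):
        r = dict(row)
        c = r.pop(entering, F(0))
        for v, coef in erow.items():
            r[v] = r.get(v, F(0)) + c * coef
        return {v: coef for v, coef in r.items() if v is None or coef != 0}

    new_rows = {b: subst(row) for b, row in rows.items() if b != leaving}
    new_rows[entering] = erow
    return new_rows, subst(z)

rows = {
    "x1": {None: F(1), "x2": F(3), "x3": F(-1), "s1": F(-1)},
    "s2": {None: F(7), "x2": F(-3), "x3": F(1), "s1": F(1)},
    "s3": {None: F(6), "x2": F(-3), "x3": F(-2)},
}
z = {None: F(4), "x2": F(6), "s1": F(-4)}
new_rows, new_z = pivot(rows, z, "x2")  # x2 enters: its z-coefficient is +6
```

The pivot brings x2 into the basis in place of s3 (ratio test: min(7/3, 2) = 2), giving x2 = 2 − (2/3)x3 − (1/3)s3 and the objective row z = 16 − 4x3 − 4s1 − 2s3.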
Professor Ueli von Gruy\`{e}res has worked intensely throughout his career to get a good estimator of the yearly consumption of cheese in Switzerland. Recently, he had a true breakthrough. He was able to design an incredibly efficient randomized algorithm \Alg that outputs a random value $X$ satisfying \begin{align*} \mathbb{E}[X] = c \qquad \mbox{ and } \qquad \textrm{Var}[X] = c^2\,, \end{align*} where $c$ is the (unknown) yearly consumption of cheese in Switzerland. In other words, \Alg is an unbiased estimator of $c$ with variance $c^2$. Use Ueli von Gruy\`{e}res' algorithm \Alg to design an algorithm that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee} \Pr[|Y - c| \geq \epsilon c] \leq \delta\qquad \mbox{ where $\epsilon > 0$ and $\delta >0$ are small constants.} \end{align} Your algorithm should increase the resource requirements (its running time and space usage) by at most a factor $O(1/\epsilon^2 \cdot \log(1/\delta))$ compared to the requirements of $\Alg$. \\[0mm] {\em (In this problem you are asked to (i) design the algorithm using $\mathcal{A}$, (ii) show that it satisfies the guarantee~\eqref{eq:guarantee}, and (iii) analyze how much the resource requirements increase compared to that of simply running $\mathcal{A}$. Recall that you are allowed to refer to material covered in the course.)}
A far simpler possibility comes to mind: just draw a straight line between two points and read off all the relevant data graphically. However, even though the paper clearly shows the income received rising by 100 francs per sample family, the food expenditure is definitely not decreasing along any fixed arithmetic trend; it could be a geometric rate. His data could have been derived from an approximation following several news articles at the time.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Professor Ueli von Gruy\`{e}res has worked intensely throughout his career to get a good estimator of the yearly consumption of cheese in Switzerland. Recently, he had a true breakthrough. He was able to design an incredibly efficient randomized algorithm \Alg that outputs a random value $X$ satisfying \begin{align*} \mathbb{E}[X] = c \qquad \mbox{ and } \qquad \textrm{Var}[X] = c^2\,, \end{align*} where $c$ is the (unknown) yearly consumption of cheese in Switzerland. In other words, \Alg is an unbiased estimator of $c$ with variance $c^2$. Use Ueli von Gruy\`{e}res' algorithm \Alg to design an algorithm that outputs a random value $Y$ with the following guarantee: \begin{align} \label{eq:guarantee} \Pr[|Y - c| \geq \epsilon c] \leq \delta\qquad \mbox{ where $\epsilon > 0$ and $\delta >0$ are small constants.} \end{align} Your algorithm should increase the resource requirements (its running time and space usage) by at most a factor $O(1/\epsilon^2 \cdot \log(1/\delta))$ compared to the requirements of $\Alg$. \\[0mm] {\em (In this problem you are asked to (i) design the algorithm using $\mathcal{A}$, (ii) show that it satisfies the guarantee~\eqref{eq:guarantee}, and (iii) analyze how much the resource requirements increase compared to that of simply running $\mathcal{A}$. Recall that you are allowed to refer to material covered in the course.)}
A far simpler possibility comes to mind: just draw a straight line between two points and read off all the relevant data graphically. However, even though the paper clearly shows the income received rising by 100 francs per sample family, the food expenditure is definitely not decreasing along any fixed arithmetic trend; it could be a geometric rate. His data could have been derived from an approximation following several news articles at the time.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
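The standard answer to the question above is median-of-means, and it can be sketched numerically. The exponential stand-in for \Alg below is an assumption made for the demo (an Exp(1/c) variable does have mean c and variance c², matching the professor's guarantee); the constants are the usual Chebyshev/Chernoff choices:

```python
import math
import random

def boosted_estimate(run_alg, eps, delta, rng):
    """Median-of-means boosting of an unbiased estimator with Var = c^2.

    Averaging t = ceil(3/eps^2) independent runs drops the variance from
    c^2 to c^2/t, so by Chebyshev one average lands outside
    [c(1-eps), c(1+eps)] with probability at most 1/(t*eps^2) <= 1/3; the
    median of k = O(log(1/delta)) such averages then fails only if at
    least half of them fail, which a Chernoff bound makes rarer than
    delta. Total runs: O(1/eps^2 * log(1/delta)), as required."""
    t = math.ceil(3 / eps ** 2)
    k = 2 * math.ceil(12 * math.log(1 / delta)) + 1  # odd, O(log(1/delta))
    means = sorted(sum(run_alg(rng) for _ in range(t)) / t for _ in range(k))
    return means[k // 2]

# Stand-in for \Alg, an assumption for the demo: Exp(1/c) has mean c and
# variance c^2, matching the professor's guarantee, here with c = 1000.
c = 1000.0
rng = random.Random(0)
Y = boosted_estimate(lambda r: r.expovariate(1 / c), 0.25, 0.05, rng)
```

The resource overhead is exactly the number of extra runs, t·k = O(1/ε² · log(1/δ)).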
Let $A \in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$ and $c\in \mathbb{R}^n$. Consider the following linear program with $n$ variables: \begin{align*} \textbf{maximize} \hspace{0.8cm} & c^Tx \\ \textbf{subject to}\hspace{0.8cm} & Ax =b \\ \hspace{0.8cm} & x \geq 0 \end{align*} Show that any extreme point $x^*$ has at most $m$ non-zero entries, i.e., $|\{i: x^*_i > 0 \}| \leq m$. \\[-0.2cm] \noindent \emph{Hint: what happens if the columns corresponding to non-zero entries in $x^*$ are linearly dependent?}\\[-0.2cm] {\small (If you are in a good mood you can prove the following stronger statement: $x^*$ is an extreme point if and only if the columns of $A$ corresponding to non-zero entries of $x^*$ are linearly independent.)}
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs. It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point. If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $A \in \mathbb{R}^{m\times n}$, $b\in \mathbb{R}^m$ and $c\in \mathbb{R}^n$. Consider the following linear program with $n$ variables: \begin{align*} \textbf{maximize} \hspace{0.8cm} & c^Tx \\ \textbf{subject to}\hspace{0.8cm} & Ax =b \\ \hspace{0.8cm} & x \geq 0 \end{align*} Show that any extreme point $x^*$ has at most $m$ non-zero entries, i.e., $|\{i: x^*_i > 0 \}| \leq m$. \\[-0.2cm] \noindent \emph{Hint: what happens if the columns corresponding to non-zero entries in $x^*$ are linearly dependent?}\\[-0.2cm] {\small (If you are in a good mood you can prove the following stronger statement: $x^*$ is an extreme point if and only if the columns of $A$ corresponding to non-zero entries of $x^*$ are linearly independent.)}
On Bauer's characterization of extreme points. Math. 21, 502–505 (1970).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
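The hint above can be made concrete on a tiny instance: if the columns supporting x* are linearly dependent, a kernel vector y supported on support(x*) yields two feasible points x* ± εy that average to x*, so x* is not extreme. A minimal numeric illustration (the instance is made up; m = 1, and x has two nonzero entries):

```python
from fractions import Fraction as F

# A is 1 x 2, so m = 1, and x = (1/2, 1/2) has two nonzero entries. Its
# support columns [1] and [1] are linearly dependent: y below lies in
# their kernel (A @ y = 0) and is supported on support(x). Then
# x + eps*y and x - eps*y are both feasible and average to x, so x is
# not an extreme point, exactly as the proof sketch predicts.
A = [[F(1), F(1)]]
b = [F(1)]
x = [F(1, 2), F(1, 2)]
y = [F(1), F(-1)]            # A @ y = 0, supported on support(x)
eps = F(1, 4)
x_plus = [xi + eps * yi for xi, yi in zip(x, y)]
x_minus = [xi - eps * yi for xi, yi in zip(x, y)]

def feasible(v):
    """Check Av = b and v >= 0 exactly, with rational arithmetic."""
    return all(vi >= 0 for vi in v) and \
        all(sum(a * vi for a, vi in zip(row, v)) == bi
            for row, bi in zip(A, b))
```

Any extreme point of this instance must therefore have at most m = 1 nonzero entry, as the statement asserts.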
Consider the program below. Tick the correct answer.

def fun(x: List[Int]) = if x.isEmpty then None else Some(x)
val lists = List(List(1, 2, 3), List(), List(4, 5, 6))
for
  l <- lists
  v1 <- fun(l)
  v2 <- fun(v1)
yield v2
Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the program below. Tick the correct answer.

def fun(x: List[Int]) = if x.isEmpty then None else Some(x)
val lists = List(List(1, 2, 3), List(), List(4, 5, 6))
for
  l <- lists
  v1 <- fun(l)
  v2 <- fun(v1)
yield v2
Python uses a dedicated syntax to express list comprehensions over finite lists. A generator expression may be used in Python versions >= 2.4, which gives lazy evaluation over its input, and can be used with generators to iterate over 'infinite' input such as the count generator function which returns successive integers (subsequent use of the generator expression will determine when to stop generating values).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
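The for-expression above desugars into flatMap/map calls, with each Option adapted to a sequence. A Python model of that desugaring (Option rendered as a 0- or 1-element list, an encoding chosen for this sketch) reproduces the behaviour:

```python
def fun(x):
    """Stand-in for the Scala fun: None for an empty list, Some(x)
    otherwise, modelled as a 0- or 1-element list so flatMap-style
    chaining works."""
    return [] if not x else [x]

lists = [[1, 2, 3], [], [4, 5, 6]]

# The for-expression desugars into nested flatMap/map calls; in the
# list-of-lists model that is exactly a comprehension with one clause
# per generator.
result = [v2
          for l in lists
          for v1 in fun(l)
          for v2 in fun(v1)]
```

The empty list contributes nothing (fun returns None for it), so result is [[1, 2, 3], [4, 5, 6]], matching the Scala value List(List(1, 2, 3), List(4, 5, 6)).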
The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons. All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List[Int] and f: Int => Int. Given the following lemmas:

(MapNil) Nil.map(f) === Nil
(MapCons) (x :: xs).map(f) === f(x) :: xs.map(f)
(MapTrNil) Nil.mapTr(f, ys) === ys
(MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
(NilAppend) Nil ++ xs === xs
(ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys)

Let us first prove the following lemma:

(AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)

We prove it by induction on l. Base case: l is Nil. Therefore, we need to prove: Nil.mapTr(f, y :: ys) === y :: Nil.mapTr(f, ys). What exact sequence of lemmas should we apply to rewrite the left hand-side (Nil.mapTr(f, y :: ys)) to the right hand-side (y :: Nil.mapTr(f, ys))?
l ∈ J_2. Every map f ∈ ∏J_• := ∏_{i∈I} J_i = J_1 × J_2 = K × L can be bijectively identified with the pair (f(1), f(2)) ∈ K × L (the inverse sends (k, l) ∈ K × L to the map f_{(k,l)} ∈ ∏J_• defined by 1 ↦ k and 2 ↦ l; this is technically just a change of notation). Recall the earlier "∩∪ to ∪∩" identity, Eq. 5. Expanding and simplifying its left hand side, and doing the same to its right hand side, reduces the general identity Eq. 5 down to the previously given set equality Eq. 3b.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The goal of the 4 following questions is to prove that the methods map and mapTr are equivalent. The former is the version seen in class and is specified by the lemmas MapNil and MapCons. The latter is a tail-recursive version and is specified by the lemmas MapTrNil and MapTrCons. All lemmas on this page hold for all x: Int, y: Int, xs: List[Int], ys: List[Int], l: List[Int] and f: Int => Int. Given the following lemmas:

(MapNil) Nil.map(f) === Nil
(MapCons) (x :: xs).map(f) === f(x) :: xs.map(f)
(MapTrNil) Nil.mapTr(f, ys) === ys
(MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
(NilAppend) Nil ++ xs === xs
(ConsAppend) (x :: xs) ++ ys === x :: (xs ++ ys)

Let us first prove the following lemma:

(AccOut) l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)

We prove it by induction on l. Base case: l is Nil. Therefore, we need to prove: Nil.mapTr(f, y :: ys) === y :: Nil.mapTr(f, ys). What exact sequence of lemmas should we apply to rewrite the left hand-side (Nil.mapTr(f, y :: ys)) to the right hand-side (y :: Nil.mapTr(f, ys))?
The maps between the kernels and the maps between the cokernels are induced in a natural manner by the given (horizontal) maps because of the diagram's commutativity. The exactness of the two induced sequences follows in a straightforward way from the exactness of the rows of the original diagram. The important statement of the lemma is that a connecting homomorphism d exists which completes the exact sequence. In the case of abelian groups or modules over some ring, the map d can be constructed as follows: Pick an element x in ker c and view it as an element of C; since g is surjective, there exists y in B with g(y) = x. Because of the commutativity of the diagram, we have g'(b(y)) = c(g(y)) = c(x) = 0 (since x is in the kernel of c), and therefore b(y) is in the kernel of g' .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
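The lemma being proved can also be sanity-checked on concrete values before attempting the rewriting proof. A small Python transcription of the tail-recursive map (lists standing in for the Scala List type):

```python
def map_tr(f, l, ys):
    """Tail-recursive map as specified by the lemmas above:
    (MapTrNil)  Nil.mapTr(f, ys) === ys
    (MapTrCons) (x :: xs).mapTr(f, ys) === xs.mapTr(f, ys ++ (f(x) :: Nil))
    """
    if not l:
        return ys                        # (MapTrNil)
    x, xs = l[0], l[1:]
    return map_tr(f, xs, ys + [f(x)])    # (MapTrCons)

def check_acc_out(f, l, y, ys):
    """Check the AccOut lemma on concrete values:
    l.mapTr(f, y :: ys) === y :: l.mapTr(f, ys)."""
    return map_tr(f, l, [y] + ys) == [y] + map_tr(f, l, ys)
```

In the base case of AccOut, both sides reduce to y :: ys by MapTrNil, so the rewriting sequence for the base case is just MapTrNil (applied on the left hand-side, and, read backwards, on the right).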
Consider the following definition of trees representing higher-order functions, as well as a recursive function subst0.

enum Expr:
  case C(c: BigInt)
  case N(name: String)
  case BinOp(op: BinOps, e1: Expr, e2: Expr)
  case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr)
  case Call(fun: Expr, arg: Expr)
  case Fun(param: String, body: Expr)

import Expr._

enum BinOps:
  case Plus, Minus, Times, Power, LessEq

def subst0(e: Expr, n: String, r: Expr): Expr = e match
  case C(c) => e
  case N(s) => if s == n then r else e
  case BinOp(op, e1, e2) =>
    BinOp(op, subst0(e1, n, r), subst0(e2, n, r))
  case IfNonzero(cond, trueE, falseE) =>
    IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r))
  case Call(f, arg) =>
    Call(subst0(f, n, r), subst0(arg, n, r))
  case Fun(formal, body) =>
    if formal == n then e
    else Fun(formal, subst0(body, n, r))

And consider the following expression:

val e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))

What is subst0(e, "y", C(42)) equal to?
Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed. The evaluation operation is defined as follows: (x, y, and z represent expressions made from the functions S, K, and I, and set values): I returns its argument: Ix = x. K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x: Kxy = x. S is a substitution operator.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following definition of trees representing higher-order functions, as well as a recursive function subst0.

enum Expr:
  case C(c: BigInt)
  case N(name: String)
  case BinOp(op: BinOps, e1: Expr, e2: Expr)
  case IfNonzero(cond: Expr, trueE: Expr, falseE: Expr)
  case Call(fun: Expr, arg: Expr)
  case Fun(param: String, body: Expr)

import Expr._

enum BinOps:
  case Plus, Minus, Times, Power, LessEq

def subst0(e: Expr, n: String, r: Expr): Expr = e match
  case C(c) => e
  case N(s) => if s == n then r else e
  case BinOp(op, e1, e2) =>
    BinOp(op, subst0(e1, n, r), subst0(e2, n, r))
  case IfNonzero(cond, trueE, falseE) =>
    IfNonzero(subst0(cond,n,r), subst0(trueE,n,r), subst0(falseE,n,r))
  case Call(f, arg) =>
    Call(subst0(f, n, r), subst0(arg, n, r))
  case Fun(formal, body) =>
    if formal == n then e
    else Fun(formal, subst0(body, n, r))

And consider the following expression:

val e = Call(N("exists"), Fun("y", Call(Call(N("less"), N("x")), N("y"))))

What is subst0(e, "y", C(42)) equal to?
4 × (3 × (2 × ((λn. (1, if n = 0; else n × ((Y G) (n−1)))) (2−1))))
4 × (3 × (2 × (1, if 1 = 0; else 1 × ((Y G) (1−1)))))
4 × (3 × (2 × (1 × (G (Y G) (1−1)))))
4 × (3 × (2 × (1 × ((λn. (1, if n = 0; else n × ((Y G) (n−1)))) (1−1)))))
4 × (3 × (2 × (1 × (1, if 0 = 0; else 0 × ((Y G) (0−1))))))
4 × (3 × (2 × (1 × (1))))
24

Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using Y, every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
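The answer to the subst0 question can be checked mechanically. A Python transcription over a tagged-tuple AST (the tuple encoding, e.g. ("N", "x") or ("Fun", "y", body), is an assumption of this sketch) shows why the substitution does nothing: the inner Fun binds "y", so subst0 stops at the binder.

```python
def subst0(e, n, r):
    """Transcription of the Scala subst0 on tagged tuples."""
    tag = e[0]
    if tag == "C":
        return e
    if tag == "N":
        return r if e[1] == n else e
    if tag == "BinOp":
        return ("BinOp", e[1], subst0(e[2], n, r), subst0(e[3], n, r))
    if tag == "IfNonzero":
        return ("IfNonzero", subst0(e[1], n, r),
                subst0(e[2], n, r), subst0(e[3], n, r))
    if tag == "Call":
        return ("Call", subst0(e[1], n, r), subst0(e[2], n, r))
    if tag == "Fun":
        # The binder shadows n: no substitution under Fun(n, ...).
        return e if e[1] == n else ("Fun", e[1], subst0(e[2], n, r))

e = ("Call", ("N", "exists"),
     ("Fun", "y", ("Call", ("Call", ("N", "less"), ("N", "x")), ("N", "y"))))
```

Because the Fun node's formal parameter equals "y", subst0(e, "y", C(42)) returns e unchanged; substituting for "x" instead would reach the free occurrence of N("x").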
Consider a data stream $\sigma=(a_1,\ldots, a_m)$, with $a_j\in [n]$ for every $j=1,\ldots, m$, where we let $[n]:=\{1, 2, \ldots, n\}$ to simplify notation. For $i\in [n]$ let $f_i$ denote the number of times element $i$ appeared in the stream $\sigma$. We say that a stream $\sigma$ is {\em approximately sparse} if there exists $i^*\in [n]$ such that $f_{i^*}=\lceil n^{1/4}\rceil$ and for all $i\in [n]\setminus \{i^*\}$ one has $f_i\leq 10$. We call $i^*$ the {\em dominant} element of $\sigma$. Give a single pass streaming algorithm that finds the dominant element $i^*$ in the input stream as long as the stream is approximately sparse. Your algorithm should succeed with probability at least $9/10$ and use $O(n^{1/2}\log^2 n)$ bits of space. You may assume knowledge of $n$ (and that $n$ is larger than an absolute constant).
Instance: A stream of elements x_1, x_2, …, x_s with repetitions, and an integer m. Let n be the number of distinct elements, namely n = |{x_1, x_2, …, x_s}|, and let these elements be {e_1, e_2, …, e_n}. Objective: Find an estimate n̂ of n using only m storage units, where m ≪ n. An example of an instance for the cardinality estimation problem is the stream: a, b, a, c, d, b, d. For this instance, n = |{a, b, c, d}| = 4.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a data stream $\sigma=(a_1,\ldots, a_m)$, with $a_j\in [n]$ for every $j=1,\ldots, m$, where we let $[n]:=\{1, 2, \ldots, n\}$ to simplify notation. For $i\in [n]$ let $f_i$ denote the number of times element $i$ appeared in the stream $\sigma$. We say that a stream $\sigma$ is {\em approximately sparse} if there exists $i^*\in [n]$ such that $f_{i^*}=\lceil n^{1/4}\rceil$ and for all $i\in [n]\setminus \{i^*\}$ one has $f_i\leq 10$. We call $i^*$ the {\em dominant} element of $\sigma$. Give a single pass streaming algorithm that finds the dominant element $i^*$ in the input stream as long as the stream is approximately sparse. Your algorithm should succeed with probability at least $9/10$ and use $O(n^{1/2}\log^2 n)$ bits of space. You may assume knowledge of $n$ (and that $n$ is larger than an absolute constant).
In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream. More formally, fix some positive constant c > 1, let the length of the stream be m, and let f_i denote the frequency of value i in the stream. The frequent elements problem is to output the set {i | f_i > m/c}. Some notable algorithms are: Boyer–Moore majority vote algorithm, Count-Min sketch, lossy counting, multi-stage Bloom filters, Misra–Gries heavy hitters algorithm and Misra–Gries summary.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
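One plausible one-pass approach (a hedged illustration, not necessarily the intended graded solution): a Count-Min sketch with width about √n and depth about log n, which fits the O(√n log² n) bits budget when each counter takes O(log n) bits, plus a running candidate whose estimated frequency is maximal. The demo below uses frequencies with a wider gap than n^{1/4} versus 10, so that the effect is visible at small n without the full probabilistic analysis.

```python
import math
import random

def find_dominant(stream, n, seed=0):
    """One-pass heavy-element sketch: Count-Min with width ~sqrt(n) and
    depth ~log n, tracking the element whose min-row estimate is largest.
    Since estimates only overcount, the truly dominant element's estimate
    is at least its frequency, while a light element wins only if hash
    collisions inflate it in every row simultaneously."""
    width = max(4, math.isqrt(n))
    depth = max(1, int(math.log2(n)))
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(depth)]
    counts = [[0] * width for _ in range(depth)]
    best, best_est = None, -1
    for a in stream:
        cells = [hash((salts[d], a)) % width for d in range(depth)]
        for d, cell in enumerate(cells):
            counts[d][cell] += 1
        est = min(counts[d][cell] for d, cell in enumerate(cells))
        if est > best_est:
            best, best_est = a, est
    return best

# Demo stream: one element with a clearly dominant count, many light ones.
stream = [7] * 60 + [i for i in range(100, 150) for _ in range(3)]
random.Random(1).shuffle(stream)
dominant = find_dominant(stream, 10000)
```

With the exam's actual parameters (frequency ⌈n^{1/4}⌉ against ≤ 10) one has to argue that per-row collision noise stays below the gap with probability 9/10, which is where the √n width and the min over O(log n) rows enter.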
Suppose you are using the Hedge algorithm to invest your money (in a good way) into $N$ different investments. Every day you see how well your investments go: for $i\in [N]$ you observe the change of each investment in percentages. For example, $\mbox{change}(i) = 20\%$ would mean that investment $i$ increased in value by $20\%$ and $\mbox{change}(i) = -10\%$ would mean that investment $i$ decreased in value by $10\%$. How would you implement the ``adversary'' at each day $t$ so as to make sure that Hedge gives you (over time) almost as good an investment as the best one? In other words, how would you set the cost vector $\vec{m}^{(t)}$ each day?
So every day it multiplies its value once by (100% + r%). So if I hold the investment for n days, its value will have multiplied itself by this amount n times, making that value (100% + r%)^n of what it was at the start – that is, (1 + r)^n times what it was at the start. So to figure out how much I would need to start with today to get y dollars n days from now, I need to divide y dollars by (1 + r)^n.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose you are using the Hedge algorithm to invest your money (in a good way) into $N$ different investments. Every day you see how well your investments go: for $i\in [N]$ you observe the change of each investment in percentages. For example, $\mbox{change}(i) = 20\%$ would mean that investment $i$ increased in value by $20\%$ and $\mbox{change}(i) = -10\%$ would mean that investment $i$ decreased in value by $10\%$. How would you implement the ``adversary'' at each day $t$ so as to make sure that Hedge gives you (over time) almost as good an investment as the best one? In other words, how would you set the cost vector $\vec{m}^{(t)}$ each day?
The hedge algorithm is similar to the weighted majority algorithm. However, their exponential update rules are different. It is generally used to solve the problem of binary allocation, in which we need to allocate different portions of resources among N different options. The loss with every option is available at the end of every iteration. The goal is to reduce the total loss suffered for a particular allocation. The allocation for the following iteration is then revised, based on the total loss suffered in the current iteration, using a multiplicative update.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
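The usual answer to the question above is to set the cost of investment i on day t to the negated (suitably scaled) change, so that a high return means a low cost and Hedge's no-regret guarantee translates into tracking the best investment. A minimal Python sketch (the learning rate eta and the two-investment demo are choices made for illustration):

```python
import math

def hedge(changes, eta=0.5):
    """Hedge with the cost vector the answer calls for: on day t, set
    m_i^(t) = -change_t(i) (changes given as fractions in [-1, 1], so the
    costs also lie in [-1, 1]). Weights follow the multiplicative update
    w_i <- w_i * exp(-eta * m_i); the returned value is the final
    normalised weight vector, i.e. the allocation Hedge would play next."""
    n = len(changes[0])
    w = [1.0] * n
    for day in changes:
        costs = [-change for change in day]   # low cost = high return
        w = [wi * math.exp(-eta * mi) for wi, mi in zip(w, costs)]
    total = sum(w)
    return [wi / total for wi in w]

# Two investments: one gains 20% a day, one loses 10% a day. Over 20 days
# Hedge shifts almost all weight onto the better one, as the regret
# bound promises.
p = hedge([[0.20, -0.10]] * 20)
```

With this cost vector, Hedge's standard regret bound says the total cost (negated return) is within an additive O(√(T log N)) of the single best investment's, which is exactly the "almost as good as the best one" guarantee asked for.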