| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
1265
|
E
|
Beautiful Mirrors
|
Creatnx has $n$ mirrors, numbered from $1$ to $n$. Every day, Creatnx asks exactly one mirror "Am I beautiful?". The $i$-th mirror will tell Creatnx that he is beautiful with probability $\frac{p_i}{100}$ for all $1 \le i \le n$.
Creatnx asks the mirrors one by one, starting from the $1$-st mirror. Every day, if he asks the $i$-th mirror, there are two possibilities:
- The $i$-th mirror tells Creatnx that he is beautiful. In this case, if $i = n$ Creatnx will stop and become happy, otherwise he will continue asking the $(i+1)$-th mirror the next day;
- In the other case, Creatnx will feel upset. The next day, Creatnx will start asking from the $1$-st mirror again.
You need to calculate the expected number of days until Creatnx becomes happy.
This number should be found by modulo $998244353$. Formally, let $M = 998244353$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
|
Let $p_i$ be the probability that the $i$-th mirror answers YES when it is asked; that is, this $p_i$ equals the $p_i$ from the problem statement divided by $100$. Let $e_i$ be the expected number of days until Creatnx becomes happy when initially he is at the $i$-th mirror. For convenience, let $e_{n+1} = 0$ (when Creatnx is at the $(n+1)$-th mirror, he is already happy). The answer to the problem is $e_1$. By the definition of expected value and its basic properties, the following must hold for all $1 \leq i \leq n$: $e_i = 1 + p_i \cdot e_{i+1} + (1-p_i)\cdot e_1$. Let us explain this equation for those who are not familiar with probability. The expected value is just the average over all possible outcomes. The first term $1$ on the right-hand side is the day Creatnx spends asking the $i$-th mirror. With probability $p_i$ the $i$-th mirror answers YES and Creatnx moves to the $(i+1)$-th mirror the next day; at the $(i+1)$-th mirror, Creatnx on average needs $e_{i+1}$ more days to become happy. The second term $p_i \cdot e_{i+1}$ covers this case. Similarly, the third term $(1-p_i)\cdot e_1$ covers the case where the $i$-th mirror answers NO. To find $e_1$ we need to solve $n$ equations: (1) $e_1 = 1 + p_1 \cdot e_2 + (1-p_1)\cdot e_1$ (2) $e_2 = 1 + p_2 \cdot e_3 + (1-p_2)\cdot e_1$ $\ldots$ ($n$) $e_n = 1 + p_n \cdot e_{n+1} + (1-p_n)\cdot e_1$ We can solve this system of equations by substitution - a common technique. From equation (1) we have $e_2 = e_1 - \frac{1}{p_1}$. Substituting this into (2) we obtain $e_3=e_1-\frac{1}{p_1 \cdot p_2} - \frac{1}{p_2}$. See the pattern now? 
Similarly, substituting all the way into the last equation, we have: $0 = e_{n+1}=e_1 - \frac{1}{p_1\cdot p_2 \cdot \ldots \cdot p_n} - \frac{1}{p_2\cdot p_3 \cdot \ldots \cdot p_n} - \ldots - \frac{1}{p_n}$, hence $e_1 = \frac{1}{p_1\cdot p_2 \cdot \ldots \cdot p_n} + \frac{1}{p_2\cdot p_3 \cdot \ldots \cdot p_n} + \ldots + \frac{1}{p_n} = \frac{1 + p_1 + p_1 \cdot p_2 + \ldots + p_1\cdot p_2 \cdot \ldots \cdot p_{n-1}}{p_1\cdot p_2 \cdot \ldots \cdot p_n}$. We can compute $e_1$ by this formula in $\mathcal{O}(n)$.
|
[
"data structures",
"dp",
"math",
"probabilities"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;

const int MOD = 119 << 23 | 1;  // 998244353

// Modular inverse via Fermat's little theorem: a^(MOD-2) mod MOD.
int inv(int a) {
    int r = 1, t = a, k = MOD - 2;
    while (k) {
        if (k & 1) r = (long long) r * t % MOD;
        t = (long long) t * t % MOD;
        k >>= 1;
    }
    return r;
}

int main() {
    int n; cin >> n;
    vector<int> p(n);
    for (int i = 0; i < n; i++) cin >> p[i], p[i] = (long long) p[i] * inv(100) % MOD;
    // Maintain e_{i+1} = a * e_1 + b; initially e_1 = 1 * e_1 + 0.
    int a = 1, b = 0;
    for (int i = 0; i < n; i++) {
        a = (long long) a * inv(p[i]) % MOD;
        b = (long long) b * inv(p[i]) % MOD;
        a = (a + (long long) (p[i] - 1) * inv(p[i])) % MOD;
        b = (b - inv(p[i]) + MOD) % MOD;
    }
    // After n steps, 0 = e_{n+1} = a * e_1 + b, so e_1 = -b / a.
    cout << (long long) (MOD - b) * inv(a) % MOD << "\n";
    return 0;
}
|
1266
|
A
|
Competitive Programmer
|
Bob is a competitive programmer. He wants to become red, and for that he needs a strict training regime. He went to the annual meeting of grandmasters and asked $n$ of them how much effort they needed to reach red.
"Oh, I just spent $x_i$ \textbf{hours} solving problems", said the $i$-th of them.
Bob wants to train his math skills, so for each answer he wrote down the number of \textbf{minutes} ($60 \cdot x_i$), thanked the grandmasters and went home. Bob could write numbers with leading zeroes — for example, if some grandmaster answered that he had spent $2$ hours, Bob could write $000120$ instead of $120$.
Alice wanted to tease Bob and so she took the numbers Bob wrote down, and for each of them she did one of the following independently:
- rearranged its digits, or
- wrote a random number.
This way, Alice generated $n$ numbers, denoted $y_1$, ..., $y_n$.
For each of the numbers, help Bob determine whether $y_i$ can be a permutation of a number divisible by $60$ (possibly with leading zeroes).
|
By the Chinese remainder theorem, a number is divisible by $60$ if and only if it is divisible by both $3$ and $20$. A number is divisible by $3$ if and only if the sum of its digits is divisible by $3$; since the sum does not change when we reorder digits, this applies to every permutation of $y_i$. A number is divisible by $20$ if and only if it ends in $20$, $40$, $60$, $80$ or $00$; hence it is necessary and sufficient that the digits contain a $0$ and at least one additional even digit. Overall, there are three conditions to check: the digit sum is divisible by $3$; there is at least one $0$; there are at least two even digits (counting $0$s).
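The three checks translate directly into code. A minimal sketch (the function name `canFormMultipleOf60` is ours, not from the editorial):

```cpp
#include <string>

// A permutation of the digits of y can be divisible by 60 iff:
//  (1) the digit sum is divisible by 3,
//  (2) there is at least one '0' (to place last), and
//  (3) there are at least two even digits, counting that '0'.
bool canFormMultipleOf60(const std::string& y) {
    int digitSum = 0, zeros = 0, evens = 0;
    for (char c : y) {
        int d = c - '0';
        digitSum += d;
        if (d == 0) zeros++;
        if (d % 2 == 0) evens++;
    }
    return digitSum % 3 == 0 && zeros >= 1 && evens >= 2;
}
```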
|
[
"chinese remainder theorem",
"math"
] | 1,000
| null |
1266
|
B
|
Dice Tower
|
Bob is playing with $6$-sided dice. A net of such a standard cube is shown below.
He has an unlimited supply of these dice and wants to build a tower by stacking multiple dice on top of each other, while choosing the orientation of each die. Then he counts the number of visible pips on the faces of the dice.
For example, the number of visible pips on the tower below is $29$ — the number visible on the top is $1$, from the south $5$ and $3$, from the west $4$ and $2$, from the north $2$ and $4$ and from the east $3$ and $5$.
The one at the bottom and the two sixes by which the dice are touching are not visible, so they are not counted towards the total.
Bob also has $t$ favourite integers $x_i$, and for every such integer his goal is to build such a tower that the number of visible pips is exactly $x_i$. For each of Bob's favourite integers determine whether it is possible to build a tower that has exactly that many visible pips.
|
Consider a die other than the top-most one. As the sum of numbers on the opposite faces of a die is always $7$, the sum of numbers on the visible faces is always $14$, regardless of its orientation. For the top-most die, the numbers on the sides also add up to $14$, and there is an additional number on top of the die. The total number of visible pips is thus $14d + t$, where $d$ is the number of dice and $t$ is the number on top. For a given $x$, compute $t = x \bmod 14$ and $d = \lfloor \frac{x}{14} \rfloor$. The answer is positive if and only if $d \geq 1$ and $1 \leq t \leq 6$.
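The resulting check is a one-liner per query; a minimal sketch (the function name is ours):

```cpp
// A tower of d dice shows 14*d pips on the sides plus t pips on top,
// so x is achievable iff x = 14*d + t with d >= 1 and 1 <= t <= 6.
bool towerPossible(long long x) {
    long long d = x / 14, t = x % 14;
    return d >= 1 && t >= 1 && t <= 6;
}
```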
|
[
"constructive algorithms",
"math"
] | 1,000
| null |
1266
|
C
|
Diverse Matrix
|
Let $a$ be a matrix of size $r \times c$ containing positive integers, not necessarily distinct. Rows of the matrix are numbered from $1$ to $r$, columns are numbered from $1$ to $c$. We can construct an array $b$ consisting of $r + c$ integers as follows: for each $i \in [1, r]$, let $b_i$ be the greatest common divisor of integers in the $i$-th row, and for each $j \in [1, c]$ let $b_{r+j}$ be the greatest common divisor of integers in the $j$-th column.
We call the matrix \textbf{diverse} if all $r + c$ numbers $b_k$ ($k \in [1, r + c]$) are pairwise distinct.
The \textbf{magnitude} of a matrix equals the maximum of the $b_k$.
For example, suppose we have the following matrix:
\begin{center}
$\begin{pmatrix} 2 & 9 & 7\\ 4 & 144 & 84 \end{pmatrix}$
\end{center}
We construct the array $b$:
- $b_1$ is the greatest common divisor of $2$, $9$, and $7$, that is $1$;
- $b_2$ is the greatest common divisor of $4$, $144$, and $84$, that is $4$;
- $b_3$ is the greatest common divisor of $2$ and $4$, that is $2$;
- $b_4$ is the greatest common divisor of $9$ and $144$, that is $9$;
- $b_5$ is the greatest common divisor of $7$ and $84$, that is $7$.
So $b = [1, 4, 2, 9, 7]$. All values in this array are distinct, so the matrix is diverse. The magnitude is equal to $9$.
For a given $r$ and $c$, find a diverse matrix that minimises the magnitude. If there are multiple solutions, you may output any of them. If there are no solutions, output a single integer $0$.
|
The case $r = c = 1$ is impossible: for a single cell, the row gcd and the column gcd both equal that cell's value, so they cannot be distinct. It turns out that this is the only impossible case, and we show it by providing a construction that always achieves a magnitude of $r + c$. If $r = 1$, one optimal solution is $A = (2, 3, \dots, c+1)$; the case $c = 1$ is symmetric. Now assume $r, c \geq 2$ and assign $a_{i,j} = i \cdot (r + j)$. We can show that the gcd of the $i$-th row equals $i$: $b_i = \mathrm{gcd}\left\{i \cdot (r+1), i \cdot (r+2), \dots, i \cdot (r+c)\right\} = i \cdot \mathrm{gcd}\left\{r+1, r+2, \dots, r+c \right\}$ and as $r+1$ and $r+2$ are coprime, $b_i = i$. Similarly, we can show that $b_{r+j} = r + j$. To summarise, $b_k = k$ for all $k$, hence all row and column gcds are pairwise distinct, and the maximum is $r + c$. As the magnitude is the maximum of $r + c$ pairwise distinct positive integers, $r + c$ is optimal.
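The construction can be sketched as follows (the function name `diverseMatrix` is ours; it returns an empty matrix for the impossible $1 \times 1$ case):

```cpp
#include <vector>

// Build a diverse matrix of minimum magnitude r + c, or return an empty
// matrix when r = c = 1 (the only impossible case).
std::vector<std::vector<long long>> diverseMatrix(int r, int c) {
    if (r == 1 && c == 1) return {};
    std::vector<std::vector<long long>> a(r, std::vector<long long>(c));
    if (r == 1) {            // single row: 2, 3, ..., c+1
        for (int j = 0; j < c; j++) a[0][j] = j + 2;
    } else if (c == 1) {     // single column: 2, 3, ..., r+1
        for (int i = 0; i < r; i++) a[i][0] = i + 2;
    } else {                 // general case: a[i][j] = i * (r + j), 1-based
        for (int i = 1; i <= r; i++)
            for (int j = 1; j <= c; j++)
                a[i - 1][j - 1] = (long long)i * (r + j);
    }
    return a;
}
```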
|
[
"constructive algorithms",
"greedy",
"math",
"number theory"
] | 1,400
| null |
1266
|
D
|
Decreasing Debts
|
There are $n$ people in this world, conveniently numbered $1$ through $n$. They are using burles to buy goods and services. Occasionally, a person might not have enough currency to buy what he wants or needs, so he borrows money from someone else, with the idea that he will repay the loan later with interest. Let $d(a,b)$ denote the debt of $a$ towards $b$, or $0$ if there is no such debt.
Sometimes, this becomes very complex, as the person lending money can run into financial troubles before his debtor is able to repay his debt, and finds himself in the need of borrowing money.
When this process runs for a long enough time, it might happen that there are so many debts that they can be consolidated. There are two ways this can be done:
- Let $d(a,b) > 0$ and $d(c,d) > 0$ such that $a \neq c$ or $b \neq d$. We can decrease $d(a,b)$ and $d(c,d)$ by $z$ and increase $d(c,b)$ and $d(a,d)$ by $z$, where $0 < z \leq \min(d(a,b),d(c,d))$.
- Let $d(a,a) > 0$. We can set $d(a,a)$ to $0$.
The total debt is defined as the sum of all debts:
$$\Sigma_d = \sum_{a,b} d(a,b)$$
Your goal is to use the above rules in any order any number of times, to make the total debt as small as possible. Note that you don't have to minimise the \textbf{number} of non-zero debts, only the \textbf{total debt}.
|
Consider a solution which minimises the total debt. Assume for contradiction that there is a triple of vertices $u$, $v$, $w$ with $v \neq u$ and $v \neq w$ such that $d(u,v) > 0$ and $d(v,w) > 0$. We can use the first rule with $a = u$, $b = c = v$, $d = w$ and $z = \min(d(u,v), d(v,w))$, and then the second rule with $a = v$. We have just reduced the total debt by $z$, which is a contradiction. So there cannot be such a triple; in particular, there cannot be a vertex $v$ that has both incoming and outgoing debt. Hence, every vertex has only outgoing edges or only incoming ones. Define $bal(u) = \sum_v d(u,v) - \sum_v d(v,u)$ to be the balance of $u$. Any application of either rule preserves the balance of every vertex. It follows that any solution in which every vertex has only outgoing or only incoming edges is constructible using a finite number of applications of the rules. This means that we can just find the balances, and greedily match vertices with positive balance to vertices with negative balance. The total debt is then $\Sigma_d = \frac{\sum_v |bal(v)|}{2}$, and it is clear that we cannot do better than that.
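The balance-and-match step can be sketched as follows (names are ours, vertices are 0-indexed here, and debts are given as triples rather than read from input):

```cpp
#include <vector>
#include <tuple>
#include <algorithm>

// Consolidate debts (a, b, amount) -- "a owes b" -- to the minimum total:
// compute each person's balance (outgoing minus incoming), then match net
// debtors against net creditors with two pointers.
std::vector<std::tuple<int, int, long long>>
consolidate(int n, const std::vector<std::tuple<int, int, long long>>& debts) {
    std::vector<long long> bal(n, 0);
    for (const auto& [a, b, d] : debts) { bal[a] += d; bal[b] -= d; }

    std::vector<int> debtors, creditors;
    for (int v = 0; v < n; v++) {
        if (bal[v] > 0) debtors.push_back(v);
        if (bal[v] < 0) creditors.push_back(v);
    }

    std::vector<std::tuple<int, int, long long>> res;
    size_t i = 0, j = 0;
    while (i < debtors.size() && j < creditors.size()) {
        long long z = std::min(bal[debtors[i]], -bal[creditors[j]]);
        res.emplace_back(debtors[i], creditors[j], z);  // debtor owes creditor z
        bal[debtors[i]] -= z;
        bal[creditors[j]] += z;
        if (bal[debtors[i]] == 0) i++;
        if (bal[creditors[j]] == 0) j++;
    }
    return res;
}
```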
|
[
"constructive algorithms",
"data structures",
"graphs",
"greedy",
"implementation",
"math",
"two pointers"
] | 2,000
| null |
1266
|
E
|
Spaceship Solitaire
|
Bob is playing a game of Spaceship Solitaire. The goal of this game is to build a spaceship. In order to do this, he first needs to accumulate enough resources for the construction. There are $n$ types of resources, numbered $1$ through $n$. Bob needs at least $a_i$ pieces of the $i$-th resource to build the spaceship. The number $a_i$ is called \textbf{the goal for resource $i$}.
Each resource takes $1$ turn to produce and in each turn only one resource can be produced. However, there are certain milestones that speed up production. Every milestone is a triple $(s_j, t_j, u_j)$, meaning that as soon as Bob has $t_j$ units of the resource $s_j$, he receives one unit of the resource $u_j$ for free, without him needing to spend a turn. It is possible that getting this free resource allows Bob to claim reward for another milestone. This way, he can obtain a large number of resources in a single turn.
The game is constructed in such a way that there are never two milestones that have the same $s_j$ \textbf{and} $t_j$, that is, the award for reaching $t_j$ units of resource $s_j$ is at most one additional resource.
\textbf{A bonus is never awarded for $0$ of any resource, neither for reaching the goal $a_i$ nor for going past the goal — formally, for every milestone $0 < t_j < a_{s_j}$.}
A bonus for reaching certain amount of a resource can be the resource itself, that is, $s_j = u_j$.
Initially there are no milestones. You are to process $q$ updates, each of which adds, removes or modifies a milestone. After every update, output the minimum number of turns needed to finish the game, that is, to accumulate at least $a_i$ of $i$-th resource for each $i \in [1, n]$.
|
Consider a fixed game. Let $m_i$ be the total number of milestones having $u_j = i$, that is, the maximum possible number of "free" units of resource $i$ that can be obtained. We claim that in an optimal solution we (manually) produce this resource exactly $p_i = \max\left(a_i - m_i, 0\right)$ times. It is obvious that this many productions are necessary; let's prove that they are also sufficient. First remove arbitrary milestones until $m_i \leq a_i$ holds for all $i$. This clearly cannot make the production finish faster. Now $p_i + m_i = a_i$ holds for each resource. Let's perform the production of $p_i$ units for all $i$ in arbitrary order, and let $c_i$ be the total amount of the $i$-th resource after this process finishes. Clearly $c_i \leq a_i$. Assume for contradiction that for some resource $i$ we have $c_i < a_i$, that is, the goal has not been reached. As we performed all $p_i$ manual productions for this resource, it means that we did not reach $a_i - c_i$ milestones awarding this resource. The total number of unclaimed awards is thus $\sum (a_i - c_i)$. Where are the milestones corresponding to these awards? Clearly, they can only be of the form $(i, j, k)$ for $j > c_i$, otherwise we would have claimed them. There is never an award for reaching the goal, so the total number of positions for unreached milestones is $\sum \max(0, a_i - c_i - 1)$. As there is always at most one award for reaching a milestone, the number of unclaimed awards is at most the number of milestones not reached: $\sum (a_i - c_i) \leq \sum \max(0, a_i - c_i - 1)$. As $a_i \geq c_i$, this is equivalent to $\sum (a_i - c_i) \leq \sum_{a_i > c_i} (a_i - c_i - 1) + \sum_{a_i = c_i} (a_i - c_i) = \sum (a_i - c_i) - \left|\{i \colon a_i > c_i\}\right|$. Subtracting $\sum (a_i - c_i)$ from both sides and rearranging yields $\left|\{i \colon a_i > c_i\}\right| \leq 0$, so the number of resources that did not reach their goal is $0$, which is a contradiction. From here the solution is simple. 
We need to maintain all the $m_i$ and the sum of the $p_i$. Each update changes at most two of the $m_i$ (one milestone removed, one added), and with the milestones stored in a hash map keyed by $(s_j, t_j)$ each change takes $\mathcal O(1)$, so the total complexity is $\mathcal O(n + q)$.
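The bookkeeping can be sketched as follows (struct and member names are ours; resources are 0-indexed here and `u == -1` denotes removal, instead of the statement's $u_j = 0$):

```cpp
#include <vector>
#include <unordered_map>
#include <algorithm>

// Maintain m[i] (number of milestones awarding resource i) and the answer
// sum of p_i = max(a_i - m_i, 0). Each update touches at most two m[i].
struct Game {
    std::vector<long long> a, m;
    long long answer = 0;                      // current sum of all p_i
    std::unordered_map<long long, int> award;  // packed (s, t) -> resource u

    explicit Game(const std::vector<long long>& goals)
        : a(goals), m(goals.size(), 0) {
        for (long long g : goals) answer += g; // no milestones yet
    }

    void changeAward(int u, int delta) {       // m[u] += delta, fix the answer
        answer -= std::max(a[u] - m[u], 0LL);
        m[u] += delta;
        answer += std::max(a[u] - m[u], 0LL);
    }

    // Replace the milestone at (s, t); u == -1 removes it. Returns the
    // minimum number of turns after the update.
    long long update(int s, long long t, int u) {
        long long key = ((long long)s << 32) | t;  // assumes t < 2^32
        auto it = award.find(key);
        if (it != award.end()) { changeAward(it->second, -1); award.erase(it); }
        if (u >= 0) { changeAward(u, +1); award[key] = u; }
        return answer;
    }
};
```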
|
[
"data structures",
"greedy",
"implementation"
] | 2,100
| null |
1266
|
F
|
Almost Same Distance
|
Let $G$ be a simple graph. Let $W$ be a non-empty subset of vertices. Then $W$ is \textbf{almost-$k$-uniform} if for each pair of distinct vertices $u,v \in W$ the distance between $u$ and $v$ is either $k$ or $k+1$.
You are given a tree on $n$ vertices. For each $i$ between $1$ and $n$, find the maximum size of an almost-$i$-uniform set.
|
There are three cases of what an almost-$k$-uniform set $W$ looks like, depending on the value of $k$. The first case is $k = 1$. In this case, any maximal almost-$1$-uniform set is a vertex plus all of its neighbours, so we can simply check each vertex and find the one with the highest degree. The second case is when $k$ is odd and greater than one: $k = 2l + 1$ for $l \geq 1$. Then any almost-$k$-uniform set looks as follows: there is a middle vertex $v$, and every $w \in W$ has distance $l$ or $l+1$ from $v$. Additionally, at most one of the vertices is at distance $l$, and each $w$ is in a different subtree of $v$. The third case is when $k$ is even: $k = 2l$ for $l \geq 1$. Then any almost-$k$-uniform set can be constructed in one of two ways. The first is similar to the previous case, except that all vertices have to be at distance exactly $l$ from the middle vertex. The second construction begins by selecting a middle edge $\{u, v\}$, removing it from the tree and then finding a set of vertices $W$ such that each $w \in W$ is at distance $l$ from either $u$ or $v$ (depending on which subtree it is in) and each $w$ is in a different subtree of $u$ or $v$. Now we need to figure out how to make these constructions efficient. We begin by calculating $d_v(u)$ - the depth of the subtree of $u$ when the tree is rooted at $v$. We compute this for each edge $\{u, v\}$ using the standard technique of two DFS traversals. Furthermore, we use the fact that the answer is monotone with respect to parity: $Ans[i+2] \leq Ans[i]$ for all $i$ - so we only calculate the answer at some "interesting" points and then restore this inequality at the end by taking suffix maximums. For the odd case, consider a middle vertex $v$ and sort the depths of the subtrees of the neighbours of $v$. Then $Ans[2l+1] \geq x$ if there are at least $x-1$ subtrees of depth at least $l+1$ and one additional subtree of depth $l$ or more. 
By sorting the depths, we can process each middle vertex in $\mathcal O(\deg v \log \deg v)$, for a total of $\mathcal O(n \log n)$ for this step. For the even case there are two options. The first is very similar to the above construction; we just look for $x$ subtrees of depth at least $l$. The second option is more involved. Consider a middle edge $\{u,v\}$. Let $x$ be the number of neighbours $w \neq v$ of $u$ with $d_u(w) \geq l$, and $y$ the number of neighbours $w \neq u$ of $v$ with $d_v(w) \geq l$. Then we can conclude that $Ans[2l] \geq x + y$. However, we cannot directly calculate the above quantity for each edge, as processing each vertex would take time quadratic in its degree. We will do a series of steps to improve upon this. The first optimization is that when finding $x$ and $y$ above, we also consider $d_v(u)$ and $d_u(v)$, but then subtract $2$ from the answer. Why can we do that? First, note that this case only matters if both $x \geq 1$ and $y \geq 1$ - otherwise it reduces to the middle-vertex case that is already processed. This means that $\max_w d_v(w) \geq l$ and hence $d_u(v) \geq l+1 \geq l$, so adding $d_u(v)$ to the set increases $x$ by one. The same argument shows that $y$ increases by $1$. How does this modification help us? We can fix $v$ as one of the endpoints of the middle edge, merge the subtree depths from the individual subtrees by taking the maximum, and process all candidates for the other endpoint at the same time. This alone still does not change the complexity, because we still process each vertex once for each of its neighbours, and this processing takes time at least linear in the degree. Now comes the final step: perform a centroid decomposition of the tree. 
When processing $v$ as the fixed end of the middle edge, consider all $d_v(w)$ for the almost-$k$-uniform set, but as the other endpoint of the middle edge consider only the vertices in the tree where $v$ is the centroid. This way, each middle edge is processed exactly once. Each individual vertex is processed once when it is the centroid and then once within each centroid tree it belongs to. Since the depth of the centroid decomposition is $\mathcal O(\log n)$, we only process each vertex $\mathcal O(\log n)$ times, and a single processing of a vertex costs $\mathcal O(\deg v \log \deg v)$ time (because of sorting the depths). Thus the total running time is $\mathcal O(n \log^2 n)$, which is the most expensive part of the algorithm. We could also replace the sort with a parallel counting sort, which makes this step $\mathcal O(n \log n)$, but that does not run faster in practice. An alternative to the centroid decomposition: for each vertex $u$, store the histogram of $d_u(v)$ in a map. 
Now, consider each edge separately and calculate the answer naively, using a linear pass through the histograms of $d_v(u)$ and $d_u(v)$. Why is this fast? The sum of all $d_u(v)$ for a fixed $u$ is at most $n-1$ (a subtree of depth $d$ contains at least $d$ vertices), so each histogram has $\mathcal O(\sqrt n)$ distinct values, which makes the solution $\mathcal O(n \sqrt n)$ in total, and quite fast in practice.
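The depth computation everything above builds on, $d_v(u)$ for every directed edge, can be sketched with the two-pass (rerooting) DFS technique the tutorial mentions. This is only the depth step, not the full solution; names are ours, depths count vertices on the longest path, and the inner loop of the second pass is left quadratic in the degree for clarity (the usual prefix/suffix-maximum trick removes that factor):

```cpp
#include <vector>
#include <map>
#include <utility>
#include <algorithm>

// For each ordered pair of adjacent vertices (v, u), compute
// depth[{v, u}] = d_v(u): the number of vertices on the longest path that
// starts at u and avoids v ("the depth of u's subtree when rooted at v").
struct SubtreeDepths {
    int n;
    std::vector<std::vector<int>> adj;
    std::map<std::pair<int, int>, int> depth;   // (v, u) -> d_v(u)

    explicit SubtreeDepths(const std::vector<std::vector<int>>& g)
        : n((int)g.size()), adj(g) {
        dfsDown(0, -1);
        dfsUp(0, -1);
    }
    // First pass: depths of subtrees hanging away from the root.
    int dfsDown(int u, int parent) {
        int best = 1;                            // u alone
        for (int w : adj[u]) if (w != parent)
            best = std::max(best, 1 + dfsDown(w, u));
        if (parent != -1) depth[{parent, u}] = best;
        return best;
    }
    // Second pass: for each child w of u, the depth on u's side of the
    // edge {u, w} is 1 + max over all other directions out of u.
    void dfsUp(int u, int parent) {
        for (int w : adj[u]) if (w != parent) {
            int best = 1;                        // u alone
            for (int x : adj[u]) if (x != w)
                best = std::max(best, 1 + depth[{u, x}]);
            depth[{w, u}] = best;
            dfsUp(w, u);
        }
    }
};
```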
|
[
"dfs and similar",
"graphs"
] | 2,900
| null |
1266
|
G
|
Permutation Concatenation
|
Let $n$ be an integer. Consider all permutations on integers $1$ to $n$ in lexicographic order, and concatenate them into one big sequence $P$. For example, if $n = 3$, then $P = [1, 2, 3, 1, 3, 2, 2, 1, 3, 2, 3, 1, 3, 1, 2, 3, 2, 1]$. The length of this sequence is $n \cdot n!$.
Let $1 \leq i \leq j \leq n \cdot n!$ be a pair of indices. We call the sequence $(P_i, P_{i+1}, \dots, P_{j-1}, P_j)$ a \textbf{subarray} of $P$.
You are given $n$. Find the number of distinct subarrays of $P$. Since this number may be large, output it modulo $998244353$ (a prime number).
|
Let $LCP$ be the longest-common-prefix array associated with the suffix array of the sequence $P$. The answer is $\frac{|P| \cdot (|P| + 1)}{2} - \sum_{i=1}^{|P|} LCP[i]$. Let's calculate $c_i$ - the number of positions for which $LCP[j] = i$. The following holds: $c_0 = n$, $c_i = \frac{n!}{(n-i+1)!} \cdot \left((n-i)^2 + 1\right)$ for all $i$ between $1$ and $n-1$, $c_i = n! - c_{i-n}$ for all $i$ between $n$ and $2n-1$, and $c_i = 0$ for all $i \geq 2n$. Using the above formulas, we can calculate the answer in $\mathcal O(n)$. Below follows a proof of the above statement. Beware that it is neither complete, polished, nor interesting. It is just the case bashing that I used to convince myself that the pattern I observed during problem preparation is correct, an incomplete transcript of my scribbles. Apologies to everybody who expected something nice here. Proof. Construct the suffix array such that the terminator is strictly larger than $n$. This changes neither the answer nor the LCP histogram, and it is easier to reason about. Here $LCP_n[i]$ denotes the longest common prefix of the suffix starting at position $i$ with the next suffix in lexicographic order. We will build the knowledge about the LCP values by induction over $n$. Lemma 1: For every $i$ between $1$ and $n-1$, and for every $k$ between $1$ and $n!$, it holds that $LCP_n[k \cdot n + i] = LCP_n[k \cdot n + i - 1] - 1$. In other words, the LCP value is largest for the suffix aligned on a permutation boundary, and shortening the suffix by one decreases the LCP by one. It is obvious that it cannot decrease by more than one. Why does it always decrease by exactly one? TODO For the above reason, we only need to consider positions divisible by $n$ (if we number from $0$), which we will do in the rest of the text. Lemma 2: Let $i$ be between $1$ and $n$ and $p = [n, n-1, \dots, i+1, i-1, \dots, 2, 1, i]$. Then $LCP_n[indexof(p)] = n-1$. We call such a permutation semidecreasing. 
Proof: First, let's see that there is no position with an LCP of $n$ or more (not even a lexicographically smaller one). Such a position needs to have a permutation boundary somewhere; let it be after the number $j$: $[n, n-1, \dots, j] + [j-1, j-2, \dots, 2, 1, i]$. Note that the suffix of the left permutation is in decreasing order and its first element is $n$. When this happens, the next permutation must swap $n$ with the preceding element, and hence the prefix of the right permutation cannot be complementary to the suffix of the left permutation. Next, see that there is a subarray of $P$ that has a common prefix of exactly $n-1$ with this permutation, followed by a larger element. There are two cases: $i = n$, that is, the permutation is $[n-1, n-2, \dots, 2, 1, n]$. Then there is a suffix $[n-1, n-2, \dots, 2, 1, \$]$ at the end of $P$. $i \neq n$, a permutation of the form $[n, n-1, \dots, i+1, i-1, \dots, 2, 1, i]$. Then there is a lexicographically larger string with a common prefix of length $n-1$ on the boundary of the permutations $[i, n, n-1, \dots, i+1, i-1, \dots, 2, 1]$ and $[i+1, 1, 2, \dots, i-1, i, i+2, \dots, n]$. Definition (enlarged permutations): Consider a permutation on $n$ elements. There are $n+1$ permutations on $n+1$ elements that can be obtained from it by prefixing it with a number from the set $\{0.5, 1.5, \dots, n+0.5\}$ and renumbering. For example, from $p = [3,1,2,4]$ we obtain $q = \{[1,4,2,3,5], [2,4,1,3,5], [3,4,1,2,5], [4,3,1,2,5], [5,3,1,2,4]\}$. These enlarged permutations will be of great use within the proofs, as we will see soon; their LCP values are closely related. Lemma 3 (enlarging non-semidecreasing permutations): Let $LCP_n[k \cdot n] \geq n$. Then $LCP_{n+1}[(k + m \cdot n!) \cdot (n+1)] = LCP_n[k \cdot n] + 2$. In other words, the LCP of an enlarged permutation is two more than that of the original permutation, provided the original permutation had a large enough LCP. 
Proof: The permutation with the corresponding LCP of at least $n$ and the following one are enlarged with the same number $e$ (because the last permutation of $P_{n-1}$ has $LCP = n-2$, as shown by Lemma 2). The lexicographically following suffix in $P_{n-1}$ can also be found in $P_n$, subject to the insertion of the element $e$ and renumbering. We will just demonstrate this fact informally. For example, consider the permutation $p = [1,2,4,3]$. The lexicographically following suffix is on the boundary of the permutations $q_1 = [3,1,2,4]$ and $q_2 = [3,1,4,2]$. Say we enlarge $p$ to $p' = [3,1,2,5,4]$. Then the boundary of the permutations $q'_1 = [4,3,1,2,5]$ and $q'_2 = [4,3,1,5,2]$ contains the subarray $[3,1,2,5,4,3,1]$, which is longer than the original LCP by two. Here, we inserted $[3]$ after the first element and renumbered the old $3$ to $4$ and $4$ to $5$. Let's now characterize the indices where $LCP_n[k \cdot n] < n$. Lemma 4 (enlarging semidecreasing permutations): Let $p$ be a semidecreasing permutation on $n$ elements, and consider its enlargement $p^{+}$ - a permutation on $n+1$ elements. There are three cases: $p^{+}$ is also semidecreasing; then $LCP_{n+1}[indexof(p^{+})] = n$. $p$ is the last permutation and the enlargement element is neither $0.5$ nor $n+0.5$; then $LCP_{n+1}[indexof(p^{+})] = n+2$. Otherwise, $LCP_{n+1}[indexof(p^{+})] = n+1$. Proof: The first case is obvious. In the third case, the enlarged permutation is $p^{+} = [i, n+1, n, \dots, i+1, i-1, \dots, 2, 1]$. The lexicographically next permutation is on the boundary of $[1, i, n+1, n, \dots, i+1, i-1, \dots, 2]$ and $[1, i+1, 2, 3, \dots, i, i+2, \dots, n, n+1]$, and the LCP has length $n+2$. This is because there is an extra match of $i$ at the beginning, and of $1$ and $i+1$ at the end. The second case is similar to Lemma 3. Combining the above with induction on $n$ yields: Lemma 5: For all $k$ it holds that $LCP_n[k \cdot n] < 2n$. 
Using induction, we can also count the number of permutations $p_n$ for which $LCP_n[indexof(p_n)] = k$. Thanks to Lemma 1 this extends to suffixes not aligned with a permutation boundary, yielding the $c_i$ values in the statement.
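The $\mathcal O(n)$ counting from the formulas above can be sketched as follows (modular arithmetic throughout; the function names are ours):

```cpp
#include <vector>

// Modular exponentiation, used for the inverse of 2 modulo the prime M.
long long mpow(long long b, long long e, long long M) {
    long long r = 1 % M;
    for (b %= M; e; e >>= 1, b = b * b % M) if (e & 1) r = r * b % M;
    return r;
}

// Number of distinct subarrays of P (all n! permutations concatenated),
// via the LCP histogram: answer = |P|(|P|+1)/2 - sum_i i * c_i (mod M).
long long countDistinctSubarrays(long long n, long long M) {
    std::vector<long long> c(2 * n, 0);
    long long fact = 1;                       // n! mod M
    for (long long i = 2; i <= n; i++) fact = fact * (i % M) % M;
    c[0] = n % M;
    long long f = 1;                          // n! / (n - i + 1)! mod M
    for (long long i = 1; i <= n - 1; i++) {
        if (i > 1) f = f * ((n - i + 2) % M) % M;
        long long sq = (n - i) % M * ((n - i) % M) % M;  // (n - i)^2
        c[i] = f * ((sq + 1) % M) % M;
    }
    for (long long i = n; i <= 2 * n - 1; i++)
        c[i] = (fact - c[i - n] + M) % M;
    long long len = n % M * fact % M;         // |P| = n * n!
    long long total = len * ((len + 1) % M) % M * mpow(2, M - 2, M) % M;
    long long sumLcp = 0;
    for (long long i = 0; i < 2 * n; i++)
        sumLcp = (sumLcp + i % M * c[i] % M) % M;
    return (total - sumLcp + M) % M;
}
```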
|
[
"string suffix structures"
] | 3,300
| null |
1266
|
H
|
Red-Blue Graph
|
There is a directed graph on $n$ vertices numbered $1$ through $n$ where each vertex (except $n$) has two outgoing arcs, red and blue. At any point in time, exactly one of the arcs is active for each vertex. Initially, all blue arcs are active and there is a token located at vertex $1$. In one second, the vertex with token first switches its active arcs — the inactive arc becomes active and vice versa. Then, the token is moved along the active arc. When the token reaches the vertex $n$, it stops. It is guaranteed that $n$ is reachable via arcs from every vertex.
You are given $q$ queries. Each query contains a state of the graph — a pair $(v, s)$ of the following form:
- $v$ is the vertex where the token is currently located;
- $s$ is a string consisting of $n - 1$ characters. The $i$-th character corresponds to the color of the active edge leading from the $i$-th vertex (the character is 'R' if red arc is active, otherwise the character is 'B').
For each query, determine whether the given state is reachable from the initial state and, if so, find the first time this configuration appears. Note that the two operations (changing the active arc and traversing it) are atomic — a state is not considered reached if it appears after changing the active arc but before traversing it.
|
Consider a fixed query $\{s_i\}_{i=1}^{n-1}$ and $v$. Let $x_i$ be the number of times the red arc from $i$ was traversed, and $y_i$ the number of times the blue arc was traversed. The answer for a given query, should such $x_i$ and $y_i$ exist, is the sum of all of them. Let's try to find these values. When the blue arc going from $i$ is active, we have $x_i = y_i$; otherwise, $x_i = y_i + 1$. In both cases $x_i = y_i + [s_i = \text{'R'}]\,.$ Let $B_i$ denote the set of blue arcs going to $i$, and $R_i$ the set of red arcs going to $i$. For every vertex other than $1$ and the current vertex $v$, the sum of traversals of incoming edges must equal the sum of traversals of outgoing edges. For the current vertex, the sum of incoming traversals is one more than the outgoing, and for vertex $1$ it is the opposite. This gives us $\sum_{r \in R_i} x_r + \sum_{b \in B_i} y_b + [ i = 1 ] = x_i + y_i + [ i = v ]\,.$ Substituting the $y$s and rearranging yields: $2x_i - \sum_{r \in R_i} x_r - \sum_{b \in B_i} x_b = [ s_i = \text{'R'} ] - \sum_{b \in B_i} [s_b = \text{'R'}] + [ i = 1] - [ i = v ]\,.$ Consider this equality for each $i$ from $1$ to $n-1$. We have $n-1$ equalities for $n-1$ unknowns, written in matrix form as $Ax = z$. The matrix $A$ does not depend on the actual query - it is determined by the structure of the graph. Let's look at it in more detail. On the main diagonal, each value is $2$, because each vertex has outdegree $2$. Then there are some negative values for incoming edges: either $-1$, or $-2$ when both the red and the blue edge have the same endpoint. In each column, the sum of the negative entries is between $0$ and $-2$, because each $-1$ corresponds to an outgoing edge, and some of the edges may end in $n$, for which there is no equation. Combined, this yields an important observation: the matrix $A$ is irreducibly diagonally dominant. 
That is, in each column $j$ it holds that $|a_{jj}| \geq \sum_{i\neq j}|a_{ij}|$, and there is at least one column for which the inequality is strict (because vertex $n$ is reachable). An interesting property of such a matrix is that it is non-singular. For every right-hand side $z$ there is thus exactly one solution for $x$. There are $2^{n-1} \cdot (n - 1)$ possible queries, and each of them corresponds to some pair of vectors $(x, y)$. Clearly, every reachable state corresponds to exactly one pair that describes the process that led to this state. However, not every query describes a reachable state, hence some $(x, y)$ pairs must correspond to invalid states. Let's look at which states are invalid. First, every element of $x$ must be a non-negative integer. Consider a graph on three vertices, where each arc, red or blue, goes to the vertex with the higher ID ($1 \rightarrow 2$ and $2 \rightarrow 3$), $v = 2$ and $s = \text{"BB"}$. The solution of this linear system is $x = \left(\frac{1}{2}, 0\right)$. This is clearly not a valid state. There is one more situation that is invalid. Consider again a graph on three vertices where each red arc goes to the vertex with the higher ID as before, but each blue arc is a loop. In this graph, the token moves to the last vertex in two moves, making both red edges active in the process. For the query $v = 2$ and $s = \text{"BB"}$, we get a solution $x = \left(1, 0\right)$ and $y = \left(1, 0\right)$. This is not a valid state. The reason is that the active arc from $i$ speaks truth about a simple fact - where the token went the last time it left vertex $i$. The active edge in vertex $1$ is blue, which leads to itself, meaning that the token never left the vertex. But this is a contradiction, because we know it is in vertex $2$.
This condition can be phrased as follows: for every vertex that had the token at least once, the vertex containing the token must be reachable from it via the active arcs. These two conditions are in fact necessary and sufficient. Let's summarise what we have so far: build a system of linear equations and solve for $x$. Check whether all $x$ are non-negative integers. Verify that the current vertex is reachable from all visited vertices using active edges. If both conditions hold, calculate $y$ and sum all $x$ and $y$ to get the answer. Otherwise, the state is unreachable. There are some technical details left to be solved. Firstly, as we already noticed, the matrix $A$ doesn't depend on the actual query, only on the graph structure. We can thus precompute its inverse and solve each query with a matrix-vector multiplication, reducing the time per query from $\mathcal O(n^3)$ to $\mathcal O(n^2)$. Secondly, how do we compute $x$ precisely enough to check whether its entries are integers? There are two ways - either we use exact fractions with big integers, or we compute everything in modular arithmetic and verify the solution over the integers. Since $128$-bit integers are not a thing on codeforces, we either use Schrage's method for the modular multiplication, or use two moduli and the Chinese remainder theorem. The total complexity is $\mathcal O(n^3 + q \cdot n^2)$.
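As a minimal illustration of the modular route, here is a generic Gaussian elimination over $\mathbb{Z}_p$ (a sketch; the function names and the tiny test system below are ours, not from a reference solution). It solves $Ax = z$ assuming $\det A \not\equiv 0 \pmod p$; the diagonal dominance argument guarantees non-singularity over the rationals.

```cpp
#include <vector>
using namespace std;

// Generic modular Gaussian elimination sketch (names are ours). Solves
// A x = z over Z_p for the prime p = 998244353, assuming det(A) != 0 mod p.
const long long MOD = 998244353;

long long pw(long long b, long long e) {
    long long r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}

vector<long long> solveMod(vector<vector<long long>> A, vector<long long> z) {
    int n = A.size();
    for (int col = 0; col < n; col++) {
        int piv = col;
        while (A[piv][col] % MOD == 0) piv++;        // exists since A is non-singular
        swap(A[piv], A[col]); swap(z[piv], z[col]);
        long long inv = pw(A[col][col], MOD - 2);    // Fermat inverse of the pivot
        for (int j = col; j < n; j++) A[col][j] = A[col][j] * inv % MOD;
        z[col] = z[col] * inv % MOD;
        for (int i = 0; i < n; i++) if (i != col && A[i][col] % MOD != 0) {
            long long f = A[i][col];
            for (int j = col; j < n; j++)
                A[i][j] = (A[i][j] - f * A[col][j]) % MOD;
            z[i] = (z[i] - f * z[col]) % MOD;
        }
    }
    for (auto& v : z) v = (v % MOD + MOD) % MOD;     // normalize to [0, MOD)
    return z;
}
```

In the actual solution one would instead precompute $A^{-1}$ once by the same elimination and answer each query with a single matrix-vector product.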
|
[
"dp",
"graphs",
"math",
"matrices",
"meet-in-the-middle"
] | 3,400
| null |
1268
|
A
|
Long Beautiful Integer
|
You are given an integer $x$ of $n$ digits $a_1, a_2, \ldots, a_n$, which make up its decimal notation in order from left to right.
Also, you are given a positive integer $k < n$.
Let's call an integer with digits $b_1, b_2, \ldots, b_m$ \textbf{beautiful} if $b_i = b_{i+k}$ for each $i$ such that $1 \leq i \leq m - k$.
You need to find the smallest \textbf{beautiful} integer $y$, such that $y \geq x$.
|
At first, let's set $a_i=a_{i-k}$ for all $i>k$. If the result is at least the initial $a$, you can print it as the answer. Otherwise, let's find the last non-nine digit among the first $k$, increase it by one, and change all $9$s between it and the $k$-th digit to $0$. After that, again set $a_i = a_{i-k}$ for all $i>k$ and print the result as the answer.
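A short sketch of this procedure (the function name is ours; digits are handled as a string, and since the array has no leading zero, comparing equal-length digit strings lexicographically compares them numerically):

```cpp
#include <string>
using namespace std;

// Sketch of the construction above (the function name is ours).
// a is the decimal notation of x, with k < a.size(); returns the smallest
// beautiful y >= x as a digit string of the same length.
string smallestBeautiful(string a, int k) {
    int n = a.size();
    string b = a;
    for (int i = k; i < n; i++) b[i] = b[i - k];  // propagate the period
    if (b >= a) return b;                         // same length, so lexicographic compare is numeric
    // Increment the k-digit prefix like a decimal counter; the carry cannot
    // run off the front, since an all-nines prefix already gives b >= a.
    int i = k - 1;
    while (b[i] == '9') b[i--] = '0';
    b[i]++;
    for (int j = k; j < n; j++) b[j] = b[j - k];  // re-propagate the period
    return b;
}
```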
|
[
"constructive algorithms",
"greedy",
"implementation",
"strings"
] | 1,700
| null |
1268
|
B
|
Domino for Young
|
You are given a Young diagram.
Given diagram is a histogram with $n$ columns of lengths $a_1, a_2, \ldots, a_n$ ($a_1 \geq a_2 \geq \ldots \geq a_n \geq 1$).
\begin{center}
{\small Young diagram for $a=[3,2,2,2,1]$.}
\end{center}
Your goal is to find the largest number of non-overlapping dominos that you can draw inside of this histogram, a domino is a $1 \times 2$ or $2 \times 1$ rectangle.
|
Let's color the diagram in two colors as a chessboard. I claim that the Young diagram can be partitioned into dominoes if and only if the number of white cells inside it is equal to the number of black cells inside it. If the Young diagram has two equal rows (or columns), you can delete one domino, and the diagram will still have an equal number of white and black cells. If all rows and columns are different, it means that the Young diagram is a "basic" diagram, i.e. has columns of lengths $1, 2, \ldots, n$. But in a "basic" diagram the number of white and black cells is different! So, we have a contradiction! But what if the numbers of black and white cells are not the same? I claim that the answer is $\min($ the number of white cells, the number of black cells $)$. Indeed, if you have more white cells (the case with more black cells is symmetrical), and there are no equal rows and columns, you can take the first column with more white cells than black cells and delete the last cell of this column; in the end, you will have a Young diagram with an equal number of black and white cells, so you can find the answer by the algorithm described above.
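The counting step itself is a few lines; a sketch (the function name is ours) that colors the cell in column $i$, row $j$ white when $i+j$ is even and returns $\min(\text{white}, \text{black})$:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Sketch of the counting step (the function name is ours): color cell
// (column i, row j) white when i + j is even, then return min(white, black).
long long maxDominoes(const vector<long long>& a) {
    long long white = 0, black = 0;
    for (size_t i = 0; i < a.size(); i++) {
        long long evenRows = (a[i] + 1) / 2;      // rows j = 0, 2, 4, ...
        long long oddRows  = a[i] / 2;            // rows j = 1, 3, 5, ...
        if (i % 2 == 0) { white += evenRows; black += oddRows; }
        else            { white += oddRows;  black += evenRows; }
    }
    return min(white, black);
}
```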
|
[
"dp",
"greedy",
"math"
] | 2,000
| null |
1268
|
C
|
K Integers
|
You are given a permutation $p_1, p_2, \ldots, p_n$.
In one move you can swap two adjacent values.
You want to perform a minimum number of moves, such that in the end there will exist a subsegment $1,2,\ldots, k$, in other words in the end there should be an integer $i$, $1 \leq i \leq n-k+1$ such that $p_i = 1, p_{i+1} = 2, \ldots, p_{i+k-1}=k$.
Let $f(k)$ be the minimum number of moves that you need to make a subsegment with values $1,2,\ldots,k$ appear in the permutation.
You need to find $f(1), f(2), \ldots, f(n)$.
|
At first, let's add to the answer the number of inversions among the numbers $1,2,\ldots,k$. After that, let's say that $x \leq k$ is a one, and $x > k$ is a zero, obtaining a binary array $s$. Then you need to calculate the smallest number of swaps to make a segment $1,1,\ldots,1$ of length $k$ appear. For this, let $c_i$ be the number of ones on the prefix ending at $i$. For every $i$ with $s_i=0$ we need to add $\min{(c_i, k - c_i)}$ to the answer (it is an obvious lower bound, and it is simple to prove that we can always do one operation to reduce this total value by one). How to calculate this for each $k$? Let's move $k$ from $1$ to $n$. You can maintain the number of inversions with a BIT. To calculate the second value, you can note that you just need to find the position of the $\lceil k/2 \rceil$-th number $\leq k$ (the median) and add the values to the left and to the right of it with different coefficients. To maintain them, you can recalculate everything when the median moves (as in a heap). But it is also possible to maintain a segment tree over $c_i$ and just take some sums.
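A brute-force reference for small $n$ is handy for checking the fast version; the sketch below (names are ours) computes the same value as inversions among the values $1..k$ plus the standard median-alignment gathering cost, which equals the sum of $\min(c_i, k - c_i)$ over zeros:

```cpp
#include <vector>
#include <algorithm>
#include <cstdlib>
using namespace std;

// Brute-force reference for f(1..n), O(n^2 log n): inversions among the
// values 1..k plus the median-alignment gathering cost (names are ours).
vector<long long> slowF(const vector<int>& p) {
    int n = p.size();
    vector<int> pos(n + 1);
    for (int i = 0; i < n; i++) pos[p[i]] = i;            // 0-based position of each value
    vector<long long> f(n);
    for (int k = 1; k <= n; k++) {
        long long inv = 0;
        for (int v = 1; v <= k; v++)
            for (int u = v + 1; u <= k; u++)
                if (pos[v] > pos[u]) inv++;               // inversions among values 1..k
        vector<long long> b;
        for (int v = 1; v <= k; v++) b.push_back(pos[v]);
        sort(b.begin(), b.end());
        for (int i = 0; i < k; i++) b[i] -= i;            // targets become consecutive slots
        long long med = b[k / 2], gather = 0;
        for (long long x : b) gather += llabs(x - med);   // align around the median
        f[k - 1] = inv + gather;
    }
    return f;
}
```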
|
[
"binary search",
"data structures"
] | 2,300
| null |
1268
|
D
|
Invertation in Tournament
|
You are given a tournament — complete directed graph.
In one operation you can pick any vertex $v$ and change the direction of all edges with $v$ on one of the ends (i.e all edges $u \to v$ change their orientation to $v \to u$ and vice versa).
You want to make the tournament strongly connected with the smallest possible number of such operations if it is possible.
Also, if it is possible, you need to find the number of ways to make this number of operations to make graph strongly connected (two ways are different if for some $i$ vertex that we chose on $i$-th operation in one way is different from vertex that we chose on $i$-th operation in another way). You only need to find this value modulo $998\,244\,353$.
|
Lemma: for $n>6$ it is always possible to invert one vertex. Start by proving that for $n \geq 4$, in a strongly connected tournament it is possible to invert one vertex so that it remains strongly connected; this can be shown by induction. If there is a big SCC (with at least four vertices), invert a good vertex in it. If there are at least three strongly connected components, invert any vertex in a middle one. If there are exactly two SCCs, then both have size $\leq 3$, so the number of vertices is $\leq 6$. So you can check each vertex in $O(\frac{n^2}{32})$ with bitset. But it is also possible to check that a tournament is strongly connected by its degree sequence in $O(n)$. For this, note that the outdegree of each vertex in the rightmost SCC (in condensation order) is smaller than the outdegrees of all other vertices. So, in the order of decreasing degree, you can check for the prefix of length $k$ that the number of edges outgoing from it (the sum of degrees) is $\frac{k(k-1)}{2}+k(n-k)$; if there exists $k<n$ which satisfies this constraint, then it is simple to prove that the graph is not strongly connected. So, you can solve this problem in $O(n^2)$ or in $O(\frac{n^3}{32})$.
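The degree test can equivalently be written with outdegrees sorted in increasing order: the $k$ vertices of smallest outdegree have all their edges among themselves (prefix sum exactly $\frac{k(k-1)}{2}$) for some $k<n$ iff the tournament is not strongly connected. A sketch with our own naming:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Degree-sequence test, equivalent form with outdegrees sorted ascending
// (naming is ours): since any k vertices induce at least k(k-1)/2 outgoing
// edges among themselves, equality means no edge leaves that prefix.
bool stronglyConnected(vector<long long> outdeg) {
    int n = outdeg.size();
    sort(outdeg.begin(), outdeg.end());
    long long sum = 0;
    for (int k = 1; k < n; k++) {
        sum += outdeg[k - 1];
        if (sum == 1LL * k * (k - 1) / 2) return false;  // a proper "sink" prefix exists
    }
    return true;
}
```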
|
[
"brute force",
"divide and conquer",
"graphs",
"math"
] | 3,200
| null |
1268
|
E
|
Happy Cactus
|
You are given a cactus graph, in this graph each edge lies on at most one simple cycle.
It is given as $m$ edges $a_i, b_i$, weight of $i$-th edge is $i$.
Let's call a path in cactus \textbf{increasing} if the weights of edges on this path are increasing.
Let's call a pair of vertices $(u,v)$ \textbf{happy} if there exists an increasing path that starts in $u$ and ends in $v$.
For each vertex $u$ find the number of other vertices $v$, such that pair $(u,v)$ is happy.
|
At first, let's solve the problem for a tree. Let $dp_v$ be the answer for vertex $v$. Let's look at the edges in order of decreasing weight. How does $dp$ change when you process edge $i$? I claim that $dp'_v = dp_v$ for $v \neq a_i$ and $v \neq b_i$, and $dp'_{a_i} = dp'_{b_i} = dp_{a_i} + dp_{b_i}$. Why? I like to think about this problem as follows: in each vertex sits a rat; initially the $i$-th rat is infected by the $i$-th type of infection. After that, rats $a_m$ and $b_m$, $a_{m-1}$ and $b_{m-1}$, ..., $a_1$ and $b_1$ bite each other. When two rats bite each other, each ends up with the union of their infections. I claim that the number of infections of the $i$-th rat in the end is equal to the required value. On a tree, it is easy to see that the infection sets are disjoint when two rats bite each other, so the counts simply add up. But on a cactus, they may have a non-empty intersection. Let's say that $f_i$ is the number of infections of $a_i$ (same as the number of infections of $b_i$) after the moment when you process this edge. As in the tree case, when the $i$-th edge connects different connected components, $f_i$ is just the sum of the numbers of infections. When the $i$-th edge connects vertices of the same connected component, either $f_i$ is equal to the sum of the numbers of infections (let's call this value $x$), or $f_i$ is equal to $x - f_e$, where $e$ is some edge on the path between $a_i$ and $b_i$ (note that it is a cactus, so this path is unique). This $e$ is always the largest edge on the path, and it is subtracted if and only if the paths from it to $a_i$ and from it to $b_i$ are decreasing. So, we can solve the problem in $O(n+m)$.
|
[
"dp"
] | 3,400
| null |
1269
|
A
|
Equation
|
Let's call a positive integer \textbf{composite} if it has at least one divisor other than $1$ and itself. For example:
- the following numbers are composite: $1024$, $4$, $6$, $9$;
- the following numbers are not composite: $13$, $1$, $2$, $3$, $37$.
You are given a positive integer $n$. Find two composite integers $a,b$ such that $a-b=n$.
It can be proven that a solution always exists.
|
Print $9n$ and $8n$: indeed, $9n - 8n = n$, and both numbers are composite since $9n$ is divisible by $3$ and $8n$ is divisible by $2$.
|
[
"brute force",
"math"
] | 800
| null |
1269
|
B
|
Modulo Equality
|
You are given a positive integer $m$ and two integer sequences: $a=[a_1, a_2, \ldots, a_n]$ and $b=[b_1, b_2, \ldots, b_n]$. Both of these sequences have length $n$.
Permutation is a sequence of $n$ different positive integers from $1$ to $n$. For example, these sequences are permutations: $[1]$, $[1,2]$, $[2,1]$, $[6,7,3,4,1,2,5]$. These are not: $[0]$, $[1,1]$, $[2,3]$.
You need to find the non-negative integer $x$, and increase all elements of $a_i$ by $x$, modulo $m$ (i.e. you want to change $a_i$ to $(a_i + x) \bmod m$), so it would be possible to rearrange elements of $a$ to make it equal $b$, among them you need to find the smallest possible $x$.
In other words, you need to find the smallest non-negative integer $x$, for which it is possible to find some permutation $p=[p_1, p_2, \ldots, p_n]$, such that for all $1 \leq i \leq n$, $(a_i + x) \bmod m = b_{p_i}$, where $y \bmod m$ — remainder of division of $y$ by $m$.
For example, if $m=3$, $a = [0, 0, 2, 1], b = [2, 0, 1, 1]$, you can choose $x=1$, and $a$ will be equal to $[1, 1, 0, 2]$ and you can rearrange it to make it equal $[2, 0, 1, 1]$, which is equal to $b$.
|
There exists some $i$ such that $(a_i + x) \bmod m = b_1$. Let's enumerate it; then $x$ is $(b_1 - a_i) \bmod m$. This way you get $O(n)$ candidates, each of which can be checked in $O(n \log n)$ with sorting, or in $O(n)$ if you note that the order just shifts cyclically. Also, this problem can be solved in $O(n)$ with some string matching algorithms; I will leave it as a bonus.
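A direct $O(n^2 \log n)$ implementation of this candidate enumeration (names are ours; it uses the smallest element of $b$ as the anchor, which some $a_i$ must map to):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Candidate-enumeration sketch (names are ours): some a_i must map to the
// smallest element of b, so only n values of x are possible.
long long smallestShift(vector<long long> a, vector<long long> b, long long m) {
    int n = a.size();
    sort(b.begin(), b.end());
    vector<long long> cand;
    for (int i = 0; i < n; i++) cand.push_back(((b[0] - a[i]) % m + m) % m);
    sort(cand.begin(), cand.end());                 // try candidates in increasing order
    for (long long x : cand) {
        vector<long long> c(n);
        for (int i = 0; i < n; i++) c[i] = (a[i] + x) % m;
        sort(c.begin(), c.end());
        if (c == b) return x;                       // multisets match
    }
    return -1;                                      // unreachable for valid input
}
```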
|
[
"brute force",
"sortings"
] | 1,500
| null |
1270
|
A
|
Card Game
|
Two players decided to play one interesting card game.
There is a deck of $n$ cards, with values from $1$ to $n$. The values of cards are \textbf{pairwise different} (this means that no two different cards have equal values). At the beginning of the game, the deck is completely distributed between players such that each player has at least one card.
The game goes as follows: on each turn, each player chooses one of their cards (whichever they want) and puts on the table, so that the other player doesn't see which card they chose. After that, both cards are revealed, and the player, value of whose card was larger, takes both cards in his hand. Note that as all cards have different values, one of the cards will be strictly larger than the other one. Every card may be played any amount of times. The player loses if he doesn't have any cards.
For example, suppose that $n = 5$, the first player has cards with values $2$ and $3$, and the second player has cards with values $1$, $4$, $5$. Then one possible flow of the game is:
- The first player chooses the card $3$. The second player chooses the card $1$. As $3>1$, the first player gets both cards. Now the first player has cards $1$, $2$, $3$, the second player has cards $4$, $5$.
- The first player chooses the card $3$. The second player chooses the card $4$. As $3<4$, the second player gets both cards. Now the first player has cards $1$, $2$. The second player has cards $3$, $4$, $5$.
- The first player chooses the card $1$. The second player chooses the card $3$. As $1<3$, the second player gets both cards. Now the first player has only the card $2$. The second player has cards $1$, $3$, $4$, $5$.
- The first player chooses the card $2$. The second player chooses the card $4$. As $2<4$, the second player gets both cards. Now the first player is out of cards and loses. Therefore, the second player wins.
Who will win if both players are playing optimally? It can be shown that one of the players has a winning strategy.
|
We can show that the player who has the largest card (the one with value $n$) has the winning strategy! Indeed, if a player has the card with value $n$, he can choose to play it every time, taking the card from the opponent every time (as every other card has a value smaller than $n$). In at most $n-1$ moves, the opponent will be out of cards (and he will lose).
|
[
"games",
"greedy",
"math"
] | 800
|
"#include <bits/stdc++.h>\nusing namespace std;\n\n\nvoid solve()\n{\n int n, k1, k2;\n cin >> n >> k1 >> k2;\n \n vector<int> a(n);\n for (int i = 0; i<n; i++) cin>>a[i];\n for (int i = 0; i < k1; i++) \n {\n if (a[i] == n) {\n cout << \"YES\"<<endl;\n return;\n }\n }\n cout << \"NO\"<<endl;;\n}\n\nint main() {\n int t;\n cin>>t;\n for (int i = 0; i<t; i++) solve();\n}"
|
1270
|
B
|
Interesting Subarray
|
For an array $a$ of integers let's denote its maximal element as $\max(a)$, and minimal as $\min(a)$. We will call an array $a$ of $k$ integers \textbf{interesting} if $\max(a) - \min(a) \ge k$. For example, array $[1, 3, 4, 3]$ isn't interesting as $\max(a) - \min(a) = 4 - 1 = 3 < 4$ while array $[7, 3, 0, 4, 3]$ is as $\max(a) - \min(a) = 7 - 0 = 7 \ge 5$.
You are given an array $a$ of $n$ integers. Find some interesting \textbf{nonempty} subarray of $a$, or tell that it doesn't exist.
An array $b$ is a subarray of an array $a$ if $b$ can be obtained from $a$ by deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. In particular, an array is a subarray of itself.
|
We will show that if some interesting nonempty subarray exists, then there also exists an interesting subarray of length $2$. Indeed, let $a[l..r]$ be some interesting nonempty subarray, let $max$ be the position of its maximal element and $min$ the position of its minimal one; without loss of generality, $max > min$. Then $a_{max} - a_{min} \ge r - l + 1 \ge max - min + 1$, or $(a_{max} - a_{max - 1}) + (a_{max - 1} - a_{max - 2}) + \dots + (a_{min+1} - a_{min}) \ge max - min + 1$. The left side has $max - min$ terms, so at least one of them has to be $\ge 2$. Therefore, for some $i$, $a_{i+1} - a_i \ge 2$ holds (in the case $max < min$, symmetrically, $a_i - a_{i+1} \ge 2$), so the subarray $[i, i+1]$ is interesting! Therefore, the solution is as follows: for each $i$ from $1$ to $n-1$ check whether $|a_{i+1} - a_i|\ge 2$ holds. If this is true for some $i$, we have found an interesting subarray of length $2$; otherwise such a subarray doesn't exist. The complexity is $O(n)$.
|
[
"constructive algorithms",
"greedy",
"math"
] | 1,200
|
"#ifdef DEBUG\n#define _GLIBCXX_DEBUG\n#endif\n#pragma GCC optimize(\"O3\")\n#include <bits/stdc++.h>\nusing namespace std;\ntypedef long double ld;\ntypedef long long ll;\nconst int maxN = (int)3e5 + 100;\nint a[maxN];\nint main() {\n ios_base::sync_with_stdio(false);\n cin.tie(nullptr);\n //freopen(\"input.txt\", \"r\", stdin);\n int tst;\n cin >> tst;\n while (tst--) {\n int n;\n cin >> n;\n for (int i = 1; i <= n; i++) cin >> a[i];\n bool fnd = false;\n for (int i = 1; i + 1 <= n; i++) {\n if (abs(a[i + 1] - a[i]) > 1) {\n cout << \"YES\" << '\\n' << i << \" \" << i + 1 << '\\n';\n fnd = true;\n break;\n }\n }\n if (!fnd) cout << \"NO\" << '\\n';\n }\n return 0;\n}"
|
1270
|
C
|
Make Good
|
Let's call an array $a_1, a_2, \dots, a_m$ of nonnegative integer numbers \textbf{good} if $a_1 + a_2 + \dots + a_m = 2\cdot(a_1 \oplus a_2 \oplus \dots \oplus a_m)$, where $\oplus$ denotes the bitwise XOR operation.
For example, array $[1, 2, 3, 6]$ is good, as $1 + 2 + 3 + 6 = 12 = 2\cdot 6 = 2\cdot (1\oplus 2 \oplus 3 \oplus 6)$. At the same time, array $[1, 2, 1, 3]$ isn't good, as $1 + 2 + 1 + 3 = 7 \neq 2\cdot 1 = 2\cdot(1\oplus 2 \oplus 1 \oplus 3)$.
You are given an array of length $n$: $a_1, a_2, \dots, a_n$. Append at most $3$ elements to it to make it good. Appended elements don't have to be different. It can be shown that the solution always exists under the given constraints. If there are different solutions, you are allowed to output any of them. Note that \textbf{you don't have to minimize the number of added elements!}. So, if an array is good already you are allowed to not append elements.
|
Let the sum of the numbers be $S$, and their $\oplus$ be $X$. Solution 1: if $S\le 2X$ and $S$ were even, it would be enough to add to the array the $2$ numbers $\frac{2X-S}{2}, \frac{2X-S}{2}$: $X$ wouldn't change, and the sum would become $2X$. How to achieve this? Let's add to the array the number $2^{50} + (S\bmod 2)$. If the new sum and $\oplus$ of all numbers are $S_1$ and $X_1$ respectively, then we know that $S_1\le 2\cdot 2^{50} \le 2X_1$, and $S_1$ is even. We spent $3$ numbers. Solution 2: it's enough to add to the array the numbers $X$ and $S + X$. Indeed, $S + X + (S+X) = 2(S+X)$, and $X\oplus X \oplus (S+X) = (S+X)$. We spent only $2$ numbers! Both solutions have complexity $O(n)$ - to calculate $S$ and $X$.
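Solution 2 fits in a few lines; a sketch (the helper name is ours) that returns the two numbers to append:

```cpp
#include <vector>
using namespace std;

// Sketch of Solution 2 (the helper name is ours): returns the two numbers
// to append. New sum: S + X + (S + X) = 2(S + X); new xor: X ^ X ^ (S + X).
vector<long long> makeGood(const vector<long long>& a) {
    long long S = 0, X = 0;
    for (long long v : a) { S += v; X ^= v; }
    return {X, S + X};
}
```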
|
[
"bitmasks",
"constructive algorithms",
"math"
] | 1,400
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\nusing ll = long long;\n\nvoid solve()\n{\n int n;\n cin>>n;\n int temp;\n ll Sum = 0;\n ll Xor = 0;\n for (int i = 0; i<n; i++)\n {\n cin>>temp;\n Sum+=temp;\n Xor^=temp;\n }\n vector<ll> answer;\n ll good = (1ll<<50) + Sum%2;\n Sum+=good;\n Xor^=good;\n ll need = 2*Xor - Sum;\n cout<<3<<endl<<good<<' '<<need/2<<' '<<need/2<<endl; \n}\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n int t;\n cin>>t;\n for (int i = 0; i<t; i++) solve();\n\n}"
|
1270
|
D
|
Strange Device
|
\textbf{This problem is interactive}.
We have hidden an array $a$ of $n$ \textbf{pairwise different} numbers (this means that no two numbers are equal). You can get some information about this array using a new device you just ordered on Amazon.
This device can answer queries of the following form: in response to the positions of $k$ different elements of the array, it will return the position and value of the $m$-th among them in the ascending order.
Unfortunately, the instruction for the device was lost during delivery. However, you remember $k$, but don't remember $m$. Your task is to find $m$ using queries to this device.
You can ask \textbf{not more than $n$ queries}.
Note that the array $a$ and number $m$ are fixed before the start of the interaction and don't depend on your queries. In other words, \textbf{interactor is not adaptive}.
Note that you don't have to minimize the number of queries, and you don't need to guess array $a$. You just have to guess $m$.
|
We will show how to guess $m$ with $k+1\le n$ queries. Let's keep only the first $k+1$ elements of the array. We will ask $k+1$ queries: the $i$-th query is about all elements from the $1$-st to the $(k+1)$-th, except the $i$-th. Denote the elements from the $1$-st to the $(k+1)$-th in increasing order as $b_1 < b_2 < \dots < b_{k+1}$. Then, if we throw out an element which is larger than $b_m$, $b_m$ is the $m$-th smallest among the remaining elements. If we throw out an element which is smaller than or equal to $b_m$, then $b_{m+1}$ is the $m$-th smallest among the remaining elements. Therefore, among the answers to our $k+1$ queries we will see the element $b_{m+1}$ exactly $m$ times and the element $b_m$ exactly $k+1-m$ times. As $b_{m+1}>b_m$, we can first find $b_{m+1}$ (as the largest of all answers to the queries), and after that find $m$ as the number of times $b_{m+1}$ appears among these answers!
|
[
"constructive algorithms",
"interactive",
"math",
"sortings"
] | 1,900
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n int n, k;\n cin>>n>>k;\n vector<int> elements;\n for (int i = 1; i<=k+1; i++)\n {\n cout<<\"? \";\n \n for (int j = 1; j<=k+1; j++) if (j!=i) cout<<j<<' ';\n cout<<endl;\n \n int pos, el;\n cin>>pos>>el;\n elements.push_back(el);\n }\n int maxx = elements[0];\n for (auto it: elements) maxx = max(maxx, it);\n int m = 0;\n for (auto it: elements) if (it==maxx) m++;\n cout<<\"! \"<<m<<endl;\n\n}"
|
1270
|
E
|
Divide Points
|
You are given a set of $n\ge 2$ \textbf{pairwise different} points with integer coordinates. Your task is to partition these points into two \textbf{nonempty} groups $A$ and $B$, such that the following condition holds:
For every two points $P$ and $Q$, write the Euclidean distance between them on the blackboard: if they belong to the \textbf{same} group — with a \textbf{yellow} pen, and if they belong to \textbf{different} groups — with a \textbf{blue} pen. \textbf{Then no yellow number is equal to any blue number}.
It is guaranteed that such a partition exists for any possible input. If there exist multiple partitions, you are allowed to output any of them.
|
Let's divide all points into $4$ groups by the parity of their coordinates: $A_{00}$ - (even, even), $A_{01}$ - (even, odd), $A_{10}$ - (odd, even), $A_{11}$ - (odd, odd). If all points belong to one group, for example $A_{00}$, let's divide all coordinates by $2$ and start again. We just scaled the picture down by a factor of $2$, and all distances between points also got smaller by exactly a factor of $2$, so any valid partition for the new points remains valid for the old points. Note that this division by $2$ can't last long: every division by $2$ halves the distances between points, all the initial distances don't exceed $4\cdot 10^6$, and distances can't get smaller than $1$, so we can't have more than $\log{(4\cdot 10^6)}$ such divisions. From now on we suppose that at least two groups are nonempty. If there is at least one point with an odd sum of coordinates and at least one point with an even sum of coordinates, we can put into $A$ all points with an even sum of coordinates, and into $B$ - all with an odd one. Then for any two points $(x_1, y_1), (x_2, y_2)$, the square of the distance between them, $(x_1 - x_2)^2 + (y_1 - y_2)^2$, will be even if these points are from the same group, and odd if they are from different groups. As the square of every yellow number will be even, and of every blue one odd, no blue number will be equal to any yellow one, and the partition is valid. If all points have an even sum of coordinates, then only the groups $A_{00}$ and $A_{11}$ are nonempty. Then we can set $A = A_{00}, B = A_{11}$: in this case for any two points $(x_1, y_1), (x_2, y_2)$, the square of the distance between them will be divisible by $4$ if these points are from the same group, and will give remainder $2$ modulo $4$ if they are from different groups. Similarly, if all points have an odd sum of coordinates, then only the groups $A_{01}$ and $A_{10}$ are nonempty.
Then we can set $A = A_{01}, B = A_{10}$: in this case for any two points $(x_1, y_1), (x_2, y_2)$, the square of the distance between them, $(x_1 - x_2)^2 + (y_1 - y_2)^2$, will be divisible by $4$ if these points are from the same group, and will give remainder $2$ modulo $4$ if they are from different groups.
|
[
"constructive algorithms",
"geometry",
"math"
] | 2,300
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n int n;\n cin>>n;\n vector<pair<int, int>> p(n);\n for (int i = 0; i<n; i++) {cin>>p[i].first>>p[i].second; p[i].first+=1e6; p[i].second+=1e6;}\n\n while (true)\n {\n vector<vector<int>> cnt(2, vector<int>(2));\n for (int i = 0; i<n; i++) cnt[p[i].first%2][p[i].second%2]++;\n if (cnt[0][0]+cnt[1][1]>0 && cnt[0][1]+cnt[1][0]>0)\n {\n vector<int> A;\n for (int i = 0; i<n; i++) if ((p[i].first + p[i].second)%2==0) A.push_back(i);\n cout<<A.size()<<endl;\n for (auto it: A) cout<<it+1<<' ';\n return 0;\n }\n if (cnt[0][0]+cnt[0][1]>0 && cnt[1][1]+cnt[1][0]>0)\n {\n vector<int> A;\n for (int i = 0; i<n; i++) if (p[i].first%2==0) A.push_back(i);\n cout<<A.size()<<endl;\n for (auto it: A) cout<<it+1<<' ';\n return 0;\n }\n int x, y;\n for (int i = 0; i<2; i++)\n for (int j = 0; j<2; j++) if (cnt[i][j]>0) {x = i; y = j;}\n\n for (int i = 0; i<n; i++) {p[i].first = (p[i].first - x)/2; p[i].second = (p[i].second - y)/2;}\n }\n\n}"
|
1270
|
F
|
Awesome Substrings
|
Let's call a binary string $s$ \textbf{awesome} if it contains at least one symbol 1 and its length is divisible by the number of 1s in it. In particular, 1, 1010, 111 are \textbf{awesome}, but 0, 110, 01010 aren't.
You are given a binary string $s$. Count the number of its \textbf{awesome} substrings.
A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end.
|
It will be easier for us to think of the string as an array of ones and zeroes. Let's calculate the array of prefix sums $pref[\,]$. It's easy to see that a substring $[l;r]$ is awesome iff $r - l + 1 = k \cdot (pref[r] - pref[l - 1])$ for some integer $k$. This is equivalent to $r - k \cdot pref[r] = l - 1 - k \cdot pref[l - 1]$. So, we must calculate the number of pairs of equal integers in the array $t[i] = i - k \cdot pref[i]$ for each $k$ from $1$ to $n$. Let's fix some number $T$ and note that if $k > T$, then $pref[r] - pref[l - 1] = \frac{r - l + 1}{k} \le \frac{n}{T}$. In other words, in an awesome substring either the number of ones or $k$ is not big. For $k \le T$ we can calculate the number of pairs of equal integers in $O(nT)$. To do this, note that $-nT \le i - k \cdot pref[i] \le n$, so, independently for each $k$, we can put all numbers into one array and then count the equal pairs. After this, we can fix $l$ and iterate over the number of ones our substring will contain (this gives us bounds for $r$). Knowing the number of ones, we know which remainder $r$ should give. So, the task comes down to calculating how many integers in some segment give some fixed remainder when divided by some fixed integer. This can be calculated in constant time (but you should remember to count only such $r$ for which $k > T$). This part works in $O(n \cdot \frac{n}{T})$. If we choose $T = \sqrt{n}$, we get that our solution works in $O(n\sqrt{n})$ (in practice, this solution easily fits in the TL for $T$ from $300$ to $500$). Also note that if you use some data structures for the first part (like map or unordered_map in C++) and choose a big $T$, your solution can get TL. The TL wasn't big because of fast bruteforces.
|
[
"math",
"strings"
] | 2,600
|
"#define _CRT_SECURE_NO_WARNINGS\n#include <bits/stdc++.h>\n/*\n#pragma GCC target (\"avx2\")\n#pragma GCC optimization (\"O3\")\n#pragma GCC optimization (\"unroll-loops\")*/\n\nusing namespace std;\n\n#define ll long long\n#define ld long double\n#define mp make_pair\n\n\nll solve1 (string s)\n{\n\n int n = s.size();\n int m = sqrt(n);\n\n vector<int> pos_one;\n for (int i = 0; i<n; i++) if (s[i]=='1') pos_one.push_back(i);\n if (pos_one.size()==0) return 0;\n\n pos_one.push_back(n);\n\n\n vector<int> cnt(n*m+n);\n\n\n ll total = 0;\n for (ll i = 1; i<=m; i++)\n {\n int cur = 0;\n cnt[i*n]++;\n for (int j = 1; j<=n; j++)\n {\n if (s[j-1]=='1') cur++;\n int idx = j - i*cur;\n total+=cnt[idx+i*n];\n cnt[idx+i*n]++;\n }\n\n cur = 0;\n cnt[i*n]--;\n for (int j = 1; j<=n; j++)\n {\n if (s[j-1]=='1') cur++;\n int idx = j - i*cur;\n cnt[idx+i*n]--;\n }\n }\n\n\n vector<int> idx(n, -1);\n int cur = 0;\n for (int i = 0; i<n; i++)\n {\n idx[i] = cur;\n if (s[i]=='1') cur++;\n }\n\n for (int i = 0; i<n; i++)\n {\n for (ll j = 1; j<=n/m&&idx[i]+j<=cur; j++)\n {\n\n ll l = pos_one[idx[i]+j-1] - i + 1;\n ll r = pos_one[idx[i]+j] - i;\n l = max(l, j*(m+1));\n if (l<=r) total+=r/j - (l-1)/j;\n }\n }\n return total;\n}\n\nll solve2(string s)\n{\n int n = s.size();\n vector<int> pref(n+1);\n for (int i = 1; i<=n; i++)\n {\n pref[i] = pref[i-1];\n if (s[i-1]=='1') pref[i]++;\n }\n ll cnt = 0;\n for (int i = 0; i<n; i++)\n for (int j = i+1; j<=n; j++)\n {\n if ((pref[j]>pref[i])&&((j-i)%(pref[j]-pref[i])==0)) cnt++;\n }\n return cnt;\n}\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n string s;\n cin>>s;\n cout<<solve1(s);\n}"
|
1270
|
G
|
Subset with Zero Sum
|
You are given $n$ integers $a_1, a_2, \dots, a_n$, such that for each $1\le i \le n$ holds $i-n\le a_i\le i-1$.
Find some nonempty subset of these integers, whose sum is equal to $0$. It can be shown that such a subset exists under given constraints. If there are several possible subsets with zero-sum, you can find any of them.
|
Note that the condition $i-n \le a_i \le i-1$ is equivalent to $1 \le i - a_i \le n$. Let's build an oriented graph $G$ on $n$ nodes by the following principle: for each $i$ from $1$ to $n$, draw an oriented edge from vertex $i$ to vertex $i - a_i$. In this graph there is an outgoing edge from every vertex, so it has an oriented cycle. Let the vertices of this cycle be $i_1, i_2, \dots, i_k$. Then: $i_1 - a_{i_1} = i_2$ $i_2 - a_{i_2} = i_3$ $\vdots$ $i_k - a_{i_k} = i_1$ Adding all these equalities, we get $a_{i_1} + a_{i_2} + \dots + a_{i_k} = 0$. We can find some oriented cycle in $O(n)$ (just follow an available edge until you get to a previously visited vertex).
|
[
"constructive algorithms",
"dfs and similar",
"graphs",
"math"
] | 2,700
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\nvoid solve()\n{\n int n;\n cin >> n;\n vector<int> a(n+1);\n for (int i = 1; i <= n; i++) cin >> a[i];\n vector<bool> visited(n+1);\n int cur = 1;\n while (!visited[cur])\n {\n visited[cur] = true;\n cur = cur - a[cur];\n }\n vector<int> answer = {cur};\n int cur1 = cur - a[cur];\n while (cur1!=cur)\n {\n answer.push_back(cur1);\n cur1 = cur1 - a[cur1];\n }\n cout<<answer.size()<<'\\n';\n for (auto it: answer) cout<<it<<' ';\n cout<<'\\n';\n}\n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n int t;\n cin>>t;\n for (int i = 0; i<t; i++) solve();\n\n\n}"
|
1270
|
H
|
Number of Components
|
Suppose that we have an array of $n$ distinct numbers $a_1, a_2, \dots, a_n$. Let's build a graph on $n$ vertices as follows: for every pair of vertices $i < j$ let's connect $i$ and $j$ with an edge, if $a_i < a_j$. Let's define \textbf{weight} of the array to be the number of connected components in this graph. For example, weight of array $[1, 4, 2]$ is $1$, weight of array $[5, 4, 3]$ is $3$.
You have to perform $q$ queries of the following form — change the value at some position of the array. After each operation, output the weight of the array. Updates are not independent (the change stays for the future).
|
Firstly, note that all connected components form segments of consecutive indices. Indeed, let $i$ and $j$ lie in one component and consider $i < x < j$. Because $i$ and $j$ lie in one component, there exists a path connecting them: $v_1 = i, \ldots, v_t = j$, such that $v_k$ and $v_{k+1}$ are connected by an edge for every $k$. Then there exists $p$ such that $v_p < x < v_{p+1}$. But if $a_{v_p} < a_x$, then there is an edge between $v_p$ and $x$; otherwise there is an edge between $x$ and $v_{p+1}$ (because $a_x < a_{v_p} < a_{v_{p+1}}$). So, because components are non-intersecting segments, there is no edge between $[l_1;r_1]$ and $[l_2;r_2]$, $l_1 \le r_1 < l_2 \le r_2$, iff $\min_{x \in [l_1;r_1]}(a) > \max_{x \in [l_2;r_2]}(a)$. That's why the number of components equals the number of $i$ such that $\min_{x \in [1;i]}(a) > \max_{x \in [i + 1; n]}(a)$, increased by $1$. So our task is to count the prefix minimums which are strictly greater than the corresponding suffix maximums. To do this, let's set $a[0] = \infty$ and $a[n + 1] = 0$. Now consider some number $h$ and build a new array $b$ such that $b[i] = 1$ iff $a[i] \ge h$. Then if $h = \min_{x \in [1;i]}(a)$ for some suitable $i$, array $b$ looks like $11\ldots00$. On the other hand, if $b$ looks like $11\ldots00$ for some $h = a[t]$, there exists a unique $i$ for which the prefix minimum is greater than the suffix maximum. So we need to count the values of $h$ for which $b$ has this form. Note that $b$ has this form iff the number of adjacent pairs with $b[t] \ne b[t + 1]$ equals one. Moreover, for any $h$ this number is at least one. Thus, we just need to maintain $f[h]$ (the number of adjacent non-equal pairs in array $b$) for all $h$ and count the values of $h$ whose $f[h]$ equals the minimum. This can be done with a segment tree, noting that when we change one position of $a$, the value $f[h]$ can change only because of the integers $a[i - 1], a[i], a[i + 1]$.
Then, changing $f[h]$ is an addition of $\pm1$ on some segments. But we also need to remember that we should count only the values of $h$ which are equal to some integer in $a[]$ (so if we have a query $(pos, x)$ we need to activate $h=x$ and deactivate $h=a[pos]$ in the segment tree). We are sorry for the rather strict TL. It was chosen with the aim of cutting off $O(n\sqrt{n\log n})$ solutions (not sure if we succeeded), which worked no slower than Java solutions using a set.
|
[
"data structures"
] | 3,300
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\nclass SegTree {\nprivate:\n int n;\n vector<int> val, cnt, lazy;\n \n void push(int x, int l, int r) {\n if (lazy[x] == 0) return;\n val[x] += lazy[x];\n if (l != r) {\n lazy[x * 2] += lazy[x];\n lazy[x * 2 + 1] += lazy[x];\n }\n lazy[x] = 0;\n }\n \n void merge(int x) {\n if (val[x*2] == val[x*2+1]) {\n val[x] = val[x*2];\n cnt[x] = cnt[x*2] + cnt[x*2+1];\n } else if (val[x*2] < val[x*2+1]) {\n val[x] = val[x*2];\n cnt[x] = cnt[x*2];\n } else {\n val[x] = val[x*2+1];\n cnt[x] = cnt[x*2+1];\n }\n }\n \n void recount(int x, int tl, int tr, int pos, int delta) {\n push(x, tl, tr);\n if (tl == tr) {\n cnt[x] += delta;\n return;\n }\n int tm = (tl + tr) / 2;\n if (pos <= tm) {\n recount(x * 2, tl, tm, pos, delta);\n push(x * 2 + 1, tm + 1, tr);\n } else {\n push(x * 2, tl, tm);\n recount(x * 2 + 1, tm + 1, tr, pos, delta);\n }\n merge(x);\n }\n \n void update(int x, int tl, int tr, int l, int r, int val) {\n push(x, tl, tr);\n if (l > r) return;\n if (l == tl && tr == r) {\n lazy[x] += val;\n push(x, tl, tr);\n return;\n }\n int tm = (tl + tr) / 2;\n update(x * 2, tl, tm, l, min(r, tm), val);\n update(x * 2 + 1, tm + 1, tr, max(tm + 1, l), r, val);\n merge(x);\n }\n \n int query(int x, int tl, int tr, int l, int r) {\n push(x, tl, tr);\n if (l == tl && tr == r) {\n return val[x] == 1 ? cnt[x] : 0;\n }\n int tm = (tl + tr) / 2;\n if (r <= tm) {\n return query(x * 2, tl, tm, l, r);\n } else if (l > tm) {\n return query(x * 2 + 1, tm + 1, tr, l, r);\n } else {\n return query(x * 2, tl, tm, l, tm) +\n query(x * 2 + 1, tm + 1, tr, tm + 1, r);\n }\n }\npublic:\n inline void update(int l, int r, int val) {\n update(1, 0, n - 1, l, r, val);\n }\n\n inline int query(int l, int r) {\n return query(1, 0, n - 1, l, r);\n }\n\n inline void recount(int pos, int val) {\n recount(1, 0, n - 1, pos, val);\n }\n\n SegTree(int n) : n(n), val(4*n), cnt(4*n), lazy(4*n) {}\n};\n\nint main() {\n ios_base::sync_with_stdio(false);\n \n int n, q; cin >> n >> q;\n vector<int> a(n + 2);\n int mx = 0;\n for (int i = 0; i < n; ++i) {\n cin >> a[i+1];\n mx = max(mx, a[i+1]);\n }\n a[n+1] = 0;\n vector<int> pos(q), x(q);\n for (int i = 0; i < q; ++i) {\n cin >> pos[i] >> x[i];\n mx = max(mx, x[i]);\n }\n a[0] = mx+1;\n \n ++n;\n SegTree segTree(a[0] + 1);\n \n auto addDiff = [&](int a, int b, int sgn) {\n if (a == b) return;\n segTree.update(min(a, b), max(a, b) - 1, sgn);\n };\n \n for (int i = 0; i < n; ++i) {\n addDiff(a[i], a[i + 1], +1);\n segTree.recount(a[i], +1);\n }\n for (int i = 0; i < q; ++i) {\n int p = pos[i], y = x[i];\n segTree.recount(a[p], -1);\n addDiff(a[p - 1], a[p], -1);\n addDiff(a[p], a[p + 1], -1);\n segTree.recount(y, +1);\n addDiff(a[p - 1], y, +1);\n addDiff(y, a[p + 1], +1);\n a[p] = y;\n cout << segTree.query(1, a[0] - 1) << \"\\n\";\n }\n}"
|
1270
|
I
|
Xor on Figures
|
You are given an integer $k$ and a $2^k \times 2^k$ grid with some numbers written in its cells; cell $(i, j)$ initially contains the number $a_{ij}$. The grid is considered to be a torus, that is, the cell to the right of $(i, 2^k)$ is $(i, 1)$, and the cell below $(2^k, i)$ is $(1, i)$. You are also given a lattice figure $F$ consisting of $t$ cells, where $t$ is \textbf{odd}. $F$ doesn't have to be connected.
We can perform the following operation: place $F$ at some position on the grid. (Only translations are allowed, rotations and reflections are prohibited). Now choose any nonnegative integer $p$. After that, for each cell $(i, j)$, covered by $F$, replace $a_{ij}$ by $a_{ij}\oplus p$, where $\oplus$ denotes the bitwise XOR operation.
More formally, let $F$ be given by cells $(x_1, y_1), (x_2, y_2), \dots, (x_t, y_t)$. Then you can do the following operation: choose any $x, y$ with $1\le x, y \le 2^k$, any nonnegative integer $p$, and for every $i$ from $1$ to $t$ replace the number in the cell $(((x + x_i - 1)\bmod 2^k) + 1, ((y + y_i - 1)\bmod 2^k) + 1)$ with $a_{((x + x_i - 1)\bmod 2^k) + 1, ((y + y_i - 1)\bmod 2^k) + 1}\oplus p$.
Our goal is to make all the numbers equal to $0$. Can we achieve it? If we can, find the smallest number of operations in which it is possible to do this.
|
Let's fix the first cell of the figure, $(x_1, y_1)$. If the figure is placed so that its first cell is at cell $(p, q)$ of the board, the other cells of the figure lie in the cells $(p + x_i - x_1, q + y_i - y_1)$. Let's denote the shifts $(x_i - x_1, y_i - y_1)$ by $(c_i, d_i)$. Now compute the values $f[p, q] = \oplus_{i=1}^{t} a_{p - c_i, q - d_i}$ (all additions are done modulo $2^k$). $\textbf{Lemma 1.}$ If all values $f[p, q]$ are zero, then all numbers in the original array are zero too. It will be proved in the end. So our aim is to make the array $f$ zero. $\textbf{Lemma 2.}$ Assume that we apply an operation with value $x$ so that the first cell of the figure is at $(p, q)$. Then the values of $f$ change as follows: for all $(p', q')$ of the form $(p + 2\cdot c_i, q + 2\cdot d_i)$, the value $f[p', q']$ is XORed with $x$; the others do not change. Indeed, consider a cell $(p', q')$. Then $f'[p', q'] = f[p', q'] \oplus (x \oplus x \oplus \ldots \oplus x)$, where we XOR with $x$ once for every pair $(i, j)$ such that $(p' - c_i, q' - d_i) = (p + c_j, q + d_j)$. Note that if a pair $(i, j)$ is suitable, then the pair $(j, i)$ is suitable too. So, because $x \oplus x = 0$, only the pairs $(i, i)$ remain uncancelled, and then $(p', q')$ has the form $(p + 2\cdot c_i, q + 2\cdot d_i)$ for some $i$. We get that the task for array $a$ is equivalent to the task for array $f$ with the figure scaled two times. Note that in this case an operation changes only cells with one parity of coordinates, so we can solve the task independently for $4$ boards of size $2^{k-1}$ (also, some cells of the figure can map to the same cell; then we should not use them). It is only left to show that if $f$ is zero, then $a$ is also zero. It is sufficient to show that for any values of $f$ we can obtain the zero array (this follows from linear algebra: both facts are equivalent to the transform $a \mapsto f$ being non-degenerate).
But this statement can be easily proved by induction on $k$: if we get all zeroes for the problem with $k - 1$, then they will be zeroes in the array $f$ too. In the end we get $k=0$ and a figure with one cell (because $t$ is odd), and we can get zero in one operation. Also, from non-degeneracy it follows that the answer is unique (if we XOR on each figure placement at most once).
|
[
"constructive algorithms",
"fft",
"math"
] | 3,500
|
"#include <bits/stdc++.h>\n\nusing namespace std;\n\n#define ll long long\n#define mp make_pair \n\nint main() {\n ios_base::sync_with_stdio(0);\n cin.tie(nullptr);\n\n int k;\n cin>>k;\n int n = 1<<k;\n vector<vector<ll>> a(n, vector<ll>(n));\n for (int i = 0; i<n; i++)\n for (int j = 0; j<n; j++) cin>>a[i][j];\n\n int t;\n cin>>t;\n vector<pair<int, int>> F(t);\n for (int i = 0; i<t; i++) cin>>F[i].first>>F[i].second;\n vector<pair<int, int>> V;\n auto key = F[0];\n for (auto it: F)\n {\n V.push_back(mp(it.first - key.first, it.second - key.second));\n }\n for (int iteration = 0; iteration<k; iteration++)\n {\n vector<vector<ll>> new_grid(n, vector<ll>(n));\n for (int i = 0; i<n; i++)\n for (int j = 0; j<n; j++)\n {\n for (auto it: V) new_grid[i][j]^=a[(i+n-it.first)%n][(j+n-it.second)%n];\n }\n a = new_grid;\n for (int i = 0; i<t; i++)\n {\n V[i].first = (V[i].first*2)%n;\n V[i].second = (V[i].second*2)%n;\n }\n }\n int cnt = 0;\n for (int i = 0; i<n; i++)\n for (int j = 0; j<n; j++) if (a[i][j]) cnt++;\n cout<<cnt;\n}"
|
1271
|
A
|
Suits
|
A new delivery of clothing has arrived today to the clothing store. This delivery consists of $a$ ties, $b$ scarves, $c$ vests and $d$ jackets.
The store does not sell single clothing items — instead, it sells suits of two types:
- a suit of the first type consists of one tie and one jacket;
- a suit of the second type consists of one scarf, one vest and one jacket.
Each suit of the first type costs $e$ coins, and each suit of the second type costs $f$ coins.
Calculate the maximum possible cost of a set of suits that can be composed from the delivered clothing items. Note that one item cannot be used in more than one suit (though some items may be left unused).
|
There are two ways to approach this problem. The first way is to iterate over the number of suits of one type that we will compose and calculate the number of suits of the second type we can compose from the remaining items. The second way is to use the fact that if $e > f$, then we have to make as many suits of the first type as possible (and the opposite is true if $f > e$). So we first make the maximum possible number of more expensive suits, and use the remaining items to compose cheaper suits.
|
[
"brute force",
"greedy",
"math"
] | 800
| null |
1271
|
B
|
Blocks
|
There are $n$ blocks arranged in a row and numbered from left to right, starting from one. Each block is either black or white.
You may perform the following operation zero or more times: choose two \textbf{adjacent} blocks and invert their colors (white block becomes black, and vice versa).
You want to find a sequence of operations, such that they make all the blocks having the same color. You \textbf{don't have} to minimize the number of operations, but it should not exceed $3 \cdot n$. If it is impossible to find such a sequence of operations, you need to report it.
|
Suppose we want to make all blocks white (if we want to make them black, the following algorithm still works with a few changes). The first block has to be white, so if it is black, we have to invert the pair $(1, 2)$ once, otherwise we should not invert it at all (inverting twice is the same as not inverting at all). Then consider the second block. We need to invert it once if it is black - but if we invert the pair $(1, 2)$, then the first block becomes black. So we can't invert the pair $(1, 2)$, and we have to invert the pair $(2, 3)$ (or don't invert anything if the second block is white now). And so on: for the $i$-th block, we cannot invert the pair $(i - 1, i)$, since it will affect the color of the previous block. So we don't have much choice in our algorithm. After that, we arrive at the last block. If it is white, we are done with no more than $n - 1$ actions. If it is black, run the same algorithm, but we have to paint everything black now. If it fails again, then there is no answer.
|
[
"greedy",
"math"
] | 1,300
| null |
1271
|
C
|
Shawarma Tent
|
The map of the capital of Berland can be viewed on the infinite coordinate plane. Each point with integer coordinates contains a building, and there are streets connecting every building to four neighbouring buildings. All streets are parallel to the coordinate axes.
The main school of the capital is located in $(s_x, s_y)$. There are $n$ students attending this school, the $i$-th of them lives in the house located in $(x_i, y_i)$. It is possible that some students live in the same house, but no student lives in $(s_x, s_y)$.
After classes end, each student walks from the school to his house along one of the shortest paths. So the distance the $i$-th student goes from the school to his house is $|s_x - x_i| + |s_y - y_i|$.
The Provision Department of Berland has decided to open a shawarma tent somewhere in the capital (at some point with integer coordinates). It is considered that the $i$-th student will buy a shawarma if at least one of the shortest paths from the school to the $i$-th student's house goes through the point where the shawarma tent is located. It is forbidden to place the shawarma tent at the point where the school is located, but the coordinates of the shawarma tent may coincide with the coordinates of the house of some student (or even multiple students).
You want to find the maximum possible number of students buying shawarma and the optimal location for the tent itself.
|
Suppose that the point $(t_x, t_y)$ is the answer. If the distance between this point $t$ and the school is greater than $1$, then there exists at least one point $(T_x, T_y)$ such that: the distance between $T$ and the school is exactly $1$; $T$ lies on a shortest path between the school and $t$. Now we claim that $T$ can also be an optimal answer. That's because, if there exists a shortest path from the school to some house $i$ going through $t$, then a shortest path from the school to $t$ going through $T$ can be extended to become a shortest path to $i$. So we only need to check the four points adjacent to the school as possible answers. To check whether a point $(a_x, a_y)$ lies on a shortest path from $(b_x, b_y)$ to $(c_x, c_y)$, we need to verify that $min(b_x, c_x) \le a_x \le max(b_x, c_x)$ and $min(b_y, c_y) \le a_y \le max(b_y, c_y)$.
|
[
"brute force",
"geometry",
"greedy",
"implementation"
] | 1,300
| null |
1271
|
D
|
Portals
|
You play a strategic video game (yeah, we ran out of good problem legends). In this game you control a large army, and your goal is to conquer $n$ castles of your opponent.
Let's describe the game process in detail. Initially you control an army of $k$ warriors. Your enemy controls $n$ castles; to conquer the $i$-th castle, you need at least $a_i$ warriors (you are so good at this game that you don't lose any warriors while taking over a castle, so your army stays the same after the fight). After you take control over a castle, you recruit new warriors into your army — formally, after you capture the $i$-th castle, $b_i$ warriors join your army. Furthermore, after capturing a castle (or later) you can defend it: if you leave at least one warrior in a castle, this castle is considered defended. Each castle has an importance parameter $c_i$, and your total score is the sum of importance values over all defended castles. There are two ways to defend a castle:
- if you are currently in the castle $i$, you may leave one warrior to defend castle $i$;
- there are $m$ one-way portals connecting the castles. Each portal is characterised by two numbers of castles $u$ and $v$ (for each portal holds $u > v$). A portal can be used as follows: if you are currently in the castle $u$, you may send one warrior to defend castle $v$.
Obviously, when you order your warrior to defend some castle, he leaves your army.
You capture the castles in fixed order: you have to capture the first one, then the second one, and so on. After you capture the castle $i$ (but only before capturing castle $i + 1$) you may recruit new warriors from castle $i$, leave a warrior to defend castle $i$, and use any number of portals leading from castle $i$ to other castles having smaller numbers. As soon as you capture the next castle, these actions for castle $i$ won't be available to you.
If, during some moment in the game, you don't have enough warriors to capture the next castle, you lose. Your goal is to maximize the sum of importance values over all defended castles (note that you may hire new warriors in the last castle, defend it and use portals leading from it even after you capture it — your score will be calculated afterwards).
Can you determine an optimal strategy of capturing and defending the castles?
|
Note that for every castle $i$ there is some list of castles $x$ such that you can defend castle $i$ while standing in castle $x$. The key observation is that it's always optimal to defend castle $i$ (assuming we decided to defend it) at the latest possible castle, since it gives you more warriors between $i$ and $x$ (more freedom). We will prune all other $x$'s except the maximum one. Now our process looks like: conquer the next castle, acquire new warriors, then decide whether or not to defend each previous castle $i$ such that the current castle is its $x$ in terms of the paragraph above. There might be several such $i$ to process at the current castle. Since in this process we decide on each castle exactly once, the process can be simulated as a simple dynamic programming with states "number of castles conquered, number of warriors available"; it's possible to compute this dp in $\mathcal{O}(n \cdot C)$, where $C$ is the total number of warriors. Or you can use a greedy approach in $\mathcal{O}(n \log(n))$. Just maintain the process above, defending all the castles you can defend. In case it turns out you are lacking a few warriors later, just undo several defended castles. To do so, maintain the undoable castles in a Heap or std::set.
|
[
"data structures",
"dp",
"greedy",
"implementation",
"sortings"
] | 2,100
| null |
1271
|
E
|
Common Number
|
At first, let's define function $f(x)$ as follows: $$ f(x) = \begin{cases} \frac{x}{2} & \text{if } x \text{ is even} \\ x - 1 & \text{otherwise} \end{cases} $$
We can see that if we choose some value $v$ and will apply function $f$ to it, then apply $f$ to $f(v)$, and so on, we'll eventually get $1$. Let's write down all values we get in this process in a list and denote this list as $path(v)$. For example, $path(1) = [1]$, $path(15) = [15, 14, 7, 6, 3, 2, 1]$, $path(32) = [32, 16, 8, 4, 2, 1]$.
Let's write all lists $path(x)$ for every $x$ from $1$ to $n$. The question is next: what is the maximum value $y$ such that $y$ is contained in at least $k$ different lists $path(x)$?
Formally speaking, you need to find maximum $y$ such that $\left| \{ x ~|~ 1 \le x \le n, y \in path(x) \} \right| \ge k$.
|
Let's introduce a function $count(x)$ - the number of $y \in [1, n]$ such that $x \in path(y)$. The problem is now to find the greatest number $x$ such that $count(x) \ge k$. How can we calculate $count(x)$? First, let's consider the case when $x$ is odd. $x$ is contained in $path(x)$; then $x$ is contained in $path(2x)$ (since $\frac{2x}{2} = x$) and in $path(2x + 1)$ (since $2x + 1$ is odd, $f(2x + 1) = 2x$). The next numbers containing $x$ in their paths are $4x$, $4x + 1$, $4x + 2$ and $4x + 3$, then $8x$, $8x + 1$, ..., $8x + 7$, and so on. By processing each segment of numbers containing $x$ in their paths in $O(1)$, we can calculate $count(x)$ for odd $x$ in $O(\log n)$. What about even $x$? The first numbers containing $x$ in their paths are $x$ and $x + 1$, then $2x$, $2x + 1$, $2x + 2$ and $2x + 3$, then $4x$, $4x + 1$, ..., $4x + 7$, and so on. So the case with even $x$ can also be solved in $O(\log n)$. We can also see that $count(x) \ge count(x + 2)$ simply because for each number containing $x + 2$ in its path, there is another number that is less than it which contains $x$ in its path. And this fact means that if we want to find the greatest $x$ such that $count(x) \ge k$, we only have to run two binary searches: one binary search over odd numbers, and another binary search over even numbers.
|
[
"binary search",
"combinatorics",
"dp",
"math"
] | 2,100
| null |
1271
|
F
|
Divide The Students
|
Recently a lot of students were enrolled in Berland State University. All students were divided into groups according to their education program. Some groups turned out to be too large to attend lessons in the same auditorium, so these groups should be divided into two subgroups. Your task is to help divide the first-year students of the computer science faculty.
There are $t$ new groups belonging to this faculty. Students have to attend classes on three different subjects — maths, programming and P. E. All classes are held in different places according to the subject — maths classes are held in auditoriums, programming classes are held in computer labs, and P. E. classes are held in gyms.
Each group should be divided into two subgroups so that there is enough space in every auditorium, lab or gym for all students of the subgroup. For the first subgroup of the $i$-th group, maths classes are held in an auditorium with capacity of $a_{i, 1}$ students; programming classes are held in a lab that accommodates up to $b_{i, 1}$ students; and P. E. classes are held in a gym having enough place for $c_{i, 1}$ students. Similarly, the auditorium, lab and gym for the second subgroup can accept no more than $a_{i, 2}$, $b_{i, 2}$ and $c_{i, 2}$ students, respectively.
As usual, some students skip some classes. Each student considers some number of subjects (from $0$ to $3$) to be useless — that means, he skips all classes on these subjects (and attends all other classes). This data is given to you as follows — the $i$-th group consists of:
- $d_{i, 1}$ students which attend all classes;
- $d_{i, 2}$ students which attend all classes, except for P. E.;
- $d_{i, 3}$ students which attend all classes, except for programming;
- $d_{i, 4}$ students which attend only maths classes;
- $d_{i, 5}$ students which attend all classes, except for maths;
- $d_{i, 6}$ students which attend only programming classes;
- $d_{i, 7}$ students which attend only P. E.
There is one more type of students — those who don't attend any classes at all (but they, obviously, don't need any place in auditoriums, labs or gyms, so the number of those students is insignificant in this problem).
Your task is to divide each group into two subgroups so that every auditorium (or lab, or gym) assigned to each subgroup has enough place for all students from this subgroup attending the corresponding classes (if it is possible). Each student of the $i$-th group should belong to exactly one subgroup of the $i$-th group; it is forbidden to move students between groups.
|
Suppose we have only students of types $1$, $4$, $6$ and $7$ (all students attend either all subjects or only one subject). We can divide these students into two subgroups in $O(1)$: the first subgroup can accommodate no more than $min(a_1, b_1, c_1)$ students of the first type, and the second subgroup no more than $min(a_2, b_2, c_2)$ of them; it does not matter how we distribute them into subgroups, as long as their numbers do not exceed these limits. After that, it's easy to distribute the three other types. Okay, we can solve the problem with four types in $O(1)$. How to solve the problem with seven types in $O(M^3)$? Let's iterate on $f_2$, $f_3$ and $f_5$ - the numbers of students of types $2$, $3$ and $5$ placed into the first subgroup - and check whether we can distribute the remaining types! Though it may seem slow, we will do only about $10^9$ iterations, and the time limit is generous enough (the model solution works in 1.8 seconds without any cutoffs).
|
[
"brute force"
] | 2,700
| null |
1272
|
A
|
Three Friends
|
Three friends are going to meet each other. Initially, the first friend stays at the position $x = a$, the second friend stays at the position $x = b$ and the third friend stays at the position $x = c$ on the coordinate axis $Ox$.
In one minute \textbf{each friend independently} from other friends can change the position $x$ by $1$ to the left or by $1$ to the right (i.e. set $x := x - 1$ or $x := x + 1$) or even don't change it.
Let's introduce the total pairwise distance — the sum of distances between each pair of friends. Let $a'$, $b'$ and $c'$ be the final positions of the first, the second and the third friend, correspondingly. Then the total pairwise distance is $|a' - b'| + |a' - c'| + |b' - c'|$, where $|x|$ is the absolute value of $x$.
Friends are interested in the minimum total pairwise distance they can reach if they will move optimally. \textbf{Each friend will move no more than once}. So, more formally, they want to know the minimum total pairwise distance they can reach after one minute.
You have to answer $q$ independent test cases.
|
This problem can be solved with simple simulation. Let $na \in \{a - 1, a, a + 1\}$ be the new position of the first friend, $nb \in \{b - 1, b, b + 1\}$ and $nc \in \{c - 1, c, c + 1\}$ are new positions of the second and the third friends correspondingly. For the fixed positions you can update the answer with the value $|na - nb| + |na - nc| + |nb - nc|$. And iterating over three positions can be implemented with nested loops. Time complexity: $O(1)$ per test case.
|
[
"brute force",
"greedy",
"math",
"sortings"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
int calc(int a, int b, int c) {
return abs(a - b) + abs(a - c) + abs(b - c);
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int q;
cin >> q;
for (int i = 0; i < q; ++i) {
int a, b, c;
cin >> a >> b >> c;
int ans = calc(a, b, c);
for (int da = -1; da <= 1; ++da) {
for (int db = -1; db <= 1; ++db) {
for (int dc = -1; dc <= 1; ++dc) {
int na = a + da;
int nb = b + db;
int nc = c + dc;
ans = min(ans, calc(na, nb, nc));
}
}
}
cout << ans << endl;
}
return 0;
}
|
1272
|
B
|
Snow Walking Robot
|
Recently you have bought a snow walking robot and brought it home. Suppose your home is a cell $(0, 0)$ on an infinite grid.
You also have the sequence of instructions of this robot. It is written as the string $s$ consisting of characters 'L', 'R', 'U' and 'D'. If the robot is in the cell $(x, y)$ right now, he can move to one of the adjacent cells (depending on the current instruction).
- If the current instruction is 'L', then the robot can move to the left to $(x - 1, y)$;
- if the current instruction is 'R', then the robot can move to the right to $(x + 1, y)$;
- if the current instruction is 'U', then the robot can move to the top to $(x, y + 1)$;
- if the current instruction is 'D', then the robot can move to the bottom to $(x, y - 1)$.
You've noticed the warning on the last page of the manual: if the robot visits some cell (\textbf{except} $(0, 0)$) twice then it breaks.
So the sequence of instructions is valid if the robot starts in the cell $(0, 0)$, performs the given instructions, visits no cell other than $(0, 0)$ two or more times and ends the path in the cell $(0, 0)$. Also cell $(0, 0)$ should be visited \textbf{at most} two times: at the beginning and at the end (if the path is empty then it is visited only once). For example, the following sequences of instructions are considered valid: "UD", "RL", "UUURULLDDDDLDDRRUU", and the following are considered invalid: "U" (the endpoint is not $(0, 0)$) and "UUDD" (the cell $(0, 1)$ is visited twice).
The initial sequence of instructions, however, might not be valid. You don't want your robot to break, so you decided to reprogram it in the following way: you will remove some (possibly, all or none) instructions from the initial sequence of instructions, then rearrange the remaining instructions as you wish and turn on your robot to move.
Your task is to remove as few instructions from the initial sequence as possible and rearrange the remaining ones so that the sequence is valid. Report the valid sequence of the maximum length you can obtain.
Note that you can choose \textbf{any} order of remaining instructions (you don't need to minimize the number of swaps or any other similar metric).
You have to answer $q$ independent test cases.
|
Let $cnt[L]$ be the number of occurrences of the character 'L' in the initial string, $cnt[R]$ - the number of occurrences of the character 'R', and $cnt[U]$ and $cnt[D]$ - the same for the remaining characters. It is obvious that in every answer the number of 'L' equals the number of 'R', and the same holds for 'U' and 'D'. The maximum theoretical answer we can obtain has length $2 \cdot (min(cnt[L], cnt[R]) + min(cnt[U], cnt[D]))$. And... we almost always can obtain this answer! If there is at least one occurrence of each character, then we can construct some kind of rectangular path: $min(cnt[L], cnt[R])$ moves right, then $min(cnt[U], cnt[D])$ moves up, and the completing part. But there are some corner cases when some characters are missing. If $min(cnt[U], cnt[D]) = 0$ then our answer is empty or (if it is possible) it is "LR". The same if $min(cnt[L], cnt[R]) = 0$. Time complexity: $O(|s|)$ per test case.
|
[
"constructive algorithms",
"greedy",
"implementation"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
const string MOVES = "LRUD";
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int q;
cin >> q;
for (int i = 0; i < q; ++i) {
string s;
cin >> s;
map<char, int> cnt;
for (auto c : MOVES) cnt[c] = 0;
for (auto c : s) ++cnt[c];
int v = min(cnt['U'], cnt['D']);
int h = min(cnt['L'], cnt['R']);
if (min(v, h) == 0) {
if (v == 0) {
h = min(h, 1);
cout << 2 * h << endl << string(h, 'L') + string(h, 'R') << endl;
} else {
v = min(v, 1);
cout << 2 * v << endl << string(v, 'U') + string(v, 'D') << endl;
}
} else {
string res;
res += string(h, 'L');
res += string(v, 'U');
res += string(h, 'R');
res += string(v, 'D');
cout << res.size() << endl << res << endl;
}
}
return 0;
}
|
1272
|
C
|
Yet Another Broken Keyboard
|
Recently, Norge found a string $s = s_1 s_2 \ldots s_n$ consisting of $n$ lowercase Latin letters. As an exercise to improve his typing speed, he decided to type all substrings of the string $s$. Yes, all $\frac{n (n + 1)}{2}$ of them!
A substring of $s$ is a non-empty string $x = s[a \ldots b] = s_{a} s_{a + 1} \ldots s_{b}$ ($1 \leq a \leq b \leq n$). For example, "auto" and "ton" are substrings of "automaton".
Shortly after the start of the exercise, Norge realized that his keyboard was broken, namely, he could use only $k$ Latin letters $c_1, c_2, \ldots, c_k$ out of $26$.
After that, Norge became interested in how many substrings of the string $s$ he could still type using his broken keyboard. Help him to find this number.
|
Let's replace all characters of $s$ with zeros and ones (zero if the character is unavailable and one otherwise). Then we obtain a binary string, and we have to calculate the number of its substrings consisting only of ones. It can be done with the two pointers approach. If we stand at position $i$ and its value is zero, just skip it. Otherwise, find the leftmost position $j$ such that $j > i$ and the $j$-th value is zero. Then add the value $\frac{(j - i) \cdot (j - i + 1)}{2}$ to the answer and set $i := j$. Time complexity: $O(n)$.
|
[
"combinatorics",
"dp",
"implementation"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n, k;
cin >> n >> k;
string s;
cin >> s;
set<char> st;
for (int i = 0; i < k; ++i) {
char c;
cin >> c;
st.insert(c);
}
long long ans = 0;
for (int i = 0; i < n; ++i) {
int j = i;
while (j < n && st.count(s[j])) ++j;
int len = j - i;
ans += len * 1ll * (len + 1) / 2;
i = j;
}
cout << ans << endl;
return 0;
}
|
1272
|
D
|
Remove One Element
|
You are given an array $a$ consisting of $n$ integers.
You can remove \textbf{at most one} element from this array. Thus, the final length of the array is $n-1$ or $n$.
Your task is to calculate the maximum possible length of the \textbf{strictly increasing} contiguous subarray of the remaining array.
Recall that the contiguous subarray $a$ with indices from $l$ to $r$ is $a[l \dots r] = a_l, a_{l + 1}, \dots, a_r$. The subarray $a[l \dots r]$ is called strictly increasing if $a_l < a_{l+1} < \dots < a_r$.
|
Firstly, let's calculate for each $i$ from $1$ to $n$ the two following values: $r_i$ and $l_i$. $r_i$ is the maximum length of the increasing contiguous subarray starting at position $i$, and $l_i$ is the maximum length of the increasing contiguous subarray ending at position $i$. Initially, all $2n$ values are $1$ (the element itself). The array $r$ can be calculated in order from right to left with the following condition: if $a_i < a_{i + 1}$ then $r_i = r_{i + 1} + 1$, otherwise it remains $1$. The same with the array $l$, but we have to calculate its values in order from left to right, and if $a_i > a_{i - 1}$ then $l_i = l_{i - 1} + 1$, otherwise it remains $1$. Having these arrays, we can calculate the answer. The initial answer (if we don't remove any element) is the maximum value of the array $l$. And if we remove the $i$-th element (where $i = 2 \dots n - 1$), then we can update the answer with the value $l_{i - 1} + r_{i + 1}$ if $a_{i - 1} < a_{i + 1}$. Time complexity: $O(n)$.
|
[
"brute force",
"dp"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
int ans = 1;
vector<int> rg(n, 1);
for (int i = n - 2; i >= 0; --i) {
if (a[i + 1] > a[i]) rg[i] = rg[i + 1] + 1;
ans = max(ans, rg[i]);
}
vector<int> lf(n, 1);
for (int i = 1; i < n; ++i) {
if (a[i - 1] < a[i]) lf[i] = lf[i - 1] + 1;
ans = max(ans, lf[i]);
}
for (int i = 0; i < n - 2; ++i) {
if (a[i] < a[i + 2]) ans = max(ans, lf[i] + rg[i + 2]);
}
cout << ans << endl;
return 0;
}
|
1272
|
E
|
Nearest Opposite Parity
|
You are given an array $a$ consisting of $n$ integers. In one move, you can jump from the position $i$ to the position $i - a_i$ (if $1 \le i - a_i$) or to the position $i + a_i$ (if $i + a_i \le n$).
For each position $i$ from $1$ to $n$ you want to know the minimum number of moves required to reach any position $j$ such that $a_j$ has the opposite parity from $a_i$ (i.e. if $a_i$ is odd then $a_j$ has to be even and vice versa).
|
In this problem, we have a directed graph consisting of $n$ vertices (indices of the array) and at most $2n-2$ edges. Some vertices hold even values and some hold odd values. Our problem is to find, for every vertex, the nearest vertex having the opposite parity. Let's try to solve the problem for odd numbers and then just run the same algorithm for even numbers. We have multiple odd vertices and we need to find the nearest even vertex for each of them. This can be solved with a standard, simple but pretty idea. Let's reverse all edges of our graph and run a multi-source breadth-first search from all even vertices. The only difference between the standard bfs and the multi-source bfs is that the latter starts with many vertices at the first step (all of them at distance zero). Now we can notice that, thanks to this bfs, every odd vertex of our graph receives the distance equal to the minimum distance to some even vertex in the initial graph. This is exactly what we need. Then just run the same algorithm for even numbers and print the answer. Time complexity: $O(n)$.
|
[
"dfs and similar",
"graphs",
"shortest paths"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
const int INF = 1e9;
int n;
vector<int> a;
vector<int> ans;
vector<vector<int>> g;
void bfs(const vector<int> &start, const vector<int> &end) {
vector<int> d(n, INF);
queue<int> q;
for (auto v : start) {
d[v] = 0;
q.push(v);
}
while (!q.empty()) {
int v = q.front();
q.pop();
for (auto to : g[v]) {
if (d[to] == INF) {
d[to] = d[v] + 1;
q.push(to);
}
}
}
for (auto v : end) {
if (d[v] != INF) {
ans[v] = d[v];
}
}
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
cin >> n;
a = vector<int>(n);
g = vector<vector<int>>(n);
vector<int> even, odd;
for (int i = 0; i < n; ++i) {
cin >> a[i];
if (a[i] & 1) odd.push_back(i);
else even.push_back(i);
}
for (int i = 0; i < n; ++i) {
int lf = i - a[i];
int rg = i + a[i];
if (0 <= lf) g[lf].push_back(i);
if (rg < n) g[rg].push_back(i);
}
ans = vector<int>(n, -1);
bfs(odd, even);
bfs(even, odd);
for (auto it : ans) cout << it << " ";
cout << endl;
return 0;
}
|
1272
|
F
|
Two Bracket Sequences
|
You are given two bracket sequences (not necessarily regular) $s$ and $t$ consisting only of characters '(' and ')'. You want to construct the shortest \textbf{regular} bracket sequence that contains both given bracket sequences as \textbf{subsequences} (not necessarily contiguous).
Recall what a regular bracket sequence is:
- () is a regular bracket sequence;
- if $S$ is a regular bracket sequence, then ($S$) is a regular bracket sequence;
- if $S$ and $T$ are regular bracket sequences, then $ST$ (the concatenation of $S$ and $T$) is a regular bracket sequence.
Recall that the subsequence of the string $s$ is such string $t$ that can be obtained from $s$ by removing some (possibly, zero) amount of characters. For example, "coder", "force", "cf" and "cores" are subsequences of "codeforces", but "fed" and "z" are not.
|
Firstly, notice that the length of the answer cannot exceed $400$ ($200$ copies of ()). Now we can do some kind of simple dynamic programming. Let $dp_{i, j, bal}$ be the minimum possible length of the prefix of the regular bracket sequence if we have processed the first $i$ characters of the first sequence, the first $j$ characters of the second sequence, and the current balance is $bal$. Each dimension of this dp has size about $200$. The base of this dp is $dp_{0, 0, 0} = 0$; all other values are $dp_{i, j, bal} = +\infty$. Transitions are very easy: if we want to place an opening bracket, then we increase $i$ if the $i$-th character of $s$ exists and equals '(', the same with the second sequence and $j$; the balance increases by one, and the length of the answer increases by one. If we want to place a closing bracket, then we increase $i$ if the $i$-th character of $s$ exists and equals ')', the same with the second sequence and $j$; the balance decreases by one, and the length of the answer increases by one. Note that the balance cannot be greater than $200$ or less than $0$ at any moment. Don't forget to maintain parents in this dp to restore the actual answer! The remaining question is how to compute this dp. The easiest way is bfs, because every single transition increases the answer by exactly one. Then we can restore the answer from the state $dp_{|s|, |t|, 0}$. You can write it recursively, but I am not sure this will look good. You can also write it just with nested loops, if you are careful enough. Time complexity: $O(|s| \cdot |t| \cdot \max(|s|, |t|))$. If you know a faster solution, please share it!
|
[
"dp",
"strings",
"two pointers"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
const int N = 202;
const int INF = 1e9;
int dp[N][N][2 * N];
pair<pair<int, int>, pair<int, char>> p[N][N][2 * N];
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
string s, t;
cin >> s >> t;
int n = s.size(), m = t.size();
for (int i = 0; i <= n; ++i) {
for (int j = 0; j <= m; ++j) {
for (int bal = 0; bal < 2 * N; ++bal) {
dp[i][j][bal] = INF;
}
}
}
dp[0][0][0] = 0;
for (int i = 0; i <= n; ++i) {
for (int j = 0; j <= m; ++j) {
for (int bal = 0; bal < 2 * N; ++bal) {
if (dp[i][j][bal] == INF) continue;
int nxti = i + (i < n && s[i] == '(');
int nxtj = j + (j < m && t[j] == '(');
if (bal + 1 < 2 * N && dp[nxti][nxtj][bal + 1] > dp[i][j][bal] + 1) {
dp[nxti][nxtj][bal + 1] = dp[i][j][bal] + 1;
p[nxti][nxtj][bal + 1] = make_pair(make_pair(i, j), make_pair(bal, '('));
}
nxti = i + (i < n && s[i] == ')');
nxtj = j + (j < m && t[j] == ')');
if (bal > 0 && dp[nxti][nxtj][bal - 1] > dp[i][j][bal] + 1) {
dp[nxti][nxtj][bal - 1] = dp[i][j][bal] + 1;
p[nxti][nxtj][bal - 1] = make_pair(make_pair(i, j), make_pair(bal, ')'));
}
}
}
}
int ci = n, cj = m, cbal = 0;
for (int bal = 0; bal < 2 * N; ++bal) {
if (dp[n][m][bal] + bal < dp[n][m][cbal] + cbal) {
cbal = bal;
}
}
string res = string(cbal, ')');
while (ci > 0 || cj > 0 || cbal != 0) {
int nci = p[ci][cj][cbal].first.first;
int ncj = p[ci][cj][cbal].first.second;
int ncbal = p[ci][cj][cbal].second.first;
res += p[ci][cj][cbal].second.second;
ci = nci;
cj = ncj;
cbal = ncbal;
}
reverse(res.begin(), res.end());
cout << res << endl;
return 0;
}
|
1276
|
A
|
As Simple as One and Two
|
You are given a non-empty string $s=s_1s_2\dots s_n$, which consists only of lowercase Latin letters. Polycarp does not like a string if it contains at least one string "one" or at least one string "two" (or both at the same time) as a \textbf{substring}. In other words, Polycarp does not like the string $s$ if there is an integer $j$ ($1 \le j \le n-2$), that $s_{j}s_{j+1}s_{j+2}=$"one" or $s_{j}s_{j+1}s_{j+2}=$"two".
For example:
- Polycarp does not like strings "oneee", "ontwow", "twone" and "oneonetwo" (they all have at least one substring "one" or "two"),
- Polycarp likes strings "oonnee", "twwwo" and "twnoe" (they have no substrings "one" and "two").
Polycarp wants to select a certain set of indices (positions) and remove all letters on these positions. All removals are made at the same time.
For example, if the string looks like $s=$"onetwone", then if Polycarp selects two indices $3$ and $6$, then "{on\underline{\textbf{e}}tw\underline{\textbf{o}}ne}" will be selected and the result is "ontwne".
What is the minimum number of indices (positions) that Polycarp needs to select to make the string liked? What should these positions be?
|
Consider each occurrence of the substrings one and two. Obviously, at least one character has to be deleted in each such substring. These substrings cannot intersect in any way, except for one case: twone. Thus, the answer is necessarily no less than the value $c_{21}+c_{1}+c_{2}$, where $c_{21}$ is the number of occurrences of the string twone, $c_{1}$ is the number of occurrences of the string one (which are not part of twone) and $c_{2}$ is the number of occurrences of the string two (which are not part of twone). Let's propose a method that makes exactly $c_{21} + c_{1} + c_{2}$ removals and, thus, is optimal. Delete the character o in each occurrence of twone. This action deletes both substrings one and two at the same time. Next, delete the character n in each occurrence of one. This action deletes all substrings one. Next, delete the character w in each occurrence of two. This action deletes all substrings two. Note that it is important to delete the middle letters in the last two steps to avoid creating a new occurrence after the string collapses. The following is an example of a possible implementation of the main part of a solution:
|
[
"dp",
"greedy"
] | 1,400
|
string s;
cin >> s;
vector<int> r;
for (string t: {"twone", "one", "two"}) {
for (size_t pos = 0; (pos = s.find(t, pos)) != string::npos;) {
s[pos + t.length() / 2] = '?';
r.push_back(pos + t.length() / 2);
}
}
cout << r.size() << endl;
for (auto rr: r)
cout << rr + 1 << " ";
cout << endl;
|
1276
|
B
|
Two Fairs
|
There are $n$ cities in Berland and some pairs of them are connected by two-way roads. It is guaranteed that you can pass from any city to any other, moving along the roads. Cities are numbered from $1$ to $n$.
Two fairs are currently taking place in Berland — they are held in two different cities $a$ and $b$ ($1 \le a, b \le n$; $a \ne b$).
Find the number of pairs of cities $x$ and $y$ ($x \ne a, x \ne b, y \ne a, y \ne b$) such that if you go from $x$ to $y$ you will have to go through both fairs (the order of visits doesn't matter). Formally, you need to find the number of pairs of cities $x,y$ such that any path from $x$ to $y$ goes through $a$ and $b$ (in any order).
Print the required number of pairs. The order of two cities in a pair does not matter, that is, the pairs $(x,y)$ and $(y,x)$ must be taken into account only once.
|
This problem has a simple linear solution (just two depth-first searches) without involving cut points, biconnected components, and other advanced techniques. Let's reformulate this problem in the language of graph theory: you are given an undirected graph and two vertices $a$ and $b$, and you need to find the number of pairs of vertices ($x, y$) such that any path from $x$ to $y$ contains both vertices $a$ and $b$. In other words, we are interested in pairs of vertices ($x, y$) such that deleting the vertex $a$ breaks the connection from $x$ to $y$, and deleting the vertex $b$ also breaks the connection from $x$ to $y$. Let's remove the vertex $a$ and find the connected components in the resulting graph. Similarly, we remove the vertex $b$ and find the connected components in the resulting graph. Then the pair ($x, y$) interests us if $x$ and $y$ belong to different components both when removing $a$ and when removing $b$. Thus, let's find a pair $(\alpha_u, \beta_u)$ for each vertex $u$: these are the numbers of the connected components containing $u$ when $a$ and $b$ are removed, respectively. The pair ($x, y$) interests us if $\alpha_x \ne \alpha_y$ and $\beta_x \ne \beta_y$. The total number of pairs of such vertices is $(n - 2) \cdot (n - 3) / 2$ (both $x$ and $y$ must differ from $a$ and $b$). Let's subtract the number of uninteresting pairs from it. For each value $\alpha$ of the first component, let $c$ be the number of vertices $u$ with $\alpha_u = \alpha$; subtract $c \cdot (c-1) / 2$ from the current answer. Do the same for each value $\beta$ of the second component. Note that some uninteresting pairs were counted twice: these are pairs of vertices such that $(\alpha_x, \beta_x)$ and $(\alpha_y, \beta_y)$ coincide in both components.
For each pair $(\alpha, \beta)$ we can count the number $c$ of corresponding vertices and add $c \cdot (c-1) / 2$ back to the current answer. In the following code, let $p$ be an array of pairs of component numbers for all vertices except $a$ and $b$. Each pair is the number of the connected component of this vertex if $a$ is removed, and the number of the connected component of this vertex if $b$ is removed. To build the array $p$ we could use the code below:
|
[
"combinatorics",
"dfs and similar",
"dsu",
"graphs"
] | 1,900
|
// Assumes the helpers of the full solution (not shown in the editorial):
// #define forn(i, n) for (int i = 0; i < int(n); ++i)
// int n; vector<vector<int>> g; vector<int> color;
void dfs(int u, int c) {
if (color[u] == 0) {
color[u] = c;
for (int v: g[u])
dfs(v, c);
}
}
vector<int> groups(int f) {
color = vector<int>(n);
color[f] = -1;
int c = 0;
forn(i, n)
if (i != f && color[i] == 0)
dfs(i, ++c);
return color;
}
// begin of read input and construct graph g
// some code ...
// end of read input and construct graph g
vector<pair<int,int>> p(n - 2);
{
int index = 0;
auto g = groups(a);
forn(i, n)
if (i != a && i != b)
p[index++].first = g[i];
}
{
int index = 0;
auto g = groups(b);
forn(i, n)
if (i != a && i != b)
p[index++].second = g[i];
}
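The counting part is not reproduced in the editorial; a minimal sketch of it could look as follows, assuming $p$ has been built by the snippet above (the function name countPairs is ours):

```cpp
#include <bits/stdc++.h>
using namespace std;

// p[u] = (component of u when a is removed, component of u when b is removed),
// built for all vertices except a and b, e.g. by the snippet above.
long long countPairs(const vector<pair<int, int>>& p) {
    auto c2 = [](long long c) { return c * (c - 1) / 2; };
    long long m = p.size();
    long long ans = c2(m); // all pairs of vertices other than a and b
    map<int, long long> ca, cb;
    map<pair<int, int>, long long> cab;
    for (auto& pr : p) { ++ca[pr.first]; ++cb[pr.second]; ++cab[pr]; }
    for (auto& kv : ca) ans -= c2(kv.second);  // pairs equal in the first component
    for (auto& kv : cb) ans -= c2(kv.second);  // pairs equal in the second component
    for (auto& kv : cab) ans += c2(kv.second); // equal in both: subtracted twice above
    return ans;
}
```

The returned value is the number of interesting pairs, by the inclusion-exclusion described above.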
|
1276
|
C
|
Beautiful Rectangle
|
You are given $n$ integers. You need to choose a subset and put the chosen numbers in a beautiful rectangle (rectangular matrix). Each chosen number should occupy one of its rectangle cells, each cell must be filled with exactly one chosen number. Some of the $n$ numbers may not be chosen.
A rectangle (rectangular matrix) is called beautiful if in each row and in each column all values are different.
What is the largest (by the total number of cells) beautiful rectangle you can construct? Print the rectangle itself.
|
First, let's formulate the criterion that from the given set of numbers $x_1, x_2, \dots, x_k$ we can create a beautiful rectangle $a \times b$ (where $a \cdot b = k, a \le b$). Obviously, if some number occurs more than $a$ times, then among the $a$ rows there will be a row containing two or more occurrences of this number (pigeonhole principle). Let's prove that if all numbers in $x[1 \dots k]$ occur no more than $a$ times, we can create a beautiful rectangle $a \times b$ (where $a \cdot b = k, a \le b$). We will number the cells from the upper left corner in order from one, moving diagonally each time. Assume rows are numbered from $0$ to $a-1$ and columns are numbered from $0$ to $b-1$. Let's begin from the cell ($0,0$) and move right-down each time. If we face a border, we move cyclically. Thus, from the cell ($i,j$) we move to the cell $((i+1) \bmod a, (j+1) \bmod b)$ each time (where $p \bmod q$ is the remainder when $p$ is divided by $q$). If we are going to move to a visited cell, before moving let's assign $i := (i + 1) \bmod a$. Example of the numeration for rectangles $3\times3$ and $4\times6$. We can also prove that with such a numeration each row and each column contains numbers that differ by no less than $a-1$ (if we leave a row/column, we make a full turn before we are on that row/column again). Moreover, the difference reaches $a-1$ (not $a$) exactly when we move to a previously visited cell and assign $i := (i + 1) \bmod a$. We can also prove that the lengths of the diagonal orbits are equal to $\mathrm{lcm}(a,b)$ ($\mathrm{lcm}$ is the least common multiple); consequently, they are divisible by $a$. It means that if we arrange the numbers from $x$ in this order from the most common (in the worst case those that occur $a$ times) to the least common, each row and each column will always contain different numbers. Thus, we have a plan of the solution: find optimal $a$ and $b$ so that the answer is the largest rectangle $a \times b$ ($a \le b$).
For this we will iterate over all possible candidates for $a$, and for each candidate we will use each number $v$ from $x$ no more than $\min(c_v, a)$ times, where $c_v$ is the number of occurrences of $v$ in the given sequence. So, if we choose $a$, the upper bound of the rectangle area is $\sum \min(c_v, a)$ over all distinct numbers $v$ from the given sequence. Consequently, the maximal value of $b$ is $\sum \min(c_v, a) / a$. Let's update the answer if for the current iteration $a \cdot b$ is larger than the previously found answer (still keeping $a \le b$). We can maintain the value $\sum \min(c_v, a)$ while $a$ is incremented by one: each time we should add $geq[a]$ to this value, where $geq[a]$ is the number of distinct values in the given sequence which occur at least $a$ times (we can precalculate this array). The solution first reads the input data and precalculates the array $geq$, then finds the optimal sizes of the rectangle by iterating over its smallest side $a$. Below is a code which prints the optimal rectangle sizes, generates the required beautiful rectangle and prints it. Thus, the total complexity of the algorithm is $O(n \log n)$, where the $\log n$ factor appears only because of std::map (and we can easily get rid of it and make the algorithm linear).
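The code that precalculates $geq$ and searches for the optimal sizes is not reproduced here; a hedged sketch of those parts could look like this (the function name bestRectangle and its interface are our assumptions; the printing snippet shown later additionally relies on best, best_a, best_b and val_by_cnt from the full solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find the optimal rectangle sizes (best_a x best_b) for the multiset vals,
// following the editorial's geq[] idea. Returns {best_a, best_b}.
pair<int, int> bestRectangle(const vector<int>& vals) {
    int n = vals.size();
    map<int, int> cnt; // occurrences of each value
    for (int v : vals) ++cnt[v];
    vector<long long> exact(n + 1, 0); // exact[c] = distinct values occurring c times
    for (auto& kv : cnt) ++exact[kv.second];
    // geq[a] = number of distinct values occurring at least a times
    vector<long long> geq(n + 2, 0);
    for (int a = n; a >= 1; --a) geq[a] = geq[a + 1] + exact[a];
    long long best = 0, total = 0;
    int best_a = 0, best_b = 0;
    for (long long a = 1; a * a <= n; ++a) { // a <= b forces a <= sqrt(n)
        total += geq[a];          // total = sum over values of min(c_v, a)
        long long b = total / a;  // the widest rectangle of height a
        if (b >= a && a * b > best) { best = a * b; best_a = a; best_b = b; }
    }
    return {best_a, best_b};
}
```

For example, for the multiset {1, 2, 3, 3} the answer is a $2 \times 2$ rectangle (no value occurs more than twice), while for {1, 1, 1, 1} it is $1 \times 1$.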
|
[
"brute force",
"combinatorics",
"constructive algorithms",
"data structures",
"greedy",
"math"
] | 2,300
|
// Assumes from the omitted parts of the solution: best (= best_a * best_b),
// best_a, best_b (the optimal sizes), val_by_cnt[i] = list of values occurring
// exactly i times, and #define forn(i, n) for (int i = 0; i < int(n); ++i)
cout << best << endl << best_a << " " << best_b << endl;
vector<vector<int>> r(best_a, vector<int>(best_b));
int x = 0, y = 0;
for (int i = n; i >= 1; --i)
for (auto val: val_by_cnt[i])
forn(j, min(i, best_a)) {
if (r[x][y] != 0)
x = (x + 1) % best_a;
if (r[x][y] == 0)
r[x][y] = val;
x = (x + 1) % best_a;
y = (y + 1) % best_b;
}
forn(i, best_a) {
forn(j, best_b)
cout << r[i][j] << " ";
cout << endl;
}
|
1276
|
D
|
Tree Elimination
|
Vasya has a tree with $n$ vertices numbered from $1$ to $n$, and $n - 1$ edges numbered from $1$ to $n - 1$. Initially each vertex contains a token with the number of the vertex written on it.
Vasya plays a game. He considers all edges of the tree by increasing of their indices. For every edge he acts as follows:
- If both endpoints of the edge contain a token, remove a token from one of the endpoints and write down its number.
- Otherwise, do nothing.
The result of the game is the sequence of numbers Vasya has written down. Note that there may be many possible resulting sequences depending on the choice of endpoints when tokens are removed.
Vasya has played for such a long time that he thinks he exhausted all possible resulting sequences he can obtain. He wants you to verify him by computing the number of distinct sequences modulo $998\,244\,353$.
|
First of all, counting different sequences is the same as counting the number of different playbacks of the elimination process (that is, different combinations of which token was removed on each step). Indeed, if we consider any resulting sequence, we can simulate the process and unambiguously determine which endpoint got its token removed on each step, skipping steps when no tokens can be removed. To count different playbacks, we will use subtree dynamic programming. Let us consider a vertex $v$ and forget about all edges that are not incident to a vertex in the subtree of $v$; that is, we are only considering edges in the subtree of $v$, as well as the edge between $v$ and its parent $p$ (for convenience, assume that the root vertex has an edge with index $n$ to a "virtual" parent). Note that we assume that $p$ can not be eliminated by means other than considering its edge to $v$. As a shorthand, we will say "$v$ was compared with $u$" to mean "when the edge $(v, u)$ was considered, both its endpoints had their token", and "$v$ was killed by $u$" to mean "$v$ was compared with $u$ and lost its token on that step". We will distinguish three types of playbacks in $v$'s subtree: $v$ was killed before comparing to $p$ (situation $0$); $v$ was killed by $p$ (situation $1$); $v$ killed $p$ (situation $2$). We will write $dp_{v, s}$ for the number of playbacks in the subtree of $v$ that correspond to the situation $s$. Let $u_1, \ldots, u_k$ be the list of children of $v$ ordered by increasing $\mathrm{index}(u_i, v)$, and let $d$ be the largest index such that $\mathrm{index}(u_d, v) < \mathrm{index}(v, p)$. If $\mathrm{index}(u_1, v) > \mathrm{index}(v, p)$, put $d = 0$. Let us work out the recurrence relations for $dp_{v, s}$. For example, for $s = 0$ we must have that $v$ was killed by one of its children $u_i$ with $i \leq d$.
For a fixed $i$, the playback should have proceeded as follows: all children $u_1, \ldots, u_{i-1}$ were either killed before comparing to $v$, or killed by $v$ (but they could not have survived comparing with $v$); $v$ was killed by $u_i$ (which is situation $2$ from $u_i$'s perspective, since $v$ is the parent of $u_i$); all children $u_{i + 1}, \ldots, u_k$ were either killed before comparing to $v$, or "survived" the non-existent comparison to $v$ (but they could not have been killed by $v$). Consequently, we have the formula $dp_{v, 0} = \sum_{i = 1}^d \left(\prod_{j = 1}^{i - 1}(dp_{u_j, 0} + dp_{u_j, 1}) \times dp_{u_i, 2} \times \prod_{j = i + 1}^k (dp_{u_j, 0} + dp_{u_j, 2})\right).$ Arguing in a similar way, we can further obtain $dp_{v, 1} = \prod_{j = 1}^{d}(dp_{u_j, 0} + dp_{u_j, 1}) \times \prod_{j = d + 1}^k (dp_{u_j, 0} + dp_{u_j, 2}),$ $dp_{v, 2} = \sum_{i = d + 1}^k \left(\prod_{j = 1}^{i - 1}(dp_{u_j, 0} + dp_{u_j, 1}) \times dp_{u_i, 2} \times \prod_{j = i + 1}^k (dp_{u_j, 0} + dp_{u_j, 2})\right) + \prod_{j = 1}^k (dp_{u_j, 0} + dp_{u_j, 1}).$ In all these formulas we naturally assume that empty products are equal to $1$. To compute these formulas fast enough we can use prefix products of $dp_{u_j, 0} + dp_{u_j, 1}$ and suffix products of $dp_{u_j, 0} + dp_{u_j, 2}$. Finally, the answer is equal to $dp_{root, 0} + dp_{root, 1}$ (either the root was killed, or it wasn't and we assume that it was killed by its virtual parent). This solution can be implemented in $O(n)$ since the edges are already given in increasing order of their indices, but $O(n \log n)$ should also be enough.
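No reference code accompanies this editorial; as an illustration, here is a sketch of a single step of the DP, computing $dp_{v,\cdot}$ from the children's values with prefix and suffix products modulo $998244353$ (the function name combine and its interface are our assumptions):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 998244353;

// One DP step: given, for the children u_1..u_k of v (ordered by edge index),
// their values dp[u][0..2], and d = number of children whose edge to v comes
// before the edge (v, p), compute dp[v][0..2] via prefix/suffix products.
array<long long, 3> combine(const vector<array<long long, 3>>& ch, int d) {
    int k = ch.size();
    // pre[j] = prod of (dp_{u,0} + dp_{u,1}), suf[j] = prod of (dp_{u,0} + dp_{u,2})
    vector<long long> pre(k + 1, 1), suf(k + 2, 1);
    for (int j = 1; j <= k; ++j)
        pre[j] = pre[j - 1] * ((ch[j - 1][0] + ch[j - 1][1]) % MOD) % MOD;
    for (int j = k; j >= 1; --j)
        suf[j] = suf[j + 1] * ((ch[j - 1][0] + ch[j - 1][2]) % MOD) % MOD;
    array<long long, 3> dp = {0, 0, 0};
    for (int i = 1; i <= d; ++i) // v killed by u_i before reaching the edge (v, p)
        dp[0] = (dp[0] + pre[i - 1] * ch[i - 1][2] % MOD * suf[i + 1]) % MOD;
    dp[1] = pre[d] * suf[d + 1] % MOD; // v survives to the edge (v, p), killed by p
    for (int i = d + 1; i <= k; ++i)   // v kills p, later killed by u_i
        dp[2] = (dp[2] + pre[i - 1] * ch[i - 1][2] % MOD * suf[i + 1]) % MOD;
    dp[2] = (dp[2] + pre[k]) % MOD;    // v kills p and survives all children
    return dp;
}
```

For a leaf (no children, $d = 0$) this yields $(0, 1, 1)$: a leaf cannot lose its token early, and can either be killed by its parent or kill it, one way each. Applying it bottom-up and taking $dp_{root, 0} + dp_{root, 1}$ gives the answer; for a path on three vertices with edges in path order this produces $3$, matching a direct enumeration.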
|
[
"dp",
"trees"
] | 2,900
| null |
1276
|
E
|
Four Stones
|
There are four stones on an infinite line in integer coordinates $a_1, a_2, a_3, a_4$. The goal is to have the stones in coordinates $b_1, b_2, b_3, b_4$. The order of the stones does not matter, that is, a stone from any position $a_i$ can end up in at any position $b_j$, provided there is a required number of stones in each position (that is, if a coordinate $x$ appears $k$ times among numbers $b_1, \ldots, b_4$, there should be exactly $k$ stones at $x$ in the end).
We are allowed to move stones with the following operation: choose two stones at \textbf{distinct} positions $x$ and $y$ with at least one stone each, and move one stone from $x$ to $2y - x$. In other words, the operation moves a stone to a symmetric position relative to some other stone. At any moment it is allowed to have any number of stones at the same position.
Find any sequence of operations that achieves the goal, or determine that it is impossible. The sequence does not have to be shortest, but it may contain at most $1000$ operations.
|
First, when is the task impossible? If all $a_i$ have the same remainder $d$ modulo an integer $g$, then this will stay true regardless of our moves. The largest such $g$ we can take is $\mathrm{GCD}(a_2 - a_1, a_3 - a_1, a_4 - a_1)$. Remark: the case when all $a_i$ are equal is special. If the GCD's or the remainders do not match between $a_i$ and $b_i$, then there is no answer. For convenience, let us apply the transformation $x \to (x - d) / g$ to all coordinates, and assume that $g = 1$. We can observe further that the parity of each coordinate is preserved under any operation, thus the numbers of even and odd values among $a_i$ and $b_i$ should also match. Once these checks pass, we divide the task into two subproblems: given four stones $a_1, \ldots, a_4$, move them into a segment $[x, x + 1]$ for an arbitrary even $x$; given four stones in a segment $[x, x + 1]$, shift them into a segment $[y, y + 1]$. Suppose we can gather $a_i$ and $b_i$ into segments $[x, x + 1]$ and $[y, y + 1]$. Then we can solve the problem as follows: gather $a_i$ into $[x, x + 1]$; shift the stones from $[x, x + 1]$ to $[y, y + 1]$; undo gathering $b_i$ into $[y, y + 1]$ (all moves are reversible). First subproblem. Throughout, let $\Delta$ denote the maximum distance between any two stones. Our goal is to make $\Delta = 1$. We will achieve this by repeatedly decreasing $\Delta$ by at least a quarter, so we will be done in $O(\log \Delta)$ steps. Suppose $a_1 \leq a_2 \leq a_3 \leq a_4$, and one of the stones $a_i$ is in the range $[a_1 + \Delta / 4, a_1 + 3 \Delta / 4]$. Consider the two halves $[a_1, a_i]$ and $[a_i, a_4]$, and mirror all stones in the shorter half with respect to $a_i$. Then $a_i$ becomes either the leftmost or the rightmost stone, and the new maximum distance $\Delta'$ is at most $3 \Delta / 4$, which reaches our goal. What if no stone is in this range? Denote $d_i = \min(a_i - a_1, a_4 - a_i)$, the distance from $a_i$ to the closest extreme (leftmost or rightmost) stone.
We suppose that $d_2, d_3 < \Delta / 4$, otherwise we can reduce $\Delta$ as shown above. Further, at least one of $d_2, d_3$ is non-zero, since otherwise we would have $\mathrm{GCD}(a_2 - a_1, a_3 - a_1, a_4 - a_1) = 1 = \Delta$, and the goal would be reached. Observe that performing moves $(a_i, a_j)$, $(a_i, a_k)$ changes $a_i$ to $a_i + 2(a_k - a_j)$. With this we are able to change $d_i$ to $d_i + 2 d_j$. Repeatedly do this with $d_i \leq d_j$ (that is, we are adding twice the largest distance to the smallest). In $O(\log \Delta)$ steps we will have $\max(d_2, d_3) \geq \Delta / 4$, allowing us to decrease $\Delta$ as shown above. Finally, we have all $a_i \in [x, x + 1]$. To make things easier, if $x$ is odd, move all $a_i = x$ to $x + 2$. Second subproblem. We now have all stones in a range $[x, x + 1]$, and we want to shift them by $2d$ (we will assume that $2d \geq 0$). Observe that any arrangement of stones can be shifted by $2\Delta$ by mirroring all stones with respect to the rightmost stone twice. Consider an operation we will call Expand: when $a_1 \leq a_2 \leq a_3 \leq a_4$, make moves $(a_3, a_1)$ and $(a_2, a_4)$. We can see that if we apply Expand $k$ times, the largest distance $\Delta$ grows exponentially in $k$. We can then shift the stones by $2d$ as follows: 1) apply Expand until $\Delta > d$; 2) while $\Delta \leq d$, shift all stones by $2\Delta$ as shown above and decrease $d$ by $\Delta$; 3) if $\Delta = 1$, exit; 4) perform Expand${}^{-1}$ (the inverse operation to Expand) and return to step 2. In the end, we will have $\Delta = 1$ and $d = 0$. Further, since $\Delta$ grows exponentially in the number of Expand's, for each value of $\Delta$ we will be making $O(1)$ shifts, thus the total number of operations for this method is $O(\log d)$.
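No reference code is given for this problem; the invariant checks described above (matching GCD of differences, matching remainder, and matching parity counts after the normalization) can be sketched as follows. The editorial's construction implies these necessary conditions are also sufficient; the function name feasible and its interface are our own:

```cpp
#include <bits/stdc++.h>
using namespace std;

long long gcdll(long long x, long long y) { // gcd of non-negative values
    while (y) { x %= y; swap(x, y); }
    return x;
}

// Check the invariants: the GCD g of pairwise differences, the common
// remainder modulo g, and the parity counts after x -> (x - d) / g are
// all preserved by every move.
bool feasible(array<long long, 4> a, array<long long, 4> b) {
    sort(a.begin(), a.end());
    sort(b.begin(), b.end());
    long long ga = 0, gb = 0;
    for (int i = 1; i < 4; ++i) {
        ga = gcdll(ga, a[i] - a[0]);
        gb = gcdll(gb, b[i] - b[0]);
    }
    if (ga == 0 || gb == 0)     // all four stones equal: no move is possible
        return a == b;
    if (ga != gb) return false; // the preserved GCD must agree
    long long g = ga;
    if (((b[0] - a[0]) % g + g) % g != 0) return false; // remainders differ
    int pa = 0, pb = 0; // odd counts after normalization, common offset a[0]
    for (int i = 0; i < 4; ++i) {
        pa += ((a[i] - a[0]) / g) & 1;
        pb += ((b[i] - a[0]) / g) & 1;
    }
    return pa == pb;
}
```

For example, shifting $\{0,1,2,3\}$ to $\{2,3,4,5\}$ passes all checks, while $\{0,2,4,6\}$ to $\{1,3,5,7\}$ fails on the remainder modulo $g = 2$.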
|
[
"constructive algorithms"
] | 3,500
| null |
1276
|
F
|
Asterisk Substrings
|
Consider a string $s$ of $n$ lowercase English letters. Let $t_i$ be the string obtained by replacing the $i$-th character of $s$ with an asterisk character *. For example, when $s = \mathtt{abc}$, we have $t_1 = \tt{*bc}$, $t_2 = \tt{a*c}$, and $t_3 = \tt{ab*}$.
Given a string $s$, count the number of distinct strings of lowercase English letters and asterisks that occur as a substring of at least one string in the set $\{s, t_1, \ldots, t_n \}$. The empty string should be counted.
Note that *'s are just characters and do not play any special role as in, for example, regex matching.
|
There are two types of substrings we have to count: with and without the *. The substrings without * are just all substrings of the initial string $s$, which can be counted in $O(n)$ or $O(n \log n)$ using any standard suffix structure. We now want to count the substrings containing *. Consider all such substrings of the form "$u$*$v$", where $u$ and $v$ are letter strings. For a fixed prefix $u$, how many ways are there to choose the suffix $v$? Consider the right context $rc(u)$ - the set of positions $i + |u|$ such that the suffix $s_i s_{i + 1} \ldots$ starts with $u$. Then, the number of valid $v$ is the number of distinct prefixes of suffixes starting at positions in the set $\{i + 1 \mid i \in rc(u)\}$. For an arbitrary set $X$ of suffixes of $s$ (given by their positions), how do we count the number of their distinct prefixes? If $X$ is ordered by lexicographic comparison of suffixes, the answer is $1 + \sum_{i \in X} (|s| - i) - \sum_{i, j\text{ are adjacent in }X} |LCP(i, j)|$, where $LCP(i, j)$ is the largest common prefix length of suffixes starting at $i$ and $j$. Recall that $LCP(i, j)$ queries can be answered online in $O(1)$ by constructing the suffix array of $s$ with adjacent LCPs, and using a sparse table to answer RMQ queries. With this, we can implement $X$ as a sorted set with lexicographic comparison. With a bit more work we can also process updates to $X$ and maintain $\sum LCP(i, j)$ over adjacent pairs, thus always keeping the actual number of distinct prefixes. Now to solve the actual problem. Construct the suffix tree of $s$ in $O(n)$ or $O(n \log n)$. We will run a DFS on the suffix tree that considers all possible positions of $u$. When we finish processing a vertex corresponding to a string $u$, we want to have the structure keeping the ordered set $X(u)$ of suffixes for $rc(u)$. To do this, consider children $w_1, \ldots, w_k$ of $u$ in the suffix tree.
Then, $X(u)$ can be obtained by merging $ex(X(w_1), |w_1| - |u|), \ldots, ex(X(w_k), |w_k| - |u|)$, where $ex(X, l)$ is the structure obtained by prolonging all prefixes of $X$ by $l$, provided that all extensions are equal substrings. Note that $ex(X, l)$ does not require reordering of suffixes in $X$, and simply increases the answer by $l$, but we need to subtract $l$ from all starting positions in $X$, which can be done lazily. Using the smallest-to-largest merging trick, we can always have an up-to-date $X(u)$ in $O(n \log^2 n)$ total time. We compute the answer by summing over all $u$. Suppose the position of $u$ in the suffix tree is not a vertex, but lies on an edge, $l$ characters above a vertex $w$. Then we need to add $l + ans(X(w))$ to the answer. Since we know $X(w)$ for every vertex $w$, the total contribution of these positions can be accounted for in $O(1)$ per edge. If $u$ is a vertex, on the step of processing $u$ we add $ans(ex(X(w_1), |w_1| - |u| - 1) \cup \ldots \cup ex(X(w_k), |w_k| - |u| - 1))$, using smallest-to-largest again. Note that we still need to return $X(u)$, thus after computing $u$'s contribution we need to undo the merging and merge again with different shifts. The total complexity is $O(n \log^2 n)$. If we compute LCPs in $O(\log n)$ instead of $O(1)$, we end up with $O(n \log^3 n)$, which is pushing it, but can probably pass.
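The distinct-prefix formula can be sanity-checked against brute force. Below is an illustrative Python sketch (the helper names are made up for this example): `by_formula` sorts the positions by the lexicographic order of their suffixes and evaluates $1 + \sum_{i \in X}(|s| - i) - \sum_{\text{adjacent}} |LCP|$, while `distinct_prefixes` enumerates the prefixes directly.

```python
def distinct_prefixes(s, X):
    # brute force: collect all prefixes (including the empty one) of suffixes s[i:]
    prefs = {""}
    for i in X:
        suf = s[i:]
        for l in range(1, len(suf) + 1):
            prefs.add(suf[:l])
    return len(prefs)

def lcp(a, b):
    # naive longest common prefix length of two strings
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def by_formula(s, X):
    # sort positions by lexicographic order of their suffixes, then apply
    # 1 + sum of suffix lengths - sum of adjacent LCPs
    order = sorted(X, key=lambda i: s[i:])
    total = 1 + sum(len(s) - i for i in order)
    for a, b in zip(order, order[1:]):
        total -= lcp(s[a:], s[b:])
    return total

s = "abacaba"
X = [0, 2, 4]
print(distinct_prefixes(s, X), by_formula(s, X))  # 12 12
```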
|
[
"string suffix structures"
] | 3,400
| null |
1277
|
A
|
Happy Birthday, Polycarp!
|
Hooray! Polycarp turned $n$ years old! The Technocup Team sincerely congratulates Polycarp!
Polycarp celebrated all of his $n$ birthdays: from the $1$-st to the $n$-th. At the moment, he is wondering: how many times did he turn a beautiful number of years?
According to Polycarp, a positive integer is beautiful if it consists of only one digit repeated one or more times. For example, the following numbers are beautiful: $1$, $77$, $777$, $44$ and $999999$. The following numbers are not beautiful: $12$, $11110$, $6969$ and $987654321$.
Of course, Polycarpus uses the decimal numeral system (i.e. radix is 10).
Help Polycarpus to find the number of numbers from $1$ to $n$ (inclusive) that are beautiful.
|
It seems that one of the easiest ways to solve this problem is to iterate over all beautiful numbers up to $10^9$ and check each of them to ensure that it does not exceed $n$. First of all, you can iterate over a length from $1$ to $9$, maintaining a number of the form 11...1 of this length, and inside iterate over a factor for this number from $1$ to $9$. The main part of a solution might look like this:
|
[
"implementation"
] | 1,000
|
int n;
cin >> n;
int b = 0, ans = 0;
for (int len = 1; len <= 9; len++) {
    b = b * 10 + 1;              // repunit of this length: 1, 11, 111, ...
    for (int m = 1; m <= 9; m++) // its multiples m * b are exactly the beautiful numbers
        if (b * m <= n)
            ans++;
}
cout << ans << endl;
|
1277
|
B
|
Make Them Odd
|
There are $n$ positive integers $a_1, a_2, \dots, a_n$. In one move you can choose any even value $c$ and divide by two \textbf{all} elements that equal $c$.
For example, if $a=[6,8,12,6,3,12]$ and you choose $c=6$, then $a$ is transformed into $a=[3,8,12,3,3,12]$ after the move.
You need to find the minimal number of moves to transform $a$ into an array of odd integers only (i.e. no element is divisible by $2$).
|
Consider the greatest value in the set. If it is even, we will have to divide it by two at some point, and it is always optimal to do it on the first move, because the result of the division can be divided again (if needed) later. So, the optimal way to solve this problem is: as long as there is at least one even value in the set, choose the maximal even number in the set and divide all the numbers equal to it by $2$. For an efficient implementation, you can use the standard library to represent the set, e.g. std::set for C++ (in other languages there are analogues of this data structure, or you can modify the solution). Below is an example of a possible implementation of the main part of the solution:
|
[
"greedy",
"number theory"
] | 1,200
|
int n;
cin >> n;
set<int> a;
for (int i = 0; i < n; i++) {
int elem;
cin >> elem;
a.insert(elem);
}
int result = 0;
while (!a.empty()) {
int m = *a.rbegin();
a.erase(m);
if (m % 2 == 0) {
result++;
a.insert(m / 2);
}
}
cout << result << endl;
|
1277
|
D
|
Let's Play the Words?
|
Polycarp has $n$ \textbf{different} binary words. A word is called binary if it contains only characters '0' and '1'. For example, these words are binary: "0001", "11", "0" and "0011100".
Polycarp wants to offer his set of $n$ binary words to play a game "words". In this game, players name words and each next word (starting from the second) must start with the last character of the previous word. The first word can be any. For example, this sequence of words can be named during the game: "0101", "1", "10", "00", "00001".
Word reversal is the operation of reversing the order of the characters. For example, the word "0111" after the reversal becomes "1110", the word "11010" after the reversal becomes "01011".
Probably, Polycarp has such a set of words that there is no way to put them in an order corresponding to the game rules. In this situation, he wants to reverse some words from his set so that:
- the final set of $n$ words still contains \textbf{different} words (i.e. all words are unique);
- there is a way to put all words of the final set of words in the order so that the final sequence of $n$ words is consistent with the game rules.
Polycarp wants to reverse the minimal number of words. Please, help him.
|
For a concrete set of words, it's not hard to find a criterion for checking whether there is a correct order of the words for playing the game; let's call such sets of words correct. A set of words is correct if, firstly, the numbers of words of kind 0...1 and of kind 1...0 differ by no more than $1$, and, secondly, whenever words of both kinds 0...0 and 1...1 are present, there is at least one word of kind 0...1 or 1...0. (If one of the kinds 0...0, 1...1 is absent, no extra condition is needed: such words have the same character at the beginning and at the end, and we can insert them at any suitable position.) It can be easily proved if we note that this problem is equivalent to an Euler traversal of a directed graph with two nodes. But let's prove it without resorting to graph theory: if there are words of both kinds 0...0 and 1...1, but no words of kinds 0...1 and 1...0, then starting from a word of one kind you can never get to a word of the other kind; consequently, if words of both kinds 0...0 and 1...1 are present, there should be at least one word of kind 0...1 or 1...0 - this is a necessary condition. If the numbers of words of kinds 0...1 and 1...0 differ by no more than $1$, we can call them alternately, starting with the kind that is larger (if these numbers are equal, we can start with either kind), and we can insert words of kinds 0...0 and 1...1 at any suitable moment. Reversals only affect the mutual numbers of words of kinds 0...1 and 1...0. Therefore, immediately while reading the input data, we can check the necessary condition above. Without loss of generality, assume the number of words of kind 0...1 is $n_{01}$, the number of words of kind 1...0 is $n_{10}$, and $n_{01} > n_{10} + 1$. Remember that all words in the current set are unique. Let's prove that we can always choose some words of kind 0...1 and reverse them so that the two counts differ by at most $1$ (and as a result all words are still unique).
In fact, among the words of kind 0...1 there are at most $n_{10}$ words whose reversal turns them into an already existing word (because the reversal is of kind 1...0, and there are only $n_{10}$ such words). That means there are at least $n_{01}-n_{10}$ words which we can reverse while keeping all words unique, and since each reversal decreases the difference $n_{01}-n_{10}$ by $2$, it suffices to reverse any $\lfloor (n_{01}-n_{10})/2 \rfloor$ of them. Thus, after checking the necessary condition above, we just reverse $\lfloor (n_{01}-n_{10})/2 \rfloor$ words of the kind that is larger, choosing words whose reversals aren't duplicates. Below is an example of a possible implementation of the main part of the solution described above.
|
[
"data structures",
"hashing",
"implementation",
"math"
] | 1,900
|
int n;
cin >> n;
vector<string> s(n);
set<string> s01;
set<string> s10;
vector<bool> u(2);
forn(i, n) {
cin >> s[i];
if (s[i][0] == '0' && s[i].back() == '1')
s01.insert(s[i]);
if (s[i][0] == '1' && s[i].back() == '0')
s10.insert(s[i]);
u[s[i][0] - '0'] = u[s[i].back() - '0'] = true;
}
if (u[0] && u[1] && s01.size() == 0 && s10.size() == 0) {
cout << -1 << endl;
continue;
}
vector<int> rev;
if (s01.size() > s10.size() + 1) {
forn(i, n)
if (s[i][0] == '0' && s[i].back() == '1') {
string ss(s[i]);
reverse(ss.begin(), ss.end());
if (s10.count(ss) == 0)
rev.push_back(i);
}
} else if (s10.size() > s01.size() + 1) {
forn(i, n)
if (s[i][0] == '1' && s[i].back() == '0') {
string ss(s[i]);
reverse(ss.begin(), ss.end());
if (s01.count(ss) == 0)
rev.push_back(i);
}
}
int ans = max(0, (int(max(s01.size(), s10.size())) - int(min(s01.size(), s10.size()))) / 2);
cout << ans << endl;
forn(i, ans)
cout << rev[i] + 1 << " ";
cout << endl;
|
1278
|
A
|
Shuffle Hashing
|
Polycarp has built his own web service. Being a modern web service it includes login feature. And that always implies password security problems.
Polycarp decided to store the hash of the password, generated by the following algorithm:
- take the password $p$, consisting of lowercase Latin letters, and shuffle the letters randomly in it to obtain $p'$ ($p'$ can still be equal to $p$);
- generate two random strings, consisting of lowercase Latin letters, $s_1$ and $s_2$ (any of these strings can be empty);
- the resulting hash $h = s_1 + p' + s_2$, where addition is string concatenation.
For example, let the password $p =$ "abacaba". Then $p'$ can be equal to "aabcaab". Random strings $s_1 =$ "zyx" and $s_2 =$ "kjh". Then $h =$ "zyxaabcaabkjh".
Note that no letters could be deleted or added to $p$ to obtain $p'$, only the order could be changed.
Now Polycarp asks you to help him to implement the password check module. Given the password $p$ and the hash $h$, check that $h$ can be the hash for the password $p$.
Your program should answer $t$ independent test cases.
|
The general idea of the solution is to check that string $h$ contains some substring which is a permutation of $p$. The constraints were so low you could do it with any algorithm, even $O(n^3 \log n)$ per test case could pass. The most straightforward way was to iterate over the substrings of $h$, sort each one and check if it's equal to $p$ sorted. That's $O(n^3 \log n)$. Next you could notice that only substrings of length $|p|$ matter and shave another $n$ off the complexity to get $O(n^2 \log n)$. After that you might remember that the size of the alphabet is pretty low. And one string is a permutation of another one if the amounts of letters 'a', letters 'b' and so on in them are equal. So you can precalculate an array $cnt_p$, where $cnt_p[i]$ is equal to the amount of the $i$-th letter of the alphabet in $p$. Calculating this array for $O(n)$ substrings will be $O(n)$ each, so that makes it $O(n^2)$. Then notice how easy it is to recalculate the letter counts going from some substring $[i; i + |p| - 1]$ to $[i + 1; i + |p|]$: just subtract $1$ from the amount of the $i$-th letter and add $1$ to the amount of the $(i + |p|)$-th letter. Comparing the two arrays every time will still lead to $O(n \cdot |AL|)$, though. The final optimization is to maintain a boolean array $eq$ such that $eq_i$ means that $cnt_p[i]$ is equal to the current value of $cnt[i]$ for the substring. You are updating just two values of $cnt$ on each step, thus only two values of $eq$ might change. You want all the $|AL|$ values to be $true$, so keep the number of $true$ values in that array and say "YES" if that number is equal to $|AL|$. That finally makes the solution $O(n)$ per test case.
|
[
"brute force",
"implementation",
"strings"
] | 1,000
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
const int AL = 26;
string p, h;
bool read(){
if (!(cin >> p >> h))
return false;
return true;
}
void solve(){
vector<int> cntp(AL), cnt(AL);
vector<bool> eq(AL);
int sum = 0;
auto chg = [&cntp, &cnt, &eq, &sum](int c, int val){
sum -= eq[c];
cnt[c] += val;
eq[c] = (cntp[c] == cnt[c]);
sum += eq[c];
};
forn(i, p.size())
++cntp[p[i] - 'a'];
forn(i, AL){
eq[i] = (cnt[i] == cntp[i]);
sum += eq[i];
}
int m = p.size();
int n = h.size();
forn(i, n){
chg(h[i] - 'a', 1);
if (i >= m) chg(h[i - m] - 'a', -1);
if (sum == AL){
puts("YES");
return;
}
}
puts("NO");
}
int main() {
int tc;
scanf("%d", &tc);
forn(_, tc){
read();
solve();
}
return 0;
}
|
1278
|
B
|
A and B
|
You are given two integers $a$ and $b$. You can perform a sequence of operations: during the first operation you choose one of these numbers and increase it by $1$; during the second operation you choose one of these numbers and increase it by $2$, and so on. You choose the number of these operations yourself.
For example, if $a = 1$ and $b = 3$, you can perform the following sequence of three operations:
- add $1$ to $a$, then $a = 2$ and $b = 3$;
- add $2$ to $b$, then $a = 2$ and $b = 5$;
- add $3$ to $a$, then $a = 5$ and $b = 5$.
Calculate the minimum number of operations required to make $a$ and $b$ equal.
|
Assume that $a > b$. Let's denote the minimum number of operations required to make $a$ and $b$ equal as $x$. There are two restrictions on $x$. First, $\frac{x(x+1)}{2} \ge a - b$, because if $\frac{x(x+1)}{2} < a - b$ then $a$ will still be greater than $b$ after applying all operations. Second, the integers $\frac{x(x+1)}{2}$ and $a - b$ must have the same parity, because otherwise $a$ and $b$ will have different parity after applying all operations. It turns out that we can always make $a$ and $b$ equal using exactly $x$ operations whenever $x$ satisfies both restrictions: we have to add $\frac{\frac{x(x+1)}{2} - a + b}{2} + a - b$ to $b$ and the rest to $a$, and we can get any integer from $0$ to $\frac{z(z+1)}{2}$ as a sum of a subset of the set $\{1, 2, \dots, z\}$.
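The final claim - that every integer from $0$ to $\frac{z(z+1)}{2}$ is a subset sum of $\{1, 2, \dots, z\}$ - can be checked with a simple greedy, sketched here in illustrative Python (not part of the solution): scan the numbers from $z$ down to $1$ and take each one that still fits.

```python
def subset_with_sum(z, target):
    # greedy: scan z, z-1, ..., 1 and take a number whenever it still fits;
    # this works because after considering v, the remaining target is at most v*(v-1)/2
    chosen = []
    for v in range(z, 0, -1):
        if target >= v:
            chosen.append(v)
            target -= v
    return chosen if target == 0 else None

z = 6
# every value in [0, z*(z+1)/2] is reachable
assert all(subset_with_sum(z, t) is not None for t in range(z * (z + 1) // 2 + 1))
print(subset_with_sum(6, 9))  # [6, 3]
```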
|
[
"greedy",
"math"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int t, a, b;
bool ok(int res, int d){
long long sum = res * 1LL * (res + 1) / 2;
if(sum < d) return false;
return sum % 2 == d % 2;
}
int main() {
cin >> t;
for(int tc = 0; tc < t; ++tc){
cin >> a >> b;
int d = abs(a - b);
if(d == 0){
cout << "0\n";
continue;
}
int res = 1;
while(!ok(res, d)) ++res;
cout << res << endl;
}
return 0;
}
|
1278
|
C
|
Berry Jam
|
Karlsson has recently discovered a huge stock of berry jam jars in the basement of the house. More specifically, there were $2n$ jars of strawberry and blueberry jam.
All the $2n$ jars are arranged in a row. The stairs to the basement are exactly in the middle of that row. So when Karlsson enters the basement, he sees exactly $n$ jars to his left and $n$ jars to his right.
For example, the basement might look like this:
Being the straightforward man he is, he immediately starts eating the jam. In one minute he chooses to empty either the first non-empty jar to his left or the first non-empty jar to his right.
Finally, Karlsson decided that at the end the amount of full strawberry and blueberry jam jars should become the same.
For example, this might be the result:
\begin{center}
He has eaten $1$ jar to his left and then $5$ jars to his right. There remained exactly $3$ full jars of both strawberry and blueberry jam.
\end{center}
Jars are numbered from $1$ to $2n$ from left to right, so Karlsson initially stands between jars $n$ and $n+1$.
What is the minimum number of jars Karlsson is required to empty so that an equal number of full strawberry and blueberry jam jars is left?
Your program should answer $t$ independent test cases.
|
Let's switch from counting strawberry and blueberry jam jars separately to tracking their difference. Let $dif$ be the number of strawberry jars minus the number of blueberry jars. Then eating one strawberry jar decreases $dif$ by $1$ and eating one blueberry jar increases $dif$ by $1$. The goal is to make $dif$ equal to $0$. Let the initial difference be $dif_{init}$. Suppose we eat the first $l$ jars from the left and the first $r$ jars from the right, and let $difl_l$ and $difr_r$ be the differences accumulated over those jars. So the goal becomes to find such $l$ and $r$ that $dif_{init} - difl_l - difr_r = 0$. Rewrite that as $dif_{init} - difl_l = difr_r$. Now for each unique value of $difr_r$ save the smallest $r$ that reaches it in a map. Finally, iterate over $l$ and take the minimum answer. Overall complexity: $O(n \log n)$.
|
[
"data structures",
"dp",
"greedy",
"implementation"
] | 1,700
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
int n;
vector<int> a;
bool read(){
if (scanf("%d", &n) != 1)
return false;
a.resize(2 * n);
forn(i, 2 * n)
scanf("%d", &a[i]);
return true;
}
void solve(){
int cur = 0;
map<int, int> difr;
difr[0] = 0;
cur = 0;
for (int i = n; i < 2 * n; ++i){
if (a[i] == 1)
++cur;
else
--cur;
if (!difr.count(cur))
difr[cur] = i - (n - 1);
}
int ans = 2 * n;
int dif = count(a.begin(), a.end(), 1) - count(a.begin(), a.end(), 2);
cur = 0;
for (int i = n - 1; i >= 0; --i){
if (a[i] == 1)
++cur;
else
--cur;
if (difr.count(dif - cur))
ans = min(ans, n - i + difr[dif - cur]);
}
if (difr.count(dif)){
ans = min(ans, difr[dif]);
}
printf("%d\n", ans);
}
int main() {
int tc;
scanf("%d", &tc);
forn(_, tc){
read();
solve();
}
return 0;
}
|
1278
|
D
|
Segment Tree
|
As the name of the task implies, you are asked to do some work with segments and trees.
Recall that a tree is a connected undirected graph such that there is exactly one simple path between every pair of its vertices.
You are given $n$ segments $[l_1, r_1], [l_2, r_2], \dots, [l_n, r_n]$, $l_i < r_i$ for every $i$. It is guaranteed that all segments' endpoints are integers, and all endpoints are unique — there is no pair of segments such that they start at the same point, end at the same point, or one starts at the same point where the other one ends.
Let's generate a graph with $n$ vertices from these segments. Vertices $v$ and $u$ are connected by an edge if and only if segments $[l_v, r_v]$ and $[l_u, r_u]$ intersect and neither of them lies fully inside the other one.
For example, pairs $([1, 3], [2, 4])$ and $([5, 10], [3, 7])$ will induce the edges but pairs $([1, 2], [3, 4])$ and $([5, 7], [3, 10])$ will not.
Determine if the resulting graph is a tree or not.
|
The main idea of the solution is to find a linear number of intersections of segments. Intersections can be found with a sweep line approach: we maintain a set of the currently open segments, ordered by right endpoint. When we add a segment, we find all segments that intersect it - that is, all currently open segments that end before the new one ends (an open segment that ends later would fully contain the new one). Obviously, if the number of intersections is greater than $n-1$, then the answer is "NO", so as soon as we find $n$ intersections, we stop the algorithm. After that, it is necessary to check the connectivity of the resulting graph. You can use DFS or DSU to do this.
|
[
"data structures",
"dsu",
"graphs",
"trees"
] | 2,100
|
#include <bits/stdc++.h>
using namespace std;
#define x first
#define y second
#define mp make_pair
#define pb push_back
#define all(a) (a).begin(), (a).end()
#define forn(i, n) for (int i = 0; i < int(n); i++)
typedef long long li;
typedef pair<li, li> pt;
const int N = 500010;
int n;
pt a[N];
vector<int> g[N];
bool used[N];
void dfs(int v, int p = -1) {
used[v] = true;
for (auto to : g[v]) if (to != p) {
if (!used[to])
dfs(to, v);
}
}
int main() {
scanf("%d", &n);
forn(i, n) scanf("%d%d", &a[i].x, &a[i].y);
vector<pt> evs;
forn(i, n) {
evs.pb(mp(a[i].x, i));
evs.pb(mp(a[i].y, i));
}
sort(all(evs));
int cnt = 0;
set<pt> cur;
for (auto it : evs) {
if (cnt >= n) break;
if (cur.count(it)) {
cur.erase(it);
} else {
int i = it.y;
int r = a[i].y;
for (auto jt : cur) {
if (jt.x > r) break;
int j = jt.y;
g[i].pb(j);
g[j].pb(i);
cnt++;
if (cnt >= n) break;
}
cur.insert(mp(a[i].y, i));
}
}
dfs(0);
puts(cnt == n - 1 && count(used, used + n, 1) == n ? "YES" : "NO");
}
|
1278
|
E
|
Tests for problem D
|
We had a really tough time generating tests for problem D. In order to prepare strong tests, we had to solve the following problem.
Given an undirected labeled tree consisting of $n$ vertices, find a set of segments such that:
- both endpoints of each segment are integers from $1$ to $2n$, and each integer from $1$ to $2n$ should appear as an endpoint of exactly one segment;
- all segments are non-degenerate;
- for each pair $(i, j)$ such that $i \ne j$, $i \in [1, n]$ and $j \in [1, n]$, the vertices $i$ and $j$ are connected with an edge if and only if the segments $i$ and $j$ intersect, but neither segment $i$ is fully contained in segment $j$, nor segment $j$ is fully contained in segment $i$.
Can you solve this problem too?
|
For each vertex, we will build the following structure for its children: the segment for the second child is nested in the segment for the first child, the segment for the third child is nested in the segment for the second child, and so on; and the segments for children of different vertices do not intersect at all. Let's solve the problem recursively: for each of the children, create a set of segments with endpoints from $1$ to $2k$, where $k$ is the size of the subtree. After that, combine them. To do this, you can use the small-to-large technique and change the coordinates of the segments, or use the necessary offset in the function call for the next child. After that, it remains to make the children's segments intersect the segment of the vertex itself. To do this, you can move the right ends of all segments of the children by $1$ to the right, and add a segment that starts before the first one and ends immediately after the last one.
|
[
"constructive algorithms",
"dfs and similar",
"divide and conquer",
"trees"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
#define x first
#define y second
#define pb push_back
#define mp make_pair
#define all(a) (a).begin(), (a).end()
#define sz(a) int((a).size())
#define forn(i, n) for (int i = 0; i < int(n); i++)
typedef pair<int, int> pt;
const int N = 500 * 1000 + 13;
int n;
vector<int> g[N], vs[N];
pt segs[N];
void dfs(int v, int p = -1) {
int sum = 0;
int bst = -1;
for (auto to : g[v]) if (to != p) {
dfs(to, v);
sum += 2 * sz(vs[to]);
if (bst == -1 || sz(vs[to]) > sz(vs[bst]))
bst = to;
}
if (bst == -1) {
vs[v].pb(v);
segs[v] = mp(1, 2);
return;
}
swap(vs[v], vs[bst]);
int lst = segs[bst].y;
sum -= 2 * sz(vs[v]);
sum += 1;
segs[bst].y += sum;
for (auto to : g[v]) if (to != p && to != bst) {
int add = lst - 1;
for (auto u : vs[to]) {
segs[u].x += add;
segs[u].y += add;
vs[v].pb(u);
}
lst = segs[to].y;
sum -= 2 * sz(vs[to]);
segs[to].y += sum;
vs[to].clear();
}
vs[v].pb(v);
segs[v] = mp(lst, segs[bst].y + 1);
}
int main() {
scanf("%d", &n);
forn(i, n - 1) {
int x, y;
scanf("%d%d", &x, &y);
--x; --y;
g[x].pb(y);
g[y].pb(x);
}
dfs(0);
for (int i = 0; i < n; i++)
printf("%d %d\n", segs[i].x, segs[i].y);
return 0;
}
|
1278
|
F
|
Cards
|
Consider the following experiment. You have a deck of $m$ cards, and exactly one card is a joker. $n$ times, you do the following: shuffle the deck, take the top card of the deck, look at it and return it into the deck.
Let $x$ be the number of times you have taken the joker out of the deck during this experiment. Assuming that every time you shuffle the deck, all $m!$ possible permutations of cards are equiprobable, what is the expected value of $x^k$? Print the answer modulo $998244353$.
|
First of all, I would like to thank Errichto for his awesome lecture on expected value: part 1, part 2. This problem was invented after I learned the concept of estimating the square of an expected value from that lecture - and the editorial uses some ideas that were introduced there. Okay, now for the editorial itself. We call a number $a$ good if $1 \le a \le n$ and the $a$-th shuffle of the deck resulted in a joker on top. $x$ from our problem is the number of such good numbers $a$. We can represent $x^2$ as the number of pairs $(a_1, a_2)$ such that every element of the pair is a good number, $x^3$ as the number of triples, and so on - $x^k$ is the number of $k$-tuples $(a_1, a_2, \dots, a_k)$ such that each element of a tuple is a good number. So we can rewrite the expected value of $x^k$ as the expected number of such tuples, or the sum of $P(t)$ over all tuples $t$, where $P(t)$ is the probability that $t$ consists of good numbers. How to calculate the probability that $t$ is a good tuple? Since every shuffle of the deck results in a joker on top with probability $\frac{1}{m}$, $P(t)$ should be equal to $\frac{1}{m^k}$ - but that is true only if all elements in $t$ are unique. How to deal with tuples with repeating elements? Since all occurrences of the same element are either good or bad simultaneously (with probability $\frac{1}{m}$ of being good), the correct formula for $P(t)$ is $P(t) = \frac{1}{m^d}$, where $d$ is the number of distinct elements in the tuple. Okay, then for each $d \in [1, k]$ we have to calculate the number of $k$-tuples with exactly $d$ distinct elements. To do that, we use dynamic programming: let $dp_{i, j}$ be the number of $i$-tuples with exactly $j$ distinct elements.
Each transition in this dynamic programming solution models adding an element to the tuple; if we want to compute the transitions leading from $dp_{i, j}$, we either add a new element to the tuple (there are $n - j$ ways to choose it, and we enter the state $dp_{i + 1, j + 1}$), or we add an already existing element (there are $j$ ways to choose it, and we enter the state $dp_{i + 1, j}$). Overall complexity is $O(k^2 \log MOD)$ or $O(k^2 + \log MOD)$, depending on your implementation.
|
[
"combinatorics",
"dp",
"math",
"number theory",
"probabilities"
] | 2,600
|
#include<bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
const int N = 5043;
int add(int x, int y)
{
x += y;
while(x >= MOD) x -= MOD;
while(x < 0) x += MOD;
return x;
}
int mul(int x, int y)
{
return (x * 1ll * y) % MOD;
}
int binpow(int x, int y)
{
int z = 1;
while(y > 0)
{
if(y % 2 == 1)
z = mul(z, x);
x = mul(x, x);
y /= 2;
}
return z;
}
int inv(int x)
{
return binpow(x, MOD - 2);
}
int n, m, k;
int dp[N][N];
int main()
{
cin >> n >> m >> k;
dp[0][0] = 1;
for(int i = 0; i < k; i++)
for(int j = 0; j < k; j++)
{
dp[i + 1][j] = add(dp[i + 1][j], mul(dp[i][j], j));
dp[i + 1][j + 1] = add(dp[i + 1][j + 1], mul(dp[i][j], n - j));
}
int ans = 0;
for(int i = 1; i <= k; i++)
ans = add(ans, mul(dp[k][i], binpow(inv(m), i)));
cout << ans << endl;
}
|
1279
|
A
|
New Year Garland
|
Polycarp is sad — New Year is coming in few days but there is still no snow in his city. To bring himself New Year mood, he decided to decorate his house with some garlands.
The local store introduced a new service this year, called "Build your own garland". So you can buy some red, green and blue lamps, provide them and the store workers will solder a single garland of them. The resulting garland will have all the lamps you provided put in a line. Moreover, no pair of lamps of the same color will be adjacent to each other in this garland!
For example, if you provide $3$ red, $3$ green and $3$ blue lamps, the resulting garland can look like this: "RGBRBGBGR" ("RGB" being the red, green and blue color, respectively). Note that it's ok to have lamps of the same color on the ends of the garland.
However, if you provide, say, $1$ red, $10$ green and $2$ blue lamps then the store workers won't be able to build any garland of them. Any garland consisting of these lamps will have at least one pair of lamps of the same color adjacent to each other. Note that the store workers should use all the lamps you provided.
So Polycarp has bought some sets of lamps and now he wants to know if the store workers can build a garland from each of them.
|
Let $r \le g \le b$ (if it is not the case, do some swaps). If $b > r + g + 1$, then at least two blue lamps will be adjacent - so there is no solution. Otherwise the answer can be easily constructed. Place all blue lamps in a row. Then place red lamps: one between the first and the second blue lamp, one between the second and the third, and so on. Then place all green lamps: one between the $(b-1)$-th blue lamp and the $b$-th, one between the blue lamps with numbers $(b - 2)$ and $(b - 1)$, and so on. Since $r + g \ge b - 1$, there is at least one non-blue lamp between each pair of blue lamps. If $g = b$, we didn't place all green lamps; we can place the remaining one before all other lamps (the same with $r = b$). So, if we swap $r$, $g$ and $b$ in such a way that $r \le g \le b$, we only have to check that $b \le r + g + 1$.
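The construction above can be sketched in Python (an illustrative sketch with made-up function names, not the judge's solution): sort the counts so that $r \le g \le b$, lay out the most frequent color, fill the gaps with the rarest color from the left and the middle color from the right, and prepend the leftover lamps.

```python
from itertools import product

def build_garland(r, g, b):
    # sort the three counts, keeping track of which color letter is which
    (r, cr), (g, cg), (b, cb) = sorted([(r, 'R'), (g, 'G'), (b, 'B')])
    if b > r + g + 1:
        return None                      # impossible
    if b == 0:
        return ''                        # no lamps at all
    gaps = [''] * (b - 1)                # fillers between consecutive cb-lamps
    for i in range(min(r, b - 1)):       # rarest color fills gaps from the left
        gaps[i] += cr
    for i in range(min(g, b - 1)):       # middle color fills gaps from the right
        gaps[b - 2 - i] += cg
    garland = (cr if r == b else '') + (cg if g == b else '')  # leftovers in front
    for i in range(b):
        garland += cb
        if i < b - 1:
            garland += gaps[i]
    return garland

def valid(s, r, g, b):
    counts = (s.count('R'), s.count('G'), s.count('B'))
    return counts == (r, g, b) and all(x != y for x, y in zip(s, s[1:]))

# exhaustive check on small counts: feasible iff max <= (sum of the others) + 1
for r, g, b in product(range(6), repeat=3):
    res = build_garland(r, g, b)
    if max(r, g, b) > r + g + b - max(r, g, b) + 1:
        assert res is None
    else:
        assert res is not None and valid(res, r, g, b)

print(build_garland(3, 3, 3))   # BGRBGRBGR
print(build_garland(1, 10, 2))  # None
```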
|
[
"math"
] | 900
|
for t in range(int(input())):
a = sorted(list(map(int, input().split())))
print('Yes' if a[2] <= a[0] + a[1] + 1 else 'No')
|
1279
|
B
|
Verse For Santa
|
New Year is coming! Vasya has prepared a New Year's verse and wants to recite it in front of Santa Claus.
Vasya's verse contains $n$ parts. It takes $a_i$ seconds to recite the $i$-th part. Vasya can't change the order of parts in the verse: firstly he recites the part which takes $a_1$ seconds, secondly — the part which takes $a_2$ seconds, and so on. After reciting the verse, Vasya will get the number of presents equal to the number of parts he fully recited.
Vasya can skip at most one part of the verse while reciting it (if he skips more than one part, then Santa will definitely notice it).
Santa will listen to Vasya's verse for no more than $s$ seconds. For example, if $s = 10$, $a = [100, 9, 1, 1]$, and Vasya skips the first part of verse, then he gets two presents.
Note that it is possible to recite the whole verse (if there is enough time).
Determine which part Vasya needs to skip to obtain the maximum possible number of gifts. If Vasya shouldn't skip anything, print 0. If there are multiple answers, print any of them.
You have to process $t$ test cases.
|
If $\sum\limits_{i=1}^n a_i \le s$ then the answer is 0. Otherwise let's find the minimum index $x$ such that $\sum\limits_{i=1}^x a_i > s$. It's useless to skip a part $i > x$, because Vasya runs out of time before reaching it (skipping it changes nothing). So he has to skip a part $i \le x$, and among such parts it's beneficial to skip the part with the maximum value of $a_i$.
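A Python sketch of this rule together with a counting helper (both function names are hypothetical); on the statement's example $s = 10$, $a = [100, 9, 1, 1]$ it skips part $1$ and gets two presents:

```python
def best_skip(s, a):
    """Walk the prefix until time runs out, remembering the index of the
    largest a_i seen so far; return it 1-based, or 0 if the verse fits."""
    big, t = 0, s
    for i, x in enumerate(a):
        if x > a[big]:
            big = i
        t -= x
        if t < 0:
            return big + 1
    return 0

def presents(s, a, skip):
    """Number of fully recited parts if part `skip` (1-based, 0 = none)
    is omitted."""
    cnt = 0
    for i, x in enumerate(a, 1):
        if i == skip:
            continue
        if s < x:
            break                  # this part cannot be finished in time
        s -= x
        cnt += 1
    return cnt
```

Checking `presents` over every possible skip confirms the greedy choice is optimal on small inputs.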
|
[
"binary search",
"brute force",
"implementation"
] | 1,300
|
for t in range(int(input())):
    n, s = map(int, input().split())
    a = list(map(int, input().split()))
    id = 0
    for i in range(n):
        if a[i] > a[id]:
            id = i
        s -= a[i]
        if s < 0:
            break
    if s >= 0:
        id = -1
    print(id + 1)
|
1279
|
C
|
Stack of Presents
|
Santa has to send presents to the kids. He has a large stack of $n$ presents, numbered from $1$ to $n$; the topmost present has number $a_1$, the next present is $a_2$, and so on; the bottom present has number $a_n$. All numbers are distinct.
Santa has a list of $m$ \textbf{distinct} presents he has to send: $b_1$, $b_2$, ..., $b_m$. He will send them \textbf{in the order they appear in the list}.
To send a present, Santa has to find it in the stack by removing all presents above it, taking this present and returning all removed presents on top of the stack. So, if there are $k$ presents above the present Santa wants to send, it takes him $2k + 1$ seconds to do it. Fortunately, Santa can speed the whole process up — when he returns the presents to the stack, he may reorder them as he wishes (only those which were above the present he wanted to take; the presents below cannot be affected in any way).
What is the minimum time required to send all of the presents, provided that Santa knows the whole list of presents he has to send and reorders the presents optimally? Santa cannot change the order of presents or interact with the stack of presents in any other way.
Your program has to answer $t$ different test cases.
|
At first let's precalculate the array $pos$ such that $pos_{a_i} = i$. Now suppose that we have to calculate the answer for $b_i$. There are two cases (let's denote $lst = \max\limits_{1 \le j < i} pos_{b_j}$, initially $lst = -1$): if $pos_{b_i} > lst$ then we have to spend $1 + 2 \cdot (pos_{b_i} - i)$ seconds on it (1 second on the gift $b_i$ itself, $pos_{b_i} - i$ seconds on removing the gifts above it - note that $i - 1$ gifts have already been sent - and $pos_{b_i} - i$ seconds on putting these gifts back); if $pos_{b_i} < lst$ then we could have reordered the gifts during the previous actions so that gift $b_i$ is on the top of the stack, so we spend only 1 second on it.
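A compact Python sketch of this computation (the function name is hypothetical; everything is 0-indexed as in the attached solution), checked against a hand-worked stack $[3, 1, 2]$:

```python
def send_time(a, b):
    """Total seconds to send the presents in list b from stack a (top first):
    pos[v] is the initial depth of present v; lst is the deepest depth
    touched so far."""
    pos = {v: i for i, v in enumerate(a)}
    lst, res = -1, len(b)            # every present costs at least 1 second
    for i, v in enumerate(b):
        if pos[v] > lst:
            # i presents are already gone, so pos[v] - i presents lie above v
            res += 2 * (pos[v] - i)
            lst = pos[v]
        # otherwise v was reordered onto the top earlier: 1 second, already counted
    return res
```

For the stack $[3, 1, 2]$: sending $3$ then $2$ costs $1 + (2 \cdot 1 + 1) = 4$; sending $2$ then $3$ costs $(2 \cdot 2 + 1) + 1 = 6$, since $3$ can be reordered onto the top while digging for $2$.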
|
[
"data structures",
"implementation"
] | 1,400
|
for t in range(int(input())):
    n, m = map(int, input().split())
    a = [x - 1 for x in map(int, input().split())]
    b = [x - 1 for x in map(int, input().split())]
    pos = a[:]
    for i in range(n):
        pos[a[i]] = i
    lst = -1
    res = m
    for i in range(m):
        if pos[b[i]] > lst:
            res += 2 * (pos[b[i]] - i)
            lst = pos[b[i]]
    print(res)
|
1279
|
D
|
Santa's Bot
|
Santa Claus has received letters from $n$ different kids throughout this year. Of course, each kid wants to get some presents from Santa: in particular, the $i$-th kid asked Santa to give them one of $k_i$ different items as a present. Some items could have been asked by multiple kids.
Santa is really busy, so he wants the New Year Bot to choose the presents for all children. Unfortunately, the Bot's algorithm of choosing presents is bugged. To choose a present for some kid, the Bot does the following:
- choose one kid $x$ equiprobably among all $n$ kids;
- choose some item $y$ equiprobably among all $k_x$ items kid $x$ wants;
- choose a kid $z$ who will receive the present equiprobably among all $n$ kids (this choice is independent of choosing $x$ and $y$); the resulting triple $(x, y, z)$ is called \textbf{the decision} of the Bot.
If kid $z$ listed item $y$ as an item they want to receive, then the decision is \textbf{valid}. Otherwise, the Bot's choice is \textbf{invalid}.
Santa is aware of the bug, but he can't estimate if this bug is really severe. To do so, he wants to know the probability that one decision generated according to the aforementioned algorithm is \textbf{valid}. Can you help him?
|
First of all, how to deal with the fractions modulo $998244353$? According to Euler's theorem, $x^{\phi(m)} \equiv 1 \pmod{m}$ if $x$ is coprime with $m$ (for prime $m$ this is Fermat's little theorem). So, the inverse element for the denominator $y$ is $y^{\phi(998244353) - 1} = y^{998244351}$, taken modulo $998244353$. A cool property of fractions taken modulo $998244353$ (or any other modulus coprime with the denominators) is that if we want to add two fractions together and calculate the result modulo some number, we can convert these fractions beforehand and then just add them as integer numbers. The same works with subtracting, multiplying, dividing and exponentiating fractions. Okay, now for the solution itself. There are at most $10^6$ possible pairs $(x, y)$; we can iterate over these pairs, calculate the probability that the fixed pair is included in the Bot's decision (that probability is $\frac{1}{n \cdot k_x}$), and calculate the probability that $(x, y)$ extends to a valid triple (it is equal to $\frac{cnt_y}{n}$, where $cnt_y$ is the number of kids who want item $y$). Multiplying these two probabilities, we get the probability that $(x, y)$ is chosen and produces a valid decision (since these events are independent), and we sum up these values over all possible pairs $(x, y)$.
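A Python sketch of this computation (the function name is hypothetical; the built-in three-argument `pow` computes the modular inverse via Fermat's little theorem). A tiny example with two kids, wish lists $[1, 2]$ and $[1]$, has exact answer $\frac{1}{4} + \frac{1}{8} + \frac{1}{2} = \frac{7}{8}$, which the modular computation must match:

```python
from fractions import Fraction

MOD = 998244353

def bot_valid_probability(wishes):
    """Sum over all pairs (x, y) of 1/(n * k_x) * cnt_y / n, modulo MOD."""
    n = len(wishes)
    cnt = {}                                   # cnt[y] = number of kids wanting y
    for lst in wishes:
        for y in lst:
            cnt[y] = cnt.get(y, 0) + 1
    inv = lambda v: pow(v, MOD - 2, MOD)       # Fermat inverse, MOD is prime
    ans = 0
    for lst in wishes:
        for y in lst:
            ans = (ans + inv(n) * inv(len(lst)) % MOD * cnt[y] % MOD * inv(n)) % MOD
    return ans
```

Cross-checking against an exact `Fraction` computation confirms that converting each fraction to its modular form before summing gives the same residue.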
|
[
"combinatorics",
"math",
"probabilities"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
#define x first
#define y second
#define pb push_back
#define mp make_pair
#define sqr(a) ((a) * (a))
#define sz(a) int((a).size())
#define all(a) (a).begin(), (a).end()
#define forn(i, n) for (int i = 0; i < int(n); ++i)
const int MOD = 998244353;
const int N = 1e6 + 7;
int n;
vector<int> a[N];
int cnt[N];
int inv[N];
int add(int a, int b) {
a += b;
if (a >= MOD) a -= MOD;
return a;
}
int mul(int a, int b) {
return a * 1ll * b % MOD;
}
int binpow(int a, int b) {
int res = 1;
while (b) {
if (b & 1) res = mul(res, a);
a = mul(a, a);
b >>= 1;
}
return res;
}
int main() {
scanf("%d", &n);
forn(i, n) {
int k;
scanf("%d", &k);
a[i].resize(k);
forn(j, k) scanf("%d", &a[i][j]);
forn(j, k) cnt[a[i][j]]++;
}
forn(i, N) inv[i] = binpow(i, MOD - 2);
int ans = 0;
forn(i, n) for (auto x : a[i])
ans = add(ans, mul(mul(inv[n], inv[sz(a[i])]), mul(cnt[x], inv[n])));
printf("%d\n", ans);
}
|
1279
|
E
|
New Year Permutations
|
Yeah, we failed to make up a New Year legend for this problem.
A permutation of length $n$ is an array of $n$ integers such that every integer from $1$ to $n$ appears in it exactly once.
An element $y$ of permutation $p$ is reachable from element $x$ if $x = y$, or $p_x = y$, or $p_{p_x} = y$, and so on.
The \textbf{decomposition} of a permutation $p$ is defined as follows: firstly, we have a permutation $p$, all elements of which are \textbf{not marked}, and an empty list $l$. Then we do the following: while there is at least one \textbf{not marked} element in $p$, we find the leftmost such element, list all elements that are reachable from it \textbf{in the order they appear in $p$}, mark all of these elements, then cyclically shift the list of those elements so that the maximum appears at the first position, and add this list \textbf{as an element} of $l$. After all elements are marked, $l$ is the result of this decomposition.
For example, if we want to build a decomposition of $p = [5, 4, 2, 3, 1, 7, 8, 6]$, we do the following:
- initially $p = [5, 4, 2, 3, 1, 7, 8, 6]$ (bold elements are marked), $l = []$;
- the leftmost unmarked element is $5$; $5$ and $1$ are reachable from it, so the list we want to shift is $[5, 1]$; there is no need to shift it, since maximum is already the first element;
- $p = [\textbf{5}, 4, 2, 3, \textbf{1}, 7, 8, 6]$, $l = [[5, 1]]$;
- the leftmost unmarked element is $4$, the list of reachable elements is $[4, 2, 3]$; the maximum is already the first element, so there's no need to shift it;
- $p = [\textbf{5}, \textbf{4}, \textbf{2}, \textbf{3}, \textbf{1}, 7, 8, 6]$, $l = [[5, 1], [4, 2, 3]]$;
- the leftmost unmarked element is $7$, the list of reachable elements is $[7, 8, 6]$; we have to shift it, so it becomes $[8, 6, 7]$;
- $p = [\textbf{5}, \textbf{4}, \textbf{2}, \textbf{3}, \textbf{1}, \textbf{7}, \textbf{8}, \textbf{6}]$, $l = [[5, 1], [4, 2, 3], [8, 6, 7]]$;
- all elements are marked, so $[[5, 1], [4, 2, 3], [8, 6, 7]]$ is the result.
The New Year transformation of a permutation is defined as follows: we build the decomposition of this permutation; then we sort all lists in decomposition in ascending order of the first elements (we don't swap the elements in these lists, only the lists themselves); then we concatenate the lists into one list which becomes a new permutation. For example, the New Year transformation of $p = [5, 4, 2, 3, 1, 7, 8, 6]$ is built as follows:
- the decomposition is $[[5, 1], [4, 2, 3], [8, 6, 7]]$;
- after sorting the decomposition, it becomes $[[4, 2, 3], [5, 1], [8, 6, 7]]$;
- $[4, 2, 3, 5, 1, 8, 6, 7]$ is the result of the transformation.
We call a permutation \textbf{good} if the result of its transformation is the same as the permutation itself. For example, $[4, 3, 1, 2, 8, 5, 6, 7]$ is a good permutation; and $[5, 4, 2, 3, 1, 7, 8, 6]$ is bad, since the result of transformation is $[4, 2, 3, 5, 1, 8, 6, 7]$.
Your task is the following: given $n$ and $k$, find the $k$-th (lexicographically) good permutation of length $n$.
|
Let's calculate $cycle_n$ - the number of permutations of length $n$ which consist of exactly one cycle and whose maximum is at position $1$. Each good permutation can be divided into such blocks, so we'll need this value later. It is easy to see that $cycle_n = (n-2)!$ for $n \ge 2$: fixing $p_1 = n$, the cycle can visit the remaining $n - 2$ elements in any order (and $cycle_1 = 1$). Let's calculate the following dynamic programming: $dp_i$ - the number of good permutations consisting of elements $[i, n]$. To calculate $dp_i$, let's iterate over $j$ - the maximum element of the first block; it determines the length of this block, $j - i + 1$: $dp_i = \sum_{j=i}^n(dp_{j + 1} \cdot cycle_{j - i + 1})$. Now let's use the standard method of lexicographic recovery. We will iterate over which element to put next; it immediately determines the size of the new block and all the elements in it. If the number of permutations starting with such a block is at least $k$, then we need to restore this block entirely and reduce the task to the one without this block. Otherwise, we subtract the number of permutations starting with such a block from $k$ and proceed to the next option for the block. We also use lexicographic recovery to restore the block itself. We must carefully maintain the current block so that it consists of exactly one cycle. To do this, we can use DSU or explicitly check for a cycle.
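To sanity-check both the $(n-2)!$ block count and the dp, here is a Python sketch (helper names are hypothetical) that implements the transformation literally as defined in the statement and compares a brute-force count of good permutations with the recurrence:

```python
from itertools import permutations
from math import factorial

def transform(p):
    """New Year transformation of permutation p (a list of 1..n)."""
    marked, lists = set(), []
    for v in p:                          # scan positions left to right
        if v in marked:
            continue
        orbit, x = set(), v              # elements reachable from v: v, p_v, p_{p_v}, ...
        while x not in orbit:
            orbit.add(x)
            x = p[x - 1]
        cyc = [u for u in p if u in orbit]      # in the order they appear in p
        m = cyc.index(max(cyc))
        lists.append(cyc[m:] + cyc[:m])         # cyclic shift: maximum first
        marked |= orbit
    lists.sort(key=lambda c: c[0])       # sort lists by first element, then concatenate
    return [u for c in lists for u in c]

def count_good(n):
    """dp over the length L of the first block: a block of length L >= 2
    contributes (L-2)! single-cycle arrangements, a block of length 1 one."""
    f = [1] + [0] * n
    for i in range(1, n + 1):
        for L in range(1, i + 1):
            f[i] += f[i - L] * (1 if L == 1 else factorial(L - 2))
    return f[n]
```

The brute force agrees with the dp for all small lengths, and `transform` reproduces the statement's worked example.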
|
[
"combinatorics",
"dp"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
#define x first
#define y second
#define pb push_back
#define mp make_pair
#define sqr(a) ((a) * (a))
#define sz(a) int((a).size())
#define all(a) (a).begin(), (a).end()
#define forn(i, n) for (int i = 0; i < int(n); ++i)
#define fore(i, l, r) for (int i = int(l); i < int(r); ++i)
typedef long long li;
typedef long double ld;
const int N = 55;
const li INF = 2e18;
int n;
li k;
li cycl[N], cnt[N];
bool used[N];
int ans[N];
li add(li x, li y) {
return min(INF, x + y);
}
li mul(li x, li y) {
ld t = ld(x) * y;
if (t > INF) return INF;
return x * y;
}
void solve() {
cin >> n >> k;
--k;
cycl[0] = cycl[1] = 1;
fore(i, 2, n + 1)
cycl[i] = mul(cycl[i - 1], i - 1);
cnt[n] = 1;
for (int i = n - 1; i >= 0; i--) {
cnt[i] = 0;
fore(val, i, n) {
int len = (val - i) + 1;
cnt[i] = add(cnt[i], mul(cnt[i + len], cycl[len - 1]));
}
}
if (cnt[0] <= k) {
cout << -1 << endl;
return;
}
memset(used, 0, sizeof(used));
memset(ans, -1, sizeof(ans));
forn(i, n) {
fore(val, i, n) {
int len = (val - i) + 1;
li cur = mul(cnt[i + len], cycl[len - 1]);
if (cur <= k) {
k -= cur;
continue;
}
ans[i] = val;
used[val] = true;
for (int j = i + 1; j < i + len; j++) {
int lft = len - (j - i) - 1;
fore(nval, i, val) if (!used[nval] && j != nval) {
if (j != i + len - 1) {
int tmp = ans[nval];
while (tmp != -1 && tmp != j)
tmp = ans[tmp];
if (tmp == j) continue;
}
li cur = mul(cnt[i + len], cycl[lft]);
if (cur <= k) {
k -= cur;
continue;
}
ans[j] = nval;
used[nval] = true;
break;
}
}
i += len - 1;
break;
}
}
forn(i, n) cout << ans[i] + 1 << ' ';
cout << endl;
}
int main() {
int tc;
cin >> tc;
forn(i, tc) solve();
}
|
1279
|
F
|
New Year and Handle Change
|
New Year is getting near. So it's time to change handles on codeforces. Mishka wants to change his handle but in such a way that people would not forget who he is.
To make it work, he is only allowed to change letter case. More formally, during \textbf{one} handle change he can choose any segment $[i; i + l - 1]$ of his handle and apply tolower or toupper to all letters of his handle on this segment (that is, replace all uppercase letters with the corresponding lowercase ones or vice versa). The length $l$ is fixed for all changes.
Because it is not allowed to change codeforces handle too often, Mishka can perform at most $k$ such operations. What is the \textbf{minimum} value of $\min(lower, upper)$ (where $lower$ is the number of lowercase letters, and $upper$ is the number of uppercase letters) that can be obtained after an optimal sequence of changes?
|
Let's simplify the problem a bit: we need either to minimize the number of lowercase letters or to minimize the number of uppercase letters. Both variants can be described by the following model: we have a binary array $a$ where $a[i] = 0$ if $s[i]$ is in the correct case and $a[i] = 1$ otherwise. We can do at most $k$ operations "set $0$ on the segment $[i, i + l - 1]$" and we'd like to minimize the total sum of $a$. At first, let's start with a solution which is pretty slow but correct. Let $dp[len][c]$ be the minimum sum of the prefix $a[0] \dots a[len - 1]$ such that $c$ operations have already been applied to it. In order to calculate this $dp$ efficiently, we need to understand that it's optimal to avoid intersections of the segments of applied operations, so we can further specify the state of $dp$: all $c$ applied operations have their right borders $\le len - 1$. It's easy to specify the transitions: we either apply a set operation on $[len, len + l - 1]$ and relax $dp[len + l][c + 1]$ with $dp[len][c]$, or we don't and relax $dp[len + 1][c]$ with $dp[len][c] + a[len]$. It is still $O(nk)$, so we'd like to optimize it more - and we can do it using the "lambda-optimization", i. e. the "aliens trick". Here we will try to describe what the "aliens trick" is and the features of applying it to discrete calculations. In general, the "aliens trick" allows you to get rid of the restriction on the total number of operations applied to the array (sometimes it's the number of segments in the partition of the array) by replacing it with a binary search over a value $\lambda$ connected to it. The $\lambda$ is the cost of using one operation (or the cost of using one more segment in the partition). In other words, we can use as many operations as we want but we need to pay for each of them. Often, we can calculate the answer without the restriction faster.
The main requirement for using this dp-optimization (in the discrete case) is the following: consider the answer $ans(c)$ for a fixed $c$, i. e. $dp[n][c]$. The function $ans(c)$ should be "somewhat convex", i. e. $ans(c - 1) - ans(c) \ge ans(c) - ans(c + 1)$ (or, sometimes, $ans(c) - ans(c - 1) \ge ans(c + 1) - ans(c)$) for all possible $c$. Let's look at the answers of the modified version of the problem (with cost $\lambda$ for each used operation) as a function $res(\lambda, c)$. It's easy to prove that $res(\lambda, c) = ans(c) + \lambda c$ and that it is also "somewhat convex" for a fixed $\lambda$ (as a sum of convex functions). More importantly, it has the following property: let $c_\lambda$ be the position where $res(\lambda, c)$ attains its minimum. It can be proven (from the convexity) that $c_{\lambda} \ge c_{\lambda + 1}$. This property leads to the solution: binary search $\lambda$ while keeping track of $c_\lambda$, i. e. find the minimum $\lambda$ such that $c_\lambda \le k$. But there are several problems related to the discrete origin of the problem. First, $c_\lambda$ is not unique: in general, there is a segment $[cl_\lambda, cr_\lambda]$ where the minimum of $res(\lambda, c)$ is achieved. Still, the property survives as $cl_\lambda \ge cl_{\lambda + 1}$ and $cr_\lambda \ge cr_{\lambda + 1}$, so we need to ensure that we always find either the minimum such $c_\lambda$ or the maximum such $c_\lambda$. The second problem comes from the first one: there are situations when $c_\lambda - c_{\lambda + 1} > 1$. This creates a problem in the following situation: suppose the binary search finished with $\lambda_{opt}$, where $c_{\lambda_{opt} - 1} > k$ and $c_{\lambda_{opt}} < k$; but we need to use exactly $k$ operations, so what to do? Using float values will not help, so we don't need them (we'll use the usual integer binary search).
Suppose we minimized the $c_{\lambda_{opt}}$; then we can show that $k \in [cl_{\lambda_{opt}}, cr_{\lambda_{opt}}]$ or, in other words, $res(\lambda_{opt}, k) = res(\lambda_{opt}, c_{\lambda_{opt}})$. So we can claim that we calculated the value not only for $c_{\lambda_{opt}}$ but also for $k$. In the end, if we can efficiently calculate $c_\lambda$ and $res(\lambda, c_\lambda)$ for a fixed $\lambda$, then we can binary search $\lambda_{opt}$, extract $res(\lambda_{opt}, c_{\lambda_{opt}})$ and claim that $dp[n][k] = res(\lambda_{opt}, c_{\lambda_{opt}}) - \lambda_{opt} k$. Finally, let's discuss how to calculate $c_\lambda$ and $res(\lambda, c_\lambda)$ for a fixed $\lambda$: $res(\lambda, c_\lambda)$ is just the minimum cost and $c_\lambda$ is the minimum number of operations achieving that cost. We can calculate both by simplifying our starting $dp$. (Remember, the cost is calculated as follows: we pay $1$ for each remaining $1$ in $a$ and $\lambda$ for each used operation.) Let $d[len] = \{cost_{len}, cnt_{len}\}$, where $cost_{len}$ is the minimum cost on the prefix of length $len$ and $cnt_{len}$ is the minimum number of operations with which $cost_{len}$ can be achieved. The transitions are almost the same: we either leave $a[len]$ as is and relax $d[len + 1]$ with $\{cost_{len} + a[len], cnt_{len}\}$, or start a new operation and relax $d[len + l]$ with $\{cost_{len} + \lambda, cnt_{len} + 1\}$. The result is the pair $d[n]$.
Some additional information: we should carefully choose the borders of the binary search. The left border should be such that it is always optimal to use an operation whenever we can (usually $0$ or $-1$), and the right border should be such that it is never optimal to use even one operation (usually more than the maximum possible answer). The total complexity is $O(n \log{n})$. P. S.: We don't have a strict proof that $ans(c)$ is convex, but we have faith and stress tests. We'd appreciate it if someone would share a proof in the comment section.
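The $\lambda$-penalized dp can be sketched in Python (function names are hypothetical; the pair-valued dp keeps the minimum cost first and, among equal costs, the minimum number of operations). The exhaustive comparison against a plain $O(nk)$ dp is only a stress test of the convexity assumption, which, as noted above, is not proven:

```python
def calc(a, l, lam):
    """dp[i] = (min cost, min #ops) on the prefix of length i, where each
    remaining 1 costs 1 and each 'set 0 on [i, i+l-1]' operation costs lam."""
    n, INF = len(a), float('inf')
    dp = [(0, 0)] + [(INF, INF)] * n
    for i in range(n):
        dp[i + 1] = min(dp[i + 1], (dp[i][0] + a[i], dp[i][1]))  # keep a[i]
        j = min(n, i + l)
        dp[j] = min(dp[j], (dp[i][0] + lam, dp[i][1] + 1))       # start an operation
    return dp[n]

def min_sum(a, l, k):
    """Aliens trick: find the largest integer lam whose optimum still uses
    more than k operations; then ans(k) = res(lam + 1) - (lam + 1) * k."""
    lo, hi, res = 0, len(a), 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if calc(a, l, mid)[1] > k:
            res, lo = mid, mid + 1
        else:
            hi = mid - 1
    if calc(a, l, res)[1] <= k:
        return 0                    # k operations already suffice to erase everything
    return calc(a, l, res + 1)[0] - (res + 1) * k

def brute(a, l, k):
    """Reference O(n*k) dp with the operation count as an explicit state."""
    n, INF = len(a), float('inf')
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(n):
        for c in range(k + 1):
            if dp[i][c] == INF:
                continue
            dp[i + 1][c] = min(dp[i + 1][c], dp[i][c] + a[i])
            if c < k:
                j = min(n, i + l)
                dp[j][c + 1] = min(dp[j][c + 1], dp[i][c])
    return min(dp[n])
```

Enumerating every binary array up to length six confirms the two computations agree on small inputs.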
|
[
"binary search",
"dp"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
const int N = 1000 * 1000 + 11;
const int INF = 1e9;
int n, k, l;
string s;
int a[N];
pair<int, int> dp[N];
int calc(int mid) {
for (int i = 0; i <= n; ++i) {
dp[i] = make_pair(INF, INF);
}
dp[0] = make_pair(0, 0);
for (int i = 0; i < n; ++i) {
if (dp[i + 1] > make_pair(dp[i].first + a[i], dp[i].second)) {
dp[i + 1] = make_pair(dp[i].first + a[i], dp[i].second);
}
if (dp[min(n, i + l)] > make_pair(dp[i].first + mid, dp[i].second + 1)) {
dp[min(n, i + l)] = make_pair(dp[i].first + mid, dp[i].second + 1);
}
}
return dp[n].second;
}
int solve() {
int l = 0, r = n;
int res = 0;
while (l <= r) {
int mid = (l + r) >> 1;
if (calc(mid) > k) {
l = mid + 1;
res = mid;
} else {
r = mid - 1;
}
}
if (calc(res) <= k) return 0;
calc(res + 1);
return dp[n].first - (res + 1) * 1ll * k;
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
cin >> n >> k >> l >> s;
for (int i = 0; i < n; ++i) {
a[i] = islower(s[i]) > 0;
}
int ans = INF;
ans = min(ans, solve());
for (int i = 0; i < n; ++i) {
a[i] = isupper(s[i]) > 0;
}
ans = min(ans, solve());
cout << ans << endl;
return 0;
}
|
1280
|
A
|
Cut and Paste
|
We start with a string $s$ consisting only of the digits $1$, $2$, or $3$. The length of $s$ is denoted by $|s|$. For each $i$ from $1$ to $|s|$, the $i$-th character of $s$ is denoted by $s_i$.
There is one cursor. The cursor's location $\ell$ is denoted by an integer in $\{0, \ldots, |s|\}$, with the following meaning:
- If $\ell = 0$, then the cursor is located before the first character of $s$.
- If $\ell = |s|$, then the cursor is located right after the last character of $s$.
- If $0 < \ell < |s|$, then the cursor is located between $s_\ell$ and $s_{\ell+1}$.
We denote by $s_\text{left}$ the string to the left of the cursor and $s_\text{right}$ the string to the right of the cursor.
We also have a string $c$, which we call our \underline{clipboard}, which starts out as empty. There are three types of actions:
- \textbf{The Move action}. Move the cursor one step to the right. This increments $\ell$ once.
- \textbf{The Cut action}. Set $c \leftarrow s_\text{right}$, then set $s \leftarrow s_\text{left}$.
- \textbf{The Paste action}. Append the value of $c$ to the end of the string $s$. Note that this doesn't modify $c$.
The cursor initially starts at $\ell = 0$. Then, we perform the following procedure:
- Perform the Move action once.
- Perform the Cut action once.
- Perform the Paste action $s_\ell$ times.
- If $\ell = x$, stop. Otherwise, return to step 1.
You're given the initial string $s$ and the integer $x$. What is the length of $s$ when the procedure stops? Since this value may be very large, only find it modulo $10^9 + 7$.
It is guaranteed that $\ell \le |s|$ at any time.
|
Let $S^t$ be the string $S$ after the $t$-th round, and let $S^0$ be the initial $S$. We also denote by $S_{i\ldots }$ the suffix of $S$ from the $i$-th character, $S_i$, onwards. A single round turns $S^{t-1}$ into $S^t$ by replicating the suffix $S_{t+1\ldots }^{t-1}$ exactly $S_t$ times. Hence, we have the recurrence $S^t = S^{t-1} + S_{t+1\ldots }^{t-1}\cdot \left(S_t^{t-1} - 1\right).$ In terms of lengths, we have $\left|S^t\right| = \left|S^{t-1}\right| + \left|S_{t+1\ldots }^{t-1}\right|\cdot \left(S_t^{t-1} - 1\right),$ and since $\left|S_{t+1\ldots }^{t-1}\right| = \left|S^{t-1}\right| - t$, $\left|S^t\right| = \left|S^{t-1}\right| + \left(\left|S^{t-1}\right| - t\right)\cdot \left(S_t^{t-1} - 1\right).$ This cannot be simulated directly, since the length of $S$ could be growing very quickly. But notice that $S^{t-1}$ is always a prefix of $S^t$. Therefore, for any two $t_1$ and $t_2$, the $i$-th letters of $S^{t_1}$ and $S^{t_2}$ are the same (as long as their lengths are at least $i$). Also, note that we only need to access up to the $x$-th character, $S_x$. Therefore, we only need to grow $S$ just enough until it contains at least $x$ characters. After that, we can stop modifying $S$ and simply keep track of the length, maintaining it using the recurrence above. The running time is $O(|S| + x)$. (But in languages where strings are immutable, you should use a dynamically-resizing list instead of appending strings repeatedly; otherwise, you'll get a running time of $O(x^2)$.)
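A Python sketch of both the literal simulation and the length recurrence (function names are hypothetical); for instance, $s = 231$, $x = 3$ ends at length $11$:

```python
MOD = 10**9 + 7

def final_length(s, x, mod=MOD):
    """Grow s (as a list) only while it holds fewer than x characters, then
    just maintain the length via |S^t| = |S^{t-1}| + (|S^{t-1}| - t)(S_t - 1)."""
    s = list(s)
    length = len(s)
    for t in range(1, x + 1):
        v = int(s[t - 1])
        if len(s) < x:
            s += s[t:] * (v - 1)     # one copy of the suffix is already present
        length = (length + (length - t) * (v - 1)) % mod
    return length % mod

def brute_length(s, x):
    """Literal simulation of the cursor / cut / paste procedure."""
    s = list(s)
    for ell in range(1, x + 1):
        left, right = s[:ell], s[ell:]
        s = left + right * int(s[ell - 1])   # cut, then paste s_ell times total
    return len(s)
```

The prefix property is what makes the early freeze in `final_length` safe: once the stored prefix reaches $x$ characters, all later reads of $s_\ell$ hit characters that never change again.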
|
[
"implementation",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
constexpr ll mod = 1'000'000'007;
constexpr int N = 1111;
char _s[N];
ll solve() {
int x;
scanf("%d%s", &x, _s);
ll ls = strlen(_s);
vector<char> s(_s, _s + ls);
for (int i = 1; i <= x; i++) {
int v = s[i - 1] - '1';
if (s.size() < x) {
vector<char> sub(s.begin() + i, s.end());
for (int it = 0; it < v; it++) s.insert(s.end(), sub.begin(), sub.end());
}
ls = (ls + (ls - i) * v) % mod;
}
return ls;
}
int main() {
int z;
for (scanf("%d", &z); z--; printf("%lld\n", (solve() % mod + mod) % mod));
}
|
1280
|
B
|
Beingawesomeism
|
You are an all-powerful being and you have created a rectangular world. In fact, your world is so bland that it could be represented by a $r \times c$ grid. Each cell on the grid represents a country. Each country has a dominant religion. There are only two religions in your world. One of the religions is called Beingawesomeism, who do good for the sake of being good. The other religion is called Pushingittoofarism, who do murders for the sake of being bad.
Oh, and you are actually not really all-powerful. You just have one power, which you can use infinitely many times! Your power involves missionary groups. When a missionary group of a certain country, say $a$, passes by another country $b$, they change the dominant religion of country $b$ to the dominant religion of country $a$.
In particular, a single use of your power is this:
- You choose a horizontal $1 \times x$ subgrid or a vertical $x \times 1$ subgrid. That value of $x$ is up to you;
- You choose a direction $d$. If you chose a horizontal subgrid, your choices will either be NORTH or SOUTH. If you choose a vertical subgrid, your choices will either be EAST or WEST;
- You choose the number $s$ of steps;
- You command each country in the subgrid to send a missionary group that will travel $s$ steps towards direction $d$. In each step, they will visit (and in effect convert the dominant religion of) all $s$ countries they pass through, as detailed above.
- The parameters $x$, $d$, $s$ must be chosen in such a way that any of the missionary groups won't leave the grid.
The following image illustrates one possible single usage of your power. Here, A represents a country with dominant religion Beingawesomeism and P represents a country with dominant religion Pushingittoofarism. Here, we've chosen a $1 \times 4$ subgrid, the direction NORTH, and $s = 2$ steps.
You are a being which believes in free will, for the most part. However, you just really want to stop receiving murders that are attributed to your name. Hence, you decide to use your powers and try to make Beingawesomeism the dominant religion in every country.
What is the minimum number of usages of your power needed to convert everyone to Beingawesomeism?
With god, nothing is impossible. But maybe you're not god? If it is impossible to make Beingawesomeism the dominant religion in all countries, you must also admit your mortality and say so.
|
If everything is P, then it is clearly impossible (MORTAL). Otherwise, you can turn everything into A in at most $4$ moves, starting from any single A, so the answer is between $0$ and $4$ and we can exhaust all possibilities. The answer is $0$ if everything is an A; otherwise, at least $1$ move is needed. The answer is $1$ if at least one of the edge rows/columns is all As. Otherwise, at least $2$ moves are needed: if every edge has at least one P, then no single move can simultaneously turn all four edges into A. To see this, note that such a move must touch all four edges, which forces us to select an entire edge row/column of the grid as the initial subgrid. But then we are forced to have at least one P in our selection, and this P cannot be removed by this move. The answer is $2$ if there is a corner that's an A (in a single move, we can turn an edge into all As), or if there is a whole column or row of As (again, a single move turns an edge into all As; this case could be tricky to spot). Otherwise, at least $3$ moves are needed: if we are only allowed $2$ moves, then our first move must reach a configuration where only $1$ move is needed, i.e., in a single move we must make some edge all As.
Since all corners are Ps, that move must touch both corners of the chosen edge, so we are forced to copy an entire row/column onto that edge. But since every row/column contains a P, the edge will still contain a P after the move, and we have failed to turn that edge into all As. (We cannot have accidentally turned another edge into all As either, since the other corners are still Ps.) The answer is $3$ if there is at least one A on one of the edges, because in a single move we can make one corner an A.
Otherwise, at least $4$ moves are needed: we can't turn any corner into A in a single move (all edges are Ps, and only cells on edges get copied onto corners), and we can't turn any row/column into all As in a single move (that requires copying an entire row/column onto it, but again, the edges are all Ps). The answer is $4$ in all remaining cases, since $4$ moves are always enough.
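A Python transcription of this case analysis (the function name is hypothetical; the attached C++ solution performs the same checks in the same order, returning $-1$ for MORTAL):

```python
def min_usages(grid):
    """Minimum number of power usages, or -1 if impossible (MORTAL)."""
    r, c = len(grid), len(grid[0])
    rows = [row.count('A') for row in grid]                       # As per row
    cols = [sum(row[j] == 'A' for row in grid) for j in range(c)] # As per column
    total = sum(rows)
    if total == r * c:
        return 0                                   # already all As
    if total == 0:
        return -1                                  # no A anywhere: MORTAL
    if rows[0] == c or rows[-1] == c or cols[0] == r or cols[-1] == r:
        return 1                                   # an edge row/column is all As
    if 'A' in (grid[0][0], grid[0][-1], grid[-1][0], grid[-1][-1]):
        return 2                                   # a corner is an A
    if max(rows) == c or max(cols) == r:
        return 2                                   # some full row/column of As
    if rows[0] or rows[-1] or cols[0] or cols[-1]:
        return 3                                   # an A somewhere on an edge
    return 4                                       # As only strictly inside
```

The hand-checked grids below exercise every branch, including the tricky interior-column case.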
|
[
"implementation",
"math"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
int solve(int r, int c, vector<string> grid) {
vector<int> rows(r), cols(c);
int total = 0;
for (int i = 0; i < r; i++) {
for (int j = 0; j < c; j++) {
if (grid[i][j] == 'A') rows[i]++, cols[j]++, total++;
}
}
if (total == r * c) return 0;
if (total == 0) return -1;
if (rows[0] == c || rows.back() == c || cols[0] == r || cols.back() == r) return 1;
if (grid[0][0] == 'A' || grid[0].back() == 'A' || grid.back()[0] == 'A' || grid.back().back() == 'A') return 2;
if (*max_element(rows.begin(), rows.end()) == c || *max_element(cols.begin(), cols.end()) == r) return 2;
if (rows[0] || rows.back() || cols[0] || cols.back()) return 3;
return 4;
}
int main() {
int z;
for (cin >> z; z--;) {
int r, c;
cin >> r >> c;
vector<string> grid(r);
for (int i = 0; i < r; i++) cin >> grid[i];
int res = solve(r, c, grid);
(~res ? cout << res : cout << "MORTAL") << '\n';
}
}
|
1280
|
C
|
Jeremy Bearimy
|
Welcome! Everything is fine.
You have arrived in The Medium Place, the place between The Good Place and The Bad Place. You are assigned a task that will either make people happier or torture them for eternity.
You have a list of $k$ pairs of people who have arrived in a new inhabited neighborhood. You need to assign each of the $2k$ people into one of the $2k$ houses. Each person will be the resident of exactly one house, and each house will have exactly one resident.
Of course, in the neighborhood, it is possible to visit friends. There are $2k - 1$ roads, each of which connects two houses. It takes some time to traverse a road. We will specify the amount of time it takes in the input. The neighborhood is designed in such a way that from anyone's house, there is exactly one sequence of distinct roads you can take to any other house. In other words, the graph with the houses as vertices and the roads as edges is a tree.
The truth is, these $k$ pairs of people are actually soulmates. We index them from $1$ to $k$. We denote by $f(i)$ the amount of time it takes for the $i$-th pair of soulmates to go to each other's houses.
As we have said before, you will need to assign each of the $2k$ people into one of the $2k$ houses. You have two missions, one from the entities in The Good Place and one from the entities of The Bad Place. Here they are:
- The first mission, from The Good Place, is to assign the people into the houses such that the sum of $f(i)$ over all pairs $i$ is minimized. Let's define this minimized sum as $G$. This makes sure that soulmates can easily and efficiently visit each other;
- The second mission, from The Bad Place, is to assign the people into the houses such that the sum of $f(i)$ over all pairs $i$ is maximized. Let's define this maximized sum as $B$. This makes sure that soulmates will have a difficult time to visit each other.
What are the values of $G$ and $B$?
|
Maximization. Suppose we're maximizing the sum. Consider a single edge $(a,b)$, and consider the two components on either side of this edge. Then we have an important observation: in the optimal solution, the nodes of one component are all paired with nodes of the other component. This is because otherwise, there would be at least one pair in each component that lies entirely in that component, say $(i_a, j_a)$ and $(i_b, j_b)$. But if we switch the pairing to, say, $(i_a, i_b)$ and $(j_a, j_b)$, then the cost increases, because we're introducing new edges (namely $(a,b)$, among possibly others) while keeping everything from the previous pairing. Repeating this, we can construct an optimal solution where all nodes in one component are paired with nodes in the other component. This means that in the optimal solution, the edge $(a, b)$ is counted exactly $\min(c_a, c_b)$ times, where $c_a$ is the size of the component on $a$'s side, and $c_b$ is the size of the component on $b$'s side. Therefore, the edge contributes exactly $\mathit{weight}(a,b) \cdot \min(c_a, c_b)$ to the answer. But the same is true for all edges! Therefore, we can compute the answer by just summing up all contributions. The only remaining step is to compute the sizes of all subtrees, and this can be done with a single BFS/DFS and DP. This runs in $O(n)$.
Minimization. Now, suppose we're minimizing the sum. Consider again a single edge $(a,b)$. Again, we have an important observation: in the optimal solution, at most one pair passes through $(a,b)$. This is because otherwise, if there are at least two such pairs, then we can again switch the pairing (essentially the reverse of maximizing), which decreases the cost, because it doesn't introduce additional edges but it decreases the number of pairs passing through $(a,b)$ by $2$. Repeating this, we can ensure that at most one pair passes through $(a,b)$. Furthermore, the parity of the number of pairs passing through $(a,b)$ is fixed. (Why?)
Therefore, in the optimal solution, $(a,b)$ is counted exactly $(c_a \bmod 2)$ times. (Note that $c_a \equiv c_b \pmod{2}$.) But again, the same is true for all edges! Therefore, we can compute the answer in $O(n)$ as well, by summing up all contributions.
|
[
"dfs and similar",
"graphs",
"greedy",
"trees"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
void solve() {
int n;
cin >> n;
n <<= 1;
vector<vector<pair<int,ll>>> adj(n);
for (int i = 0; i < n - 1; i++) {
int a, b; ll c;
cin >> a >> b >> c;
a--, b--;
adj[a].emplace_back(b, c);
adj[b].emplace_back(a, c);
}
vector<int> parent(n, -1), que(1, 0);
vector<ll> parent_cost(n);
parent[0] = 0;
for (int f = 0; f < que.size(); f++) {
int i = que[f];
for (auto [j, c]: adj[i]) {
if (parent[j] == -1) {
parent[j] = i;
parent_cost[j] = c;
que.push_back(j);
}
}
}
vector<int> size(n, 1);
ll mn = 0, mx = 0;
reverse(que.begin(), que.end());
for (int i : que) {
size[parent[i]] += size[i];
mx += parent_cost[i] * min(size[i], n - size[i]);
mn += parent_cost[i] * (size[i] & 1);
}
cout << mn << " " << mx << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int z;
for (cin >> z; z--; solve());
}
|
1280
|
D
|
Miss Punyverse
|
The Oak has $n$ nesting places, numbered with integers from $1$ to $n$. Nesting place $i$ is home to $b_i$ bees and $w_i$ wasps.
Some nesting places are connected by branches. We call two nesting places \underline{adjacent} if there exists a branch between them. A \underline{simple path} from nesting place $x$ to $y$ is given by a sequence $s_0, \ldots, s_p$ of distinct nesting places, where $p$ is a non-negative integer, $s_0 = x$, $s_p = y$, and $s_{i-1}$ and $s_{i}$ are adjacent for each $i = 1, \ldots, p$. The branches of The Oak are set up in such a way that for any two nesting places $x$ and $y$, there exists a unique simple path from $x$ to $y$. Because of this, biologists and computer scientists agree that The Oak is in fact, a tree.
A \underline{village} is a \underline{nonempty} set $V$ of nesting places such that for any two $x$ and $y$ in $V$, there exists a simple path from $x$ to $y$ whose intermediate nesting places all lie in $V$.
A set of villages $\cal P$ is called a \underline{partition} if each of the $n$ nesting places is contained in exactly one of the villages in $\cal P$. In other words, no two villages in $\cal P$ share any common nesting place, and altogether, they contain all $n$ nesting places.
The Oak holds its annual Miss Punyverse beauty pageant. The two contestants this year are Ugly Wasp and Pretty Bee. The winner of the beauty pageant is determined by voting, which we will now explain. Suppose $\mathcal{P}$ is a partition of the nesting places into $m$ villages $V_1, \ldots, V_m$. There is a local election in each village. Each of the insects in this village vote for their favorite contestant. If there are \textbf{strictly} more votes for Ugly Wasp than Pretty Bee, then Ugly Wasp is said to win in that village. Otherwise, Pretty Bee wins. Whoever wins in the most number of villages wins.
As it always goes with these pageants, bees always vote for the bee (which is Pretty Bee this year) and wasps always vote for the wasp (which is Ugly Wasp this year). Unlike their general elections, no one abstains from voting for Miss Punyverse as everyone takes it very seriously.
Mayor Waspacito, and his assistant Alexwasp, wants Ugly Wasp to win. He has the power to choose how to partition The Oak into exactly $m$ villages. If he chooses the partition optimally, determine the maximum number of villages in which Ugly Wasp wins.
|
Let's say a region is winning if there are strictly more wasps than bees. Thus, we're maximizing the number of winning regions. Tree DP seems natural in this sort of situation. For example, after rooting the tree arbitrarily, you could probably come up with something like $f(i, r)$: Given the subtree rooted at $i$ and the number of regions $r$, find the maximum number of winning regions if we partition that subtree into $r$ regions. However, this has an issue: we also need to merge our topmost component with the topmost component of some subtrees, so the subtrees' vote advantage at their topmost components matters, and must also be considered in the DP. Unfortunately, if we just naively insert the vote advantage into our DP state (or output), there could potentially be too many states/outputs to fit in the time limit. Fortunately, we can keep the number of states the same by using the following greedy observation: It is optimal to maximize the number of winning regions first, and to maximize the vote advantage at the top only second. In other words, in a subtree, $(\text{$x$ winning regions}, \text{$-\infty$ vote advantage})$ is better than $(\text{$x-1$ winning regions}, \text{$+\infty$ vote advantage})$. So the DP now becomes $f(i, r)$: Given the subtree rooted at $i$ and the number of regions $r$, find the maximum number of winning regions, and among all such possibilities, find the maximum vote advantage of the top component. Just be careful that your DP solution doesn't construct size-$0$ partitions. Now, what about the time complexity? We have a state that looks like $(i, r)$, where $1 \le r \le \mathit{size}(i)$, and a transition that runs in $O(r)$, so naively it feels like it runs in $O(n^3)$ in the worst case. However, if you've seen this before, this is actually the sort of DP pattern that is really $O(n^2)$, at least with the right implementation.
Specifically, if we ensure that we only loop across the range where the substates are both valid, then you may check that the total amount of work done in each node is $O\left(\mathit{size}(i)^2 - \sum\limits_{\text{$j$ is a child of $i$}} \mathit{size}(j)^2\right)$, and such a recurrence can be analyzed to be $O(n^2)$.
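One way to see the $O(n^2)$ bound (a standard argument, sketched here for completeness rather than taken from this editorial): with incremental merging, the work done at node $i$ is the sum of products of sizes over pairs of its children, which counts exactly the vertex pairs whose lowest common ancestor is $i$. Summing over all nodes counts each unordered pair of vertices at most once:

```latex
\sum_{i}\;\sum_{\substack{j_1 < j_2 \\ j_1,\, j_2 \text{ children of } i}} \mathit{size}(j_1)\cdot \mathit{size}(j_2)
\;\le\; \binom{n}{2} \;=\; O(n^2),
```

since a pair $\{u, v\}$ contributes to the inner sum only at $i = \mathrm{lca}(u, v)$, when $u$ and $v$ lie in different child subtrees.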
|
[
"dp",
"greedy",
"trees"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
struct Group {
int win; ll adv;
Group(int win = -1, ll adv = 0): win(win), adv(adv) {}
Group operator+(const Group& o) const {
return Group(win + o.win, adv + o.adv);
}
Group operator+() const {
return Group(win + (adv > 0));
}
bool operator<(const Group& o) const {
return win < o.win || win == o.win && adv < o.adv;
}
};
vector<Group> convolve(const vector<Group>& fa, const vector<Group>& fb) {
vector<Group> fi(fa.size() + fb.size());
for (int a = 0; a < fa.size(); a++) {
for (int b = 0; b < fb.size(); b++) {
fi[a + b ] = max(fi[a + b ], fa[a] + fb[b]);
fi[a + b + 1] = max(fi[a + b + 1], fa[a] + +fb[b]);
}
}
return fi;
}
vector<vector<Group>> f;
vector<vector<int>> adj;
void convolve_tree(int i, int p = -1) {
for (int j : adj[i]) {
if (j == p) continue;
convolve_tree(j, i);
f[i] = convolve(f[i], f[j]);
}
}
int solve() {
int n, m;
scanf("%d%d", &n, &m);
f = vector<vector<Group>>(n, vector<Group>(1, Group(0)));
for (int i = 0, b; i < n; i++) {
scanf("%d", &b);
f[i][0].adv -= b;
}
for (int i = 0, w; i < n; i++) {
scanf("%d", &w);
f[i][0].adv += w;
}
adj = vector<vector<int>>(n);
for (int i = 1; i < n; i++) {
int x, y;
scanf("%d%d", &x, &y);
x--, y--;
adj[x].push_back(y);
adj[y].push_back(x);
}
convolve_tree(0);
return (+f[0][m - 1]).win;
}
int main() {
int z;
for (scanf("%d", &z); z--; printf("%d\n", solve()));
}
|
1280
|
E
|
Kirchhoff's Current Loss
|
Your friend Kirchhoff is shocked with the current state of electronics design.
"Ohmygosh! Watt is wrong with the field? All these circuits are inefficient! There's so much capacity for improvement. The electrical engineers must not conduct their classes very well. It's absolutely revolting" he said.
The negativity just keeps flowing out of him, but even after complaining so many times he still hasn't lepton the chance to directly change anything.
"These circuits have too much total resistance. Wire they designed this way? It's just causing a massive loss of resistors! Their entire field could conserve so much money if they just maximized the potential of their designs. Why can't they just try alternative ideas?"
The frequency of his protests about the electrical engineering department hertz your soul, so you have decided to take charge and help them yourself. You plan to create a program that will optimize the circuits while keeping the same circuit layout and maintaining the same effective resistance.
A \underline{circuit} has two endpoints, and is associated with a certain constant, $R$, called its \textbf{effective resistance}.
The circuits we'll consider will be formed from individual resistors joined together in \underline{series} or in \underline{parallel}, forming more complex circuits. The following image illustrates combining circuits in series or parallel.
According to your friend Kirchhoff, the effective resistance can be calculated quite easily when joining circuits this way:
- When joining $k$ circuits in \underline{series} with effective resistances $R_1, R_2, \ldots, R_k$, the effective resistance $R$ of the resulting circuit is the sum $$R = R_1 + R_2 + \ldots + R_k.$$
- When joining $k$ circuits in \underline{parallel} with effective resistances $R_1, R_2, \ldots, R_k$, the effective resistance $R$ of the resulting circuit is found by solving for $R$ in $$\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} + \ldots + \frac{1}{R_k},$$ \underline{assuming all $R_i > 0$}; if at least one $R_i = 0$, then the effective resistance of the whole circuit is simply $R = 0$.
Circuits will be represented by strings. Individual resistors are represented by an asterisk, "*". For more complex circuits, suppose $s_1, s_2, \ldots, s_k$ represent $k \ge 2$ circuits. Then:
- "($s_1$ S $s_2$ S $\ldots$ S $s_k$)" represents their \underline{series} circuit;
- "($s_1$ P $s_2$ P $\ldots$ P $s_k$)" represents their \underline{parallel} circuit.
For example, "(* P (* S *) P *)" represents the following circuit:
Given a circuit, your task is to assign the resistances of the individual resistors such that they satisfy the following requirements:
- Each individual resistor has a \underline{nonnegative integer} resistance value;
- The effective resistance of the whole circuit is $r$;
- The sum of the resistances of the individual resistors is minimized.
If there are $n$ individual resistors, then you need to output the list $r_1, r_2, \ldots, r_n$ ($0 \le r_i$, and $r_i$ is an integer), where $r_i$ is the resistance assigned to the $i$-th individual resistor that appears in the input (from left to right). If it is impossible to accomplish the task, you must say so as well.
If it is possible, then it is guaranteed that the minimum sum of resistances is at most $10^{18}$.
|
Instead of minimizing the integer cost (which feels like a hard number theory problem), let's try to minimize the real number cost first. This is a bit easier, but it should help us solve the integer case by at least giving us a lower bound. If we're allowed to assign arbitrary real numbers as costs, then we can deduce that the minimum cost to obtain a resistance of $r$ is proportional to $r$. This is because we can just scale resistances arbitrarily, so if we have an optimal solution for one given $r$ with optimal cost $c\cdot r$, then to get any other target resistance $r'$, we can simply scale all resistances by $r'/r$, and we get a circuit with resistance $r'$ and cost $c\cdot r'$ that should also be optimal. To prove this more rigorously, define $f(r)$ to be the optimal cost for a target resistance $r$. Then, the scaling idea above shows that $f(r') \le f(r)\cdot (r'/r)$, which is equivalent to $f(r')/r' \le f(r)/r$. By symmetry, $f(r')/r' \ge f(r)/r$, and so $f(r')/r' = f(r)/r$, for any $r, r' > 0$. Therefore, $f$ must be linear in $r$, i.e., $f(r) = c\cdot r$ for some $c$. Thus, we just need to find this proportionality constant $c$ for our input circuit. This can be computed inductively. For a basic resistor, $c = 1$. For $k$ circuits with constants $c_1, \ldots, c_k$ joined in series, $c = \min(c_1, \ldots, c_k)$. For $k$ circuits with constants $c_1, \ldots, c_k$ joined in parallel, $\sqrt{c} = \sqrt{c_1} + \ldots + \sqrt{c_k}$. But note that all three statements imply (via induction) that $\sqrt{c}$ is an integer! Even more, using the formulas above, we can deduce the following stronger statement: for the purposes of minimization, the whole circuit is equivalent to a parallel circuit with $\sqrt{c}$ resistors. You can also prove this via induction, and it's actually not that hard. The hardest part is knowing what to prove in the first place. 
Once we have this statement, we can now turn its proof into a recursive algorithm that actually computes these $\sqrt{c}$ parallel resistors the whole circuit is equivalent to. (The most straightforward proof of it naturally translates into such an algorithm.) We then assign a resistance of $0$ to everything else. Finally, using the same analysis as before, to obtain a resistance of $r$ in a parallel circuit of $\sqrt{c}$ resistors, the optimal way is to assign $\sqrt{c}\cdot r$ to everything. But if $r$ is an integer, then $\sqrt{c}\cdot r$ is an integer as well, and since we're assigning $0$ to everything else, it means that the minimum cost to achieve $r$ can be achieved in integers as well!
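The constant can be computed directly from the circuit string. As an illustration only (the reference solution in C++ follows below), here is a minimal Python sketch that computes $\sqrt{c}$, i.e., the number of resistors in the equivalent parallel circuit, using the rules above: a lone resistor gives $1$, series takes the minimum over sub-circuits, and parallel takes the sum:

```python
def width(s):
    # Compute sqrt(c) for a circuit string s in the format of the problem:
    # "*" is a resistor; "(s1 S s2 S ... S sk)" is series; "P" is parallel.
    def parse(i):
        if s[i] == '*':
            return 1, i + 1
        i += 1                    # skip '('
        w, i = parse(i)
        while s[i] != ')':
            typ = s[i + 1]        # the 'S' or 'P' separator
            ww, i = parse(i + 3)  # skip " S " or " P "
            w = min(w, ww) if typ == 'S' else w + ww
        return w, i + 1           # skip ')'
    return parse(0)[0]
```

For example, the circuit "(* P (* S *) P *)" from the statement behaves, for minimization purposes, like $3$ resistors in parallel.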
|
[
"math"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll INF = 1LL << 60;
class Circuit {
public:
ll w = -1;
ll width() {
if (w == -1) compute_width();
return w;
}
virtual void compute_width() {}
virtual void assign(ll value) {}
virtual ostream& write_to(ostream& os) const { return os; }
void assign_minimal(ll value) { assign(value * width()); }
friend ostream& operator<<(ostream& os, const Circuit& circuit);
};
ostream& operator<<(ostream& os, const Circuit& circuit) {
return circuit.write_to(os);
}
class Resistor: public Circuit {
public:
ll value = -1;
Resistor() {}
void compute_width() override {
w = 1;
}
void assign(ll value) override {
this->value = value;
}
ostream& write_to(ostream& os) const override {
return os << ' ' << value;
}
};
class Join: public Circuit {
public:
char typ;
vector<Circuit*> children;
Join() {}
void compute_width() override {
w = typ == 'S' ? INF : 0;
for (auto child: children) {
ll ww = child->width();
w = typ == 'S' ? min(w, ww) : w + ww;
}
}
void assign(ll value) override {
for (auto child: children) {
if (typ == 'S') {
if (child->width() == width()) {
child->assign(value);
value = 0;
} else {
child->assign(0);
}
} else {
child->assign(value);
}
}
}
ostream& write_to(ostream& os) const override {
for (auto child: children) os << *child;
return os;
}
};
class CircuitParser {
const string& s;
int i = 0;
CircuitParser(const string& s): s(s) {}
Circuit *parse() {
if (s[i++] == '(') {
Join *j = new Join();
while (true) {
j->children.push_back(parse());
if (s[i++] == ')') break;
j->typ = s[i++];
i++;
}
return j;
} else {
return new Resistor();
}
}
public:
static Circuit *parse(const string& s) {
return CircuitParser(s).parse();
}
};
void solve(ll a, const string& s) {
Circuit *c = CircuitParser::parse(s);
c->assign_minimal(a);
cout << "REVOLTING" << *c << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int z;
for (cin >> z; z--;) {
ll a;
string s;
cin >> a;
getline(cin, s);
assert(s[0] == ' ');
solve(a, s.substr(1));
}
}
|
1280
|
F
|
Intergalactic Sliding Puzzle
|
You are an intergalactic surgeon and you have an alien patient. For the purposes of this problem, we can and we will model this patient's body using a $2 \times (2k + 1)$ rectangular grid. The alien has $4k + 1$ distinct organs, numbered $1$ to $4k + 1$.
In healthy such aliens, the organs are arranged in a particular way. For example, here is how the organs of a healthy such alien would be positioned, when viewed from the top, for $k = 4$:
Here, the E represents empty space.
In general, the first row contains organs $1$ to $2k + 1$ (in that order from left to right), and the second row contains organs $2k + 2$ to $4k + 1$ (in that order from left to right) and then empty space right after.
Your patient's organs are complete, and inside their body, but they somehow got shuffled around! Your job, as an intergalactic surgeon, is to put everything back in its correct position. All organs of the alien must be in its body during the entire procedure. This means that at any point during the procedure, there is exactly one cell (in the grid) that is empty. In addition, you can only move organs around by doing one of the following things:
- You can switch the positions of the empty space E with any organ to its immediate left or to its immediate right (if they exist). In reality, you do this by sliding the organ in question to the empty space;
- You can switch the positions of the empty space E with any organ to its immediate top or its immediate bottom (if they exist) \underline{only if} the empty space is on the \underline{leftmost} column, \underline{rightmost} column or in the \underline{centermost} column. Again, you do this by sliding the organ in question to the empty space.
Your job is to figure out a sequence of moves you must do during the surgical procedure in order to place back all $4k + 1$ internal organs of your patient in the correct cells. If it is impossible to do so, you must say so.
|
The "shortcuts" thing in the output section is basically a way for you to define subroutines, i.e., you can create simpler (useful) operations, and then you can combine them into more complex operations. Now, to solve the problem, we may represent the grid by the "circular permutation" obtained by going around the grid once, say in a clockwise order, starting from, say, the top-left corner. Without the middle column, this circular permutation cannot be changed; we can rotate it, but that's it. Thus, to make progress, we must use the middle column. The key insight here is that moving something through the middle column corresponds to one, and only one, kind of operation: which, on our circular permutation, corresponds to: Notice that it simply moved $4$ a couple of places to the right. In other words, moving through the middle corresponds to a rotation of $2k+1$ elements. Since we can also rotate the whole thing, this $(2k+1)$-rotation can be performed anywhere in our permutation. But since $(2k+1)$-rotations and full rotations (i.e., $(4k+1)$-rotations) are both even permutations, and these are the only available moves, this means that inputs that are an odd permutation away from the target state are unsolvable! (You can also notice this right away once you realize that this problem is essentially the "sliding picture puzzle" but with even more restricted moves.) Amazingly, the converse is true: all even permutations are solvable! This follows from the fact that we can produce a $3$-rotation from the given operations. You may want to try to come up with it yourself, so instead of giving the sequence of moves directly, I'll just give you a hint: there is a sequence of six $(2k+1)$-rotations that is equivalent to the $3$-rotation $(1 2 3)$ or $(1 3 2)$, and it only involves moving $1$, $2k+2$ and $2k+3$. (If you still can't find it, you could also write a backtracking program that finds it.) 
Once we have any single $3$-rotation, it can then be applied anywhere, again with the help of full rotations. Also, it is a well-known fact that any even permutation is representable as a product of $3$-rotations. This means that all even permutations are solvable! Now, determining which inputs are solvable is one thing, but actually finding the solution is a whole different beast. Fortunately, the "shortcuts" mechanism is here to make it a bit easier. The most important milestone is being able to come up with a sequence corresponding to a $3$-rotation; assign an uppercase letter to such an operation. After that, you will only need $3$-rotations and full rotations to solve the rest.
|
[
"combinatorics",
"constructive algorithms",
"math"
] | 3,400
|
def solve(k, grid):
seek = *range(2*k + 2), *range(4*k + 1, 2*k + 1, -1)
flat = [seek[v] for v in grid[0] + grid[1][::-1] if v]
m = {
'L': 'l'*2*k + 'u' + 'r'*2*k + 'd',
'R': 'u' + 'l'*2*k + 'd' + 'r'*2*k,
'C': 'l'*k + 'u' + 'r'*k + 'd',
'D': 'CC' + 'R'*(2*k + 1) + 'CC' + 'R'*(2*k + 2),
'F': 'R'*(k - 1) + 'DD' + 'R'*(2*k + 1) + 'D' + 'L'*2*k + 'DD' + 'L'*k,
'G': 'FF',
}
[(i, j)] = [(i, j) for i in range(2) for j in range(2*k + 1) if grid[i][j] == 0]
st = 'r'*(2*k - j) + 'd'*(1 - i)
for v in range(2, 4*k + 2):
ct = flat.index(v)
if ct >= 2:
st += 'L'*(ct - 2) + 'GR'*(ct - 2) + 'G'
flat = flat[ct - 1: ct + 1] + flat[:ct - 1] + flat[ct + 1:]
if ct >= 1:
st += 'G'
flat = flat[1:3] + flat[:1] + flat[3:]
st += 'L'
flat = flat[1:] + flat[:1]
if flat[0] == 1: return st, m
def main():
def get_line():
return [0 if x == 'E' else int(x) for x in input().split()]
for cas in range(int(input())):
res = solve(int(input()), [get_line() for i in range(2)])
if res is None:
print('SURGERY FAILED')
else:
print('SURGERY COMPLETE')
st, m = res
print(st)
for shortcut in m.items(): print(*shortcut)
print('DONE')
main()
|
1281
|
A
|
Suffix Three
|
We just discovered a new data structure in our research group: a \textbf{suffix three}!
It's very useful for natural language processing. Given three languages and three suffixes, a suffix three can determine which language a sentence is written in.
It's super simple, 100% accurate, and doesn't involve advanced machine learning algorithms.
Let us tell you how it works.
- If a sentence ends with "po" the language is Filipino.
- If a sentence ends with "desu" or "masu" the language is Japanese.
- If a sentence ends with "mnida" the language is Korean.
Given this, we need you to implement a suffix three that can differentiate Filipino, Japanese, and Korean.
Oh, did I say three suffixes? I meant four.
|
The simplest way to solve it is to use your language's builtin string methods like ends_with. (It might be named differently in your preferred language.) Alternatively, if you know how to access the individual letters of a string, then you may implement something similar to ends_with yourself. To print the required output, you can map each suffix to its corresponding language name. Alternatively, notice that you can simply check the last letter, since o, u and a are distinct, so the check can be simplified slightly. One can even write a Python one-liner (for a single test case).
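A minimal sketch of the ends_with approach in Python (the function name is illustrative; the actual one-liner solution appears below):

```python
def detect_language(sentence):
    # Check the suffixes in the order given by the statement.
    if sentence.endswith("po"):
        return "FILIPINO"
    if sentence.endswith("desu") or sentence.endswith("masu"):
        return "JAPANESE"
    if sentence.endswith("mnida"):
        return "KOREAN"
```

Since every input is guaranteed to end with one of the four suffixes, one of the branches always fires.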
|
[
"implementation"
] | 800
|
for cas in range(int(input())): print({'o': "FILIPINO", 'a': "KOREAN", 'u': "JAPANESE"}[input()[-1]])
|
1281
|
B
|
Azamon Web Services
|
Your friend Jeff Zebos has been trying to run his new online company, but it's not going very well. He's not getting a lot of sales on his website which he decided to call \textbf{Azamon}. His big problem, you think, is that he's not ranking high enough on the search engines. If only he could rename his products to have better names than his competitors, then he'll be at the top of the search results and will be a millionaire.
After doing some research, you find out that search engines only sort their results lexicographically. If your friend could rename his products to lexicographically smaller strings than his competitor's, then he'll be at the top of the rankings!
To make your strategy less obvious to his competitors, you decide to swap no more than two letters of the product names.
Please help Jeff to find improved names for his products that are lexicographically smaller than his competitor's!
Given the string $s$ representing Jeff's product name and the string $c$ representing his competitor's product name, find a way to swap \textbf{at most one pair} of characters in $s$ (that is, find two distinct indices $i$ and $j$ and swap $s_i$ and $s_j$) such that the resulting new name becomes strictly lexicographically smaller than $c$, or determine that it is impossible.
\textbf{Note:} String $a$ is \underline{strictly lexicographically smaller} than string $b$ if and only if one of the following holds:
- $a$ is a \underline{proper prefix} of $b$, that is, $a$ is a \underline{prefix} of $b$ such that $a \neq b$;
- There exists an integer $1 \le i \le \min{(|a|, |b|)}$ such that $a_i < b_i$ and $a_j = b_j$ for $1 \le j < i$.
|
The problem becomes a bit easier if we try to answer a different question: what is the lexicographically smallest string we can form from $S$? We then simply compare this string with $C$. This works because if the smallest string we can form is not smaller than $C$, then clearly no other string we can form will be smaller than $C$. To find the lexicographically smallest string we can form, we can be greedy. We sort $S$ and find the first letter that isn't in its correct sorted position. In other words, find the first position where $S$ and $\mathrm{sorted}(S)$ don't match. We then find the letter that should be in that position and put it in its correct position. If there are multiple choices, it is better to take the one that occurs last, since it makes the resulting string smallest. A special case is when $S$ is already sorted. In this case, we can't make $S$ any smaller, so we should not swap at all. The solution runs in $O(|S| \log |S|)$, but solutions running in $O(|S|^2)$ are also accepted. (There are other, different solutions that run in $O(|S|^2)$.) This can be improved to $O(|S|)$ by replacing the sorting step with simpler operations, since we don't actually need the full sorted version of $S$.
|
[
"greedy"
] | 1,600
|
def solve(s, t):
mns = list(s)
for i in range(len(s)-2,-1,-1): mns[i] = min(mns[i], mns[i + 1])
for i in range(len(s)):
if s[i] != mns[i]:
j = max(j for j, v in enumerate(s[i:], i) if v == mns[i])
s = s[:i] + s[j] + s[i+1:j] + s[i] + s[j+1:]
break
return s if s < t else '---'
for cas in range(int(input())):
print(solve(*input().split()))
|
1282
|
A
|
Temporarily unavailable
|
Polycarp lives on the coordinate axis $Ox$ and travels from the point $x=a$ to $x=b$. He moves uniformly and rectilinearly at a speed of one unit of distance per minute.
On the axis $Ox$, at the point $x=c$, the base station of a mobile operator is placed. It is known that the radius of its coverage is $r$. Thus, if Polycarp is at a distance of at most $r$ from the point $x=c$, then he is in the network coverage area; otherwise, he is not. The base station can be located both on the route of Polycarp and outside it.
Print the time in minutes during which Polycarp will \textbf{not be} in the coverage area of the network during his rectilinear uniform movement from $x=a$ to $x=b$. His speed is one unit of distance per minute.
|
To get the answer, we subtract from the whole travel time the time spent in the coverage area. Let the left boundary of the coverage be $L = c - r$, and the right boundary be $R = c + r$. Then the boundaries of the intersection with the route are $st = \max(L, \min(a, b))$ and $ed = \min(R, \max(a, b))$. The answer is then given by the formula $|b-a| - \max(0, ed - st)$.
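The formula translates directly into code; as a quick sanity check (separate from the reference C++ solution below), a Python sketch:

```python
def uncovered_minutes(a, b, c, r):
    # The travel segment is [min(a,b), max(a,b)]; coverage is [c-r, c+r].
    st = max(c - r, min(a, b))
    ed = min(c + r, max(a, b))
    # Total travel time minus the covered overlap, clamped at zero
    # for the case where the segments do not intersect.
    return abs(b - a) - max(0, ed - st)
```

For instance, traveling from $1$ to $10$ with a station at $c=7$, $r=1$ leaves $9 - 2 = 7$ uncovered minutes.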
|
[
"implementation",
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); i++)
int main() {
int t;
cin >> t;
forn(tt, t) {
int a, b, c, r;
cin >> a >> b >> c >> r;
int L = max(min(a, b), c - r);
int R = min(max(a, b), c + r);
cout << max(a, b) - min(a, b) - max(0, R - L) << endl;
}
}
|
1282
|
B2
|
K for the Price of One (Hard Version)
|
This is the hard version of this problem. The only difference is the constraint on $k$ — the number of gifts in the offer. In this version: $2 \le k \le n$.
Vasya came to the store to buy goods for his friends for the New Year. It turned out that he was very lucky — today the offer "$k$ of goods for the price of one" is held in store.
Using this offer, Vasya can buy \textbf{exactly} $k$ of any goods, paying only for the most expensive of them. Vasya decided to take this opportunity and buy as many goods as possible for his friends with the money he has.
More formally, for each good, its price is determined by $a_i$ — the number of coins it costs. Initially, Vasya has $p$ coins. He wants to buy the maximum number of goods. Vasya can perform one of the following operations as many times as necessary:
- Vasya can buy one good with the index $i$ if he currently has enough coins (i.e $p \ge a_i$). After buying this good, the number of Vasya's coins will decrease by $a_i$, (i.e it becomes $p := p - a_i$).
- Vasya can buy a good with the index $i$, and also choose exactly $k-1$ goods, the price of which does not exceed $a_i$, if he currently has enough coins (i.e $p \ge a_i$). Thus, he buys all these $k$ goods, and his number of coins decreases by $a_i$ (i.e it becomes $p := p - a_i$).
Please note that each good can be bought no more than once.
For example, if the store now has $n=5$ goods worth $a_1=2, a_2=4, a_3=3, a_4=5, a_5=7$, respectively, $k=2$, and Vasya has $6$ coins, then he can buy $3$ goods. A good with the index $1$ will be bought by Vasya without using the offer and he will pay $2$ coins. Goods with the indices $2$ and $3$ Vasya will buy using the offer and he will pay $4$ coins. It can be proved that Vasya can not buy more goods with six coins.
Help Vasya to find out the maximum number of goods he can buy.
|
If you sort the goods by cost, it is always profitable to apply the offer to blocks of $k$ consecutive goods, so that the most expensive good in each block (the one actually paid for) is as cheap as possible. It remains only to understand when to buy gifts without the offer. It makes no sense to buy $k$ or more gifts without the offer, since we could combine them and buy them with the offer instead. It also makes no sense to buy anything but the cheapest gifts without the offer, since the total cost would only increase. So the solution is to iterate over the length (at most $k$) of the prefix of the sorted array bought without the offer, and then buy the remaining items in groups of $k$. After sorting, this works in linear time, since for each prefix length we only look at elements with a fixed index modulo $k$.
|
[
"dp",
"greedy",
"sortings"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;
int main() {
int cntTest;
cin >> cntTest;
for (int test = 0; test < cntTest; test++) {
int n, k;
long long p;
cin >> n >> p >> k;
long long pref = 0;
int ans = 0;
vector<int> a(n);
for (int i = 0; i < n; i++) {
cin >> a[i];
}
sort(a.begin(), a.end());
for (int i = 0; i <= k; i++) {
long long sum = pref;
if (sum > p) break;
int cnt = i;
for (int j = i + k - 1; j < n; j += k) {
if (sum + a[j] <= p) {
cnt += k;
sum += a[j];
} else {
break;
}
}
if (i < k) pref += a[i];
ans = max(ans, cnt);
}
cout << ans << "\n";
}
}
|
1282
|
C
|
Petya and Exam
|
Petya has come to the math exam and wants to solve as many problems as possible. He prepared and carefully studied the rules by which the exam passes.
The exam consists of $n$ problems that can be solved in $T$ minutes. Thus, the exam begins at time $0$ and ends at time $T$. Petya can leave the exam at any integer time from $0$ to $T$, inclusive.
All problems are divided into two types:
- easy problems — Petya takes exactly $a$ minutes to solve any easy problem;
- hard problems — Petya takes exactly $b$ minutes ($b > a$) to solve any hard problem.
Thus, if Petya starts solving an easy problem at time $x$, then it will be solved at time $x+a$. Similarly, if at a time $x$ Petya starts to solve a hard problem, then it will be solved at time $x+b$.
For every problem, Petya knows whether it is easy or hard. Also, for each problem a time $t_i$ ($0 \le t_i \le T$) is given, at which it becomes mandatory (required). If Petya leaves the exam at time $s$ and there is a problem $i$ such that $t_i \le s$ and he didn't solve it, then he will receive $0$ points for the whole exam. Otherwise (i.e. if he has solved all problems with $t_i \le s$) he will receive a number of points equal to the number of solved problems. Note that when leaving at time $s$, Petya can have both "mandatory" and "non-mandatory" problems solved.
For example, if $n=2$, $T=5$, $a=2$, $b=3$, the first problem is hard and $t_1=3$ and the second problem is easy and $t_2=2$. Then:
- if he leaves at time $s=0$, then he will receive $0$ points since he will not have time to solve any problems;
- if he leaves at time $s=1$, he will receive $0$ points since he will not have time to solve any problems;
- if he leaves at time $s=2$, then he can get $1$ point by solving the problem with the number $2$ (it must be solved in the range from $0$ to $2$);
- if he leaves at time $s=3$, then he will receive $0$ points since at this moment both problems will be mandatory, but he will not be able to solve both of them;
- if he leaves at time $s=4$, then he will receive $0$ points since at this moment both problems will be mandatory, but he will not be able to solve both of them;
- if he leaves at time $s=5$, then he can get $2$ points by solving all problems.
Thus, the answer to this test is $2$.
Help Petya to determine the maximal number of points that he can receive, before leaving the exam.
|
Sort all problems by time $t_i$. You may notice that it is only profitable to leave the exam at one of the time points $t_i-1$ (where $t_i$ is the time when problem $i$ becomes mandatory) or at time $T$. Leaving at any other time in the range from $t_{i-1}$ to $t_i-2$ does not make sense, since no new mandatory problems appear in this range and there is always less time to solve than at the moment $t_i - 1$. For each such moment we need to know how many easy and hard problems have already become mandatory; then we know the time that must be spent on solving them. The remaining time can be spent on other problems, and it is more profitable to solve easy ones first, then hard ones. It remains to consider all such moments and take the maximum number of solved problems among them, which is the answer.
|
[
"greedy",
"sortings",
"two pointers"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;
int main() {
int cntTest;
cin >> cntTest;
for (int test = 0; test < cntTest; test++) {
ll n, t, a, b;
cin >> n >> t >> a >> b;
vector<pair<ll, ll>> v;
vector<int> hard(n);
int cntA = 0, cntB = 0;
for (int i = 0; i < n; i++) {
cin >> hard[i];
if (hard[i]) {
cntB++;
} else {
cntA++;
}
}
for (int i = 0; i < n; i++) {
ll time;
cin >> time;
v.push_back({time, hard[i]});
}
v.push_back({t + 1, 0});
sort(v.begin(), v.end());
ll cnt1 = 0, cnt2 = 0;
ll ans = 0;
for (int i = 0; i <= n; i++) {
ll need = cnt1 * a + cnt2 * b;
ll has = v[i].first - 1 - need;
if (has >= 0) {
ll canA = min((cntA - cnt1), has / a);
has -= canA * a;
ll canB = min((cntB - cnt2), has / b);
ans = max(ans, cnt1 + cnt2 + canA + canB);
}
int l = i;
while (l < v.size() && v[l].first == v[i].first) {
if (v[l].second) {
cnt2++;
} else {
cnt1++;
}
l++;
}
i = l - 1;
}
cout << ans << "\n";
}
}
|
1282
|
D
|
Enchanted Artifact
|
\textbf{This is an interactive problem.}
After completing the last level of the enchanted temple, you received a powerful artifact of the 255th level. Do not rush to celebrate, because this artifact has a powerful rune that can be destroyed with a single spell $s$, which you are going to find.
We define the spell as some \textbf{non-empty string} consisting only of the letters a and b.
At any time, you can cast an arbitrary non-empty spell $t$, and the rune on the artifact will begin to resist. Resistance of the rune is the edit distance between the strings that specify the casted spell $t$ and the rune-destroying spell $s$.
Edit distance of two strings $s$ and $t$ is a value equal to the minimum number of one-character operations of replacing, inserting and deleting characters in $s$ to get $t$. For example, the distance between ababa and aaa is $2$, the distance between aaa and aba is $1$, the distance between bbaba and abb is $3$. The edit distance is $0$ if and only if the strings are equal.
It is also worth considering that the artifact has a resistance limit — if you cast more than $n + 2$ spells, where $n$ is the length of spell $s$, the rune will be blocked.
Thus, it takes $n + 2$ or fewer spells to destroy the rune that is on your artifact. Keep in mind that the required destructive spell $s$ must also be counted among these $n + 2$ spells.
Note that the length $n$ of the rune-destroying spell $s$ is not known to you in advance. It is only known that its length $n$ does not exceed $300$.
|
Firstly, let's find the number of letters a and b in the hidden string in two queries. This can be done, for example, using the queries aaa ... aaa and bbb ... bbb of length $300$. Let the answers to these queries be $q_a$ and $q_b$; then the numbers of letters a and b are $\#a = 300 - q_a$ and $\#b = 300 - q_b$ respectively. Indeed, for the string aaa ... aaa the optimal transformation into $s$ removes $300 - \#a - \#b$ letters a at the end of the string and then replaces $\#b$ letters a with b, for a total cost of $300 - \#a$. Now we know the length $l = \#a + \#b$. Consider an arbitrary string $t$ of length $l$ and let the answer to its query be $q$. If we replace the letter $t_i$ with the opposite one (a with b or b with a), one of two situations occurs: either $q$ decreases by $1$, in which case the letter $t_i$ after the change coincides with the letter $s_i$; or otherwise the letter $t_i$ before the change matches the letter $s_i$. Thus, you can loop from left to right and for each position $i$ find out the character $s_i$, starting, for example, from the string aaa ... aaa of length $l$. This algorithm guesses the string in $n + 3$ queries. To get rid of one unnecessary query, note that we do not need a query for the last position: if the number of letters a whose locations we already know equals $\#a$, then the last character cannot be a, so it is b; the case with b is symmetric. Thus, we can guess the string in $n + 2$ queries. Similarly, it is possible to solve the problem for an arbitrary alphabet in $(|\Sigma| - 1) n + 2$ queries, where $|\Sigma|$ is the size of the alphabet. For an arbitrary alphabet there is also a randomized solution that uses $\frac{|\Sigma|}{2} n + 2$ queries on average, but it does not work for this task, since the chance to spend more than $n + 2$ queries is quite large.
|
[
"constructive algorithms",
"interactive",
"strings"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
int f(string s) {
cout << s << endl;
int w;
cin >> w;
if (w == 0)
exit(0);
return w;
}
int main() {
const int N = 300;
int st = f(string(N, 'a'));
int n = 2 * N - (st + f(string(N, 'b')));
string t(n, 'a');
int A = N - st, B = n - A;
st = B;
for (int i = 0; i < n - 1; i++) {
t[i] = 'b';
if (f(t) > st)
t[i] = 'a';
else
st--;
}
if (st)
t.back() = 'b';
f(t);
}
|
1282
|
E
|
The Cake Is a Lie
|
We are committed to the well-being of all participants. Therefore, instead of the problem, we suggest you enjoy a piece of cake.
Uh oh. Somebody cut the cake. We told them to wait for you, but they did it anyway. There is still some left, though, if you hurry back. Of course, before you taste the cake, you thought about how the cake was cut.
It is known that the cake was originally a regular $n$-sided polygon, each vertex of which had a unique number from $1$ to $n$. The vertices were numbered in random order.
Each piece of the cake is a triangle. The cake was cut into $n - 2$ pieces as follows: each time one cut was made with a knife (from one vertex to another) such that exactly one triangular piece was separated from the current cake, and the rest continued to be a convex polygon. In other words, each time three consecutive vertices of the polygon were selected and the corresponding triangle was cut off.
A possible process of cutting the cake is presented in the picture below.
\begin{center}
{\small Example of 6-sided cake slicing.}
\end{center}
You are given a set of $n-2$ triangular pieces in random order. The vertices of each piece are given in random order — clockwise or counterclockwise. Each piece is defined by three numbers — the numbers of the corresponding $n$-sided cake vertices.
For example, for the situation in the picture above, you could be given a set of pieces: $[3, 6, 5], [5, 2, 4], [5, 4, 6], [6, 3, 1]$.
You are interested in two questions.
- What was the enumeration of the $n$-sided cake vertices?
- In what order were the pieces cut?
Formally, you have to find two permutations $p_1, p_2, \dots, p_n$ ($1 \le p_i \le n$) and $q_1, q_2, \dots, q_{n - 2}$ ($1 \le q_i \le n - 2$) such that if the cake vertices are numbered with the numbers $p_1, p_2, \dots, p_n$ in order clockwise or counterclockwise, then when cutting pieces of the cake in the order $q_1, q_2, \dots, q_{n - 2}$ always cuts off a triangular piece so that the remaining part forms one convex polygon.
For example, in the picture above the answer permutations could be: $p=[2, 4, 6, 1, 3, 5]$ (or any of its cyclic shifts, or its reversal and after that any cyclic shift) and $q=[2, 4, 1, 3]$.
Write a program that, based on the given triangular pieces, finds any suitable permutations $p$ and $q$.
|
The problem can be solved in different ways: one can find both permutations independently, or use one to find the other. Firstly, let's find $q$, the order of cutting cake pieces. Take a look at the edges of the first cut piece: this triangle has a common side with at most one other piece. If it has no common sides with other triangles, there is only one triangle and the answer is trivial. So we may assume the first triangle is adjacent to exactly one other triangle. After cutting off this triangle, we have the same problem for an $(n - 1)$-sided cake. So let the first triangle be any triangle adjacent to exactly one other triangle, cut it off, and solve the problem recursively. This can be done by building the dual graph of the triangulation (vertices are pieces, edges connect pieces sharing a side). The remaining task is to find the permutation $p$; let's use $q$ for that. Reverse $q$ to get the order of adding triangles to obtain the desired polygon. This can be done with a doubly linked list: each added triangle has exactly one vertex not yet in the list, and that vertex is inserted between two existing ones. Alternatively, note that each side of the cake occurs exactly once in the input, while all other edges occur twice. So we can find the sides of the polygon and link them into a doubly linked list, which represents $p$.
|
[
"constructive algorithms",
"data structures",
"dfs and similar",
"graphs"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
void dfs_order(int u, int p, vector<vector<int>> const& g, vector<int> &order) {
for (auto v : g[u]) {
if (v != p) {
dfs_order(v, u, g, order);
}
}
order.push_back(u);
}
void get_order(map<pair<int, int>, vector<int>> const& in, int n, vector<int> &order) {
vector<vector<int>> g(n);
for (auto e : in) {
auto vs = e.second;
if (vs.size() == 2) {
g[vs[0]].push_back(vs[1]);
g[vs[1]].push_back(vs[0]);
}
}
dfs_order(0, -1, g, order);
}
void dfs_polygon(int u, vector<vector<int>> const& g, vector<bool> &used, vector<int> &res) {
if (used[u]) {
return;
}
used[u] = true;
res.push_back(u);
for (auto e : g[u]) {
dfs_polygon(e, g, used, res);
}
}
void get_polygon(map<pair<int, int>, vector<int>> const& in, int n, vector<int> &polygon) {
vector<vector<int>> g(n);
for (auto e : in) {
if (e.second.size() == 1) {
auto edge = e.first;
g[edge.first].push_back(edge.second);
g[edge.second].push_back(edge.first);
}
}
vector<bool> used(n);
dfs_polygon(0, g, used, polygon);
}
void get_polygon(vector<vector<int>> const& in, int n, vector<int> const& order, vector<int> &polygon) {
vector<pair<int, int>> listp(n, {-1, -1});
auto last = in[order.back()];
for (int i = 0; i < 3; i++) {
listp[last[i]] = {last[(i + 1) % 3], last[(i + 2) % 3]};
}
for (int i = (int) order.size() - 2; i >= 0; i--) {
auto x = in[order[i]];
int j = 0;
while (listp[x[j]] != pair<int, int>{-1, -1}) {
j++;
}
int x1 = x[j], x2 = x[(j + 1) % 3], x3 = x[(j + 2) % 3];
if (listp[x2].second != x3) {
swap(x2, x3);
}
listp[x1] = {x2, x3};
listp[x2].second = x1;
listp[x3].first = x1;
}
polygon.push_back(0);
int now = listp[0].second;
while (now != 0) {
polygon.push_back(now);
now = listp[now].second;
}
}
void out(vector<int> const& v) {
for (auto e : v) {
cout << e + 1 << ' ';
}
cout << endl;
}
void solve() {
int n;
cin >> n;
vector<vector<int>> in(n - 2, vector<int>(3));
for (int i = 0; i < n - 2; i++) {
for (int j = 0; j < 3; j++) {
cin >> in[i][j];
in[i][j]--;
}
sort(in[i].begin(), in[i].end());
}
map<pair<int, int>, vector<int>> mp;
for (int i = 0; i < n - 2; i++) {
auto tri = in[i];
for (int j = 0; j < 2; j++) {
for (int k = j + 1; k < 3; k++) {
mp[{tri[j], tri[k]}].push_back(i);
}
}
}
vector<int> order;
get_order(mp, n, order);
vector<int> polygon;
get_polygon(in, n, order, polygon);
// get_polygon(mp, n, polygon); // another solution
out(polygon);
out(order);
}
int main() {
int t;
cin >> t;
for (int t_num = 1; t_num <= t; t_num++) {
solve();
}
return 0;
}
|
1283
|
A
|
Minutes Before the New Year
|
New Year is coming and you are excited to know how many minutes remain before the New Year. You know that currently the clock shows $h$ hours and $m$ minutes, where $0 \le h < 24$ and $0 \le m < 60$. We use the 24-hour time format!
Your task is to find the number of minutes before the New Year. You know that New Year comes when the clock shows $0$ hours and $0$ minutes.
You have to answer $t$ independent test cases.
|
In this problem we just need to print $1440 - 60h - m$.
|
[
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int q;
scanf("%d", &q);
for (int i = 0; i < q; ++i) {
int h, m;
scanf("%d %d", &h, &m);
printf("%d\n", 1440 - h * 60 - m);
}
return 0;
}
|
1283
|
B
|
Candies Division
|
Santa has $n$ candies and he wants to gift them to $k$ kids. He wants to divide as many candies as possible between all $k$ kids. Santa can't divide one candy into parts but he is allowed to not use some candies at all.
Suppose the kid who receives the minimum number of candies has $a$ candies and the kid who receives the maximum number of candies has $b$ candies. Then Santa will be \textbf{satisfied} if both conditions are met at the same time:
- $b - a \le 1$ (it means $b = a$ or $b = a + 1$);
- the number of kids who has $a+1$ candies (\textbf{note that $a+1$ not necessarily equals $b$}) does not exceed $\lfloor\frac{k}{2}\rfloor$ (less than or equal to $\lfloor\frac{k}{2}\rfloor$).
$\lfloor\frac{k}{2}\rfloor$ is $k$ divided by $2$ and rounded \textbf{down} to the nearest integer. For example, if $k=5$ then $\lfloor\frac{k}{2}\rfloor=\lfloor\frac{5}{2}\rfloor=2$.
Your task is to find the maximum number of candies Santa can give to kids so that he will be \textbf{satisfied}.
You have to answer $t$ independent test cases.
|
Firstly, notice that we can always distribute $n - n \% k$ (where $\%$ is the modulo operation) candies between the kids. In this case $a=\lfloor\frac{n}{k}\rfloor$ and the answer is at least $ak$. Then we can add the value $\min(n \% k, \lfloor\frac{k}{2}\rfloor)$ to the answer. Why? Because only $n \% k$ candies remain, and the maximum number of kids to whom we can give one more candy is $\lfloor\frac{k}{2}\rfloor$.
|
[
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int q;
cin >> q;
for (int i = 0; i < q; ++i) {
int n, k;
cin >> n >> k;
int full = n - n % k;
full += min(n % k, k / 2);
cout << full << endl;
}
return 0;
}
|
1283
|
C
|
Friends and Gifts
|
There are $n$ friends who want to give gifts for the New Year to each other. Each friend should give \textbf{exactly} one gift and receive \textbf{exactly} one gift. The friend \textbf{cannot} give the gift to himself.
For each friend the value $f_i$ is known: it is either $f_i = 0$ if the $i$-th friend doesn't know whom he wants to give the gift to or $1 \le f_i \le n$ if the $i$-th friend wants to give the gift to the friend $f_i$.
You want to fill in the unknown values ($f_i = 0$) in such a way that each friend gives \textbf{exactly} one gift and receives \textbf{exactly} one gift and there is \textbf{no} friend who gives the gift to himself. It is guaranteed that the initial information isn't contradictory.
If there are several answers, you can print any.
|
In this problem, we need to print a permutation without fixed points (without values $p_i = i$), but some values are known in advance. Let's consider the permutation as a graph: a permutation is a set of non-intersecting cycles. In this problem, we are given such a graph with some edges removed. How to deal with it? Firstly, let's find the isolated vertices in the graph and let their number be $cnt$. If $cnt=0$ then all is fine and we skip the current step. If $cnt=1$ then let's attach this isolated vertex to any vertex to which we can attach it. Otherwise, $cnt>1$ and we can link all isolated vertices into one cycle. Now $cnt=0$ and we can finally construct the remaining part of the graph. Notice that we have the same number of vertices with zero in-degree and zero out-degree, and because we got rid of all possible self-loops in the graph, we can match these vertices arbitrarily. Time complexity: $O(n)$.
|
[
"constructive algorithms",
"data structures",
"math"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n;
cin >> n;
vector<int> f(n);
vector<int> in(n), out(n);
for (int i = 0; i < n; ++i) {
cin >> f[i];
--f[i];
if (f[i] != -1) {
++out[i];
++in[f[i]];
}
}
vector<int> loops;
for (int i = 0; i < n; ++i) {
if (in[i] == 0 && out[i] == 0) {
loops.push_back(i);
}
}
if (loops.size() == 1) {
int idx = loops[0];
for (int i = 0; i < n; ++i) {
if (in[i] == 0 && i != idx) {
f[idx] = i;
++out[idx];
++in[i];
break;
}
}
} else if (loops.size() > 1) {
for (int i = 0; i < int(loops.size()); ++i) {
int cur = loops[i];
int nxt = loops[(i + 1) % int(loops.size())];
f[cur] = nxt;
++out[cur];
++in[nxt];
}
}
loops.clear();
vector<int> ins, outs;
for (int i = 0; i < n; ++i) {
if (in[i] == 0) ins.push_back(i);
if (out[i] == 0) outs.push_back(i);
}
assert(ins.size() == outs.size());
for (int i = 0; i < int(outs.size()); ++i) {
f[outs[i]] = ins[i];
}
for (int i = 0; i < n; ++i) {
cout << f[i] + 1 << " ";
}
cout << endl;
return 0;
}
|
1283
|
D
|
Christmas Trees
|
There are $n$ Christmas trees on an infinite number line. The $i$-th tree grows at the position $x_i$. All $x_i$ are guaranteed to be distinct.
Each \textbf{integer} point can be either occupied by the Christmas tree, by the human or not occupied at all. Non-integer points cannot be occupied by anything.
There are $m$ people who want to celebrate Christmas. Let $y_1, y_2, \dots, y_m$ be the positions of people (note that all values $x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_m$ should be \textbf{distinct} and all $y_j$ should be \textbf{integer}). You want to find such an arrangement of people that the value $\sum\limits_{j=1}^{m}\min\limits_{i=1}^{n}|x_i - y_j|$ is the minimum possible (in other words, the sum of distances to the nearest Christmas tree for all people should be minimized).
In other words, let $d_j$ be the distance from the $j$-th human to the nearest Christmas tree ($d_j = \min\limits_{i=1}^{n} |y_j - x_i|$). Then you need to choose such positions $y_1, y_2, \dots, y_m$ that $\sum\limits_{j=1}^{m} d_j$ is the minimum possible.
|
In this problem, we first need to consider all points adjacent to at least one Christmas tree, then all points at distance two from the nearest Christmas tree, and so on. What does this look like? The well-known multi-source BFS. Let's maintain a queue of positions and the set of used positions (and the distance to each vertex, of course). In the first step, add all positions of the Christmas trees with zero distance as initial vertices. Let the current vertex be $v$. If $d_v = 0$ (it is a Christmas tree), then just add $v-1$ and $v+1$ to the queue (if these vertices aren't added already) and continue. Otherwise, increase the answer by $d_v$ and add $v$ to the array of positions of people. When the length of this array reaches $m$, interrupt the BFS and print the answer. Beware of pitfalls such as Arrays.sort in Java or std::unordered_map in C++: these can be forced into quadratic behavior by adversarial tests. Time complexity: $O((n + m) \log (n + m))$.
|
[
"graphs",
"greedy",
"shortest paths"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
mt19937 rnd(time(NULL));
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// freopen("output.txt", "w", stdout);
#endif
int n, m;
cin >> n >> m;
vector<int> x(n);
for (int i = 0; i < n; ++i) {
cin >> x[i];
}
queue<int> q;
map<int, int> d;
for (int i = 0; i < n; ++i) {
d[x[i]] = 0;
q.push(x[i]);
}
long long ans = 0;
vector<int> res;
while (!q.empty()) {
if (int(res.size()) == m) break;
int cur = q.front();
q.pop();
if (d[cur] != 0) {
ans += d[cur];
res.push_back(cur);
}
if (!d.count(cur - 1)) {
d[cur - 1] = d[cur] + 1;
q.push(cur - 1);
}
if (!d.count(cur + 1)) {
d[cur + 1] = d[cur] + 1;
q.push(cur + 1);
}
}
cout << ans << endl;
shuffle(res.begin(), res.end(), rnd);
for (auto it : res) cout << it << " ";
cout << endl;
return 0;
}
|
1283
|
E
|
New Year Parties
|
Oh, New Year. The time to gather all your friends and reflect on the heartwarming events of the past year...
$n$ friends live in a city which can be represented as a number line. The $i$-th friend lives in a house with an integer coordinate $x_i$. The $i$-th friend can come celebrate the New Year to the house with coordinate $x_i-1$, $x_i+1$ or stay at $x_i$. Each friend is allowed to move no more than once.
For all friends $1 \le x_i \le n$ holds, however, they can come to houses with coordinates $0$ and $n+1$ (if their houses are at $1$ or $n$, respectively).
For example, let the initial positions be $x = [1, 2, 4, 4]$. The final ones then can be $[1, 3, 3, 4]$, $[0, 2, 3, 3]$, $[2, 2, 5, 5]$, $[2, 1, 3, 5]$ and so on. The number of occupied houses is the number of distinct positions among the final ones.
So all friends choose the moves they want to perform. After that the number of occupied houses is calculated. What is the minimum and the maximum number of occupied houses can there be?
|
At first, treat the two subtasks as completely independent problems. For both solutions the array of frequencies is more convenient to use, so let's build it ($cnt_i$ is the number of friends living in house $i$). 1) Minimum. Collect the answer greedily from left to right. If $cnt_i = 0$ then proceed to $i+1$; otherwise add $1$ to the answer and proceed to $i+3$. To prove this, let's maximize the number of merges of houses instead of minimizing the actual count of them: the final number of houses is the initial one minus the number of merges. So if there are people in all $3$ consecutive houses starting from $i$, then $2$ merges is the absolute best you can do with them, and skipping any of the merges won't give a better answer. If only $2$ of them are occupied, $1$ merge is the best, and we can achieve that $1$ merge. A single occupied house obviously allows $0$ merges. 2) Maximum. Also greedy, but let's process the houses in segments of consecutive positions with positive $cnt$. Take a look at the sum of some segment of houses. If the sum is greater than the length, then you can enlarge that segment by $1$ house to the left or to the right. If the sum is greater by at least $2$, then you can enlarge it in both directions at the same time. Thus the following greedy works. Update the segments from left to right. For each segment check the distance to the previous one (if it was enlarged to the right then consider the new right border). If you can enlarge the current segment and there is space on the left, then enlarge it; and if you still have the possibility to enlarge the segment, then enlarge it to the right. Notice that it doesn't matter which of any pair of consecutive segments takes the spot between them, as the answer changes the same way. The initial segments can be obtained with two pointers. Overall complexity: $O(n)$.
|
[
"dp",
"greedy"
] | 1,800
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
int n;
vector<int> a, cnt;
int solvemin(){
int res = 0;
forn(i, n){
if (!cnt[i]) continue;
++res;
i += 2;
}
return res;
}
int solvemax(){
int res = 0;
int dist = 2;
bool right = false;
forn(i, n){
if (!cnt[i]){
++dist;
continue;
}
int j = i - 1;
int sum = 0;
while (j + 1 < n && cnt[j + 1]){
++j;
sum += cnt[j];
}
res += (j - i + 1);
if (sum > j - i + 1 && (!right || dist > 1)){
--sum;
++res;
}
right = false;
if (sum > j - i + 1){
right = true;
++res;
}
i = j;
dist = 0;
}
return res;
}
int main() {
scanf("%d", &n);
a.resize(n);
cnt.resize(n + 1);
forn(i, n){
scanf("%d", &a[i]);
++cnt[a[i] - 1];
}
printf("%d %d\n", solvemin(), solvemax());
return 0;
}
|
1283
|
F
|
DIY Garland
|
Polycarp has decided to decorate his room because the New Year is soon. One of the main decorations that Polycarp will install is the garland he is going to solder himself.
Simple garlands consisting of several lamps connected by one wire are too boring for Polycarp. He is going to solder a garland consisting of $n$ lamps and $n - 1$ wires. Exactly one lamp will be connected to the power grid, and power will be transmitted from it to other lamps by the wires. Each wire connects exactly two lamps; one lamp is called \textbf{the main lamp} for this wire (the one that gets power from some other wire and transmits it to this wire), the other one is called \textbf{the auxiliary lamp} (the one that gets power from this wire). Obviously, each lamp has at most one wire that brings power to it (and this lamp is the auxiliary lamp for this wire, and the main lamp for all other wires connected directly to it).
Each lamp has a brightness value associated with it, the $i$-th lamp has brightness $2^i$. We define the \textbf{importance} of the wire as the sum of brightness values over all lamps that become disconnected from the grid if the wire is cut (and all other wires are still working).
Polycarp has drawn the scheme of the garland he wants to make (the scheme depicts all $n$ lamps and $n - 1$ wires, and the lamp that will be connected directly to the grid is marked; the wires are placed in such a way that power can be transmitted to each lamp). After that, Polycarp calculated the importance of each wire, enumerated them from $1$ to $n - 1$ in descending order of their importance, and then wrote down the index of the main lamp for each wire (in the order from the first wire to the last one).
The following day Polycarp bought all required components of the garland and decided to solder it — but he could not find the scheme. Fortunately, Polycarp found the list of indices of main lamps for all wires. Can you help him restore the original scheme?
|
First of all, we don't like the fact that importance values can be integers up to $2^n$ (it is kinda hard to work with them). Let's rephrase the problem. The highest bit set to $1$ in the importance value denotes the maximum in the subtree rooted at the auxiliary lamp for the wire. So, we sort the wires according to the maximums in their subtrees. To break ties, we could consider the second maximum, then the third maximum - but that's not convenient. We can use something much easier: suppose there are two vertices with the same maximum in their subtrees; these vertices belong to the path from the root to the maximum in their subtrees, and the one which is closer to the root has the greater importance value. So, to get the order described in the problem statement, we could sort the vertices according to the maximum in their subtrees, and use depth as the tie-breaker. What does this imply? All vertices of some prefix are ancestors of vertex $n$, so some prefix denotes the path from the root to $n$ (excluding $n$ itself). Then there are some values describing the path from some already visited vertex to $n - 1$ (if $n - 1$ was not met before), then to $n - 2$, and so on. How can we use this information to restore the original tree? $a_1$ is the root, obviously. Then the sequence can be separated into several subsegments, each representing a vertical path in the tree (and each vertex is the parent of the next vertex in the sequence, if they belong to the same subsegment). How can we separate these vertices into subsegments, and how to find the parents for vertices which did not appear in the sequence at all? Suppose some vertex appears several times in our sequence. The first time it appeared in the sequence, it was in the middle of some vertical path, so the previous vertex is its parent; and every time this vertex appears again, it means that we start a new path - and that's how decomposition into paths is done. 
Determining the parents of vertices that did not appear in the sequence is a bit harder, but can also be done. Let's recall that our sequence is decomposed into paths from root to $n$, from some visited vertex to $n - 1$, from some visited vertex to $n - 2$, and so on; so, each time the path changes, it means that we have found the maximum vertex (among unvisited ones). So we should keep track of the maximum vertex that was not introduced in the sequence while we split it into paths, and each time a path breaks, it means that we found the vertex we were keeping track of. Overall, this solution can be implemented in $O(n)$.
|
[
"constructive algorithms",
"greedy",
"trees"
] | 2,200
|
#define _CRT_SECURE_NO_WARNINGS
#include<cstdio>
#include<vector>
#include<algorithm>
using namespace std;
int main()
{
int n;
scanf("%d", &n);
vector<int> a(n - 1);
for (int i = 0; i < n - 1; i++)
{
scanf("%d", &a[i]);
a[i]--;
}
int root = a[0];
int last = -1;
vector<int> used(n, 0);
printf("%d\n", root + 1);
int cur = n - 1;
for (int i = 0; i < n - 1; i++)
{
used[a[i]] = 1;
while (used[cur])
cur--;
if (i == n - 2 || used[a[i + 1]])
{
printf("%d %d\n", a[i] + 1, cur + 1);
used[cur] = 1;
}
else
printf("%d %d\n", a[i + 1] + 1, a[i] + 1);
}
return 0;
}
|
1285
|
A
|
Mezo Playing Zoma
|
Today, Mezo is playing a game. Zoma, a character in that game, is initially at position $x = 0$. Mezo starts sending $n$ commands to Zoma. There are two possible commands:
- 'L' (Left) sets the position $x: =x - 1$;
- 'R' (Right) sets the position $x: =x + 1$.
Unfortunately, Mezo's controller malfunctions sometimes. Some commands are sent successfully and some are ignored. If the command is ignored then the position $x$ doesn't change and Mezo simply proceeds to the next command.
For example, if Mezo sends commands "LRLR", then here are some possible outcomes (underlined commands are sent successfully):
- "{\underline{LRLR}}" — Zoma moves to the left, to the right, to the left again and to the right for the final time, ending up at position $0$;
- "LRLR" — Zoma recieves no commands, doesn't move at all and ends up at position $0$ as well;
- "{\underline{L}R\underline{L}R}" — Zoma moves to the left, then to the left again and ends up in position $-2$.
Mezo doesn't know which commands will be sent successfully beforehand. Thus, he wants to know how many different positions may Zoma end up at.
|
Let $c_L$ and $c_R$ be the number of 'L's and 'R's in the string respectively. Note that Zoma may end up at any integer point in the interval $[-c_L, c_R]$. So, the answer equals $c_R - (-c_L) + 1 = n + 1$.
|
[
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
#define finish(x) return cout << x << endl, 0
#define ll long long
int n;
string s;
int main(){
ios_base::sync_with_stdio(0);
cin.tie(0);
cin >> n >> s;
cout << n + 1 << endl;
}
|
1285
|
B
|
Just Eat It!
|
Today, Yasser and Adel are at the shop buying cupcakes. There are $n$ cupcake types, arranged from $1$ to $n$ on the shelf, and there are infinitely many of each type. The tastiness of a cupcake of type $i$ is an integer $a_i$. There are both tasty and nasty cupcakes, so the tastiness can be positive, zero or negative.
Yasser, of course, wants to try them all, so he will buy exactly one cupcake of each type.
On the other hand, Adel will choose some segment $[l, r]$ $(1 \le l \le r \le n)$ that does not include all of the cupcakes (he can't choose $[l, r] = [1, n]$) and buy exactly one cupcake of each of types $l, l + 1, \dots, r$.
After that they will compare the total tastiness of the cupcakes each of them have bought. Yasser will be happy if the total tastiness of cupcakes he buys is \textbf{strictly} greater than the total tastiness of cupcakes Adel buys \textbf{regardless of Adel's choice}.
For example, let the tastinesses of the cupcakes be $[7, 4, -1]$. Yasser will buy all of them, the total tastiness will be $7 + 4 - 1 = 10$. Adel can choose segments $[7], [4], [-1], [7, 4]$ or $[4, -1]$, their total tastinesses are $7, 4, -1, 11$ and $3$, respectively. Adel can choose segment with tastiness $11$, and as $10$ is not strictly greater than $11$, Yasser won't be happy :(
Find out if Yasser will be happy after visiting the shop.
|
If there is a prefix or a suffix with non-positive sum, we can delete that prefix/suffix and end up with an array whose sum is $\geq$ the sum of the whole array. So, if that's the case, the answer is "NO". Otherwise, every segment that Adel can choose has sum strictly less than the sum of the whole array, because the elements outside the segment always have a strictly positive sum. So, in that case, the answer is "YES". Time complexity: $O(n)$
|
[
"dp",
"greedy",
"implementation"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
#define finish(x) return cout << x << endl, 0
#define ll long long
int n;
vector <int> a;
bool solve(){
cin >> n;
a.resize(n);
for(auto &i : a) cin >> i;
ll sum = 0;
for(int i = 0 ; i < n ; i++){
sum += a[i];
if(sum <= 0) return 0;
}
sum = 0;
for(int i = n - 1 ; i >= 0 ; i--){
sum += a[i];
if(sum <= 0) return 0;
}
return 1;
}
int main(){
ios_base::sync_with_stdio(0);
cin.tie(0);
int T;
cin >> T;
while(T--){
if(solve()) cout << "YES\n";
else cout << "NO\n";
}
}
|
1285
|
C
|
Fadi and LCM
|
Today, Osama gave Fadi an integer $X$, and Fadi was wondering about the minimum possible value of $max(a, b)$ such that $LCM(a, b)$ equals $X$. Both $a$ and $b$ should be positive integers.
$LCM(a, b)$ is the smallest positive integer that is divisible by both $a$ and $b$. For example, $LCM(6, 8) = 24$, $LCM(4, 12) = 12$, $LCM(2, 3) = 6$.
Of course, Fadi immediately knew the answer. Can you be just like Fadi and find any such pair?
|
There will always be a solution where $a$ and $b$ are coprime. To see why, let's prime factorize $a$ and $b$. If they share a prime factor, we can omit all its occurrences from one of them, precisely from the one that has fewer occurrences of that prime, without affecting their $LCM$. Now, let's prime factorize $X$. Since there will be at most $11$ distinct primes, we can distribute them between $a$ and $b$ with a brute force. For an easier implementation, you can loop over all divisors $d$ of $X$, check if $LCM(d, \frac{X}{d})$ equals $X$, and minimize the answer with the pair $(d, \frac{X}{d})$. Time complexity: $O(\sqrt{X})$
|
[
"brute force",
"math",
"number theory"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
#define finish(x) return cout << x << endl, 0
#define ll long long
ll x;
ll lcm(ll a, ll b){
return a / __gcd(a, b) * b;
}
int main(){
ios_base::sync_with_stdio(0);
cin.tie(0);
cin >> x;
ll ans;
for(ll i = 1 ; i * i <= x ; i++){
if(x % i == 0 && lcm(i, x / i) == x){
ans = i;
}
}
cout << ans << " " << x / ans << endl;
}
|
1285
|
D
|
Dr. Evil Underscores
|
Today, as a friendship gift, Bakry gave Badawy $n$ integers $a_1, a_2, \dots, a_n$ and challenged him to choose an integer $X$ such that the value $\underset{1 \leq i \leq n}{\max} (a_i \oplus X)$ is minimum possible, where $\oplus$ denotes the bitwise XOR operation.
As always, Badawy is too lazy, so you decided to help him and find the minimum possible value of $\underset{1 \leq i \leq n}{\max} (a_i \oplus X)$.
|
We will solve this problem recursively starting from the most significant bit. Let's split the elements into two groups: one with the elements which have the current bit on and one with the elements which have the current bit off. If either group is empty, we can assign the current bit of $X$ accordingly so that the current bit is off in our answer, so we will just proceed to the next bit. Otherwise, both groups aren't empty, so whatever value we assign to the current bit of $X$, this bit will be on in our answer. Now, to decide which value to assign to the current bit of $X$, we solve the same problem recursively for each of the groups for the next bit; let $ans_{on}$ and $ans_{off}$ be the answers of the recursive calls for the on and the off groups respectively. Note that if we assign $1$ to the current bit of $X$, the answer will be $2^{i}+ans_{off}$, and if we assign $0$ to the current bit of $X$, the answer will be $2^{i}+ans_{on}$, where $i$ is the current bit. So we simply choose the minimum of these two cases, and the answer is $2^{i}+\min(ans_{on}, ans_{off})$. Time complexity: $O(n \log(\max a_i))$
|
[
"bitmasks",
"brute force",
"dfs and similar",
"divide and conquer",
"dp",
"greedy",
"strings",
"trees"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
#define finish(x) return cout << x << endl, 0
#define ll long long
int n;
vector <int> a;
int solve(vector <int> &c, int bit){
if(bit < 0) return 0;
vector <int> l, r;
for(auto &i : c){
if(((i >> bit) & 1) == 0) l.push_back(i);
else r.push_back(i);
}
if(l.size() == 0) return solve(r, bit - 1);
if(r.size() == 0) return solve(l, bit - 1);
return min(solve(l, bit - 1), solve(r, bit - 1)) + (1 << bit);
}
int main(){
ios_base::sync_with_stdio(0);
cin.tie(0);
cin >> n;
a.resize(n);
for(auto &i : a) cin >> i;
cout << solve(a, 30) << endl;
}
|
1285
|
E
|
Delete a Segment
|
There are $n$ segments on a $Ox$ axis $[l_1, r_1]$, $[l_2, r_2]$, ..., $[l_n, r_n]$. Segment $[l, r]$ covers all points from $l$ to $r$ inclusive, so all $x$ such that $l \le x \le r$.
Segments can be placed arbitrarily — be inside each other, coincide and so on. Segments can degenerate into points, that is $l_i=r_i$ is possible.
Union of the set of segments is such a set of segments which covers exactly the same set of points as the original set. For example:
- if $n=3$ and there are segments $[3, 6]$, $[100, 100]$, $[5, 8]$ then their union is $2$ segments: $[3, 8]$ and $[100, 100]$;
- if $n=5$ and there are segments $[1, 2]$, $[2, 3]$, $[4, 5]$, $[4, 6]$, $[6, 6]$ then their union is $2$ segments: $[1, 3]$ and $[4, 6]$.
Obviously, a union is a set of pairwise non-intersecting segments.
You are asked to erase exactly one segment of the given $n$ so that the number of segments in the union of the rest $n-1$ segments is maximum possible.
For example, if $n=4$ and there are segments $[1, 4]$, $[2, 3]$, $[3, 6]$, $[5, 7]$, then:
- erasing the first segment will lead to $[2, 3]$, $[3, 6]$, $[5, 7]$ remaining, which have $1$ segment in their union;
- erasing the second segment will lead to $[1, 4]$, $[3, 6]$, $[5, 7]$ remaining, which have $1$ segment in their union;
- erasing the third segment will lead to $[1, 4]$, $[2, 3]$, $[5, 7]$ remaining, which have $2$ segments in their union;
- erasing the fourth segment will lead to $[1, 4]$, $[2, 3]$, $[3, 6]$ remaining, which have $1$ segment in their union.
Thus, you are required to erase the third segment to get answer $2$.
Write a program that will find the maximum number of segments in the union of $n-1$ segments if you erase any of the given $n$ segments.
Note that if there are multiple equal segments in the given set, then you can erase only one of them anyway. So the set after erasing will have exactly $n-1$ segments.
|
OK, looking for a new number of segments in a union is actually hard. Let $nw[i]$ be the union of segments after erasing the $i$-th one. Obviously, each of the segments in $nw[i]$ has its left and right borders. Let me show you how to count either of these two kinds; let's choose left borders. I will call the set of left borders of a set $s$ of segments $lf_s$. Build the initial union of all segments (that is a standard algorithm, google it if you want). Call it $init$. We are asked to find $\max \limits_{i} |nw[i]|$, but let's instead find $\max \limits_{i} |nw[i]| - |init|$ (that is the difference of sizes of the new union for $i$ and the initial one). Surely, adding $|init|$ to this value will be the answer. Moreover, $\max \limits_{i} |nw[i]| - |init| = \max \limits_{i} |lf_{nw[i]}| - |lf_{init}|$, and that's what we are going to calculate. Call that difference $diff_i$. Let's do the following sweep line. Add queries of form $(l_i, i, 1)$ and $(r_i, i, -1)$. Process them in sorted order. Maintain the set of the open segments. This sweep line will add segment $i$ on a query of the first type and remove segment $i$ on a query of the second type. Initialize all the $diff_i$ with zeroes; this sweep line will help us to calculate all the values altogether. Look at all updates on the same coordinate $x$. The only case we care about is: the current set of open segments contains exactly one segment and there is at least one adding update. Let this currently open segment be $j$. Consider what happens with $nw[j]$. $x$ is not in $lf_{init}$ because at least segment $j$ covers it. $x$ is in $lf_{nw[j]}$ because after erasing segment $j$, $x$ becomes a left border of some segment of the union (you are adding a segment with the left border $x$, and points slightly to the left of $x$ are no longer covered by segment $j$). Thus, $diff_j$ increases by $1$.
The other possible cases are: there are no open segments currently (not important because $x$ was a left border and stays one); there are at least two open segments (not important because $x$ will still be covered by at least one of them after erasing some other); there are no adding updates ($x$ does not become a left border, since no segment starts at it). Thus, we handled all the cases that increase the left border count. But there is also a decreasing case. A left border can get removed if the segment you are erasing had its left border in the initial union and was the only segment with that left border. You can get $lf_{init}$ while building $init$. Then for each element of $lf_{init}$ you can count how many segments start at it. Finally, iterate over $i$ and decrease $diff_i$ by one if the value for the left border of segment $i$ is exactly $1$. Now $diff_i$ is obtained, and $(\max \limits_i diff_i) + |init|$ is the answer. Overall complexity: $O(n \log n)$. Author: MikeMirzayanov
|
[
"brute force",
"constructive algorithms",
"data structures",
"dp",
"graphs",
"sortings",
"trees",
"two pointers"
] | 2,300
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
#define x first
#define y second
using namespace std;
const int INF = 2e9;
typedef pair<int, int> pt;
map<int, int> ls;
int get(vector<pt> a){
int cnt = 0;
int l = -INF, r = -INF;
sort(a.begin(), a.end());
forn(i, a.size()){
if (a[i].x > r){
if (r != -INF)
ls[l] = 0;
++cnt;
l = a[i].x, r = a[i].y;
}
else{
r = max(r, a[i].y);
}
}
ls[l] = 0;
return cnt;
}
void process(vector<pair<int, pt>> &qr, vector<int> &ans){
set<int> open;
forn(i, qr.size()){
vector<int> op, cl;
int j = i - 1;
while (j + 1 < int(qr.size()) && qr[j + 1].x == qr[i].x){
++j;
if (qr[j].y.x == 1)
op.push_back(qr[j].y.y);
else
cl.push_back(qr[j].y.y);
}
if (open.size() == 1 && !op.empty()){
++ans[*open.begin()];
}
for (auto it : op){
open.insert(it);
}
for (auto it : cl){
open.erase(it);
}
i = j;
}
}
void solve(){
int n;
scanf("%d", &n);
vector<pt> a(n);
forn(i, n) scanf("%d%d", &a[i].x, &a[i].y);
vector<pair<int, pt>> qr;
forn(i, n){
qr.push_back({a[i].x, {1, i}});
qr.push_back({a[i].y, {-1, i}});
}
sort(qr.begin(), qr.end());
vector<int> ans(n, 0);
ls.clear();
int cur = get(a);
process(qr, ans);
forn(i, n) if (ls.count(a[i].x)) ++ls[a[i].x];
forn(i, n) if (ls[a[i].x] == 1) --ans[i];
printf("%d\n", *max_element(ans.begin(), ans.end()) + cur);
}
int main(){
int tc;
scanf("%d", &tc);
forn(i, tc)
solve();
}
|
1285
|
F
|
Classical?
|
Given an array $a$, consisting of $n$ integers, find:
$$\max\limits_{1 \le i < j \le n} LCM(a_i,a_j),$$
where $LCM(x, y)$ is the smallest positive integer that is divisible by both $x$ and $y$. For example, $LCM(6, 8) = 24$, $LCM(4, 12) = 12$, $LCM(2, 3) = 6$.
|
Since $LCM(x,y)=\frac{x*y}{GCD(x,y)}$, it makes sense to try and fix $GCD(x,y)$. Let's call it $g$. Now, let's only care about the multiples of $g$ in the input. Assume we divide them all by $g$. We now want the maximum product of 2 coprime numbers in this new array. Let's sort the numbers and iterate from the biggest to the smallest, keeping a stack. Assume the current number you're iterating on is $x$. While there is a number in the stack coprime to $x$, you can actually pop the top of the stack; you'll never need it again. That's because this number together with a number smaller than $x$ can never give a better product than that of a greater, or equal, number together with $x$! Now, we just need to figure out whether there's a number coprime to $x$ in the stack. This could be easily done with inclusion-exclusion. Assume the number of multiples of $d$ in the stack is $cnt_d$; the number of elements in the stack coprime to $x$ is: $\sum_{d|x} \mu(d)*cnt_d$ The complexity is $O(\sum\limits_{i=1}^{n} \sigma_{0}(i)^2)$ where $\sigma_{0}$ is the divisor count function. That's because each number enters the routine of calculating the maximum product of a coprime pair $\sigma_{0}$ times, and we iterate through its divisors in this routine.
|
[
"binary search",
"combinatorics",
"number theory"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
#define MX 100000
int arr[100005],u[MX+5],cnt[MX+5];
vector<int> d[MX+5];
bool b[MX+5];
int coprime(int x)
{
int ret=0;
for (int i:d[x])
ret+=cnt[i]*u[i];
return ret;
}
void update(int x,int a)
{
for (int i:d[x])
cnt[i]+=a;
}
int main()
{
for (int i=1;i<=MX;i++)
{
for (int j=i;j<=MX;j+=i)
d[j].push_back(i);
if (i==1)
u[i]=1;
else if ((i/d[i][1])%d[i][1]==0)
u[i]=0;
else
u[i]=-u[i/d[i][1]];
}
int n;
scanf("%d",&n);
long long ans=0;
for (int i=0;i<n;i++)
{
int a;
scanf("%d",&a);
ans=max(ans,(long long)a);
b[a]=1;
}
for (int g=1;g<=MX;g++)
{
stack<int> s;
for (int i=MX/g;i>0;i--)
{
if (!b[i*g])
continue;
int c=coprime(i);
while (c)
{
if (__gcd(i,s.top())==1)
{
ans=max(ans,1LL*i*s.top()*g);
c--;
}
update(s.top(),-1);
s.pop();
}
update(i,1);
s.push(i);
}
while (!s.empty())
{
update(s.top(),-1);
s.pop();
}
}
printf("%I64d",ans);
}
|
1286
|
A
|
Garland
|
Vadim loves decorating the Christmas tree, so he got a beautiful garland as a present. It consists of $n$ light bulbs in a single row. Each bulb has a number from $1$ to $n$ (in arbitrary order), such that all the numbers are distinct. While Vadim was solving problems, his home Carp removed some light bulbs from the garland. Now Vadim wants to put them back on.
Vadim wants to put all bulbs back on the garland. Vadim defines complexity of a garland to be the number of pairs of adjacent bulbs with numbers of different parity (remainder of the division by $2$). For example, the complexity of 1 4 2 3 5 is $2$ and the complexity of 1 3 5 7 6 4 2 is $1$.
No one likes complexity, so Vadim wants to minimize the number of such pairs. Find the way to put all bulbs back on the garland, such that the complexity is as small as possible.
|
The problem can be solved using a greedy algorithm. Notice that the only information we need is the parity of the numbers on the bulbs, so let's replace the numbers by their remainders modulo $2$. Then the complexity of the garland is the number of pairs of adjacent numbers that are different; let's call such pairs bad. Divide the garland into segments of removed bulbs. Call the number before a segment its left border and the number after it its right border; if there's no number before/after the segment, then the segment doesn't have a left/right border. Notice that when filling a segment, one should place equal numbers in a row. If the segment has two different borders, then the optimal way is to place all zeroes near the zero border and all ones near the one border. If the segment has two equal borders and we place both kinds of numbers in it, then there will be at least two bad pairs, and we achieve exactly two by placing all zeroes and then all ones. Similarly one can analyze the cases where one or both borders are absent. So: if the segment has both borders and they are different, then it always increases the complexity by $1$. If the segment has both borders and they are equal, then it increases the complexity by $0$ or $2$: $0$ in case we fill the segment with numbers of the same parity as its borders, and $2$ otherwise. If the segment doesn't have at least one of the borders, it increases the complexity by $0$ if all its numbers have the same parity as its border (if any), and by $1$ otherwise. So, in order to minimize the complexity of the garland, first of all we should fill the segments with equal borders with numbers of the same parity; obviously, we should process such segments in increasing order of length. Then we should fill the segments with only one border so that the complexity doesn't increase. After that, we can place the remaining numbers arbitrarily (but inside one segment we should still place equal numbers in a row), because for all remaining segments the number of bad pairs is fixed. Time complexity is $O(n \log n)$. Also, this problem can be solved using dynamic programming.
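The dynamic-programming alternative mentioned at the end can be sketched as follows. This is an assumed $O(n^2)$ formulation (not the author's greedy): process bulbs left to right, tracking only how many odd numbers have been used and the parity of the last placed bulb; `a[i] = 0` marks a removed bulb.

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[j][p] = minimum number of bad pairs over a placed prefix,
// where j odd numbers are used and the last bulb has parity p.
int minComplexity(vector<int> a) {
    int n = a.size();
    int totOdd = (n + 1) / 2;                 // numbers 1..n contain this many odds
    const int INF = 1e9;
    vector<array<int,2>> dp(totOdd + 1, {INF, INF});
    for (int p = 0; p < 2; p++) {             // choose the first bulb's parity
        if (a[0] != 0 && a[0] % 2 != p) continue;
        dp[p][p] = 0;
    }
    for (int i = 1; i < n; i++) {
        vector<array<int,2>> ndp(totOdd + 1, {INF, INF});
        for (int j = 0; j <= totOdd; j++)
            for (int p = 0; p < 2; p++) {
                if (dp[j][p] == INF) continue;
                for (int q = 0; q < 2; q++) { // parity of bulb i
                    if (a[i] != 0 && a[i] % 2 != q) continue;
                    int nj = j + q;
                    // never exceed the available count of odds / evens
                    if (nj > totOdd || (i + 1 - nj) > n - totOdd) continue;
                    ndp[nj][q] = min(ndp[nj][q], dp[j][p] + (p != q));
                }
            }
        dp = ndp;
    }
    return min(dp[totOdd][0], dp[totOdd][1]);
}
```

For the examples from the statement, `minComplexity({1,4,2,3,5})` gives $2$ and `minComplexity({1,3,5,7,6,4,2})` gives $1$.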
|
[
"dp",
"greedy",
"sortings"
] | 1,800
| null |
1286
|
B
|
Numbers on Tree
|
Evlampiy was gifted a rooted tree. The vertices of the tree are numbered from $1$ to $n$. Each of its vertices also has an integer $a_i$ written on it. For each vertex $i$, Evlampiy calculated $c_i$ — the number of vertices $j$ in the subtree of vertex $i$, such that $a_j < a_i$.
\begin{center}
Illustration for the second example, the first integer is $a_i$ and the integer in parentheses is $c_i$
\end{center}
After the new year, Evlampiy could not remember what his gift was! He remembers the tree and the values of $c_i$, but he completely forgot which integers $a_i$ were written on the vertices.
Help him to restore initial integers!
|
There are several approaches to this problem; we will describe one of them. Note that if $c_i$ for some vertex is not less than the number of vertices in its subtree, then there is no answer. Now we prove that otherwise we can always build an answer in which all $a_i$ are numbers from $1$ to $n$. On those numbers, let's build some structure that supports deleting elements and searching for the $k$-th element. Let's denote by $d_v$ the number of vertices in the subtree of vertex $v$. Now iterate over the vertices in depth-first search order, and set $a_v$ to the element of our structure that has exactly $c_v$ elements below it (and after that delete this element). Firstly, such an element always exists: when we examine the vertex $v$, no vertex of its subtree has been considered yet $\Rightarrow$ since $c_v < d_v$ $\Rightarrow$ our structure contains at least $c_v + 1$ elements. Secondly, the set of all values in a subtree is a prefix of our structure. If this is true, then the condition that the subtree contains exactly $c_v$ elements smaller than ours is guaranteed to be satisfied (because all elements of our structure that are smaller than ours are there, and we specifically took the element with $c_v$ elements below it). Let us prove this fact by induction on the size of the tree. For a tree of size $1$ this is obvious (we always take the first element). Now for size $k$, we have the root on which the number $c_x \leq k-1$ is written. When we take out its element, and then throw out all the vertices of the subtrees, we remove at least $k-1$ vertices: all the vertices up to $c_x$, as well as some prefix of vertices after it, thus in total we throw out some prefix of vertices. Now, we have reduced the problem to a DFS and searching for $k$-th order statistics. This can be done in a variety of ways: segment tree, Fenwick tree, sqrt decomposition, Cartesian tree, or a built-in C++ tree.
Code of the author solution with this tree.
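Since the author's code is not included here, this is a minimal sketch of the approach above, under the assumption that an $O(n^2)$ plain vector is acceptable in place of the order-statistics structure. Vertices are 0-indexed and `par[v] = -1` marks the root; an empty vector signals "no answer".

```cpp
#include <bits/stdc++.h>
using namespace std;

// Walk the tree in DFS order; give vertex v the value with exactly c[v]
// smaller unused values (a BIT / balanced BST would make this O(n log n)).
vector<int> restore(int n, const vector<int>& par, const vector<int>& c) {
    vector<vector<int>> ch(n);
    int root = -1;
    for (int v = 0; v < n; v++)
        if (par[v] < 0) root = v; else ch[par[v]].push_back(v);
    vector<int> sz(n, 1), a(n);
    function<void(int)> calc = [&](int v) {       // subtree sizes
        for (int u : ch[v]) { calc(u); sz[v] += sz[u]; }
    };
    calc(root);
    vector<int> avail(n);
    iota(avail.begin(), avail.end(), 1);          // unused values 1..n, sorted
    bool ok = true;
    function<void(int)> dfs = [&](int v) {
        if (c[v] > sz[v] - 1) { ok = false; return; }  // c[v] must be < d_v
        a[v] = avail[c[v]];                       // value with c[v] smaller ones left
        avail.erase(avail.begin() + c[v]);
        for (int u : ch[v]) dfs(u);
    };
    dfs(root);
    return ok ? a : vector<int>();
}
```

For example, on the chain $0 \to 1 \to 2$ with $c = (2, 0, 0)$ this yields $a = (3, 1, 2)$.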
|
[
"constructive algorithms",
"data structures",
"dfs and similar",
"graphs",
"greedy",
"trees"
] | 1,800
| null |
1286
|
C2
|
Madhouse (Hard version)
|
\textbf{This problem is different with easy version only by constraints on total answers length}
\textbf{It is an interactive problem}
Venya joined a tour to the madhouse, in which orderlies play with patients the following game. Orderlies pick a string $s$ of length $n$, consisting only of lowercase English letters. The player can ask two types of queries:
- ? l r – ask to list all substrings of $s[l..r]$. Substrings will be returned in random order, and in every substring, all characters will be randomly shuffled.
- ! s – guess the string picked by the orderlies. This query can be asked exactly once, after that the game will finish. If the string is guessed correctly, the player wins, otherwise he loses.
The player can ask \textbf{no more than $3$ queries} of the first type.
To make it easier for the orderlies, there is an additional limitation: the total number of returned substrings in all queries of the first type must not exceed $\left\lceil 0.777(n+1)^2 \right\rceil$ ($\lceil x \rceil$ is $x$ rounded up).
Venya asked you to write a program, which will guess the string by interacting with the orderlies' program and acting by the game's rules.
Your program should immediately terminate after guessing the string using a query of the second type. In case your program guessed the string incorrectly, or it violated the game rules, it will receive verdict Wrong answer.
Note that in every test case the string is fixed beforehand and will not change during the game, which means that the interactor is not adaptive.
|
Let's consider the solution that uses $2$ queries with the lengths $n$ and $n-1$ (it asks about too many substrings, so it will not pass all the tests, but it will help us further). Let's ask about substrings $[1..n]$ and $[1..n-1]$. For convenience, rearrange the letters in all strings in alphabetical order. Then note that all the suffixes of $S$ correspond to those strings that we get in the first case, and do not get in the second. Having found all such strings, we can easily find all suffixes of $S$ by looking at them from smaller ones to bigger. For a complete solution we should first find the first $n / 2$ characters of the string by the solution described above. Then ask about the whole string. Let $cnt_{i, x}$ be the number of times that the symbol $x$ occurs in total in all substrings of length $i$ in the last query. Note that the symbol at position $j$ is counted in $cnt_{i, x}$ exactly $min(i, j, n - j + 1)$ times. Then for all $i$ from $1$ to $n / 2$, the value $cnt_{i + 1, x}$ - $cnt_{i,x}$ is equal to the number of times that $x$ occurs on positions with indices from $i + 1$ to $n - i$. Knowing these quantities, it is possible to find out how many times the element x occurs on positions $i$ and $n - i + 1$ in sum for each $i$ from $1$ to $n / 2$. Since we already know the first half of the string, it is not difficult to restore the character at the position $n - i + 1$, and therefore the entire string. In total, we asked about the substrings $[1 .. n / 2]$, $[1..n / 2 - 1]$ and $[1..n]$, the total number of substrings received is $\approx \frac{(\frac{n + 1}{2})^2}{2}+ \frac{(\frac{n + 1}{2})^2}{2} + \frac{(n + 1)^2}{2} = 0.75 (n + 1)^2$, and that quite satisfies the limitations of the problem.
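The two-query idea from the first paragraph can be made concrete offline. This sketch (with hypothetical helper names) simulates the interactor on a known string and recovers $s$ from the shuffled substring lists of $s[1..n]$ and $s[1..n-1]$: the multiset difference is exactly the set of suffixes, one per length, and consecutive suffix lengths differ by the single letter $s_{n-k+1}$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// All substrings of t with their letters sorted, as the interactor returns them.
vector<string> shuffledSubstrings(const string& t) {
    vector<string> res;
    for (int i = 0; i < (int)t.size(); i++)
        for (int j = i; j < (int)t.size(); j++) {
            string w = t.substr(i, j - i + 1);
            sort(w.begin(), w.end());
            res.push_back(w);
        }
    return res;
}

// Recover s from the answers to "? 1 n" (q1) and "? 1 n-1" (q2).
string recoverString(int n, const vector<string>& q1, const vector<string>& q2) {
    map<string, int> cnt;                     // multiset difference q1 - q2
    for (const string& w : q1) cnt[w]++;
    for (const string& w : q2) cnt[w]--;
    vector<string> suf;                       // the suffixes, letters sorted
    for (auto& [w, c] : cnt) if (c > 0) suf.push_back(w);
    sort(suf.begin(), suf.end(),
         [](const string& a, const string& b) { return a.size() < b.size(); });
    string s(n, '?');
    array<int, 26> prev{};
    for (int k = 1; k <= n; k++) {            // suffix of length k adds s[n-k]
        array<int, 26> cur{};
        for (char ch : suf[k - 1]) cur[ch - 'a']++;
        for (int c = 0; c < 26; c++)
            if (cur[c] > prev[c]) s[n - k] = 'a' + c;
        prev = cur;
    }
    return s;
}
```

This asks for too many substrings on its own; the full solution applies it only to the first half and derives the second half from the character counts $cnt_{i,x}$ of the third query.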
|
[
"brute force",
"constructive algorithms",
"hashing",
"interactive",
"math"
] | 2,800
| null |
1286
|
D
|
LCC
|
An infinitely long Line Chillland Collider (LCC) was built in Chillland. There are $n$ pipes with coordinates $x_i$ that are connected to LCC. When the experiment starts at time 0, $i$-th proton flies from the $i$-th pipe with speed $v_i$. It flies to the right with probability $p_i$ and flies to the left with probability $(1 - p_i)$. The duration of the experiment is determined as the time of the first collision of any two protons. In case there is no collision, the duration of the experiment is considered to be zero.
Find the expected value of the duration of the experiment.
\begin{center}
Illustration for the first example
\end{center}
|
Note that the first collision will occur between two neighboring particles of the original array. Two neighboring particles have $3$ options to collide: both move to the right, both move to the left, or they move towards each other. Let's go through these options and calculate the time of the collision for each pair of neighboring particles, then sort these events by collision time. The probability that the $i$-th event occurs first is equal to the probability that the first $(i-1)$ collisions do not occur minus the probability that the first $i$ do not occur. To find these probabilities we can use a segment tree. In each of its vertices we maintain answers for the four masks describing which way the first and the last particle of the segment move. The answer for a mask is the probability that none of the first $X$ forbidden collisions occurs and the extreme particles move in accordance with the mask. Then, to add a ban for the $(i + 1)$-th collision, it is enough to make an update at a point. The final asymptotic is $O(n \cdot 2^4 \cdot \log(n))$
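For reference, a sketch of the per-pair collision times as exact fractions (`den = 0` encodes "never"). The mask convention here is an assumption: bit 0 set means the left particle moves right, bit 1 set means the right particle moves right.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Frac { long long num, den; };          // time = num / den; den == 0: never

// Collision times of neighbors (x1, v1), (x2, v2) with x1 < x2, per mask.
array<Frac, 4> collisionTimes(long long x1, long long v1,
                              long long x2, long long v2) {
    array<Frac, 4> t;
    long long d = x2 - x1;
    t[0b00] = (v2 > v1) ? Frac{d, v2 - v1} : Frac{0, 0}; // both left: right one must be faster
    t[0b01] = Frac{d, v1 + v2};                          // towards each other: always collide
    t[0b10] = Frac{0, 0};                                // moving apart: never
    t[0b11] = (v1 > v2) ? Frac{d, v1 - v2} : Frac{0, 0}; // both right: left one must be faster
    return t;
}
```

Each pair thus contributes up to three finite events to the sorted list that the segment tree processes.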
|
[
"data structures",
"math",
"matrices",
"probabilities"
] | 3,100
| null |
1286
|
E
|
Fedya the Potter Strikes Back
|
Fedya has a string $S$, initially empty, and an array $W$, also initially empty.
There are $n$ queries to process, one at a time. Query $i$ consists of a lowercase English letter $c_i$ and a nonnegative integer $w_i$. First, $c_i$ must be appended to $S$, and $w_i$ must be appended to $W$. The answer to the query is the sum of suspiciousnesses for all subsegments of $W$ $[L, \ R]$, $(1 \leq L \leq R \leq i)$.
We define the suspiciousness of a subsegment as follows: if the substring of $S$ corresponding to this subsegment (that is, a string of consecutive characters from $L$-th to $R$-th, inclusive) matches the prefix of $S$ of the same length (that is, a substring corresponding to the subsegment $[1, \ R - L + 1]$), then its suspiciousness is equal to the minimum in the array $W$ on the $[L, \ R]$ subsegment. Otherwise, in case the substring does not match the corresponding prefix, the suspiciousness is $0$.
Help Fedya answer all the queries before the orderlies come for him!
|
Let $ans_{i}$ be the answer for the moment after $i$ queries. Then $s_{i} = ans_{i} - ans_{i-1}$ is equal to the sum of suspiciousnesses of the suffixes of the string after $i$ queries. If we calculate $s_{i}$, $ans$ will be the prefix sums of this array. Let's maintain the KMP tree of the string. Each vertex of the tree corresponds to a prefix of the string. Let $S$ be the subset of suffixes which are equal to the corresponding prefixes. We can note that $S$ is exactly the path from the root to the current vertex in the tree. Let $s_{v}$ be the next character of the vertex of the KMP tree which corresponds to the prefix of length $v$ (the string is indexed from $0$). Let's find out what happens after adding a new character to the end of the string. Some suffixes from $S$ can't be extended with this character (keeping the condition about equality to the prefix), so they will be removed from the set. Also, if the new character is equal to $s_{0}$, a suffix of length $1$ will be added to the set. These are the only modifications that will be applied to $S$. We want to find all elements of $S$ which will be removed after a certain query. Let's denote the number of these elements by $r$. If we find them in $O(r)$, the amortized time will be $O(n)$. A suffix can't be extended if and only if the next character of its vertex is not equal to the new character. We need to find all these vertices on the way up from the last vertex. Let $link_{v}$ be the closest ancestor of $v$ with a different next character (we can calculate it for each vertex when it is added). Let's ascend from vertex $v$: if we are in a vertex with the same next character as in the vertex $v$, then we go to $link_{v}$; otherwise, we handle this vertex as a removed suffix and go to the parent vertex. It's clear that it works in $O(r)$. Now our task is just adding a suffix of length $1$, removing suffixes, and appending a new element to all suffixes.
We can create a segment tree for minimum on $w$ in order to get the minimum on the removed segment. Let's maintain a multiset of minimums on current suffixes. So, our queries are:
- Add element $x$
- Remove element $x$
- For each element $a$ make $a = \min(a, x)$
- Get the sum of the elements
Let's store a map from an element to the number of its occurrences. Queries 1 and 2 can be done straightforwardly. To perform the 3-rd query, we can just iterate through the elements which are bigger than $x$, remove them from the map, and add as many elements with value $x$. The amortized complexity of this solution is $O(n \log n)$. BONUS: solve it when the suspiciousness of a "suspicious" segment is the $mex$ of its elements instead of $\min$. Code (with BigInt realization)
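A sketch of that map-based structure. Each stored value is erased at most once by query 3, so all operations amortize to $O(\log n)$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Multiset supporting: add x, remove x, a := min(a, x) for all a, and sum.
struct ChminMultiset {
    map<long long, long long> cnt;   // value -> number of occurrences
    long long sum = 0;
    void add(long long x)    { cnt[x]++; sum += x; }
    void remove(long long x) {
        auto it = cnt.find(x);
        if (--it->second == 0) cnt.erase(it);
        sum -= x;
    }
    void chminAll(long long x) {     // query 3: clamp every element to x
        long long moved = 0;
        while (!cnt.empty()) {
            auto it = prev(cnt.end());      // largest stored value
            if (it->first <= x) break;
            sum -= it->first * it->second;
            moved += it->second;
            cnt.erase(it);
        }
        if (moved) { cnt[x] += moved; sum += moved * x; }
    }
};
```

For instance, after `add(5); add(3); add(7); chminAll(4);` the contents are $\{3, 4, 4\}$ with sum $11$.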
|
[
"data structures",
"strings"
] | 3,200
| null |
1286
|
F
|
Harry The Potter
|
To defeat Lord Voldemort, Harry needs to destroy all horcruxes first. The last horcrux is an array $a$ of $n$ integers, which also needs to be destroyed. The array is considered destroyed if all its elements are zeroes. To destroy the array, Harry can perform two types of operations:
- choose an index $i$ ($1 \le i \le n$), an integer $x$, and subtract $x$ from $a_i$.
- choose two indices $i$ and $j$ ($1 \le i, j \le n; i \ne j$), an integer $x$, and subtract $x$ from $a_i$ and $x + 1$ from $a_j$.
Note that $x$ does not have to be positive.
Harry is in a hurry, please help him to find the minimum number of operations required to destroy the array and exterminate Lord Voldemort.
|
Assume that we have done $m$ queries of the second type. On the $i$-th query we subtracted $x_i$ from $a_{p_i}$ and $x_i + 1$ from $a_{q_i}$. Let's construct an undirected graph on $n$ vertices with edges $(p_i, q_i)$. Assume we have a cycle $c_1, c_2, ..., c_k$ in this graph; then we can replace the queries along this cycle with single queries of the first type to each vertex. Therefore, in an optimal answer the queries of the second type form a forest. Let's call a subset good if its size is $k$ and it can be destroyed with $k - 1$ operations of the second type. So, the problem is equivalent to grouping some of the elements of $a$ into the maximal number of disjoint good subsets. Elements that do not belong to any set can be destroyed with operations of the first type. Let's find out whether a subset $S$ is good or not. Let's forget about the $+1$ in a query (i.e., we subtract $x$ and $x$). Consider a sequence of $m$ queries $p_i$, $q_i$, $x_i$ that destroys $S$ such that the edges $(p_i, q_i)$ form a connected tree. Let's select an arbitrary vertex of this tree as a root. Then, it is easy to see that for each vertex only its height modulo 2 matters. Therefore, we can consider only trees of the following structure: the elements with even height contribute a positive change to the root, the elements with odd height contribute a negative change to the root. So, our problem is equivalent to finding a set $T \subset S$ such that $sum(S \setminus T) - sum(T) = 0$ and $T \neq \emptyset, T \neq S$. Now we subtract $x$ and $x + 1$. Each of the $|S| - 1$ queries additionally shifts this signed sum by $\pm 1$, so the condition becomes $sum(S \setminus T) - sum(T) \in \{-(|S| - 1), -(|S| - 1) + 2, ..., |S| - 3, |S| - 1\}$ and $T \neq \emptyset, T \neq S$. Such a set $T$ can be found with MITM in $O((1 + \sqrt{2}) ^ n)$ time (knapsack in $2^{n/2}$ over all subsets). So, now we know all good subsets. Let $A_{mask} = 1$ if $mask$ is a good subset and $0$ otherwise. Let's denote by $A * B$ the OR convolution on independent (disjoint) subsets. 
Suppose $p$ is the maximum integer such that $\underbrace{A * A * ... * A}_\textit{p times} \neq 0$ (i.e., some value of the product array is non-zero); then $p$ is the maximum number of disjoint good subsets, and $n - p$ is the minimal number of operations required to destroy the array. OR convolution on independent subsets can be done in $O(n^2 \cdot 2^n)$ time and the maximal power can be found with $O(\log{n})$ convolutions by repeated squaring. Also, this part can be done with dynamic programming on subsets in $O(3^n)$ time, which some participants managed to squeeze in. So, the total time complexity is $O((1 + \sqrt{2})^n + \log{n} \cdot n^2 \cdot 2^n)$.
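As a concrete illustration, here is a brute-force version of the goodness check (the contest solution replaces the subset enumeration with meet-in-the-middle; the function name and loop are illustrative, not the editorial's code). Since each of the $|S| - 1$ type-2 operations changes the total sum by an odd amount, the reachable offsets of the signed sum have absolute value at most $|S| - 1$ and parity equal to $|S| - 1$; the checker uses exactly that criterion.

```python
def is_good(vals):
    """Check whether vals (a small list) can be zeroed with len(vals) - 1
    type-2 operations, via the signed-sum criterion: some split T gives
    sum(S \\ T) - sum(T) with |d| <= k - 1 and d ≡ k - 1 (mod 2)."""
    k = len(vals)
    if k < 2:
        return False
    total = sum(vals)
    for mask in range(1, (1 << k) - 1):            # T nonempty, T != S
        t = sum(v for i, v in enumerate(vals) if mask >> i & 1)
        d = total - 2 * t                           # sum(S \ T) - sum(T)
        if abs(d) <= k - 1 and (d - (k - 1)) % 2 == 0:
            return True
    return False
```

For example, $[1, 2]$ is good (one query with $x = 1$ zeroes both elements), while $[1, 1]$ and $[1, 1, 1]$ are not: in the latter case the total sum $3$ is odd but two type-2 operations change the sum by an even amount.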
|
[
"brute force",
"constructive algorithms",
"dp",
"fft",
"implementation",
"math"
] | 3,100
| null |
1287
|
A
|
Angry Students
|
It's a walking tour day in SIS.Winter, so $t$ groups of students are visiting Torzhok. Streets of Torzhok are so narrow that students have to go in a row one after another.
Initially, some students are angry. Let's describe a group of students by a string of capital letters "A" and "P":
- "A" corresponds to an angry student
- "P" corresponds to a patient student
Such string describes the row from the last to the first student.
Every minute every angry student throws a snowball at the next student. Formally, if an angry student corresponds to the character with index $i$ in the string describing a group then they will throw a snowball at the student that corresponds to the character with index $i+1$ (students are given from the last to the first student). If the target student was not angry yet, they become angry. Even if the first (the rightmost in the string) student is angry, they don't throw a snowball since there is no one in front of them.
Let's look at the first example test. The row initially looks like this: PPAP. Then, after a minute the only single angry student will throw a snowball at the student in front of them, and they also become angry: PPAA. After that, no more students will become angry.
Your task is to help SIS.Winter teachers to determine the last moment a student becomes angry for every group.
|
We will take a look at two different solutions. The first has complexity $O(\sum\limits_{i = 0}^{t - 1} {k_i} ^ {2})$. In this solution, we simulate the events described in the statement, minute by minute. Note that every minute (until we have found the answer) the number of angry students increases by at least $1$, but it is bounded by $k$, so we don't need to simulate more than $k + 1$ minutes. Let $a_i$ describe the students' states after $i$ minutes. Then $a_0[j] = 1$ if initially the $j$-th student is angry, and $a_i[j] = \max(a_{i - 1}[j], a_{i - 1}[j - 1])$. The second solution has complexity $O(\sum\limits_{i = 0}^{t - 1} {k_i})$. Note that every angry student will make angry every student between him and the next (closest to the right) angry student, and those students become angry one by one. So for every angry student, we should update the answer with the number of patient students until the nearest angry student to the right (or until the end of the row).
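The $O(k)$ solution from the last paragraph can be sketched in a few lines (function name hypothetical):

```python
def last_angry_minute(row: str) -> int:
    """row describes students from last to first ('A' = angry, 'P' = patient).
    Returns the last minute at which some student becomes angry: the longest
    run of patient students following some angry student."""
    ans = 0
    run = 0             # patient students seen since the last 'A'
    seen_angry = False  # patient students before any 'A' never get hit
    for ch in row:
        if ch == 'A':
            seen_angry = True
            run = 0
        elif seen_angry:
            run += 1
            ans = max(ans, run)
    return ans
```

On the sample row `PPAP` this returns $1$: the single angry student makes one more student angry after one minute, matching the simulation in the statement.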
|
[
"greedy",
"implementation"
] | 800
| null |
1287
|
B
|
Hyperset
|
Bees Alice and Alesya gave beekeeper Polina famous card game "Set" as a Christmas present. The deck consists of cards that vary in four features across three options for each kind of feature: number of shapes, shape, shading, and color. In this game, some combinations of three cards are said to make up a set. For every feature — color, number, shape, and shading — the three cards must display that feature as either all the same, or pairwise different. The picture below shows how sets look.
Polina came up with a new game called "Hyperset". In her game, there are $n$ cards with $k$ features, each feature has three possible values: "S", "E", or "T". The original "Set" game can be viewed as "Hyperset" with $k = 4$.
Similarly to the original game, three cards form a set, if all features are the same for all cards or are pairwise different. The goal of the game is to compute the number of ways to choose three cards that form a set.
Unfortunately, winter holidays have come to an end, and it's time for Polina to go to school. Help Polina find the number of sets among the cards lying on the table.
|
Firstly, we can notice that two cards uniquely identify the third one forming a set with them: if the $i$-th feature of the two cards is the same, then the third card has the same value of this feature; otherwise, it has the remaining third value. Thus, we can check all pairs of cards, compute the third card that would form a set with them, and check whether it exists among the given cards. Time complexity: $O(kn^{2} \log n)$.
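A sketch of this pairwise check in Python (assuming, as in the problem, that all cards are distinct; each set is discovered three times, once per pair, hence the division by $3$):

```python
def count_sets(cards):
    """cards: list of equal-length strings over 'S', 'E', 'T'.
    For each pair, derive the unique third card completing a set
    and look it up in a hash set."""
    index = set(cards)
    # for two different feature values, the remaining third value
    other = {('S', 'E'): 'T', ('E', 'S'): 'T',
             ('S', 'T'): 'E', ('T', 'S'): 'E',
             ('E', 'T'): 'S', ('T', 'E'): 'S'}
    total = 0
    n = len(cards)
    for i in range(n):
        for j in range(i + 1, n):
            third = ''.join(a if a == b else other[(a, b)]
                            for a, b in zip(cards[i], cards[j]))
            if third in index:
                total += 1
    return total // 3   # every set is counted once per each of its 3 pairs
```

Using a hash set instead of a sorted structure drops the $\log n$ factor, giving $O(kn^2)$ expected time.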
|
[
"brute force",
"data structures",
"implementation"
] | 1,500
| null |
1288
|
A
|
Deadline
|
Adilbek was assigned to a special project. For Adilbek it means that he has $n$ days to run a special program and provide its results. But there is a problem: the program needs to run for $d$ days to calculate the results.
Fortunately, Adilbek can optimize the program. If he spends $x$ ($x$ is a non-negative integer) days optimizing the program, he will make the program run in $\left\lceil \frac{d}{x + 1} \right\rceil$ days ($\left\lceil a \right\rceil$ is the ceiling function: $\left\lceil 2.4 \right\rceil = 3$, $\left\lceil 2 \right\rceil = 2$). The program cannot be run and optimized simultaneously, so the total number of days he will spend is equal to $x + \left\lceil \frac{d}{x + 1} \right\rceil$.
Will Adilbek be able to provide the generated results in no more than $n$ days?
|
At first, let's note that if $x$ is an integer then $x + \left\lceil y \right\rceil = \left\lceil x + y \right\rceil$. So, instead of looking at $x + \left\lceil \frac{d}{x + 1} \right\rceil$ we can consider $\left\lceil x + \frac{d}{x + 1} \right\rceil$. It's easier since the function $x + \frac{d}{x + 1} = (x + 1) + \frac{d}{(x + 1)} - 1$ is a well-studied function, and it can be proven that it's concave upward (convex). It means that this function has a unique minimum and, moreover, we can calculate it: $f(x) = x + \frac{d}{x + 1}$ attains its minimum at $x_0 = \sqrt{d} - 1$ and $f(x_0) = 2 \sqrt{d} - 1$. Since the ceiling function is monotonically non-decreasing, we have $\left\lceil f(x) \right\rceil \le \left\lceil f(x + 1) \right\rceil$ for all $x \ge \sqrt{d}$. So we can just iterate $x$ from $0$ to $\left\lfloor \sqrt{d} \right\rfloor$ and check the inequality $\left\lceil f(x) \right\rceil \le n$. The total complexity is equal to $O(T \sqrt{d})$. There is a simple optimization: because of the monotone ceiling we can prove that we only need to check $x = \left\lfloor \sqrt{d} - 1 \right\rfloor$ and $x = \left\lceil \sqrt{d} - 1 \right\rceil$.
|
[
"binary search",
"brute force",
"math",
"ternary search"
] | 1,100
|
#include<bits/stdc++.h>
using namespace std;
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
// int tt = clock();
#endif
int T; cin >> T;
while(T--) {
int n, d;
cin >> n >> d;
int x, MAG = (int)sqrt(d) + 10;
for(x = 0; x < MAG; x++) {
if(x + (d + x) / (x + 1) <= n)
break;
}
cout << (x < MAG ? "YES" : "NO") << endl;
}
return 0;
}
|
1288
|
B
|
Yet Another Meme Problem
|
You are given two integers $A$ and $B$, calculate the number of pairs $(a, b)$ such that $1 \le a \le A$, $1 \le b \le B$, and the equation $a \cdot b + a + b = conc(a, b)$ is true; $conc(a, b)$ is the concatenation of $a$ and $b$ (for example, $conc(12, 23) = 1223$, $conc(100, 11) = 10011$). \textbf{$a$ and $b$ should not contain leading zeroes}.
|
Let's perform some conversions: $a \cdot b + a + b = conc(a, b)$; $a \cdot b + a + b = a \cdot 10^{|b|} + b$, where $|b|$ is the length of the decimal representation of $b$; $a \cdot b + a = a \cdot 10^{|b|}$; $b + 1 = 10^{|b|}$. Thus, $b$ always looks like $99 \dots 99$, and any $a$ works. The number of such $b$ not exceeding $B$ is $|B + 1| - 1$, so the answer is $A \cdot (|B + 1| - 1)$.
|
[
"math"
] | 1,100
|
for t in range(int(input())):
a, b = map(int, input().split())
print(a * (len(str(b + 1)) - 1))
|
1288
|
C
|
Two Arrays
|
You are given two integers $n$ and $m$. Calculate the number of pairs of arrays $(a, b)$ such that:
- the length of both arrays is equal to $m$;
- each element of each array is an integer between $1$ and $n$ (inclusive);
- $a_i \le b_i$ for any index $i$ from $1$ to $m$;
- array $a$ is sorted in non-descending order;
- array $b$ is sorted in non-ascending order.
As the result can be very large, you should print it modulo $10^9+7$.
|
Let's consider the following sequence: $a_1, a_2, \dots, a_m, b_m, b_{m-1}, \dots , b_1$. It's a sequence of length $2m$ sorted in non-descending order, where each element is an integer between $1$ and $n$; conversely, every such sequence yields a valid pair $(a, b)$. We can find the number of such sequences by simple combinatorics - it's the number of combinations with repetition. So the answer is ${n+2m-1 \choose 2m} = \frac{(n + 2m - 1)!}{(2m)! (n-1)!}$.
|
[
"combinatorics",
"dp"
] | 1,600
|
from math import factorial as fact
mod = 10**9 + 7
def C(n, k):
return fact(n) // (fact(k) * fact(n - k))
n, m = map(int, input().split())
print(C(n + 2*m - 1, 2*m) % mod)
|
1288
|
D
|
Minimax Problem
|
You are given $n$ arrays $a_1$, $a_2$, ..., $a_n$; each array consists of exactly $m$ integers. We denote the $y$-th element of the $x$-th array as $a_{x, y}$.
You have to choose two arrays $a_i$ and $a_j$ ($1 \le i, j \le n$, it is possible that $i = j$). After that, you will obtain a new array $b$ consisting of $m$ integers, such that for every $k \in [1, m]$ $b_k = \max(a_{i, k}, a_{j, k})$.
Your goal is to choose $i$ and $j$ so that the value of $\min \limits_{k = 1}^{m} b_k$ is maximum possible.
|
We will use binary search to solve the problem. Suppose we want to know if the answer is not less than $x$. Each array can be represented by an $m$-bit mask, where the $i$-th bit is $1$ if the $i$-th element of the array is not less than $x$, or $0$ if the $i$-th element is less than $x$. If we want to verify that the answer is not less than $x$, we have to choose two arrays such that the bitwise OR of their masks is $2^m - 1$. Checking all pairs of arrays is too slow. Instead, we can treat arrays represented by the same mask as equal - so we will have no more than $2^m$ distinct arrays, and we can iterate over $4^m$ pairs of masks. Overall, the solution works in $O(\log A \cdot (4^m + nm))$.
|
[
"binary search",
"bitmasks",
"dp"
] | 2,000
|
#include<bits/stdc++.h>
using namespace std;
int n, m;
vector<vector<int> > a;
int a1, a2;
bool can(int mid)
{
vector<int> msk(1 << m, -1);
for(int i = 0; i < n; i++)
{
int cur = 0;
for(int j = 0; j < m; j++)
if(a[i][j] >= mid)
cur ^= (1 << j);
msk[cur] = i;
}
if(msk[(1 << m) - 1] != -1)
{
a1 = a2 = msk[(1 << m) - 1];
return true;
}
for(int i = 0; i < (1 << m); i++)
for(int j = 0; j < (1 << m); j++)
if(msk[i] != -1 && msk[j] != -1 && (i | j) == (1 << m) - 1)
{
a1 = msk[i];
a2 = msk[j];
return true;
}
return false;
}
int main()
{
scanf("%d %d", &n, &m);
a.resize(n, vector<int>(m));
for(int i = 0; i < n; i++)
for(int j = 0; j < m; j++)
scanf("%d", &a[i][j]);
int lf = 0;
int rg = int(1e9) + 43;
while(rg - lf > 1)
{
int m = (lf + rg) / 2;
if(can(m))
lf = m;
else
rg = m;
}
assert(can(lf));
printf("%d %d\n", a1 + 1, a2 + 1);
}
|
1288
|
E
|
Messenger Simulator
|
Polycarp is a frequent user of the very popular messenger. He's chatting with his friends all the time. He has $n$ friends, numbered from $1$ to $n$.
Recall that a permutation of size $n$ is an array of size $n$ such that each integer from $1$ to $n$ occurs exactly once in this array.
So his recent chat list can be represented with a permutation $p$ of size $n$. $p_1$ is the most recent friend Polycarp talked to, $p_2$ is the second most recent and so on.
Initially, Polycarp's recent chat list $p$ looks like $1, 2, \dots, n$ (in other words, it is an identity permutation).
After that he receives $m$ messages, the $j$-th message comes from the friend $a_j$. And that causes friend $a_j$ to move to the first position in a permutation, shifting everyone between the first position and the current position of $a_j$ by $1$. Note that if the friend $a_j$ is in the first position already then nothing happens.
For example, let the recent chat list be $p = [4, 1, 5, 3, 2]$:
- if he gets messaged by friend $3$, then $p$ becomes $[3, 4, 1, 5, 2]$;
- if he gets messaged by friend $4$, then $p$ doesn't change $[4, 1, 5, 3, 2]$;
- if he gets messaged by friend $2$, then $p$ becomes $[2, 4, 1, 5, 3]$.
For each friend consider all position he has been at in the beginning and after receiving each message. Polycarp wants to know what were the minimum and the maximum positions.
|
So I have two slightly different approaches to the problem. There is a straightforward (no brain) one and a bit smarter one. The minimum place is the same in both solutions: for the $i$-th friend it's just $i$ if he never moves and $1$ otherwise. Obtaining the maximum place is trickier. For the first approach, take a look at what happens with some friend $i$ after he gets moved to the first position. Or, what's more useful, what happens after he gets moved to the first position and before he gets moved again (or the queries end). Notice how every other friend is to the right of him initially. Thus, if anyone else sends a message, then the position of the friend $i$ increases by one; however, if that friend sends another message afterwards, nothing changes. That should remind you of a well-known problem already: you are just required to count the number of distinct values on some segments. The constraints allow you to do whatever you want: segtree with vectors in nodes, Mo's algorithm, persistent segtree (I hope ML is not too tight for that). Unfortunately, for each friend we have missed the part before his first move. In that case for each $i$ you need to count the number of distinct values greater than $i$, as only friends with a greater index will matter. Luckily, you can do it in a single BIT. Let its $j$-th value be zero if the friend $j$ hasn't sent messages yet and one otherwise. Let's process messages from left to right. If the friend sends a message for the first time, then update the BIT with $1$ in his index and update his answer with the suffix sum of values greater than his index. Finally, there are also friends who haven't sent messages at all. As we have built the BIT already, the only thing left is to iterate over these friends and update the answers for them with a suffix sum. Overall complexity: $O((n+m) \log^2 m)$ / $O((n+m) \sqrt m)$ / $O((n+m) \log m)$. The attached solutions are $O((n+m) \sqrt m)$ and $O((n+m) \log^2 m)$. 
The second solution requires a small observation to be made. Notice that for each friend you only need to check his position right before each of his moves and at the end of the messages. That works because the position can decrease only by his own move, so between his moves it either increases or stays the same. So let's learn to simulate the process quickly. The process we are given requires us to move someone to the first position and then shift some friends. Let's not shift! And let's also reverse the list; it's more convenient to append instead of prepending. So initially the list is $n, n - 1, \dots, 1$ and a message moves a friend to the end of the list. Allocate $n + m$ positions in a BIT, for example. Initially the first $n$ positions are taken, the rest $m$ are free (mark them with ones and zeroes, respectively). For each friend his position in this BIT is known (initially $pos_i = n - i + 1$, because we reversed the list). On the $j$-th message, count the number of taken positions to the right of $pos_{a[j]}$, set $0$ in $pos_{a[j]}$, update $pos_{a[j]} := j + n$ and set $1$ in the new $pos_{a[j]}$. And don't forget to update each friend's maximum after all the messages are sent, with the number of taken positions to the right of his final position as well. Overall complexity $O((n+m) \log (n+m))$.
|
[
"data structures"
] | 2,000
|
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < int(n); i++)
using namespace std;
const int N = 600 * 1000 + 13;
int f[N];
void upd(int x, int val){
for (int i = x; i >= 0; i = (i & (i + 1)) - 1)
f[i] += val;
}
int get(int x){
int res = 0;
for (int i = x; i < N; i |= i + 1)
res += f[i];
return res;
}
int main() {
int n, m;
scanf("%d%d", &n, &m);
vector<int> mn(n);
iota(mn.begin(), mn.end(), 0);
vector<int> mx = mn;
vector<int> a(m);
forn(i, m){
scanf("%d", &a[i]);
--a[i];
mn[a[i]] = 0;
}
vector<int> pos(n);
forn(i, n) pos[i] = n - i - 1;
forn(i, n) upd(i, 1);
forn(i, m){
mx[a[i]] = max(mx[a[i]], get(pos[a[i]] + 1));
upd(pos[a[i]], -1);
pos[a[i]] = i + n;
upd(pos[a[i]], 1);
}
forn(i, n){
mx[i] = max(mx[i], get(pos[i] + 1));
}
forn(i, n) printf("%d %d\n", mn[i] + 1, mx[i] + 1);
return 0;
}
|
1288
|
F
|
Red-Blue Graph
|
You are given a bipartite graph: the first part of this graph contains $n_1$ vertices, the second part contains $n_2$ vertices, and there are $m$ edges. \textbf{The graph can contain multiple edges}.
Initially, each edge is colorless. For each edge, you may either leave it uncolored (it is free), paint it red (it costs $r$ coins) or paint it blue (it costs $b$ coins). No edge can be painted red and blue simultaneously.
There are three types of vertices in this graph — colorless, red and blue. Colored vertices impose additional constraints on edges' colours:
- for each red vertex, the number of red edges incident to it should be \textbf{strictly greater} than the number of blue edges incident to it;
- for each blue vertex, the number of blue edges incident to it should be \textbf{strictly greater} than the number of red edges incident to it.
Colorless vertices impose no additional constraints.
Your goal is to paint some (possibly none) edges so that all constraints are met, and among all ways to do so, you should choose the one with minimum total cost.
|
A lot of things in this problem may tell us that we should try thinking about a flow solution. Okay, let's try to model the problem as a flow network. First of all, our network will consist of the vertices and edges of the original graph. We somehow have to denote "red", "blue" and "colorless" edges; we will do it as follows: each edge of the original graph corresponds to a bidirectional edge with capacity $1$ in the network; if the flow goes from the left part to the right part along the edge, it is red; if the flow goes from right to left, it is a blue edge; and if there is no flow along the edge, it is colorless. Okay, we need to impose some constraints on the vertices. Consider some vertex $v$ from the left part. Each red edge incident to it transfers one unit of flow from it to some other vertex, and each blue edge incident to it does the opposite. So, the difference between the number of blue and red edges incident to $v$ is the amount of excess flow that has to be transferred somewhere else. If $v$ is colorless, there are no constraints on the colors of edges, so this amount of excess flow does not matter - to model it, we can add a directed edge from the source to $v$ with infinite capacity, and a directed edge from $v$ to the sink with infinite capacity. What if $v$ is red? At least one unit of flow should be transferred to it; so we add a directed edge from the source to $v$ with infinite capacity such that there should be at least one unit of flow along it. And if $v$ is blue, we need to transfer at least one unit of excess flow from it - so we add a directed edge from $v$ to the sink with infinite capacity such that there is at least one unit of flow along it. The colors of the vertices in the right part can be modeled symmetrically. How to deal with edges such that there should be some flow along them? You may use the classic "flows with demands" approach from here: https://cp-algorithms.com/graph/flow_with_demands.html. 
Or you can model it with the help of the costs: if the flow along the edge should be between $l$ and $r$, we can add two edges: one with capacity $l$ and cost $k$ (where $k$ is a negative number with sufficiently large absolute value, for example, $-10^9$), and another with capacity $r - l$ and cost $0$. Okay, now we know how to find at least one painting. How about finding the cheapest painting that meets all the constraints? One of the simplest ways to do it is to impose costs on the edges of the original graph: we can treat each edge of the original graph as a pair of directed edges, one going from left to right with capacity $1$ and cost $r$, and another going from right to left with capacity $1$ and cost $b$.
|
[
"constructive algorithms",
"flows"
] | 2,900
|
#include<bits/stdc++.h>
using namespace std;
const int N = 443;
int n1, n2, m, r, b;
string s1, s2;
int u[N];
int v[N];
struct edge
{
int y, c, f, cost;
edge() {};
edge(int y, int c, int f, int cost) : y(y), c(c), f(f), cost(cost) {};
};
int bal[N][N];
int s, t, oldS, oldT, V;
vector<int> g[N];
vector<edge> e;
void add(int x, int y, int c, int cost)
{
g[x].push_back(e.size());
e.push_back(edge(y, c, 0, cost));
g[y].push_back(e.size());
e.push_back(edge(x, 0, 0, -cost));
}
int rem(int num)
{
return e[num].c - e[num].f;
}
void add_LR(int x, int y, int l, int r, int cost)
{
int c = r - l;
if(l > 0)
{
add(s, y, l, cost);
add(x, t, l, cost);
}
if(c > 0)
{
add(x, y, c, cost);
}
}
int p[N];
int d[N];
int pe[N];
int inq[N];
bool enlarge()
{
for(int i = 0; i < V; i++)
{
d[i] = int(1e9);
p[i] = -1;
pe[i] = -1;
inq[i] = 0;
}
d[s] = 0;
queue<int> q;
q.push(s);
inq[s] = 1;
while(!q.empty())
{
int k = q.front();
q.pop();
inq[k] = 0;
for(auto z : g[k])
{
if(!rem(z)) continue;
if(d[e[z].y] > d[k] + e[z].cost)
{
p[e[z].y] = k;
pe[e[z].y] = z;
d[e[z].y] = d[k] + e[z].cost;
if(!inq[e[z].y])
{
q.push(e[z].y);
inq[e[z].y] = 1;
}
}
}
}
if(p[t] == -1)
return false;
int cur = t;
while(cur != s)
{
e[pe[cur]].f++;
e[pe[cur] ^ 1].f--;
cur = p[cur];
}
return true;
}
void add_edge(int x, int y)
{
add(x, y + n1, 1, r);
add(y + n1, x, 1, b);
}
void impose_left(int x)
{
if(s1[x] == 'R')
{
add_LR(oldS, x, 1, m, 0);
}
else if(s1[x] == 'B')
{
add_LR(x, oldT, 1, m, 0);
}
else
{
add(oldS, x, m, 0);
add(x, oldT, m, 0);
}
}
void impose_right(int x)
{
if(s2[x] == 'R')
{
add_LR(x + n1, oldT, 1, m, 0);
}
else if(s2[x] == 'B')
{
add_LR(oldS, x + n1, 1, m, 0);
}
else
{
add(oldS, x + n1, m, 0);
add(x + n1, oldT, m, 0);
}
}
void construct_bal()
{
for(int i = 0; i < n1; i++)
{
for(auto z : g[i])
{
if(e[z].y >= n1 && e[z].y < n1 + n2)
bal[i][e[z].y - n1] += e[z].f;
}
}
}
void find_ans()
{
int res = 0;
string w = "";
for(auto x : g[s])
if(rem(x))
{
cout << -1 << endl;
return;
}
for(int i = 0; i < m; i++)
{
if(bal[u[i]][v[i]] > 0)
{
bal[u[i]][v[i]]--;
res += r;
w += "R";
}
else if(bal[u[i]][v[i]] < 0)
{
bal[u[i]][v[i]]++;
res += b;
w += "B";
}
else w += "U";
}
cout << res << endl << w << endl;
}
int main()
{
cin >> n1 >> n2 >> m >> r >> b;
cin >> s1;
cin >> s2;
for(int i = 0; i < m; i++)
{
cin >> u[i] >> v[i];
u[i]--;
v[i]--;
}
oldS = n1 + n2;
oldT = oldS + 1;
s = oldT + 1;
t = s + 1;
V = t + 1;
for(int i = 0; i < n1; i++)
impose_left(i);
for(int i = 0; i < n2; i++)
impose_right(i);
for(int i = 0; i < m; i++)
add_edge(u[i], v[i]);
add(oldT, oldS, 100000, 0);
while(enlarge());
construct_bal();
find_ans();
}
|
1290
|
A
|
Mind Control
|
You and your $n - 1$ friends have found an array of integers $a_1, a_2, \dots, a_n$. You have decided to share it in the following way: All $n$ of you stand in a line in a particular order. Each minute, the person at the front of the line chooses either the first or the last element of the array, removes it, and keeps it for himself. He then gets out of line, and the next person in line continues the process.
You are standing in the $m$-th position in the line. \textbf{Before the process starts}, you may choose up to $k$ different people in the line, and persuade them to always take either the first or the last element in the array on their turn (for each person his own choice, not necessarily equal for all people), no matter what the elements themselves are. \textbf{Once the process starts, you cannot persuade any more people, and you cannot change the choices for the people you already persuaded}.
Suppose that you're doing your choices optimally. What is the greatest integer $x$ such that, no matter what are the choices of the friends you didn't choose to control, the element you will take from the array will be \underline{greater than or equal to} $x$?
Please note that the friends you don't control may do their choice \textbf{\underline{arbitrarily}}, and they will not necessarily take the biggest element available.
|
People behind you are useless, ignore them. Let's assume that $k \le m-1$. It's always optimal to control as many people as possible. Your strategy can be summarized by a single integer $x$, the number of people you force to take the first element (among the $k$ you control). Similarly, the strategy of your opponents can be summarized by a single integer $y$, the number of non-controlled friends who choose to take the first element. When it is your turn, the array will contain exactly $n-m+1$ elements. You will take the bigger of the first element $a_{1+x+y}$ and the last element $a_{1+x+y+(n-m)}$. These observations lead to an obvious $O(n^2)$ solution (iterate over all strategies $x$ and, for each strategy, iterate over all cases $y$), which was sufficient to pass the tests. However, the second iteration can be easily optimized with a data structure. Let's note $b_i = \max(a_{1+i}, a_{1+i+(n-m)})$. The final answer is $\displaystyle \max_{x \in [0 ; k]} \bigg[ \min_{y \in [0 ; m-1-k]} b_{x+y} \bigg]$. Note that it can be rewritten as $\displaystyle \max_{x \in [0 ; k]} \bigg[ \min_{y' \in [x ; x+m-1-k]} b_{y'} \bigg]$. It can be computed in $O(n \log n)$ using a segment tree or in $O(n)$ using a monotonic deque.
|
[
"brute force",
"data structures",
"implementation"
] | 1,600
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, m, k;
cin >> n >> m >> k;
k = min(k, m - 1);
vector<int> a(n);
for(auto &x : a)
cin >> x;
vector<int> b;
for(int i = 0; i < m; i++)
b.push_back(max(a[i], a[i + n - m]));
int sz = m - k;
int ans = 0;
deque<int> q;
for(int i = 0, j = 0; i + sz - 1 < m; i++) {
while(q.size() && q.front() < i)
q.pop_front();
while(j < i + sz) {
while(q.size() && b[q.back()] >= b[j])
q.pop_back();
q.push_back(j++);
}
ans = max(ans, b[q.front()]);
}
cout << ans << '\n';
}
int main() {
int t;
cin >> t;
while(t--)
solve();
}
|
1290
|
B
|
Irreducible Anagrams
|
Let's call two strings $s$ and $t$ anagrams of each other if it is possible to rearrange symbols in the string $s$ to get a string, equal to $t$.
Let's consider two strings $s$ and $t$ \textbf{which are anagrams of each other}. We say that $t$ is a reducible anagram of $s$ if there exists an integer $k \ge 2$ and $2k$ non-empty strings $s_1, t_1, s_2, t_2, \dots, s_k, t_k$ that satisfy the following conditions:
- If we write the strings $s_1, s_2, \dots, s_k$ in order, the resulting string will be equal to $s$;
- If we write the strings $t_1, t_2, \dots, t_k$ in order, the resulting string will be equal to $t$;
- For all integers $i$ between $1$ and $k$ inclusive, $s_i$ and $t_i$ are anagrams of each other.
If such strings don't exist, then $t$ is said to be an irreducible anagram of $s$. \textbf{Note that these notions are only defined when $s$ and $t$ are anagrams of each other}.
For example, consider the string $s = $ "gamegame". Then the string $t = $ "megamage" is a reducible anagram of $s$, we may choose for example $s_1 = $ "game", $s_2 = $ "gam", $s_3 = $ "e" and $t_1 = $ "mega", $t_2 = $ "mag", $t_3 = $ "e":
On the other hand, we can prove that $t = $ "memegaga" is an irreducible anagram of $s$.
You will be given a string $s$ and $q$ queries, represented by two integers $1 \le l \le r \le |s|$ (where $|s|$ is equal to the length of the string $s$). For each query, you should find if the substring of $s$ formed by characters from the $l$-th to the $r$-th has \underline{at least one} irreducible anagram.
|
We claim that a string has at least one irreducible anagram if and only if one of the following conditions holds: its length is equal to $1$; its first and last characters are different; it contains at least three different characters. Once this characterization is proven, the problem is easy to solve: for any given query, the first and second conditions are trivial to check, while the third can be checked efficiently if we maintain the number of appearances of each character in every prefix of the string. This allows us to answer each query in $O(k)$, where $k = 26$ is the size of the alphabet. Now let's prove the characterization. Consider any string $s$ with $n = |s| \ge 2$. First note that for any two anagrams $a$ and $b$, it is enough to check that no two proper prefixes of equal length are anagrams for the pair to be irreducible: if $a$ and $b$ are reducible, then $a_1$ and $b_1$ are two proper prefixes that are anagrams. We consider three cases; all indices are $1$-based. If $s[1] \neq s[n]$: write all occurrences of $s[n]$ in $s$, and then write the remaining characters of $s$ in any order. Every proper prefix of the resulting string has more occurrences of $s[n]$ than the corresponding prefix of $s$, so no two proper prefixes are anagrams. If $s[1] = s[n]$ and $s$ has at least three different characters: consider the last distinct character that appears in $s$. Write all of its occurrences, followed by all occurrences of $s[n]$, and then the remaining characters of $s$ in any order. One can check that every proper prefix of the resulting string contains either more occurrences of this last distinct character, or more occurrences of $s[n]$, than the corresponding prefix of $s$, so no two proper prefixes are anagrams. If $s[1] = s[n]$ and $s$ has at most two different characters: assume that $s$ only has characters $a$ and $b$, with $s[1] = a$, and suppose for contradiction that $s$ has an irreducible anagram $t$. Then $t[1] = b$, as otherwise $s[1, 1]$ and $t[1, 1]$ are anagrams. Consider the leftmost position $x$ such that the prefix $s[1, x]$ has at least as many appearances of $b$ as $t[1, x]$. We have $x \le n - 1$, because $s[1, n - 1]$ contains every appearance of $b$ in $s$, and $x > 1$, because $t[1] = b$ while $s[1] = a$. By the choice of $x$, the prefix $t[1, x - 1]$ has strictly more appearances of $b$ than $s[1, x - 1]$; since these counts change by at most one per position, $t[1, x - 1]$ has exactly one more appearance of $b$, and therefore $s[1, x]$ and $t[1, x]$ have the same number of appearances of $b$. Since they also have the same length, the proper prefixes $s[1, x]$ and $t[1, x]$ are anagrams - a contradiction.
|
[
"binary search",
"constructive algorithms",
"data structures",
"strings",
"two pointers"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
const int N = 200005;
char s[N];
int n, q, l, r, sum[N][26]; // sum[i][c] = occurrences of 'a' + c in s[1..i]
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> (s + 1); // 1-based input
    n = strlen(s + 1);
    for (int i = 1; i <= n; i++) {
        for (int j = 0; j < 26; j++) {
            sum[i][j] = sum[i - 1][j];
        }
        sum[i][s[i] - 'a']++;
    }
    cin >> q;
    while (q--) {
        cin >> l >> r;
        // Count distinct characters in s[l..r] from the prefix sums.
        int cnt = 0;
        for (int i = 0; i < 26; i++) {
            cnt += (sum[r][i] - sum[l - 1][i] > 0);
        }
        // Irreducible anagram exists iff length 1, or at least three
        // distinct characters, or first and last characters differ.
        if (l == r || cnt >= 3 || s[l] != s[r]) {
            cout << "Yes\n";
        } else {
            cout << "No\n";
        }
    }
}
|
1290
|
C
|
Prefix Enlightenment
|
There are $n$ lamps on a line, numbered from $1$ to $n$. Each one has an initial state off ($0$) or on ($1$).
You're given $k$ subsets $A_1, \ldots, A_k$ of $\{1, 2, \dots, n\}$, such that the intersection of any three subsets is empty. In other words, for all $1 \le i_1 < i_2 < i_3 \le k$, $A_{i_1} \cap A_{i_2} \cap A_{i_3} = \varnothing$.
In one operation, you can choose one of these $k$ subsets and switch the state of all lamps in it. It is guaranteed that, with the given subsets, it's possible to make all lamps be simultaneously on using this type of operation.
Let $m_i$ be the minimum number of operations you have to do in order to make the first $i$ lamps be simultaneously on. Note that there is no condition on the state of the other lamps (between $i+1$ and $n$); they can be either off or on.
You have to compute $m_i$ for all $1 \le i \le n$.
|
The condition "the intersection of any three subsets is empty" can be easily rephrased in a more useful way: each element appears in at most two subsets. Let's suppose for the moment that each elements appears in exactly two subsets. We can think of each element as an edge between the subsets, it's a classical point of view. If we see subsets as nodes, we can model the subsets choice by coloring nodes into two colors, "taken" or "non-taken". If an element is initially off, we need to take exactly one of the subsets containing it. The corresponding edge should have endpoints with different color. If an element is initially on, we must take none or both subsets : endpoints with same color. We recognize a sort of bipartition, obviously there are at most two correct colorings for each connected component: fixing the color of a node fix the color of all connected nodes. Hence, the final answer is the sum for each component, of the size of the smaller side of the partition. Since the answer exists for $i = n$, there exists a such partition of the graph (into "red" and "blue" nodes). We can find it with usual dfs, and keep it for lower values of $i$. In order to compute all $m_i$ efficiently, we start from a graph with no edges ($i = 0$), and we add edges with DSU, maintaining in each connected component the count of nodes in red side, and the count of nodes in blue side. Now, how to deal with elements that appears in exactly one subset? They don't add any edge in the graph, but they force to take one of the sides of the connected component. To simulate this, we can use a special forced flag, or just fix the count of the other side to $+\infty$ (but be careful about overflow if you do that). Final complexity : $O((n+k) \cdot \alpha(k))$.
|
[
"dfs and similar",
"dsu",
"graphs"
] | 2,400
|
#include <bits/stdc++.h>
#define fi first
#define se second
using namespace std;
const int N = 1E6 + 5, K = 1E6 + 5, INF = 1E9 + 7;
int n, k, c, v, ans = 0, dsu[K]; // dsu[u] < 0: u is a root of size -dsu[u]
char s[N];
vector<int> adj[N]; // adj[i]: the (at most two) subsets containing lamp i
struct node {
    int l, r, xo; // l, r: sizes of the two sides; xo: parity w.r.t. parent
    node(int _l = 0, int _r = 0, int _xo = 0) : l(_l), r(_r), xo(_xo) {}
    int get() { // cost of this component: take the smaller side
        return min(l, r);
    }
    inline void operator+=(node oth) { // merge side counts, capped at INF
        l = min(INF, l + oth.l);
        r = min(INF, r + oth.r);
    }
} val[K];
// Returns {root of u, parity of u relative to it}, with path compression.
pair<int, int> trace(int u) {
    if (dsu[u] < 0) {
        return {u, 0};
    } else {
        pair<int, int> tmp = trace(dsu[u]);
        dsu[u] = tmp.fi;
        val[u].xo ^= tmp.se;
        return {dsu[u], val[u].xo};
    }
}
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> n >> k >> (s + 1);
    for (int i = 1; i <= k; i++) {
        dsu[i] = -1;
        val[i] = node(1, 0, 0);
        cin >> c;
        while (c--) {
            cin >> v;
            adj[v].push_back(i);
        }
    }
    for (int i = 1; i <= n; i++) {
        int typ = (s[i] - '0') ^ 1; // 1 if lamp i is initially off
        if (ans != -1) {
            if (adj[i].size() == 1) {
                // One subset: force a side of its component by making
                // the forbidden side infinitely expensive.
                pair<int, int> u = trace(adj[i][0]);
                ans -= val[u.fi].get();
                val[u.fi] += node((u.se == typ) * INF, (u.se != typ) * INF);
                ans += val[u.fi].get();
            } else if (adj[i].size() == 2) {
                // Two subsets: add an edge requiring equal (lamp on) or
                // different (lamp off) colors, then union by size.
                pair<int, int> u = trace(adj[i][0]);
                pair<int, int> v = trace(adj[i][1]);
                if (u.fi != v.fi) {
                    ans -= val[u.fi].get() + val[v.fi].get();
                    if (dsu[u.fi] > dsu[v.fi]) {
                        swap(u, v);
                    }
                    if (u.se ^ v.se ^ typ) { // align colors before merging
                        swap(val[v.fi].l, val[v.fi].r);
                        val[v.fi].xo = 1;
                    }
                    dsu[u.fi] += dsu[v.fi];
                    dsu[v.fi] = u.fi;
                    val[u.fi] += val[v.fi];
                    ans += val[u.fi].get();
                }
            }
        }
        cout << ans << '\n';
    }
}
|