Dataset columns: contest_id (string, length 1–4), index (string, 43 classes), title (string, length 2–63), statement (string, length 51–4.24k), tutorial (string, length 19–20.4k), tags (list, length 0–11), rating (int64, 800–3.5k), code (string, length 46–29.6k).
1732
A
Bestie
You are given an array $a$ consisting of $n$ integers $a_1, a_2, \ldots, a_n$. Friends asked you to make the greatest common divisor (GCD) of all numbers in the array equal to $1$. In one operation, you can do the following: - Select an arbitrary index in the array $1 \leq i \leq n$; - Make $a_i = \gcd(a_i, i)$, where $\gcd(x, y)$ denotes the GCD of integers $x$ and $y$. The cost of such an operation is $n - i + 1$. You need to find the minimum total cost of operations you need to perform so that the GCD of all the array numbers becomes equal to $1$.
Let's make an important observation: $\gcd(n - 1, n) = 1$ for any value of $n$. Moreover, $i = n$ and $i = n - 1$ are the two cheapest operations, with costs $1$ and $2$ respectively. From this we can conclude that the answer is $\leq 3$. Note that after applying the operation at index $i$, the overall $\gcd$ becomes $\gcd(g, i)$, where $g$ is the $\gcd$ of all numbers in the array. Then we have the following cases: if $g = 1$, no operations are needed and the answer is $0$. Otherwise, try the cheapest operation, $i = n$: if $\gcd(g, n) = 1$, the answer is $1$. Otherwise, try the next cheapest operation, i.e. $i = n - 1$: if $\gcd(g, n - 1) = 1$, the answer is $2$. Otherwise, the answer is $3$, since $\gcd(g, n - 1, n) = 1$.
[ "brute force", "combinatorics", "constructive algorithms", "implementation", "math", "number theory" ]
1,000
null
1732
B
Ugu
A binary string is a string consisting only of the characters 0 and 1. You are given a binary string $s_1 s_2 \ldots s_n$. It is necessary to make this string non-decreasing in the least number of operations. In other words, each character should be not less than the previous. In one operation, you can do the following: - Select an arbitrary index $1 \leq i \leq n$ in the string; - For all $j \geq i$, change the value in the $j$-th position to the opposite, that is, if $s_j = 1$, then make $s_j = 0$, and vice versa. What is the minimum number of operations needed to make the string non-decreasing?
Let's mentally imagine the following array of length $n - 1$: $a_i = 0$ if $s_i = s_{i+1}$, and $1$ otherwise. Note that if we apply the operation to index $i$, then none of the values of the array $a$ change except $a_{i - 1}$. Let's look at this in more detail: for $j \geq i$, both the $j$-th and $(j+1)$-th elements invert their value, so $a_j$ does not change. For $j < i - 1$, neither the $j$-th nor the $(j+1)$-th element changes its value, so $a_j$ does not change. For $j = i - 1$, the $j$-th element does not change its value, but the $(j+1)$-th element does, so $a_j$ changes its value. If we look at the array $a$ for a sorted binary string, we can see that it contains at most one $1$ (the string either consists only of zeros or only of ones, or looks like $0000\ldots01\ldots11111$). Let $t$ be the number of ones in the original array $a$. Since each operation changes at most one value of $a$, the answer is $\geq \max(t - 1, 0)$. In fact, if the string starts with $0$, the answer is $\max(t - 1, 0)$, otherwise it is $t$. Let's prove that if the string starts with $0$, then we can achieve $\max(t - 1, 0)$ (the case starting with $1$ is similar). Here is a constructive proof on a small example $s = 0001110010$: choose $i = 4$, then $s = 0000001101$; choose $i = 7$, then $s = 0000000010$; choose $i = 9$, then $s = 0000000001$.
[ "brute force", "dp", "greedy", "implementation" ]
900
null
1732
C1
Sheikh (Easy version)
\textbf{This is the easy version of the problem. The only difference is that in this version $q = 1$.} You are given an array of integers $a_1, a_2, \ldots, a_n$. The cost of a subsegment of the array $[l, r]$, $1 \leq l \leq r \leq n$, is the value $f(l, r) = \operatorname{sum}(l, r) - \operatorname{xor}(l, r)$, where $\operatorname{sum}(l, r) = a_l + a_{l+1} + \ldots + a_r$, and $\operatorname{xor}(l, r) = a_l \oplus a_{l+1} \oplus \ldots \oplus a_r$ ($\oplus$ stands for bitwise XOR). You will have $q = 1$ query. Each query is given by a pair of numbers $L_i$, $R_i$, where $1 \leq L_i \leq R_i \leq n$. You need to find the subsegment $[l, r]$, $L_i \leq l \leq r \leq R_i$, with maximum value $f(l, r)$. If there are several answers, then among them you need to find a subsegment with the minimum length, that is, the minimum value of $r - l + 1$.
Note that $f(l, r) \leq f(l, r + 1)$. To prove this fact, let's see how the sum and the xor change when an element $x$ is appended: the sum increases by $x$, while the xor can increase by at most $x$. Symmetrically, $f(l, r) \leq f(l - 1, r)$, so the maximum value is attained on the whole segment $[L, R]$, and for a fixed right boundary $f$ is non-increasing as the left boundary moves right. The problem can therefore be solved with two pointers or with binary search. In the second approach, iterate over the right boundary of the answer and binary search for the optimal left boundary. You need $O(1)$ to find the sum and the xor on a segment; for this, use prefix sums and prefix xors.
[ "binary search", "bitmasks", "greedy", "two pointers" ]
1,800
null
1732
C2
Sheikh (Hard Version)
\textbf{This is the hard version of the problem. The only difference is that in this version $q = n$.} You are given an array of integers $a_1, a_2, \ldots, a_n$. The cost of a subsegment of the array $[l, r]$, $1 \leq l \leq r \leq n$, is the value $f(l, r) = \operatorname{sum}(l, r) - \operatorname{xor}(l, r)$, where $\operatorname{sum}(l, r) = a_l + a_{l+1} + \ldots + a_r$, and $\operatorname{xor}(l, r) = a_l \oplus a_{l+1} \oplus \ldots \oplus a_r$ ($\oplus$ stands for bitwise XOR). You will have $q$ queries. Each query is given by a pair of numbers $L_i$, $R_i$, where $1 \leq L_i \leq R_i \leq n$. You need to find the subsegment $[l, r]$, $L_i \leq l \leq r \leq R_i$, with maximum value $f(l, r)$. If there are several answers, then among them you need to find a subsegment with the minimum length, that is, the minimum value of $r - l + 1$.
Note that $f(l, r) \leq f(l, r + 1)$. To prove this fact, let's see how the sum and the xor change when an element $x$ is appended: the sum increases by $x$, while the xor can increase by at most $x$. From this we obtain that the maximum value of $f$ is attained on the entire segment. Next, let's see in which case the xor changes by exactly $x$, because if it changes by a smaller value, then $f(l, r)$ is strictly less than $f(l, r + 1)$. The xor changes by exactly $x$ if and only if every $1$ bit of $x$ is a $0$ bit in the current xor. This means that among any $\log A + 1$ non-zero boundary elements, at least one bit occurs twice, and thus dropping further elements strictly decreases the value of the function. Putting these facts together: we can remove at most $\log A + 1 = 30 + 1 = 31$ non-zero elements from each of the beginning and the end of the segment. So let's simply iterate over how many non-zero elements we remove on the prefix and on the suffix (their positions can be found using binary search) and, using prefix sums and prefix xors, calculate the value on the resulting segment.
[ "binary search", "bitmasks", "brute force", "greedy", "implementation", "two pointers" ]
2,100
null
1732
D1
Balance (Easy version)
\textbf{This is the easy version of the problem. The only difference is that in this version there are no "remove" queries.} Initially you have a set containing one element — $0$. You need to handle $q$ queries of the following types: - + $x$ — add the integer $x$ to the set. It is guaranteed that this integer is not contained in the set; - ? $k$ — find the $k\text{-mex}$ of the set. In our problem, we define the $k\text{-mex}$ of a set of integers as the smallest non-negative integer $x$ that is divisible by $k$ and which is not contained in the set.
Let's look at a naive solution and try to improve it. In the naive solution, we simply add elements to the set, and when answering a query we iterate over the numbers $0, k, 2k, 3k, \ldots$ until we find the answer. This takes a long time if the answer is $c \cdot k$ with $c$ large. Let's improve it: when a query for a given $k$ arrives for the first time, we compute the answer for it greedily and remember it. On subsequent queries for the same $k$ we start checking not from $0$ but from the previously stored answer. Let's count, for each value $x$, how many times we can check its presence in the set. First, we only check $x$ for those $k$ that divide $x$. Second, for $x$ to be checked, the set must already contain the elements $0, k, 2k, \ldots, x - k$, that is, $\frac{x}{k}$ numbers must have been added. Note that if $k$ is not among the few largest divisors of $x$, then $\frac{x}{k}$ exceeds $q$, so such a check never happens. Therefore, this solution works quite quickly.
[ "brute force", "data structures", "implementation", "number theory" ]
1,500
null
1732
D2
Balance (Hard version)
\textbf{This is the hard version of the problem. The only difference is that in this version there are remove queries.} Initially you have a set containing one element — $0$. You need to handle $q$ queries of the following types: - + $x$ — add the integer $x$ to the set. It is guaranteed that this integer is not contained in the set; - - $x$ — remove the integer $x$ from the set. It is guaranteed that this integer is contained in the set; - ? $k$ — find the $k\text{-mex}$ of the set. In our problem, we define the $k\text{-mex}$ of a set of integers as the smallest non-negative integer $x$ that is divisible by $k$ and which is not contained in the set.
Let's look at a naive solution and try to improve it. In the naive solution, we simply add and remove elements from the set, and when answering a query we iterate over the numbers $0, k, 2k, 3k, \ldots$ until we find the answer. This takes a long time if the answer is $c \cdot k$ with $c$ large. Let's improve it, starting from the solution of the problem without the removal operation: when a query for a given $k$ arrives for the first time, we compute the answer for it greedily and remember it; on subsequent queries we start checking not from $0$ but from the previously stored answer. Now consider the variant with the removal operation. For each fixed $k$, let's additionally store the set of all numbers that have been removed and are $\leq$ the maximum answer found for this $k$. Then, during an answer query: if this set for the given $k$ is non-empty, the answer is its minimum element; otherwise we try to improve the current maximum answer for this $k$ (that is, if it was equal to $c \cdot k$, we check $c \cdot k, (c + 1) \cdot k, \ldots$). It remains to figure out how to update these sets on an add/remove operation. For each value, let's simply remember which per-$k$ sets it participates in, and update all of them. Let's estimate the running time by bounding the number of sets a given value $x$ can participate in. First, $x$ only lies in sets for $k$ dividing $x$. Second, if $x$ is in the set for the number $k$, then at least $\frac{x}{k}$ numbers have already been added. That is, if $x$ lies in $t$ sets, then, since these $k$ must be among the largest divisors of $x$, approximately $\sum_{i=1}^{t} \frac{x}{k_i}$ additions must already have happened, where $k_i$ is the $i$-th largest divisor of $x$. Since we have only $q$ queries in total, a given value $x$ cannot lie in a large number of sets.
[ "brute force", "data structures", "number theory" ]
2,400
null
1732
E
Location
You are given two arrays of integers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$. You need to handle $q$ queries of the following two types: - $1$ $l$ $r$ $x$: assign $a_i := x$ for all $l \leq i \leq r$; - $2$ $l$ $r$: find the minimum value of the following expression among all $l \leq i \leq r$: $$\frac{\operatorname{lcm}(a_i, b_i)}{\gcd(a_i, b_i)}.$$ In this problem $\gcd(x, y)$ denotes the greatest common divisor of $x$ and $y$, and $\operatorname{lcm}(x, y)$ denotes the least common multiple of $x$ and $y$.
Tasks of this kind are, as a rule, solved with data structures, and this one is no exception. Since the constraints in the problem are moderate, it is natural to think about square-root optimizations. Let's divide the array into blocks of length $k$; there will be about $\frac{n}{k}$ blocks. For each block we maintain its answer, i.e. the minimum value of $\frac{\operatorname{lcm}(a_i, b_i)}{\gcd(a_i, b_i)}$ inside it. Let's see what happens with the first type of operation. If a block partially intersects the query segment, we can walk through it in $O(k)$ and recompute its answer; there are at most two such blocks, so in total we spend $O(k)$ on them (neglecting the cost of $\gcd$). If a block lies entirely inside the query segment (and there can be $\frac{n}{k}$ such blocks), we need to recompute its answer faster. For that, let's precompute, for each block, the value $answer_x$: the answer in the block if we assign the value $x$ to all $a_i$. First, let's learn to compute it for a fixed $x$. Iterate over all divisors $d$ of $x$; by enumerating $d$ we effectively fix a candidate for the $\gcd$. For a fixed $d$, since we want to minimize the value, we need the minimum $b_i$ in the block divisible by $d$, and the candidate value is $\frac{x}{d} \cdot \frac{b_i}{d}$; so $answer_x = \min_d \left(\frac{x}{d} \cdot \frac{b_i}{d}\right)$ over all such $d$. Note that $\gcd(b_i, x)$ may actually be larger than $d$, but then the true value of $\frac{\operatorname{lcm}}{\gcd}$ is only smaller, and since we take a minimum over all divisors, including the true $\gcd$ itself, we never do worse. This already lets us compute the answers inside a block in $O(A \log A)$, where $A$ is the maximum value. But we can do even better! Note that $answer_x = \min_p (answer_{\frac{x}{p}} \cdot p)$, where $p$ ranges over the prime divisors of $x$, not forgetting the case $d = x$. This follows from the fact that every proper divisor of $x$ is contained among the divisors of the numbers of the form $\frac{x}{p}$. Let's calculate the running time and find the optimal $k$. For each number we need all its divisors, in order to quickly find the minimum number divisible by a given one inside the block; this takes $O(n \sqrt A)$. The precalculation inside each block now works in $O(A \log \log A)$, that is, $O(\frac{n}{k} \cdot A \log \log A)$ in total over all blocks. A query is answered in $O(k \log A + \frac{n}{k})$, where the $\log A$ factor is the cost of a $\gcd$ computation. Hence it is advantageous to take $k$ of order $\sqrt n$ (all the quantities here are of the same order, so we use $n$).
[ "data structures", "dp", "math", "number theory" ]
2,800
null
1733
A
Consecutive Sum
You are given an array $a$ with $n$ integers. You can perform the following operation at most $k$ times: - Choose two indices $i$ and $j$, in which $i \,\bmod\, k = j \,\bmod\, k$ ($1 \le i < j \le n$). - Swap $a_i$ and $a_j$. After performing all operations, you have to select $k$ consecutive elements, and the sum of the $k$ elements becomes your score. Find the maximum score you can get. Here $x \bmod y$ denotes the remainder from dividing $x$ by $y$.
The $i$-th element can move only to positions of the form $(i + xk)$ ($x$ is an integer). We cannot select $a_i$ and $a_{i + xk}$ simultaneously; in other words, if we can swap $a_i$ and $a_j$, we cannot select $a_i$ and $a_j$ simultaneously. Among all elements $a_{i + xk}$ for each $i$ ($1 \le i \le k$), exactly one element is selected. For each index $i$ ($k + 1 \le i \le n$) there is exactly one element among $a_1$ to $a_k$ that can be swapped with $a_i$; if $a_i$ is greater than that element, swap them. This process performs the operation at most $n - k$ times. After performing the operations, select $a_1$ to $a_k$: this is the maximum score we can get.
[ "greedy", "sortings" ]
800
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
long long k, n, s, t, x, a[100005];
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> t; t--;) {
        cin >> n >> k;
        for(int i = 1; i <= n; i++) {
            cin >> x;
            a[i % k] = max(a[i % k], x);
        }
        s = 0;
        for(int i = 0; i < k; i++) s += a[i];
        cout << s << endl;
        fill(a, a + k, 0);
    }
}
1733
B
Rule of League
There is a badminton championship in which $n$ players take part. The players are numbered from $1$ to $n$. The championship proceeds as follows: player $1$ and player $2$ play a game, then the winner and player $3$ play a game, and then the winner and player $4$ play a game, and so on. So, $n-1$ games are played, and the winner of the last game becomes the champion. There are no draws in the games. You want to find out the result of the championship. Currently, you only know the following information: - Each player has either won $x$ games or $y$ games in the championship. Given $n$, $x$, and $y$, find out if there is a result that matches this information.
Because one of player $1$ and player $2$ wins $0$ games and the other wins at least $1$ game, $\min(x, y) = 0$ and $\max(x, y) > 0$ must hold for a valid result to exist. There is one winner and one loser in every game, so the total number of wins equals the total number of losses. The total number of wins is a multiple of $\max(x, y)$; therefore the total number of losses is also a multiple of $\max(x, y)$. The total number of losses equals $n - 1$, since $n - 1$ games are played. So $n - 1$ must be a multiple of $\max(x, y)$ if a valid result exists. Conversely, if $\min(x, y) = 0$, $\max(x, y) > 0$ and $(n - 1) \,\bmod\, \max(x, y) = 0$ all hold, a valid result can be constructed. Player $1$ and player $2$ play first; let's say player $2$ wins. Then player $2$ should win $\max(x, y)$ games and lose to player $\max(x, y) + 2$. Likewise, player $\max(x, y) + 2$ wins $\max(x, y)$ games and loses to player $2 \cdot \max(x, y) + 2$. Construct the remaining result in the same way.
[ "constructive algorithms", "math" ]
900
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
int k, n, t, x, y;
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> t; t--;) {
        cin >> n >> x >> y;
        if(x > y) swap(x, y);
        if(x || !y || (n - 1) % y) {
            cout << -1 << endl;
            continue;
        }
        for(int k = 2; k <= n; k += y) {
            for(int i = 1; i <= y; i++) cout << k << ' ';
        }
        cout << endl;
    }
}
1733
C
Parity Shuffle Sorting
You are given an array $a$ with $n$ non-negative integers. You can apply the following operation on it. - Choose two indices $l$ and $r$ ($1 \le l < r \le n$). - If $a_l + a_r$ is odd, do $a_r := a_l$. If $a_l + a_r$ is even, do $a_l := a_r$. Find any sequence of at most $n$ operations that makes $a$ non-decreasing. It can be proven that it is always possible. Note that you do not have to minimize the number of operations. An array $a_1, a_2, \ldots, a_n$ is non-decreasing if and only if $a_1 \le a_2 \le \ldots \le a_n$.
If all elements are equal, the array is non-decreasing. Each operation changes exactly one element, and since we may use at most $n$ operations, at most $n$ element changes can occur; considering that the initial array can be decreasing, at least $n - 1$ operations may be needed in some cases. If $n = 1$, do nothing. Otherwise, first select indices $1$ and $n$ to make $a_1$ equal to $a_n$. After that, for each element $a_i$ ($2 \le i < n$), select indices $1$ and $i$ if $a_1 + a_i$ is odd, and select indices $i$ and $n$ otherwise. This requires $n - 1$ operations and makes all elements equal, hence non-decreasing.
[ "constructive algorithms", "sortings" ]
1,300
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
int n, t, x, a[100005];
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> t; t--;) {
        cin >> n;
        for(int i = 1; i <= n; i++) cin >> a[i];
        cout << n - 1 << endl;
        if(n > 1) cout << 1 << ' ' << n << endl;
        x = (a[1] + a[n]) % 2 ? a[1] : a[n];
        for(int i = 2; i < n; i++) {
            if((x + a[i]) % 2) cout << 1 << ' ' << i << endl;
            else cout << i << ' ' << n << endl;
        }
    }
}
1733
D1
Zero-One (Easy Version)
\textbf{This is the easy version of the problem. In this version, $n \le 3000$, $x \ge y$ holds. You can make hacks only if both versions of the problem are solved.} You are given two binary strings $a$ and $b$, both of length $n$. You can do the following operation any number of times (possibly zero). - Select two indices $l$ and $r$ ($l < r$). - Change $a_l$ to $(1 - a_l)$, and $a_r$ to $(1 - a_r)$. - If $l + 1 = r$, the cost of the operation is $x$. Otherwise, the cost is $y$. You have to find the minimum cost needed to make $a$ equal to $b$ or say there is no way to do so.
Consider another binary string $c$, in which $c_i = a_i \oplus b_i$ ($1 \le i \le n$); doing an operation means selecting two indices of $c$ and flipping them, and we have to turn $c$ into $000 \ldots 000$. Let $d$ be the number of $1$s in $c$. Because an operation does not change the parity of $d$, the answer is $-1$ if $d$ is odd. If $d$ is even, at least $\frac{d}{2}$ operations are needed, and since $y \le x$ in this version each operation costs at least $y$, so the total cost is at least $\frac{d}{2} \times y$. Also note that one $x$-cost operation can always be replaced with two $y$-cost operations. Now classify the cases: $[1]$ If $d = 2$ and the two $1$s are adjacent, the answer is $\min(x, 2y)$; because $n \ge 5$ holds, we can always replace the one $x$-cost operation with two $y$-cost operations. $[2]$ If $d = 2$ and the two $1$s are not adjacent, the answer is $y$. $[3]$ If $d \ne 2$, pair the $i$-th $1$ with the $(i + \frac{d}{2})$-th $1$ for each $1 \le i \le \frac{d}{2}$. This costs $\frac{d}{2} \times y$, which matches the lower bound shown above.
[ "constructive algorithms", "greedy", "math" ]
1,400
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
long long d, n, t, x, y;
string a, b;
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> t; t--;) {
        cin >> n >> x >> y >> a >> b;
        d = 0;
        for(int i = 0; i < n; i++) {
            a[i] ^= b[i];
            d += a[i];
        }
        if(d % 2) {
            cout << -1 << endl;
            continue;
        }
        if(d == 2) {
            int l, r;
            for(l = 0; !a[l]; ) l++;
            for(r = n - 1; !a[r]; ) r--;
            if(l + 1 == r) cout << min(x, 2 * y) << endl;
            else cout << min((r - l) * x, y) << endl;
        } else cout << d / 2 * y << endl;
    }
}
1733
D2
Zero-One (Hard Version)
\textbf{This is the hard version of this problem. In this version, $n \le 5000$ holds, and this version has no restriction between $x$ and $y$. You can make hacks only if both versions of the problem are solved.} You are given two binary strings $a$ and $b$, both of length $n$. You can do the following operation any number of times (possibly zero). - Select two indices $l$ and $r$ ($l < r$). - Change $a_l$ to $(1 - a_l)$, and $a_r$ to $(1 - a_r)$. - If $l + 1 = r$, the cost of the operation is $x$. Otherwise, the cost is $y$. You have to find the minimum cost needed to make $a$ equal to $b$ or say there is no way to do so.
(Continued from the D1 editorial.) If $x < y$, the greedy solution used in D1 no longer works, but the constraints admit a normal $O(n^2)$ solution, so let's use DP. Define $z0[i][j]$ as the minimal cost when there are $j$ $1$s in the first $i$ elements of $c$ and $c_i = 0$, and $z1[i][j]$ as the minimal cost when there are $j$ $1$s in the first $i$ elements of $c$ and $c_i = 1$. Initially all table values are $\infty$. First, check $c_1$: if $c_1 = 0$, set $z0[1][0]$ to $0$; otherwise, set $z1[1][1]$ to $0$. Then process the elements $c_2$ to $c_n$ in turn. $[4]$ If $c_i = 0$, $z0[i][j] = \min(z0[i - 1][j], z1[i - 1][j]), \\ z1[i][j] = \begin{cases} \min(z0[i - 1][j] + y, z1[i - 1][j] + x), & \text{if } j \le 1, \\ \\ \min(z0[i - 1][j] + y, z1[i - 1][j] + x, z0[i - 1][j - 2] + x, z1[i - 1][j - 2] + y), & \text{if } j > 1, \end{cases}$ $[5]$ If $c_i = 1$, $z0[i][j] = \begin{cases} \min(z0[i - 1][j + 1] + y, z1[i - 1][j + 1] + x), & \text{if } j = 0, \\ \min(z0[i - 1][j + 1] + y, z1[i - 1][j + 1] + x, z0[i - 1][j - 1] + x, z1[i - 1][j - 1] + y), & \text{if } 0 < j < i, \\ \min(z0[i - 1][j - 1] + x, z1[i - 1][j - 1] + y), & \text{if } j = i, \end{cases} \\ z1[i][j] = \begin{cases} \infty, & \text{if } j = 0, \\ \min(z0[i - 1][j - 1], z1[i - 1][j - 1]), & \text{if } j > 0, \end{cases}$ for $0 \le j \le i$. The answer is $z0[n][0]$.
[ "dp", "greedy" ]
2,000
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
long long d, n, t, x, y, z0[5004][5004], z1[5004][5004];
string a, b;
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> t; t--;) {
        cin >> n >> x >> y >> a >> b;
        d = 0;
        for(int i = 0; i < n; i++) {
            a[i] ^= b[i];
            d += a[i];
        }
        if(d % 2) {
            cout << -1 << endl;
            continue;
        }
        if(d == 2) {
            int l, r;
            for(l = 0; !a[l]; ) l++;
            for(r = n - 1; !a[r]; ) r--;
            if(l + 1 == r) cout << min(x, 2 * y) << endl;
            else cout << min((r - l) * x, y) << endl;
            continue;
        }
        if(!d || x >= y) {
            cout << d / 2 * y << endl;
            continue;
        }
        for(int i = 0; i < n; i++) {
            fill(z0[i], z0[i] + n + 1, 1LL << 60);
            fill(z1[i], z1[i] + n + 1, 1LL << 60);
        }
        if(a[0] == 0) z0[0][0] = 0;
        if(a[0] == 1) z1[0][1] = 0;
        for(int i = 1; i < n; i++) {
            if(a[i]) {
                for(int j = i + 1; j >= 0; j--) {
                    if(j <= i) z0[i][j] = min(z0[i - 1][j + 1] + y, z1[i - 1][j + 1] + x);
                    if(j) {
                        z0[i][j] = min({z0[i][j], z0[i - 1][j - 1] + x, z1[i - 1][j - 1] + y});
                        z1[i][j] = min(z0[i - 1][j - 1], z1[i - 1][j - 1]);
                    }
                }
            } else {
                for(int j = i + 1; j >= 0; j--) {
                    z0[i][j] = min(z0[i - 1][j], z1[i - 1][j]);
                    z1[i][j] = min(z0[i - 1][j] + y, z1[i - 1][j] + x);
                    if(j > 1) z1[i][j] = min({z1[i][j], z0[i - 1][j - 2] + x, z1[i - 1][j - 2] + y});
                }
            }
        }
        cout << z0[n - 1][0] << endl;
    }
}
1733
E
Conveyor
There is a conveyor with $120$ rows and $120$ columns. Each row and column is numbered from $0$ to $119$, and the cell in $i$-th row and $j$-th column is denoted as $(i, j)$. The top leftmost cell is $(0, 0)$. Each cell has a belt, and all belts are initially facing to the right. Initially, a slime ball is on the belt of $(0, 0)$, and other belts are empty. Every second, the state of the conveyor changes as follows: - All slime balls on the conveyor move one cell in the direction of the belt at the same time. If there is no cell in the moved position, the slime gets out of the conveyor, and if two slime balls move to the same cell, they merge into one. - All belts with slime ball in the previous second change direction at the same time: belts facing to the right become facing to the down, and vice versa. - A new slime ball is placed on cell $(0, 0)$. There are $q$ queries, each being three integers $t$, $x$, and $y$. You have to find out if there is a slime at the cell $(x, y)$ after $t$ seconds from the start. Can you do it?
Each second, every slime ball moves to the next diagonal, so the slime balls are always located on different diagonals and no two of them ever merge. In the conveyor, cells with the same value of $i + j$ form a diagonal ($i$ is the row number, $j$ is the column number); let's call it the $(i + j)$-th diagonal. There are $239$ diagonals, from the $0$-th to the $238$-th. Given $t$, $x$, $y$, the cell $(x, y)$ belongs to the $(x + y)$-th diagonal. If $t < x + y$, the answer is NO, because no slime ball has reached the $(x + y)$-th diagonal yet. If $t \ge x + y$, the $(x + y)$-th diagonal contains exactly one slime ball, namely the one placed on cell $(0, 0)$ after $(t - x - y)$ seconds from the start; so exactly $(t - x - y)$ slime balls passed through this diagonal before it. Now we need to find which cell of the diagonal contains the ball. To do this, we use the following method: simulate with $(t - x - y)$ slime balls to count how many slimes pass through each cell of the diagonal, and repeat with $(t - x - y + 1)$ slime balls. Exactly one cell will show a different count, and that is the cell through which the $(t - x - y + 1)$-th slime ball passes. If this cell equals $(x, y)$, the answer is YES; otherwise it is NO. The simulation with $x$ slime balls proceeds as follows: place $x$ slime balls on cell $(0, 0)$; move the balls to the next diagonal, where a cell containing $k$ slime balls sends $\lceil \frac{k}{2} \rceil$ of them to the right and $\lfloor \frac{k}{2} \rfloor$ down (the belts alternate, starting to the right); repeat until the balls reach the target diagonal, then read off the result.
[ "constructive algorithms", "dp", "math" ]
2,700
#include <bits/stdc++.h>
#define endl '\n'
using namespace std;
typedef long long LL;
LL px, py, q, t, x, y, a[240], b[240];
int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    for(cin >> q; q--;) {
        cin >> t >> x >> y;
        if(t < x + y) {
            cout << "NO" << endl;
            continue;
        }
        fill(a, a + 240, 0);
        fill(b, b + 240, 0);
        a[0] = t - x - y;
        b[0] = a[0] + 1;
        for(int i = 0; i < x + y; i++) {
            for(int j = i; j >= 0; j--) {
                a[j + 1] += a[j] / 2;
                a[j] -= a[j] / 2;
                b[j + 1] += b[j] / 2;
                b[j] -= b[j] / 2;
            }
        }
        px = py = -1;
        for(int i = 0; i < 240; i++) {
            if(a[i] != b[i]) {
                px = i;
                py = x + y - i;
                break;
            }
        }
        if(x == px && y == py) cout << "YES" << endl;
        else cout << "NO" << endl;
    }
}
1734
A
Select Three Sticks
You are given $n$ sticks with positive integral length $a_1, a_2, \ldots, a_n$. You can perform the following operation any number of times (possibly zero): - choose one stick, then either increase or decrease its length by $1$. After each operation, all sticks should have positive lengths. What is the minimum number of operations that you have to perform such that it is possible to select three of the $n$ sticks and use them without breaking to form an equilateral triangle? An equilateral triangle is a triangle where all of its three sides have the same length.
We first sort the array $a$ in non-decreasing order. Denote the indices of the elements that we choose from $a$ to be $x$, $y$, and $z$, where $1 \le x < y < z \le n$, and the final value (after performing the operations) of the concerned elements to be $v$. The minimum required number of operations is then $|a_x-v|+|a_y-v|+|a_z-v|$. It is well-known that such expression attains its minimum value when $v$ is the median of $a_x$, $a_y$, and $a_z$. Since the array $a$ has already been sorted, it is best to assign $v$ to be $a_y$. Our expression then becomes $|a_x-a_y|+|a_y-a_y|+|a_z-a_y|=(a_y-a_x)+0+(a_z-a_y)=a_z-a_x$. We would like to minimize the value of $a_z$, which implies $z$ should be as small as possible since $a$ is sorted. It is clear that taking $z=y+1$ would minimize the value of the expression. Similarly, we can show that we can take $x=y-1$ to minimize the value of the expression. Therefore, the only possible values of the triplets $(x,y,z)$ are of the form $(t,t+1,t+2)$ for positive integers $1 \le t \le n-2$, and we can iterate through all such triplets and find the best one. The time complexity is $\Theta(n \log n)$ per case due to sorting.
[ "brute force", "greedy", "sortings" ]
800
#include <bits/stdc++.h>
using namespace std;
void solve() {
    int n;
    cin >> n;
    int a[n + 1];
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    sort(a + 1, a + n + 1);
    int ans = 2e9;
    for (int i = 3; i <= n; i++) {
        ans = min(ans, a[i] - a[i - 2]);
    }
    cout << ans << '\n';
}
int main() {
    int t;
    cin >> t;
    for (int i = 1; i <= t; i++) {
        solve();
    }
}
1734
B
Bright, Nice, Brilliant
There is a pyramid which consists of $n$ floors. The floors are numbered from top to bottom in increasing order. In the pyramid, the $i$-th floor consists of $i$ rooms. Denote the $j$-th room on the $i$-th floor as $(i,j)$. For all positive integers $i$ and $j$ such that $1 \le j \le i < n$, there are $2$ \textbf{one-way} staircases which lead from $(i,j)$ to $(i+1,j)$ and from $(i,j)$ to $(i+1,j+1)$ respectively. In each room you can either put a torch or leave it empty. Define the brightness of a room $(i, j)$ to be the number of rooms with a torch from which you can reach the room $(i, j)$ through a non-negative number of staircases. For example, when $n=5$ and torches are placed in the rooms $(1,1)$, $(2,1)$, $(3,2)$, $(4,1)$, $(4,3)$, and $(5,3)$, the pyramid can be illustrated as follows: In the above picture, rooms with torches are colored in yellow, and empty rooms are white. The blue numbers in the bottom-right corner indicate the brightness of the rooms. The room $(4,2)$ (the room with a star) has brightness $3$. In the picture below, the rooms from where you can reach $(4,2)$ have red border. The brightness is $3$ since there are three torches among these rooms. The pyramid is called nice if and only if for all floors, all rooms in the floor have the same brightness. Define the brilliance of a nice pyramid to be the sum of brightness over the rooms $(1,1)$, $(2,1)$, $(3,1)$, ..., $(n,1)$. Find an arrangement of torches in the pyramid, such that the resulting pyramid is nice and its brilliance is maximized. We can show that an answer always exists. If there are multiple answers, output any one of them.
Note that the brightness of the rooms on the $i$-th floor is at most $i$. This is because for room $(i,1)$, only $i$ rooms, namely $(1,1)$, $(2,1)$, $\ldots$, $(i,1)$, can reach $(i,1)$ through some number of staircases. It is also possible to find a configuration of torches in the pyramid such that the brightness of the rooms on the $i$-th floor is exactly $i$, i.e. it attains the upper bound. The configuration is as follows: room $(i,j)$ contains a torch if and only if it is the leftmost room ($j=1$) or the rightmost room ($j=i$) on the $i$-th floor. This is valid because every room $(i,j)$ can be reached exactly from $(1,1)$, $(2,1)$, $(3,1)$, $\ldots$, $(i-j+1,1)$ and $(2,2)$, $(3,3)$, $\ldots$, $(j,j)$. In other words, room $(i,j)$ has brightness $(i-j+1)+(j-1)=i$, so the pyramid is nice.
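As a sanity check on the construction above, here is a small Python sketch (an illustration, not the official solution's I/O format): it places torches in the leftmost and rightmost room of every floor and verifies by brute force that each room on floor $i$ has brightness exactly $i$.

```python
def floor_brightness(n):
    # torch in room (i, j) iff it is the leftmost or rightmost room of its floor
    torch = {(i, j) for i in range(1, n + 1)
             for j in range(1, i + 1) if j == 1 or j == i}
    reach = {}  # reach[(i, j)] = set of rooms from which (i, j) is reachable
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            s = {(i, j)}
            if i > 1 and j <= i - 1:
                s |= reach[(i - 1, j)]       # staircase (i-1, j) -> (i, j)
            if i > 1 and j >= 2:
                s |= reach[(i - 1, j - 1)]   # staircase (i-1, j-1) -> (i, j)
            reach[(i, j)] = s
    return {room: len(rooms & torch) for room, rooms in reach.items()}

# every floor is uniform and attains the upper bound
assert all(b == i for (i, j), b in floor_brightness(6).items())
```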
[ "constructive algorithms" ]
800
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        // torch in room (i, j) iff it is the leftmost or rightmost room of its floor
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= i; j++) cout << (j == 1 || j == i) << ' ';
            cout << '\n';
        }
    }
}
1734
C
Removing Smallest Multiples
You are given a set $S$, which contains the first $n$ positive integers: $1, 2, \ldots, n$. You can perform the following operation on $S$ any number of times (possibly zero): - Choose a positive integer $k$ where $1 \le k \le n$, such that there exists a multiple of $k$ in $S$. Then, delete the \textbf{smallest} multiple of $k$ from $S$. This operation requires a cost of $k$. You are given a set $T$, which is a subset of $S$. Find the minimum possible total cost of operations such that $S$ would be transformed into $T$. We can show that such a transformation is always possible.
One operation must be used to remove every element not belonging to $T$. Let $v$ be an element not belonging to $T$. Suppose an $x$-cost operation removes value $v$; then $v$ must be divisible by $x$. Furthermore, the multiples $x, 2x, \cdots, (k-1)x$ must have already been removed from $S$, where we write $v = kx$. Since removed elements stay removed, the above is only possible if none of $x, 2x, \cdots, (k-1)x$ belongs to $T$. For each $v$, let $f(v)$ be the smallest integer $x$ satisfying the above condition. As we can always remove $v$ using a $v$-cost operation, $f(v) \leq v$ and in particular $f(v)$ exists. The total cost must be at least $\sum_{i \not \in T} f(i)$. We claim that this cost can be achieved. To do so, we should remove the required elements in ascending order. When removing $v$, all $w \not\in T$ with $w<v$ have already been removed; at this state, an $f(v)$-cost operation is able to remove $v$. It remains to find the values $f(v)$. To do so efficiently, we can perform the above process in a bottom-up manner similar to the Sieve of Eratosthenes. Please refer to the code below for implementation details. The overall complexity is $n (1 +\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}) = \Theta(n \log n)$.
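The descending sieve described above can be sketched in a few lines of Python (a standalone illustration, not the judge I/O): iterating $x$ from $n$ down to $1$ and walking $x, 2x, \ldots$ until the first kept multiple makes the final value written to $\mathrm{cost}[v]$ equal to $f(v)$.

```python
def min_total_cost(n, keep):
    """keep[v] is True iff v belongs to T (1-indexed; keep[0] unused)."""
    cost = [0] * (n + 1)
    # iterate x descending: the last x to write cost[v] is the smallest
    # valid one, because the walk stops at the first multiple kept in T
    for x in range(n, 0, -1):
        v = x
        while v <= n:
            if keep[v]:
                break
            cost[v] = x
            v += x
    return sum(cost[v] for v in range(1, n + 1) if not keep[v])

# remove 4, 5, 6 while keeping 1, 2, 3: each needs its own v-cost operation
assert min_total_cost(6, [False, True, True, True, False, False, False]) == 15
# remove everything: each element can be removed at cost 1
assert min_total_cost(4, [False] * 5) == 4
```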
[ "greedy", "math" ]
1,200
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    string str;
    cin >> str;
    vector<bool> a(n + 1);  // a[v] == true iff v belongs to T
    for (int i = 1; i <= n; i++) a[i] = (str[i - 1] == '1');
    long long ans = 0;
    vector<int> cost(n + 1);
    // sieve from the largest divisor down: the last write to cost[j] is the
    // smallest valid x, since the walk x, 2x, ... stops at the first multiple
    // that must stay in T
    for (int i = n; i >= 1; i--) {
        for (int j = i; j <= n; j += i) {
            if (a[j]) break;
            cost[j] = i;
        }
    }
    for (int i = 1; i <= n; i++)
        if (!a[i]) ans += cost[i];
    cout << ans << '\n';
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}
1734
D
Slime Escape
You are playing a game called Slime Escape. The game takes place on a number line. Initially, there are $n$ slimes. For all positive integers $i$ where $1 \le i \le n$, the $i$-th slime is located at position $i$ and has health $a_i$. You are controlling the slime at position $k$. There are two escapes located at positions $0$ and $n+1$. Your goal is to reach \textbf{any one} of the two escapes by performing any number of game moves. In one game move, you move your slime to the left or right by one position. However, if there is another slime in the new position, you must absorb it. When absorbing a slime, the health of your slime would be increased by the health of the absorbed slime, then the absorbed slime would be removed from the game. Note that some slimes might have negative health, so your health would decrease when absorbing such slimes. You lose the game immediately if your slime has negative health at any moment during the game. Can you reach one of two escapes by performing any number of game moves, without ever losing the game?
Let's call a group of slimes good if their total health is at least $0$, or if defeating this group allows you to reach the exits. We partition the slimes into good groups in a two-pointer-like manner. To form the groups to the right, start from position $k$ and find the smallest position $r$ such that the slimes from $k+1$ through $r$ form a good group. We do the same starting from $r+1$, and repeat this process until the slimes to the right are partitioned into groups; this can be done by maintaining the running sum of health. We partition the left slimes into groups in the same way. We can observe that in an optimal strategy, we may assume the player absorbs group by group. Assume there is a valid strategy $S$ to reach the exit. Let $X$ be the first group to the left, and $Y$ be the first group to the right. Without loss of generality, assume all slimes of $X$ are absorbed before all slimes of $Y$. Suppose there are $x$ slimes in $X$ and $y$ slimes in $Y$, and suppose we did not absorb all of $X$ in one go. For example, if there are $6$ slimes in $X$ and $5$ or more slimes in $Y$, the beginning of our strategy may look like this: $LLRRLRLLRL....$ Consider instead the strategy where we move all `L' moves to the front. We claim that the strategy stays valid; that is, the strategy $LLLLLLRRRR....$ is at least as good. We check that at every moment of strategy $LLLLLLRRRR....$, there is some moment of $LLRRLRLLRL....$ at which we have at most as much health. For the state $T_1$ where we have taken $a$ moves to the left in $LLLLLLRRRR....$, we compare it with any moment $T_2$ in $LLRRLRLLRL....$ at which we have also taken $a$ moves to the left. To reach $T_2$, we additionally take $b$ moves to the right, and under our assumption $0 \leq b < y$. Since $Y$ is the smallest good group to the right, taking these $b$ extra right moves cannot have increased our health. So, we have at least as much health at $T_1$ as at $T_2$.
Now consider the state $T_3$ in $LLLLLLRRRR....$ where we have taken all $x$ left moves and $b$ more right moves. We compare it with any moment $T_4$ in $LLRRLRLLRL....$ at which we have taken $b$ right moves and some $a$ left moves; under our assumption $0 \leq a < x$. Since $X$ is the smallest good group, taking only $a$ moves to the left gives us at most as much health as taking all $x$ moves to the left. Therefore, we have at least as much health at $T_3$ as at $T_4$. Therefore, if $LLRRLRLLRL....$ is valid, $LLLLLLRRRR....$ must be valid, so we only need to consider strategies of the second form. By applying this claim recursively, we see that we only need to consider strategies that absorb whole groups at a time. The argument works for all strategies; we merely referred to the starting strategy as $LLRRLRLLRL....$ and the better strategy as $LLLLLLRRRR....$ to simplify notation. For any good group, since its total health is non-negative, there is no drawback to absorbing it. In other words, whenever it is possible to absorb a good group, we will absorb it. For each group $G$, we calculate the ``requirement'' of the group: the lowest health we can begin with such that we can absorb the group while maintaining non-negative health at all times. The requirement of a group of slimes with healths $a_1, a_2, \cdots, a_n$ can be expressed as $- \min_{k=0}^n \left(\sum_{i=1}^k a_i\right)$. Finally, we can simply simulate the process. We repeatedly attempt to absorb good groups to the left or to the right, keeping track of the current health, initially equal to $a_k$. Whenever we consider whether to absorb a group, we absorb it if and only if the current health is at least as high as its requirement; otherwise, we ignore it for now and attempt the group on the other side. If it is possible to reach a state where either all left groups or all right groups are absorbed, then we can win the game.
If at some point it is possible to absorb neither the left group nor the right group, then we lose. The overall complexity is $\Theta(n)$. It is also possible to use a range $\max$/$\min$ segment tree to form the groups instead of using two pointers, in which case the complexity would be $\Theta(n \log n)$.
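The whole procedure fits in a short Python sketch (with hypothetical helper names; the C++ solution below is the reference). `build_groups` performs the partition into minimal good groups and computes each group's requirement $-\min_k \sum_{i \le k} a_i$, and `can_escape` runs the greedy simulation.

```python
def build_groups(vals):
    """Partition vals (in absorb order) into minimal good groups.
    Returns a list of (total gain, requirement) pairs; a negative trailing
    leftover, which reaches the exit, forms the final group."""
    groups, cur, worst, pending = [], 0, 0, False
    for x in vals:
        pending = True
        cur += x
        worst = min(worst, cur)
        if cur >= 0:                      # smallest prefix with sum >= 0
            groups.append((cur, -worst))
            cur, worst, pending = 0, 0, False
    if pending:                           # leftover group reaching the exit
        groups.append((cur, -worst))
    return groups

def can_escape(a, k):
    """a: 0-indexed healths, k: 0-indexed position of our slime."""
    health = a[k]
    left = build_groups(list(reversed(a[:k])))
    right = build_groups(a[k + 1:])
    li = ri = 0
    while li < len(left) and ri < len(right):
        acted = False
        if health >= left[li][1]:
            health += left[li][0]; li += 1; acted = True
        if ri < len(right) and health >= right[ri][1]:
            health += right[ri][0]; ri += 1; acted = True
        if not acted:
            return False   # stuck: neither side's next group is affordable
    return True            # one side fully absorbed: walk to that exit

assert can_escape([-1, 5, -1], 1)              # strong enough either way
assert not can_escape([0, -10, 3, -10, 0], 2)  # both exits need health 10 > 3
assert can_escape([-20, 5, 6, -1], 1)          # grow on the right first
```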
[ "data structures", "dp", "greedy", "two pointers" ]
1,800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    long long n, k, P;
    cin >> n >> k;
    k--;
    vector<long long> L(n);
    for (int i = 0; i < n; i++) cin >> L[i];
    P = L[k];  // our slime's starting health
    L[k] = 0;
    vector<long long> ps(n + 1, 0);  // prefix sums of healths
    for (int i = 1; i <= n; i++) ps[i] = ps[i - 1] + L[i - 1];
    int l = k, r = k + 1;
    // each group is stored as (total gain, requirement)
    vector<pair<long long, long long>> LG, RG;
    for (int i = k - 1; i >= 0; i--) {
        if (ps[i] <= ps[l] || i == 0) {  // slimes l-1 .. i form a good group
            long long worst = 0, cur = 0;
            for (int j = l - 1; j >= i; j--) {
                cur += L[j];
                worst = min(worst, cur);
            }
            LG.push_back({cur, -worst});
            l = i;
        }
    }
    for (int i = k + 2; i <= n; i++) {
        if (ps[i] >= ps[r] || i == n) {  // slimes r .. i-1 form a good group
            long long worst = 0, cur = 0;
            for (int j = r; j <= i - 1; j++) {
                cur += L[j];
                worst = min(worst, cur);
            }
            RG.push_back({cur, -worst});
            r = i;
        }
    }
    reverse(LG.begin(), LG.end());
    reverse(RG.begin(), RG.end());
    long long curp = P;
    while (true) {
        bool acted = false;
        if (!LG.empty() && curp >= LG.back().second) {
            curp += LG.back().first;
            LG.pop_back();
            acted = true;
        }
        if (!RG.empty() && curp >= RG.back().second) {
            curp += RG.back().first;
            RG.pop_back();
            acted = true;
        }
        if (LG.empty() || RG.empty()) { cout << "YES\n"; return; }
        if (!acted) { cout << "NO\n"; return; }
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}
1734
E
Rectangular Congruence
You are given a \textbf{prime} number $n$, and an array of $n$ integers $b_1,b_2,\ldots, b_n$, where $0 \leq b_i < n$ for each $1 \le i \leq n$. You have to find a matrix $a$ of size $n \times n$ such that all of the following requirements hold: - $0 \le a_{i,j} < n$ for all $1 \le i, j \le n$. - $a_{r_1, c_1} + a_{r_2, c_2} \not\equiv a_{r_1, c_2} + a_{r_2, c_1} \pmod n$ for all positive integers $r_1$, $r_2$, $c_1$, and $c_2$ such that $1 \le r_1 < r_2 \le n$ and $1 \le c_1 < c_2 \le n$. - $a_{i,i} = b_i$ for all $1 \le i \leq n$. Here $x \not \equiv y \pmod m$ denotes that $x$ and $y$ give different remainders when divided by $m$. If there are multiple solutions, output any. It can be shown that such a matrix always exists under the given constraints.
We say a matrix is good if it satisfies the congruence condition (the second condition). When we have a good matrix, we can add any value $c$ to a whole row while maintaining the congruence relation; the same is true for adding the same value to a whole column. Suppose we have any good matrix $A$. Then by adding $b_i - a_{i,i}$ to the $i$-th row for each $i=1,2,\cdots,n$, we obtain a good matrix that has the desired values on the diagonal. In fact, there are a lot of possible constructions. We present a few of them here: $a_{i,j} = i \times j \pmod n$; $a_{i,j} = (i + j)^2 \pmod n$ (this needs special handling when $n=2$); $a_{i,j} = \frac{(i+j)(i+j+1)}{2} \pmod n$. The coolest part is that every quadratic polynomial of the form $ai^2 + bij + cj^2 + di + ej + f$ is valid for all integers $a,b,c,d,e,f$ with $b \not\equiv 0 \pmod n$. As a bonus, we prove that the general quadratic polynomial gives a good construction. Since we can add values to a whole row or column, and $i^2, j^2, i, j, 1$ are each constant along rows or along columns, adding them to $a_{i,j}$ has no effect. So we may assume $a = c = d = e = f = 0$, and it suffices to show that $a_{i,j} = bij \pmod n$ satisfies the condition. We can see directly that $a_{r_1,c_1} - a_{r_2,c_1} - a_{r_1,c_2} + a_{r_2,c_2} = b(r_1 - r_2)(c_1 - c_2)$. As $r_1 \neq r_2$, $c_1 \neq c_2$, $b \not\equiv 0 \pmod n$, and $n$ is a prime, this expression must be nonzero $\pmod n$. Here are some extra observations that may enable one to find a good matrix more quickly. For each $i$, the values $a_{i,j} - a_{i-1,j}$ over all columns $j$ must form a permutation of $\{0,1,\cdots,n-1\}$.
A good matrix stays good if we permute the rows or permute the columns. Therefore, we can show that there exists some good matrix with $a_{1,j} = a_{i,1} = 0$, $a_{2,j} = j-1$, and $a_{i,2} = i-1$. Assuming this, it should not be difficult to discover that $a_{i,j} = (i-1)(j-1)$ yields one of the good matrices. Let $b$ be the two-dimensional difference array of $a$, that is, $b_{i,j} = a_{i,j} - a_{i-1,j} - a_{i,j-1} + a_{i-1,j-1}$. Then the condition becomes: the sum over any rectangle of $b$ must be nonzero $\pmod n$. It is easy to see that $b_{i,j} = 1$ everywhere is valid; this corresponds to the solution $a_{i,j} = i \times j \pmod n$.
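A quick Python sketch (illustration only) builds the matrix from $a_{i,j} = i \cdot j \bmod n$, shifts each row to fix the diagonal, and brute-force checks the congruence condition for a small prime:

```python
from itertools import combinations

def build(n, b):
    # start from the good matrix i * j (mod n), then add b[i] - a[i][i]
    # to row i so the diagonal becomes b (row shifts preserve goodness)
    a = [[(i * j) % n for j in range(n)] for i in range(n)]
    for i in range(n):
        shift = (b[i] - a[i][i]) % n
        a[i] = [(x + shift) % n for x in a[i]]
    return a

def is_good(a, n):
    # check the congruence condition on every pair of rows and columns
    return all((a[r1][c1] + a[r2][c2] - a[r1][c2] - a[r2][c1]) % n != 0
               for r1, r2 in combinations(range(n), 2)
               for c1, c2 in combinations(range(n), 2))

n, b = 7, [3, 1, 4, 1, 5, 2, 6]
a = build(n, b)
assert [a[i][i] for i in range(n)] == b
assert is_good(a, n)
```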
[ "constructive algorithms", "number theory" ]
2,100
#include <bits/stdc++.h>
using namespace std;

int want[355];
int board[355][355];

int main() {
    int n;
    cin >> n;
    for (int i = 0; i < n; i++) cin >> want[i];
    // start from the known good matrix a[i][j] = i * j (mod n)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) board[i][j] = (i * j) % n;
    // shift each row so that the diagonal matches b
    for (int i = 0; i < n; i++) {
        int extra = (want[i] + n - board[i][i]) % n;
        for (int j = 0; j < n; j++) board[i][j] = (board[i][j] + extra) % n;
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) cout << board[i][j] << ' ';
        cout << '\n';
    }
}
1734
F
Zeros and Ones
Let $S$ be the Thue-Morse sequence. In other words, $S$ is the $0$-indexed binary string with infinite length that can be constructed as follows: - Initially, let $S$ be "0". - Then, we perform the following operation infinitely many times: concatenate $S$ with a copy of itself with flipped bits. For example, here are the first four iterations: \begin{center} \begin{tabular}{|c||l||l||l|} \hline Iteration & $S$ before iteration & $S$ before iteration with flipped bits & Concatenated $S$ \\ \hline \hline 1 & 0 & 1 & 01 \\ 2 & 01 & 10 & 0110 \\ 3 & 0110 & 1001 & 01101001 \\ 4 & 01101001 & 10010110 & 0110100110010110 \\ $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\ \hline \end{tabular} \end{center} You are given two positive integers $n$ and $m$. Find the number of positions where the strings $S_0 S_1 \ldots S_{m-1}$ and $S_n S_{n + 1} \ldots S_{n + m - 1}$ are different.
Observe that the $i$-th character is `1' if and only if $i$ has an odd number of set bits in its binary representation. Both solutions make use of this fact. The constraints allow solutions of up to $\Theta(q \log^3 n)$, yet both model solutions run in $\Theta(\log n)$ per query. The question can be reformulated as follows: how many integers $x$ between $0$ and $m-1$ inclusive have the property that the total number of set bits of $x$ and $x+n$ is odd? This can be solved with digit DP. We process the bit positions from $\lceil \log(\max) \rceil$ down to $0$. We maintain three states: $\text{ans}$, a boolean value; $\text{trailzeros}$, an integer between $0$ and $\lceil \log(\max) \rceil$ inclusive; and $\text{under}$, a boolean value. We can thus conclude the following: the number of trailing `1's (stored in $\text{trailzeros}$) is all we need to decide the answer. After processing bit $k$, we should have, for every combination of states, the number of integers $x$ between $0$ and $\lfloor \frac{m}{2^k} \rfloor$ inclusive such that: the total number of set bits of $x$ and $x + \lfloor \frac{n}{2^k} \rfloor$ is congruent to $\text{ans} \bmod 2$; the number of trailing `1's of $x + \lfloor \frac{n}{2^k} \rfloor$ is equal to $\text{trailzeros}$; and $\text{under}$ equals $[x < \lfloor \frac{m}{2^k} \rfloor]$ (where $[\cdot]$ is the Iverson bracket). Now onto the transitions. Suppose we are adding the $(k-1)$-th digit, and let $d$ be the new digit of $x$, and $z$ be the $(k-1)$-th digit of $n$.
If $z+d = 0$, then $(\text{ans},\text{trailzeros})$ after digit $k$ transitions to $(\text{ans}, 0)$ after digit $k-1$; if $z+d = 1$, it transitions to $((\text{ans} + z + d) \bmod 2, \text{trailzeros}+1)$; if $z+d = 2$, it transitions to $((\text{ans} + z + \text{trailzeros} + 1) \bmod 2, 0)$. The final answer is the total count of values for which $\text{ans} = 1$ and $\text{under} = 1$. The above solution runs in $\Theta(\log^2 (\max))$ per query. There is a simple way to optimize this to $\Theta(\log(\max))$: note that we only need to keep the parity of $\text{trailzeros}$. There are many other digit DP approaches that give a similar time complexity; the constraints should allow most of them to pass. For the second solution, define the function $f(x) :=$ the parity of the number of set bits of $x$. We have thus reworded the statement into evaluating the expression $T = \sum_{i = 0}^{k - 1} \left[f(i) \ne f(i + n)\right]$, where $k$ denotes the query length $m$. The formula can be further transformed into $T = \sum_{i = 0}^{k - 1} f(i \oplus (i + n))$, since $\left[ f(a) \ne f(b) \right] = f(a \oplus b)$ holds for all non-negative integers $a$ and $b$. Imagine we construct a grid and assign the value at row $r$ and column $c$ to be $f(r \oplus c)$. Then $T$ is the sum of a diagonal of length $k$ which starts at either $(0, n)$ or $(n, 0)$. Without loss of generality, we use $(0, n)$ in this editorial.
The grid can be constructed similarly to the way we construct the string $S$. We start with a $1$-by-$1$ matrix $M_0=\begin{bmatrix} 0 \end{bmatrix}$. Then the matrix $M_i$ of size $2^i \times 2^i$ can be constructed as $M_i = \begin{bmatrix} M_{i - 1} & \overline {M_{i - 1}} \\ \overline{M_{i - 1}} & M_{i - 1} \end{bmatrix}$, where $\overline{M_{i - 1}}$ is the matrix $M_{i - 1}$ with flipped bits. Here is another way of constructing the grid: let $C_i$ be an infinite chess board with alternating colors, similar to a regular chess board, but with each of the cells being of size $2^i \times 2^i$. For example, $C_0$, $C_1$ and $C_2$ in an $8$-by-$8$ grid are as follows: We claim that our grid is the $\text{xor}$ of all chess boards $C_i$. The proof is easy: $C_i$ is constructed by $\text{xor}$-ing the $i$-th bit of the row and column numbers. We are therefore motivated to proceed in the following way: if we drop the least significant bit (by making it $0$), we are still solving a problem very similar to the original one, because dropping the first bit is similar to removing $C_0$. And when we shift $C_i$ to $C_{i - 1}$, it is a recursion of the same problem! Going back to the problem, we are computing the sum of a diagonal of length $k$. If $k$ is odd, we can make it even by adding the last element to the result and decreasing $k$ by one. Now $k$ is even, and we can utilize the recurrence as follows: remove $C_0$; scale the board down by $2$ (including $n$ and $k$), so that each $C_i$ becomes $C_{i - 1}$; solve the new problem; scale the board up again and add $C_0$ back.
From the result of the scaled-down problem, we can then calculate the result of the original problem. The result of the scaled-down problem is the number of $2$-by-$2$ cells with value $1$; from it, we also know the number of cells with value $0$. It is not hard to observe that the diagonal crosses the $2$-by-$2$ cells in the same way at all places; the only thing that matters is the parity of $n$. If $n$ is even, the diagonal crosses the diagonals of the $2$-by-$2$ cells, and in the scaled-down version it is still a single diagonal starting at $(0, \frac{n}{2})$. If $n$ is odd, it crosses the corners of the $2$-by-$2$ cells, and in the scaled-down version it becomes $2$ neighboring diagonals starting at $(0, \frac{n-1}{2})$ and $(0, \frac{n+1}{2})$. Also, the $2$-by-$2$ cells with values $0$ and $1$ respectively have a fixed form, which gives us everything we need to compute the result of the original problem. Overall, the number of states we have to visit is $\Theta(\log k)$.
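Both solutions rest on the popcount-parity characterization of the Thue-Morse sequence and on the identity $[f(a) \ne f(b)] = f(a \oplus b)$. A brute-force Python sketch (fine for small inputs only; the $\Theta(\log k)$ recursion in the code below is the real solution) makes these checks concrete:

```python
def f(x):
    # parity of the number of set bits; equals the x-th Thue-Morse character
    return bin(x).count("1") & 1

def diff_brute(n, m):
    # number of positions where S[0..m-1] and S[n..n+m-1] differ
    return sum(f(i) != f(i + n) for i in range(m))

# the x-th Thue-Morse character really is popcount parity
s = "0"
for _ in range(6):
    s += s.translate(str.maketrans("01", "10"))
assert all(int(s[i]) == f(i) for i in range(len(s)))

# the xor reformulation T = sum f(i xor (i + n)) gives the same counts
assert all(diff_brute(n, m) == sum(f(i ^ (i + n)) for i in range(m))
           for n in range(1, 20) for m in range(1, 20))
```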
[ "bitmasks", "divide and conquer", "dp", "math" ]
2,500
cache = {}

def popcount(n):
    res = 0
    while n:
        res += 1
        n &= n - 1
    return res

def solve(n, k):
    if k == 0:
        return 0
    if k == 1:
        return popcount(n) & 1
    if k % 2 == 1:
        # make k even: account for the last diagonal element separately
        return solve(n, k - 1) + (popcount((k - 1) ^ (n + k - 1)) & 1)
    if (n, k) in cache:
        return cache[(n, k)]
    if n % 2 == 0:
        # the diagonal crosses the diagonals of the 2-by-2 cells
        one_cell, zero_cell = 2, 0
        cnt1 = solve(n // 2, k // 2)
        cnt0 = k // 2 - cnt1
    else:
        # the diagonal crosses the corners of the 2-by-2 cells:
        # it splits into two neighboring diagonals when scaled down
        one_cell, zero_cell = 0, 1
        cnt1 = solve(n // 2, k // 2) + solve(n // 2 + 1, k // 2)
        cnt0 = k - cnt1
    res = one_cell * cnt1 + zero_cell * cnt0
    cache[(n, k)] = res
    return res

t = int(input())
for _ in range(t):
    cache.clear()
    n, k = map(int, input().split())
    print(solve(n, k))
1735
A
Working Week
Your working week consists of $n$ days numbered from $1$ to $n$, after day $n$ goes day $1$ again. And $3$ of them are days off. One of the days off is the last day, day $n$. You have to decide when the other two are. Choosing days off, you pursue two goals: - No two days should go one after the other. Note that you can't make day $1$ a day off because it follows day $n$. - Working segments framed by days off should be as dissimilar as possible in duration. More specifically, if the segments are of size $l_1$, $l_2$, and $l_3$ days long, you want to maximize $\min(|l_1 - l_2|, |l_2 - l_3|, |l_3 - l_1|)$. Output the maximum value of $\min(|l_1 - l_2|, |l_2 - l_3|, |l_3 - l_1|)$ that can be obtained.
Let $l_1 \le l_2 \le l_3$ be the working segments in sorted order. Can we explicitly say something about one of them? $l_1$ must be equal to $1$: if $l_1 > 1$, we can decrease $l_1$ by $1$ and increase $l_3$ by $1$, which does not decrease the answer. So $l_1 = 1$ and we only have to work with $l_2$ and $l_3$. Our problem can now be rewritten as: $l_2 + l_3 = n - 4$, maximize $\min(l_2 - 1, l_3 - l_2)$. And since $l_3 = n - 4 - l_2$, this is just: maximize $\min(l_2 - 1, n - 4 - 2 l_2)$. If we increase both values under the minimum by one, the optimal choices don't change: maximize $\min(l_2, (n - 3) - 2 l_2)$. If we choose $l_2 = \left\lfloor\frac{n-3}{3}\right\rfloor$, then $\min(l_2, (n - 3) - 2 l_2) = \left\lfloor\frac{n-3}{3}\right\rfloor$. If the answer were greater, we would have $l_2 > \frac{n - 3}{3}$ and $(n - 3) - 2 l_2 > \frac{n - 3}{3}$, which would give $2 l_2 + ((n - 3) - 2 l_2) > n - 3$, while in fact $2 l_2 + ((n - 3) - 2 l_2) = n - 3$. The only thing left to do is to undo the shift: the answer is $\left\lfloor\frac{n-3}{3}\right\rfloor - 1$, or simply $\left\lfloor\frac{n}{3}\right\rfloor - 2$. That was the mathematical way of solving it. Since it's pretty obvious that $l_2$ is approximately $\frac{n}{3}$, you could instead check $l_2 = \frac{n}{3} \pm 5$ and choose the best among them.
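The closed form $\lfloor n/3 \rfloor - 2$ can be cross-checked in Python against a brute force over all valid placements of the two remaining days off (a verification sketch, not the submitted solution):

```python
def brute(n):
    # days off are a, b, n with 2 <= a, a + 2 <= b, b + 2 <= n;
    # the working segments then have lengths a-1, b-a-1, n-b-1
    best = -1
    for a in range(2, n):
        for b in range(a + 2, n - 1):
            l1, l2, l3 = sorted((a - 1, b - a - 1, n - b - 1))
            best = max(best, min(l2 - l1, l3 - l2, l3 - l1))
    return best

assert all(brute(n) == n // 3 - 2 for n in range(6, 60))
```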
[ "constructive algorithms", "greedy", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    int l_2 = (n - 3) / 3;  // optimal middle segment length (shifted by one)
    cout << l_2 - 1 << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) solve();
    return 0;
}
1735
B
Tea with Tangerines
There are $n$ pieces of tangerine peel, the $i$-th of them has size $a_i$. In one step it is possible to divide one piece of size $x$ into two pieces of positive integer sizes $y$ and $z$ so that $y + z = x$. You want that for each pair of pieces, their sizes differ \textbf{strictly} less than twice. In other words, there should not be two pieces of size $x$ and $y$, such that $2x \le y$. What is the minimum possible number of steps needed to satisfy the condition?
Is there a way to cut the pieces so that the minimum piece stays the same and the condition is satisfied? What is the minimum number of operations to do it? Is there any better solution? Let's start with a simple solution: choose the minimum piece of $a$ and assume that it remains the minimum until the end. As the array is sorted, the minimum piece is $a_1$. It follows that in the end all pieces must be smaller than or equal to $2 \cdot a_1 - 1$. The lower bound on the number of resulting pieces for this solution is $\displaystyle{\sum\limits_{i=1}^{n}\left\lceil\frac{a_i}{2 \cdot a_1 - 1}\right\rceil}$ (and hence, minus $n$, on the number of steps). Let's show that this is achievable. For each piece, while its size is greater than $2 \cdot a_1 - 1$, cut off a piece of size $2 \cdot a_1 - 1$. The only problem is that we could end up with a piece smaller than $a_1$. But that would mean that before the last cut we had a piece in the range $[2 \cdot a_1, 3 \cdot a_1 - 2]$, and any piece in this range can easily be cut into two pieces of valid sizes in one move. The only remaining question is why the minimum piece in the end should have size $a_1$. Actually, it doesn't have to, but this choice gives the best answer anyway: as described above, the lower bound for the solution with a minimal final piece of size $x$ is $\displaystyle{\sum\limits_{i=1}^{n}\left\lceil\frac{a_i}{2 \cdot x - 1}\right\rceil}$, and for all $x < a_1$ this bound is equal or greater, so we can't get anything better.
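The lower bound and the constructive argument can both be checked with a short Python sketch (illustrative only): splitting each piece into $\lceil a_i / (2a_1 - 1) \rceil$ near-equal parts lands every size in $[a_1, 2a_1-1]$, which matches the formula $\sum \lceil a_i/(2a_1-1)\rceil - n$ for the number of cuts.

```python
def min_steps(a):
    """a must be sorted ascending, as in the problem input."""
    cap = 2 * a[0] - 1                     # largest allowed piece size
    return sum((x - 1) // cap for x in a)  # = sum(ceil(x / cap)) - len(a)

def constructive_check(a):
    cap = 2 * a[0] - 1
    cuts, sizes = 0, []
    for x in a:
        q = -(-x // cap)                   # ceil division: number of parts
        cuts += q - 1
        base, extra = divmod(x, q)         # near-equal split of x into q parts
        sizes += [base + 1] * extra + [base] * (q - extra)
    assert cuts == min_steps(a)
    assert 2 * min(sizes) > max(sizes)     # every pair differs less than twice
    return cuts

assert constructive_check([1, 2, 3]) == 3
assert constructive_check([5, 10]) == 1
assert constructive_check([1000000]) == 0
```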
[ "greedy", "math" ]
900
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    long long ans = 0;  // long long: the total number of cuts can be large
    for (auto &i : a) {
        cin >> i;
        // cuts for this piece: ceil(i / (2*a[0] - 1)) - 1
        ans += (i - 1) / (2 * a[0] - 1);
    }
    cout << ans << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) solve();
    return 0;
}
1735
C
Phase Shift
There was a string $s$ which was supposed to be encrypted. For this reason, all $26$ lowercase English letters were arranged in a circle in some order; afterwards, each letter in $s$ was replaced with the one that follows it in clockwise order, and in that way the string $t$ was obtained. You are given a string $t$. Determine the lexicographically smallest string $s$ that could be a prototype of the given string $t$. A string $a$ is lexicographically smaller than a string $b$ of the same length if and only if: - in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears earlier in the alphabet than the corresponding letter in $b$.
What is the first letter in the answer? It is `a' if $t$ doesn't start with `a', and `b' otherwise. Ask the same question as in Hint 1 for each position. When can't we choose the minimum unused letter? When it would form a circle of size less than $26$; maintain any structure to check this. First of all, the encryption process is reversible: if we obtained $t$ from $s$ using the circle $c$, we can obtain $s$ from $t$ using the same circle $c$, but reversed. So, let's think in terms of encryption of $t$. Lexicographical order itself is a greedy thing, so we can create a greedy algorithm: go from left to right and generate the result letter by letter, choosing the best possible option at each step. Let's describe the options we have. If the current letter was used earlier, we already know the replacement we need to choose. Otherwise, we would like to choose the minimum possible option, and we need to maintain some structure to know what is acceptable. Let's keep the circle that is already generated (it's a graph). In the end, each letter has one incoming edge and one outgoing edge; let's store them for every letter in arrays $in[26]$ and $out[26]$. When we want to generate an outgoing edge at some step (let's call the letter on this step $x$), we have to choose the minimum letter that doesn't have an incoming edge yet, with one exception: creating the edge by this rule must not close a circle of size less than $26$, since that would mean we don't get one full circle in the end. It's easy to see that there is at most one such forbidden letter, as it is just the end of the chain starting at $x$. To check that a small circle isn't created, we can walk along outgoing edges $26$ times starting at $x$: if we end up back at $x$ only after exactly $26$ steps, or there was no edge at some step, then everything is ok and we can create this edge. The complexity is $\mathcal{O}(26 \cdot 26 + n)$, that is, $\mathcal{O}(n)$.
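The greedy in the C++ code below translates almost line for line into Python; this sketch (with hypothetical helper names) may make the chain/cycle bookkeeping easier to follow.

```python
def decrypt(t):
    # edge[c]  = letter that c maps to in the reversed (decryption) circle
    # redge[c] = letter mapping to c
    edge, redge = [-1] * 26, [-1] * 26

    def path_end(c):
        # follow outgoing edges from c; return the chain end and its length
        length = 0
        while edge[c] != -1:
            c = edge[c]
            length += 1
        return c, length

    out = []
    for ch in t:
        x = ord(ch) - ord("a")
        if edge[x] == -1:
            # smallest target with no incoming edge that does not close
            # a cycle shorter than 26
            for c in range(26):
                if redge[c] == -1:
                    end, length = path_end(c)
                    if end != x or length == 25:
                        edge[x], redge[c] = c, x
                        break
        out.append(chr(edge[x] + ord("a")))
    return "".join(out)

assert decrypt("a") == "b"    # 'a' cannot map to itself
assert decrypt("ba") == "ac"  # then a->b is refused: b->a->b would be a 2-cycle
```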
[ "dfs and similar", "dsu", "graphs", "greedy", "implementation", "strings" ]
1,400
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    string t;
    cin >> n >> t;
    vector<int> edge(26, -1);   // outgoing edge in the decryption circle
    vector<int> redge(26, -1);  // incoming edge
    // follow outgoing edges from c to the end of its chain
    auto get_path_end = [&](int c) {
        int len = 0;
        int cur = c;
        while (edge[cur] != -1) len++, cur = edge[cur];
        return make_pair(cur, len);
    };
    vector<int> vec;
    for (auto c : t) vec.push_back(c - 'a');
    for (int i = 0; i < n; i++) {
        if (edge[vec[i]] == -1) {
            // greedily pick the smallest letter that does not close a short cycle
            for (int c = 0; c < 26; c++)
                if (redge[c] == -1) {
                    auto [clast, len] = get_path_end(c);
                    if (clast != vec[i] || len == 25) {
                        edge[vec[i]] = c;
                        redge[c] = vec[i];
                        break;
                    }
                }
        }
        vec[i] = edge[vec[i]];
    }
    for (int i = 0; i < n; i++) t[i] = vec[i] + 'a';
    cout << t << '\n';
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t = 1;
    cin >> t;
    while (t--) solve();
}
1735
D
Meta-set
You like the card board game "Set". Each card contains $k$ features, each of which is equal to a value from the set $\{0, 1, 2\}$. The deck contains all possible variants of cards, that is, there are $3^k$ different cards in total. A feature for three cards is called good if it is the same for these cards or pairwise distinct. Three cards are called a set if all $k$ features are good for them. For example, the cards $(0, 0, 0)$, $(0, 2, 1)$, and $(0, 1, 2)$ form a set, but the cards $(0, 2, 2)$, $(2, 1, 2)$, and $(1, 2, 0)$ do not, as, for example, the last feature is not good. A group of \textbf{five} cards is called a meta-set if there is strictly more than one set among them. How many meta-sets are there among the given $n$ distinct cards?
How many sets can fit in $5$ cards? At most two. If there are two sets among $5$ cards, there will be a central card. Consider each card as a central card. For every two cards, there is always a single card that forms a set with them. [1] That means that two sets can share at most one card. Let's prove that there are no more than $2$ sets in a meta-set. Define the $5$ cards as $c_1, c_2, c_3, c_4, c_5$ and suppose $(c_1, c_2, c_3)$ is a set. Any other set can have at most one card among $(c_1, c_2, c_3)$ (according to [1]), so it must include $c_4$ and $c_5$. So we have at most one other set; otherwise two such sets would share two cards, which is prohibited according to [1]. So, every meta-set looks like $2$ sets with one common card. Let's call this card a central card. Now it is just simple combinatorics. For each card, we want to know the number of sets that include it. If this number is $s$, then we should add $\frac{s(s-1)}{2}$ to the answer - it is the number of meta-sets with this card as the central card. To get the number of sets for each card, we can iterate over all pairs of cards $(i, j)$, generate the complement to a set, and add $1$ for that card in a map/hashmap. Complexity is $\mathcal{O}(kn^2\log(n))$ with a map or $\mathcal{O}(kn^2)$ with a hashmap.
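The complement step can be sketched as follows (hypothetical helper name; the solution code below does the same in `get_comp`). For each feature, if the two cards agree the third must agree too, otherwise it takes the remaining value; both cases are covered by $(6 - a_i - b_i) \bmod 3$.

```cpp
#include <cassert>
#include <vector>

// The unique third card completing a set with cards a and b.
// If a_i == b_i, then (6 - 2*a_i) % 3 == a_i; if they differ,
// the formula yields the remaining third value.
std::vector<int> complement_card(const std::vector<int>& a,
                                 const std::vector<int>& b) {
    std::vector<int> res(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        res[i] = (6 - a[i] - b[i]) % 3;
    return res;
}
```

The statement's example checks out: the third card of the set containing $(0, 0, 0)$ and $(0, 2, 1)$ is $(0, 1, 2)$.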
[ "brute force", "combinatorics", "data structures", "hashing", "math" ]
1,700
#include <bits/stdc++.h> using namespace std; #define F first #define S second typedef long long ll; typedef long double ld; typedef pair<ll, ll> pll; typedef pair<int, int> pii; const long long kk = 1000; const long long ml = kk * kk; const long long mod = ml * kk + 7; const long long inf = ml * ml * ml + 7; #ifdef DEBUG mt19937 rng(1033); #else mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); #endif int rnd(int mod) { return uniform_int_distribution<int>(0, mod - 1)(rng); } bool viv = false; int n, k; vector<vector<int>> v; vector<int> get_comp(vector<int> a, vector<int> b) { vector<int> res(k); for (int i = 0; i < k; i++) res[i] = (6 - (a[i] + b[i])) % 3; return res; } void solve() { cin >> n >> k; v.resize(n); for (auto &vec : v) { vec.resize(k); for (auto &i : vec) cin >> i; } map<vector<int>, int> cnt; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { auto comp = get_comp(v[i], v[j]); cnt[comp]++; } } ll ans = 0; for (auto vec : v) { ans += (ll)cnt[vec] * (cnt[vec] - 1) / 2; } cout << ans << '\n'; } int main() { // viv = true; ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); cout << fixed << setprecision(20); int t = 1; // cin >> t; while (t--) solve(); #ifdef DEBUG cerr << "Runtime is: " << clock() * 1.0 / CLOCKS_PER_SEC << endl; #endif }
1735
E
House Planning
There are $n$ houses in your city arranged on an axis at points $h_1, h_2, \ldots, h_n$. You want to build a new house for yourself and consider two options where to place it: points $p_1$ and $p_2$. As you like visiting friends, you have calculated in advance the distances from both options to all existing houses. More formally, you have calculated two arrays $d_1$, $d_2$: $d_{i, j} = \left|p_i - h_j\right|$, where $|x|$ denotes the absolute value of $x$. After a long time of inactivity you have forgotten the locations of the houses $h$ and the options $p_1$, $p_2$. But your diary still keeps the two arrays $d_1$, $d_2$, whose authenticity you doubt. Also, the values inside each array could be \textbf{shuffled}, so values at the same positions of $d_1$ and $d_2$ may correspond to different houses. Pay attention that values from one array could not get into the other; in other words, all values in the array $d_1$ correspond to the distances from $p_1$ to the houses, and all values in the array $d_2$ to the distances from $p_2$. Also pay attention that the locations of the houses $h_i$ and the considered options $p_j$ could coincide. For example, the following locations are correct: $h = \{1, 0, 3, 3\}$, $p = \{1, 1\}$, which could correspond to the already shuffled $d_1 = \{0, 2, 1, 2\}$, $d_2 = \{2, 2, 1, 0\}$. Check whether there are locations of houses $h$ and considered points $p_1$, $p_2$ for which the found arrays of distances would be correct. If it is possible, find appropriate locations of houses and considered options.
How many possible options are there for the distance between $p_1$ and $p_2$? We can limit it to $2 \cdot n$ options: consider $d_1[1] + d_2[i]$ and $|d_1[1] - d_2[i]|$. Solve each one in (almost) linear time. Consider the biggest distance among $d_1$ and $d_2$. Can we match it with something? Remove such distances one by one while they exceed the distance between $p_1$ and $p_2$; then the problem is trivial. Let's assume that the considered point $p_1$ is to the left of the considered point $p_2$, and that we know the distance $l$ between them. Let's show how to solve the problem for a fixed $l$ in (almost) linear time. As long as there is a value greater than $l$, take the largest among them (call it $x$) and assume it is from $d_1$. It's easy to see that this house is to the right of the considered point $p_2$ (because its largest distance is to the point $p_1$). It means that we can match distance $x$ from $d_1$ with distance $x - l$ from $d_2$. When there is no value greater than $l$, all remaining houses are located between the considered points, and we can match them by sorting. That is the $\mathcal{O}(n \log n)$ solution for a fixed $l$. Now let's limit the possible options for $l$ to $\mathcal{O}(n)$. If some house has distances $x$ and $y$ to the considered options, then there are $2$ options for $l$: $x + y$ and $|x - y|$. The house matched with $d_1[1]$ gives the $2 \cdot n$ options $d_1[1] + d_2[i]$ and $|d_1[1] - d_2[i]|$. Complexity is $\mathcal{O}(n^2 \log n)$.
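A feasibility-only sketch of the fixed-$l$ check (our own function name; unlike `work` in the solution below, it does not reconstruct positions):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <set>
#include <vector>

// While some distance exceeds l, its house lies outside the segment
// [p1, p2], so it must pair with a distance smaller by exactly l in
// the other array. The remaining houses lie between the points and
// must pair up so that the two distances sum to l. Assumes
// d1.size() == d2.size().
bool feasible(const std::vector<int>& d1, const std::vector<int>& d2, int l) {
    std::multiset<int> s1(d1.begin(), d1.end()), s2(d2.begin(), d2.end());
    while (!s1.empty() && std::max(*s1.rbegin(), *s2.rbegin()) > l) {
        bool from_s1 = *s1.rbegin() > *s2.rbegin();
        std::multiset<int>& far = from_s1 ? s1 : s2;
        std::multiset<int>& near = from_s1 ? s2 : s1;
        auto it = near.find(*far.rbegin() - l);
        if (it == near.end()) return false;      // nothing to pair with
        far.erase(std::prev(far.end()));
        near.erase(it);
    }
    std::vector<int> a(s1.begin(), s1.end()), b(s2.begin(), s2.end());
    std::sort(a.begin(), a.end());
    std::sort(b.rbegin(), b.rend());
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] + b[i] != l) return false;      // not between p1 and p2
    return true;
}
```

On the statement's example ($d_1 = \{0, 2, 1, 2\}$, $d_2 = \{2, 2, 1, 0\}$) the distance $l = 0$ is feasible.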
[ "constructive algorithms", "data structures", "graph matchings", "greedy" ]
2,400
#include <bits/stdc++.h> using namespace std; #define F first #define S second typedef long long ll; typedef long double ld; typedef pair<ll, ll> pll; typedef pair<int, int> pii; const long long kk = 1000; const long long ml = kk * kk; const long long mod = ml * kk + 7; const long long inf = ml * ml * ml + 7; #ifdef DEBUG mt19937 rng(1033); #else mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); #endif int rnd(int mod) { return uniform_int_distribution<int>(0, mod - 1)(rng); } bool viv = false; int n; vector<int> d1, d2; bool work(int points_diff) { int p1 = 0; int p2 = points_diff; multiset<int, greater<int>> s1, s2; for (auto i : d1) s1.insert(i); for (auto i : d2) s2.insert(i); auto farthest = [] (multiset<int, greater<int>> &s) { return s.empty() ? -1 : *s.begin(); }; auto farthest_both = [&]() { return max(farthest(s1), farthest(s2)); }; vector<int> ans; while (farthest_both() > points_diff) { bool choose_s1 = farthest(s1) > farthest(s2); auto &s_far = choose_s1 ? s1 : s2; auto &s_near = choose_s1 ? 
s2 : s1; int value = *s_far.begin(); int complem = value - points_diff; if (!s_near.count(complem)) return false; s_far.erase(s_far.find(value)); s_near.erase(s_near.find(complem)); if (choose_s1) ans.push_back(p1 + value); else ans.push_back(p2 - value); } vector<int> left1, left2; for (auto i : s1) left1.push_back(i); for (auto i : s2) left2.push_back(i); sort(left1.begin(), left1.end()); sort(left2.rbegin(), left2.rend()); for (int i = 0; i < left1.size(); i++) if (left1[i] + left2[i] != points_diff) return false; for (auto i : left1) ans.push_back(i); sort(ans.begin(), ans.end()); int sh = max(-ans[0], 0); p1 += sh, p2 += sh; for (auto &i : ans) i += sh; cout << "YES\n"; for (auto i : ans) cout << i << ' '; cout << '\n'; cout << p1 << ' ' << p2 << '\n'; return true; } void solve() { cin >> n; d1.resize(n); d2.resize(n); for (auto &d : d1) cin >> d; for (auto &d : d2) cin >> d; int dist1 = d1[0]; for (auto dist2 : d2) { if (work(dist1 + dist2)) return; if (work(abs(dist1 - dist2))) return; } cout << "NO\n"; } int main() { // viv = true; ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); cout << fixed << setprecision(20); int t = 1; cin >> t; while (t--) solve(); #ifdef DEBUG cerr << "Runtime is: " << clock() * 1.0 / CLOCKS_PER_SEC << endl; #endif }
1735
F
Pebbles and Beads
There are two currencies: pebbles and beads. Initially you have $a$ pebbles, $b$ beads. There are $n$ days, each day you can exchange one currency for another at some exchange rate. On day $i$, you can exchange $-p_i \leq x \leq p_i$ pebbles for $-q_i \leq y \leq q_i$ beads or vice versa. It's allowed not to perform an exchange at all. Meanwhile, if you perform an exchange, the proportion $x \cdot q_i = -y \cdot p_i$ must be fulfilled. Fractional exchanges are allowed. You can perform no more than one such exchange in one day. The numbers of pebbles and beads you have must always remain non-negative. Please solve the following $n$ problems independently: for each day $i$, output the maximum number of pebbles that you can have at the end of day $i$ if you perform exchanges optimally.
Draw the currencies on the 2D plane and assume that you can throw out any amount of money at any moment. What will the area of possible points look like? It will look like a convex polygon in the upper-right quarter; keep its edges. How does the structure change when a new day comes? A new segment is added, a prefix of the old segments is shifted by a vector, and the remaining suffix of the old segments is shifted by the opposite vector. Let's draw currencies on the 2D plane: having $x$ pebbles and $y$ beads is described as the point $(x, y)$. Let's assume that we can throw out any amount of money at any moment. In this case, the area of possible points can be described as a convex polygon in the upper-right quarter. Initially it is the rectangle $[(0, 0), (0, b), (a, b), (a, 0)]$. At any moment, this polygon can be described as a list of segments starting at point $(0, y_0)$ and finishing at point $(x_k, 0)$; in the rectangle described above, there are $2$ segments. Let's keep those segments sorted by angle. When a new day comes, each point can be shifted by the vector $c\cdot(p_i, -q_i)\ \forall c \in [-1, 1]$ if the new point has non-negative coordinates. If we forget about the new points having to be non-negative, what do the new segments look like? We just have to add the new segment $(2 \cdot p_i, -2 \cdot q_i)$ and shift a prefix of the old segments by $(-p_i, q_i)$ and the remaining suffix by $(p_i, -q_i)$. Then the only thing left to do is to cut segments so that our polygon stays non-negative. Sounds great, but sounds like $\mathcal{O}(n^2)$. Do we need to maintain the segments explicitly? No! Let's just keep the set of their lengths and angles; knowledge of the extreme points $(0, y_0)$ and $(x_k, 0)$ is enough. So we need to: insert a new segment into the set (you need just its length and angle); shift the extreme points $(0, y_0)$ and $(x_k, 0)$ by $(-p_i, q_i)$ and $(p_i, -q_i)$ correspondingly; delete or cut the last and first segments while they are out of the non-negative area. Complexity is $\mathcal{O}(n \log n)$.
There is another simple $\mathcal{O}(n^2 \log n)$ approach: keep the area as a polygon; at each step, create two copies shifted by the corresponding vectors, build the convex hull of them, and cut this convex hull to fit the non-negative area. It won't fit into the time limit, and is mentioned just for fun.
[ "data structures", "geometry" ]
2,900
#include <bits/stdc++.h> using namespace std; #define F first #define S second typedef long long ll; typedef long double ld; typedef pair<ll, ll> pll; typedef pair<int, int> pii; const long long kk = 1000; const long long ml = kk * kk; const long long mod = ml * kk + 7; const long long inf = ml * ml * ml + 7; const ld eps = 1e-9; #ifdef DEBUG mt19937 rng(1033); #else mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); #endif int rnd(int mod) { return uniform_int_distribution<int>(0, mod - 1)(rng); } int n, a, b; vector<int> p, q; struct Seg { ll init_x, init_y; int num; ld x, y; Seg() {} Seg(int p, int q, int num): init_x(q), init_y(-p), num(num) { x = init_x; y = init_y; } friend bool operator<(const Seg &s1, const Seg &s2) { if (s1.init_x * s2.init_y == s1.init_y * s2.init_x) return s1.num < s2.num; return s1.init_x * s2.init_y < s1.init_y * s2.init_x; } Seg operator*=(ld ratio) { x *= ratio; y *= ratio; return *this; } void show() { cout << "Seg(" << x << ' ' << y << ")"; } }; struct Point { ld x, y; Point() {} Point(int a, int b): x(b), y(a) {} Point operator+=(const Point &v) { x += v.x; y += v.y; return *this; } Point operator-=(const Point &v) { x -= v.x; y -= v.y; return *this; } Point operator+=(const Seg &v) { x += v.x; y += v.y; return *this; } Point operator-=(const Seg &v) { x -= v.x; y -= v.y; return *this; } void show() { cout << "Point(" << x << ' ' << y << ")"; } }; struct Stock { Point best_a; Point best_b; multiset<Seg> segs; Stock(int a, int b) { best_a = Point(a, 0); best_b = Point(0, b); Seg seg_1 = Seg(a, 0, -2); Seg seg_2 = Seg(0, b, -1); if (a) segs.insert(seg_1); if (b) segs.insert(seg_2); } void add_seg(int p, int q, int num) { Seg new_seg = Seg(2 * p, 2 * q, num); segs.insert(new_seg); Point shift(p, -q); best_a += shift; best_b -= shift; cut_left(); cut_down(); } void cut_left() { while (best_a.x < 0) { Seg l_seg = *segs.begin(); Point new_best_a = best_a; new_best_a += l_seg; if (new_best_a.x > eps) { ld ratio = 
new_best_a.x / l_seg.x; Seg l_seg_left = l_seg; l_seg_left *= ratio; segs.erase(segs.find(l_seg)); segs.insert(l_seg_left); new_best_a -= l_seg_left; new_best_a.x = max(new_best_a.x, (ld)0); } else { segs.erase(segs.begin()); } best_a = new_best_a; } } void cut_down() { while (best_b.y < 0) { Seg d_seg = *segs.rbegin(); Point new_best_b = best_b; new_best_b -= d_seg; if (new_best_b.y > eps) { ld ratio = new_best_b.y / -d_seg.y; Seg d_seg_left = d_seg; d_seg_left *= ratio; segs.erase(segs.find(d_seg)); segs.insert(d_seg_left); new_best_b += d_seg_left; new_best_b.y = max(new_best_b.y, (ld)0); } else { segs.erase(segs.find(d_seg)); } best_b = new_best_b; } } void print_best() { cout << best_a.y << '\n'; } void show() { cout << "----\tStock\t----\n"; cout << "best_a = "; best_a.show(); cout << endl; Point cur = best_a; for (auto seg : segs) { cur += seg; cout << "\t"; cur.show(); cout << endl; } cout << "best_b = "; best_b.show(); cout << endl; cout << "----\tEnd\t----\n\n"; } }; void solve() { cin >> n >> a >> b; p.resize(n); q.resize(n); for (int i = 0; i < n; i++) cin >> p[i]; for (int i = 0; i < n; i++) cin >> q[i]; Stock st(a, b); for (int i = 0; i < n; i++) { st.add_seg(p[i], q[i], i); st.print_best(); } } int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); cout << fixed << setprecision(20); int t = 1; cin >> t; while (t--) solve(); #ifdef DEBUG cerr << "Runtime is: " << clock() * 1.0 / CLOCKS_PER_SEC << endl; #endif }
1736
A
Make A Equal to B
You are given two arrays $a$ and $b$ of $n$ elements, each element is either $0$ or $1$. You can make operations of $2$ kinds. - Pick an index $i$ and change $a_i$ to $1-a_i$. - Rearrange the array $a$ however you want. Find the minimum number of operations required to make $a$ equal to $b$.
It is easy to observe that the second operation needs to be performed at most once. Now, we just need to check $2$ cases, one in which the re-arrangement operation is used, and one in which it is not. If the re-arrangement operation is to be used, then we just need to make the counts of $0$s and $1$s in $a$ equal to that of $b$. Without loss of generality assume $a$ contains $x$ more $0$s than $b$, then the cost in this case will just be $x + 1$ (extra one for re-arrangement cost). If the re-arrangement operation is not to be used, then we just need to make each element of $a$ equal to the corresponding element of $b$. Finally, our answer is the smaller cost of these $2$ cases. Time complexity is $O(n)$.
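The two cases can be sketched in a few lines (hypothetical helper name, not the submitted solution). `balance` is the surplus of $1$s in $a$ over $b$, so $|balance|$ flips plus one rearrangement equalize the counts; `mismatch` is the cost without rearranging.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Minimum operations: either flip every mismatching position, or pay
// 1 for a rearrangement plus one flip per surplus element.
int min_ops(const std::vector<int>& a, const std::vector<int>& b) {
    int mismatch = 0, balance = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        mismatch += (a[i] != b[i]);
        balance += a[i] - b[i];
    }
    return std::min(mismatch, 1 + std::abs(balance));
}
```

For $a = [1, 0]$, $b = [0, 1]$ the counts already match, so one rearrangement beats two flips.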
[ "brute force", "greedy", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; ll sum=0,ans=0; vector<ll> a(n),b(n); for(auto &it:a){ cin>>it; sum+=it; } for(auto &it:b){ cin>>it; sum-=it; } for(ll i=0;i<n;i++){ ans+=(a[i]^b[i]); } ans=min(ans,1+abs(sum)); cout<<ans<<"\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll t; cin>>t; while(t--){ solve(); } }
1736
B
Playing with GCD
You are given an integer array $a$ of length $n$. Does there exist an array $b$ consisting of $n+1$ positive integers such that $a_i=\gcd (b_i,b_{i+1})$ for all $i$ ($1 \leq i \leq n$)? Note that $\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$.
Take $a_0 = a_{n+1} = 1$. Now take $b_i=lcm(a_{i-1},a_i)$ for $1 \leq i \leq n+1$. If $b$ gives us $a$ after performing the $\gcd$ operations, then the answer is YES, otherwise the answer is NO. (When the answer is NO, we get a case like $\gcd(b_i, b_{i + 1}) = k \cdot a_i$, where $k > 1$, for some $i$.) Suppose $c$ is some valid array which gives us $a$. Then $c_i$ must be divisible by $b_i$. This means $\gcd(c_i, c_{i+1}) \geq \gcd(b_i, b_{i + 1})$. So, if $\gcd(b_i, b_{i + 1}) > a_i$ for any $i$, we should also have $\gcd(c_i, c_{i+1}) > a_i$. This implies that no $c$ is valid if $b$ is not valid. Time complexity is $O(n \cdot \log(b_{\max}))$.
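The whole check can be sketched as follows (hypothetical function name; uses C++17 `std::gcd` from `<numeric>`):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

typedef long long ll;

// Build the candidate b with b_i = lcm(a_{i-1}, a_i), taking
// a_0 = a_{n+1} = 1; the answer is YES iff b reproduces a under
// adjacent gcds. lcm is computed as prev / gcd * cur to avoid
// overflow in the intermediate product.
bool exists_b(const std::vector<ll>& a) {
    int n = (int)a.size();
    std::vector<ll> b(n + 1);
    for (int i = 0; i <= n; ++i) {
        ll prev = (i == 0) ? 1 : a[i - 1];
        ll cur = (i == n) ? 1 : a[i];
        b[i] = prev / std::gcd(prev, cur) * cur;
    }
    for (int i = 0; i < n; ++i)
        if (std::gcd(b[i], b[i + 1]) != a[i]) return false;
    return true;
}
```

For $a = [2, 1, 2]$ the candidate is $b = [2, 2, 2, 2]$, which fails at $a_2 = 1$, so the answer is NO.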
[ "math", "number theory" ]
1,200
#include <bits/stdc++.h> using namespace std; #define ll long long ll lcm(ll a,ll b){ ll g=__gcd(a,b); return (a*b/g); } void solve(){ ll n; cin>>n; vector<ll> a(n+2,1); for(ll i=1;i<=n;i++){ cin>>a[i]; } vector<ll> b(n+2,1); for(ll i=1;i<=n+1;i++){ b[i]=lcm(a[i],a[i-1]); } for(ll i=1;i<=n;i++){ if(__gcd(b[i],b[i+1])!=a[i]){ cout<<"NO\n"; return; } } cout<<"YES\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll t; cin>>t; while(t--){ solve(); } }
1736
C1
Good Subarrays (Easy Version)
This is the easy version of this problem. In this version, we do not have queries. Note that we have multiple test cases in this version. You can make hacks only if both versions of the problem are solved. An array $b$ of length $m$ is good if for all $i$ the $i$-th element is greater than or equal to $i$. In other words, $b$ is good if and only if $b_i \geq i$ for all $i$ ($1 \leq i \leq m$). You are given an array $a$ consisting of $n$ positive integers. Find the number of pairs of indices $(l, r)$, where $1 \le l \le r \le n$, such that the array $[a_l, a_{l+1}, \ldots, a_r]$ is good.
Suppose $l[i]$ represents the leftmost point such that the subarray $a[l[i],i]$ is good. Notice that the array $l$ is non-decreasing. So suppose $dp[i]$ denotes the length of the longest good subarray which ends at index $i$. Take $dp[0]=0$. Now $dp[i]=\min(dp[i-1]+1,a[i])$. Suppose $a[i] \geq dp[i-1]+1$. We claim that $dp[i]=dp[i-1]+1$. We know $a[i-dp[i-1],i-1]$ is good. Now if we look at the array $b=a[i-dp[i-1],i]$, we have $b_j \geq j$ for $1 \leq j \leq dp[i-1]$. For $b$ to be good, the last element of $b$ (which is $a[i]$) should be greater than or equal to $dp[i-1]+1$, which is consistent with our supposition. So $b$ is good. We can similarly cover the case when $a[i] < dp[i-1]+1$. So our answer is $\sum_{i=1}^{n} dp[i]$. Time complexity is $O(n)$.
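The whole recurrence collapses to a running scalar, as in this sketch (our own function name, equivalent to the solution below):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

typedef long long ll;

// dp is the length of the longest good subarray ending at the current
// index; it is also the number of good subarrays ending there, so the
// answer is the sum of dp over all indices.
ll count_good(const std::vector<ll>& a) {
    ll ans = 0, dp = 0;
    for (ll x : a) {
        dp = std::min(dp + 1, x);
        ans += dp;
    }
    return ans;
}
```

For $a = [1, 2, 3]$ all $6$ subarrays are good; for $a = [1, 1, 1]$ only the $3$ single-element ones are.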
[ "binary search", "data structures", "dp", "two pointers" ]
1,300
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; vector<ll> dp(n+5,0); ll ans=0; for(ll i=1;i<=n;i++){ ll x; cin>>x; dp[i]=min(dp[i-1]+1,x); ans+=dp[i]; } cout<<ans<<"\n"; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll t; cin>>t; while(t--){ solve(); } }
1736
C2
Good Subarrays (Hard Version)
This is the hard version of this problem. In this version, we have queries. Note that we do not have multiple test cases in this version. You can make hacks only if both versions of the problem are solved. An array $b$ of length $m$ is good if for all $i$ the $i$-th element is greater than or equal to $i$. In other words, $b$ is good if and only if $b_i \geq i$ for all $i$ ($1 \leq i \leq m$). You are given an array $a$ consisting of $n$ positive integers, and you are asked $q$ queries. In each query, you are given two integers $p$ and $x$ ($1 \leq p,x \leq n$). You have to do $a_p := x$ (assign $x$ to $a_p$). In the updated array, find the number of pairs of indices $(l, r)$, where $1 \le l \le r \le n$, such that the array $[a_l, a_{l+1}, \ldots, a_r]$ is good. Note that all queries are \textbf{independent}, which means after each query, the initial array $a$ is restored.
Let us continue the idea of C1. Suppose $track[i]$ denotes $\sum_{j=i}^{n} dp[j]$ computed under the assumption $dp[i]=a[i]$. We can precalculate the array $track$. Now suppose $a_p$ is changed to $x$ and $adp[i]$ denotes the length of the longest good subarray which ends at index $i$ in the updated array. It is easy to see that $adp[i]=dp[i]$ for $1 \leq i < p$. Now let $q$ be the smallest index greater than $p$ such that $adp[q]=a[q]$ (it might be the case that no such $q$ exists, which can be handled similarly). So we have $3$ ranges to deal with - $(1,p-1)$, $(p,q-1)$ and $(q,n)$. Now $\sum_{i=1}^{p-1} adp[i] = \sum_{i=1}^{p-1} dp[i]$ (which can be stored as a prefix sum). Also $\sum_{i=q}^{n} adp[i] = track[q]$. We are only left with the range $(p,q-1)$. An interesting observation is that $adp[i]=adp[i-1]+1$ for $p < i < q$. This approach can be implemented neatly in many ways (one way is to answer the queries offline). Time complexity is $O(n \cdot \log(n))$.
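For reference, the semantics of a single query can be pinned down by recomputing the easy-version recurrence from scratch on the updated array in $O(n)$; the editorial's prefix sums, `track` array and binary search reproduce exactly these values without the linear scan (hypothetical helper name):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

typedef long long ll;

// Brute-force answer to one query (a_p := x); queries are independent,
// so we work on a copy of the array. p is 1-based, as in the statement.
ll query_bruteforce(std::vector<ll> a, int p, ll x) {
    a[p - 1] = x;
    ll ans = 0, dp = 0;
    for (ll v : a) {
        dp = std::min(dp + 1, v);
        ans += dp;
    }
    return ans;
}
```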
[ "binary search", "data structures", "dp", "two pointers" ]
2,400
#include <bits/stdc++.h> #include <ext/pb_ds/tree_policy.hpp> #include <ext/pb_ds/assoc_container.hpp> using namespace __gnu_pbds; using namespace std; #define ll long long const ll INF_MUL=1e13; const ll INF_ADD=1e18; #define pb push_back #define mp make_pair #define nline "\n" #define f first #define s second #define pll pair<ll,ll> #define all(x) x.begin(),x.end() #define vl vector<ll> #define vvl vector<vector<ll>> #define vvvl vector<vector<vector<ll>>> #ifndef ONLINE_JUDGE #define debug(x) cerr<<#x<<" "; _print(x); cerr<<nline; #else #define debug(x); #endif const ll MOD=1e9+7; const ll MAX=100010; ll cal(ll n){ ll now=(n*(n+1))/2; return now; } class ST { public: vector<ll> segs; ll size = 0; ll ID = INF_ADD; ST(ll sz) { segs.assign(2 * sz, ID); size = sz; } ll comb(ll a, ll b) { return min(a, b); } void upd(ll idx, ll val) { segs[idx += size] = val; for(idx /= 2; idx; idx /= 2) segs[idx] = comb(segs[2 * idx], segs[2 * idx + 1]); } ll query(ll l, ll r) { ll lans = ID, rans = ID; for(l += size, r += size + 1; l < r; l /= 2, r /= 2) { if(l & 1) lans = comb(lans, segs[l++]); if(r & 1) rans = comb(segs[--r], rans); } return comb(lans, rans); } }; void solve(){ ll n; cin>>n; vector<ll> a(n+5,0),use(n+5,0),pref(n+5,0); for(ll i=1;i<=n;i++){ cin>>a[i]; use[i]=min(use[i-1]+1,a[i]); pref[i]=pref[i-1]+use[i]; } ST segtree(n+5); auto get=[&](ll l,ll r,ll till,ll pos,ll tar){ while(l<=r){ ll mid=(l+r)/2; if(segtree.query(pos,mid)>=tar){ till=mid,l=mid+1; } else{ r=mid-1; } } return till; }; vector<ll> track(n+5,0); for(ll i=n;i>=1;i--){ segtree.upd(i,a[i]-i); ll till=get(i,n,i,i,a[i]-i); track[i]=track[till+1]+cal(a[i]+till-i)-cal(a[i]-1); } ll q; cin>>q; while(q--){ ll p,x; cin>>p>>x; ll target=min(x,use[p-1]+1); ll till=get(p+1,n,p,p+1,target-p); ll ans=pref[p-1]+track[till+1]+cal(target+till-p)-cal(target-1); cout<<ans<<nline; } return; } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); #ifndef ONLINE_JUDGE freopen("input.txt", "r", stdin); 
freopen("output.txt", "w", stdout); freopen("error.txt", "w", stderr); #endif ll test_cases=1; //cin>>test_cases; while(test_cases--){ solve(); } cout<<fixed<<setprecision(10); cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n"; }
1736
D
Equal Binary Subsequences
Everool has a binary string $s$ of length $2n$. Note that a binary string is a string consisting of only characters $0$ and $1$. He wants to partition $s$ into two \textbf{disjoint equal} subsequences. He needs your help to do it. You are allowed to do the following operation \textbf{exactly} once. - You can choose any subsequence (\textbf{possibly empty}) of $s$ and rotate it right by one position. In other words, you can select a sequence of indices $b_1, b_2, \ldots, b_m$, where $1 \le b_1 < b_2 < \ldots < b_m \le 2n$. After that you \textbf{simultaneously} set $$s_{b_1} := s_{b_m},$$ $$s_{b_2} := s_{b_1},$$ $$\ldots,$$ $$s_{b_m} := s_{b_{m-1}}.$$ Can you partition $s$ into two \textbf{disjoint equal} subsequences after performing the allowed operation \textbf{exactly} once? A partition of $s$ into two disjoint equal subsequences $s^p$ and $s^q$ is two \textbf{increasing} arrays of indices $p_1, p_2, \ldots, p_n$ and $q_1, q_2, \ldots, q_n$, such that each integer from $1$ to $2n$ is encountered in either $p$ or $q$ exactly once, $s^p = s_{p_1} s_{p_2} \ldots s_{p_n}$, $s^q = s_{q_1} s_{q_2} \ldots s_{q_n}$, and $s^p = s^q$. If it is not possible to partition after performing any kind of operation, report $-1$. If it is possible to do the operation and partition $s$ into two disjoint subsequences $s^p$ and $s^q$, such that $s^p = s^q$, print elements of $b$ and indices of $s^p$, i. e. the values $p_1, p_2, \ldots, p_n$.
It is easy to see that a necessary condition for a solution to exist is that the number of $1$s in $s$ should be even. It turns out that this condition is sufficient too. Here is one valid construction: we make $n$ pairs of the form $(s[2i-1],s[2i])$ for $1 \leq i \leq n$. Assume we have $x$ pairs in which both elements are different and $n-x$ pairs in which both elements are the same. \textbf{Claim} - $x$ is even. \textbf{Proof} - Assume that among the $n-x$ pairs in which both elements are the same, we have $y$ pairs in which both elements are $1$. So the number of $1$s in $s$ is $x+2 \cdot y$. We know that the number of $1$s in $s$ is even, so for $x+2 \cdot y$ to be even, $x$ should also be even. Now we will select $x$ indices, exactly one index from each of the $x$ pairs in which both elements are distinct: take the index of $0$ from the $i$-th such pair if $i$ is odd, and the index of $1$ otherwise. Thus our selected characters are $\{0, 1, 0, 1, \dots, 0, 1\}$. Now, on cyclically shifting the selected characters clockwise once, we can see that the elements at the selected indices get flipped. Since the elements in those $x$ pairs were distinct initially, and we flipped exactly one character from each of those $x$ pairs, both elements of those $x$ pairs are the same now. Hence, in the updated $s$, $s[2i-1]=s[2i]$ for all $i$. So, for $s^p$, we can select the characters at all odd indices, and then $s^p = s^q$. Time complexity is $O(n)$.
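The key fact, that rotating an alternating $0, 1, \dots, 0, 1$ selection right by one flips every selected character, can be checked directly (hypothetical helper with 0-based indices):

```cpp
#include <cassert>
#include <string>
#include <vector>

// The operation from the statement: s[b1] <- s[bm], s[b2] <- s[b1],
// and so on, applied simultaneously. If the selected characters
// alternate 0,1,...,0,1, each selected position receives the opposite
// bit, i.e. the rotation flips every selected character.
std::string rotate_right(const std::string& s, const std::vector<int>& b) {
    std::string t = s;
    if (b.empty()) return t;                     // empty subsequence: no-op
    for (std::size_t j = 0; j < b.size(); ++j)
        t[b[j]] = s[b[(j + b.size() - 1) % b.size()]];
    return t;
}
```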
[ "constructive algorithms", "geometry", "greedy", "implementation", "strings" ]
2,200
#include <bits/stdc++.h> using namespace std; #define ll long long void solve(){ ll n; cin>>n; string s; cin>>s; ll freq=0; vector<ll> ans; ll need=0; for(ll i=0;i<2*n;i+=2){ if(s[i]!=s[i+1]){ freq++; ans.push_back(i+1); if(s[i]-'0'!=need){ ans.back()++; } need^=1; } } if(freq&1){ cout<<"-1\n"; return; } cout<<ans.size()<<" "; for(auto it:ans){ cout<<it<<" "; } cout<<"\n"; for(ll i=1;i<=2*n;i+=2){ cout<<i<<" \n"[i+1==2*n]; } } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); ll t; cin>>t; while(t--){ solve(); } }
1736
E
Swap and Take
You're given an array consisting of $n$ integers. You have to perform $n$ turns. Initially your score is $0$. On the $i$-th turn, you are allowed to leave the array as it is or swap any one pair of $2$ adjacent elements in the array and change exactly one of them to $0$(and leave the value of other element unchanged) after swapping. In either case(whether you swap or not), after this you add $a_i$ to your score. What's the maximum possible score you can get?
As the constraints suggest, we should use dp to solve this problem. Let's write down the original indices of the array elements that are added during this process: $p_1, p_2, \ldots, p_n$. None of the added numbers is zeroed in an optimal answer. It gives that $p_1 \le p_2 \le \ldots \le p_n$ and the answer is equal to the sum of $a[p_k]$ ($1 \leq k \leq n$). To get the optimal answer we'll use $dp[t][last][m]$ = maximum score on the $t$-th turn if $p_t = last$ and we have performed $m$ swapping moves (the first dimension can be omitted). Note that $m \leq t$. It can be updated by considering the next index, but that takes $O(n^4)$. The most straightforward way to improve it to $O(n^3)$ is to use prefix maximums. Here are some details. We have only two cases: $p_t = p_{t-1}$ - in this case, the transition is just $dp[t][last][m]=dp[t-1][last][m-1]+a[last]$. $p_t > p_{t-1}$ - let us make some observations. First of all, $p_t \ge t$, so the number of swaps needed to bring $p_t$ to index $t$ is fixed: it is $p_t-t$. So $dp[t][last][m]=\max_{j=1}^{last-1} (dp[t-1][j][m-(p_t-t)])+a[last]$. Note that we can find $\max_{j=1}^{last-1} (dp[t-1][j][m-(p_t-t)])$ in $O(1)$ using prefix maximums. Time complexity is $O(n^3)$.
[ "dp", "greedy" ]
2,600
#include <bits/stdc++.h> using namespace std; const int MAX=505; vector<vector<vector<int>>> dp(MAX,vector<vector<int>>(MAX,vector<int>(MAX,-(int)(1e9)))); int main() { int n; cin>>n; vector<int> a(n+5); for(int i=1;i<=n;i++){ cin>>a[i]; } vector<vector<int>> prefix(n+5,vector<int>(n+5,0)); int ans=0; for(int i=1;i<=n;i++){ for(int j=1;j<=n;j++){ for(int k=0;k<=i;k++){ if(k){ dp[i][j][k]=dp[i-1][j][k-1]+a[j]; } if(j>=i){ int need=j-i; if(need>k){ continue; } dp[i][j][k]=max(dp[i][j][k],prefix[k-need][j-1]+a[j]); } } } for(int j=1;j<=n;j++){ for(int k=0;k<=i;k++){ prefix[k][j]=max(prefix[k][j],dp[i][j][k]); } } for(int j=0;j<=i;j++){ for(int k=1;k<=n;k++){ prefix[j][k]=max(prefix[j][k],prefix[j][k-1]); ans=max(ans,prefix[j][k]); } } } cout<<ans; }
1737
A
Ela Sorting Books
Ela loves reading a lot, just like her new co-workers in DTL! On her first day after becoming an engineer in DTL, she is challenged by a co-worker to sort a heap of books into different compartments on the shelf. $n$ books must be split into $k$ compartments on the bookshelf ($n$ is divisible by $k$). Each book is represented by a lowercase Latin letter \textbf{from 'a' to 'y'} inclusively, which is the beginning letter in the title of the book. Ela must stack \textbf{exactly} $\frac{n}{k}$ books in each compartment. After the books are stacked, for each compartment indexed from $1$ to $k$, she takes the \textbf{minimum excluded (MEX)} letter of the multiset of letters formed by letters representing all books in that compartment, then combines the resulting letters into a string. The first letter of the resulting string is the MEX letter of the multiset of letters formed by the first compartment, the second letter of the resulting string is the MEX letter of the multiset of letters formed by the second compartment, ... and so on. Please note, under the constraint of this problem, \textbf{MEX letter can always be determined for any multiset found in this problem} because 'z' is not used. What is the \textbf{lexicographically greatest} resulting string possible that Ela can create? A string $a$ is lexicographically greater than a string $b$ if and only if one of the following holds: - $b$ is a prefix of $a$, but $b \ne a$; - in the first position where $a$ and $b$ differ, the string $a$ has a letter that appears later in the alphabet than the corresponding letter in $b$. The minimum excluded (MEX) letter of a multiset of letters is the letter that appears earliest in the alphabet and is not contained in the multiset.
For example, if a multiset of letters contains $7$ letters 'b', 'a', 'b', 'c', 'e', 'c', 'f' respectively, then the MEX letter of this multiset is 'd', because 'd' is not included in the multiset, and all letters that come before 'd' in the alphabet, namely 'a', 'b' and 'c', are included in the multiset.
We'll iterate through the compartments from $1$ to $k$. In each of them, we'll try to put one 'a' book, so the MEX of that compartment will not be 'a'. If some compartment can't be given an 'a' because we ran out of 'a' books, the MEX of that compartment has to be 'a'. The same logic applies to 'b', 'c', 'd', ...: for each compartment from $1$ on, if its MEX hasn't been determined yet, we fill the corresponding letter there. We stop when we reach the end of the alphabet, or when we've used up the first $\frac{n}{k}$ letters of the alphabet (because by then every remaining compartment is already full).
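The greedy above can be sketched as a standalone function mirroring the reference solution below (the name `greatest_mex_string` is ours):

```cpp
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

// Iterate letters from 'a'. Whenever the compartments still undecided
// outnumber the copies of the current letter, some compartment is forced to
// take this letter as its MEX. Compartments decided later receive larger MEX
// letters, so reversing at the end gives the lexicographically greatest string.
string greatest_mex_string(int n, int k, const string& s) {
    vector<int> cnt(26, 0);
    for (char c : s) cnt[c - 'a']++;
    string ans;
    for (int i = 0; i < min(25, n / k); i++)
        while ((int)(k - ans.size()) > cnt[i])
            ans.push_back(char('a' + i));
    // A compartment holding one copy of each of the first n/k letters
    // reaches the largest possible MEX.
    char best = char('a' + min(n / k, 25));
    while ((int)ans.size() < k) ans.push_back(best);
    reverse(ans.begin(), ans.end());
    return ans;
}
```

For instance, with $n = 12$, $k = 3$ and books "cabccadabaac" this sketch produces "edb".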
[ "greedy", "implementation", "strings" ]
900
... void execute(int test_number) { cin>>n>>k>>str; vector <int> count_char(26, 0); for (char c: str) count_char[c - 'a']++; string ans = ""; for (int i = 0; i < min(25, n/k); i++) { while (k - ans.size() > count_char[i]) { ans.push_back(i + 'a'); } } char c = 'a' + min(n / k, 25); while (k > ans.size()) { ans += c; } reverse(ans.begin(), ans.end()); cout << ans << "\n"; } ...
1737
B
Ela's Fitness and the Luxury Number
While working at DTL, Ela is very aware of her physical and mental health. She started to practice various sports, such as Archery, Yoga, and Football. Since she started engaging in sports activities, Ela switches to trying a new sport on days she considers being "Luxury" days. She counts the days since she started these activities, in which the day she starts is numbered as day $1$. A "Luxury" day is the day in which the number of this day is a luxurious number. An integer $x$ is called a luxurious number if it is divisible by ${\lfloor \sqrt{x} \rfloor}$. Here $\lfloor r \rfloor$ denotes the "floor" of a real number $r$. In other words, it's the largest integer not greater than $r$. For example: $8$, $56$, $100$ are luxurious numbers, since $8$ is divisible by $\lfloor \sqrt{8} \rfloor = \lfloor 2.8284 \rfloor = 2$, $56$ is divisible by $\lfloor \sqrt{56} \rfloor = \lfloor 7.4833 \rfloor = 7$, and $100$ is divisible by $\lfloor \sqrt{100} \rfloor = \lfloor 10 \rfloor = 10$, respectively. On the other hand, $5$ and $40$ are not, since $5$ is not divisible by $\lfloor \sqrt{5} \rfloor = \lfloor 2.2361 \rfloor = 2$, and $40$ is not divisible by $\lfloor \sqrt{40} \rfloor = \lfloor 6.3246 \rfloor = 6$. Being a friend of Ela, you want to engage in these fitness activities with her to keep her and yourself accompanied (and have fun together, of course). Between day $l$ and day $r$, you want to know how many times she changes the activities.
We shift the perspective to see the pattern from $\lfloor \sqrt{x} \rfloor$, instead of $x$, to see what kind of patterns are needed so $x$ can be luxurious. Note that ${(a + 1)}^2 - 1 = a \cdot (a + 2)$. Therefore, every half-open segment $[a^2, (a + 1)^2)$ contains exactly $3$ luxurious numbers: $a^2$, $a \cdot (a + 1)$ and $a \cdot (a + 2)$. Also note that, since large numbers can cause inaccuracies in floating-point computation, we should use binary search to find the floor of a square root, instead of using the sqrt function of any language.
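The observation can be cross-checked against the raw definition on small ranges; the sketch below (function names are ours) also uses the binary-search square root the tutorial recommends:

```cpp
// Floor of the square root via binary search, avoiding floating-point
// inaccuracies; the search bound covers all 64-bit inputs up to ~4e18.
long long bs_sqrt(long long x) {
    long long lo = 0, hi = 2000000123;
    while (hi > lo) {
        long long mid = (lo + hi) / 2;
        if (mid * mid > x) hi = mid; else lo = mid + 1;
    }
    return lo - 1;
}

// Count by the definition: x >= 1 is luxurious iff floor(sqrt(x)) divides x.
long long count_brute(long long l, long long r) {
    long long cnt = 0;
    for (long long x = l; x <= r; x++)
        if (x % bs_sqrt(x) == 0) cnt++;
    return cnt;
}

// Count using the observation: each segment [a^2, (a+1)^2) contributes
// a^2, a(a+1), a(a+2); only the two boundary segments need explicit checks.
long long count_fast(long long l, long long r) {
    long long a = bs_sqrt(l), b = bs_sqrt(r), cnt = 0;
    if (a == b) {
        for (int i = 0; i < 3; i++)
            if (l <= a * (a + i) && a * (a + i) <= r) cnt++;
        return cnt;
    }
    cnt = (b - a - 1) * 3;
    for (int i = 0; i < 3; i++) {
        if (l <= a * (a + i) && a * (a + i) <= r) cnt++;
        if (l <= b * (b + i) && b * (b + i) <= r) cnt++;
    }
    return cnt;
}
```

For example, the luxurious numbers in $[1, 10]$ are $1, 2, 3, 4, 6, 8, 9$, and both counters agree on $7$.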
[ "binary search", "implementation", "math" ]
1300
... ll l, r; ll bs_sqrt(ll x) { ll left = 0, right = 2000000123; while (right > left) { ll mid = (left + right) / 2; if (mid * mid > x) right = mid; else left = mid + 1; } return left - 1; } // main solution goes here: void execute(int test_number) { cin >> l >> r; ll sql = bs_sqrt(l), sqr = bs_sqrt(r); ll ans; if (sql == sqr) { ans = 0; for (int i = 0; i < 3; i++) { if (l <= sql * (sql + i) && sql * (sql + i) <= r) ans++; } } else { ans = (sqr - sql - 1) * 3; for (int i = 0; i < 3; i++) { if (l <= sql * (sql + i) && sql * (sql + i) <= r) ans++; if (l <= sqr * (sqr + i) && sqr * (sqr + i) <= r) ans++; } } cout << ans << "\n"; } ...
1737
C
Ela and Crickets
Ela likes Chess a lot. During breaks, she usually challenges her co-worker in DTL to some chess games. She's not an expert at classic chess, but she's very interested in Chess variants, where she has to adapt to new rules and test her tactical mindset to win the game. The problem, which involves a non-standard chess piece type that is described below, reads: given $3$ white \textbf{crickets} on an $n \cdot n$ board, arranged in an "L" shape next to each other, with no other pieces on the board, Ela wants to know: with a finite number of moves, can she put any white cricket on the square on row $x$, column $y$? An "L"-shape piece arrangement can only be one of the two mirrored shapes shown in the statement's figures (omitted here). For simplicity, we describe the rules for crickets on a board where only the three white crickets are present. A cricket can move horizontally, vertically, or diagonally, but only to a square in some direction that is \textbf{immediately after} another cricket piece (so that it must \textbf{jump over} it). If the square immediately behind the piece is unoccupied, the cricket will occupy the square. Otherwise (when the square is occupied by another cricket, or does not exist), the cricket isn't allowed to make such a move. See an example of valid crickets' moves on the pictures in the Note section.
The initial configuration has one central piece and $2$ non-central pieces. The central piece is the one on the square that shares an edge with both of the $2$ other squares. As the crickets are standing next to each other, we can prove that each of them can only ever jump to a square of the same color as the square it's standing on. Assuming that the central piece is on a dark square, we consider $3$ cases: The central piece initially lies in a corner of the board (which restricts the moves of the crickets): the pieces can only be moved to squares on the two edges the corner covers. The target square is dark: we can prove that there is always a way to line up $2$ pieces on same-colored squares lying diagonally with the target square, so it is reachable. The target square is light: the only piece on a light square can only move $2$ squares horizontally/vertically, never diagonally, so if the target square's x-coordinate has the same parity as the original x-coordinate of the light-squared piece, then it's possible. Otherwise, it's not. For the case of $2$ light pieces and $1$ dark piece, use the same reasoning. Complexity: $O(1)$
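The whole analysis reduces to a few parity checks; a sketch with a hypothetical name, following the same branches as the reference code below (the three crickets given as (row, column) pairs):

```cpp
// Returns whether some cricket can reach (u, v) on an n x n board, given the
// three crickets' coordinates. Mirrors the case analysis above: a corner-pinned
// arrangement can only serve its two border lines; otherwise reachability is a
// parity condition on the target square's color and row.
bool cricket_reachable(int n, int x1, int y1, int x2, int y2,
                       int x3, int y3, int u, int v) {
    // The central square shares a coordinate with each of the other two.
    int cx = (x1 == x2) ? x1 : x3;
    int cy = (y1 == y2) ? y1 : y3;
    if ((cx == 1 || cx == n) && (cy == 1 || cy == n))
        return u == cx || v == cy;  // corner case: only the two border lines
    if ((cx + cy) % 2 != (u + v) % 2)
        return true;                // opposite color: always reachable
    return cx % 2 == u % 2;         // same color: row parity must match
}
```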
[ "constructive algorithms", "games", "implementation", "math" ]
1500
... int n; int x[3], y[3]; int u, v; pii centralSquare() { int a = (x[0] == x[1]) ? x[0] : x[2]; int b = (y[0] == y[1]) ? y[0] : y[2]; return {a, b}; } // main solution goes here: void execute(int test_number) { cin>>n; for (int i=0; i<3; i++) cin>>x[i]>>y[i]; cin>>u>>v; int cx = centralSquare().first, cy = centralSquare().second; if ((cx == 1 || cx == n) && (cy == 1 || cy == n)) { // "corner" case, literally // the crickets can only reach coordinates within the edges that already contain at least 2 crickets, // which contain the centralSquare of the L cout << ((u == cx || v == cy) ? "YES\n" : "NO\n"); } else { if ((cx + cy) % 2 == (u + v) % 2) { cout << (cx % 2 == u % 2 ? "YES\n" : "NO\n"); } else { // can be proven to always be reachable, since we have ways to align 2 crickets on the same diagonal as the target cout << "YES\n"; } } } ...
1737
D
Ela and the Wiring Wizard
Ela needs to send a large package from machine $1$ to machine $n$ through a network of machines. Currently, with the network condition, she complains that the network is too slow and the package can't arrive in time. Luckily, a Wiring Wizard offered her a helping hand. The network can be represented as an \textbf{undirected connected graph} with $n$ nodes, each node representing a machine. $m$ wires are used to connect them. Wire $i$ is used to connect machines $u_i$ and $v_i$, and has a weight $w_i$. The aforementioned large package, if going through wire $i$, will move from machine $u_i$ to machine $v_i$ (or vice versa) in exactly $w_i$ microseconds. The Wiring Wizard can use his spell an arbitrary number of times. For each spell, he will choose the wire of index $i$, connecting machine $u_i$ and $v_i$, and rewire it following these steps: - Choose one machine that is connected by this wire. Without loss of generality, let's choose $v_i$. - Choose a machine that is currently connected to $v_i$ (including $u_i$), call it $t_i$. Disconnect the wire indexed $i$ from $v_i$, then use it to connect $u_i$ and $t_i$. The rewiring of wire $i$ takes $w_i$ microseconds, and the weight of the wire will not change after this operation. After a rewiring, a machine might have a wire connecting it to itself. Also, the Wiring Wizard has warned Ela that rewiring might cause temporary disconnections between some machines, but Ela just ignores it anyway. Her mission is to send the large package from machine $1$ to machine $n$ as fast as possible. Note that the Wizard can use his spell on a wire zero, one, or many times. To make sure the network works seamlessly while transferring the large package, \textbf{once the package starts transferring from machine $1$, the Wiring Wizard cannot use his spell to move wires around anymore.} Ela wonders, with the help of the Wiring Wizard, what is the least amount of time needed to transfer the large package from machine $1$ to $n$.
Note that you can turn an edge $(u, v)$ into a self-loop $(u, u)$ in a single operation. We can prove that it is always better to make an edge directly connect $1$ and $n$. Denote the index of that edge as $i$, and $\text{dist}[u][v]$ as the shortest path from $u$ to $v$ on the unweighted version of the input graph. There are two cases: Connect $u_i$ to $1$ and $v_i$ to $n$ (or vice versa) directly. The cost is $\text{dist}[u_i][1] + \text{dist}[v_i][n] + 1$ multiplied by $w_i$. Connect $1$, $n$ and $u_i$ through an intermediate vertex $x$. Then we will follow the shortest path from $u_i$ to $x$, connect $v_i$ to $x$, then create a self-loop $(x, x)$. After that, one endpoint of the loop will follow the path from $x$ to $1$, the other one will go from $x$ to $n$, and we are able to create an edge $(1, n)$ with weight $w_i$. A similar procedure is used to connect $1$, $n$ and $v_i$. The cost of the transformation is $\text{dist}[1][x] + \text{dist}[x][n] + \text{dist}[u_i][x] + 2$ multiplied by $w_i$. We may use BFS or the Floyd-Warshall algorithm in order to calculate $\text{dist}[u][v]$. The final complexity is $O(n^3)$.
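For intuition, the two cost formulas can be evaluated on a toy graph (0-indexed vertices, so the package travels from $0$ to $n-1$; the function name is ours, and Floyd-Warshall stands in for BFS since the example is tiny):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// edges: {u, v, w} triples, 0-indexed. Computes the minimum of the two cost
// cases above over all edges and intermediate vertices x.
long long min_transfer_time(int n, const vector<vector<int>>& edges) {
    const long long INF = (long long)1e18;
    vector<vector<long long>> d(n, vector<long long>(n, INF));
    for (int i = 0; i < n; i++) d[i][i] = 0;
    for (auto& e : edges) d[e[0]][e[1]] = d[e[1]][e[0]] = 1;  // unweighted dist
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                d[i][j] = min(d[i][j], d[i][k] + d[k][j]);
    long long ans = INF;
    for (auto& e : edges) {
        int u = e[0], v = e[1];
        long long w = e[2];
        // Case 1: drag the endpoints to 1 and n directly.
        ans = min(ans, w * (d[0][u] + d[n - 1][v] + 1));
        ans = min(ans, w * (d[0][v] + d[n - 1][u] + 1));
        // Case 2: route an endpoint through an intermediate vertex x,
        // make a self-loop at x, then split it towards 1 and n.
        for (int x = 0; x < n; x++) {
            ans = min(ans, w * (d[u][x] + d[x][0] + d[x][n - 1] + 2));
            ans = min(ans, w * (d[v][x] + d[x][0] + d[x][n - 1] + 2));
        }
    }
    return ans;
}
```

On a triangle $1 - 2 - 3$ with weights $3, 3, 5$ (edge $1$-$3$ has weight $5$), the best option is to traverse the weight-$5$ edge once, for a total of $5$.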
[ "brute force", "dp", "graphs", "greedy", "implementation", "shortest paths" ]
2200
#include <bits/stdc++.h> #define ll long long #define db long double #define ull unsigned long long #define x first #define y second #define mp make_pair #define pb push_back #define all(a) a.begin(), a.end() using namespace std; #define pper(a) cerr << #a << " = " << a << endl; void per() { cerr << endl; } template<typename Head, typename... Tail> void per(Head H, Tail... T) { cerr << H << ' '; per(T...); } template<class T> bool uin(T &a, T b) { return a > b ? (a = b, true) : false; } template<class T> bool uax(T &a, T b) { return a < b ? (a = b, true) : false; } template<class U, class V> ostream& operator<<(ostream& out, const pair<U, V>& a) { return out << "(" << a.x << ", " << a.y << ")"; } template<class U, class V> istream& operator>>(istream& in, pair<U, V>& a) { return in >> a.x >> a.y; } template<typename W, typename T = typename enable_if<!is_same<W, string>::value, typename W::value_type>::type> ostream& operator<<(ostream& out, const W& v) { out << "{ "; for (const auto& x : v) out << x << ", "; return out << '}'; } template<class T> void readArr(T from, T to) { for (auto i = from; i != to; ++i) cin >> *i; } mt19937 mrand(1337); unsigned int myRand32() { return mrand() & (unsigned int)(-1); } unsigned ll myRand64() { return ((ull)myRand32() << 32) ^ myRand32(); } const int mod = 1000000007; void add(int& a, int b) { a += b; if (a >= mod) a -= mod; } void dec(int &a, int b) { a -= b; if (a < 0) a += mod; } int mult(int a, int b) { return a * (ll)b % mod; } int bp(int a, int b) { int res = 1; while (b > 0) { if (b & 1) res = mult(res, a); a = mult(a, a); b >>= 1; } return res; } const int N = 507; ll f[N][N]; int main(){ #ifdef LOCAL freopen("N_input.txt", "r", stdin); //freopen("N_output.txt", "w", stdout); #endif ios_base::sync_with_stdio(0); cin.tie(0); int t; cin >> t; for (int a = 0; a < t; ++a) { int n, m; cin >> n >> m; for (int i = 0; i < n; ++i) { for (int j = 0; j < n; ++j) { f[i][j] = 1e18; } f[i][i] = 0; } vector<tuple<int, int, int> > ed; 
for (int i = 0; i < m; ++i) { int u, v, w; cin >> u >> v >> w; ed.pb(make_tuple(u - 1, v - 1, w)); f[u - 1][v - 1] = 1; f[v - 1][u - 1] = 1; } for (int k = 0; k < n; ++k) { for (int i = 0; i < n; ++i) { for (int j = 0; j < n; ++j) { f[i][j] = min(f[i][j], f[i][k] + f[k][j]); } } } ll ans = 1e18; for (auto x : ed) { int u = get<0>(x); int v = get<1>(x); int w = get<2>(x); // per(ans, u, v, w); ans = min(ans, (ll) w * (f[0][u] + f[n - 1][v] + 1)); ans = min(ans, (ll) w * (f[0][v] + f[n - 1][u] + 1)); // per(ans, u, v, w); for (int i = 0; i < n; ++i) { ans = min(ans, (ll) w * (f[v][i] + 1 + f[i][0] + f[i][n-1] + 1)); ans = min(ans, (ll) w * (f[u][i] + 1 + f[i][0] + f[i][n-1] + 1)); } } cout << ans << '\n'; } }
1737
E
Ela Goes Hiking
Ela likes to go hiking a lot. She loves nature and exploring the various creatures it offers. One day, she saw a strange type of ant, with a \textbf{cannibalistic} feature. More specifically, an ant would eat any ants that it sees which are smaller than it. Curious about this feature from a new creature, Ela ain't furious. She conducts a long, non-dubious, sentimental experiment. She puts $n$ cannibalistic ants in a line on a long wooden stick. Initially, the ants have the same weight of $1$. The distance between any two consecutive ants is the same. The distance between the first ant in the line to the left end and the last ant in the line to the right end is also the same as the distance between the ants. Each ant starts moving towards the left-end or the right-end randomly and equiprobably, at the same constant pace throughout the experiment. Two ants will crash if they are standing next to each other in the line and moving in opposite directions, and ants will change direction immediately when they reach the end of the stick. Ela can't determine the moving direction of each ant, but she understands very well their behavior when crashes happen. - If a crash happens between two ants of different weights, the heavier one will eat the lighter one, and gain the weight of the lighter one. After that, the heavier ant will continue walking in the same direction. In other words, if the heavier one has weight $x$ and is walking to the right, and the lighter one has weight $y$ and is walking to the left ($x > y$), then after the crash, the lighter one will diminish, and the heavier one will have weight $x + y$ and continue walking to the right. - If a crash happens between two ants with the same weight, the one walking to the left end of the stick will eat the one walking to the right, and then continue walking in the same direction. 
In other words, if one ant of weight $x$ walking to the left crashes with another ant of weight $x$ walking to the right, the one walking to the right will disappear, and the one walking to the left will have weight $2x$ and continue walking to the left. Please check the example in the "Note" section, which demonstrates the ants' behavior as above. We can prove that after a finite amount of time, there will be only one last ant standing. Initially, each ant can randomly and equiprobably move to the left or the right, which generates $2^n$ different cases of initial movements for the whole pack. For each position in the line, calculate the probability that the ant beginning in that position survives. Output it modulo $10^9 + 7$. Formally, let $M = 10^9 + 7$. It can be shown that the answer can be expressed as an irreducible fraction $\frac{p}{q}$, where $p$ and $q$ are integers and $q \not \equiv 0 \pmod{M}$. Output the integer equal to $p \cdot q^{-1} \bmod M$. In other words, output such an integer $x$ that $0 \le x < M$ and $x \cdot q \equiv p \pmod{M}$.
The first ant will die if there is more than $1$ ant, since it doesn't have any way to eat other ants (initially no ant has a weight less than its own, and no ants are to the left of it). Call $f(i)$ the probability for the $i$-th ant to be the last one standing among the ants from $1$ to $2i - 1$. $i$ will be the last one standing among them if $i$ is walking to the left and there are $\lfloor \frac{i-1}{2} \rfloor$ consecutive ants immediately to the left of it walking to the right. In other words: $i$ will survive if every ant from the $\lfloor \frac{i + 2}{2} \rfloor$-th to the $(i - 1)$-th is walking to the right and $i$ itself is walking to the left. Therefore: $f(i) = \frac{1}{2^{\lfloor (i+1)/2 \rfloor}}$. The exception is $i = n$: whichever direction the $n$-th ant is walking in, it will eventually change direction into walking left, so $f(n) = \frac{1}{2^{\lfloor (n-1)/2 \rfloor}}$. Note that $f(i)$ doesn't take into account ants from $2i$ to $n$. $g(i)$ will be the probability for the $i$-th ant to be the last one standing among the ants from $1$ to $n$. This holds $2$ conditions accountable: - $i$ can survive among the ants between $1$ and $2i - 1$, which only depends on how the ants from $\lfloor \frac{i+2}{2} \rfloor$ to $i$ are moving. We already calculated that using $f(i)$. - The ants $2i$, $2i + 1$, ..., $n$ cannot survive, which only depends on how the ants from $i + 1$ to $n$ are moving; we can calculate this as $1 - g(2i) - g(2i + 1) - \ldots - g(n)$. Hence $g(i) = f(i) \cdot (1 - g(2i) - g(2i + 1) - \ldots - g(n))$. The answer for each position $x$ is $g(x)$.
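Before moving to modular arithmetic, the recurrence can be sanity-checked with plain doubles (floors written as integer division). Exactly one ant survives, so $g(2) + \ldots + g(n)$ must equal $1$:

```cpp
#include <cmath>
#include <vector>
using namespace std;

// g[i] = survival probability of ant i, computed right-to-left with a
// running suffix sum, exactly as in the recurrence above.
vector<double> survival(int n) {
    vector<double> g(n + 1, 0.0), suf(n + 2, 0.0);
    if (n == 1) { g[1] = 1.0; return g; }
    g[n] = pow(0.5, (n - 1) / 2);  // f(n): the last ant eventually walks left
    suf[n] = g[n];
    for (int i = n - 1; i > 1; i--) {
        g[i] = pow(0.5, (i + 1) / 2);              // f(i)
        if (2 * i <= n) g[i] *= 1.0 - suf[2 * i];  // ants 2i..n must not survive
        suf[i] = suf[i + 1] + g[i];
    }
    return g;  // g[1] stays 0: for n > 1 the first ant never survives
}
```

For $n = 5$ this gives $g = (0, 0.25, 0.25, 0.25, 0.25)$, which indeed sums to $1$.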
[ "combinatorics", "dp", "math", "probabilities" ]
2500
... // data preprocessing: (e.g.: divisor generating, prime sieve) ll POW2[mn]; void preprocess() { POW2[0] = 1; for (int i = 1; i < mn; i++) POW2[i] = POW2[i - 1] * 2 % mod; } // global variables: ll n; ll POW(ll u, ll v) { if (v == 0) return 1; ll mid = POW(u, v / 2); mid = (mid * mid) % mod; return (v & 1) ? (mid * u % mod) : mid; } // main solution goes here: void execute(int test_number) { cin>>n; if (n == 1) { cout << "1\n"; return; } vector <ll> ans(n + 1, 0), sufsum(n + 1, 0); sufsum[n] = ans[n] = POW(POW2[(n - 1) / 2], mod - 2); for (int i = n - 1; i > 1; i--) { ans[i] = POW(POW2[(i + 1) / 2], mod - 2); if (2 * i <= n) ans[i] = ans[i] * (1 - sufsum[i * 2] + mod) % mod; sufsum[i] = (sufsum[i + 1] + ans[i]) % mod; } for (int i = 1; i <= n; i++) cout << ans[i] << "\n"; } ...
1737
F
Ela and Prime GCD
After a long, tough, but fruitful day at DTL, Ela goes home happily. She entertains herself by solving Competitive Programming problems. She prefers short statements, because she already read too many long papers and documentation at work. The problem of the day reads: You are given an integer $c$. Suppose that $c$ has $n$ divisors. You have to find a sequence with $n - 1$ integers $[a_1, a_2, ... a_{n - 1}]$, which satisfies the following conditions: - Each element is strictly greater than $1$. - Each element is a divisor of $c$. - All elements are distinct. - For all $1 \le i < n - 1$, $\gcd(a_i, a_{i + 1})$ is a prime number. In this problem, because $c$ can be too big, the result of prime factorization of $c$ is given instead. Note that $\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$ and a prime number is a positive integer which has exactly $2$ divisors.
Approach: constructive. Observation: assume that $x$ is a composite divisor of the given number. Since the $\gcd$ of adjacent elements must be prime, no two multiples of $x$ may be adjacent, so the number of divisors that are multiples of $x$ must be less than or equal to half of the sequence length. First, factorize the given number. Assume that $w$ is one of its divisors. If $w$ can be of the form $a^4$, $a^3b^2$, or $a^2b^2c^2$ (for distinct primes $a$, $b$, $c$), it can be proved that there is no answer. Otherwise, there are two cases. If the maximum possible exponent of a prime factor is 2, place the divisors like this: 1 a^2b^2 b a^2b b^2 ab a^2 ab^2 a / 1 a^2 a. Then expand the sequence as follows: repeat the current sequence twice - 1 a^2b^2 b a^2b b^2 ab a^2 ab^2 a 1 a^2b^2 b a^2b b^2 ab a^2 ab^2 a / 1 a^2 a 1 a^2 a. Multiply the odd-indexed elements of the first half and the even-indexed elements of the second half by the new prime factor; index 1 is the exception - 1 a^2b^2 bc a^2b b^2c ab a^2c ab^2 ac c a^2b^2c b a^2bc b^2 abc a^2 ab^2c a / 1 a^2 ab b a^2b a. If more prime factors exist, jump to "Otherwise". Otherwise, place the divisors like this: 1 a^3 a a^2 / 1 a. Now the exponents of all other prime factors are 1, and we can expand the sequence as follows: repeat the current sequence twice - 1 a^3 a a^2 1 a^3 a a^2 / 1 a 1 a. Multiply the even-indexed elements of the first half and the odd-indexed elements of the second half by the new prime factor - 1 a^3b a a^2b b a^3 ab a^2 / 1 ab b a. 3-1. If the maximum exponent is 3, swap a and b - 1 a^3b b a^2b a a^3 ab a^2. 3-2. Otherwise, swap b and ab - 1 b ab a. Like this, we can expand the sequence.
[ "constructive algorithms", "math", "number theory" ]
3300
#include <bits/stdc++.h> #define endl '\n' using namespace std; int m, t, b[18], check[18], cnt[5]; vector<int>v; vector<vector<int>>a; void initialize(int m) { fill(b, b + m + 1, 0); fill(check, check + m + 1, 0); fill(cnt, cnt + 5, 0); v.clear(); a.clear(); } void insert1(int p1, int c1) { v[p1] = c1; a.push_back(v); } void insert2(int p1, int p2, int c1, int c2) { v[p1] = c1; v[p2] = c2; a.push_back(v); } void f1(int x) { int n = a.size(); for(int i = 0; i < n; i++)a.push_back(a[i]); for(int i = 0; i < n; i += 2) { a[i + 1][x] = 1; a[i + n][x] = 1; } swap(a[n / 2], a[n]); } void f2(int x) { int n = a.size(); for(int i = 0; i < n; i++)a.push_back(a[i]); for(int i = 1; i < n; i += 2) { a[i + 1][x] = 1; a[i + n][x] = 1; } a[n][x] = 1; } void f3(int x) { int n = a.size(); for(int i = 0; i < n; i++)a.push_back(a[i]); for(int i = 0; i < n; i += 2) { a[i + 1][x] = 1; a[i + n][x] = 1; } swap(a[n], a[2 * n - 1]); } int main() { ios::sync_with_stdio(0); cin.tie(0); for(cin >> t; t--;) { cin >> m; for(int i = 1; i <= m; i++) { cin >> b[i]; cnt[min(b[i], 4)]++; } if(cnt[4] || cnt[3] >= 2 || cnt[3] && cnt[2] || cnt[2] >= 3) { cout << -1 << endl; initialize(m); continue; } for(int i = 1; i <= m; i++)v.push_back(0); a.push_back(v); if(cnt[2]) { int p1 = -1, p2 = -1; for(int i = 1; i <= m; i++) { if(b[i] == 2) { if(~p1)p2 = i - 1; else p1 = i - 1; } } if(~p2) { insert2(p1, p2, 2, 2); insert2(p1, p2, 0, 1); insert2(p1, p2, 2, 1); insert2(p1, p2, 0, 2); insert2(p1, p2, 1, 1); insert2(p1, p2, 2, 0); insert2(p1, p2, 1, 2); insert2(p1, p2, 1, 0); check[p1 + 1] = check[p2 + 1] = 1; } else { insert1(p1, 2); insert1(p1, 1); check[p1 + 1] = 1; } for(int i = 1; i <= m; i++) { if(check[i])continue; if(a.size() % 2)f2(i - 1); else f3(i - 1); } } else { if(cnt[3]) { int p = 0; for(int i = 1; i <= m; i++) { if(b[i] == 3)p = i - 1; } insert1(p, 3); insert1(p, 1); insert1(p, 2); check[p + 1] = 1; } else { insert1(0, 1); check[1] = 1; } for(int i = 1; i <= m; i++) { if(!check[i])f1(i - 1); 
} } for(auto &v: a) { if(*max_element(v.begin(), v.end())) { for(auto &p: v)cout << p << ' '; cout << endl; } } initialize(m); } }
1737
G
Ela Takes Dancing Class
DTL engineers love partying on the weekend. Ela does, too! Unfortunately, she didn't know how to dance yet. Therefore, she decided to take a dancing class. There are $n$ students in the dancing class, including Ela. In the final project, $n$ students will participate in a choreography described below. $n$ students are positioned on the positive side of the $Ox$-axis. The $i$-th dancer is located at $a_i > 0$. Some dancers will change positions during the dance (we'll call them movable dancers), and others will stay in the same place during a choreography (we'll call them immovable dancers). We distinguish the dancers using a binary string $s$ of length $n$: if $s_i$ equals '1', then the $i$-th dancer is movable, otherwise the $i$-th dancer is immovable. Let's call the "positive energy value" of the choreography $d > 0$. The dancers will perform "movements" based on this value. Each minute after the dance begins, the movable dancer with the \textbf{smallest} $x$-coordinate will start moving to the right and initiate a "movement". At the beginning of the movement, the dancer's energy level will be initialized to the positive energy value of the choreography, which is $d$. Each time they move from some $y$ to $y+1$, the energy level will be decreased by $1$. At some point, the dancer might meet other fellow dancers at the same coordinates. If it happens, then the energy level of the dancer will be increased by $1$. A dancer will stop moving to the right when his energy level reaches $0$, and he doesn't share a position with another dancer. The dancers are very well-trained, and each "movement" will end before the next minute begins. To show her understanding of this choreography, Ela has to answer $q$ queries, each consisting of two integers $k$ and $m$. The answer to this query is the coordinate of the $m$-th dancer \textbf{of both types} from the left at the $k$-th minute after the choreography begins. 
In other words, denote $x_{k, 1}, x_{k, 2}, \dots, x_{k, n}$ as the sorted coordinates of the dancers at $k$-th minute from the beginning, you need to print $x_{k, m}$.
Note that the immovable dancers don't change their positions during transformations. Let's pretend that all immovable dancers disappeared. We denote an arrangement as "good" if the dancer moving in the first step jumps over all other dancers (the new position of the first dancer is greater than the position of any other dancer). Denote $C$ as the number of dancers in the arrangement. A "good" dancer arrangement retains some properties: After $C$ operations, the relative coordinates of the dancers are untouched, and the absolute coordinates of the dancers are increased by $(C - 1 + d)$. Proof: by intuition. Dancer $1$'s position is increased by $C - 1 + d$ since it goes over $C - 1$ dancers and jumps to the following $d$-th free space. Dancer $2$ must jump across dancer $1$ (there aren't enough empty spaces between them), and its position will be increased by the same amount. The description is the same for the others. After $i$ operations ($i < C$), the absolute coordinates of the leftmost $i$ dancers are increased by $C - 1 + d$. To insert a new dancer $i$: repeatedly advance the current arrangement by multiples of $C - 1 + d$ until dancer $1$ can go over dancer $i$, increase the coordinates of all the arranged dancers that lie after dancer $i$ by $1$, then push the dancer into the maintained arrangement. Let's answer each query with binary search. We can safely remove all immovable dancers and the remaining dancers that haven't been observed, and adjust the query accordingly. In order to maintain the "good" arrangement, we may use any balanced binary search tree (a treap, for example). The final complexity is $O((n + q) \log^2 n)$.
[ "binary search", "data structures" ]
3500
#include <bits/stdc++.h> #define all(x) (x).begin(), (x).end() #define rall(x) (x).rbegin(), (x).rend() using namespace std; #ifdef LOCAL #include "debug.h" #else #define debug(...) 42 #endif using ll = long long; using ld = long double; template<class T> bool uin(T &a, T b) { return a > b ? (a = b, true) : false; } template<class T> bool uax(T &a, T b) { return a < b ? (a = b, true) : false; } mt19937 rng(chrono::system_clock::now().time_since_epoch().count()); struct Node { ll key, lazy; int cnt; unsigned int prior; Node *l, *r; Node() {} Node(ll key) : key(key), lazy(0), cnt(1), prior(rng()), l(0), r(0) {} }; inline int get_cnt(Node *t) { return t ? t->cnt : 0; } inline void upd_cnt(Node *t) { if (t) { t->cnt = get_cnt(t->l) + get_cnt(t->r) + 1; } } inline void modify(Node *t, ll delta) { if (t) { t->key += delta; t->lazy += delta; } } inline void down(Node *t) { if (t && t->lazy) { ll lazy = t->lazy; t->lazy = 0; if (t->l) { t->l->key += lazy; t->l->lazy += lazy; } if (t->r) { t->r->key += lazy; t->r->lazy += lazy; } } } void split(Node *t, Node *&l, Node *&r, int key, int add = 0) { if (!t) { l = r = 0; return; } down(t); int cur_key = add + get_cnt(t->l); if (key <= cur_key) { split(t->l, l, t->l, key, add); r = t; } else { split(t->r, t->r, r, key, cur_key + 1); l = t; } upd_cnt(t); } void split(Node *t, ll key, Node *&l, Node *&r) { if (!t) { l = r = 0; return; } down(t); if (t->key <= key) { split(t->r, key, t->r, r); l = t; } else { split(t->l, key, l, t->l); r = t; } upd_cnt(t); } void merge(Node *&t, Node *l, Node *r) { down(l); down(r); if (!l || !r) { t = l ? 
l : r; } else if (l->prior > r->prior) { merge(l->r, l->r, r); t = l; } else { merge(r->l, l, r->l); t = r; } upd_cnt(t); } struct Arrangement { Node *root; Arrangement() : root(0) {} Arrangement(vector<ll> vec) { root = 0; sort(all(vec)); for (auto x: vec) { merge(root, root, new Node(x)); } } int size() { return get_cnt(root); } void inc_all(ll delta) { modify(root, delta); } void inc_prefix_rev(int len, ll delta) { Node *l, *r; split(root, l, r, len); modify(l, delta); merge(root, r, l); } void inc_suffix_rev(int len, ll delta) { Node *l, *r; int C = size(); split(root, l, r, C - len); modify(r, delta); merge(root, r, l); } void insert(ll val) { Node *l, *r; split(root, val - 1, l, r); modify(r, 1); // increase by 1 merge(root, l, new Node(val)); merge(root, root, r); } int cnt_upper_bound(ll val) { int ans = 0; auto t = root; while (t) { down(t); if (t->key > val) { t = t->l; } else { ans += get_cnt(t->l) + 1; t = t->r; } } return ans; } ll get_first() { assert(root); auto t = root; while (t->l) { down(t); t = t->l; } return t->key; } }; void solve() { int n, q; ll d; cin >> n >> d >> q; vector<ll> a(n); for (auto &x: a) { cin >> x; } string s; cin >> s; for (int i = 0; i < n; i++) { if (s[i] == '0') { a[i] = -a[i]; } } struct Query { ll k; int m; int id; ll ans; }; vector<Query> queries; for (int i = 0; i < q; i++) { int k, m; cin >> k >> m; queries.push_back({k, m, i, 0}); } vector<ll> immovable; for (auto x: a) { if (x < 0) { immovable.push_back(-x); } } { // get rid of immovable balls vector<ll> new_a; for (auto x: a) { if (x < 0) { continue; } int cnt = upper_bound(all(immovable), x) - immovable.begin(); new_a.push_back(x - cnt); } a = new_a; } // designated data structure Arrangement arrangement({a[0]}); ll op = 0; auto process_query = [&](Query &query, int i) { int C = arrangement.size(); ll need = query.k - op; ll big = need / C; int small = need - big * C; arrangement.inc_all(big * (C - 1 + d)); arrangement.inc_prefix_rev(small, C - 1 + d); ll lo = 0, 
hi = 3e18; while (hi - lo > 1) { ll mid = (lo + hi) / 2; ll L = mid; int m = query.m; int cnt = upper_bound(all(immovable), L) - immovable.begin(); L -= cnt; m -= cnt; cnt = upper_bound(i + all(a), L) - a.begin() - i; L -= cnt; m -= cnt; if (m <= arrangement.cnt_upper_bound(L)) { hi = mid; } else { lo = mid; } } query.ans = hi; arrangement.inc_suffix_rev(small, -(C - 1 + d)); arrangement.inc_all(-big * (C - 1 + d)); }; sort(all(queries), [](const Query &x, const Query &y) { return x.k < y.k; }); auto current_query = queries.begin(); for (int i = 1; i < (int)a.size(); i++) { ll gap = a[i] - arrangement.get_first(); int C = arrangement.size(); ll h = (gap - 1) / (C - 1 + d); while (current_query != queries.end() && current_query->k <= op + h * C) { process_query(*(current_query++), i); } arrangement.inc_all(h * (C - 1 + d)); arrangement.insert(a[i]); op += h * C; } while (current_query != queries.end()) { process_query(*(current_query++), a.size()); } sort(all(queries), [](const Query &x, const Query &y) { return x.id < y.id; }); for (auto &query: queries) { cout << query.ans << "\n"; } } int main() { cin.tie(nullptr)->sync_with_stdio(false); int T = 1; // cin >> T; while (T--) { solve(); } }
1738
A
Glory Addicts
The hero is addicted to glory, and is fighting against a monster. The hero has $n$ skills. The $i$-th skill is of type $a_i$ (either \textbf{fire} or \textbf{frost}) and has initial damage $b_i$. The hero can perform all of the $n$ skills in any order (with each skill performed exactly \textbf{once}). When performing each skill, the hero can play a magic as follows: - If the current skill immediately follows another skill of a different type, then its damage is \textbf{doubled}. In other words, - If a skill of type fire and with initial damage $c$ is performed immediately after a skill of type fire, then it will deal $c$ damage; - If a skill of type fire and with initial damage $c$ is performed immediately after a skill of type frost, then it will deal $2c$ damage; - If a skill of type frost and with initial damage $c$ is performed immediately after a skill of type fire, then it will deal $2c$ damage; - If a skill of type frost and with initial damage $c$ is performed immediately after a skill of type frost, then it will deal $c$ damage. Your task is to find the \textbf{maximum} damage the hero can deal.
Suppose the first skill to be performed is fixed. Then it is optimal to use the following greedy strategy: if possible, perform a skill of a different type from the last skill; if there are multiple such skills, choose the one with the largest initial damage. Inspired by the above observation, if the type of the first skill is fixed, it is optimal to start with the skill of that type with the smallest initial damage, because the first skill will never be doubled. Therefore, we have the following algorithm. Try each possible type $a$ (of the first skill). Use the skill of type $a$ with the smallest initial damage first. Alternate the types of the following skills as much as possible. This algorithm is sufficient to pass this problem. Nevertheless, a slightly more elegant analysis gives a simpler solution. If the number of skills of type fire equals the number of skills of type frost, double the damage of all skills except for the one with the smallest initial damage. Otherwise, let $k$ be the smaller number of skills of either type; then double the damage of the largest $k$ skills of both types. The time complexity is $O(n \log n)$ due to sorting.
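The closed form in the last paragraph can be sketched as follows (the function name and the vector interface are ours, not the problem's I/O format):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the editorial's closed form: equal counts -> double everything
// except the single smallest skill; otherwise double the largest k skills
// of each type, where k is the smaller count.
long long maxTotalDamage(vector<long long> fire, vector<long long> frost) {
    sort(fire.begin(), fire.end());
    sort(frost.begin(), frost.end());
    long long total = accumulate(fire.begin(), fire.end(), 0LL)
                    + accumulate(frost.begin(), frost.end(), 0LL);
    if (fire.size() == frost.size()) {
        long long smallest = fire.empty() ? 0 : min(fire[0], frost[0]);
        return 2 * total - smallest;
    }
    size_t k = min(fire.size(), frost.size());
    long long extra = 0;
    for (size_t i = 0; i < k; ++i)
        extra += fire[fire.size() - 1 - i] + frost[frost.size() - 1 - i];
    return total + extra;
}
```

For example, with fire damages $\{3,4\}$ and frost damages $\{5,6\}$ the counts are equal, so everything except the smallest skill is doubled, giving $2 \cdot 18 - 3 = 33$.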
[ "greedy", "implementation", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; scanf("%d", &n); vector<int> a(n), b(n); for (int i = 0; i < n; ++i) scanf("%d", &a[i]); for (int i = 0; i < n; ++i) scanf("%d", &b[i]); vector<vector<long long>> v(2); for (int i = 0; i < n; ++i) v[a[i]].push_back(b[i]); for (int i = 0; i < 2; ++i) sort(v[i].begin(), v[i].end()); auto go = [&]() { long long res = 0; if (v[0].size() == v[1].size()) res -= min(v[0].front(), v[1].front()); while (!v[0].empty() && !v[1].empty()) { res += (v[0].back() + v[1].back()) * 2; v[0].pop_back(); v[1].pop_back(); } for (auto x : v[0]) res += x; for (auto x : v[1]) res += x; return res; }; printf("%lld\n", go()); } int main() { int tests; scanf("%d", &tests); while (tests--) solve(); return 0; }
1738
B
Prefix Sum Addicts
Suppose $a_1, a_2, \dots, a_n$ is a sorted \textbf{integer} sequence of length $n$ such that $a_1 \leq a_2 \leq \dots \leq a_n$. For every $1 \leq i \leq n$, the prefix sum $s_i$ of the first $i$ terms $a_1, a_2, \dots, a_i$ is defined by $$ s_i = \sum_{k=1}^i a_k = a_1 + a_2 + \dots + a_i. $$ Now you are given the last $k$ terms of the prefix sums, which are $s_{n-k+1}, \dots, s_{n-1}, s_{n}$. Your task is to determine whether this is possible. Formally, given $k$ integers $s_{n-k+1}, \dots, s_{n-1}, s_{n}$, the task is to check whether there is a sequence $a_1, a_2, \dots, a_n$ such that - $a_1 \leq a_2 \leq \dots \leq a_n$, and - $s_i = a_1 + a_2 + \dots + a_i$ for all $n-k+1 \leq i \leq n$.
If $k = 1$, it is always possible, so the answer is "YES". In the following, we assume that $k \geq 2$. Here, we are given $s_{n-k+1}, \dots, s_n$. We can recover $a_{n-k+2}, \dots, a_{n}$ by letting $a_{i} = s_{i} - s_{i-1}$ for every $n-k+2 \leq i \leq n$. If the known elements of $a$ cannot form a sorted array, i.e., it does not hold that $a_{n-k+2} \leq \dots \leq a_n$, the answer is "NO". Note that the sum of the first $n-k+1$ elements of $a$ should satisfy $s_{n-k+1} = a_1 + \dots + a_{n-k+1} \leq (n-k+1) a_{n-k+2}$. If this does not hold, the answer is "NO". Having checked that both $a_{n-k+2} \leq \dots \leq a_n$ and $s_{n-k+1} \leq (n-k+1) a_{n-k+2}$ hold, we claim that the answer is "YES". A possible solution splits $s_{n-k+1}$ as evenly as possible, with the smaller parts first so that the block is sorted: $a_i = \begin{cases} \left\lfloor\dfrac{s_{n-k+1}}{n-k+1}\right\rfloor, & 1 \leq i \leq (n-k+1) - \left(s_{n-k+1} \bmod (n-k+1)\right), \\ \left\lceil\dfrac{s_{n-k+1}}{n-k+1}\right\rceil, & \text{otherwise}. \end{cases}$ These values sum to $s_{n-k+1}$, and $a_{n-k+1} \leq a_{n-k+2}$ follows from $s_{n-k+1} \leq (n-k+1) a_{n-k+2}$.
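A sketch of the even-split construction (a hypothetical helper, not part of the official solution; floored division is used so that negative sums are handled too):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Split s into m nondecreasing integers summing to s: floors first,
// ceilings last. q and r are the floored quotient and remainder of s / m.
vector<long long> firstBlock(long long s, long long m) {
    long long q = s / m, r = s % m;
    if (r < 0) { q -= 1; r += m; }  // make 0 <= r < m even when s < 0
    vector<long long> a(m, q);      // first m - r entries are floor(s/m)
    for (long long i = m - r; i < m; ++i) a[i] = q + 1; // last r are ceil
    return a;
}
```

For instance, splitting $s = 7$ into $m = 3$ parts yields $[2, 2, 3]$, which is sorted and sums to $7$.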
[ "constructive algorithms", "greedy", "math", "sortings" ]
1,200
#include <bits/stdc++.h> using namespace std; void solve() { int n, k; cin >> n >> k; vector<long long> s(n + 1); for (int i = n - k + 1; i <= n; ++i) cin >> s[i]; if (k == 1) { cout << "YES" << endl; return; } vector<long long> a(n + 1); for (int i = n - k + 2; i <= n; ++i) a[i] = s[i] - s[i - 1]; if (!std::is_sorted(a.begin() + n - k + 2, a.end())) { cout << "NO" << endl; return; } if (s[n - k + 1] > a[n - k + 2] * (n - k + 1)) { cout << "NO" << endl; return; } cout << "YES" << endl; } int main() { int tests; cin >> tests; while (tests--) solve(); return 0; }
1738
C
Even Number Addicts
Alice and Bob are playing a game on a sequence $a_1, a_2, \dots, a_n$ of length $n$. They move in turns and \textbf{Alice moves first}. In the turn of each player, he or she should select an integer and remove it from the sequence. The game ends when there is no integer left in the sequence. Alice wins if the sum of her selected integers is \textbf{even}; otherwise, Bob wins. Your task is to determine who will win the game, if both players play optimally.
Only the parity of each element matters, so we may assume that every $a_i$ is 0 or 1. Suppose there are $a$ 0's and $b$ 1's in total. Consider the following cases: $b \equiv 2 \pmod 4$. Bob has a winning strategy: always choose the number that Alice chose in her last move. This strategy keeps the invariant that Alice and Bob have the same number of 1's after each of Bob's moves. The only case in which Bob cannot continue this strategy is when Alice takes the last 0. In this case, there must be an even number of 1's (and no 0's) remaining, so Alice and Bob will each take half of the remaining 1's. In the end, Alice and Bob have the same number $b/2$ of 1's, which is odd. $b \equiv 3 \pmod 4$. Alice has a winning strategy: choose a 1 first. After this move, the game is reduced to $a$ 0's and $b-1$ 1's with Bob taking the first turn, where Bob now wins if he has an even number of 1's in the end. This reduced game is exactly the case of $b - 1 \equiv 2 \pmod 4$ with the roles swapped, so by the copying strategy above, Bob loses. $b \equiv 0 \pmod 4$. Alice has a winning strategy: choose a 0 first; after that, choose the number that Bob chose in his last move. This strategy keeps the invariant that Alice and Bob have the same number of 1's after each of Alice's moves. The only case in which Alice cannot continue this strategy is when there is no 0 left. In this case, there must be an even number of 1's (and no 0's) remaining, so Alice and Bob will each take half of the remaining 1's. In the end, Alice and Bob have the same number $b/2$ of 1's, which is even. $b \equiv 1 \pmod 4$. If either player chooses the first 1, the game is reduced to $a$ 0's and $b-1$ 1's with the opponent moving first, which is the case of $b \equiv 0 \pmod 4$, and the opponent wins. Therefore, the player who is forced to choose the first 1 loses. With this observation, Alice will lose if there is an even number of $0$'s, i.e., $a \equiv 0 \pmod 2$; and Alice will win if $a \equiv 1 \pmod 2$.
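The case analysis collapses to a two-line closed form; the sketch below (the helper name is ours) takes $a$ as the number of even elements and $b$ as the number of odd elements:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Closed form distilled from the case analysis above:
// Bob wins iff b % 4 == 2, or b % 4 == 1 with an even number of zeros.
string winner(long long a, long long b) {
    bool bobWins = (b % 4 == 2) || (b % 4 == 1 && a % 2 == 0);
    return bobWins ? "Bob" : "Alice";
}
```

As a sanity check, with a single odd element ($a = 0$, $b = 1$) Alice must take it and her sum is odd, so Bob wins; with one even and one odd ($a = 1$, $b = 1$) Alice takes the even element and wins.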
[ "dp", "games", "greedy", "math" ]
1,500
#include <bits/stdc++.h> using namespace std; int u[111][111][2]; int dp[111][111][2]; int go(int x, int y, int z) { if (x + y == 0) return z == 0; int& res = dp[x][y][z]; if (u[x][y][z]) return res; u[x][y][z] = 1; if (x > 0) res |= 1 - go(x - 1, y, (z + y + 1) % 2); if (y > 0) res |= 1 - go(x, y - 1, (z + y + 1) % 2); return res; } void solve() { int n; cin >> n; vector<int> a(n); for (auto& e : a) cin >> e; int x = 0, y = 0; for (auto e : a) { if (e % 2 == 0) x += 1; else y += 1; } int res = go(x, y, 0); cout << (res ? "Alice" : "Bob") << "\n"; } int main() { int tests = 1; cin >> tests; for (int test = 1; test <= tests; ++test) solve(); return 0; }
1738
D
Permutation Addicts
Given a permutation $a_1, a_2, \dots, a_n$ of integers from $1$ to $n$, and a threshold $k$ with $0 \leq k \leq n$, you compute a sequence $b_1, b_2, \dots, b_n$ as follows. For every $1 \leq i \leq n$ in increasing order, let $x = a_i$. - If $x \leq k$, set $b_{x}$ to the last element $a_j$ ($1 \leq j < i$) such that $a_j > k$. If no such element $a_j$ exists, set $b_{x} = n+1$. - If $x > k$, set $b_{x}$ to the last element $a_j$ ($1 \leq j < i$) such that $a_j \leq k$. If no such element $a_j$ exists, set $b_{x} = 0$. Unfortunately, after the sequence $b_1, b_2, \dots, b_n$ has been completely computed, the permutation $a_1, a_2, \dots, a_n$ and the threshold $k$ are discarded. Now you only have the sequence $b_1, b_2, \dots, b_n$. Your task is to find any possible permutation $a_1, a_2, \dots, a_n$ and threshold $k$ that produce the sequence $b_1, b_2, \dots, b_n$. \textbf{It is guaranteed that} there exists at least one pair of permutation $a_1, a_2, \dots, a_n$ and threshold $k$ that produce the sequence $b_1, b_2, \dots, b_n$. A permutation of integers from $1$ to $n$ is a sequence of length $n$ which contains all integers from $1$ to $n$ exactly once.
For readability and convenience of the readers who are interested in the checker of this problem, we consider this problem without assuming the existence of the threshold $k$ and permutation $a_1, a_2, \dots, a_n$. Let's first determine the value of $k$. A valid sequence $b$ must satisfy that for every $i \leq k$, we have $b_i > k$, and for every $i > k$, we have $b_i \leq k$. In particular, $b_i > i$ holds exactly for the indices $i \leq k$, so $k$ is the largest index $i$ with $b_i > i$ (or $k = 0$ if there is none). Now $k$ is fixed. We are going to find a sequence $a$ that produces sequence $b$ with respect to $k$. We construct a directed graph $G$ with $n+2$ vertices numbered from $0$ to $n+1$ as follows. For every $1 \leq i \leq n$, add an edge from vertex $b_i$ to vertex $i$. It is clear that there are $n$ edges in graph $G$. We consider two parts of the graph $G$: the set $A$ of vertices from $0$ to $k$ and the set $B$ of vertices from $k+1$ to $n+1$. Then every directed edge goes either from $A$ to $B$ or from $B$ to $A$. Claim B.1. Exactly one of vertex $0$ and vertex $n+1$ is isolated (that is, no edges are incident to it). Proof: First of all, it cannot be the case that both vertex $0$ and vertex $n+1$ are isolated, because $b_{a_1}$ is either $0$ or $n+1$ according to its definition. Now suppose both vertex $0$ and vertex $n+1$ are not isolated. Then there are two different indices $x$ and $y$ such that $b_x = 0$ and $b_y = n+1$ with $1 \leq y \leq k < x \leq n$. Find the two different indices $i$ and $j$ such that $a_i = x$ and $a_j = y$. If $i < j$, then we have $a_i = x > k \geq a_j = y$; by the definition of $b_{y}$, $a_i = x$ is a candidate, so $b_y \neq n+1$. If $i > j$, then we have $a_j = y \leq k < a_i = x$; by the definition of $b_{x}$, $a_j = y$ is a candidate, so $b_x \neq 0$. Either way we reach a contradiction. $\Box$ By Claim B.1, we can just ignore the isolated vertex. After that, there are $n+1$ vertices and $n$ edges. It seems that the graph $G$ is a tree! Next, we will show that this is true. Claim B.2. 
The graph $G$ contains no cycles; that is, $G$ is a DAG (directed acyclic graph). Proof: There is no edge leading to vertex $0$ or $n + 1$, so a cycle could only occur among vertices from $1$ to $n$. Every edge $(u, v)$ in graph $G$ with $1 \leq u, v \leq n$ means that, in sequence $a$, the value $u$ appears before the value $v$. Since sequence $a$ is a permutation, all values appear exactly once. A cycle would imply that there are two different vertices $u$ and $v$ such that $u$ appears before $v$ and $v$ appears before $u$, which is of course impossible. $\Box$ According to the construction of graph $G$, there is at most one edge leading to each vertex. So the DAG $G$ is a tree rooted at vertex $0$ or $n+1$, with every edge $(u, v)$ meaning that $u$ is the parent of $v$. Now we have a tree $G$, and want to find a suitable sequence $a$. Claim B.3. For every vertex $u$ in tree $G$, at most one child vertex $v$ of $u$ is not a leaf. Proof: Suppose there are two child vertices $v_1$ and $v_2$ of $u$ that are not leaves, and let $w_1$ and $w_2$ be child vertices of $v_1$ and $v_2$, respectively. Let $a^{-1}(x)$ denote the index $i$ such that $a_i = x$. In this notation, we have $a^{-1}(u) < a^{-1}(v)$ for every edge $(u, v)$ in tree $G$. Without loss of generality, assume that $a^{-1}(v_1) < a^{-1}(v_2)$. Then we must have $a^{-1}(v_2) < a^{-1}(w_1)$: otherwise, i.e., if $a^{-1}(v_2) > a^{-1}(w_1)$, then $w_1$ would be a candidate for $b_{v_2}$, so $u$ could not be the parent vertex of $v_2$. But $a^{-1}(v_2) < a^{-1}(w_1)$ means that $v_2$ is a candidate for $b_{w_1}$, so $v_1$ could not be the parent vertex of $w_1$. A contradiction! $\Box$ Now we are ready to give an algorithm to find a suitable sequence $a$ from tree $G$, which is rather simple: output the BFS order of tree $G$, visiting, among the children of each vertex, the non-leaf child last. 
Before running the BFS, remember to check the graph $G$ as follows: exactly one of vertex $0$ and vertex $n+1$ is isolated; the graph $G$ is a DAG, i.e., contains no cycles. With the above, graph $G$ must be a rooted tree: choose the non-isolated vertex among vertices $0$ and $n+1$ as the root. Finally, check that for every vertex $u$ in tree $G$, at most one child vertex of $u$ is not a leaf.
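The recovery of $k$ and the threshold consistency check can be sketched as follows (a hypothetical helper over a 1-indexed $b$; the tree-structure checks are omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Recover the threshold k: b_i > i holds exactly for i <= k, so k is the
// largest index with b_i > i; then verify b_i > k iff i <= k.
// Returns -1 if the pattern is inconsistent. b[1..n] is used, b[0] is unused.
int recoverK(const vector<int>& b) {
    int n = (int)b.size() - 1, k = 0;
    for (int i = 1; i <= n; ++i)
        if (b[i] > i) k = i;
    for (int i = 1; i <= n; ++i) {
        if (i <= k && b[i] <= k) return -1;
        if (i > k && b[i] > k) return -1;
    }
    return k;
}
```

For example, the permutation $a = [2, 1, 3]$ with $k = 1$ produces $b = [2, 0, 1]$, and the helper recovers $k = 1$ from it.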
[ "constructive algorithms", "data structures", "dfs and similar", "dsu", "graphs", "trees" ]
1,900
#include <bits/stdc++.h> using namespace std; void solve() { int n; scanf("%d", &n); vector<int> b(n + 1); int k = 0; for (int i = 1; i <= n; ++i) { scanf("%d", &b[i]); if (b[i] > i) k = i; } vector<vector<int>> v(n + 2); for (int i = 1; i <= n; ++i) v[b[i]].push_back(i); int root = v[0].size() ? 0 : n + 1; vector<int> q = { root }; for (int i = 0; i < q.size(); ++i) { int x = q[i]; sort(v[x].begin(), v[x].end(), [&](int a, int b) { return v[a].size() < v[b].size(); }); for (auto y : v[x]) q.push_back(y); } printf("%d\n", k); assert(q.size() == n + 1); for (int i = 1; i < (int)q.size(); ++i) { if (i > 1) printf(" "); printf("%d", q[i]); } puts(""); } int main() { int tests; scanf("%d", &tests); while (tests--) solve(); return 0; }
1738
E
Balance Addicts
Given an integer sequence $a_1, a_2, \dots, a_n$ of length $n$, your task is to compute the number, modulo $998244353$, of ways to partition it into several \textbf{non-empty} \textbf{continuous} subsequences such that the sums of elements in the subsequences form a \textbf{balanced} sequence. A sequence $s_1, s_2, \dots, s_k$ of length $k$ is said to be balanced, if $s_{i} = s_{k-i+1}$ for every $1 \leq i \leq k$. For example, $[1, 2, 3, 2, 1]$ and $[1,3,3,1]$ are balanced, but $[1,5,15]$ is not. Formally, every partition can be described by a sequence of indexes $i_1, i_2, \dots, i_k$ of length $k$ with $1 = i_1 < i_2 < \dots < i_k \leq n$ such that - $k$ is the number of non-empty continuous subsequences in the partition; - For every $1 \leq j \leq k$, the $j$-th continuous subsequence starts with $a_{i_j}$, and ends exactly before $a_{i_{j+1}}$, where $i_{k+1} = n + 1$. That is, the $j$-th subsequence is $a_{i_j}, a_{i_j+1}, \dots, a_{i_{j+1}-1}$. There are $2^{n-1}$ different partitions in total. Let $s_1, s_2, \dots, s_k$ denote the sums of elements in the subsequences with respect to the partition $i_1, i_2, \dots, i_k$. Formally, for every $1 \leq j \leq k$, $$ s_j = \sum_{i=i_{j}}^{i_{j+1}-1} a_i = a_{i_j} + a_{i_j+1} + \dots + a_{i_{j+1}-1}. $$ For example, the partition $[1\,|\,2,3\,|\,4,5,6]$ of sequence $[1,2,3,4,5,6]$ is described by the sequence $[1,2,4]$ of indexes, and the sums of elements in the subsequences with respect to the partition is $[1,5,15]$. Two partitions $i_1, i_2, \dots, i_k$ and $i'_1, i'_2, \dots, i'_{k'}$ (described by sequences of indexes) are considered to be different, if at least one of the following holds. - $k \neq k'$, - $i_j \neq i'_j$ for some $1 \leq j \leq \min\left\{ k, k' \right\}$.
Let $f(i, j)$ be the answer to the problem for the subsequence $a_i, a_{i+1}, \dots, a_j$. We want to find $f(1, n)$. For every $1 \leq i \leq j \leq n$, we consider the following cases. Case 1: $a_i = \dots = a_j = 0$. Every partition produces a balanced sequence, thereby $f(i, j) = 2^{j-i}$. Case 2: $a_i = a_j = 0$, but not all elements are zero. Suppose that there are $x$ prefix $0$'s and $y$ suffix $0$'s in $a_i, \dots, a_j$ with $x, y \geq 1$. There can be at most $\min\{x, y\}$ zeros in both the prefix and suffix of the balanced sequence, so there are $c_{x,y} = \sum_{k=0}^{\min\{x,y\}} \binom{x}{k} \binom{y}{k}$ choices in total and $f(i,j) = c_{x,y} f(i+x, j-y)$. Case 3: otherwise, find the leftmost position $i \leq l \leq j$ and the rightmost position $i \leq r \leq j$ such that $a_i + \dots + a_l = a_r + \dots + a_j > 0$, and consider three subcases. If $l = j$, we have $f(i, j) = 1$. If $a_{l+1} = \dots = a_{r-1} = 0$, there are $2^{r-l}$ possible choices to make the sequence balanced, so $f(i, j) = 2^{r-l}$. Otherwise, suppose there are $x$ prefix $0$'s and $y$ suffix $0$'s in $a_{l+1}, \dots, a_{r-1}$. One can cut both parts with sum $a_i + \dots + a_l = a_r + \dots + a_j$. The number of choices to cut out $k$ $0$'s in the balanced sequence is $\binom{x+1}{k+1} \binom{y+1}{k+1}$. With the case of no cut included, we have $f(i, j) = \left( \sum_{k=0}^{\min\{x,y\}} \binom{x+1}{k+1} \binom{y+1}{k+1} + 1 \right) f(l+x+1, r-y-1) = c_{x+1, y+1} f(l+x+1, r-y-1).$ Since every element is processed once, it is clear that the time complexity is $O(n)$.
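The coefficient $c_{x,y}$ can be computed directly; as a side note, by the Vandermonde identity $c_{x,y} = \binom{x+y}{x}$. A plain 64-bit sketch (without the modulus used in the actual solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// C(n, k), computed iteratively; r * (n - i + 1) is divisible by i at each
// step because the intermediate value is itself a binomial coefficient.
long long binom(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (long long i = 1; i <= k; ++i) r = r * (n - i + 1) / i;
    return r;
}

// c_{x,y} = sum_k C(x,k) C(y,k), the counting term from the editorial.
long long c(int x, int y) {
    long long s = 0;
    for (int k = 0; k <= min(x, y); ++k) s += binom(x, k) * binom(y, k);
    return s;
}
```

For instance, $c_{2,1} = 1 + 2 = 3$, and $c_{3,3} = 1 + 9 + 9 + 1 = 20 = \binom{6}{3}$.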
[ "combinatorics", "dp", "math", "two pointers" ]
2,300
#include <bits/stdc++.h> using namespace std; constexpr long long MOD = 998244353; long long power(long long a, long long b) { if (b == 0) return 1; long long t = power(a, b / 2); t = t * t % MOD; if (b % 2 == 1) t = t * a % MOD; return t; } void solve() { int n; scanf("%d", &n); vector<long long> a(n + 1); for (int i = 1; i <= n; ++i) scanf("%lld", &a[i]); vector<long long> factorial(n + 1, 1), inv_factorial(n + 1, 1); for (int i = 1; i <= n; ++i) { factorial[i] = factorial[i - 1] * i % MOD; inv_factorial[i] = inv_factorial[i - 1] * power(i, MOD - 2) % MOD; } auto choose = [&](int n, int m) { return factorial[n] * inv_factorial[m] % MOD * inv_factorial[n - m] % MOD; }; function<long long(int, int)> f = [&](int i, int j) -> long long { int l = i, r = j; while (l <= j && a[l] == 0) l += 1; while (r >= i && a[r] == 0) r -= 1; if (l == j + 1) // all zero return power(2, j - i); if (i != l && j != r) // both ends have zeros { int x = l - i, y = j - r; long long coef = 0; for (int k = 0; k <= min(x, y); ++k) coef = (coef + choose(x, k) * choose(y, k)) % MOD; return f(l, r) * coef % MOD; } // at most one end has zeros i = l, j = r; long long ls = a[i], rs = a[j]; while (ls != rs) { if (ls < rs) ls += a[++i]; else rs += a[--j]; } if (i >= j) return 1; l = i + 1, r = j - 1; while (l <= j && a[l] == 0) l += 1; while (r >= i && a[r] == 0) r -= 1; if (l == j) // x, zeros, x return power(2, j - i); // x, zeros, something, zeros, x int x = l - i - 1, y = j - r - 1; long long coef = 0; for (int k = 0; k <= min(x, y) + 1; ++k) coef = (coef + choose(x + 1, k) * choose(y + 1, k)) % MOD; return f(l, r) * coef % MOD; }; long long res = f(1, n); printf("%lld\n", res); } int main() { int tests; cin >> tests; while (tests--) solve(); return 0; }
1738
F
Connectivity Addicts
\textbf{This is an interactive problem}. Given a simple undirected graph with $n$ vertices numbered from $1$ to $n$, your task is to color all the vertices such that for every color $c$, the following conditions hold: - The set of vertices with color $c$ is \textbf{connected}; - $s_c \leq n_c^2$, where $n_c$ is the number of vertices with color $c$, and $s_c$ is the sum of degrees of vertices with color $c$. It can be shown that there always exists a way to color all the vertices such that the above conditions hold. Initially, you are only given the number $n$ of vertices and the degree of each vertex. In each query, you can choose a vertex $u$. As a response, you will be given the $k$-th edge incident to $u$, if this is the $k$-th query on vertex $u$. You are allowed to make \textbf{at most $n$ queries}. An undirected graph is simple if it does not contain multiple edges or self-loops. The degree of a vertex is the number of edges incident to it. A set $S$ of vertices is connected if for every two different vertices $u, v \in S$, there is a path, which only passes through vertices in $S$, that connects $u$ and $v$. That is, there is a sequence of edges $(u_1, v_1), (u_2, v_2), \dots, (u_k, v_k)$ with $k \geq 1$ such that - $u_1 = u$, $v_k = v$, and $v_i = u_{i+1}$ for every $1 \leq i < k$; and - $u_i \in S$ and $v_i \in S$ for every $1 \leq i \leq k$. Especially, a set containing only one vertex is connected.
Let's consider the following BFS-like algorithm. Repeat the following procedure until all vertices are visited. Choose an unvisited vertex $u$ with the largest degree, and let $S$ be a set of vertices, initially consisting of the single vertex $u$. For every neighbor vertex $v$ of vertex $u$: if vertex $v$ is visited, color all vertices in $S$ with the same color as vertex $v$, and end the procedure this time; otherwise, add vertex $v$ to set $S$. If no visited neighbor is found, color all vertices in $S$ with a new color. It is clear that in each repetition of the procedure, the number $k$ of edges visited (i.e., the number of queries) causes at least $k$ vertices to be colored. Since all vertices will be colored exactly once eventually, the number of queries is no more than $n$. A careful analysis shows that the number of queries is no more than $n-C$, where $C$ is the number of different colors used. The time complexity of this algorithm can be $O(n)$, $O(n\log n)$, or $O(n^2)$, depending on the concrete implementation; any implementation of such time complexity can pass. It is clear that all vertices with the same color are connected. It remains to see why this algorithm colors all vertices such that $s_c \leq n_c^2$, where $n_c$ is the number of vertices with color $c$, and $s_c$ is the sum of degrees of vertices with color $c$. 
To see this, it can be shown by induction that after every repetition of the procedure, the number $n_c$ of vertices with an existing color $c$ is always no less than the degree of any vertex with color $c$. Since we enumerate vertices in decreasing order of their degrees, the degree $d_u$ of the current vertex $u$ satisfies $d_u \leq n_c$ for every existing color $c$. We consider two cases: During the procedure, no neighbor vertex $v$ of vertex $u$ is visited. Then $S$ consists of $u$ and all its neighbors, each of which is unvisited and hence has degree $\leq d_u$. So there are $d_u + 1$ vertices in $S$, and they are assigned a new color $c$, thereby $s_c \leq (d_u+1) d_u \leq (d_u+1)^2 = n_c^2$. During the procedure, we find a neighbor vertex $v$ of vertex $u$ that is visited. Let $c$ be the color of vertex $v$ and $n'_c$ be the number of vertices with color $c$ before coloring the vertices in $S$. We have $|S| \leq d_u \leq n'_c$. Let $n_c = n'_c+|S|$ be the number of vertices with color $c$ after coloring all vertices in $S$ with color $c$. Then $s_c \leq s'_c + |S| d_u \leq (n'_c)^2 + |S| n'_c \leq (n'_c+|S|)^2 = n_c^2$. This interesting problem surprisingly comes from a non-traditional algorithmic scenario: QUANTUM algorithms. Further reading: Christoph Dürr, Mark Heiligman, Peter Høyer, and Mehdi Mhalla. Quantum query complexity of some graph problems. SIAM Journal on Computing, 35(6):1310-1328, 2006.
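The degree condition alone can be verified with a tiny checker like the following (the interface is ours; connectivity is not checked here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Verify s_c <= n_c^2 for every color c, given each vertex's degree and
// color as parallel 0-indexed vectors.
bool degreeConditionHolds(const vector<int>& deg, const vector<int>& color) {
    map<int, pair<long long, long long>> stat; // color -> (n_c, s_c)
    for (size_t i = 0; i < deg.size(); ++i) {
        stat[color[i]].first += 1;       // n_c
        stat[color[i]].second += deg[i]; // s_c
    }
    for (const auto& kv : stat)
        if (kv.second.second > kv.second.first * kv.second.first)
            return false;
    return true;
}
```

For example, a triangle colored with one color has $n_c = 3$ and $s_c = 6 \leq 9$, so it passes; an isolated vertex of degree $5$ in its own color class fails, since $5 > 1^2$.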
[ "constructive algorithms", "dsu", "graphs", "greedy", "interactive", "shortest paths", "trees" ]
2,400
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<int> d(n + 1); for (int i = 1; i <= n; ++i) cin >> d[i]; auto query = [&](int x) { cout << "? " << x << endl; int y; cin >> y; return y; }; vector<int> visited(n + 1); vector<int> res(n + 1); int color_cnt = 0; while (1) { int x = 0; for (int i = 1; i <= n; ++i) if (!visited[i] && d[i] >= d[x]) x = i; if (x == 0) break; vector<int> q = { x }; int colored_vertex = 0; for (int i = 1; i <= d[x]; ++i) { int y = query(x); if (visited[y]) { colored_vertex = y; break; } q.push_back(y); } int color = colored_vertex == 0 ? ++color_cnt : res[colored_vertex]; for (auto z : q) { visited[z] = 1; res[z] = color; } } cout << "!"; for (int i = 1; i <= n; ++i) cout << " " << res[i]; cout << endl; } int main() { int tests; cin >> tests; while (tests--) solve(); return 0; }
1738
G
Anti-Increasing Addicts
You are given an $n \times n$ grid. We write $(i, j)$ to denote the cell in the $i$-th row and $j$-th column. For each cell, you are told whether you can delete it or not. Given an integer $k$, you are asked to delete \textbf{exactly} $(n-k+1)^2$ cells from the grid such that the following condition holds. - You cannot find $k$ not deleted cells $(x_1, y_1), (x_2, y_2), \dots, (x_k, y_k)$ that are strictly increasing, i.e., $x_i < x_{i+1}$ and $y_i < y_{i+1}$ for all $1 \leq i < k$. Your task is to find a solution, or report that it is impossible.
Consider increasing diagonals (sets of cells $(x, y)$ for which $x - y = c$ for some fixed $c$). Clearly, from the diagonal of length $t$, we have to delete at least $\max(0, t - (k-1))$ cells. There are $2$ diagonals of length $i$ for each $i$ from $1$ to $n-1$ and one diagonal of length $n$, so we have to delete at least $2 + 4 + \ldots + 2(n-k) + (n-k+1) = (n-k+1)^2$ cells from them in total. This means that we will delete precisely $\max(0, t - (k-1))$ cells from the diagonal of length $t$. Now, consider two adjacent diagonals of lengths $t-1$ and $t$ for some $t \ge k$. Let's call the diagonal of length $t$ large, and the one of length $t-1$ small. Let's enumerate cells of the large diagonal from $1$ to $t$, and of the small from $1$ to $t-1$. We have to delete precisely $t-k+1$ cells from the large diagonal, and $t-k$ from the small. Suppose that the cells deleted from the large diagonal have indexes $1 \le x_1<x_2<\ldots<x_{t-k+1} \le t$. For some $1 \le j \le t-k$, consider the path, containing cells from $1$-st to $(x_j-1)$-st in the large diagonal, from $x_j$-th to $(x_{j+1}-1)$-st in the small diagonal, and from $(x_{j+1}+1)$-st to $t$-th in the large diagonal. Note that this is an increasing sequence of cells of length $t-1$, so we have to delete at least $t-k$ cells from it. Exactly $t-k-1$ of the deleted cells of the large diagonal lie on this path (all of $x_1, \ldots, x_{t-k+1}$ except $x_j$ and $x_{j+1}$), so we have to delete at least one cell in the small diagonal in the range $[x_j, x_{j+1}-1]$. Note that the ranges $[x_j, x_{j+1}-1]$ for $1 \le j \le t-k$ don't intersect, and there are $t-k$ of them. So, we have to delete precisely one cell from each such range and not delete any cell outside of $[x_1, x_{t-k+1}-1]$. Surprisingly, these criteria are sufficient (meaning that for every two adjacent diagonals, the cells in the smaller one are deleted in these ranges determined from the cells deleted in the larger one). Let's first show how to solve the problem based on this and then how to prove that this is sufficient. 
If these criteria are indeed sufficient, then let's construct the set of deleted cells one by one (if it exists). How do we choose the first deleted cell on the main diagonal? Just choose the first cell which you can delete. How do we choose the first deleted cell on the diagonals adjacent to the main diagonal? Just choose the first cell which you can delete that goes after the deleted cell in the main diagonal. How do we choose the second deleted cell on the main diagonal and the first deleted cells on the diagonals of length $n-2$? Again, just choose the first cell that you are allowed to delete which does not violate the conditions. (and so on) You can simulate this greedy in $O(n^2)$: just keep, for each diagonal, the last deleted cell. If at any point there are no allowed cells, then the answer is NO; otherwise, we have found the construction. How to prove that this is sufficient? Let's prove the following statement. Lemma: In each cell $(x, y)$, write the number of non-deleted cells in the diagonal of $(x, y)$, up to the cell $(x, y)$. Denote this number by $a_{x, y}$. Then $a_{x-1, y} \le a_{x, y}$ for any $2 \le x \le n$, $1 \le y \le n$, and $a_{x, y-1} \le a_{x, y}$ for any $1 \le x \le n$, $2 \le y \le n$ (in other words, $a$ is nondecreasing by rows and columns). Proof: Almost trivial from our constraints. Let's show that $a_{x-1, y} \le a_{x, y}$, for example. If cell $(x-1, y)$ is on the main diagonal or lower, then cell $(x-1, y)$ is on a larger diagonal than $(x, y)$. We deleted $y - a_{x, y}$ cells in the diagonal of $(x, y)$ up to this cell. Therefore, the $(y - a_{x, y})$-th deleted cell in the larger diagonal has to have $y$ coordinate at most $y$ as well, so we deleted at least $y - a_{x, y}$ cells in the diagonal of $(x-1, y)$ up to that cell, and there are at most $a_{x, y}$ non-deleted cells there. A similar argument applies when $(x-1, y)$ is above the main diagonal. 
With this lemma, suppose that there is an increasing sequence of not deleted cells $(x_1, y_1), (x_2, y_2), \ldots, (x_k, y_k)$ (with $x_i < x_{i+1}, y_i<y_{i+1}$). Then it's easy to show that $a_{x_i, y_i}<a_{x_{i+1}, y_{i+1}}$. Indeed, $a_{x_i, y_i}\le a_{x_{i+1}-1, y_{i+1}-1} = a_{x_{i+1}, y_{i+1}} - 1$. But then we would get $a_{x_k, y_k} \ge k$, which obviously doesn't hold (there are at most $k-1$ not deleted cells in each diagonal by our construction). Bonus: It's possible to show that the answer is NO if and only if there is an increasing sequence of $k$ cells, each of which we aren't allowed to delete. Proof is left to the reader as an exercise.
[ "constructive algorithms", "dp", "greedy", "math" ]
2,900
#include <bits/stdc++.h> using namespace std; int n, k; pair<int, int> cell(int diag, int pos) { int st_x = 0; int st_y = 0; if (diag <= n - 1) st_y = (n - 1) - diag; if (diag >= n - 1) st_x = diag - (n - 1); return pair<int, int>(st_x + pos, st_y + pos); } void solve() { cin >> n >> k; vector<string> s(n); for (int i = 0; i < n; i++) cin >> s[i]; string full; for (int i = 0; i < n; i++) full += '1'; vector<string> t(n, full); vector<int> last_del(2 * n - 1, -1); int m = n - k + 1; for (int iter = 0; iter <= 2 * (m - 1); iter++) { vector<pair<int, int>> updates; for (int diag = 0; diag <= 2 * (n - 1); diag++) if (abs(diag - (n - 1)) <= min(iter, 2 * (m - 1) - iter) && abs(diag - (n - 1)) % 2 == iter % 2) { int nxt = last_del[diag] + 1; //diag-1 if (diag - 1 < n - 1) nxt = max(nxt, last_del[diag - 1] + 1); else nxt = max(nxt, last_del[diag - 1]); //diag+1 if (diag + 1 > n - 1) nxt = max(nxt, last_del[diag + 1] + 1); else nxt = max(nxt, last_del[diag + 1]); while (true) { auto cur = cell(diag, nxt); if (cur.first >= n || cur.second >= n) { cout << "NO" << '\n'; return; } if (s[cur.first][cur.second] == '0') nxt++; else { t[cur.first][cur.second] = '0'; break; } } updates.push_back(pair<int, int>(diag, nxt)); } for (auto it : updates) last_del[it.first] = it.second; } cout << "YES" << '\n'; for (auto it : t) cout << it << '\n'; } int main() { ios_base::sync_with_stdio(0); cin.tie(nullptr); int t; cin >> t; while (t--) solve(); }
1738
H
Palindrome Addicts
Your task is to maintain a queue consisting of lowercase English letters as follows: - "push $c$": insert a letter $c$ at the back of the queue; - "pop": delete a letter from the front of the queue. Initially, the queue is empty. After each operation, you are asked to count the number of \textbf{distinct} palindromic substrings in the string that are obtained by concatenating the letters from the front to the back of the queue. Especially, the number of distinct palindromic substrings of the empty string is $0$. A string $s[1..n]$ of length $n$ is palindromic if $s[i] = s[n-i+1]$ for every $1 \leq i \leq n$. The string $s[l..r]$ is a substring of string $s[1..n]$ for every $1 \leq l \leq r \leq n$. Two strings $s[1..n]$ and $t[1..m]$ are distinct, if at least one of the following holds. - $n \neq m$; - $s[i] \neq t[i]$ for some $1 \leq i \leq \min\{n,m\}$.
To count the number of distinct palindromic substrings, we adopt the powerful data structure called eertree (also known as palindromic tree or palindromic automaton). The number of distinct palindromic substrings of a string $s$ is related to the number of nodes in the eertree of $s$. See Wikipedia for its standard operations. In the following, we will consider how to maintain the eertree under push and pop queue operations. Pushing a character at the back is the standard eertree operation, so we only need to consider how to pop a character from the front. The key issue is to delete a node once the palindrome it represents no longer occurs in the current string. To achieve this, we maintain the following information for each node $v$: $\texttt{link_cnt}$: the number of nodes whose suffix link points to $v$; $\texttt{rightmost_occurrence}$: the rightmost occurrence of $v$; $\texttt{second_rightmost_occurrence}$: the second rightmost occurrence of $v$. Now it remains to consider how to maintain $\texttt{rightmost_occurrence}$ and $\texttt{second_rightmost_occurrence}$ in a lazy manner. After a character $c$ has been pushed at the back of the string $s$, let $v$ be the longest palindromic suffix of the current string $sc$. Update $\texttt{rightmost_occurrence}[v]$ and $\texttt{second_rightmost_occurrence}[v]$ immediately with the new occurrence of $v$. When a character is being popped from the front of the string $s$, let $v$ be the longest palindromic prefix of the current string (right before the pop operation). If $v$ occurs only once in $s$, then let $u$ be the node that $v$ links to, and update $\texttt{rightmost_occurrence}[u]$ and $\texttt{second_rightmost_occurrence}[u]$ with the occurrence of $u$ induced by $v$ (which is a suffix of $v$). Here, note that $u$ occurs at least twice in $v$, as its prefix and as its suffix; the suffix occurrence of $u$ in $v$ is the one induced by $v$. In this way, we can maintain the eertree under push and pop queue operations in $O(\Sigma n)$ time, where $\Sigma = 26$ is the size of the alphabet.
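As a small self-contained illustration (the function name and tuple-based representation are ours, not part of any model solution), the lazy bookkeeping of the two rightmost occurrences of a node reduces to folding each new occurrence position into the two largest values seen so far:

```python
def update_occurrence(rightmost, second, occ):
    # Fold a new occurrence position into the two largest seen so far;
    # -1 plays the role of "no occurrence yet".
    if occ > rightmost:
        return occ, rightmost
    if occ > second:
        return rightmost, occ
    return rightmost, second
```

The same rule is applied both on a push (for the longest palindromic suffix) and on a pop (for the node the deleted prefix links to).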
Further reading: Takuya Mieno, Kiichi Watanabe, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda. Palindromic trees for a sliding window and its applications. Information Processing Letters, 173:106174, 2022. An alternative approach Answer the queries offline. Since we only push characters at the back and remove characters at the front, we can deal with all operations offline and find the whole string $s$ with its characters deleted during the operations (This can be achieved easily by only considering push operations). For example, the whole string of the sample input is "$aaabbaab$". In this way, every query of the number of distinct palindromic substrings is a range query of the form $(l, r)$ that asks the number of distinct palindromic substrings in $s[l..r]$. Indeed, this kind of queries can be answered in $O(\log n)$ time per query with $O(n \log n)$ time preprocess. Further reading: Mikhail Rubinchik and Arseny M. Shur. Counting palindromes in substrings. In Proceedings of the 24th International Conference on String Processing and Information Retrieval, pages 290-303, 2017.
[ "data structures", "strings" ]
3,300
#include <bits/stdc++.h> using namespace std; template<int alpha> class EerQueue { static_assert(alpha > 0); public: struct Node { std::array<Node*, alpha> next; Node* pre, * fail; int len; int rightmost_occurence, second_rightmost_occurence; // The (second) rightmost occurence of the palindrome associated with this Node. int fail_cnt; // The count of fails that link to this Node. Node() : pre(nullptr), fail(nullptr), len(0), next{}, rightmost_occurence(-1), second_rightmost_occurence(-1), fail_cnt(0) { } void UpdateOccurence(int occurence) { if (occurence > rightmost_occurence) { second_rightmost_occurence = rightmost_occurence; rightmost_occurence = occurence; } else if (occurence > second_rightmost_occurence) second_rightmost_occurence = occurence; } }; private: int node_cnt; Node* odd_root, * even_root, * cur; // cur is the Node of the longest suffix palindrome. Node* NewNode(int len = 0) { Node* it = new Node(); it->len = len; return it; } int start; std::vector<int> data; std::vector<Node*> prefix_palindrome; int size; Node* GetFail(Node* it, int pos) { while (pos - it->len - 1 < start || data[pos - it->len - 1] != data[pos]) it = it->fail; return it; } public: EerQueue() : node_cnt(0), start(0), size(0) { odd_root = NewNode(-1); even_root = NewNode(0); odd_root->fail = odd_root; odd_root->fail_cnt += 1; even_root->fail = odd_root; odd_root->fail_cnt += 1; cur = even_root; } Node* Push(int x) { assert(0 <= x && x < alpha); int pos = data.size(); data.push_back(x); prefix_palindrome.push_back(nullptr); Node* it = GetFail(cur, pos); if (it->next[x] == nullptr) { Node* tmp = NewNode(it->len + 2); tmp->pre = it; tmp->fail = GetFail(it->fail, pos)->next[x]; if (tmp->fail == nullptr) tmp->fail = even_root; tmp->fail->fail_cnt += 1; it->next[x] = tmp; size += 1; } cur = it->next[x]; int occurence = pos - cur->len + 1; cur->UpdateOccurence(occurence); prefix_palindrome[occurence] = cur; return cur; } void Pop() { assert(start < (int)data.size()); Node* 
longest_prefix_palindrome = prefix_palindrome[start]; if (longest_prefix_palindrome->len == (int)data.size() - start) cur = cur->fail; Node* it = longest_prefix_palindrome->fail; if (start != (int)data.size() - 1) { int occurence = start + longest_prefix_palindrome->len - it->len; it->UpdateOccurence(occurence); if (prefix_palindrome[occurence] == nullptr || it->len > prefix_palindrome[occurence]->len) prefix_palindrome[occurence] = it; } if (longest_prefix_palindrome->fail_cnt == 0 && longest_prefix_palindrome->second_rightmost_occurence < start) { int x = data[start]; longest_prefix_palindrome->pre->next[x] = nullptr; it->fail_cnt -= 1; size -= 1; } start += 1; } int NumOfPalindromes() { return size; } }; void solve() { int q; cin >> q; constexpr int alpha = 26; EerQueue<alpha> g; while (q--) { string op; cin >> op; if (op == "push") { char c; cin >> c; g.Push(c - 'a'); } else g.Pop(); cout << g.NumOfPalindromes() << "\n"; } } int main() { ios_base::sync_with_stdio(false); cin.tie(0); solve(); return 0; }
1739
A
Immobile Knight
There is a chess board of size $n \times m$. The rows are numbered from $1$ to $n$, the columns are numbered from $1$ to $m$. Let's call a cell isolated if a knight placed in that cell can't move to any other cell on the board. Recall that a chess knight moves two cells in one direction and one cell in a perpendicular direction: Find any isolated cell on the board. If there are no such cells, print any cell on the board.
Let's consider some cases. If at least one of $n$ or $m$ is $1$, then every cell is isolated: a knight has no room to move even one cell in the perpendicular direction. If at least one of $n$ or $m$ is at least $4$, then the knight always has at least one move: no matter where you place it, it can move two cells along the dimension that is at least $4$ and one cell in the perpendicular direction, since the other dimension is at least $2$. Three cases are left: $(2, 2)$, $(2, 3)$ and $(3, 3)$. For all of these, the middle cell is isolated. That cell is $(\lfloor \frac n 2 \rfloor + 1, \lfloor \frac m 2 \rfloor + 1)$. Since it doesn't matter which cell you print in the first two cases, you can always print $(\lfloor \frac n 2 \rfloor + 1, \lfloor \frac m 2 \rfloor + 1)$. Overall complexity: $O(1)$ per testcase. Alternatively, you can check every possible cell. Iterate over a cell and check all eight possible knight moves from it. If none of them lands inside the board, the cell is isolated. Overall complexity: $O(nm)$ per testcase.
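The $O(1)$ formula can be cross-checked against the $O(nm)$ brute force; a minimal Python sketch (function names are ours):

```python
def isolated_cell(n, m):
    # O(1) answer from the case analysis: the middle cell of the board
    return n // 2 + 1, m // 2 + 1

def is_isolated(n, m, x, y):
    # Brute-force check of all eight knight moves from (x, y)
    for dx in (-2, -1, 1, 2):
        for dy in (-2, -1, 1, 2):
            if abs(dx * dy) != 2:
                continue
            if 1 <= x + dx <= n and 1 <= y + dy <= m:
                return False
    return True
```

One can verify that, for every board up to $6 \times 6$, whenever the board contains any isolated cell at all, the cell returned by `isolated_cell` is isolated.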
[ "implementation" ]
800
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    forn(_, t){
        int n, m;
        scanf("%d%d", &n, &m);
        int svx = 1, svy = 1;
        for (int x = 1; x <= n; ++x){
            for (int y = 1; y <= m; ++y){
                bool ok = true;
                for (int dx : {-2, -1, 1, 2}){
                    for (int dy : {-2, -1, 1, 2}){
                        if (abs(dx * dy) != 2) continue;
                        if (1 <= x + dx && x + dx <= n && 1 <= y + dy && y + dy <= m)
                            ok = false;
                    }
                }
                if (ok){
                    svx = x;
                    svy = y;
                }
            }
        }
        printf("%d %d\n", svx, svy);
    }
}
1739
B
Array Recovery
For an array of \textbf{non-negative} integers $a$ of size $n$, we construct another array $d$ as follows: $d_1 = a_1$, $d_i = |a_i - a_{i - 1}|$ for $2 \le i \le n$. Your task is to restore the array $a$ from a given array $d$, or to report that there are multiple possible arrays.
Note that $a_i = a_{i-1} + d_i$ or $a_i = a_{i-1} - d_i$. Since there is no upper bound for the values of $a_i$, the choice $a_i = a_{i-1} + d_i$ for all $i$ always produces a valid array. It remains to check whether there are other valid arrays. To do this, it is enough to check whether there is a position $pos$ such that: $pos > 1$; $d_{pos} \ne 0$; changing $a_{pos} = a_{pos-1} + d_{pos}$ to $a_{pos} = a_{pos-1} - d_{pos}$ doesn't result in a negative value of $a_{pos}$. The reason for requiring $d_{pos} \ne 0$ is that when $d_{pos} = 0$, the array $a$ stays the same no matter whether we choose plus or minus. If you could change at least one sign to minus, that would give another valid answer.
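The check from the editorial fits in a few lines; a sketch (the function name is ours, and `None` signals the multiple-answers case):

```python
def recover(d):
    # Returns the unique non-negative array a with the given difference
    # array d, or None if more than one such array exists.
    a = [d[0]]
    for x in d[1:]:
        if x != 0 and a[-1] - x >= 0:
            return None              # the minus sign would also give a valid a_i
        a.append(a[-1] + x)
    return a
```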
[ "constructive algorithms", "greedy", "math" ]
1,100
for _ in range(int(input())):
    n = int(input())
    ans = [0]
    for x in map(int, input().split()):
        if x != 0 and ans[-1] - x >= 0:
            print(-1)
            break
        else:
            ans.append(ans[-1] + x)
    else:
        print(*ans[1:])
1739
C
Card Game
Consider a game with $n$ cards ($n$ is even). Each card has a number written on it, between $1$ and $n$. All numbers on the cards are different. We say that a card with number $x$ is stronger than a card with number $y$ if $x > y$. Two players, Alex and Boris, play this game. In the beginning, each of them receives exactly $\frac{n}{2}$ cards, so each card belongs to exactly one player. Then, they take turns. Alex goes first, then Boris, then Alex again, and so on. On a player's turn, he must play \textbf{exactly one} of his cards. Then, if the opponent doesn't have any cards \textbf{stronger} than the card played, the opponent loses, and the game ends. Otherwise, the opponent has to play a stronger card (exactly one card as well). These two cards are removed from the game, and the turn ends. If there are no cards left, the game ends in a draw; otherwise it's the opponent's turn. Consider all possible ways to distribute the cards between two players, so that each of them receives exactly half of the cards. You have to calculate three numbers: - the number of ways to distribute the cards so that Alex wins; - the number of ways to distribute the cards so that Boris wins; - the number of ways to distribute the cards so that the game ends in a draw. You may assume that both players play optimally (i. e. if a player can win no matter how his opponent plays, he wins). Two ways to distribute the cards are different if there is at least one card such that, in one of these ways, it is given to Alex, and in the other way, it is given to Boris. For example, suppose $n = 4$, Alex receives the cards $[2, 3]$, and Boris receives the cards $[1, 4]$. Then the game may go as follows: - if Alex plays the card $2$, then Boris has to respond with the card $4$. Then, Alex's turn ends, and Boris' turn starts. Boris has only one card left, which is $1$; he plays it, and Alex responds with the card $3$. 
So, the game ends in a draw; - if Alex plays the card $3$, then Boris has to respond with the card $4$. Then, Alex's turn ends, and Boris' turn starts. Boris has only one card left, which is $1$; he plays it, and Alex responds with the card $2$. So, the game ends in a draw. So, in this case, the game ends in a draw.
The example tests suggest that there is only one possible distribution with a draw. Let's find out why it is so. We will use a process similar to induction/recursion to distribute the cards between the two players so that the game ends in a draw: suppose Alex receives the card $n$. Then he wins since he can play it immediately. So, for the game to result in a draw, Boris must receive the card $n$. Suppose Boris receives the card $n-1$. Then he wins since he also has the card $n$: he can use it to answer any first move of Alex, and then win the game by playing $n-1$. So, for the game to result in a draw, Alex must receive the card $n-1$. Suppose Boris receives the card $n-2$. Then he wins since he also has the card $n$: if Alex plays the card $n-1$, Boris responds with $n$ and then plays $n-2$; if Alex plays some other card, Boris responds with $n-2$ and then plays $n$. So, for the game to result in a draw, Alex must receive the card $n-2$. And so on. In fact, if Alex receives the card $n-1$ and Boris receives the card $n$, Alex must play the card $n-1$ or something equivalent to it on the first move, and Boris must respond with the card $n$, so we can consider the game without these two cards with the roles swapped. So, if we consider the distribution of cards as a string with characters A and B, where A denotes a card belonging to Alex, B denotes a card belonging to Boris, and the $i$-th character of the string represents the card $n-i+1$, the only possible distribution for the draw is BAABBAAB... But there's more to this string representation of the distribution of cards: the first character that differs from this pattern determines the winner; if the first differing character is A in the draw distribution and B in the distribution we consider, the winner is Boris; otherwise, the winner is Alex.
This may lead us to the following ways to count the number of possible distributions which win/lose for Alex: we can use dynamic programming of the form $dp_{x,y,t}$, where $x$ is the number of characters A we used, $y$ is the number of characters B we used, and $t$ is $0$, $1$ or $2$ depending on whether our string coincides with the draw string ($t = 0$), differs from it in a way that Alex wins ($t = 1$), or differs from it in a way that Boris wins ($t = 2$); the actual value of $dp_{x,y,t}$ is the number of ways to reach this state of the dynamic programming. The answer is then stored in the states of the form $dp_{\frac{n}{2},\frac{n}{2},t}$. Or we can use combinatorics: let's iterate on the length of the prefix that is common to the draw string and the string representing the distribution of cards, and then count the number of ways to distribute the remaining characters with a binomial coefficient. To calculate the binomial coefficients, we can use one of the following methods: Pascal's triangle, precalculating factorials and modular inverses, or calculating factorials with big integers in Java or Python.
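The combinatorial approach boils down to the following recurrence, equivalent to the model solution's recursion (names are ours; modular arithmetic is omitted here for brevity):

```python
from math import comb

def card_game(n):
    # (ways Alex wins, ways Boris wins, ways to draw) for even n
    if n == 2:
        return (1, 0, 1)
    alex, boris, _ = card_game(n - 2)
    return (comb(n - 1, n // 2) + boris,   # Alex takes card n and wins at once
            comb(n - 2, n // 2) + alex,    # Boris takes both n and n-1 and wins
            1)                             # the draw distribution is unique
```

A quick sanity check: the three counts always sum to $\binom{n}{n/2}$, the total number of distributions.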
[ "combinatorics", "constructive algorithms", "dp", "games" ]
1,500
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)


def choose(n, k):
    return fact(n) // fact(k) // fact(n - k)


def calc(n):
    if n == 2:
        return [1, 0, 1]
    else:
        a = calc(n - 2)
        return [choose(n - 1, n // 2) + a[1], choose(n - 2, n // 2) + a[0], 1]


t = int(input())
for i in range(t):
    mod = 998244353
    n = int(input())
    a = calc(n)
    a = list(map(lambda x: x % mod, a))
    print(*a)
1739
D
Reset K Edges
You are given a rooted tree, consisting of $n$ vertices. The vertices are numbered from $1$ to $n$, the root is the vertex $1$. You can perform the following operation \textbf{at most} $k$ times: - choose an edge $(v, u)$ of the tree such that $v$ is a parent of $u$; - remove the edge $(v, u)$; - add an edge $(1, u)$ (i. e. make $u$ with its subtree a child of the root). The height of a tree is the maximum depth of its vertices, and the depth of a vertex is the number of edges on the path from the root to it. For example, the depth of vertex $1$ is $0$, since it's the root, and the depth of all its children is $1$. What's the smallest height of the tree that can be achieved?
Start with the following. Let's look at the input format and consider what the operation actually does to it. Since it only changes the parent of some vertex, it modifies only one value in it. Moreover, it just assigns it to $1$. Thus, the goal is to assign at most $k$ parent values to $1$ to minimize the resulting height of the tree. In particular, that implies that we can freely rearrange the operations, since the assignments don't depend on each other. One more conclusion. Imagine we have already built some answer: one by one, we moved some subtrees to be children of the root. It could happen that we first moved the subtree of a vertex $u$ and then applied an operation to an edge inside the subtree of $u$. It's always possible to rearrange the operations in the answer to avoid that: just apply the operations in order of decreasing depth of the cut vertex. If we knew the height $h$ we want to achieve, we could make sure that every cut subtree has height at most $h-1$ (since its height increases by $1$ after gluing it to the root), and then pretend that the subtree doesn't exist anymore. Moreover, it's only ever useful to cut subtrees of height at most $h-1$: if you cut a higher subtree, the answer can't be smaller than $h+1$, since the operations are rearranged so that this subtree is never touched again. So let's fix that height $h$ and try to solve the inverse problem: how many operations does it take to make the height of the tree at most $h$? Obviously, the answer to this problem is non-increasing in $h$ - the greater we allow the height to be, the fewer operations are required. Thus, we can apply binary search over $h$ to find the smallest height achievable with at most $k$ operations. Now we want to repeatedly choose subtrees of height at most $h-1$ and cut them off until the height of the tree becomes at most $h$. Let's think greedily.
If the height of the tree is not at most $h$ yet, then there exists a vertex with depth greater than $h$. Let's look at the deepest of them. That leaf has to be cut off inside some subtree - otherwise, the tree won't become any lower. Which subtree is best? What options do we have? The vertex itself and all its parents up to $h-1$ levels above. It's always optimal to cut the highest of them - the $(h-1)$-st parent - since that removes at least all the vertices of any other option and possibly some more. It's also always valid to remove the $(h-1)$-st parent, since its subtree always has height exactly $h-1$: the vertex we are looking at is the deepest in the entire tree, so there are no deeper vertices in the subtree of the $(h-1)$-st parent. Thus, the strategy is to keep cutting the $(h-1)$-st parent of the deepest vertex until the height of the tree is at most $h$. Now for the implementation details. First, we can process the vertices from the deepest upwards in their order in the original tree: the operation only removes some vertices but doesn't change the depth of the remaining ones. For example, you can do a BFS from the root to find the order. Now for the $(h-1)$-st parent. Let's find it for each vertex before starting the process. Run a DFS and maintain the stack of ancestors: when going down to a child, append it to the stack; when exiting, pop it from the stack. Then the $(h-1)$-st parent is the $(h-1)$-st element from the top of the stack. To access it, simulate the stack with a vector (C++) or a list (Python). Finally, we have to determine whether the current vertex in the order has already been removed. For that, maintain a boolean array $\mathit{used}$ for the removed vertices. Once you apply the operation, run a DFS from the cut vertex and mark all of its not-yet-marked descendants in $\mathit{used}$. If you don't enter already marked vertices, there will be no more than $n$ calls of the DFS in total.
The number of cut vertices is the answer for the fixed height $h$. Overall complexity: $O(n \log n)$ per testcase.
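A sketch of the check used inside the binary search, following the greedy described above (a hypothetical Python helper, not the model solution; `children[v]` lists the children of `v` and vertex `0` is the root):

```python
from collections import deque

def cuts_needed(children, h):
    # Number of operations needed to make the height of the tree at most h.
    n = len(children)
    anc = [-1] * n        # the (h-1)-st parent of each deep-enough vertex
    depth = [0] * n
    stack = [(0, False)]  # iterative DFS keeping the stack of ancestors
    path = []
    while stack:
        v, leaving = stack.pop()
        if leaving:
            path.pop()
            continue
        path.append(v)
        if len(path) >= h:
            anc[v] = path[-h]          # h-1 levels above v
        stack.append((v, True))
        for u in children[v]:
            depth[u] = depth[v] + 1
            stack.append((u, False))
    order = []                         # BFS order: sorted by depth
    q = deque([0])
    while q:
        v = q.popleft()
        order.append(v)
        q.extend(children[v])
    removed = [False] * n
    res = 0
    for v in reversed(order):          # process the deepest vertices first
        if removed[v] or depth[v] <= h:
            continue
        res += 1
        sub = [anc[v]]                 # cut the (h-1)-st parent with its subtree
        while sub:
            u = sub.pop()
            if removed[u]:
                continue
            removed[u] = True
            sub.extend(children[u])
    return res
```

Binary searching for the smallest `h` with `cuts_needed(children, h) <= k` then gives the answer.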
[ "binary search", "data structures", "dfs and similar", "graphs", "greedy", "trees" ]
1,900
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

int n;
vector<vector<int>> g;
vector<int> st;
vector<int> pd;

void init(int v, int d){
    st.push_back(v);
    if (int(st.size()) - d >= 0)
        pd[v] = st[st.size() - d];
    for (int u : g[v])
        init(u, d);
    st.pop_back();
}

vector<char> used;

void dfs(int v){
    used[v] = true;
    for (int u : g[v])
        if (!used[u])
            dfs(u);
}

int get(int d){
    pd.assign(n, -1);
    init(0, d);
    vector<int> ord, h(n);
    queue<int> q;
    q.push(0);
    while (!q.empty()){
        int v = q.front();
        q.pop();
        ord.push_back(v);
        for (int u : g[v]){
            q.push(u);
            h[u] = h[v] + 1;
        }
    }
    reverse(ord.begin(), ord.end());
    used.assign(n, 0);
    int res = 0;
    for (int v : ord)
        if (!used[v] && h[v] > d){
            ++res;
            dfs(pd[v]);
        }
    return res;
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--){
        int k;
        scanf("%d%d", &n, &k);
        g.assign(n, vector<int>());
        for (int i = 1; i < n; ++i){
            int p;
            scanf("%d", &p);
            --p;
            g[p].push_back(i);
        }
        int l = 1, r = n - 1;
        int ans = n;
        while (l <= r){
            int m = (l + r) / 2;
            if (get(m) <= k){
                ans = m;
                r = m - 1;
            }
            else{
                l = m + 1;
            }
        }
        printf("%d\n", ans);
    }
    return 0;
}
1739
E
Cleaning Robot
Consider a hallway, which can be represented as the matrix with $2$ rows and $n$ columns. Let's denote the cell on the intersection of the $i$-th row and the $j$-th column as $(i, j)$. The distance between the cells $(i_1, j_1)$ and $(i_2, j_2)$ is $|i_1 - i_2| + |j_1 - j_2|$. There is a cleaning robot in the cell $(1, 1)$. Some cells of the hallway are clean, other cells are dirty (the cell with the robot is clean). You want to clean the hallway, so you are going to launch the robot to do this. After the robot is launched, it works as follows. While at least one cell is dirty, the robot chooses \textbf{the closest (to its current cell) cell} among those which are dirty, moves there and cleans it (so the cell is no longer dirty). After cleaning a cell, the robot again finds the closest dirty cell \textbf{to its current cell}, and so on. This process repeats until the whole hallway is clean. However, there is a critical bug in the robot's program. If at some moment, there are multiple closest (to the robot's current position) dirty cells, the robot malfunctions. You want to clean the hallway in such a way that the robot doesn't malfunction. \textbf{Before launching the robot}, you can clean some (possibly zero) of the dirty cells yourself. However, you don't want to do too much dirty work yourself while you have this nice, smart (yet buggy) robot to do this. Note that you cannot make a clean cell dirty. Calculate the maximum possible number of cells you can leave dirty before launching the robot, so that it doesn't malfunction.
Why did the author choose the width of the hallway to be only $2$? Well, in that case you can show that the robot never moves to the left while cleaning. That is no longer true already at width $3$. When does the robot break? Let the robot currently be in the cell $(j, i)$ ($0$-indexed row $j$, column $i$) and let the next column with a dirty cell be $\mathit{nxt}_i$ (possibly, $\mathit{nxt}_i = i$). The robot breaks only if both $(1 - j, \mathit{nxt}_i)$ and $(j, \mathit{nxt}_i)$ are dirty. That suggests a dynamic programming solution. Since we only care about $O(1)$ next columns, we would want some $dp[i][j]$ - the largest number of dirty cells we can leave to the robot if we have processed the first $i$ columns of the hallway and are currently standing in the $j$-th row of the $i$-th column - maybe with some additional state describing the current or the next column. We want the dp to maintain the invariant that everything to the left of the $i$-th column is cleaned in such a way that the robot can reach the cell $(j, i)$. We can choose when to fix the $i$-th column: either maintain it correct prior to entering the state or handle it in the transition to the next one. I chose the former option. There probably exist a million different dps that work; I'll describe the one I did. Let $dp[i][j][f]$ be the largest number of dirty cells that we can leave to the robot if: we fixed which of the dirty cells in the first $i$ columns, inclusive, are cleaned by hand; the robot reaches the cell $(j, i)$ from the left; $f$ is true if the cell in the opposite row of the $i$-th column is dirty. The transitions handle what to do with the dirty cells in the $(i+1)$-st column and where the robot goes based on that.
In particular, there are the following transitions: if $f$ is true, then we have to clean the cell $(j, i + 1)$ ourselves, and the robot will move into $(1 - j, i + 1)$ - otherwise the robot breaks from having two options; if $f$ is false, then let's say that the robot doesn't break immediately but moves into the next column in a unique way: it moves horizontally first, then possibly vertically; so we can either leave the next column as is, and the robot will move into $(j, i + 1)$ if the cell $(1 - j, i + 1)$ is clean, or into $(1 - j, i + 1)$ if it's dirty; or we can clean the cell $(1 - j, i + 1)$ ourselves, and the robot will move into $(j, i + 1)$. Since we maintained the invariant that the $i$-th column is valid, we can update the answer from all four states in the last column. Overall complexity: $O(n)$.
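A direct Python port of this dp, mirroring the model solution's transitions (rows are indexed $0/1$, the character '1' marks a dirty cell, and the robot starts at row $0$ of column $0$; the function name is ours):

```python
def max_dirty(s):
    # s: list of two '0'/'1' strings of equal length
    n = len(s[0])
    NEG = float('-inf')
    # dp[i][j][f]: best count of dirty cells left, robot in row j of column i,
    # f = 1 iff the cell in the opposite row of column i is still dirty
    dp = [[[NEG, NEG] for _ in range(2)] for _ in range(n)]
    below = 1 if s[1][0] == '1' else 0
    dp[0][0][below] = below
    dp[0][0][0] = max(dp[0][0][0], 0)
    for i in range(n - 1):
        for j in range(2):
            nj = 1 if s[j][i + 1] == '1' else 0        # same row, next column
            nj1 = 1 if s[j ^ 1][i + 1] == '1' else 0   # other row, next column
            # f = 1: clean (j, i+1) by hand, the robot flips rows
            dp[i + 1][j ^ 1][0] = max(dp[i + 1][j ^ 1][0], dp[i][j][1] + nj1)
            # f = 0: leave the next column as is
            dp[i + 1][j][nj1] = max(dp[i + 1][j][nj1], dp[i][j][0] + nj1 + nj)
            # f = 0: clean (1-j, i+1) by hand
            dp[i + 1][j][0] = max(dp[i + 1][j][0], dp[i][j][0] + nj)
    return max(dp[n - 1][0] + dp[n - 1][1])
```

For example, with the two diagonal dirty cells $(1, 0)$ and $(0, 1)$ the robot would face a tie at distance $1$, so only one of them can be left dirty.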
[ "bitmasks", "dp" ]
2,400
#include <bits/stdc++.h>

using namespace std;

#define forn(i, n) for(int i = 0; i < int(n); i++)

const int INF = 1e9;

int main(){
    int n;
    cin >> n;
    vector<string> s(2);
    forn(i, 2)
        cin >> s[i];
    vector<array<array<int, 2>, 2>> dp(n + 1);
    forn(i, n + 1) forn(j, 2) forn(k, 2)
        dp[i][j][k] = -INF;
    dp[0][0][s[1][0] == '1'] = s[1][0] == '1';
    dp[0][0][0] = 0;
    forn(i, n - 1) forn(j, 2){
        int nxtj = s[j][i + 1] == '1';
        int nxtj1 = s[j ^ 1][i + 1] == '1';
        dp[i + 1][j ^ 1][0] = max(dp[i + 1][j ^ 1][0], dp[i][j][1] + nxtj1);
        dp[i + 1][j][nxtj1] = max(dp[i + 1][j][nxtj1], dp[i][j][0] + nxtj1 + nxtj);
        dp[i + 1][j][0] = max(dp[i + 1][j][0], dp[i][j][0] + nxtj);
    }
    cout << max({dp[n - 1][0][0], dp[n - 1][0][1], dp[n - 1][1][0], dp[n - 1][1][1]}) << '\n';
}
1739
F
Keyboard Design
Monocarp has a dictionary of $n$ words, consisting of $12$ first letters of the Latin alphabet. The words are numbered from $1$ to $n$. In every pair of adjacent characters in each word, the characters are different. For every word $i$, Monocarp also has an integer $c_i$ denoting how often he uses this word. Monocarp wants to design a keyboard that would allow him to type some of the words easily. A keyboard can be denoted as a sequence of $12$ first letters of the Latin alphabet, where each letter from a to l appears exactly once. A word can be typed with the keyboard easily if, for every pair of adjacent characters in the word, these characters are adjacent in the keyboard as well. The optimality of the keyboard is the sum of $c_i$ over all words $i$ that can be typed easily with it. Help Monocarp to design a keyboard with the maximum possible optimality.
For each word, let's consider a graph on $12$ vertices where the $i$-th and the $j$-th vertices are connected by an edge iff the $i$-th character of the alphabet is adjacent to the $j$-th character of the alphabet in this string. Obviously, this graph is connected (except for the isolated vertices). If there is a vertex of degree $3$ or more in this graph, or if there is a cycle in this graph, it is impossible to design a keyboard to type the word easily: in the first case, the letter represented by that vertex must have at least three neighbors on the keyboard, but can have only at most two; in the second case, the keyboard must be cyclic (and it is not). So, the word can be typed easily only if the graph representing it consists of one path and several isolated vertices. Let's write the letters along the path we constructed for the word in a single string. For example, for the word abacabacd, we get edges ab, ac and cd in the graph, so the letters along the path are either dcab or bacd (and, obviously, one can be obtained from the other by reversing the string). Let $f(s)$ and $f'(s)$ be the two strings we obtain from the word $s$ using this method. Now, we claim that the word $s$ can be typed easily if and only if one of these two strings ($f(s)$ and $f'(s)$) is a substring of the keyboard - this would mean that every pair of letters that should be on adjacent positions are actually on adjacent positions. Okay, now we construct $f(s_i)$ and $f'(s_i)$ for each word, and our goal is to find the permutation of the first $12$ characters of Latin alphabet such that the sum of $c_i$ over all words having either $f(s_i)$ or $f'(s_i)$ as a substring is the maximum possible. 
There are two key observations that allow us to solve this problem: $f(s_i)$ and $f'(s_i)$ cannot both be substrings of the same keyboard (the proof is simple: if $f(s_i)$ is a substring, its first character must come right before its second character; and if $f'(s_i)$ is a substring, its second-to-last character (which is the second character of $f(s_i)$) must come right before its last character (which is the first character of $f(s_i)$) - a contradiction); neither $f(s_i)$ nor $f'(s_i)$ can appear in the keyboard twice (obvious, since the keyboard is a permutation). So, we can reformulate the problem as follows: let $c_i$ be the cost of the string $f(s_i)$ and the cost of the string $f'(s_i)$ as well; find the permutation of the first $12$ characters of the Latin alphabet so that its cost (which is the sum of costs of its substrings) is the maximum possible. To solve this problem, we can store the strings in an Aho-Corasick automaton, and for every state of the automaton, precalculate the total cost of all strings ending in this state (that is, the cost of this state and all states reachable from it via the suffix links). Then run a dynamic programming of the form $dp_{mask,v}$ - the maximum possible cost of a partial keyboard if we have used the $mask$ of characters and the Aho-Corasick automaton is currently in the state $v$. This dynamic programming runs in $O(2^K \cdot K \cdot A)$, where $K$ is the size of the alphabet ($12$), and $A$ is the size of the automaton (up to $4000$).
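A sketch of extracting $f(s)$ for a single word (the function name is ours; it returns one of the two orientations, or None when the word can never be typed easily):

```python
def path_string(word):
    # Build the graph of letters adjacent in the word and walk its path.
    adj = {}
    for x, y in zip(word, word[1:]):
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    if not adj:
        return word[0]                # one-letter word
    if any(len(v) > 2 for v in adj.values()):
        return None                   # some letter would need 3+ neighbours
    ends = sorted(c for c, v in adj.items() if len(v) == 1)
    if not ends:
        return None                   # the graph is a cycle
    prev, cur = None, ends[0]
    res = [cur]
    while True:
        nxt = [d for d in adj[cur] if d != prev]
        if not nxt:
            break
        prev, cur = cur, nxt[0]
        res.append(cur)
    if len(res) != len(adj):
        return None                   # the path doesn't cover every letter
    return ''.join(res)
```

For the editorial's example abacabacd this yields bacd (the other orientation, dcab, is its reverse).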
[ "bitmasks", "data structures", "dp", "string suffix structures", "strings" ]
2,600
#include<bits/stdc++.h>
using namespace std;

const int N = 10043;
const int K = 12;

int tsz = 0;
int trie[N][K];
int aut[N][K];
int lnk[N];
int p[N];
int pchar[N];
int cost[N];
int ncost[N];

int newNode() {
    lnk[tsz] = -1;
    ncost[tsz] = -1;
    cost[tsz] = 0;
    for(int i = 0; i < K; i++) {
        trie[tsz][i] = aut[tsz][i] = -1;
    }
    return tsz++;
}

int nxt(int x, int y) {
    if(trie[x][y] == -1) {
        trie[x][y] = newNode();
        pchar[trie[x][y]] = y;
        p[trie[x][y]] = x;
    }
    return trie[x][y];
}

int go(int x, int y);

int get_lnk(int x) {
    if(lnk[x] != -1) return lnk[x];
    int& d = lnk[x];
    if(x == 0 || p[x] == 0) return d = 0;
    return d = go(get_lnk(p[x]), pchar[x]);
}

int go(int x, int y) {
    if(aut[x][y] != -1) return aut[x][y];
    int& d = aut[x][y];
    if(trie[x][y] != -1) return d = trie[x][y];
    if(x == 0) return d = 0;
    return d = go(get_lnk(x), y);
}

void add(string s, int c) {
    int cur = 0;
    for(auto x : s) cur = nxt(cur, x - 'a');
    cost[cur] += c;
}

int calc(int x) {
    if(ncost[x] != -1) return ncost[x];
    ncost[x] = cost[x];
    int y = get_lnk(x);
    if(y != x) ncost[x] += calc(y);
    return ncost[x];
}

int main() {
    int root = newNode();
    int n;
    cin >> n;
    for(int i = 0; i < n; i++) {
        string s;
        int x;
        cin >> x >> s;
        map<char, set<char>> adj;
        for(int j = 0; j + 1 < s.size(); j++) {
            adj[s[j]].insert(s[j + 1]);
            adj[s[j + 1]].insert(s[j]);
        }
        bool bad = false;
        string res = "";
        char c;
        for(c = 'a'; c <= 'l'; c++) {
            if(!adj.count(c)) continue;
            if(adj[c].size() >= 3) bad = true;
            if(adj[c].size() == 1) break;
        }
        if(c == 'm' || bad) continue;
        res.push_back(c);
        while(adj[c].size() > 0) {
            char d = *adj[c].begin();
            adj[c].erase(d);
            adj[d].erase(c);
            c = d;
            res.push_back(c);
        }
        bad |= adj.size() != res.size();
        map<char, int> pos;
        for(int i = 0; i < res.size(); i++) pos[res[i]] = i;
        for(int i = 0; i + 1 < s.size(); i++) bad |= abs(pos[s[i]] - pos[s[i + 1]]) > 1;
        if(bad) continue;
        add(res, x);
        reverse(res.begin(), res.end());
        add(res, x);
    }
    int INF = 1e9;
    int K = 12;
    vector<vector<int>> dp(1 << K, vector<int>(tsz + 1, -INF));
    vector<vector<pair<int, int>>> pdp(1 << K, vector<pair<int, int>>(tsz + 1));
    dp[0][0] = 0;
    for(int i = 0; i < (1 << K); i++)
        for(int j = 0; j <= tsz; j++) {
            for(int z = 0; z < K; z++) {
                if(i & (1 << z)) continue;
                int nstate = go(j, z);
                int add = calc(nstate);
                int nmask = i | (1 << z);
                if(dp[nmask][nstate] < dp[i][j] + add) {
                    dp[nmask][nstate] = dp[i][j] + add;
                    pdp[nmask][nstate] = {z, j};
                }
            }
        }
    string ans = "";
    int curmask = (1 << K) - 1;
    int curstate = max_element(dp[curmask].begin(), dp[curmask].end()) - dp[curmask].begin();
    while(curmask != 0) {
        int cc = pdp[curmask][curstate].first;
        int ns = pdp[curmask][curstate].second;
        ans.push_back(char('a' + cc));
        curmask ^= (1 << cc);
        curstate = ns;
    }
    cout << ans << endl;
}
1740
A
Factorise N+M
Pak Chanek has a prime number$^\dagger$ $n$. Find a prime number $m$ such that $n + m$ is not prime. $^\dagger$ A prime number is a number with \textbf{exactly} $2$ factors. The first few prime numbers are $2,3,5,7,11,13,\ldots$. In particular, $1$ is \textbf{not} a prime number.
There are multiple solutions for this problem. We will discuss two of them. One solution is to choose $m = n$. This always guarantees that $m$ is prime, because $n$ is always prime. And we can see that $n + m = n + n = 2n$, which is never prime, because $n > 1$ always holds. Another solution is to choose $m=7$. If $n$ is odd, then $n+m$ will be an even number greater than $2$ and therefore not prime. Otherwise $n$ is even. The only even prime number is $2$ and it can be verified that $2+7=9$ is not a prime number. Time complexity for each test case: $O(1)$
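The first solution above is a one-liner; a sketch follows, where `is_prime` is only a helper we add to sanity-check the output, not part of the required algorithm:

```python
def is_prime(x):
    # trial division; sufficient for a sanity check
    if x < 2:
        return False
    i = 2
    while i * i <= x:
        if x % i == 0:
            return False
        i += 1
    return True

def find_m(n):
    # n is prime, so m = n is prime, and n + m = 2n is composite since n > 1
    return n
```

For example, `find_m(5)` returns `5`, and $5+5=10$ is not prime.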
[ "constructive algorithms", "number theory" ]
800
null
1740
B
Jumbo Extra Cheese 2
Pak Chanek has $n$ two-dimensional slices of cheese. The $i$-th slice of cheese can be represented as a rectangle of dimensions $a_i \times b_i$. We want to arrange them on the two-dimensional plane such that: - Each edge of each cheese is parallel to either the x-axis or the y-axis. - The bottom edge of each cheese is a segment of the x-axis. - No two slices of cheese overlap, but their sides can touch. - They form one connected shape. Note that we can arrange them in any order (the leftmost slice of cheese is not necessarily the first slice of cheese). Also note that we can rotate each slice of cheese in any way as long as all conditions still hold. Find the minimum possible perimeter of the constructed shape.
According to the problem, the arrays $a$ and $b$ denote the dimensions of the rectangles. Define arrays $c$ and $d$ as a certain orientation configuration of the rectangles with $c_i$ denoting the width and $d_i$ denoting the height. Define arrays $e$ and $f$ as a certain permutation of the orientation configuration such that the $i$-th rectangle from the left has width $e_i$ and height $f_i$. Consider a certain configuration of $e$ and $f$. The perimeter of the connected shape will be $2(e_1+e_2+\ldots+e_n)+f_1+|f_1-f_2|+|f_2-f_3|+\ldots+|f_{n-1}-f_n|+f_n$. Notice that if we sort the rectangles in non-decreasing order of $f_i$, the absolute differences telescope and the perimeter becomes $2(e_1+e_2+\ldots+e_n+f_n)$. This is the minimum possible perimeter for a fixed orientation configuration: each width must be counted twice, and the total length of the vertical edges is always at least twice the tallest height. This means, if we have a certain configuration of $c$ and $d$, the optimal perimeter is $2(c_1+c_2+\ldots+c_n+\max(d))$. Now, we just need to find a configuration of $c$ and $d$ from $a$ and $b$ that results in the minimum value of $2(c_1+c_2+\ldots+c_n+\max(d))$. Notice that each element of $a$ and $b$ can only be counted either $0$ or $2$ times in the final answer. Consider the maximum value out of all elements of $a$ and $b$. If there are multiple values that are maximum, just consider one of them. We can see that that value will always be counted $2$ times no matter how we construct $c$ and $d$. We need to determine whether to put that maximum value in $c$ (width) or $d$ (height). It is actually better to put that value in $d$ as it will cause all other values of $d$ to be counted $0$ times in the final answer. Now, we just need to determine the orientation for each of the other rectangles. We have determined that the maximum value in $a$ and $b$ must be put in $d$, which will automatically become the value of $\max(d)$. Therefore, all other values in $d$ will be counted $0$ times in the final answer.
This means, because we want to minimise the final answer, for each $i$, it is always better to put the larger value out of $a_i$ and $b_i$ in $d$ as it will cause the value that is not being counted in the final answer to be larger. Using every observation above, in order to get the minimum possible perimeter, we can do the following: Construct the arrays $c$ and $d$ with $c_i$ and $d_i$ taking their values from $a_i$ and $b_i$ while making $c_i\leq d_i$. The answer is $2(c_1+c_2+\ldots+c_n+\max(d))$. Time complexity for each test case: $O(n)$
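Putting the observations together, the answer is $2\left(\sum_i \min(a_i,b_i) + \max_i \max(a_i,b_i)\right)$. A minimal sketch (the function name is ours):

```python
def min_perimeter(rects):
    # rects: list of (a_i, b_i) pairs; orient each slice so that the
    # smaller side is the width (c_i) and the larger side is the height (d_i)
    width_sum = sum(min(a, b) for a, b in rects)
    max_height = max(max(a, b) for a, b in rects)
    return 2 * (width_sum + max_height)
```

A single $2 \times 3$ slice gives $2(2+3)=10$, its ordinary perimeter.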
[ "geometry", "greedy", "sortings" ]
800
null
1740
C
Bricks and Bags
There are $n$ bricks numbered from $1$ to $n$. Brick $i$ has a weight of $a_i$. Pak Chanek has $3$ bags numbered from $1$ to $3$ that are initially empty. For each brick, Pak Chanek must put it into one of the bags. After this, each bag must contain at least one brick. After Pak Chanek distributes the bricks, Bu Dengklek will take exactly one brick from each bag. Let $w_j$ be the weight of the brick Bu Dengklek takes from bag $j$. The score is calculated as $|w_1 - w_2| + |w_2 - w_3|$, where $|x|$ denotes the absolute value of $x$. It is known that Bu Dengklek will take the bricks in such a way that minimises the score. What is the maximum possible final score if Pak Chanek distributes the bricks optimally?
Firstly, sort $a$. From now on, we will always refer to the sorted array $a$. Let $p_j$ be the index of the brick taken by Bu Dengklek from bag $j$. The configuration of $p_1,p_2,p_3$ must be in the form of one of the following: $p_1 < p_2 < p_3$ $p_1 > p_2 > p_3$ $p_2 < \min(p_1, p_3)$ $p_2 > \max(p_1, p_3)$ Let's look at the third case. Let's consider the case where $\min(p_1, p_3) > p_2+1$. If this is the case, let's look at all possible cases for the brick at index $p_2+1$. If Pak Chanek puts brick $p_2+1$ into bag $1$, it is more optimal for Bu Dengklek to take that brick from bag $1$ instead of $p_1$ because $a_{p_2+1}-a_{p_2} \leq a_{p_1}-a_{p_2}$. If Pak Chanek puts brick $p_2+1$ into bag $2$, it is more optimal for Bu Dengklek to take that brick from bag $2$ instead of $p_2$ because $(a_{p_1}-a_{p_2+1})+(a_{p_3}-a_{p_2+1}) \leq (a_{p_1}-a_{p_2})+(a_{p_3}-a_{p_2})$. If Pak Chanek puts brick $p_2+1$ into bag $3$, it is more optimal for Bu Dengklek to take that brick from bag $3$ instead of $p_3$ because $a_{p_2+1}-a_{p_2} \leq a_{p_3}-a_{p_2}$. This means, choosing $p_1,p_2,p_3$ such that $\min(p_1, p_3) > p_2+1$ is always less optimal for Bu Dengklek. Therefore, Bu Dengklek will take the bricks such that if $p_2 < \min(p_1, p_3)$, then $\min(p_1, p_3) = p_2 + 1$ also holds. A similar logic can be used for the fourth case. So, we can obtain that Bu Dengklek will always take the bricks such that the configuration of $p_1,p_2,p_3$ is in the form of one of the following: $p_1 < p_2 < p_3$ $p_1 > p_2 > p_3$ $p_2 < \min(p_1, p_3)$ and $\min(p_1, p_3) = p_2 + 1$ $p_2 > \max(p_1, p_3)$ and $\max(p_1, p_3) = p_2 - 1$ In fact, Pak Chanek is always able to force it such that the bricks taken by Bu Dengklek form any configuration of $p_1,p_2,p_3$ that follows one of the constraints above. Let's look at one possible construction. For a configuration of $p_1,p_2,p_3$ construct an order of the bags $j_1,j_2,j_3$ such that $p_{j_1}<p_{j_2}<p_{j_3}$. 
Then, Pak Chanek can force Bu Dengklek to choose that configuration by doing the following: Put the bricks at indices from $1$ to $p_{j_1}$ into bag $j_1$. Put the bricks at indices from $p_{j_1}+1$ to $p_{j_3}-1$ into bag $j_2$. Put the bricks at indices from $p_{j_3}$ to $n$ into bag $j_3$. Therefore, we just need to find the values of $p_1,p_2,p_3$ satisfying the constraints above such that $|a_{p_1}-a_{p_2}|+|a_{p_2}-a_{p_3}|$ is maximised. To maximise the final score, we can see that it is always more optimal to either use the third or the fourth case for $p_1,p_2,p_3$. For the third case, it is always more optimal to maximise $\max(p_1, p_3)$, so we should set $\max(p_1, p_3) = n$. A similar logic can be used for the fourth case to see that we should set $\min(p_1, p_3) = 1$. Therefore, the maximum final score is the maximum of these two cases: $(a_i-a_{i-1})+(a_i-a_1)$ with $3 \leq i \leq n$. $(a_{i+1}-a_i)+(a_n-a_i)$ with $1 \leq i \leq n-2$. Time complexity for each test case: $O(n \log n)$
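The two closed-form cases at the end translate directly into code; this sketch (names are ours) sorts $a$ and takes the better of the two:

```python
def max_score(a):
    a = sorted(a)
    n = len(a)
    best = 0
    # fourth case: p2 on top, max(p1, p3) = p2 - 1, min(p1, p3) = 1;
    # 1-indexed i in [3, n] -> 0-indexed i in [2, n - 1]
    for i in range(2, n):
        best = max(best, (a[i] - a[i - 1]) + (a[i] - a[0]))
    # third case: p2 at the bottom, min(p1, p3) = p2 + 1, max(p1, p3) = n;
    # 1-indexed i in [1, n - 2] -> 0-indexed i in [0, n - 3]
    for i in range(n - 2):
        best = max(best, (a[i + 1] - a[i]) + (a[n - 1] - a[i]))
    return best
```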
[ "constructive algorithms", "games", "greedy", "sortings" ]
1,400
null
1740
D
Knowledge Cards
Pak Chanek, a renowned scholar, invented a card puzzle using his knowledge. In the puzzle, you are given a board with $n$ rows and $m$ columns. Let $(r, c)$ represent the cell in the $r$-th row and the $c$-th column. Initially, there are $k$ cards stacked in cell $(1, 1)$. Each card has an integer from $1$ to $k$ written on it. More specifically, the $i$-th card \textbf{from the top} of the stack in cell $(1, 1)$ has the number $a_i$ written on it. It is known that no two cards have the same number written on them. In other words, the numbers written on the cards are a permutation of integers from $1$ to $k$. All other cells are empty. You need to move the $k$ cards to cell $(n, m)$ to create another stack of cards. Let $b_i$ be the number written on the $i$-th card \textbf{from the top} of the stack in cell $(n, m)$. You should create the stack in cell $(n, m)$ in such a way so that $b_i = i$ for all $1 \leq i \leq k$. In one move, you can remove the \textbf{top card} from a cell and place it onto an adjacent cell (a cell that shares a common side). If the target cell already contains one or more cards, you place your card \textbf{on the top of the stack}. You must do each operation while satisfying the following restrictions: - Each cell other than $(1,1)$ and $(n,m)$ must not have more than one card on it. - You cannot move a card onto cell $(1,1)$. - You cannot move a card from cell $(n,m)$. Given the values of $n$, $m$, $k$ and the array $a$, determine if the puzzle is solvable.
Let card $c$ be the card with number $c$ written on it. Notice that we must put the cards into the stack in $(n,m)$ in order from card $k$, card $k-1$, card $k-2$, and so on until card $1$. The key observation for this problem is that, under the given constraints, if we ignore cells $(1,1)$ and $(n,m)$, we can always move any card to either $(n-1,m)$ or $(n,m-1)$ if and only if there is at least one empty cell. We can obtain this by considering the fact that an empty cell can "move" to any cell and can be used to rotate any $2 \times 2$ square of cells. Because $3 \leq n, m$, if we ignore cells $(1,1)$ and $(n,m)$, each cell is a part of a $2 \times 2$ square. Moving a desired card to one of $(n-1,m)$ or $(n,m-1)$ can be done by first moving the empty cell to be adjacent to the desired card, and then only doing $2 \times 2$ rotations that simultaneously move the empty cell and the desired card. Therefore, we can iterate over the cards in the stack $(1, 1)$ from top to bottom. While iterating, we maintain all of the cards in the board that are not in cell $(1, 1)$ or $(n, m)$. We define this group of cards as active cards. Each time we move a card out of $(1, 1)$, we add the card to the active cards. Before finishing the iteration, we try to move one or more cards from the active cards to $(n, m)$ as long as there is a card we can move. We can maintain the active cards with a priority queue or a set. The active cards can be moved around freely if and only if there is at least one empty cell in the grid if we ignore $(1,1)$ and $(n,m)$. Therefore, to check whether or not we can solve the puzzle, we just need to check whether or not there exists a moment where the number of active cards exceeds $nm-3$. There also exists a solution with a time complexity of $O(k)$ just by iterating over the positions of each card number, but it will not be explained here. Time complexity for each test case: $O(k\log k)$ or $O(k)$
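The feasibility check can be sketched as follows. This assumes, per the editorial, that the puzzle is solvable iff the number of active cards never exceeds $nm-3$; the function name is ours:

```python
def solvable(n, m, k, a):
    # a[0] is the number on the top card of the stack in (1, 1)
    active = set()
    need = k  # next number that must be placed onto (n, m)
    for x in a:
        active.add(x)
        # at least one of the nm - 2 usable cells must stay empty,
        # so at most nm - 3 active cards are allowed at any moment
        if len(active) > n * m - 3:
            return False
        while need in active:  # ship cards to (n, m) while possible
            active.remove(need)
            need -= 1
    return need == 0
```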
[ "constructive algorithms", "data structures" ]
1,500
null
1740
E
Hanging Hearts
Pak Chanek has $n$ blank heart-shaped cards. Card $1$ is attached directly to the wall while each of the other cards is hanging onto exactly one other card by a piece of string. Specifically, card $i$ ($i > 1$) is hanging onto card $p_i$ ($p_i < i$). In the very beginning, Pak Chanek must write one integer number on each card. He does this by choosing any permutation $a$ of $[1, 2, \dots, n]$. Then, the number written on card $i$ is $a_i$. After that, Pak Chanek must do the following operation $n$ times while maintaining a sequence $s$ (which is initially empty): - Choose a card $x$ such that no other cards are hanging onto it. - Append the number written on card $x$ to the end of $s$. - If $x \neq 1$ and the number on card $p_x$ is larger than the number on card $x$, replace the number on card $p_x$ with the number on card $x$. - Remove card $x$. After that, Pak Chanek will have a sequence $s$ with $n$ elements. What is the maximum length of the longest non-decreasing subsequence$^\dagger$ of $s$ at the end if Pak Chanek does all the steps optimally? $^\dagger$ A sequence $b$ is a subsequence of a sequence $c$ if $b$ can be obtained from $c$ by deletion of several (possibly, zero or all) elements. For example, $[3,1]$ is a subsequence of $[3,2,1]$, $[4,3,1]$ and $[3,1]$, but not $[1,3,3,7]$ and $[3,10,4]$.
The cards form a rooted tree with card $1$ being the root, where for each $i$ ($i>1$), the parent of card $i$ is $p_i$. Let $w_i$ be the number on card $i$ when it is about to get removed. To remove card $i$, we must previously remove all cards in the subtree of $i$ other than $i$ itself. Thus, we can see that $w_i$ is the minimum value of $a$ out of all cards in the subtree of $i$. Let the sequence $[v_1,v_2,v_3,\ldots]$ mean that the $i$-th element of the longest non-decreasing subsequence comes from card $v_i$. Then, we can get the following observations: We can only remove card $i$ if all other cards in the subtree of $i$ have been removed. Hence, for each $v_i$, every $v_j$ with $j>i$ must not be in the subtree of $v_i$. We can make $v_{i+1}$ an ancestor of $v_i$ by making $w_{v_{i+1}}=w_{v_i}$. This can be done by making it such that the minimum value of $a$ in the subtree of $v_i$ is simultaneously the minimum value of $a$ in the subtree of $v_{i+1}$. If $v_{i+1}$ is not an ancestor of $v_i$, then $w_{v_{i+1}} \neq w_{v_i}$ must hold. Because $v$ forms a non-decreasing sequence, then $w_{v_{i+1}} > w_{v_i}$ must hold. That means, $w_{v_{i+1}}$ is guaranteed to be larger than the values of $w$ for all ancestors of $v_i$. This means, after $v_{i+1}$, $v$ cannot continue with any ancestors of $v_i$. From the information above, it means that when constructing $v$, we do one of the following operations several times: Append an ancestor of the last card. Append a card that is not an ancestor of the last card and not a descendant of any of the previous cards. After this, we cannot choose any ancestor of the card before this new card. In the optimal configuration, the sequence $v$ resembles the following: It consists of several paths of cards where each path is from a card to one of its ancestors. For any pair of cards from two different paths, one must not be an ancestor of the other. We will use dynamic programming on the tree.
Let $dp[i]$ denote the length of the longest non-decreasing subsequence created from only doing operations to the subtree of $i$. Then, we have two cases: If card $i$ is used in the longest non-decreasing subsequence, then the maximum answer is the maximum number of cards in a path from card $i$ to one of the cards in the subtree of $i$. If card $i$ is not used in the longest non-decreasing subsequence, then the maximum answer is the sum of $dp$ values of the children of card $i$. Time complexity: $O(n)$
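The DP runs in linear time; since $p_i < i$, processing vertices in decreasing index order visits children before parents. A sketch (names are ours):

```python
def longest_lnds(n, p):
    # p[i] is the parent of card i for i = 2..n (1-indexed, with p[i] < i)
    children = [[] for _ in range(n + 1)]
    for i in range(2, n + 1):
        children[p[i]].append(i)
    depth = [1] * (n + 1)  # longest downward chain starting at i
    dp = [0] * (n + 1)     # dp[i]: answer restricted to the subtree of i
    for i in range(n, 0, -1):
        child_sum = 0
        for c in children[i]:
            depth[i] = max(depth[i], depth[c] + 1)
            child_sum += dp[c]
        # either card i lies on the subsequence (best chain through i),
        # or we combine the independent answers of its children
        dp[i] = max(depth[i], child_sum)
    return dp[1]
```

For a chain of three cards the answer is the whole chain; for a root with two leaves it is $2$.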
[ "constructive algorithms", "data structures", "dfs and similar", "dp", "greedy", "trees" ]
1,800
null
1740
F
Conditional Mix
Pak Chanek is given an array $a$ of $n$ integers. For each $i$ ($1 \leq i \leq n$), Pak Chanek will write the one-element set $\{a_i\}$ on a whiteboard. After that, in one operation, Pak Chanek may do the following: - Choose two different sets $S$ and $T$ on the whiteboard such that $S \cap T = \varnothing$ ($S$ and $T$ do not have any common elements). - Erase $S$ and $T$ from the whiteboard and write $S \cup T$ (the union of $S$ and $T$) onto the whiteboard. After performing zero or more operations, Pak Chanek will construct a multiset $M$ containing the sizes of all sets written on the whiteboard. In other words, each element in $M$ corresponds to the size of a set after the operations. How many distinct$^\dagger$ multisets $M$ can be created by this process? Since the answer may be large, output it modulo $998\,244\,353$. $^\dagger$ Multisets $B$ and $C$ are different if and only if there exists a value $k$ such that the number of elements with value $k$ in $B$ is different than the number of elements with value $k$ in $C$.
Let $cnt_i$ denote the number of occurrences of element $i$ in array $a$. Claim: A multiset $M$ of size $n$, $M_1,M_2,\ldots,M_n$ where $M_1 \geq M_2 \geq \ldots \geq M_n \geq 0$, is good (i.e., achievable by the process) if and only if $\sum_{i=1}^{n} M_i=n$ and $\sum_{i=1}^k M_i \leq \sum_{i=1}^n \min(k,cnt_i)$ for every $1 \leq k \leq n$. The proof of that claim has been discussed in the comment section here. We proceed with formulating a dynamic programming solution. Let $dp[pos][sum][last]$ denote that we have picked a prefix $P_1,P_2,\ldots,P_{pos}$ such that $\sum_{i=1}^{pos} P_i = sum$ and $P_{pos}=last$. Our transition goes from $dp[pos][sum][last]$ to $dp[pos+1][sum+x][x]$ for all $x \leq last$; this can be handled easily using prefix sums. However, we still have $O(n^3)$ states. We can cut down the number of states by noting that $n \geq \sum_{i=1}^{pos} P_i \geq last \cdot pos$. That is, for each value of $pos$, there are only $\frac{n}{pos}$ values of $last$. Since $\frac{n}{1}+\frac{n}{2}+\frac{n}{3}+\ldots+\frac{n}{n} \approx n\log n$, the number of states is bounded by $O(n^2 \log n)$. Time complexity: $O(n^2 \log n)$
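The claim gives a direct way to test whether one particular multiset is achievable. Here is a brute-force checker, quadratic and for illustration only (names are ours):

```python
from collections import Counter

def is_achievable(M, a):
    # M: proposed multiset of set sizes; a: the input array
    cnt = list(Counter(a).values())
    n = len(a)
    M = sorted(M, reverse=True) + [0] * (n - len(M))  # pad to length n
    if sum(M) != n:
        return False
    prefix = 0
    for k in range(1, n + 1):
        prefix += M[k - 1]
        # prefix-sum condition from the claim
        if prefix > sum(min(k, c) for c in cnt):
            return False
    return True
```

For $a=[1,1,2]$, the multisets $\{2,1\}$ and $\{1,1,1\}$ are achievable, but $\{3\}$ is not since the two copies of $1$ can never share a set.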
[ "combinatorics", "dp", "math" ]
2,600
null
1740
G
Dangerous Laser Power
Pak Chanek has an $n \times m$ grid of portals. The portal on the $i$-th row and $j$-th column is denoted as portal $(i,j)$. The portals $(1,1)$ and $(n,m)$ are on the north-west and south-east corner of the grid respectively. The portal $(i,j)$ has two settings: - \textbf{Type} $t_{i,j}$, which is either $0$ or $1$. - \textbf{Strength} $s_{i,j}$, which is an integer between $1$ and $10^9$ inclusive. Each portal has $4$ faces labelled with integers $0,1,2,3$, which correspond to the north, east, south, and west direction respectively. When a laser enters face $k$ of portal $(i, j)$ with speed $x_\text{in}$, it leaves the portal going out of face $(k+2+t_{i,j}) \bmod 4$ with speed $x_\text{out} = \max(x_\text{in},s_{i,j})$. The portal also has to consume $x_\text{out} - x_\text{in}$ units of \textbf{energy}. Pak Chanek is very bored today. He will shoot $4nm$ lasers with an initial speed of $1$, one into each face of each portal. Each laser will travel throughout this grid of portals until it moves outside the grid or it has passed through $10^{100}$ portals. At the end, Pak Chanek thinks that a portal is \textbf{good} if and only if the total energy consumed by that portal modulo $2$ is equal to its type. Given the strength settings of all portals, find a way to assign the type settings of each portal such that the number of good portals is maximised.
Claim: There is a construction where all portals are good. We will proceed with the construction. The key observation of the problem is that for a portal $(i,j)$ to consume energy, the laser must not enter a portal with a strength greater than or equal to $s_{i,j}$ before going through portal $(i,j)$. Let's say we already have the type settings of all portals. Consider all possible lasers that at some point enter a certain face of portal $(i,j)$. We can find the paths of those lasers before entering portal $(i,j)$ by backtracking in that direction from portal $(i,j)$. Consider the first portal in that backtracking path that has a strength greater than or equal to $s_{i,j}$. Notice that all lasers that start from portals located after that portal in the backtracking path must go through that portal before entering portal $(i,j)$. Therefore, those lasers cannot make portal $(i,j)$ use energy, which means they can be ignored. Using the observation above, we can see that we must only backtrack until we find a portal with a strength greater than or equal to $s_{i,j}$. Observe that to get the path up to that point, the type settings we should know are only the type settings of portals with strengths smaller than $s_{i,j}$. Therefore, the construction can be generated by calculating the type settings of the portals from the smallest to the largest strengths. In each iteration, we use the type settings of the previous portals to find the backtracking paths of the current portal. If we already know the $4$ backtracking paths for each of the $4$ directions of that portal, we can find the total energy that portal will consume in the end, which means we can find the type setting for that portal that makes it a good portal. The naive implementation of this has a time complexity of $O(n^2m^2)$. Notice that each backtracking path of the portals is just the merging of smaller paths.
We can maintain a disjoint set data structure to handle this while maintaining the essential values needed in the calculation of the total energy consumed for each path. Time complexity: $O(nm)$
[ "constructive algorithms", "dsu", "sortings" ]
3,100
null
1740
H
MEX Tree Manipulation
Given a rooted tree, define the value of vertex $u$ in the tree recursively as the MEX$^\dagger$ of the \textbf{values of its children}. Note that it is only the children, not all of its descendants. In particular, the value of a leaf is $0$. Pak Chanek has a rooted tree that initially only contains a single vertex with index $1$, which is the root. Pak Chanek is going to do $q$ queries. In the $i$-th query, Pak Chanek is given an integer $x_i$. Pak Chanek needs to add a new vertex with index $i+1$ as the child of vertex $x_i$. After adding the new vertex, Pak Chanek needs to recalculate the values of all vertices and report the sum of the values of all vertices in the current tree. $^\dagger$ The MEX (minimum excluded) of an array is the smallest non-negative integer that does not belong to the array. For example, the MEX of $[0,1,1,2,6,7]$ is $3$ and the MEX of $[6,9]$ is $0$.
Define $\text{MEX}_i(A)$ of a certain array $A$ as the $i$-th smallest non-negative integer that does not belong to the array. Let's solve a simpler problem. Imagine that you are given an array $A$, you are asked to append a new element $x$ to the array, and you need to calculate the MEX of the array after the operation. It is easy to see that the new value of MEX is either $\text{MEX}_1(A)$ if $x \neq \text{MEX}_1(A)$ or $\text{MEX}_2(A)$ if $x = \text{MEX}_1(A)$ where $A$ is the array before the operation. Let $(x, y, z)$ define a relationship between input and output which means: If the input is not $x$, then the output is $y$. If the input is $x$, then the output is $z$. This means the relationship between the new element to be appended to $A$ and the new value of MEX can be expressed as $(\text{MEX}_1(A), \text{MEX}_1(A), \text{MEX}_2(A))$. Define that triple as $\text{MEXtuple}(A)$. Define the heavy child of vertex $u$ as the child with the largest subtree size. If there are children with the same subtree size, choose one of them. Let's call each child other than the heavy child as a light child. In particular, the edge that connects a vertex to its heavy child is called a heavy edge. Let $C_u$ be the array that contains the values of the light children of vertex $u$. We can see that if we maintain $\text{MEXtuple}(C_u)$ we can get the relationship between the value of the heavy child and the value of vertex $u$. Let's consider a path in the tree that traverses through parents while only using heavy edges. Consider the relationship between the value of the vertex at the beginning of the path and the end of the path. Let's calculate the $\text{MEXtuple}$ for each vertex in the path other than the first one. We can see that we can get the relationship between the vertex in the beginning of the path and the end of the path using those $\text{MEXtuple}$ values. In fact, that relationship can also be expressed as a triple $(x, y, z)$ defined above. 
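Computing $\text{MEXtuple}(A)$ only needs $\text{MEX}_1$ and $\text{MEX}_2$. A straightforward linear-scan sketch (not the set/segment-tree version the full solution maintains under updates):

```python
def mex_tuple(values):
    # returns (x, y, z): appending any value != x yields MEX y,
    # while appending exactly x yields MEX z
    present = set(values)
    mex1 = 0
    while mex1 in present:
        mex1 += 1
    mex2 = mex1 + 1
    while mex2 in present:
        mex2 += 1
    return (mex1, mex1, mex2)
```

For $A=[0,1,1,2,6,7]$ this gives $(3,3,4)$: appending anything other than $3$ leaves the MEX at $3$, while appending $3$ pushes it to $4$.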
Let's decompose the tree into chains of heavy edges. For each chain, construct a segment tree with each node in the segment tree calculating the relationship triple $(x, y, z)$ for a segment of the chain. Suppose we want to add a new vertex. The vertices that have their values changed are the ones that lie on the path from the new vertex to the root. Observe that that path only traverses through at most $O(\log Q)$ different heavy edge chains. For each chain, we can get the value of the vertex at the end of the chain by doing a point update on the chain's segment tree. When we move between two different chains, we update $C_u$ for a vertex and recalculate its $\text{MEXtuple}$ in $O(\log Q)$ time complexity using a set or another segment tree. We repeat this process until we reach the root. We can calculate that the total complexity of a single query is $O(\log^2 Q)$. To get the sum of the values of the entire tree after an operation, we can store two new variables in each node of the segment tree of each chain that represent the sum of the chain segment for the two possible cases of the input. Time complexity: $O(Q \log^2 Q)$
[ "data structures", "trees" ]
3,300
null
1740
I
Arranging Crystal Balls
In the world of Compfestnesia, Pak Chanek discovers a secret underground dungeon. Inside it, there is a treasure chest that is surrounded by $n$ statues that are arranged in a circular manner. The statues are numbered from $0$ to $n-1$ with statue $i$ being to the left of statue $i+1$ and statue $n-1$ being to the left of statue $0$. Pak Chanek observes that each statue is holding a crystal ball with an integer between $0$ and $m-1$ inclusive. Let's say the integer in the crystal ball of statue $i$ is $a_i$. The dungeon provides instructions that every integer in the crystal balls must be $0$ in order to open the treasure chest. To achieve that, Pak Chanek is given an integer $k$, and he can do zero or more operations. In a single operation, Pak Chanek does the following: - Choose exactly $k$ consecutive statues. In other words, choose the statues $p, (p+1) \bmod n, (p+2) \bmod n, (p+3) \bmod n, \ldots, (p+k-1) \bmod n$ for some chosen index $p$. - Do one of the following: - For all chosen statues, change their values of $a_i$ into $(a_i+1) \bmod m$. - For all chosen statues, change their values of $a_i$ into $(a_i-1) \bmod m$. Help Pak Chanek find the minimum possible number of operations to open the treasure chest.
Let's simplify the problem such that the only possible operation for each $k$ chosen statues is $a_i := (a_i - 1) \bmod m$. To solve this, we first define $c_i$ as the number of times we do that operation for the statues $i, (i-1) \bmod n, (i-2) \bmod n, (i-3) \bmod n, \ldots, (i-k+1) \bmod n$. We can see that in the optimal configuration, $0 \leq c_i \leq m-1$ must hold. For it to make each element of $a$ equal to $0$, $(c_i+c_{(i+1)\bmod n}+c_{(i+2)\bmod n}+\ldots+c_{(i+k-1)\bmod n})\bmod m=a_i$ must hold. The number of operations is equal to $c_0+c_1+c_2+\ldots+c_{n-1}$. In order to extend to the original problem, we can see that for a given array $c$, the number of operations can be made equal to $\sum_{i=0}^{n-1}\min(c_i,m-c_i)$. So we need to find an array $c$ satisfying the conditions above such that $\sum_{i=0}^{n-1}\min(c_i,m-c_i)$ is as small as possible. We can see that $a_{(i+1)\bmod n} - a_i = c_{(i+k) \bmod n} - c_i$. That means, if we have determined the value of $c_i$, the value of $c_{(i+k) \bmod n}$ is forced. If we form an edge for each pair of indices $(i, (i+k) \bmod n)$, we will get $\text{GCD}(n,k)$ connected components. This means if we have determined the value of one vertex for each connected component, the values of the entire array $c$ are forced. However, we must also consider the possibility that at least one connected component cannot have assigned values that satisfy all of its edges. In that case, it is impossible to finish the objective of the problem. Define $d=\text{GCD}(n,k)$. For each $c_i$, we can find the difference relationship between it and $c_{i \bmod d}$. Therefore, to answer the problem, we must find the values of $c_0, c_1, c_2, \ldots, c_{d-1}$ such that $c_0+c_1+c_2+\ldots+c_{k-1}=a_0$ and if all the other values of $c$ are calculated, $\sum_{i=0}^{n-1}\min(c_i,m-c_i)$ is as small as possible.
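The claim that the edges $(i, (i+k) \bmod n)$ split the indices into $\text{GCD}(n,k)$ components of $\frac{n}{d}$ elements each can be illustrated with a short sketch (the function name is ours); the orbits of $i \mapsto (i+k) \bmod n$ are exactly the residue classes modulo $d$:

```python
from math import gcd

def components(n, k):
    # indices i and (i + k) mod n are linked; following the links from
    # each i in [0, d) enumerates one full component of size n / d
    d = gcd(n, k)
    return [[(i + j * k) % n for j in range(n // d)] for i in range(d)]
```

For $n=6$, $k=4$ we get $d=2$ components: $\{0,4,2\}$ and $\{1,5,3\}$.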
For the $c_0+c_1+c_2+\ldots+c_{k-1}=a_0$ condition, using simple modular algebra, we can calculate one or more values $b_1,b_2,b_3,\ldots$ such that $c_0+c_1+c_2+\ldots+c_{k-1}=a_0$ if and only if $c_0+c_1+c_2+\ldots+c_{d-1}$ is equal to one of the values of $b$. Let's consider the connected component $c_i, c_{i+d}, c_{i+2d}, \ldots$. It must have $\frac{n}{d}$ elements. Define $f_i(x)$ as the sum of $\min(c_j,m-c_j)$ for all elements $c_j$ in that connected component if $c_i=x$. We can easily calculate all values of $f_i(x)$ for all connected components in $O(nm)$. After this, we must find the minimum value of $f_0(c_0)+f_1(c_1)+f_2(c_2)+\ldots+f_{d-1}(c_{d-1})$ such that $c_0+c_1+c_2+\ldots+c_{d-1}$ is equal to one of the values of $b$. This can be solved using modular knapsack dynamic programming, but the time complexity is $O(nm^2)$. We must optimise it further. Consider the contribution of a single element $c_j$ to the values of $f_i(x)$ of its connected component. Let's say the difference relationship is that $c_i-c_j=w$ must hold. Notice that: It adds $(x-w)\bmod m$ to all values of $f_i(x)$ for each $x$ going up modularly from $w$ to $(w+\lfloor\frac{m}{2}\rfloor)\bmod m$. It adds $(w-x)\bmod m$ to all values of $f_i(x)$ for each $x$ going up modularly from $(w+\lceil\frac{m}{2}\rceil)\bmod m$ to $w$. We can see it as "changing slope" at most three times, namely at $x=w$, $x=(w+\lfloor\frac{m}{2}\rfloor)\bmod m$, and $x=(w+\lceil\frac{m}{2}\rceil)\bmod m$. This means, if we add up all contributions of each element, the number of points where the slope changes in one connected component is at most $\frac{3n}{d}$. We will call these points special points. Consider an array $c_0, c_1, c_2, \ldots, c_{d-1}$ satisfying the condition that $c_0+c_1+c_2+\ldots+c_{d-1}$ is equal to one of the values of $b$. Consider the case where there exist at least two different indices $i$ such that the value of $c_i$ is not a special point in $f_i$.
Suppose the indices are $i_1$ and $i_2$. Since both of them are not special points, the slope between $(c_{i_1}-1)\bmod m$ and $c_{i_1}$ is the same as the slope between $c_{i_1}$ and $(c_{i_1}+1)\bmod m$. The same is true for $i_2$. Let's say $z_1$ is the increase in value of the final answer if we do $c_{i_1}:=(c_{i_1}+1)\bmod m$ and $c_{i_2}:=(c_{i_2}-1)\bmod m$, and $z_2$ is the increase in value of the final answer if we do $c_{i_1}:=(c_{i_1}-1)\bmod m$ and $c_{i_2}:=(c_{i_2}+1)\bmod m$. From the observation above, we can see that $z_1 = -z_2$, which means one of them is non-positive. This means, if this is the case, we can always change the array $c$ into another array with an equal or smaller final answer. This means, we can always do that operation to get an array $c$ such that there is at most one non-special point. Therefore, we can just consider those cases to find the minimum answer. We can solve this using divide and conquer. We first call the recursive function $\text{dnc}(0,d-1)$. The moment we call the function $\text{dnc}(l,r)$, we have made the modular knapsack dynamic programming array if we have only considered the indices $i$ with $i<l$ or $r<i$ and only considering their special points. $\text{dnc}(l,r)$ recurses into $\text{dnc}(l,\frac{l+r}{2})$ and $\text{dnc}(\frac{l+r}{2}+1,r)$. Once $l=r$, we will consider that index as the one that can have a non-special point, then we can just iterate the array $b$ and take the answer from the knapsack array. The depth of the recursion is at most $O(\log d)$. Each depth has insertions of $O(d)$ indices to the knapsack. Because each connected component only has $O(\frac{n}{d})$ special points, each depth does $O(n)$ dynamic programming transitions with each transition having $O(m)$ time complexity. Time complexity: $O(nm \log (\text{GCD}(n,k)))$
[ "data structures", "divide and conquer", "dp", "geometry", "graphs", "number theory" ]
3,500
null
1741
A
Compare T-Shirt Sizes
Two T-shirt sizes are given: $a$ and $b$. The T-shirt size is either a string M or a string consisting of several (possibly zero) characters X and one of the characters S or L. For example, strings M, XXL, S, XXXXXXXS could be the size of some T-shirts. And the strings XM, LL, SX are not sizes. The letter M stands for medium, S for small, L for large. The letter X refers to the degree of size (from eXtra). For example, XXL is extra-extra-large (bigger than XL, and smaller than XXXL). You need to compare two given sizes of T-shirts $a$ and $b$. The T-shirts are compared as follows: - any small size (no matter how many letters X) is smaller than the medium size and any large size; - any large size (regardless of the number of letters X) is larger than the medium size and any small size; - the more letters X before S, the smaller the size; - the more letters X in front of L, the larger the size. For example: - XXXS < XS - XXXL > XL - XL > M - XXL = XXL - XXXXXS < M - XL > XXXS
Let $sa$, $sb$ be the last characters of strings $a$ and $b$ respectively, and $|a|, |b|$ the lengths of these strings. If $sa \neq sb$: then the answer depends only on $sa$ and $sb$ and is uniquely determined ("<" if $sa > sb$, ">" if $sa < sb$, since the characters S, M, L come in reverse order in the alphabet). If $sa = sb$: when $|a| = |b|$, the answer is "=" (this also covers the case $sa = sb =$M); when $sa = sb =$S, the longer the string, the smaller the t-shirt size, that is, the answer is "<" if $|a| > |b|$ and ">" if $|a| < |b|$; when $sa = sb =$L, the longer the string, the larger the t-shirt size, that is, the answer is "<" if $|a| < |b|$ and ">" if $|a| > |b|$.
[ "implementation", "strings" ]
800
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) #define sz(v) (int)v.size() #define all(v) v.begin(),v.end() #define eb emplace_back void ct(char c) { cout << c << '\n'; } void solve() { string a,b; cin >> a >> b; char ca = a.back(); char cb = b.back(); if (ca == cb) { if (sz(a) == sz(b)) cout << '='; else if (ca == 'S') { cout << (sz(a) < sz(b) ? '>' : '<'); } else { cout << (sz(a) < sz(b) ? '<' : '>'); } }else cout << (ca < cb ? '>' : '<'); cout << '\n'; } int main() { int t; cin >> t; forn(tt, t) { solve(); } }
1741
B
Funny Permutation
A sequence of $n$ numbers is called permutation if it contains all numbers from $1$ to $n$ exactly once. For example, the sequences $[3, 1, 4, 2]$, [$1$] and $[2,1]$ are permutations, but $[1,2,1]$, $[0,1]$ and $[1,3,4]$ are not. For a given number $n$ you need to make a permutation $p$ such that two requirements are satisfied at the same time: - For each element $p_i$, at least one of its neighbors has a value that differs from the value of $p_i$ by one. That is, for each element $p_i$ ($1 \le i \le n$), at least one of its neighboring elements (standing to the left or right of $p_i$) must be $p_i + 1$, or $p_i - 1$. - the permutation must have no fixed points. That is, for every $i$ ($1 \le i \le n$), $p_i \neq i$ must be satisfied. Let's call the permutation that satisfies these requirements funny. For example, let $n = 4$. Then [$4, 3, 1, 2$] is a funny permutation, since: - to the right of $p_1=4$ is $p_2=p_1-1=4-1=3$; - to the left of $p_2=3$ is $p_1=p_2+1=3+1=4$; - to the right of $p_3=1$ is $p_4=p_3+1=1+1=2$; - to the left of $p_4=2$ is $p_3=p_4-1=2-1=1$. - for all $i$ is $p_i \ne i$. For a given positive integer $n$, output \textbf{any} funny permutation of length $n$, or output -1 if funny permutation of length $n$ does not exist.
We cannot make a funny permutation only when $n = 3$, because one of the neighboring elements of $3$ must be equal to $2$. Any permutation made in this way will not satisfy the conditions: The permutation [$3, 2, 1$] will have a fixed point $p_2 = 2$. The permutation [$1, 3, 2$] will have a fixed point $p_1 = 1$. The permutation [$2, 3, 1$] will not have a neighbor equal to $p_3 + 1 = 1 + 1 = 2$ for $p_3 = 1$. For the remaining values of $n$, make the following observations: For $n = 2$, the only funny permutation is [$2, 1$]. When $n \ge 4$, permutations of the form [$3, \dots, n, 2, 1$] will always be funny because all elements $p_i$ will have a neighbor equal to $p_i - 1$ or $p_i + 1$, and the permutation will have no fixed points ($p_{n-1} = 2$, $p_n = 1$, and $p_i = i + 2$ always holds for $1 \le i \le n - 2$).
[ "constructive algorithms", "math" ]
800
#include "bits/stdc++.h" using namespace std; void solve(){ int n; cin >> n; if(n == 3){ cout << -1 << endl; } else{ for(int i = 3; i <= n; i++) cout << i << ' '; cout << 2 << ' ' << 1 << endl; } } int main(){ int t; cin >> t; while(t--){ solve(); } }
1741
C
Minimize the Thickness
You are given a sequence $a=[a_1,a_2,\dots,a_n]$ consisting of $n$ \textbf{positive} integers. Let's call a group of consecutive elements a segment. Each segment is characterized by two indices: the index of its left end and the index of its right end. Denote by $a[l,r]$ a segment of the sequence $a$ with the left end in $l$ and the right end in $r$, i.e. $a[l,r]=[a_l, a_{l+1}, \dots, a_r]$. For example, if $a=[31,4,15,92,6,5]$, then $a[2,5]=[4,15,92,6]$, $a[5,5]=[6]$, $a[1,6]=[31,4,15,92,6,5]$ are segments. We split the given sequence $a$ into segments so that: - each element is in \textbf{exactly} one segment; - the sums of elements for all segments are \textbf{equal}. For example, if $a$ = [$55,45,30,30,40,100$], then such a sequence can be split into three segments: $a[1,2]=[55,45]$, $a[3,5]=[30, 30, 40]$, $a[6,6]=[100]$. Each element belongs to exactly one segment, the sum of the elements of each segment is $100$. Let's define thickness of split as the length of the longest segment. For example, the thickness of the split from the example above is $3$. Find the minimum thickness among all possible splits of the given sequence $a$ into segments in the required way.
Let's iterate over the length of the first segment of the split. Having fixed it, we have actually fixed the sum that needs to be collected on every other segment. Since each element must belong to exactly one segment, we can build the other segments greedily. If we have found a solution, we remember the length of the longest segment in it and try to update the answer. We have $n$ possible lengths of the first segment, and for each of them we greedily build the rest of the split in $O(n)$. Thus, the complexity of the solution is $O(n^2)$.
[ "brute force", "greedy", "math", "two pointers" ]
1,100
#include <bits/stdc++.h> using namespace std; const int MAXN = 2020; int n; int arr[MAXN]; int go(int i, int sum) { if (i == n) return 0; for (int j = i + 1, cur = 0; j <= n; ++j) { cur += arr[j - 1]; if (cur > sum) return n; if (cur == sum) return max(j - i, go(j, sum)); } return n; } int solve() { int ans = n; for (int len = 1, sum = 0; len < n; ++len) { sum += arr[len - 1]; ans = min(ans, go(0, sum)); } return ans; } int main() { int t; cin >> t; while (t--) { cin >> n; for (int i = 0; i < n; ++i) cin >> arr[i]; cout << solve() << endl; } }
1741
D
Masha and a Beautiful Tree
The girl named Masha was walking in the forest and found a complete binary tree of height $n$ and a permutation $p$ of length $m=2^n$. A complete binary tree of height $n$ is a rooted tree such that every vertex except the leaves has exactly two sons, and the length of the path from the root to any of the leaves is $n$. The picture below shows the complete binary tree for $n=2$. A permutation is an array consisting of $n$ different integers from $1$ to $n$. For example, [$2,3,1,5,4$] is a permutation, but [$1,2,2$] is not ($2$ occurs twice), and [$1,3,4$] is also not a permutation ($n=3$, but there is $4$ in the array). Let's enumerate $m$ leaves of this tree from left to right. The leaf with the number $i$ contains the value $p_i$ ($1 \le i \le m$). For example, if $n = 2$, $p = [3, 1, 4, 2]$, the tree will look like this: Masha considers a tree beautiful if the values in its leaves are ordered from left to right in increasing order. In one operation, Masha can choose any non-leaf vertex of the tree and swap its left and right sons (along with their subtrees). For example, if Masha applies this operation to the root of the tree discussed above, it will take the following form: Help Masha understand if she can make a tree beautiful in a certain number of operations. If she can, then output the minimum number of operations to make the tree beautiful.
Let some vertex be responsible for a segment of leaves $[l..r]$. Then its left son is responsible for the segment $[l..\frac{l+r-1}{2}]$, and the right for the segment $[\frac{l+r+1}{2}..r]$. Note that if we do not apply the operation to this vertex, then it will not be possible to move any element from the right son's segment to the left son's segment. It remains to understand when we need to apply the operation to a vertex. Let the maximum on the segment $[l..r]$ be $max$, and the minimum on the same segment be $min$. Then if $min$ lies in the right son and $max$ in the left, we should obviously apply the operation, for the reason described above. In the case when $min$ lies in the left son and $max$ in the right, applying the operation will definitely not lead to a solution. Let's continue to act in a similar way recursively on the children of the current vertex. At the end, we should check whether we have obtained a sorted permutation. The above solution works in $O(nm)$, since there are $n$ levels in the tree and at each level the vertices are responsible for $m$ leaves in total. You can optimize this solution to $O(m)$ if you precalculate the maximum and minimum for each vertex.
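The $O(m)$ optimization mentioned at the end can be sketched as follows. This is a hypothetical Python helper, not the reference implementation: instead of physically swapping halves and scanning for the maximum on every level, each call returns the min, max, and swap count of its segment, using the fact that a sortable subtree must hold a contiguous range of values.

```python
def beautify(p):
    # Returns the minimum number of swaps to sort the leaves, or -1.
    def rec(l, r):  # segment p[l:r]; r - l is a power of two
        if r - l == 1:
            return p[l], p[l], 0
        mid = (l + r) // 2
        lmn, lmx, lc = rec(l, mid)
        rmn, rmx, rc = rec(mid, r)
        if lc < 0 or rc < 0:
            return 0, 0, -1
        if lmx + 1 == rmn:            # halves already in the right order
            return lmn, rmx, lc + rc
        if rmx + 1 == lmn:            # swap the two sons at this vertex
            return rmn, lmx, lc + rc + 1
        return 0, 0, -1               # values are not contiguous: impossible
    return rec(0, len(p))[2]
```

Each leaf is visited once and every internal vertex does $O(1)$ work, giving $O(m)$ overall.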
[ "dfs and similar", "divide and conquer", "graphs", "sortings", "trees" ]
1,300
#include <bits/stdc++.h> using namespace std; const int MAXM = 300300; int n, m; int arr[MAXM]; int solve(int l, int r) { if (r - l == 1) return 0; int mid = (l + r) >> 1; int mal = *max_element(arr + l, arr + mid); int mar = *max_element(arr + mid, arr + r); int ans = 0; if (mal > mar) { ++ans; for (int i = 0; i < (mid - l); ++i) swap(arr[l + i], arr[mid + i]); } return solve(l, mid) + solve(mid, r) + ans; } int solve() { int ans = solve(0, m); if (is_sorted(arr, arr + m)) return ans; return -1; } int main() { int t; cin >> t; while (t--) { cin >> m; for (int i = 0; i < m; ++i) cin >> arr[i]; cout << solve() << endl; } }
1741
E
Sending a Sequence Over the Network
The sequence $a$ is sent over the network as follows: - sequence $a$ is split into segments (each element of the sequence belongs to exactly one segment, each segment is a group of consecutive elements of sequence); - for each segment, its length is written next to it, either to the left of it or to the right of it; - the resulting sequence $b$ is sent over the network. For example, we needed to send the sequence $a = [1, 2, 3, 1, 2, 3]$. Suppose it was split into segments as follows: $[\textcolor{red}{1}] + [\textcolor{blue}{2, 3, 1}] + [\textcolor{green}{2, 3}]$. Then we could have the following sequences: - $b = [1, \textcolor{red}{1}, 3, \textcolor{blue}{2, 3, 1}, \textcolor{green}{2, 3}, 2]$, - $b = [\textcolor{red}{1}, 1, 3, \textcolor{blue}{2, 3, 1}, 2, \textcolor{green}{2, 3}]$, - $b = [\textcolor{red}{1}, 1, \textcolor{blue}{2, 3, 1}, 3, 2, \textcolor{green}{2, 3}]$, - $b = [\textcolor{red}{1}, 1,\textcolor{blue}{2, 3, 1}, 3, \textcolor{green}{2, 3}, 2]$. If a different segmentation had been used, the sent sequence might have been different. The sequence $b$ is given. Could the sequence $b$ be sent over the network? In other words, is there such a sequence $a$ that converting $a$ to send it over the network could result in a sequence $b$?
Let's use dynamic programming: $dp[i] = true$ if the answer on the prefix of length $i$ is Yes. The numbers in the sequence $b$ that correspond to the sizes of the segments from the partition of $a$ into subsegments will be called interesting. An interesting number at position $i$ in the sequence $b$ stands either to the left or to the right of its segment. If it stands to the left of the segment, it can only be interesting if $dp[i-1] = true$; then $dp[i + b[i]] = true$. If it stands to the right of the segment, then if $dp[i - b[i] - 1] = true$, then $dp[i] = true$. The answer for the whole sequence is Yes if $dp[n] = true$.
[ "dp" ]
1,600
#include <bits/stdc++.h> using namespace std; #define forn(i, n) for (int i = 0; i < int(n); i++) #define sz(v) (int)v.size() #define all(v) v.begin(),v.end() #define eb emplace_back void solve() { int n; cin >> n; vector<int> a(n+1); for (int i = 1; i <= n; ++i) { cin >> a[i]; } vector<bool> dp(n+1, false); dp[0] = true; for (int i = 1; i <= n; ++i) { if (i + a[i] <= n && dp[i-1]) dp[i + a[i]] = true; if (i - a[i] - 1 >= 0 && dp[i - a[i] - 1]) dp[i] = true; } cout << (dp[n] ? "YES" : "NO") << '\n'; } int main() { int t; cin >> t; forn(tt, t) { solve(); } }
1741
F
Multi-Colored Segments
Dmitry has $n$ segments of different colors on the coordinate axis $Ox$. Each segment is characterized by three integers $l_i$, $r_i$ and $c_i$ ($1 \le l_i \le r_i \le 10^9, 1 \le c_i \le n$), where $l_i$ and $r_i$ are the coordinates of the ends of the $i$-th segment, and $c_i$ is its color. Dmitry likes to find the minimum distances between segments. However, he considers pairs of segments of the same color uninteresting. Therefore, he wants to know for each segment the distance from this segment to the nearest \textbf{differently} colored segment. The distance between two segments is the minimum of the distances between a point of the first segment and a point of the second segment. In particular, if the segments intersect, then the distance between them is equal to $0$. For example, Dmitry has $5$ segments: - The first segment intersects with the second (and these are segments of different colors), so the answers for them are equal to $0$. - For the $3$-rd segment, the nearest segment of a different color is the $2$-nd segment, the distance to which is equal to $2$. - For the $4$-th segment, the nearest segment of a different color is the $5$-th segment, the distance to which is equal to $1$. - The $5$-th segment lies inside the $2$-nd segment (and these are segments of different colors), so the answers for them are equal to $0$.
Let's go through the segments twice: first in non-decreasing order of the left-end coordinates, and then in non-increasing order of the right-end coordinates. To do the second pass, just multiply the coordinates of the left and right borders by $-1$, swap them, and walk from left to right again. Going through the segments in non-decreasing order of the left end, for each segment we need to find a segment that starts not to the right of the current one and ends as far to the right as possible. If the coordinate of its right end is not less than the coordinate of the left end of the current segment, then it intersects the current segment; otherwise the distance between them equals the distance between the left end of the current segment and the maximum right-end coordinate among segments starting to the left of ours. Note that it is enough for us to store no more than $2$ candidates: for each color we store the maximum right coordinate of a segment of this color that has already been considered. If we store the $2$ colors with the largest right coordinates, then one of them is definitely not equal to the current one. When considering a segment, we add it to the list, and if the size of the list becomes $3$, we keep the $2$ optimal elements.
[ "binary search", "data structures", "math", "sortings" ]
2,000
#include <iostream> #include <vector> #include <algorithm> #include <set> #include <queue> using namespace std; int n; vector<int> calc(vector<vector<int>> a) { vector<pair<int, int>> l(n), r(n); for (int i = 0; i < n; ++i) { l[i] = {a[i][0], i}; r[i] = {a[i][1], i}; } sort(l.begin(), l.end()); sort(r.begin(), r.end()); vector<vector<pair<int, int>>> suf(n); vector<pair<int, int>> curr; for (int i0 = n - 1; i0 >= 0; --i0) { int xr = r[i0].first; int i = r[i0].second; int xl = a[i][0]; int c = a[i][2]; if (curr.empty()) { curr.emplace_back(xl, c); } else if (curr.size() == 1) { if (curr[0].second == c) { curr[0].first = min(curr[0].first, xl); } else { curr.emplace_back(xl, c); } } else { if (curr[0].second == c) { curr[0].first = min(curr[0].first, xl); } else if (curr[1].second == c) { curr[1].first = min(curr[1].first, xl); } else { curr.emplace_back(xl, c); } } sort(curr.begin(), curr.end()); if (curr.size() == 3) { curr.pop_back(); } suf[i0] = curr; } vector<int> ans(n, 1e9); int j = 0; for (int i0 = 0; i0 < n; ++i0) { int xl = l[i0].first, i = l[i0].second; int xr = a[i][1], c = a[i][2]; while (j < n && r[j].first < xl) { j++; } if (j < n) { vector<pair<int, int>> s = suf[j]; if (s[0].second != c) { ans[i] = min(ans[i], max(0, s[0].first - xr)); } else if (s.size() == 2) { ans[i] = min(ans[i], max(0, s[1].first - xr)); } } } return ans; } const int K = 1e9 + 1; void solve() { cin >> n; vector<vector<int>> a(n, vector<int>(3)), b(n, vector<int>(3)); vector<pair<int, int>> l(n), r(n); for (int i = 0; i < n; ++i) { for (int j = 0; j < 3; ++j) { cin >> a[i][j]; if (j == 2) { b[i][j] = a[i][j]; } else { b[i][1 - j] = K - a[i][j]; } } } vector<int> ans1 = calc(a), ans2 = calc(b); for (int i = 0; i < n; ++i) { cout << min(ans1[i], ans2[i]) << ' '; } cout << '\n'; } int main() { ios_base::sync_with_stdio(0); cin.tie(0), cout.tie(0); int t = 1; cin >> t; for (int it = 0; it < t; ++it) { solve(); } return 0; }
1741
G
Kirill and Company
Kirill lives on a connected undirected graph of $n$ vertices and $m$ edges at vertex $1$. One fine evening he gathered $f$ friends, the $i$-th friend lives at the vertex $h_i$. So all friends are now in the vertex $1$, the $i$-th friend must get to his home to the vertex $h_i$. The evening is about to end and it is time to leave. It turned out that $k$ ($k \le 6$) of his friends have no cars, and they would have to walk if no one gives them a ride. One friend with a car can give a ride to \textbf{any} number of friends without cars, but only if he can give them a ride by driving along one of the \textbf{shortest} paths to his house. For example, in the graph below, a friend from vertex $h_i=5$ can give a ride to friends from the following sets of vertices: $[2, 3]$, $[2, 4]$, $[2]$, $[3]$, $[4]$, but can't give a ride to friend from vertex $6$ or a set $[3, 4]$. \begin{center} {\small The vertices where friends without cars live are highlighted in green, and with cars — in red.} \end{center} Kirill wants as few friends as possible to have to walk. Help him find the \textbf{minimum} possible number.
To begin with, let's learn how to find, for each vertex, the possible sets of friends (in the form of bitmasks) that a friend living there can give a ride to. Let's use a breadth-first search: every time we find a shortest path to a vertex $u$ whose previous vertex is $v$, we add the masks of vertex $v$ to the masks of vertex $u$, updating them with the friends living in $u$. Now, from the resulting sets, we need to pick the best combination. We find it using the knapsack problem: we use the masks as weights, and the total weight is the bitwise OR of the selected set.
[ "bitmasks", "brute force", "dfs and similar", "dp", "flows", "graphs", "shortest paths" ]
2,200
from collections import deque def solve(): n, m = map(int, input().split()) sl = [[] for _ in range(n)] for _ in range(m): u, v = map(int, input().split()) u -= 1 v -= 1 sl[u] += [v] sl[v] += [u] f = int(input()) h = [int(x) - 1 for x in input().split()] mask = [0] * n k = int(input()) p = [int(x) - 1 for x in input().split()] + [-1] for i in range(k): mask[h[p[i]]] += 1 << i vars = [set() for _ in range(n)] dist = [-1] * n dist[0] = 0 vars[0].add(mask[0]) q = deque([0]) while len(q) > 0: v = q.popleft() for u in sl[v]: if dist[u] == -1: dist[u] = dist[v] + 1 q.append(u) if dist[u] == dist[v] + 1: for msk in vars[v]: vars[u].add(msk | mask[u]) backpack = [False] * (1 << k) backpack[0] = True j = 0 for i in range(f): if i == p[j]: j += 1 continue nw = backpack.copy() for msk in range(1 << k): if not backpack[msk]: continue for var in vars[h[i]]: nw[msk | var] = True backpack, nw = nw, backpack mn = k for msk in range(1 << k): if not backpack[msk]: continue ans = 0 for b in range(k): if msk & (1 << b) == 0: ans += 1 mn = min(mn, ans) print(mn) t = int(input()) for _ in range(t): solve()
1742
A
Sum
You are given three integers $a$, $b$, and $c$. Determine if one of them is the sum of the other two.
You only need to write an if statement and check if any of these are true: $a+b=c$, $b+c=a$, $c+a=b$.
[ "implementation" ]
800
#include <bits/stdc++.h> using namespace std; const int MAX = 200007; const int MOD = 1000000007; void solve() { int a, b, c; cin >> a >> b >> c; cout << ((a + b == c || c + a == b || b + c == a) ? "YES\n" : "NO\n"); } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1742
B
Increasing
You are given an array $a$ of $n$ positive integers. Determine if, by rearranging the elements, you can make the array strictly increasing. In other words, determine if it is possible to rearrange the elements such that $a_1 < a_2 < \dots < a_n$ holds.
If there are two elements with the same value, then the answer is NO, because neither of these values is less than the other. Otherwise, the answer is YES, since we can just sort the array. The time complexity is $\mathcal{O}(n \log n)$ or $\mathcal{O}(n)$ depending on the implementation.
[ "greedy", "implementation", "sortings" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; int x[n]; set<int> a; for(int i = 0; i < n; i++) { cin >> x[i]; } for(int i = 0; i < n; i++) { if(a.find(x[i]) != a.end()) { cout << "NO" << endl; return; } a.insert(x[i]); } cout << "YES" << endl; } int main() { int t; cin >> t; while(t--) { solve(); } }
1742
C
Stripes
On an $8 \times 8$ grid, some horizontal rows have been painted red, and some vertical columns have been painted blue, in some order. The stripes are drawn sequentially, one after the other. When the stripe is drawn, it repaints all the cells through which it passes. Determine which color was used last. \begin{center} {\small The red stripe was painted after the blue one, so the answer is R.} \end{center}
Note that if a stripe is painted last, then the entire stripe appears in the final picture (because no other stripe is covering it). Since rows are only painted red and columns are only painted blue, we can just check if any row contains 8 Rs. If there is such a row, then red was painted last; otherwise, blue was painted last. How do you write a validator for this problem? (Given a grid, check if it is a valid input.)
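As for the closing question, one possible validator (a hedged sketch, not the official checker) reverses the process: repeatedly "unpaint" a stripe that could have been drawn last, that is, a row whose $8$ cells are all 'R' or already freed, or a column whose $8$ cells are all 'B' or already freed, and accept iff every painted cell gets explained by some stripe.

```python
# Sketch of a validator: peel stripes in reverse order. '?' marks cells
# freed by an already-removed stripe; '.' cells were never painted.
def valid_grid(rows):
    g = [list(r) for r in rows]
    changed = True
    while changed:
        changed = False
        for i in range(8):  # try to remove a red row
            if any(g[i][j] in 'RB' for j in range(8)) and \
               all(g[i][j] in 'R?' for j in range(8)):
                for j in range(8):
                    g[i][j] = '?'
                changed = True
        for j in range(8):  # try to remove a blue column
            if any(g[i][j] in 'RB' for i in range(8)) and \
               all(g[i][j] in 'B?' for i in range(8)):
                for i in range(8):
                    g[i][j] = '?'
                changed = True
    # valid iff no painted cell is left unexplained
    return all(g[i][j] not in 'RB' for i in range(8) for j in range(8))
```

A full validator would also check the grid dimensions, the allowed alphabet, and any extra guarantees of the statement (e.g. that at least one stripe was painted).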
[ "implementation" ]
900
#include <bits/stdc++.h> using namespace std; const int MAX = 200007; const int MOD = 1000000007; void solve() { char g[8][8]; vector<int> r; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { cin >> g[i][j]; if (g[i][j] == 'R') {r.push_back(i);} } } for (int i : r) { bool ok = true; for (int j = 0; j < 8; j++) { if (g[i][j] != 'R') {ok = false; break;} } if (ok) {cout << "R\n"; return;} } cout << "B\n"; } int main() { ios::sync_with_stdio(false); cin.tie(nullptr); int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();} // solve(); }
1742
D
Coprime
Given an array of $n$ positive integers $a_1, a_2, \dots, a_n$ ($1 \le a_i \le 1000$). Find the maximum value of $i + j$ such that $a_i$ and $a_j$ are coprime,$^{\dagger}$ or $-1$ if no such $i$, $j$ exist. For example consider the array $[1, 3, 5, 2, 4, 7, 7]$. The maximum value of $i + j$ that can be obtained is $5 + 7$, since $a_5 = 4$ and $a_7 = 7$ are coprime. $^{\dagger}$ Two integers $p$ and $q$ are coprime if the only positive integer that is a divisor of both of them is $1$ (that is, their greatest common divisor is $1$).
Note that the array has at most $1000$ distinct elements, since $a_i \leq 1000$. For each value, store the largest index at which it occurs. Then we can brute force over all pairs of values and find the coprime pair with the largest sum of indices. The time complexity is $\mathcal{O}(a_i^2 \log a_i + n)$ per testcase.
[ "brute force", "greedy", "number theory" ]
1,100
#include "bits/stdc++.h" using namespace std; #define ll long long #define forn(i,n) for(int i=0;i<n;i++) #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() vector<int> pairs[1001]; void solve() { int n; cin >> n; vector<int> id[1001]; for(int i = 1; i <= n; ++i) { int x; cin >> x; id[x].push_back(i); } int ans = -1; for(int i = 1; i <= 1000; ++i) { for(int j: pairs[i]) { if(!id[i].empty() && !id[j].empty()) { ans = max(ans, id[i].back() + id[j].back()); } } } cout << ans << "\n"; } int32_t main() { for(int i = 1; i <= 1000; ++i) { for(int j = 1; j <= 1000; ++j) { if(__gcd(i, j) == 1) { pairs[i].push_back(j); } } } ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1742
E
Scuza
Timur has a stairway with $n$ steps. The $i$-th step is $a_i$ meters higher than its predecessor. The first step is $a_1$ meters higher than the ground, and the ground starts at $0$ meters. \begin{center} {\small The stairs for the first test case.} \end{center} Timur has $q$ questions, each denoted by an integer $k_1, \dots, k_q$. For each question $k_i$, you have to print the maximum possible height Timur can achieve by climbing the steps if his legs are of length $k_i$. Timur can only climb the $j$-th step if his legs are of length at least $a_j$. In other words, $k_i \geq a_j$ for each step $j$ climbed. Note that you should answer each question independently.
Let's compute the prefix sums of the array $a$: let $b_i = a_1 + \dots + a_i$. Rephrasing the problem: for each question containing an integer $k$, we need to find the largest index $i$ such that $a_1, \dots, a_i$ are all at most $k$, and then output $b_i$. In other words, $\max(a_1, \dots, a_i) \leq k$. Let's also compute the prefix maximums of the array: let $m_i = \max(a_1, \dots, a_i)$. Then we need to find the largest $i$ such that $m_i \leq k$, which is doable using binary search, since the array $m$ is non-decreasing. Once we find the index $i$, we simply output $b_i$. The time complexity is $\mathcal{O}(n + q \log n)$ per testcase.
[ "binary search", "greedy", "math" ]
1,200
#include <bits/stdc++.h> using namespace std; void solve() { int n, q; cin >> n >> q; vector<long long> pref; pref.push_back(0); vector<int> prefmax; for(int i = 0; i < n; i++) { int x; cin >> x; pref.push_back(pref.back()+x); if(i == 0) { prefmax.push_back(x); } else { prefmax.push_back(max(prefmax.back(), x)); } } for(int i = 0; i < q; i++) { int k; cin >> k; int ind = upper_bound(prefmax.begin(), prefmax.end(), k)-prefmax.begin(); cout << pref[ind] << " "; } cout << endl; } int main() { int t; cin >> t; while(t--) { solve(); } }
1742
F
Smaller
Alperen has two strings, $s$ and $t$ which are both initially equal to "a". He will perform $q$ operations of two types on the given strings: - $1 \;\; k \;\; x$ — Append the string $x$ exactly $k$ times at the end of string $s$. In other words, $s := s + \underbrace{x + \dots + x}_{k \text{ times}}$. - $2 \;\; k \;\; x$ — Append the string $x$ exactly $k$ times at the end of string $t$. In other words, $t := t + \underbrace{x + \dots + x}_{k \text{ times}}$. After each operation, determine if it is possible to \textbf{rearrange} the characters of $s$ and $t$ such that $s$ is lexicographically smaller$^{\dagger}$ than $t$. Note that the strings change after performing each operation and \textbf{don't} go back to their initial states. $^{\dagger}$ Simply speaking, the lexicographical order is the order in which words are listed in a dictionary. A formal definition is as follows: string $p$ is lexicographically smaller than string $q$ if there exists a position $i$ such that $p_i < q_i$, and for all $j < i$, $p_j = q_j$. If no such $i$ exists, then $p$ is lexicographically smaller than $q$ if the length of $p$ is less than the length of $q$. For example, $abdc < abe$ and $abc < abcd$, where we write $p < q$ if $p$ is lexicographically smaller than $q$.
First of all, let's think about how we should rearrange the two strings so that $s < t$ (if that is ever possible). It is always optimal to arrange $s$'s characters in increasing lexicographic order and $t$'s characters in decreasing order. Since initially both $s$ and $t$ contain the character "a", the first time $t$ receives any letter other than "a" the answer becomes "YES" forever, because that character will always be lexicographically larger than $s$'s first character, which should be "a". In the other case, we know that $t$ has no characters other than "a", so we compare the string $s$ with a run of "a" characters: $s$ will be smaller if and only if it consists only of "a"s and is strictly shorter than $t$.
[ "constructive algorithms", "greedy", "strings" ]
1,500
#include "bits/stdc++.h" using namespace std; #define ll long long #define forn(i,n) for(int i=0;i<n;i++) #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() void solve() { int q; cin >> q; bool otherA = false, otherB = false; ll cntA = 0, cntB = 0; while(q--) { ll d, k; string x; cin >> d >> k >> x; for(auto c: x) { if(d == 1) { if(c != 'a') otherA = 1; else cntA += k; } else { if(c != 'a') otherB = 1; else cntB += k; } } if(otherB) { cout << "YES\n"; } else if(!otherA && cntA < cntB) { cout << "YES\n"; } else { cout << "NO\n"; } } } int32_t main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1742
G
Orray
You are given an array $a$ consisting of $n$ nonnegative integers. Let's define the prefix OR array $b$ as the array $b_i = a_1~\mathsf{OR}~a_2~\mathsf{OR}~\dots~\mathsf{OR}~a_i$, where $\mathsf{OR}$ represents the bitwise OR operation. In other words, the array $b$ is formed by computing the $\mathsf{OR}$ of every prefix of $a$. You are asked to rearrange the elements of the array $a$ in such a way that its prefix OR array is lexicographically maximum. An array $x$ is lexicographically greater than an array $y$ if in the first position where $x$ and $y$ differ, $x_i > y_i$.
Note that in this context $maxval$ denotes $10^9$. We can make the observation that only the first $\log_2(maxval)$ elements matter, since after placing them optimally we can be sure all bits that could be set in the prefix OR have already been set. So, we can brute force the optimal choice $\log_2(maxval)$ times (we choose to add an element if it provides the largest new prefix OR value among all unused elements) and then just append the rest of the unused elements.
[ "bitmasks", "brute force", "greedy", "math", "sortings" ]
1,500
#include "bits/stdc++.h" using namespace std; #define ll long long #define forn(i,n) for(int i=0;i<n;i++) #define all(v) v.begin(), v.end() #define rall(v) v.rbegin(),v.rend() #define pb push_back #define sz(a) (int)a.size() void solve() { int n; cin >> n; vector<int> a(n); forn(i, n) cin >> a[i]; //we care at most about first log2(a) values int cur_or = 0; vector<bool> vis(n, false); for(int i = 0; i < min(31, n); ++i) { int mx = -1, idx = -1; // mx starts at -1 so an index is always chosen, even if all remaining values are 0 for(int j = 0; j < n; ++j) { if(vis[j]) continue; if((cur_or | a[j]) > mx) { mx = (cur_or | a[j]); idx = j; } } vis[idx] = true; cout << a[idx] << " "; cur_or |= a[idx]; } forn(i, n) if(!vis[i]) cout << a[i] << " "; cout << '\n'; } int32_t main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int t = 1; cin >> t; while(t--) { solve(); } }
1743
A
Password
Monocarp has forgotten the password to his mobile phone. The password consists of $4$ digits from $0$ to $9$ (note that it can start with the digit $0$). Monocarp remembers that his password had exactly two different digits, and each of these digits appeared exactly two times in the password. Monocarp also remembers some digits which were definitely not used in the password. You have to calculate the number of different sequences of $4$ digits that could be the password for Monocarp's mobile phone (i. e. these sequences should meet all constraints on Monocarp's password).
There are two possible solutions for the problem. The first solution is basically brute force. Each password can be obtained from an integer from $0$ to $9999$. If the number is from $1000$ to $9999$, then it's already a password of length $4$. Otherwise, you have to prepend it with enough zeros to make its length $4$. Then you have to check if the password is valid. First, check if it consists of exactly two different digits: make a set of all its characters (set<char> in case of C++, for example) and check its size. Then check if the first digit of the password appears exactly twice. That would mean the other digit appears exactly twice as well. Finally, check that neither of the found digits is forbidden. The second solution is based on combinatorics. First, choose the two digits that will appear in the password: $C(10 - n, 2)$ ways. Since $n$ digits are prohibited, the remaining $10 - n$ are allowed. Second, choose the positions that will be taken by the first one: $C(4, 2)$ ways. The answer is the product of these two values.
[ "brute force", "combinatorics", "implementation", "math" ]
800
#include <bits/stdc++.h>
using namespace std;

int n;

inline void read() {
    cin >> n;
    int x;
    for (int i = 0; i < n; ++i)
        cin >> x;
}

inline int fac(int n) {
    int res = 1;
    for (int i = 2; i <= n; i++) {
        res *= i;
    }
    return res;
}

inline int c(int n, int k) {
    return fac(n) / fac(k) / fac(n - k);
}

inline void solve() {
    cout << c(10 - n, 2) * c(4, 2) << endl;
}

int main () {
    int t;
    cin >> t;
    while (t--) {
        read();
        solve();
    }
}
1743
B
Permutation Value
You are given an integer $n$. You have to construct a permutation of size $n$. A \textbf{permutation} is an array where each integer from $1$ to $s$ (where $s$ is the size of permutation) occurs exactly once. For example, $[2, 1, 4, 3]$ is a permutation of size $4$; $[1, 2, 4, 5, 3]$ is a permutation of size $5$; $[1, 4, 3]$ is not a permutation (the integer $2$ is absent), $[2, 1, 3, 1]$ is not a permutation (the integer $1$ appears twice). A \textbf{subsegment} of a permutation is a contiguous subsequence of that permutation. For example, the permutation $[2, 1, 4, 3]$ has $10$ subsegments: $[2]$, $[2, 1]$, $[2, 1, 4]$, $[2, 1, 4, 3]$, $[1]$, $[1, 4]$, $[1, 4, 3]$, $[4]$, $[4, 3]$ and $[3]$. The \textbf{value} of the permutation is the number of its subsegments which are also permutations. For example, the value of $[2, 1, 4, 3]$ is $3$ since the subsegments $[2, 1]$, $[1]$ and $[2, 1, 4, 3]$ are permutations. You have to construct a permutation of size $n$ with \textbf{minimum possible value} among all permutations of size $n$.
The subsegment $[1]$, as well as the whole permutation, will always be a permutation, so the value is at least $2$. Let's try to find a way to generate a permutation of $n$ elements with value equal to $2$. Every permutation must contain the number $1$. Let's try to construct the answer in such a way that if a subsegment contains the number $1$, then it also contains the number $n$ (if it is so, it can only be a permutation if it contains all $n$ numbers). If we begin our permutation with the numbers $1$ and $n$, we will reach our goal: the only subsegment which does not contain $n$ but contains $1$ is $[1]$, and the only subsegment which contains $n$ and also a permutation is the whole permutation itself. So, any permutation that begins with $[1, n \dots]$ can be the answer.
[ "constructive algorithms", "greedy" ]
800
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    cout << 1;
    for(int i = n; i >= 2; i--)
        cout << " " << i;
    cout << endl;
}

int main() {
    int t;
    cin >> t;
    for(int i = 0; i < t; i++)
        solve();
}
1743
C
Save the Magazines
Monocarp has been collecting rare magazines for quite a while, and now he has decided to sell them. He distributed the magazines between $n$ boxes, arranged in a row. The $i$-th box contains $a_i$ magazines. Some of the boxes are covered with lids, others are not. Suddenly it started to rain, and now Monocarp has to save as many magazines from the rain as possible. To do this, he can move the lids between boxes as follows: if the $i$-th box was covered with a lid initially, he can either move the lid from the $i$-th box to the box $(i-1)$ (if it exists), or keep the lid on the $i$-th box. You may assume that Monocarp can move the lids instantly at the same moment, and no lid can be moved more than once. If a box will be covered with a lid after Monocarp moves the lids, the magazines in it will be safe from the rain; otherwise they will soak. You have to calculate the maximum number of magazines Monocarp can save from the rain.
Let's process the boxes from left to right. Consider the first box. If it has a lid, then you can just add the number of magazines in it to the answer and forget about this box. To be exact, proceed to solve the problem with the first box removed. If it doesn't have a lid, then look at the next box. If it doesn't have a lid either, then the first box can never be covered. Remove it and proceed further. If the next box has a lid, then look at the one after it. Again, if it doesn't have a lid, then these first two boxes are solved independently of everything else. You can cover exactly one of them. Choose the bigger one and remove them both. To generalize the argument, let's derive a pattern. First, there's a box without a lid. Then some number of boxes with lids in a row. Then a box without a lid again. Among the first box and the boxes with lids, exactly one will be left uncovered, and it can be any one of them. The best box to leave uncovered is the one with the smallest number of magazines in it. Thus, the solution is the following. As long as the first box has a lid, keep removing the first box and adding it to the answer. Then, as long as there are boxes left, take the first box and the largest number of consecutive boxes with lids after it (that number might be zero). On that segment, find the minimum value and the sum. Add the sum minus the minimum to the answer, and remove the entire segment. The removals can be done explicitly with a queue or a reversed vector, or implicitly by maintaining a pointer to the first non-removed box. Overall complexity: $O(n)$.
[ "constructive algorithms", "dp", "greedy" ]
1,100
for _ in range(int(input())):
    n = int(input())
    s = '0' + input()
    a = [0] + list(map(int, input().split()))
    ans = 0
    i = 0
    while i <= n:
        mn = a[i]
        sm = a[i]
        j = i + 1
        while j <= n and s[j] == '1':
            mn = min(mn, a[j])
            sm += a[j]
            j += 1
        ans += sm - mn
        i = j
    print(ans)
1743
D
Problem with Random Tests
You are given a string $s$ consisting of $n$ characters. Each character of $s$ is either 0 or 1. A substring of $s$ is a contiguous subsequence of its characters. You have to choose two substrings of $s$ (possibly intersecting, possibly the same, possibly non-intersecting — just any two substrings). After choosing them, you calculate the value of the chosen pair of substrings as follows: - let $s_1$ be the first substring, $s_2$ be the second chosen substring, and $f(s_i)$ be the integer such that $s_i$ is its binary representation (for example, if $s_i$ is 11010, $f(s_i) = 26$); - the value is the \textbf{bitwise OR} of $f(s_1)$ and $f(s_2)$. Calculate the maximum possible value you can get, and print it \textbf{in binary representation without leading zeroes}.
The first observation we need is that we can choose two prefixes of $s$ as the substrings used in forming the results. This can be proved easily: suppose we chose a substring which does not contain the leftmost character of $s$; if we expand it to the left, the answer won't become worse. So, it is optimal to choose two prefixes of $s$ as the substrings. Furthermore, one of these prefixes must be $s$ itself: if the leftmost index of 1 is $i$, the length of the answer won't exceed $n - i + 1$, but the only way to have a 1 in the $(n-i+1)$-th bit of the answer is to choose a prefix of $s$ where the $(n-i+1)$-th character (from the right) is 1; and there is only one such prefix of $s$, which is $s$ itself. So, now we can solve the problem in $O(n^2)$ - try to combine all prefixes of $s$ with $s$ itself, and choose the one that yields the best answer. To speed this up, we need to somehow cut down on the number of prefixes of $s$ we check. Let's look at the first block of 1's in $s$. The next character after this block is 0; since we take $s$ as one of the substrings, in order to get 1 instead of 0 in the corresponding position of the answer, we need to choose a prefix which has 1 in that position. This 1 represents one of the 1's from the first block of 1's, since only one of them can shift to that position. So, we need to check only the prefixes such that, by using them, we shift some character 1 from the first block to the position of the first 0 after this block. Since the tests are random, the expected length of the first block of 1's is $O(1)$ (furthermore, even the probability that its length is $20$ or bigger is about $10^{-6}$), so the expected number of prefixes we need to check is also $O(1)$. Thus, the expected runtime of our solution is $O(n)$.
[ "brute force", "dp", "greedy", "probabilities" ]
1,700
#include<bits/stdc++.h>
using namespace std;

char buf[1000043];

string normalize(const string& v) {
    int cnt = 0;
    while(cnt < v.size() && v[cnt] == '0')
        cnt++;
    if(cnt == v.size())
        return "0";
    return v.substr(cnt, int(v.size()) - cnt);
}

string operator |(const string& a, const string& b) {
    int sz = max(a.size(), b.size());
    string ans(sz, '0');
    for(int i = 0; i < a.size(); i++)
        if(a[i] == '1')
            ans[i + sz - int(a.size())] = '1';
    for(int i = 0; i < b.size(); i++)
        if(b[i] == '1')
            ans[i + sz - int(b.size())] = '1';
    return normalize(ans);
}

bool better(const string& a, const string& b) {
    if(a.size() != b.size())
        return a.size() > b.size();
    return a > b;
}

int main() {
    int n;
    scanf("%d", &n);
    string s;
    scanf("%s", buf);
    s = buf;
    string ans = s | s;
    int pos1 = s.find("1");
    if(pos1 != string::npos) {
        int pos2 = s.find("0", pos1);
        if(pos2 != string::npos) {
            int cur = pos1;
            int not_needed = 0;
            while(true) {
                if(cur == n || (s[cur] == '1' && cur > pos2))
                    break;
                string nw = s | s.substr(pos1, n - pos1 - not_needed);
                if(better(nw, ans))
                    ans = nw;
                cur++;
                not_needed++;
            }
        }
    }
    puts(ans.c_str());
}
1743
E
FTL
Monocarp is playing a video game. In the game, he controls a spaceship and has to destroy an enemy spaceship. Monocarp has two lasers installed on his spaceship. Both lasers $1$ and $2$ have two values: - $p_i$ — the power of the laser; - $t_i$ — the reload time of the laser. When a laser is fully charged, Monocarp can either shoot it or wait for the other laser to charge and shoot both of them at the same time. An enemy spaceship has $h$ durability and $s$ shield capacity. When Monocarp shoots an enemy spaceship, it receives $(P - s)$ damage (i. e. $(P - s)$ gets subtracted from its durability), where $P$ is the total power of the lasers that Monocarp shoots (i. e. $p_i$ if he only shoots laser $i$ and $p_1 + p_2$ if he shoots both lasers at the same time). An enemy spaceship is considered destroyed when its durability becomes $0$ or lower. Initially, both lasers are zero charged. What's the lowest amount of time it can take Monocarp to destroy an enemy spaceship?
At any time, we have three possible choices: wait and shoot the first laser, the second laser, or both lasers. Sometimes it makes sense to wait for both because you can deal $s$ more damage than you would by shooting both lasers separately. The first claim: greedy won't work. Maybe there is a sufficiently smart greedy; we weren't able to come up with it. The second claim: brute force won't work. The funny thing is that it actually worked on the constraints up to $2000$, but again, we couldn't code any sufficiently fast one for $5000$. Thus, let's try some dynamic programming. Since all the times are huge, we'd want to avoid having them as the states. What is small, however, is the durability of the enemy ship and the number of shots we have to make to destroy it. Ideally, we'd like to have some $dp[i]$ - the smallest time to deal $i$ damage to the enemy ship. This way, $dp[h]$ would be the answer. Sadly, it's not immediately clear how to get rid of reload times completely. There might be states with different times until the charge with the same damage dealt, and we don't know which of those we want to keep. Thus, let's make the dp state more complicated. Let $dp[i]$ be the smallest time it takes to deal $i$ damage if the last shot was from both lasers at the same time. This way we know the reload times of both lasers - they are full $t_1$ and $t_2$. $dp[0] = 0$, as moment $0$ has both lasers zero charged as if after a shot. What are the transitions? Well, now we have to shoot each laser multiple times, then wait until both are charged and shoot both. Both lasers can now be considered independent of each other. Let the time between the previous double shot and the next one be some value $t$. During this time, it never made sense to wait between shots of the same laser. So we waited $t_1$, shot the first laser, waited another $t_1$, shot again, until we couldn't shoot anymore, since the laser wouldn't recharge in time before the double shot. Same for the second laser.
Notice that if both $t \mod t_1 \neq 0$ and $t \mod t_2 \neq 0$, then you could just decrease $t$ by $1$ and shoot each laser the same number of times. Thus, only $t$ that are multiples of either $t_1$ or $t_2$ are optimal. Thus, we can iterate over all possible waiting times $t$. Just iterate over $i \cdot t_1$ and $i \cdot t_2$ for all $i$ from $1$ to $h$. Having a fixed $t$, calculate the number of shots of each laser, calculate the damage, go into the corresponding dp state. It could also happen that the last shot before destroying the ship wasn't a double one. However, it still follows the same ideas. It means that each laser was shooting non-stop until the ship was destroyed. Thus, the destruction time is still a multiple of either of the reload times. Overall complexity: $O(h^2)$.
[ "binary search", "dp" ]
2,400
#include <bits/stdc++.h>

#define forn(i, n) for (int i = 0; i < int(n); i++)

using namespace std;

const long long INF64 = 1e18;

int main() {
    vector<int> ps(2);
    vector<long long> ts(2);
    int h, s;
    forn(i, 2) scanf("%d%lld", &ps[i], &ts[i]);
    scanf("%d%d", &h, &s);
    long long ans = INF64;
    vector<long long> dp(h + 1, INF64);
    dp[0] = 0;
    forn(i, h) for (int j = 1; j <= h - i; ++j) forn(k, 2) {
        int ni = min((long long)h, i + j * (ps[k] - s) + j * ts[k] / ts[k ^ 1] * (ps[k ^ 1] - s));
        if (ni == h)
            ans = min(ans, dp[i] + j * ts[k]);
        if (j * ts[k] >= ts[k ^ 1]) {
            ni = min((long long)h, i + (j - 1) * (ps[k] - s) + (j * ts[k] / ts[k ^ 1] - 1) * (ps[k ^ 1] - s) + (ps[0] + ps[1] - s));
            dp[ni] = min(dp[ni], dp[i] + j * ts[k]);
        }
    }
    ans = min(ans, dp[h]);
    printf("%lld\n", ans);
}
1743
F
Intersection and Union
You are given $n$ segments on the coordinate axis. The $i$-th segment is $[l_i, r_i]$. Let's denote the set of all integer points belonging to the $i$-th segment as $S_i$. Let $A \cup B$ be the union of two sets $A$ and $B$, $A \cap B$ be the intersection of two sets $A$ and $B$, and $A \oplus B$ be the symmetric difference of $A$ and $B$ (a set which contains all elements of $A$ and all elements of $B$, except for the ones that belong to both sets). Let $[\mathbin{op}_1, \mathbin{op}_2, \dots, \mathbin{op}_{n-1}]$ be an array where each element is either $\cup$, $\oplus$, or $\cap$. Over all $3^{n-1}$ ways to choose this array, calculate the sum of the following values: $$|(((S_1\ \mathbin{op}_1\ S_2)\ \mathbin{op}_2\ S_3)\ \mathbin{op}_3\ S_4)\ \dots\ \mathbin{op}_{n-1}\ S_n|$$ In this expression, $|S|$ denotes the size of the set $S$.
We will use the Contribution to the Sum technique to solve this problem: for every integer from $0$ to $300000$, let's calculate the number of ways to choose the operators so it belongs to the result, and add all of the results. For a fixed integer $x$, the number of ways to choose the operators so that $x$ belongs to the result can be done as follows: let $dp_{i,f}$ be the number of ways to choose the first $i$ operators so that, after applying them, the resulting set contains $x$ if $f = 1$, and does not contain $x$ if $f = 0$. The transitions from $dp_i$ to $dp_{i+1}$ depend on whether the number $x$ belongs to the segment $i+1$. Obviously, this is too slow if we compute the dynamic programming from scratch for every integer $x$. Instead, we can notice that the transitions from $dp_i$ to $dp_{i+1}$ are linear combinations: both $dp_{i+1,0}$ and $dp_{i+1,1}$ are linear combinations of $dp_{i,0}$ and $dp_{i,1}$ with coefficients depending on whether the element $x$ belongs to the set or not. So, transitioning from $dp_i$ to $dp_{i+1}$ can be written in terms of multiplying by a $2 \times 2$ matrix. Let's build a segment tree where each vertex stores a transition matrix, and operations are "calculate the product of matrices on a segment" and "replace a matrix at some index". We can build a sequence of these transition matrices for $x=0$ and store them in the segment tree; for $x=1$, this sequence of transition matrices will change only in positions $j$ such that either $0$ belongs to $[l_j, r_j]$ and $1$ does not belong to it, or vice versa. So, we can go from $x=0$ to $x=1$ by replacing these transition matrices in the segment tree. For $x=2$, the only changes from $x=0$ are in positions $j$ such that either $1$ belongs to $[l_j, r_j]$ and $2$ does not belong to it, or vice versa - and we can replace the matrices in these positions as well. 
In total, there will be only $O(n)$ such replacements; so, we solve the problem in $O(M + n \log M)$, where $M$ is the constraint on the numbers belonging to the sets.
[ "data structures", "dp", "matrices", "probabilities" ]
2,300
#include<bits/stdc++.h>
using namespace std;

const int N = 300043;

typedef array<int, 2> vec;
typedef array<vec, 2> mat;

const int MOD = 998244353;

mat operator*(const mat& a, const mat& b) {
    mat c;
    for(int i = 0; i < 2; i++)
        for(int j = 0; j < 2; j++)
            c[i][j] = 0;
    for(int i = 0; i < 2; i++)
        for(int j = 0; j < 2; j++)
            for(int k = 0; k < 2; k++)
                c[i][k] = (a[i][j] * 1ll * b[j][k] + c[i][k]) % MOD;
    return c;
}

mat ZERO = {vec({3, 0}), vec({1, 2})};
mat ONE = {vec({1, 2}), vec({1, 2})};

mat t[4 * N];

void recalc(int v) {
    t[v] = t[v * 2 + 1] * t[v * 2 + 2];
}

void build(int v, int l, int r) {
    if(l == r - 1) {
        t[v] = ZERO;
    } else {
        int m = (l + r) / 2;
        build(v * 2 + 1, l, m);
        build(v * 2 + 2, m, r);
        recalc(v);
    }
}

void upd(int v, int l, int r, int pos, int val) {
    if(l == r - 1) {
        if(val == 0)
            t[v] = ZERO;
        else
            t[v] = ONE;
    } else {
        int m = (l + r) / 2;
        if(pos < m)
            upd(v * 2 + 1, l, m, pos, val);
        else
            upd(v * 2 + 2, m, r, pos, val);
        recalc(v);
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n;
    cin >> n;
    vector<vector<pair<int, int>>> v(N);
    for(int i = 0; i < n; i++) {
        int l, r;
        cin >> l >> r;
        v[l].push_back(make_pair(1, i));
        v[r + 1].push_back(make_pair(0, i));
    }
    build(0, 0, n - 1);
    int cur = 0;
    int ans = 0;
    for(int i = 0; i <= 300000; i++) {
        for(auto x : v[i]) {
            if(x.second == 0)
                cur = x.first;
            else
                upd(0, 0, n - 1, x.second - 1, x.first);
        }
        ans = (ans + t[0][cur][1]) % MOD;
    }
    cout << ans << endl;
}
1743
G
Antifibonacci Cut
\textbf{Note that the memory limit is unusual.} Let's define the sequence of Fibonacci strings as follows: $f_0$ is 0, $f_1$ is 1, $f_i$ is $f_{i-1} + f_{i-2}$ for $i>1$ ($+$ denotes the concatenation of two strings). So, for example, $f_2$ is 10, $f_3$ is 101, $f_4$ is 10110. For a given string $s$, let's define $g(s)$ as the number of ways to cut it into several (any number, possibly even just one) strings such that none of these strings are Fibonacci strings. For example, if $s$ is 10110101, $g(s) = 3$ since there are three ways to cut it: - 101101 $+$ 01; - 1011 $+$ 0101; - 1011 $+$ 01 $+$ 01. You are given a sequence of strings $s_1, s_2, \dots, s_n$. Calculate $g(s_1), g(s_1 + s_2), \dots, g(s_1 + s_2 + \ldots + s_n)$. Since these values can be huge, print them modulo $998244353$.
The first idea that comes to mind is running some sort of dynamic programming: $dp_i$ - the number of ways to cut the string consisting of the first $i$ characters. When we calculate $dp_i$, we need to take the sum of the previous values of $dp$, and then subtract $dp_j$ for every $j$ such that the string from the $j$-th character (inclusive) to the $i$-th character (non-inclusive) is a Fibonacci string. Unfortunately, there are two main issues with this solution: firstly, we cannot store the array $dp$ in memory; and secondly, we have to search for the Fibonacci strings ending in a certain index quickly (something like Aho-Corasick could work with a less strict memory limit, but right now we cannot use it). We will try to resolve both of these issues with the following approach: while we process the characters, we will maintain the list of tuples $(j, dp_j)$ such that the string from the $j$-th character to the current one is a prefix of some Fibonacci string. How do we maintain them? Every Fibonacci string $f_i$ (except for $f_0$) is a prefix of $f_{i+1}$. So, all Fibonacci strings we are interested in (except for $f_0$ again) are prefixes of the same long Fibonacci string. Suppose a tuple $(j, dp_j)$ represents some index $j$ such that the string from the $j$-th character to the current one is a prefix of that long Fibonacci string. Each time we append a character, we filter this list of tuples by trying to check if this new character matches the next character in the prefix (if it does not, the tuple is discarded). For the tuples that represent the prefixes equal to Fibonacci strings, we need to subtract the value of $dp_j$ from the new $dp$ value we are trying to calculate (checking if a prefix is a Fibonacci string is easy, we just need to check its length). How do we check that if we add a character 1 or 0, it is still a prefix? 
There are two ways to do this: either generate the first $3 \cdot 10^6$ characters of the long Fibonacci string; or represent the current prefix as the sum of Fibonacci strings $f_{i_1} + f_{i_2} + \dots + f_{i_k}$ such that for every $j \in [1, k - 1]$, the condition $i_j > i_{j+1} + 1$ holds (i. e. the Fibonacci strings we split the current prefix into are arranged in descending order of index, and there is no pair of equal or adjacent (by index) Fibonacci strings in the split). This representation is very similar to writing an integer in the Zeckendorf representation. The next character in the prefix depends on whether $f_1$ belongs to this split: if it belongs, it is the last string in the split, so we need to append 0 to transform $f_1$ into $f_2$; otherwise, we need to append 1. Okay, so now we can solve the problem in $O(NM)$ time (where $N$ is the total length of the strings in the input, and $M$ is the size of the list of tuples $(j, dp_j)$ we discussed earlier). This actually works since it looks like the size of the list of tuples is bounded as $O(\log N)$. Unfortunately, we don't have a strict mathematical proof of this; we checked this by brute force with $N$ up to $3 \cdot 10^6$, so it definitely works under the constraints of the problem.
[ "bitmasks", "combinatorics", "constructive algorithms", "data structures", "dp", "hashing", "math" ]
3,000
#include<bits/stdc++.h>
using namespace std;

const int MOD = 998244353;

int add(int x, int y) {
    x += y;
    while(x >= MOD) x -= MOD;
    while(x < 0) x += MOD;
    return x;
}

int sub(int x, int y) {
    return add(x, MOD - y);
}

int expected(int mask) {
    if(mask & 2) return 0;
    return 1;
}

int last_bit(int x) {
    if(x == 0) return -1;
    return x - (x & (x - 1));
}

bool go(int& a, int x) {
    if(expected(a) != x) {
        a = 1 << x;
        return false;
    }
    a ^= (1 << x);
    while(true) {
        int b = last_bit(a);
        int c = last_bit(a - b);
        if(c == 2 * b) a += b;
        else break;
    }
    return true;
}

bool is_fib(int a) {
    return a == last_bit(a);
}

vector<pair<int, int>> go(const vector<pair<int, int>>& a, int x) {
    vector<pair<int, int>> nw;
    for(auto b : a) {
        int cost = b.first;
        int seqn = b.second;
        if(go(seqn, x))
            nw.push_back(make_pair(cost, seqn));
    }
    return nw;
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int last = 1, sum = 1;
    vector<pair<int, int>> w;
    int n;
    cin >> n;
    for(int i = 0; i < n; i++) {
        string s;
        cin >> s;
        for(auto x : s) {
            int c = x - '0';
            int ndp = sub(sum, last);
            w = go(w, c);
            for(int j = 0; j < w.size(); j++) {
                if(is_fib(w[j].second))
                    ndp = sub(ndp, w[j].first);
            }
            if(c == 1)
                w.push_back(make_pair(last, 2));
            sum = add(sum, ndp);
            last = ndp;
            assert(w.size() <= 60);
        }
        cout << last << endl;
    }
}
1744
A
Number Replacement
An integer array $a_1, a_2, \ldots, a_n$ is being transformed into an array of lowercase English letters using the following procedure: While there is at least one number in the array: - Choose any number $x$ from the array $a$, and any letter of the English alphabet $y$. - Replace all occurrences of number $x$ with the letter $y$. For example, if we initially had an array $a = [2, 3, 2, 4, 1]$, then we could transform it the following way: - Choose the number $2$ and the letter c. After that $a = [c, 3, c, 4, 1]$. - Choose the number $3$ and the letter a. After that $a = [c, a, c, 4, 1]$. - Choose the number $4$ and the letter t. After that $a = [c, a, c, t, 1]$. - Choose the number $1$ and the letter a. After that $a = [c, a, c, t, a]$. After the transformation all letters are united into a string, in our example we get the string "cacta". Given the array $a$ and the string $s$, determine whether the string $s$ could be obtained from the array $a$ by the described transformation.
Let's note that if $a_i = a_j$, then $s_i$ must be equal to $s_j$, since we must change the same value to the same letter. If we check this for all pairs $(i, j)$ and find no contradictions, then the answer is "YES", otherwise "NO". This gives an $O(n^2)$ solution per test case.
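A minimal sketch of this pairwise check (the helper name `canTransform` is ours, not from the source):

```cpp
#include <bits/stdc++.h>

// Hypothetical helper (name ours): returns true if string s can be obtained
// from array a, i.e. equal numbers always map to equal letters.
bool canTransform(const std::vector<int>& a, const std::string& s) {
    int n = a.size();
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (a[i] == a[j] && s[i] != s[j])
                return false;  // same number would need two different letters
    return true;
}
```

Note that different numbers may map to the same letter, so only the $a_i = a_j$ direction needs checking.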
[ "greedy", "implementation" ]
800
null
1744
B
Even-Odd Increments
You are given $n$ integers $a_1, a_2, \ldots, a_n$. Process $q$ queries of two types: - query of the form "0 $x_j$": add the value $x_j$ to all even elements of the array $a$, - query of the form "1 $x_j$": add the value $x_j$ to all odd elements of the array $a$. Note that when processing the query, we look specifically at the odd/even value of $a_i$, not its index. After processing each query, print the sum of the elements of the array $a$. Please note that the answer for some test cases won't fit into 32-bit integer type, so you should use at least 64-bit integer type in your programming language (like long long for C++).
Let $\mathit{sum}$ be the initial sum of the array, $\mathit{cnt}_0$ the number of even elements, and $\mathit{cnt}_1$ the number of odd elements. Let's see how these values change with each query. There are four main cases: - Add an even number $x$ to all even numbers. Then $\mathit{sum}$ increases by $x \cdot \mathit{cnt}_0$, and the numbers of even and odd elements remain the same. - Add an odd number $x$ to all even numbers. Then $\mathit{sum}$ increases by $x \cdot \mathit{cnt}_0$, the number of even elements becomes $0$, and all elements become odd, so $\mathit{cnt}_1 = n$. - Add an even number $x$ to all odd numbers. Then $\mathit{sum}$ increases by $x \cdot \mathit{cnt}_1$, and the numbers of even and odd elements remain the same. - Add an odd number $x$ to all odd numbers. Then $\mathit{sum}$ increases by $x \cdot \mathit{cnt}_1$, the number of odd elements becomes $0$, and all elements become even, so $\mathit{cnt}_0 = n$.
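The bookkeeping above can be sketched as follows (the function name `process` and the query encoding as pairs are ours; each query is answered in $O(1)$):

```cpp
#include <bits/stdc++.h>

// Sketch of the editorial bookkeeping (name ours): maintain the total sum and
// the counts of even/odd elements; return the sum after each query.
std::vector<long long> process(const std::vector<long long>& a,
                               const std::vector<std::pair<int, long long>>& queries) {
    long long sum = 0, cnt0 = 0, cnt1 = 0;   // cnt0 = evens, cnt1 = odds
    for (long long v : a) {
        sum += v;
        (v % 2 == 0 ? cnt0 : cnt1)++;
    }
    std::vector<long long> res;
    for (auto [type, x] : queries) {
        if (type == 0) {                      // add x to all even elements
            sum += x * cnt0;
            if (x % 2 == 1) { cnt1 += cnt0; cnt0 = 0; }  // evens turn odd
        } else {                              // add x to all odd elements
            sum += x * cnt1;
            if (x % 2 == 1) { cnt0 += cnt1; cnt1 = 0; }  // odds turn even
        }
        res.push_back(sum);
    }
    return res;
}
```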
[ "implementation", "math" ]
800
null
1744
C
Traffic Light
You find yourself on an unusual crossroad with a weird traffic light. That traffic light has three possible colors: red (r), yellow (y), green (g). It is known that the traffic light repeats its colors every $n$ seconds and at the $i$-th second the color $s_i$ is on. That way, the order of the colors is described by a string. For example, if $s=$"rggry", then the traffic light works as the following: red-green-green-red-yellow-red-green-green-red-yellow- ... and so on. More formally, you are given a string $s_1, s_2, \ldots, s_n$ of length $n$. At the first second the color $s_1$ is on, at the second — $s_2$, ..., at the $n$-th second the color $s_n$ is on, at the $n + 1$-st second the color $s_1$ is on and so on. You need to cross the road and that can only be done when the green color is on. You know which color is on the traffic light at the moment, but you don't know the current moment of time. You need to find the minimum amount of time in which you are guaranteed to cross the road. You can assume that you cross the road immediately. For example, with $s=$"rggry" and the current color r there are two options: either the green color will be on after $1$ second, or after $3$. That way, the answer is equal to $3$ — that is the number of seconds that we are guaranteed to cross the road, if the current color is r.
Let's note that for each position where the color is $c$, we need to find the nearest green to its right; the answer is the largest such distance. Also, let's not forget that the traffic light states are cyclic. To get rid of the cyclicity, write the string $s$ twice; then for each cell of color $c$ in the first half we can find the nearest green color to its right (thus we have dealt with the cyclicity). Now we can just walk along this doubled string from right to left and maintain the index of the last occurrence of green. If we encounter color $c$, we try to update the answer: $ans = \max(ans, last - i)$, where $ans$ is our answer, $last$ is the nearest time green was on, and $i$ is the current time.
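The right-to-left scan over the doubled string can be sketched as follows (the function name `maxWait` is ours):

```cpp
#include <bits/stdc++.h>

// Sketch of the editorial scan (name ours): unroll one extra period, walk
// right-to-left keeping the index of the nearest green, and take the largest
// gap from a position colored c to that green.
long long maxWait(const std::string& s, char c) {
    int n = s.size();
    std::string t = s + s;              // handles the cyclic wrap-around
    long long ans = 0, last = -1;       // last = index of the nearest 'g' to the right
    for (int i = 2 * n - 1; i >= 0; --i) {
        if (t[i] == 'g') last = i;
        if (i < n && t[i] == c)
            ans = std::max(ans, last - i);
    }
    return ans;
}
```

For the example from the statement, `maxWait("rggry", 'r')` yields $3$.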
[ "binary search", "implementation", "two pointers" ]
1,000
null
1744
D
Divisibility by 2^n
You are given an array of positive integers $a_1, a_2, \ldots, a_n$. Make the product of all the numbers in the array (that is, $a_1 \cdot a_2 \cdot \ldots \cdot a_n$) divisible by $2^n$. You can perform the following operation as many times as you like: - select an arbitrary index $i$ ($1 \leq i \leq n$) and replace the value $a_i$ with $a_i=a_i \cdot i$. You cannot apply the operation repeatedly to a single index. In other words, all selected values of $i$ must be different. Find the smallest number of operations you need to perform to make the product of all the elements in the array divisible by $2^n$. Note that such a set of operations does not always exist.
Let's notice that if we multiply the numbers $a_1 \cdot a_2 \cdot \ldots \cdot a_n$, then the exponent of two in the product is equal to the sum of the exponents of two in each of the numbers. Let's calculate the initial exponent of two in the product. This can be done as follows: take the element $a_i$ and divide it by $2$ as long as possible, increasing our counter by the number of divisions. Now let's move on to operations and note that choosing the index $i$ increases the exponent of two by a fixed amount (that is, it does not matter when to apply this operation). Choosing an index $i$ increments the counter by the number $x$ such that $i$ is divisible by $2^x$ but not by $2^{x + 1}$; you can find this $x$, again, by dividing by $2$ while possible. Since we want to minimize the number of operations used, at each moment we should use the index that increases the counter by the largest amount. To do this, it is enough to sort the indices by this gain and take them greedily from the largest gain to the smallest. We get a solution in $O(n \log n + n \log A)$.
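The greedy can be sketched as follows (the function name `minOps` is ours; it returns the minimum number of operations, or $-1$ if the required set of operations does not exist):

```cpp
#include <bits/stdc++.h>

// Sketch of the greedy (name ours): count the power of two already present in
// the product, collect the gain v2(i) of each even index, and spend the
// largest gains first until the total exponent reaches n.
int minOps(int n, const std::vector<long long>& a) {
    long long have = 0;
    for (long long v : a)
        while (v % 2 == 0) { v /= 2; ++have; }    // exponent of two in the product
    std::vector<int> gains;
    for (int i = 2; i <= n; i += 2) {             // odd indices contribute nothing
        int x = 0, v = i;
        while (v % 2 == 0) { v /= 2; ++x; }       // x = v2(i)
        gains.push_back(x);
    }
    std::sort(gains.rbegin(), gains.rend());      // largest gain first
    int ops = 0;
    for (int g : gains) {
        if (have >= n) break;
        have += g;
        ++ops;
    }
    return have >= n ? ops : -1;
}
```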
[ "greedy", "math", "sortings" ]
1,200
null
1744
E2
Divisible Numbers (hard version)
\textbf{This is a hard version of the problem. The only difference between the easy and hard versions is the constraints on $a$, $b$, $c$ and $d$.} You are given $4$ positive integers $a$, $b$, $c$, $d$ with $a < c$ and $b < d$. Find any pair of numbers $x$ and $y$ that satisfies the following conditions: - $a < x \leq c$, $b < y \leq d$, - $x \cdot y$ is divisible by $a \cdot b$. Note that the required $x$ and $y$ may not exist.
Let's look at a slow solution first, which will lead us to the full one. Let's iterate over the number $x$ from $a + 1$ to $c$. Given a number $x$, we want to find a $y$ from $b + 1$ to $d$ such that $x \cdot y$ is divisible by $a \cdot b$. Since $x \cdot y$ must be divisible by $a \cdot b$, the following conclusion can be drawn: $y$ must be divisible by $\frac{a \cdot b}{\gcd(a \cdot b, x)}$. Let's denote this number by $s$. Now our task is to check whether there is a multiple of $s$ between $b + 1$ and $d$. This can be done in many ways; for example, consider the largest multiple of $s$ that does not exceed $d$ - it is equal to $\left \lfloor{\frac{d}{s}} \right \rfloor \cdot s$. If this number is at least $b + 1$, then we have found a suitable pair. Now let's note that we do not need to iterate over all values of $x$, because of the number $x$ we are only interested in $\gcd(x, a \cdot b)$ - and this is one of the divisors of $a \cdot b$! Even though the product $a \cdot b$ can be large, we can still consider all divisors of this number, since $a$ and $b$ themselves are up to $10^9$. Let's find the divisors of $a$ and $b$ separately, then notice that any divisor of $a \cdot b$ is $a' \cdot b'$, where $a'$ is some divisor of $a$ and $b'$ is some divisor of $b$. Let's calculate the running time. We need to find the divisors of $a$ and $b$, which can be done in $O(\sqrt a + \sqrt b)$. Next, we need to iterate over pairs of divisors of $a$ and $b$. Recall the estimate for the number of divisors of a number (https://oeis.org/A066150): a number up to $10^9$ has no more than $1344$ divisors. Therefore, we can go through all pairs in $O(1344^2)$. We learned how to find the optimal $y$ in $O(1)$. We get that each test case is solved in $O(\sqrt a + 1344^2)$.
[ "brute force", "math", "number theory" ]
1,900
null
1744
F
MEX vs MED
You are given a permutation $p_1, p_2, \ldots, p_n$ of length $n$ of numbers $0, \ldots, n - 1$. Count the number of subsegments $1 \leq l \leq r \leq n$ of this permutation such that $mex(p_l, p_{l+1}, \ldots, p_r) > med(p_l, p_{l+1}, \ldots, p_r)$. $mex$ of $S$ is the smallest non-negative integer that does not occur in $S$. For example: - $mex(\{0, 1, 2, 3\}) = 4$ - $mex(\{0, 4, 1, 3\}) = 2$ - $mex(\{5, 4, 0, 1, 2\}) = 3$ $med$ of the set $S$ is the median of the set, i.e. the element that, after sorting the elements in non-decreasing order, will be at position number $\left \lfloor{ \frac{|S| + 1}{2} } \right \rfloor$ (array elements are numbered starting from $1$ and here $\left \lfloor{v} \right \rfloor$ denotes rounding $v$ down). For example: - $med(\{0, 1, 2, 3\}) = 1$ - $med(\{0, 4, 1, 3\}) = 1$ - $med(\{5, 4, 0, 1, 2\}) = 2$ A sequence of $n$ numbers is called a permutation if it contains all the numbers from $0$ to $n - 1$ exactly once.
Let's learn how to count the number of subsegments where $\mathit{mex} > \mathit{med}$ for a fixed value of $\mathit{mex}$. Let's understand on which subsegments $\mathit{mex}$ has this value: the numbers $0, 1, \ldots, \mathit{mex} - 1$ must be in the subsegment, the number $\mathit{mex}$ must not, and the rest does not matter. Let $\ell$ be the leftmost of the positions of the numbers $0, 1, \ldots, \mathit{mex} - 1$, and $r$ the rightmost. Also, let $\mathit{pos}$ be the position of the number $\mathit{mex}$. If $\ell < \mathit{pos} < r$, then there is no subsegment with the given $\mathit{mex}$; otherwise, let's think about how many numbers can be in a subsegment. We claim that if the length of a segment is $\leq 2 \cdot \mathit{mex}$, then $\mathit{mex} > \mathit{med}$ on it, and otherwise this is not true. Indeed, if the length of a segment is $\leq 2 \cdot \mathit{mex}$, then simply by the definition of the median, the median is among the $\mathit{mex}$ smallest numbers. On the other hand, if the segment becomes longer, then by the same definition the median is no longer among the $\mathit{mex}$ smallest numbers, which are equal to $0, 1, \ldots, \mathit{mex} - 1$. It turns out that we need to count the number of subsegments of length $\leq 2 \cdot \mathit{mex}$ that contain the segment $\ell \ldots r$ but do not contain $\mathit{pos}$. This, of course, can be done with large formulas and the like, but one more useful fact can be noticed! If we iterate over $\mathit{mex}$ in ascending order, maintaining $\ell$ and $r$ along the way, then let's see what happens with a fixed $\mathit{mex}$ and a position $\mathit{pos}$ (for convenience, assume that $\mathit{pos} < \ell$, since the second case is similar).
In fact, you can simply iterate over the left border of the segment from $\mathit{pos} + 1$ to $\ell$, and then use a simpler formula to calculate the number of good subsegments with such fixed values. Why can this left boundary be iterated over for each $\mathit{mex}$ with $\mathit{pos} < \ell$ (correspondingly, if $\mathit{pos} > r$, we would iterate over the right boundary)? Let's remember that after this step $\ell$ becomes equal to $\mathit{pos}$. That is, every time we iterate over an element, we move the corresponding border past it, and each border can be moved only $O(n)$ times in total.
[ "math", "two pointers" ]
2,000
null
1746
A
Maxmina
You have an array $a$ of size $n$ consisting only of zeroes and ones and an integer $k$. In one operation you can do one of the following: - Select $2$ consecutive elements of $a$ and replace them with their minimum (that is, let $a := [a_{1}, a_{2}, \ldots, a_{i-1}, \min(a_{i}, a_{i+1}), a_{i+2}, \ldots, a_{n}]$ for some $1 \le i \le n-1$). This operation decreases the size of $a$ by $1$. - Select $k$ consecutive elements of $a$ and replace them with their maximum (that is, let $a := [a_{1}, a_{2}, \ldots, a_{i-1}, \max(a_{i}, a_{i+1}, \ldots, a_{i+k-1}), a_{i+k}, \ldots, a_{n}]$ for some $1 \le i \le n-k+1$). This operation decreases the size of $a$ by $k-1$. Determine if it's possible to turn $a$ into $[1]$ after several (possibly zero) operations.
step 1: It's obvious that the answer is "NO" if $a_{i} = 0$ for all $1 \le i \le n$. step 2: Let's prove that the answer is "YES" if $a_{i} = 1$ for at least one $1 \le i \le n$. step 3: If the size of $a$ is equal to $k$, just use the second type of operation once and we are done. step 4: Otherwise (if $|a| > k$), there are three cases (assume that $a_{j} = 1$): if $j > 2$, we can use the first type of operation on the first and second elements and decrease the size of $a$ by $1$; else if $j < |a|-1$, we can use the first type of operation on the last and second to last elements and decrease the size of $a$ by $1$; otherwise, it can be shown easily that $|a| = 3$ and $k = 2$, so we can use the second type of operation twice and turn $a$ into a single $1$. In the first and second cases, we can keep decreasing the size of $a$ until $|a| = k$ (or until we reach the third case), and then use the second type of operation to reach our goal. step 5: So we proved that the answer is "YES" iff $a_{i} = 1$ for at least one $1 \le i \le n$, or in other words, iff $\sum_{i = 1}^n a_{i} > 0$.
[ "constructive algorithms", "greedy" ]
800
// In the name of God
#include <iostream>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        int sum = 0;
        for (int i = 0; i < n; i++) {
            int a;
            cin >> a;
            sum += a;
        }
        if (sum > 0) cout << "YES" << endl;
        else cout << "NO" << endl;
    }
    return 0;
}
// Thank God
1746
B
Rebellion
You have an array $a$ of size $n$ consisting only of zeroes and ones. You can do the following operation: - choose two indices $1 \le i , j \le n$, $i \ne j$, - add $a_{i}$ to $a_{j}$, - remove $a_{i}$ from $a$. Note that elements of $a$ can become bigger than $1$ after performing some operations. Also note that $n$ becomes $1$ less after the operation. What is the minimum number of operations needed to make $a$ non-decreasing, i. e. that each element is not less than the previous element?
step 1: Assume that $a_{i} = 0$ for each $1 \le i \le n$. What should we do? Nothing! The array is already sorted and the answer is $0$. step 2: After sorting the array, consider the first non-zero element of $a$; how many elements after it are equal to zero? In other words, consider the smallest $i$ such that $a_{i} > 0$; how many indices $k$ exist such that $i < k$ and $a_{k} = 0$? Zero, because we call an array $a$ sorted (in non-decreasing order) if $a_{i} \le a_{j}$ holds for all $1 \le i < j \le n$. So all numbers after the leftmost non-zero element must be non-zero too. step 3: Assume that after sorting $a$, $a_{i}$ ($1 \le i \le n$) is the leftmost non-zero element. Define $G_{1}$ as the set of indices $j$ such that $1 \le j < i$ and $a_{j} > 0$ at the beginning, and define $G_{2}$ as the set of indices $j$ such that $i \le j \le n$ and $a_{j} = 0$ at the beginning. What is the answer? $\max(|G_{1}|, |G_{2}|)$. It's obvious that in one operation at most one element can be deleted from each group, so we must perform at least $\max(|G_{1}|, |G_{2}|)$ operations. Now we want to show that this is also sufficient. There are three cases: if $\min(|G_{1}|, |G_{2}|) \ge 1$, then all elements of $a$ are $0$ or $1$, so we can pick one element from $G_{1}$ and add it to an element from $G_{2}$, and the size of both groups decreases by $1$. It's easy to see that all elements of $a$ remain at most $1$ after this operation. If $|G_{1}| = 0$, it's easy to see that we can add $a_{k}$ ($k \in G_{2}$) to an arbitrary element, so $|G_{2}|$ decreases by $1$. If $|G_{2}| = 0$, it's easy to see that we can add $a_{k}$ ($k \in G_{1}$) to the last element of $a$, so $|G_{1}|$ decreases by $1$. step 4: Now how can we solve the problem using the previous steps? If all elements are equal to $0$, the answer is obviously zero. Otherwise we calculate two arrays: the number of ones in each prefix and the number of zeros in each suffix.
We then iterate over the leftmost non-zero element and calculate the answer for it easily using the algorithm from Step 3, in $O(n)$ total.
[ "constructive algorithms", "greedy", "two pointers" ]
800
/// In the name of God
#include <algorithm>
#include <iostream>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int A[n], cnt[2][n+1];
        cnt[0][0] = cnt[1][0] = 0;
        for (int i = 0; i < n; i++) {
            cin >> A[i];
            cnt[0][i+1] = cnt[0][i] + (A[i] == 0 ? 1 : 0);
            cnt[1][i+1] = cnt[1][i] + (A[i] == 1 ? 1 : 0);
        }
        int ans = n - 1;
        for (int last_zero = 0; last_zero <= n; last_zero++)
            ans = min(ans, max(cnt[1][last_zero], cnt[0][n] - cnt[0][last_zero]));
        cout << ans << endl;
    }
}
/// Thank God . . .
1746
C
Permutation Operations
You are given a permutation $a$ of size $n$ and you should perform $n$ operations on it. In the $i$-th operation, you can choose a non-empty suffix of $a$ and increase all of its elements by $i$. How can we perform the operations to minimize the number of inversions in the final array? Note that you can perform operations on the same suffix any number of times you want. A permutation of size $n$ is an array of size $n$ such that each integer from $1$ to $n$ occurs exactly once in this array. A suffix is several consecutive elements of an array that include the last element of the array. An inversion in an array $a$ is a pair of indices $(i, j)$ such that $i > j$ and $a_{i} < a_{j}$.
step 1: Try to prove that the answer is always zero; in other words, that we can always make the array $a$ non-decreasing. We will prove this fact in the next steps. step 2: If some array $a$ is non-decreasing, what can we say about the array $d=[a_{2}-a_{1}, a_{3}-a_{2}, \ldots, a_{n}-a_{n-1}]$? It's obvious that all elements of $d$ must be non-negative. step 3: If we perform the $i$-th operation on the suffix starting at index $j$, what happens to the array $d$? All of its elements remain the same except $d_{j-1}=a_{j}-a_{j-1}$, which increases by $i$. step 4: Considering the fact that the array $a$ consists only of positive integers, what can we say about $d_{i}=a_{i+1}-a_{i}$? Since $a_{i+1} \ge 0$, we can say that $d_{i} \ge -a_{i}$. step 5: Using steps 3 and 4 and knowing that $a$ is a permutation of the numbers $1$ to $n$, what can we do to make all elements of $d$ non-negative? For $i \le n-1$, we can perform the $a_{i}$-th operation on the suffix starting at index $i+1$. Then $d_{i}$ increases by $a_{i}$, and since we knew that $d_{i} \ge -a_{i}$, after performing this operation $d_{i} \ge 0$ holds. So after these steps the elements of $d$ are non-negative, and according to step 2 that's exactly what we want. It's also easy to see that it doesn't matter how we perform the $a_{n}$-th operation, so we can do anything we want with it.
[ "constructive algorithms", "greedy", "implementation", "math" ]
1,300
/// In the name of God
#include <bits/stdc++.h>
using namespace std;

inline void solve() {
    int n;
    cin >> n;
    int permutation[n], location[n];
    for (int i = 0; i < n; i++) {
        cin >> permutation[i];
        permutation[i]--;
        location[permutation[i]] = i;
    }
    for (int i = 0; i < n; i++) {
        if (location[i] == n-1) cout << rand() % n + 1 << (i == n-1 ? '\n' : ' ');
        else cout << location[i] + 2 << (i == n-1 ? '\n' : ' ');
    }
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
    return 0;
}
/// Thank God . . .
1746
D
Paths on the Tree
You are given a rooted tree consisting of $n$ vertices. The vertices are numbered from $1$ to $n$, and the root is the vertex $1$. You are also given a score array $s_1, s_2, \ldots, s_n$. A multiset of $k$ simple paths is called valid if the following two conditions are both true. - Each path starts from $1$. - Let $c_i$ be the number of paths covering vertex $i$. For each pair of vertices $(u,v)$ ($2\le u,v\le n$) that have the same parent, $|c_u-c_v|\le 1$ holds. The value of the path multiset is defined as $\sum\limits_{i=1}^n c_i s_i$. It can be shown that it is always possible to find at least one valid multiset. Find the maximum value among all valid multisets.
Let $f(u, cnt)$ denote the maximum score when $cnt$ paths pass through the subtree of node $u$. Let $num$ be the number of children of node $u$. The transition only involves the two states $\lceil cnt / num \rceil$ and $\lfloor cnt / num \rfloor$ for each child subtree. For each node $u$, $cnt$ can take only two adjacent integer values $(x, x + 1)$: it can be proved that the four numbers $\lfloor x/num \rfloor$, $\lceil x/num \rceil$, $\lfloor (x + 1)/num \rfloor$ and $\lceil (x + 1)/num \rceil$ contain at most two distinct values, and those are adjacent natural numbers. So the total number of states is $\mathcal{O}(n)$.
[ "dfs and similar", "dp", "greedy", "sortings", "trees" ]
1,900
/// In the name of God
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int N = 200000;
ll V[N], dp[N];
int dad[N];
vector<int> child[N];
vector<pair<int, ll>> answers[N];

inline ll DP(int v, ll k) {
    for (auto [kp, ans] : answers[v])
        if (k == kp) return ans;
    ll cnt_child = (ll)child[v].size();
    ll ans = k * V[v];
    if (cnt_child == 0) return ans;
    if (k % cnt_child == 0)
        for (auto u : child[v]) ans += DP(u, k / cnt_child);
    else {
        ll dp1[cnt_child], dp2[cnt_child], diff[cnt_child];
        for (int i = 0; i < cnt_child; i++)
            dp1[i] = DP(child[v][i], k / cnt_child),
            dp2[i] = DP(child[v][i], k / cnt_child + 1);
        for (int i = 0; i < cnt_child; i++) diff[i] = dp2[i] - dp1[i];
        sort(diff, diff + cnt_child, greater<ll>()); // compare as long long, not int
        for (int i = 0; i < cnt_child; i++) ans += dp1[i];
        for (int i = 0; i < k % cnt_child; i++) ans += diff[i];
    }
    answers[v].push_back({k, ans});
    return ans;
}

inline ll solve() {
    ll n, k;
    cin >> n >> k;
    for (int i = 0; i < n; i++) child[i].clear(), answers[i].clear();
    dad[0] = 0;
    for (int i = 0; i < n - 1; i++) {
        cin >> dad[i+1];
        dad[i+1]--;
        child[dad[i+1]].push_back(i+1);
    }
    for (int i = 0; i < n; i++) cin >> V[i];
    return DP(0, k);
}

int main() {
    int t;
    cin >> t;
    while (t--) cout << solve() << '\n';
    return 0;
}
/// Thank God . . .
1746
E1
Joking (Easy Version)
The only difference between this problem and the hard version is the maximum number of questions. This is an interactive problem. There is a hidden integer $1 \le x \le n$ which you have to find. In order to find it you can ask at most $\mathbf{82}$ questions. In each question you can choose a non-empty integer set $S$ and ask if $x$ belongs to $S$ or not, after each question, if $x$ belongs to $S$, you'll receive "YES", otherwise "NO". But the problem is that not all answers are necessarily true (some of them are joking), it's just guaranteed that for each two consecutive questions, at least one of them is answered correctly. Additionally to the questions, you can make at most $2$ guesses for the answer $x$. Each time you make a guess, if you guess $x$ correctly, you receive ":)" and your program should terminate, otherwise you'll receive ":(". As a part of the joking, we will \textbf{not} fix the value of $x$ in the beginning. Instead, it can change throughout the interaction as long as all the previous responses are valid as described above. Note that your answer guesses are always answered correctly. If you ask a question before and after a guess, at least one of these two questions is answered correctly, as normal.
step 1: How can we make sure that some statement is not a joke? If something is told to us twice in a row, we can be sure that it's true. For example, if we are told that $x \ne 3$ in two consecutive questions, we can be sure of it, because at least one of those questions is answered correctly, and that's enough. step 2: What can be found out if we ask about a set $S$? (Assume that we are answered correctly.) If the answer is "YES", then $x \in S$ and we can reduce our search domain to $S$. Otherwise, we can remove $S$ from the search domain. step 3: Using the previous steps, how can we really reduce our search domain? Step 2 can do it only if we have a correct statement about $x$, and using step 1 we can find something that is surely correct. Assume that $V$ is our current search domain. Split it into $4$ subsets $V_{1}, V_{2}, V_{3}$ and $V_{4}$, then ask about the following sets: $V_{1} \cup V_{2}$ and $V_{1} \cup V_{3}$. It's easy to show that no matter what the answers are, we can always remove at least one of the subsets from the search domain. For example, if both answers are "YES", we can remove $V_{4}$, and if both answers are "NO", then we can remove $V_{1}$. So after these questions we reduce the size of the search domain by at least $\min(|V_{1}|,|V_{2}|,|V_{3}|,|V_{4}|)$, and it's easy to see that we can choose the $V_{i}$'s in such a way that this value is at least $\frac{|V|}{4}$. Finally, as long as the size of the search domain is greater than $3$, we can reduce it using this algorithm. It can be shown easily that we can reduce our search domain to only $3$ numbers with at most $76$ questions. step 4: Now assume that we have only $3$ candidates for $x$; since we have only $2$ chances to guess, we must remove one of them. How can we do it? Note that since we've used $76$ questions in the previous step, we only have $6$ questions left. Assume that our three candidates are $a, b$ and $c$.
It can be shown that by asking the following sets, in this order, we can always remove at least one of them from the search domain: $\{a\}$, $\{b\}$, $\{b\}$, $\{a\}$. After that, when we have only two candidates for $x$, we can guess them one by one and we are done.
[ "binary search", "constructive algorithms", "interactive", "ternary search" ]
2,500
/// In the name of God
#pragma GCC optimize("Ofast","unroll-loops","O3")
#include <bits/stdc++.h>
using namespace std;

inline bool get_answer() { string s; cin >> s; return s == "YES"; }
inline bool f(int i) { return i & 1; }
inline bool g(int i) { return i & 2; }

void solve(const vector<int> &Valid) {
    if (Valid.size() < 3u) {
        cout << "! " << Valid[0] << endl;
        string s;
        cin >> s;
        if (s == ":)") return;
        cout << "! " << Valid[1] << endl;
        return;
    }
    else if (Valid.size() == 3u) {
        bool is[4];
        cout << "? 1 " << Valid[0] << endl; is[0] = get_answer();
        cout << "? 1 " << Valid[1] << endl; is[1] = get_answer();
        cout << "? 1 " << Valid[1] << endl; is[2] = get_answer();
        cout << "? 1 " << Valid[0] << endl; is[3] = get_answer();
        if (is[1] and is[2]) return solve({Valid[1]});
        else if (!is[1] and !is[2]) return solve({Valid[0], Valid[2]});
        else if ((is[0] and is[1]) or (is[2] and is[3])) return solve({Valid[0], Valid[1]});
        else if ((is[0] and !is[1]) or (!is[2] and is[3])) return solve({Valid[0], Valid[2]});
        else return solve({Valid[1], Valid[2]});
    }
    else {
        vector<int> query[2];
        for (int i = 0; i < (int)Valid.size(); i++) {
            if (f(i)) query[0].push_back(Valid[i]);
            if (g(i)) query[1].push_back(Valid[i]);
        }
        bool is[2];
        for (int i = 0; i < 2; i++) {
            cout << "? " << query[i].size();
            for (auto u : query[i]) cout << ' ' << u;
            cout << endl;
            is[i] = get_answer();
        }
        vector<int> NewValid;
        for (int i = 0; i < (int)Valid.size(); i++) {
            if ((!f(i) ^ is[0]) or (!g(i) ^ is[1])) NewValid.push_back(Valid[i]);
        }
        return solve(NewValid);
    }
}

int main() {
    int n = 0;
    cin >> n;
    vector<int> all(n);
    for (int i = 0; i < n; i++) all[i] = i + 1;
    solve(all);
    return 0;
}
/// Thank God . . .
1746
E2
Joking (Hard Version)
The only difference between this problem and the hard version is the maximum number of questions. This is an interactive problem. There is a hidden integer $1 \le x \le n$ which you have to find. In order to find it you can ask at most $\mathbf{53}$ questions. In each question you can choose a non-empty integer set $S$ and ask if $x$ belongs to $S$ or not, after each question, if $x$ belongs to $S$, you'll receive "YES", otherwise "NO". But the problem is that not all answers are necessarily true (some of them are joking), it's just guaranteed that for each two consecutive questions, at least one of them is answered correctly. Additionally to the questions, you can make at most $2$ guesses for the answer $x$. Each time you make a guess, if you guess $x$ correctly, you receive ":)" and your program should terminate, otherwise you'll receive ":(". As a part of the joking, we will \textbf{not} fix the value of $x$ in the beginning. Instead, it can change throughout the interaction as long as all the previous responses are valid as described above. Note that your answer guesses are always answered correctly. If you ask a question before and after a guess, at least one of these two questions is answered correctly, as normal.
step 1: First note that in any question we ask, some set is effectively claimed to contain $x$: if the answer is "YES", it is the set we asked about, and otherwise its complement. Assume that our current search domain is some set $V$. In addition, assume that $V = A \cup B$, where $A$ is the set that was claimed to contain $x$ in the very last step and $B$ is the set of the other candidates for $x$ (candidates that don't belong to $A$). Now assume that in the next question a set $C$ is claimed to contain $x$, where $C = a \cup b$ ($a \subseteq A$, $b \subseteq B$). What can we do? How can we reduce our search domain after this question? Considering the solution of E1, it's not so hard to see that we can remove $B - b$ from our search domain. So our new search domain will be $V' = A' \cup B'$, where $A' = C = a \cup b$ is the set of candidates claimed in the last question and $B' = A - a$ is the set of the other candidates. step 2: Let's calculate the minimum number of questions needed for a search domain of size $n$ using the idea of the previous step. We can do it with a simple dp: let $dp[A][B]$ be the minimum number of questions needed to find $x$ in a search domain of size $A+B$ in which $A$ numbers were claimed in the last question and $B$ numbers were not. To calculate our dp, if we ask about some set consisting of $a$ numbers from the first set and $b$ numbers from the second set, we should update $dp[A][B]$ from $dp[A-a][a+b]$. So we can say that: $dp[A][B] = \min_{a \le A , b \le B} dp[A-a][a+b]$. Using this dp and keeping track of its updates, we can simply solve the task. But there is a problem: we don't have enough time to calculate the whole dp. So what can we do? step 3: Since calculating $dp[A][B]$ for $A \le n$, $B \le m$ takes $O(n^2 \cdot m^2)$, we can only calculate it for small $n$ and $m$, say $n, m \le 100$. What can we do for the rest? Maybe some heuristic method?
It's not so hard to see that for most $A$ and $B$ (especially for large values) it's best to update $dp[A][B]$ from something around $dp[A-A/2][A/2+B/2]$ (in other words, asking about $A/2$ numbers from the first set and $B/2$ numbers from the second one). Using such a method, we can calculate the dp in a reasonable time and with very good accuracy, and this accuracy is enough to find the answer in no more than $53$ questions.
[ "dp", "interactive" ]
3,200
/// In the name of God
#pragma GCC optimize("Ofast","unroll-loops","O3")
#include <bits/stdc++.h>
using namespace std;

const int SUM = 50;
int dp[SUM][SUM];
pair<int,int> updater[SUM][SUM];
map<pair<int,int>, int> Dp;
map<pair<int,int>, pair<int,int>> Updater;

inline void preprocess() {
    for (int i = 0; i < SUM; i++) {
        for (int j = 0; j < SUM; j++) {
            dp[i][j] = SUM;
            updater[i][j] = {i, j};
        }
    }
    dp[0][0] = dp[0][1] = dp[1][0] = dp[2][0] = dp[1][1] = dp[0][2] = 0;
    for (int sum = 0; sum < SUM; sum++)
        for (int last = sum; last >= 0; last--) {
            int now = sum - last;
            for (int SelectLast = 0; SelectLast <= last; SelectLast++)
                for (int SelectNow = 0; SelectNow <= now; SelectNow++) {
                    if (dp[last][now] > 1 + max(dp[now-SelectNow][SelectNow+SelectLast],
                                                dp[SelectNow][sum-SelectNow-SelectLast])) {
                        dp[last][now] = 1 + max(dp[now-SelectNow][SelectNow+SelectLast],
                                                dp[SelectNow][now+last-SelectNow-SelectLast]);
                        updater[last][now] = {SelectLast, SelectNow};
                    }
                }
        }
}

inline int DP(const int last, const int now) {
    if (last < 0 || now < 0) return SUM;
    if (last + now < SUM) return dp[last][now];
    if (Dp.find({last, now}) != Dp.end()) return Dp[{last, now}];
    if ((last&1) && (now&1)) {
        Dp[{last, now}] = 1 + DP((now+1)/2, (last+now)/2);
        Updater[{last, now}] = {(last+1)/2, now/2};
    }
    else {
        Dp[{last, now}] = 1 + DP((now+1)/2, (last+1)/2+(now+1)/2);
        Updater[{last, now}] = {(last+1)/2, (now+1)/2};
    }
    return Dp[{last, now}];
}

inline bool IsIn(const int x, const vector<int> &Sorted) {
    auto u = lower_bound(Sorted.begin(), Sorted.end(), x);
    if (u == Sorted.end() or *u != x) return false;
    return true;
}

vector<int> solve(const vector<int> &LastAns, const vector<int> &Valid) {
    if ((int)Valid.size() < 3) {
        return Valid;
    }
    pair<int,int> Select;
    if ((int)Valid.size() < SUM) {
        Select = updater[LastAns.size()][Valid.size()-LastAns.size()];
    }
    else {
        DP((int)LastAns.size(), (int)(Valid.size()-LastAns.size()));
        Select = Updater[{LastAns.size(), Valid.size()-LastAns.size()}];
    }
    vector<int> query;
    int p = 0;
    while (Select.first--) query.push_back(LastAns[p++]);
    p = 0;
    vector<int> LastAnsSorted = LastAns;
    sort(LastAnsSorted.begin(), LastAnsSorted.end());
    while (Select.second--) {
        while (IsIn(Valid[p], LastAnsSorted)) p++;
        query.push_back(Valid[p++]);
    }
    cout << "? " << query.size();
    for (auto u : query) cout << ' ' << u;
    cout << endl;
    string s;
    cin >> s;
    bool correct = (s == "YES");
    sort(query.begin(), query.end());
    vector<int> NewLast, NewValid;
    for (auto u : Valid) {
        if (!IsIn(u, LastAnsSorted) and (correct ^ IsIn(u, query))) {
            NewLast.push_back(u);
        }
        if (!IsIn(u, LastAnsSorted) or !(correct ^ IsIn(u, query))) {
            NewValid.push_back(u);
        }
    }
    vector<int> ans = solve(NewLast, NewValid);
    return ans;
}

int main() {
    preprocess();
    int n;
    cin >> n;
    vector<int> all(n);
    for (int i = 0; i < n; i++) all[i] = i + 1;
    vector<int> valid = solve({}, all);
    for (auto guess : valid) {
        cout << "! " << guess << endl;
        string s;
        cin >> s;
        if (s == ":)") return 0;
    }
    return 0;
}
/// Thank God . . .
1746
F
Kazaee
You have an array $a$ consisting of $n$ positive integers and you have to handle $q$ queries of the following types: - $1$ $i$ $x$: change $a_{i}$ to $x$, - $2$ $l$ $r$ $k$: check if the number of occurrences of every positive integer in the subarray $a_{l}, a_{l+1}, \ldots a_{r}$ is a multiple of $k$ (check the example for better understanding).
step 1: First of all, we can compress the $a_{i}$'s and $x$'s (from queries of the first type), so we can assume that all numbers are less than $n+q \le 6 \cdot 10^5$. step 2: Let's first solve the problem under smaller constraints. We can use data structures like a Fenwick tree or a segment tree to count the number of occurrences of each number and check whether it's divisible by $k$ to answer a query of the second type. And we can simply update our data structure after each query of the first type. So if we use something like a Fenwick tree, we can handle everything in $O((n+q) \cdot n + (n+q) \cdot q \cdot \log_{2}(n))$. step 3: Obviously we cannot use the previous step's solution under the original constraints. In other words, we don't have enough time to check all numbers one by one. How can we do better? Checking them together?! It's obviously not a good idea to check all numbers together (checking that the sum of their occurrences is divisible by $k$). So what can we do? What if we check a random subset of numbers? step 4: Assume that $S$ is a random subset of all numbers. It's obvious that if the answer to the query is "YES" (the number of occurrences of every number is divisible by $k$), the condition holds for $S$ too (the sum of the numbers of occurrences of the numbers belonging to $S$ is divisible by $k$). What about the opposite? What is the probability that the answer to the query is "NO" and yet the sum of occurrences of numbers belonging to $S$ is divisible by $k$? It's not so hard to prove that this probability is equal to $\frac{1}{2}$ for $k = 2$ and less than $\frac{1}{2}$ for $k > 2$. So in general, the probability that a random subset $S$ leads to a wrong answer is at most $\frac{1}{2}$. So if we use something like $30$ random subsets, the probability of error will be less than $\frac{1}{2^{30}}$, which is negligible under the problem constraints, and we can use this randomized method to solve the task.
[ "data structures", "hashing", "probabilities" ]
2,800
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

mt19937 rnd(time(0));
const int N = 300'000 + 5;
const int Q = 300'000 + 5;
const int T = 50;
bitset<N+Q> RandomSet[T];
unordered_map<int, int> id;
int cnt_id = 0;
int n, q, A[N];

struct fenwick {
    int PartialSum[N];
    fenwick() {
        for (int i = 0; i < N; i++) PartialSum[i] = 0;
    }
    inline void add(int index, bool increase) {
        while (index < N) {
            PartialSum[index] += (increase ? 1 : -1);
            index += index & -index;
        }
    }
    inline int get(int index) {
        int sum = 0;
        while (index) {
            sum += PartialSum[index];
            index -= index & -index;
        }
        return sum;
    }
} Fen[T];

inline int GetId(const int x) {
    auto id_iterator = id.find(x);
    if (id_iterator == id.end()) {
        return id[x] = cnt_id++;
    }
    else return (*id_iterator).second;
}

inline void ChooseRandomSets() {
    for (int i = 0; i < T; i++) {
        for (int j = 0; j < N+Q; j++) {
            if (rnd() & 1) RandomSet[i].set(j);
        }
    }
}

inline void AddArrayToFenwick() {
    for (int i = 0; i < n; i++) {
        int MyId = GetId(A[i]);
        for (int j = 0; j < T; j++) {
            if (RandomSet[j][MyId]) Fen[j].add(i+1, true);
        }
    }
}

inline void Query() {
    int index, l, r, k, x, type;
    for (int i = 0; i < q; i++) {
        cin >> type;
        if (type == 1) {
            cin >> index >> x;
            index--;
            int IdPre = GetId(A[index]);
            int IdNew = GetId(x);
            A[index] = x;
            for (int j = 0; j < T; j++) {
                if (RandomSet[j][IdPre]) Fen[j].add(index+1, false);
                if (RandomSet[j][IdNew]) Fen[j].add(index+1, true);
            }
        }
        if (type == 2) {
            cin >> l >> r >> k;
            l--;
            if (k == 1) { cout << "YES\n"; continue; }
            else if ((r-l) % k != 0) { cout << "NO\n"; continue; }
            bool answer = true;
            for (int j = 0; j < T; j++) {
                if ((Fen[j].get(r) - Fen[j].get(l)) % k != 0) { answer = false; break; }
            }
            cout << (answer ? "YES" : "NO") << '\n';
        }
    }
}

int main() {
    ios::sync_with_stdio(false), cin.tie(0);
    ChooseRandomSets();
    cin >> n >> q;
    for (int i = 0; i < n; i++) cin >> A[i];
    AddArrayToFenwick();
    Query();
    return 0;
}
1746
G
Olympiad Training
Anton decided to get ready for an Olympiad in Informatics. Ilya prepared $n$ tasks for him to solve. It is possible to submit the solution for the $i$-th task in the first $d_{i}$ days only. Anton \textbf{cannot} solve more than one task a day. Ilya estimated the usefulness of the $i$-th task as $r_{i}$ and divided the tasks into three topics; the topic of the $i$-th task is $type_{i}$. Anton wants to solve exactly $a$ tasks on the first topic, $b$ tasks on the second topic and $c$ tasks on the third topic. Tell Anton if it is possible to do so, and if it is, calculate the maximum total usefulness of the tasks he may solve.
Consider all subsets of the original set of tasks that can be performed in some order satisfying the constraints on their completion and whose size does not exceed $a+b+c$. It is easy to show that this forms a weighted matroid. This means that the optimal set of tasks can be found using a greedy algorithm. So, we can find a set of tasks with maximum total usefulness which: satisfies all deadline constraints; has no more than $a+b+c$ elements. For the given parameters $(x, y, z)$, we introduce the following transformation: increase the usefulness of all tasks of the first type by $x$, of the second type by $y$, and of the third type by $z$. Let's assume that the optimal solution after the transformation is described by the triple $[a',b',c']$, the number of problems of each type, respectively. We need to get a solution described by the triple $[a,b,c]$. For $[a,b,c] \neq [a',b',c']$, $a+b+c=a'+b'+c'$ holds. Then (without loss of generality, since there are at least two strict inequalities among $a\vee a', b\vee b', c\vee c'$) we assume that $a'>a, b'<b$. Consider the following coordinate system on a two-dimensional plane (modulo the addition of $\overrightarrow{(1;1;1)}$): for the coordinates of the point, we take the value $\vec{x}\cdot x+\vec{y}\cdot y + \vec{z}\cdot z$, where $\vec{x}+\vec{y}+\vec{z} =\vec{0}$. It is easy to see that the coordinates of the point $(x; y)$ on the plane are uniquely determined by the triple $(x; y; z)$ up to the addition of $\overrightarrow{(1;1;1)} \times k$, $k \in\mathbb{Z}$. Let's assume that there are parameters $(x,y,z)$ for which the optimal solution is unique and equal to $[a,b,c]$ (see "But what if..?" below for what needs to be done to the input to make this hold). Consider the solution $[a',b',c']$ for the "center" of the current polygon (initially it is a hexagon with infinitely distant vertices). The "center" is the center of the triangle formed by the middle lines of our polygon.
On this plane, decreasing $z$ or $y$ does not increase $b'+c'$ and therefore does not decrease $a'$. Thus, the red area in the picture below does not suit us. Increasing $x$ or $z$ does not decrease $a'+c'$ and therefore does not increase $b'$. Because of this, the blue area in the picture below does not suit us either. So, we need to cut off one half-plane along the $z$ axis. It can be shown that the area of the polygon under consideration decreases by a constant factor every 2-3 iterations. But what if the number of tasks of the desired color jumps from $a-1$ to $a+1$ at once? This is possible if and only if the addition of some $x$ creates several pairs of elements of equal weight at the same time. To get rid of this, we can simply add a random number from $0$ to $\frac{1}{3}$ to each weight. The fact that such a change fixes the problem described above and does not change the optimal answer to the original problem is left to the reader as an exercise (the authors have a proof). After $O(\log C)$ iterations, our algorithm converges to a point (a triple of real numbers). It remains only to check that the solution found suits us. The total complexity is $O(n\log n\log C)$.
[ "binary search", "data structures", "dp", "flows", "geometry", "implementation", "sortings" ]
3,500
#include <bits/stdc++.h> using namespace std; #ifdef SG #include <debug.h> #else template<typename T> struct outputer; struct outputable {}; #define PRINT(...) #define OUTPUT(...) #define show(...) #define debug(...) #define deepen(...) #define timer(...) #define fbegin(...) #define fend #define pbegin(...) #define pend #endif #define ARG4(_1,_2,_3,_4,...) _4 #define forn3(i,l,r) for (int i = int(l); i < int(r); ++i) #define forn2(i,n) forn3 (i, 0, n) #define forn(...) ARG4(__VA_ARGS__, forn3, forn2) (__VA_ARGS__) #define ford3(i,l,r) for (int i = int(r) - 1; i >= int(l); --i) #define ford2(i,n) ford3 (i, 0, n) #define ford(...) ARG4(__VA_ARGS__, ford3, ford2) (__VA_ARGS__) #define ve vector #define pa pair #define tu tuple #define mp make_pair #define mt make_tuple #define pb emplace_back #define fs first #define sc second #define all(a) (a).begin(), (a).end() #define sz(a) ((int)(a).size()) typedef long double ld; typedef signed __int128 lll; typedef unsigned __int128 ulll; typedef int64_t ll; typedef uint64_t ull; typedef uint32_t ui; typedef uint16_t us; typedef uint8_t uc; typedef pa<int, int> pii; typedef pa<int, ll> pil; typedef pa<ll, int> pli; typedef pa<ll, ll> pll; typedef ve<int> vi; template<typename T> inline auto sqr (T x) -> decltype(x * x) {return x * x;} template<typename T1, typename T2> inline bool umx (T1& a, T2 b) {if (a < b) {a = b; return 1;} return 0;} template<typename T1, typename T2> inline bool umn (T1& a, T2 b) {if (b < a) {a = b; return 1;} return 0;} const int N = 100000; const int X = 1000000000; const int T = 3; struct Input { int n; std::array<int, T> k; int a[N], t[N], d[N]; bool read() { if (!(cin >> n)) { return 0; } forn (i, T) { cin >> k[i]; } forn (i, n) { cin >> a[i] >> t[i] >> d[i]; --t[i]; --d[i]; } return 1; } void init(const Input& input) { *this = input; } }; struct Data: Input { ll ans; void write() { cout << ans << endl; } }; namespace Main { const lll P = 2 * N + 1; const lll Q = 2 * N + 2; const lll R = 2 * N + 3; 
const lll A[3] = {3 * P * Q, 3 * P * R, 3 * Q * R}; const lll M = 3 * P * Q * R * N; const lll inf = (X + 1) * M; struct SNM { int rnd = 0; int n; int pr[N]; void init(int cnt) { n = cnt; forn (i, n) { pr[i] = i; } } int get_p(int v) { if (v < 0 || pr[v] == v) { return v; } return pr[v] = get_p(pr[v]); } bool add(int v) { v = get_p(v); if (v == -1) { return 0; } pr[v] = v - 1; return 1; } }; struct Solution: Data { SNM snm; int m; lll b[N], c[N]; vi ord[T]; array<int, T> check(const array<lll, T>& adds) { forn (i, n) { c[i] = b[i] + adds[t[i]]; } auto cmp = [&](int lhs, int rhs) { return c[lhs] > c[rhs]; }; static vi q; q.clear(); forn (i, T) { static vi w; w.resize(sz(q) + sz(ord[i])); merge(all(q), all(ord[i]), w.begin(), cmp); q.swap(w); } snm.init(n); array<int, T> cnt{}; int tot = 0; ll val = 0; for (int i : q) { if (tot >= m) { break; } if (snm.add(d[i])) { cnt[t[i]]++; tot++; val += a[i]; } } if (cnt == k) { ans = val; } return cnt; } // x[i] == adds[(i + 1) % T] - adds[i] array<lll, T> get_middle_point(const array<lll, T>& lb, const array<lll, T>& rb) { array<lll, T> x; forn (i, T) { x[i] = lb[i] + rb[i]; } lll sum = accumulate(all(x), lll(0)); forn (i, T) { x[i] = T * x[i] - sum; if (x[i] >= 0) { x[i] /= (2 * T); } else { x[i] = (x[i] - 2 * T + 1) / (2 * T); } } sum = accumulate(all(x), lll(0)); forn (i, T) { if (sum < 0 && x[i] < rb[i]) { x[i]++; sum++; } assert(x[i] >= lb[i]); assert(x[i] <= rb[i]); } assert(sum == 0); return x; } bool search() { array<lll, T> lb, rb; forn (i, T) { lb[i] = -2 * inf; rb[i] = 2 * inf; } while (true) { { lll sum_l = accumulate(all(lb), lll(0)); lll sum_r = accumulate(all(rb), lll(0)); forn (i, T) { lll sol = sum_l - lb[i]; lll sor = sum_r - rb[i]; umx(lb[i], -sor); umn(rb[i], -sol); if (lb[i] > rb[i]) { return 0; } } } array<lll, T> x = get_middle_point(lb, rb); array<lll, T> adds{}; forn (i, T - 1) { adds[i + 1] = adds[i] + x[i]; } array<int, T> cnt = check(adds); assert(accumulate(all(cnt), 0) == m); if (cnt == k) { 
return 1; } forn (i, T) { lll d1 = cnt[i] - k[i]; lll d2 = cnt[(i + 1) % T] - k[(i + 1) % T]; if (d1 > 0 && d2 < 0) { lb[i] = x[i] + 1; } if (d1 < 0 && d2 > 0) { rb[i] = x[i] - 1; } } } } void solve() { forn (i, n) { b[i] = a[i] * M + i * A[t[i]]; } forn (i, n) { ord[t[i]].pb(i); } forn (i, T) { sort(all(ord[i]), [&](int lhs, int rhs) { return b[lhs] > b[rhs]; }); } m = accumulate(all(k), 0); { array<int, T> r = check({}); if (accumulate(all(r), 0) != m) { ans = -1; return; } } forn (i, T) { array<lll, T> adds{}; adds[i] = inf; if (check(adds)[i] < k[i]) { ans = -1; return; } adds[i] = -inf; if (check(adds)[i] > k[i]) { ans = -1; return; } } assert(search()); } void clear() { forn (i, T) { ord[i].clear(); } } }; } Main::Solution sol; int main() { #ifdef SG freopen((problemname + ".in").c_str(), "r", stdin); // freopen((problemname + ".out").c_str(), "w", stdout); #endif int t; cin >> t; forn (i, t) { sol.read(); sol.solve(); sol.write(); sol.clear(); } return 0; }
1747
A
Two Groups
You are given an array $a$ consisting of $n$ integers. You want to distribute these $n$ integers into two groups $s_1$ and $s_2$ (groups can be empty) so that the following conditions are satisfied: - For each $i$ $(1 \leq i \leq n)$, $a_i$ goes into exactly one group. - The value $|sum(s_1)| - |sum(s_2)|$ is the maximum possible among all such ways to distribute the integers. Here $sum(s_1)$ denotes the sum of the numbers in the group $s_1$, and $sum(s_2)$ denotes the sum of the numbers in the group $s_2$. Determine the maximum possible value of $|sum(s_1)| - |sum(s_2)|$.
How about putting all positive numbers in one group and all negative numbers in the second group? Let $S$ denote the sum of the elements of the array $a$. Claim: the answer is $|S|$. Proof: let the sum of all positive elements be $S_{pos}$ and the sum of all negative elements be $S_{neg}$. Put all positive numbers in the first group and all negative numbers in the second group. We get $||S_{pos}| - |S_{neg}|| = |S|$. Let's prove that we cannot do better than that. Let $S_1$ denote the sum of the elements of the first group and $S_2$ the sum of the elements of the second group. By the triangle inequality, $|S_1| \leq |S_1 + S_2| + |S_2|$, so $|S_1| - |S_2| \leq |S_1 + S_2| = |S|$. Hence $|S|$ is an upper bound for the answer.
[ "constructive algorithms", "greedy" ]
800
// Jai Shree Ram #include<bits/stdc++.h> using namespace std; #define rep(i,a,n) for(int i=a;i<n;i++) #define ll long long #define int long long #define pb push_back #define all(v) v.begin(),v.end() #define endl "\n" #define x first #define y second #define gcd(a,b) __gcd(a,b) #define mem1(a) memset(a,-1,sizeof(a)) #define mem0(a) memset(a,0,sizeof(a)) #define sz(a) (int)a.size() #define pii pair<int,int> #define hell 1000000007 #define elasped_time 1.0 * clock() / CLOCKS_PER_SEC template<typename T1,typename T2>istream& operator>>(istream& in,pair<T1,T2> &a){in>>a.x>>a.y;return in;} template<typename T1,typename T2>ostream& operator<<(ostream& out,pair<T1,T2> a){out<<a.x<<" "<<a.y;return out;} template<typename T,typename T1>T maxs(T &a,T1 b){if(b>a)a=b;return a;} template<typename T,typename T1>T mins(T &a,T1 b){if(b<a)a=b;return a;} int solve(){ int n; cin >> n; int s = 0; for(int i = 0; i < n; i++){ int x; cin >> x; s += x; } cout << abs(s) << endl; return 0; } signed main(){ ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); #ifdef SIEVE sieve(); #endif #ifdef NCR init(); #endif int t=1;cin>>t; while(t--){ solve(); } return 0; }
1747
B
BAN BAN
You are given an integer $n$. Let's define $s(n)$ as the string "BAN" concatenated $n$ times. For example, $s(1)$ = "BAN", $s(3)$ = "BANBANBAN". Note that the length of the string $s(n)$ is equal to $3n$. Consider $s(n)$. You can perform the following operation on $s(n)$ any number of times (possibly zero): - Select any two distinct indices $i$ and $j$ $(1 \leq i, j \leq 3n, i \ne j)$. - Then, swap $s(n)_i$ and $s(n)_j$. You want the string "BAN" to \textbf{not appear} in $s(n)$ as a \textbf{subsequence}. What's the smallest number of operations you have to do to achieve this? Also, find one such shortest sequence of operations. A string $a$ is a subsequence of a string $b$ if $a$ can be obtained from $b$ by deletion of several (possibly, zero or all) characters.
Instead of subsequences, first solve for substrings: there should not be any substring $\texttt{BAN}$ after performing the operations. In one operation you can destroy at most $2$ such substrings, so the minimum number of operations to destroy $n$ substrings is $\left \lceil\frac{n}{2}\right \rceil$. Congrats, you have solved for subsequences too! Having no subsequence $\texttt{BAN}$ in particular means having no substring $\texttt{BAN}$. Let the minimum number of operations to have no substring $\texttt{BAN}$ be $x$; it is also a lower bound for having no subsequence $\texttt{BAN}$. Claim: $x = \left \lceil\frac{n}{2}\right \rceil$. Proof: swap the $i$-th $\texttt{B}$ from the start with the $i$-th $\texttt{N}$ from the end for $1 \leq i \leq \left \lceil\frac{n}{2}\right \rceil$. We can see that no substring $\texttt{BAN}$ exists after performing these $\left \lceil\frac{n}{2}\right \rceil$ operations. Since we can destroy at most $2$ substrings in one operation, $\left \lceil\frac{n}{2}\right \rceil$ is the minimum possible. Now if you look closely, after performing the above operations there does not exist any subsequence of the string $\texttt{BAN}$ in the original string either. Hence $\left \lceil\frac{n}{2}\right \rceil$ is also the answer to the original problem.
[ "constructive algorithms" ]
900
// Jai Shree Ram #include<bits/stdc++.h> using namespace std; #define rep(i,a,n) for(int i=a;i<n;i++) #define ll long long #define int long long #define pb push_back #define all(v) v.begin(),v.end() #define endl "\n" #define x first #define y second #define gcd(a,b) __gcd(a,b) #define mem1(a) memset(a,-1,sizeof(a)) #define mem0(a) memset(a,0,sizeof(a)) #define sz(a) (int)a.size() #define pii pair<int,int> #define hell 1000000007 #define elasped_time 1.0 * clock() / CLOCKS_PER_SEC template<typename T1,typename T2>istream& operator>>(istream& in,pair<T1,T2> &a){in>>a.x>>a.y;return in;} template<typename T1,typename T2>ostream& operator<<(ostream& out,pair<T1,T2> a){out<<a.x<<" "<<a.y;return out;} template<typename T,typename T1>T maxs(T &a,T1 b){if(b>a)a=b;return a;} template<typename T,typename T1>T mins(T &a,T1 b){if(b<a)a=b;return a;} int solve(){ int n; cin >> n; int k = (n + 1)/2; cout << k << endl; // swap the i-th 'B' from the start with the i-th 'N' from the end for(int i = 1; i <= k; i++){ cout << 3*i - 2 << " " << 3*(n - i + 1) << endl; } return 0; } signed main(){ ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); #ifdef SIEVE sieve(); #endif #ifdef NCR init(); #endif int t=1;cin>>t; while(t--){ solve(); } return 0; }
1747
C
Swap Game
Alice and Bob are playing a game on an array $a$ of $n$ positive integers. Alice and Bob make alternating moves with Alice going first. In his/her turn, the player makes the following move: - If $a_1 = 0$, the player loses the game, otherwise: - Player chooses some $i$ with $2\le i \le n$. Then player decreases the value of $a_1$ by $1$ and swaps $a_1$ with $a_i$. Determine the winner of the game if both players play optimally.
Divide the problem into two cases: $a_1 \gt \min(a)$ and $a_1 = \min(a)$. Case 1: $a_1 \gt \min(a)$. $\texttt{Alice}$ can force $\texttt{Bob}$ to always decrease the minimum element by always choosing the minimum element of $a$ in her turn, whereas $\texttt{Bob}$ cannot do much: all other elements he could swap with are greater than or equal to $\min(a)$. Even if there exist multiple minimums in $a$, in her first move $\texttt{Alice}$ decreases $a_1$, hence in this case $\texttt{Alice}$ always wins. Case 2: $a_1 = \min(a)$. In this case the optimal strategy for $\texttt{Bob}$ is to always choose the minimum element of the array, which is $a_1$. $\texttt{Alice}$ is then always swapping an element greater than $a_1$ in her turn, hence in this case $\texttt{Bob}$ always wins.
[ "games" ]
1,200
// Jai Shree Ram #include<bits/stdc++.h> using namespace std; #define rep(i,a,n) for(int i=a;i<n;i++) #define ll long long #define int long long #define pb push_back #define all(v) v.begin(),v.end() #define endl " " #define x first #define y second #define gcd(a,b) __gcd(a,b) #define mem1(a) memset(a,-1,sizeof(a)) #define mem0(a) memset(a,0,sizeof(a)) #define sz(a) (int)a.size() #define pii pair<int,int> #define hell 1000000007 #define elasped_time 1.0 * clock() / CLOCKS_PER_SEC template<typename T1,typename T2>istream& operator>>(istream& in,pair<T1,T2> &a){in>>a.x>>a.y;return in;} template<typename T1,typename T2>ostream& operator<<(ostream& out,pair<T1,T2> a){out<<a.x<<" "<<a.y;return out;} template<typename T,typename T1>T maxs(T &a,T1 b){if(b>a)a=b;return a;} template<typename T,typename T1>T mins(T &a,T1 b){if(b<a)a=b;return a;} int solve(){ int n; cin >> n; vector<int> a(n); for(auto &i:a)cin >> i; sort(a.begin() + 1,a.end()); cout << (a[0] > a[1] ? "Alice" : "Bob") << endl; return 0; } signed main(){ ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); #ifdef SIEVE sieve(); #endif #ifdef NCR init(); #endif int t=1;cin>>t; while(t--){ solve(); } return 0; }
1747
D
Yet Another Problem
You are given an array $a$ of $n$ integers $a_1, a_2, a_3, \ldots, a_n$. You have to answer $q$ independent queries, each consisting of two integers $l$ and $r$. - Consider the subarray $a[l:r]$ $=$ $[a_l, a_{l+1}, \ldots, a_r]$. You can apply the following operation to the subarray any number of times (possibly zero)- - Choose two integers $L$, $R$ such that $l \le L \le R \le r$ and $R - L + 1$ is \textbf{odd}. - Replace each element in the subarray from $L$ to $R$ with the XOR of the elements in the subarray $[L, R]$. - The answer to the query is the minimum number of operations required to make all elements of the subarray $a[l:r]$ equal to $0$ or $-1$ if it is impossible to make all of them equal to $0$. You can find more details about XOR operation here.
Forget the queries, they are just here to make the problem look complicated. Solve for $q = 1$. The XOR of the array does not change after operations, hence if the XOR is initially not equal to $0$, the answer is $-1$. Is this condition sufficient? No, we need one more condition: there must exist some prefix of odd size such that the XOR of the elements of that prefix is $0$. First forget the queries and solve for a single array $a$. Let's make some observations. The XOR of the array does not change after each operation. Look at the set of prefix XORs while doing operations: its size always decreases or remains the same after each operation. In fact we can refine this by parity. Let $S_{0}$, $S_{1}$ be the sets of prefix XORs at even-length and odd-length prefixes respectively. After each operation the new sets $S'_{0}$, $S'_{1}$ are subsets of $S_{0}$ and $S_1$ respectively. So the necessary conditions for an answer to exist are that the XOR of the array is $0$ and that $S_{1}$ contains $0$. Now for the minimum number of operations. Claim: if the above conditions are satisfied, it is always possible to make all elements $0$ in at most $2$ operations. Proof: let the length of the array be $n$. Case 1: $n$ is odd. Just apply the operation to the whole array. Case 2: $n$ is even. There exists some odd-size prefix $j$ such that the XOR of its elements is $0$. Apply the operation on $[1,j]$ and on $[j + 1,n]$. It can happen that $j = 1$ or $j = n - 1$; in that case we need only one operation, because the remaining single element is already equal to $0$.
To solve for queries, you just need to check for an odd zero-XOR prefix inside the query range, which can be done using a data structure like $\texttt{std::map}$ or $\texttt{std::set}$ in C++. Do not forget to check the case when all elements are already $0$.
[ "binary search", "bitmasks", "constructive algorithms", "data structures" ]
1,900
#include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double template<typename T, typename U> static inline void amin(T &x, U y){ if(y < x) x = y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x < y) x = y; } #ifdef LOCAL #define debug(...) debug_out(#__VA_ARGS__, __VA_ARGS__) #else #define debug(...) 3000 #endif int _runtimeTerror_() { int n, Q; cin >> n >> Q; map<int, int> odd, even; vector<int> last_nz(n + 1, 0), last(n + 1, -1), pxor(n + 1, 0); vector<int> a(n + 1); even[0] = 0; int cur = 0; for(int i=1;i<=n;++i) { cin >> a[i]; cur ^= a[i]; pxor[i] = cur; if(a[i] == 0) { last_nz[i] = last_nz[i - 1]; } else { last_nz[i] = i; } if(i & 1) { if(even.count(cur)) { last[i] = even[cur]; } odd[cur] = i; } else { if(odd.count(cur)) { last[i] = odd[cur]; } even[cur] = i; } } while(Q--) { int l, r; cin >> l >> r; if(pxor[l - 1] != pxor[r]) { cout << "-1\n"; } else if(last_nz[r] < l) { cout << "0\n"; } else if(r % 2 == l % 2) { cout << "1\n"; } else if(a[l] == 0 or a[r] == 0) { cout << "1\n"; } else if(last[r] >= l) { cout << "2\n"; } else { cout << "-1\n"; } } return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); #ifdef runSieve sieve(); #endif #ifdef NCR initncr(); #endif int TESTS = 1; //cin >> TESTS; while(TESTS--) { _runtimeTerror_(); } return 0; }